Of course this is slightly messy too. Fraudsters are probably always incorrect; then again, they could have stolen the data. But being incorrect doesn't mean you're intentionally committing fraud.
reply
That would be great if journals bothered publishing replication studies. But since they don't, researchers can't get adequate funding to perform them, and since they can't perform them, they don't exist.

We can't look for failed replication experiments if none exist.

reply
That approach is accurate, but not scalable.

The effort to publish a fraudulent study is less (sometimes much less) than the effort to replicate a study.

reply
Yeah, but this happens all the time.

>>95% of the time, the fraudsters get off scot-free. Look at Dan Ariely: Caught red-handed faking data in Excel using the stupidest approach imaginable, and outed as a sex pest in the Epstein files. Duke is still giving him their full backing.

It’s easy to find fraud, but what’s the point if our institutions are rotten all the way through and don’t care, even when there’s a smoking gun?

reply
Is it that easy?

Machine Learning papers, for example, used to have a terrible reputation for being inconsistent and impossible to replicate.

That didn't make them (all) fraudulent, because that requires intent to deceive.

reply
What do you think it is about machine learning that makes it hard to replicate? I'm an outsider to academic research, but it seems like computer-based science would be uniquely easy: publish the code, publish the data, and let other people run it. Unless it's a matter of scale, or of access to specific hardware.
reply
A lot of things are easy if you ignore the incentive structure. E.g. a lot of papers will no longer be published if the data must be published. You’d lose all published research from ML labs. Many people like you would say “that’s perfectly okay; we don’t need them” but others prefer to be able to see papers like Language Models Are Few-Shot Learners https://arxiv.org/abs/2005.14165

So the answer is that we still want to see a lot of the papers we currently see, because knowing the technique helps a lot. Losing replicability is an acceptable trade here: I’d rather have that paper than replicability through dataset openness.

reply
But the lab must publish at least the general category of data, and if that doesn't replicate, then the model only works on a more specific category than they claim (e.g. only their dataset).
reply
Even with the exact same dataset and architecture, ML results aren't perfectly replicable due to random weight initialisation, training data order, and non-deterministic GPU operations. I've trained identical networks on identical data and gotten different final weights and performance metrics.

This doesn't mean the model only works on that specific dataset - it means ML training is inherently stochastic. The question isn't 'can you get identical results' but 'can you get comparable performance on similar data distributions'.
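The point is easy to demonstrate on a toy problem. A minimal sketch (my own illustration, not from any paper): two SGD trainings of the same linear model on the same data, differing only in the seed that controls weight initialisation and shuffle order, end with different weights but comparable loss.

```python
import numpy as np

def train_linear(seed, X, y, epochs=200, lr=0.1):
    """SGD on a linear model; the seed controls init and data order."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])   # random weight initialisation
    idx = np.arange(len(X))
    for _ in range(epochs):
        rng.shuffle(idx)              # training-data order varies per seed
        for i in idx:
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    loss = float(np.mean((X @ w - y) ** 2))
    return w, loss

# Synthetic regression data (fixed, shared by both runs).
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + data_rng.normal(scale=0.1, size=100)

w_a, loss_a = train_linear(seed=1, X=X, y=y)
w_b, loss_b = train_linear(seed=2, X=X, y=y)
# w_a and w_b are not identical, but loss_a and loss_b are both small.
```

Same data, same architecture, same hyperparameters; the final weights still differ run to run. GPU non-determinism adds yet another source of variation on top of this.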

reply
Then researchers should re-train their models a couple times, and if they can't get consistent results, figure out why. This doesn't even mean they must throw out the work: a paper "here's why our replications failed" followed by "here's how to eliminate the failure" or "here's why our study is wrong" is useful for future experiments and deserves publication.
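Concretely, that consistency check is cheap to automate. A hypothetical sketch (the function names and tolerance are my own, not an established protocol): re-run training across several seeds, report the spread of the headline metric, and flag the result if the variance is too large to support the claim.

```python
import random
import statistics

def replication_report(train_fn, seeds, tolerance=0.05):
    """train_fn(seed) -> scalar metric (e.g. test accuracy).
    Returns mean, spread, and whether the runs look consistent."""
    scores = [train_fn(seed) for seed in seeds]
    spread = statistics.pstdev(scores)
    return {
        "mean": statistics.mean(scores),
        "stdev": spread,
        "consistent": spread <= tolerance,
    }

# Toy stand-in for an expensive training run: metric jitters around 0.9.
def fake_train(seed):
    random.seed(seed)
    return 0.9 + random.uniform(-0.01, 0.01)

report = replication_report(fake_train, seeds=range(5))
```

Reporting mean and standard deviation over seeds, rather than a single cherry-picked run, is exactly the kind of result that would survive the "here's why our replications failed" scrutiny the comment describes.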
reply
As per my previous comment - we are discussing stochastic systems.

By definition, they involve variance that cannot be explained or eliminated through simple repetition. Demanding a 'deterministic' explanation for stochastic noise is a category error; it's like asking a meteorologist to explain why a specific raindrop fell an inch to the left during a storm replication.

reply