upvote
> Do you want issues of Nature and cell to be replication studies?

Of course I do! Not all of them, of course, and taking (subjectively measured) impact into account. "We tried to replicate the study published in the same journal 3 years ago using a larger sample size and failed to achieve similar results..." or "after successfully replicating the study, we can confirm the therapeutic mechanism proposed by X actually works" - these are extremely important results that are taken into account in meta-studies and, e.g., form the basis of policies worldwide.

reply
Honestly even if they didn't publish the whole paper, if there was just a page that was a table of all the replication studies that were done recently, that would be pretty cool.
reply
> Do you want issues of Nature and cell to be replication studies?

More than anything. That might legitimately be enough to save science on its own.

reply
Maybe Nature and Cell and a few other journals should be exceptions: they should be the place where the most advanced scientists publish interesting ideas early for consumption by their competitors. At that level of science, all the competitors can reproduce each other's experiments if necessary; the real value is quickly expanding the knowledge of what seems possible.

(I am not seriously proposing this, but it's interesting to think about distinguishing between the very small amount of truly innovative discovery versus the very long tail of more routine methods development and filling out gaps in knowledge)

reply
> that level of science, all the competitors can reproduce each other's experiments if necessary

But they don't, and that's the problem!

reply
The problem is bigger. It even blocks research!

In my own experience, I was unable to publish a few works because I was unable to outperform a "competitor" (technically we're all on the same side, right?). So I dug deeper and deeper into their work and really tried to replicate it. I couldn't! Emailing the authors got me no further, only more questions. I submitted the papers anyway, adding a section about the replication efforts. You guessed it: rejected, with explicit comments from reviewers about lack of impact due to the "competitor's" results.

It's an experience I've found a lot of colleagues share. And I don't understand it. Every failed replication should teach us something new - something about the bounds of where a method works.

It's odd. In our drive for novelty we sure do turn down a lot of novel results. In our drive to reduce redundancy we sure do create a lot of redundancy.

reply
Advanced groups usually replicate their competitors' results in their own hands shortly after publication (or they just trust their competitors' competence). But they don't spend any time publishing it unless they fail to replicate and can explain why. From their perspective, it's a waste of time. I think this has been shown to be a naive approach (given the high rate of image fraud in molecular biology), but people at the top of the field have strong incentives to focus on moving the state of the art forward without expending energy on improving the field as a whole.
reply
"strong incentives to focus on moving the state of the art forward without expending energy on improving the field as a whole"

That sort of Orwellian doublethink is exactly the problem. They need to move it forward without improving it, contribute without adding anything, challenge accepted dogma without rocking the boat, and...blech!

reply

  > challenge accepted dogma without rocking the boat
I think the funniest part is how we have all these heroes of science who faced scrutiny from their peers but triumphed in the end. They struggled because they challenged the status quo. We celebrate their anti-authoritarian nature. We congratulate them for their pursuit of truth! And then we get mad when it happens today. We pretend this is a thing of the past, but it's as common as ever[0,1].

You must create paradigm shifts without challenging the current paradigm!

[0] https://www.scientificamerican.com/article/katalin-karikos-n...

[1] https://www.globalperformanceinsights.com/post/how-a-rejecte...

reply
Are you explaining this from experience or from speculation?

I can tell you that it doesn't match my own experience. I also think it doesn't match your example. Those cases of verified image fraud are typically part of replication efforts. The reason the fraud is able to persist is due to the lack of replication, not the abundance of it.

reply
Mostly experience (based on being a PhD scientist, a postdoc, a National Lab scientist, and an engineer at several big tech companies), partly speculation (none of the groups/labs I worked in operated at "the highest level", but I worked adjacent to many that did).

I'm pretty sure most image fraud went completely unrealized even in the case of replication failure. It looks like (pre AI) it was mostly a few folks who did it as a hobby, unrelated to their regular jobs/replication work.

reply
In most of the labs I've worked in, replication is not a common task.[0]

  > I'm pretty sure most image fraud went completely unrealized even in the case of replication failure
Part of my point is that being unable to publish replication efforts means we don't reduce ambiguity in the original experiments. I was taught that I should write a paper well enough that a PhD student (rather than a candidate) would be able to reproduce the work. IME replication failures are often explained with "well, I must be doing something wrong." A reasonable conclusion, but even if true, it means the original explanation was insufficiently clear.

  > It looks like (pre AI) it was mostly a few folks who did it as a hobby
I'm sorry, didn't you say

  >>> Advanced groups usually replicate their competitor's results in their own hands shortly after publication 
Because your current statement seems to completely contradict your previous one.

Or are you suggesting that the groups you didn't work with (and are thus speculating about) are the ones who replicate work, and the ones you did work with "just trust their competitor's competence"? Because if that is what you're saying, then I do not think "mostly" describes your experience - rather, your experience more closely matches my own.

[0] I should take that back. I started in physics (undergrad) and went to CS for grad school. Replication was often de facto in physics, as it was a necessary step towards progress: you often couldn't improve on an idea without understanding/replicating it (both theoretical and experimental). But my experience in CS, including at national labs, was that people didn't even run the code. Even when code was provided as part of artifact review, I found that my fellow reviewers often didn't even look at it, let alone run it... and this was common at tier 1 conferences, mind you. I only knew one other person who consistently ran code.

reply
Note that my field is biophysics (quantitative biology), while yours is physics and CS. Those are done completely differently from biology; with the exception of some truly enormous/complex/delicate experiments that require unique hardware, physics tends to be much more reproducible than biology, and CS doubly so.

Replication of an experiment and finding image fraud are really two different things. If somebody publishes a paper with image fraud, it's still entirely possible to replicate their results(!), and if somebody publishes a paper without any image fraud, it's still entirely possible that others could fail to replicate it. Also, most image errors in papers are, imho, due to sloppy handling/individual error rather than intentional fraud (it's one of the reasons I worked so hard on automating my papers: if I did make an error, there should be an audit log demonstrating the problem, and the error should be rectified easily/quickly, the same way we fix bugs in production at big tech).

This came up a bunch when I was at LBL because of work done there by Mina Bissell on the extracellular matrix. She is actively rewriting the paradigm, but many people can't reproduce her results - complex molecular biology is notoriously fickle. Usually the answer is, "if you're a good researcher and can't reproduce my work, you come to my lab and reproduce it there," because the variables that affect this are usually things in the lab: the temperature, the reagents, the handling.

See https://www.nature.com/articles/503333a (written by Dr. Bissell).

reply
All that makes it more important for top journals to reward replication, not less!
reply
Top journals are not inherently prestigious. They are prestigious because they try to publish only the most interesting and most significant results. If they started publishing successful replication studies, they would lose prestige, and more interesting journals would eventually rise to the top. (Replication studies that fail to replicate a major result in a spectacular way are another matter.)
reply
I know you got a ton of responses already, but not caring about replicability just invalidates science as a method. If we care only about being first to publish, we end up in the current situation, where we don't even know whether what we "know" is actually even remotely correct.

All because journals prefer novelty over confirmation. It's like a house of cards: looks cool, but not stable or long-term at all.

reply
> Do you want issues of Nature and cell to be replication studies? As a reader, even from within the field, I'm not interested in browsing through negative studies.

Actually, yes, I do. The marginal cost for publishing a study online at this point is essentially nil.

reply
I think archives with pretty low standards for notability are a good idea. At some point though you have to pick what actually counts as interesting enough to go in the curated list that is actually suggested reading, where the prestige is attached. If there's no curation by Nature then it falls to bloggers or another journal to sift through the fire-hose and make best-of lists. Most of the value is in the curation, not the publishing. Without exclusivity there's very little signal.
reply
> The marginal cost for publishing a study online at this point is essentially nil.

The marginal cost for doing a study remains the same, which is quite a bit. Society doesn't have unlimited scientific talent or hours. Every year someone spends replicating is a year lost to creating something new and valuable.

reply
Even if that negative study could save you one, two, three+ years of work for the same outcome (which you then also can't really do anything with)? Shouldn't there BE funding for replication studies? Shouldn't that count towards tenure? Part of the problem is that publications play such a heavy role in getting tenure in the first place.

I'm sure you can more narrowly tune your email alerts FFS.

reply
"Original research" isn't worth much unless replicated, which is the entire problem being discussed in this thread. Replication studies are great precisely because they tell you whether the original research actually stands and is valid.

> Replicating work is far more difficult than a lot of original work.

Only if the original work was BS. And what, just because it's harder, we shouldn't do it?

reply
Why blame just the journals when every other system also disincentivizes the same thing?
reply
I must be missing something; surely the argument isn't "other systems also disincentivize solving the problem, therefore we shouldn't work to fix this one"?
reply
If you're a reader within the field, then you are the one person in the world who should be most interested in negative replication studies.
reply
> Do you want issues of Nature and cell to be replication studies?

Hell yeah. We’re all trying to get that Nature paper. Imagine if you could accomplish that by setting the record straight.

reply
If you're thoroughly debunking a previous Nature paper they just might publish that. But the expectation is that you'll succeed. Publishing that sort of mundane article would reduce the prestige of getting something into the journal. Publishing in a high impact journal is only seen as an achievement in the first place because of what it implies about the content of your paper.
reply
Realistically, everyone will say "yes" to the "do you want" question, because even if you're not a reader or a subscriber, you benefit from the readers reading replication studies.

I believe people will enthusiastically say yes even though they do not routinely read that journal.

reply
Suggesting that people would stop reading Nature if it also included replication studies seems like an incredible leap.
reply
It would directly undermine the reason that people read Nature in the first place.
reply
Not really.

"It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so."

Knowing that something I thought was true was actually false would have saved me years in several situations.

reply
I didn't understand us to be talking only about failed replication studies of previous Nature papers, which would hopefully be few and far between and thus noteworthy indeed - rather, replication studies in general, which on average are arguably less interesting to the reader than even the content of the typical archival journal.
reply
They certainly will be few and far between when the system is structured to repress them. But there's reason to believe they wouldn't be as rare as you seem to think:

https://www.nature.com/nature/articles?type=retraction

reply
Are you seriously attempting to imply that Nature retractions aren't few and far between?

What's even your point here? Hopefully we are at least in agreement that Nature is seen as prestigious and worth looking through precisely because of the sort of content that they publish. Diluting that would dilute their very nature. (Bad pun very much intended sorry I just couldn't resist.)

reply
That is a novel interpretation of my comment certainly.
reply
Tagging seems like an option here.
reply
>Also who's funding you for replication work? Do you know the pressure you have in tenure track to have a consistent thesis on what you work on?

This is partly why much of today's science is BS, pure and simple.

reply
> Replicating work is far more difficult than a lot of original work.

I don’t regularly read scientific studies but I’ve read a few of them.

How is it possible that a serious study is harder to replicate than it is to do originally? Are papers no longer including their process? Are we at the point where they just say "trust me bro" for how they achieved their results?

> Do you want issues of Nature and cell to be replication studies?

Not issues of Nature but I’ve long thought that universities or the government should fund a department of “I don’t believe you” entirely focused on reproducing scientific results and seeing if they are real

reply
> How is it possible that a serious study is harder to replicate than it is to do originally?

They aren't. GP was on point until that last sentence. Just pretend that wasn't there. It's pretty much always much easier to do something when all the key details have been figured out for you in advance.

When something doesn't work, there is some difficulty in distinguishing user error from ambiguity in the original publication from outright fraud. That can be daunting. But the vast majority of the time it isn't fraud, and simply emailing the original author will get you on track. Most authors are overjoyed to learn about someone using their work. If you want to be cynical about it: how else would you get your citation count up?

reply