A crazy world we live in where Robert Maxwell's daughter is more notorious than he is.
Shit apple doesn’t fall far from the shit tree I guess.
perhaps a bit off-topic, but what is coincidental about this and/or what is the relevance of Ghislaine Maxwell here?
Robert Maxwell was a crook: he used pension funds (which were supposed to be ring-fenced for the benefit of the pensioners) to prop up his companies. After his slightly mysterious death it was discovered that there was basically no money left to pay the people who had been assured of a pension when they retired.
He was also very litigious. If you said he was a crook when he was alive you'd better hope you can prove it and that you have funding to stay in the fight until you do. So this means the sort of people who call out crooks were especially unhappy about Robert Maxwell because he was a crook and he might sue you if you pointed it out.
For example, Donald Barr (father of two-time US Attorney General Bill Barr) hired college dropout Jeffrey Epstein while he was headmaster at the elite Dalton School.
Additional fun facts about Donald Barr: he served in US intelligence during WWII, and wrote a sci-fi book featuring child sex slaves
It's why you would say something like "more than coincidental" if you were trying to make some causal claim, like one thing causing the other, or both things coming from the same cause.
So, "What is coincidental about that?" is a weird question. It reads as a rhetorical claim of a causal connection through asking for a denial or a disproof of one.
what is the relevance to the discussion about journals and peer review is my main question.
if i randomly mentioned that your name appears to be an alternate spelling of a 3-band active EQ guitar pedal, coincidentally sharing all of the letters except one, in my reply to you, most people would be confused. that is how i felt when randomly reading "Ghislaine Maxwell" in this context of journals and peer review.
https://sarahkendzior.substack.com/p/red-lines
tl;dr He is the bridge that uncomfortably links Biden's former Secretary of State, Antony Blinken, to Jeffrey Epstein and Mossad. Hence, *gestures at the last couple of weeks and years*. Dude was just, like, Fraud Central, apparently.
I know a professor with a PhD, doing a postdoc or something, and he accepted a scientific study just because it was published in Nature.
He didn't look at methodology or data.
From that point forward, I have never really respected Academia. They seem like bottom floor scientists who never truly understood the scientific method.
It helped that a year later the Ivies had their cheating scandals, fake data, and the academia-wide replication crisis.
People are constantly filtering everything based on heuristics. The important thing is to know how deep to look in any given situation. Hopefully the person you're referring to is proficient at that.
Keep in mind that research scientists need to keep abreast of far more developments than any human could possibly study in detail. Also that 50% of people are below average at their job.
As a student you are to be directed* in your reading by an expert in the field of study that you are learning from. In many higher level courses a professor will assign multiple textbooks and assign reading from only particular chapters of those textbooks specifically because they have vetted those chapters for accuracy and alignment with their curriculum.
As a researcher and scientist a very large portion of your job is verifying and then integrating the research of others into your domain knowledge. The whole purpose of replicating studies is to look critically at the methodology of another scientist and try as hard as you can to prove them wrong. If you fail to prove them wrong and can produce the same results as them, they have done Good Science.
A textbook is the product of scientists and researchers Doing Science and publishing their results, other scientists and researchers verifying via replication, and then one of those scientists or researchers who is an expert in the field doing their best to compile their knowledge on the domain into a factually accurate and (relatively) easy to understand summary of the collective research performed in a specific domain.
The fact is that people make mistakes, and the job of a professor (who is an expert in a given field) is to identify what errors have made it through the various checks mentioned above and into circulation, often making subjective judgement calls about what is 'factual enough' for the level of the class they are teaching, and to leverage that to build a curriculum that is sound and helps elevate other individuals to the level of knowledge required to contribute to the ongoing scientific journey.
In short, it's not a bad thing if you're learning a subject by yourself for your own purposes and are not contributing to scientific advancement or working as an educator in higher-education.
* You can self-study, but to become an expert while doing so requires extremely keen discernment to be able to root out the common misconceptions that proliferate in any given field. In a blue-collar field this would be akin to picking up 'bad technique' by watching YouTube videos published by another self-taught tradesman; it's not always obvious when it happens.
Not really. Both are learning new things. Neither has the time or access to resources to replicate even a small fraction of things learned. Neither will ever make direct use of the vast majority of things learned.
Thus both depend on a cooperative model where trust is given to third parties to whom knowledge aggregation is outsourced. In that sense a textbook and prestigious peer reviewed journals serve the same purpose.
Not really, in my humble opinion. Sure, the Popperian vibe is kind of fundamental, but the whole truncation into binary true/false categories seldom makes sense for the many (or even most?) problems where probabilities, effect sizes, and related things matter more.
And if you fail to replicate a study, they may still have done Good Science. With replications, it should not be about Bad Science and Good Science but about the accumulation of evidence (or the lack thereof). That's what meta-analyses are about.
When we talk about Bad Science, it is about the industrial-scale fraud the article is talking about. No one should waste time replicating, citing, or reading that.
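To make the "accumulation of evidence" point concrete, here is a minimal fixed-effect meta-analysis sketch in Python. Every effect size and standard error below is invented for illustration; nothing comes from a real study.

    import math

    # Made-up numbers for illustration only: (effect size, standard error)
    studies = [
        (0.45, 0.15),  # original study
        (0.10, 0.12),  # replication 1
        (0.20, 0.10),  # replication 2
    ]

    weights = [1.0 / se ** 2 for _, se in studies]   # inverse-variance weights
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")

The replications don't simply stamp the original as Good or Bad Science; they shift the pooled estimate and shrink its uncertainty, which is what downstream meta-analyses and policy decisions actually use.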
Ideally, you should independently verify claims that appear to be particularly consequential or particularly questionable on the surface. But at some point you have to rely on heuristics like chain of trust (it was peer reviewed, it was published in a reputable textbook), or you will never make forward progress on anything.
It is if what you read is factually incorrect, yes.
For example, I have read in a textbook that the tongue has very specific regions for taste. This is patently false.
> Keep in mind that research scientists need to keep abreast of far more developments than any human could possibly study in detail. Also that 50% of people are below average at their job.
So, we should probably just discount half of what we read from research scientists as "bad at their job" and not pay much attention to it? Which half? Why are you defending corruption?
So the problem is reduced to "I believe what I want! This person said it and so I think it's true!"
Sounds like politics in a nutshell.
> Sounds like politics in a nutshell.
Again, no. It sounds like the division of labor. The thing that made modern human societies possible.
The jokes write themselves.
Do you grow your own food and sew your own clothes? Also, did you personally etch the microprocessor that runs your computer? The division of labor inherently means trusting others. So when I buy a bag of M4 screws, I'm not going to measure each screw with a micrometer, and I'm not taking X-ray spectra to verify their material composition.
The academic world also used to trust large publishers to take care to actually review papers. It appears that this trust is now misplaced. But I don't think it was somehow stupid.
Exact reproductions are never published, because journals don't accept them, but if you add a few tweaks here and there you have a nice seed for an article to publish somewhere.
(I may "accept" an article in a field I don't care, but you probably should not thrust my opinion in fields I don't care.)
Fake data—you can only get that type of scandal when people are checking the data. I’d be more skeptical of communities that never have that kind of scandal.
To have research happening, you need someone saying "I want to give money to this researcher". There is an endless queue of people lining up who are ready to take this money and do something with it. The person with money (govt or private) has to use some heuristics to pick. One way is to say "I trust this one, I don't care too much what the project is, I'm sure this person will do something that makes sense". But that is dependent on a track record.
Replications don't have to be in the journals either. As long as money flows, someone will do them, and that is what matters. The randomization will help prevent coordination between authors and replicators.
In a better world, negative studies and replications would count towards tenure, but that is unlikely to occur. At least half of the problem is the pressure to continuously publish positive results.
Also, who's funding you for replication work? Do you know the pressure you're under on the tenure track to have a consistent thesis in what you work on?
Literally every single thing that shapes academia is tuned to not incentivize what you're complaining about. It's not just journals being picky.
Also, the people committing fraud aren't the ones who will say "gosh, I will replicate things now!" Replicating work is far more difficult than a lot of original work.
Of course I do! Not all of them, of course, and taking (subjectively measured) impact into account. "We tried to replicate the study published in the same journal 3 years ago using a larger sample size and failed to achieve similar results..." OR "after successfully replicating the study we can confirm the therapeutic mechanism proposed by X actually works" - these are extremely important results that are taken into account in meta-analyses and, e.g., form the basis of policies worldwide.
More than anything. That might legitimately be enough to save science on its own.
(I am not seriously proposing this, but it's interesting to think about distinguishing between the very small amount of truly innovative discovery versus the very long tail of more routine methods development and filling out gaps in knowledge)
But they don't, and that's the problem!
In my own experience, I was unable to publish a few works because I was unable to outperform a "competitor" (technically we're all on the same side, right?). So I dug more and more into their work and really tried to replicate it. I couldn't! Emailing the authors got me no further, only more questions. I submitted the papers anyway, adding a section about the replication efforts. You guessed it: rejected, with explicit comments from reviewers about lack of impact due to the "competitor's" results.
It's an experience I've found a lot of colleagues share. And I don't understand it. Every failed replication should teach us something new: something about the bounds of where a method works.
It's odd. In our striving for novelty we sure do turn down a lot of novel results. In our striving to reduce redundancy we sure do create a lot of redundancy.
That sort of Orwellian doublethink is exactly the problem. They need to move it forward without improving it, contribute without adding anything, challenge accepted dogma without rocking the boat, and...blech!
> challenge accepted dogma without rocking the boat
I think the funniest part is how we have all these heroes of science who faced scrutiny from their peers but triumphed in the end. They struggled because they challenged the status quo. We celebrate their anti-authoritarian nature. We congratulate them for their pursuit of truth! And then we get mad when it happens today. We pretend this is a thing of the past, but it's as common as ever [0,1]. You must create paradigm shifts without challenging the current paradigm!
[0] https://www.scientificamerican.com/article/katalin-karikos-n...
[1] https://www.globalperformanceinsights.com/post/how-a-rejecte...
I can tell you that it doesn't match my own experience. I also think it doesn't match your example. Those cases of verified image fraud are typically part of replication efforts. The reason the fraud is able to persist is due to the lack of replication, not the abundance of it.
I'm pretty sure most image fraud went completely unnoticed even in the case of replication failure. It looks like (pre-AI) it was mostly a few folks who did it as a hobby, unrelated to their regular jobs/replication work.
> I'm pretty sure most image fraud went completely unnoticed even in the case of replication failure
Part of my point is that being unable to publish replication efforts means we don't reduce ambiguity in the original experiments. I was taught that I should write a paper well enough that a PhD student (rather than a candidate) should be able to reproduce the work. IME replication failures are often explained with "well, I must be doing something wrong." A reasonable conclusion, but even if true, the conclusion is that the original explanation was insufficiently clear.

> It looks like (pre-AI) it was mostly a few folks who did it as a hobby
I'm sorry, didn't you say:

>>> Advanced groups usually replicate their competitor's results in their own hands shortly after publication
Because your current statement seems to completely contradict your previous one. Or are you suggesting that the groups you didn't work with (and are thus speculating about) are the ones who replicate works, while the ones you did work with "just trust their competitor's competence"? Because if this is what you're saying, then I do not think this "mostly" matches your experience; your experience more closely matches my own.
[0] I should take that back. I started in physics (undergrad) and went to CS for grad. Replication could often be de facto in physics, as it was a necessary step towards progress. You often couldn't improve an idea without understanding/replicating it (both theoretical and experimental). But my experience in CS, including at national labs, was that people didn't even run the code. Even when code was provided as part of reviewing artifacts I found that my fellow reviewers often didn't even look at it, let alone run it... This was common at tier 1 conferences mind you... I only knew one other person that consistently ran code.
Replication of an experiment and finding image fraud are kind of done as two different things. If somebody publishes a paper with image fraud, it's still entirely possible to replicate their results(!), and if somebody publishes a paper without any image fraud, it's still entirely possible that others could fail to replicate. Also, most image errors in papers are, imho, due to sloppy handling/individual errors rather than intentional fraud (it's one of the reasons I worked so hard on automating my papers: if I did make an error, there should be an audit log demonstrating the problem, and the error should be rectified easily and quickly, the same way we fix bugs in production at big tech).
This came up a bunch when I was at LBL because of work done by Mina Bissell there on extracellular matrix. She is actively rewriting the paradigm, but many people can't reproduce her results; complex molecular biology is notoriously fickle. Usually the answer is, "if you're a good researcher and can't reproduce my work, you come to my lab and reproduce it there," because the variables that affect this are usually things in the lab: the temperature, the reagents, the handling.
See https://www.nature.com/articles/503333a (written by Dr. Bissell).
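To make the automation point above concrete, here's a minimal sketch of what such an audit log can look like, with hypothetical file names (not my actual pipeline): every time a figure is regenerated, the script records hashes of the input data and the output, so a bad figure can later be traced to the exact data and code that produced it.

    import hashlib, json, time
    from pathlib import Path

    DATA = Path("data/raw_measurements.csv")   # hypothetical input file
    FIGURE = Path("figures/fig1.png")          # hypothetical output figure
    LOG = Path("figures/audit_log.jsonl")

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def make_figure() -> None:
        # Real plotting code (matplotlib etc.) would go here; this stand-in
        # just writes a placeholder file so the sketch is runnable.
        FIGURE.parent.mkdir(parents=True, exist_ok=True)
        FIGURE.write_bytes(b"placeholder")

    if __name__ == "__main__":
        make_figure()
        entry = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "input": str(DATA),
            "input_sha256": sha256(DATA) if DATA.exists() else None,
            "output": str(FIGURE),
            "output_sha256": sha256(FIGURE),
        }
        with LOG.open("a") as f:
            f.write(json.dumps(entry) + "\n")

Nothing fancy; the point is just that errors become diffable and attributable instead of mysterious.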
All because journals prefer novelty over confirmation. It's like a house of cards: it looks cool but isn't stable or built to last.
Actually, yes, I do. The marginal cost for publishing a study online at this point is essentially nil.
The marginal cost for doing a study remains the same, which is quite a bit. Society doesn't have unlimited scientific talent or hours. Every year someone spends replicating is a year lost to creating something new and valuable.
I'm sure you can more narrowly tune your email alerts FFS.
> Replicating work is far more difficult than a lot of original work.
Only if the original work was BS. And what, just because it's harder, we shouldn't do it?
Hell yeah. We’re all trying to get that Nature paper. Imagine if you could accomplish that by setting the record straight.
I believe people will enthusiastically say yes but that they do not routinely read that journal.
"It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so."
Knowing that something I thought was true was actually false would have saved me years in several situations.
What's even your point here? Hopefully we are at least in agreement that Nature is seen as prestigious and worth looking through precisely because of the sort of content that they publish. Diluting that would dilute their very nature. (Bad pun very much intended sorry I just couldn't resist.)
This is partly why much of today's science is bs, pure and simple.
I don’t regularly read scientific studies but I’ve read a few of them.
How is it possible that a serious study is harder to replicate than it was to do originally? Are papers no longer including their process? Are we at the point where they are just saying "trust me bro" for how they achieved their results?
> Do you want issues of Nature and cell to be replication studies?
Not issues of Nature but I’ve long thought that universities or the government should fund a department of “I don’t believe you” entirely focused on reproducing scientific results and seeing if they are real
They aren't. GP was on point until that last sentence. Just pretend that wasn't there. It's pretty much always much easier to do something when all the key details have been figured out for you in advance.
There is some difficulty if something doesn't work to distinguish user error from ambiguity of original publication from outright fraud. That can be daunting. But the vast majority of the time it isn't fraud and simply emailing the original author will get you on track. Most authors are overjoyed to learn about someone using their work. If you want to be cynical about it, how else would you get your citation count up?
It's not perfect. You don't get any credit unless you can demonstrate a substantial break of the prior work. But it's better than in a lot of other fields.
top on my list of things to do if i were a billionaire: launch an institute for the sole purpose of reproducing others' findings.
The biggest problem by far is modern society: tenure, getting paid a livable wage as a researcher, and not getting stack-ranked and eliminated from your organization all over-index on positive, marketable research results. This "loss function" encourages scientific fraud of sorts.
> Most will refuse to publish replications, negative studies, or anything they deem unimportant, even if the study was conducted correctly.
I think this was really caused by the rise of bureaucracy in academia. Bureaucrats' favorite thing is a measurement, especially when they don't understand its meaning. There's always been a drive for novelty in academia; it's just at the very core of the game. But we placed far too much focus on this, despite the foundation of science being replication. We made a trade: foundation for (the illusion of) progress.

It's like trying to build a skyscraper higher and higher without concern for the ground it stands on. It doesn't take a genius to tell you that building is going to come crashing down. But proponents say "it hasn't yet! If it was going to fall it would have already," while critics are actually saying "we can't tell you when it'll fall, but there are some concerning cracks, and we're worried it'll collapse and we won't even be able to tell we're standing in a pile of rubble."

I don't know what the solution is, but I do know that our fear of people wasting money and creating fraudulent studies has only resulted in wasted money and fraudulent studies. We've removed the verification system while creating strong incentives to cheat (publish or perish, right?).
I think one thing we do need to recognize is that in the grand scheme of things, academia isn't very expensive. A small percentage of a large number is still a large number. Even if half of academics were frauds it would be a small percentage of waste, and pale in comparison to more common waste, fraud, and abuse of government funds.
From what I can tell, the US spent $60bn on university R&D in 2023 [0] (less than 1% of US federal expenditures). But around that same time there was $400bn in waste and fraud through Covid relief funds [1], with $280bn being straight-up fraud. That alone is more than 4x all academic research funding!!!
I'm unconvinced most in academia are motivated by money or prestige, as it's a terrible way to achieve those things. But I am convinced people are likely to commit fraud when their livelihoods are at stake or when they can believe that a small lie now will allow them to continue doing their work. So as I see it, the publish or perish paradigm only promotes the former. The lack of replication only allows, and even normalizes, the latter. The stress for novelty only makes academics try to write more like business people, trying to sell their product in some perverse rat race.
So I think we have to be a bit honest here. Even if we naively made this space essentially unregulated, it couldn't be the pinnacle of waste, fraud, and abuse that many claim it is. But I doubt that even freeing scientists entirely from publication requirements would produce much waste, fraud, and abuse. Science has a naturally regulating structure; it was literally created to be that way! We got to where we are through this self-regulating system, because scientists love to argue about who is right, and the process of science is meant to do exactly that. Was there waste and fraud in the past? Yes. I don't think it's entirely avoidable; it'll never be $0 of wasted money. But the system was undoubtedly successful. And those who took advantage of the system were better at fooling the public than they were their fellow scientists, which is something I think we've still failed to catch onto.
[0] https://usafacts.org/articles/what-do-universities-do-with-t...
[1] https://apnews.com/article/pandemic-fraud-waste-billions-sma...
> You either have something documented and quantified and measured and objective criteria tickboxes and deal with this style of failure mode, or you rely on subjective judgment and assessment and accept the failure mode of bias, nepotism, old boy's clubs etc
My argument is that our current pursuit of the former only reinforces the existence of the latter.

You have a fundamental flaw in your argument, one that illustrates a common yet fundamental misunderstanding of science. There is no "objective" thing to measure; there are only proxies. I actually recently stumbled on a short by Adam Savage that I think captures this [0], although I think he's a bit wrong too. Regardless of precision, we are always using a proxy. A tape measure does not define a meter; it only serves as a reference to compare with. A reference where not only the human makes errors when reading, but the reference itself has error [1]. So there are no direct measurements, only measurements by proxy.
You may have heard someone say "science doesn't prove things, it disproves them", and that's in part a consequence to this. Our measurements are meaningless without an understanding of their uncertainty (both quantifiable and unquantifiable!) as well as the assumptions they are made under.
I'm not trying to be pedantic here; I think this precision in understanding matters to the conversation. My argument is that by discounting those errors, we let them accumulate. We've had a pretty good run. The current system has only really been practiced since the 60s and 70s, and 50 years is a lot of time for error to accumulate: a lot of time for small, seemingly insignificant, easy-to-dismiss errors to build up into large, intangible, and complex problems.
There's something that I guess is more subtle in my argument: science is self-correcting. I don't mean "science" as the category of pursuits that seek truths about the world around us, but "science" as a systematic approach to obtaining knowledge. A key reason this self-correction happens is replication, but in reality that is a consequence of how we pin down truth itself. We seek causal structures; more specifically, we seek counterfactual models. Assuming honest practitioners, failures of reproduction happen primarily for one of two reasons: 1) ambiguity of communication between the original experimenters and those replicating, or 2) a variation in conditions. 2) is actually quite common and tells us something new about that causal structure. In practice it is extremely difficult, if not impossible, to exactly replicate the conditions of the original experiment, so even with successful replication we gain information about the robustness of the results.
But why am I talking about all this? Because without explicit acknowledgement of these limitations we seem to easily forget them. We often treat substantially more subjective measures (such as impact or novelty) as far more objective than we would treat even physical measurements. It should be absolutely no surprise that things like impact are at best extremely difficult to measure. Even with a time machine we may not be able to accurately measure the impact of a work for decades, or more. Ironically, a major reason for a work's impact to be recognized only after decades (or centuries) is the belief that in its own time it had no impact and was a dead end. You'd be amazed at how common this actually is. It's where the jokes come from about how everything is named after the second person to discover it, the first being Euler [2]. But science is self-correcting: even if a discovery of Euler's were lost, it is only a matter of time before someone (independently) rediscovers it.
I'm talking about this because there is no perfect system, and because a measurement without acknowledgement of its uncertainty is far less accurate than one with it. I'm talking about this because we will always have errors, and their existence is not a reason to dismiss things. Instead we have to compare and contrast both the benefits and the limits of competing ideas. We only do ourselves a disservice by pretending the limits don't exist. And if we mindlessly pursue objective measurements we'll only end up finding we've metric-hacked our way into reading tea leaves. As we advance in any subject, the minutiae always end up being the critical element (see [0]), so it doesn't matter if we're 90% "objective" and 10% reading the tea leaves, not when the decisions are made by differentiating on that 10%. In reality we're not even good at measuring the 90% when it comes to determining how productive academics are [3-5].
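A toy illustration of the uncertainty point (all numbers invented, sketched in Python): two proxy measurements of the same quantity can look contradictory as bare values, yet be perfectly compatible once their uncertainties are stated.

    import math

    # The same quantity measured via two different proxies (made-up values)
    a, sigma_a = 9.79, 0.08   # proxy measurement 1
    b, sigma_b = 9.95, 0.12   # proxy measurement 2

    diff = abs(a - b)
    sigma_diff = math.sqrt(sigma_a ** 2 + sigma_b ** 2)  # combined uncertainty

    print(f"difference = {diff:.2f}, combined sigma = {sigma_diff:.2f}")
    print(f"discrepancy = {diff / sigma_diff:.1f} sigma")  # ~1.1 sigma: compatible

Strip the sigmas away and the two numbers look like a disagreement; keep them and you see there is nothing to explain. That's the sense in which a measurement without its uncertainty is less informative than one with it.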
[0] https://www.youtube.com/shorts/JGa_X4QfE-0
[1] https://www.youtube.com/watch?v=EstiCb1gA3U
[2] https://en.wikipedia.org/wiki/List_of_topics_named_after_Leo...
[3] https://briankeating.substack.com/p/peter-higgs-wouldnt-get-...
[4] https://yoshuabengio.org/2020/02/26/time-to-rethink-the-publ...
[5] See the two links in this comment as further evidence. They are about relatively recent Nobel works that faced frequent rejections https://news.ycombinator.com/item?id=47340733