Unfortunately I don't think a dialogue around vague anecdotes is going to be particularly enlightening. What matters is culture, but also process--mechanisms and checks--plus consequences. Consequences don't happen if everyone is hush-hush about it and no one wants to be a "rat".
That is where being good at politics comes into play. And if you are good at it, instead of being career-ending, fraud will put you in the highest positions!
No one wants a "plant" who cannot navigate scrutiny!
I worked for exactly one academic, and he indulged in impossible-to-detect research fraud. So in my own limited experience research fraud was 100%.
It was a biology lab, and this was an extremely hard working man. 18 hours per day in the lab was the norm. But the data wasn't coming out the way he wanted, and his career was at stake, so he put his thumb on the scale in various ways to get the data he needed. E.g. he didn't like one neural recording, so he repeated it until he got what he wanted and ignored the others. You would have to be right in the middle of the experiment to notice anything, and he just waved me off when I did.
This same professor was the loudest voice in the department when it came to critiquing experimental designs and championing rigor. I knew what he did was wrong, because he taught me that. And he really appeared to mean it, but when push came to shove, he fiddled, and was probably even lying to himself.
So I came away feeling that academic fraud is probably rampant, because the incentives all align that way. Anyone with the extraordinary integrity to resist generally self-selected out of the job.
Over time I learned that most papers in my field (computational biology) are embellished to some extent or another (or cherry-picked/curated/structured for success) and often irreproducible: some key step is left out, or no code is provided that replicates the results, etc. I can see this from two perspectives:
1) science should be trivially reproducible; it should not require the smartest/most capable people in the field to read the paper and reproduce the results. This places a burden on the people who are at the state of the art of the field to make it easy for other folks, which slows them down (but presumably makes overall progress go faster).
2) science should be done by geniuses; the leaders in the field don't need to replicate their competitors' papers. It's sufficient to read the paper, apply priors, and move on (possibly learning whatever novel method/technique the paper shows so they can apply it in their own hands). This allows the field's innovators to move quickly and discover new things, but it is prone to all sorts of reliability/reproducibility problems, and ideally science should be egalitarian, not credentials-based.
I have repeated it many times on this site, but here's the reality of human experience: if the rate of fraudulent labs is even as high as 10%, you should expect any viewpoint that it's widespread to be drowned out by views that it's not real.
Also, the phenomenon you observed, where people are champions until the rubber meets the road, is more common than one might think.
If "it" is fraud here I would expect the viewpoint that it's widespread to be less and less drowned out as it approached 10% since everyone would know that it's real. I think I'm misunderstanding the sentence.
To be clear, not “as it approaches 10%”. I mean “even as high as 10%”.
However, among certain departments, at large schools, under certain leaders... yes, and growing.
$0.02
The much broader point, though, is the dismissal of the bulk consensus of academic research because academics are supposedly in it for the "money".