One-offs. A lot of research results in one-off code. You may not go back to this dataset, these ideas, again. When you do, sometimes years later, you go, oh shit, this is hard to work with. So then you begin to build better structures, do the extra work it takes to make things easy to apply to new purposes or to accept new (but slightly different) datasets. That takes time, effort, and money. And that is where it all breaks down: most scientists have to be jacks of many trades to get by.
reply
It's hard to avoid, but there are steps we can take towards fixing it. I spent years in academia building open-source data processing pipelines for neuroscience data and helping other researchers do the same. Most quantitative research goes through "lossy" steps between raw data and final results, involving Excel spreadsheets, one-off MATLAB commands, copy-pasting results, etc.

In a lot of cases (where data is being collected by humans with a tape measure, say) there is room for error. But one of the things that's getting traction in some fields is open-source publication of both raw datasets and the evaluation/processing methods (in a Jupyter Notebook, say) in a way that lets other people run their analysis on your data, your analysis on their data, or at least re-run your start-to-finish pipeline and look for errors!
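A re-runnable pipeline can be as small as a single script that carries every step from raw data to the reported number. A minimal sketch of the idea (the dataset, field names, and summary statistics are invented for illustration):

```python
import statistics

# Hypothetical published "raw data": one row per tape-measure reading.
RAW_DATA = [
    {"specimen": "A1", "length_cm": 12.4},
    {"specimen": "A2", "length_cm": 11.9},
    {"specimen": "B1", "length_cm": 14.2},
    {"specimen": "B2", "length_cm": 13.8},
]

def run_pipeline(rows):
    """Every step from raw data to reported result lives here, so a
    reader can re-run it on this data or point it at their own."""
    lengths = [row["length_cm"] for row in rows]
    return {
        "n": len(lengths),
        "mean_cm": statistics.mean(lengths),
        "stdev_cm": statistics.stdev(lengths),
    }

result = run_pipeline(RAW_DATA)
print(result)
```

Publishing the raw rows alongside a script (or notebook) like this lets a reviewer swap in their own dataset, or re-derive your summary numbers and diff them against the paper.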

As is often the case, the holdups are mostly political: methods papers are less prestigious than the "real science" ones, and it takes journals and funders to mandate these things and to provide funding/hosting for datasets for 10+ years, etc. Researchers are a time-poor bunch and often won't do things unless there's an incentive to!

reply
Taking notebooks to a production environment isn't fun either. With AI, there's no longer an excuse for leaning on that coding crutch.
reply
Yes… mistakes are inevitable, and I get not expecting or demanding perfection. But the subtext here is that this is unlikely to be a mistake, and much more likely to be fraud.

There are incentives for these spreadsheets to have the values that they do; there is no conceivable way the values are correct; and, on top of that, the most likely way to get these values is to copy and paste large blocks of numbers and even perturb some of them manually.

If you see this in accounting (where there are also mistakes), it's definitely fraud. ("Aww man, we accidentally inflated our revenue and profit to meet expectations by accidentally duplicating numerous revenue lines, and no one internally caught it! Dang interns!") If you see it in science, you ask the authors about it and they shrug and mumble a semi-plausible explanation, if you're lucky. I can totally imagine a lab tech or grad student making a large copy-paste mistake. I can’t imagine them making a series of them in such a way that it bolsters or proves the author’s claim AND goes completely undetected by everyone involved.
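Exact-duplicate lines are also trivially detectable, which is part of why "nobody caught it" strains belief. A toy illustration of the kind of check that flags them (the revenue rows here are invented):

```python
from collections import Counter

# Hypothetical reported revenue lines. A repeated value can be a
# coincidence, but a byte-for-byte duplicate row is suspect.
revenue_lines = [
    ("2023-Q1", "Widgets", 104_250.00),
    ("2023-Q1", "Gadgets", 88_310.00),
    ("2023-Q2", "Widgets", 104_250.00),  # same amount, different quarter: plausible
    ("2023-Q1", "Widgets", 104_250.00),  # exact duplicate of the first row
]

# Count identical rows and keep any that appear more than once.
duplicates = [row for row, count in Counter(revenue_lines).items() if count > 1]
print(duplicates)
```

A few lines like this run over a submitted spreadsheet would surface the pattern immediately, which is what makes "a series of undetected copy-paste accidents" so hard to believe.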

reply
> I can’t imagine them making a series of them in such a way that it bolsters or proves the author’s claim AND goes completely undetected by everyone involved.

The small minority of cases that do fit this pattern get selected to be on the front page of HN, so we aren't drawing from a random sample of mistakes. All the selection effects work against the more common outcomes showing up here: author disinterest, reader disinterest, rejection by the journal, or a lack of publicity if the null result is published. The more reliable tell that it's fraud is that the authors didn't respond when the errors were discovered.

reply
Well, in that case, it's bad. Obviously.
reply
> their workflows aren't great

Sounds like a startup idea.

reply
Spend a few years working in the target environment. It will disabuse you of the idea that scientific research can be regularized with technology.
reply
You'll want to sit down when I tell you the budget these folks have for workflow solutions. Ain't gonna take long but might be shocking if you've got big startup hopes. ;)
reply
This was almost two decades ago, but I worked in a lab running particle detection experiments from an “internet-capable” computer that started life with “Windows 98 already installed - no upgrade needed.” Any “workflow solutions” talk started and ended with “Can we get undergrads to do it for class credit?”
reply
For sure. These are often people who want better equipment to do their research, not software subscriptions that promise to force them to work in unfamiliar and uncompelling ways. You'd need a fantastic, game-changing idea to get meaningful traction.

One example of such a game-changer might be systems like S3 and distributed computing on AWS: huge ideas that take massive initiatives to implement but make science meaningfully easier. I can't think of many other modern technologies we use that the team doesn't mostly resent (like Slack or Google Drive). They're largely interested in just doing the science; the rest eats into funding (which is increasingly sparse these days).

reply
If you want to make no money, sure.

The solutions these scientists need are bespoke and have little in common with one another. They also have fixed grant funding.

In 2009 I made $15/hr working with some PhDs and grad students in a couple different labs to automate their workflows - I was the highest paid person in the room most of the time.

reply
A lot of the work I did for scientists when I was a contractor (and a bit while working for bespoke software consultancies) was quite literally making programmatic applications out of Excel sheets.

In one case, we used mdftools to literally use the original Excel spreadsheet as our logic engine.
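I can't speak for mdftools specifically, but the general pattern — keep the scientists' spreadsheet formulas as the source of truth and wrap an application around them — can be sketched in a few lines. Cell names and formulas below are invented, with lambdas standing in for the original Excel formulas:

```python
# Toy model of "spreadsheet as logic engine": the application supplies
# input cells, and the sheet's formulas produce the outputs.
# Assumes a single recalc pass: formulas reference only input cells,
# not other formulas.
def evaluate(cells):
    """Resolve formula cells against input cells, like a recalc pass."""
    resolved = dict(cells)
    for name, value in cells.items():
        if callable(value):
            resolved[name] = value(resolved)
    return resolved

sheet = {
    "B1": 250.0,                        # input: unit price
    "B2": 12,                           # input: quantity
    "B3": lambda c: c["B1"] * c["B2"],  # formula: =B1*B2
}

print(evaluate(sheet)["B3"])
```

The appeal of this shape is that the domain logic stays in the artifact the researchers already understand and maintain; the application just feeds inputs in and reads results out.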

reply
Just imagine if you scanned private industry. This is a generic problem that LLMs won't solve with generative capabilities alone.
reply