Goodhart's law states that "When a measure becomes a target, it ceases to be a good measure," and that's exactly what we see here. There is a strong incentive to publish more rather than better. Ideas are sliced into multiple papers, people push to be listed as authors, citations are fought over, and some turn dishonest, resorting to citation cartels, "hidden" citations in papers (printed in tiny white-on-white text, so they are indexed by citation crawlers but invisible to reviewers), and so forth.
This also destroys the peer review system upon which many venues depend. Peer review was never meant to catch cheaters. The huge number of low-to-medium-quality papers in some fields (ML, CV) overloads reviewers, leading venues like CVPR to force authors to serve as reviewers or face desk rejection. AI-generated papers and AI reviews of dubious quality add even more strain.
Ultimately, the only real fix is to remove the incentives: funding and careers should no longer depend on the sheer number of papers and citations. The problem is that we have not yet found anything better.
Oh boy, you seem to be missing the forest for the trees. When science was a hobby of the rich, there was no need to measure output. Only when "scientist" became a career and scientists started demanding government funding (which only really crystallized in the 20th century) did we need a way to measure output.
You could try doing away with an objective measure of academic output and replace it with the "social fabric of researchers and institutes" (whatever the fuck that means) instead, but then all you'd have is a good ol' boys' club funded by taxpayer money.
The decision makers who are the target audience for these metrics value "objective" data. They value the appearance of being quantitative but lack the intellectual tools to distinguish quantitative science from pseudoscience with numbers bolted on.
That's modern bureaucracy in a nutshell.
I’d even argue that, still today, women and minorities are strongly disadvantaged at many institutions. I say that as a white male who recently left academia. I have seen how some of my colleagues were treated.
There are lots of better things, like people making hiring and firing decisions based on their evaluation of the content of papers they have actually read, instead of just a number. If someone is publishing so many papers that a hiring committee can't even read a meaningful fraction of them, that should be a red flag in itself, rather than a green one.
It will just pick the best allocation metric it has available, even if that metric would never stand up to scrutiny in the private sector, or in any more directly measured domain, public or private.
A distressingly high percentage of humans like zero-sum status games. More people are happier when status is treated as a semi-unbounded, positive-sum game.
There’s not a whole lot to gain for the individual, or even the institution, unless they hit an absolute home run on the first try that also shows positive results very quickly. More than likely, the decision will be questioned at every turn.
Publishing uninteresting science for the record is different from having an incentive to go against the crowd and refute incorrect claims.
Both would be good especially these days.