upvote
This puts the burden to make sure it's right on the submitter, where it should be. Verification can come at any time after that; the submitter understands the consequences of hallucinated references. Verification can be crowd-sourced (and likely will be).

Nothing stops someone from putting a PDF on the internet. I'm fine with ArXiv holding a high standard.

reply
More than fine, let’s encourage it.

We deserve it; it's one of the ways to differentiate from Elsevier et al.'s shitboxes!

reply
Not to mention Zenodo, Academia.edu, etc.
reply
> ArXiv doesn't even check the submission closely, so how can they know?

They can be informed by people who read the papers and check the citations. A zero-tolerance policy provides an incentive to report sloppy papers (namely, that you can be confident something will be done about it), and each time a paper is removed or an author is banned, it incrementally increases the value of the arXiv as a whole.

> Being required to publish in a peer reviewed journal will close off arxiv for many researchers for good.

At the end of the day, demanding that people carefully proofread their LLM-generated papers before sharing them on the arXiv seems like a relatively low bar to clear, and I sort of question whether it's reasonable to call individuals who find it too onerous "researchers" in the first place.

reply
I'd imagine you could pretty trivially filter out at least the hallucinated references that simply don't exist.
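That filter could be sketched roughly like this. A minimal, hypothetical example (the regex and the Crossref lookup are my assumptions, not anything arXiv actually runs): extract DOI-like strings from each reference and check whether they resolve.

```python
import re
import urllib.request

# Loose DOI pattern (assumption: the references contain DOIs at all).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(reference: str) -> list[str]:
    """Pull DOI-like strings out of a free-text reference."""
    return DOI_PATTERN.findall(reference)

def doi_is_registered(doi: str) -> bool:
    """Ask Crossref whether the DOI exists; a 404 means it was never registered."""
    try:
        with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}", timeout=10):
            return True
    except Exception:  # HTTPError 404, network trouble, etc.
        return False
```

Of course, this only catches outright-nonexistent DOIs; a hallucinated reference that borrows a real DOI for an unrelated paper would also need a title/author comparison.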
reply
It's more than that: if there are mistakes, you can also be flagged.

Read the whole tweet:

If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).

reply
If you'd read the whole series of tweets, it's obvious that this is not their intention and that there needs to be "incontrovertible evidence that the authors did not check the results of LLM generation" for the penalty to apply.

It's not hard to divine their intentions: you are entirely responsible for what you submit, and if it's clearly slop(py) you get a ban. In a reply they state that they are seeking to apply this rule fairly and accurately and are mindful of unintended effects.

reply
You don't need to be actively enforcing a rule 100% on everyone. Speed cameras don't cover every stretch of road either.

It's enough for them to place this policy and enforce it when they become aware of violations. Someone reading the slopped paper (or, here, trying to follow a reference) will notice sooner or later.

> Being required to publish in a peer reviewed journal will close off arxiv for many researchers for good. It also defeats the point of it being a pre-print.

You make it sound as though it's impossible for researchers to write papers without slopped references, and inevitable that they'll be hit by this policy.

reply
Even acts that would be criminal in the US occur less often in China because fines are consistently enforced. People don't refrain from doing things on the assumption they'll be caught unless there is actually a high likelihood of getting caught.

Research and practice have shown that the strongest deterrent is certainty.

reply
Risk = Impact × Probability of occurrence.

If the penalty is severe enough (impact), then even when the probability of getting caught is low, people will avoid the act because of the potential impact on them.
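Made concrete (a toy sketch; the function and the numbers are mine, not from the thread): two policies can carry identical expected risk while differing wildly in worst-case severity, which is the commenter's argument for why a rare-but-severe penalty can still deter.

```python
def risk(impact: float, probability: float) -> float:
    """Expected risk: impact weighted by probability of occurrence."""
    return impact * probability

# A severe penalty that is rarely enforced...
severe_rare = risk(impact=1000.0, probability=0.01)
# ...versus a mild penalty that is always enforced.
mild_certain = risk(impact=10.0, probability=1.0)

print(severe_rare, mild_certain)  # equal expected risk, very different worst cases
```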

reply