> Quixotic, unworkable, pointless. It’s fundamentally impossible (at least without a level of surveillance that would obviously be unacceptable) to prove the “artisanal hand-crafted human code” label.

Difficulty of enforcement is a detail. Since the rule exists, it can be invoked whenever detection does happen. And importantly, it means that ignoring the rule amounts to intentionally defrauding the project.

reply
Debian has always been Debian, so these purist opinions are to be expected. But my take too would be something along the lines of a "one-strike-and-you're-out" policy (i.e., if you submit slop and can't explain your submission in any way, you're out), already followed in some projects:

https://news.ycombinator.com/item?id=47109952

reply
Yeah, this is what I was getting at with “reputation”: I think the world where anyone can submit a patch and get human eyes on it is a thing of the past.

IIRC Mitchell Hashimoto recently proposed some system of attestations for OSS contributors. It’s non-obvious how you’d scale this.

reply
This is like trying to stop spam by banning the email addresses that send you spam.

They can spin up LLM-backed contributors faster than you can ban them.

reply
If the situation gets that bad, I agree with you; otherwise, I don't see it as a problem.

reply
Banning AI would hardly stop that; the LLM contributors would simply claim they're not AI.

That's why banning AI contributions is meaningless: you only punish the 'good' actors.

reply
I agree. If the real concern is the flood of low-effort slop, unmaintainable patches, accidental code reuse, or licensing violations, then the process should target those directly. The useful work is improving review and triage so those problems get filtered out early. The genie is already out of the bottle with AI tooling, so broad “no AI” rules feel like a reaction to the tool and do not seem especially useful or enforceable.

reply