This anecdotal argument is a dead end. The nuance is clear: not all software is the same, and not all edits to software are the same.
Your argument has little to do with AI and more to do with PR size and 'fire and forget' feature merges. That's what the commenter you're responding to is pointing out.
The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that lets open source maintainers evaluate the risk of a PR (including the author's ability to explain wtf the code does) without referencing AI at all, because apparently it's an easily-triggered community.