This isn't necessarily true; I've seen some projects absorb a PR of roughly that size, and after the smoke tests and other standard development stuff, the original PR author basically disappeared.
It added a feature he wanted; he coded and tested it, and got it merged.
This anecdotal argument is a dead end. The nuance is clear: not all software is the same, and not all edits to software are the same.
Your argument has less to do with AI and more to do with PR size and 'fire and forget' feature merges. That's what the commenter you're responding to is pointing out.
The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that lets open source projects evaluate the risk of a PR (including the author's ability to explain wtf the code does) without referencing AI at all, because apparently it's an easily-triggered community. Something like the rough sketch below.
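To make that concrete, here is a minimal sketch of what an AI-agnostic PR risk heuristic might look like. Every field, weight, and threshold here is hypothetical, chosen only to illustrate the idea of scoring observable signals about the change and its author rather than guessing about AI involvement:

```python
# Hypothetical PR risk score. All fields, weights, and thresholds are
# made up for illustration -- the point is that AI never appears as an
# input; only observable signals about the change and its author do.
from dataclasses import dataclass


@dataclass
class PullRequest:
    lines_changed: int               # total added + removed lines
    touches_core: bool               # edits security- or stability-critical paths
    test_lines_changed: int          # accompanying test changes
    author_merged_prs: int           # prior merged PRs in this repo
    author_answered_questions: bool  # did they explain the code under review?


def risk_score(pr: PullRequest) -> float:
    """Higher score = riskier PR, regardless of how the code was written."""
    score = 0.0
    score += min(pr.lines_changed / 500, 5.0)       # big diffs are harder to review
    if pr.touches_core:
        score += 3.0
    if pr.test_lines_changed == 0 and pr.lines_changed > 50:
        score += 2.0                                # large change with no tests
    score -= min(pr.author_merged_prs * 0.5, 3.0)   # track record earns trust
    if not pr.author_answered_questions:
        score += 4.0                                # can't (or won't) explain the code
    return max(score, 0.0)
```

Under this kind of scheme, a +3000-line drive-by PR from an unknown author who dodges review questions scores high and gets extra scrutiny or a rejection, while a small, tested change from a regular contributor sails through, and nobody ever has to litigate whether an LLM was involved.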
So what metric are you going to use to make your case?
And the high-value contributors GP mentioned are never going to land a +3000-line PR, because they know there will be a human reviewer on the other side.