As for open source PRs, I wonder if, for trust's sake, you would need to self-identify the use of AI in your contribution (all AI, some AI, no AI), and whether there would need to be some sort of AI-detection algorithm that flags your response as % AI. I wonder if this would force people to at least translate the LLM output into their own words. It would for sure stop someone's 24/7 WhatsApp claw bot from cranking out PR slop, and maybe that would lessen the reviewer's burden. That being said, more thought is needed to distinguish helpful LLM use that advances the objective from unhelpful slop that just shifts the burden onto the reviewer.
For instance, I copy-pasted the above into Gemini and it produced an excellent condensation of my thoughts: "It is now 10x easier to generate a 'plausible' paper or Pull Request (PR) than it is to verify its correctness."
Then again, we've seen how well robots.txt was honored in practice over the years. As with everything in late-stage capitalism, the humans who showed up with good intentions to legitimately help typically did the right thing, while those who came to extract every last gram of value for their own gain ignored the rules with few consequences.