The core issue is that it takes a large amount of effort even to assess this, because LLM-generated code looks good superficially.
It is said that static FP languages make it hard to implement something if you don't really understand what you are implementing, while dynamically typed languages make it easier to implement something you don't fully understand. LLMs take this to another level: they enable people to implement something with zero understanding of what they are implementing.
The people following the policies are the most likely to use AI responsibly and not submit low-effort contributions.
I’m more interested in how we might allow people to build trust, so that reviewers can spend their time productively on those contributors' work while avoiding wasting it on drive-by contributions. This seems like a hard problem.
Therefore, policies restricting AI use on the basis of avoiding low-quality contributions are probably hurting more than they’re helping.
Without such a policy it feels rude to ask, and also rude to ignore the contribution on the chance they didn't use AI.
For example, someone might have done a lot of investigation to find the root cause of an issue, followed by getting Claude Code to implement the fix, which they then tested. That has a good chance of being a good contribution.
I think tackling this from the trust side is likely to be a better solution. One approach would be to only allow new contributors to make small patches; once those are accepted, allow them to make larger contributions. That would help with the real problem, which is higher volumes of low-effort contributions overwhelming maintainers.
Actually, it doesn't shrink the effort, it just transfers it to reviewers.