That, plus AI sycophancy, means, in my opinion, that a great portion of contributions made in this manner will be bad and will waste maintainers' time, which is obviously undesirable.
In my first week of Claude Code I submitted a PR to a FOSS project, and I was 100% sure it was correct: the AI was giving me great confidence, and it worked! But I had no clue how that software worked, at all. I later sent an email to the maintainer, apologizing.
Some changes are in the area of "Well, no one did that yet because no one needed it or had time for it", or "Well shit, no one thought of that". If Claude Code made these changes with good documentation, good intent, and guidance behind them, why not? They are honest and valid improvements to the library.
Other changes rip the core assumptions of the library apart. They were easy, because Claude Code did the ripping and tearing. In such a case, is it really a contribution that improves the library? What if we end up with a wasteland of code torn apart by AI, just because?
Errors are fine too. Just not negligence.
Imagine someone emailed you a diff with the note "idk lol. my friend sent me this, and it works on my machine". Would you even consider applying it?
If I got a PR for one of my projects where the fix was LLM-generated, I wouldn't dismiss it out of hand, but I would want to see (somehow) that the submitters themselves understood both the problem and the solution. Along with all the other usual qualifiers (it passes tests, follows the existing coding style, the diff doesn't touch more than it has to, etc.). There's likely no one easy way to tell this, however.