While I see the point you're trying to make, the truth is that at least 90% of the time it will be a workaround instead of a proper solution. Even when it is a proper solution, there's a high chance it only works on your specific setup - most open source software is made for a wide range of systems, configurations, etc.

That plus AI sycophancy means, in my opinion, that a great portion of contributions made in this manner will be bad and waste maintainers' time - which is obviously undesirable.

In my first week of Claude Code I submitted a PR to a FOSS project and was 100% sure it was correct - the AI was giving me great confidence, and it worked! But I had no clue how that software worked - at all. I later sent an email to the maintainer, apologizing.

reply
It depends on the complexity, and on whether your LLM-driven changes fight the architecture of the project.

Some changes are in the area of "Well no one did that yet because no one needed it or had time for it", or "Well shit, no one thought of that". If Claude Code did these changes with good documentation and good intent and guidance behind it, why not? It is an honest and valid improvement to the library.

Some other changes rip apart core assumptions of the library. They were easy to make because Claude Code did the ripping and tearing. In such a case, is it really a contribution that improves the library, if we end up with a wasteland of code torn apart by AI just because?

reply
I don't think anybody would complain about working code. Your PR would explain your reasoning and choice of solution, and that on its own could make or break the acceptance criteria. At least it would for mine.

Errors are fine too. Just not negligence.

reply
the thresholds of quality for "this works on my machine, for my purposes" and "this is viable to merge upstream" are _extremely_ different. claude code has no effect on this, except to confuse certain would-be contributors.

imagine someone emailed you a diff with the note "idk lol. my friend sent me this, and it works on my machine". would you even consider applying it?

reply
I don't think most maintainers are opposed to LLM-generated bug fixes or solutions _in general_, just the ones that are pure slop: generated end-to-end by a Claude-maxed computer enthusiast who thinks that enough green boxes on their GitHub profile means they can somehow BS their way into a high-paying FAANG software engineering position. (Spoiler: it won't work.)

If I got a PR for one of my projects where the fix was LLM-generated, I wouldn't dismiss it out of hand, but I would want to see (somehow) that the submitter themselves understood both the problem and the solution, along with all the other usual qualifiers (passes tests, follows the existing coding style, the diff doesn't touch more than it has to, etc.). There's likely no single easy way to tell this, however.

reply