Your analogy with CI/CD is flawed. Not everyone was convinced of the merits of CI/CD either, but CI/CD wasn't built on vast energy use and copyright violation at a scale unseen in history, nor did it upend the hardware market and shake developers' sense of job security to its foundations, all while offering no really obvious benefits to groups wishing to produce really solid software. Maybe that comes eventually, but not at this level of maturity.

But you're right that it's probably unenforceable. They will probably end up accepting PRs written with LLM assistance, but if they do, it will be because the code is well written and the contributor can explain it in a way that doesn't sound to the maintainers like an LLM is answering their questions. And maybe at that point the community as a whole would have less to worry about, assuming we're not setting ourselves up for horrible licence violation problems in the future when it turns out an LLM spat out something verbatim from a GPLed project.

reply
> Or if the code is a mess. Or if it doesn't follow conventions.

In my experience these things are very easily fixed by AI; I just ask it to follow the patterns and conventions used in the codebase, and it does that pretty well.

reply
I've recently worked extensively with "prompt coding", and the model we're using is very good at following such instructions early on. However, after deep reasoning about a problem, it tends to focus more on solving the problem at hand than on following established guidelines.

Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do? Still do that please."

reply
A separate pre-planning step, so the context window doesn't fill up too early.

Off the shelf agentic coding tools should be doing this for you.

reply
At the moment, verification at scale is an unsolved problem, though. As mentioned, I think this will act as a rough filter for now, but it probably won't work forever, and denying contributions from non-vetted contributors will likely end up being the new default.

Once outside contributions are rejected by default, the maintainers can of course choose whether or not to use LLMs themselves.

I do think it is a misconception that OSS software needs to be "viable". OSS maintainers can have many motivations for building something; shipping a product might not be at the top of the list at all, and they certainly have no obligation to ship one. Personally, I use OSS as a way to build and design software with a level of gold plating that isn't possible in most work settings, for the feeling that _I_ built something, and for the pure joy of coding; using LLMs to write code would work directly against those goals. Whether LLMs are essential in more competitive environments is also something on which opinions are mixed, but in those cases being dogmatic is certainly riskier.

reply
> That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable.

To outright accept LLM contributions would be as much "pure vibes" as banning them.

The thing is, those who maintain open source projects have to decide where they want to spend their time. It's open source, they are not being paid for it, and they should and will decide what is acceptable and what is not.

If you dislike it, you are free to fork it and make an "LLMs welcome" fork. If, as you imply, LLM contributions are invaluable, your fork should eventually become the better choice.

Or you can complain to the void that open source maintainers don't want to deal with low effort vibe coded bullshit PRs.

reply
Your reply is based on a 100% bad-faith, intellectually dishonest interpretation of the comment to which you’re replying. You know that. Nobody claimed that LLM code should be outright accepted. Also, nobody disputed that open source maintainers have the right to accept or decline based on whichever criteria they choose. To always come back to this point is so…American. It’s a cop-out. It’s a thought-terminating cliché. If you aren’t interested in discussing the merits of the decision, don’t bother joining the conversation. The world doesn’t need you to explain what consent is.

Most of all, I’m sick of the patronising “don’t forget that you can fork the project!” What’s the point of saying this? We all know. Nobody needs to be reminded. Nobody isn’t aware. You aren’t being clever. You aren’t adding anything to the conversation. You’re being snarky.

reply
> Nobody claimed that LLM code should be outright accepted

Not directly, but that's the implication.

I just did not pretend that was not the implication.

> always come back to this point is so…American

I am not American.

To be frank, this was the most insulting thing anyone has ever told me online. Congratulations. I feel insulted. You win this one.

> If you aren’t interested in discussing the merits of the decision, don’t bother joining the conversation.

I will join whatever conversation I want, and as far as I'm concerned I addressed the merits of the discussion perfectly.

You are not the judge here, your opinion is as meaningless as mine.

> Most of all, I’m sick of the patronising “don’t forget that you can fork the project!” What’s the point of saying this?

That sounds like a "you" problem. You will be sick of it until the end of time, because that's the final right answer to any complaint about open source project governance.

> You aren’t adding anything to the conversation. You’re being snarky.

I disagree. In fact, I contributed more than you: I addressed arguments, while you went on a whinging session about me.

reply
Owing "nothing to no one" means you are allowed to be unreasonable...

reply
> That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable.

The response to a large enough amount of data is always vibes. You cannot analyze it all, so you offload it to your intuition.

> It leaves stuff on the table in a time where they really shouldn't. Things like documentation tracking, regression tracking, security, feature parity, etc. can all be enhanced with carefully orchestrated assistance.

What’s stopping the maintainers themselves from doing just that? Nothing.

Producing it through their own pipeline means they don’t have to guess at the intentions of someone else.

Maintainers doing it themselves is just the logical conclusion. Why go through the process of vetting the contribution of some random person who says they've used AI "a little", to check whether it was maybe really 90%, or whether they have ulterior motives... Just do it yourself.

reply