The code produced will only be understandable by AI. You could use locally hosted LLMs, but they won't be as performant as the AI run by the big players. And there is nothing stopping greedy companies from implementing some ridiculous pattern that only their model can reasonably work with.
So what will you do when you can't understand "your" codebase and you have to make changes or fix a bug?
It will be a black box, and the code will be generated just in time by AI for each API request.
The open-weight models are nipping at the heels of the frontier models. The frontier labs have to keep making forward progress and keep tokens cheap in order to maintain market share.
Eventually, we'll have a Mythos-level model running on integrated hardware on every PC.
Code that is organized well and operates coherently in the first place, by an LLM or not, will be easier to iterate on, by an LLM or not.
No, just no.