I genuinely think it's part of a psyop. If we bloat all codebases, and eventually start printing the models on chips to reduce inference costs by 50-100x, they'll take in massive profits from 5M-line codebases instead of 350k.
reply
Prior to the advent of LLMs, I had this concept of the 'complexity horizon': a [hand built] software system naturally tends to get more and more complex until no one can understand it - that point is the complexity horizon. And there it stays, essentially unmaintainable.

With LLMs, you can race right for that horizon, go right through, and continue far beyond! But then of course you find yourself in a place without reason (the real hell), with all the horror and madness that that entails.

reply
> The scary part is that codebases are getting layers of AI complexity, that it's going to cost $$$ to have the latest model decipher

Isn't this a bit like IDE-heavy languages such as old Java or C#? If you tried to make Android apps back in the early days, you HAD to use an IDE, and writing the ridiculous amount of boilerplate required just to display a "Hello World" alert after clicking a button was soul-destroying.
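
For anyone who never suffered through it, here's a sketch of what that boilerplate looked like - this assumes the classic pre-Java-8 Android Activity API (anonymous inner class listener, AlertDialog.Builder), and the layout IDs are made up for illustration:

```java
import android.app.Activity;
import android.app.AlertDialog;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Plus a separate XML layout file defining the button, not shown here.
        setContentView(R.layout.activity_main);

        Button button = (Button) findViewById(R.id.hello_button);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                new AlertDialog.Builder(MainActivity.this)
                        .setMessage("Hello World")
                        .setPositiveButton("OK", null)
                        .show();
            }
        });
    }
}
```

And that's before counting the manifest entry, the resource files, and the build configuration the IDE generated for you.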

reply
The difference is that the complexity to achieve “Hello World” was the same for everyone, and more or less well-understood and documented. With AI, you get some different random spaghetti slop each time.
reply
At least a human can get involved. Complex codebases written by humans can be understood.

If the barrier is too high, code is refactored.

reply
Today's models will happily slop out a single 1k-LOC React index component on a brand-new project.

They really are bad at creating a healthy codebase.

reply