There is definitely going to be some Wirth's law-like [0] effect here, with software complexity outpacing LLMs' ability to untangle it. Claude 9.2 Optimus Prime might be able to wrangle 1M LoC, but somehow YC 2035 will have some Series A startup with 1B+ LoC in prod — we'll always have software companies teetering on the very edge of unmaintainability.

[0] https://en.wikipedia.org/wiki/Wirth%27s_law

reply
It's the Peter principle for computers. Codebases expand to the limits of the organization's ability to manage them. If you make one person use ed to write code for a bare-metal environment, you'll get a comparatively small, laser-focused codebase. If you task a hundred modern developers with solving the same problem, you'll get a Linux box running a million lines of JavaScript.

Same thing happens in other fields. A rich country and a poor country might build equivalent roads, but they won't pay the same price for them.

reply
It won't be an LLM that does it; the entire feature of an LLM is that it produces generalizable, reasonably "correct" text in response to a context.

The system that makes it have an opinion about good vs bad architecture or engineering sensibilities will be something on top of the transformer and probably something more deterministic than a prompt.

reply
We can do this today too (though hopefully future LLMs will make better architectural decisions). With Claude, I've been working on an application for the last 2 months. I didn't have a great vision of what I wanted when I started, but I didn't want that to slow me down. The architecture is terrible - Claude separated some functionality into different classes but did a bad job of it and created a big ball of mud. Now that I finally have my vision locked down and implemented (albeit poorly), it'd be a great time to throw it away and start over. It'd be interesting to see the result and how long it takes.
reply
Just have Claude (or maybe GPT) do an architecture review and request a multi-phase refactoring plan. This is probably better done incrementally, as you notice the balls of mud forming, but it might not be too late. Either way, if it does something you don't like, `git checkout` and start over.
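
If you want to script that review step rather than do it in a chat window, here's a rough sketch using the Anthropic Python SDK — the model name, file path, and prompt wording are just placeholders, so adjust to taste:

    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    # Placeholder: a summary of the codebase / key files you want reviewed.
    with open("codebase_summary.md") as f:
        summary = f.read()

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=4000,
        messages=[{
            "role": "user",
            "content": (
                "Here is a summary of my codebase:\n\n" + summary + "\n\n"
                "Do an architecture review and propose a multi-phase refactoring "
                "plan. Each phase should be small enough to land and test on its own."
            ),
        }],
    )

    print(response.content[0].text)

Dump the plan into a file, work through it one phase at a time, and revert with git whenever a phase goes sideways.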
reply
Will work just as well today as it did 20 years ago.
reply
Are you suggesting AI coding was as good 20 years ago as it is today?
reply
I think they're being sarcastic, saying that rewrites from scratch have rarely worked well (whether done by AI or humans).
reply
Exactly. Sorry for not being explicit about it. I thought it was clear enough, because 'this code is crap, let's just rewrite the whole thing, doesn't look too hard' is kind of famous for being a bad idea most of the time since forever.
reply
It sure wrote less crappy code.
reply
"Make sure to double check everything, and MAKE NO MISTAKES!!!"
reply
Don't hallucinate!
reply
"YOU'RE A SENIOR SOFTWARE ENGINEER!!!"
reply
"Ultrathink!"
reply
"Write me a really cool game, that will make me lots of money, fast!"
reply
Make me a 1hr episode of my favorite book. Make it as lore accurate as possible. Plot out the script for the next 100 episodes.
reply
I see your point, however: EA Sports has been doing this for literally the entire lifetime of gaming as an industry.
reply
Electronic Sharts slogans and franchises:

"Shit's in the Game!"

"Chunder Everything"

"Maddening NFL 26"

"FIFiAsco 26"

"UFC 26 (Un Finished Code)"

"The Shits 4"

"Battlefailed"

"Need for Greed"

reply
Do you think new LLMs are going to write better and better code? When all they'll have to train on is the slop generated by previous, worse models?
reply
Yes. The models may have started from indiscriminate scraping, but people are undoubtedly working on refining the training data. Combined with improvements in overall model capabilities, I suspect code quality will continue to go up.

What you're suggesting is a negative flywheel where quality spirals down, but I'm hoping it becomes a positive loop and the quality floor goes up. We had plenty of slop before LLMs, and not all LLM output is slop. Time will tell, but I think LLMs will continue to improve their coding abilities and push overall quality higher.

reply