> If you’re more or less experienced, you can easily see the “good” and “bad” sides of it. So you kinda plan it out in a way that you can “evolve AI generated software”.

If you're truly "managing fleets of agents" there's no way you're able to sift through the good and the bad in the output. If your AI-generated code is evolvable (which is hard to tell right now) then you're not writing it with "fleets of agents". If you are writing it with fleets of agents, I would bet it's not evolvable; you just haven't reached the breaking point yet.

reply
We’re not managing fleets of agents. They’re not productive for our workflows yet. It’s usually a couple of CC CLIs running and going back and forth on specific tasks we closely control.
reply
Most of the people making this argument vastly overestimate the quality of engineering and discipline behind the software powering most corporations. CRUD apps are likely the most prominent type of application across industries, and most of them are crud.
reply
If the code is really simple, it's cheap to read. When people don't read it (and when they need "fleets of agents"), it's because it's not so simple, and then the people who trust the outcome are the ones who don't know what they've committed into the codebase. Their logic amounts to: the system hasn't collapsed under the load of 50 (or 500) changes, so it probably won't collapse under the next 500 (or 5000). Because that's how engineered systems work, right? If they're fine under light stress, they're fine under heavier stress.
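To make that last (sarcastic) point concrete, here's a toy Python sketch, not from anyone's actual codebase, of code that survives 500 iterations of "load" and then falls over well before 5000, because it has a hidden limit nobody measured:

```python
import sys

def nested(depth):
    """Iteratively build a list nested `depth` levels deep."""
    x = []
    for _ in range(depth):
        x = [x]
    return x

def count_depth(x):
    """Naive recursive traversal; works only up to Python's
    recursion limit (~1000 frames by default)."""
    if not x:
        return 0
    return 1 + count_depth(x[0])

print(count_depth(nested(500)))   # fine at light stress: prints 500
try:
    count_depth(nested(5000))     # blows the call stack
except RecursionError:
    print("collapsed at 5000")
```

Passing at 500 told you nothing about 5000; the breaking point was there all along, just not yet reached.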
reply
> Because that's how engineered systems work, right? If they're fine under light stress, they're fine under heavier stress.

Isn't this backwards? I thought an engineered system meant something designed with known limits.

reply
I was being sarcastic.
reply