Software developers working on their own have built monstrosities before (though not as quickly), but it seems likely that this is a skill issue and we will learn to use these tools better. You can tell coding agents to work on cleaning up code, improving the architecture, and so on.

Maybe adopting some hard constraints on code complexity that agents have to work within would help?
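One way to make such constraints mechanical rather than aspirational: run a linter with complexity thresholds in CI and reject anything the agent produces that exceeds them. A sketch using clang-tidy's existing readability checks (the threshold values here are arbitrary, just for illustration):

```yaml
# .clang-tidy — fail on overly complex or oversized functions
Checks: '-*,readability-function-cognitive-complexity,readability-function-size'
CheckOptions:
  - key: readability-function-cognitive-complexity.Threshold
    value: 15
  - key: readability-function-size.LineThreshold
    value: 80
```

The agent doesn't need to "want" simple code; the gate just bounces anything over budget back for another pass.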

reply
Yep, surely humans write bad code, too. But not nearly as fast. This feels a lot like hiring oodles of hyper-productive junior developers. Are we going to get true productivity out of that or a scrambled mess? I don’t know the answer to that. Or maybe the models get so much better that it’s like hiring oodles of senior developers and architects and the payoff is real.
reply
Humans just don't commit the same kinds of booboos as LLMs do. My team at work recently started using LLM agents for coding and I have since seen WTFs that I know no human would ever write.

It's not all bad! It's also enormously fun. I've been able to work on things I'd been putting off forever. When I can use LLM agents, I less often feel paralyzed by perfectionism, which is probably the biggest productivity boost I get. My own code has not decreased in quality, and I think that for the truly important things, neither has that of my colleagues.

But LLMs don't make junior dev mistakes. They make "my brain has worms in it" mistakes.

reply
It used to be that most college graduates had little or no experience working on large-scale projects. Now they’ll get to speed-run the issues involved in maintaining a large project.
reply
So, is that a good thing? There’s still something to be said for experience, no?
reply
Yes, getting out of college already having some experience using coding agents seems good.
reply
Maybe the bots should be made to write MISRA-C. It isn’t like they get annoyed, right?
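For what it's worth, the style is rigid enough that an agent could plausibly be steered toward it. A rough sketch of MISRA-C-flavoured code (rule numbers are from MISRA C:2012 as I recall them, so treat them as approximate): fixed-width types, braces on every block, a terminating else, a single exit point, no dynamic allocation.

```c
#include <stdint.h>

/* Clamp a value into [lo, hi] in a MISRA-ish style:
 * fixed-width types (Dir 4.6), braces on every block (Rule 15.6),
 * single point of exit (Rule 15.5), no dynamic memory (Rule 21.3). */
static uint32_t clamp_u32(uint32_t value, uint32_t lo, uint32_t hi)
{
    uint32_t result = value;

    if (value < lo)
    {
        result = lo;
    }
    else if (value > hi)
    {
        result = hi;
    }
    else
    {
        /* value already in range; final else required (Rule 15.7) */
    }

    return result; /* single exit point */
}
```

Tedious for a human to keep up across a whole codebase, but tedium is exactly what the bots don't feel.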
reply