What do you think about Cory Doctorow's theory that the AI produced code is going to come back to bite companies due to tech debt / unmaintainability?

I am skeptical of Doctorow's theory because it looks like LLMs will continue to improve enough over the near term to be able to handle issues caused by AI-written code from the past few years.

reply
I've heard OpenClaw got over 600k lines of code vibed over 80 days.

I have this theory that the bloat will grow to the full extent possible. OpenClaw has this much; the OpenEye or whatever comes along another day, built with better models, will have 3 million lines of code. None of the possibilities you mention will come to fruition the way you'd like, because speed is preferred over building better things, and to hell with maintainability.

Eventually these things will become a ton of black boxes, and the only option will be to write them from scratch with another next gen LLM. Lots of costly busywork, and it will all take time.

reply
Tech debt and maintainability were important because time was of the essence in another era. If the cycles get compressed by, say, 95%, to hell with it: just trash the old code and rewrite everything from scratch, starting from a clean slate each time?
reply
Claude Code is similar. It's fairly clean by AI coding standards, but it's also most likely much, much bigger than it should be for what it does.
reply
In the mature service I worked on, adding new code was "templatized": you had to add feature flags, logs, etc., which didn't vary much no matter the feature. The business logic also wasn't that complex; I can see AI tools one-shotting it, and it is indeed a productivity boost. You'd be surprised how much of the work was exactly this: writing rather mundane code. The majority of the time was spent coordinating with "stakeholders" (actually more like gatekeepers) and testing code (our testing infrastructure was laborious). This was at MSFT. There are lots of teams innovating at the frontier (mine wasn't, at least not technically); I don't know how AI tools work in those situations.
reply
The near term is not an issue, because most AI code is still reviewed by experienced engineers. The problem comes in the future, when the reviewers are junior engineers who never acquired enough experience to handle engineering problems themselves.
reply
Ain’t nobody reviewing the majority of vibe coded output.
reply
> My experience working at Big tech companies is that people with roles like “agile coaches", "technical project managers", UX testers add questionable value.

"Agile" can go and die in a hellfire for all I care.

But good technical project managers aka "bridges between the higher-up beancounters and the workers" are worth their weight in gold.

reply
At MSFT, Product Managers were Technical Program Managers. Yes, a good PM is a joy to work with.
reply
>But good technical project managers aka "bridges between the higher-up beancounters and the workers" are worth their weight in gold.

Yeah, but it's not easy to distinguish those from the snake oil salesmen who are just good at smooth talking during interviews.

reply
Pretty easy. Get them to talk about a project they've managed and start poking holes. Who was on the team? How did they organize meetings? What were the bottlenecks? How well did everyone get along? What did they do to help grease the gears? Did they have to change the process? How did they like the software? Which software did they use? Did they have to administer it themselves? How did they deal with management changes / team changes / tons of support requests / issues in production? Where did they draw the line between PM work and engineering work?
reply
From watching other managers, and via LLMs, you can learn the answers, interview-prep for all of those questions, and lie. It's not difficult.
reply
Well if they’ve learned that much it’s a good thing.

The remaining piece is to speak with some personal references to verify they did some real work.

reply