This is already the case for many startups. In fact, the figure might be closer to 100%. The work shifts to requirements analysis, high-level specifications, and final review instead (after AI code review).
reply
The first link states literally

"AI will take over almost all the work of software engineers (SWEs) end-to-end in just 6-12 months!"

What you describe is >50% of the job of SWEs, even when they write all code by hand.

Are you saying that "for many startups" this isn't done by SWEs but by some other career type, or are you implying that only the code writing (and first review) is replaced by AI?

reply
I have watched Dario’s interview at WEF referred to in the article and I am quite certain Dario didn’t say that. He talked about AI automating most coding already or soon, not software engineering as a whole.

He did say a few months later in an interview in India that AI will eventually take over most of SWE tasks.

---

My statement on startups is largely about automating coding by SWEs. My startup also uses AI to automate part of technical specifications and code review but I am not sure how widespread that is.

reply
Yeah I'm working on one of those now that a 3rd-party vendor cranked out for us. I spent all day ripping out an endpoint that did 98% of what another endpoint did and should never have existed. I also ripped out 80 lines of code that looked like this:

const sqlStatement = (!params.mostRecentOnly) ? {giant SQL statement} : {identical giant SQL statement + 'LIMIT 1' at the end}
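The duplicated ternary above could collapse to a single statement that appends the clause conditionally. A minimal sketch, with hypothetical query text and names since the original SQL isn't shown:

```javascript
// Build the SQL once; append LIMIT conditionally instead of
// duplicating the entire statement in both ternary branches.
// `buildQuery`, the table, and the columns are illustrative names.
function buildQuery(params) {
  const baseQuery =
    "SELECT id, status, created_at FROM events ORDER BY created_at DESC";
  return params.mostRecentOnly ? `${baseQuery} LIMIT 1` : baseQuery;
}
```

Now a change to the query only has to be made in one place.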

AI never met a problem that can't be solved with more code. Need some data in a slightly different structure? Don't try to modify an existing endpoint, just build a new one! Need to access a field that's buried in a JSON object in the database? Just create a new column, but don't bother removing the field from the JSON object. The more sources of truth, the merrier! When it comes time to update, just write more code to update the field everywhere it lives!
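For the buried-JSON-field case, many databases can query the JSON in place instead of duplicating the value into a new column. A sketch assuming Postgres-style JSON operators; the table `tasks` and field `priority` are hypothetical:

```javascript
// Query the field inside the JSON column directly, keeping a single
// source of truth instead of mirroring it into a new column.
// Postgres `->>` extracts a JSON field as text; $1 is a bind parameter.
const query = `
  SELECT id, metadata->>'priority' AS priority
  FROM tasks
  WHERE metadata->>'priority' = $1
`;
```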

Factor out the extra sources of truth you say? Good luck scanning the most verbose front-end you've ever seen to make sure nothing is looking at the source you want to remove. In the beginning of big projects, you have to be absolutely ruthless about keeping complexity down so it doesn't get out of control later. AI is terrible at keeping complexity down.

My goal is to halve the lines of code from what the vendor turned over to us. One baby step at a time.

reply
If only we had this tech back when managers were looking at how many lines of code you were committing weekly as a performance metric.
reply
Now they're looking at your token consumption, which is even more gameable (and stupid).
reply
That is a skill issue though. I have rules for my agents: write compositional, reusable, modular, small files; avoid any sort of boilerplate; be config-driven with a single source of truth; have other agents review that the rules are followed; etc. Any API, UI, or other entry point stays very light, just proxying to the modular logic, so that logic can be reused by any entrypoint easily.

UI components are always presentational only, with logic abstracted modularly, etc...

reply
How do you make it so the model doesn't forget to follow those rules and skills? How do you make it actually understand the architecture and constraints? You can't; current models just don't work that way.
reply
Can you share your rules and some of the example PRs that it auto generates and reviews?

The number of times I’ve seen Claude say “this test was failing already so is ignored” when it _wasn’t_, despite me telling it never to do that, makes me doubt it.

reply
Ah, the make_no_mistakes.md
reply
I mean, quite frankly, I have seen enough code that was definitely written by humans that had exactly this "style".

Then again, I don't want to pay for AI to give me the coding style of the worst people I ever worked with either.

reply
> many startups

which startups? I'm genuinely curious

reply
And not only startups...
reply
He would be right if Claude Code were written by a team of humans. The AI-written blob is slowing progress.
reply
I mean, since Opus 4.6 came out, that rings more and more true. You still have to babysit the output, do some planning and be proactive about ways to do things better… but 80-90% isn’t out of the question if you’re in the domains that are well represented in the training data, e.g. if you’re writing a lot of CRUD functionality as a web dev.

Companies will definitely expect devs to ship more with the same headcount, oftentimes either won’t hire juniors to train them up or will straight up do layoffs, sometimes the AI just being a convenient scapegoat. We kind of can’t ignore that either, sure a lot of those companies will be shooting themselves in the foot, but livelihoods will be impacted a bunch.

reply