This is arguably a key quote: "Then, it was time to read all the code, line by line. ... I found many small inefficiencies or design errors ... so I started a process of manual and AI-assisted rewrite of many modules." We should not underestimate that step: reading code line by line might easily require more time than writing it from scratch.
I remain unconvinced by the "faster to write it by hand than read it" arguments though. My experience throughout my career is that most people, myself included, top out at a couple of hundred lines of tested, production-ready code per day. I can productively review a couple of thousand.
In particular, direct comparisons between metrics like that don't work. "Lines of code" isn't a good measure of a codebase's complexity, and the time it takes to review code varies quite a bit with the use case.
There's a lot of diversity in what kind of code people write, and just because it worked for someone else doesn't mean it will work for the kinds of problems you solve. That someone else found it useful is anecdotal evidence; your mileage may vary.
It would absolutely NOT work for production code with critical concurrency, embedded, or real-time requirements.
To quote another of his posts:
> I fixed transient failures in the Redis test. This is very annoying work, timing related issues, TCP deadlock conditions, and so forth. Claude Code iterated for all the time needed to reproduce it, inspected the state of the processes to understand what was happening, and fixed the bugs.
...
> In the past weeks I operated changes to Redis Streams internals. I had a design document for the work I did. I tried to give it to Claude Code and it reproduced my work in, like, 20 minutes or less (mostly because I'm slow at checking and authorizing to run the commands needed).
From "Don't fall into the anti-AI hype" https://antirez.com/news/158
He's saying you should be writing up complex, highly detailed specs for the LLM to turn into code, stressing that it's critical to work in a self-contained and "textually representable" problem domain. This is not one-shotting complete products from a vague prompt. You're still going to need software architects, and they'll still be doing much the same work. Turning fully-specified design into code has never been a "10x" task, it was always regarded as a relatively straightforward, if often tricky part of the job. And the way he worked with Redis makes it clear that you can't take what the AI delivers at face value, either: you'll have to go through it yourself, and that will take time and effort.
Also, his whole post is about how, in order to do a task, he needs to spec it properly, then do "code inpainting" with the LLM, then fix all the issues that he could only spot because he's a senior, then repeat, etc.
Did you read it?
Let the LLM cook, doing the issues one by one. In the meantime I could start reviewing them: checking out, running, reading. It was definitely faster, since it also correctly linked everything, etc. Of course, once a change goes beyond that scope it probably won't work. Still, I really thought it was a good idea to check out that work, have it implemented according to the issue description, and update an MR once the description changes, at least as long as the MR is 1-3 lines. And even if it doesn't work, I can just discard it.
(A lot of these problems are often typos that don't even need a checkout; they come in through bigger MRs that shouldn't be blocked because of them.)
When antirez says "I ventured to a level of complexity that I would have otherwise skipped," I don't think you can call that a minor gain. The alternative is likely something "good enough" that leaves the community dissatisfied for months, and then, after initial design mistakes become load-bearing, the ideal implementation can never be realized.
Then you need a senior to go find the 100 mistakes it made, fix them, and iterate, which is why you can't replace "natural intelligence".
And there are real mathematical reasons why computers won't be able to break through "mathematical reasoning" on their own (undecidability, etc.).
> For high quality system programming tasks you have to still be fully involved, but I ventured to a level of complexity that I would have otherwise skipped. AI provided the safety net for two things: certain massive tasks that are very tiring (like the 32 bit support that was added and tested later), and at the same time the virtual work force required to make sure there are no obvious bugs in complicated algorithms.
Might?
Reading code you did not write is always going to take more time if you do it properly.
"Automatic programming"
> I started to refer to the process of writing software using AI assistance (soon to become just "the process of writing software", I believe) with the term "Automatic Programming"
To clarify, from TFA:
> even before LLMs the implementation was likely something I could do in four months. What changed is that in the same time span, I was able to do a lot more
The initial timeframe was four months; with LLMs, he was able to do more work within that same timeframe.
I've been working on a database adapter for a couple of months using an LLM... I've got a couple of minor refactors to do still, then getting the "publish" to jsr/npm working... I've mostly held off since I haven't actually done a full review of the code... I have reviewed the tests and confirmed they're working, though. The hard part is that there are some features I really want when connecting from Windows to a Windows SQL Server instance that aren't available in linux/containers. I don't think I'll ever choose SQL again, but at least I can use/access a good API with Windows direct auth and FILESTREAM access in Deno/Bun/Node.
FWIW: My final implementation landed on ODBC via rust+ffi, so after I get the mssql driver out, I'll strip a few bits in a fork and publish a more generic ODBC client adapter, with using/dispose and async iterators as first-class features in the driver.
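For anyone unfamiliar with the "using/dispose and async iterators as first-class features" part, here's a minimal sketch of what such a driver surface could look like. All names here (`OdbcClient`, `query`, `dispose`, the fake rows) are hypothetical illustrations, not the actual adapter's API; in a real driver on TS 5.2+ / Node 20.4+, `dispose` would also be exposed as `[Symbol.asyncDispose]` so callers can write `await using client = ...` and get cleanup automatically.

```typescript
// Hypothetical driver surface; names are illustrative, not the real adapter's API.
interface Row {
  id: number;
  name: string;
}

class OdbcClient {
  private open = true;

  // Rows stream back lazily as an async iterable rather than one big array,
  // so large result sets can be consumed with `for await`.
  async *query(sql: string): AsyncGenerator<Row> {
    // Stand-in for the FFI calls into a native ODBC driver.
    const fakeRows: Row[] = [
      { id: 1, name: "alice" },
      { id: 2, name: "bob" },
    ];
    for (const row of fakeRows) {
      if (!this.open) throw new Error("client disposed");
      yield row;
    }
  }

  // In a real driver this would also be wired to [Symbol.asyncDispose],
  // so `await using client = new OdbcClient()` disposes it automatically.
  async dispose(): Promise<void> {
    this.open = false;
  }
}

async function main(): Promise<void> {
  const client = new OdbcClient();
  try {
    const names: string[] = [];
    for await (const row of client.query("SELECT id, name FROM users")) {
      names.push(row.name);
    }
    console.log(names.join(",")); // prints "alice,bob"
  } finally {
    await client.dispose(); // what `await using` would do for you at scope exit
  }
}

main();
```

The appeal of the combination is that streaming rows and deterministic cleanup are both expressed in the language itself, rather than via driver-specific callbacks or `close()` conventions.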
He's not, but his work is obviously not average.
Average dev work is plumbing and CRUDs.
I'm doing my work mostly the same way antirez does: writing a detailed spec (which is actually 80% of the hard work, even without LLMs), then, where I would have written the "boring stuff," using the LLM to "autocomplete," then spotting all the mistakes (which requires a senior to see and fix), correcting, and iterating.
It makes the work "feel" easier because we mostly skip writing the boilerplate, but it still doesn't replace coders. And companies that think they can skip training juniors (the people who would later replace seniors) and still have seniors on board are making a huge mistake.