Right, I don't think you can "productively review a couple thousand" lines of code per day. That would imply that the review step for this very patch only took a couple of days in total (since the core code is described as 5k lines), which is rather implausible, to say the least.
reply
Both Simon Willison and Antirez said that using LLMs helped them, so it's kind of perverse to read them and conclude the opposite.

In particular, doing direct comparisons between metrics like that doesn't work. "Lines of code" isn't a good way to measure complexity of the code, and the amount of time it takes to review the code will vary quite a bit based on the use case.

There's a lot of diversity in what kind of code people write, and just because it worked for someone else doesn't mean it will work for the kinds of problems you solve. It's anecdotal evidence that someone else found it useful; your mileage may vary.

reply
The relevant question is whether it helped them 10x, or anywhere close to what AI is now being sold as (supposedly even replacing software developers' jobs altogether and one-shotting complete products from a single prompt), or whether it's just acting as a kind of glorified autocomplete. So far we're clearly seeing the latter, based on what both Simon Willison and Antirez are describing.
reply
Simon often says that LLMs help him "write productive code", but most of the code he shows is Python libraries doing menial tasks. That's fine for tooling, etc., which is sometimes useful.

It would absolutely NOT work for production code with critical concurrency / embedded / real-time requirements.

reply
Antirez wrote Redis. That is "production code with critical concurrency".

To quote another of his posts:

> I fixed transient failures in the Redis test. This is very annoying work, timing related issues, TCP deadlock conditions, and so forth. Claude Code iterated for all the time needed to reproduce it, inspected the state of the processes to understand what was happening, and fixed the bugs.

...

> In the past weeks I operated changes to Redis Streams internals. I had a design document for the work I did. I tried to give it to Claude Code and it reproduced my work in, like, 20 minutes or less (mostly because I'm slow at checking and authorizing to run the commands needed).

From "Don't fall into the anti-AI hype" https://antirez.com/news/158

reply
His summarized assessment from that very post: "...state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you'll get is related to the kind of programming you do (the more isolated, and the more textually representable, the better: system programming is particularly apt), and to your ability to create a mental representation of the problem to communicate to the LLM."

He's saying you should be writing up complex, highly detailed specs for the LLM to turn into code, stressing that it's critical to work in a self-contained and "textually representable" problem domain. That is not one-shotting complete products from a vague prompt. You're still going to need software architects, and they'll still be doing much the same work. Turning a fully specified design into code has never been a "10x" task; it was always regarded as a relatively straightforward, if often tricky, part of the job. And the way he worked with Redis makes it clear that you can't take what the AI delivers at face value, either: you'll have to go through it yourself, and that will take time and effort.

reply
First, he didn't write Redis with LLMs; that was way before. Second, I'm not speaking of him in that comment.

Also, his whole blog post is about how, in order to do a task, he would need to spec it properly, then do "code inpainting" with the LLM, then fix all the issues that he could spot only because he's a senior developer, then repeat, and so on.

Did you read it?

reply