When you’re starting with a complete codebase to use as an example and a test suite to check everything, it’s much easier to iterate toward the desired goal. The LLM can already see what the goals are and how they’ve been implemented once already, which is a much easier problem than starting from a spec.
My point is, there’s no chance of a “haves and have-nots” situation emerging, any more than electricity turned out that way in the modern world.
In the US, (nearly) full electrification wasn't achieved until the late 1940s/early 1950s - a process of nearly a century. (A moment of personal trivia: my great-grandfather worked on crews electrifying rural areas of the Midwest.)
What comparable gap is there to bridge?
Energy costs vary widely across the world, and that has enormous consequences for the economies of different countries and their industrial capacity.
Electricity looks pretty even. Higher in Europe but they can afford that.
(And the profit from selling GPUs isn't haves versus have-nots, it's a couple companies versus the entire world.)
These models are a race to the bottom just like compute.
This is both amazing and scary; has been for a while now.
That seems like an especially wild guess. If you take e.g. Opus 4.7 prices, and make the assumption that you are consuming roughly $30 for every million tokens of output (this comes from just summing the $25 per million tokens of output and $5 per million tokens of input and assuming that caching basically makes all that work out), and assume an output rate of 80 tokens per second (which seems like a high estimate based on online searching), it would take you about 2411 days of non-stop Opus 4.7 usage to hit 500k in API spend.
The only way you could possibly run that amount of usage in 6 days is if you were running ~400 instances in parallel. From personal experience, that seems crazy high for this project.
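A quick sanity check of that arithmetic, using the same assumed numbers (the $30/M blended rate and 80 tok/s are the guesses from above, not confirmed pricing):

```python
# Back-of-envelope check of the figures above. All inputs are the
# commenter's assumptions, not published Anthropic pricing.
COST_PER_M_TOKENS = 30.0   # $25 output + $5 input, caching assumed to wash out
TOKENS_PER_SECOND = 80     # assumed output rate
BUDGET = 500_000           # claimed API spend, dollars

tokens_per_day = TOKENS_PER_SECOND * 60 * 60 * 24            # 6,912,000 tokens
dollars_per_day = tokens_per_day / 1e6 * COST_PER_M_TOKENS   # ~$207.36

days_one_instance = BUDGET / dollars_per_day     # ~2411 days
instances_for_6_days = days_one_instance / 6     # ~402 parallel instances

print(round(days_one_instance), round(instances_for_6_days))  # 2411 402
```

So hitting $500k in 6 days really does imply ~400 instances running flat out under these assumptions.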
I think you are off by at least an order of magnitude (potentially even two, depending on how the person is managing agents; I could see something like dozens of agents running 24/7, so I'm much less confident in two, but I still think it's more likely closer to $10-20k in API spend).
Being able to afford half a million doesn't mean you do it on a whim, or just throw all of that away if things don't go well.
But what do I know. I am nothing compared to our AI overlords like Anthropic.
Perfect, $1mil in salaries to spare the company $500k in spend :)
45 million lines would get to ~$1.125 mil for the Linux kernel.
950k lines for Bun would get to $23,750.
use whatever math you like ofc.
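For what it's worth, the implied rate behind both figures is the same: about $25 per thousand lines (inferred from the numbers above, not a published price):

```python
# Implied per-line rate behind both figures above. $25 per 1000 lines
# is read off the comment's own numbers, not any published pricing.
DOLLARS_PER_1000_LINES = 25

linux_kernel_lines = 45_000_000
bun_lines = 950_000

linux_cost = linux_kernel_lines // 1000 * DOLLARS_PER_1000_LINES
bun_cost = bun_lines // 1000 * DOLLARS_PER_1000_LINES
print(linux_cost, bun_cost)  # 1125000 23750
```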
Does Anthropic or an employee pay that? No. Even if it's at a loss in terms of company revenue, it's worth burning the private capital for all kinds of other reasons.
One person did a Rust rewrite in 6 days that would have taken hundreds of engineers more than a year to do.
The entire Bun team was only about a dozen people, and they wrote it from scratch.
It would not take hundreds of engineers to port the existing codebase to another language.
I think this is a cool experiment, but some of these claims are getting absurd.
I agree it’s still mind blowing compared to before times, though.
This is estimating what, 10 lines per day each? No way translating code is anywhere near that slow.
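Rough check of that per-engineer rate, assuming "hundreds" means ~300 engineers and ~250 workdays in a year (both assumptions, not numbers from the claim):

```python
# Rough check of the lines-per-day figure implied by the claim above.
# 300 engineers and 250 workdays/year are assumed, not stated.
bun_lines = 950_000
engineers = 300
workdays_per_year = 250

lines_per_engineer_per_day = bun_lines / (engineers * workdays_per_year)
print(round(lines_per_engineer_per_day, 1))  # 12.7
```

So "more than a year" for hundreds of engineers works out to roughly 10-13 lines per person per day, which is indeed implausibly slow for a port.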
I'm sure they'll market what you said, but it's so ridiculous that I would hope people would see through this stuff.
Even cheaper would just be to not do it in the first place. Was there a pressing need to rewrite it?