This is a decent argument, but it's not the death knell you think.
Models are getting ~99% more efficient every 3 years: combined with hardware and (mostly) software upgrades, you can produce the same amount of output using 99% less power.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
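Taking the parent comment's rate at face value (the 99%-per-3-years figure is their claim, not an established number), the compounding works out like this:

```python
# Sketch of the compounding-cost claim above: a 99% drop per 3-year
# cycle means unit cost shrinks ~100x each cycle. The rate itself is
# the parent comment's assumption, not a verified figure.

def cost_after(years: float, start_cost: float = 1.0,
               drop_per_cycle: float = 0.99, cycle_years: float = 3.0) -> float:
    """Remaining unit cost after `years`, assuming a fixed fractional
    drop each cycle of length `cycle_years`."""
    cycles = years / cycle_years
    return start_cost * (1.0 - drop_per_cycle) ** cycles

for y in (3, 6, 9):
    print(y, "years ->", cost_after(y))
```

Even if the true rate were half that optimistic, the "good enough" applications still get dramatically cheaper to serve within a few years.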
If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward-looking revenue that isn't counted yet.
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth what it IPO'd for - and Snap is still down about 80% (after inflation; ~95% adjusting against gold).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B - the rest is market value of hardware and hosting. OpenAI could be worth 80% less, and they could still make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons easily...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
And as the number of things AI is “good enough” at increases, the list of things on the frontier that people will want to pay OpenAI for shrinks. Even if OpenAI can consistently churn out PhD level math, most companies don’t care about that.
So a necessary (but not sufficient) condition for the math to work out is that frontier tasks still exist and are profitable. This is why CEOs keep hyping up AGI. But what they really want is for developers to keep paying to get AI to center a div.
Irrelevant. The model is the moat.
> most companies don’t care about that.
Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.
> center a div
For sure a common use case, but it's not what the CEO is concerned about with AI.
For some tasks that matters. But for a lot of tasks, "good enough but cheaper" will win out.
I'm sure there will be a market for whichever company has the best model, but just like most companies don't hire many PhDs, most companies won't feel a need for the highest-end models either, above a certain level.
E.g. with the release of Sonnet 4.6, I switched a lot of my processes from Opus to Sonnet, because Sonnet 4.6 is good enough, and it means I can do more for less.
But I'm also experimenting with Kimi, Qwen, Deepseek, and others for a number of tasks, including fine-grained switching and interleaving - e.g. having a cheap but dumb model filter data, or take over when a sub-task is simple enough, so that the smart model does less work.
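A minimal sketch of that kind of routing. The model names and the difficulty heuristic here are placeholders I made up for illustration, not real API identifiers:

```python
# Toy cheap-model/smart-model router. Real setups would use a
# classifier, cost tracking, or a cheap-model self-assessment
# instead of this keyword heuristic.

def looks_simple(task: str) -> bool:
    """Toy heuristic: short tasks without hard-reasoning keywords
    get routed to the cheap tier."""
    hard_keywords = ("prove", "why", "design", "debug")
    return len(task) < 80 and not any(kw in task.lower() for kw in hard_keywords)

def route(task: str) -> str:
    # Hypothetical model tiers; swap in whatever you actually run.
    return "cheap-model" if looks_simple(task) else "smart-model"

print(route("Extract the dates from this line"))
print(route("Debug this race condition in the scheduler"))
```

The point isn't the heuristic itself - it's that once routing is in place, every task the cheap model can absorb stops paying frontier prices.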
Even if true, this still doesn't bend the curve when paying for the next model.
> If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward-looking revenue that isn't counted yet.
If this is true, it's true for the technology overall, and not necessarily OpenAI since inference would get commoditized quickly at that point. OpenAI could continue to have a capital advantage as a public stock, but I don't think it would if the music stopped.
The market adoption has increased a lot. The cost to serve has come down a lot per token.
Model sizes have not increased exponentially recently (the high point being the aborted GPT-4.5); most refinement lately seems to be extending training on relatively smaller models.
When you take this into account together, the relative training to inference income/cost ratio likely has actually changed dramatically.
It's more like 2x efficiency. That would mean 50% less power, not a ridiculous 99% less.
All techs, eventually.
AI stopped progressing, or LLMs? I really dislike people throwing the term AI around.
The LLM industry has only been around for about 4 years. Extrapolating trends from that is pretty naive.
How many years total are you basing this on?
Yes, but there's a chance that actually training is done more or less for free by companies like OpenAI. The reason being that they do a gigantic amount of inference for end users (for which they get paid), but their servers can't be constantly utilized at 100% by inference. So, if they know how to schedule things correctly (and they probably do), they can do the training of their new model on the unutilized compute capacity. If you or I were to pay for that training, it would be billions of dollars, but for them it is just using compute that otherwise would be idle.
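A toy illustration of that idle-capacity argument: whatever capacity inference doesn't use in a given time slot gets handed to training "for free". All the numbers here are made up for illustration:

```python
# Toy capacity scheduler: inference gets priority in each slot,
# training soaks up the remainder. Demand figures are hypothetical.

def split_capacity(total_gpu_hours: float, inference_demand: list[float]) -> list[dict]:
    """For each time slot, give inference what it asks for (capped at
    total capacity) and assign the leftover to training."""
    schedule = []
    for demand in inference_demand:
        inf = min(demand, total_gpu_hours)
        schedule.append({"inference": inf, "training": total_gpu_hours - inf})
    return schedule

# Daytime peaks, overnight troughs (a hypothetical utilization profile).
plan = split_capacity(100.0, [90.0, 95.0, 40.0, 20.0])
free_training = sum(slot["training"] for slot in plan)
print(free_training)  # GPU-hours of training recovered from idle slots
```

In practice checkpointing, preemption overhead, and network topology make this far messier, but the basic arbitrage - paid inference subsidizing off-peak training - is what the comment above is describing.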
Why are we so opposed, in principle, to the current pre-training scaling laws? Perhaps we'll require new innovations at some point, but the momentum allows us to reach heights we've never climbed before.