Wrote about both the per-model math and the scaling question:
(1) https://philippdubach.com/posts/ai-models-as-standalone-pls/
(2) https://philippdubach.com/posts/the-most-expensive-assumptio...
EDIT: Removed the dot after "et"; apparently it's an entire word (the more you know...)
This is a decent argument, but it's not the death knell you think.
Models are getting ~99% more efficient every 3 years: combined with hardware and (mostly) software upgrades, you can use 99% less power for the same amount of output.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
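The compounding here can be sanity-checked with a quick sketch (the 99%-every-3-years figure is the parent comment's assumption, not an established fact):

```python
# If serving cost falls 99% every 3 years, what is the implied annual
# decline, and what does a fixed $1.00 workload cost later on?
annual_factor = 0.01 ** (1 / 3)  # ~0.215, i.e. roughly a 78% drop per year

cost = 1.00  # cost of some fixed workload today, in dollars
for year in range(1, 7):
    cost *= annual_factor
    print(f"year {year}: ${cost:.6f}")
# After 3 years the same workload costs $0.01; after 6 years, $0.0001.
```

Even if the true rate is much slower, the shape of the argument survives: any sustained exponential cost decline eventually makes today's "too expensive" applications viable.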
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth what it IPO'd for - and Snap is still down about 80% (after inflation; roughly ~95% adjusting against gold).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B - the rest is market value of hardware and hosting. OpenAI could be worth 80% less, and they could still make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons easily...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
Even if true, this still doesn't bend the curve when paying for the next model.
> If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
If this is true, it's true for the technology overall, and not necessarily OpenAI since inference would get commoditized quickly at that point. OpenAI could continue to have a capital advantage as a public stock, but I don't think it would if the music stopped.
Market adoption has increased a lot, and the cost to serve per token has come down a lot.
Model sizes have not increased exponentially recently (the high-water mark being the aborted GPT-4.5); most recent refinement seems to come from extended training of relatively smaller models.
Taken together, the relative training-to-inference income/cost ratio has likely changed dramatically.
It's 2x efficiency. So I'd say 50% less power, not a ridiculous 99% less power.
AI stopped progressing, or LLMs? I really dislike people throwing the term AI around.
The LLM industry has only been around for like 4 years. Extrapolating trends from that is pretty naive.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way that the WeWork IPO was.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.
But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.
Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.
Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.
Obviously, there’s a scenario of superpowered AI, and then it’s just a matter of staying the course. Electricity and silicon.
What if you are right and the scaling doesn’t work? It takes too much power, time, and hardware to improve... does OpenAI fold?
Do they just actually use the models they have?
Does everyone just decide that AI didn’t work and go back 5 years like it didn’t happen?
Does the price change so that they have to be profitable, making AI services expensive and rare instead of everywhere and pointless like today?
Or does this insane valuation only make sense with information you don’t have like insider scaling or efficiency news?
Does China’s strategy of undercutting US value of models pay off bigly?
It is not like we threw away the dotcom advances, they were just put on hold for a while..
Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.
But it can also simply be the financial framing for direct bartering. Which is even more direct than regular financial transactions.
"I will provide these resources you need, in exchange for part ownership", and/or "a limited license to your tech", "the right to provide access for our customers on these terms", etc.
Amazon doesn't need any frothy fake revenue. But they do want to offer their customers the most in demand models, with the best financial terms for Amazon.
Nvidia wants customers, but not at the expense of throwing money away. Their market cap may be volatile, but their books are beyond solid.
I would be a lot more concerned if OpenAI was getting "funding" from a quantum computer startup, and vice versa.
Doubt Jensen sees himself as a “dealer”, but considering the vendor lock-in and margins, he pretty much is the Tony Montana of AI chips.
It’s nuts that this type of financing is legal.
You need people to burn in house fires for regulation to require extinguishers.
We're going to be the next generation’s cautionary tale.
How someone can compare the above situation to a person getting a payday loan to put a roof over their head or food on their plate is beyond me.
The “it’s like <insert wild and inappropriate analogy to stoke emotion>” is a tired trope.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.
It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.
So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.
If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.
> Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80?
Why limit myself to $100 for a product that costs $80? I could just as well give you $1,000,000 to buy this same product from me. That way, I have a $1,000,000 share of your company and $1,000,000 in revenue, and it only cost me $80.
This distorts the market for the product we're trading, and distorts the share price for both my company and yours.
In your accounting, you can claim that you have an investment worth $100 and book $100 worth of revenue. You're juicing your sales numbers to impress shareholders - presumably, without your $100, the investee wouldn't have bought $100 worth of your product. The last thing your shareholders want to see are your sales numbers stop growing, or heaven forbid, start shrinking.
Nvidia is not the first company to "buy" sales of its own product via simple or convoluted incentive schemes. The scheme will work for a while until it doesn't.
And inflate your revenue by $80.
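To make the thread's arithmetic concrete, here's a toy ledger comparing an ordinary cash sale with an equity-funded sale, using the illustrative numbers above ($100 sale, $25 cost of goods, i.e. a 75% gross margin; not real financials):

```python
price, cogs = 100, 25  # illustrative sale price and cost of goods (75% margin)

# Case A: ordinary cash sale to an unrelated customer.
cash_sale = {"revenue": price, "net_cash": price - cogs, "stock_held": 0}

# Case B: invest $100 in the customer on condition they spend it on your product.
# Same revenue on the books, but you hold stock instead of cash.
equity_sale = {"revenue": price, "net_cash": price - cogs - price, "stock_held": price}

print(cash_sale)    # {'revenue': 100, 'net_cash': 75, 'stock_held': 0}
print(equity_sale)  # {'revenue': 100, 'net_cash': -25, 'stock_held': 100}
```

Both book identical revenue, which is the optics benefit; the real difference is $75 of cash versus $100 of stock at a $25 cash cost. Whether that's a "cheat code" depends entirely on whether the stock is worth its paper value.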
Competition laws make this kind of arrangement illegal, so you would have to exert influence and have the investee pretend you just happened to be picked over competitors.
In any case, the SEC will be focused on whether the filings are made up to defraud investors, so they could reject the investee's IPO. Your own entity is also at risk.
We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regard to the law.
Also Nvidia margins are waaay higher than 20%
The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
What if the product only costs you $20 to produce?
WeWork was a short-term/long-term lease arbitrage business. The two are nothing alike.
It used to be revolutionary, but now there is a huge difference: plenty of competition, and a growing number of high-quality models that can run offline (for free!) or cheaper (Gemini-Flash for example).
They are in some way the Nokia of AI, "we have the distribution, product will sell", but this is not enough if innovation is weak.
They are even lagging behind (GPT-5 is a weaker coder than Claude, Sora is a toy compared to Seedance 2.0, etc).
Once Apple releases the AIPhone, running offline models, with 32 GB of unified memory and optional cloud requests, then it's going to be super tough for OpenAI.
OpenAI have made this claim, and maybe it holds for API pay-per-use (though there's good evidence even that is not profitable, if you dig into how much a rack of B200s costs to operate), but I'd be very sceptical that the free, $20, or $200 a month plans are profitable.
Then the questions are if the market will bear the real cost and if so how competitive OpenAI are with Google when Google can do what Microsoft did to Netscape and subsidize inference for far longer than OpenAI can.
This valuation puts their price-to-revenue around 40 (there are no earnings for a real P/E).
Anthropic: $380B valuation on $13B ARR, a price-to-revenue around 30.
5 years ago Uber was in similar territory. Tesla... Well we won't mention Tesla.
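For what it's worth, the multiples quoted in this subthread are valuation over annualized revenue (price-to-sales; neither company has positive earnings, so a true P/E isn't defined). Checking the figures mentioned in this thread:

```python
# Valuation / revenue multiples from the numbers in this thread.
openai_multiple = 730 / 20     # $730B pre-money on ~$20B revenue
anthropic_multiple = 380 / 13  # $380B post-money on ~$13B ARR
print(round(openai_multiple, 1), round(anthropic_multiple, 1))  # 36.5 29.2
```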
They are in the business of selling compute / datacenter rack spaces. A server where you pay per GBs transferred in/out.
If it’s Gemini or GPT behind, for most use cases users wouldn’t care.
I think the even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot less than early Google over its competitors - but the shape of the competition is similar.
There is no moat
On iOS with the Apple agreement, and on Android (though the question of hardware remains when considering beyond Pixel phones).
About 5% according to a news article a few months ago.
Will the other 95% stick around once ads or payments are required?
Google worked as a free service because their backend was cheap. AI models lack that same benefit. The business model seems to be missing a step 2.
If it’s not the quality of their answers ?
Then it can be something along the lines of "subscribe to Google XXX or Apple +++ and have 'unlimited' cloud requests"
> This plan may include ads. Learn more
> When will ads be available in ChatGPT?
We’re beginning in the US on February 9, 2026
> Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad.
You pay 8 USD/month and get higher limits and ads.
ChatGPT currently has 800 million monthly active users, out of 8 billion humans.
Those conditions are an IPO or reaching AGI [1].
Nvidia and SoftBank will pay in installments.
Also very interesting that Microsoft decided to not invest in this round. A PR statement was made though [2].
[1] https://americanbazaaronline.com/2026/02/26/amazon-to-invest...
[2] https://openai.com/index/continuing-microsoft-partnership/
A year ago I would have said that was crazy. In the last month, I've been using Claude Code to write 20kloc of Rust code every day (and I review all of it).
A week is now a day. If that figure doubles, I have no idea what will happen to us. And I think it's coming.
Only one of those can be true. It's no shame to say you don't bother reviewing it; in the future that may well be the norm.
I can't get Augment / Opus 4.5 to edit a few C++ files from within VSCode without it going off on a wild goose chase or getting stuck in an infinite loop after I tell it what it should be doing: "oh, you're right, I need to do X", "To do X, I must understand how to do Y", "I see now that to do Y, I should look at Z", "Let me look at Z", followed by: "oh, you're right, I need to do X"...
Building things at a mature company with a market is a lot different than hacking together your own tools. There are a lot more people you can let down at scale.
Reviewing 1k lines of code an hour is a breakneck pace, are you spending 20 hours a day reviewing code?
I think you've crossed the line from being an AI maxi to just rage baiting. This comment is a pointless anecdote at best, please take your ridiculous FOMO takes elsewhere.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
Fortunately, OpenAI already wrote theirs down. Well, Microsoft[0] says they did, anyway. Some people claimed it was a secret only a few years ago, and since then LLMs have made it so much harder to tell the difference between leaks and hallucinated news saying this, but I can say there's at least a claim of a leak[1].
[0] https://blogs.microsoft.com/blog/2026/02/27/microsoft-and-op...
[1] It talks about it, but links to a paywalled site, so I still don't know what it is: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
Incredible, how an entire religion has sprung up around AGI.
Are they going to get stock for it or is it a PIPE?
Personally, I don’t think I want to get in on this at retail prices.
It can both be true at the same time that AI going to disrupt our world and that being an AI lab is a terrible business.
Such a waste of burnt money that could be used for more useful projects.
I'm ready to embrace change, however in this case no one cares. The cheese hasn't just been moved, it has been taken to another planet where us mice are not allowed to go.
Note: I need work, not interviews. ;-)
Very interesting, I will follow it closely, mostly to see how you ROI $110 billion in a couple of years.
$30B at $380B post-money for Anthropic announced two weeks ago
This does not increase my confidence in OpenAI's future
> Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
90% chance it's all PR but who knows
200 USD at Claude, versus 3000 USD (literally) at Gemini. Well, then it will be Claude.
If tomorrow Claude is 5000 USD, well, then it will be Gemini.
Might save you €20 next month.
Use these freebies/relatively cheap tools up 'whilst stocks last'.
I personally managed to create a very high quality marketing promo vid using grok. After spending weeks of enduring a lot of pain. But I saved myself tens of thousands.
I took advantage of 30 Grok premium subscriptions that were given to me via a free trial. There's no doubt the cost of services I took advantage of is in the tens of thousands.
But what do I care? I get what I want and then I get out before the freebies disappear.
LOL at the crybabies down-voting. Get mad bruh, get mad.
> Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
We try to avoid having corporate press releases as the top-level link, though of course there are exceptions sometimes.
e.g. it talks about running NVIDIA's systems (?) on AWS
> NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing. We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.
They just passed $20B in revenue, you can't really expect a company with this much hype and traction to have a 1x multiple.. that's not to say a 35x multiple makes sense either.
While nothing fancy has happened yet in the area of cheap energy, there is still enough power around the world to build AI data centers. The problem is that this power exists in countries whose leaders the West has decided, often for good reasons, it doesn't want to deal with.
I'm predicting that over 2027, either the US will become more aggressive about going to war with these countries, or company CEOs will develop "reality-distortion fields" around themselves and decide that having enough power for the next datacenter is for the good of humanity. Before that, Europe will decide that AI training on human faces (e.g. of non-Europeans) is not really a problem and will allow US companies to train their models in EU countries.
Is the same thing true for corporations? At some point the numbers are so wild the entire economy must help you succeed? I don't mean "too big to fail" exactly, more like "so big eventual success is guaranteed at all costs"
I'm sure that $50b has my money in there somewhere.
Bad comments about OpenAI's long-term viability I've seen plenty here. But that's not the same as the people predicting one of the hottest companies right now will somehow suddenly run out of cash all on its own
The fact it's become a household name internationally (giving it the appearance of success) can't save it from spending dramatically more money than it makes. It's been coasting on investments, but it's not even close to being actually profitable.
Huge or well-known companies have collapsed before, even though - because people become so used to them existing - it never quite feels like it will actually happen until it does.
By comparison, Anthropic is projected to break even in 2028. Google's Gemini is already profitable.
https://advergroup.com/gemini-hits-650-million-users/
I didn't really realize how big Gemini was until I saw that Qualia was using it: they apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months. They're in production in the title and escrow industry, so that's a great deal of data going through Gemini. Unlike some chat subscription, this is all API-driven, which I doubt Google is serving at a loss.
https://www.qualia.com/qualia-clear/
Unlike OpenAI, Google has an actual business model, not just strange circular deals.
Edit: I miswrote "majority of" instead of 15% of Google's profits.
This does not at all tell us Gemini is profitable or driving 15% of Google's profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.
It kinda feels like an LLM-generated article that another LLM picked as a "citation", and then no human bothered to check if it actually said what the LLM said it did.
And, really, advergroup.com? Who cites an advertising agency as if it's a reliable source?
https://advergroup.com/digital-marketing/
"AdverGroup Web Design and Creative Media Solutions is a full service advertising agency that delivers digital marketing services. We manage Google Ad Word campaigns and/or Meta Ad Campaigns for local clients in Chicago, Las Vegas and their surrounding suburbs."
So credible a resource on Gemini's performance/profitability... /sarc
But yeah, it doesn't even actually say anything about profits, let alone attribute any specific percentage of profits to Gemini. It's just vague marketing copy.
There will definitely be room for AI. OpenAI is just not really showing that they care about a particular business model, which is probably a strong indicator that Sam Altman is the worst person to lead that company. Anthropic will be profitable before OpenAI ever will be.
Gemini is in the green in terms of spending / income ratio FYI. I'm not talking about stocks.
I can't believe people who think this actually exist.
By the way, if Kamala, Biden, or Newsom were in office, I'd also call them Führer.
We live in a technocratic authoritarian state with the world's largest prison population and the most police executions; we are actively sponsoring multiple genocides, and we've killed over one million civilians in the Middle East in two decades.
Our politicians on both sides will go out of their way to protect pedophilic members of the ruling class...
But you want to tell us we're exaggerating or interpreting a reality that doesn't exist; I think you're the one who's been convinced through the regime's doublespeak that everything's alright.
Please re-evaluate. The US government is literally the 4th Reich and actively committing holocausts on multiple fronts.
It’s not a dishonor to their memories, or the atrocities committed, to call that out. It is not a dishonor to say there are stark and real similarities between the way the US is operating and treating civilians.
I personally find the opposite; IMHO it dishonors their memories to refuse to acknowledge the similarities.
I’ve posted a comment similar to this one here before, and I like how I ended it. I strongly encourage you to read about the history of Nazi Germany and how it came to happen. It wasn’t zero-to-death-camps overnight; it was 15 years in the making. That history is deeply shocking and depressing, because the parallels and timelines between it and the US are too similar for anything besides outright discomfort, sadness, and fear. But without knowing that history, we are ever more likely to repeat it.
One final thing to note: the US has a history of extreme violence, slave patrols and the treatment of non-whites of the 19th century were an inspiration for Hitler.
Now it's looking like a competitive blood bath where ever-increasing levels of investment are needed just to maintain market position. Their frontier models are SOTA for 4 weeks before a competitor comes and takes the crown. They are standing on much shakier ground than they were 2 years ago.
If investors keep throwing obscene money at OpenAI, sure, they can stay afloat forever. Can't argue with that. But if we're talking about a sustainable business, I still don't see it.
At some point Jensen Huang will be out (retired, or forced out by stagnating sales) and can definitely look back on a very successful career. That much is certain.
The signal that agent usage is sending, though, is that Anthropic is way ahead: all we hear about these days is Claude, despite OpenAI spending so much more money. Anthropic is also out trialling vending machines, etc.
ChatGPT apart from generating text was a bit of a query/research tool but now that Google has their AI search augmentation shit somewhat together I'm not feeling much need for ChatGPT as a research partner.
So now the big question is, with coding and search niches curtailed, where will OpenAI be able to generate profits from to justify their insane spending?
Recent high-profile examples include Segway, NFTs, crypto as a whole, pre-transformer voice assistants, and various "Design Thinking" projects like those Amazon Prime buttons.
Free ChatGPT chat has made the company a household name, and helped it to persuade investors, but every single one of those free users costs the company money. Most of those free users have proved unwilling to convert to paid users, and adding ads to the free service promises to send it into the same enshittification death spiral so many other companies have fallen into.
Also, how on Earth would your grandma and parents not have heard of crypto? Crypto is frequently front page news, even in print newspapers. There have been crypto superbowl ads. Are they living under a rock?
If OpenAI keeps getting circular financing, of course they will not collapse yet.
I think it's still too early to tell. By what measure did you even determine that Nvidia is falling?
Also Softbank invested, which is never a great signal.
They also invested in Uber
At least Anthropic has some runway in terms of valuation and isn't bleeding all over some free tier.
It's clear that the stock market can no longer be considered normal; it's held up on hopes and prayers at best.
This sounds a bit like going forward (some) OpenAI APIs will also run on platforms other than Azure (AWS)?
Anyone knows more?
OpenAI desperately needs to be available outside Azure. We are exclusively using Anthropic atm because it is what is available in AWS Bedrock and it works. These things are solidifying fast.
To me it feels like one of those throw some play money into it and see what happens sort of situations. Expect it will return negative due to the raw financials and outlook, but small chance the brand carries enough weight with the public that it spikes.
I'd love to hear other thoughts though
But at such numbers it's nonsense.
I don't see any moat. LLMs are commodities.
Enterprise is on Gemini/NotebookLM and Copilot as it's a natural extension of the Google and Office suite they use.
Devs are in Anthropic camp, but they will jump as soon as they can save 90% of the money for 99% of the output.
Or is it just to keep Nvidia from crashing?
Incredible.
It can both be true at the same time: That AI is going to disrupt our world and that Open AI does not have a business model that supports its valuation.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...
World will still need software, lots of it. Their valuation is based on an entire developer-less future world (no labor costs).
The majority of my coworkers now push AI-generated code each day, and it has completely relieved me of any fear whatsoever that AI will take my job.
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
And that's the dealbreaker for me since they've been so adamant on scaling taking them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blogpost is still holding strong. To be fair it did point to ASI where valuation is indeed unlimited, but nowadays the definition of AGI is quite weakened in comparison.. but does that then convey an unlimited valuation?
Yes, this is kind of like Tesla promising full self driving in 2016
"If your goal is to get your dirty car washed… you should probably drive it to the car wash "
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern repeated many times now, benefitting from this exact scenario seemingly "debunking" it well after the fact. Often the original behavior can be replicated after finding sufficient distance of modified wording/numbers/etc from the original prompt.
But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity, note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
"any human can instantly grok the right answer."
When asking humans about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs do.
I completely agree. I'm ashamed to admit, I've actually walked to the car wash without my car on more than one occasion. We all make mistakes!
Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.
"AGI" is the IPO.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end of value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected with real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow to capture margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
- Someone in the 16th century, probably
On a tangent, I remember companies like Slack triggering the unicorn craze. They said that it was just better to aim for a billion than some number like 900M or 1.2B, because psychologically, it meant more to employees, investors, and customers.
OpenAI is in that place where nobody really cares for these mind games. It's not very reliable. But it is useful enough to pay for. It's cheap enough to be an impulse purchase where some guy decides to just subscribe to ChatGPT because they're working on an important slide or sketching a logo.
Good times.
BTW, real money or credits?
It is bad enough that AI sucked up so much investment money; if the AI bubble collapses and hits the companies that do make profitable things hard, that would be even worse...
https://www.inc.com/leila-sheridan/nvidia-is-wavering-on-its...
What's the statute of limitations for securities fraud? The current administration won't last forever.
Nope. That $100B is in "promises" spread over several years in total.
They have $15B out of the $50B from Amazon right now.
> The current administration won't last forever.
This is why OpenAI must IPO, and when it does, I won't be surprised if a crash follows before 2030.
By then, they will "announce" "AGI" (Which actually means an IPO)
It’s already a joke to call the slop generators “AI”, so giving it another fake name won’t really make much of a difference any more. Nothing short of a miracle will be able to top the “creative marketing” we already have.
Edit: yes, it is true that many people do integrate directly with OpenAI. That doesn't negate the fact that Openrouter users are largely not using OpenAI.
OpenRouter claims "5M+" users; OpenAI is claiming >900M weekly active users.
I don't really think it's possible to learn anything about the broader market by looking at the OpenRouter model rankings.
On the other hand, big users don't use openrouter. At $work we have our own routing logic.
2. people often use openrouter for the sole purpose of using a unified chat completions API
3. OpenAI invented chat completions; if you use openrouter for chat completions, often you can just switch your endpoint URL to point to the OAI endpoint to avoid the openrouter surcharge!
4. Hence anyone with large enough volume will very likely not use openrouter for OpenAI; there is an active incentive to take the easy route of changing the endpoint URL to OAI’s
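Point 3 above can be sketched concretely. The chat-completions wire format is the same OpenAI-style schema on both services, so "leaving openrouter" for OpenAI models amounts to swapping the endpoint URL and API key. A minimal Python sketch (the model name and prompt are illustrative, not from the thread):

```python
import json

# Public chat-completions endpoints; only the host differs, the
# request body follows the same OpenAI-style schema on both.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(model: str, prompt: str) -> str:
    """Build the JSON body shared by both providers."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# The same body can be POSTed to either URL with the matching API
# key, which is why high-volume users have an incentive to cut out
# the middleman and its surcharge.
body = build_request("gpt-4o-mini", "Hello")
```

Most OpenAI-compatible client libraries expose this as a `base_url`-style setting, so in practice the switch is often a one-line config change.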
Is it?
At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
That day will come. Not everyone needs a Ferrari.
Edit: I misread the parent, I think they're saying the same thing.
The differentiating factor will be access to proprietary training data. Everyone can scrape the public web and use that to train an LLM. The frontier companies are spending a fortune to buy exclusive licenses to private data sources, and even hiring expert humans specifically to create new training data on priority topics.
It's already come for vast swathes of industries.
Most organizations have already been able to operationalize what are essentially GPT4 and GPT5 wrappers for standard enterprise use cases such as network security (eg. Horizon3) and internal knowledge discovery and synthesis (eg. GleanAI back in 2024-25).
Foundation Models have reached a relative plateau, and much of the recent hype wasn't due to enhanced model performance but smart packaging on top of existing capabilities to solve business outcomes (eg. OpenClaw, Anthropic's business suite, etc).
Most foundation model rounds are essentially growth equity rounds (not venture capital) to finance infra/DC buildouts to scale out delivery or custom ASICs to enhance operating margins.
This isn't a bad thing - it means AI in the colloquial definition has matured to the point that it has become reality.
- Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
- The $30B each from SoftBank and NVIDIA is paid in installments
So this is more a $35B fundraise, with a _promise_ of more, maybe, if conditions are met. Not _bad_, but yet more gaslighting from Mr Altman. Anyone reporting this as a closed fundraising deal is being disingenuous at best.
Startup funding is often given in increments depending on milestones being met. Most startups just don’t announce that it’s conditional.
For large funding rounds, nobody gets a check for the full amount at once.
The funding would not be conditional on an IPO because that wouldn’t make any sense. The IPO is the liquidity event for the investors and there’s no reason for a startup to take private investment money that only enters the company after IPO.
So if they hit $100 billion annual revenue then it's AGI, but if Kellogg's launches "FrostedFlakes-GPT" and steals 30% of the market, it's no longer AGI at $70 billion?
You'll never get a billion dollar check from anyone.
I've even seen startups raise like 500k pre-seed with tranches in it, lmao!
s/breathing/investment/g s/balloon/bubble/g s/air/money/g
(Vibes ~ Vibrations ~ Heat)
Tbf it's a reasonable question... I think it's a little tricky to pin down the equivalent of "kinetic energy" in purely economic terms, though you might look at the rate of flow of money as some analogy for the speed/energy of particles (speed of individual dollars changing hands). In that sense, the more frequent and larger these deals get, the hotter the market is. This is not a novel analogy.
One of them wanted to have some fun, so said to the other - "I'll give you $100 if you take a big bite of that turd".
His colleague figured $100 was a good chunk of cash, so did the deed. Feeling thoroughly humiliated, he pocketed the $100 and they carried on.
Further down the street they came upon another turd.
The angry economist now wanted revenge so made the same proposal back to his colleague, who also agreed and took a bite of the turd, earning back his $100.
Later one of them said to the other "you know, I can't help but feel we both ate shit for no reason."
His colleague replied "what do you mean? We raised the national GDP by $200."
Money was just the means of the transaction.
surely that behavior leads to a good society and doesn't encourage nefarious behaviors
Seeing this phenomenon, a Silicon Valley entrepreneur gets an idea, with the following sales pitch:
"Turd-bars that will make you the fittest version of yourself, answer all your deepest questions, and take you to the promised land (mars)."
Surprisingly, the turd-bars sell well, and GDP rockets up. Meanwhile VCs with fomo are funding its competitor: the shit-sandwich.
In practice, people don't tend to pay people to eat shit without gain. You are paying people to help you. Money gaslights everyone into helping each other, the most selfish people become the most selfless.
Of course, real capitalism is much more complex and much uglier than this fantasy. When certain people end up with long-term control of large piles of money, the whole thing gets distorted. They get to make lots of money on interest without doing anything, and making other people eat more shit for scraps. That's the "capital" part of capitalism.
But the toy world-model that this joke is making fun of, is actually the one core positive aspect of capitalism and brings all the prosperity we have: tricking people into helping each other.
You reminded me of this Stewart Brand quote:
> Computers suppress our animal presence. When you communicate through a computer, you communicate like an angel.
You scratch my back for a $10M IOU.
The debts cancel out.
How is the economic gain calculated?
That's certainly a take; the industry loves it. Sure, all that "everybody will print widgets at home instead of going to the store" stuff was never going to happen, but 3d printing is nonetheless here to stay.
But it's not magical, and not much different to injection moulding or something in concept.
Almost everything created with home level 3d printers is plastic junk you can buy for a few dollars on aliexpress (without weird rough edges).
If it weren’t subsidized I would pay more. Wouldn’t be happy about it but I would do it.
At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
Fear
An echo cannot go on forever!
This is an argument from 2024. Somehow, the models have continued to improve.
If they stopped improving today they are good enough as they already are to generate profound change.
The wave front is already visible, we’re just on the shore waiting for the impact.
Maybe there is some way to keep the model up to date in less dramatic ways. But I think something's gotta give..
I mean, even now the vibe coded stuff is reprehensible.
It is a bubble with extreme levels of debt plus funding built on too many promises from the companies in these sorts of rounds.
People being consumed by the hype will also be completely consumed by the crash.
Comments like this are exactly how a 2000- and 2008-style crash will happen.
What did bitcoin give us, essentially? Huge pump-and-dump schemes coordinated by big hands? Crypto investments that made 95% of investors poorer? What's left? Maybe 0.01% of it was beneficial.
I guess it isn't that noticeable from inside US, but the rest of the world is grateful.
Maybe speak for yourself? As part of the rest of the world, I am not grateful.