Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0]. They lose money because of R&D (training the next, bigger models) and, I assume, also because of investment in other areas like data centers.

Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me inclined to believe them when they say they are profitable on API usage. Whether the subscription plans are profitable is less clear.

[0] https://youtu.be/GcqQ1ebBqkc?si=Vs2R4taIhj3uwIyj&t=1088

reply
We can also look at inference pricing at third-party providers.
reply
> Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply.

Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record I'm not sure I'd just take his word for it.

As for Chinese models: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...

From the article:

> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.

> Anyway, I’m sure these numbers are great-oh my GOD!

> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.
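
Just to put those quoted numbers in perspective, here's a quick back-of-envelope calculation (a minimal Python sketch using only the figures from the quote; the ratios are my own arithmetic, nothing more):

    # Figures quoted above, per The Information via the linked article
    zhipu_revenue_h1 = 27e6        # USD, first half of this year
    zhipu_net_loss_h1 = 334e6      # USD, same period
    minimax_revenue_9mo = 53.4e6   # USD, first nine months
    minimax_burn_9mo = 211e6       # USD, same period

    # Dollars lost per dollar of revenue
    print(f"Zhipu:   ~${zhipu_net_loss_h1 / zhipu_revenue_h1:.1f} lost per $1 earned")
    print(f"MiniMax: ~${minimax_burn_9mo / minimax_revenue_9mo:.1f} lost per $1 earned")

That works out to roughly $12 lost per dollar of revenue for Zhipu and about $4 for MiniMax.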

reply
Their whole company has to be profitable, or at least not run out of money and investors. If you have no cash, you can't just point to one part of your business as being profitable, given that the current models will quickly become hopelessly out of date when other models overtake them.
reply
Other models will only overtake them as long as there is enough investor money, or enough margin from inference, for others to keep training bigger and bigger models.

We can see from the pricing at third-party inference providers that inference is profitable enough to sustain even third parties serving proprietary models, despite the licensing/usage fees they are undoubtedly paying, so these models won't go away.
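
To illustrate the shape of that argument (every number below is a hypothetical placeholder I made up, not a real price):

    # Hypothetical per-million-token prices, purely illustrative
    first_party_price = 3.00    # what the model's maker charges on its own API
    third_party_price = 2.50    # what a third-party host charges for the same model
    third_party_compute = 1.20  # the host's own serving cost (GPU time, power, etc.)

    # The host's gross margin caps whatever licensing/usage fee it could be paying
    license_headroom = third_party_price - third_party_compute
    print(f"Room for a license fee: up to ${license_headroom:.2f} per million tokens")

    # If hosts stay in business at prices like these, raw serving cost must sit well
    # below the first party's own API price, i.e. first-party inference has margin.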

reply
Yeah, that’s the whole game they’re playing. Compete until they can’t raise more and then they will start cutting costs and introducing new revenue sources like ads.

They spend money on growth and new models. At some point that will slow and then they’ll start to spend less on R&D and training. Competition means some may lose, but models will continue to be served.

reply
This is my understanding as well. If GPT made money, wouldn't the companies that run these models be publicly traded?

Furthermore, the companies that are publicly traded show that, overall, these products are not economical. Meta and MSFT are great examples of this, though investors have recently appraised their results in opposite ways. Notably, OpenAI and MSFT are more closely linked than any other Mag7 company is with an AI startup.

https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spe...

reply
Going public is not a trivial thing for a company to do. You may want to bring in additional facts to support your thesis.
reply
Going public also brings with it a lot of pesky reporting requirements and challenges. If it weren't for the liquidity it gives shareholders, "nobody" would go public. If the bigger shareholders can get enough liquidity from private sales, or have a long enough time horizon, there's very little to be gained from going public.
reply
> From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference.

> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)

Yeah, exactly. So how the hell do the bloggers you read know that the AI players are losing money? Are they whistleblowers, or are they pulling numbers out of their asses? Your choice.

reply
Some of it is whistleblowers, some of it is pretty simple math and analysis, and some of it is just common sense. Constantly raising money isn't sustainable and dramatically increases obligations. If these companies didn't need the cash to keep operating, they probably wouldn't be asking for tens of billions a year, because that creates profit expectations that simply can't be delivered on.
reply
Sam Altman is on record saying that OpenAI is profitable on inference. He might be lying, but it seems an unlikely thing to lie about.
reply