The hope is to get a big userbase who eventually become dependent on it for their workflow, then crank up the price until it finally becomes profitable.
The price for all models by all companies will continue to go up, and quickly.
Subscriptions and free plans are where the money can easily get burned.
Do you think this is true for DeepSeek as well?
This might well be true, but I'm hoping it's because the frontier models are actually more expensive to run.
Said another way, I would hope the price of GPT-5.5 falls significantly in a year, once GPT-5.8 is out.
Someone else on this post commented:
> For API usage, GPT-5.5 is 2x the price of GPT-5.4, ~4x the price of GPT-5.1, and ~10x the price of Kimi-2.6.
Having used Kimi-2.6, it can go on for hours spewing nonsense. I personally am happy to pay 10x the price of something that doesn't help me for something that does, in even half the time.
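The "happy to pay 10x" point is really a back-of-envelope calculation: the per-token price is only one term in the total cost of getting a working answer. A minimal sketch, where all prices, token counts, and times are hypothetical numbers chosen purely for illustration:

```python
# Back-of-envelope: effective cost of a task = API spend + value of your time.
# Every number here is hypothetical, for illustration only.

def effective_cost(price_per_mtok, mtok_used, hours_to_solution, hourly_rate):
    """API spend plus the cost of your time until a working answer."""
    return price_per_mtok * mtok_used + hours_to_solution * hourly_rate

# A 10x-cheaper model that burns twice as much of your time can still
# cost more overall.
cheap = effective_cost(price_per_mtok=1, mtok_used=5,
                       hours_to_solution=4, hourly_rate=50)   # 205
pricey = effective_cost(price_per_mtok=10, mtok_used=5,
                        hours_to_solution=2, hourly_rate=50)  # 150
print(cheap, pricey)
```

Under these made-up numbers the 10x-priced model comes out cheaper end to end, which is the whole argument in one line of arithmetic.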
That's a big if, though. I wish Meta were still releasing top of the line, expensively produced open-weights models. Or if Anthropic, Google, or X would release an open mini version.
Sure, they’re distilled and should be cheaper to run, but these hosting providers do turn a margin on them given it’s their core business, unless they do it out of the kindness of their hearts.
So it’s hard for me to imagine these providers are losing money on API pricing.
I think Kimi and Qwen are similar?
Where can I find up-to-date resources on open-source models for coding?
Bit of a hype madhouse whenever a new model is released, but it's pretty easy to filter out simple hype from people showing reproducible experiments, specific configs for llama.cpp, GitHub links, etc.