> Anthropic offering 2.5x makes me assume they have 5x or 10x themselves.

They said the 2.5x offering is what they've been using internally. Now they're offering it via the API: https://x.com/claudeai/status/2020207322124132504

LLM APIs are tuned to handle a lot of parallel requests: overall token throughput is higher, but individual requests are processed more slowly.

The scaling curves aren't that extreme, though. I doubt they could tune the knobs to get individual requests coming through at 10x the normal rate.

This likely comes from having some servers tuned for higher individual request throughput, at the expense of overall token throughput. It's possible that it's on some newer-generation serving hardware, too.
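
As a toy model of that tradeoff (all constants invented, just to show the shape of the curve):

    # Each decode step serves the whole batch, and its cost grows more
    # slowly than the batch size, so bigger batches raise total tokens/sec
    # while each individual request gets slower.
    def step_time_ms(batch_size, base=20.0, per_request=2.0):
        return base + per_request * batch_size

    for batch in (1, 8, 32, 128):
        t = step_time_ms(batch)
        per_user = 1000.0 / t        # tokens/sec one request sees
        total = batch * per_user     # tokens/sec across the node
        print(f"batch={batch:4d}  user: {per_user:5.1f} tok/s  node: {total:7.1f} tok/s")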

reply
Where on earth are you getting these numbers? Why would a SaaS company that is fighting for market dominance withhold 10x performance if they had it? Where are you getting 2.5x?

This is such bizarre magical thinking, borderline conspiratorial.

There is no reason to believe any of the big AI players are serving anything less than the best trade-off of stability and speed that they can possibly muster, especially when their cost ratios are so bad.

reply
Not magical thinking, not conspiratorial, just hypothetical.

Just because you can't afford to 10x all your customers' inference doesn't mean you can't afford to 10x your in-house inference.

And 2.5x is from Anthropic's latest offering. But it costs you 6x normal API pricing.

reply
Also, from a comment in another thread, from roon, who works at OpenAI:

> codex-5.2 is really amazing but using it from my personal and not work account over the weekend taught me some user empathy lol it’s a bit slow

[0] https://nitter.net/tszzl/status/2016338961040548123

reply
This makes no sense. It's not like they have a "slow it down" knob; they're probably parallelizing your request so you get a 2.5x speedup at 10x the price.
reply
All of these systems use massive pools of GPUs, and allocate many requests to each node. The “slow it down” knob is to steer a request to nodes with more concurrent requests; “speed it up” is to route to less-loaded nodes.
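
A minimal sketch of that knob (node names, loads, and the tiering are all hypothetical):

    # Route "fast" traffic to the least-loaded node, so each request
    # shares its decode steps with fewer neighbors; pack normal traffic
    # onto busier nodes to keep overall utilization high.
    nodes = {"pool-a": 42, "pool-b": 7, "pool-c": 81}  # active requests per node

    def pick_node(tier):
        if tier == "fast":
            return min(nodes, key=nodes.get)
        return max(nodes, key=nodes.get)

    print(pick_node("fast"))    # pool-b
    print(pick_node("normal"))  # pool-c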
reply
Right, but that's still not Anthropic adding an intentional delay for the sole purpose of having you pay more to remove it.
reply
That's also called slowing down the default experience so users have to pay more for the fast mode. I think it's the first time we're seeing blatant speed ransoms in LLMs.
reply
That's not how this works. LLM serving at scale processes multiple requests in parallel for efficiency. Reduce the parallelism and you can process individual requests faster, but the overall number of tokens processed is lower.
reply
They can now easily decrease the speed for the normal mode, and then users will have to pay more for fast mode.
reply
Do you have any evidence that this is happening? Or is it just a hypothetical threat you're proposing?

These companies aren't operating in a vacuum. Most of their users could change providers quickly if they started degrading their service.

reply
They have contracts with companies, and those companies won't be able to change quickly. By the time those contracts come up for renewal it will already be too late, their code having become completely unreadable by humans. Individual devs can move quickly, but companies can't.
reply
Are you at all familiar with the architecture of systems like theirs?

The reason people don't jump to your conclusion here (and why you get downvoted) is that, for anyone familiar with how this is orchestrated on the backend, it's obvious that they don't need artificial slowdowns.

reply
I am familiar with the business model. This is a clear indication of what their future plan is.

Also, I was just pointing out the business issue, raising a point which hadn't been raised here. I just want people to be more cautious.

reply
Slowing down with respect to what?
reply
Slowing down with respect to the original response speed. Basically what we used to get a few months back, and what is the best possible experience.
reply
There is no "original speed of response". The more resources you pour in, the faster it goes.
reply
Watch them decrease resources for the normal mode so people are penny-pinched into using the fast mode.
reply
Seriously, looking at the price structure of this (6x the price for 2.5x the speed, if that's correct), it seems to target something like real-time applications with very small context. Maybe voice assistants? I guess that if you're doing development, it makes more sense to parallelize over more agents than to pay that much for a modest increase in speed.
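
Back-of-the-envelope, taking the 6x/2.5x figures from upthread at face value:

    price_mult, speed_mult = 6.0, 2.5

    # Each token costs 6x more and arrives 2.5x sooner:
    print(price_mult / speed_mult)  # 2.4x the cost per unit of latency saved

    # Same budget spent on normal-priced parallel agents instead:
    agents = 6                      # what the fast-mode premium buys at 1x pricing
    print(agents / speed_mult)      # 2.4x more total throughput than fast mode

So fast mode only wins when a single request's wall-clock latency is the thing you're paying for, which fits the real-time use case.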
reply