I bet that's the real reason why they're not releasing Mythos ;)
reply
It worked. Although I have a Claude Code subscription, I got the ChatGPT Pro plan, and 5.4 xHigh at 1.5x speed was better than 4.6 with adaptive thinking disabled. I was working all day, about 8 hours, and did not run into any limits. 5.4 surprised me many times by doing things I usually would not do myself, because I am lazy, so yeah, I am sticking with 5.4 for now until all the Claude drama is over.
reply
Is that why Anthropic recently gave out free credits for use in off-hours? Possibly an attempt to more evenly distribute their compute load throughout the day?
reply
That was the carrot, but it was followed immediately by the stick (5 hour session limits were halved during peak hours)
reply
I suspect they get cheap off-peak electricity, so compute is cheaper at those times.
reply
That's not really how datacenter power works. It's usually a bulk purchase billed at 95th-percentile usage.
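For anyone unfamiliar, 95th-percentile ("burstable") billing roughly works like this: usage is sampled at fixed intervals, the top 5% of samples are discarded, and you pay for the highest remaining sample, so short spikes don't raise the bill. A toy sketch (the function name, rate, and numbers are all made up for illustration):

```python
def burstable_bill(samples_kw, rate_per_kw):
    """Bill at the 95th percentile of sampled power draw (illustrative)."""
    ordered = sorted(samples_kw)
    # Drop the top 5% of samples; bill the highest of the rest.
    cutoff_index = int(len(ordered) * 0.95) - 1
    billable_kw = ordered[max(cutoff_index, 0)]
    return billable_kw * rate_per_kw

# 100 samples: mostly 400 kW, with four brief 900 kW spikes.
samples = [400] * 96 + [900] * 4
print(burstable_bill(samples, rate_per_kw=120))  # spikes fall in the top 5%, so billed at 400 kW
```

The point for the thread: under this scheme the monthly power cost barely changes with time-of-day load shifts, which is why "cheap off-peak electricity" is unlikely to be the driver.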
reply
I think it's a lot simpler than that. At peak, the GPUs are all running hot; during low volume, they aren't.
reply
> Is that why Anthropic recently gave out free credits for use in off-hours?

That was the carrot for the stick. The limits and the issues were never officially acknowledged or communicated, and neither were the "off-hours credits": you would only know about them if you logged in to your dashboard. When is the last time you logged in there?

reply
It's a hard game to play anyway.

Anthropic's revenue is increasing very fast.

OpenAI, though, made crazy claims; after all, it's responsible for the memory prices.

In parallel, Anthropic announced a partnership with Google and Broadcom for gigawatts of TPU chips, while also announcing its own $50 billion investment in compute.

OpenAI always believed in compute, though, and I'm pretty sure plenty of people want to see what models at 10x, 100x, or 1000x the compute can do.

reply
Hard for me to reconcile the idea that they don't have enough compute with the idea that they are also losing money by subsidizing usage.
reply
They clearly aren't losing money; I don't understand why people think this is true.
reply
People think it's true because it is true, and OpenAI has told us themselves.

They (very optimistically) say they'll be profitable in 2030.

reply
They're saying Anthropic doesn't have enough compute, not OpenAI. They said OpenAI specifically invested early in compute at a loss.
reply
They are losing money because model training costs billions.
reply
Model inference compute over a model's lifetime is now ~10x its training compute for major providers, and it's expected to climb as demand for AI inference rises.
reply
For sure, and growth also costs money, e.g. for buying datacenters.
reply
They are constantly training new models and retiring older ones; they are losing money.
reply
Which part of "over model lifetime" did you not understand?
reply
That's not a sufficient condition for profitability if both inference and scaling costs continue to increase over time.
reply
Honestly, I personally would rather have a time-out than the quality of my responses noticeably downgrading. What I found especially trust-eroding were the responses from employees claiming that no degradation had occurred.

An honest response of "Our compute is busy, use X model?" would be far better than silent downgrading.

reply
Are they convinced that claiming they have technical issues while continuing to adjust their internal levers to choose which customers to serve is holistically the best path?
reply
Prepare for the prices to go up!
reply
You state your hypothesis quite confidently. Can you tell me how taking down authentication many times is related to GPU capacity?
reply