I would not have believed your argument 3 months ago, but I strongly suspect Anthropic actively engages in model quality throttling due to their compute constraints. Their recent deal for multiple gigawatts' worth of data center capacity might help them correct their approach.
reply
For what it's worth, Anthropic explicitly denies that: "To state it plainly: We never reduce model quality due to demand, time of day, or server load"

See also https://marginlab.ai/trackers/claude-code/

It's very interesting to me how widespread this belief is. Maybe it's as simple as LLM productivity degrading over time within a project, as slop compounds.

Or, since they more recently added a 1M-token context window, maybe people are being more reckless with context usage.

reply
Inference is where they make back the money they spend on training, so this feels unlikely. Perhaps this doesn't hold true for Mythos, though.
reply