because they’re using it for different things where it works well and that’s all they know?
reply
And yet another "AI doesn't work" comment without any meaningful information. What were your exact prompts? What was the output?

This is like a user of conventional software complaining that "it crashes", without a single bit of detail, like what they did before the crash, if there was any error message, whether the program froze or completely disappeared, etc.

reply
This is quite hostile. Yes, criticism is valid without an accompanying essay detailing every aspect of the associated environment, because these tools are still quite flawed.
reply
[flagged]
reply
Because it was good until January 2026, then it deteriorated into an Opus 3.1. Probably given a much smaller context window or less RAM.
reply
It released in February 2026.
reply
I don’t think I’ve ever seen otherwise reasonable people go completely unhinged over anything like they do with Opus
reply
I've seen a similar psychological phenomenon where people like something a lot, and then they get unreasonably angry and vocal about changes to that thing.

Usage limits are necessary but I guess people expect more subsidized inference than the company can afford. So they make very angry comments online.

For example, there is no evidence that 4.6 ever degraded in quality: https://marginlab.ai/trackers/claude-code-historical-perform...

reply
> Usage limits are necessary but I guess people expect more subsidized inference than the company can afford. So they make very angry comments online

This is reductive. You're calling people unreasonably angry while simultaneously acknowledging that limited compute is a practical reality for Anthropic. This isn't that hard. They have two choices: rate limit, or silently degrade to save compute.

I have never hit a rate limit, but I have seen it get noticeably stupider. It doesn't make me angry, but comments like these are a bit annoying to read, because you are trying to make people sound delusional while, at the same time, confirming everything they're saying.

I don't think they have turned a big knob that makes it stupider for everyone. I think they can see when a user is overtapping their $20 plan and silently degrade them, because there's no alert for that. Which is why AI benchmark sites are irrelevant.

reply
just my perspective: i pay $20/month and i hit usage limits regularly. have never experienced performance degradation. in fact i have been very happy with performance lately. my experience has never matched that of those saying the model has been intentionally degraded. have been using claude a long time now (3 years).

i do find usage limits frustrating. should prob fork out more...

reply
That's what I thought today reading the comments in the Mozilla Thunderbird thread. Something about Mozilla absolutely sets people off.
reply
[flagged]
reply
I recognize the sarcasm. The data I can find says it's performing at baseline, however:

https://marginlab.ai/trackers/claude-code/

reply
Yeah, that's my point. Humans are not reliable LLM evaluators. "Secret model nerfs" happen in "vibes" far more often than they do in any reality.
reply
This but unironically.

"I reject your reality, and substitute my own".

It worked for cheeto in chief, and it worked for Elon, so why not do it in our normal daily lives?

reply