I've run into this, and I highly doubt I am one of the more extraordinary users. I have delays between working with it, don't have many running at once, am running on smaller codebases, etc. Yet just a few minutes ago I hit a quota. In the past I did far more work with it without running into the quota.

I emailed their support a few days ago with details, concerns, a link to the twitter thread from one of their employees, and a concrete support request, which had an AI agent ('Fin') tell me:

> While our Support team is unable to manually reset or work around usage limits, you can learn about best practices here. If you’ve hit a message limit, you’ll need to wait until the reset time, or you can consider purchasing an upgraded plan (if applicable).

I replied saying that was not an appropriate answer.

You're absolutely right re the lack of transparency and accountability. On the one hand, Anthropic generates goodwill by appearing to have a more ethical stance than OpenAI, and a better product. On the other hand, they squander it quickly through extremely poor treatment of their customers.

If they have a bug, they need to resolve it: and in the meantime refund quotas. 'Unable to' - that's shocking. This is simple and reasonable. It's basic customer service. I don't know if they realise the damage their attitude is doing.

reply
Fin is the most useless thing ever. There's no obvious way to get reports in front of a human in a timely manner, and there's no reason to believe Fin interactions are even retained.

This ultimately means no loyalty. I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports or down reports at all.

I do understand that Anthropic is operating at a tremendous scale and can't have enough humans in the loop. This sounds like a good use for AI classification and triage, really!

reply
> I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports or down reports at all.

Amen to this.

Being in business means having to respond to customer enquiries at some point.

Given the billions being pumped into Anthropic's pockets, and the millions their senior leadership no doubt pay themselves, I'm sure they could spare a bit of cash to get off their backsides and sort out customer service.

I simply do not buy the "poor Anthropic, they are operating at scale, they are too busy winning to deal with customer service" argument that comes up time and time again.

The fact is there are many large businesses and many large governments that manage to deal with customers "at scale".

Scale means you respond a bit slower, maybe a few days or a couple of weeks at most. But complete silence for months or years is inexcusable.

All of my experiences with "Fin" match those of my friends and colleagues: namely, that "Fin" is a synonym for "black hole". I've got "tickets" opened with "Fin" months ago that haven't received so much as a reply.

reply
It is also interesting to observe that the most valuable accounts in this kind of pricing model are the least-used ones, which never run into the limits. Heavy users cancelling their accounts in frustration is a win for Anthropic, not a punishment, at least in the short term.
reply
Casual users follow the recommendations of power users. Pushing heavy users off your service is a post-growth optimization.
reply
Once you get used to using Claude as an abstraction layer, you start getting pretty reckless with it.

My organization has the concept of "premium models" where our limits reset every month. I hit my limit pretty quickly last month because I was burning tokens doing things that would have been a simple bash loop in the past, all because I was used to interfacing with Claude at the chat layer for all my automation needs, without thinking any more about it.
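
To be concrete, the kind of task I mean is trivially scriptable. A sketch for illustration (the directory, file names, and rename rule here are all made up):

```shell
# Purely illustrative: batch-renaming files, the sort of chore that
# needs a three-line shell loop, not an LLM session per file.
mkdir -p /tmp/rename_demo && cd /tmp/rename_demo
touch a.log b.log
for f in *.log; do
  mv -- "$f" "${f%.log}.txt"   # strip the .log suffix, add .txt
done
ls                             # now contains a.txt and b.txt
```

Zero tokens burned, and it runs in milliseconds.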

reply
This is a real danger that I think a lot of people will run into as prices go up more and more in the future.

Completely outside of the productivity debate, offloading cognitive tasks to LLMs leaves you less practiced in them and less ready to do them when the LLM isn't available. When you have to delegate only certain tasks to the LLM for financial reasons, you may find yourself very frustrated.

reply
I'm really hoping locally hosted LLMs get to the point of competing with current frontier models, so that we all have "unlimited" usage.
reply
This is the bet of many of the big AI companies, and it's why they're heavily subsidizing inference calls. With the latest cracks by the US gov, it seems Anthropic is starting to reduce those subsidies, given its edge in the game. I'm starting to consider local models more seriously beyond just testing, but the RAM/GPU market is inflated these days.
reply
Local models just don't seem that useful to me for these particular tasks yet - the most recent versions of Codex and Claude Opus are the first models I've found particularly useful in a "real engineering" context that isn't just vibe coding.

Google's TurboQuant might help address this, but it also might just widen the gap even further.

I am far on the skeptic edge when it comes to the generative AI side of ML tools, though, so do weigh my opinion accordingly.

reply
Seriously, who isn't planning a local-first strategy?
reply
This feels a lot like the same playbook we’re seeing with dynamic pricing in retail, just applied to compute instead of products. You never really know what you’re getting, and the rules shift under you.

What makes it worse is the lack of transparency. If there were clear, hard limits, people could plan around it. Instead it’s this moving target that makes it impossible to trust for real work.

At some point it stops feeling like a bug and starts feeling like a pricing experiment on users.

reply
The clear trend over the past decade or so has been using analytics and data gathering to extract maximum rent from every customer in every industry, and AI is going to massively accelerate this.

The only way out is government regulation which means we are screwed in the US (our government is too far gone to represent average citizen interests in any meaningful way) but Europeans maybe have a chance if they get it together and demand change.

reply
What a horrid glimpse into the future. I hope we won't get there, and that we all collectively fight back with our wallets.
reply
It's going to get much worse. We're soon going to have enough data and compute (and are losing enough online privacy) to allow every company to apply personalized pricing down to the individual. My local restaurant is going to know that I am willing to pay at most $4.57 for a burger and my neighbor is only willing to pay $2.91 for it, and they will have the ability to charge us individually. Every business is going to soak each of us to the maximum extent that the data says they can.
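
To make the mechanism concrete, here's a toy sketch in Python. Everything in it is hypothetical: the willingness-to-pay numbers, the lookup table, and the `personalized_price` function are invented for illustration only.

```python
# Toy model of individualized pricing: charge each customer just under
# their inferred maximum willingness to pay. All values are invented.
INFERRED_WTP = {"me": 4.57, "neighbor": 2.91}  # inferred from data

def personalized_price(customer: str, cost_floor: float = 1.99) -> float:
    """Price one burger for one specific customer, never below cost."""
    wtp = INFERRED_WTP.get(customer, cost_floor)
    return max(round(wtp - 0.01, 2), cost_floor)

print(personalized_price("me"))        # 4.56
print(personalized_price("neighbor"))  # 2.9
```

The point is only that, once a per-person estimate exists, charging up to it is a one-line lookup.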
reply
Who would voluntarily do business with a company that does this? Not me.
reply
Eventually, when all of them do this (and they will be effectively forced to in order to remain competitive), then we will not have a choice.
reply
I will make burgers myself. I take this approach with many goods and services that lack good suppliers anyway. And I don't care if it's suboptimal, because in the long run I'll have better skills and be protected from exactly this trend.
reply
But the supermarkets will do it too.
reply
Everyone who uses Uber is voluntarily doing business with a company that does this. When was the last time you took an Uber?
reply
I'm worried that the present is actually living off a line of credit that will be spent/closed soon.
reply
I suspect that Claude had a bug that undercounted tokens and they fixed it.
reply
I wonder if that was why they were offering the bonus off-hours limits: ease people into the transition.
reply
They keep running experiments, like a free $50 in extra usage credits, or 2x usage outside certain windows where inference is very slow. You can't help but think this is all a boiling-the-frog experiment: testing how much they can charge.
reply
Are they going to pay refunds if a subscription was paid for but the token limit was less than advertised? Is there some fine print somewhere preventing people from simply suing, or doing chargebacks with their credit cards?
reply
Part of the issue is that they don't actually advertise what the token limit is - just something vague like "this is 5x more than free, and 5x more than pro". They seem free to change the baseline however they please, because most of us are more than happy to use what they give us at the discounted subscription pricing.
reply
Working as intended? They openly state that how quickly you reach your limit depends on many factors (which you don't know), as well as the current load on their systems.

Could just be that usage has gone up.

reply