You are paying to use that limit only some of the time. There are 5-hour windows when you are sleeping and can't use it. There are weekend lulls.
Theoretically you can max out every 5 hour window, but they lose money on that.
It's structured so users can have bursts of effectively unlimited usage, spend ~15% of the theoretical max cap, and still cost Anthropic less than the subscription price.
An OpenClaw user can use 6, 7, 8 times what a human subscriber is using.
Ah, to be human!
I grew up in an Asian household of six. We definitely took food home at AYCE places. My parents definitely knew it wasn't OK, but they felt like they were gaming the system (like a dubious life hack of sorts) and saving money, so they were actually quite proud of it, bragging to friends how much they were able to get.
To be human indeed!
Goes to show just how fragile a high-trust society is. Theft and corruption can easily be normalized to such an extent that not participating gets reframed as immoral.
If the factory is yours, then everything inside is yours ;)
But it's funny how low wages under the broken Soviet economic system turned such things into semi-official, informal work perks, allowing people to make ends meet.
I wouldn't call it "funny" though. It was quite sad and I'm glad it's over.
That said, the general unavailability of everything was caused by an incompetent government rather than the system itself, though the system itself produced that government. My point is that it was a succession of demagogueries hiding personal interests that caused the recurring and unrecoverable tragedies of that state. Being controlled and misguided is not exclusive to any particular government or political system.
I don't think communism is a good form of government and I don't think the Soviet Union was heading in the right direction.
But the biggest blunders came from other, much more serious mistakes caused by politicians ignoring science, like the great famine and many others, including the Chernobyl debacle.
I guess if it’s your moral obligation to steal from the workplace it reframes it somewhat.
No, there is a weekly limit as well. Maxing out a single 5h window uses ~10% of the weekly limit.
Perhaps people at Anthropic should ask Sonnet (or Kimi, it's much better value) how power laws and pareto distributions work? You are advertising for people who can justify a virtually unlimited amount of tokens, why is it surprising that they would use as many as you're offering them in the plan?
PS: interesting that you'd use a throwaway account to post this
If you manage developers or product folk, do you allow them to work when you're not looking over their shoulder? All developers can be managers/team leads now. You plan, you delegate, you review.
You're welcome to not do this, surely that's appropriate in quite a few areas of work, but many of us are because we can get more work done than if we were micromanaging every line of code change. For startups, where a bit of quality can suffer in favor of finding market fit, this is huge.
This is just the morning stuff, and it saves a shitload of time clicking around from tool to tool, freeing up time for thinking and deciding.
They could easily structure their limits to enforce that kind of pattern fairly on both human and automated users. They could e.g. force a cooldown period between your daily activity bursts, by decreeing that continued heavy use on a 24h basis would count exponentially more towards your limit. That would be transparent and force the claws to lighten their load below that of a typical human user. We're talking about a company that's worth hundreds of billions of dollars and targeting highly sophisticated enterprise users, not consumers; it's just not credible that they'd be technically unable to set that up.
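The scheme described above can be sketched concretely. This is a hypothetical policy with invented numbers, not Anthropic's actual accounting: tokens consumed during sustained back-to-back heavy windows are charged super-linearly against the weekly cap, so an always-on loop burns through its allowance far faster than a bursty human.

```python
# Hypothetical policy sketch: sustained heavy use counts super-linearly
# against the weekly cap. All numbers are invented for illustration.
def charge(tokens: int, consecutive_heavy_windows: int, base_rate: float = 1.0) -> float:
    # Each back-to-back heavy 5h window doubles the effective cost
    # charged against the weekly limit.
    return tokens * base_rate * (2 ** consecutive_heavy_windows)

# A human bursting once is charged face value against the cap...
human = charge(100_000, 0)   # 100000.0
# ...while an always-on agent in its 4th consecutive heavy window
# is charged 16x for the same tokens.
bot = charge(100_000, 4)     # 1600000.0
```

Under a rule like this, a human's natural rest periods reset the multiplier, while a 24/7 claw exhausts its weekly budget in a fraction of the time, which is exactly the cooldown pressure the comment describes.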
Even the API pricing is subsidized by investors. When that stops, pricing will escalate.
Most estimates I've seen have shown that API usage seems to be at least unit profitable (paying for infra and electricity, not R&D)
I downgraded from my $200-a-month plan to my $20 plan and hit limits constantly. I try to use the API access I purchased separately, and it doesn't work with Claude Code (something about the 1 million context requiring extra usage), so I have to use Continue. Then I get instantly rate-limited when it's trying to read 1-2 files.
It just sucks. This whole landscape is still emerging, but if this is what it's like now, pre-enshittification, when these companies have shitloads of money, it's going to be so much worse when they start to tighten the screws.
Right now my own incentive is to stop being dependent on Claude for as much as I can as quickly as I can.
Either you get a flat rate fee based on certain allowed usage patterns or everyone has to be billed à la carte.
Your comparisons are all also "unlimited" situations to Claude's very much limited situation. You can't buy a plan for Claude that is marketed as being unlimited. They're already selling people metered usage. They're just also adding restrictions on top of that.
So they further restricted the metered caps, which were only offered on the assumption that not many users would actually reach them.
Simple as that.
Then they should figure out how to structure an offering that accommodates this type of usage, not just blanket-ban it.
They did, didn't they? You can pay the non-plan rate.
> not just blanket ban it
They didn't do that. The email specifically tells you how to use Openclaw with Anthropic. There is no "blanket ban".
You got that right; in this case they are signalling that AI token providers are not going to be able to run at a profit anytime soon.
Not sure if that helps or hurts your argument, though.
Not all power users. Some re-invent the wheel and/or do things inefficiently, and in most cases there's no business incentive to adapt the service to fit the usage patterns of those users, or of other users that deviate from the norm in regards to resource usage.
Because it is clear that there is a market demand for it.
Yes, and that's exactly the problem I'm pointing at.
Your comment "that people would love an offering without the discussed restriction" ignores the pricing burden of that, which is why it's confusing to ask why Anthropic doesn't just offer this.
The API has no restrictions; what is people's objection to that?
I've had to unwind "unlimited" within startups that oversold. I've been bit by ISPs, storage providers, music streamers, fuckin _Ubers_, now AI subscription services, that all dealt in "unlimited". None of them delivered in the long run.
I'd be mad at Anthropic if it weren't for the fact that my experience now can see this sort of thing from a mile away. There are a lot of folks, even on HN, who haven't been around for as long. I understand the outrage. I've been there. But these computers cost money to run, and companies don't operate at a loss in the fullness of time.
Once you know that unlimited trends towards limited, the real question is whether we're equipped as a society to deal with the fact that the capital-L Labor input to the economic equation is about to be replaced with a Capital input for which only a handful of companies have a non-zero value.
Reminds me of when AT&T put a fake 5G indicator on phones.
"AT&T won’t remove fake 5G logo even after ad board says it’s misleading"[0]
You can just get away with lying. That's the level of enforcement that exists against unethical behavior in business today.
0. https://www.theverge.com/2020/5/20/21265048/att-5g-e-mislead...
But now you might get things like “unlimited” 1Gbps… which reverts to 10Mbps (1% speed) or worse after 3.6TB (eight hours). And so your new theoretical maximum is about 6.8TB per month rather than 330TB.
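The arithmetic in that claim checks out (using decimal units and a 30-day month):

```python
# Reproducing the comment's arithmetic (decimal units, 30-day month).
FULL = 1e9 / 8              # bytes/s at 1 Gbps
THROTTLED_RATE = 10e6 / 8   # bytes/s at 10 Mbps
MONTH = 30 * 24 * 3600      # seconds in a 30-day month
BURST = 8 * 3600            # eight hours at full speed

cap = FULL * BURST                       # bytes before the throttle kicks in
rest = THROTTLED_RATE * (MONTH - BURST)  # bytes for the remainder of the month
unthrottled_max = FULL * MONTH           # theoretical monthly maximum

print(cap / 1e12)                     # 3.6  (TB before throttling)
print(round((cap + rest) / 1e12, 1))  # 6.8  (TB actual monthly ceiling)
print(unthrottled_max / 1e12)         # 324.0 (TB; the comment rounds to ~330)
```

So the "unlimited" gigabit line delivers about 2% of its nominal monthly capacity once the throttle is accounted for.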
Not the best example. The upkeep cost of a gym is pretty flat regardless of how much people use the facilities. Two people can't use a single machine at the same time, so doubling usage can't make it wear out twice as fast. The price of memberships is not correlated to usage; it's inversely correlated to the number of memberships sold.
The machine doesn't care about the number of people using it. If it's constantly being used, it will wear out faster. You are conflating "we price based on expected under-utilization" with "costs don't scale with usage." Those are different things.
The inverse correlation you talk about isn't relevant here - People buy gym memberships intending to go, feel good about the intention, and then don't follow through. The business model is built on that gap. That's pretty specific to fitness and a handful of similar industries where aspiration drives purchase.
Anthropic doesn't sell based on a "golly gee I hope people don't use this" gap - they sell compute. Different business.
There is nothing anywhere hinting at that.
They don’t sell compute. They sell a subscription for LLM token budgets that they hope people don’t use because the compute is vastly more expensive than what they charge or what users are ever willing to pay.
Especially with enterprise subscription plans the idea is for customers to never utilize anywhere close to their limits.
Yeah, but there's an absolute limit to that, beyond which the cost doesn't keep increasing. Beyond that point, the QoS goes down (queues).
>You are conflating "we price based on expected under-utilization" with "costs don't scale with usage."
I'm not conflating anything, I'm responding to what you said:
>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.
Why would a gym need to change how they bill things if all their customers were aiming for maximal utilization, when their costs would barely see any change? I doubt your typical gym operates on razor-thin margins.
Setting that aside, even if we accept your argument that gym costs barely scale with usage, then that makes gyms a bad comparison case for Anthropic, whose costs directly scale with usage. You can't use the gym model to defend Anthropic's pricing decisions if the two cost structures are nothing alike.
I'm arguing that both gyms and Anthropic have costs that scale with usage, but the gym business model assumes a large margin of under-utilization plus a hard cap on what a "power user" can consume - and I think neither of those extremes applies to Anthropic's situation. Under-utilizers aren't paying for AI; they have a free tier. There's also a natural ceiling on how much any one person can use a gym. There's no equivalent constraint on API usage.
Yes. In fact I remember hearing about a gym which offered a flat-rate pricing model but explicitly excluded certain professions from partaking in it. I remember the deal excluded police, bouncers, models, actors and air stewardesses. They had a separate, more costly tier for these people. (And I think I heard about it from the indignation the deal had caused online.)
Am I? I think you read something into my comments that I didn't write.
Sure they do. Free tiers suck. I may not always need to use AI, but when I need it, I don't want to immediately get hit by stupidly low quotas and rate limits, or get anything but SOTA models.
What do you expect them to do? You are looking at a business currently running at a loss and complaining about their billing, even though this is not a price rise.
Unrelated: is it still possible to use $10k/month worth of tokens on their $200 plan?
Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.
They’ve already implemented several of the features that put OpenClaw on the map.
I don't know what that means in this context.
> Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.
What does that have to do with them implementing restrictions on their plans because they are currently running at a loss?
Okay, let's say their internal projections[1] are accurate: were those made before or after OpenClaw's release? Maybe their projections were made on the assumption that people would stop using $10k/month worth of tokens on a $200/month plan? Or that the users doing that would only be doing code? Or that plan users won't be running requests at a rate of 5/minute, every minute of every hour of every day?
--------------------------------
[1] Where did you find those projections? I'm skeptical, at their current prices and current plans, that a break-even at any point in the future is possible unless they shut off or severely scale down training. Running at a per-unit loss means that the more you sell, the larger your loss - increasing your sales increases your loss.
I'm sorry, is there anything even close to Sonnet, much less Opus, that can be run on a 4080? Or in 64 GB of RAM, even slowly?
* Weird thing of the day: https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-...
The issue is, and always will be, competing views on what these services are for. Most see them as augments of their normal everyday workflow. Others see it as the tool that allows their creativity to flow as fast as their thoughts do. The problem is the service is more than capable of catering to both, but the creative vibe-commander will hit those limits far faster. Simply telling them to "take a break" is akin to those video game screen nags that developers were forced to put into games to remind people to pee.
This typically results in a ban for TOS violations after a few windows in a row on a claude subscription
I got neither a warning nor a ban nor anything - and that was with the double token amount during those days.
So I don't see human usage being something they ban for TOS violation, like you describe. But as always YMMV.
Do you have any proof of your statement?
Then it's not priced correctly. As I said, you can do all of this without OpenClaw; Claude Code ships with everything you need to maximize the limits.
I mean, you can. Electricity is already sold that way. Subscribers with uncharacteristic usage spikes don't get blackouts, they get a slightly larger bill, and perhaps get moved up a tier.
It's just how hyperscaling works. You are not wrong, but in the wrong timeline.
Just because outliers can be money-losing doesn’t mean you should raise the price for everyone.
If they are losing money then it's not priced correctly. That's what I responded to.
Yes, subscriptions work as you say. Plenty of people under utilize subscriptions from prime, to credit cards, to netflix. But if they lost money overall, they too would raise prices. Because that's how economics works. Shortage of capacity, high demand, raise prices until equilibrium.
There's other knobs beyond ToS. They just didn't choose those options.
Raising prices is a bad strategy if you have a small base that costs enormously more than the rest. "A million users that cost $1 and one user that costs $10 million, charge everyone $10 equilibrium" - you're screwing over almost all of your users. The $20/month sub price is basically just not trying to capture the OpenClaw users; it doesn't make sense for all of the vanilla Claude users to subsidize them (and in fact it wouldn't even work, because they would just go to Gemini or ChatGPT if your cheapest paid plan were made very expensive to subsidize the other users).
Just a few years ago this was the standard business model for startups: attract VC money, offer plans at a loss, capture a huge market, boil the frog with incremental price increases to become profitable.
Companies like Uber wouldn't have been anywhere near as successful if they had been forced to make a profit from day one.
The erosion of the norm of things doing what they advertise rather than being weasel-worded BS is particularly unfortunate, and leads to claims like this.
Whether it's human token use, or future OpenClaws
I even think an LLM trained to communicate in telegram style might be faster and way cheaper.
.- -. -.. / .. --..-- / ..-. --- .-. / --- -. . --..-- / .-- . .-.. -.-. --- -- . / --- ..- .-. / -. . .-- / - . .-.. . --. .-. .- -- -....- -... .- ... . -.. / --- ...- . .-. .-.. --- .-. -.. ...
Terse.
This mainly just affects hobbyists.
This makes zero sense. I'm paying to use that limit all of the time. If that's too much for Anthropic, they are free to lower the limits or increase the price. Claiming otherwise would be false advertising.
I’m glad they give us the leeway to experiment, and I’m also glad they weed the garden from time to time. To switch metaphors, I’m deeply frustrated when my very modest, commuter-grade use gets run off the figurative highway by figurative hot-rodders. It’s been extra-429y this week, and it’s about time they reined it in a little.
You’re always welcome to pay-as-you-go for as many tokens as you’d like to burn on their infrastructure… or to compute against any of the wide array of ever-improving open models on commodity compute providers…
That's an interesting way of phrasing it - so is there a way to use the quota that's not 'abuse'? MCP/Claude Code seems to be what they want you to use it with - are loops or ralph abuse as well?
> is there a way to use the quota that's not 'abuse'?
I think my answer is “no.” In that I’ve never thought of the limits as “quotas,” and I don’t think I’ve heard Anthropic speak of them that way. Quotas are to be used up, while limits are to signal that what you’re doing is outside the envelope of acceptable use. Quotas are to be met, limits are to be avoided.
I interpret the intention of the subscription, like a membership at a makerspace, to be to allow novices to experiment with stuff, to take on personal-scale projects, to allow them to learn without having to understand the tool’s economics upfront. To play without fear of expensive mistakes.
And, like the makerspace, it can only offer generous limits to the extent that most of us rarely bump up against them. If you’re doing production runs in the makerspace, you’re crowding out the other members, and something’s gotta give.
To the extent that we do bump against the limits during “ordinary” use—and we do with Claude Code, especially those of us around here—it’s really frustrating. The limits need to rise in order for it to remain attractive to casual users like me, the economics still need to add up for the subscription program as a whole, and part of that is separating out what patterns of use belong under a different regime.
If these harnesses or OpenClaws or whatever stop making sense as soon as they have to pay their actual costs, then that’s a pretty good sign they’re abusing the spirit of the subscription.
But Anthropic seem more than happy to service those uses via the API or metered usage, and even to sweeten the deal with more reliable access and bulk discounts. I certainly wouldn’t characterize the same automated usage as “abuse” via that channel.
Fair enough.
>> But that’s not what the subscription product is for.
This was the point I was trying to make - I pay for XX tokens/usage. But somehow using them all is 'taking advantage' ?
BTW - I'm actually not complaining about the limits - I probably only use half my tokens in an average week. I'm just annoyed at having to jump thru hoops if I want to try something 'API'-oriented. For me, AI is still the new shiny - I try all different sorts of things, learning/playing. There was an article posted today about writing agent harnesses. That could be interesting - maybe I want to try my hand at it. But then I've got to mess around/pay extra to _try_ something that my subscription already easily covers.
[added:] >>to take on personal-scale projects, to allow them to learn without having to understand the tool’s economics upfront. To play without fear of expensive mistakes.
This is exactly what I'm trying to do - however, as soon as you want to try anything 'API' oriented, the 'fear of expensive mistakes' comes right back.
More users spinning up OpenClaw means that balance starts to shift towards more users maxing their tokens, thus the average increases, so I think their explanation makes sense still.
So they profit overall if I use all my tokens either way? Again, I understand usage limits - I just don't understand why some usage is 'good' and some 'bad' if I'm using the same either way.
>>More users spinning up OpenClaw
I'm pretty sure that's a small percentage of overall users, and probably skewed towards the very people who would be recommending/implementing your model for work/businesses. Seems like that would be the group you are encouraging/cultivating?
I wonder if anyone else has experienced this?
Perhaps because your Claude agent usage is not representative of the average user, and closer to the average OpenClaw user levels...
Basically: spin-up in the morning eats a lot of tokens because the cache is cold. This has actually gotten worse now that Opus supports a 1M-token context.
So: compact before closing up for the night (reduces the size of the cache that needs to be spun up); and the default cache life is 5 minutes, so keep a heartbeat running when you step away from the keyboard to keep the cache warm.
Also, things like web-research eat context like crazy. Keep those separate, and ask for an md report with the key findings to feed into your main.
This is not an exhaustive list, and it's potentially subtly wrong sometimes. But it's a good band-aid.
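The heartbeat tip can be sketched like this. The 5-minute TTL comes from the comment above; the `ping` callable is whatever cheap request your setup uses to touch the session (e.g. a trivial `claude -p "ping"` invocation - illustrative, not a documented interface):

```python
import time

def keep_warm(ping, ttl_s=300, margin_s=60, pings=3, sleep=time.sleep):
    """Refresh the prompt cache before its TTL expires.

    Assumes a 5-minute (300s) cache TTL, per the comment above.
    `ping` is any cheap request that touches the session; `sleep`
    is injectable so the loop can be tested without waiting.
    """
    interval = ttl_s - margin_s  # fire with a safety margin before expiry
    fired = 0
    for _ in range(pings):
        ping()        # touch the session, resetting the cache TTL
        fired += 1
        sleep(interval)
    return fired
```

Worth noting the trade-off: each heartbeat itself costs a few tokens, so this only pays off when the cached context is large enough that a cold re-read would cost more.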
https://news.ycombinator.com/item?id=47616297
Know what's funny? OpenClaw might actually burn fewer tokens than a naive Claude Code user, if configured correctly. %-/
And I'm skeptical of the 6x-8x claim myself. They'd have to explain that in more detail.
With data, it's an engineering target.
They could just 429 badly behaved clients.
Power users always cost these services more than they pay, and OpenClaw turns every user into a power user. A recalculation was rational.
From Anthropic's perspective, everyone pays to be in bins with a given max.
And to everyone's benefit, there is a wide distribution of actual use. Most people pay for the convenience of knowing they have a max if they need it, not so they always use it.
So Anthropic does something nice, and drops the price for everyone. They kick back some of the (actual/potential) savings to their customers.
But if everyone automates the use of all their tokens Anthropic must either raise prices for everyone (which is terribly unfair for most users, who are not banging the ceiling every single time), or separate the continuous ceiling thumpers into another bin.
That's economics. Service/cost assumptions change, something has to give.
And of the two choices, they chose the one that is fair to everyone. As opposed to the one that is unfair (in different directions) to everyone.
From the email: > but these tools put an outsized strain on our systems. Capacity is a resource we manage carefully and we need to prioritize our customers using our core products
OpenClaw doesn't put an outsized strain on their systems any more than Anthropic's own tools do. They just happen to have more demand than they can serve, and they benefit more when people use their own tools. They just aren't saying that explicitly.
It has nothing to do with fairness or being nice.
which said customer paid for. And now they want to back out of it because it turns out they thought users wouldn't do that.
I say they ought to be punished by consumer competition laws - they need to uphold the terms of the subscription as understood by the customer at the time of the sign up.
Except when people start using OpenClaw, and the distribution narrows (to that of a power user).
I hate companies that oversell capacity but hide it in the expected usage distribution. Same goes for internet bandwidth from ISPs (or download limits - rarer these days, but they exist).
Or airplane seats. Or electricity.
Except they charge you less because of the distribution. Competition for customers doesn't evaporate.
They might charge you less, but they don't have to, and won't if the market allows it.
That's a "fixed" constraint, because maximizing future adjusted value is what companies do.
So they don't play little games with mass products. If they did they would be harming their own bottom line/market cap.
(For small products, careful optimization often doesn't happen, because they are not a priority.)
Note this thesis explains what is going on here. What was previously one kind of customer (wide distribution of use), is now identifiably two. The non-automated token maxers (original distribution) and automated token maxers (all maxed, and growing in number). To maintain margins Anthropic has to move the latter to a new bin.
But the customer centric view also holds. By optimizing margins, that counter intuitively incentivizes reduced pricing on lower utilized products. (Because margin optimization is a balance to optimize total value, i.e. margins are not the variable being maximized.)
The alternatives would be bad for someone. Either they under-optimize their margins, or charge regular customers more, which is unfair. Neither of those would be a rational choice.
(Fine tuning: Well-run companies don't play those games. But companies with sketchy leaders do all kinds of strange things, primarily because they are attempting to manage contradictory stories in order to optimize their personal income/wealth over the company's. But I don't see Anthropic in that category.)
Instead, you can prioritize people "earnestly" bursting to the usage limits, like the users who are actually sitting at their computer using the service over someone's server saturating the limit 24/7.
The goal is to have different tiers for manual users vs automated/programmatic tools. Not just Anthropic, this is how we design systems in general.
When your least automated, most interactive users are competing for capacity with fully-automated tools, let's say, you're forced to define some sort of periphery between these groups.
OpenClaw is a self-directed, automated loop that sits on a server. It's wowing its owner by shitposting on moltbook and doing the kind of crazy things you can find stories about online, the ones that amount to "omg I can't believe my self-directed claude loop spent all day doing this crazy thing haha."
On the other end of the spectrum is someone using Claude.app's interface.
And then in the middle, you can imagine "claude -p" inside a CI tool that was still invoked downstream of a user's action. Still quite different from the claude loop.
I'm sorry, but this framing just doesn't make sense.
- The intention of subscriptions, as anywhere, is a combination of trying to promote brand loyalty and the gym-membership model of getting people to pay for oversubscribed resources that many will never use. As the parent noted, people maxxing out their allowed usage, for whatever reason, are not the most profitable customers, and in this case probably not profitable at all
- OpenClaw is now owned by a competitor, OpenAI, and Anthropic are trying to compete in this space
https://www.semafor.com/article/04/03/2026/anthropic-eyes-it...
- Anthropic are capacity constrained, having sensibly chosen to err on the side of safety (not going bankrupt), and are now trying to do the best they can to manage that.
Presumably they might be acting differently if they had capacity to spare, but even then helping a competitor to build market share in a potentially lucrative segment doesn't make strategic sense.
I do wonder about the wisdom of Anthropic promoting usage-maxxing development patterns such as running a dozen agents in parallel ... maybe not the wisest thing to do when capacity constrained! It would make more sense to promote usage at night with low priority "batch jobs" rather than encourage people to increase usage during periods of maximum demand.
Do you have an example of how this is how they have advertised or sold the plan? I don’t recall ever seeing any advertisement that their plan is simply pre paying for tokens.
As you said, I would imagine that where the token usage comes from is irrelevant - you are generating the same load whether you do it from Claude Code or some other agent. So it seems like the rules have more to do with encouraging Claude Code usage than with Claude model usage.
OpenClaw just happens to also get telemetry, of probably higher value, out of the same tokens. It also happens to be owned by their competitor.
edit: I'm wrong OpenClaw surprisingly doesn't collect telemetry. Good for them.
At least that’s my read. I don’t believe it is nefarious
Tokens consumed through these agents (Claude Code/Cowork/claude.ai) are treated separately from raw model tokens, and they want to discount their own product usage.
The subscription they sell is a package of these products, not tokens. They never sell token subscriptions, so why do we need to relate tokens with the subscription? Fundamentally, they never meant to sell token usage in that subscription, similar to any other SaaS company trying to sell API usage.
Nothing beyond fumbling the PR around it.
I haven't even heard of claude -p before your comment.
OpenClaw is for sure not just a good cover story. Or rather, it's the public face of the broader issue of automated tool workflows.
I don't think they are bothered too much about other frontends who do the same as claude code.
This is so wrong.
The subscription is to Claude (the app, Claude code, etc) not the API.
Anthropic subsidizes Claude code because they collect a ton of super useful telemetry and logs so they can improve… Claude code.
Wanting to pay for a subscription to Claude and treat it like an API discount is like going to an all-you-can-eat buffet and asking them to bring you unlimited quantities of raw ingredients so you can cook at home. Okay, not a perfect analogy, but you get the idea.
You just paraphrased my argument
(Maybe I'm just being paranoid here).
I mean, humans sleep and do other things than work, so they likely don’t hit their weekly limits or their 5 hour limits every single 5 hour chunk :)
If you max out your token limits, you are costing Anthropic more than you are paying them. They only expect a small percentage of their users to do this, but OpenClaw changed the dynamic.
Anthropic knows that they will lose more users by lowering limits than they will by blocking OpenClaw, because OpenClaw users will overwhelmingly switch to API pricing, while chatbot users will leave for competitors with higher limits.
They are a business. They hope to become profitable. This was the correct move.
It’s a shame they do all this sketchy stuff. I switched to Codex; I’ve had enough of their BS.
I’m pretty happy knowing that it supports my development workflow for a week. Recent features like the Code Desktop built in browser, Cowork with Claude in Chrome and remote control matter to me way more than the number of tokens. But that’s me.
Depends on their targeted ICP also, which they are free to define. Is it those users maxing out tokens for the buck? I have the feeling there’s even better alternatives on the market right now.
For many it doesn't. It's opaque, it changes, and they bury the news on fucking Twitter. https://x.com/trq212/status/2037254607001559305
There's a lot to love about Anthropic. But man do they suck at PR.
Subscriptions are crazy subsidized.
So you can’t use OpenClaw, OpenCode, etc., because they take you outside their applications/lock-in and their ability to easily monetize in the future.
Second, OpenAI is burning UNIMAGINABLE sums of money. Three days ago they raised $122 billion [2], the largest funding round in history. By comparison, Anthropic has emphasized a more capital-efficient approach, with a ~30% burn rate. [3]
[1] https://x.com/sama/status/2023150230905159801
[2] https://openai.com/index/accelerating-the-next-phase-ai/
[3] https://www.wsj.com/tech/ai/openai-anthropic-profitability-e...
It's obvious they will tighten everything and raise prices for years to come.