At this point, everyone doing these kinds of flows (using claws or any other setup that runs agents in a loop 24/7) on any kind of subscription-based billing for inference must be aware they're on borrowed time.

Enough people have gone over the economics - you're costing OpenAI/Anthropic money, potentially a lot of money, so it's inevitable that sooner or later that particular party will come to an end.

Having said that, doing it by running a regex on your prompts to look for keywords is a bit loose.
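To illustrate how loose that is (this is a hypothetical sketch, not Anthropic's actual filter - the pattern and prompts are invented), a bare keyword regex can't tell usage apart from mere mention:

```python
import re

# A deliberately naive keyword filter of the kind being criticized
# (pattern and prompts are made up, not Anthropic's actual code).
BLOCKLIST = re.compile(r"openclaw", re.IGNORECASE)

prompts = [
    "Summarize this blog post about OpenClaw security",  # just discussing it
    "Draft a policy banning OpenClaw in our org",        # arguing against it
    "hi",                                                # harmless
]

for prompt in prompts:
    print(bool(BLOCKLIST.search(prompt)), prompt)
# The first two print True (false positives); only "hi" passes.
```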

reply
We all get the "realpolitik" of it. That doesn't mean Anthropic just gets to ignore the contract they signed. Well, it does, as long as you're fighting their fight for them before it even gets to Anthropic.
reply
I strongly dislike all of these companies (and the people who run them), and I don't love LLMs in general, although I use them every day because they are useful for my job.

But the simple fact is, if you're paying $20/mo and using $200/mo of tokens, that is not going to last forever.

The only way to make it last a bit longer for the people with relatively sane usage patterns is to try and stop people absolutely taking the piss.

reply
That's not true, you're using RIAA-style wishful accounting here. If the company is willing to sell me $200 worth of tokens for $20, that's still worth only $20 to me.
reply
The worth of something to you can be more or less than the number of dollars you paid for it - if those tokens let you build something that you sell for far more dollars, or save you time that you put more value on.
reply
Ok well they need to do it above board and legally then.
reply
I don't get it though. Why not just revise the billing so that if users are hitting the servers above some defined frequency, they get charged more?

I'm tired of this startup-adjacent mindset that promotes endless adversarial scamming. I absolutely think people should be able to run OpenClaw or whatever harnesses they want, but I also think they should pay in some proportion to usage rather than trying to exploit an all-you-can-eat buffet offer to stock their own catering business.

reply
If they do that, they lose market share to their competitors, which kills their ability to raise investor capital, which kills the company, because they are almost entirely funded by investor capital.
reply
The demo above uses the prompt "hi". The openclaw string is in the git history, which Claude goes looking for.
reply
You're right, I didn't read that properly. Okay, then that actually makes sense, if that's a (relatively) deterministic way to work out if OpenClaw is used.
reply
It's definitely not! Now I can Claude Code-proof all future PRs into my open source repo with a single commit message.
reply
that is a terrible way to figure out if openclaw is used, hah
reply
The only reasonable thing to do if you care about the longevity of your workflow is to build it around open-weight models.

If you choose to not be able to get work done without Claude you're at the mercy of whatever they want.

reply
They can just do token caps. But they don't want to do that because "infinite" sells better.
reply
Oh, it's way worse than people realize. The gap between monthly plans and API-key pricing is a huge issue for them. They will have to end monthly subscription plans. You can pay $20 a month and use $10k in api tokens. They are in an all-out panic trying to fix this. But yes, the house of cards is ending.

The company-ending part is when they have to cut the $20-a-month plan and take things away. They are creating a massive group of coders that can't code - soon to have no way to code. This cohort will rampage through all social forums.

reply
They might not be able to scale it, and they might indeed have to jack up the prices. But vibe coding is here to stay. Maybe it'll recede for a few years while people figure out the scaling, but Pandora's box is opened and it ain't closing.
reply
> You can pay $20 a month and use $10k in api tokens.

Do you have a source? I would be interested to read more about any hard figures that have been posted like this.

reply
> scamming from the literal money

That's par for the course for Anthropic. I added some money to my account before I really had a use case for the product. A year later they said my money had expired, and when I contacted support they basically told me to pound sand.

This while they have the audacity to list one of their corporate values as 'Be good to our users'. They'll never get another dollar from me.

reply
I had exactly the same issue with Anthropic API. It was only $15, but I was so annoyed when they just decided that they'll take my money for free. If it's really the law as some people state, it's a stupid law.

I think my Zalando gift cards expire after 4 years.

reply
Fal.ai does the same thing.

It's pretty much a universal API credit policy at this point. I'm not sure if this legitimately escapes the prepaid gift card requirements or if the providers see nuance where there might not be any.

reply
It makes it hard to think their "safe AI" will ever be human-friendly. It'll match their company ethos of theft and lack of empathy for the people interacting with it.
reply
Everybody does that, the only question is how much time they give you. The issue, as far as I remember hearing, is that in the US expiring company credit can be immediately recorded as income, whereas indefinite-term credit only becomes income once the user spends it.
reply
That's not true of non-US companies. I had also added money to Deepseek, and it was still there (and Z.ai and Moonshot are the same). I'm reasonable, though - if it had been 5 years or something I might have understood, but it was 1 year and the account was in use during that time.

Where I live (in Canada) it's actually illegal for gift cards to ever expire, and there's lots available from US companies, so if it's an accounting issue other companies have figured it out.

reply
I put $20 on Mistral and Deepinfra several years ago, and it’s still there.
reply
Gift cards generally cannot expire until 5 years after activation in the United States (CARD Act 2009), so I would have wanted a similar time period here at least.
reply
deleted
reply
> Sooner or later some sort of complaint to the relevant trade authority should happen - this is a scam operation at this point.

I'm sure both people left at that trade authority will get right on with investigating.

reply
No. Hanlon's razor applies here.
reply
You lose little by assuming malicious intent when it comes to billion-dollar tech companies and your money. They can prove otherwise by remedying the situation.
reply
When it comes to understanding large organizations I think a simple principle should apply:

The Purpose of a System is What it Does[1].

Whether malicious or not, the system does what it does. If people wanted it to do something else they would change the system. The reality is that when corporations make mistakes that benefit them those mistakes rarely get fixed without some sort of public outcry, turning the "mistake" into a "feature".

1. https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

reply
Intriguing concept, but I feel it needlessly breaks language. A more narrow (and to me, less pompous) formulation would be that social groups have their own purpose, different from (though not unrelated to) the purposes of the individual members. And this collective purpose can be read best from the actions of the collective, just like the purpose of a person is best divined from their actions (actions speak louder than words).

More about where I think Stafford Beer goes wrong here: https://gemini.google.com/share/9a14f90f096e

reply
The insight for me is that the assumptions of the system need to be stated, not just the intent.
reply
Not really sure you gain much, either. Unless false confidence is your goal.
reply
False confidence in what?
reply
Not to corporations, no. You do not need to be charitable to a corporation.
reply
ok, how is this adequately explained by stupidity?

If it is adequately explained by stupidity, then you should be able to get it to display the same behavior without mentioning OpenClaw. Do you have any theory as to what stupid thing they have done to make this happen, non-maliciously? Because Hanlon's razor doesn't work just by saying "Hanlon's razor" - you have to actually explain how the stupidity happened.

reply
Gross negligence is malicious.
reply
What you do shows what you value. This clearly wasn't a mistake on the part of Anthropic. Time has shown that. They made the call based on what they believe in.
reply
It does not - that would be fairly magical. The most favorable interpretation that makes sense is that it's supposed to disconnect, and also taking your money is a defect.
reply
deleted
reply
'we know we sold you 50 gallons of gas, but you are only allowed to use 40 gallons.'
reply
Nobody ever uses more than 40 gallons though. So if you do, you're abusing the system.
reply
So making someone pay for 10 gallons of gas they're not allowed to use is fine with you?
reply
[dead]
reply
[dead]
reply
There are many possible explanations for this outcome to have occurred other than malice. If you're an engineer by trade, consider how many bugs you've been responsible for over the course of your career that you didn't intend. Probably a lot.

How about we turn down the heat, everyone?

reply
There's been a sustained pattern of incidents. If Anthropic were truly serious about not wanting to take people's money, then they would have put in place whatever review processes were necessary to stop this from happening. So regardless of whether or not they specifically intend to cause harm, they're willingly letting it happen, which is just about as bad.

Yes, it's reasonable to turn down the heat. But it's also reasonable for people to be upset when their money is taken from them, and when the company that does so is effectively beyond prosecution for doing so.

reply
Even with the best of faith, this is at the very least a shoddily vibe-coded "detect and low-key block attempts to use Claude for OpenClaw" - it decided to look for specific strings wrapped in JSON without realizing this doesn't always imply it's an actual payload for OpenClaw itself. And the human driving it was too dumb to review/catch this bad implementation.

So maybe not malice, but certainly a level of ineptitude I don’t expect from a crucial vendor from a tool that’s become essential for many developers.
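As a sketch of that failure mode (the detector, field names, and payload below are all hypothetical - this is a guess at the general shape, not Anthropic's real check): matching a marker string inside any JSON also catches documents that merely talk about the tool.

```python
import json

# Hypothetical detector in the spirit described above: flag any valid
# JSON document whose text contains the marker string.
def looks_like_openclaw_payload(doc: str) -> bool:
    try:
        json.loads(doc)  # is it JSON at all?
    except ValueError:
        return False
    return "openclaw" in doc.lower()

# A real agent payload would match -- but so does an innocent
# changelog entry that merely mentions the tool.
changelog = json.dumps({"note": "Dropped support for OpenClaw exports"})
print(looks_like_openclaw_payload(changelog))  # True: a false positive
```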

(I don’t care, I do just fine when Claude is down or refuses to help me (it has happened) though)

reply
> was too dumb to review

Yolo ship it! Move fast and break things. Reviewing just slows everybody down. Nobody can keep up with those coding agents output any longer.

/s

reply
I am an engineer by trade. If I pushed an update which wrongly busted my customers' usage limits at a trillion-dollar company, I would expect to get fired. Alongside my EM.
reply
Regardless of your expectations (I'm not criticizing them), that is just not how it works at most American companies. Especially not for your manager.
reply
You're right. They'd prefer to fire 7% of their team that did nothing wrong instead.
reply
Did Anthropic announce layoffs that I missed?
reply
They will by next year.
reply
I would expect someone to be critiqued to avoid it re-occurring, and the person's money to be refunded. A company which fires so trivially will quickly flush institutional knowledge and team cohesion, along with eating substantial recruitment costs.
reply
deleted
reply
This is not how any engineering workplace anywhere operates.
reply
There are more software engineers outside the first-world than there are within.
reply
> This is not how any engineering workplace anywhere operates.

Anywhere inside your bubble. The world is a big place.

reply
> consider how many bugs you've been responsible for over the course of your career that you didn't intend.

Through some amount of carelessness that ended up costing people money? 0.

Maybe 1, if you want to count the automated monthly charging system that did overcharge a handful of clients (extra erroneous charges for the same month) a few too many times. I noticed before anyone else did, and all of those 1am charges were reversed before 4am. So I don't think that one counts, because it was a boring bug that would have been very bad if I hadn't been paying attention.

Incompetence to the point of negligence can reasonably be considered malicious. If you're an engineer by trade, you have an ethical and professional responsibility to make sure things like this can't happen. And then, when bugs introduce said complications, fixing them, and remediating the damage.

reply
> How about we turn down the heat, everyone?

How about Anthropic turns down the heat and refunds money to everyone for every bug it created with its LLM?

reply
deleted
reply
And the stealing of $200 here? More non malice?

https://github.com/anthropics/claude-code/issues/53262#issue...

reply
Last I heard, the money is being refunded.
reply
I do see a tweet saying something about that, which I had to search for and only did because of your post. But remember, this only came about after they first denied him the refund (while thanking him for the 'bug' and telling him they would fix the problem) and it went viral on HN and X.

I'm sure they will proactively reach out to everyone who was affected without any need on the users part and make everyone whole....

reply
Yeah they probably just typed in "Hey Claude, figure out a way to get our inference spend under control - no mistakes!" and shipped it
reply
Also they ain't wrong. In what other context does OpenClaw get mentioned?

"You may not use our service if you mention OpenClaw" is a harsh line but hardly illegal or forbidden any more than any other service restriction (i.e. no use allowed for high-stakes financial modeling). Don't like it, cancel your plan.

reply
> is a harsh line

But that's the thing -- there is no line! Where is this specified? How can we know what service restrictions there are? For all I know, my plan could be exhausted at any point during the workday just because I happened to touch on some keyword Anthropic has decided to ban.

> Don't like it, cancel your plan.

Ah, but I thought these models were supposed to have been trained for the sake of humanity? That the arbitrary enclosure of the collective intelligence was for our own good? These concepts are not compatible.

reply
> I thought these models were supposed to have been trained for the sake of humanity?

Tbh blocking OpenClaw might just be for the betterment of humanity. It's yet to be proven either way.

reply
When you signed up, you agreed you understood the line - which is whatever Anthropic decides the line is. Legally, the line hasn't changed at all, nor has your moral position relative to Anthropic. Don't like it, cancel, but it was always the deal.

This is, by the way, the same legal principle that the website you are posting on, right now, uses. Some uses are prohibited. Not every line need be explicit. You aren't allowed to smack talk Y Combinator or the moderators without possibly being banned for life, and you certainly do not have a legal case if they do.

reply
Do you think businesses are allowed to just take your money, laugh, and refuse service for no reason?

People spend large sums of money for this tool. They can't just delete your balance because they feel like it.

reply
> Do you think businesses are allowed to just take your money, laugh, and refuse service for no reason?

> People spend large sums of money for this tool. They can't just delete your balance because they feel like it.

Unfortunately, in the US, they can. I'm not a lawyer working in this area, but my understanding is that companies are in general free to stop doing business with any customer at any time (barring reasons like the customer's race). And in this type of transaction, there is no obligation to give a refund when they cut off the business relationship. This is different from a business-to-business contract or other types of contracts. With this type of sale, you're generally out of luck if the business cuts you off. That's why Amazon can delete the music library they sold you and give you no compensation.

reply
Amazon doesn't sell digital music; they sell a license that contractually they can revoke at any time.

It's possible that Anthropic also structured its EULA such that we're buying Claude Fun-Bucks with no value and that they can obliterate at any time with no recourse. I haven't read the EULA so who knows. But if they did this and it went to court, they'd still need to get a jury to agree to this interpretation and that's a huge unknown.

reply
They can choose not to prolong the contract, but obviously they still have to provide the service you already paid for. Imagine paying for 1 year of Netflix and one week later Netflix decides to cut you off. Does that make sense?
reply
> I'm not a lawyer working in this area

You could have just stopped there. The rest of what you wrote just re-demonstrates that you don't know what you're talking about.

reply
If you’re paying for it, they can’t just arbitrarily deny you service for made up reasons. I would cancel, but then I would also charge back my payment I’m not getting my promised service for.
reply
Sure they can. But they have to refund your money.
reply
There are plenty of ways you could wind up with a git commit containing "OpenClaw" despite zero interaction with OpenClaw itself: adding a blog post to a static site repo, or even a clause in your own app's ToS disallowing use of OpenClaw with your API.
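As a rough sketch of why (the commit messages and the scan below are invented for illustration, not Anthropic's real check), a keyword sweep over git history flags every one of those innocent cases:

```python
# Invented commit messages matching the scenarios above: none of these
# repos ever ran OpenClaw, yet a naive keyword sweep flags them all.
commits = [
    "Publish blog post: first impressions of OpenClaw",
    "ToS: forbid use of OpenClaw clients with our API",
    "Vendor a dependency whose README compares itself to OpenClaw",
]

def history_mentions_keyword(messages: list[str], keyword: str = "openclaw") -> bool:
    """True if any commit message contains the keyword, however innocently."""
    return any(keyword in m.lower() for m in messages)

print(history_mentions_keyword(commits))  # True
```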
reply
Somebody else's repo that you cloned can contain lots of fun things.
reply
> but hardly illegal or forbidden any more than any other service restriction

Intentionally (or negligently) anti-competitive behavior is illegal in the US.

> Don't like it, cancel your plan.

Don't like being abused by a company? Just pretend it's not happening! Anyone not exactly as smart as you? I guess they deserve to be cheated out of their money too!

reply
There's a lot of people making tools for coding with LLMs and those have a high chance of mentioning OpenClaw somewhere.
reply
Where is this restriction documented?
reply
> How about we turn down the heat, everyone?

The heat is coming, in part, from the lack of a proper support channel.

reply
I agree that their support is abysmal, and that is intentional. It's unfortunate that the greater market doesn't seem to care that much right now.
reply
This would have been easy to say if it was the first time it or something similar happened.

But there is a clear pattern emerging. There's no reason to turn down the heat when a company of this size and influence is allowed this level of absurdity time and time again.

reply
Nuance? Ignorance vs malice? You think too highly of folks.
reply
deleted
reply
Well this regex nonsense was likely vibe coded. If it escaped quality checks then this is a testament to how dangerous things coming out of Anthropic are, but not in the scifi sense that their CEO tries to make everybody believe.
reply
Nah, however this was implemented this was a clear and obvious probable side effect. If they want to block the access at the mention of openclaw, that’s silly but mostly harmless, but why charge extra for an ambiguous case? At best that’s incredibly lazy, which for a company with as much money, influence, and power as Anthropic, is equivalent to malice.
reply
This is not the first, nor likely last, of behavior like this.

My personal story is that I bought $50 of credit into their system, didn't use it all that much, and then after a year had gone by they kept the leftovers. I consider that a kind of theft.

reply
How about no?

Why should we coddle a corporation when it screws over its customers?

It matters very little if they did this out of incompetence or malice.

reply