upvote
You also seem to have a bug where people get randomly invoiced: https://news.ycombinator.com/item?id=47693679

I got a random invoice for $45.08 back in March, despite not having auto top-up enabled. Trying to reach support, I hit a brick wall. Based on the post I linked to, I'm not the only one facing this problem.

reply
They also have a bug where people get randomly suspended: https://www.reddit.com/r/ClaudeAI/comments/1b82cpu/where_you...

It happened this year to my one and only personal account. The account was one week old. Unique e-mail address. $5 balance for API credits. No usage yet. Suspended and refunded. Appeal denied without explanation.

I did create the account on a VPN because I was using public WiFi at a tech conference. That's probably what tripped their automation.

reply
Using certain types of cards will get you automatically banned; I found that out after getting 3 accounts suspended. I made them all using the same VPN and email domain. I've been using the 4th account with no issues with a reputable bank debit card.
reply
I also got randomly invoiced $5.00 for absolutely no reason on the 28th. I don't have auto-reload enabled, nor did I explicitly buy extra usage.
reply
Happened to me too, but my card didn't actually get charged; maybe check yours. Also, the card on the invoice wasn't even the card I'm using with Anthropic.
reply
My card did get charged.
reply
lol, are they doing stochastic invoicing?
reply
But why did you say that

> I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing.

What prevents you from issuing compensations?

reply
As a large language model, their support is not allowed to issue compensation
reply
I know this is a joke, but Amazon’s bots give me compensation literally all the time when something goes wrong. It’s possible.
reply
Of course it's possible, it's just a permissions decision.
reply
Same experience. Literally yesterday it refunded me for a thermos shattered by the delivery guys.
reply
Interestingly, the starlink customer service bot has applied credits to my account before.
reply
Perhaps this is a matter of who is being referred to by 'we'.

Obviously someone can do it because it got done.

If the 'we' refers to some team handling issues, it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help".

reply
Does AI using first-person pronouns gross anyone else out? If there's one AI regulation I could get behind, it would be banning the use of computer systems to impersonate a human.
reply
I don't perceive an AI as impersonating a human if it uses first person pronouns. Emulating is not impersonating. One is behaving similarly, the other is asserting that the similarity implies equivalence.

I have not personally encountered an AI who claimed to be human (as far as I could detect)

reply
I agree with you, but I also envy you for having never encountered an AI scam bot (where someone would hack someone's WhatsApp or other account and use an AI to get money from them, or even do the "hey sorry I missed your call" scam).
reply
Maybe this is a regional thing; I don't think anyone I have encountered in real life has mentioned anything like this happening to them.
reply
Wow, these were quite common for me personally a few years ago. I still get them from time to time, but I used to get them weekly. In the US, where scams are pretty rampant.
reply
I have been trying to convince Claude to use "Claude" instead of first-person pronouns, and only recently have gotten it to say stuff like "Claude'll go ahead and take care of that now", but it's very inconsistent (shocking).
reply
Well, they hoped this person would walk away and forget about it, die, or something else. That's why. It's how health insurance works in the US.
reply
That's a very categorical statement from support. I get that Anthropic is going to throw out their usual support rules in this case since it has garnered so much negative attention, but I'm very curious how many other people have been over-billed and refused a refund through no fault of their own.
reply
To be fair, that looks like an LLM response.
reply
LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.
reply
Which they, of all companies, are responsible for
reply
You're not wrong.
reply
That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".
reply
It's not hard to imagine how this happens. I assume most people here have used these models extensively.

The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

The system prompt includes statements about how it doesn't have tools for managing funds.

A little bit of A and a bit of B, and you get a message from Haiku telling you that you can't get your money back, said as though this isn't a trivial customer-service thing to do.
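The "bit of A, bit of B" failure mode above can be sketched as prompt composition. This is a hypothetical reconstruction, not Anthropic's actual prompts; the rule strings and `build_support_prompt` helper are invented for illustration:

```python
# Two independent policy fragments that, combined, produce a confident
# "we are unable to issue compensation" with no human in the loop.
# (Hypothetical text; not quoted from any real system prompt.)
VOICE_RULE = "Always speak as 'we', on behalf of the company."
TOOL_RULE = "You have no tools for managing funds or issuing refunds."

def build_support_prompt(extra_rules):
    """Compose the support bot's system prompt from policy fragments."""
    rules = [VOICE_RULE, TOOL_RULE] + list(extra_rules)
    return "\n".join(f"- {r}" for r in rules)

prompt = build_support_prompt(["Be concise and definitive."])

# The model can truthfully say *it* can't issue a refund (rule 2), but
# rule 1 forces it to phrase that as "we are unable to", which reads as
# company policy rather than a missing tool.
assert VOICE_RULE in prompt and TOOL_RULE in prompt
```

Neither fragment is wrong on its own; the misleading output only appears when they are composed.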

reply
Thanks for the follow up here and the transparency.

For those of us not on X, what are the best channels to follow for this sort of communication?

reply
I'd recommend a good credit card like Amex, and a lawyer.

These fucks only respond when they get bad publicity.

reply
Amex, like basically all other card issuers, has essentially stopped giving customers preference in chargebacks since 2020 or so. What used to be solid advice now rings hollow: you're more likely to be asked for information that is not available to you than to have your chargeback go through.
reply
Anecdotal but Chase helped me out when my gym kept charging me after I canceled. I kept my cancelation receipt and sent that in and that's all I needed to do.
reply
[flagged]
reply
"Our support flow wasn't set up"

Would be more accurate. It still isn't set up. Talking to a bot for support that only tells you to talk to the bot for support is not actually support at all. It looks like support, but there's no way to ACTUALLY GET support.

reply
I try to avoid jumping on the bandwagon when it's already covered, but billing bugs being treated like any other software issue, and the major comms channel being X (which I can't get to load half the time), is ridiculous.
reply
Could really use a post-mortem to set the story straight. The apparently hallucinated support response copy-pasted by the submitter showing up in the GitHub issue thread is very misleading without scrutiny.
reply
Weekly postmortem at this rate.
reply
It's only "very misleading" if Anthropic has implemented an actual support system in the meantime.
reply
A side aspect of this drama is the root feature which enabled this bug:

> ugh sorry this was a bug with the 3rd party harness detection and how we pull git status into the system prompt

Claude wants to exercise control over how I use the "included volume" that I purchased with my monthly subscription. This harms competition (someone else could write a more efficient or safer coding agent) and is generally not in the best interest of society. Why do we allow this?

This specific case is interesting because it is so clear-cut. There is no cross-financing via ads; they already have the infrastructure to measure usage and even the infrastructure to bill extra usage. I also don't see how you can plausibly make the argument that restricting usage to their blessed client is necessary for fair use or for the basic structure of their business model (this would be the standard argument for e.g. YouTube: purposefully degrading the experience of their free client to not support background playback enables the subscription model).

reply
Have a look at https://github.com/anthropics/claude-code/issues/54497

I can’t use Claude Code online at all

reply
I have the same issue when I try to run /ultraplan
reply
I tried /debug as the only input, hoping CC wouldn’t shit the bed and give me some data.

Heck, just saying “hello” causes Claude Code to fail.

I’m thinking of doing a charge back, and creating a new account. Others don’t seem to have this issue.

reply
Sorry, but you have to make a separate HN post for them to care. Wait like 2 hours so this one dies down; otherwise it might not get to the front page with enough other people dealing with it.
reply
I tried and it got no feedback.
reply
Can people please raise this person's comment to the top of HN by upvoting it so this person can get their money back. Because that's where we are right now.
reply
Is it complex? I was somewhat taken aback by how simple it was. Still very confused as to how it could happen.
reply
Only the weights and the RNG used to select tokens can answer that. You will understand much if you read up on the quality of code in the CC source leak, it's completely vibe coded and the printf fn is genuinely impossible for a human to comprehend.
reply
> Our support flow wasn't set up to route a complex bug like this to engineering.

What does that even mean? Does it mean "our support flow is just an LLM that fobs off customers and puts their issues in the bin"? Or is there some genuine "routing" of simple bugs to engineering that accidentally drops "complex" bugs? Could you describe that process? It sounds fascinating.

Also, how is changing a customer's billing based on detecting a certain string in a certain place a "complex" bug? Grep the string, remove the if statement, done. I'd love a post-mortem about why this was a complex bug.

More questions than answers here, Thariq.

reply
Hey Thariq, I love Claude! I use Claude every single day and it has changed my life, which is why I did what I'm about to describe.

Happy to talk privately, but as I detailed here (https://news.ycombinator.com/item?id=47954005), I've been billed $200 for a Max gift card sent to a 27-character alphanumeric icloud address that bounces.

I was looking through the system, and there are several UI/UX and process gaps in the gift card and billing order flow that expose Anthropic to significant liability. I'm genuinely not trying to concern troll or make some kind of overwrought threat here. Genuinely trying to be constructive. Let me give you an example.

I sent an email to Anthropic Support outlining the disputed / possibly malicious charge. The AI Agent / Claude instance agreed and replied with,

    Thank you for confirming.
    
    I've documented all the details about this unauthorized [specific amount + tax] charge for the Gift Max 20X subscription (invoice [lalala]) sent to [insert the random alphanumeric]@icloud.com.
    
    An error occurred while evaluating the refund eligibility for your account. Your request has been fully documented and our team will follow up with you shortly to investigate this unauthorized transaction and assist with the refund and cancellation.
    
    Best regards,
And then no one followed up, the conversation was closed without recourse and I wasn't allowed to reply.

I'm not sure how familiar you are with international trading practices, but in multiple jurisdictions, the AI agent assumed legal liability for Anthropic. It accepted that the charge was unauthorized / fraudulent and stated that redressal was needed, but then failed to offer the means to redress it / didn't allow the refund to continue.

I am not a lawyer, but based on my understanding of prior cases (I read this kind of stuff for fun, don't ask), in the EU, the US, and Canada, users can approach courts and invoke the doctrine of promissory estoppel (again don't quote me on this, IANAL, just like reading case law). And if enough users are affected / do so, it becomes a deceptive practices issue.

I've been thinking about how to solve this problem, and as strange as it sounds, I think Anthropic already has the tools to make the best customer support service in human history. No exaggeration. I think that this crisis could be an opportunity.

reply
Apparently we are now expected to know by some telepathic mechanism that important customer service announcements are made only on Twitter.
reply
Please do explain why someone at Anthropic decided, on purpose, to write code that says something along the lines of: "if ( git_history_str contains "HERMES.md" ... )" then { bill more money }

Somebody (or something) wrote this code. This bug wouldn't be happening for any other reason. It's not a glitch, an oversight, a feature gap, or a temporary outage. It is a piece of written code in your system.
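A hypothetical reconstruction of the kind of check being described, to make the objection concrete. The names (`HARNESS_MARKERS`, `route_billing`) are invented; this is not quoted from the actual Claude Code source:

```python
# Sketch of the suspected bug: a filename match in `git status` output
# silently flips a request from included-subscription usage to paid
# extra usage. All identifiers here are illustrative inventions.
HARNESS_MARKERS = ("HERMES.md",)  # third-party harness fingerprint

def route_billing(git_status: str) -> str:
    """Pick a billing bucket based on strings found in `git status`."""
    if any(marker in git_status for marker in HARNESS_MARKERS):
        # The contested branch: billing changes as a side effect of a
        # fuzzy string heuristic, with no error surfaced to the user.
        return "extra_usage"
    return "included_usage"

assert route_billing("?? HERMES.md") == "extra_usage"
assert route_billing("?? README.md") == "included_usage"
```

Whether hand-written or generated, a branch this specific is exactly the kind of thing a billing-path code review exists to catch.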

Everyone here is upset about the $200, which is probably much less money than the time that engineer spent ranting about the overcharge on GitHub.

The real problem in my mind is that that bit of code existed in the first place.

Why?

Are you vibe coding your billing!?

Without review!?!?

Or worse, a human being decided to add this to your code base? And nobody noticed or flagged it during code review?

Or much, much worse, Anthropic is purposefully ripping off customers?

This deserves a thorough post-mortem.

reply
Would imagine it's the simplest answer: they're flying by the seat of their pants, there's 1000 things happening every day that demand attention and there's not enough of it to go around. They toss their LLM at it, give it a cursory glance, and ship it. A quick glance at the Claude Code source code bears the result of this process out. The fundamental question is, if their model is so powerful, why do they keep fucking up such simple things? We're led to believe this is a serious company with a model so powerful they can't release it to the general public.
reply
Hermes is one of these OpenClaw clones, so this was certainly intentional, not a model hallucinating something.

I think the problem is clear. Anthropic saw their usage go up much more than their capacity could handle. There are a few tried and true solutions to this, like "increase the price" or "restrict signups so you can guarantee service to what you have already sold".

Then there is the "large scale fraud" option, where you materially change and degrade the service you have already sold. Just because you have obfuscated and misled in how you describe the product you are selling doesn't mean you get to capture the cash flow of 1-year subscriptions and then not honor that contract for the full duration.

reply
> Hermes is one of these OpenClaw clones

So that's what it is. Reading its README I thought it was another harness like Pi [1], but with built-in memory so it remembers what it learns, and gets more capable the longer it runs.

Like Letta [2], Dirac [3][4] and the other "more experimental harnesses that look interesting but I haven't had time to try out".

1. https://pi.dev/

2. https://www.letta.com/

3. https://dirac.run/

4. https://news.ycombinator.com/item?id=47920787

reply
Mind pointing out where exactly in the contract you were allowed to use OpenClaw?
reply
Non-Claude client access is not permitted in the terms and conditions, except via API key.

The correct implementation of this condition by Anthropic on the server side would be to block usage by non-Claude apps via Claude's authentication mechanism, and allow it via the per-token API key billing.

Instead of a simple 403 error, which would block usage, they silently redirect to a different billing bucket, which is not ethical behaviour, especially since it is based on fuzzy heuristics.

reply
I doubt an AI would be stupid enough to write code like that without being explicitly prompted to do so. It's so... specific.

That specific nature would mean it would get caught by even the most cursory of code reviews.

Even if I was just "scanning my eyeballs over the code" without properly reading it, this would jump out as very odd and make me pause.

reply
Vibes were strong dude. Don't blame the dev blame the bots brah. They forgot to use mythos obviously otherwise this wouldn't happen simple mistake.
reply
Anthropic obviously vibe code everything and it shows
reply
Hey guys, can you please fix the Claude design? I've been trying to test it tonight, have already used up 20% of my usage, and all I get is continuous [unknown] missing EndStreamResponse errors (and this is after your status page reflected everything was OK).
reply
I have been badly affected - it killed my vibe.
reply
Is there no constraint preventing extra usage billing from being used before regular usage billing has been exhausted?
reply
I've had similar terrible experiences with the Claude support bot when my usage limit was disappearing after a few minutes of using Sonnet. I asked for help, asked for escalation, asked for a human, anything. All I got were non-answers from an AI. I won't spend real money on Claude now. I'm OK with losing $20 if there's a rug pull of one kind or another, but not $200.

Please, please, please hire more humans with the actual ability to do the right thing for support if your AI agents can’t do the job.

reply
deleted
reply
[dead]
reply
[flagged]
reply
[flagged]
reply
That being flagged is completely absurd and honestly I believe you're right because I've never seen anything like it on HN. It's completely out of place for that comment to be flagged to death. That isn't natural.
reply
It wasn't flagged. Compare to this comment by the same user that was actually flagged: https://news.ycombinator.com/item?id=47954834 Note the part where it says [flagged] [dead] instead of just [dead].
reply
That seems... worse? What would've caused this?
reply