upvote
As a large language model, their support is not allowed to issue compensation
reply
I know this is a joke, but Amazon’s bots give me compensation literally all the time when something goes wrong. It’s possible.
reply
Of course it's possible; it's just a permissions decision.
reply
Same experience. Literally yesterday it refunded me for a thermos shattered by the delivery guys.
reply
Interestingly, the starlink customer service bot has applied credits to my account before.
reply
Perhaps this is a matter of who is being referred to by 'we'.

Obviously someone can do it because it got done.

If the 'we' refers to some team handling issues, it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help."

reply
Does AI using first-person pronouns gross anyone else out? If there's one AI regulation I could get behind, it would be banning the use of computer systems to impersonate a human.
reply
I don't perceive an AI as impersonating a human if it uses first-person pronouns. Emulating is not impersonating. One is behaving similarly; the other is asserting that the similarity implies equivalence.

I have not personally encountered an AI that claimed to be human (as far as I could detect).

reply
I agree with you, but I also envy you for having never encountered an AI scam bot (where someone hacks someone's WhatsApp or other account and uses an AI to get money from them, or even runs the "hey, sorry I missed your call" scam).
reply
Maybe this is a regional thing; I don't think anyone I have encountered in real life has mentioned anything like this happening to them.
reply
Wow, these were quite common for me personally a few years ago. I still get them from time to time, but I used to get them weekly. This is in the US, where scams are pretty rampant.
reply
I have been trying to convince Claude to use "Claude" instead of first-person pronouns, and only recently have gotten it to say stuff like "Claude'll go ahead and take care of that now", but it's very inconsistent (shocking).
reply
Well, they hoped this person would walk away and forget about it, die, or something else. That's why. It's how health insurance works in the US.
reply
That's a very categorical statement from support. I get that Anthropic is going to throw out their usual support rules in this case since it has garnered so much negative attention, but I'm very curious how many other people have been over-billed and refused a refund through no fault of their own.
reply
To be fair, that looks like an LLM response.
reply
LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.
reply
Which they, of all companies, are responsible for
reply
You're not wrong.
reply
That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".
reply
It's not hard to imagine how this happens. I assume most people here have used these models extensively.

The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

The system prompt includes statements about how it doesn't have tools for managing funds.

A little bit of A and a bit of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though this isn't a trivial customer service thing to do.
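The failure mode described above can be sketched in a few lines. The prompt fragments below are illustrative guesses, not Anthropic's actual system prompt, and `build_support_prompt` is a hypothetical name for whatever the help-bot harness does:

```python
# Hypothetical sketch: two separately reasonable system-prompt fragments
# that, combined, nudge a support bot toward "we can't refund you".
# These strings are assumptions for illustration, not real prompts.

VOICE_RULE = "Speak on behalf of the company; say 'we', never 'I'."
TOOL_SCOPE = "You have no tools for issuing refunds or moving funds."

def build_support_prompt() -> str:
    """Concatenate the fragments the way a help-bot harness might."""
    return "\n".join([VOICE_RULE, TOOL_SCOPE])

# The model reads a limitation of *this bot* ("no refund tools") plus a
# voice rule ("always say 'we'"), and the natural completion restates a
# tooling gap as company policy: "We can't issue a refund."
print(build_support_prompt())
```

Neither fragment is wrong on its own; the problem only appears when the voice rule launders a per-bot limitation into a categorical statement about the company.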

reply