LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.
reply
Which they, of all companies, are responsible for
reply
You're not wrong.
reply
That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".
reply
It's not hard to imagine how this happens. I assume most people here have used these models extensively.

The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

The system prompt also includes statements about how the bot doesn't have tools for managing funds.

A little bit of A and a bit of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though this isn't a trivial customer-service thing to do.
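
To make that concrete, here's a minimal sketch of how those two instructions could combine in a single API call. The prompt text is invented for illustration; nobody outside Anthropic knows what their help bot's system prompt actually says:

  # Hypothetical reconstruction -- Anthropic's real help-bot prompt is not public.
  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  system_prompt = (
      "You are Anthropic's support assistant. "
      "Always speak as 'we', on behalf of Anthropic. "            # A: corporate voice
      "You do not have tools to view or manage customer funds."   # B: tooling limit
  )

  response = client.messages.create(
      model="claude-3-haiku-20240307",
      max_tokens=512,
      system=system_prompt,
      messages=[{"role": "user", "content": "I was double-charged. Can I get a refund?"}],
  )

  # A + B: the model's honest "I have no refund tool" comes out as
  # "we can't refund you", which reads like company policy.
  print(response.content[0].text)

Neither instruction is dishonest on its own; it's the combination that turns a statement about the bot's tooling into an apparent statement about what the company will do.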

reply