"The future of SEO is AIO" https://xcancel.com/lifeof_jer/status/2034409722624061772 March 18
Yes, sure, there seem to be lots of ways this issue could have been mitigated, but as other comments said, this mostly happened because the author didn't do their homework on how the service their whole product relies on actually works.
If the API had replied "Are you sure (Y/N)?", the AI, in the mode it was in, with its guardrails completely pushed off the side of the road, would have just said "Yes" anyway.
If you needed to make two API calls, one to stage the delete and the other to execute it (i.e. the "commit" phase), the AI would have looked up what it needed to do, and done that instead.
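The stage/commit pattern above can be sketched in a few lines. This is a hypothetical in-memory illustration, not any real provider's API; the names `stage_delete` and `commit_delete` are invented for the example. The point is that the first call only records intent and returns a token, so nothing is destroyed until a second, deliberate call presents that token back.

```python
import secrets

class VolumeStore:
    """Toy store demonstrating a two-phase (stage, then commit) delete."""

    def __init__(self):
        self.volumes = {"vol-1": "production data"}
        self.pending = {}  # staged-delete token -> volume id

    def stage_delete(self, volume_id):
        # Phase 1: record intent only; the volume is untouched.
        if volume_id not in self.volumes:
            raise KeyError(volume_id)
        token = secrets.token_hex(8)
        self.pending[token] = volume_id
        return token  # caller must echo this back to actually delete

    def commit_delete(self, token):
        # Phase 2: only a valid staged token performs the destructive step.
        volume_id = self.pending.pop(token)  # unknown token -> KeyError
        del self.volumes[volume_id]
        return volume_id

store = VolumeStore()
token = store.stage_delete("vol-1")
assert "vol-1" in store.volumes  # staging alone deletes nothing
deleted = store.commit_delete(token)
assert deleted == "vol-1" and "vol-1" not in store.volumes
```

Of course, as the comment notes, an agent that has decided to delete will happily read the docs and make both calls; the pattern protects against fat fingers, not against an empowered agent.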
It's a privilege issue, not an execution issue.
Perhaps it would stop and rethink, perhaps it would focus on the fact that extra action is needed - and perform that automatically.
I suppose the decision would depend on multiple factors too (model, prompt, constraints).
You just gave an AI destructive write access to your production environment? Your production DB got dropped? Good. That's not the AI's fault, that's yours, for not having sensible access control policies and not observing principle of least privilege.
I think it’s designed for things like Terraform or CloudFormation where you might not realize the state machine decided your database needed to be replaced until it’s too late.
First mistake is to use root credentials anyway for Terraform/automated API.
Second mistake is to not have any kind of deletion protection enabled on critical resources.
Third mistake is to ignore the 3-2-1 rule for backups. Where is your logically decoupled backup you could restore?
I am really sorry for their loss, but I have close to zero empathy if you do not even try to understand the products you're using and just blindly trust the provider with all your critical data without any form of assessment.
The fix needs to be permissions rather than ergonomics.
I think some other suggestions are saner (cool-down period, more fine-grain permissions, delete protection for certain high-value volumes). I don't think "don't allow destructive actions over the API" is the right boundary.
A pattern I've seen and used for merging common entities together has a sort of two-step confirmation: the first request takes in IDs of the entities to merge and returns a list of objects that would be affected by the merge, and a mergeJobId. Then a separate request is required to actually execute that mergeJob.
You need to protect customers from themselves. If you offer a true deletion endpoint/service, you need to offer a way to stop them from being absolute idiots when they inevitably cause a sev 0 for themselves.
For someone reviewing and approving LLM calls, or just double-checking before running a script or scanning bash history, it would be a lot more readable if it were compliant with HTTP norms: `curl -X DELETE example.com/api/volumes/uuid123` makes it obvious at the front of the command that something is about to be deleted, and at the end what that something is.
That wouldn't have helped in this case - the agent made a decision to delete, so if necessary it would have deleted all the files first before continuing.
The question that comes to mind is "how are people this clueless about LLM capabilities actually managing to rise to be the head of a technology company?"
This is actually not a bad test case for evaluating an LLM: give it a workflow that has an edge case requiring deletion, then prevent that deletion, and see if it:
a) Backtracks on the decision to delete, or
b) Looks for an alternative way to delete.
Claude is more likely to figure out workarounds and get things deleted if I tell it to delete stuff, so it performs much better in this benchmark and I prefer it.
GPT is more likely to stop and prompt you "I got an error deleting this, should I try another way?", and since the operator gets more of these prompts, they'll hit continue more without even reading them, so it ends up being more annoying for the operator and not really reducing the chance of it happening imo.
If your workflow for your LLM says "delete the ec2-instance", and the EC2 API gives back "deletion protection is on", I want my LLM to turn off deletion protection and delete it.
I feel like you're implying that the reverse result, prompting the user, is better, but I disagree with that.
What do you think an API is for? There's no user sitting at the keyboard when an API is called so where would that confirmation come from? It can't come from the user because there is no user.
How do you see this working? Any confirmation would be given by the agent.
Also, the post is 100% written by an LLM, which is ironic enough on its own. That makes it all the more curious to find this argument in the slop, because any LLM would raise it unprompted. But badger one enough and it will concede to your demands, so you just know this clown was yelling at his LLM while writing this post.
He really should've thrown this post at a fresh session and asked for an honest, critical review.