> what they thought was a narrowly scoped API token, and they very clearly state that they never would have given an AI full access if they realized it had the ability to do stuff like this with that token

It sounds like the token the author created simply wasn't scoped at all; it had full permissions. From the post:

> Tokens are not scoped by operation, by environment, or by resource at the permission level. There is no role-based access control for the Railway API — every token is effectively root. The Railway community has been asking for scoped tokens for years. It hasn't shipped.

So it wasn't "a narrowly scoped API token"; it was a full-access token, and I suspect the author had no particular reason to think it was some special, limited-purpose token. He just didn't think about what the token could do. What he's describing is his intent in creating the token (how he wanted to use it), not an actual property of the token.

The author said in an X post[0] that it was an "API token", not a "project token". API tokens allow "account level actions"[1], with a scope of either "All your resources and workspaces" or "Single workspace"[2], and no possibility of specifying granular permissions. An account token "can perform any API action you're authorized to do across all your resources and workspaces"; a workspace token "has access to all the workspace's resources".

[0] https://x.com/lifeof_jer/status/2047733995186847912

[1] https://docs.railway.com/cli#tokens

[2] https://docs.railway.com/integrations/api#choosing-a-token-t...

reply
Then you need to reread the article. The author made a key for the LLM that didn't have permissions to delete a volume. The agent then found ANOTHER key with those permissions and used that instead.
reply
You're not contradicting my comment; I was talking specifically about the key with full permissions that the LLM found (the article doesn't mention any other keys the LLM could have had, unless I missed something).

Somewhere in the files there was a key with full API permissions. The author never intended for the LLM to use that key and wasn't aware that the LLM could access it. The key had been created to manage some domains, which was unrelated to the LLM's work. The author didn't realize how dangerous the key was and is surprised that it could be used to delete a volume.

Essentially I agree with gwerbin that the situation comes down to mishandling of the key. The author makes it seem like the key was allowed to do something it shouldn't have been able to do, but it was just a full-access key; no scoping is possible for that type of key (Railway also has other, less privileged types of keys/APIs).

Btw, I partially agree with the author's criticisms: ideally these keys should be scoped, and maybe the UI should give more warnings when creating that type of key. But this situation could still happen as long as you put the wrong key in the wrong place (specifically, a place accessible to LLMs).
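One mitigation for the "wrong key in a place accessible to LLMs" failure mode is to keep powerful tokens out of the agent's reachable filesystem and environment entirely. A minimal shell sketch of the idea — nothing here is Railway-specific, and all names, paths, and tokens are illustrative placeholders:

```shell
# Placeholders; real tokens in practice.
TOKEN="example-admin-token"
STAGING_TOKEN="example-staging-token"

# Store the powerful token outside any directory the agent can read,
# not in the repo's .env.
mkdir -p "$HOME/.secrets"
printf '%s\n' "$TOKEN" > "$HOME/.secrets/admin-token"
chmod 600 "$HOME/.secrets/admin-token"

# `env -i` drops every inherited variable; only what is listed survives.
# A shell that prints what it can see stands in for the agent here.
env -i PATH="$PATH" STAGING_TOKEN="$STAGING_TOKEN" \
    sh -c 'echo "staging=$STAGING_TOKEN admin=$TOKEN"'
# prints: staging=example-staging-token admin=
```

The point is that even if the agent dumps its whole environment or greps every file in its working tree, the full-access token simply isn't there to find.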

reply
> The author made a key for the LLM that didn't have permissions to delete a volume.

No he didn’t, because this doesn’t exist. Railway does not have a token with that kind of scoping.

reply
Anecdote: As a hapless junior engineer I once did something extremely similar.

I ran a declarative coding tool against a resource with an operation I thought would be a PATCH but turned out to be a PUT, and it resulted in a very similar outcome to the one in this post.
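For anyone who hasn't been bitten by this: PATCH merges the supplied fields into the existing resource, while PUT replaces the resource wholesale, silently dropping anything you didn't re-state. A plain-dict sketch (no real HTTP or any particular API involved):

```python
# Existing resource state on the server (illustrative fields).
existing = {"name": "db", "volume": "vol-123", "replicas": 2}

# The only field the request body actually specifies.
update = {"replicas": 3}

# PATCH-like semantics: merge the update into the existing resource.
patched = {**existing, **update}

# PUT-like semantics: the request body *becomes* the resource.
put = dict(update)

print(patched)  # "volume" survives the merge
print(put)      # "volume" is gone, along with everything not re-stated
```

With a declarative tool, a PUT against a resource whose attached volume you didn't re-declare can detach or destroy that volume, which is exactly the kind of outcome described above.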

reply
Yeah, that's the typical junior engineer scenario, right? Run a command that wasn't meant to be destructive and accidentally destroy something. This is different: the AI agent went on some kind of wild goose chase of fixing problems, and eventually the most probable token sequence ended up at "delete this database". This is more like if your senior engineer with extreme ADHD ate a bunch of acid before sitting down to work.
reply
creating isolated staging & prod environments -- good idea

allowing an AI agent to get hold of creds that let it execute destructive changes against production -- not a great idea

allowing prod database changes from the machine where the AI agent is running at all -- not a great idea

choosing a backup approach that fails completely if there's an accidental volume wipe API call -- not a great idea

choosing to outsource key dependencies to a vendor, where you want a recovery SLA, without negotiating & paying for one -- you get what you get, and you don't get upset
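On the backup point in the list above: the property you want is that at least one copy survives even if the platform's volume is wiped through the API, i.e. a copy lives somewhere the compromised token can't reach. A toy sketch of that idea, with a stub file standing in for a real `pg_dump` output and a local directory standing in for genuinely off-platform storage (all paths illustrative):

```shell
DUMP=/tmp/db-backup.sql
OFFSITE=/tmp/offsite   # stands in for S3 / another cloud account / another vendor
mkdir -p "$OFFSITE"

# In practice this would be `pg_dump ...`; a stub keeps the sketch runnable.
echo "-- dump contents --" > "$DUMP"

# Copy off-platform, then verify the copy byte-for-byte before trusting it.
cp "$DUMP" "$OFFSITE/db-backup.sql"
cmp -s "$DUMP" "$OFFSITE/db-backup.sql" && echo "backup verified"
# prints: backup verified
```

A backup that lives on the same volume, same account, or behind the same credentials as the data it protects fails exactly when you need it.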

reply
> creating isolated staging & prod environments -- good idea

Would have been a good idea, but he didn't do this either. The volume in question was used in both staging and production apparently, per the "confession". The agent was deleting the volume because it was used for staging, not realizing it was also used for prod.

reply
> choosing to outsource key dependencies to a vendor

This is the entire thing. The author is basically slinging blame at a bunch of different vendors, and while some of the criticisms might be valid product feedback, it absolutely does not achieve what they're trying to do, which is to absolve themselves of responsibility.

This is a largely unregulated industry, which means that when you stand up a service and sell it to customers, you are responsible for the outcome. Not anyone else. It doesn't matter if one of your vendors does something unexpected. You don't get to hide behind that. It was your one and only job to not be taken by surprise. Letting the hipster ipsum parrot loose with API credentials is a choice. Trusting vendors without verifying their claims is a choice. Failing to read and understand documentation is a choice.

reply