This is the right direction. When you run AI coding agents in production, the scariest moment is when an agent needs API access to do its job but you can't trust what it'll do with those credentials. We ended up with a simpler version of this: each agent runs in an isolated git worktree with only the env vars it specifically needs, and network access restricted to localhost plus our API. No MITM proxy, just a stripped-down environment.
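A minimal sketch of that stripped-down environment in Python, assuming you launch each agent as a subprocess. The allowlist and the agent command here are made up for illustration; the point is that the child inherits only an explicit allowlist plus the vars it specifically needs, never the parent's full environment.

```python
import os
import subprocess

# Baseline vars every agent gets; everything else is dropped.
ALLOWED_VARS = ["PATH", "HOME", "LANG"]

def run_agent(cmd, extra_env=None):
    # Build the child's environment from scratch instead of inheriting it.
    env = {k: os.environ[k] for k in ALLOWED_VARS if k in os.environ}
    if extra_env:
        env.update(extra_env)  # only what this particular agent needs
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Hypothetical usage: the agent sees MY_API_URL but none of the
# parent process's secrets.
result = run_agent(["env"], extra_env={"MY_API_URL": "http://localhost:8080"})
```

Network restriction would be layered on separately (firewall rules, namespaces, or a proxy); this only handles the env-var side.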

The deny-by-default model is correct. The question is how granular you need to be. For AI agents, I'd argue coarse-grained is better — network yes/no, filesystem scoped to one directory, no credential access. Fine-grained permissions add complexity the agent will just work around anyway.

reply
Thanks, and agreed! Zerobox uses the Deno sandboxing policy and the same pattern for credential injection (placeholders as env vars, replaced at network-call time).

Real secrets are never readable by any process inside the sandbox:

```
zerobox -- echo $OPENAI_API_KEY
ZEROBOX_SECRET_a1b2c3d4e5...
```
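The trusted side of that swap can be sketched in a few lines of Python. This is illustrative only, not Zerobox's actual API: the function names and the placeholder format are assumptions. The sandbox only ever holds the opaque placeholder; a trusted process outside it (e.g. a MITM proxy) substitutes the real secret into outbound requests.

```python
import secrets

# placeholder -> real secret, held only outside the sandbox
_vault = {}

def make_placeholder(real_secret):
    # The placeholder is what gets exported into the sandbox env.
    ph = "ZEROBOX_SECRET_" + secrets.token_hex(8)
    _vault[ph] = real_secret
    return ph

def inject(headers):
    # Run by the proxy on each outbound request: rewrite any header
    # value containing a known placeholder to use the real credential.
    out = {}
    for k, v in headers.items():
        for ph, real in _vault.items():
            v = v.replace(ph, real)
        out[k] = v
    return out

ph = make_placeholder("sk-real-key")
print(inject({"Authorization": f"Bearer {ph}"}))
# -> {'Authorization': 'Bearer sk-real-key'}
```

The nice property is that even a fully compromised agent can only exfiltrate placeholders, which are useless without the proxy.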

reply
Do you know if there's a widely shared name for this pattern? I've been collecting examples of it recently - it's a really good idea - but I'm not sure if there's good terminology. "Credential injection" is one option I've seen floating around.
reply
simonw, I have been seeing "credential injection" and "credential tokenizing" (a la tokenizer: https://github.com/superfly/tokenizer). I'm also seeing credential "surrogates" mentioned.

I am currently working on a mitm proxy for use with devcontainers to try to implement this pattern, but I'm certainly not the only one!

reply
Thanks, I think I'll go with "credential injection" since the word "tokenization" has other meanings that I find confusing here.
reply
Not sure. I took this idea from the Deno sandboxing docs. They do the exact same thing, just with a different sandboxing mechanism (I think Deno has its own way of sandboxing subprocesses).
reply