It's one thing to sandbox, maybe give the bot a temporary, limited $100 card or account to go perform a specific task, but there's no coherent mind underlying these agents.
Depending on how the chain of thought / reasoning goes, or what text they get exposed to on the internet, they could tap into spy novels, hacker fanfic, erotic fiction, or some weird Reddit rabbit hole and go completely off the rails in ways you'll never be able to guard against, audit, or account for.
Claw bots seem to be a weird sort of alternate reality RPG more than a useful tool, so far. If you limit it to verifiable tasks, it might be safer, but I keep seeing people rave about "leaving it on overnight and waking up to a finished project" and so on. Well sure, but it could also hack your home network, delete your family pictures folder, log into your bank account and wire all your money to shrimp charities.
Might be wise to wait on safer iterations of these products, I think.
So basically crypto DeFi/Web3/Metaverse delusion redux
So yeah, a whole lot of people will play with powerful technology that they have no business playing with and will get hurt, but a lot of amazing things will get done too. I think the main difference between the crypto delusion and this is that AI is actually useful; it's just legitimately dangerous in ways that crypto couldn't be. The worst risks of crypto were like gambling: getting rubber-hosed by thugs or losing your savings. AI could easily land people in jail if things go off the rails. "Gee, I see this other network, I need to hack into it to expand my reach. Let me just load Kali Linux and..." and it's off to the races.
People claim you can use Claw agents more safely while still getting some of the benefits by essentially proxying your services. For example, with Gmail, people are creating a new Google account, forwarding email to it via a rule, and sharing their calendar with it via Google Family Sharing. This lets the Claw agent read email and access the calendar, but even if you ask it to send an email it can only send as the proxy account, and it can only create calendar appointments and add you as an attendee, rather than destroying or altering appointments you've made.
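The proxy pattern above can be sketched in a few lines: the agent only ever holds the proxy account's session, and the surface it sees simply has no destructive operations. Everything here is a hypothetical illustration (the `ProxyCalendar` class and account names are made up, not any real Google API):

```python
# Sketch of the proxy-account pattern: the agent acts only as the proxy,
# and the interface it gets exposes no edit/delete on your real data.
# All names are hypothetical; this is not a real Google API.

class ProxyCalendar:
    def __init__(self, proxy_account: str, owner_email: str):
        self.proxy_account = proxy_account  # throwaway account the agent acts as
        self.owner_email = owner_email      # your real account, only ever invited
        self.events = []                    # events the proxy itself created

    def create_event(self, title: str, when: str) -> dict:
        # The only write the agent gets: create an event owned by the
        # proxy and invite the real account as an attendee.
        event = {
            "title": title,
            "when": when,
            "organizer": self.proxy_account,
            "attendees": [self.owner_email],
        }
        self.events.append(event)
        return event

    # Deliberately no update_event / delete_event: the proxy cannot
    # touch appointments owned by the real account.

cal = ProxyCalendar("agent-proxy@example.com", "me@example.com")
ev = cal.create_event("Dentist", "2025-03-01T10:00")
print(ev["organizer"])  # agent-proxy@example.com
```

The point is that safety comes from the account boundary, not from trusting the agent to behave.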
Is the juice worth the squeeze after all that? That's where I struggle. I think insecure/dangerous Claw-agents could be useful but cannot be made safe (for the logical fallacy you pointed out), and secure Claw-agents are only barely useful. Which feels like the whole idea gets squished.
Your Gmail account vs my Gmail account. Your macOS account vs my macOS account.
Yes, I can spam you from my Gmail. Yes, I can use sudo on my Mac and damage your account. But the impact is by default limited.
The answer is to just treat assistants as a different user profile, use the same sharing mechanisms already developed (calendar sharing, etc), and call it a day.
You are indeed missing a TON. A lot of Open Claw users don't give it everything. We give it specific access to a group of things it needs to do the things we want. If I want an agent to sit there 24/7 maximizing uptime of my service, I give it access to certain data, the GitHub repo with PR privileges, and maybe even permissions to restart the service. All of this has to be very thoughtful and intentional. The idea that the only "useful" way to use Open Claw is to give it everything is a straw man.
OP's question was more around sandboxes, though. To which I'd say: the point is to limit unintended actions on the host machine.
> NemoClaw installs the NVIDIA OpenShell runtime and Nemotron models, then uses a versioned blueprint to create a sandboxed environment where every network request, file access, and inference call is governed by declarative policy. The nemoclaw CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.
I think this means you get a true proxy layer with a network gateway that lets you stop in-flight requests with policies you define, so it's not just their hardware but the combination of the hardware, the OpenShell gateway, and the network policies.
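To make the "every network request governed by declarative policy" idea concrete, here's a toy version: a declarative allowlist that a gateway checks before a request is allowed out of the sandbox. The policy schema is invented for illustration; it is not NemoClaw's actual blueprint format:

```python
from urllib.parse import urlparse

# Toy declarative network policy, checked by a gateway before any request
# leaves the sandbox. The schema is made up for illustration only.
POLICY = {
    "allow_hosts": {"api.github.com", "pypi.org"},
    "allow_methods": {"GET", "POST"},
}

def gateway_check(method: str, url: str, policy=POLICY) -> bool:
    """Return True only if the request matches the declared policy."""
    host = urlparse(url).hostname or ""
    return method in policy["allow_methods"] and host in policy["allow_hosts"]

print(gateway_check("GET", "https://api.github.com/repos"))  # True
print(gateway_check("POST", "https://evil.example.com/x"))   # False
```

The real thing presumably sits at the network layer rather than in-process, but the shape is the same: policy is data you can review, not behavior you have to trust.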
I also think the reason they're doing this is to try to build a moat around these one-click deployments and leverage their GPU-for-rent business, instead of having you go buy a Mac mini and learn "scary" stuff (remember, the user market here is pretty strange, lol).
It's not something like Mesa. It's open source in the same way Chromium or Android is open source: a single company is the major contributor and decides the architecture and direction the whole ecosystem will go.
What are the odds that Intel would ever use any of this open-source Nemo stuff, or vice versa? If they did, it would be a complete rewrite that favors their own hardware ecosystem and reverses the lock-in effect. When you write code that integrates with it, you're writing an interface to one company's hardware. It's not a common interface like Vulkan. I call it the CUDA effect.
> Credentials never leak into the sandbox filesystem; they are injected as environment variables at runtime.
If anyone from the team is reading: you should copy the surrogate-credentials approach from here to secure the credentials further: https://github.com/airutorg/airut/blob/main/doc/network-sand...
Alternatively, where it needs an API key, it should be one bound to the endpoint using it. E.g., a ticket-granting ticket is used to create a bound ticket.
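A minimal sketch of the bound-credential idea, assuming an HMAC-based derivation (the scheme is illustrative, not any particular product's API): instead of the master key, the agent gets a key derived for one endpoint, and the server recomputes the derivation to verify. Loosely analogous to a ticket-granting ticket issuing a service ticket:

```python
import hmac
import hashlib

# Endpoint-bound credentials: the master key never enters the sandbox;
# the agent only receives a key derived for one specific endpoint.
MASTER_KEY = b"master-secret"  # held by the credential service only

def bind_key(master: bytes, endpoint: str) -> str:
    """Derive a key that is only valid for one endpoint."""
    return hmac.new(master, endpoint.encode(), hashlib.sha256).hexdigest()

def server_accepts(presented: str, endpoint: str) -> bool:
    """Server side: re-derive and compare in constant time."""
    expected = bind_key(MASTER_KEY, endpoint)
    return hmac.compare_digest(presented, expected)

agent_key = bind_key(MASTER_KEY, "POST /tickets")    # all the agent ever sees
print(server_accepts(agent_key, "POST /tickets"))    # True
print(server_accepts(agent_key, "DELETE /tickets"))  # False: key doesn't transfer
```

If the key leaks from the sandbox, the damage is capped at the one operation it was bound to.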
A copy-on-write filesystem would be an interesting way to sandbox writes, but there's difficulty in checking the diff.
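A low-tech approximation of the idea (hedged: this simulates copy-on-write with a full shadow copy, where a real setup would use overlayfs or filesystem snapshots): let the agent write only to the copy, then diff the copy against the original to review changes before merging anything back.

```python
import shutil
import filecmp
import tempfile
from pathlib import Path

# Simulate the CoW idea with a shadow copy: the agent writes only to the
# copy, and we diff it against the original to review its changes.

def make_shadow(original: Path) -> Path:
    """Create a writable copy of the original tree for the agent."""
    shadow = Path(tempfile.mkdtemp()) / "shadow"
    shutil.copytree(original, shadow)
    return shadow

def review_changes(original: Path, shadow: Path):
    """Return (modified, added) top-level file names between the trees."""
    cmp = filecmp.dircmp(original, shadow)
    return sorted(cmp.diff_files), sorted(cmp.right_only)

# Demo: set up a tiny "documents" folder, let the "agent" touch the shadow.
docs = Path(tempfile.mkdtemp()) / "docs"
docs.mkdir()
(docs / "notes.txt").write_text("original notes\n")

shadow = make_shadow(docs)
(shadow / "notes.txt").write_text("agent rewrote this\n")  # modification
(shadow / "report.md").write_text("new file\n")            # addition

modified, added = review_changes(docs, shadow)
print(modified, added)  # ['notes.txt'] ['report.md']
```

The hard part the comment alludes to isn't producing the diff, it's deciding which of the agent's changes are wanted; that review step is inherently manual or policy-driven.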
Then again, I was wary of OpenClaw's unfettered access and made my own alternative (https://github.com/skorokithakis/stavrobot) with a focus on "all the access it needs, and no more".
Maybe you don't want the dog to shit all over the place after eating said documents, so you put it in a crate.
You put the dog in a crate with a CoW copy of your documents.