I saw on The Verve that they partnered with the company that repeatedly disclosed security vulnerabilities to try to make skills more secure though which is interesting: https://openclaw.ai/blog/virustotal-partnership
I’m guessing most of that malware was really obvious, people just weren’t looking, so it’s probably found a lot. But I also suspect it’s essentially impossible to actually reliably find malware in LLM skills by using an LLM.
A Reddit post with white invisible text can hijack your agent to do what an attacker wants. Even a decade or 2 back, SQL injection attacks used to require a lot of proficiency on the attacker and prevention strategies from a backend engineer. Compare that with the weak security of so called AI agents that can be hijacked with random white text on an email or pdf or reddit comment
It cannot. This is the security equivalent of telling it to not make mistakes.
> Restrict downstream tool usage and permissions for each agentic use case
Reasonable, but you have to actually do this and not screw it up.
> Harden the system according to state of the art security
"Draw the rest of the owl"
You're better off treating the system as fundamentally unsecurable, because it is. The only real solution is to never give it untrusted data or access to anything you care about. Which yes, makes it pretty useless.
I have OPA and set policies on each tool I provide at the gateway level. It makes this stuff way easier.
It does not. Security theater like that only makes you feel safer and therefore complacent.
As the old saying goes, "Don't worry, men! They can't possibly hit us from this dist--"
If you wanna yolo, it's fine. Accept that it's insecure and unsecurable and yolo from there.
I never want to be one wayward email away from an AI tool dumping my company's entire slack history into a public github issue.