I've not seen a single example of an LLM that can reliably follow its system prompt against all forms of potential trickery in the non-system prompt.
Solve that and you've pretty much solved prompt injection!
I agree, and I agree that when using models there should always be the assumption that the model can use its tools in arbitrary ways.
> Solve that and you've pretty much solved prompt injection!
But do you think this can be solved at all? For an attacker who can send arbitrary inputs to a model, getting the model to produce the desired output (e.g. a malicious tool call) is a matter of finding the correct input.
edit: how about limiting the rate at which inputs can be tried and/or using LLM-as-a-judge to assess legitimacy of important tool calls? Also, you can probably harden the model by finetuning to reject malicious prompts; model developers probably already do that.
I'm not a fan of the many attempted solutions that try to detect malicious prompts using LLMs or further models: they feel doomed to failure to me, because hardening the model is not sufficient in the face of adversarial attackers who will keep on trying until they find an attack that works.
The best proper solution I've seen so far is still the CaMeL paper from DeepMind: https://simonwillison.net/2025/Apr/11/camel/
In the end all that stuff just becomes context
Read some more of you want https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
See https://cookbook.openai.com/articles/openai-harmony
There is no guarantee that will work 100% of the time, but effectively there is a distinction, and I'm sure model developers will keep improving that.
If you get to 99% that's still a security hole, because an adversarial attacker's entire job is to keep on working at it until they find the 1% attack that slips through.
Imagine if SQL injection of XSS protection failed for 1% or cases.