Humans have a problem distinguishing "John from Microsoft" from somebody just claiming to be John from Microsoft. The reason why scamming humans is (relatively) hard is that each human is different. Discovering the perfect tactic to scam one human doesn't necessarily scale across all humans.
LLMs are the opposite; my Chat GPT is (almost) the same as your Chat GPT. It's the same model with the same system message, it's just the contexts that differ. This makes LLM jailbreaks a lot more scalable, and hence a lot more worthwhile to discover.
LLMs are also a lot more static. With people, we have the phenomenon of "banner blindness", which LLMs don't really experience.
The foundation of LLMs is Attention.
So people can focus their attention to parts of content, specifically parts they find irrelevant or adversarial (like ads). LLMs on the other hand pay attention to everything or if they focus on something, it is hard to steer them away from irrelevant or adversarial parts.
But even if you tag inputs however your this is good, you can't force an LLM to it treat input type A as input type B, all you can do is try to weight against it! LLMs have no rules, only weights. Pre and post filters cam try to help, but they can't directly control the LLM text generation, they can only analyze and most inputs/output using their own heuristics.
There is a deterministic architectural boundary between data and control in Claude code, even if there isn’t in Claude.
This is literally what "prompt injection" is. The sooner people understand this, the sooner they'll stop wasting time trying to fix a "bug" that's actually the flip side of the very reason they're using LLMs in the first place.
And being coerced or convinced to bypass rules is exactly what prompt injection is, and very much not uniquely human any more.
Prompt injection is just exploiting the lack of separation, it's not 'coercion' or 'convincing'. Though you could argue that things like jailbreaking are closer to coercion, I'm not convinced that a statistical token predictor can be coerced to do anything.
Yes, that is exactly the point.
> Unless you were both faceblind and bad at recognizing voices, the same attack wouldn't work in-person, you'd know the attacker wasn't your boss.
Irrelevant, as other attacks works then. E.g. it is never a given that your bosses instructions are consistent with the terms of your employment, for example.
> Prompt injection is just exploiting the lack of separation, it's not 'coercion' or 'convincing'. Though you could argue that things like jailbreaking are closer to coercion, I'm not convinced that a statistical token predictor can be coerced to do anything.
It is very much "convincing", yes. The ability to convince an LLM is what creates the effective lack of separation. Without that, just using "magic" values and a system prompt telling it to ignore everything inside would create separation. But because text anywhere in context can convince the LLM to disregard previous rules, there is no separation.
https://www.barclayscorporate.com/insights/fraud-protection/...
That's an attack because trusted and untrusted input goes through the same human brain input pathways, which can't always tell them apart.
>>> The principal security problem of LLMs is that there is no architectural boundary between data and control paths.
>> Exactly like human input to output.
> no nothing like that
but actually yes, exactly like that.
We've chosen to travel that road a long time ago, because the price of admission seemed worth it.
With databases there exists a clear boundary, the query planner, which accepts well defined input: the SQL-grammar that separates data (fields, literals) from control (keywords).
There is no such boundary within an LLM.
There might even be, since LLMs seem to form adhoc-programs, but we have no way of proving or seeing it.