upvote
Anthropic talked about prompt injection a bunch in the docs for their web fetch tool feature they released today: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...

My notes: https://simonwillison.net/2025/Sep/10/claude-web-fetch-tool/

reply
Thanks Simon. FWIW I don’t think you’re spamming.
reply
If developers read the docs they wouldn't need LLMs (:
reply
This is spam. Remove the self promotion and it's an ok comment.

It wouldn't be so bad if you weren't self promoting on this site all day every day like it's your full time job, but self promoting on a message board full time is spam.

reply
Unsurprisingly I entirely disagree with you.

One of the reasons I publish content on my own site is so that, when it is relevant, I can link back to it rather than saying the same thing over and over again in different places.

In this particular case someone said "I see no mention of prompt injection from Anthropic or OpenAI in their announcements" and it just so happened I'd written several paragraphs about exactly that a few hours ago!

reply
Simon’s content is not spam. Spam’s primary purpose is commercial conversion rather than communicating information. Your goal seems to be discourage people from writing about, and sharing, their thoughts about technical subjects.

To whatever extent you were to succeed, the rest of us would be worse for it. We need more Simons.

reply
I'm a broken record about this but feel like the relatively simple context models (at least of the contexts that are exposed to users) in the mainstream agents is a big part of the problem. There's nothing fundamental to an LLM agent that requires tools to infect the same context.
reply
The fact that the words "structured" or "constrained" generation continue not to be uttered as the beginning of how you mitigate or solve this shows just how few people actually build AI agents.
reply
Best you can do is constrain responses to follow a schema, but if that schema has any free text you can still poison the context, surely? Like if I instruct an agent to read an email and take an appropriate action, and the email has a prompt injection that tells it to take a bad action instead of a good action, I am not sure how structured generation helps mitigate the issue at all.
reply
Structured/constrained generation doesn't protect against outside prompt injection, or protect against the prompt injection causing incorrect use of any facility the system is empowered to use.

It can narrow the attack surface for a prompt injection against one stage of an agentic system producing a prompt injection by that stage against another stage of the system, but it doesn’t protect against a prompt injection producing a wrong-but-valid output from the stage where it is directly encountered, producing a cascade of undesired behavior in the system.

reply