upvote
No such mitigation exists for LLMs because they do not and (as far as anybody knows) cannot distinguish input from data. It's all one big blob
reply
There’s a known fix for SQL injection and no such known fix for prompt injection
reply
But you can't, can you? Everything just goes into the context...
reply
deleted
reply