Hacker News
new
past
comments
ask
show
jobs
points
by
philipallstar
6 hours ago
|
comments
by
WickyNilliams
4 hours ago
|
next
[-]
No such mitigation exists for LLMs because they do not and (as far as anybody knows) cannot distinguish input from data. It's all one big blob
reply
by
arjvik
6 hours ago
|
prev
|
next
[-]
There’s a known fix for SQL injection and no such known fix for prompt injection
reply
by
rawling
6 hours ago
|
prev
|
[-]
But you can't, can you? Everything just goes into the context...
reply
by
6 hours ago
|
parent
|
[-]
deleted
reply