My personal beef in this particular instance is that we've seemingly decided to throw decades of advice in the form of "don't allow untrusted input to be executable" out the window. Like, say, having an LLM read github issues that other people can write. It's not like prompt injections and LLM jailbreaks are a new phenomenon. We've known about those problems about as long as we've known about LLMs themselves.
S- Security
E- Exploitable
X- Exfiltration
Y- Your base belong to us.