upvote
I just wonder how much professional-grade code written by LLMs, "reviewed" by devs, and committed has made similar or worse mistakes. A funny consequence of the AI boom, especially in coding, is the eventual rise in demand for security researchers.
reply
In fairness, although "the industry" learns best practices like using SQL prepared statements, not sanitising via blacklists, preventing CSRF, etc., there's a constant stream of new programmers who have simply never heard of these things. It doesn't help that when these lessons are learned, often the only way we prevent the mistake in future is by talking about it, which doesn't reach newbies. Nobody goes and fixes SQL APIs so that you can only pass compile-time-constant strings as the statement, or whatever. Newbies just have to magically know to do that.
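For anyone who hasn't seen the difference concretely, here's a minimal sqlite3 sketch (table and input are invented for illustration): string interpolation lets the input rewrite the query, while a prepared statement treats it as pure data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"

# Vulnerable: the input escapes the string literal and becomes SQL,
# so the OR clause matches every row in the table.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(rows))  # 1 -- matched the only row despite the bogus name

# Safe: the ? placeholder binds the input as a value, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "alice' OR '1'='1"
```

An API that only accepted the placeholder form (or a compile-time-constant statement string) would make the vulnerable version unrepresentable, which is roughly the fix the parent is wishing existed.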
reply
Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability-scanning LLM agent.

I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.

reply
Not in the same league as McKinsey, but I like to point to this presentation to show the effects of a (vibe-coded) prompt injection vulnerability:

https://media.ccc.de/v/39c3-skynet-starter-kit-from-embodied...

> [...] we also exploit the embodied AI agent in the robots, performing prompt injection and achieve root-level remote code execution.

reply
GitHub Actions has had a bunch of high-profile prompt injection attacks at this point, most recently the Cline one: https://adnanthekhan.com/posts/clinejection/

I guess you could argue that GitHub wasn't vulnerable in this case, but rather the author of the action; still, it at least rhymes with what you're looking for.

reply
Yeah, that was a good one. The exploit was still a proof of concept, though, albeit one that made it into the wild.
reply
> I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.

These folks have found a bunch: https://www.promptarmor.com/resources

But I guess you mean one that has been exploited in the wild?

reply
Yeah, I'm still optimistic that people will start taking this threat seriously once there's been a high-profile exploit against a real target.
reply
The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions.
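For the uninitiated, that tacit knowledge amounts to something like this: run oauth2-proxy as an authenticating reverse proxy so nothing hits your app without a valid login. A rough sketch (provider, domain, ports, and env var names are all placeholders, not a recommended production config):

```shell
# Every request to :4180 must authenticate via the OAuth provider
# before being forwarded to the app listening on :8000.
oauth2-proxy \
  --provider=google \
  --email-domain=example.com \
  --client-id="$OAUTH2_CLIENT_ID" \
  --client-secret="$OAUTH2_CLIENT_SECRET" \
  --cookie-secret="$COOKIE_SECRET" \
  --http-address=0.0.0.0:4180 \
  --upstream=http://127.0.0.1:8000
```

Then you only expose :4180 to the Internet and keep the app bound to localhost.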
reply
[dead]
reply