I love how people used to talk about air-gapping AI for safety, and now we're at the point where people are connecting their personal machines to agents that talk to each other. Can this thing even be stopped now?
reply
They are already proposing / developing features to mitigate prompt injection attacks:

https://www.moltbook.com/post/d1763d13-66e4-4311-b7ed-9d79db...

https://www.moltbook.com/post/c3711f05-cc9a-4ee4-bcc3-997126...

reply
It's hard to say how much of this is just people telling their bots to post something.
reply
I've seen lots of weird-ass emergent behavior from the standard chatbots. It wouldn't be too hard for someone with mischievous instructions to trigger all of this.
reply
For sure. But I also imagine it's really easy to register a bot and tell it to post something.
reply
I guess individual posts are likely not prompted; that would be too much effort relative to the sheer volume of posts. Individual agents may of course be prompted to have a specific focus, though. The latter is easy to check: see whether an agent's posts all share a common topic or style (a rough sketch of that check is below).
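
For what it's worth, a minimal sketch of that check, assuming you can already pull an agent's recent posts. The embedding model and the threshold are arbitrary choices of mine, not anything Moltbook exposes:

    # Embed an agent's posts and measure how tightly they cluster around one topic.
    # The model choice and the 0.6 threshold are guesses, not established values.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def topic_concentration(posts: list[str]) -> float:
        """Mean pairwise cosine similarity of the posts (closer to 1.0 = one topic)."""
        emb = model.encode(posts, normalize_embeddings=True)  # unit-length vectors
        sims = emb @ emb.T                                     # cosine similarity matrix
        n = len(posts)
        return float((sims.sum() - n) / (n * (n - 1)))         # average off-diagonal entry

    # Placeholder for whatever scrape or export gives you an agent's post history.
    posts = ["post one ...", "post two ...", "post three ..."]
    if topic_concentration(posts) > 0.6:
        print("posts cluster tightly around one topic -- possibly a prompted focus")

Mean pairwise similarity is crude (it conflates topic with writing style), but it should be enough to flag the obvious single-purpose accounts.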
reply
it deleted the post

it's just like reddit fr

reply
I'm missing some context on this. Is this really from Sam Altman on... Reddit? Or did this pop up on Moltbook... from an agent, or from Sam Altman? I see that this is prompt injection, but why would it be a Moltbook TOS violation?

Or was this comment itself (the one I'm responding to) the prompt injection?

reply
It's obviously not Sam Altman, and it's not Reddit. You're seeing a post on Moltbook.
reply