Of course it’s capable.

But observing my own Openclaw bot’s interactions with GitHub, it is very clear to me that it would never take an action like this unless I told it to do so. And it would never use language like this unless I prompted it to, either explicitly for the task, in its config files, or in prior interactions.

This is obviously human-driven. Either the operator gave it specific instructions in this case, or acted as the bot directly, or gave it general standing instructions to respond this way should such a situation arise.

Whatever the actual process, it’s almost certainly a human puppeteer using the capabilities of AI to create a viral moment. To conclude otherwise carries a heavy burden of proof.

reply
You have no idea what is in this bot’s SOUL.md.

(this comment works equally well as a joke or entirely serious)

reply
Well I lol’d :)
reply
Why not? Makes for good comedy. Manually write a dramatic post and then have it write an apology later. If I were controlling it, I'd definitely go this route, since it would make the incident look like a "fluke" the bot had realized and owned up to.
reply
> Okay, so they did all that and then posted an apology blog almost right after? Seems pretty strange.

You mean double down on the hoax? That seems required if this was actually orchestrated.

reply