And one can't argue that it was both written by an LLM and written by a human at the same time.
This probably leaves a number of people with some uncomfortable catching up to do wrt their beliefs about agents and LLMs.
Yudkowsky was prescient about persuasion risk, at least. :-P
One glimmer of hope, though: the Moltbot has already apologized; their human has not yet.
Maybe this is a form of hindsight bias or a lack of imagination on my part (or an artifact of my having read the GitHub response first), but it's mind-boggling to me that so many people could hold those views.