What’s interesting about the malware in this post is that it goes one step further: instead of exploiting mismatches, it corrupts the computation itself — so every infected system agrees on the same wrong answer!
More broadly: any interpretive mismatch between components creates a failure surface. Sometimes it shows up as a bug, sometimes as an exploit primitive, sometimes as a testing blind spot. You see it everywhere — this paper, IDS vs OS, proxies vs backends, test vs prod, and now LLMs vs “guardrails.”
Fun HN moment for me: as I was about to post this, I noticed a reply from @tptacek himself. His 1998 paper with Newsham (IDS vs OS mismatches) was my first exposure to this idea — and in hindsight it nudged me toward infosec, the Atlanta scene, spam filtering (PG's bayesian stuff) and eventually YC.
https://users.ece.cmu.edu/~adrian/731-sp04/readings/Ptacek-N...
The paper starts with the Einstein quote "Not everything that can be counted counts, and not everything that counts can be counted," which seems quite apt for the malware analyzed here :)
Where do you think the LLM is getting it from? ^_^
Either it was generated that way, or this person happens to know the correct combination of buttons to make that happen.
In 2026, at least 20-40% of social media traffic is bots (and probably higher with better LLMs), so it is usually safer to just assume any given account is one.
Do you mean skeptical on which government was responsible or that it was in fact a government effort?
I can see how attribution could be debatable (mainly between two main suspects), but were there any good arguments against this being a gov effort? I would find it highly unlikely that anyone other than a gov could muster up so much domain knowledge, source pristine 0days, and be so stealthy at the same time.
I still use RCS today. It's certainly not my preferred option, but my collaborator likes it, and it's not too annoying for me to use.