People keep forgetting (or worse, still refuse to believe) that LLMs can "read between the lines" and infer intent with good accuracy - because that's exactly what they're trained to do[0].
Also there's prior art for time-displaced HN, and it's universally been satire.
--
[0] - The training objective for LLM output is basically "feels right, makes sense in context to humans" - in the fully general meaning of that statement.