It referred to me by my login name on the AI site rather than the name it would have used if it had actually found my website, so I think it was inference rather than an actual identification; either way, the login name in context had clearly corrupted the search enough that it was no longer a valid test.
Which does make me wonder about the original article: if the AI has any clue in context that the user is "Kelsey Piper" (a memory of their name, a username like kpiper or kelseyp, etc.), that will radically tip the balance toward the AI guessing that way, just by the nature of LLMs. That is to say, it greatly increases the odds of that guess even when it's wrong.
Even if that is the case, though, the general identifiability of writing still stands; it has been demonstrated for years with techniques far less powerful than a frontier LLM.
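For what it's worth, those older techniques can be surprisingly simple. A toy version of classic stylometry (in the spirit of Burrows' Delta, heavily simplified) just compares function-word frequencies between an unknown text and known samples; everything below, including the word list and texts, is illustrative:

```python
# Toy stylometry: attribute a text to the candidate author whose
# function-word frequency profile is closest by cosine similarity.
# A crude sketch of real methods (e.g. Burrows' Delta), not one of them.
from collections import Counter
import math

# A tiny illustrative function-word list; real systems use hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "as"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unknown, candidates):
    """candidates: {author_name: known_text}. Returns the closest author."""
    target = profile(unknown)
    return max(candidates, key=lambda a: cosine(target, profile(candidates[a])))
```

Even this crude version separates authors with strongly different habits; the point is that no frontier model is needed for a usable signal on enough text.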
It's a lossy representation
There might be ten million people who have quoted Harry Potter at some point in their blogs or forum posts. There are only so many words in the books.
https://arstechnica.com/features/2025/06/study-metas-llama-3...
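Memorization claims like the one in that study are usually quantified with some form of verbatim n-gram overlap: what fraction of long word sequences in a model's output also appear word-for-word in the source? A minimal sketch of that idea (the texts and the n=8 threshold are illustrative, not the study's methodology):

```python
# Toy verbatim-overlap check: high overlap of long n-grams between an
# output and a source text suggests word-for-word reproduction.
# Illustrative only; memorization studies use more careful protocols.
def ngrams(text, n):
    """Set of word n-grams in the text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output, source, n=8):
    """Fraction of the output's n-grams that appear verbatim in the source."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source, n)) / len(out)
```

On a verbatim copy this returns 1.0, and on unrelated text 0.0; quoted fragments land in between, which is why widely quoted passages muddy the "did the model memorize the book?" question.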
If the original essay was stuffed within the prompt window, the result will be word-accurate. You wouldn't see that otherwise, unless this is a model trained specifically on Mickens's essay (which Claude is not).
I swear there was a whole court case about this in the last year.