How likely is it that it would take into account that it knows for sure it's not anything by Mickens from its latest training data? I'd be curious whether it would correctly identify a new piece from him as his before it gets trained on it.
It's a lossy representation
https://arstechnica.com/features/2025/06/study-metas-llama-3...
If the original essay was stuffed into the prompt window, the result will be word-accurate. Unless this is a model trained specifically on Mickens's essays (which Claude is not).
I swear there was a whole court case about this in the last year.
I wouldn't be too impressed by an n of 1.