A lawyer could easily argue that the model itself stores a representation of the original, and thus it can never truly start from a "fresh context".
And to be perfectly honest, LLMs can quote a lot of text verbatim.
We can't really speak of a clean-room implementation from an LLM, since it is only capable of recombining its training data in different ways, not of any original creation.
Of course, in practice it would likely work in exactly the opposite fashion: AI-generated code would be treated as immune even if it copied code verbatim.
I'd assume an LLM trained on the original would also be contaminated.