But a human just using an LLM to generate code will do it accidentally. The difference is that regurgitation of training text is a documented failure mode of LLMs, and the person using the model has no way to know when it’s happening.
If you can’t be sure, don’t sign.