If its responses were perfect enough to chain, or if you could ask "please give me words 10-15 of chapter 3, paragraph 4 of HPatSS" and it complied, then you'd have a better case for complaint. Even then, the counterargument is that repeated prompting of that kind, explicitly soliciting a copyright violation, is the real offense. Are you going to throw someone in prison for memorizing the entirety of HPatSS and reciting arbitrary parts of it on demand?
Combining both issues: LLMs only regurgitate mostly accurate continuations, and they only provide them to the person who explicitly asked. Any meaningful copyright violation therefore moves downstream. If you record someone reciting HPatSS from memory and post it on YouTube, you are (or should be considered) the real copyright violator, not them.
If you ask for an identifiable short segment of writing, or a piece of art, and get something close enough to violate copyright, that should really be your problem if you redistribute it (whether manually, or because you've built a service that lets third parties submit LLM prompts and feeds the answers back to them, and they go on to redistribute the output).
Blaming LLMs for "copyright violation" is like persuading a cognitively impaired person to do something illegal and then blaming them for it.
What is the real copyright risk posed by an arcane procedure that can sometimes recover most of a text? So far, none, which is my point. Pragmatically, this is a losing argument in a courtroom: the chain of reasoning is too easy to disrupt, and even undisrupted, the case for model-maker liability is attenuated.