I think that's only part of the story. While it's true that what LLMs do is somehow represented in their training corpus, they also lack any sense of how to adapt to the context, how to find a suitable "voice", and how not to overdo it, unless you explicitly prompt them otherwise, which is too much of a burden. Their default voice sucks, basically.
So let's say they learned to speak in Redditese. They don't know when not to speak in that voice. They always seem to be trying to make a persuasive argument, following the pattern of "It's not X. It's Y. And you know it (mic drop)." But real humans don't speak like this all the damn time. If you speak like this to your mom or to your closest friends, you're basically an idiot.
It's not that you cannot speak like this. It's that you cannot do it all the time. And that's the real problem with LLMs.
(Sorry, couldn't resist!)
Especially if it's unsupervised training