I think that bias is due less to the proportion of books and more to how the models are fine-tuned after pretraining.

> For what it's worth, whatever LLMs do extensively, they do because it's a convention in well-established writing styles.

I think that's only part of the story. While it's true that whatever LLMs do is represented somewhere in their training corpus, they also lack any sense of how to adapt to context, how to find a suitable "voice", and how not to overdo it, unless you explicitly prompt them otherwise, which is too much of a burden. Their default voice sucks, basically.

So let's say they learned to speak Redditese. They don't know when not to use that voice. They always seem to be making persuasive arguments, following patterns like "It's not X. It's Y. And you know it (mic drop)." But real humans don't speak like this all the damn time. If you talk like this to your mom or your closest friends, you're basically an idiot.

It's not that you cannot speak like this. It's that you cannot do it all the time. And that's the real problem with LLMs.

(Sorry, couldn't resist!)

Aren’t books massively outweighed by the crawled internet corpus?

I would doubt that, because books are probably weighted as higher quality and more trustworthy than random Reddit posts.

Especially if it's unsupervised training.

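To make "weighted" concrete: in practice it usually refers to the sampling mixture, i.e. how often each source is drawn from during pretraining, which is a data-curation choice rather than anything the loss function does. A toy sketch in Python, with completely made-up weights (not any lab's real mixture):

    import random

    # Hypothetical mixture weights -- purely illustrative.
    # "Weighted higher" means a higher sampling probability per training
    # example, so a small but high-quality source (books) can be
    # over-represented relative to its share of raw tokens.
    corpus_weights = {
        "web_crawl": 0.60,  # huge and noisy
        "books": 0.25,      # small, but up-weighted for quality
        "reddit": 0.10,
        "code": 0.05,
    }

    def sample_source(weights):
        """Pick the source of the next training example by a weighted draw."""
        sources = list(weights)
        probs = list(weights.values())
        return random.choices(sources, weights=probs, k=1)[0]

    # Books end up as ~25% of training examples even if they are a far
    # smaller share of the raw data.
    counts = {s: 0 for s in corpus_weights}
    for _ in range(10_000):
        counts[sample_source(corpus_weights)] += 1
    print(counts)

So even an "unsupervised" objective can be steered toward books simply by drawing from them more often than their raw size would suggest.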