This is both true and irrelevant. Written records can capture an enormous quantity of the human experience in absolute terms while simultaneously capturing a minuscule portion of it in relative terms. Even if it's the best "that we have available," that doesn't mean it's fit for purpose. In other words, if you took a human infant, locked it in a windowless box, and did nothing but recite terabytes of text at it for 20 years, you would not expect a well-adjusted human on the other side.
I take that as a moderately strong signal against that "minuscule portion" notion. Clearly, raw text captures a lot.
If we're looking at biologicals, then "human infant" is a weird object, because it falls out of the womb pre-trained. Evolution is an optimization process - and it spent an awful lot of time running a highly parallel search of low k-complexity priors to wire into mammal brains. Frontier labs can only wish they had the compute budget to do this kind of meta-learning.
Humans get a bag of computational primitives evolved for high fitness across a diverse range of environments; LLMs get a pit of vaguely constrained random initialization. No wonder they have to brute-force their way out of it with sheer data volume. Sample efficiency is low because we're paying the inverse-problem tax on every sample.
Training on a bunch of text someone wrote when they were mad doesn't capture the internal state that caused the outburst, so that state cannot be accurately reproduced by the system. The data simply does not exist.
Without the cause behind the effect, you essentially have to predict from noise, which makes the end result verisimilar nonsense: convincingly correlated with the real thing, but with no idea why it is the way it is. It's like training a blind man to describe a landscape from lots of written descriptions, with no idea what the colour green even is, only that it tends to appear next to brown in nature. So the guy gets it kinda right because he's heard that town described before, and we conclude he can actually see and tell him to drive a car next.
Another example: say you're trying to train a time series model to predict the weather. You take the last 200 years of rainfall data, feed it all in, and ask it to predict tomorrow's weather. It will probably learn that certain parts of the year get more or less rain, and that long sunny stretches tend to be followed by rain and vice versa, but its accuracy will be close to a coin toss because it never sees the factors that actually drive rain: temperature, pressure, humidity, wind, cloud coverage, radar data. Even with all that information it's still going to be pretty bad, but at least it's making an educated guess instead of an almost random one.
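The rainfall point can be sketched with a toy simulation (everything here is made up for illustration: a fake weather process where rain is driven by pressure and humidity, and two hypothetical one-rule predictors, not real forecasting models):

```python
import random

random.seed(0)

# Toy world: whether it rains on a given day is driven by that day's
# pressure and humidity (the hidden causes), NOT by rainfall history.
n = 10_000
pressure = [random.gauss(0, 1) for _ in range(n)]
humidity = [random.gauss(0, 1) for _ in range(n)]
rain = [1 if (h - p + random.gauss(0, 0.5)) > 0 else 0
        for p, h in zip(pressure, humidity)]

# Predictor A: rainfall history only -- "it rained today, so rain tomorrow".
hist_acc = sum(rain[i] == rain[i + 1] for i in range(n - 1)) / (n - 1)

# Predictor B: the actual drivers -- a simple threshold on the causes.
causal_acc = sum((1 if humidity[i] - pressure[i] > 0 else 0) == rain[i]
                 for i in range(n)) / n

print(f"history-only accuracy: {hist_acc:.2f}")   # hovers around 0.5, a coin toss
print(f"causal-input accuracy: {causal_acc:.2f}") # well above chance
```

The history-only model isn't "wrong"; the information it would need just isn't in its inputs, which is the same position an LLM is in when the causes of the text were never written down.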
The DL modelling approach itself is not conceptually wrong; the data just happens to be complete garbage, so the end result is weird in ways that are hard to predict and correctly account for. We end up assuming the models know more than they realistically ever can. Sure, there are cases where it's possible to capture the entire domain with a dataset, e.g. math or abstract programming: clearly defined closed systems where we can generate as much synthetic data as needed to cover the entire problem domain. And, as expected, LLMs do much better in those domains when you actually do that.
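For a closed domain, "the dataset covers the entire problem" can be taken literally. A minimal sketch (the domain choice and formatting are hypothetical, just to illustrate exhaustive synthetic coverage):

```python
import itertools

# Closed domain: addition of two numbers in [0, 100). We can enumerate
# every single instance, so the synthetic dataset IS the whole domain --
# there is no hidden cause left for the model to guess at.
def make_dataset():
    return [(f"{a} + {b} =", a + b)
            for a, b in itertools.product(range(100), repeat=2)]

data = make_dataset()
print(len(data))  # 10000: every pair covered exactly once
```

Contrast that with natural text, where no amount of scraping enumerates the mental states that produced it.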