> This matters because (1) the world cannot be modeled anywhere close to completely with language alone

LLMs being "Language Models" means they model language, it doesn't mean they "model the world with language".

On the contrary: modeling language requires you to also model the world, but that modeling lives in the hidden state, not in language.

reply
Let's be more precise: LLMs have to model the world from an intermediate tokenized representation of the text on the internet. Most of this text is natural language, but to allow for e.g. code and math, let's say "tokens" to keep it generic, even though in practice tokens mostly tokenize natural language.
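(To make "tokens" concrete, here is a minimal sketch using the open-source tiktoken library; the encoding name is one real example, not a claim about what any particular LLM uses:)

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # a real BPE encoding
    for text in ["The cat sat on the mat.",      # natural language
                 "for i in range(3): print(i)",  # code
                 r"\int_0^1 x^2 dx = 1/3"]:      # math notation
        ids = enc.encode(text)                   # text -> token ids
        print(ids, "->", enc.decode(ids))        # token ids -> text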

LLMs can only model tokens, and tokens are produced by humans trying to model the world. Tokenized models are NOT the only kinds of models humans can produce (we can have visual, kinaesthetic, tactile, gustatory, and all sorts of sensory, non-linguistic models of the world).

LLMs are trained on tokenizations of text, and most of that text is humans attempting to translate their various models of the world into tokenized form. I.e. humans make tokenized models of their actual models (which are still just messy models of the world), and this is what LLMs are trained on.

So, do "LLMS model the world with language"? Well, they are constrained in that they can only model the world that is already modeled by language (generally: tokenized). So the "with" here is vague. But patterns encoded in the hidden state are still patterns of tokens.

Humans can have models that are much more complicated than patterns of tokens. Non-LLM models (e.g. models connected to sensors, such as those in self-driving vehicles, and VLMs) can use more than simple linguistic tokens to model the world, but LLMs are deeply constrained relative to humans, in this very specific sense.

reply
"Large Language Models" is a misnomer: these things were originally trained to reproduce language, but they went far beyond that. The fact that they're trained on language (if that's even still the case) is irrelevant; it's like claiming that a student trained on quizzes and exercise books is only able to solve quizzes and exercises.
reply
It isn't a misnomer at all, and comments like yours are why it is increasingly important to remind people about the linguistic foundations of these models.

For example, no matter how many books you read about riding a bike, you still need to actually get on a bike and do some practice before you can ride it. The reading can certainly help, at least in theory, but in practice it is not necessary and may even hurt (if the linguistic model presented in the book keeps processes that need to be unconscious held too strongly in consciousness).

This is why LLMs being so strongly tied to natural language is still an important limitation (even if it is clearly less limiting than most expected).

reply
> no matter how many books you read about riding a bike, you still need to actually get on a bike and do some practice before you can ride it

This is like saying that no matter how much you know theoretically about a foreign language, you still need to train your brain to speak it. It has little to do with the reality of that language or the correctness of your model of it, but rather with the need to train realtime circuits to do some work.

Let me try some variations: "no matter how many books you read about ancient history, you need to have lived back then before you can reasonably talk about it". "No matter how many books you have read about quantum mechanics, you need to be a particle..."

reply
> It has little to do with the reality of that language or the correctness of your model of it, but rather with the need to train realtime circuits to do some work.

To the contrary: this is purely speculative and almost certainly wrong. Riding a bike IS coordinating the realtime circuits in the right way, and language and a linguistic model fundamentally cannot get you there.

There are plenty of other domains like this, where semantic reasoning (e.g. unquantified syllogistic reasoning) just doesn't get you anywhere useful. I gave an example from cooking later in this thread.

You are falling, IMO, into exactly the trap of the linguistic reductionist: thinking that language is the be-all and end-all of cognition. Talk to actual mathematicians, and they will generally tell you they broadly recruit visualization, imagined tactile and proprioceptive senses, and hard-to-vocalize "intuition".

To think that all modeling is fundamentally linguistic (or, more broadly, token manipulation), one has to claim this is all epiphenomenal, or that e.g. all unconscious thought is secretly using language. That is not a particularly credible or plausible claim, given the ubiquity of cognition across animals and direct human experience, so the linguistic boundedness of LLMs is very important and relevant.

reply
Funny, because riding a bicycle or speaking a language is exactly something people don't have a world model of. Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue. Is "making the right movement at the right time within a narrow boundary of conditions" a world model, or is it just predicting the next move?

> You are falling, IMO, into exactly the trap of the linguistic reductionist: thinking that language is the be-all and end-all of cognition.

I'm not saying that at all. I am saying that any (sufficiently long, varied) coherent speech needs a world model, so if something produces coherent speech, there must be a world model behind it. We can agree that the model is lacking exactly to the extent that the language productions are incoherent: which, these days, is very little.

reply
> Funny, because riding a bicycle or speaking a language is exactly something people don't have a world model of. Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue

This is circular, because you are assuming their world-model of biking can be expressed in language. It can't!

EDIT: There are plenty of skilled experts, artists, etc. who clearly and obviously have complex world models that let them produce best-in-the-world outputs, but who can't express very precisely how they do this. I would never claim such people have no world model or understanding of what they do. Perhaps we have a semantic / definitional issue here?

reply
> This is circular, because you are assuming their world-model of biking can be expressed in language. It can't!

Ok. So I think I get it. For me, producing coherent discourse about things requires a world model, because you can't keep making up coherent relationships between objects and actions for long if you don't understand what their properties are and how they relate to each other.

You, on the other hand, claim that there are infinite firsthand sensory experiences (maybe we can call them qualia?) that fall between the cracks of language and are rarely communicated (though we use a wealth of metaphors and synesthesia for that) and can only be understood by those who have experienced them firsthand.

I can agree with that if that's what you mean, but at the same time I'm not sure they constitute such a big part of our thought and communication. For example, we are discussing reality in this thread, and yet there are no necessary references to firsthand experiences. Any time we talk about history, physics, space, maths, philosophy, we're basically juggling concepts in our heads with zero direct experience of them.

reply
> You, on the other hand, claim that there are infinite firsthand sensory experiences (maybe we can call them qualia?) that fall between the cracks of language and are rarely communicated (though we use a wealth of metaphors and synesthesia for that) and can only be understood by those who have experienced them firsthand.

Well, not infinite, but, yes! I am indeed claiming that much of our world models are patterns and associations between qualia, and that only some qualia are essentially representable as, or look like, linguistic tokens (specifically, the sounds of those tokens being pronounced, or their visual shapes in the case of e.g. math symbols). I.e. I am claiming that the way one learns to cook, or to "do theoretical math", may be much more about forming associations between those non-linguistic qualia than, say, doing philosophy obviously is.

> I'm not sure they constitute such a big part of our thought and communication

The communication part is mostly tautological again, but, yes, it remains very much an open question in cognitive science just how exactly thought works. A lot of mathematicians claim to lean heavily on visualization and/or tactile and kinaesthetic modeling for their intuitions (and most deep math is driven by intuition first), but mathematicians who produce similar work often disagree about how they think about it intuitively. And we are seeing some progress from e.g. Aristotle using Lean to generate math proofs in a strictly tokenized / symbolic way, but it remains to be seen if this will ever produce anything truly impressive to mathematicians. So it is really hard to know what actually matters for general human cognition.
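(For the curious, "strictly tokenized / symbolic" is meant literally: a Lean proof is nothing but a string of symbolic tokens checked by the kernel. A toy Lean 4 example, using the core lemma Nat.add_comm; the theorem name is made up:)

    -- The entire "model" of addition here is symbol manipulation:
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b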

I think introspection makes it clear there are a LOT of domains where the core knowledge is obviously not mostly linguistic. This is easiest to argue for embodied domains and skills (e.g. anything that requires direct physical interaction with the world), and it is in areas like these (e.g. self-driving vehicle AI) that LLMs will most likely be least useful in isolation, IMO.

reply
You and I can't learn to ride a bike by reading thousands of books about cycling and Newtonian physics, but a robot driven by an LLM-like process certainly can.

In practice it would make heavy use of RL, as humans do.
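(To make "heavy use of RL" concrete, here is a toy sketch of a tabular Q-learning update for a made-up balance task. All the states, actions, and rewards are invented for illustration; a real robot would use rich sensors and function approximation, not a lookup table:)

    import random

    states = ["tilting_left", "tilting_right"]
    actions = ["lean_left", "lean_right"]
    alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in states for a in actions}

    def step(state, action):
        # Hypothetical dynamics: the correct counter-lean is rewarded.
        good = {("tilting_left", "lean_right"), ("tilting_right", "lean_left")}
        reward = 1.0 if (state, action) in good else -1.0
        return random.choice(states), reward

    state = random.choice(states)
    for _ in range(5000):
        if random.random() < eps:        # epsilon-greedy exploration
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

    print(max(actions, key=lambda a: Q[("tilting_left", a)]))  # learns "lean_right"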

reply
> In practice it would make heavy use of RL, as humans do.

Oh, so you mean it would be in a harness of some sort that lets it connect to sensors that tell it things about its position, speed, balance, etc.? Well, yes, but then it isn't an LLM anymore, because it has more than language to model things!

reply
I have no idea why you used the word “certainly” there.
reply
What is in the nature of bike-riding that cannot be reduced to text?

You know transformers can do math, right?

reply
> What is in the nature of bike-riding that cannot be reduced to text?

You're asking someone to answer this question in a text forum. This is not quite the gotcha you think it is.

The distinction between "knowing" and "putting into language" is a rich source of epistemological debate going back to Plato and is still widely regarded as a particularly difficult philosophical conundrum. I don't see how you can make this claim with so much certainty.

reply
Every time someone has said, "Yeah, but they can't do X," they've ended up with egg on their face. Have you seen the price of eggs lately?
reply
1. LLMs are transformers, and transformers are next-state predictors. LLMs are not language models (in the sense you are trying to imply) because even when training is restricted to text only, text is much more than language.

2. People need to let go of this strange and erroneous idea that humans somehow have this privileged access to the 'real world'. You don't. You run on a heavily filtered, tiny slice of reality. You think you understand electromagnetism? Tell that to the birds that innately navigate by sensing the earth's magnetic field. To them, your brain only somewhat models the real world, and evidently quite incompletely. You'll never truly understand electromagnetism, they might say.

reply
LLMs are language models; something being a transformer or next-state predictor does not make it a language model. You can also have e.g. convolutional language models or LSTM-based language models. This is a basic point that anyone with any proper understanding of these models would know.

Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language. But, yes, as I said in another comment on this thread, it isn't that simple, because LLMs today are trained on tokens from tokenizers, and these tokenizers are trained on text that includes e.g. natural language, mathematical symbolism, and code.

Yes, humans have incredibly limited access to the real world. But they experience and model this world with far more tools and machinery than language. Sometimes they attempt to messily translate this messy, multimodal understanding into tokens, and then make those tokens available on the internet.

An LLM (in the sense everyone means it, which, again, is largely a natural language model, but in any case just a tokenized text model) has access only to these messy tokens, so, yes, far less capacity than humanity collectively. And though the LLM can integrate knowledge from a massive number of tokens from a huge number of humans, even a single human has more kinds of sensory information and modality-specific knowledge than the LLM. So humans DO have more privileged access to the real world than LLMs (even though we can barely access a slice of reality at all).

reply
>LLMs are language models; something being a transformer or next-state predictor does not make it a language model. You can also have e.g. convolutional language models or LSTM-based language models. This is a basic point that anyone with any proper understanding of these models would know.

'Language Model' has no inherent meaning beyond 'predicts natural language sequences'. You are trying to make it mean more than that. You can certainly make something you'd call a language model with convolutions or LSTMs, but that's just a semantics game. In practice, they would not work like transformers and would in fact perform much worse than them on the same compute budget.

>Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language.

The major LLMs today are trained on trillions of tokens of text (much of which has nothing to do with language beyond being the medium of communication), millions of images, and millions of hours of audio.

The problem, as I tried to explain, is that you're packing more meaning into 'Language Model' than you should. Being trained on text does not mean all your responses are modelled via language, as you seem to imply. Even for a model trained on text, only the first and last few layers of an LLM concern language.

reply
You clearly have no idea about the basics of what you are talking about (like almost all people who can't grasp the simple distinction between transformer architectures and LLMs generally) and are ignoring most of what I am saying.

I see no value in engaging further.

reply
>You clearly have no idea about the basics of what you are talking about (like almost all people who can't grasp the simple distinction between transformer architectures and LLMs generally)

Yeah, I'm not the one who doesn't understand the distinction between transformers and other potential LM architectures, if your words are anything to go by, but sure, feel free to do whatever you want regardless.

reply
> 2. People need to let go of this strange and erroneous idea that humans somehow have this privileged access to the 'real world'. You don't.

You are denouncing a claim that the comment you're replying to did not make.

reply
They made it implicitly, otherwise this:

>(2) language only somewhat models the world

is completely irrelevant.

Everyone is only 'somewhat modeling' the world. Humans, animals, and LLMs.

reply
Completely relevant, because LLMs only "somewhat model" humans' "somewhat modeling" of the world...
reply
LLMs aren't modeling "humans modeling the world"; they're modeling patterns in data that reflect the world directly. When an LLM learns physics from textbooks, scientific papers, and code, it's learning the same compressed representations of reality that humans use, not a "model of a model."

Your argument would suggest that because you learned about quantum mechanics through language (textbooks, lectures), you only have access to "humans' modeling of humans' modeling of quantum mechanics": an infinite regress that's clearly absurd.

reply
A language model, in computer science, is a model that predicts the probability of a sentence, or of the next word given the preceding words. This definition predates LLMs.
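(A minimal sketch of that classic definition on a made-up toy corpus; the point is only the shape of the object, P(next word | preceding words):)

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count bigram transitions: counts[w][w_next]
    counts = defaultdict(Counter)
    for w, w_next in zip(corpus, corpus[1:]):
        counts[w][w_next] += 1

    def p_next(w_next, w):
        # Estimate P(w_next | w) from the counts.
        total = sum(counts[w].values())
        return counts[w][w_next] / total if total else 0.0

    print(p_next("cat", "the"))  # 0.25: "the" is followed by cat/mat/dog/rug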
reply
A 'language model' only has meaning insofar as it tells you this thing 'predicts natural language sequences'. It does not tell you how these sequences are being predicted, or anything about what's going on inside, so all the extra meaning OP is trying to add by calling them Language Models is, well... misplaced. That's the point I was trying to make.
reply