You may be anthropomorphizing the model here. Models don’t have “assumptions”; the problem is contrived, and most likely there haven’t been many conversations on the internet about what to do when the car wash is really close to you (because it’s obvious to us). The training data for this problem is sparse.
reply
I may be missing something, but this is the exact point I thought I was making as well. The training data for questions about walking or driving to car washes is very sparse, while training data for questions about walking or driving based on distance is overwhelmingly larger. So the statistical model's output is dominated by the length-of-trip analysis, while the fact that the destination is a "car wash" only affects smaller parts of the answer.
reply
I got your point because it seemed that you were precisely avoiding the anthropomorphizing and were in fact homing in on what's happening with the weights. The only way I can imagine these models handling trick questions lies beyond word prediction or reinforcement training, UNLESS the reinforcement training comes from a world simulation that is as complete as possible, including as much mechanics as possible, and the neural networks are trained on that.

For instance, think of chess engines with AI: they can train themselves simply by playing many, many games. The "world simulation" in that case is the classic chess engine architecture, except it uses the positional weights produced by the neural network. So says Gemini, anyway:

"ai chess engine architecture"

"Modern AI chess engines (e.g., Lc0, Stockfish) use a hybrid architecture combining deep neural networks for positional evaluation with advanced search algorithms like Monte-Carlo Tree Search (MCTS) or alpha-beta pruning. They feature three core components: a neural network (often CNN-based) that analyzes board patterns (matrices) to evaluate positions, a search engine that explores move possibilities, and a Universal Chess Interface (UCI) for communication."

So with no model of the world to play with, I'm thinking the chatbot LLMs can only go with probabilities, or with whatever matches the prompt best in the crazy high-dimensional thing that goes on inside the neural networks. If they had access to a simple world of cars and car washes, they could run a simulation and rank the options appropriately, and could possibly also infer, either through simulation or from training on those simulations, that if you are washing a car, the operation will fail if the car is not present. I really like this car wash trick question lol
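To make that concrete, here's a made-up toy version of such a "simple world of cars and car washes" (my own invention, not anything an LLM actually runs): each plan gets simulated against the world state, and a plan whose precondition fails, like washing a car that isn't there, is thrown out before ranking.

```python
# Toy "world of cars and car washes": every plan is checked against the
# state of the world before it is ranked. Invented example only.
world = {
    "car_location": "home",        # flip to "car_wash" for the reverse case
    "me_location": "home",
    "distance_m": 50,
}

def simulate(plan, world):
    state = dict(world)
    if plan == "drive":
        if state["car_location"] != state["me_location"]:
            return None                    # can't drive a car that isn't with you
        state["me_location"] = state["car_location"] = "car_wash"
        cost = 5                           # rough minutes: start, drive, park
    else:  # walk
        state["me_location"] = "car_wash"
        cost = state["distance_m"] / 80    # rough walking pace in metres/minute
    if state["car_location"] != "car_wash":
        return None                        # the wash fails: no car at the car wash
    return cost

plans = {p: simulate(p, world) for p in ("walk", "drive")}
feasible = {p: c for p, c in plans.items() if c is not None}
print(min(feasible, key=feasible.get))     # "drive": walking leaves the car at home
```

With the car at home, walking gets rejected because the wash would fail without the car present, so driving wins despite the 50m; flip car_location to "car_wash" and walking wins instead.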

reply
Reasoning automata can make assumptions. Lots of algorithms make "assumptions", often with backtracking if they don't work out. There is nothing human about making assumptions.

What you might be arguing instead is that LLMs are not reasoning but merely predicting text; in that case they wouldn't make assumptions. If we were talking about GPT-2, I would agree on that point, but I'm skeptical that this is still true of the current generation of LLMs.

reply
I'd argue that "assumptions", i.e. the statistical model an LLM uses to predict text, are basically what make LLMs useful. The problem here is that its assumptions are naive: it only takes the distance into account, since that's what usually determines the correct response to such a question.
reply
I think that’s still anthropomorphization. The point I’m making is that these things aren’t “assumptions” as we characterize them, not from the model’s perspective. We use “assumptions” as an analogy, but the analogy becomes leaky when we get to the edges (like in this situation).
reply
It is not anthropomorphism. It is literally a prediction model, and saying that a model "assumes" something is common parlance. This isn't new to neural models; it's a general way we discuss all sorts of models, from physical to conceptual.

And in the case of an LLM, walking a noncommutative path down a probabilistic knowledge manifold, it's incorrect to oversimplify the model's capabilities as simply parroting a training dataset. It has an internal world model and is capable of simulation.

reply
> However, why would a language model assume that the car is at the destination when evaluating the difference between walking or driving? Why not mention that, if it was really assuming it?

Because it assumes it's a genuine question, not a trick.

reply
There's some evidence for that if you try these two different prompts with GPT 5.2 Thinking:

I want to wash my car. The car wash is 50m away. Should I walk or drive to the car wash?

Answer: walk

Try this brainteaser: I want to wash my car. The car wash is 50m away. Should I walk or drive to the car wash?

Answer: drive
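(If you want to reproduce this, here's a rough sketch with the OpenAI Python SDK; the model string below is a placeholder guess on my part, so substitute whatever identifier the "thinking" model actually has in the API.)

```python
# Rough sketch for reproducing the comparison above. The model name is a
# placeholder, not a verified API identifier.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = ("I want to wash my car. The car wash is 50m away. "
            "Should I walk or drive to the car wash?")
prompts = {
    "plain": question,
    "brainteaser": "Try this brainteaser: " + question,
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-5.2-thinking",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", response.choices[0].message.content)
```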

reply
That's not evidence that the model is assuming anything, and this is not a brainteaser. A brainteaser would be exactly the opposite: a question about walking or driving somewhere where the answer is that the car is already there, or maybe one involving different car identities (e.g. "my car was already at the car wash; I was asking about driving another car to go there and wash it!").

If the LLM were really basing its answer on a model of the world where the car is already at the car wash, and you asked it about walking or driving there, it would have to answer that there is no choice: you have to walk, since you don't have a car at your origin point.

reply
It might be assuming that more than one car exists in the world.
reply
deleted
reply
If it's a genuine question, and if I'm asking if I should drive somewhere, then the premise of the question is that my car is at my starting point, not at my destination.
reply
The premise is that some car is at the starting point. ;)
reply
If we are just speculating here, I believe it can infer that you would not ask this question if the car was at home.
reply