> it understands you intend to wash the car you drive but still suggests not bringing it.

Doesn't it actually show it doesn't understand anything? It doesn't understand what a car is. It doesn't understand what a car wash is. Fundamentally, it's just parsing text cleverly.

reply
By default, a short question like this will probably just be routed to mini, or at least get zero thinking. For free users they'll have tuned their "routing" so that it only adds thinking for a very small percentage of queries, to save money. If any at all.
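As a rough illustration of what that kind of "routing" might look like, here is a minimal sketch of a complexity-based router. The model names, thresholds, and heuristics are made up for the example; real providers' routers are obviously far more involved.

```python
# Hypothetical sketch of a cost-saving router: send the query to a cheap
# model with no extended thinking unless it looks hard enough to justify
# the expensive path. All names and thresholds here are illustrative.

def route(query: str, is_free_user: bool) -> dict:
    # Crude "does this look hard?" heuristic: long queries or certain keywords.
    looks_hard = len(query.split()) > 40 or any(
        kw in query.lower() for kw in ("prove", "debug", "step by step")
    )
    if is_free_user and not looks_hard:
        return {"model": "mini", "thinking_budget": 0}      # cheapest path
    if looks_hard:
        return {"model": "full", "thinking_budget": 8192}   # expensive path
    return {"model": "full", "thinking_budget": 0}

print(route("Should I walk or drive to the car wash 50 meters away?", True))
```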
reply
I don't understand this approach. How are you going to convince customers-to-be by demoing an inferior product?
reply
Because they have too many free users who will always remain on the free plan, as they are the "default" LLM for people who don't care much, and that is an enormous cost. Also, the capabilities of their paid tiers are well known to enough people that they can rely on word of mouth and don't need to demo to customers-to-be.
reply
They're no more of a default than the Gemini that people get when they innocently google something and see an AI response.
reply
Right, but that form of Gemini is also not the top Gemini model with a high thinking budget that you would get with a subscription; the response is probably generated with Gemini Flash and low thinking.
reply
It's all trade-offs. The router works most of the time, so most free users get the expensive model when necessary.

They lost x% of customers and cut costs by y%. I bet y is much bigger than x.

reply
Through hype. I'm really into this new LLM stuff, but the companies around this tech suck. Their current strategy is essentially a media blitz; it reminds me of the advertising for Coca-Cola rather than for an Apple IIe.
reply
The good news for them is that all their competitors have the exact same issue, and it's unsolvable.

And to an extent the same holds for lots of SaaS products, even non-AI ones.

reply
I don't understand why they need to save money...
reply
Every business needs to minimize costs in order to maximize profits.
reply
Gemini 3 Flash answers tongue-in-cheek with a table of pros and cons, where one of the cons of walking is that you end up at the car wash while your car is still at home. It recommends driving if I don't have an "extremely long brush" or don't want to push the car to the car wash. Kinda funny.
reply
> You avoid the irony of driving your dirty car 50 meters just to wash it.

The LLM has very much mixed its signals -- there's nothing at all ironic about that. There are cases where it's ironic to drive a car 50 meters just to do X but that definitely isn't one of them. I asked Claude for examples; it struggled with it but eventually came up with "The irony of driving your car 50 meters just to attend a 'walkable neighborhoods' advocacy meeting."

reply
That's actually an amusing example from Claude.
reply
I think this shows that LLMs do NOT 'understand' anything.
reply
> I think this shows that LLMs do NOT 'understand' anything.

It shows these LLMs don't understand what's necessary for washing your car. But I don't see how that generalizes to "LLMs do NOT 'understand' anything".

What's your reasoning, there? Why does this show that LLMs don't understand anything at all?

reply
I think this rather shows that GPT 5.2 Instant, which is the version he most probably used as a free user, is shit and unusable for anything.
reply
Another/newer/less restricted LLM may give a better answer, but even then I don't think we can conclude that it 'understands' anything.
reply
If it answers this out-of-distribution question correctly -- which the other major models do -- what else should we conclude, other than that a meaningful form of "understanding" is being exhibited?

Do we need a new dictionary word that acts as a synonym for "understanding" specifically for non-human actors? I don't see why, personally, but I guess a case could be made.

reply
You may be tempted to conclude that. Then you find something else to ask that leads to an answer obviously nonsensical to a human being, or it hallucinates something, and you realise that, in fact, that's not the case.

IMHO 'understanding' in the usual human sense requires thinking, and however good and fast-improving LLMs are, I don't think anyone would suggest that any of them has become sentient yet. They can infer things from their training data better and better, but they do not 'understand' anything.

This is a deep and complex topic, and has been for decades.

reply