LLMs sure do love to burn tokens. It's like a high schooler trying to meet the minimum word count on a take-home essay.
reply
I've always wondered about that. LLM providers could slash the cost of inference if they got the models to just stop emitting so much hot air. I don't understand why OpenAI wants to pay 3x the cost to generate a response when two thirds of those tokens are meaningless noise.
reply
Because they don't yet know how to "just stop emitting so much hot air" without also removing the models' ability to do anything like "thinking" (or whatever you want to call the transcript mode). Figuring out which tokens are hot air is the hard problem itself.

They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.
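
For the curious, it really was that literal. A minimal sketch of the zero-shot trick with the OpenAI Python client (the model name and question are placeholders, not anything from the papers):

    # Zero-shot chain-of-thought: just append the magic phrase.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "A train travels 90 km at 60 km/h. How long does the trip take?"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": question + "\n\nLet's think step by step.",
        }],
    )
    print(resp.choices[0].message.content)

The entire "technique" is the one appended sentence; the extra tokens it elicits are the computation.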

reply
IMO it supports the framing that it's all just a "make the document longer" problem. Our human brains are primed for a kind of illusion: we perceive/infer a mind because, traditionally, a mind was the only thing that could produce such fitting language.
reply
To an extent. Even though they're clearly improving*, they also definitely look better than they actually are.

* this time last year they couldn't write compilable source code for a compiler for a toy language, I know because I tried

reply
This is an active research topic - two papers on this have come out over the last few days, one cutting half of the tokens and actually boosting performance overall.

I'd hazard a guess that they could get another 40% reduction, if they can come up with better reasoning scaffolding.

Each advance over the last 4 years, from RLHF to o1 reasoning to multi-agent, multi-cluster parallelized CoT, has resulted in a new engineering scope, and the low-hanging fruit in each gets explored over the course of 8-12 months. We still probably have a year or two of low-hanging fruit and hacking on everything that makes up current frontier models.

It'll be interesting if there are any architectural upsets in the near future. All the money and time invested into transformers could get ditched in favor of some other new king of the hill(climbers).

https://arxiv.org/abs/2602.02828
https://arxiv.org/abs/2503.16419
https://arxiv.org/abs/2508.05988

Current LLMs are going to get really sleek and highly tuned, but I have a feeling they're going to be relegated to component status, or maybe even abandoned when the next best thing comes along and blows their performance away.

reply
Because for API users they get to charge for 3x the tokens on the same requests.
reply
The 'hot air' is apparently more important than it appears at first, because those initial tokens are the substrate that the transformer uses for computation. Karpathy talks a little about this in some of his introductory lectures on YouTube.
reply
Related are "reasoning" models, where there's a stream of "hot air" that's not being shown to the end-user.

I analogize it to a film noir script document: the hardboiled detective character has unspoken text (the voiceover), and if you ask some agent to "make this document longer", there's extra continuity to work with.

reply
I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO.

I tried using a custom instruction in ChatGPT to make responses shorter, but I found the output was often nonsensical when I did this.
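
For reference, the API-side equivalent of that experiment looks roughly like this (a sketch, not my exact instruction; the model name is a placeholder):

    # Forcing brevity with a system message plus a hard token cap.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system",
             "content": "Answer in three sentences or fewer. No follow-up offers."},
            {"role": "user", "content": "Why is the sky blue?"},
        ],
        max_tokens=150,  # hard cap; output is truncated mid-sentence if exceeded
    )
    print(resp.choices[0].message.content)

Note the hard cap doesn't make the model write shorter, it just cuts it off, which may be part of why terse instructions degrade output quality.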

reply
Yeah, ChatGPT has gotten so much worse about this since the GPT-5 models came out. If I mention something once, it will keep coming back to it in every single message after, regardless of whether the topic has changed, and asking it to stop mentioning that specific thing works, except it just finds a new obsession. We also get the follow-up "if you'd like, I can also..." which is almost always either obvious or useless.

I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as badly.

reply
It's similar for me; it generates so much content without me asking. If I just ask for feedback or proofreading of something, it tends to regenerate it in another style. Nothing is ever quite good to go; there's always something it wants to add.
reply
Well, they probably have quite a lot of text in the training data from high schoolers trying to meet the minimum word count on a take-home essay.
reply
I wonder to what extent the Google search LLM is getting smarter, or simply more up-to-date on current hot topics.
reply
It seems like the search AI results are generally misunderstood; I also misunderstood them for the first weeks/months.

They are not just an LLM answer; they are an (often cached) LLM summary of web results.

This is why they were often skewed by nonsensical Reddit responses [0].

Depending on the type of input it can lean more toward web summary or LLM answer.

So I imagine that it can just grab the description of the "car wash" test from web results and then get it right because of that.

[0] https://www.bbc.com/news/articles/cd11gzejgz4o

reply
Presumably it did an actual search and summarized the results, rather than answering "off the cuff" by following gradients to reproduce the text it was trained on, or by following gradients to reproduce the "logic" of reasoning. [1]

[1] e.g. trained on traces of a reasoning process

reply
If you'd taught math or physics you'd've deduced that kids are trained the same way. What other way is there?

We're still in the early stages of "reversing natural intelligence"; we don't have much data on actual "reasoning processes". We want lean4 formalization, but we need traces (formalizations) of lean4 formalizations. You can call the bottleneck "capitalism", but I'll just call it a lack of motivation in making compute cheaper and more efficient, so that a significant portion can be redirected to productive ends (like lean4 formalization-formalization research) as opposed to consumerist ends[1].

Rail will eventually become too cheap to meter, but meanwhile we'll have to wait for this generation of robber barons to "kill one another off" AND for the coming Rockefellers to "disappear into the sunset".

[1] where "enterprise" should also be regarded as a mass of uninformed consumers: a supply-side vs demand-side ideological dichotomy in techno-economic policy. Grok this and you'll read less Economist (d-side) and more CPC/"Elon"[2] (s-side) propaganda.

[2] an idealized Elon who is able to formalize his own thought processes

reply
It's almost certainly just RAG powered by their crawler.
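
A sketch of that "cached LLM summary of web results" shape, assuming something like the flow described upthread (web_search is a hypothetical stand-in for their crawler/index; the model name is a placeholder):

    # Minimal retrieval-augmented answer: search first, then summarize.
    from openai import OpenAI

    client = OpenAI()

    def web_search(query: str) -> list[str]:
        # Hypothetical stand-in for a crawler/search index.
        raise NotImplementedError

    def answer_with_rag(query: str) -> str:
        context = "\n\n".join(web_search(query))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Answer using only the snippets provided."},
                {"role": "user",
                 "content": f"Snippets:\n{context}\n\nQuestion: {query}"},
            ],
        )
        return resp.choices[0].message.content

The summary can then be cached per query, which would explain both the speed and the staleness.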
reply
Proving that RAG still matters.
reply
Gemini was a good laugh as well:

    Silas: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
    Gemini:
    ….
    That is a classic “efficiency vs. logic” dilemma.
    Strictly speaking, you should drive. Here is the breakdown of why driving wins this specific round, despite the short distance:
    ...
    * The “Post-Wash” Logic: If you walk there, you’ll eventually have to walk back, get the car, and drive it there anyway. You’re essentially suggesting a pre-wash stroll.
    When should you walk?
    …
    3. You’ve decided the car is too dirty to be seen in public and you’re going to buy a tarp to cover your shame.
reply
A few years ago, if you asked an LLM what the date was, it would tell you its training cutoff date, weeks to months earlier. Now it gives the correct date.

What you've proven is that LLMs leverage web search, which I think we've known about for a while.
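
The date in particular usually doesn't even need search; chat frontends typically just inject it into the system prompt each turn. A sketch (placeholder model name):

    # The model never "knows" the date; the frontend tells it every turn.
    from datetime import date
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system",
             "content": f"Today's date is {date.today().isoformat()}."},
            {"role": "user", "content": "What is today's date?"},
        ],
    )
    print(resp.choices[0].message.content)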

reply
Gemini now "knows the time"; when I was using it in December it was still lost about dates/intervals...
reply
Yeah, the chat log they saved had the correct date. What's your point?
reply