I think this might be an unsolved problem. When GPT-5 came out, they had a "router" (classifier?) decide whether to use the thinking model or not.

It was terrible. You could upload 30 pages of financial documents and it would decide "yeah this doesn't require reasoning." They improved it a lot but it still makes mistakes constantly.

I assume something similar is happening in this case.

reply
I find that GPT 5.4 is okay at it. It does think harder for harder problems and still answers quickly for simpler ones, IME.
reply
Is knowing how hard a problem is, before doing it, solved in humans?
reply
Yes, every week when assigning fking points to tasks in Jira /s
reply
As a unit this is funny: Jira points assigned per second (now possible with parallel tool-calling AIs)
reply
It makes me think of a parallel: in combinatorial optimization, estimating whether a problem is hard to solve often costs you as much as solving it.

With a small, bounded compute budget, your router/thinking switch is sometimes going to make mistakes. The same goes for speculative decoding, branch predictors, etc.
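To make the trade-off concrete, here is a toy sketch of a compute-bounded router. All names and the heuristic are hypothetical; the point is only that any hardness estimator cheaper than actually solving the problem must rely on proxies, and proxies misfire in both directions.

```python
# Toy compute-bounded router: a cheap heuristic decides whether a request
# goes to a fast model or a slow "thinking" model. The heuristic here
# (prompt length) is deliberately simplistic, like any proxy must be when
# it is forced to be much cheaper than solving the problem itself.

def route(prompt: str, length_threshold: int = 200) -> str:
    """Cheap hardness proxy: long prompts -> 'thinking', short -> 'fast'."""
    return "thinking" if len(prompt) > length_threshold else "fast"

# A short prompt can still be hard...
hard_but_short = "Find the smallest composite n > 1 such that n divides 2**n - 2."
# ...and a long prompt can be trivial.
easy_but_long = "Please repeat the word 'hello'. " * 20

print(route(hard_but_short))  # misrouted to the fast model
print(route(easy_but_long))   # misrouted to the thinking model
```

Replacing the length heuristic with a learned classifier doesn't change the structure of the problem: as long as the router's budget is far smaller than the cost of actually attempting the task, it will keep making both kinds of mistakes.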

reply
Maybe it is an unsolved problem, but either way I am confused about why Anthropic is pushing adaptive thinking so hard, making it the only option on their latest models. To combat how unreliable it is, they set thinking effort to "high" by default in the API; in Claude Code, they now set it to "xhigh" by default. The fact that you cannot even inspect the thinking blocks to try to understand its behavior doesn't help. I know they throw around instructions for enabling thinking blocks, or blocks with thinking summaries, or whatever (I am too confused by now about what it is they allow us to see), but nothing has worked for me so far.
reply
Because with adaptive thinking, they control compute, not you.
reply