This is an interesting denial of reality.
A "reasoning" LLM is just an LLM that's been instructed or trained to start every response with some text wrapped in <BEGIN_REASONING></END_REASONING> or similar. The UI may show or obscure this part. Then when the model decides to give its "real" response, it has all that reasoning text in its context window, helping it generate a better answer.
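To make that concrete, here's a minimal sketch of what a serving UI does with such a response. The tag names and the helper are illustrative (real models use their own delimiters, e.g. `<think>`); the point is just that the reasoning span is split off and the rest is shown as the answer.

```python
import re

def split_reasoning(raw_response: str) -> tuple[str, str]:
    """Separate the hidden reasoning span from the visible answer.

    Assumes the reasoning is wrapped in <BEGIN_REASONING>...</END_REASONING>
    (hypothetical tags, following the comment above).
    """
    match = re.search(r"<BEGIN_REASONING>(.*?)</END_REASONING>",
                      raw_response, re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = raw_response[match.end():].strip()
    else:
        # No reasoning block: the whole response is the answer.
        reasoning, answer = "", raw_response.strip()
    return reasoning, answer

raw = ("<BEGIN_REASONING>The user asks for 12*12. "
       "12*12 = 144.</END_REASONING>The answer is 144.")
reasoning, answer = split_reasoning(raw)
# A UI would typically collapse or hide `reasoning` and display only `answer`,
# even though the model generated both as one continuous text.
```

The crucial detail is that the model itself sees no such split: the reasoning tokens sit in its context window like any other tokens, which is exactly why they help it generate a better final answer.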