It won't matter. By the time it's done reasoning, it has already decided what it wants to say.

Reasoning tokens are just regular output tokens the model generates before answering; the UI simply doesn't show them. Conceptually, the output is something like:

  <reasoning>
    Lots of text here
  </reasoning>
  <answer>
    Part you see here. Usually much shorter.
  </answer>
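
For illustration, here's a minimal Python sketch of how a client could split such a completion into the hidden and visible parts. The tag names are the hypothetical ones from the sketch above; real providers use their own delimiters:

  import re

  raw = """<reasoning>
  Lots of text here
  </reasoning>
  <answer>
  Part you see here. Usually much shorter.
  </answer>"""

  def split_completion(text):
      # Separate the hidden reasoning from the user-visible answer.
      reasoning = re.search(r"<reasoning>(.*?)</reasoning>", text, re.DOTALL)
      answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
      return (
          reasoning.group(1).strip() if reasoning else "",
          answer.group(1).strip() if answer else text.strip(),
      )

  thinking, visible = split_completion(raw)
  print(visible)  # the UI shows only this part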
reply
The reasoning part is no different from the part that goes in the answer. It's just that the model is trained to do some magical back-and-forth text generation first. But when it's writing the answer part, each word becomes part of its context when generating the next. What that means is that the model does not compute and then write; it generates text that guides the next generation in the general direction of the answer.
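
To make this concrete, here's a minimal greedy decoding loop in Python using Hugging Face transformers with GPT-2 (not a reasoning model, but the mechanism is the same): every generated token is appended to the context and conditions the next step.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  ids = tokenizer("The answer is", return_tensors="pt").input_ids
  with torch.no_grad():
      for _ in range(20):
          logits = model(ids).logits        # scores for every next-token candidate
          next_id = logits[0, -1].argmax()  # greedy: take the most likely one
          # Feed the new token back in; it now steers everything that follows.
          ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

  print(tokenizer.decode(ids[0]))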

If you steer it with text that is strange for it (as in, not seen before in training), you are now in out-of-distribution territory, where its generalization is very weak.

reply
> The reasoning part is no different from the part that goes in the answer.

Exactly. And this instruction isn't telling it to skip the reasoning. That part is unaffected. The instruction is only for the user-visible output.

By the time a reasoning model gets to writing the output you see, it has already decided what it is going to say. The answer is based on whatever it decided while reasoning. It doesn't matter whether you tell it to put the answer first or the explanation first: it already knows both by the time it starts outputting either.

You're basically hoping that adding more CoT in the output after reasoning will improve the answer quality. It won't. It's already done way more CoT while reasoning, and its answer is already decided by then.
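
A toy illustration, reusing the hypothetical tags from upthread: whichever order the visible answer takes, the context that conditions its first visible token is the same reasoning trace.

  reasoning = "<reasoning> ...long chain of thought... </reasoning>"

  answer_first = reasoning + "<answer>42, because ...</answer>"
  explanation_first = reasoning + "<answer>Because ..., it's 42.</answer>"

  # The context preceding the first visible token is identical either way;
  # only what comes after it is rearranged.
  assert answer_first[:len(reasoning)] == explanation_first[:len(reasoning)]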

reply