Reasoning tokens are just regular output tokens the model generates before answering; the UI simply doesn't show the reasoning. Conceptually, the output looks something like:
<reasoning>
Lots of text here
</reasoning>
<answer>
Part you see here. Usually much shorter.
</answer>

If you steer it with text that is strange for it (as in, not seen before in training), you are now in out-of-distribution territory, where its generalization is very weak.
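To make the conceptual split concrete: a minimal sketch of separating the hidden reasoning from the visible answer, assuming a raw transcript in the `<reasoning>`/`<answer>` tag format above. The tag names and the `split_transcript` helper are illustrative only; real providers use their own delimiters and strip the reasoning server-side before you ever see it.

```python
import re

def split_transcript(raw: str) -> tuple[str, str]:
    """Split a raw transcript into (hidden reasoning, visible answer).

    Illustrative only: assumes the conceptual <reasoning>/<answer>
    format above, not any real provider's wire format.
    """
    reasoning = re.search(r"<reasoning>(.*?)</reasoning>", raw, re.S)
    answer = re.search(r"<answer>(.*?)</answer>", raw, re.S)
    return (
        reasoning.group(1).strip() if reasoning else "",
        # If no answer tags are present, treat the whole output as visible.
        answer.group(1).strip() if answer else raw.strip(),
    )

raw = "<reasoning>Lots of text here</reasoning><answer>Part you see here.</answer>"
hidden, visible = split_transcript(raw)
print(visible)  # only the answer part reaches the user
```

The point is that both parts exist in the same token stream; what you see in the UI is just the second slice.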
Exactly. And this instruction isn't telling it to skip the reasoning. That part is unaffected. The instruction is only for the user-visible output.
By the time a reasoning model gets to writing the output you see, it has already decided what it is going to say. The answer is based on whatever it concluded while reasoning, so it doesn't matter whether you tell it to put the answer first or the explanation first: it already knows both by the time it starts outputting either.
You're basically hoping that adding more CoT to the output after reasoning will improve answer quality. It won't: the model has already done far more CoT while reasoning, and its answer is already decided by then.