We’re years into the industry leaning into “chain of thought” prompting and then “thinking” models built on this premise: forcing more token usage to avoid premature conclusions and to notice contradictions (I sometimes see this reasoning leak into the final output). You may remember that in the early days users themselves had to say “think deeply,” or follow up a response with “now check your work,” and the model would often catch its own one-shot mistakes.

So it must have been studied, and at minimum proven effective in practice, to be so universally used now.

Someone else posted a few articles like this in the thread above but there’s probably more and better ones if you search. https://news.ycombinator.com/item?id=47647907
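The manual pattern described above (prompt for step-by-step reasoning, then feed the draft back for verification) can be sketched roughly like this. `call_model` is a hypothetical stand-in for any chat-completion API, stubbed here so the control flow runs without credentials:

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"[model response to: {prompt[:40]}...]"

def answer_with_self_check(question: str) -> str:
    # Pass 1: encourage intermediate reasoning ("chain of thought").
    draft = call_model(f"Think step by step, then answer:\n{question}")
    # Pass 2: feed the draft back and ask for verification,
    # mimicking the early manual "now check your work" follow-up.
    checked = call_model(
        f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
        "Check this work for mistakes and give a corrected final answer."
    )
    return checked
```

Thinking models essentially bake this second pass into a single generation by reasoning in hidden or visible scratch tokens before the final answer.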

I have seen a paper, though I can’t find it right now, showing that phrasing your prompt in expert language produces better results than layman language. The idea being that correct answers are probably closer in the training data to how experts talk about a topic, so the model associates the two, whereas laymen talking about the same subject more often get it wrong.