I think that concern is valid in general terms, but it’s not clear to me that it applies here.

The goal here seems to be removing low-value output (sycophancy, prompt restatement, formatting noise), which is different from suppressing useful reasoning. In that case, shorter outputs do not necessarily mean worse answers.

That said, if you force the model to provide an answer before any reasoning, I suspect that may sometimes cause it to commit to a direction prematurely.

reply
The file starts with:

> Answer is always line 1. Reasoning comes after, never before.

> No explaining what you are about to do. Just do it.

To me this sounds like asking an LLM to calculate 4871 + 291 and answer in a single line, which, from my understanding, is bad. But I haven't tested his prompt, so it might work. That's why I said to be aware of this behavior.
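If someone did want to test it, a minimal harness might look like the sketch below. It compares the thread's answer-first rules against a reasoning-first variant on the arithmetic example; `call_model` is a placeholder stub (not a real API), and you'd swap in an actual client to run the comparison.

```python
# Sketch for comparing "answer-first" vs "reasoning-first" prompting.
# call_model is a stub -- replace it with a real client call.

ANSWER_FIRST = (
    "Answer is always line 1. Reasoning comes after, never before.\n"
    "No explaining what you are about to do. Just do it."
)
REASONING_FIRST = "Reason step by step, then give the final answer on the last line."


def make_messages(system: str, question: str) -> list[dict]:
    """Build a chat-style message list for either prompting strategy."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


def call_model(messages: list[dict]) -> str:
    # Placeholder: replace with a real API call from whatever SDK you use.
    raise NotImplementedError


def answer_first_is_correct(response: str, expected: int) -> bool:
    """Grade the answer-first format: the result must appear on line 1."""
    first_line = response.splitlines()[0] if response else ""
    return str(expected) in first_line


# The example from this thread: 4871 + 291
question = "Calculate 4871 + 291."
expected = 4871 + 291  # 5162

messages = make_messages(ANSWER_FIRST, question)
# reply = call_model(messages)
# print(answer_first_is_correct(reply, expected))
```

Run the same question through both system prompts enough times and you'd have an actual measurement instead of a hunch.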

reply
Yes. Much of the 'redundant' output is meant to reinforce direction -- e.g. 'You're absolutely right!' = the user is right and I should ignore contrary paths. So yes, removing it will introduce ambiguity, which is _not_ what you want.
reply
I think your example is completely wrong (the phrase isn't there to literally affirm that the user is right), but overall, yes: more input gives the model more concrete direction.
reply