I have never seen a model be "lazy" before (though I have seen them go for the minimal change). I have been using the models through the API with various agents and no custom system prompt.
So I am curious, how do people get these lazy outputs?
Is it by having one of those custom system prompts that basically tells the model to be disrespectful?
I have seen some people complain about a new tendency where it suggests wrapping up the current task even though it isn't done yet. I haven't seen it myself, though.
Usually this gets worse if you have a phrase like "wrap it up" earlier in the output, or if you're at a few hundred thousand tokens without compacting.
In both cases the fix is really simple: just compact.