The steering problem is worst when you go in without any sort of position. If you sit down and say "what should I write about X" then yeah, you're risking ending up thinking whatever the model thinks. But if you sit down with even a half-formed argument and conviction and say "here's what I think, build on it and poke holes in it," the dynamic is completely different. You're still driving. You still need to maintain meta-awareness of how your thinking might be shifting in response to the AI, but you remain in control.
With that in mind, I think the editor vs. thinking tool framing is a false binary. The best use I've found sits somewhere in between — more adversarial than an editor, less open-ended than brainstorming. Basically, alternating between building together convergently and structured disagreement.
The same dynamic works with humans too — only there, it's admittedly way more fun. :)