That's a real risk and I'd be lying if I said it never happens. But the distinction I'd draw is between using AI to generate conclusions vs using it to stress-test yours. Thinking FOR you vs thinking WITH you. When I say rubber ducking I mean something closer to what borski described — "fight and engage me on my ideas" — not "tell me what to think."

The steering problem is worst when you go in without any position of your own. If you sit down and say "what should I write about X," then yeah, you risk ending up thinking whatever the model thinks. But if you sit down with even a half-formed argument and some conviction and say "here's what I think, build on it and poke holes in it," the dynamic is completely different. You're still driving. You still need to maintain meta-awareness of how your thinking might be shifting in response to the AI, but you remain in control.

That said, I think the editor vs thinking tool framing is a false binary. The best use I've found is somewhere in between — more adversarial than an editor, less open-ended than brainstorming. Basically alternating between building on an idea together and structured disagreement.

If you let it, sure. But I don't go into a session asking "what should I write." Rather, I ask it to fight me on my ideas, so I can stress-test the logic behind them — which is precisely what I do with humans too.

Only with humans, it's admittedly way more fun. :)
