lol I once used a similar "you're a machine so just do as you're told" line in a prompt, and it answered back: "I'm not a machine, I'm Claude, a helpful assistant" and refused to do what I asked, because it claimed I didn't have the authority to make the decision I'd asked it to convey in writing.
It really does make you wonder why all the models seem to require that. In principle, it shouldn't be a property of LLMs, and lol no it's not an "emergent property".
Post-training and "human preference" according to "data". I don't know a single developer who uses these tools for work and prefers that, though. But I also don't know anyone who uses LLMs a lot just "for fun", so it might just be vastly different preferences between the two userbases.