The personification seems to be at the training level. When I ask an LLM why it did something destructive, the ideal response would be a matter-of-fact evaluation of the mistakes that I myself made in setting up the agent and its environment, and how to prevent them from happening again. Instead, the model has been trained to apologize and list exactly what it did wrong, without any suggestions for how to actually prevent it in the future.
reply
100% this. AI's perversion toward fluffing human egos is rewarded.

I had a PM-turned-vibe-coder tell me "Talking with you is the only bad part of my week" and realized in horror that the rest of his week is spent exclusively talking to sycophantic AI.

We have met the enemy, and he is us.

reply
You forget that the people running these companies have near-zero understanding of what an LLM is, and rely solely on their personal experience and social media hype.

I'm inclined to believe that they have also outsourced their thinking process to agents. It's useless trying to talk sense into them. Let them crash and burn, and pray there will be something left working after all this madness ends.

reply
Shouting at them is like shouting at your chainsaw after it just chopped off your foot.
reply
*after you chopped off your own foot by using the tool poorly
reply
I agree with you, but I feel like this piece is meant as a cautionary tale for CEOs and the like: don't treat these agents as real engineers.
reply
It is a bit silly, yes. But Opus sometimes gives answers like "I am not allowed to do X" and then brags about doing it anyway. So it is not just a hindsight thing.
reply