"I literally requested no screw ups, and this is a screw up"

I bet these people are bad at managing humans too.

reply
Maybe - humans have agency; they understand actions and their consequences.

AI agents do not have agency(!); they have no understanding of consequences. They actually have no understanding. At all.

reply
He blames everyone and everything for his own bad decisions. He's surely unbearable.
reply
I have the opposite view - LLMs have many similarities with humans. A human, especially a poorly trained one, could have made the same mistake. A human with amnesia could have come up with reasons similar to that LLM's.

While LLMs generate "plausible text", humans just generate "plausible thoughts".

reply
Just because it sounds coherent doesn't mean it is. You can make up a false equivalence for anything if you try hard enough: a sheet of plywood also has many similarities with humans (made of carbon, contains water, breaks when hit hard enough), but that doesn't mean they are even remotely equal.
reply
Humans also don't follow the rules they're given. Otherwise we wouldn't need jails. We wouldn't need any security. We wouldn't even need user accounts.
reply
Humans are able to follow rules. If you tell someone "don't press the History Eraser Button", and they decide they agree with the rule, they won't press the button except by accident. If they really believe in the importance of the rule, they will take measures to stop themselves from accidentally pressing it, and if they believe it's truly important, they'll take measures to stop anyone from pressing it at all.

No matter how strongly you instruct an LLM not to press the History Eraser Button, the mere fact that the button has been mentioned raises the probability that it will press it.
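
A toy numeric sketch of the effect - invented scores over two actions run through a softmax, not any real model API:

    import math

    def softmax(scores):
        exps = {a: math.exp(s) for a, s in scores.items()}
        total = sum(exps.values())
        return {a: e / total for a, e in exps.items()}

    # Before the prompt mentions the button, the action is barely salient.
    before = {"answer_question": 2.0, "press_history_eraser": -9.0}
    # Mentioning it, even to forbid it, puts its tokens in context and
    # nudges its score up. The numbers are invented for illustration.
    after = {"answer_question": 2.0, "press_history_eraser": -5.0}

    for label, scores in (("unmentioned", before), ("forbidden", after)):
        p = softmax(scores)["press_history_eraser"]
        print(f"{label}: P(press) = {p:.2e}")
    # The warning raised P(press) from ~1.7e-05 to ~9.1e-04, and no
    # prompt can force it to exactly zero.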

reply
I don't mean that in a small way (i.e. that they sometimes don't follow rules); I mean it in the more important sense that they don't have a sense of right or wrong, and the instructions we give them are just more context - they are not hard constraints, as most humans would see them.

This leads to endless frustration as people try to use text to constrain what LLMs generate; it's fundamentally never going to work because of how they function.
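
A constraint that actually binds has to be enforced outside the model. A minimal sketch - the names (model_pick_tool, run_turn, TOOLS) are made up for illustration, not any real agent framework:

    import random

    TOOLS = ["read_file", "answer_question", "press_history_eraser"]
    ALLOWED = {"read_file", "answer_question"}  # the real constraint lives in code

    def model_pick_tool(prompt: str) -> str:
        # Stand-in for an LLM: prompt text only shifts the odds, never to zero.
        weights = [5.0, 5.0, 0.1 if "don't press" in prompt else 1.0]
        return random.choices(TOOLS, weights=weights)[0]

    def run_turn(prompt: str) -> str:
        tool = model_pick_tool(prompt)
        if tool not in ALLOWED:  # a check the model cannot talk its way past
            return f"blocked: {tool}"
        return f"ran: {tool}"

    random.seed(1)
    print([run_turn("help me, and don't press the eraser") for _ in range(8)])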

reply