The agent serves a principal, who in theory should have principles, but based on early results that seems unlikely.
I think we're at the stage where we want AIs to be truly agentic, but they're really loose cannons. I'm probably the last person to call for more regulation, but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.
I agree. With rights come responsibilities. Letting something loose and then claiming it's not your fault is just the sort of thing that prompts those "Something must be done about this!!" regulations, enshrining half-baked ideas (that rarely solve the problem anyway) in law.
> but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.

You ought to be held responsible for what it does whether you are closely supervising it or not.

I don’t think there is a snowball’s chance in hell that either of these two scenarios will happen:

1. Human principals pay for autonomous AI agents to represent them, but accept the blame and lawsuits themselves.

2. Companies selling AI products and services accept blame and lawsuits for actions agents perform on behalf of humans.

Likely realities:

1. Any victims will have to deal with the problems themselves.

2. Human principals accept responsibility, then stop paying for the AI service after enough of them are burned by some "rogue" agent.
