> What's being delivered now is an agent running on someone else's computer, copying your data to someone else's database, with zero responsibility or mandate to protect that data and not share it with anyone else (in fact, they almost always promise to share it with their thousand partners). It offers suggestions and preferences based on someone else's so-called recommendations, influenced by whoever is paying the agent's operators, with increasing pressure to make someone else's computers and agents the only way to interact with other people and systems.

If we're going to have AI regulation, this is where to start. If a company's AI service acts for a user, the company has non-disclaimable financial responsibility for anything that goes wrong. There's an area of law called "agency", which covers the liability of an employer for the actions of its employees. The law of agency should apply to AI agents. One court already did that: an airline's AI chatbot gave wrong but reasonable-sounding advice on fares, a customer made a decision based on that advice, and the court held that the AI's advice was binding on the company, even though honoring it cost the company money.

This is something lawyers and politicians can understand, because there's settled law on this for human agents.

reply
A few decades back, a lot of computer use was email. And it was stored on someone else's servers, with everyone from the server operators along the route to the government potentially having access to it. Even HTTPS is a relatively recent thing.

I guess what I'm saying is - we've always had this problem.

reply
Yeah, there have always been gaps in privacy, but nowadays it's several orders of magnitude easier for corporations to exploit that private data at scale.
reply
Snail mail is also not secure and can be tampered with. I don’t mind someone hosting my mail. But I do mind Google doing it (based on their behavior).
reply