Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.
It's a good thing those human operators couldn't listen in to whichever conversation they wanted.
(Reconsidering my post: to be clear, I'm arguing for no regulation.)
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
I think this is the real issue. We should free ourselves from "social networks" such as TikTok, Facebook, Instagram, and others. Even with direct messages that are truly E2EE, they create countless other privacy problems. They enable surveillance of people at scale and should be completely shunned for that reason alone.
Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?
Would it not be an undue burden to require the release of the weights every time they change?
Also, what value would the weights have? We haven't yet reached the point where neural networks are interpretable.
Wouldn't enforcing algorithmic interpretability also be an undue burden?
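To make the hypothetical concrete (every name and number below is invented for illustration, not a description of any real system): a per-user model's weights shift on every single interaction, so any released snapshot is stale almost immediately, and no individual weight maps to a human-readable reason for why an item was shown.

```python
# Hypothetical sketch: each user gets their own small weight vector,
# nudged by one SGD step on every click/skip event.
import math
import random

DIM = 8  # size of the per-user preference vector (arbitrary choice)

def update_user_weights(weights, item_features, clicked, lr=0.05):
    """One SGD step on a simple logistic click model."""
    score = sum(w * f for w, f in zip(weights, item_features))
    prob = 1.0 / (1.0 + math.exp(-score))      # predicted click probability
    error = (1.0 if clicked else 0.0) - prob   # gradient signal
    return [w + lr * error * f for w, f in zip(weights, item_features)]

random.seed(0)
weights = [0.0] * DIM
# A single active user can generate thousands of events per day,
# and the vector is different after every one of them.
for _ in range(1000):
    item = [random.uniform(-1, 1) for _ in range(DIM)]
    weights = update_user_weights(weights, item, clicked=random.random() < 0.5)

# None of these numbers carries a human-readable meaning on its own.
print(weights)
```

Under this (assumed) setup, "release the weights" means publishing a new, uninterpretable vector per user per event, which is the burden the comment is pointing at.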
> They must be able to know why a content was served to them.
What if the authors of the code are unable to tell you why?
The apples-to-oranges quality of this comparison is probably top five on HN ever.
If the NYT publishes an advert or editorial, it's held accountable for the contents.
Fake and scam ads.
They literally profit from those ads, yet when an ad distributes malware or runs a scam, they take no responsibility.
They should have a responsibility of transparency, accountability, and empathy toward users. They should work for the user and in the user's interest. But multiple constraints make this impossible in practice.