upvote
What even is the problem? I keep my kids' computers in the living room, where it's easy to see what they're doing. Their LAN shuts down at night while I'm asleep. They don't get full control of their own cell phones until they're around 16 years old. Bots on social media discourage me from using it, which is a Good Thing if you ask me.
reply
The problem is that companies have a legitimate reason to want to block AI agents and verify that their users are actually real. And that's incredibly difficult now that the old methods of clicking on squares or reading blurry words no longer work.

Solving proof of humanity is very difficult without tying it to some kind of ID that is hard to replicate or automate.

reply
deleted
reply
> Bots on social media

... are not problems, no - but bots in general are

reply
> Are there open and privacy-preserving standards that can solve the problem of bots and minors? If not, what would be required to establish one, and is it realistic?

Ideally there shouldn't be standards for this. What we have already is enough.

Companies claiming they are locking down their services/devices to protect users is total BS. Facebook has admitted that 10% of its ad revenue comes from scams, and that's the reason they won't go after scammers on their platforms.

The same can be said for Google. They could come up with numerous ways to block bots or make captchas harder for actual bots (while also not flagging every non-Chrome user as a potential bot, like they do nowadays), but they pretend this is an unsolvable problem that requires a nuclear solution: it used to be Web DRM, but now it's called Fraud Defense.

reply