I don't love what Discord is doing, but where are you getting the idea that Discord is going to estimate the user's age using "some AI garbage tool"? The article says everyone is on "child" mode by default, and verification is only required if you want to use features or access content marked as adult-only.
reply
> but where are you getting the idea that discord is going to estimate the user's age using "some AI garbage tool"

"Additionally, Discord will implement its age inference model, a new system that runs in the background to help determine whether an account belongs to an adult, without always requiring users to verify their age"[0]

0: https://discord.com/press-releases/discord-launches-teen-by-...

reply
One aspect is already implemented: you open your webcam and it uses an AI tool to figure out if you are of age.

This is obviously ineffective, but I must admit it's a bit of a boon for privacy enthusiasts, as you can pretty easily fake the webcam using a game engine. Presumably someone will make a purpose-built tool.
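
To make the "fake the webcam" point concrete: a minimal sketch, assuming the `pyvirtualcam` library and a virtual-camera driver (such as the one OBS installs) are available. The synthetic gradient here is just a stand-in for a game-engine render; none of this is Discord-specific.

```python
import numpy as np

WIDTH, HEIGHT, FPS = 640, 480, 30

def synthetic_frame(t: int) -> np.ndarray:
    """Stand-in for a game-engine render: an RGB gradient that shifts over time."""
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    frame[:, :, 0] = (np.arange(WIDTH) + t) % 256        # horizontal red ramp, scrolls with t
    frame[:, :, 1] = (np.arange(HEIGHT) % 256)[:, None]  # vertical green ramp
    return frame

# Streaming the frames to a virtual camera would look roughly like this
# (pyvirtualcam usage is an assumption; it needs a virtual-camera driver installed):
#
# import pyvirtualcam
# with pyvirtualcam.Camera(width=WIDTH, height=HEIGHT, fps=FPS) as cam:
#     for t in range(FPS * 10):  # stream ~10 seconds
#         cam.send(synthetic_frame(t))
#         cam.sleep_until_next_frame()
```

Any application reading the virtual camera (a browser, a desktop client) would see these frames as ordinary webcam input, which is why webcam-based checks are so easy to subvert.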

As well, if you aren't going to subvert it and are willing to tie your identity to the Discord account, it is still better than submitting a government ID.

reply
I know of at least one person whose child was flagged as 17 when they were 14. That seems like a mistake that should never, ever, ever happen if your goal is safety. The software sucks. The methodology sucks. The rationale is flimsy at best.
reply
"Never, ever" is quite strong wording when you're in an arms race with 14-year-olds who want to gain illicit access to something digital. I know everyone's a digital native these days and real life isn't a 90s hacker movie, but "rarely" already seems like a pretty high bar given how ingenious a 14-year-old deprived of their preferred entertainment can get.
reply
If your goal and reasoning is child safety, it's a big issue that this can happen at all. My point is that these tools are unreliable: a problem that cannot be fixed is being used as justification for a big privacy invasion.

I was 14 once too, that’s how I got into what I do now.

reply
Not to mention it introduces different threats to safety when additional personal information of yours is made available to an entity you cannot audit in an industry famous for redefining privacy to mean "your data or derivatives of your data can be infinitely shared and sold and resold with little-to-no consequence".

It introduces the threat of being personally unmasked to anyone and everyone in the event the verification system (or a component thereof containing your personal info) is hacked and the data dumped to the public.

It introduces the threat of your data being sold around with the "ground truth" of your identity and photo associated with it.

And even if these threats aren't realized... it happens often enough with related companies that the uncertainty will forever be there.

The threat of public humiliation.

The threat of losing your job.

The threat of losing your social connections.

The threat of personal assault.

All of these come to mind as concrete threats that have played out when someone has been doxed by a malicious person.

And now the risk and consequence of doxing is made so much worse when your government ID is associated with chats that are ostensibly private.

reply
I’ve mentioned publicly before that I got randomly shadowbanned on LinkedIn by these invasive “security” checks, for no reason. It ended up costing me money, because at the time I mostly used that network to actually network and look for consulting opportunities. And to this day I have no real way of knowing what they know about me or how they’re using the facial data I did provide.

There was nothing from my POV that should have been flagged, but between the unreliable way they flag users and the invasive ID verification checks (that don’t work), I had to opt myself out of the platform, which is really stupid to me given that I was a paying Pro user for 10+ years. All of these platforms have the capability to easily do the same. Whether it’s triggered by something benign or malicious is irrelevant: the tech simply doesn’t work, and the people who control how it works have questionable motives.

So I have to ask: why? You’re getting at the reason, I think.
reply
Then, out of respect for your view that children’s safety must be the absolute top priority and that false positives must never, ever be tolerated, let’s require people to personally visit Discord’s office in the United States with a government-issued ID, have it inspected, and formally swear an oath. Of course, Discord will retain the ID and the person’s facial photograph for a semi-permanent period. Naturally, that’s perfectly acceptable—after all, it’s for the safety of the children, right?
reply
What I read explicitly stated that AI would be involved in their guesswork about whether a user is or is not an adult.

Also - the outcry here isn't from people who think they will no longer be able to use Discord in any way, shape, or form without going through an age verification process. That's a bizarre strawman that doesn't represent the main grievances being aired.

reply