(www.bbc.com)
Like, it's literally a platform that was run under the watchful eye of the CCP, and now the US version is some kleptocratic nightmare, so I just don't see the point in expecting some sort of principled stance out of them.
In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform. If you're going to embrace 'privacy' I do think it's on you to also then put additional resources into tackling the downsides of that.
IMO no consumer service should have private 1:1 messaging without e2e. Either only do public messaging (ie. Like a forum), or implement e2e.
It's better that they're honest about this; nobody should believe for a second that WhatsApp or FB messages are truly E2EE.
DM on social media shouldn't be used for anything remotely private. It's a convenience feature, nothing more.
Meta still tracks analytics, which isn't good for privacy, but I'm not aware of any news of them or 3rd parties reading messages without the consent of one of the 1st parties. Signal is probably much better, though.
Correct. WhatsApp uses the Signal protocol, and there is zero evidence of them reading message contents except with the consent of one of the users involved (such as a user reporting a message for moderation purposes).
(And before anyone takes issue with that last qualifier, consent from at least one party is the bar for secure communications on any platform, Signal included. If you don't trust the person you are communicating with, no amount of encryption will protect you).
Discovering a backdoor in WhatsApp for Facebook/Meta to read messages would be a career-defining finding for a security researcher, so it's not like this is some topic nobody has ever thought to investigate.
Yet. Until they say "we delete these messages after X time and they are gone gone, and we're not reading them," assume they are reading them, or will read them and the information just hasn't gotten out yet.
I mean, we keep finding more and more cases where companies like FB and Google were reading messages years ago, and it wasn't until now that we found out.
They never had the plaintext of the messages in the first place, so they don't need to delete them. That's what end-to-end encrypted means.
In the former case, Facebook can decrypt the messages at will, and the e2ee only protects against hackers, not Facebook itself, nor against law enforcement, since if Facebook has the decryption key they can be legally compelled to hand it over (and probably would voluntarily, going by their history).
It may not be called that, but what are users expecting? Some folks may later be surprised when a warrant gets issued (e.g., from a divorce judge).
"You moved into a neighborhood with lead pipes? That's on you, should have done more research" "Your vitamins contained undisclosed allergens? You're an adult, and it didn't say it DIDN'T contain those" "Passwords stolen because your provider stored them in plaintext? They never claimed to store them securely, so it's really on you"
Also consider what this means for open source. No hobbyist can ship an IM app if they don't go all the way and E2E encrypt (and security audit) the damn thing. The barriers of entry this creates are huge and very beneficial for the already powerful since they can afford to deal with this stuff from day one.
Telephones can be tapped, people sold special boxes that would encrypt/decrypt that audio before passing it to the phone or to the ear. Mail can be opened, covertly or not. AIM was in the clear (I think at one point, fully in the clear, later probably in the clear as far as the aol servers were concerned)...
Unless the app/method is directly lying to users about being e2ee, it's not a slippery slope, it's the status quo. Now, there are some apps out there that I think I've seen that are lying. They claim they are 'encrypted' but fail to clarify that it's only private on the wire, like the AIM story: the message is encrypted while it flies to the 'switchboard', where it's plaintext, and then it's wrapped in encryption again on the wire to send it to the recipient.
The claim here that actually makes me chuckle is somehow trying to paint e2ee as 'unsafe' for users.
Unfortunately, this doesn't scale.
Obviously, one way to improve the situation would be to make sure people are paid fairly and not overworked and have access to good and affordable or free childcare and elder-care and medical care, but corporations don't want that either. If anything, they're incentivised to disempower workers and keep them uninformed, and to get as much time out of them as they can for as little money as possible.
same discussion for any form of technology be it TVs or changing their car's oil
the deliberate app-store-ification of all things computer is also designed to keep people from asking those questions -- just download it and install, pleb.
it's why the Zoomers can't email attachments or change file types: all of the computers they grew up with were designed so they never had to understand what happens under the hood.
People can't be knowledgeable about everything. There's just too much information in the world, and too many different skills that could be learned, and not enough time.
A carpenter can rely on power tools without understanding fully how the tools work, and it's fine, as long as the tools are made to safe standards and the user understands basic safety instructions (e.g. wear protective eyewear).
To me, making sure that apps don't screw with people, even if they don't understand how the apps work, is roughly the equivalent of making sure power drills are made safely so they don't explode in peoples' hands.
Now TikTok wants to be a messaging app. Snapchat has a short video feed just like TikTok. WhatsApp only has a text feed, how long until they also add a video feed?
That's interesting. You think all the firms that audited WhatsApp and the Signal protocol it uses, and all the programmers who worked there for decades and could see a lie and leak it if it were true, are all crooks? Valid opinion I guess, but I wouldn't call it "no one should believe for a second".
(curious you didn't mention Telegram, it is actually marketed as secure and e2e and it has completely gimped "secret chats" that are off by default and used by like almost nobody.)
Also, backups have nothing to do with the messages being end-to-end encrypted. Like if you don't use a passcode on the phone, the messages are still encrypted.
iMessage also syncs to iCloud unencrypted by default[2].
[1] Depends on you paying for iCloud storage, so that you have space for a full phone backup to occur.
[2] Might be "free" with "iMessage in iCloud", an option to enable separately.
Not true. You must choose whether to enable it or not when you set up a new phone. On mine it does not back up.
Additionally I think it is fine to say "we don't support e2ee". I prefer honesty to a bad (leaky) e2ee implementation, at least the user can make an informed choice.
Yeah, but it's kind of accepted that the forum owner could read it all if they so chose. Maybe this is a holdover from the old days, when forums arose and encryption was nowhere near the default.
for all intents and purposes email is not e2ee.
The intended payload can be in an header-less encrypted file on a throw-away SFTP server in the tmpfs ram disk.
I understand that metadata is valuable information for spies/governments and that encrypting or hiding it is valuable for privacy. But if you use that definition, there are almost no E2EE protocols on the planet in use.
First and foremost, any protocol that uses Apple or Google push notifications is giving metadata to those organizations. Even WhatsApp, iMessage, Signal, and Telegram private messages leak metadata, but the contents of messages are hidden from the provider.
I know, right? I admit that is mostly for people on Linux desktops. People on smart phones are 100% monitored regardless of encryption or fake E2EE that platforms pinky promise is really E2EE like Signal. Shame on Moxie, he knows better.
Ovaltine has a crapload of sugar. Don't drink that horse piss.
Once you have enormous network effect like TikTok has, you don't really have any free selection of alternative apps. You are free to use one, but you will be the only sad user over there.
Regulations are needed that would force large platforms like TikTok and Instagram to enable federation, opening them up to actual competition. This way platforms would be able to compete on monetisation and usability, instead of competing on locking in their precious users more strictly.
> MySpace is well on the way to becoming what economists call a "natural monopoly". Users have invested so much social capital in putting up data about themselves it is not worth their changing sites, especially since every new user that MySpace attracts adds to its value as a network of interacting people.
> "In social networking, there is a huge advantage to have scale. You can find almost anyone on MySpace and the more time that has been invested in the site, the more locked in people are".
https://www.theguardian.com/technology/2007/feb/08/business....
And nobody gained privacy in the process (I rather think everyone lost even more of it).
The situation currently permits only a tiny number of winning companies at a time, and the userbase is locked in even as the site becomes wildly unpopular, until some threshold of discontent is reached, and then everyone moves, and then that new site also enshittifies and the cycle repeats.
Federation is a mechanism whereby people would be able to actually choose providers as individuals and at any time, instead of having to wait years for a critical mass of upset people to build up and leave [current most popular social media site], and instead of being forced to go to [new most popular social media site].
Lolololol. No, not regulations. Regulators. With the people we currently have voted into office in the US the only regulations we are going to get are ones saying Sam and Peter must look at everything you do all the time.
Until we stop voting for more authoritarianism, expect ever increasing amounts of authoritarianism.
But bullshitting that it makes users more safe, that is... bullshit! Worse than that, it distorts public opinion and intentionally fools the gullible.
They are lying straight off though... police and safety team don't read messages only "if they needed to" to keep people safe. They do so for a large variety of other reasons, such as suppressing political dissent and asserting domination and control.
I don't think we can expect most people to understand TikTok's BS here either. I notice even a skeptic like you is uncritically echoing the dubious conflation of privacy and CSAM.
Good implementations of E2EE:
1. Generate the key pairs on device, and the private key is never seen by the server nor accessible via any server push triggered code.
2. If an encrypted form of the private key is sent to the server for convenience, it needs to be encrypted with a password with enough bits of entropy to prevent people who have access to the server from being able to brute force decode it.
3. Have an open-source implementation of the client app facilitating verifiability of (1) and (2)
4. Permit the users to self-compile and use the open-source implementation
If a company isn't willing to do this, I'd rather they not call it E2EE and dupe the public into thinking they're safe from bad actors.
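To put rough numbers on point 2, here's a minimal back-of-the-envelope sketch of why the password protecting a server-stored private key needs real entropy. The 7776-word alphabet is the size of a Diceware-style word list; the comparison is my own illustration, not from the thread. (A key-derivation function adds a constant work factor per guess, but can't rescue a low-entropy secret.)

```python
import math

# Entropy in bits of a secret drawn uniformly at random from an alphabet.
# If an encrypted private key blob sits on the server, this is roughly the
# search space protecting it from whoever holds that blob.
def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

pin6 = entropy_bits(10, 6)       # 6-digit PIN: ~20 bits, trivially brute-forceable offline
words6 = entropy_bits(7776, 6)   # 6-word Diceware-style passphrase: ~78 bits
print(f"6-digit PIN: {pin6:.1f} bits, 6-word passphrase: {words6:.1f} bits")
```

The gap matters because the server operator gets unlimited offline guesses against the blob: 2^20 is milliseconds of work, 2^78 is not.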
It’s at best subpar for the same reasons as if it was the usual Silicon Valley spyware.
I could leave well enough alone. But why? Because there are choices? There are five other brands of cereal that do not have 25% sugar? I’d rather be a negative nancy towards these on-purpose addictive, privacy-leaking attention pimp apps.
The logic of "anything is better than before" is also fallacious.
If it's E2EE, no one except the sender and receiver know about this conversation. You want an MITM in this case to detect/block such things or at least keep record of what's going on for a subpoena.
I agree that every messaging platform in the world shouldn't be MITM'd, but every messaging platform doesn't need to be E2EE'd either.
I'm not saying no E2E messaging apps should exist, but maybe it doesn't need to for minors in social media apps. However, an alternative could be allowing the sharing of the encryption key with a parent so that there is the ability for someone to monitor messages.
Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant? People are paranoid about this sort of thing not because they think law enforcement is more effective when it is constrained, but because how easily crimes can be prosecuted is only one dimension of safety.
> However, an alternative could be allowing the sharing of the encryption key with a parent
Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?
Police can access your home with a warrant.
Police cannot access your E2EE DMs with a warrant.
> Police cannot access your E2EE DMs with a warrant.
They can and do, regularly. What they can't do is prevent you from deleting your DMs if you know you're under investigation and likely to be caught. But refusing to give up encryption keys, and suspiciously empty chat histories in the face of a valid warrant, are very good evidence of a crime in themselves.
They also can't prevent you from flushing drugs down the toilet, but somehow people are still convicted for drug-related crimes all the time. So - yes, obviously, the police could prosecute more crimes if we gave up this protection. That's how limitations on police power work.
Uh, it absolutely isn't? WTF dystopian idea is this?
Well the kind of can if they nab your cell phone or other device that has a valid access token.
I think it's kind of analogous to the police getting at one's safe. You might have removed the contents before they got there but that's your prerogative.
I think this results in acceptable tradeoffs.
We shouldn't make the world a worse place for everyone because some parents can't take care of their children.
See also: That time the FBI took over a CSAM site and kept it running so they could nab a bunch of users.
What's more dangerous? CSAM on the internet? Or actual child predators running loose?
Pick your definition of safe.
Similarly, in "traditional" media you may not want to discuss such a private conversation on a radio broadcast. Perhaps you would rather discuss it on the phone or over snail mail, as there is more of an expectation of privacy in those media.
What does the "p" in "pm" stand for?
I will update above
Sure, they can fabricate some evidence and get access to your messages, in which case, valid point.
E2E makes political activists and anti-Chinese dissidents safer, at the cost of making children less safe. Whether this is a worthwhile tradeoff is a political, not a technical, decision, but if we claim that there are any absolutes here, we just make sure that we'll never be taken seriously by anybody who matters.
What are children at risk of, when E2EE is not used?
Potential exposure to abusive adults.
> What are children at risk of, when E2EE is not used?
State-sanctioned violence.
As for TikTok's stance, I think they just don't want to get tangled up with the Chinese government over encryption (and give users a false sense of privacy).
Disagree. To analogize: privacy isn't heated seats, *it's seat belts*. Comfort features and preferences are fine to tailor to your customers and your business model. Jaguar targets a different market than Ford, and that's just fine.
Safety features should be non-negotiable for all. Both Jaguar and Ford drivers merit the utmost protection against injury in crashes. Likewise, all applications that offer user messaging functionality should offer non-defective, non-harmful versions of it. To do that, e2e privacy is absolutely necessary.
>I just don't see the point in expecting some sort of principled stance out of them.
This is the defeatism that adds momentum to a downhill trajectory. Exactly the opposite approach arrests the slide - users expecting their applications and providers to behave in principled ways, and punishing those who do not, are what keeps principles alive. Failing to expect lawful and upright behavior out of those you depend on, be they political leaders or software solutions providers, guarantees that tomorrow's behavior will be less lawful and upright than yesterday's. Stop writing these people a pass for this horrible behavior, and start holding them unreasonably accountable for it, then we'll see behavior start to change in the direction that we mostly all agree that it needs to.
The most effective protests against internet censorship came from massive grass roots movements, with users drawing a line in the sand that they will not tolerate further impositions on their freedom.
>In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform.
The irony is so manifest: billions of people having their privacy stripped by politicians and business elites in the name of protecting our children, while those politicians and business elites conspire en masse to prey on and sex-traffic our children. If these forces actually took those concerns seriously, rather than sensing them as an opportunity to push ulterior motives, they'd be eating each other alive right now. Half of DC, half of Hollywood, and at least a tenth of most major college administrations would ALL be in the dock.
We're talking about an app that's controlled by the CCP. I do expect them to take principled stances: stances like "Taiwan is a part of China" and "you can't be openly critical of the leader of the party". They don't have the same principles as you. You can force them to put in E2EE, but you can't force them to be honest or competent about it. I would rather know what we're getting than push them to lie.
This is the same thing as the OpenAI/Anthropic situation. You've got Anthropic taking a principled stance and taking pain for it, and you've got OpenAI claiming to take the same stance but somehow agreeing to the terms of the DoW. Do you think it's more likely that Anthropic carelessly caused themselves massive trouble, or that OpenAI is claiming to have won concessions that clearly won't work in practice? I think it's naive to think the former.
In the area of large scale internet service providers, who do you expect to take a principled stance, and why do you expect them to take it?
If the answer is, "nobody", then why keep singling out China? And if the answer isn't "nobody", then how do we apply the same pressures and principles to TikTok and other platforms that offer messaging?
This isn't some abstract concern. We know that WESTERN journalists, activists, and others have been murdered in acts of transnational repression that either began or were focused and abetted by communications surveillance aimed toward political dissidence. It seems incredibly naive to believe that current Western political and military leadership could ever be dissuaded from taking effective action (and such surveillance and repression campaigns certainly are effective) by moral qualms unsupported by strong checks and balances of accountability. In other words - this sort of repression most likely continues happening to journalists, activists, human rights lawyers, and other political dissidents, in our society, today. Enabled by the refusal of our service providers to protect us, their users.
It seems incredibly naive - civilization threateningly so - to write a pass to anyone, let alone Larry Ellison, for opting to deliberately expose "his" users to this risk. Nothing is OK about this dereliction of responsibility towards them.
Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally, we could provide service points to unlock devices when they turn 18, to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
China has restrictions for social media and screen time for kids — how do they implement this?
It's obvious we're moving in a direction where we are going to get these restrictions in one way or another, and this is the only way I've come up with that doesn't come with serious privacy implications.
Most importantly, this solution would be simple for anyone to understand. You don't need to be a cryptography expert to understand there are child safe devices and then there are unrestricted devices for adults.
If most adults were convinced there is an issue, devices probably have enough lock-down modes even nowadays; I'm not sure it is a "technical" problem.
I can also see large support for uploading ID to various services when talking about kids, but when you re-frame the question for adults, most seem to dislike the idea immensely.
Sure there will be children with access to unrestricted devices, just like we had kids with porn mags hidden in a forest somewhere back in the day, or how that one sketchy guy was buying alcohol, etc. But I think this is an acceptable level of risk for whatever harm people want to prevent.
Consider that even with something as divisive as covid lockdowns and vaccines, the overwhelming majority of people complied with government instructions.
There are a minority of people currently refusing to vaccinate their children properly, and their fucking around is being found out with measles outbreaks in various countries.
Why would this be different? Why wouldn't it be a minority of parents permitting their children to drink, to smoke, to use unrestricted computing resources?
Are you saying that kids now buy their phones with pocket money without their parents knowing?
> It's obvious we're moving in a direction where we are going to get these restrictions in one way or another
It’s not obvious, it’s just sad. I still hope reason will prevail in this.
I keep thinking that computers that are actually made to be good for children should be a thing. Perhaps like "A Young lady's Illustrated Primer" ( https://en.wikipedia.org/wiki/The_Diamond_Age )
https://www.technologyreview.com/2023/08/09/1077567/china-ch...
That describes something very similar to what the OP suggested.
> Essentially, this is a cross-platform, cross-device, government-led parental control system that has been painstakingly planned out by Beijing.
> The rules are incredibly specific: kids under eight, for instance, can only use smart devices for 40 minutes every day and only consume content about “elementary education, hobbies and interests, and liberal arts education”; when they turn eight, they graduate to 60 minutes of screen time and “entertainment content with positive guidance.” Honestly, this newsletter would have to go on forever to explain all the specifics.
We don’t do this in free societies. Let the parents decide.
Centralized power and being unafraid to use authoritarian tactics. Also the general cultural ethos of the people.
China is much more socially conservative, and less likely to abandon their kids to the latest thing.
I don't know about Korea but if memorizing an ID number works, then that's just a badly designed system.
I'm not sure what your argument is really, unless you're saying there's technically and absolutely no feasible way to securely verify the age of a person before allowing them to access an online service (even if you allow the government to be authoritarian)
The actual users of each SIM card did not have to identify themselves, so at least then it wasn't about age controls, but it obviously would allow tracing the owner eventually.
That's exactly how I'm doing technology. I sign my kid up for kid accounts. And I apply parental controls.
Notice that consumption of those things is also down for adults even though adults are not banned from getting them.
That’s why children must be free.
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on that information? If investigative resources are spread thin already, it's not like they're gonna catch more criminals without also investing more there. Repelling questionable individuals off the platform with lots of transparency is an effective approach, but just a specific tool for a symptom.
I think a part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially, the real problem is discovery and warding off of social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
A part of their e2e keys could be shared in an intentionally obtuse way, like mailing an item or a physical "friend code". That way parents and vetted friends can have their privacy. You don't need to tie an ID to someone's person to get positive confirmation of someone's poor behaviour. If someone crossed the line, then parents can see it and escalate. In addition, what would happen to a child with abusive parents, who could then arbitrarily restrict and deny a child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
This is actually a key consideration for the proposed implementation. The biggest issue for parents when restricting their children's online activity is that they simply don't understand the tools available for it.
By having a "child mode" iPhone, parents don't have to know any of that. They simply buy the iPhone Kids for their children and then get a plain iPhone for themselves.
If these restrictions were to actually be enforced by law as well, then it would make it very easy for teachers and other guardians to check if a device is appropriate for the child using it.
So if the teen phone turned into a restricted "call mom" device with no cameras, a neon-yellow obvious fuck-you coloring, and a restricted set of apps, and police took away full phones much like they take away cigs and beer, it might be enough to break the critical mass that creates this issue. They can have dedicated cameras for video club, use the family computer, have an Xbox or Switch, and have whatever tech experience millennials had, the last generation without exponential increases in anxiety, depression and sexlessness.
It's the covert camera + internet that's the key issue.
California is mandating OSes provide ages to app stores, and HN lost its mind because it's a ban on Linux.
They forgot to put in the provision which exempts apps that do not need an age rating? As in: everything OS-related.
Sounds like a good way to get rid of snap at least since that is where all the commercial bloat is located. Last time I did a fresh Debian install I do not remember installing any app from the os repository which would require age restrictions (afaik).
That's correct. You need to provide your age to install grep.
1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but creeps abuse that all the time. Even if you trust them not to do anything "wrong", it's a lot to put on their shoulders.
2. If you want to put restrictions in place, even if you're an expert, the tools out there are pretty wonky. You can set up a child-protection DNS, but most home routers don't make it easy (or even allow you) to set a different DNS server. And that's not particularly hard to circumvent. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I've got the problem on my plate that I need to migrate one of my kid's accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts, it seems the button to add the device is just missing? Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup and configuration you have to do for a reasonable setup is just nuts.
3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.
Some countries are banning social media - sure, that's one thing. But there's a _lot_ of weird places on the internet, kids will find something else. I for one would appreciate dedicated devices or modes for kids < 18. Would solve all this stuff in a heartbeat.
After providing their identities to prove they are adults, and having all their activities tracked wherever they go and whatever they do.
The first 18 years aren't freedom either, just the system prepping you for what's ahead.
I see you Mr Quaker Oats
ID please.
Seems entirely reasonable.
Possibly entirely ineffective, but then again I don't often see children walking around with a bottle of booze.
Or, in other words: If there is no alternative, this is due to your own faults. Either deal with it, or find ways to undo your mistakes.
Uh, Signal. SimpleX. Session. XMPP/OTR. PGP.
Discussing things on TikTok, that the government must not know about, seems a bad idea.
It’s ok for a platform to not feature private conversations. They should just have no DM feature at all, then; make all messages publicly visible.
Private conversations are indeed not for all ages. Parents should be able to grant access to that on an individual basis.
This makes no sense.
I can discuss something in a bar that is not a very private conversation; I wouldn't care if someone else overheard what I'm saying. But I also don't want someone to record it and post it on the internet to be seen by the whole world.
Privacy is not just a boolean you toggle somewhere.
To quote a comment I made some time ago:
- You can call your service e2e encrypted even if every client has the same key bundled into the binary, and rotate it from time to time when it's reversed.
- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.
- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.
E2EE only prevents naive middlemen from reading your messages.
It is a phrase that sounds good. But actually doing it effectively, in a way the average user understands and can use with minimal effort, is very hard.
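The key-server scenarios above can be sketched with a toy Diffie-Hellman exchange (tiny, insecure parameters, purely illustrative): if the service distributes the public keys, it can substitute its own and read both directions without either endpoint noticing.

```python
import secrets

# Toy finite-field Diffie-Hellman. NOT secure -- demo parameters only.
P, G = 997, 5  # tiny prime and generator

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Honest exchange: Alice and Bob derive the same shared secret.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)

# Now the key server substitutes its own public key in both directions.
m_priv, m_pub = keypair()
alice_secret = pow(m_pub, a_priv, P)  # Alice thinks this is shared with Bob
bob_secret = pow(m_pub, b_priv, P)    # Bob thinks this is shared with Alice

# The server derives both "shared" secrets, so it can decrypt and
# re-encrypt traffic in each direction, invisibly to either endpoint.
assert pow(a_pub, m_priv, P) == alice_secret
assert pow(b_pub, m_priv, P) == bob_secret
```

This is why out-of-band key verification (safety numbers, QR codes) exists: it's the only thing that catches a key server swapping keys.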
There are parents out there who would record and AI-analyze every single private conversation their kids have if only the technology enabled it.
During times in which I was unable to socialize irl (eg school holidays), and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.
So yeah, age verification should be scrapped, along with the datamining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
Hogwash.
Where are these mythical people who aren’t concerned with both?
People don't care about "what companies serve them". They only care if the children see sexual content (or things considered deviant). Once sexual and deviant content is filtered, they're happy to give away their children's development to the company's algos.
In effect, the people don't want to concern themselves with what their children consume, unless they're outraged by things normally taboo in their age group. Besides, if everyone is in it "it's not that wrong". They seek reactive entertainment rather than proactive engagement in their children's development.
They're called politicians.
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and take no responsibility for the effects they cause to society". Instead of them taking responsibility, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.
It's a good thing those human operators couldn't listen in to whichever conversation they wanted.
(Reconsider my post. I'm arguing for no regulation.)
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
I think this is the real issue. We should free ourselves from "social networks" such as Tiktok, Facebook, Instagram and others. Even with direct messages truly E2EE, they create countless other privacy problems. They enable surveillance of people at scale and should be completely shunned for that reason alone.
Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?
Would it not be an undue burden to necessitate the release of the weights every time they change?
Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.
Wouldn't enforcing algorithmic interpretability additionally be an undue burden?
> They must be able to know why a content was served to them.
What if the authors of the code are unable to tell you why?
The apples-to-oranges in this comparison is probably top five on HN ever.
If the NYT publishes an advert or editorial, it's held accountable for the contents.
Fake and scam ads.
They literally profit from those ads. When an ad distributes malware or runs a scam, they don't take any responsibility.
They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
Kids should be able to write a journal or talk to friends with total trust that this information will not reach their parents.
Yup, but the tools provided make that easy or hard.
But putting that emotive bit to one side, Megacorps have a vested interest in not being responsible to children. They need children's eye balls to drive advertising revenue. If that means sending them corrosive shit, then so be it.
It's a bigger issue than encryption; it's editorial choice.
The children yearn for the mines(?).
Many parental controls are massive pains to get working. Apple does fairly well (although I don't get a parental PIN to unlock the phone, which is normally fine as my child will tell me, but in some circumstances it wouldn't be), but it does require the parent to be on the Apple ecosystem too.
EA and Microsoft, however, are terrible, especially as it's likely the child will be playing Fortnite/Minecraft and the parent won't have ever touched it. I think with Minecraft we had to make something like 5 or 6 accounts across three different sites to allow online Minecraft play from a Nintendo Switch.
That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.
At some point between the age of 0 and 18 the child has to be fully ready for an independent world. A cliff edge is a terrible idea, allowing 3 year olds unmonitored uncontrolled conversations with strangers is a terrible idea, but not allowing 15 year olds to talk to their friends is a terrible idea.
Why?
> They already got so much data on their users
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
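A minimal sketch of the selective-disclosure idea behind such schemes (illustrative only: the issuer and its key are hypothetical, and an HMAC stands in for the public-key signature a real verifiable-credential system would use, so the verifier here shares the issuer's secret):

```python
# Sketch: the ID issuer attests only to the predicate "over 18".
# The social media service sees a signed boolean, never a birthdate.
import hashlib
import hmac
import json

ISSUER_SECRET = b"issuer-signing-key"  # hypothetical; held by the ID issuer


def issue_age_credential(is_over_18: bool) -> dict:
    """Issuer signs a claim containing only the age predicate."""
    claim = json.dumps({"over_18": is_over_18}).encode()
    tag = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}


def verify(credential: dict) -> bool:
    """Service checks the signature and reads only the boolean."""
    claim = credential["claim"].encode()
    expected = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["tag"])
            and json.loads(claim)["over_18"])


cred = issue_age_credential(True)
assert verify(cred)                      # service learns only "is old enough"
assert "birthdate" not in cred["claim"]  # no other PII crosses the wire
```

A production scheme would use asymmetric signatures (so the service needs no shared secret) and unlinkable presentations, but the privacy property is the same: only "is old enough" reaches the service.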
Allowing for more effective propaganda and electoral control, and setting fire to the concept of a government _representing_ anyone.
How so?
Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.
You need to 100% trust those verification services. And considering their success rate [1], you shouldn't.
[0] https://thinkingcybersecurity.com/DigitalID/
[1] https://discord.com/press-releases/update-on-security-incide...
First link - mitigation: use a well supported standard like OIDC, not a home-cooked scheme. Duh.
Second link - this is part of the problem such schemes as verifiable credentials are designed to address, random third parties collecting ID they don't need.
Yes, any system needs to be executed well. Neither of these really display that.
The point is that systems today aren't really well executed. So it is unreasonable to expect them to be well executed.
If you can't trust people not to build the bomb well - then don't let them build a bomb.
Who was talking about the government implementing it? I wasn't.
And also "This has been done poorly in the past so we should never attempt to do it again, better" seems an odd way to go about things. There are well put together schemes by international standards bodies in this area now. Neither of the above links followed them.
Because we don't believe anyone will ever use the standards in this area, despite loads of companies and government bodies actually using OIDC already?
I'm not really sure what you're driving at.
MyGovID _is_ an age verifier. Sorry. The successor after the rebrand is called myID [0], and is advertised as:
> myID is a secure way to prove who you are online.
---
> I'm not really sure what you're driving at.
Clearly. You seem to think that because it might one day be done correctly, by one group, the rest of the world is safe. However, over in this reality, we have fuck ups by governments and private corporations, who are the people the rest of the world actually deals with.
You cannot enforce these real groups, to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws. Because it will always be done the wrong way, by someone.
It's an identity scheme and SSO solution for accessing government services. As said at [0] in the "What is myID" section.
I sincerely hope that they're using something standard and well tested like OIDC behind the scenes this time, because otherwise it's ripe for another fuckup like the one you linked. If it is also used for age verification that appears to be secondary.
> You cannot enforce these real groups, to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws. Because it will always be done the wrong way, by someone.
So we need to stop the Australian government from ever using an SSO/identity solution again, because it can't be trusted to do it properly, having messed up in the past, and the rest of us have had to live with the consequences. And as they aren't the only ones to have messed up (companies do it all the time too), we should also ban all identity and SSO solutions, because that's what we're talking about in this thread: banning age verification, not mandating it.
I don't think you get to call out age validation as a uniquely hard problem that cannot possibly be made safe, but allow other identity-style services a pass. There are many areas in which we (through the government) can and do mandate good practice, both by government and private entities.
It's a sovereign identity verification service that is not limited to above-PL2 verifications. There are age-only accredited entities in the registry.
It's one of the approved verification tools for the Online Safety Act 2021. It was renamed as part of the passage of the law. You're just not forced to use it for verification.
And yes, it does it poorly, and does not follow a standard. It's using Vanguard's PAS behind the scenes [1], with ServiceNow extras tacked on. Until they rearchitect the entire damn thing.
So... as I might have doxxed myself a little just now... no, uploading identity documents is never a safe process. It's a king's hoard of treasure before nations that never sleep.
Name a provider, and there will be a breach, and it will continue to affect the victims most of their lives.
[1] https://www.sec.gov/enforcement-litigation/administrative-pr...
Perhaps what we're really saying is "Ban age verification that collects lots of personal information".
Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)
Which I think is a good thing, and should be strengthened further.
(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)
There is a problem with schemes like that.
The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.
The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.
Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.
/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.
Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".
Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.
There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.
(at least not yet)
To get it from Discord you need to sneeze.
The internet has scale and availability, that physical locations do not.
You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.
(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)
This is the next two steps into 1984.
Once you start mandating this, there's no going back.
The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)
I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).
I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.
Is it? I thought that was a logical fallacy?
> This is the next two steps into 1984.
How so?
> Once you start mandating this, there's no going back.
> The next generation will start associating wrongthink with government IDs.
Could you provide some more details on why you think this? For a start I talked about a scheme in which you don't hand over ID.
I don't see how verifiable credentials with zero knowledge proofs provide that however.
Once it gets big enough in your location you buy it for that sweet sweet intel.
They don't believe that. It makes it more difficult to deal with governments, is all. Big Brother needs your messages from time to time, and TikTok is not willing to risk getting shut down to argue against that. We can't have pesky principles getting in the way of money.
We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.
(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)
Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?
People want to believe in E2EE, it's almost like religion at this point.
Protecting people is synonymous with E2EE, even if you can't verify it and it can potentially be broken.
I was even more controversial and singled out Signal as an example: https://blog.dijit.sh/i-don-t-trust-signal/
Perhaps your e2ee is only securing your data in travel if their servers are considered the other end.
Also one thing people seem to misunderstand is that for most applications the conversation itself is not very interesting, the metadata (who to who, when, how many messages etc.) is 100x more valuable.
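A quick illustration of that point (the names and timestamps below are made up): even when every message body is opaque, metadata alone reconstructs a social graph and hints at the nature of relationships.

```python
# Sketch: metadata analysis on encrypted traffic. We never see message
# contents, only (sender, recipient, hour-of-day) tuples.
from collections import Counter

metadata = [
    ("alice", "bob", 2), ("alice", "bob", 2), ("alice", "bob", 3),
    ("alice", "doctor", 9), ("bob", "carol", 14),
]

# Who talks to whom most? The strongest edge in the social graph.
edges = Counter((sender, recipient) for sender, recipient, _ in metadata)
assert edges.most_common(1)[0][0] == ("alice", "bob")

# Late-night traffic alone suggests the nature of a relationship,
# and a single contact ("doctor") can reveal something sensitive.
late_night = [(s, r) for s, r, hour in metadata if hour < 5]
assert ("alice", "bob") in late_night
```

This is why traffic-analysis resistance (padding, sealed sender, minimal retention) matters independently of content encryption.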
Fixed a bit.
Smugly dismissing them doesn't do you any favors except for making you feel good about yourself for a few seconds.
• 43% of US 18-29 year olds regularly get news on TikTok
• Half of US adults get news on TikTok; 1 in 5 US adults "regularly" do so
• This is 2 points less than Twitter and two points more than Facebook
Data from Pew Research (Sep 2025): https://www.pewresearch.org/short-reads/2025/09/25/1-in-5-am...
I'm mindful that it's less secure than other apps, but for a lot of chats it doesn't matter.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
And in a perfect world essentially shouldn’t have to be, at least inside expensive walled garden app stores.
"controversial" according to who? The NSA / GCHQ?
The recent Meta lawsuits also mention opposition from the National Center for Missing and Exploited Children and Meta's own executives: Monika Bickert (head of content policy) and Antigone Davis (global head of safety). Both executives mention the danger end-to-end encryption poses to children when attached to a social media graph.
https://www.reuters.com/legal/government/meta-executive-warn...
So the fact that we welded a messaging platform onto a global child-discovery service is bad? Sure. Not encrypting that messaging platform is sort of closing the barn door after the horse has gone walkabout.
Hence why nobody is up in arms (in either direction) about e2e encryption for Chatroulette.
https://web.archive.org/web/https://www.devever.net/~hl/webc...
Same as with OS updates, browser updates, dependencies used by the OS, dependencies used by the browser. Also you can run malicious software such as keyloggers and you're compromised.
That argument doesn't mean E2E (even web based) is snake oil. Browsers just give you more points of failure.
For some companies (eg facebook, google, tiktok) i would be mostly worried about the company itself being untrustworthy. For others I would be mostly worried about the company being vulnerable.
Depends on who is defined as the other end, it may be that the company db is the other end.
Obviously not in somewhere like Hacker News where there’s a clear consensus, but if you asked a random sample of the UK population “should law enforcement be allowed to compel tech companies to hand over all DMs of confirmed paedophiles?”, I’d bet very good money the majority would say “yes”.
The notion that “Big Tech” can absolve themselves of the responsibility to help law enforcement find child abusers by saying “it’s all encrypted, not my problem”, does not sit well with a large sector of the population.
Whether it’s good or bad is an ultimately political question, and both sides of the debate tend to talk past each other on this topic, but it’s undeniably a controversial point within the broader population.
If you asked 'Would you support weakening encryption in messaging apps if it helped catch some criminals, even though it could make it easier for hackers to read your messages and steal your passwords, bank details, or personal photos?' I'd bet a large proportion of the general population would say no.
But that side never gets explored, or there's an assumption that there's some way of only letting the good guys access the information.
But other people are not technologists. Lawyers think the law is robust enough to determine if someone is a pedophile and only issue warrants for pedophile's data and simultaneously punish anyone who leaks the data of non-pedophiles. Most of the public also believe the police and the law can do that.
When the law is set up to do that, it always gets abused eventually, after a time of not getting abused. The public gets outraged, the responsible person gets a slap on the wrist, and the abuse is normalized. In other words, lawyers are wrong and it doesn't work, by our standards. That doesn't stop them thinking it does. Our definition of "you can't do that" is "it's impossible to do that." Their definition of "you can't do that" is "you can do that, but if the police find out, you will go to jail."
1. New power introduced after crisis or scandal, justified as exceptional and targeted
2. Enforcement is patchy or politically difficult. Police either lack resources or big tech platforms don't want to or can't play ball
3. Failure or abuse case becomes public and is reported in the chattering-class tabloid press
4. Response is not "use existing powers better" but expand powers, broaden scope, lower initial 'targeted' thresholds
5. Cycle repeats
Issue is compounded because you have politicians who will either not understand things or pretend not to understand things.
It makes sense - they extract every possible bit of personal information from your device - why would they make you believe they care about your privacy?
You want to communicate privately? TikTok is not the place, and that’s ok. shrugs
Most large platforms rely heavily on server-side visibility for abuse detection, spam filtering, recommendation systems, and safety tooling. End-to-end encryption removes that visibility by design. Once a platform is built around centralized analysis of user content, adding strong E2EE later isn’t just a feature toggle — it conflicts with large parts of the existing architecture.
It really depends on whether you think your government is more dangerous than, say, suicide trends, grooming, scamming.
I know the answer is pretty easy for US citizens to answer right now.
Youtube, twitter, bluesky, whatsapp? Every app with a social aspect will be used by teens. And no, tiktok is not "only for teens" or "specially targeted at teens", nowadays everyone uses it and creates content on it.
If you run (say) a restaurant, you get big spikes in business from TikTok videos in ways you don't get from Facebook or Instagram or others.
TikTok is the platform everyone is on right now.
You can’t moderate an E2EE platform.
TikTok is a social media app, and it gets heavily abused as it is.
Like they give a damn. I report accounts that explicitly sell fake credit cards, citing laws that make it illegal and 95% of the time "we checked and there is no violation here, we know that you're not happy but don't give a crap".
So the argument of security is utter bullshit and they just want to snoop.
They're saying this at the same time as they're clutching pearls over Iran's repression of protestors. Typical of the ethical consistency I would expect from them.
https://digitaldemocracynow.org/2025/03/22/the-troubling-imp...
> TikTok won't protect DMs with controversial privacy tech, saying it would put users at risk
Not sure if this was changed since first posting. I don't mind updates, but unless it's redaction for legal purposes (which should then itself be clearly mentioned), the BBC should provide a public changelog like Wikipedia.
And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied-upon by China. I'm never going to China.
China will come to us.
Or should that be:
China will come to the US.
Voluntarily.
Means they read every message
because TikTok is addictive, and they know it…
>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.
It's uncontroversial amongst people who value their privacy.
The tension between the two camps (there are obviously nuances and this is a false dichotomy) is at a current peak. It's an ongoing controversy. It's a matter of public debate.
You might have liked it better if the angle had been "...which the government, controversially, wants to clamp down on" or something.
After you notice it, you'll notice it everywhere.
Interesting. I'm not a native English speaker, but in news articles I have always interpreted "controversial" as meaning "under discussion" (perhaps even around a 50/50 divide), hence why they are writing an article about it.
I feel it is the news outlet trying to justify why the topic is important to read about, since most people reading it will interpret the issue at hand as having a "common" stance. Usually it is used in topics that are very binary, for or against.
> Usually it is used in topics that are very binary, for or against.
It can be for those topics, but very rarely to describe the side of such topics with which they align.
As opposed to doomscrolling and brainrot, which are not risky to expose children to at all. /s
If TikTok cared about children in the slightest, they would not exist.