Why not? Physical businesses have liability if they provide age restricted items to children. As far as I know, strip clubs are liable for who enters. Selling alcohol to a child carries personal criminal liability for store clerks. Assuming society decides to restrict something from children, why should online businesses be exempt?
On who should be responsible, parents or businesses, historically the answer has been both. Parents have decision making authority. Businesses must not undermine that by providing service to minors.
This implies the creation of an infrastructure for the total surveillance of citizens, unlike age verification by physical businesses.
How do you reconcile porn sites being a line in the sand when banking, online real estate transactions, and apartment applications already perform ID checks? The verification infrastructure is already in place. It's mundane. In fact the apartment one is probably more offensive, because they'll likely make you do their online thing even if you could just walk in and show ID.
I mean, we're talking about age verification in the OS itself in some of these laws, so tell me how it doesn't.
Quantity has a quality all its own. We're not just seeing it for porn; it's moving to social media in general. Politicians are already talking about it for all sites that allow posts, which would include this site.
So you tell me.
California is also stupid for creating liability for service/app providers that don't even deal in age restricted apps, like calculators or maps. It's playing right into the "this affects the whole Internet/all of computing" narrative when in fact it's really a small set of businesses that are causing issues and should be subject to regulation.
There is also the problem of mission creep. Once the infrastructure to control access to age-restricted content is in place, other services might become out of reach. In particular, anonymous usage of online forums might no longer be possible.
OS-level ability to verify the age of the person using it absolutely provides infrastructure for the OS to verify all sorts of other things. Citizenship, identity, you name it. When it's at the OS level there's no way to do anything privately on that machine ever again.
Ok, suppose the strip club is the website, and the club's door is the OS.
Would you fine the door's manufacturer for teens getting into the strip club?
How do we fight? It seems like agree or disagree, this isn't going to stop. There's so much money behind it in a time where the have nots can barely survive as is.
These are often clear cut. They're physical controlled items. Tobacco, alcohol, guns, physical porn, and sometimes things like spray paint.
The internet is not. There are people who believe discussions about human sexuality (e.g. "how do I know if I'm gay?") should be age restricted. There are people who believe any discussion about the human form should be age restricted. What about discussions of other forms of government? Plenty would prefer their children not be able to learn about communism from anywhere other than the Victims of Communism Memorial Foundation.
The landscape of age restricting information is infinitely more complex than age restricting physical items. This complexity enables certain actors to censor wide swaths of information due to a provider's fear of liability.
This is closer to a law that says "if a store sells any item that could be used to damage property, they are liable", so now the store owner must fear that even a can of soda could be used to break a window.
So again, assuming we have decided to restrict something (and there are clear lines online too like commercial porn sites, or sites that sell alcohol (which already comes with an ID check!)), why isn't liability for online providers the obvious conclusion?
The crux is that we cannot agree on what is protected speech, and even things that are protected speech can still be considered adult content.
> why isn't liability for online providers the obvious conclusion?
We tried. The providers with power and money (Meta) are funding these bills. They want to avoid all liability while continuing to design platforms that degrade society.
This may be a little tin-foil hat of me, but I don't think these bills are about porn at all. They're about how the last few years people were able to see all the gory details of the conflict in Gaza.
The US stopped letting a majority of journalists embed with the military. In the last few decades it's been easier for journalists to embed with the Taliban than the US Military.
The US Gov learned from Vietnam that showing people what they're doing cuts the domestic support. I've seen people suggesting it's bad for Bellingcat to report on the US strike of the girls school because it would hurt morale at home.
The end goal is labeling content covering wars/conflicts as "adult content". Removing any teenagers from the material reality of international affairs, while also creating a barrier for adults to see this content. Those who pass the barrier will then be more accurately tracked via these measures.
Anatomical reference material for artists with real nude models?
What about Sexual education materials? Medical textbooks?
Women baring their breasts in NYC where it's legal?
Where is the clear cut line of Pornography? At what point do we say any depiction of a human body is pornographic?
Plenty of people would prefer that children not learn about scientology from pro-scientology cultists too. It's not that they can't know about scientology (they probably should, in fact, because knowledge can have an immunizing effect against cults)...
And it's not that they can't know about communism (they probably should, in fact, because knowledge can have an immunizing effect against cults)...
This is a comment section about large corporations lobbying against our ability to freely use computers, and you break out the 80s Cold War propaganda edition of a complicated economic system, one that intertwines with a methodology for historical analysis and has been implemented in various ways at the governmental level.
You're either a mark or trying to find a mark.
Physical businesses nominally aren't selling their items to people across state or country borders.
Of course, we threw that out when we decided people could buy things online. How'd that tax loophole turn out?
It turned out we pretty much closed the tax loophole. I don't remember an online purchase with no sales tax since the mid 00s.
App and website operators should add one static header. [1] That's it, nothing more. Site operators could do this in their sleep.
User-agents must look for said header [1] and activate parental controls if they were enabled on the device by a parent. That's it, nothing more. No signalling to a website, no leaking data, no tracking, no identifying. A junior developer could do this in their sleep.
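A minimal sketch of the scheme described above. Everything here is hypothetical: the header name `Age-Restricted`, its value, and the function names are invented for illustration, since no such standard exists today. The point it demonstrates is that the check runs entirely on the device, so nothing is signalled back to the site.

```python
# Hypothetical sketch: one static header on the server side,
# one local check on the user-agent side. Header name and values
# are made up; they are not part of any existing standard.

PARENTAL_CONTROLS_ENABLED = True  # set once on the device by a parent


def server_headers() -> dict:
    """What a site operator would add: a single static header."""
    return {
        "Content-Type": "text/html",
        "Age-Restricted": "adult",  # the one static assertion
    }


def should_block(headers: dict, controls_enabled: bool) -> bool:
    """User-agent side: block only if the device's local parental
    controls are on AND the site declares itself age-restricted.
    Nothing is sent to the website, so there is no tracking or
    identification of the user."""
    return controls_enabled and headers.get("Age-Restricted") == "adult"


print(should_block(server_headers(), True))   # device with controls on
print(should_block(server_headers(), False))  # ordinary adult device
print(should_block({"Content-Type": "text/html"}, True))  # unlabeled site
```

Note that in this design an unlabeled site is simply not blocked; liability attaches only to sites that should have set the header and didn't.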
None of this will happen of course as bribery (lobbying) is involved.
The real answer to the problem is for websites/appstores to publish tags that are legally binding assertions of age appropriateness, and then browsers/systems can be configured to use those tags to only show appropriate content to their intended user.
This also gives parents the ability to additionally decide other types of websites are not suitable for their children, rather than trusting websites themselves to make that decision within the context of their regulatory capture. For example imagine a Facebook4Kidz website that vets posts as being age appropriate, but does nothing to alleviate the dopamine drip mechanics.
There has been a market failure here, so it wouldn't be unreasonable for legislation to dictate that large websites must implement these tags (over a certain number of users), and that popular mobile operating systems / browsers implement the parental controls functionality. But there would be no need to cover all websites and operating systems - untagged websites fail as unavailable in the kid-appropriate browsers, and parents would only give devices with parental controls enabled to their kids.
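The tag-plus-browser idea above can be sketched roughly as follows. The `age-rating` meta tag name and the `all-ages` value are invented for illustration; the one behavioral difference from a header scheme is captured in the last line of the check: untagged pages fail closed in a kid-configured browser.

```python
# Hedged sketch of browser-side filtering on a hypothetical
# <meta name="age-rating" content="..."> tag. Tag name and values
# are assumptions, not an existing standard.
from html.parser import HTMLParser


class RatingParser(HTMLParser):
    """Extracts the hypothetical age-rating meta tag, if present."""

    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "age-rating":
            self.rating = a.get("content")


def allowed_for_child(page_html: str, allowed_rating: str = "all-ages") -> bool:
    """Kid-configured browser: show a page only if the site's (legally
    binding) assertion matches what the parent allows. Untagged pages
    return None and are therefore blocked, i.e. they fail as unavailable."""
    parser = RatingParser()
    parser.feed(page_html)
    return parser.rating == allowed_rating


print(allowed_for_child('<meta name="age-rating" content="all-ages">'))
print(allowed_for_child('<meta name="age-rating" content="adult">'))
print(allowed_for_child('<html><body>untagged page</body></html>'))
```

A parent's extra restrictions (the Facebook4Kidz case above) would just be a local plugin tightening the same check, with no cooperation needed from the site.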
Agreed. Recycling a comment of mine on reasons for it to be that way:
___________
1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.
2. No toxic Orwellian panopticon.
3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?
4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.
To expand on your #3, it also gives parents a way to have different policies on different devices for the same child. Perhaps absolutely no social media on their phone (which is always drawing them in, and can be used in private when they're supposed to be doing something else), but allowing it on a desktop computer in an observable area (i.e. accountability).
The way the proposed legislation is made, once companies have cleared the hurdle of what the law requires, parents are then left up to the mercy of whatever the companies deem appropriate for their kids. Which isn't terribly surprising for regulatory capture legislation! But since it's branded with protecting kids and helping parents, we need to be shouting about all the ways it actually undermines those goals.
Where do you go to vote for this option?
Surely you can find a rationalwiki article for your fallacy too.
In fact, I suspect adults, and not just children, would also appreciate it if the pervasive surveillance was simply banned, instead of trying to age gate it. Why should bad actors be allowed to prey on adults?
The two billion dollars are the ones doing the twisting.
Also, I heard the same thing about video games, TV shows, D&D, texting and even youth novels. It's yet another moral panic.
From the Guardian[1]:
> Social media time does not increase teenagers’ mental health problems – study
> Research finds no evidence heavier social media use or more gaming increases symptoms of anxiety or depression
> Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study.
> With ministers in the UK considering whether to follow Australia’s example by banning social media use for under-16s, the findings challenge concerns that long periods spent gaming or scrolling TikTok or Instagram are driving an increase in teenagers’ depression, anxiety and other mental health conditions.
> Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties.
From Nature[2]:
> Time spent on social media among the least influential factors in adolescent mental health
From the Atlantic[3] with citations in the article:
> The Panic Over Smartphones Doesn’t Help Teens, It may only make things worse.
> I am a developmental psychologist[4], and for the past 20 years, I have worked to identify how children develop mental illnesses. Since 2008, I have studied 10-to-15-year-olds using their mobile phones, with the goal of testing how a wide range of their daily experiences, including their digital-technology use, influences their mental health. My colleagues and I have repeatedly failed to find[5] compelling support for the claim that digital-technology use is a major contributor to adolescent depression and other mental-health symptoms.
> Many other researchers have found the same[6]. In fact, a recent[6] study and a review of research[7] on social media and depression concluded that social media is one of the least influential factors in predicting adolescents’ mental health. The most influential factors include a family history of mental disorder; early exposure to adversity, such as violence and discrimination; and school- and family-related stressors, among others. At the end of last year, the National Academies of Sciences, Engineering, and Medicine released a report[8] concluding, “Available research that links social media to health shows small effects and weak associations, which may be influenced by a combination of good and bad experiences. Contrary to the current cultural narrative that social media is universally harmful to adolescents, the reality is more complicated.”
[1] https://www.theguardian.com/media/2026/jan/14/social-media-t...
[2] https://www.nature.com/articles/s44220-023-00063-7
[3] https://www.theatlantic.com/technology/archive/2024/05/candi...
[5] https://pubmed.ncbi.nlm.nih.gov/31929951/
[6] https://www.nature.com/articles/s44220-023-00063-7#:~:text=G...
[7] https://pubmed.ncbi.nlm.nih.gov/32734903/
[8] https://nap.nationalacademies.org/resource/27396/Highlights_...