You're fooling yourself if you're actually a bit relieved it's going this way.
These bills are meant to nudge the Overton window[0] of digital politics in the direction of mandating realtime identity verification for all forms of computing. Advertisers want it, governments want it, _bad people and bad governments want it_. By pushing a very small and "weak" legally-required form of user identification on everything under the guise of "saving the kids", all involved parties can point at those who disagree and say "Look, if you disagree you must want to hurt children!" And so the bills pass, and a weak form of identity verification passes and is enforced. Then it'll be shown it doesn't work, and the proposed solution will be to make these identity verification laws more intrusive and more restrictive. Repeat ad nauseam.
Time to start throwing up hobby BBS sites... and I think in this case text mode interfaces over web might be an advantage.
There is no verification beyond that in these sorts of bills (CA, CO, IL). It's the parent's responsibility to watch their kids when they set up an account.
> Legitimate adult websites will not show the content.
This is a big problem (that won't necessarily be solved by this particular legislation, granted). There are already voluntary rating HTML tags websites can add to indicate that parental control software should block them, but they're non-standardized and compliance is optional. Websites can choose not to include them with no real-world consequences. And I don't think platforms like Reddit or X, which are ostensibly all-ages social media but also have an abundance of adult content, are properly set up to serve tags like that on NSFW posts but not other ones.
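For context, one such voluntary scheme is the RTA ("Restricted To Adults") label, a fixed string sites can put in a `<meta name="rating">` tag for filtering software to look for. A minimal sketch of the client-side check in Python (the label value is the published RTA string; the checker itself is just illustrative):

```python
# Sketch: check an HTML document for the voluntary RTA label, the kind of
# tag locally installed parental-control software can look for.
from html.parser import HTMLParser

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the published RTA label value

class RTAChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.is_adult = False

    def handle_starttag(self, tag, attrs):
        # Look for <meta name="rating" content="RTA-...">
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "rating" and a.get("content") == RTA_LABEL:
                self.is_adult = True

def page_is_labeled_adult(html: str) -> bool:
    checker = RTAChecker()
    checker.feed(html)
    return checker.is_adult

# A page that opts in vs. one that doesn't:
labeled = '<html><head><meta name="rating" content="RTA-5042-1996-1400-1577-RTA"></head></html>'
unlabeled = '<html><head><title>All ages</title></head></html>'
print(page_is_labeled_adult(labeled))    # True
print(page_is_labeled_adult(unlabeled))  # False
```

The weakness is exactly what the comment describes: the check only works on sites that choose to add the tag, and nothing happens to sites that don't.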
It's a tricky problem to solve, and, imo, it's one the tech industry has demonstrated it doesn't have any desire to solve itself, hence legislation starting to get involved.
> Websites can send down a single header indicating adult content.
It sounds at first glance like a no-brainer that websites shouldn't have access to any information and the enforcement should be done at a local level (like the current voluntary HTML tags that locally installed parental control software can sometimes read). But some websites might want to display alternate content to minors-- e.g. a Wikipedia article with some images withheld, or Reddit sending a user back to an all-ages subreddit instead of just fully breaking or failing to load when the user stumbles upon something 18+. For anything like that, the website will need to know in some form that the user isn't able to see 18+ content.
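To make that concrete, here's a sketch of the server-side half of the idea: the browser (or OS-level parental controls) sends some signal in a request header, and the site degrades gracefully instead of hard-failing. The header name `X-Age-Bracket` is invented for illustration; it is not a real standard, and no such signal exists today.

```python
# Hypothetical sketch: a site serving alternate content to a user whose
# browser signals they can't see 18+ content. "X-Age-Bracket" is an
# invented header name for illustration, not an existing standard.

def handle_request(path: str, headers: dict) -> tuple[int, str]:
    """Return (status_code, body_or_redirect_target) for a request."""
    minor = headers.get("X-Age-Bracket") == "under-18"
    if path.startswith("/nsfw/"):
        if minor:
            # Send the user back to an all-ages landing page instead of
            # fully breaking or failing to load.
            return 302, "/all-ages"
        return 200, "adult content"
    return 200, "regular content"

print(handle_request("/nsfw/post123", {"X-Age-Bracket": "under-18"}))  # (302, '/all-ages')
print(handle_request("/nsfw/post123", {}))                             # (200, 'adult content')
```

The point is just that any graceful fallback like the Wikipedia or Reddit examples requires the signal to reach the server, which is why purely local enforcement can't cover those cases.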
Detractors will say parents should just install existing parental control software, even though it's existed in its current form for decades and is obviously not effective. And they'll say it should be the parents' responsibility to enforce what their kids are doing with computers, while ignoring the fact that these laws provide tools allowing parents to do just that (the parents are the ones responsible for supervising their kids when they create accounts to ensure they're not lying about their age-- if the kids lie during setup, it's on the parents).
Anyone with kids will probably acknowledge that it's much easier supervising your kid once when they first set up an account on a new device than it would be to supervise them 24/7 when they're using the internet. But for some reason, lots of people without kids are in a panic about having to type in any date older than 18 years ago. The arguments I've heard against it are almost all slippery-slope (e.g. "they're gonna do this first, and then add ID requirements next year, because that's what I fear will happen.")
Because that's exactly what will happen. This is battlespace preparation for the destruction of anonymity on the internet, because politicians find anonymity inconvenient.
> if the kids lie during setup, it's on the parents
Pretty much a "Yes, and?" scenario. See above.
> The arguments I've heard against it are almost all slippery-slope (e.g. "they're gonna do this first, and then add ID requirements next year, because that's what I fear will happen.")
I get where you're going, but precisely this. These things always start slow... then fast. The old adage "first they came for x, then y" is not a joke or an exaggeration. It is pretty much historical observation. I've lived long enough to know that whenever someone invokes the "think of the children" defense, there's always a catch.
> The fact that they won't even though there are (non-required!) tools they could be using to do so is baffling to me.
My parents set me up with an AOL account when we first got a computer and dial-up internet. At first, I was kind of required to go through the AOL desktop application to browse the web since that's how we connected to the dial-up. Sometimes a website would be blocked through AOL, and I'd have to have one of my parents come and sign in to allow me into it.
But once we moved onto broadband DSL, I eventually figured out I could just open Internet Explorer instead of AOL to bypass the parental controls without having to get my parents to come allow a website. Of course, a few years after that, I was secretly browsing porn... at 10 years old.
As a parent today, what non-required tools would you suggest I use to effectively filter NSFW content from the internet for my kids? Network-level methods don't work in the age of laptops and smartphones. Any on-device software you might suggest would probably be for iOS/Android or Windows, not both. And which software supports Ubuntu, or do you think I shouldn't let my kids use it?

Yes, it's probably possible to lock things down eventually (for me, as an IT professional). But the parents next door probably have no clue about half the stuff I'd use, and my kid's gonna end up having access to whatever their kid does. Even if everyone does everything perfectly, all it takes is a slight paradigm shift or a new piece of technology to sidestep all of it-- like when my parents did their jobs setting up AOL parental controls but then switched our connection type and inadvertently broke them.
The value of this legislation isn't necessarily making parental controls technically possible. The value is standardizing and normalizing it. As someone in another comment chain brought up, you're not expected to individually coordinate with every movie theater or every liquor store, or to helicopter your kids IRL with it being your fault if someone sells them beer when you let them go out with their friends. There's a basic societal understanding that certain things aren't available to kids. The internet being "wild west" for a few decades doesn't invalidate that, imo. This isn't parents not parenting, it's adjusting the level of burden we're expecting to come with parenting to a more reasonable level.
These tools are called "parental controls" and already exist - we don't need laws to compel their production.
...unless, of course, the true aim is to use this as a beachhead for further expansion of privacy-violating requirements.
You write this off as a "slippery-slope" argument, but given that there are already quite a few tools that do what this law aims for, what's the point?
Would you prefer to inform each movie theater in town which movies your child is permitted to watch? Or just rely on the rating system that applies to most movies and is honored by most theaters?
Parents want one setting that says "this is a child" and then expect online platforms to respond appropriately. As we expect and mostly have in the real world.
This law does not do that. It breaks the age of children into several buckets so that platforms, websites, and advertisers can target specific demographics. They won't "respond appropriately"; they'll just use this data point as another way to improve how they exploit children online. Now every pedo with a website can tell how old the kid is so they can better adjust their grooming for that age bracket.