upvote
This law doesn't do anything to prevent anonymous access. Here's how you would access things anonymously if you bought a new computer that implemented this.

1. When you set up your account and it asks for your birthdate, make up any date far enough in the past to indicate an age older than whatever any age-checking site you might use requires.

2. Access things the way you've always done. All that has changed is that things that care about age checks find out you claim to be old enough.

The only people it actually materially affects on your new computer are people who cannot set up their own accounts, such as children, if you have set up permissions so that they have to get you to make their accounts.

Then, if you want, you can enter a birthdate that gives a non-adult age, so sites that check age will block them.

From a privacy and anonymity perspective this is essentially equivalent to sites that ask "Are you 18+?" and let you in if you click "yes" and block you if you click "no". It is just doing the asking locally and caching the result.

reply
I agree. I feel that having browsers send some flag to sites is the most privacy-preserving approach to this whole topic. The system owner creates a “child” account that has the flag set by the OS and that prevents the execution of unsanctioned software.

This puts the responsibility back on parents to do the bare minimum required in moderating their child’s activities.
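As a rough sketch of what that flow could look like on a site's side: the OS-flagged child account causes the browser to attach a request header, and the site checks it. The header name `Sec-Child-Account` here is invented purely for illustration; no such header is standardized today.

```python
# Hypothetical server-side check for an OS-set child-account flag.
# "Sec-Child-Account" is an invented header name for illustration only.

def should_block(headers: dict, adult_only: bool) -> bool:
    """Return True if an adult-only page should be blocked for this request."""
    if not adult_only:
        return False
    # The flag is only attached for accounts the OS marked as a child's;
    # ordinary accounts send nothing, preserving their anonymity.
    return headers.get("Sec-Child-Account") == "1"

# Adult-only site, request from a flagged child account: blocked.
print(should_block({"Sec-Child-Account": "1"}, adult_only=True))   # True
# Adult-only site, ordinary account (no flag sent): allowed.
print(should_block({}, adult_only=True))                           # False
```

Note the privacy property: adult accounts send no signal at all, so there is nothing extra for sites to log.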

reply
What would be even more privacy preserving would be to mandate sites to send age appropriateness headers (mainstream porn sites already do this voluntarily).

Possibly it could be further mandated that the OS collect relevant rating information for each account and provide APIs with which browsers and other software could implement filtering.

And possibly it could be further mandated that web browsers adopt support for this filtering standard.

And if you want a really crazy idea you could pass a law mandating that parents configure parental controls on devices of children under (say) 12 and attach civil penalties for repeated failure to do so.

There's never any need for information about the user to be sent off to third parties, nor should we adopt schemes that will inevitably provide ammo for those advocating attested digital platforms.

reply
So does Google send a header for each search result when you look up "Ron Jeremy" so that some results get hidden, or does the browser just block the whole page?

Sending all the "bad" data to the client and hoping the client does the right thing puts a lot of complexity on the client. It's a lot easier to know things are working if the bad data never gets sent to the client - it can't display what it didn't get.

reply
Google would send a header indicating that it is appropriate for all ages (I'm not sure how the safe search toggle would interact with this; the idea is just a rough sketch, after all).

When you click on a search result, you load a new page on a different website. The new page would once again come with a header indicating the content rating. This header would be attached to all pages by law. It would be sent every time you load any page.

Assuming that the actual problem here is the difficulty of implementing reliable content filtering (ala parental controls) then the minimally invasive solution is to institute an open standard that enables any piece of software to easily implement the desired functionality. You can then further pass legislation requiring (for example) that certain classes of website (ex social media) include an indication of this as part of the header.

Concretely, an example header might look like "X-Content-Filter: 13,social-media". If it were legally mandated that all websites send such a header, it would become trivially easy to implement filtering on-device, since you could simply block any site that failed to send it.
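A minimal sketch of the on-device filtering this would enable, assuming the hypothetical "X-Content-Filter: &lt;min_age&gt;,&lt;category&gt;" response header format described above (the header name, format, and mandate are part of the proposal, not anything that exists today):

```python
# On-device filter against a hypothetical mandated response header of the
# form "X-Content-Filter: <min_age>,<category>", e.g. "13,social-media".

def allow_page(headers: dict, user_age: int, blocked_categories: set) -> bool:
    """Return True if this account may load the page."""
    value = headers.get("X-Content-Filter")
    if value is None:
        return False  # under the mandate: no header, no page
    min_age_str, _, category = value.partition(",")
    try:
        min_age = int(min_age_str)
    except ValueError:
        return False  # malformed header, treat like a missing one
    return user_age >= min_age and category not in blocked_categories

print(allow_page({"X-Content-Filter": "13,social-media"}, 15, set()))      # True
print(allow_page({"X-Content-Filter": "13,social-media"}, 12, set()))      # False
print(allow_page({"X-Content-Filter": "0,general"}, 12, {"social-media"})) # True
print(allow_page({}, 40, set()))                                           # False
```

The last case is the "block any site that failed to send it" rule: even an adult account is blocked when the header is absent, which is what makes compliance checkable on-device without sending anything about the user anywhere.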

> A lot easier to know things are working if ...

Which is followed by wanting an attested OS (to make sure the value is reliably reported), followed by a process for a third party to verify a government issued ID (since the user might have lied), followed by ...

It's entirely the wrong mentality. It isn't necessary for solving the actual problem, it mandates the leaking of personal data, and it opens an entire can of worms regarding verification of reported fact.

reply
I think you would find widespread support from the various websites out there for this. Most porn websites today voluntarily implement some type of mechanism that advertises them as not for children.
reply
If browsers are going to send flags, they should only send a flag if it's a minor. Otherwise it's another piece of tracking data that can be used for fingerprinting.
reply
I'm not sure it's worth entertaining these hypotheticals. Just another absurd CA law that's impossible to comply with.

"When you set up your account and it asks for your birthdate." What does this mean? "Set up" what account? "It" what? Some graphical installer? What if I don't want to use one?

How would this protocol be implemented in such a way that it's not trivially easy for the user to alter the "age signal" before sending a request? The "signal" is signed with some secret that you attest to but can't write? So it's in some enclave? What if my smart toaster doesn't have an enclave? Does my toaster now have to implement a software enclave?

I'm not aware of a standard, an industry standards body, a standard specification, or an implementation of a specification around this "age signal" thing. Is this some proprietary technology that some company has a patent on, and they've been lobbying for their patent to be legally mandated? If so, that's very concerning and probably has antitrust implications (it is ironic that ever-tightening surveillance of people is a downstream consequence of all this deregulation of corporate persons; fine for me but not for thee, I guess).

I would love to know the full story here, since this is being shopped around in several states, but I haven't seen any sort of investigative journalism about it, which is disappointing. This whole thing is really curious.
reply
I was curious about your question and googled. Here's the legislative history of the law: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm....

Reading the first analysis PDF:

> This bill, sponsored by the International Centre for Missing and Exploited Children and Children Now, seeks to require device and operating systems manufacturers to develop an age assurance signal that will be sent to application developers informing them of the age-bracket of the user who is downloading their application or entering their website. Depending on the age range of the user, a parent or guardian will have to consent prior to the user being allowed access to the platform. The bill presents a potentially elegant solution to a vexing problem underpinning many efforts to protect children online. However, there are several details to be worked out on the bill to ensure technical feasibility and that it strikes the appropriate balance between parental control and the autonomy of children, particularly older teens. The bill is supported by several parents’ organizations, including Parents for School Options, Protect our Kids, and Parents Support for Online Learning. In addition, the TransLatin Coalition and The Source LGBT+ Center are in support. The bill is opposed by Oakland Privacy, TechNet, and Chamber of Progress.

reply
> It seems all at once, everywhere that many groups that have a vested interest in forcing precedent and compliance of non-anonymous access across the computer world. It smacks of something less-than-organic.

I think you’ve nailed it here. How many of these people campaigned on this issue? Where were the grassroots to push this? Where did this even come from?

Somebody, somewhere - with a heck of a lot of money - wants to see this happen. And I don’t think they have good intentions with it.

reply
Death threats, mainly. Personally, I think it would be easier if platforms just ran a tiny LLM against content before it's posted to determine whether it's a death threat, and only required the poster to be identified when it is. That would solve a lot of these problems.

TLDR: Evil people be doxxed internally not everyone.

reply
That turns jokes into contracts that nobody wants. Bad idea.
reply
Maybe just don’t make “jokes” like that.
reply
a "tiny large language model"? lol
reply
See https://tinyllm.org

These days the name "LLM" refers more to the architecture & usage patterns than it does to the size of model (though to be fair, even the "tiny" LLMs are huge compared to any models from 10+ years ago, so it's all relative).

reply
Yeah, a small one that is cheaper because they'll be processing billions of messages per year.
reply
Good thing all the kind people doing death threats won’t just bypass it?
reply
I'm totally lost here. If you don't identify, you don't post.
reply
Good thing no one ever breaks any rules!
reply