4/3/2026 at 8:52:02 PM
People need to understand the difference between age indication and age verification. Two very different things. Age indication is a completely private and realistically as-effective alternative to invasive age verification.

Age _indication_ means that when you set up your device or create a user account, you enter a date of birth for the user. The OS then provides a native API to return the user's age bracket (not the full date of birth). If the user is a minor, the OS requires parental authentication in some way to modify the setting again. This can all be done completely offline. It works because parents almost always buy the devices used by children and can enter the correct date of birth during setup.
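A minimal sketch of what such an OS-level "age indication" API could look like, assuming it only ever exposes a coarse bracket. The bracket boundaries and labels here are illustrative assumptions, not any real OS interface:

```python
from datetime import date

# Hypothetical bracket boundaries: (exclusive upper age, label).
BRACKETS = [(13, "under_13"), (16, "13_15"), (18, "16_17")]

def age_bracket(dob: date, today: date) -> str:
    """Return a coarse age bracket; the date of birth itself never leaves the OS."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    for upper, label in BRACKETS:
        if age < upper:
            return label
    return "18_plus"
```

A caller would only ever see a label like `"13_15"`, never the stored date of birth.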
Age _verification_ means that some online service has to verify your age, and collects a bunch of (meta)data in the process. This is highly problematic for privacy, security, and the open internet.
by uyzstvqs
4/4/2026 at 1:32:01 AM
There are two things very, very wrong with the California law, which you call "age indication".

1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then to trust those online platforms to serve the right content (lol). It should be the other way around: the apps and websites should broadcast the age rating of their content, and the OS fetches that age rating and decides whether the content is appropriate by comparing it to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.
2) The age API is not "completely private". It's a legally mandated data point that can be used to track a user across apps and websites. We must reject all legally mandated tracking data points, because they set a precedent for even more mandatory tracking to be added in the future. We should not be providing an API that makes it easier for web platforms to get their hands on user data!
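The reverse model from point 1 can be sketched in a few lines, assuming made-up rating labels: the service advertises a content rating, and the comparison happens entirely on the device, so nothing about the user leaves it.

```python
# Illustrative rating labels mapped to the minimum age they require.
RATING_MIN_AGE = {"all": 0, "13_plus": 13, "16_plus": 16, "18_plus": 18}

def device_allows(advertised_rating: str, user_age: int) -> bool:
    """Decide locally on the device; unrated content fails closed as adult-only."""
    return user_age >= RATING_MIN_AGE.get(advertised_rating, 18)
```

The key property is the direction of information flow: the rating travels from the service to the device, and the decision never travels back.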
For many years, certain tech companies, SIGs, and governments have fought against technologies that could enable real digital parenting, all while claiming to do the opposite and "protect children". They craft a narrative to convince you that top-down digital surveillance and access control is for your own good, but it's time we reject that and flip their narrative upside down: https://news.ycombinator.com/item?id=47472805
by txrx0000
4/4/2026 at 1:45:09 AM
> For many years, certain tech companies, SIGs, and governments have fought against technologies that could enable real digital parenting, all while claiming to do the opposite and "protecting children". They craft a narrative to convince you that top-down digital surveillance and access-control is for your own good, but it's time we reject that and flip their narrative upside down

The EFF has a good series related to this [1].
[1] https://www.eff.org/deeplinks/2026/03/rep-finke-was-right-ag...
by heavyset_go
4/4/2026 at 2:25:04 AM
> 1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then trusting those online platforms to serve the right content (lol). It should be the other way around. The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.

FWIW, this is not quite an accurate description of AB 1043, in at least three respects:
1. Apps don't get your exact age, just an age range.
2. Websites don't get your age at all.
3. AB1043 itself doesn't mandate any content restrictions; it just says that the app now has "actual knowledge" of the user's age. That's not to say that there aren't other laws which require age-specific behaviors, but this particular one is pretty thin on this.
In addition, I certainly understand the position that the age range shouldn't leave the computer, but I'm not sure how well that works technically, assuming you want age-based content restrictions. First, a number of the behaviors that age assurance laws want to restrict are hard to implement client side. For example, the NY SAFE For Kids act forbids algorithmic feeds, and for obvious reasons that's a lot easier to do on the server. Second, even if you do have device-side filtering, it's hard to prevent the site/app from learning what age brackets are in place, because they can experimentally provide content with different age markings and see what's accepted and what's blocked. Cooper, Arnao, and I discuss this in some more detail on pp 39--42 of our report on Age Assurance: https://kgi.georgetown.edu/research-and-commentary/age-assur...
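The probing attack described above can be sketched directly: a site serves test items tagged with successively higher age ratings and observes which ones the device-side filter accepts, recovering the bracket without ever being told it. `device_filter` and the rating labels below are illustrative stand-ins for the client-side enforcement being probed.

```python
def infer_bracket(device_filter, ratings=("all", "13_plus", "16_plus", "18_plus")):
    """Probe the filter with each rating; the highest accepted one reveals the bracket."""
    accepted = [r for r in ratings if device_filter(r)]
    return accepted[-1] if accepted else "blocked_all"

# Example: a device filter configured for a 15-year-old leaks that bracket.
LIMITS = {"all": 0, "13_plus": 13, "16_plus": 16, "18_plus": 18}
fifteen_year_old = lambda rating: LIMITS[rating] <= 15
```

Four probe requests are enough here, which is why purely client-side filtering doesn't by itself keep the bracket secret from a motivated service.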
I'm not saying that this makes a material difference in how you should feel about AB 1043, just trying to clarify the technical situation.
by ekr____
4/4/2026 at 3:12:52 AM
Thanks for the clarification.

Regarding what to do with algorithmic feeds: instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook and prevent it from being installed by the child. We could implement an OS-level password lock for app installation/updates, enabled in the phone's settings, that works like Linux's sudo. Every time you install/uninstall/update an app, it asks for a password. Then parents would be able to choose which apps can run on their child's device.
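A minimal sketch of such a sudo-style install lock, where every package operation requires the parent's passphrase. The storage scheme below is purely illustrative, not any real OS mechanism:

```python
import hashlib
import hmac
import os

def set_lock(passphrase: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the parent's passphrase, never the passphrase."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def authorize(passphrase: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

def install_app(name: str, passphrase: str, salt: bytes, digest: bytes) -> str:
    # Gate every install/uninstall/update behind the parental lock.
    if not authorize(passphrase, salt, digest):
        raise PermissionError("parental authorization required")
    return f"installing {name}"
```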
Notice their strategy: these companies make it hard or impossible for you to uninstall preloaded apps, they make it hard to develop competing apps and OSes, and they degrade the non-preloaded software UX on purpose, which creates the artificial need to filter the feeds of existing platforms that these companies control. They also monopolize the app store and gatekeep which apps can be listed on it, and which OS APIs non-affiliated apps can use. Instead of accepting that and settling for just filtering those existing platforms' feeds, we should have the option to abandon them entirely.
We need the phone hardware companies to open-source their device firmware and drivers, and to let the device owner lock/unlock the bootloader with a password, so that we never have a situation like the current one, where OSes come preinstalled with bloat like TikTok or Facebook, and the bootloader is locked so you can't even install a different OS, and your phone becomes a brick when they stop providing updates. If we allowed software competition, child protection would never have been a problem in the first place, because people would be able to make child-friendly toy apps and toy OSes and control which apps and OS run on the hardware they purchased. Parents would have lots of child-friendly choices. This digital parenting problem was manufactured by the same companies now trying to sell us a "solution" like this Cali bill, or in other cases ID verification, which coincidentally makes it easier for them to track people online.
by txrx0000
4/4/2026 at 3:52:34 AM
> instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child.

Isn't that how parental controls already work?
There are problems, though:
1. The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)
2. Web browsers. Unless the parent is going to uninstall and disallow web browser use, the kid can still sign into whatever service they want using the web browser. I don't think parental controls block specific sites, and even if they do, there are ways around that, certainly.
I am very often the person who says that parents should actually parent their kids and not rely on the government to nanny them. But in this case I think there actually is value to the government making laws that make Facebook (etc.) less evil. And as a bonus, maybe they'll be forced to be less evil to adults too!
by kelnos
4/4/2026 at 5:25:30 AM
1. The current norm of social siloing apps was created by these tech companies in the first place. What regulators can do is discourage anti-competitive practices that lock users into specific software and hardware platforms. If there were plenty of competition for every kind of social app, and competition among OSes, and users could freely choose and move between them, then not having a particular app would not result in social isolation. This affects adults as well.

2. The OS has a firewall, but it's currently not user-controllable on your phone. Phone companies have decided you don't need that feature. But they could easily implement a nice UI for the firewall in the settings and lock it behind a password; then parents would be able to use it to block individual websites. We could even make it possible to import/export site lists as a txt file, so you could download or share a curated block list that you or other parents made, to block many things at once. You could also do this for your entire home WiFi network in your router's settings, if your router's firmware has that feature.
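The importable block-list idea in point 2 is simple enough to sketch: a plain txt file, one domain per line with `#` comments, parsed into a set that a firewall UI could apply. The file format here is an assumption for illustration.

```python
def parse_blocklist(text: str) -> set[str]:
    """Parse a one-domain-per-line txt block list; '#' starts a comment."""
    domains = set()
    for line in text.splitlines():
        entry = line.split("#", 1)[0].strip().lower()
        if entry:
            domains.add(entry)
    return domains

def is_blocked(host: str, blocked: set[str]) -> bool:
    """A listed domain also blocks all of its subdomains."""
    parts = host.lower().split(".")
    return any(".".join(parts[i:]) in blocked for i in range(len(parts)))
```

The same parsed set could feed a phone firewall, a router, or a DNS filter; only the enforcement point differs.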
And yeah, I agree that we should make the platforms less evil in general. But I think the way to do that is to give people the ability to easily ditch bad platforms and build new ones. Let the platforms actually compete; then the best will prevail. Right now they don't, because of layers and layers of anti-competitive barriers. It would take great technical effort to regulate all the tricks these tech companies use, which is why I propose dealing with it at the root: require all computer/phone hardware manufacturers to open-source their device drivers and firmware, and let the user lock/unlock the bootloader and install alternative OSes. If we do this, the entire software ecosystem will fix itself over time, along with all the downstream problems.
by txrx0000
4/4/2026 at 8:01:06 AM
> Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites.

iOS: Settings > Screen Time > Content & Privacy Restrictions > Toggle on
Then same area:
- App Installations & Purchases: disallow all
- App Store, Media, Web & Games > Web Content > Limit Adult Websites > Fill in allowlist and/or denylist, or Only Approved Websites and fill in allowlist
by lloeki
4/4/2026 at 10:52:55 AM
Apple is indeed better than most other companies on #2. But that's because it's the worst offender on #1. Its strategy is to appear to be the model company that cares about user rights and privacy, in hopes of capturing everyone in its closed-source walled garden that's already surveilling you at the OS level.

They're part of the corp-gov surveillance complex [0]. This is the real threat behind the age verification push. The feds already have mass surveillance capabilities in iOS and macOS, and even Windows and most Android distros, but not in most open-source Linux distros, so they're starting to force it legally, in the open. They're desperate because Linux is about to outcompete the enshittified Windows on desktops.
[0] https://en.wikipedia.org/wiki/Edward_Snowden#Revelations
by txrx0000
4/5/2026 at 6:47:07 PM
> The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)

If so many people give their kids phones and so few don't, why ban them in the first place? Clearly the vast majority of parents are fine with their kids having one.
You're just inventing a problem, then. Or worse, implementing a conservative talking point.
by wolvoleo
4/4/2026 at 10:29:28 AM
It's possible to mandate effective parental controls, then say "it's illegal to give your child access to Facebook", and then just see what happens. You don't have to jump straight to making it technologically guaranteed by construction; maybe it's enough to just give parents the tools and an excuse to say no.

We don't need DNA-testing locks on cans of beer that won't let you drink from them unless you're an adult, do we? It's perfectly possible for a parent to buy their child all the beer they want, there's nothing stopping children from trying to peer-pressure them into it, and in many countries it's not even generally illegal to let your child drink beer. And yet almost all parents are able to almost completely enforce a reasonable level of restricted access, simply because society frowns upon it.
by amenhotep
4/5/2026 at 6:29:43 PM
Had this problem with my kid: social media caused serious mental health issues, with toxic content in kids' areas. But taking it away was worse.
Once “not using it” isn’t an option, government intervention becomes reasonable.
by pyuser583
4/4/2026 at 8:13:42 AM
If we accept the premise that age restrictions of any kind are good (which, just to be clear, I don't think we should), there are good reasons for tailoring your content based on the user's age.

Imagine you're a streaming service trying to show a list of movies a user can watch. If you can only communicate age restrictions to the OS, but can't actually check the user's age, you have a choice between showing a list of movies that some users won't actually be able to watch, and a list limited to movies appropriate for all ages. Neither is a great option.
If you can check the user's age bracket, you can actually tailor the list to what the user can realistically watch.
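The tailoring described above amounts to a one-line filter, shown here with an invented catalog where each title carries a minimum age:

```python
# Illustrative catalog: (title, minimum age required to watch).
MOVIES = [("Puppy Tales", 0), ("Space Chase", 13), ("Noir Heist", 18)]

def catalog_for(user_age: int) -> list[str]:
    """Show only titles the user can actually watch."""
    return [title for title, min_age in MOVIES if min_age <= user_age]
```

With only an age bracket available, the service would pass the bracket's lower bound as `user_age` and get the same effect.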
by miki123211
4/4/2026 at 8:18:16 AM
There are only about 120 versions to target if you pick each individual age, or a handful if you bracket it. You can simply create a lookup table for each age group and let the user's device decide which one to show.

by exe34
4/4/2026 at 8:56:22 AM
The user can voluntarily give the platform their age by typing it into their account profile in that streaming app. You can already do this right now. No laws required.

The problem at hand is that we have a new law that forces everyone to give their age to every app. It's mandatory personal-info collection.
by txrx0000
4/4/2026 at 3:38:34 AM
1. I don't see how that's better in any real way. You can infer the exact same information as querying the range, and it makes dynamic behavior based on age range (access to age-restricted chat rooms, as an obvious example) completely impossible.

2. Is it meaningfully more identifying than User-Agent? There are dozens of other data points for uniquely identifying a user. If we get a few high-profile lawsuits because advertising companies knowingly showed harmful ads to children, I'd consider it a win. Age is not that interesting a data point.
by packetlost
4/4/2026 at 3:51:45 AM
I wouldn't focus on whether it's "identifying" but on whether it's revealing. Young teenagers are a very high-value target for advertisers: they are very impressionable, and they give advertisers a proxy for their parents' money. So this law essentially makes it mandatory to share that information with advertisers. And also, by proxy, with predators.

by throwaway173738
4/4/2026 at 4:06:22 AM
It also makes it explicitly illegal to use it for such purposes. While I agree on the point, I think in practice it changes little. I also think it could be a net positive, because now there's no plausible deniability about the target's age, opening up a decent amount of liability for exploitative practices that target children specifically.

by packetlost
4/4/2026 at 3:48:28 AM
> I don't see how that's better in any real way.

It's so much better. In the one case, the OS is leaking age information (even if just an age range) to every service it talks to. In the other case, the OS isn't telling anyone anything, and is just responding to the age rating that the app/service advertises.
by kelnos
4/4/2026 at 9:10:40 AM
How would you implement a feed of mixed content? Say you're YouTube, and some videos are about puppies and some are about guns. How would you hide only the gun videos from the homepage when the user is under 16?

by gzread
4/4/2026 at 10:53:46 PM
Why does YouTube allow videos about guns but not boobs?

by hdgvhicv
4/6/2026 at 10:16:39 PM
Why not? These are quite modest and decent examples:
Music video by Mylène Farmer performing Libertine. (C) 1997 Polydor (France) ^[https://youtu.be/oGFr_NcKyfo?t=325]
TWIN BUSCH® Germany - Making-of Kalender 2017 ^[https://www.youtube.com/watch?v=WP7HYlBsVB4]
TWIN BUSCH® Germany - Making-of Kalender 2018 ^[https://www.youtube.com/watch?v=sdCga9jqD_8]
Making-of TWIN BUSCH® Kalender 2024 ^[https://www.youtube.com/watch?v=A9JNBdYUYiA]
MAKING OF | Twin Busch Kalender 2026 ^[https://www.youtube.com/watch?v=cWPastHi8Vs]
and more: https://youtu.be/YzDHQXKBRek
I'm not even talking about entire sections that feature blatantly pornographic or perverted content, some of which are clearly aimed at a younger audience who might accidentally stumble upon it through keywords you wouldn't expect.
by user205738
4/4/2026 at 4:01:57 AM
That response reveals exactly the same information.

by packetlost
4/4/2026 at 4:16:57 AM
1. It depends on how it's implemented. It won't identify you to individual platforms if the OS filters on a per-app or per-website basis. And yes, there would be no dynamic behavior based on age, as that would enable tracking based on age. I don't think any kind of API is the ideal solution, though; it's just better than the malicious one being mandated in the Cali bill. Instead of an API, it's simpler and more effective to just have an app-installation lock (like sudo on Linux) and a firewall for website blocking, with a nice UI in the phone's settings, locked behind a password/PIN.

2. Other data points like User-Agent are not required by law, and browsers already spoof the user agent by default. I agree that there are other data points we need to address, but the problem in this specific case is the slippery slope of legally mandated data points. And I don't think winning high-profile lawsuits is a real "win"; it just exposes a problem we already know about in this case. Keep in mind those people can get away with the Epstein files.
by txrx0000
4/4/2026 at 1:50:36 AM
> The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age.

How would you make that happen? Many websites would not be subject to your jurisdiction.
by Ferret7446
4/4/2026 at 1:53:13 AM
Assume they're 18+ then.

But even that's still not a great solution. I outline a better solution, one that doesn't require any legal enforcement at all, in the link at the bottom of my original comment.
by txrx0000
4/4/2026 at 2:29:51 AM
We're actually seeing this play out right now with the server-based age-assurance systems that are already widely deployed and mandated under the UK Online Safety Act and laws in about 25 US states. In many cases, the sites just comply, presumably because they're worried that the regulators have a way to reach them even if they aren't hosted in the relevant jurisdiction. In some cases, however, the sites just ignore the regulations or tell the regulators to pound sand, as 4chan is doing with UK Ofcom: https://www.bbc.com/news/articles/c624330lg1ko

by ekr____
4/4/2026 at 3:56:06 AM
So? The same problem exists for having the OS broadcast the user's age range to all apps/services/websites: the service outside your jurisdiction doesn't have to actually restrict content based on age.

At least with the reverse system (services broadcast an age rating), you get some nice properties:
1. You can set it up so that if the service doesn't broadcast an age rating, access is denied.
2. You aren't leaking age information (even if it's just a range) to random websites outside your jurisdiction.
by kelnos
4/4/2026 at 8:32:28 AM
Apps need to know the age of the user in order to follow the law. There will always need to be a way for apps to get the user's age. If the OS does not provide anything, apps will have to implement it themselves.

by charcircuit
4/4/2026 at 12:02:54 AM
It's a distinction that hinges on one law from one state, and it doesn't reflect the reality of the dozens of laws in dozens of states, nor proposed federal legislation, that all require age verification via AI face scans and ID uploads.

That's to say, this distinction is meaningless unless you're planning on blocking every jurisdiction outside of California so you can adhere to just its age verification laws and no one else's.
by heavyset_go
4/3/2026 at 9:39:38 PM
The issue with "age indication", though, is that it creates an additional flag that can be used to fingerprint users. But it is infinitely preferable to any sort of age verification or age assurance.

by EmbarrassedHelp
4/3/2026 at 8:57:11 PM
I like the term "age indication". Thank you.

If I may nitpick, the conventional term for systems that attempt to determine the user's age is "age assurance". This covers a variety of techniques, which are typically broken down into:
* Age estimation, which is based on statistical models of some physical characteristic (e.g., facial age estimation).
* Age verification, which uses identity documents such as driver's licenses.
* Age inference, which tries to determine the user's age range from some identifier, e.g., by using your email address to see how old your account is.
These distinctions aren't perfect by any means, and it's not uncommon to see "age verification" used for all three of these together, but more typically people are using "age assurance".
by ekr____
4/4/2026 at 12:50:17 AM
That's just setting things up for a smoother slippery slope...

As appealing as the private part sounds, I genuinely think it may make the situation worse here by facilitating the transition and muddying the waters.
by Havoc
4/4/2026 at 10:05:19 PM
Why reach for a slippery slope fallacy when plenty of other fallacies will do? Have you considered reworking your argument to use proof by assertion, or even the moralistic fallacy [1]? You might get better mileage out of those.

by kelseyfrog
4/4/2026 at 9:05:33 AM
We all have opinions; mine is that you're just incredibly naïve if you don't understand that these laws are a shim to establish an eventual chain linking TPM to your license, to end anonymous Internet usage.

by user3939382
4/4/2026 at 9:11:33 AM
And you're incredibly naive if you think the TPM-linked internet usage isn't a shim to put a camera in your toilet bowl.

by gzread
4/4/2026 at 11:55:03 PM
First they say they need your age; then they say they need proof. We already have a huge, sudden trend of online services requiring your license, so your absurd comparison is a ridiculous non sequitur.

by user3939382
4/4/2026 at 6:20:44 AM
> The OS then provides a native API to return a user's age bracket (not full date-of-birth)

Call the API every day; when the age bracket changes, you can infer the date of birth.
by cmovq
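The polling attack above can be sketched directly: record the day the bracket answer changes, and the exact birthday falls out of a coarse bracket. `bracket_on` stands in for the hypothetical OS call.

```python
from datetime import date, timedelta

def infer_birthday(bracket_on, start: date, days: int):
    """Poll the bracket daily; the day the answer changes is the birthday."""
    prev = bracket_on(start)
    for i in range(1, days):
        day = start + timedelta(days=i)
        cur = bracket_on(day)
        if cur != prev:
            return day  # the bracket rolled over: this is the user's birthday
        prev = cur
    return None
```

Observing one such rollover, plus knowing which boundary was crossed, pins down the full date of birth.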
4/4/2026 at 6:06:52 PM
The distinction doesn't matter in this case. The fundamental question is whether a government can compel a decentralized open-source project to change its codebase. If you believe code is speech, it's a violation of the right to free expression.

Even if you think adding "age indication" to a project is harmless, you have to consider the precedent this sets for compelled speech in the future, potentially by regimes that you are not politically aligned with.
by fasterik
4/4/2026 at 3:23:10 AM
A pointless slippery slope to try to stand on, one that points directly at the Overton window being drawn around this.
4/4/2026 at 8:46:54 AM
Is it? A lot of parents use Family Link and similar solutions, which are way more invasive than that.

by dzikimarian
4/4/2026 at 8:46:42 PM
Those are examples of software people choose to use voluntarily. The context here is the government removing that choice and forcing you to use something under conditions it sets.

I'm sure there are parental controls that go too far or not far enough for many. A reminder of why the government trying to solve parenting problems is likely to fail, like most of its other attempts, such as failing to stop people from growing plants.
by ddtaylor
4/5/2026 at 12:09:06 PM
I agree to some extent, but who should make parental controls reasonable, then? What corporations deliver is both invasive and ineffective.

by dzikimarian
4/5/2026 at 8:08:25 PM
The market decides. Google and Apple both compete, and there are other disruptors. I worked on an education product in 2018 that would contact third-party services like Khan Academy or Duolingo, and if a child had not earned enough measurable results, they would be unable to access non-educational content.

by ddtaylor
4/6/2026 at 8:13:07 AM
The market is two companies that do not compete in this area at all, because Google literally earns money by monetizing attention, and Apple has, since the Jobs era, used children as part of its strategy to lock you into its ecosystem (see the emails it had to make available to the court). There are zero serious disruptors, and the chance they'll appear keeps shrinking because of the push for device attestation being required by more and more apps.

Making children do an hour of Duolingo before they access the open internet is hardly the goal. It's more about limiting their exposure to brain-rot content. Existing tools would require you to block it domain by domain.
Honestly, I can't see a less invasive way for such a tool to work than the page broadcasting an age rating with the HTTP response and the device, aware it's owned by a minor, refusing to display it.
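That idea fits in a few lines. The header name `X-Content-Age-Rating` and the rating labels below are invented for illustration; no such standard header exists:

```python
# Illustrative rating labels mapped to the minimum age they require.
RATING_MIN_AGE = {"all": 0, "13+": 13, "16+": 16, "18+": 18}

def should_render(headers: dict, device_user_age: int) -> bool:
    """Device-side check of a hypothetical rating header; unrated fails closed."""
    rating = headers.get("X-Content-Age-Rating", "18+")
    return device_user_age >= RATING_MIN_AGE.get(rating, 18)
```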
by dzikimarian
4/4/2026 at 5:55:43 AM
In both cases the operating system stores information it has zero business with.by shevy-java
4/4/2026 at 9:12:19 AM
The operating system already stores your full name. Isn't that a problem?

by gzread
4/4/2026 at 3:43:10 PM
Not necessarily your real full name. Plus, on Unix systems the full name is not a required field in /etc/passwd.

by MarsIronPI
4/4/2026 at 8:16:39 AM
Most importantly, people need to understand how indication leads to verification.

by rixed