2/15/2026 at 8:38:43 PM
What I find puzzling about these proposals is that it SEEMS like they could be designed to achieve 90% of the stated goals with almost 0% of the loss of privacy.
The idea would be that devices could "opt in" to safety rather than opt out. Allow parents to purchase a locked-down device that always includes a "kids" flag whenever it requests online information, and simply require online services to not provide kid-unfriendly information if that flag is included.
I know a lot of people believe that this is all just a secret ploy to destroy privacy. Personally, I don't think so. I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
by dynm
2/15/2026 at 9:37:32 PM
You are mistaking cause for effect. The loss of privacy is the goal, not a side effect; the rest is just a fig leaf.
by jacquesm
2/16/2026 at 5:06:16 AM
I generally try to think of things like this in terms of the natural incentives of systems, politicians, and well-meaning voters smoking hopeium. But now with the revelations of how insidious the Epstein class is, I have to wonder if the reason all these digital lockdowns are being shamelessly pushed with a simultaneous urgency is really just because of a giant fucking conspiracy. The common wisdom has always been that conspiracies naturally fall apart as they grow, succumbing to an increasing possibility of a defector. But I think that calculus might change when the members have all got mortal crimes hanging over their heads.
Never mind thinking about how legitimacy was laundered through scientific institutions, and extrapolating to wondering how much that same dynamic applies to "save the children" lobbying NGOs and whatnot.
by mindslight
2/15/2026 at 8:56:14 PM
Better yet, require online services to send a 'not for kids' flag along with any restricted content, then let families configure their devices however they want.
Even better, make the flags granular: <recommended age>, <content flag>, <source>, <type>
13+, profane language, user, text
17+, violence, self, video
18+, unmoderated content, user, text
13+, drug themes, self, audio
and so on...
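A minimal device-side sketch of such a filter, assuming the flag tuples above (the tuple layout and category names are only illustrative, not any real standard):

```python
# Hypothetical sketch of a device-side filter for granular content flags.
# Flag names and the tuple layout follow the examples above; none of this
# is a real standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContentFlags:
    min_age: int        # recommended minimum age, e.g. 13
    category: str       # e.g. "profane language"
    source: str         # "user" (user-generated) or "self" (first-party)
    media: str          # "text", "video", "audio", ...

def parse_flags(header: str) -> ContentFlags:
    """Parse a flag tuple like '13+, profane language, user, text'."""
    age, category, source, media = (part.strip() for part in header.split(","))
    return ContentFlags(int(age.rstrip("+")), category, source, media)

def allowed(flags: ContentFlags, child_age: int, blocked: set[str]) -> bool:
    """A family-configured policy: an age gate plus per-category blocks."""
    return flags.min_age <= child_age and flags.category not in blocked

# Example: a parent who doesn't mind swearing but blocks drug themes.
flags = parse_flags("13+, profane language, user, text")
print(allowed(flags, child_age=14, blocked={"drug themes"}))  # True
```

The point of keeping the policy on the device is that the service never learns anything about the user; it only labels its own content.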
by c22
2/15/2026 at 11:30:14 PM
And here we are again...
ASACP/RTA https://en.wikipedia.org/wiki/Association_of_Sites_Advocatin...
PICS https://en.wikipedia.org/wiki/Platform_for_Internet_Content_...
POWDER https://en.wikipedia.org/wiki/Protocol_for_Web_Description_R...
Tools available for decades.
But as said multiple times, the children are the distraction; the targets are privacy and freedom.
by ElectroBuffoon
2/15/2026 at 9:27:09 PM
No - Kid-friendly should be something sites attest to and claim they ARE. That becomes an FTC-enforceable market claim (or insert other thing here).
Foreign sites, places that aren't trying to publish things for children? The default state should be unrated content for consumers (adults) prepared to see the content they asked for.
by mjevans
2/15/2026 at 9:42:41 PM
Okay...
0+, kid friendly, self, interactive content
by c22
2/15/2026 at 9:07:25 PM
Just say the whole internet is not for kids without adult supervision and leave it at that.
It doesn't even matter if you can get something that technically works. Half the "age appropriate" content targeted at children is horrifying brainrot. Hardcore pornography would be less damaging to them.
Just supervise your damn children, people.
by struant
2/15/2026 at 10:45:46 PM
This gets complicated when you need to start giving your kids some degree of independence. I would also argue this could be implemented in a more accessibility-oriented approach.
Also, not all 13-year-olds are at an equal level of maturity, so what material is appropriate varies. I find it very annoying that I can't just set limits like: no drug references, but idgaf about my kid hearing swear words.
On other machines: I do not want certain content to ever be displayed on my work machine. I'd like to have the ability to set that. Someone with a specific background may not want to see things like children in danger. This could even be applied to their Netflix algorithm. The website Does the Dog Die does a good job of categorizing these kinds of content.
by glenpierce
2/16/2026 at 12:59:12 AM
But, in essence, they want to strip parents of the ability to give their kids the responsibility you describe. No letting your kids use social media, look at adult content, or whatever else. It's simply banned.
by nitwit005
2/15/2026 at 9:22:29 PM
yep, 18+, show ID at the time of purchasing access, soooo easy and zero technical issues.
by cowboylowrez
2/15/2026 at 9:32:13 PM
Other advantages include:
- It's much easier for web sites to implement, potentially even on a page-by-page basis (e.g. using <meta> tags).
- It doesn't disclose whether the user is underage to service providers.
- As mentioned, it allows user agents to filter content "on their own terms" without the server's involvement, e.g. by voluntarily displaying a content warning and allowing the user to click through it.
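The page-by-page idea above can be sketched as a client-side check. This is a minimal sketch assuming a hypothetical meta name, "content-rating"; neither the tag name nor the rating format is a real standard:

```python
# Hypothetical sketch: a user agent reading a page-level rating from a
# <meta> tag (the name "content-rating" is invented for illustration)
# and deciding client-side whether to show a warning.
from html.parser import HTMLParser

class RatingSniffer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "content-rating":
            self.rating = a.get("content")

page = '<html><head><meta name="content-rating" content="17+, violence"></head></html>'
sniffer = RatingSniffer()
sniffer.feed(page)

# The user agent decides what to do on its own terms; the server is
# never told anything about the user.
if sniffer.rating and sniffer.rating.startswith(("17+", "18+")):
    print(f"Content warning: rated {sniffer.rating}. Click through to continue.")
```

A page with no such tag simply yields `rating = None`, which the user agent can treat however the family configured it (allow, warn, or block unrated content).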
by duskwuff
2/15/2026 at 11:10:36 PM
This exact method was implemented back around the turn of the century by RSAC/ICRA. I think only MSIE ever looked at those tags. But it seems like they met the stated goal of today's age-verification proposals.
That's why I have a hard time crediting the theory that today's proposals are just harmlessly clueless and well-intentioned (as dynm suggests). There are many possible ways to make a child-safe internet, and it's been a concern for a long time. But just in the last year there are simultaneous pushes in many regions to enact one specific technique which just happens to pipe a ton of money to a few shady companies, eliminate general-purpose computing, and be tailor-made for social control and political oppression; on top of that, it isn't even any better at keeping porn away from kids! I think Hanlon's razor has to give way to Occam's here; malice is the simpler explanation.
by wiml
2/15/2026 at 9:03:24 PM
Internet Explorer had content ratings back in the day
by user3939382
2/15/2026 at 10:05:09 PM
The "problem" back then was that nothing required sites to provide a rating and most of them didn't. Then you didn't have much of a content rating system; instead you effectively had a choice for what to do with "unrated" sites, where if you allow them you allow essentially the whole internet, and if you block them you might as well save yourself some money by calling up your ISP to cancel.
This could pretty easily be solved by just giving sites some incentive to actually provide a rating.
by AnthonyMouse
2/15/2026 at 9:55:03 PM
As others have said, the goal is the surveillance. But this notion goes further than that. So many ills people face in life can be solved by just not doing something. Addicted to something? Just stop. Fat? Stop eating. Getting depressed about social media? Stop browsing.
Some people have enough self-control to do that and quit cold turkey. Other people don't even consciously realize what they are doing as they perform that maladaptive action without any thought at all, akin to scratching a mosquito bite.
If someone could figure out why some people are more self-aware than others, a whole host of the world's problems would be better understood.
by asdff
2/16/2026 at 6:44:20 AM
The Purpose Of a System Is What It Does. Whether it is stated (or even designed) to protect kids, if it does anything more or different from that goal, it will perform those actions regardless of what is said about what the System should be doing.
by r2_pilot
2/15/2026 at 8:44:39 PM
I have not once seen a proposal actually contain a zero-knowledge proof. This isn't something exotic or difficult. It is clear to me there are ulterior motives, and perhaps a few well-meaning folks have been co-opted.
by KoolKat23
2/15/2026 at 8:47:00 PM
FWIW, the EU is working on zero-knowledge proofs: https://digital-strategy.ec.europa.eu/en/news/commission-mak...
But I strongly prefer my solution!
by dynm
2/15/2026 at 11:58:11 PM
Apple and Google age verification are both zero-knowledge based.
by wmf
2/15/2026 at 9:43:02 PM
A ZKP will work as a base, but the proof mechanism will have to be combined with anti-user measures like device attestation to prevent things like me offering an API to continually sign requests for strangers. You can rate-limit it, or you can add an identifier, both of which make it not zero-knowledge.
Parent's proposal is better in that it would only take away general-purpose computing from children rather than from everyone. A sympathetic parent can also allow it anyway, just like how a parent can legally provide a teen with alcohol in most places. As a society we generally consider that parents have a right to decide which things are appropriate for their children.
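The proxy-signing concern can be sketched concretely. This is a toy illustration, not a real ZKP: `prove_over_18` stands in for a credential-based proof call, and the HMAC is only a placeholder for whatever the real proof would be.

```python
# Hypothetical sketch of the proxy problem: an adult who holds a valid
# age credential can simply re-run the proof for anyone who asks.
# The HMAC below is a stand-in for a real zero-knowledge proof.
import hashlib
import hmac
import os

SECRET_CREDENTIAL = os.urandom(32)  # stands in for an adult's age credential

def prove_over_18(site_challenge: bytes) -> bytes:
    """Produce a per-challenge proof. A real ZKP would reveal nothing
    except 'holder is over 18'; this HMAC is only a placeholder."""
    return hmac.new(SECRET_CREDENTIAL, site_challenge, hashlib.sha256).digest()

# The "oracle": nothing in the protocol stops the credential holder
# from signing challenges submitted by strangers over an API.
def oracle(challenge_from_stranger: bytes) -> bytes:
    return prove_over_18(challenge_from_stranger)

# The site cannot distinguish a proxied proof from a direct one.
assert oracle(b"site-challenge") == prove_over_18(b"site-challenge")
```

Any countermeasure that would let the verifier detect this sharing (rate limits keyed to a holder, or an embedded identifier) necessarily links proofs together, which is exactly the unlinkability that "zero-knowledge" was supposed to provide.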
by digiown
2/15/2026 at 10:42:15 PM
Honestly, I think no measure is or should be perfect. It's completely disproportionate. If there's a will, there's a way.
by KoolKat23
2/16/2026 at 1:49:06 AM
> A ZKP will work as a base, but the proof mechanism will have to be combined with anti-user measures like device attestation to prevent things like me offering an API to continually sign requests for strangers
Spot on! The "technical" proposal from Google of a ZKP system is best seen as technically-disingenuous marketing meant to encourage governments to mandate the use of Google's locked down devices and user-hostile ecosystem.
The only sane way to implement this is to confine the locked-down computing blast radius to the specific devices that need child protection, rather than forcing the restrictions onto everyone.
by mindslight
2/16/2026 at 3:21:35 AM
I'm not sure how I feel about depriving teens of general-purpose computing devices, either, which is the logical consequence of both the pseudo-ZKP scheme and parent's "underage signal". I believe most of us here learned programming through being able to run arbitrary programs, and that would never have happened if we only had access to locked-down devices. And that habit of viewing computers as appliances controlled by other people isn't going to go away on their 18th birthday either.
Overall I think that while there is a reasonable argument in favor of age verification for some types of sites, the harms of implementing it would so drastically outweigh any benefits that it should not be done.
by digiown
2/16/2026 at 3:37:02 AM
Sure, I'm sympathetic to that idea. The point is that it leaves such a decision up to parents, putting non-locked-down computers in the same position as any other potentially-harmful thing you might want to keep away from your kids.
Keeping parents in control also lets them make decisions contrary to what the corporate surveillance industry can legally get away with. For example, we can easily imagine an equivalent of Facebook jumping through whatever hoops it needs to target minors, perhaps outright banned in various places but not generally in the US. If age restrictions are going to be the responsibility of websites, then parents will still have been given no additional tools to prevent their kids from becoming addicted to crap like that.
Shooting from the hip about the situation you describe, I'd be tempted to give a kid a locked-down phone with heavy filtering (or perhaps without even a web browser so they can't use Facebook and its ilk), and then an unrestricted desktop computer which carries more "social accountability".
by mindslight
2/16/2026 at 4:47:30 AM
I think banning facebook/instagram/etc is one of the special cases where it makes more sense to be enforced by the site, because people use these mainly out of peer pressure and network effect. If a majority is kept off, the rest have little use for it regardless of their personal wishes. Heck, I'd reckon most kids don't actually want to use them all that much. Regardless of technical details, giving parents this control will also cause a lot of resentment if most parents don't go along.
As opposed to censoring internet content in general, which does not work because there will always be sites not under your jurisdiction and things like VPNs. I don't support any such censorship measures as a result.
by digiown
2/16/2026 at 5:09:58 AM
But why not both? I'm coming from a USian perspective here, where I don't see much possibility of actual widespread bans of these types of products, rather just a retrenching to what can be supported by regulatory capture.
Also, we're getting the locked-down computing devices anyway - mobile phones as they are right now are a sufficient root of trust for parental purposes. So it seems pointless to avoid using that capability (which corpos are happy to continue embracing regardless) but instead put an additional system of control front and center.
by mindslight
2/16/2026 at 2:16:01 PM
> don't see much possibility of actual widespread bans
Why do you think there would be regulation to honor the "underage signal", but not explicitly ban social media sites for "unverified" users?
> seems pointless to avoid using that capability
It's not pointless, because relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18. Everyone will lose general purpose computing, along with adblocking and other mitigations that protect you from various harms. It also leads to widespread surveillance being possible as parents will want to be able to "audit" their teen's usage.
> put an additional system of control front and center
The problem should be controlled at the source, not the destination, if feasible.
by digiown
2/16/2026 at 5:21:52 PM
> Why do you think there would be regulation to honor the "underage signal"
Our ancestor comment still has the direction backwards. This is the specific dynamic that makes sense to me: https://news.ycombinator.com/item?id=47027738
This means any legislation should be aimed at directing device manufacturers to implement software that can respect content assertions sent by websites.
> relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18
Okay, but in 2026 we're basically at this point. Show me a mobile phone that doesn't have a bootloader locked down with "secure boot." For this particular threat that we had worried about for a long time, we've already lost. Not in the total, sweeping way that analysis from first principles leads you to, but in the day-to-day practical way. It's everywhere.
The next control we're staring down is remote attestation, which is already being implemented for niches like banking. The scaffolding is there for it to be implemented on every website - "verifying your device's security" - I get that basically everywhere these days. As soon as 80% of browsers can be assumed to have remote-attestation capabilities, we can be sure sites will start demanding these signals and slowly clamping down on libre browsers (as has been done with browser/IP fingerprinting over the past decade).
Any of these talks of getting the server involved intrinsically rely on shoring up "device security" through remote attestation. That is exactly what can end ad-blocking and every other client-represents-the-user freedom.
> The problem should be controlled at the source, not the destination, if feasible.
You've already acknowledged VPNs and foreign jurisdictions, which means "at the source" implies a national firewall, right?
Unless your goal is to undermine any solution on this topic? I'm sympathetic to this, I just don't see that being realistic in today's environment!
by mindslight
2/16/2026 at 10:26:12 PM
I agree with controls on addictive/exploitative platforms like Facebook or Instagram. These can be feasibly controlled at the source.
In principle I agree with keeping some content away from children, but I don't think any of the implementations will work without causing worse problems, so I disagree with implementing those.
> in the day to day practical way
There's a world of difference between practically required and it being illegal to use anything else, even if initially for a small set of population. You still have a choice to avoid those now. Moreover there is a fairly large subculture of gamers etc opposed to these movements, and open computing platforms will take a long time to fizzle out without intervention.
If you mandate locked down devices for kids, it will very quickly become locked down devices for everyone except for "licensed developers", because no one gets a bunch of new computers upon becoming an adult, and a new campaign from big tech will try to associate open computers with criminals.
by digiown
2/17/2026 at 1:11:07 AM
> Moreover there is a fairly large subculture of gamers etc opposed to these movements, and open computing platforms will take a long time to fizzle out without intervention.
You kind of skipped over the distinction I made between "secure boot" and "remote attestation". Based on what you wrote here I'm not quite sure if you understand the difference between them. And in the context of locked down computing, the difference between them, and their specific implications, is highly important.
I'm not pointing this out to shoot down your point or something, rather I think you'd benefit from learning about this outside of this comment. But I'll be a little more explicit here to get you started:
The worry with secure boot was based around the possibility that all manufacturers would stop making non-locked-down devices. This has not really panned out - all phones basically have secure boot, there are many you can install your own OS image onto, and there are many escape hatches.
The worry with remote attestation is that website owners will be able to insist that you run specific software environment and/or hardware, and deny you access otherwise. On desktop web browsers, this is the WEI proposal that seems to have stalled. But on mobile, this is still going full speed ahead, both web and apps (SafetyNet).
The thing about remote attestation is that its restrictions take the same shape as current CAPTCHA nags, IP-block-based hassling, etc. When websites see that more and more visitors are compliant, they can crank up the pain. First it's invisible, then it's a warning, then it's a big hassle (e.g. lots of CAPTCHAs), and then finally it's a hard lockout. This can happen, led by specific industries (e.g. banking), regardless of any communities working to resist it. What you should picture is all of our old computers working just fine, but not being able to access modern websites, in a way that cannot be technically worked around.
by mindslight
2/15/2026 at 9:27:57 PM
It may be simple to sleuth out kid status over time, but I would be very uncomfortable with a tag that verifies kid status instantly, with no challenges, as it would provide a targeting vector and defeat safety.
by rolph
2/16/2026 at 1:43:06 AM
> I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
Advancing a case for a precedent-creating decision is a well-known tactic for creating the environment of success you want for a separate goal.
It's possible you can find genuine belief in the people who advance the cause. Charitably, they're perhaps naive or coincidentally aligned; uncharitably, they're sometimes useful idiots brought in line, directly or indirectly, with various powerful donors' causes.
by 9x39
2/15/2026 at 9:21:14 PM
It has nothing much to do with kids and everything to do with monitoring and suppressing adults.
by OutOfHere
2/15/2026 at 9:19:25 PM
You are assuming good faith. This is why you are puzzled.
by ImHereToVote
2/15/2026 at 8:47:11 PM
I completely agree. The problem is the lack of compromise on both sides of the issue.
I wouldn't say it's a lack of understanding, but that any compromise is seen as weakness by other members of their party. That needs to end.
by sublinear