4/19/2026 at 3:56:35 PM
Our world. It was a good design in our world.

I don't think v6 is the absolute pinnacle of protocol design, but whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6. If people consistently can't do better than v6, then I'd say v6 is probably pretty decent.
by Dagger2
4/19/2026 at 6:02:25 PM
> they end up coming up with something equivalent to IPv6

Not just that. Almost every single thing people think up that's "better" is something that was considered and rejected by the IPv6 design process, almost always for well-considered reasons.
by zrail
4/19/2026 at 10:53:05 PM
The converse also happens: people look at something IPv6 supports and say "that's crazy, why would that be allowed/designed for", without knowing that IPv4 does it too.
by dwattttt
4/19/2026 at 7:49:37 PM
Or frequently, considered and accepted. 6to4 is a popular one to reinvent.
by Dagger2
4/19/2026 at 4:10:50 PM
In retrospect I think just adding another 16 or 32 bits to V4 would have been fine, but I don’t disagree with you. V6 is fine and it works great.

All the complaints I hear are pretty much all ignorance except one: long addresses. That is a genuine inconvenience and the encoding is kind of crap. Fixing the human readable address encoding would help.
by api
4/19/2026 at 8:02:12 PM
If you add new bits to v4 you invent an incompatible protocol, and you should add a lot of bits so you'll never have to invent another incompatible protocol again. You can also fix the minor annoyances in v4.
by pocksuppet
4/19/2026 at 8:17:55 PM
Flexible! The first byte tells you how many bytes of addressing you have. Perfect and future proof!
by bombcar
4/20/2026 at 4:29:30 AM
At best future-resistant. True future-proofing would require representing address length as an arbitrary-precision nonzero unsigned integer.
Since allowing a zero-length network address format would serve no purpose other than to pointlessly complicate standards definitions, you could trivially and without loss of generality interpret zero to denote some extended-length address length representation to be defined in a future version of the standard.
by jasomill
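The length-prefixed scheme in the two comments above — first byte gives the address length, zero reserved as an escape — can be sketched in a few lines. This is a hypothetical format invented for illustration, not any real protocol:

```python
def parse_address(buf: bytes) -> tuple[bytes, bytes]:
    """Parse a hypothetical length-prefixed address field.

    The first byte gives the address length in bytes; zero is
    reserved as an escape for a future extended-length encoding,
    as suggested above.
    """
    n = buf[0]
    if n == 0:
        raise NotImplementedError("extended-length form reserved for a future version")
    if len(buf) < 1 + n:
        raise ValueError("truncated address")
    return buf[1:1 + n], buf[1 + n:]  # (address, remaining bytes)

# A 4-byte (IPv4-sized) address followed by the rest of the header:
addr, rest = parse_address(bytes([4, 192, 0, 2, 1]) + b"payload")
```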
4/20/2026 at 11:33:38 AM
Hardware implementations typically do not like variable-size fields. Not just because the total header size becomes unpredictable, but because it means any following fields no longer have a fixed offset, and that complicates parsing.
by tremon
4/19/2026 at 9:39:57 PM
IPv4 was designed with extension headers: it boggles my mind that simply using the headers to extend the address was never seriously considered. It was proposed: https://www.rfc-editor.org/rfc/rfc1365.html

It still would have been a ton of work, but we could have just had what IPv6 claimed to be: IPv4 with bigger addresses. Except after the upgrade, there'd be no parallel system. And all of DJB's points apply: https://cr.yp.to/djbdns/ipv6mess.html
by perennialmind
4/20/2026 at 3:10:20 AM
I said "whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6", and that's what you did here. And as predicted, it was 6to4 you reinvented.

v4 extension headers are well known to get your packets dropped on the Internet, so they're a non-starter, but there's another extension mechanism you can use: you can set the "next protocol" field to a special value, then put the extended address at the start of the payload, followed by the original payload. This is functionally identical to using extension headers, but using a mechanism that doesn't get your packets dropped.
Far from not being seriously considered, this approach was adopted in v6 as RFC 3056.
> Except after the upgrade, there'd be no parallel system.
No. You get a parallel system because v6 addresses are too big to work with v4. Even if you used extension headers, v6 addresses would still be too big to work with v4. Whatever you do, v6 addresses are too big to work with v4. You WILL get a parallel system, and there's no way around this other than not making the addresses bigger.
by Dagger2
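The "next protocol" trick described above could be sketched as follows. Every detail here is invented for illustration — the 16-byte address size, the field layout, and the use of protocol number 253 (which sits in the experimental range set aside by RFC 3692):

```python
import struct

# Hypothetical "extended address follows" protocol number; 253 is
# reserved for experimentation and testing, used here purely for
# illustration.
PROTO_EXT_ADDR = 253

def wrap_payload(ext_src: bytes, ext_dst: bytes, inner_proto: int,
                 payload: bytes) -> bytes:
    """Prepend extended 16-byte source/destination addresses to the
    payload. The v4 header's protocol field would carry PROTO_EXT_ADDR,
    and the real inner protocol number moves into this shim header."""
    assert len(ext_src) == len(ext_dst) == 16
    return struct.pack("!16s16sB", ext_src, ext_dst, inner_proto) + payload

shim = wrap_payload(b"\x20\x01" + 14 * b"\x00",
                    b"\x20\x02" + 14 * b"\x00",
                    6,              # TCP rides inside the shim
                    b"tcp segment")
```

Routers that don't understand PROTO_EXT_ADDR just see an ordinary v4 packet, which is exactly why this survives the Internet where extension headers don't.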
4/20/2026 at 12:55:32 PM
The hopes were for a converged software stack, but the candidates were all parallel protocols competing with IPv4. A full transition would end with the extinction of IPv4. Upgrading IPv4, quite apart from the brass tacks of the wire format, would have entailed variable-length addresses, and even the idea of starting a new protocol with 64-bit addresses and an upgrade path was considered far too scary at the time. That was only one of a slew of non-technical requirements imposed from above for future proofing, NIH paranoia, vague security promises and politics in general.

A decade later, when IPv6 had real-world deployments, it was far too late for 6to4 to save the day: entirely because a swath of non-6to4 addresses existed and needed to be reachable. Given that no strategic gain was apparent for upgrading the commercial core, aligning financial interests by upgrading past the edge instead would absolutely have made sense. Unfortunately the hard parts the engineers anticipated in the early 90s were not the ones that held IPv6 back.
In summary, I agree: 6to4 could have been great!
by perennialmind
4/19/2026 at 10:22:37 PM
Here’s my understanding.

The people involved in core Internet protocol design were used to the net being a largely walled garden of governments, corporations, universities, and a small number of BBSes and niche ISPs.
Major protocol upgrades had happened before, not just for the core protocol but all kinds of other then-core services.
It had been a while but not that long, I think less than 20 years, and last time it was pretty easy. They assumed they could design something better and phase it in and all the members of the Internet community would just do the right thing.
That’s probably what made them feel they could push a more radical upgrade.
Unfortunately they started this right as the massive tsunami of Internet commercialization hit. Since V6 was too new, everyone went with V4. Now all of a sudden you had thousands of times more nodes, sites, and personnel, all of them steeped in IPv4 and rushing to ship on top of it. You also lost the small town atmosphere of the early net, where admins were a club and could coordinate things.
Had V6 launched five years earlier V4 would probably be dead.
V6 usage will probably keep creeping up, but as it stands we will likely be dual stack forever. Once the installed user base and sunk cost is this high the design is fixed and can never be changed without a hard core heavy handed measure like a government mandate.
by api
4/19/2026 at 11:46:20 PM
They weren't all that wrong. NAT was an incompatible protocol upgrade - that's why it broke protocols that made pre-NAT assumptions, like FTP - but it kept most of them working. DNS64 is also an incompatible protocol upgrade that breaks protocols that make pre-DNS64 assumptions, like hardcoding addresses - but it keeps most of them working.

In DNS64, whenever your DNS resolver encounters an IPv4-only site, it translates it to an IPv6 address under a translator prefix, and returns that address to the client. The client connects to the translator server via that address, and the translator server opens an IPv4 connection to the website. Your side of the network is IPv6-only, not even running tunneled v4.
This only breaks things to about the same small extent that the introduction of NAT did.
by pocksuppet
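The synthesis step described above is mechanical: the resolver embeds the IPv4 address in the low 32 bits of a translator prefix. A short sketch using the NAT64 well-known prefix from RFC 6052:

```python
import ipaddress

# The NAT64 "well-known prefix" from RFC 6052; a DNS64 resolver
# synthesizes AAAA records by embedding the IPv4 address in the
# low 32 bits of this /96.
PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(v4: str) -> ipaddress.IPv6Address:
    v4int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address(int(PREFIX.network_address) | v4int)

# An IPv4-only server at 192.0.2.33 appears to v6-only clients as:
print(synthesize("192.0.2.33"))  # 64:ff9b::c000:221
```

The client then routes traffic for that address to the NAT64 translator, which opens the actual IPv4 connection.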
4/19/2026 at 10:49:25 PM
iOS has benefited from a heavy handed mandate, so that it and all of its apps sing on IPv6-only networks. They just need to expose the IPv4 internet as IPv6 addresses.
by iknowstuff
4/20/2026 at 2:36:23 AM
> Had V6 launched five years earlier V4 would probably be dead.

Not a chance. IPv6 ate way more memory than IPv4, and memory was expensive back in 1995. Even IPv4 proliferation was chewing up memory, and that was why the IETF introduced Classless Inter-Domain Routing (CIDR) in 1993, which gave us subnet masking.
Memory cost was a problem in routing tables until after both the DotBomb and the TeleBomb.
by bsder
4/19/2026 at 11:43:21 PM
You would have ended up with a protocol identical to IPv6, but with fewer address bits.

If you add *any* address bits you've already broken protocol compatibility and you need to upgrade the entire world. While you're already upgrading the entire world, you should add so many address bits that we'll never need more, because it costs the same, and you may as well fix those other niggling problems as well, right?
by pocksuppet
4/19/2026 at 7:46:48 PM
> Fixing the human readable address encoding would help

Yes! They need an alternate encoding form that distills to the same addresses.
My machine's link-local IPv6 address is "fe90::6329:c59:ad67:4b52%8".
If I try to paste that into the address bar in Edge or Chrome (with the https://) it does an internet search on that string! No way around it.
I have to do workarounds like: "http://fe90::6329:c59:ad67:4b52%8.ipv6-literal.net:8081/"
All to test the IPv6 interface on a web server I'm running on my local machine.
by fortran77
4/20/2026 at 1:12:46 AM
Blame the WHATWG for that. They're the reason that v6 addresses in URLs are such a mess. http://[fe90::6329:c59:ad67:4b52%8]:8081/ should work, but doesn't, because they refuse to allow a % there. (This is really damned frustrating, because link-locals are excellent for setting up routers or embedded machines, or for recovering from network misconfigurations.)

If it's on the same machine then just use http://[::1]:8081/. Dropping the interface specifier (http://[fe90::6329:c59:ad67:4b52]:8081/) works if the OS picks a default, which some will. curl seems happy to accept it. Or just use one of the non-link-local addresses on the machine, if you have any.
The other frustrating part of this is that it makes it impossible to come up with your own address syntax. An NSS plugin on Linux could implement a custom address format, and it's kind of obvious that the intention behind the URL syntax is that "[" and "]" enter and exit a raw address mode where other URI metacharacters have no special meaning. In general you can't syntax validate the address anyway because you don't know what formats it could be in (including future formats or ones local to a specific machine), so the only sane thing to do is pass the contents verbatim to getaddrinfo() and see if you get an error.
But no, they wrote the spec to only allow a subset of v6 addresses and nothing else.
I very much didn't test it, but this patch might do the job on Firefox (provided there's no code in the UI doing extra validation on top):
--- a/netwerk/base/nsURLHelper.cpp
+++ b/netwerk/base/nsURLHelper.cpp
@@ -928,3 +928,3 @@ bool net_IsValidIPv4Addr(const nsACString& aAddr) {
bool net_IsValidIPv6Addr(const nsACString& aAddr) {
- return mozilla::net::rust_net_is_valid_ipv6_addr(&aAddr);
+ return true;
}
by Dagger2
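On the "pass the contents verbatim to getaddrinfo()" point: OS-level parsers really do accept zone IDs. A small illustration, using Python's ipaddress module rather than getaddrinfo() itself, with "eth0" as an assumed example interface name:

```python
import ipaddress

# Python 3.9+ parses zone IDs in IPv6 literals directly, with no
# URL machinery involved; "eth0" is just an example interface name.
addr = ipaddress.IPv6Address("fe80::6329:c59:ad67:4b52%eth0")
print(addr.scope_id)       # eth0
print(addr.is_link_local)  # True
```

The resolver layer has no trouble with the %-suffix; it's only the URL spec that refuses to carry it.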
4/19/2026 at 8:02:44 PM
An IPv6 literal hostname in a URL must be surrounded by square brackets.
by pocksuppet
4/20/2026 at 3:20:43 AM
Chrome and Edge still do a search on it in my default search engine even with []:

https://[fe80::5ad6:9567:26b7:763b%18]:8081/
Even Hacker News doesn't think it's a link
by fortran77
4/20/2026 at 12:29:45 PM
On Mozilla Firefox, after reenabling the separation into URL and search bar, it reports: "Invalid URL – Hmm. That address doesn’t look right. \n Please check that the URL is correct and try again." What does the '%' mean in there?
by 1718627440
4/19/2026 at 4:31:50 PM
IPv4 is absolutely fine. Consumers can be behind NAT. That's fine. Servers can be behind reverse proxies, routing by DNS hostname. That's also fine. An IPv4 address might be a valuable resource, shared between multiple users. Nothing wrong with it.

Yes, it denies simple P2P connectivity. The world doesn't need it. Consumers are behind firewalls either way. We need a way for consumers to connect to a server. That's all.
by vbezhenar
4/19/2026 at 5:08:32 PM
You're the reason I have to call my ISP to host a minecraft server for a couple of my friends.
by Lt_Riza_Hawkeye
4/19/2026 at 6:21:48 PM
No, they're not. That's some other weird policy specific to your ISP.

With IPv4 + NAT, you have a public IP address. That public address goes to your router. Your router can forward any port to any machine on your LAN. I used to run Minecraft servers from a residential connection on IPv4, and it was fine. Never had to call the ISP.
by mort96
4/19/2026 at 6:32:21 PM
This assumes the ISP allocates a public IPv4 address.

In many countries they don't have enough, so you have CGNAT.
by Symbiote
4/19/2026 at 6:45:45 PM
That's a fair point. In my mind, residential ISPs give out public IP addresses and CGNAT is just for cell phones. But I recognize that the philosophy of "we don't need to solve IP address exhaustion, we just need to keep people able to access Facebook" leads to CGNAT or multi-level NAT.

Still, I do think that the solution of "one IPv4 address per household + NAT" is a perfectly good system. I view the IPv6 mentality of giving each computer in the world a globally unique IPv6 address as a non-goal.
by mort96
4/19/2026 at 7:13:22 PM
Even if you go with one IPv4 per household + 1 per company, you're going to be hard-pressed to find room for that in 32 bits, at least after you add the routing infrastructure.
by wyufro
4/19/2026 at 8:03:19 PM
There are more households than IP addresses. They can't all have one each. So you need longer addresses, and then you're already reinventing IPv6.
by pocksuppet
4/19/2026 at 9:06:02 PM
There are roughly twice as many IPv4 addresses as households globally.
by mburns
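For a rough sense of scale (the household count below is my own assumption for illustration, not a figure from the thread):

```python
# Back-of-the-envelope check of the ratio claimed above, assuming
# roughly 2.3 billion households worldwide.
total_v4 = 2 ** 32               # 4,294,967,296 possible addresses
households = 2_300_000_000
ratio = total_v4 / households
print(round(ratio, 2))           # roughly 1.87 addresses per household
```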
4/20/2026 at 7:39:35 AM
That's not enough.

For one, businesses and other entities also need Internet access. Cloud companies in particular need a ton of addresses. That's gonna eat up a fair chunk of the remaining 50%.
Two, humanity is still growing, governments across the world are building new housing. That's gonna eat up another chunk.
Three, routing is hierarchical, and infrastructure organisations and ISPs are assigned blocks of addresses, not individual addresses. We can't just have a pool of free IP addresses and assign any address to any house in the world as needed. So even having 50% of IP addresses free wouldn't really be enough.
So in my mind, an IP-address-to-household ratio of 0.5 means residential CGNAT is inevitable, even if we ignore legacy issues like individual universities and other institutions owning gigantic /8 or /16 ranges.
by mort96
4/19/2026 at 11:09:46 PM
Regardless of the actual number, I'm pretty sure that IPv4 addresses are not proportionally assigned to each region according to # of households.
by tremon
4/19/2026 at 8:47:58 PM
> That's a fair point. In my mind, residential ISPs give out public IP addresses and CGNAT is just for cell phones.

If you are giving out public IPs then you aren't really NAT'ing.
by happymellon
4/19/2026 at 8:53:23 PM
Hm? The ISP gives one IP address to a router in a house, and that router uses NAT to let all the computers inside that house use the Internet through the one single shared public IP address. That's NAT, isn't it?
by mort96
4/20/2026 at 8:20:48 AM
Well, in a strict sense, it is "you" who chooses to run a NAT'ing router there; you could just have one single computer per ISP connection. Or have it run a proxy for you, or NAT.

I mean, I understand that this feels normal today, that 10-20-50 devices need internet and that the way to manage that is to NAT the connections, but your ISP isn't doing NAT, it is you.
by IcePic
4/20/2026 at 12:26:15 PM
The model of "every Internet subscriber gets one IP address" only works thanks to NAT.
by mort96
4/19/2026 at 6:31:28 PM
Nope, CGNAT means I need to call my ISP. We now have 2 levels of NAT because the IPv4 address situation has gotten so bad they can't even give every residence its own public IP. If your ISP hasn't adopted it yet, it's likely they got lucky and bought a ton of IPv4 addresses a long time ago when they were cheap, and have decided using them is cheaper than upgrading their network to support CGNAT.
by voxic11
4/19/2026 at 9:05:29 PM
Nope. If you get assigned a routable IPv4 IP, you just have a shit ISP. I led the rollout of one of the larger O365 implementations. Outlook and the office stack needed like 10-16 ports per user. We served like 150k people with 30 outbound IPs. If you have an IP, you have 64k+ ports to use.

I also deployed it as a pilot on an internal network. Other than getting direct IPv6 connectivity to some services, which sometimes gave us better performance, it conferred no advantage to us.
IPv6 is great for phones where you don't expect any inbound traffic. Even then, every US carrier is using Carrier NAT to route and proxy traffic for their own purposes.
by Spooky23
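The port math in that anecdote roughly checks out. A quick sanity check, treating ~64,000 as the usable ports per IP as the comment does:

```python
# 30 outbound IPs sharing ~64,000 usable ports each, serving
# 150,000 users who each need "like 10-16 ports".
ports_available = 30 * 64_000
per_user = ports_available / 150_000
print(per_user)  # 12.8 ports per user -- inside the quoted 10-16 range
```

At the top of the quoted range (16 ports each) the pool would actually come up short, which is presumably why CGNAT deployments track port utilization carefully.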
4/19/2026 at 9:27:16 PM
I'm glad I have a shit ISP, then. So shitty being able to host my own software.
by unethical_ban
4/20/2026 at 10:59:43 AM
The “don’t” was missing. Honestly, I give up with Siri dictation. Either my voice has changed, or it’s changed in a way that it doesn’t like my cadence or diction.

Either way, mea culpa.
by Spooky23
4/19/2026 at 5:45:54 PM
IPv4 usage in its current state would've been much more limited and annoying in a world without IPv6. Therefore, IPv4 exists as-is thanks to others adopting IPv6.
by Fnoord
4/19/2026 at 7:17:17 PM
> IPv4 is absolutely fine. Consumers can be behind NAT.

I don't want our communications infrastructures to be just for consumers.
by throw0101a
4/19/2026 at 4:36:26 PM
Yeah, if you ignore literally every use of the internet except "check Facebook", then it's perfect.

Unfortunately, the internet is used for a lot more than using one of the six gigantic centralized websites.
by estimator7292
4/19/2026 at 5:16:22 PM
I used to think that too, but outside of our niche bubble most people genuinely do only do web browsing.

Speaking of that, why don't we just keep ipv4 for ourselves and let them eat ipv6?
by moffkalast
4/19/2026 at 5:22:21 PM
Most people browse Facebook/Youtube over IPv6 as a matter of fact, and it works perfectly.
by betaby
4/19/2026 at 6:06:49 PM
> Yes, it denies simple P2P connectivity. World doesn't need it.

Worth pointing out that this article was written by the now-CEO of Tailscale. I don't know if "The world doesn't need P2P connectivity" is a compelling take.
by RIMR
4/19/2026 at 11:03:49 PM
With the obligatory caveat that I am but a single datapoint, I use various P2P apps through multiple levels of NAT without issue, and I very intentionally prevent devices on my local LAN from being publicly reachable. So it rings true to me.

I do wish ISPs would refrain from intentionally breaking things though. It ought to be illegal for them to block specific ports or filter specific sorts of traffic absent a pressing and active security concern.
by fc417fc802
4/20/2026 at 2:36:36 AM
A lot of us don't like this "you will own nothing and you will be happy" kind of energy.by kalleboo
4/19/2026 at 6:36:44 PM
This comment exemplifies my worst fear and reinforces my somewhat incomplete idea that IPv4 is perhaps overall safer for the world, and that "worse is better" depending on what you're optimizing for.

Roughly, it's my belief that an IPv6 world makes it easier for centralizing forces and harder for local p2p or p2p-esque ones; e.g. an IPv6 world would have likely made it easier to do bad things like "charge for individual internet user in a home."
The decentralization of "routing power" is more a good thing than bad: what you pay for in complexity you get back in "power to the people."
by jrm4
4/19/2026 at 7:02:11 PM
> easier to do bad things like "charge for individual internet user in a home."

This idea comes up in every HN conversation about IPv6, and so I suppose this time it's my turn to point out RFC 8981[0]. tl;dr: typically, machines which receive IPv6 address assignment via SLAAC (functional equivalent of DHCP) periodically cycle their addresses. Supposed to offer pretty effective protection against host-counting.
by MrDOS
4/19/2026 at 9:14:06 PM
You know that's not what he meant. The world is always changing. It was designed in 1998 by networking gear companies, with their own company needs in mind. It wasn't engineered with end users, or even network administrators and app developers, in mind.

The only reason it's around is because of the sunk cost fallacy and people stuck in decades-old tech debt. A new protocol designed today will be different, much the same as how Rust is different than Ada. SD-WAN wasn't a thing in 1998; the cost of chips and the demands of mobile customers weren't a thing. Supply/demand economics have changed the very requirements behind the protocol.
Even concepts like source and destination addressing should be re-thought. The very concept of a network layer protocol that doesn't incorporate 0RTT encryption by default is ridiculous in 2026. Even protocols like ND, ARP, RA, DHCP and many more are insecure by default. Why is my device just trusting random claims that a neighbor has a specific address without authentication? Why is it connecting to a network (any! wired,wireless, why does it matter, this is a network layer concern) without authenticating the network's security and identity authority? I despise the corporatized term "zero trust" but this is what it means more or less.
People don't talk about security, trust, identity and more, because ipv6 was designed to save networking gear vendors money, and any new costly features better come with revenue streams like SD-WAN hosting by those same companies. There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing.
It all comes down to how much money it costs the participants of the RFC committees. given how dependent the world is on this tech, I'm hoping governments intervene. It's sad that this is the tech we're passing to future generations. We'll be setting up colonies on mars, and troubleshooting addressing and security issues like it's 2005.
by notepad0x90
4/19/2026 at 10:59:41 PM
> it was designed in 1998 by networking gear companies

That's false. Firstly, rfc1883 was published in 1995, which means work started some time before that, and the RFC process included operating system vendors and RIR administrators. The primary author of rfc1883 worked at Xerox PARC, and the primary author of rfc1885 worked at DEC. Neither were networking gear companies.
by tremon
4/19/2026 at 11:26:25 PM
That's a proposed standard; it looks like what obsoletes it is the draft standard.
by notepad0x90
4/19/2026 at 11:30:56 PM
And rfc2460 is obsoleted by rfc8200 from 2017. That should mean that IPv6 was designed in 2017 by an Israeli cybersecurity company, right?
by tremon
4/20/2026 at 12:12:30 AM
No, I think proposed, draft and internet standard all have specific meanings we don't need to debate over. Your claim that IPv6 was first proposed in 1995 is correct, as is my claim that it was first accepted in 1998. No one actually uses a proposed standard, but when it is a draft, people start implementing it and giving feedback over issues until it is fully ratified, is my understanding (correct me if that's wrong please).

https://www.ietf.org/process/rfcs/
> Proposed Standard (PS). The first official stage. Many standards never progress beyond this level.
> Draft Standard. An intermediate stage that is no longer used for new standards.
> Internet Standard. The final stage, when the standard is shown to be interoperable and widely deployed.
by notepad0x90
4/20/2026 at 11:54:17 AM
> my claim that it was first accepted in 1998

That was not your claim. Feel free to read back what you wrote; I quoted it in my first reply.
by tremon
4/20/2026 at 12:04:27 PM
I just clarified that it was.
by notepad0x90
4/19/2026 at 9:25:07 PM
> There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing

I don't know much about MPLS and only know IP routing, but that quote above sounds very hand-wavy. How do you route "identity-based addressing"?
by unethical_ban
4/19/2026 at 11:52:10 PM
It wouldn't be a good idea to spell out an entire protocol in a comment section, but the key part is that it would cost a lot.

It is far from hand-waving. Right now we have numeric addressing, where routers look at bits and perform ASIC-friendly bitwise (and other) operations on that number to forward a lot of traffic really fast for cheap.
Identity and trust establishment won't be part of the regular data flow, but at network connection time, each end-device will discover the network authority it has connected to, and build trust that allows it to validate identities in that network, including address assignments, neighbor discovery, name resolution and verification, authorized traffic forwarders (routers) and more.
After the connection is established and the network is trusted, as part of the connection establishment, the network authority designates how addressing should be done. If Alice's iPhone wants to connect to Bob's server, it will encrypt the data and, as part of a very slim header, designate Bob's server's cryptographic identifier, the destination service identifier, and its own cryptographic identifier for the first packet. To reduce overhead, subsequent traffic can use a simple hash of the connection identifiers mentioned earlier.
When devices come online in the network, their cryptographic identifiers will become known to the entire network, including intermediate routers. Routing protocols work with the identity authority of the network to build forwarding tables based on cryptographic identifiers, and for established sessions, session IDs.
"Cryptographic identifier" is also not a hand-wavy term. What it means must be dynamic, so as to avoid protocol updates like v4 and v6 over addressing. V6 presumed just having lots of bits is enough. An ideal protocol will allow the network itself to communicate the identifier type and byte size. For example, an FQDN or even an IPv4-like address could be used directly, or a public key hash using a hash algorithm of the network's choice. So long as the devices in the network support it, and the end device supports it, it should work fine.
Internet addressing can use a scheme like this, but it doesn't need to. IPv6 took the wrong approach with NAT: it got rid of it instead of formalizing it. We'll always need to translate addresses. But the internet is actually well-positioned for this, due to the prevalence of certificate authorities, though it will require rethinking foundational protocols like DNS, BGP, and PKI infrastructure.
But my original point wasn't this, it was that tech has come far, our requirements today are different than 30 years ago. Even the OSI layered model is outdated, among other things.
This is just my proposal that I thought of as I was typing; smarter people who can sit down and think through the problem can come up with better protocols. I only proposed it to demonstrate that the concept isn't hand-wavy or ridiculous.

IPv6 was relatively rushed, to meet the address shortage of IPv4 while at the same time solving lots of other problems. The next network layer protocol (and we do need one) should have the goal of making networking as a whole adaptable to new and unforeseen requirements (that's why I suggested the network authority be the one to dictate the addressing scheme, and with it, be responsible for translating it if needed). We're being held back, not just in tech but as a species, because of this short-sighted protocol design! Exaggerated as that statement might sound, it is true.
I'll reserve further discussion on the topic for when it is required, but I hope this prevents more dismissive responses.
by notepad0x90
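The "simple hash of the connection identifiers" step in this proposal could look something like the sketch below. Every detail — the use of SHA-256, the 8-byte truncation, the placeholder identifier strings — is invented here to illustrate the commenter's idea, not part of any real protocol:

```python
import hashlib

def session_id(src_ident: bytes, dst_ident: bytes, service: bytes) -> bytes:
    """Derive a short session ID from the full cryptographic
    identifiers carried in the first packet; subsequent packets
    would carry only this hash instead of the full identifiers."""
    digest = hashlib.sha256(src_ident + b"|" + dst_ident + b"|" + service)
    return digest.digest()[:8]   # truncation length is arbitrary here

sid = session_id(b"alice-pubkey-hash", b"bob-pubkey-hash", b"https")
```

Both endpoints (and any router holding forwarding state for the session) can recompute the same short ID, which is the property the proposal relies on.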
4/19/2026 at 9:51:41 PM
Not to mention authenticated identity-based routing would mean embedding trusted centralized authorities into even deeper network layers. That is such a mess for TLS: after CAs started going rogue, we've basically ended up with Google, a shitty ad company, deciding who should be trusted because they control Chrome.
by guelo
4/20/2026 at 12:09:08 AM
Not at all, it doesn't even need to be PKI. But if it were, your routers would be the CA. Or more practically, whatever device is responsible for addressing would also be responsible as the authority over those addresses. Your DHCP server would also be the CA for your LAN. Even simple ND/ARP would require a claim (something like a short byte value end-devices can look up/cache) that allows it to make that "the address x.x.x.x is at <mac>" statement. Smarter schemes might allow the network forwarder (router) to translate claims, to avoid end devices looking up and caching lots of claims locally (and it would need to be authorized to do so).

You wouldn't need TLS. This scheme I just thought up would actually decentralize/federate PKI a lot more. If you have a public address assigned, your ISP is the IP-CA. I don't want to get into the details of my DNS replacement idea, but similar to network operators being authorities over the addresses they're responsible for, whoever issued you a network name is also the identity authority over that name (so DNS registrars would be CAs). Ideally though, every device would be named, and the people that have logical control over the address will also be responsible for the name and all identity authentication and claims over those addresses and names. You won't have freaking google and browsers dictating which CA root to trust; it will instead be the network you're joining that does that (be it your DHCP server, or your ISP, is up for debate, but I prefer the former). Ideally, your public key hash is your address. How others reach you would be by resolving your public key from your identity; the traffic will be sent to your public key (or see my sibling comment for the concept of cryptographic identity). All names would of course be free, but what we call "DNS" today will survive as an alias to those proper names. So your device might be guelo.lan123.yourisp.country but a registrar might sell you a guelo.com alias that points to the former name.
The implications of this scheme are wild, think about it!
Rogue trust providers will be a problem, but only to their domain. right now random CA roots can issue domains for anything. with the scheme I proposed, your country can mess with its own traffic, as can your isp, as can you over your lan. You won't be able to spoof traffic for a different lan, or isp using their name.
Solve all the problems at their foundations!
by notepad0x90
4/19/2026 at 11:38:57 PM
Well -- it shouldn't be authenticated, then!

Which public key you want to route to is above the network layer.
by tekne
4/19/2026 at 10:05:16 PM
You're implying that they could not have done better.

I think they "shipped it" and washed their hands of it.
But I think there should have been more iterations, until we got a little more ipv4+ and less ipv6.
by m463
4/19/2026 at 11:53:01 PM
They shipped it because it was done.

Everything since has been round after round of RFCs trying to adapt IPv4 workarounds to the IPv6 world.
by tadfisher