This annoys me, especially the last “It takes at least 25 years” rhetoric.
It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP.
GPRS/HSDPA/3G/4G/5G
They all rolled out just fine and were pretty backwards and forwards compatible with each other.
The whole SLAAC/DHCPv6/RA thing is a total clusterfuck. I’m sure there are many reasons that’s the case, but my god. What does your ISP support? Good luck.
We need IPv6 we really do. But it seems to this day the designers of it took everything good/easy/simple and workable about v4 and threw it out. And then are wondering why v6 uptake is so slow.
If they’d designed something that was easy to understand, not too hard to implement quickly, and solved a tangible problem, it’d have taken off like a rocket ship. Instead they expected humans to parse hex, which no one does, and massive long numbers that aren’t easily memorable. Sure, they threw that one clever :: hack in there, but it hardly opened it up to easy accessibility.
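To be fair to the :: hack, it is at least fully mechanical, and stdlib tooling handles it for you. A quick illustration with Python's `ipaddress` module (which applies the RFC 5952 compression rules):

```python
import ipaddress

# The "::" shorthand collapses the longest run of zero hextets.
full = "2001:0db8:0000:0000:0000:0000:0000:0001"
addr = ipaddress.IPv6Address(full)

print(addr.compressed)  # 2001:db8::1  (shortest canonical form)
print(addr.exploded)    # all eight hextets written out again
```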
Of course hindsight is easy to moan but the “It’s great what’s the problem?” tone of this article annoys me.
I was at some of those IETF meetings in the mid-1990s and attended some early IPv6 working group sessions. We knew the conversion would take time, but I don’t think any of us thought it would be this slow.

I was involved with multiple L3 switches and routers from 1997 through 2010. The issue was always that IPv6 basically required lots of boxes in the middle to understand it in order to roll it out, so when would it be commercially necessary? Yes, you can do tunneling and NAT at various points, but it always requires more than just the endpoints. It shows up in DNS and socket APIs. There’s no easy way to determine if a path supports it, and the path can change in an instant due to a route change.

All that is very different from SSL or QUIC, where only the endpoints have to be involved. That’s why QUIC uses UDP, for instance, so old intermediate devices just see it as a protocol they already know. SSL just assigned port 443 and the “https” scheme in the web URL. If a web client contacts a server on port 443 that doesn’t use SSL, it just fails. To put it another way, the level of the stack that you’re changing matters. SSL and QUIC are really L5+. IPv6 is squarely L3. There is no protocol negotiation mechanism available at L3.

So, from a business standpoint, when do you take the hit and integrate it all into the processing pipeline? How do you do that in a way that doesn’t impact your IPv4 forwarding performance, because that’s what the near-term market will judge you on? How do you afford the development and test cost associated with a whole other development (almost double)? If you’re doing software forwarding, the answers are a lot easier. As soon as you’re designing silicon, it’s a lot harder. When you’re under a lot of commercial pressure, it’s difficult to be the one who goes first. And remember that this hardware evolves on roughly 10-year cycles (2 years for design, 3-5 years of market sales, 3-5 years of depreciation at the customer before they buy new ones).

Oh, and customer rollout of IPv6 is a major project with lots of program management and testing, not just buying a box or two. So, yeah, hindsight is easy. Eventually you get there, but it’s a long road.
> It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP.
All that's required to implement each of those is two computers: 1 client and 1 server. Whereas supporting IPv6 requires every router between the two computers to also support IPv6. Similarly, if your current software doesn't support SSL/SSH/Gzip/etc., it's pretty easy to switch to different software, whereas it's hard or impossible for most people to switch ISPs.
> GPRS/HSDPA/3G/4G/5G
Radio spectrum costs providers millions of dollars, and each new cellular protocol increased spectrum efficiency, so upgrading means that providers can support more users with less spectrum. The problem is that most of the "Western" countries still have lots of IPv4 addresses, so there isn't much cost benefit to switching to IPv6. However, China and India both have lots of users and fewer IPv4 addresses, so there is a cost benefit to switching to IPv6 there, and unsurprisingly both of these countries have really high IPv6 adoption rates.
I damn near have a stroke every time I try to reason about IPv4 addresses as an integer. But hey, I guess four bytes is four bytes no matter how you read them.
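For anyone else who has to reason about dotted quads numerically: an IPv4 address is just a big-endian 32-bit integer, one byte per label, and Python's stdlib will do the bookkeeping. A small sketch:

```python
import ipaddress

# A dotted quad is a 32-bit integer, one byte per label.
n = int(ipaddress.IPv4Address("1.2.3.4"))
print(n)  # 1*2**24 + 2*2**16 + 3*2**8 + 4 = 16909060

# And back again.
print(ipaddress.IPv4Address(16909060))  # 1.2.3.4
```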
I think it’s one of many things that indicate the underlying issues for its adoption. It’s a ’90s technology; not as much thought was given to how it would be used.
> The whole SLAAC/DHCPv6/RA thing is a total clusterfuck.
SLAAC is easily the thing I love most about IPv6. It just works. Routers publish advertisements, clients configure themselves. No DHCP server, no address collisions, no worry. What's bugging you about it?
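For concreteness, here's a rough sketch of the classic (pre-privacy-extensions) EUI-64 scheme SLAAC used to derive the 64-bit interface identifier from the MAC: flip the universal/local bit and splice `ff:fe` into the middle. Modern stacks usually prefer RFC 7217 stable-privacy or temporary addresses instead:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier used by classic SLAAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02              # flip the universal/local bit
    b[3:3] = b"\xff\xfe"      # splice ff:fe into the middle of the MAC
    return ":".join(f"{b[i] << 8 | b[i + 1]:x}" for i in range(0, 8, 2))

# Combined with the router-advertised /64 prefix, this yields the address.
print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```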
What problem is this actually solving? I've deployed DHCP countless times in all sorts of environments and its "statefulness" was never an issue. Heck, even with SLAAC there's now DAD making it mildly stateful.
Don't get me wrong, SLAAC also works fine, but is it solving anything important enough to justify sacrificing 64 entire address bits for?
"I wish to participate in a global telecommunications network and I wish to connect immediately to all my friends and be available to them 24/7 and I wish to play games with strangers across the country and I wish to receive all my email within 300ms with no spam and I wish to watch the latest news from Iran in 4K streaming Dolby"... but priiiiivacy!
SEND secures NDP by putting a hash of a public key into those 64 bits (cryptographically generated addresses), and big, sparse networks also render network scanning rather useless at finding vulnerable hosts, so there are reasons to make subnets /64 other than SLAAC.
Also we can always reduce the standard subnet size in 4000::/3 if we ever somehow run out of space in 2000::/3 (and if we don't then we didn't sacrifice anything to use /64s).
DHCP requires explicit configuration; it needs a range that hopefully doesn't conflict with any VPN you use; it needs changes if your range ever gets too small; and it's just another moving part really.
With SLAAC, it's just another implementation detail of the protocol that you usually don't have to even think about, because it just works. That is a clear benefit to me.
What I want is to be able to ping a device by hostname on the local network and have it work (where ping can be any command or browser). That's easy with DHCP+DNS, and either impossible or amazingly ugly with SLAAC.
That's an extra service or two running on every device with extra configuration, and... Maybe it's more reliable now? I vaguely recall having a bad time.
What does the router do out of the box, or at all, for mdns? Isn't it a p2p service?
It wasn't even on the map until 1994. Prior to that it was an ad-hoc mess of "encryption" standards. It wasn't even important enough to become ubiquitous until Firesheep existed.
Even then, SSL just incorporated a bunch of things that already existed into an extensible agreement protocol, which, in the long run, due to middleboxes, became inextensible and somewhat inelegant for its task. 30 years later it's due for a replacement, but we're stuck with it. Perhaps slow adoption isn't a metric that portends doom.
I think most of the web wasn't encrypted by default until Let's Encrypt came on the scene just over a decade ago. (I remember a few "free cert" offerings that were entirely manual, and that cost you $200 if you wanted to revoke a cert.)
It's firmly the default now, and very odd if a site doesn't default to https.
Is it possible that you own your own router and have at some point configured it to turn IPv6 off? I know it is turned off on my router because I had some issues with Verizon IPv6 and TP-Link in the past.
FWIW, I'm also on Spectrum (by virtue of the Time Warner acquisition back in the day) and I get 10/10 on that page. That is, after turning off Firefox "Enhanced Privacy Protection" which actually blocked the page from loading at all for some reason. Got 9/10 using Chrome. Both on Linux.
> It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP. GPRS/HSDPA/3G/4G/5G They all rolled out just fine and were pretty backwards and forwards compatible with each other.
You're comparing incremental rollout with migratory rollout for most of these (not the mobile phone standards). That's apples and oranges.
You can argue for other proposals. But at the end of the day the best you could've done is steal bits from TCP and UDP port numbers, which is... NAT. Other than that if you want to make a serious claim you need to do the work (or find and understand other people's work. It's not that people haven't tried before. They just failed.)
And, ultimately, this is quite close to typical political problems. Unpopular choices have to be made, for the benefit of all, but people don't like them especially in the short term so they don't get voted for.
> If they’d designed something that was easy to understand, […]
I can't argue on this since it's been far too long since I had to begin understanding IPv4 or IPv6… bane of experience, I guess.
> […] not too hard to implement quickly and easily, […]
As someone actually writing code for routers, IPv6 is easier in quite a few regards, especially link-local addresses make life so much easier. (Yet they're also a frequent point of hate. I absolutely cannot agree with that based on personal experience, like, it's not even within my window of possible opinions.)
> […] expected humans to parse hex […]
You're assuming hex is worse than decimal with binary ranges. Why? Of course it's clear to you that the numbers go up to 255 because you're a tech person. But if you know that, you very likely also know hex. (And I'll claim the disjoint sets between these are of the same order of magnitude in size.)
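One concrete point in hex's favor: every hex digit is exactly four bits, so IPv6 prefix boundaries land on digit boundaries whenever the prefix length is a multiple of 4, while IPv4's decimal octets force you into binary math whenever a prefix isn't a multiple of 8. A small illustration with Python's stdlib:

```python
import ipaddress

# IPv6: a /52 boundary falls after exactly 13 hex digits (52 / 4), so the
# prefix can be read straight off the text form of the address.
# IPv4: a /20 boundary falls mid-octet, so the range has to be computed.
net = ipaddress.IPv4Network("172.16.32.0/20")
print(net.broadcast_address)  # 172.16.47.255 -- not obvious from decimal
print(net.num_addresses)      # 4096
```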
Anyway I think I've bulletpointed enough, there's arguments to be made, and they have been made 25 years ago, and 20 years ago, and 15 years ago, and 10 years ago and 5 years ago.
Please, just stop. The herd is moving. If anything had enough sway, it would've had enough sway 15 years ago. Learn some IPv6. There's cool things in there. For example, did you know you can "ping ff02::1%eth0"?
There are lots of legacy fields in TCP/IP headers. One of them can be used for the extra octet.
When legacy IPv4 traffic flies around, that octet will be null or 0. The entire internet could route just fine, especially if you put the extra octet at the end: 1.1.1.1 becomes 1.1.1.1.newoctet.
So every existing IP gets a bonus 255 new IPs, and for now, routing of those is hardlocked to that IP, and it works with all legacy gear.
In 30 years or something, we can care about the mobility of those new IPs.
You're at the very beginning, baby steps stage of inventing IPv6 there.
You aren't the first person to come up with the idea of adding extra bits to IP addresses to make them longer. The problem isn't finding somewhere to stash the extra bits in the packet format (which is trivial; you can simply set the next-protocol field to a special value and then put the bits at the start of the payload), it's getting all software to use those extra bits -- and getting that to work requires doing all of the new AF family, new sockaddr struct, new DNS records, dual stack/translation/tunnels etc etc that v6 does.
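The "trivial" packet-format half really is trivial. A toy sketch (the shim layout and all names here are made up for illustration; the hard parts listed above are exactly what this doesn't solve):

```python
import struct

# Toy illustration only: stash extra address bytes at the start of the
# payload, flagged by the next-protocol field. 253 is one of the IANA
# protocol numbers reserved for experimentation (RFC 3692).
EXTENDED_PROTO = 253

def pack_extended(src_extra: bytes, dst_extra: bytes, inner_proto: int,
                  payload: bytes) -> bytes:
    # 4 extra source bytes, 4 extra destination bytes, the real
    # next-protocol value, then the original payload.
    return struct.pack("!4s4sB", src_extra, dst_extra, inner_proto) + payload

shim = pack_extended(b"\x00\x00\x00\x01", b"\x00\x00\x00\x02", 6, b"hello")
print(len(shim))  # 9-byte shim header + 5-byte payload = 14
```

Finding room for the bits was never the problem; getting every stack, API, and middlebox to honor them is.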
Please consider that maybe the people working on v6 weren't actually complete imbeciles and did in fact think things through.
> Please consider that maybe the people working on v6 weren't actually complete imbeciles and did in fact think things through.
It is possible for the world to change, and for designs and plans and viewpoints 30+ years ago to be less correct today.
This world is not that world. That world had massive concerns about the processing cost of NAT; that was one reason for IPv6. It also had different ideas about where the net would go. We now know that the "internet of things" and "having your fridge online", as well as "5G in everything so people can't firewall it off", are just insane and malign.
We also know that tying an IP address to a person (compared to an ISP using NAT) reduces privacy. That devious and devilish actors abound.
Even though they thought these things might be neat, many of them aren't.
None of that has anything to do with what you said in the post I replied to. "Add an extra octet to v4 addresses" has hard technical barriers to deal with if you want it to work, regardless of what the world looks like or what you're designing for.
> We now know that the "internet of things" and "having your fridge online", as well as "5G in everything so people can't firewall it off" is just insane and malign
None of this is really relevant either. IP's job is to handle the addressing used when sending data over the Internet, and it should do this job well regardless of what people end up doing with it.
> We also know that tying an IP address to a person (compared to an ISP using NAT) reduces privacy
We don't tie IP addresses to people. PI allocations might sort of count, but regular users don't get those.
> None of that has anything to do with what you said in the post I replied to.
Of course not, why would it? I quoted what I was replying to, and all of my comments made perfect sense in that context. In that context, I was discussing the original design considerations of IPv6 (the winning proposal), and yes, "IPs for everything" was one of them, hence me talking about it.
> That world had massive concerns about the processing cost of NAT
The processing cost of NAT is still a problem. There's that classic post by a Native American tribal ISP for whom it was cheaper to replace their clients' IPv4-only Roku devices with IPv6-capable Apple TVs than to upgrade their CGNAT appliance to handle the video traffic.
The concerns about the "processing cost of NAT" were edge concerns: companies, homes, edge devices with 100 or 1000 RFC 1918-addressed devices behind them. When IPv6 was created, NAT wasn't a thing, as the processing power just wasn't there.
And it was thought the processing power would never be there.
Yet now everyone has NAT in little devices at home. So the need to route 100 IPs into every person's home isn't a thing. Which is in line with my comment about how the world looked different 30 years ago, and how the concept of "IPs for everything" is the reverse of what people even want now.
We have that variant of IPv8, it's what CGNAT gives you, especially if you run MAP-E or MAP-T (which are technically not quite NAT, but kinda are, it's… complicated). You take some bits from the port number and essentially repurpose them into part of the address.
It's a nice band-aid technology, no less and no more.
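The port-borrowing arithmetic described above is simple to sketch: a k-bit PSID splits the 16-bit port space into 2^k slices, one per subscriber sharing the IPv4 address. A simplified illustration (real MAP interleaves the PSID bits and excludes the well-known ports):

```python
def ports_for_psid(psid: int, psid_len: int) -> range:
    """Contiguous port slice for one subscriber sharing an IPv4 address.

    Simplified: real MAP-E/MAP-T uses an offset so ports below 1024 are
    excluded, but the sharing arithmetic is the same idea.
    """
    slice_size = 2 ** (16 - psid_len)
    return range(psid * slice_size, (psid + 1) * slice_size)

# 4 PSID bits -> 16 subscribers per address, 4096 ports each.
r = ports_for_psid(psid=3, psid_len=4)
print(r.start, r.stop - 1, len(r))  # 12288 16383 4096
```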
Have that be the invisible bottom layer. Come up with a list of 256 common words, one per byte, and have that be the human-visible IP address. Mentally reading a string of words, however nonsensical, is way easier than a soup of undifferentiated hex digits.
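That's essentially what the PGP word list did for key fingerprints. A sketch with a placeholder word list (any real deployment would need a carefully chosen 256-word vocabulary; the `w000`-style words here are stand-ins):

```python
# One word per byte; WORDS is a placeholder for a real 256-word list.
WORDS = [f"w{i:03d}" for i in range(256)]
INDEX = {w: i for i, w in enumerate(WORDS)}

def encode(ip: str) -> str:
    """Dotted quad -> word string, one word per octet."""
    return "-".join(WORDS[int(octet)] for octet in ip.split("."))

def decode(name: str) -> str:
    """Word string -> dotted quad."""
    return ".".join(str(INDEX[w]) for w in name.split("-"))

name = encode("192.0.2.1")
print(name)          # w192-w000-w002-w001
print(decode(name))  # 192.0.2.1
```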
That would cause worse confusion when working with teams from different localisations. Not to mention the complexity of now adding localisations to the address parser.
Yeah the at least 25 years thing is a cop out. The IPng committee specifically chose the protocol that didn't have a transition plan, and today still doesn't have a transition plan.
I expect we're going to plateau with adoption for a long while now. 50% adoption is meaningless if it doesn't tangibly make a dent in the IPv4 exhaustion problem.
Stomping your foot angrily at ISPs and Internet-facing entities to adopt a protocol no one cares about, and/or getting governments to intervene because you've exhausted all your options and progress is stagnant, is not a transition plan; that's a hail mary.
If you can't enforce a flag day then that's all you're left with, isn't it? Other than maybe hacking into people's networks, upgrading them and then somehow preventing them from undoing your work.
Indeed, it was clear from the beginning, "AI" companies want to become infrastructure and a critical dependency for businesses, so they can capture the market and charge whatever they want. They will have all the capital and data needed to eventually swallow those businesses too, or more likely sell it to anyone who wants the competitive advantage.
I doubt that you would install this application without even reading the README, so I don't understand how citing literally the second paragraph of the README helps.
> it's critical to know whether it's vibe coded
Strictly speaking, the only way to be sure that something is not vibe-coded is to either have proof that the code was published before vibe-coding tools were available or to hand-code it yourself.
Also, if you think that knowing if something is vibe-coded is so important, it is unwise to attack people who honestly tell you that something is vibe-coded.
While tone often translates poorly over text, I think this is an example where the sarcasm is very overt. I don’t think anyone would take the comment seriously.
Same, I remember googling Deno and going "Oh this new thing looks neat" - and then I haven't heard/seen/read a thing about it until this post. But I keep hearing about Bun and of course nodejs.
Feel bad for them; they obviously just didn't capture a real userbase. I expect that if yt-dlp hadn't started to require it, they'd have just silently flamed out.
Probably because that’s not most people’s experience. Google is just pages of the same AI-generated content. DDG, once good, seems even worse. Searx and the like are slow, and results are mixed. Kagi, for me at least, seems to find the actual gems you’re looking for. It feels like Google used to feel back in ’05.
I wouldn't go so far as to say that it's as good as peak google (agree on ~ when that was), which felt like magic. But it is, in my experience, noticeably better than current google.