I was at some of those IETF meetings in the mid-1990s and attended some early IPv6 working group sessions. We knew the conversion would take time, but I don’t think any of us thought it would be this slow. I was involved with multiple L3 switches and routers from 1997 through 2010. The issue was always that IPv6 basically required lots of boxes in the middle to understand it in order to roll it out, so when would it be commercially necessary?

Yes, you can do tunneling and NAT at various points, but it always requires more than just the endpoints. It shows up in DNS and socket APIs. There’s no easy way to determine if a path supports it, and the path can change in an instant due to a route change. All that is very different from SSL or QUIC, where only the endpoints have to be involved. That’s why QUIC uses UDP, for instance, so old intermediate devices just see it as a protocol they already know. SSL just assigned port 443 and the “https” scheme in the web URL. If a web client contacts a server on port 443 that doesn’t use SSL, it just fails.

To put it another way, the level of the stack that you’re changing matters. SSL and QUIC are really L5+. IPv6 is squarely L3. There is no protocol negotiation mechanism available at L3.

So, from a business standpoint, when do you take the hit and integrate it all into the processing pipeline? How do you do that in a way that doesn’t impact your IPv4 forwarding performance, because that’s what the near-term market will judge you on? How do you afford the development and test cost of what is almost a whole second development effort? If you’re doing software forwarding, the answers are a lot easier. As soon as you’re designing silicon, it’s a lot harder. When you’re under a lot of commercial pressure, it’s difficult to be the one who goes first. And remember that this hardware evolves on roughly 10-year cycles (2 years for design, 3-5 years of market sales, 3-5 years of depreciation at the customer before they buy new ones).
Oh, and customer rollout of IPv6 is a major project with lots of program management and testing, not just buying a box or two. So, yeah, hindsight is easy. Eventually you get there, but it’s a long road.
reply
> It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP.

All that's required to implement each of those is two computers: 1 client and 1 server. Whereas supporting IPv6 requires every router between the two computers to also support IPv6. Similarly, if your current software doesn't support SSL/SSH/Gzip/etc., it's pretty easy to switch to different software, whereas it's hard or impossible for most people to switch ISPs.

> GPRS/HSDPA/3G/4G/5G

Radio spectrum costs providers millions of dollars, and each new cellular protocol increased spectrum efficiency, so upgrading means that providers can support more users with less spectrum. The problem is that most of the "Western" countries still have lots of IPv4 addresses, so there isn't much cost benefit to switching to IPv6. However, China and India both have lots of users and fewer IPv4 addresses, so there is a cost benefit to switching to IPv6 there, and unsurprisingly both of these countries have really high IPv6 adoption rates.

reply
> Instead they expected humans to parse hex, which no one does

Of all the aspects of IPv6, you picked the only one that doesn't complicate implementations and could easily be swapped out if anyone wanted to.

reply
Wait till you’ve got to copy & paste ’em, or see ’em commingled with hardware addresses
reply
I'm not disagreeing that it's a bad aspect of IPv6; I'm just saying it's not that big of an issue for adoption.
reply
Wait till you find an application that accepts 1.65793 as an IPv4 address. Or 134744072.

  $ ping -c 1   1.65793
  PING 1.65793 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=54 time=1.56 ms
  
  --- 1.65793 ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 1.560/1.560/1.560/0.000 ms
(by the way, this was way less of a dumb peculiarity back when IPv6 was designed)
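For the curious, this comes from the old BSD `inet_aton()` address forms, which many parsers still accept: one-, two-, and three-part numbers where the last part fills all the remaining bytes. A quick sketch using Python's `socket.inet_aton`, which wraps the libc routine on a typical glibc system (behavior can vary by platform):

```python
# The legacy inet_aton() forms: "a.b" treats b as a 24-bit value,
# and a bare number is the whole 32-bit address. So "1.65793" is
# 1 followed by 0x010101, i.e. 1.1.1.1, and 134744072 is 0x08080808.
import socket

print(socket.inet_ntoa(socket.inet_aton("1.65793")))    # 1.1.1.1
print(socket.inet_ntoa(socket.inet_aton("134744072")))  # 8.8.8.8
```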
reply
> The whole SLAAC/DHCPv6/RA thing is a total clusterfuck.

SLAAC is easily the thing I love most about IPv6. It just works. Routers publish advertisements, clients configure themselves. No DHCP server, no address collisions, no worry. What's bugging you about it?
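For reference, the classic SLAAC interface identifier is the modified EUI-64 from RFC 4291: flip the universal/local bit of the MAC's first octet and insert ff:fe between the two halves. A small sketch (the MAC below is made up):

```python
# Derive a modified EUI-64 interface ID (RFC 4291, Appendix A)
# from a 48-bit MAC address. Example MAC is purely illustrative.
def eui64_interface_id(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```

(Modern stacks often default to randomized or stable-privacy identifiers instead of raw EUI-64, but the mechanism above is the original "it just works" derivation.)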

reply
What problem is this actually solving? I've deployed DHCP countless times in all sorts of environments and its "statefulness" was never an issue. Heck, even with SLAAC there's now DAD making it mildly stateful.

Don't get me wrong, SLAAC also works fine, but is it solving anything important enough to justify sacrificing 64 entire address bits for?

reply
* privacy addresses are great

* deriving additional addresses for specific functions is great (e.g. 464XLAT/CLAT)

* you don't get collisions when you lose your DHCP lease database

* as Brian says, DHCP wasn't quite there yet when IPv6 was designed

* ability to proactively change things by sending different RAs (e.g. router or prefix failover, though these don't work as well as one would hope)

* ability to encode mnemonic information into those 64 bits (when configuring addresses statically)

* the routing layers can be optimized by assuming prefixes mostly won't be longer than /64

… and probably 20 others that don't come to mind immediately. I didn't even spend seconds thinking about the ones I listed here.
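The privacy-address point, for instance, is easy to illustrate: a temporary address is just the advertised /64 plus a random 64-bit interface identifier (RFC 4941 adds lifetimes and periodic regeneration on top; the prefix below is the documentation prefix, purely as an example):

```python
# Minimal sketch of an RFC 4941-style temporary ("privacy") address:
# pair a random 64-bit interface identifier with the advertised /64.
# Real implementations also track lifetimes and rotate these over time.
import ipaddress
import secrets

prefix = ipaddress.IPv6Network("2001:db8:abcd:12::/64")  # example prefix
iid = secrets.randbits(64)                               # random interface ID
addr = ipaddress.IPv6Address(int(prefix.network_address) | iid)
print(addr)  # a fresh address inside the /64, no DHCP server involved
```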

reply
DHCP requires explicit configuration; it needs a range that hopefully doesn't conflict with any VPN you use; it needs changes if your range ever gets too small; and it's just another moving part really.

With SLAAC, it's just another implementation detail of the protocol that you usually don't have to even think about, because it just works. That is a clear benefit to me.

reply
When it fails, you find there is no option to tune its behaviour.

Plug in a rogue router and see how quickly you can find it.

reply
What kind of failure are you referring to? What would you want to tune? You can still easily locate all devices on your network.
reply
> What does your ISP support?

My ISP is Spectrum. They get a 0/10 on IPv6 support on this test page [1].

[1] https://test-ipv6.com

reply
FWIW, I'm also on Spectrum (by virtue of the Time Warner acquisition back in the day) and I get 10/10 on that page. That is, after turning off Firefox's "Enhanced Tracking Protection", which actually blocked the page from loading at all for some reason. Got 9/10 using Chrome. Both on Linux.
reply
Is it possible that you own your own router and have at some point configured it to turn IPv6 off? I know it's turned off on my router because I had some issues with Verizon IPv6 and TP-Link in the past.
reply
Good idea, it's on my list of things to check.
reply
how do you encode 128 bits without making a long number? and not using hex?
reply
have that be the invisible bottom layer. come up with a list of 256 common words, one per byte, and have that be the human visible IP address. mentally reading a string of words, however nonsensical, is way easier than a soup of undifferentiated hex digits.
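A minimal sketch of that idea, with a placeholder wordlist (a real scheme might borrow something like the 256-entry PGP even-word list):

```python
# Render a 128-bit IPv6 address as 16 words, one per byte, from a
# 256-word list. The wordlist here is a placeholder ("w00".."wff");
# any fixed list of 256 distinct words would do.
import ipaddress

WORDS = [f"w{i:02x}" for i in range(256)]  # placeholder wordlist

def words_for(addr: str) -> str:
    packed = ipaddress.IPv6Address(addr).packed  # 16 raw bytes
    return "-".join(WORDS[b] for b in packed)

print(words_for("2001:db8::1"))  # 16 words, one per byte
```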
reply
Easier if you’re a native English speaker. Harder if you’re not.

My only gripe with IPv6 addresses is they look too similar to MAC addresses. But as a representation, I think they’re absolutely fine.

reply
Far easier to use IPv8, which just has 5 octets instead of 4.
reply
We have that variant of IPv8, it's what CGNAT gives you, especially if you run MAP-E or MAP-T (which are technically not quite NAT, but kinda are, it's… complicated). You take some bits from the port number and essentially repurpose them into part of the address.

It's a nice band-aid technology, no less and no more.
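The bit-stealing arithmetic is easy to sketch (numbers purely illustrative, not taken from any particular MAP RFC; real deployments also exclude well-known ports and often interleave the PSID bits):

```python
# A+P / MAP-style address sharing: repurpose the top k bits of the
# 16-bit port number as extra "address" bits, so one IPv4 address is
# shared by 2**k subscribers, each with a contiguous port slice.
PSID_BITS = 4  # illustrative: 16 subscribers per shared address

def port_range(psid: int) -> tuple[int, int]:
    """Port slice assigned to subscriber `psid` on the shared address."""
    span = 1 << (16 - PSID_BITS)  # 4096 ports each with PSID_BITS = 4
    return psid * span, (psid + 1) * span - 1

print(port_range(5))  # (20480, 24575)
```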

reply
That still means replacing every part of the chain.
reply
> It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP. GPRS/HSDPA/3G/4G/5G They all rolled out just fine and were pretty backwards and forwards compatible with each other.

You're comparing incremental rollout with migratory rollout for most of these (the mobile phone standards excepted). That's apples and oranges.

You can argue for other proposals. But at the end of the day the best you could've done is steal bits from TCP and UDP port numbers, which is... NAT. Other than that if you want to make a serious claim you need to do the work (or find and understand other people's work. It's not that people haven't tried before. They just failed.)

And, ultimately, this is quite close to typical political problems. Unpopular choices have to be made, for the benefit of all, but people don't like them especially in the short term so they don't get voted for.

> If they’d designed something that was easy to understand, […]

I can't argue on this since it's been far too long since I had to begin understanding IPv4 or IPv6… bane of experience, I guess.

> […] not too hard to implement quickly and easily, […]

As someone actually writing code for routers, IPv6 is easier in quite a few regards, especially link-local addresses make life so much easier. (Yet they're also a frequent point of hate. I absolutely cannot agree with that based on personal experience, like, it's not even within my window of possible opinions.)

> […] expected humans to parse hex […]

You're assuming hex is worse than decimal with binary ranges. Why? Of course it's clear to you that the numbers only go up to 255, because you're a tech person. But if you know that, you very likely also know hex. (And I'd claim the set of people who know one but not the other is about the same size in each direction.)
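One concrete reason hex arguably fits better: each hex digit is exactly four bits, so prefix boundaries on 4-bit multiples are visible by eye, while dotted decimal hides them. A quick sketch (addresses from the documentation prefixes, purely as examples):

```python
# In hex, membership in 2001:db8::/32 is readable straight off the
# first 8 hex digits. In dotted decimal, the /12 boundary of
# 172.16.0.0/12 requires binary arithmetic to see.
import ipaddress

net = ipaddress.ip_network("2001:db8::/32")
addr = ipaddress.ip_address("2001:db8:1234::1")
print(addr in net)  # True

print(ipaddress.ip_address("172.30.0.1")
      in ipaddress.ip_network("172.16.0.0/12"))  # True, but not obvious
```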

Anyway, I think I've bulletpointed enough. There are arguments to be made, and they were made 25 years ago, and 20 years ago, and 15 years ago, and 10 years ago, and 5 years ago.

Please, just stop. The herd is moving. If anything had enough sway, it would've had enough sway 15 years ago. Learn some IPv6. There's cool things in there. For example, did you know you can "ping ff02::1%eth0"?

reply
> It didn’t take 25 years for SSL.

It wasn't even on the map until 1994. Prior to that it was an ad-hoc mess of "encryption" standards. It wasn't even important enough to become ubiquitous until Firesheep existed.

Even then, SSL just incorporated a bunch of things that already existed into an extensible agreement protocol which, in the long run, thanks to middleboxes, became inextensible and somewhat inelegant for its task. 30 years later it's due for a replacement, but we're stuck with it. Perhaps slow adoption isn't a metric that portends doom.

reply