If you look at, say, 3G -> 4G -> 5G or Wi-Fi, you see industry bodies of manufacturers, network providers, and intermediary vendors that both standardize and coordinate deployment schedules, at least at the high level of multi-year timelines. This is also backed by national and international RF spectrum regulators, who want to ensure the most efficient use of their scarce airwaves. Industry players who lag too much tend to lose business quite quickly.
Then if you look at the internet, there is a very uncoordinated collection of manufacturers and network providers, and standardization is driven in a more open manner that is good for transparency but is also prone to complexity-inducing logjams and heckler's vetoes. Where we see success, like the promotion of TLS improvements, it's largely because a small number of knowledgeable players - browsers, in the case of TLS - agree to enforce improvements on the entire ecosystem. That in turn is driven by simple self-interest: Google, Apple, and Microsoft all have strong incentives to ensure that TLS remains secure; their ads and services revenue depends on it.
But technologies like DNSSEC, IPv6, and QUIC all face a much harder road. To be effective they need a long chain of players to support the feature, and many of those players have active disincentives. If a home user's internet seems to work just fine, why be the manufacturer that is first to support, say, DNSSEC validation and deal with all of the increased support cases when it breaks, or device returns when consumers perceive that it broke something? (And it will.)
IPv6 deployment is extra hard because we need almost every network in the world to get on board.
DNSSEC shouldn't be as bad; it mainly needs buy-in from DNS resolvers and the software that builds them in. I think it's a bit worse than TLS adoption, partly because DNS allows recursive resolution and partly because DNS is applicable to a bit more than TLS was. But the big thing seems to be that there isn't a central authority like the web browsers who can entirely force the issue. ... Maybe OS vendors could do it?
QUIC is an end-to-end protocol, so it should be deployable without every network operator buying in. That said, we probably do need a reduction in UDP blocking in some places. But otherwise, how can QUIC deployment be harder than TLS deployment? I think there just hasn't been an incentive to force it everywhere.
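To sketch why blocked UDP degrades things rather than breaking them: clients race QUIC against TCP, and when UDP/443 is dropped they just land on TCP + TLS. A toy Python sketch of that fallback structure (try_quic is a hypothetical stand-in; a real client would drive the handshake with a QUIC library such as aioquic):

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def try_tcp(host: str) -> str:
        # TCP/443 is almost never blocked; this is the guaranteed path.
        with socket.create_connection((host, 443), timeout=3):
            return "tcp+tls"

    def try_quic(host: str) -> str:
        # Hypothetical stand-in: a real client would send a QUIC Initial
        # over UDP/443 and complete the handshake (e.g. with aioquic).
        raise NotImplementedError("UDP blocked or no QUIC stack")

    def connect(host: str) -> str:
        with ThreadPoolExecutor(max_workers=2) as pool:
            quic = pool.submit(try_quic, host)
            tcp = pool.submit(try_tcp, host)
            try:
                return quic.result(timeout=0.25)  # small head start for QUIC
            except Exception:
                return tcp.result()  # fallback is invisible to the user

    print(connect("example.com"))

The point being: a middlebox that eats UDP costs you performance, not connectivity, which is exactly why nobody feels much pressure to fix it.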
No. IPv6 deployment is tricky (though accelerating), but not all that scary, because it's easy to run IPv4 and IPv6 alongside each other; virtually everybody running IPv6 does that.
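For the curious, dual stack can be as small as one socket option. A minimal Python sketch (assumes an OS that permits dual-stack sockets, which Linux, macOS, and Windows all do):

    import socket

    # One listener for both address families: with IPV6_V6ONLY disabled,
    # the socket accepts native IPv6 connections and also IPv4 clients,
    # which appear as IPv4-mapped addresses like ::ffff:203.0.113.7.
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    srv.bind(("::", 8080))  # "::" is the IPv6 wildcard address
    srv.listen()

    conn, addr = srv.accept()
    print("client:", addr)  # IPv4 clients show up with a mapped address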
The problem with DNSSEC is that deploying it breaks DNS. Anything that goes wrong with your DNSSEC configuration is going to knock your whole site off the Internet for a large fraction of Internet users.
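You can watch that failure mode happen (a sketch using dnspython; dnssec-failed.org is a deliberately mis-signed test zone, and 8.8.8.8 is assumed here as a validating resolver):

    import dns.flags
    import dns.resolver

    res = dns.resolver.Resolver()
    res.nameservers = ["8.8.8.8"]        # a validating resolver
    res.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC records

    try:
        res.resolve("dnssec-failed.org", "A")
    except dns.resolver.NoNameservers:
        # The zone exists and has A records, but its signatures fail to
        # validate, so the resolver returns SERVFAIL: for every client
        # behind a validating resolver, the whole site is simply gone.
        print("SERVFAIL: dark for all validating clients")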
Very aware that dual-stack deployment is a thing. It's really the only sane way to do the migration for any sizable network, but it obviously increases complexity versus a hoped-for IPv6-only future.
Good point about DNSSEC, but this is par for the course with good security technologies - "it could break things" used to be an excuse for supporting plaintext HTTP as a fallback from HTTPS/TLS. Of course, having an insecure fallback means downgrade attacks are possible and often easy, which defeats a lot of the purpose of the newer protocols.
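Concretely, fallback logic like the following sketch (hypothetical, using Python's requests library) is the whole problem: an on-path attacker only has to break the TLS handshake to push the client onto plaintext they control.

    import requests

    def fetch(host: str) -> requests.Response:
        try:
            return requests.get(f"https://{host}/", timeout=5)
        except requests.exceptions.SSLError:
            # The "helpful" fallback is the vulnerability: an attacker
            # who tampers with the handshake lands us here, then reads
            # or rewrites everything that follows in the clear.
            return requests.get(f"http://{host}/", timeout=5)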
I don't think the failure modes for DNSSEC really are par for the course for security technologies, just for what it's worth; I think DNSSEC's are distinctively awful. HPKP had similar problems, and they killed HPKP.
Plus IPv6 has significant downsides (more complex, harder to understand, more obscure failure modes, etc…), so the actual cost of moving is the transition cost + total downside costs + extra fears of unknown unknowns biting you in the future.
Definitely there are fears of the unknown to deal with. And generally some businesses won't want to pay the switching costs for something perceived to already be working.
IPv6 is simpler than IPv4 in a lot of ways - a fixed-size base header, options replaced by extension headers, no in-network fragmentation (only the sender may fragment). What makes it more complicated? What makes the failure modes more obscure? Is it just that dual stack is more complex to operate?
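To put numbers on "simpler": the IPv6 base header is a fixed 40 bytes with eight fields, no header checksum, and no fragmentation fields, versus IPv4's variable 20-60 bytes. A sketch that parses the whole thing:

    import struct

    def parse_ipv6_header(pkt: bytes) -> dict:
        # The entire fixed header: 8 fields in 40 bytes, nothing optional.
        ver_tc_flow, payload_len, next_hdr, hop_limit = struct.unpack(
            "!IHBB", pkt[:8])
        return {
            "version":        ver_tc_flow >> 28,          # always 6
            "traffic_class":  (ver_tc_flow >> 20) & 0xFF,
            "flow_label":     ver_tc_flow & 0xFFFFF,
            "payload_length": payload_len,
            "next_header":    next_hdr,   # e.g. 6 = TCP, 17 = UDP
            "hop_limit":      hop_limit,
            "src":            pkt[8:24],  # 128-bit addresses
            "dst":            pkt[24:40],
        }

No IHL, no options, no checksum to recompute at every hop - IPv4 has all three.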