This isn't so much a scary story I'm telling as an empirically observable fact: it's happened many times, to very important domains, over the last several years.
In particular, the long TTLs of DNS records are themselves a historical artifact and should be phased out. There's absolutely no reason to keep them above ~15 minutes for leaf zones. The overhead of the extra DNS lookups is completely negligible.
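To make the "~15 minutes" point concrete, here's a toy sketch of capping TTLs in a leaf zone. The zone text, record layout, and `cap_ttls` helper are all hypothetical simplifications; a real deployment would use an actual zone parser (e.g. dnspython's `dns.zone`) rather than naive line splitting.

```python
# Toy sketch: cap every record's TTL in a (simplified) zone file at
# 15 minutes. Record format here is just "name TTL type data";
# real zone files are considerably richer than this.

MAX_TTL = 900  # 15 minutes, in seconds

def cap_ttls(zone_text: str, max_ttl: int = MAX_TTL) -> str:
    """Rewrite each record's TTL to min(ttl, max_ttl)."""
    out = []
    for line in zone_text.strip().splitlines():
        name, ttl, rtype, data = line.split(None, 3)
        out.append(f"{name} {min(int(ttl), max_ttl)} {rtype} {data}")
    return "\n".join(out)

zone = """\
www.example.com. 86400 A 192.0.2.10
api.example.com. 300 A 192.0.2.20"""

print(cap_ttls(zone))
# www.example.com. 900 A 192.0.2.10
# api.example.com. 300 A 192.0.2.20
```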
> This isn't so much a scary story I'm telling as an empirically observable fact: it's happened many times, to very important domains, over the last several years.
So has TLS cert expiration. And while you can (usually) click through it in browsers, that's not an option for mobile apps or IoT/embedded devices, or even for JS-heavy webapps that use XMLHttpRequest/fetch.
And we keep making the Internet more fragile with the ".well-known" subtrees that are served over TLS. It's also now trivial for me to get a certificate for most domains if I can MITM their network.
Edit: BTW, what exactly is _expiring_ in DNSSEC? I've been using the same private key on my HSM for DNSSEC signing for the last decade. You can also set up signing once and then never touch it.
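For what it's worth, the thing that expires in DNSSEC is the RRSIG validity window, not the key: RFC 4034 gives each signature an inception and an expiration timestamp, while the DNSKEY record itself has none. A minimal sketch of that check, assuming the standard presentation format for the expiration field (`YYYYMMDDHHMMSS`, UTC):

```python
from datetime import datetime, timezone

def rrsig_expired(expiration: str, now: datetime) -> bool:
    """RRSIG expiration fields use the presentation form
    YYYYMMDDHHMMSS in UTC (RFC 4034). The DNSKEY itself carries
    no expiration at all -- only the signatures made with it do."""
    exp = datetime.strptime(expiration, "%Y%m%d%H%M%S")
    exp = exp.replace(tzinfo=timezone.utc)
    return now >= exp

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(rrsig_expired("20240520000000", now))  # True  -- signature lapsed
print(rrsig_expired("20240615000000", now))  # False -- still valid
```

This is why "set up signing once and never touch it" works only as long as something keeps re-signing the zone to refresh the RRSIGs.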
There's ample evidence that the cost/benefit math simply doesn't work out for DNSSEC.
You can design new DNSSECs with different cost profiles. I think a problem you'll run into is that the cost of the problem it solves is very low, so you won't have much headroom to maneuver in. But I'm not reflexively against ground-up retakes on DNSSEC.
Where you'll see viscerally negative takes from me is on attempts to take the current gravely flawed design --- offline signers+authenticated denial --- as a basis for those new solutions. The DNSSEC we're working with now has failed in the marketplace. In fact, it's failed more comprehensively than any IETF technology ever attempted: DNSSEC dates back to the early-to-mid 1990s. It's long past time to cut bait.
Now here is where I disagree. Just off the top of my head, how about HIP, IP multicast and PEM?
Multicast gets used (I think unwisely) in campus/datacenter scenarios. Interdomain multicast was a total failure, but interdomain multicast is more recent than DNSSEC.
HIP is mid-aughts, isn't it?
S-HTTP was a bigger failure in absolute terms (I should know!) but it was eventually published as Experimental and the IETF never really pushed it, so I don't think you could argue it was a bigger failure overall.
(I hate to IETFsplain anything to you so think of this as me baiting you into correcting me.)
To really nerd out about it, it seems to me there are two metrics.
1. How much it failed (i.e., how low adoption was).
2. How much effort the IETF and others put into selling it.
From that perspective, I think DNSSEC is the clear winner. There are other IETF protocols that have less usage, but none that have had anywhere near the amount of thrust applied as DNSSEC.
Why? What is the real difference between DNSSEC and HTTPS?
I'd argue that the only difference is that browser vendors care about protecting against MITM on the client side. They're fine with MITM on the server side or with (potentially state-sponsored) BGP prefix hijacks. And I'm not fine with that personally.
> Where you'll see viscerally negative takes from me is on attempts to take the current gravely flawed design --- offline signers+authenticated denial --- as a basis for those new solutions.
Yes, I agree with that. In particular, NSEC3 was a huge mistake, along with the complexity it added.
I think that we should have stuck with NSEC for the cases where enumeration is OK or with a "black lies"-like approach and online signing. It's also ironic because now many companies proactively publish all their internal names in the CT logs, so attackers don't even need to interact with the target's DNS to find out all its internal names.
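Zone enumeration via NSEC is entirely mechanical: each NSEC record names the next owner in canonical order, so an attacker just follows the chain until it wraps back to the apex. A minimal sketch over a hypothetical chain (real code would issue live queries, e.g. with dnspython, instead of reading a dict):

```python
# Hypothetical NSEC chain for example.com.: each entry maps an owner
# name to the "next name" field of its NSEC record. The chain forms
# a ring -- the last name points back to the zone apex.
nsec_chain = {
    "example.com.": "api.example.com.",
    "api.example.com.": "mail.example.com.",
    "mail.example.com.": "www.example.com.",
    "www.example.com.": "example.com.",  # wraps to apex
}

def walk_zone(chain: dict, apex: str) -> list:
    """Enumerate every name in the zone by following NSEC pointers."""
    names, cur = [apex], chain[apex]
    while cur != apex:
        names.append(cur)
        cur = chain[cur]
    return names

print(walk_zone(nsec_chain, "example.com."))
# ['example.com.', 'api.example.com.', 'mail.example.com.', 'www.example.com.']
```

NSEC3 hashes the owner names to make this walk harder, at the cost of the extra complexity mentioned above, and CT logs often leak the same names anyway.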
> In fact, it's failed more comprehensively than any IETF technology ever attempted: DNSSEC dates back to the early-to-mid 1990s. It's long past time to cut bait.
I would say that IPv6 failed even more. It's also unfair to say that DNSSEC dates back to the '90s; the root zone was only signed in 2010.
The good news is that DNSSEC can be improved a lot by just deprecating bad practices. And this will improve DNS robustness in general, regardless of DNSSEC use.
Speaking as someone who was formerly responsible for deciding what a browser vendor cared about in this area, I don't think this is quite accurate. What browser vendors care about is that the traffic is securely conveyed to and from the server that the origin wanted it to be conveyed to. So yes, they definitely do care about active attack between the client and the server, but that's not the only thing.
To take the two examples you cite, they do care about BGP prefix hijacks. It's not generally the browser's job to do something about it directly, but in general misissuance of all stripes is one of the motivations for Certificate Transparency, and of course the BRs now require multi-perspective validation.
I'm not sure precisely what you mean by "MITM on the server side". Perhaps you're referring to CDNs which TLS terminate and then connect to the origin? If so, you're right that browser vendors aren't trying to stop this, because it's not the business of the browser how the origin organizes its infrastructure. I would note that DNSSEC does nothing to stop this either because the whole concept is the origin wants it.
1. DNSSEC only protects the name lookup for a host, and TLS/HTTPS protects the entire session.
2. People actually rely on TLS/HTTPS, and nobody relies on DNSSEC, to the point where the root keys for DNSSEC could be posted on Pastebin tonight and almost nobody would have to be paged. If the private key for a CA in any mainstream browser root program got published that way, it would be all hands on deck across the whole industry.
> 2. People actually rely on TLS/HTTPS, and nobody relies on DNSSEC
Sure. But I treat it as a failing of the overall ecosystem rather than just the technical failure of DNSSEC. It's not the _best_ technology, but it's also no worse than many others.
This is the outcome of browser vendors not caring at all about privacy and security. Step back and look at the current TLS infrastructure from the viewpoint of somebody in the '90s:
You're saying that to provide service for anything over the Web, you have to publish all your DNS names in a globally distributed, immutable log that will be preserved for all eternity? And that you can't even have a purely static website anymore, because you have to renew the TLS cert every few months? This is just crazy talk!
(yes, you technically can get a wildcard cert, but it requires ...drumroll... messing with the DNS)
The amount of just plain brokenness and centralization in TLS is mind-boggling, but we somehow just deal with it without even noticing it anymore. Because browser vendors were able to apply sufficient thrust to that pig.
It only provides privacy; it doesn't verify that the resolver didn't tamper with the record.
>to the point where the root keys for DNSSEC could be posted on Pastebin tonight and almost nobody would have to be paged.
This would very much be a major issue, and lots of people would immediately scramble to address it. The root key ceremonies are heavily audited, and there is an enormous amount of process and oversight around them.