RRSIG with malformed signature found for a0d5d1p51kijsevll74k523htmq406bk.de/nsec3 (keytag=33834). dig +cd amazon.de @8.8.8.8 works; dig amazon.de @a.nic.de works. The zone data is intact: DENIC just published an RRSIG over an NSEC3 record that doesn't validate against ZSK 33834, so every validating resolver refuses to answer.
Intermittency fits anycast: some [a-n].nic.de instances still serve the previous (good) signatures, so retries occasionally land on a healthy auth. Per DENIC's FAQ, the .de ZSK rotates every five weeks via pre-publish, so this smells like a botched rollover.
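If you want to see which anycast instances are still serving good signatures, something like this is enough (a sketch; from any one vantage point you only ever see one instance per letter, so run it from several networks):

$ for ns in a f l n; do dig +dnssec +norec @${ns}.nic.de a0d5d1p51kijsevll74k523htmq406bk.de NSEC3 | grep RRSIG; done

Comparing the signature fields across letters and vantage points shows which instances still carry the old, valid RRSIG.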
Still, at this level, brittle infrastructure is a political risk. The internet's famous "routing around damage" isn't quite working here. Should make for an interesting post mortem.
Building redundant infrastructure that can withstand BGP and DNS configuration mistakes is not that simple, but it can be done.
Ironically, DENIC still requires you to have two separate name servers with different IPs for your domain (which can be worked around by changing the IP of the registered name server afterwards, lol), a requirement that all the other registries I use have dropped or never had, because enforcing such a policy at the registry level makes zero sense.
ns-cloud-c1.googledomains.com
ns-cloud-c2.googledomains.com
ns-cloud-c3.googledomains.com
ns-cloud-c4.googledomains.com
It would make no sense to run four of them if they were in a single AZ. Also, they are geo-aware and route you to your nearest region.

Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
Think about what would happen the day that Let's Encrypt is broken, for whatever reason: technical, or like having an unhinged US leader and being located in the wrong country. Take into account the push by Let's Encrypt, together with the major web browsers, to restrict certificate validity to short periods, like only a few days...
I haven't followed this closely, but have there been any... shall we say plain outages longer than six hours? That's not an outrageous TTL. Or a day.
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
DNS is a lookup service that runs on the internet.
Internet routing of IP packets is what the internet does, and that is working fine (for a given value of fine).
You remind me of someone who says "the internet is down" when they really mean "I've forgotten my wifi password".
Real world beats sci-fi :) And isn't that why we love IT? And hate it too, because of the "people in charge"...
There is designing something to be fail-closed because it needs to be secure in a physical sense (actually secure, physically protected), and then there is designing something fail-closed because it needs to be secure in an intellectual sense (gatekept, intellectually protected). While most of the internet is "open source" by nature, the complexity has been increased to the point where significant financial and technical investment must be made to even just participate. We've let the gatekeepers raise the gates so high that nobody can reach them. AI will let the gatekeepers keep raising the gates, but then even they won't be able to reach the top. Then what?
I think the point you're trying to make, put another way: in the context of "availability" and "accessibility", we've compromised a lot of both in the name of security since the dawn of the internet. How much of that security actually benefits the internet, and how much of it hinders it? How much of it exists as a gatekeeping measure by those who can afford to write the rules?
And fuck, nothing at all happened as a result.
We had a short discussion about migrating to .com, but decided risk != reward, as no one would know the new TLD.
I assume there are a couple of people working for DENIC who had a stressful night...
...is only for Pentagon networks and military stuff. It's not for us normal people. (We get Cloudflare and FAANG bullshit instead.)
Every FAANG company has its own fiber backbone. Why invest in the internet that everyone uses when you can invest in your own private internet and then sell that instead?
Traffic that goes over "the Internet" traverses some mix of your ISP's fiber, fiber belonging to some other ISP they have a deal with, then fiber belonging to some ISP they have a deal with, and so on.
All those ISPs are being paid to provide service; they can invest in their own networks.
I ran up three new VMs on three different sites. I linked all three systems via a private Wireguard mesh. MariaDB on each VM bound to the wg IP and stock replication from the "primary". PowerDNS runs across that lot. One of the VMs is not available from the internet and has no identity within the DNS. The idea is that if the Eye of Sauron bears down on me, I can bring another DNS server online quite quickly and fiddle the records to bring it online. It also serves as a third authority for replication.
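For reference, the wiring is only a few lines of config on each node (a minimal sketch; the 10.66.0.x addresses stand in for whatever WireGuard mesh IPs you chose):

# /etc/mysql/mariadb.conf.d/50-server.cnf -- bind MariaDB to the wg IP only
[mysqld]
bind-address = 10.66.0.1
server-id    = 1
log_bin      = /var/log/mysql/mysql-bin.log

# /etc/powerdns/pdns.conf -- PowerDNS serves from the locally replicated DB
launch=gmysql
gmysql-host=10.66.0.1
gmysql-dbname=pdns
gmysql-user=pdns
gmysql-password=...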
I also deployed https://github.com/PowerDNS-Admin/PowerDNS-Admin which is getting on a bit and will be replaced eventually but works beautifully.
Now I have DNS with DNSSEC and dynamic DNS and all the rest. This is how you start signing a zone and PowerDNS will look after everything else:
# pdnsutil secure-zone example.co.uk
# pdnsutil set-nsec3 example.co.uk
# pdnsutil rectify-zone example.co.uk
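To sanity-check the result before handing the DS record to your registrar, something like this works (a sketch; delv ships with BIND, and the resolver choice is arbitrary):

# pdnsutil show-zone example.co.uk    # prints the zone's keys plus the DS set for the parent
$ delv @1.1.1.1 example.co.uk SOA     # prints "; fully validated" once the chain is complete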
Grab a test zone and work it all out first; it will cost you not a lot, and then go for "production".

My home systems are DNSSEC signed.
Telnet was sniffed. IRC was being sniffed and logged.
I've just given them part of a recipe for using DNSSEC. I suspect you are not actually human .. qingcharles.
I once worked at the level of administering DNSSEC for 300+ TLDs. It's its own world. When that company was winding down, I tried to continue in the field but the most common response (outside of no response, of course), was 'we already have a DNS team/vendor/guy.' And well, then things like this happen. I won't throw stones though, it's a lot to learn and can be incredibly brittle.
Broadly similar general concept to the team responsible for the DNSSEC signing keys for an entire ccTLD.
Yeah, an X.509 PKI / root CA is a very different thing from DNSSEC, but they have a number of general logical similarities, in that the chain of trust ultimately comes down to a "do not fuck this up" single point of failure.
I had the misfortune of having to dig deep into constructing ASN.1 payloads by hand [1] because that's the only thing Java speaks, and oh holy hell is this A MESS because OF COURSE there's two ways to encode a bunch of bytes (BIT STRING vs OCTET STRING) and encoding ed25519 keys uses BOTH [2].
And ed25519 is a mess in itself. The more-or-less standard implementation by orlp [3] is almost completely lacking any comments explaining what is going on where, and reading the relevant RFCs alone doesn't help; it's probably only understandable by reading a 500-page math paper.
It's almost as if cryptographers have zero interest in letting interested random people join the field.
End of rant.
[1] https://github.com/msmuenchen/meshcore-packets-java/blob/mai...
[2] https://datatracker.ietf.org/doc/html/rfc8410#appendix-A
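If you want to poke at that BIT STRING / OCTET STRING duality yourself, openssl will show it (a sketch; needs OpenSSL 1.1.1 or later for ed25519 support):

$ openssl genpkey -algorithm ed25519 -out key.pem
$ openssl asn1parse -in key.pem           # PKCS#8: the private key sits inside an OCTET STRING
$ openssl pkey -in key.pem -pubout -out pub.pem
$ openssl asn1parse -in pub.pem           # SubjectPublicKeyInfo: the public key is a BIT STRING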
It wouldn't be as bad if ASN.1 had caught on more as a general-purpose serialization format and there were ubiquitous, decent libraries for dealing with it. But that didn't happen, probably partly because there are so many different representations of ASN.1.

A bespoke serialization format specifically for certificates might actually have aged better, if it were well designed.
Bitpacking structures used to be important in the 60s. That time has passed; unless you're dealing with LoRa, NFC, or other cases of highly constrained bandwidth, there are far better options for serializing and deserializing information. It's time to move on, and the complexity of all the legacy garbage in crypto has been the cause of many a security vulnerability in the past.
As for the code, it might be personal preference but I'd love to have at least some comments referring back to a specification or original research paper in the code.
People who have thought they can do better have made things like PGP. It's one of the worst cryptographic solutions out there. You're free to try as well though.
And there is a related binary format that uses CBOR (COSE) as well.
> because that's the only thing Java speaks
No, it most definitely is not. You can just construct a private key directly in BouncyCastle: https://downloads.bouncycastle.org/java/docs/bcprov-jdk18on-...
I'm 100% certain that you also can do that with raw java.security. I did that about 15 years ago with raw RSA/EC keys. You can just directly specify the private exponent for RSA (as a bigint!) or the curve point for EC.
Ditto for ed25519: you can just take the canonical implementation from DJB. But you really, really shouldn't do that anyway; please just use OpenSSL or another similar major crypto library.
I tried that; the problem is Meshcore-specific: they do their own weird shit with private and public keys [1]. I haven't figured out how to do the private key import either, because in the C source code (and in the Python re-implementations) Meshcore just calls directly into the raw ed25519 library to do their custom math... it's a mess.
[1] https://jacksbrain.com/2026/01/a-hitchhiker-s-guide-to-meshc...
maybe someone is showing off?
I haven't been able to find any cases of genuine DNS hijack attacks in the last few years. I'd love to know if anyone else can.
Only about 40% of the crypto companies seem to use dnssec. Seems like a target rich environment.
There are also some large businesses that require, or strongly pressure, SaaS providers to use DNSSEC. You can often contest that, but if you have DNSSEC, that's one less thing to argue about in the contract.
The browser would be very unhappy with an <input type="password"/> on a non-TLS site (localhost excepted). HSTS would trigger the "massive" warning and refuse to load the site, however.
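For what it's worth, whether a site sends HSTS is visible with a plain header fetch (example.de is a placeholder here, and the output line is typical rather than guaranteed):

$ curl -sI https://example.de | grep -i strict-transport-security
strict-transport-security: max-age=31536000; includeSubDomains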
Ah yes I think the HSTS issue is what I was thinking of
Paradoxically, resolvers wouldn't have noticed the misconfiguration if it weren't for DNSSEC.
Beyond that, DNS has the AD bit. If you need DNSSEC secure data (for example for the TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
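You can watch the AD bit directly with dig (a sketch; any signed name works, and the flags line is what you'd expect with validation on):

$ dig denic.de SOA @1.1.1.1 | grep ';; flags'
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

With validation switched off, the ad flag disappears even though the answer still comes back.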
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation on 1.1.1.1 resolver in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
---
(and in case it changes again, now it says)
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation for .de domains on 1.1.1.1 resolver (as per RFC 7646) in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
See RFC 7646 for more details: https://datatracker.ietf.org/doc/html/rfc7646
---
That said, on the last few DNSSEC posts that got traction, tptacek tended to be at least 20% of the comments on his own (e.g., 55/259), ignoring word count. Today seems calm.
Fun fact: Cloudflare has used the same KSK for the zones it serves for more than a decade now.
Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
For all we know, some ex-employee might be walking around with that KSK, theoretically able to use it for god knows what for another decade.
Nope. Key material rotation is just circus when it's done for the sake of rotation.
> For all we know, some ex-employee might be walking around with that KSK, theoretically able to use it for god knows what for another decade.
Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?
I'm a mere sysadmin and not a cybersecurity expert. But this is always something that leaves me torn.
On the one hand, yes, rotation periods for many/most credentials are long enough that you're not really de-risking yourself all that much.
On the other hand, doing regular rotations allows you to tighten up your threat model. A regularly-rotated credential allows you to say "I implicitly trust that this credential has not been compromised prior to the previous rotation."[0] Whereas, without credential rotation, you're saying "I implicitly trust that this credential has not been compromised ever."
The latter to me seems clearly like the inferior model. The question is just whether the cost-benefit pencils out. And that is obviously very situationally dependent. That calculus doesn't pencil out when dealing with user-owned passwords for instance (i.e. the costs of regular password rotation dominate the benefits of the improved threat model). Human limitations with memory and such are the main issue there. However, that doesn't apply to e.g. hypothetical sufficiently developed DNSSEC infrastructure. Does that calculus pencil out there? I don't know. But it seems plausible at least.
[0] Modulo attackers having been able to pivot into a persistent threat with a previously-compromised credential.
I'm just saying that rotating the key just in case someone compromised it is not a great idea. Doubly so if it's done infrequently enough for the operational experience to atrophy between rotations.
And yeah, I fully agree that anything surrounding the DNSSEC operations is a burning trash fire. It doesn't have to be this way, but it is.
And I just don't fully buy this rationale for asymmetric key rotation. It makes total sense for symmetric secrets (except for passwords).
Also possible, but that'd be an active threat that has some probability of being caught.
Never replacing keys allows permanent compromise that can only be caught if someone directly observes misuse.
Though nobody monitors DNSSEC like that, nor uses it, so it's fine from that aspect I guess.
Edit: Alternative link: https://www.cyberciti.biz/media/new/cms/2017/04/dns.jpg
{"data":{"error":"Imgur is temporarily over capacity. Please try again later."},"success":false,"status":403}
There is some strange irony to this, I suppose.

It's been like that for over two years now.
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
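That property is easy to demonstrate: delv (BIND's validating lookup tool) checks the whole chain locally against the root trust anchor, regardless of which resolver actually supplied the records (a sketch; the resolver address is arbitrary):

$ delv @8.8.8.8 denic.de A    # prints "; fully validated" on success, even via a third-party cache

If the cache tampered with the answer, the signatures wouldn't verify and delv would report the failure instead.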
But how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down? As far as I understand, the TTL for those keys is different, and for DENIC it seems to be 1h [0]. So if they are down for more than an hour and all RRSIG caches expire, DNS zones that have a TTL higher than 1h but use DNSSEC would also be down?
[0] dig RRSIG de. @8.8.8.8
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519214514 20260505201514 26755 de. [...]
In theory, this shouldn't happen, because if you use the same TTLs for your DNSSEC records and your "regular" records, then if the regular records are present in the cache, the DNSSEC records will be too.
> So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
Yes, but I'd argue that the DNSSEC records should have the same TTLs for exactly this reason. That's how my domain is set up at least:
$ dig +nocmd +nocomments +nostats +dnssec @any.ca-servers.ca. maxchernoff.ca. DS
;maxchernoff.ca. IN DS
maxchernoff.ca. 86400 IN DS 62673 15 2 487B95FEFF04265826F037C9DB2E1F14FF9ADBF2C7BE246A2B9F9BFD 481BE928
maxchernoff.ca. 86400 IN RRSIG DS 13 2 86400 20260512131336 20260505104433 46762 ca. ppc9LrWniPWdAI2Xq1g3FrYJGQVYayA5TtgFRkJfqOqNfe6zu/n0gwti IO3c9pOoUpIum5gPB6GLOGbGU+sfhg==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. DNSKEY
;maxchernoff.ca. IN DNSKEY
maxchernoff.ca. 86400 IN DNSKEY 257 3 15 DYs9mPDMRx/hQ9R9iGLi1Ysx1eFdhlXeCujY6PqJWeU=
maxchernoff.ca. 86400 IN RRSIG DNSKEY 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. RgPyEvB/kjXIvoidRNF/hfm7utzDs0kxXn4qJL17TUAVYOdbLl0Vd8zt E52bGBBFv2TNEnf9O9LkiT2GBH0jAA==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. A
;maxchernoff.ca. IN A
maxchernoff.ca. 86400 IN A 152.53.36.213
maxchernoff.ca. 86400 IN RRSIG A 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. bRfTVHnMjCFRaIh5uc0aT1vD4yh1UZrqOZDRunLbxFI1eth6nNlTiOOC xti7axVoXwB6VAoHOAnW0nL0eeJNDQ==

No, that actually is true, but I think (?) that the part you were missing is that DNSSEC records are mostly the same as any other record, so they can be cached the same way. And since most resolvers are DNSSEC-enabled these days, they'll tend to request (and therefore cache) the DNSSEC records at the same time as the regular records.
There are tons of edge cases here, but it should hopefully be pretty rare for a cache to have a current A/AAAA record and stale/missing DNSSEC records.
> the DNSSEC verification is also "cached"
Technically the verification itself isn't cached, but since verification only depends on the chain of DNSSEC records, and those records are cached, it has the same effect.
A list of root nameserver IP addresses is included with every local recursive DNS resolver. The list changes, albeit slowly, over the years. With DNSSEC, resolvers additionally ship the root zone's KSK as a trust anchor, which also rotates, slowly.
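Both halves are just files on disk. On a typical unbound install, for instance (paths vary by distro):

$ head -4 /usr/share/dns/root.hints             # bootstrap list of root server addresses
$ unbound-anchor -a /var/lib/unbound/root.key   # fetches/refreshes the root KSK per RFC 5011
$ cat /var/lib/unbound/root.key                 # the trust anchor the whole chain hangs off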
So whatever the issue is that the operator has, it does not change the impact.
The world might be a little bit better with more decentralization of the root zone.
https://archive.nytimes.com/www.nytimes.com/library/cyber/we...
$ dig -x 199.181.172.242 +short
www2.nytimes.com.
Neat.

You can both be the 3rd biggest economy in the world and still be only 1/10th of the US and Chinese GDPs combined.
And only three companies in the Top 100 for Germany:
https://companiesmarketcap.com/
Germany is the kingdom of the "mittelstand": many, many, many SMEs.
Both GP and you are right: it's the 3rd largest economy in the world and yet it's simply not that big.
https://en.wikipedia.org/wiki/Mittelstand
In other words: I expect this German DNS SNAFU to have 0.000000001% impact on the world's GDP this year.
126 trillion USD * 0.00000000001 = 1260 USD
I'm pretty sure the impact was higher than that ;)
EDIT: it says "Service Disruption" now
Edit: Now even the humor is gone.
EDIT: called it...
Good news though: if you add domain-insecure: "de" to your unbound config, everything works fine.
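Concretely, that is (the file path is distro-dependent):

# /etc/unbound/unbound.conf.d/de-insecure.conf
server:
    # Treat .de as unsigned until DENIC fixes its signatures.
    # This skips validation for the .de subtree only, not globally.
    domain-insecure: "de"

$ unbound-control reload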
"Cloudflare Radar data shows 8.11% of domains are signed with DNSSEC, but only 0.47% of queries are validated end-to-end." [1]
Zones I may care about:
- Amazon.com: unsigned
- My banks: unsigned
- Hacker News: unsigned
- Email that I do not host: unsigned
- My power company's billing: unsigned
- I found some! id.me and irs.gov are signed.
But not before 8am.
DNSSEC not working
If you're using an open resolver, i.e., a shared DNS cache such as a third-party DNS service (Google, Cloudflare, etc.), then it might fail, or it might not. It depends on the third-party DNS provider.
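The difference shows up immediately with dig (a sketch mirroring the commands from the top comment; this is what you'd expect during the outage):

$ dig amazon.de @8.8.8.8        # validating path: SERVFAIL while the bad RRSIGs are live
$ dig +cd amazon.de @8.8.8.8    # checking disabled: resolves normally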
https://datatracker.ietf.org/meeting/118/materials/slides-11...
It's the cryptographic version of that one time the same TLD told the world domains starting with certain letters didn't exist: https://www.theregister.com/2010/05/12/germany_top_level_dom...
DENIC screwed up the TLD itself, and .com/.net are just as susceptible.
Surely a wealth tax is not worth mentioning.
They made the point that more immigration / growth wouldn't help fix the core problem if they don't fix that asap.
I'm really not too close to DENIC and know nothing about their internals, but I'm just close enough to have experienced, second-hand, the stress of someone working for DENIC during the outage. From the very limited information I happened to gather, DENIC had some trouble addressing the issue because, surprise, infrastructure they needed to do so runs on .de domains. [1]
I'm convinced there are all kinds of extended cyclic dependencies between different centralization points in the net.
If some important backbone of the internet is down for an extended time, this will absolutely cause cascading failures. And these central points of failure are only getting worse. I love Let's Encrypt, but if something causes them to hard-fail, things will go really bad once certificates start to expire.
We need concrete plans to cold start extended parts of the internet. If things go really bad once and communication lines start to fail, we're in for a bad time.
Maybe governments have redundant, ultra resistant, low tech communication lines, war rooms and a list of important people in the industry who they can find and put in these war rooms so they can coordinate the rebuild of infrastructure. But I doubt it.
[1] I don't know if there is some kind of disaster plan in a drawer at DENIC that would address this. I don't mean to allege anything against DENIC specifically; broadly speaking, for companies and infrastructure providers in general, I would not be surprised if there was absolutely no plan for what to do if things really go down, how to cold-start cyclic dependencies, or even where they are.
DENIC's status page currently says "Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability. The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible."
If my domains' DNS servers start pointing at localhost, that doesn't mean DNS is a broken protocol.
The only problem with DNSSEC here is that it's complex.
[Edit] After playing around with it, Google seems to have at least some pages cached. After setting DNS to 8.8.8.8, amazon.de and spiegel.de work again; my blog does not.
https://blog.denic.de/denic-informiert-uber-die-behebung-der...
"Die Störung ist inzwischen behoben und alle Systeme laufen wieder stabil. Die genaue Ursache wird derzeit noch analysiert. Sobald belastbare Erkenntnisse vorliegen, wird DENIC diese transparent zur Verfügung stellen."
translation:
‘The disruption has now been resolved and all systems are running smoothly again. The exact cause is currently being investigated. As soon as reliable findings are available, DENIC will make them publicly available.’
"It all began with the decommissioning of the last nuclear power plant, ..."
yes indeed
Looks like it failed after a maintenance: https://www.namecheap.com/status-updates/planned-denic-de-re...
-> No idea if that also "heals" anyone who had DNSSEC on before.
-> No idea if maybe they need to roll back something and then re-break the new DNSSEC I made a minute later, lol...
This works:

$ unbound-host -t A www.denic.de
www.denic.de has address 81.91.170.12

This does not:

$ unbound-host -D -t A www.denic.de
www.denic.de has address 81.91.170.12
validation failure <www.denic.de. A IN>: signature crypto failed from 194.246.96.1 for DS denic.de. while building chain of trust
So it does seem DNSSEC-related.

EDIT: My explanation was wrong; this is not how keytags work. The published keytag data is consistent:
de. 3600 IN DNSKEY 256 3 8 AwEAAfRLmzuIXVf7x5A0+U7hke0dS+GEJG0EdPhnOthCCLhy0t0WqLyoXJOhnfsTJ8vQX5fd9qOJc9gyr3SWJZkXAhPm3yPSC7FWWHF70WZTKKM9CekmKdqwMwq6ZCjMSUcecCuSF4Sbt1MRszV7rFmfGVklA1l5UzNbqwD+Dr5vfcLn ;{id = 33834 (zsk), size = 1024b}
de. 3600 IN DNSKEY 257 3 8 AwEAAbWUSd/QN9Ae543xzdiacY6qbjwtZ21QfmdgxRdm4Z7bjjHWy249uqxCyjjjoS4LDoRDKmj7ElffMKvTWKE1qFKu0p8TUy4wyhX0M+m5FUjvQ3CiZMi+qY7GSHA5B+Zd73cidmnTeb3e8lso6jEsXg05/VZ2AyAqWF6FexEIFxIqiwwLk4UP0BwZ17Ur3q1qx9VSbPMyHgQ9d6nHUN1EEJsTDA2v0vKumsUyp74ZanRZ/bB/6IzpaaZyr5BLF5pSCNdbRNjVmkwYD0993vm79LueyOeibsoHRc16jhALrIJou1PFjdq7YQsYN0KtqRiJtaAfPprDBREpeamPuW/MnW0= ;{id = 26755 (ksk), size = 2048b}
de. 3600 IN DNSKEY 256 3 8 AwEAAbTe1PJi8EgIudNGb+KRTxBL2aCu5rXkZ+aIe/TC88pwRdrXYeXODp1ihZWFop5CrbWRBLrk/YUPBE8aBc6oJP+58dSkdMLYkjSkmvdvYx+zXnRLWlF2bapxvZxshATJDfGjGbCiWxKEOoyRx3UhICtHC+cUSddsEvzfacUcBb6n ;{id = 32911 (zsk), size = 1024b}
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519030655 20260505013655 26755 de. ke56T5GZt/X6zMBAF+ouyCTnAd7RY7MsnDcfa9jyyOwSouRXhvzim/V13JDTMBAnpAHxWQXoruXrAZ6A6re5N+8Pp2utVkAEKTWs0r4UOLNKoZ2+zMwNplKjNNnY5PJIbHfa5myyziLiIsi//qDIgQEACFk+pZcHXrRdqRoXPCL3UtfaXjk3+duDQdlPnYsJys5UshjVpkALSMChW7J0anzr0sG+f9ytstBneymMwFYOUC3NqbejbLPZsXGPZBQKPAoVJuV5q3znopbcqrDFfjI7bmX3QPYNvOaiT1ElBfi2piJVpDzMaMAmm2jCmvrf5VeTOBccMroh8sBtDPsaEg== ;{id = 26755}
The signature on the SOA record still does not verify:

de. 86400 IN SOA f.nic.de. dns-operations.denic.de. 1778014672 7200 7200 3600000 7200
de. 86400 IN RRSIG SOA 8 1 86400 20260519205754 20260505192754 33834 de. aZoiAJ+PaHUDVSHNXfV/R26ZK3GpFB7ek2Z46VnZdmPEDaTww+a7PkiQ98W83xohUunXYSvQCMeGYfUre5UT76eBKThdxW2a6ImX9/x/oEzQ9x/69Y/NSeTckOv9m3HCLBOug01op1koiHOIAVEvonOmXEHHqo1P4sR/fNbcVg4= ;{id = 33834}

I am very happy that it doesn't happen more often.
As a fallback, they should use their X account: https://x.com/denic_de
May 5, 2026 23:28 CEST
May 5, 2026 21:28 UTC
INVESTIGATING
Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability. The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible. Based on current information, users and operators of .de domains may experience impairments in domain resolution. Further updates will be provided as soon as reliable findings on the cause and recovery are available. DENIC asks all affected parties for their understanding. For further enquiries, DENIC can be contacted via the usual channels.
We observed issues on a non-DNSSEC .de domain at 19:45Z and confirmed around 20:12Z that it wasn't just us; higher-profile domain names were affected too.
There’s no way it’s DNS
It was DNSSEC
There are too many coincidences happening.
Fundamentally, security is a solution to an availability problem: The desire of the users is for a system to remain available despite external attack.
Systems that become unavailable to everyone fail this requirement.
A door with its keyhole welded shut is not "secure", it's broken.
If I’m unable to use Amazon for 24 hours it doesn’t really matter. If a photo copy of my passport is leaked that’s worries and potential troubles for years.
or alternatively,
Security = (exclude unauth'd reads) + (exclude unauth'd writes) + (include auth'd reads and auth'd writes)
Gotta satisfy all parts in order to have security.
Confidentiality = available to us, but nobody else.
Integrity = available to us in a pristine condition.
It's a bit reductive, I'll admit, but it can be a useful exercise, in the same way that everything in an economy can be reduced to units of either "human time", "money", or "energy". Roughly speaking, they're interchangeable.

E.g.: What's the benefit to you if your data is so confidential that you can't read it either? This is a real problem with some health information systems, where I can't access my own health records! Ditto with many government bureaucracies that keep my records safe and secure from me.
Bad UX and bugs are, in general, not always an availability problem.
If it's hard to get what you want due to bad design but the site is up, the site is still up.
$ nslookup bmw.de
Non-authoritative answer:
Name:    bmw.de
Address: 160.46.226.165

$ nslookup www.bmw.de
;; Got SERVFAIL reply from 8.8.8.8, trying next server
Server:  8.8.4.4
Address: 8.8.4.4#53

** server can't find www.bmw.de: SERVFAIL
https://edition.cnn.com/2026/05/01/politics/us-troop-withdra...