Building redundant infrastructure that can withstand BGP and DNS configuration mistakes is not that simple, but it can be done.
Ironically, denic still requires you to have two separate name servers with different IPs for your domain (which can be worked around by changing the IP of the registered name server afterwards, lol), a requirement that all the other registries I use have dropped or never had, because enforcing such a policy at the registry level makes zero sense.
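Out of curiosity, here's a minimal sketch of the point-in-time check that rule amounts to, assuming the third-party dnspython package (the domain is just a placeholder). The workaround works precisely because nothing re-runs a check like this after registration:

    # Hypothetical check mirroring the two-name-server rule: the domain must
    # list at least two NS hosts, and those hosts must resolve to distinct IPs.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def nameserver_ips(domain: str) -> dict[str, set[str]]:
        """Map each NS hostname of `domain` to the IPv4 addresses it resolves to."""
        ips = {}
        for ns in dns.resolver.resolve(domain, "NS"):
            host = str(ns.target).rstrip(".")
            ips[host] = {str(a) for a in dns.resolver.resolve(host, "A")}
        return ips

    def satisfies_two_server_rule(domain: str) -> bool:
        ips = nameserver_ips(domain)
        distinct = set().union(*ips.values())
        return len(ips) >= 2 and len(distinct) >= 2

    print(satisfies_two_server_rule("example.de"))  # placeholder domain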
ns-cloud-c1.googledomains.com
ns-cloud-c2.googledomains.com
ns-cloud-c3.googledomains.com
ns-cloud-c4.googledomains.com
It would not make any sense to run four of them in a single AZ. Also, they are geo-aware and routed to your nearest region.

Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
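A rough sketch of what that fallback could look like client-side; the cache file and the fingerprint handling are purely illustrative, not any real protocol:

    # Sketch of the fallback idea: remember an (address, key fingerprint) pair
    # per name on every successful lookup, and fall back to the cached pair
    # when live resolution fails. All names and formats here are made up.
    import json
    import socket

    CACHE_FILE = "resolve_cache.json"

    def load_cache() -> dict:
        try:
            with open(CACHE_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def save_cache(cache: dict) -> None:
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)

    def resolve_with_fallback(hostname: str, fingerprint: str) -> str:
        """Return an IP for `hostname`; prefer live DNS, fall back to the cache.
        `fingerprint` stands in for a key fingerprint pinned out of band."""
        cache = load_cache()
        try:
            ip = socket.gethostbyname(hostname)        # live lookup
            cache[hostname] = {"ip": ip, "fp": fingerprint}
            save_cache(cache)                          # remember for bad days
            return ip
        except socket.gaierror:
            entry = cache.get(hostname)
            if entry and entry["fp"] == fingerprint:   # verify pinned identity
                return entry["ip"]                     # semi-permanent address
            raise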
Think about what would happen the day Let's Encrypt is broken for whatever reason, technical or political, like having a hostile US administration while being located in the wrong country. Take into account the push by Let's Encrypt and the major web browsers to restrict certificate validity to short periods, like only a few days...
I haven't followed this closely, but have there been any... shall we say, plain outages longer than six hours? That's not an outrageous TTL. Nor is a day.
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
* https://www.keyfactor.com/blog/2023s-biggest-certificate-out...
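As a back-of-the-envelope check on that claim, here's a small Python sketch (the hostname is just an example) that reports how many days of validity a site's certificate has left; with short-lived certs, that number is roughly how long a CA outage can last before renewals start failing en masse:

    # Fetch a site's certificate and compute remaining validity in days.
    # Standard library only; the hostname is an example, not an endorsement.
    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        not_after = ssl.cert_time_to_seconds(cert["notAfter"])
        remaining = not_after - datetime.now(timezone.utc).timestamp()
        return remaining / 86400

    print(f"{days_until_expiry('example.org'):.1f} days left")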
DNS is a lookup service that runs on the internet.
Internet routing of IP packets is what the internet does, and that is working fine (for a given value of fine).
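To make the layering concrete, a tiny Python sketch (the resolver IP 1.1.1.1 is used only as a well-known reachable address): step 1 exercises DNS, step 2 exercises raw IP routing, and one can fail while the other is fine:

    import socket

    # Step 1: DNS -- a lookup service running *on* the internet.
    try:
        print("resolved:", socket.gethostbyname("example.com"))
    except socket.gaierror as e:
        print("DNS lookup failed:", e)

    # Step 2: IP routing -- what the internet itself does.
    with socket.create_connection(("1.1.1.1", 443), timeout=5):
        print("raw IP connection fine; 'the internet' is up")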
You remind me of someone using the phrase "the internet is down" who really means "I've forgotten my wifi password".
The real world beats sci-fi :) And isn't that why we love IT? And hate it too, because of the "people in charge"...
There is designing something to be fail-closed because it needs to be secure in a physical sense (actually secure, physically protected), and then there is designing something fail-closed because it needs to be secure in an intellectual sense (gatekept, intellectually protected). While most of the internet is "open source" by nature, its complexity has been increased to the point where significant financial and technical investment is required just to participate. We've let the gatekeepers raise the gates so high that nobody can reach them. AI will let the gatekeepers keep raising the gates, but then even they won't be able to reach the top. Then what?
I think the point you're trying to make, put another way, is that in terms of "availability" and "accessibility", we've compromised a lot of both in the name of security since the dawn of the internet. How much of that security actually benefits the internet, and how much of it hinders it? How much of it exists as a gatekeeping measure by those who can afford to write the rules?
And, fuck, nothing at all happened as a result.
We had a short discussion about migrating to .com, but decided risk != reward, since no one would know the new TLD.
I assume there are a couple of people working for denic who had a stressful night...
...is only for Pentagon networks and military stuff. It's not for us normal people. (We get Cloudflare and FAANG bullshit instead.)
Every FAANG company has its own fiber backbone. Why invest in the internet that everyone uses when you can invest in your own private internet and then sell that instead?
Traffic that goes over "the Internet" traverses some mix of your ISP's fiber, fiber belonging to some other ISP they have a deal with, then fiber belonging to some ISP that ISP has a deal with, and so on.
All those ISPs are being paid to provide service; they can invest in their own networks.