DC doesn't have such a killer feature. There are a decent number of benefits, and the main drawback is gear availability. However, the chicken-and-egg problem is being solved by hyperscalers. Like it or not, the rank-and-file of small and medium businesses is dying, and massive deployments like AWS/GCP/Azure/Meta are becoming the norm. Those four already account for 44% of data center capacity! If they switch to DC, can you still call it "specialty kit", or would it perhaps be more accurate to call it the "industry norm"?
It is becoming increasingly obvious that the rest of the industry is essentially getting Big Tech's leftovers. I wouldn't be surprised if DC became the norm for colocation over the next few decades.
Fuck's sake.
https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architec...
https://blogs.nvidia.com/blog/gigawatt-ai-factories-ocp-vera...
Almost everybody in the industry is embracing 800 V DC, mostly because of Vera Rubin and its increased electricity requirements.
https://www.nokia.com/bell-labs/publications-and-media/publi...
Every single DC I’ve worked in, from two racks to hundreds, has been AC-driven. It’s just cheaper to go after inefficiencies in consumption first with standard kit than to optimize away AC-to-DC conversion losses. I’m not saying DC isn’t the future, so much as I’ve been hearing it’s the future for about as long as Elmo has promised FSD is coming “next year”.
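To make the conversion-loss argument concrete: end-to-end efficiency is just the product of each stage's efficiency, so the case for DC rests on having fewer stages. A minimal sketch, with stage efficiencies that are purely illustrative assumptions, not measured figures:

```python
# Overall efficiency of a power-conversion chain is the product of its stages.
# The stage names and percentages below are illustrative assumptions only.
from functools import reduce


def chain_efficiency(stages):
    """Multiply per-stage efficiencies into one end-to-end figure."""
    return reduce(lambda acc, eta: acc * eta, stages, 1.0)


# Conventional AC path: double-conversion UPS -> PDU -> server PSU (AC->DC)
ac_chain = [0.94, 0.99, 0.94]
# DC path: one central rectifier -> DC/DC at the rack
dc_chain = [0.97, 0.98]

ac_eff = chain_efficiency(ac_chain)
dc_eff = chain_efficiency(dc_chain)
print(f"AC chain: {ac_eff:.1%}  DC chain: {dc_eff:.1%}")
```

The point of the sketch is the shape of the math, not the exact numbers: whether the DC path actually wins in practice depends on real hardware, which is exactly the "gear availability" problem.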
It's much cheaper, quicker, and easier to use cooling blocks with leak-proof quick connectors for liquid cooling. It means you can use normal equipment and don't need to reinforce the floor.
A lot of "edge" stuff has 12/48v screw terminals, which I suspect is because they are designed to be telco compatible.
For megawatt racks though, I'm still not really sure.
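Part of why megawatt racks push toward high-voltage DC is plain Ohm's-law arithmetic: for a fixed power, current (and thus busbar copper and resistive loss) scales inversely with voltage. A quick sketch, with a hypothetical 1 MW rack as the example load:

```python
# Current needed to deliver a given power at a given distribution voltage.
# Pure I = P / V arithmetic; the 1 MW rack figure is a hypothetical example.
def required_current(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v


RACK_POWER_W = 1_000_000  # hypothetical 1 MW rack

for volts in (48, 415, 800):
    amps = required_current(RACK_POWER_W, volts)
    print(f"{volts:>4} V -> {amps:>9,.0f} A")
```

At 48 V a megawatt means tens of kiloamps of busbar; at 800 V it drops to about 1.25 kA, which is the scale the 800 V DC proposals above are aimed at.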
Edit: s/have/had/
Looking at the manual for the first server line that came to mind: you can buy a Dell PowerEdge R730 today with a first-party-supported DC power supply.