As seen on HN a few days ago, immersion cooling is dead: it turns out the risk of getting sued into oblivion over widespread PFAS contamination isn't worth it. [0]

DC doesn't have such a killer. There's a decent list of benefits, and the main drawback is gear availability. However, the chicken-and-egg problem is being solved by the hyperscalers. Like it or not, the rank and file of small & medium businesses is dying, and massive deployments like AWS/GCP/Azure/Meta are becoming the norm. Those four already account for 44% of data center capacity! If they switch to DC, can you still call it "specialty kit", or would it perhaps be more accurate to call it "industry norm"?

It is becoming increasingly obvious that the rest of the industry is essentially getting Big Tech's leftovers. I wouldn't be surprised if DC became the norm for colocation over the next few decades.

[0]: https://thecoolingreport.com/intel/pfas-two-phase-immersion-...

reply
They poisoned water supplies, knowingly, for decades, and it only takes $12 billion to finally get them to stop?

Fuck's sake.

reply
I recommend reading these two:

https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architec...

https://blogs.nvidia.com/blog/gigawatt-ai-factories-ocp-vera...

Almost everybody in the industry is embracing 800V DC, mostly because of Vera Rubin and the increased electricity requirements.

reply
Those vendors all have DC power supply options, to my knowledge. It’s hardly new; early telco datacenters had DC power rails, since Western Electric switching equipment ran on 48VDC.

https://www.nokia.com/bell-labs/publications-and-media/publi...

reply
That’s just it though: telco DCs != compute DCs. Telcos had a vested interest in DC adoption because their wireline networks used it anyway, and the fewer conversions they did, the more efficient their deployments were.

Every single DC I’ve worked in, from two racks to hundreds, has been AC-driven. It’s just cheaper to go after inefficiencies in consumption first with standard kit than to optimize for AC-DC conversion loss. I’m not saying DC isn’t the future so much as I’ve been hearing it’s the future for about as long as Elmo’s promised FSD is coming “next year”.

reply
I think the real reason is that battery power didn't have to be converted twice to run the gear during an outage, so you'd get longer runtime in a power failure. It also saves a bunch of money on supplies and inverters, because you effectively only need a single giant supply for all of the gear, and those tend to be more efficient (and easier to keep cool) than a whole raft of smaller ones.
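
To put rough numbers on that, here's a back-of-envelope sketch in Python. The per-stage efficiencies are assumptions picked purely for illustration, not measurements of any particular gear:

    # Compare end-to-end efficiency of two power paths (illustrative numbers only):
    #   AC double-conversion: utility AC -> rectifier -> inverter -> server PSU (AC->DC)
    #   DC bus:               utility AC -> rectifier -> server DC-DC stage
    rectifier = 0.96   # assumed AC->DC rectifier efficiency
    inverter  = 0.95   # assumed DC->AC inverter efficiency
    psu_acdc  = 0.94   # assumed server AC->DC PSU efficiency
    dc_dc     = 0.97   # assumed server DC-DC stage efficiency

    ac_path = rectifier * inverter * psu_acdc   # ~0.86
    dc_path = rectifier * dc_dc                 # ~0.93

    print(f"AC double-conversion: {ac_path:.1%} of input power reaches the load")
    print(f"DC bus:               {dc_path:.1%} of input power reaches the load")

On those assumed numbers the DC bus keeps roughly seven points more of the input power, and on battery the gap widens further, since the inverter and the server PSU's rectifier drop out of the discharge path entirely.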
reply
Immersion cooling was/is so fucking impractical that it's only useful for very specific problems. If you talk to any engineer who worked on Cray machines that were full of liquid Freon, they'll tell you how hard it is to do quick swaps of anything.

It's much cheaper, quicker, and easier to do liquid cooling with cooling blocks and leak-proof quick connectors. It means you can use normal equipment and don't need to reinforce the floor.

A lot of "edge" stuff has 12/48 V screw terminals, which I suspect is because it's designed to be telco-compatible.

For megawatt racks though, I'm still not really sure.

reply
We had a cluster of liquid cooled CDC Cyber mainframes. One of them developed a bad leak and managed to drain itself into the raised floor. This was a Very Bad Day for many folks in the computer center.

Edit: s/have/had/

reply
At least for servers, power supplies are highly modular. It just takes one moderately sized customer committing to buy them, and a DC module will appear.

Looking at the manual for the first server line that came to mind: you can buy a Dell PowerEdge R730 today with a first-party supported DC power supply.

reply
Surely if it makes sense for the big players, they will do it, and then the benefits will trickle down to the rest? Like how Formula 1 technology will end up in consumer vehicles.
reply
It is weird to me how far mainstream server equipment is from the state of the art. I can't imagine anything worse than an AC-AC UPS, active PDUs, and redundant AC-DC supplies in every rack unit, but that's still how people are doing it.
reply
These are gigawatt data centers. For a single one they buy equipment by the container ship. Nothing about it is niche.
reply