This is almost certainly an issue of equipment failure.

Cooling in datacenters is, like everything else, both over- and under-provisioned.

It's overprovisioned in the sense that the big heat-exchange units are N+1 (or, in very critical and smaller-load facilities, 2N/3N). This is done because you need to regularly take these down for maintenance work, they have a relatively high failure rate compared to traditional DC components, and they require mechanical repairs with specialized labor and long lead times. In a bigger facility it's not uncommon for cooling to be N+3 or more as N gets bigger, because you're effectively always servicing something, or something is down waiting for a blower assembly that literally has to be made by a machinist with a lathe because that part doesn't exist anymore. Even that is still cheaper than replacing the whole unit.

The systems are also under-provisioned in the sense that if all the compute capacity in the facility suddenly went from average power draw to 100% power draw, you would overload the cooling capacity; you would commonly overload things in the electrical and other paths too. Oversubscription is just the nature of the industry.
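
To put rough numbers on that tension (these figures are entirely invented, just to show the shape of the problem):

```python
# Hypothetical facility numbers -- nothing here comes from a real site.

nameplate_kw = 10_000        # sum of max power draw of every server (kW)
avg_utilization = 0.45       # typical fleet-wide draw vs. nameplate
avg_load_kw = nameplate_kw * avg_utilization   # ~4,500 kW on a normal day

cooling_capacity_kw = 2 * avg_load_kw          # "200% of average load"

print(f"Average heat load : {avg_load_kw:,.0f} kW")
print(f"Cooling capacity  : {cooling_capacity_kw:,.0f} kW")
print(f"Worst-case load   : {nameplate_kw:,.0f} kW")
print(f"Shortfall if everything pins at 100%: "
      f"{nameplate_kw - cooling_capacity_kw:,.0f} kW")
```

Same facility, same cooling plant: wildly overprovisioned against the average day, underwater against the theoretical worst case.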

In general, neither of these things poses a real problem: compute loads don't spike to 100% of capacity, when they do spike they don't stay there for terribly long, and nobody builds facilities on a knife-edge of cooling or power capacity.

The problem comes when you have the intersection of multiple events.

You designed your cooling system to handle 200% of average load, which is great because you have lots of headroom for maintenance/outages.

The repair guy comes on Tuesday to do work on a unit and finds a bad bearing. He has to get the part from the next state over, so he leaves the unit off overnight rather than risk damaging the whole fan assembly (which would take weeks to fabricate).

The two adjacent cooling units are now working JUST A BIT harder to compensate, and one of them also had a motor that was slightly imbalanced, or a fuse that was loose and warming up a bit, and now, with an increased duty cycle, that thing which worked fine for years goes pop.

Now you're minus two units in an N+2 facility. Not really terrible; remember, you designed for 200% of average load.

That 3rd unit on the other side of the first failed unit, now under way more load, also has a fault. You're now minus 3 in an N+2 facility.

Still, not catastrophic because really you designed for 200% of average load.

The thing is, it's now 4 AM; the onsite ops guy can't fix these faults and needs to call the vendor, who doesn't wake up till 7 AM and won't be onsite till 9.

Your load starts ramping up.

Everything above happens daily in some datacenter in the USA. It probably happens once a year in every datacenter.

What happens next is the confluence of events which puts you in the news.

One of your bigger customers decides now is a great time to start a huge batch-processing job. Some fintech wants to run a huge model before market open, or some oil firm wants to do some quick analysis of a new field.

They spin up 10,000 new VMs.

Normally, this is fine, you have the spare capacity.

But, remember, you planned cooling for 200% of AVERAGE load, and these are not nodes that are busy-but-not-terribly-busy; these are nodes doing intense, optimized number-crunching work, which means they draw max power and thus expel max waste heat.

Not only has your load spiked in terms of the aggregate number of busy machines, but the waste heat per machine is also above average.

Boom, cascading failure, your cooling is now N-4.

Server fans start ramping up faster, which consumes more power.

Your cooling is now N-5.

Alarms are blaring all over the place.

Safeties on the cooling units start to trip as they exceed their load and refrigerant pressures rise.

Your cooling is now N-6.

Your cooling is now N-7.

Your cooling is now 0.
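
The whole cascade reads almost like pseudocode, so here's a toy sketch of it. Every figure is invented, and real units trip on refrigerant pressures and motor faults rather than a clean per-unit share, but the shape is the same:

```python
# Toy model of the cascade: surviving units split the heat load evenly,
# and a unit's safety trips once its share exceeds its rating.

UNIT_CAPACITY_KW = 1_500

def simulate(units_online: int, heat_load_kw: float) -> int:
    """Trip units one at a time while any unit's share exceeds its rating."""
    while units_online > 0:
        share = heat_load_kw / units_online
        if share <= UNIT_CAPACITY_KW:
            break                    # the survivors can carry the load
        print(f"  {share:,.0f} kW/unit demanded -> a safety trips")
        units_online -= 1
    return units_online

# A normal day in an N+2 facility: 8 units, ~875 kW each. No drama.
print("survivors:", simulate(units_online=8, heat_load_kw=7_000))

# Two units already down, then the big batch job lands: cascades to zero.
print("survivors:", simulate(units_online=6, heat_load_kw=10_500))
```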

reply
This is a great writeup! Thank you!!

Reminds me of when I did Noogler training back in the day; one of the talks described a cascading failure at a datacenter, starting with a cat which was too curious near a power conditioner and briefly conducted.

reply
There was a cat incident at a facility I worked at, too.

It's cold up here in the winter; sadly, the residual heat from even totally passive components like switchgear is enough to warm things up enough to attract them. 0.001% of 1 MW of power is still 10 W, which is quite warm. (I have no idea how much switchgear actually leaks, but I know it's warm even outdoors in winter.)

And, yeah, the rest of the writeup is also an amalgamation of some panic-inducing experiences in my life.

reply
I could totally get into an “Ops Thriller” genre of novels like this.
reply
I'd expect someone like AWS to just throttle machines before overloading their cooling. They probably can do that, while e.g. a data center that just rents out space can't really throttle its customers nicely.
reply
Reducing clock speeds, even if they could do that -- and I'm not sure they can, given how Nitro is designed -- would be problematic since a lot of customer workloads assume homogeneous nodes.

But they did load-shed. Perhaps not soon enough, but the reason this is publicly known is because they reduced the amount of heat being produced.

reply
> But they did load-shed

Right, exactly. I highly doubt the facility went into any kind of actual uncontrolled thermal rise. This is news because they had to take such drastic actions. I'm sure it's common that they force spot prices up (probably way up) to compensate for reduced capacity during events; I'm sure they even sometimes fake having no capacity for similar reasons. "No capacity" means "I don't want to turn on your node," not merely "I don't have any more physical servers I could turn up for you."

This is news because they powered off some non-preemptible customer loads, which actually makes me wonder if this chain of events occurred here:

spot prices rise -> new instance availability goes to 0 -> preemptible instances go dark -> normal instances go dark.

reply
It's harder and harder to throttle machines now that hardware segmentation capabilities effectively pass hardware components through "intact".

A decade ago it was trivial to just tell the hypervisor to reduce the CPU fraction of all VMs by half and leave half unallocated. Now it's much more complicated and would definitely be user-visible.
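
For flavor, the decade-ago version of that knob might look roughly like this on a Linux/KVM host, using cgroup v2's cpu.max to cap each VM's total CPU bandwidth. The cgroup path and the 50% policy are illustrative assumptions, not any cloud provider's actual mechanism:

```python
# Sketch: cap every VM's CPU via cgroup v2 cpu.max ("quota period", in
# microseconds). Real hosts would go through their management plane
# (libvirt etc.) rather than poking cgroup files directly.

from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup/machine.slice")  # where libvirt parks VMs
PERIOD_US = 100_000

def throttle_all_vms(fraction: float, vcpus_per_vm: int = 4) -> None:
    # quota is total CPU time per period for the whole cgroup, so scale
    # by vCPU count to express "fraction of the VM's full speed"
    quota_us = int(PERIOD_US * vcpus_per_vm * fraction)
    for vm_dir in CGROUP_ROOT.glob("machine-qemu*"):
        (vm_dir / "cpu.max").write_text(f"{quota_us} {PERIOD_US}\n")

# Halve every VM's CPU during a cooling event (needs root, obviously):
# throttle_all_vms(0.5)
```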

reply
The cooling units don't fail just because they get to a 100% duty cycle. That's pretty much "normal operation"; you just get... higher efficiency, because the cooling side is warmer.
reply
Of course not. They fail above 100%.

Some fail below 100% too.

reply
You can't have a duty cycle above 100%. It's impossible.
reply
Not according to POTUS math. You can have 200%, 500%, 600%, 1200%. You just have to say it enough that people start to question whether they really understand percentages, and just go with it.
reply
OK, but cooling systems don't run on POTUS math though
reply
Nor does the rest of the world
reply
This is written beautifully. It's like a much more inconsequential variant of Chernobyl.
reply
I would have thought that, with all the data centers being built, the parts for cooling systems would be standardized, with replacements available from Grainger immediately.
reply
Shouldn't there be a feedback system here preventing the scheduling of loads when cooling is degraded?
reply
With hyperscalers for sure.

But this is the physical world, shit happens.

The algorithm didn't know that fuse was loose: fine at a 50% duty cycle, but high-resistance and going to blow at 100%.
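
Conceptually the check is simple, though; the sketch below admits new load against the currently healthy cooling capacity rather than the nameplate. All names and numbers here are hypothetical:

```python
# Cooling-aware admission control: reject placements once projected heat
# would exceed what the *healthy* units can actually absorb.

def can_schedule(new_load_kw: float,
                 current_heat_kw: float,
                 healthy_units: int,
                 unit_capacity_kw: float,
                 safety_margin: float = 0.8) -> bool:
    """Admit new load only if degraded cooling still has headroom."""
    usable_cooling_kw = healthy_units * unit_capacity_kw * safety_margin
    return current_heat_kw + new_load_kw <= usable_cooling_kw

# Normal day: 8 healthy units, plenty of headroom for the batch job.
print(can_schedule(3_500, 5_000, healthy_units=8, unit_capacity_kw=1_500))
# Two units down: the same batch job gets rejected instead of tripping units.
print(can_schedule(3_500, 5_000, healthy_units=6, unit_capacity_kw=1_500))
```

What it can't model is the latent fault: the loose fuse looks like healthy capacity right up until the duty cycle rises.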

reply
One of the data center's cooling loops broke.
reply
No backups?
reply
I once worked at a company that had a wealth of backups: a backup generator, backup batteries (since the generator takes a few seconds to start), a contract for emergency fuel deliveries, a complete failover data centre full of hot standby hardware, 24/7 ops presence, UPSes on the ops PCs just in case, weekly checks that the generators start, quarterly checks by turning off the breakers to the data centre, and so on.

It wasn't until a real incident that we learned: (a) the system wasn't resilient to the utility power going on-off-on-off-on-off as each 'off' drained the batteries while the generator started, and each 'on' made the generator shut down again; (b) the ops PCs were on UPSes but their monitors weren't (C13 vs C5 power connector) and (c) the generator couldn't be refuelled while running.

Even if you've got backup systems and you test them, you can never be 100% sure.

reply
A plan that has never been executed is really just hope and wishful thinking.
reply
What happens when the backup breaks?
reply
At a certain point earth is a single point of failure.
reply
You have a backup for the backup's backup.

Turtles all the way down.

At AWS scale, even unlikely hardware events become more common, I guess.

reply
Each turtle gives them another 9. How many 9s are they down due to incidents over the past year?
reply
They're definitely at more than half a day now, which is only about two and a half nines.
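
The arithmetic, for anyone who wants to plug in their own downtime figure:

```python
# Convert cumulative downtime per year into availability and "nines".
import math

downtime_hours = 12                       # half a day
hours_per_year = 365 * 24                 # 8,760
availability = 1 - downtime_hours / hours_per_year
nines = -math.log10(1 - availability)

print(f"availability = {availability:.4%}")  # ~99.8630%
print(f"nines        = {nines:.2f}")         # ~2.86 at exactly half a day
```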
reply
They absolutely have backups, I presume they were ineffective or also down for _reasons_.
reply
The point of being "cloud native" is that you build redundancy at higher levels: instead of having extra pipes and wires, you have extra software that handles physical failures.
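
As a minimal sketch of what that means (the zone names and fetch() are hypothetical stand-ins, not any real SDK):

```python
# Redundancy in software: retry the same request against replicas in
# other zones instead of relying on any one building's pipes and wires.
import random

ZONES = ["zone-a", "zone-b", "zone-c"]

def fetch(zone: str, key: str) -> str:
    # stand-in for a real RPC; pretend zone-a's datacenter just lost cooling
    if zone == "zone-a":
        raise ConnectionError(f"{zone} unreachable")
    return f"value-of-{key}@{zone}"

def resilient_fetch(key: str) -> str:
    last_err = None
    for zone in random.sample(ZONES, k=len(ZONES)):  # spread the load
        try:
            return fetch(zone, key)
        except ConnectionError as err:
            last_err = err                 # fail over to the next zone
    raise RuntimeError("all zones failed") from last_err

print(resilient_fetch("user:42"))
```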
reply
I worked in a DC that had multiple redundant chillers on the roof and multiple redundant coolers on each floor, but the whole building's cooling failed at once when the water lines failed somehow.

They didn't say how, but apparently the pipes between each floor and the roof were not redundant. It took almost 24 hours to fix.

reply