Pretty much the same experience (on a much smaller scale). And just open up one of their servers and compare the engineering to a Dell or HPE server. Anything that can be cheaped out is. Corrugated plastic for cooling air channels, FRU assemblies held in place with sheet metal screws, all very bargain basement.
reply
They look cheap even from the outside. They all look like they last went through a chassis redesign in 2002.
reply
Pretty much. But at one point you could buy 2 to 3 units to every equivalent Dell or HP unit unless you had enough scale to get volume discounts. At $30M I expect the price to be a lot closer though.

Then it’s a matter of how well your engineering/ops org is set up to deal with silly hardware issues and annoyances. Some orgs will burn dozens of hours on a random failure; some will burn an hour or treat the entire server as disposable due to the aforementioned cost differences. If you are not built to run on cheaply engineered gear that has lots of “quality of life” sharp edges (including actual physical sharp edges!) then you are gonna have a bad time. Silly things like rack rails sucking will bite you and run up the costs far more than anyone would expect, unless you have the experience to predict and plan for such things beforehand.

Of course you do have the risk of a totally shit batch or model of server where all that goes out the window. I got particularly burned by some of their high density blade servers, where it was a similar story to yours. Total loss in the 7 figures on that one!

Totally agreed on their BMC/firmware department. Flashbacks to hours of calls with them trying to explain the basics. My favorite story from that group is arguing with them over what a UUID is - they thought it was just a randomly generated string. That worked until one didn’t pass parsing in some obscure, deeply buried library and caused mysterious automation failures, because the automation was keyed against the chassis UUID… and that’s assuming they’d actually burned one into the firmware in the first place.
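To make the failure mode concrete: here's a minimal sketch (hypothetical code, not anyone's actual firmware or tooling) of automation keyed on a chassis UUID. A strict parser like Python's `uuid.UUID` accepts only RFC 4122-formatted values, so a firmware team treating "UUID" as "any random string" produces values that blow up deep inside a library exactly as described.

```python
import uuid

def chassis_key(raw: str) -> str:
    """Normalize a BMC-reported chassis UUID for use as an inventory key.

    Raises ValueError if the firmware shipped something that isn't
    actually a UUID - which is where the mysterious failures come from.
    """
    return str(uuid.UUID(raw))

# A properly formatted UUID parses fine:
print(chassis_key("423e4567-e89b-12d3-a456-426614174000"))

# A "just a random string" value from a sloppy firmware build does not:
try:
    chassis_key("SMCI-CHASSIS-0042")  # hypothetical bad firmware value
except ValueError:
    print("rejected: not a valid UUID")
```

The point is that the validation happens wherever the parsing library sits, which may be several layers below the code you wrote, so the error surfaces far from its cause.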

It was also always a tradeoff: deal with cheaped-out hardware engineering from Supermicro, or with a horrible quarterly-numbers-driven enterprise sales process from Dell.

reply
I haven't worked with anything at that scale, but in the little bit of time I was SuperMicro-adjacent I was always unimpressed by the "fit and finish" of the entire experience, as compared to Dell and HP. (Having said that, the entire x86 commodity server experience is shitty anyway. I had a brief time, early in my career, when I worked with DEC Alpha machines. Man, they had their shit together. Stuff was expensive as sin, but it worked together and worked well. Build quality was tank-like.)
reply
When Compaq servers were still a thing it was the same with those. You could drop them two stories and they'd probably keep running if the cable was long enough ;)

Oh and you'd get fined for damage to the pavement.

reply