Then it’s a matter of how well your engineering/ops org is set up to deal with silly hardware issues and annoyances. Some orgs will burn dozens of hours on a random failure; some will burn an hour or treat the entire server as disposable because of the aforementioned cost differences. If you are not built to run on cheaply engineered gear that has lots of “quality of life” sharp edges (including actual physical sharp edges!) then you are gonna have a bad time. Silly things like rack rails sucking will bite you and run up the costs far more than anyone would expect, unless you have the experience to predict and plan for such things beforehand.
Of course you do have the risk of a totally shit batch or model of server where all that goes out the window. I got particularly burned by some of their high density blade servers, where it was a similar story to yours. Total loss in the 7 figures on that one!
Totally agreed on their BMC/firmware department. Flashbacks to hours of calls with them trying to explain the basics. My favorite story from that group is arguing with them over what a UUID is: they thought it was just a randomly generated string. That worked until one didn’t pass parsing in some obscure, deeply buried library and caused mysterious automation failures, since everything was keyed against the chassis UUID… and that’s only when they’d actually burned one into firmware in the first place.
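To illustrate the failure mode (a hypothetical sketch, not their actual tooling; `chassis_key` is an invented name): a UUID has a fixed 8-4-4-4-12 hex layout, so any strict parser will reject an arbitrary "unique-looking" string.

```python
import uuid

def chassis_key(raw: str) -> uuid.UUID:
    """Key automation state against the chassis UUID reported by the BMC.

    Hypothetical helper: raises ValueError if the reported value
    isn't actually a well-formed UUID, just like the deeply buried
    library in the story did.
    """
    return uuid.UUID(raw)

# A properly formatted UUID parses fine:
chassis_key("123e4567-e89b-12d3-a456-426614174000")

# A "randomly generated string" that merely looks unique does not:
try:
    chassis_key("SMC-0042-XYZZY")
except ValueError:
    print("not a UUID")  # the mysterious automation failure
```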
It was also always a tradeoff: deal with cheaped-out hardware engineering at Supermicro, or with some horrible enterprise, quarterly-numbers-driven sales process at Dell.
Oh and you'd get fined for damage to the pavement.
You either take a gamble on something and hope it's good, or try to buy the same thing that someone else bought and reviewed.
Remember the 2018 accusations of spy chips implanted in Supermicro motherboards, which everyone denied so strongly?
It'd be easy to prove the existence of a pervasive "spy chip" problem using a camera or a microscope. Unsurprisingly, neither Bloomberg nor its quoted "experts" ever managed to do so, despite loudly banging that drum.
If some market has large margins, it means it has some inefficiencies.
I thought about this quite often while visiting a pub owned by the landlord renting out the 150 rooms above it. Each floor had a large industrial shared kitchen, shared bathrooms and toilets, and a large shared living room. If people had 1-2 guests they would stay in their room; if they had 2-10 guests they would use the shared space; if they had 4-80 guests they would take the elevator down to the pub. When one was bored with the guests, or didn't have time, they were left in the pub. Technically people had bar shifts in their rent contract (which you could buy your way out of), but there were plenty who enjoyed running the bar for free. Drinks were at cost. If you tried to tip or didn't take your change, they left it on the counter and it would sit there for a day or two. The problem of the pinball machine earnings they solved with rounds of free drinks and chips.
When asked, the owner said running the bar for profit was entirely too much work. If he wanted more money from the people living there, he could just increase the rent.
A gross margin of zero means you sell at exactly the cost to produce (COGS). A net margin of zero means you cover all your expenses, including COGS. The only really difficult, practically impossible, thing would be doing both at the same time, since any operating expense beyond COGS pushes net margin below zero. Though I could also see a case where you drive net margins down once sunk costs are paid and achieve both.
Doing so practically, or sustainably, in most circumstances would be uhh crazy… but it's not impossible. Even then, I think aiming for zero margin is a pretty credible tactic for eliminating competition, if you can out-sustain them.
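The arithmetic above can be sketched with toy numbers (purely illustrative, all figures invented):

```python
# Toy numbers, purely illustrative.
revenue = 100.0
cogs = 100.0   # cost of goods sold: selling at exactly production cost
opex = 20.0    # rent, salaries, everything else beyond COGS

gross_margin = (revenue - cogs) / revenue         # 0.0: zero gross margin
net_margin = (revenue - cogs - opex) / revenue    # -0.2: opex pushes you underwater

# To hit *net* margin zero, revenue has to cover COGS + opex,
# which forces gross margin back above zero:
revenue2 = cogs + opex                            # 120.0
gross_margin2 = (revenue2 - cogs) / revenue2      # ~0.167: positive gross margin
net_margin2 = (revenue2 - cogs - opex) / revenue2 # 0.0: zero net margin
```

So either margin can be zero on its own, but both hit zero simultaneously only if opex is zero, which is why doing both at once is practically impossible.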
TL;DR: Weird? Sure. But not impossible. And even somewhat likely if you're trying to atrophy your competition out of existence.