Oxide seems to be the best and most thorough player in their space because they have chosen to own the stack from the firmware up. For buyers who care about that dimension, they're a clear leader on that basis alone; for buyers who don't, hopefully owning the stack makes their product superior to use as well.
Oxide is a really nice platform. I keep trying to manipulate things at work to justify the buy-in (I really want to play with their stuff), but they aren't going for it.
Didn't companies historically own their own compute, and only later start offloading it to so-called cloud providers? I thought that was a cost-cutting measure / point of entry / temporary solution.
Or is this targeting a scale well beyond the typical HPC cluster (a few dozen to a few hundred nodes)? I ask because, as far as I know, clusters like that are found in most engineering companies that do serious numerical work, as well as in labs and universities (which can't afford the engineers and technicians that companies can).
Also, what does it even mean to call an on-prem machine a "cloud" anymore? I thought the whole point of the cloud was that the hardware had been abstracted (and moved) away and you just got resources on demand over the network. Basically, I don't understand what they're selling if it's not what people already call clusters. And if the machine is designed, set up, and maintained by a third party, why go through the hassle of hosting it physically instead of just renting the compute?
As group-of-cats racks, usually, which is a totally different thing. Way "back in the day" you'd have an IT closet with a bunch of individually hand-managed servers running your infrastructure. And if you were selling really old-school software, your customers would all have these too, along with some badly made remote-access solution, but a lot of the time your IT Person would just call the customer's IT Person and they'd hash things out.
Way, way, way back in the day you'd have a leased mainframe or minicomputer and any concerns would be handled by the support tech.
> I thought the whole point of the cloud was that the hardware had been abstracted (and moved) away and you just got resources on demand over the network.
This idea does that, but in an appliance box that you own.
> And then if the machine is designed, set up and maintained by a third party, why even go through the hassle of hosting it physically, and not rent out the compute?
The system is designed by a third party to be trivially set up and maintained by the customer; that's where the differentiation lies.
In the moderately old-school way: pallets of computers arrive, maybe separate pallets of SAN hosts arrive, pallets of switches and routers arrive. You have to unbox, rack, wire, and provision them, configure the switches, and integrate everything. If your system gets big enough, you have to build an engineering team to deal with all kinds of nasty problems: networking, SAN/storage, and so on.
In the other, really old-school way: an opaque box with a wizard arrives, and sometimes you call the wizard.
In this model: you buy a Fancy Box, but there's no wizard. You turn on the Fancy Box, log into the Deploy-a-Container Portal, and deploy containers. Ideally (and supposedly) you never have to worry about anything else unless the Big Status Light turns red and you get a notification saying "please replace Disk 11.2 for me." So it's a totally different model.
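To make the "no wizard" part concrete, here's a minimal sketch of what that self-service flow could look like against a rack-level control plane. Everything in it (the endpoint, field names, and auth scheme) is a hypothetical stand-in, not Oxide's actual API; the point is just that the hardware behind the API is invisible to you.

    import requests

    # Hypothetical appliance API; the endpoint and schema are illustrative only.
    RACK_API = "https://rack.example.internal/v1"
    HEADERS = {"Authorization": "Bearer <token-from-the-portal>"}

    def deploy_workload(name: str, vcpus: int, memory_gib: int, image: str) -> dict:
        """Ask the rack's control plane for a new workload; it picks the hardware."""
        resp = requests.post(
            f"{RACK_API}/instances",
            headers=HEADERS,
            json={"name": name, "vcpus": vcpus, "memory_gib": memory_gib, "image": image},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # You say what you want; the appliance decides which sled actually runs it.
    print(deploy_workload("web-01", vcpus=4, memory_gib=16, image="ubuntu-22.04"))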
Historically, companies got their compute from mainframe vendors like IBM. The gear might have sat on premises in a computer room or data center, but they didn't really own it in any meaningful sense.
> Basically I don't understand what they're selling if it's not what people already call clusters.
Is it really a cluster when the whole machine is an integrated rack and workloads are automatically migrated within the rack so that any impending failure doesn't disrupt operation? That's a lot closer to a single node.
I don't know if it's true or not, but it seems like our AWS bill works out to something like the full purchase price of the underlying hardware every month.
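If you want to sanity-check that feeling, the back-of-envelope math is short. All of the numbers below are illustrative assumptions (a rough on-demand rate for a mid-size instance and a guessed price for comparable hardware), not quoted figures:

    # Back-of-envelope only; every number here is an assumption.
    hourly_rate = 0.40        # $/hr, rough mid-size on-demand instance
    hours_per_month = 730     # average hours in a month
    monthly_rent = hourly_rate * hours_per_month   # ~$292/month
    server_price = 3500.0     # assumed cost of comparable owned hardware
    print(f"~${monthly_rent:.0f}/month; hardware price reached in "
          f"{server_price / monthly_rent:.1f} months")

Whether the multiple works out to "every month" or "every year" depends entirely on your instance mix, storage, and egress, which is exactly why it's worth running this against your own bill.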
IIRC, Bryan Cantrill has compared the value proposition of an Oxide (rack?) to an IBM AS/400.
I've heard Bryan and Co. call it a "mainframe for Zoomers," but it's much closer to what Nutanix or VxRail is/was doing than it is to an AS/400.
The result is a system that can handle years of operation with no downtime, and the platform got very popular with huge retailers for exactly that reason.
Then, in later years, the platform gained the ability to run Linux and Windows VMs, so those workloads could benefit from the same reliability features.
For the business guys, they're focusing on price and sovereignty: owning your business. For technical people, they're focusing on quality: not having to deal with integration bugs.