upvote
> tell me what pc with an nvidia gpu can you buy with same memory and performance.

And power consumption !

The performance per watt of Apple is unmatched.

reply
This needs to be sold as the big-ticket item for low-level devs. Their chips are some of the most power-efficient chips on the market right now.

Hoping they release a blade server version somehow.

reply
Apple releasing anything enterprise or "server" related would be a pretty big pivot - let alone blades.
reply
Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.

A blade server would get cancelled just like the Mac Pro for exactly the same reasons: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...

reply
> Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.

I think you can do better than the proverbial apples-and-oranges comparison.

In terms of total system, "box on desk", Apple is likely to remain the performance per watt leader compared to random PC workstations with whatever GPUs you put inside.

reply
Then ignore me, and go ask your local datacenter why Apple Silicon isn't on any of their racks.
reply
I've owned some beefy computers in the past, and this tiny little M4 mini on my desk easily blows them all out of the water. It's crazy.
reply
Untouchable my ass. You get a machine with an SSD soldered to the motherboard, so if you run write-intensive workloads and it wears out, replacing it will cost you. Then there's no PCIe slot for a decent network card if you want to run more than one of them in unison; you're stuck with that stupid Thunderbolt 5 while InfiniBand gives ~10x the network speed. As for memory bandwidth, it's fast compared to CPUs, but any enterprise GPU dwarfs it significantly. The unified RAM is the only interesting angle.

Apple could have taken a chunk of the enterprise market with the current AI craze if they had made an upgradable, expandable server edition based on their silicon. But no, everything has to be bolted down and restricted.

reply
This has changed since Sam Altman started buying up all the chip supply, raising prices on memory, storage, and GPUs for everyone, but it used to be the case that you could build a PC that was both cheaper and faster than a Mac for LLM inference, with roughly equal performance per watt.

You would use multiple *90-series GPUs, throttled down in power. Depending on the GPU, the sweet spot is in the 225-350 W range, where for LLM workloads you lose only 5-10% of performance for a ~50% drop in power consumption.

Combined with a workstation (Xeon/EPYC) CPU with lots of PCIe lanes, you can support 6-7 such GPUs (or more, depending on available power). This will blow away the fastest Mac Studio, at a comparable performance per watt.
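To make the arithmetic behind that sweet spot concrete, here's a minimal sketch in Python. The 5-10% performance loss and ~50% power drop are the rough figures quoted above, not measured data:

```python
# Rough perf-per-watt math for a power-limited GPU, relative to
# running it at full power. Inputs are fractions (0.10 = 10%).
def perf_per_watt_gain(perf_loss: float, power_drop: float) -> float:
    """Relative performance per watt vs. the unthrottled GPU."""
    return (1.0 - perf_loss) / (1.0 - power_drop)

# e.g. a 450 W GPU limited to ~225 W (a ~50% power drop) that keeps
# 90-95% of its LLM throughput:
print(perf_per_watt_gain(perf_loss=0.10, power_drop=0.50))  # -> 1.8
print(perf_per_watt_gain(perf_loss=0.05, power_drop=0.50))  # -> 1.9
```

So under those assumptions, throttling nearly doubles performance per watt, which is what lets a multi-GPU box stay competitive with a Mac on that metric.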

Again, a lot of this has changed, since GPUs and memory are so much more expensive now.

Macs are great as a simpler all-in-one box with high memory bandwidth and middling-to-decent GPU performance, but they are (or were) absolutely not "untouchable."

reply
With 6-7 GPUs and an EPYC CPU, it will also cost 2-3x more than a Mac Studio.
reply
I think OP’s point was that it would do more than 2-3x the workload, hence the “blow it out of the water” and the “performance-per-watt” qualifier.
reply