Nvidia significantly outperforms Macs on diffusion inference and many other workloads. It's not as simple as the current Mac chips being entirely better for this.
https://www.tomshardware.com/pc-components/gpus/chinese-work...
There's also unreleased Nvidia engineering samples of cards with doubled VRAM like this - https://www.reddit.com/r/nvidia/comments/1rczghu/update_unre...
There are some on sale via eBay right now. The memory controllers on some Nvidia GPUs support well beyond the 16-24GB they shipped with as standard, and enterprising folks in China desolder the original memory chips and fit higher-capacity ones.
To the original point, it's safe to say that highlighting a nationality with regard to trust is baseless and without merit, as it would be for any other topic (men/women from x are y, z food is better here, etc.). Real life is much more complicated and nuanced than nationalities. Some might call it FUD (fear, uncertainty and doubt), but there's always a deeper rationale at the individual level as well.
It does seem like pretty low risk in this specific case, so I agree OP's comment was a bit over the top, but I would have no way to make anything resembling even an educated guess as to how far their programs go.
The Mac will just work for models as large as 100B, and can go higher with quantized models. And power draw will be 1/5th as much as the 3090 setup.
You can certainly daisy chain several 3090's together but it doesn't work seamlessly.
It's not "daisy chaining" 3090 has NVLink.
This setup will work for 100B models as well. And yes, the Mac will draw less power, but the Nvidia machine will be many times faster. So depending on your specific Mac and your specific Nvidia setup, the performance per watt will be in the same ballpark. And higher absolute performance is certainly a nice perk.
> You can certainly daisy chain several 3090's together but it doesn't work seamlessly.
Citation needed; there's no "daisy chaining" in the setup I describe, and low-level libraries like PyTorch as well as higher-level tools like Ollama all support multiple GPUs seamlessly.
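(For illustration, the multi-GPU path really is close to a one-liner in the common stacks. A minimal sketch, assuming the Hugging Face transformers + accelerate route; the model id is a placeholder, not a recommendation:)

    # Sketch: shard one large model across all visible GPUs via accelerate's device_map.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-70b-model"  # placeholder
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # accelerate spreads layers over GPUs 0..N automatically
        torch_dtype="auto",
    )
    inputs = tok("Hello", return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))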
Regardless - there's a difference between training and inference. And PyTorch doesn't magically make 5 GPUs behave like 1 GPU.
The cheapest Apple desktop with 128GB of memory shows up as costing $3499 for me, which isn't very "enthusiast-compatible", it's about 3x the minimum salary in my country!
$3499 is definitely enthusiast compatible. That's beefy gaming PC tier, which is possibly the canonical example of an enthusiast market.
This isn't tens of thousands of dollars for top tier Nvidia chips we're talking about.
In the most literal meaning, absolutely, "Enthusiast" just means a person who likes something, is excited about something.
When it comes to markets and products though, you'll typically see the word "Enthusiast" used for a mid-tier, something like: Consumer --> Enthusiast --> Professional (with words like "prosumer" sometimes slotted in as well).
In that context, which is typically the one people will use when discussing product pricing and placement, "Enthusiast" is somebody who yes enjoys something, but does it sufficiently to be discerning and capable of purchasing mid-tier or above hardware.
So while a consumer photographer may use their phone or a compact or all-in-one camera, an enthusiast photographer will probably spend $3000 - $5000 on camera gear. Equivalently, there are myriad gamers out there (on phones, consoles, GeForce Now, whatever); an enthusiast gamer is assumed to have a dedicated gaming computer, probably a tower, with a dedicated video card (likely a 5070 Ti or above), probably 32GB+ RAM, a couple of SSDs which are not entry level, etc.
Again, this is not to say a person with a limited budget is "not a real enthusiast"; no gatekeeping is intended here. This is simply, if it helps, what the word means when it comes to market segmentation and product pricing :)
If you're an actual pro, you need your stuff to work properly, efficiently, reliably, when it's called for. When you're a hobbyist, it's sometimes almost the goal to waste money and time on stuff that really doesn't matter beyond your interest in it; working on the thing is the point, not the value it generates. Pros should spend money on good tools and research and knowledge, but it usually needs to be an investment, sometimes crossing over with hobbyist opinions.
A friend of mine who's a computer hobbyist and retail IT tech, making far far less than I do, spends comically more than me on hardware to play basically one game. He keeps up to date with the latest processors and all that stuff, he knows hardware in terms of gaming. I meanwhile—despite having more money available—have a fairly budget gaming PC that I did build myself, but contains entirely old/used components, some of which he just needed to get rid of and gave me for free, and I upgrade my main mac every 5 years or something. I only upgrade when hardware is really getting in my way.
It's interesting that you chose photographers as the example here. In many cases that I've seen, enthusiast photographers spend much more than professional photographers on their gear, because the professionals make their money with their gear and therefore need to justify it, while the enthusiasts are often tech people, successful doctors, etc., who spend lots and lots of money on their hobbies...
In any case, your point stands, that "enthusiast" computer users would easily spend $3-4K or more on gear to play games, train models, etc.
It's out of reach for lots of people, even in developed countries. But it's easily within reach for loads of people that care more about computing than other stuff.
(Source: Wikipedia via Claude Opus)
Golf equipment, mountaineering equipment, skiing and snowboarding lift tickets and gear, a single excessive graphics card that's only used to marginally increase frame rates, or basically a single extra feature on a car, are all things that add up quite quickly. Some are clearly more superfluous than others and cater to whales, while some are just expensive by nature and aren't attempting to be anything else.
It is easy to confirm this, just look at the sales number of these $3500 devices. It is definitely not an enthusiast price point, even in the US.
I know plenty of people who don't make a lot of money (say top 25% or so) who have a boat or RV that costs more than a $3500 computer, and who balk at the thought of spending that much on a computer. It just depends on where your interests are.
There are tens of millions of top 10% income adults in America. So something can be both unaffordable to most people, and also easily accessible to very many people.
There are a lot of people who could easily choose to spend $3,500 on a computer.
Some people succumb to lifestyle creep or choose it deliberately. Others choose to live below their means when their income grows. The latter have a lot more money to spend on extras, or to save if that's what they prefer.
Learned something new today at least, so that's cool :)
You can absolutely do some ML inference on it, but not much in terms of LLMs.
That said, a higher-end gaming setup is going to cost that much, and it's absolutely in the enthusiast realm. "Enthusiast" doesn't mean "compatible with minimum wage".
We are so freaking spoiled by the cheap cost of compute now.
Enthusiast compute hardware doesn't cater to the people on the minimum salary in any country, let alone developing nations. When Ferrari makes a car they don't ask themselves if people on minimum salary will be able to afford them.
I'm in one of the two poorest EU member states, and Apple and Microsoft (Xbox) don't even bother to have a direct-to-customer store presence here; you buy them from third-party retailers.
Why? Probably because their metrics show people here are too poor to afford their products en masse, so it's not worth operating a dedicated sales entity. Even though plenty of people do own top-of-the-line MacBooks here, it's just the wealthy enthusiast niche, and it's still a niche at the volumes they (wish to) operate at. Why do you think Apple launched the Mac Neo?
Enthusiast in this context more or less means you are excited enough about something to get a level above what normal people would get, just below professional pricing. An enthusiast camera body can be 2000 euros.
I would say an enthusiast computer is 2-4k.
It really depends what you mean by minimum salary (yearly?), because paying 3 months of salary for a computer like that isn't far fetched. You're not using this to generate recipes for cookies. An enthusiast-level car is expensive as well.
Why? Enthusiasts are by definition people for whom value for money is not the main driver, but rather top performance and cutting-edge novelty at any cost. Affording enthusiast computer hardware is not a human right, just as affording a Lamborghini or a McMansion isn't.
But you don't need to buy a Lamborghini to do your grocery shopping or drive your kids to school, just as you don't need an Nvidia 5090 or MacBook Pro Max to do your taxes or your school work.
So the definition is fine as it is. It's hardware for people with very deep pockets, often called whales.
I never liked Apple hardware, but they are now untouchable since their shift to their own silicon for home hardware.
And power consumption!
The performance per watt of Apple is unmatched.
Hoping they release a blade server version somehow.
A blade server would get cancelled just like the Mac Pro for exactly the same reasons: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...
I think you can do better than the proverbial Apples and Oranges comparison.
In terms of total system, "box on desk", Apple is likely to remain the performance per watt leader compared to random PC workstations with whatever GPUs you put inside.
Apple could have taken a chunk of the enterprise market now, with the AI craze, if they had made an upgradable and expandable server edition based on their silicon. But no, everything has to be bolted down and restricted.
You would use multiple *90-series GPUs, throttled down in terms of power. Depending on the GPU, the sweet spot is between 225-350W, where for LLM workloads you only lose 5-10% of performance for a ~50% drop in power consumption.
Combined with a workstation (Xeon/Epyc) CPU with lots of PCIe, you can support 6-7 such GPUs (or more, depending on available power). This will blow away the fastest Mac studio, at a comparable performance per watt.
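(For reference, the power cap is one command per card. A rough sketch, assuming nvidia-smi is available and the script runs with root privileges; 250 W is a purely illustrative value, the right number depends on the card:)

    # Cap every detected Nvidia GPU at an illustrative 250 W (requires root).
    import subprocess

    POWER_LIMIT_W = 250  # per-card sweet spot is typically 225-350 W for *90-class GPUs
    gpus = subprocess.check_output(["nvidia-smi", "--list-gpus"], text=True).strip().splitlines()
    for i in range(len(gpus)):
        subprocess.run(["nvidia-smi", "-i", str(i), "-pl", str(POWER_LIMIT_W)], check=True)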
Again, a lot of this has changed, since GPUs and memory are so much more expensive now.
Macs are great for a simpler all in one box with high memory bandwidth and middling-to-decent GPU performance, but they are (or were) absolutely not "untouchable."
There are also SO-DIMM options.
The Mac Studio almost certainly uses at least half the power
(educated guess, I'm too lazy to go look at all the spec sheets and run the numbers)
Come on mate ... I think you and I both know I was talking about the complete system here, not discrete components.
I'm pretty sure your total package (Dell Pro Max + GB10) will pull more from the wall.
The Dell Pro Max PSU + enclosure is only rated for 240w, it literally can't pull more than 250w from the wall without shorting itself.
280w according to the spec sheet I just looked at.
Also just look at the graphs on Geerling's website. The Mac Studio eats the Dell for breakfast in a number of the tests: https://www.jeffgeerling.com/blog/2025/dells-version-dgx-spa...
https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-stu...
Also why Swift nowadays has to have good Linux support, if app developers want to share code with the server.
There's also: "Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning."
The market segments that can afford to ignore laptops and only target permanently-installed desktops are mostly those niches where the desktop is installed alongside some other piece of equipment that is much more expensive.
If you want to get usable speeds from very large models that haven't been quantized to death on local machines, RDMA over Thunderbolt enables that use case.
Consumer PC GPUs don't have enough RAM, enterprise GPUs that can handle the load very well are obscenely expensive, Strix Halo tops out at 128 Gigs of RAM and is limited on Thunderbolt ports.
It'll look like a big increase relative to the zero-RAM baseline, but it's still complete garbage compared to fitting the model in RAM. Even if you fit most of it in RAM you're still probably an order of magnitude slower than fitting all of it in RAM, with most of your time spent waiting on your SSD.
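(Rough, purely illustrative numbers: per generated token a dense model has to read essentially all of its weights, so tokens/s is roughly bandwidth divided by model size:)

    # Illustrative decode-speed ceiling: tokens/s ~= bandwidth / bytes of weights read per token.
    model_gb = 120   # assumed model footprint in GB after quantization
    ram_bw = 400     # GB/s, assumed unified-memory / multi-channel bandwidth
    ssd_bw = 7       # GB/s, assumed fast PCIe 4.0 NVMe read speed

    for name, bw in [("all weights in RAM", ram_bw), ("streaming from SSD", ssd_bw)]:
        print(f"{name}: ~{bw / model_gb:.2f} tokens/s ceiling")
    # all weights in RAM: ~3.33 tokens/s, streaming from SSD: ~0.06 tokens/s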
I have a feeling that Mac fans obsess more about being able to run large models at unusably slow speeds instead of actually using said models for anything.
For LLMs. For inference with other kinds of models, where the amount of compute needed relative to the amount of data transfer is higher, Apple is less ideal, and systems with lower memory bandwidth but more FLOPS shine. And if things like Google's TurboQuant work out for efficient KV-cache quantization, Apple could lose a lot of that edge for LLM inference too, since that would reduce the amount of data shuffling relative to compute for LLM inference.
Anything that increases the compute needed to fully utilize RAM bandwidth in optimal LLM serving weakens Apple's advantage there.
https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...
I don't think they expect anyone to actually buy these.
Most companies looking to buy these for developers would ideally have multiple people share one machine and that sort of an arrangement works much more naturally with a managed cloud machine instead of the tower format presented here.
Confirming my hypothesis, this category of devices is more or less absent in the used market. The only DGX workstation on eBay has a GPU from 2017, several generations ago.
If you try to find the pricing of the GB300 towers even on the manufacturer sites, you'll see that it's not listed for any of the six or so models.
The MSI workstation is the one with some pricing floating around. Some distributors seem to be quoting USD 96K, with a wait time of 4 to 6 weeks [0]. Others say 90K and also out of stock [1].
--
0: https://www.cdw.com/product/msi-nvidia-gb300-wkstn-72c-grace-cpu/9087313?pfm=srh
1: https://www.centralcomputer.com/msi-ct60-s8060-nvidia-dgx-station-cpu-memory-up-to-496gb-lpddr5x-nvidia-blackwell-ultra-gpu-1x-10-gbe-2x-400-gbe.html
Older Xeon-based workstations easily reach that number.
"Purchasing limit reached. To complete your order and provide you with the best customer experience, please call 1-877-888-8235"
Since the user here is not paying for it directly, the manufacturer does not have any incentive to list prices anywhere.
I really don't get why anybody would want that. What's the use case there?
If someone doesn't care about privacy, they can use for-profit services because they are basically losing money, trying to corner the market.
If they care about privacy, they can rent cloud instances to set up, run, and tear down, and it will be both cheaper and faster (if they can afford it), with no upfront cost per project. This can be done with a lot of scaffolding (e.g. Mistral, HuggingFace) or without (e.g. AWS/Azure/Google Cloud, etc.). The point being that you do NOT purchase the GPU or even dedicated hardware (e.g. Google TPUs), but rather rent what you actually need, and when the next gen is out you're not stuck with "old" gen.
So... what use case is left? Somebody who is both technical, very privacy conscious AND wants to do it offline, despite having 5G or satellite connectivity pretty much anywhere?
I honestly don't get who that's for (and I did try dozens of local models, so I'm actually curious).
PS: FWIW https://pricepertoken.com might help but not sure it shows the infrastructure each rely on to compare. If you have a better link please share back.
I'm a somewhat tech-heavy guy (I compile my own kernel, use online hosting, etc.).
Reading your comment doesn't sound appealing at all. I do almost no cloud stuff. I don't know which provider to choose. I have to compare costs. How can I trust they won't peek at my data (no, a Privacy Policy is not enough - I'd need encryption with only me having the key). What do I do if they suddenly jack up the rates or go out of business? I suddenly need a backup strategy as well. And repeat the whole painful loop.
I'll lose a lot more time figuring this out than with a Mac Studio. I'll probably lose money too. I'll rent from one provider, get stuck, and having a busy life, sit on it a month or two before I find a fix (paying money for nothing). At least if I use the Mac Studio as my primary machine, I don't have to worry about money going to waste because I'm actually utilizing it.
And chances are, a lot of the data I'll use it with (e.g. mail) is sitting on the same machine anyway. Getting something on the cloud to work with it is yet-another-pain.
There is basically no lock-in; you don't even "move" your image. Your data is basically some "context" or a history of prompts, which probably fits on a floppy disk (not even being sarcastic). So if you know the basics of containerization (Docker, Podman, etc.), which most likely the cloud provider even takes care of, it takes literally minutes to switch from one to another. It's really not more complex than setting up a PHP server; the only difference is the hardware you run on, and that's basically a dropdown on a web interface (if you don't want to have scripts for that too), then selecting the right image (basically NVIDIA support).
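(To illustrate the "minutes to switch" point: when providers expose an OpenAI-compatible endpoint, the migration is basically one URL. A sketch with placeholder endpoint and model names, assuming the openai Python client:)

    # Sketch: switching providers is essentially changing one base URL and an API key.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://my-endpoint.example.com/v1",  # placeholder; swap this to change providers
        api_key="...",
    )
    resp = client.chat.completions.create(
        model="my-deployed-model",  # placeholder
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)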
Consequently, even if that were to happen (which I have NEVER seen; at worst it's like a 15% increase after years), it would actually not matter to you. It's also very unlikely to happen based on the investment poured into the "industry". Basically everybody is trying to get "you" as a customer to rely on their stack.
... but OK, let's imagine that's not appealing to you. Have you not done the comparison of what the money for a Mac Studio (or whatever hardware) could actually buy otherwise?
That's a bit more appealing. How much would it cost per month to have it continually online?
I don't want to make an ad here but I'm going to point to HuggingFace https://endpoints.huggingface.co (and to avoid singling them out just https://replicate.com/pricing too but I don't know them well) as an example with pricing.
The "beauty" IMHO of such solutions is that again you pay for what you want. If you want to use the endpoint only for 5min to test that the model and its API fits your need? OK. You want the whole month? Sure. You want 1 user, namely you? Fine, not a lot of power, you want your whole organization to use that endpoint? Scale up.
I'm going to give a very rough approximation, because honestly I'm not really into this, so someone please adjust with sources:
Apple Mac Studio M3 Ultra 96GB = $4K
NVIDIA A100 with 80GB: ~10x perf compared to an M3 Pro (obviously depends on the models)
So on Replicate today one can get an A100 for ~$5/hr, which means the Mac's price buys about a month of rental. But that's for 10x the speed, and electricity is included. So very VERY approximately, if you use a Mac Studio for AI non-stop (day and night) for 10 months, then it's arguably worth it.
If you use it less, say 2hrs/day only for inference, then I imagine it takes a few years to reach the equivalent, and by that time I bet Replicate or HuggingFace will be renting a much faster setup for much cheaper, simply because that's what they have ALL done for the last few years.
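(Same rough assumptions as above, in break-even form:)

    # Very rough break-even sketch; every figure here is an assumption from the text above.
    mac_price = 4000    # USD, Mac Studio
    rental_rate = 5     # USD/hr, rented A100
    speedup = 10        # assumed A100 vs Mac speed factor

    rental_hours = mac_price / rental_rate                  # ~800 h, about a month non-stop
    equivalent_mac_months = rental_hours * speedup / 24 / 30
    print(rental_hours, round(equivalent_mac_months, 1))    # ~800 h, ~11 months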
For my own use, I'm just looking at absolute price (and convenience).
I haven't explored open weights models, so I have no idea which I'd want. It would be great to get a "frontier" model like Minimax-M2.5, but at $10/hr, it's not worth it - let alone $40/hr for GLM-5. I'd have to explore use cases for cheaper models. Likely for things related to reading emails, I can get by with a much cheaper model.
If I set one of these up, how easy is it for me to launch one of these (on the command line on my home PC) and then shut it down? Right now, when I write any app (or use OpenCode), it's frictionless. My worry is that turning it on will be a hassle, and even worse, that I'll forget to turn it off and suddenly get a big pointless bill.
If there are any guides out there on how people manage all this, it would be much appreciated.
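What I'm hoping is that it boils down to something like this (a sketch, assuming a plain cloud VM, e.g. AWS via boto3; the instance id is made up), where forgetting the shutdown is hard:

    # Sketch: start a GPU instance, do the work, and always stop it again.
    # Instance id and region are placeholders; the finally block is what prevents surprise bills.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # placeholder

    ec2.start_instances(InstanceIds=[instance_id])
    try:
        pass  # ...ssh in, run inference, sync results...
    finally:
        ec2.stop_instances(InstanceIds=[instance_id])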
Well, it's not exactly a guide, and honestly it's quite outdated (I stopped keeping track because I just wasn't getting the quality of results I hoped for versus huge trade-offs that weren't worth it for me), but I listed plenty of models and software solutions for self-hosting, at home or in the cloud, at https://fabien.benetou.fr/Content/SelfHostingArtificialIntel...
Feel free to check it out, and if there is something I can clarify, happy to try.
That being said, if I were in such a situation, and if somehow the guarantees weren't enough, then I'd definitely expect to have the budget to build my own data center with GB300s or TPUs. I can't imagine running that on a Mac Studio.
Similarly, if your use case depends on a whole lot of fast storage (e.g., 4x NVMe to PCIe x16 bifurcation boards), well, that's also now something Apple just doesn't support. They didn't figure out something else. They didn't do super innovative engineering for it. They just walked away from those markets completely, which they're allowed to do of course. It's just not exactly inspiring or "deserves credit" worthy.
When they introduced the cheese grater Mac Pro the new high end GPUs were a showcase feature of it. Complete with the bespoke "Duo" variants and the special power connector doohickey (MPX iirc?). So I'd consider that an attempt to re-enter that market at least.
Apple removing/adding something to their product line means nothing; for all we know, they have a new version ready to be launched next month, or whatever. Unless you work at Apple and/or have internal knowledge, this is all just guessing, not a "testament" to anything.
“Apple has also confirmed to 9to5Mac that it has no plans to offer future Mac Pro hardware.”
Nonetheless, what Apple says or doesn't say doesn't really matter. If their plan for a new Mac Pro is secret, they'll answer exactly that when someone asks them about it. It doesn't mean we won't see new Mac Pro hardware this summer. There are plenty of cases in the past where they played coy and then suddenly, "whoops, we just had to keep it a secret, never mind".
But yeah, right now Apple actually has price <-> performance captured pretty well if you're buying a new computer just in general.
Maybe you spend $1000 more for a PC of comparable performance, but tomorrow when you need more power, you can change or add another GPU, add more RAM, add another SSD. A workstation you can keep upgrading for years, paying a small cost for each bump in performance.
An Apple machine is basically a throwaway: no component inside can be upgraded. You need more RAM? Throw it away and buy a new one. You want a new GPU technology? You have to change the whole thing. And if something inside breaks? You of course throw away the whole computer, since everything is soldered onto the mainboard.
There is then the software issue: with Apple devices you are forced to use macOS, which kind of sucks, especially for server usage. True, nowadays you can install Linux on it, but the GPU is not that well supported, so you lose all the benefits. You're stuck with an OS that sucks, while in the PC market you have plenty of OS choices: Windows, a million Linux distributions, etc. If I need a workstation to train LLMs, why do I care about an OS with a GUI? It's only a waste of resources; I just need a thing that runs Linux that I can SSH into. Also, I don't get the benefit of using containers, Docker, etc.
Macs suck even on the hardware side from a server point of view: for example, it's not possible to rack mount them, it's not possible to have redundant PSUs, they don't offer remote KVM capability, etc.
It isn't 2005 anymore, where RAM/CPU/etc. progress made upgrading every 6 months worthwhile. It's closer to 6 years to really notice.
That's only the case for CPU/MB/RAM, because the interfaces are tightly coupled (you want to upgrade your CPU, but the new one uses an AM5 socket so you need to upgrade the motherboard, which only works with DDR5 so you need to upgrade your RAM). For other parts, a "Ship of Theseus" approach is often worth it: you don't need to replace your 2TB NVMe M.2 storage just because you wanted a faster CPU, you can keep the same GPU since it's all PCIe, and the SATA DVD drive you've carried over since the early 2000s still works the same.
I expect many users would be happy with the above final state through 2030, when the AM6 socket releases. That would be 13 years of service for that original motherboard, memory, case and ancillary components. This is an extreme case, you have to time the initial purchase perfectly, but it is possible.
Your point kind of disproves your point.
Or sell it, which is much easier to do with Macs because they're known quantities and not "Acer Onyx X321 Q-series Ultra".
> There is then the software issue, with Apple devices you are forced to use macOS that kind of sucks, especially for a server usage
That's a fair point. Apple would get a ton of goodwill if they released enough documentation to let Asahi keep up with new hardware. I can't imagine it would harm their ecosystem; the people who would actually run Linux are either not using Macs at all, or users like me who treat them as Unix workstations and ignore their lock-in attempts.
On the upgrade path: I don't think upgrades are truly a thing these days. Aside from storage, for most components, by the time you get to whatever your next cycle is, it's usually best/easiest to refresh the whole system unless you underbought the first time around.
You can just install Linux?
Windows is 10x more enshittified than OSX
> An Apple machine is basically throw away: no component inside can be upgraded, you need more RAM? Throw it away and buy a new one.
Tell that to all the people rocking 5-10 year old MacBooks that still run great.
I can live without the RAM for a couple of months to get a good price for it, especially since Apple don’t sell that model (with the RAM) any more.
Wish you a speedy recovery for your back!
There are none currently on eBay.co.uk, so I'm going to try there. I'll also try some of the reddit UK-specific groups.
As far as not being scammed - it's a really high value one-off sale, so it'll either be local pickup (and cash / bank-transfer at the time, which happens in seconds in the UK) or escrow.com (for non-eBay) with the buyer paying all the fees etc.
I'd prefer local pickup because then I have the money, the buyer can see it working, verify everything to their satisfaction etc. etc.
> Wish you a speedy recovery for your back!
Thank you :) It is a little better today. Sitting down is now tolerable for short periods... :)
I do know that Escrow.com is one of the most reputable escrow platforms. On a more personal note, I would love to know an escrow service where I can just sell the spare domains I have (I got some .com/.net domains for $1 back during a provider's deal). Is there any particular escrow service that might not charge a lot, so I can get a few dollars from selling them? Some of those domains aren't being used by me.
> Thank you :) It is a little better today. Sitting down is now tolerable for short periods... :)
I am wishing you speedy recovery as well. A cowboy gotta have a strong back :-)
I sold a domain via escrow.com a long time ago now (20 years or so) but the buyer paid fees, so I don’t know what they charge for that. You could try the calculator they have though (https://www.escrow.com/fee-calculator)
And thanks for the good wishes :)
https://appleinsider.com/articles/26/03/06/forget-512gb-ram-...
You may want to hold on to your M3 Ultra! There's no guarantee there will be an M5 Ultra with 512GB of RAM.
But it feels really good to have more ram than you can think of a use for.
I have a faint memory of an interview ages ago, with Knuth I think, where he mentioned as an aside that he was using a workstation with 3.2GB of storage and 4GB of RAM :)
I was young and dumb and never would have guessed I'd own a computer with 32GB of RAM that felt pitifully underpowered for today's tasks.
I was constantly constrained by my computers back then. Trying to navigate complex scenes or model very detailed meshes could get soooo slow. But man I loved it so much.
Probably because it ran Maya. Which was a SGI product back then, not an Autodesk product yet.
Your point would have been largely correct in the first half of 2025.
Now, you're going to have a much better experience with a couple of Nvidia GPUs.
This is for two reasons: reasoning models require a pretty high number of tokens per second to do anything useful, and we are seeing small quantized and distilled reasoning models working almost as well as the ones needing terabytes of memory.
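(To put numbers on the first point; token counts and speeds below are purely illustrative:)

    # A reasoning model may emit thousands of "thinking" tokens before the actual answer.
    reasoning_tokens = 5000
    for tps in (10, 30, 100):  # assumed decode speeds in tokens/s
        print(f"{tps} tok/s -> ~{reasoning_tokens / tps / 60:.1f} min before useful output")
    # 10 tok/s -> ~8.3 min, 30 -> ~2.8 min, 100 -> ~0.8 min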
At best we probably get a chassis to awkwardly daisy chain a bunch of Mac Studios together
Seems odd that a computer from a decade ago could have more than 1TB of additional RAM compared to what we can buy today from Apple.
The market for this use case is tiny
> I bet there’s gonna be a banger of a Mac Studio announced in June. Apple really stumbled into making the perfect hardware for home inference machines.
This I'm not actually as sure about. The current Studio offerings have done away with the 512GB memory option. I understand the RAM situation, but they didn't change pricing they just discontinued it. So I'm curious to see what the next Studio is like. I'd almost love to see a Studio with even one PCI slot, make it a bit taller, have a slide out cover...
That's a pretty good deal I would think
https://frame.work/de/de/products/desktop-diy-amd-aimax300/c...
So even if the model fits in the memory buffer on the Ryzen Max, you're still going to hit something like half the tokens/second just because the GPU will be sitting around waiting for data.
Personally, I'd rather have the Framework machine, but if running local LLMs is your main goal, the offerings from Apple are very compelling, even when you adjust for the higher price on the Apple machine.
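(The back-of-the-envelope behind that: decode speed is roughly capped at memory bandwidth divided by the bytes of weights read per token. Bandwidth figures are approximate published numbers; the model size is an assumption:)

    # Illustrative decode ceiling per system: bandwidth / model size.
    model_gb = 40  # e.g. a ~70B-parameter model at ~4-bit quantization (assumption)
    systems = {
        "Strix Halo (~256 GB/s)": 256,
        "Apple Max-class (~546 GB/s)": 546,
        "Apple Ultra-class (~800 GB/s)": 800,
    }
    for name, bw in systems.items():
        print(f"{name}: ~{bw / model_gb:.0f} tokens/s upper bound")
    # ~6, ~14, ~20 tokens/s respectively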
A cluster of 4 of Apple's M3 Ultra Mac Studios, by comparison, will consume nearly 1100W under load.
Apple are winning a small battle for a market that they aren’t very good in. If you compare the performance of a 3090 and above vs any Apple hardware you would be insane to go with the Apple hardware.
When I hear someone say this it’s akin to hearing someone say Macs are good for gaming. It’s such a whiplash from what I know to be reality.
Or another jarring statement - Sam Altman saying Mario has an amazing story in that interview with Elon Musk. Mario has basically the minimum possible story to get you to move the analogue sticks. Few games have less story than Mario. Yet Sam called it amazing.
It’s a statement from someone who just doesn’t even understand the first thing about what they are talking about.
Sorry for the mini rant. I just keep hearing this apple thing over and over and it’s nonsense.
If the OpenAI domino falls, and I'd be happy to admit if I'm wrong, we're going to see a near catastrophic drop in prices for RAM and demand by the hyperscalers to well... scale. That massive drop will be completely and utterly OpenAI's fault for attempting to bite off more than it can chew. In order to shore up demand, we'll see NVidia and AMD start selling directly to consumers. We, developers, are consumers and drive demand at the enterprises we work for based on what keeps us both engaged and productive... the end result being: the ol' profit flywheel spinning.
Both NVidia and AMD are capable of building GPUs that absolutely wreck Apple's best. A huge reason for this is Apple needs unified memory to keep their money maker (laptops) profitable and performant; and while, it helps their profitability it also forces them into less performant solutions. If NVidia dropped a 128GB GPU with GDDR7 at $4k-- absolutely no one would be looking for a Mac for inference. My 5090 is unbelievably fast at inference even if it can't load gigantic models, and quite frankly the 6-bit quantized versions of Qwen 3.5 are fantastic, but if it could load larger open weight models I wouldn't even bother checking Apple's pricing page.
tldr; competition is as stiff as it is vicious. Apple's "lead" in inference is only because NVidia and AMD are raking in cash selling to hyperscalers. If that cash cow goes tits up, there's no reason to assume NVidia and AMD won't definitively pull the rug out from under Apple.
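(Rough sizing math on why a single 32 GB card tops out where it does; parameter counts below are just examples:)

    # Rule of thumb: weight memory ~= parameters * bits-per-weight / 8, plus KV-cache overhead.
    def weight_gb(params_billion, bits):
        return params_billion * bits / 8  # billions of params * bytes per param = GB

    for params_billion in (32, 70, 120, 235):  # example model sizes
        print(f"{params_billion}B @ 6-bit: ~{weight_gb(params_billion, 6):.0f} GB of weights")
    # 32B just about fits a 32 GB card; 70B and up needs multiple GPUs or unified memory.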
None of the things people care about really get much out of "unified memory". GPUs need a lot of memory bandwidth, but CPUs generally don't and it's rare to find something which is memory bandwidth bound on a CPU that doesn't run better on a GPU to begin with. Not having to copy data between the CPU and GPU is nice on paper but again there isn't much in the way of workloads where that was a significant bottleneck.
The "weird" thing Apple is doing is using normal DDR5 with a wider-than-normal memory bus to feed their GPUs instead of using GDDR or HBM. The disadvantage of this is that it has less memory bandwidth than GDDR for the same width of the memory bus. The advantage is that normal RAM costs less than GDDR. Combined with the discrete GPU market using "amount of VRAM" as the big feature for market segmentation, a Mac with >32GB of "VRAM" ended up being interesting even if it only had half as much memory bandwidth, because it still had more than a typical PC iGPU.
The sad part is that DDR5 is the thing that doesn't need to be soldered, unlike GDDR. But then Apple solders it anyway.
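(Peak bandwidth is just bus width times transfer rate; rough numbers from published specs:)

    # Peak bandwidth ~= bus width (bits) / 8 * transfer rate (GT/s). Approximate figures.
    configs = {
        "typical PC, 128-bit DDR5-5600": (128, 5.6),
        "Apple Max-class, 512-bit LPDDR5X-8533": (512, 8.533),
        "RTX 4090, 384-bit GDDR6X @ 21 Gbps": (384, 21.0),
    }
    for name, (bits, gts) in configs.items():
        print(f"{name}: ~{bits / 8 * gts:.0f} GB/s")
    # ~90, ~546, ~1008 GB/s respectively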
the bottleneck in lots of database workloads is memory bandwidth. for example, hash join performance with a build side table that doesn't fit in L2 cache. if you analyze this workload with perf, assuming you have a well written hash join implementation, you will see something like 0.1 instructions per cycle, and the memory bandwidth will be completely maxed out.
similarly, while there have been some attempts at GPU accelerated databases, they have mostly failed exactly because the cost of moving data from the CPU to the GPU is too high to be worth it.
i wish aws and the other cloud providers would offer arm servers with apple m-series levels of memory bandwidth per core, it would be a game changer for analytical databases. i also wish they would offer local NVMe drives with reasonable bandwidth - the current offerings are terrible (https://databasearchitects.blogspot.com/2024/02/ssds-have-be...)
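(a quick way to see it for yourself: random, hash-probe-style access into an array much larger than cache runs far slower than a sequential scan of the same data. rough numpy sketch, sizes arbitrary:)

    # Illustrative micro-benchmark: sequential scan vs random ("hash probe"-like) access.
    import time
    import numpy as np

    n = 200_000_000                                  # ~1.6 GB of int64, far larger than L2/L3
    table = np.arange(n, dtype=np.int64)
    idx = np.random.randint(0, n, size=20_000_000)   # random probe positions

    t = time.time(); table.sum();       print("sequential scan:", round(time.time() - t, 2), "s")
    t = time.time(); table[idx].sum();  print("random gather:  ", round(time.time() - t, 2), "s")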
It can be, depending on the operation and the system, but database workloads also tend to run on servers that have significantly more memory bandwidth.
> i wish aws and the other cloud providers would offer arm servers with apple m-series levels of memory bandwidth per core, it would be a game changer for analytical databases.
There are x64 systems with that. Socket SP5 (Epyc) has ~600GB/s per socket and allows two-socket systems, Intel has systems with up to 8 sockets. Apple Silicon maxes out at ~800GB/s (M3 Ultra) with 28-32 cores (20-24 P-cores) and one "socket". If you drop a pair of 8-core CPUs in a dual socket x64 system you would have ~1200GB/s and 16 cores (if you're trying to maximize memory bandwidth per core).
The "problem" is that system would take up the same amount of rack space as the same system configured with 128-core CPUs or similar, so most of the cloud providers will use the higher core count systems for virtual servers, and then they have the same memory bandwidth per socket and correspondingly less per core. You could probably find one that offers the thing you want if you look around (maybe Hetzner dedicated servers?) but you can expect it to be more expensive per core for the same reason.
Apple needs to solder it because they are attaching it directly to the SoC to minimize lead length, and that is part of how they are able to get that bandwidth.
No it isn't:
https://www.newegg.com/crucial-32gb-ddr5-7500-cas-latency-cl...
CAMM2 is new and most of the PC companies aren't using it yet but it's exactly the sort of thing Apple used to be an early adopter of when they wanted to be.
Isn't that also because that's the world we have optimized workloads for?
If the common hardware had unified memory, software would have exploited that I imagine. Hardware and software is in a co-evolutionary loop.
Part of the problem is that there is actually a reason for the distinction, because GPUs need faster memory but faster memory is more expensive, so then it makes sense to have e.g. 8GB of GDDR for the GPU and 32GB of DDR for the CPU, because that costs way less than 40GB of GDDR. So there is an incentive for many systems to exist that do it that way, and therefore a disincentive to write anything that assumes copying between them is free because it would run like trash on too large a proportion of systems even if some large plurality of them had unified memory.
A sensible way of doing this is to use a cache hierarchy. You put e.g. 8GB of expensive GDDR/HBM on the APU package (which can still be upgraded by replacing the APU) and then 32GB of less expensive DDR in slots on the system board. Then you have "unified memory" without needing to buy 40GB of GDDR. The first 8GB is faster and the CPU and GPU both have access to both. It's kind of surprising that this configuration isn't more common. Probably the main thing you'd need is for the APU to have a direct power connector like a GPU so you're not trying to deliver most of a kilowatt through the socket in high end configurations, but that doesn't explain why e.g. there is no 65W CPU + 100W GPU with a bit of GDDR to be put in the existing 170W AM5 socket.
However, even if that were everywhere, it still doesn't necessarily imply there are a lot of things that could do much with it. You would need something that simultaneously requires more single-thread performance than you can get from a GPU, more parallel computation than you can get from a high-end CPU, and a large amount of data repeatedly shared between those parts of the computation. Such things probably exist, but it's not obvious that they're very common.
These companies always try to preserve price segmentation, so I don't have high hopes they'd actually do that. Consumer machines still get artificially held back on basic things like ECC memory, after all...
https://docs.nvidia.com/cuda/cuda-programming-guide/04-speci...
Can we also stop giving Apple some prize for unified memory?
It was the way of doing graphics programming on home computers, consoles and arcades, before dedicated 3D cards became a thing on PC and UNIX workstations.