> the GPU is limited by the Thunderbolt port

Not everything is limited by the transfer speed to/from the GPU. LLM inference, for example.

reply
Chicken/egg. Nvidia tooling is lacking surely in part because the hardware wasn't usable on macOS until now. Now that it is usable, that might change.
reply
Nvidia GPUs were usable on Intel Macs, but compatibility got worse over time, and Apple stopped making a Mac Pro with regular PCIe slots in 2013. People then got hopeful about eGPUs, but those have their own caveats on top of macOS only fully supporting AMD cards. So I've gone numb to any news about Mac + GPU. The answer was always to just get a non-Apple PC with PCIe slots instead of creating hoops for yourself to jump through.
reply
The 2019 Intel Mac Pro had PCIe slots. The Apple Silicon Mac Pro still has them as well, but they’re pretty much useless.
reply
Until there is official macOS support from Nvidia, I don't think anything will happen.

> the hardware wasn't usable on macOS

This eGPU thing is from a third party, if I understand correctly. I don't see why Nvidia would get excited about that. If they cared about the platform, they would have released something already.

reply
Nvidia tooling like CUDA has worked on AArch64 UNIX-certified OSes since June of 2020: https://download.nvidia.com/XFree86/Linux-aarch64/

The software stack has been ready for Apple Silicon for more than half a decade.

reply
There's a third option that might fit some of the "I'm on a Mac but need CUDA" cases: network-mounting an Nvidia GPU from another machine on the same LAN. The GPU stays wherever it lives (office server, lab machine, a roommate's PC), and your Mac runs the CUDA workload locally without any code changes: the same PyTorch/CUDA calls, just intercepted by a stub library that forwards them over the local network.

The tradeoff vs. a physical eGPU: no Thunderbolt bandwidth ceiling or cabling, but you do need to be on the same LAN, and there's ~4% overhead vs. native. It doesn't help if you need the GPU while traveling, and it won't fix the macOS driver situation for native GPU access.
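The interception idea can be sketched in a few lines of Python. All the names below are made up for illustration; a real stub intercepts the CUDA driver/runtime API in C and serializes GPU calls, but the forwarding pattern is the same:

```python
import json
import socket
import threading

# Toy sketch of call forwarding over a local socket. A real stub library
# would intercept CUDA API calls, not a Python function.

def real_matmul(a, b):
    """Stand-in for work that would actually run on the GPU host."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

HANDLERS = {"matmul": real_matmul}

def serve_forever(srv):
    # Handle one JSON request per connection: {"op": ..., "args": [...]}
    while True:
        conn, _ = srv.accept()
        with conn:
            req = json.loads(conn.makefile().readline())
            reply = json.dumps(HANDLERS[req["op"]](*req["args"])) + "\n"
            conn.sendall(reply.encode())

# "GPU host" side: bind an ephemeral port and serve in the background.
srv = socket.create_server(("127.0.0.1", 0))
PORT = srv.getsockname()[1]
threading.Thread(target=serve_forever, args=(srv,), daemon=True).start()

def stub_matmul(a, b):
    """Client-side stub: same signature as the real call, but the work
    happens on the other end of the socket."""
    with socket.create_connection(("127.0.0.1", PORT)) as conn:
        msg = json.dumps({"op": "matmul", "args": [a, b]}) + "\n"
        conn.sendall(msg.encode())
        return json.loads(conn.makefile().readline())

print(stub_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The caller can't tell `stub_matmul` from `real_matmul` except by latency, which is the whole trick: the application keeps issuing the same calls, and only the dispatch underneath changes.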

Disclosure: I work on GPU Go (tensor-fusion.ai/products/gpu-go), so I'm obviously biased toward this approach — but it genuinely is a different point in the design space from eGPU.

reply
> same PyTorch/CUDA calls, just intercepted by a stub library that forwards them over the local network.

At that point you're making more work for yourself than debugging over SSH.

reply
I mistook eGPU to mean virtual GPU, but I was wrong: it means external GPU.
reply