> And as for extensions - gone are the days of PCIe. Audio cards and other specialized equipment works and lives just fine on USB-C and Thunderbolt.

Grumble grumble. Well, there used to be more than audio cards, back before the first time Apple canceled the Mac Pro and released the 2013 Studio^H^H Trash Can^H^H Mac Pro.

Then everyone stopped writing Mac drivers because why bother. So when they brought the PCIe Pro back in 2019, there wasn't much to put in it besides a few Radeon cards that Apple commissioned.

The nice thing about PCIe is the low latency, so you can build all sorts of fun data acquisition and real time control applications. It's also much cheaper because you don't need multi-gigabit SERDES that can drive a 1m line. That's why LabVIEW (originally a Mac exclusive) and NI-DAQ no longer exist on Mac.

USB-C oscilloscopes work because the peripheral contains all the hardware, so it doesn't particularly matter that the device->host latency is high. They also don't require much bandwidth because triggering happens inside the peripheral, and only the triggered waveform record is sent a few dozen times per second.
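As a toy illustration of that on-device triggering model (everything here is made up for illustration, not any real scope's firmware), the peripheral side amounts to something like:

```python
# Hypothetical sketch: the scope's MCU/FPGA watches the sample stream and
# only ships a short record around each trigger crossing to the host, so
# device->host latency and bandwidth barely matter.
def triggered_records(samples, threshold=0.5, record_len=8):
    """Yield a fixed-length waveform record at each rising-edge crossing."""
    it = iter(samples)
    prev = next(it, None)
    for s in it:
        if prev < threshold <= s:
            # Trigger fired: capture this crossing plus the next samples.
            record = [prev, s]
            while len(record) < record_len:
                nxt = next(it, None)
                if nxt is None:
                    break
                record.append(nxt)
            yield record
            prev = record[-1]
        else:
            prev = s

# Two rising edges in the stream -> two short records sent to the "host".
print(list(triggered_records([0, 0, 1, 1, 0, 0, 1, 1], record_len=4)))
# -> [[0, 1, 1, 0], [0, 1, 1]]
```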

> It's not the Mac's or Apple's fault. We are actually live in the age where systems are quite independent and do not require direct installations.

It is, and we don't. Maybe you don't notice it, but others do.

reply
> USB-C oscilloscopes work because the peripheral contains all the hardware, so it doesn't particularly matter that the device->host latency is high.

Yeah, that's basically the way accessories have gone. Powerful MCUs and SoCs have gotten cheap enough to make it viable. Makes me a little sad, though; I liked having low-latency "GPIOs" straight to software running on my PC (but I'm thinking as far back as the parallel port... love how simple that was).

reply
It's not just that - anything working with analog signals benefits hugely from not living inside the complete EM interference nightmare of the computer case.
reply
Well there is https://www.crowdsupply.com/eevengers/thunderscope

With USB4/TB you can get quite far in both latency and throughput. In fact, there are Thunderbolt network adapters that are just a Thunderbolt-to-PCIe bridge plus a PCIe network card.

reply
> gone are the days of PCIe.

My GPU, NVMe drives and motherboard might disagree.

reply
The top Mac Studio has six Thunderbolt 5 ports, each of which is a PCIe 4.0 x4 link. Each is an 8 GB/s link in each direction, which is a lot. Going from x16 down to x4 has less than a 10% hit on games: https://www.reddit.com/r/buildapc/comments/sbegpb/gpu_in_pci...
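For what it's worth, the 8 GB/s figure checks out against the published PCIe 4.0 numbers (a back-of-envelope sketch that ignores packet and protocol overhead):

```python
# Back-of-envelope: PCIe 4.0 runs 16 GT/s per lane with 128b/130b line
# encoding, so an x4 link carries roughly 8 GB/s in each direction.
GT_PER_SEC = 16e9        # transfers per second per PCIe 4.0 lane
ENCODING = 128 / 130     # 128b/130b payload efficiency
LANES = 4                # x4 link tunneled over Thunderbolt 5

bytes_per_sec = GT_PER_SEC * ENCODING * LANES / 8
print(f"{bytes_per_sec / 1e9:.2f} GB/s per direction")  # -> 7.88 GB/s per direction
```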
reply
Your example uses a GTX 1080, which is a very old GPU. A current flagship consumer GPU will take a harder hit from low-bandwidth PCIe.
reply
Here’s more recent HW: https://www.pugetsystems.com/labs/articles/impact-of-gpu-pci...

This is an RTX4080.

“In the more common situations of reducing PCI-e bandwidth to PCI-e 4.0 x8 from 4.0 x16, there was little change in content creation performance: There was only an average decrease in scores of 3% for Video Editing and motion graphics. In more extreme situations (such as running at 4.0 x4 / 3.0 x8), this changed to an average performance reduction of 10%.”

reply
A 10% performance reduction seems like a lot to be leaving on the table.
reply
Not really.
reply
deleted
reply
The article is nearly 3 years old, and the 4080 wasn't even top of the line at the time it was written.

Still, a 10% difference is considerable, almost a gen-to-gen difference

reply
PCIe 4.0 x4 is going to be a huge bottleneck; even recent SSDs have more throughput (they use PCIe 5.0), never mind GPUs.
reply
Gaming isn't what people are using Mac Studios for. Thunderbolt also isn't a substitute for OCuLink.
reply
Sure, but it’s probably reflective of the fact that GPUs generally aren’t PCIe-bandwidth bound. Also, TB5 and OCuLink 2 both use PCIe 4.0 x4 links.
reply
Oculink is generally faster than TB5 despite them both using PCIe 4.0, because Oculink provides direct PCIe access whereas Thunderbolt has to route all PCIe traffic through its controller. The benchmarks show that the overhead introduced by the TB5 controller slows down GPU performance.
reply
It's not just the controllers; the Thunderbolt protocol itself imposes different speed limits. The bit rates used by Thunderbolt aren't the same as PCIe, and PCIe traffic gets encapsulated in Thunderbolt packets.
reply
Apple Silicon has an integrated Thunderbolt controller, so it should have less latency than PCs that use a discrete Thunderbolt controller.
reply
Many recent laptop CPUs from Intel and AMD have integrated Thunderbolt controllers (i.e. USB 4), so that has not been a difference for a long time.
reply
Maybe; I'm unable to find any benchmarks that specifically compare PCs with TB to Macs to test this. But there is certainly still overhead with TB no matter what, and therefore it'll never be as fast as Oculink.
reply
That's just blatantly wrong, the performance loss of GPUs is very well documented and gets worse as you go towards higher end models. We're talking 30%+ loss of performance here.
reply
Um, I have an M3 Ultra 512GB on my desk for development. Love me some Baldur’s Gate 3, everything turned up to 11…
reply
Yeah, 80 GB/s of total I/O bandwidth is a lot for a Mac, but desktop PCs have been doing ~1 TB/s (128 lanes of PCIe 5.0) for years (Threadripper etc).
reply
Sure. And lots of people need all that I/O. But my point is that it’s not like the Mac Studio has no I/O. The outgoing Mac Pro only has 24 total lanes of PCIe 4.0 going to the switch chip that’s connected to all the PCIe slots. The advent of externally routed PCIe is a development of the last few years that may have factored into the change in form factor.
reply
- GPU is integrated into the SoC
- Surprisingly, it is possible to plug a drive into a TB/USB port

…so what do you actually need PCIe for?

reply
High-end Macs have moved to PCIe 5.0 speeds in their internal drives. Thunderbolt 5 is not fast enough to get the same performance from external ones.

Thunderbolt is also too slow for higher-end networks. A single port is already insufficient for 100-gigabit speeds.

reply
When people talk about 100-gigabit networks for Macs, I’m really curious what kind of network you run at home and how much money you spent on it. Even at work I’m generally seeing 10-gigabit network ports, with 100-gigabit+ only in data centers, where Macs don’t have a presence.
reply
Local AI is probably the most common application these days.

Apple recently added support for InfiniBand over Thunderbolt. And now almost all decent Mac Studio configurations have sold out. Those two may be connected.

reply
> Apple recently added support for InfiniBand over Thunderbolt.

TIL:

* https://developer.apple.com/documentation/technotes/tn3205-l...

Or maybe I forgot:

* https://news.ycombinator.com/item?id=46248644

reply
100 Gb/s Ethernet is likely to be expensive, but dual-port 25 Gb/s Ethernet NICs are not much more expensive than dual-port 10 Gb/s NICs, so whenever you are not using the Ethernet ports already included on the motherboard, it may be worthwhile to go to a higher speed than 10 Gb/s.

If you use dual-port NICs, you do not need a high-speed switch, which may be expensive; you can connect the computers to each other directly and configure them as either Ethernet bridges or IP routers.

reply
I work in media production and I have the same thought constantly. Hell, I'm cursing in church as far as my industry is concerned, because I find 2.5 to be fine for most of us. 10, absolutely.
reply
100 Gbps is going to be for mesh networks supporting clusters (4 Mac Studios, let's say), not for LAN-type networks (unless it's in an actual datacenter).
reply
I suppose the throughput is not the key; latency is. When you split an operation that normally runs within one machine between two machines, anything that crosses the boundary becomes orders of magnitude slower. Even with careful structuring, there are limits to how little and how rarely you can send data between nodes.

I suppose that splitting an LLM workload is pretty sensitive to that.
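A toy model of why the boundary crossings dominate (all numbers are illustrative, not measured):

```python
# Toy model: splitting across two machines halves the compute time but adds
# a fixed round trip per synchronization point. Numbers are illustrative.
compute_s = 1.0          # single-machine run time, seconds
syncs = 100_000          # boundary crossings per run
rtt_in_box = 100e-9      # ~100 ns to cross a cache/NUMA boundary
rtt_network = 10e-6      # ~10 us to cross even a fast network link

split_in_box = compute_s / 2 + syncs * rtt_in_box
split_network = compute_s / 2 + syncs * rtt_network
print(f"in-box split:  {split_in_box:.2f} s")   # -> 0.51 s (a win)
print(f"network split: {split_network:.2f} s")  # -> 1.50 s (slower than one box)
```

With enough sync points, the two-machine split is slower than not splitting at all, even though each machine does half the work.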

reply
To have lots of them plugged in at once: high-end audio cards, electronics integrations, disks, without having cables all over the place.
reply
Things that aren’t graphics cards, such as very-high-bandwidth video capture cards and any other equipment that needs a lot of lanes of PCIe data at low latency.
reply
but what about second GPU?
reply
Multi-GPU was tried by the whole industry, including Apple (most notably with the trash can Mac Pro). Despite significant investment, it was ultimately a failure for consumer workloads like gaming, and was relegated to the datacenter and some very high-end workstations, depending on the workload.

Multi-GPU has recently experienced a resurgence due to the discovery of new workloads with broader appeal (LLMs), but that's too new to have significantly influenced hardware architectures, and LLM inference isn't the most natural thing to scale across many GPUs. Everybody's still competing with more or less the architectures they had on hand when LLMs arrived, with new low-precision matrix math units squeezed in wherever room can be made. It's not at all clear yet what the long-term outcome will be in terms of the balance between local vs cloud compute for inference, whether there will be any local training/fine-tuning at all, and which use cases are ultimately profitable in the long run. All of that influences whether it would be worthwhile for Apple to abandon their current client-first architecture that standardizes on a single integrated GPU and omits/rejects the complexity of multi-GPU setups.

reply
Video capture

I/O expansion

Networking

reply
deleted
reply
> gone are the days of PCIe

Thunderbolt is external PCIe.

reply
No, OCuLink is external PCIe.

Thunderbolt can kinda-sorta mimic PCIe, but it needs to chop the PCIe traffic up into smaller packets, transmit them, and then reassemble them, and this introduces a big jump in latency, even when bandwidth can be rather high.

For many applications this isn't a big deal, but for others it causes major problems (gaming being the big one, but really anything that's latency sensitive is going to suffer a lot).

reply
I’m at peace with the memory, and PCIe basically flows over Thunderbolt. At one point external GPUs were a thing. I think what I’d really love would be a couple of M.2 slots in my Studio for storage expansion.
reply
Does the M5 series have a better video encoding chip/chiplet/whatever it is called than the M4 series? Because while I’m happy with my M4 Pro overall, H.264 encoding performance with videotoolbox_h264 is disappointingly basically the same as on the 2018 Intel Mac mini, and blown out of the water by NVENC on any mid-to-high-end Nvidia GPU released in the last half-decade, maybe even the full decade. And video encoding is a pretty important part of a video editing workflow.
reply
If you mean editing, ProRes is a better fit; if you mean final export, software always beats hardware encoders in terms of quality; if you mean mass H.264 transcoding, a Mac workstation is probably not the right place for it, though.
reply
> gone are the days of PCIe

This is a wild and very wrong take.

Just about every single consumer computer shipped today uses PCIe. If you were referring only to the physical PCIe slots, that's wrong too: the vast majority of desktop computers, servers, and workstations shipped in 2025 had physical PCIe slots (the only ones that didn't were Macs and certain mini-PCs).

The 2023 Mac Pro was dead on arrival because Apple doesn't let you use PCIe GPUs in their systems.

reply
> This is a wild and very wrong take.

That's what happens when you quote only part of a statement. Taken in context, it was referring to a very real decline in expansion cards. Now that NICs (for WiFi) and SSDs have been moved into their own compact specialized slots, and Ethernet and audio have been integrated onto the motherboard as standard for decades, the regular PCIe slots are vestigial. They simply are not widely used anymore for expanding a PC with a variety of peripherals (that era was already mostly over by the transition from 32-bit PCI to PCIe).

Across all desktop PCs, the most common number of slots filled is one (a single GPU), and the average is surely less than one (systems using zero slots and relying on integrated graphics must greatly outnumber systems using more than one slot).

Even GPUs themselves are a horrible argument in favor of PCIe slots. The form factor is wildly unsuitable for a high-power compute accelerator, because it's ultimately derived from a 1980s form factor that prioritized total PCB area above all else, and made zero provisions for cards needing a heatsink and fan(s).

reply
> Ethernet and audio have been standard integrated onto the motherboard itself for decades

Unless the one it comes with isn't as fast as the one you want, or they didn't integrate one at all, or you need more than one.

> Across all desktop PCs, the most common number of slots filled is one (a single GPU), and the average is surely less than one (systems using zero slots and relying on integrated graphics must greatly outnumber systems using more than one slot).

There is an advantage in having an empty slot because then you can put something in it.

Your SSD gets full, do you want to buy one which is twice as big and then pay twice as much and screw around transferring everything, or do you want to just add a second one? But then you need an empty slot.

You bought a machine with an iGPU and the CPU is fine but the iGPU isn't cutting it anymore. Easy to add a discrete GPU if you have somewhere to put it.

The time has come to replace your machine. Now you have to transfer your 10TB of junk once. You don't need 100 Gbps Ethernet 99% of the time, but using the built-in gigabit Ethernet for this is more than 24 hours of waiting. A pair of 100 Gbps cards cuts that >24 hours down to ~15 minutes. If the old and new machines have an empty slot.
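The arithmetic behind those numbers, assuming nominal line rates (real protocol overhead pushes the gigabit case past 24 hours, as the parent says):

```python
# 10 TB over gigabit vs 100-gigabit Ethernet, at nominal line rate.
data_bits = 10e12 * 8  # 10 TB expressed in bits

for name, rate_bps in [("1 GbE", 1e9), ("100 GbE", 100e9)]:
    seconds = data_bits / rate_bps
    print(f"{name}: {seconds / 3600:.1f} h ({seconds / 60:.0f} min)")
# 1 GbE: 22.2 h (1333 min)
# 100 GbE: 0.2 h (13 min)
```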

reply
My motherboard has 3 16x PCIe slots, but realistically only one is used for the GPU as the other two are under the mastodon of a cooler needed by the GPU. Can't use a 100G network card if I can't fit it under the GPU. Can't not use the GPU as I don't have an iGPU in my CPU.

He's not advocating for removing PCIe slots, but in practice they're needed by far fewer consumers than before. There are probably more computers being sold right now without any PCIe slot than with more than one.

reply
> My motherboard has 3 16x PCIe slots, but realistically only one is used for the GPU as the other two are under the mastodon of a cooler needed by the GPU.

Discrete GPUs generally consume two PCI slots, not three, and even the mATX form factor allows for four PCI slots (ATX is seven), which gives board makers an obvious thing to do. Put one x16 slot at the top and the other(s) lower down and use the space immediately under the top x16 slot for an x1 slot which is less inconvenient to block or M.2 slot which can be used even if there is a GPU hanging over it. This configuration is currently very common.

It also makes sense to put one of the x16 slots at the very bottom because it can either be used for a fast single height card (>1Gb network or storage controller) or a GPU in a chassis with space below the board (e.g. mATX board in ATX chassis) without blocking another slot.

reply
My post-mortem sentiments exactly. The lack of Nvidia GPU support for the M-series Mac Pro models kneecapped the platform for professionals. If Apple had included that, they’d be the de facto professional workstation for many more folks working in AI tech.
reply
On the other hand, it forced developers to invest more in Metal, which looks like an investment starting to bear fruit.
reply
Plus modern interconnects like CXL are also layers on top of PCIe, and USB4 supports PCIe tunnelling. PCIe is a big collection of specifications, the physical/link/transaction layers can be mixed and matched and evolved separately.

I don't see it disappearing, at most we'll get PCIe 6/7/etc.

reply
Thunderbolt is PCIe running over a cable.
reply
Sure, with expensive line drivers to send the data 1+ meters, instead of 10ish cm. And with only 2 channels instead of up to 16.
reply
Yes, I know; this is part of what I was implying when I said "Just about every single consumer computer shipped today uses PCIe."

I don't understand how this is a response to anything I said.

reply
Yup, the 4090 and Sound Blaster ZxR in my AM5 7800X3D system would both like to upvote your reply.
reply
Sound cards work fine on USB 2 (RME, for example, has interfaces on USB 2 that can manage 30/30 I/O at 192 kHz without issue at low latency, if you have the CPU to deal with the load).

With USB 3 you get 94 I/O…

For years PCI has not been mandatory for audio. UAD, Apogee, RME and other high-end brands will push you toward USB, or even only offer their gear as USB devices… even Thunderbolt is not needed here.

And that’s been the case for a while! My Fireface UC from 15 years ago can handle 16 channels at 96 kHz at a 256-sample buffer. On PC and Mac.

reply
Personally, I'd love to see / read / hear more about the way RME do what they do. I know they basically update the FPGA on the devices in lockstep with the drivers, which lets them do all sorts of magic (low CPU usage and zero-latency recording of each raw channel being two of them), but I'd love an interview or article with some of the hardware and software people at RME. They have been rock solid and basically future-proof for decades, and I think the entire hardware and software industries could learn something from the way they do things.

Incredible products, definitely worth the premium.

reply
Then they should start putting internal high-powered USB ports inside the case where I can literally bolt this shit into place, because my desk is a goddamn mess of cables and dongles and boxes that don't stack or interlock or interface at all, and I am so, so utterly tired of being gaslit into believing that they're just as good as a fucking slot.
reply
Sounds like a 9.5" mini rack could help with the stacking, see Geerling.
reply
I have about 14 or 15 USB devices in addition to my 4 monitors, and whilst I'm sure you're right I'm very happy to have a high quality soundcard that is not part of that mix.
reply
Compared to video data and the speed the CPU is running at, audio trickles in at a snail's pace.
reply
deleted
reply
Scarlett 2i2 has been amazing for me, I’d say unbeatable in terms of quality/price ratio.
reply
It's not just about PCIe, it's socketed memory and disks. I guess disks are just PCIe technically, but memory sockets are great. Hell, in the Pro chassis I'm surprised they didn't opt for a socketed CPU that could be upgraded.
reply
The latest M2-based Mac Pro did not take socketed memory AIUI.
reply
This is correct; Apple has refused to implement socketed memory on any M-series machine.
reply