From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.

Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.

I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.

reply
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons? If we could make it work, it would be really cool to have a system that let you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in.
reply
I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know that they shouldn't try to control the bus.
reply
But if we're dreaming, we can have the backplane actually be multiple point-to-point links (N× Thunderbolt 5 cables connecting each slot to every other slot directly).

Then each device can be a host and a client at the same time, and at full bandwidth.
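
Back-of-the-envelope for how that full mesh scales, assuming the nominal 80 Gbit/s symmetric Thunderbolt 5 link rate (a quick Python sketch, slot counts are illustrative):

  # Full mesh of point-to-point Thunderbolt 5 links between N slots.
  TB5_GBIT = 80  # nominal symmetric link rate per cable, each direction

  for n in range(2, 9):
      links = n * (n - 1) // 2   # one cable per pair of slots
      ports_per_card = n - 1     # each card needs a port to every other card
      agg = links * TB5_GBIT     # aggregate fabric bandwidth, per direction
      print(f"{n} slots: {links} cables, {ports_per_card} ports/card, {agg} Gbit/s aggregate")

The cable count grows quadratically, which is roughly why switched fabrics tend to win over full meshes once you get past a handful of slots.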

reply
That's basically what S-100 systems were, isn't it (on a much slower bus)?
reply
This was (is?) done - in some strange industrial computers for sure, and I think others, where the "motherboard" was just the first board on the backplane.

The Transputer B008 series was also somewhat similar.

reply
That would crush latency on RAM.
reply
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be 16x lanes direct from one to the other.

For cases where there are other cards, yes, there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual 10GbE NIC on its own.
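
To put rough numbers on that (nominal line rates, so real-world throughput is a bit lower; a quick Python sketch):

  # One PCIe Gen5 lane: 32 GT/s with 128b/130b encoding, per direction.
  pcie5_x1_GBps = 32e9 * 128 / 130 / 8 / 1e9   # ~3.94 GB/s

  # Dual-port 10 Gigabit Ethernet NIC with both ports saturated.
  dual_10gbe_GBps = 2 * 10e9 / 8 / 1e9         # 2.5 GB/s

  print(f"PCIe 5.0 x1: {pcie5_x1_GBps:.2f} GB/s per direction")
  print(f"Dual 10GbE:  {dual_10gbe_GBps:.2f} GB/s")

So even before touching the x16 link to the GPU, a single spare lane covers that NIC with room left over.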

reply
That's what I was hoping Apple was going to do with a refreshed Mac Pro.

I had envisioned a smaller tower design with PCI-E slots, and Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCI-E edge connector and power connector.

The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.

This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and where so many features are integrated into the motherboard.

reply
You can do basically that by connecting over Thunderbolt 5:

https://news.ycombinator.com/item?id=46248644

reply
Homogeneous RDMA is less like a daughterboard and more like a brother or sisterboard.
reply
M5 processor plugged into the same RDMA as IBM POWER for that "brother from another motherboard".
reply
Apple already experimented with this with the prototype Jonathan computer. It's very late 80's in its aesthetic, and I love it.

https://512pixels.net/2024/03/apple-jonathan-modular-concept...

reply
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
reply