Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons. If we could make it work, it would be really cool to have a system that lets you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in.
reply
I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know they shouldn't try to control the bus.
reply
But if we're dreaming, we can have the backplane actually be a full mesh: N Thunderbolt 5 cables connecting each slot to every other slot directly.

Then each device can be both a host and a client at the same time, at full bandwidth.

reply
That's basically what S-100 systems were, isn't it (on a much slower bus)?
reply
This was (is?) done in some strange industrial computers for sure, and I think in others, where the "motherboard" was just the first board on the backplane.

The transputer B008 series was also somewhat similar.

reply
That would crush latency on RAM.
reply
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be x16 lanes direct from one to the other.

For cases where there are other cards, yes, there would be more contention, but few expansion cards can saturate more than a lane or two. One lane of PCIe Gen 5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual 10 GbE NIC on its own.

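A quick sanity check of that arithmetic, sketched in Python. These are raw line-rate numbers only; real throughput is a bit lower once PCIe protocol overhead (TLP headers, flow control) is accounted for:

```python
# Back-of-envelope: does a dual-port 10 GbE NIC fit in one PCIe Gen 5 lane?
PCIE_GEN5_GT_PER_LANE = 32          # 32 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130     # Gen 3+ uses 128b/130b encoding

lane_gbytes = PCIE_GEN5_GT_PER_LANE * ENCODING_EFFICIENCY / 8
nic_gbytes = 2 * 10 / 8             # two 10 Gb/s ports at line rate

print(f"one Gen 5 lane: {lane_gbytes:.2f} GB/s per direction")
print(f"dual 10 GbE:    {nic_gbytes:.2f} GB/s")
print("fits" if nic_gbytes < lane_gbytes else "does not fit")
```

So one lane delivers roughly 3.9 GB/s of usable bandwidth each way, comfortably above the 2.5 GB/s a dual 10 GbE NIC can push.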
reply