Oh, that's a weird way to do it; they used to have an x86 add-on block for mainframes, which was just a pile of x86 blades with some integration.
reply
I loved the era of "daughter cards" which were just entire computers on a board.

things like https://www.youtube.com/watch?v=a6b4lYOI0GQ could get you a really interesting form of multitasking

reply
From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.

Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.

I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high-end GPUs being only 137mm wide but up to 300mm in length.

reply
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons. If we could make it work, it would be really cool to have a system that let you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in.
reply
I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know that they shouldn't try to control the bus.
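A toy way to see the single-leader assumption (my own illustration, not real PCIe code, and the device names are made up): configuration is a depth-first walk from one root complex, which alone hands out the numbering. A second CPU card on the same fabric would try to run the same walk and clash.

```python
def assign_bus_numbers(topology, root):
    """Depth-first walk from the single root, numbering each device once."""
    numbering, stack = {}, [root]
    while stack:
        dev = stack.pop()
        numbering[dev] = len(numbering)
        stack.extend(reversed(topology.get(dev, [])))
    return numbering

# A tiny fabric: root complex -> switch -> two endpoints (names invented).
topology = {"root": ["switch"], "switch": ["gpu", "nic"]}
print(assign_bus_numbers(topology, "root"))
# {'root': 0, 'switch': 1, 'gpu': 2, 'nic': 3}
```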
reply
But if we're dreaming, we can have the backplane actually be multiple links: N Thunderbolt 5 cables connecting each slot to every other slot directly.

Then each device can be a host and a client at the same time, and at full bandwidth.

reply
That's basically what S-100 systems were, isn't it (on a much slower bus)?
reply
This was (is?) done in some strange industrial computers for sure, and I think others, where the "motherboard" was just the first board on the backplane.

The Transputer B008 board series was also somewhat similar.

reply
That would crush latency on RAM.
reply
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be x16 lanes direct from one to the other.

For cases where there are other cards, yes, there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual 10 GbE NIC on its own.
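A back-of-the-envelope check of those numbers (my arithmetic; Gen5 is 32 GT/s per lane with 128b/130b encoding):

```python
# Usable bandwidth of one PCIe Gen5 lane, per direction.
GT_PER_S = 32e9          # raw transfer rate per lane
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
lane_gbps = GT_PER_S * ENCODING / 8 / 1e9
print(f"PCIe Gen5 x1: {lane_gbps:.2f} GB/s per direction")  # 3.94

# A dual 10 GbE NIC at full tilt.
nic_gbps = 2 * 10e9 / 8 / 1e9
print(f"Dual 10 GbE:  {nic_gbps:.2f} GB/s")  # 2.50

assert nic_gbps < lane_gbps  # one lane covers it, with headroom
```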

reply
deleted
reply
That's what I was hoping Apple was going to do with a refreshed Mac Pro.

I had envisioned a smaller tower design with PCIe slots, with Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB with a PCIe edge connector and power connector.

The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.

This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and where so many features are integrated into the motherboard.

reply
You can do basically that by connecting over Thunderbolt 5.

https://news.ycombinator.com/item?id=46248644

reply
Homogeneous RDMA is less like a daughterboard and more like a brother- or sisterboard.
reply
M5 processor plugged into the same RDMA as IBM POWER for that "brother from another motherboard".
reply
Apple already experimented with this with the prototype Jonathan computer. It's very late-'80s in its aesthetic, and I love it.

https://512pixels.net/2024/03/apple-jonathan-modular-concept...

reply
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
reply
z/OS for ARM then? ;-)

I’ve been running VM/370 and MVS on my RPi cluster for a long time now.

reply
But I wonder if this is "much better" than x86 emulation or virt?

Is there really SW that's limited to (Linux) ARM and not x86?

reply
Technically, aren't most Android apps limited to ARM?
reply
There's certainly some, but I don't think most.

I'd guess most apps are bytecode-only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Android devices for a few years, and some of those might still be in user bases; MIPS was removed from the Native Development Kit in 2018, so it's probably not very relevant anymore.
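Since an APK is just a zip with native libraries under `lib/<abi>/`, you can see which architectures an app ships code for by listing that directory. A minimal sketch, using a fabricated in-memory "APK" so it's self-contained (the file names are invented):

```python
import io
import zipfile

# Build a toy APK: bytecode plus native libs for two ARM ABIs only.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("classes.dex", b"")              # bytecode runs anywhere
    apk.writestr("lib/arm64-v8a/libfoo.so", b"")  # native code, ARM-only
    apk.writestr("lib/armeabi-v7a/libfoo.so", b"")

# The ABI is the directory component right under lib/.
with zipfile.ZipFile(buf) as apk:
    abis = sorted({n.split("/")[1] for n in apk.namelist()
                   if n.startswith("lib/")})
print(abis)  # ['arm64-v8a', 'armeabi-v7a'] -> no x86 build in this one
```

The same listing against a real APK (e.g. via `zipfile.ZipFile("app.apk")`) shows whether an x86 slice was shipped.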

reply
Probably Intel and AMD aren't willing to do this deal but Arm is.
reply
IBM actually still owns x86 rights. They last used them for something similar called Lx86, which ran x86 Linux applications on POWER CPUs.
reply
Developing a good x86 CPU is far beyond IBM's abilities. The rights aren't enough.
reply
Price-competitive with AMD and Intel? Sure. Abilities? There is no magic; the Telum and Power11 are each as complicated as something like EPYC, and the former has both a longer and taller compatibility totem pole than x86.

Anyway, this post was never about building ARM or x86 CPUs; the point is they could have done a z/Architecture fast path for x86 for "free", so there is some other strategy at play in choosing to do it with ARM.

reply
> Is there really SW that's limited to (Linux) ARM and not x86?

macOS? (hides)

reply