"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on s390 architecture, we aim to expand the platform's software ecosystem. This initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU virtualization on s390....."
https://patchwork.kernel.org/project/linux-arm-kernel/cover/...
things like https://www.youtube.com/watch?v=a6b4lYOI0GQ could get you a really interesting form of multitasking
Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.
I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.
Then each device can be a host and a client at the same time, at full bandwidth.
The Transputer B008 boards were also somewhat similar.
I had envisioned a smaller tower design with PCI slots, with Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB, but with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and where so many features are integrated into the motherboard.
https://512pixels.net/2024/03/apple-jonathan-modular-concept...
I’ve been running VM/370 and MVS on my RPi cluster for a long time now.
Is there really SW that's limited to (Linux) ARM and not x86?
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were x86 Android devices in production for a few years, and some of those might still be in use; MIPS was taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
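To make the multi-architecture packaging concrete: an APK is just a zip, and native code ships under lib/&lt;abi&gt;/ directories, one per supported architecture. A minimal sketch, using a toy in-memory "APK" with made-up library names, of how you could list which ABIs an app ships:

```python
import io
import zipfile

# Build a toy "APK": a zip with native libs for two ABIs.
# The library name libfoo.so is purely illustrative.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("lib/arm64-v8a/libfoo.so", b"")
    z.writestr("lib/x86_64/libfoo.so", b"")

# An installer (or a curious user) can see the supported ABIs
# just by looking at the lib/ directory names.
with zipfile.ZipFile(buf) as z:
    abis = sorted({n.split("/")[1] for n in z.namelist() if n.startswith("lib/")})

print(abis)  # ['arm64-v8a', 'x86_64']
```

The same trick works on a real APK file: open it with `zipfile.ZipFile(path)` and inspect the `lib/` entries.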
MacOS? (hides)
There is, however, a completely different vision for how web infrastructure should be, and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple, easy-to-maintain, single-process backend program, run it on a mainframe, and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without rebooting the OS. Credit card transactions and banking software run on this model, for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So, for example, you can rent a Kubernetes cluster from AWS and run a simple HTTP server on it. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics which will light up some alerts, and eventually people will know about it and fix it.
Another example: your simple program makes some REST GET request. The request fails for some reason. But it was intercepted by a middleware proxy, which sees that the HTTP response was a 5xx, so it can retry. It retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the machinery that made this work; it just issued an HTTP request and got a response.
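The retry behaviour described above can be sketched in a few lines. This is a minimal illustration, not any particular proxy's implementation; `send`, `retrying_get`, and the flaky backend are all made up for the example:

```python
import random
import time

# Retry 5xx responses a few times with exponential backoff plus jitter,
# transparently to the caller. `send` stands in for whatever actually
# performs the HTTP request and returns (status, body).
def retrying_get(send, url, attempts=4, base_delay=0.1):
    for attempt in range(attempts):
        status, body = send(url)
        if status < 500:          # success or a client error: pass it through
            return status, body
        if attempt < attempts - 1:
            # Exponential backoff with jitter: the "calibrated durations".
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
    return status, body           # out of attempts: surface the last response

# Fake backend that fails twice and then succeeds.
calls = {"n": 0}
def flaky(url):
    calls["n"] += 1
    return (503, "") if calls["n"] < 3 else (200, "ok")

print(retrying_get(flaky, "/api/thing", base_delay=0.01))  # prints (200, 'ok')
```

The caller only ever sees the final `(200, 'ok')`; the two 503s and the sleeps between them are invisible to it, which is exactly the point being made.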
There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
IBM Z mainframes play a pivotal role in facilitating 87% of global credit card transactions, nearly $8 trillion in annual payments, and 29 billion ATM transactions each year, amounting to nearly $5 billion per day. Rosamilia highlighted the continuous growth in demand for capacity over the past decade, which has seen inventory expand by 3.5 times.
https://thesiliconreview.com/2024/04/ibm-new-mainframe-web-t...
Some stayed on prem, some pushed code to mainframe VMs in the cloud, some went to OpenShift (mostly on prem from what I've seen, probably 80-85%).
Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
All our production stuff was being deployed on AIX, HP-UX, Solaris, and Windows NT/2000 Server.
Likewise, most of my university degree used DG/UX and Solaris; when Red Hat Linux was first deployed in the labs, it was after the DG/UX server died, and I was already in the fourth year of a five-year degree.
We did use NT/2K internally, but that was because we had some people who insisted on using SMB via Windows.
Such fun times. Unix and Unix-like OSes were spreading like wildfire. I never would have thought I'd wrangle them for the majority of my career.
Just because things hung around didn't mean that Sun/Solaris/Java were long for this world. Linux/x86 was just too cheap compared to SPARC gear. Even if it wasn't as robust as the Sun gear, it just made too much sense especially if you didn't have any legacy baggage.
But the x86 I was referring to in my comment above, Stratus, was (maybe still is?) an exotic attempt to enter the mainframe-reliability space with windows. IIRC it effectively ran two redundant x86 machines in lockstep, keeping them in sync somehow, so that if hardware on one died the other could continue. I have no idea how big their market was, but I know of at least one acquirer/issuer credit card system that ran on that hardware around 2002-3.
Basically they do a lot, but they're not showy about it.
IBM is not in consumer products or services, so we do not hear about it.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
They have their own Java implementation, with capabilities like AOT compilation before OpenJDK started on Leyden or Graal even existed; for years it had extensions for value types (since dropped), and, like Azul, a cluster-based JIT compiler that shares code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside AIX and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, and design processes; it's one of the companies that files huge numbers of patents per year across various fields.
And yes, a services company that is actually a consortium of IBM-owned companies, many of them under a different brand (followed by "an IBM company").
Beneath the countless layers of VMs and copious weird purpose built gear like Tandem and Base24 for the ATMs was a whole bunch of true blue z/OS powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world, moving money between accounts, compounding interest, and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
https://en.wikipedia.org/wiki/Tandem_Computers
A good friend of mine who worked on a CICS-based credit card processing application at that bank doubled his salary twice inside of 4 yrs: first by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks), and then by quitting that job and coming back to the bank to take over the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
Tandem! Now there's a name i haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic, we were both floored with the capabilities.
/we were in our early 20s and the internet was just taking off, so there was lots of "magic" everywhere
https://www.youtube.com/watch?v=SSSB7ZTSXH4
The Remarkable Computers Built Not to Fail by Asianometry
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
Licensing, of course, is just typical rent-seeking behaviour, but their services are valuable given the financial impact if one of their solutions goes down on us (which happens very rarely).
IBM (imho) is at the absolute front line in quantum computing. One could argue about whether the number of startups in QC means there is an actual market or not; those are companies that live on VC or on the valuation of their stock.
But IBM is not showy, not on the front pages, and does not live on VC or stock valuation. IBM makes tons of money, decade after decade, from customers that are also not showy but make tons of money: banks, financial institutions, energy, logistics, health care, etc. If IBM thinks these companies will benefit from using QC from IBM (and will pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM has run the numbers and decided that the money to be earned on QC services outweighs the engineering and research spending required: QCs powerful enough to run the quantum algorithms these companies need to make even more tons of money. And it's probably not breaking RSA or ECC.
Evidence for this is in the number of articles that talk about simulated annealing/quantum annealing (or other optimization problems) w/r/t QC rather than crypto. Sure attention seeking headlines always focus on prime factoring, and the security aspect has a lot more enthusiast interest, but when you look past that into deeper stuff, a lot of the focus is on the optimization.
And many industries can dramatically benefit from better optimization - think about how many companies are at their core bin-packers or traveling salesmen.... off the top of my head anything in logistics, airlines, many aspects of the energy sector, and on and on.
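To illustrate the kind of optimization being talked about, here is a minimal classical simulated annealing sketch on a toy traveling-salesman instance (quantum annealers target the same family of problems; this is the classical analogue). City coordinates and all parameters are made up for illustration:

```python
import math
import random

random.seed(0)
# Twelve random cities in the unit square (toy instance).
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(order):
    # Total length of the closed tour visiting cities in this order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
initial = tour_length(order)

temp = 1.0
while temp > 1e-3:
    # Propose a 2-opt style move: reverse a random segment of the tour.
    i, j = sorted(random.sample(range(len(cities)), 2))
    cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    delta = tour_length(cand) - tour_length(order)
    # Always accept improvements; accept worse tours with a
    # temperature-scaled probability, so we can escape local minima.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        order = cand
    temp *= 0.999  # cool down gradually

print(f"initial {initial:.3f} -> annealed {tour_length(order):.3f}")
```

The same accept-worse-moves-while-hot structure is what makes annealing (classical or quantum) attractive for bin packing, routing, and scheduling problems.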
The flash is in reading secrets, the money is in quantum annealing.
What I don't get however is who'd use their custom accelerators for AI inference.
To give you an idea:
- of the risk in regulated industries like banking: a UK bank was once fined *$62 million* for botching a mainframe migration and causing downtime.
- of the difficulty and risk in non-tech industries: Australia once spent *$120 million* trying to migrate its social security system off mainframes... and failed.
Mainframes are not their only business, of course, but they're a major cash cow that's underappreciated. I, for one, didn't know that business keeps growing.
Coincidentally, I wrote about the topic of mainframes with relation to IBM's acquisition of Confluent here today: https://blog.2minutestreaming.com/p/ibm-confluent-acquisitio...
Both have been around for many years, but neither is obsolete, they're just not designed for consumer applications.
They still generate $10-15 billion per year in revenue.
IBM eventually stepped away from the embedded market and later lost their foothold in consoles as well. While Raptor did offer POWER9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
IBM had a hand in both, however.
But yes, they're mostly enterprise/services/mainframes, not anything overly consumer.
You can see their roadmap here:
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Probably a ton of other things I'm not even aware of.
9. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
If IBM runs them into the ground, there's a niche for a copy-cat of the original company that you can just found again. Rinse and repeat.
So essentially they sell new hardware and "support" to customers who have needed to process tabular, multi-GB databases since when a PC had 128MB of memory, and who have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data, at a cost / in a cloud-style billing model. That is so expensive, though, that every large IBM shop has built an elaborate layer of JVMs, Unix, and mirror databases around their IBM appliances. Lately they bought Red Hat and HashiCorp and Confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (also, remember the alternative JVM OpenJ9, anyone?).
I think the later a company started using centralized electronic record keeping, the higher the likelihood they are not paying IBM anymore: commercial banks, governments, and insurers started digitizing in the 60s (with custom software), and if those companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (manufacturing and retail chains, which are younger than banks) started digitizing a little later (Walmart was only founded in the 60s, and electronic CAD started in the 80s), and while they likely used IBM in the past (SAP was big on DB2), they might not use it anymore (it also helps that they usually bought the ERP or PLM from someone else). New companies whose sole business was to run a digital platform started on Unix (see Amazon, who even successfully fought to ditch Oracle) or just built their whole platform themselves (Google). If those companies predate Unix, they usually fought hard to get rid of IBM (Microsoft, Amadeus).
Consulting/outsourcing services have been spun out to Kyndryl, so nowadays IBM only sells hardware and support for their products, and ostensibly has some people left to develop those products... The days when that was a big thing and IBM produced all the stuff they now sell support for are long gone. A fun link to see how their "product development" operates nowadays is this discussion about bringing gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... (tl;dr: "hey, you open-source company, we are IBM and managed to pay someone to port a Go compiler to z/OS. Now we have a customer who wants to use GitLab with z/OS. Would you like to make your software part of our product offering?"). A fun fact is that, even within IBM, access to a real mainframe seems to be very limited, which shows a bit in the discussion linked above, and also in an ex-Kyndryl person saying: "oh, I once had a contract where we replaced the mainframe, and we ran that on Linux boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine."
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture, and ARM64 is starting to catch up in performance at a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it doesn't make sense: if IBM sold ARM-based servers, nobody would buy from them instead of using cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when if you had an Apple computer with a 6502 CPU you could also buy a Z80 CPU card for it, so you could also run CP/M programs on your Apple computer, not only programs written for Apple and 6502.
Thus, with this ARM accelerator, you will be able to run Linux-on-ARM or Windows-on-ARM instances in VMs on IBM mainframes. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
The architecture might be non-standard and not very widespread; however, for what it does and the workloads that are suited to it, I don't think any ARM design comes close, except maybe Fujitsu's A64FX.
Sun had the same problem after 2001 dotcom when standard PC servers became reliable enough to run web servers on.
It's easier to sell "our special sauce" when building using a custom ARM platform. Then you have no easy comparison with standard servers.
They will probably market the ARM inclusion similarly - as something that the package provides.
As far as POWER goes, I think only Raptor [1] does direct marketing of the power (hehe) and capabilities.
https://www.ibm.com/products/power
The i systems are just POWER machines with different firmware.
Why do you say "starting to"? arm64 has been competitive with ppc64le for a fairly long time at this point
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance, but for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
And the single-thread side isn't that good either, but SMT8 is quite a nice software-licensing trick.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA rather than the current one, but the two names refer to the same thing. PowerPC was an attempt at rebranding, but then they returned to the original name.
Legacy apps on s390x do not move because IBM put out a press release, and IBM does not get fatter cloud margins by joining the same ARM pile as other vendors. A mainframe migration is not a weekend project. "Easier" usually means somebody signs a six-figure check first.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
The value in Z series is in the system design and ecosystem; IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce investment in silicon and supporting software.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
edit: s/390 is big endian.
My gut feeling says to lean more to the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness... https://killedbygoogle.com/
https://www.qualcomm.com/news/releases/2025/09/qualcomm-achi...