Currently supports CPU and GPU on macOS, and CPU on Linux.
https://github.com/kiwi-array-lang/kiwi
Kiwi runs computations on small dense arrays in its own runtime; when the arrays are larger it lowers to MLX on the CPU, and eventually to MLX on the GPU when that is worth it.
As a user you don't have to change any code; you just write k.
I'm sure there are other languages designed to take advantage of modern GPUs.
But even with just SIMD you can get quite far with array-oriented code, and many array language implementations make use of it (BQN, ngn/growler/k, goal; ktye's k has a version with SIMD support, …)
I’ve yet to find a language that does SIMD / multithreading / GPU with minimal tweaks, let alone multiprocessing.
Yes, it can, but only by eliminating the features that make it Turing complete. It’s relatively easy to vectorize a map with a closure that can’t mutate anything, but once you have nontrivial control flow, the compiler can’t make those kinds of assumptions.
Definitely the closest thing so far: it doesn’t do multiprocessing, but it does seem to do SIMD, multithreading, and GPU auto-parallelization!
Any idea why it’s so little known?
That’s what compilers and high level languages are supposed to be for!
It already (partly) exists: the D language. By default it's garbage collected (GC), but you can also program without the GC or use a hybrid approach. It's modern, backward compatible with C, and included in GCC.
D's linear algebra system, Mir GLAS, is a standalone BLAS implementation written directly in D [1]. It was already shown to be faster than widely used conventional BLAS libraries like OpenBLAS back in 2016, about ten years ago!
The popular OpenBLAS includes the Fortran-based LAPACK (yes, you read that right: Fortran), and it is used by almost all data processing languages today: Matlab, Julia, Rust, and also Mojo [2].
Interestingly, there is a very early-stage standalone BLAS implementation written directly in Mojo, namely mojoBLAS, similar to Mir GLAS and started very recently [3].
>Surely this is the sort of thing compiler / language design nerds dream about?
You can say that again.
Especially on the GC side of the language, since the SIMD / multithreading / multiprocessing / GPU details can be abstracted away there.
Actually, someone recently proposed VGC, a virtualized garbage collector for Python implemented in C++ for heterogeneous GC [4][5]. However, its current evaluation excludes JIT compilation, AOT optimization, SIMD acceleration, and GPU offloading.
[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen:
http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...
[2] OpenBLAS:
https://en.wikipedia.org/wiki/OpenBLAS
[3] mojoBLAS:
https://github.com/shivasankarka/mojoBLAS
[4] Virtual Garbage Collector (VGC): A Zone-Based Garbage Collection Architecture for Python's Parallel Runtime:
https://arxiv.org/abs/2512.23768
[5] VGC-for-arxiv: