This isn't a hardware feat, this is a software triumph.

They didn't make special purpose hardware to run a model. They crafted a large model so that it could run on consumer hardware (a phone).

reply
It's both.

We haven't had phones running laptop-grade CPUs/GPUs for that long, and that is a very real hardware feat. Likewise, nobody would've said running a 400B LLM on a low-end laptop was feasible, and that is very much a software triumph.

reply
> We haven't had phones running laptop-grade CPUs/GPUs for that long

Agree to disagree, we've had laptop-grade smartphone hardware for longer than we've had LLMs.

reply
Kind of.

We've had solid CPUs for a while, but GPUs have lagged behind (and they're the ones that matter for this particular application). iPhones still lead by a comfortable margin on this front, but have historically been pretty limited on the I/O front (they only supported USB 2 speeds until recently).

reply
The iPhone 17 Pro launched 8 months ago with 50% more RAM and about double the inference performance of the previous iPhone Pro (also 10x prompt processing speed).
reply
>triumph

It’s been a lot of years, but all I can hear after reading that is … I’m making a note here, huge success

reply
There’s no use crying over every mistake. You just keep on trying until you run out of cake.
reply
It's hard to overstate my satisfaction!
reply
both, tbh
reply
It wasn't considered impossible. There are examples of large MoE LLMs running on small hardware all over the internet, like giant models on a Raspberry Pi 5.

It's just so slow that nobody pursued it seriously. It's fun to see these tricks implemented, but even on this 2025 top-spec iPhone Pro the output is 100X slower than output from hosted services.

reply
If the bottleneck is storage bandwidth that's not "slow". It's only slow if you insist on interactive speeds, but the point of this is that you can run cheap inference in bulk on very low-end hardware.
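To make "storage-bound, not slow" concrete, here's a back-of-envelope sketch. All numbers are made up for illustration (a hypothetical ~3 GB/s phone NVMe read speed, ~17B active params at 4-bit), not measurements from the article:

```python
# If inference is storage-bound, throughput is roughly:
#   tokens/sec ≈ storage_read_bandwidth / bytes_of_active_weights_per_token
def storage_bound_tps(read_gbps: float, active_params_b: float, bytes_per_param: float) -> float:
    """Rough tokens/sec when every token must stream its active weights from storage."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return read_gbps * 1e9 / bytes_per_token

# Hypothetical: 3 GB/s reads, 17B active params, 4-bit quant (0.5 bytes/param)
print(round(storage_bound_tps(3.0, 17, 0.5), 2))  # → 0.35
```

Sub-1 tok/s is useless interactively, but for batch jobs it's just a cost-per-token question.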
reply
> If the bottleneck is storage bandwidth that's not "slow"

It is objectively slow: around 100X slower than what most people consider usable.

The quality is also degraded severely to get that speed.

> but the point of this is that you can run cheap inference in bulk on very low-end hardware.

You always could, if you didn't care about speed or efficiency.

reply
You're simply pointing out that most people who use AI today expect interactive speeds. You're right that the point here is not raw power efficiency (having to read from storage increases the energy per operation, and datacenter-scale AI hardware beats edge hardware on that metric anyway), but the ability to repurpose cheaper, smaller-scale hardware is also compelling.
reply
> very low-end hardware

iPhone 17 Pro outperforms AMD’s Ryzen 9 9950X per https://www.igorslab.de/en/iphone-17-pro-a19-pro-chip-uebert...

reply
In single-threaded workloads, but still impressive.
reply
The software has real software engineers working on it instead of researchers.

Remember when people were arguing about whether to use mmap? What a ridiculous argument.

At some point someone will figure out how to tile the weights and the memory requirements will drop again.
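A toy sketch of what weight tiling could look like (shapes and the flash-read callback are hypothetical, just to show the idea): compute y = W @ x without ever holding all of W in memory, streaming one row-tile at a time.

```python
import numpy as np

def tiled_matvec(load_tile, n_tiles: int, tile_rows: int, x: np.ndarray) -> np.ndarray:
    """load_tile(i) returns row-tile i of W (tile_rows x len(x)), e.g. read from flash.
    Only one tile is ever resident in memory at a time."""
    y = np.empty(n_tiles * tile_rows, dtype=x.dtype)
    for i in range(n_tiles):
        tile = load_tile(i)                               # fetch just this tile
        y[i * tile_rows:(i + 1) * tile_rows] = tile @ x   # partial result
    return y

# Demo: an 8x4 "weight matrix" streamed in 4 tiles of 2 rows each.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
x = rng.standard_normal(4)
y = tiled_matvec(lambda i: W[2 * i:2 * i + 2], 4, 2, x)
print(np.allclose(y, W @ x))  # True
```

Peak memory drops from the full matrix to one tile plus the output vector, at the cost of re-reading weights every token.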

reply
The real improvement will come when the software engineers get into the training loop. Then we can have MoEs that use cache-friendly expert utilisation and maybe even learned prefetching for what the next experts will be.
reply
> maybe even learned prefetching for what the next experts will be

Experts are predicted by layer and the individual layer reads are quite small, so this is not really feasible. There's just not enough information to guide a prefetch.

reply
It's feasible to put the expert routing logic in a previous layer. People have done it: https://arxiv.org/abs/2507.20984
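Sketched concretely (this is a hypothetical scheme to show the overlap, not the linked paper's exact architecture): if layer i's experts are already chosen one layer early, their weights can be fetched from storage while the previous layer is still computing.

```python
from concurrent.futures import ThreadPoolExecutor

def run_layers(h, n_layers, route, load_experts, compute):
    """route(i) -> expert ids for layer i (known one layer in advance);
    load_experts(ids) -> expert weights (slow, e.g. a flash read);
    compute(h, weights) -> next hidden state."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(load_experts, route(0))      # start first fetch
        for i in range(n_layers):
            weights = pending.result()                     # this layer's experts ready
            if i + 1 < n_layers:
                pending = pool.submit(load_experts, route(i + 1))  # overlap next fetch
            h = compute(h, weights)                        # compute hides I/O latency
    return h
```

If the load and the compute take similar time, the fetch cost is mostly hidden instead of serialized.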
reply
Manually, no. It would have to be learned, and making the expert selection predictable would need to be a training objective to minimize.
reply
Making the expert selection more predictable also means making it less effective. There's no real free lunch.
reply
FTFY: A year ago this would have been considered impossible. The software is moving faster than anyone's hardware assumptions.
reply
I mean, by any reasonable standard it still is. Almost any computer can run an LLM; it's just a matter of how fast, and 0.4 tok/s (peak, before the first token) is not really considered running. It's a demo, but practically speaking it's entirely useless.
reply
Devil's advocate: this actually shows how promising TinyML and EdgeML capabilities are. SoCs comparable to the A19 Pro are highly likely to be commoditized in the next 3-5 years, in the same way that SoCs comparable to the A13 already are.
reply
Does the iPhone have some kind of hardware acceleration for neural networks/AI?
reply
Yes: a Neural Engine, and on the latest A19, tensor processing in the GPU cores (the "neural accelerators").
reply