Running Gemma 3n E2B-it on an iPhone 15 (can't go any higher than E2B due to RAM limitations) versus a Pixel 9 Pro, I don't really notice much of a difference between the two. The Pixel is a bit faster, but it's also a year more recent.

The model itself works absolutely fine, though the iPhone thermal throttles at some point which really reduces the token generation speed. When I asked it to write me a business plan for a fish farm in the Nevada desert, it slowed down after a couple thousand tokens, whereas the Pixel seems to just keep going.
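That slowdown pattern (fast for the first couple thousand tokens, then degrading) is the classic thermal-throttling signature. If your inference app logs per-token timestamps, a rolling tokens-per-second calculation makes it visible; a minimal sketch with synthetic timestamps standing in for real logs:

```python
# Sketch: spot thermal throttling from per-token timestamps.
# The timestamps below are synthetic placeholders, not real measurements.

def tokens_per_sec(timestamps, window=50):
    """Rolling throughput over a sliding window of `window` tokens."""
    rates = []
    for i in range(window, len(timestamps)):
        dt = timestamps[i] - timestamps[i - window]
        rates.append(window / dt)
    return rates

# Simulate ~20 tok/s for the first 1000 tokens, ~8 tok/s once throttled.
ts, t = [], 0.0
for i in range(2000):
    t += 0.05 if i < 1000 else 0.125
    ts.append(t)

rates = tokens_per_sec(ts)
print(f"start: {rates[0]:.1f} tok/s, end: {rates[-1]:.1f} tok/s")
# → start: 20.0 tok/s, end: 8.0 tok/s
```

A sustained step down like that mid-generation points at throttling rather than the usual gradual slowdown from a growing KV cache.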

reply
It's likely a llama.cpp backend issue. On the Pixel, inference probably runs on a well-optimized CPU (NEON) or Vulkan path that spreads the load across the SoC — note the Pixel 9 Pro uses Google's Tensor G4, not a Qualcomm chip, so QNN doesn't apply there. On the iPhone, everything is shoved through Metal, which maxes out the GPU immediately and causes instant overheating. Until Apple opens up low-level Neural Engine access to third-party runtimes, iPhones will just keep melting on long-context prompts.
reply
You can run Android on just about anything, so it boils down to Linux GPU benchmarks.
reply
That doesn't answer the question; I'm curious too. I think the A19 Pro has a speed and battery advantage over the Snapdragon 8 Elite Gen 5, but to know for sure you'd have to run the same model, in the most efficient way available on each platform, on both machines (the flagship iOS and Android phones).
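A fair comparison needs the same model file and quantization on each device's best backend, and since battery matters, it helps to normalize throughput by power draw. A minimal sketch — every number below is a hypothetical placeholder, not a real measurement:

```python
# Sketch: compare two phones on raw throughput (tok/s) and energy
# efficiency (tokens per joule). All figures are hypothetical stand-ins;
# substitute measurements from the same model + quant on each device's
# best backend (e.g. Metal on iOS, Vulkan or CPU on Android).

def efficiency(tokens, seconds, avg_watts):
    tps = tokens / seconds
    joules = avg_watts * seconds
    return tps, tokens / joules  # throughput, tokens per joule

iphone = efficiency(tokens=1024, seconds=64.0, avg_watts=6.0)  # hypothetical
pixel  = efficiency(tokens=1024, seconds=51.2, avg_watts=8.0)  # hypothetical

print(f"iPhone: {iphone[0]:.1f} tok/s, {iphone[1]:.2f} tok/J")
print(f"Pixel:  {pixel[0]:.1f} tok/s, {pixel[1]:.2f} tok/J")
# → iPhone: 16.0 tok/s, 2.67 tok/J
# → Pixel:  20.0 tok/s, 2.50 tok/J
```

With made-up numbers like these, one phone can win on speed while the other wins on battery, which is exactly why both metrics need measuring.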
reply
I don't think you should have been downvoted. Processing power and memory are the only things that matter. (Unless we're being so non-technical now that we just say things like "the Pixel 9 is great"...)
reply