Nah.

All neural accelerator hardware models and all neural accelerator software stacks output slightly different results. That is a truth of the world.

The same is true for GPUs and 3d rendering stacks too.

We don't usually notice that, because the tasks themselves tolerate those minor errors. You can't easily tell the difference between an LLM that had 0.00001% of its least significant bits perturbed one way and one that had them perturbed the other.

But you could absolutely construct a degenerate edge case that causes those tiny perturbations to fuck with everything fiercely. And very rarely, this kind of thing might happen naturally.
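
A quick way to see both halves of that in plain Python (no accelerator needed, purely illustrative): floating-point addition is not associative, so two summation orders stand in for two hardware/software stacks, and a contrived near-tie turns the last-bit noise into a different argmax, the way greedy decoding could pick a different token.

    import random

    # Two "implementations": the same numbers summed in different orders.
    # Floating-point addition is not associative, so the results differ in
    # the last few bits, much like two accelerators or two BLAS builds.
    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
    a = sum(xs)
    b = sum(reversed(xs))
    print(abs(a - b))  # tiny, roughly on the order of 1e-12

    # Constructed degenerate case: treat the two sums as near-tied logits
    # and take the argmax, as greedy decoding would. The last-bit noise now
    # decides the winner, so the two "implementations" typically disagree.
    mid = 0.5 * (a + b)
    logits_1 = [a, mid]
    logits_2 = [b, mid]
    print(logits_1.index(max(logits_1)), logits_2.index(max(logits_2)))

The first print shows the two "stacks" agreeing to eleven-plus significant digits; the second shows a constructed case where that is still not enough.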

reply
You are correct that implementations of numerical functions in hardware differ, but I do not think you correctly understand the implications of this.

>And very rarely, this kind of thing might happen naturally.

It is not a question of rarity, it is a question of the stability of the numerical problem. Luckily, most of the computation in an LLM is matrix multiplication, which is an extremely well-understood numerical problem and which can be checked for good conditioning.

Two different numerical implementations differing significantly on a well-conditioned problem that requires a lot of computation would indicate a disastrous fault in the design or condition of the hardware, one that would be noticed by most computations done on that hardware.
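
To make those two points concrete, here is a hedged NumPy sketch (illustrative sizes and thresholds, not a real test suite): check the conditioning of the inputs, then compare a reduced-precision product against a float64 reference as a stand-in for two different hardware paths.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((512, 512))
    B = rng.standard_normal((512, 512))

    # Conditioning check: a random Gaussian matrix is well conditioned,
    # so small rounding differences can only cause small relative errors.
    print(np.linalg.cond(A))  # typically in the hundreds to a few thousand

    # Two "implementations" of the same product: float32 (stand-in for one
    # hardware path) vs. a float64 reference (stand-in for another).
    C32 = (A.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)
    C64 = A @ B
    rel_err = np.linalg.norm(C32 - C64) / np.linalg.norm(C64)
    print(rel_err)  # roughly on the order of 1e-6: measurable, nowhere near "wrong"

On genuinely faulty hardware the relative error would be orders of magnitude larger, and it would show up in every workload that touches the unit, not just one model.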

If you weigh the likelihood of OP running into a hardware bug that causes significant numerical error in one specific computational model against the alternative explanation of a problem in the software stack, it is clear that the latter is orders of magnitude more likely. Finding a single floating point arithmetic hardware bug is exceedingly rare (although Intel had one, the Pentium FDIV bug), but stacking them up in a way in which one particular neural network does not function, while everything else on the hardware runs perfectly fine, is astronomically unlikely.

reply
> yet so minor that it does not affect any of the software on the device utilizing that hardware

You're being unfair here. The showpiece software that uses that hardware wouldn't install, and almost all other software ignores it.

reply
The hardware itself is utilized by many pieces of software on any Apple device. Face ID uses it, Siri uses it, the camera uses it, and there are other on-device Apple LLM features where you could easily test whether the basic capabilities are there.

I highly doubt that you could have a usable iPhone with a broken neural engine; at the very least it would be obvious to the user that something is very wrong.

reply
> The conclusion, that it was not the fault of the developer, was correct, but assuming anything other than a problem at some point in the software stack is unreasonable.

Aah, the old "you're holding it wrong" defense.

reply
What do you mean? The developer is perfectly justified in being upset over a basic example not functioning correctly, due to a bug on the part of Apple's developers. It just wasn't reasonable to assume that the bug was due to malfunctioning hardware.
reply