Inference at low bit-widths is easy. Training is where the wheels come off, because you spend the saved math budget on gradient tricks and rescaling just to stop the model from drifting.

That trade loses outside tight edge deployments. Float formats stuck around for boring reasons: they handle ugly value ranges and they fit the GPU stack people already own.
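
To make the "gradient tricks" concrete, here is a minimal sketch of the straight-through estimator most binarized-training setups lean on, written against PyTorch. The class name and the clipping choice are illustrative, not taken from any particular paper:

```python
# Hedged sketch: the common straight-through estimator (STE) trick for training
# with binarized weights. Names and the clipping rule are my own illustration.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)            # forward pass uses {-1, +1} weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # "gradient trick": pretend sign() was the identity, but zero the grad
        # where |w| > 1 so the latent full-precision weights don't drift
        return grad_out * (w.abs() <= 1).float()

# usage: keep a float "latent" weight, binarize it on the fly each step
latent_w = torch.randn(128, 64, requires_grad=True)
x = torch.randn(32, 128)
out = x @ BinarizeSTE.apply(latent_w)   # matmul runs on binary-valued weights
out.sum().backward()                     # grads still flow to the float latents
```

Note the float latent copy and the extra masking in backward: that is exactly the rescaling/bookkeeping overhead that eats into the math you saved by going binary.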

> and standard ML theory works over real numbers.

This paper uses binary numbers only, even for training, with a solid theoretical foundation: https://proceedings.neurips.cc/paper_files/paper/2024/file/7...

TL;DR: they introduce a concept called "Boolean variation", a binary analog of the Newton/Leibniz derivative, which lets them run backpropagation directly in binary.
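
For a flavor of what a derivative-like object over booleans can look like, here is a toy Python illustration of the classical Boolean difference, which asks whether flipping one input flips the output. The paper's actual "Boolean variation" is a richer, signed notion, so treat this only as an analogy, not their definition:

```python
# Toy illustration only: the classical Boolean difference, not the paper's
# exact "Boolean variation". df/dx_i is True where flipping input i flips f.
def boolean_difference(f, inputs, i):
    flipped = list(inputs)
    flipped[i] = not flipped[i]
    return f(*inputs) != f(*flipped)

# example: f(a, b) = a AND b
f = lambda a, b: a and b
print(boolean_difference(f, (True, True), 1))   # True: flipping b flips the output
print(boolean_difference(f, (False, True), 1))  # False: output stays False
```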
