What happens is that you get very spiky activations, so-called "outlier" activations. An easy-to-read paper that explains this is SmoothQuant [0]. Another source, from Anthropic and the mechanistic interpretability people, calls these a "privileged basis" [1].
Now, given the weight symmetries of a typical transformer, these don't actually need to exist. Weight symmetries are the ways you can change the weights without changing the mathematical function the network computes; there's a broad class of these because the linear algebra has a lot of redundancy in it.
But the behaviour of the Adam optimizer is such that you do end up with these things, because it optimizes toward them more quickly. This comes from the fact that it applies an elementwise dynamic learning rate (and probably partly from the epsilon).
[0] https://arxiv.org/pdf/2211.10438 [1] https://transformer-circuits.pub/2023/privileged-basis/index...
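To make "elementwise dynamic learning rate" concrete, here's a toy sketch (NumPy, my own made-up numbers) of a single Adam step from zero-initialized moment estimates: after bias correction the update is grad / (|grad| + eps), i.e. roughly sign(grad), whereas SGD keeps the gradient's relative scales:

```python
import numpy as np

rng = np.random.default_rng(0)
# Gradient components at wildly different scales
grad = rng.normal(size=5) * np.array([1e-3, 1e-1, 1.0, 10.0, 1e-4])

# SGD step: proportional to the gradient itself
sgd_step = -0.01 * grad

# First Adam step from zeroed moments (bias-corrected):
# m_hat = grad, v_hat = grad**2, so the step is grad / (|grad| + eps) ~ sign(grad)
eps = 1e-8
adam_step = -0.01 * grad / (np.abs(grad) + eps)

print(sgd_step)   # spans many orders of magnitude
print(adam_step)  # every coordinate has magnitude ~0.01, i.e. a +-1 direction
```

So on the very first step Adam already moves along a (scaled) vector of +-1s regardless of how imbalanced the gradient was; later steps soften this but keep the elementwise normalization.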
> In particular, we can generate fixed random rotation matrices at initialization, and multiply them into the activations any time we read from or write to the residual stream.
I guess I was mistaken in assuming this part was one of the TurboQuant-specific innovations. Still an interesting concept though.

The thing about Muon is that it doesn't have the specific feature of Adam that causes it to "move along the diagonal". Basically, if you flatten the weights into one huge vector of a few billion elements: SGD moves along the gradient, which isn't biased toward any particular direction. Adam normalizes everything elementwise, so it effectively moves along a vector of +-1s.
This isn't a proof or anything, but you can imagine that if you always move along +-1 directions, you somehow end up at spiky solutions. Not sure how to prove that. Muon doesn't do this, but it has its own funky reshaping of the update (it moves along low-rank directions).
[0] https://www.lesswrong.com/posts/yrhu6MeFddnGRSLtQ/adam-optim...
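The rotation trick quoted earlier is easy to demo: a fixed random orthogonal matrix smears an outlier coordinate's energy across every dimension while preserving norms (and the overall function, if you rotate back before the next layer). A toy NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# A "spiky" activation vector: one outlier coordinate dominates
x = rng.normal(size=512)
x[7] = 100.0

# Fixed random rotation: orthogonal Q from the QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(512, 512)))
y = Q @ x

print(np.abs(x).max())  # 100.0, the outlier
print(np.abs(y).max())  # far smaller: the outlier's energy is spread over all coordinates
# The norm (and hence inner-product structure) is preserved by the rotation:
print(np.linalg.norm(x), np.linalg.norm(y))
```

After the rotation the per-coordinate dynamic range is much tighter, which is exactly what makes low-bit quantization of the activations easier.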
In simple terms, large ML models like LLMs often learn trivial rules such as "if the 21st decimal place of the 5th dimension in the embedding vector is 5, then the image is of a cat." Learning such a memorization function is usually not what we're after, and there are a variety of techniques to avoid these trivial solutions and "smooth" the optimization geometry.
Let's pick a simpler compression problem where changing the frame of reference improves packing.
There's a neat trick in the context of floating point numbers.
Values do not always compress well when stored exactly as given.
[0.1, 0.2, 0.3, 0.4, 0.5]
Maybe I can encode them in 15 bytes instead of 20 as float32.
Bump the frame of reference up to decibels instead of bels, and we can encode them as sequential small integers, without storing an exponent or sign for each one.
Changing the frame of reference makes the numbers "more alike" than they were originally.
But how you pick a good frame of reference is all heuristics and optimization gradients.
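The bels-to-decibels example above can be sketched in a few lines (standard library only; the byte counts are the point, not the specific encoding):

```python
import struct

vals = [0.1, 0.2, 0.3, 0.4, 0.5]

# Stored exactly as given: five float32s, 4 bytes each, 20 bytes total
raw = struct.pack("<5f", *vals)
print(len(raw))  # 20

# Change the frame of reference to decibels: scale by 10 and the values
# become the small sequential integers 1..5, each fitting in a single byte
scaled = [round(v * 10) for v in vals]
packed = bytes(scaled)
print(len(packed), list(packed))  # 5 [1, 2, 3, 4, 5]

# Lossless round trip back to the original frame of reference
restored = [b / 10 for b in packed]
assert restored == vals
```

Same information, a quarter of the bytes, purely because the new frame of reference made the values "more alike".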
>How can a boolean value preserve all of the relational and positional information between data points?
They aren't reducing the entire vector to a boolean, only each of its dimensions.