For those who want to :popcorn-meme: the drama, there are some great comments on the peer review of the TurboQuant paper: https://openreview.net/forum?id=tO3ASKZlok
reply
There are also more papers on similar themes.

For example, TurboQuant makes use of QJL (quantized Johnson-Lindenstrauss transformations). One of the first papers to characterize the QJL, and in fact the rate-distortion tradeoff for quantized matrix multiplication in general, is "Optimal Quantization for Matrix Multiplication" (https://arxiv.org/abs/2410.13780) by Ordentlich and Polyanskiy.
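
For readers unfamiliar with QJL: the idea, roughly, is to project a vector with a random Gaussian matrix, keep only the sign bits plus the vector's norm, and correct the inner product with a sqrt(pi/2) factor. A minimal NumPy sketch of that kind of estimator (my own illustration of the idea, not code from any of these papers; the names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 128, 1024                   # input dimension, number of projections
S = rng.standard_normal((m, d))    # random JL projection matrix

def qjl_encode(k):
    """Keep only the sign bits of the projection plus ||k||."""
    return np.sign(S @ k), np.linalg.norm(k)

def qjl_inner(q, bits, k_norm):
    """Unbiased estimate of <q, k>: E[sign(s.k)(s.q)] = sqrt(2/pi) <q, k> / ||k||."""
    return np.sqrt(np.pi / 2) * k_norm * (S @ q) @ bits / m

q, k = rng.standard_normal(d), rng.standard_normal(d)
bits, k_norm = qjl_encode(k)
print(q @ k, qjl_inner(q, bits, k_norm))   # estimate is correct on average
```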

There is also a more accessible survey paper on quantized matrix multiplication, "High-Rate Quantized Matrix Multiplication: Theory and Practice" (https://arxiv.org/abs/2601.17187), by the same authors.

TurboQuant cites none of them.

reply
TurboQuant is starting to look like a case study in how to turn a fragile paper into a breakthrough story.

The attribution is thin, the “6x compression” headline is not clearly separated from prior KV-cache quantization baselines like KIVI, and the RaBitQ comparison is hard to take seriously: single-core CPU for the baseline, A100 GPU for TurboQuant. It is comparing apples-to-datacenter. Worse, there are also public OpenReview comments saying that even the reported accuracy results are not reproducible.

Hard to believe this is the standard for something being promoted as a breakthrough. If this came from a random startup blog, people would be much harsher about it.

reply
I believe our claim at this point is more fundamental than just a lack of citation.

The quantizer in TurboQuant is EDEN quantization (ICML 2022) applied to the KV-cache. It is neither a novel quantizer nor an improvement in quantization techniques.

In DRIVE/EDEN, we already introduced the version used in the TurboQuant paper and suggested optimal scale configurations that are better in both the MSE-minimizing and unbiased scenarios.
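
For concreteness, here is a toy NumPy sketch of the 1-bit (DRIVE-style) case with the two scales, based on my reading of the papers. A QR rotation stands in for the cheaper randomized Hadamard transforms the papers actually use, and `s_mse`/`s_unb` are my own names:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256

# Random orthonormal rotation (the papers use randomized Hadamard transforms).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = rng.standard_normal(d)
z = Q @ x                    # rotated vector
bits = np.sign(z)            # 1 bit per coordinate

s_mse = np.abs(z).sum() / d            # MSE-minimizing (biased) scale
s_unb = (x @ x) / np.abs(z).sum()      # unbiased scale

x_hat_mse = s_mse * (Q.T @ bits)
x_hat_unb = s_unb * (Q.T @ bits)
print(np.linalg.norm(x - x_hat_mse), np.linalg.norm(x - x_hat_unb))
```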

reply
I wonder how often this happens in practice - by "this", I mean the industry/LLM world not noticing* some research until a bigger player repeats it with louder PR.

(*hopefully I didn't misunderstand the situation)

reply
Ask Jürgen Schmidhuber
reply
If we go only by the cases that are publicly known, it already happens all the time. Lots of patents are a race between multiple parties to register first, too, and it's rarely settled fairly.
reply
https://docs.vllm.ai/en/v0.20.0/api/vllm/model_executor/laye...

`vllm.model_executor.layers.quantization.turboquant`

> The technique implemented here consists of the scalar case of the HIGGS quantization method (Malinovskii et al., "Pushing the Limits of Large Language Model Quantization via the Linearity Theorem", NAACL 2025; preprint arXiv:2411.17525): rotation + optimized grid + optional re-normalization, applied to KV cache compression. A first application of this approach to KV-cache compression is in "Cache Me If You Must: Adaptive Key-Value Quantization for Large Language Models" (Shutova et al., ICML 2025; preprint arXiv:2501.19392). Both these references pre-date the TurboQuant paper (Zandieh et al., ICLR 2026).
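
For those who haven't seen the pipeline before, the "rotation + optimized grid + optional re-normalization" recipe can be sketched in a few lines of NumPy. This is only my toy illustration of that docstring, with a QR rotation standing in for the Randomized Hadamard Transform and a grid fit by plain Lloyd iterations:

```python
import numpy as np

rng = np.random.default_rng(0)
d, levels = 256, 4    # 4 grid points = 2 bits per coordinate

# 1. Fit a Lloyd-Max grid for N(0, 1) by running Lloyd iterations on samples.
samples = rng.standard_normal(200_000)
grid = np.quantile(samples, (np.arange(levels) + 0.5) / levels)   # init
for _ in range(50):
    idx = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)
    grid = np.array([samples[idx == j].mean() for j in range(levels)])

# 2. Rotate and quantize each coordinate to the nearest grid point.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # stand-in for an RHT
x = rng.standard_normal(d)
z = Q @ x * np.sqrt(d) / np.linalg.norm(x)         # coords approx N(0, 1)
q = grid[np.abs(z[:, None] - grid[None, :]).argmin(axis=1)]

# 3. Optional re-normalization, then undo the rotation and scaling.
q *= np.linalg.norm(z) / np.linalg.norm(q)
x_hat = Q.T @ q * np.linalg.norm(x) / np.sqrt(d)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative error
```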

reply
Those works did cite DRIVE/EDEN :)

HIGGS is an extension of EDEN (using the well-known blockwise Lloyd-Max method).

The proper framing of this "TurboQuant" layer in vLLM (which does not include QJL) is precisely EDEN '22 without the scale correction.

reply
Thanks a lot for pointing this out. I will update the explainer to add the prior literature so that there is proper attribution.
reply
Thanks for the quick response and for being willing to update the explainer. I really appreciate the clarification.
reply
I have added the lineage section in the explainer: https://arkaung.github.io/interactive-turboquant/#lineage
reply
Thanks for that! Note that the residual chain is empirically and theoretically inferior to our unbiased scale; furthermore, it requires an additional bit in certain cases. Additionally, TurboQuant was not the first to apply EDEN to the KV-cache (see, for example, https://arxiv.org/abs/2411.17525 from 2024).
reply
https://arxiv.org/abs/2604.18555

"This note clarifies the relationship between the recent TurboQuant work and the earlier DRIVE (NeurIPS 2021) and EDEN (ICML 2022) schemes. DRIVE is a 1-bit quantizer that EDEN extended to any bits per coordinate; we refer to them collectively as EDEN. First, TurboQuant is a special case of EDEN obtained by fixing EDEN's scalar scale parameter to . EDEN supports both biased and unbiased quantization, each optimized by a different (chosen via methods described in the EDEN works). The fixed choice used by TurboQuant is generally suboptimal, although the optimal for biased EDEN converges to as the dimension grows; accordingly TurboQuant approaches EDEN's behavior for large . Second, TurboQuant combines a biased -bit EDEN step with an unbiased 1-bit QJL quantization of the residual. It is suboptimal in three ways: (1) its -bit step uses the suboptimal ; (2) its 1-bit unbiased residual quantization has worse MSE than (unbiased) 1-bit EDEN; (3) chaining a biased -bit step with a 1-bit unbiased residual step is inferior to unbiasedly quantizing the input directly with -bit EDEN. Third, some of the analysis in the TurboQuant work mirrors that of the EDEN works: both exploit the connection between random rotations and the shifted Beta distribution, use the Lloyd-Max algorithm, and note that Randomized Hadamard Transforms can replace uniform random rotations. Experiments support these claims: biased EDEN (with optimized ) is more accurate than TurboQuant, and unbiased EDEN is markedly more accurate than TurboQuant, often by more than a bit (e.g., 2-bit EDEN beats 3-bit TurboQuant). We also repeat all accuracy experiments from the TurboQuant paper, showing that EDEN outperforms it in every setup we have tried."

reply
Here is another claim that the results repeat previous research:

https://x.com/Tim_Dettmers/status/2041496879238611455

reply
Are you guys going to follow up with a paper showing EDEN matches or beats TurboQuant on needle-in-a-haystack benchmarks?
reply
The note includes extensive experiments and reproduces many of the figures from the TurboQuant paper in our Section 5. Honestly, I think our case is pretty clear-cut as is. I am not sure what the overhead for those specific benchmarks would be, but we will look into it.

(In any case, I want to emphasize that the TurboQuant quantizer is a special case of EDEN.)

reply
With the amount of traction this has gotten... coming out with a clear set of experiments, even as an arXiv paper, would be of great help to showcase your improvements. And if they're easily reproducible, they could get integrated into mainstream inference engines as well, since the main point here is compression with little degradation.
reply
When you use TurboQuant, you are essentially using the EDEN quantizer under a different name, applied to the KV-cache.

Both EDEN and its 1-bit variant have been implemented in PyTorch, JAX, and TensorFlow across numerous open-source libraries and are used in various applications. I am currently writing a blog post that will document these in detail.

EDEN defines a scale parameter, S, for which we suggest specific optimal values for both biased and unbiased versions. As shown in the note I shared, these values lead to clear empirical improvements. Consequently, users who rely on the less optimal S value and the unbiasing method popularized by TurboQuant will generally see inferior results compared to those using EDEN with the optimal scale values suggested in our original papers.
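
As a quick illustration of the biased/unbiased distinction in the 1-bit case (using the two DRIVE-style scales sketched earlier in the thread; again my own toy code, not the authors' implementation): averaging many independent unbiased quantizations of the same vector recovers it, while the biased scale leaves a systematic shrinkage.

```python
import numpy as np

rng = np.random.default_rng(0)
d, reps = 64, 4000
x = rng.standard_normal(d)

def quantize(x, unbiased):
    """One-bit quantization with a fresh random rotation per call."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    z = Q @ x
    s = (x @ x) / np.abs(z).sum() if unbiased else np.abs(z).sum() / d
    return s * (Q.T @ np.sign(z))

for unbiased in (False, True):
    avg = np.mean([quantize(x, unbiased) for _ in range(reps)], axis=0)
    print("unbiased" if unbiased else "biased  ",
          "error of averaged estimate:", np.linalg.norm(avg - x))
```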

reply