AKA “we want more funding.”
To avoid a scenario where the intelligence community has secret access to quantum computing for a prolonged period, researchers opposed to that outcome are incentivized to shout fire when they see the glimmerings of a possibly productive path of research.
If the intelligence community is going to nab the first team that has a quantum computing breakthrough, does it actually help the public to speed up research?
It seems like an arms race the public is destined to lose because the winning team will be subsumed no matter what.
Luckily, in this particular arms race, all we the public need to do is swap encryption algorithms, and there's no risk of ending global civilization if we mess up. So we get the best of both worlds: quantum computing for civilian purposes (simulations and whatnot) with none of the terrifying surveillance capabilities. We just need to update a couple of libraries.
By what margin? An active push can minimize the gap.
However, I think you're confusing the existence of a CRQC with the adoption of PQC algorithms. The latter can be done in the absence of the former.
Filippo Valsorda (maintainer of Golang's crypto packages, among other things) published a summary yesterday [0] targeted at relative laypeople, with the same "we need to target 2029" bottom line.
AES probably can't be broken, but that's irrelevant: in this scenario you already have the key in plaintext from the broken key exchange.
The algorithm everyone tends to be thinking of when they bring this up has literally nothing to do with any cryptography used anywhere ever; it was wildly novel, and it was interesting only because it (1) had really nice ergonomics and (2) failed spectacularly.
It's hubris to say there are no questions, especially for key exchange. The general classes of mathematical problems for PQC seem robust, but that's generally not how crypto systems fail. They fail in the details, both algorithmically and in implementation gotchas.
From a security engineering perspective, there's no persuasive reason to avoid general adoption of, e.g., the NIST selections and related approaches. But when people suggest not to use hybrid schemes because the PQC selections are clearly robust on their own, well then reasonable people can disagree. Because, again, the devil is in the details.
The need to proclaim "no questions" feels more like a reaction to lay skepticism and potential FUD, for fear it will slow the adoption of PQC. But that's a social issue, and indulging that urge may cause security engineers to let their guard down.
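For the curious, the "hybrid" idea is simple to sketch. The point is that a session key is derived from both a classical shared secret and a PQC one, so an attacker must break both. This is a simplified illustration (the function name and inputs are hypothetical; real protocols like the TLS hybrid drafts run the combined secret through a full key schedule rather than a single hash):

```python
import hashlib

def hybrid_shared_secret(classical_ss: bytes, pq_ss: bytes, transcript: bytes) -> bytes:
    """Derive one session key from both key-exchange outputs.

    An attacker must recover BOTH shared secrets (e.g. the X25519 one
    and the ML-KEM one) to compute the session key, so a surprise break
    of the PQC component alone does not expose traffic.
    """
    # Bind both secrets and the handshake transcript into the derivation.
    return hashlib.sha256(classical_ss + pq_ss + transcript).digest()

# Hypothetical fixed shared secrets, for illustration only.
key = hybrid_shared_secret(b"\x01" * 32, b"\x02" * 32, b"handshake-transcript")
assert len(key) == 32
```

The design choice being debated upthread is exactly this: whether the classical component is worth keeping as a hedge while the PQC schemes accumulate scrutiny.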
SIKE: not lattices. Literally moon math. Do you understand how SIKE/SIDH works? It's fucking wild.
I'm going to keep saying this: you know the discussion is fully off the rails when people bring SIKE/SIDH into it as evidence against MLKEM.
DJB himself seems to prefer hybrid over non-hybrid precisely over concern about the unknowns: https://blog.cr.yp.to/20260219-obaa.html
These doubts may not be the kind curious onlookers have in mind, but to say there are no doubts among researchers and practitioners is a misrepresentation. In fact, you're flatly contradicting what DJB has said on the matter:
> SIKE is not an isolated example: https://cr.yp.to/papers.html#qrcsp shows that 48% of the 69 round-1 submissions to the NIST competition have been broken by now.
https://archive.cr.yp.to/2026-02-21/18:04:14/o2UJA4Um1j0ursy...
Unqualified assurances are what you hear from a salesman. You're trying to sell people on PQC. There's no reason to believe ML-KEM is a lemon, but you're effectively saying, "it's the last KEX scheme we'll ever need", and that's just not honest from an engineering point of view, even if it's what people need to hear.
And, if we're on the subject of how trustworthy Bernstein's concerns are, I'll note again: in his own writing about the potential frailty of MLKEM, he cites SIKE, because, again, he thinks you're too dumb to understand the difference between a module lattice and a generic lattice.
Finally, I'm going to keep saying this until I don't have to say it anymore: PQC is not a "kind" of cryptography. It doesn't mean anything that N% of the Round 1 submissions to the NIST PQC Contest were cryptanalyzed. Multivariate quadratic equation cryptography, supersingular isogeny cryptography, and F_2^128 code-based cryptography are not related to each other. The point of the contest was for that to happen.
The elephant in the room in these conversations is Daniel Bernstein and the shade he has been casting on MLKEM for the last few years. The things I think you should remember about that particular elephant are (1) that he's cited SIDH as a reason to be suspicious of MLKEM, which indicates that he thinks you're an idiot, and (2) that he himself participated in the NIST PQC KEM contest with a lattice construction.
1. cryptography developed across the world,
2. the actual schemes were overwhelmingly by European authors,
3. standardized by the US,
4. other countries' standardizations have been substantially similar (e.g. the ongoing Korean one, the German BSI's recommendations; China's CACR [had one with substantially similar schemes](https://www.sdxcentral.com/analysis/china-russia-to-adopt-sl...). Note that this is separate from a "standardization", which sounds like it is starting soon).
In particular, given that China + the US ended up with (essentially the same) underlying math, you'd have to have a very weird hypothetical scenario for the conclusion to not be "these seem secure", and instead "there is a global cabal pushing insecure schemes".
1. the main "eye catching" attack was the [attack on SIDH](https://eprint.iacr.org/2022/975.pdf). It went from "thought to be entirely secure" to "broken in 5 minutes with a Sage (Python variant) implementation" within ~1 week: a degradation from "thought to be (sub-)exponential time" to "polynomial time". Very bad.
2. the other main "big break" was the [RAINBOW attack](https://eprint.iacr.org/2022/214.pdf). This was a big attack, but it did not break all parameter sets; it didn't suddenly reduce a problem from exp-time to poly-time. Instead, it was a (large) speedup for existing attacks.
anyway, someone popular among some people in tech (the cryptographer Dan Bernstein) has been trying (successfully) to slow the PQC transition for ~10 years. His strategy throughout has been complaining that a very particular class of scheme ("structured LWE-based schemes") is suspect. His complaints have shifted throughout the years (galois automorphism structure for a while, then whatever his "spherical models" stuff was, lmao). There have been no appreciably better attacks (nothing like the above) on them since then. But he still complains, saying that instead people should use
1. NTRU, a separate structured lattice scheme (that he coincidentally submitted a scheme for standardization with). Incidentally, it had [a very bad attack](https://eprint.iacr.org/2016/127) ~2016. It didn't kill PQC, but it killed a broad class of other schemes (NTRU-based fully homomorphic encryption, at least using tensor-based multiplication).
2. McEliece, a scheme from the late 70s (that has horrendously large public keys --- people avoid it for a reason). He also submitted a version of this for standardization. It also had a [greatly improved attack recently](https://eprint.iacr.org/2024/1193).
Of course, none of those are relevant to improved attacks on the math behind ML-KEM (algebraically structured variants of ring LWE). There has been some progress there, but not much: it's really just "shaving bits", e.g. going from 2^140 to 2^135 type things. The RAINBOW attack (of the two above, the "mild" one) reduced things by a factor of ~2^50, which is clearly unacceptable.
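To put those exponents in perspective, a back-of-the-envelope comparison (the attack-cost figures are the rough ones quoted above, not an exact cryptanalytic accounting):

```python
# "Shaving bits": dropping an attack's cost from 2^140 to 2^135
# operations is only a 32x speedup -- the scheme stays far out of
# reach of any attacker.
mild_speedup = 2**140 // 2**135
assert mild_speedup == 32

# By contrast, a ~2^50 speedup (the RAINBOW-style break) is roughly
# a quadrillion-fold improvement -- enough to invalidate parameter sets.
big_speedup = 2**50
assert big_speedup > 10**15
```

This is why "shaving bits" results are tracked but not alarming, while a 2^50 jump is a break.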
Unfortunately, adherents of Dan Bernstein will pop up and start confidently saying a bunch of stuff that is much too annoying to refute, as they have no clue what the actual conversation is. So the conversation becomes
1. people who know things, who tend not to bother saying anything (with rare exceptions), and 2. people who parrot Dan's opinions (very wrong at this point, honestly; they've shifted over time, so it's more 'wrong' and 'unwilling to admit it was wrong').
the dynamic is similar to internet discussions of vaccines: many medical professionals don't bother engaging, so a bunch of insane anti-vax conspiracies spread.
> anyway, someone popular among some people in tech (the cryptographer Dan Bernstein) has been trying (successfully) to slow the PQC transition for ~10 years
Sounds enough like throwing shade to make me doubt its value, in the absence of other signals.
My point was that your history of posting knowledgeably about security and cryptography provides the credibility for me to go do more reading about the stuff in mswphd's post.