However, at some point, 100x faster performance without an exploitable attack vector is also relevant! (Though sometimes people do find ways.)
CSPRNGs are mostly worried about very specific attack vectors, and sure, they're likely to be completely unpredictable. But other applications care more about other failure modes, like a lack of k-dimensional equidistribution, and that hurts them far more.
The idea that CSPRNGs are the be-all and end-all of RNGs holds CS back.
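To make the equidistribution point concrete, here's a toy sketch (the tiny LCG parameters are mine, chosen only for illustration): a full-period LCG visits every value exactly once per period, so it is 1-dimensionally equidistributed, yet consecutive pairs only ever hit m of the m^2 possible combinations, so it fails in dimension 2.

```python
from collections import Counter

# Tiny full-period LCG (parameters satisfy the Hull-Dobell conditions,
# so the sequence visits every residue mod m exactly once per period).
m, a, c = 64, 5, 1
x = 0
outputs = []
for _ in range(m):
    x = (a * x + c) % m
    outputs.append(x)

ones = Counter(outputs)                                   # each value once
pairs = Counter(zip(outputs, outputs[1:] + outputs[:1]))  # consecutive pairs

print(all(v == 1 for v in ones.values()))   # True: 1-dim equidistributed
print(len(pairs), "of", m * m, "possible pairs occur")  # only m of m^2
```

Knowing one output pins down the next completely here, which is exactly the "knowing some members of the k-tuple helps you guess the others" failure in its most extreme form.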
For emphasis, an empirically measurable deviation from k-equidistribution would be a cryptographic weakness (since it means that knowing some members of the k-tuple helps you guess the others). So that would be a strong claim requiring specific support.
Contrary to GP’s statement, I can’t find any claims of an actual test anywhere in the PCG materials, just “k-dimensional equidistribution: no”, which I’m guessing means what I’ve just said. This is, at worst, correct but a bit terse and very slightly misleading on O’Neill’s part; how GP could derive any practical consequences from it, however, I haven’t been able to understand.
One way to imagine what symmetric cryptography does is as a cellular automaton that is completely shuffled every iteration. In the case of Keccak/SHA-3, that is almost exactly what happens, too.
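As a loose illustration of that picture (this is my own toy, nothing like the real Keccak permutation or its round constants): a local nonlinear update rule followed by a fixed global shuffle of positions, so that a change anywhere gets spread across the whole state. It shows the shape of the idea, not any cryptographic strength.

```python
import random

N = 64
rng = random.Random(0)
shuffle = list(range(N))
rng.shuffle(shuffle)  # a fixed, public permutation of positions

def step(bits):
    # Local nonlinear rule (Rule 30 style): b[i] = b[i-1] XOR (b[i] OR b[i+1])
    mixed = [bits[i - 1] ^ (bits[i] | bits[(i + 1) % N]) for i in range(N)]
    # Global diffusion: permute positions across the whole state
    return [mixed[shuffle[i]] for i in range(N)]

state = [0] * N
state[0] = 1          # a single set bit
for _ in range(10):
    state = step(state)
print(sum(state))     # popcount of the state after 10 mixing rounds
```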
What is a perfect CSPRNG?
A good MCMC simulation might test that! E.g., say, training a large diffusion model: such a run takes far more compute time than the mean time to failure of a single computer.
Also, the standards of those statistical test suites are a different question from whether the generator biases the statistical model fitted with MCMC.
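On that last point, here's a toy sketch (mine, not from the thread) of how a generator can look fine to a 1-D frequency test yet badly bias a Monte Carlo estimate: each output below is perfectly uniform on [0,1), but using perfectly correlated consecutive pairs drags a pi estimate from about 3.14 down toward 4/sqrt(2), about 2.83.

```python
import random

rng = random.Random(42)
n = 100_000

# Independent pairs: the usual Monte Carlo estimate of pi.
hits = sum(1 for _ in range(n)
           if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
pi_good = 4 * hits / n

# "Bad" generator: every output is still uniform on [0,1) (so a 1-D
# frequency test passes), but consecutive outputs are perfectly correlated.
hits = 0
for _ in range(n):
    u = rng.random()
    x, y = u, u           # correlated pair instead of an independent one
    if x * x + y * y <= 1.0:
        hits += 1
pi_bad = 4 * hits / n

print(round(pi_good, 2), round(pi_bad, 2))
```

The second estimate converges to 4 * P(u <= 1/sqrt(2)) = 4/sqrt(2), no matter how many samples you draw, which is the kind of silent bias a pure frequency test never sees.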