Sure, FP64 is a problem and not always available in the capacity people would like, but there are a lot of things you can do just fine with FP32, and all of that research and engineering absolutely is done on GPUs.
The AI craze also made all of it much more accessible. You don't need advanced C++ knowledge anymore to write and run a CUDA project. You can just take PyTorch, JAX, CuPy or whatnot and accelerate your numpy code by an order of magnitude or two, as in the sketch below. Basically everyone in STEM is using Python these days, and the scientific stack works beautifully with NVIDIA GPUs. Guess which chip maker will benefit if any of that research turns out to be a breakout success in need of more compute?
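For example, something like this is all it takes to move a NumPy workload onto the GPU (a minimal sketch; it assumes CuPy is installed with a working CUDA GPU, and the FFT workload is just an illustration):

    import numpy as np
    import cupy as cp  # assumes a working CUDA install

    x_cpu = np.random.rand(4096, 4096).astype(np.float32)

    # CuPy mirrors the NumPy API, so the same calls run on the GPU.
    x_gpu = cp.asarray(x_cpu)      # copy the array to the device
    y_gpu = cp.fft.fft2(x_gpu)     # 2D FFT executes on the GPU
    y_cpu = cp.asnumpy(y_gpu)      # copy the result back when needed

No kernel code, no compiler flags, no memory management beyond the explicit copies.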
When bitcoin was still profitable to mine on GPUs, AMD's cards performed better because they weren't segmented the way NVIDIA's were. It didn't help AMD, not that it matters. AMD started segmenting because they couldn't make a competitive card at a competitive price for the consumer market.
Ok. You're talking about performance.
> their performance per watt has oscillated during most of the time around 3 times and sometimes up to 4 times greater in comparison with CPUs
Now you're talking about perf/W.
> This is impressive, but very far from the "100" factor originally claimed by NVIDIA.
That's because you're comparing apples to apples per apple cart.
Even if we interpret the NVIDIA claim as referring to the performance available in a desktop, the GPU cards drew at most about double the power of the CPUs. Even with that extra factor, there has been more than an order of magnitude between reality and the NVIDIA claims.
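Back-of-the-envelope, taking the 3-4x perf/W figure from above and granting the GPU that roughly 2x power budget:

    absolute speedup ≈ (perf-per-watt ratio) × (power ratio)
                     ≈ (3 to 4) × 2
                     ≈ 6 to 8, versus the claimed 100

That still leaves a gap of more than 10x between the claim and the measured result.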
Moreover, I am not sure whether, around 2010 and earlier when these NVIDIA claims were frequent, the power allowed for PCIe cards had already reached 300 W or was still lower.
In any case, the "100" factor claimed by NVIDIA was supported by flawed benchmarks, which compared an optimized parallel CUDA implementation of some algorithm against a naive sequential implementation on the CPU, instead of against an optimized multithreaded SIMD implementation on that same CPU (the sketch below shows how large that baseline gap alone can be).
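A toy Python analogy of that baseline gap (the workload and matrix size are mine, purely for illustration): the "naive sequential" number comes from a hand-rolled scalar loop, while a fair CPU baseline goes through the multithreaded, SIMD-vectorized BLAS the CPU actually ships with.

    import numpy as np

    n = 256  # kept small, the naive loop is extremely slow
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    # "Naive sequential CPU" baseline: a scalar triple loop, one thread, no SIMD.
    def matmul_naive(a, b):
        n = a.shape[0]
        c = np.zeros((n, n), dtype=np.float32)
        for i in range(n):
            for j in range(n):
                s = 0.0
                for k in range(n):
                    s += a[i, k] * b[k, j]
                c[i, j] = s
        return c

    c_slow = matmul_naive(a, b)

    # Fair CPU baseline: the same product through NumPy's optimized BLAS,
    # which uses all cores and the SIMD units. The gap between these two
    # CPU versions is already orders of magnitude, before any GPU is involved.
    c_fast = a @ b

Quote the GPU's speedup over the first version and you get a headline number; quote it over the second and the gap shrinks to the single digits discussed above.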
That aside, it still didn't make sense to compare apples to apples per apple cart...