The parent article shows that B70 is faster than RTX 4000.
An RTX 4500 is faster than an RTX 4000, but it cannot be 3 times faster; it is not even 2 times faster.
The parent article is consistent with an RTX 4500 being faster than a B70 for ML inference, but by a much smaller ratio, e.g. less than 50% faster.
If you know otherwise, please point to the source.
If you have run a benchmark yourself, please describe the exact conditions.
In the llama.cpp benchmarks shown at Phoronix, relative performance varied widely across LLMs: for some LLMs a B70 was faster than an RTX 4000, but for others it was significantly slower.
Your 3x performance ratio may be true for a particular LLM with a certain quantization, but false for other LLMs or other quantizations.
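If anyone wants to quantify that variability rather than argue from one data point, a sweep with llama.cpp's llama-bench over a few models and quantizations is enough. A minimal sketch, assuming llama-bench is on the PATH; the model file names are placeholders:

```python
# Minimal sketch: sweep llama-bench (the benchmark tool that ships with
# llama.cpp) over several GGUF files and print the tokens/s tables, so
# the cross-model and cross-quantization variability can be seen directly.
import subprocess

MODELS = [  # hypothetical paths; substitute your own GGUF files
    "llama-3-8b-Q4_K_M.gguf",
    "llama-3-8b-Q8_0.gguf",
    "mistral-7b-Q4_K_M.gguf",
]

for model in MODELS:
    # -p 512: prompt-processing test, -n 128: token-generation test
    result = subprocess.run(
        ["llama-bench", "-m", model, "-p", "512", "-n", "128"],
        capture_output=True, text=True,
    )
    print(f"=== {model} ===")
    print(result.stdout)
```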
This performance variability may be caused by immature software support for the B70. For instance, instead of using matrix operations (the XMX engines), non-optimized software might fall back to traditional vector operations, which are slower.
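As a rough CPU-side analogy for the size of that gap (this is not the actual GPU code path, just the same class of difference between a kernel that reaches the fast units and one that does not):

```python
# CPU-side analogy only: matmul through an optimized BLAS kernel vs the
# same matmul as naive scalar loops. The point is the order-of-magnitude
# gap between an optimized and a non-optimized code path, not B70 numbers.
import time
import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_fast = a @ b  # dispatched to an optimized BLAS kernel
t_fast = time.perf_counter() - t0

t0 = time.perf_counter()
c_slow = np.zeros((n, n))
for i in range(n):          # naive triple loop, no fast path at all
    for j in range(n):
        s = 0.0
        for k in range(n):
            s += a[i, k] * b[k, j]
        c_slow[i, j] = s
t_slow = time.perf_counter() - t0

print(f"optimized: {t_fast:.5f}s, naive: {t_slow:.3f}s, "
      f"ratio ~{t_slow / t_fast:.0f}x")
```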
It is also possible that, for optimum performance with a certain LLM, one may need to choose a different quantization on the B70 than on NVIDIA, because below 16 bits Intel supports only integer number formats.
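For illustration, this is what a simple symmetric int8 quantization looks like, i.e. the kind of integer format such a backend would push you toward; the weights here are synthetic and the numbers purely illustrative:

```python
# Sketch: symmetric per-tensor int8 quantization and its reconstruction
# error. Illustrates why the "best" quantization is model- and
# backend-dependent; not tied to any specific B70 kernel.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(4096,)).astype(np.float32)  # fake weights

scale = np.max(np.abs(w)) / 127.0          # map max magnitude to int8 range
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dq = w_q.astype(np.float32) * scale      # dequantize for comparison

err = np.sqrt(np.mean((w - w_dq) ** 2))
print(f"scale={scale:.6f}, RMS error={err:.6f}")
```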
At that power consumption you also end up more expensive than API calls, and many times slower. It starts to feel very stupid to run local inference.
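For concreteness, a back-of-envelope version of that comparison (electricity only, ignoring hardware amortization); every number below is a placeholder to substitute your own values into, not a measurement:

```python
# Back-of-envelope cost per million tokens, local GPU vs API.
# Every number here is a hypothetical placeholder; substitute your own.
watts = 300.0            # assumed sustained draw of the local box
price_kwh = 0.30         # assumed electricity price, $/kWh
tok_per_s = 5.0          # assumed (slow) local generation speed
api_price_mtok = 2.0     # assumed API price, $ per million tokens

hours_per_mtok = 1e6 / tok_per_s / 3600.0
local_cost_mtok = watts / 1000.0 * hours_per_mtok * price_kwh

print(f"local: ~{hours_per_mtok:.0f} h and ${local_cost_mtok:.2f} "
      f"in electricity per Mtok, vs ${api_price_mtok:.2f} via API")
```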
If the client is very keen on privacy, then they can pay for the NVIDIA.
I ended up returning my B70s and bought an RTX PRO 6000.
Hardware-wise, a B70 should be significantly faster at ML inference than any available CPU. If that was not the case in your tests, it must really be a software problem, so please identify the software you used, so others know what does not work.
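One quick sanity check is the memory-bandwidth bound: at batch size 1, generating a token has to stream roughly the whole weight file, so tokens/s is capped near bandwidth divided by model size. A sketch with assumed, not measured, bandwidth figures:

```python
# Rough upper bound on batch-1 decode speed: every generated token reads
# (approximately) the whole weight file, so tok/s <= bandwidth / size.
# Bandwidth figures below are assumed placeholders, not measured specs.
model_gb = 8.0  # e.g. a 7B model at ~1 byte/weight after quantization

for name, bw_gbs in [("GPU, assumed ~450 GB/s", 450.0),
                     ("CPU, assumed ~80 GB/s", 80.0)]:
    print(f"{name}: upper bound ~{bw_gbs / model_gb:.0f} tok/s")
```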