What you say is not consistent with TFA.

The parent article shows that B70 is faster than RTX 4000.

The RTX 4500 is faster than the RTX 4000, but not by 3 times; it is not even 2 times faster.

The parent article is consistent with RTX 4500 being faster than B70 for ML inference, but by a much smaller ratio, e.g. less than 50% faster.

If you know otherwise, please point to the source.

If you have run a benchmark yourself, please describe the exact conditions.

In the llama.cpp benchmarks shown at Phoronix, relative performance varied widely between LLMs: for some LLMs a B70 was faster than an RTX 4000, but for others it was significantly slower.

Your 3x performance ratio may be true for a particular LLM with a certain quantization, but false for other LLMs or other quantizations.

This performance variability may be caused by immature software for the B70. For instance, instead of using the matrix engines (XMX), non-optimized software might fall back to traditional vector operations, which are much slower for matrix multiplication.

It is also possible that for optimum performance with a certain LLM one may need to choose a different quantization for the B70 than for NVIDIA, because in sub-16-bit number formats Intel supports only integers.
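To make the quantization point concrete, here is a minimal sketch of symmetric per-tensor int8 quantization, the kind of sub-16-bit integer format referred to above. This is an illustration of the general technique, not of any specific llama.cpp or Intel code path; the function names are made up for the example.

```python
# Hypothetical sketch: symmetric per-tensor int8 quantization.
# Floats are mapped to the integer range [-127, 127] with one scale factor.

def quantize_int8(weights):
    """Quantize a list of floats to int8 values plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 values."""
    return [x * scale for x in q]

weights = [0.8, -1.27, 0.013, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Small weights lose precision: 0.013 rounds to the nearest multiple of scale.
```

The rounding error this introduces differs from the error of a floating-point sub-16-bit format, which is why the best quantization choice can differ between hardware vendors.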

reply
There are nonlinearities to exploit in that calculus. Given enough VRAM to host a larger model that you're targeting, just the size can push you past the usability threshold at a much better price.
reply
When you get 4 of these, the idle power alone is 120W. That is a lot of electricity if left on 24/7.

At that power consumption, you also end up being more expensive than API calls and many times slower. It starts to feel very stupid to run local inference.

If the client is very keen on privacy, then they can pay for the NVIDIA.

I ended up returning my B70s and bought an RTX PRO 6000.

reply
The problem is that the more B70s you have, the slower the inference gets (due to terrible software at the moment). A single B70 is barely faster than CPU inference. If you have 4 B70s, you might as well run inference on the CPU and be faster with cheaper DDR5 instead of GDDR6.
reply
For what you say to be useful, please specify what software you used with the B70, including its version.

Hardware-wise, a B70 should be significantly faster than any available CPU at ML inference. If that was not the case in your tests, it must be a software problem, so you should identify the software so that others know what does not work.

reply