CFD benefits from cache, but it benefits even more from sustained memory bandwidth, no? A small(ish) chunk of L3 + two channels of DRAM is not going to compete with a quarter as much L3 plus eight channels of DRAM when typical working set sizes (in my experience) are in the tens of gigabytes, is it?
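To put rough numbers on the bandwidth argument — all figures below are illustrative assumptions (a 40 GB working set, ~100 MB of stacked L3, DDR5-5600 at ~44.8 GB/s theoretical peak per channel), not measurements of any specific part:

```python
# Back-of-envelope sketch: how much can extra L3 help when the working
# set is tens of gigabytes?  All numbers are illustrative assumptions.
working_set_gb = 40.0        # assumed CFD working set
l3_gb = 0.1                  # ~100 MB of stacked L3 (assumed)

# DDR5-5600 theoretical peak: ~44.8 GB/s per channel.
per_channel_gbs = 44.8
two_ch_gbs = 2 * per_channel_gbs
eight_ch_gbs = 8 * per_channel_gbs

# Best case, L3 covers only a thin slice of each sweep over the mesh.
l3_coverage = l3_gb / working_set_gb

# Time for one full streaming sweep of the working set from DRAM.
t_two_ch = working_set_gb / two_ch_gbs
t_eight_ch = working_set_gb / eight_ch_gbs

print(f"L3 covers {l3_coverage:.2%} of the working set")
print(f"one sweep: {t_two_ch:.2f} s on 2 channels, "
      f"{t_eight_ch:.2f} s on 8 channels")
```

Under these assumptions the eight-channel part finishes each sweep 4x faster, while the big L3 can only absorb a fraction of a percent of the traffic — which is exactly the trade-off the question is getting at.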
reply
Sorry, what is "CFD" in this context?
reply
But the consumer parts don't support SDCI (only EPYC Turin does), so there's not much benefit when an accelerator is involved.
reply
It's also worth pointing out that the use cases and workloads where SDCI is most beneficial are far, far beyond the scope of what anyone will have installed in a Zen rig. Dual 100G networking cards? The cost of those two alone damn near buys an entire 9950X3D setup.
reply
No, dual 100Gb cards are not that expensive any more, e.g. https://www.scan.co.uk/products/2-port-intel-e810-cqda2blk-d... — UK retail for GBP 349.
reply
It really doesn't. In virtually every case the work is being completed faster than the cache can grow to that size. What little gains are being realized are from not having to wait for cores with access to the cache to become available.
reply
> It really doesn't. In virtually every case the work is being completed faster than the cache can grow to that size.

If your tasks don’t benefit, then don’t buy it.

But stop claiming that it doesn’t help anywhere because that’s simply wrong. I do some FEA work occasionally and the extra cache is a HUGE help.

There are also a lot of non-LLM AI workloads with models in a size range that fits into this cache.
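For a sense of scale — the parameter counts and the 96 MB L3 figure below are ballpark assumptions for illustration, not measured values:

```python
# Rough footprint check: which model sizes fit in ~96 MB of
# 3D V-Cache-class L3?  Parameter counts are ballpark assumptions.
L3_MB = 96

def model_mb(params_millions: float, bytes_per_weight: int = 4) -> float:
    """Weight footprint in MB (fp32 by default).

    1e6 params * bytes / 1e6 bytes-per-MB cancels out.
    """
    return params_millions * bytes_per_weight

models = {
    "small object detector, ~7M params, fp16": model_mb(7, 2),
    "ResNet-50-class CNN, ~25.6M params, fp32": model_mb(25.6),
    "7B-param LLM, fp16": model_mb(7000, 2),
}

for name, mb in models.items():
    verdict = "fits" if mb <= L3_MB else "does not fit"
    print(f"{name}: {mb:,.0f} MB -> {verdict} in {L3_MB} MB of L3")
```

A small detector's weights fit comfortably, a mid-sized fp32 CNN is already borderline, and anything LLM-sized misses by orders of magnitude — which matches the non-LLM distinction above.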

reply
There are some very specific workloads (say, simple object detection) that fit into cache and see crazy performance; for those, the value of this CPU will be unbeatable, since the alternative is one of the cache EPYCs. Everywhere else it'll only be a small improvement unless the software is purpose-made for it.
reply