upvote
I have been wondering this recently. The convention used to be that if you wanted to keep costs down, you kept the memory bus as narrow as possible. Still remember the awful Radeon 9200 SE: its 64-bit data bus strangled an already slow GPU.

Heck, I have a phone with a 16-bit memory bus, for instance. The high(ish) clock rate only makes up part of the difference.

But with prices going up across all components, it might not be such a big factor any more.

HBM might make sense for higher-end products, which can free up space for the lower end that will never use the tech.
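Peak bandwidth is just bus width times effective data rate, which is why a narrow bus needs a disproportionately high clock to keep up. A quick sketch of the arithmetic (the data-rate figures here are illustrative, not exact specs for either device):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * (transfers per second in GT/s)."""
    return (bus_width_bits / 8) * data_rate_gtps

# A 64-bit bus at an effective 0.4 GT/s (DDR-era GPU territory):
print(peak_bandwidth_gbs(64, 0.4))   # 3.2 GB/s

# A 16-bit phone bus would need a 4x higher data rate (1.6 GT/s)
# just to match that same 3.2 GB/s:
print(peak_bandwidth_gbs(16, 1.6))   # 3.2 GB/s
```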

reply
> 1TB/s HBM2 memory subsystem which is more than any consumer GPU you can get today

5090 has 1.8 TB/s?
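Yes, and the figure follows directly from the published specs (512-bit GDDR7 bus at 28 Gbps per pin):

```python
bus_width_bits = 512   # RTX 5090 memory bus width
pin_rate_gbps = 28     # GDDR7 per-pin data rate, Gb/s
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(bandwidth_gbs)   # 1792.0 GB/s, i.e. ~1.8 TB/s
```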

reply
It also does 64 bit floating point I think?
reply
I was gonna say, I still use an AMD Vega that uses HBM2.
reply
Vega was a card with decent perf/$ for the consumer, but from a pure technical point of view (perf/mm2, perf/BW, perf/W) it was a major failure. Both Vega (and Fiji before it) showed that excess memory BW alone is not sufficient to win.
reply
That card only had 16GB of memory; its memory bandwidth was 1TB/s.
reply
The Pro variant had 32GB, I had one in a 2019 Mac Pro
reply
You're saying this in a world where AMD's highest end consumer GPU in 2026 is also limited to 16 GB.
reply
The RX 7900 XTX has 24GB
reply
That card is 4 years old; it's not on store shelves anymore.
reply