They do have different infrastructure / electricity costs, and they might not run on Nvidia hardware.
It's not just the models.
Namely, Amazon Bedrock and Google Vertex.
That means normalized infrastructure costs, normalized electricity costs, and normalized hardware performance. Most likely a normalized inference software stack, even. It's about as close to a 1-to-1 comparison as you can get.
Both Amazon and Google serve Opus at roughly half the speed of the Chinese models. Note that they have no incentive to slow down the serving of either Opus or the Chinese models! So that tells you the ratio of active params between Opus and the Chinese models.
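The inference above can be sketched as back-of-envelope arithmetic. The key assumption (mine, not stated in the comment) is that decoding is memory-bandwidth bound, so tokens/sec scales roughly as 1 / active params; the function name and the example numbers are hypothetical:

```python
def implied_active_param_ratio(speed_a: float, speed_b: float) -> float:
    """Ratio of model A's active params to model B's, assuming
    tokens/sec is inversely proportional to active parameter count
    (i.e. memory-bandwidth-bound decoding on comparable hardware)."""
    return speed_b / speed_a

# Hypothetical numbers: Opus served at ~30 tok/s, a Chinese MoE model at ~60 tok/s.
ratio = implied_active_param_ratio(30.0, 60.0)
print(ratio)  # 2.0 -> Opus would have ~2x the active params
```

This only holds if both models are served with comparable batching and quantization; any of those differing would break the proportionality.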
The reply we got claimed about 10x, not 0.5x.
x86 vs arm64 can differ in performance, and the Chinese models could be optimized for different hardware, which could produce massive differences.
Also, with Nvidia you get the efficiency of everything (including inference) being built on/for CUDA; efforts to catch AMD up are still ongoing afaik.
I wouldn't be surprised if things like DS were trained and now hosted on Nvidia hardware.
They are. Nvidia makes A LOT of profit. Hey, top stock for a reason.
> I wouldn't be surprised if things like DS were trained and now hosted on Nvidia hardware
DS is "old"; I wouldn't study them. The new ones have a mandate to at least run on local hardware. There are data center requirements.
I agree they could still be trained on Nvidia GPUs (black market etc), but not served on them.
They do? Source?
But if that's true, it would explain why Minimax, Z.ai and Moonshot are all organized as Singaporean holding companies, with claimed data center locations (according to OpenRouter) in the US or Singapore and only the devs in China. Can't be forced to use inferior local hardware if you're just a body shop for a "foreign" AI company. ;)
They just have a China only endpoint and likely a company under a different name.
Nothing to do with AI. TikTok is similar (global vs China operations).