upvote
Does anyone have inside info on what these Huawei chips look like? I know Google has a torus architecture unlike Nvidia's fully connected one. Maybe it's a similar architectural decision on the Huawei chips that leads to bottlenecks in serving?
reply
https://www.huawei.com/en/news/2026/3/mwc-superpod-ai

>For AI computing, the Atlas 950 SuperPoD, powered by UnifiedBus, integrates 64 NPUs per cabinet and can scale up to 8,192 NPUs, delivering superior performance for large-scale AI training and high-concurrency inference.

reply
Plenty of other providers that offer much faster inference on GLM-5.1. Friendli, GMICloud, Venice, Fireworks, etc. And can be deployed through Bedrock already as well. Will probably be available generally in Bedrock soon, I would guess.
reply
better than Opus? not even close. after struggling through server overload for the past couple hours i finally put 5.1 through its paces and it's... okay. it failed some simple stuff that Sonnet/Opus/Gemini didn't. failed it badly and repeatedly, actually. this was in TypeScript, btw. not sure if i'll keep the subscription or not
reply
[flagged]
reply
I appreciate that it's not working for your use case, but it's unfortunate that you dismiss the experience of others. And I am not Chinese, I am European. Thanks for your feedback anyway.
reply
[dead]
reply
I tried Gemini 3.1 Pro once to implement a previously designed 7-phase plan. It only implemented a quarter of the plan before stopping; the code didn't even compile because half of the scaffolding was missing. It then confidently said everything was done.

Codex and GLM didn't have any issue following the exact same plan and getting a working app. So I would argue Gemini is the failure here.

reply
Sounds like you two are talking past each other. PDF work is a specific niche that, according to you, it fails at; the other person says it's good at coding.
reply
Scroll down to my other comment, I've used it specifically for coding as well.

"It couldn't even debug some moderately complicated python scripts reliably."

reply
“GLM5…better than Opus, Codex, Gemini…”

What a wild claim to make. Unsupported by benchmarks, unsupported by the consensus of the community, no evidence provided.

Sounds like in another comment here even the GLM5 team concedes they are behind the frontier wrt tool calling, do you know something they don’t?

reply
I know my use case and my personal experience :) I am not trying to pretend that it is the best in benchmarks, just sharing my experience so people know that some folks are having a very good experience with GLM models compared to the competition.

My only goal is to encourage people to try it out so they can see if it moves the needle for them, because there are fair chances that it will. I am not trying to start a flamewar or something.

reply
It’s not a flame war, and you’re not just sharing your experience and encouraging others to try it out.

You’re making a claim, and I’m pointing out that it’s unsubstantiated and not consistent with any other source of data, including that internal to the company that makes the model.

I hope you can see that that's different than saying "it's worked well for me."

reply
Sometimes we STEM folks are way too rigid. I obviously meant "IN MY OPINION, GLM models are at this point superior to...".

I do not think that anyone who read my comment understood it differently. But I grant you this point, this is just my opinion based on my personal experience not the result of a scientific study.

That said, I wasn't submitting a scientific paper for preprint, just posting my opinion on an internet forum.

Not sure why you are making such a big deal out of it, especially for something people can decide within minutes whether it works for them or not. And I haven't seen you nitpick other people saying that all Chinese models are garbage, incapable of doing even the most basic task, without citing any study. This kind of scrutiny tends to be one-sided.

Edit: and regarding what the z.ai team is saying about their models, just check their Discord and the articles they link there. They themselves say that their latest models have leading performance on a number of aspects. It is misleading to suggest that the authors of the model are not proudly claiming best-in-class performance.

reply
FWIW, my experience is the same. Paired with opencode it has been excellent to me.
reply