I think so for a few reasons:
1. The Reuters article explicitly says the model is compatible with domestic chips for inference, without mentioning training. I agree the passage is a bit confusing, but I read it as: the model was developed to be compatible with Ascends (and other domestic chips) for inference, after it had been trained.
2. The z.ai blog post likewise says it's compatible with Ascends for inference and doesn't mention training, consistent with the Reuters report: https://z.ai/blog/glm-5
3. When z.ai trained a small image model on Ascends, they made a big fuss about it. If they had trained GLM-5 with Ascends, they likely would've shouted it from the rooftops.
4. Ascends just aren't that good, particularly for large-scale training (inference is a much lower bar).
Also, you can definitely train a model on one chip and then serve inference on others; the official z.ai blog post mentions support for "deploying GLM-5 on non-NVIDIA chips, including Huawei Ascend, Moore Threads, Cambricon, Kunlun Chip, MetaX, Enflame, and Hygon" -- many different domestic chips. Note "deploying", not "training". A quick sketch of why this is routine is below.
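The mechanics are mundane: a checkpoint is just a file of tensors, independent of whatever hardware produced the gradients. Here's a minimal PyTorch sketch (a toy nn.Linear stand-in and a hypothetical checkpoint path, nothing GLM-specific) showing weights trained on one device being loaded for inference on another:

```python
import torch
import torch.nn as nn

# Toy stand-in for a real model.
model = nn.Linear(16, 4)

# "Train" on whatever accelerator is available (CUDA here as an example).
train_device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(train_device)
# ... training loop would go here ...

# The checkpoint contains device-agnostic tensors.
torch.save(model.state_dict(), "checkpoint.pt")

# Inference elsewhere: map the same weights onto a different device.
# With Huawei's torch_npu backend installed, "npu" works the same way.
infer_model = nn.Linear(16, 4)
state = torch.load("checkpoint.pt", map_location="cpu")
infer_model.load_state_dict(state)
infer_model.to("cpu")  # or "npu", etc., given the right backend
infer_model.eval()

with torch.no_grad():
    out = infer_model(torch.randn(1, 16))
print(out.shape)  # torch.Size([1, 4])
```

The hard part in real deployments isn't the weights, it's the vendor runtime and kernel support on the target chip -- which is exactly the kind of "compatibility" work the blog post is describing.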