Yes, but... isn't the same true for Opus and all the other models too?
So you're either paying thousands of dollars for Opus in Pi, or $30/month for GLM in Pi. If the results are mostly equivalent, that's an easy choice for most of us.
It also compresses the context once it reaches around 100k tokens.
In case anyone is interested: https://github.com/sebastian/pi-extensions/tree/main/.pi/ext...