These sound like significant, genuine gains unless one of the following is true, both of which would be really unlikely:
1. They somehow managed to benchmaxx every coding benchmark way harder than their own last generation.
2. They held back the coding performance of their last-generation 397B model on purpose to make this 3.6 Qwen model look good. (Basically a tinfoil-hat theory, as it would require 4D chess and deliberate self-harm.)
So it's pretty safe to say we actually have a competent agentic coding model we can leave running on a prosumer laptop overnight to create real software at almost zero token cost.
I've got 3x SBCs that can run the Gemma 4 26B MoE on the NPU. Around 4W extra power, 3 tokens a second... so they can hammer away at tasks 24/7 without moving the needle on the electricity bill.
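Back-of-envelope math on that claim, assuming the quoted 4W and 3 tok/s are per board, sustained 24/7, and an illustrative $0.15/kWh electricity price (all assumptions, not from the comment):

```python
# Rough cost/throughput estimate for 3 SBCs at the quoted figures.
BOARDS = 3
WATTS_PER_BOARD = 4        # quoted extra draw per board
TOKENS_PER_SEC = 3         # quoted generation rate per board
KWH_PRICE = 0.15           # assumed $/kWh; varies a lot by region

daily_kwh = BOARDS * WATTS_PER_BOARD * 24 / 1000       # energy per day
daily_cost = daily_kwh * KWH_PRICE                     # electricity cost per day
daily_tokens = BOARDS * TOKENS_PER_SEC * 60 * 60 * 24  # tokens per day

print(f"{daily_kwh:.3f} kWh/day, ${daily_cost:.2f}/day, {daily_tokens:,} tokens/day")
# → 0.288 kWh/day, $0.04/day, 777,600 tokens/day
```

At those numbers, "not moving the needle" checks out: a few cents a day for roughly three-quarters of a million tokens.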
They just use APIs though. They have very little interest in doing the model engineering and inference in-house.