My early takeaway is that Gemma 26B-A4B is the best-tuned of the bunch, but being small and having few active parameters, it's severely constrained by context: large inputs, and tasks that require large outputs, tank its performance. We're working on a clean visualization of this; the data is there.
It's not uncommon for a sub-release of a model to show across-the-board improvements on its model card, yet have mixed real-world performance compared to its predecessor (sometimes even being worse on average).
Moreover, tool invocation had problems that Google later corrected in an updated chat template.
So any early benchmarks showing the dense model as inferior to the MoE model are likely flawed, and should be repeated after updating both the inference backend and the model.
Every benchmark I've seen since the bugs were fixed shows the dense model as clearly superior in quality, even if it is much slower.
They did a similar re-release during the Gemini 3.1 Pro Preview rollout, shipping a custom-tools version under its own slug, which performs MUCH better on custom harnesses (mostly because the original release could not handle tool-call formatting at all).