That's funny; it failed my usual 'hello world' benchmark for LLMs:

“Write a single file web page that implements a 1 dimensional bin fitting calculator using the best fit decreasing algorithm. Allow the user to input bin size, item size, and item quantity.”

Qwen3.5, Nematron, Step 3.5, and gpt-oss all passed on the first go.
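For anyone unfamiliar with the benchmark: the core of best-fit decreasing is tiny, which is part of why it makes a good smoke test. A minimal sketch in plain JavaScript (function and variable names are mine, not from the prompt; a real answer would wrap this in an HTML page with inputs for bin size, item size, and quantity):

```javascript
// Best-fit decreasing: sort items largest-first, then place each item
// into the bin whose remaining space is smallest but still fits it.
// Opens a new bin only when no existing bin can hold the item.
function bestFitDecreasing(binSize, items) {
  const sorted = [...items].sort((a, b) => b - a);
  const bins = []; // each entry is a bin's remaining capacity

  for (const item of sorted) {
    let best = -1;
    for (let i = 0; i < bins.length; i++) {
      // candidate must fit, and have the tightest remaining space so far
      if (bins[i] >= item && (best === -1 || bins[i] < bins[best])) {
        best = i;
      }
    }
    if (best === -1) {
      bins.push(binSize - item); // no bin fits: open a new one
    } else {
      bins[best] -= item;
    }
  }
  return bins.length; // number of bins used
}

// e.g. bin size 10, items [7, 5, 3, 3, 2] -> packs into 2 bins
console.log(bestFitDecreasing(10, [7, 5, 3, 3, 2]));
```

A model that passes typically gets both the descending sort and the "tightest fit" selection right; a common failure mode is silently implementing first-fit instead.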

reply
Overall it's a very good open-weights model! Notably, it seems to make more dumb coding mistakes than GPT-OSS in my testing on my M5, but it's fairly close overall.
reply
For me the vision/OCR is much better than other models in its weight class.
reply
Gemma 31B scoring below 26B-A4B?
reply
In one-shot coding, surprisingly, yes, by a decent amount. And it isn't a sample-size issue. In agentic, no: https://gertlabs.com/?agentic=agentic

My early takeaway is that Gemma 26B-A4B is the best-tuned of the bunch, but being small and with few active params, it's severely constrained by context (large inputs and tasks with large required outputs tank Gemma 26B's performance). We're working on a clean visualization for this; the data is there.

It's not uncommon for a sub-release of a model to show improvements across the board on its model card, but actually have mixed real performance compared to its predecessor (sometimes even being worse on average).

reply
In early tests, the performance of gemma-4-31B was affected by tokenizer bugs in many of the existing backends, like llama.cpp, which were later corrected by their maintainers.

Moreover, tool invocation had problems that were later corrected by Google in an updated chat template.

So any early benchmarks showing the dense model as inferior to the MoE model are likely flawed and should be repeated after updating both the inference backend and the model.

All benchmarks that I have seen after the bugs were fixed have shown the dense model as clearly superior in quality, even if much slower.

reply
We add samples every week, so I'm curious if the numbers will move.

They did a similar re-release during the Gemini 3.1 Pro Preview rollout, and released a custom-tools version with its own slug, which performs MUCH better on custom harnesses (mostly because the original release could not figure out tool call formatting at all).

reply
I have very mixed feelings about that model. I want to like it. It's very fast and seems to be fit for many uses. I strongly dislike its "personality", but it responds well to system prompts.

Unfortunately, my experience with it as a coding assistant is very poor. It doesn't understand libraries it seems to know about, it doesn't see the root causes of problems I want it to solve, and it refuses to use MCP tools even when asked. It also has a very strong fixation on the concept of time: anything past January 2025, which I think is its knowledge cutoff, the model will label as "science fiction" or "their fantasy" and role-play from there.

reply