upvote
You can trust that a model that scores 40% vs a model that scores 90% is indeed worse.

You can’t trust it that a model that scores 93% is better at software engineering than a model that scores 90%, because at that point it’s impossible to distinguish between recall and reasoning.

reply
It’s honestly far better to just ignore SWEBench Verified in 2026. Multiple labs have noted issues with contamination, and achieving high scores require memorisation of what passes the prescriptive verifier; not what is a correct solution.

40% vs 90%? Sure.

70% vs 90%? _Absolutely meaningless_ as you are not measuring coding intelligence but “how well can the model cheat flaws in SWEBench Verified”, the former can certainly be better at coding even assuming no deliberate benchmaxxing / foul play.

reply
> models that aren't over-optimized for it.

But how do you know the model was over-optimized for it or just really good?

reply
reply
I don't understand that methodology in the first place. Does Anthropic even have some kind of somewhat objective definition to measure and judge "memorization"? Is there any evidence that other LLMs are viable tool to determine that?
reply
This article says anthropic models can write out the entire benchmark solution set word for word from memory
reply