Like, can the model take your plan and ask the right questions where there appear to be holes?
How broad is its understanding of the architecture and system design idioms around your language?
How does it choose among the algorithms available in the language or its common libraries?
How often does it hallucinate features or libraries that aren't there?
How does it perform as the context gets larger?
And that's for one particular language.
With the right scaffolding these models are able to perform serious work at high quality levels.
Does that make me unscientific and broken? Sure, maybe, why not.
But at the end of the day, I’m going to choose what I see with my own two eyes over a number in a table.
Benchmarks are sometimes a useful tool. But we are in prime Goodhart’s Law territory.
Honestly, I have no idea what the benchmarks are benchmarking. I don’t write JavaScript or do anything remotely webdev-related.
The idea that all models have very close performance across all domains is a moderately insane take.
At any given moment the best model for my actual projects and my actual work varies.
Quite honestly, Opus 4.5 is proof that benchmarks are dumb. When Opus 4.5 released, no one was particularly excited. It was better, with some slightly larger numbers, but whatever. It took about a month before everyone realized “holy shit, this is a step-function improvement in usefulness.” Being some 15% better on SWE-bench didn’t mean a damn thing.