I don't know how these benchmarks work (do you do a hundred runs? A thousand runs?), but 0.1% seems like noise.
reply
That benchmark is pretty saturated, tbh. A "regression" of such small magnitude could mean many different things or nothing at all.
reply
i'd interpret that as rounding error; that's effectively unchanged

swe-bench seems really hard once you are above 80%
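
a quick back-of-the-envelope for why a 0.1-point move reads as noise: treat the score as a binomial proportion and compare the move to the standard error. (the ~500-task count for SWE-bench Verified is my assumption here, as is the 80% pass rate; just a sketch, not anyone's actual methodology.)

```python
import math

n = 500   # assumed number of tasks (SWE-bench Verified is ~500)
p = 0.80  # assumed observed pass rate

# Standard error of a binomial proportion: sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)
print(f"standard error: {se:.3%}")

# A 0.001 (0.1-point) move is well inside one standard error,
# so it's indistinguishable from run-to-run noise.
print(0.001 < se)
```

with these numbers the standard error comes out around 1.8 percentage points, so a 0.1-point "regression" is roughly 1/18th of one sigma.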

reply
it's not a great benchmark anymore... starting with it being python / django primarily... the industry should move to something more representative
reply
OpenAI has; they don't even mention a SWE-bench score for gpt-5.3-codex.

On the other hand, it is their own verified benchmark, which is telling.

reply