We used to get one annual release that was 2x as good; now we get quarterly releases that are each 25% better. Compounded over a year, that puts us at about 2.4x better.
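(Quick check on the compounding, assuming the quarterly gains multiply rather than add: 1.25^4 ≈ 2.44, so roughly 2.4x per year versus the 2x a single annual release used to deliver.)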
Due to the increasing difficulty of scaling up training, the gains appear to be coming instead from improvements in how models are trained, which seems to be working well for everyone.
GPT 5.3 (/Codex) was a huge leap over 5.2 for coding
Eh, sure, but it's only marginally better than, if not the same as, Claude 4.6, which itself was a small bump over Claude 4.5