This is already well known: many of these AI benchmarks use a separate model to judge whether or not the solution was correct.

It’s… remarkably poor and, as demonstrated in the paper, easily gamed. Worse yet, these benchmarks teach AIs to be short-sighted and hyper-focused on completing the task, rather than on figuring out the best solution.
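To make the failure mode concrete, here's a minimal sketch of an LLM-as-judge scoring loop. `call_judge_model` is a hypothetical stand-in for a real judge-model API call; its shallow substring matching is only meant to mimic how easily a weak judge is satisfied, not any actual benchmark's implementation.

```python
# Hypothetical stand-in for a call to a separate judge model.
def call_judge_model(prompt: str) -> str:
    # A real benchmark would send `prompt` to another LLM. This stub just
    # checks whether the reference answer appears as a substring of the
    # candidate answer -- roughly as shallow as a weak judge behaves.
    reference = prompt.split("REFERENCE:")[1].split("\n")[0].strip()
    answer = prompt.split("ANSWER:")[1].split("\n")[0].strip()
    return "PASS" if reference.lower() in answer.lower() else "FAIL"

def judge(reference: str, answer: str) -> bool:
    # Build the judging prompt and score PASS/FAIL, as an LLM-as-judge
    # harness typically does.
    prompt = f"Is the answer correct?\nREFERENCE: {reference}\nANSWER: {answer}\n"
    return call_judge_model(prompt) == "PASS"

# An honest correct answer passes...
print(judge("Paris", "The capital of France is Paris."))
# ...but so does a gamed answer that merely smuggles the keyword in.
print(judge("Paris", "I don't know, maybe Paris or Rome?"))
```

The second call is the gaming problem in miniature: the answer never commits to anything, yet the judge scores it a pass.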

Every AI lab trains on the test set. That is a big part of why we see benchmark scores climb from 1% to 30% after a few model iterations.

Models themselves definitely aren't getting better.

Frontier model developers try to check for memorization. But until AI interpretability is a fully solved problem, how can you really know whether the model actually didn't memorize the answer or your memorization check simply failed to catch it?

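For context, the checks in question are often verbatim n-gram overlap scans between training data and test items. Here's a hedged sketch of that idea; the choice of n and the flagging rule are illustrative, not any lab's actual procedure.

```python
# Illustrative contamination check via verbatim n-gram overlap.
def ngrams(text: str, n: int = 8):
    # Collect all n-token windows from a whitespace-tokenized text.
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def looks_contaminated(train_doc: str, test_item: str, n: int = 8) -> bool:
    # Flag the test item if any n-gram from it appears verbatim in the
    # training document. Paraphrased memorization slips right past this,
    # which is exactly the blind spot the comment above points at.
    return bool(ngrams(test_item, n) & ngrams(train_doc, n))
```

A lightly paraphrased copy of a test question shares no long verbatim n-gram with the original, so a check like this reports "clean" even though the model may have effectively trained on it.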
A more interesting benchmark would probably be one scored on whether the LLM can find exploits in the benchmark itself.