Context for the unaware: https://simonwillison.net/tags/pelican-riding-a-bicycle/
It's just an experiment on how different models interpret a vague prompt. "Generate an SVG of a pelican riding a bicycle" is loaded with ambiguity. It's practically designed to generate 'interesting' results because the prompt is not specific.
It also happens to be an example of the least practical way to engage with an LLM. It's no more capable of reading your mind than anyone or anything else.
I argue that, in the service of AI, a lot of flexibility is being taken with the scientific method.
For the last generation of models, and for today's flash/mini models, I think there is still a not-unreasonable binary question ("is this a pelican on a bicycle?") that you can answer by just looking at the result: https://simonwillison.net/2024/Oct/25/pelicans-on-a-bicycle/
I'm guessing both humans and LLMs would tend to get the same "vibe" from the pelican task: that they're essentially being asked to create something like a child's crayon drawing. And that "vibe" then brings with it associations with all the types of things children might normally include in a drawing.
Do electric pelicans dream of touching electric grass?
We need a new, authentic scenario.
I don't think there's a good description anywhere. https://youtube.com/@t3dotgg talks about it from time to time.
1. Take the top ten searches on Google Trends (on the day of a new model release).
2. Concatenate them.
3. SHA-1 hash the result.
4. Use the hash as a seed to perform a random noun-verb lookup in an agreed-upon large dictionary.
5. Construct a sentence using an agreed-upon stable algorithm that generates reasonably coherent prompts from an immensely deep probability space.
That's the prompt. Every existing model is given that prompt and compared side-by-side. You can generate a few such sentences for more samples.
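For what it's worth, here's a minimal Python sketch of steps 1-5, assuming a fixed sentence template stands in for the "stable algorithm"; the trend snapshot and word lists are made-up placeholders, not real data:

    import hashlib
    import random

    # Hypothetical trend snapshot and word lists; a real run would pull
    # Google Trends and an agreed-upon shared dictionary.
    TRENDING = ["pelican", "bicycle", "new model release", "solar eclipse"]
    NOUNS = ["pelican", "walrus", "unicycle", "accordion", "lighthouse"]
    VERBS = ["riding", "juggling", "painting", "repairing"]

    def build_prompt(trending, nouns, verbs):
        # Steps 1-3: concatenate the searches and SHA-1 hash the result.
        digest = hashlib.sha1("".join(trending).encode("utf-8")).hexdigest()
        # Step 4: use the hash as the seed for noun/verb lookups.
        rng = random.Random(int(digest, 16))
        subject, obj = rng.choice(nouns), rng.choice(nouns)
        verb = rng.choice(verbs)
        # Step 5: a fixed template stands in for the "stable sentence algorithm".
        return f"Generate an SVG of a {subject} {verb} a {obj}"

    print(build_prompt(TRENDING, NOUNS, VERBS))

Anyone can reproduce the prompt from the same public inputs, but nobody can know it ahead of time.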
Alternatively, take the top ten Fortune 500 stock performers. Any easy signal works as long as it provides enough randomness, is easy to agree upon, and doesn't leave enough time to game.
It's also something teams can pre-generate candidate problems for, to attempt improvement across the board, but they won't have the exact questions on test day.
This pattern of considering 90% accuracy (like the level we've seemingly stalled out at on MMLU and AIME) to be 'solved' is really concerning to me.
AGI has to be 100% right 100% of the time to be AGI and we aren't being tough enough on these systems in our evaluations. We're moving on to new and impressive tasks toward some imagined AGI goal without even trying to find out if we can make true Artificial Niche Intelligence.
As far as I can tell for AIME, pretty much every frontier model gets 100% https://llm-stats.com/benchmarks/aime-2025