It feels like the results stopped being interesting a while ago, but the practice has become part of simonw's brand. It gives him something to post even when there's nothing interesting to say about another incremental improvement to a model, so I don't imagine he'll stop.
I, for one, expected progress. Uneven, sometimes delayed, but ever-increasing progress.

But that Opus pelican?

It’s not a waste of time. As the boundaries of AI are pushed, we increasingly struggle to define what intelligence actually is, so it becomes more useful to test what models cannot do rather than what they can. Random tasks like the pelican test can show how general the intelligence really is, setting aside the obvious flaw that the labs can optimise for such a simple public benchmark.
Fun is so un-productive. Everyone doing things for "fun" is going to be sorry when they look back and realize they were wasting time having a "good time" rather than optimizing their KPIs.
I do wonder how much energy has collectively been burned on this useless "benchmark".
I can't believe you're such a party pooper. It's exciting times, the silly things do matter!
I also can't understand how this goes so viral every time on Hackernews lol