1. You can run this on a Mac using llama-server and a 17GB downloaded file
2. That version does indeed produce output (for one specific task) of good enough quality to be worth spending more time checking out this model
3. It generated 4,444 tokens in 2min 53s, which is 25.57 tokens/s
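As a quick sanity check on that throughput figure (a rough sketch: "2min 53s" is treated as exactly 173 seconds, so the result comes out slightly above the reported 25.57, which presumably came from sub-second timing):

```python
# Rough tokens/s check: 4,444 tokens over 2min 53s, taken as exactly 173s
tokens = 4444
seconds = 2 * 60 + 53  # 173 whole seconds; real timing was sub-second
rate = tokens / seconds
print(f"{rate:.2f} tokens/s")  # ~25.69 with whole-second timing
```

The reported 25.57 tokens/s implies a wall time of roughly 173.8s, which rounds down to the stated 2min 53s, so the numbers are consistent.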
* er, that probably sounds strange, but I did just spend 6 weeks integrating the Willison Trifecta into the app I've been building for 2.5 years, and I considered it a release blocker. It's a simple mental model, and supporting it is a significant UX accomplishment IMHO.
It's perhaps not a serious test (it isn't to me), but around the edges of the pelican jokes there are usually useful things said by people smarter than me, and if providers are spending time making pelicans or SVG output look better, that benefits all of us.
So, no hard feelings, your point is understood (and I'm not trying to be patronising, I'm just awkward with the language), but pelicans are here to stay: the consensus seems to be that they're beneficial and on topic.
All the best!
Missing an opportunity here, lol.