Here's a reproduction attempt (LM Studio, same Qwen3.6-35B-A3B-GGUF model as linked in parent, M1 Max 64GB, <90 seconds):

https://files.catbox.moe/r3oru2.png

- My Qwen 3.6 result had sun and cloud in sky, similar to the second Opus 4.7 result in Simon's post.

- My Qwen 3.6 result had no grass (except as a green line), but all three results in Simon's post had grass (thick).

- My Qwen 3.6 result had visible "tailing air motion" like Simon's Qwen 3.6 result.

- My Qwen 3.6 result had a "sun with halo" effect that none of Simon's results had.

But, I know, it's more about the pelican and the bicycle.

reply
The bicycle frame is ok. Simon's was better but at least it's not broken like Opus 4.7.

I can't comment on that flamingo.

reply
I've been running qwen3.6:35b-a3b-q4_K_M (22.3GB) via Ollama.

Is the 20.9GB GGUF version better or negligible in comparison?

reply
Thanks for pointing to the GGUF.

I just tried this GGUF with llama.cpp, in its UD Q4_K_XL version, on my custom agentic-oriented task consisting of wiki exploration and automatic database building ( https://github.com/GistNoesis/Shoggoth.db/ ).

I noted a nice improvement over Qwen3.5 in its ability to discover new creatures in the open-ended searching task, but I've not quantified it yet with numbers. It also seems faster, at around 140 tokens/s compared to 100 tokens/s, but that's maybe due to some different configuration options.

A small difference from Qwen3.5: to avoid out-of-memory crashes in multimodal mode, I had to pass --no-mmproj-offload to disable GPU offload of the image-to-token conversion, otherwise it would crash on high-resolution images. I also used a quantized KV cache by passing -ctk q8_0 -ctv q8_0, and with a ctx-size of 150000 it only needs 23099 MiB of device memory, which means no partial RAM offloading when I use an RTX 4090.
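
For reference, a sketch of the invocation described above (the model filename is a placeholder, and flag support varies by llama.cpp build, so check your version's --help):

```shell
# Hypothetical llama-server invocation combining the flags above
llama-server \
  -m Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf \
  --ctx-size 150000 \
  -ctk q8_0 -ctv q8_0 \
  --no-mmproj-offload
# -ctk/-ctv q8_0: quantize the KV cache so the full context fits in VRAM
# --no-mmproj-offload: run the multimodal projector on CPU, avoiding crashes
#   when converting high-resolution images to tokens
```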

reply
I wonder when "pelican riding a bicycle" will be useless as an evaluation task. The point was that it was something weird nobody had ever really thought about before, not in the benchmarks or even something a team would run internally. But now I'd bet internally this is one of the new Shirley Cards.
reply
Pelicanmaxxing
reply
Yeah try it with something else, or e.g. add a tiger to the back seat.
reply
I mean, look at the result where he asked about a unicycle: the model couldn't even keep the spokes inside the wheels. It would be rudimentary, if it had "learned" what it means to draw a bicycle wheel, to transfer that to a unicycle.
reply
it's the frame that's surprisingly - and consistently - wrong. You'd think two triangles would be pretty easy to repro; once you get that, the rest is easy. It's not like he's asking "draw a pelican on a four-bar linkage suspension mountain bike..."
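
To illustrate the point: a diamond bicycle frame really is just two triangles sharing the seat-tube edge. A minimal sketch, with made-up coordinates, of the SVG a model would need to get right:

```python
# Sketch: a diamond bike frame as two SVG triangles sharing the seat tube.
# Coordinates are arbitrary, chosen only to illustrate the geometry.

def frame_svg():
    # Shared edge: seat cluster (90, 40) down to bottom bracket (100, 100).
    front_triangle = [(90, 40), (100, 100), (170, 95)]  # top tube / down tube / head tube area
    rear_triangle = [(90, 40), (100, 100), (40, 100)]   # seat stay / chain stay to rear axle
    polys = "".join(
        '<polygon points="{}" fill="none" stroke="black"/>'.format(
            " ".join(f"{x},{y}" for x, y in tri)
        )
        for tri in (front_triangle, rear_triangle)
    )
    return f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 150">{polys}</svg>'

print(frame_svg())
```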
reply
This is older, but even humans don't have a great concept of how a bicycle works... https://twistedsifter.com/2016/04/artist-asks-people-to-draw...
reply
Wouldn't this be more about being capable of mentally remembering how a bicycle looks versus how it works?

This reminds me of Pictionary. [0] Some people are good and some are really bad.

I am really bad at remembering how items look in my head and fail at drawing in Pictionary. My drawing skills are tied to being able to copy what I see.

[0] https://en.wikipedia.org/wiki/Pictionary

reply
I think it’s difficult to draw a bike exactly because you remember how it works rather than how it looks, so you worry about placing all the functional parts and get the overall composition wrong. Similar to drawing faces, without training, people will consistently dedicate too much area to the lower part of the face and draw some kind of neanderthal with no forehead.
reply
is it possible to have greater success with the specificity? I don't think i ever drew a bike frame properly as a kid despite riding them and understanding the concept of spokes and wheels...
reply
They’ll hardcode it in 4.8, just like they do when they need to “fix” other issues
reply
the more I look at these images the more convinced I become that world models are the major missing piece and that these really are ultimately just stochastic sentence machines. Maybe Chomsky was right
reply
> that these really are ultimately just stochastic sentence machines

I thought that's exactly what they are?

reply
I am so perplexed about what exactly people were thinking they were. It's nothing more than highly sophisticated statistics.
reply
I'm not sure how you can give the flamingo win to Qwen:

* It's sitting on the tire, not the seat.

* Is that weird white and black thing supposed to be a beak? If so, it's sticking out of the side of its face rather than the center.

* The wheel spokes are bizarre.

* One of the flamingo's legs doesn't extend to the pedal.

* If you look closely at the sunglasses, they're semi-transparent, and the flamingo only has one eye! Or the other eye is just on a different part of its face, which means the sunglasses aren't positioned correctly.

* (subjective) The sunglasses and bowtie are cute, but you didn't ask for them, so I'd actually dock points for that.

* (subjective) I guess flamingos have multiple tail feathers, but it looks kinda odd as drawn.

In contrast, Opus's flamingo isn't as detailed or fancy, but more or less all of it looks correct.

reply
He literally said it came down to the comment in the SVG. Points for taste, not correctness. Basically.
reply
deleted
reply
It's fascinating that a $999 Mac Mini (M4, 32GB) with roughly similar wattage to a human brain gets us this far.
reply
Interesting thought. I looked it up out of curiosity and found 155 W max (but realistically more like 80 W sustained) for the Mac under load, and around 20 watts for the brain, surprisingly almost constant whether "under load" or not.
reply
Interesting, I just tried this very model, Unsloth, Q8, so in theory more capable than Simon's Q4, and got those three "pelicans". Definitely NOT Opus quality. LM Studio, via Simon's llm, but not Apple/MLX. Of course with the same short prompt.

Simon, any ideas?

https://ibb.co/gFvwzf7M

https://ibb.co/dYHRC3y

https://ibb.co/FLc6kggm (tried here temperature 0.7 instead of pure defaults)
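
For anyone reproducing: the temperature override can be passed through Simon's llm CLI with -o; the model alias below is a placeholder for however the LM Studio endpoint is registered locally.

```shell
# Hypothetical invocation; replace the -m alias with your local model name
llm -m qwen3.6-35b-a3b -o temperature 0.7 \
  "Generate an SVG of a pelican riding a bicycle"
```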

reply
Try Unsloth's recommended settings:

    Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0

    Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0

    Instruct (or non-thinking) mode for general tasks: temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0

    Instruct (or non-thinking) mode for reasoning tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
(Please note that the support for sampling parameters varies according to inference frameworks.)
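
These presets can be kept as a small lookup table and spread into whatever OpenAI-compatible client you use. A sketch, with values copied from the list above (the preset names are my own):

```python
# Unsloth's recommended sampling presets as a lookup table, keyed by task type.
# Parameter support varies by inference framework.
PRESETS = {
    "thinking_general": dict(temperature=1.0, top_p=0.95, top_k=20,
                             min_p=0.0, presence_penalty=1.5,
                             repetition_penalty=1.0),
    "thinking_coding": dict(temperature=0.6, top_p=0.95, top_k=20,
                            min_p=0.0, presence_penalty=0.0,
                            repetition_penalty=1.0),
    "instruct_general": dict(temperature=0.7, top_p=0.8, top_k=20,
                             min_p=0.0, presence_penalty=1.5,
                             repetition_penalty=1.0),
    "instruct_reasoning": dict(temperature=1.0, top_p=0.95, top_k=20,
                               min_p=0.0, presence_penalty=1.5,
                               repetition_penalty=1.0),
}

# e.g. client.chat.completions.create(..., **PRESETS["thinking_general"])
print(PRESETS["thinking_coding"]["temperature"])
```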
reply
But that you also gave a win to Qwen on flamingo is pretty outrageous! :)

The right one looks much better, plus adding sunglasses without prompting is not that great. Hopefully it won't add some backdoor to the generated code without asking. ;)

reply
I love how the Chinese models often have an unprompted predilection to add flair.

GLM-5.1 added a sparkling earring to a north Virginia opossum the other day and I was delighted: https://simonwillison.net/2026/Apr/7/glm-51/

reply
You're running 5.1 locally or hosted?
reply
I used that one via OpenRouter.
reply
The flamingo on Qwen's unicycle is sitting on the tire, not the seat. That wins because of sunglasses?
reply
Can a benchmark meant as a joke not use a fun interpretation of results? The Qwen result has far better style points. Fun sunglasses, a shadow, a better ground, a better sky, clouds, flowers, etc.

If we want to get nitty gritty about the details of a joke, a flamingo probably couldn't physically sit on a unicycle's seat and also reach the pedals anyways.

reply
* Transparency of the wheel

* Stylized gradients on the flamingo

* Flowers

* Ground/grass has a stylized look and feel

Despite a miss along the Y-axis (it's below the seat), a couple of oddly organized tail feathers, and the spokes, the composition overall is much closer to a production-quality entity.

Opus 4.7 looks like 20 seconds in MS Paint.

Qwen3.6 looks incomplete due to the sitting position, but like a WIP I could see on a designer coworker's screen if I walked up and interrupted them. Click and drag it up, adjust the tail feathers and spokes, and you're there, or much closer, to a usable output.

reply
Well, maybe the flamingo is a really good unicyclist...

https://youtu.be/Rrpgd5oIKwI

reply
The real question is what the next truly weird, un-optimized prompt will be. Something involving a sloth debugging a quantum computer in MS Paint?
reply
Hey, I really enjoy your blog. On some things I end up finding a blog post of yours that's a year+ old, and at other times you and I are investigating similar things. I just pulled Qwen3.6-35B-A3B (can't believe that's an A3B coming from 35B).

I'm impressed by the reach of your blog, and I'm hoping to get into blogging about similar things. I currently have a lot on my backlog to blog about.

In short, keep up the good work with an interesting blog!

reply
I've been trying the Q4_K_M version, and sometimes it gets stuck in a loop. Gemma 4 doesn’t have this issue.
reply
This has happened before with quantizations and other backends (ones not used by the research lab). Give it a week, download latest versions of everything, and try again.
reply
Perhaps increasing repetition_penalty might help.
reply
Interesting, Qwen has the pelican riding in the left lane. Coincidence, or does it have something to do with the workers providing the RL data?
reply
Could be on a bike path where bikes are on the left and pedestrians to the right.
reply
I've had some really gnarly SVGs from Claude. Here's what I got after many iterations trying to draw a hand: https://imgur.com/a/X4Jqius
reply
Probably because all the training material of humans drawing hands is garbage, haha.
reply
The Qwen flamingo looks like it's smokin' a doobie.
reply
Oh that is pretty good! And the SVG one!
reply
deleted
reply
How does it do with the "car wash" benchmark? :D
reply