But I understood your point: Simon asked it to output SVG (text) instead of a raster image, so it's more difficult.
I didn't realize quite how strong the correlation was until I put together this talk: https://simonwillison.net/2025/Jun/6/six-months-in-llms/
Since it's not a formal encoding of geometric shapes it's fundamentally different, but I guess it shares some challenges with the SVG tasks: correlating phrases/concepts with an encoded visual representation, but without using imagegen.
Do you think that "image encoding" is less useful?
It's a thing I love to try with various models for fun, too.
I mean illustration-like content, either as text-based ASCII art or by abusing it for rasterization.
The results have been interesting, but I guess it's less predictable than SVG.
It makes sense, since training adds associations between descriptions and individual shapes / paths etc., similar to other code.
Everything here should be trivial for LLMs, but they're quite poor at it because there's almost no "how to draw complex shapes in SVG" type content in their training set.
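For context, here's a minimal sketch of what that task actually demands: emitting a drawing as raw SVG markup rather than pixels. The shapes and coordinates below are purely illustrative (a crude bird-ish figure I made up, not anything from Simon's benchmark) — the point is that the model has to pick coherent geometry as text, with no visual feedback.

```python
# Minimal sketch: composing an SVG "drawing" as plain text, the way an
# LLM must when asked for SVG instead of a raster image. All shapes and
# coordinates here are hypothetical, chosen only for illustration.
def make_svg() -> str:
    body = '<ellipse cx="50" cy="60" rx="30" ry="20" fill="gray"/>'
    head = '<circle cx="80" cy="40" r="10" fill="gray"/>'
    # A <path> is where it gets hard: each command (M = move, L = line,
    # Z = close) must land on sensible coordinates relative to the rest.
    beak = '<path d="M88 40 L105 44 L88 48 Z" fill="orange"/>'
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="120" height="100">'
        + body + head + beak
        + "</svg>"
    )

print(make_svg())
```

Getting the ellipse/circle attributes right is easy; keeping a multi-command path consistent with the shapes around it is exactly the kind of spatial bookkeeping that's underrepresented in training text.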
I'm quite happy that there's someone who has the time to keep up with all the LLM/AI stuff and is also good enough at writing amusing stuff that I want to keep reading it.
That's how the pelicans get ya.