Thanks for this!

Has there been any study of grammar and other word-order effects on the result? Is "Dog fetches ball with tail" more likely to produce an image of a dog grabbing a ball with its tail than "tail ball dog fetch with"?

Like search engines, these models face an ambiguity problem: if a user types "best price on windows", do they mean Windows the OS or glass windows?

My impression, at least with the image generators I've used, is that while there is some mapping of words and maybe phrases through the latent space to the image, it's very weak. If you put "red ball" in a long prompt, "red" is nearly as likely to get applied to some other part of the description as to the ball.

reply
I think some of the visualizations would be much better if you used a pixel-space model instead of a latent diffusion model.

Right now we are only seeing the denoising process after it's been morphed by the latent decoder, which looks a lot less intuitive than actual pixel diffusion.

If you can't find a suitable pixel-space model, then you can just trivially generate a forward process and play it backwards.
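To make the suggestion concrete: a minimal sketch of that trick, assuming a standard DDPM-style forward process (the `beta` schedule values here are illustrative, not taken from any particular model). It noises a clean image step by step, and reversing the frame list gives a plausible-looking "denoising" animation in pixel space.

```python
import numpy as np

def forward_process(image, steps=50, beta_start=1e-4, beta_end=0.02):
    """Toy DDPM-style forward process: progressively add Gaussian noise
    to an image. Played in reverse, the frames look like denoising."""
    betas = np.linspace(beta_start, beta_end, steps)
    alphas_cumprod = np.cumprod(1.0 - betas)
    frames = [image]
    for a_bar in alphas_cumprod:
        noise = np.random.randn(*image.shape)
        # Closed form for x_t given x_0:
        # x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise
        frames.append(np.sqrt(a_bar) * image + np.sqrt(1.0 - a_bar) * noise)
    return frames

# "Play it backwards": reversed frames go from near-noise back to the image.
image = np.random.rand(64, 64, 3)  # stand-in for a real image in [0, 1]
frames = forward_process(image)
denoising_animation = frames[::-1]
```

Because each frame is sampled independently from the closed form, this isn't a true reverse trajectory, but for visualization purposes it reads the same way.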

reply
Thanks, that’s a great suggestion.
reply
Loved the writeup!

Found the manual latent space exploration part really interesting.

Too many LLM/diffusion explanations fall in the proverbial “how to draw an owl” meme without giving a taste as to what’s going on.

reply
It's quite clever and thoughtful. Thanks for making it!
reply
I enjoyed this a lot.

The interpolations between butterfly and snail were pretty horrifying. But with something like Z-Image you could basically concatenate the two prompts and end up with a normal image of both. Is the latent space for "butterfly and snail" just well off the path between the two individually?

It's hard to imagine what is nearby in latent space and how text contributes, so I did really like the section adding words to the prompt 1-by-1.
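For reference, the usual way to walk that path is spherical interpolation (slerp) rather than a straight line, since diffusion latents are roughly Gaussian and linear interpolation shrinks their norm mid-path. A minimal sketch with made-up stand-in latents (the "butterfly"/"snail" arrays are just random placeholders, not real model outputs):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical interpolation between two latent tensors.
    Keeps intermediate points at a plausible norm, unlike lerp."""
    v0f, v1f = v0.ravel(), v1.ravel()
    dot = np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f) + eps)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < eps:  # nearly parallel: lerp is fine
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Hypothetical stand-ins for the "butterfly" and "snail" latents.
rng = np.random.default_rng(0)
z_butterfly = rng.standard_normal((4, 64, 64))
z_snail = rng.standard_normal((4, 64, 64))
path = [slerp(t, z_butterfly, z_snail) for t in np.linspace(0.0, 1.0, 9)]
```

Decoding each point on `path` gives the interpolation frames; the question above is essentially whether the latent for the combined prompt sits anywhere near this arc.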

reply