This was originally posted here a decade ago. I’m happy to see it’s still alive.
I’ve been using some generated assets for a game with voxelized art. I intend to take a deeper look at this and see if it can simplify parts of my workflow.
This is fascinating. I see it's powered by weights and probabilities. Would this be a very simple ancestor of things like Stable Diffusion that we have now, or would it be on a completely different branch (a different approach)?
It’s procedural generation, but that’s pretty much where the similarities end. People today might use a big generative NN model to do this, using maybe a thousand times as much energy to get essentially the same result.
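For a sense of what “weights and probabilities” means here, the core step can be sketched as a weighted random choice over the tiles still allowed at a cell. This is a hypothetical minimal example (the `collapse` function and the tile names are made up for illustration, not taken from the project):

```python
import random

# Hypothetical tile weights: how often each tile appeared in the sample input.
weights = {"grass": 20, "water": 5, "sand": 2}

def collapse(candidates, weights):
    """Pick one tile from the still-allowed candidates, biased by
    how frequently each tile occurred in the sample."""
    return random.choices(candidates,
                          weights=[weights[t] for t in candidates])[0]

random.seed(0)
print(collapse(["grass", "water", "sand"], weights))
```

The full algorithm repeats this per cell and propagates adjacency constraints after each choice, but the stochastic part really is just frequency-weighted sampling, with no learned model involved.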
I’ve always wondered how this compares to the 1999 algorithm Texture Synthesis by Non-parametric Sampling [1]. The results look very similar to my eyes. There’s an implementation here [2]. Has anyone tried both?
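The core idea of that paper (match the neighborhood of the pixel being filled against the sample, then sample among the continuations of the close matches) can be shown with a toy 1-D analogue. This is not the actual 2-D algorithm, just a sketch of the non-parametric sampling step; the function name and parameters are hypothetical:

```python
import random

def synthesize(sample, n, window=3):
    """Toy 1-D analogue of non-parametric sampling: grow the output
    by finding every place in the sample whose preceding `window`
    symbols match the output's tail, then sampling uniformly among
    the symbols that follow those matches."""
    out = list(sample[:window])  # seed with the start of the sample
    while len(out) < n:
        tail = out[-window:]
        # Collect the continuations of all matching neighborhoods.
        matches = [sample[i + window]
                   for i in range(len(sample) - window)
                   if list(sample[i:i + window]) == tail]
        if not matches:  # no exact match: fall back to any sample symbol
            matches = list(sample)
        out.append(random.choice(matches))
    return "".join(out)

random.seed(1)
print(synthesize("abcabdabcabd", 20))
```

The real algorithm does the same thing over 2-D pixel neighborhoods with a distance threshold instead of exact matching, which is why its output looks so similar to other constraint-from-sample methods.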