I'm seeing more and more AI video memes, and they're getting really good. They're still just a bunch of short clips, since long shots don't work well enough yet, but typical Hollywood movies cut every few seconds anyway, so this is almost good enough to make a Marvel fanfic.
The workflow right now would be to take these images, make a sequence of them for the key "shots", and send each one to an image-to-video (I2V) model. LTX-2 is the model the r/stablediffusion folks are playing with right now, but there are a fair few.
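To make the workflow concrete, here's a rough sketch of the keyframe-to-clips loop described above. `generate_clip` is a hypothetical stand-in for a real I2V call (e.g. LTX-2 through a diffusers pipeline); here it just returns a placeholder so the structure is clear.

```python
# Sketch of the keyframe -> I2V -> concat workflow.
# generate_clip is a hypothetical placeholder, not a real API:
# in practice you'd load an image-to-video pipeline and condition
# it on the keyframe image plus a text prompt.

def generate_clip(keyframe: str, prompt: str, seconds: int = 4) -> str:
    # Real version would run inference and write a video file;
    # this stub just returns the name of the clip it would produce.
    return f"clip_from_{keyframe}_{seconds}s"

# One keyframe image per "shot" in the sequence.
keyframes = ["shot_01.png", "shot_02.png", "shot_03.png"]
clips = [generate_clip(k, "cinematic superhero action") for k in keyframes]

# Final step (not shown): concatenate the clips, e.g. with ffmpeg,
# into a single cut -- short few-second shots stitched together.
print(clips)
```

Each clip only has to hold up for a few seconds, which is why the short-shot limitation matters less than it sounds.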