The ability to reverse a blur depends heavily on how well the transformation is known; in this case it is deterministic and known with certainty. Any algorithm to reverse motion blur will depend on the translation and rotation of the camera in physical space, and the best the algorithm could do will be limited by the uncertainty in estimating those values.
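
When the kernel really is known with certainty, this is classical non-blind deconvolution. A minimal sketch, assuming a grayscale float image and an exactly known point-spread function (the function name and the noise constant here are just illustrative):

```python
import numpy as np

def wiener_deblur(blurred, psf, noise_to_signal=1e-3):
    """Invert a known blur in the frequency domain (Wiener filter)."""
    # Zero-pad the PSF to the image size; in Fourier space the blur
    # becomes an elementwise multiplication, so inversion is a division.
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Plain division by H explodes wherever the kernel wiped out a
    # frequency; the noise_to_signal term keeps those terms bounded.
    X = np.conj(H) * B / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(X))
```

Even then it's not a free lunch: wherever |H| is near zero the information is simply gone, and any noise at those frequencies gets amplified instead of recovered as detail.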

If you apply a fake motion blur, like in Photoshop or After Effects, then that could probably be reversed pretty well.

reply
> and the best the algorithm could do will be limited by the uncertainty in estimating those values

That's relatively easy if you're assuming simple translation and rotation (simple camera movement), as opposed to a squiggle movement or something (e.g. from vibration or being knocked), because you can simply detect how much sharper the image gets and home in on the right values.
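
A minimal sketch of that search, assuming a straight-line blur parameterized by (length, angle) and reusing the wiener_deblur sketch from the comment above; the variance-of-Laplacian sharpness score is just one common hand-picked choice:

```python
import numpy as np
from scipy.ndimage import laplace

def motion_psf(length, angle_deg, size=31):
    """Straight-line motion-blur kernel with the given length (px) and angle."""
    psf = np.zeros((size, size))
    c, theta = size // 2, np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        r = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= r < size and 0 <= col < size:
            psf[r, col] = 1.0
    return psf / psf.sum()

def estimate_motion(blurred, lengths, angles):
    """Grid-search for the blur parameters whose deblur scores sharpest."""
    def sharpness(img):
        return laplace(img).var()  # more recovered fine detail -> higher variance
    return max(((l, a) for l in lengths for a in angles),
               key=lambda p: sharpness(wiener_deblur(blurred, motion_psf(*p))))
```

For a squiggle you'd need far more parameters, and the search stops being a simple grid.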

reply
I recall a paper from many years ago (early 2010s) describing methods to estimate the camera motion and remove motion blur from the blurry image contents alone. I think they used a quality metric on the resulting “unblurred” image as a loss function for learning the effective motion estimate. This was before deep learning took off; certainly today’s image models could do much better at assessing the quality of the unblurred image than a hand-crafted metric.
reply
Probably not the exact paper you have in mind, but... https://jspan.github.io/projects/text-deblurring/index.html
reply
Record gyro motion at time of shutter?
reply
The missing piece of the puzzle is how to determine the blur kernel from the blurry image. There's a whole body of literature on that, known as blind deblurring.

For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...
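
Not what that example does specifically, but the classic alternating flavor of blind deblurring (roughly in the spirit of the old Ayers-Dainty blind Richardson-Lucy scheme) fits in a few lines. A rough sketch, assuming a non-negative grayscale float image larger than the kernel; real methods layer image/kernel priors and multiscale estimation on top:

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_update_image(img, psf, blurred):
    """One Richardson-Lucy multiplicative update of the latent image."""
    ratio = blurred / (fftconvolve(img, psf, mode="same") + 1e-12)
    return img * fftconvolve(ratio, psf[::-1, ::-1], mode="same")

def rl_update_psf(psf, img, blurred):
    """The same update with the roles swapped: refine the kernel instead."""
    ratio = blurred / (fftconvolve(img, psf, mode="same") + 1e-12)
    corr = fftconvolve(ratio, img[::-1, ::-1], mode="same")
    # Only lags inside the kernel support matter; crop the central window.
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    k = psf.shape[0] // 2
    new = psf * corr[cy - k:cy + k + 1, cx - k:cx + k + 1]
    return new / new.sum()  # kernels stay normalized

def blind_deblur(blurred, kernel_size=15, iters=50):
    """Alternate between refining the kernel and refining the image."""
    img = blurred.copy()
    psf = np.full((kernel_size, kernel_size), 1.0 / kernel_size**2)
    for _ in range(iters):
        psf = rl_update_psf(psf, img, blurred)
        img = np.clip(rl_update_image(img, psf, blurred), 0, None)
    return img, psf
```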

reply
Absolutely, Photoshop has it:

https://helpx.adobe.com/photoshop/using/reduce-camera-shake-...

Or... from the note at the top, had it? Very strange; features are almost never removed. I really wonder what the architectural reason was here.

reply
Just guessing, patent troll.
reply
Oof, I hope not. I wonder if the architecture for GPU filters migrated, and this feature didn't get enough usage to warrant being rewritten from scratch?
reply
I believe Microsoft of all people solved this a while ago by using the gyroscope in a phone to produce a de-blur kernel that cleaned up the image.

It's somewhere here: https://www.microsoft.com/en-us/research/product/computation...
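
The rough recipe, as I understand it, is to integrate the gyro's angular rates over the exposure and project the rotation onto the image plane. A heavily simplified sketch (pure rotation, small angles, no rolling shutter; all names and the Euler integration are my own illustration, not Microsoft's code):

```python
import numpy as np

def gyro_to_psf(gyro_rates, timestamps, focal_px, size=31):
    """gyro_rates: (N, 2) yaw/pitch angular rates (rad/s) sampled during
    the exposure; timestamps: (N,) sample times in seconds."""
    # Euler-integrate angular rate into a rotation trajectory.
    dt = np.diff(timestamps)[:, None]
    angles = np.vstack([np.zeros(2), np.cumsum(gyro_rates[:-1] * dt, axis=0)])
    # Small-angle approximation: a rotation of theta radians shifts the
    # image by roughly focal_px * theta pixels.
    traj = focal_px * angles
    # Splat the trajectory into a kernel; dwell time becomes kernel weight.
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in traj:
        r, col = int(round(c + dy)), int(round(c + dx))
        if 0 <= r < size and 0 <= col < size:
            psf[r, col] += 1.0
    return psf / psf.sum()
```

Once you have that kernel, the problem collapses to non-blind deconvolution, like the Wiener sketch further up the thread.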

reply
I wonder if the "night mode" on newer phone cameras is doing something similar: take a long exposure, then use the IMU to produce a kernel that tidies up the image after the fact. The night mode on my S24 actually produces some fuzzy, noisy artifacts that aren't terribly different from the artifacts in the OP's deblurs.
reply