Generative Image Dynamics Summary
Google Research recently published a paper called Generative Image Dynamics, which has drawn wide attention for its interactive motion results.
The paper presents an approach to modelling an image-space prior on scene motion, turning a single still image into dynamic video.

Here we’ll quickly summarise the approach and methodology the Google researchers used in this paper.
The methodology consists of four main stages:
- Data Collection: The training data consists of motion trajectories extracted from real video sequences containing natural, oscillating motion, such as trees, candles, and clothes swaying in the wind, as can also be seen in the official demo.
- Model Training: Given a single image, the trained model uses a frequency-coordinated diffusion sampling process to predict a per-pixel long-term motion representation in the Fourier domain. The paper calls this representation a neural stochastic motion texture.
- Motion Representation: The neural stochastic motion texture can be converted into dense motion trajectories that span an entire video; these trajectories are what drive the motion seen in the demos (see the sketches after this list).
- Application: Combined with an image-based rendering module, these motion trajectories enable several downstream applications, such as turning still images into seamlessly looping videos or letting users realistically interact with objects in real pictures.
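To make the motion representation concrete, here is a minimal sketch of how a Fourier-domain motion texture can be turned into dense per-pixel trajectories with an inverse FFT. The shapes and names below (H, W, K, T, spectrum_to_trajectories) are illustrative assumptions, and the random spectrum stands in for the diffusion model’s actual output:

```python
import numpy as np

# Assumed setup (not the paper's code): the model predicts, for every pixel,
# complex Fourier coefficients for the first K temporal frequencies of its
# 2D motion trajectory.
H, W, K = 64, 64, 16   # image height/width, number of frequency bands
T = 150                # number of video frames to synthesize

# Stand-in for the diffusion model's output: a random per-pixel spectrum
# with separate x and y displacement components.
rng = np.random.default_rng(0)
spectrum = rng.normal(size=(H, W, K, 2)) + 1j * rng.normal(size=(H, W, K, 2))

def spectrum_to_trajectories(spectrum, num_frames):
    """Turn a per-pixel Fourier motion representation into dense motion
    trajectories: a (dx, dy) displacement for every pixel and frame."""
    H, W, K, _ = spectrum.shape
    # Zero-pad the K predicted bands up to the full frame count, then take
    # an inverse FFT along the temporal axis to recover displacements over time.
    full = np.zeros((H, W, num_frames, 2), dtype=complex)
    full[:, :, :K, :] = spectrum
    return np.fft.ifft(full, axis=2).real   # shape (H, W, T, 2)

trajectories = spectrum_to_trajectories(spectrum, T)
print(trajectories.shape)  # (64, 64, 150, 2): per-pixel (dx, dy) per frame
```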
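And a rough idea of the rendering step: below, a plain backward warp with bilinear sampling animates an image using one frame’s displacement field. This is only a stand-in; the paper’s image-based rendering module is learned and splats deep features with a softmax weighting rather than warping pixels directly:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def render_frame(image, displacement):
    """Backward-warp an (H, W, 3) image by a per-pixel (dx, dy) field.
    A crude stand-in for the paper's learned image-based rendering module."""
    H, W, _ = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sample each output pixel from the source location it moved away from.
    src = [ys - displacement[..., 1], xs - displacement[..., 0]]
    return np.stack(
        [map_coordinates(image[..., c], src, order=1, mode="nearest")
         for c in range(3)],
        axis=-1,
    )

# Toy usage: warp a random image by a constant two-pixel rightward shift.
image = np.random.rand(64, 64, 3)
displacement = np.zeros((64, 64, 2))
displacement[..., 0] = 2.0   # dx = 2 pixels
frame = render_frame(image, displacement)
print(frame.shape)  # (64, 64, 3)
```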
That’s a short, crisp summary of the methodology used in Generative Image Dynamics.
To learn more about the earlier work this line of research builds on, check out the Interactive Dynamic Video link in the references below.

For more quick summaries like this one, follow me.
Suggestions for improving the content are welcome in the comments.
References:
- Official demo: https://generative-dynamics.github.io/#demo
- Research paper: https://generativedynamics.github.io/static/pdfs/GenerativeImageDynamics.pdf
- Interactive Dynamic Video: http://www.interactivedynamicvideo.com/