Generating future action videos from static images can help computer vision systems better support video understanding and intelligent decision-making. However, current models focus mainly on the motion trends of the generated objects and handle local details poorly, so the local features of generated videos suffer from blurred frames and incoherent motion. This paper proposes a two-stage model, the video spatio-temporal generative adversarial network (VSTGAN), which consists of two GAN networks: a temporal network (T-net) and a spatial network (S-net). The model combines the strengths of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and GANs to decompose the complex spatiotemporal generation problem into temporal and spatial dimensions, allowing VSTGAN to focus on local features in each dimension separately. In the temporal dimension, we propose an RNN unit, the convolutional attention unit (ConvAU), which uses a convolutional attention module to dynamically generate the weights that update the hidden state; T-net uses the ConvAU to generate local dynamics. In the spatial dimension, S-net uses CNNs and attention modules to perform resolution reconstruction of the generated local dynamics for video generation. We build two small-sample datasets and validate our approach on these two new datasets and the public KTH dataset. The results show that our approach effectively generates local details in future action videos and that its performance on small-sample datasets is competitive with the state of the art in video generation.
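The abstract does not give the ConvAU update equations, so the following is only a minimal sketch of the general idea it describes: a recurrent cell whose hidden-state update weights are produced dynamically by a convolutional attention module. The class names (ConvAttentionGate, ConvAU), the GRU-like gated update, and the channel/feature-map sizes are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class ConvAttentionGate(nn.Module):
    """Hypothetical convolutional attention module: produces per-pixel,
    per-channel gating weights from the current input and hidden state."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x, h):
        # Attention weights in [0, 1], one per channel and spatial location.
        return torch.sigmoid(self.conv(torch.cat([x, h], dim=1)))


class ConvAU(nn.Module):
    """Sketch of a convolutional attention recurrent unit: the attention map
    gates how much of the hidden state is overwritten by a new candidate,
    so the update weights are generated dynamically at every step."""
    def __init__(self, channels):
        super().__init__()
        self.attn = ConvAttentionGate(channels)
        self.candidate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x, h):
        a = self.attn(x, h)                                              # dynamic update weights
        h_tilde = torch.tanh(self.candidate(torch.cat([x, h], dim=1)))  # candidate state
        return (1 - a) * h + a * h_tilde                                 # attention-gated update


# Usage: roll the unit over a short feature-map sequence to model local dynamics.
if __name__ == "__main__":
    unit = ConvAU(channels=32)
    h = torch.zeros(1, 32, 16, 16)
    for _ in range(5):                       # 5 time steps of 32-channel feature maps
        x = torch.randn(1, 32, 16, 16)
        h = unit(x, h)
    print(h.shape)                           # torch.Size([1, 32, 16, 16])
```

In this reading, the attention map plays the role that a scalar update gate plays in a GRU, but it is computed convolutionally, so different spatial regions of the hidden state can be refreshed at different rates.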