Conference Paper, Year: 2021

Learning Long-Term Style-Preserving Blind Video Temporal Consistency

Abstract

When image-trained algorithms are independently applied to successive video frames, noxious flickering tends to appear. State-of-the-art post-processing techniques that aim at fostering temporal consistency generate other temporal artifacts and visually alter the style of videos. We propose a post-processing model, agnostic to the transformation applied to videos (e.g., style transfer, image manipulation using GANs), in the form of a recurrent neural network. Our model is trained using a Ping Pong procedure and its corresponding loss, recently introduced for GAN video generation, as well as a novel style-preserving perceptual loss. The former improves long-term temporal consistency learning, while the latter fosters style preservation. We evaluate our model on the DAVIS and Videvo datasets and show that our approach achieves state-of-the-art flicker removal while preserving the overall style of the videos better than previous approaches.
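The abstract describes the two training losses only at a high level. Below is a minimal sketch of how they are commonly formulated, assuming PyTorch and torchvision; the function names (`ping_pong_loss`, `style_preserving_loss`), the VGG layer cut-off, and the L1/Gram-matrix choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchvision.models as models

# Frozen VGG-16 feature extractor for the style term
# (the layer choice here is an assumption, not the paper's setting).
vgg = models.vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def ping_pong_loss(forward_out, backward_out):
    """Ping Pong loss: the recurrent model processes the sequence
    forward (t = 0..T) and then backward (t = T..0); frames from the
    two passes should agree, which discourages long-term drift.
    backward_out is assumed re-ordered so index t matches forward_out[t]."""
    return torch.stack(
        [torch.mean(torch.abs(f - b)) for f, b in zip(forward_out, backward_out)]
    ).mean()

def gram_matrix(feat):
    """Channel-wise Gram matrix, normalized by feature size."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_preserving_loss(output_frame, stylized_frame):
    """Style-preserving perceptual loss: match Gram matrices of VGG
    features between the temporally consistent output and the original
    per-frame stylized result, so de-flickering does not wash out the style."""
    diff = gram_matrix(vgg(output_frame)) - gram_matrix(vgg(stylized_frame))
    return torch.mean(diff ** 2)
```

A full training objective would combine these terms with the model's other losses; the weights and feature layers above are placeholders.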
No file deposited

Dates and versions

hal-03204046, version 1 (21-04-2021)

Identifiers

  • HAL Id: hal-03204046, version 1

Cite

Hugo Thimonier, Julien Despois, Robin Kips, Matthieu Perrot. Learning Long-Term Style-Preserving Blind Video Temporal Consistency. IEEE International Conference on Multimedia and Expo, Jul 2021, Virtual, France. ⟨hal-03204046⟩