Preprints, Working Papers. Year: 2022

Autoregressive GAN for Semantic Unconditional Head Motion Generation

Louis Airale (1, 2), Xavier Alameda-Pineda (2), Stéphane Lathuilière (3, 4, 5), Dominique Vaufreydaz (1)

Abstract

We address the task of unconditional head motion generation, animating still human faces in a low-dimensional semantic space. In contrast with audio-conditioned talking-head generation, which seldom puts emphasis on realistic head motions, we devise a GAN-based architecture that produces rich head motion sequences while avoiding known pitfalls of GANs. Namely, the autoregressive generation of incremental outputs ensures smooth trajectories, while a multi-scale discriminator on input pairs drives generation toward a better handling of high- and low-frequency signals and less mode collapse. We experimentally demonstrate the relevance of the proposed architecture and compare it with models that have shown state-of-the-art performance on similar tasks.
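
The abstract names two mechanisms: autoregressive generation of pose increments (for smooth trajectories) and a multi-scale discriminator operating on pairs of frames (for better frequency coverage and less mode collapse). The PyTorch sketch below is a minimal illustration of these two ideas only, not the authors' implementation; the pose dimensionality, the GRU backbone, the pair offsets, and all names (AutoregressiveGenerator, PairDiscriminator, POSE_DIM, ...) are assumptions made here for clarity.

```python
import torch
import torch.nn as nn

# Assumed dimensions: a low-dimensional semantic head-motion code.
# The feature space actually used in the paper is not specified here.
LATENT_DIM = 64
POSE_DIM = 6       # assumed size of the semantic pose code
HIDDEN_DIM = 128

class AutoregressiveGenerator(nn.Module):
    """Generates a motion sequence step by step. At each step the network
    predicts an *increment* added to the previous pose, so consecutive
    frames differ only by a small residual."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(POSE_DIM + LATENT_DIM, HIDDEN_DIM)
        self.to_delta = nn.Linear(HIDDEN_DIM, POSE_DIM)

    def forward(self, init_pose, seq_len):
        batch = init_pose.size(0)
        h = init_pose.new_zeros(batch, HIDDEN_DIM)
        pose = init_pose
        poses = [pose]
        for _ in range(seq_len - 1):
            z = torch.randn(batch, LATENT_DIM, device=init_pose.device)
            h = self.rnn(torch.cat([pose, z], dim=-1), h)
            pose = pose + self.to_delta(h)   # incremental (residual) update
            poses.append(pose)
        return torch.stack(poses, dim=1)     # (batch, seq_len, POSE_DIM)

class PairDiscriminator(nn.Module):
    """Scores pairs of frames sampled at several temporal offsets; small
    offsets expose fast (high-frequency) dynamics, large offsets slow
    (low-frequency) ones."""
    def __init__(self, offsets=(1, 4, 16)):
        super().__init__()
        self.offsets = offsets
        self.net = nn.Sequential(
            nn.Linear(2 * POSE_DIM, HIDDEN_DIM), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN_DIM, 1),
        )

    def forward(self, seq):                  # seq: (batch, seq_len, POSE_DIM)
        scores = []
        for off in self.offsets:
            pairs = torch.cat([seq[:, :-off], seq[:, off:]], dim=-1)
            scores.append(self.net(pairs).mean(dim=1))
        return torch.stack(scores, dim=-1)   # one score per temporal scale

# Usage: sample a 100-frame motion from a zero initial pose and score it.
gen = AutoregressiveGenerator()
fake = gen(torch.zeros(8, POSE_DIM), seq_len=100)
print(fake.shape)                            # torch.Size([8, 100, 6])
disc = PairDiscriminator()
print(disc(fake).shape)                      # torch.Size([8, 1, 3])
```

The design point the sketch tries to convey is that predicting increments rather than absolute poses biases the generator toward smooth trajectories, while a critic that only ever sees frame pairs at several offsets must be fooled at both fine and coarse time scales.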
Main file: SUHMo.pdf (1.02 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03833759, version 1 (28-10-2022)

Identifiers

HAL Id: hal-03833759

Cite

Louis Airale, Xavier Alameda-Pineda, Stéphane Lathuilière, Dominique Vaufreydaz. Autoregressive GAN for Semantic Unconditional Head Motion Generation. 2022. ⟨hal-03833759⟩