Transferring Style in Motion Capture Sequences with Adversarial Learning
Abstract
We focus on style transfer for sequential data in a supervised setting. Assuming that sequential data carry both content and style information, we want to learn models that transform a sequence into a new one that preserves its content but adopts the style of another sequence, using a training dataset in which content and style labels are available. Following work on image generation and editing with adversarial learning, we explore the design of neural network architectures for the task of sequence editing, which we apply to motion capture sequences.
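As a rough illustration of the kind of architecture this setting calls for, the sketch below pairs a recurrent encoder-decoder, conditioned on a target style label, with a style discriminator trained adversarially. It is an assumed minimal design, not the authors' exact model: all module names, dimensions, and the PyTorch framing are illustrative choices.

```python
# Minimal sketch (assumed architecture): a recurrent encoder-decoder that
# re-generates a motion sequence conditioned on a target style label, trained
# adversarially against a style discriminator.
import torch
import torch.nn as nn


class SequenceTranslator(nn.Module):
    """Encodes a motion sequence and decodes it conditioned on a style label."""

    def __init__(self, pose_dim: int, hidden_dim: int, num_styles: int, style_dim: int = 16):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.style_embedding = nn.Embedding(num_styles, style_dim)
        self.decoder = nn.GRU(hidden_dim + style_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, sequence: torch.Tensor, target_style: torch.Tensor) -> torch.Tensor:
        # sequence: (batch, time, pose_dim); target_style: (batch,) integer labels
        features, _ = self.encoder(sequence)                 # per-frame content features
        style = self.style_embedding(target_style)           # (batch, style_dim)
        style = style.unsqueeze(1).expand(-1, sequence.size(1), -1)
        decoded, _ = self.decoder(torch.cat([features, style], dim=-1))
        return self.out(decoded)                             # re-stylised sequence


class StyleDiscriminator(nn.Module):
    """Adversarial critic predicting which style a sequence exhibits."""

    def __init__(self, pose_dim: int, hidden_dim: int, num_styles: int):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_styles)

    def forward(self, sequence: torch.Tensor) -> torch.Tensor:
        _, last_hidden = self.rnn(sequence)
        return self.classifier(last_hidden[-1])              # per-style logits


if __name__ == "__main__":
    # Toy usage: push a batch of random "motion" clips toward style label 2.
    generator = SequenceTranslator(pose_dim=63, hidden_dim=128, num_styles=8)
    critic = StyleDiscriminator(pose_dim=63, hidden_dim=128, num_styles=8)
    motion = torch.randn(4, 120, 63)                         # 4 clips, 120 frames, 63 joint values
    target = torch.full((4,), 2, dtype=torch.long)
    restyled = generator(motion, target)
    adv_loss = nn.functional.cross_entropy(critic(restyled), target)  # fool the critic
    print(restyled.shape, adv_loss.item())
```

In such a setup the content/style labels of the training set supervise both networks: the discriminator learns to recognise styles, while the generator is pushed to produce sequences the discriminator classifies as the requested target style.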
Origin | Files produced by the author(s)
---|---