View-invariant Skeleton Action Representation Learning via Motion Retargeting
Abstract
Current self-supervised approaches for skeleton action representation learning often focus on constrained scenarios, where videos and skeleton data are recorded in laboratory settings. When dealing with estimated skeleton data in real-world videos, such methods perform poorly due to the large variations across subjects and camera viewpoints. To address this issue, we introduce ViA, a novel View-Invariant Autoencoder for self-supervised skeleton action representation learning. ViA leverages motion retargeting between different human performers as a pretext task, in order to disentangle the latent action-specific 'Motion' features on top of the visual representation of a 2D or 3D skeleton sequence. Such 'Motion' features are invariant to skeleton geometry and camera view and allow ViA to facilitate both cross-subject and cross-view action classification tasks. We conduct a study focusing on transfer learning for skeleton-based action recognition with self-supervised pre-training on real-world data (e.g., Posetics). Our results show that skeleton representations learned by ViA are generic enough to improve upon state-of-the-art action classification accuracy, not only on 3D laboratory datasets such as NTU-RGB+D 60 and NTU-RGB+D 120, but also on real-world datasets where only 2D data are accurately estimated, e.g., Toyota Smarthome, UAV-Human and Penn Action.
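To illustrate the general idea behind the motion-retargeting pretext task described above, the following is a minimal, hypothetical sketch (not the authors' architecture or code): a PyTorch autoencoder that encodes a skeleton sequence into a sequence-level 'Motion' code and a static 'Character' (skeleton geometry/view) code, and recombines codes from two performers to drive retargeting. All module names, dimensions, and the GRU-based design are assumptions made for illustration only.

```python
# Hypothetical sketch of a motion-retargeting autoencoder for skeleton sequences.
# Input sequences are assumed to have shape (batch, frames, joints, coords).
import torch
import torch.nn as nn


class RetargetingAutoencoderSketch(nn.Module):
    def __init__(self, num_joints=17, coord_dim=2, motion_dim=128, character_dim=64):
        super().__init__()
        in_dim = num_joints * coord_dim
        # Motion encoder: per-frame features intended to capture action dynamics.
        self.motion_enc = nn.GRU(in_dim, motion_dim, batch_first=True)
        # Character encoder: a single code intended to capture skeleton geometry / view.
        self.character_enc = nn.GRU(in_dim, character_dim, batch_first=True)
        # Decoder reconstructs a skeleton sequence from (motion, character) codes.
        self.dec = nn.GRU(motion_dim + character_dim, in_dim, batch_first=True)

    def encode(self, seq):
        b, t, j, c = seq.shape
        x = seq.reshape(b, t, j * c)
        motion, _ = self.motion_enc(x)        # (b, t, motion_dim)
        _, character = self.character_enc(x)  # final hidden state as character code
        return motion, character[-1]          # (b, character_dim)

    def decode(self, motion, character):
        t = motion.shape[1]
        # Broadcast the static character code over time and decode jointly.
        z = torch.cat([motion, character.unsqueeze(1).expand(-1, t, -1)], dim=-1)
        out, _ = self.dec(z)
        return out                            # (b, t, joints * coords)

    def retarget(self, seq_a, seq_b):
        # Drive performer B's skeleton with performer A's motion.
        motion_a, _ = self.encode(seq_a)
        _, character_b = self.encode(seq_b)
        return self.decode(motion_a, character_b)


# Usage sketch: self-reconstruction loss as a stand-in training signal,
# plus a retargeted output combining A's motion with B's skeleton.
model = RetargetingAutoencoderSketch()
seq_a = torch.randn(4, 30, 17, 2)  # dummy 2D skeleton sequences
seq_b = torch.randn(4, 30, 17, 2)

motion_a, char_a = model.encode(seq_a)
recon_a = model.decode(motion_a, char_a)
loss = nn.functional.mse_loss(recon_a, seq_a.reshape(4, 30, -1))

retargeted = model.retarget(seq_a, seq_b)
```

In such a setup, only the 'Motion' code would be kept as the view-invariant action representation for downstream classification; the actual ViA design and losses are described in the paper itself.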