Conference paper, 2023

Rethinking Scene Graphs for Action Recognition

Abstract

In recent years, Graph Neural Networks (GNNs) have been widely used in a variety of applications, including action recognition. Scene graphs are extracted from videos and fed to a GNN in order to predict the action being performed. However, in previous works, design choices for such scene graphs are often arbitrary; for instance, directed temporal edges are added without giving the GNN the capacity to exploit this information. In this work, we rethink the way scene graphs are built, taking inspiration from line graphs to propose a new design that can be applied to any type of human activity. We run experiments on two datasets and show that adapting our GNN so that it can make use of temporal edges improves its precision by up to 7.5% for action recognition. We also show that adopting our alternative scene graph design further improves performance by an additional 14%, opening new perspectives for this field.
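For illustration only, and not the authors' implementation: the sketch below shows a toy video scene graph with directed temporal edges, and its line-graph view in which every relation (spatial or temporal) becomes a node that a GNN could attach features to. The object names, relation labels, and the use of networkx are assumptions made for the example.

```python
# Illustrative sketch only; object names and relation labels are hypothetical,
# and networkx is used for convenience (not the paper's implementation).
import networkx as nx

scene = nx.DiGraph()
# Spatial relations within a single frame (t0).
scene.add_edge("hand_t0", "cup_t0", relation="grasps")
scene.add_edge("cup_t0", "table_t0", relation="on")
# Directed temporal edges linking the same object across consecutive frames.
scene.add_edge("hand_t0", "hand_t1", relation="temporal")
scene.add_edge("cup_t0", "cup_t1", relation="temporal")

# Line-graph view: each edge of the scene graph becomes a node, so relations
# (including temporal ones) can carry their own features during message passing.
line_view = nx.line_graph(scene)
print(sorted(line_view.nodes()))
# e.g. [('cup_t0', 'cup_t1'), ('cup_t0', 'table_t0'), ('hand_t0', 'cup_t0'), ...]
```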
Main file
160.pdf (1.01 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04420416, version 1 (26-01-2024)

Identifiers

  • HAL Id: hal-04420416, version 1

Cite

Mathieu Riand, Patrick Le Callet, Laurent Dollé. Rethinking Scene Graphs for Action Recognition. 2023 IEEE International Conference on Visual Communications and Image Processing, Dec 2023, Jeju, South Korea. ⟨hal-04420416⟩
73 Views
155 Downloads
