Rethinking Scene Graphs for Action Recognition
Abstract
In recent years, Graph Neural Networks (GNNs) have been widely used in a variety of applications, including action recognition. Scene graphs are extracted from videos and fed to a GNN in order to predict the depicted action. However, in previous works, choices regarding the design of such scene graphs are often arbitrary; for instance, directed temporal edges are added without giving the GNN the capacity to exploit this information. In this work, we rethink the way scene graphs are built, taking inspiration from line graphs to propose a new design that can be applied to any type of human activity. We perform our experiments on two datasets and show that adapting our GNN so that it can make use of temporal edges improves its precision by up to 7.5% for action recognition. We also show that adopting our alternative scene graph design further improves performance by an additional 14%, opening new perspectives in this field.
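To make the idea of edge-type-aware message passing concrete, the sketch below (not the authors' actual architecture) uses PyTorch Geometric's relational graph convolution: spatial and temporal edges receive distinct relation types, so the GNN can treat directed temporal links differently from spatial ones. The toy graph, node features, and dimensions are all illustrative assumptions.

```python
import torch
from torch_geometric.nn import RGCNConv

# Toy spatio-temporal scene graph over two frames: nodes 0-1 are
# (person, cup) in frame t, nodes 2-3 are the same objects in frame t+1.
x = torch.randn(4, 16)  # illustrative 16-dim node features

# Edges: a spatial person->cup relation inside each frame, plus
# directed temporal edges linking each object to its next-frame instance.
edge_index = torch.tensor([
    [0, 2, 0, 1],   # source nodes
    [1, 3, 2, 3],   # target nodes
])
edge_type = torch.tensor([0, 0, 1, 1])  # 0 = spatial, 1 = temporal

# A relation-aware convolution learns separate weights per edge type,
# giving the network the capacity to actually use temporal edges.
conv = RGCNConv(in_channels=16, out_channels=32, num_relations=2)
out = conv(x, edge_index, edge_type)
print(out.shape)  # torch.Size([4, 32])
```

Without the `edge_type` distinction, a standard graph convolution would mix spatial and temporal neighbors identically, which is precisely the limitation the abstract points out.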