COVAD: Content-oriented video anomaly detection using a self-attention based deep learning model - Archive ouverte HAL
Journal article in Virtual Reality & Intelligent Hardware, 2023

COVAD: Content-oriented video anomaly detection using a self-attention based deep learning model

Abstract

Video anomaly detection has long been a hot topic, attracting increasing attention. Most existing methods process the entire video rather than considering only the significant context. This paper proposes a novel video anomaly detection method named COVAD, which focuses on the regions of interest in the video instead of the entire video. Our proposed COVAD method is based on an auto-encoded convolutional neural network and a coordinated attention mechanism, which can effectively capture meaningful objects in the video and the dependencies between different objects. Building on an existing memory-guided video frame prediction network, our algorithm can more effectively predict the future motion and appearance of objects in the video. The proposed algorithm obtained better experimental results on multiple datasets and outperformed the baseline models considered in our analysis. In addition, we improved a visual test that can provide pixel-level anomaly explanations.
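To make the attention component of the abstract concrete, below is a minimal sketch, assuming the "coordinated attention mechanism" corresponds to a coordinate-attention-style block placed inside the convolutional auto-encoder. This is not the authors' released code; the class name, channel counts, and reduction ratio are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CoordinateAttention(nn.Module):
        """Illustrative coordinate-attention block: attends along the height
        and width axes separately, then gates the input feature map."""

        def __init__(self, channels: int, reduction: int = 32):
            super().__init__()
            mid = max(8, channels // reduction)
            self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
            self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
            self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
            self.bn = nn.BatchNorm2d(mid)
            self.act = nn.ReLU(inplace=True)
            self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
            self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            # Encode positional information along each spatial axis separately.
            x_h = self.pool_h(x)                      # (B, C, H, 1)
            x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
            y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
            y_h, y_w = torch.split(y, [h, w], dim=2)
            # Per-axis attention maps gate the input, highlighting object regions.
            a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
            a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
            return x * a_h * a_w

Applied to an encoder feature map, e.g. CoordinateAttention(64)(torch.randn(1, 64, 32, 32)), the block returns a tensor of the same shape with object regions re-weighted, which is how such attention would be slotted between encoder and decoder stages of a frame-prediction auto-encoder.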
Main file
vrih0523.pdf (1.42 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04029569 , version 1 (27-03-2023)

Identifiers

Cite

Wenhao Shao, Praboda Rajapaksha, Yanyan Wei, Dun Li, Noel Crespi, et al.. COVAD: Content-oriented video anomaly detection using a self-attention based deep learning model. Virtual Reality & Intelligent Hardware, 2023, 5 (1), pp.24-41. ⟨10.1016/j.vrih.2022.06.001⟩. ⟨hal-04029569⟩
28 views
54 downloads
