Anomaly Detection via Learnable Pretext Task - Archive ouverte HAL
Conference paper, Year: 2022

Anomaly Detection via Learnable Pretext Task

Loic Jezequel
Ngoc-Son Vu
Jean Beaudet
Aymeric Histace

Abstract

Deep anomaly detection has become an appealing solution in many fields over the years and has seen many recent developments. One of the most promising avenues is the use of pretext tasks, which have greatly improved one-class anomaly detection. However, this approach is limited by the lack of anomalous samples and carries an important inductive bias. Indeed, the discrimination power of pretext tasks could be further improved by incorporating a small set of anomalies, which in practice is often available. To this end, we introduce the concept of learnable pretext tasks, where the pretext task itself is learned to succeed on normal samples while failing on anomalies. To our knowledge, this is the first work to explore this direction. By applying the learnable task to a thin-plate-transform recognition task, our method helps discriminate harder edge-case anomalies and greatly improves anomaly detection. It outperforms the state of the art with up to 49% relative error reduction, measured with AUROC, on various anomaly detection problems including one-vs-all and face presentation attack detection.
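
The sketch below is not the authors' implementation; it only illustrates the core idea described in the abstract, assuming a transform-prediction pretext task (e.g. one of K hypothetical thin-plate warps) and a margin-based penalty on the few known anomalies. Names such as PretextClassifier, learnable_pretext_loss, K and margin are illustrative assumptions.

    # Minimal sketch (assumptions noted above): a classifier g predicts which of K
    # transforms was applied; it is trained to succeed on normal samples while
    # failing on the small set of known anomalies.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    K = 8  # hypothetical number of transforms (e.g. thin-plate warps)

    class PretextClassifier(nn.Module):
        def __init__(self, backbone: nn.Module, feat_dim: int):
            super().__init__()
            self.backbone = backbone            # any feature extractor
            self.head = nn.Linear(feat_dim, K)  # predicts the applied transform

        def forward(self, x):
            return self.head(self.backbone(x))

    def learnable_pretext_loss(model, x, t, is_anomaly, margin=1.0):
        # x: transformed images, t: index of the applied transform,
        # is_anomaly: 1 for the few known anomalies, 0 for normal samples.
        ce = F.cross_entropy(model(x), t, reduction="none")
        is_anomaly = is_anomaly.float()
        loss_normal = (1.0 - is_anomaly) * ce                 # succeed on normals
        loss_anom = is_anomaly * F.relu(margin - ce)          # fail on anomalies
        return (loss_normal + loss_anom).mean()

    def anomaly_score(model, x, t):
        # At test time the pretext-task error itself can serve as the anomaly score.
        with torch.no_grad():
            return F.cross_entropy(model(x), t, reduction="none")
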
No file deposited

Dates and versions

hal-03737352 , version 1 (24-07-2022)

Identifiers

  • HAL Id : hal-03737352 , version 1

Cite

Loic Jezequel, Ngoc-Son Vu, Jean Beaudet, Aymeric Histace. Anomaly Detection via Learnable Pretext Task. International Conference on Pattern Recognition, IAPR; IEEE, Aug 2022, Montreal, Canada. ⟨hal-03737352⟩