Anomaly Detection via Learnable Pretext Task
Abstract
Deep anomaly detection has become an appealing solution in many fields over the years and has seen many recent developments. One of the most promising avenues is the use of pretext tasks, which have greatly improved one-class anomaly detection. However, this approach is limited by the lack of anomalous samples and carries a strong inductive bias. Indeed, the discriminative power of pretext tasks could be further improved by incorporating a small set of anomalies, which is often available in practice.
To this end, we introduce the concept of learnable pretext tasks, where the pretext task itself is learned to succeed on normal samples while failing on anomalies. To our knowledge, this is the first work to explore this direction. By instantiating the learnable task as a thin-plate transform recognition task, our method helps discriminate harder edge-case anomalies and greatly improves anomaly detection. It outperforms the state of the art with up to a 49% relative error reduction, measured by AUROC, on various anomaly detection problems, including one-vs-all and face presentation attack detection.
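As a rough illustration of the idea (not the paper's exact objective; the symbols $T_\theta$, $g_\phi$, $t(x)$, $\ell$, and $\lambda$ are assumptions introduced here), a learnable pretext task can be sketched as jointly training a task $T_\theta$ and a solver $g_\phi$ so that the pretext loss is small on the normal distribution $\mathcal{N}$ and large on the available anomaly set $\mathcal{A}$:

\[
\min_{\theta,\,\phi}\;
\mathbb{E}_{x\sim\mathcal{N}}\big[\ell\big(g_\phi(T_\theta(x)),\, t(x)\big)\big]
\;-\;
\lambda\,\mathbb{E}_{x\sim\mathcal{A}}\big[\ell\big(g_\phi(T_\theta(x)),\, t(x)\big)\big],
\]

where $t(x)$ denotes the pretext target (e.g., the parameters of the applied transform) and $\lambda > 0$ balances the two terms. At test time, one would then typically use the pretext loss itself as the anomaly score.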