Journal article: Computer Vision and Image Understanding, 2024

Opti-CAM: Optimizing saliency maps for interpretability

Abstract

Methods based on class activation maps (CAM) provide a simple mechanism to interpret predictions of convolutional neural networks by using linear combinations of feature maps as saliency maps. By contrast, masking-based methods optimize a saliency map directly in the image space or learn it by training another network on additional data. In this work we introduce Opti-CAM, combining ideas from CAM-based and masking-based approaches. Our saliency map is a linear combination of feature maps, where weights are optimized per image such that the logit of the masked image for a given class is maximized. We also fix a fundamental flaw in two of the most common evaluation metrics of attribution methods. On several datasets, Opti-CAM largely outperforms other CAM-based approaches according to the most relevant classification metrics. We provide empirical evidence supporting that localization and classifier interpretability are not necessarily aligned.
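The abstract describes the core optimization in enough detail to sketch it: the saliency map is a linear combination of feature maps, and the per-image weights are optimized so that the logit of the masked image for the target class is maximized. Below is a minimal PyTorch sketch of that idea, not the authors' exact implementation; the function name `opti_cam_like_saliency`, the softmax weight normalization, the sigmoid squashing, and the optimizer settings are illustrative assumptions.

```python
# Minimal sketch of an Opti-CAM-like optimization, assuming a PyTorch
# classifier and a feature extractor returning (1, K, h, w) feature maps.
# Normalization choices and hyperparameters are assumptions for illustration.
import torch
import torch.nn.functional as F

def opti_cam_like_saliency(model, feature_extractor, image, class_idx,
                           steps=100, lr=0.1):
    """Optimize per-channel weights so that the masked image's logit for
    `class_idx` is maximized (hypothetical helper, not the paper's code)."""
    feats = feature_extractor(image).detach()          # (1, K, h, w)
    u = torch.zeros(feats.shape[1], requires_grad=True)  # one weight per channel
    optimizer = torch.optim.Adam([u], lr=lr)

    for _ in range(steps):
        w = torch.softmax(u, dim=0)                    # assumed normalization
        # Saliency map: linear combination of feature maps
        sal = (w.view(1, -1, 1, 1) * feats).sum(dim=1, keepdim=True)
        mask = torch.sigmoid(sal)                      # squash to [0, 1] (assumption)
        mask = F.interpolate(mask, size=image.shape[-2:],
                             mode='bilinear', align_corners=False)
        logit = model(image * mask)[0, class_idx]      # logit of the masked image
        loss = -logit                                  # maximize the class logit
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        w = torch.softmax(u, dim=0)
        sal = (w.view(1, -1, 1, 1) * feats).sum(dim=1, keepdim=True)
    return sal                                         # low-resolution saliency map
```

As a usage note under the same assumptions, `feature_extractor` would typically expose an intermediate convolutional layer of `model` (e.g. via a forward hook), and the returned map would be upsampled to the input resolution for visualization.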
Main file: OptiCAM_CVIU-1.pdf (5.75 MB)
Origin: Publication funded by an institution
Licence: Copyright

Dates and versions

hal-04678832, version 1 (27-08-2024)

Identifiers

HAL Id: hal-04678832
DOI: 10.1016/j.cviu.2024.104101

Cite

Hanwei Zhang, Felipe Torres, Ronan Sicre, Yannis Avrithis, Stephane Ayache. Opti-CAM: Optimizing saliency maps for interpretability. Computer Vision and Image Understanding, 2024, 248, pp.104101. ⟨10.1016/j.cviu.2024.104101⟩. ⟨hal-04678832⟩