Adversarial Counterfactual Visual Explanations - Archive ouverte HAL
Preprint, Working Paper. Year: 2022

Adversarial Counterfactual Visual Explanations

Abstract

Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations, regardless of their characteristics. Yet, adversarial attacks cannot be used as is for counterfactual explanation, as such perturbations are perceived as noise rather than as actionable and understandable image modifications. Building on the robust learning literature, this paper proposes an elegant method to turn adversarial attacks into semantically meaningful perturbations, without modifying the classifiers being explained. The proposed approach hypothesizes that Denoising Diffusion Probabilistic Models are excellent regularizers for avoiding high-frequency and out-of-distribution perturbations when generating adversarial attacks. The paper's key idea is to build attacks through a diffusion model that polishes them, which allows studying a model regardless of its robustification level. Extensive experimentation shows the advantages of our counterfactual explanation approach over the current state of the art on multiple testbeds.
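The key idea in the abstract, filtering raw adversarial gradient steps through a pretrained denoising diffusion model, can be illustrated with a minimal sketch. The snippet below is not the authors' released implementation (that is in ace.zip); it assumes a frozen PyTorch classifier that returns logits and a hypothetical `denoise_fn` wrapper around a pretrained DDPM, and it collapses the full diffusion schedule into a single simplified noise-and-denoise step per iteration.

```python
import torch
import torch.nn.functional as F


def counterfactual_attack(x, target, classifier, denoise_fn,
                          steps=50, step_size=0.01, t_noise=0.3):
    """Diffusion-polished adversarial attack (illustrative sketch only).

    x          : image tensor in [0, 1], shape (1, C, H, W)
    target     : counterfactual class index to reach
    classifier : frozen model to explain, returns logits (not modified here)
    denoise_fn : hypothetical wrapper around a pretrained DDPM that maps a
                 partially noised image back onto the data manifold
    """
    x_cf = x.clone()
    for _ in range(steps):
        x_cf.requires_grad_(True)
        logits = classifier(x_cf)
        loss = F.cross_entropy(
            logits, torch.tensor([target], device=x_cf.device))
        grad, = torch.autograd.grad(loss, x_cf)

        with torch.no_grad():
            # Plain adversarial step that pushes the image toward the
            # target (counterfactual) class.
            x_adv = x_cf - step_size * grad.sign()

            # "Polish" the raw attack: apply a simplified forward-diffusion
            # noising to an intermediate level, then let the DDPM denoise.
            # This filters out high-frequency, out-of-distribution changes.
            alpha = 1.0 - t_noise
            noised = (alpha ** 0.5) * x_adv \
                + ((1.0 - alpha) ** 0.5) * torch.randn_like(x_adv)
            x_cf = denoise_fn(noised).clamp(0.0, 1.0)

    return x_cf.detach()
```

Because the classifier is only queried for gradients and never modified, the same loop applies to robust and non-robust models alike, which matches the abstract's claim that the approach studies a model regardless of its robustification level.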
Files

preprint_ACE-1.pdf (55.69 MB, main file)
ace.zip (45.34 MB)

Origin: Files produced by the author(s)

Dates and versions

hal-03874816 , version 1 (28-11-2022)
hal-03874816 , version 2 (17-03-2023)

Identifiers

  • HAL Id : hal-03874816 , version 1

Cite

Guillaume Jeanneret, Loïc Simon, Frédéric Jurie. Adversarial Counterfactual Visual Explanations. 2022. ⟨hal-03874816v1⟩
117 views
20 downloads
