Conference paper · Year: 2023

How to choose your best allies for a transferable attack?

Abstract

The transferability of adversarial examples is a key issue in the security of deep neural networks. The possibility that an adversarial example crafted on a source model also fools another, targeted model makes the threat of adversarial attacks more realistic. Measuring transferability is a crucial problem, but the Attack Success Rate alone does not provide a sound evaluation. This paper proposes a new methodology for evaluating transferability that puts distortion in a central position. This new tool shows that transferable attacks may perform far worse than a black-box attack if the attacker picks the source model at random. To address this issue, we propose a new selection mechanism, called FiT, which aims at choosing the best source model with only a few preliminary queries to the target. Our experimental results show that FiT is highly effective at selecting the best source model in multiple scenarios, such as single-model attacks, ensemble-model attacks, and multiple attacks.
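The abstract does not detail how FiT scores the candidate source models, so the sketch below is only a hypothetical illustration of the general idea it describes: craft adversarial probes on each candidate source, spend a few queries on the target, and keep the source whose probes transfer at the lowest distortion. The attack (`craft_adversarial`, a basic PGD), the penalty constant, and the PyTorch-style model interfaces are all assumptions, not the paper's method.

```python
# Hypothetical sketch of a source-model selection loop in the spirit of FiT.
# Assumed interfaces: models map image batches to logits (PyTorch), and
# `target_query` is a callable standing in for black-box queries to the target.
import torch
import torch.nn.functional as F

def craft_adversarial(model, x, y, eps=4 / 255, steps=10):
    """Placeholder white-box attack (simple PGD) run on a candidate source model."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += (eps / steps) * delta.grad.sign()   # gradient-sign step
            delta.clamp_(-eps, eps)                      # stay in the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

def select_source_model(source_models, target_query, probe_x, probe_y):
    """Score each candidate source with a few probe queries to the target and
    return the one whose adversarial probes transfer at the lowest distortion."""
    best_model, best_score = None, float("inf")
    fail_penalty = 10.0  # hypothetical penalty for probes that do not transfer
    for model in source_models:
        x_adv = craft_adversarial(model, probe_x, probe_y)
        target_pred = target_query(x_adv).argmax(dim=1)  # the few target queries
        fooled = (target_pred != probe_y).float()
        distortion = (x_adv - probe_x).flatten(1).norm(dim=1)
        score = (distortion * fooled + fail_penalty * (1.0 - fooled)).mean().item()
        if score < best_score:
            best_model, best_score = model, score
    return best_model
```

The distortion-weighted score mirrors the abstract's emphasis on distortion rather than Attack Success Rate alone; the actual FiT criterion should be taken from the paper itself.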
Main file

transferability_measure.pdf (2.85 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04395797, version 1 (15-01-2024)

Identifiers

  • HAL Id: hal-04395797, version 1

Cite

Thibault Maho, Seyed-Mohsen Moosavi-Dezfooli, Teddy Furon. How to choose your best allies for a transferable attack?. ICCV 2023 - International Conference on Computer Vision, Oct 2023, Paris, France. pp.1-13. ⟨hal-04395797⟩