Conference paper, Year: 2023

Toward Stronger Textual Attack Detectors

Abstract

The landscape of available textual adversarial attacks keeps growing, posing severe threats and raising concerns regarding the integrity of deep NLP systems. However, the crucial problem of defending against malicious attacks has so far drawn little attention from the NLP community, even though such defenses are instrumental in developing robust and trustworthy systems. This paper makes two important contributions in this line of research: (i) we introduce LAROUSSE, a new framework to detect textual adversarial attacks, and (ii) we introduce STAKEOUT, a new benchmark composed of nine popular attack methods, three datasets, and two pre-trained models. LAROUSSE is ready to use in production as it is unsupervised, hyperparameter-free, and non-differentiable, which protects it against gradient-based attacks. Our new benchmark STAKEOUT provides a robust evaluation framework: we conduct extensive numerical experiments that demonstrate that LAROUSSE outperforms previous methods and that allow us to identify interesting factors behind variations in detection rate.
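The abstract does not spell out LAROUSSE's scoring rule, so the sketch below is only a minimal illustration of the general recipe behind such unsupervised detectors: score each input's hidden representation by a data depth computed against clean training representations, and flag low-depth inputs as adversarial. It assumes a halfspace-mass-style depth (in line with the authors' related work on depth-based detection); the function name halfspace_mass_depth, the parameter n_dirs, and the synthetic Gaussian features standing in for encoder hidden states are all illustrative assumptions, not the authors' code.

import numpy as np
from sklearn.metrics import roc_auc_score

def halfspace_mass_depth(train, queries, n_dirs=1000, seed=0):
    """Crude Monte-Carlo approximation of a halfspace-mass-style depth.

    For each random direction, project the data, draw a random split
    threshold inside an expanded projection range, and credit each query
    with the fraction of training mass lying in the halfspace it falls in.
    Averaging over directions gives a depth score: low depth = anomalous.
    """
    rng = np.random.default_rng(seed)
    d = train.shape[1]
    dirs = rng.standard_normal((n_dirs, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    proj_train = train @ dirs.T           # shape (n_train, n_dirs)
    proj_q = queries @ dirs.T             # shape (n_queries, n_dirs)

    # One random split threshold per direction, drawn within a range
    # expanded around the training projections.
    lo, hi = proj_train.min(axis=0), proj_train.max(axis=0)
    mid, span = (lo + hi) / 2.0, hi - lo
    splits = mid + (rng.random(n_dirs) - 0.5) * 2.0 * span

    mass_right = (proj_train > splits).mean(axis=0)  # training mass right of split
    depths = np.where(proj_q > splits, mass_right, 1.0 - mass_right)
    return depths.mean(axis=1)            # shape (n_queries,)

# Toy usage: Gaussian features stand in for clean vs. attacked hidden states.
rng = np.random.default_rng(1)
clean_train = rng.normal(0.0, 1.0, size=(500, 32))
clean_test = rng.normal(0.0, 1.0, size=(200, 32))
attacked = rng.normal(0.8, 1.0, size=(200, 32))   # shifted to mimic attacks

d_clean = halfspace_mass_depth(clean_train, clean_test)
d_attack = halfspace_mass_depth(clean_train, attacked)

labels = np.r_[np.zeros(len(d_clean)), np.ones(len(d_attack))]
scores = -np.r_[d_clean, d_attack]  # lower depth => higher anomaly score
print("detection AUROC:", roc_auc_score(labels, scores))

Note that this pipeline needs no labeled attacks, no tuned hyperparameters beyond the Monte-Carlo budget, and no gradients, which mirrors the unsupervised, hyperparameter-free, non-differentiable properties the abstract claims for LAROUSSE; in a real evaluation the Gaussian features would be replaced by the victim model's hidden representations of clean and attacked inputs.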
Main file: 2310.14001v1 (1).pdf (1.54 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04574946, version 1 (14-05-2024)

Identifiers

HAL Id: hal-04574946
DOI: 10.18653/v1/2023.findings-emnlp.35

Cite

Pierre Colombo, Marine Picot, Nathan Noiry, Guillaume Staerman, Pablo Piantanida. Toward Stronger Textual Attack Detectors. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Dec 2023, Singapore. pp.484-505, ⟨10.18653/v1/2023.findings-emnlp.35⟩. ⟨hal-04574946⟩