Conference paper, Year: 2025

Task-Agnostic Attacks Against Vision Foundation Models

Abstract

The study of security in machine learning mainly focuses on downstream task-specific attacks, where the adversarial example is obtained by optimizing a loss function specific to the downstream task. At the same time, it has become standard practice for machine learning practitioners to adopt publicly available pre-trained vision foundation models, effectively sharing a common backbone architecture across a multitude of applications such as classification, segmentation, depth estimation, retrieval, question answering and more. The study of attacks on such foundation models and of their impact on multiple downstream tasks remains largely unexplored. This work proposes a general framework that forges task-agnostic adversarial examples by maximally disrupting the feature representation obtained with foundation models. We extensively evaluate the security of the feature representations obtained by popular vision foundation models by measuring the impact of this attack on multiple downstream tasks and its transferability between models.
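The abstract describes the attack only at a high level: maximally disrupt the backbone's feature representation, independently of any downstream head. As a hedged illustration of what such an objective can look like, below is a minimal PyTorch sketch of a PGD-style L-infinity attack that drives the adversarial embedding away from the clean one. The encoder handle, the cosine-similarity objective, and all hyper-parameter values are assumptions chosen for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def task_agnostic_attack(encoder, x, eps=8/255, alpha=2/255, steps=40):
    # Sketch of a feature-disruption attack (assumed objective, not the
    # paper's exact loss): run PGD to minimize the cosine similarity
    # between the clean and adversarial embeddings of a frozen encoder.
    encoder.eval()
    with torch.no_grad():
        z_clean = encoder(x)                      # reference features of the clean batch
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        z_adv = encoder((x + delta).clamp(0, 1))  # features of the perturbed input
        # Disruption objective: ascend on the negative similarity, i.e.
        # push z_adv as far from z_clean as the L-infinity budget allows.
        loss = -F.cosine_similarity(z_adv.flatten(1), z_clean.flatten(1)).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # signed-gradient ascent step
            delta.clamp_(-eps, eps)               # project back into the eps-ball
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)       # task-agnostic adversarial example

Because the objective never touches a downstream head, the same perturbation can then be fed to any classifier, segmentation or depth head built on the shared backbone, which is what makes the attack task-agnostic.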

Main file
2503.03842v1.pdf (8.78 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-05172315, version 1 (21-07-2025)

Identifiers

  • HAL Id: hal-05172315, version 1

Cite

Brian Pufler, Yury Belousov, Vitaliy Kinakh, Teddy Furon, Slava Voloshynovskiy. Task-Agnostic Attacks Against Vision Foundation Models. 5th Workshop of Adversarial Machine Learning at CVPR 2025, Jun 2025, Nashville, United States. pp.1-18. ⟨hal-05172315⟩
144 Views
212 Downloads
