Sparse and invisible adversarial attacks using MIP Optimization
Abstract
Deep learning methods are known to be vulnerable to adversarial attacks, where maliciously perturbed inputs lead to erroneous model outputs. From a safety perspective, highly sparse adversarial attacks are particularly dangerous. On the other hand, the pixel-wise perturbations of sparse attacks are typically large and can therefore potentially be detected. Injecting adversarial attacks during training was proposed early on as a potential defense, now known as adversarial training. Previous research has shown that minimizing the $\ell_0$-norm yields sparse perturbations but is challenging to solve exactly. We propose a new technique to craft adversarial examples that minimizes the $\ell_1$ distance to the original image, regularized by the total variation (TV) function. This favors changing pixels in regions of high variation, making the attacks almost invisible.
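In symbols, a plausible form of the resulting attack problem, for an original image $x$, perturbation $\delta$, classifier scores $f_k$, and a regularization weight $\lambda$ (the exact argument of the TV term and the weight $\lambda$ are illustrative assumptions, not the paper's definitive formulation), is:

$$
\min_{\delta} \;\; \|\delta\|_1 + \lambda \, \mathrm{TV}(x + \delta)
\quad \text{s.t.} \quad \arg\max_k f_k(x + \delta) \neq \arg\max_k f_k(x)
$$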
A possible way to find the minimal perturbation that changes the model's decision (an adversarial attack) is to transform the problem, with the help of binary variables and the classical big-M formulation, into a Mixed Integer Program (MIP). Formulating the problem as an MIP ensures that the solution found is globally optimal, which in turn guarantees that the attack is both sparse and invisible.
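As a concrete illustration, here is a minimal sketch of the big-M construction for a linear classifier $w^\top x + b$ using gurobipy. The weights `w`, `b`, the bound `M`, the margin, and the pure $\ell_0$ objective are simplifying assumptions for illustration; the paper targets deep networks with an $\ell_1$+TV objective.

```python
# Hypothetical sketch: big-M MIP for a sparse attack on a *linear* classifier.
# Assumptions: known weights w, bias b; x is currently classified positive
# (w @ x + b > 0) and we want to push it to the negative side.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def sparse_attack_mip(x, w, b, M=1.0, margin=1e-3):
    n = x.size
    m = gp.Model("sparse_attack")
    delta = m.addVars(n, lb=-M, ub=M, name="delta")   # per-pixel perturbation
    z = m.addVars(n, vtype=GRB.BINARY, name="z")      # z[i]=1 iff pixel i changes
    for i in range(n):
        # big-M link: delta[i] can be nonzero only when z[i] = 1
        m.addConstr(delta[i] <= M * z[i])
        m.addConstr(delta[i] >= -M * z[i])
    # misclassification constraint: flip the linear decision
    m.addConstr(
        gp.quicksum(w[i] * (x[i] + delta[i]) for i in range(n)) + b <= -margin
    )
    # l0 proxy: minimize the number of changed pixels
    m.setObjective(gp.quicksum(z[i] for i in range(n)), GRB.MINIMIZE)
    m.optimize()
    return np.array([delta[i].X for i in range(n)])
```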
In this paper, we propose a global optimization approach that computes the optimal sparse invisible perturbation using a dedicated branch-and-bound algorithm. A specific tree-search strategy is built on greedy forward-selection algorithms. We show that each subproblem involved at a given node can be evaluated via a specific convex optimization problem with box constraints and without binary variables, for which an active-set algorithm is used. Our method is more efficient than the generic MIP solver Gurobi and the state-of-the-art method.
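To convey the flavor of the node evaluation, below is a hedged sketch of one relaxation: once the binary variables at a node are fixed (pixels outside `free` forced to zero), the remaining problem is convex with box constraints. Here we keep a linear-classifier surrogate, omit the TV term for brevity, and let SciPy's HiGHS LP solver stand in for the paper's active-set method; this is not the paper's actual algorithm.

```python
# Sketch of one branch-and-bound node bound (assumptions: linear classifier
# w, b; `free` marks pixels not fixed to zero at this node; TV term omitted).
import numpy as np
from scipy.optimize import linprog

def node_bound(x, w, b, free, M=1.0, margin=1e-3):
    """Minimal l1 perturbation over the box, binary variables eliminated.
    Standard l1 -> LP split: delta = p - q with p, q >= 0."""
    n = x.size
    ub = np.where(free, M, 0.0)                 # fixed pixels get a zero box
    c = np.ones(2 * n)                          # sum(p) + sum(q) = ||delta||_1
    A = np.concatenate([w, -w])[None, :]        # w @ delta = w @ (p - q)
    rhs = -(margin + w @ x + b)                 # w @ (x + delta) + b <= -margin
    bounds = [(0.0, u) for u in ub] * 2
    res = linprog(c, A_ub=A, b_ub=[rhs], bounds=bounds, method="highs")
    return res.fun if res.success else np.inf  # infeasible node -> prune
```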
Main file
Sparse_and_invisible_adversarial_attacks_using_MIP_Optimization.pdf (836.42 KB)
Origin: Files produced by the author(s)