Conference Paper, 2022

On Improving the Robustness of Reinforcement Learning Policies against Adversarial Attacks

Yesmina Jaafra
Christophe Bohn
Lucas Schott
Faouzi Adjed
Mehdi Rezzoug

Abstract

With deep neural networks as universal function approximators, the reinforcement learning paradigm has been adopted in commonplace services such as autonomous vehicles, aircraft and domestic assistance, raising new safety requirements. Indeed, a deep reinforcement learning agent obtains its states through observations, which may contain natural accuracy errors or malicious adversarial noise. Since the observations may diverge from the true environment states, they can lead the agent to take risky, suboptimal decisions. This vulnerability is well known in the computer vision literature, where it has been demonstrated via adversarial attacks. In terms of defense, various techniques have been proposed, including heuristic and certified methods, mainly to improve the robustness of classifiers based on deep neural networks. It is therefore necessary to propose solutions adapted to this learning challenge faced by reinforcement learning agents. In this paper, we propose two defense mechanisms, based on reward shaping and adversarial training, as countermeasures against attacks on environment observations. The results of experiments conducted on autonomous vehicles controlled by reinforcement learning policies demonstrate that our approach provides sufficient information to learn the task effectively in highly perturbed environments. Furthermore, the defense mechanisms improve the robustness and generalization capacity of the learned models, reducing risky decisions in the presence of adversarial attacks.
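The abstract gives no implementation details, so the following is only a minimal sketch of the general adversarial-training idea it mentions: perturb observations with a gradient-based (FGSM-style) attack and retrain the policy network on the perturbed inputs. The network architecture, input and action dimensions, epsilon value, and helper names below are illustrative assumptions, not the authors' actual method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative policy network; the paper's architecture is not specified.
    policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def fgsm_observation(obs, action, epsilon=0.05):
        # FGSM-style attack on the observation: one signed-gradient step
        # that increases the policy's loss on its reference action.
        obs = obs.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(policy(obs), action)
        loss.backward()
        return (obs + epsilon * obs.grad.sign()).detach()

    def adversarial_training_step(obs, action):
        # Train the policy on perturbed observations so it keeps choosing
        # the reference action under bounded observation noise.
        adv_obs = fgsm_observation(obs, action)
        optimizer.zero_grad()
        loss = F.cross_entropy(policy(adv_obs), action)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch: 32 observations of dimension 8, 4 discrete actions.
    obs = torch.randn(32, 8)
    action = torch.randint(0, 4, (32,))
    print(adversarial_training_step(obs, action))

In a full training loop, the reference actions would come from the clean policy's own decisions (or from rollouts), so that the model learns to stay consistent under bounded observation noise rather than to fit arbitrary labels.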
Main file
Safe_DRL-2022.pdf (406.31 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03797500, version 1 (22-02-2024)

License

Copyright (All rights reserved)

Identifiers

  • HAL Id: hal-03797500, version 1

Cite

Yesmina Jaafra, Christophe Bohn, Lucas Schott, Faouzi Adjed, Frédéric Pelliccia, et al. On Improving the Robustness of Reinforcement Learning Policies against Adversarial Attacks. ESREL 2022, Aug 2022, Dublin, Ireland. ⟨hal-03797500⟩

Collections

IRT-SYSTEMX