Conference paper, Year: 2018

Detecting Potential Local Adversarial Examples for Human-Interpretable Defense

Xavier Renard
Thibault Laugel
Marie-Jeanne Lesot
Christophe Marsala
Marcin Detyniecki
Abstract

Machine learning models are increasingly used in industry to make decisions such as credit insurance approval. Some applicants may be tempted to manipulate specific variables, such as their age or salary, to improve their chances of approval. In this ongoing work, we propose a first approach to detecting potential local adversarial examples on classical tabular data: a human expert is shown the features that are locally critical for the classifier's decision, so that the supplied information can be checked and fraud avoided.
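To illustrate the kind of information such a defense could surface, here is a minimal sketch, not the authors' method, that fits a simple distance-weighted linear surrogate around a single query point (in the spirit of local surrogate explainers such as LIME) and reports the locally most influential features for a human expert to inspect. The classifier, feature names, and perturbation scale are illustrative assumptions.

```python
# Sketch: surface locally critical features for one decision on tabular data,
# so a human expert can check whether manipulable variables (e.g. age, salary)
# drive the outcome. This is an assumed local-surrogate approach, not the
# paper's exact algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "credit approval"-style tabular data with hypothetical feature names.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)
feature_names = ["age", "salary", "debt", "seniority", "n_loans", "savings"]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def local_critical_features(x, n_samples=1000, scale=0.5, top_k=3):
    """Perturb around x, fit a proximity-weighted linear surrogate on the
    classifier's probabilities, and return the top-k features by |weight|."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    p = clf.predict_proba(Z)[:, 1]
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)  # closer samples weigh more
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    order = np.argsort(np.abs(surrogate.coef_))[::-1][:top_k]
    return [(feature_names[i], surrogate.coef_[i]) for i in order]

# Example: inspect one applicant's decision.
x_query = X[0]
print("predicted approval probability:", clf.predict_proba([x_query])[0, 1])
for name, weight in local_critical_features(x_query):
    print(f"locally critical feature: {name:10s} weight={weight:+.3f}")
```

An expert reviewing the flagged features could then verify whether those specific values (for instance an unusually high salary) are consistent with supporting documents before trusting the model's decision.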

Dates and versions

hal-01905948, version 1 (26-10-2018)

Identifiers

Cite

Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki. Detecting Potential Local Adversarial Examples for Human-Interpretable Defense. Workshop on Recent Advances in Adversarial Learning (Nemesis) of the European Conference on Machine Learning and Principles of Practice of Knowledge Discovery in Databases (ECML-PKDD), Sep 2018, Dublin, Ireland. ⟨hal-01905948⟩