Leveraging Adversarial Examples to Quantify Membership Information Leakage
Abstract
The use of personal data for training machine learning systems poses a privacy threat, and measuring the privacy level of a model is one of the major challenges in machine learning today. Identifying training data from a trained model is a standard way of measuring the privacy risks induced by the model. We develop a novel approach to the problem of membership inference in pattern recognition models, relying on information provided by adversarial examples. The strategy we propose consists of measuring the magnitude of the perturbation necessary to build an adversarial example. Indeed, we argue that this magnitude reflects how likely a sample is to belong to the training data. Extensive numerical experiments on multivariate data and an array of state-of-the-art target models show that our method performs comparably to, or even outperforms, state-of-the-art strategies, without requiring any additional training samples.
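To illustrate the idea behind the abstract, the sketch below estimates, for each sample, the smallest FGSM-style perturbation that flips the model's prediction and uses that magnitude as a membership score (a larger required perturbation suggests the sample is more likely a training member). This is a minimal illustration, not the paper's exact attack; the model, the data, and the epsilon grid are all hypothetical placeholders.

```python
# Minimal sketch: perturbation magnitude as a membership score.
# Assumptions (not from the paper): a toy MLP, random data, a fixed
# epsilon grid, and a single-step FGSM perturbation direction.
import torch
import torch.nn as nn
import torch.nn.functional as F


def min_adversarial_perturbation(model, x, y, eps_grid):
    """Return the smallest eps in eps_grid whose FGSM step flips the prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grad = torch.autograd.grad(loss, x)[0]
    direction = grad.sign()  # single-step (FGSM) perturbation direction
    for eps in eps_grid:
        x_adv = x.detach() + eps * direction
        if model(x_adv.unsqueeze(0)).argmax(dim=1).item() != y.item():
            return float(eps)
    return float(eps_grid[-1])  # no label flip found within the grid


def membership_scores(model, xs, ys, eps_grid):
    """Higher score = larger perturbation needed = more likely a training member."""
    return torch.tensor(
        [min_adversarial_perturbation(model, x, y, eps_grid) for x, y in zip(xs, ys)]
    )


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    xs, ys = torch.randn(8, 20), torch.randint(0, 2, (8,))
    eps_grid = torch.linspace(0.01, 2.0, 50)
    print(membership_scores(model, xs, ys, eps_grid))
```

In practice, the score for a candidate sample would be compared against a threshold (or calibrated on non-member data) to decide membership; the single-step attack above is only a stand-in for whatever adversarial-example construction is used.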