Conference paper, Year: 2021

Unsupervised Risk for Privacy

Abstract

This position paper deals with privacy for deep neural networks, and more precisely with robustness to membership inference attacks. Current state-of-the-art methods, such as those based on differential privacy and training-loss regularization, mainly propose approaches that improve the trade-off between privacy guarantees and the resulting loss of model accuracy. We propose a new research direction that challenges this view, based on novel approximations of the training objective of deep learning models. The resulting loss offers several important advantages with respect to both privacy and model accuracy: it can exploit unlabeled corpora, it both regularizes the model and improves its generalization properties, and it encodes corpora into a latent low-dimensional parametric representation that complies with Federated Learning architectures. The paper details arguments to support the proposed approach and its potential beneficial impact on preserving both the privacy and the quality of deep learning models.
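To make the notion of a label-free training risk concrete, below is a minimal sketch of one classical unsupervised risk estimator: fit a two-component Gaussian mixture to the raw scores a binary classifier produces on unlabeled data, then read the error rate off the overlap between the two components. This is purely an illustration under strong Gaussian assumptions, not necessarily the approximation proposed in the paper; the function name and threshold choice are invented for the example.

    # Illustrative only: estimate a binary classifier's error rate from its
    # raw scores on UNLABELED data, assuming each class produces Gaussian
    # scores. NOT the paper's exact construction.
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    def unsupervised_risk(scores: np.ndarray) -> float:
        """Approximate the error rate from unlabeled scores via a
        two-component 1-D Gaussian mixture fit."""
        gmm = GaussianMixture(n_components=2, random_state=0)
        gmm.fit(scores.reshape(-1, 1))
        m0, m1 = gmm.means_.ravel()
        s0, s1 = np.sqrt(gmm.covariances_.ravel())
        w0, w1 = gmm.weights_
        # Order components so component 0 is the lower-scoring ("negative") class.
        if m0 > m1:
            m0, m1, s0, s1, w0, w1 = m1, m0, s1, s0, w1, w0
        t = 0.0  # assume the classifier thresholds its score at zero
        # Risk = negative mass above the threshold + positive mass below it.
        return w0 * norm.sf(t, loc=m0, scale=s0) + w1 * norm.cdf(t, loc=m1, scale=s1)

    # Example with synthetic unlabeled scores (true error ~ 0.023):
    # rng = np.random.default_rng(0)
    # scores = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
    # print(unsupervised_risk(scores))

Because such an estimator needs no labels, it can be computed on large unlabeled corpora and used as a training signal or regularizer, which is the kind of property the abstract argues is beneficial for both privacy and accuracy.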
Main file
paper.pdf (174.98 KB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-03407454, version 1 (28-10-2021)
hal-03407454, version 2 (13-12-2021)
hal-03407454, version 3 (30-12-2021)

Identifiers

  • HAL Id: hal-03407454, version 3

Cite

Christophe Cerisara, Alfredo Cuzzocrea. Unsupervised Risk for Privacy. IEEE BigData, Special Session on Privacy and Security of Big Data, Dec 2021, Orlando (virtual), United States. ⟨hal-03407454v3⟩
115 Views
121 Downloads
