Conference paper, Year: 2018

Privacy-preserving Neural Representations of Text

Maximin Coavoux
Shashi Narayan
Shay B. Cohen

Abstract

This article deals with adversarial attacks against deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise when the computation of a neural network is shared across multiple devices, e.g., a hidden representation is computed by a user's device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to accurately predict specific private information from it, and characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.
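The defenses described in the abstract modify the training objective so that the hidden representation stays useful for the main task while leaking less private information. One standard way to instantiate such an objective is adversarial training; the PyTorch sketch below is illustrative only, not the paper's exact method, and all module names, dimensions, and the weight alpha are assumptions. An auxiliary attacker is trained to recover a private attribute from the representation, while the encoder is trained to solve the main task and to degrade the attacker.

import torch
import torch.nn as nn

# Hypothetical encoder: maps token ids to the hidden representation h
# that an eavesdropper would observe (names and sizes are illustrative).
class Encoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))
        return h[-1]  # (batch, dim): the representation sent off-device

encoder = Encoder()
task_head = nn.Linear(64, 2)   # main task, e.g. topic or sentiment
attacker = nn.Linear(64, 2)    # adversary predicting a private attribute

opt_model = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_attacker = torch.optim.Adam(attacker.parameters(), lr=1e-3)
xent = nn.CrossEntropyLoss()
alpha = 0.1  # weight of the privacy term (an assumption, not the paper's value)

tokens = torch.randint(0, 1000, (8, 20))  # toy batch of token ids
y_task = torch.randint(0, 2, (8,))        # main-task labels
z_private = torch.randint(0, 2, (8,))     # private attribute labels

for step in range(100):
    h = encoder(tokens)

    # (1) Train the attacker to recover the private attribute from h;
    # detach() so its gradients do not reach the encoder here.
    opt_attacker.zero_grad()
    xent(attacker(h.detach()), z_private).backward()
    opt_attacker.step()

    # (2) Train encoder + task head: succeed on the main task while
    # making h uninformative to the attacker (hence the minus sign).
    opt_model.zero_grad()
    h = encoder(tokens)
    loss = xent(task_head(h), y_task) - alpha * xent(attacker(h), z_private)
    loss.backward()
    opt_model.step()

Following the evaluation protocol the abstract describes, privacy would then be measured by training a fresh attacker on the frozen representations and reporting its accuracy on the private attribute: the lower that accuracy, the more private the representation.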
Main file
D18-1001.pdf (299.32 KB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-02135081, version 1 (07-06-2019)

Identifiers

  • HAL Id: hal-02135081, version 1

Cite

Maximin Coavoux, Shashi Narayan, Shay B. Cohen. Privacy-preserving Neural Representations of Text. 2018 Conference on Empirical Methods in Natural Language Processing, Nov 2018, Brussels, Belgium. pp. 1-10. ⟨hal-02135081⟩
41 Views
39 Downloads
