Conference paper, 2019

Efficient Per-Example Gradient Computations in Convolutional Neural Networks

Abstract

Deep learning frameworks leverage GPUs to perform massively parallel computations over batches of many training examples efficiently. However, for certain tasks, one may be interested in performing per-example computations, for instance using per-example gradients to evaluate a quantity of interest unique to each example. One notable application comes from the field of differential privacy, where per-example gradients must be norm-bounded in order to limit the impact of each example on the aggregated batch gradient. In this work, we discuss how per-example gradients can be efficiently computed in convolutional neural networks (CNNs). We compare existing strategies by performing a few steps of differentially private training on CNNs of varying sizes. We also introduce a new strategy for per-example gradient calculation, which is shown to be advantageous depending on the model architecture and how the model is trained. This is a first step toward making differentially private training of CNNs practical.
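To make the norm-bounding step described above concrete, here is a minimal sketch of per-example gradient clipping as used in differentially private training. It is not the paper's CNN-specific strategy: it is a generic JAX illustration (vmap of grad over the batch axis) with a toy linear model, and the clipping threshold max_norm is an assumed example parameter.

```python
import jax
import jax.numpy as jnp

# Toy linear model; the paper's setting is CNNs, but the same
# vmap-of-grad pattern applies to any per-example loss.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# Per-example gradients: map the gradient over the batch axis of (x, y).
per_example_grads = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))

def clip_and_aggregate(grads, max_norm=1.0):
    # Global L2 norm of each example's gradient across all parameters.
    leaves = jax.tree_util.tree_leaves(grads)
    norms = jnp.sqrt(sum(jnp.sum(g ** 2, axis=tuple(range(1, g.ndim)))
                         for g in leaves))
    scale = jnp.minimum(1.0, max_norm / (norms + 1e-12))
    clipped = jax.tree_util.tree_map(
        lambda g: g * scale.reshape((-1,) + (1,) * (g.ndim - 1)), grads)
    # Aggregate over the batch; DP-SGD would add calibrated noise here.
    return jax.tree_util.tree_map(lambda g: jnp.mean(g, axis=0), clipped)

key = jax.random.PRNGKey(0)
params = {"w": jnp.zeros((3,)), "b": jnp.zeros(())}
x = jax.random.normal(key, (8, 3))
y = jnp.ones((8,))
batch_grad = clip_and_aggregate(per_example_grads(params, x, y))
```

The naive alternative is one backward pass per example, which forfeits batch parallelism; the paper's contribution concerns computing these per-example gradients efficiently in CNNs.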
Main file: 1912.06015.pdf (464.38 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04023867, version 1 (10-03-2023)

Cite

Gaspar Rochette, Andre Manoel, Eric W. Tramel. Efficient Per-Example Gradient Computations in Convolutional Neural Networks. Workshop on Theory and Practice of Differential Privacy (TPDP), Nov 2020, Virtual, France. ⟨hal-04023867⟩