Conference Papers, Year: 2023

Neural Network Information Leakage through Hidden Learning

Abstract

We investigate the problem of making an artificial neural network perform hidden computations whose result can be easily retrieved from the network's output. In particular, we consider the following scenario. A user is provided a neural network for a classification task by a third party. The user's input to the network contains sensitive information, and the third party can only observe the output of the network. In this work, we provide a simple and efficient training procedure, which we call hidden learning, that produces two networks: (i) one that solves the original classification task with performance close to the state of the art; (ii) a second one that takes as input the output of the first and retrieves sensitive information to solve a second classification task with good accuracy. Our results may expose important issues from an information security point of view regarding the use of artificial neural networks in sensitive applications.
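The abstract describes a joint training procedure in which a second network reads only the first network's output. As an illustration only, the sketch below shows one way such a setup could be implemented in PyTorch; the layer sizes, synthetic data, optimizer, and equal loss weighting (alpha) are assumptions for the sake of the example and are not taken from the paper.

```python
# Minimal sketch of the "hidden learning" setup described in the abstract (assumptions:
# layer sizes, synthetic data, loss weighting). Network f solves the public classification
# task; network g sees only f's output and tries to recover a second, sensitive label.
import torch
import torch.nn as nn

n_features, n_public_classes, n_secret_classes = 20, 5, 3

# f: the network handed to the user; its output is what the third party observes.
f = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_public_classes))
# g: the third party's network, taking f's output as its only input.
g = nn.Sequential(nn.Linear(n_public_classes, 32), nn.ReLU(), nn.Linear(32, n_secret_classes))

opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()
alpha = 1.0  # assumed weighting between the public and hidden objectives

# Synthetic stand-in data: x is the user's (sensitive) input, y the public label,
# s the sensitive label that g should recover from f's output alone.
x = torch.randn(256, n_features)
y = torch.randint(0, n_public_classes, (256,))
s = torch.randint(0, n_secret_classes, (256,))

for step in range(200):
    logits_public = f(x)              # visible to the third party
    logits_secret = g(logits_public)  # hidden computation on f's output only
    loss = ce(logits_public, y) + alpha * ce(logits_secret, s)
    opt.zero_grad()
    loss.backward()
    opt.step()
```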
Main file

hidden_learning.pdf (417.57 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03157141 , version 1 (02-03-2021)
hal-03157141 , version 2 (27-02-2023)
hal-03157141 , version 3 (27-03-2023)
hal-03157141 , version 4 (23-05-2023)

Licence

Public Domain

Identifiers

HAL Id: hal-03157141
DOI: 10.1007/978-3-031-34020-8_8

Cite

Arthur Carvalho Walraven da Cunha, Emanuele Natale, Laurent Viennot. Neural Network Information Leakage through Hidden Learning. OLA2023 - International Conference on Optimization and Learning, May 2023, Malaga, Spain. pp.117-128, ⟨10.1007/978-3-031-34020-8_8⟩. ⟨hal-03157141v4⟩
249 Views
450 Downloads
