Conference paper, Year: 2022

Information Removal at the bottleneck in Deep Neural Networks

Abstract

Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. Commonly, leveraging the availability of "big data", deep neural networks are trained as black boxes, minimizing an objective function at their output. This, however, does not allow control over the propagation of specific features through the model, such as gender or race, when solving an uncorrelated task. This raises issues both of privacy (considering the propagation of unwanted information) and of bias (considering that these features may be used to solve the given task). In this work we propose IRENE, a method to achieve information removal at the bottleneck of deep neural networks, which explicitly minimizes the estimated mutual information between the features to be kept "private" and the target. Experiments on a synthetic dataset and on CelebA validate the effectiveness of the proposed approach, and open the road towards the development of approaches guaranteeing information removal in deep neural networks.
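The abstract does not spell out implementation details, so the sketch below is only a rough illustration of the general idea, not IRENE's actual algorithm: a network is trained on its main task while a penalty on an estimated mutual information between the bottleneck representation z and a private attribute s is minimized. The estimator here is a MINE-style Donsker-Varadhan bound used as a stand-in, and all names (Encoder, MIEstimator, lambda_mi, train_step) are hypothetical.

```python
# Illustrative sketch (PyTorch), NOT the paper's implementation:
# penalize an estimated MI between the bottleneck z and a private attribute s.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Backbone producing the bottleneck representation z."""
    def __init__(self, in_dim=128, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
    def forward(self, x):
        return self.net(x)

class MIEstimator(nn.Module):
    """Statistics network T(z, s) for a MINE-style lower bound on I(z; s)."""
    def __init__(self, z_dim=32, s_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + s_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, z, s):
        return self.net(torch.cat([z, s], dim=1))

def mi_lower_bound(T, z, s):
    """Donsker-Varadhan bound: E[T(z,s)] - log E[exp(T(z, s_shuffled))]."""
    joint = T(z, s).mean()
    s_perm = s[torch.randperm(s.size(0))]  # shuffle s to approximate the product of marginals
    marg = torch.logsumexp(T(z, s_perm).squeeze(1), dim=0) - math.log(s.size(0))
    return joint - marg

encoder, head, T = Encoder(), nn.Linear(32, 10), MIEstimator()
opt_model = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
opt_T = torch.optim.Adam(T.parameters(), lr=1e-3)
lambda_mi = 1.0  # weight of the information-removal penalty (illustrative value)

def train_step(x, y, s):
    """x: inputs, y: task labels, s: private attribute as a float tensor of shape (N, 1)."""
    # 1) Tighten the MI estimate: maximize the bound w.r.t. T, encoder frozen.
    z = encoder(x).detach()
    loss_T = -mi_lower_bound(T, z, s)
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # 2) Train encoder and task head while penalizing the estimated MI at the bottleneck.
    z = encoder(x)
    loss = F.cross_entropy(head(z), y) + lambda_mi * mi_lower_bound(T, z, s)
    opt_model.zero_grad(); loss.backward(); opt_model.step()
```

The two-step alternation (fit the estimator, then penalize its estimate) is a common pattern for this kind of objective; the paper should be consulted for the estimator and training schedule actually used by IRENE.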

Dates and versions

hal-03916176, version 1 (30-12-2022)


Cite

Enzo Tartaglione. Information Removal at the Bottleneck in Deep Neural Networks. 33rd British Machine Vision Conference (BMVC 2022), Nov 2022, London, United Kingdom. ⟨hal-03916176⟩