Journal article in Sensors, Year: 2023

Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey

Ikram Eddahmani
Chi-Hieu Pham
Thibault Napoléon
Isabelle Badoc
Jean-Rassaire Fouefack

Abstract

In recent years, the rapid development of deep learning approaches has paved the way for exploring the underlying factors that explain the data. In particular, several methods have been proposed to learn to identify and disentangle these underlying explanatory factors in order to improve the learning process and model generalization. However, extracting this representation with little or no supervision remains a key challenge in machine learning. In this paper, we provide a theoretical outlook on recent advances in the field of unsupervised representation learning, with a focus on auto-encoding-based approaches and on the most well-known supervised disentanglement metrics. We cover the current state-of-the-art methods for learning disentangled representations in an unsupervised manner, while pointing out the connection between each method and its added value for disentanglement. Further, we discuss how to quantify disentanglement and present an in-depth analysis of the associated metrics. We conclude by carrying out a comparative evaluation of these metrics according to three criteria: (i) modularity, (ii) compactness, and (iii) informativeness. Finally, we show that only the Mutual Information Gap (MIG) score meets all three criteria.
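For reference, the MIG score mentioned above is not defined on this page; the sketch below follows its standard formulation in the disentanglement literature (Chen et al., 2018), where v_k denotes a ground-truth factor, z_j a latent dimension, I mutual information, and H entropy:

\[
\mathrm{MIG} \;=\; \frac{1}{K}\sum_{k=1}^{K}\frac{1}{H(v_k)}\left( I\!\left(z_{j^{(k)}};\, v_k\right) \;-\; \max_{j \neq j^{(k)}} I\!\left(z_j;\, v_k\right) \right),
\qquad j^{(k)} = \arg\max_{j}\, I\!\left(z_j;\, v_k\right).
\]

A large normalized gap indicates that each ground-truth factor is captured predominantly by a single latent dimension.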

Dates and versions

hal-04092026, version 1 (09-05-2023)

Identifiers

Cite

Ikram Eddahmani, Chi-Hieu Pham, Thibault Napoléon, Isabelle Badoc, Jean-Rassaire Fouefack, et al.. Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey. Sensors, 2023, 23 (4), pp.2362. ⟨10.3390/s23042362⟩. ⟨hal-04092026⟩
