Preprint, working paper. Year: 2020

The Gaussian equivalence of generative models for learning with two-layer neural networks

Sebastian Goldt
Galen Reeves
Marc Mezard
Florent Krzakala
Lenka Zdeborová

Abstract

Understanding the impact of data structure on learning remains a key challenge for the theory of neural networks. Many theoretical works do not explicitly model training data, or assume that inputs are drawn independently from some factorised probability distribution. Here, we go beyond this simple i.i.d. modelling paradigm by studying neural networks trained on data drawn from structured generative models. We make three contributions: first, we establish rigorous conditions under which a class of generative models shares key statistical properties with an appropriately chosen Gaussian feature model. Second, we use this Gaussian equivalence theorem (GET) to derive a closed set of equations describing the dynamics of two-layer neural networks trained with one-pass stochastic gradient descent on data drawn from a large class of generators. Third, we complement these theoretical results with experiments demonstrating how the theory applies to deep, pre-trained generative models.
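The Gaussian equivalence can be illustrated numerically on a toy example. The sketch below is illustrative only, not the authors' code: the single-layer tanh generator, the dimensions, and the sample sizes are all assumptions. It draws inputs x = G(z) from a random generator, projects them onto a random weight vector to obtain the kind of low-dimensional "local field" that drives two-layer network dynamics, and compares its low-order statistics to those of a Gaussian feature model with matched mean and covariance.

import numpy as np

rng = np.random.default_rng(0)
D, N, n_samples = 50, 200, 50_000  # latent dim, input dim, number of samples (assumed)

# Toy generator x = G(z) = tanh(A z) with random weights (an assumption,
# standing in for the paper's general class of generative models).
A = rng.standard_normal((N, D)) / np.sqrt(D)

def G(z):
    return np.tanh(A @ z)

# A fixed random "student" weight vector; its local field w.x / sqrt(N)
# is the scalar statistic whose distribution the GET characterises.
w = rng.standard_normal(N)

# Local fields under the structured (generative) inputs.
Z = rng.standard_normal((D, n_samples))
lam_gen = w @ G(Z) / np.sqrt(N)

# Equivalent Gaussian feature model: Gaussian inputs with the same mean
# and covariance as x = G(z), here estimated from samples.
X = G(rng.standard_normal((D, n_samples))).T          # shape (n_samples, N)
mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
X_gauss = rng.multivariate_normal(mu, cov, size=n_samples)
lam_gauss = X_gauss @ w / np.sqrt(N)

print(f"generator: mean {lam_gen.mean():+.3f}, var {lam_gen.var():.3f}")
print(f"gaussian : mean {lam_gauss.mean():+.3f}, var {lam_gauss.var():.3f}")

In this toy setting the two local fields have matching first and second moments by construction; the content of the GET is that, under the paper's conditions, this matching is enough to make the learning dynamics of the two models coincide.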

Dates and versions

hal-02984900, version 1 (01-11-2020)

Identifiers

Cite

Sebastian Goldt, Galen Reeves, Marc Mezard, Florent Krzakala, Lenka Zdeborová. The Gaussian equivalence of generative models for learning with two-layer neural networks. 2020. ⟨hal-02984900⟩