To update or not to update? Neurons at equilibrium in deep models

Conference paper, 2022

Abstract

Recent advances in deep learning optimization have shown that, given some a-posteriori information on a fully-trained model, it is possible to match its performance by training only a subset of its parameters. Such a discovery has a broad impact from theory to applications, driving research towards methods that identify the minimum subset of parameters to train without exploiting look-ahead information. However, the methods proposed so far do not match state-of-the-art performance and rely on unstructured, sparsely connected models. In this work we shift the focus from single parameters to the behavior of the whole neuron, exploiting the concept of neuronal equilibrium (NEq). When a neuron is in an equilibrium configuration (meaning that it has learned a specific input-output relationship), we can halt its updates; conversely, when a neuron is at non-equilibrium, we let its state evolve towards an equilibrium state by updating its parameters. The proposed approach has been tested on different state-of-the-art learning strategies and tasks, validating NEq and showing that neuronal equilibrium depends on the specific learning setup.
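The freezing criterion can be sketched in PyTorch as follows. This is a minimal illustration of the idea, not the authors' exact implementation: record each neuron's outputs on a fixed validation batch at consecutive epochs, measure how much its input-output mapping still changes via a per-neuron cosine similarity, and stop updating neurons whose change falls below a threshold. The helper names (neuron_similarities, equilibrium_mask, apply_mask) and the threshold eps are hypothetical.

    import torch
    import torch.nn.functional as F

    def neuron_similarities(prev_out, curr_out):
        # prev_out, curr_out: (num_samples, num_neurons) activations of one
        # layer, collected on the same fixed validation batch at two
        # consecutive epochs. Returns one cosine similarity per neuron.
        prev = F.normalize(prev_out, dim=0)
        curr = F.normalize(curr_out, dim=0)
        return (prev * curr).sum(dim=0)

    def equilibrium_mask(sim, prev_sim, eps=1e-3):
        # A neuron is treated as "at equilibrium" when its similarity no
        # longer moves between epochs (velocity below eps).
        # True = keep training the neuron; False = freeze it.
        velocity = (sim - prev_sim).abs()
        return velocity >= eps

    def apply_mask(layer, mask):
        # After loss.backward(), zero the gradients of frozen neurons so
        # the optimizer leaves their parameters untouched.
        with torch.no_grad():
            layer.weight.grad[~mask] = 0.0
            if layer.bias is not None and layer.bias.grad is not None:
                layer.bias.grad[~mask] = 0.0

    # Toy usage on a single linear layer:
    layer = torch.nn.Linear(8, 4)
    x = torch.randn(32, 8)
    layer(x).sum().backward()
    mask = torch.tensor([True, False, True, True])  # neuron 1 frozen
    apply_mask(layer, mask)

Zeroing gradients per neuron, rather than removing single weights, keeps the model dense and structured, which is what distinguishes this neuron-level view from unstructured parameter-selection methods.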

Dates and versions

hal-03853699, version 1 (15-11-2022)

Cite

Andrea Bragagnolo, Enzo Tartaglione, Marco Grangetto. To update or not to update? Neurons at equilibrium in deep models. 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Nov 2022, New Orleans, United States. ⟨hal-03853699⟩