Toward Novel Optimizers: A Moreau-Yosida View of Gradient-Based Learning - Archive ouverte HAL
Conference paper, Year: 2023

Toward Novel Optimizers: A Moreau-Yosida View of Gradient-Based Learning

Abstract

Machine Learning (ML) strongly relies on optimization procedures based on gradient descent. Several gradient-based update schemes have been proposed in the scientific literature, especially in the context of neural networks, and have become common optimizers in ML software libraries. In this paper, we re-frame gradient-based update strategies under the unifying lens of a Moreau-Yosida (MY) approximation of the loss function. By means of a first-order Taylor expansion, we make the MY approximation concretely exploitable to generalize the model update. In turn, this makes it easy to evaluate and compare the regularization properties underlying the most common optimizers, such as gradient descent with momentum, ADAGRAD, RMSprop, and ADAM. The MY-based unifying view opens up the possibility of designing novel update schemes with customizable regularization properties. As a case study, we propose using the network outputs to deform the notion of closeness in the parameter space.
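For orientation, here is a minimal sketch of the standard Moreau-Yosida / proximal-point viewpoint the abstract refers to; the paper's exact formulation and notation may differ. The MY-style update of the parameters w_t minimizes the loss plus a proximity penalty, and a first-order Taylor expansion of the loss makes the minimizer explicit, recovering plain gradient descent:

% Sketch under assumed notation (L = loss, w_t = parameters, lambda = step size);
% not the paper's exact formulation.
\[
  w_{t+1} \;=\; \operatorname*{arg\,min}_{v} \Big\{ L(v) + \tfrac{1}{2\lambda}\,\lVert v - w_t \rVert^2 \Big\},
  \qquad
  L(v) \approx L(w_t) + \nabla L(w_t)^{\top}(v - w_t)
  \;\Longrightarrow\;
  w_{t+1} \;=\; w_t - \lambda\,\nabla L(w_t).
\]

Replacing the squared Euclidean proximity term with a (possibly time-varying) metric \(\lVert v - w_t \rVert_{M_t}^2\), e.g. with \(M_t\) a diagonal matrix built from past gradients, gives preconditioned updates of the form \(w_{t+1} = w_t - \lambda\,M_t^{-1}\nabla L(w_t)\), which is the general shape of ADAGRAD/RMSprop/ADAM-style optimizers; the case study mentioned above instead deforms the notion of closeness using the network outputs.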
No file deposited

Dates and versions

hal-04410778, version 1 (22-01-2024)

Identifiers

Cite

Alessandro Betti, Gabriele Ciravegna, Marco Gori, Stefano Melacci, Kevin Mottin, et al.. Toward Novel Optimizers: A Moreau-Yosida View of Gradient-Based Learning. AIxIA 2023 – XXIInd International Conference of the Italian Association for Artificial Intelligence, Nov 2023, Rome, Italy. pp.218-230, ⟨10.1007/978-3-031-47546-7_15⟩. ⟨hal-04410778⟩