Journal article, Neural Processing Letters, 2024

Speeding up the Training of Neural Networks with the One-Step Procedure

Abstract

In the last decade, academic and corporate research have shown a dramatically growing interest in machine learning, largely driven by the performance of deep neural networks. These increasingly complex architectures have solved a wide range of problems. However, training such sophisticated architectures requires extensive computation on advanced hardware. In this paper, we introduce a new approach based on the One-Step procedure that may speed up their training. In this procedure, an initial guess estimator is computed on a subsample and then improved with a single Newton step over the whole dataset. To show the efficiency of this framework, we consider regression and classification tasks on simulated and real datasets. We consider classic architectures, namely multi-layer perceptrons, and show, on our examples, that the One-Step procedure often halves the computation time needed to train the neural networks while preserving their performance.
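The sketch below illustrates the two-stage idea described in the abstract on a hypothetical synthetic regression task: a small multi-layer perceptron is first fitted cheaply on a subsample, then corrected with one damped Newton step of the full-data loss over the flattened parameters. The data, model size, optimizer settings, and damping constant are illustrative assumptions, not the paper's experimental setup.

```python
import torch

torch.manual_seed(0)

# Hypothetical synthetic regression data (not the datasets used in the paper)
n, d, h = 5000, 5, 8
X = torch.randn(n, d)
y = torch.sin(X @ torch.randn(d, 1)) + 0.1 * torch.randn(n, 1)

def unpack(theta):
    """Slice the flat parameter vector into the MLP weights and biases."""
    i = 0
    W1 = theta[i:i + d * h].reshape(d, h); i += d * h
    b1 = theta[i:i + h]; i += h
    W2 = theta[i:i + h].reshape(h, 1); i += h
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def loss(theta, Xb, yb):
    """Mean squared error of a one-hidden-layer tanh MLP."""
    W1, b1, W2, b2 = unpack(theta)
    pred = torch.tanh(Xb @ W1 + b1) @ W2 + b2
    return ((pred - yb) ** 2).mean()

p = d * h + h + h + 1
theta0 = 0.1 * torch.randn(p, requires_grad=True)

# Stage 1: cheap initial estimator, fitted by plain gradient descent
# on a small subsample of the data.
sub = torch.randperm(n)[: n // 10]
opt = torch.optim.SGD([theta0], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss(theta0, X[sub], y[sub]).backward()
    opt.step()

# Stage 2: a single Newton correction computed on the whole dataset.
theta0 = theta0.detach()
g = torch.autograd.functional.jacobian(lambda t: loss(t, X, y), theta0)
H = torch.autograd.functional.hessian(lambda t: loss(t, X, y), theta0)
H = H + 1e-4 * torch.eye(p)          # damping, since H may be ill-conditioned
theta1 = theta0 - torch.linalg.solve(H, g)

print("full-data MSE before one-step:", loss(theta0, X, y).item())
print("full-data MSE after  one-step:", loss(theta1, X, y).item())
```

In this toy setting the costly full-dataset work is reduced to a single gradient and Hessian evaluation; for larger networks an exact Hessian solve would be impractical, and the paper should be consulted for how the Newton-type step is actually carried out.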

Dates and versions

hal-04733965 , version 1 (13-10-2024)

Identifiers

Cite

Wajd Meskini, Alexandre Brouste, Nicolas Dugué. Speeding up the Training of Neural Networks with the One-Step Procedure. Neural Processing Letters, 2024, 56 (3), pp.178. ⟨10.1007/s11063-024-11637-6⟩. ⟨hal-04733965⟩