Preprint / Working Paper, Year: 2022

Approximation speed of quantized vs. unquantized ReLU neural networks and beyond

Abstract

We consider general approximation families encompassing ReLU neural networks. On the one hand, we introduce a new property, called ∞-encodability, which provides a framework that we use (i) to guarantee that ReLU networks can be uniformly quantized while keeping approximation speeds comparable to those of unquantized networks, and (ii) to prove that ReLU networks share a common limitation with many other approximation families: the approximation speed of a set C is bounded from above by an encoding complexity of C (a complexity that is well known for many sets C). The property of ∞-encodability allows us to unify and generalize known results in which it was implicitly used. On the other hand, we give lower and upper bounds on the Lipschitz constant of the mapping that associates the weights of a network with the function it represents in L^p. These bounds are expressed in terms of the width and depth of the network and of a bound on the norm of the weights, and they rely on well-known upper bounds on the Lipschitz constants of the functions represented by ReLU networks. This allows us to recover known results, to establish new bounds on covering numbers, and to characterize the accuracy of naive uniform quantization of ReLU networks.
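
The naive uniform quantization mentioned above can be pictured with a minimal sketch. The architecture, the step size 2^-6, and the helper names relu_net and quantize_uniform below are illustrative assumptions rather than the paper's construction; the snippet only shows rounding every weight to a uniform grid and observing the resulting output gap, the quantity that the paper's Lipschitz bound controls in terms of width, depth, and the norm of the weights.

```python
import numpy as np

def relu_net(params, x):
    """Evaluate a fully connected ReLU network.
    params: list of (W, b) pairs; ReLU on hidden layers, affine output layer."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b, 0.0)
    W, b = params[-1]
    return W @ h + b

def quantize_uniform(params, step):
    """Naive uniform quantization: round every weight and bias
    to the nearest multiple of `step` (hypothetical step size)."""
    return [(np.round(W / step) * step, np.round(b / step) * step)
            for W, b in params]

# Toy network: input dimension 4, width 8, depth 3, weights drawn in [-1, 1].
rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]
params = [(rng.uniform(-1, 1, (dims[i + 1], dims[i])),
           rng.uniform(-1, 1, dims[i + 1])) for i in range(len(dims) - 1)]

q_params = quantize_uniform(params, step=2.0 ** -6)

# Empirical gap between the original and quantized networks on [0, 1]^4,
# a crude stand-in for the L^p distance controlled by the Lipschitz bound.
xs = rng.uniform(0, 1, (1000, dims[0]))
gap = max(abs(relu_net(params, x) - relu_net(q_params, x)).max() for x in xs)
print(f"max observed output gap: {gap:.2e}")
```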
Main file
preprint_approximation_speed_of_quantized_vs_unquantized_ReLU_neural_networks_and_beyond.pdf (1.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03672166 , version 1 (23-05-2022)
hal-03672166 , version 2 (06-10-2022)

Identifiers

Cite

Antoine Gonon, Nicolas Brisebarre, Rémi Gribonval, Elisa Riccietti. Approximation speed of quantized vs. unquantized ReLU neural networks and beyond. 2022. ⟨hal-03672166v1⟩
344 Views
307 Downloads
