Journal article, IEEE Transactions on Information Theory, 2023

Approximation speed of quantized vs. unquantized ReLU neural networks and beyond

Abstract

We deal with two complementary questions about the approximation properties of ReLU networks. First, we study how the uniform quantization of ReLU networks with real-valued weights impacts their approximation properties. We establish an upper bound on the minimal number of bits per coordinate needed for uniformly quantized ReLU networks to keep the same polynomial asymptotic approximation speeds as unquantized ones. We also characterize the error of nearest-neighbour uniform quantization of ReLU networks. This is achieved via a new lower bound on the Lipschitz constant of the map that associates the parameters of a ReLU network with its realization, together with an upper bound generalizing classical results. Second, we investigate when ReLU networks can, or cannot, be expected to have better approximation properties than other classical approximation families. Indeed, several approximation families share the following common limitation: their polynomial asymptotic approximation speed on any set is bounded from above by the encoding speed of that set. We introduce a new abstract property of approximation families, called infinite-encodability, which implies this upper bound. Many classical approximation families, defined with dictionaries or with ReLU networks, are shown to be infinite-encodable. This unifies and generalizes several situations where this upper bound was previously known.
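To make the first question concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper): it builds a random fully connected ReLU network, applies nearest-neighbour uniform quantization of its weights onto the grid step * Z, and measures the sup-norm gap between the two realizations on sampled inputs. The helper names relu_net and quantize_nearest are hypothetical, and the printed error is an empirical proxy on samples, not the paper's bound; shrinking the step shrinks the error roughly proportionally, consistent with the Lipschitz behaviour of the parameters-to-realization map discussed above.

```python
import numpy as np

def relu_net(params, x):
    """Realization of the network: params is a list of (W, b) layer pairs,
    x an array of shape (batch, input_dim)."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(h @ W.T + b, 0.0)   # hidden layers with ReLU
    W, b = params[-1]
    return h @ W.T + b                     # affine output layer

def quantize_nearest(params, step):
    """Nearest-neighbour uniform quantization: every weight and bias is
    rounded to the closest point of the grid step * Z."""
    return [(step * np.round(W / step), step * np.round(b / step))
            for W, b in params]

rng = np.random.default_rng(0)
widths = [2, 32, 32, 1]                    # input, two hidden layers, output
params = [(rng.standard_normal((widths[i + 1], widths[i])),
           rng.standard_normal(widths[i + 1]))
          for i in range(len(widths) - 1)]

x = rng.uniform(-1.0, 1.0, size=(1000, 2))
y = relu_net(params, x)
for k in (4, 8, 12):                       # roughly k bits per coordinate per unit range
    step = 2.0 ** (-k)
    err = np.max(np.abs(y - relu_net(quantize_nearest(params, step), x)))
    print(f"step 2^-{k}: sup error on samples = {err:.3e}")
```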
Main file
v2_preprint_approximation_speed_of_quantized_vs_unquantized_ReLU_neural_networks_and_beyond.pdf (1.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03672166 , version 1 (23-05-2022)
hal-03672166 , version 2 (06-10-2022)

Identifiers

Cite

Antoine Gonon, Nicolas Brisebarre, Rémi Gribonval, Elisa Riccietti. Approximation speed of quantized vs. unquantized ReLU neural networks and beyond. IEEE Transactions on Information Theory, 2023, 69 (6), pp.3960-3977. ⟨10.1109/TIT.2023.3240360⟩. ⟨hal-03672166v2⟩