Journal article in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2023

Lossless Neural Network Model Compression Through Exponent Sharing

Abstract

Artificial intelligence (AI) on the edge has emerged as an important research area over the last decade, aiming to deploy computer vision and natural language processing applications on tiny devices. These devices have limited on-chip memory and are battery-powered. Neural network (NN) models, on the other hand, require large memory to store model parameters and intermediate activation values. It is therefore critical to make models smaller so that their on-chip memory requirements are reduced. Existing techniques such as quantization and weight sharing reduce model size at the expense of some loss in accuracy. We propose a lossless model-size reduction technique based on sharing the exponents of floating-point weights, which differs from sharing the weights themselves. We present results based on general matrix multiplication (GEMM) in NN models. Our method achieves at least a 20% reduction in memory when using Bfloat16 and around a 10% reduction when using IEEE single-precision floating point for models in general, with a very small impact on execution time (up to 10% on a processor and less than 1% on an FPGA) and no loss in accuracy. On specific models from HLS4ML, about a 20% reduction in memory is observed in single precision with little execution overhead.
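The exponent-sharing idea can be sketched in a few lines of Python. The snippet below is a minimal illustration of the general principle, not the authors' implementation; the function name and the single-group granularity are assumptions. It estimates the memory saved when a group of IEEE single-precision weights stores each distinct 8-bit exponent once in a shared table and keeps only a short index per weight, alongside the unchanged sign and mantissa bits.

import numpy as np

def exponent_sharing_estimate(weights):
    # Illustrative sketch: reinterpret float32 weights as raw bits and pull
    # out the 8-bit biased exponent field (bits 23..30 of IEEE 754 single).
    bits = np.ascontiguousarray(weights, dtype=np.float32).view(np.uint32)
    exponents = (bits >> 23) & 0xFF
    # Distinct exponents in this group form the shared table; each weight
    # then needs only ceil(log2(table size)) bits to index into it.
    table = np.unique(exponents)
    index_bits = max(1, int(np.ceil(np.log2(table.size))))
    original_bits = weights.size * 32  # 1 sign + 8 exponent + 23 mantissa
    compressed_bits = weights.size * (1 + index_bits + 23) + table.size * 8
    return original_bits, compressed_bits

rng = np.random.default_rng(0)
w = (0.05 * rng.standard_normal(4096)).astype(np.float32)
orig, comp = exponent_sharing_estimate(w)
print(f"{orig} -> {comp} bits ({100 * (1 - comp / orig):.1f}% saved)")

Because trained weights typically cluster in a narrow dynamic range, only a few distinct exponents occur per group, so the per-weight index is much shorter than the full exponent field; the savings are larger for Bfloat16 (around 20%), where the 8 exponent bits account for half of each 16-bit weight.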
Main file
Lossless_Neural_Network_Model_Compression_Through_Exponent_Sharing.pdf (1.16 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04397024, version 1 (16-01-2024)

License

Attribution (CC BY)

Identifiers

HAL Id: hal-04397024
DOI: 10.1109/tvlsi.2023.3307607

Cite

Prachi Kashikar, Olivier Sentieys, Sharad Sinha. Lossless Neural Network Model Compression Through Exponent Sharing. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2023, 31, pp. 1816-1825. ⟨10.1109/tvlsi.2023.3307607⟩. ⟨hal-04397024⟩