REx: Data-Free Residual Quantization Error Expansion - Archive ouverte HAL
Conference paper, Year: 2022

REx: Data-Free Residual Quantization Error Expansion

Abstract

Deep neural networks (DNNs) are ubiquitous in computer vision and natural language processing, but suffer from high inference cost. This problem can be addressed by quantization, which consists of converting floating-point operations into a lower bit-width format. With growing concerns over privacy rights, we focus our efforts on data-free methods. However, such techniques suffer from a lack of adaptability to the target devices, as hardware typically supports only specific bit widths. Thus, to adapt to a variety of devices, a quantization method should be flexible enough to find good accuracy vs. speed trade-offs for every bit width and target device. To achieve this, we propose REx, a quantization method that leverages residual error expansion, along with group sparsity and an ensemble approximation for better parallelization. REx is backed by strong theoretical guarantees and achieves superior performance on every benchmarked application (from vision to NLP tasks), architecture (ConvNets, transformers) and bit width (from int8 to ternary quantization).
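To make the residual error expansion idea concrete, below is a minimal Python sketch, not the authors' implementation: the weights are quantized once, then the remaining quantization error is itself quantized, and so on, so that a sum of low-bit terms approximates the original tensor. The function names (quantize, residual_expansion, reconstruct), the uniform symmetric quantizer, and the expansion order are illustrative assumptions; the paper's group-sparsity and ensemble-approximation components are omitted here.

import numpy as np

def quantize(w, n_bits=8):
    # Uniform symmetric quantization: map floats to signed n_bits integers.
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    q = np.round(w / scale).astype(np.int32)
    return q, scale

def residual_expansion(w, n_bits=8, order=3):
    # Quantize the weights, then repeatedly quantize the error left by the
    # previous terms, yielding a list of low-bit (term, scale) pairs.
    terms = []
    residual = w.copy()
    for _ in range(order):
        q, scale = quantize(residual, n_bits)
        terms.append((q, scale))
        residual = residual - q * scale  # error remaining after this term
    return terms

def reconstruct(terms):
    # Sum the dequantized expansion terms to approximate the original tensor.
    return sum(q * scale for q, scale in terms)

# Usage: the approximation error shrinks as the expansion order grows.
w = np.random.randn(256, 256).astype(np.float32)
for k in range(1, 4):
    w_hat = reconstruct(residual_expansion(w, n_bits=4, order=k))
    print(f"order {k}: max error {np.abs(w - w_hat).max():.5f}")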
No file deposited

Dates and versions

hal-04425863 , version 1 (30-01-2024)

Identifiers

HAL Id: hal-04425863
DOI: 10.48550/arXiv.2203.14645

Cite

Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly. REx: Data-Free Residual Quantization Error Expansion. Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS'23), Dec 2023, New Orleans, Louisiana, United States. ⟨10.48550/arXiv.2203.14645⟩. ⟨hal-04425863⟩