Conference paper, 2024

When Quantization Affects Confidence of Large Language Models?

Abstract

Recent studies introduced effective compression techniques for Large Language Models (LLMs) via post-training quantization or low-bit weight representation. Although quantized weights offer storage efficiency and allow for faster inference, existing works have indicated that quantization might compromise performance and exacerbate biases in LLMs. This study investigates the confidence and calibration of quantized models, considering factors such as language model type and scale as contributors to quantization loss. Firstly, we reveal that quantization with GPTQ to 4-bit results in a decrease in confidence regarding true labels, with varying impacts observed among different language models. Secondly, we observe fluctuations in the impact on confidence across different scales. Finally, we propose an explanation for quantization loss based on confidence levels, indicating that quantization disproportionately affects samples where the full model exhibited low confidence levels in the first place. We make our code and quantized models publicly available.
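As a concrete illustration of the measurement the abstract describes, below is a minimal sketch (not the authors' released code) that compares the softmax probability a full-precision model and a 4-bit GPTQ export assign to the true label token. The model names, the prompt, and the true_label_confidence helper are illustrative assumptions; the quantized repository id is hypothetical.

    # Minimal sketch, assuming a causal LM and a classification-style prompt.
    # Measures the softmax probability the model assigns to the true label token.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def true_label_confidence(model, tokenizer, prompt, true_label):
        """Probability of the first token of `true_label` as the next token."""
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        with torch.no_grad():
            next_token_logits = model(**inputs).logits[0, -1]  # last position
        probs = torch.softmax(next_token_logits, dim=-1)
        label_id = tokenizer(true_label, add_special_tokens=False).input_ids[0]
        return probs[label_id].item()

    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
    full_model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
    # Hypothetical repo id: any 4-bit GPTQ export of the same model would work;
    # transformers loads GPTQ checkpoints when optimum and auto-gptq are installed.
    quantized = AutoModelForCausalLM.from_pretrained("your-org/opt-1.3b-gptq-4bit")

    prompt = "Review: 'A delightful film.' Sentiment:"
    for name, m in [("full", full_model), ("gptq-4bit", quantized)]:
        print(name, true_label_confidence(m, tokenizer, prompt, " positive"))

Averaging this probability over a labelled dataset, and pairing it with a calibration metric such as expected calibration error, gives the kind of full-model versus quantized-model comparison reported in the paper.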
Main file: NAACL_2024_irina.pdf (458.85 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04646224, version 1 (12-07-2024)

Identifiers

  • HAL Id: hal-04646224, version 1

Cite

Guillaume Metzler, Irina Proskurina, Julien Velcin, Luc Brun. When Quantization Affects Confidence of Large Language Models?. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Jun 2024, Mexico City, Mexico. ⟨hal-04646224⟩
