Preprint, Working Paper. Year: 2024

OFFMATE: full fine-tuning of LLMs on a single GPU by re-materialization and offloading

Abstract

We present OFFMATE, an efficient memory-reducing framework that enables fine-tuning of large language models on a single GPU. In the same way that PyTorch Dynamo takes a model and automatically transforms it to reduce execution time, OFFMATE takes a model and automatically modifies it to fit memory constraints (e.g. GPU VRAM), while producing the same numerical results without approximation. OFFMATE uses integer linear programming to combine re-materialization (deleting some intermediate activations and recomputing them when needed), weight and activation offloading (moving data to CPU memory), and CPU optimization in a holistically optimized way, ensuring efficient usage of the available resources. With a 10%-50% execution-time overhead, OFFMATE achieves up to 10× GPU memory reduction on billion-parameter models from HuggingFace, including Llama, Phi, Bloom and Mistral. OFFMATE is also designed to be compatible with reduced-precision and parameter-efficient fine-tuning techniques, so that the memory benefits can be combined.
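To make the two core ideas concrete, the sketch below illustrates re-materialization and activation offloading using stock PyTorch utilities (torch.utils.checkpoint and torch.autograd.graph.save_on_cpu). It is only an illustrative approximation, not the OFFMATE implementation: OFFMATE decides per tensor, via an integer linear program, whether to recompute, offload, or keep each activation, and also offloads weights, whereas this sketch applies both techniques uniformly to a toy model.

# Minimal sketch of re-materialization and activation offloading with
# standard PyTorch primitives (not the OFFMATE framework itself).
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint
from torch.autograd.graph import save_on_cpu

class Block(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(x)

class TinyModel(nn.Module):
    def __init__(self, dim: int = 512, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))

    def forward(self, x):
        for block in self.blocks:
            # Re-materialization: discard this block's intermediate
            # activations and recompute them during the backward pass.
            x = checkpoint(block, x, use_reentrant=False)
        return x

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyModel().to(device)
x = torch.randn(4, 128, 512, device=device)

# Activation offloading: tensors saved for backward are kept in pinned CPU
# memory and copied back to the GPU only when backward needs them.
with save_on_cpu(pin_memory=True):
    loss = model(x).square().mean()
loss.backward()

In this uniform form the two mechanisms already trade extra computation and CPU-GPU transfers for GPU memory; OFFMATE's contribution is to choose the combination per activation and per weight so that the overhead stays within the 10%-50% range reported above.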
Main file: main.pdf (661 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04660745, version 1 (24-07-2024)

Identifiers

  • HAL Id: hal-04660745, version 1

Cite

Xunyi Zhao, Lionel Eyraud-Dubois, Théotime Le Hellard, Julia Gusak, Olivier Beaumont. OFFMATE: full fine-tuning of LLMs on a single GPU by re-materialization and offloading. 2024. ⟨hal-04660745⟩