Conference paper, 2023

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts

Abstract

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be suboptimal, and (ii) it requires extensive resources during training and evaluation, leading to limited efficiency gains. This work introduces HyperRouter, which dynamically generates the router's parameters through a fixed hypernetwork and trainable embeddings to achieve a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation is publicly available at https://github.com/giangdip2410/HyperRouter.
Main file

HyperRouter___EMNLP.pdf (585.8 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04257067, version 1 (24-10-2023)
hal-04257067, version 2 (17-12-2023)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04257067, version 2

Cite

Giang Do, Khiem Le, Quang Pham, Trungtin Nguyen, Thanh-Nam Doan, et al. HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023 Main), Dec 2023, Resorts World Convention Centre, Singapore. pp.1-12. ⟨hal-04257067v2⟩
223 Views
139 Downloads
