Federated Learning-Based Tokenizer for Domain-Specific Language Models in Finance
Abstract
The Federated Byte-level Byte-Pair Encoding (BPE) Tokenizer
(FedByteBPE) leverages Federated Learning (FL) to train language
model tokenizers across distributed datasets in a privacy-preserving
manner. This approach enables entities to train and refine
their tokenizer models locally, with vocabulary aggregation performed
on a centralized server. This yields a robust, domain-specific
tokenizer while preserving data privacy. Supported by theoretical
analysis and empirical results from experiments on a real-world
distributed financial dataset, our findings demonstrate that the federated
tokenizer significantly outperforms off-the-shelf and individual local
tokenizers in vocabulary coverage. This highlights the potential of
federated learning for training language model tokenizers in a
privacy-preserving setting.
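
To make the local-training-plus-aggregation idea concrete, the following is a minimal, self-contained Python sketch, not the authors' implementation: each client learns a small byte-level BPE vocabulary on its private corpus, and the server only receives and merges the resulting vocabularies, so raw text never leaves the clients. The function names (`train_local_bpe`, `aggregate_vocabularies`), the toy corpora, and the simple union-based merge rule are illustrative assumptions.

```python
from collections import Counter
from typing import Dict, List, Set

def train_local_bpe(corpus: List[str], num_merges: int) -> Set[str]:
    """Learn a tiny byte-level BPE vocabulary from a local (private) corpus."""
    # Start from raw byte tokens so every input remains representable.
    vocab: Set[str] = {f"<0x{b:02X}>" for text in corpus for b in text.encode("utf-8")}
    # Represent each whitespace-separated word as a sequence of byte tokens.
    words = [[f"<0x{b:02X}>" for b in w.encode("utf-8")]
             for text in corpus for w in text.split()]
    for _ in range(num_merges):
        # Count adjacent token pairs across the local corpus.
        pairs = Counter()
        for w in words:
            pairs.update(zip(w, w[1:]))
        if not pairs:
            break
        # Merge the most frequent pair into a new vocabulary entry.
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + b
        vocab.add(merged)
        # Apply the merge to every word.
        new_words = []
        for w in words:
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and w[i] == a and w[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            new_words.append(out)
        words = new_words
    return vocab

def aggregate_vocabularies(client_vocabs: List[Set[str]]) -> Dict[str, int]:
    """Server-side aggregation: union the client vocabularies into one shared
    vocabulary; only token sets, never raw corpora, reach the server."""
    merged = sorted(set().union(*client_vocabs))
    return {token: idx for idx, token in enumerate(merged)}

if __name__ == "__main__":
    # Two clients with distinct, domain-specific financial corpora.
    bank_a = ["credit default swap spread widened", "collateralized loan obligation"]
    bank_b = ["quarterly earnings per share guidance", "liquidity coverage ratio"]
    vocab_a = train_local_bpe(bank_a, num_merges=20)
    vocab_b = train_local_bpe(bank_b, num_merges=20)
    federated_vocab = aggregate_vocabularies([vocab_a, vocab_b])
    print(f"client A: {len(vocab_a)} tokens, client B: {len(vocab_b)} tokens, "
          f"federated: {len(federated_vocab)} tokens")
```

Because the federated vocabulary is the union of the clients' vocabularies, it covers domain terms that any single local tokenizer would miss, which is the coverage advantage the abstract reports.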