Scalable Language Model Look-Ahead for LVCSR
Abstract
In this paper, a new computation and approximation scheme for Language Model Look-Ahead (LMLA) is introduced. The main benefit of LMLA is sharper pruning of the search space during LVCSR decoding. However, LMLA comes with its own cost and is known to scale badly with both LM n-gram order and LM size. The proposed method tackles this problem with a divide-and-conquer approach that enables faster computation without additional WER cost. The resulting speed-up allowed our system to participate in the real-time task of the ESTER Broadcast News transcription evaluation campaign for French.