Organizing Gaussian mixture models into a tree for scaling up speaker retrieval
Abstract
Many pattern recognition tasks set in a probabilistic framework face the following issue: evaluating the likelihood of test data is expensive when a very large number of candidate probabilistic models may explain that data.
We consider this general and important problem in the context of speaker recognition for indexing and retrieval in radio archives.
More precisely, we propose to reduce complexity at query time by organizing the speaker models into a hierarchy beforehand. Such hierarchies are classical for multi-dimensional vectors; here we propose a technique for building a hierarchy of probabilistic models in the case where these models are Gaussian mixtures. From a closed-form approximation of the Kullback-Leibler divergence between a parent model and its children, we derive an optimality criterion and an optimization technique, and from these an efficient approach for building a tree of models using clustering techniques (dendrogram-based or k-means-like). The proposed scheme is evaluated on real data.
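To make the construction concrete, the sketch below illustrates one way such a k-means-like level of a GMM tree could be built. It is only an illustration under stated assumptions, not the paper's actual formulation: it uses a Goldberger-style matching approximation as the closed-form KL surrogate between Gaussian mixtures, and builds each parent model by simply pooling its children's components rather than by the optimized parent construction described in the abstract. All class and function names (DiagGMM, kl_gmm_approx, merge_gmms, kmeans_like_level) are hypothetical.

```python
import numpy as np

class DiagGMM:
    """A Gaussian mixture with diagonal covariances: weights (K,), means (K, D), variances (K, D)."""
    def __init__(self, weights, means, variances):
        self.w = np.asarray(weights, dtype=float)
        self.w = self.w / self.w.sum()
        self.mu = np.asarray(means, dtype=float)
        self.var = np.asarray(variances, dtype=float)

def kl_gauss_diag(mu0, var0, mu1, var1):
    # Closed-form KL divergence KL(N0 || N1) between two diagonal Gaussians.
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu1 - mu0) ** 2) / var1 - 1.0)

def kl_gmm_approx(f, g):
    # Matching-based closed-form approximation of KL(f || g) between two GMMs
    # (each component of f is matched to its "closest" component of g).
    # This is an assumed surrogate, not necessarily the approximation used in the paper.
    total = 0.0
    for i in range(len(f.w)):
        best = min(
            kl_gauss_diag(f.mu[i], f.var[i], g.mu[j], g.var[j]) - np.log(g.w[j])
            for j in range(len(g.w))
        )
        total += f.w[i] * (best + np.log(f.w[i]))
    return total

def merge_gmms(models):
    # Build a parent GMM by pooling the children's components with equal child weights
    # (a simple stand-in for an optimized parent model).
    w = np.concatenate([m.w / len(models) for m in models])
    mu = np.vstack([m.mu for m in models])
    var = np.vstack([m.var for m in models])
    return DiagGMM(w, mu, var)

def kmeans_like_level(models, n_clusters, n_iter=10, seed=0):
    # Group leaf GMMs into n_clusters parent GMMs with a k-means-like loop
    # driven by the approximate KL divergence.
    rng = np.random.default_rng(seed)
    centroids = [models[i] for i in rng.choice(len(models), n_clusters, replace=False)]
    assign = np.zeros(len(models), dtype=int)
    for _ in range(n_iter):
        for i, m in enumerate(models):
            assign[i] = int(np.argmin([kl_gmm_approx(m, c) for c in centroids]))
        for k in range(n_clusters):
            members = [models[i] for i in range(len(models)) if assign[i] == k]
            if members:
                centroids[k] = merge_gmms(members)
    return centroids, assign
```

Applied recursively, such a clustering step yields a tree of models; at query time one would then compare the query model only against the children of the current node and descend into the best-matching branch, instead of evaluating the likelihood against every speaker model in the archive.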
Domains
Computer Science [cs]