Using the H-divergence to Prune Probabilistic Automata
Abstract
A problem usually encountered in probabilistic automata learning is the difficulty of dealing with large training samples and/or wide alphabets. This is partly due to the size of the resulting Probabilistic Prefix Tree (PPT), to which state merging-based learning algorithms are generally applied. In this paper, we propose a novel method to prune PPTs by making use of the H-divergence dH, recently introduced in the field of domain adaptation. dH is based on the classification error made by a hypothesis learned from unlabeled examples drawn according to the two distributions to be compared. Through a thorough comparison with state-of-the-art divergence measures, we provide experimental evidence of the efficiency of our method based on this simple and intuitive criterion.
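To make the criterion concrete, here is a minimal sketch of how the H-divergence between two samples can be estimated empirically via a domain classifier, following the standard proxy used in domain adaptation: pool the two samples, label each point by its source, and map the held-out classification error err to 2 * (1 - 2 * err). This is an illustrative implementation, not the paper's code; the function name `empirical_h_divergence`, the use of scikit-learn's `LinearSVC`, and the assumption that the strings generated by the PPT states have already been encoded as numeric feature vectors are all hypothetical choices of this sketch.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split


def empirical_h_divergence(sample_p, sample_q, seed=0):
    """Estimate dH between two samples via a domain classifier.

    sample_p, sample_q: (n, d) arrays of vector-encoded examples drawn
    from the two distributions to compare (hypothetical encoding).
    Returns 2 * (1 - 2 * err), where err is the held-out error of a
    classifier trained to tell the two samples apart: error near 0.5
    (indistinguishable samples) yields a divergence near 0.
    """
    # Pool the two unlabeled samples and label each point by its source.
    X = np.vstack([sample_p, sample_q])
    y = np.concatenate([np.zeros(len(sample_p)), np.ones(len(sample_q))])

    # Hold out half of the pooled data to measure the classification error.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=seed, stratify=y
    )
    clf = LinearSVC().fit(X_tr, y_tr)
    err = np.mean(clf.predict(X_te) != y_te)
    return 2.0 * (1.0 - 2.0 * err)
```

Under this reading, a pruning decision would compare the divergence estimate against a threshold: a value near 0 suggests the two states' distributions are hard to distinguish and the corresponding subtree could be pruned, while the choice of classifier family fixes the hypothesis class H that the divergence is measured against.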