Naturally Constrained Online Expectation Maximization
Abstract
With the rise of big data sets, learning algorithms must be adapted to piece-wise mechanisms in order to tackle the time and memory costs of large-scale computations. Furthermore, for most learning embedded systems, the input data are fed sequentially and contingently: one by one, and possibly class by class. Thus, learning algorithms should not only run online but also cope with time-varying, non-independent, and non-balanced training data over the system's entire lifetime. Online Expectation-Maximization is a well-known algorithm for learning probabilistic models in real time, owing to its simplicity and convergence properties. However, these properties hold only for large numbers of independent and identically distributed (i.i.d.) samples. In this paper, we propose to constrain online Expectation-Maximization with the Fisher distance between parameters. After presenting the algorithm, we make a thorough study of its use in Probabilistic Principal Component Analysis. First, we derive the update rules, and then we analyze the effect of the constraint on the major problems of online and sequential learning: convergence, forgetting, and interference. Furthermore, we evaluate several algorithmic protocols: i.i.d. vs. sequential data, and constraint parameters updated step-wise vs. class-wise. Our results show that this constraint increases the convergence rate of online Expectation-Maximization, decreases forgetting, and introduces a slight positive transfer.
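As a rough sketch of the mechanism (not the paper's exact derivation), online Expectation-Maximization in the style of Cappé and Moulines maintains a running average of sufficient statistics and re-estimates the parameters at every step; the constraint described above can then be pictured as a penalized M-step. The step size $\gamma_t$, the penalty weight $\lambda$, and the quadratic form of the Fisher penalty are illustrative assumptions, not the authors' formulation:

\begin{align}
  \hat{s}_t &= (1 - \gamma_t)\,\hat{s}_{t-1} + \gamma_t\, \mathbb{E}_{\theta_{t-1}}\!\left[ s(x_t, z_t) \mid x_t \right] && \text{(online E-step)} \\
  \theta_t  &= \arg\max_{\theta}\; Q(\theta; \hat{s}_t) \;-\; \lambda\, d_F\!\left(\theta, \theta_{t-1}\right)^2 && \text{(constrained M-step)}
\end{align}

Here $z_t$ denotes the latent variable (the low-dimensional latent coordinates in the Probabilistic Principal Component Analysis case), $Q(\theta; \hat{s}_t)$ is the expected complete-data log-likelihood written in terms of the accumulated statistics, and $d_F$ is the Fisher distance between successive parameter estimates; setting $\lambda = 0$ recovers the unconstrained online EM update.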
Domains
Machine Learning [cs.LG]

Origin: Files produced by the author(s)