Handling signal variability with contextual Markovian models
Abstract
There are two popular families of statistical models for dealing with sequences, and in particular with handwriting signals, either on-line or off-line: the well-known generative hidden Markov models (HMMs) and the more recently proposed discriminative hidden conditional random fields (HCRFs).
One key issue in such modeling frameworks is to handle variability efficiently. The traditional approach consists in first removing as much signal variability as possible in the preprocessing stage, and then in using more complex models: in the case of hidden Markov models, for instance, one increases the number of states and the size of the Gaussian mixtures.
We focus here on another kind of approach, where the probability distribution implemented by the models depends on a number of additional contextual variables that are assumed to be fixed, or to vary slowly, along a sequence. The context may stand for emotion features in speech recognition, physical features in gesture recognition, gender, age, etc.
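To make this concrete, consider one simple parameterization (an illustrative assumption, not necessarily the one developed in this work): in a contextual HMM, the Gaussian emission mean of each state $i$ may be modulated linearly by a context vector $\theta$,

\[ p(x_t \mid s_t = i, \theta) = \mathcal{N}\bigl(x_t ;\, \mu_i(\theta), \Sigma_i\bigr), \qquad \mu_i(\theta) = W_i \theta + b_i, \]

so that a single set of state parameters $(W_i, b_i, \Sigma_i)$ covers a whole family of context-specific distributions, rather than requiring larger mixtures to absorb the variability.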
We propose a framework for deriving Markovian models that make use of such contextual information. This yields new models that we call contextual hidden Markov models and contextual hidden conditional random fields. We detail learning algorithms for both models and investigate their performance on the IAM off-line handwriting dataset.
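The sketch below illustrates such a context-dependent emission density in Python. The linear modulation of the mean and all names (contextual_emission_logpdf, W, b, theta) are illustrative assumptions for exposition, not the implementation used in this work.

import numpy as np
from scipy.stats import multivariate_normal

def contextual_emission_logpdf(x, theta, W, b, cov):
    # Emission mean modulated linearly by the context vector theta:
    # mu(theta) = W @ theta + b; W, b and cov are per-state parameters.
    mu = W @ theta + b
    return multivariate_normal.logpdf(x, mean=mu, cov=cov)

# Example: 2-D observations with a 3-D context vector (e.g. writer
# features) that stays fixed along the whole sequence.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))     # context-to-mean projection
b = np.zeros(2)                     # context-independent baseline mean
cov = np.eye(2)                     # state covariance
theta = np.array([1.0, 0.0, -0.5])  # contextual variables for this sequence
x = rng.standard_normal(2)
print(contextual_emission_logpdf(x, theta, W, b, cov))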