A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis
Abstract
In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we use a multimodal HMM to jointly model the gesture and sound parameters. Our approach is compatible with a learning method that allows users to define gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. Our model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
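To make the joint-modeling idea concrete, below is a minimal sketch, assuming Python with NumPy and hmmlearn (neither is named in the paper): an HMM is trained on concatenated gesture and sound-parameter frames, and sound parameters are then estimated from gesture alone by forward filtering on the gesture marginal and per-state Gaussian conditioning. The dimensions, variable names, and conditioning scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

d_g, d_s, K, T = 3, 2, 8, 500    # gesture dims, sound dims, hidden states, frames
rng = np.random.default_rng(0)

# Placeholder demonstration data: synchronized gesture and sound-parameter
# frames (replace with a recorded gesture-sound example).
gesture = rng.standard_normal((T, d_g))
sound = rng.standard_normal((T, d_s))
joint = np.hstack([gesture, sound])

# Train one HMM on the joint observation vectors.
model = GaussianHMM(n_components=K, covariance_type="full", n_iter=50)
model.fit(joint)

# Split each state's joint Gaussian into gesture and sound blocks.
mu_g = model.means_[:, :d_g]
mu_s = model.means_[:, d_g:]
S = model.covars_                      # shape (K, d_g + d_s, d_g + d_s)
S_gg = S[:, :d_g, :d_g]
S_sg = S[:, d_g:, :d_g]

def gaussian_logpdf(x, mean, cov):
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def estimate_sound(gesture_seq):
    """Causal forward filtering on the gesture marginal, then the
    posterior-weighted mean of E[sound | gesture, state] per frame."""
    alpha = model.startprob_.copy()
    out = np.empty((len(gesture_seq), d_s))
    for t, g in enumerate(gesture_seq):
        if t > 0:
            alpha = alpha @ model.transmat_            # predict step
        lik = np.exp([gaussian_logpdf(g, mu_g[k], S_gg[k]) for k in range(K)])
        alpha *= lik                                   # update step
        alpha /= alpha.sum()                           # renormalize each frame
        # Per-state Gaussian conditioning: E[s | g, k] = mu_s + S_sg S_gg^-1 (g - mu_g)
        cond = np.array([mu_s[k] + S_sg[k] @ np.linalg.solve(S_gg[k], g - mu_g[k])
                         for k in range(K)])
        out[t] = alpha @ cond
    return out

sound_est = estimate_sound(gesture)    # frame-by-frame sound-parameter estimates
```

Filtering forward only (rather than smoothing with forward-backward) keeps the estimation causal, which is what a real-time gesture-to-synthesis mapping requires.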