Conference paper, 2013

A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis

Abstract

In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we propose a multimodal hidden Markov model (HMM) to jointly model gesture and sound parameters. Our approach is compatible with a learning method that allows users to define gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. The model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
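To make the idea concrete, the following is a minimal sketch of a joint ("multimodal") HMM of the kind the abstract describes, not the authors' implementation. It assumes hmmlearn for training, frame-aligned gesture and sound features, and Gaussian-regression inference (posterior-weighted conditional means); all names, dimensions, and the placeholder data are illustrative assumptions.

import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.stats import multivariate_normal

G, S, K = 3, 2, 8  # gesture dims, sound-parameter dims, hidden states

# Training: concatenate gesture and sound features frame by frame so the
# HMM learns their joint distribution (random data stands in for a
# recorded demonstration here).
gesture = np.random.randn(500, G)
sound = np.random.randn(500, S)
model = GaussianHMM(n_components=K, covariance_type="full", n_iter=20)
model.fit(np.hstack([gesture, sound]))

def generate_sound(gesture_frames):
    """Infer sound parameters from gesture alone: run the forward
    algorithm on the gesture marginal, then output the posterior-weighted
    conditional mean of each state's sound block."""
    mu, Sigma = model.means_, model.covars_
    alpha, out = None, []
    for x in gesture_frames:
        # Likelihood of the gesture block under each state's marginal.
        lik = np.array([multivariate_normal.pdf(x, mu[k, :G], Sigma[k, :G, :G])
                        for k in range(K)])
        alpha = lik * (model.startprob_ if alpha is None
                       else alpha @ model.transmat_)
        alpha /= alpha.sum() + 1e-300
        # E[sound | gesture, state k] from each state's joint Gaussian.
        cond = np.array([mu[k, G:] + Sigma[k, G:, :G] @ np.linalg.solve(
                             Sigma[k, :G, :G], x - mu[k, :G])
                         for k in range(K)])
        out.append(alpha @ cond)  # mix per-state predictions by posterior
    return np.array(out)

sound_params = generate_sound(np.random.randn(100, G))  # shape (100, S)

Because inference conditions only on the gesture block of the joint Gaussians, variations in the input gesture continuously modulate the predicted sound parameters while the learned state sequence keeps the gesture-sound relationship consistent.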
Main file: FranA_oise_Schnell_Bevilacqua_-_2013_-_A_Multimodal_Probabilistic_Model_for_Gesture--based_Control_of_Sound_Synthesis.pdf (713.99 KB)
Origin: Publisher files authorized for deposit in an open archive

Dates and versions

hal-01005538, version 1 (12-06-2014)

Identifiers

Cite

Jules Françoise, Norbert Schnell, Frédéric Bevilacqua. A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis. 21st ACM International Conference on Multimedia (MM'13), Oct 2013, Barcelona, Spain. pp.705-708, ⟨10.1145/2502081.2502184⟩. ⟨hal-01005538⟩
153 views
491 downloads
