Interactive Sound Texture Synthesis Through Semi-Automatic User Annotations
Abstract
We present a way to make environmental recordings controllable again through the use of continuous annotations of the high-level semantic parameter one wishes to control, e.g. wind strength or crowd excitation level. A partial annotation can be propagated to cover the entire recording via cross-modal analysis between gesture and sound using canonical time warping (CTW). The annotations then serve as a descriptor for lookup in corpus-based concatenative synthesis in order to invert the sound/annotation relationship. The workflow has been evaluated in a preliminary subject test; results from canonical correlation analysis (CCA) show high consistency between annotations, with a small set of audio descriptors correlating well with them. An experiment on the propagation of annotations shows the superior performance of CTW over CCA with as little as 20 s of annotated material.
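As a rough illustration of the CCA step described above, the following minimal sketch (not the authors' implementation) shows how a one-component CCA could relate a continuous annotation to frame-wise audio descriptors; the arrays `annotation` and `descriptors` are hypothetical synthetic data standing in for the paper's annotated recordings.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical inputs (not from the paper's dataset):
# annotation:  (n_frames, 1) continuous user annotation, e.g. wind strength
# descriptors: (n_frames, n_desc) frame-wise audio descriptors (loudness, centroid, ...)
rng = np.random.default_rng(0)
n_frames, n_desc = 500, 8
descriptors = rng.normal(size=(n_frames, n_desc))
annotation = descriptors[:, :2] @ np.array([[0.7], [0.3]]) + 0.1 * rng.normal(size=(n_frames, 1))

# One-component CCA finds the linear combination of descriptors
# that is maximally correlated with the annotation.
cca = CCA(n_components=1)
cca.fit(descriptors, annotation)
desc_proj, annot_proj = cca.transform(descriptors, annotation)

# The correlation of the projected pair indicates how well a small set of
# descriptors can explain the annotation (cf. the CCA results reported above).
r = np.corrcoef(desc_proj[:, 0], annot_proj[:, 0])[0, 1]
print(f"canonical correlation: {r:.2f}")
```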
Domains
Sound [cs.SD]; Human-Computer Interaction [cs.HC]; Music, musicology and performing arts; Signal and Image Processing [eess.SP]; Machine Learning [cs.LG]; Artificial Intelligence [cs.AI]; Computer-Aided Engineering; Multimedia [cs.MM]; Computer Vision and Pattern Recognition [cs.CV]; Other [cs.OH]