Spatial rendering of audio-visual synthetic speech use for immersive environments
Conference Paper Year: 2008

Spatial rendering of audio-visual synthetic speech use for immersive environments

Abstract

Synthetic speech is usually delivered as a mono audio signal. In this project, audio-visual speech synthesis is applied to a virtual agent moving in a virtual three-dimensional scene. More realistic acoustic rendering is achieved by taking into account the position of the agent in the scene, the acoustics of the room depicted in the scene, and the orientation of the virtual character's head relative to the listener. 3D phoneme-dependent radiation patterns have been measured for two speakers and a singer. These data are integrated into a Text-To-Speech system using a phoneme-to-directivity-pattern transcription module, which also includes a phoneme-to-viseme model for the agent. In addition to the effects of the agent's head orientation on the direct sound, a room acoustics model allows for realistic rendering of the room effect as well as of the apparent distance depicted in the virtual scene. Real-time synthesis is implemented in a 3D audio rendering system.
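The abstract describes the rendering chain only at a high level. The Python sketch below shows one way its two core ideas could be combined: looking up a phoneme-dependent radiation pattern at the agent's head orientation relative to the listener, and attenuating the direct sound with distance. It is purely illustrative; the directivity table, its sampling resolution, the 1/r distance law, and all names are assumptions, not the paper's measured data or implementation.

# Illustrative sketch (not the authors' implementation): the direct-sound
# level heard by the listener depends on which phoneme is uttered, on the
# agent's head orientation, and on source-listener distance.
import math

# Hypothetical directivity data: per-phoneme gain sampled every 45 degrees
# of azimuth in the horizontal plane (0 deg = agent facing the listener).
DIRECTIVITY = {
    "a": [1.00, 0.90, 0.70, 0.55, 0.50, 0.55, 0.70, 0.90],
    "s": [1.00, 0.80, 0.50, 0.30, 0.25, 0.30, 0.50, 0.80],
}

def directivity_gain(phoneme: str, azimuth_deg: float) -> float:
    """Linearly interpolate the sampled radiation pattern at azimuth_deg."""
    samples = DIRECTIVITY[phoneme]
    step = 360.0 / len(samples)
    pos = (azimuth_deg % 360.0) / step
    i = int(pos)
    frac = pos - i
    return samples[i] * (1.0 - frac) + samples[(i + 1) % len(samples)] * frac

def direct_sound_gain(phoneme: str, head_azimuth_deg: float,
                      distance_m: float, ref_distance_m: float = 1.0) -> float:
    """Direct path: phoneme-dependent directivity times 1/r attenuation."""
    distance_factor = ref_distance_m / max(distance_m, ref_distance_m)
    return directivity_gain(phoneme, head_azimuth_deg) * distance_factor

if __name__ == "__main__":
    # Agent 4 m away, head turned 90 degrees from the listener, uttering /s/.
    g = direct_sound_gain("s", 90.0, 4.0)
    print(f"direct gain: {g:.3f} ({20 * math.log10(g):.1f} dB)")

In a full system such a per-phoneme gain would be one input among several: the room acoustics model mentioned in the abstract would add the reverberant field, which dominates the distance impression at larger source-listener separations.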
No file deposited

Dates and versions

hal-01107098, version 1 (20-01-2015)

Identifiers

  • HAL Id: hal-01107098, version 1

Cite

Markus Noisternig, Brian F. G. Katz, Christophe d'Alessandro. Spatial rendering of audio-visual synthetic speech use for immersive environments. 155th ASA, 5th Forum Acusticum, and 2nd ASA-EAA Joint Conference (Acoustics'08), Jun 2008, Paris, France. p. 3939. ⟨hal-01107098⟩