Journal article. Journal of the Acoustical Society of America, 2008

Speech transcription for Embodied Conversational Agent animation

Abstract

This article investigates speech transcription within a framework of Embodied Conversational Agent (ECA) animation by voice. The idea is to detect pronounced expressions or keywords in order to automatically animate the face and body of an avatar. Extensibility, speed, and precision are the main constraints of this interactive application. After defining the set of words relevant to the application, a fast large-vocabulary speech recognition system was developed and the keyword detection was evaluated. To speed up the recognition system without degrading its accuracy, the acoustic models were shortened by an original process: the number of shared central states of context-dependent models, which are considered stationary, is reduced, while the shared states at the borders of the models remain unchanged. All the models are then retrained. The system is evaluated on one hour of the ESTER database (a French broadcast news corpus). The experiments show that reducing the number of central states of triphones is advantageous: the length of the models is reduced by 20% with no loss of accuracy.
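To make the shortening idea concrete, here is a minimal sketch in Python under stated assumptions: each context-dependent phone model is represented simply as an ordered list of tied-state IDs, the border states are kept, and the run of central states (assumed stationary) is collapsed. The class name, function, and 5-state layout are illustrative assumptions, not the authors' implementation, which operates on tied HMM states inside a full recognizer and retrains all models after shortening.

```python
from dataclasses import dataclass

@dataclass
class TriphoneHMM:
    name: str          # e.g. "a-b+c" (phone b with left context a, right context c)
    states: list[int]  # tied-state IDs, ordered from left border to right border

def shorten(model: TriphoneHMM, keep_central: int = 1) -> TriphoneHMM:
    """Keep the border states; reduce the run of central states,
    assumed stationary, to at most `keep_central` states."""
    left, *central, right = model.states
    if len(central) <= keep_central:
        return model  # already short enough
    # keep the middle-most central states as representatives
    start = (len(central) - keep_central) // 2
    reduced = central[start:start + keep_central]
    return TriphoneHMM(model.name, [left, *reduced, right])

# A 5-state model shrinks to 3 states; in the paper's process the
# shortened models are then retrained.
m = TriphoneHMM("a-b+c", [101, 102, 103, 104, 105])
print(shorten(m).states)  # [101, 103, 105]
```

In this toy form the cut is larger than the 20% average reduction the abstract reports; the point is only the structure of the operation: borders untouched, stationary center compressed, retraining afterwards.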
No file deposited

Dates and versions

hal-04094023, version 1 (10-05-2023)


Cite

Leila Zouari, Gerard Chollet. Speech transcription for Embodied Conversational Agent animation. Journal of the Acoustical Society of America, 2008, 123 (5), p. 3886. ⟨10.1121/1.2935817⟩. ⟨hal-04094023⟩