Journal article in Computer Speech and Language, 2005

Beyond ASR 1-best: Using word confusion networks in spoken language understanding

Abstract

We are interested in the problem of robust understanding from noisy spontaneous speech input. With the advances in automated speech recognition (ASR), there has been increasing interest in spoken language understanding (SLU). A key challenge in large-vocabulary spoken language understanding is robustness to ASR errors. State-of-the-art spoken language understanding relies on the best ASR hypothesis (ASR 1-best). In this paper, we propose methods for a tighter integration of ASR and SLU using word confusion networks (WCNs). WCNs obtained from ASR word graphs (lattices) provide a compact representation of multiple aligned ASR hypotheses along with word confidence scores, without compromising recognition accuracy. We present our work on exploiting WCNs instead of simply using the ASR 1-best hypothesis. We focus on the tasks of named entity detection and extraction and call classification in a spoken dialog system, although the idea is more general and applicable to other spoken language processing tasks. For named entity detection, using word lattices and WCNs improved the F-measure by 6–10% absolute. Processing WCNs was 25 times faster than processing lattices, which is very important for real-life applications. For call classification, WCNs yielded a 5–10% relative reduction in error rate compared to the ASR 1-best output.
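As a rough illustration of the representation described in the abstract (not the authors' implementation), a word confusion network can be thought of as a sequence of confusion bins, where each bin holds alternative words aligned to the same slot together with posterior-style confidence scores; collapsing each bin to its top word recovers a 1-best hypothesis, while keeping all candidates lets downstream SLU modules weigh alternatives by confidence. The minimal Python sketch below uses hypothetical words and scores.

```python
# Minimal sketch of a word confusion network (WCN); illustrative only,
# not the representation or toolkit used in the paper's experiments.
from dataclasses import dataclass


@dataclass
class Bin:
    """One alignment slot: candidate words mapped to confidence scores.
    "*DELETE*" marks an epsilon (word-deletion) alternative."""
    candidates: dict


# Hypothetical WCN for an utterance fragment such as "call six oh ..."
wcn = [
    Bin({"call": 0.92, "fall": 0.08}),
    Bin({"six": 0.60, "sixth": 0.30, "*DELETE*": 0.10}),
    Bin({"oh": 0.55, "o": 0.45}),
]


def one_best(network):
    """Collapse the WCN to a 1-best hypothesis by taking the top word per bin."""
    words = []
    for b in network:
        word, _score = max(b.candidates.items(), key=lambda kv: kv[1])
        if word != "*DELETE*":
            words.append(word)
    return " ".join(words)


print(one_best(wcn))  # -> "call six oh"
```

In this toy view, an SLU component that consumes the full `wcn` (all bins and scores) rather than only the `one_best` string can still consider "sixth" or "o" as lower-confidence alternatives, which is the kind of extra evidence the paper exploits for named entity extraction and call classification.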

Dates and versions

hal-01314993, version 1 (12-05-2016)

Identifiers

Cite

Dilek Hakkani-Tür, Frédéric Béchet, Giuseppe Riccardi, Gokhan Tur. Beyond ASR 1-best: Using word confusion networks in spoken language understanding. Computer Speech and Language, 2005, ⟨10.1016/j.csl.2005.07.005⟩. ⟨hal-01314993⟩

Collections

UNIV-AVIGNON LIA