Conference paper, 2003

Comparison of Two Speech/Music Segmentation Systems For Audio Indexing on the Web

Abstract

This article presents two major approaches to the speech/music segmentation task. The first uses a competing-model approach based on classical speech recognition parameters (MFCC). The second uses a class/non-class approach for the two main problems: speech/non-speech and music/non-music. To match the characteristics of speech and music more closely, different kinds of parameters are used: MFCC and spectral coefficients. We present both approaches along with some intrinsic experiments. We then compare their speech/music discrimination accuracy on a real-world test corpus: a broadcast program containing noisy interviews, superimposed segments (speech over music), and alternating broad-band and telephone speech. Within the classical approach, we observe that the first derivative alone, or the second derivative alone, plays a major role in the discrimination process, as does the number of cepstral coefficients. By contrast, the class/non-class approach behaves more homogeneously.
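The paper itself does not provide code; as a rough illustration of the classical competing-model front-end described above (MFCC features with first and second derivatives, one statistical model per class), a minimal Python sketch using librosa and scikit-learn could look as follows. All parameter values, function names, and the choice of Gaussian mixture models here are assumptions for illustration, not the authors' actual configuration.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(path, n_mfcc=12, sr=16000):
    # Frame-level MFCCs plus first and second derivatives (delta, delta-delta),
    # a classical speech-recognition front-end; all settings are assumed values.
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc, order=1)   # first derivative
    d2 = librosa.feature.delta(mfcc, order=2)   # second derivative
    return np.vstack([mfcc, d1, d2]).T          # shape: (n_frames, 3 * n_mfcc)

def train_class_model(features, n_components=16):
    # Competing-model approach: one GMM trained per class (speech, music).
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(features)
    return gmm

def classify_frames(features, speech_gmm, music_gmm):
    # Each frame is assigned to the class whose model gives the higher likelihood.
    speech_ll = speech_gmm.score_samples(features)
    music_ll = music_gmm.score_samples(features)
    return np.where(speech_ll > music_ll, "speech", "music")

In such a setup, the class/non-class variant described in the abstract would instead train dedicated model pairs (speech vs. non-speech, music vs. non-music), each with parameters tuned to its class.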

Domains

Sound [cs.SD]
No file deposited

Dates and versions

hal-00104131, version 1 (05-10-2006)

Identifiers

  • HAL Id: hal-00104131, version 1

Cite

Joseph Razik, Christine Sénac, Dominique Fohr, Odile Mella, Nathalie Vallès-Parlangeau. Comparison of Two Speech/Music Segmentation Systems For Audio Indexing on the Web. 7th World Multiconference on Systemics, Cybernetics and Informatics, SCI 2003, Jul 2003, Orlando, Florida, United States. pp.1-6. ⟨hal-00104131⟩