Learning Word Embeddings: Unsupervised Methods for Fixed-size Representations of Variable-length Speech Segments
Abstract
Fixed-length embeddings of words are very useful for a variety of tasks in speech and language processing. Here we systematically explore two methods of computing fixed-length embeddings for variable-length sequences. We evaluate their susceptibility to phonetic and speaker-specific variability on English, a high-resource language, and Xitsonga, a low-resource language, using two evaluation metrics: ABX word discrimination and ROC-AUC on same-different phoneme n-grams. We show that a simple downsampling method supplemented with length information can outperform the variable-length input feature representation on both evaluations. Recurrent autoencoders, trained without supervision, can yield even better results at the expense of increased computational complexity.
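As a rough illustration of the downsampling idea mentioned in the abstract (not the authors' implementation, whose exact sampling scheme is detailed in the paper), the sketch below maps a variable-length acoustic feature sequence to a fixed-size vector by picking a fixed number of evenly spaced frames and optionally appending the segment length; the function and parameter names are hypothetical.

```python
import numpy as np

def downsample_embedding(features: np.ndarray, n_frames: int = 10,
                         include_length: bool = True) -> np.ndarray:
    """Map a variable-length feature sequence (T x D, e.g. MFCC frames)
    to a fixed-size vector by sampling n_frames evenly spaced frames and
    flattening them. Optionally append the original length T, following
    the abstract's suggestion to supplement the embedding with length
    information."""
    T, D = features.shape
    # Evenly spaced frame indices over the segment (assumption: simple
    # nearest-frame sampling; interpolation would be another option).
    idx = np.linspace(0, T - 1, n_frames).round().astype(int)
    embedding = features[idx].reshape(-1)            # shape: n_frames * D
    if include_length:
        embedding = np.append(embedding, float(T))   # add duration info
    return embedding

# Example: a 73-frame segment of 13-dimensional features -> 131-dim vector
segment = np.random.randn(73, 13)
print(downsample_embedding(segment).shape)  # (131,)
```

The appeal of this approach over the recurrent autoencoder is that it requires no training, only a choice of the number of sampled frames.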