Conference paper. Year: 2020

Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech

Abstract

The language acquisition literature shows that children do not build their lexicon by segmenting the spoken input into phonemes and then assembling words from them, but rather adopt a top-down approach: they start by segmenting word-like units and then break them down into smaller units. This suggests that the ideal way of learning a language is by starting from full semantic units. In this paper, we investigate whether this is also the case for a neural model of Visually Grounded Speech trained on a speech-image retrieval task. We evaluate how well such a network is able to learn a reliable speech-to-image mapping when provided with phone, syllable, or word boundary information. We present a simple way to introduce such information into an RNN-based model and investigate which type of boundary is the most efficient. We also explore at which level of the network's architecture such information should be introduced so as to maximise performance. Finally, we show that using multiple boundary types at once in a hierarchical structure, in which low-level segments are used to recompose high-level segments, is beneficial and yields better results than using low-level or high-level segments in isolation.
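To make the boundary-injection idea concrete, the following is a minimal, illustrative PyTorch sketch rather than the authors' implementation: it assumes segment boundaries are supplied as frame indices, takes the low-level RNN hidden state at each boundary as a segment embedding, and recomposes those segments with a higher-level RNN, mirroring the hierarchical setup described above. All class, parameter, and dimension choices here are hypothetical.

```python
import torch
import torch.nn as nn

class BoundaryAwareEncoder(nn.Module):
    """Hypothetical hierarchical speech encoder using prior segmentation."""

    def __init__(self, feat_dim=39, hidden_dim=256, embed_dim=512):
        super().__init__()
        # Low-level RNN over acoustic frames.
        self.frame_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # High-level RNN over segment embeddings (phones, syllables, or words).
        self.segment_rnn = nn.GRU(hidden_dim, embed_dim, batch_first=True)

    def forward(self, feats, boundaries):
        # feats: (1, T, feat_dim) acoustic features for one utterance.
        # boundaries: list of frame indices marking segment ends.
        frame_states, _ = self.frame_rnn(feats)          # (1, T, hidden_dim)
        idx = torch.tensor(boundaries, device=feats.device)
        segment_embs = frame_states[:, idx, :]           # (1, S, hidden_dim)
        _, utt_emb = self.segment_rnn(segment_embs)      # (1, 1, embed_dim)
        return utt_emb.squeeze(0)                        # utterance embedding

# Usage: embed a 200-frame utterance with made-up word boundaries.
enc = BoundaryAwareEncoder()
feats = torch.randn(1, 200, 39)
utt = enc(feats, boundaries=[35, 90, 150, 199])
print(utt.shape)  # torch.Size([1, 512])
```

In a speech-image retrieval setting, such an utterance embedding would then be compared against image embeddings with a similarity loss; that part is omitted here.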
Main file

ARTICLE_CoNLL2020-2.pdf (476.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02962275, version 1 (13-10-2020)

Identifiers

  • HAL Id: hal-02962275, version 1

Cite

William N. Havard, Laurent Besacier, Jean-Pierre Chevrot. Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech. Conference on Computational Natural Language Learning (CoNLL), Nov 2020, Virtual, France. ⟨hal-02962275⟩
81 Views
75 Downloads
