Comparing identification of vocal imitations and computational sketches of everyday sounds
Abstract
Sounds are notoriously difficult to describe. It is thus not surprising that human speakers often resort to imitative vocalizations to communicate about sounds. In practice, vocal imitations of non-speech everyday sounds (e.g. the sound of a car passing by) are very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are often acoustically inaccurate, constrained as they are by the human vocal apparatus. The present study investigated the semantic representations evoked by vocal imitations by experimentally quantifying how well listeners could match sounds to category labels. It compared two different types of sounds: human vocal imitations and computational auditory sketches (created by algorithmic computations), both based on easily identifiable referent sounds (sounds of human actions and manufactured products). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds. More detailed analyses showed that the acoustic distance between a vocal imitation and its referent sound is not sufficient to account for this performance. They suggest that, rather than reproducing the acoustic properties of the referent sound as accurately as vocally possible, vocal imitations focus on a few salient features that depend on the particular sound category.