Towards Understanding Syntactic Structure of Language in Human-Robot Interaction - Archive ouverte HAL
Conference Paper, Year: 2018

Towards Understanding Syntactic Structure of Language in Human-Robot Interaction

Amir Aly
Tadahiro Taniguchi
Daichi Mochihashi
  • Role: Author
  • PersonId: 1040452

Abstract

Robots are progressively moving into spaces that have been primarily shaped by human agency; they collaborate with human users in different tasks that require them to understand human language so as to behave appropriately in space. To this end, a stubborn challenge that we address in this paper is inferring the syntactic structure of language, which encompasses grounding parts of speech (e.g., nouns, verbs, and prepositions) through visual perception and inducing a Combinatory Categorial Grammar (CCG) in situated human-robot interaction. This could pave the way towards making a robot able to understand the syntactic relationships between words (i.e., understand phrases), and consequently the meaning of human instructions during interaction, which constitutes the future scope of this study.

I. INTRODUCTION

Creating interactive social robots able to collaborate with human users in different tasks requires high-level spatial intelligence that enables them to discover and interact with their surroundings. Developing this spatial intelligence involves grounding language (action verbs, object characteristics (i.e., color and geometry), and spatial prepositions) and its underlying syntactic structure through sensory information so as to make a robot able to understand human instructions in the physical world.

Understanding the syntactic structure of language has been intensively investigated in the literature of cognitive robotics and computational linguistics. In cognitive robotics, different research studies have proposed computational models for grounding nouns, verbs, adjectives, and prepositions encoding spatial relationships between objects [1, 2, 22, 26, 38]. However, they have not investigated grammar understanding at the phrase level, which constitutes a higher level than grounding words through perception. Meanwhile, in computational linguistics, recent studies presented models for inducing the combinatory syntactic structure of language [5, 15]; however, they relied on annotated databases for grammar induction in which each word has a corresponding syntactic tag (as a noun, verb, etc.). This last point illustrates the important role that cognitive robotics could play in grammar induction: grounding parts of speech in visual perception allows the latent syntactic structure of phrases to be learned in a developmentally plausible manner. In this study, we build on the model of Bisk and Hockenmaier [5] for grammar induction, and propose an extended probabilistic
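As a rough illustration of the combinatory syntactic structure discussed above, the following minimal Python sketch shows how CCG categories combine words into phrases through forward and backward application. The lexicon, category assignments, and example sentence are hypothetical and hand-written purely for illustration; they are not the paper's grounded lexicon or induction procedure, nor the model of Bisk and Hockenmaier [5].

```python
# Minimal sketch of CCG function application (hypothetical lexicon, for illustration only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Cat:
    """An atomic CCG category such as NP or S."""
    name: str
    def __str__(self): return self.name

@dataclass(frozen=True)
class Slash:
    """A functor category: result/arg (looks right) or result\\arg (looks left)."""
    result: object
    arg: object
    direction: str  # '/' or '\\'
    def __str__(self): return f"({self.result}{self.direction}{self.arg})"

def forward_apply(left, right):
    """X/Y  Y  =>  X"""
    if isinstance(left, Slash) and left.direction == '/' and left.arg == right:
        return left.result
    return None

def backward_apply(left, right):
    """Y  X\\Y  =>  X"""
    if isinstance(right, Slash) and right.direction == '\\' and right.arg == left:
        return right.result
    return None

# Hypothetical category assignments for "the robot grasps the cup"
NP, S, N = Cat("NP"), Cat("S"), Cat("N")
lexicon = {
    "the":    Slash(NP, N, '/'),                    # NP/N
    "robot":  N,                                    # N
    "cup":    N,                                    # N
    "grasps": Slash(Slash(S, NP, '\\'), NP, '/'),   # (S\NP)/NP
}

# Derivation: "the cup" -> NP, "grasps the cup" -> S\NP, whole sentence -> S
obj  = forward_apply(lexicon["the"], lexicon["cup"])      # NP
vp   = forward_apply(lexicon["grasps"], obj)              # S\NP
subj = forward_apply(lexicon["the"], lexicon["robot"])    # NP
sent = backward_apply(subj, vp)                           # S
print(sent)  # S
```

In the setting described in the abstract, such category assignments would not be hand-written: parts of speech are grounded through visual perception and the combinatory structure is induced from interaction data.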
Main file
extra-7.pdf (5.76 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02493168, version 1 (27-02-2020)

Identifiers

  • HAL Id: hal-02493168, version 1

Cite

Amir Aly, Tadahiro Taniguchi, Daichi Mochihashi. Towards Understanding Syntactic Structure of Language in Human-Robot Interaction. International Workshop on Visually Grounded Interaction and Language (ViGIL), in Conjunction with the 32nd Conference on Neural Information Processing Systems (NeurIPS), Dec 2018, Montreal, Canada. ⟨hal-02493168⟩
57 Views
50 Downloads
