Preprint, Working Paper. Year: 2021

VISUALHINTS: A Visual-Lingual Environment for Multimodal Reinforcement Learning

Thomas Carta
Subhajit Chaudhury
Kartik Talamadupula
Michiaki Tatsubori

Abstract

We present VISUALHINTS, a novel environment for multimodal reinforcement learning (RL) that combines text-based interactions with visual hints obtained from the environment. Real-life problems often demand that agents use both natural language information and visual perception to achieve a goal. However, most traditional RL environments either address purely vision-based tasks, such as Atari games or video-based robotic manipulation, or rely entirely on natural language as the mode of interaction, as in text-based games and dialog systems. In this work, we aim to bridge this gap and unify the two approaches in a single environment for multimodal RL. We introduce an extension of the TextWorld cooking environment with visual clues interspersed throughout the environment. The goal is to force an RL agent to use both textual and visual features to predict natural language action commands for solving the final task of cooking a meal. We provide variations and difficulty settings in our environment to emulate a range of interactive real-world scenarios. We present a baseline multimodal agent that uses CNN-based feature extraction for visual hints and LSTMs for textual feature extraction. We believe that our proposed visual-lingual environment will facilitate novel problem settings for the RL community.
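The abstract describes a baseline agent that combines CNN-based features from visual hints with LSTM-based features from text to choose natural language action commands. The sketch below (in PyTorch) illustrates one plausible shape for such an agent; it is an assumed illustration, not the authors' implementation, and all module names, layer sizes, and the command-scoring head are hypothetical.

# Minimal sketch (assumed PyTorch; not the paper's implementation) of a
# multimodal baseline in the spirit described above: a CNN encodes the
# visual hint, LSTMs encode the textual observation and candidate
# commands, and fused features score each command. Sizes are illustrative.
import torch
import torch.nn as nn


class MultimodalBaseline(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, text_hidden=128, img_feat=128):
        super().__init__()
        # CNN branch for the visual hint (e.g. a rendered map image)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.img_proj = nn.Linear(64, img_feat)
        # LSTM branches for the textual observation and candidate commands
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_lstm = nn.LSTM(embed_dim, text_hidden, batch_first=True)
        self.cmd_lstm = nn.LSTM(embed_dim, text_hidden, batch_first=True)
        # Scorer over fused (image + observation + command) features
        self.scorer = nn.Sequential(
            nn.Linear(img_feat + 2 * text_hidden, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, obs_tokens, cmd_tokens):
        # image: (B, 3, H, W); obs_tokens: (B, T); cmd_tokens: (B, C, Tc)
        img = self.img_proj(self.cnn(image).flatten(1))           # (B, img_feat)
        _, (obs_h, _) = self.text_lstm(self.embed(obs_tokens))    # (1, B, H)
        obs = obs_h[-1]                                           # (B, H)
        B, C, Tc = cmd_tokens.shape
        _, (cmd_h, _) = self.cmd_lstm(self.embed(cmd_tokens.view(B * C, Tc)))
        cmd = cmd_h[-1].view(B, C, -1)                            # (B, C, H)
        ctx = torch.cat([img, obs], dim=-1).unsqueeze(1).expand(-1, C, -1)
        return self.scorer(torch.cat([ctx, cmd], dim=-1)).squeeze(-1)  # (B, C)

Given the current observation text, a rendered visual hint, and the set of admissible text commands returned by the environment, these scores could be turned into a policy with a softmax and trained with a standard policy-gradient or value-based objective; this is only one possible design consistent with the abstract.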
Main file
VISUALHINTS A Visual-Lingual Environment for Multimodal Reinforcement Learning.pdf (1.15 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03466647, version 1 (06-12-2021)

Identifiers

Cite

Thomas Carta, Subhajit Chaudhury, Kartik Talamadupula, Michiaki Tatsubori. VISUALHINTS: A Visual-Lingual Environment for Multimodal Reinforcement Learning. 2021. ⟨hal-03466647⟩
53 Views
74 Downloads
