Conference paper, 2018

Visual Reasoning with Multi-hop Feature Modulation

Abstract

Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. We propose to generate the parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work. By alternating between attending to the language input and generating FiLM layer parameters, this approach scales better to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation achieves state-of-the-art results on the short-input-sequence task ReferIt, on par with single-hop FiLM generation, while significantly outperforming both the prior state of the art and single-hop FiLM generation on the GuessWhat?! visual dialogue task.
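For readers unfamiliar with FiLM, the sketch below illustrates the two ingredients named in the abstract: a FiLM layer that applies per-channel scaling and shifting to a convolutional feature map, and a multi-hop generator that attends over the language tokens once per conv block and emits that block's FiLM parameters, rather than predicting all parameters in a single pass. This is a minimal illustrative PyTorch simplification, not the authors' implementation; the class and parameter names (FiLMLayer, MultiHopFiLMGenerator, channels_per_block, etc.) are assumptions made for the sketch.

# Minimal sketch of FiLM conditioning and multi-hop FiLM parameter generation.
# Illustrative only; not the authors' code.
import torch
import torch.nn as nn


class FiLMLayer(nn.Module):
    """Applies per-channel scale (gamma) and shift (beta) to a conv feature map."""

    def forward(self, features, gamma, beta):
        # features: (batch, channels, height, width); gamma, beta: (batch, channels)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # broadcast over spatial dims
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * features + beta


class MultiHopFiLMGenerator(nn.Module):
    """Generates FiLM parameters one conv block at a time ("hops"), attending
    over language token embeddings before each hop instead of predicting all
    FiLM parameters in a single pass (a simplified stand-in for the attention
    mechanism described in the paper)."""

    def __init__(self, lang_dim, channels_per_block):
        super().__init__()
        self.attn_score = nn.ModuleList(
            nn.Linear(lang_dim, 1) for _ in channels_per_block
        )
        self.to_film = nn.ModuleList(
            nn.Linear(lang_dim, 2 * c) for c in channels_per_block
        )

    def forward(self, lang_tokens):
        # lang_tokens: (batch, seq_len, lang_dim) contextual token embeddings
        film_params = []
        for score, proj in zip(self.attn_score, self.to_film):
            weights = torch.softmax(score(lang_tokens), dim=1)   # (B, T, 1)
            context = (weights * lang_tokens).sum(dim=1)         # (B, lang_dim)
            gamma, beta = proj(context).chunk(2, dim=-1)         # (B, C) each
            film_params.append((gamma, beta))
        return film_params


# Usage sketch: modulate two conv blocks of a visual pipeline.
if __name__ == "__main__":
    tokens = torch.randn(4, 12, 64)                 # 4 dialogues, 12 tokens, dim 64
    generator = MultiHopFiLMGenerator(64, [32, 64])
    film = FiLMLayer()
    feats = [torch.randn(4, 32, 14, 14), torch.randn(4, 64, 7, 7)]
    for (gamma, beta), f in zip(generator(tokens), feats):
        print(film(f, gamma, beta).shape)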
Main file: 1808.04446.pdf (5.26 MB). Origin: files produced by the author(s).

Dates and versions

hal-01927811, version 1 (20-11-2018)

Identifiers

Cite

Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Jérémie Mary, et al.. Visual Reasoning with Multi-hop Feature Modulation. ECCV 2018 - 15th European Conference on Computer Vision, Sep 2018, Munich, Germany. pp.808-831. ⟨hal-01927811⟩
156 Views
190 Downloads
