Conference Paper, Year: 2020

Inductive Document Network Embedding with Topic-Word Attention

Abstract

Document network embedding aims at learning representations for a structured text corpus, i.e. when documents are linked to each other. Recent algorithms extend network embedding approaches by incorporating the text content associated with the nodes in their formulations. In most cases, it is hard to interpret the learned representations. Moreover, little importance is given to the generalization to new documents that are not observed within the network. In this paper, we propose an interpretable and inductive document network embedding method. We introduce a novel mechanism, the Topic-Word Attention (TWA), that generates document representations based on the interplay between word and topic representations. We train these word and topic vectors through our general model, Inductive Document Network Embedding (IDNE), by leveraging the connections in the document network. Quantitative evaluations show that our approach achieves state-of-the-art performance on various networks, and we qualitatively show that our model produces meaningful and interpretable representations of the words, topics and documents.
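To make the idea of attending between topics and words concrete, below is a minimal, illustrative sketch of a topic-word attention pooling step in Python/NumPy. The shapes, the softmax-over-words normalization, and the mean pooling over topics are assumptions made for illustration only; they are not taken from the paper's exact formulation of TWA or IDNE.

```python
import numpy as np

def topic_word_attention(word_vecs, topic_vecs):
    """Illustrative sketch of a topic-word attention pooling step (not the paper's exact equations).

    word_vecs:  (n_words, d) vectors of the words in one document
    topic_vecs: (n_topics, d) global topic vectors
    Returns a (d,) document vector built by attending over the document's words.
    """
    # Attention scores between every topic and every word: (n_topics, n_words)
    scores = topic_vecs @ word_vecs.T
    # Softmax over the words of the document, per topic
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each topic yields a weighted average of the document's word vectors
    per_topic = weights @ word_vecs           # (n_topics, d)
    # Pool the per-topic views into a single document representation
    return per_topic.mean(axis=0)             # (d,)

# Toy usage: a 5-word document, 3 topics, 4-dimensional embeddings
rng = np.random.default_rng(0)
doc_vec = topic_word_attention(rng.normal(size=(5, 4)), rng.normal(size=(3, 4)))
print(doc_vec.shape)  # (4,)
```

In such a setup, the word and topic vectors would be the trainable parameters, fit against the links of the document network; because the document vector is computed solely from its words, the model can embed unseen documents inductively.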

Dates and versions

hal-02502288, version 1 (09-03-2020)

Identifiers

Cite

Robin Brochier, Adrien Guille, Julien Velcin. Inductive Document Network Embedding with Topic-Word Attention. 42nd European Conference on IR Research, ECIR 2020, Apr 2020, Lisbon, Portugal. ⟨hal-02502288⟩