Journal article, Transactions on Machine Learning Research Journal, 2023

Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers

Abstract

We introduce Chromatic Self-Attention (CSA), a novel self-attention mechanism that extends the notion of attention scores to attention _filters_, modulating the feature channels independently. We showcase CSA in a fully-attentional graph Transformer, the Chromatic Graph Transformer (CGT), which integrates both graph structural information and edge features, completely bypassing the need for local message-passing components. Our method flexibly encodes graph structure through node-node interactions by enriching the original edge features with a relative positional encoding scheme. We propose a new scheme based on random walks that encodes both structural and positional information, and we show how to incorporate higher-order topological information, such as rings in molecular graphs. Our approach achieves state-of-the-art results on the ZINC benchmark dataset while providing a flexible framework for encoding graph structure and incorporating higher-order topology.
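For intuition, the following is a minimal PyTorch sketch of the channel-wise attention idea described in the abstract. It is a hypothetical re-implementation, not the authors' code: the class name, the element-wise query-key product, and the `edge_proj` bias that injects pairwise edge/positional features into the filters are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChromaticSelfAttention(nn.Module):
    """Sketch of chromatic self-attention (hypothetical, not the paper's code).

    Standard attention assigns each node pair (i, j) a scalar score;
    here each pair gets a d-dimensional attention *filter* that
    modulates every feature channel of the values independently.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Projects dense edge features (e.g. a random-walk-based
        # relative positional encoding) into a per-channel filter bias.
        self.edge_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, edge_attr: torch.Tensor) -> torch.Tensor:
        # x: (n, d) node features; edge_attr: (n, n, d) pairwise edge features.
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Element-wise product q_i * k_j keeps one score per channel,
        # giving an (n, n, d) filter tensor instead of (n, n) scores.
        filters = q.unsqueeze(1) * k.unsqueeze(0) + self.edge_proj(edge_attr)
        # Normalize each channel's scores over the neighbor index j.
        weights = torch.softmax(filters / x.shape[-1] ** 0.5, dim=1)
        # Channel-wise weighted aggregation of the values.
        return torch.einsum("ijd,jd->id", weights, v)

# Usage: 5 nodes with 16 channels and random pairwise edge features.
layer = ChromaticSelfAttention(16)
out = layer(torch.randn(5, 16), torch.randn(5, 5, 16))  # shape (5, 16)
```

Setting `edge_attr` to zeros reduces this to plain channel-wise attention; in the abstract's framing, it is precisely these enriched pairwise features that carry the graph structure into the attention filters.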
Main file: 1141_self_attention_in_colors_anoth.pdf (1.91 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04105101 , version 1 (24-05-2023)
hal-04105101 , version 2 (09-01-2024)

Identifiers

Cite

Romain Menegaux, Emmanuel Jehanno, Margot Selosse, Julien Mairal. Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers. Transactions on Machine Learning Research Journal, 2023. ⟨hal-04105101v2⟩
71 views
96 downloads
