Archive ouverte HAL
Conference paper, Year: 2021

U-Net Transformer: Self and Cross Attention for Medical Image Segmentation

Abstract

Medical image segmentation remains particularly challenging for complex and low-contrast anatomical structures. In this paper, we introduce the U-Transformer network, which combines a U-shaped architecture for image segmentation with self- and cross-attention from Transformers. U-Transformer overcomes the inability of U-Nets to model long-range contextual interactions and spatial dependencies, which are arguably crucial for accurate segmentation in challenging contexts. To this end, attention mechanisms are incorporated at two main levels: a self-attention module leverages global interactions between encoder features, while cross-attention in the skip connections allows fine spatial recovery in the U-Net decoder by filtering out non-semantic features. Experiments on two abdominal CT-image datasets show the large performance gain brought by U-Transformer compared to U-Net and local Attention U-Nets. We also highlight the importance of using both self- and cross-attention, and the interpretability offered by U-Transformer.
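The abstract describes attention at two levels: global self-attention over the encoder feature map, and cross-attention used as a filter inside the skip connections. Below is a minimal PyTorch sketch of these two blocks. It is not the authors' implementation: the module names, the Q/K/V assignment (decoder features querying the skip map), and the sigmoid gate are illustrative assumptions chosen to match the filtering behaviour described above.

# Minimal sketch of the two attention levels described in the abstract.
# NOT the authors' code: names, Q/K/V assignment and the sigmoid gate
# are illustrative assumptions.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Global self-attention over a flattened encoder feature map."""
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # (B, H*W, C): one token per position
        out, _ = self.attn(seq, seq, seq)        # every position attends to all others
        return out.transpose(1, 2).reshape(b, c, h, w)


class CrossAttentionSkip(nn.Module):
    """Cross-attention gate on a skip connection: semantic decoder features
    suppress non-semantic activations in the high-resolution encoder features."""
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, skip, dec):                # both (B, C, H, W); same size assumed
        b, c, h, w = skip.shape
        s = skip.flatten(2).transpose(1, 2)      # (B, H*W, C)
        d = dec.flatten(2).transpose(1, 2)
        out, _ = self.attn(d, s, s)              # assumed: decoder queries the skip map
        gate = torch.sigmoid(out).transpose(1, 2).reshape(b, c, h, w)
        return skip * gate                       # filtered skip features for the decoder


if __name__ == "__main__":
    x = torch.randn(2, 64, 16, 16)
    print(SelfAttention2d(64)(x).shape)                          # (2, 64, 16, 16)
    print(CrossAttentionSkip(64)(x, torch.randn_like(x)).shape)  # (2, 64, 16, 16)

The multiplicative gate keeps the skip connection at full resolution while the attention map decides which positions reach the decoder, matching the fine spatial recovery role the abstract assigns to cross-attention.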
Main file: U_Transformer_MLMI_2021_final.pdf (1.42 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03337089, version 1 (08-09-2021)

Identifiers

  • HAL Id: hal-03337089, version 1

Cite

Olivier Petit, Nicolas Thome, Clement Rambour, Loic Themyr, Toby Collins, et al. U-Net Transformer: Self and Cross Attention for Medical Image Segmentation. MICCAI workshop MLMI, Sep 2021, Strasbourg (virtual), France. ⟨hal-03337089⟩
834 Views
1710 Downloads
