Leveraging local similarity for token merging in Vision Transformers
Abstract
Vision Transformers (ViTs) have shown promising results in computer vision tasks, challenging CNN architectures on image classification, segmentation, and object detection. However, their quadratic complexity O(N²), where N is the token sequence length, hinders their deployment on edge devices. To tackle this challenge, researchers have proposed various compression schemes that exploit sparsity and redundancy. In this paper, we focus on one of these strategies, named token merging, which dynamically and progressively combines similar tokens during inference, leading to computational savings. Most of the proposed methods compute similarities between all token pairs before picking the highest scores to drive the merging decision. This runs counter to the intuition that spatially close tokens are more similar than distant ones. We show that the distribution of cosine similarity scores of adjacent token pairs is shifted toward higher values than that of distant token pairs. Based on this observation, we propose LoTM, a Local Token Merging approach that constrains the merging window to pairs of adjacent tokens only. Evaluated on an image classification task using the ImageNet-1K dataset, LoTM outperforms most state-of-the-art approaches in accuracy at the same computational budget, without requiring further training.
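To make the local-merging idea concrete, below is a minimal sketch of threshold-based merging of adjacent token pairs using cosine similarity. It is an illustration only, not the paper's exact LoTM procedure: the function name `local_token_merge`, the similarity threshold, and the averaging-based merge rule are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def local_token_merge(tokens: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Merge adjacent token pairs whose cosine similarity exceeds `threshold`.

    tokens: (N, D) sequence of token embeddings (CLS token excluded).
    Returns a possibly shorter (M, D) sequence, M <= N.

    Hypothetical sketch; the paper's LoTM may differ in how pairs are
    scored, selected, and combined.
    """
    # Cosine similarity of each token with its right-hand neighbour: (N-1,).
    sims = F.cosine_similarity(tokens[:-1], tokens[1:], dim=-1)

    merged, i = [], 0
    n = tokens.shape[0]
    while i < n:
        if i + 1 < n and sims[i] > threshold:
            # Merge the adjacent pair by averaging (one common choice).
            merged.append((tokens[i] + tokens[i + 1]) / 2)
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return torch.stack(merged)

# Usage on a ViT-B/16-sized sequence (assumed shapes for illustration):
x = torch.randn(197, 768)                      # 196 patch tokens + CLS
y = local_token_merge(x[1:], threshold=0.9)    # merge patch tokens, keep CLS aside
```

Restricting the comparison to adjacent pairs replaces the O(N²) all-pairs similarity computation with an O(N) pass, which is the source of the computational savings described above.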