Dilated convolution with learnable spacings - HAL Open Archive
Conference paper, 2023

Dilated convolution with learnable spacings

Abstract

Recent works indicate that convolutional neural networks (CNNs) need large receptive fields (RFs) to compete with visual transformers and their attention mechanism. In CNNs, RFs can simply be enlarged by increasing the convolution kernel sizes. Yet the number of trainable parameters, which scales quadratically with the kernel's size in the 2D case, rapidly becomes prohibitive, and the training is notoriously difficult. This paper presents a new method to increase the RF size without increasing the number of parameters. The dilated convolution (DC) has already been proposed for the same purpose. DC can be seen as a convolution with a kernel that contains only a few non-zero elements placed on a regular grid. Here we present a new version of the DC in which the spacings between the non-zero elements, or equivalently their positions, are no longer fixed but learnable via backpropagation thanks to an interpolation technique. We call this method "Dilated Convolution with Learnable Spacings" (DCLS) and generalize it to the n-dimensional convolution case. However, our main focus here will be on the 2D case. We first tried our approach on ResNet50: we drop-in replaced the standard convolutions with DCLS ones, which increased the accuracy of ImageNet1k classification at iso-parameters, but at the expense of the throughput. Next, we used the recent ConvNeXt state-of-the-art convolutional architecture and drop-in replaced the depthwise convolutions with DCLS ones. This not only increased the accuracy of ImageNet1k classification but also of typical downstream and robustness tasks, again at iso-parameters but this time with negligible cost on throughput, as ConvNeXt uses separable convolutions. Conversely, classic DC led to poor performance with both ResNet50 and ConvNeXt. The code of the method is available at: https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch.
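To make the interpolation idea concrete, here is a minimal, illustrative PyTorch sketch of a depthwise 2D convolution whose kernel elements have learnable continuous positions, densified by bilinear interpolation so the positions receive gradients. This is a toy reconstruction of the technique described in the abstract, not the authors' implementation; the class and parameter names (`DCLS2dSketch`, `kernel_count`, `dilated_size`) are chosen here for illustration only. See the linked repository for the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCLS2dSketch(nn.Module):
    """Toy depthwise 2D convolution with learnable spacings.

    Each of the `kernel_count` weights of every channel has a learnable
    continuous (row, col) position inside a dilated_size x dilated_size
    grid. Bilinear interpolation spreads each weight over the four
    nearest grid cells, which makes the positions differentiable and
    hence trainable by backpropagation.
    """

    def __init__(self, channels: int, kernel_count: int = 9, dilated_size: int = 7):
        super().__init__()
        self.channels = channels
        self.dilated_size = dilated_size
        # One weight per kernel element and channel (depthwise: groups=channels).
        self.weight = nn.Parameter(0.1 * torch.randn(channels, kernel_count))
        # Continuous positions, initialised uniformly over the grid.
        self.pos = nn.Parameter(torch.rand(2, channels, kernel_count) * (dilated_size - 1))

    def make_kernel(self) -> torch.Tensor:
        s = self.dilated_size
        p = self.pos.clamp(0, s - 1 - 1e-5)   # keep positions inside the grid
        low = p.floor().long()                # integer part: lower grid indices
        frac = p - low.float()                # fractional part, carries the gradient
        kernel = self.weight.new_zeros(self.channels, s * s)
        # Scatter each weight into its four neighbouring cells with
        # bilinear coefficients; flat index = row * s + col.
        for dr in (0, 1):
            for dc in (0, 1):
                r = (low[0] + dr).clamp(max=s - 1)
                c = (low[1] + dc).clamp(max=s - 1)
                coeff = (frac[0] if dr else 1 - frac[0]) * (frac[1] if dc else 1 - frac[1])
                kernel = kernel.scatter_add(1, r * s + c, self.weight * coeff)
        return kernel.view(self.channels, 1, s, s)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The dense kernel is rebuilt at every forward pass, so gradients
        # reach both the weights and the positions.
        return F.conv2d(x, self.make_kernel(), padding=self.dilated_size // 2,
                        groups=self.channels)

x = torch.randn(1, 64, 32, 32)
y = DCLS2dSketch(channels=64)(x)  # output keeps the spatial size: (1, 64, 32, 32)
```

Because the densified kernel has `dilated_size x dilated_size` support but only `kernel_count` trainable weights per channel, the RF grows while the parameter count stays fixed, which is the trade-off the abstract describes; the official implementation constructs the dense kernel far more efficiently than this per-cell loop.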
Main file
268_dilated_convolution_with_learn (1).pdf (1.36 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04057309, version 1 (04-04-2023)

License

Attribution

Identifiers

Cite

Ismail Khalfaoui-Hassani, Thomas Pellegrini, Timothée Masquelier. Dilated convolution with learnable spacings. 11th International Conference on Learning Representations (ICLR 2023), May 2023, Kigali, Rwanda. ⟨10.48550/arXiv.2112.03740⟩. ⟨hal-04057309⟩