Conference paper, Year: 2020

Transfer Learning by Weighting Convolution

Stéphane Ayache
Ronan Sicre
Thierry Artières

Abstract

Transferring pretrained deep architectures to datasets with few labels remains a challenge in many real-world situations. This paper presents a new framework for understanding convolutional neural networks by establishing connections between Kronecker factorization and convolutional layers. We then introduce Convolution Weighting Layers, which learn a vector of weights for each channel, allowing efficient transfer learning in small training settings as well as pruning of the transferred models. Experiments are conducted in two main settings with few labeled data: transfer learning for classification and transfer learning for retrieval. Two well-known convolutional architectures are evaluated on five public datasets. We show that weighting convolutions is an efficient way to adapt pretrained models to new tasks and that the pruned networks retain good performance.
Main file: wconv.pdf (838.86 KB). Origin: files produced by the author(s).

Identifiers

  • HAL Id: hal-02544099, version 1 (deposited 16-04-2020)
Cite

Stéphane Ayache, Ronan Sicre, Thierry Artières. Transfer Learning by Weighting Convolution. International Joint Conference on Neural Networks (IJCNN), 2020, Glasgow, United Kingdom. ⟨hal-02544099⟩