Conference paper, Year: 2021

Unsupervised post-tuning of deep neural networks

Abstract

We propose in this work a new unsupervised training procedure that is most effective when applied after supervised training and fine-tuning of deep neural network classifiers. While standard regularization techniques combat overfitting by means unrelated to the target classification loss, such as minimizing the L2 norm of the weights or adding noise to the data, the model, or the training process, the proposed unsupervised training loss reduces overfitting by optimizing the true classifier risk. The approach is evaluated on several tasks of increasing difficulty and under varying conditions: unsupervised training, post-tuning, and anomaly detection. It is tested both on simple neural networks, such as a small multi-layer perceptron, and on complex Natural Language Processing models, e.g., pretrained BERT embeddings. Experimental results confirm the theory and show that the proposed approach gives the best results in post-tuning conditions, i.e., when applied after supervised training and fine-tuning.
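For illustration only, the sketch below shows the overall post-tuning workflow described in the abstract: a classifier is first trained with labels, then further optimized on unlabeled data with an unsupervised objective. The unsupervised term used here (prediction-entropy minimization) is a generic placeholder, not the risk-based loss proposed in the paper, whose exact form is not given on this page; the model, data loaders, and hyperparameters are likewise hypothetical.

    # Illustrative sketch only: the unsupervised objective below (entropy
    # minimization) is a generic stand-in, NOT the risk-based loss of the paper.
    import torch
    import torch.nn.functional as F

    def supervised_finetune(model, labeled_loader, epochs=3, lr=1e-3):
        """Stage 1: standard supervised training / fine-tuning on labeled data."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in labeled_loader:
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                opt.step()

    def unsupervised_posttune(model, unlabeled_loader, epochs=1, lr=1e-4):
        """Stage 2: post-tuning on unlabeled data with an unsupervised loss.

        The paper optimizes an estimate of the true classifier risk; here we
        minimize the entropy of the predicted class distribution purely as a
        placeholder unsupervised objective.
        """
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x in unlabeled_loader:
                opt.zero_grad()
                probs = F.softmax(model(x), dim=-1)
                entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
                entropy.backward()
                opt.step()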
Main file
ijcnn.pdf (1.09 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02022062 , version 1 (18-02-2019)
hal-02022062 , version 2 (15-04-2021)

Identifiers

  • HAL Id: hal-02022062, version 2

Cite

Christophe Cerisara, Paul Caillon, Guillaume Le Berre. Unsupervised post-tuning of deep neural networks. IJCNN, Jul 2021, Virtual Event, United States. ⟨hal-02022062v2⟩
469 Views
250 Downloads
