Conference paper, Year: 2022

A study of Knowledge Distillation in Fully Convolutional Network for Time Series Classification

Abstract

In recent years, deep learning has revolutionized the field of machine learning. While many applications of deep learning are found in computer vision, other domains such as natural language processing (NLP) and speech recognition have also benefited from advances in deep learning research. More recently, the field of time series analysis, and more specifically time series classification (TSC), has also witnessed the emergence of deep neural networks providing competitive results. Over the years, the proposed network architectures have become deeper and deeper, pushing performance higher. While these very deep models achieve impressive accuracy, their training and deployment have become challenging. Indeed, a large number of GPUs is often required to train state-of-the-art networks and obtain high performance. While the requirements of the training step may be acceptable, deploying very deep neural networks can be difficult, especially in embedded systems (e.g. robots) or on devices with limited resources (e.g. web browsers, smartphones). In this context, knowledge distillation is a machine learning task that consists of transferring knowledge from a large model to a smaller one with fewer parameters. The goal is to create a lighter model that mimics the predictions of a larger one, in order to obtain similar performance at a fraction of the computational cost. In this paper, we introduce and explore the concept of knowledge distillation for the specific task of TSC. We also present a first experimental study showing promising results on several datasets of the UCR time series archive. As current state-of-the-art models for TSC are deep and sometimes ensembles of models, we believe that knowledge distillation could become an important research area in the coming years.
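
For readers unfamiliar with knowledge distillation, the sketch below illustrates the standard Hinton-style distillation loss in PyTorch: the student is trained to match the teacher's temperature-softened output distribution while also fitting the ground-truth labels. This is a generic illustration under assumed hyperparameters (temperature, alpha), not the exact formulation used in the paper.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Soft targets: the teacher's class distribution, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence between student and teacher distributions, rescaled by T^2
    # so gradient magnitudes remain comparable across temperatures.
    kd_term = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (temperature ** 2)
    # Hard targets: the usual cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    # alpha balances imitating the teacher against fitting the labels (assumed value).
    return alpha * kd_term + (1.0 - alpha) * ce_term
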
Main file
ijcnn2022.pdf (320.24 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03800230, version 1 (06-10-2022)

Identifiers

Cite

Emel Ay, Maxime Devanne, Jonathan Weber, Germain Forestier. A study of Knowledge Distillation in Fully Convolutional Network for Time Series Classification. 2022 International Joint Conference on Neural Networks (IJCNN), Jul 2022, Padua, Italy. pp.1-8, ⟨10.1109/IJCNN55064.2022.9892915⟩. ⟨hal-03800230⟩