Conference paper, Year: 2017

Very Deep Convolutional Networks for Text Classification

Alexis Conneau
  • Role: Author
Yann Lecun
  • Role: Author

Abstract

The dominant approaches for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow compared with the deep convolutional networks that have pushed the state of the art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We show that the performance of this model increases with depth: using up to 29 convolutional layers, we report improvements over the state of the art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.
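
To give a concrete picture of the architecture sketched in the abstract, the PyTorch snippet below assembles a small VDCNN-style character-level classifier: a character embedding, stacked width-3 temporal convolutions with pooling that halves the resolution between stages, k-max pooling, and a fully connected classifier. This is a minimal sketch based on the abstract, not the authors' released implementation; the class names (ConvBlock, VDCNNSketch), the layer widths, the 70-character vocabulary, and the use of torch.topk as a stand-in for k-max pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two temporal convolutions of width 3 with batch norm and ReLU,
    the small-convolution building block the abstract refers to."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)


class VDCNNSketch(nn.Module):
    """Character-level classifier with stacked small convolutions and pooling.
    Depth, layer widths and vocabulary size here are illustrative assumptions."""

    def __init__(self, n_classes=4, vocab_size=70, embed_dim=16, k=8):
        super().__init__()
        self.k = k
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.first_conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        # Four convolutional stages; resolution is halved between stages.
        self.stages = nn.Sequential(
            ConvBlock(64, 64),
            nn.MaxPool1d(kernel_size=3, stride=2, padding=1),
            ConvBlock(64, 128),
            nn.MaxPool1d(kernel_size=3, stride=2, padding=1),
            ConvBlock(128, 256),
            nn.MaxPool1d(kernel_size=3, stride=2, padding=1),
            ConvBlock(256, 512),
        )
        self.classifier = nn.Sequential(
            nn.Linear(512 * k, 2048),
            nn.ReLU(),
            nn.Linear(2048, 2048),
            nn.ReLU(),
            nn.Linear(2048, n_classes),
        )

    def forward(self, chars):
        # chars: (batch, seq_len) tensor of integer character ids
        x = self.embed(chars).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = self.first_conv(x)
        x = self.stages(x)
        # torch.topk stands in for k-max pooling over the time dimension
        # (it does not preserve the temporal order of the k values).
        x = x.topk(self.k, dim=2).values
        return self.classifier(x.flatten(1))


# Toy usage: a batch of 2 texts, each padded or truncated to 1024 characters.
logits = VDCNNSketch()(torch.randint(0, 70, (2, 1024)))
print(logits.shape)  # torch.Size([2, 4])
```

This sketch stacks 9 convolutional layers; deeper variants, up to the 29 convolutional layers mentioned in the abstract, would repeat the convolutional blocks within each stage.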

Dates and versions

hal-01454940, version 1 (03-02-2017)

Identifiers

Cite

Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun. Very Deep Convolutional Networks for Text Classification. European Chapter of the Association for Computational Linguistics (EACL'17), 2017, Valencia, Spain. ⟨hal-01454940⟩
