Unsupervised Word Representation Learning with a Bilinear Convolutional Network on Characters
Abstract
In this paper, we propose a new unsupervised method for learning word embeddings that takes raw characters as input, bypassing the problems that arise from relying on a dictionary. To this end, we translate the distributional hypothesis into an unsupervised metric learning objective, which allows us to use an encoder alone instead of an encoder-decoder architecture. We propose a convolutional neural network with bilinear product blocks and residual connections to encode co-occurrence patterns.
We demonstrate the effectiveness of our approach by comparing it with standard word embedding methods such as fastText and GloVe on several benchmarks.
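To make the encoder concrete, the sketch below shows one possible reading of a character-level convolutional encoder built from bilinear product blocks with residual connections. The exact block design, channel sizes, and pooling are assumptions for illustration (the paper's precise architecture is given in the body), and PyTorch is used purely as an example framework.

```python
# Hypothetical sketch, not the authors' exact architecture: each block multiplies
# two 1-D convolution branches element-wise (a simple bilinear product) and adds
# a residual connection; stacking blocks over character embeddings yields a word vector.
import torch
import torch.nn as nn


class BilinearResidualBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # keep the character-sequence length unchanged
        self.conv_a = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.conv_b = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.norm = nn.BatchNorm1d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, num_characters)
        bilinear = self.conv_a(x) * self.conv_b(x)  # multiplicative (bilinear) interaction
        return x + self.norm(bilinear)              # residual connection


class CharacterEncoder(nn.Module):
    """Maps a sequence of character ids to a fixed-size word embedding."""

    def __init__(self, n_chars: int = 128, dim: int = 256, n_blocks: int = 4):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.blocks = nn.Sequential(*[BilinearResidualBlock(dim) for _ in range(n_blocks)])

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, num_characters) integer character codes
        h = self.embed(char_ids).transpose(1, 2)  # -> (batch, dim, num_characters)
        h = self.blocks(h)
        return h.mean(dim=2)                      # pool over characters -> word vector
```

Because the objective is a metric-learning one, this encoder can be trained on its own by pulling embeddings of words that co-occur closer together and pushing apart those that do not, with no decoder required.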