Book chapter, 2019

Linguistic Information in Word Embeddings

Ali Basirat
Marc Tang

Abstract

We study the presence of linguistically motivated information in word embeddings generated with statistical methods. The nominal distinctions of uter/neuter, common/proper, and count/mass in Swedish are selected to represent grammatical, semantic, and mixed types of nominal categories, respectively. Our results indicate that typical grammatical and semantic features are easily captured by word embeddings. In our experiments with a single-layer feed-forward neural network, classifying the semantic feature required significantly fewer neurons than the grammatical feature, although it also produced higher entropy in the classification output despite its high accuracy. Furthermore, the count/mass distinction posed difficulties for the model even when the number of neurons was tuned to nearly its maximum.
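A minimal sketch of the kind of setup described above, not the authors' exact implementation: a feed-forward classifier with one hidden layer trained on word embeddings to predict a binary nominal category (e.g. uter vs. neuter). The placeholder embeddings X, labels y, hidden-layer size, and the choice of scikit-learn's MLPClassifier are all illustrative assumptions.

```python
# Sketch: binary classification of a nominal category from word embeddings
# with a single hidden-layer feed-forward network. All data and
# hyperparameters below are illustrative placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_words, dim = 1000, 100                 # hypothetical vocabulary size and embedding dimension
X = rng.normal(size=(n_words, dim))      # placeholder for real pre-trained embeddings
y = rng.integers(0, 2, size=n_words)     # placeholder labels (e.g. 0 = neuter, 1 = uter)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# One hidden layer; varying hidden_layer_sizes probes how many neurons
# a given nominal category needs, in the spirit of the experiments above.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Mean entropy of the predicted class distribution: one way to quantify
# the output uncertainty mentioned in the abstract.
proba = clf.predict_proba(X_test)
entropy = -(proba * np.log2(proba + 1e-12)).sum(axis=1).mean()
print("mean prediction entropy (bits):", entropy)
```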
No file deposited

Dates and versions

hal-02529156 , version 1 (02-04-2020)


Cite

Ali Basirat, Marc Tang. Linguistic Information in Word Embeddings. Agents and artificial intelligence, pp.492-513, 2019, ⟨10.1007/978-3-030-05453-3_23⟩. ⟨hal-02529156⟩