Linguistic Preprocessing for Distributional Analysis : Evidence from French - Archive ouverte HAL
Conference paper, Year: 2015

Linguistic Preprocessing for Distributional Analysis : Evidence from French

Emmanuel Cartier

Abstract

For about fifteen years, the statistical paradigm, rooted in the distributional hypothesis (Harris, 1954) and corpus linguistics (Firth, 1957), has prevailed in the NLP field, with many convincing results: multiword expression, part-of-speech and semantic relation identification, and even probabilistic models of language. These studies have brought to light interesting linguistic phenomena, such as collocations, "collostructions" (Stefanowitsch, 2003) and "word sketches" (Kilgarriff et al., 2004). Cognitive Semantics (Langacker, 1987, 1991; Geeraerts et al., 1994; Schmid, 2007, 2013) has also introduced novel concepts, most notably that of "entrenchment", which makes it possible to ground the social lexicalization of linguistic signs and to correlate it with repetition in corpora. Finally, Construction Grammars (Fillmore et al., 1988; Goldberg, 1995, 2003; Croft, 2001, 2004, 2007) have proposed linguistic models which reject the distinction between the lexicon (a list of "words") and grammar (the rules stating how words combine): all linguistic signs are constructions, from morphemes to syntactic schemes, leading to the notion of a "constructicon" as the goal of linguistic description.

Computational Models of the Distributional Hypothesis

As far as Computational Linguistics is concerned, the Vector Space Model (VSM) has prevailed as the implementation of the distributional hypothesis, giving rise to continuous sophistication and several state-of-the-art surveys (Turney and Pantel, 2010; Lenci et al., 2010; Kiela and Clark, 2013; Clark, 2015). (Kiela and Clark, 2014) state that the following parameters are involved in any VSM implementation: vector size, window size, window-based or dependency-based context, feature granularity, similarity metric, weighting scheme, and stopword and high-frequency cut-off. Three of them are directly linked to linguistic preprocessing: window-based or dependency-based context, the latter requiring a dependency analysis of the corpus; feature granularity, i.e. whether n-grams are computed over the raw corpus or over a lemmatized or POS-tagged version; and stopword and high-frequency cut-off, i.e. the removal of high-frequency words or "tool words". (Kiela and Clark, 2014) conducted six experiments/tasks with varying values for each parameter, so as to assess the most efficient settings. They conclude that dependency-based contexts do not bring any improvement over raw-text n-gram counts; that, as far as feature granularity is concerned, stemming yields the best results; and that stopword or high-frequency word removal does yield better results, but only if no raw frequency weighting is applied, which is in line with the conclusion of (Bullinaria and Levy, 2012).

Nevertheless, these conclusions should be refined and completed:

1/ As far as feature granularity is concerned, the authors do not consider combining features from different levels; (Béchet et al., 2012), for example, have shown that combining features from three levels (form, lemma, POS tag) can result in better pattern recognition for specific linguistic tasks; such a combination is also in line with the Cognitive Semantics and Construction Grammar hypothesis that linguistic signs emerge as constructions combining schemes, lemmas and specific forms.

2/ The experiments on dependency-based contexts call for further investigation, as several works (for example Padó and Lapata, 2007) reached the opposite conclusion.

3/ Stopword or high-frequency word removal yields better results if no frequency weighting is applied; but the authors apply, as almost all work in the field does, a brute-force removal based either on "gold standard" stopword lists or on an arbitrary frequency cut-off; this technique should be refined so as to remove only the noisy words or n-grams, and it should be linguistically motivated.

Linguistic Motivation for Linguistic Preprocessing

The hypothesis supported in this paper is that, while repetition of sequences is the best way to access usage and to induce linguistic properties, language users do not rely only on the sequentiality of language, but also on non-sequential knowledge that cannot be recovered from the raw distribution of words. This knowledge is linked to the three classical linguistic units: lexical units, phrases and predicate structures, each being a combination of the preceding one, with language-specific rules for its construction. Probabilistic models of language have so far focused mainly on the level of lexical units, but to fully model language, probabilistic research must also model and preprocess phrases and predicate structures. The present paper grounds this hypothesis through an experiment aimed at retrieving lexico-semantic relations in French, in which the corpus is preprocessed in three ways: morphosyntactic analysis, peripheral lexical unit removal, and phrase identification. As we will see, these steps make it easier to access the predicate structures that the experiment aims at revealing, while using a VSM on the resulting preprocessed corpus.
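To make the VSM parameters enumerated above more concrete, the following minimal sketch builds window-based co-occurrence counts with a configurable window size, feature granularity and high-frequency cut-off, and applies PPMI weighting. It is an illustrative assumption of how such a model could be set up, not the configuration used in the paper or in (Kiela and Clark, 2014).

```python
# Minimal sketch of a window-based co-occurrence VSM exposing the parameters
# discussed above (window size, feature granularity, high-frequency cut-off,
# weighting scheme). All defaults and token fields are illustrative assumptions.
from collections import Counter, defaultdict
from math import log

def feature(token, granularity="lemma"):
    """Select the distributional feature for a (form, lemma, pos) token."""
    form, lemma, pos = token
    return {"form": form, "lemma": lemma, "pos": pos,
            "lemma+pos": f"{lemma}/{pos}"}[granularity]

def cooccurrences(sentences, window=2, granularity="lemma", cutoff=None):
    """Window-based co-occurrence counts; optionally drop the `cutoff`
    most frequent features (a crude high-frequency cut-off)."""
    freq = Counter(feature(t, granularity) for s in sentences for t in s)
    stop = {w for w, _ in freq.most_common(cutoff)} if cutoff else set()
    counts = defaultdict(Counter)
    for sent in sentences:
        feats = [feature(t, granularity) for t in sent]
        for i, target in enumerate(feats):
            if target in stop:
                continue
            for j in range(max(0, i - window), min(len(feats), i + window + 1)):
                if j != i and feats[j] not in stop:
                    counts[target][feats[j]] += 1
    return counts

def ppmi(counts):
    """Positive PMI weighting, one common alternative to raw frequency."""
    total = sum(sum(ctx.values()) for ctx in counts.values())
    row = {w: sum(ctx.values()) for w, ctx in counts.items()}
    col = Counter()
    for ctx in counts.values():
        col.update(ctx)
    return {w: {c: max(0.0, log(n * total / (row[w] * col[c])))
                for c, n in ctx.items()}
            for w, ctx in counts.items()}

# Example usage with two toy (form, lemma, pos) sentences:
sents = [[("les", "le", "DET"), ("chats", "chat", "NOUN"), ("dorment", "dormir", "VERB")],
         [("les", "le", "DET"), ("chiens", "chien", "NOUN"), ("dorment", "dormir", "VERB")]]
weights = ppmi(cooccurrences(sents, window=2, granularity="lemma"))
```

Note that PPMI weighting already dampens the contribution of very frequent "tool words", which is one way of seeing why a brute-force stopword cut-off mainly helps when no such weighting is applied.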
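The three preprocessing steps put forward in the paper (morphosyntactic analysis, peripheral lexical unit removal, phrase identification) can likewise be sketched as a small pipeline. The POS inventory, the categories treated as peripheral and the naive adjacency-based chunking below are hypothetical choices for illustration; they do not reproduce the paper's actual rules for French.

```python
# Hedged sketch of the three preprocessing steps described in the abstract,
# applied before building the VSM. The POS tags, the categories treated as
# "peripheral" and the adjacency-based chunking are illustrative assumptions.
PERIPHERAL_POS = {"DET", "ADP", "PUNCT"}  # assumed peripheral lexical units

def preprocess(tagged_sentence):
    """tagged_sentence: list of (form, lemma, pos) produced by a
    morphosyntactic analyser (step 1). Step 2 drops peripheral units;
    step 3 merges adjacent NOUN/ADJ tokens into single phrase tokens."""
    kept = [t for t in tagged_sentence if t[2] not in PERIPHERAL_POS]
    out, i = [], 0
    while i < len(kept):
        j = i
        if kept[i][2] in {"NOUN", "ADJ"}:
            while j + 1 < len(kept) and kept[j + 1][2] in {"NOUN", "ADJ"}:
                j += 1
        if j > i:  # a multiword phrase was found
            forms = "_".join(t[0] for t in kept[i:j + 1])
            lemmas = "_".join(t[1] for t in kept[i:j + 1])
            out.append((forms, lemmas, "PHRASE"))
        else:
            out.append(kept[i])
        i = j + 1
    return out

# "une analyse morphosyntaxique fine": the determiner is dropped and the
# noun-adjective sequence becomes a single phrase token.
print(preprocess([("une", "un", "DET"),
                  ("analyse", "analyse", "NOUN"),
                  ("morphosyntaxique", "morphosyntaxique", "ADJ"),
                  ("fine", "fin", "ADJ")]))
# -> [('analyse_morphosyntaxique_fine', 'analyse_morphosyntaxique_fin', 'PHRASE')]
```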

Domains

Linguistics
Main file
sentence-simplification.pdf (121.98 KB)
presentationCL2015 17072015-1.pdf (231.08 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01443193, version 1 (22-01-2017)

Identifiers

  • HAL Id: hal-01443193, version 1

Cite

Emmanuel Cartier. Linguistic Preprocessing for Distributional Analysis : Evidence from French. Corpus Linguistics 2015, Lancaster University, Jul 2015, Lancaster, United Kingdom. ⟨hal-01443193⟩
