Correlation between textual similarity and quality of LDA topic model results
Abstract
The LDA topic model describes a corpus on the basis of its vocabulary. Our experiment aims at determining whether the quality of LDA outputs can be estimated through text similarity metrics and, if so, which metric is the most relevant. To do so, we use a categorized corpus and apply these metrics to every pair of categories. We present correlation scores between several metrics and the quality of the topic model. The experiments also include a comparison between simple and complex term extraction within our framework. We observed very high correlations with the Hellinger distance, with or without complex terms, while the Soergel distance performs best when complex terms are included. These experiments constitute a case study on a categorized corpus of 20,000 article abstracts.
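For reference, the two distances named above are commonly defined as follows (these are the standard definitions, not quoted from this paper); for discrete probability vectors $P = (p_1, \dots, p_n)$ and $Q = (q_1, \dots, q_n)$, the Hellinger distance is
\[
H(P, Q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^{n} \bigl(\sqrt{p_i} - \sqrt{q_i}\bigr)^{2}},
\]
and the Soergel distance is
\[
d_{\mathrm{Soergel}}(P, Q) = \frac{\sum_{i=1}^{n} \lvert p_i - q_i \rvert}{\sum_{i=1}^{n} \max(p_i, q_i)}.
\]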