Exploring unsupervised textual representations generated by neural language models in the context of automatic tweet stream summarization
Abstract
Users are often overwhelmed by the amount of information generated on online social networks and media (OSNEM), in particular Twitter, during specific events. Summarizing these information streams would help them stay informed in a reasonable time. In parallel, recent state-of-the-art work on summarization focuses on deep neural models and pre-trained language models. In this context, we aim at (i) evaluating different pre-trained language models (PLMs) for representing microblogs (i.e., tweets), (ii) identifying the most suitable ones in a summarization context, and (iii) determining how neural models can be used given the input-size limitation of such models. For this purpose, we divided the problem into three questions and conducted experiments on three different datasets. Using a simple greedy algorithm, we first compared several pre-trained models for single-tweet representation. We then evaluated the quality of the average representation of the stream and sought to use it as a starting point for a neural approach. Initial results show the benefit of using USE and Sentence-BERT representations for tweet stream summarization, as well as the strong potential of the average representation of the stream.
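To make the approach concrete, below is a minimal sketch (not the paper's actual implementation) of a greedy extractive summarizer guided by the average representation of the stream, assuming the sentence-transformers library; the model name `all-MiniLM-L6-v2`, the budget `k`, and the `tweets` list are illustrative placeholders:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def greedy_summarize(tweets, model_name="all-MiniLM-L6-v2", k=5):
    """Greedily select k tweets whose combined embedding best
    approximates the average embedding of the whole stream.
    Hypothetical sketch; model name and budget are placeholders."""
    model = SentenceTransformer(model_name)
    emb = model.encode(tweets, normalize_embeddings=True)  # shape (n, d)
    centroid = emb.mean(axis=0)  # average representation of the stream
    selected, remaining = [], list(range(len(tweets)))
    summary_sum = np.zeros_like(centroid)  # running sum of selected embeddings

    for _ in range(min(k, len(tweets))):
        # Score a candidate by the cosine similarity between the centroid
        # and the mean embedding of the summary extended with that candidate.
        def score(i):
            cand = (summary_sum + emb[i]) / (len(selected) + 1)
            denom = np.linalg.norm(cand) * np.linalg.norm(centroid) + 1e-12
            return float(np.dot(cand, centroid) / denom)

        best = max(remaining, key=score)
        selected.append(best)
        summary_sum = summary_sum + emb[best]
        remaining.remove(best)

    return [tweets[i] for i in selected]
```

Under these assumptions, swapping `model_name` for another Sentence-BERT checkpoint (or replacing the encoder with USE) reproduces the kind of representation comparison described above.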