Controllable Text Generation to Fight Disinformation
Abstract
During the 21st century, the global development of online platforms has led to their use in Strategic Influence Operations. These operations cause deaths and put democratic processes at risk. To counter them, governmental and non-governmental entities have emerged, employing analysts specialized in detecting and characterizing foreign disinformation campaigns. These analysts need constant training to adopt new methodologies against a rapidly evolving threat landscape. Training takes place in simulated environments that reproduce the Internet, providing faithful training environments that we call infospheres. This paper describes an ongoing PhD project that aims to automate content generation to populate these infospheres. We highlight key challenges in this domain: controllable text generation, fictive writing, hallucination prevention, and evaluation protocols to assess the methodologies used. We propose an approach based on a three-level modeling that structures the knowledge and social dynamics associated with a training scenario. This modeling is used in a prompting methodology to automate content production with Large Language Models. Finally, we propose directions for continuing this work, including social graph modeling to mimic how information propagates, as well as new evaluation protocols.
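As a purely illustrative sketch of how such a three-level modeling could feed a prompting methodology, the Python example below assembles a prompt from three hypothetical levels (scenario knowledge, actor dynamics, and message constraints). The level names, dataclasses, and the `build_prompt` and `call_llm` functions are assumptions introduced for illustration, not the paper's actual schema or implementation.

```python
from dataclasses import dataclass, field

# Hypothetical three-level scenario model; level names are illustrative
# placeholders, not the schema used in the PhD project.
@dataclass
class ScenarioLevel:   # level 1: global narrative and factual knowledge
    narrative: str
    key_facts: list[str] = field(default_factory=list)

@dataclass
class ActorLevel:      # level 2: social dynamics between fictive accounts
    name: str
    stance: str
    relations: dict[str, str] = field(default_factory=dict)

@dataclass
class MessageLevel:    # level 3: constraints on an individual post
    platform: str
    tone: str
    max_words: int

def build_prompt(scenario: ScenarioLevel, actor: ActorLevel, msg: MessageLevel) -> str:
    """Assemble a controllable-generation prompt from the three levels."""
    facts = "; ".join(scenario.key_facts)
    return (
        f"Training-scenario narrative: {scenario.narrative}\n"
        f"Known facts (do not contradict them): {facts}\n"
        f"Write as {actor.name}, whose stance is '{actor.stance}'.\n"
        f"Relations to other accounts: {actor.relations}\n"
        f"Produce one {msg.platform} post, tone '{msg.tone}', "
        f"at most {msg.max_words} words."
    )

def call_llm(prompt: str) -> str:
    # Stub: replace with any Large Language Model client available
    # in the training platform.
    return f"[generated post for a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    scenario = ScenarioLevel("A fictive election in the country of Valdoria",
                             ["Election day is 12 May", "Two main parties: A and B"])
    actor = ActorLevel("user_42", "supports party B", {"user_7": "frequent ally"})
    msg = MessageLevel("microblog", "ironic", 60)
    print(call_llm(build_prompt(scenario, actor, msg)))
```

Structuring the prompt this way keeps scenario-level knowledge separate from actor-level social dynamics and post-level constraints, which is one plausible way to control generation and limit hallucinations when populating an infosphere.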