Evaluation of Transformer Models (from BERT to GPT) for Geographic Information Recognition
Abstract
In this presentation, we will focus on the automatic recognition of named entities, nested named entities, nominal entities, and geographic information (spatial relationships and geographic coordinates). We will present a comparative study of different methods, with a particular focus on Transformer-based models, from BERT to GPT. The objective of this task is to extract and structure geographic information as a preliminary step for subsequent analysis tasks. In addition to evaluating these automatic annotation methods, we will introduce GeoEDdA, an annotated dataset of encyclopedic articles used for the training and evaluation phases. Finally, we will present a case study on toponym resolution and mapping based on the generated data.
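
As a purely illustrative sketch of the kind of pipeline summarized above (not the method or models evaluated in the presentation), the following Python snippet chains a generic Transformer NER model with an off-the-shelf geocoder. It assumes the Hugging Face transformers and geopy libraries; the pipeline's default English CoNLL-03 checkpoint stands in for a model fine-tuned on GeoEDdA, and Nominatim geocoding stands in for the toponym-resolution step of the case study.

    # Minimal sketch: Transformer-based NER for place names, followed by a
    # simple toponym-resolution (geocoding) step. Placeholder models/services;
    # not the configuration used in the study.
    from transformers import pipeline
    from geopy.geocoders import Nominatim

    # Token-classification pipeline; aggregation merges word pieces into entity spans.
    ner = pipeline("token-classification", aggregation_strategy="simple")

    text = (
        "The city lies on the Rhône, a few leagues from Lyon, "
        "and belongs to the province of Dauphiné."
    )

    # Keep only location-type entities (LOC) as candidate toponyms.
    toponyms = [e["word"] for e in ner(text) if e["entity_group"] == "LOC"]

    # Resolve each toponym to geographic coordinates for later mapping.
    geolocator = Nominatim(user_agent="geo-ner-demo")  # arbitrary client identifier
    for name in toponyms:
        place = geolocator.geocode(name)
        if place is not None:
            print(f"{name}: ({place.latitude:.4f}, {place.longitude:.4f})")

In a realistic setting, the generic NER checkpoint would be replaced by a model fine-tuned on the annotated dataset, and the geocoding step would include disambiguation between candidate locations rather than accepting the first match.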