Conference paper, 2022

Crowdsourcing label noise simulation on image classification tasks

Abstract

It is common to collect labelled datasets using crowdsourcing. Yet label quality depends strongly on the difficulty of the tasks and on the workers' abilities. With such datasets, the lack of ground truth makes it hard to assess the quality of the annotations. Few crowdsourced datasets are openly available, and even fewer provide both tasks of heterogeneous difficulty and every worker's answers before aggregation. We propose a new crowdsourcing simulation framework with quality control, which lets us empirically evaluate different learning strategies on the obtained labels. Our goal is to separate two sources of noise: workers who provide no information about the true label, as opposed to poorly performing workers who remain useful on easy tasks.
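As a rough illustration of the distinction the abstract draws, the sketch below simulates answers from the two worker types. This is a minimal Python sketch under stated assumptions: the function names are invented, and the logistic ability/difficulty link is a common modelling choice in the crowdsourcing literature, not necessarily the exact model of the paper.

    # Minimal sketch, assuming NumPy. Names and the logistic link are
    # illustrative assumptions, not the paper's exact simulation model.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_classes = 10

    def spammer_answer(true_label, rng):
        # Uninformative worker: answers uniformly at random, so the
        # answer carries no information about the true label.
        return int(rng.integers(n_classes))

    def weak_worker_answer(true_label, difficulty, ability, rng):
        # Poorly performing but non-spamming worker: accuracy decreases
        # with task difficulty, so the worker stays useful on easy tasks.
        p_correct = 1.0 / (1.0 + np.exp(difficulty - ability))
        if rng.random() < p_correct:
            return true_label
        wrong = [k for k in range(n_classes) if k != true_label]
        return int(rng.choice(wrong))

    # Simulate five votes of each type on one easy and one hard task.
    true_label = 3
    for difficulty in (-2.0, 2.0):  # easy task, then hard task
        weak = [weak_worker_answer(true_label, difficulty, ability=0.5, rng=rng)
                for _ in range(5)]
        spam = [spammer_answer(true_label, rng) for _ in range(5)]
        print(f"difficulty={difficulty:+.1f}  weak={weak}  spam={spam}")

On the easy task the weak worker's votes concentrate on the true label while the spammer's remain uniform; separating these two behaviours from the observed answers alone is precisely the kind of question such a simulation framework makes testable.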
Main file: jds22.pdf (1.03 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04562503, version 1 (29-04-2024)

Identifiers

  • HAL Id: hal-04562503, version 1

Cite

Tanguy Lefort, Benjamin Charlier, Alexis Joly, Joseph Salmon. Crowdsourcing label noise simulation on image classification tasks. JDS 2022 - 53es Journées de Statistique, Jun 2022, Lyon, France. ⟨hal-04562503⟩