Journal article, Transactions on Machine Learning Research Journal, Year: 2024

Identify ambiguous tasks combining crowdsourced labels by weighting Areas Under the Margin

Abstract

In supervised learning, for instance in image classification, modern massive datasets are commonly labeled by a crowd of workers. The labels obtained in this crowdsourcing setting are then aggregated for training. The aggregation step generally leverages a per-worker trust score. Yet such worker-centric approaches discard each task's ambiguity. Some intrinsically ambiguous tasks might even fool expert workers, which could ultimately be harmful to the learning step. In a standard supervised learning setting, with one label per task, the Area Under the Margin (AUM) is tailored to identify mislabeled data. We adapt the AUM to identify ambiguous tasks in crowdsourced learning scenarios, introducing the Weighted AUM (WAUM). The WAUM is an average of AUMs weighted by task-dependent scores. We show that the WAUM can help discard ambiguous tasks from the training set, leading to better generalization or calibration performance. We report improvements over existing strategies for learning from a crowd, both in simulated settings and on the CIFAR-10H, LabelMe and Music crowdsourced datasets.
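To make the weighted-average construction concrete, here is a minimal sketch, not the authors' reference implementation. It assumes that the AUM of a (task, label) pair is the margin, assigned-label logit minus the largest other logit, averaged over training epochs, and that the WAUM of a task averages per-worker AUMs with normalized task-dependent weights; the `worker_weights` scores and the shared per-task logits are hypothetical simplifications.

```python
import numpy as np


def aum(logits_over_epochs: np.ndarray, label: int) -> float:
    """Average margin of `label` over epochs.

    logits_over_epochs: array of shape (n_epochs, n_classes).
    """
    assigned = logits_over_epochs[:, label]
    # Largest logit among the other classes, per epoch.
    others = np.delete(logits_over_epochs, label, axis=1).max(axis=1)
    return float((assigned - others).mean())


def waum(logits_over_epochs: np.ndarray,
         worker_labels: dict[int, int],
         worker_weights: dict[int, float]) -> float:
    """Weighted average of per-worker AUMs for one task (sketch of the WAUM idea)."""
    weights = np.array([worker_weights[j] for j in worker_labels])
    weights = weights / weights.sum()  # normalize the task-dependent scores
    aums = np.array([aum(logits_over_epochs, y) for y in worker_labels.values()])
    return float((weights * aums).sum())


# Toy usage: 3 classes, 4 recorded epochs, two workers disagreeing on the label.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
labels = {0: 1, 1: 2}      # worker id -> proposed label
scores = {0: 0.8, 1: 0.2}  # hypothetical task-dependent trust scores
print(waum(logits, labels, scores))  # low WAUM values flag ambiguous tasks
```

In this sketch, tasks whose WAUM falls below a chosen threshold would be pruned from the training set before aggregation; the paper's exact margin, weighting scheme and thresholding may differ.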

Dates and versions

hal-03812716, version 1 (12-10-2022)

Identifiers

Cite

Tanguy Lefort, Benjamin Charlier, Alexis Joly, Joseph Salmon. Identify ambiguous tasks combining crowdsourced labels by weighting Areas Under the Margin. Transactions on Machine Learning Research Journal, 2024. ⟨hal-03812716⟩