Crowdsourcing Thousands of Specialized Labels: A Bayesian Active Training Approach
Abstract
Large-scale annotated corpora have yielded impressive performance improvements in computer vision and multimedia content analysis. However, such datasets depend on an enormous amount of human labeling effort. When the labels correspond to well-known concepts, it is straightforward to train annotators by giving them a few examples with known answers, and it is equally straightforward to judge the quality of their labels. Neither is true when there are thousands of complex, domain-specific labels: training on all labels is infeasible, and the quality of an annotator's judgements may be vastly different for some subsets of labels than for others. This paper proposes a set of data-driven algorithms to 1) train image annotators to disambiguate among automatically generated candidate labels, 2) evaluate the quality of annotators' label suggestions, and 3) weight their predictions when combining them. The algorithms adapt to the skills of each annotator, both in the questions asked and in the weights given to their answers. The underlying judgements are Bayesian, based on adaptive priors. We measure the benefits of these algorithms in a live user experiment on image-based plant identification involving around 1000 people. The proposed methods are shown to enable huge gains in annotation accuracy: a standard user can correctly label around 2% of our data, which rises to 80% with machine-learning-assisted training and assignment, and to almost 90% when several annotators' labels are combined in a weighted fashion.
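As a rough illustration of how such a Bayesian weighting scheme could look (the model, priors, grouping, and names below are assumptions for illustration, not the paper's exact formulation), each annotator's accuracy can be tracked per group of labels with a Beta prior updated from questions with known answers, and labels can then be combined by weighting each annotator's vote with their posterior mean accuracy:

```python
# Illustrative sketch only: a Beta-Bernoulli model of per-annotator accuracy
# and a posterior-weighted vote. This is an assumed formulation, not the
# paper's actual algorithm.
from collections import defaultdict

class AnnotatorModel:
    def __init__(self, alpha=1.0, beta=1.0):
        # Adaptive Beta prior over the probability that this annotator's
        # label is correct, kept separately per group of labels (e.g. a
        # plant genus), since skill can vary across label subsets.
        self.alpha = defaultdict(lambda: alpha)
        self.beta = defaultdict(lambda: beta)

    def update(self, group, correct):
        # Update the posterior from a question with a known answer.
        if correct:
            self.alpha[group] += 1
        else:
            self.beta[group] += 1

    def accuracy(self, group):
        # Posterior mean probability of a correct label in this group.
        return self.alpha[group] / (self.alpha[group] + self.beta[group])

def combine_labels(votes, models, group):
    # Weighted combination of several annotators' labels: each vote is
    # weighted by that annotator's estimated accuracy for the label group.
    scores = defaultdict(float)
    for annotator_id, label in votes:
        scores[label] += models[annotator_id].accuracy(group)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    models = {"a1": AnnotatorModel(), "a2": AnnotatorModel()}
    # Training / evaluation phase on items with known answers.
    models["a1"].update("Quercus", correct=True)
    models["a1"].update("Quercus", correct=True)
    models["a2"].update("Quercus", correct=False)
    # Aggregation phase on a new item.
    votes = [("a1", "Quercus robur"), ("a2", "Quercus petraea")]
    print(combine_labels(votes, models, group="Quercus"))
```

In the same spirit, the per-group posteriors could also drive which training questions to ask next, for example by targeting label groups where an annotator's estimated accuracy is still uncertain; how the paper actually selects questions is not specified in this abstract.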