CISUC

Learning from Multiple Annotators: Distinguishing Good from Random Labelers

Authors

Abstract

With the increasing popularity of online crowdsourcing platforms such as Amazon Mechanical Turk (AMT), building supervised learning models for datasets with multiple annotators is receiving increasing attention from researchers. These platforms provide an inexpensive and accessible resource for obtaining labeled data, and in many situations the quality of the labels competes directly with that of experts. For these reasons, much attention has recently been given to annotator-aware models. In this paper, we propose a new probabilistic model for supervised learning with multiple annotators in which the reliability of the different annotators is treated as a latent variable. We empirically show that this model achieves state-of-the-art performance while reducing the number of model parameters, thus avoiding potential overfitting. Furthermore, the proposed model is easier to implement and to extend to other classes of learning problems, such as sequence labeling tasks.
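The abstract does not spell out the model equations, but the general recipe it refers to (a latent-variable model over the annotators' reliabilities, trained with Expectation-Maximization on top of logistic regression) can be illustrated with a short sketch. The code below is only an illustration under simplifying assumptions, not the authors' exact formulation: each annotator gets a single reliability parameter (the probability of reporting the true binary label), the E-step computes a posterior over the unknown true labels, and the M-step refits a logistic-regression classifier and the reliabilities. The function and variable names (em_multi_annotator_logreg, alpha, mu) are made up for this sketch and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def em_multi_annotator_logreg(X, Y, n_iter=50, tol=1e-6):
    """EM sketch for binary classification from multiple noisy annotators.

    X : (n_samples, n_features) design matrix.
    Y : (n_samples, n_annotators) array of 0/1 labels, with np.nan where
        an annotator did not label an instance (each instance is assumed
        to have at least one label).
    Returns logistic-regression weights, per-annotator reliability
    estimates, and the posterior probabilities of the true labels.
    """
    n, d = X.shape
    m = Y.shape[1]
    w = np.zeros(d)
    alpha = np.full(m, 0.8)        # initial reliability guess per annotator
    mu = np.nanmean(Y, axis=1)     # initial posterior P(y_i = 1): soft majority vote

    for _ in range(n_iter):
        # M-step (classifier): refit logistic regression on the soft labels mu.
        def nll(w_vec):
            p = sigmoid(X @ w_vec)
            eps = 1e-12
            return -np.sum(mu * np.log(p + eps) + (1 - mu) * np.log(1 - p + eps))
        w = minimize(nll, w, method="L-BFGS-B").x

        # M-step (annotators): reliability = expected agreement with the latent truth.
        for j in range(m):
            obs = ~np.isnan(Y[:, j])
            agree = mu[obs] * Y[obs, j] + (1 - mu[obs]) * (1 - Y[obs, j])
            alpha[j] = np.clip(agree.sum() / obs.sum(), 1e-3, 1 - 1e-3)

        # E-step: posterior over the true label given the classifier and the labels.
        p = sigmoid(X @ w)
        log_pos = np.log(p + 1e-12)
        log_neg = np.log(1 - p + 1e-12)
        for j in range(m):
            obs = ~np.isnan(Y[:, j])
            yj = Y[obs, j]
            log_pos[obs] += yj * np.log(alpha[j]) + (1 - yj) * np.log(1 - alpha[j])
            log_neg[obs] += (1 - yj) * np.log(alpha[j]) + yj * np.log(1 - alpha[j])
        mu_new = 1.0 / (1.0 + np.exp(log_neg - log_pos))

        if np.max(np.abs(mu_new - mu)) < tol:
            mu = mu_new
            break
        mu = mu_new

    return w, alpha, mu
```

In this simplified setting, a reliability close to 0.5 marks an annotator whose labels behave like random guesses, while values near 1.0 mark reliable annotators, which is the distinction between good and random labelers that the title refers to.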

Keywords

Multiple Annotators, Crowdsourcing, Latent Variable Models, Expectation-Maximization, Logistic Regression

Subject

Machine learning

Related Project

Crowds - Understanding urban land use from digital footprints of crowds

Journal

Pattern Recognition Letters, Elsevier, December 2013

Cited by

Year 2016 : 1 citation

 C Long, G Hua, A Kapoor, A joint Gaussian process model for active visual recognition with expertise estimation in crowdsourcing, International Journal of Computer Vision, 2016

Year 2015 : 4 citations

 ED Simpson, M Venanzi, S Reece, P Kohli…, Language Understanding in the Wild: Combining Crowdsourcing and Machine Learning, Proceedings of the 24th …, 2015

 YE Kara, G Genc, O Aran, L Akarun, Modeling annotator behaviors for crowd labeling, Neurocomputing, 2015

 M Venanzi, O Parson, A Rogers, N Jennings, The ActiveCrowdToolkit: An Open-Source Tool for Benchmarking Active Learning Algorithms for Crowdsourcing Research, Third AAAI Conference on …, 2015

 A Fuddoly, J Jaafar, N Zamin, News Classification with Human Annotators: A Case Study, Jurnal Teknologi, 2015

Year 2014 : 2 citations

 A Tarasov, SJ Delany, B Mac Namee, Dynamic estimation of worker reliability in crowdsourcing for regression tasks: Making it work, Expert Systems with Applications, 2014

 A Tarasov, Dynamic Estimation of Rater Reliability using Multi-Armed Bandits, Publication/NA, 2014

Year 2013 : 1 citation

 L Kinley, Towards the use of Citizen Sensor Information as an Ancillary Tool for the Thematic Classification of Ecological Phenomena, Proceedings of the 2nd AGILE (Association of …, 2013