Openly accessible

Distinguishing question subjectivity from difficulty for improved crowdsourcing

Jin, Yuan, Carman, Mark, Zhu, Ye and Buntine, Wray 2018, Distinguishing question subjectivity from difficulty for improved crowdsourcing, in ACML 2018 : Proceedings of the 10th Asian Conference on Machine Learning Research, MIT Press, Cambridge, Mass., pp. 192-207.


Title Distinguishing question subjectivity from difficulty for improved crowdsourcing
Author(s) Jin, Yuan
Carman, Mark
Zhu, Ye (ORCID: orcid.org/0000-0003-4776-4932)
Buntine, Wray
Conference name Machine Learning. Asian Conference (10th : 2018 : Beijing, China)
Conference location Beijing, China
Conference dates 2018/11/14 - 2018/11/16
Title of proceedings ACML 2018 : Proceedings of the 10th Asian Conference on Machine Learning Research
Publication date 2018
Start page 192
End page 207
Total pages 16
Publisher MIT Press
Place of publication Cambridge, Mass.
Keyword(s) Crowdsourcing
Subjectivity
Difficulty
Statistical modelling
Summary The questions in a crowdsourcing task typically exhibit varying degrees of difficulty and subjectivity. Their joint effects give rise to the variation in responses to the same question by different crowd-workers. This variation is low when the question is easy to answer and objective, and high when it is difficult and subjective. Unfortunately, current quality control methods for crowdsourcing consider only the question difficulty to account for the variation. As a result, these methods cannot distinguish workers' personal preferences for different correct answers of a partially subjective question from their ability to avoid objectively incorrect answers for that question. To address this issue, we present a probabilistic model which (i) explicitly encodes question difficulty as a model parameter and (ii) implicitly encodes question subjectivity via latent preference factors for crowd-workers. We show that question subjectivity induces grouping of crowd-workers, revealed through clustering of their latent preferences. Moreover, we develop a quantitative measure for the question subjectivity. Experiments show that our model (1) improves both the question true answer prediction and the unseen worker response prediction, and (2) can potentially provide rankings of questions coherent with human assessment in terms of difficulty and subjectivity.
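The core idea in the abstract — that difficulty drives objective errors while subjectivity drives clustered disagreement among correct answers — can be illustrated with a toy simulation. This is not the paper's model; it is a minimal sketch under assumed settings (two worker preference clusters, a fixed set of "correct" versus "wrong" options per question, uniformly drawn difficulties) showing how latent preference factors make same-cluster workers agree far more often than cross-cluster workers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_questions, n_options = 30, 20, 4

# Per-question difficulty in (0, 1): the chance a worker gives an
# objectively wrong answer (options 2 and 3 in this toy setup).
difficulty = rng.uniform(0.05, 0.5, size=n_questions)

# Hypothetical latent preference factor: each worker favours one of two
# acceptable answers (options 0 and 1) on every partially subjective question.
preference = rng.integers(0, 2, size=n_workers)

def sample_response(w, q):
    """Simulate one worker's answer to one question."""
    if rng.random() < difficulty[q]:
        return int(rng.integers(2, n_options))  # objective error
    return int(preference[w])                   # preferred correct answer

responses = np.array([[sample_response(w, q)
                       for q in range(n_questions)]
                      for w in range(n_workers)])

# Subjectivity surfaces as worker clustering: pairs sharing a latent
# preference agree much more often than pairs from different clusters.
pairs = [(i, j) for i in range(n_workers) for j in range(n_workers) if i != j]
same = np.mean([np.mean(responses[i] == responses[j])
                for i, j in pairs if preference[i] == preference[j]])
diff = np.mean([np.mean(responses[i] == responses[j])
                for i, j in pairs if preference[i] != preference[j]])
print(f"same-cluster agreement: {same:.2f}, cross-cluster: {diff:.2f}")
```

In this sketch, a quality-control method that models only difficulty would treat the cross-cluster disagreement as error, whereas the preference factors separate it cleanly from the objectively wrong answers.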
ISSN 2640-3498
Language eng
Indigenous content off
HERDC Research category E1 Full written paper - refereed
Copyright notice ©2018, Y. Jin, M. Carman, Y. Zhu, W. Buntine
Free to Read? Yes
Persistent URL http://hdl.handle.net/10536/DRO/DU:30123307

Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.

Created: Wed, 26 Jun 2019, 11:21:06 EST
