TY - GEN
T1 - Enhancing Collective Estimates by Aggregating Cardinal and Ordinal Inputs
AU - Kemmer, Ryan
AU - Yoo, Yeawon
AU - Escobedo, Adolfo R.
AU - Maciejewski, Ross
N1 - Publisher Copyright:
© 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2020
Y1 - 2020
N2 - There are many factors that affect the quality of data received from crowdsourcing, including cognitive biases, varying levels of expertise, and varying subjective scales. This work investigates how the elicitation and integration of multiple modalities of input can enhance the quality of collective estimates. We create a crowdsourced experiment where participants are asked to estimate the number of dots within images in two ways: ordinal (ranking) and cardinal (numerical) estimates. We run our study with 300 participants and test how the efficiency of crowdsourced computation is affected when asking participants to provide ordinal and/or cardinal inputs, and how the accuracy of the aggregated outcome is affected when using a variety of aggregation methods. First, we find that more accurate ordinal and cardinal estimates can be achieved by prompting participants to provide both cardinal and ordinal information. Second, we present how accurate collective numerical estimates can be achieved with significantly fewer people when aggregating individual preferences using optimization-based consensus aggregation models. Interestingly, we also find that aggregating cardinal information may yield more accurate ordinal estimates.
AB - There are many factors that affect the quality of data received from crowdsourcing, including cognitive biases, varying levels of expertise, and varying subjective scales. This work investigates how the elicitation and integration of multiple modalities of input can enhance the quality of collective estimates. We create a crowdsourced experiment where participants are asked to estimate the number of dots within images in two ways: ordinal (ranking) and cardinal (numerical) estimates. We run our study with 300 participants and test how the efficiency of crowdsourced computation is affected when asking participants to provide ordinal and/or cardinal inputs, and how the accuracy of the aggregated outcome is affected when using a variety of aggregation methods. First, we find that more accurate ordinal and cardinal estimates can be achieved by prompting participants to provide both cardinal and ordinal information. Second, we present how accurate collective numerical estimates can be achieved with significantly fewer people when aggregating individual preferences using optimization-based consensus aggregation models. Interestingly, we also find that aggregating cardinal information may yield more accurate ordinal estimates.
UR - http://www.scopus.com/inward/record.url?scp=85123520502&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123520502&partnerID=8YFLogxK
U2 - 10.1609/hcomp.v8i1.7465
DO - 10.1609/hcomp.v8i1.7465
M3 - Conference contribution
AN - SCOPUS:85123520502
SN - 9781577358480
T3 - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
SP - 73
EP - 82
BT - HCOMP 2020 - Proceedings of the 8th AAAI Conference on Human Computation and Crowdsourcing
A2 - Aroyo, Lora
A2 - Simperl, Elena
PB - Association for the Advancement of Artificial Intelligence
T2 - 8th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2020
Y2 - 25 October 2020 through 29 October 2020
ER -