TY - GEN
T1 - Supporting answerers with feedback in social Q&A
AU - Frens, John
AU - Walker, Erin
AU - Hsieh, Gary
N1 - Publisher Copyright:
© 2017 Association for Computing Machinery. All rights reserved.
PY - 2018/6/26
Y1 - 2018/6/26
N2 - Prior research has examined the use of Social Question and Answer (Q&A) websites for answer and help seeking. However, the potential for these websites to support domain learning has not yet been realized. Helping users write effective answers can be beneficial for subject area learning for both answerers and the recipients of answers. In this study, we examine the utility of crowdsourced, criteria-based feedback for answerers on a student-centered Q&A website, Brainly.com. In an experiment with 55 users, we compared perceptions of the current rating system against two feedback designs with explicit criteria (Appropriate, Understandable, and Generalizable). Contrary to our hypotheses, answerers disagreed with and rejected the criteria-based feedback. Although the criteria aligned with answerers' goals, and crowdsourced ratings were found to be objectively accurate, the norms and expectations for answers on Brainly conflicted with our design. We conclude with implications for the design of feedback in social Q&A.
AB - Prior research has examined the use of Social Question and Answer (Q&A) websites for answer and help seeking. However, the potential for these websites to support domain learning has not yet been realized. Helping users write effective answers can be beneficial for subject area learning for both answerers and the recipients of answers. In this study, we examine the utility of crowdsourced, criteria-based feedback for answerers on a student-centered Q&A website, Brainly.com. In an experiment with 55 users, we compared perceptions of the current rating system against two feedback designs with explicit criteria (Appropriate, Understandable, and Generalizable). Contrary to our hypotheses, answerers disagreed with and rejected the criteria-based feedback. Although the criteria aligned with answerers' goals, and crowdsourced ratings were found to be objectively accurate, the norms and expectations for answers on Brainly conflicted with our design. We conclude with implications for the design of feedback in social Q&A.
KW - Crowd Assessment
KW - Feedback
KW - Informal Learning
KW - Peer Help
UR - http://www.scopus.com/inward/record.url?scp=85051555343&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85051555343&partnerID=8YFLogxK
U2 - 10.1145/3231644.3231653
DO - 10.1145/3231644.3231653
M3 - Conference contribution
AN - SCOPUS:85051555343
T3 - Proceedings of the 5th Annual ACM Conference on Learning at Scale, L@S 2018
BT - Proceedings of the 5th Annual ACM Conference on Learning at Scale, L@S 2018
PB - Association for Computing Machinery, Inc
T2 - 5th Annual ACM Conference on Learning at Scale, L@S 2018
Y2 - 26 June 2018 through 28 June 2018
ER -