TY - JOUR
T1 - Presentation, expectations, and experience
T2 - Sources of student perceptions of automated writing evaluation
AU - Roscoe, Rod
AU - Wilson, Joshua
AU - Johnson, Adam C.
AU - Mayra, Christopher R.
N1 - Funding Information:
This research was supported in part by a grant from the Institute of Education Sciences (IES R305A120707). Opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the Institute of Education Sciences. The authors would like to thank Danielle McNamara, Laura Allen, Matthew Jacovina, and Jianmin Dai for their input and support.
Publisher Copyright:
© 2017 Elsevier Ltd
PY - 2017/5/1
Y1 - 2017/5/1
N2 - Automated writing evaluation (AWE) is a popular form of educational technology designed to supplement writing instruction and feedback, yet research on the effectiveness of AWE has observed mixed findings. The current study considered how students’ perceptions of automated essay scoring and feedback influenced their writing performance, revising behaviors, and future intentions toward the technology. The manner in which the software was presented (claims about the accuracy and quality of the automated scoring and feedback) was modestly related to students’ expectations and perceptions. However, students’ direct experiences with the software were most strongly associated with their perceptions. Importantly, students’ perceptions seemed to have minimal impact on their “in the moment” use of the software to write and revise successfully. Students revised and improved their essays regardless of their positive or negative views of the system. However, positive and negative perceptions significantly predicted future intentions to use the software again or to recommend the software to a friend. Implications for AWE design, implementation, and evaluation are discussed.
AB - Automated writing evaluation (AWE) is a popular form of educational technology designed to supplement writing instruction and feedback, yet research on the effectiveness of AWE has observed mixed findings. The current study considered how students’ perceptions of automated essay scoring and feedback influenced their writing performance, revising behaviors, and future intentions toward the technology. The manner in which the software was presented (claims about the accuracy and quality of the automated scoring and feedback) was modestly related to students’ expectations and perceptions. However, students’ direct experiences with the software were most strongly associated with their perceptions. Importantly, students’ perceptions seemed to have minimal impact on their “in the moment” use of the software to write and revise successfully. Students revised and improved their essays regardless of their positive or negative views of the system. However, positive and negative perceptions significantly predicted future intentions to use the software again or to recommend the software to a friend. Implications for AWE design, implementation, and evaluation are discussed.
KW - Automated writing evaluation
KW - Formative feedback
KW - Human-computer interaction
KW - Technology adoption
KW - User experience
KW - Writing
UR - http://www.scopus.com/inward/record.url?scp=85008952998&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85008952998&partnerID=8YFLogxK
U2 - 10.1016/j.chb.2016.12.076
DO - 10.1016/j.chb.2016.12.076
M3 - Article
AN - SCOPUS:85008952998
SN - 0747-5632
VL - 70
SP - 207
EP - 221
JO - Computers in Human Behavior
JF - Computers in Human Behavior
ER -