Using automatic scoring models to detect changes in student writing in an intelligent tutoring system

Scott A. Crossley, Rod Roscoe, Danielle McNamara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This study compares automated scoring increases and linguistic changes for student writers in two groups: a group that used an intelligent tutoring system embedded with an automated writing evaluation component (Writing Pal) and a group that used only the automated writing evaluation component. The primary goal is to examine automated scoring differences between the two groups from pretest to posttest essays in order to investigate score gains and linguistic development. The study finds that both groups show significant increases in automated writing scores and significant development in lexical, syntactic, cohesion, and rhetorical features. However, the Writing Pal group shows greater raw frequency gains (i.e., negative vs. positive gains).
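The abstract describes comparing automated essay scores from pretest to posttest within each group. The paper's own scoring models and analyses are not reproduced in this record, so the following is only a minimal sketch of what such a pretest-to-posttest comparison could look like; the scores, variable names, and the choice of a paired-samples t-test are illustrative assumptions, not the authors' reported method.

# Minimal sketch (assumed, not from the paper): paired pretest/posttest
# comparison of automated essay scores for one group of students.
from scipy import stats

# Hypothetical automated scores on a 1-6 scale; index i is the same student
# at pretest and at posttest.
pretest = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0]
posttest = [3.2, 3.4, 2.9, 3.8, 3.1, 3.3]

# Paired-samples t-test: did automated scores increase from pretest to posttest?
result = stats.ttest_rel(posttest, pretest)
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

print(f"mean gain = {mean_gain:.2f}, t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

An analogous within-group comparison could be run separately for the Writing Pal group and the automated-writing-evaluation-only group, and for individual linguistic features rather than holistic scores.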

Original language: English (US)
Title of host publication: FLAIRS 2013 - Proceedings of the 26th International Florida Artificial Intelligence Research Society Conference
Pages: 208-213
Number of pages: 6
State: Published - 2013
Event: 26th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2013 - St. Pete Beach, FL, United States
Duration: May 22, 2013 - May 24, 2013

Publication series

Name: FLAIRS 2013 - Proceedings of the 26th International Florida Artificial Intelligence Research Society Conference

Other

Other: 26th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2013
Country/Territory: United States
City: St. Pete Beach, FL
Period: 5/22/13 - 5/24/13

ASJC Scopus subject areas

  • Artificial Intelligence

