Predicting affective states expressed through an emote-aloud procedure from AutoTutor’s mixed-initiative dialogue

Sidney K. D’Mello, Scotty D. Craig, Jeremiah Sullins, Arthur C. Graesser

Research output: Contribution to journal › Article › peer-review

116 Scopus citations


This paper investigates how frequent conversation patterns from a mixed-initiative dialogue with an intelligent tutoring system, AutoTutor, can significantly predict users’ affective states (e.g. confusion, eureka, frustration). This study adopted an emote-aloud procedure in which participants were recorded as they verbalized their affective states while interacting with AutoTutor. The tutor-tutee interaction was coded on scales of conversational directness (the amount of information provided by the tutor to the learner, with a theoretical ordering of assertion > prompt for particular information > hint), feedback (positive, neutral, negative), and content coverage scores for each student contribution obtained from the tutor’s log files. Correlation and regression analyses confirmed the hypothesis that dialogue features could significantly predict the affective states of confusion, eureka, and frustration. Standard classification techniques were used to assess the reliability of the automatic detection of learners’ affect from the conversation features. We discuss the prospects of extending AutoTutor into an affect-sensing intelligent tutoring system.
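The analysis described above codes each tutor turn on an ordinal directness scale (assertion > prompt > hint), a feedback valence, and a content coverage score, then correlates these dialogue features with the learner's emote-aloud affect labels. As a minimal illustrative sketch, the snippet below computes a Pearson correlation between one such feature and a binary confusion label; the coding values and all data are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the dialogue-feature coding and correlation analysis
# outlined in the abstract. All numeric codings and turns are invented.

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Directness coded on the theoretical ordering assertion > prompt > hint
# (numeric values are an assumption, chosen only to preserve the ordering).
DIRECTNESS = {"assertion": 3, "prompt": 2, "hint": 1}
FEEDBACK = {"positive": 1, "neutral": 0, "negative": -1}

# Invented turns: (directness, feedback, content coverage, confusion label).
turns = [
    ("hint", "negative", 0.2, 1),
    ("hint", "neutral", 0.3, 1),
    ("prompt", "negative", 0.4, 1),
    ("prompt", "positive", 0.7, 0),
    ("assertion", "positive", 0.9, 0),
    ("assertion", "neutral", 0.8, 0),
]

coverage = [t[2] for t in turns]
confusion = [t[3] for t in turns]
print(round(pearson(coverage, confusion), 2))
```

In this toy sample, low content coverage co-occurs with reported confusion, yielding a strong negative correlation; the paper's regression and classification analyses extend the same idea to multiple features and affect categories.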

Original language: English (US)
Pages (from-to): 3-28
Number of pages: 26
Journal: International Journal of Artificial Intelligence in Education
Issue number: 1
State: Published - 2006
Externally published: Yes


Keywords

  • Affect detection
  • AutoTutor
  • Dialogue patterns
  • Intelligent Tutoring Systems

ASJC Scopus subject areas

  • Education
  • Computational Theory and Mathematics


