Abstract

Recent years have seen a growing need in the affective computing community to understand emotions beyond the seven basic expressions, leading to explorations of a continuous emotion space spanned by dimensions such as valence and arousal. While there has been substantial research on identifying facial Action Units as building blocks for the basic expressions, there is a new need to discover fine-grained facial descriptors that can explain variations along the continuum of emotions. We propose a methodology to extract Latent Facial Topics (LFTs) from facial videos by adapting the Latent Dirichlet Allocation and supervised Latent Dirichlet Allocation topic models for facial affect analysis. In this work, we study the application of topic models to both discrete emotion recognition and continuous emotion prediction tasks. We show that meaningful and visualizable LFTs can be extracted and used successfully for emotion recognition. We report recognition results on the widely known Cohn-Kanade Plus and AVEC 2012 FCSC challenge data sets, which show promise for both discrete and continuous emotion recognition problems.
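The abstract describes adapting Latent Dirichlet Allocation to facial videos: local facial descriptors are treated as "words" and each video as a "document," so that LDA recovers topics (LFTs) as distributions over descriptors. The sketch below illustrates that bag-of-visual-words formulation with scikit-learn's LDA on synthetic count data; the descriptor quantization, vocabulary size, and topic count are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Hypothetical bag-of-visual-words matrix: each row is one facial video,
# each column the count of a quantized local facial descriptor ("visual word").
n_videos, vocab_size = 40, 50
counts = rng.poisson(lam=2.0, size=(n_videos, vocab_size))

# Fit LDA: each of the 5 topics plays the role of a Latent Facial Topic,
# i.e. a distribution over the descriptor vocabulary.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)  # per-video topic proportions (rows sum to 1)

print(theta.shape)            # (40, 5)  - topic mixture per video
print(lda.components_.shape)  # (5, 50)  - topic-word weights
```

The per-video topic proportions `theta` would then serve as a compact feature vector for a downstream emotion classifier or regressor; the supervised LDA variant mentioned in the abstract instead learns topics jointly with the emotion labels.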

Original language: English (US)
Title of host publication: Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013
DOIs
State: Published - 2013
Event: 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013 - San Jose, CA, United States
Duration: Jul 15 2013 - Jul 19 2013

Publication series

Name: Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013

Keywords

  • Emotion Recognition
  • Facial Descriptors
  • Topic models

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
