TY - GEN
T1 - Multimodal emotion recognition using deep learning architectures
AU - Ranganathan, Hiranmayi
AU - Chakraborty, Shayok
AU - Panchanathan, Sethuraman
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/5/23
Y1 - 2016/5/23
N2 - Emotion analysis and recognition have become an interesting topic of research in the computer vision community. In this paper, we first present the emoF-BVP database of multimodal (face, body gesture, voice, and physiological signal) recordings of actors enacting various expressions of emotions. The database consists of audio and video sequences of actors displaying three different intensities of expressions of 23 different emotions, along with facial feature tracking, skeletal tracking, and the corresponding physiological data. Next, we describe four deep belief network (DBN) models and show that these models generate robust multimodal features for emotion classification in an unsupervised manner. Our experimental results show that the DBN models outperform state-of-the-art methods for emotion recognition. Finally, we propose convolutional deep belief network (CDBN) models that learn salient multimodal features of expressions of emotions. Our CDBN models achieve better recognition accuracies than state-of-the-art methods when recognizing low-intensity or subtle expressions of emotions.
AB - Emotion analysis and recognition have become an interesting topic of research in the computer vision community. In this paper, we first present the emoF-BVP database of multimodal (face, body gesture, voice, and physiological signal) recordings of actors enacting various expressions of emotions. The database consists of audio and video sequences of actors displaying three different intensities of expressions of 23 different emotions, along with facial feature tracking, skeletal tracking, and the corresponding physiological data. Next, we describe four deep belief network (DBN) models and show that these models generate robust multimodal features for emotion classification in an unsupervised manner. Our experimental results show that the DBN models outperform state-of-the-art methods for emotion recognition. Finally, we propose convolutional deep belief network (CDBN) models that learn salient multimodal features of expressions of emotions. Our CDBN models achieve better recognition accuracies than state-of-the-art methods when recognizing low-intensity or subtle expressions of emotions.
UR - http://www.scopus.com/inward/record.url?scp=84977623462&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84977623462&partnerID=8YFLogxK
U2 - 10.1109/WACV.2016.7477679
DO - 10.1109/WACV.2016.7477679
M3 - Conference contribution
AN - SCOPUS:84977623462
T3 - 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
BT - 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Y2 - 7 March 2016 through 10 March 2016
ER -