FUSION OF VISIBLE IMAGES AND THERMAL IMAGE SEQUENCES FOR AUTOMATED FACIAL EMOTION ESTIMATION

Authors

  • HUNG NGUYEN Japan Advanced Institute of Science and Technology 1-1 Asahidai, Nomi, Ishikawa, Japan
  • FAN CHEN Japan Advanced Institute of Science and Technology 1-1 Asahidai, Nomi, Ishikawa, Japan
  • KAZUNORI KOTANI Japan Advanced Institute of Science and Technology 1-1 Asahidai, Nomi, Ishikawa, Japan
  • BAC LE University of Science, VNU - HCMC, Vietnam 227 Nguyen Van Cu, Ho Chi Minh city, Vietnam

Keywords:

facial emotions, thermal images, emotion estimation, feature fusion, decision fusion, thermal image sequences, t-PCA, n-EMC, KTFE database

Abstract

The visible image-based approach has long been considered the most powerful approach to facial emotion estimation. However, it is illumination dependent: under uncontrolled operating conditions, estimation accuracy degrades significantly. In this paper, we focus on integrating visible images with thermal image sequences for facial emotion estimation. First, to address limitations of thermal infrared (IR) images, such as their opacity to eyeglasses, we apply thermal Regions of Interest (t-ROIs) to sequences of thermal images. Then, a wavelet transform is applied to the visible images. Second, features are selected from the visible and thermal features and fused. Third, decision fusion is applied using the conventional methods, Principal Component Analysis (PCA) and the Eigen-space Method based on class-features (EMC), as well as our proposed methods, thermal Principal Component Analysis (t-PCA) and the norm Eigen-space Method based on class-features (n-EMC). Experiments on the Kotani Thermal Facial Emotion (KTFE) database show that the proposed methods yield significant improvement, proving their effectiveness.
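To make the fusion pipeline described above concrete, the sketch below illustrates the feature-level stage under stated assumptions: visible features are taken as the low-frequency wavelet coefficients (computed with PyWavelets), thermal features are mean temperatures over hypothetical t-ROIs, and a standard PCA projection stands in for the paper's t-PCA and n-EMC stages, whose implementations are not reproduced here. All function names and ROI coordinates are illustrative, not the authors' released code.

# A minimal sketch of the feature-fusion stage, assuming PyWavelets and
# scikit-learn are available. The t-ROI coordinates and the final PCA step
# are placeholders for the paper's t-PCA / n-EMC methods.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def visible_features(img_gray):
    """Wavelet-transform a visible grayscale face image and keep the
    low-frequency approximation coefficients as features."""
    cA, (cH, cV, cD) = pywt.dwt2(img_gray, "haar")
    return cA.ravel()

def thermal_features(seq, rois):
    """Average temperature inside each thermal ROI (t-ROI) over a
    sequence of thermal frames; ROIs are (top, bottom, left, right)."""
    return np.asarray([seq[:, t:b, l:r].mean() for (t, b, l, r) in rois])

def fuse_and_project(vis_imgs, thermal_seqs, rois, n_components=20):
    """Feature-level fusion by concatenation, followed by a PCA projection
    standing in for the paper's t-PCA stage."""
    X = np.vstack([
        np.concatenate([visible_features(v), thermal_features(s, rois)])
        for v, s in zip(vis_imgs, thermal_seqs)
    ])
    pca = PCA(n_components=min(n_components, X.shape[0], X.shape[1]))
    return pca.fit_transform(X)

if __name__ == "__main__":
    # Toy data: 10 samples, 64x64 visible images, 5-frame 32x32 thermal sequences.
    rng = np.random.default_rng(0)
    vis = [rng.random((64, 64)) for _ in range(10)]
    thermo = [rng.random((5, 32, 32)) for _ in range(10)]
    rois = [(0, 8, 0, 8), (8, 16, 8, 16)]  # hypothetical t-ROIs (e.g., forehead, nose)
    print(fuse_and_project(vis, thermo, rois).shape)

Concatenation is the simplest form of feature-level fusion; decision-level fusion, by contrast, combines the outputs of per-modality classifiers, which is where the paper's t-PCA and n-EMC variants are applied.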

 


References

Z. Zeng, M. Pantic, G. T. Roisman, and T. S. Huang (2009), A survey of affect recognition methods: Audio, visual, and spontaneous expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31 (1), pp. 39-58.
B. Fasel and J. Luettin (2003), Automatic facial expression analysis: a survey, Pattern Recognition, Vol. 36 (1), pp. 259-275.
M. Pantic and L. J. M. Rothkrantz (2000), Automatic analysis of facial expressions: The state of the art, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, pp. 1424-1445.
S. Jarlier, D. Grandjean, S. Delplanque, K. N'Diaye, I. Cayeux, M. Velazco, D. Sander, P. Vuilleumier, and K. Scherer (2011), Thermal Analysis of Facial Muscles Contractions, IEEE Transactions on Affective Computing, Vol. 2 (1), pp. 2-9.
M. M. Khan, R. D. Ward, and M. Ingleby (2009), Classifying pretended and evoked facial expression of positive and negative affective states using infrared measurement of skin temperature, ACM Transactions on Applied Perception, Vol. 6 (1), pp. 1-22.
L. Trujillo, G. Olague, R. Hammoud, and B. Hernandez (2005), Automatic feature localization in thermal images for facial expression recognition, IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, p. 14.
B. Hernández, G. Olague, R. Hammoud, L. Trujillo, and E. Romero (2007), Visual learning of texture descriptors for facial expression recognition in thermal imagery, Computer Vision and Image Understanding, Vol. 106, pp. 258-269.
B. R. Nhan and T. Chau (2010), Classifying affective states using thermal infrared imaging of the human face, IEEE Transactions on Biomedical Engineering, Vol. 57, pp. 979-987.
Y. Yoshitomi, N. Miyawaki, S. Tomita, and S. Kimura (1997), Facial expression recognition using thermal image processing and neural network, Proceedings of the 6th IEEE International Workshop on Robot and Human Communication (RO-MAN '97), pp. 380-385.
Y. Yoshitomi (2010), Facial expression recognition for speaker using thermal image processing and speech recognition system, Proceedings of the 10th WSEAS International Conference on Applied Computer Science, pp. 182-186.
Y. Koda, Y. Yoshitomi, M. Nakano, and M. Tabuse (2009), A facial expression recognition for a speaker of a phoneme of vowel using thermal image processing and a speech recognition system, The 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 955-960.
S. Wang, S. He, Y. Wu, M. He, and Q. Ji (2014), Fusion of visible and thermal images for facial expression recognition, Frontiers of Computer Science, Vol. 8 (2), pp. 232-242.
Y. Yoshitomi, S. Kim, T. Kawano, and T. Kitazoe (2000), Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face, Proceedings of the 9th IEEE International Workshop on Robot and Human Interactive Communication, pp. 178-183.
M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies (1992), Image coding using wavelet transform, IEEE Transactions on Image Processing, Vol. 1, pp. 205-220.
D. T. Lin (2006), Facial expression classification using PCA and hierarchical radial basis function network, Journal of Information Science and Engineering, Vol. 22, pp. 1033-1046.
T. Yabui, Y. Kenmochi, and K. Kotani (2003), Facial expression analysis from 3D range images; comparison with the analysis from 2D images and their integration, 2003 International Conference on Image Processing, Vol. 2, pp. 879-882.
H. Nguyen, K. Kotani, F. Chen, and B. Le (2014), A thermal facial emotion database and its analysis, Image and Video Technology, Lecture Notes in Computer Science, Vol. 8333, pp. 397-408.
M. Turk and A. Pentland (1991), Eigenfaces for recognition, Journal of Cognitive Neuroscience, Vol. 3, pp. 71-86.
T. Kurozumi, Y. Shinza, Y. Kenmochi, and K. Kotani (1999), Facial individuality and expression analysis by eigenspace method based on class features or multiple discriminant analysis, 1999 International Conference on Image Processing, Vol. 1, pp. 648-652.
H. Nguyen, F. Chen, K. Kotani, and B. Le (2014), Human emotion estimation using wavelet transform and t-ROIs for fusion of visible images and thermal image sequences, Computational Science and Its Applications - ICCSA 2014, Lecture Notes in Computer Science, Vol. 8584, pp. 224-235.


Published

2014-10-26

How to Cite

NGUYEN, H., CHEN, F., KOTANI, K., & LE, B. (2014). FUSION OF VISIBLE IMAGES AND THERMAL IMAGE SEQUENCES FOR AUTOMATED FACIAL EMOTION ESTIMATION. Journal of Mobile Multimedia, 10(3-4), 294-308. Retrieved from https://journals.riverpublishers.com/index.php/JMM/article/view/4579

Issue

Section

Articles