Real-Time Emotion Classification and Prediction Using a Hybrid Facial Expression Recognition Model Emotion Recognition in Human Resources’ Future

Authors

  • Abhilasha Sharma Sharda University, Greater Noida, U.P., India
  • Usha Tiwari Sharda University, Greater Noida, U.P., India
  • Sushanta Kumar Mandal Adamas University, Barasat, Kolkata, W.B, India

DOI:

https://doi.org/10.13052/jmm1550-4646.21344

Keywords:

DCNN, automation, facial emotion recognition, human-machine interaction, computer vision, artificial intelligence (AI)

Abstract

Human-Computer Interaction (HCI) and psychology both draw on the broad field of expression theory, in which expression recognition is a vital component and a crucial research area across disciplines. This study proposes a hybrid model based on a Deep Convolutional Neural Network (DCNN). The goal is to classify face images into one of seven categories of facial emotion. To improve facial feature extraction and filtering depth, the DCNN used in this study comprises additional convolutional layers, activation functions, and numerous kernels. A Haar cascade classifier was additionally applied to real-time pictures and video frames to detect faces. Images from the FER dataset in the Kaggle repository were used, and training and validation were accelerated with Graphics Processing Unit (GPU) processing. The study employs behavioural features to focus on a person’s mental or emotional state, which can help human resource managers spot emotional engagement within their workforce. It demonstrates the performance of the proposed design and the significance of its implementation in real life.
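The pipeline the abstract describes (convolutional feature extraction, an activation function, and classification into seven emotion categories) can be illustrated with a minimal NumPy sketch. This is not the authors' model; the kernel count, the single convolution layer, and the global-average-pooling head are simplifying assumptions, and the weights here are random rather than trained.

```python
import numpy as np

# The seven FER-2013 emotion categories
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def relu(x):
    return np.maximum(0.0, x)

def conv2d(image, kernel):
    """Valid 2-D convolution of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(face, kernels, w, b):
    # Feature extraction: convolution -> ReLU -> global average pooling
    feats = np.array([relu(conv2d(face, k)).mean() for k in kernels])
    # Classification head: linear layer -> softmax over the 7 classes
    probs = softmax(w @ feats + b)
    return EMOTIONS[int(np.argmax(probs))], probs

# Illustrative random inputs and weights (a trained DCNN would learn these)
rng = np.random.default_rng(0)
face = rng.random((48, 48))            # FER-2013 images are 48x48 grayscale
kernels = rng.standard_normal((8, 3, 3))
w = rng.standard_normal((7, 8))
b = np.zeros(7)
label, probs = classify(face, kernels, w, b)
```

In practice the face crop fed to `classify` would first be located with a Haar cascade detector (e.g. OpenCV's `cv2.CascadeClassifier`) on each video frame, as the study describes.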

This paper also examines the future effects of emotion recognition technology on human resources (HR) practices. As AI and machine learning mature, emotion recognition tools are being used more frequently. While these tools could revolutionize HR by offering fresh ways to gauge employee satisfaction and engagement, they also raise significant privacy and ethical issues. The paper surveys the state of the art alongside recent research on emotion recognition and its potential applications in HR. Emotions influence decisions, and emotional measurement has enormous research applications. Today’s face recognition software can recognize common facial expressions such as happiness, fear, anger, and sadness. Applying emotion analysis and recognition in human resources is compelling; consider hiring and assessment. One example is X0PA AI, which powers its analytics and video interviewing capabilities using Microsoft’s Video Indexer, deriving deep insights from video and audio models, including emotional analysis. The method currently used to gauge emotions is self-report, which can yield inaccurate results because employees can easily manipulate their answers to appear socially desirable. That is why facial recognition software is a useful tool.


Author Biographies

Abhilasha Sharma, Sharda University, Greater Noida, U.P., India

Abhilasha Sharma received the bachelor’s degree in electronics and communication engineering from Kurukshetra University in 2014, and the master’s degree in signal processing and digital design from Delhi Technological University in 2017. She is currently pursuing her research in artificial intelligence and facial emotions. Her research areas include facial emotions, deep learning, and AI.

Usha Tiwari, Sharda University, Greater Noida, U.P., India

Usha Tiwari is currently an Assistant Professor in the Department of EECE, Sharda University. Dr. Tiwari holds a Ph.D. from Jamia Millia Islamia, New Delhi, in data compression schemes for wireless sensor networks. She received her M.Tech in Electronics & Communication from MDU Rohtak in 2010. Dr. Tiwari graduated with honours from UPTU Lucknow, Uttar Pradesh in 2005, with a degree in Electronics & Instrumentation Engineering, holding the 8th rank in the top-ten merit list declared by UPTU that year.

Sushanta Kumar Mandal, Adamas University, Barasat, Kolkata, W.B, India

Sushanta Kumar Mandal received his B.E. degree in Electrical Engineering from Jalpaiguri Govt. Engineering College, and his MS and Ph.D. degrees from IIT Kharagpur. He is currently a Professor in the Department of Electrical and Electronics Engineering and Dean of Quality Assurance and Accreditation at Adamas University, Kolkata. He has more than two decades of teaching experience in reputed institutions. He has guided 6 Ph.D. and 30 M.Tech theses and is currently guiding several Ph.D. scholars. He has published more than 70 research papers in reputed international journals and conferences. His research interests include VLSI design, artificial intelligence, and machine learning applications.

References

Ekman, R. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS) (Oxford University Press, 1997).

S. E. Kahou, V. Michalski, K. Konda, R. Memisevic, and C. Pal, ‘Recurrent neural networks for emotion recognition in video’, Proc. ACM International Conference on Multimodal Interaction, pp. 467–474, NY, USA, 2015.

L. Nwosu, H. Wang, J. Lu, I. Unwala, X. Yang and T. Zhang, ‘Deep Convolutional Neural Network for Facial Expression Recognition Using Facial Parts’, 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress Orlando, FL, USA, pp. 1318–1321, 2017.

B. Yang, X. Xiang, D. Xu, X. Wang and X. Yang, ‘3D palm print recognition using shape index representation and fragile bits’, Multimed. Tools Appl. 76(14), pp. 15357–15375, 2017.

W. Mou, O. Celiktutan and H. Gunes, ‘Group-level arousal and valence recognition in static images: Face, body and context’, 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 2015.

L. Tan, K. Zhang, K. Wang, X. Zeng, X. Peng, and Y. Qiao, ‘Group Emotion Recognition with Individual Facial Emotion CNNs and Global Image-based CNNs,’ 19th ACM International Conference on Multimodal Interaction, Glasgow, UK, 2017.

N. Kumar, D. Bhargava, ‘A scheme of features fusion for facial expression analysis: A facial action recognition’, J. Stat. Manag. Syst. 20(4), pp. 693–701, 2017.

G. Tzimiropoulos, M. Pantic, ‘Fast algorithms for fitting active appearance models to unconstrained images’. Int. J. Comput. Vis. 122(1), pp. 17–33, 2017.

M. Sajid, N. I. Ratyal, N. Ali, B. Zafar, S. H. Dar, M. T. Mahmood, and Y. B. Joo, ‘The impact of asymmetric left and asymmetric right face images on accurate age estimation’, Math. Probl. Eng., pp. 1–10, 2019.

G. Zhao, M. Pietikainen, ‘Dynamic texture recognition using local binary patterns with an application to facial expressions’. IEEE Trans. Pattern Anal. Mach. Intell. 29(6), pp. 915–928, 2007.

M. Ahmadinia, ‘Energy-efficient and multi-stage clustering algorithm in wireless sensor networks using cellular learning automata’. IETE J. Res. 59(6), pp. 774–782, 2013.

X. Zhao, X. Liang, L. Liu, T. Li, Y. Han, N. Vasconcelos, S. Yan, ‘Peak-piloted deep network for facial expression recognition’, European Conference on Computer Vision pp. 425–442, 2016.

H. Zhang, A. Jolfaei, M. Alazab, ‘A face emotion recognition method using convolutional neural network and image edge computing’. IEEE Access 7, pp. 159081–159089, 2019.

I.J. Goodfellow, D. Erhan, P.L. Carrier, et al., ‘Challenges in representation learning: A report on three machine learning contests’, International Conference on Neural Information Processing, pp. 117–124, 2013.

Z. Yu and C. Zhang, ‘Image based static facial expression recognition with multiple deep network learning’, Proc. ACM on International Conference on Multimodal Interaction, pp. 435–442, 2015.

H. Niu, et al., ‘Deep feature learnt by conventional deep neural network’, Comput. Electr. Eng. 84, 106656, 2020.

M. Pantic, M. Valstar, R. Rademaker, and L. Maat, ‘Web-based database for facial expression analysis’, IEEE International Conference on Multimedia and Expo 5, IEEE, 2005.

X. Wang, X. Feng, and J. Peng, ‘A novel facial expression database construction method based on web images’, Proc. of the Third International Conference on Internet Multimedia Computing and Service, pp. 124–127, 2011.

C. Mayer, M. Eggers, and B. Radig, ‘Cross-database evaluation for facial expression recognition’, Pattern Recognit. Image Anal. 24(1), pp. 124–132, 2014.

Y. Tang, ‘Deep learning using linear support vector machines’. arXiv preprint arXiv:1306.0239, 2013.

Y. Gan, ‘Facial expression recognition using convolutional neural network’, Proc. of the 2nd International Conference on Vision, Image and Signal Processing, pp. 1–5, 2018.

C.E.J. Li, and L. Zhao, ‘Emotion recognition using convolutional neural networks’, Purdue Undergraduate Research Conference 63, 2019.

Y. Lv, Z. Feng, C. Xu, ‘Facial expression recognition via deep learning’, International Conference on Smart Computing, pp. 303–308, IEEE, 2014.

A. Mollahosseini, D. Chan, and H.M. Mahoor, ‘Going deeper in facial expression recognition using deep neural networks’, IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–10 IEEE, 2016.

R.H. Hahnloser, R. Sarpeshkar, M.A. Mahowald, R.J. Douglas, and H.S. Seung, ‘Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit’. Nature 405(6789), pp. 947–951, 2000.

M.N. Patil, B. Iyer, and R. Arya, ‘Performance evaluation of PCA and ICA algorithm for facial expression recognition application’, Proc. of Fifth International Conference on Soft Computing for Problem Solving, pp. 965–976, Springer, 2016.

N. Christou, and N. Kanojiya, ‘Human facial expression recognition with convolution neural networks’, Third International Congress on Information and Communication Technology, pp. 539–545, Springer, 2019.

B. Niu, Z. Gao, B. Guo, ‘Facial expression recognition with LBP and ORB features’, Comput. Intell. Neurosci. 2021, pp. 1–10, 2021.

S.M. González-Lozoya, et al., ‘Recognition of facial expressions based on CNN features’. Multimed. Tools Appl. 79, pp. 1–21, 2020.

A. Christy, S. Vaithyasubramanian, A. Jesudoss, and M.A. Praveena, ‘Multimodal speech emotion recognition and classification using convolutional neural network techniques’, Int. J. Speech Technol. 23, pp. 381–388, 2020.

F. Wang, et al., ‘Emotion recognition with convolutional neural network and EEG-based EFDMs’, Neuropsychologia 1(146), 107506, 2020.

D. Canedo, A.J. Neves, ‘Facial expression recognition using computer vision: A systematic review’, Appl. Sci. 9(21), 4678, 2019.

F. Nonis, N. Dagnes, F. Marcolin, and E. Vezzetti, ‘3d approaches and challenges in facial expression recognition algorithms – A literature review’, Appl. Sci. 9(18), 3904, 2019.

A.S.A. Hans, and R.A. Smitha, ‘CNN-LSTM based deep neural networks for facial emotion detection in videos’, Int. J. Adv. Signal Image Sci. 7(1), pp. 11–20, 2021.

J.D. Bodapati, and N. Veeranjaneyulu, ‘Facial emotion recognition using deep CNN based features’, 2019.

J. Haddad, O. Lézoray, and P. Hamel, ‘3D-CNN for facial emotion recognition in videos’, International Symposium on Visual Computing, pp. 298–309, Springer, 2020.

A.S. Hussain, and S.A.A.B. Ahlam, ‘A real time face emotion classification and recognition using deep learning model’, J. Phys. Conf. Ser. 1432(1), 012087, 2020.

S. Singh, and F. Nasoz, ‘Facial expression recognition with convolutional neural networks’, 10th Annual Computing and Communication Workshop and Conference (CCWC), pp. 0324–0328 IEEE, 2020.

C. Shan, S. Gong, and P.W. McOwan, ‘Facial expression recognition based on local binary patterns: a comprehensive study’, Image Vis. Comput. 27(6), pp. 803–816, 2009.

FER-2013 | Kaggle. https://www.kaggle.com/msambare/fer2013. Accessed 20 Feb 2021.

https://github.com/Tanoy004/Facial-Own-images-for-test. Accessed 20 Feb 2021.

O. A. Montesinos López, A. Montesinos López, J. Crossa, ‘Overfitting, Model Tuning, and Evaluation of Prediction Performance’, in Multivariate Statistical Machine Learning Methods for Genomic Prediction, Springer, 2022.

Published

2025-08-13

How to Cite

Sharma, A. ., Tiwari, U. ., & Mandal, S. K. . (2025). Real-Time Emotion Classification and Prediction Using a Hybrid Facial Expression Recognition Model Emotion Recognition in Human Resources’ Future. Journal of Mobile Multimedia, 21(3-4), 407–428. https://doi.org/10.13052/jmm1550-4646.21344

Issue

Section

WPMC 2024