
Real-Time Emotion Analysis in Video Interviews

By Orisys Academy on 23rd January 2024

Problem Statement

Traditional job interviews often rely on subjective assessments, and candidates may not
always express their emotions accurately. Real-time emotion analysis in video
interviews can provide additional insight into a candidate’s emotional state.

Abstract

This project focuses on developing a system that performs real-time emotion analysis
during video interviews. Utilizing computer vision and machine learning, the system will
analyze facial expressions and other visual cues to assess the candidate’s emotional
state. The goal is to provide additional information to aid in the hiring decision-making
process.
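
The project brief does not name specific libraries or models. As a minimal sketch, assuming OpenCV’s Haar-cascade face detector and a hypothetical pretrained Keras CNN (emotion_cnn.h5) that classifies 48x48 grayscale face crops into the seven basic emotions, a real-time analysis loop over a webcam feed could look like this:

    # Sketch only: the library choices, model file name, and label order are
    # assumptions for illustration, not part of the original project spec.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    emotion_model = load_model("emotion_cnn.h5")  # hypothetical pretrained model

    cap = cv2.VideoCapture(0)  # webcam feed standing in for the interview stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
            probs = emotion_model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)[0]
            label = EMOTIONS[int(np.argmax(probs))]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("Interview emotion analysis", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

In an actual interview setting, the webcam capture would be replaced by frames from the video-conferencing stream, and the per-frame labels would be aggregated over time rather than reported individually.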

Outcome

A tool that can analyze video interviews in real-time, providing insights into the
emotional responses of candidates to enhance the hiring process.

Reference

Emotion recognition is one of the important applications of computer vision and artificial intelligence for human-computer interaction. Surprise, anger, neutral, fear, sad, happy, and disgust are considered the seven basic human emotions. In this work, the CMU Multi-PIE database has been used to train and test the emotion recognition model. The model is then used to recognize people’s emotions in real time in situations such as interviews, online classrooms, office meetings, and other human interactions. The emotions considered in this work are happy, surprise, angry, disgust, and neutral. In interviews, a face-scanning algorithm helps decide whether a candidate deserves the job or not. Moving beyond recruitment, the same model can be used in schools to assess whether students are paying attention, or in offices to identify whether employees are attentive during meetings. A model is therefore needed that can detect emotions in videos from a small number of training images while still performing well on unknown samples; meta-learning provides a solution to this problem. The training database consists of frontal-pose face images from the CMU Multi-PIE database, while the test database contains images with various head poses from the same database. The proposed model achieves accuracies of 95% for training and 85% for testing.

  1. Jordan Min Han Pang, Tee Connie, and Goh Kah Ong Michael, “Recognition of Academic Emotions in Online Classes,” 2021 9th International Conference on Information and Communication Technology (ICoICT), 2021.
  2. Suja Palaniswamy and Shikha Tripathi, “Geometrical Approach for Emotion Recognition from Facial Expressions Using 4D Videos and Analysis on Feature-Classifier Combination,” International Journal of Intelligent Engineering and Systems, 2017.
  3. Divina Lawrance and Suja Palaniswamy, “Emotion Recognition from Facial Expressions for 3D Videos Using Siamese Network,” 2021 International Conference on Communication Control and Information Sciences (IC-CISc), 2021.
  4. Tanya Keshari and Suja Palaniswamy, “Emotion Recognition Using Feature-Level Fusion of Facial Expressions and Body Gestures,” 2019 International Conference on Communication and Electronics Systems (ICCES), 2019.
  5. Soumya Kuruvayil and Suja Palaniswamy, “Emotion Recognition from Facial Images with Simultaneous Occlusion, Pose and Illumination Variations Using Meta-Learning,” Journal of King Saud University, 2021.

    https://ieeexplore.ieee.org/document/9971841/references#references
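
The referenced work addresses the shortage of labelled training images with meta-learning. As an illustration only (the paper’s actual architecture and the CMU Multi-PIE preprocessing are not reproduced here), a prototypical-network-style training episode computes one prototype embedding per emotion class from a few support images and classifies query images by their distance to those prototypes:

    # Toy prototypical-network episode for few-shot emotion recognition.
    # The embedding network, 48x48 input size, and five-class setup
    # (happy, surprise, angry, disgust, neutral) are assumptions for this
    # sketch; the referenced paper's meta-learning model may differ.
    import torch
    import torch.nn as nn

    N_CLASSES, N_SUPPORT, N_QUERY, EMB_DIM = 5, 5, 10, 64

    embed = nn.Sequential(  # small embedding network for grayscale face crops
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(64, EMB_DIM))

    # Random tensors stand in for the support/query face crops of one episode.
    support = torch.randn(N_CLASSES, N_SUPPORT, 1, 48, 48)
    query = torch.randn(N_CLASSES * N_QUERY, 1, 48, 48)
    query_labels = torch.arange(N_CLASSES).repeat_interleave(N_QUERY)

    # Class prototype = mean embedding of that class's support examples.
    prototypes = embed(support.view(-1, 1, 48, 48)) \
        .view(N_CLASSES, N_SUPPORT, EMB_DIM).mean(dim=1)

    # Classify each query by (negative squared) distance to every prototype.
    logits = -torch.cdist(embed(query), prototypes) ** 2
    loss = nn.functional.cross_entropy(logits, query_labels)
    loss.backward()  # gradient for one meta-training step
    print("episode loss:", loss.item())

Training over many such episodes, each built from only a handful of images per class, is what allows this style of model to generalize to head poses and subjects it has not seen before.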