Abstract—Over the last few years, emotionally intelligent systems have changed the way humans interact with machines. The main intention of these systems is not only to interpret human affective states but also to respond in real time during assistive human-device interactions. In this paper we propose a method for building a Multimodal Emotion Recognition System (MERS), which combines facial cues and hand-over-face gestures and works in near real time at an average frame rate of 14 fps. Although there are many state-of-the-art emotion recognition systems that use facial landmarks, we claim that our proposed system is one of the very few that also incorporates hand-over-face gestures, which are commonly expressed during emotional interactions.
Index Terms—Hand-over-face gestures, facial landmarks, histogram of oriented gradients, space-time interest points.
Mahesh Krishnananda Prabhu is with Samsung R&D Institute, Bagmane Constellation Business Park, Phoenix Building, Outer Ring Road, Doddanekkundi, Bengaluru, Karnataka 560037, India (e-mail: email@example.com).
Dinesh Babu Jayagopi is with Multi Modal Perception Lab, International Institute of Information Technology Bangalore (IIITB), 26/C, Hosur Rd, Electronics City Phase 1, Electronic City, Bengaluru, Karnataka 560100, India (e-mail: firstname.lastname@example.org).
Cite: Mahesh Krishnananda Prabhu and Dinesh Babu Jayagopi, "Real Time Multimodal Emotion Recognition System using Facial Landmarks and Hand over Face Gestures," International Journal of Machine Learning and Computing vol. 7, no. 2, pp. 30-34, 2017.