Abstract—The purpose of this paper is to develop a speaker-independent emotion recognition system for emotional interaction between humans and robots. Recognizing human emotion from speech is one of the challenges in the field of human-robot interaction. The ability to recognize emotions from an unspecified speaker, called speaker-independent emotion recognition, is essential for the commercial use of speech emotion recognition systems. In general, however, speaker-independent systems perform worse than speaker-dependent systems, because emotional feature values vary with the speaker and his or her gender. Hence, this paper describes the realization of speaker-independent emotion recognition based on separation and rejection, making the emotion recognition system accurate and stable. Comparison of the proposed methods with the conventional method clearly confirmed their improvement and effectiveness.
Index Terms—Speech emotion recognition, confidence measure, SID system.
The authors are with the Korea Institute of Industrial Technology, Ansan-si, Gyeongi-do, South Korea (e-mail: email@example.com).
Cite: Bo Seong Kim and Eun Ho Kim, "Speaker-Independent Emotion Recognition for Interstate Measuring of User Based on Separation and Rejection," International Journal of Machine Learning and Computing vol. 8, no. 2, pp. 152-157, 2018.