Abstract—In this paper, we present a sign language recognition model that does not require any wearable devices for object tracking. We discuss system design and implementation issues such as data representation, feature extraction, and pattern classification methods. The proposed data representation for sign language patterns is robust to spatio-temporal variances of feature points. We present a feature extraction technique that improves computation speed by reducing the amount of feature data, and introduce a neural network model capable of incremental learning. We also define a measure that reflects the relevance between feature types and pattern classes; this measure makes it possible to select more effective features without any degradation of recognition performance. The proposed model is evaluated empirically through experiments using six types of sign language patterns.
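The abstract does not specify how the feature-to-class relevance measure is defined, but the idea of scoring each feature type by how much it tells us about the pattern classes can be illustrated with a standard stand-in: mutual information between a discretized feature and the class labels. The function name `relevance` and the toy features below are hypothetical and are not taken from the paper; this is a minimal sketch, assuming discrete feature values.

```python
import math
from collections import Counter

def relevance(feature_values, class_labels):
    """Mutual information I(F; C) between a discretized feature and the
    class labels, used as a stand-in relevance score: a higher value
    means the feature carries more information about the class."""
    n = len(feature_values)
    pf = Counter(feature_values)               # marginal counts of F
    pc = Counter(class_labels)                 # marginal counts of C
    pfc = Counter(zip(feature_values, class_labels))  # joint counts
    mi = 0.0
    for (f, c), joint in pfc.items():
        p_fc = joint / n
        p_f = pf[f] / n
        p_c = pc[c] / n
        mi += p_fc * math.log2(p_fc / (p_f * p_c))
    return mi

# Feature A perfectly separates the two classes; feature B is noise.
classes = ["hello", "hello", "thanks", "thanks"]
feat_a  = [0, 0, 1, 1]   # tracks the class exactly
feat_b  = [0, 1, 0, 1]   # statistically independent of the class
scores = {"A": relevance(feat_a, classes),
          "B": relevance(feat_b, classes)}
# A perfectly informative binary feature scores 1 bit; noise scores 0,
# so ranking by this score would keep feature A and drop feature B.
```

Ranking feature types by such a score, and discarding low-scoring ones, is one common way to reduce the feature set without hurting classification performance, which matches the selection goal stated above.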
Index Terms—Sign language recognition, neural network, feature extraction, pattern classification.
Ho-Joon Kim is with the School of Computer Science and Electrical Engineering, Handong Global University, Pohang, Kyungbuk, Korea (e-mail: firstname.lastname@example.org).
So-Jeong Park and Seung-Kang Lee are with the Dept. of Information and Communication, Handong Global University, Pohang, Kyungbuk, Korea (e-mail: email@example.com; firstname.lastname@example.org).
Cite: Ho-Joon Kim, So-Jeong Park, and Seung-Kang Lee, "Sign Language Recognition Using Motion History Volume and Hybrid Neural Networks," International Journal of Machine Learning and Computing, vol. 2, no. 6, pp. 750-753, 2012.