Abstract—A system involving hand detection was developed to serve as a learning tool for sign language beginners. The system is based on a skin-color modeling technique, i.e., explicit skin-color space thresholding: a predetermined skin-color range separates hand pixels from background pixels. The segmented images were fed into a Convolutional Neural Network (CNN) for classification, and Keras was used to train the model. Under proper lighting conditions and with a uniform background, the system achieved an average testing accuracy of 93.67%: 90.04% for ASL alphabet recognition, 93.44% for number recognition, and 97.52% for static word recognition, surpassing the results of other related studies. The approach enables fast computation and runs in real time.
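The explicit skin-color thresholding described above can be sketched as follows. This is a minimal illustration using the classic explicit RGB skin rule of Kovac et al.; the paper's actual color space and threshold values are not specified here, so the ranges below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def skin_mask(image: np.ndarray) -> np.ndarray:
    """Return a boolean mask of skin-colored (hand) pixels.

    Implements an explicit RGB skin-color rule (Kovac et al.);
    the thresholds here are illustrative, not the paper's exact ranges.
    `image` is an H x W x 3 uint8 array in RGB order.
    """
    img = image.astype(np.int16)  # widen to avoid uint8 overflow in differences
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (
        (r > 95) & (g > 40) & (b > 20)                      # each channel above a floor
        & (img.max(axis=-1) - img.min(axis=-1) > 15)        # enough color spread
        & (np.abs(r - g) > 15) & (r > g) & (r > b)          # red-dominant hue
    )

# Example: one skin-like pixel and one bluish background pixel.
patch = np.array([[[200, 120, 90], [30, 60, 120]]], dtype=np.uint8)
mask = skin_mask(patch)  # True for the hand pixel, False for the background
```

In a full pipeline, `mask` would be used to zero out background pixels before resizing the segmented hand image and passing it to the CNN classifier.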
Index Terms—ASL alphabet recognition, sign language recognition, static gesture.
L. K. S. Tolentino is with the Department of Electronics Engineering, Technological University of the Philippines and the University Extension Services Office, Technological University of the Philippines, Philippines (e-mail: firstname.lastname@example.org).
R. O. Serfa Juan is with the Department of Electronic Engineering, Cheongju University, South Korea (e-mail: email@example.com).
A. C. Thio-ac, M. A. B. Pamahoy, J. R. R. Forteza, and X. J. O. Garcia are with the Department of Electronics Engineering, Technological University of the Philippines, Philippines (e-mail: firstname.lastname@example.org).
Cite: Lean Karlo S. Tolentino, Ronnie O. Serfa Juan, August C. Thio-ac, Maria Abigail B. Pamahoy, Joni Rose R. Forteza, and Xavier Jet O. Garcia, "Static Sign Language Recognition Using Deep Learning," International Journal of Machine Learning and Computing, vol. 9, no. 6, pp. 821-827, 2019.
Copyright © 2019 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.