• Jul 29, 2019 News! IJMLC has implemented an online submission system; please submit new manuscripts through this system only.
  • Jul 16, 2019 News! All papers from Volume 9, Number 3 have been indexed by Scopus.
  • Jul 08, 2019 News! Vol. 9, No. 4 has been published with an online version.
General Information
    • ISSN: 2010-3700 (Online)
    • Abbreviated Title: Int. J. Mach. Learn. Comput.
    • Frequency: Bimonthly
    • DOI: 10.18178/IJMLC
    • Editor-in-Chief: Dr. Lin Huang
    • Executive Editor:  Ms. Cherry L. Chen
    • Abstracting/Indexing: Scopus (since 2017), EI (INSPEC, IET), Google Scholar, Crossref, ProQuest, Electronic Journals Library.
    • E-mail: ijmlc@ejournal.net
Dr. Lin Huang
Metropolitan State University of Denver, USA
It is my honor to take on the position of Editor-in-Chief of IJMLC. We encourage authors to submit papers on any branch of machine learning and computing.

IJMLC 2018 Vol.8(3): 274-279 ISSN: 2010-3700
DOI: 10.18178/ijmlc.2018.8.3.699

Local Feature Extraction from RGB and Depth Videos for Human Action Recognition

Rawya Al-Akam and Dietrich Paulus
Abstract—In this paper, we present a novel system to analyze human body motions (actions) for recognizing human actions by using 3D videos (RGB and depth data). We apply the Bag-of-Features techniques for recognizing human actions by extracting local-spatial temporal features from all video frames. Feature vectors are computed in two steps: The first step consists of detecting all interest keypoints from RGB video frames by using Speed-Up Robust Features detector; then the motion points are filtered by using Motion History Image and Optical Flow, and these important motion points are aligned to the depth frame sequences. In the second step, the feature vectors are computed by using a Histogram of Orientation Gradient descriptor, this descriptor is applied around these motion points from both RGB and depth channels, then the feature vector values are combined in one RGBD feature vector. Finally, the k-means clustering and multi-class Support Vector Machines are used for the action classification task. Our system is invariant to scale, rotation and illumination. All tested results are computed from a dataset that is available to the public and often used in the community. This new features combination method is help to reach recognition rates superior to other publications on the dataset.

Index Terms—RGBD videos, feature extraction, K-means clustering, SVM classification.

The authors are with the Faculty of Computer Science, Institute of Computational Visualistics, Active Vision Group (AGAS), University of Koblenz-Landau, Universitätsstr. 1, 56070 Koblenz, Germany (e-mail: rawya@uni-koblenz.de, paulus@uni-koblenz.de).


Cite: Rawya Al-Akam and Dietrich Paulus, "Local Feature Extraction from RGB and Depth Videos for Human Action Recognition," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 274-279, 2018.

Copyright © 2008-2019. International Journal of Machine Learning and Computing. All rights reserved.
E-mail: ijmlc@ejournal.net