IJMLC 2019 Vol.9(1): 44-50 ISSN: 2010-3700
DOI: 10.18178/ijmlc.2019.9.1.763

Combining Pose-Invariant Kinematic Features and Object Context Features for RGB-D Action Recognition

Manoj Ramanathan, Jaroslaw Kochanowicz, and Nadia Magnenat Thalmann
Abstract—Action recognition using RGB-D cameras is a popular research topic. Recognising actions in a pose-invariant manner is very challenging due to view changes, posture changes and large intra-class variations. This study proposes a novel pose-invariant action recognition framework based on kinematic features and object context features. Using RGB, depth and skeletal joints, the proposed framework extracts a novel set of pose-invariant motion kinematic features based on 3D scene flow and captures the motion of body parts with respect to the body. The obtained features are converted to a human-body-centric space that allows partially view-invariant recognition of actions. The proposed pose-invariant kinematic features are extracted for both the foreground (RGB and depth) and the skeleton joints, and separate classifiers are trained. Borda-count-based classifier decision fusion is employed to obtain an action recognition result. For capturing object context features, a convolutional neural network (CNN) classifier is proposed to identify the involved objects. The proposed context features also include temporal information on object interaction and help in obtaining the final action recognition. The proposed framework works even with non-upright human postures and allows simultaneous action recognition for multiple people, topics that remain comparatively under-researched. The performance and robustness of the proposed pose-invariant action recognition framework are tested on several benchmark datasets. We also show that the proposed method works in real time.
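The abstract mentions converting kinematic features into a human-body-centric space to obtain partial view invariance. The paper's exact transformation is not given here; the following is a minimal illustrative sketch of one common approach, in which skeleton joints are translated to a hip-centred origin and yaw-rotated so the shoulder line aligns with the x-axis. The joint indices (`hip_idx`, `l_shoulder_idx`, `r_shoulder_idx`) are hypothetical placeholders, not the paper's skeleton layout.

```python
import numpy as np

def to_body_centric(joints, hip_idx=0, l_shoulder_idx=4, r_shoulder_idx=8):
    """Map 3D joints into a body-centric frame (illustrative sketch).

    joints: (N, 3) array of joint positions in camera coordinates.
    Translates so the hip joint is the origin, then rotates about the
    vertical (z) axis so the left-to-right shoulder vector lies along x.
    Only yaw is normalised here; a full method might also normalise
    roll/pitch and limb lengths.
    """
    centred = joints - joints[hip_idx]
    shoulder_vec = centred[r_shoulder_idx] - centred[l_shoulder_idx]
    yaw = np.arctan2(shoulder_vec[1], shoulder_vec[0])
    c, s = np.cos(-yaw), np.sin(-yaw)
    # Rotation by -yaw about the z-axis cancels the body's facing direction.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return centred @ R.T
```

After this normalisation, the same action performed facing different directions yields comparable joint trajectories, which is the intuition behind the view-invariance claim.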

Index Terms—Real-time action/activity recognition, pose-invariant kinematic features, object context, non-upright postures.
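The abstract states that separately trained classifiers (foreground and skeleton) are combined via Borda-count-based decision fusion. As a minimal sketch of how standard Borda count fusion works, each classifier ranks the candidate action classes and a class earns points equal to its rank position; the class with the highest point total wins. The score values below are made-up examples, not results from the paper.

```python
import numpy as np

def borda_count_fusion(score_lists):
    """Fuse per-classifier class scores with a Borda count (sketch).

    score_lists: list of 1-D arrays, one per classifier, each holding a
    confidence score for every action class. Within each classifier the
    best-ranked class receives (num_classes - 1) points and the worst 0;
    points are summed across classifiers and the top total wins.
    """
    totals = np.zeros(len(score_lists[0]))
    for scores in score_lists:
        # Double argsort converts raw scores into ascending rank positions.
        ranks = np.argsort(np.argsort(scores))
        totals += ranks
    return int(np.argmax(totals))

# Hypothetical example: two classifiers over three action classes.
skeleton_scores = np.array([0.2, 0.5, 0.3])
foreground_scores = np.array([0.3, 0.1, 0.6])
print(borda_count_fusion([skeleton_scores, foreground_scores]))  # → 2
```

Rank-based fusion like this is robust to the two classifiers producing scores on different scales, since only the ordering of classes matters.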

The authors are with the Institute for Media Innovation, Nanyang Technological University, Singapore (e-mail: mramanathan@ntu.edu.sg, jarek108@gmail.com, nadiathalmann@ntu.edu.sg).


Cite: Manoj Ramanathan, Jaroslaw Kochanowicz, and Nadia Magnenat Thalmann, "Combining Pose-Invariant Kinematic Features and Object Context Features for RGB-D Action Recognition," International Journal of Machine Learning and Computing vol. 9, no. 1, pp. 44-50, 2019.

Copyright © 2008-2019. International Journal of Machine Learning and Computing. All rights reserved.
E-mail: ijmlc@ejournal.net