Abstract—This paper addresses the problem of transferring new skills to robots through observation of human-demonstrated examples. An approach is presented for retrieving the trajectories of an object manipulated during the demonstrations from measurements provided by a Kinect sensor. The problem of tracking the object across image frames is solved using weighted dynamic template matching with normalized cross-correlation. This approach exploits the Kinect's simultaneous image and depth measurements to improve pattern localization and pose estimation. The demonstrated trajectories are stochastically encoded with a hidden Markov model, and the resulting model is used to generate a generalized trajectory for task reproduction. The developed methodology is experimentally validated in a real-world task-learning scenario.
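The abstract mentions template matching with normalized cross-correlation (NCC) as the core tracking criterion. The paper's weighted dynamic variant is not detailed here; the following is only a minimal NumPy sketch of plain NCC matching (function name and synthetic data are illustrative, not from the paper), showing how a template is localized in a frame by maximizing the correlation score:

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over the image and return the (row, col)
    of the patch with the highest normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()           # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, (0, 0)      # NCC lies in [-1, 1]
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()          # zero-mean patch
            p_norm = np.sqrt((p ** 2).sum())
            if p_norm == 0 or t_norm == 0:    # flat region: NCC undefined
                continue
            score = (t * p).sum() / (t_norm * p_norm)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic example: embed an exact copy of the pattern in a frame
rng = np.random.default_rng(0)
frame = rng.random((30, 30))
pattern = frame[12:17, 8:13].copy()
pos, score = ncc_match(frame, pattern)        # pos == (12, 8), score ≈ 1.0
```

In practice (and in the paper's approach), the template is updated dynamically across frames and the matching is fused with the Kinect's depth channel for pose estimation, neither of which this sketch attempts.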
Index Terms—Robotics, programming by demonstration, visual learning, pose estimation.
Aleksandar Vakanski and Farrokh Janabi-Sharifi are with the Department of Mechanical and Industrial Engineering at Ryerson University, Toronto, M5B 2K3, Canada (phone: +1-416-979-5000 x 7089; fax: +1-416-979-5265; e-mail: firstname.lastname@example.org, email@example.com).
Iraj Mantegh is with the National Research Council Canada (NRC) - Aerospace Portfolio, Montreal, H3T 2B2, Canada (e-mail: firstname.lastname@example.org).
Cite: Aleksandar Vakanski, Farrokh Janabi-Sharifi, and Iraj Mantegh, "Robotic Learning of Manipulation Tasks from Visual Perception Using a Kinect Sensor," International Journal of Machine Learning and Computing, vol. 4, no. 2, pp. 163-169, 2014.