A comparison of human skeleton extractors for real-time human-robot interaction
Abstract
Modern industrial manufacturing processes increasingly integrate physical Human-Robot Interaction (pHRI) scenarios, which require robots to understand human intentions for effective and safe cooperation. Vision is the sensor modality most commonly used by robots to perceive human behavior. In this paper, existing vision-based human skeleton extraction frameworks are compared to provide guidance for the design of human-robot interaction applications. A dataset of consecutive images acquired with a Kinect camera, recording 14 actions performed by different users, is used for human skeleton extraction. This work justifies our choice of skeleton extractors according to pHRI constraints.