Improving Haptic Response for Contextual Human Robot Interaction
Abstract
In haptic applications, a user in a virtual environment interacts with physical proxies attached to a robot. The device must reach the exact location defined in the virtual environment in time; however, delays are unavoidable due to device limitations. One way to improve device response is to infer the user's intended motion and move the robot toward the desired goal as early as possible. This paper presents an experimental study aimed at improving prediction time and reducing the time the robot needs to reach the desired position. We developed motion strategies based on hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of these strategies, we conducted a subject-based experiment using an exergame with reach-and-grab tasks designed for upper-limb rehabilitation training. The experimental results revealed that eye-gaze-based prediction significantly improved detection time by 37% and the robot's time to reach the target by 27%. Further analysis provided additional insight into the effect of the eye-gaze window and the hand-motion threshold on device response for the experimental task.