One-shot Learning for Task-oriented Grasping
Abstract
Task-oriented grasping models aim to predict a suitable grasp pose on an object to fulfill a task. These systems have limited generalization capabilities to new tasks, but have shown the ability to generalize to novel objects by recognizing the physical properties of objects that can be associated with an action (i.e., affordances). However, this object generalization often comes at the cost of being unable to recognize the object category being grasped, which could lead to unpredictable or risky behaviors, especially within unconstrained environments. This paper overcomes these generalization limitations by exploring one-shot learning techniques to develop a task-oriented grasping solution that can leverage explicit knowledge defined in a database to implicitly generalize to new objects and tasks. We propose the One-shot Task-oriented Grasping (OS-TOG) framework, composed of four sub-models, which uses a database of objects and tasks to identify suitable task-oriented grasps on a specified object from an image scene. In physical experiments with novel objects, OS-TOG recognizes 69.4% of detected objects correctly and predicts suitable task-oriented grasps with 82.3% accuracy, with a physical grasp success rate of 82.3%. Code and models will be released upon publication.