Easy grasping location learning from one-shot demonstration
Abstract
In this paper, we propose a fast-learning grasping pipeline able to grasp objects at a specific location a few minutes after being taught by an operator. Our motivation is to ease the reconfiguration of a robot for a specific task, without any CAD model, existing database, or simulator. We build a CNN pipeline that performs a semantic segmentation of the object and recognizes the authorized and prohibited grasping locations shown during the demonstration. To this end, we simplified the input space, created a data augmentation process, and propose a lightweight CNN architecture that allows learning in less than 5 minutes. Validation on a real 7-DOF robot showed good performance (70 to 100% depending on the object) with only a one-shot operator demonstration. Performance remains good when grasping similar unseen objects, and with several objects in the robot's workspace using a few demonstrations. A video highlighting the main aspects can be found at https://www.youtube.com/watch?v=rYCIk6njBo4
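To make the described approach concrete, below is a minimal sketch of the kind of lightweight encoder-decoder segmentation CNN and single-demonstration data augmentation the abstract mentions. The abstract does not specify the architecture, input encoding, class layout, or augmentation choices, so everything here (layer sizes, the four-class mask, the flip/rotation transforms, training length) is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical sketch: lightweight segmentation CNN trained in minutes on
# augmented copies of a single operator demonstration. Details are assumed.
import torch
import torch.nn as nn

class LightSegNet(nn.Module):
    """Small encoder-decoder predicting, per pixel, one of 4 assumed classes:
    background, object, authorized grasp location, prohibited grasp location."""
    def __init__(self, in_ch: int = 3, n_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def augment(img: torch.Tensor, mask: torch.Tensor):
    """Grow a training set from one demonstration by applying the same
    random rotations/flips to the image and its label mask."""
    k = int(torch.randint(0, 4, (1,)))          # random 90-degree rotation
    img, mask = torch.rot90(img, k, (1, 2)), torch.rot90(mask, k, (0, 1))
    if torch.rand(1) < 0.5:                     # random horizontal flip
        img, mask = torch.flip(img, (2,)), torch.flip(mask, (1,))
    return img, mask

# Short training loop on augmented copies of the single demonstration.
model = LightSegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
demo_img = torch.rand(3, 64, 64)                # placeholder demonstration image
demo_mask = torch.randint(0, 4, (64, 64))       # placeholder label mask
for step in range(200):
    img, mask = augment(demo_img, demo_mask)
    logits = model(img.unsqueeze(0))
    loss = loss_fn(logits, mask.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
```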
Domains
Computer Science [cs]