A Stable Method for Task Priority Adaptation in Quadratic Programming via Reinforcement Learning - Archive ouverte HAL
Preprint / Working paper. Year: 2024

Andrea Testa
Marco Laghi
Gennaro Raiola
Arash Ajoudani

Abstract

In emerging manufacturing facilities, robots must become more flexible. They are expected to perform complex jobs, exhibiting different behaviors on demand, all within unstructured environments and without requiring reprogramming or setup adjustments. To address this challenge, we introduce A3CQP, a non-strict hierarchical Quadratic Programming (QP) controller. It seamlessly combines motion and interaction functionalities, with priorities dynamically and autonomously adapted by a Reinforcement Learning-based adaptation module. This module uses the Asynchronous Advantage Actor-Critic (A3C) algorithm to ensure rapid convergence and stable training within continuous action and observation spaces. Experimental validation, involving a collaborative peg-in-hole assembly and the polishing of a wooden plate, demonstrates the effectiveness of the proposed solution in terms of automatic adaptability, responsiveness, and safety.
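In a non-strict hierarchy of this kind, task priorities enter the QP as soft weights rather than strict lexicographic levels, so the adaptation module can trade motion tracking against interaction regulation continuously. The following is a minimal sketch of that idea, not the paper's implementation: the function name, the regularization term, and the two toy tasks are all illustrative assumptions, and a real controller would also include joint and interaction constraints.

```python
# Minimal sketch (not the authors' implementation) of a non-strict
# hierarchical QP: each task i contributes a weighted least-squares
# term w_i * ||A_i x - b_i||^2, so priorities are soft weights.
import numpy as np

def solve_soft_priority_qp(tasks, weights, n_dof, reg=1e-6):
    """Solve min_x sum_i w_i * ||A_i x - b_i||^2 + reg * ||x||^2.

    tasks   : list of (A_i, b_i) pairs, one per task
    weights : task priorities (in the paper, adapted online by RL)
    """
    H = reg * np.eye(n_dof)        # regularization keeps H positive definite
    g = np.zeros(n_dof)
    for (A, b), w in zip(tasks, weights):
        H += w * A.T @ A           # accumulate the weighted quadratic term
        g += w * A.T @ b
    return np.linalg.solve(H, g)   # closed-form solution (no constraints)

# Toy 2-DoF example: a "motion" task and an "interaction" task (assumed data).
A_motion, b_motion = np.eye(2), np.array([1.0, 0.0])
A_interact, b_interact = np.array([[1.0, 1.0]]), np.array([0.5])

# Fixed priorities favoring motion; the paper's A3C module would update
# these online from continuous observations instead.
x = solve_soft_priority_qp([(A_motion, b_motion), (A_interact, b_interact)],
                           weights=[1.0, 0.2], n_dof=2)
print(x)
```

Because the weights appear only in the QP cost, adapting them cannot make the problem infeasible, which is what allows an RL policy to retune priorities online without destabilizing the solver.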
Main file

AdaptiveQP (1).pdf (9.65 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04280264, version 1 (10-11-2023)
hal-04280264, version 2 (29-02-2024)

Identifiers

  • HAL Id: hal-04280264, version 2

Cite

Andrea Testa, Marco Laghi, Edoardo del Bianco, Gennaro Raiola, Enrico Mingo Hoffman, et al.. A Stable Method for Task Priority Adaptation in Quadratic Programming via Reinforcement Learning. 2024. ⟨hal-04280264v2⟩

Collections

INRIA INRIA2
65 Views
80 Downloads
