Online reward adaptation for MDP-based distributed missions
Abstract
Unmanned aerial vehicles (UAVs) are increasingly deployed in environments where human intervention is difficult, repetitive, or dangerous, and they greatly improve mission quality, productivity, and safety. Managing the missions of these increasingly complex autonomous vehicles requires independent, online decision-making. Markov decision processes (MDPs) are the most widely used probabilistic models for describing, modeling, and solving decision-making problems under uncertainty. As mission complexity grows, accounting for the physical constraints and safety requirements of the mission calls for several decision models running in parallel. However, the parallel execution of several MDPs can lead to conflicts. This paper describes a self-adaptation method for resolving conflicts that arise during the mission of a UAV swarm modeled with MDPs. Decisions should primarily be made by the UAV itself, but in some cases it lacks the global view needed to choose the action best suited to the mission. The proposed method detects and resolves conflicts in two main phases: first, the embedded edge devices detect conflicting swarm members; second, each UAV adjusts its mission plan to avoid conflicts within the swarm. To illustrate the methodology, we present experimental results obtained with a UAV swarm performing a target search and tracking mission. Our solution has low overhead and significantly improves the swarm's lifetime, safety, and mission efficiency.
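As a minimal sketch of the underlying idea, assuming standard MDP notation rather than the paper's own formulation: each UAV plans over an MDP $(S, A, T, R)$, and when the edge devices flag a conflict, it penalizes the conflicting state–action pairs in its reward function and re-solves its policy online:
\[
R'(s,a) = R(s,a) - \lambda\, c(s,a), \qquad
\pi'(s) = \operatorname*{arg\,max}_{a \in A} \Big[ R'(s,a) + \gamma \sum_{s' \in S} T(s' \mid s,a)\, V'(s') \Big],
\]
where $c(s,a) \in \{0,1\}$ marks the state–action pairs involved in a detected conflict, $\lambda > 0$ is a penalty weight, $\gamma$ is the discount factor, and $V'$ is the optimal value function of the adjusted MDP. Here $c$ and $\lambda$ are illustrative assumptions, not quantities defined in the paper.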