Institut de Recherche sur les Phénomènes Hors Équilibre
Journal article in European Physical Journal E: Soft Matter and Biological Physics, 2023

Deep reinforcement learning for the olfactory search POMDP: a quantitative benchmark

Abstract

The olfactory search POMDP (partially observable Markov decision process) is a sequential decision-making problem designed to mimic the task faced by insects searching for a source of odor in turbulence, and its solutions have applications to sniffer robots. As exact solutions are out of reach, the challenge consists in finding the best possible approximate solutions while keeping the computational cost reasonable. We provide a quantitative benchmark of a solver based on deep reinforcement learning against traditional approximate POMDP solvers. We show that deep reinforcement learning is a competitive alternative to standard methods, in particular for generating lightweight policies suitable for robots.
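For readers unfamiliar with the setup, the sketch below illustrates the generic structure of such a search POMDP: the searcher receives sparse binary detections, maintains a Bayesian belief over the hidden source location, and acts on that belief. This is a deliberately simplified toy in Python, not the environment, detection model, or solvers studied in the paper; the grid size, decay length, and greedy policy are all illustrative assumptions.

```python
import numpy as np

# Toy 1D olfactory-search POMDP, for illustration only (not the paper's
# model or solver). A source sits at an unknown cell on a line; the
# searcher gets noisy binary "odor hit" observations whose probability
# decays with distance, updates a Bayesian belief over the source
# location, and follows a simple greedy policy on that belief.

rng = np.random.default_rng(0)
N = 51                       # number of grid cells (assumed)
source = rng.integers(N)     # hidden source location
agent = N // 2               # searcher starts in the middle

def hit_prob(agent_pos, source_pos):
    """Detection probability, decaying with distance (assumed functional form)."""
    return 0.8 * np.exp(-abs(agent_pos - source_pos) / 5.0)

belief = np.full(N, 1.0 / N)  # uniform prior over the source location

for step in range(200):
    # Observe: Bernoulli hit/miss drawn from the true source position.
    hit = rng.random() < hit_prob(agent, source)
    # Bayesian belief update over all candidate source positions.
    lik = np.array([hit_prob(agent, s) for s in range(N)])
    belief *= lik if hit else (1.0 - lik)
    belief /= belief.sum()
    # Greedy policy: step toward the current belief maximum.
    target = int(np.argmax(belief))
    if target == agent:
        break                 # declare the source found
    agent += 1 if target > agent else -1

print(f"source at {source}, search ended at cell {agent} after {step + 1} steps")
```

In the paper's framing, the interesting question is what replaces the greedy rule in the last few lines: a deep-reinforcement-learning policy or a traditional approximate POMDP solver maps the belief to an action, and the benchmark compares how well they do so at a given computational cost.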
Main file: Loisy2023a_EurPhysJE_drl-benchmark_arxiv.pdf (2.22 MB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04045475, version 1 (24-03-2023)

Identifiers

HAL Id: hal-04045475
DOI: 10.1140/epje/s10189-023-00277-8

Cite

Aurore Loisy, Robin Heinonen. Deep reinforcement learning for the olfactory search POMDP: a quantitative benchmark. European Physical Journal E: Soft matter and biological physics, 2023, 46 (3), pp.17. ⟨10.1140/epje/s10189-023-00277-8⟩. ⟨hal-04045475⟩