Towards Learning Human-Like and Efficient Multi-Agent Path Finding
Abstract
Simulating trajectories of virtual crowds is a common task in computer graphics. It overlaps significantly with the broader field of multi-agent path finding, sharing the same central goal but with different desired characteristics of motion. Several recent works have applied Reinforcement Learning methods to animate virtual crowds; however, they often make quite different design choices in the fundamental simulation setup. Each of these choices comes with a reasonable justification, so it is not obvious what their real impact is or how they affect the results. In this work, we build upon our recent research studying how these arbitrary design choices affect both learning performance and the quality of the resulting motion. We extend it with a more in-depth analysis of the reward function, its structure, and its properties. We introduce a simple framework for modelling the reward function that enables studying its properties without performing a relatively costly RL training. We also present findings on how certain specific reward functions succeed or fail at producing believable behaviour in different scenarios.
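The abstract does not reproduce the paper's reward function, but a typical shaped reward for multi-agent navigation combines a dense goal-progress term with collision penalties. The sketch below is purely illustrative, not taken from the paper: the function `step_reward` and the parameters `w_progress`, `w_collision`, and `collision_radius` are hypothetical choices made here to show the general structure.

```python
import numpy as np

def step_reward(pos, prev_pos, goal, neighbor_positions,
                w_progress=1.0, w_collision=-2.5, collision_radius=0.4):
    """Per-agent reward: goal progress minus collision penalties.

    All weights and the radius are illustrative values, not the
    paper's actual reward parameters.
    """
    # Dense shaping term: reduction in Euclidean distance to the goal
    # over the last simulation step.
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    reward = w_progress * progress

    # Penalty for every neighbour closer than the collision radius.
    for other in neighbor_positions:
        if np.linalg.norm(pos - other) < collision_radius:
            reward += w_collision
    return reward

# Example: agent moved 0.1 closer to the goal but grazes a neighbour.
agent = np.array([0.0, 0.0])
prev = np.array([-0.1, 0.0])
goal = np.array([5.0, 0.0])
others = [np.array([0.3, 0.1])]
print(step_reward(agent, prev, goal, others))  # ~0.1 progress - 2.5 penalty
```

Because such a reward is a closed-form function of agent states, its level sets, gradients, and trade-offs between terms can be plotted and analysed directly, which is one sense in which reward properties can be studied without running a costly RL training.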