Conference Paper, Year: 2024

Leaky PPO: A simple and efficient RL algorithm for autonomous vehicles

Abstract

Interest in applying Reinforcement Learning (RL) to Autonomous Vehicles (AVs) is growing rapidly. Proximal Policy Optimization (PPO), a well-known RL algorithm with two original versions, is simple to implement and highly general. In this paper, we first analyze the issues in each of the original PPO versions: the asymmetric penalty in the Adaptive KL Penalty Coefficient version, and the gradient loss and pessimistic estimate in the Clipped version. To address these issues, we propose three improved PPO algorithms: Adaptive JS Penalty Coefficient PPO, Leaky PPO, and Parametric PPO. To validate their effectiveness, we generated three autonomous driving scenarios in the MetaDrive simulator. Experimental results demonstrate that Leaky PPO outperforms the other five PPO variants across these scenarios. Furthermore, Leaky PPO outperforms other popular RL algorithms and achieves state-of-the-art performance.
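The full text is not deposited here, but the core idea named in the abstract can be sketched. Below is a minimal, hypothetical PyTorch illustration, assuming that Leaky PPO replaces PPO's hard clipping with a "leaky" clip (by analogy with Leaky ReLU) so that a small gradient survives outside the trust region; the slope parameter alpha and the exact functional form are assumptions, not the paper's verified equations. Here ratio is the per-sample probability ratio pi_theta(a|s) / pi_theta_old(a|s) and advantage is the estimated advantage.

import torch

def leaky_ppo_loss(ratio, advantage, eps=0.2, alpha=0.01):
    # Clipped PPO zeroes the gradient once the probability ratio leaves
    # [1 - eps, 1 + eps], which the abstract calls "gradient loss".
    # A leaky clip keeps a small slope alpha outside that interval.
    # NOTE: alpha and this exact form are illustrative assumptions.
    hi, lo = 1 + eps, 1 - eps
    leaky = torch.where(
        ratio > hi, hi + alpha * (ratio - hi),
        torch.where(ratio < lo, lo + alpha * (ratio - lo), ratio),
    )
    # Pessimistic minimum of the unclipped and leaky-clipped surrogates,
    # as in the original PPO objective; negated for gradient descent.
    return -torch.min(ratio * advantage, leaky * advantage).mean()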
File not deposited

Dates and versions

hal-04738745, version 1 (15-10-2024)

Identifiers

Cite

Xinchen Han, Hossam Afifi, Hassine Moungla, Michel Marot. Leaky PPO: A simple and efficient RL algorithm for autonomous vehicles. 2024 International Joint Conference on Neural Networks (IJCNN), Jun 2024, Yokohama, Japan. pp.1-7, ⟨10.1109/IJCNN60899.2024.10650450⟩. ⟨hal-04738745⟩
