Leaky PPO: A simple and efficient RL algorithm for autonomous vehicles
Abstract
Interest in applying Reinforcement Learning (RL) to Autonomous Vehicles (AVs) is growing rapidly. Proximal Policy Optimization (PPO), a well-known RL algorithm with two original versions, is simple to implement and generalizes well across tasks. In this paper, we first analyze the issues in each of the original PPO versions: the asymmetric penalty in the Adaptive KL Penalty Coefficient version, and the loss of gradient signal and the pessimistic estimate in the Clipped version. To address these issues, we propose three improved PPO algorithms: Adaptive JS Penalty Coefficient PPO, Leaky PPO, and Parametric PPO. To validate the effectiveness of the proposed algorithms, we construct three autonomous driving scenarios in the MetaDrive simulator. Experimental results demonstrate that Leaky PPO outperforms five other PPO variants across these autonomous driving simulation scenarios. Furthermore, Leaky PPO outperforms other popular RL algorithms and achieves state-of-the-art performance.
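For context, the standard clipped surrogate objective of PPO (Schulman et al., 2017) is reproduced below; once the probability ratio leaves the clipping interval $[1-\epsilon, 1+\epsilon]$ and the clipped term is selected, it contributes no gradient, which is the loss of gradient signal noted above. The "leaky" clip that follows is only a hedged sketch of the general idea (a small residual slope $\alpha$ retained outside the interval, by analogy with the leaky ReLU); it is an assumption for illustration, not the paper's exact formulation.

% Standard PPO clipped surrogate, with probability ratio
% r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\mathrm{old}}}(a_t \mid s_t):
\[
L^{\mathrm{CLIP}}(\theta) =
  \hat{\mathbb{E}}_t\!\left[
    \min\!\bigl( r_t(\theta)\,\hat{A}_t,\;
                 \mathrm{clip}\bigl(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\bigr)\,\hat{A}_t \bigr)
  \right]
\]

% Hypothetical "leaky" clip (assumption, analogous to the leaky ReLU):
% a small slope \alpha > 0 is kept outside [1-\epsilon, 1+\epsilon] so that
% the gradient does not vanish there.
\[
\mathrm{clip}_{\mathrm{leaky}}(r) =
  \begin{cases}
    1-\epsilon + \alpha\,(r - 1 + \epsilon), & r < 1-\epsilon \\
    r, & 1-\epsilon \le r \le 1+\epsilon \\
    1+\epsilon + \alpha\,(r - 1 - \epsilon), & r > 1+\epsilon
  \end{cases}
\]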