Unlocking the Power of Reinforcement Learning: Investigating Optimal Q-Learning Parameters for Routing in Flying Ad Hoc Networks
Abstract
The routing challenges in Flying Ad Hoc Networks (FANETs), characterized by high-speed Unmanned Aerial Vehicles (UAVs), limited UAV battery life, intermittent links, network partitioning, and dynamic topologies, have led to the development of specialized routing protocols based on Reinforcement Learning (RL). In this context, Q-Learning is the most commonly used RL algorithm. It relies on two primary hyperparameters, the learning rate and the discount factor, and the efficiency of a Q-Learning routing protocol hinges on how these parameters are chosen. To address this challenge, numerous adaptive Q-Learning routing protocols introduce novel functions that adjust the learning parameters dynamically. This paper therefore examines these adjustment strategies and introduces a novel taxonomy that groups them into three classes: linear function-based adjustment, exponential function-based adjustment, and grid search-based adjustment. The paper highlights that the prevailing adjustment function for the learning rate is a decreasing exponential, while the discount factor typically follows a linear function. This combination enables swift adaptation to topology changes while ensuring a stable transition from short-term to long-term rewards, a balance that is essential for efficient and effective routing in FANETs.
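To make the two adjustment patterns singled out in the abstract concrete, the following minimal sketch applies a standard Q-Learning update with an exponentially decaying learning rate and a linearly increasing discount factor. The decay constants, bounds, and the small state/action space (local states and candidate next-hop UAVs) are illustrative assumptions, not the exact formulas of any particular protocol surveyed in the paper.

```python
import numpy as np

def learning_rate(t, alpha0=0.9, decay=0.05):
    """Decreasing exponential adjustment: learn fast early, stabilize later."""
    return alpha0 * np.exp(-decay * t)

def discount_factor(t, gamma_min=0.5, gamma_max=0.95, horizon=100):
    """Linear adjustment: gradually shift weight toward long-term rewards."""
    return min(gamma_max, gamma_min + (gamma_max - gamma_min) * t / horizon)

def q_update(Q, state, action, reward, next_state, t):
    """Standard Q-Learning update with time-varying hyperparameters."""
    alpha = learning_rate(t)
    gamma = discount_factor(t)
    best_next = np.max(Q[next_state])
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
    return Q

# Hypothetical example: 4 local states, 5 candidate next-hop UAVs (actions).
Q = np.zeros((4, 5))
Q = q_update(Q, state=0, action=2, reward=1.0, next_state=1, t=10)
```

The intuition matches the abstract's conclusion: the exponential decay of the learning rate lets a UAV react quickly to fresh link-quality observations and then settle, while the linear growth of the discount factor smoothly shifts the routing decision from immediate neighbor rewards toward end-to-end, long-term path quality.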