Reinforcement learning vs. rule-based dynamic movement strategies in UAV-assisted networks
Abstract
Since resource allocation in cellular networks is not dynamic, some cells may experience unplanned high traffic demand due to unexpected events. Unmanned aerial vehicles (UAVs) can be used to provide the additional bandwidth required for data offloading.
Considering real-time and non-real-time traffic classes, our work optimizes the placement of UAVs in cellular networks through two approaches. The first is a rule-based, low-complexity method that can be embedded in the UAV; the second uses Reinforcement Learning (RL), formulated as a Markov Decision Process (MDP), to provide optimal results. UAV battery energy and charging-time constraints are taken into account to cover a typical cellular environment consisting of many cells.
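To make the MDP formulation concrete, the following is a minimal, hypothetical sketch. It assumes a one-dimensional row of cells, a battery discretised into a few levels, per-cell traffic demand as the reward, and a recharge action at a designated base cell; none of these specifics (cell count, demand values, charging model) come from the paper. Value iteration then yields the optimal value of each (cell, battery) state, from which a movement policy follows.

```python
# Hypothetical sketch of an MDP for UAV placement under battery and
# charging constraints. All constants below are illustrative assumptions.
import numpy as np

N_CELLS = 5            # assumed 1-D row of cells
N_BATTERY = 4          # discrete battery levels (0 = empty)
GAMMA = 0.95           # discount factor
DEMAND = np.array([0.1, 0.8, 0.3, 0.9, 0.2])  # assumed traffic per cell
BASE = 0               # cell where the UAV can recharge
ACTIONS = (-1, 0, +1)  # move left, hover/charge, move right

def step(cell, batt, a):
    """Deterministic transition: hovering at the base cell recharges;
    any other action costs one battery unit. Reward is the traffic
    demand served at the resulting cell (zero when out of energy)."""
    if a == 0 and cell == BASE:            # charge at the base cell
        return cell, min(batt + 1, N_BATTERY - 1), 0.0
    if batt == 0:                          # out of energy: cannot move or serve
        return cell, 0, 0.0
    nxt = min(max(cell + a, 0), N_CELLS - 1)
    return nxt, batt - 1, DEMAND[nxt]

# Value iteration over the (cell, battery) state space.
V = np.zeros((N_CELLS, N_BATTERY))
for _ in range(500):
    V_new = np.zeros_like(V)
    for c in range(N_CELLS):
        for b in range(N_BATTERY):
            V_new[c, b] = max(
                r + GAMMA * V[c2, b2]
                for c2, b2, r in (step(c, b, a) for a in ACTIONS)
            )
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

print(V.round(2))  # optimal value of each (cell, battery) state
```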
We used an open dataset for the Milan cellular network, provided by Telecom Italia, to evaluate the performance of both proposed models. On this dataset, the MDP model outperforms the rule-based algorithm. Nevertheless, the rule-based approach has lower computational complexity and can be used immediately without any prior data. This work makes a notable contribution to developing practical and optimal solutions for UAV deployment in modern cellular networks.