NOMA-Based Scheduling and Offloading for Energy Harvesting Devices Using Reinforcement Learning
Abstract
We consider a joint optimization problem of resource scheduling and computation offloading in a Mobile-Edge Computing (MEC) system where User Equipments (UEs) or devices have energy harvesting capabilities. The UEs can either execute data packets locally or offload them to a nearby MEC server for remote processing. The main objective is to minimize the overall packet loss of the UEs under strict delay constraints imposed by applications. Non-Orthogonal Multiple Access (NOMA) is enabled to allow UEs to transmit their data packets simultaneously. The problem is formulated as a Markov Decision Process and solved using Proximal Policy Optimization (PPO), a Deep Reinforcement Learning algorithm. Numerical results show the efficiency of this algorithm in reducing both packet loss and energy consumption during testing, compared to several naive heuristics.
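As a rough illustration of this kind of formulation, the sketch below casts a single UE's per-slot decision (idle, execute locally, or offload) as a toy Gymnasium environment and trains it with the PPO implementation from Stable-Baselines3. The state variables, energy costs, harvesting and channel dynamics, and reward are simplified assumptions for illustration only, not the paper's actual system model or NOMA uplink treatment.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class OffloadEnv(gym.Env):
    """Toy MDP for one energy-harvesting UE: each slot, the agent idles,
    executes the head-of-line packet locally, or offloads it to the MEC
    server. A packet exceeding its deadline is dropped (packet loss)."""

    def __init__(self, battery_max=10.0, deadline=5):
        super().__init__()
        self.battery_max, self.deadline = battery_max, deadline
        # Observation: [battery level, queue length, head-packet age, channel gain]
        self.observation_space = spaces.Box(0.0, np.inf, shape=(4,), dtype=np.float32)
        # Action: 0 = idle, 1 = local execution, 2 = offload
        self.action_space = spaces.Discrete(3)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.battery = self.battery_max / 2
        self.queue, self.age, self.gain = 1, 0, 1.0
        return self._obs(), {}

    def _obs(self):
        return np.array([self.battery, self.queue, self.age, self.gain],
                        dtype=np.float32)

    def step(self, action):
        loss = 0
        e_local, e_tx = 2.0, 1.0  # illustrative energy costs, not from the paper
        if action == 1 and self.battery >= e_local:    # local execution
            self.battery -= e_local
            self.queue, self.age = max(self.queue - 1, 0), 0
        elif action == 2 and self.battery >= e_tx:     # offload; success tied to channel
            self.battery -= e_tx
            if self.np_random.random() < min(self.gain, 1.0):
                self.queue, self.age = max(self.queue - 1, 0), 0
        # Deadline violation drops the head packet and incurs the loss penalty
        if self.queue > 0:
            self.age += 1
            if self.age > self.deadline:
                self.queue, self.age, loss = self.queue - 1, 0, 1
        # Toy energy harvesting, packet arrivals, and Rayleigh channel fading
        self.battery = min(self.battery + self.np_random.exponential(0.5),
                           self.battery_max)
        self.queue += int(self.np_random.random() < 0.4)
        self.gain = self.np_random.rayleigh(0.8)
        return self._obs(), -float(loss), False, False, {}


env = gym.wrappers.TimeLimit(OffloadEnv(), max_episode_steps=200)
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)
```

The reward of minus one per dropped packet directly mirrors the packet-loss objective; a multi-UE version with a shared NOMA uplink would extend the action and observation spaces accordingly.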