Learning Optimal Edge Processing with Offloading and Energy Harvesting - Archive ouverte HAL
Preprint / Working Paper, Year: 2023

Learning Optimal Edge Processing with Offloading and Energy Harvesting

Abstract

Modern portable devices can execute increasingly sophisticated AI models on sensed data. The complexity of such processing tasks is data-dependent and carries a significant energy cost. This work develops an Age of Information Markovian model for a system in which multiple battery-operated devices perform data processing and energy harvesting in parallel. Part of their computational burden is offloaded to an edge server that polls the devices at a given rate. The structural properties of the optimal policy for a single device-server system are derived. These properties lead to a new model-free reinforcement learning method specialized for monotone policies, called Ordered Q-Learning, which provides a fast procedure to learn the optimal policy. The method is oblivious to the devices' battery capacities, to the cost and value of data batch processing, and to the dynamics of the energy harvesting process. Finally, the polling strategy of the server is optimized by combining such policy improvement techniques with stochastic approximation methods. Extensive numerical results provide insight into the system properties and demonstrate that the proposed learning algorithms outperform existing baselines.
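The key algorithmic idea in the abstract is that the optimal per-device policy is monotone (threshold-like) in an ordered state such as the battery level, and that a Q-learning variant can exploit this structure. The following is a minimal, hypothetical Python sketch of that idea: a tabular Q-learning loop whose greedy policy is projected onto the set of monotone (threshold) policies after every update. The toy environment, state and action spaces, rewards, and the projection step are illustrative assumptions and do not reproduce the paper's Ordered Q-Learning algorithm.

import numpy as np

# Hypothetical sketch: Q-learning with a monotone-policy projection over an
# ordered state (battery level). Actions: 0 = stay idle, 1 = process locally.
# All dynamics, costs, and rewards below are placeholders, not the paper's model.

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy environment: processing drains battery but yields a reward for fresh data."""
    s_next = max(s - 2, 0) if a == 1 else min(s + 1, n_states - 1)  # drain vs. harvest
    reward = (1.0 if a == 1 and s >= 2 else 0.0) - 0.05 * a          # data value minus energy cost
    return s_next, reward

def monotone_greedy(Q):
    """Simple projection of the greedy policy onto threshold policies:
    once 'process' is preferred at some battery level, keep it at all higher levels."""
    greedy = Q.argmax(axis=1)
    return np.maximum.accumulate(greedy)

s = 0
for t in range(20000):
    policy = monotone_greedy(Q)
    a = policy[s] if rng.random() > eps else rng.integers(n_actions)
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("Learned threshold policy:", monotone_greedy(Q))

The projection used here (cumulative maximum over the greedy actions) is only one crude way to enforce monotonicity; the paper's method and its theoretical guarantees are not reproduced by this sketch.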
Main file: mobihoc_v6.pdf (744.7 KB)
Origin: Explicit agreement for this deposit
License

Dates and versions

hal-04022507 , version 1 (10-03-2023)
hal-04022507 , version 2 (23-03-2023)

Identifiers

  • HAL Id : hal-04022507 , version 1

Cite

Andrea Fox, Francesco De Pellegrini, Eitan Altman. Learning Optimal Edge Processing with Offloading and Energy Harvesting. 2023. ⟨hal-04022507v1⟩
129 views
164 downloads
