Online scheduling optimization for DAG-based requests through reinforcement learning in collaborative edge networks
Abstract
The wide adoption of edge computing promotes the scheduling of tasks within complex requests onto smart devices at the network edge, while tasks must still be offloaded to the cloud when they are intensive in computational and energy resources. Traditional techniques mostly explore the scheduling of atomic tasks, whereas the scheduling of complex requests on edge servers remains a largely unexplored challenge. To address this challenge, this paper proposes an online task scheduling optimization for DAG-based requests at the network edge, where the scheduling procedure is modeled as a Markov decision process in which the system state, request, and decision space are formally specified. A temporal-difference learning based mechanism is adopted to learn an optimal task allocation strategy at each decision stage. Extensive experiments are conducted, and evaluation results demonstrate that our technique effectively reduces the system's long-term average delay and energy consumption in comparison with state-of-the-art counterparts.
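To make the temporal-difference learning idea concrete, the following is a minimal illustrative sketch of a tabular TD(0)/Q-learning update for deciding where each ready DAG task runs. The action set, hyperparameters, and state/cost encodings (ACTIONS, ALPHA, GAMMA, EPSILON, a delay-plus-energy cost signal) are assumptions for illustration, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Assumed decision space: run the task on the local edge node, a peer edge node, or the cloud.
ACTIONS = ["local_edge", "peer_edge", "cloud"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration rate

# Q[(state, action)] estimates the long-term cost-to-go (delay + energy) of an allocation decision.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy selection over allocation decisions for the current task."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(state, a)])  # minimize estimated cost

def td_update(state, action, cost, next_state):
    """One temporal-difference step; `cost` combines the observed delay and energy consumption."""
    best_next = min(Q[(next_state, a)] for a in ACTIONS)
    target = cost + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```

In this sketch the agent is invoked at each decision stage: it observes the current system/request state, picks an allocation with `choose_action`, and refines its estimates with `td_update` once the resulting delay and energy are observed.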