Towards Efficient Parallel GPU Scheduling: Interference Awareness with Schedule Abstraction
Abstract
GPUs are powerful computing architectures that are increasingly used in embedded systems to implement complex intelligent applications. Unfortunately, their temporal behavior is difficult to predict, especially when multiple parallel tasks are executed concurrently. Running a single task at a time may result in severe underutilization of the resources; on the other hand, running multiple tasks concurrently may introduce mutual interference.
In this work, we introduce the Parallel Batch Scheduler (PBS) to enable parallel execution of a set of real-time tasks on GPUs. PBS avoids concurrent execution when it might jeopardize schedulability, and it identifies scenarios where parallel flows can enhance platform utilization and therefore schedulability. To find the feasible scenarios, we propose a schedulability analysis based on a scheduling graph, in which all possible concurrent and serialized scenarios are evaluated for schedulability. To mitigate the state-space explosion, we propose a technique to reduce the size of the graph.
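To give an intuition of how such a graph-based analysis can work, the following is a minimal, hypothetical sketch in Python: each state records the set of pending tasks and the current time, each edge dispatches either one task serialized or two tasks co-run in parallel (with an inflated worst-case execution time to account for interference), and equivalent states are merged to keep the graph small. All names (Task, explore) and the interference model are illustrative assumptions, not the PBS analysis itself.

# Illustrative sketch only: simplified schedule-graph exploration with two
# dispatch choices per step (serialize a task, or co-run two tasks in parallel).
# Hypothetical placeholder code, not the PBS implementation from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    wcet_alone: int      # worst-case execution time when run alone
    wcet_parallel: int   # inflated WCET under worst-case co-run interference
    deadline: int        # absolute deadline

def explore(tasks: tuple[Task, ...]) -> bool:
    """Return True if some mix of serialized/parallel dispatches meets all deadlines."""
    seen = set()  # merged states: prune (pending set, time) pairs already shown infeasible

    def visit(pending: frozenset[int], time: int) -> bool:
        if not pending:
            return True                      # all tasks finished: feasible leaf
        if (pending, time) in seen:
            return False                     # equivalent state already explored and infeasible
        seen.add((pending, time))

        for i in pending:
            t = tasks[i]
            # Edge 1: serialize task i (no interference, but longer makespan).
            if time + t.wcet_alone <= t.deadline and \
               visit(pending - {i}, time + t.wcet_alone):
                return True
            # Edge 2: co-run task i with another pending task j in parallel.
            for j in pending - {i}:
                u = tasks[j]
                finish = time + max(t.wcet_parallel, u.wcet_parallel)
                if finish <= min(t.deadline, u.deadline) and \
                   visit(pending - {i, j}, finish):
                    return True
        return False

    return visit(frozenset(range(len(tasks))), 0)

if __name__ == "__main__":
    ts = (Task(4, 6, 10), Task(4, 6, 10), Task(5, 8, 20))
    print(explore(ts))  # True if a feasible serialized/parallel mix exists

In this toy version the state merging is just memoization on (pending, time); the actual analysis operates on richer abstract states, but the principle of pruning equivalent branches to contain the graph size is the same.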
Through an extensive set of experiments, we demonstrate that PBS outperforms both serialized and fully parallel execution approaches, highlighting its effectiveness in maximizing GPU utilization while maintaining schedulability. We further illustrate the practicality of our approach with a tool that takes the execution trace generated by our schedulability analysis and manages GPU workload submissions for the corresponding GPU tasks.
CCS Concepts: • Computer systems organization → Real-time system specification.
Domains
Computer Science [cs]