A Methodology to Scale Containerized HPC Infrastructures in the Cloud
Abstract
This paper introduces a generic method to scale HPC clusters on top of the Kubernetes cloud orchestrator. Users describe their target infrastructure with the usual Kubernetes recipe syntax, and our approach automatically translates that description into a full-fledged containerized HPC cluster. Moreover, resource extensions and shrinks are handled, allowing a dynamic resize of the containerized HPC cluster without disturbing its operation. The Kubernetes orchestrator acts as a provisioner. We applied the generic method to three Open Source HPC schedulers with orthogonal architectural designs: SLURM, OAR, and OpenPBS. Through a series of experiments, the paper demonstrates the potential of our approach regarding the scalability issues of HPC clusters and the simultaneous deployment of several job schedulers on the same physical infrastructure. It should be noted that our approach requires no modification of either the container orchestrator or the HPC schedulers. Our proposal is a step toward reconciling the two ecosystems of HPC and cloud. It also calls for new research directions and concrete implementations for the dynamic consolidation of servers or energy-frugal placement policies at the orchestrator level. This work contributes a new approach to running HPC clusters in a cloud environment and tests the robustness of the technique by adding and removing nodes on the fly.
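To make the provisioning role of the orchestrator concrete, the following minimal sketch shows how a containerized HPC cluster could be resized on the fly through the Kubernetes API. It is not the paper's implementation: the StatefulSet name "compute-nodes", the "hpc" namespace, and the use of the Python kubernetes client are all assumptions for illustration.

```python
# Minimal sketch (assumed names, not the paper's implementation): resize a
# containerized HPC cluster by scaling a StatefulSet of compute-node Pods.
# The Kubernetes orchestrator acts as the provisioner; the HPC scheduler
# Pods keep running while compute nodes are added or removed.
from kubernetes import client, config


def resize_hpc_cluster(replicas: int,
                       name: str = "compute-nodes",
                       namespace: str = "hpc") -> None:
    """Grow or shrink the set of containerized compute nodes on the fly."""
    config.load_kube_config()          # assumes a local kubeconfig is present
    apps = client.AppsV1Api()
    # Patch only the scale subresource so the rest of the cluster
    # (scheduler, controller, shared services) is left untouched.
    apps.patch_namespaced_stateful_set_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    resize_hpc_cluster(8)   # e.g. extend the cluster to eight compute nodes
```

In this reading, "dynamic resize without disturbing the running cluster" amounts to changing a replica count and letting the orchestrator reconcile the desired state, while the job scheduler inside the cluster registers or drains the corresponding nodes.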
Keywords
Resource management in HPC Clusters and Clouds
Containers
Scalability
Orchestration
Aggregation and federation of HPC Clusters in the Cloud
Domains
Computer Science [cs]

Origin: Files produced by the author(s)