Learning-based Approaches to Estimate Job Wait Time in HTC Datacenters
Abstract
High Throughput Computing datacenters are a cornerstone of scientific discoveries in the fields of High Energy Physics and Astroparticle Physics. These datacenters provide thousands of users from dozens of scientific collaborations with tens of thousands of computing cores and petabytes of storage. The scheduling algorithm used in such datacenters to handle the millions of (mostly single-core) jobs submitted every month ensures a fair sharing of the computing resources among user groups, but may also cause unpredictably long job wait times for some users. The time a job will wait depends on many entangled factors and configuration parameters and is thus very hard to predict. Moreover, batch systems implementing a fair-share scheduling algorithm cannot provide users with any estimation of the job wait time at submission time. Therefore, we investigate in this paper how learning-based techniques applied to the logs of the batch scheduling system of a large HTC datacenter can be used to estimate job wait time. First, we illustrate users' need for such an estimation. Then, we identify some intuitive causes of this wait time from the information found in the batch system logs. We also formally analyze the correlation between job and system features and job wait time. Finally, we study several Machine Learning algorithms to implement learning-based estimators of both job wait time and job wait time ranges. Our experimental results show that a regression-based estimator can predict job wait time with a median absolute percentage error of about 54%, while a classifier that combines regression and classification assigns nearly 77% of the jobs to the correct wait time range or to an immediately adjacent one.
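To make the approach concrete, the sketch below shows one possible way to train a regression-based wait-time estimator from batch log features and evaluate it with the median absolute percentage error quoted above. It is only an illustration of the general technique, not the paper's actual pipeline: the file name, feature names, and choice of regressor are assumptions.

```python
# Illustrative sketch (assumptions, not the paper's actual pipeline):
# train a regressor on job/system features extracted from batch system
# logs and evaluate it with the median absolute percentage error.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical log extract: one row per job, with its observed wait time.
logs = pd.read_csv("batch_logs.csv")
features = ["queue_length", "requested_cores", "group_share_used", "hour_of_day"]
X, y = logs[features], logs["wait_time_seconds"]

# Keep chronological order (no shuffling) so we predict future jobs from past ones.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = GradientBoostingRegressor()  # any regression algorithm could be substituted
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Median absolute percentage error over the test jobs.
mape = np.median(np.abs((y_test - pred) / y_test)) * 100
print(f"Median absolute percentage error: {mape:.1f}%")
```

A wait-time-range estimator could be built similarly by binning the predicted (or observed) wait times into ranges and measuring how often a job falls in the correct bin or an adjacent one.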