A Preventive Auto-Parallelization Approach for Elastic Stream Processing
Abstract
Nowadays, more and more sources (connected devices, social networks, etc.) emit real-time data at rates that fluctuate over time. Existing distributed stream processing engines (SPEs) must solve a difficult problem: deliver results that satisfy end users in terms of quality and latency without over-consuming resources. This paper focuses on the parallelization of operators so that their throughput adapts to their input rate. We propose a preventive approach that avoids operator congestion in order to limit the degradation of result quality. It relies on an automatic and dynamic adaptation of the resource consumption of each continuous query. The solution takes advantage of i) a metric estimating the activity level of operators in the near future, ii) the AUTOSCALE approach, which evaluates the need to modify parallelism degrees at both local and global scope, and iii) an integration into Apache Storm. We present performance tests comparing our approach to the native solution of this SPE.