Robust flow control and optimal sensor placement using deep reinforcement learning
Abstract
This paper focuses on finding a closed-loop strategy to reduce the drag of a cylinder in laminar flow conditions. Deep reinforcement learning algorithms are implemented to discover efficient control schemes, using two synthetic jets located at the poles of the cylinder as actuators and pressure sensors in its wake as feedback observations. The present work examines the efficiency and robustness of the identified control strategy and introduces a novel algorithm (S-PPO-CMA) to optimise the sensor layout. An energy-efficient control strategy reducing drag by 18.4% at a Reynolds number of 120 is obtained. This control policy is shown to be robust both to Reynolds-number variations in the range [100, 216] and to measurement noise, with a negligible impact on performance for signal-to-noise ratios as low as 0.2. Along with a systematic study of sensor number and location, the proposed sparsity-seeking algorithm successfully reduces the layout to five sensors while maintaining state-of-the-art performance. These results further highlight the potential of reinforcement learning for active flow control and pave the way for efficient, robust and practical implementations of these control techniques in experimental or industrial systems.
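To fix ideas on the closed-loop setting summarised above, the sketch below shows one possible way to expose such a problem to a reinforcement-learning agent: wake pressure probes form the observation, the synthetic-jet mass flow rate is the action, and the reward penalises drag. This is not the authors' code; the environment class, its parameters and the stubbed solver call (`_advance_flow`) are illustrative assumptions only.

```python
# Minimal sketch (assumed interface, not the paper's implementation) of a
# Gym-style environment for cylinder-wake flow control.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CylinderFlowEnv(gym.Env):
    """Hypothetical wrapper around a 2-D cylinder-flow simulation."""

    def __init__(self, n_sensors: int = 5, max_jet_flow: float = 0.06):
        super().__init__()
        # Observation: pressure readings from n_sensors probes in the wake.
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(n_sensors,), dtype=np.float32
        )
        # Action: signed mass flow rate of the two opposed polar jets.
        self.action_space = spaces.Box(
            -max_jet_flow, max_jet_flow, shape=(1,), dtype=np.float32
        )
        self.n_sensors = n_sensors

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(self.n_sensors, dtype=np.float32), {}

    def step(self, action):
        # Advance the flow solver over one control interval (stubbed here).
        pressures, drag, lift = self._advance_flow(float(action[0]))
        # Illustrative reward: reduce drag while discouraging large lift excursions.
        reward = -drag - 0.2 * abs(lift)
        return pressures, reward, False, False, {}

    def _advance_flow(self, jet_flow):
        # Placeholder: a real implementation would step a Navier-Stokes solver.
        return np.zeros(self.n_sensors, dtype=np.float32), 0.0, 0.0
```

Any standard policy-gradient agent (e.g. PPO) can then be trained against this interface; the sensor-selection step described in the abstract would act on top of it by masking or re-weighting entries of the observation vector.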