Deep reinforcement learning for maintenance optimization of a scrap-based steel production line
Abstract
This paper presents a Deep Reinforcement Learning (DRL)-based optimization approach for determining the optimal inspection and maintenance planning of a scrap-based steel production line. The DRL-based maintenance optimization recommends appropriate times for inspections and maintenance activities based on the monitored conditions of the production line, such as machine productivity, buffer level, and production demand. Practical aspects of the system, such as the uncertainty of maintenance durations and the variable production rates of the machines, were considered. The scrap-based steel production line was modeled as a multi-component system accounting for component dependencies. A simulation model was developed to reproduce the dynamics of the system and support the development of the DRL maintenance approach. The proposed DRL-based maintenance is compared with traditional maintenance policies, such as reactive maintenance, time-based maintenance, and condition-based maintenance. In addition, different DRL algorithms, namely PPO (Proximal Policy Optimization), TRPO (Trust Region Policy Optimization), and DQN (Deep Q-Network), are investigated in the case-based scenario. The findings indicate the potential for significant financial savings. The proposed maintenance approach therefore demonstrates system adaptability and has the potential to be a powerful tool for industrial competitiveness.
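To make the decision problem concrete, the minimal sketch below frames maintenance planning as a Gymnasium environment whose observation combines machine productivity, buffer level, and production demand, and trains a PPO agent on it with Stable-Baselines3. The environment name, state ranges, cost parameters, and reward terms are hypothetical placeholders introduced for illustration, not the model or parameters used in the paper.

```python
# Minimal sketch: maintenance planning as an RL problem (hypothetical parameters).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class MaintenanceEnv(gym.Env):
    """Toy single-machine maintenance environment (illustrative only)."""

    def __init__(self):
        super().__init__()
        # Observation: [machine productivity, buffer level, production demand], each in [0, 1]
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(3,), dtype=np.float32)
        # Actions: 0 = continue production, 1 = inspect, 2 = perform maintenance
        self.action_space = spaces.Discrete(3)
        self.horizon = 200  # decision epochs per episode (assumed)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.state = np.array([1.0, 0.5, 0.5], dtype=np.float32)
        return self.state, {}

    def step(self, action):
        productivity, buffer, demand = self.state
        reward = 0.0
        if action == 1:                      # inspection: small fixed cost
            reward -= 0.05
        elif action == 2:                    # maintenance: larger cost, restores the machine
            reward -= 0.5
            productivity = 1.0
        else:                                # production: condition degrades stochastically
            productivity = max(0.0, productivity - self.np_random.uniform(0.0, 0.05))
        # Revenue from the demand actually served, limited by productivity and buffer
        served = min(demand, productivity + buffer)
        reward += float(served)
        buffer = float(np.clip(buffer + productivity - demand, 0.0, 1.0))
        demand = float(self.np_random.uniform(0.3, 0.9))
        self.t += 1
        self.state = np.array([productivity, buffer, demand], dtype=np.float32)
        return self.state, reward, self.t >= self.horizon, False, {}


if __name__ == "__main__":
    env = MaintenanceEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)      # train the PPO maintenance policy
```

Under this formulation, swapping PPO for DQN or TRPO (the latter via sb3-contrib) only changes the algorithm line, which is how the comparison of DRL algorithms reported in the paper can be organized.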