An interpretable model for bridge scour risk assessment using explainable artificial intelligence and engineers’ expertise
Abstract
A machine learning (ML) model is difficult to apply in practice without knowing how its predictions are made. To address this issue, this paper uses explainable artificial intelligence (XAI) and engineers' expertise to interpret an ML model for bridge scour risk prediction. Using data from the French National Railway Company (SNCF), an ML model based on the extreme gradient boosting (XGBoost) algorithm was first constructed. XAI approaches were then employed to obtain global and local explanations, as well as explicit expressions, for interpreting the model. In parallel, a group of SNCF engineers was asked to rank the input parameters according to their engineering judgment. Finally, the feature importance obtained from the XAI approaches was compared with that derived from the engineers' survey. For both the XAI and the engineers' interpretations, the observation of local scour around the bridge foundation was found to be the most important feature for decision-making. The differences between the XAI interpretations and human expertise highlight the importance of hydrological and hydromorphological knowledge for scour risk assessment, since engineers currently make decisions primarily based on observed damage (e.g., scour holes, cracks). The results of this paper help make the ML model trustworthy by clarifying how its predictions are made, and they provide valuable guidance for improving current inspection procedures.
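The following is a minimal sketch of the kind of workflow summarized above: training an XGBoost classifier on tabular inspection data and extracting global and local feature attributions. The paper's own code and data are not reproduced here; SHAP is assumed as the XAI method, and the file path, column names, and hyperparameters are hypothetical placeholders rather than the SNCF setup.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical inspection dataset: tabular features plus a binary scour risk label.
data = pd.read_csv("bridge_scour_inspections.csv")   # placeholder path
X = data.drop(columns=["scour_risk"])                # placeholder label column
y = data["scour_risk"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# XGBoost-based model for scour risk prediction (illustrative hyperparameters).
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Global explanation: rank features by mean absolute SHAP value over the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)          # shape: (n_samples, n_features)
global_importance = pd.Series(
    np.abs(shap_values).mean(axis=0), index=X.columns
).sort_values(ascending=False)
print(global_importance)

# Local explanation: per-feature contributions to a single bridge's prediction.
local_contrib = pd.Series(shap_values[0], index=X.columns)
print(local_contrib.sort_values(key=abs, ascending=False))
```

The global ranking can then be compared, feature by feature, with the ranking elicited from engineers, which is the comparison the abstract describes.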