Enhancing prognostics for sparse labeled data using advanced contrastive self-supervised learning with downstream integration
Abstract
Data-driven Prognostics and Health Management (PHM) requires large, well-annotated datasets to develop algorithms that estimate and predict the health state of systems. However, acquiring run-to-failure data is costly and time-consuming, and the resulting datasets often lack comprehensive coverage of failure states, limiting the effectiveness of PHM models. This paper explores the use of Self-Supervised Learning (SSL) in PHM, addressing these limitations with a novel contrastive SSL approach that uses a nested siamese network structure to enhance degradation feature representations. Performance with sparse data is further improved by integrating downstream task information, specifically Remaining Useful Life (RUL) prediction, into the siamese structure during SSL pre-training. This integration enforces a consistency condition: the failure times predicted for two samples drawn from the same monitoring sequence must be identical. The proposed method demonstrates superior performance on the PRONOSTIA bearing dataset, outperforming state-of-the-art methods even under sparse labeling. Furthermore, the study examines the relationship between upstream and downstream learning, showing that fine-tuning significantly enhances RUL prediction by building on the feature representations established during pre-training, which sharpens the model's ability to capture subtle degradation patterns and improves the accuracy and robustness of RUL predictions. The generalizability of the strategy is confirmed through end-to-end tool wear prediction in a real industrial environment, illustrating the method's applicability across datasets and models and providing an effective solution for sparse-data scenarios in prognostics.
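To make the consistency condition concrete, the following is a minimal sketch of what such a pre-training objective could look like in PyTorch. All names (SiameseRULModel, rul_head, lambda_cons) and the specific contrastive term are illustrative assumptions, not the paper's actual implementation: two windows from the same run-to-failure sequence pass through a shared encoder, and an auxiliary RUL head is penalized whenever the failure times implied by the two windows (elapsed time plus predicted RUL) disagree.

```python
# Minimal sketch (assumed names and losses, not the paper's implementation):
# a shared-weight encoder processes two windows from the same monitoring
# sequence, and the auxiliary RUL head is constrained so that both windows
# imply the same failure time: t_i + RUL_i == t_j + RUL_j.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseRULModel(nn.Module):
    def __init__(self, in_dim: int, emb_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared encoder branch
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )
        self.rul_head = nn.Linear(emb_dim, 1)  # auxiliary downstream RUL head

    def forward(self, x):
        z = self.encoder(x)
        return z, self.rul_head(z).squeeze(-1)

def pretraining_loss(model, x_i, x_j, t_i, t_j, lambda_cons=1.0):
    """x_i, x_j: two windows from one sequence; t_i, t_j: their elapsed times."""
    z_i, rul_i = model(x_i)
    z_j, rul_j = model(x_j)
    # Contrastive-style term (illustrative): pull same-sequence embeddings together.
    contrastive = 1.0 - F.cosine_similarity(z_i, z_j).mean()
    # Consistency term: both windows must agree on the sequence's failure time.
    consistency = F.mse_loss(t_i + rul_i, t_j + rul_j)
    return contrastive + lambda_cons * consistency
```

A full contrastive objective would also use negative pairs drawn from different sequences; they are omitted here for brevity, as is the fine-tuning stage in which the pre-trained encoder is adapted on the sparsely labeled RUL data.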