An explainable-by-design ensemble learning system to detect unknown network attacks
Abstract
Machine learning (ML) is a promising technology for network intrusion detection systems. A wide range of ML algorithms are potential candidates for such systems, as they exhibit very good detection accuracy on average. However, significant differences in detection performance appear across different kinds of attacks: some algorithms are better at detecting particular attack types, so they often complement each other. The challenge then lies in determining the correct result when several ML models disagree, without any explanation of their decisions. To address this challenge, our system reconstructs attack patterns from the outputs of these ML models and presents them in an interpretable manner. To that end, we propose an approach combining ensemble learning and stacking with a meta-learner that works on a graphical representation of traffic flows, which provides the required level of explainability for the decisions made. The evaluation of our system on the CSE-CIC-IDS2018 dataset demonstrates a significant improvement achieved by combining multiple ML algorithms. Furthermore, we emphasize the importance of explainability in network intrusion detection systems and the need for accurate and interpretable models. Our system goes beyond traditional detection methods by reporting anomalous feature pairs and providing visual representations of attack patterns, empowering analysts to better understand and respond to network threats.
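As a rough illustration of the stacking architecture the abstract describes, the sketch below trains several base detectors and a meta-learner with scikit-learn. It is a minimal sketch under stated assumptions: the synthetic data, the choice of base algorithms, and the plain logistic-regression meta-learner are all placeholders, since the paper's actual meta-learner operates on a graphical representation of traffic flows rather than on raw base-model outputs.

```python
# Minimal stacking-ensemble sketch (scikit-learn). Placeholders only:
# the paper's meta-learner works on a graph representation of flows,
# which is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for flow features (e.g., CSE-CIC-IDS2018 columns).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base detectors: each algorithm tends to excel at different attack types,
# which is why their combination can outperform any single model.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=0)),
]

# The meta-learner arbitrates when base models disagree; a logistic
# regression is used here purely as a stand-in.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")
```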
Domains
Computer Science [cs]

Origin: Files produced by the author(s)