Assessing the effect of adversarial training on IDSs and GANs
Abstract
Deep neural network-based Intrusion Detection Systems (IDSs) are gaining popularity as a means to improve the accuracy and robustness of anomaly detection. Yet, deep neural network (DNN) models have been shown to be vulnerable to adversarial attacks. An attacker can use a generator, here a Generative Adversarial Network (GAN), to alter an attack so that the IDS model misclassifies it as normal network traffic. There is an arms race between adversarial attacks and mechanisms, such as adversarial training, that make IDSs robust. To our knowledge, no study has thoroughly assessed how sensitive attack generators and IDS training are to the parameters controlling the resources spent during training. Such results provide insight into how many resources should be devoted to IDS training. This paper presents the outcome of this assessment for GANs versus adversarial training. Interestingly, it shows that GANs' evasion capabilities are either very good or very poor, with almost no intermediate cases. The resources spent mainly affect the likelihood of obtaining an effective generator.