Preprint Article, Version 1 (preserved in Portico). This version is not peer-reviewed.

Defending The Defender: Detecting Adversarial Examples For Network Intrusion Detection Systems

Version 1: Received: 15 December 2022 / Approved: 22 December 2022 / Online: 22 December 2022 (03:04:56 CET)

How to cite: Khettaf, D.; Bouzar-Benlabiod, L. Defending The Defender: Detecting Adversarial Examples For Network Intrusion Detection Systems. Preprints 2022, 2022120409. https://doi.org/10.20944/preprints202212.0409.v1

Abstract

Advances in network security threats have led to the development of new Intrusion Detection Systems (IDS) that rely on deep learning algorithms, known as deep IDS. Like other systems based on deep learning, deep IDS suffer from adversarial examples: malicious inputs crafted to change the prediction of a machine learning or deep learning model. Protecting deep learning models against adversarial examples remains an open challenge. In this paper, we propose "NIDS-Defend", a framework that enhances the robustness of network IDS against adversarial attacks. The framework is composed of two layers, a statistical test and a classifier, that together detect adversarial examples in real time. Detection proceeds in two steps: (1) the statistical test flags flows that contain adversarial examples, and (2) the classifier extracts the individual adversarial examples from the flagged flows. We evaluate our approach on a binary IDS with the NSL-KDD dataset, using two crafting methods to generate adversarial examples: (1) the Boundary Attack and (2) the HopSkipJumpAttack. We first investigate the vulnerability of a network IDS to adversarial examples and then apply our defense. The statistical test distinguishes adversarial flows with more than 95% accuracy, and the classifier detects individual adversarial examples with more than 80% accuracy. We also show that our framework detects adversarial examples crafted by an adversary who is aware of the defense, confirming the effectiveness of our solution against adversarial attacks.
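To make the two-step pipeline concrete, the sketch below illustrates one possible realization. The abstract does not specify which statistical test or which classifier NIDS-Defend uses, so this is a minimal sketch assuming a two-sample Kolmogorov-Smirnov test for step (1) and a random-forest detector for step (2); the function names, the decision rule, and the synthetic data standing in for NSL-KDD's 41 features are all illustrative assumptions, not the authors' implementation.

# A minimal sketch of a two-step flow-then-example detection pipeline,
# under the assumptions stated above (KS test and random forest are
# stand-ins for the unspecified components of NIDS-Defend).
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

def flow_is_adversarial(flow, clean_reference, alpha=0.05):
    """Step 1: flag a flow (a batch of feature vectors) whose
    per-feature distribution drifts from a clean reference set."""
    p_values = [ks_2samp(flow[:, j], clean_reference[:, j]).pvalue
                for j in range(flow.shape[1])]
    # Bonferroni-corrected rule: flag the flow if any feature's
    # distribution shifted significantly (an illustrative choice).
    return min(p_values) < alpha / flow.shape[1]

def extract_adversarial_examples(flows, clean_reference, detector):
    """Step 2: inside flagged flows, score each record with a binary
    detector trained to separate clean from adversarial examples."""
    extracted = []
    for flow in flows:
        if flow_is_adversarial(flow, clean_reference):
            mask = detector.predict(flow) == 1  # label 1 = adversarial
            extracted.append(flow[mask])
    return extracted

# Illustrative usage with synthetic data standing in for the 41
# NSL-KDD features; a real run would use actual traffic records.
rng = np.random.default_rng(0)
clean = rng.normal(size=(1000, 41))
adv = clean[:200] + rng.normal(0.5, 0.1, size=(200, 41))  # perturbed copies

detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(np.vstack([clean[:200], adv]),
             np.r_[np.zeros(200), np.ones(200)])

print(extract_adversarial_examples([adv, clean[200:400]],
                                   clean[400:], detector))

Both crafting methods named in the abstract, the Boundary Attack and the HopSkipJumpAttack, are decision-based black-box attacks that query only the model's predicted label, which is why a distribution-level test over whole flows is a natural first screening layer before per-example classification.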

Keywords

intrusion detection systems, adversarial examples, adversarial attacks, adversarial machine learning, statistics.

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
