Preprint Article, Version 2 (not peer-reviewed)

On Detecting and Preventing Jamming Attacks with Machine Learning in Optical Networks

Version 1 : Received: 29 January 2019 / Approved: 30 January 2019 / Online: 30 January 2019 (10:16:14 CET)
Version 2 : Received: 17 June 2019 / Approved: 18 June 2019 / Online: 18 June 2019 (07:26:54 CEST)

How to cite: Bensalem, M.; Singh, S.K.; Jukan, A. On Detecting and Preventing Jamming Attacks with Machine Learning in Optical Networks. Preprints 2019, 2019010311. https://doi.org/10.20944/preprints201901.0311.v2

Abstract

Optical networks are prone to power jamming attacks aimed at service disruption. This paper presents a Machine Learning (ML) framework for the detection and prevention of jamming attacks in optical networks. We evaluate various ML classifiers for detecting out-of-band jamming attacks of varying intensities. Numerical results show that the artificial neural network is the fastest ($10^6$ detections per second) at inference and the most accurate ($\approx 100\%$) in detecting power jamming attacks as well as identifying the attacked optical channels. We also study a prevention mechanism for the case when the system is under active jamming attacks. For this scenario, we propose a novel resource reallocation scheme that uses the statistical information of attack detection accuracy to lower the probability of successfully jamming a lightpath while minimizing the number of lightpath reallocations. Simulation results show that the likelihood of jamming a lightpath decreases with increasing detection accuracy, and that attack localization reduces the number of reallocations required.
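To make the detection idea concrete, the sketch below illustrates (under stated assumptions, not as the authors' implementation) how a small artificial neural network can be trained to classify whether a lightpath is under out-of-band jamming and, if so, which channel is attacked. The per-channel power features, the synthetic data generator, and the network size are illustrative assumptions; in the paper, the inputs would come from optical performance monitoring data.

```python
# Minimal sketch, assuming per-channel received-power features and synthetic labels.
# Class 0 = no attack; class k (1..n_channels) = jamming on channel k-1.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 5000

# Baseline per-channel power readings (arbitrary units) with measurement noise.
X = rng.normal(loc=0.0, scale=0.1, size=(n_samples, n_channels))
y = rng.integers(0, n_channels + 1, size=n_samples)

for i, label in enumerate(y):
    if label > 0:
        # Jamming of varying intensity raises power on the attacked channel
        # and leaks into spectral neighbours (out-of-band effect).
        intensity = rng.uniform(0.5, 2.0)
        ch = label - 1
        X[i, ch] += intensity
        if ch > 0:
            X[i, ch - 1] += 0.3 * intensity
        if ch < n_channels - 1:
            X[i, ch + 1] += 0.3 * intensity

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Small feed-forward ANN classifier; hidden-layer sizes are arbitrary here.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Because the synthetic jamming signature is strong relative to the noise, this toy classifier reaches high accuracy; on real monitoring data the achievable accuracy would depend on attack intensity and measurement noise, which is exactly the trade-off the paper evaluates.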

Keywords

optical networks; jamming attacks; machine learning; detection and prevention; routing and spectrum assignment; security

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
