Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

An Ontological Knowledge Base of Poisoning Attacks on Deep Neural Networks

Version 1 : Received: 8 August 2022 / Approved: 10 August 2022 / Online: 10 August 2022 (09:39:07 CEST)

How to cite: Altoub, M.; AlQurashi, F.; Yigitcanlar, T.; Corchado, J.; Mehmood, R. An Ontological Knowledge Base of Poisoning Attacks on Deep Neural Networks. Preprints 2022, 2022080197 (doi: 10.20944/preprints202208.0197.v1).

Abstract

Deep neural networks (DNNs) have delivered cutting-edge performance in several fields. With the broader deployment of DNN models in critical applications, DNN security has become an active yet nascent research area. According to recent studies, attacks against DNNs can have catastrophic results. Poisoning attacks, including backdoor and Trojan attacks, are among the growing threats against DNNs. A wide-angle view of these evolving threats is essential to better understand the security issues. In this regard, creating a semantic model and a knowledge graph for poisoning attacks can reveal the relationships between attacks across intricate data and enhance the security knowledge landscape. In this paper, we propose a DNN Poisoning Attacks Ontology (DNNPAO) to enhance knowledge sharing and enable further advancements in the field. To this end, we performed a systematic review of the relevant literature to identify the current state of the art. We collected 28,469 papers from the IEEE, ScienceDirect, Web of Science, and Scopus databases; from these, 712 research papers were screened in a rigorous process, and 55 poisoning attacks on DNNs were identified and classified. From this classification we extracted a taxonomy of poisoning attacks as a scheme for developing DNNPAO. Subsequently, we used DNNPAO as a framework to create a knowledge base. Our findings open new lines of research within the field of AI security.
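The abstract describes extracting a taxonomy of attacks (e.g., backdoor and Trojan attacks as kinds of poisoning attack) and using it as the scheme for a knowledge base. As a minimal sketch of how such a subclass taxonomy supports queries, the following illustrates parent-chain lookup; the class names are illustrative placeholders, not taken from the DNNPAO ontology itself:

```python
# Illustrative taxonomy: each class maps to its parent class (None = root).
# Class names are hypothetical examples, not the actual DNNPAO scheme.
taxonomy = {
    "PoisoningAttack": None,          # root of this fragment
    "BackdoorAttack": "PoisoningAttack",
    "TrojanAttack": "PoisoningAttack",
}

def ancestors(cls, tax):
    """Walk parent links from a class up to the root, collecting ancestors."""
    chain = []
    while tax.get(cls) is not None:
        cls = tax[cls]
        chain.append(cls)
    return chain

print(ancestors("BackdoorAttack", taxonomy))  # ['PoisoningAttack']
```

In an ontology language such as OWL, the same subsumption relationships would be stated as subclass axioms, and a reasoner would perform this kind of ancestor inference automatically.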

Keywords

Deep Neural Networks; Adversarial Attacks; Poisoning; Backdoors; Trojans; Taxonomy; Ontology; Knowledge Base; Explainable AI; Green AI

Subject

MATHEMATICS & COMPUTER SCIENCE, Artificial Intelligence & Robotics


