Preprint Article, Version 1 (preserved in Portico). This version is not peer-reviewed.

Adversarial Attacks Can Deceive AI Systems, Leading to Misclassification or Incorrect Decisions

Version 1: Received: 29 September 2023 / Approved: 29 September 2023 / Online: 29 September 2023 (08:42:10 CEST)

How to cite: Radanliev, P.; Santos, O. Adversarial Attacks Can Deceive AI Systems, Leading to Misclassification or Incorrect Decisions. Preprints 2023, 2023092064. https://doi.org/10.20944/preprints202309.2064.v1

Abstract

This analysis examines adversarial attacks in artificial intelligence (AI), providing an overview of the methods used to compromise machine learning models. It surveys attack techniques ranging from the simple Fast Gradient Sign Method (FGSM) to the more intricate Carlini and Wagner (C&W) attack, emphasising the breadth of adversarial approaches and their intended goals. The discussion distinguishes between targeted and non-targeted attacks, highlighting the adaptability of these malicious efforts, and examines black-box attacks, which show that adversarial strategies can compromise models even when the attacker has limited knowledge of them. Real-world examples from self-driving cars, multimedia, and voice assistants illustrate the tangible consequences and potential dangers of adversarial attacks, and underscore the difficulty of ensuring the integrity and dependability of AI-powered technologies. The article stresses the importance of ongoing research and innovation to address the growing challenges posed by advanced methods such as deepfakes and disguised voice commands. The study offers insights into how adversarial strategies and defence mechanisms interact within AI, and the results emphasise the urgent need for stronger, more secure AI models to counter the increasing number of adversarial threats. These findings can guide future research towards more resilient AI technologies that better withstand adversarial vulnerabilities.
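
For readers unfamiliar with the FGSM technique named above, the following is a minimal, illustrative sketch of a non-targeted FGSM perturbation in PyTorch. It is not code from the study; the classifier `model`, the labelled batch `(x, y)`, and the perturbation budget `epsilon` are assumptions, and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Non-targeted FGSM: perturb x by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true labels
    loss.backward()
    # Take one step of size epsilon in the direction that increases the loss,
    # then clamp back to the assumed valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

A targeted variant would instead step in the direction that decreases the loss with respect to an attacker-chosen target label, which connects to the targeted versus non-targeted distinction discussed in the abstract.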

Keywords

adversarial attacks; artificial intelligence; machine learning; defense mechanisms; system integrity; model vulnerabilities; advanced attack techniques; Fast Gradient Sign Method (FGSM); Carlini and Wagner Attack (C&W); targeted attacks; non-targeted attacks; black-box attacks

Subject

Computer Science and Mathematics, Security Systems
