Submitted:
30 July 2024
Posted:
02 August 2024
Abstract
Keywords:
1. Introduction
2. Background
2.1. Adversarial Patch Attacks
2.2. Dpatch
2.3. Autoencoders
2.4. Structural Similarity Index Measure
3. Adversarial Patch Attacks on Object Detectors
4. SSIM-Based Autoencoder Modeling
| Algorithm 1: SSIM-Based Autoencoder Modeling |
|---|
| **Input:** data point X |
| **Output:** trained autoencoder AE |
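The body of Algorithm 1 is not reproduced above, but its core ingredient is the SSIM of Wang et al. [18], used both as the autoencoder's reconstruction-quality measure and as the basis for screening patched inputs. The sketch below is a minimal, hypothetical illustration only: `ssim` implements a simplified single-window version of the metric (the full metric averages over local Gaussian-weighted windows), and `flag_patched` is an assumed screening rule, not the authors' exact procedure; the trained autoencoder itself is not modeled here.

```python
import numpy as np

def ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified global (single-window) SSIM of Wang et al. [18].

    The full metric averages SSIM over local Gaussian-weighted windows;
    here a single window covers the whole image.
    """
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def flag_patched(image, reconstruction, threshold=0.9):
    """Hypothetical screening rule: if the autoencoder's reconstruction
    is structurally far from its input, suspect an adversarial patch."""
    return ssim(image, reconstruction) < threshold

# Toy demonstration: a localized patch lowers SSIM against the clean image.
rng = np.random.default_rng(0)
clean = rng.random((32, 32))
patched = clean.copy()
patched[:8, :8] = 1.0  # crude stand-in for an adversarial patch
```

An identical pair yields SSIM of 1, while overwriting even a small region drives the score down, which is the property the SSIM-based modeling exploits.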
5. The Countermeasure Against Adversarial Patch Attacks
5.1. Traffic Sign Detection in Normal Detector
5.2. Results on Adversarial Patch Attacks
5.3. Object Detection Using Proposed SSIM-Based Autoencoder
6. Conclusion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Brown, T.B.; Mané, D.; Roy, A.; Abadi, M.; Gilmer, J. Adversarial patch. In Proceedings of the NIPS 2017 Workshop on Machine Learning and Computer Security; 2017. [Google Scholar]
- Lengyel, H.; Remeli, V.; Szalay, Z. Easily deployed stickers could disrupt traffic sign recognition. Perner’s Contacts 2019, 19, 156–163. [Google Scholar]
- Thys, S.; Van Ranst, W.; Goedemé, T. Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; 2019. [Google Scholar]
- Liu, X.; Yang, H.; Yang, L.Z.; Song, L.; Li, H.; Chen, Y. DPatch: An adversarial patch attack on object detectors. In Proceedings of the AAAI Workshops; 2018. [Google Scholar]
- Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Xiao, C.; et al. Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018; pp. 1625–1634. [Google Scholar]
- Hu, Y.C.T.; Kung, B.H.; Tan, D.S.; Chen, J.C.; Hua, K.L.; Cheng, W.H. Naturalistic physical adversarial patch for object detectors. In Proceedings of the IEEE/CVF International Conference on Computer Vision; 2021; pp. 7848–7857. [Google Scholar]
- Naseer, M.; Khan, S.; Porikli, F. Local gradients smoothing: Defense against localized adversarial attacks. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV); 2019; pp. 1300–1307. [Google Scholar]
- Yin, S.L.; Zhang, X.L.; Zuo, L.Y. Defending against adversarial attacks using spherical sampling-based variational auto-encoder. Neurocomputing 2022, 1–10. [Google Scholar] [CrossRef]
- Tsuruoka, G.; Sato, T.; Chen, Q.A.; Nomoto, K.; Tanaka, Y.; Kobayashi, R.; et al. WIP: Adversarial Retroreflective Patches: A Novel Stealthy Attack on Traffic Sign Recognition at Night. In Proceedings of the Symposium on Vehicle Security and Privacy; 2024. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations (ICLR’15); 2015; pp. 1–11. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. In Proceedings of the International Conference on Learning Representations (ICLR’18); 2017; pp. 1–11. [Google Scholar]
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP); 2017; pp. 39–57. [Google Scholar]
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016; pp. 2574–2582. [Google Scholar]
- Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy; 2016; pp. 372–387. [Google Scholar]
- Lin, X.; Li, Y.; Hsiao, J.; Ho, C.; Kong, Y. Catch missing details: Image reconstruction with frequency augmented variational autoencoder. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2023; pp. 1736–1745. [Google Scholar]
- Suganuma, M.; Ozay, M.; Okatani, T. Exploiting the potential of standard convolutional autoencoders for image restoration by evolutionary search. In Proceedings of the International Conference on Machine Learning; 2018; pp. 4771–4780. [Google Scholar]
- Mao, X.J.; Shen, C.; Yang, Y.B. Image restoration using convolutional auto-encoders with symmetric skip connections. In Proceedings of the Neural Information Processing Systems; 2016; pp. 1–17. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Song, D.; Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; et al. Physical adversarial examples for object detectors. In Proceedings of the 12th USENIX Workshop on Offensive Technologies (WOOT 18); 2018. [Google Scholar]
- Pavlitska, S.; Lambing, N.; Zöllner, J.M. Adversarial attacks on traffic sign recognition: A survey. In Proceedings of the 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME); 2023; pp. 1–6. [Google Scholar]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; et al. Microsoft COCO: Common objects in context. In Proceedings of the Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Part V; Springer, 2014; pp. 740–755. [Google Scholar]
- Mogelmose, A.; Trivedi, M.M.; Moeslund, T.B. Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey. IEEE Transactions on Intelligent Transportation Systems 2012, 13, 1484–1497. [Google Scholar] [CrossRef]
- Chan, H.L. TrafficSign Detection Dataset [Open Source Dataset]; 2022. Available online: https://universe.roboflow.com/chan-hung-luu/trafficsign-detection.

| Dataset | YOLOv3 | YOLOv5 | YOLOv8 | Faster R-CNN |
|---|---|---|---|---|
| COCO Dataset (stop signs) | 97.64% | 98.31% | 98.03% | 96.05% |
| LISA Dataset (stop signs) | 94.26% | 94.05% | 96.71% | 98.55% |
| Traffic Sign Dataset (stop & speed signs) | 98.80% | 98.76% | 98.88% | 96.16% |
| Dataset | Attack Method | YOLOv3 | YOLOv5 | YOLOv8 | Faster R-CNN |
|---|---|---|---|---|---|
| COCO Dataset | Adv_Patch [3] | 68.01% | 52.36% | 73.48% | 53.38% |
| COCO Dataset | Dpatch [4] | 42.09% | 37.56% | 65.45% | 41.28% |
| LISA Dataset | Adv_Patch [3] | 38.20% | 22.21% | 49.16% | 49.05% |
| LISA Dataset | Dpatch [4] | 30.94% | 40.37% | 40.45% | 22.16% |
| Traffic Sign Dataset | Adv_Patch [3] | 41.34% | 32.96% | 26.55% | 54.36% |
| Traffic Sign Dataset | Dpatch [4] | 23.98% | 23.52% | 39.86% | 42.96% |
| Dataset | Attack Method | YOLOv3 Ours | YOLOv3 LGS [7] | YOLOv3 AE [8] | YOLOv5 Ours | YOLOv5 LGS [7] | YOLOv5 AE [8] | YOLOv8 Ours | YOLOv8 LGS [7] | YOLOv8 AE [8] | Faster R-CNN Ours | Faster R-CNN LGS [7] | Faster R-CNN AE [8] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| COCO Dataset | No attack | 95.15% | 96.92% | 85.80% | 95.00% | 98.27% | 85.68% | 97.68% | 95.99% | 87.41% | 88.60% | 94.63% | 79.65% |
| COCO Dataset | Adv-patch [3] | 93.06% | 79.31% | 79.31% | 91.41% | 67.63% | 75.23% | 94.93% | 77.79% | 80.77% | 86.23% | 76.31% | 75.50% |
| COCO Dataset | Dpatch [4] | 93.83% | 65.76% | 81.80% | 91.71% | 54.00% | 77.95% | 94.98% | 88.40% | 82.22% | 85.08% | 69.87% | 71.88% |
| LISA Dataset | No attack | 93.63% | 94.24% | 89.73% | 94.15% | 92.96% | 90.32% | 96.65% | 95.58% | 93.82% | 95.07% | 95.29% | 90.95% |
| LISA Dataset | Adv-patch [3] | 91.50% | 81.38% | 86.58% | 89.91% | 78.99% | 49.06% | 92.88% | 78.34% | 90.14% | 92.75% | 66.65% | 88.40% |
| LISA Dataset | Dpatch [4] | 94.09% | 45.28% | 85.21% | 90.31% | 59.03% | 78.74% | 95.19% | 88.93% | 87.19% | 90.79% | 65.85% | 82.65% |
| Traffic Sign Dataset | No attack | 98.24% | 98.10% | 83.66% | 97.62% | 97.60% | 80.11% | 98.85% | 97.27% | 80.18% | 96.16% | 96.50% | 81.81% |
| Traffic Sign Dataset | Adv-patch [3] | 88.25% | 70.85% | 61.97% | 87.47% | 66.86% | 62.45% | 91.56% | 71.69% | 63.54% | 87.02% | 52.02% | 43.64% |
| Traffic Sign Dataset | Dpatch [4] | 88.19% | 33.02% | 54.04% | 90.51% | 56.46% | 36.16% | 91.46% | 74.98% | 67.63% | 84.15% | 43.95% | 43.32% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).