Preprint Article, Version 1 (not peer-reviewed; preserved in Portico)

Detection of Sensors Used for Adversarial Examples against Machine Learning Models

Version 1 : Received: 6 November 2023 / Approved: 6 November 2023 / Online: 6 November 2023 (08:19:19 CET)

How to cite: Kurniawan, A.; Ohsita, Y.; Maisuria, S.; Murata, M. Detection of Sensors Used for Adversarial Examples against Machine Learning Models. Preprints 2023, 2023110328. https://doi.org/10.20944/preprints202311.0328.v1

Abstract

Machine learning (ML) systems that rely on sensors obtain observations from those sensors and use them to recognize and interpret the current situation. Such systems are susceptible to sensor-based adversarial examples (AEs): if some sensors are vulnerable and can be compromised by an attacker, the attacker can change the output of the system by manipulating the sensor values. Detecting the compromised sensors is important for defending the system against sensor-based AEs, because once the sensors used by the attacker are identified, they can be inspected and replaced. In this paper, we propose a method to detect the sensors used in sensor-based AEs by exploiting features of the attack that the attacker cannot avoid. In this method, we introduce a model called the feature-removable model (FRM), which allows us to select the features used as inputs to the model. Our method detects the sensors used in sensor-based AEs by finding inconsistencies among the outputs of the FRM obtained by changing the selected features. We evaluated our method using a human activity recognition model with sensors attached to the user's chest, wrist, and ankle. We demonstrate that our method can accurately detect the sensors used by the attacker, achieving an average Recall of Detection of 0.92 and an average Precision of Detection of 0.72.
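To make the idea concrete, below is a minimal, hypothetical sketch of the FRM-based detection described in the abstract. The preprint page itself provides no code; the names (FeatureRemovableModel, detect_compromised_sensors), the zero-masking of removed features, and the majority-vote inconsistency rule are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: a wrapper that lets us re-run inference with
# the features of selected sensors removed, plus a leave-one-sensor-out
# check for inconsistent outputs. All names and rules are assumptions.
from collections import Counter
import numpy as np

class FeatureRemovableModel:
    """Wraps a trained classifier so inference can be run with the
    features of selected sensors masked out ("removed")."""

    def __init__(self, model, sensor_feature_indices):
        # sensor_feature_indices: dict mapping sensor name -> list of
        # column indices in the feature vector belonging to that sensor.
        self.model = model
        self.sensor_feature_indices = sensor_feature_indices

    def predict(self, x, removed_sensors=()):
        x = np.array(x, dtype=float, copy=True)
        for s in removed_sensors:
            # Zero-masking is one plausible way to "remove" a feature.
            x[..., self.sensor_feature_indices[s]] = 0.0
        return self.model.predict(x)

def detect_compromised_sensors(frm, x):
    """Run the FRM once per sensor with that sensor removed, then flag
    any sensor whose removal changes the output relative to the
    majority of the leave-one-out runs (an inconsistency signal)."""
    preds = {s: int(frm.predict(x, removed_sensors=[s])[0])
             for s in frm.sensor_feature_indices}
    majority = Counter(preds.values()).most_common(1)[0][0]
    return [s for s, p in preds.items() if p != majority]
```

The intuition behind this sketch: if one sensor is compromised, removing any other sensor leaves the attack intact, so most leave-one-out runs still return the adversarial label; only removing the compromised sensor restores a different (clean) prediction, so the outlier in the votes points to the attacked sensor.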

Keywords

Adversarial examples; compromised sensor; detection; multiple sensors; human activity recognition

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
