Preprint
Article

This version is not peer-reviewed.

Explainable AI for Securing Perception-Layer Sensor Data in IoT Environmental Danger Detection Systems

Submitted: 07 May 2026

Posted: 08 May 2026


Abstract
This paper presents an explainable defense framework against perception-layer and Man-in-the-Middle (MitM) attacks in Internet of Things (IoT)-based environmental hazard warning systems. These systems rely on heterogeneous sensors (gas, light, sound, temperature, and humidity) whose integrity is crucial for reliable environmental alerts. Perception-layer attacks such as spoofing, jamming, and data injection can compromise sensor readings, while MitM attacks threaten communication reliability. The proposed approach integrates Dynamic Time Warping (DTW) for time-series anomaly detection with Shapley Additive Explanations (SHAP) for interpretability. A comparative evaluation framework jointly considers detection performance and explanation quality: a Causal Ground Truth is pre-registered from network protocol specifications, and Spearman's rank correlation of the SHAP outputs against it is measured, eliminating the need for manual expert evaluation. Experimental simulations on the EdgeIIoT-2022 dataset demonstrate high detection accuracy and moderate explainability scores. The results confirm the framework's ability to detect and explain adversarial behaviors in sensor networks, strengthening trust, transparency, and resilience in safety-critical IoT infrastructures.
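The DTW-based anomaly check described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor values, window length, and distance threshold are all assumptions chosen for the example.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def is_anomalous(reference, window, threshold=5.0):
    """Flag a sensor window whose DTW distance to a trusted baseline exceeds the threshold."""
    return dtw_distance(reference, window) > threshold

baseline = [21.0, 21.1, 21.3, 21.2, 21.1]   # normal temperature trace
spoofed  = [21.0, 35.0, 36.0, 35.5, 21.1]   # injected readings (simulated spoofing)
print(is_anomalous(baseline, spoofed))      # → True
```

In the full framework, windows flagged this way would then be passed to a SHAP explainer so that the features driving each detection can be inspected and correlated against the pre-registered ground truth.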
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.