Preprint
Review

This version is not peer-reviewed.

Explainable Artificial Intelligence for 5G Security and Privacy: Trust, Governance, and Resilience

Submitted: 08 December 2025

Posted: 08 December 2025


Abstract
Explainable artificial intelligence (XAI) plays a central role in strengthening security, privacy, and trust in AI-driven 5G and future 6G networks. In this review, we first refine the concepts of transparency and interpretability, and introduce the notions of marginal transparency and marginal interpretability to describe the diminishing returns that arise from progressively deeper disclosure of model internals. We then survey key XAI methods, including LIME, SHAP, interpretable neural networks, and federated, privacy-preserving techniques, and assess their suitability for wireless resource management, intrusion detection, and regulatory auditing in next-generation networks. Building on these foundations, we outline a 2025–2030 research roadmap that integrates XAI into Zero Trust architectures, edge intelligence, and self-explaining 6G systems. Across these layers, we argue that explainability should be built in as a design-time requirement, enabling wireless infrastructures that are not only high-performing but also auditable, accountable, and resilient.
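To make the surveyed attribution methods concrete, the following minimal Python sketch (an illustrative example, not taken from the paper) applies SHAP to a toy intrusion-detection classifier. The feature names and synthetic traffic data are hypothetical placeholders for 5G flow statistics.

# Illustrative sketch: SHAP attributions for a toy intrusion-detection
# classifier. All data and feature names are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["pkt_rate", "mean_pkt_size", "flow_duration", "n_dst_ports"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label: flag flows with a high packet rate that touch many ports.
y = ((X[:, 0] > 0.5) & (X[:, 3] > 0.0)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# each value is a feature's additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Depending on the shap version, binary-classifier output is either a list
# [class0, class1] or an array of shape (n_samples, n_features, n_classes).
attack_attr = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

for name, val in zip(feature_names, attack_attr[0]):
    print(f"{name}: {val:+.3f}")

An auditor could use such per-feature attributions to check that an alert was driven by plausible traffic features rather than spurious correlations, which is the kind of accountability the abstract argues for.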
Keywords: 
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.