Explainable artificial intelligence (XAI) plays a central role in strengthening security, privacy, and trust in AI-driven 5G and future 6G networks. In this review, we first refine the concepts of transparency and interpretability, and introduce the notions of marginal transparency and marginal interpretability to describe the diminishing returns that arise from progressively deeper disclosure of model internals. We then survey key XAI methods, including LIME, SHAP, interpretable neural networks, and federated, privacy-preserving techniques, and assess their suitability for wireless resource management, intrusion detection, and regulatory auditing in next-generation networks. Building on these foundations, we outline a 2025–2030 research roadmap that integrates XAI into Zero Trust architectures, edge intelligence, and self-explaining 6G systems. Across these layers, we argue that explainability should be built in as a design-time requirement, enabling wireless infrastructures that are not only high-performing but also auditable, accountable, and resilient.
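As a minimal sketch of the diminishing-returns idea behind marginal transparency (the notation $T(d)$ is introduced here for illustration only and is not taken from the survey itself): let $T(d)$ denote the transparency attained once model internals have been disclosed to depth $d$. Marginal transparency is then the gain from one further level of disclosure,

\[
  \Delta T(d) = T(d+1) - T(d), \qquad \Delta T(d+1) \le \Delta T(d) \quad \text{for all } d \ge 0,
\]

so that successive disclosures yield progressively smaller transparency gains; marginal interpretability can be read analogously over increasing levels of explanation detail.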