Submitted: 20 July 2025
Posted: 22 July 2025
Abstract
Keywords:
Introduction
Methodology
| Token | iFlw | hPhy | cMth | α(t) |
| --- | --- | --- | --- | --- |
| falling pressure | 0.30 | 0.01 | 0.01 | 0.56 |
| high humidity | 0.20 | 0.01 | 0.01 | 0.59 |
| high temperature | 0.25 | 0.01 | 0.01 | 0.64 |
| strong winds | 0.15 | 0.01 | 0.01 | 0.66 |
| low solar radiation | 0.10 | 0.01 | 0.01 | 0.68 |
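The extracted text does not include the formal update rule for α(t), so the Python sketch below is only a minimal illustration of the data structure implied by the table: each token carries epistemic field scores (iFlw, hPhy, cMth) and a running interpretability score α(t) that rises monotonically as tokens accumulate. The `Token` class, the `update_alpha` rule, the `gain` parameter, and the initial value of 0.50 are assumptions introduced here for illustration, not definitions from the paper.

```python
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    i_flw: float  # iFlw epistemic field score
    h_phy: float  # hPhy epistemic field score
    c_mth: float  # cMth epistemic field score

# Token stream and field scores copied from the table above.
TOKENS = [
    Token("falling pressure",    0.30, 0.01, 0.01),
    Token("high humidity",       0.20, 0.01, 0.01),
    Token("high temperature",    0.25, 0.01, 0.01),
    Token("strong winds",        0.15, 0.01, 0.01),
    Token("low solar radiation", 0.10, 0.01, 0.01),
]

def update_alpha(alpha: float, tok: Token, gain: float = 0.35) -> float:
    """Assumed placeholder update, NOT the paper's rule: each token's
    aggregate field weight nudges alpha toward 1, keeping it in [0, 1]."""
    weight = tok.i_flw + tok.h_phy + tok.c_mth
    return min(1.0, alpha + gain * weight * (1.0 - alpha))

alpha = 0.50  # assumed initial clarity before any token is emitted
for tok in TOKENS:
    alpha = update_alpha(alpha, tok)
    print(f"{tok.text:<20} alpha(t) = {alpha:.2f}")
```

With these assumed parameters the printed trajectory tracks the table's α(t) column to within about 0.01 per step, which is enough to convey the intended behavior: informative tokens move α(t) toward 1 while it stays bounded in [0, 1].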
Results
| Criterion | Talking to Blackbox | SHAP [5] | LIME [4] | Counterfactuals [14] |
| --- | --- | --- | --- | --- |
| Explanation Type | Dynamic, narrative with token evolution (α(t)) | Static, feature contribution values | Static, local surrogate model | Hypothetical “what-if” scenarios |
| Interpretability Metric | α(t) tracks progressive clarity (0–1) | SHAP value magnitudes | Feature weights in local approximation | Binary plausibility of alternative outcomes |
| Temporal Aspect | Iterative and evolving explanations | No temporal tracking | Single-step explanation | Single counterfactual per query |
| Narrative Structure | Tokens form coherent storylines | None (list of feature values) | Limited textual descriptions | Scenario-based but not narrative |
| Epistemic Fields | Uses iFlw, hPhy, cMth, wThd to assess explanation | Not applicable | Not applicable | Not applicable |
| Model Agnostic? | Yes, adaptable to CNNs, LLMs, GANs, RL | Yes | Yes | Yes |
| Human-Centric View | Emphasizes storytelling and interpretability | Quantitative focus | Quantitative focus | Hypothetical examples |
| Strengths | Dynamic, intuitive, narrative-oriented | Strong theoretical foundation | Simple and easy to implement | Provides actionable “what-if” insights |
| Limitations | Tokenization heuristics, no standard α(t) benchmarks | Static, lacks narrative context | Instability with different seeds | No dynamic interpretability metric |
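To make the “Temporal Aspect” and “Narrative Structure” rows concrete, here is a minimal sketch contrasting the static, single-shot output of a SHAP- or LIME-style attribution with the framework's evolving narrative view. The `narrate` function and the `(token, alpha)` stream format are hypothetical interfaces assumed for illustration; the α(t) values are copied from the Methodology table.

```python
# Static attribution view (SHAP/LIME-style): a single list of scores,
# produced once, with no notion of how understanding develops over time.
static_attributions = {
    "falling pressure": 0.30,
    "high humidity": 0.20,
    "high temperature": 0.25,
    "strong winds": 0.15,
    "low solar radiation": 0.10,
}

def narrate(stream: list[tuple[str, float]]) -> str:
    """Dynamic narrative view (assumed interface): tokens are emitted
    in order and alpha(t) is reported at each step, so the explanation
    reads as a storyline that grows progressively clearer."""
    steps = [
        f"t={t}: '{token}' (alpha={alpha:.2f})"
        for t, (token, alpha) in enumerate(stream, start=1)
    ]
    return " -> ".join(steps)

stream = [
    ("falling pressure", 0.56),
    ("high humidity", 0.59),
    ("high temperature", 0.64),
    ("strong winds", 0.66),
    ("low solar radiation", 0.68),
]
print(narrate(stream))
```

The contrast mirrors the table: the dictionary is a complete but frozen ranking, while the narrated stream exposes token order and accumulating clarity, which is exactly the property the α(t) metric is meant to capture.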
Discussion
Future Research Implications
Limitations
Future Work
Conclusion
License and Ethical Disclosures
Use of AI and Large Language Models
Ethics Statement
Author Contributions
Data Availability Statement
Conflicts of Interest
References
1. R. Figurelli, "What if P + NP = 1? A Multilayer Co-Evolutionary Hypothesis for the P vs NP Millennium Problem," Preprints.org, 2025.
2. R. Figurelli, "Heuristic Layering: Structuring AI Systems Beyond End-to-End Models," Preprints.org, 2025.
3. R. Figurelli, "The Heuristic Convergence Theorem: When Partial Perspectives Assemble the Invisible Whole," Preprints.org, 2025.
4. M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why Should I Trust You?': Explaining the Predictions of Any Classifier," in Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD), 2016, pp. 1135–1144.
5. S. M. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," in Proc. 31st Conf. on Neural Information Processing Systems (NeurIPS), 2017. [Online]. Available: https://arxiv.org/abs/1705.07874
6. C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd ed., Leanpub, 2022. [Online]. Available: https://christophm.github.io/interpretable-ml-book/
7. F. Doshi-Velez and B. Kim, "Towards A Rigorous Science of Interpretable Machine Learning," arXiv preprint arXiv:1702.08608, 2017. [Online]. Available: https://arxiv.org/abs/1702.08608
8. D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, and K.-R. Müller, "How to Explain Individual Classification Decisions," Journal of Machine Learning Research, vol. 11, pp. 1803–1831, 2010.
9. W. Samek, T. Wiegand, and K.-R. Müller, "Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models," IT Professional, vol. 21, pp. 31–41, 2019.
10. A. Adadi and M. Berrada, "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)," IEEE Access, vol. 6, pp. 52138–52160, 2018.
11. Z. C. Lipton, "The Mythos of Model Interpretability," Queue, vol. 16, pp. 31–57, 2018.
12. L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, "Explaining Explanations: An Overview of Interpretability of Machine Learning," in Proc. IEEE 5th Int. Conf. on Data Science and Advanced Analytics (DSAA), 2018, pp. 80–89.
13. J. Pearl, Causality: Models, Reasoning and Inference, 2nd ed., Cambridge University Press, 2009.
14. S. Wachter, B. Mittelstadt, and C. Russell, "Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR," Harvard Journal of Law & Technology, vol. 31, pp. 841–887, 2018.
15. L. Floridi and J. Cowls, "A Unified Framework of Five Principles for AI in Society," Harvard Data Science Review, 2019.
16. D. Gunning, "Explainable Artificial Intelligence (XAI)," Defense Advanced Research Projects Agency (DARPA), 2017. [Online]. Available: https://www.darpa.mil/
17. G. Montavon, W. Samek, and K.-R. Müller, "Methods for Interpreting and Understanding Deep Neural Networks," Digital Signal Processing, vol. 73, pp. 1–15, 2018.
18. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).