Submitted: 24 September 2025
Posted: 25 September 2025
Abstract
Keywords:
1. Introduction
2. Materials and Methods
3. Results
3.1. The Imperative of Transparency in AI for Education
3.2. Addressing the "Black Box" Problem
- Bias and Discrimination: AI algorithms are trained on data, and if that data reflects existing biases, the algorithm may perpetuate or even amplify those biases. For example, an AI system used to predict student success might unfairly disadvantage students from underrepresented groups if the training data overemphasizes certain demographic factors (a minimal check for this kind of disparity is sketched after this list).
- Lack of Accountability: When an AI system makes a decision that affects a student's educational trajectory, it is crucial to understand the reasoning behind that decision. If the system is a black box, it becomes difficult to hold anyone accountable for errors or unfair outcomes.
- Erosion of Trust: Students, parents, and educators need to trust that AI systems are being used fairly and ethically. Opacity erodes trust and can lead to resistance to the adoption of AI in education.
- Reduced Student Agency: When students are evaluated or guided by unclear systems, they may feel powerless to understand or influence their own learning outcomes. This undermines student autonomy and engagement in the learning process.
- Misguided Interventions: Educators and administrators may rely on AI-generated insights to make instructional or disciplinary decisions. If these insights are based on flawed logic or inaccessible reasoning, interventions may be misaligned with students’ actual needs, potentially causing more harm than good.
- Over-Reliance on Automated Systems: As AI tools become more integrated into classrooms, there is a risk that educators may over-rely on them, deferring critical judgment to algorithms. This can lead to reduced professional autonomy for teachers and may diminish the human element essential to effective teaching and mentoring.
- Privacy and Data Security Concerns: AI systems often require large amounts of student data to function effectively. If the inner workings of the system are obscure, it becomes harder to ensure that sensitive student information is being handled securely and ethically.
- Difficulty in Challenging Outcomes: If a student receives an automated grade or recommendation, the lack of explainability makes it difficult to appeal or contest the decision. This lack of recourse can reinforce power imbalances and reduce fairness in educational settings.
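To make the bias risk above concrete, the sketch below trains a toy student-success classifier on hypothetical records and compares how often each demographic group is flagged as likely to succeed. The data, column names, and the simple selection-rate comparison are illustrative assumptions only; a real audit would use the institution's own data and a fuller set of fairness metrics.

```python
# Minimal sketch: checking a hypothetical student-success classifier for
# disparate selection rates across demographic groups. All data, column
# names, and thresholds here are illustrative, not from a real deployment.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: grades, attendance, a demographic group
# label, and whether the student was previously judged "successful".
data = pd.DataFrame({
    "gpa":        [3.6, 2.1, 3.2, 2.8, 3.9, 2.4, 3.1, 2.0],
    "attendance": [0.95, 0.70, 0.88, 0.80, 0.97, 0.75, 0.85, 0.65],
    "group":      ["A", "B", "A", "B", "A", "B", "A", "B"],
    "success":    [1, 0, 1, 1, 1, 0, 1, 0],
})

features = data[["gpa", "attendance"]]
model = LogisticRegression().fit(features, data["success"])

# Predicted "likely to succeed" flags for the same students.
data["flagged"] = model.predict(features)

# Demographic parity check: compare selection rates per group. A large gap
# suggests the model (or its training data) may disadvantage one group.
rates = data.groupby("group")["flagged"].mean()
print(rates)
print("selection-rate gap:", abs(rates["A"] - rates["B"]))
```

Even a minimal check of this kind surfaces the sort of disparity that remains invisible when the system is treated as a black box.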
3.2.1. From Black Box to Glass Door
3.3. Ethical AI and the Role of Explainability
3.4. The Case for Transparency and Interpretability
3.5. Current Developments in Educational AI
- Intelligent Tutoring Systems (ITS)
- Technology-Enhanced Learning (TEL)
- Predictive Analytics
- Natural Language Processing (NLP) and Chatbots
- Immersive and Metaverse-Based Systems
3.6. Ethical and Explainability Challenges
- Algorithmic Bias
- Privacy and Surveillance
- Opaque Assessment Systems
- Over-Reliance on Automation
- Cultural and Linguistic Challenges
3.7. Lessons from Case Studies
- Transparent ITS: The iRead Project
- Explainable Metaverse Learning
- Predictive Analytics Failures
- Human Oversight in Grading
3.8. Building a Framework for Transparent Educational AI
- Ethical design principles: fairness, accountability, privacy, and inclusivity.
- Built-in explainability: XAI integrated into the system, not added as an afterthought.
- Stakeholder-centered explanations: clear communication tailored to students, teachers, and policymakers.
- Auditable models: systems designed with documentation and continuous monitoring to detect bias.
- Model Context Protocol (MCP): in building transparent AI systems for education, we advocate a governance-first approach that embeds explainability, documentation, and stakeholder alignment into the lifecycle of AI tools. The MCP complements technical approaches such as Human-in-the-Loop (HITL) and ensures that AI decisions are not only explainable but also contextually appropriate and auditable.
- Human-in-the-Loop (HITL): embedding educators and administrators into AI workflows ensures that outputs are reviewed, contextualized, and, if necessary, overridden, preserving accountability and pedagogical judgment (a minimal workflow sketch follows this list).
- Regulation and policy alignment: compliance with laws such as GDPR and education-specific AI standards.
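As a rough illustration of the HITL element above, the sketch below shows one way a review checkpoint and audit trail might be wired together: the model's recommendation and rationale are recorded, an educator accepts or overrides it, and every decision is logged for later auditing. The dataclasses, field names, and review flow are assumptions for illustration, not a prescribed implementation of MCP or of any particular platform.

```python
# Minimal sketch of a human-in-the-loop (HITL) checkpoint with an audit trail.
# The dataclasses, field names, and review flow are illustrative assumptions,
# not a prescribed implementation of any particular system.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    student_id: str
    suggestion: str            # e.g. "offer tutoring referral"
    rationale: str             # model-generated explanation shown to the reviewer
    model_version: str

@dataclass
class ReviewRecord:
    recommendation: Recommendation
    reviewer: str
    decision: str              # "accepted" or "overridden"
    final_action: str
    note: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[ReviewRecord] = []

def review(rec: Recommendation, reviewer: str,
           override: Optional[str] = None, note: str = "") -> ReviewRecord:
    """An educator reviews the AI output; nothing reaches the student unreviewed."""
    if override is None:
        record = ReviewRecord(rec, reviewer, "accepted", rec.suggestion, note)
    else:
        record = ReviewRecord(rec, reviewer, "overridden", override, note)
    audit_log.append(record)   # every decision is documented for later audits
    return record

# Example: an advisor overrides a flag after considering context the model lacks.
rec = Recommendation("s-042", "flag as at-risk", "low attendance in weeks 3-5", "v1.2")
review(rec, reviewer="advisor_jones",
       override="schedule a check-in meeting",
       note="absences were due to a documented family emergency")
```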
3.9. Success Stories
- MCP in literacy learning: The iRead project implemented a Model Context Protocol by using an explicit, auditable model of literacy development. This allowed teachers to understand and review system recommendations, building trust and improving student engagement (Karpouzis, 2023).
- HITL in predictive analytics: Community college pilots in the United States used machine learning to flag at-risk students, but advisors reviewed the recommendations before interventions. This preserved fairness and prevented students from being unfairly labeled (Holstein et al., 2020).
- HITL in grading: Universities experimenting with AI essay scoring required teachers to validate final grades. This balanced efficiency with professional oversight, reinforcing accountability (Ng et al., 2023).
- Transparent metaverse systems: Learner-centric platforms that provided interpretable explanations improved both accuracy and student confidence, showing the value of embedding XAI into cutting-edge learning environments (Labadze et al., 2023).
3.10. Future Directions
- Development of standardized frameworks for XAI in education, reducing definitional confusion (Altukhi & Pradhan, 2025).
- Expansion of cross-cultural studies to ensure explanations are meaningful across diverse linguistic and cultural contexts.
- Implementation of policy frameworks that require explainability in the procurement and deployment of educational AI tools.
- Integration of student voices in designing explanations, ensuring that systems foster learning and agency rather than passive acceptance.
- Deployment of scalable solutions for low-resource environments so that transparency is not limited to well-funded systems.
3.11. The Role of Explainable AI (XAI)
Explainable AI (XAI) aims to address the "black box" problem by making AI systems more transparent and interpretable. XAI techniques can provide insights into how an AI model arrives at a particular decision, allowing users to understand the factors that influenced the outcome. In education, this offers several benefits:
- Increased Understanding: XAI methods can help educators and students understand the strengths and limitations of AI systems. This understanding can inform how these systems are used and how their outputs are interpreted.
- Improved Decision-Making: By providing explanations for AI-driven recommendations, XAI can empower educators to make more informed decisions about student learning.
- Enhanced Trust and Acceptance: Transparency fosters trust. When users understand how an AI system works, they are more likely to accept its recommendations and use it effectively.
3.12. Recent Developments in XAI for Education
- Rule-Based Systems: These systems use explicit rules to make decisions, making their reasoning easy to follow.
- Decision Trees: Decision trees provide a visual representation of the decision-making process, making it easier to understand the factors that led to a particular outcome.
- Feature Importance Analysis: This technique identifies the features (e.g., student demographics, grades, attendance) that have the greatest impact on the AI model's predictions.
- Counterfactual Explanations: These explanations describe how a student's characteristics would need to change in order to achieve a different outcome (a minimal sketch illustrating several of these techniques follows this list).
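The short sketch below illustrates three of these techniques on hypothetical student data: a shallow decision tree whose rules can be printed and read directly, the tree's feature importances, and a brute-force counterfactual probe. Feature names, values, and thresholds are illustrative assumptions rather than recommendations.

```python
# Minimal sketch of three interpretability techniques on hypothetical student
# data: readable decision-tree rules, feature importances, and a simple
# counterfactual probe. Feature names and values are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["gpa", "attendance_rate", "assignments_submitted"]
X = np.array([
    [3.6, 0.95, 10], [2.1, 0.70, 5], [3.2, 0.88, 9], [2.8, 0.80, 7],
    [3.9, 0.97, 10], [2.4, 0.75, 6], [3.1, 0.85, 8], [2.0, 0.65, 4],
])
y = np.array([1, 0, 1, 1, 1, 0, 1, 0])   # 1 = predicted to pass the course

# A shallow tree keeps the decision logic small enough to read directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Rule-based view: the decision path printed as nested if/else statements.
print(export_text(tree, feature_names=feature_names))

# Feature importance: which inputs drive the model's predictions overall.
for name, importance in zip(feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Counterfactual probe: the smallest single-feature change (over a coarse grid)
# that flips this student's prediction, checked one feature at a time.
student = np.array([2.4, 0.75, 6.0])
grids = {0: np.arange(2.4, 4.01, 0.1),      # gpa
         1: np.arange(0.75, 1.01, 0.05),    # attendance_rate
         2: np.arange(6.0, 11.0)}           # assignments_submitted
for idx, grid in grids.items():
    for value in grid:
        candidate = student.copy()
        candidate[idx] = value
        if tree.predict(candidate.reshape(1, -1))[0] == 1:
            print(f"prediction flips if {feature_names[idx]} reaches {value:.2f}")
            break
```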
3.13. Practical Strategies for Embedding Openness into AI Systems
- Prioritize Transparency in Design: When developing or selecting AI tools for education, prioritize systems that offer transparency and interpretability, ensuring that both educators and students can understand how decisions are made. It is also important to focus on systems that are ethically sound, prioritizing fairness, inclusivity, and privacy. By doing so, we can create tools that not only support learning but also foster trust and accountability in AI-driven environments.
- Use Explainable AI Techniques: Incorporate Explainable AI (XAI) techniques to provide insight into how decisions are made, so that recommendations, grades, and risk flags can be inspected and questioned, while ensuring that explanations remain adaptable to diverse learning styles.
- Provide Clear Explanations: Present explanations in a clear and accessible manner, tailored to the needs of different stakeholders (e.g., students, educators, parents). For students, explanations should be simple and intuitive, possibly incorporating visual aids, examples, or analogies that make complex concepts easier to grasp. Educators may need more detailed, data-driven insights to help them understand how AI is supporting student learning, while parents should receive concise summaries that highlight the impact of AI on their child’s progress, along with information on privacy and security.
- Ensure Data Quality and Fairness: Carefully curate and preprocess training data to minimize bias and ensure fairness. This involves selecting diverse, representative datasets that reflect the wide range of student experiences, backgrounds, and needs. It is essential to identify and address any inherent biases in the data, whether they stem from historical inequalities, demographic imbalances, or skewed sampling, before training the AI models.
- Establish Accountability Mechanisms: Define clear roles and responsibilities for overseeing the use of AI systems and addressing any issues that arise. This includes appointing specific individuals or teams responsible for monitoring AI performance, ensuring compliance with ethical standards, and swiftly addressing any technical or ethical challenges. These roles might involve data privacy officers, AI ethics committees, and educational technology managers who ensure the system operates fairly and securely. Establishing accountability also involves creating protocols for addressing concerns raised by students, educators, or parents, ensuring transparency in how issues are resolved. By setting clear lines of responsibility, organizations can ensure that AI tools are used effectively, ethically, and in alignment with educational goals.
- Promote Education and Training: Provide educators and students with training on how to use and interpret AI systems effectively. This comprehensive training must extend beyond basic technical operations to include critical AI literacy, enabling stakeholders to understand algorithmic decision-making processes, recognize potential biases, and evaluate the reliability of AI-generated outputs. Training programs should be tiered and age-appropriate, covering fundamental AI concepts, ethical implications, data privacy principles, and practical skills for leveraging AI tools as collaborative partners rather than passive consumers.
- Regularly Audit and Evaluate: Conduct regular audits of AI systems to assess their performance, identify potential biases, and ensure that they are being used ethically. These audits should be comprehensive, evaluating not just the accuracy and efficiency of the AI’s outputs, but also how its decisions affect diverse student groups. This includes checking for unintended bias or discriminatory patterns that could negatively impact certain demographics. Regular evaluations should also assess the system’s adherence to ethical guidelines, such as data privacy, transparency, and fairness (a minimal subgroup-audit sketch follows this list).
- Solicit Feedback: Actively solicit feedback from students, educators, and parents on their experiences with AI systems. This feedback should be gathered through surveys, focus groups, or direct interactions, ensuring that all stakeholders have a voice in how the AI tools are functioning. For students, feedback might focus on how intuitive and helpful the system is for their learning, while educators may provide insights into how well AI supports their teaching strategies. Parents can share their perspectives on the system's impact on their child's learning experience and overall satisfaction.
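As a starting point for the auditing strategy above, the sketch below computes per-group accuracy, false-positive rate, and selection rate from a hypothetical log of system decisions and human-verified outcomes, then applies a simple screening rule. The data frame, column names, and the 80% rule-of-thumb threshold are assumptions for illustration; real audits should use metrics agreed with the institution's data privacy officers and AI ethics committee.

```python
# Minimal sketch of a recurring subgroup audit for an AI grading or
# early-warning tool: compare error rates across student groups on logged,
# human-verified outcomes. The data, column names, and the 80% rule-of-thumb
# screening threshold are illustrative assumptions, not fixed requirements.
import pandas as pd

# Hypothetical audit log: the system's decision, the verified outcome, and a
# demographic or programme grouping relevant to the fairness review.
log = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1, 0, 1, 1, 0, 0, 1, 0],   # 1 = flagged / high grade
    "actual":    [1, 0, 1, 0, 0, 1, 1, 0],
})

def audit(frame: pd.DataFrame) -> pd.Series:
    accuracy = (frame["predicted"] == frame["actual"]).mean()
    negatives = frame[frame["actual"] == 0]
    fpr = (negatives["predicted"] == 1).mean() if len(negatives) else float("nan")
    return pd.Series({"accuracy": accuracy,
                      "false_positive_rate": fpr,
                      "selection_rate": frame["predicted"].mean()})

report = log.groupby("group")[["predicted", "actual"]].apply(audit)
print(report)

# One simple screening rule: escalate to human review if any group's selection
# rate falls below 80% of the highest group's rate.
rates = report["selection_rate"]
if rates.min() < 0.8 * rates.max():
    print("Selection-rate disparity exceeds the 80% screening threshold; review needed.")
```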
4. Discussion
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| AI | Artificial intelligence |
| XAI | Explainable artificial intelligence |
| ITS | Intelligent tutoring systems |
| TEL | Technology-enhanced learning |
| NLP | Natural language processing |
| HITL | Human-in-the-loop |
| MCP | Model Context Protocol |
| GDPR | General Data Protection Regulation |
References
- Altukhi, Z.M.; Pradhan, S. Systematic literature review: Explainable AI definitions and challenges in education. arXiv 2025.
- Bhutoria, A. Personalized learning with artificial intelligence: A framework for the future. Comput. Educ. Artif. Intell. 2022, 3, 100068.
- Carbonell, J.R. AI in CAI: An artificial-intelligence approach to computer-assisted instruction. IEEE Trans. Man-Mach. Syst. 1970, 11(4), 190–202.
- Chaudhry, M.A.; Cukurova, M.; Luckin, R. A transparency index framework for AI in education. arXiv 2022.
- Garcia, T.; Pintrich, P.R. Student motivation and self-regulated learning: A critical review. Educ. Psychol. 2023, 58(2), 115–134.
- Holstein, K.; Wortman Vaughan, J.; Daumé III, H.; Dudík, M.; Wallach, H. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25 April–1 May 2020; pp. 1–13.
- Ifenthaler, D.; Yau, J.Y.-K.; Mah, D.-K. Utilising learning analytics for study success: Reflections on current empirical findings. Interact. Technol. Smart Educ. 2021, 18(3), 302–318.
- Karpouzis, K. Explainable AI for intelligent tutoring systems: The case of the iRead project. In Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications; Springer: Cham, Switzerland, 2023; pp. 45–62.
- Khosravi, H.; Kitto, K.; Siemens, G. Explainable artificial intelligence in education. Comput. Educ. Artif. Intell. 2022, 3, 100076.
- Kirkwood, A.; Price, L. Technology-enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? Learn. Media Technol. 2013, 39(1), 6–36.
- Labadze, G.; Li, L.; Huang, T. Learner-centric explainable educational metaverse for cyber–physical systems engineering. Electronics 2023, 13(17), 3359.
- Ng, B.W.-H.; Ong, C.H.; Wang, L. AI in assessment: Opportunities, risks, and the need for explainability. Br. J. Educ. Technol. 2023, 54(5), 1280–1296.
- Pedro, F.; Subosa, M.; Rivas, A.; Valverde, P. Artificial intelligence in education: Challenges and opportunities for sustainable development. UNESCO Working Papers on Education Policy 2019, 22.
- Sterling, S.; Orr, D. Sustainable Education: Re-visioning Learning and Change; Green Books: Devon, UK, 2001.
- UNESCO. Reimagining Our Futures Together: A New Social Contract for Education; UNESCO Publishing: Paris, France, 2021.
- VanLehn, K. The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educ. Psychol. 2011, 46(4), 197–221.
- Wang, Y.-Y.; Lai, A.-F.; Shen, Y.-R.; Chu, Y.-H. Modeling and verification of an intelligent tutoring system based on Petri net theory. Math. Biosci. Eng. 2019, 16(5), 4947–4975.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).