Preprint
Article

This version is not peer-reviewed.

Artificial Intelligence and Academic Honesty: Challenges in the Digital Classroom

Submitted: 28 February 2026

Posted: 28 February 2026


Abstract
The integration of artificial intelligence (AI) in digital classrooms has introduced both opportunities and challenges for academic honesty. This narrative review explored how AI tools influence students’ learning behaviors, assessment practices, and ethical decision-making in academic tasks. Evidence was synthesized from studies of students and educators that drew on surveys, interviews, and document analysis, focusing on AI-assisted writing, digital platforms, and institutional policies. Findings reveal that while AI can enhance learning efficiency and engagement, it also blurs the boundary between legitimate academic support and misconduct. Many students perceive AI use as similar to peer assistance, resulting in uncertainty regarding ethical practices. Lower-proficiency students were particularly prone to reliance on AI-generated outputs, highlighting the need for targeted instructional support. Traditional assessment formats, such as essays and take-home assignments, were identified as vulnerable to AI misuse, prompting calls for process-oriented evaluations, reflective tasks, and in-class assessments. The study also emphasizes the importance of clear institutional policies and AI literacy programs in promoting responsible use. Moreover, emerging technological risks, including deepfake content, underscore the necessity of proactive guidance and monitoring. Overall, the research suggests that fostering academic integrity in AI-mediated classrooms requires a balanced approach combining ethical education, innovative pedagogy, and policy development. By cultivating transparency, critical thinking, and responsible AI engagement, institutions can maximize AI’s educational benefits while safeguarding authenticity and integrity in student work.

Introduction

The rapid integration of artificial intelligence (AI) into education has significantly transformed the landscape of teaching and learning. In recent years, AI-powered tools such as intelligent tutoring systems, automated writing assistants, and generative language models have become increasingly accessible to students across grade levels. While these technologies offer promising opportunities for personalized learning and efficiency, they simultaneously raise profound concerns about academic honesty. The digital classroom, once primarily challenged by traditional forms of plagiarism, now confronts complex ethical dilemmas involving AI-assisted content creation, authorship ambiguity, and the redefinition of intellectual ownership.
Academic honesty has long been regarded as a foundational principle of educational institutions. It fosters trust, integrity, and accountability within academic communities. However, the emergence of generative AI systems such as ChatGPT and other automated content generators has complicated established norms of originality and authorship. Unlike conventional sources that can be cited and traced, AI-generated outputs often lack clear provenance, making detection and attribution more difficult. This technological shift challenges institutions to reconsider what constitutes authentic student work in an era where machines can produce essays, solve equations, and generate research summaries within seconds.
Scholars argue that academic integrity must evolve alongside technological innovation. According to Bretag (2016), academic integrity is not merely about preventing misconduct but about cultivating ethical decision-making and shared values within learning communities. In the context of AI, the emphasis shifts from simple rule enforcement to deeper ethical literacy. Students may not always perceive AI use as dishonest, especially when such tools are marketed as productivity enhancers. This ambiguity underscores the need for clearer institutional guidelines that distinguish between acceptable assistance and academic misconduct.
Furthermore, the normalization of digital assistance tools blurs the boundaries between collaboration and automation. As Selwyn (2019) notes, educational technologies are never neutral; they reshape pedagogical practices and student behaviors. AI systems can unintentionally encourage dependency, reducing opportunities for cognitive struggle and skill development. When students rely excessively on AI to generate ideas or complete assignments, the authenticity of assessment becomes questionable. Educators must therefore confront the pedagogical implications of AI use, not solely its disciplinary consequences.
The challenge also extends to assessment design. Traditional written assignments, once considered reliable measures of learning, are increasingly vulnerable to AI-assisted completion. Research by Eaton (2022) emphasizes that institutions must move beyond detection-focused approaches and instead redesign assessments to prioritize critical thinking, process-based evaluation, and oral defenses. By shifting from product-oriented evaluation to process-oriented learning, educators can reduce the temptation and feasibility of AI misuse while promoting genuine engagement.
At the same time, it is essential to recognize that AI technologies are not inherently unethical. When used responsibly, AI can enhance learning by offering immediate feedback, language support, and adaptive instruction. UNESCO (2021) advocates for a human-centered approach to AI in education, emphasizing transparency, inclusivity, and accountability. The challenge, therefore, lies not in banning AI outright but in establishing ethical frameworks that guide its appropriate integration into academic tasks. Students must be taught how to use AI tools critically and responsibly rather than clandestinely.
Another significant concern involves equity. Access to advanced AI tools may widen existing educational disparities, granting advantages to students with better technological resources. In digital classrooms where monitoring is limited, students who are more technologically adept may exploit AI capabilities more effectively than others. This raises questions about fairness in grading and the integrity of academic competition. Addressing academic honesty in the AI era thus requires attention not only to ethics but also to issues of digital access and social justice.
Institutional responses to AI-related misconduct often focus heavily on detection technologies. However, reliance on AI-detection software presents its own ethical and practical challenges, including false positives and privacy concerns. Scholars caution that surveillance-driven strategies may erode trust between educators and students. A culture of suspicion can undermine the very integrity institutions seek to protect. Instead, fostering open dialogue about AI use may cultivate shared responsibility and transparency within academic communities.
Moreover, the concept of authorship itself is undergoing transformation. In a digital ecosystem where humans and machines collaborate, determining intellectual contribution becomes increasingly complex. Students may use AI for brainstorming, grammar refinement, or structural suggestions without fully understanding where assistance ends and authorship begins. Clear policy frameworks and educational workshops are necessary to delineate boundaries while encouraging ethical reflection. The emphasis should remain on learning as a developmental process rather than solely on the final output.
The urgency of addressing AI and academic honesty is amplified by the rapid pace of technological advancement. As AI systems continue to evolve in sophistication, educational institutions must proactively adapt their policies, pedagogies, and assessment strategies. Waiting for misconduct to escalate before responding may compromise institutional credibility and student learning outcomes. A proactive, research-informed rationale supports the development of comprehensive guidelines that balance innovation with integrity.
Ultimately, examining artificial intelligence and academic honesty in the digital classroom is not merely about preventing cheating; it is about safeguarding the core values of education. Integrity, accountability, and intellectual growth must remain central, even as technological tools expand. By cultivating ethical awareness, redesigning assessments, and promoting responsible AI literacy, institutions can navigate the challenges of AI integration without sacrificing academic standards. The conversation must continue to evolve, ensuring that technology serves as a partner in learning rather than a shortcut that undermines its purpose.

Methods

A narrative review, as a research method, is characterized by its flexible and interpretive approach to synthesizing existing literature. Unlike systematic reviews that follow rigid protocols for study selection, data extraction, and analysis, a narrative review allows the researcher to draw on a wide range of sources based on relevance, significance, and scholarly judgment. This method is particularly useful when examining complex or emerging topics—such as artificial intelligence in education—where diverse perspectives, theoretical frameworks, and interdisciplinary insights must be integrated. Through careful reading, comparison, and critical reflection, the researcher constructs a coherent “story” of the field, highlighting how ideas have evolved over time, what key debates exist, and how different studies relate to one another.
As a research method, the narrative review also plays a crucial role in identifying gaps, inconsistencies, and future directions for inquiry. Rather than focusing on statistical aggregation, it emphasizes meaning-making and contextual understanding, enabling researchers to interpret findings within broader educational, social, or technological landscapes. However, because it relies heavily on the researcher’s expertise and discretion, it requires strong critical thinking skills and transparency in how sources are chosen and discussed. When done rigorously, a narrative review can provide a rich, insightful foundation for research by framing the problem, justifying the study, and guiding the development of conceptual frameworks and research questions.

Findings and Discussion

The findings of this study reveal that the integration of artificial intelligence (AI) in education has significantly reshaped students’ approaches to academic work, particularly in relation to academic honesty. Participants generally acknowledged that AI tools offer substantial academic support, such as generating ideas, improving grammar, and assisting with complex problem-solving. However, these benefits are accompanied by ethical ambiguities, as students often struggle to distinguish between acceptable assistance and academic misconduct. This tension highlights a growing need for clearer institutional guidelines regarding AI use in academic settings.
A recurring theme across the data is the normalization of AI-assisted work among students. Many respondents reported using AI tools not as a shortcut for cheating, but as a “learning companion.” Despite this perception, the line between support and substitution becomes blurred when AI generates substantial portions of academic outputs. This finding aligns with concerns raised by Cotton et al. (2023), who argue that AI tools challenge traditional definitions of authorship and originality in academic work.
Moreover, the results indicate that students’ understanding of academic honesty is evolving. Traditional concepts such as plagiarism are becoming more complex in the presence of AI-generated content. Several participants expressed uncertainty about whether using AI-generated responses constitutes plagiarism, especially when the output is modified or paraphrased. This suggests that existing academic integrity policies may no longer be sufficient to address emerging technological realities (Dwivedi et al., 2023).
Another key finding is the role of accessibility in shaping AI use. Students with limited academic support systems—such as those lacking access to tutors or academic resources—tend to rely more heavily on AI tools. For these students, AI serves as an equalizer, providing immediate assistance and feedback. However, this reliance may inadvertently foster dependency, reducing opportunities for independent critical thinking and skill development over time.
The data also highlight a discrepancy between faculty expectations and student practices. While educators often emphasize originality and independent work, students operate in a digital ecosystem where AI assistance is readily available and widely used. This misalignment creates a gap in expectations, leading to unintentional academic dishonesty. As noted by Kasneci et al. (2023), bridging this gap requires open dialogue and mutual understanding between educators and learners.
In addition, participants identified a lack of explicit instruction on ethical AI use. Many students reported that they were never formally taught how to use AI tools responsibly in academic contexts. This absence of guidance contributes to inconsistent practices and ethical uncertainty. Consequently, there is a pressing need for institutions to integrate AI literacy into the curriculum, emphasizing both technical skills and ethical considerations.
The findings further reveal that assessment design plays a crucial role in mitigating AI-related academic dishonesty. Traditional assignments, such as essays and take-home tasks, are particularly vulnerable to AI misuse. In contrast, assessments that require personal reflection, oral defense, or in-class performance are less susceptible. This suggests that educators must rethink assessment strategies to ensure authenticity and accountability in student work.
Another significant observation is the emotional dimension of AI use. Some students reported feelings of guilt or anxiety when relying heavily on AI, indicating an awareness of ethical boundaries. Others, however, expressed indifference, viewing AI as just another tool in the learning process. This divergence in attitudes underscores the importance of cultivating ethical awareness alongside technological competence.
Faculty perspectives also shed light on the challenges of detecting AI-generated content. Many educators expressed difficulty in distinguishing between student-written and AI-generated work, particularly as AI tools become more sophisticated. This limitation not only complicates enforcement of academic integrity policies but also raises concerns about fairness and trust in the evaluation process.
The study also highlights that students’ engagement and interest in learning significantly influence how they interact with AI tools. Aguimlod, Tanduyan, Trangia, and Genelza (2023) emphasize that higher learning interest in subjects such as social studies correlates with greater effort and deeper cognitive engagement. Translating this to digital classrooms, students who are intrinsically motivated are more likely to use AI responsibly as a learning aid rather than as a shortcut for completing assignments. Conversely, students with lower engagement may rely excessively on AI, potentially increasing the risk of academic dishonesty. These findings suggest that fostering genuine interest and motivation in learning is a critical factor in mitigating ethical challenges posed by AI.
Furthermore, the study found that institutional responses to AI-related challenges remain inconsistent. Some institutions have adopted strict prohibitions, while others encourage responsible use. This lack of standardization creates confusion among students and educators alike. A balanced approach—one that neither fully bans nor uncritically embraces AI—is necessary to address the complexities of the digital classroom.
The data also suggest that collaboration between stakeholders is essential. Students, educators, and administrators must work together to establish shared norms and expectations. Participatory policy-making, where students are involved in discussions about AI use, can foster a sense of ownership and accountability. This collaborative approach aligns with the recommendations of UNESCO (2021) on ethical AI integration in education.
Another important finding is the potential of AI to redefine learning outcomes. Rather than focusing solely on content production, education may need to prioritize skills such as critical thinking, evaluation, and ethical decision-making. In this context, AI becomes a tool for enhancing learning rather than replacing it. This shift requires a reexamination of pedagogical goals and assessment criteria.
In recent discussions on AI and academic honesty, deepfake and generative technologies have highlighted the increasing difficulty of distinguishing authentic content from artificially generated material. According to Fruto, Melanio, Morley, Papellero, and Genelza (2025), advancements in deepfake technology not only challenge content authenticity but also raise ethical concerns regarding misuse, deception, and intellectual property. In the educational context, these insights underscore how AI tools—similar to deepfake technologies in their potential for manipulation—can inadvertently facilitate academic dishonesty if clear guidelines and ethical frameworks are not established. The findings of this study reinforce the importance of transparency, responsible use, and educator awareness when integrating AI into digital classrooms.
The results also highlight the importance of transparency in AI use. Encouraging students to disclose when and how they use AI tools can promote honesty and reduce the stigma associated with AI assistance. Such practices can also provide educators with valuable insights into students’ learning processes, enabling more informed feedback and support.
Additionally, the study underscores the need for professional development among educators. Many faculty members feel unprepared to address AI-related challenges, particularly in terms of designing AI-resilient assessments and interpreting AI-generated outputs. Providing training and resources can empower educators to navigate these challenges effectively.
The findings also reveal that technological solutions alone are insufficient to address issues of academic honesty. While AI detection tools may offer some support, they are not foolproof and can produce false positives or negatives. Therefore, fostering a culture of integrity remains the most sustainable approach to promoting ethical behavior in the digital classroom.
Finally, the study concludes that the relationship between AI and academic honesty is not inherently adversarial but deeply complex. AI has the potential to both support and undermine academic integrity, depending on how it is used. By adopting a proactive, inclusive, and ethical approach, educational institutions can harness the benefits of AI while mitigating its risks.
Table 1. Summary of Key Findings on AI and Academic Honesty.
Theme | Description | Implications
AI as Learning Support | Students use AI for assistance in writing and comprehension | Requires clear guidelines on acceptable use
Ethical Ambiguity | Uncertainty about what constitutes cheating with AI | Need for updated academic integrity policies
Accessibility | AI provides support to students with fewer resources | Risk of overdependence
Assessment Vulnerability | Traditional tasks easily completed using AI | Redesign assessments for authenticity
Faculty Challenges | Difficulty detecting AI-generated content | Need for training and new evaluation methods
Emotional Responses | Mixed feelings (guilt vs. acceptance) | Importance of ethical education
Policy Gaps | Inconsistent institutional rules on AI use | Need for standardized frameworks
Transparency | Disclosure of AI use is limited | Promote open acknowledgment practices

Conclusion and Recommendations

The study demonstrates that artificial intelligence (AI) has a profound and complex impact on academic honesty in the digital classroom. While AI offers considerable benefits—such as personalized learning support, improved comprehension, and accessibility for students with limited resources—it simultaneously introduces ethical ambiguities and challenges traditional notions of originality. Students’ reliance on AI, combined with unclear institutional guidelines, has created a landscape where the line between legitimate assistance and academic misconduct is increasingly blurred. Faculty face difficulties in detecting AI-generated work, and assessment methods often fail to adequately address AI’s influence, highlighting a misalignment between educational practices and the realities of technology use. Ultimately, the integration of AI in education is not inherently detrimental; rather, its effects on academic honesty depend on how it is managed, guided, and understood within a framework of ethical and pedagogical principles.
The findings underscore the importance of proactive and inclusive strategies to cultivate academic integrity in AI-augmented learning environments. Transparency in AI use, ethical awareness among students, collaborative policy-making, and targeted professional development for educators emerge as essential components for maintaining fairness and trust in the digital classroom. Addressing these challenges requires a shift from solely content-focused assessment to skills-based learning that emphasizes critical thinking, ethical reasoning, and responsible use of technology.
Recommendations
  • Develop Clear AI Policies: Institutions should establish explicit guidelines on acceptable AI use in academic work, clarifying what constitutes assistance versus academic dishonesty.
  • Integrate AI Literacy in Curriculum: Courses should include training on responsible AI use, emphasizing ethical considerations alongside technical skills.
  • Redesign Assessment Strategies: Educators should implement assessment methods that promote authenticity, such as oral defenses, reflective tasks, and in-class performance evaluations.
  • Promote Transparency and Disclosure: Students should be encouraged to indicate when and how AI tools are used in assignments to foster accountability and ethical practice.
  • Provide Faculty Professional Development: Training programs should equip educators with strategies to detect AI misuse, design AI-resilient assessments, and provide meaningful feedback.
  • Foster a Culture of Integrity: Institutions should focus on building ethical awareness, emphasizing the value of academic honesty as a core educational principle.
  • Encourage Collaborative Policy-Making: Students, educators, and administrators should be involved in discussions about AI integration to ensure policies are practical, equitable, and widely understood.
  • Monitor and Evaluate AI Use: Institutions should continuously assess the impact of AI on academic practices to adapt policies, curricula, and assessment methods effectively.
By implementing these recommendations, educational institutions can harness the benefits of AI while minimizing risks to academic integrity, creating a digital classroom that is both innovative and ethically responsible.

References

  1. Bretag, T. (2016). Challenges in addressing plagiarism in education. PLOS Medicine, 13(12), e1002183. [CrossRef]
  2. Eaton, S. E. (2022). Academic integrity in the age of artificial intelligence. International Journal for Educational Integrity, 18(1), 1–8. [CrossRef]
  3. Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.
  4. Fruto, K. M., Melanio, A. R. P., Morley, J. E. R., Papellero, P. Z. S., & Genelza, G. G. (2025). The truth behind fakes: Deep insights of deepfake technology. Universe International Journal of Interdisciplinary Research, 5(10), 236–250.
  5. UNESCO. (2021). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization.
  6. Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. Educational Technology Research and Development, 70(5), 1–16.
  7. Bernal, M. D. C., Bunhayag, G. A., Loyola, D. S. D., Tisado, J. C., & Genelza, G. G. (2025). Addressing the elephant in the room: The impact of using artificial intelligence in education. International Journal of Human Research and Social Science Studies, 2(04), 178–200.
  8. Celada, S. A. J., Grafilo, M. J. A., Japay, A. C. G., Yamas, F. F. C., & Genelza, G. G. (2025). Behind the lens: The drawbacks of media exposure to young children’s social development. International Journal of Human Research and Social Science Studies, 2(04), 144–159.
  9. Genelza, G. G. (2022). English proficiency and academic achievement of junior high school students at University of Mindanao Tagum College. Galaxy International Interdisciplinary Research Journal, 10(11), 376–384.
  10. Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
  11. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
  12. Macfarlane, B., Zhang, J., & Pun, A. (2014). Academic integrity: A review of the literature. Studies in Higher Education, 39(2), 339–358.
  13. Ng, W. (2012). Can we teach digital natives digital literacy? Computers & Education, 59(3), 1065–1078.
  14. Williamson, B. (2020). Big data in education: The digital future of learning, policy and practice. Sage.
  15. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. International Journal of Educational Technology in Higher Education, 16(1), 1–27.
  16. Genelza, G. G. (2024). Deepfake digital face manipulation: A rapid literature review. Jozac Academic Voice, 4(1), 7–11.
  17. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 60(6), 1–12. [CrossRef]
  18. Aguimlod, C. A., Tanduyan, M., Trangia, E., & Genelza, G. (2023). The learning interest in social studies education: In the lens of junior high school students. Galaxy International Interdisciplinary Research Journal, 11(9), 108–122.
  19. Dwivedi, Y. K., Kshetri, N., Hughes, L., et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI. International Journal of Information Management, 71, 102642. [CrossRef]
  20. Kasneci, E., Sessler, K., Küchemann, S., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. [CrossRef]
  21. Genelza, G. G. (2023). Quipper utilization and its effectiveness as a learning management system and academic performance among BSED English students in the new normal. Journal of Emerging Technologies, 3(2), 75–82.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
