Ethics in the time of Artificial Intelligence: Rethinking Integrity in the Classroom

Preprint article (not peer-reviewed). Submitted: 12 May 2025. Posted: 13 May 2025.
Abstract
The rapid integration of generative artificial intelligence (AI) tools such as ChatGPT into educational contexts has sparked a paradigmatic shift in how students learn, how educators assess, and how institutions define academic integrity. This paper explores the emerging ethical landscape surrounding AI-assisted student work through a qualitative, literature-based inquiry that synthesizes perspectives from educators, students, ethicists, and institutional policy frameworks. As AI tools increasingly mediate knowledge production, they blur the boundaries between assistance and authorship, raising urgent questions about originality, authenticity, and educational purpose. The study reveals a growing dissonance between technological innovation and traditional academic values: students often perceive AI as a tool for productivity and learning enhancement, while faculty express apprehension about compromised learning outcomes and diminished student accountability. Institutional policies, still largely reactive and fragmented, struggle to keep pace with these developments. Drawing on virtue ethics and principles of responsible innovation, the paper proposes an integrative ethical framework that prioritizes character formation, reflective judgment, and contextual discernment. Additionally, the study advocates for a reimagining of assessment design—moving away from rote demonstration toward deeper engagement, critical inquiry, and meta-cognitive reflection. Rather than merely policing misconduct, academic institutions must cultivate a culture of ethical AI use that equips learners with the moral imagination to navigate emerging technologies in both academic and professional spheres. In doing so, education can reclaim its role not just as a transmitter of knowledge, but as a steward of ethical agency in the digital age.

Introduction

The introduction of ChatGPT and similar generative AI tools has sparked what many educators describe as an overnight revolution in teaching, learning, and assessment paradigms across educational institutions worldwide. Almost immediately after its public release in November 2022, it became apparent that these technologies could produce sophisticated written content that, in many cases, would satisfy traditional educational assessment requirements (Evangelista, 2025). This technological leap has forced immediate reconsideration of long-established practices in academic integrity, particularly concerning student writing and assessment design.
The fundamental challenge presented by AI tools stems from their unprecedented capabilities. Unlike previous technological innovations in education, generative AI does not merely assist with specific tasks but can actually produce original content that mimics human writing across diverse formats, styles, and knowledge domains (Rane et al., 2024). This capability directly intersects with core academic values of authorship, originality, and the demonstration of individual learning—concepts that have traditionally defined academic integrity in educational settings.
Academic integrity has historically encompassed principles of honesty, trust, fairness, respect, and responsibility in learning environments (Maral, 2024). These values have been embedded in institutional policies, honor codes, and assessment practices designed to verify that student work represents their own knowledge, skills, and effort. However, AI tools challenge these frameworks by blurring the boundaries between human and machine-generated work, raising important questions about attribution, authorship, and the very nature of demonstrating learning.
The significance of this investigation extends beyond immediate concerns about detecting AI use or preventing cheating. As Balalle and Pannilage (2025) observe, the integration of AI into education represents a fundamental shift in how knowledge is produced, processed, and demonstrated. This has profound implications for the future of education, workplace preparation, and the development of ethical technological literacy. Addressing these challenges requires not merely technical solutions but thoughtful ethical frameworks that can guide institutional policies, pedagogical approaches, and student behaviors in this new landscape.
The conceptual background for understanding this issue draws from several intersecting domains. First, the literature on academic integrity provides an essential foundation for examining the ethical dimensions of student work (Maral, 2024). Second, emerging research on AI ethics offers important perspectives on responsible technology use, particularly concerning transparency, fairness, and human agency (Spector et al., 2022). Finally, philosophical approaches to ethics—especially virtue ethics with its focus on character development—provide valuable frameworks for conceptualizing how educational institutions might foster ethical AI use beyond mere rule compliance (Hagendorff, 2022).
This paper aims to investigate the ethical dimensions of AI-assisted student work by examining faculty and student perspectives, institutional policy responses, ethical frameworks for guiding AI use, and implications for assessment design. Through this analysis, the study seeks to contribute to emerging discussions about how educational institutions might ethically navigate the integration of AI technologies while maintaining academic integrity. The research is guided by the following questions: How are stakeholders responding to the integration of AI tools in academic work? What ethical frameworks might guide responsible AI use in educational settings? How might assessment practices be redesigned to maintain academic integrity in an AI-enhanced environment?

Methodology

This study employs a qualitative, literature-based analytical approach to examine the ethical dimensions of AI-assisted student work in academic settings. Given the rapidly evolving nature of AI technologies and their recent integration into educational contexts, this methodology allows for a comprehensive synthesis of emerging research, perspectives, and practices across multiple institutions and stakeholder groups.
The analytical framework combines elements of both content analysis and ethical inquiry. For content analysis, the paper systematically reviews published literature on AI in education with a particular focus on academic integrity, ethical frameworks, institutional policies, and assessment practices. This includes peer-reviewed journal articles, institutional policy documents, educational reports, and emerging best practices published between 2022 and 2025—a period marking the rapid adoption of generative AI tools like ChatGPT in educational settings.
For the ethical dimension of analysis, the study employs multiple ethical lenses to examine the tensions and dilemmas presented by AI-assisted student work. While drawing on various ethical perspectives including deontological (rule-based) and consequentialist (outcome-based) approaches, the paper places particular emphasis on virtue ethics as an analytical framework. Virtue ethics, with its focus on character development and internalized ethical dispositions rather than external rules, provides a valuable perspective for examining how educational institutions might foster ethical AI use that extends beyond mere policy compliance.
Data collection involved a systematic search of academic databases including Web of Science, ERIC, and Google Scholar, using search terms related to "AI," "academic integrity," "ethics," "education," "ChatGPT," and "assessment." Additional policy documents were gathered directly from institutional websites of universities that have published comprehensive AI policies. To ensure representation of diverse perspectives, the literature review deliberately included research examining both faculty and student viewpoints across different geographical and institutional contexts.
The analytical process followed an iterative approach of identifying key themes, comparing perspectives, and synthesizing findings to address the research questions. Particular attention was paid to areas of consensus and divergence among stakeholders, emerging ethical frameworks, innovative assessment practices, and potential paths forward for educational institutions. Throughout this process, the analysis sought to balance practical considerations with deeper ethical reflection on the values that should guide AI integration in academic settings.
Limitations of this approach include the rapidly evolving nature of AI technologies, which means some findings may have limited long-term applicability as capabilities continue to develop. Additionally, the reliance on published literature may underrepresent perspectives from institutions or contexts with less published research on these topics. Despite these limitations, this methodological approach allows for a comprehensive examination of the complex and multifaceted ethical issues surrounding AI-assisted student work.

Results / Analysis

Types and Prevalence of AI-Assisted Academic Work

The analysis reveals that students engage with AI tools across a spectrum of activities, from minimal assistance to complete generation of academic work. Student perspectives indicate a significant distinction between what they consider acceptable and unacceptable AI use. According to a study published in the International Journal for Educational Integrity, a majority of students (54.1%) support using tools like Grammarly for grammar and spelling assistance, while 70.4% oppose using ChatGPT to write an entire essay (International Journal for Educational Integrity, 2024). This finding suggests students generally distinguish between supportive use (editing, idea generation, language assistance) and substitutive use (having AI produce complete assignments).
The prevalence of AI use in academic work appears substantial, though precise measurement is challenging due to inconsistent disclosure. One particularly revealing finding comes from research on student compliance with mandatory AI use declarations, which found that up to 74% of students failed to declare their AI usage on academic work (Taylor & Francis Online, 2024). This high non-compliance rate suggests AI use may be more widespread than officially documented, complicating efforts to understand its true prevalence in academic settings.

Student and Faculty Perspectives on Ethical Boundaries

Students and faculty demonstrate both alignment and divergence in their perspectives on AI use. Both groups generally recognize potential benefits of AI in education while expressing concerns about maintaining academic integrity. However, significant nuances emerge in how each group conceptualizes ethical boundaries.
Students exhibit a pragmatic approach to establishing ethical boundaries, focusing on the degree of reliance and purpose of AI use. According to student comments, using AI as "a tool for idea generation, structuring work, or obtaining alternative explanations" is considered acceptable, while "relying entirely on AI to produce academic work was viewed as bypassing the learning process" (International Journal for Educational Integrity, 2024). This suggests students establish ethical boundaries based primarily on preserving authentic learning rather than technical distinctions about AI capabilities.
Faculty perspectives reveal greater concern about maintaining assessment validity and authentic demonstration of learning outcomes. Faculty members expressed that AI tools threaten academic integrity when they compromise "the development of individual writing skills" and create "an uneven playing field among students" (MDPI, 2024). Faculty motivations for addressing AI use include "preparing students for future work by emphasizing skills that require human originality, such as creativity, critical thinking, and the ability to synthesize ideas" (MDPI, 2024).
Common concerns across both groups include fairness issues, the potential devaluation of academic qualifications, and anxiety about unclear expectations. Both students and faculty emphasized the need for clearer guidance on appropriate AI use boundaries.

Institutional Dilemmas and Policy Responses

Educational institutions face significant dilemmas in developing AI policies that balance innovation with integrity. Analysis of institutional responses reveals several approaches and challenges in policy development. Many institutions have moved beyond binary "allow or prohibit" approaches to more nuanced policies. For example, some universities have adopted frameworks that specify contexts where AI is permitted, prohibited, or used under specific conditions. The research indicates a trend toward policies that distinguish between AI as a learning tool versus AI as a substitute for demonstrating learning (International Journal of Educational Technology in Higher Education, 2023).
A key challenge identified across institutional responses is policy enforcement. The high non-compliance rate with AI declaration requirements highlights significant gaps between policy and practice. As one study noted, "ambiguous guidelines and inconsistent enforcement across modules creates a scenario where students face a dilemma: declare and risk negative academic repercussions or withhold information" (Taylor & Francis Online, 2024). This suggests effective policies must not only establish clear guidelines but also create environments where disclosure is perceived as safe and normative.
Institutional approaches to AI detection represent another area of significant dilemma. While some institutions have invested in AI detection tools, research questions their efficacy and ethics. Studies indicate these tools have "high false positive rates" and may disproportionately flag work from "non-native English writers, Black students, and neurodiverse students" (NIU CITL, 2024). These findings suggest an overreliance on technological detection may inadvertently undermine equity goals and erode trust between students and institutions.
More forward-looking institutional approaches focus on educational initiatives rather than punitive measures. For example, some institutions are implementing AI literacy programs that teach ethical AI use alongside traditional academic skills. This approach acknowledges that "limiting access to AI tools or banning their use altogether could disadvantage groups who may rely on these supports (e.g., disabled students, non-native English speakers) and could backfire by encouraging covert use" (International Journal for Educational Integrity, 2024).

Tensions Between Innovation and Integrity

The analysis identifies fundamental tensions between embracing technological innovation and maintaining academic integrity standards. These tensions manifest across multiple dimensions of educational practice. One central tension concerns assessment purposes and methods. Traditional assessment models—designed to evaluate individual knowledge and skills—come into conflict with AI tools that can produce sophisticated work without demonstrating student learning. Researchers note that "assessment strategies that emphasize outcomes (final products) over process may be particularly vulnerable to AI substitution" (MDPI, 2024).
Another significant tension emerges regarding skill development in an AI-augmented environment. Educational stakeholders must balance teaching traditional academic skills with preparing students for workplaces where AI collaboration will likely be normalized, creating friction between maintaining disciplinary traditions and adapting to technological change. A third tension relates to equitable access and support. While AI tools can democratize certain forms of academic support (e.g., writing assistance, explanations of complex concepts), they may simultaneously create new inequities based on technological access, literacy, and the quality of AI outputs, placing institutions that seek to uphold both academic standards and equity goals in a difficult position.
Perhaps most fundamentally, tensions exist between different ethical frameworks for conceptualizing academic integrity itself. As one researcher observed, approaches based strictly on rule compliance may be "limited in addressing the moral ambiguity" introduced by AI, suggesting a need for "frameworks that develop ethical judgment rather than mere rule-following" (ACM Digital Library, 2024).

Ethical Frameworks for AI in Academic Settings

The analysis identified several ethical frameworks being applied to AI use in academic settings, with particular attention to how these frameworks conceptualize academic integrity. Traditional principle-based approaches emphasize specific values like honesty, fairness, and responsibility, typically formalized in honor codes and academic integrity policies. While these remain relevant, research suggests they may be insufficient alone when applied to AI use. As one study noted, "current ethics initiatives in both industry and academia, while often adopted as a checklist of ethical guidelines, do not sufficiently influence actual ethical behavior" (Hagendorff, 2022).
Consequentialist frameworks evaluate AI use based on outcomes rather than adherence to specific rules. Some institutional approaches reflect this perspective by focusing on whether AI use enhances or inhibits learning rather than categorically permitting or prohibiting it. This approach aligns with findings that "the perception of AI declaration as potential self-incrimination suggests that policies may be inadvertently deterring honest disclosure" (Taylor & Francis Online, 2024).
Virtue ethics emerges as a particularly promising framework according to several sources. This approach focuses on developing ethical character and dispositions rather than merely following rules. Research suggests virtue ethics may be especially relevant for AI contexts because it "promotes intrinsic motivation by cultivating internal virtues rather than enforcing external rules, thereby fostering honesty and self-control from within" (ACM Digital Library, 2024).
The virtue ethics approach identifies key virtues particularly relevant to AI ethics, including:
  • Justice: Ensuring fair and equitable AI use that doesn't disadvantage certain students
  • Honesty: Maintaining transparency about AI contributions to academic work
  • Responsibility: Taking accountability for one's academic development regardless of AI use
  • Care: Considering impacts of AI use on the broader academic community (Hagendorff, 2022)
This framework is complemented by "second-order virtues" like prudence and fortitude, which help individuals navigate cognitive biases and external pressures that may lead to unethical AI use (Hagendorff, 2022).

Discussion

Reimagining Academic Integrity for an AI-Enhanced Era

The findings reveal that established conceptions of academic integrity require significant recalibration in response to AI technologies. Traditional academic integrity frameworks were developed in contexts where the boundaries between authorized assistance and unauthorized help were relatively clear. However, AI tools create a new category of assistance that calls for more nuanced approaches. A productive reimagining of academic integrity must move beyond binary categorizations of AI use as either "cheating" or "legitimate." Instead, it should focus on the developmental goals of education and how AI might support or hinder these goals in different contexts. As the research indicates, students themselves distinguish between using AI as a learning aid versus bypassing learning entirely (International Journal for Educational Integrity, 2024). This suggests academic integrity frameworks should similarly focus on the relationship between AI use and learning outcomes rather than categorical prohibitions (Rimban, 2023).
The implications for policy design are substantial. Institutions must develop frameworks that classify different types of AI use based on their relationship to learning goals rather than technical distinctions about the technology itself. For example, policies might distinguish between:
  • AI as a learning tool (e.g., obtaining explanations, checking work)
  • AI as a collaborative partner (e.g., idea generation, feedback on drafts)
  • AI as a substitute for demonstrating learning (e.g., generating entire assignments)
This approach aligns with findings that students seek clearer guidance about what constitutes acceptable versus unacceptable AI use (Taylor & Francis Online, 2024). By focusing on the relationship between AI use and learning objectives, institutions can develop more coherent and pedagogically sound integrity policies.

Faculty Roles and Pedagogical Shifts

The integration of AI necessitates significant pedagogical adaptations from faculty. The findings indicate faculty must balance several complex and sometimes competing responsibilities: maintaining academic standards, preparing students for AI-integrated futures, and designing meaningful learning experiences that remain valid in an AI-enhanced environment. Faculty development emerges as a critical need. Research shows many educators feel inadequately prepared to navigate AI technologies in their teaching. Faculty efforts to integrate AI ethics into their pedagogy are often hampered by "ambiguity in guidelines and inconsistent enforcement across modules" (Taylor & Francis Online, 2024). This suggests institutions must invest in comprehensive faculty development programs that address both technical literacy and ethical considerations regarding AI.
The "Against, Avoid, Adopt, and Explore" (AAAE) framework identified in the research offers a promising approach for faculty to conceptualize their pedagogical options regarding AI (MDPI, 2024). This framework recognizes that different disciplines, courses, and learning objectives may warrant different approaches to AI integration—from maintaining AI-free assessments in some contexts to exploratory AI integration in others. Perhaps most importantly, faculty need to shift from focusing primarily on assessment products to emphasizing learning processes. The findings on assessment redesign consistently highlight the value of "process documentation" approaches that make visible the steps and thinking involved in completing academic work (UMass CTL, 2024). This shift aligns with the virtue ethics framework by emphasizing the development of ethical judgment through transparent process rather than merely evaluating final products.

Reshaping Trust and Assessment in Higher Education

The findings suggest that AI technologies are fundamentally reshaping trust relationships in educational settings. Assessment practices, in particular, face a crisis of credibility when traditional methods can be easily circumvented using AI tools. The high non-compliance rate with AI declaration requirements (74% according to one study) indicates a significant trust gap between institutional expectations and student behavior (Taylor & Francis Online, 2024). This gap suggests current approaches to maintaining academic integrity through detection and punishment may be reaching their practical limits. As one study notes, AI detectors themselves present ethical concerns due to "high false positive rates" and potential bias against certain student populations (NIU CITL, 2024).
A more promising approach involves redesigning assessment practices to focus on what AI cannot readily replicate. The research identifies several effective strategies, including:
  • In-class presentations with dynamic Q&A components
  • Authentic, situated learning experiences tied to personal contexts
  • Multilayered projects requiring sequential development and integration
  • Reflective components that connect learning to personal experiences
  • Process documentation that makes visible the development of work (UMass CTL, 2024)
These approaches share an emphasis on assessment as an integral part of the learning process rather than merely a verification mechanism. This aligns with virtue ethics' focus on developing character through practice rather than merely enforcing compliance through rules.

Moral Ambiguity and Ethical Adaptation

The findings reveal significant moral ambiguity surrounding AI use in academic settings. This ambiguity stems partly from rapidly evolving technology capabilities and partly from genuine ethical complexity about the purposes of education itself. The virtue ethics framework offers valuable insights for navigating this ambiguity. Rather than assuming ethical questions about AI use have clear right or wrong answers, virtue ethics focuses on developing the capacity for ethical judgment itself. This approach acknowledges that students will encounter increasingly complex ethical decisions about technology use throughout their careers—decisions that cannot be reduced to simple rules but require developed ethical sensibilities. The "model-practice-stabilize" approach identified in the research offers a pathway for institutions to foster ethical AI use habits through:
  • Modeling: Faculty demonstrate ethical AI use through transparent examples
  • Practice: Students experiment with AI use in structured, low-stakes contexts
  • Stabilization: Repeated reflection and application help internalize ethical habits (ACM Digital Library, 2024)
This approach views ethical AI use not as a fixed set of rules but as a developmental process. It acknowledges that students need opportunities to practice ethical decision-making about AI in educational settings before facing similar decisions in professional contexts. The integration of AI into educational settings thus requires adaptation not just of policies and assessment but of ethical frameworks themselves. Static, rule-based approaches prove insufficient in contexts where technology capabilities and applications continue to evolve rapidly. Instead, educational institutions must foster ethical adaptability—the capacity to apply core ethical principles to novel and complex contexts.

Conclusion

This study has examined the multifaceted ethical dimensions of AI-assisted student work, revealing a complex landscape that challenges traditional conceptions of academic integrity, assessment practices, and educational ethics. The findings demonstrate that navigating this territory requires moving beyond reactive policies and simplified moral judgments toward more nuanced approaches that balance technological innovation with educational values. Several key insights emerge from this analysis. First, the research reveals significant differences between stakeholder perspectives, with students generally distinguishing between supportive and substitutive AI use while faculty express broader concerns about maintaining authentic learning. Second, current institutional policies often suffer from ambiguity and inconsistent enforcement, leading to high rates of non-compliance with AI disclosure requirements. Third, virtue ethics emerges as a promising framework for fostering ethical AI use by developing intrinsic moral dispositions rather than merely enforcing external rules. Fourth, assessment practices can be redesigned to maintain academic integrity while acknowledging AI technologies through strategies that emphasize process, reflection, and authentic application.
These findings suggest several recommendations for educational institutions seeking to address the challenges of AI-assisted student work:
  • Develop clear, consistent policies that distinguish between different types of AI use based on their relationship to learning objectives rather than technical distinctions
  • Invest in comprehensive AI literacy programs for both faculty and students that address technical capabilities, limitations, and ethical considerations
  • Adopt assessment redesign approaches that emphasize process documentation, personal reflection, and authentic application
  • Create safe disclosure environments where students can honestly declare AI use without fear of automatic penalties
  • Move beyond detection-based enforcement toward educational initiatives that develop ethical judgment
The implications of this study extend beyond immediate policy concerns to fundamental questions about the nature and purpose of education in an AI-enhanced society. As artificial intelligence increasingly permeates academic and professional environments, educational institutions must prepare students not merely to follow rules about technology use but to develop the ethical judgment needed to navigate complex moral terrain. This requires a shift from viewing academic integrity as rule compliance toward understanding it as an aspect of ethical character development. Future research should examine the long-term effectiveness of virtue ethics-based approaches to AI ethics education, explore discipline-specific strategies for assessment redesign, and investigate how educational institutions might better align academic integrity policies with workplace expectations regarding AI use. Additionally, longitudinal studies tracking how student attitudes and behaviors regarding AI evolve over time would provide valuable insights into the developmental aspects of ethical AI literacy.
As educational institutions continue navigating the integration of AI technologies, they must recognize that neither uncritical adoption nor categorical prohibition represents a sustainable path forward. Instead, the ethical integration of AI requires thoughtful consideration of how these technologies might enhance or undermine fundamental educational values. By fostering virtues of justice, honesty, responsibility, and care regarding AI use, institutions can help prepare students for ethical engagement with these technologies both academically and professionally.

References

  1. ACM Digital Library. (2024). Generative AI policy: A pedagogical communication tool and virtue ethics gateway. https://dl.acm.org/doi/fullHtml/10.1145/3641237.3691652
  2. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299. https://www.sciencedirect.com/science/article/pii/S2590291125000269
  3. Evangelista, E. D. L. (2025). Ensuring academic integrity in the age of ChatGPT: Rethinking exam design, assessment strategies, and ethical AI policies in higher education. Contemporary Educational Technology, 17(1), ep559. https://www.cedtech.net/article/ensuring-academic-integrity-in-the-age-of-chatgptrethinking-exam-design-assessment-strategies-and-15775
  4. Frontiers in Education. (2024). Ethical use of ChatGPT in education—Best practices to combat AI-induced plagiarism. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1465703/full
  5. Hagendorff, T. (2022). A virtue-based framework to support putting AI ethics into practice. Philosophy & Technology, 35(38). https://link.springer.com/article/10.1007/s13347-022-00553-z
  6. International Journal for Educational Integrity. (2024). Student perspectives on the use of generative artificial intelligence in higher education. https://edintegrity.biomedcentral.com/articles/10.1007/s40979-024-00149-4
  7. International Journal of Educational Technology in Higher Education. (2023). A comprehensive AI policy education framework for university teaching and learning. https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00408-3
  8. Maral, M. (2024). A bibliometric analysis on academic integrity. Journal of Academic Ethics. https://link.springer.com/article/10.1007/s10805-024-09519-6
  9. MDPI. (2024). Redesigning assessments for AI-enhanced learning: A framework for educators in the generative AI era. https://www.mdpi.com/2227-7102/15/2/174
  10. NIU Center for Innovative Teaching and Learning (CITL). (2024). AI detectors: An ethical minefield. https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/
  11. Rane, N., Shirke, S., Choudhary, S. P., & Rane, J. (2024). Education strategies for promoting academic integrity in the era of artificial intelligence and ChatGPT: Ethical considerations, challenges, policies, and future directions. Journal of ELT Studies. https://www.sabapub.com/index.php/jes/article/view/1314
  12. Rimban, E. L. (2023). Challenges and limitations of ChatGPT and other large language models. International Journal of Arts and Humanities, 4(1), 147-152.
  13. Spector, J. M., Ifenthaler, D., Sampson, D., Yang, L., Mukama, E., Warusavitarana, A., Dona, K. L., Eichhorn, K., Fluck, A., Huang, R., Bridges, S., Lu, J., Ren, Y., Gui, X., Deneen, C. C., San Diego, J., & Gibson, D. C. (2022). Open university learning analytics dataset. Computers and Education: Artificial Intelligence, 3, 100032.
  14. Taylor & Francis Online. (2024). Addressing student non-compliance in AI use declarations: Implications for academic integrity and assessment in higher education. https://www.tandfonline.com/doi/full/10.1080/02602938.2024.
  15. UMass Center for Teaching and Learning (CTL). (2024). How do I (re)design assignments and assessments in an AI-impacted world? https://www.umass.edu/ctl/how-do-i-redesign-assignments-and-assessments-aiimpacted-world