Preprint Article | This version is not peer-reviewed.

Student Perceptions of AI-Assisted Writing and Academic Integrity: Ethical Concerns, Academic Misconduct, and Use of Generative AI in Higher Education

A peer-reviewed article of this preprint also exists.

Submitted: 16 July 2025 | Posted: 23 July 2025


Abstract
The rise of generative AI in higher education has disrupted traditional understandings of academic integrity, moving the focus from clear-cut infractions to evolving ethical judgment. In this study, 401 students from major U.S. universities provide insight into how beliefs, behaviors, and policy awareness intersect to shape how students interact with AI-assisted writing. The findings indicate that students’ ethical beliefs – not institutional policies – are the strongest predictors of perceived misconduct and actual AI use in writing. Policy awareness was found to have no significant effect on ethical judgments or behavior. Instead, students who believe AI writing is cheating were found to be substantially less likely to view it as ethical or engage with it. These findings suggest that many students do not treat AI use in learning activities as an extension of conventional cheating (e.g., plagiarism), but rather as a distinct category of academic conduct/misconduct. Rather than relying on punitive models to deter AI use, this study suggests that education about AI ethics and the risks of AI overreliance may prove more successful in curbing unethical AI use in higher education.
Subject: Social Sciences - Education

Introduction

For students in higher education, the boundaries of academic misconduct have long seemed relatively fixed - cheating, plagiarism, and collusion were clearly defined. But the rise of generative artificial intelligence, particularly large language models (LLMs), has begun to unsettle that map. Unlike plagiarism, which implies a straightforward act of copying without attribution, LLMs produce text that is technically original but procedurally opaque, shaped by probabilistic associations rather than authorship (White et al., 2023). As a result, the outputs of these systems sit uneasily between tool and coauthor, innovation and infraction - a liminal zone where institutional norms have yet to catch up. Research by Lund et al. (2025) suggests that students are navigating this ambiguity without a clear ethical compass, often uncertain where acceptable use of these tools ends and misconduct begins.
This uncertainty is not simply a matter of policy lagging behind technology. It likely reflects a deeper epistemological shift among students in what it means to write, to know, and to learn. For students, LLMs offer powerful new affordances: instant feedback, tailored content, frictionless summaries of complex material (Williamson & Murray, 2024). They expand access to knowledge and scaffold the learning process in ways traditional instruction may not. But with these benefits come ethical frictions - over-reliance, epistemic passivity, and blurred authorship (Zhai et al., 2024). More than just technical tools, LLMs are shaping how students conceive of academic labor, creativity, and intellectual ownership (Darban, 2025). While institutions scramble to update their honor codes and draft AI integrity policies, students are already making choices in real time, guided as much by perceived fairness and personal values as by formal rules.
Understanding these choices - how students interpret the ethics of AI use, what they believe constitutes “help” versus “cheating,” and how these beliefs vary across contexts - is essential for designing pedagogies and policies that are not only effective, but just. These questions do not have easy answers. They require grappling with the tension between technological possibility and academic responsibility, between efficiency and effort, between the right to use tools and the obligation to learn (Yue et al., 2025). In this study, we examine how students position themselves within this evolving landscape, and what their perspectives reveal about the future of academic integrity in an AI-powered world.

Literature Review

AI in Higher Education

Across university campuses, Generative Artificial Intelligence (GenAI) is no longer an emerging novelty. It is here, embedded in the margins of syllabi, lingering in student browsers, shaping drafts before they are even written. Whether higher education institutions choose to formally embrace it or not, the tide of AI adoption is already rising. As Bearman et al. (2022) suggest, GenAI’s presence is becoming less a matter of policy choice and more an inevitability that demands thoughtful navigation. McDonald et al. (2025) describe this moment not as an inflection point but as an integration point, one where strategy, ethics, pedagogy, and imagination must converge.
The literature paints an academic landscape that is marked by both promise and peril. On one side, scholars consistently note GenAI’s usefulness as a flexible, scalable, and often responsive assistant for teaching, learning, and administration. Chiu et al. (2023), Compton and Burke (2023), and Kurtz et al. (2024) all highlight its growing role in tutoring, feedback provision, and even lesson planning. These tools are increasingly integrated into how instructors assess student understanding and where administrators forecast enrollment patterns, refine communications, and streamline operational planning (Zawacki-Richter et al., 2019). For students - particularly at the undergraduate level - GenAI has been described as a kind of always-on tutor: one that explains, rewrites, rephrases, and even reassures (Shahzad et al., 2025). Yet, as Yusuf et al. (2024) observe, the growing body of research pays comparatively less attention to graduate students, whose needs and challenges may differ substantially.
Despite the enthusiasm around its functionality, the literature repeatedly sounds a cautionary note. Ethical dilemmas prompted by AI are neither small nor theoretical. Scholars such as Batista et al. (2024) and Perera and Lankathilaka (2023) point to the blurring line between help and dishonesty, with GenAI enabling new forms of academic misconduct that are difficult to detect. The generation of fabricated citations, a known flaw in some GenAI models, adds another layer of complexity, undermining academic integrity and muddying the waters of authorship and originality (Wang et al., 2024). Kurtz et al. (2024) echo these concerns, noting that cheating facilitated by GenAI poses a serious threat to educational integrity.
Beyond ethical concerns lies a deeper pedagogical question: what happens to student learning when artificial machines become too helpful? Cordero et al. (2025), Lee et al. (2024), and Zhang and Xu (2025) explore the risk of intellectual atrophy, warning that students may become overly dependent on GenAI tools, sacrificing opportunities to develop creativity, reasoning, and problem-solving skills. Batista et al. (2024) reinforce this perspective, while also suggesting that without guided instruction, GenAI may ultimately short-circuit meaningful engagement with course content. And yet, across much of this critical discourse, a student’s own perception of GenAI - their understanding, curiosity, trust, or skepticism - is rarely addressed as a factor shaping its misuse.
There is consensus, however, that GenAI’s role in higher education is no longer optional or something that can be prevented. As the literature repeatedly emphasizes, the path forward is not resistance, but regulation - anchored by institutional policies, ethical guidelines, and shared expectations (Batista et al., 2024; Cordero et al., 2025; Kurtz et al., 2024; Lee et al., 2024; Perera & Lankathilaka, 2023; Wang et al., 2024; Zhang & Xu, 2025). Still, any effective policy must reflect the diversity of higher education institutions and their learners. Yusuf et al. (2024) caution that cultural variability across global institutions makes universal policy frameworks unrealistic. Instead, they argue for context-specific strategies that are sensitive to distinct educational values and ethical frameworks.
Central to this adaptation is GenAI literacy - not only for students but for faculty and staff as well. Farrelly and Baker (2023) emphasize that GenAI cannot be responsibly integrated without equipping educators with the tools to navigate it critically and creatively. They, along with McGrath et al. (2023), highlight frameworks like those developed by Ng et al. and Hillier, which provide structured approaches to building GenAI literacy in academic environments. Zhang and Xu (2025) similarly underline that institutions must invest in upskilling their workforce, not merely to keep pace with technology but to shape its use in pedagogically sound and ethically aligned ways.

Academic Integrity in the Age of AI

Within higher education, academic integrity is often considered a core tenet of an institution’s values. An academic institution must be able to uphold academic integrity within its programs; otherwise, the credibility and quality of the programs it offers become questionable (Balalle & Pannilage, 2025). The International Center for Academic Integrity (ICAI) (2021) currently describes academic integrity as the commitment to the values of honesty, trust, fairness, respect, responsibility, and courage within academic communities. Thus, dishonest practices that threaten academic integrity, such as plagiarism and cheating, are largely frowned upon within higher education, especially at higher degree levels (Șercan & Voicu, 2022). While threats to these core values of academic integrity are far from novel, the rise of generative AI writing tools, such as ChatGPT, has brought reason for concern within academic institutions (Benke & Szőke, 2024; Laflamme & Bruneault, 2025; Sabzalieva & Valentini, 2023). With LLMs able to easily produce detailed writing, they have become a prime tool for cheating in academic settings (Ward et al., 2024).
When discussing students’ use of LLMs and academic integrity, one of the main concerns presented is the development of knowledge and critical thinking skills in students (Cong-Lem et al., 2024; Khatri & Karki, 2023; Balalle & Pannilage, 2025; Salehi et al., 2025; Yeo, 2023). If a student is overly reliant on AI tools to complete assignments, then there is no guarantee that the student is actually gaining any knowledge from the activity (Gupta, 2024; Khatri & Karki, 2023). This raises concern not only in the context of an academic institution but also in the context of the fields in which these students will work in the future; it is especially a concern for subject fields in which assessment is heavily based on the quality of a student’s written work, such as the social sciences, arts, and humanities (Gupta, 2024).
This concern over the development of critical thinking is also in line with concerns about the originality of students’ work. When discussing the use of generative AI for writing content, debates over what constitutes authorship and plagiarism arise (Yeo, 2023). According to Yeo (2023), there are no universally accepted definitions of authorship or plagiarism; however, there is general agreement that “authorship” requires that the work be the person's own original work, and that “plagiarism” involves using someone or something else’s content without credit. While usage of generative AI for essays may not constitute plagiarism, it can reasonably be characterized as “false authorship,” since the student did not write the essay themselves. Functionally, it is almost the same as having a peer write the essay in the student's place.
Additional concerns over the usage of LLMs in education persist as well. The topic of student responsibility is a significant one, especially as responsibility is a core tenet of academic integrity itself (Alioğulları et al., 2025; ICAI, 2021). The accuracy of LLMs is also a topic of discussion, since LLMs can generate unintentional biases and “hallucinations,” or false facts, and present them as truth (Alioğulları et al., 2025; Salehi et al., 2025). Irresponsibility among students can greatly damage the credibility of an academic program if it occurs on a large enough scale, as can a decline in critical thinking skills and accurate knowledge within a student population.
While the rise of AI-generated writing in academia is of great concern, there are potential uses of AI that would still fit within the ideals of academic integrity. Yeo (2023) discusses the usage of writing assistants, such as Wordtune, which do not generate their own content and instead make subtle changes to text that the user has already provided. In discussing the usage of such tools and the legitimacy of one’s authorship, it could reasonably be argued that using a tool such as Wordtune would still allow one to be considered the original author, since they provided the original idea and text and simply used Wordtune to revise their work (Yeo, 2023). These arguments could apply to other tools such as Grammarly, since it only provides revision and editing services and does not fully generate a written work.

Student Perceptions of AI Use and AI Ethics

As AI applications become increasingly prevalent in educational settings, there has been substantial academic interest in students’ perspectives on their purposes and ethical implications. As these technologies become a routine part of educational practice, learners recognize both the benefits and challenges they present, which informs efforts to promote responsible use and maintain academic integrity. This tension between AI’s educational potential and the complexities it introduces provides a critical foundation for developing informed policies, practices, and pedagogies. One study by Lee and Maeng (2023) explored perceptions of AI chatbots among 30 high school students in South Korea. They found that students valued chatbots for their convenience and efficiency, with a mean rating of 4.33 out of 5. Students liked that they could find information without temporal or spatial bounds, and they found the chatbot user-friendly, rating it 3.87 out of 5 for usability. The study also surfaced ethical concerns students had about chatbot systems, particularly around plagiarism and copyright, with a mean rating of 3.80 for concerns about originality and copyright issues. Students also expressed concerns about their personal data being breached, which demonstrates some understanding of privacy risks. Students with no previous experience using a chatbot were more skeptical about ethical issues and educational concerns, such as the potential for over-dependence on chatbots undermining exploratory learning, than students who had previously used chatbots for English language learning. Lee and Maeng recommend that "teachers should provide educational guidance for students to take a critical approach to information provided by the chatbots" (Lee & Maeng, 2023, p. 69). This illustrates the duality of AI as both a positive educational resource and a source of ethical challenges, especially for younger students.
Another study, by Xiao et al. (2023), used a mixed-methods approach to analyze the effect of AI-assisted writing on learners' academic integrity. Their findings showed that students valued AI tools such as ChatGPT for assistance with drafting, but had concerns regarding plagiarism and diminished authentic learning experiences. The authors point out that "some students struggle to put their moral views into words, which reach a path in using assisted writing tools" (Xiao et al., 2023, p. 45). They reported that 60% of participants had used such tools for academic work, while 72% affirmed that such use raises ethical concerns related to cheating. This echoes the complexity and duality of embracing AI, with students seeing its promise as a learning aid. The study recommends educational policies that promote responsible, ethical use of these tools, so that institutions develop guidelines that clarify ethical risks while tapping into the benefits of AI.
Other studies have analyzed student perceptions of AI tools across diverse educational and cultural contexts, finding that cultural context and institutional factors shape attitudes toward AI ethics. For example, Tlili et al. (2023) found that participants in Western contexts were generally more concerned about plagiarism and academic dishonesty, whereas in Asian contexts privacy was ultimately a bigger concern when deciding whether to engage with the technology: "Students in Asia expressed more trepidation about any implications regarding data privacy while acknowledging risks, 68% expressed risk of personal information leakage" (Tlili et al., 2023, p. 12). The study suggested that, globally, 55% of students used AI tools for academic purposes, citing perceived cognitive advantages such as improved writing and problem-solving skills. Yet ethical concerns were ubiquitous regardless of location, with 70% of participants expressing concern about diminished critical thinking. These findings underscore the need for ethical frameworks that can guide students' perceptions toward responsible engagement with AI technologies.
Moreover, a study by Yu (2023) examined the longer-term ethics of AI use in education by surveying 200 undergraduate students who used AI-assisted writing. A notable finding was that 62% of students endorsed the use of AI for academic purposes, but 80% were concerned that students would lose their sense of independent thought through overuse of, or over-reliance on, AI. Yu (2023) states, "Over reliance on AI might elicit ethical and potentially privacy issues that may lessen authorial ownership" (Yu, 2023, p. 5). Students also raised concerns about biased outputs and the presence of prejudiced content in AI systems. Despite these issues, many students recognized the value of AI tools for their effectiveness and efficiency, even while acknowledging the risk of overreliance in educational contexts. The study also found that three-quarters of students reported that AI improved the quality of their writing assignments. Educators of future school leaders will need to foster critical thinking and ethical consideration in students' learning as a counterbalance to the negative impacts of AI. Overall, the study stressed the importance of practitioners examining the short-term implications of AI for education while also attending to its longer-term ethical implications and academic integrity.

Research Problem and Questions

The rapid adoption of AI tools within higher education has prompted concerns about their ethical use, particularly when used in student writing (Wang et al., 2023). While many institutions have responded by developing new academic integrity policies related to student AI use, it remains unclear whether students' awareness of these policies influences their ethical perceptions or their actual use of AI tools. Additionally, the relationship between students’ personal ethical beliefs and their behavior with AI-assisted writing tools is not well understood in the current literature. There is a significant need to explore the ways students interpret and act upon academic integrity standards in the context of AI, and what drives the acceptance or rejection of AI-assisted writing as an ethical academic practice. This problem informs the following research questions:
  • How does students’ awareness of academic integrity policies influence their perceptions of AI-assisted writing?
  • Do students’ perceptions of AI-assisted writing predict their actual AI tool usage?
  • How do policy awareness and ethical beliefs influence students’ perceptions of the severity of academic misconduct involving AI-assisted writing?
  • What are the strongest predictors of whether students perceive AI-assisted writing as an ethically acceptable academic practice?

Methods

A questionnaire was created using an online survey platform and distributed electronically to participants. Eligible participants were students enrolled in higher education institutions who were at least 18 years old at the time of survey completion. The survey was disseminated via email to students at several major universities across the United States, with recipients encouraged to share the survey link with others to expand participation. It remained open from April 1 to May 1, 2024. During this period, 521 responses were collected, of which 401 were valid and complete. These responses were exported from the survey platform into a CSV file for subsequent analysis. The study received approval from the Institutional Review Board (IRB) under protocol number IRB-24-142. All participants provided informed consent prior to beginning the survey, acknowledging their voluntary participation, understanding of the study’s purpose, and the confidentiality of their responses. The research followed all institutional ethical standards for human subjects research.
The questionnaire consisted of questions relating to student demographics - educational status (undergraduate, master's, doctoral), major, residency status (domestic/international), and gender - as well as a series of Likert items related to AI use and perceptions of academic integrity. Among the AI and academic integrity questions, students were asked to rate various activities based on their perceived level of academic misconduct (on a scale from “not academic misconduct at all” to “major academic misconduct”), the seriousness of AI use for various academic activities, and a series of statements related to AI rated on a five-point scale from “strongly disagree” to “strongly agree,” including, “Cheating on assignments is unethical,” “Cheating is okay as long as I don’t get caught,” and “Using AI to help write papers is cheating.”
Survey data was transferred to a spreadsheet for further analysis. A series of regression analyses were performed to ascertain any significant relationships among the variables, according to the four research questions posed for this study. For the first analysis, we looked at the relationship of responses to questions pertaining to AI policy awareness and views of AI use on assignments as ethical. For the second analysis, we examined the relationship between students’ ethical beliefs about AI use and their self-reported engagement in AI-assisted writing. For the third analysis, we assessed how students’ ethical views, beliefs about cheating, and educational level predicted their perception of the seriousness of AI-assisted writing as academic misconduct. For the fourth analysis, we investigated which factors - including views on cheating, perceived misconduct seriousness, and policy awareness - best predicted students’ ethical acceptance of AI-assisted writing.
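As a concrete illustration, the first analysis (policy awareness and ethical beliefs predicting perceived ethicality of AI use) could be reproduced along the lines sketched below. This is a minimal sketch rather than the authors' actual code: the CSV file name and any data-cleaning steps are assumptions, and only the variable names reported in Tables 1-4 are taken from the paper.

```python
# Sketch of the RQ1 analysis: ordinal logistic regression predicting
# ai_use_ethical (1-6 Likert scale). "survey_responses.csv" is a
# placeholder for the exported file of 401 valid responses.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey_responses.csv")

# Dependent variable: perceived ethicality of AI use.
y = df["ai_use_ethical"].astype(int)

# Predictors as reported in Table 1.
X = df[["policy_aware",
        "ai_writing_cheating",
        "cheating_assignments_unethical",
        "edu_status"]]

# Proportional-odds (ordinal logistic) model; no constant is added
# because the thresholds absorb the intercept.
model = OrderedModel(y, X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

The remaining analyses would follow the same pattern with the dependent variables and predictor sets described under each research question below.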

Results

The Role of Policy Awareness in Shaping Ethical Views of AI-Assisted Writing

Universities have issued guidelines to address the ethical use of artificial intelligence (AI) in academic work, yet the effectiveness of these policies remains an open question. Specifically, does knowing the rules actually influence what students believe? To address this, we asked: RQ1: How does students' awareness of academic integrity policies influence their perceptions of AI-assisted writing?
Because the dependent variable—ai_use_ethical—is ordinal (ranging from 1 to 6 on a Likert scale), we used Ordinal Logistic Regression. Results are shown in Table 1.
The regression reveals no significant effect of policy awareness on students’ ethical judgments of AI use (p = 0.379). In other words, knowing the rules doesn’t necessarily shape what students believe is right or wrong. This finding challenges the assumption that institutional policies alone can meaningfully govern students’ ethical perspectives.
Instead, the strongest predictor is the belief that AI writing is cheating. Students who equate AI-assisted writing with academic dishonesty are substantially less likely to view it as ethical (β = -0.262, p < 0.001). The odds ratio (0.77) suggests they are 23% less likely to deem it acceptable. Importantly, this indicates that ethical attitudes are more strongly shaped by internalized beliefs about cheating than by policy exposure.
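For readers converting between the two statistics: under the ordinal logistic model used here, an odds ratio is simply the exponential of the coefficient, OR = exp(β). A quick check reproduces the values cited in the text, with exp(-0.262) ≈ 0.77 for the belief that AI writing is cheating (roughly 23% lower odds per one-point increase in agreement) and exp(-1.056) ≈ 0.35 for the education-level effect reported below (roughly 65% lower odds).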
Interestingly, a student’s broader stance on academic dishonesty - whether they believe cheating is wrong - did not significantly predict how they viewed AI use (p = 0.333). This suggests that students may treat AI writing as a distinct category, not merely as an extension of conventional cheating. It may require its own ethical vocabulary.
Academic level also plays a role. The edu_status coefficient is marginally significant (p = 0.056), with graduate students less likely than undergraduates to see AI-assisted writing as ethical. The odds ratio (0.35) implies they are about 65% less likely to approve. This difference may reflect higher standards of originality and rigor in graduate education.

Do Ethical Beliefs Predict Behavior?

Building on RQ1, we then asked: RQ2: Do students’ ethical perceptions of AI-assisted writing predict their actual use of AI tools?
To measure behavior, we combined three related ordinal variables—ai_writing_full_paper, ai_writing_section, and ai_revision—into a composite measure: ai_assisted_writing. Each item was scored on a 6-point Likert scale. We calculated the mean across all three behaviors and rounded to the nearest whole number, preserving its ordinal nature (Table 2).
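Concretely, the composite could be built as follows; a minimal sketch assuming the three items are stored under the variable names given above, with the CSV file name as a placeholder rather than the authors' actual pipeline.

```python
# Sketch of the ai_assisted_writing composite: mean of the three
# 6-point items, rounded back to an integer so the composite stays
# on the same ordinal 1-6 scale.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # placeholder file name

items = ["ai_writing_full_paper", "ai_writing_section", "ai_revision"]
df["ai_assisted_writing"] = (
    df[items].mean(axis=1)   # mean across the three behaviors
             .round()        # nearest whole number (ties use banker's rounding)
             .astype(int)
)
```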
Students who view AI use as ethical are significantly more likely to engage in it. The negative coefficient for ai_use_ethical (β = -0.309, p < 0.001) indicates that students who perceive AI as ethically acceptable are less likely to frame its use as misconduct. Likewise, those who believe AI writing is cheating are significantly more likely to see AI-assisted writing as problematic.
Again, policy awareness is not a significant predictor (p = 0.549), reinforcing our earlier finding: knowing the rules doesn’t necessarily shape action.
Interestingly, students who use Grammarly Pro are nearly twice as likely to use AI tools in writing (OR = 1.91). Familiarity with AI-powered tools appears to reduce resistance to more advanced writing assistants like ChatGPT.

What Predicts the Perceived Severity of AI Misconduct?

We next asked: RQ3: How do students’ ethical beliefs and policy awareness influence their perception of the severity of academic misconduct involving AI writing? Linear regression was used with this new dependent variable (Table 3).
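A sketch of this step, under the same assumptions as the earlier snippets (placeholder file name, column names taken from Table 3), might look like the following; it is an ordinary least squares fit with an intercept, matching the constant term reported in Table 3.

```python
# Sketch of the RQ3 analysis: OLS predicting the perceived-seriousness index.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # placeholder file name

X = df[["policy_aware", "ai_use_ethical", "ai_writing_cheating",
        "cheating_assignments_unethical",
        "cheating_assignments_hurts_others", "edu_status"]]
X = sm.add_constant(X)  # intercept term, as reported in Table 3
y = df["Perceived_Seriousness"]

ols_result = sm.OLS(y, X).fit()
print(ols_result.summary())
```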
The results reinforce a key theme: ethical beliefs—not policy awareness—are the strongest predictors. Students who see AI-assisted writing as ethical are less likely to view it as serious misconduct (β = -0.164, p < 0.001). Conversely, students who view AI writing as cheating take it more seriously (β = 0.242, p < 0.001).
Broader moral beliefs also matter. Students who view cheating as unethical or harmful to others are more likely to classify AI use as serious misconduct.
Once again, policy awareness does not predict perceived seriousness (p = 0.479). Educational level, however, does: graduate students are significantly more likely to view AI use as serious misconduct (β = 1.049, p < 0.001), reflecting perhaps a heightened sensitivity to academic standards at the graduate level.

What Drives Ethical Acceptance of AI Writing?

Finally, we employed an ordinal logistic regression to address RQ4: What are the strongest predictors of whether students view AI-assisted writing as ethically acceptable?
Two variables emerge as the strongest predictors of ethical acceptance: perceived seriousness and cheating leniency. Students who see AI writing as a serious form of misconduct are far less likely to view it as ethical (β = -0.669, p < 0.001). Those who believe cheating is acceptable if not caught are far more likely to endorse AI writing as ethical (β = 0.629, p < 0.001) (Table 4).
Surprisingly, believing that AI writing is cheating does not significantly predict ethical acceptance (p = 0.517), nor does simply thinking that cheating is unethical (p = 0.123). These results suggest that students may separate abstract ethical ideals from practical judgments about AI.
Additionally, use of Grammarly Pro and educational status do not significantly affect views on AI ethics. Familiarity with AI tools does not automatically imply approval, and graduate students are not categorically stricter—at least not when it comes to ethical acceptance of AI writing.

Discussion

Patterns that have emerged from this study draw a more nuanced picture of student engagement with generative AI than existing policy documents or institutional statements might suggest. In classrooms where the presence of AI tools like ChatGPT or Grammarly Pro is increasingly commonplace, it is not just the policies that matter - it is how students interpret them through their own ethical lenses. Contrary to what some institutional leaders might assume, mere awareness of a university’s AI policy does little to sway student views on whether using AI is ethical or not. The numbers in this study stress that formal guidance often trails behind the informal norms students create for themselves.
This gap is significant because even when students can recite what their institution allows or prohibits, those rules do not necessarily take root in practice. The data shows that students’ ethical beliefs - not their policy knowledge - are what shape their behavior. This echoes earlier findings by Farrelly and Baker (2023) and McGrath et al. (2023), and adds weight to Yusuf et al.’s (2024) argument that the landscape of student-AI interaction cannot be flattened into a single institutional approach. Students arrive with different values, shaped by culture, academic background, and their evolving sense of right and wrong. A rulebook alone cannot account for that complexity.
Some students, for example, expressed a belief that cheating is only unethical if one gets caught. This view sharply predicts more permissive attitudes toward AI-assisted writing. Others, however, appeared to weigh the same behaviors more seriously, guided by an internal compass rather than a fear of surveillance. These diverging worldviews reflect what Xiao et al. (2023) describe as the quiet influence of moral reasoning - an influence often invisible to policy makers, yet powerful enough to determine how technologies are adopted in practice.
One of the more revealing insights of this study came in the form of a contradiction: many students viewed Grammarly Pro, an AI-powered tool, as ethically benign, while viewing other AI tools - particularly those that offer more generative or autonomous capabilities - as suspect. This finding suggests a conceptual murkiness among the students. Students are using AI without necessarily understanding what it is, how it works, or where it crosses ethical lines. The distinction between proofreading assistance and ghostwriting may seem obvious to faculty, but for many students, the boundary is less clear. As AI systems become more embedded and seamless, these boundaries will only blur further.
This is where policy must evolve into pedagogy. Rather than asking students to memorize what’s allowed and what’s not, institutions must invite them to think critically. Embedding GenAI literacy into the curriculum - teaching not just the mechanics of the tools, but the ethics of their use - becomes essential. The call here is not for stricter enforcement but for deeper conversation. What does authorship mean in an age of predictive text? Where does originality begin when suggestions come from an algorithm? These are not technical questions. They are ethical ones.
One demographic detail worth noting is the role of students’ education level. Graduate students - who comprised the majority of respondents to this survey - were significantly more likely to view certain AI uses as serious misconduct. This may suggest that as students progress through their studies, their ethical beliefs evolve. They become more attuned to academic norms, more conscious of professional expectations. This insight aligns with work by Zhang and Xu (2025), who argue that effective policy must be layered, responsive to the different stages of a student’s academic journey. A first-year undergraduate navigating their first semester cannot be expected to internalize integrity in the same way as a doctoral candidate preparing to publish.
This study does have several limitations to note. Our survey carries the risk of social desirability bias - participants might tell us what they think they should believe rather than what they actually do. And the geographic scope is narrow, limited to a handful of U.S. universities. The ethical and behavioral landscape may look very different in other contexts, where cultural norms, institutional histories, and access to AI tools vary widely. Future work might look outward - and forward - asking how these perceptions shift over time and across borders. A longitudinal study, for instance, might track how a student’s understanding of integrity changes as they move from their first paper to their final thesis.
Still, even within these limitations, the message is clear: ethics cannot be outsourced to a policy document. Students live out their beliefs, not their rulebooks. And if we want them to use AI wisely, critically, and with integrity, then we must meet them not just with regulations - but with reflection. We must ask not only what they are using, but why. The future of academic integrity is not written in penalties or permissions. It is instead written in conversations and reflections on ethical practices.

Conclusion

This study adds nuance to the growing discussion around generative AI in education by spotlighting the values that underlie student decision-making. While institutional policies set the outer boundaries of permissible AI usage, it is students’ own ethical beliefs that most powerfully shape their actual use of AI tools. This finding suggests that the core issue does not lie with policy compliance, but rather with moral reasoning. Students are already navigating the gray areas and questioning what qualifies as permissible behavior. Educators who focus solely on monitoring behavior may miss an opportunity to engage with the deeper questions students are asking themselves.
If the goal of higher education is to cultivate responsible, independent thinkers, then teaching students why certain uses of AI can hinder their growth is more effective than simply enforcing rules. Over-reliance on these tools risks dulling critical skills, eroding confidence, and weakening their position in a job market that still values human judgment. As institutions continue to adapt to AI’s expanding presence, the most durable solutions may lie not in the technology, but in the conversations we foster around it. When students are trusted and engaged in discussions about AI ethics, they may commit to more ethical behavior.

Data Availability Statement

Data can be made available upon request to the corresponding author.

Conflicts of Interest

None of the authors have potential conflicts of interest to report for this submission.

References

  1. Alioğulları, E., Tüylü, D., & Sağıroğlu, A. (2025). Examining Artificial Intelligence and Ethics in Education With Bibliometric Analysis. In G. Sart, & F. H. Sezgin, AI Adoption and Diffusion in Education (pp. 1-30). IGI Global Scientific Publishing. [CrossRef]
  2. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11. [CrossRef]
  3. Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and Higher Education: Trends, Challenges, and Future Directions From a Systematic Literature Review. Information, 15(11), 1-27. [CrossRef]
  4. Bearman, M., Ryan, J., & Ajjawi, R. (2022). Discourses of Artificial Intelligence in Higher Education: A Critical Literature Review. Higher Education 86, 369-385. [CrossRef]
  5. Benke, E., & Szőke, A. (2024). Academic Integrity in the Time of Artificial Intelligence: Exploring Student Attitudes. Italian Journal of Sociology of Education, 16(2), 91-108. [CrossRef]
  6. Chiu, T. K. F., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic Literature Review on Opportunities, Challenges, and Future Research Recommendations of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence 4, 1-15. [CrossRef]
  7. Compton, H., & Burke, D. (2023). Artificial Intelligence in Higher Education: the State of the Field. International Journal of Educational Technology in Higher Education 20(22), 1-22. [CrossRef]
  8. Cong-Lem, N., Tran, T. N., & Nguyen, T. T. (2024). Academic Integrity in the Age of Generative AI: Perceptions and Responses of Vietnamese EFL Teachers. Teaching English with Technology, 24(1), 28-47. [CrossRef]
  9. Cordero, J., Torres-Zambrano, J., & Cordero-Castillo, A. (2025). Integration of Generative Artificial Intelligence in Higher Education: Best Practices. Education Sciences, 15(1), 32. [CrossRef]
  10. Darban, M. (2025). The future of virtual team learning: Navigating the intersection of AI and education. Journal of Research on Technology in Education, 57(3), 659-675.
  11. Farrelly, T., & Baker, N. (2023). Generative Artificial Intelligence: Implications and Considerations for Higher Education. Education Sciences, 13(11). [CrossRef]
  12. Gupta, A. (2024). When generative artificial intelligence meets academic integrity: Educational opportunities and challenges in a digital age. British Educational Research Association. Retrieved from https://www.bera.ac.uk/publication/when-generative-artificial-intelligence-meets-academic-integrity.
  13. International Center for Academic Integrity (ICAI). (2021). The Fundamental Values of Academic Integrity (3rd ed.). Retrieved from International Center for Academic Integrity: www.academicintegrity.org/the-fundamental-values-of-academic-integrity.
  14. Khatri, B. B., & Karki, P. D. (2023, December 22). Artificial Intelligence (AI) in Higher Education: Growing Academic Integrity and Ethical Concerns. Nepalese Journal of Development and Rural Studies, 20(1), 1-7. [CrossRef]
  15. Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for Integrating Generative AI into Higher Education: Navigating Challenges and Leveraging Opportunities. Education Sciences, 14(5). [CrossRef]
  16. Laflamme, A. S., & Bruneault, F. (2025). Redefining Academic Integrity in the Age of Generative Artificial Intelligence: The Essential Contribution of Artificial Intelligence Ethics. Journal of Scholarly Publishing, 56(2). [CrossRef]
  17. Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., Lekkas, D., & Palmer, E. (2024). The Impact of Generative AI on Higher Education Learning and Teaching: A Study of Educators Perspectives. Computers and Education: Artificial Intelligence 6. [CrossRef]
  18. Lee, J. E., & Maeng, U. (2023). Perceptions of high school students on AI chatbots use in English learning: Benefits, concerns, and ethical consideration. Journal of Pan-Pacific Association of Applied Linguistics, 27(2), 53–72.
  19. Lund, B. D., Lee, T. H., Mannuru, N. R., & Arutla, N. (2025). AI and academic integrity: Exploring student perceptions and implications for higher education. Journal of Academic Ethics. [CrossRef]
  20. McDonald, N., Johri, A., Ali, A., & Collier, A. H. (2025). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Computers in Human Behavior: Artificial Humans, 100121.
  21. McGrath, C., Pargman, T. C., Juth, N., & Palmgren, P. J. (2023). University Teachers’ Perceptions of Responsibility and Artificial Intelligence in Higher Education- An Experimental Philosophical Study. Computers and Education: Artificial Intelligence 4. [CrossRef]
  22. Perera, P., & Lankathilaka, M. (2023). AI in Higher Education: A Literature Review of ChatGPT and Guidelines for Responsible Implementation. International Journal of Research and Innovation in Social Science 7(6), 306-314. [CrossRef]
  23. Sabzalieva, E., & Valentini, A. (2023). ChatGPT and artificial intelligence in higher education: quick start guide. UNESCO. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000385146.
  24. Salehi, S., Bethlahmey, J., Peterson, D., Newman, E., & Chow, E. (2025). How Medical Students Across the USA Use Generative Artificial Intelligence for Learning: A Cross-Sectional Survey. J Gen Intern Med. [CrossRef]
  25. Șercan, E., & Voicu, B. (2022). Patterns of Academic Integrity Definitions among BA Romanian Students. The Impact of Rising Enrolments. Revista de cercetare și intervenție socială, 78, 87-106. [CrossRef]
  26. Shahzad, M. F., Xu, S., & Zahid, H. (2025). Exploring the Impact of Generative AI-based Technologies on Learning Performance Through Self-efficacy, Fairness, & Ethics, Creativity, and Trust in Higher Education. Education and Information Technologies, 30(3), 3691-3716. [CrossRef]
  27. Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15. [CrossRef]
  28. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in Higher Education: Seeing ChatGPT Through Universities’ Policies, Resources, and Guidelines. Computers and Education: Artificial Intelligence, 7, 2-11. [CrossRef]
  29. Wang, T., Lund, B. D., Marengo, A., Pagano, A., Mannuru, N. R., Teel, Z. A., & Pange, J. (2023). Exploring the potential impact of artificial intelligence (AI) on international students in higher education: Generative AI, chatbots, analytics, and international student success. Applied Sciences, 13(11), 6716.
  30. Ward, A., Manoharan, S., & Ye, X. (2024). Exploring Academic Integrity in the Age of Generative AI. 2024 21st International Conference on Information Technology Based Higher Education and Training (ITHET), (pp. 1-5). [CrossRef]
  31. White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H.,... & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv, arXiv:2302.11382.
  32. Williamson, A., & Murray, J. (2024). So we’re embracing LLMs? Now what?: A study on enhancing feedback in education through generative AI (pp. 2486-2494). EDULEARN24 Proceedings.
  33. Xiao, Y., Li, J., & Chen, H. (2023). Student perceptions of ChatGPT in academic settings: Balancing utility and ethics. Journal of Educational Technology Development and Exchange, 16(1), 35–50.
  34. Yeo, M. A. (2023). Academic integrity in the age of Artificial Intelligence (AI) authoring apps. TESOL Journal, 14. [CrossRef]
  35. Yu, H. (2023). Reflection on whether ChatGPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712. [CrossRef]
  36. Yue, M., Jong, M. S. Y., Dai, Y., & Lau, W. W. F. (2025). Students as AI literate designers: a pedagogical framework for learning and teaching AI literacy in elementary education. Journal of Research on Technology in Education. [CrossRef]
  37. Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the Future of Higher Education: A Threat to Academic Integrity or Reformation? Evidence from Multicultural Perspectives. International Journal of Educational Technology in Higher Education, 21(21), 1-29. [CrossRef]
  38. Zawacki-Richter, O., Marin, V. I., Bond, M., & Gouverneur, F. (2019). Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where are the Educators? International Journal of Educational Technology in Higher Education 16(39), 1-22. [CrossRef]
  39. Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learning Environments, 11(1), article 28.
  40. Zhang, L., & Xu, J. (2025). The Paradox of Self-efficacy and Technological Dependence: Unraveling Generative AI’s Impact on University Students Task Completion. The Internet and Higher Education 65, 1-10. [CrossRef]
Table 1. Ordinal Logistic Regression Results for Perceived Ethicality of AI Use (Dependent Variable: ai_use_ethical).
Predictor Variable Coefficient (β) Std. Error z-value p-value 95% Confidence Interval
policy_aware 0.464 0.528 0.879 0.379 [-0.570, 1.498]
ai_writing_cheating -0.262 0.061 -4.304 <0.001 [-0.382, -0.143]
cheating_assignments_unethical -0.071 0.074 -0.969 0.333 [-0.216, 0.073]
edu_status -1.056 0.552 -1.915 0.056 [-2.138, 0.025]
Note: Statistically significant p-values (p < 0.05) are bolded. The model estimates ordinal logistic regression coefficients, where negative values indicate a lower perceived ethicality of AI use.
Table 2. Ordinal Logistic Regression Results for AI-Assisted Writing Perceptions (Dependent Variable: ai_writing_cheating).
Predictor Variable Coefficient (β) Std. Error z-value p-value 95% Confidence Interval
ai_use_ethical -0.309 0.055 -5.608 <0.001 [-0.417, -0.201]
ai_writing_cheating 0.339 0.071 4.786 <0.001 [0.200, 0.477]
policy_aware 0.367 0.612 0.599 0.549 [-0.833, 1.566]
grammarly_pro_revise 0.645 0.061 10.487 <0.001 [0.524, 0.765]
Table 3. Linear Regression Results for Perceived Seriousness (Dependent Variable: Perceived_Seriousness).
Predictor Variable Coefficient (β) Std. Error t-value p-value 95% Confidence Interval
Intercept (const) 0.277 0.747 0.371 0.711 [-1.191, 1.746]
policy_aware 0.194 0.274 0.709 0.479 [-0.344, 0.732]
ai_use_ethical -0.164 0.024 -6.938 <0.001 [-0.211, -0.118]
ai_writing_cheating 0.242 0.031 7.791 <0.001 [0.181, 0.304]
cheating_assignments_unethical 0.097 0.037 2.595 0.010 [0.024, 0.171]
cheating_assignments_hurts_others 0.168 0.035 4.781 <0.001 [0.099, 0.237]
edu_status 1.049 0.295 3.552 <0.001 [0.469, 1.629]
Table 4. Ordinal Logistic Regression Results for Predictors of Students' Ethical Acceptance of AI-Assisted Writing (Dependent Variable: ai_use_ethical).
Predictor Variable Coefficient (β) Std. Error z-value p-value 95% Confidence Interval
cheating_assignments_unethical 0.142 0.092 1.544 0.123 [-0.038, 0.322]
ai_writing_cheating -0.048 0.074 -0.648 0.517 [-0.192, 0.097]
perceived_seriousness -0.669 0.155 -4.319 <0.001 [-0.973, -0.365]
cheating_ok_if_not_caught 0.629 0.072 8.728 <0.001 [0.488, 0.770]
grammarly_pro_revise -0.034 0.089 -0.379 0.704 [-0.209, 0.141]
edu_status -0.093 0.555 -0.168 0.866 [-1.180, 0.994]