Preprint
Review

This version is not peer-reviewed.

The Influence of Artificial Intelligence (AI) on Academic Integrity in Higher Education

Submitted: 27 January 2026
Posted: 06 February 2026


Abstract
Artificial intelligence (AI) is rapidly changing how educators and institutions view academic integrity. Tools like ChatGPT are raising concerns about plagiarism and ethical misuse. While some view these tools as helpful, others are concerned about academic dishonesty or overreliance. The difficulty of detecting AI-generated work makes enforcing ethical rules more challenging. Moreover, university policies vary across institutions, leading to inconsistencies in enforcement. This study used a narrative review methodology, applying the SANRA scale to assess the quality of 24 scholarly articles and identify key themes regarding the use of AI in higher education. The results show that many students use tools like ChatGPT without understanding the ethical issues involved. Furthermore, faculty struggle to verify student-authored content because AI detection tools lack accuracy in differentiating between human- and AI-generated content. University policies regarding AI usage are inconsistent, with some institutions attempting to integrate AI ethically while others ban it outright. While AI tools have clear benefits, the challenges of their use highlight the need for stronger policies, AI literacy programs, better detection tools, and faculty training in the ethical integration of such technology. AI should support student-centered learning rather than replace critical thinking, and for that, universities must update their academic integrity policies to include the ethical use of AI.
Subject: Social Sciences - Education

1. Introduction

Higher education is being transformed by artificial intelligence (AI), which has become a widely discussed and prominent issue in contemporary academic and educational discourse [1]. Students can use AI to enhance their understanding of language and build problem-solving skills without human interaction [2]. While this can be beneficial, it raises questions about academic integrity, unethical use, and plagiarism [3]. Using another person's ideas or work without attribution is considered plagiarism, but AI is not another person [4]. This calls into question the originality of AI-written work and how to attribute AI-generated content. Source attribution is required in academic writing whenever individuals benefit intellectually from others' ideas or work, even when the benefit carries no specific financial gain [4]. Academic integrity is a fundamental component of educational quality, encompassing principles such as honesty, trust, and ethical behavior [3], but AI-generated content places academic integrity in a new context that has yet to be fully defined [3]. In an educational setting, academic integrity rests on six principles: honesty, trust, fairness, respect, responsibility, and courage [5]. Higher education institutions commonly use honor codes, institutional policies, and formal guidelines to promote academic integrity and regulate student conduct in academic work [5]. However, as AI increasingly shapes higher education, traditional approaches to academic integrity are being challenged or rendered ineffective.

1.1. History of Academic Integrity in Higher Education

For many centuries, academic integrity has been the moral center of higher education [6]. Oral traditions were originally central to transferring knowledge in medieval European universities: the oral defense tested students' capacity to understand the concepts central to their work and to communicate those ideas effectively [6]. The introduction of the printing press in the 15th century fundamentally changed education. From that point on, books were produced in great numbers, containing not only original research but also work that built upon the work of others. As a result, proper citation became crucial for correctly attributing the original author's ideas and work [7]. Guidelines were established to prevent academic dishonesty, and they remain in place today. However, academic misconduct became a growing concern once students could access online resources, and universities responded by implementing more stringent educational policies for exams and written assignments [3]. Plagiarism detection tools were designed to check for written work copied and pasted into documents without proper attribution. Turnitin, introduced in the early 2000s, is one of the most widely used plagiarism detection tools today, and it is quickly becoming the most commonly used AI detection tool as well. Although such tools are practical for detecting copied content, they still struggle to detect AI-generated writing, which is original text even though it is produced by a non-human entity [8].

1.2. The Emergence of Artificial Intelligence in Education

The rise of AI significantly influences higher education. Early AI applications used rule-based tutoring systems [9] that could provide automatic, immediate responses to improve writing quality [10]. A lack of adaptability and flexibility was the main limitation of these early applications [11]; however, once adaptive learning platforms were introduced, students could receive help tailored to their needs [12]. Machine learning entered education in the 2000s [13], enabling AI to track learners' performance [14]. This transformation positively affected learner engagement and learning outcomes [15], and the introduction of AI in education also changed learning models [2]. Today, the use of AI-driven applications for academic tasks is increasing rapidly. ChatGPT, which stands for Chat Generative Pre-trained Transformer, is one of the most popular AI-driven tools; OpenAI officially introduced it in November 2022. ChatGPT is an artificial intelligence-driven tool that produces text outputs in response to user prompts [16]. Owing to its accessibility and adaptability, ChatGPT is significantly transforming academic work, helping students improve their writing, speaking, and research skills [11]. However, this innovative tool is also associated with an increased risk of academic misconduct.

1.3. The Impact of AI on Academic Integrity

Faculty members face significant challenges in detecting AI-created content. AI draws on vast sources of human-generated information to write content, learning from the very sources it uses, which means the content it generates is similar, if not identical, to human-generated content [3]. This makes it very challenging for educators to detect AI-generated text. Although some higher education institutions have created software to detect AI, these tools are still in their early stages and often struggle to differentiate between human-written and AI-generated content [3]. Plagiarism detection tools currently have limited capacity to accurately identify AI-generated content, making careful review of student assignments an important instructional practice [3].
Moreover, students are becoming dependent on AI for academic writing, a significant concern given fears that such reliance diminishes the development of critical thinking and analytical reasoning [17]. Furthermore, many students do not know how to use AI ethically because they have never been instructed in what is and is not allowed. ChatGPT is one of the most popular platforms, and many students use it to complete their homework [17]. Its use raises concerns about how AI might affect independent writing and critical thinking, as many fear students will become overly dependent on AI-generated content [12]. That could easily result in students lacking a deeper understanding of the course content and failing to develop the analytical skills they need to use AI-generated content effectively. Universities are trying to address these challenges by modifying their existing academic misconduct policies to require students to disclose AI use in homework and by creating honor codes for AI usage [12]. However, implementing such policies is difficult because AI tools continue to develop and improve. For this reason, some universities have turned to the tried-and-true methods of oral defenses and in-person exams to ensure that students do their own work and understand the course materials on a deeper level [3]. Faculty members have also turned to altered assessment models that require students to engage in deeper discussion of the material and relate it to their own lived experience, making it difficult to rely entirely on AI-generated content [17].

1.4. AI and the Future of Academic Integrity

As the saying goes, it is difficult to put the genie back in the bottle where AI is concerned [8]. It appears to be here to stay; how it is managed is the real question. Some universities have strict policies for using AI tools, while others have accepted AI as a supplementary learning tool [8]. Many universities have also incorporated student training on the ethical use of AI [12]. University faculty must play an active role in addressing AI's influence on academic misconduct. Some faculty encourage students to use AI only for brainstorming while prioritizing critical thinking and original ideas [6]. Others use AI-driven content in their curricula to teach students how to critically evaluate AI-generated material [2].
Despite its use for content generation, AI also provides new opportunities to enhance academic integrity. Studies have found that AI can be used for meaningful purposes, such as citing work accurately and fact-checking academic writing [7]. Using AI ethically ultimately means treating it as a valuable complement to learning rather than a replacement for human interaction, critical thinking, and reasoned analysis [3].

1.5. Statement of the Problem

There is no doubt that AI is becoming increasingly ubiquitous in all areas of modern life, including education. While AI tools can be used effectively to enhance student academic outcomes, there are valid concerns about the ethical use of tools like ChatGPT and their potential impact on students' capacity for critical thinking and analytical reasoning. This review explores the following research topics related to the use of AI tools in higher education settings: (1) student and educator perceptions of the impact of AI, specifically ChatGPT, on academic integrity; (2) how universities are modifying their academic integrity policies to address the challenges posed by AI tools; and (3) the effectiveness of AI detection tools in identifying and preventing academic misconduct involving AI tools like ChatGPT.

2. Methods

This study adopts a narrative literature review methodology, which allows for a flexible yet rigorous synthesis of existing knowledge. As Sukhera [18] explains, narrative reviews differ fundamentally from systematic reviews by embracing a broader scope and permitting a more interpretive and context-sensitive approach. Rather than adhering to strict inclusion or exclusion criteria, narrative reviews prioritize conceptual coherence and subjective critique, enabling researchers to construct meaning from diverse sources. Rooted in interpretivist and subjectivist paradigms, this methodology acknowledges that reality is socially constructed and synthesis is shaped by the reviewers’ perspectives, historical contexts, and organizational lenses [18]. This approach is particularly suited for exploring under-researched or emergent educational practices, such as ChatGPT’s pedagogical and ethical implications in higher education. Accordingly, this review integrates peer-reviewed literature from ERIC, EBSCOhost, and JSTOR, with iterative refinement of research questions and thematic boundaries throughout the analytical process.
To assess the methodological rigor of the included narrative review articles, this study employed the Scale for the Assessment of Narrative Review Articles (SANRA) developed by Baethge et al. [19]. SANRA is a validated tool explicitly designed to evaluate the quality of non-systematic reviews, which are common in scholarly literature yet often lack standardized appraisal frameworks. The SANRA instrument includes six evaluation criteria: (1) justification of the review’s importance, (2) clarity of stated objectives, (3) description of the literature search, (4) appropriateness of referencing, (5) use of scientific reasoning, and (6) presentation of relevant data. Each item is scored from 0 (low standard) to 2 (high standard), yielding a total score ranging from 0 to 12. The developers provided detailed anchor definitions and scoring guidelines to support consistency across raters. In the validation study, SANRA demonstrated acceptable internal consistency (Cronbach’s alpha = 0.68) and satisfactory inter-rater reliability (intra-class correlation coefficient = 0.77), confirming its feasibility for editorial and academic purposes [19]. This tool was applied in the present review to ensure that selected articles met a minimum quality threshold for inclusion.
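To make the scoring concrete, the short sketch below records the six SANRA item scores and computes the 0-12 total. This is an illustration only: SANRA is a paper-based rating form, and the inclusion threshold shown is our assumption, not part of Baethge et al.'s instrument [19].
```python
# Illustrative sketch of SANRA scoring: six criteria, each rated
# 0 (low standard) to 2 (high standard), summed to a 0-12 total.
SANRA_CRITERIA = [
    "justification of the review's importance",
    "clarity of stated objectives",
    "description of the literature search",
    "appropriateness of referencing",
    "use of scientific reasoning",
    "presentation of relevant data",
]

def sanra_total(scores: dict[str, int]) -> int:
    """Sum the six item scores after validating the 0-2 range."""
    assert set(scores) == set(SANRA_CRITERIA), "score every criterion"
    assert all(0 <= s <= 2 for s in scores.values()), "items are scored 0-2"
    return sum(scores.values())

# Example rating: four items at 2, two items at 1 -> total 10/12.
example = {c: 2 for c in SANRA_CRITERIA[:4]} | {c: 1 for c in SANRA_CRITERIA[4:]}
total = sanra_total(example)
# The cut-off below is a hypothetical quality threshold for illustration.
print(f"SANRA total: {total}/12 -> {'include' if total >= 8 else 'exclude'}")
```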
Five databases were used to search for appropriate articles: Education Source (via EBSCOhost), ERIC, EBSCO, JSTOR, and the ProQuest Education Database. The search terms, combined using Boolean operators (OR, AND), are shown in Table 1.
Table 1. AI and Academic Integrity Search Strategy Table.
| Search Focus | Boolean Query | Purpose / Description |
|---|---|---|
| Academic writing and AI tools | ("Artificial intelligence" OR "AI" OR "ChatGPT") AND "academic writing" | To identify articles that explore how AI tools are used in academic writing. |
| Academic integrity and AI use | ("Artificial intelligence" OR "AI" OR "ChatGPT") AND "academic integrity" | To find literature discussing ethical concerns about AI use in maintaining academic integrity. |
| AI's influence in higher education | ("Artificial intelligence" OR "AI" OR "ChatGPT") AND "higher education" | To explore broader implications of AI in higher education contexts. |
| Academic integrity issues in higher education | "Academic integrity" AND "higher education" | To study how academic integrity is maintained in higher education. |
| Perceptions of AI by faculty and students | ("Faculty perception" OR "student perception") AND ("Artificial intelligence" OR "AI" OR "ChatGPT") | To examine differing views of AI tools between students and faculty. |
| AI detection tools and academic misconduct | "AI detection tools" AND ("academic misconduct" OR "challenges") | To understand how AI detection tools are used to identify misconduct. |
| AI detection tools: solutions and future directions | "AI detection tools" AND ("solutions" OR "future directions") | To investigate proposed improvements and solutions for AI detection tools. |
| Benefits of using AI in education | ("Artificial intelligence" OR "AI" OR "ChatGPT") AND "benefits" | To identify potential advantages of AI integration in educational contexts. |
| Institutional policies on AI and academic integrity | ("Artificial intelligence" OR "AI" OR "ChatGPT") AND ("institutional policies" OR "university policies") AND "academic integrity" | To explore how institutions are shaping AI usage policies tied to integrity. |
| Accuracy and limitations of AI tools | ("Artificial intelligence" OR "AI" OR "ChatGPT") AND ("accuracy" OR "limitations") | To examine the reliability, challenges, and limitations of AI detection technologies. |
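As a small illustration of how the parenthesized search strings in Table 1 can be assembled consistently, the sketch below builds one of the queries. The helper names are ours, and the exact field syntax varies by platform (EBSCOhost, ERIC, JSTOR, ProQuest); the point is that grouping synonyms in parentheses fixes the OR/AND operator precedence.
```python
# Hypothetical helpers for composing Boolean search strings.
AI_TERMS = ['"Artificial intelligence"', '"AI"', '"ChatGPT"']

def any_of(terms: list[str]) -> str:
    """Group synonyms with OR and parenthesize to pin operator precedence."""
    return "(" + " OR ".join(terms) + ")"

def all_of(*clauses: str) -> str:
    """Require every clause with AND."""
    return " AND ".join(clauses)

query = all_of(any_of(AI_TERMS), '"academic integrity"')
print(query)
# ("Artificial intelligence" OR "AI" OR "ChatGPT") AND "academic integrity"
```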

2.1. Inclusion and Exclusion Criteria

The selection of studies for this review was guided by clearly defined inclusion and exclusion criteria, summarized in Table 2. This process helped ensure the relevance and recency of the literature analyzed.
Table 2. Inclusion and Exclusion Criteria for Article Selection.
| Criteria Category | Inclusion Criteria (with Justification) | Exclusion Criteria (with Justification) |
|---|---|---|
| Publication date | Articles published within the last three years (2022–2024) were included to ensure the analysis reflects the most current developments, trends, and policy changes in AI applications within higher education. | Articles published before 2022 were excluded because they may not address recent AI tools like ChatGPT, which emerged prominently after late 2022. |
| Language | Only English-language publications were included to ensure accurate interpretation of nuanced academic language and because the research team is proficient only in English. | Non-English publications were excluded to avoid potential misinterpretation during translation, which could compromise the reliability of thematic analysis. |
| Peer review status | Peer-reviewed journal articles were selected to ensure academic rigor, validity of findings, and credibility of sources, which is vital for maintaining scholarly standards in a narrative review. | Non-peer-reviewed materials such as editorials and blogs were excluded due to their lack of empirical validation and scholarly review. |
| Context of study | Studies explicitly conducted within higher education contexts were included to align with the research focus on university-level academic integrity and AI integration. | Studies focused on K–12 education settings were excluded because their institutional structures, ethical standards, and student profiles differ significantly from higher education. |
| Topical relevance | To maintain thematic consistency, articles had to explicitly discuss the intersection of artificial intelligence (e.g., ChatGPT), academic integrity, or related policy frameworks. | Articles that did not discuss academic integrity, AI tools, or university policy responses were excluded to avoid diluting the review's central focus. |
| Full-text availability | Only full-text accessible articles were included to allow comprehensive reading, quality assessment (via SANRA), and accurate extraction of key insights. | Publications without full-text access were excluded because abstracts alone do not provide enough content to assess methodological quality or conduct a thorough review. |

2.2. Study Characteristics

The final set of 24 studies represented diverse research methodologies, including literature reviews, quantitative analyses, qualitative investigations, and mixed-method designs. These studies varied in scope and regional context. Table 4 presents an overview of each study’s country of origin, research objectives, methodological approach, and key findings.

2.3. Thematic Analysis

The extracted themes were analyzed to assess their relevance within higher education and AI-enhanced learning environments. This thematic synthesis provided insight into recurring patterns, challenges, and innovations across the reviewed literature. Table 5 outlines the primary thematic areas identified.
Thematic analysis in this review was guided by a hybrid approach, combining deductive and inductive strategies. The major themes were pre-determined, aligned with the three central research questions that framed the review. These themes served as an organizing framework for the synthesis of literature. In contrast, the supporting subthemes were developed inductively through iterative reading and analysis of the selected studies. This approach allowed for a structured yet responsive interpretation of the data, accommodating emergent patterns while maintaining alignment with the review’s conceptual focus. Table 5 presents the finalized thematic framework, including each central theme, its associated subthemes, and a summary of structural insights drawn from the literature.

3. Discussion

This review focuses on the impact of AI tools, specifically ChatGPT, on academic integrity in higher education. It examines the differing perspectives of students and instructors on the use of these tools and how AI innovation has driven modifications to academic integrity policies in higher education. It also explores the use and effectiveness of AI detection tools.

3.1. Student and Instructor Perceptions of ChatGPT’s Impact on Academic Integrity in Higher Education

Students and educators have different interests in using innovative tools like ChatGPT. Therefore, it is unsurprising that their perceptions of AI tools would differ.

3.1.1. Students’ Perceptions of ChatGPT and Academic Integrity

Most students reported that ChatGPT is an invaluable tool they can use for brainstorming, idea generation, and writing help [15]. Baek et al. [15] surveyed 1,001 American college students regarding ChatGPT usage and attitudes. Participants were recruited through Prolific, an online platform for high-quality behavioral research data, and qualified if they were full-time students enrolled in an undergraduate degree program at a technical/community college in the United States. Of 1,091 initial recruits, 90 were excluded for failing two attention-check questions. The remaining 1,001 completed a 15-minute anonymized online survey between August and September 2023 and were each paid $3.02, the pay scale recommended by Prolific. The survey contained multiple-choice questions on ChatGPT usage frequency, demographics, and institutional policies, and one open-ended question regarding ChatGPT usage concerns. Baek et al. [15] note, “about one-third of the participants (33.1%) indicated that they use ChatGPT for writing monthly” (p. 3). Students also reported that ChatGPT helps them break down complex topics, making them easier to understand. AI applications like ChatGPT benefit native speakers by providing a valuable writing aid that checks grammar and writing structure and makes writing more straightforward [15]. Some learners also reported that ChatGPT helps them because they receive immediate feedback with explanations, which significantly helps them manage their time [15]. However, many studies show concern about the ethical use of ChatGPT. Despite using ChatGPT in accordance with their university's educational policies, many students still fear that admitting to its use will negatively affect their grades [20]. Others realize they use ChatGPT to finish homework without fully understanding the course materials, indicating that they are becoming over-reliant on AI, which negatively affects their learning outcomes [20].
Since ChatGPT can generate human-like academic text, it raises concerns about academic integrity and plagiarism. Many students use AI-generated text without proper attribution [14], and faculty members consequently face significant challenges in determining the originality of student assignments [14]. Despite these concerns, many students believe that using ChatGPT does not constitute academic misconduct. They think that if they use ChatGPT ethically and adhere to their institution's guidelines, it is simply a valuable learning tool that helps them structure, paraphrase, and revise their writing [11]. A survey from three European universities shows that 39% of university students use ChatGPT occasionally, while 32% use it multiple times a week [11]. Of the students surveyed, 58.7% were satisfied with ChatGPT's responses and 49% trusted ChatGPT-provided information, though 40% were cautious about using ChatGPT's content [11]. In general, it appears that students are unaware of ethical guidelines for AI-generated content and require more explicit university guidelines and a better understanding of how to analyze AI content [11].
One of the rising concerns about ChatGPT is the fear that it will negatively affect students' independent learning skills and critical thinking. Some students reported that AI-generated content is an obstacle to honing their problem-solving skills and that they have difficulty engaging fully with the course content when using ChatGPT [8]. This is especially true for learners who struggle with time management: students who cannot manage their time effectively use ChatGPT to complete assigned work quickly, which hampers their ability to engage deeply with the course materials [8]. However, many students reported that ChatGPT mainly helps them by providing quick feedback, so they do not have to rely on instructors' slower feedback, which may take days or weeks [2]. These students feel they are engaging with the material and using AI to get quick feedback. Moreover, many students use ChatGPT to continue their learning over school breaks. Van Horn [2] investigated Korean university undergraduates' use of ChatGPT in English language learning contexts, particularly for digital writing support. The findings indicate that during the summer semester, 70% had used ChatGPT to support independent English learning, demonstrating a shift toward independent learning practices, and by the start of the fall semester, 87.5% of those users continued to do so, primarily as a supplementary writing resource alongside in-class material. However, ChatGPT's effectiveness always depends on how learners use it in their academic work [2].

3.1.2. Instructors’ Perceptions of ChatGPT and Academic Integrity

Unsurprisingly, educators have a somewhat different perception of students' use of AI. Mamo et al. [13] conducted an exploratory study to examine how higher education faculty perceive the integration of ChatGPT in academic settings. The researchers analyzed a dataset of 2,115 English-language tweets authored by university faculty members between November 30, 2022, and April 30, 2023, using the hashtags #ChatGPT and #highered. They found that university faculty are concerned about AI-driven plagiarism. Higher education institutions are at risk of what Mamo et al. [13] call a cheating epidemic, as AI-generated text is increasingly difficult for faculty to detect, and even existing plagiarism detection tools are often ineffective in identifying AI-generated content. In the study, 51% of faculty members expressed uncertainty about ChatGPT's overall impact on education; 40% perceived ChatGPT as a helpful tool, particularly for enhancing student engagement and creativity; and 9% expressed concerns about AI's originality and ethical use in academic work [13]. Using the Valence Aware Dictionary for Sentiment Reasoning (VADER) and the National Research Council (NRC) Emotion Lexicon, combined with grounded coding, the study identified faculty attitudes and emotional responses toward ChatGPT. Tweets were purposefully sampled and preprocessed to exclude bots, duplicates, and irrelevant content, offering insights into faculty perspectives primarily within English-speaking academic contexts. These findings highlight the nuanced and evolving nature of faculty responses to ChatGPT, ranging from trust and optimism to skepticism and ethical concern [13]. Many educators are also concerned that students may become overly dependent on AI-driven content rather than developing their own problem-solving and critical thinking skills. Faculty members also worry that some students may write whole assignments using AI, negatively affecting their ability to understand and retain knowledge [9].
It is becoming increasingly difficult for faculty to differentiate AI-generated from human-written text. For this reason, many educators now emphasize traditional assessments like in-class discussions and exams to evaluate student performance [9]. Despite the negative consequences, faculty members also mention the benefits of AI in education; many perceive AI as a tool that helps students understand complex ideas and enhances engagement [15].
Kiryakova and Angelova [26] conducted a study at Trakia University in Bulgaria, where they found that 41.4% of faculty members were optimistic about using ChatGPT in teaching. For these educators, ChatGPT is a tool that saves time (60.9%), makes the learning process more engaging (59.8%), and improves learners' creativity and critical thinking (47.1%). However, 73.6% of faculty members remain concerned about introducing ChatGPT into the course curriculum [26]. Some argue that AI should be used under strict guidelines set out in clear educational policies and that learners should be trained to use AI ethically in academic settings [14]. Many educators now emphasize AI literacy training for students, and faculty members are incorporating discussions of AI ethics into syllabi to guide students on AI usage rather than banning it altogether, a prohibition likely to be ignored by many students [8].

3.2. Modifications to Academic Integrity Policies in Response to ChatGPT Challenges

Maintaining academic integrity when using AI tools like ChatGPT is a significant challenge, and it requires clear university policies regarding the acceptable use of these tools.

3.2.1. New Directions for Academic Misconduct

In the wake of AI tools, universities are revising their definitions of academic misconduct. Some universities have taken action to ban AI-generated content, and many higher education institutions now classify it as a form of academic dishonesty [29]. Other universities allow ChatGPT only for generating ideas and brainstorming, not for submitting AI-generated assignments [23]. Many universities now include AI rules in their academic ethics policies, and professors ask students to cite AI-driven content in their research or writing tasks to ensure they have read and understood the rules [12]. Another strategy many higher education institutions use is to maintain transparency by requiring students to disclose whether they used ChatGPT in their assignments [12]. Many universities also offer training programs that teach students how to use AI ethically, helping learners distinguish between academic misconduct and permissible assistance [27]. Universities are likewise training their faculty members on the role of AI in higher education settings; for example, professors are trained to detect AI in writing assignments, and some universities (38.5%) offer professors guidance on addressing AI in their syllabi while prioritizing academic integrity [28].
Faculty members emphasize that ChatGPT should be a learning tool rather than a way to avoid learning [6]. A recent study by Wang et al. [28] found that 53.8% of universities maintain policies on the ethical use of AI. Despite implementing AI detection applications, many universities (56.7%) are concerned about the trustworthiness of these applications [28]. The primary concern with AI detection is the risk of false positives, in which original human-written content is flagged as written by AI [1]. For this reason, many universities do not rely solely on AI detection tools; instead, they treat them as one source of evidence to be weighed alongside other supplementary evidence [12]. Faculty members are encouraged to use strategies that familiarize them with students' writing styles so they will notice sudden irregularities [6]. Wang et al. [28] emphasize that AI literacy involves more than adequate knowledge of AI algorithms; it also enables learners to hone their critical thinking, use AI tools ethically, and adapt to new technological innovations as they arise. Instead of banning AI, many universities are examining how faculty members can use these new tools to make learning fun and more engaging for their students. This approach shifts the focus from punishment to educated guidance on using AI ethically, and many universities now pursue it through AI-supported research projects and practical learning activities. Ateeq et al. [12] indicate that incorporating AI into curricula has many benefits: students learn the real-world use of AI tools, and it promotes interdisciplinary learning, preparing students for careers in AI-based industries.

3.2.2. Institutional Policies on AI Use in Research

In many cases, students must follow citation standards for their AI-generated work. Some universities require them to cite AI-driven text in bibliographies just as they cite other scholarly materials [23]. Most educational institutions that do so emphasize that AI-generated ideas should be supportive research aids rather than replacements for critical analysis [1]. Faculty in the humanities, law, and medicine adhere to stricter rules for using AI generators because these disciplines require critical human judgment, reasoning, and originality [26]. For example, precise interpretation is essential for law students: AI-generated content can be supplementary, but it cannot grasp the legal rationale behind new arguments. In medicine, students must be sure of the accuracy of information to avoid negative consequences for their patients, and faculty members have expressed concern that overdependence on AI-generated content negatively influences learners' ability to analyze medical information critically [26], thereby hampering their ability to make sound clinical decisions. Contextual analysis and creative thinking are vital in the humanities, and AI cannot fully replace these all-too-human abilities. Stricter policies on AI use in these fields ensure that professional training emphasizes developing the necessary qualities without AI assistance [26].
In contrast, disciplines like computer science are embracing AI applications such as ChatGPT. Students in this field use AI for various purposes, such as software development, debugging code, and analyzing algorithms [28]. Analyzing and understanding complex code is key to AI-powered programming, and many higher education institutions are now experimenting with AI-driven models, emphasizing machine learning, and using AI for coding tasks. These activities give students practical knowledge, thereby fostering computational thinking. In data analytics, AI assists learners in automating tasks and working with massive datasets to predict outcomes; machine learning algorithms can then reveal trends and patterns in the data. A main advantage of AI is that it now helps students visualize large datasets and make timely decisions. Because of this, AI is becoming an essential part of these STEM fields [28].

3.2.3. Student Awareness Campaigns and Honor Pledges

Higher education institutions are now promoting awareness campaigns about the ethical use of AI. Some universities discuss AI's ethical use in student orientation programs [6], others add courses that guide students on using AI ethically [27], and still others are modifying their honor codes to ensure that AI is used responsibly. Many universities have mandatory rules requiring students to adhere strictly to academic integrity policies [23], which motivates students to use AI-generated content more responsibly. Universities are also emphasizing privacy rules to protect students' information [28]. AI applications often save user inputs, increasing risks to academic and personal information, so institutions are implementing new policies to protect faculty and student information [28]. For example, universities now ask students not to input unpublished research or sensitive data into AI applications, as many such applications store this information to update their models. Many institutions are also implementing technical measures to secure AI: some restrict outside AI platforms and encourage students to use only university-approved AI applications [28], while others track students' AI use to ensure they adhere to ethical guidelines. Universities are also working with AI developers to promote academic AI applications and ensure these applications do not share, use, or store information [28].

3.3. Effectiveness of AI Detection Tools in Addressing ChatGPT-Assisted Academic Misconduct

With the advent of AI tools like ChatGPT, it has also become necessary to create AI detection tools to prevent academic misconduct. However, the effectiveness of these tools remains limited in educational contexts, and understanding how they work and whether they are genuinely effective is essential.

3.3.1. How AI Detection Tools Work

Tools for detecting AI-generated content rely mainly on analyzing text patterns, the coherence of the content, and syntactic structures to distinguish human-written content from that of AI [24]. The tools utilize statistical models, natural language processing techniques, and machine learning algorithms to perform this analysis [10]. Though this type of analysis can work, the effectiveness of these tools varies widely. While some studies indicate AI-generated content detection can be reasonably accurate, there are problems with false positives, whereby human-written content is flagged as AI-generated, and false negatives, whereby AI-generated content is missed [24]. As Paustian and Slinger [10] found, the ability of AI-detection tools to flag AI content largely depends on the sophistication of the language model that produced the text. More sophisticated large language models (LLMs) can mimic human writing more accurately, which makes detection difficult and increases the likelihood of false positives [10]. The patterns and structures common in less sophisticated AI models become less predictable as the technology improves. Moreover, when humans attempt to disguise AI writing, the detection tools have a more challenging time identifying it as such; unedited AI content is therefore easier to identify than writing that has been edited, and editing increases the likelihood of false negatives [10]. Hence, the authors conclude that detection tools alone are insufficient for determining authorship. These tools are one aspect of a comprehensive policy for AI in educational contexts. Establishing clear guidelines, training students in the ethical use of AI, and guiding faculty members on how to spot suspicious written output are all needed in academic settings to ensure the ethical use of AI tools like ChatGPT [10].

3.3.2. Accuracy and Limitations of AI Detection Tools

Several studies have examined the accuracy of AI detection tools. Elkhatat et al. [24] conducted a comparative analysis of various detection tools, including OpenAI's detector, GPTZero, Copyleaks, Writer, and CrossPlag. They found that these tools were more effective at detecting AI-generated content produced by older AI models but were less accurate with newer versions, and they noted that false positives were a significant concern because human-written content was often flagged incorrectly [24]. Paustian and Slinger [10] also found that human-written work was misclassified as AI-generated in some 12% of the samples included in their study. While 88% of AI-generated content was correctly identified, the authors concluded there is still a high risk of unjust accusations against students [10].

3.3.3. Challenges in Detecting AI-Assisted Writing

Another area of concern is that newer AI models are capable of producing more human-like text. In analyzing RoBERTa-based classifiers trained to detect AI-generated writing, Ibrahim [25] found that these kinds of tools had difficulty with AI-modified texts: even slight editing by humans made AI content challenging to detect, reducing the tools' effectiveness [25]. Kovari [7] also found that ChatGPT-4 and similar AI models can create text with varied sentence structures, reducing detectors' ability to identify AI-generated content. Many of these detection tools rely on perplexity scores, a measure of how predictable a text is to a language model, to identify AI content; as AI models become more complex and mimic natural language better, they can generate human-like content and avoid detection [7]. Kovari [7] noted that this makes detection less reliable. Additionally, students can manipulate AI-generated content so that it bypasses detection: if students paraphrase content generated by AI tools, it is more difficult for detectors to flag the work as AI-generated [3]. This significantly limits the effectiveness of AI detection tools [20].
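To make the perplexity heuristic concrete, the sketch below scores a passage with a small open language model. This is a minimal illustration of the general idea, not the method of any detector cited above; it assumes the Hugging Face transformers library and the GPT-2 model, and the flagging threshold is an assumption chosen for demonstration.
```python
# Minimal sketch of the perplexity heuristic: AI-generated text tends to
# be highly predictable to a language model (low perplexity), while human
# writing is less predictable and usually scores higher.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp(mean next-token cross-entropy)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the average
        # next-token cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Illustrative threshold only; real detectors calibrate on large corpora
# and, as noted above, still produce false positives and false negatives.
THRESHOLD = 40.0
sample = "Artificial intelligence is transforming higher education."
verdict = "possibly AI-generated" if perplexity(sample) < THRESHOLD else "likely human"
print(verdict)
```
As the surrounding text notes, paraphrasing or light human editing raises perplexity, which is precisely why this signal alone is unreliable.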

3.3.4. Effectiveness in Different Academic Fields

The detection of AI-generated content also varies by subject matter. For example, Chen et al. [22] found it more straightforward for AI detection tools to flag AI content in technical disciplines like medicine because the models used to generate that content lack domain-specific reasoning; AI-generated medical papers are therefore more likely to contain factual errors, which makes them easier to identify. In the humanities, however, AI-generated content more closely mimics human writing, which makes detection more difficult [22]. Ibrahim [25] tested AI detection tools that incorporate coherence checks and factual-consistency checks into their classifiers and found that this improved performance, though accuracy still depended on the complexity of the AI model generating the content [25].

3.3.5. Potential Solutions to Improve AI Detection

Several approaches have been proposed to enhance the performance of AI detection tools. For example, Paustian and Slinger [10] suggest embedding invisible watermarks in AI-generated text to serve as unique identifiers that help detection tools differentiate between human and AI writing; however, this would require the cooperation of AI developers, which remains a challenge. Chen et al. [22] propose linguistic fingerprinting, which analyzes an individual's writing style and compares it with AI-generated content. This would involve using a student's previous work, and implementing the approach would require extensive datasets and computational resources [22]. Kovari [7] argues for adaptive AI detection systems, which continuously update their detection algorithms and learn from newer AI models to remain effective. Such continuous adaptation would help prevent current detection tools from becoming obsolete as AI-generated text grows more sophisticated, and it may be the most reasonable current approach [7].
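As a toy sketch of the linguistic-fingerprinting idea, the code below compares a submission's stylometric profile with a profile built from a student's earlier, verified writing. The feature set (function-word frequencies), similarity measure, and sample texts are illustrative assumptions, not the method published by Chen et al. [22].
```python
# Toy stylometric comparison: function-word frequency profiles compared
# with cosine similarity. Real systems would use far richer features.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in `text`."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical inputs: verified past writing vs. the new submission.
prior_work = "The essay argues that the policy was effective in the long run."
submission = "It is argued that the policy is effective, as the data shows."
similarity = cosine(profile(prior_work), profile(submission))
# A low similarity does not prove misconduct; it only warrants human review.
print(f"style similarity: {similarity:.2f}")
```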

3.3.6. Redesigning Assessments to Reduce AI Misuse

Aside from AI detection tools, it is also essential for educators to rethink how they assess students. Akintande [21] argues that educators must shift toward open-ended assessments that emphasize critical thinking and problem-solving, since AI struggles with nuanced reasoning tasks, making the content it generates less effective. It is also vital for educational institutions to establish clear guidelines regarding the ethical use of AI in academic work [21]. Ibrahim [25] warns that false accusations of AI-generated work could negatively affect a student's educational record, making it critical for institutions to implement transparent policies that ensure fairness [25]. This is one reason researchers highlight the need for AI-detection tools to explain why they flagged content: current detectors produce results without detailing the reasoning behind them, and that lack of transparency makes assessing detection results more challenging for students and educators. More work must be done to implement fair policies and ensure accurate results [3].

4. Future Directions in AI Detection

The future of AI detection currently centers on hybrid detection models. Chen et al. [22] argue that combining linguistic analysis, metadata tracking, and watermarking is the best approach; it showed promising results in some early studies but required significant computational resources. Even with these advances, however, AI detection tools remain imperfect [22]. Elkhatat et al. [24] found that although detection tools are valuable, they cannot be relied upon as the only method of addressing AI-assisted academic misconduct. Instead, the authors argue, institutions must adopt a multifaceted approach combining detection tools, updated academic integrity policies, and innovative assessment strategies. As AI models like ChatGPT improve, detection methods must evolve with them. While a combination of AI watermarks, forensic linguistics, and adaptive detection tools may enhance accuracy, it is also vital for educators to rethink assessment strategies and promote the ethical use of AI tools in academia [10]. Institutions must strike a balance between technological detection and academic integrity education, relying on fair policies and well-designed assessments rather than on detection software alone [25].

5. Limitations of the Current Study

While this review has provided valuable insights into the influence of artificial intelligence on academic integrity in higher education, several limitations must be acknowledged. First, the study relies mainly on secondary sources, so it lacks empirical data gathered directly from students and faculty. Although the literature review is extensive, the absence of primary data limits its ability to capture current perceptions and evolving attitudes toward AI technology. Second, the study focuses on textual AI applications, specifically ChatGPT; other tools, such as image and code generators, also contribute to academic misconduct but remain unexplored here. A broader examination of AI across academic disciplines is needed to understand its impact on academic integrity. Third, the study does not explore regional or institutional differences in AI policies. Educational institutions take varying approaches to AI regulation, and a comparative analysis would provide more generalizable findings. Fourth, this study lacks a technical evaluation testing the accuracy and reliability of existing AI detection tools, which is needed for a more comprehensive understanding of them. Finally, the study largely assumes that AI tools negatively affect academic integrity and does not explore their potential benefits in depth. AI tools can be employed ethically for learning, research, and writing assistance, and future research should explore how institutions can harness their strengths while mitigating the risks. These limitations highlight the need for more empirical research and a multidimensional analysis of AI's role in academic integrity.

6. Gaps in the Literature and Future Research Directions on AI and Academic Integrity

The growing presence of artificial intelligence (AI) in education calls for further research to explore its ethical implications and effects on academic integrity. While current studies address challenges associated with AI-generated content, future investigations should focus on empirical research, policy comparisons, and technological progress to provide a deeper understanding. First, collecting primary data through surveys of students and faculty is crucial. Qualitative interviews would also offer valuable insights into perspectives on AI ethics and academic honesty. Second, research should extend beyond text-based AI tools like ChatGPT. As AI is increasingly applied in coding, image generation, and data analysis, concerns about academic integrity arise across these fields. Expanding the scope of research would offer a more comprehensive view of AI’s role in academia. Moreover, cross-institutional policy analysis is needed. Different universities worldwide have varying AI regulations, from strict bans to full integration. Comparative studies could identify best practices and help create standardized guidelines for maintaining academic integrity. Additionally, research should assess the effectiveness of AI detection tools. Many institutions rely on AI detection software, but its reliability remains uncertain. Future studies should systematically test detection models to evaluate their ability to distinguish between human- and AI-generated content. Lastly, exploring the potential benefits of AI in education is essential. When used ethically, AI could enhance personalized learning and support academic research. Understanding how to integrate AI responsibly will enable institutions to navigate its challenges while preserving academic integrity. Future research should be interdisciplinary, data-driven, and policy-focused to develop balanced approaches to AI in education.

7. Conclusions

Artificial intelligence is genuinely transforming the concept of academic integrity in higher education. Naturally, students and faculty have divided views on the subject, with some seeing AI as a valuable tool while others fear it will be misused. Many students rely on AI to complete their assignments, raising concerns about independent learning. Faculty members struggle to assess student work for originality, which makes academic honesty more difficult to enforce. Plagiarism detection is becoming more challenging, and traditional tools often fail to identify AI-generated texts. Moreover, many students unknowingly cross ethical boundaries when using AI because they have not been provided clear guidance on what is acceptable. University policies on the subject vary widely, which leads to inconsistencies in enforcing policies about academic honesty.
Additionally, AI evolution outpaces institutional responses. AI literacy education therefore needs to be stronger, policies should be reviewed and updated regularly, and clear guidelines are needed to ensure responsible use. Detection tools, while sometimes helpful, must be improved for accuracy and fairness, and faculty need to employ complementary methods to recognize AI-assisted writing, redesigning their assessments to encourage critical thinking, independent writing, and originality. When used ethically, AI can be a valuable support tool, but institutions need to focus on providing students and faculty with proper guidance on the ethical use of such tools. Only with transparent academic integrity policies can universities help students and faculty navigate the increasing role of AI in learning. Universities must also be ready to adapt quickly to AI's rapid evolution to keep their academic integrity policies up to date. Through the responsible use of AI, higher education can find a way to maintain integrity while embracing such innovative technological progress.

References

  1. Cotton, D.R.E.; Cotton, P.A.; Shipway, J.R. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International 2024, 61(2), 228–239.
  2. Van Horn, K.R. ChatGPT in English Language Learning: Exploring Perceptions and Promoting Autonomy in a University EFL Context. TESL-EJ 2024, 28(1).
  3. Balalle, H.; Pannilage, S. Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences and Humanities Open 2025, 11.
  4. Fishman, T. "We know it when we see it" is not good enough: Toward a standard definition of plagiarism that transcends theft, fraud, and copyright. 2009. Available online: http://www.opsi.gov.uk/RevisedStatutes/Acts/ukpga/1968/cukpga_19680060_en_1#pb1-l1g1.
  5. Șercan, E.; Voicu, B. Patterns of Academic Integrity Definitions among BA Romanian Students: The Impact of Rising Enrolments. Revista de Cercetare si Interventie Sociala 2022, 78, 87–106.
  6. Galindo-Domínguez, H.; Campo, L.; Delgado, N.; Sainz de la Maza, M. Relationship between the use of ChatGPT for academic purposes and plagiarism: The influence of student-related variables on cheating behavior. Interactive Learning Environments 2025, 33(6), 4047–4061.
  7. Kovari, A. Ethical use of ChatGPT in education—Best practices to combat AI-induced plagiarism. Frontiers in Education 2024, 9.
  8. Espartinez, A.S. Exploring student and teacher perceptions of ChatGPT use in higher education: A Q-Methodology study. Computers and Education: Artificial Intelligence 2024, 7.
  9. Karkoulian, S.; Sayegh, N.; Sayegh, N. ChatGPT Unveiled: Understanding Perceptions of Academic Integrity in Higher Education - A Qualitative Approach. Journal of Academic Ethics 2025, 23(3), 1171–1188.
  10. Paustian, T.; Slinger, B. Students are using large language models and AI detectors can often detect their use. Frontiers in Education 2024, 9.
  11. Žáková, K.; Urbano, D.; Cruz-Correia, R.; Guzmán, J.L.; Matišák, J. Exploring student and teacher perspectives on ChatGPT's impact in higher education. Education and Information Technologies 2025, 30(1), 649–692.
  12. Ateeq, A.; Alzoraiki, M.; Milhem, M.; Ateeq, R.A. Artificial intelligence in education: Implications for academic integrity and the shift toward holistic assessment. Frontiers in Education 2024, 9.
  13. Mamo, Y.; Crompton, H.; Burke, D.; Nickel, C. Higher Education Faculty Perceptions of ChatGPT and the Influencing Factors: A Sentiment Analysis of X. TechTrends 2024, 68(3), 520–534.
  14. Fajt, B.; Schiller, E. ChatGPT in Academia: University Students' Attitudes Towards the Use of ChatGPT and Plagiarism. Journal of Academic Ethics 2025, 23(3), 1363–1382.
  15. Baek, C.; Tate, T.; Warschauer, M. "ChatGPT seems too good to be true": College students' use and perceptions of generative AI. Computers and Education: Artificial Intelligence 2024, 7.
  16. Welskop, W. ChatGPT in Higher Education. International Journal of New Economics and Social Sciences 2023, 17(1), 9–18.
  17. Isiaku, L.; Muhammad, A.S.; Kefas, H.I.; Ukaegbu, F.C. Enhancing technological sustainability in academia: Leveraging ChatGPT for teaching, learning and evaluation. Quality Education for All 2024, 1, 385–416.
  18. Sukhera, J. Narrative Reviews: Flexible, Rigorous, and Practical. Journal of Graduate Medical Education 2022, 14(4), 414–417.
  19. Baethge, C.; Goldbeck-Wood, S.; Mertens, S. SANRA—a scale for the quality assessment of narrative review articles. Research Integrity and Peer Review 2019, 4(1).
  20. Acosta-Enriquez, B.G.; Arbulú Ballesteros, M.A.; Arbulu Perez Vargas, C.G.; Orellana Ulloa, M.N.; Gutiérrez Ulloa, C.R.; Pizarro Romero, J.M.; et al. Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among Generation Z university students. International Journal for Educational Integrity 2024, 20(1).
  21. Akintande, O.J. Artificial versus natural intelligence: Overcoming students' cheating likelihood with artificial intelligence tools during virtual assessment. Future in Educational Research 2024, 2(2), 147–165.
  22. Chen, Z.; Chen, C.; Yang, G.; He, X.; Chi, X.; Zeng, Z.; et al. Research integrity in the era of artificial intelligence: Challenges and responses. Medicine 2024, 103, e38811.
  23. Eke, D.O. ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology 2023, 13.
  24. Elkhatat, A.M.; Elsaid, K.; Almeer, S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity 2023, 19(1).
  25. Ibrahim, K. Using AI-based detectors to control AI-assisted plagiarism in ESL writing: The Terminator Versus the Machines. Language Testing in Asia 2023, 13.
  26. Kiryakova, G.; Angelova, N. ChatGPT—A Challenging Tool for the University Professors in Their Teaching Practice. Education Sciences 2023, 13(10).
  27. Naznin, K.; Al Mahmud, A.; Nguyen, M.T.; Chua, C. ChatGPT Integration in Higher Education for Personalized Learning, Academic Writing, and Coding Tasks: A Systematic Review. Computers 2025, 14.
  28. Wang, H.; Dang, A.; Wu, Z.; Mac, S. Generative AI in higher education: Seeing ChatGPT through universities' policies, resources, and guidelines. Computers and Education: Artificial Intelligence 2024, 7.
  29. Harrad, R.; Keasley, R.; Jefferies, L. Academic integrity or academic misconduct? Conceptual difficulties in higher education and the potential contribution of student demographic factors. Higher Education Research and Development 2024, 43(7), 1556–1570.
Table 4. Summary of Included Studies and Key Characteristics.
No. In-text Citation Country Participants Aim of the Study Method Key Finding
1 (Acosta Espartinez et al., 2023) [20] Philippines University students To explore and categorize perceptions of ChatGPT use in Philippine HEIs among students and teachers Quantitative Three perception types were identified: Ethical Tech Guardians, Balanced Pedagogy Integrators, and AI Enthusiasts. Views varied; recommendations included ethics, localization, and critical thinking.
2 (Akintande et al., 2023) [21] Nigeria University students To assess ChatGPT’s opportunities and challenges in Nigerian higher education Mixed-methods ChatGPT offers promise in enhancing learning, but challenges include ethical concerns, misinformation, and plagiarism risks.
3 (Ateeq et al., 2023) [12] Bahrain University students & faculty To explore AI’s impact on academic integrity and the shift to holistic assessments. Quantitative Educational Impact (EI) had the strongest positive effect on Academic Outcomes (AO).
4 (Baek et al., 2024) [15] USA 1,001 college students Investigate ChatGPT usage, perceptions, and institutional policy awareness Quantitative survey Varied attitudes emerged: higher-income students viewed ChatGPT more positively; concerns included job loss and institutional punishment, highlighting equity issues in AI use.
5 (Balalle et al., 2023) [3] Cross-country Not applicable (systematic review of 25 studies focused on higher education) To systematically review the impact of AI on academic integrity in education Systematic literature review AI helps and harms academic integrity; ethical use and institutional safeguards are needed to maintain integrity.
6 (Chen et al., 2023) [22] Cross-country Not applicable (narrative review; broad focus incl. K–12 and higher education) To examine how AI impacts research integrity, including risks like plagiarism and data fabrication Narrative literature review AI enhances research but introduces new misconduct risks; stronger ethics training, policy, and global cooperation are needed.
7 (Cotton et al., 2023) [1] UK Not empirical (authors + ChatGPT generated content). Focus on higher education context. To explore opportunities and challenges of ChatGPT in higher education and assess risks to academic integrity Conceptual/theoretical essay (partially generated by ChatGPT, validated and edited by authors) ChatGPT presents both opportunities (e.g., engagement, personalized assessment) and threats (e.g., plagiarism); proactive strategies like AI detection tools, student education, and assessment redesign are necessary.
8 (Eke, 2023) [23] Cross-country University students / higher education To analyze learning experiences with ChatGPT in higher education Conceptual ChatGPT risks undermining academic integrity but offers potential academic value if used responsibly.
9 (Elkhatat et al., 2023) [24] Qatar Paragraphs generated by ChatGPT-3.5 and ChatGPT-4, plus human-written samples To evaluate the efficacy of AI content detection tools in differentiating between human- and AI-generated text Experimental study Detection tools are better at identifying GPT-3.5 content but inconsistent with GPT-4 and human text, showing the need for tool improvement.
10 (Acosta-Enriquez et al., 2024) [20] Peru University students To assess the knowledge, attitudes, concerns, and perceived ethics regarding the use of ChatGPT among Generation Z university students in Peru. Quantitative Knowledge and attitudes did not significantly influence ChatGPT usage, but usage significantly impacted students’ concerns (β = 0.802) and perceived ethics (β = 0.856). No moderating effects were found for gender or age.
11 (Fajt et al., 2023) [14] Hungary University students To examine attitudes toward ChatGPT and its relation to plagiarism in academia. Mixed-methods Students found ChatGPT easy to use and moderately useful, but expressed concerns about plagiarism risk.
12 (Galindo Domínguez, 2023) [6] Spain University students To examine the relationship between the frequency of using ChatGPT for academic purposes and levels of plagiarism, and whether student-related variables (e.g., motivation, cheating culture) moderate this relationship. Quantitative A higher frequency of ChatGPT use correlated with plagiarism but did not causally predict it. Cheating culture and amotivation were stronger predictors of plagiarism.
13 (Van Horn, 2024) [2] South Korea Students and faculty To explore Korean university students’ perceptions of ChatGPT in English language classes and whether short-term training can promote long-term autonomous use. Qualitative Most students expressed positive attitudes toward ChatGPT, showing improved confidence, engagement, collaboration, and autonomous learning. A majority continued using it months after the training ended.
14 (Ibrahim et al., 2023) [25] Kuwait 240 essays (120 human-written; 120 ChatGPT-generated) To evaluate the potential of two RoBERTa-based classifiers in detecting AI-assisted plagiarism in ESL writing Quantitative Both AI detectors could identify AI-generated texts, but detection accuracy was inconsistent
15 (Isiaku et al., 2023) [17] Cross-country University lecturers To investigate the role, benefits, and challenges of using ChatGPT in higher education teaching, learning, and assessment. Literature Review ChatGPT can enhance teaching and assessment by supporting personalized learning, feedback, lesson planning, and student engagement, but ethical concerns such as data privacy, misuse, and academic integrity require attention.
16 (Karkoulian et al., 2023) [9] Lebanon Higher education students and faculty To explore students’ perceptions toward the use of AI chatbots like ChatGPT in education Qualitative Students saw ChatGPT as useful for saving time and enhancing learning, but raised concerns about ethics and content accuracy
17 (Kiryakova et al., 2023) [26] Bulgaria University professors To explore professors’ familiarity, attitudes, and concerns about using ChatGPT in teaching. Quantitative Professors see ChatGPT as helpful for saving time and student engagement, but fear misuse like plagiarism and overreliance.
18 (Kovari et al., 2023) [7] Cross-country N/A (Opinion paper) To outline best practices in education to address ethical challenges and plagiarism risks posed by ChatGPT. Opinion paper The study recommends a multi-layered approach to prevent AI-assisted plagiarism, including clear AI policies, creative assessments, AI-detection tools, educational campaigns, and reflective tasks to promote academic integrity.
19 (Mamo et al., 2024) [13] USA Higher education faculty To explore faculty perceptions of ChatGPT in higher education and identify factors influencing those perceptions. Sentiment analysis using VADER and NRC 40% positive, 51% neutral, 9% negative; top emotions—trust, joy, fear, anger.
20 (Naznin, 2023) [27] Australia Not applicable (systematic review) To systematically review how ChatGPT is integrated into higher education for personalized learning, academic writing, and coding tasks, and to identify associated challenges. Systematic review ChatGPT enhances personalized learning through real-time feedback and adaptive support, aids in academic writing and coding, but raises concerns about accuracy, overreliance, academic integrity, and privacy.
21 (Paustian et al., 2023) [10] USA University students To investigate how college students use large language models (LLMs) and evaluate the effectiveness of AI detectors in identifying AI-generated text Mixed-methods 46.9% used LLMs; 7.2% used them to write full essays; detectors identified AI text with ~88% accuracy (a 12% error rate), making them unreliable as standalone tools.
22 (Wang et al., 2023) [28] United States STEM students To examine students’ perceptions and self-reported use of ChatGPT and their association with academic performance Quantitative 59.2% reported using ChatGPT; higher GPA was associated with more responsible and strategic use of ChatGPT; concerns about academic integrity were noted
23 (Welskop, 2023) [16] Cross-country Not applicable (narrative review) To explore the concerns, challenges, and implications of ChatGPT in higher education Narrative review paper ChatGPT aids learning but raises concerns about bias, plagiarism, and critical thinking decline.
24 (Zakova et al., 2023) [11] Three European countries (Slovakia, Portugal, Spain) Higher education students To explore student and teacher perspectives on ChatGPT’s impact in higher education across multiple areas Quantitative survey Students viewed ChatGPT positively for learning support but had concerns about accuracy, ethics, and assessment
Table 5. Major Themes, Supporting Themes, and Structural Breakdown.
Major Themes (Aligned with RQs) Supporting Themes (Co-Themes) Structural Breakdown
RQ1: Use of ChatGPT in Academic Writing Enhancing writing support and feedback Students commonly use ChatGPT for grammar correction, paraphrasing, summarization, brainstorming ideas, and improving sentence structure. It serves as a digital tutor that provides constant assistance, particularly for students with limited language proficiency or writing anxiety.
Autonomy and efficiency in writing ChatGPT facilitates self-paced learning by providing immediate feedback, enabling students to work independently without waiting for instructor responses. This supports time management, reduces writing pressure, and encourages continuous learning outside of class hours, such as during breaks or holidays.
Overreliance and skill erosion Excessive dependence on ChatGPT for completing academic tasks may inhibit students’ ability to develop original arguments, analytical reasoning, and academic writing skills. It can reduce deep engagement with content, foster surface-level learning, and potentially widen skill gaps in critical thinking and problem-solving.
RQ2: Perceptions of AI Tools Student attitudes toward AI Most students express positive attitudes toward ChatGPT as a helpful study aid. However, ethical understanding varies widely: many are unaware of proper attribution practices or fear disclosing AI use even when it is permitted. Some perceive it as a tool to bypass learning, while others use it to enhance understanding.
Faculty attitudes and concerns Faculty members are divided: some see potential for engagement and pedagogical innovation, while others emphasize risks such as AI-driven plagiarism, diminished originality, and ethical misuse. There is growing concern about fairness in assessment and the erosion of student accountability.
Pedagogical responses Instructors are modifying assessments to reduce AI misuse—e.g., using oral defenses, personalized writing tasks, reflective assignments, or in-class exams. Faculty are also embedding AI ethics and literacy into curricula to guide responsible student use and build critical awareness.
RQ3: Institutional and Policy Responses Evolving academic integrity policies Universities are updating academic honesty guidelines to include AI-specific clauses. These include mandatory AI use disclosures, clear citation rules for AI-generated content, honor code revisions, and AI literacy campaigns. Policy strictness varies widely by institution and region.
AI detection and assessment strategies AI detectors (e.g., GPTZero, Turnitin AI) are adopted with mixed effectiveness: some flag human text as AI (false positives) or miss edited AI text (false negatives). As a result, many institutions promote redesigned assessments focused on personalized learning, creativity, and critical thinking to reduce AI dependency.
Training and disciplinary adaptations Institutions offer workshops and course modules for students and faculty on ethical AI use, policy interpretation, and detection tool limitations. Disciplinary norms shape AI adoption: for example, law, medicine, and the humanities often impose stricter AI restrictions, while STEM fields integrate AI tools into practical tasks.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.