Preprint
Article

This version is not peer-reviewed.

Not Just Answers: ChatGPT’s Expanding Role in Supporting Student Motivation

Submitted: 06 April 2026
Posted: 08 April 2026


Abstract
This study investigated how college students’ use of ChatGPT has evolved from a tool for simply getting answers to one that supports motivation and learning engagement. Using a novel survey instrument based on Self-Determination Theory, we measured six types of motivation (three intrinsic, two extrinsic, and amotivation) across two cohorts of students (2023 and 2025). Results revealed statistically significant increases in both intrinsic and extrinsic motivation over time. While early use of ChatGPT focused on convenience and task completion, students increasingly reported using it to overcome mental blocks, build momentum, and stay engaged with their academic work. Follow-up analyses of repeat participants and comparisons by class standing suggest these trends reflect broader shifts in student engagement, not just cohort or experience differences. Cluster analysis revealed distinct motivational profiles, highlighting variation in how students incorporate ChatGPT into their academic routines. These findings suggest that as generative artificial intelligence becomes more familiar, accessible, and sophisticated, students are integrating it more deeply into their learning processes, not just for answers, but for sustained motivation and support.
* Correspondence: alang@oru.edu

1. Introduction

Since OpenAI’s release of ChatGPT in November 2022, the landscape of higher education has undergone rapid and unprecedented transformation. Large language models (LLMs), like ChatGPT, offer students real-time access to coherent, context-sensitive, linguistically sophisticated, and logically well-structured responses across a broad range of academic domains. By early 2023, ChatGPT had already become a common, if unofficial, companion in many students’ learning routines, used to generate ideas, summarize readings, structure essays, translate concepts, and support software development.
This widespread adoption was highlighted at the 2025 AI Ascent Conference, where OpenAI CEO Sam Altman described a “generational divide on AI tools,” likening the moment to the release of the first iPhone. He observed that many young adults now use ChatGPT not just for academics, but as an “operating system, life advisor, and organizational tool” [1]. These trends reflect emerging research showing that students are forming increasingly personalized relationships with AI, using it not just to complete tasks, but also for motivation, emotional support, and strategic life planning [2,3,4,5].
Despite this rapid shift, little is known about how such tools influence students’ academic motivation over time. To address this gap, the present study investigates whether and how students engage with ChatGPT not only for task completion, but also as a potential source of academic motivation.

1.1. ChatGPT’s Impact on Student Motivation and Autonomy

A growing body of research is examining how generative AI influences student motivation. Several studies position motivation as a key outcome, grounded in well-established psychological models. Almulla [6] found that ChatGPT use enhanced motivation, particularly in collaborative learning environments. Drawing on Self-Determination Theory (SDT), Chiu [7] and Ng et al. [8] showed that AI-supported activities can help satisfy students’ needs for autonomy, competence, and relatedness. Hmoud et al. [9] similarly reported increased enjoyment, engagement, and perceived relevance.
Other studies highlight ChatGPT’s practical benefits: Budu and Oteng [10], Ilić and Ivanovic [11], and Qi et al. [12] found that students appreciated its ability to simplify difficult concepts, increase efficiency, and support independent learning. These findings align with current U.S. Department of Education guidance, which emphasizes AI’s potential to promote personalized, adaptive, and socially engaging learning environments, especially for self-directed and at-risk learners [13].
However, concerns remain. Fan et al. [14] and Ye et al. [15] caution that ChatGPT may reduce cognitive effort and promote “metacognitive laziness.” Deng et al. [16] found that although students reported higher engagement, ChatGPT use did not improve academic confidence. Zogheib and Zogheib [17] noted that external factors like peer influence often outweighed intrinsic motivation. Broader reviews link AI-supported learning to increased convenience, but also to shifting expectations around effort and responsibility [18,19]. Additional concerns include misinformation, academic dishonesty, and weakened critical thinking [20,21,22]. Scholars such as Abramson [23] and Hasanein and Sobaih [24] argue that these risks can be mitigated through structured integration and guided reflection.

1.2. Theoretical Framework: Self-Determination Theory

Self-determination theory, developed by Ryan and Deci [25], is a widely respected framework for understanding the quality of human motivation. It identifies three core psychological needs (autonomy, competence, and relatedness) as essential for well-being and sustained engagement. When these needs are met, individuals tend to perform better and experience greater satisfaction; when they are not, motivation often declines [25,26,27]. A 2021 meta-analysis of 144 studies involving over 79,000 students affirms SDT’s importance in educational contexts [27].
SDT frames motivation as a continuum from amotivation to extrinsic and intrinsic motivation. Intrinsic motivation involves doing something for its own sake. Amotivation reflects disengagement or a sense of futility. Extrinsic motivation includes external regulation (e.g., doing something for grades), introjected regulation (guilt or pride), identified regulation (recognizing personal value), and integrated regulation (alignment with one’s identity) [25]. As motivation becomes more internalized, it becomes more autonomous. Studies show that autonomy-supportive teaching [27] and competence-building feedback [26] help facilitate this shift.
Satisfying these psychological needs has been shown to improve academic success in both in-person and online learning environments [27,28]. For example, students given meaningful choices and supportive feedback are more likely to develop confidence and sustained interest. Wang et al. [28] found that need satisfaction in both delivery modes enhances motivation, reinforcing the importance of strong instructor-student relationships.

1.3. Generative AI Through the Lens of SDT

As generative AI tools like ChatGPT become more deeply embedded in academic life, their influence on student motivation can be examined through the lens of SDT. ChatGPT can support intrinsic motivation by helping students master complex tasks (e.g., writing or coding) [29,30], fueling curiosity by clarifying concepts [31], and making learning more emotionally engaging through creative prompts [32]. It can also support extrinsic motivation, for example, by helping students stay on track with deadlines (external regulation), or by linking schoolwork to personal goals such as career preparation (identified regulation) [9,25,33,34]. For students experiencing amotivation, ChatGPT may help reduce overwhelm by breaking tasks into manageable steps, though over-reliance risks undermining competence and long-term engagement [25,33].
This study examines how ChatGPT influences student motivation by analyzing its relationship to six SDT-based motivational types, offering insights into both its educational benefits and limitations.

2. Materials and Methods

2.1. Participants

A total of 113 undergraduate students enrolled in computing and mathematics programs participated in the study across two academic terms, yielding 141 survey submissions: Fall 2023 (n = 68) and Spring 2025 (n = 73). Participation was voluntary and offered as extra credit in a departmental seminar course required of all majors each semester. Limited demographic data were collected, and there were no exclusion criteria.

2.2. Measures

Participants completed an 18-item motivation survey designed to assess their experiences using ChatGPT for academic purposes. Each item was rated on a 4-point Likert scale ranging from 1 (Strongly Disagree) to 4 (Strongly Agree). The items were organized into six subscales, each reflecting a distinct type of motivation grounded in SDT and contextualized for ChatGPT use:
  • Accomplishment: Intrinsic Motivation Toward Accomplishment (e.g., Q2: “ChatGPT provides valuable tips to help me maintain my motivation when working toward long-term goals.”);
  • Desire to Know: Intrinsic Motivation Based on the Desire to Know (e.g., Q6: “I turn to ChatGPT for advice on specific learning methods that enhance the joy of acquiring knowledge.”);
  • Stimulation: Intrinsic Motivation Based on the Desire to Experience Stimulation (e.g., Q8: “With ChatGPT’s input, I incorporate activities and approaches that infuse excitement into the learning process.”);
  • Rewards: Extrinsic Motivation Through Rewards and Constraints (e.g., Q10: “ChatGPT suggests creative ways to reward myself when I achieve specific milestones or goals.”);
  • Value: Extrinsic Motivation Based on Personal Value (e.g., Q15: “With ChatGPT’s insights, I reframe tasks to make them more personally significant.”);
  • Amotivation (e.g., Q17: “ChatGPT provides guidance on overcoming a persistent lack of motivation and regaining a sense of purpose.”).
Each subscale consisted of three items, with scores calculated as the mean of their respective responses. Items were adapted from the Academic Motivation Scale (AMS-C 28) by Vallerand et al. [35] to specifically reference ChatGPT use in academic settings. Wording was modified to reflect interactions with the tool (e.g., seeking guidance, generating ideas) while preserving the original intent of each motivational construct. The adaptation process focused on aligning item content with SDT’s motivational types and with contexts in which ChatGPT might influence students’ academic behavior, such as setting goals, exploring new topics, or connecting tasks to personal values.
Internal consistency was assessed using Cronbach’s alpha. Reliability coefficients were acceptable to excellent for most subscales: Accomplishment (α = .77), Stimulation (α = .86), Value (α = .90), and Amotivation (α = .86). The Desire to Know subscale demonstrated marginal reliability (α = .66). The Rewards subscale showed relatively lower internal consistency (α = .58), although reliability improved substantially when one item (Q12) was removed (α = .75), suggesting a potential mismatch between this item and the others in the subscale. However, we opted to retain Q12 in the analysis to preserve content coverage and consistency with the original construct definition (see Limitations).
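The subscale scoring (mean of three items per respondent) and the Cronbach’s alpha reliability check can be sketched as follows. The study’s analyses were conducted in R; this Python snippet, using hypothetical Likert responses, is only an illustrative re-implementation of the standard alpha formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-point Likert responses for one three-item subscale
subscale = np.array([
    [3, 3, 4],
    [2, 2, 2],
    [4, 3, 4],
    [1, 2, 1],
    [3, 4, 3],
], dtype=float)

alpha = cronbach_alpha(subscale)        # internal consistency of the subscale
scores = subscale.mean(axis=1)          # per-student subscale score (mean of items)
```

With real data, the same function applied after dropping one item would reproduce the kind of item-removal check described for Q12.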

2.3. Procedure

The survey was administered online and took less than 10 minutes to complete, minimizing participant burden. Although usernames were used to identify repeated responses across semesters, participant identities were not traced. All data were de-identified prior to analysis.

2.4. Data Analysis

Descriptive statistics were computed for each motivation category by year. Independent-samples t-tests were used to examine differences in motivation scores between 2023 and 2025, and Cohen’s d was calculated to estimate effect sizes. To account for repeated participation among the 28 students who completed the survey in both years, linear mixed-effects models were estimated for each motivation category, with Year as a fixed effect and respondent (username) as a random intercept. We also conducted exploratory comparisons between 2025 freshmen and upperclassmen to assess whether class standing and prior exposure to ChatGPT shaped motivational patterns. All analyses were conducted using R version 4.5.0 [36].
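As a concrete illustration of the between-cohort comparison, the independent-samples t statistic and Cohen’s d (pooled standard deviation) can be computed directly. The actual analyses were run in R; the scores below are hypothetical:

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

def t_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Student's independent-samples t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (b.mean() - a.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical subscale scores for two cohorts
y2023 = np.array([2, 3, 2, 3], dtype=float)
y2025 = np.array([3, 4, 3, 4], dtype=float)

d = cohens_d(y2023, y2025)
t = t_statistic(y2023, y2025)
```

The p-value then follows from the t distribution with n1 + n2 − 2 degrees of freedom (e.g., via `scipy.stats`); the mixed-effects models reported below additionally include a per-respondent random intercept, which these simple formulas do not capture.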

2.5. Data Availability

Both the dataset and survey instrument are available under a CC0 open license to support reproducibility [37].

3. Results

3.1. Descriptive Statistics

Mean scores for most motivation types were above the scale midpoint of 2.5, indicating general agreement. The exceptions were value and amotivation in 2023, which fell slightly below this threshold, suggesting that students initially used ChatGPT less frequently to align tasks with personal values or during periods of low motivation. By 2025, however, mean scores for all six motivation types exceeded 2.5, suggesting increased agreement with statements about ChatGPT’s potential to support academic motivation across multiple dimensions. As shown in Table 1, the highest scores were observed for desire to know, followed by accomplishment, stimulation, rewards, and value, with amotivation remaining the lowest. This rank order was consistent across both years.
From 2023 to 2025, average scores increased across all six motivation types (see Figure 1). The largest gains were observed in rewards and amotivation, while more modest increases were noted in accomplishment, desire to know, stimulation, and value (see Table 1).

3.2. Inferential Analysis

Independent-samples t-tests were used to compare 2023 and 2025 scores for each motivation type. As shown in Table 2, statistically significant increases were observed in five of the six categories: accomplishment, desire to know, rewards, value, and amotivation (p < .05). The increase in stimulation was non-significant at the .05 level (p = .073). Effect sizes ranged from small to medium, with the largest effect observed for rewards (Cohen’s d = 0.63).
To account for repeated participation, linear mixed-effects models were estimated for each motivation type. As presented in Table 3, all six models revealed significant increases in motivation scores from 2023 to 2025 (p < .05), including stimulation, which had not reached significance in the independent-samples t-test. Because the mixed-effects approach accounts for within-subject variance, it provides a more sensitive test, suggesting that the increase in stimulation is also reliable.
To further explore whether changes in motivation were influenced by prior college exposure to ChatGPT, we examined class standing among 2025 participants. Specifically, we compared 2025 freshmen (n = 23), who may have used ChatGPT in high school but were new to using it in a college context, with upperclassmen (sophomores, juniors, and seniors). Independent-samples t-tests revealed no significant differences in motivational scores between these two groups. This finding suggests that the year-over-year increases observed across all six motivation types were not driven solely by repeated college use or class standing but may reflect broader shifts in how students relate to generative AI.

3.3. Cluster Analysis

To complement the main analyses, an exploratory cluster analysis was conducted to examine patterns in students’ motivation profiles. Standardized responses to the 18 survey items were grouped using K-means clustering, yielding a three-cluster solution that balanced interpretability and internal consistency. Summary statistics for each cluster are presented in Table 4.
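K-means alternates between assigning each respondent to the nearest cluster centroid and recomputing centroids as cluster means, applied here after z-standardizing the item responses. A minimal numpy sketch of this procedure (the study itself used R; the six-respondent, two-item toy data below are hypothetical, whereas the study clustered all 18 items):

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, n_iter: int = 50):
    """Minimal Lloyd's algorithm; initialized with the first k rows
    for reproducibility. Returns (labels, centroids)."""
    centroids = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # distance of every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # assign to nearest centroid
        for j in range(k):
            if np.any(labels == j):            # recompute non-empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical responses forming two visibly distinct groups
X = np.array([[0, 0], [0.5, 0], [0, 0.5],
              [5, 5], [5.5, 5], [5, 5.5]], dtype=float)
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-standardize, as in the study
labels, centroids = kmeans(Z, k=2)
```

In practice one would compare solutions across several values of k (and several random initializations) before settling on the three-cluster solution reported here; PCA is then used only to project the clusters into two dimensions for visualization.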
Cluster 1 represented students with low motivation scores across all categories. Cluster 2 showed high scores across all types, including amotivation, while Cluster 3 reflected moderate motivation. Average GPA was highest in Cluster 3 and lowest in Cluster 2, though differences were small.
Figure 2 presents a Principal Component Analysis (PCA) visualization that reinforces these insights: Cluster 1 appeared compact and distinct; Cluster 2 showed greater dispersion, suggesting some heterogeneity among highly motivated students. Cluster 3 was also cohesive and fell between the two extremes.
Overall, the clusters highlight the complexity of motivational patterns in the context of ChatGPT use. While some students reporting high motivation also reported lower academic performance, the relationship between motivation and GPA should be interpreted with caution. The survey assessed self-reported motivation, not learning outcomes or ChatGPT usage patterns. GPA is also shaped by multiple factors, including course design, grading policies, and assessment types. As such, this analysis is exploratory and intended to generate questions for future research, not to draw definitive conclusions about the role of ChatGPT in academic achievement.

4. Discussion

This study provides insight into how students’ motivational relationships with generative AI tools like ChatGPT have evolved across two academic years. In 2023, student engagement appeared primarily task-oriented, focused on curiosity, goal completion, and support with specific academic challenges. By 2025, students reported broader and more personalized engagement patterns, incorporating ChatGPT into their learning routines in ways that extended beyond traditional academic assistance.
These findings align with emerging work suggesting that ChatGPT is becoming integrated into students’ academic lives [1,2,3,4,5]. The upward shift in motivation scores from 2023 to 2025 suggests that students increasingly perceived ChatGPT as a helpful resource for managing academic demands and sustaining engagement.
Consistent with prior research [6,7,8], the strongest motivational ratings were observed in the domains of desire to know and accomplishment, reflecting intrinsic motivation grounded in curiosity and personal challenge, two key constructs in Self-Determination Theory (SDT) [25]. These patterns align with studies showing that timely, responsive support can enhance students’ sense of competence and autonomy [9,29]. That said, we cannot isolate whether these experiences stemmed from ChatGPT alone or from a combination of instructional support, peer interaction, and tool use.
The significant increase in rewards-related motivation between years suggests growing student interest in using ChatGPT to organize tasks, meet deadlines, and stay productive. While extrinsic motivation is often seen as less ideal than intrinsic motivation, SDT highlights that when extrinsic motivation is internalized, such as through identified regulation, it can meaningfully support persistence and self-regulation [10,12].
One of the more unexpected findings was the increase in amotivation. While this could signal disengagement, it may also suggest that students are turning to ChatGPT when they feel stuck, overwhelmed, or uncertain. This aligns with prior work suggesting that AI tools can serve as a coping mechanism or first step toward re-engagement during periods of low motivation [16,17]. Rather than indicating detachment, higher amotivation scores may reflect students using ChatGPT as a kind of psychological triage, a way to regain traction when academic energy runs low.
The cluster analysis added a tentative layer of insight, showing that students with the highest motivation scores were not necessarily the highest performers. While speculative, this finding supports the idea that motivation alone does not guarantee academic success. Factors such as prior preparation, learning strategies, and time management likely influence GPA just as much, if not more, than AI use. Additionally, the presence of elevated amotivation in highly motivated students may reflect internal conflict, over-reliance on tools, or a shift in how students define success, not solely in terms of grades, but in terms of progress, control, or personal growth [14].
Taken together, our findings reflect a shift in how students use ChatGPT, not simply to “get the answer,” but as a tool for managing motivation, regulating effort, and navigating the academic experience. Students increasingly describe ChatGPT as part of their motivational routines, emotional coping strategies, and self-regulated learning practices. These patterns suggest a deeper, more relational use of AI, echoing Altman’s [1] observation that for many young adults, ChatGPT is becoming more than a writing aid; it is evolving into an everyday academic companion.
By grounding our analysis in SDT, we offer a structured framework for interpreting these motivational dynamics. This approach clarifies why some students may thrive with AI support, while others struggle, particularly if they lack the readiness, context, or critical skills to engage productively. Motivation, in this context, appears dynamic and context-sensitive, shaped not only by personality or trait-level dispositions, but also by experience, feedback, and tool use. Additionally, comparisons between 2025 freshmen and upperclassmen revealed no significant motivational differences, suggesting that observed year-to-year increases were not solely due to prior exposure or class standing but reflect broader trends in how students engage with generative AI.
But motivation is not enough. Without a strong foundation in the subject matter or the ability to critically evaluate AI-generated responses, students may be prone to surface-level learning or even burnout. In short, prompting alone is not pedagogy. As ChatGPT becomes more powerful and more personal, capable of tracking goals, preferences, and usage history, educators must be equally intentional about how it is introduced and integrated into the academic experience.
If we want students to move from “let AI do it” to “let AI support me as I grow,” we must design learning environments that make that shift possible. That means teaching evaluative judgment, scaffolding AI use in stages, and treating ChatGPT not as a shortcut, but as a catalyst for deeper learning. This study offers early evidence that when students use ChatGPT with purpose, they are more likely to stay engaged, feel capable, and find meaning in their academic work.
As AI continues to evolve, so too must our educational practices, grounded not in fear or novelty, but in long-standing commitments to motivation, mastery, and meaningful learning.
Limitations
Several limitations should be considered when interpreting the results of this study. First, the sample consisted exclusively of undergraduate computing and mathematics majors, a group not representative of the broader university population. These students may hold different attitudes toward technology, particularly generative AI, than students in other disciplines. While it is sometimes assumed that STEM majors are more comfortable with advanced technologies, there is also evidence suggesting that technically proficient students may be more critical or skeptical of AI tools, particularly with regard to accuracy and overreliance.
Second, the study relied on a self-report survey administered online and incentivized with extra credit. Although participation was voluntary, the structure of the incentive and the lack of random sampling may have introduced self-selection bias, with students who had strong feelings, positive or negative, about ChatGPT more likely to respond. Additionally, the use of Likert-scale items can limit the depth of insight into students’ nuanced experiences and may not fully capture the complexity of their motivational processes.
Third, the motivation survey was adapted from the Academic Motivation Scale (AMS-C 28) to fit the context of ChatGPT use. While the adaptation was theoretically grounded in SDT, the items were not validated as a new instrument. One subscale in particular, Rewards, demonstrated relatively low internal consistency. We retained all three items, including Q12, to preserve content validity and maintain balance across subscales, but this decision may have introduced measurement noise that attenuated results related to that subscale.
Another limitation involves the evolving nature of the ChatGPT platform itself. Between 2023 and 2025, the underlying model underwent significant updates, improving its capabilities and user experience. As a result, some changes in motivational responses may reflect improved functionality rather than shifting attitudes. The rapid pace of model development presents a moving target for research, and findings from one version of ChatGPT may not generalize to future iterations or to other generative AI tools.
Finally, while the study used linear mixed-effects models to account for repeated measures among a subset of students, the partially overlapping sample still presents analytical challenges. Differences between the two cohorts may be influenced by cohort-specific factors unrelated to AI use, such as course load, external stressors, or instructor differences. The absence of detailed demographic data, such as gender and ethnicity, further limits the ability to analyze subgroup differences or contextual influences.
Future studies should include more diverse student populations, incorporate qualitative data to explore motivation in greater depth, and consider longitudinal designs that follow the same students over time. Expanding beyond STEM majors and tracking interactions with evolving AI models will also strengthen the generalizability and relevance of future findings.

Author contributions

Conceptualization, APT; methodology, ASIDL and APT; formal analysis, ASIDL and EFVA; data curation, ASIDL; writing—original draft preparation, JAH, CEK, ASIDL, LL, APT, EFVA, and JHRW; writing—review and editing, ASIDL and SRW; visualization, ASIDL and EFVA. All authors have read and agreed to the published version of the manuscript.

Conflict of interest

Andrew S.I.D. Lang is a member of the editorial board of Artificial Intelligence and Education but was not involved in the peer review or editorial decision-making process for this manuscript. All other authors declare no conflicts of interest.

References

  1. Altman S. OpenAI’s Sam Altman on Building the ‘Core AI Subscription’ for Your Life [Video]. YouTube; 2025. Available online: https://www.youtube.com/watch?v=ctcMA6chfDY.
  2. Annamalai N, Ab Rashid R, Hashmi UM, et al. Using chatbots for English language learning in higher education. Comput Educ Artif Intell. 2023;5:100153. [CrossRef]
  3. Annamalai N, Bervell B, Mireku D, et al. Artificial intelligence in higher education: Modelling students’ motivation for continuous use of ChatGPT based on a modified self-determination theory. Comput Educ Artif Intell. 2025;8:100346. [CrossRef]
  4. Cronjé J. Exploring the role of ChatGPT as a peer coach for developing research proposals: Feedback quality, prompts, and student reflection. Electron J e-Learn. 2023;22(2):1–15. [CrossRef]
  5. Chen BX. It’s not just a chatbot, it’s a life coach. N Y Times. 2023 Jun 26. Available online: https://www.nytimes.com/2023/06/23/technology/ai-chatbot-life-coach.html.
  6. Almulla MA. Investigating influencing factors of learning satisfaction in AI ChatGPT for research: University students perspective. Heliyon. 2024;10(11):e32220. [CrossRef]
  7. Chiu TKF. A classification tool to foster self-regulated learning with generative artificial intelligence by applying self-determination theory: A case of ChatGPT. Educ Technol Res Dev. 2024. [CrossRef]
  8. Ng DTK, Tan CW, Leung JKL. Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study. Br J Educ Technol. 2024. [CrossRef]
  9. Hmoud M, Swaity H, Hamad N, et al. Higher education students’ task motivation in the generative artificial intelligence context: The case of ChatGPT. Information. 2024;15(1):33. [CrossRef]
  10. Budu J, Oteng K. Exploring students’ use and outcomes of ChatGPT. Proc ISCAP Conf. 2024;10(6185). https://iscap.us/proceedings/2024/pdf/6185.pdf.
  11. Ilić J, Ivanovic M. The impact of ChatGPT on student learning experience in higher STEM education: A systematic literature review. In: Proc 2024 21st Int Conf IT-Based Higher Educ Training (ITHET); 2024 Nov; IEEE. [CrossRef]
  12. Qi C, Tang Y, Lei Y. Does feedback from ChatGPT help? Investigating the effect of feedback from both teacher and ChatGPT on students’ learning outcomes. In: Proc 30th Americas Conf Info Syst (AMCIS); 2024. https://aisel.aisnet.org/amcis2024/is_education/is_education/29.
  13. Cardona MA, Rodríguez RJ, Ishmael K. Artificial intelligence and the future of teaching and learning: Insights and recommendations. U.S. Department of Education, Office of Educational Technology; 2023. https://coilink.org/20.500.12592/rh21zz.
  14. Fan Y, Tang L, Le H, et al. Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. Br J Educ Technol. 2024. [CrossRef]
  15. Ye JH, Zhang M, Nong W, et al. The relationship between inert thinking and ChatGPT dependence: An I-PACE model perspective. Educ Inf Technol. 2025;30(3):3885–3909. [CrossRef]
  16. Deng R, Jiang M, Yu X, et al. Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies. Comput Educ. 2025;227:105224. [CrossRef]
  17. Zogheib S, Zogheib B. Understanding university students’ adoption of ChatGPT: Insights from TAM, SDT, and beyond. J Info Technol Educ Res. 2024;23:25. [CrossRef]
  18. Ansari AN, Ahmad S, Bhutta SM. Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Educ Inf Technol. 2024;29(9):11281–11321. [CrossRef]
  19. Quintanilla Villegas MA, Pineda Rivas EE. Evidences from the literature on the motivations, consequences, and concerns regarding the use of artificial intelligence in higher education. Environ Soc Manag J. 2025;19(3). [CrossRef]
  20. Ali D, Fatemi Y, Boskabadi E, et al. ChatGPT in teaching and learning: A systematic review. Educ Sci. 2024;14(6):643. [CrossRef]
  21. Jo H. From concerns to benefits: A comprehensive study of ChatGPT usage in education. Int J Educ Technol High Educ. 2024;21(35). [CrossRef]
  22. Woerner JHR, Turtova AP, Lang ASID. Transformative potentials and ethical considerations of AI tools in higher education: Case studies and reflections. In: Proc SoutheastCon 2024; 2024; Atlanta, GA, USA. p. 510–515. [CrossRef]
  23. Abramson A. How to use ChatGPT as a learning tool. Monit Psychol. 2023;54(4). https://www.apa.org/monitor/2023/06/chatgpt-learning-tool.
  24. Hasanein AM, Sobaih AEE. Drivers and consequences of ChatGPT use in higher education: Key stakeholder perspectives. Eur J Investig Health Psychol Educ. 2023;13(11):2599–2614. [CrossRef]
  25. Ryan RM, Deci EL. Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemp Educ Psychol. 2020;61:101860. [CrossRef]
  26. Deci EL, Olafsen AH, Ryan RM. Self-determination theory in work organizations: The state of a science. Annu Rev Organ Psychol Organ Behav. 2017;4:19–43. [CrossRef]
  27. Bureau JS, Howard JL, Chong JX, Guay F. Pathways to student motivation: A meta-analysis of antecedents of autonomous and controlled motivations. Rev Educ Res. 2022;92(1):46–72. [CrossRef]
  28. Wang C, Hsu HCK, Bonem EM, et al. Need satisfaction and need dissatisfaction: A comparative study of online and face-to-face learning contexts. Comput Hum Behav. 2019;95:114–125. [CrossRef]
  29. Chan CKY, Hu W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ. 2023;20(1):1–18. [CrossRef]
  30. Yilmaz R, Yilmaz FGK. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Comput Educ Artif Intell. 2023;4:100147. [CrossRef]
  31. Hu YH. Implementing generative AI chatbots as a decision aid for enhanced values clarification exercises in online business ethics education. Educ Technol Soc. 2024;27(3):356–373. https://www.jstor.org/stable/48787035.
  32. Bettayeb AM, Alkholy SO, Alshurideh MT. Exploring the impact of ChatGPT: Conversational AI in education. Front Educ. 2024;9. [CrossRef]
  33. Sok S, Heng K. Opportunities, challenges, and strategies for using ChatGPT in higher education: A literature review. J Digit Educ Technol. 2024;4(1):ep2401. [CrossRef]
  34. Zhang X, Zhang X, Liu H. Reflections on enhancing higher education classroom effectiveness through the introduction of large language models. J Mod Educ Res. 2024. [CrossRef]
  35. Vallerand RJ, Pelletier LG, Blais MR, et al. Academic motivation scale (AMS-C 28), college (CEGEP) version. Educ Psychol Meas. 1993;52(53). https://www.lrcs.uqam.ca/wp-content/uploads/2017/08/emecegep_en.pdf.
  36. R Core Team. R: A language and environment for statistical computing (Version 4.5.0) [Computer software]. R Foundation for Statistical Computing; 2024. https://www.R-project.org/.
  37. Harder JA, Klehm C, Lang ASID, Locke L, Turtova A, Valderrama E, et al. Dataset and survey questions for Not Just Answers: ChatGPT’s Expanding Role in Supporting Student Motivation [dataset]. 2025. figshare. [CrossRef]
Figure 1. Mean motivation scores for each category in 2023 and 2025. Motivation scores are based on a 4-point Likert scale (1 = Strongly Disagree, 4 = Strongly Agree).
Figure 2. Principal component analysis (PCA) plot of K-means clusters based on responses to Q1–Q18, with shapes indicating academic year. Ellipses represent the 95% confidence regions.
Table 1. Descriptive statistics by year for each motivation type.
Motivation type   Mean (2023)   SD (2023)   Mean (2025)   SD (2025)
Accomplishment    2.65          0.73        2.90          0.59
Desire to Know    2.79          0.61        3.02          0.58
Stimulation       2.56          0.75        2.78          0.70
Rewards           2.54          0.56        2.92          0.62
Value             2.39          0.85        2.75          0.81
Amotivation       2.31          0.82        2.63          0.81
Table 2. Independent-samples t-tests and effect sizes by motivation type.
Motivation type   Mean (2023)   Mean (2025)   t-value   df       p-value   Cohen's d   Effect size
Accomplishment    2.65          2.90          2.29      129.46   .023      0.39        Small
Desire to Know    2.79          3.02          2.27      137.00   .025      0.38        Small
Stimulation       2.56          2.78          1.81      136.40   .073      0.31        Small
Rewards           2.54          2.92          3.75      138.94   <.001     0.63        Medium
Value             2.39          2.75          2.59      137.22   .011      0.44        Small
Amotivation       2.31          2.63          2.34      137.95   .021      0.39        Small
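The effect sizes in Table 2 can be reproduced from the summary statistics in Table 1 alone. The sketch below (a minimal illustration, not the study's R pipeline) computes Cohen's d with a pooled standard deviation; because per-cohort group sizes are not reported in the table, it assumes equal group sizes, and the function name `cohens_d` is our own.

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d from summary statistics, pooling the SDs under an
    equal-group-size assumption (per-cohort ns are not in the table)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean2 - mean1) / pooled_sd

# Accomplishment row: 2023 mean 2.65 (SD 0.73), 2025 mean 2.90 (SD 0.59)
d = cohens_d(2.65, 0.73, 2.90, 0.59)  # ≈ 0.38 under equal-n pooling
```

The equal-n value (≈0.38) sits within rounding of the reported 0.39, which was presumably computed from the actual, unequal cohort sizes.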
Table 3. Year effects from linear mixed-effects models for each motivation type.
Motivation type   Estimate (b)   Std. Error   df   t-value   p-value
Accomplishment    0.27           0.10         87   2.69      .009
Desire to Know    0.23           0.09         97   2.43      .017
Stimulation       0.22           0.10         69   2.14      .036
Rewards           0.35           0.09         92   3.80      <.001
Value             0.34           0.12         83   2.75      .007
Amotivation       0.33           0.12         74   2.79      .007
Table 4. Mean motivation scores and mean GPA by cluster.
Cluster   Accomplishment   Desire to Know   Stimulation   Rewards   Value   Amotivation   GPA
1         1.56             1.93             1.31          1.96      1.04    1.09          3.38
2         3.13             3.29             3.20          3.17      3.23    3.09          3.24
3         2.68             2.68             2.39          2.43      2.18    2.11          3.43
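The cluster assignments behind Table 4 and Figure 2 come from K-means over the Q1–Q18 responses. As a rough illustration of that assignment step only, here is a minimal stdlib implementation of Lloyd's algorithm; the study's analysis was run in R [36], and the synthetic "Likert" data below merely echo the low/high/middle profiles in Table 4.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's-algorithm K-means over equal-length tuples of numbers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data

    def nearest(p):
        # Index of the center closest to p (squared Euclidean distance)
        return min(range(k), key=lambda c: sum((a - b) ** 2
                                               for a, b in zip(p, centers[c])))

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p)].append(p)
        new_centers = [
            tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
        if new_centers == centers:  # converged: assignments are stable
            break
        centers = new_centers
    return centers, [nearest(p) for p in points]

# Synthetic responses: three bands of 20 respondents x 18 items, values 1-4
rng = random.Random(1)
data = [tuple(rng.choice(band) for _ in range(18))
        for band in ([1, 2], [3, 4], [2, 3]) for _ in range(20)]
centers, labels = kmeans(data, k=3)
```

In the actual analysis the resulting labels would then be projected onto the first two principal components for the Figure 2 display.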
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.