Submitted: 10 June 2025
Posted: 10 June 2025
Abstract
Keywords:
1. Introduction
1.1. Introduction
1.2. Background of the Study
1.3. Problem Statement
1.4. Research Questions
- To what extent does students’ reliance on ChatGPT for academic tasks impact their ability to engage in independent analysis, synthesis, and evaluation of information?
- In what ways does unguided use of ChatGPT contribute to a reduction in students’ propensity for deep processing, independent reasoning, and source evaluation?
- How can explicit instructional strategies and metacognitive awareness training influence the integration of ChatGPT to positively augment students’ critical thinking skills, such as creative idea generation and the identification of logical fallacies?
- What pedagogical frameworks are necessary to promote responsible AI integration that cultivates a balanced approach leveraging AI’s efficiencies while rigorously safeguarding and enhancing critical thinking?
1.5. Significance of the Study
2. Literature Review
2.1. Introduction
2.2. Theoretical Frameworks of Critical Thinking
- Bloom’s Taxonomy of Educational Objectives (Revised): This hierarchical classification system identifies six levels of cognitive processes, moving from lower-order thinking (remembering, understanding) to higher-order thinking (applying, analyzing, evaluating, creating). Critical thinking predominantly engages the higher-order levels, which are essential for complex problem-solving and knowledge construction (Anderson & Krathwohl, 2001).
- The Paul-Elder Framework: This framework emphasizes the "elements of thought" (e.g., purpose, question at issue, information, inferences, concepts, assumptions, implications, point of view) and "universal intellectual standards" (e.g., clarity, accuracy, precision, relevance, depth, breadth, logic, significance, fairness) that critical thinkers apply to their reasoning (Paul & Elder, 2008). This model underscores the importance of explicit instruction in intellectual standards and metacognitive awareness in fostering robust critical thinking abilities. It posits that critical thinking is a disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.
- Dual-Process Theory of Reasoning: This theory, prominent in cognitive psychology, distinguishes between two systems of thinking: System 1 (intuitive, fast, automatic, emotional) and System 2 (analytical, slow, effortful, logical) (Kahneman, 2011). Critical thinking largely relies on System 2 processes, which require deliberate engagement and conscious effort to override initial biases or superficial understandings. The challenge posed by AI tools may relate to their potential to encourage System 1 thinking if students passively accept generated outputs without engaging in rigorous System 2 evaluation.
2.3. Technology and Cognitive Skill Development in Education
- Benefits of Technology in Education: Technology can enhance learning by providing access to vast amounts of information, facilitating collaborative learning, offering personalized instruction, and providing interactive learning environments. Digital tools can automate rote tasks, freeing up cognitive resources for higher-order thinking (Prensky, 2001). Simulations, virtual labs, and data visualization tools can also create rich learning experiences that might otherwise be inaccessible.
- Challenges and Concerns: Concerns have long been raised about technology’s potential to foster passive consumption, superficial information processing, and a reduction in attention spans (Carr, 2010). The ease of access to information, for instance, can lead to a ‘Google effect’ where individuals prioritize finding information quickly over deeply understanding or memorizing it (Sparrow et al., 2011). Furthermore, the proliferation of misinformation online underscores the critical need for robust digital literacy and source evaluation skills, which are integral components of critical thinking. Previous research has indicated that while technology can support learning, its efficacy is highly dependent on instructional design and pedagogical approaches (Kirschner & De Bruyckere, 2017).
2.4. Emergence and Capabilities of Large Language Models (LLMs)
- Technological Underpinnings: LLMs leverage deep learning architectures, specifically transformer networks, to process sequential data and identify complex patterns in language (Vaswani et al., 2017); an illustrative sketch of the attention mechanism at the core of this architecture appears at the end of this subsection. Their ability to generate contextually relevant and grammatically correct prose has made them highly versatile tools for tasks ranging from content creation and programming assistance to customer service and scientific research.
- Applications in Education: In educational contexts, LLMs can be utilized for various purposes:
  ○ Information Retrieval and Summarization: Quickly extracting and synthesizing information from vast sources.
  ○ Drafting and Brainstorming: Assisting students in initiating writing assignments, generating ideas, or structuring arguments.
  ○ Personalized Tutoring: Providing explanations, answering questions, and offering feedback (though often generic).
  ○ Language Practice: Supporting language learners with grammar and vocabulary exercises.
- Initial Observations and Concerns: While the potential benefits are evident, the rapid adoption of LLMs has also raised significant concerns among educators. Issues such as academic integrity, plagiarism, and the potential for students to outsource cognitive labor are at the forefront of these discussions (Susnjak, 2022). Educators are grappling with how to assess learning outcomes when AI tools can produce high-quality work with minimal human effort. There is an urgent need to understand the deeper implications of these tools on the very cognitive skills they are intended to support or, perhaps, inadvertently undermine.
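The attention-mechanism sketch referenced in the bullet on technological underpinnings is given below. It is a minimal, single-head scaled dot-product attention implementation in Python/NumPy, intended only to make the cited transformer operation (Vaswani et al., 2017) concrete; the array sizes, random inputs, and single-head simplification are illustrative assumptions and not a description of ChatGPT's actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention (Vaswani et al., 2017).

    Q, K: arrays of shape (sequence_length, d_k); V: (sequence_length, d_v).
    Returns a weighted combination of V, where the weights reflect how strongly
    each query position attends to each key position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over key positions
    return weights @ V                                   # context-aware representation

# Toy example: 4 token positions with 8-dimensional embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)       # (4, 8)
```

In a full transformer, many such attention heads operate in parallel over learned projections of the input, which is what allows LLMs to model long-range dependencies in text.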
2.5. Impact of LLMs on Critical Thinking: Emerging Perspectives
- Potential Hindrance to Critical Thinking:
  ○ Reduced Cognitive Effort: If students use LLMs to generate answers or complete assignments without fully understanding the underlying concepts, they may bypass the necessary cognitive struggle that leads to deeper learning and critical thought (Lodge & Kypri, 2023).
  ○ Passive Consumption: Reliance on AI-generated content can lead to passive consumption of information rather than active engagement, analysis, and evaluation. Students may not question the validity, accuracy, or bias of AI outputs.
  ○ Diminished Problem-Solving: Over-reliance might prevent students from developing their own problem-solving strategies, as the AI can provide immediate solutions, circumventing the iterative process of trial and error, analysis, and synthesis.
  ○ "Hallucinations" and Misinformation: LLMs can generate plausible but incorrect or fabricated information ("hallucinations"). Students who lack critical evaluation skills may unknowingly incorporate such inaccuracies into their work, highlighting a severe deficit in source credibility assessment (Bommasani et al., 2021).
- Potential Augmentation of Critical Thinking:
  ○ Scaffolding Complex Tasks: LLMs can act as a scaffold, helping students break down complex problems, generate initial ideas, or identify areas for further research. This can free up cognitive resources for higher-level analysis.
  ○ Idea Generation and Diversification: AI can stimulate creativity by providing diverse perspectives or alternative solutions, which students can then critically evaluate and refine.
  ○ Identifying Logical Fallacies: With proper prompting, LLMs could potentially be used to identify logical inconsistencies or weak arguments in a text, serving as a tool for students to refine their own critical analysis.
  ○ Personalized Learning Pathways: AI can adapt to individual student needs, potentially offering tailored challenges that stimulate critical thinking relevant to their learning style.
Current literature emphasizes that the outcome—whether LLMs hinder or augment critical thinking—likely depends on how these tools are integrated into the learning process. It underscores the critical role of pedagogical design in shaping students’ interactions with AI.
2.6. Gaps in the Literature
- Empirical Evidence on Direct Impact: There is a dearth of large-scale, empirical studies that rigorously measure the direct, causal impact of sustained LLM reliance on specific critical thinking sub-skills (e.g., inferential reasoning, argument analysis, bias detection) over time in diverse student populations.
- Longitudinal Studies: Most existing observations are anecdotal or based on short-term exposures. Longitudinal studies are needed to understand the cumulative effects of LLM use on cognitive development and critical thinking proficiencies throughout an academic career.
- Specific Pedagogical Interventions: Research is needed to identify and test specific pedagogical interventions and instructional designs that effectively leverage LLMs to enhance critical thinking, rather than merely using them as efficiency tools. This includes developing effective AI literacy and metacognitive strategies for students.
- Differential Impacts: There is a need to explore if the impact of LLMs varies across different academic disciplines, levels of education (e.g., undergraduate vs. graduate), or student critical thinking predispositions.
- Qualitative Understanding of Student Interaction: More in-depth qualitative research is required to understand how students are using LLMs, their motivations for reliance, and their perceptions of AI’s influence on their learning processes and intellectual development.
This study aims to contribute to filling these identified gaps by providing a focused investigation into the relationship between ChatGPT reliance and the development of student critical thinking skills, offering both quantitative and qualitative insights into this complex educational phenomenon.
2.7. Conclusion
3. Methodology
3.1. Introduction
3.2. Research Design
- Quantitative Component: The quantitative component will primarily employ a quasi-experimental design. This approach is chosen because random assignment of participants to experimental and control groups may not be ethically or practically feasible in a real educational setting. Instead, pre-existing student groups will be utilized, with interventions carefully designed and implemented. This component will measure changes in critical thinking scores over time using standardized assessments. It aims to determine the extent to which varying levels of ChatGPT reliance correlate with or causally influence critical thinking proficiencies.
- Qualitative Component: The qualitative component will utilize a descriptive phenomenological approach. This approach is designed to explore and describe the lived experiences and perceptions of students regarding their use of ChatGPT and its perceived impact on their learning processes and cognitive skills. It will provide rich, in-depth insights into how students interact with the tool, their motivations, challenges, and specific instances where they feel their critical thinking was either enhanced or hindered.
The integration of these two approaches will allow for both the generalizability of quantitative findings and the nuanced understanding provided by qualitative narratives, thereby strengthening the validity and depth of the study’s conclusions.
3.3. Research Participants
- Quantitative Participants: A sample of approximately 200 undergraduate students will be recruited from two distinct groups (an illustrative power-analysis sketch for this sample size appears at the end of this subsection):
  ○ Intervention Group: Students enrolled in courses where structured, guided integration of ChatGPT is encouraged, coupled with explicit instruction on critical evaluation of AI-generated content.
  ○ Control Group: Students enrolled in comparable courses where ChatGPT use is either restricted or not explicitly incorporated into pedagogical strategies, representing more traditional learning environments.
  ○ Participants will be selected based on their willingness to participate and course enrollment, ensuring comparable academic levels and disciplines where possible.
- Qualitative Participants: A smaller, purposeful sample of 20-30 students will be selected from the quantitative participants, representing diverse levels of ChatGPT usage and perceived impact (both positive and negative). This selection will aim for maximum variation to capture a wide range of experiences.
Informed consent will be obtained from all participants, clearly outlining the study’s purpose, procedures, confidentiality, and their right to withdraw at any time.
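The power-analysis sketch referenced above is given here. It is a rough, hedged illustration of how the adequacy of roughly 100 participants per group might be checked; the assumed medium effect size (Cohen's d = 0.4), two-sided alpha of .05, and target power of .80 are hypothetical values chosen for illustration, not parameters specified by the study.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions only: a medium standardized effect (d = 0.4),
# two-sided alpha = .05, and a conventional power target of .80.
analysis = TTestIndPower()

n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8,
                                    ratio=1.0, alternative='two-sided')
print(f"Required sample per group: {n_per_group:.0f}")   # roughly 100 per group

# Conversely, the power achieved with about 100 participants per group:
achieved = analysis.solve_power(effect_size=0.4, nobs1=100, alpha=0.05,
                                 ratio=1.0, alternative='two-sided')
print(f"Power with 100 per group: {achieved:.2f}")
```

If smaller effects are anticipated, or if attrition over the 16-week study period is expected, the recruited sample would need to be correspondingly larger.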
3.4. Data Collection Instruments
3.4.1. Quantitative Instruments:
  ○ Critical Thinking Assessment (CTA): A validated standardized critical thinking test (e.g., California Critical Thinking Skills Test [CCTST], Watson-Glaser Critical Thinking Appraisal [WGCTA]) will be administered at the beginning (pre-test) and end (post-test) of the intervention period. These instruments measure core critical thinking skills such as analysis, inference, evaluation, deduction, and induction.
  ○ ChatGPT Usage Survey: A self-report questionnaire will be administered to both groups to ascertain the frequency, purpose, and nature of their ChatGPT use for academic tasks. This survey will also gauge perceptions of reliance on the tool.
3.4.2. Qualitative Instruments:
  ○ Semi-structured Interviews: In-depth interviews will be conducted with the qualitative participants. The interview protocol will include open-ended questions designed to explore students’ experiences with ChatGPT, their perceptions of its influence on their cognitive processes (e.g., how they approach research, problem-solving, and writing tasks with and without AI), and their strategies for evaluating AI-generated content.
  ○ Reflective Journals/Prompts: Students in the intervention group may be asked to maintain reflective journals or respond to specific prompts throughout the study period, documenting their experiences, challenges, and insights regarding AI use and critical thinking development. This will provide ongoing, rich qualitative data.
3.5. Data Collection Procedures
- Phase 1: Pre-Intervention (Week 1-2):
  ○ Obtain ethical approval and informed consent from participants.
  ○ Administer the pre-test Critical Thinking Assessment to all quantitative participants.
  ○ Administer the initial ChatGPT Usage Survey.
- Phase 2: Intervention Period (Week 3-14):
  ○ Intervention Group: Courses will integrate ChatGPT into assignments with explicit instructions on how to use it critically, including tasks requiring AI output analysis, source verification, argument reconstruction, and ethical considerations. Metacognitive prompts will encourage reflection on their cognitive processes.
  ○ Control Group: Courses will proceed with standard pedagogical methods, with no explicit integration or restriction of ChatGPT, allowing for natural usage patterns.
  ○ Throughout this phase, students in the qualitative sample will engage in ongoing reflection (e.g., via journals) in preparation for interviews.
- Phase 3: Post-Intervention (Week 15-16):
  ○ Administer the post-test Critical Thinking Assessment to all quantitative participants.
  ○ Administer the final ChatGPT Usage Survey.
  ○ Conduct semi-structured interviews with the qualitative participants.
3.6. Data Analysis
3.6.1. Quantitative Data Analysis:
  ○ Descriptive statistics (means, standard deviations, frequencies) will be used to summarize demographic information and survey responses.
  ○ Inferential statistics, specifically Analysis of Covariance (ANCOVA), will be used to compare the post-test critical thinking scores between the intervention and control groups, controlling for pre-test scores and other relevant covariates (e.g., prior academic performance). This will assess the impact of the intervention.
  ○ Correlation analyses will be conducted between ChatGPT usage patterns (from surveys) and changes in critical thinking scores to explore relationships between reliance levels and cognitive outcomes.
  ○ Statistical software (e.g., R, SPSS) will be used for all quantitative analyses; an illustrative sketch of these analyses follows this list.
3.6.2. Qualitative Data Analysis:
  ○ Interviews will be transcribed verbatim and, together with reflective journal entries, analyzed using thematic analysis (Braun & Clarke, 2006).
  ○ The analysis will involve an iterative process of familiarization with the data, generation of initial codes, searching for themes, reviewing themes, defining and naming themes, and producing the report.
  ○ Emergent themes will illuminate students’ perceptions, experiences, and strategies related to ChatGPT use and its perceived influence on critical thinking.
  ○ NVivo or similar qualitative data analysis software may be utilized to assist with coding and theme development; a brief illustrative sketch of summarizing code frequencies follows this list.
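The brief sketch mentioned above is intended only as a small supplement to the interpretive thematic analysis, not a replacement for it. It assumes coded excerpts have been collected into a simple, hypothetical CSV with `participant`, `theme`, and `code` columns; this format is invented for illustration and is not a specification of NVivo's export.

```python
import pandas as pd

# Hypothetical table of coded interview/journal excerpts:
# one row per coded excerpt, with the participant, theme, and code applied.
coded = pd.read_csv("coded_excerpts.csv")    # placeholder file name

# Frequency of each code within each theme, and the number of participants
# contributing to each theme (i.e., how broadly a theme recurs across the sample).
code_counts = coded.groupby(["theme", "code"]).size().rename("n_excerpts")
theme_reach = coded.groupby("theme")["participant"].nunique().rename("n_participants")

print(code_counts.sort_values(ascending=False).head(10))
print(theme_reach.sort_values(ascending=False))
```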
3.6.3. Mixed-Methods Integration:
  ○ The quantitative and qualitative findings will be integrated during the interpretation phase (Creswell & Plano Clark, 2017).
  ○ The qualitative data will help explain the "how" and "why" behind the quantitative results, providing deeper context and nuance to statistical correlations or group differences. For instance, if quantitative analysis reveals a negative impact, qualitative data might explain why students over-rely or how they perceive AI hindering their thought processes. Conversely, if positive impacts are observed, qualitative data can reveal the specific pedagogical strategies or student practices that fostered this enhancement.
3.7. Ethical Considerations
- Informed Consent: All participants will receive a detailed informed consent form outlining the study’s purpose, procedures, potential risks and benefits, confidentiality measures, and their right to voluntary participation and withdrawal without penalty.
- Confidentiality and Anonymity: All collected data will be anonymized to the greatest extent possible. Participant identities will be protected using pseudonyms or numerical codes. Data will be stored securely on encrypted drives, accessible only to the research team.
- Minimizing Harm: The research design will ensure that participants are not subjected to undue stress or academic disadvantage. The intervention group’s curriculum will be designed to enhance learning, not detract from it.
- Transparency: All methods and findings will be reported transparently and accurately, acknowledging any limitations or biases.
3.8. Limitations of the Methodology
- Quasi-experimental Design: The inability to randomly assign participants may introduce confounding variables that could affect the generalizability of quantitative findings. Efforts will be made to control for these statistically.
- Self-report Bias: Survey and interview data rely on self-reported information, which may be subject to social desirability bias or inaccurate recall.
- Generative AI Evolution: The capabilities of LLMs are rapidly evolving. Findings based on current versions of ChatGPT may not be entirely applicable to future iterations, necessitating ongoing research.
- Scope: The study’s focus on a specific university context and undergraduate population may limit the direct transferability of findings to other educational levels or institutions.
3.9. Conclusion
4. Expected Results
4.1. Introduction
4.2. Expected Quantitative Findings
- Critical Thinking Assessment (CTA) Scores:
  ○ It is hypothesized that the Intervention Group, which receives structured guidance on critical ChatGPT integration, will demonstrate a statistically significant improvement in post-test CTA scores compared to their pre-test scores. This improvement is expected to be more pronounced than any change observed in the Control Group.
  ○ Conversely, for the Control Group, where ChatGPT use is less explicitly managed, a smaller, potentially non-significant improvement, or even a slight decline, in CTA scores might be observed. This would suggest that unguided reliance on AI may not inherently foster critical thinking development, and could potentially hinder it if students outsource cognitive effort without active evaluation.
  ○ Specific sub-scores within the CTA (e.g., analysis, inference, evaluation) may show differential impacts, indicating which particular critical thinking facets are most affected by AI reliance and guided interventions. For instance, the ability to evaluate information and infer logically might see a greater positive shift in the intervention group.
- ChatGPT Usage Patterns and Critical Thinking Correlation:
  ○ Correlation analyses are expected to reveal a complex relationship between the frequency and nature of ChatGPT usage and changes in critical thinking scores.
  ○ In the Control Group, a negative correlation might be observed between high levels of unguided ChatGPT reliance (e.g., using it for direct answer generation without subsequent verification) and pre-to-post improvements in critical thinking scores. This would support the concern that excessive, passive use can impede cognitive development.
  ○ In the Intervention Group, a positive correlation is anticipated between engagement with structured, critically-oriented ChatGPT tasks (e.g., using AI for brainstorming then critically evaluating its outputs, or identifying AI-generated biases) and gains in critical thinking skills. This would suggest that when appropriately scaffolded, AI can serve as a catalyst for cognitive growth.
- Impact of Covariates: Analysis of Covariance (ANCOVA) is expected to confirm that observed differences in post-test CTA scores between groups are attributable to the intervention and AI reliance patterns, even when controlling for baseline critical thinking abilities and prior academic performance.
4.3. Expected Qualitative Findings
- Perceptions of ChatGPT’s Influence:
  ○ Students are expected to articulate a dual perception of ChatGPT. Many will likely acknowledge its utility for efficiency, speed, and overcoming initial roadblocks in academic tasks (e.g., writer’s block, quick information retrieval).
  ○ However, a significant portion, particularly those in the intervention group, are expected to voice an increased awareness of the potential pitfalls of over-reliance, such as reduced mental effort, superficial understanding, and the risk of accepting inaccurate or biased information generated by the AI.
  ○ Students in the control group might express less awareness of these cognitive risks, potentially indicating that unguided use fosters a less critical stance towards AI outputs.
- Cognitive Processes and AI Interaction:
  ○ Qualitative data is expected to reveal specific ways students interact with ChatGPT. For instance, some may describe using it for rote tasks, while others will detail strategies for employing it as a thinking partner for complex analysis, argument construction, or idea generation, followed by meticulous review and refinement.
  ○ Themes related to "cognitive outsourcing" versus "cognitive augmentation" are anticipated to emerge. Students who describe feeling less engaged or challenged when using AI for entire tasks would exemplify outsourcing, while those who detail using AI to spark their own critical thought processes would exemplify augmentation.
- Strategies for Critical Engagement:
  ○ In the Intervention Group, participants are expected to describe specific strategies for critically evaluating AI-generated content, including cross-referencing information, identifying logical inconsistencies, questioning AI’s underlying assumptions, and refining prompts to elicit more nuanced or specific responses. This would highlight the effectiveness of explicit instructional interventions.
  ○ Conversely, students in the Control Group might report less systematic approaches to verifying AI outputs, relying more on superficial checks or assuming AI accuracy.
- Ethical Considerations and Academic Integrity:
  ○ Discussions around academic integrity and the ethical implications of AI use are also expected to surface. Students may express a growing understanding of the line between AI assistance and intellectual dishonesty, particularly as their metacognitive awareness is raised through the intervention.
4.4. Mixed-Methods Integration and Implications
- Explaining Variance: Qualitative narratives are anticipated to illuminate why certain quantitative correlations or group differences exist. For example, if the intervention group shows greater critical thinking gains, qualitative data will likely reveal the specific pedagogical strategies and student practices that contributed to this outcome. Conversely, if control group students demonstrate limited growth, their interview responses may detail the patterns of AI use that bypassed deeper cognitive engagement.
- Confirming and Nuancing: Qualitative insights will either confirm or nuance the statistical relationships observed. For instance, a quantitative finding of reduced critical thinking in high-reliance control group students would be supported by qualitative accounts of feeling "less challenged" or "not having to think as much."
- Informing Pedagogical Recommendations: The combined findings are expected to robustly inform evidence-based pedagogical recommendations for integrating AI tools into education in a manner that actively fosters, rather than hinders, critical thinking skills. This will include practical strategies for educators and guidelines for students on responsible and effective AI utilization.
4.5. Conclusion
5. Discussion, Conclusion, and Recommendations
5.1. Introduction
5.2. Discussion of Findings
5.2.1. Interpretation of Quantitative Results
5.2.2. Interpretation of Qualitative Results
5.2.3. Mixed-Methods Synthesis
5.3. Conclusion
5.4. Limitations of the Study
5.5. Recommendations
5.5.1. Recommendations for Educators
- Integrate AI Critically: Move beyond outright banning or uncritical acceptance. Design assignments that require students to use ChatGPT strategically, for instance, for brainstorming, drafting, or identifying counter-arguments, but critically evaluate and refine its output.
- Teach AI Literacy and Metacognition: Explicitly instruct students on how LLMs work, their limitations (e.g., "hallucinations," biases), and ethical considerations. Foster metacognitive awareness by prompting students to reflect on how they are thinking when using AI and what cognitive work they are outsourcing vs. performing themselves.
- Redesign Assessments: Develop assessments that cannot be easily completed by AI alone, focusing on higher-order thinking skills such as synthesis, original argumentation, solving complex, ill-defined problems, and demonstrating deep understanding through unique application.
- Emphasize Source Verification: Continuously reinforce the importance of verifying all information, regardless of its source, including AI-generated content.
- Promote Human-AI Collaboration: Position AI as a collaborative tool rather than a replacement for human intellect, encouraging students to leverage its strengths while developing their own unique cognitive capabilities.
5.5.2. Recommendations for Students
- Use AI Strategically: Employ ChatGPT as a tool to augment your learning, not to bypass it. Use it for initial idea generation, summarizing complex texts, or practicing concepts, but always follow up with your own critical analysis and independent thought.
- Verify and Validate: Never uncritically accept AI-generated information. Always cross-reference facts, evaluate arguments, and identify potential biases or inaccuracies.
- Understand Limitations: Recognize that AI tools are not infallible and do not possess true understanding or consciousness. Their outputs are based on patterns in data, not genuine reasoning.
- Reflect on Your Learning: Actively consider how using AI impacts your cognitive processes. Are you thinking more deeply or less? Are you learning effectively or just completing tasks?
5.5.3. Recommendations for Educational Institutions and Policymakers
- Develop AI Policies: Establish clear, comprehensive, and adaptable policies regarding the ethical and pedagogical use of AI tools across the institution, involving faculty, students, and administrators in their formulation.
- Invest in Faculty Development: Provide ongoing professional development for educators on integrating AI effectively and teaching AI literacy to students.
- Curriculum Integration: Review and revise curricula to embed AI literacy and critical thinking development across disciplines, ensuring students are prepared for an AI-driven future.
- Support Research: Fund and support further empirical research into the long-term impacts of AI on learning, cognition, and educational equity.
5.6. Future Research Directions
- Conducting longitudinal studies to track the long-term impact of AI reliance on critical thinking development across a student’s academic career.
- Exploring the differential effects of AI on critical thinking across various disciplines (e.g., STEM vs. Humanities) and diverse student demographics.
- Developing and testing more sophisticated AI literacy curricula and pedagogical interventions.
- Investigating the impact of multimodal AI models (e.g., those integrating text, image, and audio) on cognitive processes.
- Examining the role of AI in fostering other essential 21st-century skills, such as creativity, collaboration, and ethical reasoning.
6. Appendices and References
6.1. Introduction
6.2. Appendices
6.2.1. Appendix A: Critical Thinking Assessment (CTA) Sample Questions
6.2.2. Appendix B: ChatGPT Usage Survey Instrument
- Demographic Information: Basic participant characteristics (e.g., age, academic year, major).
- Specific Types of Academic Tasks: Items querying the specific academic tasks for which ChatGPT is used (e.g., brainstorming ideas, drafting outlines, summarizing texts, generating code snippets, answering specific factual questions, refining grammar, translating concepts).
- Perceived Reliance Level: Likert scale items assessing the degree to which students perceive their reliance on ChatGPT for academic tasks (e.g., "I rely heavily on ChatGPT for completing my assignments," "ChatGPT is an indispensable tool for my studies").
- Self-Assessment of Impact: Questions prompting students to self-assess how ChatGPT impacts their learning process, cognitive effort, and perceived development of critical thinking skills.
- Likert Scale Items: Statements assessing agreement with various propositions about AI’s helpfulness, its potential pitfalls, and its influence on their academic integrity and learning strategies.
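As a hedged illustration of how the Likert-type reliance items described above might be combined into a single perceived-reliance score and checked for internal consistency, a minimal sketch follows. The item names, the number of items, the synthetic responses, and the use of Cronbach's alpha as a reliability check are assumptions for illustration only, not part of the instrument as specified.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: internal-consistency reliability of a set of Likert items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic responses from 200 students to five hypothetical 5-point reliance items
# (1 = strongly disagree ... 5 = strongly agree). Real responses would replace these;
# purely random answers will, as expected, show near-zero reliability.
rng = np.random.default_rng(1)
item_cols = [f"reliance_item_{i}" for i in range(1, 6)]
survey = pd.DataFrame(rng.integers(1, 6, size=(200, 5)), columns=item_cols)

# Composite perceived-reliance score: the mean of the five items per respondent.
survey["reliance_score"] = survey[item_cols].mean(axis=1)

print(survey["reliance_score"].describe())
print(f"Cronbach's alpha for the reliance scale: {cronbach_alpha(survey[item_cols]):.2f}")
```

In practice, a composite score of this kind is the sort of usage/reliance measure that could feed the correlation analyses outlined in Section 3.6.1.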
6.2.3. Appendix C: Semi-structured Interview Protocol
- "Could you describe a typical instance of how you integrate ChatGPT into your academic workflow for a research paper or a complex assignment?"
- "In what specific ways do you feel ChatGPT influences your ability to think deeply about a topic or generate original insights?"
- "When using ChatGPT for information, what steps do you take to verify the accuracy, reliability, or potential biases of the information or arguments it generates?"
- "Can you recall a specific instance where ChatGPT either significantly helped or noticeably hindered your critical thinking process? Please elaborate."
- "How do you discern the line between using ChatGPT as a helpful tool and over-relying on it to the detriment of your own learning?"
6.2.4. Appendix D: Reflective Journal Prompts (Intervention Group)
- "After using ChatGPT for [specific task, e.g., generating an essay outline or analyzing a case study], what cognitive steps did you consciously take to evaluate its output before incorporating it into your work? What did you add or change, and why?"
- "Describe a time you felt intellectually challenged by ChatGPT’s output (e.g., it provided conflicting information, a weak argument, or a ’hallucination’). How did you resolve that challenge, and what specific critical thinking skills did you employ in the process?"
- "How has your approach to problem-solving, research, or writing changed since you started using ChatGPT in a guided, critically-aware manner compared to before?"
- "Reflect on an instance where ChatGPT helped you think more critically about a topic. What about its response prompted you to engage in deeper analysis or evaluation?"
References
- Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of educational objectives. Longman.
- Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
- Wu, Y. (2023). Integrating generative AI in education: how ChatGPT brings challenges for future learning and teaching. Journal of Advanced Research in Education, 2(4), 6-10.
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.
- Carr, N. (2010). The Shallows: What the Internet is Doing to Our Brains. W. W. Norton & Company.
- Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). SAGE Publications.
- Ennis, R. H. (1987). A taxonomy of critical thinking dispositions and abilities. In J. B. Baron & R. J. Sternberg (Eds.), Teaching thinking skills: Theory and practice (pp. 9-26). W. H. Freeman.
- Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction (Executive Summary). American Philosophical Association.
- Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
- Kirschner, P. A., & De Bruyckere, M. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, 135-142.
- Lodge, J. M., & Kypri, K. (2023). ChatGPT: A new threat to academic integrity? Medical Education, 57(3), 296-297.
- Paul, R., & Elder, L. (2008). The miniature guide to critical thinking concepts & tools. Foundation for Critical Thinking.
- Prensky, M. (2001). Digital Natives, Digital Immigrants. On the Horizon, 9(5), 1-6.
- Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776-778.
- Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv preprint arXiv:2212.09292.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).