Preprint Article (this version is not peer-reviewed)

AI-Enhanced Teaching: A Strategic Framework for Schools and Colleges

Submitted: 11 March 2026
Posted: 13 March 2026


Abstract
Artificial intelligence is rapidly transforming education. Tools such as modern AI language models can now generate essays, explain complex concepts, create lesson plans, produce quizzes, and summarize entire textbooks within seconds. For many teachers and institutions, this raises an important question: what is the role of a human educator in an age when machines can instantly provide information? This paper presents an accessible framework that helps schools and colleges integrate artificial intelligence into teaching while preserving the essential human elements of education. Rather than viewing AI as a replacement for teachers, the framework positions AI as a powerful assistant that can support lesson preparation, personalized feedback, and adaptive learning resources. By automating repetitive tasks such as content generation, grading support, and material organization, AI allows educators to focus on what machines cannot easily replicate: mentorship, creativity, ethical reasoning, critical thinking, and inspiration.

The framework outlines practical strategies for using AI responsibly in classrooms, including guidelines for AI-assisted lesson planning, student engagement techniques, and safeguards to maintain academic integrity. It also discusses how institutions can prepare both teachers and students for an AI-augmented learning environment by promoting digital literacy, responsible tool usage, and critical evaluation of AI-generated information.

Ultimately, the goal of AI-enhanced teaching is not to replace educators, but to empower them. When used thoughtfully, artificial intelligence can reduce administrative workload, expand access to high-quality learning resources, and create more personalized educational experiences. In this vision, AI becomes a supportive partner, while teachers remain the guiding force who cultivate curiosity, wisdom, and human understanding in the classroom.

1. Introduction: The AI Revolution in Education

Empowering educators to thrive, not just survive, in the age of GPTs and artificial intelligence. Artificial intelligence has entered the classroom, not as a distant futuristic concept, but as a present, transformative reality. Large language models such as GPT can now draft essays, solve differential equations, generate code, summarise textbooks, and even produce lesson plans. For educators, this raises a profound question: What is the role of a teacher when a machine can deliver information instantly? [1]
The answer is both reassuring and challenging. AI excels at pattern recognition, retrieval, and generation [2]. It does not excel at empathy, ethical judgment, cultural sensitivity, creative vision, or the ability to inspire a room of restless teenagers to care about photosynthesis [3]. These remain deeply human capabilities—and they are precisely what great teaching is about.
Consider the scale of the transformation. A single AI model can now generate, in seconds, what once took hours: a complete lesson plan with differentiated activities, a set of twenty exam questions spanning Bloom’s taxonomy [4,5], a personalised feedback letter for every student in a class of forty, or a literature review covering fifty recent papers. The question is not whether AI will change education—it already has. The question is whether educators will lead this change or be swept along by it.
This article presents two complementary frameworks—one for schools (elementary through higher secondary) and one for colleges (across six major disciplines: Arts, Science, Engineering, Agriculture, Pharma, and Medical)—that transform AI from a threat into the most powerful pedagogical tool educators have ever had. Each framework is structured around a one-hour session divided into four 15-minute phases, designed so that every lesson begins with human curiosity and ends with human ownership.
The frameworks are not theoretical abstractions. They are practical, classroom-tested structures that any educator can adopt immediately, regardless of their technical proficiency with AI. The only prerequisite is a willingness to see AI as a collaborator in pedagogy—a tool that handles the mechanical so that the human can focus on the meaningful.

1.1. The Landscape of AI in Education Today

To understand why these frameworks matter, consider the current landscape. As of 2025–2026, the following capabilities are widely available at little or no cost to educators and students:
  • Text generation: AI can produce essays, reports, summaries, translations, and creative writing in seconds, across virtually any subject and any level of complexity.
  • Code generation: AI can write, debug, and explain code in dozens of programming languages, from simple scripts to complex algorithms.
  • Image generation: AI can create illustrations, diagrams, concept art, and photorealistic images from text descriptions.
  • Data analysis: AI can process datasets, identify patterns, generate visualisations, and produce statistical summaries.
  • Personalised tutoring: AI can adapt explanations to a student’s level, answer follow-up questions, and provide step-by-step guidance through complex problems.
  • Research assistance: AI can summarise papers, identify relevant literature, and synthesise findings across multiple sources.
These capabilities are remarkable—and they are improving rapidly [2,8]. But they share a common limitation: they operate entirely within the domain of information processing. They cannot observe a student’s body language, sense the energy of a classroom, make ethical judgments about competing values, or build the trust that enables a struggling teenager to ask for help. These remain the province of human educators, and they are the foundation upon which great teaching has always rested.

2. Part I: AI-Powered Classroom—Framework for Schools

The school-level framework recognises a fundamental truth: younger learners need curiosity before technology [11,12]. A child who has never wondered why the sky is blue will not benefit from an AI that can explain Rayleigh scattering. The framework therefore begins with human wonder and progressively introduces AI as a tool—never as a master.
The framework is designed for a single one-hour session, but its principles can be applied to any lesson, any subject, and any age group. The four phases—Spark & Explore, Create & Collaborate, Apply & Build, and Reflect & Own—form a pedagogical arc that moves from curiosity through critical engagement to creative production and, finally, to reflective ownership. This arc ensures that AI never becomes the centre of the lesson; the student always is.

2.1. Phase 1: Spark & Explore (0–15 Minutes)

The purpose of this phase is to establish that human thinking comes first. Before any student sees an AI response, they must have committed to their own ideas. This prevents the most dangerous outcome of AI in classrooms: the death of independent thought before it has a chance to develop.

2.1.1. Elementary Level (Ages 5–11)

At the elementary level, the wonder question must connect to the child’s immediate, sensory world. Young children think concretely; abstraction comes later. Effective wonder questions for this age include:
  • “Why do leaves change colour in autumn?”
  • “How does a caterpillar become a butterfly?”
  • “What would happen if it never rained?”
  • “Why is the ocean salty but rivers are not?”
  • “How do birds know where to fly in winter?”
The teacher writes student answers on the board—misspellings and all—because the point is ownership. A child who says “the leaf gets sleepy” has engaged their imagination in a way that no AI-generated answer can replicate. The teacher then asks the same question to ChatGPT or Claude, projected on the screen. Students observe the AI’s response and the class discusses: Whose answer was more interesting? Whose made you think more? Did the computer say anything surprising?
Figure 1. AI-Powered Classroom: Strategic Framework for Schools—a 1-hour session structure progressing from Spark & Explore through Reflect & Own, with key principles anchoring each phase.
At this age, the comparison is not about accuracy—it is about engagement. The child who said “the leaf gets sleepy” has expressed a metaphor, a creative leap, a moment of genuine cognitive effort. The AI’s technically accurate response about chlorophyll and temperature changes is informative but lacks this spark. The teacher highlights this distinction explicitly: “Your brain did something amazing just now—it made a connection that no computer would have made. That’s called imagination, and it’s something only humans have.”
For very young children (ages 5–7), the wonder question should be paired with a physical experience. Before asking “Why do things fall down?”, let the children drop different objects and observe. Before asking “Why is ice slippery?”, let them hold ice cubes and describe what they feel. The physical experience anchors the question in the body, making it real in a way that a screen never can.

2.1.2. Higher Secondary Level (Ages 14–18)

Higher secondary students are capable of metacognition—thinking about thinking [14]. The wonder question can therefore be more layered, more provocative, and more connected to the complexities of the real world:
  • “Is social media making us more connected or more lonely?”
  • “Should gene editing be allowed in human embryos?”
  • “Can an AI-generated poem be considered real art?”
  • “Is economic growth always good for a country?”
  • “Should a self-driving car sacrifice its passenger to save five pedestrians?”
Students write a 2-minute quick-take paragraph, then the teacher prompts GPT with the exact same question. The class analyses not just the content of the AI’s response but its structure, tone, and assumptions. Higher secondary students should be encouraged to ask: “What perspective is missing from the AI’s answer? Whose voice is not represented? What values does the AI seem to prioritise, and why?”
This phase is particularly powerful for developing epistemic awareness—the understanding that all knowledge is produced from a particular standpoint, with particular assumptions [15], and that AI outputs are no exception. Students who develop this awareness become resistant to misinformation, propaganda, and the uncritical consumption of AI-generated content.
At this level, the teacher should also introduce the concept of prompt sensitivity: the same question, phrased slightly differently, can produce significantly different AI responses. Demonstrate this by asking the same question three ways—neutral, leading, and provocative—and showing how the AI’s answer shifts. This exercise reveals that AI is not a neutral oracle but a system that responds to the framing of the input, just as human answers are shaped by how questions are asked.
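The three-framings demonstration can be prepared in advance with a small script. The sketch below is a hypothetical classroom helper, not part of any AI tool's API: the template wordings for the neutral, leading, and provocative framings are illustrative assumptions, and the teacher pastes each generated prompt into the AI system by hand to compare the answers.

```python
# Hypothetical helper for the prompt-sensitivity demonstration: given a topic,
# produce the same underlying question in three framings. The templates are
# illustrative assumptions, not a prescribed recipe.

def frame_question(topic: str) -> dict[str, str]:
    """Return neutral, leading, and provocative framings of a question on `topic`."""
    return {
        "neutral": f"What are the main arguments for and against {topic}?",
        "leading": f"Why is {topic} clearly beneficial for society?",
        "provocative": f"Isn't {topic} just a dangerous fad we will regret?",
    }

if __name__ == "__main__":
    # The teacher submits each variant to the AI tool and the class compares
    # how the answer's tone and assumptions shift with the framing.
    for framing, prompt in frame_question("social media in schools").items():
        print(f"[{framing}] {prompt}")
```

Seeing the three prompts side by side before any AI response is shown also reinforces the Phase 1 principle: the humans commit to a framing first, then examine how the machine responds to it.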

2.2. Phase 2: Create & Collaborate (15–30 Minutes)

This phase transforms the student’s relationship with AI-generated content from passive consumption to active critical engagement. The core insight is that in a world flooded with AI-generated text, the most valuable skill is not the ability to produce content—AI can do that—but the ability to evaluate it.

2.2.1. Elementary Level (Ages 5–11)

For young children, “finding errors” is gamified as a treasure hunt. The teacher generates a simple AI passage—say, a paragraph about penguins—and deliberately includes or highlights factual errors (“Penguins live in the Arctic”, “Penguins can fly short distances”). Students work in pairs with coloured markers to circle errors. This develops critical reading skills while making AI feel approachable rather than authoritative.
The rewriting exercise at this level is verbal: children retell the corrected passage in their own words to their partner. The teacher circulates and asks each pair: “How would you explain this to a younger child?” This develops two skills simultaneously: content mastery (they must understand the material to explain it) and communication (they must adapt their language to their audience).
For older elementary students (ages 9–11), the error hunt becomes more sophisticated. The teacher generates a passage that is mostly correct but contains subtle errors—not outright falsehoods, but oversimplifications, missing context, or misleading implications. For example, an AI passage about the solar system might say “Pluto is a planet” (contested), or a passage about healthy eating might imply that all fats are bad (oversimplified). These exercises teach children that “correct” and “incorrect” are not always binary categories—a crucial lesson for navigating an information-rich world.

2.2.2. Higher Secondary Level (Ages 14–18)

At this level, the errors are not factual blunders but subtle weaknesses: logical fallacies, oversimplifications, missing context, cultural bias, lack of citations, or failure to acknowledge uncertainty. The teacher generates an AI essay on a topic the class is studying and distributes it to groups. Each group must:
1. Identify three substantive weaknesses (not typos or formatting issues).
2. Explain why each is a weakness, citing specific evidence or reasoning.
3. Rewrite the relevant paragraph to fix it, demonstrating the improvement.
4. Reflect on what the exercise reveals about AI’s limitations in this subject area.
This exercise transforms students from passive readers into active critical editors—a skill that directly transfers to university-level work and professional life. Students discover that AI outputs, while impressively fluent, often lack depth, nuance, and the ability to handle genuine complexity.
Higher secondary students should also explore the concept of AI hallucination—the phenomenon where AI generates plausible-sounding but entirely fabricated information [16,17]. Provide students with an AI-generated passage that contains a hallucinated citation (a paper that does not exist, attributed to a real researcher) and challenge them to verify it. This exercise is profoundly important for developing research integrity skills.
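A first step in the verification exercise can even be automated. The sketch below, under stated assumptions, checks only whether a cited DOI is syntactically well-formed; the function name and regex are illustrative, and a fabricated citation can still carry a well-formed DOI, so genuine verification means looking the identifier up in a bibliographic database such as Crossref or on the publisher's site.

```python
import re

# Minimal first-pass sanity check for the hallucinated-citation exercise.
# It validates only the common "10.<registrant>/<suffix>" DOI shape; passing
# this check does NOT mean the cited paper exists.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_is_well_formed(doi: str) -> bool:
    """Return True if the string matches the basic DOI format."""
    return bool(DOI_PATTERN.match(doi.strip()))

print(doi_is_well_formed("10.1145/3571730"))  # a syntactically valid DOI
print(doi_is_well_formed("not-a-doi"))
```

The gap between "well-formed" and "real" is precisely the lesson: students learn that surface plausibility, whether of a DOI or of fluent prose, is no substitute for checking the source.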

2.3. Phase 3: Apply & Build (30–45 Minutes)

This is the phase where learning becomes embodied—where knowledge moves from the head into the hands, the voice, the body. AI can process information and generate text, but it cannot build, perform, experiment, debate, or create physical artefacts. This phase deliberately occupies the space that AI cannot reach.

2.3.1. Elementary Level (Ages 5–11)

AI cannot build a volcano out of papier-mâché, perform a puppet show about the water cycle, or draw a picture that expresses how a child feels about the rainforest. Activities at this level should be multi-sensory, collaborative, and joyful:
  • Science: Build a simple circuit, grow bean plants, conduct sink-or-float experiments, create a weather station from household materials
  • Art: Create a collage, paint a mural, design a poster using only hands and materials, sculpt with clay or playdough
  • Language: Role-play a story scene, perform a class debate on a simple motion, create a radio play with sound effects
  • Maths: Use physical manipulatives (blocks, beads, fraction tiles) to solve problems the AI solved abstractly, measure real objects and compare with AI estimates
  • Geography: Build a 3D map of the local area, conduct a mini-survey of classmates and graph the results by hand
The teacher can use AI in the background during this phase—generating a quick quiz on the topic, creating differentiated worksheets for different ability levels, or producing a fun challenge question for early finishers. But the students are doing hands-on work. The AI is the teacher’s assistant, not the student’s replacement.

2.3.2. Higher Secondary Level (Ages 14–18)

Higher secondary students should tackle projects that require synthesis, judgment, and originality—qualities that cannot be automated:
  • Science: Design and run an actual experiment, then compare results to AI predictions. Where did reality diverge from the model? What variables did the AI not account for?
  • History: Stage a mock trial of a historical figure, using AI-generated evidence that students must cross-examine for bias, anachronism, and omission
  • Literature: Write a creative response to a poem—not an analysis, but a conversation with the text
  • Business Studies: Develop a startup pitch for a local problem, using AI for market research but human judgment for the value proposition
  • Mathematics: Model a real-world scenario using both hand calculations and AI, then compare approaches and identify where human intuition adds value
  • Languages: Translate a passage using AI, then improve the translation by adding cultural nuance and idiomatic expressions that the AI missed

2.4. Phase 4: Reflect & Own (45–60 Minutes)

Reflection is where learning becomes permanent. Without this phase, the lesson remains an experience; with it, the lesson becomes knowledge—integrated into the student’s understanding of both the subject and of themselves as a learner.

2.4.1. Elementary Level (Ages 5–11)

Young children reflect through doing and sharing. The “teach someone at home” assignment is particularly powerful: a child who can explain what they learned to a parent or sibling has truly internalised the knowledge. The exit ticket at this level can be a drawing, a single sentence, or even a verbal statement recorded by the teacher.
Effective reflection prompts for this age group include: “What was the most surprising thing you learned today?” “What did you learn today that a computer couldn’t have taught you?” “If you could ask one more question about today’s topic, what would it be?” These prompts develop metacognitive awareness—the ability to think about one’s own thinking—which research consistently identifies as one of the most powerful predictors of academic success [18,19].
For older elementary students, introduce the concept of a learning journal. Each week, students spend five minutes writing about what they learned, how they learned it, and what questions they still have. Over time, this journal becomes a powerful record of intellectual growth—and a concrete reminder that learning is a human process, not a mechanical one.

2.4.2. Higher Secondary Level (Ages 14–18)

At this level, reflection should be philosophical and self-aware. Students discuss:
  • What did AI help me understand better today?
  • Where did AI mislead me or give me a shallow answer?
  • What can I do that AI cannot, and how do I develop that further?
  • If I had to explain today’s topic to someone with no internet access, how would I do it?
  • What assumptions did I make before I started, and how have they changed?
  • What would I want to investigate further, and why?
The exit ticket at this level is a written paragraph containing one original thought—an insight, question, or connection that the student generated themselves, not prompted by or derived from AI. This trains the habit of original thinking in a world saturated with generated content.

3. Part II: AI-Enhanced Teaching—Framework for Colleges

The college-level framework operates on a more sophisticated premise: university students are training to become professionals in specific disciplines. AI’s role, limitations, and ethical implications differ dramatically between, say, a pharmaceutical researcher and a studio artist. A pharmacist who trusts an AI hallucination could prescribe a fatal drug interaction; an artist who delegates their creative vision to AI ceases to be an artist. The stakes, the skills, and the safeguards are discipline-specific—and the framework must be too.
The framework provides discipline-specific guidance across six fields: Arts, Science, Engineering, Agriculture, Pharma, and Medical. Each discipline is treated as a distinct culture of knowledge—with its own epistemology (how knowledge is produced) [20], its own ethics (what counts as responsible practice), and its own relationship with AI (where the tool helps and where it hinders).
Figure 2. AI-Enhanced Teaching: The comprehensive college-level framework across six disciplines—Arts, Science, Engineering, Agriculture, Pharma, and Medical—structured in four 15-minute phases with discipline-specific activities, culminating in the Golden Rule.

3.1. Phase 1: Engage & Provoke (0–15 Minutes)

The first phase challenges students to confront AI-generated output with disciplinary scepticism. The AI produces something; the student evaluates it through the lens of their field’s standards, ethics, and epistemology. The goal is productive scepticism—not rejection of AI, but rigorous evaluation that sharpens professional judgment.

3.1.1. Arts

The arts represent perhaps the most philosophically provocative case for AI in education [21,22]. When a machine can generate a sonnet in the style of Shakespeare, paint in the manner of Monet, or compose music that sounds like Bach, the fundamental question becomes: What is art, and what makes it valuable?
The provocation here is not technical but existential. Students compare an AI-generated poem with a human poem on the same theme. The discussion centres on intentionality (did the AI mean to write this?), vulnerability (great art often comes from personal risk), context (a poem about grief by someone who has lost a parent carries weight the same words from a statistical model do not), and authorship (if a student curates AI output, who is the artist?).

3.1.2. Science

Science is built on scepticism. Every hypothesis must be testable, every claim falsifiable, every result reproducible [23]. AI generates outputs based on statistical patterns—it does not understand the scientific method [24] and cannot distinguish between a plausible-sounding hypothesis and a genuinely testable one.
Students evaluate the AI hypothesis for testability, falsifiability, implicit assumptions, edge cases, and novelty. They ask: What model is the AI relying on? What data informed this? Under what conditions would this fail?

3.1.3. Engineering

Engineering is not about finding the best solution; it is about finding the best achievable solution given competing constraints of safety, cost, time, materials, environmental impact, regulation, and human factors. AI tends to optimise for a single objective without accounting for the messy reality of manufacturing tolerances, supply chain disruptions, local building codes, or changing client requirements.
Students evaluate the AI design as if conducting a formal design review: practical buildability, safety and failure modes, economic viability, ethical implications, and human factors including accessibility, ergonomics, and user experience.

3.1.4. Agriculture

Agriculture is perhaps the most place-based of all disciplines [25]. A crop rotation plan that works brilliantly in Iowa’s black soils may be catastrophic in the thin, acidic soils of West Cork. AI, trained on global data, produces recommendations that are statistically average—and therefore wrong for any specific field.
Students evaluate the AI plan against soil type, pH, drainage, organic matter, local rainfall patterns, frost dates, biodiversity implications, economic viability, and indigenous knowledge. The farmer’s observations about drainage patterns, traditional planting calendars, and local pest behaviour are knowledge forms that no dataset captures.

3.1.5. Pharma

Pharmaceutical science operates in a domain where errors are potentially fatal [26]. A drug interaction missed by AI, a dosage recommendation based on hallucinated data, or an unflagged contraindication could result in patient harm or death. This makes critical evaluation of AI outputs a matter of professional ethics and patient safety.
Students verify every claim for accuracy against official databases, completeness (including food and supplement interactions), hallucination detection, patient-specific factors (age, weight, renal/hepatic function, genetic polymorphisms), and jurisdictional regulatory status.

3.1.6. Medical

Medicine is fundamentally a human discipline—not because its knowledge base is non-computational, but because its practice requires the integration of scientific knowledge with empathy, observation, communication, and ethical judgment. A machine can process lab values faster than any human; but a machine cannot notice the patient’s anxiety, their reluctance to make eye contact, or the family dynamics in the room.
Students identify what additional information a clinician would gather beyond the AI’s differential: the patient’s narrative [27], social determinants of health [28], cultural context, non-verbal cues, and the therapeutic relationship itself.

3.2. Phase 2: Deepen & Challenge (15–30 Minutes)

In this phase, students move from evaluation to creation—but with AI as a brainstorming partner, not an author. The student’s own vision, methodology, and judgment take centre stage.

3.3. Phase 3: Build & Synthesize (30–45 Minutes)

The production phase, where disciplinary knowledge, AI assistance, and human judgment converge into a tangible, assessable output.

3.4. Phase 4: Reflect & Lead (45–60 Minutes)

The final phase asks students to step back and think about the role of AI in their profession. This is a professional, ethical, and philosophical exercise.

4. How Teachers Can Use GPTs to Enrich Teaching

Beyond the classroom session itself, AI offers teachers ten powerful capabilities for preparation, assessment, and personalisation. These are available today, and educators who adopt them gain significant advantages in efficiency, creativity, and responsiveness to student needs.
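One such capability, quiz generation, can be made repeatable with a reusable prompt template. The sketch below is an illustrative assumption, not a prescribed format: the function name, template wording, and parameters (topic, level, number of questions) are hypothetical, and the assembled prompt is simply pasted into whichever AI chat tool the teacher uses.

```python
# Hypothetical prompt-template helper: assembles a quiz-generation request
# from lesson parameters so a teacher can reuse it across topics and levels.
# The wording is an illustrative assumption, not a fixed recipe.

def build_quiz_prompt(topic: str, level: str, n_questions: int = 10) -> str:
    """Assemble an AI quiz-generation prompt from lesson parameters."""
    return (
        f"Create {n_questions} quiz questions on '{topic}' for {level} "
        "students. Span Bloom's taxonomy from recall to evaluation, mix "
        "multiple-choice and short-answer formats, and provide an answer "
        "key with a one-line explanation for each question."
    )

print(build_quiz_prompt("photosynthesis", "higher secondary", 5))
```

Because the template encodes the pedagogical requirements once (Bloom's taxonomy coverage, mixed formats, an explained answer key), the teacher's effort shifts from re-typing instructions to reviewing and curating what the AI returns.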

5. Surviving and Thriving in the Age of GPTs

The rise of AI in education is not a crisis—it is a selection pressure. Like every transformative technology before it—the printing press, the calculator, the internet [33]—AI will reshape what it means to be educated, what it means to be a professional, and what it means to teach. Educators and students who adapt will not merely survive but will become more effective, more creative, and more irreplaceable than ever before.

5.1. For Teachers: Your Irreplaceable Value


5.2. For Students: Skills That AI Cannot Replace


5.3. For Institutions: Strategic Imperatives


6. Conclusion: The Human at the Centre

The AI revolution in education is not about technology—it is about pedagogy. The question is not “How do we teach students to use AI?” but “How do we teach students to think, create, and lead in a world where AI handles the routine?”
The frameworks presented here—for schools and for colleges—share a common architecture. They begin with human curiosity, not AI output. They engage AI critically, as an object of scrutiny rather than an oracle. They challenge students to build something that AI cannot replicate. And they end with reflection, ensuring that learning stays with the learner.
This four-phase structure ensures that AI is always a tool in service of human learning, never a replacement for it. The six discipline-specific expansions—Arts, Science, Engineering, Agriculture, Pharma, and Medical—demonstrate that this principle is not abstract but practical, adapting to the unique epistemology, ethics, and professional demands of each field.
The future of education belongs to those who understand that the most powerful technology in any classroom has always been the same: a curious mind, guided by a caring teacher, engaged with the world in all its complexity and beauty [37]. AI amplifies this; it does not replace it.

Acknowledgments

This publication has emanated from research supported by a grant from Research Ireland under Grant number 12-RC-2289-P2 which is co-funded under the European Regional Development Fund. For the purpose of Open Access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.

References

  1. Seldon, A.; Abidoye, O. The Fourth Education Revolution Reconsidered: Will Artificial Intelligence Enrich or Diminish Teaching and Learning? University of Buckingham Press, 2020. [Google Scholar]
  2. Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arber, S.; von Arx, S.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2021, arXiv:2108.07258. [CrossRef]
  3. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence Unleashed: An Argument for AI in Education; Pearson Education: London, 2016. [Google Scholar]
  4. Bloom, B.S.; Engelhart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Taxonomy of Educational Objectives: The Classification of Educational Goals. In Handbook I: Cognitive Domain; David McKay Company: New York, 1956. [Google Scholar]
  5. Krathwohl, D.R. A Revision of Bloom’s Taxonomy: An Overview. Theory into Practice 2002, 41, 212–218. [Google Scholar] [CrossRef]
  6. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, 2019. [Google Scholar]
  7. Hattie, J. Visible Learning for Teachers: Maximizing Impact on Learning; Routledge: London, 2012. [Google Scholar]
  8. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; et al. ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learning and Individual Differences 2023, 103, 102274. [Google Scholar] [CrossRef]
  9. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; W. W. Norton & Company: New York, 2014. [Google Scholar]
  10. Frey, C.B.; Osborne, M.A. The Future of Employment: How Susceptible Are Jobs to Computerisation? Technological Forecasting and Social Change 2017, 114, 254–280. [Google Scholar] [CrossRef]
  11. Piaget, J. The Origins of Intelligence in Children; International Universities Press: New York, 1952. [Google Scholar]
  12. Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Harvard University Press: Cambridge, MA, 1978. [Google Scholar]
  13. Lyman, F. The Responsive Classroom Discussion: The Inclusion of All Students. In Mainstreaming Digest; Anderson, A.S., Ed.; University of Maryland: College Park, 1981; pp. 109–113. [Google Scholar]
  14. Flavell, J.H. Metacognition and Cognitive Monitoring: A New Area of Cognitive-Developmental Inquiry. American Psychologist 1979, 34, 906–911. [Google Scholar] [CrossRef]
  15. Hofer, B.K. Personal Epistemology as a Psychological and Educational Construct: An Introduction. In Personal Epistemology: The Psychology of Beliefs about Knowledge and Knowing; 2002; pp. 3–14. [Google Scholar]
  16. Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; et al. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys 2023, 55, 1–38. [Google Scholar] [CrossRef]
  17. Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv 2023. arXiv:2311.05232. [CrossRef]
  18. Wang, M.C.; Haertel, G.D.; Walberg, H.J. What Influences Learning? A Content Analysis of Review Literature. The Journal of Educational Research 1990, 84, 30–43. [Google Scholar] [CrossRef]
  19. Dignath, C.; Büttner, G.; Langfeldt, H.P. How Can Primary School Students Learn Self-Regulated Learning Strategies Most Effectively? A Meta-Analysis on Self-Regulation Training Programmes. Educational Research Review 2008, 3, 101–129. [Google Scholar] [CrossRef]
  20. Becher, T.; Trowler, P.R. Academic Tribes and Territories: Intellectual Enquiry and the Culture of Disciplines, 2nd ed.; Open University Press: Buckingham, 2001. [Google Scholar]
  21. Colton, S.; Wiggins, G.A. Computational Creativity: The Final Frontier? Frontiers in Artificial Intelligence and Applications 2012, 242, 21–26. [Google Scholar]
  22. Boden, M.A. The Creative Mind: Myths and Mechanisms, 2nd ed.; Routledge: London, 2004. [Google Scholar]
  23. Popper, K.R. The Logic of Scientific Discovery; Hutchinson: London, 1959. [Google Scholar]
  24. Mitchell, M. Artificial Intelligence: A Guide for Thinking Humans; Farrar, Straus and Giroux: New York, 2019. [Google Scholar]
  25. Altieri, M.A. Agroecology: The Science of Sustainable Agriculture, 2nd ed.; CRC Press: Boca Raton, FL, 2018. [Google Scholar]
  26. Bates, D.W.; Levine, D.M.; Salmasian, H.; Syrowatka, A.; Shahian, D.M.; Lipsitz, S.; et al. The Safety of Inpatient Health Care. New England Journal of Medicine 2023, 388, 142–153. [Google Scholar] [CrossRef] [PubMed]
  27. Charon, R. Narrative Medicine: A Model for Empathy, Reflection, Profession, and Trust. JAMA 2001, 286, 1897–1902. [Google Scholar] [CrossRef] [PubMed]
  28. Marmot, M.; Wilkinson, R.G. Social Determinants of Health, 2nd ed.; Oxford University Press: Oxford, 2005. [Google Scholar]
  29. Fleddermann, C.B. Engineering Ethics, 4th ed.; Pearson: Upper Saddle River, NJ, 2012. [Google Scholar]
  30. International Council for Harmonisation. Integrated Addendum to ICH E6(R1): Guideline for Good Clinical Practice E6(R2). In ICH Harmonised Guideline; 2016. [Google Scholar]
  31. Smith, C.M. Origin and Uses of Primum Non Nocere—Above All, Do No Harm! The Journal of Clinical Pharmacology 2005, 45, 371–377. [Google Scholar] [CrossRef] [PubMed]
  32. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; et al. Applications of Machine Learning in Drug Discovery and Development. Nature Reviews Drug Discovery 2019, 18, 463–477. [Google Scholar] [CrossRef] [PubMed]
  33. Cuban, L. Oversold and Underused: Computers in the Classroom; Harvard University Press: Cambridge, MA, 2001. [Google Scholar]
  34. Paul, R.; Elder, L. Critical Thinking: Tools for Taking Charge of Your Professional and Personal Life, 2nd ed.; Pearson: Upper Saddle River, NJ, 2019. [Google Scholar]
  35. Deakin Crick, R. Learning to Learn: Setting the Agenda for Schools in the 21st Century. Curriculum Journal 2008. [Google Scholar]
  36. Cotton, D.R.E.; Cotton, P.A.; Shipway, J.R. Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT. Innovations in Education and Teaching International 2024, 61, 228–239. [Google Scholar] [CrossRef]
  37. Dewey, J. Experience and Education; Kappa Delta Pi: New York, 1938. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.