1. Introduction
The rapid growth of large language models (LLMs) has created new opportunities in education. Researchers have examined how prompts act as instructions that allow an LLM to generate a desired outcome from open-ended text input [1]. In education, learners may use prompts to obtain explanations, code reviews, or tailored study materials, but they often struggle to formulate these prompts clearly and precisely.
Recent studies suggest that prompt engineering can be taught across fields, since prompts are simply textual inputs intended to steer the outputs of an LLM [2]. While many universities and schools now allow or encourage the adoption of LLM-based tools, some educators remain uncertain about how to direct students to craft prompts that produce relevant responses for academic tasks. Moreover, students may lack awareness of the technical nuances behind how prompts affect performance [3]. These issues motivate a review of the fundamentals of prompt engineering, associated recommendations, and the obstacles students face in educational contexts.
1.1. What Is Prompt Engineering for LLMs
Prompt engineering is often described as crafting concise textual instructions and contextual hints that guide an LLM toward correct or meaningful outputs [4]. Unlike conventional programming, where a programming language enforces syntax, prompt engineering relies on natural language interaction to reach outcomes. Hence, it balances linguistic clarity with domain-specific requests.
Core components of prompt engineering include specifying roles, limiting scope, and emphasizing the format of the requested output. LLMs interpret user queries better when prompts are crafted clearly, with direct questions or explicit instructions. Conversely, ambiguous language tends to produce unclear or incorrect answers. By studying effective prompt patterns, practitioners have identified common structures, such as giving an example or providing short instructions, that make prompts robust across different educational tasks.
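To make these components concrete, the short sketch below (our illustration, not drawn from the cited studies) assembles a prompt from a role, a scope restriction, and a format instruction; the helper `build_prompt` and its wording are hypothetical.

```python
# Minimal sketch: composing a prompt from the three core components named
# above (role, scope, output format). The helper is illustrative only.

def build_prompt(role: str, task: str, scope: str, output_format: str) -> str:
    """Assemble a structured prompt from its components."""
    return (
        f"You are {role}. {task} "
        f"Limit your answer to {scope}. "
        f"Format the response as {output_format}."
    )

prompt = build_prompt(
    role="a patient high-school biology tutor",
    task="Explain how photosynthesis stores energy in glucose.",
    scope="the key inputs and outputs of the process",
    output_format="three short bullet points",
)
print(prompt)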
1.2. The Role of Prompts in Interacting with LLMs
Prompts serve as the foundation for interaction between the user and the LLM. They help shape responses by indicating the type of information needed and the preferred tone or style. For example, a teacher preparing a language arts assignment can craft a prompt that urges the LLM to respond analytically or highlight grammatical rules. This tailored communication fosters direct alignment between learning targets and AI-generated content.
In educational contexts, prompts can facilitate open-ended discussions. Students can pose questions about a historical event and refine their prompt to obtain specific timelines or analyses. This iterative approach reveals how better prompting leads to better answers. Nonetheless, the LLM's response may be confusing if a learner's prompt is too vague or lacks context. These issues underscore the importance of prompt guidelines, which teach students to phrase questions that guide the LLM toward outputs that meet classroom needs.
1.3. Challenges for Students Using Prompts in Education
Despite the potential advantages of LLMs in education, common challenges restrict student success in prompt engineering. A significant problem is that learners often do not know what they want to ask or how to phrase it.
1.3.1. Students Unsure What to Ask
When using an LLM, middle school, high school, or even college students might be overwhelmed by broad possibilities. Without guidance, they might type a trivial request and gain little insight. Encouraging them to break tasks into smaller pieces—"Explain the main argument of this article" or "Compare two opposing viewpoints on this topic"—helps students identify essential questions.
1.3.2. Students Unable to Form Efficient Prompts
Even if they know the question, students can struggle to frame concise, well-targeted prompts. Some use wordy or ambiguous sentences, which produce unclear outputs. Simple, direct language, plus any relevant constraints (such as word length, viewpoint, or structure), makes LLM interactions more fruitful. Teaching students through examples (e.g., "Show me a 200-word summary" or "Outline three pros and cons") helps develop prompt engineering abilities.
1.3.3. Instructors Unsure How to Improve Student Prompt Skills
Instructors often remain uncertain about how to support students in refining prompt skills. Traditional technology instruction may skip the subtleties of guiding a large language model with text prompts [5]. Educators might also be reluctant because they worry about potential misuse of AI. Clear classroom exercises showing how different prompts change the responses can address these concerns. In particular, guided demonstrations and structured assignments let students practice prompt engineering, gather feedback, and see how prompt variation influences the results.
1.4. Research Questions
Against this background, this study was guided by the following research questions:
RQ1: What are the various prompt engineering techniques developed for effectively utilizing LLMs in education?
RQ2: What key components constitute effective prompt engineering in education?
RQ3: What strategies can educators and students employ to enhance interactions with LLMs through prompt engineering?
RQ4: What challenges are associated with prompt engineering within educational settings?
RQ5: What are students' perspectives and experiences with prompt engineering techniques integrated into chatbot-based tutoring systems?
3. RQ2: Key Components of Prompt Engineering in Education
Prompt engineering is a structured method for designing effective inputs (prompts) that enhance LLM performance in diverse applications, particularly education. Recent research highlights several components integral to successful prompt engineering: content knowledge, critical thinking, iterative design, clarity and precision, creativity, collaboration, digital literacy, and ethical judgment. We conducted a comprehensive review of the existing literature [2,8,9,18,19,20,21,22,23] and synthesized the findings.
3.1. Content Knowledge
Content knowledge refers to the depth and specificity of the information embedded within prompts. Prompts should contain accurate, detailed, and domain-specific information to guide LLMs effectively. Cain [9] emphasized that the success of LLMs in educational applications depends significantly on the precision and relevance of the content knowledge included in the prompts, because LLMs rely on the contextual information in prompts to generate coherent and relevant outputs.
Effective prompt engineering in education must ensure that prompts align closely with the targeted learning objectives. For example, prompts used to assist students in writing essays should contain specific instructions about the essay structure, topic depth, and relevant terminologies. Similarly, prompts designed for generating Python code for educational purposes must incorporate clear algorithmic instructions to enable LLMs to produce accurate and pedagogically sound code.
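For instance, such an instruction-rich code-generation prompt might look like the sketch below; the wording is our own illustration rather than an example taken from the cited studies.

```python
# Sketch: a code-generation prompt that embeds explicit algorithmic
# instructions, so the model produces pedagogically transparent code
# rather than a terse library call. The wording is illustrative.

code_prompt = (
    "Write a Python function that computes the average of a list of numbers "
    "for a beginner programming class. Use a loop to accumulate the sum "
    "rather than built-in shortcuts, handle the empty-list case explicitly, "
    "and comment each step so a novice can follow the logic."
)
print(code_prompt)
```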
3.2. Critical Thinking
Critical thinking within prompt engineering involves designing prompts that encourage LLMs to generate outputs demonstrating analytical reasoning and reflective judgment. Embedding critical-thinking directives within prompts, such as requests for justifications, comparisons, or evaluative judgments, is essential. Such prompts compel LLMs not merely to state facts but to synthesize information, evaluate evidence, and offer logically reasoned arguments.
In educational contexts, integrating critical thinking components into prompts is crucial for fostering higher-order cognitive skills among students. Several studies assert that well-designed prompts encourage critical online reasoning, a subset of critical thinking that significantly enhances students' digital literacy and their capacity to evaluate AI-generated information critically. Thus, prompts structured to challenge assumptions, question validity, and seek evidence-based reasoning substantially improve the educational effectiveness of LLM-generated outputs.
3.3. Iterative Design
Iterative design refers to the cyclical process of prompt refinement through repeated testing, feedback incorporation, and continual improvement. This approach acknowledges that initial prompt formulations are rarely optimal and thus require iterative revision to achieve desired outcomes. Iterative design is essential to optimizing prompt engineering, underscoring the necessity of continuous refinement based on user feedback and output analysis.
Iterative design facilitates prompt enhancement through two principal methods: empirical evaluation of LLM outputs and systematic refinement against clearly defined performance criteria. In practice, it involves adjusting prompt length, specificity, structure, and phrasing to maximize LLM performance on targeted tasks. This approach is particularly beneficial in educational settings, where the precision of instructional prompts can significantly influence learning outcomes, learner engagement, and overall educational efficacy.
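A minimal sketch of such a refinement cycle appears below. `ask_llm` is a placeholder for whatever LLM client is in use, and the word-count check is a deliberately simple stand-in for a real evaluation rubric.

```python
# Sketch of an iterative prompt-refinement loop: generate, evaluate against a
# criterion, and tighten the prompt until the output passes or rounds run out.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply here."""
    return f"[model reply to: {prompt}]"

def meets_criteria(response: str, max_words: int = 200) -> bool:
    # Toy criterion: the response must fit a word budget. A real rubric
    # would check accuracy, reading level, alignment with objectives, etc.
    return len(response.split()) <= max_words

def refine(prompt: str, max_rounds: int = 3) -> str:
    response = ask_llm(prompt)
    for _ in range(max_rounds):
        if meets_criteria(response):
            break
        # Refinement step: adjust phrasing/specificity based on the failure.
        prompt += " Keep the answer under 200 words and use simple language."
        response = ask_llm(prompt)
    return response

print(refine("Explain photosynthesis to a ten-year-old."))
```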
3.4. Clarity and Precision
Clarity and precision are essential for prompt effectiveness. Ambiguities or overly broad language within prompts often lead to inconsistent or irrelevant LLM outputs. Prompt clarity is achieved through precise language that unequivocally conveys the intended task, minimizing interpretive errors by the model. Explicit, well-articulated prompts significantly enhance model predictability and the quality of generated responses.
Educational applications benefit significantly from prompts characterized by clarity and precision. For instance, when prompts are crafted with explicit instructional guidelines, such as specifying the exact format of desired responses or clearly defining the scope of inquiry, LLM outputs tend to align more closely with educational objectives. Such meticulous prompt formulation reduces the cognitive load on learners, allowing them to focus on the educational content rather than deciphering ambiguous instructions.
3.5. Creativity
Creativity is vital to effective prompt engineering in educational settings, fostering innovative and engaging interactions between learners and LLMs. Creative prompt design focuses on crafting prompts that meet clear educational goals while sparking learners' curiosity, imagination, and inventive thinking. For instance, in a history lesson, a creative prompt might ask students to picture themselves as advisors to a historical figure, tasked with suggesting solutions to a major challenge of that time. This method pushes students beyond simple memorization, encouraging them to use their knowledge in new and imaginative ways.
Incorporating creativity into prompts helps LLMs generate responses that avoid being basic or repetitive, boosting student engagement and motivation. Well-designed creative prompts ignite learners' intrinsic interest and inspire them to dive deeper into subjects beyond conventional boundaries. These prompts often feature open-ended questions, hypothetical situations, or imaginative tasks that prompt thoughtful and original answers. Students engaging with such prompts tend to display greater enthusiasm and a stronger willingness to explore challenging topics.
Moreover, creative prompts enhance learners' problem-solving abilities and adaptability. By introducing scenarios that require innovative thinking and flexible application of knowledge, these prompts enable students to tackle challenges from various perspectives and develop diverse solutions. For example, in a science class, a prompt asking students to design an experiment under unusual constraints can spark inventive experimental approaches and deepen their understanding of scientific principles. The value of these prompts lies in their capacity to encourage creative knowledge application, which supports more profound learning and better retention.
Ultimately, weaving creativity into prompt engineering enriches the educational experience by promoting active participation in building knowledge. This approach aligns closely with modern educational aims, highlighting critical thinking, problem-solving, and adaptability—essential skills for success in today’s rapidly evolving academic and professional landscapes.
3.6. Collaboration
Collaboration refers to the design of prompts that foster interaction, communication, and teamwork among learners. Effective prompt engineering incorporates collaborative elements to facilitate knowledge sharing, collective problem-solving, and enhanced social learning experiences. Collaborative prompts encourage learners to work together, leveraging diverse perspectives and expertise to tackle complex educational challenges.
Prompt designs may simulate group tasks, debates, or peer reviews, requiring students to construct responses or evaluate information collaboratively. Such collaborative learning scenarios deepen students' understanding and help develop essential interpersonal skills crucial for modern professional environments.
3.7. Digital Literacy
Digital literacy involves crafting prompts that strengthen students' abilities to critically evaluate, use, and create digital content effectively and responsibly. Prompts designed to develop digital literacy emphasize critical assessment of online sources, ethical use of digital resources, and practical digital communication skills.
Educational prompts can incorporate scenarios requiring learners to identify credible digital resources, analyze digital content critically, and ethically navigate the digital information landscape. Digital literacy is essential for effective engagement with AI-generated content, underscoring its role in empowering learners to use digital tools proficiently and responsibly.
3.8. Ethical Judgment
Ethical judgment in prompt engineering involves developing prompts that enable learners to consider and evaluate ethical implications within educational contexts. These prompts challenge learners to reflect on moral dilemmas, assess the societal impacts of technology use, and critically evaluate the ethical considerations of information generated by LLMs.
Educational prompts structured around ethical judgment guide learners to explore scenarios involving privacy concerns, fairness, inclusivity, and societal impacts of AI technologies. This component of prompt engineering fosters learners' capacity for ethical reasoning, responsible decision-making, and awareness of broader implications associated with technology integration in education.
3.9. Integration of Components
Integrating content knowledge, critical thinking, iterative design, clarity and precision, creativity, collaboration, digital literacy, and ethical judgment forms a comprehensive framework for effective educational prompt engineering. Each component uniquely contributes to the effectiveness of prompt design, while their integration creates a synergistic dynamic enhancing overall educational outcomes.
Content knowledge provides the foundational substance for critical thinking, digital literacy, ethical judgment, and creativity, while iterative design systematically refines prompts through continuous feedback and evaluation. Creativity and collaboration enrich learner interactions, fostering engagement, exploration, and deeper cognitive processing. Digital literacy ensures responsible and effective use of digital tools and resources, enhancing the relevance and applicability of educational experiences. Ethical judgment adds a critical dimension, ensuring learners are cognizant of the moral implications and societal impacts of technology use.
This holistic approach aligns with constructivist learning theories, emphasizing students' active role in knowledge construction, collaborative engagement, critical evaluation, creative exploration, and ethical reflection. Effective prompt engineering thus balances these components according to specific educational goals and contexts, creating flexible, adaptive, and robust educational experiences. Ultimately, this integrated framework prepares students comprehensively, equipping them with essential skills for contemporary educational and professional landscapes.
4. RQ3: Strategies for Prompt Engineering in Education
Effective prompt engineering within educational contexts demands carefully developed strategies tailored specifically to educational settings. This section explores strategies educators and students can use to enhance LLM interactions. Each strategy is discussed comprehensively and supplemented with relevant examples to demonstrate practical implementation. We conducted a comprehensive review of the existing literature [1,2,3,4,5,7,8,9,10,11,18,19,20,21] and synthesized the findings.
4.1. Contextual Framing
Contextual framing involves embedding specific educational contexts directly into prompts, helping ensure that LLM-generated responses align with learning goals. By adding context, the output becomes more relevant and precise, which improves its educational usefulness.
For example, instead of broadly prompting the LLM to "explain photosynthesis", an educator might say:
“Imagine you are explaining photosynthesis to a group of elementary students. Provide a simple yet accurate description.”
This framing allows the LLM to adjust its response to the intended audience, making it more likely to meet the specific learning objective.
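Programmatically, this framing can be a thin wrapper around the bare task. The function below is a hypothetical illustration mirroring the photosynthesis example above; the helper name is ours.

```python
# Sketch: wrapping a bare task in audience context before sending it to the
# model, mirroring the example above.

def frame_for_audience(task: str, audience: str) -> str:
    return (
        f"Imagine you are explaining {task} to {audience}. "
        "Provide a simple yet accurate description."
    )

print(frame_for_audience("photosynthesis", "a group of elementary students"))
```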
4.2. Task Segmentation
Task segmentation breaks down complex educational tasks into smaller, clearly structured components. This approach enhances response clarity and depth, making tasks more manageable and systematic.
For instance, a historical essay prompt might be segmented into distinct parts:
First prompt:
"List the key factors leading to the American Civil War."
Second prompt:
"Describe the economic impact of the Civil War."
Final prompt:
"Explain the social consequences of the Civil War."
This structured approach ensures that the LLM systematically addresses each task component, promoting deeper cognitive engagement and comprehensive understanding.
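In an automated workflow, the segments can simply be issued as independent queries and the answers assembled afterward, as in the sketch below (`ask_llm` is again a placeholder for a real client).

```python
# Sketch: a segmented task as an ordered list of independent prompts, each
# answered on its own and then assembled into a draft.

def ask_llm(prompt: str) -> str:
    return f"[model reply to: {prompt}]"  # placeholder for a real LLM call

segments = [
    "List the key factors leading to the American Civil War.",
    "Describe the economic impact of the Civil War.",
    "Explain the social consequences of the Civil War.",
]

sections = [ask_llm(prompt) for prompt in segments]
draft = "\n\n".join(sections)
print(draft)
```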
4.3. Prompt Sequencing
Prompt sequencing strategically organizes prompts into a coherent, logical progression to guide the LLM through structured reasoning. It enhances logical flow and clarity, especially in tasks involving sequential reasoning or analytical steps.
For example, in a scientific experiment context, an educator might use sequential prompts:
Initial prompt:
"Explain the hypothesis you aim to test."
Next prompt:
"Outline the method you will use to test this hypothesis."
Final prompt:
"Describe how you would analyze the results."
This step-by-step progression explicitly addresses each reasoning stage, ensuring complete and logically connected responses.
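Unlike segmentation, sequencing benefits from carrying earlier turns forward so each stage can build on the last. The sketch below uses the role/content message convention common to chat-style APIs; this convention, and the placeholder `ask_llm`, are assumptions to adapt to the client at hand.

```python
# Sketch: sequenced prompts with conversation history carried forward, so the
# model can connect each reasoning stage to the previous ones.

def ask_llm(messages: list) -> str:
    return "[model reply]"  # placeholder for a real chat-completion call

steps = [
    "Explain the hypothesis you aim to test.",
    "Outline the method you will use to test this hypothesis.",
    "Describe how you would analyze the results.",
]

history = []
for step in steps:
    history.append({"role": "user", "content": step})
    reply = ask_llm(history)  # full history gives the model prior context
    history.append({"role": "assistant", "content": reply})
```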
4.4. Persona or Role Specification
Specifying personas or roles within prompts guides the LLM in adopting disciplinary-specific language and perspectives. This helps improve the academic rigor of responses by setting clear expectations for tone, depth, and approach. For example, an educator might prompt:
Role-based prompt:
"As a historian specializing in World War II, provide an analysis of the causes of the conflict."
This explicit role designation helps the LLM generate responses that align with professional standards and reflect appropriate disciplinary methodologies.
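In chat-style interfaces, the role designation often sits in a system message rather than the user turn, keeping the persona fixed across the conversation. The sketch below follows that common convention; the exact client call is omitted as an implementation detail.

```python
# Sketch: persona specification as a system message, so the role persists
# across every subsequent query instead of being restated each time.

messages = [
    {"role": "system",
     "content": "You are a historian specializing in World War II. "
                "Answer with disciplinary rigor, noting the kinds of "
                "evidence historians rely on."},
    {"role": "user",
     "content": "Provide an analysis of the causes of the conflict."},
]
# The messages list would then be passed to whatever chat client is in use.
```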
4.5. Reflection Prompting
Reflection prompts encourage higher-order cognitive processes like analysis, synthesis, and evaluation. These prompts support more profound understanding and critical thinking by prompting students and the LLM to consider broader implications.
Reflective prompt example:
"Provide a solution to reduce plastic waste. Reflect on the potential societal and environmental implications of your solution."
This approach fosters critical engagement with the subject matter and enhances learning outcomes.
4.6. Encouraging Counterfactual Thinking
Counterfactual prompting encourages the exploration of alternative scenarios or outcomes. It supports creativity and deeper analytical thinking by challenging conventional perspectives.
Counterfactual prompt example:
"Describe what might have happened if the internet had been invented in the 19th century."
Such prompts encourage thoughtful consideration of alternative possibilities and their implications.
4.7. Constraint-Based Creativity Prompting
Constraint-based prompting introduces specific limits or conditions to guide the LLM’s responses. These constraints help promote innovation and originality within a focused scope.
Constraint-based prompt example:
"Develop a plan to increase urban green spaces using only community-based initiatives."
This prompt encourages creative problem-solving within well-defined boundaries.
4.8. Ethical Consideration Prompts
Ethical prompts explicitly incorporate ethical dimensions into tasks, supporting ethical reflection and reasoning. They encourage students and LLMs to examine broader societal impacts and moral concerns.
Ethical prompt example:
"Discuss the ethical implications of using facial recognition technology in public surveillance."
These prompts enhance ethical awareness and promote thoughtful academic engagement.
4.9. Interactive and Iterative Prompting
Interactive prompting involves using an initial prompt followed by follow-up prompts based on the LLM's responses. This iterative method allows for refinement, clarification, and deeper exploration of ideas.
Initial prompt:
"Explain the concept of democracy."
Follow-up prompt:
"Clarify how democratic principles apply differently in direct versus representative democracy."
This step-by-step interaction supports a progressive understanding of layered concepts.
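A sketch of this interaction pattern: issue the initial prompt, inspect the reply, and condition the follow-up on what came back. The keyword check here is a toy heuristic standing in for human judgment or a proper rubric, and `ask_llm` is a placeholder.

```python
# Sketch: interactive prompting where the follow-up depends on the first
# response rather than being fixed in advance.

def ask_llm(prompt: str) -> str:
    return f"[model reply to: {prompt}]"  # placeholder for a real LLM call

answer = ask_llm("Explain the concept of democracy.")
if "representative" not in answer.lower():
    # The first pass skipped a distinction we care about, so probe further.
    answer += "\n\n" + ask_llm(
        "Clarify how democratic principles apply differently in "
        "direct versus representative democracy."
    )
print(answer)
```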
4.10. Comparative Analysis Prompting
Comparative analysis prompts focus on identifying similarities and differences between concepts, events, or perspectives. They encourage nuanced analysis and structured comparison.
Comparative prompt example:
"Compare and contrast the economic policies of the United States and China over the past two decades."
Such prompts strengthen analytical thinking and promote comprehensive understanding.
By systematically applying these strategies, educators and students can significantly enhance their interactions with LLMs, fostering meaningful, relevant, and intellectually rigorous educational experiences.
5. RQ4: Challenges of Prompt Engineering in Education
Prompt engineering, a critical component of integrating LLMs into educational contexts, faces several challenges that educators, learners, and developers must address to leverage these tools effectively for improved learning outcomes. This section identifies and discusses the primary challenges associated with prompt engineering within educational settings, highlighting their implications for learning efficacy and user interaction. We conducted a comprehensive review of the existing literature [2,9,10,20,22,24,25,26,27,28,29,30] and synthesized the findings.
5.1. Ambiguity and Contextual Misinterpretation
One prominent challenge in prompt engineering for education is managing ambiguity inherent in human language. Given natural language's complexity, subtlety, and context-dependency, LLMs can misinterpret prompts, leading to responses that deviate from intended educational outcomes.
For example, a prompt such as "Explain the process of division" might result in the model describing mathematical division, cellular division, or social division. This ambiguity requires prompt engineers—often educators themselves—to craft extremely precise instructions, significantly increasing the cognitive load on educators.
Additionally, contextual misinterpretation arises when prompts do not contain sufficient contextual cues, leading models to produce generic or irrelevant content. Educational prompts must include clear instructional goals, necessary context, and specificity to prevent unintended interpretations, thus ensuring the responses support learning objectives effectively.
5.2. Balancing Specificity and Flexibility
Striking an optimal balance between specificity and flexibility in prompts is another critical challenge in education. Overly specific prompts limit the creativity and exploratory nature of responses, constraining learners' opportunities for critical thinking and problem-solving. Conversely, overly flexible prompts may lead to excessively broad or tangential responses that fail to meet educational objectives or standards.
For instance, a prompt like "Describe the impact of climate change" can be interpreted broadly. Refining this to "Describe the impact of climate change on agricultural productivity in Southeast Asia" narrows the response sufficiently to maintain educational rigor while allowing exploratory learning.
Educators face the ongoing task of fine-tuning prompts iteratively to maintain this delicate balance. This iterative process requires substantial effort and a deep understanding of subject matter and pedagogical methods, thus adding complexity and time investment to instructional planning.
5.3. Ensuring Response Consistency and Reliability
Consistency and reliability in the responses from LLMs are crucial for maintaining trust and efficacy in educational settings. Inconsistent responses, particularly in sequential interactions, can confuse learners, undermine confidence in educational tools, and disrupt the learning process. For instance, inconsistent tonal shifts or contradictory explanations across interactions could severely impact learner comprehension and engagement.
To address this, educators must implement structured prompt frameworks and incorporate explicit instructions for tone, format, and perspective within the prompts. Additionally, they must consistently validate and calibrate model responses against established educational standards, significantly increasing the preparatory and monitoring workload.
5.4. Managing Model Hallucinations and Accuracy
A notable challenge is managing the phenomenon of hallucinations in LLM outputs, where models generate coherent but factually incorrect or misleading information. This is particularly problematic in education, where the accuracy of content is paramount. Reliance on unverified or erroneous model-generated content could lead students to internalize incorrect information, negatively impacting learning outcomes and understanding.
Therefore, it becomes essential for educators to develop robust validation mechanisms and encourage critical evaluation among learners. This approach involves additional training for educators and students in identifying, cross-checking, and correcting inaccurate outputs from LLMs, further increasing the complexity of effectively integrating these tools into educational settings.
5.5. Ethical and Privacy Concerns
Prompt engineering in education also faces ethical and privacy-related challenges. Educational prompts may inadvertently expose sensitive personal or institutional information, raising concerns about confidentiality and data protection. Moreover, ethical issues may arise when prompts unintentionally reinforce biases or stereotypes embedded in training data, thus perpetuating prejudiced views and inequities.
Addressing these challenges necessitates clear guidelines and rigorous oversight to ensure prompts are ethically sound and respect privacy considerations. Educators must carefully curate training materials, constantly evaluate model outputs for biases, and engage in ongoing professional development to understand and mitigate potential ethical and privacy implications.
5.6. Student Engagement and Interaction Dynamics
Lastly, maintaining student engagement when using prompt-driven educational methods presents challenges. Prompts that fail to align with students' interests or abilities can reduce motivation and engagement. Moreover, excessive reliance on prompts may lead students to become passive recipients rather than active participants in their learning processes.
To counteract these dynamics, educators must design prompts that encourage active learning, critical thinking, and meaningful interaction. This also involves gathering continuous feedback from students to adjust and improve prompts dynamically, aligning them more closely with student needs, preferences, and learning styles.
Effectively addressing these challenges requires concerted efforts from educators, researchers, and technologists to optimize prompt engineering practices, ultimately enhancing educational outcomes and enriching learning experiences.
6. RQ5: Student Perspectives on Prompt Engineering in Education
In addition to reviewing the existing literature [2,9,19,20,21,31,32,33], we conducted a qualitative study of primary school students' experiences with prompt engineering techniques integrated into their chatbot-based tutoring system (POE chatbot) during a 12-week Python programming course using CodeCombat.
The study involved 30 students aged 9-12, who attended five classes per week, amounting to 60 sessions of 40 minutes each. The decision to focus on primary school students was driven by a significant research gap in this area, as few studies have explored the application of prompt engineering techniques in elementary education. This study aimed to address that gap by investigating how young learners interact with and benefit from these techniques in a programming context.
Before the study began, we obtained consent from students' parents and the school and informed the students about the research, ensuring their willingness to participate. Data were collected through structured and informal interviews to gain insights into students' perceptions of different prompt engineering strategies implemented in their learning environment. A detailed qualitative analysis was performed, and the findings are presented in the following thematic subsections.
6.1. Understanding Prompt Structure and Clarity
Students demonstrated remarkable awareness of how different prompt structures affected their interactions with the POE chatbot. They frequently noted how clear, specific prompts yielded more helpful responses than vague or ambiguous queries. Throughout the program, students gradually developed an intuitive understanding of effective prompt construction, refining their ability to communicate with the AI tutoring system.
S5 reflected on his growing awareness of how prompt construction affected the chatbot's responses:
S5: "I learned that when I ask POE questions, I need to be clear. At first, I would just say 'Help me!' but that didn't work well. Now I say exactly what part of my code I'm stuck on, like 'I don't understand how to use the if-else statement.' Then POE gives me much better help!"
S19 described her discovery of how providing context in prompts significantly improved the quality of assistance she received:
S19: "When I tell POE what I already tried or what I think might be the problem, it helps me much better. I used to just say 'This doesn't work,' but now I explain more details like 'I tried using a loop but my character keeps moving forever.' This makes POE give me exactly what I need."
Students also recognized that structuring prompts with specific learning goals helped them receive more targeted support. They learned to articulate their goals rather than simply describe problems.
S22 explained how he adapted his prompt strategies to achieve better learning outcomes:
S22: "I figured out that asking 'Can you explain how this works?' doesn't help as much as saying 'I want to understand this so I can build my own game later.' When I tell POE what I want to learn, not just what I'm stuck on, the explanations make much more sense to me."
The most significant finding was how students intuitively recognized the relationship between prompt specificity and response quality. Several students noted that providing detailed information about their current understanding level helped calibrate the chatbot's explanations to their needs.
6.2. Contextual Information and Setting Parameters
A crucial prompt engineering strategy that emerged from student interactions was the importance of establishing contextual information. Students discovered that informing the chatbot about the specific CodeCombat level, their age, and their current knowledge significantly improved response relevance and comprehensibility.
S13 explained how providing context about his specific CodeCombat level transformed the quality of assistance:
S13: "I learned to always start by telling POE exactly which CodeCombat level I'm working on. Like, I say 'I'm on the Dungeon level called Fire Dancing' before asking my question. When I do this, POE knows exactly what my character can do and what code I should use. Before, I didn't say which level, and sometimes POE told me to use commands I didn't have yet!"
S26 highlighted the importance of age-appropriate explanations:
S26: "My best trick is telling POE that I'm 10 years old at the beginning. When I do that, it explains things like it's talking to a kid, not a grown-up. It uses simple words and fun examples that I can understand. One time I forgot to say that, and POE used really complicated words that confused me!"
Teachers observed that students who consistently provided contextual information received more precise and age-appropriate guidance. This contextualization became a fundamental prompt engineering technique that students gradually adopted without explicit instruction.
S10 described how setting learning parameters helped him receive more useful explanations:
S10: "I always tell POE what I already know before asking for help. Like I'll say 'I understand what variables are, but I don't get how to change them in my code.' Then POE skips explaining stuff I already know and focuses on exactly what I'm confused about. It saves time and the explanations make more sense!"
Establishing contextual boundaries emerged as a sophisticated metacognitive skill that transferred beyond coding tasks. Students began applying this approach to other learning activities, demonstrating an enhanced awareness of their own knowledge gaps and learning needs.
S30 shared how this skill transferred to other learning contexts:
S30: "The way I talk to POE now helps me ask better questions to my teachers too! I learned to say what I already know and exactly what I'm confused about. My teacher said I ask really good questions now that are easier to answer. It's like I learned a super skill for getting the right help!"
6.3. Role-Based and Scenario Prompting
Students discovered the effectiveness of role-based and scenario prompting techniques where they would frame their interactions with the chatbot in creative contexts. This approach made learning more engaging and helped students receive explanations tailored to their specific needs and interests.
S3 described her excitement about using character-based prompts:
S3: "It's fun when I ask POE to explain coding like it's a wizard teaching magic spells! I say 'Pretend you're a coding wizard and I'm your apprentice learning the magic spell of loops.' Then POE explains loops like they're magic spells, and it makes everything easier to remember because it's like a story!"
S16 shared how scenario-based prompts helped him understand complex programming concepts:
S16: "My favorite way to ask POE questions is making up coding adventures. Like when I couldn't understand variables, I asked POE to explain it like we're on a treasure hunt and variables are different treasure chests for storing things. Now I always remember that variables store things just like treasure chests!"
Our analysis revealed that role-based prompting had significant pedagogical value beyond mere entertainment. Students who assigned specific teaching roles to the chatbot often received explanations that were more memorable and aligned with their cognitive frameworks.
S8 explained how assigning a specific teaching role to POE enhanced his comprehension:
S8: "When I was stuck on functions, I asked POE to 'be like my football coach teaching a new play.' POE explained functions like they were football strategies with different players having different jobs, and suddenly it all made sense! Now whenever I write functions, I think about my football team and remember exactly how they work."
Teachers noted that role-based prompting was particularly effective for students who struggled with traditional abstract explanations. Students could anchor new knowledge to existing mental models by framing coding concepts within familiar scenarios.
6.4. Constraint Specification and Error Prevention
Students developed an impressive understanding of how specifying constraints in their prompts could prevent misleading or incorrect responses. They learned to explicitly inform the chatbot about what not to do, establishing boundaries that significantly improved response accuracy and educational value.
S9 explained his discovery about setting explicit constraints:
S9: "I learned an important trick – telling POE what NOT to do! When POE gave me answers that were too complicated, I started saying 'Please explain this simply without using any big coding words.' This worked really well! Now I always tell POE exactly how I want the answer and what I don't want."
S25 shared how specifying constraints helped her receive more appropriate guidance:
S25: "At first, POE sometimes gave me complete solutions that didn't help me learn. Then I started saying 'Please don't give me the full answer, just give me hints so I can figure it out myself.' Now POE gives me just enough help to learn without doing all the work for me. I feel prouder when I solve problems this way!"
The ability to establish constraints emerged as a sophisticated prompt engineering skill that had significant implications for fostering independent learning. Students who effectively communicated boundaries received guidance that supported their development rather than circumventing the learning process.
S14 described how constraint-setting helped balance assistance with learning:
S14: "I figured out that I can tell POE exactly how much help I want. When I say 'Don't solve the whole problem for me, just help me understand why my loop isn't working,' POE gives me the perfect amount of help. It points out my mistake but lets me fix it myself. This helps me learn much better than when POE just gives me the answer."
Teachers observed that students who mastered constraint specification demonstrated greater metacognitive awareness and learning autonomy. These students were better able to identify precisely what assistance they needed and how that assistance should be delivered.
S28 shared his sophisticated approach to specifying ethical constraints:
S28: "I noticed that sometimes POE would show me a shortcut that wasn't what our teacher wanted us to learn. So now I say 'Please help me solve this using only the commands we've learned in class, and don't show me any shortcuts.' This makes sure I'm learning the right way and not taking the easy way out."
6.5. Prompt Templates and Structure Recognition
Throughout the 12-week program, students began to recognize patterns in effective prompts and developed templates for different coding challenges. This development of prompt templates represented an important metacognitive skill that transferred to other problem-solving contexts.
S8 proudly described the prompt template he developed for debugging assistance:
S8: "I made my own special way to ask for help when my code has bugs. First, I tell POE what my code should do, then I paste my code, then I explain what's happening instead, and finally I ask what's wrong. POE almost always finds my mistake right away when I follow this pattern!"
S20 explained how she created different prompt templates for different learning needs:
S20: "I have different question patterns for when I need different help. If I want to understand something new, I start with 'Can you explain [specific concept] like I'm 9 years old with examples?' If I'm fixing code, I use 'Here's my code, here's what it should do, here's what's happening instead.' Having these patterns helps me get better answers from POE."
Further analysis revealed that students naturally evolved toward consistent template usage over the course of the study. By the final weeks, many students had developed sophisticated, multi-part templates that demonstrated a nuanced understanding of effective prompt construction.
S11 described her systematic template for learning new coding concepts:
S11: "I created my own four-step question formula for learning new things! Step 1: I ask 'What is [concept] in super simple words?' Step 2: I ask 'Can you show me a really easy example of [concept]?' Step 3: I ask 'How is [concept] used in games like CodeCombat?' Step 4: I ask 'What mistakes do people usually make with [concept]?' This pattern helps me understand everything completely!"
S29 shared how he developed templates based on response effectiveness:
S29: "I keep track of which questions get the best answers from POE in my notebook. When POE gives me a really good explanation, I write down exactly how I asked the question. Now I have a collection of perfect questions for different coding problems. It's like I discovered the secret codes for talking to POE!"
Teachers noted that prompt templating represented a sophisticated metacognitive strategy, as it required students to reflect on the communication process itself. Students who developed effective templates demonstrated enhanced problem-solving abilities and communication skills.
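As an illustration, the four-part debugging pattern S8 describes could be written down as a reusable template. The field names below are ours, but the structure (intended behavior, code, observed behavior, question) follows the student's account.

```python
# Sketch: S8's four-part debugging pattern encoded as a reusable template.

def debugging_prompt(goal: str, code: str, observed: str) -> str:
    return (
        f"My code should: {goal}\n"
        f"Here is my code:\n{code}\n"
        f"What happens instead: {observed}\n"
        "What is wrong with it?"
    )

print(debugging_prompt(
    goal="move the hero to the gem and then stop",
    code="while True:\n    hero.moveRight()",
    observed="the hero keeps moving forever",
))
```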
6.6. Iterative Prompt Refinement and Response Evaluation
As students progressed through the program, they began to employ sophisticated iterative prompt refinement strategies. Rather than accepting inadequate responses, students learned to systematically improve their prompts based on the quality of responses received, engaging in a collaborative dialogue with the chatbot that progressively approached optimal explanations.
S2 described her discovery of iterative prompt refinement:
S2: "At first, I would give up if POE's answer didn't help me. Then I realized I could keep asking better questions! Now if POE's explanation is confusing, I say 'I didn't understand that part about loops. Can you explain it differently using a game example?' I keep asking follow-up questions until I really understand."
S18 shared his systematic approach to refining prompts based on response evaluation:
S18: "I learned to judge POE's answers in my head – was it too hard, too easy, or just right? If it's too complicated, I say 'Can you explain that like I'm younger?' If it's too simple, I say, 'I understand that part, but can you tell me more about [specific details]?' It's like training POE to give me perfect answers!"
The development of response evaluation skills represented a sophisticated metacognitive capability that enhanced students' critical thinking. Students became more discerning about the quality and relevance of information, learning to identify gaps in explanations and request specific clarifications.
S6 articulated his process for evaluating and refining responses:
S6: "I've become good at spotting when POE's explanation isn't quite right for me. Sometimes POE explains something using math examples, but I understand sports better. So I say 'That makes sense, but could you re-explain it using basketball instead of numbers?' Then the explanation suddenly clicks in my brain!"
S23 described how she learned to identify specific knowledge gaps in responses:
S23: "I noticed that sometimes POE assumes I know things that I don't. Now I'm not embarrassed to say 'Wait, I don't know what a parameter is yet. Can you explain that first?' I've learned it's OK to admit when I don't understand something and ask for more basic explanations before moving to the complicated stuff."
Teachers observed that students who mastered iterative refinement demonstrated enhanced persistence when facing challenging concepts. These students were less likely to abandon complex problems and more likely to systematically work toward understanding through strategic dialogue with the chatbot.
S27 explained how iterative prompting improved her persistence and problem-solving:
S27: "Before, if I didn't understand something right away, I would just give up and say coding is too hard. Now I know that if POE's first explanation doesn't help, I can try again with a better question. Sometimes it takes five or six tries, but eventually I always understand! This has made me more determined in everything I do – I don't give up easily anymore."
6.7. Specific vs. General Prompt Strategies
Students developed a nuanced understanding of when to use specific versus general prompts depending on their learning objectives. They discovered that highly detailed prompts yielded precise solutions to immediate problems, while more open-ended prompts facilitated broader conceptual understanding and creative exploration.
S1 explained his strategic use of specific prompts for debugging:
S1: "When my code is broken and I need to fix it quickly, I've learned to be super specific with POE. I say exactly what line is causing problems, what I expected to happen, and what's happening instead. The more details I give, the faster POE finds my exact mistake!"
S12 contrasted this with her approach for concept exploration:
S12: "When I want to really understand something new, I ask broader questions like 'What are different ways to use loops in games?' or 'What can I create with functions?' These questions make POE show me lots of cool possibilities I wouldn't have thought of myself. It's like opening a treasure chest of coding ideas!"
This strategic alternation between specific and general prompts demonstrated sophisticated metacognitive awareness, as students learned to match their prompting strategies to different learning needs and contexts.
S19 shared her contextual approach to prompt specificity:
S19: "I use different question styles for different situations. When I'm stuck and frustrated, I ask very specific questions to get unstuck quickly. But when I'm feeling curious and have extra time, I ask general questions to discover new coding tricks. Both ways are helpful, just for different times and goals."
Teachers noted that students who mastered this contextual approach demonstrated enhanced learning efficiency, appropriately balancing immediate problem-solving with deeper conceptual exploration.
S4 described how varying prompt specificity enhanced his learning strategy:
S4: "I've learned to use specific questions like tools for fixing problems and general questions like adventures for exploring. If I need to complete a level quickly, I ask exactly where my mistake is. But after finishing my work, I ask curious questions like 'What are cool things I could add to this code?' This helps me learn both the 'how' and the 'why' of coding."
The ability to strategically vary prompt specificity correlated with students' developing a more sophisticated and flexible approach to learning programming concepts.
6.8. Challenges and Limitations in Prompt Engineering
Despite their impressive developments in prompt engineering skills, students encountered several challenges and limitations that provided valuable insights for future educational implementations. These challenges highlight important considerations for integrating prompt engineering education into primary school curricula.
S7 expressed frustration with occasional mismatches between prompts and responses:
S7: "Sometimes even when I ask a really good question, POE still doesn't understand what I mean. I try using all the tricks I learned, but POE gives me information about something different. It's frustrating because I don't know how to ask better."
S14 highlighted difficulties with technical vocabulary limitations:
S14: "The hardest part is when I don't know the right coding words to use in my question. If I don't call something by its proper name, like 'variable' or 'parameter,' sometimes POE doesn't understand what I'm asking about. It's like we speak different languages sometimes."
Some students struggled with the metacognitive demands of prompt engineering, finding it challenging to simultaneously think about coding problems and how to communicate about those problems effectively.
S26 described this metacognitive challenge:
S26: "It's hard to think about coding AND think about how to ask good questions at the same time. Sometimes my brain gets too full trying to do both things! When I'm really stuck on a hard problem, it's difficult to remember all the question tricks I learned."
Teachers observed that younger students (ages 9-10) generally found prompt engineering more challenging than older students (ages 11-12), suggesting developmental factors in metacognitive capabilities relevant to effective prompt construction.
S5, one of the younger participants, articulated this developmental challenge:
S5: "The older kids are better at asking POE perfect questions. I try to copy how they ask questions, but sometimes I forget all the parts to include. My teacher says my question-asking skills will get better as I practice more and my brain grows."
Despite these challenges, students demonstrated remarkable resilience and adaptation. By the end of the program, even students who initially struggled had developed basic prompt engineering strategies that noticeably improved their learning interactions.
S10 reflected on his progress despite ongoing challenges:
S10: "At first, I was really bad at asking POE questions. My questions were too short, and POE's answers didn't help me. I'm still not the best at it, but I'm much better now! I learned to be specific, give examples, and keep trying different questions. Even when it's hard, I don't give up because I know good questions lead to good answers."
These challenges suggest important considerations for scaffolding prompt engineering education for young learners, including developmentally appropriate instruction, explicit vocabulary support, and gradual release of responsibility as students develop metacognitive capabilities.
7. Conclusion
This comprehensive review explored the multifaceted landscape of prompt engineering for LLMs in educational contexts. Our investigation identified a rich taxonomy of prompt engineering techniques applicable to educational settings, ranging from foundational approaches like zero-shot and few-shot prompting to more advanced methods and their derivatives that facilitate structured reasoning and problem-solving. Furthermore, we delineated the key components of effective prompt engineering in education, including content knowledge, critical thinking, iterative design, clarity, creativity, collaboration, digital literacy, and ethical judgment, all of which contribute synergistically to enhanced educational outcomes.
The strategies examined in this study provided educators and students with practical frameworks for optimizing LLM interactions, including contextual framing, task segmentation, prompt sequencing, persona specification, reflection prompting, counterfactual thinking, constraint-based creativity, ethical consideration prompts, interactive prompting, and comparative analysis. However, significant challenges persisted in implementing prompt engineering in educational settings, including ambiguity and contextual misinterpretation, balancing specificity with flexibility, ensuring response consistency, managing model hallucinations, addressing ethical concerns, and maintaining student engagement.
Notably, our qualitative study with primary school students revealed remarkable insights into students' perspectives and experiences with prompt engineering techniques. Students demonstrated impressive intuitive development of sophisticated prompt engineering skills, including understanding the prompt structure, utilizing contextual information, employing role-based prompting, specifying constraints, developing prompt templates, engaging in iterative refinement, and strategically varying prompt specificity according to learning objectives. These findings suggested that even young learners could develop metacognitive awareness of effective communication strategies with AI systems, though developmental factors influenced their capacity to implement these strategies consistently.
This research contributed to the nascent field of AI literacy in education by highlighting the importance of explicit instruction in prompt engineering as a fundamental skill for the AI-augmented classroom. Future research should focus on developing age-appropriate pedagogical frameworks for teaching prompt engineering, investigating the long-term impact of prompt engineering skills on learning outcomes across disciplines, and exploring how prompt engineering competencies might transfer to other metacognitive domains. As LLMs become increasingly integrated into educational environments, the ability to effectively engineer prompts emerges not merely as a technical skill but as a critical component of digital literacy essential for 21st-century learning.