Preprint · Article · This version is not peer-reviewed.

Exploring Challenges and Opportunities in Artificial Intelligence (AI) Literacy and Educational AI Development: A Qualitative Study of Teachers' and Researchers' Perspectives

Submitted: 13 August 2025 · Posted: 14 August 2025

Abstract
This qualitative study examined how educators and researchers perceived the challenges and opportunities related to both AI literacy and educational AI development. Drawing on interviews with 15 participants from education and research contexts, the study identified four major opportunity areas: the establishment of national AI literacy frameworks, interdisciplinary project-based learning, ethics-integrated curricula, and the use of simulation-based tools for both instruction and assessment. In contrast, five key challenges emerged, including fragmented definitions of AI literacy, insufficient teacher expertise, unequal access to infrastructure, overreliance on generative AI by students, and the absence of sustained policy and financial support. Participants also highlighted the lack of standardized assessment tools to evaluate AI literacy competencies, which limits effective curriculum implementation and policy evaluation. Addressing a critical gap in the literature, this study contributed a much-needed qualitative perspective on how educational professionals understand and navigate the evolving landscape of AI in education. The findings emphasized the importance of treating AI literacy and educational AI development as central to contemporary education, requiring systemic integration, professional development, and equitable access.

1. Introduction

Artificial Intelligence (AI) is being integrated into classrooms with increasing speed and visibility. UNESCO has called for all learners to acquire the “knowledge, skills, and values” necessary for informed interaction with AI systems, referring to this competence as “a basic grammar of our century” [1]. In parallel, the rise of generative AI models such as ChatGPT has intensified both interest and concern, offering potential benefits like personalized feedback, but also posing risks related to academic integrity and the spread of misinformation [2]. These developments reflect two converging trends in education: the proliferation of AI-driven tools for teaching and learning, and the emergence of AI literacy as a foundational competence for students and educators alike.
AI literacy is generally understood to involve three key components: conceptual understanding of how AI systems function, practical ability to use these systems, and ethical awareness of their societal implications [3]. While related to digital literacy, data literacy, and computational thinking, AI literacy extends these frameworks by foregrounding issues such as algorithmic bias, opacity, and responsible use [1,3]. As intelligent tutoring systems demonstrate measurable learning gains [4,5], and teacher surveys reveal ambivalence about the role of generative AI in classrooms [6,7], it becomes increasingly important to understand how educators and researchers perceive the promises and risks of these technologies.
Most existing literature focuses on curriculum design, technical tool development, or quantitative assessments of learning outcomes [2]. However, there remains a lack of qualitative studies that examine how educational professionals interpret and experience AI integration in practice. This study addresses that gap through two guiding research questions:
RQ1: What opportunities do teachers and researchers see in AI literacy and educational AI development?
RQ2: What challenges do they identify in AI literacy and educational AI development?
Through in-depth interviews, this study seeks to provide insight into how those directly involved in teaching and research understand and respond to the dual movements of AI literacy promotion and educational AI implementation.

2. Related Work

AI literacy is commonly defined as comprising three interconnected dimensions: conceptual understanding of AI technologies, practical skills in using AI tools, and ethical awareness of AI’s societal impact. Long and Magerko’s influential framework [3] emphasizes this technical-practical-ethical triad, while subsequent models have expanded the scope to include learner agency, motivation, and civic responsibility [2]. Increasingly, ethics is seen as central, with international bodies such as UNESCO underscoring the need for individuals to recognize algorithmic bias, question AI outputs, and understand when not to rely on automated systems [1].
In terms of policy and curriculum development, global progress remains uneven. A 2022 UNESCO survey [1] found that only eleven countries had adopted official K-12 AI curricula, and called for better alignment of learning progressions across age groups and systems. UNESCO’s follow-up competency framework identifies twelve core skills, spanning knowledge of AI concepts, tool usage, ethical judgment, and individual agency in AI-rich contexts [8].
Some nations, such as South Korea and Singapore, have adopted coordinated national strategies that combine mandated AI coursework with systematic teacher training and dedicated AI-enabled classrooms [9]. In contrast, other countries tend to rely on smaller-scale, locally driven initiatives. In the U.S., the AI4K12 Initiative proposes a widely used structure based on five “big ideas” in AI education [2], and many programs utilize project-based learning to integrate technical and ethical learning goals [1].
In parallel, the development and deployment of educational AI tools—ranging from intelligent tutoring systems to automated feedback platforms—have shown tangible instructional benefits [5,10]. Meta-analytic studies indicate that adaptive learning systems can significantly improve student outcomes and reduce teacher workload [4,11,12].
More recently, generative AI models have expanded the range of educational applications, supporting lesson planning, content generation, and formative feedback. However, surveys reveal mixed perceptions among educators: only six percent of U.S. K–12 teachers view AI tools as doing more good than harm, while a quarter perceive them as more harmful [7]. Key obstacles to effective adoption include uneven infrastructure, limited teacher training, and persistent uncertainty around ethical use. Many educators still lack the confidence or support to teach AI-related content or incorporate AI tools effectively [13], and structural issues such as inconsistent benchmarks [2], insufficient professional development opportunities [14,15], and unresolved concerns around bias, privacy, and academic integrity further complicate implementation [16].
Despite growing attention to both AI literacy and educational AI tools, existing research has largely focused on frameworks, technical interventions, or quantitative evaluations. There is a notable lack of qualitative studies that explore educators’ and researchers’ firsthand experiences and interpretations of these developments. Prior reviews have emphasized this gap, noting that the voices of practitioners remain underrepresented in the literature [2]. This study addresses that gap by providing empirical, interview-based insights from teachers and researchers directly involved in AI education. In doing so, it offers a grounded perspective on how AI literacy and educational AI are currently understood and enacted in higher education settings.

3. Methodology

This study employed a grounded theory approach to explore how educators and researchers perceive the integration of AI into education. Fifteen participants were recruited from higher education institutions and research contexts, all of whom had experience teaching AI-related content or researching AI in education. Semi-structured interviews were conducted via voice call and lasted between 30 and 60 minutes.
The interview protocol covered participants’ definitions of AI literacy, current classroom practices involving AI tools, perceived opportunities and challenges, needed supports, and future perspectives. All interviews were transcribed in full and analyzed through a three-stage coding process. Open coding was first used to identify initial concepts, followed by axial coding to organize these into thematic categories, and finally selective coding to develop core themes relevant to the research questions.
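To make the three-stage analysis concrete, the sketch below shows how open codes can be rolled up into axial categories and then into core themes. It is a minimal, hypothetical illustration only: the excerpts and code labels are invented for exposition and are not drawn from the study's actual codebook.

```python
from collections import defaultdict

# Hypothetical excerpts and labels, not the study's actual codebook.
# Stage 1 - open coding: tag raw interview excerpts with initial concepts.
open_codes = {
    "each teacher is cobbling together their own AI lessons": ["no shared curriculum"],
    "colleagues wouldn't have the time to self-learn AI": ["teacher workload", "limited training"],
    "the lab had maybe four working computers": ["hardware shortage"],
}

# Stage 2 - axial coding: group related open codes into thematic categories.
axial_map = {
    "no shared curriculum": "fragmented benchmarks",
    "teacher workload": "insufficient teacher expertise",
    "limited training": "insufficient teacher expertise",
    "hardware shortage": "infrastructure disparities",
}

# Stage 3 - selective coding: collect categories into core themes keyed to the RQs.
core_themes = defaultdict(list)
for excerpt, codes in open_codes.items():
    for code in codes:
        core_themes[axial_map[code]].append(excerpt)

for theme, excerpts in sorted(core_themes.items()):
    print(f"{theme}: {len(excerpts)} supporting excerpt(s)")
```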
In the text, participants are referred to using identifiers such as A1, A2, etc. These codes represent different interviewees, numbered in the order in which their quotes appear in the text. The same code (e.g., A1) is used consistently to indicate the same participant.

4. Results

4.1. RQ1: Opportunities in AI Literacy and Educational AI Development

Participants highlighted four major opportunity areas for enhancing AI integration in education: (1) development of national AI literacy progression frameworks, (2) interdisciplinary project-based learning (PBL), (3) ethically guided curricula, and (4) simulation-based learning and assessment tools. Below, each theme is discussed with supporting participant commentary clearly demarcated.

4.1.1. Opportunity 1: Establishment of National AI Literacy Progression Frameworks

Participants viewed the development of national or statewide AI literacy frameworks as a critical opportunity to bring structure and coherence to AI education. They emphasized that such frameworks could standardize what students learn at different educational levels, reduce disparities across schools, and signal that AI literacy is a legitimate and essential part of contemporary education.
Several participants cited current efforts in countries like Singapore and South Korea as promising examples of top-down coordination. These efforts were seen as providing teachers with guidance and ensuring that all students advance through AI concepts in a logical and consistent way. The literature supports this need: UNESCO’s global review similarly called for national frameworks with modular and sequential competencies that can be adapted to local contexts [1].
A1: “Right now, it feels like the Wild West – each teacher is cobbling together their own AI lessons. If the education ministry (department) publishes an AI literacy framework, suddenly we all have a shared game plan. It would outline what a 5th grader, an 8th grader, and a 12th grader should know about AI. I think that’s hugely beneficial because it sets targets and ensures continuity.”
Beyond curricular structure, participants also saw frameworks as a means of legitimizing AI literacy. When AI education is integrated into national standards, teachers are more likely to treat it as a core responsibility, and students gain more equitable access to foundational knowledge.
As A2 shared: “Teachers will take AI literacy more seriously when it’s in the standards. It moves from a novelty topic to a must-do. And for students, it means by the time they graduate, they haven’t missed out – they’ll have at least a baseline AI knowledge.”
Participants suggested that AI literacy frameworks could build upon existing models, such as national digital literacy standards, but be expanded to include competencies unique to AI, such as understanding model behavior, recognizing bias, and engaging with ethical implications. This aligns with wider calls in AI education for structured learning progressions and scaffolds for curriculum and assessment developers.
However, there was also consensus that merely having a framework is insufficient. Effective implementation requires flexibility, adaptability to school contexts, and meaningful teacher involvement in the design process. Participants emphasized that teacher input is essential to ensure that frameworks are not overly abstract or unrealistic.
In summary, participants saw the creation of national frameworks as a necessary foundation for consistent, equitable, and scalable AI literacy education, provided those frameworks are grounded in classroom realities and supported by implementation mechanisms.

4.1.2. Opportunity 2: Interdisciplinary Project-Based Learning (PBL)

Participants consistently highlighted interdisciplinary project-based learning (PBL) as a powerful and accessible method for integrating AI into classrooms. They argued that AI, by nature, intersects with multiple disciplines, making it well-suited for cross-curricular projects that involve real-world problem solving and collaborative inquiry.
Such projects not only allow students to engage with the technical aspects of AI but also encourage them to explore its broader societal implications. For example, one participant described a student project that connected computer science, environmental science, and geography.
A3 shared: “AI shouldn’t be a siloed topic. We had a project last term – students trained a simple image recognition model to distinguish healthy vs. polluted water samples (that was in science class), then in geography class, they mapped out which communities might benefit most from such AI monitoring. They loved it because it felt meaningful. We hit science, CS, and social studies outcomes all in one go.”
Projects like these were seen as particularly effective in engaging students, fostering critical thinking, and helping them apply AI knowledge in practical contexts. This aligns with prior research indicating that project-based learning is especially suited for AI education, as it renders abstract technical ideas more concrete and meaningful [16].
In addition to benefits for students, interdisciplinary PBL was viewed as a support mechanism for teachers. Educators noted that collaboration across disciplines can reduce the hesitation many teachers feel when teaching unfamiliar technical content.
As A4 explained: “Interdisciplinary projects involving AI can also reduce the fear factor for teachers. A math teacher might be hesitant to teach about AI alone, but if they collaborate with the computer science teacher on a joint project, they both learn and teach together.”
This kind of co-teaching or cross-department collaboration was seen as a pathway for professional learning, enabling educators to build collective expertise and confidence.
Finally, this interdisciplinary approach was seen as inherently suited to exploring the social and ethical dimensions of AI. Because students address authentic, context-specific problems, issues such as fairness, bias, and societal impact can emerge organically through project work, providing a bridge to the next opportunity theme on ethics in AI education.

4.1.3. Opportunity 3: Ethically Guided Lesson Frameworks

Participants emphasized the importance of embedding ethics into AI education from the outset. They argued that AI should not be presented solely as a technical subject; rather, instruction should be grounded in a broader civic and ethical context. Several noted that the fast-growing presence of AI in society provides a critical window to help students build habits of reflection, responsibility, and social awareness.
This perspective aligns with the concept of critical AI literacy, which encourages learners not only to understand and use AI but to assess its limitations, fairness, and appropriateness in real-world settings [2,17]. Participants saw the classroom as an ideal space to foster this kind of inquiry, especially if ethical discussion is integrated directly into technical content, rather than treated as an optional add-on.
A5 explained: “We have a chance to do it right with AI literacy—to bake in ethics and empathy. Think of it like dissecting a frog in biology class, but also discussing bioethics. In AI, if students learn about facial recognition, in the same breath, they should be discussing privacy and bias. I’m excited about lesson plans that make ethics a core learning outcome, not an afterthought.”
Participants described actual classroom practices that reflect this approach. One middle school module, for instance, had students train a basic image classifier, but the lesson’s main focus was evaluating fairness in how the model performed on different demographic groups. Students then debated strategies to improve the model or mitigate the bias it reflected. These activities were not only technically instructive but also encouraged critical thinking and ethical reasoning.
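To make the fairness exercise concrete, the sketch below computes per-group accuracy for a classifier's predictions, which is the kind of check the module described above asks students to perform. All data here are synthetic and the group labels hypothetical; in a classroom, students would substitute their own model's outputs.

```python
import numpy as np

# Synthetic stand-in for a classroom fairness check: labels, groups, and
# predictions are all simulated, not real student or model data.
rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=200)   # demographic attribute
y_true = rng.integers(0, 2, size=200)                   # ground-truth labels

# Simulate a model that errs more often on group_b (35% vs. 10% error rate).
error_rate = np.where(groups == "group_b", 0.35, 0.10)
flip = rng.random(200) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

# Per-group accuracy: a large gap between groups is the signal of bias
# that students would then debate how to mitigate.
for g in ("group_a", "group_b"):
    mask = groups == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"{g}: accuracy = {acc:.2f} (n = {mask.sum()})")
```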
Several participants also mentioned growing institutional support for these approaches. They pointed to international frameworks—such as UNESCO’s AI competency guidelines—which explicitly call for ethical awareness and critical use as key learning outcomes [8].
A6 elaborated: “International bodies like UNESCO have stressed that AI literacy must include understanding what AI can and can’t do—and when its use should be questioned. We’re finally seeing teaching resources that put ethics at the forefront. I see that as a huge opportunity—to cultivate not just coders, but conscientious coders and users of AI.”
Finally, participants observed that today’s media environment—where students frequently encounter stories about deepfakes, biased algorithms, or AI surveillance—makes the topic especially relevant. Teachers can leverage this familiarity as an entry point to more structured discussions, helping students turn casual exposure into informed engagement.

4.1.4. Opportunity 4: Simulation-Based AI Learning and Assessment Tools

Participants identified simulation-based and interactive tools as a promising pathway for AI education. These technologies allow students to manipulate key parameters of AI systems and observe outcomes directly, enabling experiential learning without requiring extensive coding skills. Tools such as browser-based neural network simulators or AI-driven virtual labs provide an accessible way for learners to explore how AI models function and respond to input.
A7 shared: “We now have these neat simulators – one I use lets students visually train a neural network to recognize doodles. It’s all drag-and-drop and sliders; no code needed. The kids learn by doing: they adjust the training epochs or dataset size and immediately see if the AI gets better. It’s a sandbox for understanding AI behavior. The opportunity here is that AI doesn’t stay an abstract idea – it becomes something they can play with.”
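The sketch below illustrates, under stated assumptions, the kind of experiment such simulators expose: a toy single-neuron classifier trained on synthetic data, where `epochs` and `n_samples` stand in for the sliders A7 describes. It is not any particular tool's implementation, only a minimal model of the underlying behavior.

```python
import numpy as np

def train_toy_classifier(epochs: int, n_samples: int, seed: int = 0) -> float:
    """Train a logistic-regression 'neuron' on synthetic 2-D data and return
    held-out accuracy. The two parameters mimic a simulator's sliders."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)        # linearly separable task
    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(epochs):                           # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid activation
        w -= lr * X.T @ (p - y) / n_samples
        b -= lr * (p - y).mean()
    X_test = rng.normal(size=(500, 2))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(float)
    preds = (1.0 / (1.0 + np.exp(-(X_test @ w + b))) > 0.5).astype(float)
    return float((preds == y_test).mean())

# Moving the "sliders": more epochs or more data should raise accuracy,
# which is exactly the cause-and-effect loop students observe.
for epochs, n in [(5, 50), (50, 50), (50, 500)]:
    acc = train_toy_classifier(epochs, n)
    print(f"epochs={epochs:3d}, samples={n:3d} -> accuracy={acc:.2f}")
```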
Beyond instruction, participants viewed these tools as useful for assessment. Rather than relying solely on traditional testing methods, simulation-based environments can be used to evaluate applied understanding, for example, by assigning students to improve a model’s performance and justify their choices. This approach provides insight into both conceptual grasp and problem-solving strategies.
A8 explained: “Simulation-based assessments that measure AI literacy skills in action, not just definitions, are very promising. Imagine an AI tutor that observes how a student interacts with a simulated AI system and can gauge their understanding from that. We’re not there yet, but I see the beginnings of it.”
Participants also discussed the use of AI itself as a pedagogical agent, particularly in the form of intelligent tutoring systems. These AI-driven tutors could offer personalized explanations of machine learning concepts, respond to student queries, and adapt based on learner progress.
A9 remarked: “It’s meta—an AI teaching AI. But it actually works surprisingly well for basics. And students feel more comfortable asking a bot ‘dumb’ questions they might not ask a teacher.”
Lastly, participants noted that many of these interactive tools originate in research settings and have not yet been adopted widely in schools. They called for stronger collaboration between educational technologists and practitioners to scale these innovations. This vision aligns with constructionist learning theories, where learners actively build and interact with models as a way to internalize concepts. With appropriate design and support, simulation-based tools could move AI education from abstract explanation to hands-on discovery.

4.2. RQ2: Challenges in AI Literacy and Educational AI Development

Despite their optimism, participants also identified five key challenges currently hindering the development and implementation of AI literacy in education: (1) fragmented definitions and inconsistent benchmarks, (2) limited teacher expertise and professional development, (3) infrastructure and resource disparities, (4) overreliance on AI tools by students, and (5) lack of sustained policy and financial support. Each challenge is detailed below, with illustrative quotes from participants presented separately for clarity.

4.2.1. Challenge 1: Fragmented Definitions and Benchmarks for AI Literacy

Participants consistently pointed to the lack of clear, shared definitions and progression benchmarks as a foundational obstacle in AI literacy efforts. Without agreement on what AI literacy entails at different educational stages, implementation remains inconsistent across schools, districts, and countries. Some educators focus heavily on coding or technical content, while others emphasize ethical debates or simple application exposure. This misalignment creates disparities in student learning experiences and prevents system-wide coherence.
A13: “Ask ten people what AI literacy means and you’ll get ten answers. Some think it’s just knowing how Alexa works, others think it’s being able to code an AI. So when schools try to ‘do AI’, it’s all over the map. We don’t have the equivalent of a Common Core for AI. That fragmentation is a big issue – we might think we’re all talking about the same thing, but we’re not.”
This definitional ambiguity, also noted in the literature [2], complicates curriculum design, content development, and teacher preparation. Some students may receive in-depth instruction on machine learning, while others encounter only surface-level discussions, with no shared benchmarks to ensure equitable coverage or progression.
A14: “We don’t yet have an age-appropriate learning progression for AI concepts. What should a 3rd grader know about AI versus a 7th grader? There’s work underway (like AI4K12’s big ideas), but it’s not in mainstream use. This makes it hard to benchmark and also hard for textbook publishers or educational companies to create materials, because they don’t have guidelines on complexity for each grade.”
The absence of clear learning progressions also affects assessment. Few standardized tools exist to evaluate AI literacy, making it difficult for educators to measure learning or for researchers to compare program effectiveness. One early attempt—the Generative AI Literacy Assessment Test (GLAT)—focuses on higher education and remains limited in scope [18]. Without such metrics, participants warned, policymakers may struggle to justify investment or track outcomes.
Participants called for collaborative efforts across educational, governmental, and industry sectors to develop age-banded, modular benchmarks that describe both conceptual understanding and applied competence. These would not only guide instruction but also support the design of aligned resources and assessments.
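One way to picture such benchmarks is as age-banded data that curriculum and assessment designers could build against. The sketch below is entirely hypothetical: band boundaries and competency wording are invented for illustration, not proposed standards.

```python
# Hypothetical age-banded AI literacy benchmark expressed as plain data;
# every band and competency string here is invented for illustration.
benchmarks = {
    "grades_3_5": {
        "conceptual": "recognize everyday systems that use AI (e.g., voice assistants)",
        "applied": "interact with a simple classifier and describe what it does",
    },
    "grades_6_8": {
        "conceptual": "explain how training data shapes a model's behavior",
        "applied": "train a small model and identify cases where it fails",
    },
    "grades_9_12": {
        "conceptual": "analyze sources of algorithmic bias and their societal impact",
        "applied": "evaluate an AI tool's fitness for a real task, citing evidence",
    },
}

for band, competencies in benchmarks.items():
    print(band)
    for kind, descriptor in competencies.items():
        print(f"  {kind}: {descriptor}")
```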

4.2.2. Challenge 2: Insufficient Teacher Expertise and Professional Development

A widely cited challenge among participants was the limited capacity of teachers to deliver AI-related instruction. Participants observed that most teachers lack formal training in AI or computer science, and that even technologically adept teachers may struggle with pedagogical strategies for teaching AI. The result is varying degrees of hesitation, inconsistent implementation, and, in some cases, outright avoidance of AI topics in the classroom.
A1 explained: “The typical teacher is overworked and already has to manage a heavy curriculum. Expecting them to also become AI experts overnight is unrealistic. We’re dumping ‘teach AI’ on them without giving the tools or training.”
A9 shared this struggle: “When I first tried to bring AI into my class, I quickly realized I was out of my depth on the technical side. I had to self-learn using online courses late at night. It worked out for me, but most of my colleagues just wouldn’t have the time or inclination to do that.”
This lack of preparation can have several consequences. Teachers may inadvertently promote misconceptions, avoid in-depth or hands-on projects, or feel anxious about being unable to answer student questions—issues corroborated by national surveys that show widespread teacher uncertainty about AI tools [19].
Although professional development (PD) programs for AI education are beginning to emerge, participants noted that they are currently limited in reach and sustainability. Examples such as the U.S.-based “TeachAI” initiative and Singapore’s plan to train all teachers in AI basics by 2026 [9] were seen as promising, but still far from universal. Studies such as Casal-Otero et al. [15] stress that one-off training sessions are insufficient; effective PD must be sustained, co-designed with educators, and embedded within broader learning communities.

4.2.3. Challenge 3: Limited Infrastructure and Resource Disparities

Participants consistently raised concerns about unequal access to infrastructure, which threatens to undermine AI literacy efforts and widen existing educational inequities. Many AI learning activities rely on students having access to devices and reliable internet. However, this cannot be assumed in all settings.
A12 stated: “There are schools even in developed countries where students can’t be guaranteed a laptop in class or at home. How do we teach them AI, which often requires using online tools or coding environments?”
Beyond hardware availability, participants emphasized the resource intensity of AI projects. Training models or running simulations can demand higher-spec machines or access to cloud services, which may be unaffordable for many schools. One participant described having to resort to unplugged teaching approaches due to limited lab capacity.
A7 explained: “We wanted to do a data science project, but the lab had maybe four working computers. So we ended up teaching the concepts on paper. It’s not the same—you lose that hands-on experience.”
This digital divide also applies to access to software. Some AI education tools require paid licenses or come with usage limits that restrict classroom implementation. For example, free browser-based simulators may only permit limited sessions per IP address. Teachers at under-resourced schools reported needing to work around such constraints or abandon certain tools entirely. These disparities reflect observations in broader studies, such as the AI Index Report, which noted that infrastructure gaps remain a barrier in many regions [13].
Another layer is language accessibility. Most AI teaching tools and resources are available only in English, which presents a significant obstacle in non-English speaking contexts.
A14 commented: “We spend hours translating materials or trying to adapt them for our local curriculum. There’s no support for this, but if we don’t do it, our students are excluded.”
To mitigate these disparities, participants suggested targeted interventions such as government grants to improve school infrastructure, the development of open-source tools, and public-private partnerships. Some proposed shared national platforms or “cloud labs” for schools to run AI experiments without needing powerful local hardware.
Overall, participants warned that without explicit efforts to close the infrastructure gap, AI literacy may become yet another domain where only well-resourced schools can meaningfully participate.

4.2.4. Challenge 4: Overreliance on Large Language Models (and other AI) by Students

Participants expressed concern about the increasing tendency of students to depend heavily on generative AI tools—particularly large language models (LLMs) like ChatGPT—for completing academic tasks. While these tools can enhance learning, participants noted that excessive reliance risks undermining students’ development of essential skills, including reasoning, writing, and coding.
A15 reflected on this issue: “This past year, I’ve had an influx of assignments that were clearly AI-generated. The students aren’t learning the material; they’re learning to prompt ChatGPT. It’s a challenge because you want them to use new tools, but not to the point that it replaces their thinking.”
This comment captures a key tension: students should be encouraged to engage with AI tools, but not in ways that displace core learning processes. Several participants shared similar accounts of students using AI to write essays, summarize readings, or solve programming assignments—often without teacher guidance or permission.
The ethical implications were also raised. Since AI can generate original-looking responses, plagiarism detection tools may not flag AI-generated content, complicating academic honesty enforcement. Some educators have responded by adapting assessment formats—for example, implementing oral exams or in-class coding tasks. Others are experimenting with assignments that integrate AI use transparently, such as requiring students to generate an AI draft and then critically revise it. However, many teachers lack the time or resources to redesign assessments in this way.
This challenge is closely tied to gaps in AI literacy itself. Participants pointed out that students often lack the critical skills to evaluate AI outputs. Some trust results too readily without verification.
A8 recounted an incident: “A student submitted an answer that was clearly wrong, but insisted it was correct because ‘the computer said so.’ That blind trust is worrying.”
Developing students’ ability to question and cross-check AI-generated content was viewed as an essential literacy skill that is currently underdeveloped.
Some participants argued that the solution is not to restrict AI use but to embed structured AI literacy into the curriculum. Teaching students about the capabilities and limitations of LLMs—including concepts like hallucination—can help them become more discerning users.
Nonetheless, in the current environment, many educators see student overuse of AI as a threat to academic integrity and learning outcomes. This concern is consistent with recent teacher surveys, which suggest that many in K–12 settings perceive AI tools as doing more harm than good in classrooms at present [7].

4.2.5. Challenge 5: Lack of Policy Continuity and Financial Support

A widely shared concern among participants was the lack of long-term policy commitment and sustainable funding for AI education. Many initiatives were described as short-lived, often beginning with promise through pilot programs or one-time grants, but lacking the structural support to continue over time.
A2 recounted a typical scenario: “We had a state-funded AI pilot program for high schools – it was great, we trained teachers, developed materials. But it was a 2-year pilot. After that, no more money. Some schools tried to keep it going, but without dedicated funds and with teacher turnover, it fizzled out in most places. It feels like lots of reinventing the wheel whenever a new grant comes around.”
This illustrates how enthusiasm and early momentum are often lost when temporary funding expires, especially in the absence of systemic integration. Unlike traditional subjects like math or reading, AI education currently lacks stable budget lines or staffing commitments in most education systems.
Participants also raised the issue of policy inconsistency. Because educational priorities often shift with political changes, AI literacy is at risk of being deprioritized.
A14 emphasized this point: “We need AI literacy to be institutionalized. As long as it’s treated as an experiment or enrichment, it’s vulnerable. If a national curriculum board mandates AI topics, they must also commit to funding it year after year and updating it. Otherwise, schools won’t invest the effort because they think it’s a passing fad.”
Overall, participants called for more robust advocacy to demonstrate the value of AI literacy to policymakers. However, they also acknowledged the difficulty: empirical evidence on long-term impact is still being developed, making it harder to secure institutional buy-in.

5. Conclusions and Discussion

This study examined how educators and researchers perceive both the potential and the barriers surrounding AI literacy and educational AI integration.
Regarding RQ1, participants described several promising directions. The strongest consensus emerged around the need for national or system-level AI literacy frameworks that define clear learning progressions across grade levels. Participants also emphasized the value of interdisciplinary, project-based learning as a means of contextualizing AI within real-world problem-solving, promoting both technical understanding and ethical reflection. Other highlighted opportunities included embedding ethics within lesson design and using simulation-based tools for experiential learning and assessment. These opportunities reflect a growing awareness that AI education must not be narrowly technical, but instead support broader civic and critical competencies.
In response to RQ2, five key areas of concern were identified. Participants pointed to fragmented definitions of AI literacy and the absence of shared benchmarks, which hinder coherence across schools. Limited teacher expertise and lack of professional development were cited as major obstacles to implementation, especially given the rapidly evolving nature of AI technologies. Infrastructure disparities—especially related to hardware, internet access, and language-localized content—were seen as reinforcing existing inequalities. Educators also noted a growing overreliance on generative AI tools by students, which raises questions about academic integrity and the erosion of foundational skills. Finally, many initiatives were described as short-lived due to unstable policy support and funding, limiting long-term impact and continuity.
Taken together, the findings suggest that while enthusiasm and innovation in AI education are growing, systemic limitations remain. The opportunities highlighted by participants depend heavily on structural supports, such as clear policies, sustained teacher development, equitable infrastructure, and coherent curricula. Without these foundations, even well-designed AI programs may fail to scale or endure.
As AI becomes increasingly embedded in educational tools and practices, there is a pressing need to move beyond pilot initiatives toward comprehensive, policy-aligned, and equity-focused strategies. AI literacy should not be seen as a niche topic but as a core educational objective—one that prepares students not only to use AI but to understand and question its role in society.

References

  1. United Nations Educational, Scientific and Cultural Organization (UNESCO), ‘K-12 AI Curricula: A Mapping of Government-Endorsed AI Curricula’, UNESCO, 2022. [Online]. Available: https://unesdoc.unesco.org/ark:/48223/pf0000380602.
  2. X. Gu and B. J. Ericson, ‘AI Literacy in K-12 and Higher Education in the Wake of Generative AI: An Integrative Review’, 2025, arXiv preprint arXiv:2503.00079.
  3. D. Long and B. Magerko, ‘What is AI Literacy? Competencies and Design Considerations’, in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2020, pp. 1–16. [CrossRef]
  4. J. A. Kulik and J. D. Fletcher, ‘Effectiveness of intelligent tutoring systems: a meta-analytic review’, Review of Educational Research, vol. 86, no. 1, pp. 42–78, 2016.
  5. W. C. Choi and C. I. Chang, ‘A Survey of Techniques, Design, Applications, Challenges, and Student Perspective of Chatbot-Based Learning Tutoring System Supporting Students to Learn in Education’, 2025, Preprints.org. [CrossRef]
  6. G. Toppo, ‘National ChatGPT Survey: Teachers Accepting AI Into Classrooms & Workflow—Even More Than Students’, The74, 2023.
  7. L. Lin, ‘A Quarter of U.S. Teachers Say AI Tools Do More Harm Than Good in K-12 Education’. [Online]. Available: https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/.
  8. F. Miao, K. Shiohira, and others, AI competency framework for students. UNESCO Publishing, 2024.
  9. R. Lake, ‘Shockwaves and Innovations: How Nations Worldwide Are Dealing with AI in Education’. [Online]. Available: https://crpe.org/shockwaves-and-innovations-how-nations-worldwide-are-dealing-with-ai-in-education/.
  10. W. C. Choi, I. C. Choi, and C. I. Chang, ‘The Impact of Artificial Intelligence on Education: The Applications, Advantages, Challenges and Researchers’ Perspective’, 2025, Preprints.org.
  11. K. Johnson, ‘California Teachers Are Using AI to Grade Papers. Who’s Grading the AI?’ [Online]. Available: https://calmatters.org/economy/technology/2024/06/teachers-ai-grading/.
  12. S. Barrilleaux, ‘AI May Speed Up the Grading Process for Teachers’. [Online]. Available: https://news.uga.edu/ai-may-help-speed-up-grading/.
  13. Stanford Institute for Human-Centered Artificial Intelligence (HAI), ‘The 2025 AI Index Report’, Stanford University, 2025. [Online]. Available: https://hai.stanford.edu/ai-index/2025-ai-index-report.
  14. M. Lucas, Y. Zhang, P. Bem-haja, and P. N. Vicente, ‘The interplay between teachers’ trust in artificial intelligence and digital competence’, Education and Information Technologies, vol. 29, no. 17, pp. 22991–23010, 2024. [CrossRef]
  15. L. Casal-Otero, A. Catala, C. Fernández-Morante, M. Taboada, B. Cebreiro, and S. Barro, ‘AI literacy in K-12: a systematic literature review’, International Journal of STEM Education, vol. 10, no. 29, 2023. [CrossRef]
  16. W. C. Choi and C. I. Chang, ‘Enhancing Education with ChatGPT 4o and Microsoft Copilot: A Review of Opportunities, Challenges, and Student Perspectives on LLM-Based Text-to-Image Generation Models’, 2025. [CrossRef]
  17. W. C. Choi, C. I. Chang, I. C. Choi, L. C. Lam, K. I. Leong, and S. I. Ng, ‘Artificial Intelligence (AI) Literacy in Education: Definition, Competencies, Opportunities and Challenges’, Preprints, Aug. 2025. [CrossRef]
  18. Y. Jin, R. Martinez-Maldonado, D. Gašević, and L. Yan, ‘GLAT: The Generative AI Literacy Assessment Test’, Computers and Education: Artificial Intelligence, vol. 6, p. 100436, 2025. [CrossRef]
  19. A. S. Almogren, W. M. Al-Rahmi, and N. A. Dahri, ‘Exploring factors influencing the acceptance of ChatGPT in higher education: A smart education perspective’, Heliyon, vol. 10, no. 11, 2024. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.