
This version is not peer-reviewed.

AI-Enabled Framework for Program and Course Design in Higher Education

Submitted: 06 December 2025
Posted: 08 December 2025


Abstract
Background: Artificial intelligence is reshaping higher education, yet most institutions still rely on ad-hoc experiments rather than a holistic, evidence-based strategy for curriculum innovation. Purpose: This study develops and proposes a comprehensive framework that helps universities integrate AI ethically and systematically into program and course design, ensuring alignment with learner needs, labour-market skills, and quality standards. Methods: Employing an integrative secondary research design, we conducted a structured review of peer-reviewed articles, policy documents, and institutional case studies published between 2018 and 2025. Forty high-quality sources passed rigorous screening for relevance, credibility, and methodological soundness. Extracted data were coded thematically and synthesised into recurring practices, enablers, challenges, and ethical considerations, which collectively informed framework construction. Results: AI adoption in curriculum design is global but uneven; leading institutions report gains in student retention, skills alignment, and design efficiency, while lagging peers cite insufficient faculty training, unclear policies, and ethical concerns. Synthesised findings yielded a three-layer framework: (1) program-level guidance that uses AI analytics for outcome formulation, skills mapping, and curriculum sequencing; (2) course-level guidance that positions AI as a co-designer for content generation, adaptive assessment, and personalised feedback; and (3) cross-cutting foundations covering governance, responsible AI use, quality assurance, capacity building, and sustainability. Conclusions: The proposed framework offers a scalable pathway for data-driven, learner-centred, and ethically responsible curriculum innovation. Its adoption can enhance institutional agility and graduate employability, though empirical validation across diverse contexts remains a priority for future research.
Subject: Social Sciences – Education

1. Introduction

1.1. Background

Artificial Intelligence (AI) is increasingly at the forefront of global higher education transformation (Bond et al., 2024). Universities worldwide are experimenting with AI tools to personalize learning, automate administrative tasks, and enhance decision-making in curriculum design (UNESCO, 2025). In a recent UNESCO survey of 400 higher education institutions across 90 countries, nearly two-thirds reported either having or developing formal guidance on AI use (UNESCO, 2025). This surge is driven by AI’s potential to make education more data-driven and learner-centered (Suh, 2025). Current models of AI-supported curriculum innovation span predictive analytics for student success, generative AI for content creation, and adaptive learning systems that tailor instruction to individual needs (Muncey, 2025). For example, AI-driven platforms are being used alongside traditional teaching at leading universities – a Stanford study highlights how AI can summarize complex texts, debug code, and even generate instructional media, illustrating the breadth of AI’s utility in education (Muncey, 2025). Furthermore, experimental AI-based curricula have yielded promising outcomes: one study reported that an AI-personalized curriculum significantly improved student completion (89.7% vs. lower rates in the control) and retention (91.4%) while reducing dropout to under 5% (Chu & Ashraf, 2025). These trends underscore AI’s role as a catalyst for curriculum innovation, transforming how programs and courses are designed to align with evolving learner and workforce needs.
Although this expanding body of work is valuable, most of it treats AI as an instrument of delivery rather than design. In other words, AI tools are deployed as tutors, explainers, recommenders, or graders that interact directly with learners in real time, while the deeper work of deciding what should be taught, in what sequence, at which level of difficulty, and under which ethical and regulatory constraints typically remains a human-only, pre-implementation activity. Recent meta-systematic reviews of AI in higher education reach similar conclusions, noting that the bulk of empirical work has concentrated on learning analytics, predictive modelling, and AI-mediated tutoring, with relatively little conceptual or empirical attention to AI as a partner in curriculum or programme design itself (Bond et al., 2024; EDUCAUSE, 2024). By contrast, this article focuses on AI-supported design: how AI can inform programme-level architecture, course-level syllabi, and assessment blueprints before they ever reach the learner.

1.2. Problem Statement

Despite the growing use of AI in educational settings, there remains a notable absence of a comprehensive, evidence-based framework to guide the design and development of academic programs and courses. Early institutional responses to tools like ChatGPT were often ad-hoc – balancing cautious restrictions with isolated innovations (Jin et al., 2024) – but lacked an overarching strategy. Prior studies and policies have tended to focus narrowly on the benefits or risks of AI (e.g. academic integrity concerns, efficiency gains) without integrating these insights into a holistic design framework (Jin et al., 2024). As a result, institutions struggle with fragmented approaches: many faculty and staff express uncertainty about how to effectively and ethically apply AI pedagogically (UNESCO, 2025). Common barriers include a lack of awareness and training among educators, ethical and privacy concerns, and insufficient policy guidance at the institutional level (Chu & Ashraf, 2025). In practice, AI adoption in curriculum design is often piecemeal – one course or department at a time – rather than guided by a unified model. This fragmentation points to the need for an evidence-backed framework that can systematically assist universities in leveraging AI from program-level curriculum planning down to daily course development. In summary, there is a critical gap in research and practice: while emerging models such as Jisc's AI maturity toolkit and AI-enabled course design cycles in specific domains are valuable, there is still no widely accepted, cross-programme framework that explicitly integrates institutional strategy, programme- and course-level design decisions, and normative (ethical and theological) commitments in a coherent way. Addressing this gap is the central problem that this paper tackles.

1.3. Research Objectives

This study aims to develop a comprehensive AI adoption framework for program and course design in higher education, grounded in global best practices and scholarly evidence. The objectives are threefold: (1) to investigate how higher education institutions worldwide are currently integrating AI into curriculum design and identify common themes, (2) to synthesize the enablers, challenges, and ethical considerations these institutions encounter, and (3) to propose a structured framework that addresses these findings and provides practical guidance at both the program and course level. Ultimately, the framework is intended to bridge the gap between the promise of AI tools and the realities of educational design, ensuring that AI’s integration is pedagogically sound, ethically responsible, and scalable across diverse institutional contexts.

1.4. Research Questions

To achieve the above objectives, the study is guided by the following research questions:
RQ1: What are the current practices and models of AI integration in academic program and course design across global higher education institutions?
RQ2: What are the key enablers and barriers (including ethical and policy issues) influencing the adoption of AI in curriculum design and development?
RQ3: What core components would constitute an effective, evidence-based framework for AI-enabled program and course design that is adaptable across different higher education contexts?
These questions drive a systematic review of secondary data to capture both the breadth of global AI-in-education initiatives and the depth of lessons learned from early adopters.

1.5. Expected Contributions

This work offers several contributions. Theoretically, it provides a conceptual model linking AI capabilities with curriculum design theory, thus enriching the literature on educational innovation. Methodologically, it demonstrates a rigorous secondary-data synthesis (spanning academic studies, institutional reports, and policy documents) to derive actionable insights – an approach that can be replicated for emerging technologies in education. Practically, the proposed framework serves as a toolkit for stakeholders: program directors, instructional designers, and faculty can use it as a roadmap for implementing AI in designing curricula and courses. The framework also supports institutional leaders in strategic planning, aligning AI initiatives with mission and quality standards. At a policy level, the findings inform guidelines for governing bodies and accrediting agencies on how to support and regulate AI’s educational use. In sum, this study’s contributions span theory (advancing our understanding of AI in curriculum design), practice (a scalable framework and best practices), and policy (recommendations for governance and ethical use), aligning with the standards of high-impact educational technology journals.
Our framework emerges alongside, and deliberately extends, several recent attempts to structure institutional responses to AI. Jisc’s AI maturity toolkit, developed for colleges and universities, offers a five-stage model of institutional AI adoption (approaching and understanding, experimenting and exploring, operational, embedded, and optimised/transformed) and associated guidance for strategic planning (Jisc, 2023). While valuable at the level of senior leadership and governance, the Jisc model remains intentionally high level: it does not specify how programme teams should redesign learning outcomes, assessment architectures, or course-level learning activities in concrete disciplines. At the other end of the spectrum, AI-enabled instructional design work by Huang and colleagues and by Jia and Hew operationalises iterative design cycles within specific courses or fully online flipped programmes, using AI-driven personalisation or analytics to refine instructional sequences over time (Huang, Lu, & Yang, 2023; Jia, Hew, Du, & Li, 2023). These studies demonstrate how AI can enhance course-level design and evaluation, but they do not attempt to articulate a cross-programme, cross-institutional framework that simultaneously addresses programme design, course architecture, and policy-ethical constraints. The AI-enabled framework proposed in this article is intended to bridge this gap: it retains the iterative, design-based discipline of these micro-level cycles while embedding them in an institution-wide model that foregrounds governance, ethics, and theologically informed value commitments in decisions about AI-supported programme and course design.

2. Methodology

2.1. Research Design

This study followed an integrative secondary research design, relying exclusively on existing data and literature to develop the AI adoption framework. We conducted a structured review of scholarly publications, case studies, and institutional reports pertaining to AI in higher education curriculum and program development. The research design is qualitative and synthesis-oriented: rather than gathering new empirical data, we aggregated and analyzed insights from diverse sources to draw generalizable patterns. By using only secondary data, we aimed to capture a broad evidence base from multiple contexts and ensure that the proposed framework is grounded in verified practices and research outcomes.

2.2. Data Sources and Collection

2.2.1. Data Sources

We drew from three main categories of sources: (1) peer-reviewed academic literature (journal articles, conference papers, and edited volumes) on AI applications in education, especially curriculum design, instructional design, and program development; (2) global institutional and industry reports or whitepapers (e.g. UNESCO, governmental education departments, and university strategy documents) that document AI adoption cases or recommendations; and (3) policy papers and frameworks from educational organizations (such as UNESCO, OECD, and professional associations in higher education) focusing on AI ethics and governance. Notable sources included recent articles such as Chu & Ashraf (2025) on AI-driven curriculum innovation, the U.S. Department of Education's report Artificial Intelligence and the Future of Teaching and Learning (2023), UNESCO's guidance on AI ethics (UNESCO, 2025), and several case studies of university implementations (e.g. University of Florida's "AI Across the Curriculum" initiative (Southworth et al., 2023) and a University of Phoenix whitepaper on skills-aligned curriculum mapping (Thompson, 2023)). We also incorporated data from emerging survey studies – for instance, a global survey of 40 universities' AI policies (Jin et al., 2024) and the UNESCO (2025) survey of higher-ed institutions mentioned earlier – to ensure up-to-date coverage of institutional trends.

2.2.2. Search and Inclusion Strategy

We performed systematic searches in academic databases (ERIC, Scopus, Web of Science) and search engines for gray literature, using keywords such as “AI in curriculum design,” “AI in higher education,” “program development AI framework,” “AI education policy,” and “AI ethics in curriculum.” The search was initially limited to publications from 2018–2025 to capture the rapid developments in AI (especially the post-ChatGPT surge). We included sources from all global regions to ensure representation of diverse institutional contexts. Inclusion criteria were: (a) the source provides empirical data, case descriptions, or recommendations about using AI in any aspect of curriculum or course design/development (as opposed to only AI for student learning or general commentary); (b) the source is credible – for scholarly works, this meant peer-reviewed or reputable publisher; for reports, published by recognized institutions or organizations; and (c) the source addresses at least one of our research questions (practices, enablers/challenges, or frameworks). We excluded sources focusing solely on AI for instructional delivery or student use (e.g. use of AI tutors in a class) without connection to design processes, as well as opinion pieces lacking empirical or institutional backing. After identifying an initial pool of ~120 sources, we screened titles and abstracts for relevance, yielding approximately 60 sources that were reviewed in full. Of these, around 40 key sources were ultimately synthesized for findings, chosen for their quality and relevance to the themes of interest.
To assess the consistency of our screening judgements, two reviewers independently coded a random 10% subsample of full-text articles for inclusion. Inter-rater agreement for the final inclusion/exclusion decisions, calculated using Cohen’s κ, fell in the “substantial” range (κ > 0.70), which is generally interpreted as indicating strong agreement for complex judgement tasks in educational research (Landis & Koch, 1977). Any residual disagreements were resolved through deliberation, with preference given to conservative inclusion where the study offered potentially distinctive conceptual insights.
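For transparency, the reported agreement statistic can be reproduced with standard tooling. The sketch below, in Python, shows how Cohen's κ is computed from two reviewers' include/exclude decisions using scikit-learn; the decision vectors are invented for illustration and are not our actual screening data.

```python
# Minimal sketch: inter-rater agreement (Cohen's kappa) for include/exclude
# screening decisions. The two decision vectors below are illustrative, not
# the study's actual data.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "include", "include", "exclude", "include"]
reviewer_b = ["include", "exclude", "include", "exclude", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values > 0.70 were read as substantial agreement
```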
During synthesis, empirical studies with higher weight of evidence were used to anchor the core categories of the framework (e.g., recurrent design tensions, patterns in AI-enabled course redesign), while conceptual and policy documents (e.g., UNESCO’s AI ethics guidance; Jisc’s AI maturity toolkit) were treated as lenses for interrogating normative assumptions, governance implications, and institutional scalability. This layered approach allowed the resulting framework to be empirically grounded while still responsive to strategic and ethical debates in the wider AI-in-education ecosystem.
Figure 1. PRISMA flow chart illustrating the integrative review: about 120 records were identified, 80 screened after initial exclusions, 40 assessed for eligibility, and 40 key sources included in the synthesis.
Beyond basic relevance screening, we conducted a structured appraisal of methodological quality and conceptual usefulness. For empirical qualitative and mixed-methods studies, we drew on the Critical Appraisal Skills Programme (CASP) qualitative checklist as a heuristic, attending in particular to clarity of aims, appropriateness of design, transparency of sampling and data collection, adequacy of analysis, and reflexivity (Critical Appraisal Skills Programme [CASP], 2018). For quantitative and design-based studies, we applied Gough’s (2007) “weight of evidence” framework, judging each study along three dimensions: (a) quality of execution (e.g., clarity of design, appropriateness of analyses), (b) appropriateness of the design to our review question, and (c) relevance of the study focus to AI-supported programme or course design rather than AI-mediated delivery alone. On this basis, studies were classified as contributing high, moderate, or low weight of evidence to the synthesis; low-weight studies were not necessarily excluded, but their claims were treated cautiously and never allowed to dominate category formation.
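To make the appraisal logic concrete, the following minimal sketch shows one way Gough's three weight-of-evidence judgements could be combined into the high/moderate/low labels we used. The 3-point ratings and cut-offs are our own illustrative simplification, not part of Gough's (2007) framework.

```python
# Schematic only: combining Gough's (2007) three weight-of-evidence judgements
# (execution quality, design appropriateness, topic relevance) into an overall
# high/moderate/low label. The 3-point scale and cut-offs are illustrative
# simplifications, not prescribed by Gough's framework.
def weight_of_evidence(execution: int, design: int, relevance: int) -> str:
    """Each dimension is rated 1 (weak) to 3 (strong)."""
    if min(execution, design, relevance) == 1:
        return "low"  # a weak rating on any dimension caps the study at low
    total = execution + design + relevance
    return "high" if total >= 8 else "moderate"

print(weight_of_evidence(execution=3, design=3, relevance=2))  # -> high
```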

2.2.3. Quality and Bias Screening

Each source was evaluated for quality and potential bias. Peer-reviewed studies were appraised for methodological rigor (using indicators like clarity of research questions, appropriateness of methods, and evidence provided). Reports and whitepapers were checked for authoritativeness (e.g. UNESCO or government reports were favored over vendor promotional material, unless the latter provided unique data). To mitigate bias, we triangulated information: whenever possible, claims or examples were cross-checked across multiple sources. For instance, if a university’s case was described in a press release, we also sought any academic or media reports on that case for verification. This cross-verification helped ensure that our synthesis rests on well-substantiated evidence.

2.3. Data Analysis and Synthesis

We employed a thematic synthesis approach to analyze the collected data. First, we extracted relevant information from each source into a coding matrix, noting details about AI applications (e.g. AI for learning outcomes design, AI in assessment, faculty development efforts, observed challenges, etc.), context (institution type, region), and key findings or recommendations. Using an inductive coding process, we grouped similar pieces of evidence under thematic categories corresponding to our research questions. Major themes that emerged included: AI-assisted curriculum planning, personalization and adaptive learning design, skills mapping to industry needs, AI in assessment and feedback, faculty attitudes and training, ethical and policy frameworks, and infrastructure requirements. We then organized these themes along two dimensions: (1) curriculum design stages or levels (program-level design vs. course-level design) and (2) enabling factors vs. challenges. This became the basis for structuring the Findings (e.g. themes such as “AI for Learning Outcomes and Skills Mapping” address program/course design practices, whereas “Enablers and Challenges” consolidates factors influencing adoption). Throughout the synthesis, we applied the technique of constant comparison, iteratively comparing insights from different sources to refine each theme and ensure it reflected consensus or noted discrepancies. For example, when synthesizing ethical issues, we compared UNESCO’s broad policy perspective (UNESCO, 2025) with specific concerns raised in case studies (like bias in AI recommendations [Chu & Ashraf, 2025]), to present a layered understanding.
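The structure of the extraction and coding matrix can be illustrated with a small, hypothetical extract; the rows below are schematic examples of the fields we recorded, not verbatim entries from our matrix.

```python
# Illustrative structure for the extraction/coding matrix described above.
# Rows and theme labels are hypothetical examples, not actual study records.
import pandas as pd

coding_matrix = pd.DataFrame([
    {"source": "Chu & Ashraf (2025)", "context": "university, North America",
     "ai_application": "AI-personalised curriculum", "theme": "adaptive learning design"},
    {"source": "Thompson (2023)", "context": "online university, USA",
     "ai_application": "skills-aligned curriculum map", "theme": "skills mapping"},
    {"source": "UNESCO (2025)", "context": "global survey",
     "ai_application": "policy guidance", "theme": "ethical and policy frameworks"},
])

# Group evidence by theme, mirroring the inductive grouping step.
print(coding_matrix.groupby("theme")["source"].apply(list))
```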
Where quantitative data were available (e.g. percentages of institutions with AI policies (UNESCO, 2025) or outcome improvements from AI-driven curricula (Chu & Ashraf, 2025)), we noted these as evidence of impact or prevalence. However, most analysis was qualitative, focusing on patterns (e.g. common approaches institutions take, repeated concerns). Finally, we used the synthesized themes and patterns to construct the proposed framework. The framework's components were directly derived from the clusters of practices and recommendations identified in the literature, ensuring it is evidence-based. We then iteratively refined the framework by checking it against the data: each element of the framework is supported by multiple sources from our review, lending it both validity and robustness.

2.4. Ethical Considerations

Because this study did not involve human subjects or sensitive personal data, traditional research ethics approvals were not required. However, we adhered to ethical standards in secondary research. This included diligent attribution of all ideas, data, and direct quotations to the original sources to avoid plagiarism and misrepresentation. Citations are provided for all sourced material, and the APA 7th style is followed for scholarly rigor. We also aimed to represent authors’ findings faithfully and in context; whenever interpreting a study’s result or a report’s recommendation, we cross-checked the context to avoid skewing the meaning. Another ethical aspect is responsible reporting: given that AI in education can be a contentious topic, we took care to present both potential benefits and pitfalls as reported in the literature, maintaining a balanced perspective. In proposing the framework, we incorporated ethical guidelines (such as ensuring data privacy, addressing bias, and including stakeholder voices) as a core component, thereby aligning the work with the principle of “do no harm.” We also recognize the positionality of this research – being a synthesis, it may carry biases of the original sources (e.g. most literature is from English-speaking or Global North contexts). To ethically handle this, we explicitly discuss such limitations and avoid overgeneralization beyond what the data supports.

2.5. Limitations

There are a few limitations inherent in our methodology. First, reliance on secondary data means our findings are constrained by what is available and reported. It is possible that innovative AI adoption practices exist in some institutions but have not been documented in accessible literature, leading to their omission. We tried to mitigate this by searching widely, including technical reports and media where appropriate, but the risk remains. Second, the rapid pace of AI development means the landscape is evolving even as this review was conducted. We included sources up to late 2025, but newer developments (or implementations underway) might not be captured. Third, while we attempted a global scope, the literature is uneven: there are more documented cases and studies from North America, Europe, and parts of Asia than from Africa or Latin America (Sangwa et al., 2025). This may bias the framework toward conditions prevalent in well-resourced institutions. We caution that adaptation may be needed for resource-constrained settings, a point we address in the framework discussion. Fourth, our thematic synthesis, while rigorous, involves subjective interpretation in coding and grouping themes. Different researchers might group concepts differently; to enhance reliability we used multiple examples for each theme and have been transparent in how we derived categories. Finally, the proposed framework is conceptual and has not been empirically validated within this study. It is built from evidence and expert recommendations, but its effectiveness in practice is an assumption to be tested in future research. We acknowledge this and provide recommendations for trial and evaluation of the framework in various contexts as part of our conclusion. Despite these limitations, the methodology provides a comprehensive and current synthesis, suitable for informing a robust framework and offering a foundation for further exploration.

3. Findings and Discussion

Our review revealed rich evidence addressing the research questions. Below, we organize the findings according to the major themes and questions, interweaving examples from institutions that have adopted AI in program or course design. We first describe the current state of practice (RQ1), then discuss enablers, challenges, and ethical issues (RQ2), which together inform the subsequent framework design (RQ3). Throughout, “AI” refers broadly to a range of technologies – from classic machine learning and learning analytics to newer generative AI – as they relate to curriculum design and development.

3.1. AI in Program and Course Design: Global State of Practice

3.1.1. Widespread but Uneven Adoption

Across the globe, higher education institutions are experimenting with integrating AI into curriculum design, though at varying scales. A 2025 UNESCO survey confirms that AI use in academia has become widespread, with 90% of responding academics indicating some use of AI tools in their work (UNESCO, 2025). However, this use is often ad-hoc, and confidence in pedagogical applications is uneven (UNESCO, 2025). Only about half of institutions (mostly in Europe/North America) currently have formal AI policies or frameworks, while many others (especially in developing regions) are still in early stages (UNESCO, 2025). This disparity implies that while interest in AI is global, structured adoption in curriculum processes is still emerging.

3.1.2. Institutional Examples – Case Studies

Pockets of innovation provide insight into how AI can be embedded both at program and course levels. For instance, the University of Florida (USA) has launched an “AI Across the Curriculum” initiative, aiming to infuse AI literacy and tools into every undergraduate program (Southworth et al., 2023). This comprehensive approach treats basic AI knowledge as a core competency for all students, regardless of discipline, and represents a program-level redesign to produce an “AI-ready workforce” aligned with 21st-century needs (Southworth et al., 2023). Achieving this involved significant institutional investment: UF hired over 100 new AI-focused faculty across 16 colleges and implemented a university-wide AI competency framework categorizing courses by levels of AI content (Southworth et al., 2023). Such large-scale curriculum transformation demonstrates what a proactive, top-down strategy can achieve in aligning programs with emergent technological literacies.
Another illustrative case comes from the University of Phoenix (USA), which has focused on skills mapping to align its curriculum with labor market demands. In a recent whitepaper, Phoenix detailed a strategy of developing a "skills-aligned curriculum map" that links each program and course learning outcome to specific industry-validated skills (Thompson, 2023). Using an outcomes management tool (a centralized database), they map and track how each course teaches and assesses those skills, creating a transparent "single source of truth" for curriculum alignment (Thompson, 2023). This AI-enhanced approach leverages labor market data (with AI analyzing job postings and skills taxonomies) to continuously update curricula so that graduates' competencies remain relevant. Students are even provided a learner record that documents their acquired skills and assessments, effectively translating the curriculum into employer-facing language (Thompson, 2023). This example highlights an institution-level use of AI and data for program design – ensuring that what is taught in each course has verified currency in the workplace.

3.1.3. Course-Level Innovations

On the course design front, many universities are leveraging generative AI as a co-designer or assistant for faculty. A notable trend is using AI during course development to generate preliminary content and structure, which instructors can then refine. For example, faculty at some institutions use AI features in learning management systems (LMS) like Blackboard to auto-generate draft course modules and outlines in a sandbox course site (Fang & Broussard, 2024). In one reported case from a medical education context, an instructor prompted an LMS-integrated AI to suggest a set of weekly module topics for a new course; the AI produced a reasonable outline in minutes, which the instructor then adjusted to fit their specific objectives (Fang & Broussard, 2024). This “AI-assisted course mapping” jump-starts the design process, helping instructors move past blank-page syndrome when planning new courses. Likewise, generative AI tools such as ChatGPT or Google’s Gemini are being used to develop draft lesson plans and teaching materials. Faculty can input their course learning outcomes and desired lesson length, and the AI outputs a structured lesson plan with activities and time allocations (Fang & Broussard, 2024). At the University of St. Augustine for Health Sciences, for instance, educators reported success in using Gemini to generate detailed lesson plan drafts, which they then edited to ensure accuracy and appropriateness (Fang & Broussard, 2024). These examples show AI acting as a productivity tool in course design – scaling up the capacity to create well-aligned course content efficiently.

3.1.4. Integrating AI into Curriculum Content

Another facet of AI adoption is the inclusion of AI-related content and experiences within curricula. In terms of program design, this means adding AI topics or projects into courses so that students learn about AI as part of their discipline. A global perspective study found universities increasingly emphasizing AI literacy for students in various fields, not just computer science (Jin et al., 2024). For example, many institutions have begun offering first-year seminars or modules on AI basics and ethics as mandatory for all students (UNESCO, 2025). The University of Louisiana System in the US rolled out a free, self-paced AI literacy micro-credential available to all 82,000+ students and staff across its campuses (CCA, 2024). This program, developed collaboratively by faculty, covers responsible AI use, data privacy, and ethical issues, ensuring a baseline competency in AI for the entire academic community. Such curriculum interventions reflect a strategic decision at the program level: treating AI knowledge as a learning outcome of programs, essential for modern education. The inclusion of AI content is often coupled with experiential learning. For instance, some universities encourage students to use AI tools in capstone projects or assignments (with proper guidance) to simulate real-world applications, thereby making curriculum more relevant. This ties into program development where learning outcomes are updated to include "AI-related competencies" (like being able to critically evaluate AI outputs or use AI tools in problem-solving) (Kassorla et al., 2024).
In summary, the state of practice is dynamic: at the program level, leading institutions are re-aligning curricula with the AI-driven world (through broad AI literacy programs and data-informed program design), and at the course level, instructors are increasingly adopting AI as a design partner (to plan content, personalize learning paths, and streamline material creation). These practices are not yet universal, but they provide evidence of what an AI-integrated approach to curriculum design can look like, informing the need for a guiding framework that can help scale such successes more broadly.

3.2. AI for Learning Outcomes and Skills Mapping

One prominent theme is the use of AI to inform the design of learning outcomes and the mapping of skills from curriculum to industry needs. Traditionally, defining program learning outcomes and ensuring they stay relevant has been a manual, research-intensive task for faculty committees. AI is now easing this burden in two ways: by offering recommendations for learning outcome formulation and by connecting those outcomes to external skill frameworks and job market data.

3.2.1. Designing and Refining Learning Outcomes

Generative AI tools can assist educators in drafting and refining learning outcomes at both course and program levels. Faculty often struggle to phrase outcomes in measurable, rigorous terms (for example, aligning with Bloom’s taxonomy cognitive levels). AI can act as a smart editor in this process. In practice, an instructor might write an initial set of course objectives and then prompt an AI (like ChatGPT) to suggest improvements or appropriate action verbs for each cognitive level. According to an Educause review, instructors have successfully used GenAI to fine-tune learning outcomes so that they precisely match desired cognitive levels and competencies (Fang & Broussard, 2024). The AI draws on its vast training data (including educational taxonomies and examples) to recommend verbs and phrasing that ensure outcomes are clear and assessable. This not only saves time but also helps less experienced instructors adhere to curriculum design best practices. It is important to note that the human designer remains in control – the AI provides suggestions, but faculty make the final decision, reviewing for accuracy and suitability. The benefit is a more systematic alignment of outcomes with pedagogical standards, which can improve coherence across courses. For instance, if a program wants to scaffold a skill like “analytical thinking” from introductory to advanced levels, AI can help ensure that outcome statements at each course level reflect progressively higher Bloom’s levels (e.g. explain, then analyze, then evaluate), creating a clear developmental trajectory (Fang & Broussard, 2024).
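To make this workflow concrete, the sketch below shows one way such a request might be scripted against a general-purpose LLM API, here using the OpenAI Python client as an example. The model name, prompt wording, and draft outcome are illustrative assumptions; as the text above stresses, the AI's suggestion is a draft that the human designer reviews and decides on.

```python
# Illustrative sketch: asking a general-purpose LLM to align a draft course
# outcome with a target Bloom's-taxonomy level. The model name and prompt
# wording are assumptions; any suggestion still requires faculty review.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft_outcome = "Students will know about environmental policy instruments."
prompt = (
    "Rewrite this course learning outcome so it is measurable and targets the "
    "'Analyze' level of Bloom's taxonomy. Suggest an appropriate action verb:\n"
    f"{draft_outcome}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # human review remains the final step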

3.2.2. AI-Powered Skills Mapping

Perhaps the most transformative capability of AI in curriculum design is mapping curriculum outcomes to real-world skills and labor market demands. This addresses a long-standing challenge in higher education: keeping programs in sync with the rapidly changing skill requirements of employers. AI can process and analyze large datasets of job postings, industry skill frameworks (like ESCO in Europe or O*NET in the US), and professional standards, and then help educators map their curriculum to these skills. The University of Phoenix case provides a concrete example: the institution leveraged AI and data tools to create a "skills-aligned curriculum map" (Thompson, 2023). Each course learning outcome was tagged with specific skills (e.g. "data analysis", "critical thinking", "team collaboration") which were drawn from industry-validated lists (Thompson, 2023). AI was used in two ways here: firstly, to analyze which skills are most demanded in relevant job roles (using algorithms to parse labor market data), and secondly, to assist in aligning or updating course content to cover those emerging skills. Doris Savron, a vice provost at Phoenix, notes that as the labor market increasingly uses AI to identify skills gaps, universities too must use data-driven methods to "arm their learners with skills confirmed by authentic assessments" that prove their job readiness (Thompson, 2023). This approach ensures that program learning outcomes are not just academic abstractions but are explicitly linked to tangible competencies valued outside academia.
Other institutions and vendors have started offering similar AI-driven skills mapping services. For example, Credential Engine, a nonprofit that standardizes credential and competency data, highlights how AI can help define connections between competencies, courses, and job roles (Credential Engine, n.d.). By leveraging structured data (like a rich competency taxonomy) and AI analytics, an institution can quickly see where its curriculum has strengths or gaps relative to industry needs. If a gap is identified (say a new technology skill that is now critical in a field but not yet in the curriculum), program designers can respond by updating outcomes or adding new content. Some universities have even integrated such mapping into their curriculum approval process – any new course proposal must show (often via an AI-powered tool) how it aligns with designated program outcomes and workforce skills. This enforces a forward-looking alignment and keeps the curriculum "live" to changing requirements.
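One simple way to prototype such outcome-to-skill alignment is with off-the-shelf sentence embeddings. The sketch below, using the sentence-transformers library, scores a course outcome against a handful of skill labels; the skill list, model choice, and similarity threshold are illustrative stand-ins for a curated taxonomy such as ESCO or O*NET.

```python
# Sketch: approximating outcome-to-skill alignment with sentence embeddings.
# Skill labels and the similarity threshold are illustrative; a real pipeline
# would draw on a curated framework such as ESCO or O*NET.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

outcomes = ["Interpret financial statements to support investment decisions"]
skills = ["data analysis", "financial modeling", "team collaboration", "critical thinking"]

out_emb = model.encode(outcomes, convert_to_tensor=True)
skill_emb = model.encode(skills, convert_to_tensor=True)
scores = util.cos_sim(out_emb, skill_emb)[0]

# Tag the outcome with skills above an (illustrative) similarity threshold.
tagged = [s for s, score in zip(skills, scores) if score > 0.35]
print(tagged)
```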

3.2.3. Transparency for Learners

An important byproduct of AI-enabled skills mapping is improved transparency for students about their learning outcomes. When each course is explicitly linked to skills and those are communicated, students can better understand the purpose of their learning activities in terms of career preparation. At the University of Phoenix, this is manifested in student-facing documentation: syllabi and course sites enumerate the specific skills developed in each module, along with how they will be assessed (Thompson, 2023). For instance, a project in a finance course might be labeled as developing "financial modeling" and "problem-solving" skills, as per the curriculum map. This practice, supported by the underlying AI mapping, turns the curriculum into a more navigable map for students, making the abstract notion of learning outcomes very concrete. Students can track their skill acquisition through an AI-backed learner record or portfolio that accumulates evidence (projects, badges, assessment results) of each skill mastered (Thompson, 2023). This is highly motivating and also allows for personalization – if a student sees they are lacking exposure to a certain skill, they could choose electives or seek experiences to fill that gap, effectively using the curriculum map as a guide for their learning journey.
In summary, AI is enabling a shift from static, faculty-crafted learning outcomes to dynamic, data-informed outcomes tightly coupled with real-world skills. This strengthens curriculum relevance and helps ensure programs remain current. By automating parts of the skills mapping and outcome design process, AI frees educators to focus on deeper pedagogical thinking (like how to teach those outcomes) while ensuring no critical skill is overlooked. These practices lay the foundation for program-level guidance in our framework, where setting the right outcomes and alignment is the first step in design.

3.3. AI in Course Structuring and Content Development

AI’s impact is also strongly felt at the course design level, particularly in structuring course content and generating instructional materials. The integration of AI here serves two main goals: efficiency (making the design process faster and less labor-intensive) and enhanced creativity/personalization (introducing new ideas and adapting content to learner needs).

3.3.1. Course Planning and Structure

Many instructional designers follow the backward design model – defining objectives, then assessments, then learning activities (Fang & Broussard, 2024). AI can assist at each step of this planning. We saw earlier how AI helps refine objectives; it similarly aids assessment planning and content sequencing. In practice, after outcomes are set, designers can prompt AI to suggest assessment methods suitable for those outcomes. For example, if a learning outcome is "evaluate arguments in a written essay," an AI tool might suggest an appropriate assessment like a rubric-based argumentative essay assignment or a debate activity, complete with criteria. In one account, faculty used an AI to generate a pool of quiz questions aligned to each learning outcome – the AI (in Blackboard's toolset) could create question banks from a given chapter text or summary (Fang & Broussard, 2024). Although instructors needed to vet and refine these questions, AI provided a starting draft, thus accelerating the development of assessments and ensuring coverage of key content. Similarly, for course structuring, AI can propose a logical sequence of topics. An instructor at a Canadian university reported using ChatGPT to outline a new course in environmental policy: by feeding it the course description and desired outcomes, the AI returned a week-by-week topic breakdown that was "surprisingly coherent" and served as a first draft of the syllabus (personal communication, documented in a teaching center blog). This kind of AI-generated course map can then be iteratively refined by the instructor, who adds specific readings and case studies, or removes AI-suggested topics that do not fit the local context. The key advantage noted is that AI ensures no major domain topic is inadvertently missed and can introduce interdisciplinary angles that a designer might not have considered.
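In practice, much of this assistance reduces to well-structured prompts. The hypothetical helper below assembles a backward-design prompt (a weekly topic sequence plus outcome-aligned quiz items) of the kind described above; the function name and wording are our own illustration and assume no particular LMS or AI vendor.

```python
# Hypothetical helper for AI-assisted backward design: given outcomes, request
# (a) a weekly topic sequence and (b) quiz items aligned to each outcome.
# This is a prompt-construction sketch, not a feature of any specific LMS.
def course_design_prompt(title: str, outcomes: list[str], weeks: int) -> str:
    outcome_list = "\n".join(f"- {o}" for o in outcomes)
    return (
        f"You are assisting with backward design for the course '{title}'.\n"
        f"Learning outcomes:\n{outcome_list}\n"
        f"1. Propose a {weeks}-week topic sequence that scaffolds these outcomes.\n"
        "2. For each outcome, draft two quiz questions at the matching Bloom's level.\n"
        "Flag any outcome that is difficult to assess with a quiz."
    )

print(course_design_prompt(
    "Environmental Policy", ["Evaluate arguments in a written essay"], weeks=12))
```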

3.3.2. Content Generation and Media Development

AI excels at generating textual and multimedia content on demand, which is transforming how course materials are developed. Rather than writing everything from scratch, educators can use AI to produce drafts of slide decks, lecture notes, case study scenarios, and even video and audio content. For instance, tools like Tome and Jasper.ai have been used to create initial drafts of presentations and reading materials within minutes (Muncey, 2025). An instructional designer could input key points or a rough outline, and the AI will flesh it out into paragraphs of explanation or slides with structured points. Of course, these drafts require human editing for accuracy and pedagogical quality, but they substantially reduce the time to first draft. This is particularly helpful when adapting a course to a new modality (say, moving a face-to-face course online) – a process that normally involves creating a large amount of content up front. AI can generate supplementary examples, explanations at varying difficulty levels, or summaries of complex concepts that instructors can refine and integrate.
Moreover, AI is fostering creative approaches to content. For example, if a course traditionally relies on textual cases, an instructor can prompt a generative AI to produce a hypothetical scenario or case study tailored to specific learning objectives. Faculty Focus reported an example where an educator used AI prompts to generate custom case studies for class discussion in a business ethics course, saving time while providing varied scenarios for students (Gilreath, 2025; CCA, 2025). These AI-generated cases were then checked and tweaked to ensure they meet the learning goals and contain realistic details. Similarly, AI image and video generation tools allow creation of custom visuals or even simple simulations. A tool like Synthesia can turn text into a narrated video with an avatar presenter (Muncey, 2025), which some instructors use to create engaging introductions to weekly topics or explainer videos, without needing studio production. Such media can cater to different learning preferences, improving accessibility (e.g. offering visual or auditory explanations in addition to text).

3.3.3. Adaptive and Personalized Content

A significant benefit of AI in content development is the potential for adaptive learning experiences. AI can analyze student data (prior knowledge, performance on quizzes, learning preferences) and help designers create branching content or differentiated pathways. For instance, adaptive learning platforms (often AI-driven) allow instructional designers to input a range of content and practice problems tagged by difficulty or sub-skill. The AI then guides students along individualized paths, presenting easier or additional practice content when a student struggles, or accelerating when a student shows mastery (Muncey, 2025). In our review, we found that personalized learning path generation is an increasingly common trend: AI can automatically adjust the flow of a course for each learner, essentially doing in real-time what a human designer would do if they could pre-plan for every student scenario (Muncey, 2025).
Empirical work at East Tennessee State University illustrates this dynamic: instructional designers reported particular enthusiasm for generative and adaptive AI because these tools allowed them to prototype personalized learning paths and richer assessment scenarios more quickly than before, even when some faculty were cautious or sceptical (Muncey, 2025). Similar patterns are visible in broader empirical and review studies of AI-enabled personalisation, which consistently report that adaptive or AI-assisted systems produce medium to large gains in student achievement compared with non-adaptive instruction (standardised mean differences around g = 0.70, and in some cases improvements of several months of additional learning) and measurable gains in higher-order skills such as critical thinking and problem-solving (Ateeq et al., 2025; Chen, 2025; Merino-Campos, 2025). These convergent findings suggest that course designers are increasingly using AI not merely to generate static content, but to orchestrate dynamic ecosystems of content, assessment, and feedback that respond to learners in real time.
One concrete instantiation is AI-driven tutoring systems embedded in courses: these can provide hints, additional explanations, or alternative examples when the system detects a student’s misunderstanding. Designing such systems is complex, but AI simplifies it by auto-generating a pool of explanations or questions for each concept which can be deployed as needed.
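The branching logic underlying such adaptive pathways can be quite simple at its core. The sketch below shows a rule-based selector of the kind a designer might prototype; the mastery thresholds and activity labels are illustrative, and production systems would estimate mastery with far richer models.

```python
# Rule-based sketch of adaptive content branching: serve remedial material
# below a mastery threshold, enrichment above it. Thresholds and activity
# labels are illustrative placeholders.
def next_activity(concept: str, mastery: float) -> str:
    """mastery is the learner's current estimated proficiency in [0, 1]."""
    if mastery < 0.5:
        return f"remedial practice: {concept} (worked examples + hints)"
    if mastery < 0.8:
        return f"core activity: {concept} (standard problem set)"
    return f"enrichment: {concept} (transfer task or open-ended challenge)"

for score in (0.3, 0.65, 0.9):
    print(next_activity("photosynthesis", score))
```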

3.3.4. Quality and Efficiency Gains

The combined effect of these AI applications in course design is both a quality improvement and an efficiency gain. AI-generated content and structure need human oversight, but they significantly reduce the grunt work and open up time for designers to focus on higher-order tasks – such as curating truly engaging learning activities, fostering interaction, or integrating research and real-world projects into the course. As Fang and Broussard (2024) note, AI can take over repetitive tasks (like creating quiz questions or summarizing readings), “freeing up instructional designers to focus on the creative aspects, such as crafting engaging activities”. Faculty using AI in this way caution against overreliance: the goal is to augment, not replace human expertise (Fang & Broussard, 2024). The content and structure provided by AI serve as a starting point – a kind of intelligent template – which educators then adapt to ensure the human touch, contextual relevance, and accuracy. Indeed, one risk identified is that an uncritical use of AI could yield a sterile learning experience lacking the richness and nuance an educator brings (Fang & Broussard, 2024). The findings strongly advocate an iterative, collaborative design process: humans and AI working together, where the human constantly reviews and refines AI outputs (Fang & Broussard, 2024). When done properly, courses can be developed faster without sacrificing quality, and often with innovative elements that might not have been conceived otherwise (thanks to AI’s suggestions).
In summary, AI is revolutionizing course design by providing intelligent support in outlining, populating, and adapting course content. This dramatically expands the capacity of institutions to develop new courses or update existing ones to meet demand (for instance, the rapid creation of online courses or new programs in emerging fields). As we move forward, these practices inform the course-level guidance in our framework: using AI for course mapping, content creation, and ensuring adaptivity, all under the careful guidance of instructor expertise to maintain pedagogical integrity.

3.4. AI-Enhanced Assessment and Feedback

Assessment design and feedback mechanisms in higher education are being significantly influenced by AI, addressing both efficacy and integrity in learning evaluation. Our review indicates that AI contributes in two broad areas: (1) creating more adaptive and personalized assessments with real-time feedback, and (2) upholding academic integrity and rethinking assessment strategies in the age of AI-assisted student work (Sangwa & Mutabazi, 2025a).

3.4.1. Adaptive Assessment Systems

AI-powered assessment tools enable a shift from one-size-fits-all testing to more responsive evaluation. Adaptive assessment systems, often driven by machine learning algorithms, adjust the difficulty or focus of questions in real-time based on a student's performance (Muncey, 2025). For example, if a student consistently answers mid-level algebra questions correctly, the system will start presenting more advanced problems; if the student struggles, it will offer remedial questions to diagnose misunderstandings. This was observed in practice at some universities adopting intelligent tutoring systems in introductory courses: the assessments effectively become part of the learning process, providing instant feedback and additional practice where needed (Muncey, 2025). Research synthesized by Muncey (2025) highlights that such adaptive quizzing not only identifies knowledge gaps as they occur but also keeps students more engaged, as the challenge is continually tailored to their level (Muncey, 2025). The immediate feedback loop – often with AI-generated hints or explanations – helps students learn from errors on the spot, rather than waiting for an instructor to grade and return the work days later. Universities implementing these systems have reported improved learning outcomes in foundational courses and higher student satisfaction due to the "learning while assessing" approach (essentially formative assessment at scale).
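At its simplest, the difficulty adjustment described above can be expressed as an up-down "staircase" rule, sketched below. Real adaptive testing engines typically rely on item response theory rather than this toy rule, which is shown only to make the mechanism concrete.

```python
# Minimal "staircase" sketch of adaptive questioning: step difficulty up after
# a correct answer, down after an incorrect one. Production systems typically
# use item response theory (IRT) based item selection instead.
def adjust_difficulty(level: int, correct: bool,
                      min_level: int = 1, max_level: int = 5) -> int:
    if correct:
        return min(level + 1, max_level)
    return max(level - 1, min_level)

level = 3  # start at mid-level items
for answer in (True, True, False, True):  # illustrative response pattern
    level = adjust_difficulty(level, answer)
    print(f"next item difficulty: {level}")
```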

3.4.2. Assessment Content Generation

Instructors are also using AI to generate assessment content (questions, problems, case scenarios) and evaluation rubrics. This speeds up the creation of tests and ensures a broad coverage of topics. For instance, as noted earlier, AI can create draft question banks from existing text or learning materials (Fang & Broussard, 2024). In language courses, generative AI has been used to produce reading comprehension passages and questions at targeted difficulty levels, freeing instructors from having to source or write fresh passages. Similarly, AI can propose variations of problems (very useful in STEM fields to generate multiple versions of a test for academic integrity or practice exercises). Another valuable use is in rubric creation: given the criteria and performance level descriptions, an AI tool can output a first-draft analytic rubric for an assignment (Fang & Broussard, 2024). Faculty can then fine-tune the wording and expectations. This ensures that assessments come with clear expectations and that they align tightly with the outcomes – because the AI uses the language of the outcomes in generating the rubric. The result is more consistency and transparency in how students are evaluated.
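As with outline drafting, rubric generation is largely a matter of feeding the outcome language back to the model. The hypothetical helper below builds such a rubric prompt; the function and wording are our own illustration, and, as noted above, faculty still edit the resulting criteria and performance descriptors.

```python
# Compact sketch: prompting for a first-draft analytic rubric that reuses the
# outcome language. Helper name and wording are illustrative only.
def rubric_prompt(assignment: str, outcomes: list[str], levels: list[str]) -> str:
    return (
        f"Draft an analytic rubric for: {assignment}.\n"
        f"Criteria must reuse the language of these outcomes: {'; '.join(outcomes)}.\n"
        f"Performance levels: {', '.join(levels)}. Return a criteria-by-level table."
    )

print(rubric_prompt(
    "argumentative essay",
    ["Evaluate arguments in a written essay"],
    ["exemplary", "proficient", "developing"],
))
```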

3.4.3. Feedback and Analytics

AI doesn’t stop at helping create assessments; it also plays a role in providing feedback on student submissions. Automated grading systems using AI are increasingly capable of evaluating objective responses and even open-ended essays to some extent. For example, some large online programs employ AI-based essay scoring to provide immediate feedback to students on practice essays (though final grading may still be done by humans). Natural language processing can flag areas in an essay that may need improvement (like unclear thesis or lack of evidence) and give students a chance to revise. Moreover, sentiment analysis and other analytics on student feedback (course evaluations, forum posts) can inform instructors about aspects of the course that might need redesign (Chu & Ashraf, 2025). One AI-based system analyzed thousands of student feedback comments and detected that students in an AI-personalized course felt more positive about their learning experience (identified through sentiment trends) compared to traditional courses (Chu & Ashraf, 2025). This kind of meta-assessment – AI evaluating the course itself through student voices – provides designers with data to continuously improve course assessments and content.

3.4.4. Academic Integrity and Rethinking Assessment

The advent of AI that students can use (like essay generators) has understandably raised concerns about traditional assessments (e.g. take-home essays, multiple-choice quizzes) being vulnerable to dishonesty or outsourcing to AI. Our findings show that institutions are responding in two ways: detection and redesign. On the detection front, there was an initial flurry of AI-detection tools (e.g. GPTZero, Turnitin’s AI detector) aimed at catching AI-generated student work (Fang & Broussard, 2024). However, purely punitive or detection-oriented approaches are limited and can even be counterproductive. An alternative, more forward-thinking response is to redesign assessments to be “AI-aware.” This means creating evaluation methods that either integrate AI use transparently or emphasize skills that AI cannot easily fake. For instance, some universities have introduced more oral examinations, in-person debates, or hand-written assignments to ensure authenticity of student work in an AI-permeated environment. Others allow AI tools but require students to document their process and reflect on how they used the AI, thus turning it into a learning opportunity rather than a cheating tool. A case in point: some professors now assign tasks like “use an AI tool to help you draft an essay outline, then write the essay and include a paragraph on how the AI output was used and where you needed to improve upon it.” This kind of assessment teaches responsible use of AI and critical thinking about AI outputs.
At the institutional policy level, we see emerging frameworks instructing educators to be transparent with students about AI use and to educate them on ethical use (Fang & Broussard, 2024). For example, the University of Hong Kong (as reported in Moorhouse et al., 2023) took an approach of allowing AI tools in assignments under certain conditions and focusing on teaching students about originality and attribution in the context of AI – essentially embedding academic integrity education into the curriculum design. Likewise, some institutions (notably mentioned in the UNESCO survey) have made AI literacy a mandatory element and concurrently are “redesigning the university’s assessment system” to account for AI (UNESCO, 2025). This often means shifting to more authentic assessments: project-based learning, portfolios, and demonstrations of higher-order thinking that AI cannot easily produce on its own (e.g. personal reflections, hands-on lab work, or live problem-solving). Such changes are a direct result of acknowledging AI’s presence and turning it from a threat into an ally in learning. They ensure assessments remain valid measures of student learning in a world where AI is readily accessible.

3.4.5. Challenges and Ethical Issues in AI-Augmented Assessment

Despite the benefits, using AI in assessment comes with challenges that need addressing. Bias is one – if an AI grading system has biases (say against non-native English writing styles or certain perspectives), it could unfairly score students (Fang & Broussard, 2024). Therefore, whenever AI is involved in grading or feedback, human oversight is critical. Instructors or assessment designers should regularly audit AI-generated feedback for accuracy and fairness, and AI models should be trained on diverse student work to mitigate bias. Data privacy is another consideration: assessments generate sensitive data about student performance. If AI tools are cloud-based, institutions must ensure compliance with privacy laws and that data is secure (an issue we will revisit under governance). Some institutions opt for on-premise AI solutions or those that don’t store identifiable student data externally to address this.
In conclusion, when used thoughtfully, AI is reshaping assessment by making it more personalized, more efficient, and arguably more meaningful. It allows for immediate feedback loops and continuous assessment strategies that align with learning processes, moving away from reliance on high-stakes testing alone. At the same time, it forces educators to rethink what and how we assess in light of AI capabilities. These findings underscore the importance of including assessment design principles and integrity safeguards in any AI adoption framework for curriculum design.

3.5. Faculty Support and Development in AI Adoption

No AI-driven curriculum innovation can succeed without the buy-in and competence of faculty and instructional designers. Our findings emphasize that faculty support and development is both an enabler of AI adoption and, if lacking, a major barrier. Globally, institutions that have progressed in integrating AI into program and course design have invested significantly in building the AI literacy and skills of their educators.

3.5.1. Faculty Attitudes and Readiness

Initial attitudes among faculty toward AI in education range from excitement and curiosity to hesitation and fear. In the early days following generative AI’s rise (circa 2023), many professors were deeply suspicious and concerned primarily about cheating (Fang & Broussard, 2024). However, as understanding grows, more faculty are seeing AI as a tool rather than a threat. A study referenced in our review found that instructional designers generally view generative AI positively as a means to enhance their work (Muncey, 2025). That positive disposition is crucial – it correlates with faster adoption of AI tools in course design, as noted earlier. But achieving this mindset shift often requires proactive support. Institutions that treat AI as a strategic priority tend to conduct awareness campaigns, workshops, and pilot projects that involve faculty from the outset (CCA, 2025). For example, the University of Massachusetts Lowell provided mini-grants to 35 faculty to experiment with AI in their classrooms, coupled with mentorship and peer learning opportunities (CCA, 2025). This approach lowered the barrier for faculty to try AI, creating a community of practice and showcasing successful use cases to others.

3.5.2. Training Programs

Comprehensive faculty development programs around AI are emerging as a best practice. These programs often start with general AI literacy – ensuring faculty understand what AI is (and isn’t), the types of tools available, and basic concepts such as machine learning, data privacy, and algorithmic bias. From our sources, we noted that institutions are incorporating AI modules into existing professional development. Some, like the Louisiana higher education system, even include faculty in the AI literacy micro-credential offered to students (CCA, 2025), so that faculty and students share a common foundation. According to Chu & Ashraf (2025), successful AI integration “requires comprehensive faculty development programs that address both technical competencies and pedagogical transformation”. In practice, this means training not just on how to use the tools but also on how to redesign teaching strategies in light of AI. Key components of such training include:
[1]. Technical skill-building: Hands-on workshops on using specific AI tools (from data analytics dashboards to content generators), e.g., a workshop on how to prompt ChatGPT effectively for course design tasks.
[2]. Pedagogical integration: Sessions on rethinking course design, assessment, and student engagement when AI can handle certain tasks. This often involves introducing frameworks like the SAMR model (Substitution, Augmentation, Modification, Redefinition) to help faculty evaluate how AI might substitute for or augment what they do.
[3]. Ethical and legal considerations: Training in AI ethics is critical (Chu & Ashraf, 2025). Faculty need to understand issues of bias, intellectual property (e.g., if using AI-generated content, what are the copyright implications?), privacy (handling student data in AI systems), and how to ensure inclusivity (making sure AI tools are accessible to all students, including those with disabilities).
[4]. Addressing mindset (TAM factors): The Technology Acceptance Model factors, such as perceived usefulness and ease of use, should be deliberately addressed to overcome faculty resistance (Chu & Ashraf, 2025). If faculty see clear value (e.g., reduced workload, improved student outcomes) and feel confident they can use the technology, they are more likely to embrace it. Thus, training often includes success stories, data on efficacy (such as improved retention rates from an AI-enhanced course), and user-friendly tutorials to build self-efficacy.
Continuous professional development is emphasized because AI tools and best practices evolve quickly (Chu & Ashraf, 2025). Some institutions have created ongoing support structures – for example, an “AI in Teaching” community of practice, regular webinars on new tools, or an AI pedagogy help desk. A notable suggestion is to establish faculty AI champions or fellows: faculty who are early adopters get additional training and then serve as mentors or consultants to their colleagues. UMass Lowell’s model of having a Faculty Fellow in AI (with a stipend to lead and advise the mini-grant projects) is one such approach (CCA, 2025). This peer-support model often resonates better than top-down mandates, as faculty are more receptive to learning from colleagues who have classroom experience with the tools.

3.5.3. Infrastructure for Support

Faculty support also involves providing the necessary infrastructure and resources. This includes access to AI tools (licenses for educational versions of AI software, integration of AI features into the LMS), as well as technical support. If a faculty member wants to use an AI-based adaptive learning platform, the institution’s IT and instructional design staff should be ready to assist with integration, data issues, and troubleshooting. The Complete College America (CCA) playbook describes how Arizona State University launched an innovation challenge around AI and then provided the winners (faculty/staff teams) with resources like enterprise AI tool licenses and IT support to implement their ideas (CCA, 2025). This highlights the importance of not just encouraging innovative ideas but backing them up with the means to implement them effectively.

3.5.4. Overcoming Challenges in Faculty Adoption

A lack of faculty engagement can stall any AI initiative. The barriers frequently cited include fear of being replaced, lack of time to learn new tools, or skepticism about AI’s relevance to one’s discipline. Addressing these requires change management strategies. Several sources, including Chu & Ashraf (2025), stress the need for “robust change management initiatives” alongside training (Chu & Ashraf, 2025). This might involve clear communication from leadership about the role of AI (emphasizing augmentation, not replacement of the educator), providing incentives or recognition for faculty who innovate with AI, and involving faculty in policy-making so they feel ownership (more on stakeholder engagement in the next theme). An interesting insight from one report is that when faculty themselves use AI in course development (even just to draft content), they become more attuned to how AI-generated student work might look, making them better at detecting academic misconduct or guiding proper use (Fang & Broussard, 2024). In other words, faculty AI use builds a form of AI literacy that can close the gap between student and teacher, fostering a more empathetic and informed approach. This suggests that encouraging faculty to experiment with AI underpins not only better teaching design but also better supervision of students’ AI use.
In summary, faculty support and development is the linchpin of AI adoption in curriculum design. Institutions that have made headway provide a template: invest in extensive training (technical, pedagogical, ethical), nurture a collaborative environment with champions and peer learning, and ensure the necessary infrastructure and policies are in place to support faculty efforts. Without these, even the best AI tools will lie unused or be used poorly, and the potential benefits to programs and courses will not materialize. Our framework will therefore incorporate capacity building as a foundational element, aligning with the consensus that people—not technology alone—drive educational innovation (Chu & Ashraf, 2025).

3.6. Enablers, Challenges, and Ethical Considerations

Drawing from the above thematic findings, we synthesize here the cross-cutting enablers and challenges (including ethical issues) that affect AI adoption in program and course design. Understanding these factors is crucial, as they inform the conditions under which an AI integration framework will be most effective.

3.6.1. Key Enablers

[1]. Leadership and Strategic Vision: A clear strategic commitment from institutional leadership is a top enabler. Universities that declared AI a strategic priority (over half of respondents in one Educause survey [EDUCAUSE, 2023; CCA, 2025]) have made more progress, as leadership can mobilize resources and set supportive policies. Leaders play a role in asking the right question: “What kind of AI-enabled institution do we want to be?” (Webb, 2025), which guides coherent action. The presence of an AI strategy or task force signals to all stakeholders that this is an institutional direction, not just a fad.
[2]. Policy Frameworks and Governance Support: Having an institutional policy or framework on AI use greatly helps coordinate efforts. About 19% of universities in the UNESCO survey already had a formal AI policy and another 42% were developing one (UNESCO, 2025; Sangwa et al., 2025). Such frameworks provide guidelines on acceptable use, ethical principles, and roles and responsibilities, which reduce uncertainty for faculty and students. Moreover, a collaborative and iterative approach to policy – involving consultations with faculty and students – was noted as more effective than a purely regulatory stance (UNESCO, 2025). In other words, policies that focus on enabling responsible innovation (through education and engagement) rather than merely policing tend to foster a more positive environment for adoption.
[3]. Investment in Infrastructure and Tools: Financial and technical support is essential. Institutions that invest in modernizing their IT infrastructure (high-speed networks, cloud services, AI-enabled LMS, data warehouses) lay the groundwork for AI integration (Chu & Ashraf, 2025). AI often requires handling large data and computing power, so infrastructure upgrades (and possibly partnerships with tech providers) are enablers. For example, having enterprise licenses for AI software (like the OpenAI educational license or Microsoft’s AI tools within Office 365) readily available encourages experimentation. A robust data environment where curriculum and student data are well organized (e.g. using standards like CTDL for competencies [Credential Engine, n.d.]) also makes it easier to apply AI effectively.
[4]. Cross-disciplinary and External Collaboration: Collaboration is a recurring enabling factor. Internally, bringing together educators, IT staff, data scientists, and librarians (who often manage digital resources) can create fertile ground for AI projects. Externally, learning from and partnering with other institutions accelerates adoption. The education sector globally has been collaborative, sharing case studies and resources through networks (Complete College America’s coalition of institutions for AI (CCA, 2025) is one example). In the fast-changing AI landscape, no institution has to (or can) do it alone (Webb, 2025). Collaborating judiciously with ed-tech companies can also help; vendors bring expertise and scalable solutions, though institutions must ensure alignment with their needs and values. Jisc’s framework, for instance, suggests that institutions “partner rather than build” AI solutions in-house to avoid technical debt and leverage existing innovation (Webb, 2025). This strategy enables colleges to adopt AI tools that are already robust, focusing effort on integration and usage training rather than core development.

3.6.2. Major Challenges

[1]. Lack of Awareness and Skills: As identified earlier, a significant barrier is many educators’ limited understanding of AI’s functionality and potential (and sometimes an exaggerated fear of its capabilities). Without intervention, this knowledge gap can lead to resistance or improper use. One study encapsulated this challenge: “barriers such as a lack of awareness, insufficient training, ethical concerns, and the need for policy guidance persist”, even among enthusiastic institutions (Chu & Ashraf, 2025). Essentially, if faculty and staff do not know how to use AI tools or appreciate why they might enhance learning design, adoption stalls. This challenge reinforces why training (addressed above) is so crucial.
[2]. Technical and Data Limitations: Not all institutions have the technical maturity to implement AI-driven design at scale. Challenges include outdated IT systems that don’t integrate well with new AI tools, insufficient data quality for analytics (e.g. if learning outcome data, content metadata, or student performance data are not collected or standardized, AI recommendations will be less useful), and budget constraints to acquire needed technology. Smaller or under-resourced institutions, such as many in the developing world, may struggle with these issues, widening an AI-adoption gap. Moreover, developing countries may face a digital divide: limited internet connectivity or hardware for students and staff, which hampers use of advanced AI technologies (Jin et al., 2024).
[3]. Ethical and Privacy Concerns: Ethical issues are top-of-mind challenges frequently cited by stakeholders. These include data privacy (ensuring student data used in AI systems is protected and compliant with regulations like FERPA or GDPR), algorithmic bias (AI tools may perpetuate biases present in training data, leading to unfair or inaccurate outcomes in curriculum recommendations or automated grading), transparency (often called the “black box” problem – educators might mistrust AI suggestions if they don’t know how they were generated), and broader implications like the impact on student agency and critical thinking. In the UNESCO survey, among those hesitant to use AI, common barriers were “ethical and environmental concerns, limited understanding or lack of access, and philosophical resistance” (UNESCO, 2025). Instances of ethical issues have already arisen: e.g., disputes over authorship when AI is used in research, or cases of students leaning too heavily on AI tools for assignments (UNESCO, 2025). Such incidents create caution. Institutions are recognizing the need for guardrails: clear ethical guidelines and possibly technical measures (like bias audits, explainability requirements) to ensure AI is used in alignment with human rights, equity, and academic values (Cardona et al., 2023). This is an area where higher ed can learn from frameworks in other sectors (AI ethics guidelines in healthcare, for example), but also needs education-specific policies because the context (learning environments, student vulnerability, academic integrity) has unique factors (Cardona et al., 2023).
[4]. Cultural Resistance and Change Management: Beyond lack of skills, there can be ideological resistance. Some educators feel that reliance on AI might dilute the human element of teaching or that it’s a passing tech fad not worth upending routines for. Change fatigue can also be an issue – faculty have seen many ed-tech initiatives come and go. Convincing them that AI is transformative and here to stay, in a positive way, is a challenge. This underscores the importance of involving faculty in shaping how AI is used (to avoid feelings of imposition) and demonstrating clear value. Another cultural aspect is the fear of replacement – instructors might worry AI could one day replace aspects of their job (for example, automated courseware might make them feel their role is reduced). While current AI is far from replacing educators, these anxieties must be acknowledged and addressed by reassuring messaging: e.g. emphasizing that “AI can do the heavy lifting of routine tasks, freeing you to do the more impactful work of mentoring and engaging students” (Fang & Broussard, 2024).
Figure 2. Two-dimensional thematic map displaying how reported enablers (left) and challenges (right) cluster at the programme-level (upper rows) and course-level (lower rows) of curriculum design. Shading intensity is proportional to the weight of evidence in the reviewed literature, allowing monochrome reproduction without loss of meaning.

3.6.3. Ethical and Policy Considerations

Many of these considerations have been woven into the themes above; we summarize them here. Responsible AI use is a thread running through all of the challenges and enabling strategies. Ethical AI in education requires:
[1]. Robust governance: Institutions need ethical frameworks and perhaps dedicated bodies (such as an AI ethics committee) to oversee AI initiatives (Chu & Ashraf, 2025). These bodies can vet new tools for compliance with privacy standards, require bias testing, and set usage policies. For example, policies might mandate that AI decisions (like curriculum changes or grading) are reviewable by humans and that students are informed when AI is used (Fang & Broussard, 2024).
Figure 3. Concentric-circle model of ethical governance, with students at the core, surrounded successively by the programme committee, faculty, the AI ethics advisory board, and an outer ring of policy documents.
[2]. Data governance: Collecting and using data for AI should follow principles of consent, minimal necessary use, and security. Setting up data governance frameworks, as recommended by researchers (Chu & Ashraf, 2025), is vital. This includes clarity on who owns the data (if external AI services are used, contracts should strictly limit vendor access to identifiable student data) and ensuring data is anonymized when appropriate.
[3]. Equity and Inclusion: AI adoption should consider equity: will all students benefit, or could some be left behind (e.g., those less tech-savvy or without access to certain devices)? Alternative pathways may need to be in place for such students. Also, content generated by AI should be reviewed for cultural biases or insensitivity. An interesting initiative by some universities is including students in the development of AI usage guidelines (Mowreader, 2024), to capture diverse perspectives and build a culture of trust.
[4]. Compliance and Accreditation: On the policy side, external compliance issues act as both constraints and drivers. For instance, any system using student data must comply with laws such as the GDPR in Europe [EU, 2016/679] and FERPA in the USA. Additionally, professional accrediting bodies for certain programs may have to approve AI-driven changes to curriculum (e.g., a nursing program cannot simply let AI alter competencies without ensuring they still meet accreditation standards). This calls for alignment: as one source notes, aligning with emerging international standards like UNESCO’s AI ethics recommendation and monitoring evolving legislation is necessary (Chu & Ashraf, 2025). There is also movement at national levels (the U.S. has a proposed AI Bill of Rights blueprint [US Gov, 2022; Cardona et al., 2023]) which, while not education-specific, will influence best practices.
In conclusion, successful integration of AI in curriculum design is a balancing act. Institutions must leverage enablers such as clear strategic leadership, robust data infrastructure, and faculty development while actively mitigating risks related to bias, surveillance, deskilling, and mission-drift. These findings collectively inform the design of our AI adoption framework. The framework must be built on the enablers (to reinforce them) and contain strategies to address the challenges and ethical considerations, ensuring that AI is adopted in a way that is sustainable, responsible, and aligned with the educational mission.

3.7. Evidence Profile Across Themes

The thematic synthesis above indicates that AI is already reshaping curriculum design at programme and course levels, yet the strength and character of the evidence differ across themes. To avoid an overly anecdotal reading of the results, Table 1 summarises the weight of evidence for each major theme, the geographical spread of the underlying studies, and typical quantitative or qualitative outcomes reported. This profile is based on the forty studies and reports included in the review, supplemented by recent systematic reviews of AI in higher education curriculum and personalised learning (Chen, 2025; Liang et al., 2025; Merino-Campos, 2025; Mounkoro et al., 2024).
Taken together, this evidence profile suggests that the strongest empirical support currently lies around adaptive assessment and personalised learning trajectories, where formal experimental and quasi-experimental studies consistently report meaningful gains in achievement and engagement compared to non-adaptive designs (Chen, 2025; Merino-Campos, 2025). By contrast, skills-mapping and governance-related themes rely more heavily on qualitative case studies, policy analyses, and sector reports. This asymmetry matters for transferability: institutions can be relatively confident in adopting adaptive AI tools where effect sizes have been repeatedly demonstrated, while treating the programme-level and governance-level recommendations in this review as theoretically robust but still in need of extensive empirical validation across diverse higher-education systems (Liang et al., 2025; Mounkoro et al., 2024).

4. AI-Enabled Framework for Program and Course Design and Development

Building on the findings, we propose a comprehensive AI-enabled framework for program and course design in higher education. The framework is structured around two design levels (program-level and course-level) with cross-cutting foundations that ensure ethical, effective, and sustainable adoption. The framework’s aim is to guide institutions through a holistic adoption process: from defining learning outcomes informed by AI insights, through designing courses with AI assistance, to implementing governance and support systems. It is intended to be scalable and adaptable – applicable to various institution sizes, types, and regions – serving as a flexible blueprint rather than a rigid recipe. In line with best practices, it encourages planning across all key areas while allowing local customization (Webb, 2025).
Below, we detail each component of the framework, accompanied by visual cues (diagrams or tables can be used by institutions to represent the relationships). For clarity, we divide the framework into: Program-Level Guidance, Course-Level Guidance, and Cross-Cutting Issues that span both levels. Each sub-section includes evidence-based strategies drawn from the review.

4.1. Program-Level Guidance

At the program level, the framework addresses how AI can inform the design and continuous improvement of academic programs (degrees, majors, certificates) as a whole. Key elements include:

4.1.1. Defining Program Learning Outcomes with AI Insight

Begin by establishing or revising program learning outcomes (PLOs) using data-driven insight. Faculty, administrators, and industry advisors collaborate, aided by AI analytics that identify trending knowledge and skills in the discipline. For instance, use AI to analyze thousands of job descriptions or professional standards documents to extract common competencies needed in graduates (Thompson, 2023). This ensures PLOs are aligned with current and future workforce needs (e.g., a business program might add an outcome on “data-driven decision making” after AI analysis shows its growing importance in business roles). AI tools can also help word outcomes effectively (as discussed, aligning with Bloom’s taxonomy). The result is a set of program outcomes that are both academically rigorous and industry-relevant, forming a strong foundation for curriculum development.
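As a minimal illustration of such a labour-market scan, the sketch below counts skill-lexicon matches across a corpus of job postings and ranks candidate competencies for PLO discussions. The lexicon, patterns, and sample posting are hypothetical; a production pipeline would draw on a formal taxonomy (e.g., ESCO or O*NET) or an LLM-based extractor.

```python
# Minimal sketch: surface candidate competencies for PLO discussions by
# counting skill-lexicon hits across scraped job descriptions.
import re
from collections import Counter

SKILL_LEXICON = {  # illustrative skill -> regex pattern map
    "data-driven decision making": r"data[- ]driven decision",
    "machine learning": r"machine learning",
    "stakeholder communication": r"stakeholder (communication|management)",
    "financial modelling": r"financial model(l)?ing",
}

def rank_competencies(job_descriptions):
    """Rank lexicon skills by the number of postings mentioning them."""
    counts = Counter()
    for text in job_descriptions:
        for skill, pattern in SKILL_LEXICON.items():
            if re.search(pattern, text, flags=re.IGNORECASE):
                counts[skill] += 1  # count postings, not repeated mentions
    return counts.most_common()

postings = ["We need data-driven decision making and stakeholder management ..."]
print(rank_competencies(postings))
```

The ranked list is an input to the faculty-industry conversation, not a substitute for it.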

4.1.2. Skills Mapping and Curriculum Alignment

Develop a program competency map where each PLO is mapped to specific skills or competencies, and each course in the program is mapped to the PLOs it supports. This is where an AI-powered skills mapping tool is highly valuable. The map should resemble the University of Phoenix’s skills-aligned curriculum map (Thompson, 2023): a matrix linking course outcomes to program outcomes and further to external skill frameworks. In practice, one might use an outcomes management system (some institutions use platforms like Coursetune or home-grown databases) augmented by AI to maintain this alignment. For example, when planning a new course or modifying one, faculty consult the AI-curated map to see which program outcomes need reinforcement and what skills should be covered. This prevents gaps and redundancies: no program outcome is left unaddressed, and courses build on each other rather than repeat content. The AI can flag misalignments (e.g., a course that seems to cover skills unrelated to any PLO) so they can be corrected, thus serving as a quality check. Additionally, making this map student-facing (e.g., on program websites or advising tools) helps students understand how their courses interconnect and lead to skill development (Thompson, 2023).
Table 2. Example of a skills-aligned curriculum map linking course outcomes to program outcomes and an external skill framework.
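To show how alignment checks can be automated over a map like Table 2, the sketch below flags uncovered program outcomes, “orphan” courses, and thinly supported outcomes. The course and outcome identifiers are illustrative, not drawn from any reviewed institution.

```python
# Minimal sketch of an automated alignment check over a skills-aligned
# curriculum map. Course/PLO identifiers are hypothetical.
COURSE_TO_PLOS = {
    "BUS101": {"PLO1", "PLO2"},
    "BUS205": {"PLO2"},
    "BUS310": set(),  # flagged below: supports no program outcome
}
PROGRAM_OUTCOMES = {"PLO1", "PLO2", "PLO3"}

def check_alignment(course_map, plos):
    """Return curriculum-map gaps, orphan courses, and thinly covered PLOs."""
    covered = set().union(*course_map.values())
    return {
        "uncovered_outcomes": plos - covered,  # no course addresses these
        "orphan_courses": [c for c, p in course_map.items() if not p],
        "thin_outcomes": [p for p in plos
                          if sum(p in s for s in course_map.values()) == 1],
    }

print(check_alignment(COURSE_TO_PLOS, PROGRAM_OUTCOMES))
```

A committee would review each flag before acting, in line with the human-oversight principle above.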

4.1.3. Curriculum Sequencing and Pathways

AI can assist in optimizing the sequence of courses and content across a program. Through analysis of prerequisites, historical student performance data, and even scheduling constraints, AI might suggest the ideal order for courses or identify bottlenecks. For example, if an AI analysis of student transcripts finds that those who take a particular pair of courses concurrently struggle, it might recommend sequencing them differently. Conversely, AI could reveal that certain skills need earlier introduction to scaffold advanced courses, prompting a curriculum revision. Program chairs can use these insights to design pathways (including alternative paths, like concentrations or elective groupings) that maximize student success. Some adaptive degree planning tools (often AI-backed) allow institutions to simulate how changes (like introducing an AI literacy module in year 1) impact outcomes down the line, supporting evidence-based decisions in program design.
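A minimal version of this transcript analysis is sketched below: it computes, for course pairs taken concurrently, the share of students who passed both, flagging low-rate pairs for sequencing review. The record layout and thresholds are assumptions for illustration; a real implementation would add sample-size and confounder controls.

```python
# Minimal sketch: scan historical transcripts for course pairs that are
# taken concurrently and associated with low pass rates.
# Hypothetical record layout: (student_id, term, course, passed).
from collections import defaultdict
from itertools import combinations

transcripts = [
    ("s1", "2024A", "MATH110", True), ("s1", "2024A", "STAT120", False),
    ("s2", "2024A", "MATH110", False), ("s2", "2024A", "STAT120", False),
]

def risky_concurrent_pairs(records, min_students=2, max_pass_rate=0.5):
    """Return concurrent course pairs whose joint pass rate is low."""
    by_student_term = defaultdict(list)
    for sid, term, course, passed in records:
        by_student_term[(sid, term)].append((course, passed))
    outcomes = defaultdict(list)  # pair -> flags: passed both courses?
    for courses in by_student_term.values():
        for (c1, p1), (c2, p2) in combinations(sorted(courses), 2):
            outcomes[(c1, c2)].append(p1 and p2)
    return {pair: sum(f) / len(f) for pair, f in outcomes.items()
            if len(f) >= min_students and sum(f) / len(f) <= max_pass_rate}

print(risky_concurrent_pairs(transcripts))  # {('MATH110', 'STAT120'): 0.0}
```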

4.1.4. Integrating AI Literacy and Ethics into Programs

As a cross-cutting program design principle, ensure every program considers where and how to embed AI literacy and ethical understanding related to the field. For instance, an engineering program might integrate content on AI in engineering practice; a literature program might include a discussion on AI in creative writing or textual analysis. The framework suggests making AI literacy an outcome for all students, as University of Florida did, positioning it as part of the general education or core curriculum (Southworth et al., 2023). This doesn’t mean all students become AI experts, but they should graduate with awareness and basic skills to use AI tools relevant to their domain responsibly. AI literacy modules can be standalone courses or infused within existing courses, but designing them at program level ensures consistency. UNESCO’s development of AI competency frameworks for students and teachers (UNESCO, 2025) can serve as a resource for what such literacy should cover (e.g., understanding AI capabilities, limitations, ethical use, data privacy).

4.1.5. Program Review and Analytics

Incorporate AI into the ongoing program review and quality assurance cycle. Instead of relying solely on periodic manual reviews (every few years), use AI analytics for continuous monitoring. This involves collecting program-level data – student course evaluation sentiments, enrollment patterns, graduation rates, job placement stats, etc. – and having AI systems analyze trends and flag issues. For example, if an AI notices that a particular program learning outcome is consistently rated low in mastery (perhaps via an analysis of assessment scores from capstone projects), it can alert faculty to revisit how that outcome is taught and assessed across the curriculum. Similarly, AI-driven text analysis of open-ended student feedback might reveal common suggestions or complaints about the program structure. By embedding these analytics, programs become more responsive. The framework would recommend dashboards or reports that program directors and curriculum committees review each term/year, summarizing AI-derived insights about the program’s health. This aligns with the idea of “continuous evaluation processes” highlighted by Chu & Ashraf, which allow adjusting strategic plans based on real-world data and evolving needs (Chu & Ashraf, 2025). An AI-informed program review also helps with accreditation: evidence and improvements are data-backed, making it easier to demonstrate accountability and effectiveness to accrediting bodies (Chu & Ashraf, 2025).
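One such monitoring rule is sketched below: it flags program learning outcomes whose mean rubric score on capstone assessments falls below a review threshold. The 0-4 rubric scale, the records, and the threshold are illustrative assumptions.

```python
# Minimal sketch of a continuous-monitoring check: flag program learning
# outcomes whose average rubric mastery falls below a review threshold.
from collections import defaultdict
from statistics import mean

capstone_scores = [  # hypothetical (PLO, rubric score on a 0-4 scale)
    ("PLO1", 3.4), ("PLO1", 3.1),
    ("PLO2", 1.8), ("PLO2", 2.1),  # likely flagged for curriculum review
]

def flag_low_mastery(scores, threshold=2.5):
    """Return PLOs whose mean capstone rubric score is below the threshold."""
    by_plo = defaultdict(list)
    for plo, score in scores:
        by_plo[plo].append(score)
    return {plo: round(mean(v), 2) for plo, v in by_plo.items()
            if mean(v) < threshold}

print(flag_low_mastery(capstone_scores))  # {'PLO2': 1.95}
```

Flags of this kind would feed the termly dashboard reviewed by program directors and curriculum committees.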

4.1.6. Governance and Stakeholder Input at Program Level

At this design stage, establish an inclusive governance process. Form a program AI advisory group (or leverage existing curriculum committees) that includes faculty, student representatives, employers (for vocational programs), and possibly an AI/analytics expert from the institution. This group oversees the alignment process and makes key decisions with AI input as a guide, not as an oracle. The framework emphasizes that AI suggestions should always be weighed by human judgment to ensure they fit the institutional mission and context (Fang & Broussard, 2024; Sangwa & Mutabazi, 2025b). Clear roles should be defined: for example, faculty decide on academic matters, data experts ensure the AI tools are functioning and interpretable, students provide perspective on workload or sequence feasibility, and employers validate the industry alignment. This echoes findings that “clear roles and responsibilities among faculty, students, and administrators” are vital for successful integration (Jin et al., 2024). Through this collaborative approach, program-level AI adoption remains human-centered and mission-aligned.
By following these program-level guidelines, an institution ensures that before even getting into individual course design, the macro-level structure of a curriculum is sound, up-to-date, and outcomes-driven, leveraging AI as an intelligent assistant in decision-making. A well-aligned program sets up every course design effort for success by providing clarity on goals and content scope.

4.2. Course-Level Guidance

The course-level component of the framework focuses on designing and developing individual courses (or modules) using AI tools and insights. It operationalizes the program’s objectives into concrete learning experiences. Key elements include:

4.2.1. Course Learning Outcomes Alignment

Ensure each course’s learning outcomes (CLOs) align with the program outcomes and leverage AI assistance for precision. Building from the program competency map, course designers identify which program outcomes and skills the course addresses and then formulate specific CLOs. AI tools can help by suggesting outcome language that aligns with the intended level. As discussed, generative AI can refine outcomes to the appropriate cognitive complexity (e.g., suggesting stronger verbs or clearer criteria) (Fang & Broussard, 2024). The framework recommends a quick AI validation step: after writing CLOs, input them into an AI tool to check alignment and coverage. For instance, one might ask, “Do these outcomes cover the concept of X required by the program outcome?” and an AI trained on the curriculum map can answer or highlight gaps. This acts as a second set of eyes to ensure nothing important is omitted. However, faculty ultimately adjust the outcomes to ensure they fit the course level and student context.
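This validation step can be as lightweight as a reusable prompt template. The sketch below assembles such a prompt; the wording is our own illustration, and the final send step (pasting into an approved tool or calling an API client) is left to local infrastructure.

```python
# Minimal sketch of the "AI validation step": build a structured prompt
# asking a chat model whether course outcomes cover a mapped program outcome.
def build_alignment_prompt(course_outcomes, program_outcome):
    """Assemble a review prompt from CLOs and the target PLO."""
    clo_list = "\n".join(f"- {c}" for c in course_outcomes)
    return (
        "You are reviewing a university course for curriculum alignment.\n"
        f"Program outcome: {program_outcome}\n"
        f"Course learning outcomes:\n{clo_list}\n"
        "Answer: (1) Do the course outcomes collectively cover the program "
        "outcome? (2) List any gaps. (3) Suggest revised outcome wording "
        "using measurable verbs at the appropriate Bloom level."
    )

prompt = build_alignment_prompt(
    ["Apply regression models to business data",
     "Interpret model output for non-technical audiences"],
    "Graduates make data-driven decisions in organisational contexts.",
)
print(prompt)  # paste into an approved AI tool, or send via an API client
```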

4.2.2. Instructional Design with AI Co-Creation

Utilize AI as a co-designer throughout the course planning process. The backward design process can be augmented at each stage:
(i). Content Scoping and Course Mapping: Use AI to draft a high-level outline of the course topics and modules (Fang & Broussard, 2024). Provide the AI with the course outcomes and the time frame (e.g., a 14-week semester), and let it propose a logical breakdown of content. The instructor reviews and edits this structure, ensuring it meets any external requirements (syllabus standards, credit hour policy) and personal teaching style.
(ii). Learning Activities and Lesson Planning: For each module or week, AI can help generate ideas for learning activities (e.g., case studies, discussions, projects, experiments). The framework suggests employing AI in brainstorming mode – for example, “Suggest 3 interactive activities to teach concept Y” – to yield creative approaches that the instructor might not have considered. Tools like Google’s Gemini have shown prowess in generating lesson plan elements including methods and timings (Fang & Broussard, 2024). Designers take these suggestions as starting points, then infuse their own expertise and knowledge of their students to refine the plans.
(iii). Content Creation: Develop course content (readings, slides, videos, examples) with AI support. The framework encourages instructors to use content generation AI to produce first drafts of materials. For instance, if creating a slideshow on Topic Z, an AI like Tome or PowerPoint’s AI assistant could draft slides with bullet points, which the instructor then populates with specifics and narratives (Muncey, 2025). If an example or case is needed to illustrate a concept, the instructor could prompt an AI to generate a relevant scenario, then adjust details for accuracy. This significantly cuts down development time and allows instructors to focus on curating quality rather than creating everything from scratch. A caution embedded in the framework is to always review and edit AI-generated content for correctness, context fit, and inclusion. AI content can occasionally contain errors or biases (Fang & Broussard, 2024), so human vetting is non-negotiable.
(iv). Multimedia and Resources: Take advantage of AI to generate or curate multimedia. For example, if a course could benefit from a short explainer video, an instructor might use an AI video generator (like Synthesia for a talking-head video, or DALL-E for illustrative images to include in slides) (Muncey, 2025). Additionally, AI can help scour open educational resources – by describing to an AI what you need (“a simulation that demonstrates principle Q”), it might find or even create one if it has that capability. The framework encourages including diverse resource types (text, audio, visual, interactive) to cater to different learning styles, and AI’s generative abilities can fill gaps in resources. Accessibility should be checked (e.g., ensure AI-generated images have alt text and videos have captions – some AI tools also generate these automatically).
(v). AI-Aided Assessment Design: Integrate AI into creating and improving assessments, as follows:
(a). Assessment Creation: Use AI to generate an initial set of assessment items aligned to each Course Learning Outcome (CLO). For objective tests, AI can produce multiple-choice questions or short-answer prompts from the content (with the caveat that faculty check for correctness and clarity) (Fang & Broussard, 2024). For essays or projects, AI might suggest prompts or scenarios. Importantly, AI can also suggest a variety of assessment types for a given outcome – e.g., “To assess critical thinking on topic X, you could use a debate, a reflective essay, or a case analysis.” This helps instructors consider alternative assessments beyond their habitual choices.
(b). Rubric Drafting: Develop rubrics or scoring guides using AI. Provide the criteria and perhaps example performance descriptions to an AI, and it can structure a rubric with levels of achievement (Fang & Broussard, 2024). The instructor then refines language and standards. The benefit is consistency and time saved, ensuring that the rubric explicitly ties to outcome language (since the AI can incorporate the wording of outcomes into the rubric descriptors).
(c). Personalized and Authentic Assessments: The framework urges designing assessments that either leverage AI or are resilient to it. For instance, incorporate an assignment where students must use an AI tool and then reflect on the process (teaching responsible use) – perhaps asking students to compare their own solution to one suggested by an AI and analyze differences. Conversely, for assessments where AI use is not allowed or not appropriate, ensure they are structured to minimize temptation or ability for misuse. This might mean more in-class assessments, oral components, or highly individualized tasks as mentioned earlier. AI can assist here too: e.g., generate unique data sets or case particulars for each student, so each gets a slightly different task (reducing the chance an AI could provide a one-size answer).
(d). Adaptive Quizzing and Practice: Build in low-stakes adaptive quizzes for formative assessment. The course could include an AI-driven practice system that students use outside of class, which adapts to their learning. The design aspect is to choose a platform or system and populate it with content (questions, problems). The AI then takes over during student use, but designers should monitor the output (what items are most missed, which content many students struggle with) as this can inform teaching adjustments. Essentially, treat the adaptive system as part of the course design that continues to evolve as data comes in. (A minimal sketch of mastery-based item selection follows this list.)
(e). Feedback and Support Systems: Implement AI-powered feedback loops in the course. The framework suggests including at least one mechanism where AI provides feedback to students, complementing instructor feedback. Options include: automated feedback on quizzes (immediate scoring and explanations), writing feedback tools that highlight grammar or argument structure issues, coding auto-graders for computer science assignments, or even AI chatbots as study tutors that students can query when stuck on homework. For example, some courses now embed a “virtual TA” chatbot on the course site, which can answer FAQs or provide hints (trained on the course content). This increases responsiveness to students without overburdening instructors. However, guidelines must be given to students on the limits of such AI tutors and when to seek human help. Instructors should also get periodic reports from these AI support systems to know common questions or misconceptions arising, thereby closing the loop by addressing them in class.
(vi). Continuous Improvement and Iteration: Design the course with an iterative mindset, using AI analytics each time the course is run to improve it. After each offering, instructors should review data (ideally via an AI analytics dashboard): Which assessment questions did many get wrong? Which discussion forum had low engagement? Where did the AI tutor get most queries? By analyzing this (with AI’s help to spot patterns), instructors refine the course for the next cycle. The framework encourages setting aside time for this reflection and using AI to simulate “what-if” scenarios for changes. For example, if students struggled with a concept in week 3, an instructor might consider moving it later or teaching it differently; an AI tool might simulate student knowledge growth to predict if moving it would help, or suggest preparatory material in week 2. Over successive iterations, this leads to a highly optimized course that’s responsive to learner needs.
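As flagged in item (d), the sketch below conveys the flavour of mastery-based item selection: an exponential moving average tracks per-outcome mastery, and the next item is drawn from the weakest outcome. Commercial adaptive platforms implement far richer models (e.g., item response theory); everything here is an illustrative simplification with hypothetical data.

```python
# Minimal sketch of mastery-based adaptive item selection: pick the next
# practice question from the outcome the learner currently masters least.
import random

ITEM_BANK = {  # hypothetical outcome -> question-id mapping
    "CLO1": ["q1", "q2", "q3"],
    "CLO2": ["q4", "q5"],
}

def update_mastery(mastery, clo, correct, rate=0.3):
    """Exponential moving average of correctness per outcome (0.5 prior)."""
    mastery[clo] = (1 - rate) * mastery.get(clo, 0.5) + rate * (1.0 if correct else 0.0)

def next_item(mastery, attempted):
    """Choose an unattempted item from the weakest outcome."""
    weakest = min(ITEM_BANK, key=lambda c: mastery.get(c, 0.5))
    candidates = [q for q in ITEM_BANK[weakest] if q not in attempted]
    return weakest, (random.choice(candidates) if candidates else None)

mastery, attempted = {}, set()
update_mastery(mastery, "CLO1", correct=True)
update_mastery(mastery, "CLO2", correct=False)
print(next_item(mastery, attempted))  # ('CLO2', 'q4' or 'q5')
```

The same mastery estimates can be surfaced to the instructor as the item-level analytics described in item (d).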
Overall, at the course level, the framework harnesses AI to make design more efficient, evidence-based, and innovative. The course designer remains the pedagogical leader, but with AI acting as an intelligent assistant – much like a junior instructional designer offering ideas and labor, under supervision. By following these guidelines, courses are more likely to be well-aligned with outcomes, rich in engaging and accessible materials, and equipped with assessments and feedback mechanisms that truly support learning.
Figure 4. AI-enabled Plan–Do–Check–Act (PDCA) cycle for continuous curriculum improvement. The diagram shows how AI analytics transform course-level data into actionable insights that feed program review and institutional planning, thereby closing the feedback loop for systematic, evidence-based enhancement across all curriculum layers.

4.3. Cross-Cutting Issues and Foundations

Underpinning the above program and course strategies are cross-cutting elements that ensure the AI integration is responsible, sustainable, and institutionally supported. These issues must be addressed to create an enabling environment for the framework’s implementation:

4.3.1. Institutional Governance and Policy

Establish clear governance structures for AI in education. This includes formal policies on AI use in teaching and learning (covering what is allowed, encouraged, or prohibited) and dedicated oversight groups. The framework advises forming an AI in Curriculum Committee or expanding the mandate of curriculum committees to include AI oversight. Governance should balance innovation with risk management (Chu & Ashraf, 2025). For example, a policy might allow faculty wide latitude to experiment with AI in course design but require any AI tool that handles student data to be vetted for privacy compliance. The committee can set guidelines, such as: transparency (faculty should disclose in syllabi if AI was used in designing or will be used in delivering the course), academic integrity rules (when students can/can’t use AI on assignments), and standards for quality (e.g., human review of AI-generated course content is mandatory). As found in our review, effective governance often involves an ethics panel or AI advisory board (Chu & Ashraf, 2025). They can review proposed AI uses for ethical pitfalls and suggest mitigations (like adding an algorithmic bias check step in content creation workflows). Also, align institutional policies with external frameworks: for instance, referencing the UNESCO Recommendation on AI Ethics (2021) for high-level principles (UNESCO, 2025), and abiding by national guidelines (such as the U.S. OSTP’s AI Bill of Rights blueprint for values like privacy, transparency [Cardona et al., 2023]). Governance also means putting in place incident response procedures: if something goes wrong (data breach, AI mis-graded student work, etc.), have a plan to address it and communicate with stakeholders (Chu & Ashraf, 2025).

4.3.2. Ethical Use and Responsible AI

Infuse ethics into every aspect of the framework implementation. This requires training for all stakeholders (faculty, students, staff) on the ethical use of AI – essentially building an ethical AI culture. Some concrete measures: implement algorithmic transparency and bias mitigation protocols (Chu & Ashraf, 2025). For example, if using an AI to recommend curriculum changes, ensure the rationale can be explained to faculty (why did it suggest this? based on what data?). If using predictive analytics to identify at-risk students or curriculum gaps, regularly audit these algorithms for bias (are they unfairly flagging certain student groups or favoring certain knowledge areas due to biased data?). The framework suggests having documentation for each AI tool used in design, detailing its data sources, limitations, and a checklist for ethical concerns. On the student side, incorporate teachings about academic honesty with AI (e.g., include in academic integrity modules the proper citation of AI assistance, similar to sources) (Sangwa & Mutabazi, 2025a). A positive framing is to treat it as AI literacy for integrity: educate students and faculty to use AI productively and ethically, as Fang & Broussard advocate (Fang & Broussard, 2024). Being transparent is part of this – the framework encourages openness when AI is used. If an instructor used AI to generate an image or draft content for the course, they might mention it, modeling honesty. If AI is used in grading, students should know and understand the safeguards. Transparency builds trust and demystifies AI, turning it into a normal tool subject to academic standards, rather than a hidden trick.

4.3.3. Quality Assurance (QA)

Ensure that integrating AI does not compromise the quality and rigor of education and, ideally, enhances it. QA processes should be updated to include AI-related criteria. For instance, when reviewing a course (internally or by accreditors), consider: Are the learning outcomes still being met at high levels if AI was part of the design? Is the content accurate and appropriate? One might add a step in QA where a human expert or committee reviews AI-generated content or assessments for fidelity to learning objectives. Also, set performance benchmarks for AI initiatives (Chu & Ashraf, 2025): e.g., if an AI-driven adaptive system is introduced, measure its impact on student performance or satisfaction; if negligible or negative, adjust the approach. Essentially, treat AI components with the same rigor as any curriculum component – evaluate and refine them. Over time, QA might also involve external validation: for example, checking that graduates of an AI-enhanced curriculum perform as well or better on licensure exams or job placements, thereby validating that AI interventions did not dilute outcomes. Quality assurance bodies (like accreditation agencies) are starting to pay attention to digital aspects; institutions can be proactive by documenting how their AI integration meets quality standards and improves student learning. By including AI in QA, institutions also signal the seriousness of their approach, which can reassure skeptics. The framework thus recommends that any curriculum proposal or revision that heavily involves AI include an evaluation plan (what data to collect, how to review its success).
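As one concrete form of benchmarking, the sketch below compares final scores between an AI-enhanced section and a conventional section using Welch’s t-test. The data are illustrative, and a real evaluation would also report effect sizes, check assumptions, and control for confounding.

```python
# Minimal sketch of a QA benchmark check for an AI-enhanced course section.
# Scores are illustrative; requires scipy (pip install scipy).
from statistics import mean
from scipy import stats

pilot   = [72, 81, 77, 85, 79, 88, 74]  # AI-enhanced section
control = [70, 75, 73, 78, 72, 80, 71]  # conventional section

res = stats.ttest_ind(pilot, control, equal_var=False)  # Welch's t-test
print(f"mean diff = {mean(pilot) - mean(control):.1f}, "
      f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
if res.pvalue >= 0.05:
    print("No reliable benefit detected; revisit, redesign, or retire the intervention.")
```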

4.3.4. Capacity Building and Change Management

Beyond faculty training (already covered), think system-wide. This includes staff (e.g., academic advisors, IT support, librarians) who also need upskilling to support AI-rich curricula. The framework encourages offering AI literacy training to all institutional members – students, faculty, and staff – tailored to their roles (Kassorla et al., 2024). For example, train academic advisors on how to interpret data from AI student success prediction tools, so they can intervene effectively. Train library staff on AI tools for research or citation that students might use, so they can guide information literacy in the age of AI. Furthermore, address leadership development: academic leaders should understand AI enough to make strategic decisions (initiatives like Educause’s AI leadership programs or Jisc’s guidance for senior leaders align with this need [Webb, 2025]). On change management: use techniques like pilot programs, phased rollouts, and feedback loops to manage the transition. The framework suggests starting small and scaling: pilot the framework’s approach in a few programs or courses first, learn from that (with lots of faculty and student feedback), then expand. Celebrate quick wins (like improved student retention in a pilot course using AI, or time saved in development) to build momentum and buy-in. Also, address fears by clearly communicating the goals and benefits of AI adoption – for instance, emphasize that it is to enhance student success and support faculty, not to cut jobs or compromise academic values (Fang & Broussard, 2024). Engage faculty unions or senates early if applicable, to collaborate on guidelines that protect academic freedom and faculty roles.

4.3.5. Scalability and Adaptability

Design the framework implementation to be scalable across the institution and adaptable to different contexts. A “one-size-fits-all” solution is not the aim; rather, the framework provides a structured approach covering all critical areas, which can be tailored to each department’s needs (Webb, 2025). For scalability, leverage common platforms or tools where possible (for example, integrate AI features into the central LMS so all courses can use them easily, rather than each doing bespoke integrations). Provide centralized support (perhaps an “AI in Curriculum” support team in the teaching and learning center) that can consult and assist many courses. For adaptability, allow variation: a humanities course might use AI very differently (perhaps for text analysis or as a discussion prompt generator) than a computer science course (which might focus on AI projects or use advanced analytics). The framework is meant to accommodate these differences – its components are building blocks, not exact prescriptions. The principle is that each institution or department should consider each component (outcomes, content design, assessment, etc.) and ask, “How can AI help us do this better in our context?” and “What do we need in place to use AI responsibly here?” The answers may differ, but the framework ensures none of those important questions are overlooked. Maintaining flexibility also means regularly revisiting the framework: as AI technology changes, institutions might adapt the strategies. For instance, if new regulations or breakthroughs occur (say, a new AI tool that becomes ubiquitous), the framework’s governance and practice guidelines should be updated accordingly. In essence, think of the framework as living – guided by core principles but evolving with experience and technological change.

4.3.6. Sustainability (Long-Term Planning)

Adopt a long-term perspective. AI in education is not a one-time project but a continuous journey. Plan for sustainability in terms of finances (budget for tool licenses, infrastructure maintenance, and a potential increase in support staff), human resources (the roles of educators might shift, possibly requiring new hires like data analysts in academic departments or AI curriculum specialists), and maintenance (AI models and content require updating as knowledge changes or as biases are discovered). Also, consider the scalability of pilot successes: if one program had an AI mentor system and it worked well, scaling it to all programs might require significant computing resources or support personnel; plan accordingly. On the flip side, plan for sunsetting things that don’t work – if some AI integration is not delivering value, be ready to retire or replace it. A cost-benefit mindset is advised: Chu & Ashraf (2025) point to the need for comprehensive cost-benefit analysis frameworks for AI adoption. Institutions should periodically evaluate whether the benefits (better outcomes, efficiency) are worth the costs (monetary, time, potential risks) and adjust their strategy. Additionally, environmental sustainability is a newer consideration: AI computing can be energy intensive (Strubell et al., 2019), so factor that in by choosing energy-efficient solutions or cloud services that offset carbon (some institutions may even receive credit in rankings or audits for doing so).
In summary, the cross-cutting elements of the framework ensure that the deployment of AI in program and course design is done thoughtfully and with institutional alignment. They are the guardrails and support beams that hold up the program-level and course-level innovations. Without them, individual efforts might flounder or have unintended consequences. With them, the institution creates an ecosystem where AI-driven educational design can thrive, producing teaching and learning that is innovative yet anchored in the institution’s values and commitment to quality.
Figure 5. AI-enabled framework for program and course design in higher education. The framework is represented as three interacting layers. At the top, Program Design focuses on outcomes, mapping, sequencing and review. In the middle, Course Design translates program intentions into specific objectives, content, assessment and feedback. At the base, Institutional Foundations provide governance, ethics, quality assurance, support and sustainability that enable and constrain activity at the upper levels. Arrows indicate feedback loops, with course-level data informing program review and institutional policies and supports guiding both program and course design.
To translate this conceptual model into an operational tool, Table 3 recasts each layer of the framework as a concise self-audit rubric. Curriculum teams can use the rubric to assess their current level of readiness on three dimensions at each layer: strategy and governance, data and infrastructure, and curriculum and pedagogy. The rubric is intentionally simple: for each row, teams rate themselves as nascent, emerging, or embedded, then identify concrete actions to move at least one step up the scale during the next curriculum review cycle.
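For teams that prefer to track the rubric programmatically, the sketch below represents the ratings as data and lists the cells that still need action, together with the next level to target. Layer and dimension names follow the text; the ratings themselves are illustrative.

```python
# Minimal sketch of the self-audit rubric as data: each framework layer is
# rated on three dimensions, and cells below "embedded" generate actions.
LEVELS = ["nascent", "emerging", "embedded"]

audit = {  # illustrative ratings for one review cycle
    "Institutional Foundations": {"strategy_governance": "emerging",
                                  "data_infrastructure": "nascent",
                                  "curriculum_pedagogy": "emerging"},
    "Program Design":            {"strategy_governance": "embedded",
                                  "data_infrastructure": "emerging",
                                  "curriculum_pedagogy": "emerging"},
    "Course Design":             {"strategy_governance": "emerging",
                                  "data_infrastructure": "emerging",
                                  "curriculum_pedagogy": "embedded"},
}

def action_items(audit):
    """Return cells not yet 'embedded', with the next level to target."""
    items = []
    for layer, dims in audit.items():
        for dim, level in dims.items():
            if level != "embedded":
                target = LEVELS[LEVELS.index(level) + 1]
                items.append((layer, dim, f"{level} -> {target}"))
    return items

for item in action_items(audit):
    print(item)
```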
When institutions use this rubric as part of periodic programme review, it turns the three-layer framework into a cyclical governance instrument rather than a static diagram. The act of rating each cell and documenting planned actions also surfaces structural constraints, such as gaps in data infrastructure or policy clarity, that may otherwise remain invisible in course-level experimentation.
4.4. Illustrative Application: Bachelor of Business Administration Programme
Consider a mid-size Bachelor of Business Administration (BBA) programme in a public university enrolling approximately 350 students per cohort. Historically, the programme had generic, input-oriented learning outcomes (e.g., “demonstrate knowledge of core business disciplines”), relatively high first-year attrition, and employer feedback that graduates lacked data-driven decision-making and AI-related competences. Applying the proposed framework begins at the institutional foundations layer: the university’s AI governance group clarifies acceptable uses of learner data, approves a set of vetted AI platforms, and mandates that all programmes identify at least one AI-enabled pathway to strengthen graduate outcomes related to digital and analytical skills (Cardona et al., 2023; UNESCO, 2025).
At the programme-design layer, the curriculum team collaborates with institutional analysts to conduct an AI-supported review of existing outcomes and graduate trajectories. Using linked datasets from the virtual learning environment, assessment systems, and graduate surveys, they identify three recurrent patterns: students who struggle in first-year quantitative modules are disproportionately likely to drop out, critical-thinking and data-literacy indicators lag behind communication skills, and graduates in analytics-heavy roles report needing substantial additional upskilling. Drawing on external taxonomies and labour-market analytics, the team then rewrites the programme-level outcomes to foreground data-driven decision-making, ethical AI literacy, and adaptive problem-solving, and maps these outcomes to scaffolded modules across the degree (Chu & Ashraf, 2025; Merino-Campos, 2025).
At the course-design layer, one core second-year course, “Business Analytics for Decision-Making,” is selected as a pilot. Previously organised around static lectures and end-of-semester exams, the course is redesigned with AI-enabled adaptive assessment and generative simulations. Students complete an initial diagnostic assessment that uses AI-based item selection to estimate their prior knowledge and misconceptions; based on these profiles, the system recommends tailored pathways through practice problems, mini-cases, and short explanatory videos. Generative AI is constrained within institutionally approved tools and used to generate context-rich scenarios that students must critique and improve, rather than accept uncritically. Throughout the semester, dashboards provide the instructor with real-time insights on topic-level mastery, allowing targeted workshops and small-group interventions. Assessment is rebalanced toward iterative projects in which students must justify both their analytic choices and their responsible use (and non-use) of AI tools.
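The paragraph above leaves the item-selection mechanism unspecified. As one illustration of how a diagnostic of this kind might choose the next question, the toy sketch below uses maximum-information selection under a Rasch (one-parameter logistic) model, a common adaptive-testing technique; the item difficulties are invented and the approach is an assumption, not the pilot's documented method.

```python
import math

def item_information(theta: float, difficulty: float) -> float:
    """Fisher information of a Rasch (1PL) item at ability estimate theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
    return p * (1.0 - p)

def next_item(theta: float, bank: dict[str, float], seen: set[str]) -> str:
    """Pick the unseen item that is most informative at the current estimate."""
    unseen = {item: b for item, b in bank.items() if item not in seen}
    return max(unseen, key=lambda item: item_information(theta, unseen[item]))

# Invented item difficulties; a real bank would be calibrated from response data.
bank = {"q1": -1.2, "q2": -0.3, "q3": 0.4, "q4": 1.1}
print(next_item(theta=0.2, bank=bank, seen={"q1"}))  # -> "q3"
```

The design choice matters pedagogically: items near the learner's estimated ability yield the most information, which is what allows the system to distinguish genuine misconceptions from gaps in prior exposure.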
Governance checkpoints are triggered at each stage of this process. The programme committee must sign off on the revised learning outcomes and their alignment with institutional AI principles; the faculty ethics committee reviews the use of learner data for analytics and ensures that opt-out options and transparency notices are in place; and the teaching and learning committee reviews the redesigned course to verify that human oversight, academic integrity safeguards, and workload implications have been appropriately addressed. After two cohorts, the programme-level analytics are re-run: if retention in the analytics pathway has improved, assessment performance has risen in targeted outcomes, and employer feedback indicates better preparedness for data-rich roles, the framework recommends scaling similar design patterns to adjacent courses and revisiting institutional policies in light of the new evidence (Chen, 2025; Liang et al., 2025; Merino-Campos, 2025).
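One way a programme committee might operationalise the "after two cohorts" checkpoint is a simple comparison of retention proportions before and after the redesign. The sketch below uses a pooled two-proportion z-test with invented counts; it illustrates the checkpoint rather than prescribing a specific statistical procedure.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float, float]:
    """Pooled two-proportion z-test; returns both rates and the z statistic."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, (p2 - p1) / se

# Invented counts: students retained / enrolled, before and after the redesign.
p1, p2, z = two_proportion_z(270, 350, 305, 350)
print(f"retention {p1:.1%} -> {p2:.1%}, z = {z:.2f}")  # z ~ 3.45 here
```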
This example illustrates how the three layers of the framework operate as a coherent system: institutional governance and infrastructure create the conditions for responsible AI use, programme-level analytics reshape outcomes and pathways, and course-level design experiments feed empirical data back into both programme review and institutional policy.
By following this AI adoption framework, institutions can systematically integrate AI into curriculum design and development. It provides a path to harness AI’s capabilities – personalizing learning, aligning education with workforce demands, improving efficiency – while upholding academic excellence and integrity. The framework is meant to be adaptable globally: whether a large research university or a small teaching college, the principles apply, with scale adjustments. It is also intended as a starting point; institutions will refine it as they learn what works best in their unique context, contributing back to the evolving body of knowledge on AI in education design.

5. Conclusion and Recommendations

5.1. Summary of Key Contributions

This manuscript has presented a thorough exploration of integrating artificial intelligence in higher education programs and course design, culminating in a novel framework for AI adoption in curriculum development. Through a comprehensive synthesis of secondary data — spanning global institutional practices, scholarly research, and policy insights — we identified how AI is currently reshaping curriculum innovation, the obstacles and catalysts involved, and the critical considerations for ethical and sustainable use. The proposed framework ties these pieces together, offering a structured yet flexible approach to guide institutions in leveraging AI for designing learner-centered, future-ready programs and courses. The framework’s multi-level guidance (program-level, course-level, and cross-cutting foundations) ensures that AI is not implemented in a vacuum but is interwoven with pedagogical objectives, quality standards, and ethical governance. In doing so, the work contributes theoretically by bridging AI technology with curriculum design theory, and practically by delivering a blueprint that stakeholders can apply in real-world educational settings. We anticipate that this framework, once applied and refined through practice, can help institutions worldwide move from ad-hoc AI experiments to strategic, evidence-based integration, ultimately enhancing educational outcomes and responsiveness in the face of rapid societal and technological change.

5.2. Stakeholder-Based Recommendations

Successfully realizing the potential of AI in curriculum design requires coordinated efforts from various stakeholders in the higher education ecosystem. Below, we provide targeted recommendations for key stakeholder groups, informed by our findings:

5.2.1. For Policymakers and Education Authorities

Develop national or regional guidelines and support systems for AI in education that emphasize human-centric and ethical use. Policies should encourage innovation (through grants or pilot programs) while mandating safeguards like data privacy compliance and equity considerations. Policymakers can facilitate the sharing of best practices between institutions and ensure alignment with broader regulatory frameworks (e.g., adapting the Blueprint for an AI Bill of Rights to educational contexts to protect student rights [Cardona et al., 2023]). It is also recommended to invest in capacity building at a systemic level — for example, funding faculty AI training centers or national platforms for AI tools in curriculum — especially to help resource-constrained institutions join the AI transformation and to prevent a digital divide.

5.2.2. For Institutional Leaders (Presidents, Provosts, Deans)

Provide strong leadership and a clear vision for AI adoption as part of the institutional strategy. Leaders should articulate how AI aligns with the university’s mission (e.g., improving student success, expanding access, fostering innovation) and allocate resources accordingly (Sangwa & Mutabazi, 2025b). This includes setting up governance structures like an AI steering committee, as discussed, and ensuring cross-department collaboration. Leaders must also model a proactive yet cautious approach: support experimentation (“sandbox” environments for faculty to try AI in teaching) but require evaluation and accountability for outcomes. In decision-making, rely on risk-benefit assessments — for instance, before scaling an AI system, consider costs, data security, and impact on stakeholders. As Michael Webb from Jisc notes, ask “what kind of AI-enabled institution do we want to be?” (Webb, 2025) and plan holistically for skills, technology, and governance. Additionally, prioritize communication: be transparent with faculty and students about AI initiatives and listen to their feedback or concerns, thereby fostering a culture of trust and collaboration in this change process.

5.2.3. For Faculty and Instructional Designers

Embrace AI as a tool to enhance (not replace) your expertise in teaching and course design. We recommend faculty engage in ongoing professional development to build AI literacy relevant to their discipline and pedagogy (Chu & Ashraf, 2025). Start with small integrations: for example, use AI to help generate a quiz or summarize a complex reading, and gradually expand as comfort grows. Faculty should collaborate with instructional designers and IT staff to ensure effective use of tools. Importantly, maintain critical judgment — always review AI outputs for accuracy and quality, as highlighted in our findings (Fang & Broussard, 2024). Faculty should also proactively address AI with students: set clear guidelines for AI use in coursework and turn it into a learning opportunity (e.g., discuss why an AI solution might be flawed, or how to cite AI assistance). By being transparent about their own use of AI in course preparation, faculty can model ethical behavior and reduce stigma or confusion. Instructional designers, in particular, can serve as champions by demonstrating successful cases of AI-enhanced course development (like improved alignment of materials with outcomes, or time saved in producing content), thereby encouraging broader adoption across departments.

5.2.4. For Academic Support Units (Libraries, Teaching Centers, IT Departments)

These units should prepare to support AI in curriculum design. Libraries can curate AI tools and data sources for faculty and offer guidance on issues such as copyright and open educational resources when using AI-generated content. Teaching and Learning Centers should incorporate AI training into their workshops and consultation services, helping faculty redesign activities or assessments with AI in mind. IT departments must ensure the technological infrastructure is AI-ready: integrate new tools into existing systems (e.g., LMS plugins), maintain robust network and computational capacity, and uphold security protocols. They should work closely with vendors to understand AI features in educational software and communicate those capabilities to educators. Support units should also help monitor and evaluate AI implementations (for example, analyzing usage data or feedback from AI-driven pilot projects) and share these insights. In summary, academic support staff become key facilitators and troubleshooters, making the difference between an AI tool merely existing and its being used effectively to improve learning.

5.2.5. For Accrediting Bodies and Quality Assurance Agencies

Update accreditation criteria and quality review frameworks to acknowledge AI’s role and ensure it is used to strengthen educational quality. Accrediting bodies might develop guidelines on evidence to be provided when an institution uses AI in curriculum (e.g., demonstrating that algorithm recommendations are reviewed by faculty, or showing how AI integration improved an outcome like retention or assessment validity). They should encourage institutions to engage in rigorous evaluation of AI interventions and share those results. The focus should be on outcomes and student learning: if AI helps achieve or exceed learning goals, that’s a positive quality indicator. However, accreditors should also check that ethical standards are upheld, e.g., requiring that programs have academic integrity policies covering AI and that student data privacy is protected. By including such elements in accreditation review, agencies will prompt institutions to be conscientious and systematic in their AI use. Additionally, QA agencies can serve as conveners, bringing together educators to develop consensus on best practices for AI in curriculum — perhaps issuing policy briefs or hosting workshops across institutions, which will further disseminate effective strategies.

5.2.6. For Educational Technology Vendors and AI Tool Developers

Engage with the higher education community to tailor AI solutions to genuine educational needs, and do so in a way that is transparent and ethical. Vendors should work closely with universities (perhaps via co-design sessions with faculty and students) to ensure their tools support features like outcomes mapping, accessibility, and data export for institutional analysis. They should adhere to open standards (like those for competencies and student data) so that their tools integrate well into the academic ecosystem (Credential Engine, n.d.). Critically, vendors must prioritize privacy and security — for instance, not using student data for unintended purposes and allowing on-premises or private cloud options if needed. We recommend that ed-tech companies also provide robust educator training and support, as part of their product offerings, because tools are only as good as their informed use. Furthermore, implement measures to mitigate biases in AI models; involve diverse data in training and allow institutions to audit and tune models to their context. By being proactive in these ways, vendors can build trust and long-term partnerships with institutions. A collaborative stance (perhaps sharing research and efficacy data of their AI applications) will help technology providers align with the educational sector’s values and goals, ultimately leading to products that measurably improve teaching and learning.

5.3. Limitations of the Study

Methodological limitations: This study is based on secondary data and theoretical synthesis rather than primary intervention research, and the framework has not yet been implemented and evaluated as an integrated whole in any single institution, so its efficacy remains inferential rather than demonstrably causal. Although the review drew on forty peer-reviewed studies, sector reports, and case examples, the evidence base is uneven, since AI adoption in higher education is still concentrated in English-language publications and institutions in North America, Europe, East Asia, and the Gulf region, which limits the transferability of some conclusions to African, Latin American, and low-income country contexts (Chen, 2025; Liang et al., 2025; Merino-Campos, 2025). The rapid evolution of AI tooling and policy also means that any synthesis is time-bound, with specific platforms and capabilities likely to change more quickly than the underlying pedagogical and governance principles. In addition, although the review sought to triangulate sources and privilege higher-quality empirical studies, it did not apply a formalised risk-of-bias rating to each included study, so some publication bias may remain uncorrected (Liang et al., 2025).
Framework-level limitations: The framework presupposes a minimum level of data infrastructure, interoperability, and analytic capacity that many institutions, especially in resource-constrained settings, may not yet possess, and attempts to implement AI-enabled skills mapping, adaptive pathways, or real-time dashboards without trustworthy learner-data systems risk amplifying noise or reproducing existing inequities (Cardona et al., 2023; Mounkoro et al., 2024; UNESCO, 2025). Its ambitious treatment of governance, ethics, and long-term analytics may make full adoption unrealistic in the short term for smaller or less well-resourced institutions. The framework also focuses on design and development rather than day-to-day classroom enactment, and it assumes, without fully specifying, the professional learning, incentives, and micro-level pedagogical shifts required for instructors to teach with AI-enabled curricula in ways that maintain academic integrity and cultivate critical AI literacy. It is therefore best understood as a directional scaffold rather than a turnkey solution, which institutions will need to adapt selectively, negotiate against local constraints, and interrogate critically for its assumptions about employability, data-driven decision-making, and the role of automation in education (Liang et al., 2025; Mounkoro et al., 2024).

5.4. Future Research Directions

In light of these limitations and the rapid acceleration of AI in higher education, future research should prioritise three interconnected lines of inquiry.
First, rigorous empirical validation of the framework and its constituent design patterns is needed. Quasi-experimental and experimental studies that implement the framework, or clearly defined components of it, across diverse institutions should assess effects on learning gains, retention, progression, and equity of attainment. Existing meta-analyses and systematic reviews show that AI-enabled personalisation and adaptive assessment can produce medium to large gains in achievement and engagement compared with traditional designs (Chen, 2025; Merino-Campos, 2025), but they rarely test integrated, governance-aware curriculum frameworks. Multi-site studies comparing adopters of the framework with carefully matched controls would generate stronger evidence about what works, for whom, and under which infrastructural and policy conditions (Liang et al., 2025).
Second, research should examine the cultural and contextual adaptation of AI-enabled curriculum frameworks. Current evidence is skewed toward systems with high digital maturity and relatively stable regulation, while AI will operate under very different social contracts, value systems, and constraints in small private colleges, faith-based institutions, public universities in the Global South, and indigenous-serving institutions. Comparative case studies and participatory design research in these settings are essential to understand how local epistemologies, linguistic diversity, and community expectations shape both the promise and the risks of AI in curriculum design (Cardona et al., 2023; Liang et al., 2025; UNESCO, 2025). Such work should address not only technical feasibility but also epistemic justice, data sovereignty, and resistance to purely instrumental or efficiency-driven uses of AI in education.
Third, longitudinal and systems-level studies should track the longer-term consequences of AI-enabled curriculum design. Short-term improvements in grades or completion do not reveal whether AI-enhanced programmes cultivate durable capabilities such as adaptability, ethical reasoning, and critical digital literacy. Longitudinal follow-up of graduates from AI-designed programmes, combined with institutional analytics on curriculum agility, staff workload, and resource allocation, would help to identify systemic effects of sustained AI integration over multiple cohorts (Chu & Ashraf, 2025; Merino-Campos, 2025). In parallel, design-based research on human–AI collaboration in curriculum teams could clarify how educators negotiate the division of labour between computational optimisation and human judgement, and how this balance shifts as tools and policies evolve (Liang et al., 2025; Mounkoro et al., 2024). Together, these lines of inquiry frame AI-enabled curriculum design as an evolving socio-technical practice that requires continuing empirical scrutiny, contextual negotiation, and long-term ethical reflection.

Author Contributions

[1] S. Sangwa: conceptualization, data curation, formal analysis, investigation, methodology, writing – original draft preparation, visualization, validation, resources, and software. [2] P. Mutabazi: writing – review & editing, formal analysis, investigation, resources, validation, and supervision. [3] J.B. Muvunyi: methodology, investigation, formal analysis, resources, data curation, validation, and writing – review & editing.

Funding

This research received no external funding. All phases of the study—design, analysis, and manuscript preparation—were conducted in the authors’ own time using their institutions’ routine resources.

Data Availability Statement

This study draws exclusively on published journal articles, institutional and policy reports, and publicly available survey and evaluation datasets, all of which are cited in the reference list. No new primary or individual-level data were generated, and every original analysis and aggregate finding is reported within the manuscript. Researchers who need clarification about source selection, screening decisions, coding templates, or other details that would aid replication may contact the corresponding author.

Ethical Approval

The study is an integrative secondary review and synthesis of published literature, institutional and policy reports, and publicly available aggregate datasets. It involved no direct contact with human participants, no collection of new human-subject data, and no use of identifiable personal information; therefore, institutional review board approval and informed consent were not required. Human-subject studies cited herein were carried out by the original investigators under their respective ethical protocols.

Conflicts of Interest

The authors declare that they have no known financial or personal relationships that could have influenced the work reported in this article.

References

  1. Ateeq, A., Almuraqab, N. A. S., Alfiras, M., Elastal, M., & BinSaeed, R. H. (2025). The impact of adaptive and interactive AI tools on student learning: From digital literacy to advanced skills. International Journal of Innovative Research and Scientific Studies, 8(6), 961–973. [CrossRef]
  2. Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(4). [CrossRef]
  3. Cardona, M. A., Rodríguez, R. J., & Ishmael, K. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. U.S. Department of Education. https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf.
  4. Chen, M. (2025). The impact of AI-assisted personalized learning on student academic achievement. US-China Education Review A, 15(6), 441–450. [CrossRef]
  5. Chu, T. S., & Ashraf, M. (2025). Artificial Intelligence in Curriculum Design: A Data-Driven Approach to Higher Education Innovation. Knowledge, 5(3), 14. [CrossRef]
  6. Complete College America (CCA). (2025, July 22). New playbook shares case studies on how colleges can embed AI into curriculum and instruction. Complete College America. https://completecollege.org/news/new-playbook-shares-case-studies-on-how-colleges-can-embed-ai-into-curriculum-and-instruction/.
  7. Credential Engine. (n.d.). Credential transparency and AI. Retrieved November 30, 2025, from https://credentialengine.org/credentialtransparency-ai/.
  8. Critical Appraisal Skills Programme. (2018). CASP qualitative checklist. CASP. https://casp-uk.net/casp-tools-checklists/.
  9. EDUCAUSE. (2023, April 11). EDUCAUSE QuickPoll Results: Adopting and Adapting to Generative AI in Higher Ed Tech. EDUCAUSE Review. https://er.educause.edu/articles/2023/4/educause-quickpoll-results-adopting-and-adapting-to-generative-ai-in-higher-ed-tech.
  10. EDUCAUSE. (2024). New survey: More than 70% of higher-education administrators have a favorable view of AI despite low adoption to-date. https://www.educause.edu/about/corporate-participation/member-press-releases/new-survey-more-than-70-of-higher-education-administrators-have-a-favorable-view-of-ai.
  11. European Parliament & Council (EPC). (2016). General Data Protection Regulation (EU) 2016/679. Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2016/679/oj.
  12. Fang, B. & Broussard, K. (2024, August 7). Augmented course design: Using AI to boost efficiency and expand capacity. EDUCAUSE Review. https://er.educause.edu/articles/2024/8/augmented-course-design-using-ai-to-boost-efficiency-and-expand-capacity.
  13. Gilreath, R. (2025, May 7). The use of artificial intelligence (AI) to generate case studies for the classroom. Faculty Focus. https://www.facultyfocus.com/articles/teaching-with-technology-articles/the-use-of-artificial-intelligence-ai-to-generate-case-studies-for-the-classroom/.
  14. Gough, D. (2007). Weight of evidence: A framework for the appraisal of the quality and relevance of evidence. Research Papers in Education, 22(2), 213–228. [CrossRef]
  15. Huang, A. Y. Q., Lu, O. H. T., & Yang, S. J. H. (2023). Effects of artificial intelligence–enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education, 194, 104684. [CrossRef]
  16. Jia, C., Hew, K. F., Du, J., & Li, L. (2023). Towards a fully online flipped classroom model to support student learning outcomes and engagement: A 2-year design-based study. The Internet and Higher Education, 56, 100878. [CrossRef]
  17. Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2024, May 20). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines (arXiv:2405.11800v1) [Preprint]. arXiv. [CrossRef]
  18. Jisc. (2023). AI maturity toolkit: A pathway for effective adoption of AI in tertiary education. Jisc. https://www.jisc.ac.uk/news/all/new-toolkit-for-colleges-and-universities-sets-pathway-for-effective-adoption-of-ai.
  19. Kassorla, M., Georgieva, M., & Papini, A. (2024, October 17). AI literacy in teaching and learning: A durable framework for higher education [Introduction]. EDUCAUSE. https://www.educause.edu/content/2024/ai-literacy-in-teaching-and-learning/introduction.
  20. Liang, J., Stephens, J. M., & Brown, G. T. L. (2025). A systematic review of the early impact of artificial intelligence on higher education curriculum, instruction, and assessment. Frontiers in Education, 10, Article 1522841. [CrossRef]
  21. Merino-Campos, C. (2025). The impact of artificial intelligence on personalized learning in higher education: A systematic review. Trends in Higher Education, 4(2), 17. [CrossRef]
  22. Mounkoro, I., Khawaji, T., Ocampo, D. M., Cadelina, F. A., Uberas, A. D., Mowafaq, F., Bhardhwaj, B. S., & D. (2024). Artificial intelligence in education: Redefining curriculum design and optimizing learning outcomes through data-driven personalization. Library Progress International, 44(4), 106–126. [CrossRef]
  23. Mowreader, A. (2024, September 16). College students uncertain about AI policies in classrooms. Inside Higher Ed. https://www.insidehighered.com/news/student-success/academic-life/2024/09/16/college-students-uncertain-about-ai-policies.
  24. Muncey, N. (2025, May 19). How instructional designers can leverage AI for effective curriculum design. SchoolAI. https://schoolai.com/blog/instructional-designers-leverage-ai-effective-curriculum-design.
  25. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. [CrossRef]
  26. UNESCO. (2025, September 2). UNESCO survey: Two-thirds of higher education institutions have or are developing guidance on AI use. https://www.unesco.org/en/articles/unesco-survey-two-thirds-higher-education-institutions-have-or-are-developing-guidance-ai-use.
  27. Sangwa, S., & Mutabazi, P. (2025a). Generative AI and Academic Integrity in Online Distance Learning: The AI-Aware Assessment Policy Index for the Global South. Preprints. [CrossRef]
  28. Sangwa, S., & Mutabazi, P. (2025b). Mission-Driven Learning Theory: Ordering Knowledge and Competence to Life Mission. Open Journal of Transformative Education & Lifelong Learning (ISSN: 3105-305X), 1(1). https://journals.openchristian.education/index.php/oj-tell/article/view/9.
  29. Sangwa, S., Ngobi, D., Ekosse, E., & Mutabazi, P. (2025). AI governance in African higher education: Status, challenges, and a future-proof policy framework. Artificial Intelligence and Education, 1(1), 2054. [CrossRef]
  30. Southworth, J., Migliaccio, K., Glover, J., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4, 100127. [CrossRef]
  31. Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. [CrossRef]
  32. Suh, W. (2025). Generative AI integration in higher education shifts students’ attitudes from tool use to innovation. Discover Education, 4, 508. [CrossRef]
  33. Thompson, E. (2023, March 13). White paper by University of Phoenix: Aligning curriculum with labour market through skills mapping. EdTech Innovation Hub. https://www.edtechinnovationhub.com/news/white-paper-uop-aligning-curriculum-with-labour-market.
  34. UNESCO. (2025). Survey on generative AI in higher education [Report]. United Nations Educational, Scientific and Cultural Organization. Retrieved March 10, 2025, from https://unesdoc.unesco.org/…
  35. U.S. Government Publishing Office. (2022, October). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. GovInfo. https://www.govinfo.gov/app/details/GOVPUB-PREX23-PURL-gpo193638.
  36. Webb, M. (2025, August 28). A strategic framework for AI in colleges and universities. National Centre for AI (Jisc). https://nationalcentreforai.jiscinvolve.org/wp/2025/08/28/a-strategic-framework-for-ai-in-colleges-and-universities/.
Table 1. Evidence profile for major themes in AI-enabled curriculum design.
Theme | Approximate number of sources (n) | Geographic coverage | Typical outcomes reported
AI-enabled Skills Mapping and Programme-level Outcome Alignment | 9 | North America, Europe, East Asia, cross-regional policy reports | Qualitative evidence of clearer mapping between programme outcomes, graduate attributes, and labour-market skills; improved responsiveness to employer feedback; reports of better alignment between capstone projects and industry-defined competencies (Chu & Ashraf, 2025; Liang et al., 2025).
AI-supported Course Design and Content Orchestration | 10 | North America, Europe, Middle East, global ed-tech case studies | Descriptive and survey-based evidence indicating more granular learning pathways, increased ability to version content for multiple modalities, and reductions in design cycle time; limited quantitative reports of improved student satisfaction and perceived relevance of course materials (Southworth et al., 2023; Hwang & Wu, 2025; Merino-Campos, 2025).
Adaptive Assessment, Feedback, and Personalised Learning Trajectories | 11 | Asia, North America, Europe, Gulf States | Strong quantitative evidence base. Meta-analytic and quasi-experimental studies typically report medium to large gains in academic achievement for AI-supported or adaptive conditions compared to traditional instruction (standardised mean differences around g = 0.70 on average), along with 3–5 months of additional learning in certain subjects and improvements in critical-thinking and problem-solving skills (Chen, 2025; Ateeq et al., 2025; Merino-Campos, 2025).
Faculty Capacity, AI Literacy, and Human–AI Co-design Practices | 6 | North America, Europe, Australasia, GCC | Qualitative evidence suggests faculty openness to AI is highest when tools are presented as augmentative rather than replacement; mixed levels of AI literacy with significant demand for structured professional development; reports of time savings on routine tasks but concerns regarding dependence on opaque systems (Fang & Broussard, 2024; Cardona et al., 2023; Liang et al., 2025).
Institutional Governance, Ethics, and Data Infrastructure | 7 | Global policy and sector reports; multi-country surveys | Convergent evidence indicates that robust data infrastructures and clear governance are prerequisites for responsible AI integration. Studies highlight uneven institutional readiness, gaps in data quality, and risks of reinforcing existing inequities if AI is adopted without safeguards on bias, privacy, and transparency (Cardona et al., 2023; Mounkoro et al., 2024; UNESCO, 2025).
Table 3. AI-enabled curriculum readiness rubric derived from the three-layer framework.
Framework layer | Dimension | Guiding question for self-audit | Readiness descriptors (1 = nascent; 2 = emerging; 3 = embedded)
Programme Design | Strategy and Governance | Is AI-enabled curriculum design explicitly integrated into programme-level strategy and review processes? | 1: AI is not mentioned in programme strategies or review templates; decisions about AI use are ad hoc and course-specific. 2: Programme documents acknowledge AI presence, and some pilots exist, but expectations for AI use in curriculum design are not formalised. 3: Programme strategies systematically consider AI-enabled skills mapping, analytics, and assessment in all new or revised programmes, with clear approval and review checkpoints.
Programme Design | Data and Infrastructure | Does the programme have access to reliable data and tools for AI-supported analysis of outcomes, skills, and progression? | 1: Data on learner progression and outcomes are fragmented across systems and rarely used for design decisions. 2: Basic dashboards and analytics are available, but they are descriptive and not closely linked to programme outcomes or skills taxonomies. 3: Programmes routinely utilize well-curated data, interoperable taxonomies, and AI-assisted analytics to refine learning outcomes, pathways, and support strategies.
Programme Design | Curriculum and Pedagogy | Are programme-level outcomes and structures re-examined regularly through AI-supported evidence on learner needs and labour-market changes? | 1: Programme outcomes are rarely updated and do not systematically incorporate data on learner performance or employer demand. 2: Some outcomes and pathways have been revised in response to AI-derived insights, often in isolated initiatives. 3: Curriculum maps, pathways, and capstones are regularly reviewed using AI-supported evidence on learner trajectories and market shifts, with documented rationales.
Course Design | Strategy and Governance | Are expectations for AI-enabled course design (e.g., adaptive assessment, learning analytics) clearly articulated and quality-assured? | 1: Individual instructors decide whether and how to use AI; there is minimal guidance on standards or acceptable uses. 2: Departments encourage certain AI-enabled practices (e.g., generative drafting, basic analytics), but consistent quality assurance is lacking. 3: Course design policies specify expected AI-enabled practices, acceptable tools, and review processes, aligned with institutional AI principles.
Course Design | Data and Infrastructure | Do course teams have practical access to the tools and data needed for adaptive and data-informed designs? | 1: Course teams lack access to integrated learning-analytics tools or instructional design support familiar with AI. 2: Some courses use AI-enabled platforms or analytics dashboards, but adoption is uneven and constrained by licenses or skills gaps. 3: Core courses have access to supported AI-enabled platforms and design expertise; data flows seamlessly between VLEs, assessment systems, and programme-level dashboards.
Course Design | Curriculum and Pedagogy | Are assessment, feedback, and learning activities redesigned to leverage AI’s adaptive and generative capabilities? | 1: AI is mainly used for convenience (e.g., generating quiz items) without significant rethinking of learning design. 2: Some courses use AI to personalize practice, provide formative feedback, or adjust content difficulty, but the underlying pedagogy remains largely unchanged. 3: Course teams purposefully integrate AI-enabled adaptive assessment, intelligent tutoring, and human-led discussion to support higher-order outcomes, with clear rationales grounded in learning theory (Chen, 2025; Merino-Campos, 2025).
Institutional Foundations | Strategy and Governance | Are there institution-wide principles, policies, and oversight mechanisms for AI in curriculum design? | 1: The institution has general statements about AI but lacks concrete principles or oversight structures for curriculum. 2: Draft AI policies or guidelines exist, and some oversight groups have formed, but their roles regarding curriculum design are still evolving. 3: The institution has adopted explicit AI principles, risk assessment processes, and governance structures that encompass curriculum design, procurement, and ongoing oversight (Cardona et al., 2023; UNESCO, 2025).
Institutional Foundations | Data and Infrastructure | Does the institution provide secure, equitable, and ethically governed data infrastructure for AI-enabled curriculum work? | 1: Data systems are siloed and primarily designed for compliance rather than design; data governance for AI lacks clarity. 2: Integration and governance initiatives are underway, but coverage is incomplete, and vulnerabilities remain (e.g., patchy consent, uneven data quality). 3: The institution maintains interoperable learning-data infrastructures with clear governance, consent processes, and bias monitoring mechanisms, enabling trusted use of AI for curriculum analytics (Mounkoro et al., 2024).
Institutional Foundations | Curriculum and Pedagogy | Is there sustained investment in human capacity and culture to critically engage with AI in curriculum design? | 1: Professional development on AI is sporadic and optional, with few incentives for staff to engage with AI in curriculum work. 2: The institution offers regular workshops and seed funding for AI-related curriculum projects, but participation is uneven, often reliant on enthusiasts. 3: AI-related capacity building is integrated into workload models, promotion criteria, and recognition schemes; designers, faculty, and leaders are expected and supported to critically engage with AI in curriculum decisions (Fang & Broussard, 2024; Liang et al., 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.