Preprint
Article

This version is not peer-reviewed.

Human-AI Learning: Architecture of a Human-agenticAI Learning System

A peer-reviewed article of this preprint also exists.

Submitted: 18 August 2025
Posted: 19 August 2025


Abstract
The Ancient Greeks foresaw non-human automata and the power of dialogic learning, but Generative AI and Agentic AI afford the prospect of going beyond interlocutor to co-creator, in an empowering partnership between learner and AI agent to address ‘whole person’ education. This exploratory study reviews existing conceptual models and implementations of learning with AI before proposing the novel and original architecture of a human-agenticAI learning system. In this, the learner and human tutor are each supported by AI assistants, and an AI tutor coordinates the generation, presentation and assessment of adaptive learning activities requiring the partnership of learner and AI assistant in the co-creation of learning outcomes. The proposed model is significant for incorporating 21st Century skills in a diversity of realistic learning environments. It tracks a formative assessment pathway of the learner’s contribution to co-created outcomes, through to the compilation of a summative achievement portfolio for external warranting. Although focused upon learning in universities, the model is transferable to other educational milieux.
Subject: Social Sciences – Education

1. Introduction

1.1. AI Affordances and Educational Potential

The driver of a modern automobile is assisted by several electronic helpers: the headlamps automatically light at dusk and the windscreen wipers detect rainfall; the cabin is kept at a comfortable temperature, while cruise control and lane guidance maintain constant speed and trajectory; gears are engaged automatically and a satnav provides route guidance and real-time advice on traffic congestion. In all, this partnership uses the power of automated systems to complement and augment the driver’s agency while reducing cognitive load – resulting in safer and less stressful journeys.
This paper examines an analogous partnership in the process and management of learning through recent developments in AI. Large Language Model Generative AI (GenAI) agents such as ChatGPT, DeepSeek and Gemini have received rapid take-up by students within a few months of launch [1,2]. Educational institutions have been slower to react [3], casting around for viable adaptation strategies in the face of a new technology that presents significant threats to established practice but offers radical opportunities for pedagogy and systems of delivery. The autonomic features of AgenticAI – a development of GenAI – take this further.

1.2. Purpose and Structure of This Paper

The purpose of this paper is to outline the architecture of an innovative learning experience platform that employs AI. It will be argued that, in contrast to recent trends in learning analytics, which have strengthened oversight and external controls on the learner, AI has the potential to adapt and personalise educational experience – and can be used to empower the learner. The orientation of the paper is pedagogical rather than technical, and learner-focused rather than teacher-focused. Many studies in this area aim to employ AI as an administrative tool to automate the conventional practice of lectures and summative testing; the aim of the learning experience platform described here is more radical. It provides AI partnership for the learner through personalised and adaptive dialogic engagement and formative assessment, human-agentic co-creation of learning outcomes, and the practising of 21st Century skills in a diversity of environments.
Section 2 of the paper discusses the educational context of AI. An overview of 21st Century graduate skills is followed by a brief review of Competency Based Education and emerging developments in social learning analytics. In Section 3, parallels are identified between traditional Socratic method, formative assessment, the role of generative AI tutors and the opportunities of agenticAI. In Section 4 is an overview of current AI-supported learning management systems, making the point that many see AI as a tool to reinforce existing practice. Section 5 presents the main thesis – and novel offering – of the paper: the architecture of a human-agenticAI learning system that integrates personalised dialogic engagement and formative assessment within an environment embracing collaborative working, simulated and situated learning. The system logs the learner’s contribution to co-created outcomes, resulting in the compilation of a summative achievement portfolio for CBE warranting. Section 6 concludes with discussion of the learner empowerment potential of an agenticAI approach, some limitations of the proposed system, and priorities for future research and development.

2. Educational Context

2.1. 21st Century Graduate Skills and Dispositions

The first quarter of the 21st Century has seen higher education struggling to adapt to waves of technological change, with AI as the latest disruptor. Stein [4] comments on what he calls the ‘shrinking half-life of knowledge’ accompanied by rapid growth in procedural ‘frontier knowledge’, and notes increasing pressure in many knowledge-intensive academic disciplines to keep curricula current and relevant. The Future of Jobs Report [5] finds a pattern in which two-fifths of existing skill sets will be substantially changed over the 2025-2030 period. The Report places AI and big data at the top of the list of fastest-growing skills, but also predicts increasing demand for the interpersonal skills and dispositions of computer-supported collaborative working. The professional services network PricewaterhouseCoopers makes similar findings [6], reporting 66% faster skill change in AI-exposed jobs than in others, and a 56% wage premium for AI skills.

2.2. Competency Based Education

Against this background, Competency Based Education (CBE) has grown in importance. CBE is defined by the U.S. Department of Education [7] as allowing students “to advance based on their ability to master a skill or competency at their own pace regardless of environment”. The same source distinguishes between CBE programmes that measure progress using credit hours and those, called direct assessment, that measure progress by directly assessing whether a student can demonstrate command of a content area or skill. Where conventional curricula can be outpaced by rapid change, the nature of CBE makes it more adaptive, with direct assessment a closer partner to formative assessment and Assessment for Learning (AfL). Sturgis [8] describes CBE as a learning environment that provides timely and personalised support and formative feedback, with the aim being for students to develop and apply a broad set of skills and dispositions to foster critical thinking, problem-solving, communication, and collaboration.
Where in the past the assessment of collaborative working was problematic, social learning analytics (SLA) is making progress. A systematic review by Kaliisa et al. [9] examined recent studies of SLA in computer-supported collaborative learning environments. They concluded that while most studies employed this approach to interpret student behaviours from a constructivist perspective, there remained many opportunities for the employment of multiple analytic tools and the use of SLA data to inform teaching decisions.

2.3. Educating the Whole Person

Education of the ‘whole person’ is paramount. A sole focus on graduate skills and competencies has been widely criticised as narrow and instrumental. Datnow et al. [10] argue the need for academic systems to evolve into humanistic systems that support not only knowledge acquisition and certification but also the social, emotional, moral, and civic development of students. This orientation is reflected in the Future of Jobs Report cited earlier, which aligns personal and interpersonal qualities with the challenges facing graduates. The UK National Foundation for Educational Research [11] identifies the top five ‘soft skills’ as: problem solving/decision making; critical thinking/analysis; communication; collaboration/cooperation; and creativity/innovation, but also emphasises the importance of lifelong learning in the acquisition of emergent knowledge, skills and dispositions, and for students to develop as ethically oriented individuals prepared for complex, global citizenship [12]. As Zhao [13] argues, these changing priorities challenge traditional higher education curricula that have focused predominantly on the transmission of subject knowledge.

3. Learning with AI

This section introduces a number of ways in which the affordances of AI can facilitate educational change. Most immediate is the relationship between learner and tutor, where GenAI supports Socratic dialogue and formative dialogic assessment. AgenticAI, a development of GenAI, facilitates the transformation of this tutorial relationship into a hybrid partnership where human and agent contribute towards shared outcomes. AI can also support a broader palette of educational activities and environments to address the needs of the whole person. In turn, this has the potential to link adaptive curriculum design to implementation, and to manage the competency-based assessment of these key skills within diverse learning environments.

3.1. Socratic Dialogue

Recent AI in Education literature has focused on parallels between traditional Socratic method and the educational stance of AI tutors. Socrates, a Greek philosopher and teacher living in the 5th Century BCE, developed the Socratic method as a form of dialogue-based learning to encourage participants to ask and answer questions in order to stimulate critical thinking and clarify ideas. Asakavičiūtė et al. [14] posit that the skills of Socratic dialogue are increasingly important in a transient world of volatile and conflicting information. Orynbassarova & Porta [15] and Gregorcic et al. [16] go further, defining Socratic method in the context of AI and performing pilot studies with ChatGPT. An analogy between Socratic method and the Oxford University tutorial is noted by Tapper & Palfreyman [17] and by Balan [18], as an educational approach promoting critical thinking, co-constructed meaning and personalised feedback. Lissack & Meagher [19] draw parallels between the Oxford tutorial and GenAI. A related pedagogical orientation is Vygotsky’s Zone of Proximal Development (ZPD), which refers to knowledge and skills too difficult for a learner to acquire alone, but possible with guidance from a ‘more knowledgeable other’. A systematic literature review by Cai et al. [20] found over 150 studies reporting how AI tools can be used to create and facilitate the necessary ZPD scaffolding for effective learning.

3.2. Formative Assessment and Learning Objectives

Many commentators have noted synergies between Socratic method and formative assessment. A significant feature of the latter is the provision of ongoing feedback, and studies over a number of years have found that well-constructed formative feedback can have high motivational value to enhance learning [21,22,23]. Assessment for Learning (AfL) is an approach within formative assessment that emphasises actively involving and empowering learners in their own learning [24]. A related strand of formative assessment is Assessment as Learning (AaL), with the purpose of supporting learners engaged in self- and peer-assessment in monitoring their own learning processes to develop self-regulation and self-direction [25]. Team working has been found particularly effective in this development of learners’ metacognitive skills [22,26,27].
GenAI provides many new opportunities for formative assessment. These include interactive simulations, virtual reality, gaming and real-time logging of achievement [3]. Within these, the potential for assessment through dialogue has been explored in a number of recent studies. Vashishth et al. [28] note the effectiveness of AI-driven learning analytics in enhancing student engagement and supporting individualised learning. However, they caution that ethical considerations and the induction of teaching staff (faculty) must be addressed to make this possible. Beyond personalisation, there is potential for continuous formative and adaptive assessment: research involving 120 students by Winarno & Al Azies [29] showed positive effects on student confidence and the potential for enhanced learning outcomes. A related issue is students’ self-regulation, and studies by He [30] and Xia et al. [31] both found evidence of the positive effects of formative assessment on students’ self-regulated learning behaviours. The cautions identified by Vashishth et al. [28] are also noted by Mao et al. [3], who share ethical concerns around AI and advocate the need for students to develop new AI literacies to prepare them for future workplace demands. Ilieva et al. [32] also express concern for responsible GenAI adoption and propose a holistic framework for GenAI-supported assessment in which teaching staff design adaptive and AI-informed tasks and provide feedback; learners engage with these tools transparently and ethically; and institutional bodies manage accountability through compliance standards and policies.
Assessment design and the formulation of learning objectives link GenAI to ZPD and Bloom’s Taxonomy. Sideeg [33] takes the position of Wiggins and McTighe [34] that the nature of assessment evidence should be defined before statements of learning outcomes are formulated. He argues that these should target the ZPD zone of what learners can achieve with the support of a ‘more knowledgeable other’: in this context, through Socratic dialogue with a GenAI agent. He cites the influential revision of Bloom’s Taxonomy made in 2001 by Anderson, Krathwohl et al. [35] in which the highest level of learning outcomes on the cognitive process dimension involves Creating (replacing Bloom’s original model that rated Evaluation above Synthesis). This recognises that Creating not only integrates other cognitive processes on this dimension – such as Application, Analysis and Evaluation – but also involves critical thinking, collaboration and communication. These are the same skills and dispositions identified earlier as important to whole person education and the needs of 21st Century life.
Dialogic co-construction and co-creation are key processes of the human-agenticAI learning system proposed in this paper.

3.3. AgenticAI and Co-Creation

An emerging extension of GenAI is agenticAI. The key difference between agenticAI and generative large language model AI lies in self-learning and evolving self-direction. Where the behaviour of conventional GenAI agents follows set algorithms, agenticAI learns from experience and awareness of context, continuously evolving its algorithms to make its behaviour adaptive and proactive [36]. Table 1 summarises key differences in behaviour in terms of agent autonomy, workflow automation, decision-making and tutor roles.
Some wider implications of agenticAI are discussed by Hughes et al. [38], who foresee in many industries a decentralisation of decision-making, reshaping of organisational structures and enhancement of cross-functional collaboration. They also call for robust governance frameworks, cross-industry collaboration, and research into ethical safeguards.
In the context of this paper, agenticAI extends the learner-support role of GenAI to facilitate a human-AI hybrid partnership. Molenaar [39], an early pioneer in this field, proposed a transfer of control model with six levels: from teacher-only control, through four steps of increasing technology involvement, to full automation in which AI controls all learning tasks. Cukurova [40] takes a different view of the ‘high end’ of automation with AI, distinguishing between two types of human involvement. In the first, human tasks are replaced by AI, resulting in a decline of human competence over time. In the second, human cognition is tightly coupled with AI through human-AI hybrid partnership, resulting in an extension of human competence over time.
Co-creation has also been noted in the field of AI-supported software development. Vibe Coding is an approach to programming (coding) that differs from agentic coding [41]. In the latter, software is developed autonomously by goal-driven agents with little human intervention. In vibe coding, a high level of natural language conversation between human and LLM agent facilitates the envisioning and exploration of creative approaches to a software project prior to algorithmic coding within the affordances of the programming language – a technical function that is performed by the agent [42]. This partnership is similar to the modern automobile analogy earlier in this paper, where automated systems complement and augment human agency while reducing cognitive load.
These conceptualisations assume humans are acting independently rather than in groups. The role of AI-supported learners in social-collaborative settings is explored by Järvelä et al. [43]. Here, they adapt a framework for socially-shared regulation in learning to include what they call hybrid human-AI shared regulation in learning (HASRL). In this, AI can be used to identify events in the interactions between co-workers that are associated with productive collaboration. As discussed earlier, team work experiences can be effective in the development of learners’ metacognitive skills. The HASRL model has the potential for application in a variety of team working and situated learning environments, such as those requiring the interpersonal soft skills discussed in Section 2.
GenAI and agenticAI show great potential to support learners and enhance learning, but may complicate assessment. Perkins et al. [44] developed a model to address this, with five levels of AI engagement in what they call an Artificial Intelligence Assessment Scale (AIAS). Activities at these levels range from no use of AI at Level 1, through increasing involvement at Levels 2, 3 and 4, to the full use of AI in human-agenticAI co-creation at Level 5. The scale is summarised in Table 2. It provides clear descriptors for each level but appears to focus on the individual learner, with no account taken of learning environment.
Mapping the AIAS to collaborative working environments will first require a comparison of the roles of AI agents as tutors in the two situations. Table 3 presents roles that AI tutors could take in supporting individual working and team working. For individual working (left-hand column), the four roles are similar to Levels 2–5 of the AIAS. At the first level is what might be called a ‘secretarial support’ role through the curation of study activity; this is followed by Socratic tutoring and dialogic formative assessment; next is a loose involvement of AI in finessing the student’s contribution to the learning activity; and finally is the ‘hybrid’ level – what Järvelä et al. [43] call HASRL and this paper calls human-agenticAI co-creation – of jointly-contributed learning outcomes.
In the right-hand column of Table 3 are four levels describing increasing involvement of agenticAI in supporting the team working of a group of learners; these also follow the AIAS Levels 2–5 in reflecting the increasing involvement of agenticAI. The final level includes the roles of promoting productive collaboration through ‘hybrid human-AI shared regulation in learning’ [43].
Team working is one of the soft skills identified by NFER [11] and discussed in Section 2.1. These learning activities may be exercised in a range of environments typical of contemporary higher education. Table 4 illustrates a mapping of activities to learning environments, and can be employed as a means of logging the occasions and quality of engagement using the AIAS descriptors.
In Table 4, the six columns are headed by types of activities included in the Learning Activities Library (see Section 5.3.1). The five rows are headed by types of environments included in the Learning Environments Library (see Section 5.3.1). Two sample learning outcomes are shown: for an individual online project, rated at Level 2 on the AIAS (see Table 2), and for a teamwork activity rated at Level 4.
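The kind of engagement logging described for Table 4 can be sketched as a small data structure. This is a hypothetical illustration only: the AIAS level names are paraphrased from the summary above (not Perkins et al.’s exact wording), and the activity/environment strings are invented examples standing in for entries from the two libraries.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class AIASLevel(IntEnum):
    """Paraphrased sketch of the five AIAS levels summarised in Table 2."""
    NO_AI = 1                 # Level 1: no use of AI
    AI_ASSISTED_IDEAS = 2     # Levels 2-4: increasing AI involvement
    AI_ASSISTED_EDITING = 3
    AI_TASK_COMPLETION = 4
    FULL_COCREATION = 5       # Level 5: human-agenticAI co-creation

@dataclass
class EngagementLog:
    """Logs occasions of engagement per (activity, environment) cell, as in Table 4."""
    entries: list = field(default_factory=list)

    def log(self, activity: str, environment: str, level: AIASLevel) -> None:
        self.entries.append((activity, environment, level))

    def cell(self, activity: str, environment: str) -> list:
        """Return all logged AIAS levels for one cell of the activity/environment grid."""
        return [lv for a, e, lv in self.entries
                if a == activity and e == environment]

# The two sample outcomes from Table 4: an individual online project at
# Level 2, and a collaborative teamwork activity at Level 4.
log = EngagementLog()
log.log("project", "individual online study", AIASLevel.AI_ASSISTED_IDEAS)
log.log("teamwork", "collaborative online study", AIASLevel.AI_TASK_COMPLETION)
```

Keeping each occasion as a separate entry (rather than a single grid cell value) preserves both the frequency and the quality of engagement for later portfolio compilation.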

4. AI-Supported Learning Systems

This section reviews current literature on AI-supported learning systems, distinguishing between traditional Learning Management Systems (LMSs) and Learning Experience Platforms (LXPs). The rationale for multi-agent systems is outlined, followed by some examples of applications. The section concludes with an overview of the opportunities created by generative multi-agent collaboration to enable adaptive learner-centred systems, and some criteria against which these might be evaluated.

4.1. Learning Experience Platforms

The origins of traditional LMSs can be traced to Behaviourism and Instructional Design. More recent theories of learning, such as Constructivism, Social Constructivism and Connectivism, have focused on the experience of the learner and have influenced the development of LXPs [45,46]. Table 5 provides a summary comparison of the two approaches. Essential differences lie in the degree of control; personalisation of course content and the learner’s experience; and in opportunities for social and collaborative learning.
LXPs show higher compatibility with the educational orientation of Socratic dialogue, formative assessment, human-AI co-creation and team working discussed in the previous section. Ways in which LXPs can foster soft-skills development are discussed by Valdiviezo & Crawford [47]. LXPs are also compatible with competency-based education, as shown in a bibliometric analysis by Radu et al. [48] and in a correlational study of the impact of technology-based collaborative learning on students’ CBE [49]. Improved student engagement and personalised learning is the target of a novel AI-enabled Intelligent Assistant built upon the Canvas LMS [50]. Similarly, Shamsudin & Hoon [51] report on the integration of ChatGPT into the Moodle LMS in Malaysian universities.

4.2. Multi-Agent Systems

Multi-agent AI systems offer the potential of improved agent reliability and verification, but their complexity creates challenges. In an extensive survey of multi-agent LLM AI systems, Guo et al. [52] liken the interaction of multiple autonomous agents in collaborative activities to human group dynamics in problem-solving scenarios. They also discuss the problem of ‘hallucination’ – where an agent generates text that is factually incorrect. In a multi-agent setting, the danger is that misinformation from one agent might be accepted and propagated by others in the network, so the detection and mitigation of hallucinations is a vital but complex task. Wu et al. [53] review recent developments in multi-agent systems, where generative agents communicate in semantically rich ways to collaborate with human-level fluency. To this end, they advocate a unified multi-agent design framework for generative embodied AI, in which hybrid collaborative architectures facilitate collaborative functionality. Alternative multi-agent architectures are proposed by Jiang et al. [54] in an area that shows burgeoning growth.
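One simple way to picture the hallucination-mitigation problem raised above is cross-agent agreement: rather than trusting a single agent’s output, a claim is accepted only when a quorum of independent agents concurs, and is otherwise flagged for review. This is an illustrative sketch of that general idea, not a technique taken from the cited surveys.

```python
from collections import Counter

def accept_claim(agent_answers, quorum=0.5):
    """Accept the majority answer only if its share of votes exceeds the quorum.

    Returns the consensus answer, or None when no answer clears the quorum
    (i.e. the claim is escalated rather than propagated through the network).
    """
    if not agent_answers:
        return None
    answer, votes = Counter(agent_answers).most_common(1)[0]
    return answer if votes / len(agent_answers) > quorum else None

# Three agents agree and one hallucinates: the consensus survives.
assert accept_claim(["1969", "1969", "1971", "1969"]) == "1969"
# A split vote clears no quorum: the claim is held back for verification.
assert accept_claim(["a", "b"]) is None
```

In a real multi-agent system the “votes” would come from independently prompted agents or dedicated verifier agents, but the design point is the same: misinformation from one agent should not be accepted by the network on that agent’s authority alone.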

4.3. Success Criteria for an AI-Supported Learning System

The recent developments discussed above are making it increasingly feasible for generative adaptive multi-agent systems to empower learners within contemporary and emerging educational contexts. In order to achieve this, systems must meet the following success criteria, to:
  • provide dialogic engagement and formative assessment;
  • adapt and personalise learning activities to empower the learner;
  • facilitate human-agenticAI and co-created learning;
  • aim to educate the whole person and practise 21st Century graduate skills and competences within diverse environments (see Table 3 and Table 4);
  • maintain ethical compliance;
  • employ self-correcting, generative embodied multi-agent AI frameworks.
To address these six criteria, the next section presents the main thesis of the paper: a specification of the architecture of a human-agenticAI learning system.

5. Architecture of a Human-agenticAI Co-Created Learning System

The proposed human-agenticAI co-created learning system – abbreviated in this paper as HCLS – is a multi-agent learning experience platform designed with safeguards for self-correction and ethical compliance. Its primary purpose is to support and empower learners through dialogic engagement and formative assessment in the practising of skills and competences through co-creation with agenticAI in a variety of activities and environments, assessed by social learning analytics. The system will be discussed at three levels: the principal agents; an overview of system processes; and functions of the AI agents.

5.1. Principal Agents of the HCLS

This first level introduces the five principal agents of the HCLS. The humans involved are the Learner and the Human Tutor. The AI agents involved are the AI Tutor, the Learner’s AI Assistant and the Human Tutor’s AI Assistant. The relations between these principal agents are illustrated in Figure 1, with the double-headed arrows indicating two-way interactions.
Each human is supported by an AI assistant. The Human Tutor’s AI Assistant provides advice and feedback on course management and the Learner’s AI Assistant supports and interacts with the learner in ways to be detailed later.

5.2. Overview of HCLS Processes

This second level introduces two more AI agents: the Learning Activity Scheduler and the Learning Activity Outcomes Assessor. Figure 2 illustrates how these are located in relation to the five principal agents and to the remaining components and processes of the HCLS.
The main processes of the HCLS are outlined here; further details follow later. A typical learning and assessment cycle is as follows.
  • The AI Tutor consults the Course Syllabus and activity libraries to select a learning activity for the Learner. The difficulty level and suitability are determined in consultation with the Learner’s AI Assistant and the details are passed to the Learning Activity Scheduler. This agent specifies a learning activity which is forwarded to the Learner’s AI Assistant and AI Tutor and reported to the Human Tutor’s AI Assistant.
  • The Learner’s AI Assistant cues the activity with the Learner at an opportune time, supports the Learner in completing the learning activity, and forwards the outcomes to the Learning Activity Outcomes Assessor.
  • The Learning Activity Outcomes Assessor evaluates the outcomes against the specification and reports to the Human Tutor’s AI Assistant.
  • The Human Tutor’s AI Assistant reports to the Human Tutor and forwards evidence of competence levels to the Learner’s Record of Achievement Portfolio. This is then available to external systems for academic warranting and awards.
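The four steps of this cycle can be sketched as message-passing between stub agents. This is a minimal illustration of the information flow only: agent names follow Figure 2, but all behaviour (activity selection, co-creation, evaluation) is stubbed with invented placeholders.

```python
class Portfolio:
    """The Learner's Record of Achievement Portfolio: the cycle's end point."""
    def __init__(self):
        self.records = []

class HumanTutorAssistant:
    """Reports assessments to the Human Tutor and forwards evidence to the Portfolio."""
    def __init__(self, portfolio):
        self.portfolio = portfolio
        self.reports = []
    def report(self, assessment):
        self.reports.append(assessment)
        self.portfolio.records.append(assessment)

class OutcomesAssessor:
    """Evaluates outcomes against the activity specification (stubbed here)."""
    def __init__(self, tutor_assistant):
        self.tutor_assistant = tutor_assistant
    def assess(self, activity, outcome):
        result = {"activity": activity["id"], "met_spec": outcome is not None}
        self.tutor_assistant.report(result)
        return result

class LearnerAssistant:
    """Cues the activity, supports the Learner, forwards outcomes for assessment."""
    def __init__(self, assessor):
        self.assessor = assessor
    def run_activity(self, activity):
        outcome = f"co-created outcome for {activity['id']}"  # stub co-creation
        return self.assessor.assess(activity, outcome)

class Scheduler:
    """Specifies a learning activity at a difficulty level agreed with the AI Tutor."""
    def specify(self, activity_id, level):
        return {"id": activity_id, "level": level}

# One pass through the cycle: Scheduler -> Learner's AI Assistant
# -> Outcomes Assessor -> Human Tutor's AI Assistant -> Portfolio.
portfolio = Portfolio()
assistant = LearnerAssistant(OutcomesAssessor(HumanTutorAssistant(portfolio)))
activity = Scheduler().specify("essay-01", level=2)
assistant.run_activity(activity)
```

Each agent only sees its immediate neighbours in the chain, which matches the point made later about the system’s designed-in feedback paths: no single agent writes directly to the Portfolio.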

5.3. Functions of AI Agents in the HCLS

5.3.1. Functions of the AI Tutor

The AI Tutor liaises closely with the Learner, the Human Tutor and the Learner’s AI Assistant. It consults the Course Syllabus and three libraries derived from it, from which suitable activities are selected and modified. In addition to being ethically compliant and appropriate to the Learner’s needs and abilities, these activities must be of diverse types, address specific competences, and be context-specific to individual or team working and to real or simulated scenarios. A Learning Activities Library holds a comprehensive list of ethically compliant activities drawn from authoritative sources, such as The Alan Turing Institute [55]. These activities relate to the Course Syllabus and include: problem-based learning; projects; research; teamwork; presentations/performances/demonstrations; and viva voce examinations. A Key Competences Library (see Section 5) lists all the competences specified in the Course Syllabus. A Learning Environments Library holds a comprehensive list of all the learning environments specified in the Course Syllabus that will be encountered by the Learner, which include: flipped classroom / blended learning; individual online study; collaborative online study; real or simulated workplace, or gaming; and activities in laboratory / workshop / studio / performance spaces. The eight functions of the AI Tutor are illustrated in Figure 3.
The generated activities are approved by the Human Tutor then notified to the Learning Activity Scheduler and Learner’s AI Assistant. Finally, the AI Tutor provides a quality feedback loop by evaluating and reporting to the Course Syllabus on the effectiveness and suitability of items in the Learning Activities Library.
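The AI Tutor’s selection step described above can be viewed as a filter over the Learning Activities Library, constrained by competence, environment and ethical compliance. The sketch below illustrates that filtering under invented library entries; the field names and example activities are assumptions for illustration, not the system’s data model.

```python
# Invented sample entries standing in for the Learning Activities Library,
# each tagged with competences, admissible environments, and compliance.
ACTIVITIES = [
    {"type": "project",
     "competences": {"problem solving"},
     "environments": {"individual online study"},
     "ethical": True},
    {"type": "teamwork",
     "competences": {"collaboration"},
     "environments": {"collaborative online study", "simulated workplace"},
     "ethical": True},
    {"type": "viva",
     "competences": {"communication"},
     "environments": {"performance space"},
     "ethical": False},  # non-compliant entries must never be selected
]

def select_activities(competence, environment):
    """Return ethically compliant activities that address the given competence
    in the given environment (a sketch of the AI Tutor's selection step)."""
    return [a for a in ACTIVITIES
            if a["ethical"]
            and competence in a["competences"]
            and environment in a["environments"]]

assert [a["type"] for a in select_activities("collaboration",
                                             "simulated workplace")] == ["teamwork"]
```

Making ethical compliance a hard filter, rather than a score, reflects the requirement above that only compliant activities reach the Scheduler.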

5.3.2. Functions of the Learner’s AI Assistant

The Learner’s AI Assistant performs two main functions: secretarial support and co-creation partnership. The secretarial support functions of the assistant include: liaison with the Learning Activity Scheduler and AI Tutor to notify the Learner of upcoming learning activities; using learning analytics data collated in the Learner Activity Monitor to track and report the Learner’s level of engagement in various activities and environments, for example, using the rating system shown in Table 2; and charting progress towards activity completion. The assistant also logs achievements evaluated by the Learning Activity Outcomes Assessor for the Learner’s Record of Achievement Portfolio. The co-creation partnership functions of the assistant include: researching, collating and summarising information; monitoring course communications via the learning management system (LMS) and peer learners, notifying the Learner on a need-to-know basis; engaging in dialogic / Socratic / formative assessment discussion with the Learner in relation to understanding course material and associated ideas; and assisting the Learner in addressing and structuring responses to learning activities. These are illustrated in Figure 4.

5.3.3. Functions of the Human Tutor’s AI Assistant

The Human Tutor’s AI Assistant performs two main functions: secretarial support and course management functions. It advises the tutor on the ethical compliance, suitability and scheduling of learning activities. It also collates performance data received from the Learning Activity Outcomes Assessor on learners in the tutor’s supervision group, and has oversight of the assessed outcomes data sent to the Record of Achievement Portfolios of relevant learners. A summary illustration of these functions is presented in Figure 5.

5.3.4. Functions of the Learning Activity Scheduler

The functions of the Learning Activity Scheduler are firstly, to liaise with the AI Tutor and Learner’s AI Assistant in cueing learning activities; and secondly, to notify the Human Tutor and Human Tutor’s AI Assistant. These functions are illustrated in Figure 6.

5.3.5. Functions of the Learning Activity Outcomes Assessor

The main function of this agent is to make a summative assessment of the co-created learning outcomes submitted alongside the report of the Learner’s AI Assistant. The outcomes are assessed in relation to activity definition criteria in the Learning Activities Library. The results of this assessment are then notified to the Human Tutor’s AI Assistant and the Learner’s AI Assistant. An illustration is presented in Figure 7.

5.4. Feedback Paths Within the HCLS

The HCLS is designed as a self-correcting multi-agent system. Han et al. [56] note the – sometimes inaccurate – information provided by GenAI, and argue the need for safeguards. They propose a data pipeline and Partial Answer Masking system for intrinsic self-correction. Cannon [57] addresses the same problem, but with a hybrid AI architecture in which large language models act as high-level orchestrators, delegating tasks to classical systems while using their outputs for analysis and refinement.
The HCLS employs five self-correcting feedback paths. In the first of these, the AI Tutor liaises with the Learner, the Human Tutor and the Learner’s AI Assistant in consulting the Course Syllabus and attached libraries to select, adapt and schedule learning activities suited to the Learner’s needs and the relevant learning environment. The second set of feedback paths involves the Learner’s AI Assistant, which engages with data from the Learning Activity Monitor and negotiates with the AI Tutor and Learning Activity Outcomes Assessor to determine the outcomes to be forwarded to the Record of Achievement Portfolio. The Human Tutor’s AI Assistant is involved in the third set of feedback paths, inspecting performance data from the Learning Activity Outcomes Assessor and assuring congruity in the reports sent to the Record of Achievement Portfolio. In the fourth set of feedback paths, the Learning Activity Scheduler interacts with the AI Tutor, the Learner’s AI Assistant, the Human Tutor and the Human Tutor’s AI Assistant to agree and schedule learning activities. Finally, a fifth set of feedback paths involves the Learning Activity Outcomes Assessor, which assesses outcomes in relation to the Learning Activities Library definitions, for ratification by the Learner’s AI Assistant and the Human Tutor’s AI Assistant. In these ways, interoperating designed-in features facilitate self-correction and quality management of the HCLS in relation to its external environment.
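One of these self-correcting paths can be sketched schematically: an assessment is forwarded only once independently ratified by other agents, with a disagreement triggering re-assessment. The voting rule, round limit and agent names below are illustrative assumptions, not part of the HCLS design.

```python
# Hypothetical sketch of a single self-correcting feedback path: an
# assessment is forwarded to the Record of Achievement Portfolio only
# when ratified by both the Learner's AI Assistant and the Human
# Tutor's AI Assistant; disagreement triggers re-assessment, and
# persistent disagreement would escalate to the Human Tutor.

def ratify(assessment: dict, ratifiers: list) -> bool:
    # All ratifying agents must approve the assessment.
    return all(agent(assessment) for agent in ratifiers)

def forward_with_self_correction(assess, ratifiers, max_rounds=3):
    for round_no in range(1, max_rounds + 1):
        assessment = assess(round_no)
        if ratify(assessment, ratifiers):
            return assessment  # forwarded to the portfolio
    return None                # escalate to the Human Tutor

# Toy example: the first assessment lacks evidence and is corrected on retry.
def assessor(round_no):
    return {"grade": "pass", "evidence": round_no > 1}

learners_assistant = lambda a: a["grade"] == "pass"
tutors_assistant = lambda a: a["evidence"]  # rejects the first round

print(forward_with_self_correction(assessor, [learners_assistant, tutors_assistant]))
```

The point of the sketch is the loop structure, not the decision logic: each ratifier would in practice be a separate LLM-backed agent whose dissent forces the assessing agent to try again, which is the mechanism by which ‘hallucinations’ from one agent can be caught by others.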
In addition to the internal self-correction afforded by multi-agent agenticAI, the HCLS incorporates a form of automated quality management [58]. In consultation with the Human Tutor, the Human Tutor’s AI Assistant provides analytics feedback to the Course Syllabus on the usefulness and suitability of the course and libraries content. In this curriculum development loop, the HCLS enables adaptive and continuing course tuning and enhancement.

6. Discussion and Conclusions

6.1. Evaluation of HCLS Against the Six Criteria

The HCLS is now evaluated against the six criteria A-F set out earlier. Table 6 presents a summary of how each criterion is addressed through the structures, interactions and processes of the various agents and data stores. Dialogic engagement and personalisation of learning activities support students’ personal and academic growth as sustainable learners (Criteria A and B). This is developed further through interactions with the AI Tutor and co-creation with the Learner’s AI Assistant (Criterion C). These personal experiences are complemented by the social challenges of team working to achieve real and simulated objectives in diverse environments (Criterion D) and in compliance with ethical standards (Criterion E). Finally, the self-correcting multi-agent interconnections within the HCLS enable a degree of agent autonomy in seeking to meet the requirements of the Course Syllabus and the criteria for a successful AI-supported learning system (Criterion F).

6.2. Assumptions in the HCLS Proposition

As mentioned in the Introduction, the orientation of this paper is pedagogical rather than technical, so the HCLS is proposed as an ideational conception rather than a blueprint for direct implementation. Nevertheless, it has been developed with potential application in mind, as a means to help prepare new entrants to a rapidly changing employment landscape in which education of the whole person is key to sustainability.
Several assumptions have been made in the design of the HCLS, and these fall into two sets: those concerning humans and those concerning AI agents. The first set acknowledges that humans may find the learning environment depersonalising, demotivating and difficult to navigate. This issue was noted in Section 3 in the concerns of Vashishth et al. [28] and Mao et al. [3], and points to the need for adequate staff (faculty) and student induction, perhaps along the lines suggested by Ilieva et al. [32]. Successful adoption would be contingent on the structures, interactions and processes in Table 6 being realised, and many learners might find variety and motivation in the emphasis on team working in diverse learning environments. In addition, recent developments in the use of AI to fine-tune User Interface (UI) and User Experience (UX) design [59,60] would help make the system easy to navigate and control. Both the Learner and the Human Tutor would find mitigation of administrative duties: the secretarial support functions of the Learner’s AI Assistant, and the secretarial and course management functions of the Human Tutor’s AI Assistant, would free up time and mental energy for educational matters.
The second set of assumptions is that the functionality demanded of the AI agents in the system is realistically achievable: that the multiple agents in the network will interact productively without conflicts, and that any ‘hallucinations’ from one agent will be detected and rectified by others. Given the rapid pace of development in the power of new AI processor chips and in the functionality of multi-agent LXP architectures, this assumption is significant but achievable; nevertheless, as for all LLM-based systems, extensive training and iterative refinement [61,62] would be necessary before the HCLS could be considered suitable for rollout.
In conclusion, this exploratory study has taken a Constructivist educational orientation on how an AI-supported LXP might be developed, employing human-agenticAI to empower liberally-educated learners for sustainable 21st Century careers in a diversity of environments.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sublime, J.; Renna, I. Is ChatGPT Massively Used by Students Nowadays? A Survey on the Use of Large Language Models such as ChatGPT in Educational Settings. Preprint 2024, arXiv:2412.17486. [CrossRef]
  2. Zhu, T.; Zhang, K.; Wang, W. Embracing AI in Education: Understanding the Surge in Large Language Model Use by Secondary Students. Preprint 2024, arXiv:2411.18708. [CrossRef]
  3. Mao, J.; Chen, B.; Liu, J. TechTrends 2024, 68, 58–66. [CrossRef]
  4. Stein, R.M. The Half-Life of Facts: Why Everything We Know Has an Expiration Date. Quant. Financ. 2014, 14, 1701–1703. [CrossRef]
  5. World Economic Forum. The Future of Jobs; World Economic Forum: Cologny, Switzerland, 2016. Available online: https://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf (accessed on 1 August 2025).
  6. PricewaterhouseCoopers. The Fearless Future: 2025 Global AI Jobs Barometer; PwC: London, UK, 2025. Available online: https://www.pwc.com/gx/en/issues/artificial-intelligence/ai-jobs-barometer.html (accessed on 1 August 2025).
  7. U.S. Department of Education. Direct Assessment (Competency-Based) Programs; U.S. Department of Education: Washington, DC, USA, 2025. Available online: https://www.ed.gov/laws-and-policy/higher-education-laws-and-policy/higher-education-policy/direct-assessment-competency-based-programs (accessed on 1 August 2025).
  8. Sturgis, C. Reaching the Tipping Point: Insights on Advancing Competency Education in New England; CompetencyWorks: USA, 2016. Available online: https://www.competencyworks.org/wp-content/uploads/2016/09/CompetencyWorks_Reaching-the-Tipping-Point.pdf (accessed on 1 August 2025).
  9. Kaliisa, R.; Rienties, B.; Mørch, A.; Kluge, A. Comput. Educ. Open 2022, 3, 100073. [CrossRef]
  10. Datnow, A.; Park, V.; Peurach, D.J.; Spillane, J.P. Transforming Education for Holistic Student Development; Brookings Institution: Washington, DC, USA, September 2022. Available online: https://www.brookings.edu/ (accessed on 1 August 2025).
  11. NFER. The Skills Imperative 2035; NFER: London, UK, 2022. Available online: https://www.nfer.ac.uk/the-skills-imperative-2035 (accessed on 1 August 2025).
  12. Saito, N.; Akiyama, T. On the Education of the Whole Person. Educ. Philos. Theory 2022, 56, 153–161. [CrossRef]
  13. Zhao, K. Educating for Wholeness, but Beyond Competences: Challenges to Key-Competences-Based Education in China. ECNU Rev. Educ. 2020, 1–17. [CrossRef]
  14. Asakaviciūtė, V.; Valantinaitė, I.; Sederavičiūtė-Pačiauskienė, V. Filos. Sociol. 2023, 34, 328–338. [CrossRef]
  15. Orynbassarova, D.; Porta, S. In Proceedings of the International Conference on Emerging eLearning Technologies and Applications; 2024. Available online: https://www.semanticscholar.org/paper/ab1cf9d86d10bcdec408489c0ae534aa944a65f4 (accessed on 1 August 2025).
  16. Gregorcic, B.; Polverini, G.; Sarlah, A. Phys. Educ. 2024, 59, 045001. [CrossRef]
  17. Tapper, T.; Palfreyman, D. The Tutorial System: The Jewel in the Crown. In Oxford, the Collegiate University; Springer: Dordrecht, The Netherlands, 2011; pp. 1–20. [CrossRef]
  18. Balan, A. Law Teach. 2017, 52, 171–189. [CrossRef]
  19. Lissack, M.; Meagher, B. Responsible Use of Large Language Models: An Analogy with the Oxford Tutorial System. She Ji 2024, 10, 389–413.
  20. Cai, L.; Msafiri, M.M.; Kangwa, D. Educ. Inf. Technol. 2025, 30, 7191–7264. [CrossRef]
  21. Black, P.; Wiliam, D. Assess. Educ. Princ. Policy Pract. 2018, 25, 551–575. [CrossRef]
  22. Parmigiani, D.; Nicchia, E.; Murgia, E.; Ingersoll, M. Front. Educ. 2024, 9, 1366215. [CrossRef]
  23. Muafa, A.; Lestariningsih, W. In Proceedings of the International Conference on Religion, Science and Education 2025, 4, 195–199. Available online: https://sunankalijaga.org/prosiding/index.php/icrse/article/view/1462/1143 (accessed on 1 August 2025).
  24. Sambell, K.; McDowell, L.; Montgomery, C. Assessment for Learning in Higher Education; Routledge: London, UK, 2012. [CrossRef]
  25. Earl, L.M. Assessment as Learning: Using Classroom Assessment to Maximize Student Learning; Corwin Press: Thousand Oaks, CA, USA, 2013.
  26. Atjonen, P.; Kontkanen, S.; Ruotsalainen, P.; Pöntinen, S. Pre-Service Teachers as Learners of Formative Assessment in Teaching Practice. Eur. J. Teach. Educ. 2024, 47, 267–284. [CrossRef]
  27. Fleckney, P.; Thompson, J.; Vaz-Serra, P. Designing Effective Peer Assessment Processes in Higher Education: A Systematic Review. High. Educ. Res. Dev. 2025, 44, 386–401. [CrossRef]
  28. Vashishth, T.; Sharma, V.; Sharma, K.; et al. AI-Driven Learning Analytics for Personalized Feedback and Assessment in Higher Education. In Using Traditional Design Methods to Enhance AI-Driven Decision Making; IGI Global, 2024. [CrossRef]
  29. Winarno, S.; Al Azies, H. Int. J. Pedagogy Teach. Educ. 2024, 8, 1–11. [CrossRef]
  30. He, S.; Epp, C.; Chen, F.; Cui, Y. Comput. Hum. Behav. 2024, 152, 108061. [CrossRef]
  31. Xia, Q.; Weng, X.; Ouyang, F.; Jin, T.; Chiu, T. Int. J. Educ. Technol. High. Educ. 2024, 21, 40. [CrossRef]
  32. Ilieva, G.; Yankova, T.; Ruseva, M.; et al. A Framework for Generative AI-Driven Assessment in Higher Education. Information 2025, 16, 472. [CrossRef]
  33. Sideeg, A. Bloom’s Taxonomy, Backward Design, and Vygotsky’s Zone of Proximal Development in Crafting Learning Outcomes. Int. J. Linguist. 2016, 8, 158. [CrossRef]
  34. Wiggins, G.; McTighe, J. Understanding by Design; Merrill-Prentice-Hall: New Jersey, NJ, USA, 2005.
  35. Anderson, L.W.; Krathwohl, D.R. (Eds.) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Allyn and Bacon: Boston, MA, USA, 2001.
  36. Kamalov, F.; Santandreu Calonge, D.; Smail, L.; Azizov, D.; Thadani, D.; Kwong, T.; Atif, A. Preprint 2025, arXiv:2504.20082. [CrossRef]
  37. Sehgal, G. AI Agentic AI in Education: Shaping the Future of Learning; Medium Blog, 2025. Available online: https://medium.com/accredian/ai-agentic-ai-in-education-shaping-the-future-of-learning-1e46ce9be0c1 (accessed on 1 August 2025).
  38. Hughes, L.; Dwivedi, Y.; Malik, T.; et al. AI Agents and Agentic Systems: A Multi-Expert Analysis. J. Comput. Inf. Syst. 2025, 1–29. [CrossRef]
  39. Molenaar, I. Eur. J. Educ. 2022, 57, 632–645. [CrossRef]
  40. Cukurova, M. Br. J. Educ. Technol. 2025, 56, 469–488. [CrossRef]
  41. Sapkota, R.; Roumeliotis, K.I.; Karkee, M. Vibe Coding vs. Agentic Coding: Fundamentals and Practical Implications of Agentic AI. Preprint 2025, arXiv:2505.19443. [CrossRef]
  42. Sarkar, A.; Drosos, I. Vibe Coding: Programming through Conversation with Artificial Intelligence. Preprint 2025, arXiv:2506.23253. [CrossRef]
  43. Järvelä, S.; Zhao, G.; Nguyen, A.; Chen, H. Br. J. Educ. Technol. 2025, 56, 455–468. [CrossRef]
  44. Perkins, M.; Furze, L.; Roe, J.; MacVaugh, J.J. Univ. Teach. Learn. Pract. 2024, 21, 06. [CrossRef]
  45. Masero, R. Learning Management Systems To Learning Experience Platforms: When Does an LMS Become an LXP? eLearning Industry, 14 July 2023. Available online: https://elearningindustry.com/learning-management-systems-to-learning-experience-platforms-when-does-an-lms-become-an-lxp (accessed on 1 August 2025).
  46. Valamis. LXP vs. LMS: Understanding the Key Differences; Valamis: 2025. Available online: https://www.valamis.com/blog/lxp-vs-lms (accessed on 1 August 2025).
  47. Valdiviezo, A.D.; Crawford, M. Fostering Soft-Skills Development through Learning Experience Platforms (LXPs). In Handbook of Teaching with Technology in Management, Leadership, and Business; Edward Elgar: Cheltenham, UK, 2020; pp. 312–321.
  48. Radu, C.; Ciocoiu, C.N.; Veith, C.; Dobrea, R.C. Artificial Intelligence and Competency-Based Education: A Bibliometric Analysis. Amfiteatru Econ. 2024, 26, 220–240. [CrossRef]
  49. Asad, M.M.; Qureshi, A. High. Educ. Skills Work-Based Learn. 2025. [CrossRef]
  50. Sajja, R.; Sermet, Y.; Cikmaz, M.; Cwiertny, D.; Demir, I. Information 2024, 15, 596. [CrossRef]
  51. Shamsudin, N.; Hoon, T. Exploring the Synergy of Learning Experience Platforms (LXP) with Artificial Intelligence for Enhanced Educational Outcomes. In Proceedings of the International Conference on Innovation & Entrepreneurship in Computing, Engineering & Science Education; Advances in Computer Science Research, Volume 117; Atlantis Press: Paris, France, 2024; pp. 30–39. [CrossRef]
  52. Guo, T.; Chen, X.; Wang, Y.; Chang, R.; Pei, S.; Chawla, N.V.; Zhang, X. Preprint 2024, arXiv:2402.01680. [CrossRef]
  53. Wu, D.; Wei, X.; Chen, G.; Shen, H.; et al. Generative Multi-Agent Collaboration in Embodied AI: A Systematic Review. Preprint 2025, arXiv:2502.11518. [CrossRef]
  54. Jiang, Y.-H.; Li, R.; Zhou, Y.; Qi, C.; Hu, H.; Wei, Y.; Jiang, B.; Wu, Y. AI Agent for Education: Von Neumann Multi-Agent System Framework. Preprint 2024, arXiv:2501.00083. [CrossRef]
  55. Leslie, D. Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector; The Alan Turing Institute: London, UK, 2019. [CrossRef]
  56. Han, H.; Xie, Y.; Zhao, X.; Li, J. Intrinsic Self-Correction in Generative Language Models via Data Pipeline and Partial Answer Masking. Preprint 2024, arXiv:2401.07301. [CrossRef]
  57. Cannon, M. Toward Self-Correcting Hybrid AI Systems: Integrating LLMs with Deterministic Classical Systems. SSRN Preprint 2025. [CrossRef]
  58. Quantzig. How Automated Quality Management Is Shaping the Future of Business; Quantzig: Mumbai, India, 2025. Available online: https://www.quantzig.com/… (accessed on 1 August 2025).
  59. Lamprecht, E. The Difference between UX and UI Design; CareerFoundry Blog: 2025. Available online: https://careerfoundry.com/… (accessed on 1 August 2025).
  60. Luo, Y. Designing with AI: A Systematic Literature Review on the Use, Development, and Perception of AI-Enabled UX Design Tools. Adv. Hum.-Comput. Interact. 2025, 2025, 3869207. [CrossRef]
  61. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training Language Models to Follow Instructions with Human Feedback. Advances in Neural Information Processing Systems 2022, 35, 27730–27744. [CrossRef]
  62. Ziegler, D.M.; Stiennon, N.; Wu, J.; Brown, T.B.; Radford, A.; Amodei, D.; Christiano, P. Fine-Tuning Language Models from Human Preferences. Advances in Neural Information Processing Systems 2019, 32. [CrossRef]
Figure 1. Principal agents of the human-agentic, co-created learning system (HCLS).
Figure 2. Components and processes of the HCLS.
Figure 3. Functions of the AI Tutor.
Figure 4. Functions of the Learner’s AI Assistant.
Figure 5. Functions of the Human Tutor’s AI Assistant.
Figure 6. Functions of the Learning Activity Scheduler.
Figure 7. Functions of the Learning Activity Outcomes Assessor.
Table 1. Comparison of the Educational Affordances of GenAI and AgenticAI (adapted from Sehgal [37]).
Feature | GenAI | AgenticAI
Autonomy | Acts in response to human input | Acts autonomously in response to learner and environment
Workflow | Automates given workflow processes | Optimises and evolves new workflow processes
Decision-making | Makes decisions on the basis of predictive learning analytics data | Employs self-learning for proactive decision-making
AI Tutor roles | ‘Secretarial support’ and dialogic engagement | Adapting and personalising activities and curriculum for the learner
Table 2. Artificial Intelligence Assessment Scale (adapted from Perkins et al. [44]).
Level 1 | No use of AI
Level 2 | AI used for brainstorming, creating structures, and generating ideas
Level 3 | AI-assisted editing, improving the quality of student-created work
Level 4 | Use of AI to complete certain elements of the task, with students providing a commentary on which elements were involved
Level 5 | Full use of AI as ‘co-pilot’ in a collaborative partnership, without specification of which elements were wholly AI-generated
Table 3. Support roles of agenticAI in Students’ Individual and Team Working Environments.
AgenticAI support for individual working | AgenticAI support for team working
Curating the student’s study activity with notes, summaries, diary management and links to resources | Curating information and resources, team communications and liaison to support students’ team working
Providing Socratic tutoring and dialogic formative assessment | Providing Socratic tutoring and dialogic formative assessment
Checking and improving the quality of student-created work | Identifying and curating team working and improving the quality of collaborative achievements
Human-agenticAI co-creation between student and AI tutor | Supporting peer evaluations of collaborative working; engaging in ‘hybrid human-AI shared regulation in learning’ (HASRL)
Table 4. Learning Activities Mapped to Environments, with two sample learning outcomes rated on the Artificial Intelligence Assessment Scale.
Activity PBL Projects Research Teamwork Presentations Viva voce
Flipped classroom /blended
Individual online activity Level 2
Collaborative online activity
Workplace / simulation / gaming Level 4
Laboratory /workshop / studio
Table 5. Summary Comparison of Learning Management Systems with Learning Experience Platforms (adapted from Masero [45]).
Function | Learning Management Systems | Learning Experience Platforms
Locus of control | Tutor/Administrator control. Cognitivist orientation in focus on content delivery and management. | Learner control. Constructivist orientation in focus on learner experience and engagement.
Personalisation | Limited personalisation of content and tasks. | AI-driven personalisation of content and activities, based on user preferences and behaviour.
Social & collaborative orientation | Limited social interaction features. | Flexible opportunities for social and collaborative learning.
Table 6. Evaluation of HCLS against the six criteria.
Code | Criterion | Structures, Interactions and Processes
A | Dialogic engagement and formative assessment | Interactions and processes between the Learner, the AI Tutor and the Learner’s AI Assistant to enable support and engagement.
B | Adaptive and personalised activities to empower the learner | Interactions and processes between the AI Tutor, the Learner’s AI Assistant and the Learning Activity Scheduler to select and cue suitable activities to facilitate mastery by the Learner.
C | Human-agenticAI and co-created learning | Interactions and processes between the Learner, the Learner’s AI Assistant and the AI Tutor to provide partnership in the co-creation of learning activity outcomes.
D | Develop personal and collaborative skills and competences in diverse environments | Personal and social experiences of collaborative learning in diverse environments. Interactions and processes between the Learning Activity Outcomes Assessor and adjacent agents to assess co-created learning activity outcomes against key competences and external environments criteria (Table 4).
E | Ethical compliance | Interactions and processes between the Human Tutor’s AI Assistant and the Human Tutor to manage the ethical compliance of learning activities and external environments to authoritative guidelines (Section 5.3.1).
F | Employment of self-correcting, generative embodied multi-agent AI frameworks | Structures supporting five forms of internal self-correction (Section 5.4). External quality management feedback to the Course Syllabus and libraries.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.