Preprint Article

This version is not peer-reviewed. A peer-reviewed article of this preprint also exists.

GPTs and the Choice Architecture of Pedagogies in Vocational Education

Submitted: 26 August 2025
Posted: 27 August 2025


Abstract
Generative Pre-Trained Transformers (GPTs) have rapidly entered educational contexts, raising questions about their impact on pedagogy, workload, and professional practice. While their potential to automate resource creation, planning, and administrative tasks is widely discussed, little empirical evidence exists regarding their use in vocational education (VE). This study explores how VE educators in England are currently engaging with AI tools and the implications for workload and teaching practice. Data were collected through a survey of 60 vocational teachers from diverse subject areas, combining quantitative measures of frequency, perceived usefulness, and delegated tasks with open qualitative reflections. Descriptive statistics, cross-tabulation, and thematic analysis were used to interpret responses, framed through the Pareto Principle to consider whether certain tasks are disproportionately offloaded to GPTs. Findings indicate cautious but positive adoption, with most educators using AI tools infrequently (0–10 times per month) yet rating them highly useful (average 4/5) for supporting workload. Resource and assessment creation dominated reported uses, while administrative applications were less common. The choice architecture framing indicates that some GPTs guide teachers to certain resources over others and potential implications of this are discussed. Qualitative insights highlighted concerns around quality, overreliance, and the risk of diminishing professional agency. The study concludes that GPTs offer meaningful workload support but require careful integration, critical evaluation, and professional development to ensure they enhance rather than constrain VE pedagogy.
Subject: Social Sciences – Education

1. Introduction

The attraction and appeal of large (or neural) language models, commonly known as Generative Pre-Trained Transformers (GPTs), lies in their potential to infer and produce quality text responses to user prompts. Pre-trained on large datasets, these models enable a variety of chatbot applications to perform a wide range of tasks. The most notable commercial form is ChatGPT, accessed through the OpenAI website, while alternative interfaces exist built on the same deep learning mechanisms used to process, sequence, organize and analyze data to assist in solving complex problems [1]. Though primarily online, these technologies can also run offline: transformer networks are now built into some existing platforms, including Acrobat PDFs, while open-source alternatives such as GPT4All, from Nomic AI, mean these tools can run on desktop central processing units without internet connections.
Uses for the varying GPTs now proliferating are diverse, but educators have been quick to see benefits, with pedagogical affordances speculated for teachers and expectations of how such models might save teachers time through the automation of labor-intensive tasks, including marking, lesson or course planning, or burdensome administrative duties. Ethical issues surround the automation of some of these tasks. Where bias is coded into the training data [2] of the language models, it can be passed through in responses to prompts, potentially discriminating against marginalised peoples through, for example, the widening use of data analytics [3] to predict the success of learning outcomes. Equally, there are growing concerns about the disruption these technologies cause to conventional means of assessment, with a corrosion of trust infiltrating students' essay writing, since LLM interfaces can generate credible responses to tasks, and the plagiarism-detection software meant to catch them out is often flawed. This has led to general concerns about the impact of AI chatbots on academic integrity, particularly where it relates to classroom-based pedagogy [3,4].
The vocational sector, often called the skills sector in England, entails procedural knowledge underpinned by spatial abilities and psychomotor skills: technical activities in the arts, media and computing; musical ability; visual estimation; the relationships between object, user and environment; and often the manipulation or handling of technical equipment. These types of skills, being redolent of real-world and work-based learning, fit well with practical and experiential learning. Vocational Education (VE) tends to negotiate technological progress where it can, and its training must be equally innovative and bold where possible. It is often a neglected sector, overlooked in favour of schools and higher education, and suffers from poor funding in England. The influence of GPTs on these domains and this sector remains to be seen, but a sector that reflects real-world learning and training must be open to emergent technical tools, seeking to deploy them in teaching and learning contexts as well as for the efficiencies they may represent in the working conditions of its staff [5]. Teachers, therefore, are at the vanguard of these changes, and FE commentators have long argued for the necessity of students and staff alike embracing digital innovation, as long as innovative technological knowledge is accompanied by pedagogical knowledge [6].
Reports such as the 2014 FELTAG (Further Education Learning Technologies Action Group) report inform the work of the vocational sector, with directives (prior to Covid-19) that proportions of teaching should be moved online. While there are unanswered questions about the empirical evidence for learning gains from learning technologies, there can be no debate about the need for particular digital literacies for today's students. This starts with the digital competencies of teachers, formally acknowledged in models such as TPACK [7], devised as the understanding of the overlap between 'Technological, Pedagogical and Content' knowledge needed to develop proficient educational experts. Frameworks, theoretical reference points and approaches are continually evolving [8], with adaptiveness, reflection and competence in technological use being key for teacher education. Others [9] use an evaluation matrix to consider the perceived usefulness of technology in vocational education (VE), underpinned by the acceptance of technology (Technology Acceptance Model, or TAM) [10], where ease of use is a main factor in take-up. Another potentially useful model for understanding how technology is utilized by teachers is SAMR, first proposed by Puentedura [12], which identifies four principal levels at which technologies impact pedagogical classroom tasks, being a substitution, augmentation, modification or redefinition (SAMR) of previously existing practices, though meta-studies point to a lack of contextual information in its use and a focus on product over process [12]. The latter is an issue in VE, where product is certainly important, but a focus on process incorporates learning by doing, experimentation and risk-taking, and reflection on mistakes as paramount to students' mastery of concepts.
GPTs certainly come with a plethora of problems: hype, inflated claims of performance and operational capacity, and a susceptibility to generating errors. They also carry ethical issues, with built-in bias, glitches and even the potential to exacerbate existing inequalities [13]. But as potentially powerful tools in an interregnum state of development, they are often freely available and built into existing networks or virtual learning environments, meaning they are on hand and that the platformatisation of education is likely here to stay [14].
Against the previously noted need for competency and innovation stand two socio-economic factors critical to these debates. Firstly, there is a growing teacher retention and recruitment crisis across most sectors in England, and seemingly internationally, with high workload, precarity [15] and poor working conditions given as reasons for these twofold issues [16]. Secondly, there is the possibility that the advent of general artificial intelligence brings automation to huge industries, prompting stakeholders from policy governance to local educational contexts to consider the purposes of vocational education. The UK Government has already started to investigate the ways in which AI may reduce workload or support improved student feedback [17], though this may come at the cost of teacher agency, which is integral to teacher professionalism [18,19]. Nevertheless, the arrival of GPTs entails a new way of imagining how vocational educators might integrate these tools into their teaching and learning contexts, or alternatively relieve some of the burden of administrative duties that push teaching and learning priorities aside in favour of form filling. Proposals for technologies as solutions highlight issues within systems, since they engrain policy as frozen in silicon [20], with government policymakers giving a heuristic understanding of the contemporary pressing issues. This is important, since research that addresses policy lays bare the issues inherent in the system that AI in education (AIE) is supposed to resolve and can shape critical pushback [21] on potential solutions that may be ineffective or require further development. In this instance, we are interested in what the tools are being used for, so that we can potentially forecast the needs of future teachers. The focus is on workload: the previously referenced Government report states that AIE can help with:
  • creating educational resources
  • lesson and curriculum planning
  • tailored feedback and revision activities
  • administrative tasks
  • supporting personalised learning
and that, when used appropriately, generative AI has the potential to:
  • reduce workload across the education sector
  • free up teachers’ time, allowing them to focus on delivering excellent teaching
This paper aims to investigate the ways in which VE educators are utilizing AI to date. It takes as a starting point the Pareto Principle, an admittedly dated economic notion that explained how 80 per cent of a nation's wealth was concentrated in the possession of 20 per cent of the population, a figure that has likely fluctuated in the present era. The Principle has been applied in academic fields in varying ways, usually corresponding to the same ratio, i.e. that 80% of outcomes stem from 20% of the causes. Its relevance here is in considering whether GPTs are supporting teachers with heavy workloads, for example by streamlining repetitive tasks. A significant portion of a teacher's time is often consumed by repetitive work such as creating quizzes, grading, or formatting materials. Offloading such time-consuming, repetitive tasks could free teachers to focus on customizing lessons to meet individual student needs, engaging directly with students in class discussions or activities, or designing more creative, interactive components of their lessons. Yet since GPTs are stateless, that is, they do not retain data from preceding interactions or requests, they may miss much of the nuance of a classroom that a teacher gleans from day to day and over a career. They capture static impressions of dynamic, multilayered and shifting contexts, so their resources may be limited in value.
If GPTs can generate the bulk of lesson outlines, activities, and assessments, then teachers can refine these resources for their specific classroom context. This aspect of tweaking resources, materials and plans created by GPTs may be crucial to teachers' development and agency in the classroom. Indeed, findings show [22] that teachers with a depth of subject knowledge and mastery of effective prompt engineering rate GPTs as useful, whereas teachers with less secure subject knowledge and unfamiliarity with quality prompts do not evaluate the benefits of AI highly. This indicates that some offloading of creativity may occur in future, and yet the quality of GPT-generated resources remains questionable [23]. More sophisticated uses of GPTs may focus on learning analytics, topic modelling, analysis of datasets or personalising individualised learning pathways to enhance student improvement. The study cited above [22], however, focused purely on ChatGPT and on primary education teachers. What is less well known is how VE teachers are currently using a wider range of GPTs and for what purposes, and this guided our research design and instruments. It also leads to our research questions, which investigate the main ways these technologies, and the application of them, may be influencing teachers' professional development, workload, and choices about which tasks they delegate to GPTs. We were interested in the potential choice architecture resident in such tools, and this becomes a focus of the analysis and discussion that follows. Although the sample is scattered across a variety of VE contexts, and so does not reflect a concentrated, cross-institutional approach, the variety in results potentially shows the need for a joined-up framework to guide GPT prompt engineering towards quality output.

2. Materials and Methods

This study presents findings from a survey of 60 vocational educators regarding their use of Artificial Intelligence (AI) tools in vocational education settings. The authors wished to develop an understanding of how vocational educators were starting to use AI in their settings and contexts. This involved asking about types of tools, their application and purpose, their frequency of use, and perceptions of their usefulness in helping with workload. Other contextual information about the educators' length of teaching experience and subject specialism was drawn in to ascertain whether newly qualified or more experienced teachers had differing views of the uses of GPTs. Finally, an open question was added to the survey to elicit views and values about the emergence of AI in education (‘Is there anything else you would like to share about your experience with using AI Tools in education?’).
Quantitative analysis was used to identify patterns, and thematic insights were gathered for illustrative purposes. In the discussion that follows the presented results, we draw on the Pareto Principle as a means to identify the proportion of work tasks delegated by teachers to GPTs. This is done to identify which tasks are most commonly given to GPTs and whether a few comprise the majority.
We acknowledge the limitations of this research design. The small sample of 60 VE teachers was simply a function of who responded to the survey. Responses were drawn from requests to participate issued to VE workplaces, through a learning technologies news bulletin, and across social media channels by the authors and peers. The sample is not meant to be representative of, or generalizable to, VE educators nationally or further afield, but offers a contextual insight from those who were available to us and responded to our call for participants. Further, we note that users of social media or readers of a learning technologies newsletter are probably more inclined than average to be innovative adopters of technologies in their teaching, but we count this as an advantage of the study rather than a limitation, since they may be better placed to yield insights through the open questions.
Participants ranged across VE subject specialisms, including subject experts in Carpentry and Joinery, Maths, Counselling, Sports, English, ESOL (English for Speakers of Other Languages), Business, Engineering, Computing, Bakery and Catering, Travel and Tourism, and Special Educational Needs support. This gives a varied and realistic representation of the typical VE subjects taught in English college systems.
The data was collected via a Microsoft Forms survey and was a mixture of information gathering questions, a Likert scale ranking and an open question giving qualitative insights into one combined dataset. The questions are shown below:
  • What is your role at the College and subject specialism?
  • How many years teaching experience do you have?
  • How often do you use Artificial Intelligence tools (such as TeacherMatic, Gemini, Gamma, ChatGPT, etc.) for work each month?
  • Which is the AI Tool that you use most frequently?
  • On a scale of 1–5 Stars (1 being 'Not at all useful' and 5 being 'Extremely useful'), how would you rate these tools in terms of supporting your workload?
  • Which functions of these AI Tools do you use most frequently? (Select up to three.)
  • How often do you need to adapt the output created by AI?
  • In general, do you think AI Tools are worthwhile for educators?
  • Is there anything else you would like to share about your experience with using AI Tools in education?

3. Results

As mentioned, participants provided details on their teaching subject and experience, their frequency of AI tool usage, usefulness ratings, task types assigned to AI, and reflective comments. This straightforward design allows us to gather and present this information below.

3.1. Key Quantitative Findings and Descriptive Results

Overall sample
  • Total responses: 60

3.1.1. AI Use Frequency (Monthly)

Counts by normalised band are shown:
  • 0–5 times per month: 21 respondents
  • 6–10 times per month: 14 respondents
  • 11–20 times per month: 9 respondents
  • 21+ times per month: 7 respondents
  • Never used: 1 respondent (Remaining rows had NaN for frequency where respondents didn’t answer the question.)
Most teachers are low-to-moderate users, as the modal category is low use (0–5/month). There are a few heavier users, but they are a small minority: the largest group of participants (21) reported using AI tools 0–5 times per month, followed by 14 reporting 6–10 uses, with smaller groups using tools more intensively (11–20 or 21+ times). This suggests a cautious adoption phase, with a few outliers who are heavy users. There appeared to be no correlation between heavier use and duration of teaching experience.
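The band counts above can be tallied directly to recover the response rate and the modal band. A minimal sketch (counts are the figures reported above; band labels are ours):

```python
# Reported monthly-use bands and respondent counts (from the survey results above)
bands = {
    "0-5": 21,
    "6-10": 14,
    "11-20": 9,
    "21+": 7,
    "never": 1,
}

total_responses = 60                      # total survey responses
answered = sum(bands.values())            # respondents who answered this question
missing = total_responses - answered      # rows recorded as NaN for frequency

modal_band = max(bands, key=bands.get)    # band with the most respondents
modal_share = bands[modal_band] / answered

print(answered, missing, modal_band, round(modal_share, 2))  # 52 8 0-5 0.4
```

So 52 of 60 respondents answered the frequency question, and the modal 0–5 band accounts for roughly 40% of those who did.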

3.1.2. Usefulness Ratings (1-5)

Despite some low instances of use per month, teachers clearly rated the usefulness of AI tools highly for supporting their workload, with an average score of 4.03 out of 5. The most common ratings were 4 and 5 stars, indicating that many educators found these tools beneficial.
Q. How helpful do teachers find AI for supporting workload?
  • Descriptive statistics:
  • Count: 59 (one missing)
  • Mean: 4.03
  • Median: 4
  • Std dev: ~0.999
  • Min–Max: 1–5
Teachers who rated usefulness generally consider AI tools helpful for workload (mean ≈ 4/5). There is variation (some low ratings), but the distribution skews positive.
Participants who gave higher usefulness ratings mostly used the tools for three main purposes, with assessment purposes (such as creating assessment tests) the most common, followed by lesson planning and resource creation, such as worksheets or presentations. Fewer respondents reported using the platforms for administrative tasks, such as report writing or tracking. Overall tasks delegated to GPTs are shown below.

3.1.3. Tasks Delegated to GPT Systems

  • Resource creation (worksheets, presentations, slides): 48 mentions
  • Assessment creation (quizzes, exam items): 35 mentions
  • Lesson planning: 27 mentions
  • Assessment marking / grading (where explicitly stated): 39 mentions
  • Feedback generation: 12 mentions
  • Administrative tasks (tracking, reports): 11 mentions
  • Image generation: 2 mentions
  • Other / miscellaneous: 8 mentions
As shown above, the most common task that was assigned to AI was resource creation (48 mentions), followed by assessment creation (35) and lesson planning (27).
This shows that AI is being used primarily in creative and planning tasks, rather than decision-making, marking or grading.
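Framed through the Pareto Principle, the mention counts above can be checked for concentration: do a minority of task categories account for the bulk of delegated work? A minimal sketch using the counts reported above (category labels are ours):

```python
# Mention counts for tasks delegated to GPTs (from the list above)
tasks = {
    "resource creation": 48,
    "assessment marking/grading": 39,
    "assessment creation": 35,
    "lesson planning": 27,
    "feedback generation": 12,
    "administrative tasks": 11,
    "other/miscellaneous": 8,
    "image generation": 2,
}

total = sum(tasks.values())                    # 182 mentions in all
ranked = sorted(tasks.values(), reverse=True)  # counts from largest to smallest

def top_share(k: int) -> float:
    """Cumulative share of all mentions covered by the top k categories."""
    return sum(ranked[:k]) / total

# Top 2 of 8 categories (25%) cover just under half of all mentions;
# top 4 (50%) cover just over four fifths.
print(round(top_share(2), 2), round(top_share(4), 2))  # 0.48 0.82
```

On these figures delegation is clearly concentrated in a few task types, though short of a strict 80/20 split: the top quarter of categories covers about 48% of mentions rather than 80%.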
Further analysis follows, covering:
  • Cross-tabulation of usefulness by frequency of AI use
  • Correlation between teaching experience and AI adoption
  • Visual summaries (e.g., bar charts of task use, rating distributions)
  • Participant case examples and illustrative quotes

3.1.4. Cross-Tabulation & Group Comparisons

Mean usefulness (with respondent count) by frequency:
  • 0–5 — n = 16, mean usefulness ≈ 4.00, sd ≈ 1.06
  • 6–10 — n = 12, mean usefulness ≈ 4.08, sd ≈ 0.67
  • 11–20 — n = 3, mean usefulness ≈ 4.33, sd ≈ 1.15
  • 21+ — n = 1, mean usefulness = 5.0
  • Never used — n = 1 (no usefulness rating)
Mean usefulness is high across frequency bands. Heavier users show slightly higher mean usefulness, but the heavy-user groups are too small to infer any correlation.
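As a consistency check on these group figures, the overall mean for raters in these bands can be recovered by weighting each band's mean by its n (figures taken from the list above; the 'never used' respondent contributes no rating):

```python
# (n, mean usefulness) per frequency band, as reported above
bands = [
    (16, 4.00),  # 0-5 uses/month
    (12, 4.08),  # 6-10
    (3, 4.33),   # 11-20
    (1, 5.00),   # 21+
]

n_total = sum(n for n, _ in bands)
pooled_mean = sum(n * m for n, m in bands) / n_total  # n-weighted mean

print(n_total, round(pooled_mean, 2))  # 32 4.09
```

Note that only 32 respondents appear in the cross-tabulation, fewer than the 59 who rated usefulness overall, because band data were missing for the remainder; the pooled mean of about 4.09 is nonetheless consistent with the overall mean of 4.03 reported earlier.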

3.1.5. Correlations by Teaching Experience

The survey data for these categories were incomplete: missing data in the years-of-experience variable for enough cases made a stable correlation unreliable. Correlations are therefore exploratory only, based on a modest sample size with missing values in some variables. The correlation between use frequency and usefulness is the most interpretable result here and is consistent with intuition: greater years of teaching experience alone do not predict how useful respondents find GPTs for their workload.

3.1.6. Illustrative Anonymized Comments from Open Questions

Below is a selection of samples from the open question, with more of these qualitative responses drawn into the discussion towards the end of the paper. What is striking is that respondents tended to valorise the GPT platforms while often remaining critical about the potential impact on new teachers, the misuse of AI for unhelpful shortcuts, and possible negative influences on education generally. What largely cut through in the responses were observations about the poor quality of the materials sometimes created and the need to modify or adapt what was produced. Sample quotes are shown below.
  • “The end result is only ever as good as the prompts you put into whichever AI generator you are using.”
  • “Some features of TeacherMatic are used more frequently than others. It's good that there is a 'favourites' option.”
  • “There is a general expectation that they should be better then they are. (e.g produce a scheme of work or lesson plan perfectly first time).”
  • “Please do not utilise A.I in education just to save time on teaching: use it to enhance learning experiences.”
  • “They are useful in the current teaching environment. But they shouldn't have to be if the workload was balanced correctly. If we continue to have to use it, will it de-skill teachers[?]. Will we run the risk of lesson be created by AI and the teacher not knowing or understanding how or if it meets the needs of learners[?]. Meaning that lesson are used inappropriately.”
  • “Often it is sold as reducing your workload but I'm not sure. Often the quality or robustness of the product it gives you requires more work to make it effective. I worry the impact it will have on student teachers and the lessons they will lose in their early career as they use AI.”
As stated, these contributions give some insight into the importance of ‘quality’ overall, a concern that supersedes simply using such systems to alleviate workload. ‘Quality’ is referred to in each answer, from the first, which indicates the need for decent prompts, to the subjective identification of what constitutes ‘quality’ on a personal level in the second example; these can easily be clustered together. The third points to a problem mentioned earlier: teachers need to know to look for issues in resources provided by GPTs, rather than blindly accept that the results will always be of a good standard. There may be issues with this going forward. At this juncture there is a strong general focus on identifying what is created by AI, particularly with a view to what mistakes are present; in the future it may be less easy to notice these as we become more accustomed to AI outputs, arguably more trusting of them, and less critical.
Certain functionalities and features of the systems (rubric and assessment generators) are highly valued, but there are considerable caveats about pedagogy and ethics, as well as observations about how such systems are approached. The last three statements all highlight criticality about the parallels between the systems and the environments they are being introduced into. Heavy workload in education is a commonly understood issue (statement 5), while the final statement (6) points to the inferiority of these systems against the value of a teacher; new entrants to the profession may not come to understand the fit with their students, or what came before the common infiltration of GPT systems into educational workplaces.
Nevertheless, the survey results point to the general usefulness of AI for easing workload, albeit with caveats around quality. Adoption levels of AI are uneven, and teaching experience doesn’t necessarily align to usefulness or effectiveness.
It is important to note that this paper is published at an interregnum: a threshold between what the human imagination puts to these systems and what GPT systems can produce. In a short period of time, capacities change and grow, and systems tend to improve. This means that the results are determined mainly by teachers' understanding of what is possible and can be delegated: workload for nuanced tasks, the 'bullshit jobs' as Graeber had it, may well end up being siphoned to AI.

4. Discussion

Given the uneven levels of adoption of AI, it was interesting to see the range of tools being deployed, as cited in the surveys. For the most part, this was predicated on TeacherMatic and ChatGPT, which is unsurprising, since the former is licensed in many college institutions, while the latter is prolifically used as an early marketized tool. Other mentions were given to Co-Pilot, Perplexity, Gemini, and Notebook. A choice architecture reading of the data shows that these common ‘Big Tech’ tools are in the majority being used, so that where institutions and professional bodies, as well as Government, heedlessly advocate for the embrace of ‘AI’, the main beneficiaries tend to be the market-established companies whose products are visibly available or built into existing platforms, like portable documents or browsers. This is problematic, since a wider variety of available tools is being overlooked. The choice of tool and its functionality determines the way it will be used, and mainstream tools embedded in monopolistic ecosystems such as Microsoft and Google are likely to be prominently used, and thus to shape educational practices in ways that are not necessarily helpful and may be harmful (e.g. the harvesting of student data [23]).
The data shows that GPT platforms hold potential for resource creation, but it could be argued that they appear to diminish subject-specialism skills that would otherwise be honed, in a labour-intensive manner, through the creation of original resources. Something replaces the act of creativity: a refinement at best, as it is reported that these resources need careful scrutiny [24] because they often contain mistakes that can go unchecked straight into teaching. Refinement, or checking and modifying resources produced by GPTs, one could infer, is not always going to be practiced. We argue that early career teachers in particular may benefit from creating resources to develop knowledge of how to communicate and teach the content of their subject specialism. For example, beginning teachers typically need to design and develop schemes of work for an extended syllabus that may contain twelve weeks of lessons and activities mapped to a curriculum. This is exhausting work that could be automated and processed by an interface such as TeacherMatic; yet while this or other GPTs might create a standard twelve-week lesson plan, they could also problematically overlook much of the classroom environment, knowledge of students' needs and preferences, the diversity of students in the group, targets, and other variations that are unexpected, hard to predict or control. Teachers noted this in their open answers: “Often it is sold as reducing your workload but I'm not sure. Often the quality or robustness of the product it gives you requires more work to make it effective. I worry the impact it will have on student teachers and the lessons they will lose in their early career as they use AI.” Returning to the SAMR model mentioned in the earlier literature review, it is important to note the modification element; here it means something different.
In order to stay as constituent central parts of the teaching and learning experience, as endorsed by [24], teachers will need to be vigilant about ‘quality’ and be prepared to modify GPT-generated resources or lesson plans that have been created to ensure some alignment with their students and their curriculum content [25,26].
It may alternatively be argued that early career teachers would benefit from the agency of undertaking the ‘grunt work’ involved in planning a scheme of work, giving them closer oversight of the scaffolding of the subject and its curriculum, the nuances of lessons, and a personalised understanding of their students' learning needs that an interface, generating generalised responses, may not be able to provide. However, many teachers recognized that the first results were never appropriate, and there is evidently skill in evaluating the resulting resources: “It’s a great tool to take the initial edge of bulk admin tasks. I find it easier being the editor and creator than the doer (thats the Ai’s job).”

Choice Architecture and GPTs

Choice architecture is apparent in the ways the platforms are used, with the most common uses being to generate resources, with powerpoints and quizzes cited as part of that. Teachers also reported using GPTs to develop lesson plans. Most uses were for teaching and learning resource creation, though creating these is unlikely to be the source of teacher workload that leads to fatigue and burnout. AI guides responses towards certain normative pedagogical choices, for instance direct instruction and grouped work, which might stifle teachers' and students' creativity and innovation. This determinism means that innovation according to individual needs may be stifled by the implementation, or prescriptivism, of the tools: for example, in courses requiring practical elements, including experiential learning, or where teachers want to advocate for risk-taking and experimentation, allowing for mistakes and problem solving through so-called ‘inquiry-based’ approaches. These are potentially inhibited by the choice architecture of the tools, as noted by an English teacher in their response to the open question: “It [Teachermatic] has a good variety of resources that can be drawn from it; in terms of what it does, it tends to induce teachers to teach in certain ways by narrowing down the planning it suggests, and the resources almost always need some tweaking.” The systems appear to foreground certain functions in the user interface (e.g., template-based generators, “favourites”), lowering the cognitive and time cost for those tasks. By making content generation fast, visible, and low-risk, teachers may be “nudged” or compelled towards these features without necessarily exploring a full pedagogical range or alternative options. Over time, this knowledge could dissipate.
This was evident in responses to the open question, where some participants recognized that innovation is necessary, is guided by experience of prompting (“Understanding that the output is only ever as good as the input is crucial to using AI effectively.”), and does not always result in the common pedagogical designs typically produced by resource or planning creation (that is, PowerPoints or presentations and grouped work). “Please use as innovation for learning and teaching to enhance experience and engagement. I recently visited one of our English classes where the teacher felt that students do not understand what it was like before technology. We organised something through Nichesss where students had a discussion with Jane Austen to ask what it was like before technology - by using technology. A fantastic bridge, and great lesson.” This is borne out by other studies [27], where 77% of teachers relied on platforms such as Teachermatic to produce resources, which they cited as quizzes and PowerPoints. It is plausible to suggest that without rich teacher education, knowledge exchange and continued professional development, mediation between teachers and students, and between students and their peers, could diminish at the cost of producing staid resources. Alternatively, GPTs may provide ideas and examples for experiential learning, and it is possible that other technologies will emerge from this interregnum in development to supplement them. Virtual and immersive reality already exist, in the form of smart glasses, which have been shown to enhance mobile assessment in vocational education, though this relies on highly imaginative teachers [28].
Another concern about the ways that teachers are utilizing the platforms is what we may see as an early-stage novelty effect bias towards visible and tangible outputs. This is natural: teachers want products that would seemingly alleviate processes and bring efficiency in doing so. It is shown in lower uptake of lesson planning and administrative tasks, which may yield less visually gratifying results. What is highly regarded tends to be resource outputs that can be put to use (quizzes, rubrics, worksheets). Teachers are more likely to adopt features that yield immediate, concrete artefacts that feel like a “win.” The architecture of these AI tools emphasizes ready-to-use, tangible outputs, which subtly channels user engagement away from back-end or abstract tasks, where workload may be higher and more nuanced. As to the question of lowering workload, this is hard to discern, though on the surface it would appear that the infrequent uses reported show staff are not taking the opportunities seemingly available to offload work to GPTs. Again, this is sometimes put down to resources and the refinement of them, as one Maths teacher notes: “I'm not sure if it's usefulness is lacking due to my subject. The quizzes it produces are simply adequate and more well though tout resources, particularly those inline with the principals of mastery, are ready made elsewhere on specialist sites, of which there are an abundance.” This modification, entailing extra work, is echoed by others, such as this Counselling teacher: “I find TeacherMatic extremely useful for giving me ideas and provides me with an advance in creating resources and assessment I need when preparing my lessons, more so when I have limited time. 
Another effective function is the Rubric, however just lately I have had to make significant amendments to make it sufficient.” Another Maths teacher was critical of this particular platform: “Doesn't seem to work effectively with Maths - was looking to see whether it could generate examples with or without solutions but only seemed able to draw in definitions. Could generate problems to an extent, but incomplete without graphical elements (e.g. 'use the graph to define' without a graph).” These conflicting views seem to point to the need for a better understanding of the affordances that can be gained from better prompt engineering, harnessing the tools to get them to work for users’ specific needs. This, however, also depends on the development capacity of the tools to capture more nuance and, to another extent, improvements in teacher education, such that: “the end result is only ever as good as the prompts you put into whichever AI generator you are using.” Elsewhere, CPD workshops around these issues have been developed that yield positive results: becoming critically informed about AI, mapping the curriculum to identify spaces where it can be integrated, and exchanging and sharing knowledge in participatory sessions on the ways in which GPTs can enhance teaching and assessment [29]. These affordances will potentially lead to time-saving and reduced workload if more advanced practitioners can support this knowledge with their experience.
Yet others in the survey reported categorically on its time-saving potential, as shown by a Functional Skills and English teacher: “Teachermatic has made 2 hour jobs 20 minute jobs; now I can use my time for more student facing work.” However, this is in the context of one tool being licensed across a college; others find having to explore the full range of GPTs available to be time-consuming in itself: “There is so much and its hard to know what is best to use from a security, cost and output point of view. Significant time has to be spent exploring different tools.” This again might result in a choice architecture where users go to the centralized technologies available from Big Tech, since they are most prominent and available (this statement came from a Digital Innovation coach who uses Co-Pilot most commonly). Finally, another statement, from an Arts lecturer, is distinctly more negative about the impact of these tools, pointing to issues prevalent in working systems rather than technologies: “They are useful in the current teaching environment. But they shouldn't have to be if the workload was balanced correctly. If we continue to have to use it will it de-skill teachers. Will we run the risk of lesson be created by AI and the teacher not knowing or understanding how or if it meets the needs of learners.”

5. Conclusions

The study presents the findings from a literature review around vocational education and a survey of how AI is being used by vocational teachers in the English college sector. It identifies potential offloading of work to GPT systems, yet it is questionable whether this usually results in automating the kind of work teachers wish to relinquish, since it mostly involves the creative tasks that give them oversight of lesson and module plans. We argue that manual creation of these resources is often necessary to teachers’ professional development overall, but that GPT tools hold clear value for teachers with more innovative flair who can apply them to better uses. The use of the Pareto Principle, while original, was inconclusive, and further research would have to attempt a clear quantification of the time allocated to GPTs as time-saving devices. This is likely a challenging study to undertake, since so much of a teacher’s work is unseen and hidden, unpredictable, given to serendipity or happenstance, and involves human interactions at micro levels, which GPTs, at least at this interregnum in their evolution, will never be able to automate or accommodate. This is something we endorse, as we strive to argue that education and learning are human-crafted experiences, accentuated by technologies but not necessarily determined by them.

Supplementary Materials

The following supporting information can be downloaded at: Preprints.org.

Funding

This research received no external funding.


Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VE Vocational Education
GPT Generative Pre-Trained Transformers
AIE Artificial Intelligence in Education
CPD Continued Professional Development

References

  1. Liu, Y., He, H., Han, T., Zhang, X., Liu, M., Tian, J., Zhang, Y., Wang, J., Gao, X., Zhong, T. and Pan, Y. Understanding LLMs: A comprehensive overview from training to inference. Neurocomputing 2025, Volume 620. [CrossRef]
  2. Sahlgren, O. The politics and reciprocal (re)configuration of accountability and fairness in data-driven education. Learning, Media and Technology 2023, Volume 48, pp. 95-108. [CrossRef]
  3. Cukurova, M. The interplay of learning, analytics and artificial intelligence in education: A vision for hybrid intelligence. British Journal of Educational Technology 2025, Volume 56, pp. 469-488. [CrossRef]
  4. Denecke, K., Glauser, R. and Reichenpfader, D. Assessing the potential and risks of AI-based tools in higher education: Results from an eSurvey and SWOT analysis. Trends in Higher Education 2023, pp. 667-688. [CrossRef]
  5. Niloy, A.C., Hafiz, R., Hossain, B.M.T., Gulmeher, F., Sultana, N., Islam, K.F., Bushra, F., Islam, S., Hoque, S.I., Rahman, M.A. and Kabir, S. AI chatbots: A disguised enemy for academic integrity? International Journal of Educational Research Open 2024, Volume 7. [CrossRef]
  6. Scott, H., Iredale, A. and Harrison, B. FELTAG in rearview: FE from the past to the future through plague times. In Digital Learning in Higher Education, 1st ed.; Smith, M., Traxler, J., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2022; pp. 24-36. [CrossRef]
  7. Hobley, J. ‘Here’s the iPad’. The BTEC philosophy: how not to teach science to vocational students. Research in Post-compulsory Education 2016, 21(4), pp. 434-446. [CrossRef]
  8. Mishra, P. and Koehler, M.J. Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teachers College Record 2006, Volume 108(6), pp. 1017-1054. [CrossRef]
  9. Song, J., Zhong, Z., He, C., Lu, H., Han, X., Xiong, X. and Wang, Y. Definitions and a Review of Relevant Theories, Frameworks, and Approaches. In Handbook of Technical and Vocational Teacher Professional Development in the Digital Age, 1st ed.; Han, X., Zhou, Q., Li, M., Wang, Y., Eds.; Springer: Singapore, 2023; pp. 17-39. [CrossRef]
  10. Cattaneo, A.A. and Antonietti, C. The role of perceived digital competence and perceived usefulness in technology use across different teaching profiles. Vocations and Learning 18(1), 5. [CrossRef]
  11. Marangunić, N. and Granić, A. Technology acceptance model: a literature review from 1986 to 2013. Universal Access in the Information Society 2015, Volume 14, pp. 81-95. [CrossRef]
  12. Blundell, C.N., Mukherjee, M. and Nykvist, S. A scoping review of the application of the SAMR model in research. Computers and Education Open 2022, Volume 3, 100093. [CrossRef]
  13. Hamilton, E.R., Rosenberg, J.M. and Akcaoglu, M. The substitution, augmentation, modification, redefinition (SAMR) model: A critical review and suggestions for its use. TechTrends 2016, 60, pp. 433-441. [CrossRef]
  14. Holmes, W. and Porayska-Pomsta, K. The Ethics of Artificial Intelligence in Education, 1st ed.; Routledge: London, UK, 2023; pp. 621-653.
  15. Perrotta, C., Gulson, K.N., Williamson, B. and Witzenberger, K. Automation, APIs and the distributed labour of platform pedagogies in Google Classroom. Critical Studies in Education 2021, Volume 62(1), pp. 97-113. [CrossRef]
  16. Avis, J. and Reynolds, C. The digitalization of work and social justice: Reflections on the labour process of English further education teachers. In The Impact of Digitalization in the Workplace: An Educational View, 1st ed.; Harteis, C., Ed.; Springer International Publishing, 2017; pp. 213-229. [CrossRef]
  17. Zaidi, A., Caisl, J., Puts, E. and Howat, C. Initial Teacher Education in FE - 2015/16. Education & Training Foundation, 2018.
  18. Generative AI in Education. UK Government policy paper, 2025. Available online: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education?
  19. Placani, A. Anthropomorphism in AI: hype and fallacy. AI and Ethics 2024, Volume 4(3), pp. 691-698. [CrossRef]
  20. Turvey, K. England’s essentialist teacher education policy frameworks as double texts. In Teacher Education in Crisis: The State, the Market and the Universities in England, 1st ed.; Ellis, V., Ed.; Bloomsbury Collections: London, UK, 2023; pp. 117-132.
  21. Rahm, L. and Rahm-Skågeby, J. Imaginaries and problematisations: A heuristic lens in the age of artificial intelligence in education. British Journal of Educational Technology 2023, Volume 54(5), pp. 1147-1159. [CrossRef]
  22. Heimans, S., Biesta, G., Takayama, K. and Kettle, M. ChatGPT, subjectification, and the purposes and politics of teacher education and its scholarship. Asia-Pacific Journal of Teacher Education 2023, 51(2), pp. 105-112. [CrossRef]
  23. Scott, H. ‘Reject All’: Data, Drift and Digital Vigilance. In Human Data Interaction, Disadvantage and Skills in the Community; Hayes, S., Jopling, M., Connor, S., Johnson, M., Eds.; Postdigital Science and Education; Springer: Cham, 2023. [CrossRef]
  24. Lammert, C., DeJulio, S., Grote-Garcia, S. and Fraga, L.M. Better than nothing? An analysis of AI-generated lesson plans using the Universal Design for Learning & Transition Frameworks. The Clearing House: A Journal of Educational Strategies, Issues and Ideas 2024, 97(5), pp. 168-175. [CrossRef]
  25. Van den Berg, G. and du Plessis, E. ChatGPT and Generative AI: possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Education Sciences 2023, 13(10), 998. [CrossRef]
  26. Kuzu, T.E., Irion, T. and Bay, W. AI-based task development in teacher education: an empirical study on using ChatGPT to create complex multilingual tasks in the context of primary education. Education and Information Technologies 2025, pp. 1-35. [CrossRef]
  27. Birtill, M. and Birtill, P. Implementation and evaluation of genAI-aided tools in a UK Further Education college. In Artificial Intelligence Applications in Higher Education, 1st ed.; Crompton, H., Burke, D., Eds.; Routledge: Oxford, 2024; pp. 195-214.
  28. Grace, D. and Haddock, B. Innovative Assessment Using Smart Glasses in Further Education: HDI Considerations. In Human Data Interaction, Disadvantage and Skills in the Community: Enabling Cross-Sector Environments for Postdigital Inclusion; Hayes, S., Jopling, M., Connor, S., Johnson, M., Eds.; Springer: Cham, 2023; pp. 93-110. [CrossRef]
  29. Scott, H. Theoretically rich design thinking: blended approaches for educational technology workshops. International Journal of Management and Applied Research 2023, 10(2), pp. 298-313. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.