Preprint
Article

This version is not peer-reviewed.

Beyond Digital Adoption: Towards a Human-Centred Pedagogy for Zambian Higher Education

Submitted: 31 March 2026
Posted: 01 April 2026


Abstract
The integration of digital technologies into higher education is reshaping pedagogical practices globally, yet many institutions in sub-Saharan Africa adopt these tools without sufficient contextual adaptation. In Zambia, universities face the compounded challenge of limited digital infrastructure, uneven connectivity, and institutional policy frameworks that lag behind the pace of technological change. This study examines how Zambian higher education can advance beyond superficial digital adoption towards a pedagogy that is at once technologically engaged and fundamentally human-centred. Drawing on qualitative survey data collected from 84 university students across multiple institutions between February and April 2025, and employing reflexive thematic analysis, we identify four interconnected themes: enthusiasm for digital tools tempered by anxieties over cognitive dependency; the structural gap between student readiness and institutional guidance; the transformative potential of collaborative and problem-based learning; and the imperative for contextually responsive assessment reform. We propose a three-pillar framework grounded in critical digital literacy, collaborative learning ecosystems, and industry-aligned problem solving. This framework aligns with Zambia’s Eighth National Development Plan and its emerging AI literacy initiatives, offering a replicable model for other resource-constrained higher education contexts in Africa.
Subject: Social Sciences - Education

1. Introduction

Higher education systems across sub-Saharan Africa are navigating a period of profound technological flux. The arrival of artificial intelligence (AI) tools, learning management systems (LMS), and mobile-enabled instructional platforms has generated considerable optimism about the democratisation of knowledge. For Zambian universities, institutions that carry the dual mandate of widening access and producing globally competitive graduates, this moment presents both a structural opportunity and a conceptual challenge. The central question this study addresses is not whether digital technologies should be integrated into Zambian higher education but whether the current mode of integration is fit for purpose.
There is a well-documented tendency across higher education contexts globally to treat technology adoption as an end in itself. Selwyn (2021) has argued convincingly that educational technologies are frequently layered onto existing pedagogical structures without disturbing the underlying assumptions that govern teaching, learning, and assessment. This pattern is not unique to Africa, but it carries particular consequences in settings where resource constraints are acute, where access to high-bandwidth internet remains uneven, and where institutional frameworks for guiding AI use are still nascent. When technology replicates lecture-centric pedagogy in digital form, it accomplishes little beyond repackaging the problem.
Zambia’s higher education landscape provides a revealing case study of this tension. Nationally, internet penetration reached 64 subscriptions per 100 inhabitants by mid-2025, up from 63 per 100 in mid-2024, and the government secured a USD 120 million Digital Zambia Acceleration Project grant through the World Bank in March 2025 to expand digital infrastructure (ZICTA, 2025). These developments signal a growing state commitment to digital transformation. Yet infrastructure investment alone cannot transform pedagogy. As Aborisade (2025) observes, digital literacy in African higher education must extend beyond instrumental competence in using tools to encompass critical reflection on how those tools shape knowledge production, reproduce inequalities, and alter the cognitive demands placed on learners. Workplace skills are evolving rapidly, and graduates require attributes that make them fit for current purposes and adaptable to evolving contexts. To cultivate these attributes, higher education institutions (HEIs) must ensure that their teachers’ digital skill sets keep pace with the realities of contemporary professional practice.
The stakes of getting this right are considerable. Zambia’s Vision 2030 and the Eighth National Development Plan (2022-2026) explicitly frame higher education as an engine of economic diversification and skills development (GRZ, 2021). Producing graduates who are adaptive, analytically capable, and ethically grounded in their use of technology is therefore not merely a pedagogical aspiration but a policy imperative. Universities that produce graduates skilled at operating AI tools but ill-equipped to interrogate those tools or apply them creatively to local problems will fail to deliver the kind of human capital that Zambia’s development trajectory requires.
Existing scholarship on AI in African higher education has made important contributions in mapping infrastructure deficits, documenting student attitudes, and cataloguing the risks of academic integrity violations (Holmes et al., 2022; Ajani et al., 2025). What the literature has addressed less thoroughly is the pedagogical architecture within which AI tools should be embedded. This study contributes to that gap. We argue for a three-pillar framework built on critical digital literacy, collaborative learning ecosystems, and problem-based engagement with local industries. This framework is not merely theoretical. It is grounded in qualitative evidence from Zambian students and calibrated to the regulatory and resource context of Zambian higher education.
The study is structured as follows. The Methods section describes the research design, participant sampling, data collection procedures, and the reflexive thematic analysis framework applied. The Results section presents four empirically grounded themes drawn directly from student perspectives. The Discussion section interprets these findings through relevant theoretical lenses and develops the three-pillar framework with practical implications for curriculum design, institutional policy, and assessment reform. The Conclusion summarises the study’s scholarly contribution and offers directions for future inquiry.

1.1. Research Questions

The study was organised around one overarching question and three sub-questions.
Overarching question: How do university students in Zambia experience and make sense of the integration of artificial intelligence into their academic lives?
Sub-question 1: How do students describe their use of AI-powered tools for learning and completing academic tasks?
Sub-question 2: How do students perceive AI’s influence on their learning processes, critical thinking, and academic engagement?
Sub-question 3: What tensions and paradoxes do students identify in their relationship with AI as a learning tool?

1.2. Significance of the Study

This study makes three contributions. First, it generates empirical evidence on student experiences of AI integration from a sub-Saharan African university context, a geography that remains underrepresented in the global literature on educational technology. Second, it moves beyond descriptive diagnosis to propose a theoretically grounded and contextually adapted pedagogical framework. Third, it offers policy-relevant insights aligned with Zambia’s national development agenda and the regulatory mandate of the Higher Education Authority (HEA).

2. Methods

2.1. Research Design

This study adopted a qualitative research design grounded in a basic interpretive paradigm (Merriam, 2002). Qualitative inquiry was appropriate for three reasons. First, the study sought to explore the meanings that students attach to AI technologies and the ways those technologies reshape their experience of learning. Such questions are by nature interpretive and require rich, descriptive data that capture participant perspectives in their own words (Merriam & Tisdell, 2016). Second, AI integration in Zambian higher education is an emergent and rapidly evolving phenomenon, rendering a priori hypothesis testing less productive than open-ended exploration capable of capturing nuance, contradiction, and unexpected insight. Third, the open-ended survey format generated textual data well suited to qualitative analysis, revealing not only what students think but how they articulate their understandings, concerns, and aspirations.

2.2. Research Context

Data were collected from students enrolled at universities in Zambia during a period of significant institutional transition. At the time of data collection, Zambian universities were grappling with the implications of AI tools for teaching, learning, and assessment, yet institutional policies and pedagogical frameworks remained at an early stage of development. The Higher Education Authority had not yet issued binding guidelines on AI use in academic work, and individual institutions varied considerably in the degree to which AI integration featured in curriculum design or faculty development. This contextual uncertainty made qualitative exploration particularly valuable for capturing student voices at a moment of genuine pedagogical transition.

2.3. Participant Selection

Participants were selected using purposive sampling, a strategy that involves intentionally identifying individuals who can provide rich and relevant information about the phenomenon under study (Patton, 2015). Sampling was designed to capture diversity across three dimensions: disciplinary background, level of study, and gender. Inclusion criteria required that participants be currently enrolled as undergraduate or postgraduate students at a Zambian university, willing to share their perspectives on AI in education, and accessible via the online data collection channels used.
The final sample comprised 84 respondents who completed the open-ended survey between February and April 2025. This sample size is consistent with established practice in qualitative survey-based research, where the aim is not statistical representativeness but the generation of rich, meaningful insights from participants who can speak credibly to the phenomenon of interest (Sandelowski, 1995). Participants spanned disciplines including Education, Engineering, Health Sciences, Business, Natural Sciences, Social Sciences, and Computer Science. Both male and female students were represented, and the sample ranged from first-year undergraduates to postgraduate students.

2.4. Data Collection

Data were collected using a semi-structured online questionnaire administered through Google Forms. The instrument was developed specifically for this study and drew on existing literature on AI in higher education. It comprised two sections. The first collected basic demographic information, including gender, level of study, and field of discipline. The second contained open-ended questions inviting participants to describe their experiences in their own words.
Key questions included: descriptions of AI tools used and the nature of those experiences; perceptions of how AI had influenced study habits, learning processes, and academic engagement; reflections on the benefits and drawbacks of AI in their studies; and any additional observations about the impact of AI among university students. The instrument was piloted with five students to assess comprehensibility, and minor wording adjustments were made following that pilot.
The survey link was distributed through multiple channels: via faculty and administrative staff at Zambian universities, through student WhatsApp groups across disciplines and institutions, and through discipline-specific mailing lists and academic forums. An introductory message explained the study’s purpose, confirmed anonymity, emphasised voluntary participation, and provided contact information for queries. Data collection occurred over three months from February to April 2025.

2.5. Data Analysis

Data were analysed using reflexive thematic analysis, following the six-phase framework developed by Braun and Clarke (2021). This approach was selected for its theoretical flexibility and its established suitability for capturing the complexity of participant perspectives in qualitative social research. The analysis was iterative and recursive, moving repeatedly between data, codes, and emerging themes.
In Phase 1, all open-ended responses were exported from Google Forms into a single document. Both researchers read the dataset multiple times to develop close familiarity with the material, recording initial impressions and recurring ideas in research memos. In Phase 2, the dataset was coded line by line, identifying discrete units of meaning. Initial codes included concepts such as “AI as efficiency tool,” “cognitive dependency,” “institutional absence,” “critical thinking erosion,” “AI as unavoidable future,” and “assessment anxiety.” In Phase 3, codes were grouped and regrouped based on conceptual affinity, with visual mapping used to explore relationships between candidate themes.
In Phase 4, candidate themes were reviewed at two levels: internally, to ensure that all codes within a theme shared a coherent meaning, and against the full dataset, to confirm that themes accurately captured the breadth of participant perspectives. In Phase 5, each theme was defined and named, with attention to tensions and paradoxes within themes. In Phase 6, the thematic structure was woven into the narrative presented in the Results section. Throughout the process, reflexive memos documented analytical decisions and assumptions, and peer debriefing sessions between the two researchers were used to surface and challenge interpretive choices.

2.6. Trustworthiness and Rigour

Trustworthiness was addressed through the criteria articulated by Lincoln and Guba (1985) and elaborated by Tracy (2010). Credibility was supported through rich participant quotations, prolonged engagement with the data, and regular peer debriefing. Transferability was facilitated through a thick description of the research context, participants, and analytic process. Dependability was maintained through an audit trail documenting decisions at each stage of sampling, data collection, coding, and theme development. Confirmability was demonstrated through grounding in participant data rather than researcher preconception, evidenced by the use of direct quotations throughout the findings. Authenticity was addressed by attending to both dominant patterns and minority voices, ensuring that the complexity of student experience was not flattened into reductive generalisations.

2.7. Ethical Considerations

This study adhered to fundamental ethical principles for studies involving human participants. Informed consent was obtained through the introductory page of the Google Form, which described the study’s purpose, voluntary nature, and confidentiality protections. Participants were advised that completing the survey constituted consent, consistent with accepted practice for online research (British Educational Research Association, 2018). No personally identifying information was collected, including names, student numbers, or email addresses. All data were stored on password-protected devices accessible only to the research team, and participants are referenced in the findings by general descriptors only. Voluntary participation was emphasised by notifying respondents that they could withdraw at any time by closing the browser window, with no consequence.

3. Results

Analysis of the 84 open-ended survey responses generated four interconnected themes. These themes are presented below with supporting participant quotations selected to illustrate the range and texture of student perspectives. As is standard in reflexive thematic analysis, quotations are presented in participants’ own words and referenced by discipline and level of study to provide interpretive context without compromising anonymity (Braun & Clarke, 2021).

3.1. Theme One: Enthusiasm Bounded by Anxiety

The most consistent pattern across the dataset was enthusiasm for AI tools accompanied by significant anxiety about their cognitive and ethical implications. Approximately 70 per cent of respondents reported familiarity with AI-powered tools such as ChatGPT, Grammarly, and Google Gemini, and about 65 per cent expressed the view that AI had the potential to meaningfully enhance their learning. Respondents described using these tools for summarising reading materials, generating essay outlines, checking grammar and style, and obtaining rapid explanations of unfamiliar concepts.
Yet this enthusiasm was consistently qualified. A notable proportion of respondents articulated concern that AI use was beginning to displace rather than support their own cognitive effort. One undergraduate student in Education observed:
“AI makes learning easier, but it also makes us lazy. We need to learn how to think, not just how to click.” (Undergraduate, Education)
A postgraduate student in Business Administration expressed a related concern with greater specificity:
“I used ChatGPT for my assignment and got a good mark, but later I realised I could not explain half of what was written. That scared me. Am I learning or just submitting?” (Postgraduate, Business Administration)
These responses reflect what Jose et al. (2025) describe as the cognitive paradox of AI in education: the same tools that enhance efficiency and personalisation may, when used without pedagogical scaffolding, reduce the cognitive engagement necessary for deep learning and long-term retention. The pattern was consistent across disciplines, though students in Computer Science and Natural Sciences were more likely to describe AI as a legitimate and even necessary part of their professional formation.

3.2. Theme Two: The Structural Gap Between Student Readiness and Institutional Support

A second prominent theme concerned the absence of institutional guidance. Only 40 per cent of respondents reported that their university provided adequate direction on how to integrate AI tools responsibly and productively into their academic work. The remaining majority described navigating AI independently, without clear criteria for appropriate use, ethical boundaries, or pedagogical guidance.
One student enrolled in Health Sciences captured the prevailing sentiment clearly:
“Our lecturers have not said anything about AI. Some of them do not seem to know it exists. I just use it and hope for the best.” (Undergraduate, Health Sciences)
A student in Engineering identified a specific consequence of this absence:
“There is no policy. One lecturer allows it, another would fail you for it. We are all guessing.” (Undergraduate, Engineering)
This institutional gap is not unique to Zambia. Holmes et al. (2022) document a global pattern in which institutional AI integration strategies lag considerably behind the pace of student adoption, creating ethical and pedagogical ambiguity. In Zambia’s case, the gap is compounded by limited faculty training in digital pedagogy and the absence of a national framework from the Higher Education Authority governing AI use in academic contexts. Chiwoya et al. (2025) similarly identify this policy deficit as a structural impediment to effective digital integration in Zambian higher education institutions.

3.3. Theme Three: Collaborative and Industry-Oriented Learning as the Preferred Alternative

A third theme emerged from students’ articulation of what meaningful digital learning should look like. While many respondents were critical of surface-level AI use, a substantial number described experiences of collaboration, peer learning, and problem-solving as the contexts in which digital tools felt most educationally meaningful. Students described using WhatsApp groups, Google Classroom, and Zoom to work on group projects, peer review each other’s work, and navigate practical problems in real time.
A student in Social Sciences described this dynamic:
“When we work together on a case study and use AI to find information, then debate it, that feels like real learning. AI becomes a resource, not a shortcut.” (Undergraduate, Social Sciences)
Several respondents also expressed a desire for a closer connection between academic work and local industry challenges. A Natural Sciences student offered the following observation:
“I want to use AI to solve problems that matter here. Not just write essays. Give us real problems from farming or health and let us use technology to solve them.” (Undergraduate, Natural Sciences)
This finding converges with the broader literature on problem-based and project-based learning, which demonstrates consistently stronger outcomes for student engagement, skill transfer, and critical thinking when learning tasks are anchored in authentic, real-world contexts (Wiggins & McTighe, 2005; Nyongesa & Westhuizen, 2025). The preference students expressed for collaborative problem-solving over individual AI-assisted task completion points to a constructivist orientation that universities can build upon pedagogically.

3.4. Theme Four: Assessment as the Critical Leverage Point

The fourth theme concerned assessment. A recurring observation across disciplines was that existing assessment formats, particularly closed-book examinations and individually submitted essays, created perverse incentives for unsanctioned AI use. Respondents described a rational calculation: if the goal is to submit a text that receives a passing mark, and AI can produce such a text reliably and quickly, then using AI is instrumentally logical regardless of whether learning occurs.
One respondent articulated this calculus with unusual candour:
“The exam asks me to memorise things I can Google in five seconds. AI can write the essay better than me. Until the exam changes, I will keep using AI because it works.” (Undergraduate, Computer Science)
Another respondent proposed a different vision of what assessment could accomplish:
“If lecturers gave us real projects where we had to show how we solved something, AI would be a tool, not a cheat. Right now it is a cheat because we are being asked the wrong questions.” (Postgraduate, Education)
These observations suggest that the academic integrity challenge posed by AI is fundamentally a question of assessment design. Bearman et al. (2020) make a parallel argument: AI has not so much created the problem of assessment misalignment as it has rendered that misalignment impossible to ignore. Authentic assessments that require students to demonstrate process, reflection, and contextual judgment are considerably more resistant to AI shortcutting than recall-based examinations, and they develop precisely the competencies that employers and policymakers in Zambia identify as most needed.
Infrastructure barriers also surfaced across all four themes as a cross-cutting constraint. Approximately 30 per cent of respondents cited poor internet connectivity as a significant barrier to digital learning, and several noted the financial burden of mobile data costs. These structural realities form the material backdrop against which all pedagogical ambitions must be calibrated.

4. Discussion

The four themes emerging from this study are not independent observations. They constitute a coherent diagnostic picture: students are engaging enthusiastically and, to a significant degree, unsupervised with AI tools that they recognise as both genuinely useful and potentially damaging to their intellectual development. Institutions have not yet produced the policy, pedagogical, or assessment frameworks necessary to channel this engagement productively. What is needed is not restriction but redesign. The following discussion interprets these findings through relevant theoretical lenses and develops a three-pillar framework for human-centred pedagogy in Zambian higher education.

4.1. Critical Digital Literacy as a Foundational Commitment

The anxiety about cognitive dependency documented in Theme One is not simply a student concern to be managed; it is a signal of genuine pedagogical opportunity. Students who articulate worries about over-reliance on AI are already engaged in the metacognitive reflection that critical digital literacy requires. The task for educators is to formalise and extend that reflection rather than leave it to occur incidentally.
Critical digital literacy, as articulated by Aborisade (2025), involves the ability not only to use digital tools but to interrogate their ethical, social, and epistemic dimensions. This means understanding how AI systems generate content, what assumptions are embedded in training data, what kinds of errors or biases AI tends to produce, and what is lost when AI is substituted for human judgment. Curricula that incorporate assignments requiring students to evaluate the accuracy and limitations of AI-generated outputs, trace the sources and assumptions behind AI claims, and reflect on the ethical dimensions of algorithmic decision-making are building exactly these capacities.
The UNESCO-ICHEI initiative establishing an AI literacy and micro-certification centre at Mulungushi University represents a promising model for institutionalising this commitment in Zambia (UNESCO-ICHEI, 2025). The challenge is to mainstream similar competencies across institutions and disciplines rather than confining them to specialist programmes.

4.2. Collaborative Learning Ecosystems

Theme Three reveals that students already intuitively recognise the value of collaborative engagement with digital tools. The constructivist tradition in educational theory, associated with the work of Vygotsky and elaborated more recently by scholars of collaborative and social learning, holds that knowledge is produced through interaction, dialogue, and the joint navigation of complexity (Mercer, 2019). AI tools, on this account, are most productively integrated as resources within collaborative processes rather than as substitutes for individual cognitive effort.
Practically, this means designing learning activities that require students to use AI tools in group contexts where outputs are subject to peer scrutiny, debate, and refinement. Learning management systems such as Moodle provide infrastructure for this kind of collaborative engagement, hosting discussion forums, shared workspaces, and peer review functions that extend the intellectual community beyond the physical classroom (Ajani et al., 2025). Mobile platforms already widely used in Zambia, including WhatsApp, Telegram, and Google Classroom, can similarly support peer collaboration and mentoring at low cost (Nyongesa & Westhuizen, 2025).
The institutional infrastructure required to support collaborative digital learning is not solely technological. It requires faculty development, timetabling flexibility, and a pedagogical culture that values process and dialogue over product delivery. These are institutional commitments, not merely technical installations.

4.3. Problem-Based Engagement with Local Industries

The desire students expressed in Theme Three for learning anchored in local, real-world problems is consistent with a substantial body of evidence on the motivational and cognitive benefits of authentic, problem-based pedagogy (Wiggins & McTighe, 2005). It also aligns directly with Zambia’s policy agenda. The Eighth National Development Plan explicitly frames higher education as a vehicle for economic diversification and skills development, identifying agriculture, health, and digital services as priority sectors (GRZ, 2021). Universities that design project-based courses in partnership with local industry create learning environments in which AI tools serve a genuine instrumental purpose: supporting students in analysing real data, modelling scenarios, and communicating findings to non-academic audiences.
Concrete examples in the Zambian context might include AI-driven agricultural data analysis projects conducted in partnership with farming cooperatives, public health data interpretation initiatives undertaken with district health offices, or digital literacy programmes co-designed with local government bodies. Such projects embed AI use in a context of accountability and purpose, making it considerably harder for students to treat AI as a shortcut and considerably easier for them to recognise it as a tool requiring critical judgment.

4.4. Assessment Reform as the Structural Precondition

Theme Four identifies assessment as the critical leverage point, and this finding deserves particular emphasis. The pedagogical frameworks described above will produce limited change if assessment systems continue to reward outputs that AI can generate as effectively as students. Bearman et al. (2020) argue that authentic assessment, encompassing portfolios, oral examinations, reflective journals, and collaborative project evaluations, is not simply a response to AI-enabled academic dishonesty but a long-overdue alignment of assessment with the competencies that higher education is supposed to develop.
For Zambian universities, authentic assessment presents both a practical challenge and a structural opportunity. The practical challenge is that large class sizes, limited assessment moderation capacity, and inherited examination cultures make rapid change difficult. The structural opportunity is that reform motivated by AI integration can be framed not as a concession to technology but as an advancement towards the graduate attributes that industry and policy already demand: analytical reasoning, communication, collaborative problem-solving, and ethical judgment. Sambell et al. (2013) provide a robust framework for assessment reform oriented towards learning rather than sorting, and their principles translate readily to higher education contexts characterised by resource constraints and diverse student populations.

4.5. Equity as a Cross-Cutting Imperative

Any framework for digital pedagogy in Zambia must grapple explicitly with the digital divide. Approximately 30 per cent of respondents in this study cited poor connectivity as a barrier to digital engagement, and students in rural areas and from lower socioeconomic backgrounds are systematically disadvantaged by infrastructure deficits and the cost of mobile data (Mukosa, 2019). Poverty is the first obstacle to the spread of digital literacy in developing nations, and the cost of digital infrastructure and inadequate support for information technology have impeded the progress of education in most rural areas (Yu, 2024). Digital transformation that proceeds without deliberate equity provisions risks reproducing and amplifying existing inequalities rather than addressing them.
Institutional responses should include subsidised data provision for low-income students, offline-compatible digital resources, and investment in on-campus connectivity infrastructure. Community ICT hubs and digital literacy programmes, of the kind supported through UNESCO and other multilateral initiatives, offer scalable models for extending digital access beyond the university campus. The Government of Zambia’s Digital Zambia Acceleration Project, with its focus on expanding network coverage in remote areas, provides national policy backing for these institutional efforts (ZICTA, 2025).

5. Conclusion

This study began with a question about whether Zambian higher education is moving beyond superficial digital adoption. The evidence gathered from 84 students across multiple institutions suggests a nuanced answer. Students are already deep in the world of AI; they are using these tools, benefiting from them, worrying about them, and calling, with considerable lucidity, for the kinds of pedagogical environments in which technology serves human development rather than displacing it. What is lagging is not student readiness but institutional response.
The three-pillar framework proposed in this study, comprising critical digital literacy, collaborative learning ecosystems, and industry-aligned problem solving, offers a coherent direction for that institutional response. It is not a framework that demands unprecedented resources. It demands a reorientation of existing resources: curriculum time directed towards critical reflection on technology; assessment designs that require demonstration of understanding rather than production of text; industry partnerships that give digital learning a genuine purpose; and faculty development that equips lecturers to be navigators of digital complexity rather than casualties of it.
As one respondent in this study observed: “AI is here. The question is not whether we use it, but how we use it to become better thinkers, not just better users.” That formulation captures the essence of what a human-centred pedagogy must achieve. The graduates that Zambian universities produce in the coming decade will encounter AI not as an educational novelty but as an ambient condition of professional life. Whether they will be shaped by it or capable of shaping it in turn depends substantially on the pedagogical choices that their universities make now.

5.1. Scholarly Contribution

This study contributes empirical evidence on student experiences of AI integration from a sub-Saharan African context, a geography underrepresented in the global educational technology literature. It advances a contextually grounded pedagogical framework that goes beyond diagnosis to offer actionable structural guidance. It also demonstrates the value of qualitative methods for capturing the affective and ethical dimensions of AI integration that quantitative studies tend to underemphasise.

5.2. Limitations

Several limitations should be acknowledged. The online distribution method means that students without reliable internet access, or not connected to relevant networks, were systematically excluded, potentially omitting the perspectives of those most affected by the infrastructure gaps identified in the findings. The study relies on self-report data, which may be influenced by social desirability or memory limitations. The cross-sectional design captures perspectives at a single moment in a rapidly evolving landscape. The use of a single data collection method, without follow-up interviews or focus groups, limits the interpretive depth that could be achieved. Finally, despite reflexive efforts, researchers’ assumptions inevitably shape analytical choices.

5.3. Recommendations

Based on the findings and discussion, we offer the following practical recommendations. The Higher Education Authority should develop and disseminate binding guidelines on AI use in academic work across all Zambian universities. Individual universities should establish faculty development programmes that equip lecturers to design AI-integrated learning activities and AI-resistant assessments. Curriculum committees should review assessment formats across disciplines with the explicit aim of increasing authentic and process-based evaluation. Institutions should explore partnerships with local industry to anchor project-based learning in authentic community problems. Governments and universities should invest in subsidised data access and on-campus connectivity infrastructure to ensure that digital pedagogical approaches do not widen existing equity gaps.

5.4. Future Research Directions

Future research should follow a longitudinal cohort of students to examine how AI use patterns and attitudes evolve across degree programmes. Comparative studies across sub-Saharan African higher education contexts would help distinguish features of the Zambian experience from broader regional patterns. Investigation of faculty perspectives on AI integration, which this study did not address, represents an important and largely unexplored dimension. Finally, intervention studies testing the pedagogical effects of the three-pillar framework proposed here would provide the empirical validation needed to move from conceptual proposal to evidence-based policy.

Disclosure Statement

The authors declare no conflict of interest. This study received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Declaration of Use of Generative AI

During the preparation of this work, the authors used Grammarly for grammar checks and style suggestions. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of this study.

References

  1. Aborisade, P. A., F. O. Olubode Sawe, T. Fola Adebayo, A. Makinde, B. R. Gbenga Owoyemi, B. Akeredolu Ale, B. Adelabu, and B. K. Alese. 2025. Digital Literacy Levels: Realities and Voices of African Teachers in Higher Education Institutions. The International Journal of Learning in Higher Education 32, 1: 199–223. [Google Scholar] [CrossRef]
  2. Ajani, O., M. Maphalala, and O. Adigun. 2025. Bridging the digital divide: Exploring undergraduate students’ experiences with learning management systems in rural South African universities. Frontiers in Education 10: 1674885. [Google Scholar] [CrossRef]
  3. Ashford-Rowe, K., J. Herrington, and C. Brown. 2014. Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education 39, 2: 205–222. [Google Scholar] [CrossRef]
  4. Bearman, Margaret, Phillip Dawson, Rola Ajjawi, Joanna Tai, and David Boud. 2020. Re-imagining University Assessment in a Digital World. Deakin University. Available online: https://hdl.handle.net/10536/DRO/DU:30140455.
  5. Braun, V., and V. Clarke. 2021. Thematic Analysis: A practical guide. SAGE. [Google Scholar]
  6. British Educational Research Association (BERA). 2018. Ethical guidelines for educational research, 4th ed. BERA: Available online: https://www.bera.ac.uk/publication/ethical-guidelines-for-educational-research-2018.
  7. Chansa, C. 2023. Bridging the digital divide in Zambia: A call to action for education equity. Medium. [Google Scholar]
  8. Chiwoya, A., H. Daka, and M. L. Mulenga-Hagane. 2025. Enhancing online learning in higher education institutions in Zambia: An evaluation of the measures put in place by the Government of Zambia, Internet Service Providers and higher learning institutions. International Journal of Research and Innovation in Social Science 9, 3s: 1675–1689. [Google Scholar] [CrossRef]
  9. Government of the Republic of Zambia (GRZ). 2021. Eighth National Development Plan 2022-2026. Government Printers. [Google Scholar]
  10. Holmes, W., K. Porayska-Pomsta, K. Holstein, E. Sutherland, T. Baker, S. B. Shum, and O. C. Santos. 2022. Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education 32, 2: 504–526. [Google Scholar] [CrossRef]
  11. Jose, B., J. Cherian, A. M. Verghis, S. M. Varghese, S. Mumthas, and S. Joseph. 2025. The cognitive paradox of AI in education: Between enhancement and erosion. Frontiers in Psychology 16: 1550621. [Google Scholar] [CrossRef] [PubMed]
  12. Lincoln, Y. S., and E. G. Guba. 1985. Naturalistic Inquiry. SAGE. [Google Scholar]
  13. Magasu, O., J. Lubbungu, L. Kamboni, E. Sakala, and B. Kapanda. 2022. Implementation of blended learning in higher learning institutions in Zambia: A case of Kwame Nkrumah University. European Journal of Education and Pedagogy 3, 3: 214–218. [Google Scholar] [CrossRef]
  14. Mercer, N. 2019. Language and the joint creation of knowledge: The selected works of Neil Mercer. Routledge. [Google Scholar]
  15. Merriam, S. B. 2002. Qualitative research in practice: Examples for discussion and analysis. Jossey-Bass. [Google Scholar]
  16. Merriam, S. B., and E. J. Tisdell. 2016. Qualitative research: A guide to design and implementation, 4th ed. Jossey-Bass. [Google Scholar]
  17. Mukosa, F., and B. Mweemba. 2019. The Digital Divide Hindering E-learning in Zambia. International Journal of Scientific Research and Engineering Development 2, 3. Available online: https://ijsred.com/volume2/issue3/IJSRED-V2I3P102.pdf.
  18. Nyongesa, W. J., and J. V. D. Westhuizen. 2025. The impact of digital teaching tools on student engagement and learning outcomes in higher education in Africa. International Journal of Innovative Research and Scientific Studies 8, 4: 264–280. [Google Scholar] [CrossRef]
  19. Patton, M. Q. 2015. Qualitative research and evaluation methods, 4th ed. SAGE. [Google Scholar]
  20. Sambell, K., L. McDowell, and C. Montgomery. 2012. Assessment for Learning in Higher Education, 1st ed. Routledge. [Google Scholar] [CrossRef]
  21. Sandelowski, M. 1995. Sample size in qualitative research. Research in Nursing and Health 18, 2: 179–183. [Google Scholar] [CrossRef] [PubMed]
  22. Selwyn, N. 2021. EdTech Inc.: Selling, automating and globalising education in the digital age. Routledge. [Google Scholar]
  23. Tracy, S. J. 2010. Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry 16, 10: 837–851. [Google Scholar] [CrossRef]
  24. UNESCO-ICHEI. 2025. Digital transformation of higher education in sub-Saharan Africa: Challenges and opportunities in advanced digital skills. International Centre for Higher Education Innovation under the Auspices of UNESCO. Available online: https://cn.ichei.org/en/news/information/882.html.
  25. Wiggins, G., and J. McTighe. 2005. Understanding by design, 2nd ed. ASCD. [Google Scholar]
  26. Yu, Y., D. Appiah, B. Zulu, and K. A. Adu-Poku. 2024. Integrating Rural Development, Education, and Management: Challenges and Strategies. Sustainability 16, 15: 6474. [Google Scholar] [CrossRef]
  27. ZICTA. 2025. Zambia Information and Communications Technology Authority: Market and infrastructure report Q2 2025. Available online: https://www.biometricupdate.com/202503/zambia/.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.