Submitted: 20 March 2026
Posted: 30 March 2026
Abstract
Keywords:
Introduction
- What is the current landscape of AI tool usage among higher education employees, and how does this compare with institutional policy awareness and governance?
- What are the primary risks, opportunities, and challenges associated with AI use in higher education work environments?
- How are institutions approaching AI-related workforce development, return on investment (ROI) measurement, and policy creation?
- What theoretical frameworks best explain the observed disconnect between AI adoption and institutional governance?
- What are the ethical, environmental, and relational dimensions of AI adoption in higher education?
- What recommendations can be offered for bridging the AI implementation gap in higher education?
Literature Review
Technology Adoption in Higher Education
Organizational Change and Governance in Higher Education
AI in Higher Education: Opportunities and Risks
Ethical Considerations in AI Adoption
The Role of Vendors and Technology Companies
Environmental Impact of AI
The Concept of the “Implementation Gap”
Methodology
Data Sources
Survey Instrument and Measures
- Strategies, policies, and guidelines: Awareness of institutional AI strategies, elements of work-related AI strategy, orientation of policies and guidelines, and workforce upskilling efforts.
- Risks, opportunities, and challenges: Perceptions of urgent risks, promising opportunities, and significant challenges associated with AI use.
- Use cases: Frequency and types of work-related AI tool use, access to AI tools, and use of tools not provided by the institution.
Respondent Demographics
Analytical Approach
Results
Summary of Key Findings
AI Adoption: Ubiquitous Yet Unevenly Governed
- Brainstorming (63%)
- Drafting emails (62%)
- Summarizing long documents or meetings (61%)
- Proofreading or copyediting (56%)
- Creating presentations (47%)
Shadow AI: Unvetted Tool Use
Institutional Strategies and Policy Orientations
- Piloting the use of AI tools (65%)
- Evaluating both opportunities and risks (60%)
- Encouraging staff and faculty to use AI tools (59%)
- Creating work-related policies and guidelines (54%)
Workforce Development and Upskilling
- Encouraging faculty and staff to develop skills on their own (80%)
- Offering in-house professional learning opportunities (71%)
Measuring Return on Investment
Perceptions of Risks, Opportunities, and Challenges
Risks
- Increased misinformation (55%)
- Use of data without consent (52%)
- Loss of fundamental skills requiring independent thought (51%)
- Insufficient data protection (51%)
- Violations of copyright and intellectual property (47%)
- Student AI use outpacing faculty and staff AI skills (47%)
Opportunities
- Automating repetitive processes (70%)
- Offloading administrative burdens and mundane tasks (65%)
- Analyzing large datasets (60%)
- Generating insights for data-informed decision-making (53%)
- Real-time data analytics and visualization (51%)
Challenges
- AI’s pace of change (60%)
- Lack of AI expertise (55%)
- Lack of best practices (48%)
- Lack of time to learn new skills (46%)
- The number of risks associated with AI (41%)
Attitudes Toward AI: Enthusiasm Tempered by Caution
Faculty-Specific Findings
Discussion
Interpreting the AI Implementation Gap
Diffusion of Innovations and the Pace of Change
Organizational Complexity and Shared Governance
Institutional Type and Disciplinary Differences
The Role of Professional Identity and Autonomy
Voluntary Versus Mandated Adoption
Implications for Data Privacy, Cybersecurity, and Risk Management
Ethical Considerations
The Role of Vendors and Technology Companies
Environmental Impact of AI
The Challenge of Measuring Impact
Workforce Development: Self-Directed Learning and Its Limits
Balancing Enthusiasm and Caution
International and Comparative Perspectives
Limitations and Directions for Future Research
Recommendations
For Institutional Leaders
- Develop and communicate clear AI policies and guidelines. The finding that only 54% of respondents are aware of AI-related policies underscores the urgent need for institutions to create, formalize, and widely communicate expectations for AI use. Policies should address data privacy, cybersecurity, intellectual property, accessibility, ethical considerations, and environmental impact.
- Engage stakeholders in policy development. Involving faculty, staff, and other stakeholders in the creation of AI policies can help ensure that guidelines are practical, widely understood, and reflective of diverse perspectives (Kezar, 2014). Inclusive governance processes can also build buy-in and support for policy implementation.
- Invest in structured professional development. While self-directed learning has value, institutions should also provide formal training opportunities, workshops, and resources to support AI skill development. Professional development should be role-specific and address both technical skills and ethical considerations.
- Vet and provide access to AI tools. To reduce shadow AI use, institutions should proactively identify, evaluate, and provide access to AI tools that meet institutional standards for privacy, security, quality, and environmental sustainability. Communicating the availability of approved tools and the reasons for their selection can help build trust and compliance.
- Measure and communicate return on investment. Institutions should develop frameworks for evaluating the impact of AI investments, including both quantitative metrics (e.g., time savings, cost reductions) and qualitative assessments (e.g., user satisfaction, quality of work). Sharing findings with stakeholders can help build confidence and inform future decisions.
- Update job descriptions and expectations. As AI-related responsibilities become more common, institutions should codify these duties in job descriptions to clarify expectations, ensure access to resources, and acknowledge the changing nature of work.
- Address faculty-specific needs. Given the finding that nearly a quarter of faculty lack access to desired AI tools, institutions should assess and address the unique needs of faculty in different disciplines, ensuring equitable access and support.
- Develop ethical guidelines for AI use. Institutions should develop and communicate ethical guidelines that address issues such as algorithmic bias, transparency, accountability, and the appropriate use of AI in high-stakes decisions (e.g., hiring, admissions, student evaluation).
- Manage vendor relationships strategically. Institutions should develop procurement processes that evaluate AI tools for data privacy, security, accessibility, and ethical considerations. Building capacity for vendor negotiation and contract management is essential for protecting institutional interests.
- Incorporate environmental considerations. Institutions should consider the environmental impact of AI adoption and explore strategies for mitigating impact, such as prioritizing energy-efficient tools and advocating for sustainable practices among vendors.
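The ROI recommendation above calls for combining quantitative metrics such as time savings and cost reductions. As a minimal sketch of what such a metric might look like, the function below computes a simple return ratio for an AI tool subscription; all inputs (hours saved, loaded hourly cost, license fee) are hypothetical placeholders, not figures from the study, and real evaluation frameworks would also fold in the qualitative measures the text mentions.

```python
# Illustrative sketch only: a minimal quantitative ROI calculation for an
# AI tool subscription. All figures are hypothetical, not study data.

def ai_tool_roi(hours_saved_per_month: float,
                loaded_hourly_cost: float,
                monthly_license_cost: float) -> float:
    """Return ROI as a ratio: (value of time saved - cost) / cost."""
    value_of_time = hours_saved_per_month * loaded_hourly_cost
    return (value_of_time - monthly_license_cost) / monthly_license_cost

# Example: 10 hours saved per month at a $50/hour loaded cost,
# against a $200/month license -> a 1.5x (150%) return on the license.
roi = ai_tool_roi(10, 50.0, 200.0)
print(f"{roi:.0%}")  # prints "150%"
```

A framework built on this kind of metric would still need the qualitative side (user satisfaction, quality of work) reported alongside it, since time savings alone can overstate value.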
For Policymakers
- Support the development of sector-wide standards and best practices. National and regional higher education organizations, accreditors, and government agencies can play a role in developing guidelines, sharing exemplary practices, and fostering cross-institutional collaboration on AI governance.
- Provide resources for workforce development. Policymakers should consider funding professional development initiatives, research on AI in higher education, and technical assistance for institutions seeking to improve their AI governance.
- Promote transparency and accountability in AI tool development. Regulatory frameworks should encourage AI developers to provide clear information about data practices, algorithmic decision-making, environmental impact, and the limitations of their tools.
- Encourage international collaboration. Given the global nature of AI technologies, policymakers should support international collaboration on standards, best practices, and research.
For Researchers
- Conduct longitudinal and comparative studies. Future research should examine how AI adoption and governance evolve over time, and compare approaches across different types of institutions, national contexts, and disciplinary fields.
- Investigate the perspectives of underrepresented groups. Given the low proportion of faculty in the available data, targeted studies of faculty experiences with AI—as well as the perspectives of students, part-time employees, and other stakeholders—are needed.
- Develop and test frameworks for evaluating AI impact. Researchers can contribute to practice by developing rigorous, practical approaches to measuring the impact of AI on higher education work, learning, and organizational outcomes.
- Examine the role of vendors and technology companies. Research on the vendor-institution relationship and its implications for governance, data privacy, and educational values would be valuable.
- Explore ethical and environmental dimensions. Additional research is needed on the ethical implications of AI use in higher education and the environmental footprint of AI adoption.
Conclusion
References
- Banta, T. W.; Blaich, C. Closing the assessment loop. Change: The Magazine of Higher Learning 2011, 43(1), 22–27.
- Bates, A. W. Teaching in a digital age: Guidelines for designing teaching and learning; BCcampus, 2015.
- Birnbaum, R. How colleges work: The cybernetics of academic organization and leadership; Jossey-Bass, 1988.
- Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989, 13(3), 319–340.
- EDUCAUSE. 2025 EDUCAUSE AI landscape study; EDUCAUSE, 2025.
- Ertmer, P. A.; Ottenbreit-Leftwich, A. T. Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education 2010, 42(3), 255–284.
- Holmes, W.; Bialik, M.; Fadel, C. Artificial intelligence in education: Promises and implications for teaching and learning; Center for Curriculum Redesign, 2019.
- Kezar, A. Tools for a time and place: Phased leadership strategies to institutionalize a diversity agenda. The Review of Higher Education 2007, 30(4), 413–439.
- Kezar, A. How colleges change: Understanding, leading, and enacting change; Routledge, 2014.
- Kezar, A.; Eckel, P. D. The effect of institutional culture on change strategies in higher education: Universal principles or culturally responsive concepts? The Journal of Higher Education 2002, 73(4), 435–460.
- Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L. B. Intelligence unleashed: An argument for AI in education; Pearson, 2016.
- Macfarlane, B. The morphing of academic practice: Unbundling and the rise of the para-academic. Higher Education Quarterly 2011, 65(1), 59–73.
- Marginson, S. Higher education and the common good; Melbourne University Publishing, 2016.
- Mollick, E.; Mollick, L. Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts; The Wharton School Research Paper, 2023.
- Palmer, K. Data shows AI ‘disconnect’ in higher ed workforce; Inside Higher Ed, 13 January 2026.
- Popenici, S. A. D.; Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning 2017, 12(1), 1–13.
- Pressman, J. L.; Wildavsky, A. Implementation: How great expectations in Washington are dashed in Oakland, 3rd ed.; University of California Press, 1984.
- Robert, J. The impact of AI on work in higher education; EDUCAUSE, 2026.
- Rogers, E. M. Diffusion of innovations, 5th ed.; Free Press, 2003.
- Sabatier, P. A. Top-down and bottom-up approaches to implementation research: A critical analysis and suggested synthesis. Journal of Public Policy 1986, 6(1), 21–48.
- Selwyn, N. Distrusting educational technology: Critical questions for changing times; Routledge, 2014.
- Selwyn, N. Education and technology: Key issues and debates, 2nd ed.; Bloomsbury Academic, 2016.
- Selwyn, N. Should robots replace teachers? AI and the future of education; Polity Press, 2019.
- Selwyn, N. The future of AI and education: Some cautionary notes. European Journal of Education 2022, 57(4), 620–631.
- Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics; 2019; pp. 3645–3650.
- Tierney, W. G. The impact of culture on organizational decision making: Theory and practice in higher education; Stylus Publishing, 2008.
- Tondeur, J.; van Braak, J.; Ertmer, P. A.; Ottenbreit-Leftwich, A. Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educational Technology Research and Development 2017, 65(3), 555–575.
- Tsai, Y.-S.; Perrotta, C.; Gašević, D. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education 2020, 45(4), 554–567.
- Venkatesh, V.; Davis, F. D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science 2000, 46(2), 186–204.
- Williamson, B.; Bayne, S.; Shay, S. The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education 2020, 25(4), 351–365.
- Zawacki-Richter, O.; Marín, V. I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education 2019, 16(1), 1–27.
| Finding | Percentage of respondents |
|---|---|
| Used AI tools for work in past 6 months | 94% |
| Aware of AI-related policies/guidelines | 54% |
| Used AI tools not provided by institution | 56% |
| Institution has work-related AI strategy | 92% |
| Institution measuring ROI for AI tools | 13% |
| Required to use AI tools for work | 11% |
| Want to use or continue using AI tools | 86% |
| Identified 6+ urgent risks | 67% |
| Identified 5+ promising opportunities | 67% |
| Feel confident using AI with current policies | 56% |
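Headline figures like those in the table above are typically simple shares: the number of respondents selecting an option divided by the number who answered the item. As a minimal illustration of that arithmetic, the sketch below tallies a made-up set of responses; the data is invented for demonstration and is not drawn from the study.

```python
# Illustrative sketch only: deriving a headline percentage from raw
# survey responses. The response data below is invented, not study data.
from collections import Counter

responses = ["yes", "yes", "no", "yes", "unsure", "yes", "no", "yes"]

counts = Counter(responses)
share_yes = counts["yes"] / len(responses)  # 5 of 8 respondents
print(f"Selected 'yes': {share_yes:.0%} of respondents")
```

In practice, reported shares also depend on choices such as whether non-responses and "unsure" answers stay in the denominator, which is worth stating alongside any such table.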
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).