Preprint Article (This version is not peer-reviewed.)

The AI Implementation Gap in Higher Education: Navigating the Disconnect Between Technology Adoption, Policy Awareness, and Institutional Governance

Submitted: 20 March 2026
Posted: 30 March 2026


Abstract
Artificial intelligence (AI) has rapidly permeated higher education workplaces, yet a significant disconnect exists between employee adoption of AI tools and institutional policy awareness, governance structures, and strategic clarity. This study examines the emergent phenomenon of the "AI implementation gap" in higher education—the disparity between widespread AI tool usage and the institutional frameworks meant to guide such use. Drawing on recent survey data from nearly 2,000 higher education professionals and situating findings within broader theoretical frameworks of technology adoption, organizational change, and higher education governance, this article critically analyzes the current state of AI integration in higher education work environments. Key findings reveal that while 94% of higher education employees report using AI tools for work, only 54% are aware of relevant institutional policies, and more than half have used AI tools not sanctioned by their institutions. The analysis explores the risks, opportunities, and challenges associated with this implementation gap, including concerns about data privacy, misinformation, skill erosion, algorithmic bias, environmental impact, and the largely unmeasured return on investment of AI initiatives. The article also examines the roles of AI vendors, the ethical dimensions of AI adoption, and the implications of voluntary versus mandated technology use. The article concludes with recommendations for institutional leaders, policymakers, and researchers seeking to bridge the gap between AI adoption and governance in higher education contexts.

Introduction

The emergence of generative artificial intelligence (AI) technologies has precipitated a fundamental transformation in the nature of work across sectors, and higher education is no exception. Since the public release of large language models such as ChatGPT in late 2022, institutions of higher education have grappled with questions about how AI tools should be integrated into teaching, learning, research, and administrative functions (Mollick & Mollick, 2023; Selwyn, 2022). While much scholarly and public attention has focused on student-facing implications—particularly academic integrity and personalized learning—the impact of AI on the higher education workforce has received comparatively less systematic examination (Robert, 2026; Zawacki-Richter et al., 2019).
The rapid adoption of AI tools by higher education employees has outpaced the development of institutional policies, guidelines, and governance structures meant to guide their use. This phenomenon—which may be termed the “AI implementation gap”—poses significant challenges for data privacy, cybersecurity, decision-making quality, ethical practice, and workforce development (EDUCAUSE, 2025; Robert, 2026). Understanding the dimensions, causes, and consequences of this gap is essential for institutional leaders, policymakers, and scholars seeking to harness the benefits of AI while mitigating its risks.
This article presents a comprehensive analysis of the current state of AI adoption and governance in higher education workplaces. Drawing on a major 2025 survey conducted by EDUCAUSE in partnership with the Association for Institutional Research (AIR), the National Association of College and University Business Officers (NACUBO), and the College and University Professional Association for Human Resources (CUPA-HR), as well as complementary reporting and analysis, the article examines the following research questions:
  • What is the current landscape of AI tool usage among higher education employees, and how does this compare with institutional policy awareness and governance?
  • What are the primary risks, opportunities, and challenges associated with AI use in higher education work environments?
  • How are institutions approaching AI-related workforce development, return on investment (ROI) measurement, and policy creation?
  • What theoretical frameworks best explain the observed disconnect between AI adoption and institutional governance?
  • What are the ethical, environmental, and relational dimensions of AI adoption in higher education?
  • What recommendations can be offered for bridging the AI implementation gap in higher education?
The analysis proceeds as follows. First, the article reviews relevant literature on technology adoption, organizational change, higher education governance, and AI ethics to situate the empirical findings within established theoretical frameworks. Next, the methodology of the primary data sources is described. The results section presents key findings regarding AI adoption, policy awareness, risks, opportunities, and challenges. The discussion section interprets these findings in light of the theoretical frameworks and prior research, with expanded attention to ethical considerations, the role of AI vendors, environmental impact, and the implications of voluntary adoption. The conclusion offers recommendations for practice and avenues for future research.

Literature Review

Technology Adoption in Higher Education  

The integration of new technologies into higher education has long been characterized by cycles of enthusiasm, experimentation, and institutionalization—often accompanied by uneven adoption, resistance, and unforeseen consequences (Rogers, 2003; Selwyn, 2014). Theoretical models of technology adoption, such as Rogers’ (2003) Diffusion of Innovations and the Technology Acceptance Model (TAM; Davis, 1989; Venkatesh & Davis, 2000), emphasize the roles of perceived usefulness, ease of use, social influence, and organizational support in shaping adoption patterns. In higher education contexts, these factors interact with disciplinary cultures, institutional governance structures, and the professional identities of faculty and staff (Ertmer & Ottenbreit-Leftwich, 2010; Tondeur et al., 2017).
Research on prior waves of educational technology—including learning management systems, MOOCs, and educational analytics—has demonstrated that successful integration depends not only on the availability of tools but also on the development of policies, professional development, and a shared understanding of appropriate use (Bates, 2015; Selwyn, 2016). The rapid emergence of generative AI has compressed the typical timeline for institutional response, creating conditions in which adoption may outpace governance (Mollick & Mollick, 2023; Tsai et al., 2020).

Organizational Change and Governance in Higher Education  

Higher education institutions are complex organizations characterized by shared governance, decentralized decision-making, and the coexistence of academic and administrative cultures (Birnbaum, 1988; Kezar, 2014). These features can both facilitate and impede organizational change. On the one hand, faculty autonomy and disciplinary expertise may enable bottom-up innovation; on the other hand, the diffusion of authority and the persistence of established routines may slow the development of coherent institutional strategies (Kezar & Eckel, 2002; Tierney, 2008).
The governance of emerging technologies in higher education is further complicated by the need to balance innovation with risk management, and to reconcile the interests of diverse stakeholders—including faculty, staff, students, administrators, and external partners (Macfarlane, 2011; Marginson, 2016). Prior research has highlighted the importance of transparent communication, inclusive governance, and adaptive policymaking in navigating technological change (Kezar, 2014; Selwyn, 2022).
Importantly, institutions differ in their capacity for change. Research universities with robust IT infrastructure and dedicated governance committees may respond differently to AI adoption than community colleges or small private institutions with fewer resources (Kezar & Eckel, 2002). Similarly, disciplinary cultures shape faculty attitudes toward technology; faculty in STEM fields may embrace AI tools more readily than those in the humanities, who may be more attuned to concerns about authenticity and human judgment (Tondeur et al., 2017).

AI in Higher Education: Opportunities and Risks  

The scholarly literature on AI in higher education has grown rapidly in recent years, reflecting both optimism about the technology’s potential and concern about its risks (Zawacki-Richter et al., 2019; Holmes et al., 2019). Proponents argue that AI can automate routine tasks, enhance data-driven decision-making, personalize learning, and free faculty and staff to focus on higher-order work (Luckin et al., 2016; Popenici & Kerr, 2017). Critics, however, caution against overreliance on AI, highlighting risks such as algorithmic bias, erosion of critical thinking skills, data privacy violations, and the potential for job displacement (Selwyn, 2019; Williamson et al., 2020).
A recurrent theme in the literature is the tension between the pace of technological change and the slower rhythms of institutional governance and policy development (Tsai et al., 2020; Selwyn, 2022). This tension is particularly acute in the case of generative AI, where the rapid proliferation of tools and use cases has challenged institutions to respond in real time (Mollick & Mollick, 2023; Robert, 2026).

Ethical Considerations in AI Adoption  

The ethical dimensions of AI adoption in higher education have received increasing attention in recent scholarship. Key concerns include algorithmic bias—the tendency of AI systems to reflect and amplify existing social inequities—and the potential for AI to reinforce structural inequalities in hiring, admissions, and student services (Holmes et al., 2019; Selwyn, 2019). The opacity of many AI systems, often described as “black box” decision-making, raises questions about accountability and transparency (Williamson et al., 2020).
Additionally, scholars have raised concerns about the commodification of education, the erosion of human relationships in teaching and learning, and the potential for AI to prioritize efficiency over educational values (Selwyn, 2019). The ethical implications of AI-driven surveillance and data collection in educational settings have also been explored, with particular attention to issues of consent and privacy (Tsai et al., 2020; Williamson et al., 2020).

The Role of Vendors and Technology Companies  

The adoption of AI in higher education is shaped not only by institutional actors but also by the vendors and technology companies that develop and market AI tools. Scholars have noted the increasing influence of technology companies in shaping educational practices and policies, raising concerns about the concentration of power and the potential for vendor lock-in (Williamson et al., 2020; Selwyn, 2016). The terms of service and data practices of AI vendors may not align with institutional values or regulatory requirements, creating tensions between innovation and governance.

Environmental Impact of AI  

The environmental footprint of AI technologies has emerged as a growing concern. Training and operating large AI models require significant computational resources, contributing to energy consumption and carbon emissions (Strubell et al., 2019). As institutions expand their use of AI tools, the environmental implications of this adoption warrant consideration in institutional decision-making and governance.

The Concept of the “Implementation Gap”  

The notion of an “implementation gap”—a disparity between the adoption of a practice or technology and the policies, resources, or support structures meant to govern its use—has been applied in a variety of organizational and policy contexts (Pressman & Wildavsky, 1984; Sabatier, 1986). In higher education, implementation gaps have been documented in areas such as assessment, diversity initiatives, and technology integration (Banta & Blaich, 2011; Kezar, 2007; Selwyn, 2016).
The AI implementation gap in higher education may be understood as a specific manifestation of this broader phenomenon, shaped by the unique characteristics of AI technologies—including their rapid evolution, broad applicability, and the opacity of many AI systems—and the distinctive features of higher education governance (Robert, 2026; EDUCAUSE, 2025). Understanding the contours and causes of this gap is essential for developing effective strategies to bridge it.

Methodology

Data Sources  

The primary data for this analysis are drawn from the 2025 EDUCAUSE survey, “The Impact of AI on Work in Higher Education,” conducted in partnership with AIR, NACUBO, and CUPA-HR (Robert, 2026). The survey was disseminated to higher education professionals from September 29 to October 13, 2025, yielding 1,960 responses meeting inclusion criteria. Respondents were required to be currently employed at a higher education institution and to answer questions regarding their position, area of responsibility, attitude toward AI, and awareness of AI-related policies.
Supplementary data and analysis are drawn from reporting in Inside Higher Ed (Palmer, 2026), which contextualized the EDUCAUSE findings and provided expert commentary on the implications of the survey results.

Survey Instrument and Measures  

The EDUCAUSE survey comprised 34 closed- and open-ended items covering the following domains:
  • Strategies, policies, and guidelines: Awareness of institutional AI strategies, elements of work-related AI strategy, orientation of policies and guidelines, and workforce upskilling efforts.
  • Risks, opportunities, and challenges: Perceptions of urgent risks, promising opportunities, and significant challenges associated with AI use.
  • Use cases: Frequency and types of work-related AI tool use, access to AI tools, and use of tools not provided by the institution.
For the purposes of this research, AI was defined according to the EU AI Act: “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Robert, 2026). This definition encompasses both generative AI and other AI-powered features embedded in common software applications.

Respondent Demographics  

The survey sample included managers and directors (39%), professional staff (32%), executive leaders (16%), and faculty (12%). The low proportion of faculty respondents is a notable limitation, as it may skew findings toward administrative and operational workflows (Palmer, 2026). Respondents were drawn from institutions of varying sizes, control types, and levels, with a majority (89%) based primarily in the United States.

Analytical Approach  

Quantitative data from closed-ended items were analyzed using descriptive statistics. Open-ended responses were coded manually and with qualitative analysis software. Findings are presented thematically, organized around the research questions identified in the introduction.
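To make the descriptive approach concrete, the following minimal sketch illustrates the kind of analysis involved. The EDUCAUSE analysis pipeline is not public, so the data frame, column names (e.g., role, aware_of_policy), and values here are hypothetical placeholders rather than the actual survey data.

```python
# Illustrative only: hypothetical columns and toy data, not the EDUCAUSE dataset.
import pandas as pd

responses = pd.DataFrame({
    "role": ["Executive Leader", "Manager/Director", "Professional Staff",
             "Faculty", "Executive Leader", "Manager/Director",
             "Professional Staff", "Faculty"],
    "used_ai_past_6mo": [True, True, True, True, True, False, True, True],
    "aware_of_policy": [True, False, True, False, True, True, False, True],
})

# Overall rates of the kind reported in the Results section.
print(f"Used AI for work:  {responses['used_ai_past_6mo'].mean():.0%}")
print(f"Aware of policies: {responses['aware_of_policy'].mean():.0%}")

# Disaggregation by role, analogous to the executive/IT breakdown
# discussed in the Results section.
print(responses.groupby("role")["aware_of_policy"].mean().map("{:.0%}".format))
```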

Results

Summary of Key Findings  

Table 1 provides a summary of the key findings from the EDUCAUSE survey regarding AI adoption, policy awareness, risks, opportunities, and challenges among higher education employees.

AI Adoption: Ubiquitous Yet Unevenly Governed  

The EDUCAUSE survey reveals that AI tool usage is now nearly universal among higher education employees. Ninety-four percent of respondents reported having used AI tools for work within the past six months, with a majority (73% of those who have used AI) doing so on a daily (38%) or weekly (34%) basis (Robert, 2026). This widespread adoption spans a range of tasks, with more than half of respondents (54%) indicating that they have used AI tools for eight or more types of work-related activities in the past six months.
The most common work-related uses of AI tools include:
  • Brainstorming (63%)
  • Drafting emails (62%)
  • Summarizing long documents or meetings (61%)
  • Proofreading or copyediting (56%)
  • Creating presentations (47%)
Less common uses include application screening (6%), onboarding new employees (7%), and scheduling meetings (9%), suggesting that AI adoption is currently concentrated in lower-stakes, routine tasks rather than high-stakes decision-making processes.
Despite the ubiquity of AI tool use, institutional governance and policy awareness lag significantly behind. Only 54% of respondents indicated that they are aware of policies or guidelines meant to guide their work-related use of AI tools (Robert, 2026). This finding is particularly concerning given the risks associated with unsanctioned or uninformed AI use, including data privacy breaches, violations of intellectual property, and the propagation of misinformation.
The gap between adoption and policy awareness is not simply a matter of poor communication. Disaggregation of the data by role reveals that even institutional leaders and IT professionals—those most likely to have decision-making authority over AI policy—report significant lack of awareness. Thirty-eight percent of executive leaders, 43% of managers and directors, 35% of technology professionals, and 30% of cybersecurity and privacy professionals indicated that they are not aware of relevant policies (Robert, 2026). These findings suggest that many institutions may lack formal AI policies altogether, rather than merely failing to communicate existing guidelines.

Shadow AI: Unvetted Tool Use  

A related concern is the prevalence of “shadow AI”—the use of AI tools that have not been vetted or provided by the institution. More than half of respondents (56%) reported using AI tools not provided by their institutions for work-related tasks (Robert, 2026). This pattern exposes institutions to risks related to data privacy, cybersecurity, accessibility, copyright, and environmental impact, as unvetted tools may not meet institutional standards in these areas.
The reasons for shadow AI use are varied. While the survey did not explicitly probe motivations, contributing factors may include lack of access to desired tools, insufficient awareness of institutional offerings, and the perception that personal or third-party tools are more effective or convenient.

Institutional Strategies and Policy Orientations  

Despite the gaps in policy awareness, most respondents (92%) indicated that their institution has a work-related AI strategy (Robert, 2026). The most common elements of these strategies include:
  • Piloting the use of AI tools (65%)
  • Evaluating both opportunities and risks (60%)
  • Encouraging staff and faculty to use AI tools (59%)
  • Creating work-related policies and guidelines (54%)
Notably, only 5% of respondents reported that their institution discourages or prohibits the use of AI tools for work, indicating a generally permissive orientation.
Among those who are aware of institutional policies, nearly half (47%) characterized them as somewhat or extremely permissive, while 30% described them as neutral and 14% as somewhat or extremely restrictive (Robert, 2026). This permissive orientation may reflect a desire to balance innovation with caution, but it may also contribute to uncertainty and inconsistency in practice.
Importantly, only about half of respondents who are aware of policies and guidelines reported feeling confident (34%) or very confident (22%) using AI tools for work (Robert, 2026). This finding suggests that the existence of policies alone is insufficient; clarity, communication, and practical guidance are also essential.

Workforce Development and Upskilling  

A majority of institutions are addressing AI-related workforce skills by upskilling and reskilling existing staff and faculty (69% of those whose institution’s strategy includes increasing workforce skills) rather than by hiring new roles (Robert, 2026). The most common approaches to upskilling include:
  • Encouraging faculty and staff to develop skills on their own (80%)
  • Offering in-house professional learning opportunities (71%)
While self-directed learning is valuable, the literature suggests that structured professional development and institutional support are critical for building confidence and ensuring consistent practice (Ertmer & Ottenbreit-Leftwich, 2010; Tondeur et al., 2017).

Measuring Return on Investment  

A striking finding is the nascent state of ROI measurement for AI tools. Only 13% of respondents indicated that their institution is measuring the return on investment for work-related AI tools (Robert, 2026). Among those who said their institution is measuring ROI, many reported uncertainty about how this is being done. Methods mentioned include user experience surveys, focus groups, and empirical observation of key performance indicators such as adoption rates, usage time, user satisfaction, and work time for specific tasks pre- and post-implementation.
Respondents also noted challenges in isolating the impact of AI tools from other factors and expressed concern that enthusiasm for AI may be outweighing a strategic approach to evaluation. As one respondent explained, “I am still waiting for a use case that exceeds what we have already or reduces our costs” (Robert, 2026).
The challenge of measuring ROI is not unique to AI; prior research on educational technology has documented similar difficulties in quantifying the impact of technology investments (Bates, 2015). However, the pace of AI adoption and the scale of investment make ROI measurement particularly urgent. Institutions may benefit from adapting frameworks from organizational performance measurement, including logic models, cost-benefit analysis, and balanced scorecards, to evaluate AI investments (Banta & Blaich, 2011).
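To illustrate how such a framework might translate into a concrete metric, the sketch below computes a simple one-year cost-benefit ratio from pre- and post-implementation task times, one of the KPIs respondents mentioned. All figures are hypothetical placeholders, and, as respondents noted, a real evaluation would also need to isolate AI's effect from other concurrent changes.

```python
# Minimal cost-benefit sketch for an AI tool pilot. All figures are
# hypothetical placeholders, not survey data.

def simple_roi(
    minutes_per_task_before: float,
    minutes_per_task_after: float,
    tasks_per_year: int,
    hourly_cost: float,          # fully loaded staff cost, USD/hour
    annual_license_cost: float,  # USD/year
) -> float:
    """Return one-year ROI as (net benefit / cost)."""
    hours_saved = (minutes_per_task_before - minutes_per_task_after) * tasks_per_year / 60
    benefit = hours_saved * hourly_cost
    return (benefit - annual_license_cost) / annual_license_cost

# Example: summarization tasks drop from 30 to 12 minutes,
# 2,000 tasks/year, $45/hour staff cost, $15,000/year license -> 80% ROI.
print(f"ROI: {simple_roi(30, 12, 2000, 45.0, 15000.0):.0%}")
```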

Perceptions of Risks, Opportunities, and Challenges  

Risks 

Higher education employees perceive a broad range of risks associated with AI use. Sixty-seven percent of respondents identified six or more “urgent” risks from a provided list, indicating widespread concern (Robert, 2026). The most frequently selected risks include:
  • Increased misinformation (55%)
  • Use of data without consent (52%)
  • Loss of fundamental skills requiring independent thought (51%)
  • Insufficient data protection (51%)
  • Violations of copyright and intellectual property (47%)
  • Student AI use outpacing faculty and staff AI skills (47%)
Open-ended responses highlighted additional concerns, including concentration of power among technology companies, academic misconduct, inconsistent policies, loss of creativity, and environmental impact.

Opportunities  

Despite these concerns, respondents also perceive significant opportunities. Sixty-seven percent selected five or more “most promising” opportunities (Robert, 2026). The most frequently selected opportunities include:
  • Automating repetitive processes (70%)
  • Offloading administrative burdens and mundane tasks (65%)
  • Analyzing large datasets (60%)
  • Generating insights for data-informed decision-making (53%)
  • Real-time data analytics and visualization (51%)
Open-ended responses described additional opportunities such as personalized learning assistants, improved digital accessibility, and customer service agents.

Challenges  

The most frequently cited challenges to AI use include:
  • AI’s pace of change (60%)
  • Lack of AI expertise (55%)
  • Lack of best practices (48%)
  • Lack of time to learn new skills (46%)
  • The number of risks associated with AI (41%)
Open-ended responses also highlighted unsubstantiated “hype,” unequal access to AI tools, poor fit of specific tools to work tasks, lack of technology infrastructure, and inability to mitigate environmental impact.

Attitudes Toward AI: Enthusiasm Tempered by Caution  

Most respondents (81%) reported feeling enthusiasm or a mix of caution and enthusiasm toward AI, with only 17% indicating pure caution (Robert, 2026). However, the framing of survey questions may influence these findings. As one expert noted, combining “caution and enthusiasm” into a single response option may obscure the degree to which respondents are uncertain or ambivalent (Palmer, 2026).
The perception of institutional leaders’ attitudes mirrors these patterns, with 38% of respondents describing their leaders as enthusiastic and 36% as expressing a mix of caution and enthusiasm.

Faculty-Specific Findings  

Although faculty comprised only 12% of survey respondents, the available data reveal notable differences between faculty and staff experiences with AI. A majority of faculty (63%) reported using AI tools for creating learning activities or assessments, compared to just 32% of staff (Robert, 2026). This finding reflects the instructional responsibilities of faculty and suggests that AI is being integrated into core teaching functions.
Conversely, nearly a quarter (23%) of faculty respondents indicated that their institution does not provide access to any of the AI tools they want to use for work, compared to just 10% of respondents overall (Robert, 2026). This disparity raises questions about resource allocation and institutional support for faculty AI adoption. Possible explanations include disciplinary differences in tool preferences, lack of awareness among IT administrators about faculty-specific needs, or budget constraints that limit the range of tools available.
The underrepresentation of faculty in the survey sample limits the generalizability of these findings, but the available data suggest that faculty may face unique barriers to AI adoption that warrant further investigation.

Discussion

Interpreting the AI Implementation Gap  

The findings presented above reveal a significant and consequential gap between the adoption of AI tools by higher education employees and the institutional governance structures, policies, and support systems meant to guide that use. This gap can be understood through several theoretical lenses.

Diffusion of Innovations and the Pace of Change 

Rogers’ (2003) Diffusion of Innovations theory posits that the adoption of new technologies follows a predictable pattern, with innovators and early adopters leading the way and the majority following as the technology becomes normalized. In the case of AI, the pace of adoption has been remarkably swift, compressing the typical diffusion curve and leaving institutional governance structures struggling to keep pace (Mollick & Mollick, 2023; Robert, 2026).
The finding that 94% of higher education employees have used AI tools for work within the past six months suggests that AI has already moved beyond the early adoption phase and into the mainstream. However, the lag in policy awareness and the prevalence of shadow AI use indicate that institutional norms and governance have not kept up with individual behavior.

Organizational Complexity and Shared Governance  

Higher education institutions are characterized by shared governance, decentralized decision-making, and the coexistence of diverse stakeholder interests (Birnbaum, 1988; Kezar, 2014). These features can impede the development of coherent, institution-wide AI strategies and policies. The finding that even institutional leaders and IT professionals are often unaware of AI policies suggests that responsibility for AI governance may be diffuse or unclear, and that communication channels may be inadequate.
The permissive orientation of most AI policies may also reflect the difficulty of achieving consensus in complex organizations, as well as a reluctance to restrict innovation in the absence of clear evidence of harm.

Institutional Type and Disciplinary Differences  

The AI implementation gap may manifest differently across institutional types and disciplinary contexts. Research universities with robust IT infrastructure and dedicated governance committees may be better positioned to develop and implement AI policies than community colleges or small private institutions with fewer resources (Kezar & Eckel, 2002). Similarly, faculty in STEM fields—who may be more familiar with computational tools—may embrace AI more readily than those in the humanities, who may prioritize concerns about authenticity, originality, and the erosion of human judgment (Tondeur et al., 2017).
The available survey data do not allow for detailed disaggregation by institutional type or discipline, but future research should explore these dimensions to better understand the contextual factors shaping the implementation gap.

The Role of Professional Identity and Autonomy  

Faculty and staff in higher education often possess strong professional identities and expectations of autonomy in their work (Macfarlane, 2011). The voluntary nature of most AI tool use—with only 11% of respondents required to use AI for work—suggests that adoption is being driven by individual initiative rather than institutional mandate. While this may foster innovation, it also creates risks if individuals are unaware of or choose to ignore institutional guidelines.
The high rate of shadow AI use (56%) is particularly concerning in this context, as it suggests that many employees are making independent decisions about which tools to use without regard for institutional vetting or approval.

Voluntary Versus Mandated Adoption  

The finding that 86% of respondents want to use or continue using AI tools—despite the lack of institutional mandates—raises important questions about the governance of voluntary technology adoption. On the one hand, voluntary adoption may signal genuine perceived value and intrinsic motivation, which can support sustained use and innovation. On the other hand, voluntary adoption in the absence of clear policies and guidance may result in inconsistent practices, unmanaged risks, and inequitable access.
Institutions must balance respect for professional autonomy with the need for consistent, institution-wide practices. This may require developing governance models that provide clear expectations and support while allowing flexibility for individual and disciplinary variation.

Implications for Data Privacy, Cybersecurity, and Risk Management  

The AI implementation gap has significant implications for data privacy and cybersecurity. When employees use AI tools that have not been vetted by their institutions, they may inadvertently expose sensitive data to third parties, violate privacy regulations, or introduce security vulnerabilities (Robert, 2026; Williamson et al., 2020). The risks are compounded by the opacity of many AI systems, which may collect, store, or share data in ways that are not transparent to users.
The finding that only 54% of respondents are aware of AI-related policies—and that even cybersecurity and privacy professionals report significant lack of awareness—suggests that many institutions are not effectively managing these risks.

Ethical Considerations  

The ethical dimensions of AI adoption in higher education extend beyond data privacy and cybersecurity. Algorithmic bias—the tendency of AI systems to reflect and amplify existing social inequities—poses risks in contexts such as hiring, admissions, and student services (Holmes et al., 2019; Selwyn, 2019). If AI tools are used to screen job applications or evaluate student work without adequate oversight, they may perpetuate discrimination and undermine institutional commitments to equity and inclusion.
The opacity of many AI systems raises additional ethical concerns. When decisions are made or influenced by “black box” algorithms, it may be difficult to explain or justify outcomes to affected individuals, undermining principles of transparency and accountability (Williamson et al., 2020). Institutions should consider developing ethical guidelines for AI use that address issues of bias, transparency, and accountability.
The potential for AI to erode human relationships in teaching and learning is another ethical concern. While AI can automate routine tasks and provide personalized feedback, overreliance on AI may diminish the role of human judgment, mentorship, and connection in educational experiences (Selwyn, 2019).

The Role of Vendors and Technology Companies  

The adoption of AI in higher education is shaped not only by institutional actors but also by the vendors and technology companies that develop and market AI tools. Survey respondents noted concerns about the concentration of power among technology companies, and the prevalence of shadow AI use suggests that employees may be turning to third-party tools when institutional offerings do not meet their needs (Robert, 2026).
The vendor-institution relationship raises important governance questions. AI vendors’ terms of service and data practices may not align with institutional values or regulatory requirements, creating tensions between innovation and compliance. Institutions should develop procurement processes that evaluate AI tools not only for functionality but also for data privacy, security, accessibility, and ethical considerations. Building capacity for vendor negotiation and contract management is essential for protecting institutional interests.

Environmental Impact of AI  

The environmental footprint of AI technologies is an emerging concern that warrants attention in institutional decision-making. Training and operating large AI models require significant computational resources, contributing to energy consumption and carbon emissions (Strubell et al., 2019). As institutions expand their use of AI tools, they should consider the environmental implications of this adoption and explore strategies for mitigating impact, such as prioritizing energy-efficient tools, consolidating AI infrastructure, and advocating for sustainable practices among vendors.
Survey respondents mentioned environmental impact as a concern in open-ended comments, indicating that this issue is on the radar of at least some higher education employees. Institutions may benefit from incorporating environmental considerations into their AI governance frameworks.
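As a gesture toward incorporating such considerations, the back-of-the-envelope sketch below estimates annual emissions from work-related AI queries. The per-query energy figure and grid carbon intensity are assumptions chosen for illustration only; published estimates vary widely, and actual values depend on the model, data center, and electricity mix.

```python
# Back-of-the-envelope carbon estimate for institutional AI use.
# Both constants below are illustrative assumptions, not measured values.

ENERGY_PER_QUERY_WH = 3.0        # assumed Wh per generative-AI query
GRID_INTENSITY_KG_PER_KWH = 0.4  # assumed kg CO2e per kWh of electricity

def annual_co2e_kg(employees: int, queries_per_day: float, workdays: int = 230) -> float:
    """Estimated annual emissions (kg CO2e) from work-related AI queries."""
    kwh = employees * queries_per_day * workdays * ENERGY_PER_QUERY_WH / 1000
    return kwh * GRID_INTENSITY_KG_PER_KWH

# Example: 5,000 employees averaging 10 queries per workday (~13,800 kg CO2e).
print(f"{annual_co2e_kg(5000, 10):.0f} kg CO2e per year")
```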

The Challenge of Measuring Impact  

The nascent state of ROI measurement for AI tools is another significant finding. Without systematic evaluation, institutions cannot determine whether AI investments are yielding tangible benefits, identify areas for improvement, or make evidence-based decisions about future investments (Bates, 2015; Robert, 2026). The challenges of measuring ROI are not unique to AI, but they are exacerbated by the rapid pace of change and the difficulty of isolating the impact of AI from other factors.
As one survey respondent observed, “AI alone can’t really create the business outcome value that’s needed to justify the investment” without supporting actions such as data quality, integration, and business process improvement (Robert, 2026). This insight underscores the need for a holistic approach to AI implementation that goes beyond tool adoption to include organizational change and capacity-building.
Institutions may benefit from adapting established frameworks for evaluating technology investments, including logic models, cost-benefit analysis, and balanced scorecards (Banta & Blaich, 2011). Developing shared metrics and benchmarks across institutions could also facilitate comparative analysis and the identification of best practices.

Workforce Development: Self-Directed Learning and Its Limits  

The emphasis on self-directed learning as a strategy for AI workforce development is both pragmatic and potentially problematic. While encouraging employees to develop skills on their own may be cost-effective and consistent with traditions of academic autonomy, it may also result in uneven skill development, inconsistent practice, and gaps in critical competencies (Ertmer & Ottenbreit-Leftwich, 2010; Tondeur et al., 2017).
The literature on technology integration in education suggests that structured professional development, ongoing support, and opportunities for collaboration are essential for building both competence and confidence (Bates, 2015; Selwyn, 2016). The finding that only half of respondents who are aware of policies feel confident using AI tools for work suggests that current approaches to workforce development may be insufficient.

Balancing Enthusiasm and Caution  

The mixed attitudes toward AI reported by respondents—with most expressing a combination of enthusiasm and caution—reflect the genuine ambiguity of the current moment. AI technologies offer significant potential benefits, but they also pose substantial risks. The challenge for institutions is to create environments in which staff and faculty can explore and experiment with AI tools while also ensuring that risks are identified, assessed, and managed.
The finding that most institutional policies are permissive or neutral suggests that institutions are erring on the side of enabling innovation. However, the prevalence of shadow AI use and the low levels of policy awareness raise questions about whether this permissive stance is being accompanied by adequate communication, education, and oversight.

International and Comparative Perspectives  

The majority of survey respondents (89%) are based in the United States, limiting the generalizability of findings to other national and regional contexts. The AI implementation gap may manifest differently in countries with different regulatory environments, institutional structures, and cultural attitudes toward technology. For example, the European Union’s AI Act establishes a comprehensive regulatory framework for AI that may influence institutional governance in EU member states (Robert, 2026). Similarly, institutions in countries with strong traditions of centralized governance may be better positioned to develop and implement coherent AI policies than those in more decentralized systems.
Future research should explore how the AI implementation gap varies across national and regional contexts, and identify lessons that can be learned from different governance approaches.

Limitations and Directions for Future Research  

Several limitations of the available data should be acknowledged. First, the low proportion of faculty respondents (12%) means that findings may not fully represent the perspectives and practices of instructional staff (Palmer, 2026). Second, the survey was conducted at a single point in time and may not capture the dynamic and rapidly evolving nature of AI adoption and governance. Third, the reliance on self-reported data introduces potential biases, including social desirability bias and recall bias.
Future research should seek to address these limitations by employing longitudinal designs, purposive sampling of underrepresented groups, and mixed-methods approaches that combine survey data with interviews, focus groups, and observational studies. Additional research is also needed on the effectiveness of different governance models, the impact of AI on specific job roles and functions, and the long-term consequences of AI adoption for higher education institutions and their stakeholders.
Comparative and international research is particularly needed to understand how the AI implementation gap manifests in different institutional, national, and cultural contexts. Research on the role of AI vendors and technology companies in shaping adoption and governance would also be valuable.

Recommendations

Based on the findings and analysis presented in this article, the following recommendations are offered for institutional leaders, policymakers, and researchers:

For Institutional Leaders  

  • Develop and communicate clear AI policies and guidelines. The finding that only 54% of respondents are aware of AI-related policies underscores the urgent need for institutions to create, formalize, and widely communicate expectations for AI use. Policies should address data privacy, cybersecurity, intellectual property, accessibility, ethical considerations, and environmental impact.
  • Engage stakeholders in policy development. Involving faculty, staff, and other stakeholders in the creation of AI policies can help ensure that guidelines are practical, widely understood, and reflective of diverse perspectives (Kezar, 2014). Inclusive governance processes can also build buy-in and support for policy implementation.
  • Invest in structured professional development. While self-directed learning has value, institutions should also provide formal training opportunities, workshops, and resources to support AI skill development. Professional development should be role-specific and address both technical skills and ethical considerations.
  • Vet and provide access to AI tools. To reduce shadow AI use, institutions should proactively identify, evaluate, and provide access to AI tools that meet institutional standards for privacy, security, quality, and environmental sustainability. Communicating the availability of approved tools and the reasons for their selection can help build trust and compliance.
  • Measure and communicate return on investment. Institutions should develop frameworks for evaluating the impact of AI investments, including both quantitative metrics (e.g., time savings, cost reductions) and qualitative assessments (e.g., user satisfaction, quality of work). Sharing findings with stakeholders can help build confidence and inform future decisions.
  • Update job descriptions and expectations. As AI-related responsibilities become more common, institutions should codify these duties in job descriptions to clarify expectations, ensure access to resources, and acknowledge the changing nature of work.
  • Address faculty-specific needs. Given the finding that nearly a quarter of faculty lack access to desired AI tools, institutions should assess and address the unique needs of faculty in different disciplines, ensuring equitable access and support.
  • Develop ethical guidelines for AI use. Institutions should develop and communicate ethical guidelines that address issues such as algorithmic bias, transparency, accountability, and the appropriate use of AI in high-stakes decisions (e.g., hiring, admissions, student evaluation).
  • Manage vendor relationships strategically. Institutions should develop procurement processes that evaluate AI tools for data privacy, security, accessibility, and ethical considerations. Building capacity for vendor negotiation and contract management is essential for protecting institutional interests.
  • Incorporate environmental considerations. Institutions should consider the environmental impact of AI adoption and explore strategies for mitigating impact, such as prioritizing energy-efficient tools and advocating for sustainable practices among vendors.

For Policymakers  

  • Support the development of sector-wide standards and best practices. National and regional higher education organizations, accreditors, and government agencies can play a role in developing guidelines, sharing exemplary practices, and fostering cross-institutional collaboration on AI governance.
  • Provide resources for workforce development. Policymakers should consider funding professional development initiatives, research on AI in higher education, and technical assistance for institutions seeking to improve their AI governance.
  • Promote transparency and accountability in AI tool development. Regulatory frameworks should encourage AI developers to provide clear information about data practices, algorithmic decision-making, environmental impact, and the limitations of their tools.
  • Encourage international collaboration. Given the global nature of AI technologies, policymakers should support international collaboration on standards, best practices, and research.

For Researchers  

  • Conduct longitudinal and comparative studies. Future research should examine how AI adoption and governance evolve over time, and compare approaches across different types of institutions, national contexts, and disciplinary fields.
  • Investigate the perspectives of underrepresented groups. Given the low proportion of faculty in the available data, targeted studies of faculty experiences with AI—as well as the perspectives of students, part-time employees, and other stakeholders—are needed.
  • Develop and test frameworks for evaluating AI impact. Researchers can contribute to practice by developing rigorous, practical approaches to measuring the impact of AI on higher education work, learning, and organizational outcomes.
  • Examine the role of vendors and technology companies. Research on the vendor-institution relationship and its implications for governance, data privacy, and educational values would be valuable.
  • Explore ethical and environmental dimensions. Additional research is needed on the ethical implications of AI use in higher education and the environmental footprint of AI adoption.

Conclusion

The integration of artificial intelligence into higher education workplaces is proceeding at a rapid pace, with the vast majority of employees now using AI tools for a wide range of work-related tasks. However, this widespread adoption has outpaced the development of institutional policies, governance structures, and support systems, creating what may be termed an “AI implementation gap.” The consequences of this gap are significant, including risks to data privacy and cybersecurity, uneven skill development, ethical concerns, environmental impact, and uncertainty about the value and impact of AI investments.
Addressing the AI implementation gap will require concerted effort from institutional leaders, policymakers, and researchers. Key priorities include developing and communicating clear policies, investing in professional development, vetting and providing access to AI tools, measuring return on investment, addressing ethical and environmental considerations, managing vendor relationships, and engaging stakeholders in governance processes. By taking these steps, higher education institutions can harness the benefits of AI while managing its risks and ensuring that the adoption of new technologies is guided by shared understanding, clear expectations, and a commitment to the values of the academy.
The current moment is one of both opportunity and uncertainty. AI technologies have the potential to automate routine tasks, enhance decision-making, and free faculty and staff to focus on higher-order work. At the same time, the risks of misinformation, data misuse, skill erosion, algorithmic bias, environmental harm, and job displacement are real and must be taken seriously. Navigating this landscape will require not only technical expertise but also organizational agility, transparent communication, ethical reflection, and a willingness to adapt as the technology and its implications continue to evolve.
As higher education communities continue to explore the impacts of AI on all aspects of institutional operations, the findings and recommendations presented in this article offer a foundation for evidence-based action. By bridging the gap between adoption and governance, institutions can position themselves to thrive in an era of rapid technological change while upholding their core missions of teaching, learning, research, and public service.

References

  1. Banta, T. W.; Blaich, C. Closing the assessment loop. Change: The Magazine of Higher Learning 2011, 43(1), 22–27.
  2. Bates, A. W. Teaching in a digital age: Guidelines for designing teaching and learning; BCcampus, 2015.
  3. Birnbaum, R. How colleges work: The cybernetics of academic organization and leadership; Jossey-Bass, 1988.
  4. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989, 13(3), 319–340.
  5. EDUCAUSE. 2025 EDUCAUSE AI landscape study; EDUCAUSE, 2025.
  6. Ertmer, P. A.; Ottenbreit-Leftwich, A. T. Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education 2010, 42(3), 255–284.
  7. Holmes, W.; Bialik, M.; Fadel, C. Artificial intelligence in education: Promises and implications for teaching and learning; Center for Curriculum Redesign, 2019.
  8. Kezar, A. Tools for a time and place: Phased leadership strategies to institutionalize a diversity agenda. The Review of Higher Education 2007, 30(4), 413–439.
  9. Kezar, A. How colleges change: Understanding, leading, and enacting change; Routledge, 2014.
  10. Kezar, A.; Eckel, P. D. The effect of institutional culture on change strategies in higher education: Universal principles or culturally responsive concepts? The Journal of Higher Education 2002, 73(4), 435–460.
  11. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L. B. Intelligence unleashed: An argument for AI in education; Pearson, 2016.
  12. Macfarlane, B. The morphing of academic practice: Unbundling and the rise of the para-academic. Higher Education Quarterly 2011, 65(1), 59–73.
  13. Marginson, S. Higher education and the common good; Melbourne University Publishing, 2016.
  14. Mollick, E.; Mollick, L. Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts; The Wharton School Research Paper, 2023.
  15. Palmer, K. Data shows AI ‘disconnect’ in higher ed workforce; Inside Higher Ed, 13 January 2026.
  16. Popenici, S. A. D.; Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning 2017, 12(1), 1–13.
  17. Pressman, J. L.; Wildavsky, A. Implementation: How great expectations in Washington are dashed in Oakland, 3rd ed.; University of California Press, 1984.
  18. Robert, J. The impact of AI on work in higher education; EDUCAUSE, 2026.
  19. Rogers, E. M. Diffusion of innovations, 5th ed.; Free Press, 2003.
  20. Sabatier, P. A. Top-down and bottom-up approaches to implementation research: A critical analysis and suggested synthesis. Journal of Public Policy 1986, 6(1), 21–48.
  21. Selwyn, N. Distrusting educational technology: Critical questions for changing times; Routledge, 2014.
  22. Selwyn, N. Education and technology: Key issues and debates, 2nd ed.; Bloomsbury Academic, 2016.
  23. Selwyn, N. Should robots replace teachers? AI and the future of education; Polity Press, 2019.
  24. Selwyn, N. The future of AI and education: Some cautionary notes. European Journal of Education 2022, 57(4), 620–631.
  25. Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics; 2019; pp. 3645–3650.
  26. Tierney, W. G. The impact of culture on organizational decision making: Theory and practice in higher education; Stylus Publishing, 2008.
  27. Tondeur, J.; van Braak, J.; Ertmer, P. A.; Ottenbreit-Leftwich, A. Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educational Technology Research and Development 2017, 65(3), 555–575.
  28. Tsai, Y.-S.; Perrotta, C.; Gašević, D. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education 2020, 45(4), 554–567.
  29. Venkatesh, V.; Davis, F. D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science 2000, 46(2), 186–204.
  30. Williamson, B.; Bayne, S.; Shay, S. The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education 2020, 25(4), 351–365.
  31. Zawacki-Richter, O.; Marín, V. I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education 2019, 16(1), 1–27.
Table 1. Summary of Key Findings.

Finding                                              Percentage
Used AI tools for work in past 6 months              94%
Aware of AI-related policies/guidelines              54%
Used AI tools not provided by institution            56%
Institution has work-related AI strategy             92%
Institution measuring ROI for AI tools               13%
Required to use AI tools for work                    11%
Want to use or continue using AI tools               86%
Identified 6+ urgent risks                           67%
Identified 5+ promising opportunities                67%
Feel confident using AI with current policies        56%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.