Preprint · Article · This version is not peer-reviewed.

Where Are the AI Governance Roles? An Early-Stage Empirical Mapping of Presence, Absence, and Structure in Organisational AI Oversight

A peer-reviewed version of this preprint was published in:
Businesses 2026, 6(2), 18. https://doi.org/10.3390/businesses6020018

Submitted: 13 February 2026 · Posted: 23 February 2026


Abstract
As AI technologies increasingly shape organisational decision-making, ethical frameworks and governance guidelines have been developed to ensure accountability, transparency, and responsible use. However, these governance structures largely assume that organisations have the formal capacity to oversee AI, without examining whether such capacity is actually present. Empirical evidence on how organisations actually govern AI—and where responsibility is simply absent—remains scarce. This paper offers an initial empirical mapping of formal AI governance responsibilities across diverse sectors and regions. It employs survey data from 351 organisations to investigate the presence of positions such as Chief Artificial Intelligence Officer (CAIO), AI Ethics Officer, Responsible AI Lead, Algorithmic Auditor, and AI Governance Committees. It further analyses how these roles vary across industries and geographies, as well as their structural characteristics, such as seniority, reporting relationships, authority, and available resources. The findings indicate that formal AI governance roles are not consistently implemented and, where they do exist, often lack the necessary authority, resources, and integration at a senior institutional level. Executive-level leadership roles and specialised audit functions are rare, and many organisations operate without any formal AI governance roles despite using AI technologies. The study outlines four profiles of governance maturity: Governance Absence, Symbolic Governance, Operational Governance, and Institutionalised Governance, highlighting that mature governance is the exception rather than the norm. By empirically assessing the presence or absence of AI governance, this research presents an absence-based viewpoint on AI ethics, indicating that ethical concerns often arise from inadequacies in governance design rather than from flaws in existing frameworks. These results establish a foundational empirical baseline for subsequent studies on how different AI governance models influence compliance, trust, and ethical risk.

1. Introduction

Artificial intelligence (AI) technologies are increasingly being adopted in organisations' decision-making processes across industries, with implications for outcomes in areas such as finance, healthcare, public administration, and digital markets. There has been a growing demand for comprehensive policy frameworks and ethical guidelines that prioritise accountability, transparency, fairness, and human oversight (Floridi & Cowls, 2019; OECD, 2019). Generally, these frameworks assume that organisations have the necessary internal capabilities to assign responsibility, supervise AI systems, and take action when ethical or regulatory issues emerge.
Nonetheless, it remains unclear whether organisations actually possess such governance capacities. There is growing advocacy among policymakers and practitioners for formal AI governance roles, such as Chief Artificial Intelligence Officers, AI Ethics Officers, and Algorithmic Auditors (Gopal et al., 2025). However, empirical evidence on the existence, structure, and responsibilities of these oversight roles remains thin.
This absence of empirical evidence matters. Discussions of ethical risk in AI systems generally emphasise concerns such as biased data, flawed models, or deficiencies within existing governance frameworks (Hanna et al., 2024; Papagiannidis et al., 2025). Yet ethical hazards may also originate earlier, in organisational design itself, where responsibility for AI is ambiguously defined, unassigned, or absent from the institution altogether (Hanna et al., 2024).
This paper seeks to address this gap by providing an initial, cross-sectoral, and multi-regional empirical analysis of formal AI governance responsibilities within organisations. Rather than evaluating the efficacy of governance processes or ethical outcomes, the study asks a more basic question: where is formal responsibility for AI located within organisations, and where is it missing?
The paper addresses this question by presenting empirical evidence of gaps in AI governance responsibility and reframing the discourse from one of governance failure to one of governance design. We document the presence and absence of AI governance positions, examine disparities across industries and regions, analyse the structural characteristics of these roles, and delineate typical governance maturity profiles. In doing so, the study empirically documents the absence of governance, establishing a baseline understanding of how organisational architecture shapes ethical risk and accountability in AI systems.
The paper is structured as follows: Section 2 reviews the literature on AI ethics and governance, emphasising the gap in how accountability for AI is allocated. Section 3 articulates the research questions, and Section 4 describes the methodology. Section 5 presents the results and Section 6 discusses them, followed by limitations and conclusions.

2. Literature Review

2.1. Normative AI Ethics and the Presumption of Organisational Capability

In the last decade, the rapid expansion of artificial intelligence has prompted significant scrutiny of its ethics and appropriate use. Central frameworks underscore accountability, transparency, fairness, explainability, and human oversight, as commonly defined in ethical guidelines and governance toolkits (Floridi & Cowls, 2019; OECD, 2019; European Commission, 2019). These initiatives have shaped the dialogue on ethical AI and influenced regulatory regimes.
However, the governance frameworks needed for implementation typically assume that organisations have the necessary structures to ensure responsibility, oversight, and corrective action for AI-related risks. Emerging positions such as Chief Artificial Intelligence Officer (CAIO), AI Ethics Officer, and Algorithmic Auditor are being introduced alongside traditional roles, including Chief Information Officer and Chief Risk Officer (Schmitt, 2024). This development indicates a shift towards formalising AI governance.
Despite recognition of these roles, there is limited empirical evidence on their actual adoption, their positioning within organisations, or the authority and resources they are granted. The literature frequently moves from ethical principles to assumed execution without investigating whether these governance positions exist or how accountability for AI oversight is resolved (Mittelstadt, 2019; Morley et al., 2020).
As a result, much of the AI ethics literature conflates the discussion of governance principles with actual organisational capacity. This raises an important question: how many organisations have the formal roles and structures needed to meet the ethical and governance standards expected of them? Addressing this requires a focus on the realities of role presence and the embedding of governance structures within organisations.

2.2. Gaps in Accountability, Responsibility, and Governance

Research on algorithmic accountability explores how responsibility is distributed within socio-technical systems. Accountability is fundamentally defined as a relationship in which an actor must explain and justify conduct to a forum and may face consequences (Bovens, 2007). In the domain of algorithms, the focus has been on transparency, auditability, and mechanisms for contestation and redress (Martin, 2019; Wieringa, 2020).
However, a considerable segment of this research assumes that accountable individuals are identifiable and hold institutional authority. Recent studies challenge this premise, emphasising that accountability in AI systems is often diffuse and fragmented across organisations (Rahwan, 2018; Burrell, 2016). Although these investigations underscore the difficulties of establishing accountability, they do not empirically assess whether organisations have implemented formal governance structures through which accountability could operate. This fragmentation raises the concern that accountability failures may stem not from inadequate management but from the absence of clearly delineated roles.

2.3. Organisational AI Governance and Role Emergence

A growing number of studies highlight the importance of AI governance in organisations, focusing on how firms and public institutions manage AI systems. Many reports recommend establishing roles such as Chief AI Officers, ethics boards, and responsible AI leads (World Economic Forum, 2023; Deloitte, 2022). Scholarly research underscores the necessity of integrating ethical and governance considerations into AI development (Shneiderman, 2020; Wirtz et al., 2019).
Nonetheless, empirical evidence on the existence and structure of these roles remains scarce. Most existing studies are confined to single cases, particular industries, or prominent organisations, offering little insight into broader patterns. Moreover, discussions of these roles often overlook essential aspects, including seniority, reporting relationships, authority, and resource availability.

2.4. Symbolic Governance and Institutional Adoption

Organisational and institutional theory provides a valuable perspective for analysing the disparity between governance principles and their practical implementation. The notion of institutional isomorphism posits that organisations frequently adopt formal structures mainly to project legitimacy to external stakeholders, rather than to implement substantive changes in their internal processes (Meyer & Rowan, 1977). Work in this tradition suggests that newly created roles may serve more as a façade than as effective governance, frequently lacking the authority and resources required for substantive oversight.
This perspective suggests that governance roles in AI may be established primarily for compliance and reputation, rather than functioning as genuine decision-making bodies. Nonetheless, there is an absence of empirical research that adequately distinguishes between symbolic and substantive AI governance frameworks.

2.5. Ethics of Absence and the Empirical Blind Spot

The literature thus reveals a notable blind spot concerning how accountability for AI is allocated within organisations. Although frameworks delineate ethical standards and responsibilities, the structure of duty remains ambiguous. This gap underscores the need for an absence-based ethics, which recognises that ethical hazards may arise not solely from governance failures but also from the absence of governance structures.
Collectively, the existing literature points to a significant empirical gap. Normative AI ethics frameworks outline the components of responsible governance, and accountability research explores how responsibility may be distributed; yet whether responsibility for AI is formally assigned within organisations at all remains structurally under-examined. This lack of evidence motivates the current study's focus on the presence, absence, and arrangement of AI governance roles within organisations, as formalised in the research questions below.

3. Research Questions and Objectives

There is growing discussion about the norms and policies for responsible and trustworthy AI, but empirical data on how organisations govern AI is lacking. Key roles in AI governance, such as Chief Artificial Intelligence Officer (CAIO), AI Ethics Officer, and Algorithmic Auditor, are poorly documented with respect to their distribution and authority. This research aims to provide a preliminary mapping of AI governance roles across different organisations, sectors, and regions. The research aims to achieve two primary objectives:
  • Record the presence or absence of formal AI governance roles within organisations to provide an empirical baseline.
  • Examine the structural organisation of AI governance responsibilities and identify prevalent governance configurations.
These objectives are operationalised through the following research questions.
Research Questions
  • RQ1: Prevalence:
To what extent have organisations implemented official AI governance positions such as Chief Artificial Intelligence Officer, AI Ethics Officer, and Algorithmic Auditor?
  • RQ2: Sectoral Variation:
How does the adoption of AI governance roles differ across industries with varying risk intensity and complexity, such as finance, healthcare, telecom, retail, logistics, and the public sector?
  • RQ3: Geographical Variation:
How does role adoption differ across geographical regions: Europe, North America, Asia-Pacific, Africa, Latin America, and the Middle East?
  • RQ4: Structural Characteristics of Roles:
Where these roles exist, what are their organisational characteristics in terms of seniority, reporting structures, mandate, and resource allocation?
  • RQ5: Governance Maturity Profiles:
What patterns or levels of governance maturity emerge in the distribution of AI governance roles and structures across organisations?
Contribution of the Research Questions
This study provides a first multi-regional baseline for understanding how organisations govern AI through defined roles. Rather than assessing outcomes, it emphasises governance presence, structural integration, and configuration, underscoring how responsibility for AI is allocated and where it is missing.
By identifying distinct AI governance maturity profiles, the study sets the stage for future research on the relationship between governance structures and outcomes, including trust, compliance, ethical risk, and organisational performance.

4. Methodology

4.1. Research Design

This study uses a quantitative, cross-sectional research design to map formal AI governance roles in organisations. It aims to descriptively and exploratively analyse roles such as Chief Artificial Intelligence Officer (CAIO), AI Ethics Officer, Responsible AI Lead, and Algorithmic Auditor.
Given the absence of an existing dataset on AI governance roles, the goal is not to achieve statistical generalisation but to establish a foundational empirical baseline. The study examines (i) the existence of these roles, (ii) their adoption across various sectors and regions, and (iii) how they are integrated within organisations. This method enables exploratory inference and typology creation, acknowledging that results are suggestive rather than representative of the entire population.
A survey-based design is appropriate for three primary reasons: the focus is on organisational governance roles rather than individual attitudes; a cross-sectional methodology facilitates systematic comparisons across industries and regions; and key governance attributes can be captured systematically through structured self-reporting by organisational representatives.
The primary unit of analysis is the organisation itself, not individual respondents.

4.2. Sampling Strategy and Scope

Sampling Approach

The research uses a targeted multi-sector sampling strategy, supplemented by snowball sampling, to reach organisations in AI-intensive areas and senior respondents involved in AI activities. This approach focuses on a wide range of sectors and regions rather than aiming for representativeness within any specific industry or location, aligning with the study’s exploratory goals and enabling comparisons across sectors and regions.

Identification of Organisations and Recruitment of Participants

Due to the absence of a global register of organisations with explicit AI governance responsibilities, entities were identified pragmatically through a targeted mapping approach. Over seven months, we assembled a list using publicly available resources and professional networks. Our approaches included:
  • Exploring corporate websites with an emphasis on sections related to AI, Data, Innovation, Digital, Risk & Compliance, and Governance.
  • Researching professional networking and job-search websites (such as LinkedIn, Indeed, Glassdoor, Google Careers and company career pages) to identify roles or governance functions related to AI.
  • Consulting publicly available business profile websites (e.g., Crunchbase, Bloomberg company profiles, and Reuters company profiles) to validate organisational context and AI-facing functions.
  • Reviewing organisational newsrooms, press releases, annual reports, ESG/sustainability reports, and statements on corporate governance.
We carried out specific keyword searches using combinations like “responsible AI lead,” “AI governance committee,” “AI ethics officer,” “model risk AI,” “algorithm/audit,” “CAIO,” “AI risk compliance,” and “AI policy,” along with sector and country-specific terms.
Organisations were included on our list if they demonstrated active use of AI or governance relevance, such as automation initiatives, responsible AI declarations, or ongoing recruitment for AI governance roles. We then selected potential survey participants based on their seniority and connection to AI-related operations, with a focus on roles in technology, data, innovation, risk, and governance. The survey link (Google Forms) was distributed through direct email invitations, and recipients were encouraged to forward it to other qualified representatives to extend our reach. This strategy emphasised both breadth and relevance within organisational contexts, aligning with the study's initial purpose of empirical mapping.
All identification was based entirely on publicly accessible information and direct engagement.

Sectoral Coverage

Organisations were selected from a diverse range of sectors where the adoption of artificial intelligence is either well-established or expanding rapidly, including:
  • Banking, financial services, and insurance
  • Healthcare and pharmaceuticals
  • Telecommunications and technology
  • Retail and e-commerce
  • Logistics and supply chain
  • Manufacturing and mining
  • Energy and utilities
  • Consulting and professional services
  • Public sector and government institutions
This selection facilitates the examination of how regulatory impact, operational risk, and organisational complexity influence the establishment and structuring of AI governance roles.

Geographical Coverage

Responses were collected from organisations headquartered or operating across six major regions:
  • Europe
  • North America
  • Asia-Pacific
  • Africa
  • Latin America and the Caribbean
  • Middle East
This global coverage supports an assessment of geographic variation and disparities in the adoption of AI governance roles.

4.3. Respondents and Inclusion Criteria

To ensure validity at the organisational level, respondents were required to meet the following criteria:
  • Hold a senior management or leadership position overseeing AI implementation, digital strategy, data governance, risk management, compliance, or innovation (e.g., CIO, CTO, CDO, Head of Data, Head of Risk/Compliance, Innovation Lead).
  • Represent an organisation that either actively employs AI systems or is in the process of implementing AI/ML technology.
  • Have a comprehensive understanding of the organisation's governance structures, reporting systems, and policy requirements pertaining to AI.
These criteria help ensure that responses reflect organisational structures and governance arrangements rather than personal perspectives or interpretations.

4.4. Data Collection and Survey Instruments

Data were gathered using a standardised online questionnaire created in Google Forms, tailored for this study to evaluate organisational-level AI governance roles and their integration. The survey comprised six sections:
  • Organisational profile (sector, size, geography);
  • Status of AI adoption and functional applications;
  • Presence of formal AI governance positions;
  • Characteristics of the highest AI governance position;
  • Governance resources; and
  • AI governance maturity and contextual application.
The survey used categorical and ordinal response formats suitable for early-stage empirical mapping (Dillman et al., 2014). Organisational profiles were captured with single-choice categorical items. AI adoption status was assessed using a three-category item: currently using AI/ML; not using AI; or planning to adopt within 18 months.

Measurements

The study measured formal AI governance roles by asking whether organisations held specific positions, such as Chief Artificial Intelligence Officer (CAIO), AI Ethics Officer, Responsible AI Lead, Algorithmic Auditor, AI Risk/Compliance Officer, and AI Governance Committees. Respondents answered “Yes,” “No,” or “Not sure” for each role, highlighting both the absence of roles and any uncertainty regarding governance. They were also asked to indicate the total number of distinct AI governance roles in their organisation (0, 1, 2, 3, 4+, or “Not sure”).
Respondents provided information about the highest AI governance role's seniority (e.g., C-suite, VP/Director, senior manager, manager, or committee without a formal leader), its reporting line (e.g., CEO, board of directors, CIO/CTO, CRO, CCO, CDO, HR executive), and its mandate. The mandate was assessed using a checklist of governance functions related to responsible AI, including AI strategy, AI ethics, implementation of Responsible AI, risk management, compliance, algorithmic auditing, data governance oversight, and internal AI literacy/training.
Governance capacity was measured by looking at resources and decision-making authority. Resources were classified based on whether AI governance had a dedicated team (5+ people), a small team (2–4 people), a single individual, shared responsibilities, no dedicated resources, or an uncertain status. Decision-making authority was assessed on a scale ranging from high authority (the ability to approve or veto AI systems) to moderate authority (an advisory role), low authority (limited to documentation/compliance), and very low authority (a symbolic role).
The survey included a checklist of governance mechanisms such as AI ethics policies, risk assessment processes, model documentation standards, algorithmic audits, bias monitoring, incident reporting, cross-functional governance, and training programs. Respondents could also select options for the absence of these mechanisms or indicate uncertainty.
Respondents assessed their organisation's AI governance maturity on a five-point Likert scale (1–5). The survey also gathered data on AI usage intensity and risk exposure, asking about the number of AI/ML systems deployed (1–3, 4–10, 11–20, 21–50, more than 50, or “Not sure”) and the criticality of AI systems (low-supportive, medium-operational, high-stakes, mixed, or “Not sure”).
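To make the coding scheme above concrete, the sketch below shows one plausible way to convert the categorical authority and resource responses into ordinal scores for later analysis. It is a minimal illustration in Python/pandas; the column names (decision_authority, governance_resources) and the mapping choices are hypothetical assumptions, not the authors' actual codebook.

```python
import pandas as pd

# Hypothetical ordinal mappings; label strings follow the survey options above.
AUTHORITY_SCORES = {
    "Very low: Symbolic role": 1,
    "Low: Limited to documentation/compliance": 2,
    "Moderate: Advisory role but influential": 3,
    "High: Can veto or approve AI systems": 4,
}
RESOURCE_SCORES = {
    "No dedicated resources": 0,
    "Shared responsibilities across teams": 1,
    "Single individual": 2,
    "Small team (2–4 people)": 3,
    "Dedicated team (5+ people)": 4,
}

def encode_governance_capacity(df: pd.DataFrame) -> pd.DataFrame:
    """Convert categorical survey responses into ordinal capacity scores.

    Unmapped responses such as "Not sure" become NaN, so they can be
    retained for visibility analyses but dropped from ordered comparisons,
    mirroring the handling described in Section 4.5.
    """
    out = df.copy()
    out["authority_score"] = out["decision_authority"].map(AUTHORITY_SCORES)
    out["resource_score"] = out["governance_resources"].map(RESOURCE_SCORES)
    return out
```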

4.5. Data Preparation and Analytical Strategy

After data collection, we screened responses for completeness and consistency, using each survey response as a single organisational observation. We conducted descriptive analyses on the entire sample and structural analyses on organisations with at least one formal AI governance role. Responses labelled “Not sure” were included to highlight governance visibility gaps, while cases with missing or ambiguous responses on key variables were excluded for analyses needing ordered interpretation, reducing the sample size for some analyses.
The analysis proceeded in five phases, one for each research question (a minimal sketch of the descriptive phases follows this list).
  • RQ1 (prevalence) employed descriptive statistics to establish baseline adoption rates for AI governance positions.
  • RQ2 (sectoral variation) entailed cross-tabulations and percentage comparisons across industries.
  • RQ3 (geographical variation) was assessed using regional distributions and heatmap visualisations.
  • RQ4 (structural characteristics) focused on the seniority, reporting lines, mandates, authority levels, and resources of the top AI governance role in each organisation.
  • RQ5 (governance maturity profiles) utilised exploratory cluster analysis to identify common governance configurations based on a standardised set of indicators.
The analysis focuses on identifying patterns and differences without making causal claims or generalising findings to the larger population.
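As a minimal sketch of the descriptive phases (RQ1–RQ3), the snippet below computes role prevalence and group-level adoption shares with pandas. The column names (caio, sector, and so on) are hypothetical; the paper does not report its analysis code or variable naming.

```python
import pandas as pd

# One row per organisation; role columns hold "Yes" / "No" / "Not sure".
ROLE_COLS = ["caio", "ai_ethics_officer", "responsible_ai_lead",
             "algorithmic_auditor", "ai_risk_compliance", "governance_committee"]

def prevalence(df: pd.DataFrame) -> pd.Series:
    """RQ1: share of organisations reporting each role as present."""
    return (df[ROLE_COLS] == "Yes").mean().sort_values(ascending=False)

def adoption_share(df: pd.DataFrame, role: str, by: str) -> pd.Series:
    """RQ2/RQ3: percent of organisations per sector or region with a role.

    "No" and "Not sure" are both treated as lack of a confirmed role,
    one plausible reading of how Table 3 reports confirmed presence.
    """
    return df.groupby(by)[role].apply(lambda s: (s == "Yes").mean() * 100)
```

A Table-3-style region-by-role matrix could then be assembled with pd.concat({r: adoption_share(df, r, "region") for r in ROLE_COLS}, axis=1).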

4.6. Cluster Analysis and Typology Development

To identify governance maturity profiles, we used cluster analysis on standardised governance indicators, including role formalisation, structural embedding, governance capacity, mandate breadth, AI adoption status, and self-assessed governance maturity.
We used a clustering solution based on interpretability, coherence, and relevance rather than statistical optimisation (Dubes & Jain, 1980). This approach functions as a heuristic typology that illustrates prevalent governance configurations rather than offering a definitive classification. This methodology is appropriate for preliminary mapping research intended to generate profiles that facilitate subsequent hypothesis formulation and evaluation (Dillman et al., 2014). The complete survey instrument, along with the corresponding response options, is provided in Appendix A.
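Because Section 4.6 names cluster analysis but not a specific algorithm, the sketch below uses k-means on standardised indicators as one common instantiation, with k = 4 fixed to mirror the four-profile typology. The indicator column names are hypothetical; this is a sketch under those assumptions, not the authors' exact procedure.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator columns mirroring the list in Section 4.6.
INDICATORS = ["role_count", "authority_score", "resource_score",
              "mandate_breadth", "ai_adoption", "self_rated_maturity"]

def governance_profiles(df: pd.DataFrame, k: int = 4, seed: int = 0) -> pd.DataFrame:
    """Exploratory clustering of organisations on standardised indicators.

    k = 4 is imposed a priori to mirror the four-profile typology; the
    paper instead judges candidate solutions on interpretability and
    coherence, not on fit statistics alone.
    """
    X = df[INDICATORS].dropna()                    # complete cases only
    Z = StandardScaler().fit_transform(X)          # normalise all scales
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Z)
    # Per-cluster means on the original scales support Table-4-style labels.
    return X.assign(profile=labels).groupby("profile").mean()
```

Inspecting the per-cluster means, as in Table 4, is what allows the resulting profiles to be labelled and judged for interpretability.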

4.7. Ethical Consideration

The study adhered to the Belmont Report (1979) ethical guidelines and conducted an organisational survey of senior professionals using non-sensitive information. Participation was voluntary, with participants informed about the study's purpose and the use of aggregated data; no personal information was collected. Respondents could request a summary of the results, which would be kept separate from the analysis dataset. Data were analysed and presented in aggregate to minimise re-identification risks, ensuring safe anonymised reporting to participants.

5. Results

5.1. Prevalence: Sample Eligibility and AI Adoption Status

In this study, the level of AI adoption functions as a contextual variable rather than an outcome metric. It is used for subgroup analysis and governance maturity assessment, without implying any causal relationship between AI adoption and formal governance responsibilities.
Figure 1 illustrates the AI adoption status of the participating organisations (n = 351). A majority of respondents (73.2%) indicate that their organisation currently uses AI technologies, while 14.0% plan to adopt AI within the next 18 months.
Together, these groups account for over 87% of the sample, indicating that most organisations are currently facing or will soon encounter AI governance challenges.
This distribution strengthens the study's internal validity, as the findings on formal AI governance roles derive from surveyed organisations that face significant AI-related risks and accountability issues. The remaining 12.8% of surveyed organisations that do not use AI are included for exploratory comparisons and to assess early-stage governance among those yet to adopt AI.

5.2. Composition by Organisation Size

Figure 2 and Table 1 illustrate the breakdown of participating organisations by size (n = 351). The largest proportion consists of medium-sized organisations (50–249 employees) at 32.5%, followed by very large organisations (1,000–9,999 employees; 28.5%), large organisations (250–999 employees; 20.8%), small organisations (1–49 employees; 12.3%), and global enterprises with over 10,000 employees (6.0%).
This diverse composition means that no single organisational size dominates the sample, enabling a meaningful analysis of AI governance roles. Variations in size are important because they affect governance capacity, role formalisation, and resource availability.
The sample comprises a variety of organisational sizes, with most entities classified as medium or large. This corresponds with the study's objective of delineating AI governance roles within complex organisations that deploy and oversee AI systems. Despite the underrepresentation of global enterprises, the sample provides sufficient diversity to analyse governance patterns associated with scale, within the study's exploratory scope.
Figure 3 shows the regional distribution of organisations participating in the global AI governance survey (n = 351), with participation from Europe, North America, Asia-Pacific, Latin America and the Caribbean, Africa, and the Middle East. Europe accounts for the largest share of respondents (37.3%), followed by North America (21.7%) and Asia-Pacific (14.0%). Notably, regions that are typically underrepresented in empirical AI governance studies—including Africa (9.1%), Latin America and the Caribbean (10.3%), and the Middle East (7.7%)—together make up over a quarter of the sample.
This distribution aligns with the study's aim of providing an initial, multi-regional empirical overview of AI governance roles, rather than focusing on a specific region or economy. While the sample is predominantly from Europe and North America—indicative of their more developed regulatory frameworks and higher levels of AI integration—the considerable representation from emerging and developing regions enables meaningful comparative analyses. It highlights potential geographical disparities in AI governance uptake.
Consistent with the study's descriptive and exploratory emphasis, the regional proportions are regarded as indicative rather than representative of the population. Nonetheless, the extensive regional coverage strengthens the study by encompassing institutional variety across diverse regulatory, economic, and organisational contexts, laying a foundation for future comparative and outcome-oriented research. This compositional diversity also provides a basis for RQ2, which investigates disparities in AI governance roles across industries with respect to regulatory intensity, organisational complexity, and AI applications.

5.3. Sectoral Variation

Table 2 presents the sectoral distribution of the 351 organisations that took part in the worldwide AI governance study. The predominant sectors are banking, financial services, and insurance (19.37%), followed by the public sector and government organisations (12.25%), and consulting and professional services (10.26%). Retail and e-commerce account for 9.69%, whilst manufacturing and mining provide 9.12%. Healthcare and pharmaceuticals comprise 8.83%, while telecommunications and technology constitute 7.41%.
The presence of both highly regulated industries (finance, healthcare, the public sector, energy) and commercial sectors (retail, consulting, logistics, sports) enables meaningful comparisons in the implementation of AI governance roles. Differences in regulatory exposure, operational risk, and decision-making criticality across these industries shape the development of AI governance frameworks.
Although sectors such as hospitality, transportation, agriculture, and construction are underrepresented, their inclusion enhances the dataset's diversity. The sectoral proportions in the study should be interpreted as suggestive rather than representative of the global economy. Nevertheless, the diverse array of sectors enhances the examination of sector-specific governance trends and aids in recognising groups of AI governance maturity for future research.

5.4. Geographical Variation in AI Governance Roles

Table 3 shows the percentage of organisations in various regions that have created formal positions for AI governance. Figure 4 provides a heatmap representation of these results to highlight regional patterns and differences.
The percentages in Table 3 reflect the proportion of organisations in each region that have reported the existence of the specified AI governance role (n = 351).
Regional variations in formal AI governance roles are pronounced. North America and Europe demonstrate higher levels of institutionalisation, especially through the establishment of Responsible AI Leads and AI Governance Committees. North America is particularly notable, with the highest proportion of Responsible AI Leads at 40.8%, indicating a greater emphasis on operational accountability than on executive participation.
In the Asia-Pacific region, there is a marked inclination towards AI Governance Committees (38.8%) and Chief AI Officers (18.4%), indicating a more committee-based approach to AI oversight. Europe presents a balanced profile, with Responsible AI Leads at 27.5% and governance committees at 26.7%, reflecting its emphasis on formal accountability.
Conversely, Africa and Latin America/Caribbean show much lower levels of executive AI governance roles, with few Chief AI Officers or Algorithmic Auditors reported. Governance responsibilities in these areas tend to be more operational and centralised around Responsible AI Leads or risk-focused positions. The scarcity of Algorithmic Auditors worldwide underscores the uneven development of formal auditing mechanisms for AI systems.
Figure 4's heatmap highlights notable regional inequalities in the creation of official AI governance roles. North America and Europe exhibit significantly higher levels of institutionalisation, particularly among Responsible AI Leads and AI Governance Committees. North America exhibits the highest density of Responsible AI Leads, signifying a governance strategy that emphasises operational accountability and integrates these positions into AI development and execution frameworks.
These results represent preliminary patterns rather than precise population estimates. The heatmap provides an initial empirical visualisation of variation in organisational AI governance structures across regional contexts. It underscores the influence of geographical context, regulatory sophistication, and organisational capability on the formation of AI governance positions. Although preliminary, these insights provide a foundation for understanding spatial disparities in AI governance.
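For reproducibility, a Figure-4-style heatmap can be generated directly from Table 3. The sketch below uses matplotlib (the paper does not state which plotting tool was used); the values are copied from Table 3.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Table 3 values: percent of organisations per region reporting each role.
table3 = pd.DataFrame(
    {"CAIO": [0.0, 18.4, 13.0, 5.6, 7.4, 11.8],
     "AI Ethics Officer": [3.1, 12.2, 8.4, 2.8, 7.4, 2.6],
     "Responsible AI Lead": [21.9, 22.4, 27.5, 22.2, 22.2, 40.8],
     "Algorithmic Auditor": [0.0, 4.1, 3.8, 0.0, 0.0, 1.3],
     "AI Risk / Compliance Officer": [15.6, 2.0, 11.5, 27.8, 18.5, 11.8],
     "AI Governance Committee": [18.8, 38.8, 26.7, 19.4, 18.5, 27.6]},
    index=["Africa", "Asia-Pacific", "Europe",
           "Latin America / Caribbean", "Middle East", "North America"])

fig, ax = plt.subplots(figsize=(8, 4))
im = ax.imshow(table3.values, cmap="YlGnBu")  # darker cells = higher adoption
ax.set_xticks(range(table3.shape[1]), labels=table3.columns,
              rotation=45, ha="right")
ax.set_yticks(range(table3.shape[0]), labels=table3.index)
fig.colorbar(im, ax=ax, label="% of organisations")
fig.tight_layout()
plt.show()
```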

5.5. Structural Characteristics of AI Governance Roles

The frequencies in Panels A–E below are drawn from surveyed organisations that reported at least one established AI governance role; percentages are expressed relative to the full sample (n = 351), so panel totals fall below 100%.
Panel A. Seniority of the highest AI governance role Frequency %
Vice President / Director 95 27.07%
Senior Manager 58 16.52%
Manager 48 13.68%
Committee without formal leader 43 12.25%
C-suite (CAIO, CIO/CTO with explicit AI mandate) 22 6.27%
Not sure 18 5.13%
Total 284 80.91%
Invalid 67 19.09%
Total 351 100%
Panel B. Reporting Line (Reports to) Frequency %
Not sure 79 22.51%
Chief Risk Officer 48 13.68%
CIO / CTO 32 9.12%
HR Executive 30 8.55%
Board of Directors 29 8.26%
CEO 27 7.69%
Chief Compliance Officer 21 5.98%
Chief Data Officer 16 4.56%
Total 282 80.34%
Invalid 69 19.66%
Total 351 100%
Panel C. Primary Mandate %
Responsible AI implementation (operational focus) 12.1%
AI strategy (strategic focus) 11.7%
Combined ethics and Responsible AI 5.1%
AI risk management (including compliance) 7.4%
Mixed or multi-functional mandate* 63.7%
Total 100%
*Roles that combine multiple functions, including strategy, ethics, audit, risk, compliance, training, or data governance.
Panel D. Resources allocated to AI governance Frequency %
Small team (2–4 people) 69 19.66%
Dedicated team (5+ people) 65 18.52%
Not sure 57 16.24%
Shared responsibilities across teams 56 15.95%
No dedicated resources 24 6.84%
Single individual 13 3.7%
Total 284 80.91%
Invalid 67 19.09%
Total 351 100%
Panel E. Decision-making authority (level of authority) Frequency %
Moderate: Advisory role but influential 113 32.19%
High: Can veto or approve AI systems 58 16.52%
Low: Limited to documentation/compliance 56 15.95%
Very low: Symbolic role 39 11.11%
Not sure 19 5.41%
Total 285 81.2%
Invalid 66 18.8%
Total 351 100%
RQ4 examines the integration of AI governance roles within organisations. The findings reveal that, when such roles are present, they tend to sit at the vice-presidential or managerial level rather than in the C-suite: only about 7% of organisations report an executive-level AI governance position with clear strategic authority.
AI governance is primarily situated within risk, technology, or compliance departments rather than as a standalone governance body. Although some organisations report board-level oversight, most rely on indirect methods.
Mandates for these roles vary significantly, often covering ethics, risk, compliance, and operational oversight. Resource allocation is inconsistent; while roughly half of the organisations with governance roles report a dedicated or small AI governance team, many others operate with shared or under-resourced arrangements. Decision-making authority is mainly advisory, indicating that the existence of formal roles does not guarantee effective governance.

5.6. Governance Maturity Profiles

To identify governance maturity profiles (RQ5), we conducted an exploratory cluster analysis using a standardised set of governance indicators, including role formalisation, structural embedding (encompassing seniority, decision authority, and resources), mandate breadth (number of governance functions), AI adoption status, and self-assessed governance maturity. All variables were normalised prior to analysis to ensure comparability across scales. In line with established methodological recommendations, the cluster solution was chosen on the basis of interpretability, internal coherence, and conceptual relevance, rather than purely statistical optimisation (Dubes & Jain, 1980; Everitt et al., 2011).
Consequently, the four-cluster solution is designed as a heuristic typology that illustrates common governance configurations rather than serving as a definitive or comprehensive classification (Table 4).
In Table 4, role formalisation is assessed by counting the number of unique AI governance positions identified by the organisation (ranging from 0 to 6), and the result is standardised for cluster analysis.
Interpretation
The cluster solution indicates that organisations differ not only in the presence of a Chief AI Officer (CAIO) or an ethics officer, but also in their governance configurations, which vary in terms of role formalisation, embedding, resourcing, and authority.
The Institutionalised Governance profile encompasses organisations that exhibit robust governance characterised by significant authority, ample resources, and a sense of maturity. In contrast, Governance Absence denotes that organisations lack formal roles for AI oversight, resulting in unallocated responsibility.
Two intermediate profiles are more common: Symbolic Governance and Operational Governance. Symbolic Governance is characterised by roles that are only weakly institutionalised, often lacking adequate resources and authority, indicating that governance is primarily nominal. Operational Governance consists of organisations that have defined governance roles within their operational divisions and exhibit a moderate level of authority and resources, yet lack the senior-level backing found in the Institutionalised category.
Overall, mature and institutionalised AI governance is rare; many organisations fall into transitional or low-capacity governance configurations. This typology offers a basis for benchmarking across sectors and regions and paves the way for future research on how these governance profiles affect compliance, stakeholder trust, and ethical incident rates.
These governance maturity profiles indicate that the term “absence” should be understood not merely as non-adoption, but as a significant ethical condition in which accountability is unclear or not assigned.

6. Discussion

This research addresses an important gap in the field of AI governance by investigating not only the existence of formal AI governance positions but also how these positions are organised, resourced, and empowered within organisations. The results provide an initial empirical overview across sectors and regions, delivering insights for both theory and practice. They demonstrate that the effectiveness of governance is shaped more by the authority, seniority, and institutional backing of these roles than by their mere existence.

6.1. The Presence of an AI Role Does Not Equate to Governance Capacity

The findings from RQ1 to RQ3 demonstrate that formal AI governance positions are inconsistently implemented and often weakly constituted within organisations. Roles such as Responsible AI Lead and AI Governance Committee are present in only a minority of firms, whereas senior positions such as Chief Artificial Intelligence Officer and specialised roles such as Algorithmic Auditor are rare across all regions. This challenges the assumption in many AI governance discussions that organisations have pre-emptively allocated oversight tasks (Floridi & Cowls, 2019; Mittelstadt, 2019).
Importantly, having AI governance roles does not guarantee effective governance. RQ4 indicates that when such roles are present, they typically sit below the executive tier, attached to established technology, risk, or compliance functions, and are confined to advising rather than deciding. Resources assigned to these positions are frequently limited, and their mandates are broad yet shallow, spanning ethics, risk, and operational monitoring without defined priorities. Previous research supports the idea that governance roles may be introduced as a formality without real changes in authority or organisational power (Meyer & Rowan, 1977; Cummings, 2021). The observed patterns should be read not as governance failures but as signals of incomplete or deferred governance design.

6.2. A Substantial Share of Organisations Govern AI Through Structural Absence

Building on the cluster-based findings from RQ5, governance absence emerges not as a marginal case but as a widespread organisational condition. A central empirical finding of the study is that many organisations operate without defined AI governance roles despite actively using AI systems. The Governance Absence profile identified in RQ5 suggests that ethical and accountability issues frequently arise not from governance failures but from the absence of governance design.
This finding reframes ethical risk from a failure event to a structural organisational condition. Traditional discourse on algorithmic accountability generally emphasises shortcomings in existing governance structures (Bovens, 2007; Martin, 2019), implicitly presupposing that responsibility has been allocated. The present findings reveal that, in many organisations, oversight of AI is not formally assigned, leaving a governance void in which ethical concerns are relegated to informal practices or reactive compliance measures (Rahwan, 2018).

6.3. Symbolic and Operational Governance Take Precedence over Institutionalised Models

Aside from complete absence, most organisations conform to one of two governance profiles: Symbolic Governance or Operational Governance.
Symbolic Governance refers to organisations that have established AI governance roles or mandates but allocate limited authority and resources to these positions. Operational Governance denotes a more engaged approach, in which governance responsibilities are embedded within operational units and supported by sufficient resources. Nonetheless, it frequently suffers from insufficient executive support and treats AI governance as a technical matter rather than a strategic concern, consistent with observations from both public- and private-sector AI applications (Shneiderman, 2020; Wirtz et al., 2019).

6.4. Fully Institutionalised AI Governance Remains the Exception

The Institutionalised Governance profile, marked by multiple formal roles, higher authority, dedicated resources, and greater governance maturity, is found in only a small number of organisations. Even among these, governance authority is not consistently centralised at the executive level, and formal audit mechanisms are often lacking.
This suggests that claims about the rapid professionalisation of AI governance through executive positions and oversight frameworks may be exaggerated (World Economic Forum, 2023; Deloitte, 2022). Rather, the level of institutionalisation varies widely, shaped by factors such as organisational size, regulatory demands, and regional influences, rather than by a consistent governance trajectory.

6.5. Governance Maturity Reflects Structural Design, Not Role Titling

The findings demonstrate that AI governance maturity is more strongly influenced by structural design decisions than by job titles. Organisations with identical function labels may possess varying governance capacities, influenced by characteristics such as seniority, reporting structures, decision-making authority, and resource distribution. This underscores the necessity of viewing governance as a set of structural components rather than merely a catalogue of roles.
The study also distinguishes four configurations of AI governance—absence, symbolic adoption, operational embedding, and institutionalisation—thereby enhancing comprehension of how governance is enacted or evaded within organisations.

6.6. Implications for Theory, Management, and Policy

The research advances scholarship on AI governance in three key respects. First, it shifts from prescriptive frameworks to actual governance practices, underscoring areas of missing governance and lapses in accountability for AI. Second, it presents an absence-based ethics, demonstrating that ethical concerns frequently arise from non-design rather than from inadequate decision-making, thereby expanding the discourse on accountability in algorithmic systems (Bovens, 2007; Martin, 2019). Third, it offers a typology of AI governance maturity profiles that underpins future study.
Future studies should investigate how various governance profiles correlate with outcomes, including regulatory compliance, ethical incidents, stakeholder trust, and organisational performance. Longitudinal studies can examine how governance profiles change over time, elucidating the circumstances under which symbolic governance transitions into more institutionalised forms.
From a managerial perspective, the results suggest that establishing an AI governance position without adequate authority, resources, or a specific mandate may provide minimal safeguards against ethical and regulatory risks. Organisations should perceive AI governance as a structural design problem rather than simply a position to occupy.
For policymakers, the prevalence of governance absence and merely symbolic compliance highlights the risk of assuming that organisations are prepared for regulation. Strategies that rely exclusively on internal governance may leave the scope of AI oversight obligations ambiguous, particularly in less-regulated industries.

7. Limitations

This work provides an initial empirical mapping of AI governance roles and structures; several limitations should nonetheless be acknowledged.
The study used a cross-sectional survey approach, capturing organisational governance at a single point in time; the findings therefore cannot capture temporal change. Longitudinal studies are necessary to understand how organisations move from absent governance to formal structures.
The sampling technique emphasises extensive representation across many sectors and geographies instead of guaranteeing statistical representativeness within specific groups. This method is suitable for preliminary mapping; nevertheless, it indicates that the prevalence figures are indicative rather than definitive.
The data derive from self-reports provided by senior respondents. Despite attempts to ensure that respondents had adequate oversight of AI operations, perceptions of governance maturity and authority may differ among individuals and organisations. Subsequent research may augment survey results with document analysis, organisational charts, or regulatory disclosures.
This research examines the existence and organisation of AI governance roles, rather than their efficacy or results. Consequently, it does not establish causal linkages between governance profiles and outcomes, such as ethical incidents or regulatory compliance. These constraints correspond with the study's exploratory objective and point to directions for future research to build on the findings presented here.

8. Conclusions

This study offers a cross-sectoral and multi-regional examination of formal AI governance responsibilities within organisations. It documents the presence, structure, and gaps of these positions, shifting the emphasis of AI governance research from theoretical proposals to organisational realities.
The results indicate that formal AI governance positions are inconsistently implemented and often weakly established. Senior leadership roles and specialised audit functions are rare, with many organisations relying on advisory or operational positions that possess restricted authority and resources. Many organisations operate without designated AI governance despite extensive use of the technology.
The research delineates four governance maturity profiles: Governance Absence, Symbolic Governance, Operational Governance, and Institutionalised Governance. This classification indicates that mature governance structures are uncommon, with most organisations exhibiting transitional or low-capacity arrangements.
The paper advances the concept of absence-based ethics, emphasising that ethical concerns often emerge prior to any governance failure, namely at the point where governance is never structurally assigned. This challenges prevalent assumptions in AI ethics and accountability discourse that presume the existence of formal governance.
The findings caution organisations against treating AI governance as a trivial formality: effective governance requires authority, resources, and explicit structural coherence. They also suggest that policymakers' expectations regarding organisational readiness for governance may be unfounded, especially in less-regulated sectors.
This study establishes a solid foundation for understanding how organisations administer—or fail to administer—AI systems. By emphasising governance deficiencies, it facilitates further research into the effects of various governance structures and their correlations with ethical, regulatory, and organisational outcomes. The governance maturity profiles developed in this study offer a significant empirical foundation for subsequent inquiries into regulatory compliance, ethical considerations, and trust outcomes across various organisational and national contexts.

Appendix A

Questionnaire
1. Organisation Profile
What sector does your organisation operate in?
Banking / Financial / Insurance
Public Sector / Government
Consulting / Professional Services
Retail / E-commerce
Mining / Manufacturing
Healthcare / Pharmaceuticals
Telecommunications / Technology
Sports / Entertainment
Logistics / Supply Chain
Education
Energy / Utilities
Agriculture / Farming
Hospitality
Transportation
Building and construction
Size of your Organisation:
Small (1–49 employees)
Medium (50–249 employees)
Large (250–999 employees)
Very Large (1,000–9,999 employees)
Global Enterprise (10,000+ employees)
Location of organisation:
North America
Europe
Asia-Pacific
Africa
Middle East
Latin America / Caribbean
Does your organisation currently use AI or machine-learning systems?
Yes
No
Planning to adopt within 18 months
In which areas are AI systems currently being used or to be utilised?
Credit/loan decisioning
Fraud detection
Recruitment/HR screening
Medical diagnostics
Pricing/underwriting
Customer service (chatbots)
Marketing / Personalisation
Supply chain optimisation
Administrative automation
Cybersecurity
Not sure
Other:
2. Presence or Absence of AI Governance Roles
Does your organisation have any of the following formal AI governance roles?
Chief Artificial Intelligence Officer (CAIO)
AI Ethics Officer
Responsible AI Lead / Responsible AI Manager
Algorithmic Auditor / AI Audit Specialist
AI Risk or AI Compliance Officer
AI Governance Committee or Ethics Board
Other AI governance role(s). Please state: ____
How many distinct AI governance roles exist in your organisation?
0
1
2
3
4 or more
Not sure
3. Structural Characteristics of Roles
Seniority of the highest AI governance role:
C-suite (CAIO, CIO/CTO with explicit AI mandate)
Vice President / Director
Senior Manager
Manager
Committee without formal leader
Not sure
Who does this role/committee report to?
CEO
Board of Directors
CIO / CTO
Chief Data Officer
Chief Risk Officer
Chief Compliance Officer
HR Executive
Not sure
Other
What is the mandate of this role?
AI strategy
AI ethics
AI risk management
Algorithmic audit/model validation
Compliance with AI-related regulation
Data governance oversight
Responsible AI implementation
Staff training and AI literacy
Other:
Resources allocated to AI governance
Dedicated team (5+ people)
Small team (2–4 people)
Single individual
Shared responsibilities across teams
No dedicated resources
Not sure
How much decision-making authority does the role have?
High: Can veto or approve AI systems
Moderate: Advisory role but influential
Low: Limited to documentation/compliance
Very low: Symbolic role
Not sure
4. Governance Structures Supporting AI Roles
Which of the following AI governance mechanisms exist?
AI ethics policy
AI risk assessment process
Model documentation standards
Algorithmic audit routines
Bias monitoring or fairness checks
Incident reporting or escalation pathway
Cross-functional governance committee
Internal AI awareness/training program
None
Not sure
How would you describe your organisation’s AI governance maturity?
1 = very low, 5 = very high
5. AI Use Intensity and Context
Approximate number of AI/ML systems deployed in your organisation:
1–3
4–10
11–20
21–50
More than 50
Not sure
Criticality of AI systems
Low (non-critical, supportive automation)
Medium (customer-facing or operational systems)
High (decisions with legal, financial, medical, or safety implications)
Mixed
Not sure
Would you like to receive the final report or results summary?
Yes
No

References

  1. Bovens, M. Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal 2007, 13(4), 447–468. [Google Scholar] [CrossRef]
  2. Cummings, M. L. Rethinking the Maturity of Artificial Intelligence in Safety-Critical Settings. AI Magazine 2021, 42(1), 6–15. [Google Scholar] [CrossRef]
  3. Deloitte. State of AI in the Enterprise; Deloitte Australia, 2022. Available online: https://www.deloitte.com/au/en/services/consulting/research/state-of-ai-in-enterprise-2022.html.
  4. Dubes, R. C.; Jain, A. K. Clustering Methodologies in Exploratory Data Analysis; Elsevier EBooks, 1980; pp. 113–228. [Google Scholar] [CrossRef]
  5. European Commission. Ethics Guidelines for Trustworthy AI; European Commission, 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
  6. Everitt, B. S.; Landau, S.; Leese, M.; Stahl, D. Cluster Analysis; Wiley Series in Probability and Statistics, 2011. [Google Scholar] [CrossRef]
  7. Floridi, L.; Cowls, J. A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review 2019, 1(1). [Google Scholar] [CrossRef]
  8. Gopal, V.; Davenport, T. H.; Bean, R. Why Your Company Needs a Chief Data, Analytics, and AI Officer. Harvard Business Review, December 2025. Available online: https://hbr.org/2025/12/why-your-company-needs-a-chief-data-analytics-and-ai-officer.
  9. Hanna, M.; Pantanowitz, L.; Jackson, B.; Palmer, O.; Visweswaran, S.; Pantanowitz, J.; Deebajah, M.; Rashidi, H. Ethical and Bias Considerations in Artificial Intelligence/Machine Learning. Modern Pathology 2024, 38(3), 1–13. [Google Scholar] [CrossRef]
  10. Meyer, J. W.; Rowan, B. Institutionalized Organizations: Formal Structure as Myth and Ceremony. American Journal of Sociology 1977, 83(2), 340–363. [Google Scholar] [CrossRef]
  11. Martin, K. Ethical Implications and Accountability of Algorithms. Journal of Business Ethics 2019, 160, 835–850. [Google Scholar] [CrossRef]
  12. Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 2019, 1(11), 501–507. [Google Scholar] [CrossRef]
  13. Morley, J.; Floridi, L.; Kinsey, L.; Elhalal, A. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics 2020, 26(4). [Google Scholar] [CrossRef]
  14. OECD. Artificial Intelligence in Society; OECD Publishing, 2019. Available online: https://www.oecd.org/en/publications/artificial-intelligence-in-society_eedfee77-en.html.
  15. Papagiannidis, E.; Mikalef, P.; Conboy, K. Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems 2025, 34(2). [Google Scholar] [CrossRef]
  16. Rahwan, I. Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology 2018, 20(1), 5–14. [Google Scholar] [CrossRef]
  17. Schmitt, M. Strategic Integration of Artificial Intelligence in the C-Suite: The Role of the Chief AI Officer. arXiv 2024. [Google Scholar] [CrossRef]
  18. Shneiderman, B. Bridging the Gap between Ethics and Practice. ACM Transactions on Interactive Intelligent Systems 2020, 10(4), 1–31. [Google Scholar] [CrossRef]
  19. Wirtz, B. W.; Weyerer, J. C.; Geyer, C. Artificial Intelligence and the Public Sector—Applications and Challenges. International Journal of Public Administration 2019, 42(7), 596–615. [Google Scholar] [CrossRef]
  20. World Economic Forum. World Economic Forum Launches AI Governance Alliance Focused on Responsible Generative AI; 2023. Available online: https://www.weforum.org/press/2023/06/world-economic-forum-launches-ai-governance-alliance-focused-on-responsible-generative-ai/.
  21. Dillman, D. A.; Smyth, J. D.; Christian, L. M. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method; Wiley, 2014. [Google Scholar] [CrossRef]
  22. Office for Human Research Protections. The Belmont Report; U.S. Department of Health and Human Services, 1979. Available online: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html.
  23. Burrell, J. How the Machine "Thinks": Understanding Opacity in Machine Learning Algorithms. Big Data & Society 2016, 3(1). [Google Scholar]
  24. Wieringa, M. What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; ACM, 2020. [Google Scholar]
Figure 1. AI Adoption Status of Surveyed Organisations.
Figure 2. Surveyed Organisations by Size.
Figure 3. Geographical Distribution of Surveyed Organisations.
Figure 4. Regional Distribution of AI Governance Roles. Data from authors' survey (2026).
Table 1. Surveyed Organisations by Size.
Size of your Organisation Frequency %
Medium (50–249 employees) 114 32.48%
Very Large (1,000–9,999 employees) 100 28.49%
Large (250–999 employees) 73 20.8%
Small (1–49 employees) 43 12.25%
Global Enterprise (10,000+ employees) 21 5.98%
Total 351 100%
Table 2. Sectoral Distribution of Surveyed Organisations.
Sector Frequency %
Banking / Financial / Insurance 68 19.37%
Public Sector / Government 43 12.25%
Consulting / Professional Services 36 10.26%
Retail / E-commerce 34 9.69%
Mining / Manufacturing 32 9.12%
Healthcare / Pharmaceuticals 31 8.83%
Telecommunications / Technology 26 7.41%
Sports / Entertainment 24 6.84%
Logistics / Supply Chain 22 6.27%
Education 16 4.56%
Energy / Utilities 9 2.56%
Agriculture / Farming 6 1.71%
Hospitality 2 0.57%
Transportation 1 0.28%
Building and construction 1 0.28%
Total 351 100%
Table 3. Regional Variation in the Adoption of AI Governance Roles (%).
Region CAIO AI Ethics Officer Responsible AI Lead Algorithmic Auditor AI Risk / Compliance Officer AI Governance Committee
Africa 0.0 3.1 21.9 0.0 15.6 18.8
Asia-Pacific 18.4 12.2 22.4 4.1 2.0 38.8
Europe 13.0 8.4 27.5 3.8 11.5 26.7
Latin America / Caribbean 5.6 2.8 22.2 0.0 27.8 19.4
Middle East 7.4 7.4 22.2 0.0 18.5 18.5
North America 11.8 2.6 40.8 1.3 11.8 27.6
Table 4. AI Governance Maturity Profiles (Cluster Solution). Cluster centroids denote mean values on the standardised indicators; higher values indicate greater governance capacity (more authority, resources, and maturity).
Governance maturity profile n (%) Role formalisation % with no formal roles Governance authority (mean) Resources (mean) Maturity (mean) Profile interpretation
Institutionalised Governance 79 (22.5%) 1.76 1.3% 3.51 3.10 3.81 Enhanced governance with greater authority, resources, senior engagement, and broader mandates.
Governance Absence 72 (20.5%) 0.00 100.0% 0.06 0.00 2.94 There are no formal AI governance roles; governance is mainly informal and lacks a clear definition.
Symbolic Governance 87 (24.8%) 0.33 66.7% 1.46 1.11 1.92 Weak formalisation of responsibilities and limited governance capacity, with low authority and scant resources, indicating governance that prioritises appearances over substance.
Operational Governance 113 (32.2%) 1.23 5.3% 2.78 2.21 2.57 Governance is integrated into operational functions with some authority and resources, but lacks strong support from institutional or executive leadership.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.