Preprint
Article

This version is not peer-reviewed.

AI Governance and Ethics in Saudi Arabia: Building a Responsible Innovation Ecosystem

Submitted:

24 November 2025

Posted:

25 November 2025


Abstract
Adopting a secondary qualitative research design, this study investigates how Saudi Arabia can develop sound regulatory measures and ethical standards for responsible, inclusive, and sustainable adoption of artificial intelligence (AI) in the banking industry in line with Vision 2030. A thematic analysis of secondary literature published between 2019 and 2024 (academic articles, policy papers, and institutional reports) reveals that Saudi Arabia has established AI governance frameworks through SDAIA, SAMA, the National Cybersecurity Authority, and other bodies, broadly following international standards of transparency, accountability, and fairness. However, the sector still faces challenges in enforcement, interpretability, risk classification, and multi-stakeholder engagement, alongside regulatory fragmentation, cultural perceptions of privacy and trust, low data quality, cybersecurity threats, and internal skills gaps. Despite these challenges, state-led initiatives, fintech partnerships, responsible investment approaches, and innovation-oriented organizational cultures have become powerful facilitators of responsible AI use within the sector. Future studies should conduct interviews with regulators, bank executives, and AI developers to obtain more detailed, real-world insights into ethical AI adoption in Saudi banking.
Keywords: 

Introduction

Artificial intelligence (AI) has been recognized as one of the most transformative forces of the current era because of its capacity to reshape workplace dynamics and the ways business activities are carried out worldwide (Al-Dosari et al., 2024). It is being integrated to enhance operational efficiency, strengthen decision-making, and deliver personalized customer services, while also raising complex ethical challenges around fairness, accountability, transparency, and inclusivity (Baffour Gyau et al., 2024; Noreen et al., 2023). In this regard, Al-Dosari et al. (2024) and Rahman et al. (2021) highlighted the inclination of governments and financial institutions worldwide to formulate robust governance frameworks that balance innovation with ethical responsibility, as evident in Saudi Arabia, which draws on the Saudi Data and AI Authority (SDAIA) and related government initiatives such as the Deem cloud platform, the Tawakkalna and Nafath apps, and the Estishraf analytics platform (Alshagathrh et al., 2018; Dahlan et al., 2022). Because of such initiatives, AI adoption within the Saudi banking sector is accelerating through chatbots, robo-advisors, fraud detection systems, and advanced credit scoring models, in line with both customer expectations and the national policy goals of Vision 2030, which emphasizes economic diversification away from oil dependence, with AI, fintech, and digital banking positioned as critical enablers of sustainable growth (Alammash et al., 2021).
Although AI adoption is celebrated as a driver of efficiency and competitiveness, the frameworks guiding its ethical deployment remain underdeveloped in the specific context of Saudi Arabia's financial institutions. Existing research has largely focused on advanced economies such as the European Union, the United States, and some Asian countries (Boustani, 2021; Gupta et al., 2023; Mucsková, 2024; Noreen et al., 2023), where regulatory frameworks and ethical standards for AI are more mature, leaving limited scholarly work on how emerging economies, particularly in the Middle East, are addressing the ethical challenges of AI in finance. Furthermore, the literature has concentrated on technological potential or policy ambitions, paying less attention to the organizational, cultural, and regulatory barriers that shape responsible adoption (Merhi, 2023; Murire, 2024). Although international organizations such as the European Union, OECD, and UNESCO have articulated principles of ethical AI (Gupta et al., 2023; Van Norren, 2022), the lack of studies exploring the extent to which Saudi Arabia's banking sector aligns with these standards underscores the need to evaluate current governance practices, explore local challenges, and identify opportunities for responsible innovation tailored to the Saudi context. The problem, therefore, is not a lack of ambition but the absence of equally strong governance mechanisms to ensure that AI-driven technologies in the banking sector reduce inequalities rather than introduce systemic risks or weaken public confidence in digital transformation; mismanaged AI adoption could result in algorithmic bias and privacy violations within a Saudi AI ecosystem that aims to support both national priorities and global leadership aspirations (Brika et al., 2025).
By addressing this research gap and problem, the study adds fresh evidence to scholarly debates on AI ethics, governance, and responsible innovation in the context of the Saudi Arabian banking sector, highlighting ethical dimensions of technological transformation that are often overlooked in policy-focused discussions while advancing cross-cultural and comparative insights into how ethical AI practices can be adapted to different institutional and cultural settings. Practically, the study offers actionable guidance for government bodies such as SDAIA and the Saudi Central Bank in designing comprehensive regulatory frameworks that align AI adoption with national priorities, international ethical standards, and Vision 2030. For senior executives, it offers strategies to embed ethics and governance in organizational culture to enhance trust, customer loyalty, and long-term competitiveness, while for fintech firms and investors, the insights identify collaboration pathways that balance profitability with responsibility, positioning Saudi Arabia not only to manage risks but also to lead in the global ethical AI domain.

1.1. Research Objectives

The primary aim of this study is to analyze how Saudi Arabia can establish regulatory frameworks and ethical guidelines for AI development in banking that promote sustainable, inclusive, and climate-aligned innovation, guided by the following objectives:
  • To evaluate the existing AI governance structures and ethical practices within Saudi Arabia’s banking sector and assess their alignment with Vision 2030 priorities and global ethical AI standards.
  • To examine the regulatory, cultural, technical, and organizational challenges that limit the effective and responsible adoption of AI in Saudi financial institutions.
  • To investigate the key enablers and opportunities, including government initiatives, fintech collaborations, investment strategies, and organizational innovation culture, that can promote inclusive, sustainable, and ethically guided AI practices in the banking sector.
The remainder of this paper is organized as follows. The literature review evaluates existing work on AI governance, ethical principles, and their application in the financial sector. The methodology section explains the secondary qualitative research design and the reasoning behind selecting a specific range of academic sources through particular inclusion and exclusion criteria. The findings section presents the results of the thematic analysis, highlighting governance practices, challenges, and enablers within Saudi banking and connecting them to theory, Vision 2030 priorities, and global ethical AI standards. Finally, the conclusion summarizes the key contributions, outlines policy and practical recommendations, and suggests directions for future research.

2. Literature Review

This section documents studies that discuss current AI governance systems and associated ethical practices; the regulatory, cultural, technical, and organizational issues in the successful and responsible use of AI; and the enablers and opportunities that facilitate inclusive, sustainable, and ethically driven AI practices. The section concludes with the study's theoretical foundation, stakeholder theory.

2.1. Current AI Governance Systems and Ethical Practices

During the last decade, the adoption of AI-driven tools in business has been so fast that it has outpaced the establishment of governance systems, raising questions of accountability, fairness, and transparency. The existing literature demonstrates a disjointed landscape in which governance systems and related ethical practices vary by region and industry. In Europe, the EU AI Act proposes a risk-based regulatory framework focusing on human oversight, transparency, and stringent regulation of high-risk applications (Gupta et al., 2023). Conversely, the United States has been more market-oriented, with agencies such as the National Institute of Standards and Technology (NIST) leading the way and emphasizing voluntary frameworks over legal ones (Al-Alawi et al., 2021). International bodies such as UNESCO and the OECD have promoted values of trustworthy AI, including human-centered values, safety, and inclusiveness (Van Norren, 2022), yet studies indicate that most organizations find these values challenging to operationalize. Bias reduction, for example, is a persistent problem, since algorithms may reproduce or even amplify existing societal inequalities (Murire, 2024; Van Norren, 2022). This is mainly because corporate AI governance frequently rests on corporate social responsibility statements, ethics committees, or internal policies. Google and Microsoft, for instance, have developed AI ethics boards and guidelines, but controversies around these boards, including cases where data used to train AI systems was collected without informed consent, raising concerns about privacy and data protection, have cast doubt on their independence and on the implementation of ethical AI-based technologies (Corrêa et al., 2023; Fukuda-Parr & Gibbons, 2021).
Recent research also indicates that governance systems are mostly reactive rather than proactive, adopted only once ethical failures have materialized, especially in the Middle East and Asia, where AI governance is directly associated with state-oriented approaches. To illustrate, Saudi Arabia's Vision 2030 makes AI a key element of economic change, and the Saudi Data and AI Authority (SDAIA) must strike a balance between innovation and ethical protection (Alshagathrh et al., 2018). Likewise, China prioritizes national security and social stability in its AI strategy, and its governance is shaped by strict regulatory control and alignment with state goals (Roberts et al., 2021). Regardless of these advances, researchers believe a governance gap remains between ideals and actual ethical practice, as most governance systems lack clearly defined enforcement, monitoring, or accountability mechanisms (Cristi et al., 2023; Svoboda, 2023). Moreover, whether AI ethics should be universal or specific to local cultures and politics is still debated, reflecting larger discussions on digital sovereignty and the tradeoff between innovation and regulation. In general, the literature indicates that current AI governance frameworks are dynamic but still disjointed and, in many cases, lack practical implementation; the assessment of governance structures is therefore a critical step in understanding how responsible AI may be attained.

2.2. Regulatory, Cultural, Technical, and Organizational Issues in Successful and Responsible Use of AI

The effective implementation and use of AI faces various challenges that make it difficult to deploy responsibly and at scale. One prominent challenge highlighted in the existing literature is the lack of harmonized international standards at the regulatory level, which hampers international businesses because jurisdictions apply different rules, as pointed out by Al-Dosari et al. (2024) and Mucsková (2024). In support of this, Van Norren (2022) argued that over-regulation is perceived as a hindrance to innovation, whereas under-regulation is considered a source of ethical lapses and a threat to human rights. According to scholars, regulatory fragmentation raises compliance costs and complicates cross-border partnerships, especially in fintech and autonomous systems. Cultural issues also take centre stage according to Murire (2024), who contended that the higher the level of trust in government and institutions, the smoother AI adoption becomes, as observed in the Nordic countries. Conversely, societies suspicious of centralized power may be hostile to AI systems that seem invasive or opaque. Furthermore, cultural values shape ideas of justice, privacy, and responsibility. Western cultures, for example, tend to prioritize individual rights and data privacy, whereas in some Asian contexts the collective good and efficiency can outweigh individual privacy rights, which makes it hard to create universal ethical standards (Boustani, 2021; Fukuda-Parr & Gibbons, 2021).
These issues are worsened by technical challenges such as biased datasets, limited model interpretability, and security risks. Even sophisticated machine learning models tend to be black boxes, which makes it difficult to ensure organizational compliance with the rules (Rudin, 2019). Cybersecurity threats, such as adversarial attacks that alter AI outputs, further diminish trust in AI systems. Responsible AI implementation is also limited by infrastructure gaps, especially in developing economies, where lack of access to quality data, computing resources, and skilled human resources generates uneven adoption rates across regions, according to Al-Dosari et al. (2024) and Murire (2024). Organizational issues include resistance to change, skills deficits, and the absence of clear accountability structures, which arise when companies lack the internal capabilities to evaluate AI risks or apply ethical standards. Research indicates that efficiency and cost-cutting are usually managers' primary concerns, which creates conflicts between business goals and responsible AI practices (Mucsková, 2024; Noreen et al., 2023). Moreover, organizational silos restrict cooperation between technical teams and compliance officers, creating loopholes in oversight. Ethical consciousness within organizations is also inconsistent: some companies create AI ethics officers or compliance departments, whereas others treat ethics as an afterthought and rely mostly on external regulation (Boustani, 2021; Gupta et al., 2023). Liability and accountability in AI-driven decisions further complicate adoption, especially when it is unclear who is to be held accountable for problems: the developers, the managers, or the regulators (Dahlan et al., 2022; Merhi, 2023).
To conclude, the implementation of AI is limited not only by technical and regulatory factors but also by cultural and organizational processes; coordinated efforts are therefore necessary to balance innovation with accountability and make the use of AI effective and ethically justified. The next discussion concerns the enablers that facilitate inclusive, sustainable, and ethically driven AI practices, making optimal use of available opportunities.

2.3. Enablers and Opportunities to Facilitate Inclusive, Sustainable, and Ethically Driven AI Practices

The existing literature highlights a considerable number of prospects for promoting responsible AI adoption, of which the most powerful is government initiatives. The UK, Singapore, and the UAE are among the countries that have developed national AI strategies incorporating ethical considerations into innovation. As an illustration, Singapore's Model AI Governance Framework offers feasible advice to companies on transparency, human engagement, and accountability (Al-Alawi et al., 2021), while SDAIA programs in Saudi Arabia focus on creating a balanced ecosystem that facilitates both innovation and regulation, making the country a regional leader in AI development (Dahlan et al., 2022). These policies not only give guidance but also unlock funding, research assistance, and regulatory clarity that promote sustainable adoption. Another important enabler is fintech collaboration. The adoption of AI in financial services, whether in robo-advisory systems or credit scoring, has the potential to enhance inclusivity by expanding access to credit and financial services for underserved groups (Al-Dosari et al., 2024; Rahman et al., 2021); nevertheless, effective partnerships require trust, information-exchange systems, and shared ethical principles that eliminate exploitation and bias. Investment strategies are also important to the development of responsible AI, according to Al-Dosari et al. (2024) and Rahman et al. (2021), especially from governments, venture capital firms, and corporate investors, who are becoming more aware of the need to invest in AI projects that are sustainable and ethical.
In this regard, the focus on energy-efficient algorithms, in the form of green AI, has received attention as a way of minimizing the environmental impact of machine learning models. These investment strategies not only encourage innovation but also signal to the market that ethical practices can be a competitive advantage, as pointed out by Penev et al. (2024).
Beyond those already cited, innovation culture is perhaps the most decisive enabler of ethical AI in organizations. Studies indicate that companies with open, collaborative, and learning cultures are better positioned to incorporate ethical practices into AI adoption, which suggests that cross-disciplinary collaboration between technologists, ethicists, and business managers should be encouraged to create a more holistic approach (Baffour Gyau et al., 2024; Murire, 2024; Noreen et al., 2023). Transparency, employee voice, and stakeholder engagement are identified as powerful opportunities to make these enablers effective by supporting responsible responses to risks. Stakeholder collaboration also supports inclusive AI practices, according to Faruq et al. (2024), especially when academic institutions, civil society, and international organizations contribute critically to ethical frameworks. As a case in point, university-industry collaborations have produced explainable AI tools, and NGOs have demanded fairness and human rights in the use of AI, ensuring that diverse views are considered in governance and innovation and minimizing the possibility of ethical blind spots. This points to the fact that responsible AI adoption is not only about reducing risks but also about leveraging enablers that create sustainable and inclusive opportunities.

2.4. Theoretical Framework

The stakeholder theory proposed by Freeman (1984) forms the basis of this study. It assumes that organizations cannot look only to shareholders; they must also consider the needs and expectations of other parties, including customers, workers, regulators, and society at large, to achieve the desired results, as pointed out in the existing literature (Ademola, 2024; Da Costa et al., 2023). Here the theory is applied to study how organizations can establish regulatory frameworks and ethical guidelines for AI development that promote sustainable, inclusive, and climate-aligned innovation. For the first research objective, the assessment of AI governance structures and ethical practices, stakeholder theory helps explain why governance cannot be developed from a business-efficiency perspective alone; social and regulatory expectations, including fairness, accountability, and transparency, must also be accommodated in governance (Wanyama et al., 2013). This view makes organizations consider the effects their AI decisions will have on various stakeholders, not only on their profits. Regarding the second objective, which examines adoption challenges, the theory demonstrates that problems are usually caused by conflicts between stakeholders (Dmytriyev et al., 2021). Regulators desire strictness, companies desire flexibility, and customers desire privacy and trust, while technical issues such as bias and unaccountability complicate the ability to satisfy everyone simultaneously.
In this regard, stakeholder theory implies that AI responsibility is not about disregarding these conflicting demands: various stakeholders (e.g., employees, customers, regulators, developers, and the public) have different interests and power levels, and AI's actions affect them all. AI accountability is thus not a simple matter of prioritizing one group but of finding a balance through trade-offs that create value and mitigate negative outcomes across the interconnected system (Freeman et al., 2020). Lastly, for the third objective, stakeholder theory emphasizes the importance of enablers (government initiatives, fintech partnerships, and ethical investment approaches) because these enablers allow stakeholders to collaborate, harmonize their interests, and distribute responsibilities. To illustrate, collaborative governance systems or cooperation between companies and regulators builds trust and distributes responsibility. Overall, stakeholder theory offers a feasible foundation for this research, framing AI governance and ethics as a continuous balancing exercise among various interests and explaining why inclusive and sustainable practices are needed for long-term organizational performance.

3. Research Methodology

3.1. Research Philosophy, Design and Approach

This study was informed by an interpretivist philosophy and an inductive research approach, given that its main aim is to examine how Saudi Arabia can establish regulatory frameworks and ethical guidelines for AI development in banking, while addressing the regulatory, cultural, technical, and organizational challenges, based on insights drawn from secondary qualitative data in academic literature from 2019 to 2024, aligned with Vision 2030. In line with these choices, the researchers chose a qualitative exploratory design to explore attitudes and practices in depth, which would not have been possible using quantitative measures (Lim, 2024). These choices enabled the researchers to analyze nuanced perceptions and contextual factors reported in the literature and to place these insights into wider institutional, cultural, and regulatory settings. More specifically, themes and patterns were derived from the secondary qualitative sources related to each research objective instead of from participant interviews, providing contextually rich information on an under-researched problem and supporting evidence-based policy and practice recommendations (Saunders et al., 2019).

3.2. Data Collection Procedure – Inclusion and Exclusion Criteria

Adopting a purposive sampling technique, the researchers identified and selected secondary qualitative sources from peer-reviewed academic articles, policy reports, white papers, and other credible publications published between 2019 and 2024 that specifically addressed AI governance, ethics, sustainability, and innovation in the Saudi Arabian banking or financial sector aligned with Vision 2030. Before searching for related academic sources, specific inclusion and exclusion criteria were determined. The inclusion criteria required that sources relate to the Saudi Arabian banking sector, be published between 2019 and 2024 in English, focus on AI applications or governance in the banking and financial sector, discuss ethical, regulatory, or cultural aspects, and provide qualitative insights such as case studies, expert commentaries, or thematic analyses. The exclusion criteria eliminated sources that were purely quantitative, lacked contextual or thematic analysis, focused on industries other than Saudi Arabian banking, or were published outside the 2019-2024 timeframe.
After setting these criteria, sources were located through Google Scholar using a set of carefully selected keywords related to the research objectives, including terms such as “AI governance in banking Saudi Arabia,” “ethical AI frameworks banking,” “sustainable innovation financial sector,” “regulatory challenges AI Saudi Arabia,” and “AI ethics and compliance banking,” with Boolean operators such as AND, OR, and quotation marks to refine the search and ensure precision, for example: “AI governance” AND “Saudi Arabia” AND “banking sector,” or “ethical AI” AND “regulatory frameworks.” Applying these keywords with Google Scholar's built-in year filter, the researchers obtained a pool of approximately 115 sources, of which 25 were finally used for analysis after screening abstracts and full texts for alignment with the inclusion and exclusion criteria and removing duplicates.
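The screening logic described above can be sketched as a simple filter. This is an illustrative reconstruction only: the field names and the sample records below are hypothetical, and the actual screening was performed manually on abstracts and full texts rather than in code.

```python
# Illustrative sketch of the inclusion/exclusion screening described above.
# Field names and sample records are hypothetical; the real screening was
# a manual review of abstracts and full texts against the same criteria.

def meets_inclusion_criteria(source: dict) -> bool:
    """Check one candidate source against the study's stated criteria."""
    return (
        source["language"] == "English"
        and 2019 <= source["year"] <= 2024          # publication window
        and source["sector"] == "Saudi banking/finance"
        and source["method"] == "qualitative"       # purely quantitative work excluded
    )

candidates = [
    {"id": "S01", "language": "English", "year": 2023,
     "sector": "Saudi banking/finance", "method": "qualitative"},
    {"id": "S02", "language": "English", "year": 2017,   # outside 2019-2024
     "sector": "Saudi banking/finance", "method": "qualitative"},
    {"id": "S03", "language": "English", "year": 2022,
     "sector": "healthcare", "method": "qualitative"},   # unrelated industry
]

included = [s["id"] for s in candidates if meets_inclusion_criteria(s)]
print(included)  # ['S01']
```

Encoding the criteria as a single predicate mirrors how each source was judged against all criteria jointly: failing any one criterion (year, language, sector, or method) excludes the source.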
All literature consulted to derive themes was properly referenced using the publisher's prescribed reference style to avoid any academic misconduct. Furthermore, the defined inclusion and exclusion criteria were followed strictly to obtain the most relevant set of sources from credible academic journals, policy documents, and institutional reports aligned with the study's objective and to avoid bias or irrelevant literature. In doing so, the researchers did not report information unrelated to the Saudi Arabian banking sector, in order to maintain transparency in the derivation of meaningful themes and evidence-based recommendations consistent with Vision 2030.

3.3. Data Analysis Procedure

The finalized academic sources were analyzed and themes generated using the six-step thematic analysis methodology of Braun and Clarke (2019), as it offers a systematic yet versatile way of identifying, analyzing, and reporting themes from textual data, as recommended by Byrne (2022) and Murphy (2022). Analysis began with familiarization with the data, during which the academic sources were read several times to obtain a general impression. Initial codes were then generated systematically across the dataset, identifying features of interest to the research objectives. The third phase collated codes into candidate themes, capturing insights related to governance gaps, cultural influences, and enabling strategies; sampled themes included AI governance practices, adoption challenges, and enablers for responsible AI, with corresponding codes including transparency, fairness, data privacy, regulatory barriers, cultural influences, government initiatives, partnerships, and sustainability alignment. The fourth step reviewed these themes against the coded extracts, the fifth defined and named them, and the final stage produced a detailed write-up integrating narrative descriptions with illustrative extracts from the sources to give a detailed and credible account of the findings.
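The collation stage (step three of Braun and Clarke's framework) can be sketched as a mapping from codes to candidate themes, using the theme and code names listed above. The groupings shown are a simplified, hypothetical reconstruction for illustration, not the study's full codebook.

```python
# Illustrative sketch of collating codes into candidate themes (step 3 of
# Braun and Clarke's six-step framework). The groupings are a simplified
# reconstruction using the theme/code names stated in the text, not the
# study's actual codebook.

themes = {
    "AI governance practices": ["transparency", "fairness", "data privacy"],
    "Adoption challenges": ["regulatory barriers", "cultural influences"],
    "Enablers for responsible AI": ["government initiatives", "partnerships",
                                    "sustainability alignment"],
}

def theme_for(code):
    """Return the candidate theme a coded extract was collated under, or None."""
    for theme, codes in themes.items():
        if code in codes:
            return theme
    return None

print(theme_for("regulatory barriers"))  # Adoption challenges
```

Steps four and five then correspond to reviewing each code-to-theme assignment against the underlying extracts and renaming or merging themes where the fit is poor.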

4. Analysis and Findings

This section reports the results obtained from the thematic analysis of secondary qualitative sources, based on the six-step framework of Braun and Clarke (2019). It combines findings from academic literature, policy reports, and institutional documents to analyze the current governance frameworks, regulatory and cultural obstacles, and the opportunities enabling the responsible use of AI in the context of Vision 2030. All themes are elaborated and analyzed through the lens of stakeholder theory to understand how various stakeholder interests shape the ethical and sustainable use of AI in the financial sector.

4.1. Objective One: To Evaluate the Existing AI Governance Structures and Ethical Practices Within Saudi Arabia’s Banking Sector and Assess Their Alignment with Vision 2030 Priorities and Global Ethical AI Standards

The results reveal that Saudi Arabia has taken remarkable steps to lay down the principal AI governance frameworks that govern digital transformation within the banking industry. According to Bendary and Rajadurai (2024), the most significant of these is the Saudi Data and AI Authority (SDAIA), which plays a significant role in shaping policies, data governance standards, and ethical principles at the national scale in line with Vision 2030. In parallel, Asem et al. (2024) found that data classification policies, the National Data Governance Framework, and the cybersecurity standards of the National Cybersecurity Authority all shape the way banks handle data integrity, privacy, and AI implementation. Moreover, the Saudi Central Bank (SAMA) has released regulatory sandboxes, fintech principles, and a data protection policy to support innovation without lowering compliance (Suleiman & Ming, 2025). Alasiri and Mohammed (2022) asserted that internal governance mechanisms (AI risk committees, model validation processes, data quality audits, etc.) have begun to be implemented by banks in the Kingdom, although their maturity is inconsistent, underutilized, and not well standardized across institutions. The results show that Saudi Arabia's emerging AI regulatory systems display some compliance with the international ethical principles of the EU AI Act, the OECD AI Principles, and UNESCO guidelines on transparency, fairness, and human-centered development. As with other international standards, Saudi governance promotes responsible use of AI, risk sensitivity, and human accountability in automated decision-making (Shalhoob, 2023).
Nevertheless, gaps emerge around enforcement practices, interpretability requirements, and stakeholder accountability structures. Khan (2025) found that Saudi frameworks, in contrast to the highly elaborate regulatory architecture of the EU, provide more general principles and fewer binding commitments, leaving banks much latitude in their application. Additionally, although the OECD has focused on cross-border harmonization and participatory governance, Saudi banking governance is still mostly state-led, with limited multi-stakeholder participation (Mani & Goniewicz, 2024). The results indicate that alignment at the level of principles is positive, but practical alignment is limited by contextual, infrastructural, and institutional factors.
The literature also identifies several ethical issues in the adoption of AI in the Saudi banking sector that point to broader governance gaps (Alessa et al., 2022; Rahman & Qattan, 2021). In the study by Thalib et al. (2024), data privacy, algorithmic transparency, and the risk of discriminatory outcomes were widely cited concerns. According to Al Shanawani (2023), the rapid development of AI solutions, including automated credit scoring, fraud detection, and customer-support chatbots, raises questions about the explainability of AI models and the fairness of their outcomes, particularly for vulnerable populations. As per Alhur (2024), a lack of high-quality, representative data increases the likelihood of biased decisions, while a lack of transparency prevents customers from understanding how AI-driven decisions affect them. Another common problem is the absence of robust monitoring and accountability measures to promote ethical adherence within institutions (Al Khashan et al., 2021). Although structures are in place, their application varies, resulting in differing degrees of ethical maturity across banks (Bahreldin et al., 2025). Aljuhani (2024) found that these loopholes indicate the need for closer supervision, more clearly articulated operating rules, and more standardized ethical governance practices.

4.2. Objective Two: To Examine the Regulatory, Cultural, Technical, and Organizational Challenges that Limit the Effective and Responsible Adoption of AI in Saudi Financial Institutions

The results indicate that regulatory fragmentation is among the major obstacles restraining the adoption of ethical and responsible AI in the banking industry of Saudi Arabia (Aljerian et al., 2025). Although the SDAIA, SAMA, and the National Cybersecurity Authority offer governance standards, these structures are scattered, producing inconsistent compliance expectations across institutions (Alarabi & Alasmari, 2023). Touati and Saad (2024) found that the lack of a single AI-specific regulatory law along the lines of the EU AI Act creates uncertainty over risk classification, monitoring, and enforcement provisions. In addition, the balance between innovation promotion and consumer protection continues to evolve (Al Saadi & Atef, 2024). According to Elimam (2022), excessive regulation can limit experimentation in the fintech sector, while inadequate regulation can amplify ethical and business risks, especially in credit scoring, biometric authentication, and automated decision-making. Alshalil et al. (2024) stated that these regulatory loopholes slow the pace of harmonized governance practices and can confuse banks seeking to implement AI systems responsibly.
Cultural variables also determine how quickly and readily AI adoption proceeds in the Saudi banking ecosystem (Mawkili, 2025). The study by Alharbi et al. (2025) shows that societal expectations, views on privacy, and confidence in digital technologies are interrelated in complex ways. Although Saudi society tends to place high trust in government-led digital initiatives, the risk of data abuse and surveillance remains a widespread concern that influences how customers accept AI-driven financial services (Alharbi et al., 2024). Furthermore, Alessy et al. (2024) stated that conventional expectations of human interaction in banking also lower confidence in fully automated systems. Moreover, cultural norms of fairness, transparency, and ethical conduct shape social expectations about the use of algorithms in decision-making (Almajed et al., 2024). These cultural factors constrain the contexts in which AI systems can operate, requiring sensitivity to societal values and further complicating AI adoption in ethically sensitive areas such as lending and fraud detection.
Technical limitations were a recurring challenge to responsible AI deployment in the literature (Alharbi et al., 2025). The availability and quality of training data are among the most acute problems. According to Bendary and Rajadurai (2024), banking institutions regularly rely on fragmented or outdated datasets that are unsuitable for training sophisticated machine learning models, heightening the risk of biased or inaccurate performance. Asem et al. (2024) found that many AI systems are black boxes, so interpretability is also a challenge: it is hard to explain automated decisions to a regulator or a customer. Furthermore, cybersecurity vulnerabilities pose a serious threat, including adversarial attacks, data breaches, and model manipulation, especially given the sensitivity of financial data (Suleiman & Ming, 2025). Shalhoob (2023) noted that such technical issues are compounded by a lack of interoperability between banking systems, insufficient computational infrastructure, and a shortage of AI talent with the specialized skills needed to build and operate ethical, secure, and transparent AI systems. Khan (2025) identified organizational barriers as another significant obstacle to responsible AI adoption. Many financial institutions in Saudi Arabia remain hierarchical and siloed, which hinders the cross-functional collaboration needed for ethical AI governance (Mani & Goniewicz, 2024). The findings show a severe lack of internal expertise in areas such as data governance, AI ethics, and algorithmic auditing, which leads to heavy dependence on external vendors whose practices may not be consistent with local regulatory and ethical requirements (Alkhoraif, 2024).
According to Rahman and Qattan (2021), resistance to change among employees and decision-makers also slows the pace of AI integration, as fears of displacement and distrust of automated technologies persist. Moreover, few banks have formalized controls, such as AI ethics committees, model risk management teams, or transparent reporting, to mitigate the ethical implications of AI tools (Alessa et al., 2022). Thalib et al. (2024) and Al Khashan et al. (2021) stated that this absence of formal governance bodies decreases accountability, restricts oversight, and raises the chances that ethical risks remain undetected during development and deployment.

4.3. Objective Three: To Investigate the Key Enablers and Opportunities, Including Government Initiatives, Fintech Collaborations, Investment Strategies, and Organizational Innovation Culture, that Can Promote Inclusive, Sustainable, and Ethically Guided AI Practices in the Banking Sector

The results show that the Saudi government has implemented a number of strategic enablers that support the ethical and responsible adoption of AI in the banking industry. According to Al Khashan et al. (2021), SDAIA is a key contributor through its policies on transparency, data integrity, and data risk management, as outlined in its National Strategy for Data and AI. Initiatives such as the National Data Bank, the National Data Governance Framework, the Estishraf Analytics Platform, and the National Data Index enhance data access, standardization, and regulatory clarity, which are preconditions for responsible AI implementation (Alhur, 2024). Meanwhile, Vision 2030 makes digital transformation a national priority, pushing banks to innovate while adhering to ethical and compliance standards (Bahreldin et al., 2025). Alqahtani and Albabtain (2025) stated that SAMA also runs regulatory sandbox programs that allow banks and fintech institutions to test AI-based solutions under controlled conditions. Combined, these state-led enablers form an institutional basis for promoting accountability, innovation, and trust in the financial ecosystem. Fintech collaborations are another effective facilitator of the responsible use of AI in the banking sector (Alrabiah et al., 2024). The study by Alarabi and Alasmari (2023) demonstrates that fintech companies give banks access to sophisticated AI-based applications, including biometric authentication, alternative credit scoring models, robo-advisory services, and fraud detection tools. Such partnerships promote innovation and financial inclusion by making banking services more accessible to underserved groups (Touati & Saad, 2024).
According to Mawkili (2025), partnerships also provide knowledge sharing that enables banks to adopt best practices in the ethical development and implementation of AI. Nevertheless, effective cooperation requires a shared code of ethics, disclosure of algorithms, and concrete data governance agreements to prevent exploitation, discrimination, or abuse (Alharbi et al., 2025). When structured to ensure accountability, fintech partnerships accelerate ethical, inclusive, and customer-focused AI adoption in the industry.
The literature indicates that investment approaches are becoming more ethical and sustainability-oriented, which opens possibilities for responsible AI adoption (Alessy et al., 2024). Almajed et al. (2024) found that, guided by government programmes, venture capital and corporate investors are directing funds toward AI innovations that prioritize fairness, transparency, and environmental sustainability. Alharbi et al. (2025) stated that Green AI, which focuses on low-energy algorithms and reduced computation, supports Vision 2030 sustainability requirements while strengthening ethical AI development practices. According to Bendary and Rajadurai (2024), ethical investment signals an institutional commitment to responsible innovation and encourages the banking industry to adopt systems that balance performance with sustainability, customer protection, and regulatory compliance. An innovation-friendly culture is also a key prerequisite for the ethical adoption of AI (Alasiri & Mohammed, 2022). The results indicate that banks with open communication channels, cooperative decision-making frameworks, and effective leadership are better positioned to incorporate ethical considerations into AI development (Khan, 2025). According to Rahman and Qattan (2021), promoting cross-departmental interaction and employee voice enhances transparency and accountability in AI projects. Mani and Goniewicz (2024) found that ethical oversight is further improved through the engagement of stakeholders such as customers, employees, regulators, and technology partners. Alessa et al. (2022) stated that prioritizing fairness, responsibility, and inclusivity as organizational values can help banks develop internal preparedness for AI implementation and manage the associated risks.

5. Conclusions

5.1. Conclusions

The primary aim of this study was to understand how Saudi Arabia can establish regulatory frameworks and ethical guidelines for AI development in banking that promote sustainable, inclusive, and climate-aligned innovation in line with Vision 2030. The study was guided by three research objectives: to evaluate the existing AI governance structures and ethical practices within Saudi Arabia’s banking sector; to examine the regulatory, cultural, technical, and organizational challenges that limit the effective and responsible adoption of AI in Saudi financial institutions; and to investigate the key enablers and opportunities that can promote inclusive, sustainable, and ethically guided AI practices in the banking sector. To achieve these objectives, the study followed a secondary qualitative research strategy guided by an interpretivist philosophical stance and performed thematic analysis, employing the six-step procedure of Braun and Clarke (2019), on data collected from peer-reviewed academic articles, policy reports, white papers, and other credible publications published between 2019 and 2024.
For the first research objective, the findings indicate that Saudi Arabia has made substantial progress in implementing AI governance frameworks under national-level organizations such as the Saudi Data and AI Authority (SDAIA), SAMA, and the National Cybersecurity Authority. These organizations have introduced frameworks addressing data management, cybersecurity, risk management, and ethical issues, which collectively shape how AI technologies are used responsibly by banks. Although these frameworks are broadly aligned with international standards such as the EU AI Act, the OECD AI Principles, and UNESCO ethical guidelines, gaps remain in their implementation, particularly in enforcement, interpretability requirements, and multi-stakeholder involvement. Furthermore, although internal governance systems are being developed within banks, their maturity varies, and the lack of unified ethical practices limits the sector’s ability to guarantee consistent accountability and fairness across all AI uses.
The results for the second objective demonstrate that overlapping mandates among national agencies, together with the absence of an AI-specific law that clearly defines risk classes, compliance expectations, and monitoring requirements, lead to regulatory fragmentation. Culturally, perceptions of privacy, trust, and human interaction influence how customers accept AI-based financial applications, especially in sensitive fields such as credit scoring and fraud detection. Technical limitations, including limited data availability, low model interpretability, cybersecurity risks, and a shortage of AI talent, further constrain banks’ responsible deployment of AI solutions. Organizational issues such as hierarchy, resistance to change, skills gaps, dependence on external vendors, and uneven ethical governance practices also impede the creation of coherent, transparent, and responsible AI governance.
Finally, for the third objective, the findings highlight that government initiatives such as the National Data Governance Framework, the National Data Bank, the Estishraf Analytics Platform, and SAMA’s regulatory sandbox are key enablers of inclusive, sustainable, and ethically driven AI practices within the sector, as they promote regulatory clarity, experimentation, and innovation in line with Vision 2030. Fintech partnerships also enhance financial inclusion and technological capability, allowing banks to use more sophisticated solutions, including alternative credit scoring and biometric authentication. Furthermore, long-term responsible innovation is supported by ethical and sustainability-oriented investment approaches, such as the emergence of so-called Green AI, while an organizational culture focused on transparency, employee involvement, cross-functional cooperation, and stakeholder engagement forms the backbone of integrating ethical considerations into AI development and implementation.

5.2. Theoretical and Practical Implications

Theoretically, the research contributes by applying stakeholder theory to AI governance in the banking industry, supporting the claim that AI adoption is not merely a technological process but a socio-technical one involving multiple stakeholders, including customers, regulators, employees, developers, and society at large. The theory emphasizes how conflicts between competing interests influence governance practices, showing that balanced, inclusive, and transparent decision-making processes are necessary. By incorporating the stakeholder perspective into AI governance, the study extends this knowledge to show how emerging economies, and Middle Eastern countries in particular, can adapt global ethical AI principles to their cultural, political, and institutional realities.
In practice, the study offers policy implications for policymakers, regulators, and financial institutions in Saudi Arabia. For SDAIA and SAMA, the results support reinforcing regulatory coherence, creating AI-specific laws, and formalizing monitoring and enforcement systems. For banking executives and compliance teams, the research highlights the need to invest in internal governance frameworks, including AI ethics committees, model risk management teams, and data quality audits, and to build organizational strengths through workforce training, cross-functional interaction, and ethical auditing systems. For fintech companies and investors, the research encourages responsible AI innovation based on transparency, value-oriented partnership, and sustainability-oriented investment, to create a banking ecosystem that is not only technologically advanced but also socially inclusive and ethically sound, in line with Vision 2030.

5.3. Limitations and Future Research Directions

Although the study contributes to the existing literature, several limitations must be acknowledged. First, the study relies on secondary qualitative sources, which may carry interpretive bias, lack real-time data, or miss emerging developments not yet reflected in academic publications. Second, the absence of field research, such as interviews with regulators, bank managers, AI developers, or customers, reduces the depth and contextual accuracy of the results. Third, the research is limited to the Saudi banking industry, so the results cannot be generalized to other sectors or regions. In addition, the rapid development of AI regulation means that future policy changes may soon render the current observations outdated. Future studies should consider primary qualitative research to provide more context-specific insights based on stakeholder perceptions and organizational practices. Comparative research on AI governance in other Gulf Cooperation Council countries could reveal regional similarities and differences. Longitudinal studies tracing the development of AI policies in Saudi Arabia would also be valuable given the dynamism of the regulatory environment. Technical aspects, including explainability, algorithmic fairness, green AI, and cybersecurity risks, could be investigated in further research. Lastly, it would be valuable to gather stakeholder insights through interviews, specifically on trust, privacy, and digital literacy, to better understand the societal acceptance and ethical considerations of AI in financial services.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org.

References

  1. Ademola, O. (2024). Detailing the stakeholder theory of management in the ai world: A position paper on ethical decision-making. Social Informatics, Business, Politics, Law, Environmental Sciences & Technology Journal, 10(1), 1-8.
  2. Al-Alawi, L., Al-Busaidi, R., & Ali, S. (2021). Applying NIST SP 800-161 in supply chain processes empowered by artificial intelligence. 2021 22nd International Arab Conference on Information Technology (ACIT).
  3. Al-Dosari, K., Fetais, N., & Kucukvar, M. (2024). Artificial Intelligence and Cyber Defense System for Banking Industry: A Qualitative Study of AI Applications and Challenges. Cybernetics and Systems, 55(2), 302-330. [CrossRef]
  4. Al Khashan, H., Abogazalah, F., Alomary, S., Nahhas, M., Alwadey, A., Al-Khudhair, B., Alamri, F., Aleisa, N., Mahmoud, N., & Hassanein, M. (2021). Primary health care reform in Saudi Arabia: progress, challenges and prospects. Eastern Mediterranean Health Journal, 27(10), 1016-1026. [CrossRef]
  5. Al Saadi, A., & Atef, R. (2024). The Complex Relationship between Leadership, Innovation, Education, and Sustainable Development: A Review in the Arabian Gulf Region. Cultural Conflict and Integration, 1(1), 41–53. [CrossRef]
  6. Al Shanawani, H. M. M. (2023). The current status and future prospects of early childhood education in Saudi Arabia and in light of Saudi Vision 2030. Information Sciences Letters, 12(6), 2475-2489. [CrossRef]
  7. Alammash, S. A., Guo, P. S., & Vinnikova, A. (2021). Saudi Arabia and the heart of Islam in Vision 2030: Impact on international relations. Arab Journal for Scientific Publishing(32), 1-17.
  8. Alarabi, S., & Alasmari, F. (2023). Challenges and Opportunities for Small Cities in the Kingdom of Saudi Arabia: A Study of Expert Perceptions. Sustainability, 15(8), 6960-6996. [CrossRef]
  9. Alasiri, A. A., & Mohammed, V. (2022). Healthcare transformation in Saudi Arabia: an overview since the launch of vision 2030. Health services insights, 15, 1-17. [CrossRef]
  10. Alessa, N. A., Shalhoob, H. S., & Almugarry, H. A. (2022). Saudi women’s economic empowerment in light of Saudi Vision 2030: Perception, challenges and opportunities. Journal of Educational and Social Research, 12(1), 316-334. [CrossRef]
  11. Alessy, S. A., Almotlak, A. A., Alattas, M., Alshareef, A., Alwosaibai, K., Alghamdi, M. A., Razack, H. I., & Alqahtani, S. A. (2024). Cancer research challenges and potential solutions in Saudi Arabia: a qualitative discussion group study. JCO global oncology, 10, e2300189. [CrossRef]
  12. Alharbi, H., Alghamdi, A., Almandeel, K., & Alharbi, A. (2024). Implementation of Creativity and Innovation in e-Learning: An Analysis of Opportunities and Challenges Toward a Sustainable Future Economic. Conference on Creativity, Technology, and Sustainability.
  13. Alharbi, H. M. N., Alharbi, N. M., AlGhamdi, S. G. S., Dhahwah, E. A. M., Alfaqih, F. S., Alghbewee, N. S. D., Alasmari, H. S., Alhuwayfi, Y. S., Alharazi, I. M., & AlNebate, F. A. A. (2025). Strategies of Challenges and Advantages of Implementation for Health Insurance in Saudi Arabia: A Systemic Review 2024. Journal of International Crisis and Risk Communication Research, 8(S1), 21-36. [CrossRef]
  14. Alhur, A. (2024). Overcoming electronic medical records adoption challenges in Saudi Arabia. Cureus, 16(2), 1-7. [CrossRef]
  15. Aljerian, N. A., Almasud, A. M., AlQahtani, A., Alyanbaawi, K. K., Almutairi, S. F., Alharbi, K. A., Alshahrani, A. A., Albadrani, M. S., & Alabdulaali, M. K. (2025). Exploring a Systems-Based Model of Care for Effective Healthcare Transformation: A Narrative Review in Implementation Science of Saudi Arabia’s Vision 2030 Experience. Healthcare.
  16. Aljuhani, F. (2024). Systematic review of social entrepreneurship in the kingdom of Saudi Arabia: Trends, challenges, and future directions. International Research Journal of Economics and Management Studies IRJEMS, 3(3), 109-113. [CrossRef]
  17. Alkhoraif, A. (2024). The impact of innovation governance and policies on government funding for emerging science and technology sectors in Saudi Arabia. Journal of Infrastructure, Policy and Development, 8(14), 8515-8537. [CrossRef]
  18. Almajed, O. S., Aljouie, A., Alghamdi, R., Alabdulwahab, F. N., Laheq, M. T., & Alabdulwahab, F. (2024). Transforming dental care in Saudi Arabia: challenges and opportunities. Cureus, 16(2), 1-7. [CrossRef]
  19. Alqahtani, A. G., & Albabtain, B. (2025). Vision of Saudi Arabia 2030 in Health Sector and Clinical Services in Community Pharmacy: Current Situation and Future Directions. Saudi Journal of Clinical Pharmacy, 4(3), 72-75. [CrossRef]
  20. Alrabiah, A. S. H., ALjohani, T. S., Alsehli, N. M., Almaghazawi, R. S., Alsaedi, A. S., Alotaibi, K. J., Al-Shammari, M. H. S., Alotaibi, M. A. A., Arawi, W. A., & Abuillah, W. A. (2024). The Success of Saudi Healthcare System: Opportunities and Challenges a systematic review 2024. Journal of International Crisis and Risk Communication Research, 7(S5), 783-792. [CrossRef]
  21. Alshagathrh, F., Khan, S. A., Alothmany, N., Al-Rawashdeh, N., & Househ, M. (2018). Building a cloud-based data sharing model for the Saudi national registry for implantable medical devices: Results of a readiness assessment. International Journal of Medical Informatics, 118, 113-119. [CrossRef]
  22. Alshalil, D., Asiz, A., Ayadat, T., Khasawneh, M., Ahmed, D., & Sultana, T. (2024). Advancements in green sustainable concrete technologies for sustainable development in Saudi Arabia: A review in light of vision 2030. Materials Research Proceedings, 48, 271-278. [CrossRef]
  23. Asem, A., Mohammad, A. A., & Ziyad, I. A. (2024). Navigating digital transformation in alignment with Vision 2030: A review of organizational strategies, innovations, and implications in Saudi Arabia. Journal of Knowledge Learning and Science Technology, 3(2), 21-29. [CrossRef]
  24. Baffour Gyau, E., Appiah, M., Gyamfi, B. A., Achie, T., & Naeem, M. A. (2024). Transforming banking: Examining the role of AI technology innovation in boosting banks financial performance. International Review of Financial Analysis, 96(Part B), 103700. [CrossRef]
  25. Bahreldin, I., Samir, H., Maddah, R., Hammad, H., & Hegazy, I. (2025). Leveraging advanced digital technologies for smart city development in Saudi Arabia: opportunities, challenges, and strategic pathways. International Journal of Low-Carbon Technologies, 20, 834-847. [CrossRef]
  26. Bendary, M. G., & Rajadurai, J. (2024). Emerging Technologies and Public Innovation in the Saudi Public Sector: an analysis of Adoption and challenges amidst Vision 2030. The Innovation Journal, 29(1), 1-42.
  27. Boustani, N. M. (2021). Artificial intelligence impact on banks clients and employees in an Asian developing country. Journal of Asia Business Studies, 16(2), 267-278. [CrossRef]
  28. Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative research in sport, exercise and health, 11(4), 589-597. [CrossRef]
  29. Brika, S. k., Adli, B., Mili, K., Bengana, I., & Khababa, N. (2025). Key Sectors Driving Saudi Vision 2030 Diversification. Journal of Posthumanism, 5(4), 609–629. [CrossRef]
  30. Byrne, D. (2022). A worked example of Braun and Clarke’s approach to reflexive thematic analysis. Quality & quantity, 56(3), 1391-1412. [CrossRef]
  31. Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., & Terem, E. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 1-15.
  32. Cristi, S., Birau, R., Shetty, S., & Filip, R. (2023). Impact of Artificial Intelligence in Banking Sector with Reference to Private Banks in India. Annals of the University of Craiova, Physics, 32, 59-75.
  33. Da Costa, R. L., Gonçalves, R., & Montez, A. (2023). Artificial intelligence applied to stakeholder theory. In Digital Technologies and Transformation in Business, Industry and Organizations: Volume 2 (pp. 101-125). Springer.
  34. Dahlan, K. R., Badawi, A. A., & Megahed, A. (2022). Perspective Chapter: Data as Currency-On the Impact of ICTs and Data on the Saudi Economy and Industrial Sector. In Digital Transformation-Towards New Frontiers and Business Opportunities. IntechOpen.
  35. Dmytriyev, S. D., Freeman, R. E., & Hörisch, J. (2021). The relationship between stakeholder theory and corporate social responsibility: Differences, similarities, and implications for social issues in management. Journal of management studies, 58(6), 1441-1470. [CrossRef]
  36. Elimam, H. (2022). Environmental problems & development sustainability in light of the Kingdom’s 2030 vision: Opportunities & challenges. International Journal of Education and Social Science, 9(1), 2410-5171.
  37. Faruq, M., Wardiyanto, B., Umpain, S., & Toyib, M. (2024). Collaboration of Stakeholders and AI in The Implementation of New Public Service in The Digital Era. Golden Ratio of Human Resource Management, 5(1), 63-70. [CrossRef]
  38. Freeman, R. E. (1984). Strategic management: A stakeholder approach. Cambridge university press.
  39. Freeman, R. E., Phillips, R., & Sisodia, R. (2020). Tensions in stakeholder theory. Business & society, 59(2), 213-231.
  40. Fukuda-Parr, S., & Gibbons, E. (2021). Emerging consensus on ‘ethical AI’: Human rights critique of stakeholder guidelines. Global Policy, 12, 32-44.
  41. Gupta, A., Dwivedi, D. N., & Shah, J. (2023). Artificial Intelligence Applications in Banking and Financial Services (Vol. 10).
  42. Khan, W. U. (2025). Saudi Arabia’s Cloud Broadband Landscape: Opportunities and Challenges in the Era of Vision 2030. Journal of Business and Management Studies, 7(1), 259-262. [CrossRef]
  43. Lim, W. M. (2024). What Is Qualitative Research? An Overview and Guidelines. Australasian Marketing Journal, 33(2), 199-229. [CrossRef]
  44. Mani, Z. A., & Goniewicz, K. (2024). Transforming Healthcare in Saudi Arabia: A Comprehensive Evaluation of Vision 2030’s Impact. Sustainability, 16(8), 3277-3300. [CrossRef]
  45. Mawkili, W. A. (2025). The future of personalized medicine in Saudi Arabia: Opportunities and challenges. Saudi Medical Journal, 46(1), 19-26. [CrossRef]
  46. Merhi, M. I. (2023). An assessment of the barriers impacting responsible artificial intelligence. Information Systems Frontiers, 25(3), 1147-1160. [CrossRef]
  47. Mucsková, M. (2024). Transforming banking with artificial intelligence: Applications, challenges, and implications. Trends Economics and Management, 18(42), 21-37.
  48. Murire, O. T. (2024). Artificial Intelligence and Its Role in Shaping Organizational Work Practices and Culture. Administrative Sciences, 14(12). [CrossRef]
  49. Murphy, R. (2022). How children make sense of their permanent exclusion: a thematic analysis from semi-structured interviews. Emotional and Behavioural Difficulties, 27(1), 43-57. [CrossRef]
  50. Noreen, U., Shafique, A., Ahmed, Z., & Ashfaq, M. (2023). Banking 4.0: Artificial Intelligence (AI) in Banking Industry & Consumer’s Perspective. Sustainability, 15(4). [CrossRef]
  51. Penev, K., Gegov, A., Isiaq, O., & Jafari, R. (2024). Energy Efficiency Evaluation of Artificial Intelligence Algorithms. Electronics, 13(19), 1-11. [CrossRef]
  52. Rahman, M., Ming, T. H., Baigh, T. A., & Sarker, M. (2021). Adoption of artificial intelligence in banking services: an empirical analysis. International Journal of Emerging Markets, 18(10), 4270-4300. [CrossRef]
  53. Rahman, R., & Qattan, A. (2021). Vision 2030 and sustainable development: state capacity to revitalize the healthcare system in Saudi Arabia. Inquiry: The Journal of Health Care Organization, Provision, and Financing, 58, 1-10. [CrossRef]
  54. Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI & SOCIETY, 36(3), 59-77. [CrossRef]
  55. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence, 1(5), 206-215.
  56. Saunders, M., Lewis, P., & Thornhill, A. (2019). Research methods for business students. Pearson education.
  57. Shalhoob, H. (2023). Green sukuk in Saudi Arabia: Challenges and potentials of sustainability in the light of Saudi vision 2030. Journal of Governance and Regulation/Volume, 12(4), 351-360. [CrossRef]
  58. Suleiman, A. K., & Ming, L. C. (2025). Transforming healthcare: Saudi Arabia’s vision 2030 healthcare model. Journal Of Pharmaceutical Policy And Practice, 18(1), 2449051. [CrossRef]
  59. Svoboda, A. (2023). The Impact of Artificial Intelligence on the Banking Industry. Journal of Banking and Finance Management, 4(1), 7-13. [CrossRef]
  60. Thalib, H. I., Zobairi, A., Javed, Z., Ansari, N., Alshanberi, A. M., Alsanosi, S. M., Alhindi, Y. Z., Falemban, A. H., Bakri, R., & Shaikhomer, M. (2024). Enhancing Healthcare Research in Saudi Arabia According to Vision 2030: An In-Depth Review. Journal of Research in Medical and Dental Science, 12(9), 01-08.
  61. Touati, M., & Saad, M. A. (2024). Investment Management as a Mechanism for Economic Diversification in the Kingdom of Saudi Arabia: between Opportunities and Challenges-Analytical and Foresight Study 2030 Vision. Management and Economics Review, 9(3), 470-482.
  62. Van Norren, D. E. (2022). The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective. Journal of Information, Communication and Ethics in Society, 21(1), 112-128. [CrossRef]
  63. Wanyama, S., Burton, B., & Helliar, C. (2013). Stakeholders, accountability and the theory-practice gap in developing nations’ corporate governance systems: evidence from Uganda. Corporate Governance, 13(1), 18-38. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.