Preprint
Article

This version is not peer-reviewed.

Exploring the Factors Influencing AI Adoption Intentions in Higher Education

Submitted:

23 March 2025

Posted:

25 March 2025


Abstract
This study investigates the primary technological and socio-environmental factors influencing the adoption intentions of AI-powered technology at the corporate level within higher education institutions. A conceptual model based on the Diffusion of Innovation Theory (DOI) and the Technology-Organization-Environment (TOE) framework was proposed and tested using data collected from 367 higher education students, faculty members, and employees. The findings reveal that Compatibility, Complexity, User Interface, Perceived Ease of Use, User Satisfaction, Performance Expectation, AI Introducing New Tools, AI Strategic Alignment, Availability of Resources, Technological Support, and Facilitating Conditions significantly affect AI adoption intentions, whereas Competitive Pressure and Government Regulations do not. Demographic factors, including major and years of experience, moderated these associations, with large differences across educational backgrounds and levels of experience. SPSS Amos 24 was used for structural equation modeling (SEM) to select the best-fitting model, which proved more efficient than traditional multiple regression analysis.
Keywords: 

1. Introduction

The embrace of advanced digital methods has become essential in today’s business climate because of the constant demand for competitiveness and efficiency, and it has transformed how organizations work [80]. Companies increasingly use AI technologies to boost productivity, decision-making, and efficiency [26]. AI’s ubiquitous application in marketing, education, manufacturing, and finance has produced positive performance and productivity outcomes because these systems transcend human cognitive constraints [48]. AI technologies are growing quickly in organizations, disrupting traditional business processes and stepping into spaces where human talent once thrived [64]. AI covers a range of capabilities, from speech recognition to problem-solving and learning, that imitate human cognition [18]. Machine learning techniques allow AI to extract patterns and rules from extensive datasets, empowering organizations to improve operational efficiency and make well-informed decisions [12]. Integrating AI technologies in companies offers considerable prospects for enhancing business value chains, decision-making assistance, knowledge administration, predictive maintenance, customer assistance, and relationship management [29]. Despite rising investment in AI technologies, organizations frequently encounter difficulties in achieving the desired results. These problems include constrained budgets, skills deficiencies, and a lack of awareness, all of which hinder the widespread integration of AI [27]. Accordingly, there is an increasing need to determine the main factors influencing the successful implementation of AI technologies at the organizational scale [19]. The use of Artificial Intelligence (AI) systems has emerged as a vital element in ensuring organizational effectiveness and the student experience in the ever-changing world of higher education [31]. This study identifies the key technological and socio-environmental factors influencing the corporate-level adoption of AI-based technologies in higher education. By building a conceptual framework anchored in the DOI and TOE models, the research seeks a deep understanding of the intricate process that underlies the use of AI in higher education. The paper examines the challenges to the implementation of AI in higher education, exploring compatibility, complexity, user experience, usefulness, ease of use, satisfaction, performance expectation, strategic alignment, resource availability, competitive pressure, government regulations, technological support, and facilitating conditions. The research aims to uncover how these factors shape decision-making and further evaluates the impact of age, gender, education, and experience. Understanding these interactions is crucial for AI adoption in higher education. The study provides insights into the complexities of AI adoption and aims to contribute to adoption strategies, investigating the dynamics of AI adoption in higher education institutions and informing decision-making and planning for AI integration.

2. Literature Review


2.1. Higher Education

In higher education, interest in using AI is growing. The prospect of making AI available to higher education is compelling because it can fundamentally change the way people learn and teach. Greenhalgh et al. emphasized the need to understand the diffusion of innovations within service organizations and, in particular, to pay attention to parameters such as communication channels and message delivery speed; this aligns with the question of whether service organizations will adopt AI-based technologies [34]. Numerous studies have emphasized the profound influence AI has on higher education, underscoring the significance of comprehending the factors that affect the adoption of AI-based technologies in educational environments [14,23,46,66]. Compatibility, complexity, user experience, perceived usefulness, perceived ease of use, user satisfaction, performance expectation, AI strategic alignment, availability of resources, competitive pressure, government regulations, technological support, and facilitating conditions, together with demographic variables such as age, gender, education, and years of experience, all contribute to the intricate web of factors that influence the decision to adopt AI-based technologies [83]. Research has shown that the incorporation of artificial intelligence (AI) in higher education is affected by factors such as perceived risk, facilitating conditions, and expected effort, which in turn shape individuals’ attitudes toward and intentions to utilize AI [82]. The likelihood of AI having a significant impact on higher education is highlighted in the literature, with expectations of considerable growth in AI implementation in the education sector [38]. As AI evolves, universities will need to adopt and leverage AI solutions for their educational activities. Recognizing the drivers of AI use in higher education, and addressing the obstacles to its use, is key to fully exploiting AI’s educational potential.

2.2. Artificial Intelligence

AI is increasingly entering the field of higher education teaching and learning. Numerous studies have focused on AI in educational environments, specifically on the factors that impact the adoption of AI-based technology.
Several key factors have been recognized as influencing the intention to adopt AI-based technologies within higher education. These include compatibility, complexity, user experience, perceived usefulness and ease of use, user satisfaction, performance expectations, strategic alignment with AI, resource availability, competitive pressures, government regulations, technological support, and facilitating conditions [22]. Studies have highlighted AI’s transformative potential within educational spheres, where breakthroughs like unobtrusive brain-computer interfaces, coupled with AI, pave the way for pioneering pedagogical methods [53].

2.3. Compatibility

Rogers defined ’compatibility’ as ’the extent to which an innovation is consistent with the values, practices, and needs of potential adopters’. For AI-based technologies in higher education, this alignment plays an important role in determining adoption strategies and institutional readiness [52]. AI solutions need to fit prevailing educational methodologies and systems to become effective and widely used across educational facilities. Prior studies emphasize compatibility as an essential ingredient in adopting technological changes such as cloud computing and e-commerce, highlighting that integrating new technologies with an organization’s existing systems and practices is key to how organizations can effectively take advantage of and benefit from technological advancements [8,52]. Similarly, in AI applications for higher education, AI-driven tools must fit the institutional context, curriculum requirements, and pedagogical practices. This consistency is crucial to ensure that educators and students adopt and effectively use these tools. Incorporating AI in learning environments is connected to a number of variables, including personalization, usability, and interactivity [63,83]. All of these factors play a key role in tailoring AI technologies to the differing needs and preferences of students and teachers. Further, openness to experimentation and compatibility between AI solutions and institutional goals and objectives are essential to achieving the fit that motivates adoption.
The successful implementation of AI in education also depends on compatibility, as highlighted by [62]. Studies have likewise explored compatibility in the deployment of AI in healthcare, HR, and e-learning settings, stressing the importance of perceived compatibility with organizational goals, technological infrastructure, and user requirements [62,79]. Understanding the factors influencing compatibility and addressing barriers to harmonization is key to making sure that AI technologies are used effectively across different organizational contexts. Compatibility, in short, is an important factor in adopting AI technologies in universities and other fields. With AI solutions that are in sync with practices, values, and systems, organizations can make it easier for AI technologies to be used and integrated, resulting in increased efficiency, effectiveness, and creativity in educational processes.

2.4. Complexity

The adoption of artificial intelligence (AI) technologies in organizational contexts, such as higher education, also depends heavily on their degree of sophistication. In this context, ‘complexity’ refers to the perceived difficulty of using and implementing AI solutions in educational institutions. Understanding the factors contributing to this complexity is key to the successful integration and maximization of AI technologies within higher education [66]. Research suggests that educational institutions struggle to implement AI technologies, which negatively impacts the learning experiences of students. Further, the application of AI for teaching, student support, and other administrative work poses challenges for these institutions and requires further research to solve these issues [66]. The deployment and execution of AI systems encounter hurdles due to their complex nature and the varying needs of educational stakeholders. The complexity of technology is not the only factor influencing the adoption of AI-based technologies; human elements are equally pivotal. AlTakhayneh et al. explored the psychological resistance of teachers to digital innovation, underscoring the importance of overcoming psychological barriers and fostering positive attitudes toward educational technology to ease the adoption process. Navigating the intricacies of AI adoption in educational settings necessitates overcoming resistance and cultivating a welcoming stance towards AI technologies. Furthermore, the successful integration of AI-based technologies in higher education is contingent upon the interplay of complexity with other elements such as compatibility, user experience, and organizational preparedness. Addressing these multifaceted challenges and adopting strategies to boost user acceptance and organizational backing is crucial for educational institutions aiming to steer successfully through the complexities of AI adoption and unlock AI’s transformative power in teaching and learning. In sum, grasping and tackling the complexities tied to adopting AI-based technologies is vital for their successful incorporation and use in higher education. Recognizing these challenges and executing strategies to overcome obstacles will enable educational institutions to refine the adoption process and employ AI technologies to improve educational outcomes [2].

2.5. User Experience (UX)

User Experience (UX) plays a key role in the use of AI technologies, especially in higher education. UX refers to users’ overall perceptions of, satisfaction with, and interactions with AI platforms and applications. Understanding the factors that affect UX is crucial for improving user adoption, engagement, and the use of AI in educational settings [44,45]. Many studies stress the necessity of UI design that caters to the different needs and preferences of users, particularly those with poor reading skills. Numerous studies have highlighted the essential nature of usability, accessibility, and user-centric design principles in crafting artificial intelligence (AI) applications that are intuitive, captivating, and inclusive [44].
By creating intuitive user interfaces that are responsive to users’ needs, colleges and universities can vastly improve the UX of AI technologies at the higher education level. Furthermore, AI’s entrance into the healthcare space has reaffirmed UX’s role in fostering trust and acceptance among doctors and patients. Transparency, reliability, and ease of use all contribute to positive user behavior and perceptions of AI in medical environments [45]. With an emphasis on transparent, trustworthy, and simple AI system design, healthcare providers can enhance UX and build user trust. Goal-setting theory has also been applied to understand user adoption intentions for AI-enabled mobile applications. Researchers have delved into the impact of users’ internal states on their behavior and attitudes toward AI-powered mobile applications, considering AI technologies' intelligent and anthropomorphic characteristics [51]. To design AI applications that resonate with user expectations and preferences, it is essential to grasp users’ perceptions of AI technology and its influence on their goal-setting behavior. Moreover, AI-driven systems, such as mobile fitness apps, have been scrutinized from a UX perspective. Employing goal-setting theory, researchers have explored how users’ views on informational and emotional support influence their adoption and ongoing engagement with AI-powered applications [51]. Enhancing user engagement and satisfaction over the long term can be achieved by improving the UX of AI technologies with personalized and supportive elements. For higher education and other sectors to boost user acceptance and engagement, prioritizing UX in designing and deploying AI-based technologies is imperative. By concentrating on usability, accessibility, transparency, and personalization, organizations can refine the UX of AI systems, leading to enriched user experiences and heightened adoption and use of AI technologies.

2.6. User Satisfaction

User satisfaction plays a major role in ensuring the adoption and continued use of AI-based technologies, especially in higher education institutions. It refers to the overall contentment and positive experiences people have when interacting with AI systems and resources. Understanding the elements that affect user satisfaction is essential for improving engagement, acceptance, and implementation of AI technologies in educational environments. Research has emphasized the significance of user satisfaction in various AI-powered systems, including mental health chatbots and mobile banking applications. Studies indicate that factors such as usability, service quality, and anthropomorphism influence user contentment and their continued use of AI technologies [51,84]. User satisfaction is pivotal to the successful deployment and continued use of artificial intelligence (AI) technologies within higher education. It pertains to how well users’ expectations and actual experiences correspond with the effectiveness and advantages offered by AI systems. Recognizing the elements that affect user satisfaction is vital for fostering favorable user perceptions, involvement, and sustained use of AI technologies in academic environments [25,75]. Moreover, integrating AI into educational settings has underscored the importance of user satisfaction in fostering the effective implementation of technology. Elements like system performance, ease of operation, and perceived advantages have been recognized as crucial factors influencing user satisfaction and acceptance of AI-driven tools in education, both in teaching and administrative tasks [75]. By emphasizing user-centered design and features, educational establishments can improve user satisfaction and ease the seamless incorporation of AI technologies into educational activities. Moreover, the advancement of AI technology in education depends on feedback systems that capture user satisfaction and preferences. The research underscores the importance of user feedback, usability testing, and iterative design in enhancing user satisfaction and driving innovation in educational AI applications [51]. By integrating user feedback into the development of AI technologies, educational institutions can ensure that AI systems meet user expectations and enhance learning outcomes. In conclusion, it is imperative to consider user satisfaction when designing, implementing, and testing AI technology to promote positive user experiences and smooth adoption of AI in higher education. By prioritizing factors that boost user satisfaction, such as perceived usefulness, expectation fulfillment, and individualized experiences, education providers can enhance user interaction with and use of AI technology for better learning outcomes.

2.7. Performance Expectation

Performance Expectation (PE) is an essential factor influencing the adoption intentions of AI-based technologies in different sectors, including higher education. The research conducted by [3] demonstrated that performance expectancy significantly and positively impacts behavioral intention, among other factors. This highlights the importance of PE in shaping individuals’ attitudes towards adopting new technologies. Similarly, Rasheed et al. classified the existing literature on AI adoption into drivers and barriers, emphasizing the role of PE in influencing adoption decisions [70]. In examining e-learning adoption among students, Tarhini et al. highlight the significance of factors influencing student adoption behaviors, such as performance expectations. Their research enriches academic discourse by incorporating a range of variables and evaluating them within the context of UK universities, offering significant perspectives on the influence of PE on students’ acceptance of technology [77]. Lee et al. investigated how satisfaction with the University Information Disclosure System correlates with perceived performance outcomes, emphasizing the importance of fulfilling information needs to enhance enterprise agility; this work underscores the role of PE in shaping users’ perceptions of technology performance within university systems and highlights the need to align technology with user expectations [50]. They also investigated the influence of quality library information resources on the satisfaction of postgraduate students at the Ignatius Ajuru University of Education library. The study emphasized the significance of pertinent information resources and their accessibility in boosting user satisfaction. It stressed the crucial role of the physical environment in determining users’ contentment with library services, pointing out the necessity for sufficient resources to fulfill user expectations [50].

2.8. Introducing AI New Tools

Implementing new AI technologies in different organizational contexts, including universities, is impacted by a range of technological and socio-environmental factors. A study conducted by Henke examined university communication tactics and viewpoints on generative AI tools. The findings revealed significant variations in adopting AI tools among universities, which can be attributed to disparities in their approaches. The variation highlights the different methods and tactics used by universities to incorporate and utilize AI capabilities in their teaching settings [40]. In their study, Okunlaya et al. presented a novel conceptual framework that explores the use of AI library services to alter university education digitally. They highlighted the need to adopt AI technology to improve service delivery and encourage new behaviors in educational institutions [58]. The study conducted by Dora et al. revealed key factors that play a crucial role in the adoption of artificial intelligence in food supply chains. These factors include technology readiness, security, privacy, customer satisfaction, demand volatility, regulatory compliance, competitor pressure, and information sharing among partners. The study emphasizes the importance of these factors in promoting the adoption of AI. These aspects are essential in influencing the adoption of AI technology in many areas, such as education [25]. In addition, Gupta and Gupta highlights the combined effectiveness of need-based and curiosity-based experimentation in the adoption of AI technology for libraries. This study offers a holistic approach to controlling the adoption of AI technology in the educational scope [36]. In Sallu’s study, the focus is on the utilization of artificial intelligence (AI) in higher education. The study provides valuable information on how AI technologies can bring about significant changes and improvements in academic settings [73]. Saidakhror examines the influence of artificial intelligence on higher education and the economic aspects of information technology, demonstrating the various uses of AI tools in educational environments. Ultimately, the implementation of AI technologies in higher education institutions is impacted by various elements, such as technology preparedness, organizational backing, and individual perspectives. Comprehending these factors is crucial for universities to negotiate the intricacies of AI implementation, improve teaching and learning methods, and stimulate innovation in educational environments [72].

2.9. AI Strategic Alignment

AI strategic alignment (AIS) is one of the biggest drivers of the use and deployment of AI within higher education, particularly universities. Many studies have captured the strategic implications of AI adoption and how AI activities align with institutional objectives and goals in the university context, which further indicates the importance of AIS in ensuring successful technology integration. Jarrahi et al. discussed in greater depth the strategic benefits of AI and its profound influence on the improvement of organizational learning. The study indicates the significance of AIS in supporting innovation in organizations and the effective operation of knowledge assets. In its attention to the gradual, pathway-driven adoption of AI in organizational life cycles and continuous learning strategies, this study accentuates the strategic implications of AIS in guiding organizations towards long-term success [47]. Okunlaya et al. developed a new theoretical model for digitizing university teaching using AI library services. The study focuses on the importance of adopting AI in the context of university library services in order to support educational outcomes and customer experiences, as well as the significance of AI systems in driving digital transformation efforts. According to the literature, the strategic coherence of artificial intelligence (AI) is crucial for its successful deployment and incorporation in higher education. Understanding the strategic implications of AI adoption and aligning AI initiatives with university objectives is essential for universities to successfully leverage AI technology and encourage innovation in the classroom [58].

2.10. Availability of Resources

The availability of resources (AVR) is essential for successfully implementing and integrating AI-based technologies in higher education institutions, including universities. Multiple studies have examined the necessity of having sufficient resources to facilitate the adoption of technology and innovation in universities, particularly concerning AI efforts. Bearman and Ajjawi examined the educational significance of artificial intelligence in education, highlighting educators’ need to modify their instructional approaches to integrate AI technologies proficiently. This study emphasizes the importance of having access to resources, such as AI tools and educational materials, in equipping students for a future that involves AI technology, and it highlights how resources play a crucial role in influencing teaching methods in higher education [10]. Another study examined the factors that influence graduates’ employability, emphasizing the significance of resource availability, such as high-quality education and skills development programs, in improving graduates’ capacity to find employment and achieve success in the job market; it underscores the value of resources in supporting student outcomes [35]. Boonsiritomachai et al. emphasized the significance of technical resources in influencing the acceptance of new technologies. They underlined the relevance of physical assets, such as networking, data, and computer hardware, in promoting the adoption of technology. This study emphasizes the importance of collaborative resources in establishing a scalable and adaptable basis for business applications, emphasizing the crucial role of technical resources in facilitating the integration of technology [13].
Availability of Resources (AVR) is a crucial element in facilitating the effective implementation and assimilation of AI-based technologies in higher education institutions. Universities must have sufficient resources and match their strategies with organizational goals in order to effectively utilize AI technologies and promote innovation in the education industry.

2.11. Competitive Pressure (COP)

Competitive pressure (COP) greatly influences the willingness of higher education institutions, especially universities, to adopt AI-based technology. Multiple studies have emphasized the significance of competitive pressure in promoting the adoption of technology and innovation in universities. These studies show the strategic necessity of effectively responding to competitive market dynamics. Hungund and Mani emphasized that organizational decisions about technology adoption are influenced by the external environment in which the company operates. They emphasized the strategic necessity of competitive pressure in order for firms to remain competitive in highly competitive markets. This research highlights the importance of competitive pressure in driving businesses to adopt new technologies to increase performance and increase their likelihood of survival in competitive environments [43]. Alsheibani identified competitive pressure as a driving force for the spread of innovative technologies, highlighting its significance in allowing companies to compete efficiently and preserve their competitive edge in the market. This research emphasized that companies operating in high-pressure situations tend to embrace innovative technologies to enhance their performance and enhance their likelihood of survival, emphasizing the strategic significance of addressing competitive challenges through technology adoption [7]. As Porter and Millar also pointed out, IT innovation can change the organization of industries, modify rules of competition and reshape the landscape of competition, thus revealing the game-changing effect of technology on competition. The study emphasized that firms could take a step ahead in utilizing cutting-edge technology and AI to improve their services and differentiate from competitors in the market. This research illustrates how Competitive pressure (COP) has an enormous impact on AI adoption intentions in higher education. In order to remain competitive, universities must adopt and innovate with technology to improve their performance and drive innovation in the education sector [67].

2.12. Government Regulations (GOR)

Government regulations (GOR) are essential in influencing the willingness of higher education institutions, namely universities, to adopt AI-based technologies. Multiple studies have emphasized the importance of government laws and regulations in promoting IT breakthroughs, such as integrating AI technology, in university environments. [5,25,60] highlighted the crucial significance of governmental policies and laws in incentivizing the use of AI technology. These studies highlighted the support offered by state authorities in encouraging the implementation of AI technology through regulatory frameworks and guidelines, demonstrating the government’s dedication to advancing technological progress in different industries, comprising higher education. Wong and Yap highlight government rules' significance in shaping the adoption of artificial intelligence in accounting inside micro, small, and medium firms. This exemplifies the extensive scope of government rules in influencing the deployment of AI technologies in many business contexts [81]. In addition, Dora et al. emphasize that government regulators can exert pressure on companies to implement new technologies in their supply chain operations, demonstrating the impact of government regulations on technological progress in organizational procedures [25]. Ghani et al. highlight the essential role of government assistance in promoting the adoption of AI, underscoring the importance of government rules in influencing the acceptance of technology [32]. Within the domain of responsible AI governance, [4] proposes that although governments are beginning to enforce formal regulatory measures to oversee AI, there is a requirement for ethics frameworks and "soft law" tools to supplement these rules. Moon proposes the notion of AI participatory governance, which involves several stakeholders, including government regulators, to ensure the fair and constructive utilization of AI for society’s progress [55]. Farida et al. examine the impact of government AI systems on obtaining accountable results, emphasizing the crucial role of government rules in guaranteeing responsible development and implementation of AI technology [30]. In their study, Noordt and Misuraca put out a comprehensive framework that aims to enhance the effective implementation of AI systems in government settings. They highlight the various elements that play a role in AI adoption, going beyond just the technical components. Government regulations (GOR) significantly impact the willingness of organizations, particularly higher education institutions, to use AI-based technologies. The interplay of regulatory frameworks, ethical considerations, and stakeholder involvement highlights the intricate nature of AI governance, where responsible practices are essential for optimizing the advantages of AI technologies while mitigating associated risks. An intricate comprehension of government regulations and their consequences is crucial for promoting innovation, guaranteeing adherence, and propelling sustainable technological advancement [57].

2.13. Technological Support

Technological support also determines the likelihood that higher education institutions will adopt AI technologies. Several studies have highlighted the importance of technological support for artificial intelligence adoption in educational settings. Hannan highlights the revolutionary influence of technological breakthroughs on higher education operations by demonstrating successful implementations of AI technologies in improving student learning experiences, student support services, and enrollment administration systems within educational institutions [38]. According to Greiner et al., the Technology Acceptance Model (TAM) and the Four-Sides Communication Model can be employed to understand how humans interact with AI and the adjustments needed for acceptance, specifically in the context of grading dissertations. This emphasizes the significance of matching technological assistance with the needs and expectations of users [35]. Crompton and Song explore how AI benefits students and faculty in higher education. They highlight how AI enables personalized learning and intelligent tutoring systems, facilitates collaboration, and automates grading. Such instances demonstrate how technological support improves educational processes [23]. Mohsin et al. highlight various factors that contribute to creating a favorable environment for the adoption of artificial intelligence (AI). These factors include sufficient funding, technological infrastructure, IT support, training programs, and institutional policies that promote the integration of AI. The authors emphasize the diverse range of technological support required to facilitate the adoption of AI [54]. In the study by [9], an analysis is conducted on faculty attitudes, technology readiness, curriculum reform needs, and policy implications in the integration of AI in information technology education. The study highlights the intricate nature of providing technological support in educational environments. Opesemowo and Adekomaya examine the qualitative elements of utilizing AI to promote Sustainable Development Goals in South Africa’s higher education system. The study highlights the need for technological assistance in promoting sustainable educational practices [59]. Polyportis performs a long-term investigation of the acceptance of AI, providing practical advice for AI developers and educational institutions to enhance student involvement with AI technologies and highlighting the ever-changing nature of technology assistance in educational settings. Ultimately, the presence of technological support plays a crucial role in determining the likelihood of AI-based technologies being adopted in higher education. The reviewed papers highlight the various ways in which technological support enhances educational processes, ranging from enhancing student learning experiences to optimizing administrative procedures. Comprehending the complex and diverse aspects of technological assistance is crucial for higher education institutions to successfully incorporate AI technologies, enhance student involvement, and foster innovation in instructional methods [65].

2.14. Facilitating Conditions

Facilitating conditions play a critical role in shaping the likelihood of AI-based technologies being adopted in different organizational settings, such as higher education institutions. According to Eftimov and Kitanoviki, facilitating conditions refer to the conducive surroundings and incentives that empower individuals to recognize the advantages of using AI technologies [28]. Tanantong and Wongras establish a connection between enabling conditions and individuals’ perceptions of the resources and support required for various behaviors [76]. Jain et al. highlight that facilitating conditions rely on users’ perceptions regarding the accessibility of assistance and resources for utilizing technology within organizations [46]. Mohsin et al. stress the positive effect of accommodating environments on effort expectancy, suggesting that a supportive setting increases users’ willingness and ability to engage effectively with AI systems [54]. Morrison examines barriers and facilitators to AI deployment in clinical settings in the NHS. Facilitating factors thus shape the adoption strategies of AI technologies across multiple organizational settings. Creating optimal conditions, setting priorities, and establishing the right environment are essential to using technology effectively. Understanding and providing appropriate environments are essential for organizations, in particular higher education institutions, to integrate AI, enhance the user experience, and drive creativity in adoption processes [56].

2.15. AI Adoption Intention

Factors driving the decision to bring AI into universities include technological innovation, socio-environmental aspects, and institutional factors. Chen et al. explored the conditions leading to the successful application of AI in China’s telecommunications sector. The results of their study offer valuable insights for companies on how to choose and allocate resources when adopting AI, and the research pointed to the importance of understanding the factors leading to successful adoption of AI in a particular industry context [21]. Furthermore, Chen et al. investigated the determinants impacting the acceptance of big data analytics and artificial intelligence in connection with operational effectiveness. The study highlighted the correlation between operational performance and environmental performance [20]. Furthermore, Bughin investigated the impact of a company’s AI strategy on employment growth, specifically analyzing the strategic goals and resources affected by the use of AI. The study provided useful insights into the consequences of using artificial intelligence for the internal operations and various resources of enterprises [15]. Govindan highlighted the significance of artificial intelligence in advancing sustainable and economically efficient innovation. He emphasized the importance of vendors integrating AI-based processes in order to effectively adopt these technologies [33]. Horani et al. (2023) provide valuable insights into the key elements that influence the implementation of artificial intelligence within organizations. By incorporating these findings into the discourse on the intention to embrace artificial intelligence (AI), organizations, especially higher education institutions, can gain a comprehensive understanding of the key elements that affect technology adoption decisions and can use this understanding to navigate the process of integrating AI successfully. In summary, the choice to use AI-driven technologies in higher education institutions is influenced by factors such as technological support, favorable conditions, and regulatory frameworks. Understanding these attributes is crucial for institutions to successfully manage the use of artificial intelligence and foster innovation for enduring and sustainable growth.

3. Objectives

This study aims to identify and assess the fundamental technological and socio-environmental parameters that can influence the decision of universities to adopt AI technologies. The research aims to understand the factors related to the intention to implement AI technologies, namely Compatibility, Complexity, User Experience, Perceived Usefulness, Perceived Ease of Use, User Satisfaction, Performance Expectation, AI Strategic Alignment, Availability of Resources, Competitive Pressure, Government Regulations, Technological Support, and Facilitating Conditions.
Furthermore, the study aims to analyze the effect of demographic attributes such as Age, Gender, Education, and Years of experience on the relationships between the above parameters and the intention to use AI-based technology. The study aims to acquire a holistic understanding of how personal attributes are likely to interact with the tool and its implication on understanding the factors governing the decision about AI implementation and its effect on organizational transformation.

4. Methodology

4.1. Participants

The study collected data from a diverse panel of 500 respondents, including university students and professional workers in higher education. This ensures that the contexts in which they might use AI-based technologies in their professional lives or higher education settings are considered. Combining the viewpoints of learners and practitioners also helps generate a comprehensive understanding of individuals’ intentions to adopt AI in educational settings.

4.2. Data Collection

Drawing on multiple data sources also helped the team capture as many variables as possible in order to build a strong theory explaining the factors behind the intention to adopt AI-enabled technology, in particular: the intention to adopt learning technologies, the intention to adopt teaching technologies, and the general technological self-efficacy of staff. By integrating quantitative and qualitative designs, we achieved a thorough and nuanced understanding of the hypothesized intentions to adopt AI-based tools and technologies in the higher education environment.

4.3. Data Analysis

The investigation utilized a quantitative content analysis methodology to determine recurring themes, connections, and patterns within the data collected from questionnaires. Surveys were used to quantitatively assess factors such as Compatibility, Complexity, UX, perceived usefulness (PU), perceived ease of use (PEOU), User Satisfaction, Performance Expectation (PE), AI strategic alignment (AIS), Availability of Resources (AVR), Competitive Pressure (COP), Government Regulations (GOR), Technological Support, and Facilitating Conditions, and the effects of these factors on the level of AI adoption intention. The analysis considered age, gender, and years of experience as potential moderating variables.

5. Research Methodologies

5.1. Research Question

How do the factors that influence the adoption intentions of AI-based technologies in higher education institutions align with the Technology Acceptance Model (TAM) and the Technology-Organization-Environment (TOE) framework? Additionally, what are the mediating roles of demographic variables in these relationships?
The study examines the combined impact of Key Factors (Compatibility (C), Complexity (CX), User Interface (UX), Perceived Ease of Use (PEOU), User Satisfaction (US), Performance Expectation (PE), AI introducing new tools (AINT), AI Strategic Alignment (AIS), Availability of Resources (AVR), Competitive Pressure (COP), Government Regulations (GOR), Technological Support (TS), and Facilitating Conditions (FC)) on AI adoption intentions among higher education students and faculty members in Turkey, Canada, and the USA. Additionally, the study aims to understand the moderating roles of age, gender, and years of experience in this relationship.

5.2. Deductive Approach

The current investigation utilized a deductive methodology. The deductive approach is typically initiated by formulating a hypothesis and then subjecting it to careful observation or data collection. In this particular inquiry, the investigation began with a theoretical framework concerning the influence of the Key Factors on AI adoption intentions and then tested this theoretical perspective using empirical data from higher education students and faculty members.

5.3. Population

The participants in this study are located in Turkey, Canada, and the USA. The population consists of higher education students and faculty members who hold roles that enable them to offer useful feedback on the impact of the influential factors on AI adoption intentions.

5.4. Sample Size Calculation

The sample size of the study, 367 university students and workers, can be estimated using the sample size formula for a qualitative (proportion-based) variable. This approach is often applied to prevalence or cross-sectional studies to ensure the sample is large enough to provide statistical power and allow proper rejection of the null hypothesis if needed. When the population size is unknown, the sample size is calculated as n = z²pq / d², where z is the standard normal value for the chosen confidence level, p is the expected proportion, q = 1 − p, and d is the desired margin of error. Using this formula with appropriate values of z, p, q, and d, the sample size can be calculated for the study involving 367 participants in higher education [6].
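As a hedged illustration, the short sketch below computes this sample-size formula in Python. The z, p, and d values shown are common defaults (95% confidence, maximum variability, 5% margin of error), not figures reported by the authors, so the resulting n is illustrative only.

```python
import math

# Sample size for a proportion when the population size is unknown: n = z^2 * p * q / d^2
z = 1.96   # assumed z-value for a 95% confidence level
p = 0.5    # assumed expected proportion (0.5 maximizes the required sample)
q = 1 - p
d = 0.05   # assumed margin of error

n = (z ** 2) * p * q / (d ** 2)
print(math.ceil(n))  # 385 with these illustrative inputs
```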

5.5. Sampling Method

To sample 367 higher education students and faculty members, the study used a purposive (intentional) sampling method to select participants who meet certain criteria relevant to the study. Purposive sampling allows researchers to deliberately pick those with specific attributes or experiences necessary to fully address the study’s goals.

5.6. Rationale for Purposive Sampling

Purposive sampling was employed to recruit those with direct or indirect experience of, or participation in, technology adoption processes within the education sector, allowing a more in-depth look into what drives higher education’s willingness to adopt AI-based technologies. This selective approach ensures that the sample includes individuals with direct insight into what AI tools can be used for, and how, in higher education (Kharis, 2023).

5.7. Theoretical Background and Hypothesis Structuring

Camisón and Villar-López defined innovation adoption as “the implementation of a new or significantly improved product (goods or services) or process, a new marketing method, or a new organizational method in business practices, workplace organization or external relations”. In other words, organizational innovation adoption refers to a firm’s explicit choice to adopt or use a new technology [16]. AI is an advanced and innovative technological domain [39]. Applying existing technology adoption models to study the implementation of AI at the organizational level poses significant difficulties, because AI encompasses all aspects of organizations, including their procedures, data, talents, strategies, and structures [27]. Therefore, in order to examine higher education organizations’ implementation of AI, this article utilizes three frameworks: the Diffusion of Innovation Theory (DOI) [71], the Technology Acceptance Model (TAM) [24], and the Technology-Organization-Environment (TOE) framework [78].
DOI, or Diffusion of Innovation, is the process by which an innovation or technology is shared among members of a social group over time through certain channels of communication [17]. The theory suggests that the spread of a new idea or technology is mainly determined by how people perceive it and by the specific features of the technology itself. Diffusion is the process by which businesses, individuals, communities, or subsystems acquire and fully embrace innovative notions, such as new technology, to make progress in science and education. TAM does not incorporate a social element. Although UTAUT includes social influence as a primary element in its model, it does not include the attitude variable. Attitude significantly determines the behavioral intention to use a specific technology, as showcased in education (Breiki et al., 2023). The DOI theory and the TOE framework share significant commonalities, as noted by Baker (2012). The technological and organizational aspects of the TOE framework correspond to the innovation characteristics and organizational context of the DOI model, respectively [68]. However, there are significant distinctions between DOI and TOE [61]. Unlike the DOI, the TOE framework does not account for the specific characteristics of individuals; conversely, unlike the TOE model, the DOI theory ignores environmental influences. Adding the TOE and TAM elements is generally accepted as essential to overcome the limitations of DOI theory in the context of technology adoption across multiple contexts [1,11,69]. Therefore, by aligning DOI, TOE, and TAM in a single framework, we can investigate the external and internal drivers behind an organization’s adoption of Artificial Intelligence (AI). This means that the combined DOI-TOE-TAM model (Figure 1) is suitable for explaining the technological and socio-environmental aspects of AI implementation in organizations.

5.8. Research Hypotheses

H1: 
There is a statistically significant impact of Key Factors (Compatibility (C), Complexity (CX), User Interface (UX), Perceived Ease of Use (PEOU), User Satisfaction (US), Performance Expectation (PE), AI introducing new tools (AINT), AI Strategic Alignment (AIS), Availability of Resources (AVR), Competitive Pressure (COP), Government Regulations (GOR), Technological Support (TS), and Facilitating Conditions (FC)) together on AI adoption intentions at P ≤ 0.05.
This hypothesis was divided into 13 sub-hypotheses:
H1a: 
There is a statistically significant impact of Compatibility (C) on AI adoption intentions at P ≤ 0.05.
H1b: 
There is a statistically significant impact of Complexity (CX) on AI adoption intentions at P ≤ 0.05.
H1c: 
There is a statistically significant impact of User Interface (UX) on AI adoption intentions at P ≤ 0.05.
H1d: 
There is a statistically significant impact of Perceived ease of use (PEOU) on AI adoption intentions at P ≤ 0.05.
H1e: 
There is a statistically significant impact of User Satisfaction (US) on AI adoption intentions at P ≤ 0.05.
H1f: 
There is a statistically significant impact of Performance Expectation (PE) on AI adoption intentions at P ≤ 0.05.
H1g: 
There is a statistically significant impact of AI introducing new tools (AINT) on AI adoption intentions at P ≤ 0.05.
H1h: 
There is a statistically significant impact of AI Strategic Alignment (AIS) on AI adoption intentions at P ≤ 0.05.
H1i: 
There is a statistically significant impact of Availability of Resources (AVR) on AI adoption intentions at P ≤ 0.05.
H1j: 
There is a statistically significant impact of Competitive Pressure (COP) on AI adoption intentions at P ≤ 0.05.
H1k: 
There is a statistically significant impact of Government Regulations (GOR) on AI adoption intentions at P ≤ 0.05.
H1l: 
There is a statistically significant impact of Technological Support (TS) on AI adoption intentions at P ≤ 0.05.
H1m: 
There is a statistically significant impact of Facilitating Conditions (FC) on AI adoption intentions at P ≤ 0.05.

5.9. The Research Model

Figure 1 shows the study model, which includes the independent variables (technological factors, organizational factors, and environmental factors) and the dependent variable, AI adoption intention.

5.10. Data Analysis

A descriptive statistical analysis was performed to summarize the sample characteristics, and both simple and multiple regression analyses were conducted to examine the relationships between the external variables and the other study elements. The acquired data were analyzed and the hypotheses of the study were tested using SPSS® Amos.
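For readers who prefer an open-source workflow, the following Python sketch illustrates how such a multiple regression could be run. The file name and column names (e.g. `survey_responses.csv`, `AI_adoption_intention`) are hypothetical placeholders, since the dataset is not published with this excerpt, and the authors' actual analysis was performed in SPSS Amos.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data file and column names used purely for illustration.
df = pd.read_csv("survey_responses.csv")
predictors = ["C", "CX", "UX", "PEOU", "US", "PE", "AINT",
              "AIS", "AVR", "COP", "GOR", "TS", "FC"]

X = sm.add_constant(df[predictors])   # design matrix with an intercept term
y = df["AI_adoption_intention"]

model = sm.OLS(y, X).fit()            # ordinary least squares multiple regression
print(model.summary())                # coefficients, p-values, and R-squared
```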

5.11. Descriptive Analysis

1)
Sample Characteristics
Table 1 presents the demographic variables of the study sample; the male respondents were 51% and the female respondents were 49%. The majority of respondents' ages are between 34 and 44 years old (47.7%). The majority of respondents held a PhD degree (59.2%). The most common educational major was IT (37.1%), while respondents with other majors (32.7%) were distributed as follows: Languages 16, Structural Design 7, Engineering 22, Education 21, Biology 5, Economics 13, Science 18, Marketing 9, MIS 5, and Finance 4. The majority of respondents' work experience (44.7%) was 10 years and above. Regarding AI usage, 25.6% of respondents have used AI tools or apps for less than 6 months, 15% for 6 months to less than 1 year, 13.1% for 1 year to less than 2 years, and 46.3% for 2 years or more. The majority of respondents preferred the Windows PC operating system for their preferred AI tool (73.3%), while the other respondents use the Linux operating system.
2)
What Type of AI Tools Do You Use for Your Work or School Needs?
Table 2 shows that 74.7% of respondents use the ChatGPT tool for their work or school needs, 36.8% use QuillBot, 67.6% use Grammarly, 9.8% use Scholarcy, 11.7% use Scite, 18.5% use pdf.ai, and finally 6.5% use other tools for their work or school needs (Scispace ai, Tome AI, Cognigy.ai, Copilot, Generative AI by Adobe, Rytr Deep learning, OpenAI API Key).
3)
How Has Management Supported the Usage of AI in Your Workplace?
Table 3 shows that 21.3% of respondents reported that management supported the usage of AI in their workplace through conferences, 29.4% through workshops, 34.9% through training, 22.6% through all three (conferences, workshops, and training), and finally 18.8% through other means.
4)
What Are Some of the Resources That You Believe Support the Adoption of AI in Your Organization?
Table 4 shows that 19.1% of respondents believe that the resources supporting the adoption of AI in their organization are application processes, 15.3% cite collaboration strategies, 23.2% cite IT development plans, 26.4% cite technical knowledge/skills, 48.2% cite all of the resources mentioned above, and finally 2.5% believe that other resources support the adoption of AI in their organization.
5)
What Are Some of the Assistances Offered by State Authorities to Motivate the Adoption of AI?
Table 5 shows that 26.4% of respondents believe that social attitudes about morals and ethics promoted by state authorities motivate the adoption of AI, 19.3% cite guidelines for the development of AI applications, 33.5% cite the protection of privacy and ownership rights, 28.1% cite all of the above, and finally 13.1% believe that other forms of assistance offered by state authorities motivate the adoption of AI.
6)
What Technological Support Does Your Organization Have to Support the Adoption of AI?
Table 6 shows that 24.3% of respondents believe that their organization has supportive AI in-house software, 20.2% believe it has operating systems that support AI, 20.4% believe it has a supportive AI in-house network, 47.7% believe that their organization is not yet there (none of the above), and finally 2.2% believe that their organization has other technological support for the adoption of AI.

5.12. Testing the Model

1)
Confirmatory Factor Analysis
Confirmatory factor analysis (CFA) is used to validate the factor structure of the collection of observed variables (the factor loadings). Convergence validity and composite reliability (CR) are evaluated. Table 7 below displays the findings. Discriminant validity is seen in Table 8.
Given that the recommended factor loading is 0.50 or higher, and ideally 0.70 or higher (Bollen, 2014), Table 7 demonstrates that all of the item loadings range from 0.621 to 0.874; the results are therefore accepted.
Convergent validity is evaluated using composite reliability (CR) and average variance extracted (AVE). The CR values range from 0.757 to 0.905, all above the 0.70 threshold, indicating strong internal consistency. The AVE values range from 0.512 to 0.714, all above the 0.50 cut-off that justifies the use of the construct. All latent variables therefore satisfy the requirements for convergent validity [37].
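For reference, both indices can be computed directly from the standardized loadings reported in Table 7. A minimal Python sketch (not the software used in the study), illustrated with the Technological Support loadings:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Standardized loadings of TS1-TS3 as reported in Table 7.
ts = [0.621, 0.745, 0.772]
print(round(composite_reliability(ts), 3))        # approx. 0.757, cf. CR = 0.757 in Table 7
print(round(average_variance_extracted(ts), 3))   # 0.512, cf. AVE = 0.512 in Table 7
```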
All HTMT values in Table 8 are below 0.85, indicating no discriminant validity problems; Henseler et al. (2015) establish discriminant validity among reflective constructs when HTMT values are below 0.90 [41]. The results also indicate that, according to respondents' perceptions, there are no overlapping items across the constructs and no collinearity (multicollinearity) issues among the latent constructs.
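As a companion to Table 8, the HTMT ratio for a pair of constructs is the mean correlation between their items divided by the geometric mean of the mean within-construct item correlations. A minimal sketch with synthetic data (the item indices and generated matrix are hypothetical, for illustration only):

```python
import numpy as np

def htmt(item_corr, idx_a, idx_b):
    """HTMT: mean absolute heterotrait correlation divided by the geometric mean
    of the mean absolute within-construct (monotrait) correlations."""
    R = np.abs(np.asarray(item_corr))
    hetero = R[np.ix_(idx_a, idx_b)].mean()

    def mono(idx):
        block = R[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), k=1)
        return block[iu].mean()

    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Synthetic illustration: items 0-2 load on construct A, items 3-5 on construct B.
rng = np.random.default_rng(0)
n = 300
f_a = rng.normal(size=n)
f_b = 0.3 * f_a + rng.normal(size=n)               # modestly correlated latent constructs
X = np.column_stack([f_a + 0.6 * rng.normal(size=n) for _ in range(3)] +
                    [f_b + 0.6 * rng.normal(size=n) for _ in range(3)])
R = np.corrcoef(X, rowvar=False)
print(round(htmt(R, [0, 1, 2], [3, 4, 5]), 3))     # values below 0.85 (or 0.90) indicate
                                                   # adequate discriminant validity
```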
Based on the results of Table 7 and Table 8 above, the final best-fitting model is presented in Figure 2 below.
2)
Goodness of Fit
Several indices are used to assess the model's goodness of fit: the standardized root mean squared residual (SRMR), the comparative fit index (CFI), the Tucker-Lewis index (TLI), the normed fit index (NFI), the incremental fit index (IFI), and the root mean square error of approximation (RMSEA). The recommended cut-off values are: Chi-square χ2 (p > 0.05); normed chi-square 1.0 ≤ χ2/df ≤ 3; RMSEA ≤ 0.10; NFI ≥ 0.90; CFI ≥ 0.90; IFI ≥ 0.90; and TLI ≥ 0.90. Table 9 below displays the findings.
Table 9 shows that the SRMR value is below 0.08, indicating an excellent fit [42]. The CFI is above 0.95, indicating a very good fit [49], and the TLI exceeds 0.90, also indicating a good fit [74]. The NFI and IFI values are both above 0.90, indicating a good fit [42], and the RMSEA is at or below 0.10, indicating an acceptable fit (Brown, 2015).
Taken together, these indices indicate that the proposed measurement model fits the available data adequately.
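A small sketch restating the decision rules applied above to the Table 9 values (thresholds as cited in the text):

```python
# Reported fit indices from Table 9, checked against the cut-offs used in the text
# (Hu & Bentler, 1999, and the other cited sources).
fit = {"SRMR": 0.037, "CFI": 0.951, "TLI": 0.924, "NFI": 0.958, "IFI": 0.958, "RMSEA": 0.07}

checks = {
    "SRMR": fit["SRMR"] < 0.08,
    "CFI": fit["CFI"] > 0.95,
    "TLI": fit["TLI"] > 0.90,
    "NFI": fit["NFI"] > 0.90,
    "IFI": fit["IFI"] > 0.90,
    "RMSEA": fit["RMSEA"] <= 0.10,
}
for index, ok in checks.items():
    print(f"{index} = {fit[index]}: {'acceptable' if ok else 'poor'}")
```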

5.13. Testing the Hypotheses

A variance-based Structural Equation Model (SEM), estimated with Partial Least Squares (PLS), is used to evaluate the research hypotheses. Figure 3 shows the SEM model for the hypotheses.
1)
Testing the first hypothesis
H1: 
There is a statistically significant impact of the Key Factors (Compatibility (C), Complexity (CX), User Interface (UX), Perceived Ease of Use (PEOU), User Satisfaction (US), Performance Expectation (PE), AI introducing new tools (AINT), AI Strategic Alignment (AIS), Availability of Resources (AVR), Competitive Pressure (COP), Government Regulations (GOR), Technological Support (TS), and Facilitating Conditions (FC)) together on AI adoption intentions at a level of P ≤ 0.05.
This hypothesis was divided into 13 sub-hypotheses:
H1a: 
There is a statistically significant impact of Compatibility (C) on AI adoption intentions at a level of P ≤ 0.05.
H1b: 
There is a statistically significant impact of Complexity (CX) on AI adoption intentions at a level of P ≤ 0.05.
H1c: 
There is a statistically significant impact of User Interface (UX) on AI adoption intentions at a level of P ≤ 0.05.
H1d: 
There is a statistically significant impact of Perceived ease of use (PEOU) on AI adoption intentions at a level of P ≤ 0.05.
H1e: 
There is a statistically significant impact of User Satisfaction (US) on AI adoption intentions at a level of P ≤ 0.05.
H1f: 
There is a statistically significant impact of Performance Expectation (PE) on AI adoption intentions at a level of P ≤ 0.05.
H1g: 
There is a statistically significant impact of AI introducing new tools (AINT) on AI adoption intentions at a level of P ≤ 0.05.
H1h: 
There is a statistically significant impact of AI Strategic Alignment (AIS) on AI adoption intentions at a level of P ≤ 0.05.
H1i: 
There is a statistically significant impact of Availability of Resources (AVR) on AI adoption intentions at a level of P ≤ 0.05.
H1j: 
There is a statistically significant impact of Competitive Pressure (COP) on AI adoption intentions at a level of P ≤ 0.05.
H1k: 
There is a statistically significant impact of Government Regulations (GOR) on AI adoption intentions at a level of P ≤ 0.05.
H1l: 
There is a statistically significant impact of Technological Support (TS) on AI adoption intentions at a level of P ≤ 0.05.
H1m: 
There is a statistically significant impact of Facilitating Conditions (FC) on AI adoption intentions at a level of P ≤ 0.05.
The results of the SEM used to test these sub-hypotheses are presented in Table 10 below and summarized in the following points (the decision rule is also sketched in code after this list):
  • Compatibility (C) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2 (Byrne, 2013), so the path is significant and the first alternative sub-hypothesis is accepted.
  • Complexity (CX) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the second alternative sub-hypothesis is accepted.
  • User Interface (UX) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the third alternative sub-hypothesis is accepted.
  • Perceived Ease of Use (PEOU) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the fourth alternative sub-hypothesis is accepted.
  • User Satisfaction (US) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the fifth alternative sub-hypothesis is accepted.
  • Performance Expectation (PE) has a significant positive impact on AI adoption intentions: the p-value (0.001) is below 0.01 and the critical ratio exceeds 2, so the sixth alternative sub-hypothesis is accepted.
  • AI introducing new tools (AINT) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the seventh alternative sub-hypothesis is accepted.
  • AI Strategic Alignment (AIS) has a significant positive impact on AI adoption intentions: the p-value (0.003) is below 0.01 and the critical ratio exceeds 2, so the eighth alternative sub-hypothesis is accepted.
  • Availability of Resources (AVR) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the ninth alternative sub-hypothesis is accepted.
  • Competitive Pressure (COP) has an insignificant impact on AI adoption intentions: the critical ratio is below 2 and the p-value (0.421) is above 0.05 (Byrne, 2013), so the path is not significant and the tenth null hypothesis is accepted.
  • Government Regulations (GOR) has an insignificant impact on AI adoption intentions: the critical ratio is below 2 and the p-value (0.785) is above 0.05, so the path is not significant and the eleventh null hypothesis is accepted.
  • Technological Support (TS) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the twelfth alternative sub-hypothesis is accepted.
  • Facilitating Conditions (FC) has a significant positive impact on AI adoption intentions: the p-value (***) is below 0.001 and the critical ratio exceeds 2, so the thirteenth alternative sub-hypothesis is accepted.
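The decision rule applied in every point above is the same: the critical ratio is the estimate divided by its standard error, and a path is retained when the critical ratio exceeds 2 and the two-tailed p-value is below 0.05. A minimal sketch of that rule, assuming the large-sample normal approximation used in AMOS-style output (reported values may differ slightly because the tabled estimates are rounded):

```python
from scipy import stats

def evaluate_path(estimate, se, alpha=0.05):
    """Critical ratio (estimate / standard error) and two-tailed p-value for one path."""
    cr = estimate / se
    p = 2 * stats.norm.sf(abs(cr))
    return cr, p, (abs(cr) > 2 and p < alpha)

# Two illustrative rows reproduced from Table 10 (unstandardized estimate, standard error).
for label, est, se in [("CX -> AIA", 0.268, 0.044), ("FC -> AIA", 0.964, 0.039)]:
    cr, p, supported = evaluate_path(est, se)
    print(f"{label}: C.R. = {cr:.2f}, p = {p:.4f}, {'supported' if supported else 'not supported'}")

# Paths whose critical ratio falls below 2 with p > 0.05 (COP and GOR in Table 10)
# are treated as not supported, i.e., the corresponding null hypotheses are retained.
```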
2)
Testing the Second Hypothesis
H2: 
Demographic factors (gender, age, education, major, and years of experience) moderate the original relationship between Key Factors (Compatibility, Complexity, User Interface, Perceived Ease of Use, User Satisfaction, Performance Expectation, AI introducing new tools, AI strategic alignment, Availability of Resources, Competitive pressure, Government regulations, Technological Support, and Facilitating Conditions) together and AI adoption intentions.
The second main hypothesis is tested through multiple-group SEM analysis in AMOS for the five demographic variables it covers.
The results of the sub-hypotheses testing are presented in the following subsections.
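Each moderation test below compares the unconstrained multiple-group model with a model whose structural weights are constrained to be equal across groups; the CMIN value is a chi-square difference with the reported degrees of freedom, and a p-value below 0.05 indicates moderation. A minimal sketch of that decision rule (CMIN and df reproduced from Tables 13, 14, and 16 for illustration):

```python
from scipy import stats

def moderates(cmin, df, alpha=0.05):
    """Chi-square difference test between the unconstrained and the
    structural-weights-constrained multiple-group models."""
    p = stats.chi2.sf(cmin, df)
    return p, p < alpha

for name, cmin, df in [("Education", 4.624, 2), ("Major", 12.939, 4), ("Experience", 10.625, 4)]:
    p, significant = moderates(cmin, df)
    print(f"{name}: CMIN = {cmin}, df = {df}, p = {p:.3f}, moderation: {significant}")

# Expected output (up to rounding): Education p ~ 0.099 (no moderation),
# Major p ~ 0.012 and Experience p ~ 0.031 (both moderate the paths).
```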
Gender Moderation: Gender categorical moderation is examined, and the results are presented in Table 11 below.
Table 11 shows that the chi-square value (0.455) is not significant because the p-value (0.491) is above 0.05. The differences between the male and female groups are therefore negligible, i.e., gender does not moderate the original relationship.
Age Moderation: Age categorical moderation is examined, and the results are presented in Table 12 below.
Given that the p-value (0.322) is higher than (0.05), Table 12 demonstrates that the chi-square value (1.279) is not significant. This suggests that age has no discernible moderating influence on the first association between AI Key Factors (Compatibility, Complexity, User Interface, Perceived Ease of Use, User Satisfaction, Performance Expectation, AI introducing new tools, AI strategic alignment, Availability of Resources, Competitive pressure, Government regulations, Technological Support, and Facilitating Conditions) together and AI adoption intentions because there are no notable variations across the various age groups.
Education Moderation: The Education categorical moderation is examined, and the results are presented in Table 13 below.
Given that the p-value (0.099) is higher than 0.05, Table 13 shows that the chi-square value (4.624) is not significant. This suggests that education has no discernible moderating influence on the original association between the AI key factors together and AI adoption intentions, because there are no notable variations across the education-level groups.
Major Moderation: The Major categorical moderation is examined, and the results are presented in Table 14 below.
Table 14 shows that the chi-square value (12.939) is significant since the p-value (0.012) is less than 0.05, meaning there are significant differences between the major groups. This suggests that major has a moderating influence on the original association between the AI key factors together and AI adoption intentions.
The results in Table 15 show that all majors have a significant moderation effect (critical ratio greater than 2 and p-values less than 0.01), except the Medicine major, which has an insignificant moderation effect on the original relationship between the AI key factors and AI adoption intentions.
The Other majors group has the largest effect (0.764), followed by IT (0.692), Management (0.676), and Pharmaceutical (0.675).
Years of Experience Moderation: The experience categorical moderation is examined, and the results are presented in Table 16 below.
Table 16 shows that the chi-square value (10.625) is significant since the p-value (0.031) is less than 0.05, meaning there are significant differences between the experience groups. Years of experience therefore has a moderating influence on the original association between the AI key factors together and AI adoption intentions.
The results in Table 17 show that all categories of years of experience have a significant moderation effect, since the critical ratio is greater than 2 and the p-values are less than 0.01.
Years of experience of 8 to less than 10 years has the largest effect (0.907), followed by 10 years and above (0.760), 6 to less than 8 years (0.738), 2 to less than 6 years (0.666), and less than 2 years (0.339).
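As a consistency check on Table 17, each experience-group model has a single composite predictor (the AI key factors), so the R2 column is simply the square of the standardized effect, up to rounding:

```python
# With one composite predictor per group, R^2 equals the squared standardized effect,
# which is how the last two columns of Table 17 relate to each other.
effects = {
    "Less than 2 years": 0.339,
    "2 years - less than 6 years": 0.666,
    "6 years - less than 8 years": 0.738,
    "8 years - less than 10 years": 0.907,
    "10 years and above": 0.760,
}
for group, beta in effects.items():
    print(f"{group}: effect = {beta}, implied R^2 = {beta ** 2:.3f}")
```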

6. Conclusion and Future Work

6.1. Conclusions

The findings of this study identify several factors that drive higher education institutions to innovate and adopt Artificial Intelligence. All of the investigated technological and organizational factors, namely Compatibility (C), Complexity (CX), User Interface (UX), Perceived Ease of Use (PEOU), User Satisfaction (US), Performance Expectation (PE), AI introducing new tools (AINT), AI Strategic Alignment (AIS), Availability of Resources (AVR), Technological Support (TS), and Facilitating Conditions (FC), have a statistically significant positive impact on AI adoption intentions. The p-values for these factors were highly significant (p < 0.001 for most), so the alternative hypotheses for these factors are accepted. In contrast, Competitive Pressure (COP) and Government Regulations (GOR) showed no statistically significant impact on AI adoption intentions, and the null hypotheses for these factors are accepted.
Demographic factors were also examined as possible moderators. Gender, age, and education level did not significantly moderate the relationship between the key factors and AI adoption intentions, whereas significant moderation effects were observed for major (field of study) and years of experience, suggesting that these characteristics influence how individuals perceive and form their intentions to adopt AI in a higher education context.

6.2. Future Work and Recommendations

The results of this study offer a useful understanding of the elements that influence the intent of higher education institutions to adopt AI technology. Considering the notable effects that have been identified, several suggestions can be put forth for future endeavors and real-world implementations:
1)
Compatibility (C): The results indicate that Compatibility has a significant positive impact on AI adoption intentions. Further research should investigate how institutions might improve the compatibility of artificial intelligence (AI) technology with current systems and processes, to allow a more effortless adoption.
2)
Complexity (CX): Complexity also shows a significant positive impact on AI adoption intentions. Future work could explore methods to streamline AI technologies and reduce perceived complexity, promoting wider user acceptance.
3)
User Interface (UX): The positive impact of User Interface on AI adoption intentions underscores the need to craft user-friendly interfaces. Subsequent research should prioritize creating intuitive and easily accessible AI systems that address the varied requirements of individuals in higher education.
4)
Perceived Ease of Use (PEOU): The strong correlation between Perceived Ease of Use and AI adoption intentions indicates that institutions should prioritize providing training and support to boost users’ confidence in employing AI technologies. Subsequent studies could investigate the efficacy of various training programs in enhancing the perception of usability.
5)
User Satisfaction (US): User Satisfaction significantly influences AI adoption intentions, indicating that organizations must ensure a positive user experience with AI tools. Subsequent research should investigate the elements influencing user satisfaction and identify methods for improving it.
6)
Performance Expectation (PE): The findings reveal that Performance Expectation positively impacts AI adoption intentions. Future research should explore how organizations might effectively convey the anticipated advantages of AI technologies to prospective users.
7)
Demographic Variables: The study highlights the moderating roles of demographic variables, particularly major and years of experience. Further investigation is needed to explore how these characteristics affect the adoption of AI technology and to develop strategies accordingly. In summary, the results of this study highlight the importance of addressing the identified factors to strengthen the intent of higher education institutions to use artificial intelligence. Further research should examine these aspects in greater depth, offering practical knowledge for policymakers and educational administrators to promote the effective incorporation of AI in academic environments.

References

  1. Ahmad, S., Miskon, S., Alkanhal, T.A. and Tlili, I. (2020), “Modeling of business intelligence systems using the potential determinants and theories with the lens of individual, technological, organizational, and environmental contexts-a systematic literature review”, Applied Sciences, Vol. 10 No. 9, pp. 3208–3208.
  2. ALTakhayneh, S.K., Karaki, W., Hasan, R.A., Chang, B., Shaikh, J.M. and Kanwal,W. (2022), “Teachers’ psychological resistance to digital innovation in Jordanian entrepreneurship and business schools: moderation of teachers’ psychology and attitude toward educational technologies”, Frontiers in Psychology, Vol. 13. [CrossRef]
  3. Alalwan, A.A., Dwivedi, Y.K. and Rana, N.P. (2017), “Factors influencing adoption of mobile banking by Jordanian bank customers: extending utaut2 with trust”, International Journal of Information Management, Vol. 37 No. 3, pp. 99–110. [CrossRef]
  4. Alexander, C.S., Yarborough, M. and Smith, A. (2023). [CrossRef]
  5. Alghamdi, M.I. (2020), “Assessing factors affecting intention to adopt AI and ML: The case of the Jordanian retail industry”, Periodicals of Engineering and Natural Sciences (PEN), Vol. 8 No. 4, pp. 2516–2524. [CrossRef]
  6. Ali, M.D. and Hatef, E.A.J.A. (2024), “Types of sampling and sample size determination in health and social science research”, Journal of Young Pharmacists, Vol. 16 No. 2, pp. 204–215.
  7. Alsheibani, S., Messom, C. and Cheung, Y. (2020), “Re-Thinking the Competitive Landscape of Artificial Intelligence”, in “Proceedings of the Hawaii International Conference on System Sciences (HICSS)”, pp. 5861–5870.
  8. Arshad, Y., Chin, W.P., Yahaya, S.N., Nizam, N.Z., Masrom, N.R. and Ibrahim, S.N.S. (2018), “Small and medium enterprises’ adoption for e-commerce in Malaysia tourism state”, International Journal of Academic Research in Business and Social Sciences, Vol. 8 No. 10. [CrossRef]
  9. Bai, X. (2024), “The role and challenges of artificial intelligence in information technology education”, Pacific International Journal, Vol. 7 No. 1, pp. 86–92.
  10. Bearman, M. and Ajjawi, R. (2023), “Learning to work with the black box: pedagogy for a world with artificial intelligence”, British Journal of Educational Technology, Vol. 54 No. 5, pp. 1160–1173. [CrossRef]
  11. Beshdeleh, M., Angel, A. and Bolour, L. (2020), “Adoption of EBET Agency’s Cloud Casino Software by using TOE and DOI Theory as a Solution for Gambling Website”, Journal of Innovation and Business Research, Vol. 116, pp. 100–119.
  12. Bharadiya, J.P. (2023), “Machine learning and AI in business intelligence: Trends and opportunities”, International Journal of Computer (IJC), Vol. 48 No. 1, pp. 123–134.
  13. Boonsiritomachai, W., Mcgrath, G.M. and S, B. (2016), “Exploring business intelligence and its depth of maturity in Thai SMEs”, Cogent Business & Management, Vol. 3 No. 1, pp. 1220663–1220663. [CrossRef]
  14. Bozkurt, A., Karadeniz, A., Bañeres, D. and Rodríguez, M.E. (2021), “Artificial intelligence and reflections from educational landscape: a review of ai studies in half a century”, Sustainability, Vol. 13 No. 2. [CrossRef]
  15. Bughin, J. (2023), “Does artificial intelligence kill employment growth: the missing link of corporate ai posture”, Frontiers in Artificial Intelligence, Vol. 6. [CrossRef]
  16. Camisón, C. and Villar-López, A. (2011), “Non-technical innovation: organizational memory and learning capabilities as antecedent factors with effects on sustained competitive advantage”, Industrial Marketing Management, Vol. 40 No. 8, pp. 1294–1304. [CrossRef]
  17. Chang, H.C. (2010), “A new perspective on Twitter hashtag use: Diffusion of innovation theory”, Proceedings of the American Society for Information Science and Technology, Vol. 47, pp. 1–4. [CrossRef]
  18. Chatterjee, S., Ghosh, S.K. and Chaudhuri, R. (2020a), “Knowledge management in improving business process: an interpretative framework for successful implementation of ai-crm-km system in organizations”, Business Process Management Journal, Vol. 26 No. 6, pp. 1261–1281. [CrossRef]
  19. Chatterjee, S., Ghosh, S.K., Chaudhuri, R. and Chaudhuri, S. (2020b), “Adoption of ai-integrated crm system by indian industry: from security and privacy perspective”, Computer Security, Vol. 29 No. 1, pp. 1–24. [CrossRef]
  20. Chen, C., Chen, S., Khan, A., Lim, M.K. and Tseng, M. (2024). [CrossRef]
  21. Chen, H., Li, L. and Chen, Y. (2020a), “Explore success factors that impact artificial intelligence adoption on telecom industry in china”, Journal of Management Analytics, Vol. 8 No. 1, pp. 36–68. [CrossRef]
  22. Chen, L., Chen, P. and Lin, Z. (2020b), “Artificial intelligence in education: a review”, IEEE Access, Vol. 8, pp. 75264–75278.
  23. Crompton, H. and Song, D. (2021). [CrossRef]
  24. Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. (1989), “Technology acceptance model”, J Manag Sci, Vol. 35 No. 8, pp. 982–1003.
  25. Dora, M., Kumar, A., Mangla, S.K., Pant, A. and Kamal, M.M. (2021), “Critical success factors influencing artificial intelligence adoption in food supply chains”, International Journal of Production Research, Vol. 60 No. 14, pp. 4621–4640. [CrossRef]
  26. Duan, Y., Edwards, J.S. and Dwivedi, Y.K. (2019), “Artificial intelligence for decision making in the era of Big Data-evolution, challenges and research agenda”, International Journal of Information Management, Vol. 48, pp. 63–71.
  27. Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T. and Williams, M.D. (2021), “Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy”, International Journal of Information Management, Vol. 57, pp. 101994–101994.
  28. Eftimov, L. and Kitanovikj, B. (2023), “Unlocking the path to ai adoption: antecedents to behavioral intentions in utilizing ai for effective job (re)design”, Journal of Human Resource Management - HR Advances and Developments, Vol. 2023 No. 2, pp. 123–134. [CrossRef]
  29. Enholm, I.M., Papagiannidis, E., Mikalef, P. and Krogstie, J. (2022), “Artificial intelligence and business value: A literature review”, Information Systems Frontiers, Vol. 24 No. 5, pp. 1709–1734.
  30. Farida, I., Ningsih, W., Lutfiani, N., Aini, Q. and Harahap, E.P. (2023), “Responsible urban innovation working with local authorities a framework for artificial intelligence (ai)”, Scientific Journal of Informatics, Vol. 10 No. 2, pp. 121–126.
  31. George, B. and Wooden, O. (2023), “Managing the strategic transformation of higher education through artificial intelligence”, Administrative Sciences, Vol. 13 No. 9, pp. 196–196. [CrossRef]
  32. Ghani, E.K., Ariffin, N. and Sukmadilaga, C. (2022), “Factors influencing artificial intelligence adoption in publicly listed manufacturing companies: a technology, organisation, and environment approach”, International Journal of Applied Economics, Vol. 14 No. 2, pp. 108–117.
  33. Govindan, K. (2024), “How artificial intelligence drives sustainable frugal innovation: a multitheoretical perspective”, IEEE Transactions on Engineering Management, Vol. 71, pp. 638–655.
  34. Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P. and Kyriakidou, O. (2004), “Diffusion of innovations in service organizations: systematic review and recommendations”, The Milbank Quarterly, Vol. 82 No. 4, pp. 581–629. [CrossRef]
  35. Greiner, C., Peisl, T.C., Höpfl, F. and Beese, O. (2023), “Acceptance of ai in semi-structured decision-making situations applying the four-sides model of communication-an empirical analysis focused on higher education”, Education Sciences, Vol. 13 No. 9. [CrossRef]
  36. Gupta, V. and Gupta, C. (2023), “Synchronizing innovation: unveiling the synergy of need-based and curiosity-based experimentation in ai technology adoption for libraries”, Library Hi Tech News, Vol. 40 No. 9, pp. 15–17. [CrossRef]
  37. Hair, J.F., Ringle, C.M. and Sarstedt, M. (2011), “PLS-SEM: indeed a silver bullet”, The Journal of Marketing Theory and Practice, Vol. 19 No. 2, pp. 139–152.
  38. Hannan, E. (2021a, b), “Ai: new source of competitiveness in higher education”, Competitiveness Review: An International Business Journal, Vol. 33 No. 2, pp. 265–279.
  39. Harwood, S. and Eaves, S. (2020), “Conceptualising technology, its development and future: The six genres of technology”, Technological forecasting and social change, Vol. 160, pp. 120174–120174.
  40. Henke, J. (2024), “Navigating the ai era: university communication strategies and perspectives on generative ai tools”, Journal of Science Communication, No. 03, pp. 23–23. [CrossRef]
  41. Henseler, J., Ringle, C.M. and Sarstedt, M. (2015), “A New Criterion for Assessing Discriminant Validity in Variance-based Structural Equation Modeling”, Journal of the Academy of Marketing Science, Vol. 43 No. 1, pp. 115–135.
  42. Hu, L. and Bentler, P.M. (1999).
  43. Hungund, S. and Mani, V. (2019), “Benchmarking of factors influencing adoption of innovation in software product SMEs: An empirical evidence from India”, Benchmarking: An International Journal, Vol. 26 No. 5, pp. 1451–1468.
  44. Islam, M.N., Khan, N.I., Inan, T.T. and Sarker, I.H. (2023), “Designing user interfaces for illiterate and semi-literate users: a systematic review and future research agenda”, SAGE Open, Vol. 13 No. 2. [CrossRef]
  45. Ismatullaev, U.V.U. and Kim, S.H. (2022), “Review of the factors affecting acceptance of ai-infused systems”, Human Factors: The Journal of the Human Factors and Ergonomics Society, Vol. 66 No. 1, pp. 126–144.
  46. Jain, R., Garg, N. and Khera, S.N. (2022), “Adoption of ai-enabled tools in social development organizations in india: an extension of utaut model”, Frontiers in Psychology, Vol. 13. [CrossRef]
  47. Jarrahi, M.H., Kenyon, S., Brown, A., Donahue, C. and Wicher, C. (2022), “Artificial intelligence: a strategy to harness its power through organizational learning”, Journal of Business Strategy, Vol. 44 No. 3, pp. 126–135. [CrossRef]
  48. Jöhnk, J., Weißert, M. and Wyrtki, K. (2020). [CrossRef]
  49. Kline, R.B. (2005), Principles and practice of structural equation modeling, Guilford Press, New York, NY.
  50. Lee, H., Lee, S. and Shin, J. (2020), “An analysis on the satisfaction and perception of performance outcomes of the university information disclosure system”, Asia-Pacific Journal of Educational Management Research, Vol. 5 No. 3, pp. 49–56. [CrossRef]
  51. Lee, J.C. and Chen, X. (2022), “Exploring users’ adoption intentions in the evolution of artificial intelligence mobile banking applications: the intelligent and anthropomorphic perspectives”, International Journal of Bank Marketing, Vol. 40 No. 4, pp. 631–658. [CrossRef]
  52. Low, C., Chen, Y. and Wu, M. (2011). [CrossRef]
  53. Luckin, R. and Cukurova, M. (2019), “Designing educational technologies in the age of ai: a learning sciences-driven approach”, British Journal of Educational Technology, Vol. 50 No. 6, pp. 2824–2838. [CrossRef]
  54. Mohsin, F.H., Isa, N.M., Ishak, K. and Salleh, H. (2024), “Navigating the adoption of artificial intelligence in higher education”, International Journal of Business and Technopreneurship (IJBT), Vol. 14 No. 1, pp. 109–120. [CrossRef]
  55. Moon, M.J. (2023), “Searching for inclusive artificial intelligence for social good: participatory governance and policy recommendations for making ai more inclusive and benign for society”, Public Administration Review, Vol. 83 No. 6, pp. 1496–1505. [CrossRef]
  56. Morrison, K. (2021), “Artificial intelligence and the nhs: a qualitative exploration of the factors influencing adoption”, Future Healthcare Journal, Vol. 8 No. 3, pp. 648–654. [CrossRef]
  57. Noordt, C.V. and Misuraca, G. (2020), “Exploratory insights on artificial intelligence for government in europe”, Social Science Computer Review, Vol. 40 No. 2, pp. 426–444.
  58. Okunlaya, R.O., Abdullah, N.S. and Alias, R.A. (2022a), “Artificial intelligence (ai) library services innovative conceptual framework for the digital transformation of university education”, Library Hi Tech, Vol. 40 No. 6, pp. 1869–1892. [CrossRef]
  59. Opesemowo, O.A.G. and Adekomaya, V. (2024), “Harnessing artificial intelligence for advancing sustainable development goals in south africa’s higher education system: a qualitative study”, International Journal of Learning, Teaching and Educational Research, Vol. 23 No. 3, pp. 67–86. [CrossRef]
  60. Pan, Y., Froese, F. and Liu, N. (2022), “The adoption of artificial intelligence in employee recruitment: The influence of contextual factors”, The International Journal of Human Resource Management, Vol. 33 No. 6, pp. 1125–1147. [CrossRef]
  61. Park, Y.J., Jeong, Y.J., An, Y.S. and Ahn, J.K. (2022), “Analyzing the Factors Influencing the Intention to Adopt Autonomous Ships Using the TOE Framework and DOI Theory”, Journal of Navigation and Port Research, Vol. 46 No. 2, pp. 134–144.
  62. Paton, C. and Kobayashi, S. (2019), “An open science approach to artificial intelligence in healthcare”, Yearbook of Medical Informatics, Vol. 28 No. 01, pp. 47–051. [CrossRef]
  63. Pillai, R., Metri, B.A. and Kaushik, N. (2023), “Students’ adoption of ai-based teacher-bots (t-bots) for learning in higher education”, Information Technology & People, Vol. 37 No. 1, pp. 328–355. [CrossRef]
  64. Pillai, R. and Sivathanu, B. (2020), “Adoption of artificial intelligence (ai) for talent acquisition in it/ites organizations”, Benchmarking: An International Journal, Vol. 27 No. 9, pp. 2599–2629.
  65. Polyportis, A. (2024), “A longitudinal study on artificial intelligence adoption: understanding the drivers of chatgpt usage behavior change in higher education”, Frontiers in Artificial Intelligence, Vol. 6. [CrossRef]
  66. Popenici, S. and Kerr, S. (2017a, b), “Exploring the impact of artificial intelligence on teaching and learning in higher education”, Technology Enhanced Learning, Vol. 12 No. 1. [CrossRef]
  67. Porter, M. and Millar, V. (2002).
  68. Priyadarshinee, P., Raut, R.D., Jha, M.K. and Gardas, B.B. (2017), “Understanding and predicting the determinants of cloud computing adoption: A two staged hybrid SEM-Neural networks approach”, Computers in Human Behavior, Vol. 76, pp. 341–362. [CrossRef]
  69. Qasem, Y.A., Abdullah, R., Yah, Y., Atan, R., Al-Sharafi, M.A. and Al-Emran, M. (2021), “Towards the development of a comprehensive theoretical model for examining the cloud computing adoption at the organizational level”, Recent Advances in Intelligent Systems and Smart Applications, pp. 63–74.
  70. Rasheed, H.M.W., Yuanqiong, H., Khizar, H.M.U. and Khalid, J. (2024), “What drives the adoption of artificial intelligence among consumers in the hospitality sector: a systematic literature review and future agenda”, Journal of Hospitality and Tourism Technology, Vol. 15 No. 2, pp. 211–231. [CrossRef]
  71. Rogers, E.M. (2003), Diffusion of innovations, Free Press, New York.
  72. Saidakhror, G. (2024), “The impact of artificial intelligence on higher education and the economics of information technology”, International Journal of Law and Policy, Vol. 2 No. 3, pp. 1–6. [CrossRef]
  73. Sallu, S., Raehang, R. and Qammaddin, Q. (2024), “Exploration of artificial intelligence (ai) application in higher education”, Architecture and High Performance Computing, Vol. 6 No. 1, pp. 315–327. [CrossRef]
  74. Sharma, S., Sharma, A., Sharma, W.R. and Dillon (2005), “A simulation study to investigate the use of cutoff values for assessing model fit in covariance structure models”, Journal of Business Research, Vol. 58, pp. 935–943.
  75. Sun, H., Fang, Y. and Zou, H. (2016), “Choosing a fit technology: understanding mindfulness in technology adoption and continuance”, Journal of the Association for Information Systems, Vol. 17 No. 6, pp. 377–412. [CrossRef]
  76. Tanantong, T. and Wongras, P. (2024), “A utaut-based framework for analyzing users’ intention to adopt artificial intelligence in human resource recruitment: a case study of thailand”, Systems, Vol. 12 No. 1. [CrossRef]
  77. Tarhini, A., Masa’deh, R., Al-Busaidi, K.A., Mohammed, A.B. and Maqableh, M. (2017), “Factors influencing students’ adoption of e-learning: a structural equation modeling approach”, Journal of International Education in Business, Vol. 10 No. 2, pp. 164–182. [CrossRef]
  78. Tornatzky, L.G. and Fleischer, M. (1990), The Processes of Technological Innovation. Issues in Organization and Management Series, Lexington Books, Lexington, Massachusetts.
  79. Tuffaha, M. and Perello-Marin, M.R. (2022), “Adoption factors of artificial intelligence in human resources management”, Future of Business Administration, Vol. 1 No. 1, pp. 1–12. [CrossRef]
  80. Volberda, H.W., Khanagha, S., Baden-Fuller, C., Mihalache, O.R. and Birkinshaw, J. (2021).
  81. Wong, J.W. and Yap, K.H.A. (2024), “Factors influencing the adoption of artificial in- telligence in accounting among micro, small medium enterprises (msmes)”, Quantum Journal of Social Sciences and Humanities, Vol. 5 No. 1, pp. 16–28.
  82. Wu, W., Zhang, B., Li, S. and Liu, H. (2022), “Exploring Factors of the Willingness to Accept AI-Assisted Learning Environments: An Empirical Investigation Based on the UTAUT Model and Perceived Risk Theory”, Frontiers in Psychology, Vol. 13, pp. 870777–870777. [CrossRef]
  83. Zawacki-Richter, O., Marín, V.I., Bond, M. and Gouverneur, F. (2019), “Systematic review of research on artificial intelligence applications in higher education - where are the educators?”, International Journal of Educational Technology in Higher Education, No. 1, pp. 16–16. [CrossRef]
  84. Zhu, Y., Wang, R. and Pu, C. (2022), ““i am chatbot, your virtual mental health adviser.” what drives citizens’ satisfaction and continuance intention toward mental health chatbots during the covid-19 pandemic? an empirical study in china”, Digital Health, Vol. 8.
Figure 1. The study model.
Figure 2. Final best-fitting CFA model.
Figure 3. The SEM model for the hypotheses.
Table 1. Sample’s Demographic Information.
Variable Category Count Percent
Gender Male 187 51
Female 180 49
Other - -
Total 367 100
Age 18-24 55 15
25-33 71 19.3
34-44 175 47.7
45-54 40 10.9
55-65 26 7.1
66 and older - -
Total 367 100
Residence Turkey 71 19.4
USA 192 52.3
Canada 104 28.3
Total 367 100
Education Diploma’s degree - -
Bachelor's degree 46 12.5
Master's degree 104 28.3
PhD 217 59.2
Total 367 100
Educational Major IT 136 37.1
Management 74 20.1
Accounting 4 1.1
Medicine 22 6
Pharmaceutical 11 3
Other 120 32.7
Total 367 100
Work Experience Less than 2 years 58 15.8
2 years - less than 6 years 78 21.3
6 years - less than 8 years 25 6.8
8 years - less than 10 years 42 11.4
10 years and above 164 44.7
Total 367 100
How long have you been using AI tools or apps? Less than 6 months 94 25.6
6 months - less than 1 year 55 15
1 year - less than 2 years 48 13.1
2 years and more 170 46.3
Total 367 100
Where do you most use your preferred AI tool (Type of operating system do you use)? Windows PC 269 73.3
Mac OS (Mac Book) 24 6.5
Android (Samsung, Sony, HTC, LG, Motorola…etc.) 19 5.2
iOS (iPhone) 46 12.5
Tablet 2 0.5
Other 7 1.9
Total 367 100
Table 2. Frequencies and Percentages.
Category Count Percent
ChatGPT 274 74.7
QuillBot 135 36.8
Grammarly 248 67.6
Scholarcy 36 9.8
Scite 43 11.7
pdf.ai 68 18.5
other 24 6.5
Table 3. Frequencies and Percentages.
Category Count Percent
Conferences 78 21.3
Workshops 108 29.4
Training 128 34.9
All of the above 83 22.6
other 69 18.8
Table 4. Frequencies and Percentages.
Category Count Percent
Application processes 70 19.1
Collaboration strategies 56 15.3
IT development plans 85 23.2
technical knowledge/skills 97 26.4
All of the above 177 48.2
other 9 2.5
Table 5. Frequencies and Percentages.
Category Count Percent
Social attitudes about morals and ethical concerns 97 26.4
Offer guidelines for the development of AI applications 71 19.3
Protect privacy and Ownership rights 123 33.5
All of the above 103 28.1
Other 48 13.1
Table 6. Frequencies and Percentages.
Category Count Percent
Supportive AI in-house software. 89 24.3
Adoptive operating systems that support AI. 74 20.2
Supportive AI in-house Network. 75 20.4
Not yet there, none of the above. 175 47.7
Other 8 2.2
Table 7. Confirmatory Factor Analysis Results (Factor Loading).
Latent Variable Indicator FL FLS AVE (>0.50) CR (>0.70) Cronbach's Alpha
Compatibility C1 0.82 0.672 0.585 0.875 0.883
 C2 0.663 0.440
 C3 0.831 0.691
 C4 0.765 0.585
 C5 0.732 0.536
Complexity CX1 0.873 0.762 0.574 0.843 0.867
 CX2 0.698 0.487
 CX3 0.753 0.567
 CX4 0.694 0.482
User Interface UX1 0.867 0.752 0.697 0.902 0.938
 UX2 0.839 0.704
 UX3 0.848 0.719
 UX4 0.784 0.615
Ease of Use PEOU1 0.874 0.764 0.585 0.807 0.821
 PEOU2 0.721 0.520
 PEOU3 0.687 0.472
User Satisfaction US1 0.763 0.582 0.615 0.905 0.95
 US2 0.721 0.520
 US3 0.738 0.545
 US4 0.865 0.748
 US5 0.832 0.692
 US6 0.778 0.605
Performance Expectation PE1 0.757 0.573 0.664 0.855 0.881
 PE2 0.811 0.658
 PE3 0.872 0.760
AI Strategic Alignment AIS1 0.834 0.696 0.573 0.80 0.835
 AIS2 0.757 0.573
 AIS3 0.671 0.450
Availability of Resources AVR1 0.704 0.496 0.614 0.826 0.862
 AVR2 0.785 0.616
 AVR3 0.854 0.729
Competitive Pressure COP1 0.716 0.513 0.555 0.789 0.817
 COP2 0.765 0.585
 COP3 0.754 0.569
Government Regulations GOR1 0.784 0.615 0.528 0.77 0.814
 GOR2 0.682 0.465
 GOR3 0.711 0.506
Technological Support TS1 0.621 0.386 0.512 0.757 0.805
 TS2 0.745 0.555
 TS3 0.772 0.596
Facilitating Conditions FC1 0.857 0.734 0.709 0.88 0.913
 FC2 0.823 0.677
 FC3 0.846 0.716
AI Adoption Intentions AIA1 0.844 0.712 0.714 0.882 0.929
 AIA2 0.856 0.733
 AIA3 0.834 0.696
FL = Factor Loading, FLS = Factor Loading Squared, AVE = Average Variance Extracted, CR = Composite Reliability
Table 8. HTMT Analysis.
Table 9. Final Measurement Model Fit.
χ2 51.213
χ2/df 5.12
SRMR 0.037
CFI 0.951
TLI 0.924
NFI 0.958
IFI 0.958
RMSEA 0.07
Table 10. Structural equation modelling regression weights.
Estimate S.E. C.R. P Result
H1a C AIA 0.342 0.054 6.876 *** Supported
H1b CX AIA 0.268 0.044 6.085 *** Supported
H1c UX AIA 0.421 0.058 8.154 *** Supported
H1d PEOU AIA 0.332 0.045 7.382 *** Supported
H1e US AIA 0.216 0.046 4.672 *** Supported
H1f PE AIA 0.186 0.043 4.312 .001 Supported
H1g AINT AIA 0.766 0.033 23.519 *** Supported
H1h AIS AIA 0.100 0.031 3.263 .003 Supported
H1i AVR AIA 0.122 0.022 5.587 *** Supported
H1j COP AIA 0.072 0.035 1.004 .421 Not Supported
H1k GOR AIA 0.008 0.029 0.743 .785 Not Supported
H1l TS AIA 0.551 0.034 8.581 *** Supported
H1m FC AIA 0.964 0.039 25.000 *** Supported
S.E. = Standard errors of the regression weights, C.R. = Critical Ratio, P = p-value (*<0.05, **<0.01, ***<0.001).
Table 11. Multiple-group SEM analysis results for gender model.
Model Structural weights
DF 1
CMIN 0.455
P 0.491
NFI Delta-1 0.002
IFI Delta-2 0.002
Table 12. Multiple-group SEM analysis results for age model.
Model Structural weights
DF 2
CMIN 1.279
P 0.322
NFI Delta-1 0.009
IFI Delta-2 0.009
Table 13. Multiple-group SEM analysis results for education model.
Model Structural weights
DF 2
CMIN 4.624
P 0.099
NFI Delta-1 0.017
IFI Delta-2 0.017
Table 14. Multiple-group SEM analysis results for major model.
Model Structural weights
DF 4
CMIN 12.939
P 0.012
NFI Delta-1 0.053
IFI Delta-2 0.053
Table 15. Structural equation modelling regression weights.
Estimate S.E. C.R. P Effect R2
AI Key Factors (IT) AIA 1.053 .095 11.110 *** 0.692 0.479
AI Key Factors (Management) AIA 1.576 .201 7.829 *** 0.676 0.456
AI Key Factors (Medicine) AIA .219 .338 .648 .517 0.138 0.019
AI Key Factors (Pharmaceutical) AIA 1.275 .422 3.017 .003 0.675 0.456
AI Key Factors (Other) AIA 1.250 .097 12.890 *** 0.764 0.584
S.E. = Standard errors of the regression weights, C.R. = Critical Ratio, P = p-value (*<0.05, **<0.01, ***<0.001).
Table 16. Multiple-group SEM analysis results for experience model.
Model Structural weights
DF 4
CMIN 10.625
P 0.03
NFI Delta-1 0.038
IFI Delta-2 0.038
Table 17. Structural equation modelling regression weights.
Estimate S.E. C.R. P Effect R2
AI Key Factors (Less than 2 years) AIA .606 .222 2.726 .006 0.339 0.115
AI Key Factors (2 years - less than 6 years) AIA 1.136 .145 7.839 *** 0.666 0.444
AI Key Factors (6 years - less than 8 years) AIA 1.481 .273 5.429 *** 0.738 0.544
AI Key Factors (8 years - less than 10 years) AIA 1.366 .098 13.886 *** 0.907 0.823
AI Key Factors (10 years and above) AIA 1.195 .080 14.890 *** 0.760 0.578
S.E. = Standard errors of the regression weights, C.R. = Critical Ratio, P = p-value (*<0.05, **<0.01, ***<0.001).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.