1. Introduction
Artificial intelligence has moved from being a speculative technological aspiration to becoming an increasingly embedded component of contemporary organizational life, particularly in the domain of managerial decision-making. Advances in computational power, algorithmic sophistication, and data availability have transformed artificial intelligence from a laboratory-based concept into a practical managerial tool capable of augmenting, automating, and in some cases reshaping how decisions are made across industries. Early discussions of artificial intelligence focused largely on technical feasibility and the possibility of creating machines capable of mimicking human intelligence, but contemporary discourse has shifted toward understanding how artificial intelligence systems are actually used within organizations and how managers perceive their value, risks, and implications for work and decision authority (Russell & Norvig, 2021; Haenlein & Kaplan, 2019; Emon, 2025). As organizations increasingly rely on data-driven insights to navigate uncertainty and complexity, artificial intelligence has emerged as a central enabler of managerial decision-making, influencing strategic planning, operational control, and performance evaluation (Brynjolfsson & McElheran, 2016; Wamba et al., 2021). Despite this growing prominence, the adoption of artificial intelligence in managerial decision-making remains uneven, contested, and deeply shaped by managerial perceptions, interpretations, and contextual realities. From a managerial perspective, artificial intelligence adoption is not merely a technical implementation issue but a socio-cognitive process involving beliefs about usefulness, ease of use, trustworthiness, and compatibility with existing decision practices. 
Foundational research in information systems has long emphasized that managerial acceptance and usage of technology are influenced by perceived usefulness and perceived ease of use, which shape attitudes and behavioral intentions toward technology adoption (Davis, 1989; Emon & Chowdhury, 2025). Building on this foundation, the Unified Theory of Acceptance and Use of Technology highlighted the roles of performance expectancy, effort expectancy, social influence, and facilitating conditions in shaping user acceptance (Venkatesh et al., 2003). While these models have been widely applied to understand technology adoption, artificial intelligence presents unique challenges that extend beyond traditional information systems due to its autonomous, learning-based, and often opaque nature (Dwivedi et al., 2021). Managers are not simply users of artificial intelligence systems; they are decision-makers who must interpret algorithmic outputs, balance human judgment with machine recommendations, and take responsibility for outcomes influenced by artificial intelligence. The growing integration of artificial intelligence into managerial decision-making is closely tied to broader transformations in organizational data environments and digital platforms. Next-generation digital platforms enable organizations to collect, integrate, and analyze vast volumes of structured and unstructured data, creating fertile ground for artificial intelligence applications in forecasting, optimization, and pattern recognition (Rai et al., 2019; Emon, 2025). These platforms reshape how information flows within organizations and redefine the temporal and spatial dimensions of decision-making, often accelerating decision cycles and increasing reliance on algorithmic insights. 
As organizations adopt artificial intelligence-enabled analytics, managers are increasingly confronted with recommendations generated by systems that operate beyond traditional rule-based logic, raising questions about interpretability, accountability, and control (Ghasemaghaei, 2021). Understanding how managers perceive these systems is therefore critical to explaining why artificial intelligence adoption succeeds in some contexts but falters in others. The diffusion of artificial intelligence within organizations can be viewed through the lens of innovation diffusion theory, which emphasizes the roles of relative advantage, compatibility, complexity, trialability, and observability in shaping adoption decisions (Rogers, 2003). From this perspective, managerial perceptions of artificial intelligence are central to the diffusion process, as managers act as key opinion leaders and gatekeepers who influence organizational readiness and commitment to innovation. Artificial intelligence may promise significant relative advantages in terms of efficiency, accuracy, and scalability, yet its perceived complexity and lack of transparency can hinder managerial acceptance, particularly in high-stakes decision contexts (Makridakis, 2017; Emon, 2025). Moreover, compatibility with existing organizational routines and decision cultures plays a crucial role, as managers may resist artificial intelligence systems that challenge established expertise or redistribute decision authority (Raisch & Krakowski, 2021). Artificial intelligence also intersects with broader debates about the future of work and the evolving role of managers. Rather than fully replacing human decision-makers, many scholars argue that artificial intelligence is more likely to augment managerial capabilities by enabling collaborative intelligence, in which humans and machines jointly contribute to decision outcomes (Wilson & Daugherty, 2018).
In this collaborative model, artificial intelligence excels at processing large datasets and identifying patterns, while managers contribute contextual understanding, ethical judgment, and strategic insight. However, realizing the benefits of collaborative intelligence depends heavily on managerial perceptions of artificial intelligence as a trustworthy and reliable partner rather than a threatening or inscrutable black box (Siau & Wang, 2018). Negative perceptions related to job displacement, loss of autonomy, or erosion of professional identity can undermine adoption, even when technical performance is demonstrably strong (Jarrahi, 2018; Emon, 2025). Trust emerges as a particularly salient issue in artificial intelligence-driven decision-making. Unlike traditional information systems, artificial intelligence systems often rely on machine learning models whose internal logic may be difficult for managers to fully understand or explain. This opacity can generate skepticism and reduce willingness to rely on algorithmic recommendations, especially in strategic or ethically sensitive decisions (Siau & Wang, 2018). Managers may question the data quality, assumptions, and potential biases embedded within artificial intelligence systems, leading to cautious or selective adoption. Research suggests that trust in artificial intelligence is not solely a function of technical accuracy but is also shaped by organizational norms, prior experiences, and the degree of transparency and explainability provided by the system (Keding, 2021; Emon, 2025). Understanding managerial perceptions of trust is therefore essential to explaining how artificial intelligence is integrated into decision-making processes. Organizational context further shapes managerial perceptions of artificial intelligence adoption. 
The technological, organizational, and environmental dimensions emphasized in the technology–organization–environment framework highlight that adoption decisions are influenced not only by perceived technological attributes but also by organizational resources, leadership support, and external pressures (Tornatzky & Fleischer, 1990). Managers operating in resource-constrained environments may perceive artificial intelligence as costly or risky, while those in highly competitive or data-intensive industries may view it as a strategic necessity. Environmental factors such as regulatory expectations, industry norms, and competitive pressure can also influence managerial attitudes toward artificial intelligence, particularly in sectors where algorithmic decision-making is becoming a standard practice (Sun & Medaglia, 2019). The assimilation of artificial intelligence into managerial decision-making extends beyond initial adoption to encompass routinization and integration into everyday practices. Research on enterprise system assimilation emphasizes that technologies deliver value only when they are deeply embedded within organizational processes and supported by appropriate structures and skills (Liang et al., 2007; Emon, 2025). For artificial intelligence, this assimilation process may be particularly complex, as managers must learn how to interpret algorithmic outputs, adjust workflows, and redefine roles and responsibilities. Managerial perceptions during this phase are critical, as early experiences with artificial intelligence can reinforce or undermine confidence in its usefulness and reliability. Positive perceptions may encourage experimentation and learning, while negative experiences may lead to superficial or symbolic adoption that fails to transform decision-making practices. Artificial intelligence-driven decision-making also has strategic implications for business models and dynamic capabilities. 
As organizations leverage artificial intelligence to sense market changes, seize opportunities, and reconfigure resources, managerial perceptions of artificial intelligence influence how these capabilities are developed and deployed (Teece, 2018). Managers who view artificial intelligence as a strategic asset are more likely to invest in complementary capabilities such as data governance, talent development, and organizational learning. Conversely, managers who perceive artificial intelligence primarily as a technical tool may underinvest in these complementary elements, limiting its strategic impact. This divergence underscores the importance of understanding managerial perceptions as a determinant of not only adoption but also value creation. The growing body of research on artificial intelligence adoption highlights the need for richer, context-sensitive insights into how managers make sense of artificial intelligence in practice. While quantitative studies have identified key determinants of adoption and performance outcomes, they often abstract away from the lived experiences, interpretations, and sensemaking processes that shape managerial behavior (Dwivedi et al., 2021). Qualitative research offers a valuable lens for exploring these deeper dynamics, capturing how managers interpret artificial intelligence in relation to their roles, identities, and organizational contexts. Such an approach is particularly well suited to examining decision-making, which is inherently complex, socially embedded, and influenced by tacit knowledge and judgment (Shrestha et al., 2019; Emon, 2025). Managerial decision-making itself is undergoing transformation as artificial intelligence reshapes how information is generated, evaluated, and acted upon. Traditional models of managerial decision-making emphasized bounded rationality and the use of heuristics to cope with information overload. 
Artificial intelligence promises to expand the bounds of rationality by processing vast datasets and generating probabilistic predictions, yet it also introduces new forms of uncertainty related to model validity and ethical implications (Makridakis, 2017). Managers must therefore navigate a paradox in which artificial intelligence simultaneously enhances and complicates decision-making. Their perceptions of this paradox influence whether artificial intelligence is embraced as a decision aid or resisted as an unreliable or intrusive technology. Ethical considerations further complicate managerial perceptions of artificial intelligence adoption. Concerns about data privacy, algorithmic bias, and accountability for automated decisions can shape managerial attitudes and constrain adoption, particularly in contexts involving sensitive stakeholder impacts (Kaplan & Haenlein, 2020). Managers may be reluctant to delegate decisions to artificial intelligence systems if they fear reputational damage or legal consequences arising from biased or erroneous outcomes. These ethical concerns highlight the importance of examining not only functional perceptions of usefulness and efficiency but also normative perceptions related to responsibility and legitimacy. The relevance of managerial perceptions is also evident in the public sector, where artificial intelligence adoption in decision-making is influenced by distinct institutional logics and accountability requirements. Research mapping artificial intelligence use in the public sector shows that managerial attitudes toward transparency, fairness, and public value significantly shape adoption trajectories (Sun & Medaglia, 2019; Emon, 2025). While the present study focuses on managerial decision-making more broadly, insights from public sector contexts underscore the broader importance of perception in shaping how artificial intelligence is deployed and governed across organizational settings. 
As artificial intelligence continues to evolve, its role in decision-making is likely to expand, encompassing not only operational and tactical decisions but also strategic and creative domains. Emerging applications in scenario planning, risk assessment, and innovation management challenge traditional assumptions about the boundaries of managerial judgment (McAfee & Brynjolfsson, 2017). These developments heighten the importance of understanding how managers perceive artificial intelligence not only as a tool but as an actor that participates in decision processes. Perceptions of agency, control, and collaboration become central to explaining how artificial intelligence is integrated into managerial work (Faraj et al., 2018). Despite the growing importance of artificial intelligence in managerial decision-making, there remains a notable gap in understanding how managers themselves interpret and experience this transformation. Much of the existing literature focuses on technological capabilities or adoption outcomes, offering limited insight into the subjective meanings that managers attach to artificial intelligence. Calls for more nuanced and contextually grounded research emphasize the need to explore how managerial perceptions shape adoption trajectories, influence decision quality, and mediate the relationship between artificial intelligence and organizational performance (Ghasemaghaei, 2021; Keding, 2021). Addressing this gap requires qualitative inquiry that foregrounds managerial voices and captures the complexity of sensemaking processes surrounding artificial intelligence. In this context, a qualitative exploration of managerial perceptions of artificial intelligence adoption in decision-making is both timely and necessary. 
By examining how managers understand, evaluate, and respond to artificial intelligence in their decision roles, such research can illuminate the cognitive, emotional, and social factors that underpin adoption beyond what can be captured through survey-based models alone. It can reveal how perceptions evolve over time, how they are shaped by organizational experiences and external narratives, and how they influence the balance between human judgment and algorithmic input in decision-making. Ultimately, understanding managerial perceptions is essential to realizing the potential of artificial intelligence as a transformative force in organizational decision-making, while also addressing the challenges and tensions that accompany its adoption in complex managerial contexts.
2. Literature Review
The literature on artificial intelligence adoption in organizational decision-making has expanded rapidly in recent years, reflecting the growing recognition that AI is reshaping how organizations operate, compete, and make decisions. Scholars increasingly agree that AI adoption is not merely a technological issue but a complex socio-technical phenomenon involving managerial cognition, organizational structures, strategic intent, and environmental pressures. Early studies on AI in organizations emphasized efficiency gains and automation potential, but more recent work highlights the nuanced role of managerial perceptions in shaping how AI is interpreted, assimilated, and leveraged for decision-making purposes (Chatterjee et al., 2021; von Krogh, 2018). Managers are central actors in this process, as they evaluate AI systems not only in terms of technical performance but also in relation to trust, legitimacy, organizational fit, and strategic value. Consequently, understanding managerial perceptions has become a focal point in AI adoption research, particularly as organizations move from experimentation to large-scale deployment. A significant stream of literature draws on technology acceptance and information systems theories to explain AI adoption behavior. Although originally developed in consumer and end-user contexts, these models have been widely applied to organizational settings to explain managerial acceptance of advanced technologies. Pavlou (2003) extends acceptance research by integrating trust and perceived risk into models of technology adoption, demonstrating that users’ beliefs about uncertainty and vulnerability strongly influence adoption intentions. In the context of AI-driven decision-making, managers often face heightened uncertainty due to algorithmic opacity and perceived loss of control, making trust a critical determinant of adoption (Gefen et al., 2003). 
While traditional acceptance models such as the Technology Acceptance Model (TAM) have been criticized for their simplicity, they remain influential in explaining how perceived usefulness and perceived ease of use shape managerial attitudes toward AI, particularly when combined with contextual and organizational factors (Benbasat & Barki, 2007; Emon et al., 2025). These insights suggest that managerial perceptions of AI are formed through both cognitive evaluations of system performance and affective responses related to risk, trust, and confidence. Beyond individual-level acceptance, organizational readiness and maturity have emerged as critical lenses for understanding AI adoption. AI readiness refers to the extent to which an organization possesses the technological infrastructure, data quality, skills, governance mechanisms, and cultural openness required to successfully implement AI solutions (Jöhnk et al., 2021; Jöhnk et al., 2020). Managers play a decisive role in assessing and shaping this readiness, as their perceptions influence investment decisions, capability development, and change management initiatives. Studies show that organizations with higher AI readiness are more likely to integrate AI into core decision-making processes rather than limiting its use to peripheral or experimental applications (Sun et al., 2018; Emon et al., 2025). However, readiness is not purely objective; it is also perceptual, shaped by managerial interpretations of organizational capabilities and constraints. This subjectivity helps explain why organizations with similar resources may exhibit markedly different AI adoption trajectories. The literature also emphasizes the role of AI in organizational decision-making by situating it within the broader evolution of decision support systems and analytics. Traditional decision support systems were designed to assist managers by providing structured information and analytical tools, but they largely relied on human interpretation and judgment (Shollo et al., 2015).
AI-driven systems, by contrast, increasingly generate recommendations and predictions autonomously, thereby altering the balance between human and machine agency in decision-making. Xu and Zhang (2021) argue that AI enhances decision-making by improving information processing capacity, reducing cognitive biases, and enabling real-time analysis, yet they caution that these benefits depend on managers’ willingness and ability to engage with AI outputs. Managerial perceptions thus mediate whether AI functions as a supportive decision aid or a dominant decision authority, shaping the actual impact of AI on organizational outcomes. Research on big data analytics provides additional insights into the foundations of AI-driven decision-making. Big data capabilities are often seen as precursors to effective AI adoption, as AI systems rely heavily on large, diverse, and high-quality datasets (Kankanhalli et al., 2015). Managers’ perceptions of data quality, analytical competence, and value creation strongly influence whether organizations invest in and utilize AI for decision-making (Gupta et al., 2020). Empirical evidence suggests that organizations that successfully align big data and AI strategies with business objectives are more likely to realize performance benefits, whereas misalignment can lead to underutilization or disillusionment with AI initiatives (Chatterjee et al., 2021). These findings highlight that managerial perceptions are shaped not only by AI technologies themselves but also by complementary assets and capabilities within the organization. Another important body of literature examines AI adoption through the lens of business transformation and strategic change. AI is increasingly viewed as a general-purpose technology with the potential to fundamentally reshape business models, value chains, and competitive dynamics (Yang et al., 2020; Calvino et al., 2019). 
Managers must therefore assess AI not only as an operational tool but also as a strategic resource. This strategic perspective aligns with dynamic capabilities theory, which emphasizes the role of managerial sensing, seizing, and transforming activities in responding to technological change (Teece, 2017; Emon et al., 2025). Managers who perceive AI as a source of strategic flexibility and innovation are more likely to champion its adoption and integration into decision-making processes. Conversely, those who view AI primarily as a cost or risk may adopt a cautious or defensive stance, limiting its transformative potential. The human-centered perspective on AI adoption further underscores the importance of managerial perceptions. Zhang et al. (2021) argue that designing human-centered AI systems that align with users’ values, skills, and decision-making needs can enhance acceptance and effective use. In managerial contexts, this involves ensuring that AI systems provide explainable, interpretable, and actionable insights rather than opaque recommendations. Literature on managerial cognition suggests that managers rely on mental models and heuristics to interpret complex information, and AI systems that conflict with these cognitive frameworks may face resistance (Keding & Meissner, 2021). As a result, managerial perceptions of AI are shaped by the extent to which AI outputs align with existing decision logics or challenge established ways of thinking. Trust remains a recurring theme across AI adoption studies, particularly in relation to decision-making authority and accountability. Gefen et al. (2003) demonstrate that trust can reduce perceived risk and increase acceptance of technology-mediated processes, a finding that has been extended to AI contexts. Managers may hesitate to rely on AI recommendations when accountability for decisions ultimately rests with them, especially in high-stakes environments. Marikyan et al. 
(2022) show that users’ emotional responses to AI, including anxiety and perceived threat, can influence acceptance beyond rational cost–benefit considerations. These emotional and psychological dimensions are particularly salient for managers, whose professional identity and expertise may be challenged by algorithmic decision-making systems. Sector-specific studies further illustrate how managerial perceptions of AI vary across functional domains. In logistics and supply chain management, AI has been associated with improved forecasting, routing, and inventory decisions, leading managers to perceive AI as a valuable efficiency-enhancing tool (Trunk et al., 2020; Emon et al., 2025). In human resource management, AI applications such as recruitment screening and performance analytics have generated both optimism and concern, as managers balance efficiency gains against ethical and fairness considerations (Tambe et al., 2019; Pillai & Sivathanu, 2020). Service management literature highlights that AI-driven personalization and automation can enhance customer experiences, but managerial perceptions of customer acceptance and service quality influence adoption decisions (Huang & Rust, 2018; Wirtz et al., 2018). These sectoral differences suggest that managerial perceptions are context-dependent, shaped by domain-specific risks, opportunities, and stakeholder expectations. The sociotechnical perspective provides a more holistic understanding of AI adoption by emphasizing the interdependence between technological artifacts and social structures. Sarker et al. (2019) argue that AI adoption outcomes depend on how technical capabilities interact with organizational routines, power relations, and institutional norms. From this perspective, managerial perceptions are not formed in isolation but are socially constructed through interactions with peers, subordinates, vendors, and external stakeholders. Faraj et al. 
(2021) further highlight the role of communities and collective knowledge in shaping how AI is understood and used, suggesting that managerial learning about AI often occurs through social networks and professional communities. This reinforces the idea that perceptions evolve over time as managers gain experience and observe the outcomes of AI adoption in their own and other organizations. Organizational ambidexterity theory offers another useful lens for examining managerial perceptions of AI. Ambidexterity refers to an organization’s ability to balance exploration of new opportunities with exploitation of existing capabilities (Raisch & Birkinshaw, 2008; Emon et al., 2025). AI adoption often embodies this tension, as managers must invest in exploratory initiatives involving uncertain technologies while maintaining operational efficiency. Managers who perceive AI as enabling both exploration and exploitation may be more inclined to integrate it into decision-making processes, whereas those who view it as disruptive to existing operations may resist adoption. This balance is particularly relevant in established organizations, where legacy systems and routines can constrain managerial openness to AI-driven change. Despite the growing body of research highlighting AI’s potential, several studies point to a persistent gap between AI investment and realized productivity gains, often referred to as the productivity paradox of AI. Brynjolfsson et al. (2021) argue that realizing AI’s benefits requires complementary investments in organizational redesign, skills development, and process innovation, all of which depend on managerial commitment and vision. Managers’ perceptions of AI’s value thus influence not only adoption decisions but also the extent to which organizations undertake the necessary complementary changes. When managers underestimate these requirements, AI initiatives may fail to deliver expected benefits, reinforcing skepticism and slowing further adoption. 
Managerial perceptions of AI are also shaped by broader technological and industrial transformations, including Industry 4.0 and digitalization trends. Ardito et al. (2019) note that AI is often adopted alongside other advanced technologies such as the Internet of Things and robotics, creating complex technological ecosystems. Managers must therefore evaluate AI within a broader portfolio of digital investments, which can influence their perceptions of priority, urgency, and strategic fit. Lee et al. (2019) further suggest that emerging technologies can alter managerial decision-making by increasing information complexity and interdependence, thereby heightening the need for advanced analytical support. In such environments, managers may perceive AI as both a solution to complexity and a source of new challenges. The literature consistently highlights that AI adoption is an ongoing, dynamic process rather than a one-time decision. AI maturity models emphasize stages of development ranging from experimentation to full integration and optimization, with managerial perceptions evolving at each stage (Jöhnk & Röglinger, 2022; Emon et al., 2025). Early-stage perceptions may be shaped by hype and uncertainty, while later-stage perceptions are informed by hands-on experience and performance outcomes. Dwivedi et al. (2020) argue that understanding these temporal dynamics is essential for explaining variation in adoption outcomes across organizations. Managers who experience early successes with AI are more likely to develop positive perceptions and expand adoption, whereas negative experiences can lead to abandonment or symbolic adoption. Finally, empirical studies directly examining managerial perceptions of AI provide valuable insights into the cognitive and evaluative processes underlying adoption decisions. Zhou et al. 
(2020) find that managers’ perceptions of AI usefulness, strategic relevance, and controllability significantly influence their support for AI initiatives. These perceptions are shaped by factors such as prior technology experience, organizational culture, and external competitive pressure. Makarius et al. (2020) further demonstrate that managers who view AI as augmenting human capabilities rather than replacing them are more likely to adopt collaborative decision-making approaches that integrate human judgment and algorithmic insights. Collectively, these studies underscore that managerial perceptions are multifaceted, context-dependent, and central to understanding how AI is adopted and used in organizational decision-making.
3. Research Methodology
This study adopted a qualitative research methodology to explore managerial perceptions of artificial intelligence adoption in decision-making, as such an approach was considered most appropriate for capturing in-depth insights into managers’ experiences, interpretations, and sensemaking processes. A qualitative design was selected because the phenomenon under investigation involved subjective meanings, contextual influences, and cognitive evaluations that could not be adequately examined through quantitative measures alone. The study was grounded in an interpretivist paradigm, which assumed that reality is socially constructed and that managerial perceptions of artificial intelligence are shaped by organizational contexts, professional experiences, and individual belief systems. This philosophical stance allowed the researchers to focus on how managers understood and interpreted artificial intelligence in relation to their decision-making responsibilities rather than attempting to measure predefined variables. Data were collected using semi-structured, in-depth interviews, which enabled participants to articulate their views freely while allowing the researchers to maintain consistency across interviews through a guiding protocol. The interview guide was developed based on an extensive review of prior literature on artificial intelligence adoption, managerial decision-making, and technology acceptance, ensuring conceptual relevance while remaining flexible enough to capture emergent themes. Open-ended questions were used to explore managers’ experiences with artificial intelligence, perceived benefits and challenges, trust and transparency concerns, and the extent to which artificial intelligence influenced their decision authority and judgment. Probing questions were employed during interviews to elicit deeper explanations and clarify participants’ responses, thereby enhancing the richness of the data. 
The study employed purposive sampling to identify participants who possessed direct experience with artificial intelligence-enabled systems in managerial decision-making contexts. Managers from diverse organizational backgrounds and functional areas were selected to ensure variation in perspectives and to capture a broad range of experiences related to artificial intelligence adoption. Eligibility criteria required participants to hold managerial positions with decision-making responsibilities and to have prior exposure to artificial intelligence-based tools such as analytics platforms, decision support systems, or automated recommendation systems. This sampling strategy was deemed appropriate for qualitative inquiry, as it prioritized information-rich cases rather than statistical representativeness. Interviews were conducted over a defined period and were carried out either face-to-face or through online communication platforms, depending on participants’ availability and preferences. Each interview lasted between approximately forty-five and ninety minutes, allowing sufficient time for in-depth discussion. With participants’ informed consent, all interviews were audio-recorded to ensure accuracy and subsequently transcribed verbatim. Ethical considerations were carefully addressed throughout the data collection process. Participants were informed about the purpose of the study, assured of confidentiality and anonymity, and advised of their right to withdraw at any stage without consequence. Pseudonyms were assigned to all participants, and identifying information was removed from transcripts to protect privacy. Data analysis was conducted using a thematic analysis approach, which allowed for the systematic identification, analysis, and interpretation of patterns within the qualitative data. The analysis followed an iterative process in which transcripts were read multiple times to achieve familiarity with the data. 
Initial codes were generated inductively, reflecting recurring ideas and concepts expressed by participants. These codes were then compared, refined, and grouped into broader themes that captured shared meanings related to managerial perceptions of artificial intelligence adoption in decision-making. Throughout the analysis, constant comparison was employed to identify similarities and differences across participants’ accounts, thereby enhancing analytical depth. To ensure the rigor and trustworthiness of the study, several strategies were employed. Credibility was enhanced through prolonged engagement with the data and the use of verbatim quotations to ground interpretations in participants’ own words. Member checking was conducted by sharing summarized interpretations with selected participants to confirm the accuracy of the researchers’ understanding. Dependability was supported through the maintenance of a detailed audit trail documenting data collection procedures, analytical decisions, and theme development. Reflexivity was also practiced, as the researchers continuously reflected on their own assumptions and potential biases to minimize undue influence on data interpretation. The qualitative methodology enabled a nuanced and context-sensitive exploration of managerial perceptions of artificial intelligence adoption in decision-making. By focusing on managers’ lived experiences and interpretive processes, the methodological approach provided rich insights into how artificial intelligence was understood, evaluated, and integrated into managerial decision practices, thereby offering a deeper understanding of adoption dynamics that extend beyond purely technological considerations.
4. Results and Findings
The study explored managerial perceptions of artificial intelligence adoption in decision-making by analyzing data collected from in-depth interviews with managers across multiple industries. The thematic analysis revealed nine primary themes that encapsulate the nuanced ways in which managers experienced, evaluated, and integrated artificial intelligence into their decision-making practices. These themes highlight both the opportunities and challenges associated with artificial intelligence adoption, emphasizing its impact on managerial roles, decision processes, and organizational dynamics.
Table 1. Perceived Benefits of Artificial Intelligence Adoption.

| Theme | Description |
| --- | --- |
| Efficiency and Speed | Managers highlighted that AI-enabled systems significantly reduced the time required for routine decision-making tasks and increased overall operational efficiency. |
| Accuracy and Reliability | Many managers emphasized that AI tools provided more precise analyses and predictions compared to traditional methods, reducing human errors. |
| Data-Driven Insights | Managers noted that AI allowed them to uncover patterns and insights from large datasets that were previously difficult to identify manually. |
Managers consistently emphasized the advantages of artificial intelligence in enhancing decision-making efficiency and accuracy. They described AI as a tool that not only accelerated routine operations but also enabled them to make more informed strategic choices by leveraging complex data analytics. The ability of AI systems to process and synthesize large volumes of data was frequently mentioned as a critical factor that supported evidence-based decision-making. Managers reported that these systems allowed them to focus on higher-level strategic thinking rather than being encumbered by repetitive analytical tasks.
Table 2. Challenges in AI Integration.

| Theme | Description |
| --- | --- |
| Complexity of Systems | Managers reported difficulties in understanding and effectively utilizing AI systems due to their technical complexity. |
| Resistance to Change | Some participants highlighted reluctance among staff and management to adopt AI, particularly due to fear of disruption or job displacement. |
| Data Quality and Availability | Concerns were raised regarding the accuracy, consistency, and completeness of data used by AI systems, affecting decision reliability. |
The integration of AI into decision-making processes was not without challenges. Managers frequently expressed concern over the complexity of AI tools, noting that the steep learning curve could hinder effective utilization. Resistance to change emerged as a recurrent theme, with some managers observing apprehension from colleagues who feared that AI might replace human judgment. Additionally, the quality and availability of data emerged as a critical factor influencing the reliability of AI outputs. Managers emphasized that AI systems could only be as effective as the data they processed, and inconsistencies or gaps in data compromised their confidence in AI recommendations.
Table 3. Trust and Reliability in AI Systems.

| Theme | Description |
| --- | --- |
| Transparency of Algorithms | Managers emphasized the importance of understanding how AI systems generated recommendations to build confidence in their decisions. |
| Predictability of Outputs | Participants noted that reliable and consistent AI outputs were crucial for maintaining trust in the system. |
| Accountability | Managers were concerned about assigning responsibility for decisions made with AI assistance, especially in high-stakes scenarios. |
Trust emerged as a central theme influencing managerial adoption of AI. Managers highlighted the need for transparent algorithms that could be explained and justified to stakeholders. They stressed that predictability and consistency in AI outputs were essential for fostering confidence, especially when these outputs informed critical organizational decisions. Accountability also played a significant role, with managers expressing concerns about responsibility when relying on AI-generated recommendations. Many participants indicated that clear protocols and guidelines were necessary to ensure that human oversight complemented AI-assisted decision-making.
Table 4. Impact on Managerial Roles.

| Theme | Description |
| --- | --- |
| Shift in Focus | Managers reported that AI adoption allowed them to focus more on strategic and creative aspects rather than routine operational tasks. |
| Skill Requirements | Participants noted that AI necessitated new skills, including data literacy, analytical reasoning, and understanding AI outputs. |
| Decision Authority | Some managers expressed concerns about potential erosion of authority or the perception that decisions were increasingly delegated to AI systems. |
AI adoption influenced the way managers perceived and performed their roles. Many described a shift in focus from operational execution to strategic oversight, allowing them to engage more actively in value-creating activities. At the same time, the integration of AI highlighted the need for enhanced skills, including the ability to interpret AI outputs, make informed judgments, and apply analytical reasoning to decision contexts. While AI enabled more informed decision-making, some managers expressed apprehension about potential encroachment on their authority, particularly when AI systems were perceived as independent decision agents.
Table 5. Decision-Making Process Changes.

| Theme | Description |
| --- | --- |
| Speed of Decisions | Managers observed that AI systems enabled faster decision cycles by providing immediate insights and recommendations. |
| Analytical Depth | Participants noted that AI allowed for more comprehensive analysis, incorporating diverse data sources and advanced predictive models. |
| Collaboration | AI facilitated more collaborative decision-making by providing shared insights that multiple stakeholders could review simultaneously. |
The introduction of AI led to noticeable changes in the decision-making process. Managers reported accelerated decision cycles due to AI’s rapid data processing capabilities, which reduced delays in evaluating alternatives. AI also enhanced analytical depth, allowing managers to explore scenarios and trends that would have been too complex or time-consuming to analyze manually. Additionally, AI served as a collaborative tool, offering a common foundation of insights that multiple stakeholders could reference when discussing strategic and operational decisions. This shared visibility contributed to more inclusive and informed decision-making.
Table 6. Ethical and Governance Considerations.

| Theme | Description |
| --- | --- |
| Bias and Fairness | Managers expressed concerns about potential bias in AI systems and the implications for fair decision-making. |
| Data Privacy | Participants highlighted the need to protect sensitive data and comply with regulatory standards when using AI. |
| Ethical Responsibility | Managers noted the importance of maintaining human oversight and accountability for AI-assisted decisions. |
Managers were acutely aware of the ethical dimensions of AI adoption. Concerns about algorithmic bias were frequently mentioned, with participants stressing the importance of ensuring fairness in AI-generated recommendations. Data privacy and compliance with legal standards were considered essential, particularly when AI relied on sensitive or personal information. Managers emphasized that ethical responsibility could not be fully transferred to AI systems, underscoring the continued need for human judgment and oversight to guide decision outcomes responsibly.
Table 7. Organizational Support and Infrastructure.

| Theme | Description |
| --- | --- |
| Training and Skill Development | Participants emphasized the importance of organizational initiatives to build AI literacy and technical competencies. |
| Technical Resources | Managers highlighted the need for adequate infrastructure, such as computing power and data management systems, to support AI. |
| Leadership Endorsement | Support from top management was considered crucial for fostering confidence and facilitating adoption. |
Organizational support emerged as a critical enabler of AI adoption. Managers highlighted the need for structured training programs to enhance AI literacy and technical skills. Adequate technical infrastructure, including computing resources and data management capabilities, was essential for AI systems to operate effectively. Additionally, leadership endorsement played a key role in encouraging adoption, signaling organizational commitment and reducing resistance among managers and staff. These support mechanisms were seen as fundamental to ensuring that AI adoption translated into meaningful improvements in decision-making processes.
Table 8. External and Environmental Influences.

| Theme | Description |
| --- | --- |
| Competitive Pressure | Managers reported that external competition motivated the adoption of AI to gain strategic advantage. |
| Regulatory Environment | Participants highlighted that compliance requirements influenced the extent and manner of AI adoption. |
| Industry Trends | Observations of peers and industry best practices affected managerial perceptions of AI relevance and necessity. |
External factors shaped managerial perceptions of AI adoption. Competitive pressure was cited as a major driver, with managers feeling compelled to adopt AI to maintain or enhance organizational performance. Regulatory considerations also influenced adoption strategies, as managers sought to comply with industry-specific rules while leveraging AI capabilities. Furthermore, managers reported that observing AI adoption by peers and competitors reinforced the perception that AI was an essential component of contemporary management practice, shaping both expectations and strategic priorities within their organizations.
Table 9. Future Outlook and Strategic Integration.

| Theme | Description |
| --- | --- |
| Long-Term Planning | Managers envisioned AI as integral to future organizational strategy and decision-making. |
| Continuous Learning | Participants emphasized the need for ongoing adaptation and learning to keep pace with AI advancements. |
| Integration with Human Judgment | Managers highlighted the importance of balancing AI recommendations with human expertise and contextual understanding. |
Managers recognized AI as a strategic enabler for long-term organizational planning. They envisioned AI as a tool to enhance future decision-making capabilities, provided that organizations continuously adapt to technological advancements. Participants emphasized that AI could not fully replace human judgment and that strategic integration required a careful balance between algorithmic recommendations and managerial expertise. Managers anticipated that successful adoption would depend on fostering an environment of continuous learning, adaptability, and alignment between AI capabilities and organizational goals. The study’s findings illustrate a complex and multifaceted picture of managerial perceptions regarding artificial intelligence adoption in decision-making. Managers acknowledged substantial benefits in efficiency, analytical depth, and collaborative potential, while also recognizing challenges related to complexity, trust, ethical considerations, and organizational support. The integration of AI altered decision-making processes, reshaped managerial roles, and prompted reflection on the balance between human judgment and machine intelligence. Managers’ perceptions were influenced not only by technological attributes but also by organizational culture, external pressures, and future strategic ambitions. These insights collectively underscore the importance of understanding managerial perspectives in facilitating successful AI adoption, highlighting both the transformative potential of AI and the nuanced considerations necessary for its effective integration.
5. Discussion
The findings of this study provide important insights into how managers perceive and engage with artificial intelligence in decision-making processes. Managers consistently recognized that artificial intelligence has the potential to enhance efficiency and accelerate decision-making, allowing them to focus on more strategic and creative aspects of their roles. This shift from operational tasks to higher-order strategic activities illustrates a significant transformation in managerial responsibilities, suggesting that artificial intelligence is not merely a tool for data processing but a catalyst for redefining managerial work. Managers highlighted that the ability to analyze large datasets and generate actionable insights enables more informed decisions, demonstrating that artificial intelligence can augment human judgment rather than simply replacing it. This perspective reflects a broader understanding that successful integration of artificial intelligence into decision-making requires managers to balance reliance on automated insights with their own experiential knowledge and contextual understanding. Despite these perceived benefits, managers also articulated several challenges associated with artificial intelligence adoption. The technical complexity of artificial intelligence systems emerged as a significant concern, highlighting the need for adequate training and skill development to ensure that managers can fully utilize these tools. Resistance to change was another prominent theme, indicating that organizational culture and attitudes toward technology play a critical role in adoption processes. Managers noted that fear of job displacement and uncertainty regarding the accuracy of artificial intelligence outputs could hinder acceptance and limit the effective integration of these systems into decision workflows. 
Furthermore, the quality and availability of data were emphasized as foundational to the reliability of artificial intelligence, with managers noting that poor data quality could compromise decision accuracy and erode confidence in artificial intelligence recommendations. These challenges underscore that technical capability alone is insufficient for successful adoption; organizational readiness, managerial competence, and trust in the system are equally crucial. Trust and transparency were consistently highlighted as essential factors influencing managerial acceptance of artificial intelligence. Managers emphasized that understanding how algorithms generate recommendations is critical for building confidence and ensuring accountability. Predictable and reliable outputs were deemed necessary to foster trust, particularly in high-stakes decision contexts. Managers were concerned about the delegation of responsibility to artificial intelligence systems, highlighting the importance of establishing clear protocols that define human oversight and accountability. This reflects an awareness that artificial intelligence introduces new dimensions of responsibility, where managers must navigate the interplay between machine-generated insights and ethical, strategic, and regulatory considerations. Building trust in artificial intelligence is thus not merely a matter of technical accuracy but involves creating transparent, interpretable systems that managers can rely upon while maintaining ultimate decision responsibility. The study also revealed that artificial intelligence adoption has substantial implications for managerial roles and skills. Managers indicated that AI-enabled systems necessitate new competencies, including data literacy, analytical reasoning, and the ability to interpret complex outputs. 
While these systems reduce the burden of routine analytical work, they simultaneously increase expectations regarding strategic oversight, judgment, and integration of machine insights into broader decision frameworks. Some managers expressed concern about the potential erosion of decision authority, reflecting a tension between the empowering and constraining aspects of artificial intelligence. This tension highlights the need for organizations to actively manage role transitions, support skill development, and foster an environment where managers feel confident and competent in leveraging artificial intelligence for decision-making. Changes in the decision-making process emerged as another notable aspect of artificial intelligence adoption. Managers reported that AI facilitated faster and more comprehensive analysis, allowing for more informed decisions within shorter timeframes. The collaborative potential of artificial intelligence was also evident, as managers described how shared insights generated by AI systems enabled multiple stakeholders to engage more effectively in decision-making discussions. This collaborative dimension illustrates how artificial intelligence can serve as a common reference point for organizational deliberation, promoting transparency, consistency, and inclusiveness. However, managers also acknowledged that reliance on artificial intelligence should not diminish critical thinking or reduce human oversight, as blind reliance on machine-generated recommendations could lead to suboptimal or ethically problematic decisions. Ethical considerations were a significant theme throughout the study. Managers were aware of potential biases in artificial intelligence outputs and the importance of ensuring fairness in decision-making processes. Data privacy and regulatory compliance were also cited as crucial considerations, reflecting the broader societal and legal context in which artificial intelligence operates. 
Managers emphasized that human judgment remains central in mitigating ethical risks and maintaining accountability, highlighting the need for frameworks that integrate artificial intelligence insights with ethical decision-making principles. The study suggests that ethical awareness is not only a regulatory necessity but also a determinant of managerial confidence and adoption, as managers are more likely to engage with systems they perceive as fair, transparent, and responsible. Organizational support and infrastructure were identified as critical enablers of successful artificial intelligence adoption. Managers noted that training programs, technical resources, and leadership endorsement significantly influence both adoption rates and effective utilization. Adequate infrastructure, including computing power, data storage, and analytics platforms, was considered essential for the smooth operation of artificial intelligence systems. Leadership support was described as a key factor in reducing resistance, fostering confidence, and signaling organizational commitment to AI initiatives. These findings underscore that artificial intelligence adoption is not solely a technological endeavor but a socio-technical process that depends on organizational preparedness, resource allocation, and supportive leadership. External and environmental influences also shaped managerial perceptions and adoption strategies. Competitive pressure emerged as a strong motivator, with managers recognizing the need to adopt artificial intelligence to maintain or enhance strategic positioning. Observations of industry peers and benchmarks influenced managerial attitudes, demonstrating the role of social and normative pressures in shaping technology adoption. Regulatory environments were also noted as influential, with managers taking care to align artificial intelligence usage with legal and ethical standards. 
These external factors highlight that managerial perceptions are formed not only by internal organizational conditions but also by the broader industry context, including competition, regulatory mandates, and evolving technological norms. The study further illustrated the importance of continuous learning and adaptability in leveraging artificial intelligence effectively. Managers emphasized that ongoing engagement with AI systems, combined with iterative skill development, was necessary to keep pace with rapid technological advancements. This focus on continuous learning aligns with the view that artificial intelligence adoption is a dynamic process, requiring managers to evolve their practices, update their knowledge, and refine decision-making strategies over time. Managers highlighted that long-term strategic integration of artificial intelligence requires deliberate planning, iterative evaluation, and alignment with organizational goals, suggesting that successful adoption is contingent on both technological capability and human adaptability. The findings also highlighted the nuanced relationship between human judgment and machine intelligence. Managers consistently underscored the importance of balancing AI-generated recommendations with contextual knowledge, experience, and intuition. While artificial intelligence enhanced analytical depth and provided valuable insights, managers emphasized that ultimate decision responsibility remained a human prerogative. This balancing act illustrates the concept of collaborative intelligence, where human and machine capabilities are integrated to achieve superior decision outcomes. Managers reported that achieving this balance required careful attention to workflow design, decision protocols, and organizational culture, ensuring that AI supports rather than undermines managerial authority and strategic oversight.
6. Conclusions
This study provides a comprehensive understanding of managerial perceptions of artificial intelligence adoption in decision-making, highlighting both the opportunities and challenges associated with integrating AI into organizational processes. The findings demonstrate that managers recognize the potential of AI to enhance efficiency, accuracy, and analytical depth, allowing for faster and more informed decision-making. AI was perceived as a valuable tool that enables managers to focus on strategic and creative aspects of their roles, thereby reshaping the nature of managerial work. At the same time, the study revealed that adoption is influenced by complex factors beyond technological capabilities, including trust, transparency, organizational support, skill development, ethical considerations, and external pressures. Managers’ perceptions were found to play a pivotal role in shaping the adoption trajectory, mediating how AI is integrated into decision workflows and determining the extent to which its benefits are realized. The research underscores that successful AI adoption is not merely a technical endeavor but a socio-technical process that requires alignment between organizational infrastructure, managerial competencies, and ethical governance. Managers emphasized the need for continuous learning, ongoing skill enhancement, and clear protocols for accountability to ensure that AI supports rather than replaces human judgment. The balance between machine-generated insights and human expertise emerged as a critical consideration, highlighting the concept of collaborative intelligence in which human and artificial intelligence capabilities are integrated to achieve superior decision outcomes. Ethical concerns, including bias, fairness, and data privacy, further shape managerial perceptions, emphasizing the importance of responsible and transparent AI use. 
Organizational support, including leadership endorsement, access to resources, and structured training programs, was identified as a key enabler of effective AI adoption. External factors, such as competitive pressure, industry trends, and regulatory requirements, were also influential in shaping managers’ attitudes toward AI and the strategies employed for its integration. These findings collectively illustrate that AI adoption is a dynamic, context-sensitive process in which managerial perceptions, organizational readiness, and environmental conditions interact to determine both adoption outcomes and the value generated for decision-making processes.
The study highlights that understanding managerial perceptions is essential for organizations seeking to leverage AI effectively in decision-making. Managers’ experiences reveal that AI adoption is not solely about implementing advanced technologies but involves navigating complex human, organizational, and ethical dimensions. The insights gained from this research provide guidance for organizations aiming to foster a supportive environment for AI integration, emphasizing the importance of trust, transparency, skill development, and continuous adaptation. By centering managerial perspectives, the study contributes to a deeper understanding of how AI can be strategically and responsibly incorporated into decision-making practices, ensuring that its potential is maximized while mitigating risks and fostering sustainable organizational growth.
References
- Davenport, T.H.; Ronanki, R. Artificial intelligence for the real world. Harvard Business Review 2018, 96, 108–116. Available online: https://hbr.org/2018/01/artificial-intelligence-for-the-real-world.
- Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology. MIS Quarterly 2003, 27, 425–478.
- Rai, A.; Constantinides, P.; Sarker, S. Next-generation digital platforms. MIS Quarterly 2019, 43, iii–ix.
- Jarrahi, M.H. Artificial intelligence and the future of work. Business Horizons 2018, 61, 577–586.
- Dwivedi, Y.K.; et al. Artificial intelligence adoption research. International Journal of Information Management 2021, 57, 101994.
- Rogers, E.M. Diffusion of innovations, 5th ed.; Free Press, 2003.
- Ghasemaghaei, M. Understanding the impact of AI on decision-making. Information & Management 2021, 58, 103389.
- Makridakis, S. The forthcoming artificial intelligence revolution. Futures 2017, 90, 46–60.
- Emon, M.M.H.; Rahman, K.M.; Islam, M.S.; Nath, A.; Kutub, J.; Santa, N.A. Determinants of Explainable AI Adoption in Customer Service Chatbots: Insights from the Telecom Sector of Bangladesh. In Proceedings of the 2025 7th International Conference on Sustainable Technologies for Industry 5.0 (STI), 2025; pp. 1–6.
- Davis, F.D. Perceived usefulness, perceived ease of use. MIS Quarterly 1989, 13, 319–340.
- Haenlein, M.; Kaplan, A. A brief history of artificial intelligence. California Management Review 2019, 61, 5–14.
- Shrestha, Y.R.; Ben-Menahem, S.M.; von Krogh, G. Organizational decision-making structures. California Management Review 2019, 61, 66–84.
- Tornatzky, L.G.; Fleischer, M. The processes of technological innovation; Lexington Books, 1990.
- Emon, M.M.H. Leveraging AI on Social Media & Digital Platforms to Enhance English Language Teaching and Learning. In AI-Powered English Teaching; IGI Global Scientific Publishing, 2025; pp. 211–238.
- Wilson, H.J.; Daugherty, P.R. Collaborative intelligence. Harvard Business Review 2018. Available online: https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces.
- Faraj, S.; Pachidi, S.; Sayegh, K. Working in the age of AI. Information and Organization 2018, 28, 62–70.
- Brynjolfsson, E.; McElheran, K. Data-driven decision making. American Economic Review 2016, 106, 133–139.
- Kaplan, A.; Haenlein, M. Rulers of the world, unite! Business Horizons 2020, 63, 37–50.
- Raisch, S.; Krakowski, S. Artificial intelligence and management. Academy of Management Review 2021, 46, 192–214.
- Siau, K.; Wang, W. Building trust in artificial intelligence. Journal of Database Management 2018, 29, 1–17.
- Liang, H.; Saraf, N.; Hu, Q.; Xue, Y. Assimilation of enterprise systems. MIS Quarterly 2007, 31, 59–87.
- Emon, M.M.H.; Nath, A.; Rahman, K.M.; Kutub, J.; Rifat, H.H.; Islam, M.M.-U. Evaluating the Impact of AI Integration on Supply Chain Efficiency in E-Commerce Operations. In Proceedings of the 2025 7th International Conference on Sustainable Technologies for Industry 5.0 (STI), 2025; pp. 1–6.
- Keding, C. Understanding AI-driven decision-making. Journal of Business Research 2021, 123, 230–243.
- McAfee, A.; Brynjolfsson, E. Machine, platform, crowd; Norton, 2017.
- Sun, T.Q.; Medaglia, R. Mapping AI in the public sector. Government Information Quarterly 2019, 36, 368–383.
- Teece, D.J. Business models and dynamic capabilities. Long Range Planning 2018, 51, 40–49.
- Wamba, S.F.; et al. Big data analytics and firm performance. Information & Management 2021, 58, 103439.
- Russell, S.; Norvig, P. Artificial intelligence: A modern approach, 4th ed.; Pearson, 2021.
- Chatterjee, S.; et al. AI adoption in organizations. Journal of Business Research 2021, 127, 80–92.
- Jöhnk, J.; Weißert, M.; Wyrtki, K. Ready or not—AI readiness. Business & Information Systems Engineering 2021, 63, 5–20.
- Pavlou, P.A. Consumer acceptance of electronic commerce. MIS Quarterly 2003, 27, 69–103.
- Emon, M.M.H. Circularity Meets AI: Revolutionizing Consumer Engagement Through Green Marketing. In Transforming Business Practices With AI-Powered Green Marketing; IGI Global Scientific Publishing, 2025; pp. 169–208.
- Trunk, A.; Birkel, H.; Hartmann, E. Artificial intelligence in logistics. International Journal of Physical Distribution & Logistics Management 2020, 50, 988–1012.
- von Krogh, G. Artificial intelligence in organizations. Academy of Management Discoveries 2018, 4, 404–409.
- Benbasat, I.; Barki, H. Quo vadis, TAM? Journal of the Association for Information Systems 2007, 8, 211–218.
- O’Reilly, C.A.; Tushman, M.L. Lead and disrupt. California Management Review 2016, 58, 5–22.
- Kankanhalli, A.; Hahn, J.; Tan, S.; Gao, G. Big data analytics. MIS Quarterly 2015, 39, 817–838.
- Emon, M.M.H.; Rahman, K.M.; Nath, A.; Islam, M.M.-U.; Rifat, H.H.; Kutub, J. Exploring Drivers of Sustainable E-Commerce Adoption among SMEs in Bangladesh: A TOE Framework Approach for Industry 5.0 Transformation. In Proceedings of the 2025 7th International Conference on Sustainable Technologies for Industry 5.0 (STI), 2025; pp. 1–6.
- Xu, S.X.; Zhang, X. Impact of AI on decision-making. Journal of Management Information Systems 2021, 38, 472–502.
- Yang, C.; Lan, S.; Shen, W. Artificial intelligence in business transformation. Technological Forecasting and Social Change 2020, 159, 120189.
- Mishra, A.N.; Konana, P.; Barua, A. Antecedents and consequences of IT assimilation. Information Systems Research 2007, 18, 453–473.
- Zhang, P.; Lu, Y.; Gupta, S.; Zhao, L. Designing human-centered AI. Journal of the Association for Information Systems 2021, 22, 523–548.
- Emon, M.M.H. Ethical Intelligence in Motion: Leveraging AI for Responsible Sourcing in Supply Chain and Logistics. In Navigating Responsible Business Practices Through Ethical AI; 2025; pp. 207–240.
- Calvino, F.; Criscuolo, C.; Squicciarini, M. AI diffusion and productivity. In OECD Productivity Working Papers; 2019.
- Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM. MIS Quarterly 2003, 27, 51–90.
- Makarius, E.E.; Mukherjee, D.; Fox, J.D.; Fox, A.K. Rising with the machines. Journal of Business Research 2020, 120, 262–273.
- Emon, M.M.H.; Nath, A.; Rahman, K.M.; Islam, M.M.-U.; Niloy, G.B.; Rifat, M. Examining the Determinants of Generative AI Utilization for SEO and Content Marketing: A TOE Approach in the Bangladeshi Digital Space. In Proceedings of the 2025 IEEE International Conference on Computing, Applications and Systems (COMPAS), 2025; pp. 1–7.
- Sarker, S.; Chatterjee, S.; Xiao, X.; Elbanna, A. The sociotechnical axis of AI. MIS Quarterly 2019, 43, 695–719.
- Huang, M.-H.; Rust, R.T. Artificial intelligence in service. Journal of Service Research 2018, 21, 155–172.
- Jöhnk, J.; Röglinger, M. AI maturity models. Business Research 2022, 15, 875–919.
- Emon, M.M.H. Purpose-Driven Intelligence in Green Marketing: Leveraging AI for CSR-Centric and Sustainable Brand Positioning. In Transforming Business Practices With AI-Powered Green Marketing; IGI Global Scientific Publishing, 2025; pp. 89–122.
- Gupta, S.; Kar, A.K.; Baabdullah, A.; Al-Khowaiter, W.A.A. Big data and AI strategy. Technological Forecasting and Social Change 2020, 153, 119928.
- Tambe, P.; Cappelli, P.; Yakubovich, V. Artificial intelligence in human resources. California Management Review 2019, 61, 15–42.
- Shollo, A.; Galliers, R.D.; Lucas, H.C.; Myers, M.D. Decision support systems. Journal of Decision Systems 2015, 24, 99–115.
- Wirtz, J.; Patterson, P.G.; Kunz, W.; et al. Brave new world of AI. Journal of Service Management 2018, 29, 907–931.
- Keding, C.; Meissner, F. Managerial cognition and AI. Management Decision 2021, 59, 646–666.
- Emon, M.M.H.; Rahman, K.M.; Nath, A.; Fuad, M.N.; Kabir, S.M.I.; Emee, A.F. Understanding User Adoption of Cybersecurity Practices through Awareness and Perception for Enhancing Network Security in Bangladesh. In Proceedings of the 2025 IEEE International Conference on Computing, Applications and Systems (COMPAS), 2025; pp. 1–7.
- Dwivedi, Y.K.; Rana, N.P.; Tamilmani, K.; Raman, R. Adoption of emerging technologies. Information Systems Frontiers 2020, 22, 1285–1295.
- Faraj, S.; von Krogh, G.; Monteiro, E.; Lakhani, K.R. Online community and AI. Organization Science 2021, 32, 605–632.
- Sun, S.; Cegielski, C.G.; Jia, L.; Hall, D.J. Understanding AI adoption. Information & Management 2018, 55, 337–356.
- Lee, J.; Suh, T.; Roy, D.; Baucus, M. Emerging technology and decision-making. Sustainability 2019, 11, 833.
- Emon, M.M.H. Cybersecurity in the Smart City Era: Overcoming Challenges With Modern Cryptographic Solutions. In Securing Smart Cities Through Modern Cryptography Technologies; IGI Global Scientific Publishing, 2025; pp. 43–74.
- Ardito, L.; Petruzzelli, A.M.; Panniello, U.; Garavelli, A.C. Industry 4.0 technologies. Technological Forecasting and Social Change 2019, 144, 157–169.
- Teece, D.J. Dynamic capabilities and strategy. Strategic Management Journal 2017, 38, 613–631.
- Marikyan, D.; Papagiannidis, S.; Alamanos, E. User responses to AI. Computers in Human Behavior 2022, 128, 107089.
- Zhou, L.; Owusu-Marfo, J.; Chen, K. Managerial perceptions of AI. Industrial Management & Data Systems 2020, 120, 1337–1359.
- Raisch, S.; Birkinshaw, J. Organizational ambidexterity. Academy of Management Perspectives 2008, 22, 63–81.
- Emon, M.M.H.; Chowdhury, M.S.A. Fostering Sustainable Education Through AI and Social Media Integration: A New Frontier for Educational Leadership. In International Dimensions of Educational Leadership and Management for Economic Growth; IGI Global Scientific Publishing, 2025; pp. 331–374.
- Brynjolfsson, E.; Rock, D.; Syverson, C. Productivity paradox of AI. American Economic Journal: Macroeconomics 2021, 13, 333–372.
- Pillai, R.; Sivathanu, B. Adoption of AI in HRM. International Journal of Manpower 2020, 41, 637–653.
- Jöhnk, J.; Weißert, M.; Wyrtki, K. AI readiness framework. Business & Information Systems Engineering 2020, 62, 123–138.
- Zuboff, S. The age of surveillance capitalism; PublicAffairs, 2019.
- Emon, M.M.H. Navigating the Digital Labyrinth: Personal Data Privacy and Security at Individual and Organizational Levels. In User-Centric Cybersecurity Implications for Sustainable Digital Transformation; IGI Global Scientific Publishing, 2025; pp. 257–288.
- Emon, M.M.H.; Hlali, A. Overcoming Hurdles: Navigating Challenges in the Adoption of Smart Logistics. In Emerging Trends in Smart Logistics Technologies; IGI Global Scientific Publishing, 2025; pp. 197–228.
- Vial, G. Understanding digital transformation. MIS Quarterly 2019, 43, 223–254.
- von Krogh, G.; Karunakaran, A. Artificial intelligence and knowledge. Academy of Management Discoveries 2020, 6, 87–105.
- Siau, K.; Wang, W. Responsible AI. MIS Quarterly Executive 2020, 19, 37–50.
- Emon, M.M.H. Digital transformation in emerging markets: Adoption dynamics of AI image generation in marketing practices. Telematics and Informatics Reports 2025, 20, 100267.
- Bughin, J.; Seong, J.; Manyika, J.; Chui, M.; Joshi, R. Notes from the AI frontier; McKinsey Global Institute, 2018.
- Shrestha, Y.R.; Ben-Menahem, S.M.; von Krogh, G. Decision-making with AI. California Management Review 2021, 63, 66–85.
- Wamba, S.F.; Gunasekaran, A.; Akter, S.; Ren, S.J.; Dubey, R.; Childe, S.J. Big data analytics capabilities. International Journal of Production Economics 2017, 191, 81–96.
- Makridakis, S.; Spiliotis, E.; Assimakopoulos, V. Statistical and AI forecasting. PLOS ONE 2018, 13, e0194889.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).