Preprint
Review

This version is not peer-reviewed.

AI Is Not Intelligent

Submitted:

25 January 2025

Posted:

27 January 2025

Abstract

The global advancement of Artificial Intelligence (AI) in problem-solving, pattern recognition, and natural language processing is notable, yet AI continues to fall short of exhibiting the essential qualities that constitute real intelligence. This paper analyses AI's limitations from philosophical and psychological viewpoints, combining them with ethical and technological constraints, to show how these systems excel at large-scale data processing and complex tasks while remaining deficient in the key elements of human intelligence. The operational boundaries of AI systems confine them to statistical analytical models and pattern recognition rather than actual comprehension or independent decision-making. Without self-awareness, artificial intelligence systems create uncertainties regarding who is responsible and how moral judgments should be established. The findings of this paper advance our current understanding of the distinction between human intelligence and artificial intelligence, and of the need to understand how AI mechanisms affect human choices and independence.


Introduction

"AI is not intelligent”—this is not mere sensationalism! In the 21st century, artificial intelligence has become almost synonymous with Generative Artificial Intelligence (GAI) and Large Language Models (LLMs), often regarded as the core of AI. However, it is essential to ask: what exactly is artificial intelligence?
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines, enabling them to perform tasks such as learning, problem-solving, perception, and language understanding (Stryker & Kavlakoglu, 2024; AWS, n.d.). Over the years, AI has been defined in various ways. John McCarthy in 1956 described it as "the science and engineering of making intelligent machines, especially intelligent computer programs" (Kersting, 2018), while Alan Turing in the 1950s introduced the "Turing Test" to measure AI's ability to mimic human behaviour (Mirror, 2023). Russell and Norvig (2021) further classified AI based on whether systems think like humans, act like humans, think rationally, or act rationally.
AI has evolved through several phases, from symbolic reasoning in the 1950s to data-driven approaches in the 1990s and the deep learning revolution of the 2010s (DataCamp, 2023; Oracle, 2020; Sestili, 2018). Today, AI applications such as speech recognition, personalised systems, and autonomous systems have transformed industries (Siemens, 2024; Rumley et al., 2023; Casaca & Miguel, 2024; Brainvire, 2025). However, misconceptions about AI's capabilities persist, often fuelled by media hype and industry marketing (Cocato, 2025; Sharps, 2024; SAP, 2024; Stryker & Kavlakoglu, 2024; Yadav, 2024; Firstpost, 2024). Many believe AI possesses superhuman intelligence, creativity, and emotional understanding (Marwala, 2024; Hermann, 2021; Nikolopoulou, 2023; Joyce et al., 2024). In reality, AI excels in narrow, well-defined tasks but lacks general adaptability, ethical reasoning, and emotional intelligence (Glover, 2022; Lumenalta, 2024; Brookhouse, 2023). The portrayal of AI as an autonomous, self-learning entity capable of independent decision-making is misleading. AI systems rely on algorithms and large datasets, requiring human intervention for training and fine-tuning (Marusarz, 2022; Pardo, 2022; IBM, 2021). While AI enhances productivity and innovation, it should be viewed as a tool that complements human capabilities rather than replaces them.
AI is, at bottom, a computing system that operates on the principle of "garbage in, garbage out", and overreliance on AI without recognising its limitations can lead to unintended consequences such as biased decision-making, ethical concerns, and security vulnerabilities. Hence, this paper critically examines the common misconceptions surrounding AI, exploring its actual capabilities, limitations, and the ethical considerations necessary for responsible AI adoption.

Objectives

  • To critically analyse AI’s limitations in comparison to human intelligence.
  • To challenge prevailing assumptions with evidence-based arguments.

Scope and Methodology

The paper adopts a literature-driven approach, critically analysing arguments from psychology, philosophy, and AI development.

What is Intelligence?

Philosophically, intelligence is linked to reasoning, understanding abstract concepts, and applying knowledge to solve problems (Sternberg, 2025). Aristotle viewed it as rational thought and the pursuit of knowledge, while modern philosophers associate it with adaptability to changing environments (Brooks, 2024; Sternberg, 2021). Psychologically, intelligence is defined as the ability to acquire and apply knowledge and skills (Legg, 2025; Jaarsveld & Lachmann, 2017; Ruhl, 2024), with Howard Gardner expanding it beyond IQ to multiple intelligences such as linguistic, logical-mathematical, spatial, and emotional intelligence (Cerruti, 2013). Robert Sternberg’s triarchic theory categorises intelligence into analytical, creative, and practical components that help individuals function effectively in diverse contexts (Clarke & Sternberg, 1986).
Adaptability is another core attribute of intelligence, enabling individuals to respond to new challenges by applying learned knowledge and skills (Sternberg, 1996). Human intelligence stands out for its flexibility in handling unfamiliar situations, drawing from past experiences, and adjusting behaviour accordingly (Gardner, 1983; Sternberg, 1985). Another key element is self-awareness, which allows reflection on thoughts, emotions, and actions (Goleman, 2020; Salovey & Mayer, 1997; Schore, 2016). Unlike artificial systems, humans possess introspection and a sense of self, aiding conscious decision-making with emotional intelligence that recognises, understands, and manages emotions for effective communication and relationships.

Key Elements of Human Intelligence

Human intelligence is composed of several core elements that distinguish it from artificial intelligence. One of these elements is reasoning, which involves the ability to analyse information, draw conclusions, and solve complex problems (Carroll, 1993). Human reasoning is not limited to logical deduction but also includes intuitive and moral reasoning, allowing individuals to navigate ethical dilemmas and make informed life choices (Haidt, 2001; Greene et al., 2001; Kohlberg, 1984). Unlike AI, which processes data through pre-defined algorithms, human reasoning is context-dependent and influenced by experiences, emotions, and social norms.
Creativity is another defining feature of human intelligence, characterised by the ability to generate novel ideas, think outside the box, and approach challenges with innovative solutions (Runco & Jaeger, 2012; Kaufman & Beghetto, 2009). Creativity involves imagination and the ability to synthesise diverse concepts, which AI systems struggle to replicate beyond the limits of their training data (Russell & Norvig, 2021; Goodfellow et al., 2016; AI for Good, 2024). Human creativity is deeply intertwined with personal experiences, emotions, and cultural influences, making it unique to each individual in a way that AI lacks.
Another element of intelligence is consciousness, which refers to an individual's awareness of their own thoughts, emotions, and existence (Seth, 2021; Chalmers, 1996). Consciousness enables humans to reflect on their actions, set long-term goals, and engage in self-improvement. While AI operates based on data inputs and outputs (Russell & Norvig, 2021; Goodfellow et al., 2016; Bishop, 2006), human consciousness allows for subjective experience, introspection, and moral decision-making (Damasio, 1999; Chalmers, 1996). Consciousness is what gives human intelligence depth, allowing for the formation of personal identity and a sense of purpose.

AI’s Imitation of Intelligence

The perception that artificial intelligence (AI) is truly intelligent is largely based on its ability to imitate human cognitive functions rather than possessing genuine understanding. AI systems, particularly those leveraging machine learning and deep learning, excel at pattern recognition, data analysis, and task execution. However, their performance is driven by statistical models rather than true comprehension or cognitive processing (Iordanov, 2024; Lake et al., 2015; Russell & Norvig, 2021). François Chollet, a senior staff engineer at Google, has argued that AI systems such as LLMs will never be intelligent (Carroll, 2024). While AI can simulate aspects of intelligence, such as language processing and problem-solving, it fundamentally lacks the deeper elements of understanding that characterise human cognition.

Machine Learning vs. True Understanding

Machine learning (ML), a subset of AI, enables systems to identify patterns and make predictions based on vast amounts of data (MIT Sloan, 2021; IBM, 2021b; Columbia Engineering, 2023). These systems are trained using algorithms that adjust their parameters to optimise accuracy over time, allowing AI to recognise images, translate languages, and generate text that appears coherent and human-like. However, this process relies entirely on statistical correlations rather than an inherent grasp of actual meaning. AI does not understand concepts in the way humans do; instead, it processes inputs to produce outputs based on probabilities and learned associations. For instance, a language model like GPT-4 generates text based on the likelihood of word sequences appearing together, rather than possessing any intrinsic knowledge of language rules or context.
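To make the distinction concrete, the following sketch is a purely illustrative toy of ours (a bigram frequency table over an invented corpus), vastly simpler than GPT-4 or any real large language model, but it performs the same kind of operation: the next word is chosen from learned statistical associations, with no representation of meaning anywhere in the process.

```python
from collections import Counter, defaultdict

# Toy corpus: the only "knowledge" this model will ever have.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation; no notion of meaning is involved."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"  # outside its training data, the model has nothing to offer
    return candidates.most_common(1)[0][0]

print(predict_next("the"))    # 'cat' -- a frequent continuation, chosen purely from counts
print(predict_next("sat"))    # 'on'  -- the word that always followed 'sat' in the corpus
print(predict_next("mouse"))  # '<unknown>' -- never seen, so no generalisation
```

A real LLM replaces the frequency table with a neural network containing billions of parameters operating over sub-word tokens, yet the output is still produced from learned associations rather than from an understanding of what the words refer to.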
As noted earlier, true understanding is of a different kind: unlike machine learning systems that rely on pattern recognition, humans possess semantic understanding, allowing them to discern intent, sarcasm, and deeper implications within communication. AI lacks this capacity and instead operates within the confines of its training data, making it susceptible to errors in situations that require common sense, intuition, or moral judgment.

Computational Models vs. Cognitive Models

AI operates using computational models that are fundamentally different from human cognitive models. Computational models are algorithm-driven frameworks designed to process data, execute logical operations, and optimise for efficiency (Krzywanski et al., 2024; Sarker, 2021; Rausch et al., 2021). These models rely on mathematical functions, neural networks, and statistical methods to simulate aspects of human behaviour but lack the intrinsic mental processes that underlie true intelligence. They function deterministically or probabilistically, depending on predefined rules or patterns within the data they have been trained on.
Cognitive models, on the other hand, aim to represent the complexities of human thought processes, including memory, perception, problem-solving, and decision-making (Wang & Chiew, 2010; Prezenski et al., 2017; Newell & Bröder, 2008). These models incorporate elements such as emotions, context-awareness, and experiential learning, which are difficult to replicate in AI systems (Olider, 2024; Henning, 2023; Mishra & Tiwary, 2019). Unlike computational models that operate linearly, cognitive models are dynamic and adaptive, integrating past experiences and emotional states to shape future behaviour. Human cognition is influenced by factors such as social interaction, personal beliefs, and cultural background—dimensions that AI cannot genuinely incorporate or understand.

The Fundamental Differences Between AI and Human Intelligence

One of the key arguments against AI being truly intelligent is its lack of intrinsic motivation and self-awareness—two essential attributes that differentiate human intelligence from artificial systems (Wang, 2023; Su, 2024; Zeng et al., 2024). While AI can perform complex tasks, optimise processes, and generate human-like responses, it lacks inherent drive or consciousness. Intrinsic motivation, which compels humans to pursue goals, learn new skills, and engage with the world for personal growth or emotional fulfillment, is absent in AI. Unlike humans, whose actions are influenced by emotions, values, and experiences, AI operates solely on external commands and predefined objectives (Dhaduk, 2023; Digiprima, 2025). This absence of intrinsic motivation limits AI’s ability to exhibit true autonomy or purpose, as it cannot engage in self-directed learning or experience curiosity and self-improvement beyond its programmed parameters.
Self-awareness is another component of human intelligence; it allows individuals to reflect on their thoughts, emotions, and actions, providing a sense of identity and enabling introspection, ethical reasoning, and personal development (Carden et al., 2022; Branch & George, 2014; London, 2022). AI lacks this internal sense of self or consciousness, preventing it from recognising limitations, adapting based on feedback, or making informed choices rooted in experience and aspiration (Zeng et al., 2024; Wan, 2024; Yin et al., 2024). Although AI can analyse data and execute tasks with precision, it does so without subjective experience or personal context. For example, an AI may provide virtual tutoring but lacks the emotional or ethical understanding necessary for compassionate teaching and learning. Without self-awareness, AI cannot exhibit true empathy, moral judgment, or independent thought, limiting its potential beyond statistical learning.

Key Limitations of AI

  • Lack of Consciousness and Understanding: A key limitation of AI is its reliance on syntactic processing rather than true semantic comprehension, a concept illustrated by John Searle's famous Chinese Room argument. According to Searle, an AI system may manipulate symbols and produce seemingly intelligent responses without actually understanding their meaning (Cole, 2004). AI processes input based on rules and patterns, responding in ways that align with statistical probabilities, but it does not grasp the underlying concepts or context. For example, a chatbot can respond to user queries fluently, but it lacks genuine comprehension of the conversation's emotional or philosophical nuances.
    A counterargument often raised is that advanced neural networks, particularly Deep Learning Models, can approximate understanding by recognizing complex patterns and correlations across vast datasets (Idrees, 2024; Yousef & Allmer, 2023; Zhou, 2018). Proponents argue that neural networks, such as transformer-based models, develop representations that mirror human-like cognitive processing (Ornes, 2022; Miller, 2024; Ito et al., 2022). However, critics maintain that these models still function on probabilistic associations rather than genuine comprehension (Pavlus, 2024; Puebla et al., 2019; Li et al., 2023). Despite impressive advancements in natural language processing and image recognition, AI lacks the conscious experience and subjective understanding inherent in human cognition (Mogi, 2024; Farisco et al., 2024; Albantakis & Tononi, 2021).
  • Dependency on Data and Patterns: Another argument against AI’s intelligence is its heavy dependency on training data and pattern recognition (Holzinger et al., 2023; Razavian, 2020; Data Ideology, 2024). AI models require extensive datasets to function effectively and struggle when faced with scenarios outside their training parameters. While humans adapt and generalise knowledge across different contexts, AI is constrained by its programmed scope. For instance, an AI model trained on historical data may fail to adapt to unprecedented situations, such as rapidly evolving economic trends or novel scientific discoveries.
    In the healthcare sector, for example, AI diagnostic tools have occasionally misinterpreted medical conditions due to biases in the training data, leading to incorrect predictions and potential harm to patients (Ueda, 2023; Murphy, 2024; Office of Minority Health, 2024; Smith, 2023). Similarly, autonomous vehicles have encountered challenges in unpredictable environments, such as reacting to rare road conditions or unexpected pedestrian behaviour (Rezwana & Lownes, 2024; MulticoreWare, 2024; Akridata, 2024; Miller et al., 2024). These examples illustrate AI’s inability to extrapolate beyond its learned experiences, underscoring the distinction between data-driven decision-making and human cognitive flexibility.
  • Absence of Common-Sense Reasoning: AI's struggle with intuitive decision-making further reinforces the argument that it lacks true intelligence (Global Navigator LLC, 2024; Finlay, 2024; Kim, 2020). Humans possess an innate ability to apply common-sense reasoning to everyday situations, drawing on life experiences, cultural knowledge, and social awareness. This capability allows people to navigate ambiguous or uncertain scenarios with ease. In contrast, AI lacks the heuristic-based problem-solving approach that enables humans to make quick, informed decisions without explicit instruction (Felin & Holweg, 2024; Mukherjee & Chang, 2024; Gurney et al., 2023).
    For instance, an AI assistant might fail to recognise sarcasm or cultural nuances in conversation, leading to misinterpretations. In decision-making tasks, AI can optimise based on available data but struggles to consider abstract factors such as ethical dilemmas, emotional intelligence, and social context (Ong, 2021; Chang, 2023; Latif, 2022). Despite advancements in machine learning, AI still exhibits difficulty in areas requiring flexible, adaptive, and context-aware reasoning, making it ill-equipped for complex real-world challenges where human intuition is crucial.
  • Ethical and Moral Shortcomings: Another fundamental limitation of AI is its inability to make value-based decisions without human input. Ethical and moral reasoning requires an understanding of abstract concepts such as fairness, empathy, and social responsibility—qualities that AI lacks (Afroogh, 2024; Jiang, 2024; Tai, 2020). While AI systems can be programmed to follow ethical guidelines, they do not possess the intrinsic ability to weigh competing values or understand the human consequences of their decisions.
    Bias in AI algorithms further exacerbates ethical concerns. Since AI models are trained on historical data, they often inherit and amplify existing biases, leading to discriminatory outcomes in areas such as hiring, law enforcement, and lending. For example, facial recognition systems have been shown to exhibit racial and gender biases, disproportionately misidentifying individuals from underrepresented groups (Hardesty, 2018; Gentzel, 2021; Leslie, 2020). These biases raise significant questions about AI’s role in decision-making processes and the potential for perpetuating societal inequalities; a minimal sketch of how such bias can arise from the training data alone follows this list.
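To illustrate the mechanism, the sketch below is a hypothetical, deliberately oversimplified example of ours (the group names, the numbers, and the 50% decision rule are all invented for illustration and do not describe any real hiring system). A model fitted to historically biased hire/no-hire labels simply reproduces those labels: the bias in the data becomes the bias of the model.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired). Every candidate below is
# equally qualified, but past decisions favoured group_a over group_b.
history = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +
    [("group_b", True)] * 30 + [("group_b", False)] * 70
)

# "Training": estimate P(hired | group) directly from the historical labels.
hired, total = Counter(), Counter()
for group, was_hired in history:
    total[group] += 1
    if was_hired:
        hired[group] += 1

def recommend_hire(group: str) -> bool:
    """Recommend hiring when the historical hire rate for the group exceeds 50%."""
    return hired[group] / total[group] > 0.5

# Two equally qualified candidates receive different recommendations, because the
# model can only mirror the bias already present in its training data.
print(recommend_hire("group_a"))  # True  (historical hire rate 0.8)
print(recommend_hire("group_b"))  # False (historical hire rate 0.3)
```

Production systems use many more features and far more sophisticated estimators, but the underlying mechanism is the same: the model optimises agreement with past decisions, so any bias embedded in those decisions is learned as though it were signal.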

The Illusion of AI Intelligence

The Hype vs. Reality

The widespread marketing of AI technologies has significantly contributed to the illusion that AI possesses genuine intelligence. Companies and media outlets often exaggerate the capabilities of AI systems, portraying them as autonomous, self-learning entities that can think, reason, and solve complex problems like humans (Federal Trade Commission, 2023; Stewart, 2024). Examples include advanced AI models such as GPT (Generative Pre-trained Transformer) and autonomous systems, which are frequently marketed as ground-breaking innovations capable of human-level understanding and decision-making. However, in reality, these systems rely on vast amounts of data and pattern recognition rather than true comprehension.
AI-generated text, for instance, can appear coherent and credible, yet it lacks the depth of human thought, emotional understanding, and contextual awareness. Similarly, autonomous vehicles are often promoted as intelligent machines capable of independent navigation, but their functionality remains highly dependent on predefined algorithms and environmental inputs (Wang et al., 2024; Hurair et al., 2024; Abdallaoui, 2023). The discrepancy between marketing narratives and actual AI performance has led to unrealistic public expectations, further fuelling misconceptions about AI's capabilities and limitations.

Anthropomorphism in AI Perception

A major factor contributing to the illusion of AI intelligence is anthropomorphism—the tendency to attribute human-like qualities to machines (Coghlan, 2024; Bhatti & Robert, 2023). As AI systems become more sophisticated in language processing and interaction, users often perceive them as intelligent beings capable of thought, emotion, and intention. Human-like interactions, such as chatbots mimicking conversational patterns or virtual assistants responding with humour and empathy, create the impression that AI possesses cognitive abilities beyond its actual capabilities.
This misconception is further reinforced by the design choices made by AI developers, who intentionally incorporate features that make AI appear more relatable and intelligent. Voice assistants with natural-sounding speech, chatbots with personalised responses, and humanoid robots with expressive facial features all contribute to the illusion of consciousness and understanding (Guingrich & Graziano, 2023; Gros, 2022; Shanahan, 2024). However, these interactions are driven by pattern recognition and preprogrammed responses rather than genuine intelligence.

AI as an Augmentative Tool, Not an Autonomous Intelligence

AI should be seen as a complement to human intelligence rather than a replacement (Brynjolfsson & McAfee, 2017; Russell & Norvig, 2021). While AI excels at processing vast amounts of data, identifying patterns, and automating repetitive tasks, it lacks critical human attributes such as creativity, ethical reasoning, and emotional intelligence (Tegmark, 2017). The most effective use of AI lies in collaborative settings, where it enhances human capabilities by providing data-driven insights and automating mundane tasks, allowing humans to focus on strategic, creative, and interpersonal aspects of work (Wilson & Daugherty, 2018). For instance, in sectors like healthcare and education, AI can support professionals by analysing data, offering recommendations, and streamlining administrative processes. However, decision-making, ethical considerations, and nuanced judgment must remain in human hands (Bostrom, 2014).
Several industries have successfully leveraged AI to enhance rather than replace human roles. In healthcare, AI-powered diagnostic tools assist doctors in analysing medical images, predicting disease outbreaks, and personalising treatment plans (Topol, 2019). For example, AI-driven radiology systems can identify anomalies in scans with high accuracy, helping radiologists make faster, more precise diagnoses while reducing workload (Esteva et al., 2017). However, human expertise remains essential in interpreting results within a broader clinical context (Rajpurkar et al., 2018).
In education, AI has been integrated to personalise learning experiences and provide real-time feedback to students (Luckin et al., 2016). Intelligent tutoring systems can adapt to individual learning styles, offering tailored recommendations and pinpointing areas where students need extra support (Holmes et al., 2019). However, teachers are indispensable for providing emotional support, fostering critical thinking, and addressing students' social and emotional development (Selwyn, 2019). AI enhances the learning process but cannot replace the nuanced understanding and mentorship educators offer.
In business, AI-powered analytics tools help organisations make data-driven decisions, optimise supply chains, and improve customer experiences (Davenport & Ronanki, 2018). AI algorithms analyse consumer behaviour and market trends, providing insights that inform business strategies (Agrawal et al., 2018). However, leadership, creativity, and interpersonal skills remain uniquely human, and businesses that successfully integrate AI do so by using it as a decision-support tool, not an autonomous decision-maker (Brynjolfsson et al., 2020).
Despite its potential to augment human capabilities, over-reliance on AI poses risks, including the de-skilling of human professionals (Frank et al., 2019). As AI becomes more advanced and integrated into workflows, there is a danger that individuals may become overly dependent on it, leading to a decline in critical thinking, problem-solving abilities, and domain expertise (Frey & Osborne, 2017). For instance, excessive reliance on AI in medical diagnostics could erode healthcare professionals' diagnostic skills, potentially compromising patient care when AI systems fail or encounter unfamiliar scenarios (Davenport & Kalakota, 2019). Additionally, AI integration raises concerns about job displacement and the erosion of traditional skill sets (Manyika et al., 2017). While AI enhances efficiency and productivity, it also threatens the value of human labour in certain tasks. The challenge lies in ensuring AI complements, rather than replaces, human roles, preserving the unique capabilities humans bring to the workforce (Bessen, 2019).

Debunking Popular Myths About AI

Myth 1: AI Can Think and Feel: A common misconception is that AI possesses cognitive abilities akin to human thought and emotion. Popular media and science fiction often depict AI as sentient beings capable of experiencing emotions, forming opinions, and engaging in independent reasoning (Kurzweil, 2005; Bostrom, 2014). However, AI fundamentally lacks subjective experience and consciousness (Chalmers, 1996; Koch, 2004). AI operates through complex algorithms that process and respond to data based on pre-programmed rules and statistical correlations (Russell & Norvig, 2021; Goodfellow, Bengio, & Courville, 2016), without any genuine understanding or awareness. The lack of consciousness means AI cannot possess true intent, understanding, or moral responsibility (Marcus & Davis, 2019; Russell & Norvig, 2021), reinforcing the fact that AI remains a tool rather than an autonomous entity.
Myth 2: AI Will Fully Replace Humans: The fear that AI will entirely replace human labour across all industries is another widespread misconception. While AI has indeed automated many routine and repetitive tasks, its capabilities remain limited when it comes to complex, uncertain, and dynamic environments that require human intuition, ethical judgment, and creativity (Frey & Osborne, 2013; Acemoglu & Restrepo, 2018). AI thrives in structured environments with clear rules and vast data availability but struggles in scenarios requiring contextual understanding, moral reasoning, and social intelligence (Russell & Norvig, 2021; Marcus & Davis, 2019). Evidence from sectors such as healthcare and education suggests that AI functions best as an augmentative tool rather than a replacement. For example, AI-assisted diagnostic tools can enhance the accuracy of medical diagnoses (Topol, 2019), but they cannot replace the empathetic decision-making of healthcare professionals. Similarly, in education, AI-powered platforms personalise learning experiences (Siemens & Wiggins, 2014), but they cannot replace the role of teachers in nurturing learners.
Myth 3: AI Is Free of Bias and Errors: A prevalent belief is that AI systems are inherently objective and free from human biases. However, AI algorithms are trained on historical data collected and curated by humans, which often contain implicit biases. As a result, AI systems can perpetuate and even amplify existing social and cultural biases present in the data (Buolamwini & Gebru, 2018; Noble, 2018). Instances of biased AI decision-making have been observed in areas such as hiring processes, law enforcement, and loan approvals, where AI systems have disproportionately disadvantaged certain demographic groups due to biased training data. For example, AI-driven recruitment tools have been found to favour certain candidates based on biased historical hiring patterns, reinforcing systemic inequalities (Dastin, 2018).

Ethical and Societal Implications of Overestimating AI

The widespread overestimation of AI's capabilities has significant ethical and societal implications such as:
  • Implications for Employment: The impact of AI on employment is a widely debated concern, particularly the fear of large-scale job displacement. While the idea that AI can autonomously replace human workers is prevalent, the reality is more complex. AI often leads to job transformation rather than outright displacement (Frey & Osborne, 2013; Acemoglu & Restrepo, 2018). Routine and repetitive tasks can be automated, but this creates opportunities for new roles that require human oversight, creativity, and emotional intelligence. For example, in industries like manufacturing and administration, AI-driven automation can boost efficiency, allowing workers to focus on more complex and strategic tasks (Ford, 2015). However, without proper reskilling and upskilling initiatives, this transition could worsen unemployment and economic disparities (Autor, 2019).
  • Bias and Discrimination Concerns: AI systems trained on biased datasets can perpetuate and even amplify societal inequalities. Biases, whether racial, gender-based, or socioeconomic, can manifest in AI algorithms, leading to discriminatory outcomes in critical areas such as hiring, law enforcement, and finance (Angwin et al., 2016; Obermeyer et al., 2019; Barocas & Selbst, 2016; Selbst & Barocas, 2018). For instance, facial recognition technologies have been shown to perform poorly on minority groups, resulting in wrongful identifications and perpetuating systemic discrimination (Buolamwini & Gebru, 2018).
  • Accountability and Transparency Challenges: AI decision-making processes are often complex, making it challenging to establish accountability and ensure transparency. Many AI models operate as "black boxes," where even developers may not fully understand how decisions are made (Lipton, 2016; Doshi-Velez & Kim, 2017). This lack of interpretability poses ethical concerns, especially in areas like healthcare, finance, and criminal justice, where the consequences of AI decisions can be severe (Goodfellow, Bengio, & Courville, 2016; Russell & Norvig, 2021).

Future Directions and Recommendations

1. Ethical AI Development: A human-centric approach to AI governance is essential to align technological advancements with societal values.
2. Balancing Expectations with Reality: The media, policymakers, and educational institutions should collaborate to provide accurate, accessible information about AI, positioning it as a complement to human intelligence rather than an autonomous replacement.
3. Interdisciplinary Collaboration: Researchers in fields like ethics, psychology, and sociology should work alongside AI developers to create systems that are both technically robust and socially responsible.

Conclusion

The analysis of AI’s capabilities highlights its inability to achieve true intelligence as understood in human terms. While AI demonstrates impressive computational power and pattern recognition abilities, it fundamentally lacks key elements of human intelligence, such as reasoning, creativity, consciousness, and emotional depth. The misconceptions surrounding AI’s potential, often fuelled by media and industry hype, have led to exaggerated expectations that do not align with its actual capabilities. Arguments such as the absence of intrinsic motivation, common-sense reasoning, and ethical judgment further underscore the distinction between AI’s imitation of intelligence and genuine human cognition.
The final position of this paper asserts that AI should be recognised as a powerful augmentative tool rather than an autonomous intelligent entity. It excels at processing vast amounts of data, identifying patterns, and performing repetitive tasks with efficiency; however, it remains inherently dependent on human input and oversight. Viewing AI as a complement to human abilities, rather than a replacement, is essential to ensuring that its deployment supports human endeavours rather than undermines them.
As AI continues to evolve and integrate into various aspects of society, there is an urgent need for responsible AI adoption. Ethical considerations, such as fairness, transparency, and accountability, must be prioritised to prevent unintended harm and societal inequalities. Policymakers, developers, and end-users must collaborate to establish frameworks that ensure AI is used responsibly and aligns with human values.

References

  1. Abdallaoui, S., Halima Ikaouassen, Kribèche, A., Chaibet, A., & Aglzim, E. (2023). Advancing autonomous vehicle control systems: An in-depth overview of decision-making and manoeuvre execution state of the art. The Journal of Engineering, 2023(11). [CrossRef]
  2. Acemoglu, D., & Restrepo, P. (2018). Artificial intelligence, automation, and work. National Bureau of Economic Research.
  3. Afroogh, S., Akbari, A., Malone, E., Kargar, M., & Alambeigi, H. (2024). Trust in AI: progress, challenges, and future directions. Humanities and Social Sciences Communications, 11(1). [CrossRef]
  4. AI for Good. (2024a). How we can ensure that AI works for us. YouTube. https://youtu.be/H5xOof91Q5M.
  5. Akridata. (2024). How Edge Case Detection Enhances AI Safety in Autonomous Vehicles. Akridata • Edge Data Platform for Data-Centric AI. https://akridata.ai/blog/edge-case-detection-safer-ai-autonomous-vehicles/.
  6. Albantakis, L., & Tononi, G. (2021). What we are is more than what we do. ArXiv.org. https://arxiv.org/abs/2102.04219. [CrossRef]
  7. Autor, D. H. (2019). Work in the age of artificial intelligence. Science, 363(6430), 762-768.
  8. AWS. (n.d.). What are AI Agents? - Agents in Artificial Intelligence Explained - AWS. Amazon Web Services, Inc. https://aws.amazon.com/what-is/ai-agents/.
  9. Bhatti, S., & Robert, L. (2023). What Does It Mean to Anthropomorphise Robots? Food For Thought for HRI Research. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/175558/Bhatti%20and%20Robert%202023.pdf.
  10. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
  11. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
  12. Brainvire. (2025). How AI is revolutionizing the manufacturing industry for a smarter future. Brainvire.com - a Website Development, Mobile Apps and Digital Marketing Company. https://www.brainvire.com/blog/ai-led-solutions-for-manufacturing-industry/.
  13. Branch, W. T., & George, M. (2014). Reflection-Based Learning for Professional Ethical Formation. AMA Journal of Ethics, 19(4), 349–356. [CrossRef]
  14. Brookhouse, O. (2023). Can artificial intelligence understand emotions? Telefónica Tech. https://telefonicatech.com/en/blog/can-artificial-intelligence-understand-emotions.
  15. Brooks, A. C. (2024). Are You a Platonist or an Aristotelian? The Atlantic. https://www.theatlantic.com/ideas/archive/2024/10/aristotle-plato-philosophy-happiness/680339/.
  16. Brynjolfsson, E., & McAfee, A. (2017, July 18). The Business of Artificial Intelligence. Harvard Business Review. https://hbr.org/2017/07/the-business-of-artificial-intelligence.
  17. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency.
  18. Carden, J., Jones, R. J., & Passmore, J. (2022). Defining self-awareness in the context of adult development: A systematic literature review. Journal of Management Education, 46(1). Sagepub. [CrossRef]
  19. Carroll, J. B. (1993). Human Cognitive Abilities. Cambridge University Press.
  20. Carroll, S. (2024). 280 François Chollet on Deep Learning and the Meaning of Intelligence. Preposterousuniverse.com. https://www.preposterousuniverse.com/podcast/2024/06/24/280-francois-chollet-on-deep-learning-and-the-meaning-of-intelligence/.
  21. Casaca, J. A., & Miguel, L. P. (2024). The influence of personalization on consumer satisfaction. Advances in Marketing, Customer Relationship Management, and E-Services Book Series, 256–292. [CrossRef]
  22. Cerruti, C. (2013). Building a functional multiple intelligences theory to advance educational neuroscience. Frontiers in Psychology, 4. [CrossRef]
  23. Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press. https://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/Chalmers_The_Conscious_Mind.pdf.
  24. Chang, E. Y. (2023). CoCoMo: Computational Consciousness Modeling for Generative and Ethical AI. ArXiv.org. https://arxiv.org/abs/2304.02438. [CrossRef]
  25. Clarke, A. M., & Sternberg, R. J. (1986). Beyond IQ: A Triarchic Theory of Human Intelligence. British Journal of Educational Studies, 34(2), 205. [CrossRef]
  26. Cocato, P. (2025). “The limit of AI lies in its inability to understand complex contexts or show empathy.” Telefónica. https://www.telefonica.com/en/communication-room/blog/limit-ai-lies-inability-understand-complex-contexts-show-empathy/.
  27. Coghlan, S. (2024). Anthropomorphizing Machines: Reality or Popular Myth? Minds and Machines, 34(3). [CrossRef]
  28. Cole, D. (2004, March 19). The Chinese Room Argument. Stanford.edu. https://plato.stanford.edu/entries/chinese-room/.
  29. Columbia Engineering. (2023). Artificial Intelligence (AI) vs. Machine Learning. CU-CAI. https://ai.engineering.columbia.edu/ai-vs-machine-learning/.
  30. Damasio, A. R. (1999). The Feeling of What happens: Body, Emotion and the Making of Consciousness. Vintage, Cop.
  31. Data Camp. (2023). What is symbolic AI? Datacamp.com; DataCamp. https://www.datacamp.com/blog/what-is-symbolic-ai.
  32. Data Ideology. (2024, April 24). Understanding AI and Data Dependency - Data Ideology. Data Ideology. https://www.dataideology.com/understanding-ai-and-data-dependency/.
  33. Dhaduk, H. (2023). 6 Types of AI Agents: Exploring the Future of Intelligent Machines. Simform - Product Engineering Company. https://www.simform.com/blog/types-of-ai-agents/.
  34. Digiprima. (2025). Types of AI Agents: From Simple to Complex Systems - Digiprima - Medium. Medium. https://medium.com/%40digiprima/types-of-ai-agents-from-simple-to-complex-systems-f7967840d298.
  35. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
  36. Farisco, M., Evers, K., & Changeux, J.-P. (2024). Is artificial consciousness achievable? Lessons from the human brain. ArXiv.org. https://arxiv.org/abs/2405.04540. [CrossRef]
  37. Federal Trade Commission. (2023). Keep your AI claims in check. Federal Trade Commission. https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.
  38. Felin, T., & Holweg, M. (2024). Theory Is All You Need: AI, Human Cognition, and Decision Making. Social Science Research Network. [CrossRef]
  39. Finlay, V. (2024). Using AI for Decision-Making The HOW Institute for Society. The HOW Institute for Society. https://thehowinstitute.org/using-ai-for-decision-making/.
  40. Firstpost. (2024). How companies overhype the use of artificial intelligence vantage on firstpost. YouTube. https://www.youtube.com/watch?v=2wp5Ksld5nQ.
  41. Ford, M. (2015). Rise of the robots: Technology and the threat of a jobless future. Basic Books.
  42. Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 70(4-5), 2242-2251.
  43. Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. Basic Books.
  44. Gentzel, M. (2021). Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy. Philosophy & Technology, 34(4). [CrossRef]
  45. Global Navigator LLC. (2024). Artificial Intuition: Can AI Truly Develop Human-Like Intuitive Decision Making? Medium. https://medium.com/%4013032765d/artificial-intuition-can-ai-truly-develop-human-like-intuitive-decision-making-b29ce8da93f5.
  46. Glover, E. (2022). Strong AI vs weak AI: What’s the difference. Builtin.com. https://builtin.com/artificial-intelligence/strong-ai-weak-ai.
  47. Goleman, D. (2020). Emotional intelligence: Why it can matter more than IQ. Bloomsbury. (Original work published 1995).
  48. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. The MIT Press. https://www.deeplearningbook.org/.
  49. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. [CrossRef]
  50. Gros, D., Li, Y., & Yu, Z. (2022). Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems. ArXiv.org. https://arxiv.org/abs/2210.12429. [CrossRef]
  51. Guingrich, R. E., & Graziano, M. (2023). Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines. ArXiv.org. https://arxiv.org/abs/2311.10599. [CrossRef]
  52. Gurney, N., Miller, J. H., & Pynadath, D. V. (2023). The Role of Heuristics and Biases during Complex Choices with an AI Teammate. Proceedings of the... AAAI Conference on Artificial Intelligence, 37(5), 5993–6001. [CrossRef]
  53. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
  54. Hardesty, L. (2018). Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News Massachusetts Institute of Technology. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212.
  55. Henning, J. E. (2023). Beyond Action and Cognition: The Role of Awareness and Emotion in Experiential Learning. Journal of Philosophy of Education, 79(2).
  56. Hermann, I. (2021). Artificial intelligence in fiction: Between narratives and metaphors. AI & Society, 38. [CrossRef]
  57. Holzinger, A., Saranti, A., Angerschmid, A., Finzel, B., Schmid, U., & Mueller, H. (2023). Toward human-level concept learning: Pattern benchmarking for AI algorithms. 100788–100788. [CrossRef]
  58. Hurair, M., Ju, J., & Han, J. (2024). Environmental-Driven Approach towards Level 5 Self-Driving. Sensors, 24(2), 485–485. [CrossRef]
  59. IBM. (2021a). Machine learning. Ibm.com. https://www.ibm.com/think/topics/machine-learning.
  60. IBM. (2021b). Unsupervised learning. Ibm.com. https://www.ibm.com/think/topics/unsupervised-learning.
  61. Idrees, H. (2024). Shallow Learning vs. Deep Learning: Is Bigger Always Better? Medium. https://medium.com/%40hassaanidrees7/shallow-learning-vs-deep-learning-is-bigger-always-better-51c0bd21f059.
  62. Iordanov, G. (2024). Rethinking AI. Newman Springs.
  63. Ito, T., Yang, G. R., Laurent, P., Schultz, D. H., & Cole, M. W. (2022). Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior. Nature Communications, 13(1), 673. [CrossRef]
  64. Jaarsveld, S., & Lachmann, T. (2017). Intelligence and Creativity in Problem Solving: The Importance of Test Features in Cognition Research. Frontiers in Psychology, 8(134). [CrossRef]
  65. Jiang, Z. Z. (2024). Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions. Arxiv.org. https://arxiv.org/html/2412.20564v1. [CrossRef]
  66. Jinchang Wang. (2023). Self-Awareness, a Singularity of AI. Philosophy Study, 13(2). [CrossRef]
  67. Joyce, K., Balthazor, A., & Magee, J. (2024). Beyond the hype: The SEC’s intensified focus on AI washing practices. Hklaw.com. https://www.hklaw.com/en/insights/publications/2024/04/beyond-the-hype-the-secs-intensified-focus-on-ai-washing-practices.
  68. Kaufman, J. C., & Beghetto, R. A. (2009). Beyond Big and Little: the Four C Model of Creativity. Review of General Psychology, 13(1), 1–12. [CrossRef]
  69. Kersting, K. (2018). Machine learning and artificial intelligence: Two fellow travelers on the quest for intelligent behavior in machines. Frontiers in Big Data, 1. Frontiersin. [CrossRef]
  70. Kim, H.-S. (2020). Decision-Making in Artificial Intelligence: Is It Always Correct? Journal of Korean Medical Science, 35(1). [CrossRef]
  71. Koch, C. (2004). Consciousness: Essays from the edge of the visible. Oxford University Press.
  72. Kohlberg, L. (1984). The Psychology of Moral Development: The Nature and Validity of Moral Stages. San Francisco Harper & Row.
  73. Krzywanski, J., Sosnowski, M., Grabowska, K., Zylka, A., Lasek, L., & Kijo-Kleczkowska, A. (2024). Advanced Computational Methods for Modeling, Prediction and Optimization—A Review. Materials, 17(14), 3521–3521. [CrossRef]
  74. Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking Press.
  75. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338. [CrossRef]
  76. Latif, S., Ali, H. S., Usama, M., Rana, R., Schuller, B., & Qadir, J. (2022). AI-Based Emotion Recognition: Promise, Peril, and Prescriptions for Prosocial Path. ArXiv.org. https://arxiv.org/abs/2211.07290. [CrossRef]
  77. Legg, S. (2025). Definitions of Intelligence. Calculemus.org. https://calculemus.org/lect/08szt-intel/materialy/Definitions%20of%20Intelligence.html.
  78. Leslie, D. (2020). Understanding Bias in Facial Recognition Technologies. The Alan Turing Institute. [CrossRef]
  79. Li, B., Thomson, A. J., Nassif, H., Engelhard, M. M., & Page, D. (2023). On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models. ArXiv.org. https://arxiv.org/abs/2305.17583. [CrossRef]
  80. Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490.
  81. London, M., Sessa, V. I., & Shelley, L. A. (2022). Developing Self-Awareness: Learning Processes for Self- and Interpersonal Growth. Annual Review of Organizational Psychology and Organizational Behavior, 10(1), 261–288. Researchgate. [CrossRef]
  82. Lumenalta. (2024). AI’s limitations: What artificial intelligence can’t do understanding the limitations of AI lumenalta. Lumenalta. https://lumenalta.com/insights/ai-limitations-what-artificial-intelligence-can-t-do.
  83. Marcus, G. F., & Davis, E. (2019). Rebooting AI: building artificial intelligence we can trust. New York Pantheon Books.
  84. Marusarz, W. (2022). How much data does AI need? What to do when you have limited datasets? Nexocode. https://nexocode.com/blog/posts/ai-data-needs-for-training-and-data-augmentation-techniques/.
  85. Marwala, T. (2024). AI is not a high-precision technology, and this has profound implications for the world of work. United Nations University. https://unu.edu/article/ai-not-high-precision-technology-and-has-profound-implications-world-work.
  86. Miller, K. (2024). From Brain to Machine: The Unexpected Journey of Neural Networks. Stanford HAI; Stanford University. https://hai.stanford.edu/news/brain-machine-unexpected-journey-neural-networks.
  87. Miller, T., Durlik, I., Kostecka, E., Borkowski, P., & Łobodzińska, A. (2024). A Critical AI View on Autonomous Vehicle Navigation: The Growing Danger. Electronics, 13(18), 3660. [CrossRef]
  88. Mirror. (2023). Are machines truly conscious? Mirror.xyz. https://mirror.xyz/definn.eth/76dHu7yM9n8VDcWq26H6dyMwFAJWXl0LBfAxOfhg3ao?collectors=true.
  89. Mishra, S., & Tiwary, U. S. (2019). A Cognition-Affect Integrated Model of Emotion. ArXiv.org. https://arxiv.org/abs/1907.02557. [CrossRef]
  90. MIT Sloan. (2021). Machine learning, explained MIT Sloan. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained.
  91. Mogi, K. (2024). Artificial intelligence, human cognition, and conscious supremacy. Frontiers in Psychology, 15. [CrossRef]
  92. Mukherjee, A., & Chang, H. H. (2024). Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption. ArXiv.org. https://arxiv.org/abs/2403.09404. [CrossRef]
  93. MulticoreWare. (2024). Challenges and Advancements in Testing Autonomous Vehicles - MulticoreWare. MulticoreWare. https://multicorewareinc.com/challenges-and-advancements-in-testing-autonomous-vehicles/.
  94. Murphy, K. (2024, May 21). 6 Common Healthcare AI Mistakes. Prsglobal.com; PRS Global. https://prsglobal.com/blog/6-common-healthcare-ai-mistakes.
  95. Newell, B. R., & Bröder, A. (2008). Cognitive processes, models and metaphors in decision research. Judgment and Decision Making, 3(3), 195–204. [CrossRef]
  96. Nikolopoulou, K. (2023). What is anthropomorphism? definition & examples. Scribbr. https://www.scribbr.com/academic-writing/anthropomorphism/.
  97. Office of Minority Health. (2024). Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias. Office of Minority Health. https://minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias.
  98. Olider, A., Deroncele-Acosta, A., Luis, J., Barrasa, A., López-Granero, C., & Martí-González, M. (2024). Integrating artificial intelligence to assess emotions in learning environments: a systematic literature review. Frontiers in Psychology, 15. [CrossRef]
  99. Ong, D. C. (2021). An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence. ArXiv.org. https://arxiv.org/abs/2107.13734. [CrossRef]
  100. Oracle. (2020). What is machine learning? Oracle.com. https://www.oracle.com/ng/artificial-intelligence/machine-learning/what-is-machine-learning/.
  101. Ornes, S. (2022, September 12). How Transformers Seem to Mimic Parts of the Brain. Quanta Magazine. https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/.
  102. Pardo, M. (2022). Ethics at every stage of the AI lifecycle: Data preparation. Appen.com; Appen. https://www.appen.com/blog/ethical-data-for-the-ai-lifecycle-data-preparation.
  103. Pavlus, J. (2024, September 29). The Atlantic. https://www.theatlantic.com/technology/archive/2024/09/does-ai-understand-language/680056/.
  104. Prezenski, S., Brechmann, A., Wolff, S., & Russwinkel, N. (2017). A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making. Frontiers in Psychology, 8(1335). [CrossRef]
  105. Puebla, G., Martin, A. E., & Doumas, L. A. A. (2019). The relational processing limits of classic and contemporary neural network models of language processing. ArXiv.org. https://arxiv.org/abs/1905.05708. [CrossRef]
  106. Rausch, O., Ben-Nun, T., Dryden, N., Ivanov, A., Li, S., & Hoefler, T. (2021). A Data-Centric Optimization Framework for Machine Learning. ArXiv.org. https://arxiv.org/abs/2110.10802. [CrossRef]
  107. Razavian, N., Knoll, F., & Geras, K. J. (2020). Artificial Intelligence Explained for Nonexperts. Seminars in Musculoskeletal Radiology, 24(01), 003-011. [CrossRef]
  108. Rezwana, S., & Lownes, N. (2024). Interactions and Behaviors of Pedestrians with Autonomous Vehicles: A Synthesis. Future Transportation, 4(3), 722–745. [CrossRef]
  109. Ruhl, C. (2024). Theories Of Intelligence In Psychology. Simply Psychology. https://www.simplypsychology.org/intelligence.html.
  110. Rumley, K., Nguyen, J., & Neskovic, G. (2023). How speech recognition improves customer service in telecommunications. NVIDIA Technical Blog. https://developer.nvidia.com/blog/how-speech-recognition-improves-customer-service-in-telecommunications/.
  111. Runco, M. A., & Jaeger, G. J. (2012). The Standard Definition of Creativity. Creativity Research Journal, 24(1), 92–96. [CrossRef]
  112. Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
  113. Salovey, P., & Mayer, D. J. (1997). Emotional development and emotional intelligence: educational implications (pp. 3–31). Basic Books.
  114. SAP. (2024). What is AI bias? Causes, effects, and mitigation strategies. Sap.com. https://www.sap.com/resources/what-is-ai-bias.
  115. Sarker, I. H. (2021). Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Computer Science, 2(3), 1–21. Springer. [CrossRef]
  116. Schore, A. N. (2016). Affect regulation and the origin of the self: the neurobiology of emotional development. Psychology Press.
  117. Sestili, C. (2018). Deep learning: Going deeper toward meaningful patterns in complex data. SEI Blog. https://insights.sei.cmu.edu/blog/deep-learning-going-deeper-toward-meaningful-patterns-in-complex-data/.
  118. Seth, A. (2021). BEING YOU: a new science of consciousness. Dutton.
  119. Shanahan, M. (2024). Simulacra as Conscious Exotica. ArXiv.org. https://arxiv.org/abs/2402.12422. [CrossRef]
  120. Sharps, S. (2024). The Impact of AI on the Labour Market. Institute. Global; Tony Blair Institute. https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market.
  121. , & Wiggins, G. (2014). Personifying the pedagogical; Siemens, G., & Wiggins, G. (2014). Personifying the pedagogical: Exploring the intersection of personal, participatory, and proximal in personalised learning. Journal of Interactive Learning Research, 26(1), 3-14.
  122. Siemens. (2024). Revolutionizing industry with AI. Siemens.com Global Website. https://www.siemens.com/global/en/company/stories/digital-transformation/how-ai-revolutionizing-industry.html.
  123. Smith, D. (2023). Clinicians could be fooled by biased AI, despite explanations. Michigan Engineering News. https://news.engin.umich.edu/2023/12/clinicians-could-be-fooled-by-biased-ai-despite-explanations/.
  124. Sternberg, R. J. (1996). Successful intelligence. Cambridge University Press.
  125. Sternberg, R. J. (2021). Adaptive Intelligence: Its Nature and Implications for Education. Education Sciences, 11(12), 823. [CrossRef]
  126. Sternberg, R. J. (2025). Human intelligence. Encyclopaedia Britannica. https://www.britannica.com/science/human-intelligence-psychology.
  127. Stewart, E. (2024). Companies are luring investors by exaggerating what their AI can do. Business Insider. https://www.businessinsider.com/generative-ai-exaggeration-openai-nvidia-microsoft-chatgpt-jobs-investors-markets-2024-3.
  128. Stryker, C., & Kavlakoglu, E. (2024). What Is Artificial Intelligence (AI)? IBM. https://www.ibm.com/think/topics/artificial-intelligence.
  129. Su, J. (2024). Consciousness in artificial intelligence: A philosophical perspective through the lens of motivation and volition. Critical Debates in Humanities, Science and Global Justice, 3(1). https://criticaldebateshsgj.scholasticahq.com/api/v1/articles/117373-consciousness-in-artificial-intelligence-a-philosophical-perspective-through-the-lens-of-motivation-and-volition.pdf.
  130. Tai, M. C.-T. (2020). The Impact of Artificial Intelligence on Human Society and Bioethics. Tzu Chi Medical Journal, 32(4), 339–343. National Library of Medicine. [CrossRef]
  131. Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Little, Brown Spark.
  132. Ueda, D., Kakinuma, T., Fujita, S., Kamagata, K., Fushimi, Y., Ito, R., Matsui, Y., Nozaki, T., Nakaura, T., Fujima, N., Tatsugami, F., Yanagawa, M., Hirata, K., Yamada, A., Tsuboyama, T., Kawamura, M., Fujioka, T., & Naganawa, S. (2023). Fairness of Artificial Intelligence in healthcare: Review and Recommendations. Japanese Journal of Radiology, 42(1). [CrossRef]
  133. Wan, M. (2024, June 18). Consciousness, awareness, and the intellect of AI. Eficode.com; Eficode Oy. https://www.eficode.com/blog/consciousness-awareness-and-the-intellect-of-ai.
  134. Wang, X., Azhar, M. W., Trancoso, P., & Maleki, M. A. (2024). Moving Forward: A Review of Autonomous Driving Software and Hardware Systems. ArXiv.org. https://arxiv.org/html/2411.10291v1. [CrossRef]
  135. Wang, Y., & Chiew, V. (2010). On the cognitive process of human problem solving. Cognitive Systems Research, 11(1), 81–92. [CrossRef]
  136. Yadav, S. (2024). Science fiction as the blueprint: Informing policy in the age of AI and emerging tech. Orfonline.org. https://www.orfonline.org/research/science-fiction-as-the-blueprint-informing-policy-in-the-age-of-ai-and-emerging-tech.
  137. Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences of the United States of America, 121(14). [CrossRef]
  138. Yousef, M., & Allmer, J. (2023). Deep learning in bioinformatics. Turkish Journal of Biology, 47(6), 366–382. [CrossRef]
  139. Zeng, Y., Zhao, F., Zhao, Y., Zhao, D., Lu, E., Zhang, Q., Wang, Y., Feng, H., Zhao, Z., Wang, J., Kong, Q., Sun, Y., Li, Y., Shen, G., Han, B., Dong, Y., Pan, W., He, X., Bao, A., & Wang, J. (2024). Brain-inspired and Self-based Artificial Intelligence. ArXiv.org. https://arxiv.org/abs/2402.18784. [CrossRef]
  140. Zhou, D.-X. (2018). Universality of Deep Convolutional Neural Networks. ArXiv.org. https://arxiv.org/abs/1805.10769. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.