Submitted:
25 January 2025
Posted:
27 January 2025
Abstract
The global advancement of Artificial Intelligence (AI) in problem-solving, pattern recognition, and natural language processing is notable, yet AI continues to fall short of the essential qualities that constitute real intelligence. This paper analyses AI's limitations from philosophical and psychological viewpoints, combining them with ethical and technological constraints, to show that while these systems excel at large-scale data processing and complex tasks, they remain deficient in the key elements of human intelligence. The operational boundaries of AI systems confine them to statistical models and pattern recognition rather than genuine comprehension or independent decision-making. Without self-awareness, AI systems create uncertainty about who is responsible for their outputs and how moral judgements should be established. The findings of this paper advance our understanding of the distinction between human intelligence and artificial intelligence, and of how AI mechanisms affect human choices and autonomy.
Keywords:
Introduction
Objectives
- To critically analyse AI’s limitations in comparison to human intelligence.
- To challenge prevailing assumptions with evidence-based arguments.
Scope and Methodology
What is Intelligence?
Key Elements of Human Intelligence
AI’s Imitation of Intelligence
Machine Learning vs. True Understanding
Computational Models vs. Cognitive Models
The Fundamental Differences Between AI and Human Intelligence
Key Limitations of AI
- Lack of Consciousness and Understanding: A key limitation of AI is its reliance on syntactic processing rather than true semantic comprehension, a concept illustrated by John Searle's famous Chinese Room argument. According to Searle, an AI system may manipulate symbols and produce seemingly intelligent responses without actually understanding their meaning (Cole, 2004). AI processes input based on rules and patterns, responding in ways that align with statistical probabilities, but it does not grasp the underlying concepts or context. For example, a chatbot can respond to user queries fluently, yet it lacks genuine comprehension of the conversation's emotional or philosophical nuances (a toy illustration of such purely statistical text generation follows this list). A counterargument often raised is that advanced neural networks, particularly deep learning models, can approximate understanding by recognising complex patterns and correlations across vast datasets (Idrees, 2024; Yousef & Allmer, 2023; Zhou, 2018). Proponents argue that neural networks, such as transformer-based models, develop representations that mirror human-like cognitive processing (Ornes, 2022; Miller, 2024; Ito et al., 2022). However, critics maintain that these models still operate on probabilistic associations rather than genuine comprehension (Pavlus, 2024; Puebla et al., 2019; Li et al., 2023). Despite impressive advances in natural language processing and image recognition, AI lacks the conscious experience and subjective understanding inherent in human cognition (Mogi, 2024; Farisco et al., 2024; Albantakis & Tononi, 2021).
- Dependency on Data and Patterns: Another argument against AI's intelligence is its heavy dependency on training data and pattern recognition (Holzinger et al., 2023; Razavian et al., 2020; Data Ideology, 2024). AI models require extensive datasets to function effectively and struggle when faced with scenarios outside their training parameters. While humans adapt and generalise knowledge across different contexts, AI is constrained by its programmed scope (see the extrapolation sketch after this list). For instance, an AI model trained on historical data may fail to adapt to unprecedented situations, such as rapidly evolving economic trends or novel scientific discoveries. In the healthcare sector, AI diagnostic tools have occasionally misinterpreted medical conditions because of biases in the training data, leading to incorrect predictions and potential harm to patients (Ueda et al., 2023; Murphy, 2024; Office of Minority Health, 2024; Smith, 2023). Similarly, autonomous vehicles have encountered challenges in unpredictable environments, such as reacting to rare road conditions or unexpected pedestrian behaviour (Rezwana & Lownes, 2024; MulticoreWare, 2024; Akridata, 2024; Miller et al., 2024). These examples illustrate AI's inability to extrapolate beyond its learned experience, underscoring the distinction between data-driven decision-making and human cognitive flexibility.
- Absence of Common-Sense Reasoning: AI's struggle with intuitive decision-making further reinforces the argument that it lacks true intelligence (Global Navigator LLC, 2024; Finlay, 2024; Kim, 2020). Humans possess an innate ability to apply common-sense reasoning to everyday situations, drawing on life experience, cultural knowledge, and social awareness. This capability allows people to navigate ambiguous or uncertain scenarios with ease. In contrast, AI lacks the heuristic-based problem-solving approach that enables humans to make quick, informed decisions without explicit instruction (Felin & Holweg, 2024; Mukherjee & Chang, 2024; Gurney et al., 2023). For instance, an AI assistant might fail to recognise sarcasm or cultural nuance in conversation, leading to misinterpretation (a toy sarcasm example follows this list). In decision-making tasks, AI can optimise on available data but struggles to weigh abstract factors such as ethical dilemmas, emotional intelligence, and social context (Ong, 2021; Chang, 2023; Latif et al., 2022). Despite advances in machine learning, AI still has difficulty in areas requiring flexible, adaptive, and context-aware reasoning, leaving it ill-equipped for complex real-world challenges where human intuition is crucial.
- Ethical and Moral Shortcomings: Another fundamental limitation of AI is its inability to make value-based decisions without human input. Ethical and moral reasoning requires an understanding of abstract concepts such as fairness, empathy, and social responsibility, qualities that AI lacks (Afroogh et al., 2024; Jiang, 2024; Tai, 2020). While AI systems can be programmed to follow ethical guidelines, they do not possess the intrinsic ability to weigh competing values or to understand the human consequences of their decisions. Bias in AI algorithms further exacerbates these ethical concerns. Because AI models are trained on historical data, they often inherit and amplify existing biases, leading to discriminatory outcomes in areas such as hiring, law enforcement, and lending (a small simulation of this inheritance follows this list). For example, facial recognition systems have been shown to exhibit racial and gender biases, disproportionately misidentifying individuals from under-represented groups (Hardesty, 2018; Gentzel, 2021; Leslie, 2020). These biases raise significant questions about AI's role in decision-making processes and its potential to perpetuate societal inequalities.
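To make the "syntax without semantics" point in the first item concrete, here is a minimal sketch, assuming only a hypothetical toy corpus (Python is used purely for illustration): a bigram generator produces fluent-looking continuations from word co-occurrence counts alone, manipulating tokens it has no understanding of, much as Searle's room manipulates symbols.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus: the "training data" for a minimal bigram model.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Record, for every word, which words were observed to follow it.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def continue_text(word, length=6):
    """Continue a prompt purely from co-occurrence statistics."""
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample by observed frequency
    return " ".join(out)

print(continue_text("the"))  # fluent-looking output; zero comprehension
```

Real language models are vastly larger and subtler, but critics argue the epistemic situation is the same: the system selects statistically likely symbols, and meaning is supplied by the human reader.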
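The data-dependency point in the second item can likewise be seen in miniature. In the hypothetical sketch below (synthetic sine-wave data, not drawn from any cited study), a flexible model fitted on a narrow training range predicts well inside that range and fails badly outside it:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": a narrow slice (x in [0, 10]) of a noisy periodic process.
x_train = np.linspace(0, 10, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 50)

# A flexible model (degree-9 polynomial) fits the training range well.
coeffs = np.polyfit(x_train, y_train, 9)

print(np.polyval(coeffs, 5.0))   # in-distribution: close to sin(5) ≈ -0.96
print(np.polyval(coeffs, 25.0))  # out-of-distribution: astronomically wrong
```

A human recognises that a periodic process keeps oscillating; the fitted model can only echo the region it was shown.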
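The common-sense gap in the third item can be caricatured with a deliberately simple, purely lexical sentiment scorer (a hypothetical example; production systems are far more sophisticated, but the failure mode is analogous): sarcasm inverts the surface cues, so the scorer reads the utterance exactly backwards.

```python
# Hypothetical word lists for a purely lexical sentiment scorer.
POSITIVE = {"great", "love", "wonderful", "fantastic"}
NEGATIVE = {"bad", "hate", "awful", "terrible"}

def lexical_sentiment(text):
    """Score by counting positive vs. negative words; no context, no intent."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Any human reader hears a complaint; the scorer sees "great" and "love".
print(lexical_sentiment("Oh great, another three-hour delay. I just love waiting."))
# -> positive
```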
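Finally, the bias-inheritance mechanism in the fourth item can be simulated with synthetic data (hypothetical hire rates, illustration only): if historical labels encode a disparity between two equally skilled groups, the per-group rates below are exactly the pattern an accuracy-maximising model trained on those labels is rewarded for reproducing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two groups with identical skill distributions...
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)

# ...but historical decisions held group B to a higher bar (the bias).
hired = (skill > np.where(group == 0, 0.0, 0.8)).astype(int)

# A model fit to these labels inherits the disparity: the per-group hire
# rates are its "optimal" predictions, despite equal underlying skill.
for g, name in [(0, "A"), (1, "B")]:
    print(f"group {name}: historical (and learned) hire rate = "
          f"{hired[group == g].mean():.2f}")
```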
The Illusion of AI Intelligence
The Hype vs. Reality
Anthropomorphism in AI Perception
AI as an Augmentative Tool, Not an Autonomous Intelligence
Debunking Popular Myths About AI
Ethical and Societal Implications of Overestimating AI
- Implications for Employment: The impact of AI on employment is a widely debated concern, particularly the fear of large-scale job displacement. While the idea that AI can autonomously replace human workers is prevalent, the reality is more complex. AI often leads to job transformation rather than outright displacement (Frey & Osborne, 2013; Acemoglu & Restrepo, 2018). Routine and repetitive tasks can be automated, but this creates opportunities for new roles that require human oversight, creativity, and emotional intelligence. For example, in industries like manufacturing and administration, AI-driven automation can boost efficiency, allowing workers to focus on more complex and strategic tasks (Ford, 2015). However, without proper reskilling and upskilling initiatives, this transition could worsen unemployment and economic disparities (Autor, 2019).
- Bias and Discrimination Concerns: AI systems trained on biased datasets can perpetuate and even amplify societal inequalities. Biases, whether racial, gender-based, or socioeconomic, can manifest in AI algorithms, leading to discriminatory outcomes in critical areas such as hiring, law enforcement, and finance (Angwin et al., 2016; Obermeyer et al., 2019; Barocas & Selbst, 2016; Selbst & Barocas, 2018). For instance, facial recognition technologies have been shown to perform poorly on minority groups, resulting in wrongful identifications and perpetuating systemic discrimination (Buolamwini & Gebru, 2018); a per-group accuracy audit of the kind used to expose such gaps is sketched after this list.
- Accountability and Transparency Challenges: AI decision-making processes are often complex, making it challenging to establish accountability and ensure transparency. Many AI models operate as "black boxes," where even their developers may not fully understand how decisions are made (Lipton, 2016; Doshi-Velez & Kim, 2017). This lack of interpretability raises ethical concerns, especially in areas such as healthcare, finance, and criminal justice, where the consequences of AI decisions can be severe (Goodfellow, Bengio, & Courville, 2016; Russell & Norvig, 2021); a simple post-hoc probing technique is sketched after this list.
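As noted in the bias item above, disparities of the kind Buolamwini and Gebru (2018) reported are typically exposed by a per-group accuracy audit. The sketch below uses synthetic predictions with hypothetical error rates (illustration only, not their data) to show the form such an audit takes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic evaluation set: ground-truth labels plus a group attribute.
groups = np.array(["lighter", "darker"] * 500)
y_true = rng.integers(0, 2, 1000)

# Hypothetical model that errs three times as often on one group.
err_p = np.where(groups == "lighter", 0.05, 0.15)
flip = rng.random(1000) < err_p
y_pred = np.where(flip, 1 - y_true, y_true)

# The audit itself: accuracy disaggregated by group, not just overall.
for g in ("lighter", "darker"):
    mask = groups == g
    print(f"{g}: accuracy = {(y_pred[mask] == y_true[mask]).mean():.2%}")
```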
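And as noted in the accountability item, one common way to probe a black box without opening it is permutation importance: shuffle one input feature at a time and measure how much predictive performance drops. The sketch below uses a hypothetical stand-in for an opaque model's prediction function; such probes describe behaviour, not reasoning, so they mitigate rather than resolve the interpretability problem.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))

def black_box(X):
    """Hypothetical stand-in for an opaque model's predict function."""
    return (2 * X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

y = black_box(X)                       # reference predictions
base_acc = (black_box(X) == y).mean()  # 1.0 against itself, by construction

# Shuffle each feature in turn; a large accuracy drop means the model
# leans on that feature, even if we cannot see why.
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drop = base_acc - (black_box(Xp) == y).mean()
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")
```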
Future Directions and Recommendations
1. Ethical AI Development: A human-centric approach to AI governance is essential to align technological advancements with societal values.
2. Balancing Expectations with Reality: The media, policymakers, and educational institutions should collaborate to provide accurate, accessible information about AI, positioning it as a complement to human intelligence rather than an autonomous replacement.
3. Interdisciplinary Collaboration: Researchers in fields such as ethics, psychology, and sociology should work alongside AI developers to create systems that are both technically robust and socially responsible.
Conclusion
References
- Abdallaoui, S., Ikaouassen, H., Kribèche, A., Chaibet, A., & Aglzim, E. (2023). Advancing autonomous vehicle control systems: An in-depth overview of decision-making and manoeuvre execution state of the art. The Journal of Engineering, 2023(11). [CrossRef]
- Acemoglu, D., & Restrepo, P. (2018). Artificial intelligence, automation, and work. National Bureau of Economic Research.
- Afroogh, S., Akbari, A., Malone, E., Kargar, M., & Alambeigi, H. (2024). Trust in AI: progress, challenges, and future directions. Humanities and Social Sciences Communications, 11(1). [CrossRef]
- AI for Good. (2024a). How we can ensure that AI works for us. YouTube. https://youtu.be/H5xOof91Q5M.
- Akridata. (2024). How Edge Case Detection Enhances AI Safety in Autonomous Vehicles. Akridata. https://akridata.ai/blog/edge-case-detection-safer-ai-autonomous-vehicles/.
- Albantakis, L., & Tononi, G. (2021). What we are is more than what we do. ArXiv.org. https://arxiv.org/abs/2102.04219. [CrossRef]
- Autor, D. H. (2019). Work in the age of artificial intelligence. Science, 363(6430), 762-768.
- AWS. (n.d.). What are AI Agents? - Agents in Artificial Intelligence Explained - AWS. Amazon Web Services, Inc. https://aws.amazon.com/what-is/ai-agents/.
- Bhatti, S., & Robert, L. (2023). What Does It Mean to Anthropomorphise Robots? Food For Thought for HRI Research. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/175558/Bhatti%20and%20Robert%202023.pdf.
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
- Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
- Brainvire. (2025). How AI is revolutionizing the manufacturing industry for a smarter future. Brainvire. https://www.brainvire.com/blog/ai-led-solutions-for-manufacturing-industry/.
- Branch, W. T., & George, M. (2014). Reflection-Based Learning for Professional Ethical Formation. AMA Journal of Ethics, 19(4), 349–356. [CrossRef]
- Brookhouse, O. (2023). Can artificial intelligence understand emotions? Telefónica Tech. https://telefonicatech.com/en/blog/can-artificial-intelligence-understand-emotions.
- Brooks, A. C. (2024). Are You a Platonist or an Aristotelian? The Atlantic. https://www.theatlantic.com/ideas/archive/2024/10/aristotle-plato-philosophy-happiness/680339/.
- Brynjolfsson, E., & McAfee, A. (2017, July 18). The Business of Artificial Intelligence. Harvard Business Review. https://hbr.org/2017/07/the-business-of-artificial-intelligence.
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency.
- Carden, J., Jones, R. J., & Passmore, J. (2022). Defining self-awareness in the context of adult development: A systematic literature review. Journal of Management Education, 46(1). Sagepub. [CrossRef]
- Carroll, J. B. (1993). Human Cognitive Abilities. Cambridge University Press.
- Carroll, S. (2024). 280 François Chollet on Deep Learning and the Meaning of Intelligence. Preposterousuniverse.com. https://www.preposterousuniverse.com/podcast/2024/06/24/280-francois-chollet-on-deep-learning-and-the-meaning-of-intelligence/.
- Casaca, J. A., & Miguel, L. P. (2024). The influence of personalization on consumer satisfaction. Advances in Marketing, Customer Relationship Management, and E-Services Book Series, 256–292. [CrossRef]
- Cerruti, C. (2013). Building a functional multiple intelligences theory to advance educational neuroscience. Frontiers in Psychology, 4. [CrossRef]
- Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press. https://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/Chalmers_The_Conscious_Mind.pdf.
- Chang, E. Y. (2023). CoCoMo: Computational Consciousness Modeling for Generative and Ethical AI. ArXiv.org. https://arxiv.org/abs/2304.02438. [CrossRef]
- Clarke, A. M., & Sternberg, R. J. (1986). Beyond IQ: A Triarchic Theory of Human Intelligence. British Journal of Educational Studies, 34(2), 205. [CrossRef]
- Cocato, P. (2025). “The limit of AI lies in its inability to understand complex contexts or show empathy.” Telefónica. https://www.telefonica.com/en/communication-room/blog/limit-ai-lies-inability-understand-complex-contexts-show-empathy/.
- Coghlan, S. (2024). Anthropomorphizing Machines: Reality or Popular Myth? Minds and Machines, 34(3). [CrossRef]
- Cole, D. (2004, March 19). The Chinese Room Argument. Stanford.edu. https://plato.stanford.edu/entries/chinese-room/.
- Columbia Engineering. (2023). Artificial Intelligence (AI) vs. Machine Learning. CU-CAI. https://ai.engineering.columbia.edu/ai-vs-machine-learning/.
- Damasio, A. R. (1999). The Feeling of What happens: Body, Emotion and the Making of Consciousness. Vintage, Cop.
- Data Camp. (2023). What is symbolic AI? Datacamp.com; DataCamp. https://www.datacamp.com/blog/what-is-symbolic-ai.
- Data Ideology. (2024, April 24). Understanding AI and Data Dependency. Data Ideology. https://www.dataideology.com/understanding-ai-and-data-dependency/.
- Dhaduk, H. (2023). 6 Types of AI Agents: Exploring the Future of Intelligent Machines. Simform. https://www.simform.com/blog/types-of-ai-agents/.
- Digiprima. (2025). Types of AI Agents: From Simple to Complex Systems. Medium. https://medium.com/%40digiprima/types-of-ai-agents-from-simple-to-complex-systems-f7967840d298.
- Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
- Farisco, M., Evers, K., & Changeux, J.-P. (2024). Is artificial consciousness achievable? Lessons from the human brain. ArXiv.org. https://arxiv.org/abs/2405.04540. [CrossRef]
- Federal Trade Commission. (2023). Keep your AI claims in check. Federal Trade Commission. https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.
- Felin, T., & Holweg, M. (2024). Theory Is All You Need: AI, Human Cognition, and Decision Making. Social Science Research Network. [CrossRef]
- Finlay, V. (2024). Using AI for Decision-Making. The HOW Institute for Society. https://thehowinstitute.org/using-ai-for-decision-making/.
- Firstpost. (2024). How companies overhype the use of artificial intelligence | Vantage on Firstpost. YouTube. https://www.youtube.com/watch?v=2wp5Ksld5nQ.
- Ford, M. (2015). Rise of the robots: Technology and the threat of a jobless future. Basic Books.
- Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 70(4-5), 2242-2251.
- Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. Basic Books.
- Gentzel, M. (2021). Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy. Philosophy & Technology, 34(4). [CrossRef]
- Global Navigator LLC. (2024). Artificial Intuition: Can AI Truly Develop Human-Like Intuitive Decision Making? Medium. https://medium.com/%4013032765d/artificial-intuition-can-ai-truly-develop-human-like-intuitive-decision-making-b29ce8da93f5.
- Glover, E. (2022). Strong AI vs weak AI: What’s the difference. Builtin.com. https://builtin.com/artificial-intelligence/strong-ai-weak-ai.
- Goleman, D. (2020). Emotional intelligence: Why it can matter more than IQ. Bloomsbury. (Original work published 1995).
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. The MIT Press. https://www.deeplearningbook.org/.
- Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. [CrossRef]
- Gros, D., Li, Y., & Yu, Z. (2022). Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems. ArXiv.org. https://arxiv.org/abs/2210.12429. [CrossRef]
- Guingrich, R. E., & Graziano, M. (2023). Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines. ArXiv.org. https://arxiv.org/abs/2311.10599. [CrossRef]
- Gurney, N., Miller, J. H., & Pynadath, D. V. (2023). The Role of Heuristics and Biases during Complex Choices with an AI Teammate. Proceedings of the... AAAI Conference on Artificial Intelligence, 37(5), 5993–6001. [CrossRef]
- Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
- Hardesty, L. (2018). Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News Massachusetts Institute of Technology. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212.
- Henning, J. E. (2023). Beyond Action and Cognition: The Role of Awareness and Emotion in Experiential Learning. Journal of Philosophy of Education, 79(2).
- Hermann, I. (2021). Artificial intelligence in fiction: Between narratives and metaphors. AI & Society, 38. [CrossRef]
- Holzinger, A., Saranti, A., Angerschmid, A., Finzel, B., Schmid, U., & Mueller, H. (2023). Toward human-level concept learning: Pattern benchmarking for AI algorithms. 100788–100788. [CrossRef]
- Hurair, M., Ju, J., & Han, J. (2024). Environmental-Driven Approach towards Level 5 Self-Driving. Sensors, 24(2), 485–485. [CrossRef]
- IBM. (2021a). Machine learning. Ibm.com. https://www.ibm.com/think/topics/machine-learning.
- IBM. (2021b). Unsupervised learning. Ibm.com. https://www.ibm.com/think/topics/unsupervised-learning.
- Idrees, H. (2024). Shallow Learning vs. Deep Learning: Is Bigger Always Better? Medium. https://medium.com/%40hassaanidrees7/shallow-learning-vs-deep-learning-is-bigger-always-better-51c0bd21f059.
- Iordanov, G. (2024). Rethinking AI. Newman Springs.
- Ito, T., Yang, G. R., Laurent, P., Schultz, D. H., & Cole, M. W. (2022). Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior. Nature Communications, 13(1), 673. [CrossRef]
- Jaarsveld, S., & Lachmann, T. (2017). Intelligence and Creativity in Problem Solving: The Importance of Test Features in Cognition Research. Frontiers in Psychology, 8(134). [CrossRef]
- Jiang, Z. Z. (2024). Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions. Arxiv.org. https://arxiv.org/html/2412.20564v1. [CrossRef]
- Jinchang Wang. (2023). Self-Awareness, a Singularity of AI. Philosophy Study, 13(2). [CrossRef]
- Joyce, K., Balthazor, A., & Magee, J. (2024). Beyond the hype: The SEC’s intensified focus on AI washing practices. Hklaw.com. https://www.hklaw.com/en/insights/publications/2024/04/beyond-the-hype-the-secs-intensified-focus-on-ai-washing-practices.
- Kaufman, J. C., & Beghetto, R. A. (2009). Beyond Big and Little: the Four C Model of Creativity. Review of General Psychology, 13(1), 1–12. [CrossRef]
- Kersting, K. (2018). Machine learning and artificial intelligence: Two fellow travelers on the quest for intelligent behavior in machines. Frontiers in Big Data, 1. Frontiersin. [CrossRef]
- Kim, H.-S. (2020). Decision-Making in Artificial Intelligence: Is It Always Correct? Journal of Korean Medical Science, 35(1). [CrossRef]
- Koch, C. (2004). Consciousness: Essays from the edge of the visible. Oxford University Press.
- Kohlberg, L. (1984). The Psychology of Moral Development: The Nature and Validity of Moral Stages. San Francisco Harper & Row.
- Krzywanski, J., Sosnowski, M., Grabowska, K., Zylka, A., Lasek, L., & Kijo-Kleczkowska, A. (2024). Advanced Computational Methods for Modeling, Prediction and Optimization—A Review. Materials, 17(14), 3521–3521. [CrossRef]
- Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking Press.
- Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338. [CrossRef]
- Latif, S., Ali, H. S., Usama, M., Rana, R., Schuller, B., & Qadir, J. (2022). AI-Based Emotion Recognition: Promise, Peril, and Prescriptions for Prosocial Path. ArXiv.org. https://arxiv.org/abs/2211.07290. [CrossRef]
- Legg, S. (2025). Definitions of Intelligence. Calculemus.org. https://calculemus.org/lect/08szt-intel/materialy/Definitions%20of%20Intelligence.html.
- Leslie, D. (2020). Understanding Bias in Facial Recognition Technologies. The Alan Turing Institute. [CrossRef]
- Li, B., Thomson, A. J., Nassif, H., Engelhard, M. M., & Page, D. (2023). On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models. ArXiv.org. https://arxiv.org/abs/2305.17583. [CrossRef]
- Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490.
- London, M., Sessa, V. I., & Shelley, L. A. (2022). Developing Self-Awareness: Learning Processes for Self- and Interpersonal Growth. Annual Review of Organizational Psychology and Organizational Behavior, 10(1), 261–288. Researchgate. [CrossRef]
- Lumenalta. (2024). AI’s limitations: What artificial intelligence can’t do. Lumenalta. https://lumenalta.com/insights/ai-limitations-what-artificial-intelligence-can-t-do.
- Marcus, G. F., & Davis, E. (2019). Rebooting AI: building artificial intelligence we can trust. New York Pantheon Books.
- Marusarz, W. (2022). How much data does AI need? What to do when you have limited datasets? Nexocode. https://nexocode.com/blog/posts/ai-data-needs-for-training-and-data-augmentation-techniques/.
- Marwala, T. (2024). AI is not a high-precision technology, and this has profound implications for the world of work. United Nations University. https://unu.edu/article/ai-not-high-precision-technology-and-has-profound-implications-world-work.
- Miller, K. (2024). From Brain to Machine: The Unexpected Journey of Neural Networks. Stanford HAI; Stanford University. https://hai.stanford.edu/news/brain-machine-unexpected-journey-neural-networks.
- Miller, T., Durlik, I., Kostecka, E., Borkowski, P., & Łobodzińska, A. (2024). A Critical AI View on Autonomous Vehicle Navigation: The Growing Danger. Electronics, 13(18), 3660. [CrossRef]
- Mirror. (2023). Are machines truly conscious? Mirror.xyz. https://mirror.xyz/definn.eth/76dHu7yM9n8VDcWq26H6dyMwFAJWXl0LBfAxOfhg3ao?collectors=true.
- Mishra, S., & Tiwary, U. S. (2019). A Cognition-Affect Integrated Model of Emotion. ArXiv.org. https://arxiv.org/abs/1907.02557. [CrossRef]
- MIT Sloan. (2021). Machine learning, explained MIT Sloan. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained.
- Mogi, K. (2024). Artificial intelligence, human cognition, and conscious supremacy. Frontiers in Psychology, 15. [CrossRef]
- Mukherjee, A., & Chang, H. H. (2024). Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption. ArXiv.org. https://arxiv.org/abs/2403.09404. [CrossRef]
- MulticoreWare. (2024). Challenges and Advancements in Testing Autonomous Vehicles. MulticoreWare. https://multicorewareinc.com/challenges-and-advancements-in-testing-autonomous-vehicles/.
- Murphy, K. (2024, May 21). 6 Common Healthcare AI Mistakes. Prsglobal.com; PRS Global. https://prsglobal.com/blog/6-common-healthcare-ai-mistakes.
- Newell, B. R., & Bröder, A. (2008). Cognitive processes, models and metaphors in decision research. Judgment and Decision Making, 3(3), 195–204. [CrossRef]
- Nikolopoulou, K. (2023). What is anthropomorphism? definition & examples. Scribbr. https://www.scribbr.com/academic-writing/anthropomorphism/.
- Office of Minority Health. (2024). Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias. Office of Minority Health. https://minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias.
- Olider, A., Deroncele-Acosta, A., Luis, J., Barrasa, A., López-Granero, C., & Martí-González, M. (2024). Integrating artificial intelligence to assess emotions in learning environments: a systematic literature review. Frontiers in Psychology, 15. [CrossRef]
- Ong, D. C. (2021). An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence. ArXiv.org. https://arxiv.org/abs/2107.13734. [CrossRef]
- Oracle. (2020). What is machine learning? Oracle.com. https://www.oracle.com/ng/artificial-intelligence/machine-learning/what-is-machine-learning/.
- Ornes, S. (2022, September 12). How Transformers Seem to Mimic Parts of the Brain. Quanta Magazine. https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/.
- Pardo, M. (2022). Ethics at every stage of the AI lifecycle: Data preparation. Appen.com; Appen. https://www.appen.com/blog/ethical-data-for-the-ai-lifecycle-data-preparation.
- Pavlus, J. (2024, September 29). The Atlantic. https://www.theatlantic.com/technology/archive/2024/09/does-ai-understand-language/680056/.
- Prezenski, S., Brechmann, A., Wolff, S., & Russwinkel, N. (2017). A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making. Frontiers in Psychology, 8(1335). [CrossRef]
- Puebla, G., Martin, A. E., & Doumas, L. A. A. (2019). The relational processing limits of classic and contemporary neural network models of language processing. ArXiv.org. https://arxiv.org/abs/1905.05708. [CrossRef]
- Rausch, O., Ben-Nun, T., Dryden, N., Ivanov, A., Li, S., & Hoefler, T. (2021). A Data-Centric Optimization Framework for Machine Learning. ArXiv.org. https://arxiv.org/abs/2110.10802. [CrossRef]
- Razavian, N., Knoll, F., & Geras, K. J. (2020). Artificial Intelligence Explained for Nonexperts. Seminars in Musculoskeletal Radiology, 24(01), 003-011. [CrossRef]
- Rezwana, S., & Lownes, N. (2024). Interactions and Behaviors of Pedestrians with Autonomous Vehicles: A Synthesis. Future Transportation, 4(3), 722–745. [CrossRef]
- Ruhl, C. (2024). Theories Of Intelligence In Psychology. Simply Psychology. https://www.simplypsychology.org/intelligence.html.
- Rumley, K., Nguyen, J., & Neskovic, G. (2023). How speech recognition improves customer service in telecommunications. NVIDIA Technical Blog. https://developer.nvidia.com/blog/how-speech-recognition-improves-customer-service-in-telecommunications/.
- Runco, M. A., & Jaeger, G. J. (2012). The Standard Definition of Creativity. Creativity Research Journal, 24(1), 92–96. [CrossRef]
- Russell, S., & Norvig, P. (2021). Artificial intelligence: A Modern approach (4th ed.). Pearson.
- Salovey, P., & Mayer, J. D. (1997). Emotional development and emotional intelligence: educational implications (pp. 3–31). Basic Books.
- SAP. (2024). What is AI bias? Causes, effects, and mitigation strategies. Sap.com. https://www.sap.com/resources/what-is-ai-bias.
- Sarker, I. H. (2021). Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Computer Science, 2(3), 1–21. Springer. [CrossRef]
- Schore, A. N. (2016). Affect regulation and the origin of the self: the neurobiology of emotional development. Psychology Press.
- Sestili, C. (2018). Deep learning: Going deeper toward meaningful patterns in complex data. SEI Blog. https://insights.sei.cmu.edu/blog/deep-learning-going-deeper-toward-meaningful-patterns-in-complex-data/.
- Seth, A. (2021). BEING YOU: a new science of consciousness. Dutton.
- Shanahan, M. (2024). Simulacra as Conscious Exotica. ArXiv.org. https://arxiv.org/abs/2402.12422. [CrossRef]
- Sharps, S. (2024). The Impact of AI on the Labour Market. Tony Blair Institute for Global Change. https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market.
- Siemens, G., & Wiggins, G. (2014). Personifying the pedagogical: Exploring the intersection of personal, participatory, and proximal in personalised learning. Journal of Interactive Learning Research, 26(1), 3–14.
- Siemens. (2024). Revolutionizing industry with AI. Siemens.com Global Website. https://www.siemens.com/global/en/company/stories/digital-transformation/how-ai-revolutionizing-industry.html.
- Smith, D. (2023). Clinicians could be fooled by biased AI, despite explanations. Michigan Engineering News. https://news.engin.umich.edu/2023/12/clinicians-could-be-fooled-by-biased-ai-despite-explanations/.
- Sternberg, R. J. (1996). Successful intelligence. Cambridge University Press.
- Sternberg, R. J. (2021). Adaptive Intelligence: Its Nature and Implications for Education. Education Sciences, 11(12), 823. [CrossRef]
- Sternberg, R. J. (2025). Human intelligence. Encyclopaedia Britannica. https://www.britannica.com/science/human-intelligence-psychology.
- Stewart, E. (2024). Companies are luring investors by exaggerating what their AI can do. Business Insider. https://www.businessinsider.com/generative-ai-exaggeration-openai-nvidia-microsoft-chatgpt-jobs-investors-markets-2024-3.
- Stryker, C., & Kavlakoglu, E. (2024). What Is Artificial Intelligence (AI)? IBM. https://www.ibm.com/think/topics/artificial-intelligence.
- Su, J. (2024). Consciousness in artificial intelligence: A philosophical perspective through the lens of motivation and volition. Critical Debates in Humanities, Science and Global Justice, 3(1). https://criticaldebateshsgj.scholasticahq.com/api/v1/articles/117373-consciousness-in-artificial-intelligence-a-philosophical-perspective-through-the-lens-of-motivation-and-volition.pdf.
- Tai, M. C.-T. (2020). The Impact of Artificial Intelligence on Human Society and Bioethics. Tzu Chi Medical Journal, 32(4), 339–343. National Library of Medicine. [CrossRef]
- Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Little, Brown Spark.
- Ueda, D., Kakinuma, T., Fujita, S., Kamagata, K., Fushimi, Y., Ito, R., Matsui, Y., Nozaki, T., Nakaura, T., Fujima, N., Tatsugami, F., Yanagawa, M., Hirata, K., Yamada, A., Tsuboyama, T., Kawamura, M., Fujioka, T., & Naganawa, S. (2023). Fairness of Artificial Intelligence in healthcare: Review and Recommendations. Japanese Journal of Radiology, 42(1). [CrossRef]
- Wan, M. (2024, June 18). Consciousness, awareness, and the intellect of AI. Eficode.com; Eficode Oy. https://www.eficode.com/blog/consciousness-awareness-and-the-intellect-of-ai.
- Wang, X., Azhar, M. W., Trancoso, P., & Maleki, M. A. (2021). Moving Forward: A Review of Autonomous Driving Software and Hardware Systems. Arxiv.org. https://arxiv.org/html/2411.10291v1. [CrossRef]
- Wang, Y., & Chiew, V. (2010). On the cognitive process of human problem solving. Cognitive Systems Research, 11(1), 81–92. [CrossRef]
- Yadav, S. (2024). Science fiction as the blueprint: Informing policy in the age of AI and emerging tech. Orfonline.org. https://www.orfonline.org/research/science-fiction-as-the-blueprint-informing-policy-in-the-age-of-ai-and-emerging-tech.
- Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences of the United States of America, 121(14). [CrossRef]
- Yousef, M., & Allmer, J. (2023). Deep learning in bioinformatics. Turkish Journal of Biology, 47(6), 366–382. [CrossRef]
- Zeng, Y., Zhao, F., Zhao, Y., Zhao, D., Lu, E., Zhang, Q., Wang, Y., Feng, H., Zhao, Z., Wang, J., Kong, Q., Sun, Y., Li, Y., Shen, G., Han, B., Dong, Y., Pan, W., He, X., Bao, A., & Wang, J. (2024). Brain-inspired and Self-based Artificial Intelligence. ArXiv.org. https://arxiv.org/abs/2402.18784. [CrossRef]
- Zhou, D.-X. (2018). Universality of Deep Convolutional Neural Networks. ArXiv.org. https://arxiv.org/abs/1805.10769. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
