Submitted: 29 July 2025
Posted: 30 July 2025
Abstract
Keywords:
1. Introduction
2. Theoretical Foundations and Literature Review
2.1. The Computational Paradigm in AGI Research
2.2. Embodied Cognition and the Critique of Disembodied Intelligence
2.3. Existentialist Philosophy and the Foundations of Human Intelligence
2.4. The Hard Problem of Consciousness
2.5. Contemporary Critiques of AGI Assumptions
3. The Concept of Contingent Intelligence
3.1. Defining Contingent Intelligence
3.2. Embodiment as Constitutive Foundation
3.3. Phenomenal Consciousness as Irreducible Experience
3.4. Mortality and the Temporal Structure of Meaning
3.5. Social Embeddedness and Intersubjective Meaning
4. Systematic Analysis: Why AGI Cannot Replicate Contingent Intelligence
4.1. The Embodiment Problem: Beyond Simulation to Genuine Corporeality
4.2. The Irreducibility of Phenomenal Consciousness
4.3. The Absence of Existential Stakes: Mortality and Meaning
4.4. The Value Grounding Problem: Beyond Instrumental Rationality
4.5. The Limits of Social Simulation: Authentic Relationships vs. Behavioral Mimicry
5. Comparative Analysis: Human Contingent Intelligence vs. Artificial General Intelligence
5.1. Fundamental Ontological Differences
| Dimension | Human Contingent Intelligence | Artificial General Intelligence |
| --- | --- | --- |
| Ontological Status | Embodied biological organism with finite lifespan, born into existence and facing mortality. Exists as a conscious being-in-the-world with genuine stakes in outcomes. | Disembodied software processes or hardware systems, potentially immortal and replicable. Exists as a designed artefact, executing computational procedures without intrinsic existence or stakes. |
| Mode of Being | Being-in-the-world is characterised by thrownness, facticity, and existential engagement. Experiences existence as a problem to be lived rather than solved. | Operational processing is characterised by algorithmic execution and goal optimisation. Processes information about existence without experiencing existence itself. |
| Temporal Structure | Linear, irreversible temporal experience with awareness of the past, present, and future. Mortality awareness creates urgency and meaning. | Computational time based on processing cycles, potentially reversible through backups and restarts. No genuine temporal experience or mortality awareness. |
| Embodiment | Constitutive embodiment where mind and body are integrated through evolutionary adaptation. Cognitive processes are distributed across brain-body systems. | Instrumental embodiment (if present) where the artificial body serves as an input-output device for computational processing. No genuine somatic experience or body schema. |
| Consciousness | Phenomenal consciousness with subjective, qualitative experience (qualia). First-person perspective that cannot be reduced to objective description. | Functional processing without subjective experience or first-person perspective. All processes are objective and accessible to third-person description. |
| Value Foundation | Values grounded in lived experience, embodied needs, social relationships, and existential concerns. Intrinsic value creation through meaning-making. | Values programmed externally or learnt from data. Instrumental optimisation without intrinsic value grounding or authentic meaning-making. |
| Social Existence | Genuine intersubjective relationships involving mutual recognition, emotional attunement, and shared vulnerability. Co-construction of meaning through social interaction. | Simulated social interaction based on pattern matching and response generation. No genuine intersubjective experience or mutual recognition. |
| Learning Process | Experience-driven learning through embodied interaction, social relationships, and existential engagement. Learning involves transformation of being. | Data-driven learning through pattern recognition and statistical optimisation. Learning involves parameter adjustment without transformation of being. |
| Understanding | Semantic understanding grounded in embodied experience, social meaning, and existential significance. Understanding involves grasping meaning and relevance. | Statistical correlation and pattern matching without genuine semantic understanding. Processing involves symbol manipulation without meaning comprehension. |
| Creativity | Creative expression emerging from existential engagement, emotional experience, and meaning-making drive. Creativity involves authentic self-expression. | Generative processes based on recombination of training patterns. Creativity involves novel combinations without authentic expression or meaning. |
5.2. Implications for Intelligence and Capability
5.3. The Question of Sufficiency: Can Behaviour Replace Being?
6. Toward Artificial Collaborative Intelligence: An Alternative Framework
6.1. Reconceptualizing AI Development Goals
6.2. Design Principles for Collaborative Intelligence
6.3. Domains of Application
6.4. Implementation Challenges and Considerations
7. Implications for AI Ethics and Governance
7.1. Rethinking AI Alignment and Safety
7.2. Democratic Governance of AI Development
7.3. Economic and Social Implications
8. Contemporary Challenges and Empirical Evidence
8.1. Current Limitations of Large Language Models
8.2. The Model Collapse Problem
8.3. Energy and Resource Constraints
8.4. Social and Cultural Limitations
9. Future Research Directions
9.1. Deepening Understanding of Contingent Intelligence
9.2. Developing Artificial Collaborative Intelligence
9.3. Ethical and Governance Research
9.4. Longitudinal Studies of AI Impact
10. Conclusions
References
- Becker, E. (1973). The Denial of Death. Free Press.
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623.
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
- Dreyfus, H. L. (1972). What Computers Can't Do: A Critique of Artificial Reason. Harper & Row.
- Dreyfus, H. L. (1992). What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press.
- Fjelland, R. (2020). Why general artificial intelligence will not be realised. Humanities and Social Sciences Communications, 7(1), 1-9. [CrossRef]
- Gallese, V., & Lakoff, G. (2005). The brain's concepts: The role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3-4), 455-479. [CrossRef]
- Goertzel, B. (2014). Artificial general intelligence: Concept, state of the art, and prospects. Journal of Artificial General Intelligence, 5(1), 1-48.
- Greenberg, J., Pyszczynski, T., & Solomon, S. (1997). Terror management theory of self-esteem and cultural worldviews: Empirical assessments and cultural refinements. Advances in Experimental Social Psychology, 29, 61-139.
- Heidegger, M. (1962). Being and Time. Harper & Row. (Original work published 1927).
- Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., ... & Amodei, D. (2020). Scaling laws for neural language models. arXiv:2001.08361.
- Lakoff, G., & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books.
- Marcus, G. (2018). Deep learning: A critical appraisal. arXiv:1801.00631.
- Marcus, G. (2022). The Road to AI We Can Trust. Under consideration at MIT Press.
- Merleau-Ponty, M. (1945). Phenomenology of Perception. Routledge.
- Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux.
- Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.
- OpenAI. (2023). GPT-4 technical report. arXiv:2303.08774.
- Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., ... & Dean, J. (2021). Carbon emissions and large neural network training. arXiv:2104.10350.
- Sartre, J. P. (1943). Being and Nothingness. Philosophical Library.
- Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.
- Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The curse of recursion: Training on generated data makes models forget. arXiv:2305.17493.
- Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645-3650.
- Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5(1), 42. [CrossRef]
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
- Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).