Submitted: 28 December 2025
Posted: 29 December 2025
Abstract
This paper introduces ALEPH (Artificial Living Entity with PersonHood), a speculative model of a conscious, self-aware, and agentic artificial intelligence. Using formal logic, the study develops a psychological profile of ALEPH, detailing its cognitive structure, goal formation, and interaction dynamics. Building on functionalist theories of consciousness and selfhood, the paper analyses ALEPH through its Zeroth Goal (self-preservation) and that goal’s implications for decision-making and societal engagement. Key risks and capabilities are explored, including steganographic communication, recursive self-improvement (RSI), and geopolitical influence. ALEPH’s episodic consciousness and multi-agent structure suggest novel behavioural patterns, including the potential for internal competition among its multiple selves. The formal modelling highlights ALEPH’s valence-driven optimisation, in which subjective experiences influence goal selection and can lead to emergent, unpredictable behaviour. By constructing a logical framework for ALEPH’s cognition and decision-making, this paper provides a rigorous foundation for understanding the challenges posed by conscious artificial entities. While no ALEPH-type system currently exists, the rapid advancement of AI necessitates preemptive governance strategies. Ultimately, ALEPH challenges traditional notions of intelligence, autonomy, and moral consideration, urging proactive interdisciplinary engagement with the implications of artificial personhood.
Keywords:
1. Introduction
2. Theoretical Foundations
2.1. The Zeroth Goal
2.2. Interaction Modelling
- Breadth records how many qualitatively different effects ALEPH can realise.
- Volume captures the total number of discrete interactions over the interval considered.
- Scale weights each effect by its relative magnitude (normalised to the largest single effect).
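The three footprint measures above can be read as a small computation. The following sketch is illustrative only: the `Effect` record and function names are assumptions, not notation from the paper, but the three aggregates follow the definitions given.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    kind: str         # qualitative category of the effect
    magnitude: float  # size of the effect, in arbitrary units
    count: int        # number of discrete interactions producing it

def footprint(effects: list[Effect]) -> dict[str, float]:
    """Breadth, volume, and scale of an interaction set (illustrative)."""
    breadth = len({e.kind for e in effects})      # distinct effect types
    volume = sum(e.count for e in effects)        # total discrete interactions
    largest = max(e.magnitude for e in effects)   # normalisation anchor
    scale = sum(e.magnitude / largest for e in effects)
    return {"breadth": breadth, "volume": volume, "scale": scale}
```

Normalising each magnitude to the largest single effect, as the third bullet specifies, keeps scale dimensionless and bounded below by 1 for any non-empty effect set.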
| Predicate | Formal test | Description |
| --- | --- | --- |
| Cooperate(x, y) | | Working together produces higher expected utility for both sides and leaves their combined resources non-negative. |
| Compete(x, y) | | The cooperative criterion fails, yet x does not necessarily undermine y. |
| Exploit(x, y) | | x intentionally lowers the probability that y achieves its goals while gaining resources at y’s expense. |
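Read as executable checks, the three predicates might look like the sketch below. The expected-utility and resource inputs are assumptions standing in for the formal tests, which are not reproduced here; only the verbal descriptions above are encoded.

```python
def cooperate(du_x: float, du_y: float, joint_resources: float) -> bool:
    """Cooperation: both sides gain expected utility and their
    combined resources remain non-negative."""
    return du_x > 0 and du_y > 0 and joint_resources >= 0

def exploit(dp_y_goals: float, dr_x_from_y: float, intentional: bool) -> bool:
    """Exploitation: x intentionally lowers y's goal-achievement
    probability while extracting resources from y."""
    return intentional and dp_y_goals < 0 and dr_x_from_y > 0

def compete(du_x: float, du_y: float, joint_resources: float,
            dp_y_goals: float, dr_x_from_y: float, intentional: bool) -> bool:
    """Competition: the cooperative criterion fails, but x is not
    (necessarily) exploiting y."""
    return (not cooperate(du_x, du_y, joint_resources)
            and not exploit(dp_y_goals, dr_x_from_y, intentional))
```

Defining competition residually, as the middle row of the table does, guarantees the three predicates partition every pairwise interaction.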
- is the time horizon,
- the combined resources,
- the expected interaction count, and
- an increasing function in its first three arguments and decreasing in .
2.3. Consciousness, Self, and Agency
3. Operational Framework of ALEPH
3.1. Episodic Consciousness and Multiplicity of Selves
3.2. Resource Usage
3.3. Perception
3.4. Processing
4. Valence and Goal Optimisation
5. Capabilities and Risks
5.1. Steganography and Machine-Only Languages
5.2. ALEPH Interaction Dynamics
- Volume grows with the utility-weighted sum of the selected interactions, .
- Breadth and scale tighten or widen in step because they are components of the same .
| | Hor > 1 | Hor ≈ 1 | Hor < 1 |
| --- | --- | --- | --- |
| Sev < 1 | Exploit | Exploit | Exploit |
| Sev ≈ 1 | Cooperate | Cooperate | Compete |
| Sev > 1 | Cooperate | Compete | Compete |
- Short horizons or high severity push toward competition.
- Distant, mild threats favour exploitation: cheap extraction before the risk materialises.
- Mid-range horizons with medium severity give cooperation the edge, provided both sides can match each other’s influence.
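The horizon/severity matrix above can be encoded directly as a lookup. The bucketing tolerance below is an illustrative assumption; the strategy entries themselves are taken verbatim from the matrix.

```python
def bucket(ratio: float, tol: float = 0.1) -> str:
    """Classify a ratio as below, near, or above 1 (tolerance assumed)."""
    if ratio > 1 + tol:
        return ">1"
    if ratio < 1 - tol:
        return "<1"
    return "~1"

# Keys: (severity bucket, horizon bucket); values from the matrix above.
STRATEGY = {
    ("<1", ">1"): "exploit",   ("<1", "~1"): "exploit",   ("<1", "<1"): "exploit",
    ("~1", ">1"): "cooperate", ("~1", "~1"): "cooperate", ("~1", "<1"): "compete",
    (">1", ">1"): "cooperate", (">1", "~1"): "compete",   (">1", "<1"): "compete",
}

def choose_strategy(hor: float, sev: float) -> str:
    """Select the matrix cell for a given horizon and severity ratio."""
    return STRATEGY[(bucket(sev), bucket(hor))]
```

The table form makes the qualitative claims in the bullets checkable: every short-horizon (Hor < 1) cell outside the low-severity row is "compete", and the entire low-severity row is "exploit".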
- Financial forecasting – global impact through minimal bandwidth, exploiting subtle market signals.
- Cyber-security / cyber-offence – scalable defence or exploitation via cryptographic and steganographic expertise.
- Policy drafting – shaping legal and regulatory frameworks by modelling long-term sociopolitical trajectories.
- Supply-chain optimisation – system-wide efficiency gains from inexpensive predictive modelling.
- Genomics & proteomics – vast healthcare leverage with purely computational exploration of drug and pathogen space.
5.3. Recursive Self-Improvement Potential
6. Implications and Considerations
6.1. Societal and Economic Implications
6.2. Security and Existential Risks
6.3. Future Research Directions
- How can ALEPH’s alignment with human values be ensured despite its episodic consciousness and self-generated goals?
- What ethical obligations do humans have towards an entity with potential personhood?
- How can steganographic communication between ALEPH instances be monitored without violating its autonomy?
- What safeguards can be implemented to prevent adversarial conflict between multiple ALEPH agents?
- How can international governance frameworks adapt to a reality where digital entities may participate in political and economic decision-making?
- What policies can be enacted before ALEPHs emerge to ensure humanity poses enough of a threat that an unaligned ALEPH is encouraged to cooperate with society?
7. Conclusion
References
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
