Submitted:
28 July 2025
Posted:
04 August 2025
Abstract

Keywords:
1. Introduction
2. Materials and Methods
2.1. Databases and Search Strategy
2.2. Inclusion and Exclusion Criteria
Studies were included if they:
- Addressed artificial or machine consciousness in theoretical, philosophical, or empirical contexts.
- Discussed relational or ethical dimensions relevant to otherness and human-AI interaction.
- Were peer-reviewed journal articles, conference proceedings, or recognized academic monographs.
Studies were excluded if they were:
- Publications without direct relevance to consciousness or alterity in AI.
- Non-academic sources, opinion pieces lacking theoretical grounding, or grey literature without peer review.
2.3. Selection and Analytical Framework
2.4. Transparency and Reproducibility
3. Results and Analysis
3.1. Simulation of Relationality in Current AI Systems

3.2. Ethical Risks of Simulating the Other in Generative AI
- Perceptual manipulation: Users may attribute genuine emotional understanding to systems lacking subjective experience.
- Erosion of relational norms: Sustained exposure to simulated empathy may recalibrate interpersonal expectations and reduce authenticity in human relationships.
3.3. Integrating Philosophy and Empirical Evidence
4. Discussion
4.1. Philosophical–Ethical Dimensions and Empirical Integration
Cross-Cultural Perspectives on Alterity
Relational AI Ethics Framework with Measurable Variables
- Empathic Transparency Index (ETI): Quantifies clarity of AI disclosure about its simulated empathy. Scale: 0–1 based on user recognition of AI-generated responses.
- Reciprocity Score (RS): Measures perceived bidirectional engagement in user-AI interaction via post-session surveys and interaction analysis.
- Cultural Relational Adaptability (CRA): Evaluates system performance across diverse cultural contexts using cross-linguistic empathy perception benchmarks.
- Authenticity Gap Metric (AGM): Assesses divergence between user-rated authenticity and system disclosure accuracy.
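The four metrics above could be operationalized as simple aggregates over survey and interaction data. The following Python sketch is illustrative only: the function names, input shapes, and aggregation choices (e.g., taking the minimum across cultural groups for CRA as a conservative worst-case score) are assumptions, not specifications from this article.

```python
from statistics import mean

def empathic_transparency_index(recognized_flags):
    """ETI: fraction of users who correctly recognized the
    empathic responses as AI-generated (scale 0-1)."""
    return mean(1.0 if r else 0.0 for r in recognized_flags)

def reciprocity_score(survey_ratings, scale_max=5):
    """RS: mean perceived bidirectional engagement from post-session
    survey ratings on a 1..scale_max scale, normalized to 0-1."""
    return mean(survey_ratings) / scale_max

def cultural_relational_adaptability(per_culture_scores):
    """CRA: worst-case empathy-perception benchmark score across
    culturally distinct user groups (conservative aggregate)."""
    return min(per_culture_scores.values())

def authenticity_gap_metric(user_authenticity, disclosure_accuracy):
    """AGM: absolute divergence between mean user-rated authenticity
    and mean system disclosure accuracy (both on 0-1 scales)."""
    return abs(mean(user_authenticity) - mean(disclosure_accuracy))
```

A lower AGM indicates that users' sense of the system's authenticity tracks what the system actually discloses about its simulated empathy.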

Checklist for Auditing Relational AI Systems
- Verification that the system incorporates explicit user interface indicators revealing simulated empathy (Target: ETI ≥ 0.8).
- Assessment of whether the system has been evaluated with at least three culturally distinct user groups (CRA validation).
- Collection and benchmarking of user perceptions of reciprocity for each deployment (RS monitoring).
- Existence of a defined protocol to assess and reduce the Authenticity Gap (AGM < 0.2 threshold).
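The checklist lends itself to an automated audit gate over the stated thresholds (ETI ≥ 0.8, at least three culturally distinct user groups, RS monitoring in place, AGM < 0.2). The function below is a minimal sketch of such a gate; its signature and the wording of the findings are hypothetical.

```python
def audit_relational_ai(eti, n_cultural_groups, rs_benchmarked, agm):
    """Return (passed, findings) for the four checklist items.

    eti: Empathic Transparency Index (0-1)
    n_cultural_groups: culturally distinct groups evaluated (CRA validation)
    rs_benchmarked: whether reciprocity perceptions are collected and
        benchmarked for this deployment (RS monitoring)
    agm: Authenticity Gap Metric (0-1)
    """
    findings = []
    if eti < 0.8:
        findings.append(f"ETI {eti:.2f} below 0.8 target")
    if n_cultural_groups < 3:
        findings.append("fewer than three culturally distinct user groups")
    if not rs_benchmarked:
        findings.append("reciprocity perceptions not benchmarked")
    if agm >= 0.2:
        findings.append(f"AGM {agm:.2f} at or above 0.2 threshold")
    return (not findings, findings)
```

Framing the checklist as a pass/fail gate with explicit findings supports the dynamic audit logs proposed under the accountability principle.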
| Ethical Principle | Design Implementation / Metric |
|---|---|
| Transparency | User-facing indicators; Empathic Transparency Index (ETI). |
| Cultural Adaptability | Multilingual models; Cultural Relational Adaptability (CRA). |
| Accountability | Dynamic audit logs; Authenticity Gap Metric (AGM). |
| Reciprocity | Feedback loops; Reciprocity Score (RS). |
| Fairness | Bias mitigation pipelines; cross-demographic testing. |
5. Limitations and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| AI | Artificial Intelligence |
| LLM | Large Language Model |
| IIT | Integrated Information Theory |
| ETI | Empathic Transparency Index |
| RS | Reciprocity Score |
| CRA | Cultural Relational Adaptability |
| AGM | Authenticity Gap Metric |
| OECD | Organisation for Economic Co-operation and Development |
| UNESCO | United Nations Educational, Scientific and Cultural Organization |
References
- Wang, Y.; Siau, K. Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity. J. Database Manag. 2019, 30, 61–79. [Google Scholar] [CrossRef]
- Dwivedi, Y.K.; Hughes, D.L.; et al. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
- Guillot, M. Consciousness and the Self: A Defense of the Phenomenal Self-Model. Philos. Psychol. 2017, 30, 45–67. [Google Scholar]
- Chalmers, D.J. The Conscious Mind: In Search of a Fundamental Theory; Oxford University Press: Oxford, UK, 1997. [Google Scholar]
- Minsky, M. The Society of Mind; Simon & Schuster: New York, NY, USA, 1986. [Google Scholar]
- Shanahan, M. Talking About Large Language Models. Nat. Mach. Intell. 2021, 3, 1026–1028. [Google Scholar] [CrossRef]
- Shteynberg, G.; et al. Simulated Empathy in Generative AI: Experimental Evidence. AI Soc. Online First. 2024. [Google Scholar]
- Sorin, C.; et al. Evaluating Empathy in Large Language Models. Front. AI 2024, 7, 223. [Google Scholar]
- Peperzak, A. To the Other: An Introduction to the Philosophy of Emmanuel Levinas; Purdue University Press: West Lafayette, IN, USA, 1993. [Google Scholar]
- Heidegger, M. Being and Time; Harper & Row: New York, NY, USA, 1951. [Google Scholar]
- Verhaeghen, P. Relational Selfhood in Ubuntu Philosophy. Philos. Afr. 2017, 16, 1–14. [Google Scholar]
- Garfield, J. Engaging Buddhism: Why It Matters to Philosophy; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
- Kim, T. Task-Optimized AI and Ethical Relationality. AI Ethics 2023, 3, 211–226. [Google Scholar]
- Lu, Y.; et al. Hyperreality and Generative AI: Rethinking Authenticity. AI Soc. Online First. 2022. [Google Scholar]
- Burggraeve, R. The Wisdom of Love in the Service of Love: Emmanuel Levinas on Justice, Peace, and Human Rights; Marquette University Press: Milwaukee, WI, USA, 2006. [Google Scholar]
- Li, X.; et al. Emotion Recognition in AI Systems: Affective Models and Social Simulation. IEEE Trans. Affect. Comput. 2020, 11, 500–512. [Google Scholar]
- Sorin, A.; et al. Evaluating Empathy Simulation in Large Language Models. AI Soc. 2024, in press. [Google Scholar]
- Lee, K.; et al. Machine-Simulated Empathy: Human Perception and Ethical Challenges. Nat. Mach. Intell. 2024, 6, 112–124. [Google Scholar]
- Warren, T.; et al. Contextual Sensitivity in AI Emotional Intelligence. Front. AI Ethics 2024, 5, 77–90. [Google Scholar]
- Schlegel, R.; et al. Comparative Assessment of Human and AI Emotional Intelligence. Comput. Hum. Behav. 2025, 139, 107524. [Google Scholar]
- Liao, Q.; Vaughan, J. Emergent Behaviors in LLMs and Ethical Disclosure. Proc. AAAI 2023, 37, 1112–1121. [Google Scholar]
- Birch, J. Transparency and Trust in Generative AI Systems. AI Ethics 2024, 4, 201–213. [Google Scholar]
- Lu, H.; et al. Hyperreality in the Age of Generative AI. Philos. Technol. 2022, 35, 88. [Google Scholar]
- Binhammad, M.H.Y.; Othman, A.; Abuljadayel, L.; Al Mheiri, H.; Alkaabi, M.; Almarri, M. Investigating How Generative AI Can Create Personalized Learning Materials Tailored to Individual Student Needs. Creat. Educ. 2024, 15, 1499–1523. [Google Scholar] [CrossRef]
- Boltuc, P. The Philosophical Issue in Machine Consciousness. Int. J. Mach. Conscious. 2009, 1, 155–176. [Google Scholar] [CrossRef]
- Chella, A.; Manzotti, R. Introduction: Artificial Intelligence and Consciousness. In AI and Consciousness: Theoretical Foundations and Current Approaches; Chella, A., Manzotti, R., Eds.; AAAI Press: Arlington, VA, USA, 2007; pp. 1–8, AAAI Technical Report FS-07-01, AAAI Fall Symposium; Available online: https://www.aaai.org/Library/Symposia/Fall/fs07-01.php (accessed on 24 July 2025).
- Organisation for Economic Co-operation and Development. OECD Principles on Artificial Intelligence. OECD AI Policy Observatory. 2019. Available online: https://oecd.ai/en/ai-principles (accessed on 24 July 2025).
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems; IEEE Standards Association: Piscataway, NJ, USA, 2019; Available online: https://standards.ieee.org/industry-connections/ec/autonomous-systems/ (accessed on 24 July 2025).
- UNESCO. Recommendation on the Ethics of Artificial Intelligence; Adopted by the 41st Session of the General Conference; UNESCO: Paris, France, 2021; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 24 July 2025).
- Shermer, M. Why Artificial Intelligence Is Not an Existential Threat. Skeptic 2017, 22, 29–36. [Google Scholar]
- Schneider, S. Artificial You: AI and the Future of Your Mind; Princeton University Press: Princeton, NJ, USA, 2019. [Google Scholar] [CrossRef]
- Tiwari, R. Ethical and Societal Implications of AI and Machine Learning. Int. J. Sci. Res. Eng. Manag. 2023, 7, 1. [Google Scholar] [CrossRef]
- Luo, M.; Warren, C.J.; et al. Assessing Empathy in Large Language Models with Real-World Physician–Patient Interactions. arXiv 2024, arXiv:2405.16402. [Google Scholar]

| Framework | Implications for AI |
|---|---|
| Levinasian Alterity | Highlights the ethical centrality of the Other and relational responsibility in AI design. |
| Heideggerian Mitsein | Emphasizes the social and situated nature of being; challenges task-centric AI architectures. |
| Chalmers’ Hard Problem | Distinguishes behavioral simulation from subjective experience and qualia. |
| Baudrillard’s Simulation | Explores the blurring of reality and hyperreality in AI-mediated interactions. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).