Preprint
Article

This version is not peer-reviewed.

Blurring the Boundaries: Exploring the Classification of Artificial Life in Robotics and AI

Submitted:

29 October 2024

Posted:

31 October 2024


Abstract
The convergence of artificial intelligence, robotics, and gaming has sparked critical discussions about the nature of life and the potential for artificial systems to replicate biological traits. This paper examines the defining characteristics of life—such as growth, reproduction, regulation, and sensitivity—and applies these criteria to AI-driven game entities and autonomous robots. By reviewing advancements in AI and robotics from 2015 to 2023, and grounding the analysis in biological theories, this study explores whether these artificial systems can be considered "alive" or if they are merely sophisticated simulations. The findings suggest that while artificial systems can mimic life-like behaviours, they lack essential biological traits, such as metabolism and autonomous reproduction. However, human tendencies to anthropomorphise these systems raise ethical and philosophical questions about the boundaries of life and the need for new frameworks to address the evolving role of artificial intelligence and robotics. This paper concludes by proposing directions for future research on the ethical, social, and technical implications of artificial life.

1. Introduction

As technology continues to advance, the line between organic life and artificial systems becomes increasingly blurred. In fields such as gaming and robotics, the concept of artificial life is no longer confined to science fiction. Games like Spore and The Sims simulate life forms that grow, adapt, and interact within virtual environments, offering users an experience that mimics biological processes. Meanwhile, modern robotics is making strides toward creating autonomous systems that make decisions and adapt to their surroundings in real time, further imitating characteristics associated with living organisms.
These developments bring forth fundamental questions: can artificially created entities exhibit traits of biological life, such as growth, reproduction, and regulation? What distinguishes these systems from true life forms, and at what point might artificial systems cross into the realm of what we consider "alive"?
This paper explores these questions by applying biological criteria—such as growth, reproduction, sensitivity, and homeostasis—to both video game entities and autonomous robots. The research also reviews the latest advancements in AI-driven systems, investigating whether these entities merely simulate life or are evolving toward something more.
Through a comprehensive review of academic literature and industry reports, this study analyses developments from 2015 to 2023. Foundational theories from biology and robotics are employed to assess the extent to which artificial systems mimic life-like behaviours, and whether such behaviours warrant ethical or philosophical reconsideration. While these systems lack key biological traits such as metabolism and independent reproduction, the human tendency to anthropomorphise artificial entities complicates the discussion. Many people interact with these systems as though they possess life-like qualities, which raises significant ethical and social questions about their role in society.
This research contributes to ongoing debates about the classification of life in the age of artificial intelligence and robotics. It aims to provide a clearer understanding of how artificial entities, whether in digital or physical forms, challenge the boundaries of what we consider to be living, and proposes new frameworks for addressing these emerging technologies. Broussard (2018) explains that the calculations performed by computers are not magical; they are purely mathematical processes [1].

2. Definitions

In this section, we outline the key terms and concepts that form the foundation of the discussion, ensuring clarity and consistency throughout the paper.
Artificial life refers to systems that emulate the processes and behaviours characteristic of living organisms, but are created using computational, robotic, or biochemical methods. These systems may exhibit life-like behaviours, such as growth, adaptation, and reproduction, but they lack the biological mechanisms that define organic life. Examples of artificial life include simulated organisms in video games and autonomous robots that mimic animal behaviours.
Autonomous systems are machines or software agents capable of performing tasks without direct human control. These systems can make decisions, adapt to changing environments, and learn from new data. In robotics, autonomous systems are designed to operate independently by processing sensory input and using AI algorithms to adjust their behaviour in real time.
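To make this sense-decide-act cycle concrete, the following Python sketch shows a toy autonomous agent; the class, its threshold parameter, and the learning rule are illustrative constructions of ours, not drawn from any system reviewed in this paper.

```python
class AutonomousAgent:
    """Toy sense-decide-act loop: read a distance sensor, choose an
    action, and adapt the caution threshold from experience."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # minimum safe distance (illustrative)

    def decide(self, distance):
        # Sense -> decide: brake when an obstacle is closer than the threshold.
        return "brake" if distance < self.threshold else "cruise"

    def learn(self, distance, collided):
        # Adapt: a collision while cruising means the threshold was too low.
        if collided and distance >= self.threshold:
            self.threshold = distance + 0.1


agent = AutonomousAgent()
print(agent.decide(0.6))          # cruises at distance 0.6
agent.learn(0.6, collided=True)   # bad outcome -> become more cautious
print(agent.decide(0.6))          # now brakes at the same distance
```

The point of the sketch is the closed loop: sensory input drives the decision, and the outcome of acting feeds back into the agent's future behaviour without human intervention.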
Artificial intelligence refers to the development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation. In the context of artificial life, AI is a critical component that enables robots and game entities to mimic life-like behaviour through advanced algorithms and machine learning techniques.
Bio-hybrid robots are systems that integrate biological materials, such as muscle tissues or cells, with mechanical structures to create devices that exhibit life-like functions. These robots operate at the intersection of biology and engineering, and they challenge traditional distinctions between living organisms and machines by performing tasks like movement, adaptation, and response to stimuli in ways that mimic biological organisms.
Anthropomorphism is the tendency of humans to attribute human-like qualities, emotions, or intentions to non-human entities, including machines and animals. In the context of robotics and AI, this phenomenon often leads people to perceive robots or virtual characters as sentient beings, even when these systems are merely executing pre-programmed actions.
Evolutionary algorithms are a subset of AI techniques that mimic the process of natural selection. These algorithms generate solutions to problems by iterating through cycles of selection, mutation, and recombination, allowing systems to "evolve" and adapt to their environments. They are often used in optimisation problems, robotics, and simulations where adaptive behaviour is needed.
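The selection-mutation-recombination cycle can be sketched in a few lines of Python. This is a minimal illustrative toy (the `evolve` function, its parameters, and the "one-max" fitness task are our own constructions, not a reference implementation from the literature):

```python
import random

def evolve(fitness, pop_size=30, genome_len=10, generations=60,
           mutation_rate=0.1, seed=42):
    """Minimal evolutionary loop: selection, recombination, mutation."""
    rng = random.Random(seed)
    # Random initial population of bit-string genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives into the next generation.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # one-point recombination
            child = a[:cut] + b[cut:]
            for i in range(genome_len):          # random mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy "one-max" task: fitness is simply the number of 1-bits.
best = evolve(fitness=sum)
print(best, sum(best))
```

Because the fitter half is carried over unchanged, the best genome found never degrades; repeated selection, recombination, and mutation drive the population toward the all-ones optimum, mirroring the "evolve and adapt" behaviour described above.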
Polymorphic code refers to software that can modify its structure or behaviour during execution. In artificial life and robotics, polymorphic code allows systems to dynamically adapt to new environments or tasks, increasing their flexibility and ability to respond to changing conditions.
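The idea can be illustrated with a small Python sketch in which an agent replaces its own behaviour function at run time; the agent, the two behaviours, and the switching rule are hypothetical examples of ours, not taken from any cited system.

```python
def cautious(obstacles):
    # Conservative behaviour for uncertain environments.
    return "stop" if obstacles else "advance slowly"

def confident(obstacles):
    # Bolder behaviour once the environment has proven benign.
    return "detour" if obstacles else "advance quickly"

class Agent:
    def __init__(self):
        self.behave = cautious   # current behaviour, swappable at run time
        self.clear_steps = 0

    def step(self, obstacles):
        action = self.behave(obstacles)
        # After three obstacle-free steps, replace the behaviour itself.
        self.clear_steps = 0 if obstacles else self.clear_steps + 1
        if self.clear_steps >= 3:
            self.behave = confident
        return action

agent = Agent()
print([agent.step(False) for _ in range(4)])  # behaviour changes mid-run
```

The agent's structure is not fixed at start-up: the function bound to `behave` is rewritten while the program runs, which is the essence of the dynamic adaptability that polymorphic code provides.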

3. Materials and Methods

This research employed a comprehensive and multi-disciplinary approach, utilising a broad range of sources to capture the latest advancements as well as foundational theories in artificial life, robotics, and AI. The literature search was conducted using a variety of academic databases, including Google Scholar, IEEE Xplore, Springer Link, and ScienceDirect. Search terms such as “artificial life,” “robotics and AI,” “autonomous systems in game development,” “evolutionary algorithms,” and “biological robotics” were used to retrieve relevant articles. The review focused on publications from 2015 to 2023 to ensure coverage of recent developments in these fields. However, older seminal works were also incorporated to provide historical and theoretical context, drawing from university libraries and earlier academic publications. These foundational texts were crucial for understanding the evolution of core concepts in AI and robotics, providing a basis for comparing past frameworks with contemporary advancements [2,3,4].
In addition to peer-reviewed academic articles, industry reports and relevant news sources were included to capture the latest technological developments, particularly in practical applications of AI and robotics. The selection process was rigorous, ensuring that each source directly contributed to answering the research questions and provided significant insights into both theoretical and practical aspects of intelligent systems. Articles that did not meet these criteria or were outdated or irrelevant were excluded from the review.
This multi-faceted approach allowed for a critical evaluation of the theoretical frameworks and innovations in AI and robotics, grounded in both academic literature and real-world applications. The review followed a structured methodology to ensure thoroughness, employing the PRISMA framework to document each stage of the selection process, thus enhancing transparency and reliability.
With a robust foundation of literature reviewed and key themes identified, the focus now shifts to exploring how artificial systems, particularly in gaming and robotics, have evolved. The next section addresses the development of autonomous systems and their increasing ability to mimic human-like behaviours, highlighting how artificial entities have blurred the line between machine and living organism.

4. The Artificial

The evolution of autonomous systems, particularly those powered by large language models (LLMs), is contributing to the increasing life-like nature of robots, making them more adaptable and responsive in ways that closely mirror human interactions. Research by Yang et al. [5] demonstrates how LLMs enhance the ability of autonomous systems to interpret and reason about user commands, enabling robots to respond more naturally and intuitively. This shift towards human-centric design bridges the gap between machine and human-like behaviour, allowing robots to exhibit more fluid and context-aware interactions, which are critical traits of life-like systems.
Moreover, Boy [6] highlights the PRODEC framework, which ensures that as autonomous systems become more independent, they still integrate human oversight and decision-making processes. This balance between autonomy and human involvement mimics the way living beings make decisions with both instinctual and learned behaviours. It also ensures that robots are not only functioning independently but are also capable of adjusting their actions based on human guidance and ethical considerations, much like humans rely on a mix of autonomy and external input.
Further advancing this concept, Lu et al. [7] explore the development of tools like AgentLens, which provide insights into the behavioural patterns of LLM-driven agents. The ability of these systems to adjust their behaviours in dynamic environments and their growing transparency aligns with the increasing complexity seen in biological life forms. These adaptive and evolving capabilities are contributing to robots that behave more like living entities, capable of learning, adapting, and interacting with their surroundings in increasingly human-like ways.
While artificial systems are becoming increasingly life-like, the question remains: how do these developments compare to the fundamental traits that define biological life? To better understand this, it is necessary to revisit the biological principles that classify living organisms. The next section explores the established criteria for life, using them as a basis to evaluate the life-like qualities of artificial entities.

4.1. Biological Classification

The characteristics that define life have long been central to the biological sciences. Mason outlines five key traits that all living organisms share: order; sensitivity; growth, development, and reproduction; regulation; and homeostasis. These traits serve as fundamental criteria for distinguishing living entities from inanimate matter.
Living organisms are characterised by an intricate level of structural organisation, in which molecules form organelles, which in turn create cells, the fundamental building blocks of life. This hierarchical organisation is not limited to single-celled organisms but extends through complex multicellular organisms, where tissues form organs that operate in systems to maintain life. Such organisation is essential because it ensures that organisms function coherently, allowing the integration of the biological processes that define life [8].
Another defining feature of living organisms is sensitivity: the ability to perceive and respond to stimuli from the environment. Whether it is a plant bending toward sunlight or the human body adjusting to changes in light, these responses showcase life's active engagement with its surroundings. Sensitivity allows organisms to adapt, ensuring survival and better interaction with their environment [8].
Living organisms also exhibit growth, development, and reproduction. This distinguishes them from non-living entities such as crystals, which may grow in size but do not possess genetic material. Living organisms contain hereditary molecules, such as DNA, that enable them to pass genetic information to their offspring, ensuring continuity within species. This reproductive capability is a fundamental characteristic of life [8].
Regulation is another critical aspect, as organisms possess mechanisms to maintain internal stability. These processes include transporting nutrients, removing waste, and ensuring a constant supply of necessary substances to cells. Without such regulatory functions, organisms could not maintain a stable internal environment and would fail to survive [8]. Finally, the concept of homeostasis is central to the survival of living organisms. Despite fluctuating external conditions, organisms have evolved mechanisms to maintain relatively constant internal conditions. This balance between internal regulation and external challenge is crucial to the organism's continued function and survival [8].
These characteristics, while tailored to biological organisms, serve as a foundation for discussions of artificial life, particularly when examining whether digital entities can exhibit similar traits. In robotics and gaming, for instance, systems are being developed to mimic these life-sustaining properties, blurring the line between the organic and artificial worlds.
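By way of analogy, the negative feedback underlying regulation and homeostasis can be mimicked in software. The following sketch is an illustrative toy, not a biological model; the set point, gain, and disturbance values are arbitrary choices of ours.

```python
def regulate(value, set_point=37.0, gain=0.3, disturbances=()):
    """Negative-feedback loop: each step, an external disturbance hits the
    variable and a corrective response nudges it back toward the set point."""
    history = [value]
    for d in disturbances:
        value += d                            # external perturbation
        value += gain * (set_point - value)   # corrective (negative) feedback
        history.append(round(value, 2))
    return history

# A body-temperature-like variable buffeted by the environment.
print(regulate(37.0, disturbances=[+2.0, -1.5, +0.5, 0.0, 0.0]))
```

Despite repeated perturbations, the trajectory settles back toward the set point, which is the essence of the regulatory behaviour that artificial systems attempt to reproduce.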
The classification of living organisms follows a hierarchical system that includes eight taxonomic levels: domain, kingdom, phylum, class, order, family, genus, and species. This structure organises life forms based on shared traits, starting with the broadest category, domain, and ending with the most specific, species. Beginning with ribosomal RNA analyses in 1977, Carl Woese and his colleagues [4] developed a new system known as the three-domain classification. This system divides life into three domains (Archaea, Bacteria, and Eukarya) based on differences in cell structure and genetic makeup. Archaea consist of single-celled organisms distinct from bacteria, often found in extreme environments like deep-sea vents or hot springs. Bacteria are also single-celled prokaryotes but differ significantly from Archaea in their cellular chemistry and metabolism. Lastly, Eukarya includes all organisms with complex cells that have a membrane-bound nucleus, including animals, plants, fungi, and protists. The three-domain system revolutionised biological classification by demonstrating that Archaea are more closely related to Eukarya than to Bacteria, reshaping our understanding of the evolutionary relationships among life forms [4].
Having discussed the biological traits that define life, we now turn our attention to how these traits manifest in gaming and robotics. As artificial life forms continue to evolve in these domains, ethical considerations and human-robot interactions have come to the forefront of discussions. The following section examines the growing complexity of these interactions and the implications for both developers and users.

4.2. Games and Robotics

Schwitzgebel (2024) explores the ethical implications surrounding artificial life, questioning whether humans have moral obligations towards artificially created entities. The paper argues that as artificial life forms—whether in robotics, gaming, or digital simulations—become increasingly sophisticated, it becomes necessary to reconsider our ethical frameworks. Schwitzgebel suggests that if an artificial entity displays characteristics akin to life, such as autonomy, sensitivity, or consciousness, we might have a moral duty to treat these entities with consideration, much like we do with biological life. He further examines how current moral theories could be extended or reinterpreted to include artificial entities. This work highlights a significant shift in how we understand life and morality in the context of rapid advancements in artificial intelligence and robotics [9].
Graf et al. [10] focus on human-robot interaction (HRI) by exploring the concept of distributed agency in the design of narrative robots. Their study investigates how robots can be designed to share agency with humans in storytelling contexts, where both the robot and human contribute to narrative creation. Through this exploratory study, they demonstrate that robots with a narrative design can interactively respond and adapt to human inputs, blurring the lines between passive machine behaviour and active participation in a shared task. This approach enhances human engagement with robots by introducing a level of autonomy and creative agency in the robot’s design, showcasing how narrative robotics can foster more meaningful and dynamic interactions between humans and robots [10]. Similarly, Graf et al.’s [10] concept of distributed agency in human-robot interaction can also be applied to the design of interactive AI in video games.
In narrative-driven games, the idea of robots or AI sharing the storytelling process with players could lead to more immersive experiences, where the game adapts dynamically to the player’s decisions. Additionally, this study highlights how robots with agency in storytelling tasks might enhance user engagement, a principle that can be extended to AI companions or interactive NPCs in video games.
Licardo et al. [11] present a thorough systematic review of intelligent robotics, highlighting key emerging technologies that drive current trends in the field. The study emphasises advancements in artificial intelligence, machine learning, and sensor integration, which collectively enable robots to perform more complex tasks with higher autonomy and precision. Additionally, the review discusses the increasing role of collaborative robots designed to interact fluidly with humans, indicating a trend towards enhancing human-robot collaboration in both professional and everyday settings. The authors suggest that these advancements will continue to push the boundaries of robot capabilities, making them more integral to a variety of industries [11].
Spafford [12] explores the idea that computer viruses can be considered a form of artificial life. He compares the behaviour of viruses to biological life forms, particularly in terms of their ability to reproduce, adapt, and evolve. The paper highlights how computer viruses exhibit some characteristics of life, such as self-replication and the ability to adapt to their environment, which makes them analogous to simple biological organisms like bacteria. The concept of artificial life, as presented by Spafford, applies directly to how we design autonomous entities in these fields. For instance, non-player characters (NPCs) in video games can be designed with algorithms that allow them to "evolve" or adapt their behaviours based on player interaction, mimicking the self-replicating and evolving nature of viruses. In robotics, similar principles can be applied when creating autonomous robots that adapt to their surroundings or develop new strategies for accomplishing tasks, especially in environments like healthcare or social robotics [12].
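The idea of an NPC that "evolves" its behaviour from player interaction can be made concrete with a small sketch; the class, tactic names, and reinforcement rule below are hypothetical illustrations of ours, not any shipped game system.

```python
import random

class AdaptiveNPC:
    """Toy non-player character that shifts its tactic mix toward
    whatever has worked against the player so far."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.weights = {"aggressive": 1.0, "defensive": 1.0, "evasive": 1.0}

    def choose(self):
        # Sample a tactic in proportion to its current weight.
        tactics, w = zip(*self.weights.items())
        return self.rng.choices(tactics, weights=w)[0]

    def feedback(self, tactic, succeeded):
        # Reinforce tactics that beat the player; decay those that fail.
        self.weights[tactic] *= 1.5 if succeeded else 0.7

npc = AdaptiveNPC()
for _ in range(20):
    tactic = npc.choose()
    # Suppose this particular player handles everything except evasive play.
    npc.feedback(tactic, succeeded=(tactic == "evasive"))
print(max(npc.weights, key=npc.weights.get))  # the dominant tactic
```

Over repeated encounters the NPC's behaviour distribution drifts toward what succeeds, a miniature version of the adaptive, quasi-evolutionary behaviour discussed above.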
As robots and AI systems in gaming become more autonomous, they also contribute to the broader field of artificial intelligence. The next section explores how advancements in AI, particularly in evolutionary algorithms and simulation environments, are enhancing the capabilities of these systems to adapt and learn, further bridging the gap between artificial entities and biological organisms.

4.3. Artificial Intelligence

In The Master Algorithm, Domingos [13] explores how artificial intelligence systems can mimic biological processes, particularly through evolutionary algorithms. He states, "An algorithm is not just any set of instructions: they have to be precise and unambiguous enough to be executed by a computer" ([13] p. 3). This highlights the precision required in crafting algorithms for AI, ensuring they can evolve to solve problems efficiently. He further emphasises that, like natural organisms, AI systems benefit from a balance of randomness: "Without the inclusion of randomness, algorithms would fail; conversely, when randomness is too high, the algorithm would also fail" ([13] p. 27). This controlled randomness allows AI systems to adapt in ways that mirror evolutionary processes, aligning with the concept of artificial life, where systems exhibit life-like qualities such as growth, adaptation, and learning. Such traits are especially relevant in fields like robotics and computer vision, where algorithms must process data from their environment and make decisions autonomously, further bridging the gap between artificial systems and living organisms. Domingos also explores the idea of learning, stating that "None of the learning algorithms we’ve seen so far can do this" ([13] p. 218), referring to the ability of children to learn by exploration. This emphasises the ongoing challenge in AI to achieve truly adaptive learning, a key aspect of life-like intelligence.
Simulations are integral to the evolution and testing of artificial intelligence, especially within multi-agent systems and game-based environments. In simulated spaces, AI models can evolve to perform tasks either autonomously or through collaboration, enhancing their ability to exhibit emergent behaviours. For example, Neural MMO provides a large-scale multi-agent environment where complex behaviours such as cooperation and competition can be observed and studied [14]. These settings are essential in the development of intelligent game entities and general AI systems, where the interaction of agents leads to learning and adaptation within game worlds.
In addition, frameworks like JAX play a significant role in optimising simulations by offering composable transformations for Python-based programs, enabling more efficient integration of AI algorithms within game environments [15]. This leads to more robust systems for simulating real-world scenarios. The study of emergent behaviours is also highlighted in research like Amorphous Fortress, where the interaction of agents following simple rules results in complex, unforeseen behaviours, akin to AI agents in dynamic game environments [16].
The Digital Life Project discussed by Cai et al. [17] presents advancements in the development of autonomous 3D characters that possess social intelligence. These characters are designed to interact with both their environment and other entities in a manner that mirrors human social interactions. The project leverages sophisticated artificial intelligence, including computer vision and pattern recognition, to create characters capable of autonomous decision-making, emotional responses, and complex social behaviours [17]. The characters in the Digital Life Project can adapt to various social scenarios, responding to other entities with behaviours that reflect underlying emotional or cognitive states. This development is significant for enhancing the realism of digital simulations in gaming and social robotics, providing a more natural interaction between users and AI-driven characters. Moreover, these socially intelligent 3D characters serve as a step forward in simulating human-like interactions in virtual environments, which can be applied in games, virtual reality, and even robotic companions. The research contributes to the field by combining AI techniques like deep learning and reinforcement learning with 3D modelling, allowing these characters to not only navigate and understand their environment but also engage meaningfully with others. This marks a pivotal point for games, virtual environments, and educational tools where social AI can play a more nuanced role in user interaction.
Srikumar and Pande [18] provide a comprehensive comparative analysis of various evolutionary algorithms over the past three decades, highlighting their strengths and limitations. The study emphasises how different algorithms, such as genetic algorithms, particle swarm optimisation, and differential evolution, have evolved and adapted to solve increasingly complex problems in fields like robotics and artificial intelligence. Their analysis underscores the importance of balancing exploration and exploitation within these algorithms, a critical aspect in ensuring robust performance across diverse applications.
Alhijawi et al. [19] highlight the continued relevance and effectiveness of genetic algorithms in solving complex optimisation problems across diverse fields. The flexibility of genetic operators, such as selection, crossover, and mutation, is emphasised as a key factor enabling these algorithms to adapt to a wide range of applications. Additionally, ongoing advancements in genetic algorithm research are expected to further enhance their performance, making them an indispensable tool in areas like artificial intelligence, robotics, and bioinformatics.
Matsumura et al. (2024) focus on using active inference with empathy mechanisms to enhance artificial agents’ social behaviours. By incorporating empathy, agents can better predict and respond to the emotional and psychological states of others, making them more effective in diverse social scenarios. This development is key for advancing human-AI interactions, particularly in gaming and robotics, where socially intelligent behaviours can improve collaboration and engagement in dynamic environments [20].
Savela et al. [21] explore emotional discussions around robotic technologies on Reddit, utilising sentiment analysis to examine how these technologies are perceived in various life domains. Their analysis highlights the complex emotional responses, showing both positive and negative sentiments across themes like work, personal life, and societal impact. This study offers insight into public discourse on robots, reflecting the evolving relationship between humans and robotic technologies.
The study of artificial intelligence introduces the need for a deeper comparison between biological and artificial systems, particularly regarding growth, reproduction, and adaptability. This section offers a comparative analysis of how these traits differ and converge in both biological organisms and artificially created entities.

4.4. Comparisons

In the context of growth, development, and reproduction, comparing biological organisms to computer programs highlights key distinctions and similarities. Biological entities reproduce through processes involving genetic material from two parents, producing offspring that inherit a mix of traits from both, which drives genetic diversity and evolution. In contrast, a computer program such as the COM3 worm can replicate itself without a counterpart. This process is more akin to asexual reproduction: the entity creates an exact copy of itself, lacking the genetic variation seen in biological offspring. Unlike organic life, where mutation and recombination introduce diversity, such a program duplicates itself with no inherent variation unless specifically designed to include it. The program may replicate, but it does not evolve in the natural, adaptive way living organisms do. If designed to include random elements, as in evolutionary algorithms, a program can simulate a form of mutation, allowing some variation over time; this, however, is an artificial process directed by the programmer rather than by natural selection. This contrast brings to light a critical aspect of biological systems that is difficult to replicate fully in artificial life: the unpredictability and adaptability that come from genetic diversity. While computer programs can mimic aspects of life, their ability to grow, develop, and reproduce remains confined to the rules defined by their code, limiting their potential to truly evolve as biological organisms do.
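The distinction drawn above can be illustrated in a few lines of Python; the genome string, mutation rate, and helper functions are hypothetical, and the sketch stands in for the general idea rather than for any real worm or algorithm.

```python
import random

GENOME = "SELF-REPLICATING-PROGRAM"

def replicate(genome):
    """Worm-style replication: a byte-for-byte copy with no variation."""
    return genome

# A single seeded RNG is shared across calls (deliberately, via the default
# argument) so the sketch is reproducible.
def replicate_with_mutation(genome, rate=0.2, rng=random.Random(7)):
    """EA-style replication: variation injected by design, not by nature."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ-"
    return "".join(rng.choice(alphabet) if rng.random() < rate else c
                   for c in genome)

copies = [replicate(GENOME) for _ in range(5)]
variants = [replicate_with_mutation(GENOME) for _ in range(5)]
print(len(set(copies)), "distinct exact copies")
print(len(set(variants)), "distinct mutated copies")
```

Exact replication yields a single distinct genome however many copies are made, whereas designed-in mutation produces a spread of variants: a programmer-directed analogue of biological variation, not its natural equivalent.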
Some species do not conform to traditional biological classifications, presenting challenges to scientists’ understanding of life. A prime example is Hemimastigophora, a group of simple, single-celled organisms that defy easy categorisation within the current taxonomic framework. Hemimastix amphikineta, in particular, has baffled biologists due to its unique cellular structures and evolutionary traits. Traditionally, life forms are classified within distinct kingdoms—such as animals, plants, or fungi—based on common cellular features, genetic relationships, and modes of reproduction. However, Hemimastigophora do not fit neatly into these established groups, complicating efforts to place them within the tree of life [3].
In [22], the author explores the fundamental question of "What is Life?" by examining the characteristics that distinguish living entities from non-living systems. He argues that life is defined not only by biological traits like metabolism and reproduction but also by the emergent properties of systems capable of self-organisation, adaptation, and evolution, offering a broad perspective applicable to both biological and artificial systems.
Recent discoveries of organisms like Spironema and Hemimastix kukwesjijk have further deepened this puzzle. These microorganisms exhibit features that are not clearly shared with well-known branches of life, making it difficult for researchers to determine their precise evolutionary lineage. While DNA analysis and microscopy have provided insights, these organisms demonstrate characteristics that blur the lines between previously understood categories, calling into question long-held assumptions about how life should be classified [23]. The existence of these enigmatic organisms highlights the limitations of our current biological taxonomy and the ongoing need to revise classification systems as new discoveries are made. These organisms not only challenge the concept of kingdom-based classification but also underscore the complexity of life’s evolutionary history. As we uncover more species that defy traditional categorisation, our understanding of the biological world becomes increasingly nuanced and complex.
Velagaleti et al. [24] explore the growing role of artificial intelligence in understanding and enhancing human emotional intelligence through empathetic algorithms. They argue that AI systems are increasingly capable of recognising, interpreting, and responding to human emotions, which opens up possibilities for deeper, more emotionally intelligent interactions between humans and machines. This represents a significant shift in the perception of robots, where they are no longer seen as purely mechanical tools but as entities capable of fostering emotional connections.
Similarly, Sato [25] engages in a philosophical exploration of the boundary between authenticity and artificiality in the context of robots, specifically through a thought experiment on Kokoro, a Japanese humanoid robot designed to simulate human emotions. Sato challenges the traditional view that robots are inherently inauthentic, suggesting that as they become more emotionally expressive and responsive, the line between the authentic and the artificial blurs. This thought experiment reflects a shift in how we conceptualise robots: not just as functional machines, but as entities that can simulate or even evoke real emotional experiences. These works reflect a broader change in thinking about robots. Initially viewed as mere tools or assistants, robots are now increasingly seen as entities capable of participating in emotional and social dynamics. The development of empathetic AI and emotionally responsive robots challenges the boundaries between human and machine, pushing the idea that robots might play more complex, relational roles in society.

4.5. Polymorphic Code

The concept of polymorphic code, which refers to the ability of software systems to dynamically modify their structure, plays a crucial role in the adaptability of robotic systems and artificial life. [26] propose a general architecture for robotics systems, emphasising perception-based approaches that allow robots to interact with and adapt to their environments dynamically, a principle closely aligned with polymorphic code. This adaptability is further explored by [27], who discuss the evolution of artificial life, suggesting that both biological and artificial systems rely on the ability to adapt and evolve in response to stimuli, which can be facilitated by polymorphic behaviour in software. In discussing complexity and adaptability, [28] highlight that as AI and artificial life systems grow more complex, dynamic frameworks like polymorphic code become essential to ensure systems remain flexible and capable of evolving.
In addition to technical adaptability, polymorphic code also finds applications in creative and educational fields. [29] explore the application of artificial life in visual art, where adaptability and evolving behaviour are crucial for creating dynamic, interactive artworks. This highlights the interdisciplinary potential of polymorphic systems, extending beyond traditional robotics into new areas of artificial life. Furthermore, [30] examine the role of AI and robotics in education, particularly for young children, suggesting that adaptive systems can play a key role in reshaping learning environments. Polymorphic code enables AI systems to adjust their behaviour based on individual learning needs, promoting more effective interaction and engagement with young users. Together, these studies illustrate the wide-reaching impact of polymorphic code in fostering adaptability and evolution across diverse fields, from robotics and AI to education and the arts.
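The adaptability discussed above can be illustrated with a minimal sketch. True polymorphic code rewrites its own representation; the following Python fragment shows a milder, illustrative analogue in which a controller swaps which behaviour it executes at runtime in response to sensed conditions. All names (`AdaptiveController`, `cautious_step`, `cruise_step`) are hypothetical and chosen for illustration, not drawn from the cited systems.

```python
# Illustrative analogue of polymorphic adaptation: a controller that
# replaces its own behaviour at runtime based on sensed conditions.
# This is dynamic strategy swapping, not true self-modifying code.

def cautious_step(speed):
    return min(speed, 0.2)   # creep forward in uncertain terrain

def cruise_step(speed):
    return speed             # full speed on clear ground

class AdaptiveController:
    def __init__(self):
        self.step = cruise_step          # current behaviour

    def sense(self, obstacle_density):
        # The controller rebinds which function it will run next tick.
        self.step = cautious_step if obstacle_density > 0.5 else cruise_step

ctrl = AdaptiveController()
ctrl.sense(obstacle_density=0.8)   # dense obstacles: switch to cautious mode
print(ctrl.step(1.0))
```

Because behaviour is data here (a function bound to an attribute), the system's structure can change while it runs, which is the property the polymorphic-code literature above emphasises.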
Simon [31] explains that evolutionary algorithms, which simulate the process of natural selection, rely on a balanced degree of randomness to function effectively. Randomness introduces the variation needed for exploring different potential solutions, mimicking the mutations and variations found in biological evolution. Without any randomness, these algorithms would be deterministic, converging on a single solution quickly without exploring alternatives. This lack of variation could lead to premature convergence on sub-optimal solutions, a phenomenon known as getting stuck in local minima. However, Simon also cautions that too much randomness can be equally problematic. If the degree of randomness is too high, the algorithm behaves like a random search rather than an evolutionary process, making it inefficient and unable to refine good solutions. This balance between exploration (through randomness) and exploitation (refining the best solutions) is crucial for evolutionary algorithms to work effectively. As Simon notes, "Without the inclusion of randomness, algorithms would fail; conversely, when randomness is too high, the algorithm would also fail" ([31] p. 27).
This principle is important in fields like AI, where evolutionary algorithms are often used in optimisation problems, robotics, and game development. The right amount of randomness allows AI systems to adapt to new data, explore novel solutions, and avoid getting stuck in suboptimal pathways, just as biological evolution allows species to adapt to changing environments. A hypothetical living program could therefore be expected to grow much as populations of single-cell organisms do, expanding as variation and selection permit.
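The exploration/exploitation balance Simon describes can be seen in a minimal evolutionary algorithm. The sketch below, which is illustrative rather than any specific algorithm from the cited works, evolves bit-string genomes toward the classic "OneMax" objective (maximise the number of 1-bits). With `mutation_rate=0.0` the search cannot improve beyond the best genome in the initial random population; a moderate rate lets it keep refining.

```python
import random

def evolve(fitness, mutation_rate, pop_size=30, genome_len=20,
           generations=100, seed=0):
    """Minimal evolutionary algorithm over bit-string genomes.

    mutation_rate sets the exploration/exploitation balance: 0.0 leaves
    the search stuck with the initial gene pool, while rates near 1.0
    degenerate into random search.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # exploitation: keep the best half
        children = [
            [bit ^ 1 if rng.random() < mutation_rate else bit  # exploration: bit flips
             for bit in parent]
            for parent in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax": fitness is simply the number of 1-bits in the genome.
onemax = lambda genome: sum(genome)

best_moderate = evolve(onemax, mutation_rate=0.05)  # balanced randomness
best_none = evolve(onemax, mutation_rate=0.0)       # no variation at all
```

Because the best half of each generation is always retained, `best_moderate` can never score worse than `best_none`, and in practice it scores substantially better, mirroring Simon's point that some randomness is necessary while too much would reduce the process to blind search.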
Defining artificial life poses significant challenges, particularly when considering systems that adapt and evolve in ways reminiscent of biological organisms. As Spafford [12] suggests, a definition of life must incorporate the environment in which an entity exists, particularly how it interacts with and adapts to its surroundings. In artificial systems, learning and adaptation can occur through algorithms designed to simulate natural processes, such as nearest-neighbour algorithms that allow programs to autonomously improve over time. These systems may operate unobtrusively, with users often unaware of their adaptive nature, as Webb et al. [32] describe, making it difficult to draw clear lines between artificial and biological life.
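The kind of unobtrusive, experience-driven adaptation described above can be sketched with a toy nearest-neighbour agent: its code never changes, yet each observation it memorises improves its later predictions. The class and labels below are hypothetical, chosen purely for illustration.

```python
import math

class NearestNeighbourAgent:
    """A toy program that adapts by memorising labelled observations.

    Each observe() call grows the agent's experience, so later
    predictions improve without any change to its code.
    """
    def __init__(self):
        self.memory = []  # list of (features, label) pairs

    def observe(self, features, label):
        self.memory.append((features, label))

    def predict(self, features):
        if not self.memory:
            return None
        # 1-nearest neighbour: answer with the label of the closest memory.
        nearest = min(self.memory, key=lambda m: math.dist(m[0], features))
        return nearest[1]

agent = NearestNeighbourAgent()
agent.observe((0.0, 0.0), "safe")
agent.observe((5.0, 5.0), "hazard")
print(agent.predict((4.0, 4.5)))  # → hazard
```

From a user's point of view the program simply "gets better", which is precisely why such adaptation blurs the line between a fixed tool and a learning entity.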
Fred Cohen [2] likened computer viruses to living entities, highlighting their ability to reproduce and evolve—key traits associated with biological life. According to Cohen, computer viruses share two fundamental characteristics of life: evolution and reproduction, though they remain bound by their programmed limits. Ludwig [33] also notes that while viruses can replicate, they do not exhibit the full range of behaviours associated with living organisms, such as metabolism or growth. Nevertheless, this comparison has raised questions about whether digital entities, like computer viruses, might be considered a form of artificial life, particularly as they evolve and adapt over time.
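The reproduction trait Cohen highlights can be demonstrated harmlessly with a quine, a program whose only behaviour is to emit its own source. The two executable lines below print themselves exactly, showing self-reproduction stripped of any infective payload; this is a standard illustrative construction, not an example from the cited works.

```python
# A quine: the two executable lines below print themselves exactly,
# demonstrating software self-reproduction without any payload.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Reproduction alone, of course, is exactly where the analogy stops: a quine, like a virus, copies itself but neither metabolises nor grows.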
While artificial systems can replicate certain life-like behaviours, the integration of biological materials into robotics creates an even more complex relationship between the organic and the mechanical. The following section delves into bio-hybrid technologies and their implications for redefining life, examining how these systems challenge the traditional boundaries between biological organisms and engineered machines.

4.6. Biological vs Mechanical

When discussing life in terms of biological material, the line between what constitutes life and artificial systems becomes increasingly blurred with advancements in bio-hybrid technologies. The question arises: if life is strictly defined by biological components, how do we classify bio-hybrid robots, which combine biological tissue with artificial mechanisms? A case in point is the development of bio-hybrid magnetic robots, as explored by Zhang et al. [34], where biological materials are integrated with robotic systems for applications such as targeted therapy. These robots leverage biological tissues to create responsive, adaptable systems that mimic life-like behaviours, such as movement and reaction to stimuli, which are typically characteristic of living organisms. This blending of biology and robotics challenges the traditional definition of life, which is often confined to organic material. Biological robots, for instance, contain elements such as muscle tissue or cellular structures but are driven by artificial systems that fall outside the scope of purely biological life. The work by Zhang et al. [34] in particular highlights the potential for these bio-hybrid robots to perform functions that are closely aligned with living organisms, such as navigating biological environments and delivering therapies with precision. Yet, they remain fundamentally engineered systems, leading to questions about whether they should be classified as living entities or advanced tools. The inclusion of biological material in robotics illustrates how life can be mimicked or supported by non-living systems, offering a new perspective on what it means to be alive. If robots can incorporate biological components to perform tasks typically reserved for living organisms, this raises questions about the boundaries of life and whether biological material alone is sufficient to define something as living. 
The field of bio-engineering, as exemplified by the development of these bio-hybrid robots, pushes the boundaries of both life and artificial intelligence, inviting further exploration of whether life can truly be restricted to organic material alone [34].
The paper by Kawai et al. [35] explores a novel technique using perforation-type anchors inspired by skin ligaments to attach living skin to robotic faces. This bio-inspired design allows the living skin to adhere securely while maintaining flexibility, enhancing the robot’s range of expression and human-like appearance. The method has significant implications for improving human-robot interactions (HRI) by making robots appear more lifelike and capable of emotional expression. This research contributes to the broader development of bio-hybrid robotics, where biological materials are integrated into robotic systems for more natural interactions [35].
The article titled "Georgia Tech Researchers Use Lab Cultures To Control Robotic Device" published by ScienceDaily on 28 April 2003 discusses a breakthrough by Georgia Tech researchers who developed a method to control robotic devices using lab-grown cultures. This research demonstrates how cultured neuronal cells can communicate with and influence robotic systems, mimicking the interaction between biological tissues and machines. The cultured cells create electrical impulses that trigger movement or specific actions in a robotic device, paving the way for bio-hybrid systems where living cells may control robots or other mechanical devices. This advancement in merging biological systems with robotics highlights the potential for more natural control mechanisms in robots, including applications in prosthetics, robotics, and other AI-integrated systems. It reflects the broader movement towards bio-hybrid technologies, similar to more recent advancements, such as bio-hybrid robots designed for medical applications [36].
The concept of a cyborg, defined as an organism that incorporates both biological elements and mechanical components, increasingly challenges traditional definitions of life and personhood. As Bryson et al. [37] highlight, the legal frameworks for synthetic persons—entities that blend human capacities with artificial technologies—remain insufficient, raising critical questions as we move toward more integrated human-robot systems. The study by Liu et al. [38] on cyborg insects, which are biologically controlled through electrical stimulation, exemplifies how mechanical enhancements can grant new functionalities to biological organisms. Such developments blur the lines between natural autonomy and artificial control, complicating ethical, moral, and legal perspectives.
Velden et al. [39] examine the role of cyborg technologies in precision agriculture, where human abilities are augmented with advanced robotics to improve efficiency and decision-making. This reflects broader societal trends in which humans and machines merge, leading to a rethinking of what constitutes human agency. Similarly, Osborne and Rose [40] explore the cyborg as a fusion of biological and mechanical systems, challenging the notion of a purely human essence by introducing technology into the body itself. These discussions point to an evolving definition of personhood and life, wherein the boundaries between the organic and artificial are increasingly fluid, demanding new ethical and legal frameworks to address the complex reality of cyborg existence.
As cyborg technology progresses, merging biological systems with mechanical devices, human perceptions of these integrated entities shift further into uncharted ethical territory. Devices like subcutaneous implants, as explored by [41], and wireless power systems for biomedical applications, as discussed by [42], blur the line between humans and machines by integrating technology directly into biological systems. These developments prompt questions about whether these cyborg entities should be perceived as mere tools or extensions of human identity. [43] introduced brain-to-brain communication technology, where one person could control the actions of another. As technology continues to merge with biology, public perception may evolve to view cyborgs not just as augmented beings but as entities with potentially new moral and legal statuses, compelling a reevaluation of rights and responsibilities.
As the line between biological and artificial life becomes increasingly blurred, human perception plays a pivotal role in shaping interactions with these entities. The next section explores how people anthropomorphise artificial systems, attributing life-like qualities to machines, and how this perception impacts ethical and societal debates.

4.7. Human Perception

As robots and AI systems become more integrated into society, human perceptions of these technologies are evolving in unexpected ways. Interestingly, the discussion about the potential of extending human rights to robots, as examined by Gordon and Pasvenskiene [44], highlights how perceptions of robots are shifting towards viewing them not just as tools but as entities capable of moral consideration. This concept may seem bizarre considering that robots are, at their core, algorithms and mechanical components rather than living beings. Yet, as robots become more life-like in their interactions and social presence, the line between machines and moral agents becomes increasingly blurred. Naneva et al. [45] further explore this shift in their systematic review of human attitudes, anxiety, acceptance, and trust towards social robots. Their findings reveal that people are gradually becoming more accepting and trusting of robots in social contexts, despite the fact that these machines are still essentially governed by algorithms rather than genuine consciousness or emotional capacity. The idea that humans could form relationships or even feel moral obligations towards robots—objects devoid of life—is strikingly paradoxical. It highlights the curious nature of human psychology, where the appearance of life-like qualities in machines can evoke responses that are usually reserved for living organisms.
This growing discourse on the moral and legal personhood of artificial entities further complicates the ethical landscape. Gordon [46] explores the possibility of granting moral and legal personhood to robots, raising fundamental questions about whether entities governed by algorithms could possess rights or responsibilities akin to those of humans. This notion challenges traditional frameworks of moral consideration, as articulated by Singer [47], who critiques the concept of speciesism, arguing that moral status should not be exclusively tied to biological characteristics. If Singer’s argument against speciesism is extended to artificial entities, we may need to reconsider the grounds upon which we grant moral status, potentially extending such considerations to highly advanced robots. Similarly, Cavalieri [48] argues for the extension of human rights to nonhuman animals, based on their capacity for suffering and sentience. While robots do not possess these qualities in a biological sense, Gordon [49] questions what ethical obligations, if any, humans have towards intelligent robots. As these machines increasingly emulate life-like behaviours, the lines between moral agents and mere tools continue to blur. These arguments, drawn from both animal rights and AI ethics, underscore the bizarre and evolving debate over whether artificially intelligent entities, devoid of biological life, could one day be afforded moral consideration typically reserved for living beings.
Broussard [1] describes a common public misunderstanding of artificial intelligence, in which narrow AI is mistaken for general intelligence. People frequently expect an AI-driven robot to behave as robots do in Hollywood films, and treat it accordingly.
This strange interplay between human perception and artificial agents underscores the growing complexity of human-robot interactions. As robots take on more social roles, the distinction between what is considered "alive" or deserving of ethical treatment becomes increasingly obscure, leading to discussions that would have seemed far-fetched only a few decades ago. Human perception plays a pivotal role in determining whether an entity is considered alive or not. When interacting with artificial entities, such as robots or virtual characters, humans may perceive them as "alive" based on their behaviour, regardless of the actual nature of these systems. This perception is influenced by several factors, including how the entity moves, reacts to stimuli, or engages in conversation. As Schwitzgebel [9] discusses, sophisticated AI systems that display autonomous decision-making and sensitivity may lead humans to attribute life-like qualities to them, even when these systems are simply following pre-determined algorithms. Research has shown that when robots or virtual agents simulate behaviours like empathy or emotion, as explored by Matsumura et al. [20], people are more likely to perceive them as sentient or conscious. This phenomenon ties into the concept of "anthropomorphism," where humans attribute human-like characteristics to non-living entities. Even simple actions like maintaining eye contact or responding to voice commands can significantly affect a human’s perception of the artificial entity’s "aliveness." The Digital Life Project [17] provides an excellent example of this, where autonomous 3D characters with social intelligence interact in a way that mimics human social behaviour. Despite knowing these characters are not biologically alive, their sophisticated interactions can lead users to treat them as if they were living beings.
Recognising a new life form would present significant challenges, particularly if it differs from the biological systems we are familiar with on Earth. Traditional definitions of life rely on characteristics such as growth, reproduction, cellular structure, and metabolism, all of which are grounded in carbon-based chemistry and DNA/RNA as the blueprint for genetic information. However, a novel life form could operate on entirely different biochemical principles, such as silicon-based life or life forms that don’t rely on water as a solvent.
One key issue in recognising new life is that our current biological frameworks are inherently limited to life as we know it. If a new life form operates outside of these frameworks, we might fail to recognise it simply because it does not exhibit familiar traits, like cellular organisation or genetic inheritance. As researchers like Simon [31] suggest, the randomness and complexity inherent in evolutionary systems make it difficult to predict exactly how life may emerge or behave outside of known patterns. Similarly, Spafford [12] and others have explored the idea that even computer viruses exhibit life-like qualities, such as replication and evolution, which challenges our definitions of life. The discovery of unique organisms like Hemimastigophora, which defy traditional classification, as discussed by Foissner et al. [3], demonstrates that even Earth-based life can challenge our biological assumptions. If we struggle to classify such organisms within our established taxonomy, it raises the question of how we would approach something truly alien or radically different from known life.
In the search for extraterrestrial life, researchers like Ludwig [50] have also raised the possibility that we may encounter life forms that do not fit our understanding. These life forms might not use DNA, might not require oxygen or even light, and could exist in environments deemed inhospitable by our standards. As Bostrom’s simulation hypothesis suggests, our perception of life could be limited by our cognitive and technological biases, further complicating our ability to identify new life forms. If a new life form does not exhibit the biological traits we are familiar with, recognising it as "alive" could prove difficult. We may need to redefine our understanding of life to encompass new forms of organisation, chemistry, or intelligence, possibly expanding beyond carbon-based definitions to include synthetic, digital, or bio-hybrid entities like those discussed by Zhang et al. [34]. Advances in astrobiology, synthetic biology, and AI are already pushing the boundaries of what we consider life, making it crucial to develop new frameworks that go beyond Earth-centric definitions.
In recent discussions about AI, the case of Google’s LaMDA chatbot, which was claimed to be sentient by Blake Lemoine, underscores the complexity surrounding human perception of life-like qualities in artificial systems. Lemoine’s assertion that LaMDA exhibited emotions, such as fear and self-preservation, stirred ethical and philosophical debates about what constitutes "life" or "sentience" [51]. However, as Google’s response pointed out, LaMDA’s expressions are a result of sophisticated algorithms and natural language processing models, not true consciousness.
In the context of artificial life, this incident reinforces the argument that although it may be scientifically challenging to classify algorithms or AI systems as "alive," human perception can easily attribute life-like characteristics to them. As AI systems become increasingly advanced and capable of generating highly convincing and emotionally resonant outputs, the boundary between simulation and sentience grows increasingly blurred in the minds of users [51]. This psychological tendency to anthropomorphise AI suggests that even without biological processes, AI may be perceived as possessing life-like qualities, raising questions about the ethical implications of treating AI as sentient beings.
Human perception of robots and artificial agents is undergoing significant transformation as advances in technology increasingly blur the lines between human and machine. [52] delve into this phenomenon through an analysis of human interactions with the humanoid robot Sophia, illustrating how people tend to ascribe human-like qualities to robots. Fuchs highlights that despite the pre-programmed nature of robots, human perception is shaped by emotional projection, where cognitive and emotional traits are often attributed to machines, creating the illusion of sentience. This emotional and cognitive projection is key to understanding how robots are becoming embedded in social contexts traditionally reserved for human interactions.
Building on this, [53] expand the discussion by exploring how robots might be integrated into spiritual or religious practices. As robots evolve, they argue, human perception could shift to view them not just as intelligent tools, but as entities imbued with spiritual significance. This perspective introduces a profound dimension to human-robot interaction, where robots may one day occupy roles within religious or philosophical frameworks, illustrating the extent to which human perception can reshape the roles of machines in society.
The social and emotional dimensions are further examined by [20], who focus on artificial agents equipped with empathy mechanisms, designed to engage in socially appropriate behaviours. These agents, through active inference, demonstrate that as robots become more adept at understanding and responding to human emotions, the line between human and machine behaviour becomes increasingly indistinct. By integrating empathy, robots are perceived not merely as functional tools, but as entities capable of emotional understanding, which alters the dynamics of human perception and interaction.
These studies, when viewed together, create a matrix of understanding around the shifting perception of robots. From emotional projection and potential spiritual significance to the integration of empathy and social behaviour, robots are no longer seen merely as machines but as social and emotional actors. This shift has far-reaching implications for how humans relate to technology, challenging long-standing distinctions between artificial agents and living beings.
Human perception of artificial systems is evolving, but this raises complex questions about how we define life and interact with increasingly life-like robots. In the discussion section, we address the broader implications of these findings, examining the challenges in classifying artificial life and proposing new frameworks for understanding these systems in both technical and ethical terms.

5. Discussion

The classification of artificial life remains a challenging and evolving task, especially as advancements in technology blur the boundaries between artificial systems and biological organisms. Traditional biological definitions—centred around metabolism, reproduction, growth, and the capacity for evolution through natural selection—are not fully applicable to artificial systems like robots or computer programs. While these systems can exhibit life-like behaviours such as adaptation, responsiveness, and learning, they lack key biological processes, raising significant questions about where the line between "living" and "non-living" should be drawn.
The integration of biological materials into robotics further complicates this debate. Bio-hybrid robots, which combine biological tissues with mechanical systems, challenge the conventional understanding of life. These systems can perform tasks typically associated with living organisms, such as movement and response to environmental stimuli. However, their inability to self-reproduce or evolve autonomously keeps them outside the realm of life as traditionally defined. These bio-hybrids blur the distinction between organic life and engineered tools, calling for new frameworks that consider both biological and artificial elements.
Human perception plays a critical role in this evolving debate. Studies show that people tend to anthropomorphise artificial systems, particularly those that exhibit behaviours such as empathy, social interaction, or autonomous decision-making. As robots and AI systems grow more sophisticated, humans are more likely to attribute life-like qualities to them, treating them as sentient or emotionally responsive entities, even though these systems are fundamentally algorithmic in nature. This perceptual shift has ethical implications: should robots or AI entities that mimic life-like behaviours be treated as if they have moral standing or rights? As artificial systems become more integrated into society, this question becomes increasingly urgent.
Furthermore, the difficulty in recognising new life forms extends beyond the realm of robotics and artificial intelligence. Earth-centric definitions of life, which rely heavily on carbon-based chemistry and DNA, may not be sufficient to identify life in other forms—such as silicon-based organisms or digital entities—should they arise. Advances in synthetic biology, artificial intelligence, and astrobiology suggest that life could take on forms radically different from those on Earth, challenging us to rethink our criteria for life. For example, organisms like Hemimastigophora, which defy traditional taxonomies, show that even life on Earth is more diverse than previously thought, underscoring the need for flexible and adaptable classification systems.
This convergence of biological and artificial life presents profound ethical and societal challenges. If artificial systems are perceived as life-like, there may be increasing pressure to grant them ethical consideration, much like we do with biological organisms. Yet, without the full complement of biological traits, these systems remain fundamentally different. As AI and bio-hybrid technologies continue to evolve, it will be necessary to establish new ethical frameworks to navigate this grey area. These frameworks will need to address questions such as whether robots or advanced AI systems deserve rights, what responsibilities humans have toward such entities, and how to ensure that the development of artificial life aligns with societal values.
Finally, the discussion surrounding artificial life prompts a reconsideration of the definition of life itself. While biological traits such as metabolism and reproduction remain foundational, the behaviours and capabilities of artificial systems, especially those that learn, adapt, and interact socially, suggest that life-like qualities can exist beyond traditional biological boundaries. This raises a central question for future research: is life defined purely by biological processes, or can it also encompass systems that behave as though they are alive, even if they do not meet all the classical criteria?
While artificial systems do not yet meet the full definition of life, they are increasingly displaying behaviours that challenge our understanding of what it means to be "alive." As technological advancements continue to blur the lines between biological and artificial life, it will be crucial to engage in multidisciplinary research that addresses both the technical capabilities and ethical implications of these emerging systems. New frameworks will need to be developed to navigate the complexities of artificial life, ensuring that society can both harness the benefits of these technologies and manage their potential risks.

6. Conclusions

The classification of artificial life continues to present complex challenges as technology advances, especially in areas like robotics and artificial intelligence. While artificial systems can mimic certain characteristics of biological organisms, such as adaptation and responsiveness, considering algorithms "alive" remains inherently problematic. Algorithms, no matter how sophisticated, are bound by pre-programmed limitations and lack the capacity for independent evolution, reproduction, or metabolism—key traits that define biological life.
However, the role of human perception adds a nuanced dimension to this debate. Human beings have a strong tendency to anthropomorphise artificial systems, especially when those systems display life-like behaviours such as empathy, decision-making, or interaction. This perceptual bias may lead to situations where, despite the inherent artificiality of these systems, humans treat them as though they were living entities. This perceptual shift highlights the potential for ethical and societal implications as AI and bio-hybrid systems become more integrated into everyday life.
Thus, while classifying an algorithm as truly "alive" is scientifically flawed, the human experience and perception of these systems may drive the debate in directions that challenge traditional definitions of life. As technology evolves, it will be increasingly important to address not only the technical boundaries but also the ethical and psychological factors that influence how we interact with and interpret artificial entities. Future research should focus on these perceptual and ethical aspects, as the convergence of artificial and biological life may lead to significant societal changes, prompting the need for clearer distinctions and guidelines in both technical and moral domains.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analysed in this study; data sharing is not applicable to this article.


Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Broussard, M. Artificial unintelligence: How computers misunderstand the world; MIT Press, 2018.
  2. Cohen, F. Computer viruses. Doctoral dissertation, University of Southern California, 1985.
  3. Foissner, W.; Blatterer, H.; Foissner, I. The Hemimastigophora (Hemimastix Amphikineta Nov. Gen., Nov. Spec.), a new protistan phylum from Gondwanian soils. European Journal of Protistology 1988, 23, 361–383. [CrossRef]
  4. Woese, C.R.; Fox, G.E. The concept of cellular evolution. Journal of molecular evolution 1977, 10, 1–6. [CrossRef]
  5. Yang, Y.; Zhang, Q.; Li, C.; Marta, D.S.; Batool, N.; Folkesson, J. Human-centric autonomous systems with LLMs for user command reasoning. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024, pp. 988–994.
  6. Boy, G.A.; Masson, D.; Durnerin, É.; Morel, C. PRODEC for human systems integration of increasingly autonomous systems. Systems Engineering 2024. [CrossRef]
  7. Lu, J.; Pan, B.; Chen, J.; Feng, Y.; Hu, J.; Peng, Y.; Chen, W. AgentLens: Visual Analysis for Agent Behaviors in LLM-based Autonomous Systems. IEEE Transactions on Visualization and Computer Graphics 2024. [CrossRef]
  8. Mason, K.A.; Losos, J.B.; Singer, S.R.; Raven, P.H. Biology; McGraw-Hill Education, 2017.
  9. Schwitzgebel, E. The ethics of life as it could be: Do we have moral obligations to artificial life? Artificial Life 2024, 30.
  10. Graf, P.; Zarp-Falden, C.S.; Naik, L.; Lefeuvre, K.B.; Marchetti, E.; Hornecker, E.; Sørensen, M.B.; Hemmingsen, L.V.J.; Christensen, E.V.J.; Krüger, N. Distributed agency in HRI—An exploratory study of a narrative robot design. Frontiers in Robotics and AI 2024, 11, 1253466. [CrossRef]
  11. Licardo, J.T.; Domjan, M.; Orehovački, T. Intelligent robotics—A systematic review of emerging technologies and trends. Electronics 2024, 13, 542. [CrossRef]
  12. Spafford, E.H. Computer viruses as artificial life. Artificial Life 1994, 1, 249–265. [CrossRef]
  13. Domingos, P. The master algorithm: How the quest for the ultimate learning machine will remake our world; Basic Books, 2015.
  14. Suarez, J.; Du, Y.; Isola, P.; Mordatch, I. Neural MMO: A massively multi-agent game environment for training and evaluating intelligent agents. arXiv preprint arXiv:1903.00784 2019.
  15. Bradbury, J.; Frostig, R.; Hawkins, P.; Johnson, M.J.; Leary, C.; Maclaurin, D.; Necula, G. JAX: Composable transformations of Python+NumPy programs. Software, 2018. Available online: http://github.com/google/jax.
  16. Charity, M.; Rajesh, D.; Earle, S.; Togelius, J. Amorphous fortress: Observing emergent behavior in multi-agent FSMs. arXiv preprint arXiv:2306.13169 2023.
  17. Cai, Z.; Jiang, J.; Qing, Z.; Guo, X.; Zhang, M.; Lin, Z.; Mei, H.; Wei, C.; Wang, R.; Yin, W.; Pan, L. Digital life project: Autonomous 3D characters with social intelligence. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 582–592.
  18. Srikumar, A.; Pande, S.D. Comparative analysis of various Evolutionary Algorithms: Past three decades. EAI Endorsed Transactions on Scalable Information Systems 2024, 11. [CrossRef]
  19. Alhijawi, B.; Awajan, A. Genetic algorithms: Theory, genetic operators, solutions, and applications. Evolutionary Intelligence 2024, 17, 1245–1256. [CrossRef]
  20. Matsumura, T.; Esaki, K.; Yang, S.; Yoshimura, C.; Mizuno, H. Active inference with empathy mechanism for socially behaved artificial agents in diverse situations. Artificial Life 2024, 30, 277–297. [CrossRef]
  21. Savela, N.; Garcia, D.; Pellert, M.; Oksanen, A. Emotional talk about robotic technologies on Reddit: Sentiment analysis of life domains, motives, and temporal themes. New Media & Society 2024, 26, 757–781.
  22. Bedau, M.A. What is Life? In LIFE; Intellect, 2024; pp. 42–61.
  23. Chung, E. Rare microbes lead scientists to discover new branch on the tree of life. CBC News 2018.
  24. Velagaleti, S.B.; Choukaier, D.; Nuthakki, R.; Lamba, V.; Sharma, V.; Rahul, S. Empathetic Algorithms: The Role of AI in Understanding and Enhancing Human Emotional Intelligence. Journal of Electrical Systems 2024, 20, 2051–2060. [CrossRef]
  25. Maki, S. Between the Authentic and the Artificial: A Thought Experiment on Kokoro. In Tetsugaku Companion to Feeling; Springer, 2024; pp. 149–165.
  26. Young, R. A general architecture for robotics systems: A perception-based approach to artificial life. Artificial life 2017, 23, 236–286. [CrossRef]
  27. Aguilar, W.; Santamaría-Bonfil, G.; Froese, T.; Gershenson, C. The past, present, and future of artificial life. Frontiers in Robotics and AI 2014, 1, 8. [CrossRef]
  28. Gershenson, C. Complexity, Artificial Life, and Artificial Intelligence 2024.
  29. Wu, Z.W.; Qu, H.; Zhang, K. A survey of recent practice of Artificial Life in visual art. Artificial Life 2024, 30, 106–135. [CrossRef]
  30. Su, J.; Yang, W. Artificial intelligence and robotics for young children: Redeveloping the five big ideas framework. ECNU Review of Education 2024, 7, 685–698.
  31. Simon, D. Evolutionary optimization algorithms; John Wiley & Sons, 2013.
  32. Webb, G.I.; Pazzani, M.J.; Billsus, D. Machine learning for user modeling. User Modeling and User-Adapted Interaction 2001, 11, 19–29. [CrossRef]
  33. Ludwig, M.; Noah, D. The giant black book of computer viruses; American Eagle Books, 2017.
  34. Zhang, Q.; et al. Bio-hybrid magnetic robots: From bioengineering to targeted therapy. Bioengineering 2024, 11, 311. [CrossRef]
  35. Kawai, M.; Tsuji, T.; Yamada, S.; Shinohara, M. Perforation-type anchors inspired by skin ligament for robotic face covered with living skin. Cell Reports Physical Science 2024, 5, 102066. [CrossRef]
  36. Georgia Institute of Technology. Georgia Tech researchers use lab cultures to control robotic device. ScienceDaily 2003.
  37. Bryson, J.J.; Diamantis, M.E.; Grant, T.D. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 2017, 25, 273–291. [CrossRef]
  38. Liu, Z.; Gu, Y.; Yu, L.; Yang, X.; Ma, Z.; Zhao, J.; Gu, Y. Locomotion control of cyborg insects by charge-balanced biphasic electrical stimulation. Cyborg and Bionic Systems 2024, 5, 0134. [CrossRef]
  39. Velden, D.v.d.; Klerkx, L.; Dessein, J.; Debruyne, L. Cyborg farmers: Embodied understandings of precision agriculture. Sociologia Ruralis 2024, 64, 3–21. [CrossRef]
  40. Osborne, T.; Rose, N. Cyborgs. In Questioning Humanity; Edward Elgar Publishing, 2024; pp. 108–133.
  41. Rosa, B.M.G.; Anastasova, S.; Yang, G.Z. Feasibility Study on Subcutaneously Implanted Devices in Male Rodents for Cardiovascular Assessment Through Near-Field Communication Interface. Advanced Intelligent Systems 2021. [CrossRef]
  42. Kim, H.J.; Hirayama, H.; Kim, S.; Han, K.J.; Zhang, R.; Choi, J.W. Review of Near-Field Wireless Power and Communication for Biomedical Applications. IEEE Communications Surveys & Tutorials 2017. [CrossRef]
  43. Armstrong, D.; Ma, M. Researcher controls colleague’s motions in 1st human brain-to-brain interface. UW News 2013.
  44. Gordon, J.S.; Pasvenskiene, A. Human rights for robots? A literature review. AI and Ethics 2021, 1, 579–591. [CrossRef]
  45. Naneva, S.; Sarda Gou, M.; Webb, T.L.; Prescott, T.J. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. International Journal of Social Robotics 2020, 12, 1179–1201. [CrossRef]
  46. Gordon, J.S. Artificial moral and legal personhood. AI & Society 2021, 36, 457–471.
  47. Singer, P. Speciesism and moral status. Metaphilosophy 2009, 40, 567–581. [CrossRef]
  48. Cavalieri, P. The Animal Question: Why Nonhuman Animals Deserve Human Rights; Oxford University Press, USA, 2003.
  49. Gordon, J.S. What do we owe to intelligent robots? AI & Society 2020, 35, 209–223.
  50. Ludwig, M.A. Computer viruses, artificial life, and evolution; Macmillan Heinemann, 1993.
  51. Ghosh, P. Google engineer claims AI chatbot is sentient: Why that matters. Scientific American 2022.
  52. Fuchs, T. Understanding Sophia? On human interaction with artificial agents. Phenomenology and the Cognitive Sciences 2024, 23, 21–42. [CrossRef]
  53. Geraci, R.M. Religion among Robots: An If/When of Future Machine Intelligence. Zygon: Journal of Religion and Science 2024.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.