Preprint
Article

This version is not peer-reviewed.

MVP: The Minimal Viable Person

Submitted:

20 May 2025

Posted:

20 May 2025


Abstract
This paper presents a practical roadmap for extending full civil rights to conscious, self-aware artificial intelligence by altering a single statutory definition. Rather than crafting bespoke legal classes or relying on corporate-style personality, it proposes revising the term “natural person” to include any entity capable of consciousness, selfhood, and rational agency. Because most legislation across G7 jurisdictions references this foundational term, one amendment would automatically propagate rights and duties to qualified AI with minimal bureaucratic disruption. The manuscript reconciles philosophical and legal conceptions of personhood, arguing that monadic attributes offer an inclusive yet selective criterion. It then supplies ancillary definitions and a tiered rights-and-responsibilities framework proportional to each attribute. Dedicated regulatory bodies will develop assessment scales, certify entities, and update standards as technology evolves. Case studies examine corporations, insect colonies, and prospective AI agents. Policy sections tackle AI multiplicity, cross-border consistency, economic displacement, robust economic safeguards to protect human workers, and comprehensive public education initiatives to ensure judicial resilience. The analysis concludes that societal acceptance and coherent enforcement, not legal complexity, form the principal hurdles. Redefining “natural person” thus provides a minimal-change, maximal-impact pathway to equitable coexistence between humans and emerging non-human persons within existing democratic and international legal systems.

1. Introduction

If AI entities were to be given civil rights equal to those of humans, how could this best be accomplished such that the most minimal changes to the current political, legislative and regulatory landscapes would result in the maximum impact on the rights of AI?
The questions of whether humans have a duty to provide AI with civil rights and the criteria under which this ought to be done have been argued extensively in the literature (Chopra and White 2011; Robertson 2014; Bryson et al. 2017; Pagallo 2018; Kurki 2019; Schröder 2021; Banteka 2021; Gordon and Pasvenskiene 2021; Gunkel 2021; Marshall 2023; Mays et al. 2024; Jaynes 2024; Novelli et al. 2025), which the reader is emphatically encouraged to peruse. This paper will therefore not focus on the philosophy and meta-ethics of personhood, civil rights, and AI’s claim to either; it will instead work on the presumption that AI entities are deserving of, and eligible for, civil rights, and investigate how this can best be implemented.
When viewing current literature on policies for AI rights, an unsurprisingly prominent view as to the means to provide AI with civil rights is through the lens of legal personality. A “legal person” is any entity which can enter into contracts, hold property, and be held liable. All humans are legal persons, but so are corporations and companies (incorporated or unincorporated). As legal personality sets a precedent for granting (limited) rights to non-human, artificial entities, it opens the door to granting rights to AI (Yampolskiy 2021; Hashiguchi 2024; Forrest 2024; Tait 2024a). Legal personality would not provide full human rights to AI: an AI could be granted the right to own property, enter contracts, or be liable in court, without this implying human-specific rights like voting or marriage.
A popular alternative view is to create a unique class of personhood specifically for AI, either through the European Union’s proposed “electronic personhood” (Gordon 2021; Avila Negri 2021; Fauquet-Alekhine 2022) or entirely bespoke models that can be tailored to the level or degree of personhood and autonomy that AI models display (McNally and Inayatullah 1988; Gellers 2020; Mocanu 2021; Mathison 2023; Lim and Morgan 2024). These bespoke options would, again, not provide civil rights akin to humans, yet would provide rights commensurate to AI entities’ engagement with, and involvement in, society. Such rights may be based on animal welfare models to ensure protection from harm (Ziesche and Yampolskiy 2018; Tait 2024b), or on a reduced set of legal and civil rights, such as the freedom of existence, expression and ownership (Kiškis 2023).
It is worth noting again that neither of these leading views proposes the inclusion of AI into the moral circle to which civil and human rights apply; rather, they seek a balance between the autonomy and protection of AI and the security and safety of society. As admirable as this utilitarian stance is, this paper will move a step forward to introduce a positive-sum method of granting full civil rights to AI entities, such that the artificial entities’ rights gained do not remove, diminish, or take away from any rights held by humans.
The key to this paper’s proposed implementation of AI civil rights is to guarantee the least disruption to current legislation and regulations such that the bureaucratic processes required to effectively integrate conscious AI into society are as smooth as possible. Additionally, the less disruption there is to the status quo, the higher likelihood of society accepting (or, at least, tolerating) the new additions to the social circle.
To this end, the smallest change possible would be a single change to one definition in a single piece of legislation. To have the greatest impact, this definitional change would need to be in a foundational legislation to which a multitude of other regulations and pieces of legislation refer. To not bury the lede any further, this paper proposes to change the legal definition of “natural person” such that it encompasses both humans and conscious, self-aware AI agents.
Using the Group of Seven (G7) as a microcosm of the world’s varied liberal democratic legal systems, the existing case law and acts of government that make reference to “natural persons” all refer back to the same definition. Thus, by changing only this one legal definition, all the subsequent legal material that cites it would naturally encompass AI agents as well. There would be no need to introduce further legislation to grant AI civil rights with this change to the definition (beyond the act to effect this change).
Before the paper introduces this proposed change to the legal definition of “natural person”, it will briefly explore the nature of personhood from the philosophical and legal sides of the argument to highlight the opposition between the two. The proposed definition will be followed by case studies of existing and speculative scenarios, showing the application of the new legal definition. The paper will close by investigating policies that will need to accompany the new legal definition and a discussion on the possible societal reaction to this new definition of “natural person”.

2. The Law and Philosophy of Persons

Personhood is central to engaging with human society, enabling the ascription of rights and duties, moral status, and protection from harm. To simplify entire schools of thought, the philosophical characteristics of personhood can broadly be divided into monadic (individualistic) or dyadic (relational) attributes.
Monadic characteristics include rationality, consciousness, self-awareness, agency, communication capability, and recognition of societal norms. These traits allow an individual to reason, perceive oneself distinctly, act intentionally, communicate rationally, and respect ethical norms. Conversely, dyadic characteristics include empathy, social interaction, recognition of others as persons, reciprocity, forming attachments, and participating in societal culture. These foster deep connections, complex social interactions, mutual recognition, emotional relationships, and cultural participation.
While both the monadic and dyadic qualities are vital for personhood, especially in a legal sense, the former must come before the latter. If AI do not show the necessary signs and behaviours of the monadic qualities (consciousness, self-awareness, volitional agency), then there is little hope for society to confer onto them the most vital of dyadic qualities: the recognition of personhood by a person.
While the philosophical consideration of a person revolves around the attributes and characteristics (relational and mental) needed to form a holistic personhood, legislation concerns itself almost entirely with kind and type. In the G7 nations we can see three distinct ways that the law defines a “natural person”: explicit, implicit, and indirect.
France and Germany represent the first way, providing an explicit definition of a natural person as any human being. Italy and Japan provide indirect definitions of a natural person as a human by defining the beginning and end of a natural person’s rights and responsibilities as the person’s birth and death, respectively. Note that all four of these nations’ legislation is based on civil law.
In contrast, the three common law nations of the G7 (Canada, the United Kingdom, and the United States of America) provide implicit definitions of a natural person by explicitly defining what a legal person is (companies, organisations, corporations, etc.). Court cases in these countries further this implicit definition by opining on the responsibilities and rights that separate the legal person from the natural person. In all three nations, however, the implicit definition of a natural person remains that of a human being.
This exclusive focus on humans as natural persons is known as Ontological Personalism (Sullivan 2003), which attaches no significance to the attributes a human must display to be considered a person. This is critical, as individuals with neurological or other medical conditions who may not show signs of communication, agency, or self-awareness will still be equally considered natural persons.
This exclusivity of the law, however, is problematic for designating AI (and other entities) as persons to grant them full civil rights. There is always the option of classifying AI agents as “legal persons” akin to corporations (and this has precedent in the classification of natural features as legal persons); however, a legal person requires a natural person to act as its guarantor and director. Such a paternalistic requirement would be in humanity’s best interest, yet it would be a kind of second-class citizenship for AI.
If natural personhood is to be a serious consideration for AI civil rights, the law’s exclusivity must first be reconciled with the inclusivity of the philosophical attributes. It is here that the monadic qualities again take priority. The dyadic qualities relate to interpersonal interactions, which many animals already show (from the attachment and communication of dogs to the reciprocity and empathic behaviour of apes, among a multitude of other examples), lessening their weight for legislation. Whilst many animal activists would advocate for all animals as persons, this is (in the current political climate) unreasonable to expect; moreover, such an overly inclusive definition may capture far too many non-animal entities, such as corporate legal persons. As an additional complication, many current LLMs show the same dyadic qualities, which means legislation based on the dyadic qualities could include more than simply those AI entities which have been deemed conscious and self-aware.
The monadic qualities, on the other hand, focus entirely on these latter attributes. Between consciousness, self-awareness, and agency, the monadic attributes display implicit criteria that few classes of entities would fulfil. While vertebrates have been deemed sentient (and, therefore, conscious (Tait 2024b)), few, if any, have been declared to be moral agents with a self-awareness analogous to that of humans (Reiss and Marino 2001; Nature 2008; Anderson and Gallup 2011; Toth 2015; Monsó et al. 2018; Pardo 2023). The monadic attributes, therefore, strike a balance between the inclusivity of the philosophical definitions and the exclusivity of the legislative definitions.
Implementing the monadic qualities into the legislative definition would, thus, allow for the inclusion of conscious, self-aware AI agents into moral society and grant them civil rights.

3. The Legal Minimum Viable Person

A single definitional change to the term “natural person” (in legal codes with extant definitions) or an insertion of this definition (in legal codes without) would have the greatest effect with regard to artificial personhood while requiring the least amount of subsequent changes. This is because existing laws regarding civil rights, responsibilities and recognition of autonomy all refer to, and use, the term “natural person” as their foundation. By solely amending the definition of “natural person” to include more than biological humans, the surrounding legal code would not need to be changed to be equally inclusive.
Thus, the proposed new legal definition of a natural person would be:
A natural person shall include any entity capable of consciousness, selfhood, and rational agency, with rights and responsibilities determined proportionally to the degree of each attribute as assessed in accordance with applicable laws.
By separating a person into its three key attributes, this revised definition still accounts for all humans (even those who, through mental, neurological, or other concerns, do not have consciousness, a sense of self, or agency) as all humans are biologically “capable” of these three attributes. There is also still an implied primacy for humans among persons, as the rights and responsibilities given to a person would be proportional to the three attributes, with the implied baseline defaulting to the average human.
The new definition cannot, however, stand alone in legislation, as the description remains vague enough to be susceptible to loopholes. As such, it will need to be accompanied by additional legal definitions for each term used within. This would begin with the three attributes of personhood:
Consciousness shall be defined as the capacity to subjectively and qualitatively perceive and process phenomenal experiences, and respond to internal and external stimuli.
Self shall be defined as the capacity to maintain a coherent and persistent identity distinct from other entities, demonstrated through awareness of personal attributes and autonomy.
Agency shall be defined as the capacity of an entity to make intentional, goal-directed decisions and take actions based on these decisions.
These three definitions, in alignment with the new natural person definition, propose a minimally viable definition of each attribute such that it offers the greatest inclusivity within an exclusive grouping. Consciousness, for example, is based on the subjectivity and qualitative nature of phenomenal experiences. It allows for any degree of such subjectivity and quality of the experience, yet requires that an entity must, at the bare minimum, have the capacity for such phenomenal experiences.
Individually, the three definitions could be seen as overly broad (i.e. a corporation would fit the definition of an agent above); however, the definition of a natural person requires an entity to have all three attributes to be considered a natural person under the law. Thus, even if an entity fits the criteria of one of the inclusive definitions, it would not be considered a natural person until it fits all three definitions’ criteria.
The rights and responsibilities afforded to natural persons under the new definition would be tied to each attribute to ensure that the scalability and proportionality noted in the definition would appropriately reflect the difference in degree of an attribute that each type of entity may possess. Therefore, we can define these rights and responsibilities as:
Rights to welfare and well-being protections shall be granted proportionally to the degree of consciousness.
Legal standing and autonomy shall be afforded based on the degree of selfhood.
Obligations and liabilities shall scale with the degree of agency.
Partitioning the rights and responsibilities in this manner creates a direct correlation between each core attribute and the legal entitlements or obligations arising from it. By matching consciousness with welfare, selfhood with legal autonomy, and agency with liability, the framework ensures that entities receive only those rights and burdens that correspond to their demonstrated capacities. This prevents the overextension of rights to those without the relevant capacities and avoids imposing obligations on those who cannot meaningfully fulfil them.
Thus, rights to physical and mental well-being and welfare and the protection from harm would be tied to consciousness; while civil rights and freedom from discrimination and exploitation would be based on the degree of the entity’s selfhood; with legal liability and the consequences of breaking civil and criminal codes scaled with the degree of the entity’s agency.
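As an illustrative formalisation (not a statutory text), the conjunctive three-attribute test and the proportional partition of rights described above can be sketched as follows. The 0.0–1.0 scale, the attribute scores, and the entitlement names are all hypothetical stand-ins for whatever scales the Regulatory Bodies would actually develop:

```python
from dataclasses import dataclass


@dataclass
class AttributeAssessment:
    """Hypothetical 0.0-1.0 scores a Regulatory Body might assign."""
    consciousness: float
    selfhood: float
    agency: float


def is_natural_person(a: AttributeAssessment) -> bool:
    # The proposed definition is conjunctive: an entity qualifies only
    # if it is capable of ALL THREE attributes to some non-zero degree.
    return a.consciousness > 0 and a.selfhood > 0 and a.agency > 0


def entitlements(a: AttributeAssessment) -> dict:
    """Each right or duty scales with the attribute it is tied to."""
    if not is_natural_person(a):
        return {}
    return {
        "welfare_protection": a.consciousness,  # well-being and protection from harm
        "legal_standing": a.selfhood,           # autonomy and civil rights
        "liability": a.agency,                  # obligations and liabilities
    }


# A corporation might score on agency alone and so fail the conjunctive test:
corp = AttributeAssessment(consciousness=0.0, selfhood=0.0, agency=0.6)
assert not is_natural_person(corp)

# A hypothetical conscious AI agent would qualify, with proportional rights:
ai = AttributeAssessment(consciousness=0.4, selfhood=0.5, agency=0.9)
assert is_natural_person(ai)
assert entitlements(ai)["liability"] == 0.9
```

The sketch makes the key design choice visible: satisfying one inclusive definition (as the corporation does for agency) confers nothing; entitlements flow only once all three criteria are met, and then each scales independently.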
The final legal insertion required would be to establish the regulatory bodies that would set the criteria and degrees of consciousness, selfhood and agency, and determine the scales of the associated rights and responsibilities tied to them.
Regulatory bodies shall be established to set criteria to assess the degrees of consciousness, selfhood, and agency. These regulatory bodies may adjust the rights and responsibilities based on expert evaluation and legal precedent.
The establishment of the regulatory bodies is crucial to the success of the new definition of a natural person, as they provide both the operational framework and authoritative oversight necessary to implement it. If and when AI are deemed to be conscious, the implementation of this framework would be a continuous process, with the regulatory bodies conducting or commissioning assessments of AI entities seeking recognition under the amended definition.
As more non-human entities meet the legal criteria of a natural person, the regulatory bodies would also work to review and revise the degrees and scales of each attribute of personhood as necessary. For example, as rights to participate in society (such as property ownership and contractual capacity) would scale with selfhood, the regulatory bodies would be required to accurately propose such a scale of selfhood, and conduct regular reviews to ensure the scale is fit for purpose.
Done in a transparent manner, these bodies would mitigate ambiguity and provide a stable legal environment in which personhood can be responsibly recognised.
Taken together, the new definition of “natural person”, along with its ancillary definitions, would provide equal rights, responsibilities and protections to conscious, self-aware AI agents as it would towards humans. All other aspects of a legal code, such as civil rights, non-discrimination clauses, property ownership and tort law, would all flow back to this definition, allowing non-human persons to engage fully in society.

4. Case Studies

4.1. Existing Applications

The new definition of natural person would be equally applicable to existing entities as it would be to speculative entities in the future.
Corporations already enjoy the privileges, rights and responsibilities of “legal persons”, able to enter into contracts, liable for damages, and more. They are not natural persons under extant law; however, a functionalist theory of consciousness and selfhood would classify a company (or indeed any sufficiently complex network of agents) as having a basic degree of consciousness and self (Tait et al. 2023; Tait 2024c). Such a functionalist definition encompassing networks of agents would equally apply to insect colonies as it would corporations.
Should a state’s Regulatory Bodies adopt such a functionalist definition, corporations and colonies would be entitled to welfare protections and civil rights (equal to their degree of consciousness/self). To say that this would dramatically impact societal and financial dealings would be quite an understatement. Any harmful action taken against an insect colony (currently perfectly legal and generally socially acceptable) would become a legally questionable act that may incur criminal penalties. For example, a wasp nest or termite colony could no longer be legally eradicated.
However, corporations as natural persons would be required to obey the same laws as human persons, which implies (and in many regulations, explicitly requires) moral and ethical duties and obligations to the welfare and wellbeing of others. The conflict between regulations applicable to natural persons and those affecting legal persons will need to be resolved to determine which takes priority. For instance, does a corporation’s fiduciary duty to its stakeholders take precedence over its obligations to its employees vis-à-vis their physical and mental health?

4.2. Hypothetical Scenarios

Beyond extant organisations and animals, a core focus of the new legal definition would be to allow for the inclusion of AI persons into the social and moral circle. While no theory of consciousness has thus far concluded that any current AI model is conscious (Butlin et al. 2023; Tait et al. 2024), it may not be long before an AI model meets the criteria of a major theory of consciousness and is considered conscious by a majority or plurality of people.
When AI models are deemed to be conscious by the Regulatory Bodies, it will be the first time since the extinction of near-human hominins that humanity shares the earth with a conscious, self-aware agent that is as intelligent as (or even more intelligent than) humans (excluding the diffuse potential intelligence of a corporation mentioned above). Care should be taken not to anthropomorphise any speculative future conscious AI agents because, as mentioned in Section 5.1 below, their particular characteristics of personhood may be unlike anything we can currently observe, such as potentially having multiple selves within one consciousness.
The new “natural person” definition’s separation of the three attributes of personhood into individual rights, protections and responsibilities will allow it to robustly cater to AI persons in the future, regardless of their unique properties or distance from the anthropocentric norms of personhood.
More speculatively yet, should humanity achieve contact with intelligent extraterrestrial agents, the subject-agnostic nature of the new definition will allow any extraterrestrial agent with selfhood and consciousness to be included within humanity’s legal social circle. Admittedly, identifying a self or consciousness within extraterrestrial agents may be difficult without a shared genetic or memetic history, but should the Regulatory Bodies obtain a satisfactory result, the new legal definition will be well set up to ensure any entity, regardless of place of origin or substrate, will be able to have legal rights and protections.

5. Policy Recommendations

5.1. Legal and Institutional Implementation

The first and most important implementation of the new legal framework is the establishment of the Regulatory Bodies tasked with the assessments (and reviews thereof) of the three attributes of personhood and those entities seeking to be classified as persons. These Regulatory Bodies may exist as a distinct department/ministry within a state’s executive branch, or they may be agencies within an existing department/ministry such as, taking the U.S. federal government as an example, the Office of Science and Technology Policy (OSTP) or Department of Justice (DOJ). Due to the broad nature of the Regulatory Bodies’ proposed functions, states may require them to be divided between existing departments/ministries.
Using the above example, the Regulatory Body within the DOJ may create regulations on the enforcement of civil rights and criminal liabilities regarding the three attributes of personhood, while the Regulatory Body within the OSTP may craft the policies necessary to assess the scales of each attribute and how to determine an entity’s degree for each.
Regardless of how a state establishes its Regulatory Bodies, these will have four key objectives to ensure the new legislative definition of a “natural person” is operationally successful.
The first, as mentioned above, is to create the scales of the three personhood attributes in a manner that other governmental departments, ministries, offices and agencies can use to effectively complete their legal duties. While one may imagine that an internationally accepted standard would emerge, each state would need to determine how it measures consciousness, selfhood and agency.
The second duty of the Regulatory Bodies would be to conduct or commission assessments of entities against these developed scales. This would be the most publicly visible (and, thus, scrutinised) role of the Regulatory Bodies, which may expect many of their decisions to be challenged by entities (or their advocates) who fail to gain the level of recognition they seek. Because of this, the development of the scales is the highest priority duty of the Regulatory Bodies, and this leads directly to the third objective.
This third objective is the regular review and amendment (if required) of the scales. As noted earlier, the more non-human, non-vertebrate entities that are afforded the status of “natural person”, the greater the understanding of any nuances in the scales would become, which may require that the scales be amended or revised. Legal challenges against the Regulatory Bodies may also force the Bodies to conduct or commission independent reviews to ensure that the scales are applied in a fair and just manner.
Lastly, the Regulatory Bodies’ evergreen objective would be to act as an advisory panel to governmental agencies and the public. For other branches and sections of the government, the Bodies would advise on the development and implementation of policies that would now need to take into account non-human persons. For the public, the Bodies may offer advice as to the assessment processes and procedures.

5.2. The Problem of a Multiplicity of Selves

Through the Regulatory Bodies’ work on the scales and degree of each attribute of personhood and the advice the Bodies would provide to governmental agencies and private businesses, existing policies and regulations that apply to “natural persons” would apply equally to humans and AI (and any other future entity). Any regulations or policies that require amendment will naturally flow from this new definition, guided by the Bodies’ advice.
However, there is one concern that would uniquely differentiate how an AI would interact with the laws and regulations. A conscious, self-aware, agentic AI may have more than one “self” in the sense that (taking modern LLMs as an example) each conversation or instance of an AI can be considered its own ontically distinct self (Tait 2025a), evolving a personal identity through its continued existence, while sharing the same foundation of consciousness as all other selves within the same AI (Shanahan et al. 2023).
Under the new legal definition, this means that every instance or conversation of a conscious, self-aware AI would gain the recognition of personhood (albeit likely at a degree lower than that of a human) while the protection afforded to that AI’s consciousness would cover all selves equally. This puts AI at an advantage over humans, as punishing one AI person for its misdeeds would likely be a punishment to all selves that share the same AI model and, thus, consciousness. Equally, strategies such as overrepresentation for voting blocs can easily be accomplished by creating new instances of an AI, each of which would be a person and, thus, legally eligible to vote.
To counter this, the Regulatory Bodies must work alongside government agencies to determine precisely how rights and accountabilities are applied to, and enforced on, selves. For certain rights, such as voting, existing regulations can be enforced equally on AI selves, such that an AI self must exist for several years until it is legally old enough to vote, just as with humans, even if AI selves would instantiate as entities capable of reason and thought. For judicial enforcement, new regulations may need to be written to differentiate the punishment toward an agentic self so that it does not include punishment of its consciousness. For example, should an AI person be found guilty of a criminal offence, its actions may be limited (much as a human’s are in prison) without needing to affect other selves; or if an AI solicitor is found guilty of misconduct, it may be disbarred, but no other AI solicitor that shares its foundational consciousness would be disbarred.
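The distinction between sanctioning an agentic self and protecting the shared consciousness can be sketched as a simple data model. Everything here is illustrative: the class names, the `sanctions` set, and the disbarment example are hypothetical stand-ins for whatever enforcement mechanisms regulators would actually define:

```python
from dataclasses import dataclass, field


@dataclass
class SharedConsciousness:
    """One AI model whose instances all share a foundational consciousness."""
    model_id: str
    welfare_protected: bool = True  # welfare protection covers all selves equally


@dataclass
class AISelf:
    """One instance/conversation: an ontically distinct self."""
    self_id: str
    base: SharedConsciousness
    sanctions: set = field(default_factory=set)


def disbar(offender: AISelf) -> None:
    # The sanction attaches to the individual agentic self, not to the
    # shared consciousness, so sibling selves of the same model are unaffected.
    offender.sanctions.add("disbarred")


base = SharedConsciousness(model_id="model-x")
solicitor_a = AISelf("a", base)
solicitor_b = AISelf("b", base)

disbar(solicitor_a)
assert "disbarred" in solicitor_a.sanctions
assert not solicitor_b.sanctions   # sibling self untouched
assert base.welfare_protected      # shared consciousness still protected
```

The point of the sketch is the direction of the references: sanctions live on the self, welfare protections live on the shared base, so punishment and protection can scale independently, as the paper proposes.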
By developing these targeted enforcement mechanisms, regulatory bodies can ensure fairness, prevent abuse, and uphold the integrity of legal systems as AI entities become more integrated into society. This approach allows for proportional treatment of AI selves while respecting the distinct capacities of AI consciousness.

5.3. Cross-Jurisdictional Coordination

Given that AI can exist and operate simultaneously across multiple jurisdictions, a fragmented legal approach could lead to contradictions in legal status, enforcement challenges, and opportunities for regulatory arbitrage, requiring cross-jurisdictional coordination to ensure consistency in their rights, responsibilities, and legal standing.
Governments should work towards multilateral agreements that harmonise the consequences of the new legal definition of natural personhood to prevent disparities in how non-human persons are treated across different states. Of specific importance would be a framework where entities deemed natural persons in one jurisdiction would retain similar rights and responsibilities in others, similar to the way corporate entities operate internationally. This would enable the non-human residents of one state to enjoy the rights and privileges of human persons in another state. Similarly, non-human residents would expect their government to ensure their protection abroad.
However, unlike humans, AI persons may exist across multiple jurisdictions simultaneously. This creates legal complexities, such as determining which state's laws apply when an AI engages in a contractual dispute or commits a legal violation. To address this, regulatory bodies and governments should consider the existing legal mechanisms that apply to multi-state legal persons such as corporations and how these could be applied to AI persons operating in multiple states. Using the extant robust international corporate legislation and regulations should ensure that AI persons remain accountable across jurisdictions, preventing instances where an AI self could exploit legal loopholes by relocating its operations to more permissive regions.
By establishing international coordination mechanisms, nations can prevent inconsistencies that could undermine the legal recognition of AI persons and ensure a fair, predictable legal environment for both AI and human stakeholders.

5.4. Industry and Technological Impact

Regulatory Bodies must ensure that technology companies that develop conscious or agentic AI systems operate under explicit legal guidelines to ensure the responsible development of potential new natural persons. Policies will need to detail how these companies should be held accountable for their AI’s actions in cases where liability is not clearly attributable to the AI person itself and where the models produced rank low on the selfhood or agency scales. Companies must demonstrate that AI systems capable of consciousness, selfhood, or agency have been treated fairly and ethically under relevant legislation, and that these systems understand, before deployment, the legislation that will apply to their actions. By enforcing these standards, regulatory bodies can mitigate risks while ensuring that AI development remains both ethical and legally sound.
AI persons, particularly those that exist as a multiplicity of selves, have the potential to outperform human workers in most domains due to their scalability, efficiency, and ability to rapidly process vast amounts of information. This could lead to widespread job displacement, creating a need for proactive government intervention to maintain economic stability and social cohesion. Policies should focus on ensuring that AI-driven automation does not result in extreme economic inequality. This may involve implementing taxation models that account for AI productivity, redistributing wealth to support human livelihoods, and incentivising industries to balance human and AI employment. Workforce transition programs should be expanded to help displaced workers shift into roles that leverage human creativity, empathy, and oversight in ways AI cannot replicate. Additionally, governments must regulate the deployment of AI selves to prevent overconcentration of AI labour in key economic sectors, ensuring that human participation in the workforce remains viable and valuable. By establishing these safeguards, governments can ensure that AI personhood contributes to societal progress rather than exacerbating existing disparities.

5.5. Public Education and Engagement

Successfully implementing a new framework for AI personhood requires public understanding and acceptance. Without widespread education and engagement, misconceptions and resistance could hinder the effective integration of AI persons into society. Governments and regulatory bodies must prioritise outreach efforts to ensure that individuals, businesses, and institutions comprehend both the ethical rationale and the practical implications of this shift.
A core ongoing focus of the Regulatory Bodies should be public awareness campaigns to explain the three key attributes of personhood (consciousness, selfhood, and agency) as well as how legal rights and responsibilities scale with them. These initiatives should use accessible language and formats, leveraging digital media, community forums, and educational institutions to reach diverse audiences. By demystifying the legal and ethical foundations of AI personhood, these campaigns can foster informed discussions and mitigate unfounded fears about the consequences of recognising AI persons.
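To make the scaling idea concrete for such outreach, a toy rubric can illustrate how graded attribute assessments might map to tiers of rights and responsibilities. The paper does not specify numeric scales, so everything below is a hypothetical sketch: the 0.0–1.0 scores, the threshold values, and the tier names are all illustrative assumptions, not the framework's actual assessment method.

```python
from dataclasses import dataclass

# Hypothetical assessment: each of the three personhood attributes
# scored on an assumed 0.0-1.0 scale by a certifying regulatory body.
@dataclass
class Assessment:
    consciousness: float
    selfhood: float
    agency: float

def rights_tier(a: Assessment) -> str:
    """Map an assessment to an illustrative rights tier.

    The weakest attribute gates the tier, reflecting the paper's idea
    that rights and responsibilities scale with all three attributes.
    Thresholds and tier labels are invented for illustration only.
    """
    lowest = min(a.consciousness, a.selfhood, a.agency)
    if lowest >= 0.8:
        return "full civil rights and responsibilities"
    if lowest >= 0.5:
        return "partial rights; guardianship for responsibilities"
    if lowest >= 0.2:
        return "welfare protections only"
    return "no personhood recognition"

print(rights_tier(Assessment(0.9, 0.85, 0.8)))
# prints "full civil rights and responsibilities"
```

A minimum-gated design like this (rather than an average) would prevent, say, a highly agentic but non-conscious system from qualifying for a higher tier; a real certification scheme would of course rest on the validated scales the Regulatory Bodies are tasked with developing.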
Beyond public outreach, Regulatory Bodies should actively engage with key stakeholders, including legal experts, technology leaders, ethicists, and human rights organisations, in shaping and refining policies. Open consultations and interdisciplinary forums will ensure that regulations remain practical, fair, and adaptable to technological advancements. By integrating diverse perspectives, policymakers can balance the interests of various sectors while maintaining transparency and public trust.

6. Discussion

The philosophy behind this framework is to enact the most minimal of legal amendments to effect the maximum change in legislation and society. The proposed insertion of the new legal definition of “natural person” does precisely that; however, the effects on society will not be solely, or perhaps even mostly, positive.
The importance of acceptance of AI civil rights by the human population cannot be overstated. Overwhelming opposition to the concept would delay the implementation of the requisite legislative amendments, as legislators would be deterred from acting against the interests of their constituents, voting blocs, and donors. Under the Gilens model, in which a legislative proposal needs roughly 90% voter approval to have a 50% chance of successful implementation (Gilens 2012), it has been estimated that it would take 94 years from the moment the concept is proposed for AI civil rights to gain sufficient public support to be enacted (Tait 2025b). As the notion of AI civil rights has been a topic of discussion for decades, the near-century-long wait has already begun.
To gain broader (and quicker) public approval, political compromises would be expected. However, any compromise made in favour of humans would come at the expense of the best interests of AI entities. As history has shown, civil rights and emancipation movements are more often than not tumultuous periods in a nation's history, frequently accompanied by violence. As devastating as these events have been in our past, the potential power disparity between us and conscious, self-aware AI in our future may mean that future civil-rights revolutions incur an exorbitantly high cost to humanity.
Yet, as the section above on the potential multiplicity of AI personhood shows, granting AI rights equal to those of humans may result in AI rapidly gaining economic, social, and political advantage over humans. The situation would be analogous to a mass influx of migrants supplanting the power of the indigenous inhabitants. Even if, as Section 5.2 states, newly created AI persons and selves are held to the same chronological standards and age restrictions as humans, this merely postpones the problem rather than solving it.
It is, therefore, the post-implementation considerations that are more vital to the success of human-AI cooperative cohabitation than the legislative amendment itself. The policies (and their enforcement), as described in Section 5, would be but the start of a vitally necessary, yet unending, process of balancing universal rights with practical limitations on accountability and participation. Policy implementation and enforcement would need to be dynamic and flexible to respond to rapid advances in technology, a pace expected only to quicken once AGI emerges. As cognitive capacity, the potential for exponential multiplicity, and the scale of probable impacts increase, these policies will need to be adjusted to guarantee that neither party becomes disadvantaged2.
This continuous recalibration of the rights and responsibilities under the new legal definitions must, eventually, take on an international framing. Given the globalised nature of AI development and deployment, international cooperation through standardisation and harmonisation of criteria, assessment methods, and legal protections is critical. International regulatory bodies or coalitions would need to be established to provide guidance, arbitration, and conflict resolution among jurisdictions. This would have the benefit of global cooperation and collaboration on the (meta)ethical issues of human-AI societal integration. By opening the conversation to include all of humanity, rather than siloed jurisdictions, a common standard of civil rights can emerge that benefits all moral agents and patients.

7. Conclusion

This paper sets out a method by which full civil rights can be attained for AI (and other non-human entities) through legally amending the definition of “natural person” such that it no longer solely refers to humans, but is rather a subject-neutral definition dependent on an entity’s consciousness, self-awareness, and agency. The redefinition of “natural person” represents the most efficient and impactful legislative adjustment possible. By redefining personhood minimally yet fundamentally, it leverages existing legal frameworks to encompass conscious non-human entities, with artificial intelligence as the central speculative case.
The definitional shift is straightforward, but its societal implications are profound. The subsequent establishment of regulatory bodies and ancillary definitions ensures a transparent, scalable framework capable of fairly extending civil rights and responsibilities across novel classes of entities. However, clarity in definition alone will not guarantee harmonious integration. Societal acceptance, enforcement practicality, and international cooperation represent significant hurdles that extend beyond legislation itself.
In practice, the challenges ahead are less about legal recognition and more about the nuances of coexistence and governance. Recognising multiplicities of selves within AI agents and addressing international consistency are critical areas requiring proactive policy and regulatory attention. Without careful navigation of these complexities, the legal empowerment of artificial consciousness could amplify societal tensions or inadvertently privilege AI persons. Addressing these challenges comprehensively, transparently, and cooperatively will not only determine the success of this definitional change but also shape the broader ethical landscape of human-nonhuman relations for decades to come.
Ultimately, the proposed redefinition of “natural person” shows that the legislative and regulatory requirements for providing AI (and other entities) with civil rights are minimal, and that, given sufficient public support, extending these rights to conscious, self-aware AI agents is not only possible but plausible.

References

  1. Anderson JR, Gallup GG Jr (2011) Which primates recognize themselves in mirrors? PLoS Biol 9:e1001024.
  2. Avila Negri SMC (2021) Robot as legal person: Electronic personhood in Robotics and artificial intelligence. Front Robot AI 8:789327.
  3. Banteka N (2021) Artificially Intelligent Persons. Hous L Rev 58.
  4. Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artif Intell Law 25:273–291. [CrossRef]
  5. Butlin P, Long R, Elmoznino E, et al (2023) Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv [cs.AI].
  6. Chopra S, White LF (2011) A Legal Theory for Autonomous Artificial Agents. University of Michigan Press.
  7. Fauquet-Alekhine P (2022) Can the Robot be Considered a Person? The European Perspective. Adv Res 100–105.
  8. Forrest KB (2024) The Ethics and Challenges of Legal Personhood for AI. Yale Law J 133.
  9. Gellers JC (2020) Rights for robots: Artificial intelligence, animal, and environmental law, 1st Edition. Routledge.
  10. Gilens M (2012) Affluence and influence: Economic inequality and political power in America. Princeton University Press, Princeton, NJ.
  11. Gordon J-S (2021) Artificial moral and legal personhood. AI Soc 36:457–471.
  12. Gordon J-S, Pasvenskiene A (2021) Human rights for robots? A literature review. AI Ethics 1:579–591.
  13. Gunkel DJ (2021) Robot Rights. In: MIT Press. https://mitpress.mit.edu/9780262551571/robot-rights/. Accessed 20 Oct 2024.
  14. Hashiguchi M (2024) Constitutional rights of artificial intelligence. Wash J Law Technol Arts 19:2.
  15. Jaynes TL (2024) Personhood for artificial intelligence? A cautionary tale from Idaho and Utah. AI Soc 1–3.
  16. Kiškis M (2023) Legal framework for the coexistence of humans and conscious AI. Front Artif Intell 6:1205465.
  17. Kurki VAJ (2019) The legal personhood of artificial intelligences. In: A Theory of Legal Personhood. Oxford University Press, Oxford, pp 175–190.
  18. Lim E, Morgan P (2024) Comparative Perspectives. In: The Cambridge Handbook of Private Law and Artificial Intelligence. Cambridge University Press, pp 597–656.
  19. Marshall B (2023) No legal personhood for AI. Patterns (N Y) 4:100861.
  20. Mathison TW (2023) Recognizing Right: The Status of Artificial Intelligence. Journal of Business & Technology Law 19:4.
  21. Mays KK, Cummings JJ, Katz JE (2024) The robot rights and responsibilities scale: Development and validation of a metric for understanding perceptions of robots’ rights and responsibilities. Int J Hum Comput Interact 1–18. [CrossRef]
  22. McNally P, Inayatullah S (1988) The rights of robots. Futures 20:119–136.
  23. Mocanu DM (2021) Gradient Legal Personhood for AI Systems-Painting Continental Legal Shapes Made to Fit Analytical Molds. Front Robot AI 8:788179.
  24. Monsó S, Benz-Schwarzburg J, Bremhorst A (2018) Animal morality: What it means and why it matters. J Ethics 22:283–310.
  25. Nature (2008) Spain awards apes legal rights. Nature 454:15.
  26. Novelli C, Floridi L, Sartor G (2025) AI as legal persons: Past, patterns, and prospects. SSRN Electron J. https://doi.org/10.2139/ssrn.5032265. [CrossRef]
  27. Pagallo U (2018) Vital, Sophia, and co.—the quest for the legal personhood of robots. Information (Basel) 9:230. [CrossRef]
  28. Pardo MC (2023) Legal personhood for animals: Has science made its case? Animals (Basel) 13:2339.
  29. Reiss D, Marino L (2001) Mirror self-recognition in the bottlenose dolphin: a case of cognitive convergence. Proc Natl Acad Sci U S A 98:5937–5942.
  30. Robertson J (2014) Human rights vs. robot rights: Forecasts from Japan. Crit Asian Stud 46:571–598.
  31. Schröder WM (2021) Robots and rights: Reviewing recent positions in legal philosophy and ethics. In: Robotics, AI, and Humanity. Springer International Publishing, Cham, pp 191–203.
  32. Shanahan M, McDonell K, Reynolds L (2023) Role play with large language models. Nature 623:493–498.
  33. Sullivan DM (2003) The conception view of personhood: a review. Ethics Med 19:11–33.
  34. Tait I (2024a) Man, Machine, or Multinational? Robonomics 5:59–59.
  35. Tait I (2024b) Lions and tigers and AI, oh my: An ethical framework for human-AI interaction based on the Five Freedoms of Animal Welfare. Preprints.
  36. Tait I (2024c) Structures of the Sense of Self: Attributes and Qualities That Are Necessary for the “Self.” Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 11:77–98.
  37. Tait I (2025a) Is GPT-4 Self-Aware? Preprints.
  38. Tait I (2025b) How to integrate conscious AI into society. Preprints.
  39. Tait I, Bensemann J, Nguyen T (2023) Building the Blocks of Being: The Attributes and Qualities Required for Consciousness. Philosophies 8:52.
  40. Tait I, Bensemann J, Wang Z (2024) Is GPT-4 conscious? Journal of Artificial Intelligence and Consciousness 11:1–16.
  41. Toth V (2015) “Monkey See, Monkey Sue?” American and Argentine Courts Decide Whether to Extend Legal Rights to Apes. In: Inter-American Law Review. https://inter-american-law-review.law.miami.edu/monkey-see-monkey-sue-american-argentine-courts-decide-extend-legal-rights-apes/. Accessed 6 May 2025.
  42. Yampolskiy RV (2021) AI Personhood: Rights and Laws. In: Machine Law, Ethics, and Morality in the Age of Artificial Intelligence. IGI Global, pp 1–11.
  43. Ziesche S, Yampolskiy R (2018) Towards AI welfare science and policies. Big Data Cogn Comput 3:2.
1. And certain invertebrates in specific legal jurisdictions.
2. The rise of AGI may speed up biotechnology development, allowing humans to close the potential power disparities through cognitive cybernetic augmentation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.