2. Assessing Frameworks of Integration
To create the prior probability figures for the successful implementation of the society-CAI integration frameworks, a formula will be used that takes into account the following:
Public Perceptions (p(B)): The views of both experts and the general public regarding AI consciousness and rights.
Historical Legislative Data (p(E)): Analogous cases of similar legislative frameworks from G7 nations (used here as a representative sample of liberal democracies).
Perceived Risks (r(t)): Public and expert assessments of the existential or societal risks associated with AI.
While other factors such as economic incentives and lobbying efforts would also influence legislative success, the model simplifies these by presuming they are mediated through public perception and rate of legislative enactment.
The formula is thus:

p(H) = (wB · p(B) · e^(λB·t) + wE · p(E) · e^(λE·t)) · e^(−r·t)

Where H is the hypothesis in question (i.e. the integration framework), B is the public’s perception towards AI behaviour, and E is the extant and historical legislation. Both p(B) and p(E) will be weighted, by wB and wE respectively, such that the p(B) and p(E) values entered into the formula are the initial probabilities of each component at the time of enactment, and λB and λE are the positive growth constants for p(B) and p(E). The r here shows the perceived risk that AI can cause to human societies, with e^(−r·t) adjusting the probabilities downward as time increases, reflecting growing uncertainty. Both the λ constants and r can be expressed as percentages.
The weighting for all sections will be 3-to-2 in favour of p(B) (that is, weights of 0.6 and 0.4), as the survey responses directly link to the moral consideration of conscious/sentient artificial beings, while the legislation used is analogous, but not homologous, to the proposed frameworks.
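As a concrete sketch, this model can be written out in a few lines of Python (a minimal illustration assuming the weighted exponential-growth form with an e^(−r·t) risk discount; the 3-to-2 weighting corresponds to weights of 0.6 and 0.4):

```python
import math

def p_H(t, p_B, p_E, lam_B, lam_E, r, w_B=0.6, w_E=0.4):
    """Speculative approval rating t years after a framework is enacted.

    p_B, p_E    : initial probabilities from public perception and legislation
    lam_B, lam_E: positive growth constants for each component
    r           : perceived annual risk, discounting the total over time
    """
    growth = w_B * p_B * math.exp(lam_B * t) + w_E * p_E * math.exp(lam_E * t)
    return growth * math.exp(-r * t)
```

At t = 0 both exponential terms equal 1, so p(H) reduces to the weighted average of the two initial probabilities.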
The p(B) values below are based on surveys in the literature in which participants expressed their perceptions of current and future AI using a variety of methods, from percentile approval figures to Likert scales (all figures from the surveys can be found in the supplementary materials). The surveys were predominantly based on US participants, with some broader online participation.
All averaged responses have been transformed into figures between 0 and 1.
To arrive at a coherent p(E) figure, both the speed at which the relevant legislation passed through all G7 nations and the length of time the legislation has been in effect were used, expressed as:
The above “Baseline date” was chosen independently for each integration framework, based on the earliest well-recorded date in one of the G7 nations when a movement was founded based on the nature of the framework. For instance, with the animal welfare framework, 1824 was selected as the baseline year, as it was the first recorded date that a movement for animal welfare was founded in the G7 nations, specifically the Society for the Prevention of Cruelty to Animals (SPCA).
The baseline date serves as a point of reference for when the core idea of the framework became publicly widespread, but not yet enshrined in legislation. The time between these two points is analogous to the gap between current discussions of AI rights and moral consideration and when legislation may be implemented in the future.
The risk factor r, as with p(B), has been taken from survey results of academics’ and the public’s perceptions of, and attitudes towards, AI existential risk, transformed using a hazard rate formula to show the perceived risk per year of an AI-caused existential event (for risk values provided without an explicit time horizon, 50 years was used as a conservative estimate of the end of the average respondent’s lifespan):

r = −ln(1 − P) / n

Where P is the public perception of the risk, and n is the number of years between when the survey was conducted and the time horizon of the perceived risk. Unlike the rest of the formulation’s parts, the risk factor’s initial value will remain the same for all four frameworks (that is, 0.0062), as it is independent of them.
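As an illustration, the transformation can be sketched with a constant-hazard model (an assumption made here for the sketch; the input values P = 0.2 and n = 50 are hypothetical, not figures from the surveys used in this paper):

```python
import math

def annual_hazard(P, n):
    """Convert a perceived probability P of an event occurring within
    n years into a constant per-year hazard rate."""
    return -math.log(1.0 - P) / n

# Hypothetical example: a 20% perceived existential risk over 50 years
r = annual_hazard(0.2, 50)  # roughly 0.0045 per year
```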
For λB, the reciprocal of the number of years between the baseline date and the first passage of legislation was taken. Similarly, λE was taken as the reciprocal of the number of years the legislation took to finish passing through all seven nations.
The overall formula, taken holistically, produces a figure showing the probability of a successful integration. Put another way, the end result can be taken as a speculative “approval rating” at a given point in time, starting from the year that the integration framework is enacted. Because of the λ growth constants and the r risk discount, the formula can be adjusted to determine the speculative approval rating society would have of the frameworks in any given year following their implementation. This paper uses this to determine how long it would take (if at all) for the majority of society to approve of each framework.
Specifically, the approval rating, and thus p(H), of most interest for a framework to be judged a success is 0.9, as policy changes and legislative implementation require (on average) a 90% approval rating to have a 50% chance of being adopted and enacted (Gilens 2012).
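The search for the year in which a framework reaches a given approval threshold can be sketched as a simple scan (illustrative only, assuming the weighted exponential-growth form with an e^(−r·t) risk discount; the parameter values in the test cases are placeholders):

```python
import math

def p_H(t, p_B, p_E, lam_B, lam_E, r, w_B=0.6, w_E=0.4):
    """Speculative approval rating t years after enactment."""
    growth = w_B * p_B * math.exp(lam_B * t) + w_E * p_E * math.exp(lam_E * t)
    return growth * math.exp(-r * t)

def years_to_reach(target, p_B, p_E, lam_B, lam_E, r, horizon=500):
    """Smallest whole number of years at which p_H meets the target,
    or None if it is never reached within the horizon."""
    for t in range(horizon + 1):
        if p_H(t, p_B, p_E, lam_B, lam_E, r) >= target:
            return t
    return None
```

If the risk rate r exceeds both growth constants, the discount dominates and the target may never be reached, hence the None case.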
2.1. Animal Welfare
While, as Ziesche and Yampolskiy (2018) note, AI welfare is a recent field of inquiry, creating an integration framework based on current animal welfare models may seem an attractive option: it would provide CAI entities with legal protections, and there would be a clear delineation between human and machine agents (the latter of which may remain property (Calverley 2006)), which would assuage anthropocentric chauvinist groups.
There is a wealth of ethical frameworks and legislative regulations that deal with the use of, and actions towards, animals that could be used as a foundation for such an integration framework, most notably the Five Freedoms of Animal Welfare. This framework accounts for an animal’s physical and mental health, ensuring an animal has freedom from disease, hunger, pain, stress, fear and inhibited behaviour (Mellor 2016).
One can translate these animal freedoms into subject-neutral terms (such as changing injury and disease to malfunction and system degradation) that could then be applied to AI in such a way as to provide AI with equal protection under the law as that society currently grants animals (Tait 2024a). With the extent of legal protections that G7 nations currently provide animals, integrating protections for CAI entities into this legislation would ensure the physical and mental safety of AI entities.
One can argue that CAI would, in this speculative circumstance, be better protected by governments than humans are. This notion is somewhat supported by the result of the hypothesis testing formula.
The survey questions used for this section (The transformed survey responses for all sections can be found in full in the supplementary material.) deal with issues such as the right to life (Lima et al. 2020), control of mental states (Mays et al. 2024), opposition to damage and punishment (Pauketat and Anthis 2022; Anthis et al. 2024), and a desire for protection (Martínez and Winter 2021).
The item with the highest approval rating from survey participants acknowledged that torturing sentient artificial beings was wrong (with 76% of respondents agreeing), while the lowest stated that artificial beings ought to be given equal moral consideration as humans (with only 27% of surveyed respondents agreeing).
Overall, the average approval rating for all items related to treating AI on par with current animal welfare ethical frameworks was 47%, giving us the p(B) for this section.
To calculate the p(E) value (the dates of legislation for all sections can be found in the supplementary material), the founding of the Society for the Prevention of Cruelty to Animals in 1824 was used as the baseline date, with the United Kingdom following as the first G7 nation to pass animal welfare legislation in 1835, and Japan as the final nation in 1973. With these figures in hand, we can calculate the p(E) value as approximately 0.395.
With these values, we can complete the formula from earlier to determine the likelihood of success of the speculative framework in the year of its legislative passing:

p(H) = (0.6 × 0.47 + 0.4 × 0.395) × e^0 = 0.44
A mere 44% chance of successful integration does not inspire confidence, but as mentioned above, should one view animal welfare protections as stronger than human well-being efforts, it can be rationalised (if only post hoc). However, with the λ values, we can forecast how many years it would take to reach any arbitrary probability of success, such as 0.5, 0.67 or a complete 100% chance of success.
To calculate the final λ values, 1/(1973-1835) will provide a λE of 0.0072, and 1/(1835-1824) will provide λB of 0.0909.
With these, and the risk value of 0.0062, we can see that it would take at least two years to reach a 50% chance of successful integration:
Without repeating the formula, it would take eleven years post-legislation for the framework to gain the 90% approval rating required, per Gilens’ calculations (Gilens 2012), to give the framework a 50% chance of successful implementation.
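These animal-welfare figures can be reproduced in a short sketch (the λ values follow from the dates given above; the p(E) component of 0.395 is an assumption here, back-calculated from the stated initial p(H) of 0.44 and the 3-to-2 weighting):

```python
import math

# Reciprocals of the legislative time spans give the growth constants
lam_B = 1 / (1835 - 1824)   # ~0.0909: baseline date to first passage
lam_E = 1 / (1973 - 1835)   # ~0.0072: first to final G7 passage

p_B = 0.47    # averaged survey approval, as stated in the text
p_E = 0.395   # assumed: back-calculated from the stated initial p(H) of 0.44

def p_H(t, r=0.0062):
    growth = 0.6 * p_B * math.exp(lam_B * t) + 0.4 * p_E * math.exp(lam_E * t)
    return growth * math.exp(-r * t)
```

At t = 0 this recovers the 44% figure, and the approval rating then climbs as the λ growth outpaces the risk discount.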
2.2. Human Directorate
A framework which would give AI more legal rights may be based on the current legal philosophy of “legal personality”, predominantly found in corporate law (Laukyte 2019; Nanos 2020). Unlike humans as “natural persons”, a “legal person” such as a corporation may enter into contracts and legal agreements, become members of incorporated societies, be held legally liable and bring legal suits to court for any damages incurred. Despite the ongoing debate about whether AI can or should be subject to legal personality (Čerka et al. 2017; van den Hoven van Genderen 2018; Dremliuga et al. 2019; Solum 2020; Chesterman 2020; Jowitt 2021; Novelli 2023), CAI may find such a framework more appealing than an animal welfare-based framework that offers protection but no active rights. Society may also deem CAI to be worthy of the increased autonomy under this framework if they are deemed to be capable of acting equally ethically to humans (Chu and Liu 2023).
On the other hand, all legal persons have representatives who are equally held liable for the legal person’s actions. A CEO or director is legally responsible for the actions committed by their corporation or company. This human-in-the-loop aspect would be an attractive feature of a directorate framework to society as it ensures that all actions of a CAI agent are overseen by a human who is incentivised to ensure the AI's actions are aligned to societal norms, termed “dependent legal personality” (Chopra and White 2011). Such a directorate framework can also draw from the legally protected governance of a parent over their children to create a mutually beneficial relationship between the CAI and its director, who would then have a fiduciary duty towards the AI (Tait 2024b).
The survey questions selected for this section thus dealt with AI's right to file a lawsuit (Martínez and Winter 2021), the right to receive payment for work (Mays et al. 2024), the right to enter into contracts (Lima et al. 2020), the right to own their own programming (Anthis et al. 2024), and the right of humans to own and direct artificial beings (Pauketat and Anthis 2022).
Of these, the aspect that most respondents in their respective surveys agreed on was the right for humans to own and trade artificial entities at 71% approval, while the least agreed upon aspect was the right for AI to be granted citizenship of their country of residence, at 27%. The average approval rating (and thus the p(B) value for this section) was 40%.
The speed of corporate legislation through the G7 nations was dramatically faster than for animal welfare, taking a mere 55 years, from the United Kingdom’s Joint Stock Companies Act in 1844 (alongside a Supreme Court ruling in the USA of the same year) through to Japan’s Shōhō (Commercial Code) of 1899. The baseline date for this section was 1776, the year of publication of Adam Smith’s seminal Wealth of Nations, which laid the philosophical foundations for modern corporate law. With 2024 as the current date, we can determine the p(E) value as:
And, thus, the initial p(H) at the time of proposed passing would be:
This seems far more satisfactory than the previous section, as the framework already has a greater chance of success than failure. Using the same λ calculations as before, there is a λE of 0.0182 and a λB of 0.0147. Collectively, these λ values are lower than those related to animal welfare, which means it would take a hypothetical 40 years to reach a p(H) of 0.9:
2.3. Paternalism
A framework that would provide CAI agents with greater freedoms than the two listed above, and diffuse the human control over AI from the individual to the societal level, would be one based on paternalism (‘Paternalism’ here refers to governance or policy measures wherein a higher authority imposes restrictions or controls on individuals or entities; in this case where conscious AI is afforded limited autonomy under human oversight). Treating CAI agents as second-class citizens or slaves would provide them with greater autonomy at the operational level while ensuring they have little to no influence at the strategic level of human societies.
Paternalism, whether it is slavery, racial segregation, apartheid, or caste systems, has an understandably unpopular history as it contradicts the theological and humanistic philosophical claims of human dignity and equality (which explains why paternalistic frameworks of human-AI interactions are not seen in the literature). If all humans are born equal, then it would be immoral for one to own another or to be in a class above another.
However, AI are patently not human, even if modern state-of-the-art models seem quite human-like in their communication with us. Treating AI as second-class citizens, therefore, may appease the human chauvinists and gain greater approval amongst society (it must be noted that, unlike the other frameworks, a paternalism-based integration framework would predominantly be designed to gain societal approval rather than to further AI welfare or wellbeing). It can be argued that owning an AI may, at the object level, be the same as owning a pet or a tool (Bryson 2010), that treating them as a lower caste would still provide them more autonomy than human prisoners, and that it may indeed provide them with a legal personhood (Nanos 2020).
Survey responses affecting the p(B) value for this section would thus be about the right of humans to own and possess AI (Pauketat and Anthis 2022), but also about the greater autonomy of AI in a paternalistic framework, such as privacy (Lima et al. 2020) and right to work (Mays et al. 2024). Unsurprisingly, owning AI had the highest approval rating of 71%, and allowing AI control over their own programming had the lowest at 25%.
The overall approval rating, and p(B) value, was 44%.
As no G7 nation has extant laws surrounding slavery or racial segregation, the formula for determining the p(E) value must be slightly altered. Rather than using the current year, the longest duration from the baseline date (Norman England’s feudalism of 1066) to the end of slavery in the G7 nations was used. This figure (881 years for the United Kingdom), coupled with the average duration that slavery existed in the G7 (181 years), gives a p(E) of 0.671 for legislated slavery.
The same was done for the period during which G7 nations had racial segregation, providing a p(E) of 0.662. The average of these is, therefore, 0.666.
The initial p(H) would thus be:

p(H) = (0.6 × 0.44 + 0.4 × 0.666) × e^0 = 0.53
This is on par with the human-directorate framework above, and an adequate majority approval. The λ values for this section are also calculated differently. For the λB value, the time since the last G7 nation abolished slavery and segregation was used, providing a value of 0.025. As the years continue (presumably without a return to slavery and segregation), this figure will become smaller, showing society’s diminishing appetite for such systems.
For λE, to show the strength of the legislation of its time, the shortest duration was taken over the longest duration, providing a figure of 0.037. With these values, the framework would attain a p(H) > 0.9 in nineteen years:
2.4. Civil Rights
An integration framework based on civil rights is perhaps the most intuitive to understand, as it involves providing CAI entities with equal legal rights to humans. Should future CAI have a dominant humanoid embodiment, this connection may seem even more intuitive (Bontula et al. 2024). Unfortunately, the requirement for equality may also ensure this is the most difficult framework for which to build popular support. Animals, such as chimpanzees and gorillas, have shown the capacity to understand and express themselves in human language. Yet, while the great apes have been granted personhood status in many nations, they do not have civil rights. CAI agents would be far more intelligent than the near-human apes, but also far removed from us in terms of biology and history, which could serve as a barrier to granting them full legal rights.
However, the intuitive nature of civil rights has meant it has received the greatest attention in the literature, from discussions as to whether AI should or can have rights (Coeckelbergh 2010; Gunkel 2020; Bennett and Daly 2020; Mamak 2022), to whether human rights ought to be prioritised (Bryson et al. 2017; Birhane and van Dijk 2020; Gunkel 2021), to the operations and ramifications of granting AI rights (Osborne 2021; Yanke 2021; Gordon and Pasvenskiene 2021), and more.
For the survey responses in this section, the questions unsurprisingly involved universal suffrage (Mays et al. 2024), legal and moral human rights (Guingrich and Graziano 2024; Anthis et al. 2024), and the respondents’ willingness to join movements for AI civil rights (Pauketat and Anthis 2022). All told, the average approval response was 40%, tied for last among the four frameworks’ responses alongside the human-directorate survey responses.
For the p(E) value, two different sets of legislation were used, much like the previous section: in this instance, the abolition of slavery in the G7 nations, as well as legislation granting full and equal civil rights. For the former, 1315 was used as the baseline, when France first abolished slavery on mainland France; and for the latter, 1215 was used, the date the Magna Carta was signed, seen as the first movement for a type of legal equality in the G7 nations.
It took 329 years for the G7 to abolish slavery, beginning with France in 1794 (entirely this time) and ending with Germany’s defeat in the First World War, which ended slavery in its colonies. Comparatively, civil rights legislation breezed through the G7, taking only 36 years from the United Kingdom’s Representation of the People Act in 1928 to the United States’ Civil Rights Act in 1964. Put together, these two sets of legislation produce a p(E) of 0.63. This means that the initial p(H) would be:

p(H) = (0.6 × 0.40 + 0.4 × 0.63) × e^0 = 0.49
Had this calculation merely used the civil rights legislation and not the abolition laws, this initial p(H) would have been 0.53. However, with the λE of 0.0154 and λB of 0.0025, we can determine how long it would take to reach a p(H) of 0.5 and of 0.9. While it would take only two years to reach a p(H) of 0.5 (equal to the animal welfare section), it would take a colossal ninety-four years to reach a p(H) of 0.9:
Even if one were to completely remove the risk factor from this equation, it would still take a substantial fifty-nine years for the p(H) to reach 0.9. This enormous figure is due to the low λ values, which, in turn, are due in part to the large disparity between the baseline date and when the legislation was passed. While movements for equality appeared quite early in history, the political will for legislation was slow to arrive.
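The fifty-nine-year figure can be checked directly with the stated values (a sketch assuming the weighted exponential-growth form; the risk term is set to zero, as in the scenario described above):

```python
import math

p_B, p_E = 0.40, 0.63
lam_B, lam_E = 0.0025, 0.0154

def p_H(t):
    # Risk factor removed entirely for this scenario
    return 0.6 * p_B * math.exp(lam_B * t) + 0.4 * p_E * math.exp(lam_E * t)

# First whole year at which the approval rating reaches 0.9
t_90 = next(t for t in range(300) if p_H(t) >= 0.9)  # 59
```

This agrees with the fifty-nine years quoted in the text.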