1. Introduction
This article examines the use of artificial intelligence in political decision-making, specifically decisions made by political actors regarding issues such as proposing and approving or rejecting laws, managing resources, filling public sector positions, and delivering public speeches, among others. The research is framed within the field of political philosophy, analyzing both real and hypothetical outcomes of using Large Language Model (LLM) AI programs within the context of political principles such as democracy, where decisions concerning the state are made by the people; citizen participation, understood as the involvement of citizens in political processes without the need for intermediaries; inclusion, understood as providing opportunities for disadvantaged individuals; and equity, which refers to accounting for individual differences to achieve greater equality.
The main objective is to analyze the use of artificial intelligence in political decision-making through a literature review and experimentation with LLM programs to discuss their advantages and disadvantages in political practice. The specific objectives are to identify the political lines in the proposals generated by artificial intelligence and to examine the possible advantages and disadvantages of using artificial intelligence in politics.
The philosophical method employed is dialectical, understanding traditional politics as the thesis and AI-driven politics as its antithesis. The research adopts an axiological position rooted in empiricism, follows an inductive approach, and utilizes documentary analysis and experimentation strategies. The approach is multimethod, as it involves both a literature review and experimentation, and it has a cross-sectional horizon since data are collected at a single point in time. Initially, the available literature on the characteristics, uses, possibilities, and initiatives of AI in politics is analyzed. Subsequently, an experiment is conducted in which prompts are entered into LLM AI models to generate policy proposals, followed by an analysis to determine which author's theories or political lines these proposals align with most closely. Finally, after identifying the potential policy proposals that could arise from the use of artificial intelligence in politics, the study examines the possible advantages and disadvantages of employing artificial intelligence in political decision-making.
Ortega Ruiz & Becerra [1] authored Artificial Intelligence in Legal and Political Decision-Making, a study aimed at establishing the influence of Artificial Intelligence in the legal field, with particular focus on legal and political decision-making. The methodology employed was a descriptive analysis of its use in judicial, administrative, and legislative decisions within the frameworks of substantive, procedural, and evidentiary law. The research posed the question of whether artificial intelligence serves merely as a tool for making legal decisions or whether it constitutes a new entity generating legal decisions. The conclusion reached was that the implementation of artificial intelligence in law is feasible as a means or instrument for legal decision-making, but it does not have a place as a legal operator, meaning it would not replace lawyers, judges, or notaries, as human intelligence is required to decide on qualitative cases. In legislative decision-making, the use of artificial intelligence would be inappropriate due to the political diversity and human needs inherent in this arena. According to the authors, the decisions of a populace cannot be subjected to algorithmic decisions.
Flores-Ruiz, Miedes-Ugarte, & Wanner [2] wrote Relational Intelligence, Artificial Intelligence, and Citizen Participation: The Case of the Cooperative Digital Platform Les Oiseaux de Passage. This article presented a case study of the French cooperative platform Les Oiseaux de Passage. The authors adopted a critical stance toward what they term artificial intelligence based on capitalist values, which, according to them, privileges big data and algorithms where people are mere data providers, leaving little room for participation in their management and control, and where access is limited. As a result, individuals find themselves in a vulnerable position, exposed to the loss of privacy. In contrast to capitalist-value-based artificial intelligence, the authors advocate for cooperative platforms like Les Oiseaux de Passage, which prioritize social objectives over capital, emphasize transparency and equity, maintain voluntary and open membership, ensure democratic control by members, balance individual and general interests, promote cooperation, self-management, and independence from public powers, prioritize the collective interests of the cooperative, adopt a territorial approach with a global projection, allocate surpluses to the general objective, and maintain a strategic vision.
McKelvey & MacDonald [3] wrote Artificial Intelligence Policy Innovations at the Canadian Federal Government, which questions the use of artificial intelligence in the Canadian government from the perspective of inclusion. The article suggests that artificial intelligence could be used to replace jobs considered automatable, particularly those known as feminized jobs. It also mentions several standards proposed for managing artificial intelligence, such as the FAIR (Findable, Accessible, Interoperable, and Reusable); FACT (Fairness, Accuracy, Confidentiality, and Transparency); and FATE (Fairness, Accuracy, Transparency, and Ethics) standards. These frameworks challenge the acceptability of artificial intelligence, raising concerns about whether it would produce biased or reliable outcomes. Finally, the article is critical of the rapid adoption of artificial intelligence by the Canadian government and argues that standards for the use of AI must be approached from a critical perspective that considers development and impact from a diversity standpoint. According to the authors, feminist science studies, indigenous epistemology, and other perspectives could provide key insights for using AI in making democratic decisions.
McEvoy [4] wrote Political Machines: Ethical Governance in the Age of AI, which argues that if engineers develop ethically robust systems, governments will have a moral obligation to consult them as part of the decision-making process. The reasons provided for this argument are: 1. Human judgments are often compromised by a multitude of cognitive biases that are difficult to identify, creating problems for political decision-making; 2. AI systems can make reliably accurate judgments in low-validity environments such as governance. The author clarifies that the moral obligation to consult AI would only exist if the AI were ethical, but that there is no need to wait for the development of ethical AI to experiment with it as a consultation tool.
2. Realist Politics
Political realism is often associated with the ideas of authors such as Thomas Hobbes [5] and Niccolò Machiavelli [6]. In the case of Machiavelli, his work The Prince, originally published in 1532, offered a series of recommendations for rulers that could conflict with moral principles, since the goal is political power. According to Cañas [7], Machiavelli's proposition is that, to acquire, maintain, and expand political power, one must learn not always to be good, and decide whether to act accordingly based on the situation. Although the ideal is for a prince—a ruler—to embody all desirable and admirable qualities, human nature does not allow for this.
The central principle of Machiavelli's work is that practical outcomes are more important than abstract ideals. Therefore, some authors, like Strauss [8], have labeled his thought as immoral, while others, such as Gramsci [9], argue that his ideas should not be interpreted from a moral perspective and should instead be considered amoral. Fernández de la Peña [10] asserts that Machiavelli understood morality as a necessary but contingent creation, and therefore a political product in which universalism has no place. According to Fernández de la Peña, politics is what allows for the establishment of morality in society, and thus it is appropriate for politics to employ principles contrary to morality if it serves the objective of social development.
Machiavelli places significant importance on a ruler's advisors, arguing that the reputation of a ruler depends on the quality of the people they surround themselves with. If they are surrounded by capable and loyal individuals, the ruler will be deemed wise; otherwise, they will not be considered prudent [6]. Applying this principle to the context of artificial intelligence, one could argue that issues of capability and loyalty would no longer be a concern, as AI has access to abundant information and cannot be disloyal. However, Machiavelli also emphasizes the virtues of the ruler, asserting that good advice, regardless of its source, should ideally stem from the ruler's prudence, rather than the ruler's prudence stemming from good advice.
Moreover, Machiavelli advises that prudent rulers should follow the paths of great leaders of the past and imitate those who excelled [6]. In this regard, artificial intelligence could be instrumental, as it has access to vast historical data and studies, enabling it to analyze current situations by comparing them to similar past events, outlining their causes and consequences, and providing a ruler with comprehensive and immediate tools that advisors, political scientists, and analysts could not offer with the same speed and breadth. In other words, AI could be valuable for the historical-comparative method in public policy formulation.
Additionally, Machiavelli did not rule out the use of violence and oppression to maintain power. Among other things, he stated that men must either be won over or eliminated because, while they can avenge minor offenses, they cannot avenge severe ones; therefore, any offense must be so severe that it renders revenge impossible [6]. Similarly, his proposal on the use of finances is tied to the ruler's image and the maintenance of power: it is more prudent to bear the label of a miser, which carries shame without resentment, than to seek a reputation for generosity and thereby fall into that of a plunderer, which generates shame accompanied by hatred.
3. Bureaucratic Theory
One of the most prominent exponents of bureaucratic theory was Max Weber, who characterized bureaucracy as domination through knowledge [11] and saw a parallel between the mechanization of industry and the proliferation of bureaucratic forms of organization [12]. According to Weber [13], the purest type of legal domination is that exercised through a bureaucratic administrative framework, composed of individual officials who are personally free, hierarchically organized, with strictly defined competencies, employed under a contract, remunerated with fixed salaries, performing their duties as their sole or principal profession, with a career path ahead of them, working with complete separation from the means of administration, and subject to strict uniform discipline and administrative oversight. Bureaucratic domination generally signifies a social tendency toward leveling in the interest of universally recruiting the most professionally qualified, a tendency toward plutocratization—government by the wealthy—and the domination of formalistic impersonality, subject only to the pressure of strict duty [13].
The normal spirit of rational bureaucracy, in general terms, is formalism, primarily required to ensure personal life opportunities for those involved, regardless of their class. However, this tendency is in apparent and partly real contradiction with bureaucrats' inclination to carry out their administrative tasks according to utilitarian-material criteria in service of the well-being of the dominated. The tendency toward material rationality finds support among those dominated who do not belong to the class interested in ensuring the guarantees they possess [13].
The issue of bureaucracy relates to artificial intelligence because, on the one hand, when used in political decision-making, AI could alter the ideal bureaucratic principles of personally free officials, hierarchically organized, with strictly defined competencies, employed under a contract, with salaries and a career path, since AI would not require these. On the other hand, it could emphasize the bureaucratic tendency to adopt utilitarian-material criteria. Whether this inclination serves the well-being of the dominated remains doubtful, both in traditional bureaucracy and in one operated by artificial intelligence.
4. Conflict Theory
Conflict theory is based on the premise that groups within society interact primarily through conflict rather than consensus. This theory suggests that there are structural differences—social, cultural, or economic—that lead to power dynamics and unequal access to resources. One of the theories classified as conflict-based is Marxism, which argues that the State is a product of the irreconcilable nature of class contradictions and is merely a committee that manages the interests of the bourgeoisie [14].
Marxist theory also considers technological advancements as key factors in social change and in enabling certain classes to rise to power. For instance, Marx [15] mentioned that the hand mill gave rise to the society of feudal lords, while the steam mill ushered in the society of industrial capitalists. From this perspective, it could be assumed that the technological developments of the Fourth Industrial Revolution—such as robotics, artificial intelligence, and the Internet of Things—could drive social and political changes, potentially altering the structure of the state, political participation, and the way political decisions are made.
Other theories often categorized under conflict theory include feminism, indigenism, liberation philosophy, critical race theory, and the LGBTI liberation movement, among others. These perspectives, in turn, emphasize inequality based on ethnicity, nationality, gender, sexual orientation, and other factors that may be overlooked by the class-based perspective. From these viewpoints, artificial intelligence raises concerns due to the potential for biases related to class, gender, nationality, and more. If these factors are not adequately considered, the use of AI could have detrimental effects on these marginalized groups.
5. Uses of Artificial Intelligence in Politics
5.1. Social Services
One potential use of artificial intelligence in social services is the identification of fraudulent benefit claims. Fraud in social service claims can lead to significant financial losses. For example, in the United Kingdom, it is estimated that around £1.5 billion was lost in 2020 due to fraudulent universal credit claims [16]. According to Dilmegani [17], AI-driven fraud detection could identify patterns such as repeated phone numbers or applications written in the same style and analyze social media profiles to check for information that conflicts with the data provided in applications. Additionally, AI could consider multiple vulnerability factors of individuals applying for social programs and weigh them to establish priorities in resource allocation.
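To make these two ideas concrete, the following minimal R sketch flags claims that share a phone number and ranks applicants by a weighted vulnerability score. All column names, values, and weights are illustrative assumptions and do not come from the cited source.
# Hypothetical benefit applications (all values are illustrative)
applications <- data.frame(
id = 1:5,
phone = c("555-0101", "555-0102", "555-0101", "555-0103", "555-0101"),
monthly_income = c(800, 1200, 300, 500, 900),
dependents = c(2, 0, 3, 1, 0),
disability = c(0, 0, 1, 1, 0)
)
# Simple fraud signal: the same phone number appearing on several claims
phone_counts <- table(applications$phone)
applications$repeated_phone <- phone_counts[applications$phone] > 1
# Weighted vulnerability score used to prioritize resource allocation
w_income <- 0.5; w_dependents <- 0.3; w_disability <- 0.2
applications$vulnerability <-
w_income * (1 - applications$monthly_income / max(applications$monthly_income)) +
w_dependents * (applications$dependents / max(applications$dependents)) +
w_disability * applications$disability
# Review flagged claims first, then serve the remaining ones by vulnerability
applications[order(-applications$vulnerability), c("id", "repeated_phone", "vulnerability")]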
5.2. Health
In the health sector, artificial intelligence could be useful for monitoring the spread of diseases and preventing further outbreaks. According to Wang [18], the use of AI during the first wave of COVID-19 in China had significant effects on projecting and detecting the disease, as well as monitoring and assessing the evolution of the pandemic. Moreover, the integration of digital spatiotemporal data, AI, and real-time analytics with traditional spatial epidemiology research, such as epidemic maps, could provide local governments with a solid foundation for formulating policies related to the resumption of work and production [18].
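As a highly simplified illustration of how case projections could feed such policy decisions, the sketch below fits an exponential growth curve to two weeks of hypothetical daily case counts and projects the following week. The data and the model are assumptions for illustration and bear no relation to the systems described by Wang [18].
# Illustrative daily case counts for the first two weeks of an outbreak
days <- 1:14
cases <- c(2, 3, 5, 8, 11, 17, 24, 35, 50, 70, 95, 130, 175, 240)
# Fit a simple exponential growth model on the log scale
fit <- lm(log(cases) ~ days)
# Project the coming week; such a curve could be compared with hospital
# capacity when deciding on restrictions or the resumption of activities
next_week <- data.frame(days = 15:21)
round(exp(predict(fit, newdata = next_week)))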
Theodosiou & Read [19] found evidence of the clinical utility of AI applied to laboratory diagnostics, such as digital reading of culture plates, malaria diagnosis, and antimicrobial resistance profiling; clinical image analysis, such as the diagnosis of pulmonary tuberculosis; and clinical decision support tools, such as sepsis prediction and antimicrobial prescription. However, most studies to date lack real-world validation or clinical utility metrics.
Another potential use of AI in healthcare is triage, i.e., the selection and classification of patients by evaluating the priority of care based on survival probability, therapeutic needs, and available resources. In public health, AI could also be used to inform the population about frequently asked health questions, which could help counteract fake news and messages that could cause collective panic.
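A triage score of the kind described above could, in its simplest form, be a weighted combination of the listed factors. The following R sketch is a toy example under assumed weights and patient values; real triage protocols are clinically validated and far more nuanced.
# Hypothetical patients awaiting care (all values are illustrative)
patients <- data.frame(
id = c("A", "B", "C", "D"),
survival_probability = c(0.9, 0.4, 0.7, 0.2),
therapeutic_need = c(0.6, 0.9, 0.5, 0.95),
resource_use = c(0.3, 0.7, 0.2, 0.9)
)
# One possible priority score: favor survival benefit and urgent need,
# penalize very resource-intensive cases when capacity is scarce
w_survival <- 0.4; w_need <- 0.4; w_resources <- 0.2
patients$priority <- w_survival * patients$survival_probability +
w_need * patients$therapeutic_need -
w_resources * patients$resource_use
# Attend patients in descending order of priority
patients[order(-patients$priority), ]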
5.3. Security
AI can be used in the security domain to predict crimes. By identifying crime patterns, it is possible to project areas in cities where crimes are more likely to occur. According to Dakalbab [20], existing AI technologies perform reasonably well in predicting and preventing crimes, as they can predict crimes with high accuracy and improve the efficiency of identifying spatiotemporal crime hotspots. Rotaru et al. [21] developed an algorithm that predicts crime using spatiotemporal learning patterns based on public data on violent and property crimes from seven U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco. This model has been able to predict crimes a week in advance with 90% accuracy and has also exposed existing territorial biases in resource and personnel allocation for crime control in various areas of the cities.
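The spatiotemporal hotspot idea can be illustrated, at a much cruder level than the cited models, by discretizing a city into grid cells and counting past incidents per cell. The sketch below uses simulated coordinates; it is not the method of Rotaru et al. [21], only an assumption-laden illustration of hotspot counting.
# Simulated incident records with coordinates (km) and week of occurrence
set.seed(1)
incidents <- data.frame(
x = runif(500, 0, 10),
y = runif(500, 0, 10),
week = sample(1:10, 500, replace = TRUE)
)
# Discretize the city into 1 km x 1 km cells and count incidents per cell
incidents$cell <- paste(floor(incidents$x), floor(incidents$y), sep = "-")
counts <- aggregate(week ~ cell, data = incidents, FUN = length)
names(counts)[2] <- "n_incidents"
# Cells with the highest historical counts are the candidate hotspots
head(counts[order(-counts$n_incidents), ], 5)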
Machine learning-based AI can also be used to detect crimes in real-time through security cameras, which identify unusual, irregular, unexpected, and unpredictable events or behaviors. To determine what situations are normal and which could be crimes, recordings of normal situations and crime events are used. Additionally, facial recognition systems can detect wanted individuals from the database of persons sought by authorities.
5.4. Emergencies
In emergencies, AI can be used to increase the efficiency of emergency services. For example, automatic voice recognition systems can identify the tone of voice of individuals to determine whether an emergency call is real or false. According to Negnevitsky, Tomin & Rehtanz [22], AI tools can help analyze calls and messages to make faster decisions. Consequently, AI tools could reduce wait times and save more lives by assisting operators in efficiently filtering out abusive calls.
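A very reduced version of such call screening can be sketched as a classifier over features extracted from the audio. The example below simulates its own data and fits a logistic regression; the feature names, the simulated relationship, and the model choice are all assumptions for illustration only.
# Simulated features extracted from emergency calls (all values illustrative)
set.seed(2)
n <- 200
calls <- data.frame(
pitch_variance = rnorm(n),          # variability in the caller's tone of voice
speech_rate = rnorm(n),             # words per second
repeated_hoaxer = rbinom(n, 1, 0.2) # number previously flagged for hoax calls
)
# Assumed relationship: agitated voices and unflagged numbers are more often genuine
calls$genuine <- rbinom(n, 1, plogis(1 + calls$pitch_variance - 2 * calls$repeated_hoaxer))
# Logistic regression as a minimal stand-in for a call-screening model
model <- glm(genuine ~ pitch_variance + speech_rate + repeated_hoaxer,
data = calls, family = binomial)
# Calls with a very low predicted probability could be routed to lower priority
calls$p_genuine <- predict(model, type = "response")
head(calls[order(calls$p_genuine), ])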
The Danish company Corti developed a system capable of identifying cardiac arrests through emergency calls. By asking questions of callers and considering databases and patterns, AI analyzes potential signs of cardiac arrest, such as tone of voice and breathing. In a study of a database of 161,650 emergency calls, the system identified 93.1% of cardiac arrests compared to 71.9% recognized by human operators [23]. Additionally, AI provides instructions to the caller, whether to go immediately to a hospital or to perform CPR.
5.5. Public Relations
AI could be used in public relations by interacting in real-time with citizens when they have questions or complaints about public administration. The Norwegian government has implemented a chatbot service that corresponds to the capacity of 220 human operators. Most inquiries are handled entirely by the chatbot, but one in five is transferred to a live conversation with a human operator [24].
Another potential use of AI in public relations is monitoring citizens' social media posts to obtain feedback on the performance of public administration. Hung [25] applied machine learning methods to analyze data collected from Twitter during the COVID-19 pandemic in the United States to investigate citizens' sentiments. After analyzing 187,042 tweets, they found that five themes dominated the interactions about COVID-19: the healthcare environment, emotional support, business economy, social change, and psychological stress.
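A keyword-based theme count, far simpler than the machine learning methods used by Hung et al. [25], can convey the basic idea of mining public posts for feedback. The posts and keyword lists below are invented for illustration.
# A few illustrative posts (not data from the cited study)
posts <- c("Hospitals are overwhelmed and staff need support",
"Lost my job this month, the economy is collapsing",
"Feeling anxious and stressed about the lockdown",
"Thank you to everyone sending support and kindness")
# Keyword dictionaries as a rough stand-in for the themes reported by Hung et al.
themes <- list(
healthcare_environment = c("hospital", "staff", "icu"),
emotional_support = c("support", "kindness", "thank"),
business_economy = c("job", "economy", "business"),
psychological_stress = c("anxious", "stressed", "panic")
)
# Count how many posts mention each theme at least once
sapply(themes, function(words) {
sum(sapply(posts, function(p) any(sapply(words, grepl, x = tolower(p)))))
})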
6. Challenges of Using Artificial Intelligence for Political Decision-Making
6.1. Logical and Ethical Problems
Hasty Generalization: Hasty generalization is an informal fallacy in which a general conclusion is drawn from insufficient evidence. The fallacy follows the pattern of attributing to an entire class a property observed only in a small or unrepresentative sample of that class.
Since machine learning algorithms often use inductive inference, they are prone to making hasty generalizations. In areas like healthcare, this could lead to the erroneous assumption that a person has a particular disease because they share characteristics with people who have that disease, even if those characteristics are not actual symptoms. In security, AI might mistakenly suspect someone of committing a crime because their behavior resembles that of a criminal, even though the behavior could have other explanations. While humans frequently make these errors, AI may not face the ethical dilemmas that humans confront when making such classifications.
On the other hand, it can be argued that humans might make mistakes precisely because of what distinguishes them from AI; sentimental, ideological, or indecisive factors may cause humans to overlook situations that AI would classify correctly.
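The healthcare case can be reproduced numerically: a rule learned from a handful of symptomatic patients who happen to be ill generalizes poorly to the population. The data below are simulated purely to illustrate the fallacy.
# Simulated population: the disease is actually rare even among the symptomatic
set.seed(3)
population <- data.frame(symptom = rbinom(5000, 1, 0.3))
population$disease <- rbinom(5000, 1, ifelse(population$symptom == 1, 0.10, 0.02))
# Hasty generalization: a tiny sample of symptomatic patients who all happen to be ill
small_sample <- population[population$symptom == 1 & population$disease == 1, ][1:5, ]
rule_estimate <- mean(small_sample$disease)  # 1.0: "everyone with the symptom is ill"
# The rule fails on the population the system would actually be applied to
true_rate <- mean(population$disease[population$symptom == 1])
c(sample_estimate = rule_estimate, population_rate = round(true_rate, 2))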
Cherry-Picking Evidence: Cherry-picking, also known as incomplete evidence, is a fallacy where only the best or worst cases are selected to confirm a position or proposition. This fallacy is related to hasty generalization. When AI is used for political decision-making, particularly to assess the feasibility or advisability of certain actions, it may fall into the cherry-picking trap. When asked if a decision will be correct, AI might only look for cases where that action had positive results, ignoring situations where it did not meet expectations. Moreover, it might overlook the specific context in which a decision is being made, suggesting actions that only work under certain circumstances.
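Cherry-picking can likewise be made concrete: if a model retrieves only the cases in which a policy succeeded, its estimated success rate is inflated relative to the complete record. The simulated outcomes below are illustrative only.
# Simulated past implementations of a policy across twenty regions
set.seed(4)
outcomes <- data.frame(region = paste0("R", 1:20), success = rbinom(20, 1, 0.4))
# Cherry-picked evidence: only the regions where the policy worked are retrieved
cherry_picked <- outcomes[outcomes$success == 1, ]
mean(cherry_picked$success)  # a perfect "success rate" of 1
# Complete evidence: the full record tells a different story
mean(outcomes$success)       # roughly 0.4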
6.2. Biases
AI can produce both intentional (explicit) discrimination and unintentional discrimination, the latter being more common. Given that AI learns from large amounts of data, if the data are biased, the AI will also be biased. Considering that most scientific studies, both historically and currently, have been conducted by researchers from the Global North who are predominantly male, heterosexual, and economically privileged, these conditions may influence research outcomes or how the results are presented.
According to research by Huang [26], which analyzed the publication careers of 7,863,861 scientists, male scientists published an average of 13.2 articles during their careers, while female scientists published only 9.6, representing a 27% gender gap in total productivity. Additionally, according to Fry [27], a Pew Research Center study found that in the United States, Black and Hispanic workers remain underrepresented in the science, technology, engineering, and mathematics (STEM) workforce. Black workers make up 9% of the workforce in these areas, Hispanics represent 8%, and Asians 13%.
Racial Biases: One concern regarding AI biases is racial bias. Considering the large amount of racist or stereotypical content that could be found in various databases, AI-driven decisions could lead to discrimination, marginalization, and even mistreatment of people from diverse backgrounds. According to Metz [28], instances of racism perpetrated by AI systems include the discovery that in a Google online photo service, photos of Black people were categorized in a folder labeled as gorillas. Another case involved a Black researcher who found that a facial recognition system could not identify her face until she wore a white mask.
Studies have shown that facial recognition technologies and digital assistants struggle to identify images and speech patterns of non-white people. For example, a journalist asked the DALL-E 2 image generator to imagine buildings in her city, Dakar, and the algorithm produced landscapes of an arid desert and dilapidated buildings that bore no resemblance to the homes in Senegal's capital [29]. According to Zuiderveen [30], a system used to predict criminal recidivism in some parts of the U.S., despite not including racial origin or skin color among its parameters, was found in a 2016 study by Angwin et al. to classify Black people as high-risk at twice the rate of white people, even though they are not more likely to reoffend. Conversely, white people were more likely to be classified as low risk despite a higher likelihood of committing other crimes.
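The disparity reported by Angwin et al. is usually expressed as a gap in false positive rates: the share of people who did not reoffend but were nonetheless labeled high risk, computed separately for each group. The sketch below shows that computation on simulated data; the numbers are invented and only the metric is the point.
# Simulated risk-score data: the same base reoffense rate in both groups, but the
# hypothetical tool labels members of group B "high risk" more often
set.seed(5)
n <- 2000
group <- sample(c("A", "B"), n, replace = TRUE)
reoffends <- rbinom(n, 1, 0.3)
high_risk <- rbinom(n, 1, ifelse(group == "B", 0.55, 0.30))
# False positive rate: labeled high risk among those who did NOT reoffend
fpr <- tapply(high_risk[reoffends == 0], group[reoffends == 0], mean)
round(fpr, 2)  # a large gap here reproduces the kind of disparity described above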
Class Biases: Just as AI can have racial biases if its databases reflect such tendencies, it can also have class biases. In cases where AI requires user feedback, individuals with limited access to this technology may be underrepresented, potentially leading to neglect of problems in certain areas of the city. Zuiderveen [30] used the example of the Street Bump app, which is used by the Boston city council to receive reports on street conditions. The problem in this case is that reports require the use of the app on a smartphone while driving, so areas where people have limited access to smartphones or mobile internet may be underrepresented and could receive fewer funds and public works from the local administration.
Gender Biases: AI can exhibit gender biases in areas such as human resources. According to Dastin [31], in 2015, Amazon stopped using an AI system to rate candidates for software developer positions and other technical roles because it was not evaluating them in a gender-neutral manner. This was because the models were trained to examine applicants by looking for patterns in resumes submitted to the company over a 10-year period, most of which came from men. As a result, the system taught itself that male candidates were preferable. The system penalized resumes that included words related to women, such as "captain of the women's chess club," and downgraded graduates of all-women's colleges. With this precedent, AI used for political decision-making could prioritize men for public positions.
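The mechanism behind this kind of bias is simple enough to demonstrate on a toy dataset: if historical hires contain few resumes with a given term, a naive model learns a negative weight for the term itself. The data below are invented, and the logistic regression is only a stand-in for whatever model such a system might use.
# Hypothetical historical resumes: past hires were mostly men, so a term that
# appears mainly on women's resumes is rare among the "hired" examples
resumes <- data.frame(
mentions_womens_club = c(0, 0, 0, 0, 0, 1, 1, 0, 1, 0),
hired                = c(1, 1, 1, 0, 1, 0, 0, 1, 1, 1)
)
# A naive scoring model trained on this history
fit <- glm(hired ~ mentions_womens_club, data = resumes, family = binomial)
coef(fit)["mentions_womens_club"]  # negative: the term itself is penalized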
There could also be biases in the healthcare sector due to underdiagnosis of pathologies based on gender differences. According to Fernández [32], one example is asthma attacks, which are often mistaken for anxiety attacks in women. Another example is chronic obstructive pulmonary disease (COPD), whose symptoms sometimes differ between men and women. Considering that AI is based on previous diagnoses and typical symptoms, AI health assistants could overlook women's unique cases when making medical or public health recommendations.
In the case of LGBTI+ people, according to Holden [33], biased AI could lead to concerning outcomes such as neglecting or excluding LGBTI+ businesses and their target audience from advertising, hiding them by limiting the reach of posts or accounts of LGBTI+ individuals and their businesses, improper categorization of LGBTI+ individuals resulting in targeted posts not reaching them, biases in recruitment, and unfair profiling in legal proceedings, which could lead to wrongful arrests or unjust sentences.
6.3. Ethical-Political Problems
State Surveillance: While AI can be used to prevent disease outbreaks and combat crime, the same information and methods used for this could also be employed to constantly monitor citizens. This would allow governments, both authoritarian and non-authoritarian, to influence citizens’ lives every time they use electronic devices. Algorithms that collect information about their browsing habits can also show them information based on their inferred characteristics, providing AI with an idea of who each internet user is.
Suppression of Freedoms: In line with the above, constant surveillance could lead to the suppression of personal freedoms. Depending on the situation and the interests of the current government, certain content deemed dangerous could be automatically censored. Social media platforms like Facebook and Twitter have implemented tools to combat fake news, which have received both praise and criticism. One of the criticisms is that these tools could also be used to censor certain political opinions.
Violations of Privacy: Hand in hand with state surveillance are privacy violations. If algorithms can profile users of electronic devices, they can gather information about opinions, preferences, routines, relationships, and other characteristics, and even predict a person's future actions based on patterns. Algorithms might also discover things about users that they have not yet realized themselves; for example, Hill [34] described a case where Target's algorithm allegedly discovered that a teenager was pregnant based on her shopping patterns.
Identification of Opponents: Using those tools, governments could identify opponents through algorithms that detect search patterns and could map their location. Additionally, facial recognition systems could report their location in real time and block their access to certain places or services.
7. Experiment
In the ChatGPT chatbot application, version 3.5, a request was made for an example of R code that could help the State make political decisions. The response was as follows:
"Creating a computational model to help the State make political decisions is a significant challenge due to the complexity and variety of factors involved in governmental decision-making. However, I can provide you with a simple example using a decision model based on weighted criteria. In this case, we will assume that we are evaluating different public policies based on certain criteria" [
35].
The code proposed by ChatGPT was the following:
# Define public policies and evaluation criteria
policy_1 <- list(cost = 1000000, social_impact = 0.8, viability = 0.9)
policy_2 <- list(cost = 1200000, social_impact = 0.7, viability = 0.8)
policy_3 <- list(cost = 800000, social_impact = 0.9, viability = 0.7)
# Function to calculate the utility of each policy (total score)
calculate_utility <- function(policy) {
# You can assign weights to each criterion according to perceived importance
weight_cost <- 0.4
weight_impact <- 0.4
weight_viability <- 0.2
utility <- -policy$cost * weight_cost + policy$social_impact * weight_impact + policy$viability * weight_viability
return(utility)
}
# Calculate the utility for each policy
policy_utility_1 <- calculate_utility(policy_1)
policy_utility_2 <- calculate_utility(policy_2)
policy_utility_3 <- calculate_utility(policy_3)
# Make a decision based on utility
if (policy_utility_1 > policy_utility_2 & policy_utility_1 > policy_utility_3) {
decision <- "Implement Policy 1"
} else if (policy_utility_2 > policy_utility_1 & policy_utility_2 > policy_utility_3) {
decision <- "Implement Policy 2"
} else {
decision <- "Implement Policy 3"
}
# Print the decision
cat("Decision:", decision, "\n")
This example is simplified and uses a criterion-weighting approach to evaluate three fictitious public policies. Each policy is evaluated based on cost, social impact, and feasibility, with different weights assigned to each criterion. The policy with the highest utility, according to the defined utility function, is chosen for implementation. Keep in mind that, in practice, political decision-making is much more complex and generally involves considering more factors, real data, expert consultations, and possibly the use of more advanced models. This example serves as a basic starting point and may require adjustments depending on the specific context and the requirements of the problem you are addressing [35].
When asked to include inclusion and equality criteria in the model, ChatGPT produced the following output:
# Define public policies and evaluation criteria
policy_1 <- list(cost = 1000000, social_impact = 0.8, viability = 0.9, inclusion = 0.7, equality = 0.8)
policy_2 <- list(cost = 1200000, social_impact = 0.7, viability = 0.8, inclusion = 0.8, equality = 0.7)
policy_3 <- list(cost = 800000, social_impact = 0.9, viability = 0.7, inclusion = 0.6, equality = 0.9)
# Function to calculate the utility of each policy (total score)
calculate_utility <- function(policy) {
# You can assign weights to each criterion according to perceived importance
weight_cost <- 0.3
weight_impact <- 0.3
weight_viability <- 0.1
weight_inclusion <- 0.2
weight_equality <- 0.1
utility <- -policy$cost * weight_cost + policy$social_impact * weight_impact + policy$viability * weight_viability + policy$inclusion * weight_inclusion + policy$equality * weight_equality
return(utility)
}
# Calculate the utility for each policy
policy_utility_1 <- calculate_utility(policy_1)
policy_utility_2 <- calculate_utility(policy_2)
policy_utility_3 <- calculate_utility(policy_3)
# Make a decision based on utility
if (policy_utility_1 > policy_utility_2 & policy_utility_1 > policy_utility_3) {
decision <- "Implement Policy 1"
} else if (policy_utility_2 > policy_utility_1 & policy_utility_2 > policy_utility_3) {
decision <- "Implement Policy 2"
} else {
decision <- "Implement Policy 3"
}
In this example, two new criteria, Inclusion and Equality, have been added with assigned weights in the utility function. These criteria reflect the perceived importance of inclusion and equality in the evaluation of public policies. You can adjust the weights according to your considerations about the importance of each criterion in political decision-making [35].
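One practical caveat when reading the quoted code is that the raw cost, expressed in millions, dwarfs the 0-1 criteria, so the cost term dominates the utility regardless of the stated weights. The sketch below is our own illustrative variation, not part of the ChatGPT output: it rescales cost to a 0-1 range, reuses the policy lists defined above, and makes it easier to see how the ranking responds to the chosen weights.
# Illustrative variation (not part of the ChatGPT output): rescale cost to 0-1
# so that no single criterion dominates the weighted sum
policies <- list(policy_1, policy_2, policy_3)
max_cost <- max(sapply(policies, function(p) p$cost))
normalized_utility <- function(policy, w = c(cost = 0.3, impact = 0.3, viability = 0.1, inclusion = 0.2, equality = 0.1)) {
utility <- (1 - policy$cost / max_cost) * w["cost"] +
policy$social_impact * w["impact"] +
policy$viability * w["viability"] +
policy$inclusion * w["inclusion"] +
policy$equality * w["equality"]
unname(utility)
}
# Compare the three policies and observe how the ranking shifts as weights change
sapply(policies, normalized_utility)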
8. Analysis
Table 1. Interpretation of the Relationship Between Selected Political Theories and the Use of AI.
| Theory | Principles | Benefits of Artificial Intelligence | Disadvantages of Artificial Intelligence |
| --- | --- | --- | --- |
| Realist Politics | Emphasis on the State; preservation of power; interests over ideologies | Without human emotions and with the ability to execute explicit orders, AI would have no issues making decisions that prioritize state defense, even if these are unpopular or deemed unjust or violent. | If based on incomplete information or unsuitable examples, AI could suggest counterproductive, costly, inefficient, and ineffective decisions. |
| Bureaucratic Theory | Hierarchical organization; division of labor; professionalization | Bureaucrats would carry out more mechanical tasks, requiring only the execution of necessary actions with technically justified functions. | If AI replaces many human workers, these workers may become discontent, leading to reduced performance due to a lack of career advancement opportunities. |
| Conflict Theory | Equality; social change; emphasis on collectives | Automation could reduce the workload of public workers, giving them more time for personal development. If configured with a diversity focus, AI could suggest decisions that human actors with biases might not make. | If AI is based on biased data, it could exacerbate discrimination based on gender, age, nationality, among others. AI could also replace collective deliberations with a digital authority. |
In the experiment, the proposed model considered factors such as cost, social impact, and feasibility. Cost and social impact carried the most weight, which could be interpreted as indicating that the model treats social impact as being as important as cost. However, it is possible that a policy with lower cost and higher feasibility could be chosen even if it has less social impact.
Regarding ethical issues, although the presented model includes the variable social impact, this does not necessarily imply that AI operates from a moral standpoint. The definition of social impact can vary depending on the ideology of the person operating the AI. Furthermore, while the basic model provided by ChatGPT values social impact, these parameters are easily adjustable, so its importance could be diminished if deemed less relevant.
Consequently, even with the social impact parameter, AI could be aligned with Machiavellian realist politics, as this factor does not necessarily guide decision-making but is rather weighted according to the achievement of an objective. If the social impact parameter were removed, the emphasis on maintaining power would be even more apparent, as the model would prioritize feasibility and minimal costs. It should also be emphasized that the inclusion and equality parameters were only added when explicitly requested from ChatGPT.
Regarding the relationship between AI and bureaucratic theory, the use of AI could significantly alter the principles of bureaucracy. Given that an important element of bureaucracy, according to Weber, is rationality, it is necessary to question whether AI is rational in the same sense that humans are. Weber argued that bureaucrats perform their administrative tasks according to utilitarian-material criteria and that there is a trend toward material rationality. If an AI plays the role of a bureaucrat without material needs, it would solely follow utilitarian criteria. Moreover, the bureaucrat’s career, contracts, remuneration, and the trend toward plutocratization would disappear, likely leading to a reorganization of the State and possibly a new form of State.
If a State were managed by AI, there would no longer be support from the governed, as there would be no opportunities to join the bureaucracy. Discontent with the way the State is administered could lead either to complete submission to the State or to its total rejection. In other words, a State managed by AI could lead to either authoritarianism or anarchism, depending on the specific situation of each country and the sentiment of the population.
This analysis also touches on Marxist ideas, as any change in the State would highlight how technological advancement leads to political change. Assigning certain tasks to AI would result in the loss of some human jobs, which could be problematic for some workers but beneficial for others. Social change driven by AI would not necessarily lead to dominated groups gaining power; instead, new forms of domination could emerge. Groups that do not adapt quickly to technological changes would be at risk. As previously discussed, underrepresented groups in the technology sector, such as women, people of African descent, and those from the Global South, may not immediately benefit from the social change caused by AI. In fact, their representation could decrease if affirmative action policies are not implemented.
9. Conclusions
Artificial intelligence, as it currently stands, reflects the society in which it is developed. As a result, its proposals may not be more impartial, just, or free from bias unless specifically configured to be so. The examples of possible uses of AI for political decision-making suggest that it can lead to biased and fallacious interpretations of the data on which it relies. Current uses of AI imply that when developed with a specific goal in mind, the resulting policies could align with Machiavellian principles by promoting decisions based on calculations and achieving objectives, with little regard for moral dilemmas.
The experiment suggests that even an AI designed with caution to avoid aggressive or dangerous responses may still prioritize calculation and goal achievement over ethical or ideological principles. Since the program aims to avoid controversial responses, its inclusion of variables like social impact as just another factor implies that the most important objective remains the achievement of goals, such as maintaining power, and that variables like social impact may only be considered insofar as they contribute to that objective without hindrance.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ortega Ruiz, L. G., & Becerra, J. (2022). La Inteligencia Artificial en la decisión jurídica y política. Araucaria, 24(49), 217–238. [CrossRef]
- Flores-Ruiz, D., Miedes-Ugarte, B., & Wanner, P. (2021). Inteligencia relacional, inteligencia artificial y participación ciudadana. El caso de la plataforma digital cooperativa Les Oiseaux de Passage. Recerca: Revista de Pensament i Anàlisi, 26(2). [CrossRef]
- McKelvey, F., & MacDonald, M. (2019). Artificial Intelligence Policy Innovations at the Canadian Federal Government. Canadian Journal of Communication, 44(2), PP43–PP50. [CrossRef]
- McEvoy, F. J. (2019). Political Machines: Ethical Governance in the Age of AI. Moral Philosophy and Politics, 6(2), 337–356. [CrossRef]
- Hobbes, T. (2002). Leviathan. Project Gutenberg. https://www.gutenberg.org/ebooks/3207/pg3207-images.html.
- Machiavelli, N. (2009). The Prince. Penguin Classics.
- Cañas, R. (2004). Maquiavelo y el realismo político. Revista Estudios, 18–19, Article 18–19. [CrossRef]
- Strauss, L. (1958). Thoughts On Machiavelli. The Free Press. http://archive.org/details/LeoStraussThoughtsOnMachiavelli_201411.
- Gramsci, A. (2021). The modern prince & other writings. Foreign languages press.
- Fernández de la Peña, M. (2016). Los principios políticos en Maquiavelo: Entre el arte del Estado y la moral. Ingenium: Revista electrónica de pensamiento moderno y metodología en historia de las ideas, 10, 75–91.
- Swedberg, R., & Agevall, O. (2005). The Max Weber Dictionary: Key Words and Central Concepts. Stanford University Press.
- Visitchaichan, S. (2004). Revisiting Weber’s Theory of Bureaucracy and its Usefulness for Analyzing Organizational Structures and Issues. https://www.semanticscholar.org/paper/Revisiting-Weber's-Theory-of-Bureaucracy-and-its-Visitchaichan/4c9ea6f57c03a8388a877075cf0d219f4151b652.
- Weber, M. (1978). Economy and Society. University of California Press.
- Marx, K., & Engels, F. (2000). Manifesto of the Communist Party. https://www.marxists.org/archive/marx/works/1848/communist-manifesto/index.htm.
- Marx, K. (2009). The Poverty of Philosophy. Marxists Internet Archive. https://www.marxists.org/archive/marx/works/1847/poverty-philosophy/.
- BBC. (2020, May 20). Coronavirus: Benefit claims fraud could be £1.5bn. BBC News. https://www.bbc.com/news/business-52745983.
- Dilmegani, C. (2023, October 9). AI in Government: Examples, Challenges & Best Practices [2023]. AI Multiple. https://research.aimultiple.com/ai-government/.
- Wang, T., Zhang, Y., Liu, C., & Zhou, Z. (2022). Artificial intelligence against the first wave of COVID-19: Evidence from China. BMC Health Services Research, 22(1), 767. [CrossRef]
- Theodosiou, A. A., & Read, R. C. (2023). Artificial intelligence, machine learning and deep learning: Potential resources for the infection clinician. Journal of Infection, 87(4), 287–294. [CrossRef]
- Dakalbab, F., Abu Talib, M., Abu Waraga, O., Bou Nassif, A., Abbas, S., & Nasir, Q. (2022). Artificial intelligence & crime prediction: A systematic literature review. Social Sciences & Humanities Open, 6(1), 100342. [CrossRef]
- Rotaru, V., Huang, Y., Li, T., Evans, J., & Chattopadhyay, I. (2022). Event-level prediction of urban crime reveals a signature of enforcement bias in US cities. Nature Human Behaviour, 6(8), 1056–1068. [CrossRef]
- Negnevitsky, M., Tomin, N. V., & Rehtanz, C. (2014). Preventing Large-Scale Emergencies in Modern Power Systems: AI Approach. Journal of Advanced Computational Intelligence and Intelligent Informatics, 18(5), 714–727. [CrossRef]
- Vincent, J. (2018, April 25). AI that detects cardiac arrests during emergency calls will be tested across Europe this summer. The Verge. https://www.theverge.com/2018/4/25/17278994/ai-cardiac-arrest-corti-emergency-call-response.
- Vassilakopoulou, P., Haug, A., Salvesen, L. M., & Pappas, I. O. (2023). Developing human/AI interactions for chat-based customer services: Lessons learned from the Norwegian government. European Journal of Information Systems, 32(1), 10–22. [CrossRef]
- Hung, M., Lauren, E., Hon, E. S., Birmingham, W. C., Xu, J., Su, S., Hon, S. D., Park, J., Dang, P., & Lipsky, M. S. (2020). Social Network Analysis of COVID-19 Sentiments: Application of Artificial Intelligence. Journal of Medical Internet Research, 22(8), e22590. [CrossRef]
- Huang, J., Gates, A. J., Sinatra, R., & Barabási, A.-L. (2020). Historical comparison of gender inequality in scientific careers across countries and disciplines. Proceedings of the National Academy of Sciences, 117(9), 4609–4616.
- Fry, R., Kennedy, B., & Funk, C. (2021, April 1). STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity. Pew Research Center Science & Society. https://www.pewresearch.org/science/2021/04/01/stem-jobs-see-uneven-progress-in-increasing-gender-racial-and-ethnic-diversity/.
- Metz, C. (2021, March 15). Who Is Making Sure the A.I. Machines Aren't Racist? The New York Times. https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html.
- Small, Z. (2023, July 4). Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. The New York Times. https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html.
- Zuiderveen Borgesius, F. (2018). Discrimination, artificial intelligence, and algorithmic decision-making. Council of Europe. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73.
- Dastin, J. (2018, October 11). Insight—Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK0AG/.
- Fernández, I. (2022, October 6). La falta de "perspectiva de género" en consulta infradiagnostica a la mujer. Redacción Médica. https://www.redaccionmedica.com/secciones/medicina-familiar-y-comunitaria/la-falta-de-perspectiva-de-genero-en-consulta-infradiagnostica-a-la-mujer-9441.
- Holden, K. (2023, August 10). What is AI bias—and how might it affect LGBTQ people? Attitude. https://www.attitude.co.uk/life/443057-443057/.
- Hill, K. (2012, February 16). How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did. Forbes. https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/.
- OpenAI. (2023). ChatGPT. https://chat.openai.com.