Preprint
Article

This version is not peer-reviewed.

Innovation and Balance of Legal Regulation and Ethical Governance in Autonomous Driving in China

Submitted:

20 December 2025

Posted:

22 December 2025


Abstract
The life conflict represents a paradigmatic ethical dilemma in the application of autonomous driving powered by artificial intelligence, where the right to life of passengers inside the vehicle collides with that of pedestrians outside. In such contexts, can artificial intelligence replace humans in making the choice to protect passengers, or to prioritize passengers at the expense of pedestrians? As autonomous vehicles become increasingly widespread, this life-or-death dilemma demands clearer normative resolution. It is both a central issue in legal governance and a foundational question for steering the industry's development in the public interest. This paper explores whether artificial intelligence can replace human decision-making, examines the boundaries of such decisions, and addresses the ethical challenges of autonomous driving through legal frameworks to advance the industry's progress.
Subject: 
Social Sciences  -   Law

Background

Ethics refers to the rationale and norms of order for handling the relations between individuals and society; it concerns what people ought to do rather than what they actually do, what they want to do, or what social conventions require. In the history of mankind, major scientific and technological developments have often brought significant changes in productive forces, production relations and the superstructure, have become important criteria for demarcating eras, and have often prompted profound reflection on social ethics (Zhang and Zhang 2021). With the rapid development of science and technology, human beings face an increasing number of ethical challenges; the deep integration of ‘science and technology’ with ‘ethics’ is a common problem confronting governments, academia, and the public in the development and governance of global science and technology (Lu and Wang 2023). This issue has become particularly prominent in the era of artificial intelligence.
The new generation of generative artificial intelligence has aroused widespread public concern and research on the ethics of information technology, including data leakage, the digital divide and the information cocoon (Duan 2020). Autonomous driving, one of the most important application scenarios of artificial intelligence and a typical representative of embodied intelligence, continues to mature and spread. The emergence of autonomous vehicles has brought the life conflict and other ethical dilemmas directly before the public, and the impact of these moral dilemmas may far exceed that of non-embodied intelligence, with far-reaching consequences for the future of interaction between AI and humans. A major regulatory milestone towards the deployment of automated vehicle technologies was attained on 23 March 2016 with the entry into force of amendments to the 1968 Vienna Convention on Road Traffic (UNECE 2023), which may signal that the global deployment of autonomous vehicles is becoming an inevitable trend.
As a typical product of embodied intelligence with commercialized application scenarios, autonomous vehicles greatly expand innovation and experimentation in human-AI interaction. This is a further development of current human-computer interaction technology, with greater emphasis on embodied artificial intelligence's understanding of and adaptation to human needs and behaviors, covering the optimization of the technology, the user's experience, privacy and security, and ethical issues (Zhao and Zhang 2025). At the level of safety, ensuring the lives of both passengers and pedestrians is the paramount obligation of operators. However, in extreme life-conflict scenarios, operators may be forced to make choices that weigh the lives of passengers against those of pedestrians. Different choices give rise to theories such as “passenger priority” and “pedestrian priority”. Regarding life-conflict issues in autonomous driving technology, some scholars pin their hopes on resolving them through the theory of choice of evils or the necessity defense. The fundamental approach involves ethical deliberation, ultimately seeking to resolve life-conflict dilemmas in autonomous driving through the codification of ethical principles into law. As the technology matures, it becomes necessary to analyze these issues within existing regulatory frameworks while accounting for the unique ethical characteristics of the artificial intelligence industry.

Part I New Characteristics Emerge in the Life-Conflict Dilemmas of Autonomous Vehicles

In 1967, Philippa Ruth Foot proposed the ‘trolley problem’, which is still debated today. Five people are on a railroad track and a runaway tram is approaching them; a bystander can pull a lever to divert the tram to another track, but there is also one person on that other track. What should the bystander choose? The tunnel problem is a variation of the trolley problem: a car is about to enter a tunnel on a one-way mountain road when a child runs into the center of the road, blocking the entrance. The driver seems to have only two choices in such a situation: crash into the child, killing the child, or crash into the entrance wall of the tunnel, killing the driver. The life-conflict dilemma generalizes this type of dilemma involving life; the difficulty it poses for autonomous vehicles is how to choose between human lives, and who should make that hard choice.

1.1. Technologies That Are Sufficiently Safe

In terms of safety, autonomous vehicles are rigorously validated before they are approved for operation on public roads, and they may perform more safely than human drivers. China adopts a strict regulatory model for the operation of autonomous vehicles on public roads, requiring all mass-produced vehicles to obtain road vehicle manufacturer and product approval from the Ministry of Industry and Information Technology before being permitted to operate on public roads. Individual provinces have also established standards and regulations on autonomous vehicle testing. For example, Beijing enacted the ‘Beijing Autonomous Vehicle Regulation’ and the ‘Technical specification for closed test site of intelligent connected vehicle Part 1: Passenger cars’. Since September 2020, Beijing has been at the forefront, establishing China's first high-level autonomous driving demonstration zone and accumulating practical experience along the way. In China, there are extremely strict restrictions on the operation of autonomous vehicles on public roads. Autonomous vehicles are equipped with sophisticated sensors and advanced algorithms that enable precise navigation through traffic. They have the potential to create safer driving environments by reducing human error, which remains the primary cause of road traffic accidents today.
Besides the Chinese government, many countries around the world have strict regulations governing the operation of autonomous vehicles on public roads. In 2021, Germany enacted the Autonomous Driving Act (Gesetz zum autonomen Fahren), which permits only Level 4 autonomous vehicles to operate in designated areas on public roads in Germany. However, to maximize the safety advantages of autonomous vehicles, it is crucial to understand under which conditions they outperform or underperform human drivers. A study indicates that autonomous vehicles are generally safer and less prone to accidents when performing routine driving tasks (Chen et al. 2025), such as maintaining lane position and adjusting positioning based on traffic flow. Autonomous vehicles are also safer with respect to rear-end collisions and side scrapes, with accident rates reduced by 50% and 20% respectively compared to human driving. However, current data indicate that autonomous vehicles appear more prone to accidents in specific scenarios, such as low-light conditions at dawn or dusk and when turning, with accident rates 5.25 times and 1.98 times those of human driving respectively (Abdel-Aty and Ding 2024).
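The relative rates cited above can be made concrete with a small illustrative calculation. The multipliers are the figures reported in the cited studies; the scenario labels and the normalization of the human-driving baseline to 1.0 are this sketch's own assumptions.

```python
# Illustrative comparison of scenario-specific accident rates for
# autonomous vehicles (AVs) relative to human driving, using the
# figures cited in the text. Baseline human rate is normalized to 1.0.
HUMAN_BASELINE = 1.0

relative_av_rates = {
    "rear-end collision":   HUMAN_BASELINE * (1 - 0.50),  # 50% lower than human
    "side scrape":          HUMAN_BASELINE * (1 - 0.20),  # 20% lower than human
    "low light (dawn/dusk)": HUMAN_BASELINE * 5.25,       # 5.25x the human rate
    "turning":              HUMAN_BASELINE * 1.98,        # 1.98x the human rate
}

for scenario, rate in relative_av_rates.items():
    safer = "safer" if rate < HUMAN_BASELINE else "riskier"
    print(f"{scenario}: {rate:.2f}x the human-driving rate ({safer})")
```

The point the calculation makes visible is the asymmetry: the scenarios where AVs underperform do so by far larger factors than the factors by which they outperform elsewhere.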

1.2. The Subject of Decision Making Has Changed

When discussing the life-conflict challenges posed by autonomous driving technologies, it is important to clarify the ‘subject’ of the conflict. Unlike traditional life-conflict scenarios, in autonomous driving it is primarily the right to life of the passenger and that of the pedestrian that are in conflict, and the decision-maker may be the autonomous driving system itself, the passenger, or even a remote-control operator. To resolve life-conflict dilemmas in the context of autonomous driving, scholars have proposed theories of ‘passenger priority’, ‘pedestrian priority’ and ‘random choice’, which involve philosophy, ethics and jurisprudence. However, given the characteristics of autonomous driving technology, the basic path to resolving ethical dilemmas such as the life conflict is to strengthen the rule of law and address them through the legalization of ethics.

1.3. The Inevitable Choice

Due to the specific attributes of autonomous vehicles, this issue cannot be avoided, however difficult such decisions may be: responses to such scenarios must be pre-programmed at the time of manufacture, and the handling options communicated to regulators as well as to users. While traditional life-and-death dilemmas are primarily matters of choice and responsibility for the person involved in the incident, solving the life-and-death dilemma is a prerequisite for the widespread implementation of automated driving technology. The technical characteristics of automated driving do not allow the argument to be deferred: such problems no longer confront only the human driver, but must be embedded in advance, in the form of algorithms, in the automated driving system.
This concerns not only whether the technology can be widely opened to the public, but also how to handle the relationship between human beings and artificial intelligence in the future. Autonomous vehicles, as a modern means of transportation, are closely bound up with human life, and AI provides their core perception, decision-making and control capabilities; as a result, life-and-death dilemmas in the context of self-driving technology present many new features.

Part II The Changes in Human Status

2.1. Changes in the Positioning of the Human-Machine Relationship

In traditional life-and-death dilemmas, human beings occupy the absolutely dominant role and can completely decide the direction of trams and trains. This is not the case in automated driving: people cannot completely dominate the driving process of the vehicle, and they play different roles at different levels of automation, which means that how the human-machine relationship is positioned will directly affect the establishment of the machine's ethical status. We can roughly categorize the relationship between autonomous vehicles and humans into two types. First, humans regard autonomous vehicles as a means of transportation, with control of the vehicle remaining in human hands, even if this control is embodied entirely in the form of algorithms. Second, humans regard autonomous vehicles as partners; that is, the autonomous vehicle may be regarded as an autonomous ethical subject, meaning that apart from the task instructions given by humans, it has full autonomy to act in situations such as life-and-death dilemmas. The key to clarifying the relationship between humans and autonomous vehicles is to formulate and introduce relevant standards and rules in a timely manner, which would strongly support the development and large-scale application of self-driving technology and promote the mass production and deployment of autonomous vehicles at different levels.
The Society of Automotive Engineers (SAE) released its automated driving classification standard in 2014, which divides automated driving into six levels, L0-L5, according to the technology's ability to control the vehicle and the driving domain. In China, the State Administration for Market Regulation (Standardization Administration) issued the ‘Automotive Driving Automation Classification’ (GB/T 40429-2021). The standard defines automation levels based on the dynamic driving task, the minimum risk state, the minimum risk strategy and other considerations, dividing automation into six levels: emergency assistance (L0), partial driver assistance (L1), combined driving assistance (L2), conditional autonomous driving (L3), highly autonomous driving (L4), and fully autonomous driving (L5). Under the standard, the role played by humans varies across levels of autonomous driving technology.
Among them, driving automation systems at the L0, L1 and L2 levels require the driver to perform dynamic-driving-task takeover duties, so the human remains the primary controller of the vehicle at these levels. In L3 conditional autonomous driving, the dynamic-driving-task backup user can take over the vehicle, and under the standard the backup user is not limited to the vehicle's occupants. In vehicles with L4 autonomy, the system continuously performs all dynamic driving tasks within its designed operating conditions and automatically executes a minimal risk strategy; with L5 autonomy, the system continuously performs all dynamic driving tasks under any conditions and automatically executes a minimal risk strategy. Only vehicles with L4 and L5 autonomy can therefore properly be called autonomous vehicles, as they have moved beyond driver control in the traditional sense; this paper accordingly focuses primarily on L4 and L5 technology. At these levels, the vehicle's responses to goals and events no longer require humans to take over or otherwise intervene, which also means that passengers, and even remote-control personnel, appear to lose control of the vehicle in life-conflict scenarios.
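The division of control responsibility described above can be sketched as a small lookup. This is an illustrative summary of the text's reading of GB/T 40429-2021, not an implementation of the standard; the English level names follow the paper's glosses, and the return labels are this sketch's own.

```python
from enum import Enum

class AutomationLevel(Enum):
    """Driving automation levels as summarized in the text
    from GB/T 40429-2021 (English glosses per the paper)."""
    L0 = "emergency assistance"
    L1 = "partial driver assistance"
    L2 = "combined driving assistance"
    L3 = "conditional autonomous driving"
    L4 = "highly autonomous driving"
    L5 = "fully autonomous driving"

def primary_controller(level: AutomationLevel) -> str:
    """Who primarily controls the vehicle at each level,
    per the description above (return strings are illustrative)."""
    if level in (AutomationLevel.L0, AutomationLevel.L1, AutomationLevel.L2):
        return "human driver"       # driver retains takeover duties
    if level is AutomationLevel.L3:
        return "backup user"        # backup user may take over; need not be an occupant
    return "system"                 # L4/L5: system executes the minimal risk strategy itself
```

The lookup makes the paper's scoping decision explicit: only levels for which `primary_controller` returns `"system"` fall within the life-conflict analysis that follows.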

Part III. Artificial Intelligence and the Right to Life

Who can make choices when autonomous driving technology faces life conflicts dilemmas depends on the status of artificial intelligence within human society. At the current technological level, AI can already replace humans in controlling vehicles. However, in the context of life-and-death dilemmas, any decision made by any entity will result in the loss of life on one side. The issue of who should be the decision-maker requires further discussion at the ethical and legal levels.

3.1. Meaningful Human Control

Meaningful Human Control (MHC) is a politically loaded concept that emerged from debates on autonomous weapons. Autonomous driving fundamentally differs from traditional human-vehicle interaction: it is neither purely human driving nor a vehicle devoid of autonomy; rather, it tightly integrates human cognitive judgment with vehicle control. In extreme scenarios, autonomous vehicles could possess the capacity to decide autonomously whether to harm humans. That autonomous vehicles protect occupants by emergency braking, rapid acceleration, or evasive steering to avert danger is uncontroversial. However, in life-or-death dilemmas where the vehicle chooses to collide with individuals outside it to prevent harm to occupants, the action becomes highly contentious. Viewed solely through the lens of causing death, such behavior could be interpreted as active harm against humans.
Although autonomous vehicles and weapons operate in entirely different contexts and pursue diametrically opposed objectives, both represent practical applications of artificial intelligence technology, and any misuse of either could result in human casualties. Meaningful Human Control is a concept proposed to address the safety concerns surrounding autonomous weapons. The International Committee for Robot Arms Control (ICRAC) and others maintain that Meaningful Human Control requires human operators to possess full contextual and situational awareness of the target area. Operators must also be afforded sufficient time to consider the nature of the target, the necessity and appropriateness of the attack, and the potential collateral harm and effects, and they must possess the means to terminate the attack should circumstances require.

3.2. Irreversibility of Consequences

Although governments adopt different policies to address the challenges posed by autonomous vehicles, most governments agree that developing autonomous driving is essential and that maintaining technological leadership in this field is crucial. Consequently, governments worldwide must address the ethical issues raised by autonomous driving and devise solutions for extreme scenarios.
Regardless of the action taken, the outcome will involve the loss of at least one life, with no realistic probability of preserving all lives simultaneously. This means that in the life-or-death dilemma, each party has only one chance to make a choice, with no room for reversal. If artificial intelligence is tasked with making this decision, it is effectively granted the power to take human life. This irreversibility of outcomes represents the greatest challenge in life-and-death dilemmas. While property damage from machine errors is often recoverable, the right to life differs fundamentally from other rights: once lost, it cannot be restored. Autonomous driving technology, whether making active or passive decisions based on real-world scenarios, leaves no opportunity to justify the decision to the injured party. A fatal outcome may even occur in cases where there was in fact a significant probability of preserving all lives.
For instance, the proliferation of artificial intelligence has already raised numerous concerns. In the financial sector, credit assessments and other evaluations can be analyzed through algorithms, yielding pass/fail conclusions. While dissatisfaction with such outcomes may inconvenience users, they can still seek redress through human customer service. When human lives are at stake, such as in life-and-death dilemmas, utmost caution is imperative. Take capital punishment as an example: even when imposing the most severe penalty on the most heinous crimes, society enforces extremely rigorous evaluation criteria and procedures. In China, the death penalty review mechanism provides an additional layer of stringent oversight, as any error in this context is irreversible.

Part IV Principles for Addressing Life-and-Death Dilemmas

Resolving life-and-death dilemmas requires discussion across technological, ethical, and legal dimensions. However, no consensus has been reached on which rule should apply in such dilemmas. Regardless of the approach taken, one party will inevitably face danger and suffer harm. For occupants inside the vehicle, the autonomous vehicle and its operator bear a duty of care; for pedestrians outside the vehicle, no such legal relationship of protection exists. This implies that, in extreme scenarios where an autonomous vehicle cannot simultaneously preserve the lives of both passengers and pedestrians, the death of passengers resulting from the operator's inability to fulfill its duty of care does not constitute negligence, much less intentional homicide.

4.1. Humans Must Remain the Ultimate Decision-Makers in Life-And-Death Dilemmas

Humans appear ill-equipped to make ‘correct’ decisions in life-or-death dilemmas, while artificial intelligence struggles even more to respond to such scenarios in a manner humans would find convincing. Scholars researching autonomous weapons have likewise emphasized that, to reduce the risk of humanity's potential extinction by machines and to safeguard fundamental human rights such as the rights to life and dignity, robots should not be granted the autonomous authority to kill (Lu 2023). The act of an autonomous vehicle deliberately colliding with one party to save another inherently constitutes a form of autonomous harm to human beings.

4.1.1. The Value of Human Life Cannot be Weighed by Artificial Intelligence

The highest value of life signifies that the value of human life surpasses all else; life is the measure of all things, and nothing in the world is more precious than life (Han 2020). This principle dictates that no hierarchy or distinction of worth can be made among human lives. Furthermore, the Universal Declaration of Human Rights states that: “All human beings are born free and equal in dignity and rights.” Additionally, prevailing legal doctrine holds that “all human lives possess equal and supreme value and are therefore non-negotiable.” From this perspective, life is inherently incommensurable. The German Federal Constitutional Court has stated in its rulings that ‘Every human life... possesses the same intrinsic value and therefore cannot be subjected to any form of differential evaluation or weighed against other lives based on quantitative relationships’ (Wang 2016). Regardless of the basis for judgment, no trade-off can be made between lives.
In practice, life-and-death dilemmas are difficult to identify, as fatal accidents account for only a very small proportion of everyday driving incidents. Predicting low-probability events is inherently challenging, let alone forecasting human survival across diverse scenarios. For autonomous driving, life-and-death dilemmas represent situational judgments made by artificial intelligence. The determination of such scenarios involves comprehensive evaluations by AI systems based on real-time road conditions and other data during vehicle operation. This judgment must assess both the survival probability of occupants if no action is taken and the survival probabilities of pedestrians and occupants under each possible course of action. However, in practice, life-and-death dilemmas require comprehensive evaluation through algorithms and other technologies. This raises several issues: first, what specific factors qualify a situation as a life-or-death conflict scenario? Second, how do we determine that these scenarios meet the criteria for such a conflict?

4.1.2. Artificial Intelligence Has No Right to Deprive Human Life

The right to life is the most fundamental human right. Locke held that in the state of nature, all individuals are equal and independent, with no one possessing the right to infringe upon the lives, liberty, or property of others. The Universal Declaration of Human Rights explicitly states that “everyone has the right to life, liberty, and security of person.” Though not legally binding under international law, it serves as the foundational document for global human rights standards. The International Covenant on Civil and Political Rights states: “Everyone has the inherent right to life. This right shall be protected by law. No one shall be arbitrarily deprived of his life.” Article 1002 of the Chinese Civil Code stipulates that “a natural person shall enjoy the right to life. The safety and dignity of the life of natural persons are protected by law. No organization or individual may infringe upon the right to life of any other person.” A fundamental consensus has emerged regarding the deprivation of human life.
Regarding responses to autonomous vehicle collision dilemmas, philosophical discussions on the trolley problem and the ensuing utilitarianism-liberalism debate already exist. Building upon this foundation, the legitimacy of value choices should be extended to the specific design of collision rules and the corresponding application of law (Cai 2022). Some advocate for “passenger priority.” While this view has theoretical support, the ethical dilemma of life conflicts should fundamentally be resolved by humans. Suppose that, under a passenger-first rule, the passenger and the pedestrian are father and son. The father wishes to sacrifice himself to save his son, yet the AI insists on adhering to the passenger-first principle, resulting in a collision with the son. While either outcome is tragic, the AI's decision, contrary to the parties' wishes, adds a further human tragedy atop the original one. This implies that regardless of the level of autonomous driving, humans should be granted the opportunity to choose and should not be deprived of the right to take control of the vehicle.

4.2. AI Intervention as a Complement Once Human Intervention Is Exhausted

Autonomous driving technology should address life-conflict dilemmas by making programmatic decisions only after passengers and operators have exhausted all possible interventions. The methods for protecting the right to life differ between occupants inside the vehicle and individuals outside it: pedestrians expect autonomous vehicles to avoid causing harm, while occupants trust the vehicle to provide maximum protection. Algorithms should not be employed to make value judgments about human life; their role should be limited to striving to protect the safety of occupants to the greatest extent possible, guided by technological ethics and legal requirements.
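The precedence argued for above can be sketched as a minimal decision rule. This is purely illustrative of the principle, not a real autonomous-driving API: the function and action labels are this sketch's own assumptions.

```python
# Minimal sketch of the precedence principle: the system's
# pre-programmed plan applies only after human intervention
# (on-board passenger, then remote operator) is exhausted.
# All names are illustrative, not from any real vehicle API.

def resolve_emergency(passenger_command, operator_command, system_fallback):
    """Return the action to execute, preferring human choices.
    Each command is either None (no input given) or an action label."""
    if passenger_command is not None:
        return passenger_command   # an on-board human's choice comes first
    if operator_command is not None:
        return operator_command    # then a remote operator's choice
    return system_fallback         # only then the pre-programmed plan

# Example: with no human input, the system's own plan applies.
action = resolve_emergency(None, None, "maximal braking")
```

Note that the sketch encodes only *who decides*, not *what to decide*: consistent with the text, the fallback itself must be confined to protective maneuvers rather than value judgments between lives.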

4.2.1. Grounded in the Principle of Nonmaleficence

Resolving the ethical dilemma of life conflicts in autonomous vehicles requires an understanding of foundational robot ethics. In I, Robot, science fiction author Isaac Asimov proposed the Three Laws of Robotics. The first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. Although these laws originated in science fiction, they laid a crucial foundation for robot ethics and became fundamental principles followed by most subsequent researchers. Analysis reveals that the most critical element of the relationship between humans and robots lies in the non-harm principle (Shang and Du 2020). Shang Xinjian argues that these principles are the guiding principles of robotics and also the fundamental tenets of robot ethics. Because they are grounded in considerations of not harming humans, obeying humans, and avoiding self-harm, they embody standards for what should and should not be done. They are primarily moral standards, and if enacted into law, they would also serve as legal principles to be observed.
The most critical aspect of the non-harm principle is that it prohibits robots from causing harm to humans, whether through autonomous actions or by following others' commands. It also requires robots to provide assistance when they encounter situations in which humans are suffering harm. In life-conflict scenarios, it may appear that any choice inevitably violates the non-harm principle. In reality, this interpretation rests on a flawed understanding of the relationship among the autonomous vehicle, passengers, and pedestrians, which is what renders the life-conflict dilemma unsolvable under such assumptions. Some scholars argue that any decision made by autonomous vehicles or similar robots ultimately results in human harm, thus violating the non-harm principle. This, however, misinterprets the principle's meaning. In the context of autonomous vehicles, the relationship among robots, passengers and pedestrians is not always one of harm versus non-harm: when a robot chooses to protect passengers, it fulfills a duty of care required by the transportation contract; when it chooses to protect pedestrians, it acts on the principle of non-harm.

4.2.2. “Pedestrian Priority” Does Not Violate the Principle of Nonmaleficence

Clarifying the relationship among autonomous vehicles, passengers, and pedestrians is essential for accurately understanding the non-harm principle. First, the relationship between autonomous vehicles and passengers constitutes a legal transport contract. According to Article 822 of the Chinese Civil Code, “in the course of carriage, the carrier shall give its best efforts to assist the passenger who is seriously ill, or who is giving birth or whose life is at risk.” Article 823 stipulates that “the carrier shall be liable for compensation for the death of or personal injury to the passenger in the course of carriage, except where such death or personal injury results from the passenger’s own health condition, or the carrier proves that such death or personal injury is caused by the passenger’s intentional misconduct or gross negligence.” Autonomous vehicles (or their operators) bear a duty to ensure passenger safety, which constitutes both an ancillary obligation under the passenger transport contract and a statutory duty of the carrier.
In traditional transportation contracts, the duty of care owed by drivers and operators is not unlimited. The requirement to “exert reasonable efforts” emphasizes that carriers must act within their capabilities (Ding 2024). For instance, in the medical field, when medical robots assist in rescuing patients, their primary task is to save lives based on their duty to assist. However, robots are not omnipotent, and a patient's death despite treatment does not necessarily mean the robot violated the principle of non-maleficence. The same applies to autonomous vehicles operating under a duty of care: actions taken to protect passengers' safety, even if they result in injury or death, should not automatically be deemed violations of the principle of non-harm.
Given the contractual relationship between operators and passengers, the “passenger priority” principle advocated by some scholars stems from the operator’s duty to ensure passenger safety. This principle is advantageous for passengers, not only fostering greater confidence in autonomous vehicles but also indirectly promoting the industry’s prosperity. However, this principle may not apply in extreme life-or-death conflicts. This is because the duty of care is not boundless but grounded in the principle of doing everything possible. Crossing ethical and legal boundaries in the name of safety is not permissible.

4.2.3. ‘Passenger Priority’ and the Conflict with the Non-Harm Principle

From another perspective, when an autonomous vehicle is programmed to collide with a pedestrian—whether this decision stems from the system itself or an operator’s command—it may constitute the deprivation of human life in the context of a life-and-death dilemma. This action risks permitting robots to take human lives in the civil sphere and raises concerns analogous to those associated with autonomous weapon systems deployed in military contexts. As Aaron M. Johnson of the University of Pennsylvania observes, “autonomous weapons systems raise numerous ethical questions, but the most fundamental issue is the transfer of the right to decide human life from humans to machines.” (Johnson and Axinn 2013) Therefore, even the use of lethal AI weapons in warfare remains controversial, and deploying similar technology in civilian transportation systems is even less acceptable. Even for robots used in warfare, a brief review of their development indicates that current military robotics technology and standards are predicated on “human intervention,” meaning humans remain the ultimate decision-makers in military operations.

Part V. Conclusion

Autonomous vehicles, as an application scenario for artificial intelligence, enable frequent interaction between humans and intelligent agents. This necessitates careful management of the relationship between passengers inside and pedestrians outside. This is not only an ethical issue but also a legal one. For autonomous driving to achieve widespread adoption and become a reliable mode of transportation for humanity, clear answers must be found for extreme dilemmas like the trolley problem. Addressing these questions necessitates carefully navigating the relationship between humans and artificial intelligence. As discussed in this paper, while the primacy of life is acknowledged, extreme scenarios demand consideration of how autonomous vehicles assess and manage risks. This requires governments worldwide to establish regulations and provide responses, balancing technological advancement with safety.
Against the backdrop of rapid technological advancement, nations strive to maintain their leading positions. Nevertheless, it is necessary to confront the challenges autonomous driving poses to traditional traffic systems and to uphold the principle that human life is paramount. Certain challenges arising in the development of artificial intelligence technologies, exemplified by autonomous driving, will inevitably migrate from ethical deliberation into legal frameworks. How the orderly development of the autonomous driving industry is regulated affects not only the sector’s prosperity but also the protection of humanity’s most fundamental rights. This becomes especially evident when confronting life-and-death dilemmas, where the uniqueness of human beings is most pronounced. No matter how advanced autonomous driving or artificial intelligence technology becomes, these technologies should ultimately remain human-centered. When decisions involve human life, or even the future destiny of humanity, the power of choice should rest with people themselves.

Author Contributions

Conceptualization, B.C.; methodology, B.C.; writing—original draft preparation, B.C.; writing—review and editing, B.C. and Y.L.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant from the National Social Science Fund of China (Grant No. 25AFX023).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data underlying the results are available as part of the article and no additional source data are required.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Universal Declaration of Human Rights. 1948. Article 1.
  2. International Covenant on Civil and Political Rights. 1966. Article 6(1).
  3. Cai Mengjian. 2022. Analyzing the Criminal Justification Theory of Autonomous Vehicles in Collision Dilemmas. Global Legal Review 44(6):118-234.
  4. Chen Shuwan, Zhao Pengfei, Liu Dandan, Chen Jiajun and Zhao Rui. 2025. Incidence trend of road traffic accident death in China, 2005−2021. Disease Surveillance 40(1):133-137.
  5. Ding Shenyi. 2024. Establishing a “Safety Wall” for Ride-Hailing Drivers’ Duty to Assist. Legal Daily Legal Weekend. Available online: http://www.legalweekly.cn/fzsb/2024-02/01/content_8957215.html (accessed on 15 December 2025).
  6. Duan Weiwen. 2020. The Ethical Foundation of the Information Civilization. Shanghai: Shanghai People’s Publishing House, pp. 44-94.
  7. Han Dayuan. 2020. Conflict and Balance Between the Right to Life and Other Rights. Human Rights 3:11-19.
  8. Johnson, A. M. and Axinn, S. 2013. The morality of autonomous robots. Journal of Military Ethics 12(2):129-141.
  9. Trabucco, Lena. 2023. What is Meaningful Human Control, Anyway? Cracking the Code on Autonomous Weapons and Human Judgment. Modern War Institute. Available online: https://mwi.westpoint.edu/what-is-meaningful-human-control-anyway-cracking-the-code-on-autonomous-weapons-and-human-judgment/ (accessed on 15 December 2025).
  10. Lu Xiao and Wang Qian. 2023. The integration of “science and technology” and “ethics” in the ethical governance of science and technology. Studies in Science of Science 41(11):1928-1931.
  11. Lu Yu. 2023. Challenges to Human Rights and Humanity Posed by Autonomous Weapons Systems and Legal Responses. Journal of International Relations and International Law 10(100):283-300.
  12. Abdel-Aty, Mohamed and Ding, Shengxuan. 2024. A matched case-control analysis of autonomous vs human-driven vehicle accidents. Nature Communications 15:4931. [CrossRef]
  13. Shang Xinjian and Du Liyan. 2020. On Robot Ethics. Foreign Philosophy 1:184-222.
  14. United Nations Economic Commission for Europe, Inland Transport Committee. 2014. Report of the sixty-eighth session of the Working Party on Road Traffic Safety. Geneva.
  15. Wang Gang. 2016. A New Perspective on Emergency Evasion of Danger to Life: Rejecting the Quantification of Life. Politics and Law 10:95-108.
  16. Zhao Yang and Zhang Jiayi. 2025. Research on the Human-Intelligence Interactive Experience Based on Embodied Intelligence: Theory, Application and Prospect. Information Studies: Theory & Application 48(3):52-62.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.