Submitted: 13 February 2025
Posted: 14 February 2025
Abstract
Keywords:
1. The Raising of the Problem
2. The Various Risks to the Development of Generative AI
2.1. Major Legal Risks
2.1.1. Legal Risk of Intellectual Property Rights
2.1.2. Legal Risk of Data Security and Personal Information
2.1.3. Legal Risk of Fair Competition in the Market
2.2. Ethical Risks in Science & Technology at Different Stages
2.2.1. In the Development Phase
2.2.2. In the Application Stage
2.2.3. In the Relief Phase
2.3. Major Risks in Social Governance
2.3.1. The Non-Authenticity of Generative AI
2.3.2. The Non-Reliability of Generative AI
2.3.3. The Weak-Controllability of Generative AI
2.4. Analysis of the Risk Causes
3. Extraterritorial Investigations into the Regulation of Generative AI Development
3.1. The European Union
3.2. The United States
3.3. The United Kingdom
4. Considerations on Promoting the Development of Generative AI through the Rule of Law
4.1. The Measures of the Rule of Law to Promote the Development of Generative AI
4.1.1. The Mutual Promotion of Technology and Law
4.1.2. The Balance between Development and Security
4.2. The Requirements of the Rule of Law to Promote the Development of Generative AI
4.2.1. Security
4.2.2. Reliability
4.2.3. Controllability
5. Practical Frameworks for Facilitating the Development of Generative AI
5.1. Strengthen the Supply of Regulatory Institutions
5.1.1. System Design with Safety as the Bottom Line
5.1.2. Development-Oriented Policy Support
5.2. Establish a Multi-Faceted and Long-Term Regulatory Mechanism
5.2.1. Supplement Provisions for Verifying Rectification and Optimization Methods
5.2.2. Rationalize the Allocation of Generative AI Regulatory Costs
5.2.3. Pay Attention to the Digitization of Rule of Law Supervision
5.2.4. Give Full Play to the Regulatory Role of Enterprises and Users
5.3. Clarify and Refine the Responsibility System
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhu Xiaohang, "Italy Announces Ban on ChatGPT", China Economic Net, http://intl.ce.cn/qqss/202304/03/t20230403_38476962.shtml?utm_source=UfqiNews, accessed on April 28, 2023. It should be noted that the "ban" issued by the Italian authorities was, strictly speaking, a temporary injunction, and access to ChatGPT has since been restored.
- Zhang Xin, "From Algorithm Crisis to Algorithm Trust: Multiple Schemes and Localization Paths of Algorithm Governance", Journal of East China University of Political Science and Law, No. 6, 2019.
- Han Dayuan, "The Constitutional Boundaries of Contemporary Science and Technology Development", Research on the Modernization of the Rule of Law, No. 5, 2018.
- See Andersen v. Stability AI Ltd., January 13, 2023.
- Lin Xiuqin, "The Reshaping of the Copyright Fair Use System in the Era of Artificial Intelligence", Legal Research, No. 6, 2021.
- Ma Zhongfa and Xiao Yulu, "The Infringement Dilemma and Way Out of Artificial Intelligence Learning and Creation", Wuling Academic Journal, No. 5, 2019.
- Yang Lihua, "Exploration of the Copyright Issues of Artificial Intelligence Generated Materials", Modern Legal Science, No. 4, 2021.
- Zhou Xin, "Challenges and Countermeasures of Artificial Intelligence to the Traditional Civil Liability System", Rule of Law Forum, No. 3, 2021.
- Pei Chenwei and Wu Chunxin, "The copyright of AI-generated content is not clearly defined", Science and Technology Daily, June 19, 2023, page 2.
- Chen Bing, "Building a Scientific and Prudent Rule of Law Framework for the High-quality Development of AIGC", China Business News, April 19, 2023, page A11; Chen Bing, "Facing the Crisis of Trust in Artificial Intelligence and Accelerating the Development of Trusted AIGC", China Business News, April 25, 2023, page A11.
- Zheng Zhifeng, "Privacy Protection in the Age of Artificial Intelligence", Legal Science (Journal of Northwest University of Political Science and Law), No. 2, 2019.
- Product safety standards, https://openai.com/safety-standards.
- March 20 ChatGPT outage: Here's what happened, https://openai.com/blog/march-20-chatgpt-outage.
- Matthew Hindman, The Myth of Digital Democracy, Princeton, NJ: Princeton University Press, 2008.
- Ma Changshan, "The Social Risks of Artificial Intelligence and Its Legal Regulation", Legal Science (Journal of Northwest University of Political Science and Law), No. 6, 2018.
- Fan Chunliang, "Theory and Practice of Ethical Governance of Science and Technology", Science and Society, No. 4, 2021.
- Gu Haibo, "AI-Generated Image Wins Sony World Photography Award," Youth Reference, April 28, 2023, 5th edition.
- Zhao Zhiyun, Xu Feng, Gao Fang, et al., "Some Understandings on the Ethical Risks of Artificial Intelligence", China Soft Science, No. 6, 2021.
- Feng Jie, "Jurisprudence Reflection on the Legal Subject Status of Artificial Intelligence Body", Oriental Jurisprudence, No. 4, 2019.
- Yu Xue and Duan Weiwen, "The Ethical Construction of Artificial Intelligence", Theoretical Exploration, No. 6, 2019.
- Ministry of Civil Affairs of the People's Republic of China, "Statement on Cautioning Against Illegal Activities Involving the Forgery of Ministry of Civil Affairs Documents and Other Violations", https://www.mca.gov.cn/article/xw/mzyw/202303/20230300046804.shtml?site=elder, accessed on April 28, 2023.
- Erik Brynjolfsson and Andrew McAfee, "The Second Machine Revolution: How Digital Technology Will Change Our Economy and Society", translated by Jiang Yongjun, CITIC Press, 2016, p. 340.
- Roman V. Yampolskiy and Otto Barten, "ChatGPT and Other Language Models May Pose Existential Risks", translated by Wang Youran, China Social Science News, March 6, 2023, 8th edition.
- Yuan Kang, "Legal Regulation of Trusted Algorithms", Oriental Jurisprudence, No. 3, 2021.
- Zeng Xiong, Liang Zheng and Zhang Hui, "The Regulatory Path of Artificial Intelligence in the European Union and Its Enlightenment to China: Taking the Artificial Intelligence Act as the Object of Analysis", E-Government, No. 9, 2022.
- Fang Xu, Wei Yan, Zhang Ying, Wang Xiaosa, Sun Linxiao, Xu Lei, "Overview of the EU Artificial Intelligence Act", Computer Times, No. 5, 2022.
- European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – General approach (6 December 2022), https://data.consilium.europa.eu/doc/document/ST-15698-2022-INIT/EN/pdf.
- Op. cit. [XXV].
- The Blueprint for an AI Bill of Rights: Making Automated Systems Work for The American People, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
- Chen Jinghui, "Legal Attitude in the Face of Genetically Modified Issues: How Should Legal Persons Think About Scientific Issues", Law Science, No. 9, 2015.
- Chen Jinghui, "The Doctrinalization of Departmental Law and Its Limits", China Law Review, No. 3, 2018.
- Wu Chao, Yang Mian and Wang Bing, "The Scientific Definition of Security and Its Implications, Extensions, and Inferences", Journal of Zhengzhou University (Engineering Edition), No. 3, 2018.
- Leveson, N., A new accident model for engineering safer systems, Safety Science, 2004, 42(4): 237–270. [CrossRef]
- The Blueprint for an AI Bill of Rights: Making Automated Systems Work for The American People, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
- Op. cit. [XXIV], Yuan Kang.
- Jiao Heping, "Copyright Risks and Mitigation Paths for Data Acquisition and Utilization in Artificial Intelligence Creation", Contemporary Legal Science, No. 4, 2022; Zhang Jinping, "The Fair Use Dilemma of Artificial Intelligence Works and Its Solution", Global Law Review, No. 3, 2019.
- Op. cit. [VI], Ma Zhongfa and Xiao Yulu.
- Chen Bing and Ma Xianru, "The Governance Dilemma and Rule of Law Response to Cross-border Data Flow under the System Concept", Journal of Anhui University (Philosophy and Society Edition), No. 2, 2023.
- Chen Yongwei, "Beyond ChatGPT: Opportunities, Risks and Challenges of Generative AI", Journal of Shandong University (Philosophy and Social Science Edition), No. 3, 2023.
- Pu Qingping and Xiang Wang, "Generative Artificial Intelligence: The Transformative Impact, Risks, Challenges and Coping Strategies of ChatGPT", Journal of Chongqing University (Social Science Edition), No. 3, 2023.
- Zhang Jingzhi, "The International Model of the 'Regulatory Sandbox' and the Development Path of Chinese Mainland", Financial Supervision Research, No. 5, 2017.
- Yu Xingzhong, Zheng Ge and Ding Xiaodong, "Six Issues of Generative Artificial Intelligence and Law: A Case Study of ChatGPT", China Law Review, No. 2, 2023.
- Op. cit. [X], Chen Bing, "Building a Scientific and Prudent Rule of Law Framework for the High-quality Development of AIGC".
- Xu Wei, "On the Legal Status and Responsibilities of Generative AI Service Providers: A Case Study of ChatGPT", Legal Science (Journal of Northwest University of Political Science and Law), No. 4, 2023.
- Kacy Popye, Cache-22: The Fine Line Between Information and Defamation in Google's Autocomplete Function, Cardozo Arts and Entertainment Law Journal, Vol. 34, p. 835, at p. 841 (2016).
| Types of Risks | Specific Risk | Superficial Causes | Root Cause | Governance Tendency |
|---|---|---|---|---|
| Legal risks | Legal risk of intellectual property rights | Current IP-related laws and regulations are not suited to generative AI models | Lack of governance norms | Strengthen the safety supervision of generative AI |
| Legal risks | Legal risk of data security and personal information | The data security system is not in place | Lack of governance norms | Strengthen the safety supervision of generative AI |
| Legal risks | Legal risk of fair competition in the market | Driven by an oligopolistic economic model | Lack of governance norms | Strengthen the safety supervision of generative AI |
| Ethical risks in science & technology | Ethical risks in the development phase | Training techniques are imperfect and moral requirements are lacking | Lack of governance norms; training techniques are imperfect | Strengthen the safety supervision of generative AI; accelerate the technological development of generative AI |
| Ethical risks in science & technology | Ethical risks in the application stage | The rules for the use of generated content are flawed | Lack of governance norms; training techniques are imperfect | Strengthen the safety supervision of generative AI; accelerate the technological development of generative AI |
| Ethical risks in science & technology | Ethical risks in the relief phase | Lack of governance norms | Lack of governance norms; training techniques are imperfect | Strengthen the safety supervision of generative AI; accelerate the technological development of generative AI |
| Risks of social governance | The non-authenticity of generative AI | | Training techniques are imperfect | Accelerate the technological development of generative AI |
| Risks of social governance | The non-reliability of generative AI | | Training techniques are imperfect | Accelerate the technological development of generative AI |
| Risks of social governance | The weak-controllability of generative AI | | Lack of governance norms | Strengthen the safety supervision of generative AI |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
