Preprint Article (This version is not peer-reviewed.)

China's Legal Practices for Challenges of Artificial General Intelligence

A peer-reviewed version of this preprint was published in:
Laws 2024, 13(5), 60. https://doi.org/10.3390/laws13050060

Submitted: 06 July 2024
Posted: 08 July 2024


Abstract
The artificial general intelligence (AGI) industry, represented by ChatGPT, has impacted social order during its development and has brought various risks and challenges, such as ethical concerns in science and technology, attribution of liability, intellectual property monopolies, data security, and algorithm manipulation. The development of AI is currently facing a crisis of trust. Governance of the AGI industry must therefore be prioritized, taking the implementation of the Interim Administrative Measures for Generative Artificial Intelligence Services as an opportunity. It is necessary to enhance the norms for the supervision and management of scientific and technological ethics within the framework of the rule of law. It is also essential to continuously improve the regulatory system for liability, balance the dual values of fair competition and the encouragement of innovation, and strengthen data security protection systems in the field of AI. Together, these measures will enable coordinated governance across multiple domains, stakeholders, systems, and tools.
Subject: Social Sciences - Law

1. Background

Nowadays, the world is at a historical intersection of a new round of technological revolution and industrial transformation. Following industrialization and informatization, intelligence has become a new developmental trend of the era.[1] Driven by national policies that promote the digital economy and by the demand for high-quality economic development, AI technology and industry have maintained rapid advancement. As technological innovation becomes more active and industrial integration deepens, technologies such as intelligent automation, recommendation, search, and decision-making have become deeply integrated into enterprise operations and social services, bringing significant economic and social benefits. In summary, artificial general intelligence (AGI) is playing an increasingly crucial role in optimizing industrial structures, enhancing economic activity, and aiding economic development.[2]
Generally speaking, artificial intelligence refers to algorithms or machines that achieve autonomous learning, decision-making, and execution based on a given amount of input information. The development of AI is built on improvements in computer processing power, advancements in algorithms, and the exponential growth of data.[3] Since John McCarthy first proposed the concept of artificial intelligence in 1956, the progress of AI has not always been smooth. It has experienced three periods of prosperity driven by machine learning, neural networks, and internet technologies, as well as two periods of stagnation due to insufficient computing power and imperfect reasoning models.[4] With the deepening implementation of AI and the recent popularity of technologies like GPT-4, a new wave of Artificial Intelligence Generated Content (AIGC) has emerged, demonstrating the capabilities of AGI. However, Generative Artificial Intelligence (GAI) has also raised concerns due to its inherent technical flaws and issues such as algorithmic black boxes, decision biases, privacy breaches, and data misuse, leading to a crisis of trust.
In this context, the key to addressing the challenges of AGI development lies in providing a governance framework that balances ethics, technology, and law.[5] This framework should respect the laws of technological development while aligning with the requirements of legal governance and the logic of scientific and technological ethics. However, both theoretical research and practical experience indicate that current governance of AGI lacks specificity, systematicity, comprehensiveness, and a long-term perspective. It is therefore urgent to use systematic and scientific legal methods to ensure and promote a positive cycle between technological breakthroughs and high-level competition. This approach should aim to integrate technological, industrial, institutional, and cultural innovation, and to advance the innovative development of AGI.
This article uses issues arising from representative GAI products and services as examples to explore the legal challenges brought about by the innovative development of AGI. Additionally, it discusses how to safeguard the innovative development of AGI by examining the current situation of China's response to these challenges. Based on this analysis, the article proposes legal solutions to promote the innovative development of AGI in the future, with the aim of enriching theoretical research in this field.

2. Challenges of Generative Artificial Intelligence Technology

Science and technology are the primary productive forces, and scientific and technological progress is an indispensable driver of industry development. With advancements in technologies such as GAI, people have discovered that AI is capable of accomplishing tasks previously unimaginable. However, people have also realized that the safety challenges posed by AI's development and its deep integration into daily life are becoming increasingly complex.
Artificial intelligence is mainly divided into specialized artificial intelligence and general artificial intelligence. Specialized artificial intelligence, also known as "narrow AI" or "weak AI," refers to AI programmed to perform a single task. It extracts information from specific data sets and cannot operate outside its designed task scenarios. Specialized AI is characterized by strong functionality but poor interoperability. General artificial intelligence (AGI), also known as "strong AI," "full AI," or "deep AI," possesses general human-like intelligence, which enables it to learn, reason, solve problems, and adapt to new environments like a human. AGI can address a wide range of issues without the need for knowledge specially encoded for particular application areas.
With the emergence of large models like GPT-4 that demonstrate powerful natural language processing capabilities, the possibility of achieving AGI with "big data model + multi-scenario" has increased. Although no technology has yet fully reached the level of AGI, some scholars believe that certain generative AI models have initially achieved a level close to AGI.[6]
Currently, the security issues raised by GPT-3.5, characterized by autonomous intelligence, data dependency, the "algorithmic black box," and a lack of interpretability, have attracted widespread attention. If technology products that truly meet AGI standards emerge, they could bring even more significant security challenges, with potentially more severe consequences and broader impacts on national security, social ethics, and individual life and property safety. Therefore, it is essential to explore the specific risks posed by generative AI in order to find ways to ensure that the innovative development of GAI benefits human society without causing harm.

2.1. Ethical Risks in Science and Technology

Scientific research and technological innovation must adhere to the norms of scientific and technological ethics, which are crucial for the healthy development of scientific activities. Currently, generative AI can generate content in text, image, audio, and video formats, and its application fields are extremely broad. However, the lack of established usage norms for this technology poses ethical risks, leading to distrust in the application of AI. This issue is especially serious during the transition from weak AI to strong AI, where AI's increasing autonomy presents unprecedented challenges to traditional ethical frameworks and the fundamental nature of human thought.
GAI services excel in areas such as news reporting and academic writing, making the technology an easy tool for creating rumors and forging papers. The academic journal Nature has published multiple analytical articles on ChatGPT, discussing how large language models (LLMs) like ChatGPT could disrupt academia, the potential infringement risks associated with generated content, and the need to establish usage regulations.[7] It is foreseeable that the lack of clear ethical standards could lead to frequent academic fraud, misinformation, and rumor spreading, thereby eroding trust in AI technology. This distrust could even extend to situations where AI technology is not used.[8]
Moreover, the responses provided by GAI through data and algorithms are uncertain. With the continuous iteration of GAI, some technologies have been considered to have reached the level of AGI, approaching human-like intelligence. As GAI develops further, it raises profound questions about whether the technology will independently adopt ethical principles similar to those of humans. AI is now progressing towards an era of strong AI with increasingly general capabilities, and we humans may find it challenging to control or even participate in the process of intelligent production, making the regulation of scientific and technological ethics even more critical.

2.2. Challenges in Responsibility Allocation

In recent years, incidents involving autonomous driving technologies from companies like Google, Tesla, and Uber have intensified the ethical debate over whether humans or AI should take responsibility. In the context of GAI services, the enhanced autonomy of AI, coupled with the need to optimize generated content based on external feedback, poses significant challenges. The traditional legal framework for causality is difficult to apply because of the numerous hidden layers within algorithmic black boxes, leading to regulatory challenges. This complexity increases the difficulty of seeking remedies and defending rights after infringement, making it harder to effectively protect user rights and exacerbating public distrust in AI.
On the one hand, the legal and ethical standards for AI are still underdeveloped, resulting in many infringement incidents. When such incidents occur, determining the liable party and correctly allocating responsibility becomes a major challenge. The concept of the "responsibility gap," introduced by Andreas Matthias in 2004, refers to the inability of algorithm designers and operators to foresee future outcomes during the algorithm's autonomous learning process. This implies that humans do not have sufficient control over the actions of machines, so machine builders and operators cannot be held liable under the traditional attribution of fault.[9]
On the other hand, GAI technology has "universal accessibility": its usage and cost thresholds are low, so a wide range of people can easily access and use the technology. This accessibility increases the risk of infringement incidents. For example, AI makes it simple to create and disseminate false information, and some users intentionally create and spread false information and rumors to boost web traffic, which increases the frequency of misinformation dissemination.[10]

2.3. Intellectual Property Challenges

With the widespread application of GAI, concerns have arisen regarding the legality of the training data sources for large AI models and whether the content they generate can be considered a work.
While it is widely accepted that GAI, as a computer program, can be protected as intellectual property, significant controversy remains over the intellectual property issues related to training on massive data. The lack of clear boundaries or definitions regarding intellectual property in data can easily result in a "tragedy of the commons." Conversely, overemphasizing the protection of data as intellectual property can hinder technological development, resulting in a "tragedy of the anti-commons."[11] Scholars are actively discussing how to balance the protection of intellectual property in data with the advancement of technological innovation.
Furthermore, there is debate over whether the content generated by AI can be recognized as a work. GAI produces content based on extensive data training and continuously refines its output according to user feedback. It is therefore challenging to determine whether the content is entirely autonomously generated by AI, which leads to disputes. Some scholars argue that GAI merely mimics the human creative process and that its content is not a product of human intellect. In practice, however, a few countries do recognize computer-generated content as a work. For instance, the UK's Copyright, Designs and Patents Act (CDPA) provides that content generated by a computer can be protected.[12] Though there is no consensus on the ownership of AI-generated content, most scholars agree that AI itself cannot be the rights holder of a work.

2.4. Data Security Risks

Data elements have immense potential value. If this value is fully realized through the pattern of "potential value - value creation - value realization," it can significantly drive social and economic development.[13] As users become more aware of protecting their data privacy and as the risks associated with data breaches increase, finding a balance between data protection and data-driven AI research is crucial for achieving public trust in AI technology.
In GAI technology, the first type of risk is the inherent security risk of the training data. The training outcomes of GAI models directly depend on the input data. However, due to limitations in data collection conditions, the proportion of data from different groups is not balanced. For example, current training corpora are predominantly in English and Chinese, making it difficult for other minority languages to be integrated into the AI world, thus presenting certain limitations.
The second type of risk arises from the processes of data collection and usage. With the advancement of internet technology, the amount of personal information has increased and become easier to collect. The growing scale of data is both the key to achieving GAI services and a primary source of trust crises. The training data volume for GPT-4 has reportedly reached 13 trillion tokens. Although mainstream GAI service providers have not disclosed their data sources, it is known that these data mainly come from public web-scraping datasets and large human language datasets. Accessing and processing such data in a secure, compliant, and privacy-protective manner is a challenge that demands higher standards for technical security safeguards.

2.5. Algorithm Manipulation Challenges

In the AI era, the uncontrollability introduced by the statistical nature of algorithms, the autonomous learning ability of AI, and the inexplicability of deep learning black-box models have become new factors leading to a crisis of user trust. From the perspective of technical logic, algorithms play a core role in the hardware infrastructure and applications of GAI, shaping user habits and values.[14] Because of the black-box problem in the decision-making processes of AI models, this uncontrollable technical defect gives rise to most algorithmic challenges.
Firstly, algorithms lack stability. GAI faces various attack methods targeting their data and systems, such as virus attacks, adversarial attacks, and backdoor attacks. For instance, feeding malicious comments into the model can effectively influence the recommendation algorithm, resulting in inaccurate recommendation outputs.
Secondly, the explainability of algorithms needs improvement. On the one hand, people are unclear about the processes and operational mechanisms within large models that contain vast amounts of parameters. On the other hand, it is also unclear which specific data from the database influence the AI algorithm's decision-making process.
Lastly, algorithmic bias and discrimination issues remain unresolved. Internally, if the algorithm developers set discriminatory factors or incorrectly configure certain parameters during the development stage, the algorithm will inherently exhibit biased tendencies. Externally, since GAI optimizes its content based on feedback, any biases and discrimination present in the feedback data will affect the final generated content.

3. China's Practice Plan

Artificial intelligence is a double-edged sword, bringing both convenience and risks to society. The trust crisis caused by the application of AI technology hinders further innovation and development. To align legal governance with AI technology innovation, the European Union passed the world's first comprehensive AI regulation, the "Artificial Intelligence Act." Meanwhile, the formulation of China's Artificial Intelligence Law has also gained significant attention, with the "Artificial Intelligence Law Draft" included in the State Council's 2023 legislative work plan. During the ninth collective study session of the Political Bureau of the 19th CPC Central Committee, held on October 31, 2018, it was explicitly stated that "we must strengthen the assessment and prevention of potential risks in AI development to ensure AI is safe, reliable, and controllable." All of this demonstrates China's proactive attitude towards, and emphasis on, supporting and regulating the development of AI technology and industry.
The Cyberspace Administration of China, together with six other departments, jointly issued the "Interim Measures for the Management of Generative AI Services" (hereinafter the "Interim Measures"), which came into effect on August 15, 2023. The "Interim Measures" focus on ex-ante regulation and effectively enhance GAI security governance capabilities through preventive supervision. The "Artificial Intelligence Law (Scholars' Draft)" was released in March 2024. By refining and reconstructing the regulatory targets, bodies, tools, and content of AI risk regulation, it outlines the basic framework of the AI regulatory system.[15]
Therefore, while vigorously developing AI, China places great emphasis on safety challenges and sets clear safety governance objectives. A comprehensive governance approach incorporating regulations, standards, and technical support has been adopted to implement an agile governance model. These actions enhance the capacity for AI safety governance and ensure the safe and healthy development of AI. In the process, China's governance of GAI exhibits two major characteristics: trustworthiness and human-centricity.

3.1. Trustworthiness: The Fundamental Value of AGI Innovation

Trustworthiness is the primary principle, or "imperative clause," that must be followed at the current stage of AGI innovation. It is also the focus of AI governance policy formulation.[16] Although the specific definition of trustworthy AI has yet to be unified, its core principles include stability, interpretability, privacy protection, and fairness. Stability refers to the ability of AI to make correct decisions in the presence of environmental noise and malicious attacks. Interpretability means that AI decisions can be understood by humans. Privacy protection refers to the AI system's ability to safeguard personal or group privacy from breaches. Fairness implies that the AI system should accommodate individual differences and treat different groups equitably.
In China, Academician He Jifeng of the Chinese Academy of Engineering first proposed the concept of "trustworthy AI" at the Xiangshan Science Conference in November 2017.[17] To continue fostering the development of trustworthy AGI, China aims to establish a comprehensive governance framework that integrates ethical guidelines, robust laws and regulations, and advanced technical safeguards. In December 2017, the Ministry of Industry and Information Technology issued the "Three-Year Action Plan to Promote the Development of a New Generation of Artificial Intelligence Industry (2018-2020)". In June 2019, the New Generation Artificial Intelligence Governance Expert Committee released the "New Generation AI Governance Principles - Developing Responsible AI", outlining the framework and action guidelines for AI governance. By issuing policies, China aims to guide the legal development of AI and address specific challenges posed by AI.
Furthermore, by fostering collaboration among government, business, and research institutions, China encourages enterprises to actively participate in AI governance. In June 2020, Ant Group unveiled its Trusted AI technology architecture at the Global Artificial Intelligence Conference. In July 2021, the JD Exploration Research Institute and the China Academy of Information and Communications Technology jointly released China's first "Trusted AI White Paper" at the World Artificial Intelligence Conference. Both highlight privacy protection, stability, interpretability, and fairness as the four fundamental principles of trustworthy AI. These efforts create a balanced environment that supports technological advancement and societal trust in AI.

3.2. Human-Centric: The Value Orientation of AGI Development

The safety baseline for the innovation and development of AGI encompasses three main elements: people, technology, and trust. Technology serves as the foundation for the robust growth and stability of the AGI industry. Trust is the pillar that promotes the continuous and healthy development of the AGI sector. People are the core objects of protection under trustworthy AI laws and policies. A human-centric approach is the fundamental principle of AGI innovation and development. In fact, the issue of trust does not depend entirely on the underlying logic of AI development and application, but also on how well the law supervises AI technology. The key question is whether AI's trustworthiness can be achieved from a legal regulatory perspective.
In recent years, China has undertaken various legal explorations and practices to promote a human-centric approach to AGI. In terms of legislation, Shanghai issued the "Shanghai Regulations on Testing and Application of Intelligent Connected Vehicles" in December 2021, the "Shanghai Regulations on Promoting the Development of the AI Industry" in September 2022, and the "Pudong New Area Regulations on Promoting Innovation in Driverless Intelligent Connected Vehicles" in November 2022. These regulations emphasize the trustworthiness of AI algorithms, ethics, governance, and supervision, and provide detailed provisions on technical standards, data security, and personal information protection in the field of intelligent connected vehicles.
Additionally, the "Shanghai Regulations on Promoting the Development of the AI Industry" stipulate that the Shanghai Municipal Government will establish the "Shanghai AI Strategic Advisory Expert Committee" to provide consultation on major strategies and decisions in AI development. It also sets up the "AI Ethics Expert Committee" to formulate ethical guidelines and promote discussions and standard-setting on major ethical issues in AI, both domestically and internationally.
In July 2021, the Shanghai Municipal Commission of Economy and Informatization and the Shanghai Municipal Market Supervision Administration issued the "Guiding Opinions on Promoting the Construction of a New Generation AI Standard System" (hereinafter the "Opinions"). The "Opinions" focus on areas such as intelligent connected vehicles, medical imaging diagnostics, visual image identity recognition, and intelligent sensors, aiming to accelerate the construction of a comprehensive trustworthy AI evaluation system and to establish common standards for testing and evaluation. On safety ethics, the "Opinions" propose guiding and regulating AI development through safety and ethical standards, enhancing safety assurance capabilities, establishing proactive governance rules, and reinforcing the development of standards for privacy protection and application scenario safety.
As social understanding deepens, the participants involved in ensuring the innovative development of AGI will become more diverse, fostering coordinated interaction among the various entities and elements in the industry chain. Based on the legal frameworks of the "Cybersecurity Law," "Data Security Law," and "Personal Information Protection Law," Shenzhen, Shanghai, and Beijing are accelerating AI legislation, establishing graded and classified norms for AI applications in public spaces, and forming AI governance systems with local characteristics. In addition, industry associations, alliances, and research institutions play an active role in formulating and publishing standards; their achievements in areas such as safety and reliability provide a reference for the human-centric development of AGI. Cases like the "first facial recognition case"[18] have drawn widespread public attention, increasing public understanding of and demand for AI and significantly enhancing participation. This demand-driven approach pushes the development of AGI in a human-centric direction.

4. Conclusions

Strategic emerging industries are the new pillars of future development, and the legal landscape of the digital era should anticipate the future form of global governance for AGI. With GAI technologies advancing rapidly in a short period, the era of AGI is not far off. The wide range of GAI applications highlights the revolutionary significance of AGI, making the AI industry a new focal point of global competition. However, the innovative development of the AGI industry also faces challenges related to technological ethics, intellectual property, accountability mechanisms, data security, and algorithmic manipulation, all of which undermine the trustworthiness of AI.
Therefore, it is necessary to further develop a legal regulatory framework for the AI industry and improve the governance ecosystem for technological ethics. By introducing relevant codes of conduct and ethical guidelines, we can promote the healthy and sustainable development of the AI industry within a legal framework. Addressing the aforementioned issues requires strategic research and the pursuit of feasible technical solutions. By establishing technological ethics standards, improving the system for regulating liability, protecting competition while encouraging innovation, enhancing AI data security measures, and standardizing algorithmic regulation in the AI field, the obstacles on the path to innovative development of AGI can eventually be removed.

Author Contributions

Conceptualization, methodology, writing - original draft preparation, project administration, funding acquisition, B.C.; data curation, J.C.; writing - review and editing, B.C. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the major project in Judicial Research of the Supreme People's Court of the P.R.C. (grant number ZGFYZDKT202317-03) and the key project of Humanities and Social Science research from the Ministry of Education of the P.R.C. (grant number 19JJD820009).

Data Availability Statement

All data underlying the results are available as part of the article and no additional source data are required.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun W.P., Li Y., "On the Ethical Principles of the Development of Artificial Intelligence," Philosophical Analysis, 01(2022), pp. 6-17.
  2. Guo Y.B., Hu L.J., "Study on the Impact of AI and Human Capital on Industrial Structure Upgrading: Empirical Evidence from 30 Chinese Provinces," Soft Science, 05(2022), pp. 21-26.
  3. Cao J.F., Fang L.M., "The Path and Enlightenment of EU's Ethics and Governance of Artificial Intelligence," AI-View, 04(2019), pp. 40-48.
  4. Jiang L.D., Xue L., "The Current Challenges and Paradigm Transformation of New-Generation AI Governance in China," Journal of Public Management, 02(2022), pp. 6-16.
  5. Zhao J.W., "The Theoretical Misunderstanding and Path Transition in the Application Risk Governance of Generative Artificial Intelligence Technology," Jingchu Law Review, 03(2023), pp. 47-58.
  6. Bubeck, S., et al., "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," arXiv:2303.12712 (2023).
  7. Stokel-Walker, C., Van Noorden, R., "What ChatGPT and Generative AI Mean for Science," Nature, 614(7947), pp. 214-216 (Feb. 2023).
  8. Chen B., Lin S.Y., "Facing the Trust Crisis in Artificial Intelligence and Accelerating the Development of Trustworthy AIGC," First Financial Daily, April 25, 2023, p. A11.
  9. Matthias, A., "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata," Ethics and Information Technology, 6(2004), pp. 175-183.
  10. Chen B., Lin S.Y., "Facing the Trust Crisis in Artificial Intelligence and Accelerating the Development of Trustworthy AIGC," First Financial Daily, April 25, 2023, p. A11.
  11. Peng H., "The Logical Structure and Boundary Setting of Data Ownership: From the Perspective of the 'Tragedy of the Commons' and the 'Tragedy of the Anti-commons'," Journal of Comparative Law, 01(2022), pp. 105-119.
  12. Copyright, Designs and Patents Act 1988, Section 9(3) (UK).
  13. Chen B., "Scientific Construction of Data Element Trading System," Frontier, 06(2023), pp. 68-80.
  14. Zhang L.H., "Algorithm Accountability in Platform Regulation," Oriental Law, 03(2021), pp. 24-42.
  15. Hu X.W., Liu L., "The Full Process Regulatory Logic and Institutional Response of Artificial Intelligence Risks," Study and Practice, 05(2024), pp. 22-30.
  16. Chen J.D., "Theoretical System and Core Issues of Artificial Intelligence Law," Oriental Law, 01(2023), pp. 62-78.
  17. First Financial Information, "Exclusive Interview with Academician Ji-feng He: The Most Important Leverage for Achieving Trustworthy Artificial Intelligence Lies in People," Tencent News, July 16, 2021. Available online: https://view.inews.qq.com/k/20210716A07WBI00?web_channel=wap&openApp=false (accessed on 30 June 2024).
  18. Bing Guo vs. Hangzhou Safari Park Co., Ltd., Service Contract Dispute Case, Civil Judgment of the People's Court of Fuyang District, Hangzhou, Zhejiang Province, (2019) Zhe 0111 Min Chu 6971.
  19. Shi J.Y., Liu Z.X., "The Rule of Law Path of Ethical Governance of Science and Technology: Taking the Governance of Genome Editing as an Example," Academia Bimestris, 05(2022), pp. 185-195.
  20. "Explanation on the 'Measures for the Ethical Review of Science and Technology (Trial) (Draft for Comments)'," Ministry of Science and Technology of the People's Republic of China, April 4, 2023. Available online: https://www.most.gov.cn/wsdc/202304/t20230404_185388.html (accessed on 30 June 2024).
  21. Hu W., "Rules and Ways of Liability on Mining Damage," Journal of Political Science and Law, 02(2015), pp. 121-128.
  22. Li C.L., "Eco-injury: From the Perspective of Law of Torts," Modern Law Science, 01(2010), pp. 65-75.
  23. Fan Y.J., Zhang X., "The Mode Transformation, Selection, and Approach of Data Security Governance," E-Government, 04(2022), pp. 119-129.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.