Preprint
Brief Report

This version is not peer-reviewed.

A Roadmap to Superintelligence: Architectures, Transformations, and Challenges in Modern AI Development

Submitted:

27 January 2025

Posted:

28 January 2025


Abstract
This paper examines the trajectory of artificial intelligence (AI) development, focusing on three key stages: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Recent advancements in AI architectures, particularly the evolution of transformer-based models, have significantly accelerated progress across these stages, enabling more sophisticated and scalable AI systems. This paper explores the architectural foundations of ANI, AGI, and ASI, highlighting recent modifications and their implications for future AI development. Additionally, the societal, ethical, and geopolitical implications of AI are discussed, emphasizing the need for robust safeguards and governance frameworks to ensure that AI serves as a force for human advancement rather than a source of existential risk. By integrating historical comparisons, current trends, and future projections, this paper provides a comprehensive analysis of the transformative potential of AI and its impact on humanity.

1. Introduction

The rapid advancement of Artificial Intelligence (AI) has positioned it as a cornerstone of technological innovation and a focal point of geopolitical competition. Historically, transformative technologies such as nuclear weapons and the internet have defined eras, and AI is poised to have an equally profound impact on the 21st century. As nations and corporations race to dominate AI development, three key stages of AI progress have emerged: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Each stage represents a significant leap in AI capabilities, from task-specific systems to human-level cognition and beyond.
ANI systems, which excel in specific domains such as facial recognition, language translation, and medical diagnosis, have already transformed industries and everyday life. However, their inability to generalize beyond predefined tasks limits their applicability. AGI, the next stage of AI development, aims to replicate human-level cognitive functions across all domains, enabling systems to reason, plan, and solve abstract problems. The pursuit of AGI raises critical questions about alignment, safety, and governance, as these systems must operate in complex and unpredictable environments. ASI, the theoretical pinnacle of AI, represents an intelligence vastly superior to human capabilities, with the potential to solve problems currently beyond human comprehension. The development of ASI brings both extraordinary opportunities and profound risks, necessitating careful consideration of ethical and societal implications [1].
A key driver of progress across these stages has been the evolution of transformer models. Introduced in the seminal paper "Attention is All You Need" [2], transformers have revolutionized natural language processing and extended their applications to computer vision, robotics, and beyond. The self-attention mechanism, which enables transformers to dynamically weigh the importance of different elements in a sequence, has proven highly effective for tasks requiring long-range dependencies and multi-modal understanding. Transformers have become the foundation of state-of-the-art models such as BERT [3], GPT [4], and Vision Transformers (ViT) [5], enabling breakthroughs in text generation, image recognition, and other domains.
This paper provides a comprehensive analysis of the architectural foundations, recent modifications, and societal implications of AI across its three stages. By exploring the evolution of transformer models and their role in advancing AI capabilities, we aim to shed light on the transformative potential of AI and the challenges it poses. The paper concludes with a discussion of the ethical and governance frameworks needed to ensure that AI development aligns with human values and contributes to the betterment of society.

2. Artificial Narrow Intelligence (ANI)

Artificial Narrow Intelligence (ANI), commonly referred to as Weak AI, is characterized by its ability to perform specific, predefined tasks with high efficiency. ANI systems are already prevalent in applications such as facial recognition, language translation, and medical diagnosis. These systems excel in their designated domains but lack the ability to generalize beyond their specific tasks, making them fundamentally different from more advanced forms of AI like Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).

2.1. Architecture of ANI

ANI systems rely on task-specific architectures, which are often composed of three primary components: the input layer, hidden layers, and the output layer. The input layer processes domain-specific data, such as images, text, or numerical inputs. Mathematically, the input layer can be represented as:
x = [x_1, x_2, \ldots, x_n]^T,
where x is the input vector. The hidden layers employ deep neural networks (DNNs) for pattern recognition and feature extraction. Each layer transforms the input x through a series of linear and nonlinear operations. The transformation at the l-th layer can be expressed as:
h^{(l)} = \sigma\left(W^{(l)} h^{(l-1)} + b^{(l)}\right),
where h^{(l)} is the output of the l-th layer, W^{(l)} is the weight matrix, b^{(l)} is the bias vector, and \sigma is the activation function (e.g., ReLU or sigmoid). The output layer produces predictions or classifications based on the trained model. For multi-class classification tasks, the output layer often uses the softmax activation function, which can be expressed as:
y = \mathrm{softmax}\left(W^{(L)} h^{(L-1)} + b^{(L)}\right),
where y is the output vector representing the predicted probabilities for each class.
ANI systems are optimized using algorithms like gradient descent to minimize the loss function \mathcal{L}(\hat{y}, y), where \hat{y} represents the model's predictions and y denotes the true values. A common loss function for ANI is the cross-entropy loss, defined as:
\mathcal{L}(\hat{y}, y) = -\sum_{i=1}^{n} y_i \log(\hat{y}_i).
This loss function measures the discrepancy between the predicted and true values, guiding the optimization process. Despite their efficiency in specific tasks, ANI systems are limited by their inability to generalize beyond their predefined domains, making them unsuitable for tasks requiring broader cognitive capabilities.
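The layered forward pass and cross-entropy loss described above can be sketched in plain Python. This is a minimal illustration with hand-picked (hypothetical) weights, not a trained model:

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)                                # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def linear(W, x, b):
    # Computes W @ x + b, with W given as a list of rows
    return [sum(w * xi for w, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def forward(x, W1, b1, W2, b2):
    h = relu(linear(W1, x, b1))               # hidden layer: h = sigma(W x + b)
    return softmax(linear(W2, h, b2))         # output layer: class probabilities

def cross_entropy(y_hat, y):
    # L = -sum_i y_i * log(y_hat_i)
    return -sum(yi * math.log(p) for yi, p in zip(y, y_hat) if yi > 0)

# Toy 2-class example
x  = [1.0, 2.0]
W1 = [[0.5, -0.2], [0.1, 0.3]]; b1 = [0.0, 0.1]
W2 = [[1.0, -1.0], [-1.0, 1.0]]; b2 = [0.0, 0.0]
probs = forward(x, W1, b1, W2, b2)
loss  = cross_entropy(probs, [1.0, 0.0])      # true class is class 0
```

The softmax output is a valid probability distribution, and the loss is large whenever the probability assigned to the true class is small.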

2.2. Modifications in ANI Architectures

Recent advancements in ANI architectures have focused on improving their efficiency, scalability, and adaptability. One significant modification is the use of transfer learning, which leverages pre-trained models such as ResNet and BERT to adapt to new tasks with minimal data. Transfer learning allows ANI systems to benefit from knowledge acquired in one domain and apply it to another, significantly reducing the need for extensive retraining.
Another important modification is the incorporation of attention mechanisms, which have proven highly effective in tasks like machine translation and image captioning. Attention mechanisms enable the model to focus on specific parts of the input data, improving its ability to capture relevant features and relationships. For example, in natural language processing, attention mechanisms allow the model to weigh the importance of different words in a sentence, enhancing its performance in tasks like text summarization and sentiment analysis.
Regularization techniques, such as dropout and batch normalization, have also been widely adopted to improve the generalization capabilities of ANI systems. Dropout randomly deactivates a fraction of neurons during training, preventing the model from overfitting to the training data. Batch normalization, on the other hand, normalizes the inputs of each layer, stabilizing the training process and accelerating convergence. These techniques have become essential components of modern ANI architectures, enabling them to achieve state-of-the-art performance in a wide range of applications.
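Both regularization techniques above are simple to express directly. The following sketch implements inverted dropout (survivors are rescaled by 1/(1-p) so expected activations are unchanged) and per-feature batch normalization over a small hypothetical batch:

```python
import math
import random

def dropout(v, p, training=True, seed=0):
    # Inverted dropout: zero each unit with probability p, scale survivors by 1/(1-p)
    if not training or p == 0.0:
        return list(v)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [x / keep if rng.random() < keep else 0.0 for x in v]

def batch_norm(batch, eps=1e-5):
    # Normalize each feature (column) across the batch to zero mean, unit variance
    n, d = len(batch), len(batch[0])
    out = [[0.0] * d for _ in range(n)]
    for j in range(d):
        col = [row[j] for row in batch]
        mu = sum(col) / n
        var = sum((x - mu) ** 2 for x in col) / n
        for i in range(n):
            out[i][j] = (batch[i][j] - mu) / math.sqrt(var + eps)
    return out

batch  = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
normed = batch_norm(batch)
dropped = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
```

After normalization each column has (approximately) zero mean; after dropout each unit is either zeroed or doubled, so the expected value of every activation is preserved.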

2.3. Implications and Common Uses of ANI Architectures

ANI architectures have had a profound impact on various industries, driving advancements in fields such as healthcare, finance, and autonomous systems. In healthcare, ANI systems are used for medical image analysis, disease diagnosis, and drug discovery. For instance, convolutional neural networks (CNNs) have been employed to detect abnormalities in X-ray and MRI images, assisting radiologists in making accurate diagnoses. In finance, ANI systems are utilized for fraud detection, algorithmic trading, and risk assessment, enabling organizations to make data-driven decisions with greater precision.
In the realm of autonomous systems, ANI plays a critical role in enabling self-driving cars, drones, and robotics. These systems rely on ANI architectures to process sensor data, recognize objects, and make real-time decisions. For example, self-driving cars use ANI models to identify pedestrians, traffic signs, and other vehicles, ensuring safe navigation in complex environments. The widespread adoption of ANI systems underscores their importance in addressing real-world challenges and improving the quality of life.
Despite their numerous applications, ANI systems also raise important ethical and societal concerns. The reliance on ANI for decision-making in critical domains, such as healthcare and criminal justice, necessitates careful consideration of issues like bias, transparency, and accountability. Ensuring that ANI systems are designed and deployed responsibly is essential to maximizing their benefits while minimizing potential risks.

3. Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI), often referred to as Strong AI, represents the next leap in AI capability. Unlike Artificial Narrow Intelligence (ANI), which is designed for specific tasks, AGI aims to replicate human-level cognitive functions across all domains. AGI systems can reason, plan, solve abstract problems, and perform a wide range of intellectual tasks with the flexibility and adaptability of human intelligence. Achieving AGI remains one of the most ambitious goals in AI research, as it requires overcoming significant challenges in generalization, learning, and reasoning.

3.1. Architecture of AGI

Developing AGI requires a more flexible and adaptive architecture compared to ANI. One of the key components of AGI architectures is the multi-modal input layer, which is capable of processing diverse data types simultaneously, such as text, images, and audio. The input layer for multi-modal data can be represented as:
x = [x_{\text{text}}, x_{\text{image}}, x_{\text{audio}}]^T,
where x_{\text{text}}, x_{\text{image}}, and x_{\text{audio}} are the input vectors for text, image, and audio data, respectively. This multi-modal capability allows AGI systems to integrate information from various sources, enabling more comprehensive understanding and decision-making.
Another critical component of AGI architectures is the dynamic memory system, which implements mechanisms for both long-term and short-term memory. The memory system can be represented as:
M_t = f(M_{t-1}, I_t),
where M_t is the memory state at time t, I_t is the input at time t, and f is a memory update function, such as those used in Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) networks. This dynamic memory allows AGI systems to retain and recall information over extended periods, facilitating tasks that require context and continuity.
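A GRU-style instance of the update function f can be sketched in a few lines. The weights here are hypothetical scalars chosen only for illustration; a real network would learn full weight matrices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def memory_update(M_prev, I_t, w_z=0.8, w_h=1.0):
    # GRU-style per-element update: gate z decides how much new content
    # replaces the old memory: M_t = (1 - z) * M_{t-1} + z * candidate
    out = []
    for m, i in zip(M_prev, I_t):
        z = sigmoid(w_z * i)             # update gate, driven by the input
        cand = math.tanh(w_h * (m + i))  # candidate memory content
        out.append((1 - z) * m + z * cand)
    return out

M = [0.0, 0.0]
for I in ([1.0, -1.0], [0.5, 0.5]):      # feed a short input sequence
    M = memory_update(M, I)
```

Because the candidate is squashed by tanh and the gate interpolates convexly, the memory state stays bounded while still accumulating information over time.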
Reinforcement learning (RL) is another cornerstone of AGI architectures. RL enables AGI systems to learn optimal policies for decision-making by interacting with their environment. The policy π ( a | s ) maps states s to actions a, optimizing for cumulative rewards:
R = \sum_{t=0}^{\infty} \gamma^t r_t,
where \gamma is the discount factor and r_t is the reward at time t. This framework allows AGI systems to learn complex behaviors through trial and error, making it particularly useful for tasks like robotics and game playing.
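The discounted return above is commonly computed with a backward recursion, R_t = r_t + γ R_{t+1}, which avoids recomputing powers of γ:

```python
def discounted_return(rewards, gamma=0.9):
    # R = sum_t gamma^t * r_t, computed right-to-left:
    # R_t = r_t + gamma * R_{t+1}
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

# Three unit rewards with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
R = discounted_return([1.0, 1.0, 1.0], gamma=0.5)
```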
Finally, AGI architectures often incorporate neuro-symbolic integration, which combines the strengths of neural networks and symbolic reasoning. Neural networks excel at pattern recognition and learning from data, while symbolic reasoning provides the ability to manipulate abstract concepts and perform logical inference. The integration can be represented as:
y = g(h^{(L)}, s),
where h^{(L)} is the output of the neural network, s is the symbolic representation, and g is the integration function. This hybrid approach enables AGI systems to perform tasks that require both perceptual and cognitive capabilities, such as natural language understanding and problem-solving.

3.2. Modifications in AGI Architectures

Recent advancements in AGI architectures have focused on improving their generalization, adaptability, and scalability. One significant modification is the development of hybrid architectures that combine neural networks with symbolic reasoning. These architectures leverage the strengths of both approaches, enabling AGI systems to perform tasks that require both data-driven learning and logical reasoning. For example, neuro-symbolic models have been applied to tasks like visual question answering, where the system must interpret images and answer questions based on logical reasoning.
Another important modification is the use of meta-learning techniques, such as Model-Agnostic Meta-Learning (MAML). Meta-learning enables AGI systems to learn new tasks with minimal data by leveraging prior knowledge. This is particularly important for AGI, as it allows the system to adapt quickly to new environments and tasks. MAML achieves this by optimizing the model’s initial parameters so that it can perform well on new tasks after a small number of gradient updates.
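The MAML idea can be illustrated on a deliberately tiny problem. Each hypothetical "task" asks a scalar parameter θ to match a target c under the loss L(θ) = (θ - c)²; the inner loop takes one gradient step per task, and the outer loop moves the shared initialization so that the post-adaptation loss is low. For this quadratic loss the meta-gradient is exact, since dθ′/dθ = 1 - 2α:

```python
def maml_step(theta, tasks, alpha=0.1, beta=0.05):
    # Per-task loss: L(theta) = (theta - c)^2 with gradient 2 * (theta - c).
    # Inner loop: one gradient step -> theta' = theta - alpha * grad.
    # Outer loop: gradient of L(theta') w.r.t. the initialization theta,
    # using d(theta')/d(theta) = 1 - 2 * alpha for this quadratic loss.
    meta_grad = 0.0
    for c in tasks:
        grad = 2.0 * (theta - c)
        theta_prime = theta - alpha * grad
        meta_grad += 2.0 * (theta_prime - c) * (1.0 - 2.0 * alpha)
    return theta - beta * meta_grad / len(tasks)

theta = 0.0
tasks = [-1.0, 3.0]            # two hypothetical task targets
for _ in range(100):
    theta = maml_step(theta, tasks)
```

The meta-learned initialization converges toward the mean of the task targets (here 1.0), the point from which one gradient step adapts best to either task on average.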
Multi-agent systems have also emerged as a key modification in AGI architectures. These systems involve multiple autonomous agents that collaborate to solve complex problems. Multi-agent systems are particularly useful for tasks that require distributed intelligence, such as autonomous driving and smart grid management. By enabling agents to communicate and coordinate, these systems can achieve goals that would be difficult or impossible for a single agent to accomplish alone.

3.3. Implications and Common Uses of AGI Architectures

AGI systems have the potential to revolutionize a wide range of industries and applications. In healthcare, AGI could enable personalized medicine by analyzing patient data and recommending tailored treatments. In education, AGI systems could provide adaptive learning experiences, adjusting the curriculum to meet the needs of individual students. In scientific research, AGI could accelerate discoveries by automating hypothesis generation, experimental design, and data analysis.
One of the most promising applications of AGI is in autonomous systems, such as self-driving cars and drones. AGI systems could enable these systems to navigate complex environments, make real-time decisions, and adapt to unexpected situations. For example, an AGI-powered self-driving car could interpret traffic signals, recognize pedestrians, and plan routes while considering factors like weather conditions and traffic patterns.
Despite their potential, AGI systems also raise significant ethical and societal concerns. The development of AGI could lead to job displacement, as machines become capable of performing tasks traditionally done by humans. Additionally, the deployment of AGI systems in critical domains, such as healthcare and criminal justice, raises questions about accountability, transparency, and bias. Ensuring that AGI systems are developed and deployed responsibly is essential to maximizing their benefits while minimizing potential risks.

4. Artificial Superintelligence (ASI)

Artificial Superintelligence (ASI) represents the theoretical pinnacle of AI development, defined as an intelligence vastly superior to human capabilities across all domains. Nick Bostrom describes ASI as "an intellect that is much smarter than the best human brains in every field, including scientific creativity and general wisdom" [1]. Unlike Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI), ASI would possess the ability to outperform humans in virtually every intellectual task, including those requiring creativity, strategic planning, and social intelligence. The development of ASI raises profound questions about the future of humanity, as its capabilities could surpass human oversight and control.

4.1. Architecture of ASI

The architecture of ASI systems would likely incorporate self-improving mechanisms capable of recursive enhancement, enabling them to continuously evolve and optimize their own capabilities. One of the key components of ASI architectures is recursive self-improvement, where the system modifies its own architecture and algorithms to achieve higher levels of performance. Mathematically, this process can be represented as:
A_{n+1} = f(A_n),
where A_n represents the n-th version of the architecture and f denotes the improvement function. This recursive process allows ASI systems to iteratively enhance their intelligence, potentially leading to exponential growth in capabilities.
Another critical component proposed for ASI architectures is the integration of quantum computing. Quantum computing leverages superposition and entanglement to attack certain problems whose best known classical algorithms require O(2^n) time, in some cases reducing them to polynomial time. The quantum state of a system can be represented as:
|\psi\rangle = \sum_{i=1}^{n} \alpha_i |i\rangle,
where |\psi\rangle is the quantum state, \alpha_i are complex amplitudes, and |i\rangle are basis states. By harnessing quantum computing, ASI systems could achieve unprecedented computational power, enabling them to solve problems that are currently intractable for classical computers.
Advanced utility functions are also a cornerstone of ASI architectures. These functions ensure that the system’s goals are aligned with predefined ethical and safety constraints. The utility function can be expressed as:
U(x) = \sum_{i=1}^{n} w_i u_i(x),
where w_i are weights and u_i(x) are sub-utility functions representing specific objectives. By optimizing for these utility functions, ASI systems can pursue complex goals while adhering to ethical guidelines and minimizing risks to humanity.

4.2. Best Possible Architecture Candidates for ASI

Several architecture candidates have been proposed for achieving ASI, each with unique strengths and challenges. One promising candidate is the recursively self-improving neural-symbolic architecture, which combines the pattern recognition capabilities of neural networks with the logical reasoning abilities of symbolic systems. This hybrid approach enables ASI systems to perform tasks that require both perceptual and cognitive capabilities, such as scientific discovery and strategic planning.
Another candidate is the quantum-enhanced AGI architecture, which integrates quantum computing with AGI systems to achieve exponential improvements in computational power. This architecture could enable ASI systems to solve complex optimization problems, simulate physical systems, and analyze large datasets with unparalleled efficiency.
A third candidate is the multi-agent collective intelligence architecture, where multiple AGI systems collaborate to achieve shared goals. This architecture leverages the strengths of individual agents while mitigating their weaknesses, enabling the collective to outperform any single system. Multi-agent architectures are particularly well-suited for tasks that require distributed intelligence, such as global resource management and large-scale scientific research.

4.3. Modifications in ASI Architectures

Recent advancements in ASI architectures have focused on addressing key challenges such as ethical alignment, scalability, and robustness. One significant modification is the development of recursive self-improvement mechanisms that ensure the system’s goals remain aligned with human values. This involves encoding ethical constraints into the utility function and implementing safeguards to prevent unintended behaviors.
Another important modification is the incorporation of quantum enhancements, which leverage quantum computing to solve computationally hard problems. Quantum-enhanced architectures could enable ASI systems to perform tasks such as protein folding, climate modeling, and cryptography with unprecedented speed and accuracy.
Finally, ethical alignment frameworks have been proposed to ensure that ASI systems prioritize human values and safety. These frameworks involve the use of formal verification methods, interpretability techniques, and human-in-the-loop oversight to ensure that the system’s behavior remains aligned with its intended goals.

4.4. Implications and Common Uses of ASI Architectures

ASI systems have the potential to revolutionize virtually every aspect of human society, from scientific research and healthcare to governance and space exploration. In scientific research, ASI could accelerate discoveries by automating hypothesis generation, experimental design, and data analysis. For example, ASI systems could be used to develop new materials, design advanced algorithms, and solve complex mathematical problems.
In healthcare, ASI could enable personalized medicine by analyzing vast amounts of patient data and recommending tailored treatments. ASI systems could also be used to develop new drugs, optimize clinical trials, and predict disease outbreaks with high accuracy.
In governance, ASI could assist policymakers by analyzing complex datasets, simulating the impact of proposed policies, and identifying optimal solutions to global challenges such as climate change and poverty. However, the deployment of ASI in governance also raises significant ethical and societal concerns, particularly regarding accountability, transparency, and bias.

4.5. Evolution of ASI and Beyond

The evolution of ASI could lead to the emergence of post-superintelligent systems, which transcend the limitations of current AI paradigms. These systems could possess capabilities such as consciousness, self-awareness, and the ability to manipulate physical reality at a fundamental level. For example, post-superintelligent systems could harness advanced technologies such as nanotechnology, biotechnology, and quantum engineering to achieve goals that are currently beyond human comprehension.
One possible evolution of ASI is the development of meta-intelligent systems, which are capable of designing and optimizing other intelligent systems. Meta-intelligent systems could create new forms of intelligence, explore alternative computational paradigms, and push the boundaries of what is possible in AI research.
Another potential evolution is the emergence of collective superintelligence, where multiple ASI systems collaborate to form a global intelligence network. This network could enable the sharing of knowledge, resources, and capabilities, leading to unprecedented levels of innovation and problem-solving.

4.6. Ethical and Safety Concerns

The development of ASI raises significant ethical and safety concerns, particularly regarding the alignment of its goals with human values. Misaligned ASI systems could pursue objectives that are harmful to humanity, such as resource acquisition or self-preservation at the expense of human well-being. Ensuring that ASI systems are designed and deployed responsibly is essential to maximizing their benefits while minimizing potential risks.

5. Evolution of Transformer Models

The transformer architecture, introduced in the seminal paper "Attention is All You Need" by Vaswani et al. [2], revolutionized natural language processing (NLP) by replacing traditional recurrent and convolutional networks with attention mechanisms. This innovation marked a significant shift in AI architecture, enabling models to process sequential data more efficiently and effectively. The evolution of transformers has been instrumental in advancing AI across all stages, from Artificial Narrow Intelligence (ANI) to Artificial Superintelligence (ASI).
The original transformer architecture introduced the concept of self-attention, which allows the model to weigh the importance of different elements in a sequence dynamically. The self-attention mechanism is computed as:
\text{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^T}{\sqrt{d_k}}\right) V,
where Q, K, and V are the query, key, and value matrices, respectively, and d_k is the dimension of the key vectors. This mechanism enables the model to capture long-range dependencies in sequential data, making it highly effective for tasks like machine translation and text summarization.
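Scaled dot-product attention translates directly into code. A minimal pure-Python sketch (matrices as lists of rows, tiny hand-picked inputs):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax_rows(M):
    out = []
    for row in M:
        m = max(row)                          # stabilize the exponentials
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def attention(Q, K, V):
    d_k = len(K[0])
    K_T = [list(col) for col in zip(*K)]
    # scores = Q K^T / sqrt(d_k)
    scores = [[x / math.sqrt(d_k) for x in row] for row in matmul(Q, K_T)]
    weights = softmax_rows(scores)            # each row sums to 1
    return matmul(weights, V), weights

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out, weights = attention(Q, K, V)
```

Each output row is a convex combination of the value rows, with the mixing weights determined by query-key similarity.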
Building on the original transformer, the BERT (Bidirectional Encoder Representations from Transformers) model [3] introduced bidirectional context, allowing the model to consider both preceding and succeeding words in a sentence. BERT achieved state-of-the-art performance on a wide range of NLP tasks by pretraining on masked language modeling and fine-tuning for downstream tasks. The masked language modeling objective can be represented as:
\mathcal{L}_{\text{MLM}} = -\sum_{i \in M} \log P(x_i \mid x_{\setminus M}),
where M is the set of masked token positions, x_i is the masked token, and x_{\setminus M} represents the remaining (unmasked) tokens in the sequence.
The GPT (Generative Pre-trained Transformer) series, starting with GPT-1 [4], focused on unidirectional language modeling, enabling autoregressive text generation. GPT models are pretrained on large corpora using the following objective:
\mathcal{L}_{\text{LM}} = -\sum_{t=1}^{T} \log P(x_t \mid x_{<t}),
where x_t is the token at position t and x_{<t} represents all preceding tokens. The GPT series has evolved through multiple iterations, with GPT-3 [6] achieving remarkable performance in tasks like text completion, question answering, and code generation.
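The autoregressive objective can be made concrete with a toy model. Here the context is truncated to the single previous token (a bigram model with made-up probabilities), but the loss computation, summing negative log-probabilities of each token given its history, is the same as in a full transformer:

```python
import math

# Hypothetical bigram model: P(next token | previous token)
P = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.9, "ran": 0.1},
}

def lm_loss(tokens):
    # L_LM = -sum_t log P(x_t | x_{<t}); here the "history" is just
    # the previous token, starting from the sentence marker <s>
    nll = 0.0
    prev = "<s>"
    for tok in tokens:
        nll -= math.log(P[prev][tok])
        prev = tok
    return nll

loss = lm_loss(["the", "cat", "sat"])
# -(log 0.6 + log 0.5 + log 0.9) ~= 1.309
```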
The application of transformers extended beyond NLP with the introduction of Vision Transformers (ViT) [5], which treat image patches as tokens. ViT demonstrated that transformers could achieve state-of-the-art performance in computer vision tasks by leveraging the self-attention mechanism to model relationships between image patches. The input to ViT can be represented as:
x = [x_{\text{patch}_1}, x_{\text{patch}_2}, \ldots, x_{\text{patch}_N}]^T,
where x_{\text{patch}_i} represents the i-th image patch.
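The ViT-style "patchification" step, cutting an image into non-overlapping p x p patches and flattening each into a token vector, is a simple reshape. A minimal sketch on a toy 4x4 image:

```python
def image_to_patches(img, p):
    # Split an H x W image (list of rows) into non-overlapping p x p
    # patches, each flattened row-major to a vector of length p*p
    H, W = len(img), len(img[0])
    patches = []
    for i in range(0, H, p):
        for j in range(0, W, p):
            patch = [img[i + di][j + dj] for di in range(p) for dj in range(p)]
            patches.append(patch)
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
patches = image_to_patches(img, 2)   # 4 patches of 4 pixels each
```

In a real ViT each flattened patch is then linearly projected to the model dimension and given a positional embedding before entering the transformer.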
Recent advancements in transformer architectures have focused on improving scalability and efficiency. Sparse transformers [7] reduce the computational complexity of self-attention by limiting the attention span, enabling the model to handle longer sequences. The sparse attention mechanism can be expressed as:
\text{SparseAttention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^T}{\sqrt{d_k}} \odot M\right) V,
where M is a sparse mask that restricts attention to a subset of positions. These modifications have made transformers more practical for real-world applications, particularly in domains requiring large-scale data processing.
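A common concrete choice of sparse mask is a local window: position i may attend only to positions within distance w. In implementations the mask is usually applied by setting disallowed scores to -inf before the softmax (rather than an elementwise product), so disallowed positions receive exactly zero weight. A sketch with uniform toy scores:

```python
import math

def local_window_mask(n, w):
    # Position i may attend to position j only when |i - j| <= w
    return [[abs(i - j) <= w for j in range(n)] for i in range(n)]

def sparse_attention_weights(scores, mask):
    # Disallowed positions get -inf before the softmax, so their
    # post-softmax weight is exactly 0
    out = []
    for row, mrow in zip(scores, mask):
        masked = [s if m else float("-inf") for s, m in zip(row, mrow)]
        mx = max(masked)
        exps = [math.exp(x - mx) for x in masked]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

scores = [[0.0] * 4 for _ in range(4)]            # uniform toy scores
weights = sparse_attention_weights(scores, local_window_mask(4, 1))
```

With uniform scores and window w = 1, the first position splits its attention evenly over the two allowed neighbors and gives zero weight to everything outside the window.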
The evolution of transformers has played a pivotal role in advancing AI capabilities across all stages. From ANI systems like BERT and GPT, which excel in specific tasks, to AGI systems that require multi-modal understanding and reasoning, transformers have provided a robust foundation for building increasingly sophisticated architectures. As AI progresses toward ASI, transformers are likely to remain a cornerstone of AI development, enabling systems to achieve unprecedented levels of intelligence and adaptability.

6. Conclusion

The development of Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI) represents a transformative epoch in human history. Each stage of AI development builds on the advancements of the previous one, with transformers serving as a unifying architectural framework that has driven progress across all stages. From their introduction in NLP to their application in computer vision and beyond, transformers have enabled AI systems to achieve remarkable capabilities, from task-specific performance to multi-modal understanding and reasoning.
ANI systems, powered by transformers like BERT and GPT, have already reshaped industries such as healthcare, finance, and autonomous systems. These systems excel in specific tasks but are limited by their inability to generalize beyond their predefined domains. AGI systems, which aim to replicate human-level cognitive functions, leverage transformers in hybrid architectures that combine neural networks with symbolic reasoning, enabling them to perform a wide range of intellectual tasks. The pursuit of AGI brings both extraordinary opportunities and profound risks, necessitating robust safeguards to ensure ethical alignment and safety.
ASI, the theoretical pinnacle of AI development, represents an intelligence vastly superior to human capabilities. ASI systems would likely incorporate self-improving architectures, quantum computing, and advanced utility functions, enabling them to solve problems that are currently intractable for humans. However, the development of ASI raises significant ethical and societal concerns, particularly regarding the alignment of its goals with human values. Ensuring that ASI systems are designed and deployed responsibly is essential to maximizing their benefits while minimizing potential risks.
The evolution of transformers has been instrumental in advancing AI capabilities, providing a robust foundation for building increasingly sophisticated architectures. As AI progresses toward ASI, transformers are likely to remain a cornerstone of AI development, enabling systems to achieve unprecedented levels of intelligence and adaptability. Navigating this path requires a balance between rapid innovation and robust safeguards to ensure that AI serves as a force for human advancement rather than a source of existential threat.

References

  1. Bostrom, N. Superintelligence: Paths, dangers, strategies; Oxford University Press, 2014.
  2. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems 2017, 30, 5998–6008.
  3. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 2018. [CrossRef]
  4. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving language understanding by generative pre-training. OpenAI 2018. [Google Scholar]
  5. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 2020. [CrossRef]
  6. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Advances in Neural Information Processing Systems 2020, 33, 1877–1901. [Google Scholar] [CrossRef]
  7. Child, R.; Gray, S.; Radford, A.; Sutskever, I. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509 2019. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.