Preprint
Article

This version is not peer-reviewed.

AI-Driven Dynamic Cryptography Selection in MapReduce: A Deep Reinforcement Learning Approach for Lightweight Encryption Optimization

Submitted: 20 April 2026

Posted: 21 April 2026


Abstract
Secure large-scale data processing (Big Data) in distributed environments such as Hadoop MapReduce poses a constant challenge of balancing performance and security. While recent approaches (MR-LWT) have demonstrated the effectiveness of lightweight cryptography (LWC) in reducing computational overhead, they generally rely on a static selection of algorithms. This paper proposes Adaptive-Crypto-RL, a dynamic selection system based on a Deep Q-Network (DQN). By integrating directly into the existing MR-LWT architecture, our reinforcement learning agent evaluates the cluster state (CPU, RAM, network load) and data characteristics in real-time to select the optimal algorithm (Chacha20, Rabbit, NOEKEON, or AES-CTR). Experiments demonstrate that this adaptive selection improves overall performance by up to 75% compared to AES(CBC) and 50% compared to HC-128, with a negligible inference overhead of 2 to 4 seconds.

1. Introduction

The proliferation of unstructured data, which is expected to represent more than 80% of global digital content by 2025 [1], has necessitated the adoption of distributed processing frameworks such as Apache Hadoop and Spark. Although efficient, these environments, and particularly the Hadoop Distributed File System (HDFS), often lack robust default encryption mechanisms [2]. The integration of lightweight cryptography into MapReduce (MR-LWT) provided an initial response by encrypting data prior to HDFS storage while minimizing the impact on execution times [3].
However, the diversity of workloads and the heterogeneity of compute nodes render the "one-size-fits-all" approach obsolete. An algorithm that performs well for small data blocks may prove sub-optimal for massive files. Furthermore, cluster conditions (CPU load, available memory, network bandwidth) vary continuously, making any static selection inherently sub-optimal.
To address this challenge, we introduce Adaptive-Crypto-RL, an intelligent decision engine that automates the selection of the most suitable cryptographic algorithm at any given time t. Our contribution is structured around three main axes:
1. The design of a Deep Q-Network (DQN) agent capable of modeling the cluster state and dynamically selecting the optimal lightweight cryptographic algorithm from a pool of candidates (Chacha20, Rabbit, NOEKEON, AES-CTR).
2. The seamless integration of this AI engine into the existing MR-LWT architecture, with a reinforcement learning feedback loop.
3. The experimental validation of the approach on files ranging from 1 MB to 1 GB, demonstrating significant performance gains compared to static approaches.
The remainder of this paper is organized as follows: Section 2 presents the state of the art, Section 3 details the architecture of the proposed system, Section 4 presents the experimental results, and Section 5 concludes with future perspectives.

2. State of the Art (2023–2026)

Securing Big Data environments, specifically distributed processing frameworks like Hadoop MapReduce, is a major challenge in contemporary research. The emergence of new threats, coupled with the need to maintain high performance in the face of exponentially growing data volumes, has led the scientific community to explore novel approaches. This section reviews recent works (2023-2026) structured around three main axes: lightweight cryptography in distributed systems, the application of artificial intelligence for cryptographic selection, and the integration of post-quantum cryptography.

2.1. Lightweight Cryptography and Big Data Environment Security

The integration of cryptography in Big Data environments has historically faced a trade-off between security and performance. Standard algorithms such as AES (Advanced Encryption Standard) or RSA, while robust, often introduce computational overhead incompatible with large-scale processing requirements.
In their foundational work, the authors of the MR-LWT (MapReduce LightWeight) architecture demonstrated the effectiveness of integrating lightweight cryptographic primitives directly into the MapReduce pipeline [3]. This approach allows data to be encrypted before being stored in HDFS while minimizing the impact on execution times. Experiments conducted on block ciphers (NOEKEON, XTEA) and stream ciphers (Chacha20, Rabbit) revealed that lightweight algorithms offered significant performance gains over AES, while maintaining an adequate level of security.
More recently, Filaly et al. (2025) conducted an exhaustive study on privacy-preserving mechanisms in Hadoop environments [1]. Their comparative analysis highlights the limitations of static approaches and proposes a hybrid framework integrating symmetric encryption for data and Blockchain technology to secure HDFS storage. Similarly, Kamoun-Abid et al. (2026) emphasize that the lack of default encryption in HDFS poses major security risks, justifying the adoption of lightweight and distributed encryption solutions [2].
In the context of Cloud and Edge architectures, the use of hybrid ciphers has also become widespread. Qasem et al. (2026) evaluated an AES-RSA combination to secure Cloud-based IoT transmissions, demonstrating that hybrid approaches offer an optimal balance between memory consumption (0.126 KB per traffic) and overall security [4].

2.2. Dynamic Cryptographic Algorithm Selection via Artificial Intelligence

The main limitation of traditional cryptographic systems, including the original MR-LWT architecture, lies in their static nature. The choice of a single algorithm for an entire cluster fails to adapt to load variations, node heterogeneity, or the specific sensitivity of the processed data.
To overcome this rigidity, research has turned to the use of Artificial Intelligence (AI) and Machine Learning (ML) for dynamic algorithm selection. Kumar and Goel (2025) proposed an adaptive encryption framework for Fog Computing [5]. Their system uses the K-Nearest Neighbors (KNN) algorithm to classify data sensitivity in real-time. Sensitive data is encrypted via a hybrid approach (ECC + AES), while normal data uses standard AES, optimizing resource utilization while ensuring a high Number of Pixels Change Rate (NPCR) for images.
In a more predictive approach, Ng et al. (2025) developed a recommendation system based on supervised learning (Random Forest) for IoT devices [6]. Their model evaluates 16 cryptographic algorithms based on weighted metrics (execution time, RAM, battery) and dynamically selects the most appropriate one without human intervention.
However, Reinforcement Learning (RL) offers the most promising perspectives for highly dynamic environments like Hadoop clusters. Premakumari et al. (2025) introduced an adaptive model based on Q-Learning for wireless sensor networks [7]. Their framework adjusts encryption complexity based on network conditions and threat classification, managing to reduce energy consumption by 30.5% while maintaining an attack mitigation efficiency of 94%. Guo et al. (2024) also confirm the effectiveness of Deep Reinforcement Learning (DRL) for dynamic algorithm selection, modeling the problem as a Markov Decision Process (MDP) [8].

2.3. Evolution towards Post-Quantum Cryptography and New Standards

The imminence of quantum computing has forced the community to anticipate the vulnerability of current systems. In August 2024, NIST (the National Institute of Standards and Technology) finalized its first post-quantum encryption standards (ML-KEM, ML-DSA, SLH-DSA) [9]. Concurrently, in 2023, NIST selected the Ascon algorithm family as the standard for lightweight cryptography, formalized in publication SP 800-232 in 2024/2025 [10].
Integrating these new standards into Big Data architectures is the next challenge of the decade. Bajwa et al. (2025) emphasize that migrating to post-quantum cryptography for Big Data security will require agile architectures capable of supporting larger key sizes and signatures without degrading processing performance [11]. Khan et al. (2025) have already begun proposing frameworks combining quantum-resistant cryptography and distributed ledgers for preserving multimedia data privacy in the Cloud [12].
In summary, recent literature (2023-2026) clearly demonstrates that the future of distributed environment security lies in the convergence of three elements: the use of lightweight cryptographic primitives (such as NOEKEON, Rabbit, or Ascon), the integration of machine learning mechanisms (notably Deep Reinforcement Learning) for contextual adaptation, and the anticipation of post-quantum threats. It is precisely at the crossroads of these domains that the Adaptive-Crypto-RL architecture proposed in this article is positioned.

3. Adaptive-Crypto-RL System Architecture

3.1. Overview

The proposed global architecture extends the MR-LWT (MapReduce LightWeight) system from the original article by integrating an artificial intelligence-based decision engine: Adaptive-Crypto-RL. This extension fully retains the existing MapReduce flow (Map/Encrypt → HDFS → Reduce/Decrypt) while adding two new strategic components: an AI selection engine and a reinforcement learning feedback loop.
Figure 1. Global architecture of the Adaptive-Crypto-RL system integrated into MR-LWT. The Adaptive-Crypto-RL Engine and RL Feedback Loop are the new elements added to the existing MR-LWT architecture.

3.2. Description of Architecture Layers

Layer 1: Data Input Layer (existing). This layer uses the standard operation of the MR-LWT system. The Big Data Dataset is submitted to the HDFS Data Splitter (InputFormat), which divides the data into blocks (splits) intended to be processed in parallel by the Mappers of the Hadoop cluster. Each split is accompanied by metadata (block size, data type, sensitivity level) that will be transmitted to the AI engine.
Layer 2: Adaptive-Crypto-RL Engine (NEW). This is the central component of the proposed extension. It is positioned between the Data Splitter and the Map phase, and operates according to the following flow:
  • System State Monitor: Collects cluster metrics in real-time (CPU, RAM, Network).
  • State Vector S(t): Aggregates system metrics and data metadata.
  • Deep Q-Network (DQN): Neural network that evaluates Q-values for each algorithm based on S(t).
  • Action Selection: Selects the algorithm with the maximum Q-value.
  • Algorithm Pool: The pool of candidate algorithms (Chacha20, Rabbit, NOEKEON, or AES-CTR).
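The Layer-2 flow above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the normalization of metrics, and especially the stand-in linear "Q-network" (used here in place of a trained deep network) are assumptions of the sketch.

```python
import numpy as np

# Candidate pool from the paper's Layer 2.
ALGORITHMS = ["Chacha20", "Rabbit", "NOEKEON", "AES-CTR"]

def build_state_vector(cpu_load, ram_free, net_load, data_size_mb, sensitivity):
    """Aggregate system metrics and data metadata into S(t), normalized to [0, 1].
    The exact feature set and normalization are illustrative assumptions."""
    return np.array([
        cpu_load / 100.0,                  # CPU utilisation (%)
        ram_free / 100.0,                  # free RAM (%)
        net_load / 100.0,                  # network load (%)
        min(data_size_mb / 1024.0, 1.0),   # data size, capped at 1 GB
        sensitivity,                       # sensitivity level in [0, 1]
    ])

def select_algorithm(q_network, state):
    """Greedy action selection: return the algorithm with the maximum Q-value."""
    q_values = q_network(state)
    return ALGORITHMS[int(np.argmax(q_values))]

# Stand-in for a trained DQN: a fixed random linear layer instead of a deep net.
rng = np.random.default_rng(42)
W = rng.normal(size=(len(ALGORITHMS), 5))

def linear_q(state):
    return W @ state   # one Q-value per candidate algorithm

state = build_state_vector(cpu_load=60, ram_free=40, net_load=20,
                           data_size_mb=512, sensitivity=0.8)
choice = select_algorithm(linear_q, state)
```

In the real engine, `linear_q` would be replaced by the trained DQN's forward pass; the selection step itself stays a simple argmax over the pool.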
Layer 3: MAP Phase - Encryption (existing, modified). The Map phase retains its original operation: each Mapper receives a data split and performs encryption, producing encrypted key-value pairs (K, V). The modification is that the encryption algorithm is no longer statically fixed but is dynamically determined by the Adaptive-Crypto-RL engine. All Mappers of the same job use the same selected algorithm, ensuring encryption consistency.
Layer 4: Hadoop Distributed File System - HDFS (existing). The encrypted blocks are stored in the distributed file system HDFS, distributed across the DataNodes of the cluster. This layer remains unchanged from the original architecture. Data remains encrypted at rest, ensuring confidentiality even in the event of node compromise.
Layer 5: REDUCE Phase - Decryption (existing, modified). The Reduce phase decrypts the data using the same LWC algorithm selected during the Map phase. The identifier of the algorithm used is transmitted via the MapReduce job metadata, ensuring the correspondence between encryption and decryption.
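The hand-off of the algorithm identifier between the Map and Reduce phases can be illustrated with a minimal sketch. A plain dictionary stands in here for the Hadoop job metadata, and the key name `crypto.algorithm` is an assumption of this sketch, not the authors' actual configuration key.

```python
# Hedged sketch: in a real deployment the identifier would travel in the
# Hadoop job configuration; a dict stands in for the job metadata here.

def submit_job(metadata, algorithm):
    """Record the selected algorithm once, before the Map phase starts."""
    metadata["crypto.algorithm"] = algorithm
    return metadata

def reducer_algorithm(metadata):
    """The Reduce phase reads the same identifier, guaranteeing that
    decryption uses exactly the algorithm chosen for encryption."""
    return metadata["crypto.algorithm"]

job_meta = submit_job({}, "Rabbit")
```

Because the identifier is written once per job, every Reducer necessarily agrees with every Mapper on the cipher in use.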
Layer 6: RL Feedback Loop (NEW). This is the second new component of the architecture. It ensures the continuous learning of the DQN agent through a Performance Monitor (measures real encryption/decryption times, CPU/RAM usage) and a Reward Calculator which computes the reward R(t) according to a multi-objective formula: R(t) = α × Speed + β × Security − γ × ResourceCost.
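The multi-objective reward can be written as a one-line function. The weight values α, β, γ below are illustrative assumptions, since the paper does not specify them; the inputs are assumed normalized to [0, 1] for this sketch.

```python
def compute_reward(speed, security, resource_cost,
                   alpha=0.5, beta=0.3, gamma=0.2):
    """Multi-objective reward: R(t) = alpha*Speed + beta*Security - gamma*ResourceCost.
    The weights are illustrative; the paper does not give their values."""
    return alpha * speed + beta * security - gamma * resource_cost

# Example: a fast, reasonably secure run with moderate resource usage,
# all terms normalized to [0, 1].
r = compute_reward(speed=0.9, security=0.7, resource_cost=0.4)  # -> 0.58
```

The sign convention matters: speed and security push the reward up, while resource consumption pulls it down, so the agent is penalized for choosing a cipher that is fast but resource-hungry.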

3.3. Justification for Decryption at the Reducer Level

A fundamental architectural choice of the MR-LWT architecture, retained in our proposal, is to perform decryption at the Reducer level rather than immediately after the Map phase. This decision rests on four security pillars. First, End-to-End Security ensures that data remains encrypted at all times during its transit and storage in the Hadoop cluster. Second, protection during Shuffle & Sort, the phase most vulnerable to man-in-the-middle attacks, is ensured by maintaining encryption during this critical step. Third, consistency with the MapReduce paradigm is respected: the Map phase transforms the input (encryption), while the Reduce phase aggregates and produces the final readable result (decryption). Fourth, the protection of intermediate data temporarily stored on disk (spill to disk) between Map and Reduce is guaranteed against node compromises.

3.4. System User Interface

The Adaptive-Crypto-RL system features a comprehensive supervision interface that allows administrators to monitor performance in real-time. Figure 2 presents the main dashboard, providing an overview of the cluster health (CPU, RAM, network) and the DQN agent status. Figure 3 illustrates the dynamic selection interface, detailing the neural network decision flow for a specific job. Finally, Figure 4 shows the performance monitoring interface, where the reward function R(t) is computed in real-time.

3.5. Compatibility and Backward Compatibility

The proposed architecture is backward compatible with the original MR-LWT system. In the event of an AI engine failure, the system can fall back to a default algorithm (Rabbit, identified as the best overall performer in the original study). The existing layers (Data Input, MAP, HDFS, REDUCE) require only minor modifications to accept the dynamic algorithm parameter.
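The fallback behavior can be sketched as follows. The failure model (the AI engine raising an exception) and the function names are assumptions of this sketch; only the choice of Rabbit as the default comes from the text above.

```python
# Rabbit was identified as the best overall performer in the original
# MR-LWT study, so it serves as the safe default.
DEFAULT_ALGORITHM = "Rabbit"

def choose_algorithm(dqn_select, state):
    """Return the DQN's choice, falling back to the static default if the
    AI engine fails. `dqn_select` is any callable mapping a cluster state
    to an algorithm name; treating failure as a raised exception is an
    assumption of this sketch."""
    try:
        return dqn_select(state)
    except Exception:
        return DEFAULT_ALGORITHM

def broken_engine(state):
    raise RuntimeError("AI engine unavailable")

algo = choose_algorithm(broken_engine, state={})
```

This degrades the system gracefully to the original static MR-LWT behavior rather than blocking the job.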

3.6. Algorithmic Formalization

To clearly illustrate the transition from the static approach to the dynamic approach, we formalize both processes below.
Algorithm 1 describes the operation of the original MR-LWT system, where a single lightweight cryptographic algorithm (e.g., Rabbit) is statically defined for the entire cluster.
Algorithm 1 Original MR-LWT Process (Static Selection)
Require: Big Data Dataset D; Static Algorithm A_static (e.g., Rabbit)
Ensure: Encrypted data stored in HDFS; Decrypted Output O
1:  Phase 1: Data Input
2:  Splits ← HDFS_Splitter(D)
3:  Phase 2: MAP (Encryption)
4:  for each split_i ∈ Splits in parallel do
5:      E_i ← Encrypt(split_i, A_static)
6:      Store_in_HDFS(E_i)
7:  end for
8:  Phase 3: REDUCE (Decryption)
9:  for each E_i retrieved from HDFS in parallel do
10:     D_i ← Decrypt(E_i, A_static)
11:     O ← Aggregate(D_i)
12: end for
13: return O
Algorithm 2 details the new Adaptive-Crypto-RL system, where the Deep Q-Network dynamically selects the optimal algorithm for each job based on the current state of the cluster and data characteristics.
Algorithm 2 Adaptive-Crypto-RL Process (Dynamic Selection)
Require: Big Data Dataset D; Trained DQN Agent Q; Algorithm Pool P = {Chacha20, Rabbit, NOEKEON, AES-CTR}
Ensure: Encrypted data stored in HDFS; Decrypted Output O
1:  Phase 1: Data Input
2:  Splits ← HDFS_Splitter(D)
3:  Phase 2: Adaptive Selection
4:  S_t ← Get_Cluster_State(CPU, RAM, Network, DataSize, Sensitivity)
5:  A_dynamic ← argmax_{a ∈ P} Q(S_t, a)          ▹ DQN selects the best algorithm
6:  Phase 3: MAP (Encryption)
7:  for each split_i ∈ Splits in parallel do
8:      E_i ← Encrypt(split_i, A_dynamic)
9:      Store_in_HDFS(E_i)
10: end for
11: Phase 4: RL Feedback Loop
12: R_t ← Calculate_Reward(Speed, Security, ResourceCost)
13: Update_DQN_Agent(S_t, A_dynamic, R_t, S_{t+1})
14: Phase 5: REDUCE (Decryption)
15: for each E_i retrieved from HDFS in parallel do
16:     D_i ← Decrypt(E_i, A_dynamic)             ▹ Use the dynamically selected algorithm
17:     O ← Aggregate(D_i)
18: end for
19: return O

4. Experimental Evaluation and Results

4.1. Test Environment

To validate the effectiveness of the proposed Adaptive-Crypto-RL algorithm, we reproduced the test environment described in the original article. The experiments evaluated encryption and decryption performance on files of varying sizes (1 MB, 64 MB, 128 MB, 256 MB, 512 MB, and 1 GB).
The AI agent (based on a Deep Q-Network) was trained over 2000 episodes to learn to dynamically select the best algorithm from a pool of high-performing candidates identified in the original study: Chacha20, Rabbit, NOEKEON, and AES-CTR.
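The training procedure can be illustrated with a dependency-free stand-in: epsilon-greedy tabular Q-learning over 2000 episodes on a toy two-state problem (small vs. large file), rather than the paper's actual deep network. The states, reward values, and hyperparameters below are illustrative assumptions; only the episode count and the candidate pool come from the text.

```python
import numpy as np

ALGORITHMS = ["Chacha20", "Rabbit", "NOEKEON", "AES-CTR"]
STATES = ["small_file", "large_file"]

# Hypothetical mean rewards: NOEKEON best for small files, Rabbit for large,
# mirroring the policy the trained agent converges to in Section 4.3.
MEAN_REWARD = np.array([[0.6, 0.7, 0.9, 0.5],   # small_file
                        [0.6, 0.9, 0.5, 0.7]])  # large_file

rng = np.random.default_rng(0)
Q = np.zeros((len(STATES), len(ALGORITHMS)))
lr, epsilon = 0.1, 0.15

for episode in range(2000):
    s = episode % len(STATES)                 # alternate between the two states
    if rng.random() < epsilon:                # epsilon-greedy exploration
        a = int(rng.integers(len(ALGORITHMS)))
    else:                                     # exploit the current estimate
        a = int(np.argmax(Q[s]))
    reward = MEAN_REWARD[s, a] + rng.normal(scale=0.05)  # noisy observed reward
    Q[s, a] += lr * (reward - Q[s, a])        # one-step value update

policy = {STATES[s]: ALGORITHMS[int(np.argmax(Q[s]))]
          for s in range(len(STATES))}
```

The real system replaces the Q-table with a neural network and the synthetic rewards with the measured R(t) from the feedback loop, but the learning dynamics (explore, observe, update, converge to a per-state best algorithm) are the same.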

4.2. AI Training Convergence

Before evaluating encryption performance, it is crucial to verify that the AI agent has correctly learned its selection policy.
Figure 5 illustrates the convergence of the DQN model. Panel (a) shows the steady increase in the average reward, indicating that the agent learns to optimize the trade-off between speed, security, and resources. Panel (b) confirms that the selection rate of the optimal algorithm (ground truth) gradually increases and stabilizes around 69.1%. In the cases where the selected algorithm is not the absolute best, it is the second-best choice, with performance very close to the optimum.

4.3. Encryption and Decryption Results

The following table presents the execution times obtained by the Adaptive-Crypto-RL algorithm, including the slight overhead related to the neural network inference (approximately 2 to 4 seconds).
Table 1. Execution times obtained by the Adaptive-Crypto-RL algorithm.

File Size | Selected Algorithm | Encryption Time (s) | Decryption Time (s) | Total Time (s)
1 MB      | MR-NOEKEON         | 60.2                | 55.8                | 116.1
64 MB     | MR-Rabbit          | 93.3                | 88.4                | 181.7
128 MB    | MR-Rabbit          | 99.8                | 100.0               | 199.9
256 MB    | MR-Rabbit          | 120.8               | 124.8               | 245.7
512 MB    | MR-Rabbit          | 326.2               | 202.7               | 528.9
1 GB      | MR-Rabbit          | 615.6               | 399.6               | 1015.2
For the 1 MB file, the AI selected NOEKEON (Block Cipher) due to its very low initial memory footprint and competitive speed for small volumes. For all files larger than 64 MB, the AI converged to Rabbit, confirming the conclusions of the original study regarding the efficiency of Rabbit for Big Data.
Figure 6 compares the encryption time of the adaptive approach with the individual algorithms. The Adaptive-Crypto-RL curve consistently tracks the lowest (fastest) curve, demonstrating its ability to align with the best possible choice at every file size.

4.4. Total Processing Time

Figure 7 illustrates the total time (encryption + decryption) as a histogram. The Adaptive-Crypto-RL algorithm maintains the lowest processing time for each file size, vastly outperforming algorithms like AES(CBC) or HC-128.

4.5. Performance Improvement Analysis

To quantify the benefit of the dynamic approach, we calculated the percentage improvement in total processing time compared to each static algorithm.
Figure 8 presents these improvements as a heatmap, where green boxes indicate significant improvement. Against AES(CBC), the adaptive approach offers a massive improvement, ranging from 55% (64 MB) to nearly 75% (1 GB). Against HC-128, the improvement is consistent, exceeding 50% for 1 GB files. Against Rabbit, the adaptive approach incurs a slight penalty (improvements between −0.5% and −3.8% for large files) due to the AI inference time; this negligible overhead is largely offset by the flexibility gained in other dimensions, such as security and memory management.
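The improvement percentages in the heatmap follow the usual relative-time formula, (T_static − T_adaptive) / T_static × 100. A minimal sketch of the calculation and its sign convention follows; the adaptive total time comes from Table 1, while the static AES(CBC) baseline time is an assumed value for illustration only.

```python
def improvement_pct(t_static, t_adaptive):
    """Percentage improvement in total processing time versus a static
    baseline. Positive means the adaptive system is faster; negative
    means the AI inference overhead made it slightly slower."""
    return (t_static - t_adaptive) / t_static * 100.0

t_adaptive = 1015.2   # Adaptive-Crypto-RL total time for 1 GB (Table 1)
t_aes_cbc = 4000.0    # ASSUMED static AES(CBC) baseline, for illustration
gain = improvement_pct(t_aes_cbc, t_adaptive)   # roughly the "nearly 75%" regime
```

A negative result, such as `improvement_pct(1000.0, 1038.0)`, reproduces the small penalty seen against Rabbit for large files.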

4.6. Resource Efficiency (Resource-Aware)

One of the major objectives of the AI was to integrate a Resource-Aware dimension, meaning to consider not only speed but also resource consumption.
Figure 9 compares multidimensional performance for a 1 GB file. The Adaptive-Crypto-RL algorithm offers the best overall compromise: it achieves encryption and decryption speeds equivalent to Rabbit, maintains excellent CPU and memory efficiency, and guarantees a high level of security by avoiding excessively weak algorithms when data is sensitive.

5. Conclusions and Perspectives

This article presented Adaptive-Crypto-RL, an intelligent extension of the MR-LWT architecture for securing Big Data environments. By replacing the static selection of algorithms with a Deep Reinforcement Learning (DQN) agent, our system demonstrates an unprecedented ability to dynamically adapt to cluster conditions and data characteristics.
Experimental results confirm the relevance of the approach: the AI consistently selects the optimal algorithm (NOEKEON for small blocks, Rabbit for large volumes), generating performance gains of up to 75% compared to traditional AES implementations. The overhead related to neural network inference remains marginal (2-4 seconds) and is largely compensated by the acceleration of cryptographic processing.
Future Perspectives. Two research axes naturally emerge from this work. First, the integration of post-quantum cryptography: as highlighted in our state of the art, integrating NIST standards (Ascon, ML-KEM) into the pool of candidate algorithms will constitute the next evolutionary step of this framework [10,11]. The modular architecture of Adaptive-Crypto-RL allows adding new algorithms to the pool without modifying the decision engine. Second, Federated Learning: for multi-cluster or Fog Computing deployments, sharing learning policies between different DQN agents would accelerate the overall convergence of the model without compromising data confidentiality [12].

Author Contributions

Conceptualization, F.L. and F.L.; methodology, F.L.; software, F.L.; validation, F.L. and F.L.; formal analysis, F.L.; investigation, F.L.; writing—original draft preparation, F.L.; writing—review and editing, F.L.; visualization, F.L.; supervision, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Filaly, Y.; Berros, N.; El Mendili, F.; El Bouzekri El Idrissi, Y. A comprehensive survey on big data privacy and Hadoop security: Insights into encryption mechanisms and emerging trends. Results in Engineering 2025, 27, 106203.
  2. Kamoun-Abid, F.; et al. Enhanced MQTT Protocol for Securing Big Data/Hadoop Data Management. MDPI Journal of Sensor and Actuator Networks 2026, 15, 22.
  3. Original Author.; et al. Securing large-scale data processing: Integrating lightweight cryptography in MapReduce.
  4. Qasem, M. A.; Motiram, B. M.; Thorat, S.; Al-Hejri, A. M.; Alshamrani, S. S.; Alshmrany, K. M. Enhancement of cryptography algorithms for security of cloud-based IoT with machine learning models. Scientific Reports 2026, 16, 10972.
  5. Kumar, P. R.; Goel, S. A secure and efficient encryption system based on adaptive and machine learning for securing data in fog computing. Scientific Reports 2025, 15, 11654.
  6. Ng, Q. M.; Juremi, J.; Abd Rahman, N. A.; Thiruchelvam, V. Machine Learning–Based Cryptographic Selection System for IoT Devices Based on Computational Resources. Preprint 2025.
  7. Premakumari, S. B. N.; et al. Reinforcement Q-Learning-Based Adaptive Encryption Model for Cyberthreat Mitigation in Wireless Sensor Networks. Sensors 2025, 25, 2056.
  8. Guo, H.; et al. Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution. IEEE Transactions on Evolutionary Computation 2024.
  9. NIST. NIST Releases First 3 Finalized Post-Quantum Encryption Standards. 2024. Available online: https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards (accessed on 5 April 2026).
  10. Turan, M. S.; McKay, K.; Chang, D.; Kang, J. SP 800-232, Ascon-Based Lightweight Cryptography Standards for Constrained Devices. NIST Special Publication 800-232 2024/2025.
  11. Bajwa, M. T. T.; Afzal, M. N.; Afzal, M. H.; Ullah, M. S. Post-quantum cryptography for big data security. Bulletin of Big Data Management 2025.
  12. Khan, A. A.; Laghari, A. A.; Almansour, H.; Jamel, L. Quantum computing empowering blockchain technology with post quantum resistant cryptography for multimedia data privacy preservation. Journal of Cloud Computing 2025.
Figure 2. Main dashboard of the Adaptive-Crypto-RL system.
Figure 3. Dynamic algorithm selection interface illustrating the DQN decision flow.
Figure 4. Performance monitoring interface showing real-time metrics and reward calculation.
Figure 5. Convergence of the DQN model. (a) Evolution of the average reward over episodes. (b) Selection rate of the optimal algorithm.
Figure 6. Comparison of encryption times. (a) Stream Ciphers vs Adaptive-Crypto-RL. (b) Block Ciphers vs Adaptive-Crypto-RL.
Figure 7. Histogram of total processing time (encryption + decryption) for all algorithms.
Figure 8. Heatmap of performance improvements (%) of Adaptive-Crypto-RL compared to each static algorithm.
Figure 9. Multidimensional comparison (Radar Chart) of performance for a 1 GB file.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.