Submitted: 10 October 2025
Posted: 14 October 2025
Abstract
Keywords:
1. Introduction
- Non-IID Data: In PP-HFFL, each fog node aggregates updates from multiple heterogeneous IoT devices, often resulting in skewed or imbalanced class distributions and, in extreme cases, missing classes on certain clients. Such non-IID conditions can lead to biased model updates, slower convergence, and reduced global model accuracy. The severity of these effects depends on the complexity of the dataset and the degree of distributional heterogeneity, motivating a detailed analysis of non-IID impacts in hierarchical Fog-FL systems.
- Scalability: Scalability in PP-HFFL involves supporting a variable number of fog nodes and handling dynamic node participation. Increasing the number of fog nodes can enhance learning capacity but may also exacerbate heterogeneity. Moreover, nodes may join or leave during training, requiring robust coordination mechanisms to maintain stable convergence. Studying scalability behavior under diverse participation scenarios is thus crucial for practical deployments.
- Data Privacy: Although FL reduces privacy risks by keeping data local, interactions at the fog layer can introduce new attack surfaces. Malicious fog nodes could infer sensitive patterns from model updates or manipulate aggregation results. PP-HFFL integrates a Differential Privacy (DP) mechanism to maintain strong privacy guarantees without significantly compromising model utility.
2. Background
2.1. Federated Learning
2.2. Non-IID Properties of IoT Data
- Covariate shift: where the input feature distribution varies across clients while the conditional label distribution remains constant.
- Concept shift: where clients share the same feature distribution but differ in label assignments, i.e., the conditional distribution P(y | x) changes across clients.
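Label-distribution skew, the non-IID form most relevant to fog-aggregated IoT traffic, is commonly simulated with Dirichlet sampling over class proportions. The sketch below is an illustrative helper (the function name, `alpha` choice, and toy labels are ours, not the paper's partitioning strategy); smaller `alpha` yields more skewed per-client class mixes.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_label_skew(labels, n_clients, alpha):
    """Partition sample indices across clients with Dirichlet label skew.
    Smaller alpha -> more skewed (more non-IID) class distributions."""
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Fraction of class c assigned to each client
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = rng.integers(0, 7, size=10_000)   # toy labels, 7 classes
parts = dirichlet_label_skew(labels, n_clients=10, alpha=0.1)
```

With `alpha=0.1` most clients end up dominated by one or two classes; with `alpha=100` the split is nearly IID.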
2.3. Personalization in Federated Learning
2.4. Differential Privacy
3. Related Work
3.1. Fog-Enabled Federated Learning for IoT IDS
Summary and Research Gap
4. PP-HFFL: Privacy-Preserving Hierarchical Fog Federated Learning for IDS
System Assumptions
- Data Assumptions: IoT-collected datasets represent diverse behavioral and operational patterns, which are privacy-sensitive. Aggregated datasets at the fog level are inherently non-IID due to: (i) each fog node observing distinct subsets of attack and benign classes, and (ii) imbalanced class distributions both across and within clients. Data drift may occur over time as device behaviors evolve or new IoT devices join the network.
- Trust Assumptions: Each fog client is trusted by its associated IoT devices. All other fog nodes and the central cloud server are semi-honest (honest-but-curious), meaning they follow the protocol but may attempt to infer privacy-sensitive information from updates. No entity is fully malicious or colluding unless explicitly defined in the threat model.
- Computation Assumptions: IoT edge devices are resource-constrained, with limited processing power, memory, and energy, and cannot efficiently train complex ML models. Fog nodes have moderate computational resources to perform local training and communication with both IoT devices and cloud server, while the cloud server has sufficient computational capacity for global coordination and aggregation.
- Communication Assumptions: IoT-to-fog communication is bandwidth-limited and may experience latency or data loss. Fog-to-cloud links are relatively stable, leveraging high-speed backhaul. Synchronization between fog and cloud layers is periodic, conserving bandwidth while enabling efficient model updates.
- Security and Privacy Assumptions: Standard cryptographic mechanisms (secure aggregation) are assumed to protect local updates at the server side and prevent model inversion attacks. Secure key exchange and authentication exist between fog nodes and cloud to prevent impersonation or poisoning attacks.
- Deployment Assumptions: Each fog node serves a fixed set of IoT clusters. The number of fog nodes may scale dynamically based on network density. Time synchronization across nodes is loosely coordinated to allow asynchronous or semi-synchronous federated updates.

4.1. System Architecture of PP-HFFL
- Receives the global model from the cloud.
- Performs local training with its own dataset.
- Applies differential privacy (DP) via Gaussian noise to gradients before sending them to the cloud.
- Optionally personalizes the model to adapt to local non-IID data distributions.
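The four responsibilities above can be sketched as a single client-round function. This is a toy numpy example on a linear least-squares model; the function name, noise placement, and hyperparameter defaults are illustrative assumptions, not the paper's implementation (which trains a neural network with Opacus).

```python
import numpy as np

rng = np.random.default_rng(42)

def fog_client_round(global_w, X, y, lr=0.03, epochs=5,
                     clip=1.0, noise_mult=1.0):
    # 1. Receive the global model from the cloud
    w = global_w.copy()
    # 2. Local training on the fog node's own dataset
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)          # least-squares gradient
        # 3. Differential privacy: clip to an L2 bound, add Gaussian noise
        norm = np.linalg.norm(grad)
        grad = grad * min(1.0, clip / max(norm, 1e-12))
        grad = grad + rng.normal(0.0, noise_mult * clip / len(y), grad.shape)
        w = w - lr * grad
    # 4. An optional personalization pass on local data could follow here
    return w   # noisy update sent to the cloud for aggregation
```

A caller would invoke this once per federated round, e.g. `fog_client_round(np.zeros(n_features), X_local, y_local)`, and ship the result upstream.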
4.2. Federated Learning Algorithm in PP-HFFL
Algorithm 1: PP-HFFL: Federated Averaging with Differential Privacy (FedAvgDP) at Fog Nodes
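The cloud-side aggregation step of FedAvg reduces to a sample-count-weighted mean of the fog nodes' parameters. A minimal sketch, with flat numpy vectors standing in for full model weight tensors:

```python
import numpy as np

def fedavg(updates, sizes):
    """Weight each fog node's parameter vector by its local sample count."""
    sizes = np.asarray(sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Two fog nodes holding 100 and 300 samples
fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300])
# -> array([2.5, 3.5])
```

Because the fog nodes' updates are already noised locally (Algorithm 1), the server needs no access to raw gradients to compute this average.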
4.3. Security and Privacy Analysis in PP-HFFL
5. Experimental Results
5.1. Dataset
5.2. Experiment Setup
- Local machine: Visual Studio Code on Windows via Windows Subsystem for Linux (WSL), Intel Core i5-1235U CPU @ 1.30 GHz, 16 GB RAM, no GPU. Primarily used for end-to-end runs without GPU acceleration.
- Cloud environment: GPU-enabled cloud runtime with 2 GB GPU memory and Python 3.10.13 provided by the Digital Research Alliance. Used to confirm reproducibility and evaluate performance on a different hardware/software stack.
5.2.1. Model Architecture and Hyperparameters
| Layer (type) | Output | Param # |
|---|---|---|
| Input Layer (Linear) | (128) | |
| BatchNorm1d | (128) | 256 |
| ReLU | (128) | 0 |
| Linear | (256) | |
| BatchNorm1d | (256) | 512 |
| ReLU | (256) | 0 |
| Linear | (256) | |
| BatchNorm1d | (256) | 512 |
| ReLU | (256) | 0 |
| Linear | (128) | |
| BatchNorm1d | (128) | 256 |
| ReLU | (128) | 0 |
| Output Layer (Linear) | (n) | |
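The BatchNorm1d counts in the table (2 parameters per feature: scale and shift) can be cross-checked with a short helper. The blank Linear entries depend on the dataset's input dimension; the 46-feature / 7-class pairing used below is our reading of the CIC-IoT configuration, not stated explicitly in the table.

```python
def mlp_param_counts(in_dim, n_classes, hidden=(128, 256, 256, 128)):
    """Per-layer parameter counts for a Linear -> BatchNorm1d -> ReLU MLP."""
    counts, prev = [], in_dim
    for h in hidden:
        counts.append(("Linear", prev * h + h))   # weight matrix + bias
        counts.append(("BatchNorm1d", 2 * h))     # gamma + beta
        prev = h
    counts.append(("Linear (output)", prev * n_classes + n_classes))
    return counts

# e.g. CIC-IoT split: 46 input features, 7 classes
layers = mlp_param_counts(46, 7)
# BatchNorm1d rows reproduce the table: 256, 512, 512, 256
```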
| Name | Value |
|---|---|
| Aggregation algorithm | FedAvg |
| Total classes | 12, 7 |
| Input dimension | 79, 46 |
| Participation ratio | 1.0 |
| Max training rounds | 300 |
| Local epochs per round | 5 |
| Batch size | 64 |
| Optimizer | SGD |
| Initial learning rate | 0.03 |
| Weight decay | 1e-5 |
| Noise multiplier | 1.0 |
| Max_grad_norm | 1.0 |
| New Class | Old Class | Count | Downsampled Count |
|---|---|---|---|
| Flood Attacks | DDoSICMPFlood, DDoSTCPFlood, DDoSUDPFlood, DoSTCPFlood, DoSUDPFlood, DoSSynFlood, DDoSPSHACKFlood, DDoSRSTFINFlood, DoSHTTPFlood, DDoSSYNFlood, DDoSSynIPFlood, DDoSHTTPFlood | 921,428 | 92,143 |
| Botnet/Mirai Attacks | MiraiGreethFlood, MiraiUDPplain, MiraiGreipFlood | 59,233 | 5,924 |
| Benign | BenignTraffic | 24,476 | 2,448 |
| Spoofing/MITM | DNSSpoofing, MITMArpSpoofing | 11,053 | 1,106 |
| Reconnaissance | ReconHostDisc, ReconOSScan, ReconPortScan, ReconPingSweep | 7,136 | 714 |
| Backdoors & Exploits | BackdoorMalware, UploadingAttack, BrowserHijacking, DictionaryBF | 563 | 57 |
| Injection Attacks | SqlInjection, CommandInjection, XSS | 299 | 30 |
5.2.2. Data Preprocessing
5.3. Effect of non-IID Data on PP-HFFL Training
5.4. Effect of Differential Privacy in PP-HFFL Accuracy

- Per-sample gradient computation: Gradients are computed individually for each IoT sample rather than the entire batch.
- Gradient clipping and accumulation: Each sample gradient exceeding the L2 norm threshold is clipped, then aggregated to form the batch gradient.
- Noise addition: Gaussian noise is added to the aggregated gradient to mask individual contributions.
- Gradient scaling: The noisy gradient is normalized by the effective batch size to maintain correct learning dynamics.
- Optimizer step: The privacy-preserving gradient is applied to update the local model parameters, which are then uploaded to the hierarchical PP-HFFL server.
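Putting the five steps together, one DP-SGD update looks as follows. This is a toy numpy sketch on a linear regression model, purely illustrative; the actual implementation uses Opacus on the neural network described in Section 5.2.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.03, max_grad_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step on a linear model, following the five steps above."""
    clipped = []
    for xi, yi in zip(X, y):
        g = (xi @ w - yi) * xi                       # 1. per-sample gradient
        scale = min(1.0, max_grad_norm / max(np.linalg.norm(g), 1e-12))
        clipped.append(g * scale)                    # 2. clip to the L2 bound
    total = np.sum(clipped, axis=0)                  #    and accumulate
    total += rng.normal(0.0, noise_multiplier * max_grad_norm,
                        total.shape)                 # 3. Gaussian noise
    grad = total / len(X)                            # 4. scale by batch size
    return w - lr * grad                             # 5. optimizer step
```

Clipping bounds each sample's influence on the batch gradient, which is what calibrates the Gaussian noise to a per-sample sensitivity of `max_grad_norm`.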
5.5. Personalized PP-HFFL
5.6. Discussion and Limitations
- The PP-HFFL IDS maintains near-centralized accuracy when fog clients retain sufficient class diversity. Performance drops sharply under extreme non-IID splits, though increasing the number of clients partially mitigates this by improving overall class coverage within the hierarchical federation.
- Integrating differential privacy (DP) in PP-HFFL introduces a modest accuracy reduction (approximately 1.3–5.8 points), with a larger impact observed for the higher-dimensional RT-IoT dataset. Nevertheless, both datasets achieve strong final accuracy, demonstrating that DP can be incorporated into PP-HFFL without severely compromising model utility.
- Personalization within PP-HFFL significantly enhances local model performance, particularly under DP. Many fog clients approach baseline centralized performance, and newly joined nodes can efficiently adapt the hierarchical global model via a transfer-learning-like mechanism, preserving privacy while improving local accuracy.
- Evaluation is restricted to two tabular IoT datasets (RT-IoT 2022 and CIC-IoT 2023) and a single type of non-IID partitioning strategy. Broader dataset diversity and more complex data distributions should be considered in future work.
- Differential privacy (DP) settings were fixed (noise_multiplier and max_grad_norm) without performing full (ε, δ)-accounting; more rigorous privacy analyses could quantify the tradeoff between privacy guarantees and model utility in PP-HFFL.
- The study primarily focuses on accuracy; other important metrics such as F1-score, precision, and recall were not extensively analyzed. Evaluating these metrics would provide a more comprehensive assessment of PP-HFFL performance.
- The experimental setup assumes synchronous FedAvg aggregation and excludes fog clients with very few samples. Future investigations should consider partial client participation, heterogeneous resource constraints, and a wider range of non-IID scenarios to better reflect real-world IoT environments.
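To give a rough feel for what a fixed noise multiplier buys, the classical single-release Gaussian-mechanism bound can be evaluated in one line. This bound holds for one noisy release only; a full accountant (e.g. Rényi-DP composition over all local steps and federated rounds, as implemented in Opacus) would be needed for a training-long guarantee.

```python
import math

def single_release_epsilon(noise_multiplier, delta=1e-5):
    """Dwork-Roth Gaussian mechanism: sigma = sqrt(2 ln(1.25/delta)) / eps,
    solved for eps. Valid for ONE noisy release only; it ignores the
    composition across local steps and federated rounds."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) / noise_multiplier

single_release_epsilon(1.0)   # about 4.84 for sigma = 1, delta = 1e-5
```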
6. Conclusion
References
- Atzori, L.; Iera, A.; Morabito, G. The internet of things: A survey. Computer networks 2010, 54, 2787–2805. [Google Scholar] [CrossRef]
- Sun, X.; Wang, X.; Li, F.; Zhang, Q. A survey on IoT security: Threats, attacks, and countermeasures. IEEE Internet of Things Journal 2025, 12, 1245–1268. [Google Scholar]
- Axios. Hackers breach thousands of security cameras, exposing Tesla, jails, hospitals. https://www.axios.com/security-camera-hack-verkada-b6db6e5c-d8c0-4a3e-a3b6-3c9f8b0a5f7c.html, 2021. Accessed: 2025-10-07.
- Sharmila, B.S.; Nagapadma, R. Quantized deep neural network for intrusion detection in IoT networks. Computers & Security 2023, 126, 103042. [Google Scholar]
- Abusitta, A.; Bellaiche, M.; Dagenais, M.; Halabi, T. Deep learning-enabled anomaly detection for IoT systems. Internet of Things 2023, 21, 100656. [Google Scholar] [CrossRef]
- McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.y. Federated learning of deep networks using model averaging. arXiv 2016, arXiv:1602.05629. [Google Scholar]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.y. Communication-efficient learning of deep networks from decentralized data. In Proceedings of Artificial Intelligence and Statistics (AISTATS). PMLR, 2017, pp. 1273–1282.
- Arya, V.; Das, S.K. Intruder detection in IoT systems using federated learning. IEEE Internet of Things Journal 2023, 10, 7012–7025. [Google Scholar]
- Hamdi, M.; Zantout, H.; Alouini, M.S. Federated learning for intrusion detection in IoT networks: A comprehensive survey. ACM Computing Surveys 2023, 55, 1–35. [Google Scholar]
- Friha, O.; Ferrag, M.A.; Shu, L.; Maglaras, L.; Wang, X. FELIDS: Federated learning-based intrusion detection system for agricultural IoT. Journal of Parallel and Distributed Computing 2022, 165, 17–31. [Google Scholar] [CrossRef]
- Talpini, A.; Carrega, A.; Bolla, R. Clustering-based federated learning for intrusion detection in IoT. Computer Networks 2023, 224, 109608. [Google Scholar]
- Rashid, M.M.; Kamruzzaman, J.; Hassan, M.M.; Imam, T.; Gordon, S. Federated learning for IoT intrusion detection. Computers & Security 2023, 125, 103033. [Google Scholar]
- Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-IID data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
- Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Foundations and Trends in Machine Learning 2021, 14, 1–210. [Google Scholar] [CrossRef]
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine 2020, 37, 50–60. [Google Scholar] [CrossRef]
- Zhu, H.; Xu, J.; Liu, S.; Jin, Y. Federated learning on non-IID data: A survey. Neurocomputing 2021, 465, 371–390. [Google Scholar] [CrossRef]
- Hsieh, K.; Phanishayee, A.; Mutlu, O.; Gibbons, P. The non-IID data quagmire of decentralized machine learning. In Proceedings of the International Conference on Machine Learning. PMLR, 2020, pp. 4387–4398.
- Hsu, T.M.H.; Qi, H.; Brown, M. Measuring the effects of non-identical data distribution for federated visual classification. arXiv 2019, arXiv:1909.06335. [Google Scholar] [CrossRef]
- Huang, Y.; Chu, L.; Zhou, Z.; Wang, L.; Liu, J.; Pei, J.; Zhang, Y. Personalized federated learning: A meta-learning approach. arXiv 2021, arXiv:2102.07078. [Google Scholar]
- Smith, V.; Chiang, C.K.; Sanjabi, M.; Talwalkar, A.S. Federated multi-task learning. Advances in neural information processing systems 2017, 30. [Google Scholar]
- Mansour, Y.; Mohri, M.; Ro, J.; Suresh, A.T. Three approaches for personalization with applications to federated learning. arXiv 2020, arXiv:2002.10619. [Google Scholar] [CrossRef]
- Ghosh, A.; Chung, J.; Yin, D.; Ramchandran, K. An efficient framework for clustered federated learning. In Proceedings of the Advances in Neural Information Processing Systems, 2020, Vol. 33, pp. 19586–19597.
- Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Theory of Cryptography Conference. Springer, 2006, pp. 265–284.
- Dwork, C.; Roth, A.; et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science 2014, 9, 211–407. [Google Scholar] [CrossRef]
- Erlingsson, Ú.; Pihur, V.; Korolova, A. Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, 2014, pp. 1054–1067.
- Dong, J.; Roth, A.; Su, W.J. Gaussian differential privacy. Journal of the Royal Statistical Society Series B: Statistical Methodology 2022, 84, 3–37. [Google Scholar] [CrossRef]
- Mironov, I. Rényi differential privacy. In Proceedings of the 2017 IEEE 30th computer security foundations symposium (CSF). IEEE, 2017, pp. 263–275.
- Li, Z.; Sharma, V.; Mohanty, S.P. FLEAM: A federated learning empowered architecture to mitigate DDoS in industrial IoT. IEEE Transactions on Industrial Informatics 2021, 18, 4059–4068. [Google Scholar] [CrossRef]
- Al-Huthaifi, R.; Steingrímsson, G.; Yan, Y.; Hossain, M.S. Federated mimic learning for privacy preserving intrusion detection. IEEE Access 2020, 8, 193372–193383. [Google Scholar]
- Li, Y.; Zhou, Y.; Zhang, H.; Sun, L.; Huang, J. DeepFed: Federated deep learning for intrusion detection in industrial cyber-physical systems. IEEE Transactions on Industrial Informatics 2020, 17, 5615–5624. [Google Scholar] [CrossRef]
- Rahman, S.A.; Tout, H.; Talhi, C.; Mourad, A. Internet of things intrusion detection: Centralized, on-device, or federated learning? IEEE Network 2020, 34, 310–317. [Google Scholar] [CrossRef]
- Bhavsar, M.; Roy, K.; Kelly, J.; Okoye, C.T. FL-IDS: Federated learning-based intrusion detection system for IoT networks. Cluster Computing 2024, 27, 743–759. [Google Scholar]
- Imteaj, A.; Thakker, U.; Wang, S.; Li, J.; Amini, M.H. Federated learning for resource-constrained IoT devices: Panoramas and state-of-the-art. arXiv 2022, arXiv:2002.10610. [Google Scholar]
- Javeed, D.; Gao, T.; Khan, M.T. Fog computing and federated learning-based intrusion detection system for Internet of Things. Computers & Electrical Engineering 2023, 107, 108651. [Google Scholar]
- Bensaid, S.; Driss, M.; Boulila, W.; Alsaeedi, A.; Al-Sarem, M. Securing IoT via fog-layer federated learning. Journal of Network and Computer Applications 2025, 215, 103635. [Google Scholar]
- Liu, Y.; Ma, Z.; Liu, X.; Ma, S.; Nepal, S.; Deng, R. Distributed intrusion detection system for IoT based on federated learning and edge computing. Computers & Security 2022, 115, 102622. [Google Scholar]
- Saha, R.; Misra, S.; Dutta, P.K. FogFL: Fog-assisted federated learning for resource-constrained IoT devices. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops). IEEE, 2020, pp. 1–6.
- de Souza, C.A.; Westphall, C.M.; Machado, R.B. F-FIDS: Federated fog-based intrusion detection system for smart grids. International Journal of Information Security 2023, 22, 1059–1077. [Google Scholar]
- Abdel-Basset, M.; Chang, V.; Ding, W.; et al. Privacy-preserving federated learning: A comprehensive survey. Information Fusion 2024, 104, 102234. [Google Scholar]
- Geyer, R.C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. In Proceedings of the NIPS Workshop on Privacy-Preserving Machine Learning, 2017.
- Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 1175–1191.
- Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS). PMLR, 2020, Vol. 108, Proceedings of Machine Learning Research, pp. 2938–2948.
- Blanchard, P.; El Mhamdi, E.M.; Guerraoui, R.; Stainer, J. Machine learning with adversaries: Byzantine tolerant gradient descent. Advances in neural information processing systems 2017, 30. [Google Scholar]
- Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE symposium on security and privacy (SP). IEEE, 2017, pp. 3–18.
- Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), IEEE, San Francisco, CA, USA, 2019; pp. 739–753.
- Sun, T.; Kairouz, P.; Suresh, A.T.; McMahan, H.B. Threats and countermeasures in federated learning: A survey. ACM Computing Surveys 2022, 55, 1–39. [Google Scholar] [CrossRef]
- Li, X.; Zeng, Y.; Xu, M.; Jin, R.; et al. A survey on privacy and security issues in federated learning: Threats, challenges, and solutions. IEEE Communications Surveys & Tutorials 2023, 25, 1234–1261. [Google Scholar]
- Kulkarni, V.; Kulkarni, M.; Pant, A. Survey of personalization techniques for federated learning. arXiv 2020, arXiv:2003.08673. [Google Scholar] [CrossRef]
- Tan, Y.; Ji, S.; Yang, T.; Yu, S.; Zhang, Y. Towards personalized federated learning. IEEE Transactions on Neural Networks and Learning Systems 2022, 33, 5005–5021. [Google Scholar] [CrossRef]
- Neto, E.C.; Dadkhah, S.; Ferreira, R.; Zohourian, A.; Lu, R.; Ghorbani, A.A. CIC-IoT2023: A real-time dataset and benchmark for large-scale attacks in IoT environment. Sensors 2023, 23, 5941. [Google Scholar] [CrossRef] [PubMed]
- Yu, F.X.; Rawat, A.S.; Menon, A.K.; Kumar, S. Federated learning with only positive labels. In Proceedings of the 37th International Conference on Machine Learning (ICML). PMLR, 2020, Vol. 119, Proceedings of Machine Learning Research, pp. 10946–10956.
- Meta AI. Opacus: User-friendly differential privacy library in PyTorch. https://opacus.ai/, 2021. Accessed: 2025-10-07.
- Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, H.B.; et al. Towards federated learning at scale: System design. In Proceedings of the 2nd SysML Conference, Stanford, CA, USA, 2019.

| Ref | Year | Main Contribution | Non-IID | Performance | Scalability | Privacy |
|---|---|---|---|---|---|---|
| [37] | 2020 | FogFL: hierarchical FL where fog nodes train/aggregate locally before cloud sync, reducing uplink traffic. | – | ✓ | – | – |
| [36] | 2022 | Proposed fog-client selection (random or resource-aware) in FL training, optimizing communication and performance. | – | ✓ | – | – |
| [34] | 2023 | Incorporated fog computing into FL to offload training from IoT devices, reducing latency and improving IDS performance. | – | ✓ | – | – |
| [38] | 2023 | Fog-based FL framework for IDS, leveraging fog-layer processing to enhance scalability and responsiveness. | – | ✓ | ✓ | – |
| [39] | 2024 | Proposed a privacy-preserving fog–federated IDS combining GAN-based data augmentation and differential privacy to address non-IID and adversarial data leakage. | ✓ | ✓ | – | ✓ |
| [35] | 2025 | Secured IoT via fog-layer FL, enabling low-latency collaborative IDS with privacy preservation. | – | ✓ | ✓ | ✓ |
| RT-IoT Dataset | | CIC-IoT Dataset | | | |
|---|---|---|---|---|---|
| Attack Type | Count | Attack Type | Count | Attack Type | Count |
| DoSSYNHping | 94,659 | DDoSICMPFlood | 161,281 | DDoSUDPFlood | 121,205 |
| ThingSpeak | 8,108 | DDoSTCPFlood | 101,293 | DDoSPSHACKFlood | 92,395 |
| ARPPoisioning | 7,750 | DDoSSYNFlood | 91,644 | DDoSRSTFINFlood | 90,823 |
| MQTTPublish | 4,146 | DDoSSynIPFlood | 80,680 | DoSUDPFlood | 74,787 |
| NmapUDPScan | 2,590 | DoSTCPFlood | 59,807 | DoSSynFlood | 45,207 |
| NmapXMAStreesc | 2,010 | BenignTraffic | 24,476 | MiraiGreethFlood | 22,115 |
| NmapOSDetection | 2,000 | MiraiUdpplain | 20,166 | MiraiGreipFlood | 16,952 |
| NmapTCPScan | 1,002 | DDoSICMPFrag | 10,223 | MITMArpSpoofing | 7,019 |
| DDOSSlowloris | 534 | DDoSACKFrag | 6,431 | DDoSUDPFrag | 6,431 |
| Wiprobulb | 253 | DNSSpoofing | 4,034 | ReconHostDisc | 3,007 |
| MetasploitBF | 37 | ReconOSScan | 2,225 | ReconPortScan | 1,863 |
| NmapFINScan | 28 | DoSHTTPFlood | 1,680 | VulnerabilityScan | 809 |
| | | DDoSHTTPFlood | 626 | DDoSSlowLoris | 493 |
| | | DictionaryBF | 324 | BrowserHijacking | 140 |
| | | SqlInjection | 122 | CommandInjection | 105 |
| | | BackdoorMalware | 76 | XSS | 72 |
| | | ReconPingSweep | 41 | UploadingAttack | 23 |
| | RT-IoT Dataset | | | | CIC-IoT Dataset | | |
|---|---|---|---|---|---|---|---|
| Clients \ Classes per client | 12 | 6 | 3 | 2 | 7 | 3 | 2 |
| 10 | 99.31 | 95.42 | 81.54 | 10.49 | 98.91 | 98.07 | 66.96 |
| 50 | 99.06 | 98.33 | 93.0 | 12.38 | 98.97 | 98.80 | 98.16 |
| 100 | 99.02 | 98.37 | 47.44 | 37.99 | 98.96 | 98.66 | 97.07 |
| 200 | 99.04 | 98.21 | 87.96 | 39.82 | 98.93 | 98.59 | 96.38 |
| 400 | 98.44 | 98.22 | 91.86 | 54.80 | 98.86 | 98.48 | 92.54 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).