Submitted: 02 February 2025
Posted: 04 February 2025
Abstract
Keywords:
1. Introduction
2. Materials and Methods
2.1. Exploring Issues of Client Selection Methods
2.2. Developing the Improved Federated RHLP Algorithm for Efficient Client Selection
2.3. Experiment Setup
2.3.1. Dataset Preparation
2.3.2. Setting Up Algorithms, CNN Model Structures, and Hardware Specifications
2.3.3. Algorithm Performance Evaluation
3. Results
4. Discussion and Conclusions
5. Future Works
Author Contributions
Funding
Data Availability Statement
Acknowledgments
References
| Device | Specification |
|---|---|
| Central Processing Unit | 11th Gen Intel Core i5-11400F @ 2.59 GHz |
| Graphics Processing Unit | AMD Radeon RX 480 |
| Random Access Memory | 32.0 GB |
| Operating System | Windows 11 |
| Software Environment | Python 3.7.2 with the PyTorch framework |
| Algorithm | Issue level | Global model accuracy | Rounds to 60% | Time to 60% (s) | Rounds to 70% | Time to 70% (s) | Rounds to 80% | Time to 80% (s) | Rounds to 90% | Time to 90% (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| Improved Fed-RHLP | High | 96.49% | 6 | 633.6 | 11 | 1161.6 | 19 | 2006.4 | 41 | 4329.6 |
| Improved Fed-RHLP | Low | 98.91% | 3 | 486 | 3 | 486 | 6 | 972 | 10 | 1620 |
| Fed-Avg | High | 86.18% | 32 | 1747.2 | 74 | 4040.4 | 126 | 6879.6 | - | - |
| Fed-Avg | Low | 98.03% | 9 | 680.4 | 12 | 907.2 | 12 | 907.2 | 52 | 3931.2 |
| PoC | High | 90.31% | 43 | 3947.4 | 57 | 5232.6 | 85 | 7803 | 185 | 16983 |
| PoC | Low | 98.76% | 3 | 451.8 | 5 | 753 | 8 | 1204.8 | 15 | 2259 |
| Fed-Choice | High | 87.80% | 40 | 3816 | 60 | 5724 | 92 | 8776.8 | - | - |
| Fed-Choice | Low | 98.78% | 3 | 464.4 | 5 | 774 | 8 | 1238.4 | 14 | 2167.2 |
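Within each row of the table, the reported time grows in exact proportion to the round count, i.e. every (algorithm, issue level) configuration has a fixed per-round wall-clock cost. A minimal Python check of this, with a few values copied from the table above (not a re-run of the experiment):

```python
# (rounds, seconds) pairs at the 60/70/80/90% accuracy targets,
# copied from the table; '-' entries (target never reached) are omitted.
results = {
    ("Improved Fed-RHLP", "High"): [(6, 633.6), (11, 1161.6), (19, 2006.4), (41, 4329.6)],
    ("Improved Fed-RHLP", "Low"):  [(3, 486.0), (3, 486.0), (6, 972.0), (10, 1620.0)],
    ("Fed-Avg", "High"):           [(32, 1747.2), (74, 4040.4), (126, 6879.6)],
    ("Fed-Avg", "Low"):            [(9, 680.4), (12, 907.2), (12, 907.2), (52, 3931.2)],
}

for (algo, level), pairs in results.items():
    per_round = [t / r for r, t in pairs]
    # Every Time/Round ratio in a row should match the first one.
    assert all(abs(p - per_round[0]) < 0.05 for p in per_round), (algo, level)
    print(f"{algo} ({level} issue level): {per_round[0]:.1f} s/round")
```

This suggests the reported times were derived from a measured per-round duration (e.g. 105.6 s/round for Improved Fed-RHLP under the high issue level, 54.6 s/round for Fed-Avg), so the round counts and times carry the same information.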
| Algorithm | Issue level | Global model accuracy | Rounds to 60% | Time to 60% (s) | Rounds to 70% | Time to 70% (s) | Rounds to 80% | Time to 80% (s) | Rounds to 90% | Time to 90% (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| Improved Fed-RHLP | High | 89.06% | 6 | 1288.8 | 11 | 2362.8 | 15 | 3222 | - | - |
| Improved Fed-RHLP | Low | 90.97% | 1 | 328.2 | 2 | 656.4 | 7 | 2297.4 | 48 | 15753.6 |
| Fed-Avg | High | 83.68% | 13 | 2386.8 | 23 | 4222.8 | 55 | 10098 | - | - |
| Fed-Avg | Low | 90.59% | 3 | 747 | 3 | 747 | 3 | 747 | 83 | 20667 |
| PoC | High | 85.79% | 8 | 1929.6 | 9 | 2170.8 | 39 | 9406.8 | - | - |
| PoC | Low | 90.77% | 3 | 1072.8 | 5 | 1788 | 6 | 2145.6 | 85 | 30396 |
| Fed-Choice | High | 87.90% | 8 | 1934.4 | 12 | 2901.6 | 21 | 5077.8 | - | - |
| Fed-Choice | Low | 90.63% | 2 | 714 | 4 | 1428 | 6 | 2142 | 59 | 21063 |
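In this table, no algorithm reaches 90% accuracy under the high issue level, so 80% is the last target all four algorithms hit there. The relative cost at that target can be read off directly; a small script with the high-issue-level times copied from the table above:

```python
# Wall-clock seconds to first reach 80% global accuracy
# (high issue level rows of the table above).
time_to_80 = {
    "Improved Fed-RHLP": 3222.0,
    "Fed-Avg": 10098.0,
    "PoC": 9406.8,
    "Fed-Choice": 5077.8,
}
base = time_to_80["Improved Fed-RHLP"]
for algo, t in sorted(time_to_80.items(), key=lambda kv: kv[1]):
    print(f"{algo}: {t / base:.2f}x the Improved Fed-RHLP time")
```

By this measure the improved Fed-RHLP reaches 80% roughly 3.1x faster than Fed-Avg and 2.9x faster than PoC under the high issue level.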
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).