Submitted: 26 March 2025
Posted: 26 March 2025
Abstract
Keywords:
1. Introduction

- We propose MedAgent-Pro, a reasoning agentic workflow that can provide accurate and explainable medical diagnoses supported by visual evidence and clinical guidelines.
- At the task level, MLLMs perform knowledge-based reasoning to generate well-founded diagnostic plans. Without fine-tuning, this is accomplished by integrating retrieved clinical criteria, ensuring greater reliability in diagnosis.
- At the case level, various medical expert tools execute the corresponding steps in the plan to process multi-modal patient information, providing interpretable qualitative and quantitative analyses to support evidence-based decision-making.
- We evaluate MedAgent-Pro on both 2D and 3D multi-modal medical diagnosis, where it achieves state-of-the-art performance, surpassing both general MLLMs and task-specific solutions. Additionally, case studies highlight MedAgent-Pro’s superior interpretability and reliability beyond quantitative results.
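The two-level workflow sketched in the bullets above — task-level planning from guidelines, case-level tool execution, then a final decision over the collected indicators — can be expressed as a minimal pipeline. This is a hypothetical sketch; all names, tools, and thresholds here are illustrative, not MedAgent-Pro's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DiagnosticPlan:
    steps: List[str]  # ordered tool names chosen by the planner

def plan_from_guideline(guideline: str, tools: Dict[str, Callable]) -> DiagnosticPlan:
    """Task-level reasoning: keep only guideline steps for which a tool exists."""
    wanted = [s.strip() for s in guideline.split(",")]
    return DiagnosticPlan(steps=[s for s in wanted if s in tools])

def run_case(plan: DiagnosticPlan, tools: Dict[str, Callable], case: dict) -> dict:
    """Case-level diagnosis: each tool turns raw patient data into a named indicator."""
    return {step: tools[step](case) for step in plan.steps}

def decide(indicators: dict) -> str:
    """Toy decider: simple majority vote over boolean indicator findings."""
    votes = sum(bool(v) for v in indicators.values())
    return "positive" if votes * 2 >= len(indicators) else "negative"

# Illustrative run on a synthetic fundus case (field names are made up).
tools = {
    "measure_vCDR": lambda case: case["vcdr"] > 0.6,
    "check_disc_hemorrhage": lambda case: case["dh"],
}
plan = plan_from_guideline("measure_vCDR, check_disc_hemorrhage, unavailable_tool", tools)
indicators = run_case(plan, tools, {"vcdr": 0.72, "dh": False})
diagnosis = decide(indicators)  # one positive vote out of two -> "positive"
```

Note how each indicator survives as a named, inspectable value rather than being folded into a single opaque answer — this is what makes the final decision auditable.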
2. Related work
2.1. AI-Driven Multi-Modal Medical Diagnosis
2.2. LLM-Based AI Agents
3. Methods
3.1. Agents Involved in MedAgent-Pro
3.1.1. Agents in Task-Level Reasoning
- Retrieval-Augmented Generation (RAG) Agent: We utilize RAG to retrieve relevant medical documents, ensuring the development of reliable diagnostic processes that follow clinical criteria. For retrieval, we employ the built-in functionality of LangChain [60], and retrieve from medical libraries [48,50].
- Planner Agent The planner agent generates a diagnostic plan based on the retrieved guidelines and available tools. We employ GPT-4o [1] as the planner agent due to its strong reasoning capabilities.
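To make the RAG agent's role concrete, the snippet below sketches the retrieval step in isolation. The paper relies on LangChain's built-in retrievers over medical libraries; this dependency-free toy instead ranks guideline snippets by bag-of-words cosine similarity, which is an assumption for illustration, not the paper's method.

```python
import math
from collections import Counter

def _bow(text: str) -> Counter:
    """Bag-of-words term counts for a snippet."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = _bow(query)
    ranked = sorted(documents, key=lambda d: _cosine(qv, _bow(d)), reverse=True)
    return ranked[:k]

# Two made-up guideline snippets; the glaucoma one should rank first.
guidelines = [
    "glaucoma screening examines the vertical cup to disc ratio of the optic nerve",
    "diabetes is diagnosed from fasting plasma glucose or an oral tolerance test",
]
top = retrieve("cup to disc ratio for glaucoma diagnosis", guidelines)
```

The retrieved snippets are what the planner agent conditions on, so the diagnostic plan is grounded in clinical criteria rather than in the LLM's parametric memory alone.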
3.1.2. Agents in Case-Level Diagnosis
- Orchestrator Agent: Responsible for conducting a preliminary analysis of the patient’s multi-modal information and determining which steps of the diagnostic plan will be executed. We employ GPT-4o [1] since it recognizes different inputs effectively.
- Tool Agents: We utilize various tool agents to complete different tasks in the diagnostic plans, including:
- Segmentation Models: We use the Medical SAM Adapter [66] as the segmentation model due to its ability to achieve strong performance on the target task with only a small amount of data. To further optimize its effectiveness, we train task-specific adapters for target tasks such as optic cup/disc segmentation.
- Coding Agent: The coding module is designed to generate simple code for computing additional metrics from the raw outputs of vision models (e.g., segmentation masks). We use GPT-o1 for its strong coding ability.
- Summary Agent: Since LLM outputs are often lengthy, we introduce a summary agent to refine the LLM decider’s response into a simple "yes" or "no" for accuracy evaluation. Additionally, the summary agent condenses the VQA tool’s output into "yes," "no," or "uncertain." We employ GPT-4o [1] for its strong summarization capabilities.
- Decider Agent: In charge of making the final diagnosis based on the indicators obtained from previous steps. The implementation includes two approaches, which will be introduced in the following sections.
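As a concrete illustration of the segmentation-plus-coding stage, the snippet below shows the kind of small metric code the coding agent is asked to produce from raw vision outputs: a vertical cup-to-disc ratio (vCDR) from two binary masks. The mask format (nested lists, 1 = foreground) and the height-based computation are a hedged sketch, not the paper's generated code.

```python
def vertical_extent(mask: list[list[int]]) -> int:
    """Height in pixels of the mask's foreground bounding box."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return (max(rows) - min(rows) + 1) if rows else 0

def vcdr(cup_mask: list[list[int]], disc_mask: list[list[int]]) -> float:
    """Vertical cup-to-disc ratio: cup height divided by disc height."""
    disc_h = vertical_extent(disc_mask)
    if disc_h == 0:
        raise ValueError("empty optic disc mask")
    return vertical_extent(cup_mask) / disc_h

# Toy 6x3 masks: disc spans rows 1-4 (height 4), cup spans rows 2-3 (height 2).
disc = [[0, 0, 0], [0, 1, 0], [1, 1, 1], [1, 1, 1], [0, 1, 0], [0, 0, 0]]
cup = [[0, 0, 0], [0, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]]
ratio = vcdr(cup, disc)  # 2 / 4 -> 0.5
```

A quantitative indicator like this feeds the decider agent directly, so the final diagnosis can be traced back to a measurable quantity rather than an unexplained model output.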
3.2. Knowledge-Based Task-Level Reasoning
3.3. Evidence-Based Case-Level Diagnosis
4. Experiment
4.1. Datasets and Evaluation Metrics
4.2. Comparison with Multi-Modal Foundation Models
4.3. Case Study
4.4. Comparison with Task-specific Models
4.5. Ablation Study
5. Conclusion and Future Work
Acknowledgments
References
- Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
- American Diabetes Association: 2. Classification and diagnosis of diabetes: standards of medical care in diabetes—2020. Diabetes Care 43(Supplement_1), S14–S31 (2020).
- Aubreville, M., Stathonikos, N., Donovan, T.A., Klopfleisch, R., Ammeling, J., Ganz, J., Wilm, F., Veta, M., Jabari, S., Eckstein, M., et al.: Domain generalization across tumor types, laboratories, and species—insights from the 2022 edition of the mitosis domain generalization challenge. Medical Image Analysis 94, 103155 (2024).
- Azizi, S., Mustafa, B., Ryan, F., Beaver, Z., Freyberg, J., Deaton, J., Loh, A., Karthikesalingam, A., Kornblith, S., Chen, T., et al.: Big self-supervised models advance medical image classification. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 3478–3488 (2021).
- Bakator, M., Radosav, D.: Deep learning and medical diagnosis: A review of literature. Multimodal Technologies and Interaction 2(3), 47 (2018).
- Bannur, S., Bouzid, K., Castro, D.C., Schwaighofer, A., Thieme, A., Bond-Taylor, S., Ilse, M., Pérez-García, F., Salvatelli, V., Sharma, H., et al.: Maira-2: Grounded radiology report generation. arXiv preprint arXiv:2406.04449 (2024).
- Baumgartner, M., Jäger, P.F., Isensee, F., Maier-Hein, K.H.: nndetection: a self-configuring method for medical object detection. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part V 24. pp. 530–539. Springer (2021).
- Bejnordi, B.E., Veta, M., Van Diest, P.J., Van Ginneken, B., Karssemeijer, N., Litjens, G., Van Der Laak, J.A., Hermsen, M., Manson, Q.F., Balkenhol, M., et al.: Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318(22), 2199–2210 (2017).
- Benary, M., Wang, X.D., Schmidt, M., Soll, D., Hilfenhaus, G., Nassir, M., Sigler, C., Knödler, M., Keller, U., Beule, D., et al.: Leveraging large language models for decision support in personalized oncology. JAMA Network Open 6(11), e2343689–e2343689 (2023).
- Boiko, D.A., MacKnight, R., Gomes, G.: Emergent autonomous scientific research capabilities of large language models. arXiv preprint arXiv:2304.05332 (2023).
- Brohan, A., Chebotar, Y., Finn, C., Hausman, K., Herzog, A., Ho, D., Ibarz, J., Irpan, A., Jang, E., Julian, R., et al.: Do as i can, not as i say: Grounding language in robotic affordances. In: Conference on robot learning. pp. 287–318. PMLR (2023).
- Chen, G., Dong, S., Shu, Y., Zhang, G., Sesay, J., Karlsson, B.F., Fu, J., Shi, Y.: Autoagents: A framework for automatic agent generation. arXiv preprint arXiv:2309.17288 (2023).
- Chen, X., Wu, Z., Liu, X., Pan, Z., Liu, W., Xie, Z., Yu, X., Ruan, C.: Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811 (2025).
- Coudray, N., Ocampo, P.S., Sakellaropoulos, T., Narula, N., Snuderl, M., Fenyö, D., Moreira, A.L., Razavian, N., Tsirigos, A.: Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nature Medicine 24(10), 1559–1567 (2018).
- Dou, Q., Chen, H., Yu, L., Zhao, L., Qin, J., Wang, D., Mok, V.C., Shi, L., Heng, P.A.: Automatic detection of cerebral microbleeds from mr images via 3d convolutional neural networks. IEEE Transactions on Medical Imaging 35(5), 1182–1195 (2016).
- Fallahpour, A., Ma, J., Munim, A., Lyu, H., Wang, B.: Medrax: Medical reasoning agent for chest x-ray. arXiv preprint arXiv:2502.02673 (2025).
- Fang, H., Li, F., Wu, J., Fu, H., Sun, X., Son, J., Yu, S., Zhang, M., Yuan, C., Bian, C., et al.: Refuge2 challenge: A treasure trove for multi-dimension analysis and evaluation in glaucoma screening. arXiv preprint arXiv:2202.08994 (2022).
- Gallotta, R., Todd, G., Zammit, M., Earle, S., Liapis, A., Togelius, J., Yannakakis, G.N.: Large language models and games: A survey and roadmap. arXiv preprint arXiv:2402.18659 (2024).
- Ghafarollahi, A., Buehler, M.J.: Protagents: protein discovery via large language model multi-agent collaborations combining physics and machine learning. Digital Discovery (2024).
- Ghezloo, F., Seyfioglu, M.S., Soraki, R., Ikezogwo, W.O., Li, B., Vivekanandan, T., Elmore, J.G., Krishna, R., Shapiro, L.: Pathfinder: A multi-modal multi-agent system for medical diagnostic decision-making applied to histopathology. arXiv preprint arXiv:2502.08916 (2025).
- Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., et al.: Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 (2025).
- Habli, Z., AlChamaa, W., Saab, R., Kadara, H., Khraiche, M.L.: Circulating tumor cell detection technologies and clinical utility: Challenges and opportunities. Cancers 12(7), 1930 (2020).
- He, X., Zhang, Y., Mou, L., Xing, E., Xie, P.: Pathvqa: 30000+ questions for medical visual question answering. arXiv preprint arXiv:2003.10286 (2020).
- Huang, W., Xia, F., Xiao, T., Chan, H., Liang, J., Florence, P., Zeng, A., Tompson, J., Mordatch, I., Chebotar, Y., et al.: Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608 (2022).
- Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021).
- Jinxin, S., Jiabao, Z., Yilei, W., Xingjiao, W., Jiawen, L., Liang, H.: Cgmi: Configurable general multi-agent interaction framework. arXiv preprint arXiv:2308.12503 (2023).
- Khare, Y., Bagal, V., Mathew, M., Devi, A., Priyakumar, U.D., Jawahar, C.: Mmbert: Multimodal bert pretraining for improved medical vqa. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). pp. 1033–1036. IEEE (2021).
- Kim, Y., Park, C., Jeong, H., Chan, Y.S., Xu, X., McDuff, D., Lee, H., Ghassemi, M., Breazeal, C., Park, H.W.: Mdagents: An adaptive collaboration of llms for medical decision-making. In: The Thirty-eighth Annual Conference on Neural Information Processing Systems (2024).
- Kononenko, I.: Machine learning for medical diagnosis: history, state of the art and perspective. Artificial Intelligence in medicine 23(1), 89–109 (2001).
- Lai, X., Tian, Z., Chen, Y., Yang, S., Peng, X., Jia, J.: Step-dpo: Step-wise preference optimization for long-chain reasoning of llms. arXiv preprint arXiv:2406.18629 (2024).
- Lau, J.J., Gayen, S., Ben Abacha, A., Demner-Fushman, D.: A dataset of clinically generated visual questions and answers about radiology images. Scientific Data 5(1), 1–10 (2018).
- Li, B., Yan, T., Pan, Y., Luo, J., Ji, R., Ding, J., Xu, Z., Liu, S., Dong, H., Lin, Z., et al.: Mmedagent: Learning to use medical tools with multi-modal agent. arXiv preprint arXiv:2407.02483 (2024).
- Li, C., Wong, C., Zhang, S., Usuyama, N., Liu, H., Yang, J., Naumann, T., Poon, H., Gao, J.: Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems 36 (2024).
- Li, G., Hammoud, H., Itani, H., Khizbullin, D., Ghanem, B.: Camel: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems 36, 51991–52008 (2023).
- Li, J., Lai, Y., Li, W., Ren, J., Zhang, M., Kang, X., Wang, S., Li, P., Zhang, Y.Q., Ma, W., et al.: Agent hospital: A simulacrum of hospital with evolvable medical agents. arXiv preprint arXiv:2405.02957 (2024).
- Li, K., Hopkins, A.K., Bau, D., Viégas, F., Pfister, H., Wattenberg, M.: Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382 (2022).
- Li, R., Zhang, C., Mao, S., Huang, H., Zhong, M., Cui, Y., Zhou, X., Yin, F., Theodoridis, S., Zhang, Z.: From english to pcsel: Llm helps design and optimize photonic crystal surface emitting lasers (2023).
- Li, Z., Song, D., Yang, Z., Wang, D., Li, F., Zhang, X., Kinahan, P.E., Qiao, Y.: Visionunite: A vision-language foundation model for ophthalmology enhanced with clinical knowledge. arXiv preprint arXiv:2408.02865 (2024).
- Liang, X., Li, X., Li, F., Jiang, J., Dong, Q., Wang, W., Wang, K., Dong, S., Luo, G., Li, S.: Medfilip: Medical fine-grained language-image pre-training. IEEE Journal of Biomedical and Health Informatics (2025).
- Lin, J., Xia, Y., Zhang, J., Yan, K., Lu, L., Luo, J., Zhang, L.: Ct-glip: 3d grounded language-image pretraining with ct scans and radiology reports for full-body scenarios. arXiv preprint arXiv:2404.15272 (2024).
- Liu, B., Zhan, L.M., Xu, L., Ma, L., Yang, Y., Wu, X.M.: Slake: A semantically-labeled knowledge-enhanced dataset for medical visual question answering. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). pp. 1650–1654. IEEE (2021).
- Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. Advances in neural information processing systems 36 (2024).
- Low, C.H., Wang, Z., Zhang, T., Zeng, Z., Zhuo, Z., Mazomenos, E.B., Jin, Y.: Surgraw: Multi-agent workflow with chain-of-thought reasoning for surgical intelligence. arXiv preprint arXiv:2503.10265 (2025).
- M. Bran, A., Cox, S., Schilter, O., Baldassari, C., White, A.D., Schwaller, P.: Augmenting large language models with chemistry tools. Nature Machine Intelligence pp. 1–11 (2024).
- Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nature Communications 15(1), 654 (2024).
- Ma, Z., Mei, Y., Su, Z.: Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. In: AMIA Annual Symposium Proceedings. vol. 2023, p. 1105 (2024).
- McPhee, S.J., Papadakis, M.A., Rabow, M.W., et al.: Current medical diagnosis & treatment 2010. McGraw-Hill Medical, New York (2010).
- Friends of the National Library of Medicine (US): MedlinePlus, vol. 5. National Institutes of Health and the Friends of the National Library of … (2006).
- Mehta, N., Teruel, M., Sanz, P.F., Deng, X., Awadallah, A.H., Kiseleva, J.: Improving grounded language understanding in a collaborative environment by interacting with agents through help feedback. arXiv preprint arXiv:2304.10750 (2023).
- Miller, N., Lacroix, E.M., Backus, J.E.: Medlineplus: building and maintaining the national library of medicine’s consumer health web service. Bulletin of the Medical Library Association 88(1), 11 (2000).
- Moor, M., Huang, Q., Wu, S., Yasunaga, M., Dalmia, Y., Leskovec, J., Zakka, C., Reis, E.P., Rajpurkar, P.: Med-flamingo: a multimodal medical few-shot learner. In: Machine Learning for Health (ML4H). pp. 353–367. PMLR (2023).
- Qin, G., Hu, R., Liu, Y., Zheng, X., Liu, H., Li, X., Zhang, Y.: Data-efficient image quality assessment with attention-panel decoder. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 37, pp. 2091–2100 (2023).
- Ranella, N., Eger, M.: Towards automated video game commentary using generative ai. In: EXAG@ AIIDE (2023).
- Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18. pp. 234–241. Springer (2015).
- Shinn, N., Cassano, F., Gopinath, A., Narasimhan, K., Yao, S.: Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems 36 (2024).
- Steinberg, E., Greenfield, S., Wolman, D.M., Mancher, M., Graham, R.: Clinical practice guidelines we can trust. National Academies Press (2011).
- Szolovits, P., Patil, R.S., Schwartz, W.B.: Artificial intelligence in medical diagnosis. Annals of internal medicine 108(1), 80–87 (1988).
- Tang, X., Zou, A., Zhang, Z., Li, Z., Zhao, Y., Zhang, X., Cohan, A., Gerstein, M.: Medagents: Large language models as collaborators for zero-shot medical reasoning. arXiv preprint arXiv:2311.10537 (2023).
- Topsakal, O., Akinci, T.C.: Creating large language model applications utilizing langchain: A primer on developing llm apps fast. In: International Conference on Applied Engineering and Natural Sciences. vol. 1, pp. 1050–1056 (2023).
- Wang, D., Zhang, Y., Zhang, K., Wang, L.: Focalmix: Semi-supervised learning for 3d medical image detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3951–3960 (2020).
- Wang, K., Lu, Y., Santacroce, M., Gong, Y., Zhang, C., Shen, Y.: Adapting llm agents through communication. arXiv preprint arXiv:2310.01444 (2023).
- Wang, M., Lin, T., Lin, A., Yu, K., Peng, Y., Wang, L., Chen, C., Zou, K., Liang, H., Chen, M., et al.: Common and rare fundus diseases identification using vision-language foundation model with knowledge of over 400 diseases. arXiv preprint arXiv:2406.09317 (2024).
- Wang, Z., Zhang, Y., Wang, Y., Cai, L., Zhang, Y.: Dynamic pseudo label optimization in point-supervised nuclei segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 220–230. Springer (2024).
- Wu, J., Antonova, R., Kan, A., Lepert, M., Zeng, A., Song, S., Bohg, J., Rusinkiewicz, S., Funkhouser, T.: Tidybot: Personalized robot assistance with large language models. Autonomous Robots 47(8), 1087–1102 (2023).
- Wu, J., Ji, W., Liu, Y., Fu, H., Xu, M., Xu, Y., Jin, Y.: Medical sam adapter: Adapting segment anything model for medical image segmentation. arXiv preprint arXiv:2304.12620 (2023).
- Xia, Y., Shenoy, M., Jazdi, N., Weyrich, M.: Towards autonomous system: flexible modular production system enhanced with large language model agents. In: 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA). pp. 1–8. IEEE (2023).
- Yang, S., Chen, Y., Tian, Z., Wang, C., Li, J., Yu, B., Jia, J.: Visionzip: Longer is better but not necessary in vision language models. arXiv preprint arXiv:2412.04467 (2024).
- Yang, S., Liu, J., Zhang, R., Pan, M., Guo, Z., Li, X., Chen, Z., Gao, P., Guo, Y., Zhang, S.: Lidar-llm: Exploring the potential of large language models for 3d lidar understanding. arXiv preprint arXiv:2312.14074 (2023).
- Yang, S., Qu, T., Lai, X., Tian, Z., Peng, B., Liu, S., Jia, J.: Lisa++: An improved baseline for reasoning segmentation with large language model. arXiv preprint arXiv:2312.17240 (2023).
- Zhan, L.M., Liu, B., Fan, L., Chen, J., Wu, X.M.: Medical visual question answering via conditional reasoning. In: Proceedings of the 28th ACM International Conference on Multimedia. pp. 2345–2354 (2020).
- Zhang, C., Yang, K., Hu, S., Wang, Z., Li, G., Sun, Y., Zhang, C., Zhang, Z., Liu, A., Zhu, S.C., et al.: Proagent: building proactive cooperative agents with large language models. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 38, pp. 17591–17599 (2024).
- Zhang, J., Xie, Y., Wu, Q., Xia, Y.: Medical image classification using synergic deep learning. Medical Image Analysis 54, 10–19 (2019).
- Zhang, S., Xu, Y., Usuyama, N., Xu, H., Bagga, J., Tinn, R., Preston, S., Rao, R., Wei, M., Valluri, N., et al.: Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915 (2023).
- Zhang, X., Wu, C., Zhao, Z., Lin, W., Zhang, Y., Wang, Y., Xie, W.: Pmc-vqa: Visual instruction tuning for medical visual question answering. arXiv preprint arXiv:2305.10415 (2023).
- Zhang, Y., Wang, Y., Fang, Z., Bian, H., Cai, L., Wang, Z., Zhang, Y.: Dawn: Domain-adaptive weakly supervised nuclei segmentation via cross-task interactions. IEEE Transactions on Circuits and Systems for Video Technology (2024).
- Zhao, D., Ferdian, E., Maso Talou, G.D., Quill, G.M., Gilbert, K., Wang, V.Y., Babarenda Gamage, T.P., Pedrosa, J., D’hooge, J., Sutton, T.M., et al.: Mitea: A dataset for machine learning segmentation of the left ventricle in 3d echocardiography using subject-specific labels from cardiac magnetic resonance imaging. Frontiers in Cardiovascular Medicine 9, 1016703 (2023).
- Zhu, J., Qi, Y., Wu, J.: Medical sam 2: Segment medical images as video via segment anything model 2. arXiv preprint arXiv:2408.00874 (2024).
- Zuo, K., Jiang, Y., Mo, F., Lio, P.: Kg4diagnosis: A hierarchical multi-agent llm framework with knowledge graph enhancement for medical diagnosis. arXiv preprint arXiv:2412.16833 (2024).


| Method | Glaucoma mACC | Glaucoma F1 | Heart Disease mACC | Heart Disease F1 |
|---|---|---|---|---|
| GPT-4o [1] | - | - | - | - |
| LLaVa-Med [33] | 50.0 | 0.0 | 50.0 | 0.0 |
| Janus-Pro-7B [13] | 53.4 | 13.3 | 52.3 | 10.7 |
| BioMedClip [74] | 58.1 | 21.3 | 47.0 | 37.8 |
| MedAgent-Pro (MOE Decider) | 90.4 | 76.4 | 66.8 | 52.6 |
| MedAgent-Pro (LLM Decider) | 75.9 | 44.8 | 63.8 | 44.1 |
REFUGE2 winners:

| Team Name | AUC | Rank |
|---|---|---|
| VUNO EYE TEAM | 88.3 | 1 |
| MIG | 87.6 | 2 |
| MAI | 86.1 | 3 |
| MedAgent-Pro (MOE decider) | 95.1 | - |

Ophthalmology Expert MLLMs:

| Method | mAcc | F1 |
|---|---|---|
| RetiZero [63] | 50.8 | 18.4 |
| VisionUnite [38] | 85.8 | 73.1 |
| MedAgent-Pro (LLM decider) | 75.9 | 44.8 |
| MedAgent-Pro (MOE decider) | 90.4 | 76.4 |
| vCDR | RT | PPA | DH | mACC (single) | F1 (single) | mACC (MOE decider) | F1 (MOE decider) | mACC (LLM decider) | F1 (LLM decider) |
|---|---|---|---|---|---|---|---|---|---|
| ✓ | | | | 81.7 | 65.9 | - | - | - | - |
| | ✓ | | | 70.8 | 31.3 | - | - | - | - |
| | | ✓ | | 81.0 | 74.6 | - | - | - | - |
| | | | ✓ | 66.8 | 29.6 | - | - | - | - |
| ✓ | ✓ | | | - | - | 87.0 | 55.0 | 71.5 | 55.4 |
| ✓ | ✓ | | | - | - | 93.8 | 78.7 | 69.7 | 52.0 |
| ✓ | ✓ | | | - | - | 80.4 | 70.4 | 52.8 | 14.3 |
| ✓ | ✓ | ✓ | | - | - | 90.1 | 81.5 | 73.3 | 61.3 |
| ✓ | ✓ | ✓ | ✓ | - | - | 90.4 | 76.4 | 75.9 | 44.8 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).