Submitted: 06 June 2025
Posted: 09 June 2025
Abstract
Keywords:
I. Introduction
II. Method
III. Experiment
A. Datasets
B. Experimental Results
IV. Conclusions
References
- D. Mueller, M. Dredze, and N. Andrews, "Multi-Task Transfer Matters During Instruction-Tuning," Findings of the Association for Computational Linguistics: ACL 2024, 2024.
- Y. Duan, L. Yang, T. Zhang, Z. Song, and F. Shao, "Automated UI Interface Generation via Diffusion Models: Enhancing Personalization and Efficiency," arXiv:2503.20229, 2025.
- X. Wang et al., "InstructUIE: Multi-Task Instruction Tuning for Unified Information Extraction," arXiv:2304.08085, 2023.
- J. Chen et al., "TaskGalaxy: Scaling Multi-Modal Instruction Fine-Tuning with Tens of Thousands of Vision Task Types," arXiv:2502.09925, 2025.
- Z. Zhao, Y. Ziser, and S. B. Cohen, "Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models," arXiv:2410.20008, 2024.
- Z. Xu, Y. Shen, and L. Huang, "MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning," arXiv:2212.10773, 2022.
- M. Parmar et al., "In-BoXBART: Get Instructions into Biomedical Multi-Task Learning," arXiv:2204.07600, 2022.
- Y. Ma, G. Cai, F. Guo, Z. Fang, and X. Wang, "Knowledge-Informed Policy Structuring for Multi-Agent Collaboration Using Language Models," Journal of Computer Science and Software Applications, vol. 5, 2025.
- Y. Peng, "Semantic Context Modeling for Fine-Grained Access Control Using Large Language Models," Journal of Computer Technology and Software, vol. 3, 2024.
- L. Zhu, F. Guo, G. Cai, and Y. Ma, "Structured Preference Modeling for Reinforcement Learning-Based Fine-Tuning of Large Models," Journal of Computer Technology and Software, vol. 4, 2025.
- J. He, G. Liu, B. Zhu, H. Zhang, H. Zheng, and X. Wang, "Context-Guided Dynamic Retrieval for Improving Generation Quality in RAG Models," arXiv:2504.19436, 2025.
- J. Gong, Y. Wang, W. Xu, and Y. Zhang, "A Deep Fusion Framework for Financial Fraud Detection and Early Warning Based on Large Language Models," Journal of Computer Science and Software Applications, vol. 4, 2024.
- G. Cai, J. Gong, J. Du, H. Liu, and A. Kai, "Investigating Hierarchical Term Relationships in Large Language Models," Journal of Computer Science and Software Applications, vol. 5, 2025.
- D. Cheng et al., "Instruction Pre-Training: Language Models Are Supervised Multitask Learners," arXiv:2406.14491, 2024.
- Y. Peng, "Structured Knowledge Integration and Memory Modeling in Large Language Systems," Transactions on Computational and Scientific Methods, vol. 4, 2024.
- Y. Deng, "Transfer Methods for Large Language Models in Low-Resource Text Generation Tasks," Journal of Computer Science and Software Applications, vol. 4, 2024.
- H. Zhang, Y. Ma, S. Wang, G. Liu, and B. Zhu, "Graph-Based Spectral Decomposition for Parameter Coordination in Language Model Fine-Tuning," arXiv:2504.19583, 2025.
- R. Wang, "Joint Semantic Detection and Dissemination Control of Phishing Attacks on Social Media via LLama-Based Modeling," 2025.
- Y. Wang, Z. Fang, Y. Deng, L. Zhu, Y. Duan, and Y. Peng, "Revisiting LoRA: A Smarter Low-Rank Approach for Efficient Model Adaptation," arXiv preprint, 2025.
- Z. Xu, Y. Sheng, Q. Bao, X. Du, X. Guo, and Z. Liu, "BERT-Based Automatic Audit Report Generation and Compliance Analysis," 2025.
- X. Wang et al., "Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning," Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.
- Y. Yang et al., "MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, 2025.
- B. Zhu et al., "Prompt-Aligned Gradient for Prompt Tuning," Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.




Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).