Submitted: 10 September 2025
Posted: 16 September 2025
Abstract
Keywords:
1. Introduction
2. Method


3. Experimental Results
4. Conclusions
Conflicts of Interest
References
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).