Submitted: 05 May 2025
Posted: 06 May 2025
Abstract
Keywords:
1. Introduction
2. Related Work
3. Method
4. Experiment
4.1. Datasets
4.2. Experimental Results
5. Conclusion
References
[1] J. Smith and A. Brown, “A Comparative Study of Contrastive Learning Approaches for User and Item Representation in Recommender Systems,” Journal of Machine Learning Applications, vol. 15, no. 2, pp. 145-162, 2023.
[2] Y. Zhang, “Social Network User Profiling for Anomaly Detection Based on Graph Neural Networks,” arXiv preprint arXiv:2503.19380, 2025.
[3] G. Cai, J. Gong, J. Du, H. Liu and A. Kai, “Investigating Hierarchical Term Relationships in Large Language Models,” Journal of Computer Science and Software Applications, vol. 5, no. 4, 2025.
[4] Y. Wang, “Optimizing Distributed Computing Resources with Federated Learning: Task Scheduling and Communication Efficiency,” Journal of Computer Technology and Software, vol. 4, no. 3, 2025.
[5] T. An, W. Huang, D. Xu, Q. He, J. Hu and Y. Lou, “A Deep Learning Framework for Boundary-Aware Semantic Segmentation,” arXiv preprint arXiv:2503.22050, 2025.
[6] F. Guo, X. Wu, L. Zhang, H. Liu and A. Kai, “A Self-Supervised Vision Transformer Approach for Dermatological Image Analysis,” Journal of Computer Science and Software Applications, vol. 5, no. 4, 2025.
[7] X. Wang, “Medical Entity-Driven Analysis of Insurance Claims Using a Multimodal Transformer Model,” Journal of Computer Technology and Software, vol. 4, no. 3, 2025.
[8] X. Wang, G. Liu, B. Zhu, J. He, H. Zheng and H. Zhang, “Pre-trained Language Models and Few-shot Learning for Medical Entity Extraction,” arXiv preprint arXiv:2504.04385, 2025.
[9] Kai, L. Zhu and J. Gong, “Efficient Compression of Large Language Models with Distillation and Fine-Tuning,” Journal of Computer Science and Software Applications, vol. 3, no. 4, pp. 30-38, 2023.
[10] Z. Yu, S. Wang, N. Jiang, W. Huang, X. Han and J. Du, “Improving Harmful Text Detection with Joint Retrieval and External Knowledge,” arXiv preprint arXiv:2504.02310, 2025.
[11] Y. Deng, “A Reinforcement Learning Approach to Traffic Scheduling in Complex Data Center Topologies,” Journal of Computer Technology and Software, vol. 4, no. 3, 2025.
[12] S. Duan, “Human-Computer Interaction in Smart Devices: Leveraging Sentiment Analysis and Knowledge Graphs for Personalized User Experiences,” Proceedings of the 2024 4th International Conference on Electronic Information Engineering and Computer Communication (EIECC), pp. 1294-1298, 2024.
[13] J. Zhan, “Single-Device Human Activity Recognition Based on Spatiotemporal Feature Learning Networks,” Transactions on Computational and Scientific Methods, vol. 5, no. 3, 2025.
[14] W. C. Kang and J. McAuley, “Self-Attentive Sequential Recommendation,” Proceedings of the 2018 IEEE International Conference on Data Mining (ICDM), pp. 197-206, Nov. 2018.
[15] K. Zhou, H. Wang, W. X. Zhao, Y. Zhu, S. Wang, F. Zhang, et al., “S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization,” Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 1893-1902, Oct. 2020.
[16] S. Chakraborty, A. R. Gosthipaty and S. Paul, “G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling,” Proceedings of the 2020 International Conference on Data Mining Workshops (ICDMW), pp. 912-916, Nov. 2020.
[17] Z. Wei, N. Wu, F. Li, K. Wang and W. Zhang, “MoCo4SRec: A Momentum Contrastive Learning Framework for Sequential Recommendation,” Expert Systems with Applications, vol. 223, p. 119911, 2023.
[18] X. Li, Y. Peng, X. Sun, Y. Duan, Z. Fang and T. Tang, “Unsupervised Detection of Fraudulent Transactions in E-commerce Using Contrastive Learning,” arXiv preprint arXiv:2503.18841, 2025.
[19] Y. Liang, L. Dai, S. Shi, M. Dai, J. Du and H. Wang, “Contrastive and Variational Approaches in Self-Supervised Learning for Complex Data Mining,” arXiv preprint arXiv:2504.04032, 2025.
[20] A. Liang, “Personalized Multimodal Recommendations Framework Using Contrastive Learning,” Transactions on Computational and Scientific Methods, vol. 4, no. 11, 2024.
[21] W. Huang, J. Zhan, Y. Sun, X. Han, T. An and N. Jiang, “Context-Aware Adaptive Sampling for Intelligent Data Acquisition Systems Using DQN,” arXiv preprint arXiv:2504.09344, 2025.
[22] L. Zhu, “Deep Learning for Cross-Domain Recommendation with Spatial-Channel Attention,” Journal of Computer Science and Software Applications, vol. 5, no. 4, 2025.
[23] Y. Wang, Z. Fang, Y. Deng, L. Zhu, Y. Duan and Y. Peng, “Revisiting LoRA: A Smarter Low-Rank Approach for Efficient Model Adaptation,” 2025.
[24] R. Adams and F. Green, “Contrastive Learning for Cross-Domain Recommender Systems: A Comparative Analysis,” Data Science and Engineering Journal, vol. 10, no. 2, pp. 88-100, 2023.
[25] S. Clark and E. Nelson, “Enhancing Collaborative Filtering with Contrastive Loss: An Experimental Study,” Journal of Information Retrieval and Data Mining, vol. 17, no. 5, pp. 301-316, 2023.
[26] E. Wright and J. Roberts, “The Role of Contrastive Learning in Improving Recommendation Systems with Sparse Data,” Proceedings of the IEEE Conference on Machine Learning and Data Mining (MLDM), vol. 14, no. 2, pp. 127-135, 2024.




| Method | Precision@10 | Recall@10 | NDCG@10 | HitRate@10 |
|---|---|---|---|---|
| Subsampling [24] | 0.324 | 0.401 | 0.362 | 0.812 |
| Feature Masking [25] | 0.337 | 0.417 | 0.375 | 0.827 |
| Behavioral Noise [26] | 0.318 | 0.396 | 0.354 | 0.798 |
| Sequence Shuffling | 0.330 | 0.409 | 0.367 | 0.816 |
| Ours | 0.352 | 0.435 | 0.388 | 0.841 |
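The baseline rows in the table correspond to standard sequence-level augmentations used to build contrastive views in sequential recommendation. The paper does not specify their implementations, so the sketch below is a minimal, commonly used version of three of them (subsampling, feature masking, and local sequence shuffling); all function names, ratios, and the window size are illustrative assumptions, not the authors' settings.

```python
import random

# Hedged sketch of common contrastive-view augmentations for an item
# interaction sequence (a list of item IDs). Parameters are assumptions.

def subsample(seq, keep_ratio=0.8):
    """Randomly keep a subset of interactions, preserving their order."""
    k = max(1, int(len(seq) * keep_ratio))
    idx = sorted(random.sample(range(len(seq)), k))
    return [seq[i] for i in idx]

def feature_mask(seq, mask_token=0, mask_ratio=0.2):
    """Replace a random fraction of items with a mask token."""
    return [mask_token if random.random() < mask_ratio else x for x in seq]

def sequence_shuffle(seq, window=3):
    """Shuffle items inside one random local window; the rest stays fixed."""
    seq = list(seq)
    if len(seq) <= window:
        random.shuffle(seq)
        return seq
    start = random.randrange(len(seq) - window)
    sub = seq[start:start + window]
    random.shuffle(sub)
    seq[start:start + window] = sub
    return seq
```

In a typical contrastive setup, two independent augmentations of the same user sequence form a positive pair, while augmentations of other users' sequences in the batch serve as negatives.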
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).