Submitted:
21 March 2025
Posted:
21 March 2025
Abstract
With the rise of e-commerce systems and web applications, recommendation systems have become integral to daily tasks, providing personalized suggestions for the task at hand. While various machine learning algorithms have been developed for recommendation tasks, existing systems still face limitations. This research focuses on advancing context-aware recommendation systems by leveraging the capabilities of Large Language Models (LLMs) in conjunction with real-time data. Specifically, it integrates existing real-time data APIs with LLMs to enhance the recommendation systems already embedded in smart societies. Experimental results demonstrate that the hybrid approach significantly improves user experience and recommendation quality, yielding more relevant and dynamic suggestions.
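The hybrid approach described in the abstract — grounding an LLM prompt in live API data so suggestions stay current — can be sketched minimally as follows. All names, venues, and the places-API stub are hypothetical illustrations, not the paper's actual implementation; in the real system the stub would be replaced by a live data API and the assembled prompt sent to an LLM.

```python
def fetch_realtime_places(location):
    """Stand-in for a live places API; a real system would query
    current open/closed status and ratings at request time."""
    return [
        {"name": "Cafe Uno", "rating": 4.6, "open_now": True},
        {"name": "Bistro Due", "rating": 4.1, "open_now": False},
    ]

def build_prompt(query, location, places):
    # Inject only currently-open venues, so the LLM's recommendation
    # is constrained by real-time context rather than stale training data.
    open_places = [p for p in places if p["open_now"]]
    context = "; ".join(f'{p["name"]} (rating {p["rating"]})' for p in open_places)
    return (
        f"User query: {query}\n"
        f"Location: {location}\n"
        f"Currently open nearby: {context}\n"
        "Recommend the best match and explain briefly."
    )

prompt = build_prompt(
    "affordable family-friendly dinner", "Downtown",
    fetch_realtime_places("Downtown"),
)
```

The key design point is that the LLM never answers from its parameters alone: every request is grounded in freshly fetched data, which is what makes the recommendations dynamic.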
Keywords:
1. Introduction
2. Related Work
2.1. LLM Effectiveness in API Interactions
2.2. Fine-Tuning for Enhanced Performance
2.3. LLMs in Recommendation Systems
2.4. Challenges in LLM Applications
3. Proposed System: GPT Restaurant Recommender
3.1. Framework Design
3.2. Architecture Design
3.3. Process Flow
3.4. System User Interface
3.5. Technical Features and System Architecture
3.5.1. Technology Stack
3.5.2. Data Flow and Processing
3.5.3. Scalability and Performance
3.5.4. Security and Privacy Measures
3.6. Detailed Description of Personalized Recommendations
3.7. Ethical and Privacy Considerations
4. Experimentation and Evaluation
4.1. Evaluation Criteria
4.2. Comparative Discussion
4.3. Comparative Analysis with Major AI Services
5. Results Evaluation and Discussion
6. Conclusion and Future Work
References
- Johnsen, M. Developing AI Applications with Large Language Models; Maria Johnsen, 2025.
- Zhao, Z.; Fan, W.; Li, J.; Liu, Y.; Mei, X.; Wang, Y.; Wen, Z.; Wang, F.; Zhao, X.; Tang, J.; et al. Recommender systems in the era of large language models (llms). IEEE Transactions on Knowledge and Data Engineering 2024. [Google Scholar] [CrossRef]
- Li, J.; Xu, J.; Huang, S.; Chen, Y.; Li, W.; Liu, J.; Lian, Y.; Pan, J.; Ding, L.; Zhou, H.; et al. Large language model inference acceleration: A comprehensive hardware perspective. arXiv 2024, arXiv:2410.04466. [Google Scholar]
- Gokul, A. LLMs and AI: Understanding Its Reach and Impact. Preprints 2023. [Google Scholar] [CrossRef]
- Li, L.; Zhang, Y.; Liu, D.; Chen, L. Large Language Models for Generative Recommendation: A Survey and Visionary Discussions. arXiv 2024, arXiv:cs.IR/2309.01157. [Google Scholar]
- Silva, B.; Tesfagiorgis, Y.G. Large language models as an interface to interact with API tools in natural language, 2023.
- Spinellis, D. Pair programming with generative AI, 2024.
- Patil, S.G.; Zhang, T.; Wang, X.; Gonzalez, J.E. Gorilla: Large language model connected with massive APIs. arXiv 2023, arXiv:2305.15334. [Google Scholar]
- Luo, D.; Zhang, C.; Zhang, Y.; Li, H. CrossTune: Black-box few-shot classification with label enhancement. arXiv 2024, arXiv:2403.12468. [Google Scholar]
- Fan, W.; Zhao, Z.; Li, J.; Liu, Y.; Mei, X.; Wang, Y.; Tang, J.; Li, Q. Recommender Systems in the Era of Large Language Models (LLMs). IEEE Transactions on Knowledge and Data Engineering 2023. [Google Scholar] [CrossRef]
- Roumeliotis, K.I.; Tselikas, N.D.; Nasiopoulos, D.K. Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems. Software 2024, 3, 62–80. [Google Scholar] [CrossRef]
- Mao, J.; Zou, D.; Sheng, L.; Liu, S.; Gao, C.; Wang, Y. Identify critical nodes in complex networks with large language models. arXiv 2024, arXiv:2403.03962. [Google Scholar]
- Lin, J.; Dai, X.; Xi, Y.; Liu, W.; Chen, B.; Zhang, H.; Liu, Y.; Wu, C.; Li, X.; Zhu, C.; et al. How can recommender systems benefit from large language models: A survey. ACM Transactions on Information Systems 2023. [Google Scholar] [CrossRef]
- Hu, S.; Tu, Y.; Han, X.; He, C.; Cui, G.; Long, X. MiniCPM: Unveiling the potential of small language models with scalable training strategies. arXiv 2024, arXiv:2404.06395. [Google Scholar]
- Corecco, N.; Piatti, G.; Lanzendörfer, L.A.; Fan, F.X.; Wattenhofer, R. An LLM-based Recommender System Environment. CoRR 2024, abs/2406.01631.
- shadcn. shadcn/ui: Modern UI Components for React. shadcn/ui Official Documentation, 2025. Available online: https://ui.shadcn.com/ (accessed on 17 January 2025).
- Optimizing: Third Party Libraries | Next.js — nextjs.org. Available online: https://nextjs.org/docs/app/building-your-application/optimizing/third-party-libraries (accessed on 17 January 2025).
- Vercel AI SDK. OpenAI — SDK Documentation. 2025. Available online: https://sdk.vercel.ai/providers/ai-sdk-providers/openai (accessed on 17 January 2025).
- Next.js by Vercel - The React Framework — nextjs.org. Available online: https://nextjs.org (accessed on 15 January 2025).
- Vercel: Build and deploy the best web experiences with the Frontend Cloud – Vercel — vercel.com. Available online: https://vercel.com/ (accessed on 15 January 2025).
- Raza, S.; Rahman, M.; Kamawal, S.; Toroghi, A.; Raval, A.; Navah, F.; Kazemeini, A. A comprehensive review of recommender systems: Transitioning from theory to practice. arXiv 2024, arXiv:2407.13699. [Google Scholar]
- Roy, D.; Dutta, M. A systematic review and research perspective on recommender systems. Journal of Big Data 2022, 9, 59. [Google Scholar] [CrossRef]
- Gope, J.; Jain, S.K. A survey on solving cold start problem in recommender systems. In Proceedings of the 2017 IEEE International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, India, 21 December 2017; pp. 133–138. [Google Scholar]

| Author(s) | Key Findings |
|---|---|
| Silva & Tesfagiorgis (2023) [6] | Effective prompt designs & fine-tuning methods. |
| Patil et al. (2023) [8] | Fine-tuned LLMs outperform GPT-4 in API interactions. |
| Luo et al. (2024) [9] | Fine-tuning improves LLM response times. |
| Roumeliotis et al. (2024) [11] | LLM-based unsupervised clustering enhances recommendation precision. |
| Spinellis (2024) [7] | Linguistic structures are crucial for precise API calls. |
| Mao et al. (2024) [12] | LLMs handle diverse inputs efficiently but risk stability. |
| Lin et al. (2023) [13] | Examined LLMs in recommendation pipelines, focusing on efficiency and ethical considerations. |
| Hu et al. (2024) [14] | Scalable training strategies might compromise LLM efficiency. |
| Corecco et al. (2024) [15] | Integrated LLMs to improve RL-based recommendations. |
| Fan et al. (2023) [10] | Emphasized fine-tuning of prompt design to leverage LLM effectiveness for context-aware recommendations. |
| Metric(s) | Criteria |
|---|---|
| Handling of Complex Queries | Recommendations for varied constraints in a single query, such as "affordable" or "family-friendly." |
| Location-Based Results | Recommendations based on the user's specified location, i.e., suburb, city, or street. |
| User Satisfaction | User satisfaction rating between 4.0 and 5.0. |
| System Response Time | Response time of under 3.0 seconds. |
| Query Type | Outcome | Success Rate (%) |
|---|---|---|
| Simple Query | Pass | 88.0 |
| Simple Query | Fail | 12.0 |
| Complex Query | Pass | 84.0 |
| Complex Query | Fail | 16.0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).