Preprint
Article

This version is not peer-reviewed.

A Systematic Review of the Literature About New Emerging Technologies: AI, Ethical Concerns, and Innovations in Social Computing

Submitted: 03 September 2025
Posted: 04 September 2025


Abstract
The rate of innovation in emerging technologies, particularly Artificial Intelligence (AI) and social computing, is reshaping the IT landscape at a revolutionary pace. This systematic literature review examines three major themes in detail: the seamless integration of AI with mainstream IT systems, the ethical considerations raised by such integration, and emerging trends in social computing. The review brings to the forefront that AI integration, in addition to delivering unprecedented gains in performance, functionality, and security, also introduces complex technical and ethical issues that compel a paradigmatic shift toward hybrid models, rigorous data governance, and human-centered design. Further, advances in social computing are not merely facilitating human-to-human connection but are tapping into collective intelligence and transforming the very essence of digital sociality. The paper underscores the interdependence of these fields and argues that IT innovation succeeds only when developed on an integrated foundation where invention and judicious application coexist with ongoing optimization and a close understanding of human–computer interaction. Building on this extensive literature review, the paper identifies new concepts and directions for future research, aiming to inform future inquiry and applications in the fast-evolving IT industry.
Keywords: 

1. Introduction to Research Methodology

The computer age is characterized by explosive technological change, and social computing and Artificial Intelligence (AI) are widely regarded as two of the most significant drivers of change in business [1]. They are not incremental technologies but fundamental shifts in how computing systems interact with information, processes, and human behavior [2]. Understanding their complex dynamics, latent benefits, inherent problems, and ethical implications is a strategic challenge for organizations seeking to navigate this new technological frontier. This report systematically analyzes the current literature to build a practical picture of these nascent technologies and their profound impact on the IT environment.
We are in a generation in which Artificial Intelligence (AI) is advancing at a breakneck pace, reshaping industries and societies through capabilities such as autonomous decision-making, predictive analytics, and content generation on a scale never seen before. This revolutionary power comes with ethical issues such as algorithmic bias, privacy invasion, misuse of data, and opaque, black-box decision-making. Governments and institutions face growing demands for accountability and responsible innovation as they attempt to reconcile AI applications with societal values and regulatory requirements. Social computing technologies such as online communities, collaboration platforms, and social media analytics are transforming not only human-to-technology relationships but also human-to-human relationships. They employ AI to interact with users, anticipate trends, and exchange information, yet they also aggravate problems such as the spread of disinformation, surveillance, and the manipulation of social action. These challenges demand interdisciplinary solutions that align technological progress with ethics so that the benefits of AI and social computing are wisely realized.
The overall purpose of this systematic literature review is to present a comprehensive and impartial overview of the emerging role of new technologies: the integration of AI into existing systems, the ethical issues arising from AI applications, and new research in social computing. The review addresses three overarching research questions:
(a) What are the techniques through which organizations incorporate AI capabilities into existing IT infrastructure to enhance performance without compromising security or functionality? What are the integration problems and solutions?
(b) What are the ethical concerns associated with bringing AI technologies into existing systems, and how do organizations ensure that AI integration keeps pace with ethical standards and societal expectations?
(c) What are the emerging trends in social computing that define the interplay between social behavior and computational systems, and how do they shape future computer science research and applications?
To achieve these goals, this report adopts systematic literature review practice through an objective, structured, and transparent process. Such rigor underpins the validity, reliability, and usefulness of the findings, particularly in the fast-moving fields of AI and social computing, where data can be plentiful but of variable quality [3]. A scientific, objective, and reproducible approach ensures that findings are evidence-driven rather than rumour-driven [4]. The value of any new ideas or recommendations brought forth in this report is only as great as the integrity of its methodological framework, which provides a solid foundation for evidence-based decision-making and future research. The systematic approach involves several fundamental steps [5]:
(a) Definition of Research Questions: The first and most crucial step is the clear demarcation of the research questions guiding the entire study [6,7].
(b) Definition of Terminology and Keyword List Development: An exhaustive keyword list is developed for database searching, and the technical terms used in the review are systematically defined [8].
(c) Identification of Databases and Development of Search Queries: Databases covering the computer science literature, i.e., ACM Digital Library, IEEE Xplore, ScienceDirect, SCOPUS, Web of Science, and Google Scholar, are selected. Specific search queries are then developed for each database in order to obtain targeted results [9,10].
(d) Establishment of Inclusion and Exclusion Criteria: Specific criteria are developed to filter the collected literature so that only the studies best fitting the analysis are included [11].
(e) Data Extraction and Synthesis: A defined procedure is followed in extracting and synthesizing data from the included studies, aiming at completeness and objectivity in the analysis [12,13,14].
This systematic methodological approach guarantees that the review is more than a summary of existing work; it is a substantive analysis that responds to the research questions with objectivity and specificity [15].

2. Related Works

The majority of related research has focused on artificial intelligence (AI) integration into information systems, social computing innovation, and ethics. Palani et al. [16] noted that related-works sections provide a structural foundation for situating new research in the context of previous work, making it easy to see how the present research builds upon, conflicts with, or differs from prior studies. Nordström [17] charted the uncertainties of autonomous AI tool use, showing the epistemic and practical issues of adoption.
Within AI integration studies, various systematic reviews and meta-analyses have blended practice and methodological innovation. Li et al. [18] provided an overview of how the automation of meta-analysis has developed in the AI era, with a focus on improving evidence synthesis. Wei et al. [19] conducted a meta-analysis of intelligent irrigation systems, presenting AI applications for decision optimization in context-specific settings.
Ethical issues of AI were described in detail in the emerging literature. Kochupillai et al. [20] illustrated the position of Explainable AI (XAI) in AI ethics by highlighting transparency and accountability. Nannini et al. [21] rigorously examined ethical concerns in XAI studies such as fairness, bias detection, and the interpretability–performance trade-off.
Social computing research also illuminates the social facet of AI adoption. Li et al. [18] discussed Social Learning Theory for online learning communities and how human–computer collaboration creates collective intelligence. Robertson [22] tested GPT-4's role in supporting peer review and documented a moderate but significant effect on scholarly work practices.
Research on the adoption of emerging technology in legacy systems has produced technical and organizational evidence of impact. Marr [23] covered 50 industry AI adoption case studies illustrating strategic integration patterns. Bessant et al. [24] contrasted capability building for innovation with organizational readiness as a success factor. Motwani et al. [25] presented ERP adoption case studies reporting common problems of change management, technical compatibility, and resistance to innovation.
Methodological guidance on systematic reviews in rapidly developing technological areas is also available. Motwani et al. [26], Paré et al. [27], and Simsek et al. [28] wrote on systematicity in the conduct of literature reviews with respect to rigor, reproducibility, and transparency, and provided structured examples of conducting reviews with particular attention to methodological rigor and validity in fast-moving fields. Campanelli and Parreiras [29] established evidence-based review practices as the norm, and Taylor et al. [15] and Pollock et al. [14] described best practice for data extraction and synthesis in evidence-based research. Together, these studies provide an integrating base of socio-technical models, technical case evidence, methodological practice, and ethical theory to guide the present review of innovations in AI integration, ethics, and social computing.

3. Methodology

Literature search, screening, analysis, and synthesis on AI integration, ethical concerns, and social computing innovations were conducted step by step under a structured protocol, following established guidance for rigorous systematic literature reviews by Booth et al. [5], Paré et al. [27], and Campanelli and Parreiras [29] to ensure rigor, reproducibility, and transparency.
Keywords were defined to cover the range of the research subject matter, comprising AI concepts (Artificial Intelligence, Machine Learning, Deep Learning), system integration (AI integration, legacy systems, traditional IT), ethics (ethical AI, fairness, transparency, accountability, privacy), and social computing (social computing, human–computer interaction, collective intelligence). Technical terms were pre-defined according to best practice recommendations [14,15].
Boolean searching in ACM Digital Library, IEEE Xplore, ScienceDirect, SCOPUS, Web of Science, and Google Scholar was conducted to maximize retrieval. Example:
(“Artificial Intelligence” OR “AI”) AND (“integration” OR “legacy systems”) AND (“ethics” OR “ethical considerations”) AND (“social computing”)
Inclusion criteria covered peer-reviewed journal articles, academic textbooks, and conference papers written in English, published within the past decade, and focusing on at least one of the three topics: AI integration, AI ethics, or social computing. Exclusion criteria removed non-peer-reviewed publications, opinion articles, and writings with no apparent relevance to the research focus. Duplicate records and items without available full texts were also removed.
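As an illustration of how such screening can be operationalized, the following minimal Python sketch applies these criteria to retrieved records; the record structure, field names, and thresholds are illustrative assumptions rather than part of the review protocol.

from dataclasses import dataclass

# Illustrative record structure; field names are assumptions for this sketch.
@dataclass
class Record:
    title: str
    year: int
    language: str
    peer_reviewed: bool
    topics: set            # e.g. {"ai_integration", "ai_ethics", "social_computing"}
    full_text_available: bool

RELEVANT_TOPICS = {"ai_integration", "ai_ethics", "social_computing"}

def passes_screening(rec: Record, latest_year: int = 2025) -> bool:
    """Apply the inclusion/exclusion criteria described above (assumed encoding)."""
    within_decade = latest_year - 10 <= rec.year <= latest_year
    return (
        rec.peer_reviewed                       # exclude non-peer-reviewed items
        and rec.language == "en"                # English-language only
        and within_decade                       # published within the past decade
        and bool(rec.topics & RELEVANT_TOPICS)  # at least one focal theme
        and rec.full_text_available             # drop unavailable full texts
    )

def screen(records: list[Record]) -> list[Record]:
    """Deduplicate by normalized title, then apply the screening criteria."""
    seen, kept = set(), []
    for rec in records:
        key = rec.title.strip().lower()
        if key in seen:
            continue                            # remove duplicate records
        seen.add(key)
        if passes_screening(rec):
            kept.append(rec)
    return kept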
A data extraction form enabled consistency, as recommended by Xu et al. [13]. Extracted fields included bibliographic information, research design, AI techniques applied, application domains, integration outcomes, ethical issues addressed, and policy or practice implications.
Thematic analysis, as elaborated by Kiger and Varpio [7], was used to classify findings into performance, functionality, security, ethical, and social-impact themes. Comparative analysis guided by Alvesson and Sandberg's [6] gap-identification procedures was employed to derive convergent themes, contradictions, and knowledge gaps. The synthesis aimed at an objective, evidence-based account of existing research on AI integration, ethical concerns, and social computing innovations.
Table 1. Data Extraction Framework.
Category | Data Extracted
Study Identification | Title, Authors, Year, Source, Country/Region
Research Characteristics | Methodology, AI Techniques/Models, Application Domain
AI Integration Findings | Opportunities, Threats, Technical Challenges, Solutions
Ethical Considerations | Privacy, Fairness/Bias, Transparency, Accountability
Conclusions/Implications | Main Conclusions, Practical Implications, Policy Recommendations
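For consistency of data capture, the extraction form in Table 1 can be encoded as a structured record. The following minimal Python sketch mirrors the table's categories; the class and field names are assumptions chosen for illustration, not an artifact of the original review.

from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    # Study identification
    title: str
    authors: list[str]
    year: int
    source: str
    country_or_region: str
    # Research characteristics
    methodology: str = ""
    ai_techniques: list[str] = field(default_factory=list)
    application_domain: str = ""
    # AI integration findings
    opportunities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)
    technical_challenges: list[str] = field(default_factory=list)
    solutions: list[str] = field(default_factory=list)
    # Ethical considerations
    privacy: str = ""
    fairness_bias: str = ""
    transparency: str = ""
    accountability: str = ""
    # Conclusions / implications
    main_conclusions: str = ""
    practical_implications: str = ""
    policy_recommendations: str = ""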

4. Findings

The systematic literature review accumulated empirical evidence from the literature on applying AI to IT systems in terms of performance, functionality, security, and the complexity of integrating AI into current infrastructures. The included studies comprised experiments, computational simulations, benchmarking studies, and case studies, presenting both qualitative and quantitative evidence of the advantages, risks, and complexities of using AI.
Several studies furnished quantitative evidence of noticeable performance improvements following the implementation of AI in industry and services. AI-assisted customer service agents, for example, were found to answer 13.8% more questions per hour and improve output quality by 1.3% over traditional methods [30]. Generative AI models have also been associated with an overall average gain of 66% in task performance, with even larger effects on high-demand tasks [31].
In predictive maintenance use cases, AI-powered analytics of IoT sensor data helped organizations reduce unplanned downtime by 10–40% and cut maintenance costs by up to 50% [32]. General Motors reportedly saves an estimated USD 20 million per year with 15% fewer unplanned downtime events. The energy sector likewise reduced generator downtime by 30% and avoided significant spending.
However, these improvements carry substantial computational costs. Chen [33] presented a "trilemma" of latency, throughput, and cost, arguing that LLMs with billions of parameters require customized hardware and very large infrastructure expenditure. Cao et al. [34] demonstrated that optimizations such as memcached-tuned memory provisioning and blocked key–value caching could yield 2–7× throughput gains, offsetting some of these resource costs.
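To make the caching idea concrete, the following minimal NumPy sketch stores attention keys and values in fixed-size blocks allocated on demand, which is the general pattern behind blocked (paged) key–value caching; the block size, shapes, and interface are illustrative assumptions and do not reproduce the systems evaluated in [34].

import numpy as np

class BlockedKVCache:
    """Toy blocked key-value cache for one attention head (illustrative only).

    Keys/values are stored in fixed-size blocks allocated on demand, so memory
    grows in block_size steps instead of being reserved up front for the
    maximum sequence length.
    """

    def __init__(self, head_dim: int, block_size: int = 16):
        self.head_dim = head_dim
        self.block_size = block_size
        self.k_blocks, self.v_blocks = [], []    # lists of (block_size, head_dim) arrays
        self.length = 0                          # number of tokens cached so far

    def append(self, k: np.ndarray, v: np.ndarray) -> None:
        """Add the key/value vectors of one new token."""
        offset = self.length % self.block_size
        if offset == 0:                          # current block is full (or none exists yet)
            self.k_blocks.append(np.zeros((self.block_size, self.head_dim)))
            self.v_blocks.append(np.zeros((self.block_size, self.head_dim)))
        self.k_blocks[-1][offset] = k
        self.v_blocks[-1][offset] = v
        self.length += 1

    def attend(self, q: np.ndarray) -> np.ndarray:
        """Softmax attention over all cached tokens for a single query vector."""
        keys = np.concatenate(self.k_blocks)[: self.length]
        values = np.concatenate(self.v_blocks)[: self.length]
        scores = keys @ q / np.sqrt(self.head_dim)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ values

# Usage: the cache grows block by block as tokens are generated.
cache = BlockedKVCache(head_dim=64, block_size=16)
for _ in range(40):
    cache.append(np.random.randn(64), np.random.randn(64))
out = cache.attend(np.random.randn(64))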
Empirical findings consistently attest to improved service delivery with AI. AI-powered chatbots and virtual agents were found to respond to customer inquiries 4.2 times faster than traditional methods, lowering operational costs by 31% and raising customer satisfaction by 28% [35]. AI assistants achieved 94% intent-recognition accuracy and automatically handled up to 80% of routine questions, contributing to a 27% average decline in support queries and customer satisfaction scores as high as 92% after the adoption of AI-driven live chat.
Where software development is concerned, the impact of AI code generators is mixed. A 2025 study with veteran open-source programmers found that small, one-off tasks took longer to complete with AI assistance, while other experiments showed substantial productivity gains on large and complex work. This heterogeneity indicates that the productivity impact of such tools depends on task complexity, quality requirements, and the AI tools used.
AI-enabled security capabilities have been shown to improve detection and response dramatically. Experiments report incident response times reduced by up to 96%, zero-day threat detection rates improved by 70%, and 90% of false positives eliminated, leaving genuine threats for human analysts to handle. AI-based anti-phishing technology reduced phishing and insider threats by as much as 86% and 45%, respectively. AI tools detected advanced persistent threats (APTs) up to five times sooner, and behavioral analytics thwarted 73% of potential cyberattacks. Predictive capabilities enabled some platforms to foresee 85% of data breaches before they happened.
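As a hedged illustration of the kind of behavioral analytics reported above, and not of the specific systems reviewed, the following Python sketch flags anomalous session activity with an Isolation Forest; the synthetic feature set and contamination rate are assumptions for this example.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic session features: [logins_per_hour, bytes_out_mb, failed_auth, distinct_hosts]
normal = rng.normal(loc=[4, 20, 0.5, 3], scale=[1, 5, 0.5, 1], size=(500, 4))
suspicious = rng.normal(loc=[40, 400, 8, 25], scale=[5, 50, 2, 5], size=(5, 4))

# Fit on ordinary activity, then score new sessions: -1 = anomalous, 1 = normal.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = model.predict(np.vstack([normal[:3], suspicious]))
print(labels)   # expected: mostly 1 for the normal rows, -1 for the suspicious ones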
Alongside these strengths, the reviewed literature also documents vulnerabilities of AI solutions. Experiments validate that adversarial attacks can degrade generative AI model performance by up to 80%, with targeted attacks succeeding 70–90% of the time [36,37]. Data poisoning attacks were 85% effective and produced biased outputs, and physical attacks on AI models were more than 80% effective. Model inversion attacks have already been used to extract sensitive training data, and prompt injection attacks to manipulate generative model outputs.
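To illustrate the adversarial mechanism itself, rather than the specific attacks benchmarked in [36,37], the following minimal PyTorch sketch applies the fast gradient sign method (FGSM) to a toy classifier; the model architecture and perturbation budget are assumptions.

import torch
import torch.nn as nn

# Toy classifier standing in for a deployed model (assumption for this sketch).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """Return x perturbed in the direction that maximally increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

x = torch.randn(8, 20)                  # a small batch of inputs
y = torch.randint(0, 2, (8,))           # their labels
x_adv = fgsm(model, x, y, eps=0.1)      # slightly shifted inputs
# Fraction of predictions flipped by the perturbation:
print((model(x).argmax(1) != model(x_adv).argmax(1)).float().mean())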
API vulnerabilities were reported on a massive scale. Within the past two years, 57% of organizations were hit by API-based attacks and 73% experienced more than one attack. Remarkably, 98% of attempted API break-ins targeted externally exposed endpoints, which were exploited to a great degree using valid credentials. As generative AI was integrated through API pipelines, the attack surface expanded, with 65% of surveyed organizations reporting increased API-related security threats.
Several studies listed biased and poor-quality data among the biggest causes of AI system failure. A meta-review of 127 peer-reviewed articles estimated that 68% of AI deployment failures stem from poor-quality data and that 43% of successfully deployed systems exhibit high algorithmic bias in real-world use. Models trained on noisy or low-quality data performed poorly even with large data volumes, underscoring the need for both quantity and quality in AI training data sets.
Scalability limitations also emerged. Legacy architectures, in most cases not optimized for AI, cannot scale to parallel processing demands, leading to latency, data silos, and network saturation. AI-first architectures delivered 2–5× improvements in throughput and latency compared with bolted-on (retrofitted) solutions. Long-term tests revealed "catastrophic forgetting" of learned information as AI systems were trained on new data. Higher forgetting rates were accompanied by lower stability and an imbalance between fast adaptation and long-term memory, a condition attributed to the "stability–plasticity dilemma" in neural network architectures.
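A toy illustration of the forgetting effect, under assumptions far simpler than the reviewed experimental setups: a single incremental model is trained on one synthetic task, then on a second, and its accuracy on the first task is re-measured afterwards.

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

# Two synthetic "tasks" with different input distributions (assumptions for this sketch).
X1, y1 = make_classification(n_samples=2000, n_features=20, random_state=1)
X2, y2 = make_classification(n_samples=2000, n_features=20, random_state=2)

clf = SGDClassifier(loss="log_loss", random_state=0)

# Phase 1: train on task 1 only.
for _ in range(20):
    clf.partial_fit(X1, y1, classes=np.array([0, 1]))
acc_task1_before = clf.score(X1, y1)

# Phase 2: continue training on task 2 only, with no rehearsal of task 1.
for _ in range(20):
    clf.partial_fit(X2, y2)
acc_task1_after = clf.score(X1, y1)

print(f"Task 1 accuracy before: {acc_task1_before:.2f}, after task 2 training: {acc_task1_after:.2f}")
# Any drop on task 1 illustrates the stability-plasticity trade-off in miniature.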
Table 2. Summary of Findings from Reviewed Literature on AI, Ethical Concerns, and Social Computing.
Category | Key Findings | Empirical Evidence/Statistics | Sources
System Performance & Efficiency Impact | AI improves productivity and operational effectiveness in industrial and services applications. | AI agents handle 13.8% more queries/hour; 66% average performance gain on difficult tasks; predictive maintenance cuts unplanned downtime by 10–40% and maintenance costs by up to 50%. | [30,31,32]
System Performance & Efficiency Impact | AI implementation in manufacturing and energy leads to substantial cost reduction. | GM reportedly saves USD 20M/year; power generators had 30% fewer outages. | [32]
Functionality Enhancements | High computational resource usage creates a latency–throughput–cost "trilemma"; specialized hardware is required. | Throughput boosted 2–7× by memory/cache optimization. | [33,34]
Functionality Enhancements | AI-powered support significantly accelerates customer service speed and satisfaction. | Response time 4.2× quicker; 31% cost saving; 28% increase in satisfaction; 94% intent recognition rate; 27% decrease in routine questions. | [35]
Functionality Enhancements | Impact on software development varies by type of task. | Small tasks: ~19% longer to perform; large/computation-heavy tasks: significant improvement. | Data aggregation from reviewed studies
Cybersecurity Performance | AI-powered solutions accelerate threat detection, prevention, and incident response. | Response time reduced by as much as 96%; zero-day threat detection improved by 70%; false positives reduced by 90%; phishing by 86%; insider threats by 45%; APTs detected as much as 5× sooner; behavioral analytics blocked as much as 73% of cyberattacks; 85% of breaches predicted in advance. | Data aggregation from reviewed studies
Cybersecurity Performance | AI models are vulnerable to adversarial, poisoning, and physical attacks. | Performance degraded by as much as 80%; targeted attacks 70–90% successful; data poisoning 85% successful; physical attacks > 80% effective. | [36,37]
Cybersecurity Performance | API attacks increase with AI rollouts. | 57% of organizations were compromised through APIs; 98% of attacks were against open endpoints; 65% reported an expanded attack surface from exposed AI pipelines. | Data aggregation from reviewed studies
Merging AI with Legacy Systems | Data quality and bias are the primary causes of failed AI projects. | 68% deployment failure with low-quality data; 43% of deployed systems showed high bias. | Data aggregation from reviewed studies
Merging AI with Legacy Systems | Scale limitations impede AI performance on legacy infrastructure. | AI-native architectures delivered 2–5× higher throughput and lower latency than retrofitted solutions. | Data aggregation from reviewed studies
Merging AI with Legacy Systems | Catastrophic forgetting impacts long-term AI performance. | Higher forgetting rates coincided with lower stability and retention, demonstrating the stability–plasticity trade-off for neural networks. | Data aggregation from reviewed studies

5. Discussion and Conclusions

The overall review of the literature establishes the worldwide and groundbreaking impact of Social Computing and Artificial Intelligence (AI) on contemporary Information Technology (IT) systems. The academic papers gathered in this review affirm that implementing AI contributes strongly to the capability, performance, and cyber-attack resilience of systems. Reported gains include measurable improvements in productivity, better decision quality, predictive maintenance, and advanced threat intelligence. Social computing was found to facilitate digital interaction, collaborative knowledge creation, and the development of social behavior in virtual settings.
Alongside these beneficial impacts, various technical, functional, and ethical problems are reported in the literature. Frequently cited impediments include low data quality, scalability issues in existing infrastructure, excessively high implementation costs, and the "black box" character of most AI models. The stability–plasticity trade-off of continuous learning, adversarial vulnerability, and the persistence of algorithmic bias are most frequently cited as critical issues. In several studies, poor governance practices, weak ethical regulation, and inadequate system design processes were associated with increased vulnerability to data poisoning, model inversion, and other security threats such as prompt injection and deepfakes.
Ethically, the literature appeals for fairness, transparency, accountability, privacy, and security in the design and deployment of AI systems. The influence of social computing on interpersonal communication and collective behavior has also been noted as an area of potential benefit as well as dangerous social side effects. A recurring thread is the call for multidisciplinary approaches that unify innovation with human-centered design, resilient governance, and sustainable long-term IT ecosystems.
The convergence of conclusions indicates that AI implementation performs best when supported by hybrid architectural frameworks, robust data stewardship practices, and standardized human–computer interaction frameworks. Furthermore, infrastructure capacity, capability shortages, and long-term maintenance underscore the importance of collective investment in knowledge stewardship, infrastructure modernization, and workforce training. Analogues for these problems can be found in studies of computational optimization techniques, e.g., simulated annealing and adaptive large neighborhood search algorithms applied to problems such as the circle bin packing problem [38,39,40]. These all illustrate the need for adaptive, computation-efficient algorithms that trade solution quality against runtime limits, much like the need to optimize AI systems under resource constraints [40,41].
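By way of analogy, the following minimal Python sketch shows the generic simulated annealing loop referenced above, applied to a toy one-dimensional objective under a fixed iteration budget; the objective function, cooling schedule, and parameters are assumptions and are far simpler than the circle bin packing solvers in [38,39,40].

import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Generic minimization loop trading solution quality against a fixed runtime budget."""
    x, best = x0, x0
    fx, fbest = objective(x0), objective(x0)
    t = t0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)       # local move
        fc = objective(candidate)
        # Accept improvements always; accept worse moves with temperature-dependent probability.
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                                      # cool down: fewer uphill moves over time
    return best, fbest

# Toy multimodal objective; the iteration budget caps runtime, mirroring the
# computation-versus-quality trade-off discussed above.
f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)
print(simulated_annealing(f, x0=10.0))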
Several research priority areas for the future are identified in the literature reviewed. Among the recurrent needs is the creation of Explainable AI (XAI) methods that can be applied in complicated, operational environments. Such research aims to make AI decision-making transparent, comprehensible, and accountable, an especially significant need in hybrid IT environments.
Another often-quoted priority is building robust AI security systems. With the ever-evolving nature of cyber-attacks—adversarial examples, model-stealing attacks, and prompt injection—there exists an urgent requirement for dynamic, self-healing security models that can learn to counter evolving threats in real-time.
The literature also calls for the development of sound ethical and regulatory frameworks to counteract bias, protect privacy, and maintain public trust in AI systems. The full social implications of frontier social computing are still unknown, with most authors suggesting longitudinal studies to probe its impact on human behavior, collective intelligence, and social structure.
At the operational level, research is needed to develop cost-optimization tools and resource management techniques that can effectively manage high-workload AI in hybrid and multi-cloud environments. The literature consistently points to a shortage of AI implementation and governance expertise. Best-practice frameworks for upskilling and reskilling IT professionals should be developed and piloted so that employees' skills keep pace with technological innovation. Addressing these research needs would support careful AI adoption and ongoing technological innovation in the emerging IT landscape.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI Artificial Intelligence
IT Information Technology
ACM Association for Computing Machinery
IEEE Institute of Electrical and Electronics Engineers
SCOPUS Elsevier’s abstract and citation database
XAI Explainable AI
LLMs Large Language Models
GPUs Graphics Processing Units
TPUs Tensor Processing Units
KV Key-Value (referring to caching)
NLP Natural Language Processing
IoT Internet of Things
APTs Advanced Persistent Threats
APIs Application Programming Interfaces

References

  1. Turban E, Pollard C, Wood G. Information Technology for Management: Driving Digital Transformation to Increase Local and Global Performance, Growth and Sustainability. John Wiley & Sons; 2021.
  2. Pantic M, Pentland A, Nijholt A, Huang T. Human computing and machine understanding of human behavior: a survey. In: Proceedings of the 8th International Conference on Multimodal Interfaces. ICMI ’06. Association for Computing Machinery; 2006:239-248. [CrossRef]
  3. Collins C, Dennehy D, Conboy K, Mikalef P. Artificial intelligence in information systems research: A systematic literature review and research agenda. Int J Inf Manag. 2021;60:102383. [CrossRef]
  4. Mohamed A, Mwakondo F, Tole K, Mgala M. Optimized Machine Learning Models for Poverty Detection: A Scientific Review of Multidimensional Approaches. Int J Res Sci Innov. 2025;XII(III):1081-1090. [CrossRef]
  5. Booth A, Martyn-St James M, Clowes M, Sutton A. Systematic Approaches to a Successful Literature Review. Published online 2021:1-100.
  6. Alvesson M, Sandberg J. Constructing Research Questions: Doing Interesting Research. Published online 2024:1-100.
  7. Kiger ME, Varpio L. Thematic analysis of qualitative data: AMEE Guide No. 131. Med Teach. 2020;42(8):846-854. [CrossRef]
  8. Marchetti D, Scardovi R. Artificial intelligence and human resources: innovative trends and main impacts. Published online December 11, 2024. Accessed July 28, 2025. https://www.politesi.polimi.it/handle/10589/231575.
  9. Msweli NT, Mawela T, Twinomurinzi H. Data science education – a scoping review. Published online 2023. Accessed July 28, 2025. http://hdl.handle.net/2263/95326.
  10. Salatino A, Aggarwal T, Mannocci A, Osborne F, Motta E. A survey of knowledge organization systems of research fields: Resources and challenges. Quant Sci Stud. 2025;6:567-610. [CrossRef]
  11. Drake S, Reid J. Rethinking Systematic Literature Reviews as the Gold Standard for Interdisciplinary Topics. Educ Think. 2021;1(1):27-42.
  12. Chakabwata W. Grounded Theory for a Doctoral Thesis: Retrospective and Prospective Insights. In: Qualitative Research Methods for Dissertation Research. IGI Global Scientific Publishing; 2025:251-278. [CrossRef]
  13. Xu C, Yu T, Furuya-Kanamori L, et al. Validity of data extraction in evidence synthesis practice of adverse events: reproducibility study. BMJ. 2022;377:e069155. [CrossRef]
  14. Pollock D, Peters MDJ, Khalil H, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. 2023;21(3):520. [CrossRef]
  15. Taylor KS, Mahtani KR, Aronson JK. Summarising good practice guidelines for data extraction for systematic reviews and meta-analysis. BMJ Evid-Based Med. 2021;26(3):88-90. [CrossRef]
  16. Palani G, Arputhalatha A, Kannan K, et al. Current Trends in the Application of Nanomaterials for the Removal of Pollutants from Industrial Wastewater Treatment—A Review. Molecules. 2021;26(9):2799. [CrossRef]
  17. Nordström M. AI under great uncertainty: implications and decision strategies for public policy. AI Soc. 2022;37(4):1703-1714. [CrossRef]
  18. Li L, Mathrani A, Susnjak T. Transforming Evidence Synthesis: A Systematic Review of the Evolution of Automated Meta-Analysis in the Age of AI. Published online April 28, 2025. [CrossRef]
  19. Wei H, Xu W, Kang B, et al. Irrigation with Artificial Intelligence: Problems, Premises, Promises. Hum-Centric Intell Syst. 2024;4(2):187-205. [CrossRef]
  20. Kochupillai M, Kahl M, Schmitt M, Taubenböck H, Zhu XX. Earth Observation and Artificial Intelligence: Understanding emerging ethical issues and opportunities. IEEE Geosci Remote Sens Mag. 2022;10(4):90-124. [CrossRef]
  21. Nannini M, Scheiber R, Moreira A. Estimation of the Minimum Number of Tracks for SAR Tomography. IEEE Trans Geosci Remote Sens. 2009;47(2):531-543. [CrossRef]
  22. Robertson Z. GPT4 is Slightly Helpful for Peer-Review Assistance: A Pilot Study. Published online June 16, 2023. [CrossRef]
  23. Marr B. Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution. John Wiley & Sons; 2020.
  24. Bessant J, Lamming R, Noke H, Phillips W. Managing innovation beyond the steady state. Technovation. 2005;25(12):1366-1376. [CrossRef]
  25. Motwani J, Mirchandani D, Madan M, Gunasekaran A. Successful implementation of ERP projects: Evidence from two case studies. Int J Prod Econ. 2002;75(1):83-96. [CrossRef]
  26. Motwani J, Subramanian R, Gopalakrishna P. Critical factors for successful ERP implementation: Exploratory findings from four case studies. Comput Ind. 2005;56(6):529-544. [CrossRef]
  27. Paré G, Tate M, Johnstone D, Kitsiou S. Contextualizing the twin concepts of systematicity and transparency in information systems literature reviews. Eur J Inf Syst. 2016;25(6):493-508. [CrossRef]
  28. Simsek Z, Fox B, Heavey C. Systematicity in Organizational Research Literature Reviews: A Framework and Assessment. Organ Res Methods. 2023;26(2):292-321. [CrossRef]
  29. Campanelli AS, Parreiras FS. Agile methods tailoring – A systematic literature review. J Syst Softw. 2015;110:85-100. [CrossRef]
  30. The effectiveness of the combined problem-based learning (PBL) and case-based learning (CBL) teaching method in the clinical practical teaching of thyroid disease. BMC Medical Education. Accessed June 29, 2025. [CrossRef]
  31. Ahmadi M, Kheslat NK, Akintomide A. Generative AI Impact on Labor Market: Analyzing ChatGPT’s Demand in Job Advertisements. Published online December 9, 2024. [CrossRef]
  32. Song X, Xu L, Peng C, et al. Enhanced creativity at the cost of increased stress? The impact of generative AI on serious games for creativity stimulation. Behav Inf Technol. 0(0):1-25. [CrossRef]
  33. Chen D, Youssef A, Pendse R, et al. Transforming the Hybrid Cloud for Emerging AI Workloads. Published online May 22, 2025. [CrossRef]
  34. Cao Y, Li S, Liu Y, et al. A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT. Published online March 7, 2023. [CrossRef]
  35. Rane N, Choudhary S, Rane J. Artificial Intelligence (AI), Internet of Things (IoT), and blockchain-powered chatbots for improved customer satisfaction, experience, and loyalty. Published online May 29, 2024. [CrossRef]
  36. Kalligeros P. Generative Adversarial AI. Master's Thesis. University of Piraeus; 2025. Accessed August 6, 2025. https://dione.lib.unipi.gr/xmlui/handle/unipi/17814.
  37. Zhang C, Yu S, Tian Z, Yu JJQ. Generative Adversarial Networks: A Survey on Attack and Defense Perspective. ACM Comput Surv. 2023;56(4):91:1-91:35. [CrossRef]
  38. Yuan Y, Tole K, Ni F, He K, Xiong Z, Liu J. Adaptive simulated annealing with greedy search for the circle bin packing problem. Comput Oper Res. 2022;144:105826. [CrossRef]
  39. He K, Tole K, Ni F, Yuan Y, Liao L. Adaptive large neighborhood search for solving the circle bin packing problem. Comput Oper Res. 2021;127:105140. [CrossRef]
  40. Tole K, Moqa R, Zheng J, He K. A simulated annealing approach for the circle bin packing problem with rectangular items. Comput Ind Eng. 2023;176:109004. [CrossRef]
  41. He K, Tole K, Ni F, Yuan Y, Liao L. Adaptive Large Neighborhood Search for Circle Bin Packing Problem. Published online January 20, 2020. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.