Computer Science and Mathematics


Review
Computer Science and Mathematics
Computer Science

Mehul Kumar,

Aryan Raj,

Sandeep Kumar

Abstract: Career counselling plays a critical role in guiding students through academic choices and professional pathways. Traditional counselling approaches, though valuable, often lack the ability to scale and personalize recommendations based on the unique attributes of each student. The integration of Artificial Intelligence (AI) and Machine Learning (ML) has introduced a transformative shift in career guidance through the development of intelligent, data-driven counselling systems. These systems leverage a combination of recommender algorithms, clustering techniques, and Natural Language Processing (NLP) to provide actionable, personalized career suggestions. Recommender systems are central to this transformation, employing collaborative filtering, content-based filtering, and hybrid models to match students with relevant career paths. Collaborative filtering draws on the preferences of similar users, while content-based methods utilize academic performance and skillsets. Hybrid models mitigate challenges such as data sparsity and cold-start scenarios, offering improved accuracy and adaptability. Clustering algorithms, such as K-Means and hierarchical clustering, enhance the system’s capability by grouping students with similar profiles, revealing hidden trends in performance and interest patterns that may inform targeted recommendations. NLP techniques further enrich the counselling process by analyzing unstructured inputs such as essays, feedback, and surveys. Sentiment analysis, keyword extraction, and topic modelling are employed to infer students’ latent interests and strengths. Despite its potential, AI-driven career counselling faces challenges related to data privacy, algorithmic bias, scalability, and digital equity. Addressing these requires ethical algorithm design, transparent decision-making, and collaborative policy frameworks.
With the inclusion of real-time labour market data, gamification, and explainable AI, future systems can offer more equitable, adaptive, and insightful career guidance.
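The content-based matching described in the abstract can be sketched as a similarity ranking between a student's skill profile and per-career skill weightings. The careers, skill names, and weights below are invented for illustration; the systems the review covers are far richer.

```python
import math

# Hypothetical career profiles: illustrative skill weightings, not from the review.
CAREERS = {
    "data science": {"math": 0.9, "programming": 0.8, "writing": 0.3},
    "technical writing": {"math": 0.2, "programming": 0.3, "writing": 0.9},
}

def cosine(a, b):
    """Cosine similarity between two sparse skill vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(student, careers=CAREERS):
    """Rank careers by similarity to the student's skill profile."""
    return sorted(careers, key=lambda c: cosine(student, careers[c]), reverse=True)

student = {"math": 0.8, "programming": 0.9, "writing": 0.2}
print(recommend(student))  # data science ranks first for this profile
```

A collaborative-filtering variant would replace the career profiles with preference vectors of similar students; hybrid systems blend both scores.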
Article
Computer Science and Mathematics
Computer Science

Maoxi Li,

Daobo Ma,

Yingqi Zhang

Abstract: This paper presents a novel approach to improving database anomaly detection efficiency through sample difficulty estimation. Traditional anomaly detection methods often apply uniform computational resources across all data samples regardless of their complexity, resulting in inefficient resource utilization. Our framework addresses this limitation by quantifying the "difficulty" of individual database instances and strategically allocating computational resources where they provide maximum benefit. The proposed model combines isolation scores, density-based metrics, and surprise adequacy measurements to comprehensively assess sample difficulty. Based on these assessments, a difficulty-oriented priority assignment mechanism implemented through a sigmoid mapping function directs intensive computational efforts to challenging cases while processing simpler samples with lighter methods. Experimental evaluation across five diverse datasets demonstrates that our approach achieves a 52.84% reduction in average processing time compared to uniform approaches, while maintaining or improving detection accuracy. The framework achieves the highest Average Percentage of Faults Detected (APFD) score of 0.915, outperforming both traditional and deep learning-based methods. This research provides a foundation for developing intelligent, resource-aware anomaly detection systems capable of handling the increasing scale and complexity of modern database environments.
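The difficulty-oriented priority assignment via a sigmoid mapping can be sketched as follows. The combination weights, steepness, and midpoint are illustrative assumptions, not the paper's fitted values.

```python
import math

def difficulty(isolation, density, surprise, w=(0.4, 0.3, 0.3)):
    """Combine normalized difficulty signals (each in [0, 1]) into one score.
    The weights are illustrative, not the paper's values."""
    return w[0] * isolation + w[1] * density + w[2] * surprise

def priority(d, k=10.0, d0=0.5):
    """Sigmoid mapping from difficulty to processing priority in (0, 1):
    easy samples get lightweight checks, hard ones the full pipeline."""
    return 1.0 / (1.0 + math.exp(-k * (d - d0)))

def route(sample_difficulties, threshold=0.5):
    """Split sample indices into light and heavy processing queues."""
    light, heavy = [], []
    for i, d in enumerate(sample_difficulties):
        (heavy if priority(d) >= threshold else light).append(i)
    return light, heavy

light, heavy = route([0.1, 0.9, 0.45, 0.7])
```

The steepness `k` controls how sharply resources concentrate on difficult cases; a flat sigmoid approaches the uniform allocation the paper improves upon.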
Review
Computer Science and Mathematics
Computer Science

Aliyya Hadia,

Tariq Afifa,

Fawzi Gamal

Abstract: Mixture of Experts (MoE) architectures have rapidly emerged as a foundational building block for scaling deep neural networks efficiently, enabling models with hundreds of billions of parameters to be trained and deployed with only a fraction of their total capacity active per input. By conditionally activating a sparse subset of expert modules, MoEs decouple model capacity from computation cost, offering an elegant and powerful framework for modular representation learning. This survey provides a comprehensive and systematic review of the MoE literature, spanning early formulations in ensemble learning and hierarchical mixtures to modern sparse MoEs powering large-scale language and vision models. We categorize MoE architectures along dimensions of gating mechanisms, expert sparsity, hierarchical composition, and cross-domain generalization. Further, we examine core algorithmic components such as routing strategies, load balancing, training dynamics, expert specialization, and infrastructure-aware deployment. We explore their applications across natural language processing, computer vision, speech, and multi-modal learning, and highlight their impact on foundation model development. Despite their success, MoEs raise open challenges in routing stability, interpretability, dynamic capacity allocation, and continual learning, which we discuss in depth alongside emerging research directions including federated MoEs, compositional generalization, and neuro-symbolic expert modules. We conclude by identifying trends that point toward MoEs as a central abstraction for building efficient, modular, and general-purpose AI systems. This survey serves as both a foundational reference and a forward-looking roadmap for researchers and practitioners seeking to understand and advance the state of Mixture of Experts.
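The sparse gating mechanism at the heart of MoE can be sketched with top-k routing over scalar "experts". This is a minimal illustration of the routing idea, not any particular architecture from the survey; real routers are learned networks operating on token embeddings.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def top_k_gate(logits, k=2):
    """Sparse top-k gating: keep the k largest router logits, renormalize
    their softmax weights, and leave all other experts inactive."""
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = softmax([logits[i] for i in idx])
    return {i: w for i, w in zip(idx, weights)}

def moe_forward(x, experts, logits, k=2):
    """Combine only the selected experts' outputs, weighted by the gate,
    so compute scales with k rather than with the total expert count."""
    gate = top_k_gate(logits, k)
    return sum(w * experts[i](x) for i, w in gate.items())

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: -x]
y = moe_forward(3.0, experts, logits=[0.1, 2.0, -1.0], k=2)
```

Load-balancing losses, discussed in the survey, would penalize gates that repeatedly select the same experts.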
Article
Computer Science and Mathematics
Computer Science

Tasoulas Theofanis,

Alexandros Gazis,

Tsohou Aggeliki

Abstract: Web tracking (WT) systems are advanced technologies used to monitor and analyze online user behavior. Initially focused on HTML and static webpages, these systems have evolved with the proliferation of IoT, edge computing, and Big Data, encompassing a broad array of interconnected devices with APIs, interfaces and computing nodes for interaction. WT systems are pivotal in technological innovation and business development, although trends like GDPR complicate data extraction and mandate transparency. Specifically, this study examines WT systems purely from a technological perspective, excluding organizational and privacy implications. A novel classification scheme based on technological architecture and principles is proposed, compared to two preexisting frameworks. The scheme categorizes WT systems into six classes, emphasizing technological mechanisms such as HTTP protocols, APIs, and user identification techniques. Additionally, a survey of over 1,000 internet users, conducted via Google Forms, explores user awareness of WT systems. Findings indicate that knowledge of WT technologies is largely unrelated to demographic factors such as age or gender but is strongly influenced by a user's background in computer science. Most users demonstrate only a basic understanding of WT tools, and this awareness does not correlate with heightened concerns about data misuse. As such, the research highlights gaps in user education about WT technologies and underscores the need for a deeper examination of their technical underpinnings. This study provides a foundation for further exploration of WT systems from multiple perspectives, contributing to advancements in classification, implementation, and user awareness.
Review
Computer Science and Mathematics
Computer Science

Janaka Ishan Senarathna

Abstract: Blockchain technology has transformed secure data management by employing a decentralized framework that fundamentally depends on cryptographic methods. This paper investigates how hash functions (e.g., SHA-256), digital signatures (e.g., ECDSA), and Merkle trees enable blockchain’s core attributes—immutability, security, and transparency. A Python-based proof-of-concept demonstrates hashing’s pivotal role in linking blocks, ensuring resistance to tampering. The study assesses cryptography’s contributions, such as enhanced security, alongside limitations like quantum vulnerabilities and scalability constraints. It proposes future directions, including post-quantum cryptography and zero-knowledge proofs, to mitigate these challenges. Real-world applications in finance and supply chains highlight practical relevance. Findings confirm cryptography as the bedrock of blockchain, offering insights to bolster its resilience amid evolving technological demands.
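The hash-linking the abstract's Python proof-of-concept demonstrates can be sketched as below; this is a minimal illustration in the same spirit, not the authors' code.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over a canonical JSON serialization of the block contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})
    return chain

def verify(chain):
    """Tampering with any block invalidates every later link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "genesis")
add_block(chain, "tx: A->B 5")
assert verify(chain)
chain[0]["data"] = "tampered"   # breaks the link to block 1
assert not verify(chain)
```

Because each block commits to its predecessor's digest, altering any historical record changes that digest and breaks every subsequent link, which is the immutability property the paper examines.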
Article
Computer Science and Mathematics
Computer Science

Syed Uddin,

Michał Grega,

Mikolaj Leszczuk,

Waqas ur Rahman

Abstract: The demand for multimedia traffic over the internet is growing exponentially. HTTP adaptive streaming (HAS) is the leading video delivery system for delivering high-quality video to the end-user. The adaptive bitrate (ABR) algorithms running on the HTTP client select the highest feasible video quality by adjusting the quality according to fluctuating network conditions. Recently, low-latency ABR algorithms have been introduced to reduce the end-to-end latency commonly experienced in HAS. However, comprehensive studies of low-latency algorithms remain limited. In this paper, we present an evaluation of low-latency algorithms and compare their performance with traditional DASH-based ABR algorithms across multiple QoE metrics, various network conditions, and diverse content types. Additionally, we conduct an extensive subjective test to evaluate the impact of video quality variations on QoE. The results show that the algorithms do not perform consistently under different network conditions and content settings: traditional ABR algorithms outperform low-latency algorithms in stable network conditions, whereas low-latency algorithms outperform traditional ones under variable network conditions when segment durations are shorter. The findings also reveal that the performance of the algorithms drops as the segment duration increases. The dynamic algorithm performs best when the segment duration is increased and under high risk of playback interruptions.
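The rate-based half of a client-side ABR decision can be sketched as follows. The bitrate ladder, safety margin, and buffer threshold are invented for illustration; none of the specific algorithms evaluated in the paper reduces to this.

```python
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # illustrative ladder

def select_bitrate(throughput_kbps, buffer_s, safety=0.8, low_buffer_s=5.0):
    """Rate-based ABR sketch: pick the highest rung below a safety margin
    of the measured throughput; fall back to the lowest rung when the
    playback buffer runs low, to avoid stalls."""
    if buffer_s < low_buffer_s:
        return BITRATES_KBPS[0]
    budget = throughput_kbps * safety
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

choice = select_bitrate(4000, buffer_s=12)
```

Low-latency algorithms complicate this picture because chunked transfer makes throughput estimates noisy and the buffer is deliberately kept small, which is one reason the paper finds their behavior diverges from traditional ABR.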
Article
Computer Science and Mathematics
Computer Science

Jonathan Decker,

Vincent Florens Hasse,

Julian Kunkel

Abstract: Kubernetes has emerged as the industry standard for container orchestration in cloud environments. Its scheduler dynamically places container instances across cluster nodes based on predefined rules and algorithms. Various efforts exist to extend and improve upon the Kubernetes scheduler; however, since the majority of Kubernetes clusters operate on homogeneous hardware, most scheduling algorithms are likewise developed only for homogeneous systems. Heterogeneous infrastructures require specialized tuning to optimize workload assignment, so researchers and developers working on scheduling systems need access to heterogeneous hardware for development and testing, which may not be available. While simulations such as CloudSim or K8sSim can provide insights, the level of detail a simulation can offer to validate new schedulers is limited. To address this, we introduce Q8S, a tool for emulating heterogeneous Kubernetes clusters on OpenStack using QEMU. Q8S emulations provide a higher level of detail than simulations and can be used to train machine learning scheduling algorithms. By providing a more realistic environment, Q8S enables researchers and developers to test and refine their scheduling algorithms, ultimately leading to more efficient and effective heterogeneous cluster management. We release our implementation of Q8S as open source so that it can be further customized.
Article
Computer Science and Mathematics
Computer Science

Osvaldo Santos,

Natércia Santos

Abstract: Forest fires have become one of the most destructive natural disasters worldwide, causing catastrophic losses, sometimes with the loss of lives. Therefore, some countries have created legislation to enforce mandatory fuel management within buffer zones in the vicinity of buildings and roads. The purpose of this study is to investigate whether inexpensive off-the-shelf drones equipped with standard RGB cameras could be used to detect the excess of trees and vegetation in those buffer zones, using the services provided by the bundles of the EU Horizon project Chameleon. The article describes a system that uses drones equipped with RGB cameras to create detailed orthophoto maps and 3D point cloud files of the ground and then identifies trees and vegetation within the buffer zones that must be cut down to comply with the legislation on fuel management within those zones. The article also discusses the results obtained from two use cases: a road surrounded by dense forest and an isolated building with dense vegetation nearby. The main conclusion of this study is that off-the-shelf drones equipped with standard RGB cameras can be effective at detecting non-compliant vegetation and trees within buffer zones. This can be used to manage biomass within buffer zones, thus helping to reduce the risk of wildfire propagation in wildland-urban interfaces.
Review
Computer Science and Mathematics
Computer Science

Dimitar Rangelov,

Sierd Waanders,

Kars Waanders,

Maurice van Keulen,

Radoslav Miltchev

Abstract: Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, lighting conditions at capture time remain one of the most influential yet widely neglected variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality, and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction and outlines future research directions, emphasizing the need for adaptive, lighting-robust solutions in 3D vision systems.
Article
Computer Science and Mathematics
Computer Science

Jamalbek Tussupov,

Akmaral Kassymova,

Ayagoz Mukhanova,

Assyl Bissengaliyeva,

Zhanar Azhibekova,

Moldir Yessenova,

Zhanargul Abuova

Abstract: This article presents a comprehensive review of short text clustering using state-of-the-art methods: Bidirectional Encoder Representations from Transformers (BERT), Term Frequency-Inverse Document Frequency (TF-IDF), and the novel hybrid method Latent Dirichlet Allocation+BERT+Autoencoder (LDA+BERT+AE). The article begins by outlining the theoretical foundation of each technique and their merits and limitations. BERT is noted for its capability to capture word dependencies in text, while TF-IDF is valued for its effectiveness in assessing term importance. The experimental section compares the efficacy of these methods in clustering short texts, with a specific focus on the hybrid LDA+BERT+AE approach. A detailed examination of the LDA-BERT model’s training and validation loss over 200 epochs shows that the loss values start above 1.2 and quickly decrease to around 0.8 within the first 25 epochs, eventually stabilizing at approximately 0.4. The close alignment of these curves suggests the model’s practical learning and generalization capabilities, with minimal overfitting. The study demonstrates that the hybrid LDA+BERT+AE method significantly enhances text clustering quality compared to the individual methods. Based on the findings, the study recommends the optimal choice and use of clustering methods for different short texts and natural language processing tasks. The applications of these methods in industrial and educational settings, where successful text handling and categorization are critical, are also addressed. The study ends by emphasizing the importance of holistic handling of short texts for deeper semantic comprehension and effective information retrieval.
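The term-importance weighting that the review credits TF-IDF with can be sketched in a few lines. This is a minimal stdlib illustration; the paper's hybrid LDA+BERT+AE pipeline layers topic modeling, contextual embeddings, and an autoencoder on top of representations like these.

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: term frequency scaled by inverse document frequency,
    so terms concentrated in few documents receive higher weight."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(t for toks in tokenized for t in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * math.log(n / df[t])
                        for t in tf})
    return vectors

docs = ["deep learning for text", "text clustering with tfidf", "deep neural models"]
vecs = tfidf(docs)
```

A clustering stage (e.g., K-Means over these sparse vectors) would then group the documents; the hybrid method instead clusters the autoencoder's compressed joint LDA+BERT representation.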
Article
Computer Science and Mathematics
Computer Science

Su-Ying Guo,

Xiu-Jun Gong

Abstract: The increasing adoption of deep neural networks (DNNs) in critical domains such as healthcare, finance, and autonomous systems underscores the growing importance of explainable artificial intelligence (XAI). In these high-stakes applications, understanding the decision-making processes of models is essential for ensuring trust and safety. However, traditional DNNs often function as "black boxes," delivering accurate predictions without providing insight into the factors driving their outputs. Expected Gradients (EG) is a prominent method for producing such explanations by calculating the contribution of each input feature to the final decision. Despite its effectiveness, conventional baselines used in state-of-the-art implementations of EG often lack a clear definition of what constitutes "missing" information. In this work, we propose DeepPrior-EG, a deep prior-guided EG framework that leverages prior knowledge to more accurately align with the concept of missingness and enhance interpretive fidelity. It resolves the baseline misalignment by initiating gradient path integration from learned prior baselines, which are derived from the deep features of CNN layers. This approach not only mitigates feature absence artifacts but also amplifies critical feature contributions through adaptive gradient aggregation. We further introduce two probabilistic prior modeling strategies: a multivariate Gaussian model (MGM) to capture high-dimensional feature interdependencies and a Bayesian nonparametric Gaussian mixture model (BGMM) that autonomously infers mixture complexity for heterogeneous feature distributions. We also develop an explanation-driven model retraining paradigm to validate the robustness of the proposed framework. Comprehensive evaluations across various qualitative and quantitative metrics demonstrate its superior interpretability. The BGMM variant achieves state-of-the-art performance in attribution quality and faithfulness against existing methods.
DeepPrior-EG advances the interpretability of complex models within the XAI landscape and unlocks its potential in safety-critical applications.
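The path-integration idea behind Expected Gradients can be sketched on a toy differentiable function: attributions are Integrated-Gradients-style terms averaged over baselines drawn from a reference distribution and uniform path positions. This illustrates plain EG only; the paper's contribution is where the baselines come from (learned deep priors), which this sketch does not model.

```python
import random

def expected_gradients(grad_f, x, baselines, samples=256, rng=None):
    """Monte-Carlo Expected Gradients for a scalar function with gradient
    grad_f: average (x - b) * grad_f(b + alpha*(x - b)) over sampled
    baselines b and uniform path positions alpha in [0, 1]."""
    rng = rng or random.Random(0)
    dim = len(x)
    attr = [0.0] * dim
    for _ in range(samples):
        b = rng.choice(baselines)
        alpha = rng.random()
        point = [b[d] + alpha * (x[d] - b[d]) for d in range(dim)]
        g = grad_f(point)
        for d in range(dim):
            attr[d] += (x[d] - b[d]) * g[d] / samples
    return attr

# Toy model f(x) = 3*x0 + 0*x1: attribution should load entirely on feature 0.
grad = lambda p: [3.0, 0.0]
attr = expected_gradients(grad, [1.0, 1.0], baselines=[[0.0, 0.0]])
```

With a linear model the estimate is exact; for real DNNs the gradient varies along the path and the baseline distribution (the paper's focus) materially changes the attributions.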
Article
Computer Science and Mathematics
Computer Science

Dezhi An,

Wenqiang Liu,

Jun Lu,

Shengcai Zhang

Abstract: Video anomaly detection (VAD) remains a challenging task, especially when identifying occluded, small-scale, and transient anomalies in complex contexts. Existing methods (such as optical flow analysis, trajectory modeling, and sparse coding) often neglect the enhancement of fine-grained target features and lack adaptive, precise attention mechanisms, thus limiting the robustness and generalization ability of anomaly detection. To this end, we propose an adaptive feature refinement (AFR) method to improve the performance of small-scale anomaly detection. The AFR method integrates a small-object attention module (SAM) into the feature pyramid network (FPN) of a CLIP-driven multi-scale instance learning architecture to adaptively enhance the feature representation of key areas. In addition, we incorporate the contrastive language-image pre-training (CLIP) model to enrich semantic information and improve generalization across scenes. Specifically, the SAM module guides the model to attend to the discriminative patterns of small-scale anomalies through channel recalibration and spatial attention mechanisms, while the semantic prior of the CLIP model further strengthens the expressive power of visual features. By jointly optimizing SAM and CLIP, the AFR method shows superior generalization performance in cross-scale and cross-scene anomaly detection tasks. Extensive experiments on two common benchmark datasets, UCF-Crime and XD-Violence, show that the AFR method outperforms existing state-of-the-art methods, verifying its effectiveness and transferability in real-world video anomaly detection tasks.
Article
Computer Science and Mathematics
Computer Science

Igor S. Kovalev,

Nibin Joy Muthipeedika,

Grigory V. Zyryanov

Abstract: The conductivity of polyacetylene (PA) has been investigated using DFT quantum chemical computation of the bandgap of PA oligomers at the B3LYP1/def2-TZVP level of theory. The higher conductivity of the PA trans-isomer compared to the cis-isomer was confirmed. It was found that increasing the conjugation length of PA units asymptotically drives the bandgap to 1.26 eV for the trans-isomer and 2.01 eV for the cis-isomer, respectively. A dramatic effect of doping on PA conductivity was demonstrated. This communication discusses the effects of doping by oxidation/reduction of PA (p/n doping), as well as by bromination and deprotonation. The good insulating properties of polytetrafluoroethylene (PTFE) were also confirmed. Finally, an approach to identifying the optimal dopant for the best PA conductivity is proposed.
Article
Computer Science and Mathematics
Computer Science

Mohit Nayak,

Jari Kangas,

Roope Raisamo

Abstract: Applications of virtual reality (VR) have grown in significance in medicine as they are able to recreate real-life scenarios in 3D while posing reduced risks to patients. However, there are several interaction challenges to overcome when moving from 2D screens to 3D VR environments, such as complex controls and slow user adaptation. More intuitive techniques are needed for an enhanced user experience. Our research explores the potential of intelligent speech interfaces to enhance user interaction while conducting complex medical tasks. We developed a speech-based assistant within a VR application for maxillofacial implant planning, leveraging natural language processing (NLP) to interpret user intentions and to execute tasks such as obtaining surgical equipment or answering questions related to the VR environment. The objective of the study was to evaluate the usability and cognitive load of the speech-based assistant. We conducted a mixed-methods within-subjects user study with 20 participants and compared the voice-assisted approach to traditional interaction methods, such as button panels on the VR view, across various tasks. Our findings indicate that NLP-driven speech-based assistants can enhance interaction and accessibility in medical VR, especially in areas such as locating controls, ease of control, user comfort, and intuitive interaction. These findings highlight the potential benefits of augmenting traditional controls with speech interfaces, particularly in complex VR scenarios where conventional methods may limit usability. We identified key areas for future research, including improving the intelligence, accuracy, and user experience of speech-based systems. Addressing these areas could facilitate the development of more robust, user-centric, voice-assisted applications in virtual reality environments.
Article
Computer Science and Mathematics
Computer Science

Kazi Fatema,

Samrat Kumar Dey,

Mehrin Anannya,

Risala Tahsin Khan,

Mohammad Rashid,

SU Chunhua,

Rashed Mazumder

Abstract: Intrusion Detection Systems (IDS) are a crucial component of cybersecurity, designed to identify unauthorized activities in network environments. Traditional IDS, however, suffer from a number of problems, such as high rates of false positives and false negatives and a lack of explainability, which makes it difficult to provide adequate protection. Furthermore, centralized IDS approaches have issues with interpretability and data protection, especially when dealing with sensitive data. To overcome these drawbacks, we present Federated XAI IDS, a novel explainable and privacy-preserving IDS that improves security and interpretability by fusing Federated Learning (FL) with Shapley Additive Explanations (SHAP). Our approach enables IDS models to be collaboratively trained across multiple decentralized devices while ensuring that local data remains securely on edge nodes, thus mitigating privacy risks. The Artificial Neural Network (ANN)-based IDS is distributed across four clients in a federated setup using the CICIoT2023 dataset, with model aggregation performed via FedAvg. The proposed method demonstrated efficacy in intrusion detection, achieving 88.4% training and 88.2% testing accuracy. Furthermore, SHAP was utilized to analyze feature importance, providing a deeper comprehension of the critical attributes influencing model predictions. The feature importance ranking that SHAP produces improves transparency and makes the model more dependable and interpretable. Our findings demonstrate how well Federated XAI IDS addresses the twin problems of explainability and privacy in intrusion detection. This work advances the development of secure, interpretable, and decentralized intrusion detection systems for contemporary cybersecurity applications by utilizing federated learning and explainable AI (XAI).
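The FedAvg aggregation step used in the paper's four-client setup can be sketched as a sample-size-weighted mean of client parameters. Parameters are plain lists here and the weights and sizes are invented; a real IDS would aggregate tensors of ANN weights.

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameter vector by its
    share of the total training samples, then sum. Only parameters leave
    the clients; the raw training data stays on the edge nodes."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Four clients, as in the paper's setup; values are illustrative.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
sizes = [10, 10, 10, 10]
global_model = fed_avg(clients, sizes)  # equal sizes -> plain mean
```

Each round, the server broadcasts `global_model` back to the clients, which resume local training; SHAP is then applied to the converged global model to rank feature importance.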
Article
Computer Science and Mathematics
Computer Science

Xiulan Jie,

Yahui Yang,

Yong Jianhong

Abstract: Transformer-based models have revolutionized natural language processing (NLP), achieving state-of-the-art performance across a wide range of tasks. However, their high computational cost and memory requirements pose significant challenges for real-world deployment, particularly in resource-constrained environments. Token pruning has emerged as a promising technique to improve efficiency by selectively removing less informative tokens during inference, thereby reducing FLOPs and latency while maintaining competitive performance. This survey provides a comprehensive overview of token pruning methods, categorizing them into static, dynamic, and hybrid approaches. We discuss key pruning strategies, including attention-based pruning, entropy-based pruning, reinforcement learning methods, and differentiable token selection. Furthermore, we examine empirical studies that evaluate the trade-offs between efficiency gains and accuracy retention, highlighting the effectiveness of token pruning in various NLP benchmarks. Beyond theoretical advancements, we explore real-world applications of token pruning, including mobile NLP, large-scale language models, streaming applications, and multimodal AI systems. We also outline open research challenges, such as preserving model generalization, optimizing pruning for hardware acceleration, ensuring fairness, and developing automated, adaptive pruning strategies. As deep learning models continue to scale, token pruning represents a crucial step toward making AI systems more efficient and practical for widespread adoption. We conclude by identifying future research directions that can further enhance the effectiveness and applicability of token pruning techniques in modern AI deployments.
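The attention-based pruning strategy the survey covers can be sketched as keeping the top-scoring fraction of tokens by aggregate attention received, while protecting special positions. The tokens, scores, and keep ratio below are invented for illustration; in a real model the scores come from the attention maps of earlier layers.

```python
def prune_tokens(tokens, attention_scores, keep_ratio=0.5, protected=(0,)):
    """Attention-based token pruning sketch: retain the top-scoring fraction
    of tokens, always keeping protected positions such as [CLS].
    The surviving tokens keep their original order."""
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: attention_scores[i],
                    reverse=True)
    keep = set(ranked[:k]) | set(protected)
    return [t for i, t in enumerate(tokens) if i in keep]

tokens = ["[CLS]", "the", "movie", "was", "great"]
scores = [0.30, 0.05, 0.25, 0.10, 0.35]
kept = prune_tokens(tokens, scores, keep_ratio=0.4)
```

Static methods fix `keep_ratio` per layer at design time; dynamic methods compute it per input, and hybrid approaches combine both, which is the taxonomy the survey uses.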
Article
Computer Science and Mathematics
Computer Science

Janez Brest,

Mirjam Sepesy Maučec

Abstract: Since the introduction of the Differential Evolution algorithm, new and improved versions have continuously emerged. In this paper, we review selected algorithms based on Differential Evolution proposed in recent years. We examine the mechanisms integrated into them and compare the algorithms' performance. Statistical comparisons were used for this purpose, as they enable reliable conclusions to be drawn about algorithm performance. We use the Wilcoxon signed-rank test for pairwise comparisons and the Friedman test for multiple comparisons; the Mann-Whitney U test was subsequently added. We conducted not only a cumulative analysis of the algorithms but also examined their performance with respect to function family (i.e., unimodal, multimodal, hybrid, and composition functions). Experimental results were obtained on problems defined for the CEC’24 Special Session and Competition on Single Objective Real Parameter Numerical Optimization. Problem dimensions of 10, 30, 50, and 100 were analyzed. Based on this study of the selected algorithms, we highlight promising mechanisms for further development and improvement.
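The classic DE/rand/1/bin scheme that the reviewed variants build on can be sketched in one generation step: mutate from three distinct population members, cross over binomially, and keep the trial only if it does not worsen the objective. Control parameters `F` and `CR` and the sphere objective are illustrative defaults, not values from the paper.

```python
import random

def de_step(pop, f, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin (minimization): for each
    target vector, build a mutant a + F*(b - c) from three distinct others,
    binomially cross over with the target, and select greedily."""
    rng = rng or random.Random(0)
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        j_rand = rng.randrange(dim)  # guarantees at least one mutant gene
        trial = [mutant[d] if (rng.random() < CR or d == j_rand) else target[d]
                 for d in range(dim)]
        new_pop.append(trial if f(trial) <= f(target) else target)
    return new_pop

sphere = lambda x: sum(v * v for v in x)
pop = [[random.Random(i).uniform(-5, 5) for _ in range(3)] for i in range(8)]
pop = de_step(pop, sphere)
```

The reviewed variants modify exactly these components: the mutation strategy, the adaptation of `F` and `CR`, and the population sizing, which is why the paper compares them mechanism by mechanism.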
Article
Computer Science and Mathematics
Computer Science

Sonia Dey,

Mohammad Shamsul Arefin

Abstract: Household budgeting is crucial for financial stability, yet many individuals find it challenging due to the lack of structured financial planning tools. This paper introduces a rule-based system that optimizes expenses by considering family size, income, and spending behavior. Unlike traditional budgeting tools, our system dynamically distributes income across essential categories such as housing, food, medical care, education, and savings using predefined rules. The system leverages dynamic input processing and predefined allocation rules to provide real-time insights into budgeting constraints. Experimental results show that the proposed model achieves 90% accuracy in budget allocation, ensuring financial sustainability and preventing overspending. Our system provides a transparent, flexible, and user-friendly alternative to machine learning-based budget models, making it accessible to households of all financial backgrounds.
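The rule-based allocation the paper describes can be sketched as predefined category shares adjusted by family size. The category shares and the per-member adjustment below are invented for illustration; the paper's rule set also conditions on spending behavior.

```python
# Illustrative allocation rules; the paper's actual rule set is richer.
BASE_RULES = {"housing": 0.30, "food": 0.25, "medical": 0.10,
              "education": 0.15, "savings": 0.20}

def allocate(income, family_size, rules=BASE_RULES):
    """Distribute income across categories, shifting a small share from
    savings to food for each household member beyond two (a hypothetical
    rule), while keeping the shares summing to 1."""
    shares = dict(rules)
    shift = 0.02 * max(0, family_size - 2)
    shift = min(shift, shares["savings"])  # never drive savings negative
    shares["food"] += shift
    shares["savings"] -= shift
    return {cat: round(income * s, 2) for cat, s in shares.items()}

budget = allocate(50000, family_size=4)
```

Because every rule is explicit, the system can show a user exactly why each amount was assigned, which is the transparency advantage the paper claims over learned budget models.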
Article
Computer Science and Mathematics
Computer Science

Eduardo Cansler,

Matthew Odogwu

Abstract: In today's fast-paced software development landscape, organizations strive to accelerate their time-to-market while maintaining high-quality, reliable software releases. Continuous Delivery (CD) pipelines play a crucial role in achieving this goal by enabling automated, efficient, and consistent software deployments. This paper explores best practices for optimizing CD pipelines to enhance deployment speed, reduce failure rates, and improve overall software delivery performance. Key strategies include automated testing, continuous monitoring, parallelized deployments, and Infrastructure as Code (IaC), all of which contribute to faster and more reliable releases. Additionally, emerging technologies such as AI-driven failure detection and predictive analytics are examined for their potential to further optimize CD workflows. The study also highlights common bottlenecks, such as security and compliance delays, and provides actionable recommendations for integrating DevSecOps principles to streamline these processes. By implementing these best practices, organizations can minimize lead times, enhance agility, and maintain a competitive edge in the ever-evolving software industry. The findings emphasize the importance of automation, feedback loops, and continuous improvement in achieving a high-performing CD pipeline that accelerates time-to-market while ensuring software reliability.
Article
Computer Science and Mathematics
Computer Science

Anthony Carignan,

Olanite Enoch

Abstract: In the fast-evolving world of software development, DevOps efficiency plays a critical role in ensuring rapid deployments, scalability, and system reliability. This study explores best practices for cloud infrastructure management to optimize DevOps workflows, enhance deployment speed, improve resource utilization, and strengthen security. Key strategies discussed include Infrastructure as Code (IaC), automated CI/CD pipelines, containerization, multi-cloud strategies, and AI-driven cloud monitoring. The findings indicate that implementing these cloud-native best practices leads to faster time-to-market, reduced operational costs, and improved system resilience. Additionally, the study highlights the importance of security automation, proactive cost optimization, and strategic multi-cloud adoption to ensure long-term DevOps success. By adopting these approaches, organizations can streamline cloud management, enhance collaboration between development and operations teams, and drive continuous innovation.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated