Computer Science and Mathematics

Review
Computer Science and Mathematics
Software

Ammad Ul Haq Farooqi

,

Omer Khalid

,

Muhammad Bilal

Abstract: Serverless architecture has progressively become one of the most popular ways to build and deploy applications today. This architecture allows developers to focus on their code without worrying about managing backend servers. Through abstraction, the serverless approach makes it much easier to achieve scalability, automatic resource management, and cost efficiency via a pay-per-use model. As its usage has grown, serverless computing has expanded into domains such as the Internet of Things (IoT), high-performance computing, artificial intelligence, and large-scale cloud environments. This expansion brings challenges as well, including performance, reliability, and maintenance challenges. This review examines fifty peer-reviewed studies to present a structured overview of the current state of serverless computing, and it organizes the existing work into a taxonomy that shows key developments across application domains, technical methods, data sources, and limitations, while also identifying open research directions. We find a clear evolution from simple function orchestration towards more intelligent, workload-aware scheduling systems with improved cold-start latency and hybrid deployment capabilities that span both cloud and edge infrastructures. However, despite these advances, recurring issues such as vendor lock-in, limited debugging visibility, difficulties in managing state, and unpredictable performance still pose a challenge to widespread adoption of the serverless approach. Finally, the review highlights several promising directions for future research, including adaptive resource management, distributed serverless runtimes, AI-driven optimization, and better support for heterogeneous hardware. Overall, our work offers a consolidated understanding of the current progress and future potential of serverless computing.
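The workload-aware scheduling and cold-start mitigation noted in this abstract often reduce to a keep-warm decision. As a minimal sketch (ours, not any specific system surveyed in the review), the policy below keeps a function instance warm only while the idle time stays below a multiple of the recently observed inter-arrival gap; the window size and factor are illustrative assumptions:

```python
import time
from collections import deque

class KeepWarmPolicy:
    """Illustrative cold-start mitigation: keep a function instance warm
    while recent inter-arrival gaps suggest another request is imminent."""

    def __init__(self, window=10, keep_warm_factor=2.0):
        self.arrivals = deque(maxlen=window)   # timestamps of recent requests
        self.keep_warm_factor = keep_warm_factor

    def record_request(self):
        self.arrivals.append(time.monotonic())

    def should_keep_warm(self):
        if len(self.arrivals) < 2:
            return True                        # too little data; stay warm
        ts = list(self.arrivals)
        mean_gap = (ts[-1] - ts[0]) / (len(ts) - 1)  # mean inter-arrival time
        idle = time.monotonic() - ts[-1]
        # Evict only once idle time well exceeds the typical request gap.
        return idle < self.keep_warm_factor * mean_gap
```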
Article
Computer Science and Mathematics
Software

David Ostapchenko

Abstract: Microservice systems require reliable delivery, stable runtime environments, and production controls to achieve both high throughput and operational stability. This paper proposes a production-development framework that combines (i) runtime and toolchain version pinning, (ii) progressive delivery (canary / blue-green) with automated rollback, (iii) SLO and error-budget-based reliability management, and (iv) supply-chain security controls for build and artifact integrity. The framework is assessed via an 8-week synthetic observation design that mirrors operating conditions typical for mid-size telecom/fintech organizations (Central Asia), using eight independently deployable services. The evaluation is reported through DORA-style delivery metrics (deployment frequency, lead time, change failure rate, MTTR) and SLO attainment, and includes an ablation that isolates the expected contribution of each practice. The results indicate that deterministic builds (pinning) and progressive delivery provide the strongest improvements in delivery stability, while SLO/error-budget policies reduce incident impact by enforcing clear rollback and release-freeze rules.
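The error-budget mechanism in (iii) can be made concrete with a short sketch. Assuming the standard SLO formulation (this is not code from the paper), the budget is the allowed fraction of failed requests over a window, and a release freeze triggers once the budget is spent:

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent in a window.

    slo_target: e.g. 0.999 means at most 0.1% of requests may fail.
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)

def release_allowed(slo_target, total_requests, failed_requests):
    # Freeze releases once the budget is exhausted, mirroring the
    # release-freeze rule described in the abstract.
    return error_budget_remaining(slo_target, total_requests, failed_requests) > 0.0

# Example: 1,000,000 requests at a 99.9% SLO allow 1,000 failures.
print(error_budget_remaining(0.999, 1_000_000, 400))  # 0.6 of the budget left
print(release_allowed(0.999, 1_000_000, 1200))        # False: budget spent
```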
Article
Computer Science and Mathematics
Software

Parvani Vafa Mokhammad

Abstract: More web applications are produced every year, and they process more data and serve more users, so effective memory management directly affects performance. Node.js uses the V8 JavaScript engine, whose garbage collection (GC) automatically releases unused memory. Despite the convenience, this process can cause delays and reduce application performance under high load. The purpose of the study is to examine how different garbage collection configurations and parameters affect the memory management performance of modern Node.js applications. In my research, I analyze existing research and run a series of load tests with different GC settings. Request response time and memory usage are compared as the number of simultaneous requests increases. The results will allow us to determine optimal approaches to configuring and profiling memory in Node.js. They will also provide practical recommendations to developers on how to improve the performance of their applications.
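The described methodology, repeated load tests at increasing concurrency while recording latency, can be sketched with a small driver. This is an illustrative harness rather than the study's tooling; the endpoint and concurrency levels are assumptions, and the Node.js process under test would be launched separately with different V8 flags (for example, --max-old-space-size):

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_request(url):
    """Issue one request and return its latency in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000.0

def run_level(url, concurrency, requests_per_level=200):
    """Measure latency percentiles at one concurrency level."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: timed_request(url),
                                    range(requests_per_level)))
    return {
        "concurrency": concurrency,
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    # Hypothetical endpoint of the Node.js app under test.
    for level in (1, 8, 32, 128):
        print(run_level("http://localhost:3000/", level))
```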
Article
Computer Science and Mathematics
Software

Saikal Batyrbekova

Abstract: This paper looks at two main ways to build large software programs: the traditional Monolith, which is one large, unified application, and Microservices, which are many small, independent parts. The goal is to use real-world studies and experiments to clearly explain the confirmed pros and cons of each approach. The findings show that for new or small projects, the Monolith is generally faster and cheaper to launch. However, Microservices are much better at handling huge numbers of users, a key advantage known as scaling, and speed up development by allowing teams to work completely independently. The major trade-off is high complexity: Microservices are difficult to set up and operate, requiring specialized skills. I conclude that the best architectural choice is never fixed; it depends entirely on the project's specific situation, such as the required size and growth speed of the system.
Review
Computer Science and Mathematics
Software

Guang Yang

,

Wei Zheng

,

Xiang Chen

,

Dong Liang

,

Peng Hu

,

Yukui Yang

,

Shaohua Peng

,

Zhenghan Li

,

Jiahui Feng

,

Xiao Wei

+7 authors

Abstract: Code generation has emerged as a critical research area at the intersection of Software Engineering (SE) and Artificial Intelligence (AI), attracting significant attention from both academia and industry. Within this broader landscape, Verilog, as a representative hardware description language (HDL), plays a fundamental role in digital circuit design and verification, making its automated generation particularly significant for Electronic Design Automation (EDA). Consequently, recent research has increasingly focused on applying Large Language Models (LLMs) to Verilog code generation, particularly at the Register Transfer Level (RTL), exploring how these AI-driven techniques can be effectively integrated into hardware design workflows. Although substantial research efforts have been invested in exploring LLM applications in this domain, a comprehensive survey synthesizing these developments remains absent from the literature. This review addresses the gap by providing a systematic literature review of LLM-based methods for Verilog code generation, examining their effectiveness, limitations, and potential for advancing automated hardware design. The review encompasses research from conferences and journals in the fields of SE, AI, and EDA, covering 70 published papers along with 32 high-quality preprint papers, bringing the total to 102 papers. By answering four key research questions, we aim to (1) identify the LLMs used for Verilog generation, (2) examine the datasets and metrics employed in evaluation, (3) categorize the techniques proposed for Verilog generation, and (4) analyze LLM alignment approaches for Verilog generation. Based on our findings, we identify a series of limitations of existing studies. Finally, we outline a roadmap highlighting potential opportunities for future research endeavors in LLM-assisted hardware design.
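Regarding research question (2), one metric that recurs across code-generation evaluation, including Verilog benchmarks, is pass@k. Whether every surveyed paper uses the unbiased estimator below is our assumption, but the formula itself is the standard one introduced with Codex: given n samples per problem of which c pass functional tests,

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of which are correct) passes.

    n: total generated candidates per problem
    c: candidates that pass the functional testbench
    k: attempts allowed
    """
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 generations per problem, 3 pass the testbench.
print(round(pass_at_k(20, 3, 1), 3))  # 0.15
print(round(pass_at_k(20, 3, 5), 3))  # higher with 5 attempts allowed
```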

Article
Computer Science and Mathematics
Software

David A. Plaisted

Abstract: In contrast to some other term rewriting languages based on term graphs, the Imperative Term Graph Programming Language (ITGL) is an imperative term graph language, with assignment statements, conditional statements, iterative statements, arrays with destructive operations, procedures, and recursion. States consist of a term graph together with an environment, and statements map states to states. Pure functional languages may need to copy arrays when modifying them, which can lead to inefficiencies, but imperative languages avoid this problem. The syntax and semantics of the language ITGL are presented, followed by proofs of two of its properties called term dependence and isomorphism dependence, and proofs of some other properties as well. In addition, some possibilities for caching in this language are explored. The application of this language as an abstract language for algorithms which can then be translated into other common imperative languages is also mentioned.
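To illustrate the kind of state ITGL manipulates (this sketch is ours, not the paper's syntax), a term graph stores terms as nodes whose arguments may be shared; a destructive update to a shared subterm is visible through every parent, which is exactly the copying cost that pure functional languages must pay to avoid:

```python
class Node:
    """A term graph node: a function symbol applied to argument nodes.
    Arguments may be shared between parents, so updates are destructive."""

    def __init__(self, symbol, args=()):
        self.symbol = symbol
        self.args = list(args)

    def __repr__(self):
        if not self.args:
            return str(self.symbol)
        return f"{self.symbol}({', '.join(map(repr, self.args))})"

# Build f(x, g(x)) with the subterm x shared, not copied.
x = Node("x")
term = Node("f", [x, Node("g", [x])])
print(term)        # f(x, g(x))

# Destructive update of the shared node: both occurrences change at once.
x.symbol = "42"
print(term)        # f(42, g(42))
```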
Article
Computer Science and Mathematics
Software

María Jesús Manzanares

,

Diana Pérez-Marín

,

Celeste Pizarro

Abstract: Teaching programming to children at an early age has been proven to be beneficial. Some research has focused on how to teach programming to children with special needs. According to Human-Computer Interaction principles, all users should be involved in the design of their systems (including learning systems). However, evaluation procedures with young children are a complex task, which can be even harder when the children have special needs. This paper proposes a validation procedure which was applied with 50 children aged 6-7 years to evaluate the usability of a training system that teaches them Scratch Jr, irrespective of whether they are neurotypical or neurodivergent.
Article
Computer Science and Mathematics
Software

Peter Backeman

Abstract: Asserting program correctness is a longstanding challenge in software development which consumes significant resources and manpower. This is often accomplished through software testing at various levels. One such level is unit testing, where individual components' behaviour is tested. In this paper we introduce the concept of test analysis, which, instead of executing unit tests, analyses them to establish their outcome. This is in line with previous approaches to using formal methods for program verification; however, we introduce a middle layer called the test analysis framework which allows for the introduction of new capabilities. We (briefly) formalize ordinary testing and test analysis to define the relation between the two. We introduce the notion of rich tests, with a syntax and semantics instantiated for C. A prototype framework is implemented and extended to handle property-based stubbing and non-deterministic string variables. A few select examples are presented to demonstrate the capabilities of the framework.
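As a rough illustration of property-based stubbing (our sketch in Python rather than the paper's C instantiation, and not its actual framework), a stub is constrained only by a property, and the analysis checks the test's assertion over many sampled stub behaviours instead of one hard-coded return value:

```python
import random

def discount(price, rate_provider):
    # Unit under test: applies whatever rate the collaborator supplies.
    return price * (1.0 - rate_provider())

def analyse_test(samples=1000):
    """Instead of executing the test with one concrete stub, check the
    assertion for many stub behaviours satisfying 0 <= rate < 1."""
    for _ in range(samples):
        rate = random.uniform(0.0, 0.999)  # any value the property permits
        result = discount(100.0, lambda: rate)
        # The test's assertion must hold for every permitted stub behaviour.
        assert 0.0 < result <= 100.0, f"violated for rate={rate}"
    return "assertion holds over sampled stub behaviours"

print(analyse_test())
```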
Article
Computer Science and Mathematics
Software

Bernard Zeigler

,

Robert Kewley

,

Gabriel Wainer

Abstract: This article explores the foundational mechanisms of the Discrete Event System Specification (DEVS) theory (closure under coupling, universality, and uniqueness) and their critical role in enabling interoperability through modular, hierarchical simulation frameworks. Closure under coupling empowers modelers to compose interconnected models, both atomic and coupled, into unified systems without departing from the DEVS formalism. We show how this modular approach supports the scalable and flexible construction of complex simulation architectures on a firm system-theoretic foundation. We also show that facilitating the transformation from non-modular to modular and hierarchical structures confers a major benefit: existing non-modular models can be accommodated by simply wrapping them in a DEVS-compliant format. DEVS theory thereby simplifies model maintenance, integration, and extension, promoting interoperability and reuse. Additionally, we demonstrate how DEVS universality and uniqueness guarantee that any system with discrete event interfaces can be structurally represented with the DEVS formalism, ensuring consistency across heterogeneous platforms. We propose that these mechanisms collectively can streamline simulator design and implementation to advance simulation interoperability. Finally, we conclude by discussing how DEVS concepts apply to the Department of Defense's Modular Open Systems Approach (MOSA) to the deployment of software systems. We propose that DEVS-based development of modeling and simulation architecture provides a rigorous, formal basis to uniformly and efficiently integrate, execute, and manage diverse software systems, thereby enhancing interoperability, scalability, and maintainability across Department of Defense (DoD) software initiatives.
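For readers new to the formalism, an atomic DEVS model pairs a state with time-advance, internal/external transition, and output functions. The minimal Python sketch below is our illustration, not code from the article (and it omits the elapsed-time argument of the external transition for brevity); composing such models with others, atomic or coupled, is what closure under coupling guarantees stays within DEVS:

```python
INFINITY = float("inf")

class Processor:
    """Atomic DEVS model: accepts a job, outputs it after a fixed delay."""

    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.job = None                      # state: current job or None

    def time_advance(self):                  # ta(s)
        return self.service_time if self.job is not None else INFINITY

    def external_transition(self, job):      # delta_ext (elapsed time omitted)
        if self.job is None:
            self.job = job                   # a busy processor ignores arrivals

    def internal_transition(self):           # delta_int: service completes
        self.job = None

    def output(self):                        # lambda: emitted before delta_int
        return self.job
```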
Article
Computer Science and Mathematics
Software

Philip Christian Zuniga

,

Rose Ann Zuniga

,

Marie Jo-anne Mendoza

,

Ada Angeli Cariaga

,

Prometheus Lazo

,

Czaezarina Calimbahin

,

Kristin Chloe Balbas

,

Raymond Francis Sarmiento

Abstract: Background: The Philippines reported its initial confirmed case of COVID-19 on January 30, 2020. In response, the government implemented a science-based, multi-agency governance framework led by the Inter-Agency Task Force for the Management of Emerging Infectious Diseases (IATF). A crucial element of this response was data-driven decision-making, supported by digital systems across surveillance, laboratories, facilities, and local government units (LGUs). Despite daily situation reports from the Department of Health (DOH), retrospective analysis revealed substantial underreporting during the early stages of the pandemic (e.g., 178 versus 1,584 total cases on March 17, 2020), indicating reporting delays and systemic fragmentation. The IATF established the Sub-Technical Working Group on Information and Communications Technology (ICT). Within this group, Workstream 4 (WS4) focused on End-to-End Data Integration to improve interoperability at the national level, supported by the Standards and Interoperability Lab–Asia (SIL-Asia). Objective: This paper documents and assesses an interoperability strategy that (i) unified diverse COVID-19 information systems using HL7 FHIR, (ii) tracked integration progress with a scorecard aligned to a maturity framework, and (iii) evaluated the impact on the timeliness of case confirmation from March 2020 to March 2021. Methods: WS4 used a combined monitoring approach that blends a simple scorecard with a LISI-type maturity pathway. Interoperability was measured as progress across six checkable points built into developer workflows: (1) API documentation provided; (2) data dictionaries mapped; (3) ETL templates supplied; (4) system changes made; (5) testing of integration; and (6) actual technical integration (successful data exchange based on agreed profiles). Each point was scored 0 (not started), 1 (in progress), or 2 (finished). Scores for each system pair were added, averaged across pairs, and scaled to a 0–10 score for the ecosystem. Technical interventions included a national HAPI FHIR sandbox for conformance testing, a CSV-to-FHIR converter to connect spreadsheet-based data into FHIR workflows (e.g., to COVID Kaya), and a locally tailored Philippine COVID-19 HL7 FHIR Implementation Guide (IG) limited to essential resources and fields mapped to the DOH Minimum Data Set. Organizational interventions comprised peer-to-peer technical clinics, targeted consultations, and capacity-building led by SIL-Asia. Impact was evaluated by analyzing DOH timelines for (a) symptom onset to specimen collection, (b) specimen collection to lab result release, and (c) result release to official case confirmation. Results: At baseline (June 2020), the ecosystem interoperability score was 3.0, reflecting early-stage activity focused on documentation and initial mappings. After staged interventions and ongoing monitoring, the score increased to 9.0 by October 2020, with nearly all priority systems exchanging data per the IG. Timeliness greatly improved during the same period. The average time from symptom onset to official confirmation dropped from 44 days (June 2020) to 6 days (October 2020), an 80% decrease. The most significant change was in the interval between lab result release and official confirmation, which fell from 22 days to 3 days (87% decrease), reflecting progress in data management, reconciliation, and automated exchanges across integrated systems. Improvements in onset-to-collection (~13 days) and collection-to-result (~9 days) were supported by increased laboratory capacity and better workflows. Discussion: The Philippine experience demonstrates that large-scale interoperability during a crisis needs both governance and technical strategies. A dedicated integration workstream (WS4), backed by clear mandates and escalation paths, facilitated quick resolution of issues and focused on integration. A "minimum, then mature" approach, using a limited HL7 FHIR IG and the DOH Minimum Data Set, lowered entry barriers, while the scorecard showed incremental improvements before full production integration. Tools that accommodated diversity (e.g., spreadsheet bridges for LGUs and GIDAs) ensured fairness in participation and complete surveillance data. Interoperability labs (SIL-Asia) served as ongoing technical and knowledge hubs, offering sandboxes, validators, reference adapters, and capacity-building to reduce risks and speed up integration. Limitations: Gains in timeliness result not only from interoperability but also from process learning and system capacity expansions such as laboratory scale-up. The framework focuses on verifiable technical milestones; future efforts could incorporate routine data quality metrics (e.g., completeness, validity, code-set adherence) and outcome analytics (e.g., impacts on contact tracing effectiveness). Conclusions: A structured, checkpoint-based scorecard aligned with a maturity pathway, supported by targeted tools and dedicated governance, accelerated interoperability during a public health emergency and translated into measurable improvements in reporting timeliness. Institutionalizing these capabilities, such as standing IGs, national sandboxes, adapters for low-resource settings, and interoperability labs, will strengthen preparedness and resilience for future pandemics.
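The scorecard arithmetic described in the Methods is simple enough to state exactly. A minimal sketch (ours, mirroring the description rather than WS4's actual tooling): each of the six checkpoints per system pair scores 0-2, pair totals are averaged across pairs, and the average is scaled to 0-10:

```python
def ecosystem_score(pair_checkpoints):
    """pair_checkpoints: one 6-element list per system pair, each checkpoint
    scored 0 (not started), 1 (in progress), or 2 (finished).
    Returns the ecosystem interoperability score on a 0-10 scale."""
    pair_totals = [sum(points) for points in pair_checkpoints]  # max 12 per pair
    average = sum(pair_totals) / len(pair_totals)
    return 10.0 * average / 12.0

# Example: three system pairs at different stages of integration.
pairs = [
    [2, 2, 2, 2, 2, 2],   # fully integrated
    [2, 2, 1, 1, 0, 0],   # mappings done, system changes in progress
    [1, 0, 0, 0, 0, 0],   # documentation only
]
print(round(ecosystem_score(pairs), 1))  # 5.3
```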
Article
Computer Science and Mathematics
Software

Sara BourBour

,

Mohammad Reza Besharati

Abstract: Digital transformation, compliance with requirements and regulations, smart cyber-security, agility, data and information mesh, integration with convergent technologies, skill development, and adaptation to socio-technical dynamics: everyone benefits from an "adequately and sufficiently" sophisticated and complex platform of data-driven wisdom and its interaction with human experts. The realization of all these goals and needs requires a good and innovative theory, method, framework, and solution, and generally a good and innovative paradigm for enterprise architecture in the coming years, which seems to be slowly being experienced, evolving, and emerging. With such an approach, this paper presents a proposed conceptual architecture for the problem of an "integrated and intelligent government financial management information system (FMIS)", from the perspective of enterprise architecture and hybrid wisdom. This conceptual architecture establishes a dynamic and adjustable balance between centralization and distribution, and with the help of the combination of computational, data-driven and human wisdom, it becomes possible to improve the effectiveness of government resources, operational transparency, program adherence, and operational agility. It facilitates dynamic adaptability, in-depth reporting, support for analytical intelligence, and support for resolving budget discrepancies and disharmonies. Achieving the wisdom of cyber-human thinking in a systematic way in the field of FMIS would be one of the most distinctive achievements of such a conceptual architecture.
Article
Computer Science and Mathematics
Software

Lukas Beierlieb

,

Alexander Schmitz

,

Anas Karazon

,

Artur Leinweber

,

Christian Dietrich

Abstract: Virtual Machine Introspection (VMI) is a powerful technology used to detect and analyze malicious software inside Virtual Machines (VMs) from the outside. Asynchronously accessing the VM's memory can be insufficient for efficiently monitoring what is happening inside a VM. Active VMI introduces breakpoints to intercept VM execution at relevant points. Especially for frequently visited breakpoints, and even more so for production systems, it is crucial to keep their performance overhead as low as possible. In this paper, we present an empirical study that compares the performance of four VMI breakpoint implementation variants from two VMI applications (DRAKVUF, SmartVMI): EPT switching (SLAT view switching) with and without fast single-stepping acceleration, instruction repair, and instruction emulation. The study uses the Xen hypervisor on 20 Intel Core i processors ranging from the 4th to the 13th generation. Regarding the time required to process a breakpoint hit, we found on all platforms: instruction repair > EPT switching > EPT switching with fast single-step > instruction emulation. More modern processors such as the Intel Core i7 12700H and Intel Core i9 13900HX achieved median breakpoint processing times as low as 15 μs.
Article
Computer Science and Mathematics
Software

Vania Linette Méndez Morales

,

José Manuel Gómez Zea

,

José Ángel Jesús Magaña

,

Teresa De Jesús Javier Baeza

,

Alejandro Hernández Cadena

,

Jonathan de la Cruz Álvarez

Abstract: Artificial Intelligence (AI) plays a key role in modern software development, significantly transforming how developers design, write, test, and maintain their code. Currently, programmers at various levels have integrated AI-based tools into different stages of the software development life cycle (SDLC), from code generation to deployment. This study analyzes the impact of these technologies on professional practice, identifies the most used tools, and proposes best practices for the responsible adoption of AI, aiming to optimize its implementation efficiently and ethically. As part of this study, a methodological artifact was developed to guide the structured formulation of prompts, functioning as a model to enhance the precision and utility of AI-generated outputs. This artifact was validated through three proof-of-concept use cases (SQL queries, backend development, and deployment in AWS), demonstrating its potential as a knowledge base for teams seeking to incorporate AI tools systematically into their workflows.
Article
Computer Science and Mathematics
Software

Osama A. Marzouk

Abstract: The computational fluid dynamics (CFD) software OpenFOAM is used to develop a magnetohydrodynamic (MHD) solver, which we applied to the Sakhalin pulsed solid-propellant plasma (SPP) power system, with a supersonic divergent channel. The proposed solver corresponds to the low magnetic Reynolds number (Rem) regime, and it utilizes the scalar electric potential as the working variable for handling the electric aspect of the problem. After validation and verification, we use the solver to explore various aspects of the thermal, fluidic, and electric features of the problem. These include the pressure, temperature, density, velocity, Mach number, electric-current density, specific turbulent kinetic energy, and turbulence dissipation. Despite earlier studies about this Sakhalin pulsed magnetohydrodynamic generator (PMHDG), the current study provides novel comprehensive aspects and presents a useful MHD solver.
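For context, the scalar-potential formulation at low magnetic Reynolds number is standard; assuming the solver follows it (the abstract does not spell out the equations), generalized Ohm's law for a moving conductor and charge conservation give:

```latex
% Generalized Ohm's law at low Rem (standard form; assumed here):
\mathbf{J} = \sigma\left(-\nabla\varphi + \mathbf{u}\times\mathbf{B}\right)

% Charge conservation yields a Poisson-type equation for the
% scalar electric potential \varphi:
\nabla\cdot\mathbf{J} = 0
\quad\Longrightarrow\quad
\nabla\cdot\left(\sigma\,\nabla\varphi\right)
= \nabla\cdot\left(\sigma\,\mathbf{u}\times\mathbf{B}\right)
```

Solving the second equation for the potential yields the current density, whose Lorentz force J × B and Joule heating then couple back into the momentum and energy equations.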
Article
Computer Science and Mathematics
Software

Caiwei Wu

,

Fengrui Zhang

,

Huangyin Chen

,

Junlin Zhu

Abstract: Addressing the demand for reliable log storage during the operation of AR/VR and smart-glasses devices, this paper designs and implements a persistent logging system based on AOSP logd-persist. The system adopts a multi-threaded producer-consumer model and introduces a file rotation mechanism, a write-throttling algorithm, and a cache optimization strategy, which significantly reduce power consumption while ensuring system stability. Test results on the Meta Smart Glasses platform show that the architecture saves more than 30 minutes of standby power consumption, improves system debuggability, and extends flash memory lifetime. The results show that the logging framework has good engineering utility and platform portability.
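File rotation and write throttling of the kind described can be sketched briefly. This is an illustrative single-threaded Python sketch (the actual system is a multi-threaded service inside logd), combining size-based rotation with a token-bucket throttle; file names and limits are assumptions:

```python
import os
import time

class PersistentLogger:
    """Sketch of size-based rotation plus token-bucket write throttling."""

    def __init__(self, path="persist.log", max_bytes=1 << 20, keep=3,
                 bytes_per_sec=64 * 1024):
        self.path, self.max_bytes, self.keep = path, max_bytes, keep
        self.capacity = bytes_per_sec          # bucket holds 1 s of budget
        self.tokens = float(bytes_per_sec)
        self.rate = float(bytes_per_sec)
        self.last_refill = time.monotonic()

    def _throttle(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens < nbytes:               # over budget: drop the record,
            return False                       # trading completeness for power
        self.tokens -= nbytes
        return True

    def _rotate(self):
        for i in range(self.keep - 1, 0, -1):  # persist.log.2 -> .3, .1 -> .2
            src = f"{self.path}.{i}"
            if os.path.exists(src):
                os.replace(src, f"{self.path}.{i + 1}")
        os.replace(self.path, f"{self.path}.1")

    def write(self, record):
        data = (record.rstrip("\n") + "\n").encode()
        if not self._throttle(len(data)):
            return
        if (os.path.exists(self.path)
                and os.path.getsize(self.path) + len(data) > self.max_bytes):
            self._rotate()
        with open(self.path, "ab") as f:
            f.write(data)
```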
Article
Computer Science and Mathematics
Software

Caiwei Wu

,

Huangyin Chen

,

Junlin Zhu

,

Yao Yao

Abstract: This paper proposes a unified fault reporting system architecture based on the Android Open Source Project (AOSP) for multi-platform wearable devices (e.g., smart glasses and smart watches) to realize efficient fault capture, data integration, and uploading. The system integrates the diagnostic interfaces of multiple device teams and significantly simplifies system deployment and maintenance burden by unifying the SDK packaging and uploading protocols. Experimental results show that the system can stably support the diagnostic needs of millions of terminals, with a 31.2% improvement in error capture rate and a 54.8% improvement in cross-platform deployment efficiency. This study provides a reusable and scalable solution for system health diagnosis of large-scale wearable platforms.
Article
Computer Science and Mathematics
Software

Caiwei Wu

,

Junlin Zhu

,

Yao Yao

Abstract: The logging system carries the task of collecting and analyzing key operational status and user interaction data on augmented reality (AR) platforms. In this paper, we start from performance profiling to locate the performance bottlenecks of the AR device logging service in high-frequency I/O scenarios, and propose an asynchronous write buffer mechanism, a fast-path optimization scheme, and a flash-friendly compression strategy. In multiple rounds of real-world tests, the system maintains a 15.4% CPU occupancy reduction and a 41.7% write latency reduction under log write pressure. The study shows that performance optimization of the logging system, as the underlying guarantee of system maintainability, is of key significance to platform stability.
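An asynchronous write buffer of the kind described decouples log producers from flash I/O. A minimal sketch under assumed parameters (batch size, flush interval; zlib stands in for whatever flash-friendly codec the system actually uses, and a real service would also flush on shutdown):

```python
import queue
import threading
import zlib

class AsyncLogWriter:
    """Producers enqueue records; one background thread batches,
    compresses, and writes them, keeping I/O off the hot path."""

    def __init__(self, path, batch_size=64, flush_interval=0.5):
        self.q = queue.Queue()
        self.path = path
        self.batch_size, self.flush_interval = batch_size, flush_interval
        threading.Thread(target=self._drain, daemon=True).start()

    def log(self, record):                     # fast path: just an enqueue
        self.q.put(record)

    def _drain(self):
        batch = []
        while True:
            try:
                batch.append(self.q.get(timeout=self.flush_interval))
            except queue.Empty:
                pass                           # timed out; flush what we have
            if batch and (len(batch) >= self.batch_size or self.q.empty()):
                blob = zlib.compress("\n".join(batch).encode())
                with open(self.path, "ab") as f:          # one write per batch,
                    f.write(len(blob).to_bytes(4, "big") + blob)  # length-prefixed
                batch.clear()

writer = AsyncLogWriter("ar_device.log")
for i in range(200):
    writer.log(f"frame={i} pose_ok=1")
```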
Article
Computer Science and Mathematics
Software

Caiwei Wu

,

Huangyin Chen

Abstract: Amid the rapid evolution of AR/VR terminals, parallel development of system services by multiple teams has led to growing deployment complexity and maintenance costs. This paper proposes a converged system service architecture that unifies the design and encapsulation of dispersed service modules, such as bug reporting and health monitoring, across multiple platforms, realizing the portability and shareability of the service layer. Through verification in an actual project, the architecture reduces service maintenance staffing from six to two persons and shortens deployment time by 38.7%. This study demonstrates the optimization potential of system software architecture in multi-device collaborative development and has good promotion value.
Article
Computer Science and Mathematics
Software

Tanvir Ahmed

,

Samiul Hasan

,

Ahammed Shorif

,

Ansarul Hoque

,

Shadman Sajid

,

Md. Badiuzzaman Biplob

Abstract: The integration of generative AI (Gen-AI) agents within business settings presents unique security challenges that differ from those of traditional systems. These agents extend beyond basic LLMs, exhibiting the ability to reason, retain information, and operate autonomously. This work introduces a comprehensive threat model specifically adapted for Gen-AI agents, highlighting the new challenges associated with their autonomy, persistent memory access, advanced reasoning, and integration with tools. The study identifies nine significant threats, grouped into five crucial categories: operational execution vulnerabilities, trust boundary violations, vulnerabilities within the cognitive architecture, temporal persistence threats, and governance circumvention. Real-world issues, such as delayed exploitability, cross-system spread, lateral movement, and subtle goal misalignment, are difficult to spot using current frameworks and conventional methods. To address these challenges, this study proposes two complementary frameworks. The Advanced Threat Framework for Autonomous AI Agents (ATFAA) categorizes threats specific to agents, while SHIELD offers practical threat mitigation strategies to reduce organizational exposure. While building on earlier AI security and LLM research, this study focuses on what distinguishes these agents and underscores the significance of these features. Ultimately, this study argues for a new security perspective on Gen-AI agents: without reassessing our threat models and defenses to incorporate their specific architectures and behaviors, we risk turning a powerful new tool into a substantial liability for enterprises.
Review
Computer Science and Mathematics
Software

Shuvo Chakraborty

,

Mehedi Hassan

,

Habibullah Mohammad Masum

,

Md Rakibul Islam Fahim

,

Sayed Mahmood Twki

,

Md. Badiuzzaman Biplob

Abstract: The increasing complexity of software systems and the escalating threat of cyberattacks have necessitated the development of advanced, automated tools for ensuring software security. Large Language Models (LLMs) have recently emerged as a transformative technology with the potential to revolutionize vulnerability detection and automated program repair. This review paper synthesizes the current state of research on the application of LLMs in software security, drawing from a comprehensive analysis of recent scholarly articles and empirical studies. We provide a structured overview of the key methodologies and techniques being employed, including the use of different LLM architectures such as encoder-only, decoder-only, and encoder-decoder models. A central focus of this review is the critical role of domain-specific adaptation through fine-tuning, sophisticated prompt engineering strategies like few-shot and chain-of-thought prompting, and the provision of rich contextual information to enhance the performance of these models. Our analysis reveals a consensus on the significant potential of LLMs to accurately identify and remediate a wide range of security vulnerabilities. However, we also highlight the persistent challenges that must be addressed for their effective real-world deployment. These include high false positive rates, the "black-box" nature of many models which hinders interpretability and trust, and the inherent risk of models introducing new vulnerabilities. We conclude by discussing the most promising future research directions, such as the development of hybrid systems that integrate LLMs with traditional static and dynamic analysis tools, the exploration of multi-agent LLM systems for more robust analysis, and the critical need for improved model explainability and developer-in-the-loop frameworks. This review serves as a comprehensive resource for researchers and practitioners seeking to understand the current capabilities and future potential of LLMs in bolstering software security.
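As a concrete, hypothetical illustration of the prompting strategies this review discusses, a few-shot chain-of-thought prompt for vulnerability detection can be assembled as below; the exemplar, labels, and the `complete` callable standing in for any LLM API are all assumptions of this sketch, not artifacts from the surveyed papers:

```python
# Hypothetical sketch of few-shot chain-of-thought prompting for
# vulnerability detection; `complete` stands in for any LLM client.

FEW_SHOT = '''You are a security reviewer. Reason step by step, then answer
VULNERABLE or SAFE, with the CWE if vulnerable.

Example:
Code:
    char buf[8];
    strcpy(buf, user_input);
Reasoning: strcpy copies user_input without checking its length against
the 8-byte buffer, so longer input overflows the stack buffer.
Answer: VULNERABLE (CWE-121: Stack-based Buffer Overflow)
'''

def build_prompt(code_snippet: str) -> str:
    """Append the code under review to the few-shot exemplar."""
    return f"{FEW_SHOT}\nCode:\n{code_snippet}\nReasoning:"

def review(code_snippet: str, complete) -> str:
    """`complete` is any text-completion callable: prompt -> model output."""
    return complete(build_prompt(code_snippet))
```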
