Computer Science and Mathematics

Review
Computer Science and Mathematics
Software

Keston G. Lindsay

Abstract: Repeated measures ANOVA is the statistical method for comparing means of the same sample measured at two or more time points or in two or more different contexts. It may also be used to compare means across two or more related groups. This paper serves as a tutorial for repeated measures ANOVA using R, introducing readers to parametric, nonparametric, and robust one-way repeated measures ANOVA with the rstatix, afex, WRS2, and ARTool packages.
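The tutorial itself works in R with the packages named above. Purely as a hedged cross-language illustration of the same one-way repeated measures design, the sketch below uses Python's statsmodels instead; the data frame, column names, and values are invented for the example.

```python
# Minimal parallel sketch in Python (the paper itself uses R packages such as
# rstatix and afex); the long-format data below is hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One score per subject per time point (4 subjects, 3 repeated measurements).
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["t1", "t2", "t3"] * 4,
    "score":   [5.1, 5.9, 6.4, 4.8, 5.5, 6.1, 5.3, 6.0, 6.6, 4.9, 5.7, 6.2],
})

# One-way repeated measures ANOVA: same subjects measured at three time points.
result = AnovaRM(data, depvar="score", subject="subject", within=["time"]).fit()
print(result.anova_table)
```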

Concept Paper
Computer Science and Mathematics
Software

Vijay Narayan Raikar, Tarun R, Suhas Shetti, Nandini C, K Y Hemalata, Jayanthi P N

Abstract: The philanthropic sector faces persistent challenges in transparency and accountability that fundamentally undermine donor trust. While blockchain technology offers an immutable transaction ledger, it cannot inherently verify whether donated funds achieve their intended real-world impact. This paper presents a novel decentralized application that addresses this critical gap through the integration of Ethereum Smart Contracts with an AI-driven auditing pipeline. The system introduces a milestone-based fund release mechanism requiring NGOs to submit vendor invoices as documentary proof of expenditure. These documents undergo Optical Character Recognition followed by credibility assessment using Google Gemini, a Large Language Model that performs semantic analysis of financial documents. A two-thirds multi-signature consensus by platform administrators authorizes direct on-chain payments to verified vendors. The primary innovation lies in bridging the oracle problem through automated AI verification of off-chain documentary evidence controlling on-chain fund disbursement. Experimental validation demonstrates that this hybrid approach effectively automates credibility verification while ensuring every fund release is backed by verifiable evidence, achieving significant performance improvements in fraud detection and accountability enforcement compared to purely manual or purely blockchain-based systems.
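The mechanism described above runs as an Ethereum smart contract with an off-chain AI pipeline. The following Python sketch only illustrates the two-thirds approval logic from the abstract; all names, data structures, and thresholds are hypothetical placeholders, not the paper's contract.

```python
# Illustrative sketch of a two-thirds multi-signature approval check for a
# milestone release; hypothetical stand-in, not the paper's Solidity contract.
from dataclasses import dataclass, field

ADMINS = {"admin_a", "admin_b", "admin_c"}      # hypothetical platform admins
REQUIRED = (2 * len(ADMINS) + 2) // 3           # ceil(2/3 * n) = 2 of 3

@dataclass
class Milestone:
    vendor: str                                 # payee named on the submitted invoice
    amount_wei: int
    ai_verified: bool = False                   # set by the off-chain OCR + LLM audit
    approvals: set = field(default_factory=set)

    def approve(self, admin: str) -> None:
        if admin not in ADMINS:
            raise PermissionError(f"{admin} is not a platform administrator")
        self.approvals.add(admin)

    def releasable(self) -> bool:
        # Funds move only when the AI credibility check passed AND a 2/3
        # quorum of administrators has signed off.
        return self.ai_verified and len(self.approvals) >= REQUIRED

m = Milestone(vendor="vendor_wallet_placeholder", amount_wei=10**18, ai_verified=True)
m.approve("admin_a"); m.approve("admin_b")
print(m.releasable())   # True once the quorum is reached
```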

Article
Computer Science and Mathematics
Software

Leon Sterling, Ben Golding, Hanying Li, Yingyi Luan, Aoxiang Xiao, Xinyi Yuan, Qingying Lyu, Peter Harding

Abstract: Software engineering activities have shifted from building new systems to maintaining code bases. Yet teaching software engineering rarely echoes that reality. It is easier to teach students to develop software from scratch rather than guide them to maintain an existing project by suggesting and implementing modifications and enhancements. We believe that the best way to teach maintenance is through practical experience. This paper describes the evolution of an agent modelling tool called AMMBER. It started as a software engineering team project eight years ago. Tens of students have been involved in the subsequent years in maintaining and extending the software through projects in capstone units, internships and casual employment. The paper authors are the current team maintaining the software. The maintenance activities have included corrective maintenance, adaptive maintenance and perfective maintenance, and we describe lessons learned through our failures and successes. We advocate exposing students to maintenance activities on active code bases and share lessons learned about increasing the capability of software engineering students to usefully perform maintenance activities.

Article
Computer Science and Mathematics
Software

Iosif Iulian Petrila

Abstract: The augmented assembly language @Asm is proposed to transcend the fragmentation of architecture-specific dialects, to provide a unified framework for diverse processing paradigms as a universal assembly language and to function as a self-compiling bootstrap instrument adaptable to any processor system. The language augmentations include: flexible machine-language descriptions, general memory and data management directives, custom lexical identification through regular expressions, parsing facilities, generalized macroprocessing, flexible assembly control instructions, customizable encoding and code generation features, and compiler-oriented abstraction mechanisms at the language level. The native abstraction augmentations enable expressive and concise high-level descriptions within assembly language for any present, emerging, or future system.

Article
Computer Science and Mathematics
Software

Anthony Savidis, Yannis Valsamakis, Theodoros Chalkidis, Stephanos Soultatos

Abstract: This paper presents a compositional Artificial Intelligence (AI) service pipeline for generating interactive structured data from raw scanned images. Unlike conventional document digitization approaches, which primarily emphasize optical character recognition (OCR) or static metadata extraction, the proposed framework adopts a modular architecture that decomposes the problem into specialized AI services and orchestrates them to achieve higher-level functionality. The pipeline integrates core services including OCR for text conversion, image recognition for embedded visual content, interactive form modelling for structured data and NLP for extraction of structured representations from raw text. The form models incorporate various rules like value-type filtering and domain-aware constraints, thereby enabling normalization and disambiguation across heterogeneous document sources. A key contribution is the interactive browser linking extracted structures back to the original scanned images, thus facilitating bidirectional navigation between unstructured input and structured content. This functionality enhances interpretability, supports error analysis, and preserves the provenance of extracted information. Furthermore, the compositional design allows each service to be independently optimized, replaced, or extended, ensuring scalability and adaptability to diverse application domains such as press archives, enterprise repositories and government documents.
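To make the compositional idea concrete, here is a rough Python sketch of a pipeline in which each AI capability is a separately replaceable stage; all class and function names are invented placeholders, not the authors' API.

```python
# Rough sketch of a compositional service pipeline: each stage enriches a shared
# context and can be swapped or extended independently. Placeholder logic only.
from typing import Callable, Any

Stage = Callable[[dict], dict]   # each stage reads and enriches a shared context

def ocr_stage(ctx: dict) -> dict:
    ctx["text"] = f"<text extracted from {ctx['scan_path']}>"   # placeholder OCR
    return ctx

def image_recognition_stage(ctx: dict) -> dict:
    ctx["embedded_images"] = []                                 # placeholder detector
    return ctx

def nlp_extraction_stage(ctx: dict) -> dict:
    # Placeholder for NLP extraction of structured fields from raw text,
    # constrained by a form model (value types, domain-aware rules).
    ctx["fields"] = {"date": None, "title": None}
    return ctx

def run_pipeline(stages: list[Stage], scan_path: str) -> dict:
    ctx: dict[str, Any] = {"scan_path": scan_path}
    for stage in stages:            # stages can be reordered, swapped, or extended
        ctx = stage(ctx)
    return ctx

result = run_pipeline([ocr_stage, image_recognition_stage, nlp_extraction_stage],
                      "archive/page_001.png")
print(result["fields"])
```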

Article
Computer Science and Mathematics
Software

Huiwen Han

Abstract: Architecture viewpoints play a central role as an abstraction mechanism for structuring, communicating, and reviewing software architectures by separating concerns and addressing diverse stakeholder needs [1] [8] [9] [11]. However, in both industrial practice and academic research, viewpoint definitions are often fragmented, inconsistently expressed, or narrowly scoped, which limits comparability, reuse, and long-term architectural evolution [10] [14] [15]. Existing architecture frameworks and standards, including TOGAF, C4, and ISO/IEC/IEEE 42010, either emphasize processes and notations or deliberately avoid prescribing concrete viewpoint sets [1] [4] [33]. While this flexibility supports broad applicability, it also leaves practitioners without a reusable reference taxonomy that systematically consolidates architectural concerns encountered in modern software-intensive systems [8] [10] [11]. This paper introduces PACT (Practical Architecture Viewpoint Taxonomy), a reference-level taxonomy of architecture viewpoints that consolidates recurring architectural concerns observed across standards, established viewpoint models, and industrial practice. PACT defines 52 viewpoints spanning business, application, integration, data, security, infrastructure, governance, and operations concerns. Each viewpoint is specified using a unified definition template that captures its primary concern, key questions, stakeholders, abstraction focus, and scope, enabling systematic comparison, selection, and reuse [1] [11]. PACT is explicitly aligned with the conceptual model of ISO/IEC/IEEE 42010, while remaining method-, notation-, and tool-independent, enabling reuse across heterogeneous architectural practices without imposing process or documentation lock-in [1]. It is intended as a reference taxonomy rather than a prescriptive framework, supporting enterprise architecture governance and system design practices [6] [7], as well as academic analysis of architectural knowledge and viewpoints [15] [18]. A case-based evaluation and industrial examples illustrate how PACT supports more systematic concern coverage, improved clarity, and structured architecture reviews across heterogeneous systems. The taxonomy is designed to be extensible, providing a stable reference for future research and evolving architectural practices [10] [32].
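A minimal sketch of the unified definition template described above, expressed as a Python record; the field set follows the abstract, while the example viewpoint content is invented and is not an actual PACT definition.

```python
# Sketch of the viewpoint definition template (primary concern, key questions,
# stakeholders, abstraction focus, scope); example content is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Viewpoint:
    name: str
    primary_concern: str
    key_questions: tuple[str, ...]
    stakeholders: tuple[str, ...]
    abstraction_focus: str
    scope: str

security_vp = Viewpoint(
    name="Security",
    primary_concern="Protection of data and services against misuse",
    key_questions=("How are identities authenticated?",
                   "Where are trust boundaries crossed?"),
    stakeholders=("security architect", "operations", "compliance"),
    abstraction_focus="logical and deployment elements with trust boundaries",
    scope="system and its external interfaces",
)
print(security_vp.name, "-", security_vp.primary_concern)
```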

Article
Computer Science and Mathematics
Software

Junio Cesar Ferreira, Júlio C. Estrella, Alexandre C. B. Delbem, Cláudio F. M. Toledo

Abstract: Wireless Sensor Networks (WSNs) have diverse applications in urban, industrial and environmental monitoring. However, designing this type of network is highly complex due to conflicting objectives such as latency, energy consumption, connectivity and coverage. This article addresses the need for structured and reproducible approaches to developing WSNs. We propose a modular and scalable system designed to integrate simulators and evolutionary algorithms for multi-objective optimization in WSNs. We present a formalized process and supporting architecture that combines containerized simulations, a reactive data management layer, and a flexible optimization engine capable of handling diverse objective formulations and search strategies. The proposed environment enables distributed, simulation-based optimization experiments with automated orchestration, persistent metadata and versioned execution artifacts. To demonstrate feasibility, we present a prototype implementation that incorporates synthetic test modules and real WSN simulations using a classical sensor-network simulator. The results illustrate the potential of the proposed system to support reproducible and extensible research in the design and optimization of WSNs.
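As a toy illustration of the multi-objective core (not the paper's optimization engine), the sketch below scores candidate WSN layouts on conflicting objectives and keeps the Pareto-optimal ones; the objective values are stand-ins for what the containerized simulations would produce.

```python
# Toy multi-objective comparison: conflicting objectives (latency, energy,
# coverage) compared by Pareto dominance. Values are invented placeholders.
from dataclasses import dataclass

@dataclass
class Candidate:
    latency_ms: float       # lower is better
    energy_mj: float        # lower is better
    coverage: float         # higher is better

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if a is no worse than b on every objective and strictly better on one."""
    no_worse = (a.latency_ms <= b.latency_ms and a.energy_mj <= b.energy_mj
                and a.coverage >= b.coverage)
    better = (a.latency_ms < b.latency_ms or a.energy_mj < b.energy_mj
              or a.coverage > b.coverage)
    return no_worse and better

def pareto_front(pop: list[Candidate]) -> list[Candidate]:
    return [c for c in pop if not any(dominates(o, c) for o in pop if o is not c)]

pop = [Candidate(40, 120, 0.91), Candidate(55, 90, 0.95), Candidate(60, 130, 0.90)]
print(pareto_front(pop))   # the third candidate is dominated by the first
```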

Article
Computer Science and Mathematics
Software

Robin Nunkesser

Abstract: Mobile Software Engineering has emerged as a distinct subfield, raising questions about the transferability of its research findings to general Software Engineering. This paper addresses the challenge of evaluating the generalizability of mobile-specific research, using Green Computing as a representative case. We propose a systematic method that combines a mapping study to identify potentially overlooked mobile-specific papers with a focused literature review to assess their broader relevance. Applying this approach, we find that several mobile-specific studies offer insights applicable beyond their original context, particularly in areas such as energy efficiency guidelines, measurement, and trade-offs. The results demonstrate that systematic identification and evaluation can reveal valuable contributions for the wider Software Engineering community. The proposed method provides a structured framework for future research to assess the generalizability of findings from specialized domains, fostering greater integration and knowledge transfer across Software Engineering disciplines.

Article
Computer Science and Mathematics
Software

Michael Dosis, Antonios Pliatsios

Abstract: This paper presents Sem4EDA, an ontology-driven and rule-based framework for automated fault diagnosis and energy-aware optimization in Electronic Design Automation (EDA) and Internet of Things (IoT) environments. The escalating complexity of modern hardware systems, particularly within IoT and embedded domains, presents formidable challenges for traditional EDA methodologies. While EDA tools excel at design and simulation, they often operate as siloed applications, lacking the semantic context necessary for intelligent fault diagnosis and system-level optimization. Sem4EDA addresses this gap by providing a comprehensive ontological framework developed in OWL 2, creating a unified, machine-interpretable model of hardware components, EDA design processes, fault modalities, and IoT operational contexts. We present a rule-based reasoning system implemented through SPARQL queries, which operates atop this knowledge base to automate the detection of complex faults such as timing violations, power inefficiencies, and thermal issues. A detailed case study, conducted via a large-scale trace-driven co-simulation of a smart city environment, demonstrates the framework’s practical efficacy: by analyzing simulated temperature sensor telemetry and Field-Programmable Gate Array (FPGA) configurations, Sem4EDA identified specific energy inefficiencies and overheating risks, leading to actionable optimization strategies that resulted in a 23.7% reduction in power consumption and 15.6% decrease in operating temperature for the modeled sensor cluster. This work establishes a foundational step towards more autonomous, resilient, and semantically-aware hardware design and management systems.
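To illustrate the rule-as-SPARQL idea in a self-contained way, here is a small Python sketch using rdflib; the ontology terms (ex:Sensor, ex:temperature, ex:maxRatedTemperature) are invented placeholders and not the actual Sem4EDA vocabulary.

```python
# Sketch of a rule-based overheating check expressed as a SPARQL query over a
# tiny in-memory graph; vocabulary and values are hypothetical placeholders.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/eda#")
g = Graph()
g.add((EX.sensor42, RDF.type, EX.Sensor))
g.add((EX.sensor42, EX.temperature, Literal(86.0)))
g.add((EX.sensor42, EX.maxRatedTemperature, Literal(70.0)))

overheating_rule = """
PREFIX ex: <http://example.org/eda#>
SELECT ?device ?t WHERE {
    ?device a ex:Sensor ;
            ex:temperature ?t ;
            ex:maxRatedTemperature ?max .
    FILTER (?t > ?max)
}
"""
for device, temp in g.query(overheating_rule):
    print(f"Overheating risk: {device} at {temp} C")   # would trigger an optimization action
```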

Technical Note
Computer Science and Mathematics
Software

Rehnumah Taslim Munmun

Abstract: Aquaculture is a major contributor to Bangladesh’s economy, but farmers still struggle to maintain proper water quality because manual testing is slow, inaccurate, and difficult to manage in rural areas. This project introduces a low-cost, real-time monitoring system using an ESP32-32 N4 with temperature, pH, and TS300B turbidity sensors. The system collects water quality data and sends it to a mobile device, allowing farmers to track pond conditions remotely and receive alerts when values cross safe limits. Field tests show that the system provides reliable readings and helps reduce fish mortality by enabling quick action. This approach offers an affordable and practical solution for small-scale farmers, supporting better farm management and promoting wider technological adoption in the aquaculture sector.
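A minimal sketch of the alerting logic only; the safe ranges below are illustrative placeholders rather than the thresholds used in the field tests, and the real device reads its sensors on the ESP32 firmware rather than through this function.

```python
# Threshold-alert sketch for pond water quality; ranges are hypothetical.
SAFE_RANGES = {
    "temperature_c": (24.0, 32.0),
    "ph":            (6.5, 8.5),
    "turbidity_ntu": (0.0, 50.0),
}

def check_reading(reading: dict[str, float]) -> list[str]:
    alerts = []
    for metric, value in reading.items():
        low, high = SAFE_RANGES[metric]
        if not (low <= value <= high):
            alerts.append(f"{metric}={value} outside safe range [{low}, {high}]")
    return alerts

# Example reading that would push an alert to the farmer's mobile device.
print(check_reading({"temperature_c": 34.2, "ph": 7.1, "turbidity_ntu": 12.0}))
```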

Article
Computer Science and Mathematics
Software

Chibuzor Udokwu

Abstract: Digital product passports (DPPs) record information about a product's lifecycle, circularity, and sustainability. Sustainability data contains claims about carbon footprint, recycled material composition, and ethical sourcing of production materials, among other attributes, and upcoming regulatory directives require companies to disclose this type of information. However, current sustainability reporting practices face challenges such as greenwashing, where companies make incorrect claims that are difficult to verify. There is also the challenge of disclosing sensitive production information when other stakeholders, such as consumers or other economic operators, wish to independently verify sustainability claims. Zero-knowledge proofs (ZKPs) provide a cryptographic system for verifying statements without revealing sensitive information. The goal of this research paper is to explore ZKP cryptography, trust models, and implementation concepts for extending DPP capability in privacy-aware reporting and verification of sustainability claims in products. To achieve this goal, formal representations of sustainability claims are first provided. Then, a data matrix and trust model for proof generation are developed. An interaction sequence is provided to show the components involved in various proof generation and verification scenarios for sustainability claims. Lastly, the paper provides a circuit template for the proof generation of an example claim and a credential structure for their input data validation.
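As a loose illustration of what a formal sustainability claim looks like as a public statement over a private value, the sketch below evaluates the predicate directly in Python; it is not a zero-knowledge proof, and all names and thresholds are hypothetical.

```python
# Plain-Python illustration of a sustainability claim as (attribute, relation,
# threshold); in the ZKP setting the prover shows the predicate holds for a
# committed private value without revealing it. Names are hypothetical.
from dataclasses import dataclass
import operator

RELATIONS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

@dataclass(frozen=True)
class Claim:
    attribute: str      # e.g. recycled material share of the product
    relation: str       # public part of the statement
    threshold: float    # public part of the statement

def satisfies(claim: Claim, private_value: float) -> bool:
    # Here we simply evaluate the predicate; a ZKP circuit would prove this
    # without disclosing private_value to the verifier.
    return RELATIONS[claim.relation](private_value, claim.threshold)

claim = Claim(attribute="recycled_content_pct", relation=">=", threshold=30.0)
print(satisfies(claim, private_value=42.5))   # True, but 42.5 stays private in a real ZKP
```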

Article
Computer Science and Mathematics
Software

Luis Alberto Pfuño Alccahuamani, Anthony Meza Bautista, Hesmeralda Rojas

Abstract: This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve efficiency in this context. The system was designed using a Laravel 10 MVC backend, a responsive Bootstrap 5 interface, and a relational MariaDB/MySQL model optimized with migrations and composite indexes, and incorporated two low-cost integrations: a stateless AI chatbot through the OpenRouter API and asynchronous mobile notifications using the Telegram Bot API managed via Laravel Queues and webhooks. Developed through four Scrum sprints and deployed on an institutional XAMPP environment, the solution was evaluated from January to April 2025 with 100 participants using operational metrics and the QWU usability instrument. Results show a reduction in incident resolution time from 120 to 31 minutes (74.17%), an 85.48% chatbot interaction success rate, a 94.12% notification open rate, and a 99.34% incident resolution rate, alongside an 88% usability score. These findings indicate that a modular, low-cost, and scalable architecture can effectively strengthen digital transformation efforts in the public sector, especially in regions with resource and connectivity constraints.
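The system itself is a Laravel application; as a language-neutral illustration of the Telegram Bot API call that the notification path relies on, here is a minimal Python sketch. The bot token, chat id, and incident fields are placeholders.

```python
# Minimal sketch of sending an incident alert via the Telegram Bot API
# (sendMessage endpoint). In the paper's system this is dispatched from a
# queued Laravel job; token and chat id below are placeholders.
import requests

BOT_TOKEN = "<bot-token>"          # placeholder, obtained from @BotFather
CHAT_ID = "<technician-chat-id>"   # placeholder

def notify_incident(ticket_id: int, summary: str) -> bool:
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    payload = {"chat_id": CHAT_ID,
               "text": f"New incident #{ticket_id}: {summary}"}
    resp = requests.post(url, json=payload, timeout=10)
    return resp.ok

print(notify_incident(1042, "Printer offline at regional office 3"))
```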

Article
Computer Science and Mathematics
Software

Oras Baker, Ricky Lim, Kasthuri Subaramaniam, Sellappan Palaniappan

Abstract: This research investigates secure recommender systems based on federated learning (FL) for educational platforms, where online providers face increasing threats to student data privacy. It develops a system that merges FL with collaborative filtering to generate personalised course recommendations while keeping user data on client devices. Performance was evaluated on data from major platforms, including edX, Coursera, and Udemy, using MSE, R-squared, precision, recall, and F1-score metrics. The evaluation shows that FL preserves user privacy by restricting data aggregation, but at the cost of lower recommendation quality than centralised systems offer. The research establishes two essential findings: FL maintains user privacy in secure educational settings, and the performance reduction caused by limited data is a core challenge for distributed systems. It also makes two primary methodological contributions: data preprocessing methods for handling missing information and a complete evaluation framework for federated recommendation platforms. The results differ from previous studies in demonstrating how model performance deteriorates under federated constraints. Overall, the work advances FL practice in educational technology by studying privacy-accuracy trade-offs and presenting methods to improve federated recommender systems in protected data environments.
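To make the federated setup concrete, here is a minimal FedAvg-style aggregation sketch: raw interaction data stays on each client, and only model parameters are sent and averaged on the server. Array shapes and client data sizes are invented, and this is a generic illustration rather than the paper's implementation.

```python
# FedAvg-style weighted averaging of client parameter vectors.
import numpy as np

def federated_average(client_weights: list[np.ndarray],
                      client_sizes: list[int]) -> np.ndarray:
    """Average client parameters, weighted by each client's local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients with differently sized local interaction datasets.
clients = [np.array([0.2, 0.5, -0.1]),
           np.array([0.3, 0.4,  0.0]),
           np.array([0.1, 0.6, -0.2])]
sizes = [1000, 4000, 500]

global_update = federated_average(clients, sizes)
print(global_update)   # the server never sees the raw user-item interactions
```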

Article
Computer Science and Mathematics
Software

Paulo Serra, Ângela Oliveira, Filipe Fidalgo, Bruno Serra, Tiago Infante, Luís Baião

Abstract: This study explores how artificial intelligence can promote accessibility and inclusiveness in digital culinary environments. Centred on the Receitas +Power platform, the research adopts an exploratory, multidimensional case study design integrating qualitative and quantitative analyses. The investigation addresses three research questions concerning (i) user empowerment beyond recommendation systems, (ii) accessibility best practices across disability types, and (iii) the effectiveness of AI-enabled inclusive solutions. The system was developed following user-centred design principles and WCAG 2.2 standards, combining generative AI modules for recipe creation with accessibility features such as voice interaction and adaptive navigation. The evaluation, conducted with 87 participants, employed the System Usability Scale complemented by thematic qualitative feedback. Results indicate excellent usability (M = 80.6), high reliability (Cronbach’s α = 0.798–0.849), and moderate positive correlations between usability and accessibility dimensions (r = 0.45–0.55). Participants highlighted the platform’s personalisation, clarity, and inclusivity, confirming that accessibility enhances rather than restricts user experience. The findings provide empirical evidence that AI-driven adaptability, when grounded in universal design principles, offers an effective and ethically sound pathway toward digital inclusion. Receitas +Power thus advances the field of inclusive digital gastronomy and presents a replicable framework for human–AI co-creation in accessible web technologies.
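The usability figure above comes from the System Usability Scale, whose standard scoring is shown below; the ten responses in the example are invented and are not study data.

```python
# Standard SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range.
def sus_score(responses: list[int]) -> float:
    """responses: ten 1-5 Likert answers in questionnaire order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 4, 1]))   # 85.0 for this made-up respondent
```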

Review
Computer Science and Mathematics
Software

Omer Khalid, Ammad Ul Haq Farooqi, Muhammad Bilal

Abstract: Agentic Artificial Intelligence (AI) marks a shift from traditional AI systems that simply generate responses to autonomous systems that can independently plan to achieve goals with minimal human intervention. These models do much more than respond to prompts: they can observe, adapt, coordinate with other agents, and even refine their own outputs over time. This literature review draws insights from fifty-one recent empirical studies across various domains to understand how agentic AI is being built and used today. Agentic AI systems appear in healthcare, digital twin architectures, educational platforms, e-commerce applications, cybersecurity, and large-scale network management, and they often improve efficiency, reduce manual workload, and support more informed decisions. However, this increased autonomy also raises new questions: autonomous systems that act without human intervention must be reliable, explainable, secure, and aligned with human expectations, or they may cause serious harm. Many implementations of such systems are still at an early stage, lack standard evaluation methods, and face challenges such as data access, ethical responsibility, and coordination among multiple agents. For clearer understanding, this review outlines a taxonomy of agentic AI, surveys several of its current application domains, discusses common architectures and techniques, and highlights limitations and future directions. The results of this review suggest that progress in governance, multimodal reasoning, and scalable coordination will be central to advancing safe and useful agentic AI systems.
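To make the "agentic" pattern concrete, here is a generic observe-plan-act loop as a didactic skeleton; it is not an architecture from any of the reviewed studies, and the callbacks are trivial stand-ins.

```python
# Generic observe-plan-act agent loop; all callables are hypothetical stand-ins.
from typing import Callable

def run_agent(goal: str,
              observe: Callable[[], str],
              plan: Callable[[str, str], list[str]],
              act: Callable[[str], str],
              max_steps: int = 5) -> list[str]:
    log = []
    for _ in range(max_steps):
        state = observe()                    # perceive the environment
        steps = plan(goal, state)            # e.g. an LLM proposing next actions
        if not steps:
            break                            # goal judged complete
        log.append(act(steps[0]))            # execute, then re-observe and adapt
    return log

# Trivial stand-ins so the loop runs end to end.
history = run_agent(
    goal="summarize today's alerts",
    observe=lambda: "3 new alerts",
    plan=lambda goal, state: ["fetch alerts"] if "alerts" in state else [],
    act=lambda step: f"done: {step}",
    max_steps=2,
)
print(history)
```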

Article
Computer Science and Mathematics
Software

Dong Liu

Abstract: This paper introduces Primary Breadth-First Development (PBFD) and Primary Depth-First Development (PDFD)—formally and empirically verified methodologies for scalable, industrial-grade full-stack software engineering. Both approaches enforce structural and behavioral correctness through graph-theoretic modeling, bridging formal methods and real-world practice. PBFD and PDFD model software development as layered directed graphs with unified state machines, verified using Communicating Sequential Processes (CSP) and Linear Temporal Logic (LTL). This guarantees bounded-refinement termination, deadlock freedom, and structural completeness. To manage hierarchical data at scale, we present the Three-Level Encapsulation (TLE)—a novel bitmask-based encoding scheme. TLE operations are verified via CSP failures-divergences refinement, ensuring constant-time updates and compact storage that underpin PBFD's robust performance. PBFD demonstrates exceptional industrial viability through eight years of enterprise deployment with zero critical failures, achieving approximately 20× faster development than Salesforce OmniScript, 7–8× faster query performance, and 11.7× storage reduction compared to conventional relational models. These results are established through longitudinal observational studies, quasi-experimental runtime comparisons, and controlled schema-level experiments. Open-source Minimum Viable Product implementations validate key behavioral properties, including bounded refinement and constant-time bitmask operations, under reproducible conditions. All implementations, formal specifications, and non-proprietary datasets are publicly available.
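The sketch below only illustrates the kind of bitmask operations that such an encoding relies on (constant time within a fixed machine word); it does not reproduce the paper's actual three-level TLE layout.

```python
# Generic bitmask set/clear/test operations; within a fixed-width word these
# are single machine instructions, which is what makes bitmask encodings fast.
class BitSet:
    def __init__(self) -> None:
        self.bits = 0

    def add(self, i: int) -> None:        # set bit i
        self.bits |= (1 << i)

    def remove(self, i: int) -> None:     # clear bit i
        self.bits &= ~(1 << i)

    def contains(self, i: int) -> bool:   # test bit i
        return (self.bits >> i) & 1 == 1

s = BitSet()
s.add(3); s.add(17)
print(s.contains(3), s.contains(5))   # True False
s.remove(3)
print(s.contains(3))                  # False
```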

Review
Computer Science and Mathematics
Software

Ammad Ul Haq Farooqi, Omer Khalid, Muhammad Bilal

Abstract: Serverless architecture has progressively become one of the most popular ways to build and deploy applications today. It allows developers to focus on their code without managing backend servers. By abstracting away the underlying infrastructure, the serverless approach makes it much easier to achieve scalability, automatic resource management, and cost efficiency through a pay-per-use model. As its usage has grown, the architecture has expanded into domains such as the Internet of Things (IoT), high-performance computing, artificial intelligence, and large-scale cloud environments. This expansion brings challenges as well, including performance, reliability, and maintenance concerns. This review examines fifty peer-reviewed studies to present a structured overview of the current state of serverless computing and organizes the existing work into a taxonomy that captures key developments across application domains, technical methods, data sources, and limitations, while also identifying open research directions. We find a clear evolution from simple function orchestration towards more intelligent, workload-aware scheduling systems with reduced cold-start latency and hybrid deployments that span both cloud and edge infrastructures. However, despite these advances, recurring issues such as vendor lock-in, limited debugging visibility, difficulties in managing state, and unpredictable performance still hinder widespread adoption of the serverless approach. Finally, the review highlights several promising directions for future research, including adaptive resource management, distributed serverless runtimes, AI-driven optimization, and better support for heterogeneous hardware. Overall, our work offers a consolidated understanding of the current progress and future potential of serverless computing.
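For readers unfamiliar with the model, here is a minimal function-as-deployment-unit sketch in the AWS Lambda handler style (handler(event, context) is the conventional Python signature); the event shape and logic are invented for illustration.

```python
# Minimal serverless-style handler: the platform provisions, scales, and bills
# this function per invocation; the developer writes only stateless logic.
import json

def handler(event, context=None):
    name = (event or {}).get("name", "world")   # hypothetical event payload
    return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}

# Local invocation for illustration (in production the cloud platform calls it).
print(handler({"name": "serverless"}))
```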

Article
Computer Science and Mathematics
Software

David Ostapchenko

Abstract: Microservice systems require reliable delivery, stable runtime environments, and production controls to achieve both high throughput and operational stability. This paper proposes a production-development framework that combines (i) runtime and toolchain version pinning, (ii) progressive delivery (canary / blue-green) with automated rollback, (iii) SLO and error-budget-based reliability management, and (iv) supply-chain security controls for build and artifact integrity. The framework is assessed via an 8-week synthetic observation design that mirrors operating conditions typical for mid-size telecom/fintech organizations (Central Asia), using eight independently deployable services. The evaluation is reported through DORA-style delivery metrics (deployment frequency, lead time, change failure rate, MTTR) and SLO attainment, and includes an ablation that isolates the expected contribution of each practice. The results indicate that deterministic builds (pinning) and progressive delivery provide the strongest improvements in delivery stability, while SLO/error-budget policies reduce incident impact by enforcing clear rollback and release-freeze rules.
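A small sketch of the error-budget arithmetic behind the SLO-based release policy described above; the SLO target, window, and freeze rule are illustrative assumptions, not the paper's settings.

```python
# Error-budget bookkeeping: an SLO of 99.9% leaves 0.1% of requests as the
# failure budget for the window; exhausting it triggers a release freeze.
def error_budget_status(slo: float, total_requests: int, failed_requests: int) -> dict:
    budget = (1.0 - slo) * total_requests          # allowed failures in the window
    consumed = failed_requests / budget if budget else float("inf")
    return {
        "allowed_failures": budget,
        "budget_consumed": consumed,               # > 1.0 means the budget is exhausted
        "freeze_releases": consumed >= 1.0,        # hypothetical policy rule
    }

print(error_budget_status(slo=0.999, total_requests=2_000_000, failed_requests=1_500))
# 2000 allowed failures; 75% of the budget consumed, so releases are still allowed
```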

Article
Computer Science and Mathematics
Software

Parvani Vafa Mokhammad

Abstract: More web applications are produced every year, and they process more data and serve more users, so effective memory management directly affects performance. Node.js uses the V8 JavaScript engine, where garbage collection (GC) automatically releases unused memory. Despite the convenience, this process can cause delays and reduce application performance under high load. The purpose of the study is to examine how different garbage collection configurations and parameters affect the memory management performance of modern Node.js applications. In my research, I analyze existing studies and run a series of load tests with different GC settings. Request response time and memory usage are compared as the number of simultaneous requests increases. The results will make it possible to determine optimal approaches to configuring and profiling memory in Node.js applications, and will provide practical recommendations to developers on how to improve the performance of their applications.
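As a hedged sketch of the experimental idea, the harness below runs the same Node.js workload under different V8 heap settings and compares wall-clock time; "app.js" and the flag values are placeholders, while --max-old-space-size is a real Node/V8 option.

```python
# Run a Node.js workload under different GC-related heap settings and time it.
# "app.js" is a placeholder workload; memory sampling would be added separately.
import subprocess, time

CONFIGS = {
    "default":    [],
    "small-heap": ["--max-old-space-size=256"],
    "large-heap": ["--max-old-space-size=4096"],
}

for name, flags in CONFIGS.items():
    start = time.perf_counter()
    subprocess.run(["node", *flags, "app.js"], check=True)   # placeholder script
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```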

Article
Computer Science and Mathematics
Software

Saikal Batyrbekova

Abstract: This paper looks at two main ways to build large software programs: the traditional Monolith, which is one large, unified application, and Microservices, which are many small, independent parts. The goal is to use real-world studies and experiments to clearly explain the confirmed pros and cons of each approach. The findings show that for new or small projects, the Monolith is generally faster and cheaper to launch. However, Microservices are much better at handling huge numbers of users, a key advantage known as scaling, and they speed up development by allowing teams to work completely independently. The major trade-off is high complexity: Microservices are difficult to set up and operate, requiring specialized skills. I conclude that the best architectural choice is never fixed; it depends entirely on the project's specific situation, such as the required size and growth speed of the system.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated