Computer Science and Mathematics


Article
Computer Science and Mathematics
Information Systems

Yu Mao, Zhishen Chen, Xiangjun Ma

Abstract: This paper proposes a lightweight full-stack execution framework integrating Serverless architecture with WebAssembly runtime optimization to enhance performance and energy efficiency in edge deployments. The system employs modular task decomposition and Light-Container Isolation (LCI) technology to achieve cross-node function reuse on the AWS Lambda and Cloudflare Workers platforms. A Reinforcement Learning Scheduler (RL-Scheduler) predicts request distribution in real time, dynamically allocating CPU cycles and memory limits. Targeted testing demonstrates a 52% reduction in cold start time, a 33% decrease in average execution latency, and a 21% reduction in energy consumption under 3,000 concurrent tasks. The results confirm that the framework effectively enhances execution autonomy and cross-platform portability for edge Serverless systems in multi-tenant environments.

Article
Computer Science and Mathematics
Information Systems

Elaine Roberts, Jason McAllister, Rachel Nguyen, Michael Turner

Abstract: Airborne terminals increasingly rely on OTA updates, yet their performance is limited by high satellite-link delays and the overhead of kernel-based packet handling. This study designs a DPDK–SR-IOV transmission path that moves packet processing to user space and assigns fixed queues to OTA traffic. Tests on an airborne terminal and a co-simulation platform show that the new path raises link utilization from 68.4% to 91.7%, reduces median delay by 36.2%, and lowers the 99th-percentile jitter by 47.9%. The retransmission rate stays below 0.4% across 1,000 update cycles, indicating stable behavior under long runs. These findings show that kernel-bypass methods, when applied with controlled queue and CPU settings, can support high-throughput and low-jitter OTA updates in aircraft. The study also notes the need for broader testing across different hardware and mixed traffic conditions before deployment at fleet scale.

Article
Computer Science and Mathematics
Information Systems

Mohammed Khasawneh, Anjali Awasthi

Abstract: The rapid development of smart cities addresses urban challenges arising from population growth, resource management, and sustainability needs. Smart cities rely on Systems of Systems (SoS), interconnected yet independent systems, to achieve capabilities beyond those of individual components. This analysis explores SoS principles such as operational autonomy, geographic distribution, and evolutionary growth in smart cities, with applications spanning healthcare, transportation, public safety, and energy efficiency. Case studies from India, Atlanta, and Porto illustrate successful SoS implementations that use data-driven methods such as open data platforms and IoT devices to tackle issues like traffic congestion and resource allocation. Applying SoS frameworks to urban traffic light management can significantly reduce congestion and enhance transportation efficiency through dynamic data sharing and predictive analytics. By transforming traffic lights into interconnected 'smart sensors', this approach enables real-time responses to traffic conditions, proactive congestion management, and improved emergency access. Addressing interoperability, scalability, and data security challenges ensures seamless system integration, supporting sustainable urban mobility.

Article
Computer Science and Mathematics
Information Systems

Yu Mao, Keng-Ming Chang, Zhishen Chen

Abstract: Modern web applications commonly face imbalances between frontend and backend rendering and inefficient resource utilization in high-concurrency scenarios. This paper proposes a full-stack collaborative optimization framework integrating React Server Components with Node.js load-tiered scheduling. This approach first enables dynamic switching between frontend SSR and CSR through Async Render Tree Partitioning. It then introduces Nginx reverse proxy and multi-level Redis caching to reduce database access latency. Finally, it employs an event-loop-based Adaptive Task Scheduler on the backend to dynamically allocate I/O load. Experiments conducted in K6 and JMeter simulation environments demonstrate that under 10,000 concurrent requests, average response time decreased by 38.6%, P95 latency dropped by 41%, and server energy consumption reduced by 18.3%. This research provides a systematic engineering solution for performance optimization in large-scale web systems.

Article
Computer Science and Mathematics
Information Systems

Emily J. Carter, Jonathan P. Reeves, Michael R. Dawson

Abstract: OTA updates across aircraft, edge gateways, and ground systems can introduce faults that slow updates or affect service stability. This study builds an AIOps-based process that uses time-series data and log signals to detect OTA faults and to trigger automatic repair on aviation edge devices. Data were collected from 280 nodes over 90 days and included CPU load, memory use, network delay, and structured OTA logs. The detector reached an F1 score of 0.964 with a false-alarm rate of 1.1%, and most faults were identified within seconds. Automated actions resolved 73.6% of OTA issues and reduced manual tickets by 57.9%. These results show that combining simple sequence models, log features, and controlled rollback can shorten fault-location time and recovery time in cloud–edge–aircraft environments. The work provides a practical direction for improving OTA reliability, although wider testing across more aircraft types and update systems is still needed.

Article
Computer Science and Mathematics
Information Systems

Daniel R. Hughes, Mei Lin, Javier Ortiz, Hannah K. Fischer

Abstract: Over-the-air (OTA) updates often face unstable delay and limited bandwidth, which lower data transfer speed and reliability. This study built an adaptive OTA transmission method that combines a Bayesian delay prediction model with Brotli–LZMA compression. The model estimates short-term delay changes and adjusts compression level according to network conditions. Tests were done under simulated satellite and IoT links with bandwidth between 0.5 and 10 Mbps. The results showed that packet loss dropped by 41%, transfer rate increased by 29%, and compression time accounted for 3.8% of the total process. The prediction model reached a root mean square error (RMSE) of 18 ms, showing good accuracy in delay estimation. These results show that combining delay prediction with adaptive compression can make OTA transmission faster and more stable in low-bandwidth networks. The method can be used in satellite, IoT, and remote monitoring systems that require reliable OTA data delivery.
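The prediction-then-compression loop described above can be sketched as a Gaussian mean update feeding a compression-level choice. This is a minimal illustrative stand-in, not the paper's implementation: it uses the standard-library lzma module (Brotli is not in the standard library), and all thresholds, variances, and function names are assumptions.

```python
import lzma

def update_delay_estimate(mean, var, sample, obs_var=25.0):
    """One Gaussian conjugate update of the link-delay belief (ms)."""
    k = var / (var + obs_var)          # Kalman-style gain
    new_mean = mean + k * (sample - mean)
    new_var = (1 - k) * var
    return new_mean, new_var

def pick_preset(predicted_delay_ms):
    """Map predicted delay to an LZMA preset: slower links justify
    spending more CPU on compression before transmission."""
    if predicted_delay_ms > 400:
        return 9      # very slow link: compress as hard as possible
    if predicted_delay_ms > 150:
        return 6
    return 1          # fast link: compress lightly, send sooner

# Feed a few hypothetical delay samples, then compress accordingly.
mean, var = 200.0, 100.0
for sample in (180.0, 210.0, 500.0):
    mean, var = update_delay_estimate(mean, var, sample)

payload = b"firmware-chunk" * 1000
compressed = lzma.compress(payload, preset=pick_preset(mean))
print(len(compressed) < len(payload))  # True
```

The same structure would apply with a Brotli backend or a richer delay model; only the preset-selection table would change.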

Article
Computer Science and Mathematics
Information Systems

Giulia Esposito, Marco Conti, Luca Bianchi

Abstract: Modern avionics increasingly depend on frequent software updates, making it necessary to understand how fleet-wide OTA rollouts affect operational risk. This study builds a digital-twin model that links onboard software states, air–ground communication, and maintenance timing, and uses three years of operational data containing 7.2×10⁸ logs to test 32 OTA strategies. The simulations show that single-shot updates create the highest exposure, while batch updates with fixed thresholds reduce exposure but remain sensitive to short link disturbances. A combined strategy that uses batch updates, dynamic thresholds, and delayed rollback produces the best performance, lowering potential exposure by 48.3% without affecting mission completion. Module-level analysis based on importance sampling identifies the communication link and the update agent as the main contributors to the remaining risk and supports the construction of safety limit curves. These results demonstrate that software-centered digital twins can give practical guidance for OTA planning and fleet management. The study also notes limits related to human actions, fleet diversity, and simplified security events, which should be addressed in future work.

Article
Computer Science and Mathematics
Information Systems

Wei Zhang, Michael R. Lewis

Abstract: Over-the-air (OTA) updates in edge computing systems face practical challenges due to unstable network conditions and heterogeneous node capacities. To address this, we propose a task scheduling framework that integrates Deep Q-Network (DQN) reinforcement learning with a genetic algorithm. The model was tested with 120 OTA tasks across 50 industrial edge nodes. Results show that the proposed method reduces average scheduling latency by 23.9% and energy use by 18.5% compared to static baseline methods. Under network delays up to 300 ms, the task success rate remained at 99.2%, significantly outperforming FIFO and fixed-priority schedulers by 27.6%. The load distribution, measured by the coefficient of variation (COV), improved from 0.42 to 0.17. This indicates better task balancing among nodes. The framework adapts to fluctuating network conditions and provides a reliable solution for industrial and vehicle-mounted systems. However, long-term deployment effects and scalability in real-world environments require further investigation.
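The load-balance metric reported above, the coefficient of variation, is simply the standard deviation of per-node load divided by its mean; lower values mean tasks are spread more evenly. A small sketch with hypothetical task counts (the numbers are illustrative, not from the study):

```python
from statistics import pstdev, mean

def coefficient_of_variation(loads):
    """COV = population std-dev / mean of per-node load."""
    return pstdev(loads) / mean(loads)

before = [10, 2, 14, 3, 11]   # hypothetical tasks per node, static scheduler
after = [8, 7, 9, 8, 8]       # hypothetical tasks per node, adaptive scheduler
print(coefficient_of_variation(before) > coefficient_of_variation(after))  # True
```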

Article
Computer Science and Mathematics
Information Systems

Michael Chen, Sara Patel, David L. Wong, Emily J. Morales

Abstract: Over-the-air (OTA) update systems are used to deliver software in real time for fields such as aviation, railway systems, and medical devices. This study builds a cloud-based OTA setup using Kubernetes and Istio to improve update speed and system stability across different types of devices. The system includes rolling updates, blue-green switching, gRPC transmission, and message queue scheduling. Tests were run on 72 terminals from vehicle, avionics, and medical settings. Results show that the average image transfer time dropped from 842 ms to 493 ms, and the failure rate was reduced from 3.6% to 0.8%. In 500 failure tests, the average time to restore service became 38.7% shorter. These results confirm that using containers and service-level routing helps shorten delays and reduce errors in OTA processes. The method can be applied in real-world embedded systems but may require extra tuning on older hardware or unstable networks.

Article
Computer Science and Mathematics
Information Systems

Andrew P. Collins, Maria J. Estevez, Tobias H. Weber

Abstract: Over-the-air (OTA) updates in multi-tenant systems often face task conflicts, cache overlap, and weak fault recovery during parallel updates. This study designed a layered fault-tolerance and isolation method that combines task redundancy, cache separation, and snapshot rollback. Tests were carried out on 120 devices across six tenants with a fault rate of up to 95%. The system kept stable operation, extended the mean time between failures (MTBF) to 182 hours, and raised total availability from 98.2% to 99.7%. The average update delay per tenant stayed below 1.1 seconds, showing that higher reliability did not slow the process. The method effectively avoided tenant interference, reduced recovery time, and improved update stability. It provides a simple and practical solution for dependable OTA updates in industrial, automotive, and IoT systems.
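The reported MTBF and availability figures are linked by the standard steady-state relation A = MTBF / (MTBF + MTTR). Assuming that relation holds (our assumption, not a formula stated in the abstract), the implied mean repair time can be backed out:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def implied_mttr(mtbf_hours, avail):
    """MTTR consistent with a given MTBF and availability."""
    return mtbf_hours * (1 - avail) / avail

# MTBF of 182 h at 99.7% availability implies roughly half an hour of
# mean repair time per failure.
mttr = implied_mttr(182, 0.997)
print(round(mttr * 60))  # ≈ 33 minutes
```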

Article
Computer Science and Mathematics
Information Systems

Bowen Su, Xiaoping Chen, Yuehong Dai, Xiaobo Ma

Abstract: Considering possible assembly malfunctions in the control loop, this paper studies sliding mode observer (SMO) design for a linearized physical system with environmental disturbances and sensor faults, under certain constraints on the system structure, in order to establish a fault detection and isolation (FDI) scheme for the in-the-loop system. On the one hand, by exploiting the structure of the fault distribution, the fault and disturbance coefficients in the unobserved subsystem are canceled by a state transform under the presumed conditions. An SMO serving FDI for the observable subsystem is then constructed, and the convergence of the observation error is verified by Lyapunov analysis. On the other hand, for the general case in which faults and disturbances are distributed arbitrarily, the fault coefficients corresponding to unobserved states are canceled by similarity transforms on the system matrix, so that a reconfigured SMO can counteract and detect faults in the observable subsystem. Furthermore, using inequality transforms, the observation error is shown to converge to a bounded oscillation, which is attributed to the presence of the disturbance. Finally, the FDI scheme is applied to a fixed-wing airplane system to validate the stability of the SMO.
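For context, a sliding-mode observer in the standard Edwards–Spurgeon form is sketched below; this is a generic textbook form, not the paper's exact observer, whose gains additionally depend on the state and similarity transforms described above:

```latex
% Generic sliding-mode observer: linear output-error injection G_l e_y
% plus discontinuous injection G_n \nu driving e_y to the sliding surface.
\dot{\hat{x}} = A\hat{x} + Bu + G_l e_y + G_n \nu,
\qquad e_y = y - C\hat{x},
\qquad
\nu =
\begin{cases}
  -\rho \,\dfrac{P_2 e_y}{\lVert P_2 e_y \rVert}, & e_y \neq 0,\\[4pt]
  0, & e_y = 0,
\end{cases}
```

where ρ upper-bounds the fault and disturbance magnitudes and P₂ is a Lyapunov matrix for the output-error dynamics.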

Article
Computer Science and Mathematics
Information Systems

Afonso Crespo, José Barateiro, Elsa Cardoso

Abstract: Housing inequalities remain a major challenge for contemporary urban governance, as they combine economic, social, spatial, and demographic dynamics that are difficult to capture through single indicators. This paper develops a data-driven assessment of housing inequalities in Portugal between 2015 and 2025, drawing on official national and European statistics and applying a Business Intelligence (BI) and urban analytics framework oriented towards policy monitoring. Data from Statistics Portugal and Eurostat are integrated through an analytical pipeline including automated extraction via public APIs, data enrichment, and visual analytics. The workflow follows a CRISP-DM–inspired structure, creating a set of normalized indicators to capture different dimensions of housing conditions. The results point to a structurally polarized housing market. Housing valuations increased across all regions, but at uneven rates, reinforcing territorial disparities rather than convergence. Metropolitan and tourism-oriented regions experienced faster appreciation and indirect effects, while year-over-year growth in completed dwellings slowed after 2021–2022, indicating an uneven supply response. Beyond its empirical findings, the primary contribution of this study lies in demonstrating how BI and data science methodologies can be operationalized to monitor housing inequalities using official statistics. The proposed framework is replicable and can be adapted to other territorial and policy contexts.

Article
Computer Science and Mathematics
Information Systems

Kadek Suarjuna Batubulan, Nobuo Funabiki, I Nyoman Darma Kotama, Komang Candra Brata, Anak Agung Surya Pradhana

Abstract: Nowadays, pedestrian navigation systems using smartphones have become common in daily activities. To provide ubiquitous, accurate, and reliable services, collecting map information is essential for constructing a comprehensive spatial database. Previously, we developed a map information collection tool that extracts building information using Google Maps, optical character recognition (OCR), geolocation, and web scraping on smartphones. However, indoor navigation often suffers from inaccurate localization caused by degraded GPS signals inside buildings and Simultaneous Localization and Mapping (SLAM) estimation errors, leading to position errors and confusing augmented reality (AR) guidance. In this paper, we present an improved map information collection tool to address this problem. It captures 360° panoramic images to build 3D models, applies photogrammetry-based mesh reconstruction to correct geometry, and georeferences point clouds to refine latitude-longitude coordinates. To evaluate the proposal, we conducted experiments in indoor scenarios and assessed performance in terms of positional accuracy (Haversine distance), geometric accuracy (RMSE), AR drift, task completion rate, completion time, and the System Usability Scale (SUS). The results demonstrate significant improvements in reliability and user satisfaction compared with the previous tool.
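The positional-accuracy metric named above, the Haversine distance, has a compact closed form; a minimal implementation, assuming a mean Earth radius of 6,371 km, is:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * radius_m * asin(sqrt(a))

# Hypothetical example: two points 0.0001° of latitude apart,
# roughly the scale of indoor positioning error.
print(round(haversine_m(34.6510, 133.9190, 34.6511, 133.9190), 1))  # → 11.1
```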

Article
Computer Science and Mathematics
Information Systems

Matthew P. Dube, Brendan P. Hall

Abstract: Temporal reasoning is an important part of the field of time geography and spatio-temporal data science. Recent advances in qualitative temporal reasoning have developed a set of 74 relations that apply between discretized time intervals of at least two pixels each. While the identification of specific relations is important, the field of qualitative spatial and temporal reasoning relies on conceptual neighborhood graphs to address relational similarity. This similarity is paramount for generating essential decision support structures, notably reasonable aggregations of concepts into single terms and the determination of nearest neighbor queries. In this paper, conceptual neighborhoods graphs of qualitative topological changes to discretized temporal interval relations in the form of translation, isotropic scaling, and anisotropic scaling are identified using data generated through a simulation protocol. The outputs of this protocol are compared to the extant literature regarding conceptual neighborhood graphs of the Allen interval algebra, demonstrating the theoretical accuracy of the work. This work supports the development of robust spatio-temporal artificial intelligence as well as the future development of spatio-temporal query systems upon the spatio-temporal stack data architecture.

Article
Computer Science and Mathematics
Information Systems

Vladimír Moskovkin

Abstract: The Webometrics University Ranking website ceased to function in 2025 due to an inability to obtain citation data from Google Scholar. Since then, Webometrics University Ranking data has been published on the Figshare server, but the values of the three individual indicators have not been ranked. From July 2025 onwards, the Openness indicator values for citations have been calculated using OpenAlex via the ROR identifier. Rankings of all three indicators will be provided twice a year as an Excel file on a paid basis. Examples of universities included in the TOP-2,000 of the January 2025 and July 2025 Webometrics Ranking that lacked legitimate Institutional Google Scholar Citation profiles (IGSCPs) demonstrate a sharp increase in their rankings when switching to the new methodology for calculating Openness indicator values. Experiments on the webometric ranking of TOP-2,000 universities with missing IGSCPs showed, once those profiles were restored, identical average changes in world rankings both under the old methodology and in the transition from the old methodology to the new one. Together with the strong rank correlation over time between the two corresponding layers of the Top 1,000 Webometrics University Ranking, this leads to the conclusion that the transition to the new methodology will have virtually no effect on university rankings. This conclusion allows researchers and university managers to conduct comparative analyses of university positioning and benchmarking exercises starting from 2016, when IGSCPs were introduced into the Webometrics University Ranking calculation. The importance of creating and maintaining Institutional Google Scholar Citation profiles, despite changes in the Webometrics University Ranking calculation methodology, is thereby demonstrated.

Article
Computer Science and Mathematics
Information Systems

Rui Pascoal, José Gómez, Élmano Ricarte

Abstract: Accurate distance measurement in outdoor environments remains a challenging problem for mobile augmented reality (AR) systems due to sensor noise, environmental variability, and the limitations of single-modality approaches. Existing consumer AR solutions often prioritize usability over metric robustness, leading to performance degradation in large-scale or heterogeneous outdoor scenarios. This work presents EfMAR, an adaptive framework for outdoor mobile AR-based geospatial measurements that integrates multiple sensing modalities through a structured sensor fusion architecture. EfMAR combines visual SLAM, inertial sensing, depth information, and global positioning cues to improve robustness and consistency in distance estimation across diverse outdoor conditions. Beyond implementation, the framework formalizes a reusable architectural model for adaptive multi-sensor fusion, supporting reproducibility and future comparative research. A dedicated dataset is described, comprising a combination of real-world field measurements collected in representative outdoor scenarios and synthetic data informed by existing literature, enabling structured performance analysis while maintaining methodological transparency. Performance evaluation focuses on analyzing relative behavior, stability, and variability across sensing approaches rather than establishing absolute accuracy benchmarks. Comparative results across multiple distance ranges and environments indicate that hybrid sensor fusion strategies exhibit more stable and consistent performance trends compared to single-modality solutions, particularly in challenging urban contexts. Dispersion analysis further highlights the influence of environmental factors such as lighting conditions and spatial scale on measurement variability. Overall, the results position EfMAR as a flexible and adaptive framework designed to enhance robustness in outdoor AR-based geospatial measurement tasks. By emphasizing consistency, transparency, and architectural generalization, this work contributes a practical foundation for future research and development in mobile AR sensing for real-world outdoor applications.
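The hybrid-fusion idea can be illustrated with a minimal inverse-variance fusion of independent distance estimates. This is a generic sketch, not EfMAR's actual architecture; the modality names and noise variances are assumptions chosen for illustration.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of (value, variance) pairs.
    The fused variance is always below the best single modality."""
    weights = [1.0 / var for _, var in estimates]
    value = sum(v * w for (v, _), w in zip(estimates, weights)) / sum(weights)
    var = 1.0 / sum(weights)
    return value, var

# Hypothetical: visual SLAM estimates 41.8 m (variance 4 m²),
# GNSS estimates 39.0 m (variance 25 m²).
value, var = fuse([(41.8, 4.0), (39.0, 25.0)])
print(round(value, 1), var < 4.0)  # fused estimate near the SLAM value
```

The same weighting generalizes to any number of modalities, which is one reason hybrid fusion tends to show lower dispersion than any single sensor.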

Concept Paper
Computer Science and Mathematics
Information Systems

Abhigyan Mukherjee

Abstract: The growing demand for cost-efficient digital transactions has driven the need for scalable and low-cost payment solutions. Traditional blockchain-based transactions suffer from high fees and slow processing times, making decentralized off-chain payment networks a promising alternative. In this paper, we propose SpeedyMurmurs, an AI-enhanced decentralized routing algorithm that significantly reduces payment processing costs and transaction delays. Our approach optimizes payment routing efficiency through embedding-based path discovery, reducing routing overhead by up to two orders of magnitude and cutting transaction processing times by over 50 percent compared to existing blockchain networks. By leveraging machine learning-driven transaction optimization, our system dynamically selects the most cost-effective paths for digital payments while maintaining user privacy and security. Experimental results demonstrate that SpeedyMurmurs reduces transaction fees and computational costs, making decentralized payment systems more financially viable. This research highlights the role of AI-powered routing strategies in minimizing costs and improving efficiency in modern payment networks.

Concept Paper
Computer Science and Mathematics
Information Systems

Abhigyan Mukherjee

Abstract: With the growing reliance on peer-to-peer (P2P) networks for digital transactions, traditional electronic payment systems require enhancements to ensure security, efficiency, and trust. This study introduces an innovative digital payment framework enabling currency-based exchanges between consumers and vendors within a peer-to-peer environment. The outlined approach is inspired by Millicent’s scrip methodology and leverages digital envelope encryption to bolster protection. Unlike conventional payment methods that heavily rely on financial institutions, the protocol minimizes their involvement, restricting their role to trust establishment and transaction finalization. The system introduces a distributed allocation model, where merchants locally authorize payments, reducing transaction overhead and enhancing scalability. Additionally, the protocol is optimized for repeated payments, making it particularly efficient for recurring transactions between the same buyer and merchant. By integrating cryptographic techniques and decentralizing payment authorization, this protocol presents a secure, efficient, and scalable solution for digital payments in P2P environments.

Article
Computer Science and Mathematics
Information Systems

Hyunwoo Choi, Jisoo Han, Minseo Park

Abstract: This study develops an adaptive workflow allocation mechanism for anti-money laundering (AML) operations, aiming to improve the accuracy and efficiency of suspicious-transaction review. A multi-agent simulation platform was constructed to model transaction flows, alert generation, and analyst decision behaviors. The system integrates model-confidence estimation, analyst-fatigue prediction, and real-time workload signals to dynamically route alerts. Experiments were conducted using 27.3 million historical transactions and 186,000 alerts from a large commercial financial dataset. Compared with fixed allocation rules, the adaptive mechanism increased alert-escalation precision from 0.32 to 0.46 and recall from 0.70 to 0.78, while reducing average handling time by 19.4%. The proportion of high-risk alerts processed within the target time window improved by 23.8%. These results demonstrate that workflow optimization can meaningfully enhance AML performance beyond model-level improvements.
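The escalation precision and recall figures above follow the usual confusion-matrix definitions. The alert counts below are hypothetical, chosen only to reproduce the reported ratios, and are not taken from the study:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts matching the fixed-rule baseline (0.32 / 0.70)
p_fixed, r_fixed = precision_recall(tp=320, fp=680, fn=137)
# Hypothetical counts matching the adaptive mechanism (0.46 / 0.78)
p_adapt, r_adapt = precision_recall(tp=460, fp=540, fn=130)
print(round(p_fixed, 2), round(p_adapt, 2))  # 0.32 0.46
```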

Concept Paper
Computer Science and Mathematics
Information Systems

Michael (Mike) Curnow

Abstract: Many modern systems operate in open environments that cannot be fully observed, controlled, or reset. In such settings, reasoning about system behavior often relies on snapshots: instantaneous states, static configurations, or endpoint outcomes. While useful in closed or tightly controlled contexts, snapshot-based reasoning can miss important information about how systems actually interact with their environments over time. This paper introduces Stateful Temporal Entropy (STE), a general, domain-neutral synthesis that treats a system’s irreversible trajectory as a source of observable evidence. Rather than focusing on what a system is at a moment, STE emphasizes how a system evolves under irreversible progression, internal state, and external influence. From this perspective, entropy is understood not as noise or disorder, but as observer-side uncertainty that accumulates through lived process. Systems may appear identical at an endpoint yet remain distinguishable by the entropy of the paths they took to arrive there. STE is not proposed as a new physical law, algorithm, or inference rule. Instead, it provides a conceptual lens for identifying where evidentiary information resides in open systems and why that information is absent from snapshot-based observation. By treating trajectories as first-class evidentiary objects, STE clarifies how irreversible evolution constrains plausible explanations of system behavior. Illustrative examples from physical, cyber, and socio-technical domains are discussed to demonstrate breadth, without narrowing the synthesis to any specific application.



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
