Computer Science and Mathematics

Article
Computer Science and Mathematics
Computer Networks and Communications

Andrea Piroddi, Maurizio Torregiani

Abstract: This paper proposes a novel information-theoretic upper bound on the mutual information between the physical position of a user and the observed MIMO channel state information (CSI). Unlike classical Cramér-Rao bounds or I-MMSE relations, our bound explicitly incorporates the spatial variability of the channel via the Jacobian of the channel with respect to position. We provide a derivation for both local linearized models and global nonlinear bounds, highlighting the dependence on array geometry and multipath structure. The results offer new insight into the intrinsic information available for position estimation and semantic localization in wireless networks.

Article
Computer Science and Mathematics
Computer Networks and Communications

Chandramouli Haldar

Abstract: This paper introduces a Morse code transmission system that uses an ESP32 microcontroller and displays the decoded message in real time via a Telegram bot. In contrast to traditional Morse code input methods, which typically rely on a single button with timing to distinguish dots from dashes, the proposed design uses two dedicated buttons for dot and dash, respectively, and a third button for sending the entire message. This approach simplifies user input, minimizes timing errors, and improves message-delivery accuracy. The decoded text is then transmitted securely, quickly, and reliably to a specified Telegram chat via the Bot API over Wi-Fi. The system is portable, lightweight, and compact, making it well suited to covert or clandestine messaging without attracting unnecessary attention.
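
The two-button entry scheme described in this abstract can be sketched in a few lines. The class below is an illustrative host-side model, not the authors' ESP32 firmware; the letter-boundary method and the abbreviated code table are assumptions of this sketch.

```python
# Illustrative model of the two-button Morse input scheme: button handlers
# append symbols directly, so no dot/dash press-duration timing is needed.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "....": "H",
         "..": "I", "---": "O", "...": "S", "-": "T"}  # abbreviated table

class MorseComposer:
    def __init__(self):
        self.symbols = []   # symbols of the letter being entered
        self.message = []   # decoded letters so far

    def press_dot(self):
        self.symbols.append(".")

    def press_dash(self):
        self.symbols.append("-")

    def end_letter(self):
        # Decode the accumulated symbols into one letter (assumed boundary
        # mechanism; the abstract does not specify how letters are split).
        code = "".join(self.symbols)
        self.message.append(MORSE.get(code, "?"))
        self.symbols = []

    def press_send(self):
        # Return the full message; a real device would hand this to the
        # Telegram Bot API over Wi-Fi (not modeled here).
        if self.symbols:
            self.end_letter()
        text, self.message = "".join(self.message), []
        return text
```

For example, four dots, a letter break, then two dots compose "HI", which one send-button press flushes as a complete message.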

Article
Computer Science and Mathematics
Computer Networks and Communications

Aymen I. Zreikat, Julien El Amine

Abstract: Wireless communications face both opportunities and challenges due to the coexistence of 5G New Radio (NR) high-band, mid-band, and low-band technologies. Each technology uses both licensed and unlicensed spectrum, operating in separate frequency bands. For example, 5G NR uses the high band above 24 GHz, the mid band of 2-6 GHz, or the low band below 2 GHz, and can also access the unlicensed 5 GHz band via Licensed-Assisted Access (LAA). With sophisticated coexistence mechanisms and optimization techniques, this 5G coexistence scenario in shared spectrum can be effectively managed. These strategies are essential for boosting network capacity, reducing latency, and ensuring fair spectrum use across different wireless technologies. This work provides a comprehensive system-level evaluation of multi-band coexistence and offloading strategies under realistic deployment assumptions. The simulation results confirm the effectiveness of the proposed model, showing that spectrum sharing and coexistence among these technologies deliver scalable and robust performance in heterogeneous service environments. This approach enables efficient load balancing across the entire network and highlights the need for additional features to achieve further performance gains.

Article
Computer Science and Mathematics
Computer Networks and Communications

Mona Alghamdi, Atm S. Alam, Asma Cherif

Abstract: Mobile edge computing (MEC) enables resource-constrained mobile devices to execute delay-sensitive and compute-intensive applications by offloading tasks to nearby edge servers. However, task orchestration in MEC is challenged by highly dynamic system conditions, unreliable networks, and distributed edge environments. Moreover, as the number of users, tasks, and resources increases, the offloading decision-making problem becomes increasingly complex due to the exponential growth of the search space. To address these challenges, this paper proposes a Multi-Criteria Hierarchical Clustering-based Task Orchestrator (MCHC-TO), a novel framework that integrates multi-criteria decision making with divisive hierarchical clustering for preference-aware and adaptive workload orchestration. Edge servers are first evaluated using multiple decision criteria, and the resulting preference rankings are exploited to form hierarchical preference-based clusters. Incoming tasks are then assigned to the most suitable cluster based on task requirements, enabling efficient resource utilization and dynamic decision making. Extensive simulations conducted using an edge computing simulator demonstrate that the proposed MCHC-TO framework consistently outperforms benchmark approaches, achieving reductions in average service delay and task failure rate of up to 48% and 92%, respectively. These results highlight the effectiveness of combining multi-criteria evaluation with hierarchical clustering for robust and dynamic task orchestration in MEC environments.
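
As a rough illustration of the orchestration idea in this abstract, the sketch below scores edge servers with a weighted sum of criteria and divisively splits the ranked list at the largest score gap. The weights, criteria names, and splitting rule are illustrative assumptions, not the paper's exact MCHC-TO formulation.

```python
# Hedged sketch: preference-based divisive clustering of edge servers.

def score(server, weights):
    # Weighted sum of normalized criteria (higher is better).
    return sum(weights[k] * server[k] for k in weights)

def divisive_split(ranked, min_size=2):
    # Recursively split a score-ranked list at its largest score gap,
    # yielding hierarchical preference clusters (top-down, divisive).
    if len(ranked) <= min_size:
        return [ranked]
    gaps = [ranked[i][1] - ranked[i + 1][1] for i in range(len(ranked) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return divisive_split(ranked[:cut], min_size) + divisive_split(ranked[cut:], min_size)

# Illustrative criteria weights and server measurements:
weights = {"cpu": 0.5, "bandwidth": 0.3, "low_delay": 0.2}
servers = {"s1": {"cpu": 0.9, "bandwidth": 0.8, "low_delay": 0.9},
           "s2": {"cpu": 0.85, "bandwidth": 0.7, "low_delay": 0.8},
           "s3": {"cpu": 0.3, "bandwidth": 0.2, "low_delay": 0.4}}
ranked = sorted(((name, score(s, weights)) for name, s in servers.items()),
                key=lambda pair: -pair[1])
clusters = divisive_split(ranked)  # s1 and s2 group together; s3 is isolated
```

An incoming task would then be matched to the cluster whose preference profile best fits its requirements, rather than searching over all servers individually.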

Article
Computer Science and Mathematics
Computer Networks and Communications

Liliana Enciso

Abstract: In the context of communications networks, a mobile ad hoc network (MANET) represents a set of mobile nodes that are dynamically configured, without physical infrastructure or centralized administration. This article analyzes which of the routing strategies for MANET—proactive, reactive, or hybrid—offers the best results. For the measurement, twenty-four simulations were established in emergency situations in an urban area. This involved, in addition to the defined quality of service (QoS) parameters, calculating node densities and using a mobility model necessary to validate the results. The AODV, DSDV, and AOMDV protocols were used in these simulations, and the AOMDV protocol offered the best QoS with the random obstacle mobility model in the NS2 tool.

Article
Computer Science and Mathematics
Computer Networks and Communications

Ponglert Sangkaphet, Supawee Makdee, Chaivichit Kaewklom, Nawara Chansiri, Buppawan Chaleamwong, Pheerasap Wonglamai, Phattaraphol Chinnachot

Abstract: Cage-based tilapia farming is highly affected by rapid variations in water quality, particularly variations in dissolved oxygen (DO), which can lead to mass fish mortality and significant economic losses. To address this challenge, in this study, an internet of things (IoT)- and LoRa-based water quality monitoring and control system is proposed, designed for real-time aquaculture management. The developed prototype enables peer-to-peer communication among distributed control nodes for continuous monitoring of dissolved oxygen, pH, and water temperature. Measured data are transmitted remotely and integrated with automated oxygen pump control through a mobile application, allowing timely intervention without continuous on-site supervision. To mitigate sensor degradation caused by prolonged submersion, an automatic probe lifting mechanism was incorporated into the system, significantly reducing biofouling and sensor drift. The experimental results show that this mechanism improves measurement accuracy, achieving a dissolved oxygen RMSE of 0.186, which is substantially lower than that of a continuously submerged sensor. Evaluation of communication performance confirms reliable LoRa transmission with a 100% packet delivery rate over distances up to 1,600 m, maintaining positive signal-to-noise ratios and RSSI values above receiver sensitivity. Detection latency analysis demonstrates sub-second response times for both single- and multi-hop configurations, sufficient for timely aeration control. Evaluation by five specialists yielded a high average performance score of 4.11, while post-implementation satisfaction assessments involving 20 tilapia farmers indicated an average score of 4.48, confirming the system’s effectiveness, reliability, and suitability for practical deployment in cage-based tilapia farming.

Article
Computer Science and Mathematics
Computer Networks and Communications

Sethu Subramanian N., Prabu P., Kurunandan Jain, Prabhakar Krishnan

Abstract: Smart-city IoT ecosystems depend on a large number of resource-constrained devices, which often lack built-in security mechanisms. While traditional cloud-based or gateway-centric intrusion detection systems (IDS) offer essential security, they suffer from high detection latency, considerable bandwidth demand, and a lack of precise monitoring of individual device actions. This work presents and experimentally evaluates a novel micro-layer intrusion detection architecture, termed the Edge AI Bridge, a micro-computing security layer positioned between IoT devices and the gateway to enable early-stage threat interception. The proposed architecture incorporates embedded AI hardware running a hybrid detection pipeline that combines unsupervised anomaly detection for behavioral profiling with a lightweight signature-matching module that reduces false positives, thereby improving detection reliability. System operations, including localized traffic inspection, protocol parsing, and feature extraction, are performed before data aggregation, which not only preserves device-level privacy but also substantially eases the computational burden on the IoT gateway. The contemporary CIC-IoT-2023 dataset, which captures a wide range of smart-city protocols and attack vectors, is used to evaluate the architecture. The Edge AI Bridge significantly reduces detection latency, to approximately 50 ms on average versus 500 ms for cloud-based solutions, while keeping the resource footprint low at about 20% CPU utilization. The Edge AI Bridge thus offers a scalable, modular, and privacy-preserving approach to improving the cyber resilience of smart-city infrastructures that are large, heterogeneous, and difficult to manage.
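
The hybrid pipeline described here (an anomaly stage plus a signature stage) can be caricatured as follows. The z-score baseline, threshold, and the single `syn_flood` signature are invented for illustration and do not reflect the paper's actual models.

```python
# Illustrative two-stage pipeline: an unsupervised stage flags deviation
# from a per-device baseline; a signature stage then confirms known-bad
# patterns, cutting false positives from benign-but-unusual traffic.

SIGNATURES = {
    # Hypothetical signature: many SYNs with almost no ACKs.
    "syn_flood": lambda f: f["syn_rate"] > 100 and f["ack_rate"] < 1,
}

def anomaly_score(features, baseline):
    # z-score-style deviation of packet rate from the device baseline.
    mean, std = baseline
    return abs(features["pkt_rate"] - mean) / std

def classify(features, baseline, threshold=3.0):
    if anomaly_score(features, baseline) < threshold:
        return "benign"
    for name, match in SIGNATURES.items():
        if match(features):
            return name      # anomalous AND matches a known attack
    return "anomaly"         # anomalous but unsigned: raise for review
```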

Article
Computer Science and Mathematics
Computer Networks and Communications

Craig S. Wright

Abstract: This article examines Nash equilibrium stability in digital cash systems, using Bitcoin as a canonical model for protocol-constrained strategic interaction. Building on the formal framework established in Wright (2025), we characterise mining as a repeated non-cooperative game under endogenous constraints: hashpower allocation, latency asymmetries, fee-substitution dynamics, and institutional noise. We show that equilibrium behaviours are sensitive to the structural composition of miner rewards—specifically, the transition from subsidy-dominated to fee-dominated environments—and that volatility in protocol rules leads to equilibrium multiplicity and eventual collapse. Using tools from mainstream game theory and Austrian time preference theory, we demonstrate that rational strategic cooperation is only sustainable under strict protocol immutability. Rule mutation introduces uncertainty that distorts intertemporal valuation and incentivises short-term extractive strategies. These results suggest that digital monetary systems must be governed by non-negotiable constitutional rules to preserve incentive compatibility across time.

Article
Computer Science and Mathematics
Computer Networks and Communications

Zsolt Bringye, Rita Fleiner, Eszter Kail

Abstract: The increasing reliance of Internet of Things (IoT) applications on low-power wide-area network technologies, particularly LoRaWAN, has amplified the need for intrusion detection approaches that go beyond attack-specific signatures and generic traffic anomalies. Existing IoT intrusion detection systems are often tailored to individual threat scenarios or rely on statistical indicators, which limits their ability to capture protocol-level misuse in a systematic and interpretable manner. This paper addresses this gap by proposing a methodology for protocol-aware anomaly detection based on a digital twin abstraction of LoRaWAN communication behavior. The approach models the Over-The-Air Activation (OTAA) procedure as a finite-state machine that serves as a lightweight, protocol-specific digital twin, encoding expected message sequences and specification-driven constraints. Rather than targeting individual attacks, observed network events are continuously validated against the modeled state evolution, enabling the identification of deviations that indicate anomalous or non-conformant behavior. Illustrative examples include replay attempts, integrity violations, and inconsistencies in protocol parameters, although the framework is not limited to predefined attack categories. The results demonstrate that state-machine-based digital twins provide a structured and extensible foundation for intrusion detection and can be integrated into SOC (Security Operation Center) oriented monitoring environments. Overall, the study highlights the methodological advantages of digital-twin-driven, state-aware detection for improving protocol compliance monitoring and interpretability in LoRaWAN-based IoT networks. Unlike prior LoRaWAN IDS approaches, the proposed model enables the detection of protocol-conformant yet semantically invalid behaviors that remain invisible to packet-centric or statistical detectors.
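
A minimal version of such a state-machine digital twin might look like the sketch below, which tracks the OTAA join flow and flags a replayed DevNonce. States, event names, and checks are simplified assumptions, not the paper's full model.

```python
# Sketch of an OTAA join procedure modeled as a finite-state machine.
# Observed events are validated against the expected state evolution;
# deviations (replayed DevNonce, out-of-order messages) raise alerts.

class OtaaTwin:
    def __init__(self):
        self.state = "IDLE"
        self.seen_dev_nonces = set()
        self.alerts = []

    def observe(self, event, dev_nonce=None):
        if event == "JOIN_REQUEST":
            if dev_nonce in self.seen_dev_nonces:
                self.alerts.append("replayed DevNonce")   # replay attempt
                return
            self.seen_dev_nonces.add(dev_nonce)
            self.state = "JOIN_PENDING"
        elif event == "JOIN_ACCEPT":
            if self.state != "JOIN_PENDING":
                self.alerts.append("JoinAccept without pending JoinRequest")
                return
            self.state = "JOINED"
        elif event == "UPLINK":
            if self.state != "JOINED":
                self.alerts.append("uplink from non-joined device")
```

Because the twin checks conformance to the modeled protocol rather than matching attack signatures, any non-conformant sequence is surfaced, including ones not anticipated in advance.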

Article
Computer Science and Mathematics
Computer Networks and Communications

Ahmed Lateef Salih Al-Karawi, Rafet Akdeniz

Abstract: Fifth-generation (5G) networks face critical security challenges in device authentication for massive Internet of Things deployments while preserving privacy. Traditional federated learning approaches depend on computationally expensive homomorphic encryption to protect model gradients, resulting in substantial latency, communication overhead, and energy consumption impractical for resource-constrained 5G devices. This paper proposes zero-knowledge federated learning (ZK-FL), which eliminates homomorphic encryption by enabling devices to prove model correctness without revealing gradients. Our approach integrates zero-knowledge proofs with FL updates, where each device generates a proof Proof_i = ZK(Gradient_i, Hash_i) demonstrating computational integrity. Experimental results from 10,000 authentication attempts demonstrate that ZK-FL achieves 78.4 ms average authentication latency versus 342.5 ms for homomorphic encryption-based FL (77% reduction), proof sizes of 0.128 KB versus 512 KB (99.97% reduction), and energy consumption of 284.5 mJ versus 6,525 mJ (95% reduction), while maintaining a 99.3% authentication success rate with formal privacy guarantees. These results demonstrate that ZK-FL enables practical privacy-preserving authentication for massive-scale 5G deployments.
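
To make the Proof_i = ZK(Gradient_i, Hash_i) message flow concrete, the sketch below uses a salted hash commitment purely as a stand-in. A hash commitment is not zero-knowledge (verification requires the gradient itself), whereas a real ZK-FL system would use an actual proof system such as a zk-SNARK; only the shape of the exchange is illustrated here.

```python
import hashlib
import json
import os

# Stand-in for Proof_i = ZK(Gradient_i, Hash_i): a salted SHA-256
# commitment binding a gradient to a model hash. NOT zero-knowledge.

def make_proof(gradient, model_hash, nonce=None):
    nonce = nonce if nonce is not None else os.urandom(16).hex()
    payload = json.dumps({"g": gradient, "h": model_hash, "n": nonce},
                         sort_keys=True).encode()
    return {"commitment": hashlib.sha256(payload).hexdigest(), "nonce": nonce}

def verify_proof(proof, gradient, model_hash):
    # Recompute the commitment; any tampering with the gradient or the
    # model hash changes the digest and verification fails.
    payload = json.dumps({"g": gradient, "h": model_hash, "n": proof["nonce"]},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == proof["commitment"]
```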

Article
Computer Science and Mathematics
Computer Networks and Communications

Qiang Duan, Zhihui Lu

Abstract: The rapid evolution of artificial intelligence technologies toward the agentic AI paradigm enables the emergence of Agentic Web in the future Internet. Agent communication plays a critical role in constructing the Agentic Web but faces unique challenges posed by the edge-network-cloud continuum in the future Internet. This paper provides a comprehensive overview of state‑of‑the‑art agent communication protocols and technologies, evaluating their readiness to support the construction of the Agentic Web. We first survey representative communication protocols and analyze the key technologies they employ, assessing their effectiveness in addressing the challenges for agent communications in the future Internet. We then identify critical gaps between existing approaches and the requirements of the Agentic Web, propose a unified architectural framework grounded in virtualization and service‑oriented principles, and outline key research directions needed to advance toward a fully realized Agentic Web.

Article
Computer Science and Mathematics
Computer Networks and Communications

Abraham George, Mir Wajahtat Hussain

Abstract: The convergence of the metaverse and ubiquitous computing presents a new paradigm for applications that connect humans, machines, and their virtual counterparts. These applications are built upon wireless networks and demand massive data transfers between device clusters. While several technological candidates are in active development to meet these needs, the question remains whether technological progress alone can keep pace with their growing requirements. This paper argues that, alongside advances in wireless technology, the current layered architecture must be reimagined to inherently promote network heterogeneity. This approach allows multiple physical layer technologies to be integrated and individual layer protocols to better adapt to the network environment. We propose a novel augmented network model with a physical abstraction layer to seamlessly integrate diverse wireless technologies, enabling application data to be distributed across multiple network interfaces. We present an analytical proof of concept to evaluate its performance using key metrics such as throughput, packet delay, and server utilization. The results demonstrate that our proposed model significantly improves throughput and minimizes packet delay under heavy network loads, outperforming single-interface solutions. This work proves the feasibility and substantial performance benefits of wireless heterogeneity in future communication networks.

Article
Computer Science and Mathematics
Computer Networks and Communications

Saio Alusine Marrah, Jiahao Wang, Koroma Abu Bakarr, Gibrilla Deen Kamara, Ryvel Timothy Stamber, Ologun Sodiq Babatunde, Mabel Ernestine Cole

Abstract: This paper presents a deep learning-based adaptive sensor fusion framework for real-time control and fault-tolerant automation in Industrial IoT systems. The core of the framework is an attention-based CNN-Transformer model that dynamically fuses heterogeneous sensor streams; its interpretable weighting signals are leveraged directly for fault detection and to inform a supervisory control policy. By dynamically weighting multiple heterogeneous sensor streams using an attention-based CNN-Transformer architecture, the proposed method reduces estimation error under noisy and fault-prone conditions, and seamlessly integrates with a closed-loop controller that adjusts to detected faults through a stability-aware supervisory policy. Experiments on synthetic IIoT data with injected transient faults demonstrate significant improvements in fusion accuracy (RMSE: 0.049 ± 0.003 vs. 0.118 ± 0.008 for a Kalman filter, p < 0.001), faster fault detection (F1-score: 0.89 ± 0.02) and recovery (1.1 ± 0.2 seconds), and hard real-time performance suitable for edge deployment (99th-percentile latency: 58 ms). The results show that the proposed approach outperforms classical baselines in terms of RMSE, detection F1-score, recovery time, and latency trade-offs. This work contributes to more reliable, adaptive automation in industrial settings with minimal manual tuning and empirical stability validation.
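
The attention-style weighting idea can be illustrated with a softmax over per-sensor relevance scores: unreliable streams receive near-zero weight and barely perturb the fused estimate. The hand-written relevance scores below are a stand-in for the paper's learned CNN-Transformer attention.

```python
import math

# Sketch of attention-weighted sensor fusion: softmax turns per-sensor
# relevance scores into weights summing to 1; the fused value is the
# weighted sum of readings. Relevance scores here are illustrative.

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(readings, relevance):
    weights = softmax(relevance)
    fused = sum(w * r for w, r in zip(weights, readings))
    return fused, weights

# A faulted third sensor (low relevance) is down-weighted, so the fused
# value stays near the two agreeing sensors despite the 9.0 outlier:
value, w = fuse(readings=[1.0, 1.1, 9.0], relevance=[4.0, 4.0, -2.0])
```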

Article
Computer Science and Mathematics
Computer Networks and Communications

Cameron T. Day, Abdussalam Salama, Reza Saatchi, Maryam Bagheri, Najam Ul Hasan, Samuel Betts

Abstract: Many existing healthcare facilities still rely on the aging Wi-Fi 5 (IEEE 802.11ac) standard, which is based on Orthogonal Frequency-Division Multiplexing (OFDM). OFDM supports single-user-per-channel access, leading to increased contention, higher latency, jitter, and packet loss under the dense device deployments commonly found in clinical settings. This study presents a quantitative performance evaluation of Wi-Fi 5 and Wi-Fi 6/7 by comparing the effectiveness of OFDM with Orthogonal Frequency-Division Multiple Access (OFDMA) and Target Wake Time (TWT) in a simulated dense IoMT environment. Simulations were conducted using Network Simulator 3 (NS-3), and key Quality of Service (QoS) metrics were measured. The results demonstrate that OFDMA reduces average network delay by up to approximately 30%, improves throughput by approximately 20%, and reduces packet loss ratio by up to 85% compared to OFDM under high-density conditions, while exhibiting marginally improved jitter performance (approximately 2%). In addition, the use of TWT achieved substantial reductions in device power consumption of up to approximately 90%, at the cost of reduced aggregate throughput of up to approximately 75% under high station densities. These results demonstrate that Wi-Fi 6/7 technologies offer significant advantages in terms of QoS and energy efficiency over legacy Wi-Fi 5 for dense IoMT environments.

Article
Computer Science and Mathematics
Computer Networks and Communications

Nael M Radwan, Frederick T Sheldon

Abstract: Message Queuing Telemetry Transport (MQTT) is a lightweight communication protocol widely used in Internet of Things (IoT) systems; however, its original design prioritizes efficiency over security, making authentication and authorization critical areas of concern, particularly when wildcard subscriptions and access control misconfigurations are present. This study experimentally investigates the effectiveness, limitations, and performance impact of MQTT authentication and authorization mechanisms in a controlled IoT environment. The experiments were conducted using the Eclipse Mosquitto broker and MQTT clients implemented in C++, evaluating username/password and certificate-based authentication alongside Access Control List (ACL)–based authorization under multiple test scenarios. Metrics including authentication success rate, false acceptance and rejection rates, authorization effectiveness, latency, system throughput, and resource consumption were systematically measured. The results show that password-based authentication achieves high success rates when correctly configured but remains vulnerable in the absence of transport-layer security, while certificate-based authentication improves security at the cost of increased latency and computational overhead. Authorization effectiveness was strongly influenced by ACL granularity, with misconfigured or default policies enabling unauthorized access, especially when wildcard topic filters were used. Overall, the findings demonstrate a clear trade-off between security strength and system performance in MQTT-based IoT deployments. The study concludes that although MQTT provides basic security mechanisms, stronger and more fine-grained authentication and authorization strategies are required to achieve secure and scalable IoT communication.
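
Central to the wildcard-related risks discussed above is MQTT topic-filter matching with '+' (single level) and '#' (multi-level, last position only). The sketch below implements the standard matching rule and pairs it with an illustrative ACL (not Mosquitto's implementation) to show how a broad '#' entry over-grants access.

```python
# MQTT-style topic-filter matching and a toy ACL check. The ACL
# contents are illustrative; in Mosquitto this would live in an
# acl_file consumed by the broker.

def topic_matches(flt, topic):
    f_parts, t_parts = flt.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True                   # '#' matches all remaining levels
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)   # '+' consumed exactly one level each

def authorized(acl, user, action, topic):
    return any(action in actions and topic_matches(flt, topic)
               for flt, actions in acl.get(user, []))

acl = {"sensor01": [("site/+/telemetry", {"publish"})],
       "ops":      [("site/#", {"subscribe"})]}   # broad: every subtopic
```

The "ops" entry demonstrates the misconfiguration risk: "site/#" grants read access to every topic under "site/", including ones the operator never intended to expose.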

Article
Computer Science and Mathematics
Computer Networks and Communications

Daniel Gaetano Riviello, Giusi Alfano

Abstract: Spectrum Sensing (SS) is expected to play a crucial role in forthcoming 6G Cognitive Radio Networks (CRNs), where unlicensed users will be able to dynamically access the spectrum and perform opportunistic transmissions without causing interference to licensed users. In this work, we investigate multiple-antenna SS techniques by analyzing the performance of several widely used detection schemes, namely Roy's Largest Root Test (RLRT), the Generalized Likelihood Ratio Test (GLRT), the Eigenvalue Ratio Detector (ERD), and the Energy Detector (ED), under varying false alarm probabilities and signal-to-noise ratios (SNRs). The study assumes a fixed number of four sensors at the secondary-user receiver. To evaluate the behavior of these detectors in realistic conditions, we developed a software-defined radio (SDR) testbed using Universal Software Radio Peripherals (USRPs), enabling both primary user signal transmission and secondary user data acquisition. The experimental results, illustrated through Receiver Operating Characteristic (ROC) and performance curves, are compared with simulation outcomes. The analysis is complemented by a detailed state-of-the-art listing of the available analytical characterizations of the false alarm probabilities for the considered SS schemes. In particular, the GLRT false alarm probability, previously unavailable in explicit form for a four-antenna receiver, is computed as well. These results validate the superior detection capability of RLRT over the other tested schemes, confirming its effectiveness not only in theoretical analysis but also in practical SDR-based implementations.
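
For intuition about the detectors being compared, the sketch below computes the four test statistics from a 2x2 sample covariance, where the eigenvalues have a closed form. The paper's setting uses four antennas, and the numbers here are purely illustrative; these are the standard forms of the statistics, not the paper's derivations.

```python
import math

# Eigenvalue-based spectrum-sensing test statistics from a 2x2 sample
# covariance R of the received signal; each statistic is compared to a
# threshold set by the target false alarm probability.

def eig2(a, b, c):
    # Eigenvalues of the symmetric matrix [[a, b], [b, c]].
    d = math.sqrt((a - c) ** 2 + 4 * b * b)
    return (a + c + d) / 2, (a + c - d) / 2

def detectors(R, noise_var):
    (a, b), (_, c) = R
    lmax, lmin = eig2(a, b, c)
    return {
        "RLRT": lmax / noise_var,         # largest root vs. known noise power
        "GLRT": lmax / ((a + c) / 2),     # largest root vs. average eigenvalue
        "ERD":  lmax / lmin,              # eigenvalue ratio, noise-blind
        "ED":   (a + c) / (2 * noise_var),  # average energy per antenna
    }

# Strongly correlated antennas (a rank-one signal component) push the
# largest eigenvalue up, so all eigenvalue-based statistics grow:
stats = detectors(R=[[2.0, 1.0], [1.0, 2.0]], noise_var=1.0)
```

Note that GLRT and ERD need no noise-power estimate, which is why they are attractive when the noise variance is uncertain, while RLRT and ED assume it is known.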

Article
Computer Science and Mathematics
Computer Networks and Communications

Vimal Teja Manne

Abstract: This paper presents a payment-processing gateway that executes debit→credit transfers across banks using a virtual payment ID resolver, a transaction-processing core, and bank connectors engineered for low-latency, high-reliability flows. The gateway coordinates gRPC services behind load balancers, applies client/server-side request balancing, and tunes retry policies with bounded attempts to withstand temporary faults, then commits or rolls back atomically via a scheduler-driven reversal queue to guarantee consistency. A centralized per-account mutex, database sharding by account hash, and ElasticSearch-backed idempotency logs provide ordering, concurrency control, and fast lookups at scale, while Dockerized microservices simplify deployment and scale-out. Benchmarks vary node counts and connection fan-out to measure throughput and tail latency under debit/credit paths, showing trade-offs among locks, retries, and batching, and aligning with fast-payment system patterns and modern gRPC load-balancing practices for payment gateways.
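
The idempotency-log idea mentioned above can be sketched with an in-memory dict standing in for the ElasticSearch-backed store. The key names and result schema are assumptions of this sketch, not the gateway's actual design; the point is that a retried request with the same key replays the stored result rather than debiting twice.

```python
# Sketch of idempotent transfer processing: the first execution's result
# is logged under the client-supplied idempotency key, so bounded retries
# after a timeout cannot double-apply the debit.

class IdempotentProcessor:
    def __init__(self):
        self.log = {}   # idempotency_key -> result of the first execution

    def transfer(self, key, debit_account, credit_account, amount, balances):
        if key in self.log:
            return self.log[key]          # replayed retry: no double debit
        if balances[debit_account] < amount:
            result = {"status": "declined"}
        else:
            balances[debit_account] -= amount
            balances[credit_account] += amount
            result = {"status": "committed"}
        self.log[key] = result
        return result
```

In the gateway described above, the log lookup and the balance mutation would additionally need to be atomic (hence the per-account mutex), which this single-threaded sketch does not model.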

Technical Note
Computer Science and Mathematics
Computer Networks and Communications

Daisuke Sugisawa

Abstract: This paper proposes a bidirectional Layer 7 proxy architecture using NGINX and a minimal Acceptor module. In conventional Layer 7 load balancers, NGINX issues connect() to the backend for each request, making context switches between kernel and user space structurally unavoidable due to TCP handshakes. In the proposed approach, service modules register sockets with completed TCP handshakes to the Acceptor in advance, and NGINX receives the file descriptors via UNIX domain sockets. This eliminates connect() calls and skips per-request TCP handshakes. Performance evaluation demonstrates that the proposed method directly translates context switch reduction into throughput (RPS) improvement and latency reduction. While the conventional method shows no correlation between context switches and RPS, the proposed method enables context switches to function as a "controllable performance parameter," forming the foundation for deterministic throughput control. Additionally, the Acceptor queue functions as a buffer, structurally limiting requests that exceed the service module's processing capacity during traffic spikes, thereby avoiding non-linear performance degradation. This approach enables dynamic reconfiguration and graceful restarts through socket file descriptor passing, without relying on health checks or frequent configuration reloads. It presents a new design guideline that achieves deterministic throughput control at the application layer, complementing scale-out-dependent resource operations in cloud environments.
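
The descriptor hand-off at the heart of this design can be demonstrated with standard SCM_RIGHTS passing over a UNIX domain socket (Python 3.9+ on a Unix host). This is a generic sketch of the mechanism, not the note's NGINX/Acceptor implementation.

```python
import os
import socket

# SCM_RIGHTS file-descriptor passing: an Acceptor-side process hands an
# already-open descriptor to a peer over a UNIX domain socket, so the
# receiver can use the connection without issuing its own connect().

def pass_fd(chan, fd):
    # Send one fd as ancillary data alongside a small marker message.
    socket.send_fds(chan, [b"fd"], [fd])

def receive_fd(chan):
    # The kernel duplicates the fd into the receiving process.
    _, fds, _, _ = socket.recv_fds(chan, 16, 1)
    return fds[0]
```

In the proposed architecture the passed descriptor would be a handshake-complete TCP socket registered with the Acceptor; a pipe stands in for it in the test below, since the data-carrying property being demonstrated is the same.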

Article
Computer Science and Mathematics
Computer Networks and Communications

Robert Campbell

Abstract: The impending threat of cryptographically relevant quantum computers (CRQCs) necessitates a comprehensive migration to post-quantum cryptography (PQC) across all computing domains. While commercial Cryptographic Asset Discovery and Inventory (CADI) tooling has emerged to support enterprise IT environments, embedded systems, which dominate defense platforms, tactical communications, and critical infrastructure, remain inadequately addressed. This paper presents a comprehensive framework for embedded systems-specific CADI, establishing a six-class taxonomy based on cryptographic characteristics and discovery feasibility. We show through feasibility analysis that fundamental constraints of embedded systems, including severe resource limitations, mission/operational continuity requirements (often including availability and safety imperatives), certification requirements, and hardware-bound cryptography, render IT-centric CADI approaches largely ineffective. Documentation-based discovery through vendor Cryptographic Bills of Materials (CBOMs) should typically serve as the primary methodology, with automated scanning relegated to supplemental verification. We analyze technical barriers to detection, including static linking, stripped binaries, cryptographic hardware offload, and proprietary implementations. The framework addresses lightweight cryptography considerations for constrained devices that are unable to accommodate standard PQC algorithm sizes, and examines lifecycle and certification constraints, including those related to DO-178C, IEC 62443, and Common Criteria. We establish planning-assumption discovery accuracy expectations (Table 6) ranging from 55–99% by embedded system class, and propose detection methodologies calibrated to each class. The paper concludes with integration pathways for Department of Defense Risk Management Framework processes and PQC migration planning.

Technical Note
Computer Science and Mathematics
Computer Networks and Communications

Daisuke Sugisawa

Abstract: Since approximately 2005, major processor manufacturers have shifted their architectural focus from instruction-level parallelism (ILP) toward multicore and manycore parallelism to achieve higher performance. Rather than relying on deeper pipelines and speculative execution, performance gains have increasingly been realized through thread-level parallelism (TLP). Consequently, the responsibility for efficiently utilizing processor resources has transitioned from hardware mechanisms to software implementations. This technical note examines design strategies for achieving deterministic, high-throughput packet processing on manycore architectures using the Data Plane Development Kit (DPDK). It presents a simplified Packet Gateway (PGW) pipeline implementation, analyzing cache-coherence effects, NUMA-local memory allocation, and multicore scheduling patterns critical to maintaining per-packet processing budgets under nanosecond-level constraints.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated