Computer Science and Mathematics

Article
Computer Science and Mathematics
Computer Networks and Communications

Ade Dotun Ajasa

,

Hassan Chizari

,

Abu Alam

Abstract: Docker is widely used to deploy applications in containerised environments. This study investigates whether the physical memory utilisation of databases deployed in Docker differs from that of equivalent non-Docker deployments during a Structured Query Language injection (SQLi) attack. A quantitative approach was adopted, using Glances to collect system data, JASP 0.18 for descriptive statistics and paired-samples t tests, and StatKey to examine mean differences. Two application stacks were evaluated: DVWA (PHP/MariaDB) and Acunetix (MySQL). Within the conditions examined in this study, the Docker-based deployments did not demonstrate improved memory performance when compared with the non-Docker deployments during SQLi. Instead, the results suggest that the Docker-based configurations were associated with higher memory use.
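The paired-samples t test the study ran in JASP can be computed directly from per-pair differences. The sketch below uses hypothetical memory readings, not the paper's data:

```python
import math
import statistics

def paired_t(sample_a, sample_b):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-pair differences."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1 denominator)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical memory-use samples (MB) during SQLi: Docker vs. non-Docker
docker     = [512.0, 530.0, 525.0, 540.0, 518.0]
non_docker = [480.0, 505.0, 498.0, 510.0, 490.0]
t = paired_t(docker, non_docker)
```

A positive t here would indicate higher mean memory use in the Docker condition, matching the direction of the study's finding.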

Article
Computer Science and Mathematics
Computer Networks and Communications

Feiyu Lin

,

Shunqing Zhang

Abstract: With the rapid growth of location-based services (LBS) in the Internet of Things (IoT), fingerprint-based indoor localization has attracted attention for its high accuracy. However, environmental changes degrade signal stability, and traditional methods require frequent site surveys, leading to high labor and time costs. In response, we propose an adaptive Bluetooth Low Energy (BLE) indoor localization system that relies on an updated fingerprint to address these issues. We integrate the Domain Adaptation Localization (DALoc) method into the system. The framework combines historical data with deep transfer learning to extract features and update fingerprints based on a small amount of labeled data at a new time. To enhance the adaptability of the Received Signal Strength (RSS), we utilize historically collected RSS data to fit a K-order Gaussian mixture model (GMM). Furthermore, we assess the system’s performance using the Cramér-Rao lower bound (CRLB) to ensure reliability and robustness. The DALoc approach helps address the challenges posed by mixed and time-varying signals. We conducted multiple sets of experiments related to positioning error in the laboratory corridor, and the results demonstrate that the system’s location accuracy exceeds 70% when tested with dynamic signals, with a location error within the meter-level range.

Article
Computer Science and Mathematics
Computer Networks and Communications

Nurul I. Sarkar

,

William Knight

Abstract: Network simulation plays a critical role in evaluating protocol performance and predicting real‑world behavior before deployment. This paper presents a comparative study of two widely used network simulators, OMNeT++ (open source) and Riverbed Modeler (commercial), with a focus on feature capabilities, modeling flexibility, and throughput accuracy when results are validated against a physical testbed. We first develop a Wi-Fi network testbed using Gigabit Wi-Fi cards (IEEE 802.11ac) and an access point (AP). Using this testbed, we conduct various field experiments involving Wi-Fi links in a university multistory building under line-of-sight indoor radio propagation conditions. The link throughput was recorded for various AP-Rx separations (Rx: receiver/wireless laptop) ranging from 5 to 25 m. We then develop simulation models on both network simulators using a consistent stream of multimedia traffic to obtain accurate readings for network throughput. We validate the link throughputs of both simulators by comparing them against the testbed results under the same operating conditions. Our findings show that while Riverbed Modeler exhibits a clear drop in throughput as AP-Rx separation increases from 5 to 25 m, OMNeT++ showed no corresponding throughput drop. This discrepancy in throughput performance between two credible simulators suggests that network researchers and practitioners should be careful in selecting the most appropriate simulator for simulation tasks based on experimental goals, network context, and the required level of modeling and validation. Finally, we provide a best-practice checklist for network simulation and model validation.

Article
Computer Science and Mathematics
Computer Networks and Communications

Gregory Yu

,

Ian Butler

,

Aaron Collins

Abstract: The integration of large language models into wireless communication has shown promising results for individual tasks. However, existing approaches are typically designed for single-task scenarios and rely on supervised fine-tuning that fails to optimize for long-term decision quality. In this paper, we propose WirelessLLM-Agent, a unified LLM-based agent framework for multi-task wireless communication decision-making. Our framework integrates a semantic state serialization module that transforms heterogeneous wireless states into structured textual representations, a multi-task adapter architecture based on MoE-LoRA for parameter-efficient knowledge sharing, and a two-stage training paradigm combining SFT warm-start with GRPO reinforcement learning enhanced by lookahead collaborative simulation. Extensive experiments on channel multi-task learning, mobile edge computing task offloading, and cooperative edge caching demonstrate that WirelessLLM-Agent consistently outperforms existing methods while exhibiting strong zero-shot generalization.

Article
Computer Science and Mathematics
Computer Networks and Communications

Youssef Ahmedm

,

Ruotong Luan

Abstract: Reinforcement learning (RL) in mobile edge computing (MEC) faces critical challenges of data heterogeneity, communication overhead, and limited generalization across diverse preferences and system configurations. We propose Adaptive Reinforcement Learning Offloading (ARLO), a unified framework integrating adaptive dissimilarity measures for federated learning with generalizable multi-objective optimization for computation offloading. The Adaptive Dissimilarity Measure module leverages parameter dissimilarity with Lagrangian multipliers to mitigate model drift under Non-IID data and loss dissimilarity to reduce communication overhead via adaptive aggregation. The Contextual Multi-Objective Decision module employs histogram-based state encoding and a Generalizable Neural Network Architecture with action masking, enabling a single policy to adapt to varying preferences, server counts, and CPU frequencies. Experiments show ARLO achieves 82.6% accuracy on CIFAR-10 with 44.3% fewer communication rounds than FedProx, and a 121.0% hypervolume improvement in offloading with only 1.7% generalization error across unseen configurations.

Article
Computer Science and Mathematics
Computer Networks and Communications

Krišjānis Petručeņa

,

Sergejs Kozlovičs

,

Juris Vīksna

,

Elīna Kalniņa

,

Reinis Isaks

,

Edgars Celms

,

Lelde Lāce

,

Edgars Rencis

Abstract: Quantum key distribution (QKD) networks require relaying when distant key management entities share no direct quantum link. Most relay strategies, however, rely on centralized control or globally maintained routing state. This paper asks whether useful security and efficiency can still be obtained with topology-oblivious stochastic forwarding. It studies the security-overhead trade-off in a model in which fragmented key material is relayed via random-walk variants and reconstructed under privacy amplification. Under a restricted model with at most one compromised relay, the analysis asks whether strictly local forwarding can retain useful information-theoretic security. Evaluation on the GÉANT topology, representing a European academic backbone network, shows clear differences between random-walk variants. The proposed highest-score-neighbor local path-diversification heuristic reduces the risk that relayed key material passes through a compromised node. The evaluation also shows that a preliminary loop-erasure step significantly shortens sampled routes and improves throughput in the model. These findings position topology-oblivious stochastic forwarding as a decentralized alternative to global-state maintenance or centralized orchestration in QKD networks.
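The loop-erasure step described above can be sketched in a few lines: sample a random walk and cut out each cycle as soon as the walk revisits a node. The topology below is a small hypothetical graph, not the GÉANT backbone used in the paper's evaluation:

```python
import random

def loop_erased_walk(adj, start, target, rng):
    """Sample a random walk from start to target, erasing loops as they
    form: when the walk revisits a node, truncate the path back to the
    first visit, so the returned route is simple (no repeated nodes)."""
    path = [start]
    while path[-1] != target:
        nxt = rng.choice(adj[path[-1]])
        if nxt in path:                      # loop detected: erase it
            path = path[:path.index(nxt) + 1]
        else:
            path.append(nxt)
    return path

# Hypothetical 4-node ring topology (adjacency lists)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(42)
route = loop_erased_walk(adj, 0, 2, rng)
```

Shorter simple routes like these are what the paper credits for the observed throughput improvement, since relayed key fragments traverse fewer nodes.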

Article
Computer Science and Mathematics
Computer Networks and Communications

Zeren Gu

,

Jialei Tan

Abstract: Mobile Edge Computing (MEC) offers significant advantages in enhancing user experience and reducing energy consumption by offloading tasks to proximate edge servers. However, MEC systems are inherently complex, facing highly dynamic environments with fluctuating channel conditions and server loads, coupled with the necessity to optimize multiple conflicting objectives such as latency and energy consumption. Furthermore, existing offloading strategies often lack robustness against inherent uncertainties, leading to performance degradation in unpredictable scenarios. To address these critical challenges, this paper introduces ADAPT-ROMA, a novel Adaptive Dynamic Risk-aware Multi-objective Offloading Algorithm. We formulate the MEC offloading problem as a Contextual Risk-aware Multi-objective Markov Decision Process, integrating Conditional Value at Risk (CVaR) to explicitly quantify and mitigate worst-case performance losses. ADAPT-ROMA employs an adaptive weight generation network to dynamically balance latency, energy, and risk objectives based on real-time system states. Its core is built upon a robust distributed deep reinforcement learning framework, enhanced by adversarial training to bolster resilience against unmodeled uncertainties, and a hierarchical attention network for effective contextual encoding. Extensive simulations demonstrate that ADAPT-ROMA achieves a superior balance between average task latency and total energy consumption, alongside an outstanding task completion rate. Crucially, it yields a significantly low risk-aware score, outperforming baseline methods by ensuring high performance and stability even under dynamic and uncertain MEC conditions. Ablation studies, adaptability analysis, robustness evaluation, and scalability tests further confirm the individual contributions of ADAPT-ROMA's components and its practical applicability.

Article
Computer Science and Mathematics
Computer Networks and Communications

Aiman Moldagulova

,

Zhuldyz Kalpeyeva

,

Raissa Uskenbayeva

,

Nurdaulet Tasmurzayev

,

Bibars Amangeldy

,

Yeldos Altay

Abstract: Low-cost air quality sensors enable dense monitoring networks but suffer from significant measurement noise and instability, particularly in dynamic environments. Conventional fixed-window smoothing reduces noise but introduces a trade-off between signal stability and temporal responsiveness, often attenuating short-term pollution events. This paper proposes an adaptive filtering algorithm that dynamically adjusts the averaging window size based on short-term signal variability. The method relies on real-time variance estimation to balance noise suppression and sensitivity to rapid changes without increasing computational complexity. The approach is implemented within an IoT-based monitoring framework and evaluated using parallel measurements with a certified reference device. Comparative analysis against raw data and fixed-window filtering demonstrates improved statistical accuracy and stronger temporal correlation with reference measurements. In addition, this method enhances event detection stability in threshold-based monitoring scenarios. To support automated decision-making, the filtered signal is integrated into an event-driven architecture with Robotic Process Automation (RPA), enabling reliable triggering of predefined workflows. The results show that the proposed adaptive filtering provides an efficient and lightweight solution for real-time signal processing on resource-constrained devices, making it suitable for large-scale deployment in environmental monitoring systems.
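A variance-driven adaptive moving average of the kind described can be sketched as follows. The window bounds and variance threshold here are illustrative assumptions, not the paper's tuned parameters:

```python
from collections import deque
import statistics

def adaptive_filter(samples, w_min=3, w_max=15, var_threshold=4.0):
    """Adaptive moving average: when short-term variance is high, shrink
    the window to w_min for fast response; when the signal is stable,
    grow the window (up to w_max) for stronger smoothing."""
    buf, out, w = deque(maxlen=w_max), [], w_max
    for x in samples:
        buf.append(x)
        recent = list(buf)[-w_min:]                      # short-term slice
        var = statistics.pvariance(recent) if len(recent) > 1 else 0.0
        w = w_min if var > var_threshold else min(w_max, w + 1)
        window = list(buf)[-w:]
        out.append(sum(window) / len(window))
    return out
```

On a flat signal the filter converges to the long window (maximum smoothing); at a sudden step it collapses to the short window, so the output tracks the event instead of averaging it away.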

Article
Computer Science and Mathematics
Computer Networks and Communications

Jisi Chandroth

,

Jehad Ali

Abstract: The Internet of Things (IoT) comprises diverse devices connected through heterogeneous communication protocols to deliver a wide range of services. However, the complexity and scale of IoT networks make them difficult to secure. Network intrusion detection systems (NIDS) have therefore become essential for identifying malicious activities and protecting IoT environments across many applications. Although recent deep learning (DL)-based IDS approaches achieve strong detection performance, they often require substantial computation and storage, which limits their practicality on resource-constrained IoT devices. To balance detection accuracy with computational efficiency, we propose a lightweight deep learning model for IoT intrusion detection. Specifically, our method learns compact, intrusion-relevant representations from traffic features using a two-layer Multi-Layer Perceptron (MLP) embedding backbone, followed by a linear SoftMax classification head for multi-class attack detection. We evaluate the proposed approach on two benchmark datasets, CICIDS2017 and NSL-KDD, and the results show strong performance, achieving 99.85% and 99.21% accuracy, respectively, while significantly reducing model size and computational overhead. Experimental results demonstrate that the proposed method achieves excellent classification performance while maintaining a lightweight design, with fewer parameters and lower FLOPs than existing approaches.

Article
Computer Science and Mathematics
Computer Networks and Communications

Ryan J. Buchanan

,

Parker M.D. Emmerson

Abstract: Using modest spectral graph theory, we show that under the assumption of convexity, beliefs will diffuse towards consensus. Our toy model captures opinion dynamics in a manner sensitive to the order of belief updates amongst agents in a network. To accomplish this, we introduce a first-order deformation of the classical observable algebra and study the resulting non-commutative correction through an explicit graph bracket. We include a concrete computation alongside some code in the appendix.

Article
Computer Science and Mathematics
Computer Networks and Communications

Jak Brierley

Abstract: Authoritative server-client network models dominate multiplayer game implementations but incur significant operational costs due to simulation and replication workload. This paper investigates whether client-assisted replication techniques can reduce server-side CPU utilisation without degrading the quality of gameplay. An experimental prototype was developed in which both simulation and replication of movement state were deferred from the server to all connected clients. The results show a reduction in server CPU utilisation of 95-98%. While these findings demonstrate substantial computational savings on the server, shifting responsibility for replication and simulation to clients introduces new technical challenges in maintaining authoritative consistency and resistance to client-side manipulation. The impact of this model on gameplay quality was not objectively evaluated in this study and remains an area for future investigation.

Article
Computer Science and Mathematics
Computer Networks and Communications

Haowen Shi

,

Yichen Zong

Abstract: Efficient Mobile Edge Computing (MEC) resource management is critical for diverse Quality of Service (QoS) demands, but traditional reactive methods and existing preemptive policies struggle in dynamic environments, causing suboptimal experiences. This paper proposes Proactive Adaptive Preemptive Allocation (PAPA), a novel framework for intelligent, forward-looking MEC resource management. PAPA features a QoS prediction module using lightweight sequence models to forecast short-term trends, assess risk, and trigger pre-warnings. Its core, the Proactive Preemptive Strategy Learning (APPL) module, employs a deep reinforcement learning (DRL) agent with a unique dual-layer reward. This includes a proactive penalty compelling anticipatory preemptive actions when predicted QoS enters a warning zone, differentiating it from reactive approaches. PAPA further enhances adaptability via meta-learning and dynamic priority mechanisms. Extensive simulations show PAPA consistently outperforms baselines, achieving superior throughput, reduced latency, and a significantly lower critical QoS violation rate than reactive DRL. Ablation studies confirm the impact of proactive penalty and meta-learning. PAPA demonstrates competitive energy efficiency and optimized preemption, affirming its robustness and practical viability in dynamic MEC environments.

Article
Computer Science and Mathematics
Computer Networks and Communications

Hsu Ouyang

,

Shao-Yu Chang

,

Chao-Chun Chuang

,

Chia-Hung Yang

,

Yu-Chieh Lin

Abstract: Federated learning (FL) enables collaborative model training without sharing raw data and has become an important paradigm for privacy-sensitive applications such as healthcare and other regulated domains. However, most existing federated learning frameworks rely on centralized coordination servers, fixed network configurations, and complex infrastructure requirements, which limit their deployment in real-world institutional environments with strict cybersecurity and data governance constraints. In this work, we propose STellar-FL, a decentralized federated learning architecture designed for scalable cross-institution model training under network-constrained environments. The proposed framework adopts a microservice-based design consisting of a federated training orchestration module, a distributed communication layer, and federated execution nodes. STellar-FL enables secure model exchange through relay-assisted peer connectivity, eliminates the need for centralized servers with public IP exposure, and provides a unified workflow for model development, deployment, and validation. Compared with conventional federated learning frameworks, STellar-FL reduces deployment complexity, improves system robustness by removing single points of failure, and supports flexible collaboration across heterogeneous institutional infrastructures. The proposed architecture provides a practical foundation for real-world privacy-preserving AI deployment in healthcare and other data-sensitive domains.

Article
Computer Science and Mathematics
Computer Networks and Communications

Basker Palaniswamy

,

Paolo Palmieri

Abstract: Modern e-commerce platforms must handle sudden and unpredictable traffic surges caused by flash sales, festive shopping events, and viral online activity. Traditional web architectures typically adopt one of two extremes: a tightly coupled monolithic design that provides low latency but becomes fragile under heavy load, or a loosely coupled microservices architecture that improves scalability and resilience but introduces communication overhead during normal operation. This trade-off forces system designers to choose between performance efficiency and scalability robustness. This paper introduces ATLAS (Adaptive Traffic-aware Loose–tight Architecture System), a next-generation adaptive web architecture that dynamically adjusts its coupling strategy based on real-time system conditions. ATLAS employs machine learning models to analyse operational telemetry, predict traffic surges, detect anomalies, and forecast potential system failures. Using these predictions, the architecture can automatically transform its runtime structure, switching between tightly coupled monolithic execution and loosely coupled microservices deployment as traffic conditions evolve. To improve reliability, ATLAS incorporates a self-healing recovery pipeline that autonomously detects service failures, isolates faulty components, and restores normal operation without human intervention. Through case studies of large-scale platforms such as Google Search, Amazon, and Flipkart, we illustrate how existing systems can evolve toward the ATLAS paradigm, enabling self-adaptive and resilient web infrastructures for the next generation of large-scale online services.

Article
Computer Science and Mathematics
Computer Networks and Communications

Vladislav Vasilev

,

Georgi Iliev

Abstract: In this paper we introduce the CDF manifold algorithm, which operates on data sets where a single target dimension is strictly increasing in two or more input dimensions, a situation that is very common in telco data. The manifold can then be used to compute the closest upper and lower limits for a given new point, as well as its CDF. Training takes O(n ln n) steps in the best case and O(n^(3/2)) in the worst case. Look-up takes O(ln n) steps in the best case and O(n^(1/2)) in the worst case. The asymptotic computational cost is proven with a theorem. We compare our manifold method against a standard dense neural network and show its asymptotic advantages in both speed and accuracy. We also comment on potential speed gains through the use of reference points. In summary, the manifold is a non-parametric and explainable method for finding the tightest data-driven upper and lower limits of the output dimension given a new, unseen input. This makes it ideal for planning new site deployments, where actual measurements are needed as baseline performance.
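The tightest data-driven bounds for a target that is increasing in every input can be sketched with a brute-force scan: the lower limit is the largest observed target among points the query dominates, the upper limit the smallest among points dominating the query. The data below is a hypothetical telco-style example; the paper's manifold structure is what reduces look-up from this O(n) scan to sub-linear time:

```python
def monotone_bounds(data, query):
    """Tightest bounds on a target assumed increasing in every input
    dimension. Lower bound: max target over points componentwise <= query.
    Upper bound: min target over points componentwise >= query.
    Brute-force O(n) scan, for clarity only."""
    lo, hi = float("-inf"), float("inf")
    for x, y in data:
        if all(a <= b for a, b in zip(x, query)):
            lo = max(lo, y)
        if all(a >= b for a, b in zip(x, query)):
            hi = min(hi, y)
    return lo, hi

# Hypothetical data: (distance_km, load) -> latency_ms, increasing in both inputs
data = [((1.0, 0.2), 5.0), ((2.0, 0.5), 9.0), ((3.0, 0.8), 15.0)]
lo, hi = monotone_bounds(data, (2.5, 0.6))
```

A query outside the observed envelope yields an infinite bound on that side, reflecting that the data places no constraint there.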

Article
Computer Science and Mathematics
Computer Networks and Communications

Xianke Qiang

,

Zheng Chang

,

Jianhua Tang

,

Wei Feng

,

Chungang Yang

,

Yan Zhang

Abstract: Large language models (LLMs) are rapidly transforming the design and operation of communication systems, while the advent of sixth-generation (6G) networks provides the infrastructure necessary to sustain their unprecedented scale. This survey investigates the bidirectional relationship between LLMs and 6G networks from two complementary perspectives. From the perspective of LLM for Network, we illustrate how LLMs can enhance network management, strengthen security, optimize resource allocation, and act as intelligent agents. By leveraging their natural language understanding and reasoning capabilities, LLMs offer new opportunities for intent-driven orchestration, anomaly detection, and adaptive optimization beyond the scope of conventional AI models. From the perspective of Network for LLM, we discuss how 6G-native features such as integrated sensing and communication, semantic-aware transmission, and green resource management enable scalable, efficient, and sustainable training and inference of LLMs at the edge and in the cloud. Building on these two perspectives, we identify key challenges related to scalability and efficiency, robustness and security, as well as trustworthiness and sustainability. We further highlight open research directions as well. We envision that this work serves as a roadmap for cross-disciplinary research, fostering the integration of LLMs and 6G toward trustworthy and intelligent next-generation communication systems.

Concept Paper
Computer Science and Mathematics
Computer Networks and Communications

Edet Ekpenyong

,

Ubio Obu

,

Godspower Emmanuel Achi

,

Clement Umoh

,

Duke Peter

,

Udoma Obu

Abstract: In blockchain ecosystems, maintaining both transparency and privacy has become an ethical dilemma. While certain user information is shared to ensure transparency of transactions across networks, such information can be detrimental to the user, as there is a possibility of it being tampered with. For instance, in the Catalyst voting process in Cardano, users can see the amount of ADA tokens held by other users, which can influence their voting choices, especially when large ADA holders vote in support of certain ideas or proposals. To discourage such challenges as voter manipulation and vote buying, this study proposes the implementation of zero-knowledge proofs (ZKP) in blockchain ecosystems to enhance the transparency of the Catalyst voting process and improve the efficiency and speed of result release. Using a survey questionnaire and a multivocal literature review, this study demonstrates that ZKP can not only be applied in the Catalyst voting process to enhance its transparency, but can also address potential challenges to its application such as scalability, encourage trust and fairness in the voting system, and improve voter participation owing to its user-friendliness. Mathematical models emphasize scaled voting as optimal for balancing inclusion and plutocratic control.

Article
Computer Science and Mathematics
Computer Networks and Communications

Andrea Piroddi

,

Maurizio Torregiani

Abstract: This paper proposes a novel information-theoretic upper bound on the mutual information between the physical position of a user and the observed MIMO channel state information (CSI). Unlike classical Cramér-Rao bounds or I-MMSE relations, our bound explicitly incorporates the spatial variability of the channel via the Jacobian of the channel with respect to position. We provide a derivation for both local linearized models and global nonlinear bounds, highlighting the dependence on array geometry and multipath structure. The results offer new insight into the intrinsic information available for position estimation and semantic localization in wireless networks.

Article
Computer Science and Mathematics
Computer Networks and Communications

Chandramouli Haldar

Abstract: This paper introduces a Morse code transmission system that uses an ESP32 microcontroller and displays the decoded message via a Telegram bot in real time. In contrast to traditional Morse code input methods, which are normally based on a single button with a timing distinction between dot and dash, the proposed design uses two dedicated buttons for dot and dash, respectively, and a third button for sending the entire message. This approach simplifies user input, minimizes timing errors, and enhances accuracy in message delivery. The decoded text is then transmitted securely to a specified Telegram chat via the Bot API, with high speed and reliability over Wi-Fi. The system is portable, lightweight, and compact, making it ideal for covert or clandestine messaging without inviting unnecessary attention.
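The decoding step of such a system reduces to a table look-up once the two buttons have built a dot/dash string. This is a host-side sketch of that logic, not the ESP32 firmware; the abbreviated table and the space/"/" separator convention are illustrative assumptions:

```python
# Abbreviated Morse table; a full deployment would cover A-Z and 0-9.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "....": "H",
         "..": "I", ".-..": "L", "---": "O", ".-.": "R", "...": "S",
         "-": "T"}

def decode(message):
    """Decode a Morse message in which letters are separated by spaces
    and words by '/' (a common transcription convention). Unknown
    symbols map to '?'."""
    words = []
    for word in message.split("/"):
        words.append("".join(MORSE.get(sym, "?") for sym in word.split()))
    return " ".join(words)
```

With dedicated dot and dash buttons, the firmware only appends characters to the current symbol; the send button terminates the message and hands a string like the one above to the decoder before posting it to the Telegram Bot API.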

Article
Computer Science and Mathematics
Computer Networks and Communications

Aymen I. Zreikat

,

Julien El Amine

Abstract: Wireless communications face both opportunities and challenges due to the coexistence of 5G New Radio (NR) high-band, 5G mid-band, and 5G low-band technologies. Each technology uses both licensed and unlicensed spectrum to operate in separate frequency bands. For example, 5G NR uses the high-band of 24+ GHz, the mid-band of 2-6 GHz, or the low-band of less than 2 GHz, including the 5 GHz band via Licensed-Assisted Access (LAA). With the use of sophisticated coexistence mechanisms and optimization techniques, this 5G coexistence scenario in shared spectrum can be effectively managed. These strategies are essential for boosting network capacity, reducing latency, and ensuring fair spectrum use across different wireless technologies. This work provides a comprehensive system-level evaluation of multi-band coexistence and offloading strategies under realistic deployment assumptions. The simulation results confirm the effectiveness of the proposed model, showing that spectrum sharing and coexistence among these technologies deliver scalable and robust performance in heterogeneous service environments. This approach enables efficient load balancing across the entire network and highlights the need for additional features to achieve further performance gains.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated