Engineering


Article
Engineering
Telecommunications

Rubén Juárez Cádiz, Fernando Rodríguez-Sela

Abstract: Motorsport’s upcoming 2027 technical constraints reduce the role of active mechanical stabilizers and shift a larger share of vehicle-dynamics understanding to real-time perception and software. This paper introduces Agentic Visual Telemetry, a hybrid Retrieval-Augmented Generation (RAG) and Cache-Augmented Generation (CAG) framework designed to diagnose high-frequency dynamic regimes from onboard video under millisecond-level latency and edge-hardware limits. The approach combines (i) spatiotemporal gating to detect novelty and uncertainty, (ii) cache-first inference to reuse stable visual priors at O(1) cost, and (iii) safety-aware supervision with fail-silent operation and a safe-mode degradation strategy when thermal or compute margins shrink. We validate the framework on the Aspar-Synth-10K dataset, focusing on safety-critical phenomena such as suspension chatter. Retrieval grounding yields large gains over a memoryless baseline, improving Macro-F1 from 0.62 (B0) to 0.88 (B5), while maintaining real-time feasibility; a RAG-only oracle provides slightly higher PR-AUC but violates the latency envelope. Full precision–recall curves show that the proposed hybrid model preserves performance in the high-recall operating region for chatter detection, reducing false negatives consistent with the grounding hypothesis. Overall, the results demonstrate that high-fidelity video interpretation can be achieved within strict real-time constraints through cache-first, retrieval-grounded agentic perception, enabling robust visual telemetry for next-generation motorsport analytics.
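As a hedged illustration of the cache-first control flow this abstract describes (reuse a cached visual prior when the gating signal reports low novelty, fall back to retrieval-grounded inference otherwise), a minimal Python sketch follows. All names (novelty_score, retrieve_and_ground, the LRU capacity) are hypothetical and not taken from the paper's implementation.

# Illustrative sketch of a cache-first, retrieval-grounded inference loop.
# Names and thresholds are hypothetical, not the paper's implementation.
from collections import OrderedDict

class CacheFirstPerception:
    def __init__(self, capacity=256, novelty_threshold=0.35):
        self.cache = OrderedDict()          # key -> cached diagnosis (visual prior)
        self.capacity = capacity
        self.novelty_threshold = novelty_threshold

    def infer(self, frame_key, frame_features, novelty_score, retrieve_and_ground):
        # Cache hit on a stable regime: O(1) reuse of the grounded prior.
        if novelty_score < self.novelty_threshold and frame_key in self.cache:
            self.cache.move_to_end(frame_key)
            return self.cache[frame_key], "cache"
        # Novel or uncertain regime: fall back to retrieval-grounded inference.
        diagnosis = retrieve_and_ground(frame_features)
        self.cache[frame_key] = diagnosis
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least-recently used entry
        return diagnosis, "retrieval"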

Article
Engineering
Telecommunications

Vladislav Vasilev, Georgi Iliev

Abstract: In this paper, we derive a novel set of dynamic dictionary algorithms that supports perfectly balanced binary searches over large data sets. The dynamic dictionary is part of our FSP_vgv open-source C# package, which aims to implement a portable version of Octave/Matlab so that its users can apply the iterative design methodology faster. Our package does not attempt to supersede other software but fills the specialised needs listed in this work. By processing the version control commit history of the FSP_vgv package, we validate empirically that, with an unknown research horizon, the time spent developing grows exponentially in the volume of production code. We identify a parameter related to how intuitive a programming language is, which also controls the exponential growth of research time, hence motivating the need for the highly intuitive Octave/Matlab language and its various software deployments.
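A minimal Python sketch of the core idea of a dictionary backed by sorted arrays with perfectly balanced binary-search lookups follows; it is illustrative only and is not the FSP_vgv C# implementation.

# Sorted, parallel key/value arrays give O(log n) binary-search lookups.
# Illustrative sketch only; not the FSP_vgv package.
import bisect

class SortedDict:
    def __init__(self):
        self.keys, self.values = [], []

    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.values[i] = value           # overwrite an existing key
        else:
            self.keys.insert(i, key)         # O(n) shift; lookups stay O(log n)
            self.values.insert(i, value)

    def lookup(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        raise KeyError(key)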

Article
Engineering
Telecommunications

Najmeh Khosroshahi, Ron Mankarious, M. Reza Soleymani

Abstract: This paper presents a hardware-aware field-programmable gate array (FPGA) implementation of a layered 2-dimensional corrected normalized min-sum (2D-CNMS) decoder for quasi-cyclic low-density parity-check (QC-LDPC) codes in very small aperture terminal (VSAT) satellite communication systems. The main focus of this work is leveraging Xilinx Vitis high-level synthesis (HLS) to design and generate an LDPC decoder IP core based on the proposed algorithm, enabling rapid development and portability across FPGA platforms. Unlike conventional NMS and 2D-NMS algorithms, the proposed architecture introduces dyadic, multiplier-free normalization combined with two-level magnitude correction, achieving near-belief propagation (BP) performance with reduced complexity and latency. Implemented entirely in HLS and integrated in Vivado, the design achieves real-time operation on a Zynq UltraScale+ multiprocessor system-on-chip (MPSoC) with a throughput of 116-164 Mbps at 400 MHz and resource utilization of 8.7K-22.9K LUTs, 2.6K-7.5K FFs, and zero DSP blocks. Bit-error-rate (BER) results show no error floor down to 10^-8 over an additive white Gaussian noise (AWGN) channel model. Fixed scaling factors are optimized to minimize latency and hardware overhead while preserving decoding accuracy. These results demonstrate that the proposed HLS-based 2D-CNMS IP core offers a resource-efficient, high-performance solution for multi-frequency time division multiple access (MF-TDMA) satellite links.
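To illustrate what a dyadic, multiplier-free normalization means in a min-sum check-node update, a small floating-point Python sketch follows. The scaling factor alpha = 3/4 and the omission of the paper's two-level magnitude correction are assumptions for illustration; a fixed-point design would realise the scaling with shifts and adds only.

import numpy as np

def cnms_check_update(llrs, alpha_num=3, alpha_shift=2):
    """Normalized min-sum check-node update with a dyadic scaling factor
    alpha = alpha_num / 2**alpha_shift (e.g., 3/4), which a fixed-point
    implementation can realise with shifts and adds. Illustrative sketch;
    the paper's exact factors and two-level correction are not reproduced.
    Assumes a check-node degree of at least two."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    prod_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]    # two smallest magnitudes
    out = np.empty_like(llrs)
    for j in range(len(llrs)):
        m = min2 if j == order[0] else min1        # exclude the edge's own magnitude
        out[j] = prod_sign * signs[j] * (alpha_num * m) / (1 << alpha_shift)
    return out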

Article
Engineering
Telecommunications

Anfal R. Desher, Ali Al-Shuwaili

Abstract: In disaster scenarios where communication infrastructure is damaged, Unmanned Aerial Vehicle (UAV)-assisted wireless networks can provide temporary connectivity and hence the indispensable mobile edge computing functionality. However, limited resources on UAVs require prioritization of critical data in such scenarios. This research addresses reliable transmission and task offloading by modeling user tasks as layered compositions, where the base layer is essential and enhancement layers are optional. TDMA-based prioritization is employed to ensure reliable decoding of high-priority layers of the computational tasks (i.e., intra-user priority) along with the inter-user priority needed for urgent users such as rescue teams. Under these reliability constraints, the work formulates a joint communication-computation optimization problem to allocate transmission power and UAV CPU cycles efficiently in order to minimize the total weighted offloading latency. The original problem is non-convex, so we leverage epigraph and perspective functions to recast it in convex form. We also derive analytically, using KKT conditions, the optimal water-filling-like solutions for the reformulated problem. The numerical results show that, at a signal-to-noise ratio of 5 dB, the proposed algorithm achieves relative latency reductions over the baseline algorithms (39.99% versus Equal Allocation, 49.99% versus Enhancement First, and 69.99% versus No Priority), reflecting the considerable latency reduction obtained with priority-aware offloading.
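The "water-filling-like" structure mentioned above can be illustrated with the classic water-filling allocation obtained from KKT conditions for a sum-rate problem; the sketch below is generic and does not reproduce the paper's latency-minimization expressions.

import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Classic water-filling maximizing sum_i log2(1 + g_i * p_i) subject to
    sum_i p_i <= p_total, solved by bisection on the water level. Shown only
    to illustrate the water-filling structure that KKT conditions yield; the
    paper's exact expressions differ."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / gains.min()      # bracket the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                        # candidate water level
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

# Example: three users with different effective channel gains
print(water_filling([2.0, 1.0, 0.25], p_total=4.0))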

Review
Engineering
Telecommunications

Nick Bray, Michael Hempel, Hamid Sharif

Abstract: As wireless communications become increasingly integral to everyday life, the demand for higher data rates, reliability, and efficiency continues to grow. This is further accelerated by the rapid rise of the Internet of Things (IoT) and industrial automation. However, traditional algorithm-based signal processing is constrained by algorithmic complexity and a limited ability to adapt to, and cope with, increasingly adverse and congested channel conditions, which reduces the effectiveness of conventional digital signal processing (DSP) techniques in real-world environments. To address these challenges, approaches using Deep Learning (DL) have rapidly gained attention as a promising alternative to traditional DSP techniques. DL techniques excel in adaptability and have been shown to outperform traditional approaches in various RF environments. In this survey, we examine and analyze the various stages that comprise popular wireless transmission techniques, specifically Orthogonal Frequency Division Multiplexing (OFDM), which forms the foundation for numerous technologies, including Wi-Fi, 4G LTE, 5G, and DVB. We review recent research activities to implement the various stages of the OFDM receiver chain using DL methods, including synchronization, Cyclic Prefix (CP) removal, Fast Fourier Transform (FFT), channel estimation and equalization, demodulation, and decoding. We also review approaches that take a holistic view and aim to use a unified DL model for the entire signal processing chain. For each stage, we review existing Deep Learning-based methods and provide insights into how they aim to meet or exceed the performance of traditional approaches. This survey seeks to provide a comprehensive overview of the current development of deep learning-based OFDM systems, highlighting the potential benefits and the challenges that remain in fully replacing conventional signal processing methods with modern deep learning approaches.
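For reference, the conventional DSP baseline for several of the surveyed receiver stages (CP removal, FFT, least-squares channel estimation, one-tap equalization) can be written in a few lines; the numpy sketch below is illustrative, with a single, fully pilot-loaded OFDM symbol assumed for simplicity.

import numpy as np

def ofdm_receive(rx, n_fft, cp_len, pilot_symbols):
    """Conventional baseline for the OFDM receiver stages the survey covers:
    cyclic-prefix removal, FFT, least-squares channel estimation from known
    pilots, and one-tap equalization. Illustrative sketch (one OFDM symbol,
    all subcarriers used as pilots)."""
    symbol = np.asarray(rx)[cp_len:cp_len + n_fft]    # cyclic-prefix removal
    freq = np.fft.fft(symbol, n_fft)                  # FFT to the subcarrier domain
    h_ls = freq / pilot_symbols                       # least-squares channel estimate
    return h_ls, freq / h_ls                          # estimate + one-tap equalized symbols

# Toy usage: one pilot-loaded QPSK symbol through a two-tap channel
n_fft, cp_len = 64, 16
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, (n_fft, 2))
tx_freq = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
tx_time = np.fft.ifft(tx_freq, n_fft)
tx = np.concatenate([tx_time[-cp_len:], tx_time])     # prepend the cyclic prefix
h = np.array([1.0, 0.4j])                             # two-tap channel, shorter than the CP
rx = np.convolve(tx, h)[:len(tx)]
h_est, eq = ofdm_receive(rx, n_fft, cp_len, pilot_symbols=tx_freq)
print(np.allclose(h_est, np.fft.fft(h, n_fft)))       # LS estimate matches the true response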

Article
Engineering
Telecommunications

Saugat Sharma, Grzegorz Chmaj, Henry Selvaraj

Abstract: In the age of the Internet of Things (IoT), IoT devices scattered across various locations gather and store data in a decentralized manner to improve computational efficiency. Nevertheless, within IoT networks, factors such as fragile devices, challenging deployment conditions, and unreliable data transmission raise the likelihood of data gaps, potentially having a substantial impact on subsequent data processing and resulting in system failure. Conventional imputation approaches rely on historical trends or sensor fusion techniques that combine information from different sensors to fill gaps where information is missing. Historical trends struggle to capture new or emerging patterns, whereas sensor fusion, even though it shows promising results, relies on information from multiple sensors in the same target environment, making it vulnerable to single-point failures. This article presents an alternative strategy: sensor-based fusion in which multiple sensors gather data from different targets independently. The architecture gathers sensor information from other locations/targets (multiple locations) sensing the same environmental quantities, learns the distributions and correlations, and employs an algorithm to generate synthetic data for imputing the missing information. The study conducted experiments by fusing weather station data from various US locations and comparing the effectiveness of this approach to conventional methods. Further, the proposed synthetic data generation approach outperformed other algorithms when applied to the fused weather station dataset. This approach mitigates the risk of single-point failures and offers a more robust solution for dealing with missing data in IoT networks.
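A minimal sketch of the cross-location imputation idea, assuming a simple least-squares linear mapping fitted on overlapping observations stands in for the paper's learned distribution/correlation model and synthetic-data generator:

import numpy as np

def impute_from_other_sites(target, others):
    """Fill NaN gaps in `target` (one station's series) using readings of the
    same quantity at other stations, via a least-squares linear mapping fitted
    on timestamps where all series are observed. A minimal stand-in for the
    paper's approach; not its actual algorithm."""
    target = np.asarray(target, dtype=float)
    X = np.column_stack([np.ones(len(target))] + [np.asarray(o, float) for o in others])
    observed = ~np.isnan(target) & ~np.isnan(X).any(axis=1)
    coeffs, *_ = np.linalg.lstsq(X[observed], target[observed], rcond=None)
    filled = target.copy()
    gaps = np.isnan(target) & ~np.isnan(X).any(axis=1)
    filled[gaps] = X[gaps] @ coeffs
    return filled

# Toy example: station A has a gap; stations B and C track the same temperature
t = np.linspace(0, 6, 200)
b, c = 20 + 5 * np.sin(t), 21 + 5 * np.sin(t + 0.1)
a = 0.5 * b + 0.5 * c + 0.3
a[50:70] = np.nan                                      # simulated transmission gap
print(np.nanmax(np.abs(impute_from_other_sites(a, [b, c]) - (0.5 * b + 0.5 * c + 0.3))))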

Article
Engineering
Telecommunications

Afan Ali, Muhammad Usama Zahid, Maqsood Hussain Shah

Abstract: The rapid evolution toward sixth-generation (6G) wireless networks introduces Integrated Sensing and Communication (ISAC) as a key enabler for intelligent and resource-efficient systems. Traditional resource allocation schemes for ISAC primarily focus on maximizing spectral efficiency, sensing accuracy, or energy efficiency. However, as networks increasingly support semantics-driven applications, the fidelity of transmitted information becomes equally critical. In this paper, we propose a semantic-aware resource allocation mechanism for 6G ISAC systems that leverages the Deep Deterministic Policy Gradient (DDPG) reinforcement learning algorithm. Unlike conventional approaches, our method explicitly incorporates semantic constraints into the optimization process, prioritizing semantic fidelity while jointly enhancing sensing accuracy and energy efficiency. Simulation results, benchmarked against 3GPP’s emerging 6G standards, demonstrate that the proposed mechanism achieves notable performance improvements across all three dimensions, highlighting its potential to support the next generation of intelligent, context-aware communication systems.
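One hedged way to picture the semantic constraint entering the optimization is through the agent's reward shaping; the weights, the fidelity floor, and the metric names below are illustrative assumptions, not the paper's formulation.

def semantic_aware_reward(semantic_fidelity, sensing_accuracy, energy_efficiency,
                          fidelity_floor=0.8, w_sem=0.5, w_sense=0.3, w_energy=0.2,
                          violation_penalty=1.0):
    """Hypothetical reward shaping for a DDPG agent in semantic-aware ISAC
    resource allocation: a weighted sum of the three objectives, plus a hard
    penalty whenever semantic fidelity drops below the required floor.
    Weights, floor, and metric names are illustrative only."""
    reward = (w_sem * semantic_fidelity
              + w_sense * sensing_accuracy
              + w_energy * energy_efficiency)
    if semantic_fidelity < fidelity_floor:
        reward -= violation_penalty * (fidelity_floor - semantic_fidelity)
    return reward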

Article
Engineering
Telecommunications

Anoush Mirbadin

Abstract: This paper investigates a receiver-centric decoding framework for unit-rate transmission in which no redundancy is conveyed through the physical channel. Only k information bits are transmitted over an additive white Gaussian noise (AWGN) channel, while reliability is pursued by structured hypothesis testing and increased receiver-side computational complexity. The receiver embeds each candidate information hypothesis into a higher-dimensional (k, n) linear block code and evaluates all 2^k hypotheses in parallel. For each hypothesis, a single message-passing iteration on the Tanner graph is employed as a soft refinement operator, and the final decision is obtained via an orthogonality-based constraint metric that measures the consistency of the refined estimate with the hypothesis-induced code structure. The parity-related terms used within this metric are not modeled as stochastic channel observations and do not introduce additional mutual information beyond the channel output; instead, they act as deterministic, hypothesis-conditioned constraint weights that control how strongly code consistency is enforced within the decision rule. The relationship between metric weighting, apparent horizontal shifts in bit-error-rate (BER) curves, and information-theoretic limits is explicitly clarified. Simulations for a short (8, 24) code demonstrate that near maximum-likelihood decision behavior can be approached by trading receiver complexity for reliability in a finite-hypothesis regime, without altering the physical channel model or violating established channel-capacity principles.
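A heavily simplified Python sketch of the hypothesis-testing decision rule follows; it substitutes a (7, 4) Hamming code for the paper's (8, 24) code, omits the single message-passing refinement iteration, and uses a soft parity-check consistency term as a stand-in for the orthogonality-based metric. All names and weights are assumptions.

import numpy as np
from itertools import product

# Hamming (7, 4) in systematic form: G = [I | P], H = [P^T | I]
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def hypothesis_decode(channel_llrs, parity_llr_mag=4.0, beta=1.0):
    """Exhaustive receiver-side hypothesis testing in the spirit of the paper:
    each of the 2^k information hypotheses is embedded into the code, parity
    positions receive hypothesis-conditioned LLRs of fixed magnitude, and each
    hypothesis is scored by its channel agreement plus a soft parity-check
    consistency term weighted by beta. Illustrative sketch only."""
    best_u, best_score = None, np.inf
    for u in product([0, 1], repeat=4):
        c = (np.array(u) @ G) % 2
        llrs = np.concatenate([channel_llrs, parity_llr_mag * (1 - 2 * c[4:])])
        likelihood = -np.sum(channel_llrs * (1 - 2 * c[:4]))      # channel agreement
        t = np.tanh(llrs / 2.0)
        constraint = sum(1.0 - np.prod(t[H[m] == 1]) for m in range(H.shape[0]))
        score = likelihood + beta * constraint
        if score < best_score:
            best_u, best_score = np.array(u), score
    return best_u

# Noiseless sanity check: BPSK LLRs for the information word [1, 0, 1, 1]
print(hypothesis_decode(np.array([-5.0, 5.0, -5.0, -5.0])))       # -> [1 0 1 1]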

Article
Engineering
Telecommunications

Sirigiet Phunklang, Atawit Jantaupalee, Patawee Mesawad, Preecha Yupapin, Piyaporn Krachodnok

Abstract: This work presents a computational study of a hybrid plasmonic–photonic Panda-ring antenna embedded with a gold grating for dual-mode optical and terahertz (THz) transmission. The proposed structure integrates whispering gallery modes (WGMs) supported by a multi-ring resonator with surface plasmon polariton (SPP) excitation at a metal–dielectric interface, enabling strong near-field confinement and efficient far-field radiation. A systematic structural evolution—from a linear silicon waveguide to single-ring, add-drop, and Panda-ring configurations—is investigated to clarify the role of resonant coupling and power routing. Full-wave simulations using Optiwave FDTD and CST Microwave Studio are employed to analyze electric-field distributions, spectral power intensity, and radiation characteristics. The results demonstrate that the embedded gold grating facilitates effective SPP–WGM hybridization, allowing confined photonic energy to be converted into directional radiation with a peak gain exceeding 5 dBi near 1.52–1.55 µm. The proposed antenna exhibits stable dual-mode operation, making it a promising candidate for Li-Fi transmitters, THz wireless links, and integrated photonic–plasmonic communication systems.

Review
Engineering
Telecommunications

Emanuel Trinc, Cosmin Ancuti, Andy Vesa, Calin Simu

Abstract: Accurate modeling of outdoor Wi-Fi propagation in dense urban environments is essential for smart city connectivity. Deterministic ray-tracing techniques provide high-fidelity insight into multipath propagation but suffer from high computational cost and limited scalability in large 3D environments. This work investigates a hybrid approach that combines MATLAB-based ray-tracing simulations with Machine Learning to enable scalable Wi-Fi 7 network analysis. A large dataset is generated over a realistic simulated university campus, covering multiple frequency bands (2.4, 5, and 6 GHz), transmit power levels, and ray-tracing configurations with reflections and diffractions. Several regression models are evaluated, with emphasis on transformer-based architectures. Results show that the FT-Transformer accurately approximates ray-tracing outputs while reducing inference time from months to minutes. The proposed framework enables fast surrogate modeling of radio propagation and supports network planning and digital twin applications.

Article
Engineering
Telecommunications

Stefano Cunietti, Víctor Monzonís Melero, Chiara Sammarco, Ilaria Ferrando, Domenico Sguerso, Juan V. Balbastre

Abstract: In urban environments, the accuracy of traditional Global Navigation Satellite Systems (GNSS) can be compromised by signal occlusion and multipath interference, particularly during critical operational phases such as drone take-off and landing. This study seeks to enhance drone positioning accuracy by integrating 4G and 5G communication antennas and applying multilateration (MLAT) techniques based on Time-Difference-of-Arrival (TDOA) and Angle-of-Arrival (AOA) measurements. The research focuses on a real-world case study in the metropolitan area of Valencia, Spain, where extensive mobile network data were analysed using the Cramér-Rao Lower Bound (CRLB) to identify zones with minimal positioning errors and, separately, optimal coverage for connectivity. The results suggest that integrating terrestrial antennas could enhance drone navigation; however, the current applicability of the approach remains limited to urban areas.
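A compact sketch of the CRLB evaluation used to map positioning error over a coverage area is shown below, assuming a 2-D TDOA model with independent range-difference errors; the study's measurement model and error covariance may differ.

import numpy as np

def tdoa_crlb(antennas, position, sigma_rd=3.0):
    """Cramér-Rao lower bound on 2-D position error for TDOA multilateration:
    builds the Jacobian of range differences (relative to the first antenna)
    and inverts the Fisher information matrix. Illustrative sketch assuming
    independent range-difference errors with standard deviation sigma_rd (m)."""
    antennas = np.asarray(antennas, float)
    position = np.asarray(position, float)
    ref = antennas[0]
    rows = []
    for a in antennas[1:]:
        rows.append((position - a) / np.linalg.norm(position - a)
                    - (position - ref) / np.linalg.norm(position - ref))
    J = np.array(rows)
    fim = J.T @ J / sigma_rd**2               # Fisher information matrix
    crlb_cov = np.linalg.inv(fim)             # lower bound on the estimator covariance
    return np.sqrt(np.trace(crlb_cov))        # bound on RMS position error (m)

# Four terrestrial antennas around a landing zone, drone near the centre
sites = [(0, 0), (800, 0), (800, 600), (0, 600)]
print(tdoa_crlb(sites, position=(420, 280)))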

Article
Engineering
Telecommunications

Anoush Mirbadin

Abstract: This paper aims to maximize the information transmission rate by eliminating channel redundancy while still enabling reliable recovery of uncoded data. It is shown that parallel message-passing decoders can recover uncoded transmitted bits by increasing only the receiver-side computational complexity. In the proposed architecture, the k transmitted information bits are embedded into a higher-dimensional linear block code at the receiver, and appropriately valued log-likelihood ratios (LLRs) are assigned to the parity positions. One-shot parallel decoding is performed across all hypotheses in the codebook, and the final decision is obtained by minimizing an orthogonality-based energy criterion between the decoded vector and the complement of the code space. For a fixed (8, 24) linear block code, the decoding behavior is investigated as a function of the parity-bit LLR magnitude. Increasing the parity LLR magnitude introduces an artificial reliability that improves hypothesis separation in the code space and yields a sharper waterfall region in the bit-error-rate (BER) curves. This increase in parity LLR also induces a systematic rightward shift of the BER curves, which does not correspond to a physical noise reduction and must therefore be compensated for fair performance comparison. After proper compensation, it is observed that increasing the parity LLR improves decoding performance up to a point where it can surpass the performance of conventional LDPC decoding with iterative processing. In principle, arbitrarily strong decoding performance can be approached by increasing the parity LLR magnitude; however, the maximum usable value is limited by numerical instabilities in practical message-passing implementations. Overall, the results demonstrate that strong decoding performance can be achieved without transmitting redundancy or employing high-dimensional coding at the transmitter, relying instead on receiver-side processing and controlled parity reliability over an additive white Gaussian noise (AWGN) channel.

Article
Engineering
Telecommunications

Xiuxia Cai, Chenyang Diwu, Ting Fan, Wenjing Wang, Jinglu He

Abstract: Remote sensing image super-resolution (RSISR) aims to reconstruct high-resolution images from low-resolution observations of remote sensing data, enhancing the visual quality and usability of remote sensing imagery. Real-world RSISR is challenging owing to diverse degradations such as blur, noise, compression, and atmospheric distortions. We propose a hierarchical multi-task super-resolution framework comprising degradation-aware modeling, dual-decoder reconstruction, and static regularization-guided generation. Specifically, the degradation-aware module adaptively characterizes multiple types of degradation and provides effective conditional priors for reconstruction. The dual-decoder architecture incorporates both convolutional and Transformer branches to balance local detail preservation with global structural consistency. Moreover, the static regularization-guided generation introduces prior constraints such as total variation and gradient consistency to improve robustness to varying degradation levels. Extensive experiments on two public remote sensing datasets show that our method achieves performance that is robust against varying degradation conditions.
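As a hedged illustration of the static regularization terms named above, a small numpy function computing an anisotropic total-variation term plus a gradient-consistency term against a reference image follows; the weights and exact formulation are assumptions, not the paper's.

import numpy as np

def tv_and_gradient_consistency(pred, reference, lam_tv=1e-3, lam_grad=1e-2):
    """Static regularization terms of the kind the abstract describes:
    anisotropic total variation on the super-resolved image `pred` plus a
    gradient-consistency term against `reference` (e.g., an upsampled
    low-resolution input). Both inputs are 2-D arrays; weights and the exact
    formulation are illustrative."""
    tv = np.abs(np.diff(pred, axis=1)).sum() + np.abs(np.diff(pred, axis=0)).sum()
    gx = np.diff(pred, axis=1) - np.diff(reference, axis=1)
    gy = np.diff(pred, axis=0) - np.diff(reference, axis=0)
    grad_consistency = (gx**2).mean() + (gy**2).mean()
    return lam_tv * tv + lam_grad * grad_consistency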

Article
Engineering
Telecommunications

Yasir Al-Ghafri, Hafiz M. Asif, Zia Nadir, Naser G. Tarhuni

Abstract: In this paper, a wireless network architecture is considered that combines double Intelligent Reflecting Surfaces (IRSs), Energy Harvesting (EH), and Non-Orthogonal Multiple Access (NOMA) with cooperative relaying (C-NOMA) to improve the performance of Non-Line-of-Sight (NLoS) communication and incorporate energy efficiency into next-generation networks. To optimize the phase shifts of both IRSs, we employ a machine learning model that offers a low-complexity alternative to traditional optimization methods. This lightweight learning-based approach is introduced to predict effective IRS phase configurations without relying on solver-generated labels or repeated iterations. The model learns from channel behaviour and system observations, which allows it to react rapidly under dynamic channel conditions. Numerical analysis demonstrates the validity of the proposed architecture in providing considerable improvements in spectral efficiency and service reliability through the integration of energy harvesting and relay-based communication, compared to conventional systems, thereby facilitating green communication systems.
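For context, the closed-form co-phasing configuration that a learning model can be trained to approximate is sketched below for a single-antenna link; variable names are illustrative and the paper's double-IRS setting is reduced to one reflecting surface for brevity.

import numpy as np

def coherent_irs_phases(h_bs_irs, h_irs_user, h_direct=0j):
    """Element-wise IRS phase shifts that co-phase every cascaded path
    (BS -> IRS element n -> user) with the direct link, maximizing the
    combined channel magnitude for a single-antenna link. This is the
    classical closed-form target a predictor can learn; illustrative only."""
    cascaded = h_bs_irs * h_irs_user                        # per-element cascaded gains
    phases = np.angle(h_direct) - np.angle(cascaded)        # align all paths in phase
    effective = h_direct + np.sum(cascaded * np.exp(1j * phases))
    return phases, np.abs(effective)

# Toy check with 32 IRS elements and Rayleigh-fading links
rng = np.random.default_rng(0)
g1 = (rng.normal(size=32) + 1j * rng.normal(size=32)) / np.sqrt(2)
g2 = (rng.normal(size=32) + 1j * rng.normal(size=32)) / np.sqrt(2)
phases, gain = coherent_irs_phases(g1, g2)
print(gain >= np.abs(np.sum(g1 * g2)))                      # co-phasing beats no phase control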

Article
Engineering
Telecommunications

Giuseppina Rizzi, Vittorio Curri

Abstract: The constant growth of IP data traffic, driven by sustained annual increases surpassing 26%, is pushing current optical transport infrastructures towards their capacity limits. Since the deployment of new fiber cables is economically demanding, ultra-wideband transmission is emerging as a promising cost-effective solution, enabled by multi-band amplifiers and transceivers spanning the entire low-loss window of standard single-mode fibers. In this scenario, accurate modeling of the frequency-dependent fiber parameters is essential for reliably describing optical signal propagation. In particular, the combined impact of the attenuation slope and inter-channel stimulated Raman scattering (SRS) fundamentally shapes the power evolution of wide wavelength division multiplexing (WDM) combs and directly affects nonlinear interference (NLI) generation. In this work, a set of analytical approximations for the frequency-dependent attenuation and Raman gain coefficient is presented, providing an effective balance between computational efficiency and physical fidelity. Through extensive simulations covering C, C+L, and ultra-wideband U-to-E transmission scenarios, the proposed approximations are shown to accurately reproduce the power-evolution and NLI profiles obtained with fully numerical SRS models.

Article
Engineering
Telecommunications

Rafe Alasem, Mahmud Mansour

Abstract: Vehicle Ad-Hoc Networks (VANETs) face critical challenges in trust management, privacy preservation, and scalability, particularly with the integration of 5G networks in Intelligent Transportation Systems (ITS). Traditional centralized trust models present single points of failure and privacy concerns that compromise network security and user anonymity. This paper presents a novel decentralized trust model leveraging blockchain technology, Interplanetary File System (IPFS) integration, and post-quantum cryptographic algorithms to address these limitations. Our proposed TrustChain-VANETs framework implements advanced privacy-preserving encryption techniques including threshold and homomorphic encryption, geographical sharding for scalability, and edge-assisted consensus mechanisms. Performance evaluation demonstrates significant improvements: 40% reduction in authentication latency (90-120ms vs 150-300ms), 90% malicious node detection rate (+15% improvement), 300% increase in transaction throughput (2000-2150 TPS), and 100% scalability enhancement supporting up to 5000 nodes. The system integrates seamlessly with 5G network slicing (URLLC, eMBB, mMTC) while maintaining quantum resistance through CRYSTALS-Dilithium, KYBER, and FALCON algorithms. Real-world deployment considerations including OBU computational constraints, standardization gaps, and energy efficiency are comprehensively analyzed. Results indicate that the proposed decentralized approach provides robust security, enhanced privacy, and improved scalability for next-generation vehicular networks, making it suitable for large-scale ITS deployment.

Article
Engineering
Telecommunications

Rafe Alasem

Abstract: The rapid proliferation of smart cities and Intelligent Transportation Systems (ITS) demands revolutionary approaches to vehicular communications that can simultaneously address energy efficiency, security, and quality of service requirements. This paper presents GreenFlow VANET, a novel 5G-enabled secure and energy-efficient routing protocol specifically designed for smart city deployments. Our approach leverages a three-tier architecture integrating Vehicle Ad-Hoc Networks (VANETs) with 5G Ultra-Reliable Low-Latency Communication (URLLC), enhanced Mobile Broadband (eMBB), and massive Machine Type Communication (mMTC) network slices. The GreenFlow Secure Routing Protocol (GF-5G-SRP) introduces MEC-assisted route discovery, multi-criteria next-hop selection incorporating 5G quality metrics, and adaptive energy management techniques. Our security framework employs ECC-256 cryptography, ChaCha20-Poly1305 encryption, and distributed trust management to ensure robust protection against vehicular network threats while preserving location privacy through SUPI/SUCI mechanisms. Extensive simulations using NS-3 with 5G-LENA and SUMO mobility models demonstrate that GreenFlow VANET achieves 96.8% packet delivery ratio, 59% energy reduction compared to traditional approaches, and 81% improvement in network lifetime while maintaining sub-millisecond latency for safety-critical communications. The proposed solution effectively addresses the scalability challenges of dense urban vehicular networks with up to 1000 vehicles while providing comprehensive security with 97.8% attack detection rates and minimal false positives.

Review
Engineering
Telecommunications

Panagiotis K. Gkonis, Anastasios Giannopoulos, Nikolaos Nomikos, Lambros Sarakis, Vasileios Nikolakakis, Gerasimos Patsourakis, Panagiotis Trakadas

Abstract: The goal of the study presented in this work is to analyze recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum covers a variety of diverse hardware and computing elements as well as network protocols, ranging from lightweight internet of things (IoT) components to more complex edge or cloud servers. The rapid penetration of IoT technology into modern networks, along with the associated applications, poses new challenges for efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, as well as threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth generation (6G) networks are already taking place, and these networks are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches for satisfying the aforementioned challenges and requirements are presented and analyzed. To this end, a proposed architectural approach is also presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios that leverage a meta-OS in the computing continuum are discussed as well. Finally, open issues and related challenges are also discussed.

Article
Engineering
Telecommunications

Galia Marinova, Edmond Hajrizi, Besnik Qehaja, Vassil Guliashki

Abstract: This study presents the development of a smart microgrid control framework. The goal is to achieve optimal energy management and maximize photovoltaic (PV) generation utilization through a combination of optimization and reinforcement learning techniques. A detailed Simulink model is developed in MATLAB to represent the dynamic behavior of the microgrid, including load variability, temperature profiles, and solar radiation. Initially, a genetic algorithm (GA) is used to perform static optimization and parameter tuning, identifying optimal battery charging/discharging schedules and balancing power flow between buildings in the microgrid to minimize main-grid dependency. After that, a Soft Actor-Critic (SAC) reinforcement learning agent is trained to perform real-time maximum power point tracking (MPPT) for the PV system under different environmental (weather) and load conditions. The SAC agent learns from multiple (eight) simulated PV generation scenarios and demand profiles, optimizing the duty cycle of the DC-DC converter to adaptively maintain maximum energy yield. The combined GA-SAC approach is validated on a university campus microgrid consisting of four interconnected buildings with heterogeneous loads, including computer labs that generate both active and reactive power demands. The results show improved efficiency, reduced power losses, and enhanced energy autonomy of the microgrid, illustrating the potential of AI-driven control strategies for sustainable smart energy systems.

Article
Engineering
Telecommunications

Anastasia Daraseliya, Eduard Sopin, Julia Kolcheva, Vyacheslav Begishev, Konstantin Samouylov

Abstract: Modern 5G+-grade low power wide area network (LPWAN) technologies such as Narrowband Internet-of-Things (NB-IoT) operate using a multi-channel slotted ALOHA algorithm in the random access phase. As a result, the random access phase in such systems is characterized by relatively low throughput and is highly sensitive to traffic fluctuations that could push the system outside of its stable operational regime. Although theoretical results specifying the optimal transmission probability that maximizes the successful preamble transmission probability have long been known, the lack of knowledge about the current offered traffic load at the BS makes maintaining the optimal throughput a challenging task. In this paper, we propose and analyze a new reactive access barring scheme for NB-IoT systems based on machine learning (ML) techniques. Specifically, we first demonstrate that knowing the number of user equipments (UEs) experiencing a collision at the BS is sufficient to draw conclusions about the current offered traffic load. Then, we show that, utilizing ML-based techniques, one can reliably differentiate between events on the Physical Random Access Channel (PRACH) at the base station (BS) side based only on the signal-to-noise ratio (SNR). Finally, we mathematically characterize the delay experienced under the proposed reactive access barring technique. In our numerical results, we show that by utilizing modern machine learning approaches, such as the XGBoost classifier, one can differentiate between events on the PRACH channel with accuracy reaching 0.98 and then associate them with the number of UEs competing in the random access phase. Our simulation results show that the proposed approach can keep the successful preamble transmission probability approximately constant at 0.3 in overloaded conditions, whereas for conventional NB-IoT access this value is less than 0.05. The proposed scheme achieves near-optimal throughput in multi-channel ALOHA by employing dynamic traffic awareness to adjust the non-unit transmission probability. This proactive congestion control keeps the access delay controlled and bounded even as the system approaches its maximum load.
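A short Monte-Carlo sketch of the underlying multi-channel slotted ALOHA result (the barring probability should keep the expected number of transmitters near the number of preambles, roughly p* = M/N) is given below; the parameters are illustrative and not the paper's system configuration.

import numpy as np

def preamble_success_prob(n_ues, n_preambles, tx_prob, trials=2000, rng=None):
    """Monte-Carlo estimate of the probability that a transmitting UE picks a
    preamble chosen by no other UE in a random-access slot (multi-channel
    slotted ALOHA). Hedged sketch of the classical result the paper builds on:
    throughput peaks when barring keeps the expected number of transmitters
    near the number of preambles, i.e., p* ~ M/N."""
    rng = np.random.default_rng(0) if rng is None else rng
    successes, attempts = 0, 0
    for _ in range(trials):
        tx = rng.random(n_ues) < tx_prob                    # barred transmission decisions
        picks = rng.integers(0, n_preambles, size=n_ues)    # uniform preamble choice
        chosen, counts = np.unique(picks[tx], return_counts=True)
        successes += int(np.sum(counts == 1))               # preambles picked exactly once
        attempts += int(tx.sum())
    return successes / max(attempts, 1)

m, n = 48, 480                                    # preambles, backlogged UEs
print(preamble_success_prob(n, m, tx_prob=1.0))   # overloaded, no barring: near 0
print(preamble_success_prob(n, m, tx_prob=m / n)) # barred to p* = M/N: roughly e^-1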
