Computer Science and Mathematics

Article
Computer Science and Mathematics
Signal Processing

Makhanbetov Adilet,

Yesmagambetov Bulat-Batyr,

Balabekova Madina,

Saidakhmetov Murad,

Zhanteli Khassen,

Sarsenbayev Kanat,

Sultanova Gulbanu,

Tursumbayeva Adiya

Abstract: The article is devoted to the problem of signal detection in a radio communication system. As a rule, such problems are solved in one of two ways: adaptive or nonparametric. The adaptive approach adjusts the structure and parameters of the detection system to the parameters of the signals and interference. Nonparametric methods are used when the system must remain insensitive to changes in the properties of signals and interference, in particular when no a priori information about their probabilistic properties is available. They are based on nonparametric methods from the theory of statistical hypothesis testing and are used effectively in radio electronic systems because they stabilize the false-alarm rate under unknown interference properties. The invariant properties of nonparametric procedures rest on the independence of the statistical properties of nonparametric statistics from the statistical properties of the measured signals. One of the most effective nonparametric tools is the use of ranks, formed by ordering the input samples of the measured signal in either ascending or descending order. Ranks have many practically useful properties and are well suited to forming various statistical hypotheses. The article presents a criterion for post-detector rank signal processing based on minimizing the RMS error of measuring the situation parameter, and compares the noise immunity of this criterion with optimal rank detection methods based on the Neyman-Pearson criterion.
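As a minimal illustration of the rank mechanism described above, a sketch of a rank-sum detector follows; the window sizes and threshold are illustrative assumptions, not values from the article.

```python
import numpy as np

def rank_statistic(sample, reference):
    """Rank of one test sample within a noise-only reference window.

    Under noise alone the rank is (near-)uniformly distributed over
    0..len(reference) whatever the noise distribution is -- the
    invariance property that stabilizes the false-alarm rate."""
    return int(np.sum(reference < sample))

def rank_sum_detector(test_window, reference, threshold):
    """Declare a detection when the summed ranks exceed a threshold.

    The threshold is a free design parameter set for the desired
    false-alarm probability; it is not a value from the article."""
    return sum(rank_statistic(s, reference) for s in test_window) > threshold
```

Because the rank distribution under noise alone does not depend on the noise's distribution, the threshold fixes the false-alarm rate without knowing the interference statistics.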
Article
Computer Science and Mathematics
Signal Processing

Keith Jones

Abstract: The paper describes two schemes, together with architectures, for the resource-efficient parallel computation of twiddle factors for the fixed-radix version of the fast Fourier transform (FFT) algorithm. Assuming a silicon-based hardware implementation with suitably chosen parallel computing equipment, the two schemes considered provide one with the facility for trading off the arithmetic component of the resource requirements, as expressed in terms of the numbers of multipliers and adders, against the memory component, as expressed in terms of the amount of memory required for constructing the look-up tables (LUTs) needed for their storage. With a separate processing element (PE) being assigned to the computation of each twiddle factor, the first scheme is based upon the adoption of the single‑instruction multiple‑data (SIMD) technique, as applied in the ‘spatial’ domain, whereby the PEs operate independently upon their own individual LUTs and may thus be executed simultaneously; the second scheme is based upon the adoption of the pipelining technique, as applied in the ‘temporal’ domain, whereby the operation of all but the first LUT-based PE is based upon second-order recursion using previously computed PE outputs. Although the FFT radix and LUT level (where the LUT may be of either single‑level or multi‑level type) may each take on arbitrary integer values, we will be particularly concerned with the radix-4 version of the FFT algorithm, together with the two‑level version of the LUT, as these two algorithmic choices facilitate ease of illustration and offer the potential for flexible computationally-efficient FFT designs. A brief comparison of the resource requirements for the two schemes is provided for various parameter sets which cater, in particular, for those big‑data memory-intensive applications involving the use of long (with length of order one million) to ultra-long (with length of order one billion) FFTs.
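The second-order recursion underlying the pipelined scheme can be sketched in isolation. This standalone Python sketch replaces the LUT-seeded parallel PEs of the paper with a simple software loop; only the recursion itself is illustrated.

```python
import cmath
import math

def twiddles_recursive(N, count):
    """Twiddle factors W_N^n = exp(-2j*pi*n/N) via the second-order
    recursion w[n] = 2*cos(theta)*w[n-1] - w[n-2], seeded with the
    first two values (which a LUT would supply in hardware)."""
    theta = 2 * math.pi / N
    k = 2 * math.cos(theta)                    # real recursion coefficient
    w = [1 + 0j, cmath.exp(-1j * theta)]
    for _ in range(2, count):
        w.append(k * w[-1] - w[-2])
    return w[:count]

def twiddles_direct(N, count):
    """Reference computation with one sin/cos pair per factor."""
    return [cmath.exp(-2j * math.pi * n / N) for n in range(count)]
```

The identity behind the recursion is e^(-j*theta*n) + e^(-j*theta*(n-2)) = 2*cos(theta)*e^(-j*theta*(n-1)), so each new factor costs one real-by-complex multiply and one complex subtraction instead of a sin/cos evaluation.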
Article
Computer Science and Mathematics
Signal Processing

Xuzhao Yang,

Hui Tian,

Fan Wang,

Jinping Ni,

Rui Chen

Abstract: The Laser Light Screen system faces critical technical challenges in high-speed, long-range target detection: when a target passes through the light screen, weak light-flux variations lead to significantly degraded signal-to-noise ratios (SNR). Traditional signal processing algorithms fail to effectively suppress phase distortion and boundary effects under extremely low SNR conditions, creating a technical bottleneck that severely constrains system detection performance. To address this problem, this paper proposes a Multi-stage Collaborative Filtering Chain (MCFC) signal processing framework incorporating three key innovations: 1) zero-phase FIR bandpass filtering with forward-backward processing and dynamic phase compensation mechanisms to effectively suppress phase distortion; 2) a four-stage cascaded collaborative filtering strategy combining adaptive sampling and anti-aliasing techniques to significantly enhance signal quality; 3) a multi-scale adaptive transform algorithm based on fourth-order Daubechies wavelets to achieve high-precision signal reconstruction. Experimental results demonstrate that under -20 dB SNR conditions, the method achieves a 25 dB SNR improvement and boundary artifact suppression while reducing processing time from 0.42 to 0.04 seconds. These results validate the proposed method's effectiveness in high-speed target detection under low SNR conditions.
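The forward-backward idea behind the zero-phase filtering in innovation 1) can be sketched with a generic causal FIR filter; the moving-average taps used below are illustrative, not the paper's bandpass design or its dynamic phase compensation.

```python
import numpy as np

def causal_filter(h, x):
    """Single forward pass of an FIR filter; introduces a group delay of
    roughly (len(h) - 1) / 2 samples."""
    return np.convolve(x, h)[: len(x)]

def filtfilt_fir(h, x):
    """Forward-backward (zero-phase) filtering: filtering the time-reversed
    output a second time cancels the phase response of the first pass."""
    y = causal_filter(h, x)          # forward pass: delays the signal
    y = causal_filter(h, y[::-1])    # backward pass on the reversed signal
    return y[::-1]                   # re-reverse: net delay is zero
```

With a 9-tap moving average and a symmetric test pulse, the single forward pass shifts the pulse peak by four samples, while the forward-backward pass leaves the peak exactly in place, at the cost of filtering twice (squared magnitude response).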
Article
Computer Science and Mathematics
Signal Processing

Aymane Edder,

Fatima-Ezzahraa Ben-Bouazza,

Idriss Tafala,

Oumaima Manchadi,

Bassma Jioudi

Abstract: The analysis of electrocardiogram (ECG) signals is profoundly affected by the presence of electromyographic (EMG) noise, which can lead to substantial misinterpretations in healthcare applications. To address this challenge, we present ECGDnet, an innovative architecture based on Transformer technology, specifically engineered to denoise multi-channel ECG signals. By leveraging multi-head self-attention mechanisms, positional embeddings, and an advanced sequence-to-sequence processing architecture, ECGDnet effectively captures both local and global temporal dependencies inherent in cardiac signals. Experimental validation on real-world datasets demonstrates ECGDnet's remarkable efficacy in noise suppression, achieving a Signal-to-Noise Ratio (SNR) of 19.83, a Normalized Mean Squared Error (NMSE) of 0.9842, a Reconstruction Error (RE) of 0.0158, and a Pearson Correlation Coefficient (PCC) of 0.9924. These results represent significant improvements over traditional deep learning approaches, while maintaining complex signal morphology and effectively mitigating noise interference.
Article
Computer Science and Mathematics
Signal Processing

Jian Sun,

Hongxin Lin,

Wei Shi,

Wei Xu,

Dongming Wang

Abstract: Swarm-based unmanned aerial vehicle (UAV) systems offer enhanced spatial coverage, collaborative intelligence, and mission scalability for various applications, including environmental monitoring and emergency response. However, their onboard computing capabilities are often constrained by stringent size, weight, and power limitations, posing challenges for real-time data processing and autonomous decision-making. This paper proposes a comprehensive communication and computation framework that integrates cloud-edge-end collaboration with cell-free massive multiple-input multiple-output (CF-mMIMO) technology to support scalable and efficient computation offloading in UAV swarm networks. A lightweight task migration mechanism is developed to dynamically allocate processing workloads between UAVs and edge/cloud servers, while a CF-mMIMO communication architecture is designed to ensure robust, low-latency connectivity under mobility and interference. Furthermore, we implement a hardware-in-the-loop experimental testbed with nine UAVs and validate the proposed framework through real-time object detection tasks. Results demonstrate over 30% reduction in onboard computation and significant improvements in communication reliability and latency, highlighting the framework’s potential for enabling intelligent, cooperative aerial systems.
Article
Computer Science and Mathematics
Signal Processing

Syed Athif Usman,

Mridul Bhattacharjee,

Rozin Khan,

Xu Jiashun,

Noor Ul Amin

Abstract: Digital Signal Processing (DSP) is an integral part of modern computing applications such as telecommunications, biomedical engineering, and multimedia systems. The efficiency of DSP algorithms plays a critical role in ensuring real-time processing, low computational complexity, and low power consumption. This study examines sophisticated DSP techniques and optimization strategies to enhance processing speed and accuracy. By exploring various signal processing methods, including Fourier transforms, filtering algorithms, and adaptive techniques, the research highlights the significance of computational efficiency in practical applications. Further, a comparative analysis of traditional and modern DSP architectures provides valuable insights into their performance trade-offs. The outcome is a contribution to the ongoing development of more powerful and scalable DSP applications.
Review
Computer Science and Mathematics
Signal Processing

Miguel A. Becerra,

Carolina Duque-Mejía,

Andres Eduardo Castro-Ospina,

Leonardo Serna-Guarín,

Cristian Mejía,

Eduardo Duque-Grisales

Abstract: This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as Support Vector Machines (SVM), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.
Review
Computer Science and Mathematics
Signal Processing

Petar Slavka,

Aliona Tatyana

Abstract: Tensor decomposition has emerged as a powerful mathematical framework for analyzing multi-dimensional data, extending classical matrix decomposition techniques to higher-order representations. As modern applications generate increasingly complex datasets with multi-way relationships, tensor methods provide a principled approach to uncovering latent structures, reducing dimensionality, and improving computational efficiency. This paper presents a comprehensive review of tensor decomposition techniques, their theoretical foundations, and their applications in signal processing and machine learning. We begin by introducing the fundamental concepts of tensor algebra, discussing key tensor operations, norms, and properties that form the basis of tensor factorization methods. The two most widely used decompositions, Canonical Polyadic (CP) and Tucker, are examined in detail, along with alternative factorization techniques such as Tensor Train (TT), Tensor Ring (TR), and Block Term Decomposition (BTD). We explore the computational complexity of these methods and discuss numerical optimization techniques, including Alternating Least Squares (ALS), gradient-based approaches, and probabilistic tensor models. The paper then delves into the applications of tensor decomposition in signal processing, where tensors have been successfully applied to source separation, multi-sensor data fusion, image processing, and compressed sensing. In machine learning, tensor-based models have enhanced feature extraction, deep learning efficiency, and representation learning. We highlight the role of tensor decomposition in reducing the parameter space of deep neural networks, improving generalization, and accelerating training through low-rank approximations. Despite its numerous advantages, tensor decomposition faces several challenges, including the difficulty of determining tensor rank, the computational cost of large-scale tensor factorization, and robustness to noise and missing data. We discuss recent theoretical advancements addressing uniqueness conditions, rank estimation strategies, and adaptive tensor factorization techniques that improve performance in real-world applications. Furthermore, we explore emerging trends in tensor methods, including their integration with quantum computing, neuroscience, personalized medicine, and geospatial analytics. Finally, we provide a detailed discussion of open research questions, such as the need for more scalable decomposition algorithms, automated rank selection mechanisms, and robust tensor models that can handle high-dimensional, noisy, and adversarial data. As data-driven applications continue to evolve, tensor decomposition is poised to become an indispensable tool for uncovering hidden patterns in complex datasets, advancing both theoretical research and practical implementations across multiple scientific domains.
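As a concrete instance of the CP-ALS approach mentioned above, a minimal sketch for a 3-way tensor follows; the dimensions, rank, and iteration count are arbitrary illustrative choices, and no convergence checks or normalization steps from production implementations are included.

```python
import numpy as np

def cp_als(X, rank, iters=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by Alternating Least
    Squares: each step solves the normal equations for one factor matrix
    with the other two held fixed. The Gram matrix of the Khatri-Rao
    product reduces to a Hadamard product of small R x R Gram matrices."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    for _ in range(iters):
        # einsum computes the mode-n unfolding times the Khatri-Rao product
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def cp_reconstruct(A, B, C):
    """Rebuild the tensor as a sum of R rank-one outer products."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On a tensor that is exactly low-rank, this loop typically recovers the factors up to scaling and permutation; on noisy data one would monitor the fit and stop early.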
Article
Computer Science and Mathematics
Signal Processing

Baoyuan Duan

Abstract:

We study the Collatz odd sequence, replacing the (3x + 1) / 2^k operation of the Collatz conjecture with a (3x + 2^m - 1) / 2^k operation. A loop in a (3x + 2^m - 1) / 2^k odd sequence (if one exists) is expanded into an infinite non-looping sequence. We build a (3x + 2^m - 1) / 2^k odd-tree model and a transformed-position model for the odd numbers in the tree. By comparing actual and virtual positions, we prove that if a (3x + 2^m - 1) / 2^k odd sequence cannot converge after infinitely many (3x + 2^m - 1) / 2^k operations, the sequence must walk out of the boundary of the tree.
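The generalized operation can be sketched directly; with m = 1 it reduces to the classic (3x + 1) / 2^k odd map. The tree and position models of the paper are not reproduced here, only the iteration itself.

```python
def odd_sequence(x, m, steps):
    """Iterate the generalized operation: from odd x, compute
    3*x + 2**m - 1 (even for any m >= 1, since both terms are odd),
    then divide out every factor of 2 to reach the next odd term."""
    assert x % 2 == 1 and m >= 1
    seq = [x]
    for _ in range(steps):
        y = 3 * seq[-1] + 2**m - 1
        while y % 2 == 0:   # strip all factors of 2 (the "/ 2^k" step)
            y //= 2
        seq.append(y)
    return seq
```

With m = 1, starting from 7 the sequence visits 11, 17, 13, 5 and reaches 1, matching the classic Collatz odd trajectory.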

Article
Computer Science and Mathematics
Signal Processing

Jorge Rodriguez-Echeverria,

Evans Ocansey,

Roxana Holom,

Tomasz Michno,

Hannes Hinterbichler,

Pauline Meyer-Heye,

Sidharta Gautama

Abstract: Amidst the advent of Industry 4.0, the manufacturing industry is exploring AI methodologies and other data-driven approaches for the understanding and optimization of gas metal arc welding (GMAW) processes. Various data sources such as process data logs and image data are available to the users of modern welding systems. However, to make good use of the data for machine learning, data sets of different quality and information density have to be fused. In this paper, we propose strategies for improving the dataset quality of time series process data and image data from the GMAW process. We explore resampling strategies to ensure the harmonization of time series data. Additionally, ideas for improving image quality from welding process cameras are discussed.
Article
Computer Science and Mathematics
Signal Processing

Sabour Abderrahim

Abstract:

This paper presents a unified framework that integrates the noncommutative Fourier transform with equivariant cohomology for the analysis and reconstruction of diffusion MRI data. We develop a rigorous mathematical approach that exploits the symmetries of the group SO(3) to optimize high-resolution image reconstruction while ensuring an algorithmic complexity of O(|G| log |G|). Our analysis includes a detailed investigation of numerical stability through differential geometric techniques, resulting in explicit error bounds based on the curvature of representation spaces. The proposed method significantly enhances the accuracy of nerve fiber mapping in cerebral white matter and offers promising perspectives for advanced clinical applications. In bridging abstract mathematical theory with practical medical imaging, this work opens new avenues for high-resolution computational image processing.

Article
Computer Science and Mathematics
Signal Processing

Josip Sabic,

Toni Perković,

Dinko Begušić,

Petar Šolić

Abstract: LoRaWAN networks are increasingly recognized for their vulnerability to various jamming attacks, which can significantly disrupt communication between end nodes and gateways. This paper explores the feasibility of executing reactive jamming, triggered upon detecting packet transmission, using commercially available equipment based on Software-Defined Radios (SDRs). The proposed approach demonstrates how attackers can exploit packet detection to initiate targeted interference, effectively compromising message integrity. Two distinct experimental setups, one using separate SDRs for reception and transmission and another leveraging a single SDR for both functions, were used to evaluate attack efficiency, reaction times, and packet loss ratios. Our experiments demonstrate that both scenarios effectively jam LoRaWAN packets across a range of spreading factors and payload sizes. This finding underscores the pressing need for enhanced security measures to maintain reliability and counter sophisticated attacks.
Article
Computer Science and Mathematics
Signal Processing

Anatolyi Petrenko,

Oleh Boloban

Abstract: The quality of biomedical signals collected by wearable devices is crucial for accurate physiological monitoring and detecting potential deviations. This study evaluates the effectiveness of various filtering algorithms for photoplethysmographic (PPG) signals, including low-pass, moving average, weighted average, and Kalman filters, to determine the optimal method for noise reduction while preserving key parameters such as heart rate (HR) and blood oxygen saturation (SpO₂). Data were collected using an ESP32 microcontroller and MAX30102 sensor in stationary and motion conditions. Noise levels were analyzed over 20-second intervals with high artifact presence. The results indicate that the Kalman filter achieves the best balance between noise suppression and signal integrity, preserving peak components and ensuring accurate physiological readings (HR: 87.06 bpm, SpO₂: 85.37%). In contrast, the low-pass filter excessively smooths peaks, affecting accuracy, while the moving and weighted average filters offer moderate noise reduction with varying adaptability. The findings highlight the importance of selecting appropriate filtering techniques based on computational constraints and noise levels. The Kalman filter is preferred for real-time monitoring in dynamic conditions, while simpler filters may suffice in lower-noise environments. Future research could explore machine learning-driven adaptive filtering for enhanced diagnostic accuracy and real-time medical applications.
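A minimal scalar Kalman filter of the kind compared in the study might look as follows; the random-walk signal model and the process- and measurement-noise variances are illustrative guesses, not the values tuned for the MAX30102 data in the paper.

```python
def kalman_1d(z, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk signal model.

    q: process-noise variance (how fast the true level may drift),
    r: measurement-noise variance. Both are illustrative values."""
    x, p = x0, p0
    out = []
    for zi in z:
        p = p + q                  # predict: uncertainty grows by q
        k = p / (p + r)            # Kalman gain balances model vs data
        x = x + k * (zi - x)       # update with the innovation zi - x
        p = (1 - k) * p            # uncertainty shrinks after the update
        out.append(x)
    return out
```

Raising q makes the filter track fast changes (e.g., motion artifacts) at the cost of less smoothing; lowering it does the opposite, which is the trade-off the study weighs against simpler averaging filters.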
Article
Computer Science and Mathematics
Signal Processing

Sayantan Bhattacharya,

Dimitris M. Christodoulou,

Silas G. T. Laycock

Abstract: The broad point spread function of the NuSTAR telescope makes resolving astronomical X-ray sources a challenging task, especially for off-axis observations. This limitation has affected the observations of the high-mass X-ray binary pulsars SXP 15.3 and SXP 305, in which pulsations are detected from nearly overlapping regions without spatially resolving these X-ray sources. To address this issue, we introduce a deconvolution algorithm designed to enhance NuSTAR's spatial resolution for closely spaced X-ray sources. We apply this technique to archival data and simulations of synthetic point sources placed at varying separations and locations, thus testing the algorithm's efficacy in source detection and differentiation. Our study confirms that on some occasions when SXP 305 is brighter, SXP 15.3 is also resolved, suggesting that some prior non-detections may have resulted from imaging limitations. This deconvolution technique represents a proof-of-concept test for analyzing crowded fields in the sky with closely spaced X-ray sources in future NuSTAR observations.
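The abstract does not specify which deconvolution algorithm is used; as a generic stand-in, a 1D Richardson-Lucy iteration illustrates how deconvolution against a known point spread function can separate closely spaced point sources.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iters=50):
    """Iterative Richardson-Lucy deconvolution (a generic stand-in, not
    the paper's algorithm). Assumes non-negative data; the PSF is
    normalized internally."""
    psf = psf / psf.sum()
    est = np.full_like(observed, observed.mean())  # flat initial estimate
    psf_flip = psf[::-1]                           # correlation kernel
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)  # data / model
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est
```

On noiseless data the iteration concentrates flux back toward the underlying point sources; with real noisy counts one must stop early or regularize, since late iterations amplify noise.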
Article
Computer Science and Mathematics
Signal Processing

Xiuyuan Wang,

Liwen Wang,

Hanjiang Dong,

Qing Wang,

Weimin Lyu,

Tongyu Ma,

Changyuan Yu

Abstract: Many people today suffer from chronic sleep disorders and diseases, drawing wide attention to sleep quality assessment. Conventional sleep-staging networks frequently require multiple channel inputs, hindering their applicability to single-channel input or other sensor data. In this paper, we propose Auto-SleepNet: a CPU-driven, end-to-end deep learning network for sleep stage classification using single-lead electroencephalogram (EEG) signals. The network is composed of a tailored auto-encoder for feature extraction and correction and an LSTM network for temporal-signal classification. Compared with multi-lead approaches, our design achieves higher accuracy than the state of the art, provides a meaningful reference for simplifying the hardware requirements of the EEG measurement device, and simultaneously lowers the computational load significantly. We use per-class precision (PR), recall (RE), per-class F1 score, overall accuracy, the confusion matrix, and Cohen's kappa coefficient (κ) to evaluate performance. The overall accuracy, RE, and Cohen's kappa of our model are 95.7%, 95.19%, and 0.91, respectively. Compared to the state-of-the-art methods discussed in the paper, Auto-SleepNet outperforms single-channel methods by 13.97% and multiple-channel methods by 15.97% on average. Furthermore, a GPU is not required for training: experiments show that our model can converge in 15.6 minutes using a CPU only. The results highlight the practicality of the network for sleep stage classification.
Article
Computer Science and Mathematics
Signal Processing

Zhiyi Jin,

Zhouhao Pan,

Zhe Zhang,

Xiaolan Qiu

Abstract: Sparse Synthetic Aperture Radar (SAR) imaging has garnered significant attention due to its ability to suppress azimuth ambiguity in under-sampled conditions, making it particularly useful for high-resolution wide-swath (HRWS) SAR systems. Traditional compressed sensing-based sparse SAR imaging algorithms are hindered by range-azimuth coupling induced by range cell migration (RCM), which results in high computational cost and limits their applicability to large-scale imaging scenarios. To address this challenge, the approximated observation-based sparse SAR imaging algorithm was developed, which decouples the range and azimuth directions, significantly reducing computational and temporal complexity to match the performance of conventional matched filtering algorithms. However, this method requires iterative processing and manual parameter adjustment. In this paper, we propose a novel deep neural network-based sparse SAR imaging method, the Self-supervised Azimuth Ambiguity Suppression Network (SAAS-Net). Unlike traditional iterative algorithms, SAAS-Net learns the parameters directly from data, eliminating the need for manual tuning. This approach not only improves imaging quality but also accelerates the imaging process. Additionally, SAAS-Net retains the core advantage of sparse SAR imaging: azimuth ambiguity suppression under under-sampling conditions. The method introduces self-supervision to achieve azimuth ambiguity suppression without altering the hardware architecture. Simulations and experiments on real Gaofen-3 data validate the effectiveness and superiority of the proposed approach.
Article
Computer Science and Mathematics
Signal Processing

Victor Maya Venegas,

Abel García Barrientos,

Paul Hernández Herrera,

José Sergio Camacho Juárez,

Sharon Macias-Velasquez,

Obed Pérez Cortez,

Bersaín Alexander Reyes

Abstract: This project explores the development of a 1D version of the U-Net convolutional neural network (CNN) for automated segmentation of heart sounds from phonocardiogram (PCG) signals. The approach begins with a feature extraction process designed to identify the onset and offset of each heart sound (HS) as binary markers, analogous to the image segmentation techniques used in biomedical image analysis. Eighty PCG signals, each 30 seconds long and annotated with heart sound positions, were used to train the CNN. The model receives PCG signals as input and returns the exact onset and offset of each heart sound. By combining classical feature extraction with advanced image-based segmentation techniques, this method offers a more robust and scalable solution for automated heart sound analysis. The results show that the trained model outperforms traditional segmentation methods, demonstrating the potential for deep learning architectures to be adapted to 1D biomedical signal analysis.
Article
Computer Science and Mathematics
Signal Processing

Mohammad Ali Tahouri,

Alin Adrian Alecu,

Leon Denis,

Adrian Munteanu

Abstract: Depth information acquisition is critically important in medical applications, such as monitoring of the elderly or extraction of human biometrics. In such applications, compressing the stream of depth video data plays an important role due to bandwidth constraints on the transmission channels. This paper introduces a novel lightweight compression system that encodes the semantics of the input depth video and can operate in both lossless and L-infinite near-lossless compression modes. A quantization technique that targets the L-infinite norm for sparse distributions and a new L-infinite compression method that sets bounds on the quantization error are proposed. The proposed codec enables controlling the coding error on every pixel in the input video data, which is crucial in medical applications. Experimental results show average improvements of 45% and 17% in lossless mode over the standalone JPEG-LS and CALIC codecs, respectively. Furthermore, in near-lossless mode, the proposed codec achieves superior rate-distortion performance and a reduced maximum error per frame compared to HEVC. Additionally, the proposed lightweight codec is designed to perform efficiently in real time when deployed on an embedded depth camera platform.
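The error-bound principle behind L-infinite near-lossless coding can be sketched with the standard uniform quantizer; the paper's variant tailored to sparse distributions is not reproduced here.

```python
def linf_quantize(x, delta):
    """Near-lossless uniform quantizer for an integer sample x: the
    reconstruction error on every sample is guaranteed to be at most
    `delta` (delta = 0 degenerates to lossless coding)."""
    step = 2 * delta + 1             # step width that caps error at delta
    sign = 1 if x >= 0 else -1
    return sign * ((abs(x) + delta) // step)

def linf_dequantize(q, delta):
    """Midpoint reconstruction for the quantization index q."""
    return q * (2 * delta + 1)
```

Because each bin has width 2*delta + 1 and samples are mapped to the bin midpoint, every pixel's reconstruction error is bounded by delta, which is exactly the per-pixel guarantee the codec's near-lossless mode provides.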
Article
Computer Science and Mathematics
Signal Processing

Bert Van Hauwermeiren,

Leon Denis,

Adrian Munteanu

Abstract: Point cloud compression is essential for the efficient storage and transmission of 3D data in various applications, such as virtual reality, autonomous driving, and 3D modelling. Most existing compression methods employ voxelisation, almost always uniform, to partition 3D space into voxels for more efficient compression. However, uniform voxelisation may not capture the underlying geometry of complex scenes effectively. In this paper, we propose a novel non-uniform voxelisation technique for point cloud geometry compression. Our method adaptively adjusts voxel sizes based on local point density, preserving geometric details while enabling more accurate reconstructions. Through comprehensive experiments on the well-known benchmark datasets ScanNet and ModelNet, we demonstrate that our approach achieves better compression ratios and reconstruction quality in comparison to traditional uniform voxelisation methods. The results highlight the potential of non-uniform voxelisation as a viable and effective alternative, offering improved performance for point cloud geometry compression in a wide range of real-world scenarios.
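One minimal way to realize density-adaptive voxel sizes is a two-level subdivide-when-dense scheme, sketched below; this is an illustrative construction under assumed grid parameters, not the method proposed in the paper.

```python
import numpy as np

def adaptive_voxelise(points, coarse=4, threshold=8, fine=2):
    """Two-level density-adaptive voxelisation: start from a coarse grid
    and subdivide only the cells whose point count exceeds `threshold`.
    Returns a dict mapping cell keys to centroid representatives."""
    mins = points.min(axis=0)
    span = np.ptp(points, axis=0) + 1e-9          # avoid index == coarse
    coarse_idx = np.floor((points - mins) / span * coarse).astype(int)
    cells = {}
    for p, c in zip(points, map(tuple, coarse_idx)):
        cells.setdefault(c, []).append(p)
    voxels = {}
    for c, members in cells.items():
        if len(members) <= threshold:
            voxels[c] = np.mean(members, axis=0)  # sparse: keep coarse cell
        else:                                     # dense: refine the grid
            members = np.asarray(members)
            sub = np.floor((members - mins) / span * coarse * fine).astype(int)
            for s in set(map(tuple, sub)):
                mask = (sub == np.array(s)).all(axis=1)
                voxels[c + s] = members[mask].mean(axis=0)
    return voxels
```

Sparse regions are summarized by large voxels while dense regions get finer cells, which is the intuition behind letting voxel size follow local point density.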
Article
Computer Science and Mathematics
Signal Processing

Mikhail Bakulin,

Taoufik Ben Rejeb,

Vitaly Kreyndelin,

Denis Pankratov,

Aleksei Smirnov

Abstract:

In modern (5G) and future Beyond-5G (B5G) Multi-User (MU) wireless communication systems using Multiple Input Multiple Output (MIMO) technology, base stations with a large number of antennas communicate with many mobile stations that have a small number of antennas. MU-MIMO technology is becoming especially relevant in modern multi-user wireless sensor networks across various application scenarios, but the problem of organizing a multi-user mode on the downlink arises. It can be solved using precoding at the base station, which requires full Channel State Information (CSI) for each mobile station. Transmitting this information in Massive MIMO systems normally requires a high-speed feedback channel. With limited feedback, reduced information (partial CSI) is used instead, for example the code word from the codebook that is closest to the estimated channel vector. Incomplete (or inaccurate) CSI causes interference from signals transmitted to neighboring mobile stations, which ultimately reduces the number of active users served. In this paper we propose a new downlink precoding approach with dynamic user selection for MU-MIMO systems, which also uses codebooks to reduce the information transmitted over the feedback channel, but unlike existing approaches, new information uncorrelated with previous transmissions is sent on each new transmission cycle. This allows the base station to accumulate the received information and restore the full MIMO channel matrix with greater accuracy without increasing the feedback overhead: as the CSI accuracy improves, the number of active users increases and after several cycles reaches the maximum value, which is determined by the number of base station transmit antennas. Statistical simulation confirms the effectiveness of the proposed precoding algorithm for modern and future Massive MIMO systems.
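The codebook-based limited feedback step can be sketched as a nearest-codeword search; the random unit-norm codebook below is purely illustrative, and the paper's contribution (making successive feedback cycles carry uncorrelated information) is not reproduced.

```python
import numpy as np

def nearest_codeword(h, codebook):
    """Limited-feedback CSI quantization: the mobile station reports only
    the index of the codebook entry best aligned with its estimated
    channel vector h, using a maximum-correlation criterion."""
    h = h / np.linalg.norm(h)                   # direction only matters
    correlations = np.abs(codebook.conj() @ h)  # |<c_i, h>| per codeword
    return int(np.argmax(correlations))
```

Reporting the index of a 2^B-entry codebook costs only B feedback bits per cycle; accumulating several such coarse reports over successive cycles is what lets the base station refine its channel estimate without increasing the per-cycle overhead.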

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2025 MDPI (Basel, Switzerland) unless otherwise stated