Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Denoising; Savitzky-Golay filter; polynomial fitting.
Online: 23 August 2021 (12:28:24 CEST)
A method for noise reduction of spectra based on fitting a multi-window model is presented. The spectrum is modeled as the sum of a polynomial background and Lorentzian peaks, and the model is fitted at every point in the spectrum and for every window size. An iterative algorithm is used for fitting: the background, computed from the initial data by direct least squares, is subtracted; the positive residual values are then inverted with the 1/x function, and the same least-squares procedure is used to fit the Lorentzian peaks. The result at each point is the weighted sum of the fits from all windows containing that point, with weighting factors computed by evaluating the quality of each fit. The performance of the presented method is compared with the Savitzky-Golay method and the wavelet noise-reduction method. The proposed approach provides good noise-reduction performance without requiring user-entered parameters.
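The 1/x inversion step exploits the fact that the reciprocal of a Lorentzian peak is a quadratic polynomial in x, so a direct least-squares polynomial fit recovers the peak parameters. The sketch below illustrates this linearization on ideal data; it demonstrates the underlying identity only, not the authors' full multi-window algorithm.

```python
import numpy as np

def fit_lorentzian_by_inversion(x, y):
    """Fit a single Lorentzian peak y = A / (1 + ((x - x0)/w)**2).
    Since 1/y is a quadratic in x, an ordinary least-squares
    polynomial fit of the inverted data recovers the parameters."""
    mask = y > 0                              # inversion is valid only for positive data
    c2, c1, c0 = np.polyfit(x[mask], 1.0 / y[mask], 2)
    x0 = -c1 / (2.0 * c2)                     # peak position
    A = 1.0 / (c0 - c1**2 / (4.0 * c2))       # peak height
    w = np.sqrt(1.0 / (A * c2))               # half-width at half-maximum
    return A, x0, w

# Noiseless example peak: A = 3, x0 = 1, w = 0.8
x = np.linspace(-5, 5, 201)
y = 3.0 / (1.0 + ((x - 1.0) / 0.8) ** 2)
A, x0, w = fit_lorentzian_by_inversion(x, y)
```

On noisy data the inversion amplifies noise in the tails, which is one reason the paper weights each window's fit by its quality.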
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Denoising; Savitzky-Golay filter; polynomial fitting
Online: 9 August 2021 (09:06:12 CEST)
A method for noise reduction of spectra based on the adaptive application of the Savitzky-Golay polynomial filter is presented. A polynomial approximation is calculated at every point of the spectrum and for every window size. The result at each point is the weighted sum of all polynomials whose windows contain that point, with weighting factors computed by evaluating the quality of the fit; two evaluation functions are proposed. The performance of the presented method is compared with the standard Savitzky-Golay method and the wavelet noise-reduction method. The proposed approach provides good noise-reduction performance without requiring user-entered parameters.
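The core idea, combining Savitzky-Golay fits over several window sizes with quality-based weights, can be sketched as follows. The inverse-residual weighting used here is an assumption for illustration; the paper proposes two specific evaluation functions that may differ.

```python
import numpy as np
from scipy.signal import savgol_filter

def adaptive_savgol(y, windows=(5, 9, 15, 25), order=2):
    """Weighted combination of Savitzky-Golay fits over several window
    sizes. Each window's fit is weighted pointwise by an (assumed)
    inverse local-residual quality score."""
    fits, weights = [], []
    for w in windows:
        f = savgol_filter(y, w, order)
        # local mean squared residual as a fit-quality estimate
        resid = np.convolve((y - f) ** 2, np.ones(w) / w, mode="same")
        fits.append(f)
        weights.append(1.0 / (resid + 1e-12))
    fits, weights = np.array(fits), np.array(weights)
    return (weights * fits).sum(axis=0) / weights.sum(axis=0)

# Demo: denoise a noisy sine without any user-tuned window size
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
smoothed = adaptive_savgol(noisy)
```

Because every window size contributes, the user never has to pick one, which is the "parameter-free" property the abstract claims.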
ARTICLE | doi:10.20944/preprints201708.0086.v1
Subject: Mathematics & Computer Science, Other Keywords: seasonality; forecasting; pull and push models; denoising
Online: 25 August 2017 (08:21:40 CEST)
In this paper we develop a forecasting algorithm for recurrent patterns in consumer demand. We study this problem in two different settings: pull and push models. We discuss several features of the algorithm concerning sampling, periodic approximation, denoising and forecasting.
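The periodic-approximation step can be illustrated with a minimal sketch: averaging each phase of the cycle across past periods denoises the recurring pattern, which is then repeated forward as the forecast. The function below is a simplified stand-in, not the paper's algorithm, and the period is assumed known.

```python
import numpy as np

def seasonal_forecast(history, period, horizon):
    """Forecast a recurrent demand pattern by periodic approximation:
    average each phase of the cycle over past periods (a simple form
    of denoising), then tile the averaged cycle forward."""
    n = (len(history) // period) * period     # drop a trailing partial cycle
    cycles = np.asarray(history[:n], dtype=float).reshape(-1, period)
    pattern = cycles.mean(axis=0)             # denoised recurring pattern
    return np.tile(pattern, horizon // period + 1)[:horizon]

# Demo: a noiseless 3-step pattern repeats exactly
forecast = seasonal_forecast([1, 2, 3] * 4, period=3, horizon=5)
```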
ARTICLE | doi:10.20944/preprints202103.0215.v1
Subject: Engineering, Automotive Engineering Keywords: Indoor Localization; Wi-Fi Fingerprint; Denoising Auto-encoder; JLGBMLoc
Online: 8 March 2021 (12:23:36 CET)
Wi-Fi-based localization has become one of the most practical methods for providing location-based services to mobile users. However, due to multipath interference and the high-dimensional sparsity of fingerprint data, localization systems based on received signal strength (RSS) struggle to achieve high accuracy. In this paper, we propose a novel indoor positioning method named JLGBMLoc (Joint denoising auto-encoder with LightGBM Localization). First, because noise and outliers can degrade dimensionality reduction on high-dimensional sparse fingerprint data, we propose a novel feature-extraction algorithm, the joint denoising auto-encoder (JDAE), which reconstructs the sparse fingerprint data to obtain a better feature representation. Then, LightGBM is introduced into Wi-Fi localization by binning the processed fingerprint data into histograms and growing the decision trees leaf-wise with a depth limit. Finally, we evaluated the proposed JLGBMLoc on the UJIIndoorLoc and Tampere datasets; experimental results show that the proposed model increases positioning accuracy dramatically compared with other existing methods.
ARTICLE | doi:10.20944/preprints202101.0344.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: attitude estimation; autoencoders; deep learning; denoising; Kalman filter; underwater environment
Online: 18 January 2021 (14:22:52 CET)
One of the main issues in underwater robot navigation is accurate vehicle positioning, which depends heavily on the orientation-estimation phase. The systems employed for this purpose are affected by several types of noise, mainly related to the sensors and to the irregular noise of the underwater environment. Filtering algorithms can reduce these effects if properly configured, but this configuration usually requires refined techniques and time. This paper presents DANAE++, an improved denoising autoencoder based on DANAE, which is able to recover Kalman filter IMU/AHRS orientation estimates from any kind of noise, independently of its nature. This deep-learning-based architecture had already proved robust and reliable, but the enhanced implementation achieves significant improvements in both results and performance. In fact, DANAE++ is able to denoise the three attitude angles at the same time, as verified also on the estimates provided by the better-performing extended Kalman filter. Further tests could make this method suitable for real-time navigation tasks.
ARTICLE | doi:10.20944/preprints201709.0158.v1
Subject: Engineering, Automotive Engineering Keywords: variational mode decomposition; Euclidean Distance; diesel engine; vibration signal; denoising algorithm
Online: 29 September 2017 (14:53:38 CEST)
Variational mode decomposition (VMD) is a recently introduced adaptive signal-decomposition algorithm with a solid theoretical foundation and good noise robustness compared with empirical mode decomposition (EMD). The vibration signal of a diesel engine contains substantial background noise. To address this problem, a denoising algorithm based on VMD and the Euclidean distance is proposed. First, a multi-component, non-Gaussian, noisy simulation signal is constructed and decomposed by VMD into a given number K of band-limited intrinsic mode functions. Then the Euclidean distance between the probability density function (PDF) of each mode and that of the simulation signal is calculated. The signal is reconstructed from the relevant modes, selected on the basis of the similarity between the PDF of each mode and that of the simulation signal. Finally, vibration signals from diesel-engine connecting-rod bearing faults are analyzed by the proposed method. The results show that, compared with other denoising algorithms, the proposed method achieves a better denoising effect and more effectively enhances the fault characteristics of the vibration signals of diesel-engine connecting-rod bearings.
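The mode-selection step can be sketched independently of the decomposition itself: compare each mode's amplitude PDF with the original signal's PDF via the Euclidean distance and keep the closest modes for reconstruction. The VMD decomposition is assumed to come from an external implementation; histogram binning and the keep ratio below are illustrative choices.

```python
import numpy as np

def select_modes_by_pdf_distance(signal, modes, keep_ratio=0.5, bins=64):
    """Keep the modes whose amplitude PDF (histogram estimate) is
    closest, in Euclidean distance, to the PDF of the original signal,
    and reconstruct the signal as their sum."""
    lo = min(signal.min(), min(m.min() for m in modes))
    hi = max(signal.max(), max(m.max() for m in modes))
    edges = np.linspace(lo, hi, bins + 1)     # shared bins for comparable PDFs
    p_sig, _ = np.histogram(signal, bins=edges, density=True)
    dists = [np.linalg.norm(np.histogram(m, bins=edges, density=True)[0] - p_sig)
             for m in modes]
    order = np.argsort(dists)                 # smallest distance = most signal-like
    kept = order[: max(1, int(len(modes) * keep_ratio))]
    return sum(modes[i] for i in kept), dists

# Demo: a sinusoidal mode is closer in PDF to the signal than a noise mode
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
sig_mode = np.sin(2 * np.pi * 5 * t)
noise_mode = 0.05 * rng.standard_normal(t.size)
signal = sig_mode + noise_mode
rec, dists = select_modes_by_pdf_distance(signal, [sig_mode, noise_mode])
```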
Subject: Keywords: 3D object reconstruction; depth cameras; Kinect sensors; open source; signal denoising; SLAM
Online: 9 April 2019 (12:24:34 CEST)
3D object reconstruction from depth image streams using Kinect-style depth cameras has been extensively studied. In this paper, we propose an approach for accurate camera tracking and volumetric dense surface reconstruction, assuming a known cuboid reference object is present in the scene. Our contribution is three-fold. (a) We maintain drift-free camera-pose tracking by incorporating the 3D geometric constraints of the cuboid reference object into the image-registration process. (b) We reformulate depth-stream fusion as a binary classification problem, enabling high-fidelity surface reconstruction, especially in the concave zones of objects. (c) We further present a surface-denoising strategy to mitigate topological inconsistencies (e.g., holes and dangling triangles), which facilitates the generation of a noise-free triangle mesh. We extend our public dataset CU3D with several new image sequences, test our algorithm on them, and quantitatively compare it with other state-of-the-art algorithms. Both our dataset and our algorithm are available as open source at https://github.com/zhangxaochen/CuFusion for other researchers to reproduce and verify our results.
ARTICLE | doi:10.20944/preprints201808.0517.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: fractal dimension; surface defect identification; adaptive fractal filtering; edge extraction; image denoising
Online: 30 August 2018 (05:53:25 CEST)
In addition to image filtering in the spatial and frequency domains, algorithms based on fractal characteristics offer considerable flexibility in the design and implementation of image-processing solutions for areas such as image enhancement, image restoration, and image data compression, among a spectrum of applications of practical interest. Addressing the real-world problem of identifying workpiece surface defects, a generic adaptive fractal filtering algorithm is proposed that offers advantages for target recognition, feature extraction, and image denoising at multiple scales. First, we reveal the physical relationship between a signal's SNR and its fractal-dimension parameters, validating that the fractal dimension can be used to adaptively extract image features. Second, an adaptive fractal filtering algorithm (abbreviated AFFA) is proposed based on the identified correlation between image fractal dimensions and object scales, and it is verified in a benchmark image-processing case study. Third, the proposed fractal filtering algorithm is used to identify surface defects on a flange workpiece. Compared with conventional image-processing algorithms, the proposed algorithm shows superior computational simplicity and better performance. Numerical analysis and engineering case studies show that the fractal dimension is suitable for deriving an adaptive filtering algorithm for diverse-scale object identification, and that the proposed AFFA is feasible for general use in workpiece surface-defect detection.
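The fractal dimension that drives such adaptive filtering is commonly estimated by box counting: cover a binary image with boxes of increasing size and fit the slope of log(count) versus log(scale). The sketch below shows this standard estimator, not the paper's AFFA pipeline.

```python
import numpy as np

def box_counting_dimension(mask, scales=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting:
    count, at each scale s, the s-by-s boxes containing any set pixel,
    then fit the log-log slope."""
    h, w = mask.shape
    counts = []
    for s in scales:
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    slope = np.polyfit(np.log(scales), np.log(counts), 1)[0]
    return -slope

# Sanity checks: a filled region has dimension 2, a straight line dimension 1
plane = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[0] = True
d_plane = box_counting_dimension(plane)
d_line = box_counting_dimension(line)
```

Regions whose estimated dimension deviates from that of smooth surfaces are the candidates an adaptive fractal filter would treat as defects or edges.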
ARTICLE | doi:10.3390/sci2020039
Subject: Keywords: ensemble empirical mode decomposition (EEMD); denoising; mode mixing; electromyographic (EMG) signals; filtering; wavelet method
Online: 3 June 2020 (00:00:00 CEST)
One of the most basic pieces of information gained from dynamic electromyography is an accurate definition of muscle action and phase timing within the gait cycle. The human gait relies on the selective timing and intensity of appropriate muscle activations for stability, loading, and progression over the supporting foot during stance, and for advancing the limb in the swing phase. A common clinical practice is to low-pass filter integrated electromyogram (EMG) signals to denoise them and to determine onset and cessation events using a predefined threshold. However, the accuracy of defining the period of significant muscle activation via EMG varies with the temporal shift involved in filtering the signals; a low-pass filter with a fixed order and cut-off frequency introduces a time delay that depends on the frequency of the signal. To precisely identify muscle activation and determine muscle onset and cessation times, we explored onset and cessation epochs with EMG signals denoised using different filter banks: the wavelet method, the empirical mode decomposition (EMD) method, and the ensemble empirical mode decomposition (EEMD) method. In this study, gastrocnemius muscle onset and cessation were determined in sixteen participants within two age groups and under two walking conditions. Low-pass filtering of integrated EMG (iEMG) signals resulted in premature onset (28% of stance duration) in younger and delayed onset (38% of stance duration) in older participants, illustrating the time-delay problem of this filtering method. Comparatively, the wavelet denoising approach detected onset for normal walking most precisely, whereas the EEMD method showed the smallest onset deviation. In addition, EEMD-denoised signals could further detect pre-activation onsets during fast walking. A comprehensive comparison of denoising EMG signals using EMD, EEMD, and wavelet denoising is discussed, with the aim of accurately defining muscle onset under different walking conditions.
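Once the envelope has been denoised, the threshold-based onset/cessation step itself is simple. The sketch below shows a generic thresholding detector; the threshold ratio and minimum-burst duration are illustrative values, not the study's parameters.

```python
import numpy as np

def onset_cessation(envelope, fs, threshold_ratio=0.2, min_duration=0.05):
    """Detect onset and cessation times (in seconds) from a denoised EMG
    envelope by thresholding at a fraction of the peak amplitude.
    Bursts shorter than min_duration are rejected as spurious."""
    thr = threshold_ratio * envelope.max()
    active = envelope > thr
    edges = np.diff(active.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    if active[0]:                               # burst already active at start
        onsets = np.r_[0, onsets]
    if active[-1]:                              # burst still active at end
        offsets = np.r_[offsets, active.size]
    keep = (offsets - onsets) / fs >= min_duration
    return onsets[keep] / fs, offsets[keep] / fs

# Demo: one rectangular burst from 0.2 s to 0.4 s at fs = 1000 Hz
fs = 1000
env = np.zeros(1000)
env[200:400] = 1.0
onsets, offsets = onset_cessation(env, fs)
```

The time-delay problem described above arises because a causal low-pass filter shifts the envelope before this thresholding step, moving the detected crossings.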
ARTICLE | doi:10.20944/preprints202201.0411.v1
Subject: Engineering, General Engineering Keywords: compressive sensing; image reconstruction; regularization; total variation; augmented Lagrangian; non-local self-similarity; wavelet denoising
Online: 27 January 2022 (11:02:58 CET)
In remote sensing applications, one of the key points is the acquisition, real-time pre-processing, and storage of information. Due to the large amount of information present in the form of images or videos, compression of this data is necessary. Compressed sensing (CS) is an efficient technique to meet this challenge. It consists of acquiring a signal, assumed to have a sparse representation, using a minimal number of non-adaptive linear measurements. After this CS process, the original signal must be reconstructed at the receiver. Reconstruction techniques are often unable to preserve the texture of the image and tend to smooth out its details. To overcome this problem, we propose in this work a CS reconstruction method that combines total-variation regularization with a non-local self-similarity constraint. The optimization is performed via the augmented Lagrangian, which avoids the difficult problem of the non-linearity and non-differentiability of the regularization terms. The proposed algorithm, called denoising compressed sensing by regularization terms (DCSR), performs not only image reconstruction but also denoising. To evaluate its performance, we compare it with state-of-the-art methods, such as Nesterov's algorithm, group-based sparse representation, and wavelet-based methods, in terms of denoising and the preservation of edges, texture, and image details, as well as computational complexity. Our approach gains up to 25% in denoising efficiency and visual quality as measured by two metrics: PSNR and SSIM.
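The core ingredients, a least-squares data-fidelity term plus a total-variation penalty, can be sketched in one dimension with a smoothed TV term and plain gradient descent. This is a deliberately simplified stand-in for the paper's augmented-Lagrangian solver and omits the non-local self-similarity constraint; all parameter values are illustrative.

```python
import numpy as np

def tv_cs_reconstruct(A, b, lam=0.1, eps=1e-2, step=1e-3, iters=2000):
    """Minimize ||Ax - b||^2 + lam * sum(sqrt(dx^2 + eps^2)) by gradient
    descent, where dx are finite differences of x. The eps-smoothed TV
    term sidesteps the non-differentiability of |dx| at zero."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        dx = np.diff(x)
        g = dx / np.sqrt(dx ** 2 + eps ** 2)    # derivative of smoothed |dx|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                       # chain rule through np.diff
        tv_grad[1:] += g
        x -= step * (2 * A.T @ (A @ x - b) + lam * tv_grad)
    return x

def objective(x, A, b, lam=0.1, eps=1e-2):
    dx = np.diff(x)
    return np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(dx ** 2 + eps ** 2))

# Demo: recover a piecewise-constant signal from 30 random measurements of 50 samples
rng = np.random.default_rng(2)
x_true = np.r_[np.ones(25), np.zeros(25)]
A = rng.standard_normal((30, 50)) / np.sqrt(30)
b = A @ x_true
x_hat = tv_cs_reconstruct(A, b)
```

The TV penalty is what preserves sharp edges that a plain least-squares reconstruction would smear, which is the texture-preservation issue the abstract addresses.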
ARTICLE | doi:10.20944/preprints202008.0206.v1
Subject: Life Sciences, Microbiology Keywords: tongue microbiome; salivary microbiome; amplicon sequence variant (ASV); operational taxonomical unit (OTU); denoising; DADA2; taxonomic classifier
Online: 8 August 2020 (09:29:46 CEST)
The bacterial composition of oral samples has traditionally been determined by PCR amplicon sequencing of 16S rRNA genes. Recent amplicon sequence variant (ASV)-based analyses of 16S rRNA genes differ from those based on operational taxonomic unit (OTU) clustering in the way they deal with sequences containing potential errors. However, little information is available on their application in oral microbiome studies. Here, we conducted ASV-based analysis of oral microbiome samples using QIIME 2. We investigated the optimal parameters for sequence denoising with DADA2 and found that trimming the first 20 nucleotides from the 5′-end of both paired reads avoided excessive sequence loss during chimera removal. Truncating reads at positions 240–245 removed low-quality sequences while maintaining sufficient length to merge matching paired ends. Taxonomic assignment, using a naïve Bayes classifier trained on the V3-V4 region of reference 16S rRNA sequences in the expanded Human Oral Microbiome Database (eHOMD), resulted in bacterial compositions similar to those of OTU-based analyses. In contrast to OTU-based clustering, ASV-based analysis showed that taxonomic abundance at the genus or species level did not differ significantly in tongue microbiomes, regardless of brushing. QIIME 2 can, therefore, serve as a standard pipeline for ASV-based analysis of oral microbiomes.
ARTICLE | doi:10.20944/preprints202007.0209.v1
Subject: Engineering, Other Keywords: Deep learning; Head Related Transfer Function (HRTF); Restoration; Ambisonics; Spatial Audio; Spherical harmonic; Audio signal processing; Denoising; Auto-Encoder; Neural Network
Online: 10 July 2020 (08:58:11 CEST)
Spherical harmonic (SH) interpolation is a commonly used method to spatially up-sample sparse Head-Related Transfer Function (HRTF) datasets to denser ones. However, depending on the number of sparse HRTF measurements and the SH order, this process can introduce distortions in the high-frequency representation of the HRTFs. This paper investigates whether some of the distorted high-frequency HRTF components can be restored using machine-learning algorithms. A combination of Convolutional Auto-Encoder (CAE) and Denoising Auto-Encoder (DAE) models is proposed to restore the high-frequency distortion in SH-interpolated HRTFs. Results are evaluated using both Perceptual Spectral Difference (PSD) and localisation-prediction models, both of which demonstrate significant improvement after the restoration process.
REVIEW | doi:10.20944/preprints201908.0152.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: deep learning; machine learning model; convolutional neural networks (CNN); recurrent neural networks (RNN); denoising autoencoder (DAE); deep belief networks (DBNs); long short-term memory (LSTM); review; survey; state of the art
Online: 13 August 2019 (09:32:09 CEST)
Deep learning (DL) algorithms have recently emerged from machine learning and soft-computing techniques. Since then, several DL algorithms have been introduced to the scientific community and applied in various application domains. Today the use of DL has become essential due to its efficient learning, accuracy, and robustness in model building. However, the scientific literature has not yet offered a comprehensive list of DL algorithms. This paper provides a list of the most popular DL algorithms, along with their application domains.