ARTICLE | doi:10.20944/preprints201901.0142.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: remote sensing; multispectral imaging; DCT-filtering; vectorial (three-dimensional) filtering; BM3D-filtering; filtering with reference
Online: 15 January 2019 (07:18:33 CET)
Multispectral remote sensing data may contain component images that are heavily corrupted by noise, and a pre-filtering (denoising) procedure is often applied to enhance these component images. To do this, one can use reference images: component images of relatively high quality that are similar to the image subject to pre-filtering. Here we study the following problems: how to select component images that can serve as references (e.g., for Sentinel multispectral remote sensing data) and how to perform the actual denoising. We demonstrate that component images of the same resolution, as well as component images of better resolution, can be used as references. Examples of denoising of real-life images demonstrate the high efficiency of the proposed approach.
ARTICLE | doi:10.20944/preprints201810.0253.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: adaptive filtering; set-membership filtering; affine projection; data censoring; big data; outliers
Online: 12 October 2018 (04:57:08 CEST)
In this paper, the set-membership affine projection (SM-AP) algorithm is utilized to censor non-informative data in big data applications. To this end, the probability distribution of the additive noise signal and the steady-state excess mean-squared error (EMSE) are employed to estimate the threshold parameter of the single-threshold SM-AP (ST-SM-AP) algorithm so as to attain the desired update rate. Furthermore, by defining an acceptable range for the error signal, the double-threshold SM-AP (DT-SM-AP) algorithm is proposed to detect very large errors caused by irrelevant data such as outliers. The DT-SM-AP algorithm can censor both non-informative and irrelevant data in big data applications, and it can improve the misalignment and convergence rate of the learning process with high computational efficiency. The simulation and numerical results corroborate the superiority of the proposed algorithms over traditional algorithms.
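A minimal NumPy sketch of the double-threshold censoring idea, using an NLMS-style update for simplicity (the actual DT-SM-AP algorithm uses affine projections, and its thresholds are derived from the noise distribution and EMSE rather than the ad-hoc values assumed here): the filter updates only when the error is large enough to be informative but small enough not to be an outlier.

```python
import numpy as np

def dt_sm_nlms(x, d, n_taps=4, mu=0.5, gamma_low=0.05, gamma_high=5.0):
    """Double-threshold set-membership update (NLMS-style sketch).

    The filter is updated only when gamma_low < |error| < gamma_high:
    small errors carry no new information (censored as non-informative),
    while very large errors are treated as outliers (censored as
    irrelevant data)."""
    w = np.zeros(n_taps)
    updates = 0
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                    # a priori error
        if gamma_low < abs(e) < gamma_high:
            w += mu * e * u / (u @ u + 1e-8)
            updates += 1
    return w, updates

# identify a known FIR system from noisy observations
rng = np.random.default_rng(0)
h = np.array([1.0, -0.5, 0.25, -0.125])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, updates = dt_sm_nlms(x, d)
```

The update counter illustrates the data-censoring benefit: once the filter is inside the error bound, most samples are skipped entirely.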
ARTICLE | doi:10.20944/preprints202208.0426.v1
Online: 25 August 2022 (07:22:48 CEST)
With the ever-increasing popularity of unmanned aerial vehicles and other platforms providing dense point clouds, filters for the identification of ground points in such dense clouds are needed. Many filters have been proposed and are widely used, usually based on the determination of an original surface approximation and subsequent identification of points within a predefined distance from that surface. We present a new filter, the Multi-view and shift rasterization (MVSR) algorithm, based on a different principle: the identification of just the lowest points in individual grid cells, shifting of the grid along both planar axes, and subsequent tilting of the entire grid. The principle is presented in detail and compared both visually and numerically to other commonly used ground filters (PMF, SMRF, CSF, ATIN) on three sites with different ruggedness and vegetation density. Visually, the MVSR filter showed the smoothest and thinnest ground profiles, with ATIN the only filter performing comparably. The same was confirmed when comparing ground filtered by the other filters with the MVSR-based surface. The goodness of fit with the original cloud is demonstrated by the root mean square deviations (RMSD) of the points from the original cloud found below the MVSR-generated surface (ranging, depending on the site, between 0.6 and 2.5 cm). The MVSR filter performed outstandingly at all sites, identifying the ground points with great accuracy while filtering out the maximum of vegetation/above-ground points. The filter dilutes the cloud somewhat; in such dense point clouds, however, this can be perceived as a benefit rather than a disadvantage.
ARTICLE | doi:10.20944/preprints202206.0300.v1
Online: 22 June 2022 (03:37:45 CEST)
With the ever-increasing popularity of unmanned aerial vehicles and other platforms providing dense point clouds, universal filters for the accurate identification of ground points in such dense clouds are needed. Many filters have been proposed and are widely used, usually based on the determination of an original surface approximation and subsequent identification of points within a predefined distance from that surface. In this paper, we present a new filter. This Multi-view and shift rasterization (MVSR) algorithm is based on an entirely different principle: the identification of just the lowest points in individual grid cells, shifting of the grid along both planar axes, and subsequent tilting of the entire grid; after each of these steps, one lowest point per cell is detected. The principle is presented in detail and compared both visually and numerically to other commonly used ground filters (PMF, SMRF, CSF, ATIN) on three sites with different ruggedness and vegetation density. Visually, the MVSR filter showed the smoothest and thinnest ground profiles, with ATIN the only filter performing comparably (although its profiles were somewhat thicker and not as complete as the MVSR-acquired ground). The same was confirmed when comparing ground filtered by the other filters with the MVSR-based surface. The goodness of fit with the original cloud is demonstrated by the root mean square deviations (RMSD) of the points from the original cloud found below the MVSR-generated surface (ranging, depending on the site, between 0.6 and 2.5 cm). ATIN again performed closest to MVSR, with RMSDs of ground-filtered points found above the MVSR-based surface at individual sites ranging between 4.5 and 7.4 cm. The remaining filters performed comparably in the simplest flat area but poorly in the rugged and much-vegetated sites, with RMSDs above the MVSR surface ranging at such sites from 21 to 95 cm.
In conclusion, the novel filter presented in this paper performed outstandingly at all sites, identifying the ground points with great accuracy while filtering out the maximum of vegetation/above-ground points. The filter dilutes the cloud somewhat; in such dense point clouds, however, this can be perceived as a benefit rather than a disadvantage.
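The core MVSR step, keeping only the lowest point per grid cell, can be sketched as below (illustrative only: the full algorithm repeats this for shifted and tilted grids and merges the views, and the cell size here is an assumed parameter).

```python
import numpy as np

def lowest_per_cell(points, cell=0.5, shift=(0.0, 0.0)):
    """Keep the lowest point in each planar grid cell (one MVSR 'view').

    points: (N, 3) array of x, y, z coordinates.
    shift:  planar offset of the grid applied before rasterization;
            MVSR repeats the selection over shifted/tilted grids."""
    ij = np.floor((points[:, :2] + shift) / cell).astype(int)
    lowest = {}
    for idx, key in enumerate(map(tuple, ij)):
        if key not in lowest or points[idx, 2] < points[lowest[key], 2]:
            lowest[key] = idx
    return points[sorted(lowest.values())]

# flat ground at z = 0 with ~30% vegetation points above it
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(1000, 2))
z = np.where(rng.random(1000) < 0.3, rng.uniform(0.5, 2.0, 1000), 0.0)
cloud = np.column_stack([xy, z])
ground = lowest_per_cell(cloud, cell=1.0)
```

On this toy cloud the per-cell minimum alone already rejects nearly all above-ground points, which is why the repeated shifts and tilts can afford to dilute the cloud while still densely sampling the true ground.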
ARTICLE | doi:10.20944/preprints201812.0061.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: adaptive filtering; set-membership filtering; quaternion; SM-QNLMS; wind profile prediction; quaternionic adaptive beamforming
Online: 5 December 2018 (04:21:21 CET)
In this paper, we propose the set-membership quaternion normalized least-mean-square (SM-QNLMS) algorithm. To this end, we first review the quaternion least-mean-square (QLMS) algorithm and then the quaternion normalized least-mean-square (QNLMS) algorithm. Building on the QNLMS algorithm, we propose the SM-QNLMS algorithm in order to reduce the update rate of the QNLMS algorithm and avoid updating the system parameters when there is not enough innovation in the upcoming data. Moreover, thanks to its time-varying step size, the SM-QNLMS algorithm has a higher convergence rate than the QNLMS algorithm. Finally, the proposed algorithm is applied to wind profile prediction and quaternionic adaptive beamforming. The simulation results demonstrate that the SM-QNLMS algorithm outperforms the QNLMS algorithm, with higher convergence speed and a lower update rate.
ARTICLE | doi:10.20944/preprints202111.0151.v1
Online: 8 November 2021 (14:37:44 CET)
Subglottal Impedance-Based Inverse Filtering (IBIF) allows for the continuous, non-invasive estimation of glottal airflow from a surface accelerometer placed over the anterior neck skin below the larynx, which has been shown to be advantageous for the ambulatory monitoring of vocal function. However, during long-term ambulatory recordings over several days, conditions may drift from the laboratory environment where the IBIF parameters were initially estimated, due to sensor positioning, skin attachment, and temperature, among other factors. Observation uncertainties and model mismatch may result in significant deviations in the glottal airflow estimates, but these are very difficult to quantify in ambulatory conditions due to the lack of a reference signal. To address this issue, we propose a Kalman filter implementation of the IBIF filter, which allows for both estimating the model uncertainty and adapting the airflow estimates to correct for signal deviations. One-way ANOVA results from laboratory experiments using the Rainbow Passage indicate an improvement in amplitude-based measures for PVH subjects compared to IBIF, which shows a statistically significant difference with respect to the reference oral airflow (p=0.02, F=4.1). MFDR from PVH subjects differs slightly from the oral airflow when compared to IBIF (p=0.04, F=3.3). Other measures did not show significant differences with either the Kalman filter or IBIF, with the exception of H1H2, whose performance deteriorates for both methods. Overall, both methods yield similar glottal airflow measures, with the Kalman filter having the advantage of improved amplitude estimation. Moreover, the Kalman filter's deviations from the IBIF output airflow might suggest a better representation of some fine details in the ground-truth glottal airflow signal. Other applications may benefit even more from the adaptation offered by the Kalman filter implementation.
ARTICLE | doi:10.20944/preprints202103.0459.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Trend decomposition; Median filtering; kmeans; BiLSTM
Online: 18 March 2021 (07:21:22 CET)
Sedimentary microfacies division is the basis of oil and gas exploration research. Traditional sedimentary microfacies division depends mainly on human experience; it is strongly affected by human factors and is inefficient. Although deep learning has advantages in solving complex nonlinear problems, no effective deep learning method has so far been applied to sedimentary microfacies division. Therefore, this paper proposes a deep learning method based on DMC-BiLSTM for the intelligent division of well-logging sedimentary microfacies. First, the original curve is reconstructed multi-dimensionally by trend decomposition and median filtering, and spatio-temporal correlation clustering features are extracted from the reconstructed matrix by K-means. Then, taking the reconstructed features, original curve features, and clustering features as input, the predicted sedimentary microfacies type at the current depth is obtained with BiLSTM. Experimental results show that this method can effectively classify sedimentary microfacies, with its recognition accuracy reaching 96.84%.
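The trend-plus-residual reconstruction of a log curve can be sketched with a simple moving median (an illustrative stand-in: the paper's exact decomposition, window length, and feature matrix layout are not specified here, and the window size below is an assumed parameter).

```python
import numpy as np

def decompose(curve, window=11):
    """Split a well-log curve into a smooth trend (moving median)
    and a residual detail component; the two channels plus the raw
    curve form a multi-dimensional reconstruction of the signal."""
    half = window // 2
    padded = np.pad(curve, half, mode="edge")     # edge-pad so output length matches
    trend = np.array([np.median(padded[i:i + window])
                      for i in range(len(curve))])
    return trend, curve - trend

# noisy ramp: the median trend tracks the ramp, the residual isolates spikes
x = np.linspace(0, 1, 200)
curve = x.copy()
curve[50] += 5.0   # an outlier spike a median filter should reject
trend, resid = decompose(curve)
```

A median (rather than mean) trend is a natural choice for log curves because a single spurious spike barely moves the trend and lands almost entirely in the residual channel.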
ARTICLE | doi:10.20944/preprints201811.0566.v2
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Color image, grayscale image, motion blurring, random noise, inverse filtering, Wiener filtering, restoration of an image
Online: 5 February 2019 (16:13:14 CET)
In this paper, a color image of a car is first taken and transformed into a grayscale image. A motion blurring effect is then applied to the image according to the image degradation model described in equation 3; the blurring can be controlled through the a and b components of the model. Random noise is then added to the image via Matlab programming. Many methods can restore a noisy and motion-blurred image; in this paper, inverse filtering and Wiener filtering are implemented for the restoration. Both the motion-blurred and the noisy motion-blurred images are restored via the inverse filtering and Wiener filtering techniques, and a comparison is made between them.
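The standard frequency-domain form of these two restorations can be sketched in a few lines (a generic NumPy illustration rather than the paper's Matlab code; the synthetic image, PSF length, and noise-to-signal ratio below are assumptions):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener restoration:
    F_hat = conj(H) / (|H|^2 + NSR) * G, where G is the blurred
    spectrum, H the blur transfer function and NSR the
    noise-to-signal ratio. Setting nsr=0 reduces this to plain
    inverse filtering, which blows up wherever |H| is near zero."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))

# blur a simple synthetic image with a horizontal motion PSF, then restore
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = np.zeros((64, 64))
psf[0, :9] = 1.0 / 9.0                 # 9-pixel horizontal motion blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
```

The small regularization term in the denominator is exactly what distinguishes Wiener from inverse filtering: it caps the gain at frequencies where the blur transfer function (and hence the signal) is weak, which is why Wiener filtering degrades gracefully once noise is added.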
ARTICLE | doi:10.20944/preprints202212.0498.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: face detection; depth map; deep learning; filtering
Online: 27 December 2022 (01:49:45 CET)
Face detection is an important problem in computer vision because it enables a wide range of applications, such as facial recognition and analysis of human behavior. The problem is challenging because of the large variations in facial appearance across different individuals and different lighting and pose conditions. One way to detect faces is to utilize a highly advanced face detection method, such as RetinaFace, which uses deep learning techniques to achieve high accuracy on various datasets. However, even the best face detectors can produce false positives, which can lead to incorrect or unreliable results. In this paper, we propose a method for reducing false positives in face detection by using information from a depth map. A depth map is a two-dimensional representation of the distance of objects in an image from the camera. By using the depth information, the proposed method is able to better differentiate between true faces and false positives. We evaluate the method on a combined dataset of 549 images containing a total of 614 upright frontal faces. The results show that the proposed method significantly reduces the number of false positives without sacrificing the overall detection rate, indicating that depth information can be a useful tool for improving face detection performance.
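One simple way depth information can veto a detection is sketched below. This is an illustrative filter under assumed thresholds, not the paper's actual decision rule: it checks that the median depth inside a detection box is physically plausible and that the box is roughly at a single distance, as a real face would be.

```python
import numpy as np

def is_plausible_face(depth_map, box,
                      min_depth=0.3, max_depth=3.0, max_rel_spread=0.2):
    """Reject a detection whose depth statistics do not look face-like.

    box: (x, y, w, h) in pixels. The median distance must lie in a
    plausible range, and the depth spread inside the box must be small
    relative to it: faces sit at roughly one distance, while posters,
    reflections, and background clutter often do not. All thresholds
    here are illustrative assumptions."""
    x, y, w, h = box
    patch = depth_map[y:y + h, x:x + w]
    med = np.median(patch)
    if not (min_depth <= med <= max_depth):
        return False
    return np.std(patch) / med <= max_rel_spread

# synthetic scene: a face-like region at 1 m, a mixed-depth region, far background
depth = np.full((100, 100), 5.0)
depth[20:50, 20:50] = 1.0                                        # face at 1 m
depth[60:90, 60:90] = np.linspace(0.2, 6.0, 900).reshape(30, 30)  # depth clutter
```

Such a check runs after the 2D detector, so it can only remove boxes, never add them, which is consistent with cutting false positives without touching the detection rate for true faces.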
ARTICLE | doi:10.20944/preprints202007.0237.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Independent Component Analysis; PolSAR; speckle filtering; statistical classification
Online: 11 July 2020 (09:49:37 CEST)
Independent Component Analysis (ICA) has recently been introduced as a reliable alternative for identifying canonical scattering mechanisms within Polarimetric Synthetic Aperture Radar (PolSAR) images. This manuscript addresses two important aspects of applying such methods to real data, namely speckle filtering and statistical classification with ICA. A novel PolSAR data processing framework is introduced by adjusting Lee's sigma filter to the particular nature of Touzi's polarimetric decomposition. In its current form, it allows the use of the ICA mixing matrix in the derived speckle filter. An extension of the Fromont et al. iterative segmentation is introduced as well. The proposed framework is tested using P-band airborne PolSAR data acquired for the ESA TropiSAR campaign.
ARTICLE | doi:10.20944/preprints202001.0247.v1
Subject: Mathematics & Computer Science, Other Keywords: global illumination; rendering; filtering; caching; Level-of-Detail
Online: 22 January 2020 (02:18:05 CET)
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
ARTICLE | doi:10.20944/preprints201803.0253.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Cold start, Recommender systems, Active learning, Collaborative filtering
Online: 29 March 2018 (15:08:38 CEST)
This paper focuses on the new-user cold-start issue in the context of recommender systems. New users who do not receive pertinent recommendations may abandon the system. To cope with this issue, we use active learning techniques. These methods engage new users to interact with the system by presenting them with a questionnaire that aims to elicit their preferences for the related items. An example question is "do you like this book?", and the user's answer ("yes", "no", or "I have not read it (unknown)") reflects the user's degree of interest in the item. As a consequence, the system can learn the users' preferences from these answers. The goal of active learning is to choose the questions (items) for each user well; it is therefore necessary to personalize the questionnaires so as to retrieve the maximum information while avoiding "unknown" answers. In this paper, we propose an active learning technique that exploits past users' interests and past users' predictions in order to identify the best questions to ask.
ARTICLE | doi:10.20944/preprints202207.0368.v1
Subject: Engineering, Mechanical Engineering Keywords: Systems Engineering; Kane Damper; Forward Error Correction; Matched Filtering
Online: 25 July 2022 (09:35:15 CEST)
Within the past decade, the aerospace engineering industry has evolved beyond the constraints of single, large, custom satellites. Due to the increased reliability and robustness of commercial off-the-shelf (COTS) printed circuit board (PCB) components, missions have instead transitioned towards deploying swarms of smaller satellites. This approach significantly decreases mission cost by reducing custom engineering and deployment expenses, and nanosatellites can be developed quickly, with a more modular design, at lowered risk. The Alpha mission at Cornell Space Systems Studio is fabricated in this manner; however, for the purposes of this mission, only one satellite was initially developed. This manuscript discusses a systems engineering approach to the development of this satellite. As a disclaimer, the manuscript is written from a systems perspective and therefore follows many subsystems with a wide range of functionalities. The research was kept broad with the aim of contributing to the mission as a system, through a range of development phases including validation and verification of existing methods. The two systems primarily focused on are the Attitude Control System (ACS) of the carrier nanosatellite (cubesat) and the RF communications on the excreted picosatellites (chipsats). Milestones achieved in chipsat RF include chipsat-to-chipsat communication, chipsat-to-SDR ground station communication, packet creation, error correction, appending a preamble, and filtering the signal. Achievements on the ACS side included controller traceability, verification and validation, software rigidity tests, hardware endurance testing, and Kane damper and IMU tuning. These developments matured the technology readiness level (TRL) of our systems in preparation for satellite deployment.
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: filtering; information; degeneracy; entropy; relevance; resolution; complexity; complex networks
Online: 2 August 2020 (16:44:25 CEST)
We explored the statistics of filtering of simple patterns on a number of deterministic and random graphs as a tractable example of information processing in complex systems. In this problem, multiple inputs map to the same output, and the statistics of filtering are represented by the distribution of this degeneracy. For a few simple filter patterns on a ring we obtained an exact solution of the problem, and we described more difficult filter setups numerically. For each of the filter patterns and networks we found a few numbers essentially describing the statistics of filtering and compared them across networks. Our results for networks with diverse architectures appear to be essentially determined by two factors: whether the graph structure is deterministic or random, and the vertex degree. We find that filtering in random graphs produces much richer statistics than in deterministic graphs. This statistical richness is reduced by increasing the graph’s degree.
ARTICLE | doi:10.20944/preprints201905.0243.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Machine Vision; Morphological image filtering; Galvanic Industry; Rear-projection.
Online: 20 May 2019 (11:46:34 CEST)
In the fashion field, the use of electroplated small metal parts such as studs, clips and buckles is widespread. The plate is often made of a precious metal, such as gold or platinum. Due to the high cost of these materials, it is strategically relevant and of primary importance for manufacturers to avoid any waste by depositing only the strictly necessary amount of material. To this aim, companies need to know the overall number of items to be electroplated so that the parameters driving the galvanic process can be set properly. Accordingly, the present paper describes a Machine Vision-based method able to automatically count small metal parts arranged on a galvanic frame. The devised method relies on the definition of a proper acquisition system and on the development of image processing-based routines. Such a system is then implemented in a counting machine that is meant to be adopted in galvanic industrial practice to properly define a suitable set of working parameters (such as current, voltage and deposition time) for the electroplating machine and, thereby, to assure the desired plate thickness on the one hand and to avoid material waste on the other.
ARTICLE | doi:10.20944/preprints201804.0209.v2
Subject: Mathematics & Computer Science, Other Keywords: plant phenotyping; noise filtering; binarization; accuracy evaluation; connected components
Online: 24 April 2018 (17:02:18 CEST)
Plants are a key biological component of our environment, sustaining human life and other creatures. Understanding how plant functions react to their surroundings helps us better manage plant growth and the development of food products. Plant phenotyping provides this biological information, but it requires tools to access the plant knowledge. Imaging is one such phenotyping solution; it consists of imaging hardware, such as a camera, and image analysis software that analyzes changes in plant images, such as plant growth rates. In this paper, we propose a preprocessing algorithm that eliminates noise and separates the foreground from the background, aiding subsequent plant image segmentation. Preprocessing is an important stage that affects the quality of the segmentation and, in turn, of the plant image labeling and analysis. Our proposed algorithm focuses on noise-removal steps, such as color space conversion and filtering, and on a local adaptive binarization step, such as Niblack's method. Finally, we evaluate our algorithm against others by testing a variety of binarization methods.
ARTICLE | doi:10.20944/preprints201709.0036.v1
Subject: Physical Sciences, Optics Keywords: interference; interference cancellation; noise reduction; digital filtering; spectroscopy; sensors
Online: 11 September 2017 (04:42:08 CEST)
One of the most common limits to gas sensor performance is the presence of unwanted interference fringes or etalons arising, for example, from multiple reflections between surfaces in the optical path. Additionally, since the amplitude and frequency of these fringes depend on the distance and alignment of the optical elements, they are affected by temperature changes and mechanical disturbances, giving rise to signal drift. In this work, we present a novel semi-parametric algorithm which allows the extraction of a signal, such as the spectroscopic absorption line of a gas molecule, from a background containing arbitrary disturbances, without making any assumption on the functional form of those disturbances. The algorithm is applied first to simulated data and then to oxygen absorption measurements in the presence of strong fringes. To the best of the authors' knowledge, the algorithm enables an unprecedented accuracy, particularly if the fringes have a free spectral range and amplitude comparable to those of the signal to be detected. The described method has the advantage of being based purely on post-processing and of being extremely straightforward to implement if the functional form of the Fourier transform of the signal is known. It therefore has the potential to enable interference-immune absorption spectroscopy. Finally, its relevance goes beyond absorption spectroscopy for gas sensing, since it can be applied to any kind of spectroscopic data.
ARTICLE | doi:10.20944/preprints202107.0056.v1
Subject: Engineering, Automotive Engineering Keywords: vibrations; dynamics; sounding rocket; vibration filtering; signal processing; space engineering
Online: 2 July 2021 (14:10:51 CEST)
Determining the vibration environment is crucial to analyzing the design of any mechanical system, especially dynamic systems such as sounding rockets. The accuracy of accelerometer measurements can be improved by applying mechanical vibration filtering and amplifying devices. This work presents a theoretical description of a tunable filter and amplifier: its working principle is explained, and results from the application of the device on a sounding rocket are provided. It is shown that implementing such devices enhanced the accuracy of the acceleration measurements. Conclusions on future implementations are also provided.
ARTICLE | doi:10.20944/preprints201905.0228.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Deep learning, LSTM, Machine learning, Post-filtering, Signal processing, Speech Synthesis
Online: 17 May 2019 (16:16:53 CEST)
Several researchers have contemplated deep learning-based post-filters to increase the quality of statistical parametric speech synthesis. These post-filters map the synthetic speech to the natural speech, considering the different parameters separately and trying to reduce the gap between them. Long Short-Term Memory (LSTM) neural networks have been applied successfully for this purpose, but there are still many aspects to improve, both in the results and in the process itself. In this paper, we introduce a new pre-training approach for the LSTM, with the objective of enhancing the quality of the synthesized speech, particularly in the spectrum, in a more efficient manner. Our approach begins with an auto-associative training of one LSTM network, which is then used as an initialization for the post-filters. We show the advantages of this initialization for enhancing the Mel-Frequency Cepstral parameters of synthetic speech. Results show that, in most cases, the initialization achieves better results in enhancing the statistical parametric speech spectrum than the common random initialization of the networks.
ARTICLE | doi:10.20944/preprints201703.0127.v1
Subject: Engineering, Control & Systems Engineering Keywords: hybrid adaptive; unscented kalman filtering; maximum a posteriori; maximum likelihood criterion
Online: 17 March 2017 (01:49:42 CET)
In order to overcome the limitations of the traditional adaptive Unscented Kalman Filtering (UKF) algorithm in estimating the state and measurement noise covariances, we propose in this paper a hybrid adaptive UKF algorithm combining the Maximum a posteriori (MAP) criterion and the Maximum likelihood (ML) criterion. First, to prevent the assumed noise covariance from deviating from the true value, which leads to state estimation errors and can cause filter divergence, a real-time covariance matrix estimation algorithm based on hybrid MAP and ML is proposed for obtaining the state and measurement noise covariances, respectively. Then, a balance equation between the two kinds of covariance matrices is constructed to minimize the state estimation error. Compared with the MAP-based and ML-based UKF, the proposed algorithm provides better convergence and stability.
ARTICLE | doi:10.20944/preprints201610.0002.v1
Subject: Earth Sciences, Oceanography Keywords: Destriping; Undecimated wavelet transform; Fourier filtering; Sea Surface Temperature; Ocean color
Online: 3 October 2016 (20:39:17 CEST)
This paper introduces a new destriping algorithm for remote sensing data. The method is based on a combination of the Haar stationary wavelet transform and Fourier filtering. State-of-the-art methods based on the discrete wavelet transform (DWT) are not always effective and may cause various artifacts. Our contribution is three-fold: i) we propose to use the undecimated wavelet transform (UWT) to avoid as much as possible the shortcomings of the classical DWT; ii) we combine spectral filtering and the UWT using the simplest possible wavelet, the Haar basis, for computational efficiency; iii) we handle 2D fields with missing data, as commonly observed in ocean remote sensing due to atmospheric conditions (e.g., cloud contamination). The performance of the proposed filter is tested and validated on the suppression of horizontal stripe artifacts in cloudy L2 Sea Surface Temperature (SST) and ocean color snapshots.
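The Fourier half of such a scheme can be illustrated with a plain spectral notch (a simplified stand-in for the paper's combined wavelet-plus-Fourier filter, without the UWT stage or missing-data handling): horizontal stripes are constant along rows, so their energy sits in the single spectrum column at zero horizontal frequency.

```python
import numpy as np

def destripe_fourier(img, keep_low=2):
    """Suppress horizontal stripe noise by notching the spectrum column
    that carries row-constant patterns (horizontal frequency = 0),
    while keeping the few lowest vertical frequencies so the
    large-scale field is preserved."""
    F = np.fft.fft2(img)
    rows = img.shape[0]
    notch = np.ones_like(F)
    notch[keep_low:rows - keep_low, 0] = 0.0   # zero v=0 column except DC region
    return np.real(np.fft.ifft2(F * notch))

# smooth field varying along x, plus additive horizontal stripes varying along y
x = np.arange(64)
field = np.tile(np.sin(2 * np.pi * x / 64.0), (64, 1))
stripes = np.tile(0.5 * np.sin(2 * np.pi * 11 * x / 64.0), (64, 1)).T
destriped = destripe_fourier(field + stripes)
```

On this synthetic case the notch removes the stripes essentially exactly; the reason the paper pairs such spectral filtering with a wavelet transform is that real SST fields do have genuine energy near zero horizontal frequency, which a raw notch would damage.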
ARTICLE | doi:10.20944/preprints202212.0405.v1
Subject: Earth Sciences, Geoinformatics Keywords: hyperspectral data; few-shot learning; deep features; convolution kernels; edge-preserving filtering
Online: 22 December 2022 (01:44:48 CET)
In recent years, various deep learning frameworks have been introduced for hyperspectral image (HSI) classification. However, the proposed network models have high model complexity and do not provide high classification accuracy when few-shot learning is used. This paper presents an HSI classification method that combines a random patches network (RPNet) and recursive filtering (RF) to obtain informative deep features. The proposed method first convolves image bands with random patches to extract multi-level deep RPNet features. Thereafter, the RPNet feature set is subjected to dimension reduction through principal component analysis (PCA), and the extracted components are filtered using the RF procedure. Finally, the HSI spectral features and the obtained RPNet-RF features are combined to classify the HSI using a support vector machine (SVM) classifier. To test the performance of the proposed RPNet-RF method, experiments were performed on three widely known datasets using a few training samples per class, and the classification results were compared with those obtained by other advanced HSI classification methods intended for small training samples. The comparison showed that the RPNet-RF classification achieves higher values of evaluation metrics such as overall accuracy and Kappa coefficient (https://github.com/UchaevD/RPNet-RF).
ARTICLE | doi:10.20944/preprints202211.0155.v1
Subject: Engineering, Mechanical Engineering Keywords: Statistical Energy Analysis; Coupling Loss Factor; Power Injection Method; Monte Carlo Filtering
Online: 8 November 2022 (10:33:02 CET)
Monte Carlo Filtering (MCF) is one of the methods of Experimental Statistical Energy Analysis (E-SEA) that allows the correction of a negative loss factor (LF). In this article, a modification of the MCF method, called DESA (Diagonal Extension of the Search Area), is proposed. The technique applies a non-uniform extension of the search area when generating a population of normalized energy matrices; the degree of expansion of the search area is controlled by the Diagonal Penalty Factor (DPF). The authors demonstrate the method's effectiveness on a system that could not be identified in several frequency bands by the classical MCF method: after applying DESA, it was possible to fill in the problematic bands that were missing coupling loss factor (CLF) and damping loss factor (DLF) values. The paper also proposes a way to minimize the errors introduced by using overly high DPF values.
ARTICLE | doi:10.20944/preprints201908.0282.v1
Subject: Engineering, Other Keywords: intelligent tractor; vision navigation; improved anti-noise morphology; boundary line; Guided Filtering
Online: 27 August 2019 (10:37:59 CEST)
An improved anti-noise morphology vision navigation algorithm is proposed for intelligent tractor tillage in a complex agricultural field environment. First, the two key steps, Guided Filtering and improved anti-noise morphology navigation line extraction, are described in detail. Then, experiments were carried out to verify the effectiveness and advancement of the presented algorithm. Finally, the optimal template and its application conditions were studied to improve the image processing speed. The comparison experiments show that the YCbCr color space has the minimum time consumption, 0.094 s, compared with the HSV, HIS and 2R-G-B color spaces. The Guided Filtering method distinguishes the boundary between new and old soil more effectively than competing methods such as Tarel, Multi-scale Retinex, Wavelet-based Retinex and Homomorphic Filtering, while also having the fastest processing speed, 0.113 s. The soil boundary line extracted by the improved anti-noise morphology algorithm has the best precision and speed compared with other operators such as Sobel, Roberts, Prewitt and Log. After comparing image templates of different sizes, the optimal template, 140×260 pixels, supports high-precision vision navigation with a course deviation angle of no more than 7.5°. The maximum tractor speeds with the optimal template and the global template are 51.41 km/h and 27.47 km/h respectively, which meet the real-time vision navigation requirements of smart tractor tillage in the field. The experimental results demonstrate the feasibility of autonomous vision navigation for tractor tillage using the new and old soil boundary line extracted by the proposed improved anti-noise morphology algorithm, which has broad application prospects.
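The Guided Filtering step named above is the standard edge-preserving filter of He et al.; a minimal numpy sketch follows. The window radius, regularization `eps`, and the step-edge demo are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def box(img, r):
    """Mean filter of radius r via padded 2-D cumulative sums."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (He et al.):
    locally fit p ~ a*I + b, then average the coefficients."""
    mI, mp = box(I, r), box(p, r)
    varI = box(I * I, r) - mI * mI
    covIp = box(I * p, r) - mI * mp
    a = covIp / (varI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

# Demo: denoise a noisy step edge; the clean guide preserves the boundary,
# much as the soil boundary is preserved while texture noise is smoothed.
rng = np.random.default_rng(0)
guide = np.repeat(np.array([[0.0] * 8 + [1.0] * 8]), 16, axis=0)
noisy = guide + rng.normal(0, 0.1, guide.shape)
smoothed = guided_filter(guide, noisy, r=2, eps=1e-3)
```
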
ARTICLE | doi:10.20944/preprints201907.0248.v1
Subject: Engineering, Control & Systems Engineering Keywords: intelligent tractor; vision navigation; improved anti-noise morphology; boundary line; Guided Filtering
Online: 23 July 2019 (04:27:44 CEST)
An improved anti-noise morphology vision navigation algorithm is proposed for intelligent tractor tillage in a complex agricultural field environment. Firstly, the two key steps, Guided Filtering and improved anti-noise morphology navigation line extraction, are addressed in detail. Then, experiments were carried out to verify the effectiveness and advancement of the presented algorithm. Finally, the optimal template and its application conditions were studied to improve the image processing speed. The comparison experiments show that the YCbCr color space has the minimum time consumption, 0.094 s, compared with the HSV, HIS and 2R-G-B color spaces. The Guided Filtering method enhances the boundary between new and old soil more effectively than other methods such as Tarel, Multi-scale Retinex, Wavelet-based Retinex and Homomorphic Filtering, while also having the fastest processing speed, 0.113 s. The soil boundary line extracted by the improved anti-noise morphology algorithm has the best precision and speed compared with other operators such as Sobel, Roberts, Prewitt and Log. After comparing image templates of different sizes, the optimal template, 140×260 pixels, meets high-precision vision navigation requirements with a course deviation angle of no more than 7.5°. The maximum tractor speeds with the optimal template and the global template are 51.41 km/h and 27.47 km/h respectively, which meet the real-time vision navigation requirements of smart tractor tillage in the field. The experimental vision navigation results demonstrate the feasibility of autonomous vision navigation for tractor tillage using the new and old soil boundary line extracted by the proposed improved anti-noise morphology algorithm, which has broad application prospects.
ARTICLE | doi:10.20944/preprints201903.0006.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Forest hydrology; Canopy filtering; trace metal; throughfall; gap edge canopy; closed canopy.
Online: 1 March 2019 (11:29:55 CET)
Trace metals can enter natural regions with low human disturbance through atmospheric circulation, but little information is available regarding how the canopy can retain trace metals. Therefore, a representative sub-alpine spruce plantation was selected to investigate the net throughfall fluxes of eight trace metals (Fe, Mn, Cu, Zn, Al, Pb, Cd and Cr) under closed canopy and gap-edge canopy from August 2015 to July 2016. Over the one-year observation period, the annual fluxes of Al, Zn, Fe, Mn, Cu, Cd, Cr and Pb in the deposited precipitation were 7.29, 2.30, 7.02, 0.16, 0.19, 0.06, 0.56 and 0.24 kg·ha-1, respectively. The annual net throughfall fluxes of these trace metals were 1.73, 0.9, 1.68, -0.032, 0.04, 0.018, 0.093 and 0.087 kg·ha-1, respectively, in the gap-edge canopy, and -1.6, 1.13, 1.65, -0.10, 0.05, 0.03, 0.26 and 0.15 kg·ha-1, respectively, in the closed canopy. The closed canopy displayed a greater filtering effect on the trace metals from precipitation than the gap-edge canopy in the sub-alpine forest. In the rainy season, the net filtering ratio of trace metals ranged from -66% to 89% in the closed canopy and from -52% to 25% in the gap-edge canopy. However, the net filtering ratio of all trace metals was greater than 50% in the closed canopy in the snowy season. Therefore, the results suggest that most trace metals moving through the forest canopy are taken up rather than leached by rainfall; moreover, the closed canopy can efficiently take up trace metals in the snowy season.
ARTICLE | doi:10.20944/preprints201901.0287.v1
Subject: Biology, Agricultural Sciences & Agronomy Keywords: on-farm precision experimentation; normalized difference vegetation index; data filtering; error correction
Online: 29 January 2019 (04:55:05 CET)
The objective of this work was to investigate the use of remotely sensed vegetation indices to improve the quality of yield maps. The method was applied to the yield data of twelve cornfields from the Data Intensive Farm Management project. The results revealed the need to time-shift the yield values by up to three seconds to better match the sensor readings with the geographic coordinates. The residuals of the yield prediction model were used to identify points with unlikely yield values for their location, as an alternative to traditional approaches based on local spatial statistics, without any assumption of spatial dependence or stationarity. The temporal and spatial distribution of the standardized coefficients for each experimental unit highlighted the presence of trends in the data. At least five of the twelve fields presented trends that could have been induced by data collection.
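The residual-based screening described above can be illustrated with a small sketch: fit yield against a vegetation index, then flag points whose standardized residual is extreme, with no spatial assumption. The linear model, the 3-sigma cutoff, and the synthetic NDVI/yield values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: yield (t/ha) predicted from NDVI with a linear model.
ndvi = rng.uniform(0.3, 0.9, 200)
yield_t = 4.0 + 8.0 * ndvi + rng.normal(0, 0.3, 200)
yield_t[[10, 50]] += 6.0               # inject two unlikely yield values

# Least-squares fit and standardized residuals -- no spatial statistics needed.
A = np.column_stack([np.ones_like(ndvi), ndvi])
coef, *_ = np.linalg.lstsq(A, yield_t, rcond=None)
resid = yield_t - A @ coef
z = (resid - resid.mean()) / resid.std()

# Flag points whose standardized residual exceeds 3 as unlikely for their NDVI.
outliers = np.flatnonzero(np.abs(z) > 3)
```
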
ARTICLE | doi:10.20944/preprints201810.0720.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: electroencephalogram; event-driven signal acquisition; activity selection; data compression; adaptive rate filtering
Online: 30 October 2018 (09:22:56 CET)
Segmentation and denoising are basic operations required in every signal processing and classification system. Classical segmentation and denoising approaches are time-invariant; consequently, they post-process unnecessary information and increase the system's processing activity and power consumption. In this context, an efficient event-driven segmentation and denoising technique is proposed. It is founded on the principles of level crossing and activity selection, so it can adapt its sampling frequency, segmentation window length and position, and filter order by analyzing the local characteristics of the input signal. As a result, the computational complexity and power consumption of the proposed system are reduced compared to its classical counterparts. The performance of the suggested system is compared with the classical one for the case of multi-channel electroencephalogram (EEG) signals. Results show a noticeable compression gain with an effective adaptation of the denoising filter order. This indicates a significant computational gain, transmission data rate reduction and power consumption reduction of the proposed technique compared to its counterparts, and shows that the proposed solution is an attractive candidate for embedding in new-generation EEG wearables.
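The level-crossing principle behind this event-driven acquisition can be sketched in a few lines: a sample is kept only when the signal has moved by at least a fixed quantum since the last kept sample, so quiet segments generate few events and active segments generate many. The threshold value and test signal are illustrative assumptions.

```python
import numpy as np

def level_crossing_sample(x, delta):
    """Keep a sample only when the signal moves >= delta from the last
    kept value -- the event-driven idea behind adaptive-rate acquisition."""
    kept = [0]
    last = x[0]
    for i in range(1, len(x)):
        if abs(x[i] - last) >= delta:
            kept.append(i)
            last = x[i]
    return np.array(kept)

# A signal that is flat, then active: nearly all kept samples come from
# the active half, which is what yields the compression gain.
t = np.linspace(0, 1, 1000)
x = np.where(t < 0.5, 0.0, np.sin(2 * np.pi * 25 * t))
idx = level_crossing_sample(x, delta=0.1)
```
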
ARTICLE | doi:10.20944/preprints201808.0517.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: fractal dimension; surface defect identification; adaptive fractal filtering; edge extraction; image denoising
Online: 30 August 2018 (05:53:25 CEST)
In addition to image filtering in the spatial and frequency domains, algorithms based on fractal characteristics offer considerable flexibility in the design and implementation of image processing solutions in areas such as image enhancement, image restoration, image data compression and a spectrum of applications of practical interest. Facing the real-world problem of identifying workpiece surface defects, a generic adaptive fractal filtering algorithm is proposed, which shows advantages in target recognition, feature extraction and image denoising at multiple scales. First, we reveal the physical principles relating signal SNR to the parameters indicative of its fractal dimension, validating that the fractal dimension can be used to adaptively obtain image features. Second, an adaptive fractal filtering algorithm (AFFA) is proposed according to the identified correlation between image fractal dimensions and object scales, and it is verified on a benchmark image processing case study. Third, the proposed fractal filtering algorithm is used to identify surface defects on a flange workpiece. Compared with conventional image processing algorithms, the proposed algorithm shows superior computational simplicity and better performance. Numerical analysis and engineering case studies show that the fractal dimension is eligible for deriving an adaptive filtering algorithm for diverse-scale object identification, and that the proposed AFFA is feasible for general application in workpiece surface defect detection.
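The fractal dimension that drives this kind of adaptive filtering is commonly estimated by box counting; a minimal numpy version follows. This is a generic estimator, not the AFFA algorithm itself; the dyadic box sizes and the filled-square test image are assumptions for illustration.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count occupied boxes N(s) at dyadic box sizes s, then fit the slope
    of log N(s) against log(1/s)."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        m = n - n % s                        # crop to a multiple of s
        S = mask[:m, :m].reshape(m // s, s, -1, s)
        occupied = S.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))      # guard against log(0)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square is area-like, so its estimated dimension approaches 2;
# thin edge structures would score closer to 1, which is the contrast an
# adaptive fractal filter exploits to separate scales.
img = np.zeros((64, 64), dtype=bool)
img[16:48, 16:48] = True
d_filled = box_counting_dimension(img)
```
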
ARTICLE | doi:10.20944/preprints202211.0124.v1
Subject: Engineering, Control & Systems Engineering Keywords: Intelligent archive repository; archive access robot; complex digital filtering; map matrix; navigational mobility
Online: 7 November 2022 (12:23:04 CET)
By constructing a static map, a library robot can navigate and access documents autonomously, which greatly improves efficiency. However, in libraries where shelves can be moved, map changes prevent direct navigation, and sensors such as radar and cameras are relatively expensive. We therefore propose a low-cost navigation algorithm based on real-time ranging data and map matrices, and design a mobile archive access robot. To obtain real-time distance data, nine near/long-range sensors were mounted on the chassis, and a composite digital filtering algorithm was designed according to the characteristics of the different moving areas. The access task matrix and map matrix were then designed based on the archive access tasks and the placement of the archive shelves; the robot can rely on the range data to update the map matrix while moving, complete its own positioning, and use the task matrix to perform autonomous navigation and access to multiple files. Experiments show that the filtered positioning accuracy can reach 1 cm and that the robot can move to the target shelf autonomously, making the approach more practical and less costly.
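The paper does not spell out its composite digital filter, but a common composite for noisy range sensors chains a median stage (to reject impulsive spikes from stray reflections) with an exponential moving average (to smooth residual jitter). The sketch below is a hypothetical stand-in under that assumption; window size, smoothing factor, and the 50 cm test signal are all illustrative.

```python
import numpy as np

def composite_filter(r, median_win=5, ema_alpha=0.3):
    """Hypothetical composite digital filter for range readings:
    median filter removes impulsive spikes, then an exponential
    moving average smooths residual jitter."""
    half = median_win // 2
    padded = np.pad(r, half, mode="edge")
    med = np.array([np.median(padded[i:i + median_win])
                    for i in range(len(r))])
    out = np.empty_like(med)
    out[0] = med[0]
    for i in range(1, len(med)):
        out[i] = ema_alpha * med[i] + (1 - ema_alpha) * out[i - 1]
    return out

# A steady ~50 cm reading with jitter and one impulsive spike (e.g., a
# specular reflection): the spike is removed, the jitter is damped.
rng = np.random.default_rng(2)
raw = 50.0 + rng.normal(0, 0.5, 100)
raw[40] = 120.0                        # impulsive outlier
filt = composite_filter(raw)
```
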
ARTICLE | doi:10.20944/preprints202105.0468.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: 3D reconstruction; ICP; Azure Kinect; RGB-D image processing; point cloud filtering; rapeseed
Online: 20 May 2021 (09:52:14 CEST)
The 3D reconstruction method using an RGB-D camera offers a good balance of hardware cost, point cloud quality and automation. However, due to limitations of the inherent structure and imaging principle, the acquired point cloud suffers from problems such as heavy noise and difficult registration. This paper proposes a three-dimensional reconstruction method using the Azure Kinect to address these inherent problems. A color map, depth map and near-infrared image of the target are captured from six perspectives by the Azure Kinect sensor. The binarized 8-bit infrared image is multiplied with the general RGB-D image alignment result provided by Microsoft to remove ghost images and most of the background noise. To filter the floating-point and outlier noise of the point cloud, a neighborhood maximum filtering method is proposed to filter out abrupt points in the depth map; the floating points are thus removed before the point cloud is generated, and a pass-through filter then removes the outlier noise. To address the shortcomings of the classic ICP algorithm, an improved method is proposed: by continuously reducing the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of each view are registered in three successive passes until the complete color point cloud is obtained. Extensive experiments on rape plants show that the point cloud accuracy obtained by this method is 0.739 mm, a complete scan takes 338.4 seconds, and the color reproduction is high. Compared with a laser scanner, the proposed method has comparable reconstruction accuracy and a significantly faster reconstruction speed, while the hardware cost is much lower and the scanning system is easy to automate. This research demonstrates a low-cost, high-precision 3D reconstruction technology with the potential to be widely used for non-destructive measurement of crop phenotypes.
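A simple way to suppress abrupt "flying" points in a depth map, in the spirit of the neighborhood filtering described above, is to zero any pixel that differs from every one of its 8 neighbours by more than a tolerance. This is an illustrative stand-in for the paper's neighborhood maximum filtering, not its exact formulation; the window, threshold, and synthetic depth values are assumptions.

```python
import numpy as np

def filter_abrupt_depth(depth, thresh=30.0):
    """Zero depth pixels (in mm) that differ from every 8-neighbour by
    more than `thresh` -- isolated abrupt points become invalid before
    the point cloud is generated."""
    H, W = depth.shape
    p = np.pad(depth, 1, mode="edge")
    close = np.zeros((H, W), dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            nb = p[1 + di:1 + di + H, 1 + dj:1 + dj + W]
            close |= np.abs(nb - depth) <= thresh
    out = depth.copy()
    out[~close] = 0.0                    # 0 depth = invalid pixel
    return out

# Smooth plane at ~800 mm with one floating point well in front of it.
depth = np.full((16, 16), 800.0)
depth[8, 8] = 500.0                      # flying pixel
filtered = filter_abrupt_depth(depth)
```
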
ARTICLE | doi:10.20944/preprints202208.0389.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Numerical weather prediction; Time integration; Filtering; Laplace transform; semi-implicit; semi-Lagrangian; Forecast accuracy
Online: 23 August 2022 (03:13:59 CEST)
A time integration scheme based on semi-Lagrangian advection and Laplace transform adjustment has been implemented in a baroclinic primitive equation model. The semi-Lagrangian scheme makes it possible to use large time steps. However, errors arising from the semi-implicit scheme increase with the time step size. In contrast, the errors using the Laplace transform adjustment remain relatively small for typical time steps used with semi-Lagrangian advection. Numerical experiments confirm the superior performance of the Laplace transform scheme relative to the semi-implicit reference model. The algorithmic complexity of the scheme is comparable to the reference model, making it computationally competitive, and indicating its potential for integrating weather and climate prediction models.
ARTICLE | doi:10.20944/preprints202105.0118.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Applied Geophysics; Digital Signal Processing; Enhancement of GPR Datasets; Clutter Noise Removal; Spectral Filtering
Online: 6 May 2021 (17:12:05 CEST)
Usually, in ground-penetrating radar (GPR) datasets the user defines the limits between the useful signal and the noise through standard filtering, to isolate the effective signal as much as possible. However, there are true reflections that mask the coherent reflectors and can therefore be considered noise. In archaeological sites these clutter reflections are caused by scattering from subsurface elements (e.g., isolated masonry, ceramic objects and archaeological collapses). Their elimination is difficult because their wavelet parameters are similar to those of coherent reflections, and there is a risk of creating artifacts. In this study a procedure for filtering the clutter reflection noise (CRN) from GPR datasets is presented. The CRN filter is a singular value decomposition (SVD) based method applied in the 2D spectral domain. The CRN filtering was tested on a dataset obtained in a controlled laboratory environment, to establish mathematical control of the algorithm. It has also been applied to a 3D-GPR dataset acquired at the Roman villa of Horta da Torre (Fronteira, Portugal), an uncontrolled environment. The results show an increase in the quality of the archaeological GPR planimetry, which was verified via archaeological excavation.
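The core idea of SVD-based clutter removal can be sketched on a synthetic B-scan. Note the simplification: the paper applies the SVD in the 2D spectral domain, whereas this sketch applies it directly to the space-time B-scan, where laterally coherent clutter concentrates in the most energetic singular components; zeroing the dominant one keeps the weaker localized target. All amplitudes and geometry here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic B-scan: rows = time samples, columns = traces.  Clutter is a
# horizontally coherent band; the target is a weaker localized event.
nt, nx = 128, 64
bscan = np.zeros((nt, nx))
bscan[30:34, :] += 5.0                    # coherent clutter band
bscan[70:74, 20:28] += 1.5                # localized target reflection
bscan += rng.normal(0, 0.05, (nt, nx))    # ambient noise

# SVD-based clutter removal: the laterally coherent clutter dominates the
# first singular component, so zeroing it suppresses the clutter while
# leaving the localized target largely intact.
U, S, Vt = np.linalg.svd(bscan, full_matrices=False)
S_f = S.copy()
S_f[0] = 0.0                              # drop the dominant component
filtered = (U * S_f) @ Vt
```
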
ARTICLE | doi:10.3390/sci2020039
Keywords: ensemble empirical mode decomposition (EEMD); denoising; mode mixing; electromyographic (EMG) signals; filtering; wavelet method
Online: 3 June 2020 (00:00:00 CEST)
One of the most basic pieces of information gained from dynamic electromyography is accurately defining muscle action and phase timing within the gait cycle. The human gait relies on selective timing and the intensity of appropriate muscle activations for stability, loading, and progression over the supporting foot during stance, and further to advance the limb in the swing phase. A common clinical practice is utilizing a low-pass filter to denoise integrated electromyogram (EMG) signals and to determine onset and cessation events using a predefined threshold. However, the accuracy of defining the period of significant muscle activation via EMG varies with the temporal shift involved in filtering the signals; thus, a low-pass filtering method with a fixed order and cut-off frequency will introduce a time delay depending on the frequency of the signal. In order to precisely identify muscle activation and to determine the onset and cessation times of the muscles, we explore onset and cessation epochs with denoised EMG signals using different filter banks: the wavelet method, the empirical mode decomposition (EMD) method, and the ensemble empirical mode decomposition (EEMD) method. In this study, gastrocnemius muscle onset and cessation were determined in sixteen participants within two different age groups and under two different walking conditions. Low-pass filtering of integrated EMG (iEMG) signals resulted in premature onset (28% stance duration) in younger and delayed onset (38% stance duration) in older participants, showing the time-delay problem involved in this filtering method. Comparatively, the wavelet denoising approach detected onset for normal walking events most precisely, whereas the EEMD method showed the smallest onset deviation. In addition, EEMD-denoised signals could further detect pre-activation onsets during a fast walking condition. A comprehensive comparison of denoising EMG signals using EMD, EEMD, and wavelet denoising is discussed, in order to accurately define muscle onset under different walking conditions.
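The threshold-based onset/cessation detection that all of these denoising front-ends feed into can be sketched as follows. The denoising stage (wavelet/EMD/EEMD) is deliberately omitted; the moving-RMS envelope, the baseline-multiple threshold, and the synthetic burst are assumptions for illustration.

```python
import numpy as np

def onset_cessation(emg, fs, win_ms=50, k=3.0, baseline_n=200):
    """Detect muscle onset/cessation as the first/last crossing of a
    moving-RMS envelope above k times the resting-baseline level.
    (The wavelet/EMD/EEMD denoising stage is omitted in this sketch.)"""
    win = max(1, int(fs * win_ms / 1000))
    sq = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
    env = np.sqrt(sq)
    thr = k * env[:baseline_n].mean()     # baseline assumed at rest
    active = np.flatnonzero(env > thr)
    return (active[0], active[-1]) if active.size else (None, None)

# Synthetic iEMG-like signal: quiet baseline, one activation burst, quiet.
rng = np.random.default_rng(4)
fs = 1000
emg = rng.normal(0, 0.05, 2000)
emg[800:1400] += rng.normal(0, 0.6, 600)  # muscle burst at 0.8-1.4 s
onset, cessation = onset_cessation(emg, fs)
```

The ~25 ms lead of the detected onset relative to the true burst start illustrates exactly the temporal-shift issue the abstract describes for fixed filters.
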
ARTICLE | doi:10.20944/preprints201708.0031.v1
Keywords: cantilever; NiCr strain gauge; biosensor; liposome; amyloid beta; aggregation; fibrillization; interaction; human serum; digital low-pass filtering procedure
Online: 8 August 2017 (09:57:28 CEST)
We have successfully measured amyloid beta (Aβ)(1-40) protein added to human serum using a NiCr strain gauge cantilever biosensor immobilized with liposomes incorporating cholesterol. Importantly, we investigated the effect of incorporating cholesterol into the liposome in order to suppress the interaction between the liposome and the many different proteins present in human serum. It was revealed that incorporating cholesterol suppresses the interaction between the liposome and serum proteins other than Aβ. Finally, we detected Aβ(1-40) in human serum with the typical chronological behavior of Aβ aggregation and fibrillization. Furthermore, since a digital low-pass filtering procedure could reduce external noise, the cantilever sensor immobilized with cholesterol-incorporating liposomes can detect low concentrations of Aβ in human serum.
ARTICLE | doi:10.20944/preprints202107.0087.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Electric Vehicles; Stationary Battery Energy Storage System; Battery Automated System; Online State Estimation; Thermal Modeling; First-order model; Second-order Model; Kalman Filtering
Online: 5 July 2021 (10:11:31 CEST)
Estimation of core and surface temperature is one of the crucial functionalities of a lithium-ion Battery Management System (BMS) for providing effective thermal management, fault detection and operational safety. While it is impractical to measure core temperature using physical sensors, implementing a complex estimation strategy in an on-board low-cost BMS is challenging due to the high computational cost and cost of implementation. Typically, a temperature estimation scheme consists of a heat generation model and a heat transfer model. Several researchers have already proposed a range of thermal models with different levels of accuracy and complexity. Broadly, there are first-order and second-order heat capacitor-resistor thermal models of lithium-ion batteries (LIBs) for core and surface temperature estimation. This paper presents a detailed comparative study between these two models using extensive laboratory test data and simulation to assess their suitability for online prediction and on-board BMS use. The aim is to guide whether it is worth investing in developing a second-order model instead of a first-order model with respect to prediction accuracy, considering the modelling complexity, the experiments required and the computational cost. Both thermal models, along with the parameter estimation scheme, are modelled and simulated in the MATLAB/Simulink environment. The models are validated using laboratory test data from a cylindrical 18650 LIB cell. Further, a Kalman filter with appropriate process and measurement noise levels is used to estimate the core temperature from the measured surface and ambient temperatures. Results from the first-order and second-order models are analyzed for comparison.
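The Kalman-filter core-temperature estimation described above can be sketched with a lumped two-state (core/surface) capacitor-resistor model. All parameter values below are invented for illustration and are not fitted to any specific 18650 cell; the structure (measure surface, estimate core) is the point.

```python
import numpy as np

# Illustrative lumped thermal model: x = [T_core, T_surf] (deg C).
dt, Cc, Cs, Rc, Ru = 1.0, 60.0, 4.0, 2.0, 15.0   # assumed parameters
T_amb, Q = 25.0, 1.0                             # ambient (C), heat (W)

# Forward-Euler discrete state-space matrices.
A = np.eye(2) + dt * np.array([[-1/(Rc*Cc),            1/(Rc*Cc)],
                               [ 1/(Rc*Cs), -1/(Rc*Cs) - 1/(Ru*Cs)]])
Bu = dt * np.array([Q / Cc, T_amb / (Ru * Cs)])
Hm = np.array([[0.0, 1.0]])                      # only surface is measured

rng = np.random.default_rng(5)
x_true = np.array([25.0, 25.0])
x_hat, P = np.array([25.0, 25.0]), np.eye(2)
Qn, Rn = 1e-4 * np.eye(2), 0.05 ** 2             # assumed noise levels

for _ in range(600):
    x_true = A @ x_true + Bu + rng.normal(0, 1e-3, 2)  # truth propagation
    z = x_true[1] + rng.normal(0, 0.05)                # noisy surface reading
    # Kalman predict / update
    x_hat = A @ x_hat + Bu
    P = A @ P @ A.T + Qn
    K = P @ Hm.T / (Hm @ P @ Hm.T + Rn)
    x_hat = x_hat + (K * (z - Hm @ x_hat)).ravel()
    P = (np.eye(2) - K @ Hm) @ P

core_err = abs(x_hat[0] - x_true[0])   # unmeasured core tracked via surface
```
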