ARTICLE | doi:10.20944/preprints202012.0721.v1
Subject: Earth Sciences, Geoinformatics Keywords: Remote sensing; Global discrete grid; Accuracy evaluation; Hexagon grid
Online: 29 December 2020 (09:19:49 CET)
With the rapid development of earth observation, satellite navigation, mobile communication and other technologies, the volume of spatial data we acquire and accumulate keeps growing, placing ever higher demands on the storage and application of spatial data. Under these circumstances, a new form of spatial data organization has emerged: the global discrete grid. This form of data management supports the efficient storage and application of large-scale global spatial data; it is a digital, multi-resolution geo-referencing model that helps establish a new model of data association and fusion, and it is expected to make up for the shortcomings of current spatial data organization, processing and application. Grid systems can be classified by their division scheme into global discrete grids with equal latitude and longitude, global discrete grids with variable latitude and longitude, and global discrete grids based on regular polyhedrons. However, no accuracy evaluation index system yet exists for remote sensing images expressed on a global discrete grid. This paper is dedicated to finding a suitable way to express remote sensing data on discrete grids and to establishing an accuracy evaluation system for modeling remote sensing data on hexagonal grids. The results show that this evaluation method can assess remote sensing data based on hexagonal grids at multiple levels: the comprehensive similarity coefficient of the images before and after conversion is greater than 98%, which further demonstrates the usability of hexagonal grid-based representations of remote sensing images. Among the three sampling methods tested, nearest-neighbor interpolation produces the image with the highest correlation to the original.
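For illustration, a minimal Python sketch of nearest-neighbor sampling onto hexagonal cell centers, one of the three sampling strategies mentioned above. The lattice layout, the spacing, and the round-trip Pearson correlation used as a similarity proxy are assumptions of this sketch; the paper's actual grid construction and its comprehensive similarity coefficient are not specified here.

```python
import numpy as np
from scipy.spatial import cKDTree

def hex_centers(height, width, s=1.0):
    """Centers of a hexagonal lattice covering a (height, width) raster:
    rows are s*sqrt(3)/2 apart and odd rows are shifted by s/2."""
    row_pitch = s * np.sqrt(3) / 2.0
    rows, cols = int(height / row_pitch), int(width / s)
    r, c = np.mgrid[0:rows, 0:cols].astype(float)
    x = c * s + (r % 2) * (s / 2.0)        # horizontal offset on odd rows
    y = r * row_pitch
    return x, y

def sample_nearest(image, s=1.0):
    """Assign each hexagonal cell the value of its nearest raster pixel."""
    x, y = hex_centers(*image.shape, s=s)
    ix = np.clip(np.rint(x).astype(int), 0, image.shape[1] - 1)
    iy = np.clip(np.rint(y).astype(int), 0, image.shape[0] - 1)
    return image[iy, ix]

# Round trip raster -> hex -> raster, then correlate original and
# reconstructed rasters as a simple stand-in similarity measure.
img = np.random.rand(200, 200)
hexed = sample_nearest(img)
x, y = hex_centers(*img.shape)
tree = cKDTree(np.column_stack([x.ravel(), y.ravel()]))
yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
_, idx = tree.query(np.column_stack([xx.ravel(), yy.ravel()]))
recon = hexed.ravel()[idx].reshape(img.shape)
print(np.corrcoef(img.ravel(), recon.ravel())[0, 1])
```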
ARTICLE | doi:10.20944/preprints202111.0407.v1
Subject: Medicine & Pharmacology, Gastroenterology Keywords: HpSA; H. pylori; diagnostic values; sensitivity; specificity; accuracy; PPV; NPV
Online: 22 November 2021 (14:26:56 CET)
Helicobacter pylori is the most common human gastric infection. The H. pylori stool antigen lateral flow immunochromatography assay (HpSA-LFIA) is considered one of the most cost-effective and rapid non-invasive assays (active tests). Evaluating this test is crucial to ensure its accuracy and utility. This study aimed to evaluate the polyclonal antibody-based HpSA-LFIA in comparison with a monoclonal antibody-based ELISA kit. Methodology: Stool samples were collected from 200 gastric patients for HpSA-LFIA and semi-quantitative HpSA-ELISA. Statistical analysis of the diagnostic values was performed using MedCalc software. Chi-square tests were used to determine the effects of gender and age. Results: HpSA-LFIA achieved promising sensitivity (93.75%) and NPV (98.00%), but poor specificity (59.76%), PPV (31.25%), and accuracy (65.31%). LR+ and LR- were 2.33 and 0.1, respectively. Gender had no significant effect on the diagnostic parameters of HpSA-LFIA. Sensitivity did not differ meaningfully between age groups; however, specificity was significantly higher in patients over 45 years. Conclusion: HpSA-LFIA is not accurate enough to be the sole test for diagnosis and needs confirmatory tests in case of positive results.
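For reference, all of the diagnostic values reported above derive from a 2x2 contingency table against the reference test. A minimal Python sketch (the counts below are hypothetical, not the study's data):

```python
def diagnostic_values(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 table of counts."""
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    return sens, spec, ppv, npv, acc, lr_pos, lr_neg

# Hypothetical counts chosen only to illustrate the calculation
print(diagnostic_values(tp=45, fp=66, fn=3, tn=98))
```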
ARTICLE | doi:10.20944/preprints202201.0352.v1
Subject: Earth Sciences, Geoinformatics Keywords: Per-pixel classification confidence; spatial pattern; image classification; accuracy assessment; interpolation method
Online: 24 January 2022 (11:53:46 CET)
Obtaining classification confidence at the pixel level is a challenging task for accuracy assessment in remote sensing image classification. Among the various methods for estimating per-pixel classification confidence, interpolation-based methods have drawn special attention in the literature. Yet even though they are widely recognized, their usefulness has not been rigorously evaluated. This paper conducts a comprehensive evaluation of three interpolation-based methods: the local error matrix method, the bootstrap method, and the geostatistical method. We applied each of the three methods to three representative datasets with different spatial resolutions, spectral bands, and numbers of classes. We then derived the estimated and true classification confidence and compared the results using both exploratory data analysis (bi-histogram) and statistical analysis (Willmott's d and binned classification quality). The results indicate that the three interpolation methods provide some interesting insights on various aspects of estimating per-pixel classification confidence. Unfortunately, interpolation assumes that classification confidence varies smoothly across space, which is usually not true in practice. In other words, interpolation-based methods have limited practical use.
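Willmott's d, one of the two agreement statistics named above, has a standard closed form. A minimal Python sketch with hypothetical confidence values (the binned classification quality metric is not reproduced here):

```python
import numpy as np

def willmott_d(pred, obs):
    """Willmott's index of agreement d (1 = perfect agreement)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

# Estimated vs "true" per-pixel confidence (hypothetical values)
est = np.array([0.9, 0.7, 0.8, 0.6, 0.95])
true = np.array([1.0, 0.6, 0.9, 0.5, 1.0])
print(willmott_d(est, true))
```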
ARTICLE | doi:10.20944/preprints201906.0036.v1
Subject: Earth Sciences, Geoinformatics Keywords: digital elevation models; multi-source fusion; multi-scale fusion; global evaluation; accuracy validation.
Online: 5 June 2019 (10:26:30 CEST)
The quality of digital elevation models (DEMs) is inevitably affected by the limitations of the imaging modes and the generation methods. One effective way to address this problem is to merge the available datasets through data fusion. In this paper, a fusion-based global DEM dataset (82°S-82°N) is introduced, which we refer to as GSDEM-30. This is a 30-m DEM mainly reconstructed from the unfilled SRTM1, AW3D30, and ASTER GDEM v2 datasets by combining multi-source and multi-scale fusion techniques. A comprehensive evaluation of the GSDEM-30 data, as well as the 30-m ASTER GDEM v2 and AW3D30 DEM, is presented. Global ICESat GLAS data and the local National Elevation Dataset (NED) were used as the reference for the vertical accuracy validation, while GlobeLand30 was introduced for the landscape analysis. Furthermore, we employed the maximum slope approach to detect potential artefacts in the DEMs. The results show that the GDEM data are seriously affected by noise and artefacts. With the advantage of the multiple input datasets and the refined post-processing, GSDEM-30 is contaminated with fewer anomalies than both ASTER GDEM and AW3D30. The fusion techniques used here can also be applied to the reconstruction of other fused DEM datasets.
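The maximum slope approach mentioned above flags cells whose steepest gradient toward any of the eight neighbours is implausibly large. A minimal Python sketch, assuming a 30-m cell size and a hypothetical 60° threshold (the paper's exact threshold is not stated here):

```python
import numpy as np

def max_slope(dem, cellsize=30.0):
    """Maximum slope (degrees) of each cell toward its 8 neighbours."""
    pads = np.pad(dem.astype(float), 1, mode="edge")
    steps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    best = np.zeros_like(dem, dtype=float)
    for dy, dx in steps:
        neigh = pads[1 + dy:1 + dy + dem.shape[0], 1 + dx:1 + dx + dem.shape[1]]
        dist = cellsize * np.hypot(dy, dx)        # diagonal neighbours are farther
        best = np.maximum(best, np.abs(dem - neigh) / dist)
    return np.degrees(np.arctan(best))

dem = np.random.rand(100, 100) * 5
dem[50, 50] += 500                                # spike mimicking a DEM artefact
artefacts = max_slope(dem) > 60                   # hypothetical threshold
print(artefacts.sum())
```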
ARTICLE | doi:10.20944/preprints201812.0067.v1
Subject: Earth Sciences, Environmental Sciences Keywords: built-up area; classification; Landsat 8- OLI; feature engineering; feature learning; CNN; accuracy evaluation
Online: 5 December 2018 (12:06:34 CET)
Detailed built-up area information is valuable for mapping complex urban environments. Although a large number of classification algorithms for built-up areas have been developed, they are rarely tested from the perspective of feature engineering and feature learning. We therefore launched an investigation to provide a full test of OLI imagery for 15-m resolution built-up area classification in 2015 in Beijing, China. Training a classifier requires many sample points, so we propose a method based on ESA's 38-m global built-up area data of 2014, OpenStreetMap and MOD13Q1-NDVI to generate a large number of sample points rapidly and automatically. Our aim is to examine the influence of single pixels and image patches under traditional feature engineering and modern feature learning strategies. In feature engineering, we consider spectra, shape and texture as the input features, and SVM, random forest (RF) and AdaBoost as the classification algorithms. In feature learning, the convolutional neural network (CNN) is used as the classification algorithm. In total, 26 built-up land cover maps were produced. Experimental results show that: (1) approaches based on feature learning are generally better than those based on feature engineering in terms of classification accuracy, and the performance of ensemble classifiers, e.g. RF, is comparable to that of CNN; the two-dimensional CNN and the 7-neighborhood RF have the highest classification accuracy of nearly 91%. (2) Overall, the classification effect and accuracy based on image patches are better than those based on single pixels. Features that highlight the information of the target category (for example, PanTex and EMBI) can help improve classification accuracy.
ARTICLE | doi:10.20944/preprints201711.0006.v1
Subject: Behavioral Sciences, Other Keywords: hypothetics; enothetics; reliability; validity; accuracy
Online: 1 November 2017 (04:56:55 CET)
The purpose of this article is to assess the reliability and accuracy (validity) of hypothetical binary tasting judgments in an enological framework. The heuristic model that is utilized allows for the control of a wide array of variables that would be exceedingly difficult to fully control in the typical enological investigation. It is shown that results judged to be enologically significant are uniformly judged to be statistically significant as well, whether the level of wine taster agreement is set at 70% (Fair), 80% (Good), or 90% (Excellent). However, in a number of instances, results that were statistically significant were not enologically significant by widely accepted standards. This finding is consistent with the bio-statistical fact that, given a sufficiently large sample size, even the most trivial of results will prove to be statistically significant. Consistent with expectations, multiple patterns of 80% (Good) and 90% (Excellent) agreement tended to be both statistically and enologically significant.
ARTICLE | doi:10.20944/preprints201704.0159.v1
Online: 25 April 2017 (11:19:25 CEST)
YG-13A represents the highest level of Chinese SAR satellites to date. In this paper, we report on experiments conducted to improve and validate the ranging accuracy of YG-13A. We analyze the error sources in the YG-13A ranging system, such as atmospheric path delay and transceiver channel delay. A real-time atmospheric delay correction model is established to calculate the atmospheric path delay, considering both the troposphere and ionosphere delays. Six corner reflectors (CRs) were set up to ensure the accuracy of the validation methods. Pixel location accuracies of up to 0.479-m standard deviation can be achieved after a complete calibration. We further demonstrate that the adjustment of the CRs can cause a marginal loss of ranging precision; after eliminating this error, the ranging accuracy improves to 0.237 m. YG-13A uses a single-frequency GPS receiver with a nominal orbital accuracy of 0.3 m, which is the biggest factor restricting its ranging accuracy. Our results show that YG-13A can achieve decimeter-level ranging accuracy, below the centimeter-level accuracy of TerraSAR-X, which carries a dual-frequency GPS receiver. Nevertheless, YG-13A offers great convenience in accessing control points and locating targets without depending on ground equipment.
ARTICLE | doi:10.20944/preprints202108.0140.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: K-Mean, Mean-Shift, Performance, Accuracy
Online: 5 August 2021 (11:00:32 CEST)
Clustering, also known as cluster analysis, is a learning problem that takes place without human supervision. The technique is widely and efficiently used in data analysis to observe and identify interesting, useful, or desired patterns in data. Clustering performs a structured division of the data into groups of similar objects based on the characteristics it identifies; each resulting group is called a cluster. A cluster consists of objects that are similar to the other objects in the same cluster and differ from the objects in other clusters. Clustering is significant in many aspects of data analysis, as it determines and presents the intrinsic grouping of objects in a batch of unlabeled raw data, based on their attributes. No textbook criterion for good clustering exists, because the process is highly customizable to each user's needs: there is no outright best clustering algorithm, as the choice depends massively on the user's scenario. This paper compares and studies two clustering algorithms, k-means and mean shift, according to the following factors: time complexity, training, prediction performance, and accuracy.
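A minimal Python sketch of this kind of comparison, using scikit-learn's KMeans and MeanShift on synthetic blobs; the dataset, the timing protocol, and the adjusted Rand index used as the accuracy measure are assumptions of this sketch, not the paper's setup:

```python
import time
import numpy as np
from sklearn.cluster import KMeans, MeanShift
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=2000, centers=4, cluster_std=1.0, random_state=0)

for name, model in [("k-means", KMeans(n_clusters=4, n_init=10, random_state=0)),
                    ("mean shift", MeanShift())]:
    t0 = time.perf_counter()
    labels = model.fit_predict(X)        # training + prediction in one step
    elapsed = time.perf_counter() - t0
    # Adjusted Rand index: label-permutation-invariant agreement with ground truth
    print(f"{name}: {elapsed:.3f}s, ARI={adjusted_rand_score(y, labels):.3f}")
```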
ARTICLE | doi:10.20944/preprints201709.0139.v1
Online: 27 September 2017 (16:45:25 CEST)
Object-Based Image Analysis (OBIA) has been successfully used to map slums. In general, the occurrence of uncertainties in producing geographic data is inevitable. However, most studies have concentrated solely on assessing classification accuracy while neglecting the inherent uncertainties. Our research analyses the impact of uncertainties in measuring the accuracy of OBIA-based slum detection. We selected Jakarta as our case study area because a national policy of slum eradication is causing rapid changes in slum areas. Our research comprises four parts: slum conceptualization, ruleset development, implementation, and accuracy and uncertainty measurement. Existential and extensional uncertainty arise when producing reference data. Comparing manual expert delineations of slums with the OBIA slum classification yields four combinations: True Positive, False Positive, True Negative and False Negative. However, the higher the True Positive rate (which leads to better accuracy), the lower the certainty of the results, demonstrating the impact of extensional uncertainties. Our study also demonstrates the role of non-observable indicators (i.e., land tenure) in assisting slum detection, particularly in areas where uncertainties exist. In conclusion, uncertainty increases when aiming to achieve higher classification accuracy by matching manual delineation and OBIA classification.
ARTICLE | doi:10.20944/preprints202012.0240.v1
Subject: Materials Science, Biomaterials Keywords: intraoral scanner; orthodontic bracket; accuracy; precision; trueness
Online: 10 December 2020 (08:17:43 CET)
Accurate expression of the bracket prescription is important for successful orthodontic treatment. The aim of this study was to evaluate the accuracy of digital scan images of brackets produced by four different intraoral scanners (IOSs) in terms of the height, position, and angle of the bracket slot when scanning the surface of a dental model fitted with brackets of different material compositions. Brackets made from stainless steel, polycrystalline alumina, composite, and composite with a stainless steel slot were considered, each scanned with four different IOSs (Primescan, Trios, CS3600 and i500). SEM images were used as references. Each bracket axis was set in the reference scan image, the axis was set identically by superimposition with the IOS image, and then only the brackets were segmented and analyzed. The difference between the manufacturer's nominal torque and the bracket slot base angle was 0.39 in SEM, 1.96 in Primescan, 2.04 in Trios, and 5.21 in CS3600 (p < 0.001). The parallelism, i.e. the difference between the upper and lower angles of the slot wall, was 0.55 in SEM, 7.55 in Primescan, 6.74 in Trios, 6.59 in CS3600, and 24.95 in i500 (p < 0.001). This study evaluated the accuracy of the bracket only, and it must be acknowledged that some error generally remains in recognizing slots through scanning.
BRIEF REPORT | doi:10.20944/preprints202005.0193.v1
Online: 11 May 2020 (12:35:08 CEST)
Introduction: Clinicians have been struggling with the optimal diagnostic approach for patients with suspected COVID-19. We evaluated the added value of chest CT over RT-PCR alone. Methods: Consecutive adult patients with suspected COVID-19 presenting to the emergency department (Academic Medical Center, Amsterdam University Medical Centers, the Netherlands) from March 16th to April 16th were retrospectively included if they required hospital admission and underwent chest CT and RT-PCR testing for SARS-CoV-2 infection. The CO-RADS classification was used to assess the radiological probability of COVID-19, where a score of 1-2 was considered as negative, 3 as indeterminate, and 4-5 as positive. CT results were stratified by initial RT-PCR results. For patients with a negative RT-PCR but a positive CT, serology or multidisciplinary discussion after clinical follow-up constituted the final diagnosis. Results: 258 patients with suspected COVID-19 were admitted, of whom 239 were included because they had both CT and RT-PCR testing upon admission. Overall, 112 patients (46.9%) had a positive initial RT-PCR, and 14 (5.9%) had a positive repeat RT-PCR. Of 127 patients with a negative or indeterminate initial RT-PCR, 38 (29.9% [95%CI 21.3-39.3%]) had a positive CT. Of these, 13 had a positive RT-PCR upon repeat testing, and 5 had positive serology. The remaining 20 patients were assessed in a multidisciplinary consensus meeting, and for 13 it was concluded that COVID-19 was ‘very likely’. Of 112 patients with a positive initial RT-PCR result, CT was positive in 104 (92.9% [95%CI 89.3-97.5%]). Conclusion: In a high-prevalence emergency department setting, chest CT showed high probability of COVID-19 (CO-RADS 4-5) in 29.9% of patients with a negative or indeterminate initial RT-PCR result. As the majority of these patients had proven or ‘very likely’ COVID-19 after follow-up, we believe that CT helps in the identification of patients who should be admitted in isolation.
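The binomial confidence intervals quoted above can be reproduced approximately with a standard score interval. A minimal Python sketch using the Wilson method (the paper's exact CI method is not stated, so the bounds differ slightly from those reported):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# 38 of 127 RT-PCR-negative/indeterminate patients had a positive CT
print(wilson_ci(38, 127))   # approx (0.226, 0.384), close to the reported interval
```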
ARTICLE | doi:10.20944/preprints202207.0395.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: Waste Recycling System; Disaster Response; Network; Cognitive Accuracy
Online: 26 July 2022 (08:06:26 CEST)
Since the process of waste recycling generates dust and flammable gas during fragmentation, there is always a risk of fire, and fires do, in fact, frequently occur. However, research on disaster management at recycling facilities deals only with processing systems from a technical point of view and does not suggest concrete alternatives from a management point of view. Therefore, in this study, we analyzed the influence of the disaster response network of a waste recycling center at the organizational level, based on the concept of the cognitive accuracy of a network, while considering administrative aspects. Through this analysis, we confirmed that the factors affecting network influence differ between the network as a whole and the networks at different position levels. We suggest that disaster response can be improved by deploying members who perform formal tasks at the center of the network, so that all members can agree on a common approach.
ARTICLE | doi:10.20944/preprints202111.0217.v1
Subject: Medicine & Pharmacology, Other Keywords: Diabetes Technology; CGM; Accuracy; Type 1 Diabetes; Sustainability
Online: 12 November 2021 (11:58:57 CET)
The aim of this study was to evaluate the accuracy and usability of a novel continuous glucose monitoring (CGM) system designed for needle-free insertion and reduced environmental impact. We assessed the sensor performance of two GlucoMen® Day CGM systems worn simultaneously by eight participants with type 1 diabetes. Self-monitoring of blood glucose (SMBG) was performed regularly over 14 days at home. Participants underwent two standardized 5-hour meal challenges with frequent plasma glucose (PG) measurements using a laboratory reference instrument at the research center. Comparing CGM to PG, the overall mean absolute relative difference (MARD) was 9.7 [2.6-14.6]%. The overall MARD of CGM vs SMBG was 13.1 [3.5-18.6]%. In the consensus error grid (CEG) analysis, 98% of both CGM/PG and CGM/SMBG pairs were in the clinically acceptable zones A and B. The analysis confirms that GlucoMen® Day CGM meets the clinical requirements for state-of-the-art CGM. The needle-free insertion technology is well tolerated by users and reduces medical waste compared to conventional CGM systems.
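MARD, the headline metric here, is simply the mean of absolute relative differences between paired sensor and reference readings. A minimal Python sketch with hypothetical glucose pairs:

```python
import numpy as np

def mard(cgm, ref):
    """Mean absolute relative difference (%) between CGM and reference glucose."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return 100.0 * np.mean(np.abs(cgm - ref) / ref)

# Hypothetical paired readings in mg/dL
cgm = np.array([110, 145, 98, 180, 75])
ref = np.array([118, 139, 105, 171, 82])
print(f"MARD = {mard(cgm, ref):.1f}%")
```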
ARTICLE | doi:10.20944/preprints202106.0738.v1
Subject: Earth Sciences, Atmospheric Science Keywords: time series; homogenization; ACMANT; observed data; data accuracy
Online: 30 June 2021 (13:08:39 CEST)
The removal of non-climatic biases, so-called inhomogeneities, from long climatic records requires sophisticated statistical methods. One principle is that the differences between a candidate series and its neighbour series are usually analysed rather than the candidate series directly, in order to neutralize the possible impact of regionally common natural climate variation on the detection of inhomogeneities. Most homogenization methods apply one of two main kinds of time series comparison: composite reference series or pairwise comparisons. In composite reference series, the inhomogeneities of the neighbour series are attenuated by averaging the individual series, and the accuracy of homogenization can be improved by iteratively refining the composite reference series. By contrast, pairwise comparisons have the advantage that coincidental inhomogeneities affecting several station series in a similar way can be identified with higher certainty than with composite reference series, and homogenization with pairwise comparisons tends to yield the most accurate regional trend estimates. A new time series comparison method is presented here, which combines pairwise comparisons and composite reference series so that their advantages are unified. This comparison method is embedded in the ACMANT homogenization method and tested on large, commonly available monthly temperature test datasets.
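A minimal Python sketch of the two comparison schemes being combined: a composite reference series (weighted neighbour average subtracted from the candidate) and pairwise difference series. ACMANT's actual weighting, iteration, and break detection are far more elaborate; this only illustrates the constructions named above:

```python
import numpy as np

def difference_series(candidate, neighbours, weights=None):
    """Candidate minus a composite reference (weighted mean of neighbour series).
    Breaks in the result are attributed to the candidate's inhomogeneities."""
    ref = np.average(np.asarray(neighbours, float), axis=0, weights=weights)
    return np.asarray(candidate, float) - ref

def pairwise_differences(series):
    """All pairwise difference series; breaks shared by several pairs
    point to the station they have in common."""
    n = len(series)
    return {(i, j): np.asarray(series[i], float) - np.asarray(series[j], float)
            for i in range(n) for j in range(i + 1, n)}
```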
Subject: Earth Sciences, Geoinformatics Keywords: Drone; GNSS RTK; UAV; photogrammetry; precision; accuracy; elevation
Online: 11 March 2021 (11:49:25 CET)
Georeferencing using ground control points (GCPs) is the most common strategy in photogrammetric modeling with UAV-acquired imagery. With the increased availability of UAVs with onboard GNSS RTK, however, georeferencing without GCPs is a promising alternative, although systematic elevation error remains a problem with this technique. We aimed to analyze the causes of this systematic error and propose strategies for its elimination. Multiple flights differing in flight altitude and image acquisition axis were performed at two real-world sites. A flight height of 100 m with a vertical (nadiral) image acquisition axis was considered primary, supplemented with flights at altitudes of 75 m and 125 m with a vertical image acquisition axis and two flights at 100 m with oblique image acquisition axes (30° and 15°). Each of these flights was performed twice to produce a full double grid. Models were calculated from individual flights and their combinations. Individual flights, and even some combinations, yielded systematic elevation errors of up to several decimeters. This error was linearly dependent on the deviation of the focal length from the reference value. A combination of two flights from the same altitude (with nadiral and oblique image acquisition) reduced the systematic elevation error to less than 0.03 m. This study is the first to demonstrate the linear dependence between the systematic elevation error of models based only on onboard GNSS RTK data and the deviation in the determined internal orientation parameters (focal length). In addition, we have shown that a combination of two flights with different image acquisition axes can eliminate this systematic error even in real-world conditions, making georeferencing without GCPs a feasible alternative to the use of GCPs.
ARTICLE | doi:10.20944/preprints201610.0038.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Face Recognition; Intelligent Coupling Algorithm; Robustness; Accuracy; Speed
Online: 11 October 2016 (14:42:02 CEST)
The key steps of face recognition are digital image preprocessing, facial feature extraction, and pattern recognition. To address the current problems of slow speed and low recognition accuracy in face recognition, this article starts from these three key steps and, on the basis of an analysis of the theories of the Fractional Differential Mask Operator (FDMO), Principal Component Analysis (PCA) and the Support Vector Machine (SVM), designs an FDMO+PCA+SVM coupling algorithm for face recognition that improves both speed and accuracy. To realize the FDMO+PCA+SVM coupling algorithm, we first apply the FDMO to binarize the face image and extract the face contour; we then apply PCA to extract features from the binarized image; finally, we apply the one-against-all SVM classifier and the LibSVM software package to perform the recognition. In addition, four groups of experimental results on the ORL face database, covering nine different coupling algorithm designs, verify by comparative analysis the superiority of the FDMO+PCA+SVM coupling algorithm in face recognition accuracy and speed.
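A minimal Python sketch of the PCA+SVM stages on the ORL-style Olivetti faces via scikit-learn (whose SVC wraps LibSVM). The FDMO contour-extraction step is omitted, and the component count and kernel are assumptions, so this illustrates the pipeline rather than the paper's algorithm:

```python
from sklearn.datasets import fetch_olivetti_faces    # ORL-style face set
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()                       # 400 images, 40 subjects
Xtr, Xte, ytr, yte = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# PCA compresses each face to its principal components ("eigenfaces"),
# then a multiclass SVM classifies the reduced vectors.
model = make_pipeline(PCA(n_components=60, whiten=True, random_state=0),
                      SVC(kernel="rbf"))
model.fit(Xtr, ytr)
print(model.score(Xte, yte))
```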
ARTICLE | doi:10.20944/preprints202109.0390.v1
Subject: Life Sciences, Biophysics Keywords: COVID-19; vibraimage; behavioral parameters; diagnosis accuracy; ANN; AI
Online: 22 September 2021 (16:28:12 CEST)
The COVID-19 pandemic has spread in waves for a year and a half, despite significant worldwide efforts, the development of biochemical diagnostic methods, and population vaccination. One reason for the spread of infection is the impossibility of early disease detection through biochemical diagnostics, since biochemical processes develop slowly in the body. At the same time, it is well known that the behavioral characteristics of a person, measured from reflex movements, allow near-instantaneous assessment of psychophysiological parameters. Vibraimage technology processes video of head micromovements by accumulating inter-frame differences and converting the spatial and temporal characteristics of the inter-frame difference into behavioral and psychophysiological parameters. Here we show that behavioral parameters measured by vibraimage change during COVID-19 infection. Characteristic changes in the behavioral parameters were identified by an artificial neural network trained on patients and controls. The best diagnostic accuracy (above 94%) was obtained using instantaneous values of behavioral parameters measured with the following vibraimage settings: a 10 Hz frequency of basic measurements, 25 inter-frame difference accumulations, and averaging of the diagnostic results over a period of at least 5 seconds. Diagnosing COVID-19 by behavioral parameters detected the disease earlier (by 5-7 days) than symptoms and positive biochemical RT-PCR tests. The proposed method indicates infected persons within 5 seconds of video processing using standard television cameras (web, IP) and computers, allows mass testing and self-testing, and could help stop the spread of the pandemic. We assume that head micromovement analysis for the diagnosis of various diseases is possible not only with vibraimage technology. Further research on human head micromovement analysis may help stop the COVID-19 pandemic and contribute to the development of new contactless and environmentally friendly methods for the early diagnosis of diseases.
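A minimal Python/OpenCV sketch of the core operation described, accumulating absolute inter-frame differences over 25 frames; how vibraimage converts the accumulated differences into behavioral parameters, and the ANN itself, are not specified here:

```python
import cv2
import numpy as np

def accumulate_interframe(video_path, n_acc=25):
    """Accumulate absolute inter-frame differences over n_acc frames,
    the basic operation described for vibraimage processing."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    assert ok, "could not read video"
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = np.zeros_like(prev)
    for _ in range(n_acc):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        acc += np.abs(gray - prev)       # per-pixel micromovement energy
        prev = gray
    cap.release()
    return acc   # spatial/temporal statistics of acc would feed the classifier
```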
ARTICLE | doi:10.20944/preprints202108.0190.v1
Keywords: Programming by Demonstration; Virtual Reality; Augmented Reality; Accuracy; Repeatability
Online: 9 August 2021 (10:42:14 CEST)
Augmented and Virtual Reality have experienced rapid growth in recent years, but deep knowledge of their capabilities, and of where they can best be applied, is still lacking. This paper therefore presents a study of the accuracy and repeatability of Microsoft's HoloLens 2 (an Augmented Reality device) and the HTC Vive (a Virtual Reality device), using an OptiTrack system as ground truth. For the HoloLens 2, hand tracking was used, while for the HTC Vive the tracked object was the system's hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive achieved results at the millimetre scale, while the HoloLens 2 produced less accurate measurements (around 2 centimetres). Although the difference may seem considerable, the fact that the HoloLens 2 was tracking the user's hand rather than a dedicated controller had a large impact. The results are a significant step for the ongoing project of developing a human-robot interface for programming an industrial robot by demonstration using Extended Reality, which, based on these data, shows great potential to succeed.
REVIEW | doi:10.20944/preprints202107.0515.v2
Subject: Life Sciences, Biochemistry Keywords: Typhoid fever; Diagnostic; Metabolomics; Composite reference standard; Accuracy; Sensitivity.
Online: 29 July 2021 (13:28:33 CEST)
Typhoid fever is a major public health burden that causes substantial global morbidity and mortality due to the lack of decisive diagnostic protocols. The capacity of commonly used diagnostic tests to rule out typhoid fever is controversial. This study evaluates new techniques for typhoid diagnosis and proposes a harmonised, standardized composite reference to be adopted. Published peer-reviewed articles indexed in PubMed, MEDLINE and Google Scholar were reviewed for hospital-based studies. The review covers new typhoid diagnostic techniques such as proteomics, serology, rapid diagnostic tests (RDTs), transcriptomics, genomics, and metabolomics; 34.4% of the studies used a prospective study design. The results establish that the Widal test has moderate diagnostic accuracy, with an average sensitivity of 52.9%, specificity of 54%, positive predictive value (PPV) of 56.8%, and negative predictive value (NPV) of 55.6%, compared with 29.4%, 28%, 29.5%, and 27.8%, respectively, for Typhidot. The findings showed a statistically significant difference in sensitivity between Widal and Typhidot (t(40) = 2.639, p = 0.012, significant at p < 0.05) using an independent-sample t-test. When no perfect reference standard with optimal diagnostic accuracy exists, a harmonised, standardized composite reference is essential. Hence, this study recommends that peripheral blood culture, with an established sensitivity of 60%, and the Widal test, with an average sensitivity of 52.9%, be adopted as a consensus composite reference standard for typhoid fever diagnosis in order to improve confidence in prevalence estimates.
REVIEW | doi:10.20944/preprints202104.0373.v1
Subject: Medicine & Pharmacology, General Medical Research Keywords: mHealth devices; diagnosis; accuracy; sensitivity; specificity; sub-Saharan Africa
Online: 14 April 2021 (12:27:39 CEST)
Mobile health (mHealth) devices are emerging applications that could help deliver point-of-care (POC) diagnosis, particularly in settings with limited laboratory infrastructure, such as sub-Saharan Africa (SSA). The advent of the coronavirus pandemic has resulted in increased deployment and use of mHealth-linked POC diagnostics in SSA. We performed a systematic review and meta-analysis to evaluate the accuracy of mobile-linked point-of-care diagnostics in SSA, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We exhaustively searched PubMed, Science Direct, Google Scholar, MEDLINE, and CINAHL with full text via EBSCOhost databases from mHealth's inception to March 2021. The statistical analyses were conducted using OpenMeta-Analyst software. All 11 included studies were considered for the meta-analysis. The included studies focused on malaria infections, Schistosoma haematobium, Schistosoma mansoni, soil-transmitted helminths, and Trichuris trichiura. The pooled summary of sensitivity and specificity estimates was moderate compared to the gold reference standard. The overall pooled estimates of sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio of mobile-linked POC diagnostic devices were as follows: 0.499 (95% CI: 0.458-0.541); 0.535 (95% CI: 0.401-0.663); 0.952 (95% CI: 0.60-1.324); 1.381 (95% CI: 0.391-4.879); and 0.944 (95% CI: 0.579-1.538), respectively. The evidence shows that the diagnostic accuracy of mobile-linked POC diagnostics is presently moderate in detecting infections in sub-Saharan Africa. Future research should evaluate mHealth devices with excellent sensitivities and specificities for diagnosing diseases in this setting.
ARTICLE | doi:10.20944/preprints202104.0343.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Family violence; Machine Learning; Classification; ROC; Accuracy; COVID-19
Online: 13 April 2021 (10:51:20 CEST)
Bangladesh is a well-known developing country in Southern Asia. COVID-19 has created continual challenges there that extend beyond economic and health concerns to dangerous social problems such as family abuse. Since the inception of this epidemic, multiple social crimes have been looming, and remaining home during lockdown periods has increased divorce rates. This research presents a customized forecast of family violence during the COVID-19 outbreak using machine learning methods. We applied Random Forest, Logistic Regression, and Naive Bayes classifiers to predict family violence and examined feature importance. The performance of the classifiers was evaluated in terms of accuracy, precision, recall, and F-score. We employed an oversampling strategy, the synthetic minority oversampling technique (SMOTE), to solve the imbalance problem in our data, and compared the performance of the three models before and after balancing and normalizing the data. Finally, ROC analyses and confusion matrices were produced and analyzed using the augmented data. Our proposed system with the random forest classifier performed best, with 77% accuracy, compared with the other two machine learning classifiers.
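A minimal Python sketch of the SMOTE-then-random-forest pipeline described, using imbalanced-learn and scikit-learn on stand-in synthetic data (the survey features and class ratio are assumptions):

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in imbalanced data; the paper's survey features are not public
X, y = make_classification(n_samples=1000, weights=[0.85, 0.15], random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=1)

Xbal, ybal = SMOTE(random_state=1).fit_resample(Xtr, ytr)   # oversample minority class
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(Xbal, ybal)
print(classification_report(yte, clf.predict(Xte)))         # accuracy, precision, recall, F1
```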
ARTICLE | doi:10.20944/preprints201804.0209.v2
Subject: Mathematics & Computer Science, Other Keywords: plant phenotyping; noise filtering; binarization; accuracy evaluation; connected components
Online: 24 April 2018 (17:02:18 CEST)
Plants are a key part of the biological environment, sustaining human life and other creatures. Understanding how plant functions respond to the surroundings helps us improve plant growth and the development of food products; plant phenotyping provides this biological information, but requires appropriate tools to extract it. Imaging is one phenotyping solution, consisting of imaging hardware, such as cameras, and image analysis software that measures changes in plant images, such as growth rates. In this paper, we propose a preprocessing algorithm that removes noise and separates the foreground from the background, producing plant images that support subsequent segmentation. Preprocessing is an important stage that affects the quality of image segmentation and, ultimately, of plant image labeling and analysis. Our algorithm focuses on noise removal through steps such as color-space conversion and filtering, followed by a local adaptive binarization step such as Niblack's method. Finally, we evaluate our algorithm against others by testing a variety of binarization methods.
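Niblack binarization, the local adaptive step named above, thresholds each pixel at the local mean plus k times the local standard deviation. A minimal Python sketch; the window size and k are typical values, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray, window=25, k=-0.2):
    """Niblack local adaptive threshold: T = local mean + k * local std."""
    img = gray.astype(float)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return img > (mean + k * std)        # True = foreground (plant) pixels

# Typical use: convert the color image to a greenness channel first,
# e.g. excess green 2G - R - B, then binarize and clean the mask.
```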
ARTICLE | doi:10.20944/preprints202202.0041.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Laser scanning instrument; 3D scanner calibrator; surface reflectance; measurement accuracy
Online: 2 February 2022 (15:57:04 CET)
The calibrator is one of the most important factors in the calibration of laser 3D scanning instruments, and many national standards specifically emphasize the requirements for diffuse reflection surfaces. In this study, comparative measurement experiments were carried out on spherical and plane calibrators. A black ceramic standard sphere, a white ceramic standard sphere, a metal standard sphere, a metal standard plane and a white ceramic standard plane were used to test the laser 3D scanner. In the spherical calibrator experiments, the results indicated that the RMS of the white ceramic spherical calibrator, with a reflectance of about 60%, is 10 times that of the metal spherical calibrator, with a reflectance of about 15%, while the RMS of the black ceramic spherical calibrator, with a reflectance of about 11%, is of the same order as that of the metal spherical calibrator. In the plane calibrator experiments, the RMS of the flatness measurement was 0.077 mm for the metal plane calibrator (reflectance 15%) and 2.915 mm for the ceramic plane calibrator (reflectance 60%). The results show that even when the optimal measurement distance and incident angle are selected, the reflectance of the calibrator has a great effect on the measurement results, for both outlines and profiles. Based on the experiments, it is recommended to use a spherical calibrator or standard plane with a reflectance of around 18% as the standard, which yields reasonable results. In addition, it is necessary to clearly provide the material category and surface reflectance information of the standard when calibrating a scanner according to the measurement standard.
ARTICLE | doi:10.20944/preprints202202.0034.v1
Subject: Physical Sciences, Optics Keywords: stray light; radiometric accuracy; Earth observation; correction algorithm; ghost reflections
Online: 2 February 2022 (12:58:46 CET)
Stray light is a critical aspect of high-performance optical instruments. When stray light control by design is insufficient to reach the performance requirement, correction by post-processing must be considered. This situation is encountered, for example, in the Earth observation instrument 3MI, whose stray light properties are complex due to the presence of many ghosts distributed across the detector array. We implement an iterative correction method and discuss its convergence properties. Spatial and field binning can be employed to reduce the computation time, but at the cost of decreased performance. Interpolation of the stray light properties is required to achieve high-performance correction; two methods are proposed and tested for this. The first interpolates the stray light in the field domain, while the second applies a scaling operation based on a local symmetry assumption. Ultimately, the scaling method is selected, and a stray light reduction by a factor of 58 is obtained at 2σ (129 at 1σ) for an extended scene illumination.
ARTICLE | doi:10.20944/preprints202105.0424.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Convolutional Neural Network (CNN); Emotion Recognition; Facial Expression; Classification; Accuracy
Online: 18 May 2021 (11:34:19 CEST)
Emotion recognition, the identification of human emotion, is directly relevant to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation and human-robot communication. The purpose of this study is to propose a new facial emotion recognition model using a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model's training accuracy within a small number of epochs, so that a real-time scheme can easily fit the model and sense emotions, and on capturing a person's mental or emotional state through behavioral aspects. To train the CNN model we use the FER2013 database, and we test the system's success by identifying facial expressions in real time. ConvNet consists of four convolution layers together with two fully connected layers. The experimental results show that ConvNet achieves 96% training accuracy, much better than current existing models, and a validation accuracy of 65% to 70% (across the different datasets used in the experiments), a higher classification accuracy than other existing models. We have made all materials publicly accessible to the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
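A minimal Keras sketch of a network matching the stated shape, four convolution layers plus two fully connected layers over 48x48 grayscale FER2013 inputs and seven classes. Filter counts, pooling, and dropout are assumptions, not ConvNet's published configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),                  # FER2013 grayscale faces
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),            # first fully connected layer
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),           # the seven emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```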
REVIEW | doi:10.20944/preprints202105.0316.v1
Subject: Medicine & Pharmacology, Allergology Keywords: intraoral scanners; digital dentistry; trueness; precision; accuracy; 3D printing; materials
Online: 14 May 2021 (08:05:37 CEST)
Introduction: The current generation of 3D printers is lighter, cheaper, and smaller, making them more accessible to the chairside digital dentist than ever before. 3D printers in both industrial and chairside settings can work with various types of materials, including metals, ceramics, and polymers. Evidence presented in many studies shows that an ideal material for dental restorations is characterised by several properties related to durability, cost-effectiveness, and high performance. This review is the second part of a 3D printing series; it examines the literature on material science and the applications of these materials in 3D printing, and discusses their potential further development and future evolution. Conclusions: Current 3D printing materials provide a wide range of possibilities for more predictable workflows and improved efficiency through less wasteful additive manufacturing in CAD/CAM procedures. Incorporating a 3D printer and a digital workflow into a dental practice is challenging, but the wide range of manufacturing options and materials available means that the dentist should be well prepared to treat patients with a more predictable and cost-effective treatment pathway. As 3D printing becomes a commonplace addition to chairside dental clinics, the evolution of these materials, in particular reinforced PMMA, zirconia-containing resins and glass-reinforced polymers, offers increased speed and improved aesthetics that will likely replace subtractive milling machines for most procedures.
REVIEW | doi:10.20944/preprints202105.0221.v1
Subject: Medicine & Pharmacology, Allergology Keywords: 3D printing; intraoral scanners; digital dentistry; trueness; precision; accuracy; history
Online: 10 May 2021 (15:57:02 CEST)
Introduction: The term 3D printing commonly denotes a fabrication method in which the final form of an object results from the addition of successive layers. This procedure is more accurately described as additive manufacturing and is also referred to as rapid prototyping. The term 3D printing is relatively new, however, and has been an active part of current developments in dentistry. Much publicity surrounds the evolution of 3D printing, which is hailed as an innovation that will permanently change CAM manufacturing, including in the dental sector. This review is the first part of a 3D printing series; it covers the history of 3D printing, the technologies available, and the literature on the accuracy of these technologies. Conclusions: Recent advancements in digital dentistry incorporating these tools have modernised dental practice by paving the way for computer-aided design (CAD) technology and rapid prototyping. The use of 3D printing builds on 3D digital models produced with intraoral scanners (IOS), which can be manipulated easily for diagnosis, treatment planning, mock-ups, and a multitude of other uses. Combining 3D printing with 3D intraoral scanning eliminates the need for physical storage while making it possible to retrieve 3D models for use within all dental modalities.
ARTICLE | doi:10.20944/preprints202012.0105.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Built-up land; Fourier transformation; high-accuracy mapping; temporal correction
Online: 4 December 2020 (11:58:42 CET)
Long-term, high-accuracy mapping of built-up land dynamics is essential for understanding urbanization and its consequences for the environment. Despite advances in remote sensing and classification algorithms, built-up land mapping using early satellite imagery (e.g., from the 2000s and earlier) remains prone to uncertainty. We mapped the extent of built-up land in the North China Plain, one of China's most important agricultural regions, from 1990 to 2019 at three-year intervals. Using dense time-stack Landsat data, we applied discrete Fourier transformation to create temporal predictors and reduce mapping uncertainties for early years. We improved overall accuracy by 8% compared to using spectral and index predictors alone. We implemented a temporal correction algorithm to remove inconsistent pixel classifications, further improving accuracy to a consistently high level (>94%) across years. A cross-product comparison showed that our study achieved the highest levels of accuracy across years. Total built-up land in the North China Plain increased from 37,941 km2 in 1990–1992 to 131,578 km2 in 2017–2019. Consistent, high-accuracy built-up land mapping provides a reliable basis for policy planning in one of the most rapidly urbanizing regions of the planet.
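A minimal Python sketch of deriving Fourier-based temporal predictors from a dense time stack, keeping the mean plus the amplitude and phase of the first few harmonics per pixel. The number of harmonics and the compositing details are assumptions of this sketch:

```python
import numpy as np

def fourier_predictors(stack, n_harmonics=3):
    """Per-pixel DFT features from a dense time stack (time, rows, cols):
    mean plus amplitude and phase of the first few harmonics."""
    spec = np.fft.rfft(stack, axis=0)                # DFT along the time axis
    feats = [np.abs(spec[0]) / stack.shape[0]]       # mean reflectance
    for h in range(1, n_harmonics + 1):
        feats.append(np.abs(spec[h]))                # harmonic amplitude
        feats.append(np.angle(spec[h]))              # harmonic phase (timing)
    return np.stack(feats)                           # (features, rows, cols)

stack = np.random.rand(46, 64, 64)                   # e.g. ~46 scenes in a window
print(fourier_predictors(stack).shape)               # (7, 64, 64)
```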
ARTICLE | doi:10.20944/preprints201812.0056.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Low accuracy CDRs; Group movement pattern; Data mining; Travel behaviors
Online: 4 December 2018 (10:02:30 CET)
Identifying the group movement patterns of crowds and understanding group behaviors are valuable for urban planners, especially when the groups are special, such as tourist groups. In this paper, we present a framework to discover tourist groups and investigate tourist behaviors using mobile phone call detail records (CDRs). Unlike GPS data, CDRs have relatively poor spatial resolution and low sampling rates, which makes identifying group members from thousands of tourists a big challenge. Moreover, since touristic trips are not taken on a regular basis, no historical data for a specific group can be used to reduce the uncertainty of trajectories. To address these challenges, we propose a method called group movement pattern mining based on similarity (GMPMS) to discover tourist groups. To avoid large numbers of trajectory similarity measurements, snapshots of the trajectories are first generated to extract candidate groups containing co-occurring tourists. Then, considering that different groups may follow the same itineraries, additional traveling behavioral features are defined to identify the group members. Finally, with Hainan province as an example, we provide a number of interesting insights into the travel behaviors of both group tours and individual tours, which will be helpful for tourism planning and management.
ARTICLE | doi:10.20944/preprints201705.0170.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: accuracy; depth data; RMS error; 3D vision sensors; stereo disparity
Online: 23 May 2017 (09:20:27 CEST)
We propose an approach for estimating the error in depth data provided by generic 3D sensors, modern devices capable of generating an image (RGB data) and a depth map (distance) or a similar 2.5D structure (e.g. stereo disparity) of the scene. Our approach starts by capturing images of a checkerboard pattern devised for the method, then constructs a dense depth map using functions that generally come with the device SDK (based on disparity or depth). 2D processing of the RGB data is performed next to find the checkerboard corners. Clouds of corner points are finally created in 3D, over which an RMS error estimate is computed. We built a multi-platform system and verified and evaluated it using the nVIDIA Jetson TK1 development kit with the MS Kinect v1/v2 and the Stereolabs ZED camera. The main contribution is an error determination procedure that needs no dataset or benchmark, relying only on data acquired on the fly: with a simple checkerboard, our approach can determine the error for any such device. The envisioned application is 3D reconstruction for robotic vision, with a series of 3D vision sensors mounted on robots (quadcopter UAVs and terrestrial robots) for high-precision map construction, which can be used for sensing and monitoring.
ARTICLE | doi:10.20944/preprints201609.0103.v1
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Maximum entropy model; K-means clustering; accuracy; classification; sports forecasting
Online: 27 September 2016 (11:10:50 CEST)
Predicting the outcome of a future game between two National Basketball Association (NBA) teams poses a challenging problem of interest to statistical scientists as well as the general public. In this article, we formalize the prediction of game results as a classification problem, apply the principle of maximum entropy to construct an NBA maximum entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoff games with the model. The best NBAME model correctly predicts the winning team 74.4 percent of the time, compared to 69.3 percent for other machine learning algorithms.
ARTICLE | doi:10.20944/preprints202212.0390.v1
Subject: Earth Sciences, Environmental Sciences Keywords: hydraulic geometry; rating curves; flood mapping; accuracy; data acquisition; data needs
Online: 21 December 2022 (06:59:11 CET)
Hydraulic relationships are important for water resource management, hazard prediction, and modelling. Since Leopold first identified power law expressions relating streamflow to top width, depth, and velocity, hydrologists have been estimating 'at-a-station hydraulic geometries' (AHG) to describe average flow hydraulics. As the amount of data, the number of data sources, and application needs increase, the ability to apply, integrate and compare disparate and often noisy data is critical for applications ranging from reach to continental scales. However, even with quality data, the standard practice of solving each AHG relationship independently can lead to solutions that fail to conserve mass. The challenge addressed here is how to extend the physical properties of the AHG relations while improving the way they are hydrologically addressed and fit. We present a framework for minimizing error while ensuring mass conservation in reach-scale, or hydrologic-feature-scale, geometries (FHG) that complies with current state-of-the-practice conceptual and logical models. Through this framework, FHG relations are fit for the United States Geological Survey's (USGS) rating curve database, the USGS HYDRoacoustic dataset in support of the Surface Water Oceanographic Topography satellite mission (HYDRoSWOT), and the hydraulic property tables produced as part of the NOAA/Oakridge Continental Flood Inundation Mapping framework. The paper describes and demonstrates the accuracy, interoperability, and application of these relationships to flood modelling and presents the framework in an R package.
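For context, AHG expresses width, depth, and velocity as power laws of discharge, w = aQ^b, d = cQ^f, v = kQ^m; continuity (Q = w·d·v) then requires a·c·k = 1 and b + f + m = 1, which independent fits need not satisfy. A minimal Python sketch of that closure failure on hypothetical data (the paper's constrained fitting framework, shipped as an R package, is not reproduced here):

```python
import numpy as np

def fit_power(Q, y):
    """Least-squares fit of y = coef * Q**exp in log space."""
    exp, logc = np.polyfit(np.log(Q), np.log(y), 1)
    return np.exp(logc), exp

# Hypothetical field measurements at one station, each variable
# observed with its own independent measurement noise
np.random.seed(0)
Q = np.array([2.0, 5.0, 12.0, 30.0, 80.0])                   # discharge
w = 6.0 * Q**0.25 * np.exp(np.random.normal(0, 0.05, 5))     # top width
d = 0.4 * Q**0.40 * np.exp(np.random.normal(0, 0.05, 5))     # mean depth
v = Q / (w * d) * np.exp(np.random.normal(0, 0.05, 5))       # velocity

(a, b), (c, f), (k, m) = fit_power(Q, w), fit_power(Q, d), fit_power(Q, v)
# Continuity Q = w*d*v demands a*c*k = 1 and b + f + m = 1;
# independent fits on noisy data generally violate this closure.
print("a*c*k =", a * c * k, " b+f+m =", b + f + m)
```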
ARTICLE | doi:10.20944/preprints202104.0138.v1
Subject: Social Sciences, Accounting Keywords: Energy consumption; BRICS; GM (1, 1); Fractional-order; GREY; Forecasting accuracy
Online: 5 April 2021 (13:51:38 CEST)
Brazil, Russia, India, China, and South Africa (BRICS) represent developing economies facing different energy and economic development challenges. The current study aims to forecast energy consumption in BRICS at aggregate and disaggregate levels using an annual time series dataset from 1992 to 2019 and to compare the results obtained from a set of models. The time series data are from the British Petroleum (BP 2019) Statistical Review of World Energy. The forecasting methodology is based on a novel Fractional-order Grey Model (FGM) with different order parameters. This study contributes to the literature by comparing the forecasting accuracy and ability of the FGM(1,1) with traditional models, such as the standard GM(1,1) and ARIMA(1,1,1). It also illustrates BRICS's energy consumption nexus at aggregate and disaggregate levels using the latest available data, providing a reliable and broader perspective. The Diebold-Mariano test results confirmed the equal predictive ability of the FGM(1,1), for a specific range of order parameters, and the ARIMA(1,1,1) model, and the usefulness of both approaches for efficient energy consumption forecasting.
ARTICLE | doi:10.20944/preprints201910.0039.v1
Subject: Earth Sciences, Environmental Sciences Keywords: tree species; forest; biodiversity; time series; spatial autocorrelation; cross-validation; accuracy
Online: 3 October 2019 (13:56:18 CEST)
Mapping forest composition using multiseasonal optical time series remains challenging, and highly contrasting results are reported from one study to another, suggesting that the drivers of classification errors are still under-explored. We evaluated the performance of single-year Formosat-2 time series in discriminating tree species in temperate forests in France and investigated how predictions vary statistically and spatially across multiple years. Our objective was to better estimate the impact of spatial autocorrelation in the validation data on accuracy measurement and to understand which drivers in the time series are responsible for classification errors. The experiments were based on ten Formosat-2 image time series acquired irregularly across the seasonal vegetation cycle from 2006 to 2014. Because of heavy cloud cover in 2006, an alternative 2006 time series using only cloud-free images was added. Thirteen tree species were classified in each single-year dataset using the SVM algorithm. The performance was assessed using a spatial leave-one-out cross-validation (SLOO-CV) strategy, thereby guaranteeing full independence of the validation samples, and compared with standard non-spatial leave-one-out cross-validation (LOO-CV). The results show relatively close statistical performance from one year to the next despite the differences between the annual time series. Good agreement between years was observed in monospecific broadleaf plantations, versus high disparity in other forests composed of several species. A strong positive bias in the accuracy assessment (up to 0.4 in overall accuracy) was also found when spatial dependence in the validation data was not removed. Using the SLOO-CV approach, the average OA values per year ranged from 0.48 for 2006 to 0.60 for 2013, which satisfactorily represents the spatial instability of species prediction between years.
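A minimal Python sketch of the SLOO-CV idea: each sample is tested using only training samples beyond a buffer radius, removing spatial dependence between training and validation. The radius here is an assumption; in practice it would be derived from the data's spatial autocorrelation:

```python
import numpy as np

def sloo_cv_indices(coords, buffer_radius):
    """Spatial leave-one-out: for each test sample, train only on samples
    farther away than buffer_radius."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    for i in range(len(coords)):
        train = np.where(d[i] > buffer_radius)[0]
        yield train, i

# Usage with any classifier clf and features X, labels y, coordinates xy:
# for train_idx, test_idx in sloo_cv_indices(xy, buffer_radius=100.0):
#     clf.fit(X[train_idx], y[train_idx])
#     correct += clf.predict(X[[test_idx]])[0] == y[test_idx]
```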
ARTICLE | doi:10.20944/preprints201708.0099.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: inequality; ratio; Bernoulli number; Riemann zeta function; Dirichlet eta function; accuracy
Online: 28 August 2017 (09:30:34 CEST)
In the paper, by virtue of some properties of the Riemann zeta function, the author finds a double inequality for the ratio of two consecutive Bernoulli numbers with even indices and analyzes the approximation accuracy of the double inequality.
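For context, such bounds rest on Euler's formula for the even-indexed Bernoulli numbers, which reduces their ratio to a ratio of zeta values; a sketch of the underlying identity (not the paper's exact double inequality):

```latex
% Euler's formula reduces the ratio of consecutive even-indexed Bernoulli
% numbers to a ratio of Riemann zeta values:
\left|\frac{B_{2n+2}}{B_{2n}}\right|
  = \frac{(2n+1)(2n+2)}{4\pi^{2}}\cdot\frac{\zeta(2n+2)}{\zeta(2n)},
\qquad\text{since}\qquad
B_{2n} = (-1)^{n+1}\,\frac{2\,(2n)!}{(2\pi)^{2n}}\,\zeta(2n).
```

Two-sided estimates of ζ(2n+2)/ζ(2n), for instance via the Dirichlet eta function η(s) = (1 - 2^{1-s})ζ(s) named in the keywords, then translate into a double inequality for the Bernoulli ratio.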
ARTICLE | doi:10.20944/preprints202102.0570.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Low-cost Metal Material Extrusion; Additive Manufacturing; Machine Learning; Dimensional Accuracy; Sintering
Online: 25 February 2021 (10:02:44 CET)
Additive manufacturing (AM) is an emerging layer-by-layer manufacturing process. However, its broad adoption is still hindered by limited material options, various fabrication defects, and inconsistent part quality. Material extrusion (ME) is one of the most widely used AM technologies and is therefore adopted in this research. Low-cost metal ME is a new AM technology used to fabricate metal composite parts from sintered metal-infused filament. Since the materials and process involved are relatively new, there is a need to investigate the dimensional accuracy of ME-fabricated metal parts for real-world applications. Each step of the manufacturing process, from material extrusion to sintering, might significantly affect the dimensional accuracy. This research provides a comprehensive analysis of the dimensional changes of metal samples fabricated by the ME and sintering process, using statistical and machine learning methods. Machine learning (ML) methods can assist researchers in sophisticated pre-manufacturing planning and in product quality assessment and control. This study compares linear regression with neural networks in assessing and predicting the dimensional changes of ME-made components after 3D printing and sintering. The neural network achieved the highest prediction accuracy, outperforming linear regression. The findings of this study can help researchers and engineers predict dimensional variations and optimize the printing and sintering process parameters to obtain high-quality metal parts fabricated by the low-cost ME process.
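As a rough illustration of the model comparison described above, the sketch below fits a linear regression and a small neural network to synthetic shrinkage-like data; the features and data are invented for illustration, not drawn from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical normalized features, e.g. nominal dimension, orientation, wall count
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = 0.85 * X[:, 0] - 0.10 * X[:, 1] + 0.05 * rng.normal(size=200)  # synthetic shrinkage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
```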
ARTICLE | doi:10.20944/preprints202009.0034.v1
Subject: Earth Sciences, Geoinformatics Keywords: vertical accuracy; photogrammetric DTM; ASTER; SRTM; TanDEM-X; orthometric height; geoid height
Online: 2 September 2020 (08:30:48 CEST)
The quality of photogrammetry-derived products such as orthophotos, digital terrain models (DTMs) and digital line maps, as well as global digital elevation models (DEMs), depends critically on the accuracy of image orientation. This paper evaluates the vertical accuracy of an aerial photogrammetric digital terrain model (DTM), the Shuttle Radar Topography Mission (SRTM), the Advanced Spaceborne Thermal Emission and Reflectance Radiometer (ASTER), and TerraSAR-X's twin satellite TanDEM-X (TDX) datasets against in-situ orthometric heights computed from ellipsoidal heights and geoid heights derived from the 2008 Earth Gravitational Model (EGM2008) in Ethiopia. The quality of the four global digital elevation models was also validated against the aerial photogrammetric DTM measurements. In addition, the accuracies of the photogrammetric DTM and the four DEM products were checked for compliance with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards as well as the Ethiopian national vertical data evaluation standards. The study showed that the photogrammetric DTM is in good agreement with the reference orthometric heights compared to the SRTM, ASTER and TDX datasets. More precisely, it has an absolute accuracy of 1.67 m at the 95% linear error (LE95) confidence level, while the absolute accuracy of SRTM 3 arc-second (SRTM3) at LE90 (11.91 m) is better than its product specification (16 m). The absolute accuracy of SRTM 1 arc-second (SRTM1) (7.70 m at LE90) surpasses that of SRTM3, whereas the absolute accuracy of the ASTER DEM is somewhat below its product specification. TDX has a vertical accuracy (10.29 m at LE90) close to its product specification (10 m). Furthermore, the vertical accuracy of the photogrammetric DTM meets the 100 cm vertical accuracy class of the 2015 ASPRS standard; however, it does not meet the Ethiopian national vertical accuracy requirement of an RMSEz of ± 0.45 m. In general, the photogrammetric DTM, SRTM1, and TDX proved superior to the SRTM3 and ASTER DEMs and are better suited to applications requiring high precision and accuracy.
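The LE90/LE95 figures quoted above follow from the RMSE of the height differences under a zero-mean normal-error assumption; a minimal sketch:

```python
import numpy as np

def vertical_accuracy(dem_h, ref_h):
    """RMSE and linear error at 90% / 95% confidence for DEM heights
    against reference orthometric heights, assuming zero-mean,
    normally distributed errors (hence the 1.6449 / 1.9600 factors)."""
    dh = np.asarray(dem_h) - np.asarray(ref_h)
    rmse = np.sqrt(np.mean(dh ** 2))
    return rmse, 1.6449 * rmse, 1.9600 * rmse   # RMSE, LE90, LE95
```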
ARTICLE | doi:10.20944/preprints201810.0558.v1
Subject: Earth Sciences, Geoinformatics Keywords: digital terrain models; DTM vertical accuracy; DTM comparison; hydrogeomorphological modelling; Mediterranean catchments
Online: 24 October 2018 (08:27:47 CEST)
Digital Terrain Models (DTMs) are currently a fundamental source of information in the Earth Sciences. However, DTM-based studies can contain remarkable biases if the limitations and inaccuracies of these models are disregarded. In this work, four freely available datasets (SRTM C-SAR DEM, ASTER GDEM V2, and two airborne LiDAR-derived DTMs at 5 m and 1 m spatial resolution, respectively) were analysed in a comparative study of three geomorphologically contrasting catchments located in Mediterranean geoecosystems under intensive human land use. Vertical accuracy, as well as the influence of each dataset's characteristics on applicability to hydrological and geomorphological modelling, was assessed using classic geometric and morphometric parameters and the more recently proposed index of sediment connectivity. Overall vertical accuracy, expressed as root mean squared error (RMSE) and normalized median absolute deviation (NMAD), was highest for the 1 m LiDAR DTM (RMSE = 1.55 m; NMAD = 0.44 m) and the 5 m LiDAR DTM (RMSE = 1.73 m; NMAD = 0.84 m). The vertical accuracy of SRTM was lower (RMSE = 6.98 m; NMAD = 5.27 m) but considerably higher than that of ASTER (RMSE = 16.10 m; NMAD = 11.23 m). All datasets were affected by systematic distortions, and the propagation of these errors had negative impacts on flow routing, stream network and catchment delineation and, to a lesser extent, on the distribution of slope values. These limitations should be carefully considered when applying DTMs to hydrogeomorphological modelling.
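A minimal sketch of the two accuracy measures used above; NMAD's robustness is visible when a single blunder inflates the RMSE (the residuals are toy values, not the study's data):

```python
import numpy as np

def nmad(dh):
    """Normalized median absolute deviation: a robust vertical-accuracy
    measure, far less sensitive to outliers than RMSE."""
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

dh = np.array([0.2, -0.1, 0.4, -0.3, 5.0])   # toy DEM-minus-reference residuals
print(nmad(dh), np.sqrt(np.mean(dh ** 2)))   # NMAD vs RMSE (RMSE inflated by the 5.0 blunder)
```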
ARTICLE | doi:10.20944/preprints201807.0488.v1
Subject: Mathematics & Computer Science, Other Keywords: heart rate variability; machine learning; abnormality detection; window shifting; high accuracy prediction
Online: 25 July 2018 (14:22:10 CEST)
The use of machine learning techniques in predictive health care is on the rise, with minimal data used to train machine-learning models that deliver high-accuracy predictions. In this paper, we propose such a system, which uses Heart Rate Variability (HRV) as features for training machine learning models. The paper further benchmarks the usefulness of HRV features calculated from basic heart-rate data using a window-shifting method. The benchmarking was conducted with different machine-learning classifiers: an artificial neural network, a decision tree, k-nearest neighbours and a naïve Bayes classifier. Empirical results on the MIT-BIH Arrhythmia database show that the proposed system can be used for highly efficient prediction of abnormalities in heartbeat data series.
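A plausible reading of the window-shifting feature extraction is sketched below: slide a window over the RR-interval series and compute standard HRV statistics per window. Window length, step, and the exact feature set are assumptions, as the abstract does not specify them.

```python
import numpy as np

def hrv_features(rr_ms, win=60, step=10):
    """Slide a window over RR intervals (ms) and compute two common
    HRV features per window: SDNN and RMSSD."""
    feats = []
    for start in range(0, len(rr_ms) - win + 1, step):
        w = rr_ms[start:start + win]
        sdnn = np.std(w, ddof=1)                    # overall variability
        rmssd = np.sqrt(np.mean(np.diff(w) ** 2))   # beat-to-beat variability
        feats.append((sdnn, rmssd))
    return np.array(feats)
```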
ARTICLE | doi:10.20944/preprints201803.0161.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: acoustic positioning system; three-dimensional assessment model; positioning accuracy; DOP; optimal configuration
Online: 19 March 2018 (11:43:18 CET)
This paper addresses the problem of assessing and optimizing an acoustic positioning system for underwater target localization using range measurements only. We present a new three-dimensional assessment model to determine whether the optimal geometric beacon formation meets user needs. For mathematical tractability, the range measurements between the target and the beacons are assumed to be corrupted by white Gaussian noise whose variance is distance-dependent. By adopting dilution-of-precision (DOP) parameters in the assessment model, the relationship between the DOP parameters and positioning accuracy is derived. The optimal geometric beacon formation yielding the best performance is then obtained by minimizing the geometric dilution of precision (GDOP), on the condition that the target position is known and fixed. Next, to verify whether the estimated positioning accuracy over the region of interest satisfies the precision required by users, geometric positioning accuracy (GPA), horizontal positioning accuracy (HPA) and vertical positioning accuracy (VPA) are used to assess the optimal geometric beacon formation. Simulation examples illustrate the validity of the conclusions. Unlike previous work, which uses only GDOP to optimize the formation and cannot assess performance in specified dimensions, the new three-dimensional assessment model can assess the optimal geometric beacon formation in each dimension for any point in three-dimensional space, providing users with guidance for optimizing performance in every specified dimension.
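For range-only localization, the DOP quantities follow from the geometry matrix of unit line-of-sight vectors; a minimal sketch (the beacon and target coordinates are arbitrary examples):

```python
import numpy as np

def dop(beacons, target):
    """GDOP, HDOP and VDOP from range-only geometry: rows of H are
    unit line-of-sight vectors from the target to each beacon."""
    diff = beacons - target
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    Q = np.linalg.inv(H.T @ H)                   # geometry covariance factor
    gdop = np.sqrt(np.trace(Q))
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])            # horizontal components
    vdop = np.sqrt(Q[2, 2])                      # vertical component
    return gdop, hdop, vdop

beacons = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [100, 100, -50.0]])
print(dop(beacons, target=np.array([50, 50, -25.0])))
```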
ARTICLE | doi:10.20944/preprints202112.0206.v1
Subject: Engineering, Control & Systems Engineering Keywords: Motion capture camera; robotic total station; autonomous vehicle; 6 DoF pose estimation; accuracy
Online: 13 December 2021 (13:30:53 CET)
To validate the accuracy and reliability of onboard sensors for object detection and localization in driver assistance and autonomous driving applications under realistic conditions (indoors and outdoors), a novel tracking system is presented. The tracking system determines the position and orientation of a slow-moving vehicle (e.g. a car during parking maneuvers), independently of the onboard sensors, during test maneuvers within a reference environment. One requirement is a 6 degree-of-freedom (DoF) pose with a position uncertainty below 5 mm (3σ) and an orientation uncertainty below 0.3° (3σ), at a frequency higher than 20 Hz and with a latency smaller than 500 ms. To compare the results from the reference system with the vehicle's onboard system, synchronization via the Precision Time Protocol (PTP) and interoperability with the Robot Operating System (ROS) are implemented. The developed system combines motion capture cameras, mounted on the vehicle in a 360° panorama-view set-up, with robotic total stations. A point cloud of the test site serves as a digital twin of the environment in which the movement of the vehicle is simulated. Results show that the fused measurements of these sensors complement each other, so that the accuracy requirements for the 6 DoF pose can be met while allowing flexible installation in different environments.
Subject: Physical Sciences, Acoustics Keywords: SAR Interferometry; Accuracy; Big Data; Deformation Monitoring; Sentinel-1; Fading Signal; Signal Decorrelation
Online: 27 October 2020 (15:26:30 CET)
We scrutinize the reliability of multilooked interferograms for deformation analysis. Using a simple approach to evaluate the accuracy of the estimated deformation signals, we reveal a prominent bias in the deformation velocity maps. The bias results from the propagation of small phase errors in multilooked interferograms through the time series and can amount to 6.5 mm/yr when error-prone short-temporal-baseline interferograms are used. We further discuss the role of phase estimation algorithms in reducing the bias and recommend a unified intermediate InSAR product for achieving high-precision deformation monitoring.
DATA DESCRIPTOR | doi:10.20944/preprints201812.0148.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Additive manufacturing; fused deposition modeling; FDM; dimensional accuracy; manufacturing process repeatability; polymer testing
Online: 12 December 2018 (12:58:13 CET)
This report describes the collection of a large dataset (6,930 measurements) on dimensional error in the fused deposition modeling (FDM) additive manufacturing process for full-density parts. Three print orientations were studied, as well as seven raster angles (0°, 15°, 30°, 45°, 60°, 75°, and 90°) for the rectilinear infill pattern. All measurements were replicated ten times on ten different samples to ensure a comprehensive dataset. Eleven polymer materials were considered: acrylonitrile butadiene styrene (ABS), polylactic acid (PLA), high-temperature PLA, wood-composite PLA, carbon-fiber-composite PLA, copper-composite PLA, aluminum-composite PLA, high-impact polystyrene (HIPS), glycol-enhanced polyethylene terephthalate (PETG), polycarbonate, and synthetic polyamide (nylon). The samples were ASTM-standard impact-testing samples, since this geometry allows the measurement of error on three different scales; the nominal dimensions were 3.25 mm thick, 63.5 mm long, and 12.7 mm wide. This dataset is intended to give engineers and product designers a benchmark for judging the accuracy and repeatability of the FDM process for use in manufacturing end-user products.
ARTICLE | doi:10.20944/preprints201806.0240.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Bit-serial; Low Power; Variable Accuracy Computing; FFT; Energy Harvesting; VLSI; Hardware Design
Online: 14 June 2018 (16:22:15 CEST)
In this paper, a new approach is proposed for designing an ultra-low-power FFT (Fast Fourier Transform) system suitable for use in sensors powered by energy harvesting. A bit-serial architecture is adopted to reduce the power consumption of the butterfly operation. Simulation results show that, compared with state-of-the-art bit-serial and conventional parallel processors, the proposed technique is superior in terms of silicon area, power consumption, and dynamic energy use, owing to variable-precision arithmetic. A sample design of a 64-point FFT shows that the implementation can save about 40% area and 36% leakage power compared with a conventional parallel counterpart, achieving significant power benefits at low sample rates and in the low-voltage domain. Dynamic variation of the arithmetic precision can be achieved through a simple modification of the controller, with a hardware area overhead of 10% in gate count.
ARTICLE | doi:10.20944/preprints202208.0389.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Numerical weather prediction; Time integration; Filtering; Laplace transform; semi-implicit; semi-Lagrangian; Forecast accuracy
Online: 23 August 2022 (03:13:59 CEST)
A time integration scheme based on semi-Lagrangian advection and Laplace transform adjustment has been implemented in a baroclinic primitive equation model. The semi-Lagrangian scheme makes it possible to use large time steps. However, errors arising from the semi-implicit scheme increase with the time step size. In contrast, the errors using the Laplace transform adjustment remain relatively small for typical time steps used with semi-Lagrangian advection. Numerical experiments confirm the superior performance of the Laplace transform scheme relative to the semi-implicit reference model. The algorithmic complexity of the scheme is comparable to the reference model, making it computationally competitive, and indicating its potential for integrating weather and climate prediction models.
ARTICLE | doi:10.20944/preprints202105.0541.v1
Subject: Keywords: Artificial intelligence; Accounting systems integration; Accounting systems accuracy; Financial statements; Aqaba Special Economic Zone
Online: 24 May 2021 (08:47:20 CEST)
The study aims to examine the effects of artificial intelligence (AI) on the consistency and analysis of financial statements in hotels in the Aqaba Special Economic Zone (ASEZA), Jordan. This is an exploratory, empirical study that uses data collection and interpretation to draw conclusions. The researchers used the arithmetic mean, standard deviation, T-test and ANOVA test to assess the significance of the study questions. The findings of a simple linear regression analysis of the impact of AI implemented in Jordanian hotels on the integration of accounting information systems, and the association between AI and the integration of accounting information systems (R = 59.6%), also indicate that the fixed limit value amounted to (2.060) and the value of (Beta) for the T-test
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: spectral collocation; Chebfun; singular Schrodinger; high index eigenpairs; multiple eigenpairs; accuracy; numerical stability
Online: 26 November 2020 (11:07:47 CET)
We are concerned with the use of some classical spectral collocation methods, as well as the new software system Chebfun, to compute high-order (high-index) eigenpairs of singular and regular Schrödinger eigenproblems. We want to highlight both the qualities and the shortcomings of these methods and to evaluate them against the usual ones. To resolve a boundary singularity, we use Chebfun with a simple domain truncation technique. Although this technique is equally easy to apply with spectral collocation, things are more nuanced for those methods: a special technique to introduce boundary conditions, as well as a coordinate transform that maps an unbounded domain to a finite one, are the key ingredients. A challenging set of "hard" benchmark problems, for which the usual numerical methods (finite differences, finite elements, shooting, etc.) fail, is analysed. To separate "good" from "bad" eigenvalues, we estimate the drift of the set of eigenvalues of interest with respect to the order of approximation and/or the domain scaling parameter. This automatically provides a measure of the error within which the eigenvalues are computed and a hint on numerical stability. We pay particular attention to problems with nearly multiple eigenvalues, as well as to problems with a mixed (continuous) spectrum; in the latter case we try to highlight its existence numerically. Special attention is paid to the higher eigenpairs (the eigenvalue together with the corresponding eigenfunction, approximated by an eigenvector spanning its nodal values).
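The drift diagnostic can be illustrated independently of Chebfun: compute the spectrum at two resolutions and flag eigenvalues that move. The sketch below uses a plain finite-difference discretization of the harmonic oscillator purely to illustrate the diagnostic; the paper relies on spectral collocation precisely because such simple schemes fail on its harder benchmarks.

```python
import numpy as np

def ho_eigs(n, L=10.0):
    """Eigenvalues of -u'' + x^2 u on [-L, L] via plain finite differences
    (illustrative only; the paper uses spectral collocation / Chebfun)."""
    x = np.linspace(-L, L, n + 2)[1:-1]
    h = x[1] - x[0]
    A = (np.diag(2.0 / h**2 + x**2)
         + np.diag(-np.ones(n - 1) / h**2, 1)
         + np.diag(-np.ones(n - 1) / h**2, -1))
    return np.sort(np.linalg.eigvalsh(A))

# Drift diagnostic: eigenvalues that stabilize under refinement are "good"
lam_coarse, lam_fine = ho_eigs(400), ho_eigs(800)
drift = np.abs(lam_coarse[:20] - lam_fine[:20])
print(drift)   # exact values are 1, 3, 5, ...; small drift marks trustworthy eigenvalues
```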
ARTICLE | doi:10.20944/preprints202005.0052.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: COVID-19 infection; CT scan image; serial feature fusion; KNN classifier; segmentation; detection accuracy
Online: 5 May 2020 (02:32:05 CEST)
The coronavirus disease (COVID-19), caused by the novel coronavirus SARS-CoV-2, has been declared a global pandemic. Due to its infection rate and severity, it has emerged as one of the major global threats of the current generation. To support the current combat against the disease, this research proposes a machine-learning-based pipeline to detect COVID-19 infection in lung computed tomography images (CTI). The pipeline consists of a number of sub-procedures, ranging from segmenting the COVID-19 infection to classifying the segmented regions. The first part of the pipeline segments the COVID-19-affected CTI using Social Group Optimization and Kapur's entropy thresholding, followed by k-means clustering and morphology-based segmentation. The next part implements feature extraction, selection and fusion to classify the infection. A PCA-based serial fusion technique is used to fuse the features, and the fused feature vector is then employed to train, test and validate four different classifiers: random forest, k-nearest neighbors (KNN), support vector machine with radial basis function, and decision tree. Experimental results on benchmark datasets show a high accuracy (>91%) for the morphology-based segmentation task; for the classification task, KNN offers the highest accuracy among the compared classifiers (>87%). It should be noted, however, that this method still awaits clinical validation and therefore should not be used to clinically diagnose ongoing COVID-19 infection.
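A minimal sketch of serial (concatenation) feature fusion followed by PCA compression and a KNN classifier, on synthetic feature blocks; the block sizes, component count, and data are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature blocks extracted from segmented CT regions
rng = np.random.default_rng(1)
texture = rng.normal(size=(120, 30))
shape = rng.normal(size=(120, 12))
labels = rng.integers(0, 2, size=120)

fused = np.hstack([texture, shape])                 # serial (concatenation) fusion
fused = PCA(n_components=10).fit_transform(fused)   # compress the fused vector
print(cross_val_score(KNeighborsClassifier(5), fused, labels, cv=5).mean())
```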
ARTICLE | doi:10.20944/preprints201810.0379.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: surgical simulator training; individual performance trend; speed-accuracy function; automatic detection; performance feed-back
Online: 17 October 2018 (08:40:08 CEST)
Simulator training for image-guided surgical interventions may benefit from artificial intelligence systems that control the evolution of task skills, in terms of the time and precision of a trainee's performance, on the basis of fully automatic feedback systems. At the earliest stages of training, novice trainees frequently focus on getting faster at the task and may thereby compromise the optimal evolution of the precision of their performance. To automatically guide them towards an optimal speed-accuracy trade-off, an effective control system for the reinforcement or correction of strategies must be able to exploit the right individual performance criteria in the right way, reliably detect individual performance trends at any given moment, and alert the trainee, as early as necessary, when to slow down and focus on precision, or when to focus on getting faster. This article addresses several aspects of this challenge for speed-accuracy-controlled simulator training before any training on specific surgical tasks or clinical models is envisaged. Analyses of individual learning curves from the simulator training sessions of novices, together with benchmark performance data from one expert surgeon who had no specific training in the simulator task, validate the suggested approach.
ARTICLE | doi:10.20944/preprints202201.0060.v1
Subject: Medicine & Pharmacology, Urology Keywords: extra-prostatic extension; magnetic resonance imaging; radical prostatectomy; nerve-sparing; prostate cancer; staging; diagnostic accuracy
Online: 6 January 2022 (10:05:55 CET)
The accuracy of multi-parametric MRI (mpMRI) in the pre-operative staging of prostate cancer (PCa) remains controversial. Objective: To evaluate the ability of mpMRI to accurately predict PCa extra-prostatic extension (EPE) on a side-specific basis using a risk-stratified 5-point Likert scale, and to assess the influence of mpMRI scan quality on diagnostic accuracy. Patients and Methods: We included 124 men who underwent robot-assisted RP (RARP) as part of the NeuroSAFE PROOF study at our centre. Three radiologists retrospectively reviewed the mpMRI blinded to RP pathology and assigned a Likert score (1-5) for EPE on each side of the prostate. Each scan was also ascribed a Prostate Imaging Quality (PI-QUAL) score, where 1 represents the poorest and 5 the best diagnostic quality. Outcome measurements and statistical analysis: Diagnostic performance is presented for the binary classification of EPE, including 95% confidence intervals and the area under the receiver operating characteristic curve (AUC). Results: A total of 231 lobes from 121 men (mean age 56.9 years) were evaluated; 39 men (32.2%) and 43 lobes (18.6%) had EPE. A Likert score ≥3 had sensitivity (SE), specificity (SP), NPV and PPV of 90.4%, 52.3%, 96% and 29.9%, respectively, with an AUC of 0.82 (95% CI: 0.77-0.86). The AUC was 0.63 (95% CI: 0.37-0.9), 0.77 (0.71-0.84) and 0.92 (0.88-0.96) for biparametric scans, PI-QUAL 1-3 scans and PI-QUAL 4-5 scans, respectively. Conclusions: MRI can be used effectively by genitourinary radiologists to rule out EPE and help inform surgical planning for men undergoing RARP. EPE prediction was more reliable when the MRI scan was (a) multi-parametric and (b) of higher image quality according to the PI-QUAL scoring system.
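The side-specific metrics above derive from a 2x2 confusion matrix at each threshold; a minimal sketch (the counts are illustrative, chosen only to roughly echo the reported sensitivity, not the study's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return se, sp, ppv, npv

# Illustrative counts only: 43 EPE-positive lobes out of 231
print(diagnostic_metrics(tp=39, fp=88, fn=4, tn=100))
```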
ARTICLE | doi:10.20944/preprints202105.0165.v1
Subject: Medicine & Pharmacology, Allergology Keywords: Intraoral Scanners; Intra-Oral Scanners; CAD/CAM; Digital Dentistry; Trueness; Precision; Accuracy; Scanners; Lab Scanners
Online: 10 May 2021 (10:44:19 CEST)
(1) Background: The purpose of this study is to evaluate the full-arch scan accuracy (precision and trueness) of nine digital intra-oral scanners and four lab scanners. Previous studies have compared the accuracy of some intra-oral scanners, but as this is a field of quickly developing technologies, a more up-to-date study was needed to assess the capabilities of currently available models. (2) Methods: This in vitro study compared nine different intraoral scanners (Omnicam 4.6, Omnicam 5.1, Primescan, CS 3600, Trios 3, Trios 4, Runyes, i500 and DL206) and four lab light scanners (Einscan SE, 300e, E2 and Ineos X5), investigating the accuracy of each scanner by examining overall trueness and precision. Ten aligned and cut scans from each scanner were brought into CloudCompare and compared with the master STL using the CloudCompare 3D-analysis best-fit algorithm. The results were recorded along with individual standard deviations and a colorimetric map of the deviation across the surface of the STL mesh, quantified at specific points against the master STL. (3) Results: The Primescan had the best overall trueness (17.3 ± 4.9), followed, in order of increasing deviation, by the Trios 4 (20.8 ± 6.2), i500 (25.2 ± 7.3), CS3600 (26.9 ± 15.9), Trios 3 (27.7 ± 6.8), Runyes (47.2 ± 5.4), Omnicam 5.1 (55.1 ± 9.5), Omnicam 4.6 (57.5 ± 3.2) and Launca DL206 (58.5 ± 22.0). Among the lab light scanners, the Ineos X5 had the best overall trueness (0.0 ± 1.9), followed, in order of increasing deviation, by the 3Shape E2 (3.6 ± 2.2), Up3D 300E (12.8 ± 2.7) and Einscan SE (14.9 ± 9.5). (4) Conclusions: This study confirms that all current-generation intra-oral digital scanners can capture a reliable, reproducible full-arch scan in dentate patients. Of the intra-oral scanners tested, none produced results significantly similar in trueness to the Ineos X5; the Primescan was the only one statistically similar in trueness to the 3Shape E2 lab scanner. All scanners in the study had a mean trueness of under 60 microns of deviation. While this study compares scanning accuracy in a dentate arch, scanning a fully edentulous arch is more challenging, and the accuracy of these scanners in edentulous cases should be examined in further studies.
ARTICLE | doi:10.20944/preprints201802.0060.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: surveying; close-range photogrammetry; internal coincidence precision estimation; external coincidence accuracy estimation; experimental work; testing
Online: 7 February 2018 (10:28:16 CET)
Precision and accuracy estimation is an important index used to reflect the measurement performance and quality of a measurement system. To reveal the significance and connotations of the precision and accuracy estimation index of a close-range photogrammetry system, several common precision and accuracy estimation methods used in close-range photogrammetry are explained from a theoretical perspective, and the mechanisms of internal coincidence precision estimation and external coincidence accuracy estimation are derived. Through detailed experimental design and testing, the validity and reliability of the proposed estimation methods are verified, providing strong evidence for the quality control, optimisation, and evaluation of measurement results from a close-range photogrammetry system. The work is also significant for the further development of precision and accuracy analysis of close-range photogrammetry systems.
ARTICLE | doi:10.20944/preprints202202.0197.v1
Subject: Medicine & Pharmacology, Other Keywords: public health; occupational; Covid; SARS-CoV-2; work; job exposure matrix; JEM; compensation; predictivity; validity; accuracy
Online: 16 February 2022 (09:47:18 CET)
Background. We aimed to assess the validity of the Mat-O-Covid job exposure matrix (JEM) for SARS-CoV-2 using compensation data from the French National Health Insurance system for occupational COVID-19. Methods. De-identified compensation data for occupational COVID-19 in France were obtained between August 2020 and August 2021, with claim acceptance taken as the reference. Mat-O-Covid is an expert-based French JEM on workplace exposure to SARS-CoV-2. Bivariate and multivariate models were used to study the association between the exposure assessed by Mat-O-Covid and the reference, along with the area under the curve (AUC), sensitivity, specificity, predictive values, and likelihood ratios. Results. In the 1,140 cases included, there was a close association between the Mat-O-Covid index and the reference (p < 0.0001). The overall predictivity was good, with an AUC of 0.78 and an optimal threshold of 13 per thousand. Using Youden's J statistic resulted in 0.67 sensitivity and 0.87 specificity. Both positive and negative likelihood ratios were significant: 4.9 [2.4-6.4] and 0.4 [0.3-0.4], respectively. Discussion. It was possible to assess Mat-O-Covid's validity using data from the national compensation system for occupational COVID-19. Though further studies are needed, Mat-O-Covid exposure assessment appears accurate enough to be used in research.
REVIEW | doi:10.20944/preprints202004.0155.v1
Subject: Medicine & Pharmacology, Other Keywords: COVID-19; Coronavirus; False-negative; Nucleic Acid Test; Screening; Diagnostic Accuracy; Missed Diagnosis; Epidemic; Infectious Disease
Online: 9 April 2020 (14:37:56 CEST)
Reliable methods to confirm the diagnosis of COVID-19 are essential to the successful management and containment of the virus. Current diagnostic options are limited in type, supply, and reliability. This article explores the controversial unreliability of existing diagnostic methods and maintains that more reliable diagnostic methods, combinations, and sequencing are necessary to reduce the discharge of patients on the basis of false-negative test results. Such a reduction would, in turn, reduce transmission of the disease.
Subject: Engineering, Biomedical & Chemical Engineering Keywords: brain-computer Interface; cognitive aging; steady-state visual evoked potential; neural network; detection accuracy; band power
Online: 13 May 2019 (08:32:23 CEST)
Cognitive deterioration caused by illness or aging often begins before symptoms arise, and timely diagnosis is crucial to reducing its medical, personal, and societal impacts. Brain-Computer Interfaces (BCIs) stimulate and analyze key cerebral rhythms, enabling reliable cognitive assessment that can accelerate diagnosis. The BCI system presented here analyzes Steady-State Visually Evoked Potentials (SSVEPs) elicited in subjects of varying age to detect cognitive aging, predict its magnitude, and identify its relationship with SSVEP features (band power and frequency detection accuracy), which were hypothesized to indicate aging-related cognitive decline. Rectangular stimuli flickering at theta, alpha, and beta frequencies were presented to subjects, and frontal and occipital EEG responses were recorded. These were processed to calculate each subject's frequency detection accuracy and SSVEP band power, and a neural network was trained on these features to predict cognitive age. The results revealed potential cognitive deterioration through age-related variations in SSVEP features: frequency detection accuracy consistently declined after the 20-40 age group, from an average of 96.64% to 69.23%, and band power declined across all age groups. SSVEPs generated at theta and alpha frequencies, especially 7.5 Hz, were the best indicators of cognitive deterioration. The presented system can thus serve as an effective diagnostic tool for age-related cognitive decline.
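Band power, one of the two SSVEP features used, is commonly computed by integrating the Welch power spectral density over the stimulation band; a minimal sketch on a synthetic 7.5 Hz response (the sampling rate and band edges are assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Power of an EEG channel in the [lo, hi] Hz band, integrated
    from the Welch PSD; used here as an SSVEP response feature."""
    f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    band = (f >= lo) & (f <= hi)
    return np.trapz(psd[band], f[band])

fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.random.randn(t.size)  # synthetic 7.5 Hz SSVEP
print(band_power(eeg, fs, 7.0, 8.0))
```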
ARTICLE | doi:10.20944/preprints201712.0100.v2
Subject: Earth Sciences, Geology Keywords: cold-water coral; carbonate mound; habitat mapping; spatial prediction; image segmentation; GEOBIA; random forest; accuracy; confidence
Online: 18 January 2018 (16:08:36 CET)
Cold-water coral reefs are rich yet fragile ecosystems found in colder oceanic waters. Knowledge of their spatial distribution on continental shelves, slopes, seamounts and ridge systems is vital for marine spatial planning and conservation. Cold-water corals frequently form conspicuous carbonate mounds of varying sizes, which are identifiable from multibeam echosounder (MBES) bathymetry and derived geomorphometric attributes. However, the often large number of mounds makes manual interpretation and mapping a tedious process. We present a methodology that combines image segmentation and random forest spatial prediction with the aim of deriving maps of carbonate mounds and an associated measure of confidence. We demonstrate the method on multibeam echosounder data from Iverryggen on the mid-Norwegian shelf and identified the image-object mean planar curvature as the most important predictor. The presence and absence of carbonate mounds is mapped with high accuracy (overall accuracy = 84.4%, sensitivity = 0.827, specificity = 0.866). Spatially explicit confidence in the predictions, derived from the predicted probability and from whether the predictions fall within or outside the modelled range of values, is generally high. We plan to apply the showcased method to other areas of the Norwegian continental shelf and slope where MBES data have been collected, with the aim of providing crucial information for marine spatial planning.
ARTICLE | doi:10.20944/preprints202007.0656.v1
Subject: Keywords: COVID-19 infection; Chest X-ray image; generalized regression neural network; probabilistic neural network and detection accuracy
Online: 27 July 2020 (00:52:49 CEST)
Coronavirus disease (COVID-19) had infected more than 10 million people and killed at least 500,000 worldwide by the end of June 2020. As the disease continues to evolve, scientists and researchers around the world are trying to find the most effective ways to combat it. Chest X-rays are a widely available modality for immediate care in diagnosing COVID-19, and precise detection and diagnosis of COVID-19 from these chest X-rays would be practical in the current situation. This paper proposes a one-shot, cluster-based approach for the accurate detection of COVID-19 in chest X-rays. The main objective of one-shot learning (OSL) is to mimic the way humans learn in order to make classifications or predictions on a wide range of similar but novel problems; the core constraint of this type of task is that the algorithm must decide on the class of a test instance after seeing just one example. For this purpose, we experimented with the widely known Generalized Regression and Probabilistic Neural Networks. Experiments conducted on publicly available chest X-ray images demonstrate that the method can detect COVID-19 accurately and with high precision, outperforming many of the convolutional-neural-network-based methods proposed in the literature.
ARTICLE | doi:10.20944/preprints201811.0025.v1
Subject: Materials Science, General Materials Science Keywords: additive manufacturing; selective laser melting; AlSi10Mg; Al6061; SLM process parameters; powder characterization; density; surface topology; dimensional accuracy
Online: 2 November 2018 (06:19:40 CET)
Additive manufacturing (AM) of high-strength Al alloys promises to enhance the performance of critical components in various aerospace and automotive applications. The key advantage of AM is its ability to generate lightweight, robust, and complex shapes. However, the characteristics of as-built parts may be an obstacle to satisfying part-quality requirements. The current study investigates the influence of selective laser melting (SLM) process parameters on the quality of parts fabricated from different Al alloys. A design of experiments (DOE) is used to analyze relative density, porosity, surface roughness, and dimensional accuracy according to the interaction effects between the SLM process parameters. The results identify ranges of energy density and SLM process parameters for the AlSi10Mg and Al6061 alloys needed to achieve "optimum" values of each performance characteristic. A process map is developed for each material by combining the optimized ranges of SLM process parameters for each characteristic to ensure good quality of the as-built parts. The second part of this study investigates the effect of SLM process parameters on the microstructure and mechanical properties of the same Al alloys. This comprehensive study also aims to reduce the amount of post-processing needed.
ARTICLE | doi:10.20944/preprints202007.0336.v3
Subject: Biology, Agricultural Sciences & Agronomy Keywords: Prediction accuracy; Mixed linear and Bayesian models; Machine Learning algorithms; Training set size and composition; Parametric and nonparametric models
Online: 17 September 2020 (05:41:51 CEST)
Genomic selection (GS) can accelerate variety improvement when training set (TS) size and its relationship with the breeding set (BS) are optimized for the prediction accuracy (PA) of genomic prediction (GP) models. Sixteen GP algorithms were run on phenotypic best linear unbiased predictors (BLUPs) and estimators (BLUEs) of resistance to both fall armyworm (FAW) and maize weevil (MW) in a tropical maize panel. For MW resistance, 37% of the panel formed the TS and the remainder the BS, whilst for FAW, random-based training sets (RBTS) and pedigree-based training sets (PBTS) were designed. PAs achieved with BLUPs varied from 0.66 to 0.82 for MW resistance traits and, for FAW resistance, from 0.694 to 0.714 for the 37% RBTS and from 0.843 to 0.844 for the 85% RBTS; these were at least two-fold those from BLUEs. For PBTS, FAW resistance PAs were generally higher than those for RBTS, except for one dataset. GP models generally showed similar PAs across individual traits, whilst the TS design was determinant: a positive correlation (R = 0.92***) between TS size and PA was observed for RBTS, whereas for PBTS it was negative (R = -0.44**). This study pioneers the use of GS for maize resistance to insect pests in sub-Saharan Africa.
ARTICLE | doi:10.20944/preprints201907.0275.v1
Subject: Earth Sciences, Other Keywords: accuracy assessment; change analysis; change detection analysis; environmental change; GIS and remote sensing; Jarmet and other wetlands change; LULC change; population growth
Online: 24 July 2019 (12:04:29 CEST)
Wetlands are crucial natural resources. They provide invaluable biodiversity resources, aid in water quality improvement, support groundwater recharge, help moderate climate change and support flood control. The environment, in turn, is where we live and something we are familiar with in our day-to-day lives. Geographic Information Systems (GIS), remote sensing and the Global Positioning System (GPS) are useful tools for wetland and environmental change analysis and for improving classification accuracy. This study investigates population and environmental change in the Jarmet wetland and its surrounding area over the period 1972 to 2015. The purpose of the study was to show land use/land cover (LULC) change of the Jarmet wetland and its surrounding environment over the years as a response to population growth. For this purpose, multi-temporal satellite imagery (Landsat MSS 1972, TM 1986, ETM+ 2000, 2005 and 2015, and SRTM 2000) was obtained and used for LULC change analysis, elevation analysis and change detection analysis. ERDAS Imagine 2015, ArcGIS 10.5.1, Global Mapper 11, ENVI 5.0 and DNR Garmin software were used to process the image data and to perform the accuracy assessment. The LULC results showed spatial reductions in wetland, forest, shrubland and grassland over the 43-year period (1972-2015) of 1,722.8 ha, 296.2 ha, 1,718.7 ha and 661.9 ha, respectively, due to increases in farmland and plantation area in response to overpopulation, weak environmental policy implementation and unchecked natural resource degradation. The accuracy assessment performed on the most recent satellite image showed an overall accuracy of 84.06% with a kappa index of 75.19%, indicating a reliable classification. Finally, the study recommends stricter natural resource conservation laws, halting the illegal expansion of farmland, educating society about the value of natural resources (especially wetlands), and creating sources of income for society other than farming.
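The overall accuracy and kappa index reported above come from the confusion matrix of the classified image against reference samples; a minimal sketch with a toy matrix (not the study's data):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                 # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2   # chance agreement from marginals
    return po, (po - pe) / (1 - pe)

# Toy 3-class LULC confusion matrix
cm = [[50, 3, 2], [4, 40, 6], [1, 5, 45]]
print(overall_accuracy_and_kappa(cm))
```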
ARTICLE | doi:10.20944/preprints202109.0034.v3
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Artificial intelligence; CMAPSS; consistency and local accuracy; CUSUM chart; deep learning; prognostic and health management; RMSE; sensing and data extraction; SHAP; Uncertainty; XAI
Online: 12 January 2022 (10:22:47 CET)
Mistrust, amplified by numerous artificial intelligence (AI) related incidents, has caused the energy and industrial sectors to be amongst the slowest adopters of AI methods. Central to this issue is the black-box problem of AI, which impedes investment and is fast becoming a legal hazard for users. Explainable AI (XAI) is a recent paradigm for tackling this challenge. Being the backbone of industry, the prognostics and health management (PHM) domain has recently been introduced to XAI. However, many deficiencies, particularly the lack of explanation assessment methods and uncertainty quantification, plague this young field. In this paper, we elaborate a framework for explainable anomaly detection and failure prognostics, employing a Bayesian deep learning model to generate local and global explanations for the PHM tasks. An uncertainty measure of the Bayesian model is used as a marker for anomalies, expanding the scope of the prognostic explanation to include the model's confidence. The global explanation is also used to improve prognostic performance, an aspect neglected in the handful of PHM-XAI publications. The quality of the explanation is finally examined using the local accuracy and consistency properties. The method is tested on real-world gas turbine anomalies and on synthetic turbofan failure-prediction data. Seven of the eight tested anomalies were successfully identified. Additionally, the prognostic outcome showed a 19% improvement in statistical terms and achieved the highest prognostic score amongst the best published results on the topic.
ARTICLE | doi:10.20944/preprints202107.0200.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: image quality assessment; image quality metrics; NR-IQAs; D-IQA; OCR accuracy; OCR prediction; OCR improvements; visual aids; visually impaired; reading aids; document images; text-based images
Online: 8 July 2021 (13:21:49 CEST)
For Visually Impaired People (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in Optical Character Recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera to text, which can then be read aloud. However, all of these reading aids suffer from a key issue: the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function, no small task for VIPs. In this work, a Sound-Emitting Document Image Quality Assessment metric (SEDIQA) is proposed, which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also tests OCR performance against image degradations to identify the most significant contributors to accuracy reduction. The proposed No-Reference Image Quality Assessor (NR-IQA) is validated alongside established NR-IQAs, and the work includes insights into the performance of these NR-IQAs on document images.
ARTICLE | doi:10.3390/sci2030071
Subject: Keywords: telescopes; lightweight telescope mirrors; adaptive optics; better resolution; increased accuracy; more bandwidth; cluster of satellites; innovative platform; more capabilities into smaller packages; far-shorter time from click to customer
Online: 9 September 2020 (00:00:00 CEST)
The use of Light Amplification by Stimulated Emission of Radiation (i.e., LASERs or lasers) by the U.S. Department of Defense is not new and includes laser weapons guidance, laser-aided measurements, and even lasers as weapons (e.g., the Airborne Laser). The use of lasers in support of telecommunications is also not new: laser light in fiber optics shattered assumptions about communications bandwidth and throughput. Even the use of lasers in space is no longer new; lasers are being used for satellite-to-satellite crosslinking. Laser communication can transmit orders of magnitude more data using orders of magnitude less power, and can do so with minimal risk of exposure of the sending and receiving terminals. What is new is using lasers as the uplink and downlink between the terrestrial segment and the space segment of satellite systems. Moreover, the use of lasers to transmit and receive data between moving terrestrial platforms (e.g., ships at sea, airplanes in flight) and geosynchronous satellites is burgeoning. This manuscript examines the technological maturation of employing lasers as the signal carrier for satellite communications linking terrestrial and space systems. Its purpose is to develop key performance parameters (KPPs) to inform U.S. Department of Defense initial capabilities documents (ICDs) for near-future satellite acquisition and development. By appreciating the history and technological challenges of employing lasers rather than traditional radio-frequency sources as the satellite uplink and downlink signal carrier, the manuscript recommends ways for the U.S. Department of Defense to employ lasers to transmit and receive high-bandwidth, large-throughput data from moving platforms that need to retain low probabilities of detection, intercept, and exploitation (e.g., a carrier battle group transiting to a hostile area of operations, or an unmanned aerial vehicle collecting over adversary areas). The manuscript also aims to identify commercial-sector early-adopter fields and those fields likely to adapt laser employment for transmission and receipt.
ARTICLE | doi:10.20944/preprints202110.0248.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Posidonia oceanica (PO); LAI & density; PO health & Pergent model; sea truth sampling; Earth Observation; HR satellite multispectral/hyperspectral sensors; atmospheric correction; coastal monitoring; mapping shallow waters habitat seabed; Calibration/validation & training/test; Classification & regression Machine Learning; Model Performance & thematic Accuracy; Sentinel 2 MSI multispectral & PRISMA hyperspectral; ISWEC(Inertial Sea Wave Energy Converter)
Online: 18 October 2021 (14:41:35 CEST)
The Mediterranean basin is a hot spot of climate change, where Posidonia oceanica (L.) Delile (PO) and other seagrasses are under stress due to its effects on marine habitats and the rising influence of anthropogenic activities (tourism, fishery). The PO and seabed ecosystems in the coastal environments of Pantelleria and Lampedusa suffer additional growing impacts from tourism, in synergy with specific stress factors due to increasing vessel traffic for supplying potable water and fossil fuels for electrical power generation. Earth Observation (EO) data provided by last-generation high-resolution (HR) multi/hyperspectral operative satellite sensors (i.e. Sentinel 2 MSI and PRISMA) have been successfully tested, using innovative calibration and sea-truth collection methods, for monitoring and mapping PO meadows under stress in the coastal waters of these islands, located in the Sicily Channel, to better support the sustainable management of these vulnerable ecosystems. The area of interest in Pantelleria was where the first prototype of the Italian Inertial Sea Wave Energy Converter (ISWEC) for renewable energy production was installed in 2015 and where sea-truth campaigns on the PO meadows were conducted. The PO of the Lampedusa coastal areas, impacted by the ship traffic linked to the factors above and by the tropicalization effects of Italy's southernmost climate-change transition zone, was mapped through a multi/hyperspectral EO-based approach, using training/testing data provided by previously acquired side-scan sonar surveys. Several advanced machine learning algorithms (MLA) were successfully evaluated with different supervised regression/classification models to map seabed and PO meadow classes and the related Leaf Area Index (LAI) distributions in the areas of interest, using multi/hyperspectral data atmospherically corrected via different advanced approaches.