ARTICLE | doi:10.20944/preprints202012.0721.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Remote sensing; Global discrete grid; Accuracy evaluation; Hexagon grid
Online: 29 December 2020 (09:19:49 CET)
With the rapid development of earth observation, satellite navigation, mobile communication and other technologies, the volume of spatial data we acquire and accumulate keeps growing by orders of magnitude, placing higher demands on the storage and application of spatial data. Under these circumstances, a new form of spatial data organization has emerged: the global discrete grid. This form of data management supports the efficient storage and application of large-scale global spatial data and is a digital, multi-resolution geo-reference model that helps establish a new model of data association and fusion. It is expected to make up for the shortcomings of current spatial data organization, processing and application. Grid systems can be classified by their division form into global discrete grids with equal latitude and longitude, global discrete grids with variable latitude and longitude, and global discrete grids based on regular polyhedrons. However, there is as yet no accuracy evaluation index system for remote sensing images expressed on a global discrete grid. This paper is dedicated to finding a suitable way to express remote sensing data on discrete grids and to establishing an accuracy evaluation system for modeling remote sensing data on hexagonal grids. The results show that this accuracy evaluation method can evaluate and analyze hexagonal-grid-based remote sensing data at multiple levels, and that the comprehensive similarity coefficient between the images before and after conversion is greater than 98%, which further demonstrates the usability of hexagonal-grid-based remote sensing data. Among the three sampling methods, the image obtained by nearest-neighbor interpolation has the highest correlation with the original image.
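A minimal, hedged sketch (not the authors' code) of the two operations mentioned above: nearest-neighbour resampling of a raster onto arbitrary sample positions (here a toy offset-row lattice standing in for hexagon-cell centres) and a simple correlation-based similarity coefficient between the original and resampled data. All names and values are illustrative assumptions.

```python
# Hedged sketch: nearest-neighbour resampling onto arbitrary sample points and a
# correlation-based similarity coefficient; the hexagon-centre lattice is a toy stand-in.
import numpy as np

def nearest_resample(image, rows, cols):
    """Sample `image` at fractional (row, col) positions by nearest neighbour."""
    r = np.clip(np.rint(rows).astype(int), 0, image.shape[0] - 1)
    c = np.clip(np.rint(cols).astype(int), 0, image.shape[1] - 1)
    return image[r, c]

def similarity_coefficient(a, b):
    """Pearson-style similarity between two equally sized samples (assumed metric)."""
    return np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 255, size=(100, 100)).astype(float)
    # toy "hexagonal" lattice: odd rows offset by half a column spacing
    rr, cc = np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0), indexing="ij")
    cc = cc + 0.5 * (rr.astype(int) % 2)
    sampled = nearest_resample(img, rr, cc)
    print("similarity:", similarity_coefficient(img, sampled))
```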
ARTICLE | doi:10.20944/preprints202111.0407.v1
Subject: Medicine And Pharmacology, Gastroenterology And Hepatology Keywords: HpSA; H. pylori; diagnostic values; sensitivity; specificity; accuracy; PPV; NPV
Online: 22 November 2021 (14:26:56 CET)
Helicobacter pylori is the most common human gastric infection. The H. pylori stool antigen lateral flow immunochromatography assay (HpSA-LFIA) is considered one of the most cost-effective and rapid non-invasive assays (active tests). Evaluation of this test is crucial to ensure its accuracy and utility. This study aimed to evaluate the polyclonal antibody-based HpSA-LFIA in comparison with a monoclonal antibody-based ELISA kit. Methodology: Stool samples were collected from 200 gastric patients for HpSA-LFIA and semi-quantitative HpSA-ELISA. Statistical analysis of the diagnostic values was performed using MedCalc software. Chi-square tests were used to determine the effects of gender and age. Results: HpSA-LFIA achieved promising sensitivity (93.75%) and NPV (98.00%). However, it had poor specificity, PPV, and accuracy (59.76%, 31.25%, and 65.31%, respectively). LR+ and LR- were 2.33 and 0.1, respectively. Gender had no significant effect on the diagnostic parameters of HpSA-LFIA. Sensitivity did not differ meaningfully between age groups; however, specificity was significantly higher in patients over 45 years. Conclusion: HpSA-LFIA is not accurate enough to be the sole test for diagnosis and needs other confirmatory tests in case of positive results.
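As a quick illustration of how these diagnostic values follow from a 2x2 contingency table, the following hedged Python sketch computes sensitivity, specificity, PPV, NPV, accuracy and likelihood ratios. The counts are reconstructed so as to be consistent with the reported percentages; they are not taken from the paper.

```python
# Hedged illustration: diagnostic metrics from a 2x2 table. The counts below are
# hypothetical, chosen only to reproduce values close to those reported for HpSA-LFIA.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv  = tp / (tp + fp)
    npv  = tn / (tn + fn)
    acc  = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1 - spec)          # positive likelihood ratio
    lr_neg = (1 - sens) / spec          # negative likelihood ratio
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                accuracy=acc, lr_plus=lr_pos, lr_minus=lr_neg)

print(diagnostic_metrics(tp=30, fp=66, fn=2, tn=98))  # illustrative counts only
```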
ARTICLE | doi:10.20944/preprints202311.1068.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: PRPD sensor; MV-class bushing; accuracy class; phase error; partial discharges
Online: 16 November 2023 (09:29:15 CET)
This paper proposes a novel phase-resolved partial discharge (PRPD) sensor embedded in an MV-class bushing for high-accuracy insulation analysis. The sensor was designed, fabricated, and evaluated to detect partial discharge (PD) pulses that are phase-synchronized with the applied primary HV signals. The prototype PRPD sensor was composed of a flexible printed circuit board (PCB) with dual sensing electrodes, using the capacitive voltage divider (CVD) principle for voltage measurement and the D-dot principle for PD detection, together with a signal transducer with passive elements. A PD simulator was prepared to emulate a typical PD defect, i.e., a metal protrusion. The voltage measurement precision of the prototype PRPD sensor satisfied accuracy class 0.2 specified in IEC 61869-11, as the maximum corrected voltage error ratios and corrected phase errors at 80%, 100%, and 120% of the rated voltage (13.2 kV) were less than 0.2% and 10 minutes, respectively. In addition, the prototype PRPD sensor showed good linearity and high sensitivity for PD detection compared with a conventional electrical detection method. According to the performance evaluation tests, the prototype PRPD sensor embedded in the MV-class bushing can measure PRPD patterns phase-synchronized with the primary voltage without any additional synchronization equipment or system. Therefore, the prototype PRPD sensor holds potential as a substitute for conventional commercial PD sensors. Consequently, this advancement could enhance power system monitoring and maintenance, contributing to the digitalization and minimization of power apparatus.
ARTICLE | doi:10.20944/preprints202201.0352.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Per-pixel classification confidence; spatial pattern; image classification; accuracy assessment; interpolation method
Online: 24 January 2022 (11:53:46 CET)
Obtaining classification confidence at the pixel level is a challenging task in accuracy assessment for remote sensing image classification. Among the various methods for estimating per-pixel classification confidence, interpolation-based methods have drawn special attention in the literature. Even though they have been widely recognized, their usefulness has not been rigorously evaluated. This paper conducts a comprehensive evaluation of three interpolation-based methods: the local error matrix method, the bootstrap method, and the geostatistical method. We applied each of the three methods to three representative datasets with different spatial resolutions, spectral bands, and numbers of classes. We then derived the estimated and true classification confidence and compared the results with each other using both exploratory data analysis (bi-histogram) and statistical analysis (Willmott's d and binned classification quality). The results indicate that the three interpolation methods provide some interesting insights into various aspects of estimating per-pixel classification confidence. Unfortunately, interpolation assumes that classification confidence varies smoothly across space, which is usually not true in practice. In other words, interpolation-based methods have limited practical use.
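For reference, Willmott's index of agreement d mentioned above can be computed as in the following minimal sketch (not the authors' code); the confidence values are invented examples.

```python
# Willmott's index of agreement d between estimated and "true" per-pixel confidence.
import numpy as np

def willmott_d(predicted, observed):
    p = np.asarray(predicted, float).ravel()
    o = np.asarray(observed, float).ravel()
    o_bar = o.mean()
    num = np.sum((p - o) ** 2)
    den = np.sum((np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)
    return 1.0 - num / den

# toy example: estimated vs. true confidence for five pixels (illustrative values)
print(willmott_d([0.9, 0.7, 0.8, 0.6, 0.95], [0.85, 0.75, 0.7, 0.65, 0.9]))
```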
ARTICLE | doi:10.20944/preprints202309.1796.v2
Subject: Medicine And Pharmacology, Pathology And Pathobiology Keywords: telecytology; ROSE; telepathology; EBUS; lung cytology; endoscopic diagnostics; whole-slide imaging; diagnostic accuracy; cytopathology
Online: 18 October 2023 (04:23:27 CEST)
Background: This prospective study assesses the use of Rapid Remote Online Cytological Evaluation for diagnosing endoscopically obtained biopsies. It focuses on its effectiveness in identifying benign and malignant conditions using digital image processing. Methods: The study was conducted between April 2021 and September 2022 and involved a total of 314 Rapid Remote Online Cytological Evaluations (17 brushes, 143 fine-needle aspirations and 154 imprint cytologies) performed on 239 patients at the LungenClinic Grosshansdorf. During on-site evaluation via telecytology, the time requirement was determined and the findings were compared with the cyto-/histological and final diagnoses. Results: By means of Rapid Remote Online Evaluation, 86 benign, 190 malignant and 38 unclear cytological findings were recorded (mean assessment time 100 s, range 11-370 s). In 27 of the 38 cases with an unclear diagnosis, the final findings were malignant tumours and only 6 were benign changes; the diagnosis of another five of these 38 cases remained unclear. Excluding these 38 findings, Rapid Remote Online Cytology achieved a sensitivity of 78.6%, a specificity of 99.5% and a correct classification rate of 93.1% with regard to the final diagnosis of all cases. As expected, an increase in the sensitivity for the cytological detection of malignant tumours (76.1% vs. 92.5%) was found especially in fine-needle aspirations. Conclusions: Rapid remote online analysis allows fast quantitative and qualitative evaluation of clinically obtained cytological specimens. With a correct classification rate of more than 93%, sampling deficiencies can be corrected promptly and diagnostic and therapeutic approaches can be derived.
ARTICLE | doi:10.20944/preprints201906.0036.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: digital elevation models; multi-source fusion; multi-scale fusion; global evaluation; accuracy validation.
Online: 5 June 2019 (10:26:30 CEST)
The quality of digital elevation models (DEMs) is inevitably affected by the limitations of the imaging modes and the generation methods. One effective way to address this problem is to merge the available datasets through data fusion. In this paper, a fusion-based global DEM dataset (82°S-82°N) is introduced, which we refer to as GSDEM-30. This is a 30-m DEM reconstructed mainly from the unfilled SRTM1, AW3D30, and ASTER GDEM v2 datasets by combining multi-source and multi-scale fusion techniques. A comprehensive evaluation of the GSDEM-30 data, as well as the 30-m ASTER GDEM v2 and AW3D30 DEM, is presented. Global ICESat GLAS data and the local National Elevation Dataset (NED) were used as the reference for the vertical accuracy validation, while GlobeLand30 was introduced for the landscape analysis. Furthermore, we employed a maximum-slope approach to detect potential artefacts in the DEMs. The results show that the GDEM data are seriously affected by noise and artefacts. With the advantage of the multiple datasets and the refined post-processing, GSDEM-30 is contaminated by fewer anomalies than both ASTER GDEM and AW3D30. The fusion techniques used can also be applied to the reconstruction of other fused DEM datasets.
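The maximum-slope screen for artefacts can be illustrated with a short hedged sketch; the 30 m cell size, the slope threshold and the synthetic terrain are assumptions for demonstration, not values from the paper.

```python
# Hedged sketch of a maximum-slope screen for DEM artefacts: cells whose local slope
# exceeds a plausibility threshold are flagged as potential anomalies.
import numpy as np

def max_slope(dem, cell_size=30.0):
    """Slope magnitude (rise/run) per cell from central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.hypot(dz_dx, dz_dy)

def flag_artefacts(dem, cell_size=30.0, slope_threshold=2.0):
    """Boolean mask of cells exceeding the slope threshold (threshold is illustrative)."""
    return max_slope(dem, cell_size) > slope_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dem = np.cumsum(rng.normal(0, 1, (50, 50)), axis=0)  # synthetic terrain
    dem[25, 25] += 500.0                                  # injected spike artefact
    print("flagged cells:", int(flag_artefacts(dem).sum()))
```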
ARTICLE | doi:10.20944/preprints201812.0067.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: built-up area; classification; Landsat 8- OLI; feature engineering; feature learning; CNN; accuracy evaluation
Online: 5 December 2018 (12:06:34 CET)
Detailed built-up area information is valuable for mapping complex urban environments. Although a large number of classification algorithms for built-up areas have been developed, they are rarely tested from the perspective of feature engineering and feature learning. We therefore conducted an investigation providing a full test of OLI imagery for 15-m resolution built-up area classification in 2015 in Beijing, China. Training a classifier requires many sample points, and we propose a method based on the ESA's 38-meter global built-up area data of 2014, OpenStreetMap and MOD13Q1-NDVI to achieve rapid and automatic generation of a large number of sample points. Our aim is to examine the influence of single pixels versus image patches under traditional feature engineering and modern feature learning strategies. In feature engineering, we consider spectra, shape and texture as the input features, and SVM, random forest (RF) and AdaBoost as the classification algorithms. In feature learning, a convolutional neural network (CNN) is used as the classification algorithm. In total, 26 built-up land cover maps were produced. The experimental results show that: (1) the approaches based on feature learning are generally better than those based on feature engineering in terms of classification accuracy, and the performance of ensemble classifiers, e.g. RF, is comparable to that of the CNN; the two-dimensional CNN and the 7-neighborhood RF have the highest classification accuracy of nearly 91%; (2) overall, the classification effect and accuracy based on image patches are better than those based on single pixels. Features that highlight the information of the target category (for example, PanTex and EMBI) can help improve classification accuracy.
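The pixel-versus-patch comparison can be illustrated with a small hedged sketch (not the authors' pipeline); the synthetic multi-band image, the random labels and the 7x7 patch size are assumptions for demonstration only.

```python
# Hedged sketch: per-pixel features vs. flattened 7x7 patch features in a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
bands, h, w, half = 6, 120, 120, 3                 # 6-band toy image, 7x7 patches
image = rng.normal(size=(bands, h, w))
labels = (image[0] + rng.normal(scale=0.5, size=(h, w)) > 0).astype(int)

rows = rng.integers(half, h - half, 2000)
cols = rng.integers(half, w - half, 2000)
pixel_X = image[:, rows, cols].T                   # shape (n, bands)
patch_X = np.stack([image[:, r - half:r + half + 1, c - half:c + half + 1].ravel()
                    for r, c in zip(rows, cols)])  # shape (n, bands * 7 * 7)
y = labels[rows, cols]

for name, X in [("pixel", pixel_X), ("7x7 patch", patch_X)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print(name, "accuracy:", round(accuracy_score(yte, rf.predict(Xte)), 3))
```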
ARTICLE | doi:10.20944/preprints202308.1811.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: climate change; sensitivity analysis; evaluation accuracy; hazard; year return period; overall accuracy
Online: 28 August 2023 (03:43:01 CEST)
Flood inundation causes socioeconomic losses for coastal tourism under climate extremes and is progressively attracting global attention. Mapping, evaluating, and predicting flood inundation risk (FIR) is significant for coastal tourism. This study develops a spatial, tourism-aimed framework integrating weighted k-Nearest Neighbors (WkNN), Geographic Information Systems, and flood-related spatial environmental criteria such as precipitation, elevation, soil, and drainage systems. These model inputs were standardized, distance-weighted, and integrated into WkNN to infer the regional probability and distribution of FIR. Zhejiang province, China, was selected as a case study. The resulting map depicts the likelihood of the criteria in various risk categories and was validated against the historical Maximum Inundation Extent (MIE) extracted from the World Environment Situation Room. The result indicates that 80.59% of the WkNN results reasonably confirm the MIE. Precipitation and elevation make a negative contribution to high-medium risk, and drainage systems positively alleviate the regional stress of FIR. The results can help stakeholders devise suitable strategies to protect coastal tourism, and also show that WkNN is superior to kNN in FIR assessment. The framework provides a productive way to yield a reliable assessment of FIR and can also be extended to other risk-related environmental studies under climate change.
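The distance-weighted kNN idea can be sketched as follows; the synthetic criteria (precipitation, elevation, drainage density) and the toy risk rule are assumptions for illustration, not the study's data or weights.

```python
# Hedged sketch: standardized criteria classified with distance-weighted kNN (WkNN)
# compared against plain (uniformly weighted) kNN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # columns: precipitation, elevation, drainage
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)   # 1 = higher risk (toy rule)

X_std = StandardScaler().fit_transform(X)      # standardize criteria, as in the framework
models = [("WkNN", KNeighborsClassifier(n_neighbors=7, weights="distance")),
          ("kNN",  KNeighborsClassifier(n_neighbors=7, weights="uniform"))]
for name, model in models:
    model.fit(X_std[:400], y[:400])
    print(name, "agreement with held-out labels:", model.score(X_std[400:], y[400:]))
```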
REVIEW | doi:10.20944/preprints202308.0992.v1
Subject: Medicine And Pharmacology, Oncology And Oncogenics Keywords: artificial intelligence; radiotherapy; workflow; accuracy
Online: 14 August 2023 (08:38:02 CEST)
In recent years, the field of radiotherapy has witnessed remarkable advancements with the integration of artificial intelligence (AI) technologies into clinical practice. Traditionally, radiotherapy treatment planning has been a labor-intensive process, requiring meticulous manual segmentation and optimization. With the advent of AI, particularly deep learning algorithms, the accuracy and efficiency of target delineation and organ-at-risk segmentation have significantly improved. AI-driven algorithms analyze voluminous medical imaging data, enabling rapid and precise contouring, thus expediting the planning phase and reducing inter-observer variability. Furthermore, AI's role extends to treatment plan optimization, where it intelligently explores vast parameter spaces to generate optimal plans tailored to individual patients. This not only saves dosimetrists' time but also enhances plan quality by accounting for complex anatomical variations and personalized clinical goals. In the realm of treatment delivery, AI-powered real-time image guidance enhances the accuracy of patient positioning, ensuring precise radiation targeting. Adaptive radiotherapy, enabled by AI, allows on-the-fly plan modifications in response to anatomical changes, significantly improving treatment accuracy in scenarios like tumor shrinkage or weight loss. Beyond planning and delivery, AI algorithms contribute to outcome prediction by analyzing historical patient data and treatment responses. This predictive capability aids clinicians in making informed decisions and refining treatment strategies for better prognoses. Despite the revolutionary potential, challenges remain in seamlessly integrating AI into clinical workflows. Ethical considerations, data privacy, and algorithm interpretability necessitate cautious implementation. Additionally, fostering interdisciplinary collaboration between AI experts and radiation oncologists is imperative to harness the technology's full potential. This paper explores the impact of AI in four key areas of radiotherapy: automated segmentation, dosimetric and machine quality assurance, adaptive radiation therapy, and clinical outcome prediction.
ARTICLE | doi:10.20944/preprints201711.0006.v1
Subject: Social Sciences, Behavior Sciences Keywords: hypothetics; enothetics; reliability; validity; accuracy
Online: 1 November 2017 (04:56:55 CET)
The purpose of this article is to assess the reliability and accuracy (validity) of hypothetical binary tasting judgments in an enological framework. The heuristic model that is utilized allows for the control of a wide array of variables that would be exceedingly difficult to fully control in a typical enological investigation. It is shown that results judged to be enologically significant are uniformly judged to be statistically significant as well, whether the level of wine taster agreement is set at 70% (Fair), 80% (Good), or 90% (Excellent). However, in a number of instances, results that were statistically significant were not enologically significant by standards that are widely accepted and utilized. This finding is consistent with the biostatistical fact that, given a sufficiently large sample size, even the most trivial of results will prove to be statistically significant. Consistent with expectations, multiple patterns of 80% (Good) and 90% (Excellent) agreement tended to be both statistically and enologically significant.
ARTICLE | doi:10.20944/preprints201704.0159.v1
Subject: Environmental And Earth Sciences, Space And Planetary Science Keywords: YG-13A; geometric accuracy; validation
Online: 25 April 2017 (11:19:25 CEST)
YG-13A represents the highest level of Chinese SAR satellites to date. In this paper, we report on experiments conducted to improve and validate the ranging accuracy of YG-13A. We analyze the error sources in the YG-13A ranging system, such as atmospheric path delay and transceiver channel delay. A real-time atmospheric delay correction model is established to calculate the atmospheric path delay, considering both the tropospheric and ionospheric delays. Six corner reflectors (CRs) were set up to ensure the accuracy of the validation methods. Pixel location accuracies of up to 0.479 m standard deviation can be achieved after a complete calibration. We further demonstrate that the adjustment of the CRs can cause a marginal loss of ranging precision. After eliminating this error, the ranging accuracy is improved to 0.237 m. YG-13A uses a single-frequency GPS receiver with a nominal orbital accuracy of 0.3 m, which is the biggest factor restricting its ranging accuracy. Our results show that the ranging accuracy of YG-13A can reach the decimeter level, which is lower than the centimeter-level accuracy of TerraSAR-X, which carries a dual-frequency GPS. YG-13A offers great convenience in terms of access to control points and target location that does not depend on ground equipment.
ARTICLE | doi:10.20944/preprints202309.1037.v1
Subject: Medicine And Pharmacology, Clinical Medicine Keywords: respiratory syncytial virus; influenza A virus; influenza B virus; SARS-CoV-2; rapid RT-PCR; diagnostic accuracy; point-of-care testing
Online: 15 September 2023 (05:38:13 CEST)
SARS-CoV-2, influenza A/B virus (IAV/IBV) and respiratory syncytial virus (RSV) are among the common viruses causing acute respiratory infections. Clinical diagnosis to differentiate these viruses is challenging due to similar clinical presentations, thus laboratory-based real-time RT-PCR is the gold standard for diagnosis. This study aimed to evaluate the diagnostic performance of STANDARD M10 Flu/RSV/SARS-CoV-2 (SD Biosensor Inc., Seoul, Korea). This retrospective study was conducted with archived respiratory samples positive and negative for SARS-CoV-2, IAV, IBV or RSV. The evaluation involved a total of 322 respiratory samples comprising 215 positive samples (49 SARS-CoV-2, 48 IAV, 53 IBV, 65 RSV) and 107 negative samples. All samples were tested with both STANDARD M10 and either Xpert Xpress SARS-CoV-2 or Xpert Xpress Flu/RSV (Cepheid, USA). The sensitivity, specificity, positive predictive value and negative predictive value of STANDARD M10 were very similar to those of Xpert Xpress SARS-CoV-2 or Xpert Xpress Flu/RSV for each virus (98-100%). The overall agreement was 99.4%, with 99.1% agreement for positive samples and 100% agreement for negative samples. In conclusion, the STANDARD M10 point-of-care test is suitable for rapid simultaneous detection of SARS-CoV-2, IAV, IBV and RSV.
ARTICLE | doi:10.20944/preprints202311.0436.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: sampling error; improving accuracy; data acquisition
Online: 7 November 2023 (11:17:15 CET)
Although data acquisition is a very common technique, several aspects are not always considered, such as the synchronization of the acquired measurements and the evaluation of the errors that arise from imperfect synchronization. This paper addresses these aspects through the mathematical determination of the necessary correction and the implementation of software meant to evaluate the performance of the acquisition system. As an example, a three-phase acquisition system has been developed to monitor the currents and voltages on the three phases. Other quantities, such as power and phase, are also calculated. The components on each phase do not have to be fully characterized, because a whole-system calibration can be performed in the first stage; calibration consists of finding the weighting coefficients for each quantity. The implemented solution for three-phase data acquisition starts from the hypothesis of a sampling frequency that satisfies the Shannon theorem, so that the interval between two samples is small enough to assume a linear evolution of each measured quantity between consecutive sampling instants. The errors caused by the different instants at which the samples of different channels are acquired are analyzed and reduced to a minimum.
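A hedged sketch of the kind of correction described: channels sampled at slightly different instants are re-aligned to a common time base by linear interpolation between consecutive samples, which is valid under the stated assumption that the signal evolves linearly between samples. The sampling rate, skew and 50 Hz test signal are illustrative, not values from the paper.

```python
# Hedged sketch: correcting inter-channel sampling skew by linear interpolation.
import numpy as np

fs = 10_000.0                       # sampling frequency per channel [Hz], illustrative
skew = 20e-6                        # acquisition delay of channel 2 w.r.t. channel 1 [s]
t = np.arange(0, 0.04, 1 / fs)      # nominal sample instants of channel 1

f = 50.0                            # 50 Hz mains signal
u1 = np.sin(2 * np.pi * f * t)              # channel 1 sampled at t
u2 = np.sin(2 * np.pi * f * (t + skew))     # channel 2 actually sampled at t + skew

# re-align channel 2 onto channel 1's time base by linear interpolation
u2_corrected = np.interp(t, t + skew, u2)

print("max error before correction:", np.max(np.abs(u2 - u1)))
print("max error after correction :", np.max(np.abs(u2_corrected - u1)))
```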
ARTICLE | doi:10.20944/preprints202305.1554.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: microplastics; beach; measurement; simple method; accuracy
Online: 23 May 2023 (03:56:22 CEST)
Environmental pollution by microplastics (MPs) has become a growing concern, and measures are being taken in many countries. Long-term and extensive MP investigations involving not only professional researchers but also the citizenry are needed to understand the pollution situation and to confirm the decreasing trend of MP pollution as a result of pollution control measures. In this study, the author evaluated the accuracy of a simple method of investigating MPs on sandy beaches that can be conducted even by high school students. In a land survey using such simple tools as a tape measure and cardboard, 70% of the deviations were within approximately 20 cm when multiple surveys over a distance of approximately 20 m were performed. Even without heavy liquid, 89% of MPs could be recovered using only seawater. An investigation of MP content by sampling 0.5 cm of the surface layer of sand could explain more than half of the MP content when the sand was sampled to a depth of approximately 50 cm below the surface layer. A method in which the recovered MPs are not visually sorted, but in which the matter floating after boiling is counted as MPs, is acceptable; if there was no concern about pumice contamination, the overestimation was within approximately 1.5 times. Simple laboratory equipment such as buckets, sieves, seawater, hot plates, dryers, and electronic balances could achieve lower limits of quantification of MPs of 13 mg-MPs/m2-sand and 2 mg-MPs/kg-sand.
ARTICLE | doi:10.20944/preprints202108.0140.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: K-Means; Mean-Shift; Performance; Accuracy
Online: 5 August 2021 (11:00:32 CEST)
Clustering, also known as cluster analysis, is a learning problem that takes place without any human supervision. The technique has often been used efficiently in data analysis to observe and identify interesting, useful, or desired patterns in data. Clustering performs a structured division of the data into similar objects based on the characteristics it identifies. This process results in the formation of groups, and each group that is formed is called a cluster. A single cluster consists of objects from the data that are similar to the other objects in the same cluster and differ from the objects that belong to other clusters. Clustering is very significant in many aspects of data analysis, as it determines and presents the intrinsic grouping of objects, based on their attributes, in a batch of unlabeled raw data. There is no single textbook criterion of quality for cluster analysis, because the process is highly dependent on, and customizable to, each user's scenario and needs, and there is no outright best clustering algorithm. This paper is intended to compare and study two different clustering algorithms: k-means and mean-shift. These algorithms are compared according to the following factors: time complexity, training, prediction performance, and accuracy of the clustering.
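A small, hedged comparison sketch in the spirit of the paper: k-means versus mean-shift on synthetic data, reporting fit time, prediction time and a clustering-quality score (silhouette). The dataset, cluster count and bandwidth are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: timing and quality comparison of k-means and mean-shift with scikit-learn.
import time
import numpy as np
from sklearn.cluster import KMeans, MeanShift
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=2000, centers=4, cluster_std=1.0, random_state=0)

for name, model in [("k-means", KMeans(n_clusters=4, n_init=10, random_state=0)),
                    ("mean-shift", MeanShift(bandwidth=2.0))]:
    t0 = time.perf_counter()
    labels = model.fit_predict(X)          # training
    fit_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    model.predict(X)                       # prediction performance
    pred_time = time.perf_counter() - t0
    print(f"{name}: fit {fit_time:.3f}s, predict {pred_time:.3f}s, "
          f"silhouette {silhouette_score(X, labels):.3f}")
```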
ARTICLE | doi:10.20944/preprints201709.0139.v1
Subject: Social Sciences, Geography, Planning And Development Keywords: Accuracy; Uncertainties; Object-Based; Slums; Jakarta
Online: 27 September 2017 (16:45:25 CEST)
Object-Based Image Analysis (OBIA) has been successfully used to map slums. In general, the occurrence of uncertainties in producing geographic data is inevitable. However, most studies have concentrated solely on assessing the classification accuracy, neglecting the inherent uncertainties. Our research analyses the impact of uncertainties in measuring the accuracy of OBIA-based slum detection. We selected Jakarta as our case study area because of a national policy of slum eradication, which is causing rapid changes in slum areas. Our research comprises four parts: slum conceptualization, ruleset development, implementation, and accuracy and uncertainty measurements. Existential and extensional uncertainty arise when producing reference data. The comparison of manual expert delineations of slums with the OBIA slum classification results in four combinations: True Positive, False Positive, True Negative and False Negative. However, the higher the True Positive (which leads to better accuracy), the lower the certainty of the results. This demonstrates the impact of extensional uncertainties. Our study also demonstrates the role of non-observable indicators (i.e., land tenure) in assisting slum detection, particularly in areas where uncertainties exist. In conclusion, uncertainties increase when aiming to achieve a higher classification accuracy by matching manual delineation and OBIA classification.
ARTICLE | doi:10.20944/preprints202309.1045.v1
Subject: Medicine And Pharmacology, Surgery Keywords: surgical wound; dimensional measurement accuracy; surgical flap
Online: 15 September 2023 (05:18:23 CEST)
The accurate assessment of wound size is a critical step in advanced wound care management. This study aims to introduce and validate a Light Detection and Ranging (LiDAR) technique for measuring wound size. Twenty-eight wounds treated from December 2022 to April 2023 at the Chunganam National University Hospital were analyzed. All wounds were measured using three techniques: the conventional ruler method, the LiDAR technique, and ImageJ analysis. Correlation analysis, linear regression, and Bland-Altman plot analysis were conducted to validate the accuracy of the newly introduced method. The measurement results (mean ± standard deviation) obtained using the ruler method, the LiDAR technique, and ImageJ analysis were 112.99 ± 110.07 cm², 73.59 ± 72.97 cm², and 74.29 ± 72.15 cm², respectively. The Pearson correlation coefficient was higher for the LiDAR application (0.995) than for the conventional ruler method (mean difference: -5.0000 cm²), as was the degree of agreement (mean difference: 38.6933 cm²). Wound size measurement using LiDAR is a simple and reliable method that allows practitioners to conveniently assess wounds with a flattened and irregular shape. However, non-flattened wounds cannot be assessed owing to the technical limitations of LiDAR.
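The Bland-Altman agreement analysis used above can be computed as in this minimal sketch (not the study's code); the paired wound-area measurements are synthetic placeholders, not study data.

```python
# Hedged sketch: Bland-Altman bias and 95% limits of agreement between two measurement methods.
import numpy as np

def bland_altman(a, b):
    """Return mean difference (bias) and lower/upper 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

lidar  = np.array([12.1, 30.5, 75.0, 48.2, 110.3])   # cm^2, synthetic example values
imagej = np.array([12.8, 29.9, 76.4, 47.5, 112.0])   # cm^2, synthetic example values
print("bias, lower LoA, upper LoA:", bland_altman(lidar, imagej))
```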
REVIEW | doi:10.20944/preprints202305.2164.v1
Subject: Computer Science And Mathematics, Analysis Keywords: machine vision; pose measurement algorithms; accuracy; applications
Online: 31 May 2023 (03:33:34 CEST)
This review paper provides a comprehensive overview of machine vision pose measurement algorithms, focusing on state-of-the-art algorithms and their applications. The paper is structured as follows: the introduction in Section 1 provides a brief overview of the field of machine vision pose measurement; Section 2 describes the commonly used algorithms for machine vision pose measurement; Section 3 discusses the factors that affect the accuracy and reliability of these algorithms; Section 4 presents the applications of machine vision pose measurement in various fields, with specific examples of how it is used in each of them; finally, Section 5 summarizes the paper and provides future research directions. The paper highlights the need for more robust and accurate algorithms that can handle varying lighting conditions and occlusion, and suggests that the integration of machine learning techniques may improve the performance of machine vision pose measurement algorithms. Overall, this review provides a valuable resource for researchers and practitioners working in the field of computer vision.
ARTICLE | doi:10.20944/preprints202012.0240.v1
Subject: Chemistry And Materials Science, Biomaterials Keywords: intraoral scanner; orthodontic bracket; accuracy; precision; trueness
Online: 10 December 2020 (08:17:43 CET)
Accurate expression of the bracket prescription is important for successful orthodontic treatment. The aim of this study was to evaluate the accuracy of digital scan images of brackets produced by four different intraoral scanners (IOSs) in terms of the height, position, and angle of the bracket slot when scanning the surface of a dental model fitted with brackets made from different materials. Brackets made from stainless steel, polycrystalline alumina, composite, and composite with a stainless steel slot were considered, and they were scanned with four different IOSs (Primescan, Trios, CS3600 and i500). SEM images were used as references. Each bracket axis was set in the reference scan image, the axis was set identically by superimposing with the IOS image, and then only the brackets were segmented and analyzed. The difference between the manufacturer's nominal torque and the bracket slot base angle was 0.39° in SEM, 1.96° in Primescan, 2.04° in Trios, and 5.21° in CS3600 (p < 0.001). The parallelism, which is the difference between the upper and lower angles of the slot wall, was 0.55° in SEM, 7.55° in Primescan, 6.74° in Trios, 6.59° in CS3600, and 24.95° in i500 (p < 0.001). This study evaluated the accuracy of the bracket only, and it must be admitted that there is some error in recognizing slots through scanning in general.
BRIEF REPORT | doi:10.20944/preprints202005.0193.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: COVID-19; diagnostic accuracy; CT; RT-PCR
Online: 11 May 2020 (12:35:08 CEST)
Introduction: Clinicians have been struggling with the optimal diagnostic approach of patients with suspected COVID-19. We evaluated the added value of chest CT over RT-PCR alone. Methods: Consecutive adult patients with suspected COVID-19 presenting to the emergency department (Academic Medical Center, Amsterdam University Medical Centers, the Netherlands) from March 16th to April 16th were retrospectively included if they required hospital admission and underwent chest CT and RT-PCR testing for SARS-CoV-2 infection. The CO-RADS classification was used to assess the radiological probability of COVID-19, where a score of 1-2 was considered as negative, 3 as indeterminate, and 4-5 as positive. CT results were stratified by initial RT-PCR results. For patients with a negative RT-PCR but a positive CT, serology or multidisciplinary discussion after clinical follow-up constituted the final diagnosis. Results: 258 patients with suspected COVID-19 were admitted, of which 239 were included because they had both CT and RT-PCR testing upon admission. Overall, 112 patients (46.9%) had a positive initial RT-PCR, and 14 (5.9%) had a positive repeat RT-PCR. Of 127 patients with a negative or indeterminate initial RT-PCR, 38 (29.9% [95%CI 21.3-39.3%]) had a positive CT. Of these, 13 had a positive RT-PCR upon repeat testing, and 5 had positive serology. The remaining 20 patients were assessed in a multidisciplinary consensus meeting, and for 13 it was concluded that COVID-19 was ‘very likely’. Of 112 patients with a positive initial RT-PCR result, CT was positive in 104 (92.9% [95%CI 89.3-97.5%]). Conclusion: In a high-prevalence emergency department setting, chest CT showed high probability of COVID-19 (CO-RADS 4-5) in 29.9% of patients with a negative or indeterminate initial RT-PCR result. As the majority of these patients had proven or ‘very likely’ COVID-19 after follow-up, we believe that CT helps in the identification of patients who should be admitted in isolation.
ARTICLE | doi:10.20944/preprints202308.0619.v1
Subject: Medicine And Pharmacology, Orthopedics And Sports Medicine Keywords: accuracy; distance; angle; augmented reality (AR); orthopaedics; knee
Online: 8 August 2023 (11:18:09 CEST)
Background: Recent advances allow the use of Augmented Reality (AR) for many medical procedures. We perform different AR-assisted knee surgery techniques using optical surgical navigation with ArUco-type artificial marker sensors. Our study aimed to evaluate the system's accuracy using an in vitro protocol. We hypothesised that the system's error was equal to or less than 1 mm and 1° for distance and angular measurements, respectively. Methods: Our research was an in vitro laboratory study with a 316L steel model. We evaluated absolute reliability according to the Hopkins criteria with seven independent evaluators. Each observer measured the thirty palpation points and the trademarks to acquire direct angular measurements on three occasions separated by at least two weeks. Results: For distances, the system had a mean error of 1.203 mm and an uncertainty of 2.062; for angular values, it had a mean error of 0.778° and an uncertainty of 1.438. All intra-observer and inter-observer intraclass correlation coefficients were almost perfect or perfect. Conclusions: The mean error for the determination of distances was statistically larger than 1 mm (1.203 mm), but with a trivial effect size. The mean error for angular values was statistically smaller than 1°. Our results are similar to those published by other authors in accuracy analyses of AR systems.
ARTICLE | doi:10.20944/preprints202207.0395.v1
Subject: Business, Economics And Management, Business And Management Keywords: Waste Recycling System; Disaster Response; Network; Cognitive Accuracy
Online: 26 July 2022 (08:06:26 CEST)
Since the process of waste recycling generates dust and flammable gas during fragmentation, there is always a risk of fire, and fires do, in fact, frequently occur. However, research on disaster management at recycling facilities deals only with the problem of processing systems from a technical point of view and does not suggest concrete alternatives from a management point of view. Therefore, in this study, we analyzed the influence of the disaster response network of a waste recycling center at the organizational level, based on the concept of the cognitive accuracy of a network, when considering administrative aspects. Through this analysis, we confirmed that factors affecting the influence of the network exist, and that they differ between the entire network and the networks of different position levels. We suggest that this can be improved by deploying members who perform formal tasks at the center of the network so that all members can agree on a common approach.
ARTICLE | doi:10.20944/preprints202111.0217.v1
Subject: Medicine And Pharmacology, Endocrinology And Metabolism Keywords: Diabetes Technology; CGM; Accuracy; Type 1 Diabetes; Sustainability
Online: 12 November 2021 (11:58:57 CET)
The aim of this study was to evaluate the accuracy and usability of a novel continuous glucose monitoring (CGM) system designed for needle-free insertion and reduced environmental impact. We assessed the sensor performance of two GlucoMen® Day CGM systems worn simultaneously by eight participants with type 1 diabetes. Self-monitoring of blood glucose (SMBG) was performed regularly over 14 days at home. Participants underwent two standardized 5-hour meal challenges with frequent plasma glucose (PG) measurements using a laboratory reference instrument at the research center. When comparing CGM to PG, the overall mean absolute relative difference (MARD) was 9.7 [2.6-14.6]%. The overall MARD of CGM vs. SMBG was 13.1 [3.5-18.6]%. In the consensus error grid (CEG) analysis, 98% of both CGM/PG and CGM/SMBG pairs were in the clinically acceptable zones A and B. The analysis confirms that GlucoMen® Day CGM meets the clinical requirements for state-of-the-art CGM. The needle-free insertion technology is well tolerated by users and reduces medical waste compared with conventional CGM systems.
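For reference, the MARD statistic reported above is simply the mean of the absolute relative differences between paired CGM and reference readings, as in this minimal sketch; the readings below are invented examples, not study data.

```python
# Hedged sketch: mean absolute relative difference (MARD) between CGM and reference glucose.
import numpy as np

def mard(cgm, reference):
    cgm = np.asarray(cgm, float)
    reference = np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(cgm - reference) / reference)

cgm_readings = [95, 142, 180, 110, 72]        # mg/dL, illustrative
plasma_glucose = [100, 150, 170, 105, 80]     # mg/dL, illustrative
print(f"MARD = {mard(cgm_readings, plasma_glucose):.1f}%")
```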
ARTICLE | doi:10.20944/preprints202106.0738.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: time series; homogenization; ACMANT; observed data; data accuracy
Online: 30 June 2021 (13:08:39 CEST)
The removal of non-climatic biases, so-called inhomogeneities, from long climatic records requires sophisticated statistical methods. One principle is that the differences between a candidate series and its neighbour series are usually analysed, instead of the candidate series directly, in order to neutralize the possible impact of regionally common natural climate variation on the detection of inhomogeneities. In most homogenization methods, two main kinds of time series comparison are applied: composite reference series or pairwise comparisons. In composite reference series, the inhomogeneities of the neighbour series are attenuated by averaging the individual series, and the accuracy of homogenization can be improved by iteratively improving the composite reference series. By contrast, pairwise comparisons have the advantage that coincidental inhomogeneities affecting several station series in a similar way can be identified with higher certainty than with composite reference series. In addition, homogenization with pairwise comparisons tends to facilitate the most accurate regional trend estimations. A new time series comparison method is presented here, which combines the use of pairwise comparisons and composite reference series in a way that unifies their advantages. This time series comparison method is embedded into the ACMANT homogenization method and tested on large, commonly available monthly temperature test datasets.
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Drone; GNSS RTK; UAV; photogrammetry; precision; accuracy; elevation
Online: 11 March 2021 (11:49:25 CET)
Georeferencing using ground control points (GCPs) is the most common strategy in photogrammetric modeling using UAV-acquired imagery. However, with the increased availability of UAVs with onboard GNSS RTK, georeferencing without GCPs is a promising alternative, although systematic elevation error remains a problem of this technique. We aimed to analyze the reasons for this systematic error and propose strategies for its elimination. Multiple flights differing in flight altitude and image acquisition axis were performed at two real-world sites. A flight height of 100 m with a vertical (nadiral) image acquisition axis was considered primary, supplemented with flight altitudes of 75 m and 125 m with a vertical image acquisition axis and two flights at 100 m with oblique image acquisition axes (30° and 15°). Each of these flights was performed twice to produce a full double grid. Models were calculated from individual flights and their combinations. Models based on individual flights, or even on combinations, yielded systematic elevation errors of up to several decimeters. This error was linearly dependent on the deviation of the focal length from the reference value. A combination of two flights from the same altitude (with nadiral and oblique image acquisition) was capable of reducing the systematic elevation error to less than 0.03 m. This study is the first to demonstrate the linear dependence between the systematic elevation error of models based only on the onboard GNSS RTK data and the deviation in the determined internal orientation parameters (focal length). In addition, we have shown that a combination of two flights with different image acquisition axes can eliminate this systematic error even in real-world conditions and that georeferencing without GCPs is, therefore, a feasible alternative to the use of GCPs.
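The reported linear dependence can be checked with a simple fit, as in this hedged sketch; the focal-length deviations and elevation errors below are invented placeholders, not values from the study.

```python
# Hedged sketch: linear fit between focal-length deviation and systematic elevation error.
import numpy as np

focal_dev_px = np.array([-2.1, -1.0, 0.3, 1.2, 2.4])       # focal-length deviation [px], synthetic
elev_error_m = np.array([-0.21, -0.11, 0.02, 0.13, 0.25])  # systematic elevation error [m], synthetic

slope, intercept = np.polyfit(focal_dev_px, elev_error_m, 1)
r = np.corrcoef(focal_dev_px, elev_error_m)[0, 1]
print(f"elevation_error ≈ {slope:.3f} * focal_deviation + {intercept:.3f}  (r = {r:.3f})")
```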
ARTICLE | doi:10.20944/preprints201610.0038.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: Face Recognition; Intelligent Coupling Algorithm; Robustness; Accuracy; Speed
Online: 11 October 2016 (14:42:02 CEST)
The key steps of face recognition are digital image preprocessing, facial feature extraction and pattern recognition. Aiming at the current problems of slow speed and low recognition accuracy in face recognition, and starting from these three key steps, this article analyses the theories of the Fractional Differential Mask Operator (FDMO), Principal Component Analysis (PCA) and the Support Vector Machine (SVM), and designs an FDMO+PCA+SVM coupling algorithm for face recognition to improve its speed and accuracy. To realize the FDMO+PCA+SVM coupling algorithm, we first apply the FDMO to binarize the edges of the face image in order to obtain the face contour; then we apply PCA to extract the features of the binarized face image; finally, we apply the one-against-all SVM classifier and the LibSVM software package to perform face recognition. In addition, four groups of experimental results with nine different coupling algorithms on the ORL face database verify, through comparative analysis, the superiority of the FDMO+PCA+SVM coupling algorithm in terms of face recognition accuracy and speed.
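A hedged sketch of the PCA + SVM stage only (the FDMO edge-binarization step is omitted). It uses scikit-learn's bundled Olivetti faces dataset, which corresponds to the ORL database, and scikit-learn's default multiclass handling rather than the one-against-all scheme described above; component counts and SVM parameters are illustrative assumptions.

```python
# Hedged sketch: eigenface-style PCA features classified with an RBF-kernel SVM.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                       # 40 subjects, 10 images each
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

model = make_pipeline(PCA(n_components=80, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_train, y_train)
print("recognition accuracy:", model.score(X_test, y_test))
```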
ARTICLE | doi:10.20944/preprints202308.2019.v2
Subject: Social Sciences, Urban Studies And Planning Keywords: Modeling of Spatial Data; Accuracy Value; Slum Areas Distribution
Online: 23 November 2023 (09:28:13 CET)
Urbanization triggers the emergence of slums in urban areas. Palembang is one of the Indonesian cities with slum settlements. This research aimed to analyze the interpolation of slum positions in Palembang and their distribution pattern using kernel density analysis and spatial data modeling of accuracy values. It employed a quantitative method with a survey approach. Samples were selected using proportional random sampling from families in 64 slum areas across the 13 districts of the city, and their positions were recorded with a GPS device and then processed and mapped in ArcGIS. The data were analyzed with inverse distance weighting, kernel density, and spatial data modeling. The results showed that slum areas near the riverbanks had high population density, while those located further away had lower density. The values obtained from the kernel density analysis varied from 0 to 58.1123, while the inverse distance weighting showed a value range of 2.26745-380.991. The spatial data modeling analysis demonstrated that the slum areas were widely distributed along the Musi River, with an accuracy value of 96.8%, which falls within the range of 0.9-1.
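The inverse distance weighting step can be illustrated with a short hedged sketch; the coordinates, point values and power parameter are illustrative, not taken from the study.

```python
# Hedged sketch: inverse distance weighted (IDW) interpolation of point values at query locations.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

known_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
known_vals = np.array([10.0, 40.0, 25.0, 60.0])     # e.g. sampled density values, synthetic
query_pts = np.array([[0.5, 0.5], [0.9, 0.1]])
print(idw(known_pts, known_vals, query_pts))
```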
ARTICLE | doi:10.20944/preprints202308.1683.v1
Subject: Medicine And Pharmacology, Dentistry And Oral Surgery Keywords: intraoral scanners; subgingival preparation; vertical preparation; accuracy; digital impression
Online: 24 August 2023 (08:00:56 CEST)
One of the most critical aspects of intraoral impressions is the detection of the finish line, particularly in the case of subgingival preparations. The aim of this in vitro study was to evaluate the accuracy of four different intraoral scanners (IOSs) in scanning a subgingival vertical margin preparation (VP). A reference maxillary typodont (MT) was fabricated with a VP for full crowns on #16 and #21. The MT was scanned with a laboratory scanner (Aadva lab scanner, GC, Tokyo, Japan) to obtain a digital MT (dMT) in .stl file format. A group of 40 digital casts (dIOCs) was obtained by scanning the MT 10 times with each of four different IOSs (Trios 3, 3Shape A/S; i700, Medit; Vivascan, Ivoclar; Experimental IOS, GC). All the obtained dIOCs were imported into an inspection software program (Geomagic Control X; 3D SYSTEMS) and superimposed onto the dMT to calculate trueness. To calculate precision, all scans of the same scanner group were superimposed onto the cast that obtained the best trueness result. Results were collected as root mean square (RMS) values on the #16 and #21 abutment surfaces and on a marginal area positioned 1 mm above and below the gingival margin. A nonparametric Kruskal-Wallis test was performed to compare the RMS values obtained in the different IOS groups for trueness and precision. Statistical significance was set at 0.05. The trueness on the #16 abutment did not differ statistically between the IOSs, while on the #21 abutment, Vivascan (56.0 ± 12.1) and the Experimental IOS, GC (59.2 ± 2.7) performed statistically better than the others. Regarding precision, the Experimental IOS, GC and Trios 3 were significantly better than the i700 and Vivascan both in the #16 area (10.7 ± 2.1; 15.8 ± 2.7) and in the #21 area (16.9 ± 13.8; 18.0 ± 2.7). At the marginal level, for both #16 and #21, all the IOSs showed reduced accuracy relative to clinical acceptance.
ARTICLE | doi:10.20944/preprints202109.0390.v1
Subject: Biology And Life Sciences, Biophysics Keywords: COVID-19; vibraimage; behavioral parameters; diagnosis accuracy; ANN; AI
Online: 22 September 2021 (16:28:12 CEST)
The COVID-19 pandemic has spread in waves for a year and a half, despite significant worldwide efforts, the development of biochemical diagnostic methods and population vaccination. One of the reasons for the spread of the infection is the impossibility of early disease detection through biochemical diagnostics, since biochemical processes develop slowly in the body. At the same time, it is well known that behavioral characteristics of a person, measured on the basis of reflex movements, allow an essentially inertia-free assessment of psychophysiological parameters. Vibraimage technology is a method of processing video of head micromovements by accumulating inter-frame differences and converting the spatial and temporal characteristics of the inter-frame difference into behavioral and psychophysiological parameters. Here we show that behavioral parameters measured by vibraimage change during COVID-19 infection. Signs of changes in behavioral parameters were identified by an AI model trained on patients and controls. The best diagnostic accuracy (higher than 94%) was obtained using instantaneous values of the behavioral parameters measured with the following vibraimage settings: a 10 Hz frequency of basic measurements, 25 inter-frame difference accumulations, and averaging of the diagnostic results over a period of at least 5 seconds. Diagnosis of COVID-19 by behavioral parameters detected the disease earlier (by 5-7 days) than symptoms and positive biochemical RT-PCR testing. The proposed method for COVID-19 diagnosis identifies infected persons within 5 seconds of video processing using standard television cameras (web, IP) and computers, allows mass testing and self-testing, and will stop the spread of the pandemic. We assume that head micromovement analysis for the diagnosis of various diseases is possible not only with the help of vibraimage technology. Further research on human head micromovement analysis will help stop the COVID-19 pandemic and will contribute to the development of new contactless and environmentally friendly methods for the early diagnosis of diseases.
ARTICLE | doi:10.20944/preprints202108.0190.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Programming by Demonstration; Virtual Reality; Augmented Reality; Accuracy; Repeatability
Online: 9 August 2021 (10:42:14 CEST)
Augmented and Virtual Reality have experienced rapid growth in recent years, but there is still no deep knowledge of their capabilities and of where they could be exploited. In that sense, this paper presents a study on the accuracy and repeatability of Microsoft's HoloLens 2 (an Augmented Reality device) and the HTC Vive (a Virtual Reality device), using an OptiTrack system as ground truth. For the HoloLens 2, the method used was hand tracking, while for the HTC Vive, the tracked object was the system's hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive obtained results on the millimetre scale, while the HoloLens 2 revealed less accurate measurements (around 2 centimetres). Although the difference may seem considerable, the fact that the HoloLens 2 was tracking the user's hand and not a dedicated controller had a large impact. The results are considered a significant step for the ongoing project of developing a human-robot interface to program an industrial robot by demonstration using Extended Reality, which shows great potential to succeed based on these data.
REVIEW | doi:10.20944/preprints202107.0515.v2
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: Typhoid fever; Diagnostic; Metabolomics; Composite reference standard; Accuracy; Sensitivity.
Online: 29 July 2021 (13:28:33 CEST)
Typhoid fever is a major public health burden which causes substantial global morbidity and mortality due to the lack of decisive diagnostic protocols. The capacity of commonly used diagnostic tests to rule out typhoid fever is controversial. This study evaluates new techniques for typhoid diagnosis and proposes a harmonised, standardized composite reference standard to be adopted. Published peer-reviewed articles indexed in PubMed, MEDLINE and Google Scholar were reviewed for hospital-based studies. The review covers new typhoid diagnostic techniques such as proteomics, serology, rapid diagnostic tests (RDTs), transcriptomics, genomics, and metabolomics; 34.4% of the studies used a prospective study design. The results establish that the Widal test has moderate diagnostic accuracy, with average sensitivity (52.9%), specificity (54%), positive predictive value (PPV) (56.8%) and negative predictive value (NPV) (55.6%), compared with 29.4%, 28%, 29.5%, and 27.8%, respectively, for Typhidot. The findings showed a statistically significant difference in sensitivity between Widal and Typhidot (t(40) = 2.639, p = 0.012 at p < 0.05) using an independent-samples t-test. When there is no perfect reference standard with optimal diagnostic accuracy, a harmonised, standardized composite reference is essential. Hence, this study recommends that peripheral blood culture, with an established sensitivity of 60%, and the Widal test, with an average sensitivity of 52.9%, be adopted as a consensus composite reference standard for typhoid fever diagnosis in order to improve confidence in prevalence estimates.
REVIEW | doi:10.20944/preprints202104.0373.v1
Subject: Medicine And Pharmacology, Other Keywords: mHealth devices; diagnosis; accuracy; sensitivity; specificity; sub-Saharan Africa
Online: 14 April 2021 (12:27:39 CEST)
Mobile health (mHealth) devices are emerging applications that could help deliver point-of-care (POC) diagnosis, particularly in settings with limited laboratory infrastructure, such as sub-Saharan Africa (SSA). The advent of coronavirus has resulted in an increased deployment and use of mHealth-linked POC diagnostics in SSA. We performed a systematic review and meta-analysis to evaluate the accuracy of mobile-linked point-of-care diagnostics in SSA. Our systematic review and meta-analysis were guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We exhaustively searched PubMed, Science Direct, Google Scholar, MEDLINE, and CINAHL with full text via EBSCOhost databases from mHealth inception to March 2021. The statistical analyses were conducted using OpenMeta-Analyst software. All 11 included studies were considered for the meta-analysis. The included studies focused on malaria infections, Schistosoma haematobium, Schistosoma mansoni, soil-transmitted helminths, and Trichuris trichiura. The pooled summary sensitivity and specificity estimates were moderate compared with the gold reference standard. The overall pooled estimates of sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio of mobile-linked POC diagnostic devices were as follows: 0.499 (95% CI: 0.458-0.541); 0.535 (95% CI: 0.401-0.663); 0.952 (95% CI: 0.60-1.324); 1.381 (95% CI: 0.391-4.879); and 0.944 (95% CI: 0.579-1.538), respectively. The evidence shows that the diagnostic accuracy of mobile-linked POC diagnostics is presently moderate in detecting infections in sub-Saharan Africa. Future research is recommended to develop and evaluate mHealth diagnostic devices with excellent sensitivities and specificities for diagnosing diseases in this setting.
ARTICLE | doi:10.20944/preprints202104.0343.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Family violence; Machine Learning; Classification; ROC; Accuracy; COVID-19
Online: 13 April 2021 (10:51:20 CEST)
Bangladesh is a well-known developing country in Southern Asia. Because of COVID-19, it continuously faces challenges that go beyond economic and health concerns and also generate dangerous social problems, such as family abuse. Since the inception of this epidemic, multiple social crimes have been looming, and remaining at home during the lockdown period has increased divorce rates. This research presents a customized forecast of family violence during the COVID-19 outbreak using machine learning methods. In this paper, we applied Random Forest, Logistic Regression, and Naive Bayes classifiers to predict family violence and examined feature importance. The performance of the classifiers is evaluated based on accuracy, precision, recall, and F-score. We employed an oversampling strategy, the synthetic minority oversampling technique (SMOTE), to address the imbalance in our data, and we compared the performance of the three machine learning models before and after balancing and normalizing the data. Finally, ROC analyses and confusion matrices were produced and analyzed using the augmented data. Our proposed system with the random forest classifier performed best, with 77% accuracy, in comparison with the other two machine learning classifiers.
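A hedged sketch of the balancing-plus-classification step described above: SMOTE oversampling on the training split followed by a random forest, evaluated with accuracy, precision, recall and F-score. The synthetic imbalanced dataset stands in for the survey data; all parameters are illustrative.

```python
# Hedged sketch: SMOTE oversampling followed by random forest classification and evaluation.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.85, 0.15],
                           random_state=0)                    # imbalanced toy data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)  # balance classes
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
print(classification_report(y_test, rf.predict(X_test), digits=3))   # precision/recall/F-score
```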
ARTICLE | doi:10.20944/preprints201804.0209.v2
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: plant phenotyping; noise filtering; binarization; accuracy evaluation; connected components
Online: 24 April 2018 (17:02:18 CEST)
Plants are a key biological component of our environment and sustain human life and other creatures. Understanding how plant functions respond to the surrounding environment helps us improve plant growth and the development of food products. Plant phenotyping provides this biological information, but it requires suitable tools to extract plant knowledge. Imaging is one such phenotyping solution: it consists of imaging hardware, such as cameras, and image analysis software that quantifies changes in plant images, such as growth rates. In this paper, we propose a preprocessing algorithm that eliminates noise and separates the foreground from the background, yielding a cleaned plant image that supports plant image segmentation. Preprocessing is an important stage that leads to better segmentation and, ultimately, better labeling and analysis of plant images. Our proposed algorithm focuses on noise removal through color-space conversion, filtering, and a local adaptive binarization step such as Niblack's method. Finally, we evaluate our algorithm against a variety of other binarization methods.
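A minimal sketch of such a preprocessing chain, assuming scikit-image and illustrative parameter values rather than the paper's settings, might look as follows.

```python
# Sketch of a Niblack-style preprocessing step: colour-space conversion,
# light smoothing, then local adaptive binarization. Window size and k are
# illustrative values, and the file names are placeholders.
from skimage import io, color, filters

rgb = io.imread("plant.png")                     # hypothetical input image
gray = color.rgb2gray(rgb)                       # colour-space conversion
smooth = filters.gaussian(gray, sigma=1.0)       # simple noise filtering

thresh = filters.threshold_niblack(smooth, window_size=25, k=0.2)
mask = smooth > thresh                           # foreground/background mask
io.imsave("plant_mask.png", (mask * 255).astype("uint8"))
```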
ARTICLE | doi:10.20944/preprints202311.1387.v1
Subject: Medicine And Pharmacology, Clinical Medicine Keywords: AI; spine fractures; diagnosis; thoracic X-ray; medical imaging; accuracy
Online: 22 November 2023 (08:46:51 CET)
Post-menopausal women are at risk of osteoporotic vertebral fractures (OVFs), which, if undetected and untreated, can lead to significant pain and morbidity. However, OVFs are often not reported by radiologists on routine chest radiographs. Artificial Intelligence (AI) is increasingly used in medical applications, with improvements in detection and diagnosis. This study aims to investigate the clinical value of a newly developed AI tool, Ofeye 1.0, for automated detection of OVFs on lateral chest radiographs in post-menopausal women (>60 years) who were referred for chest x-rays for other reasons. A total of 510 de-identified lateral chest radiographs from three clinical sites were retrieved and analysed with the Ofeye 1.0 tool. These images were then reviewed by a consultant radiologist to determine whether a fracture was present, with these findings serving as the reference standard for determining the diagnostic performance of the AI tool. The percentage of radiologist reports that included the OVF was analysed by comparing diagnostic reports with AI findings, while the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the AI tool were calculated. Compared with the original radiologist reports, OVFs were missed in 14% of images but were detected by the AI tool. The AI tool demonstrated relatively high specificity of 92.8% (95% CI: 89.6, 95.2%), moderate accuracy of 80.3% (95% CI: 76.3, 80.4%), PPV of 73.7% (95% CI: 65.2, 80.8%) and NPV of 81.5% (95% CI: 79, 83.8%), but low sensitivity of 49% (95% CI: 40.7, 57.3%). The high false negative rate was mainly due to mild OVFs with <20% vertebral height loss. The new AI software tool has high specificity with a low false positive rate of 5.1%, showing that it can be used as a complementary tool to routine diagnostic reporting to reduce missed OVFs in elderly women. The low sensitivity and high false negative rate indicate that radiologists remain necessary for identifying and reporting early OVFs.
ARTICLE | doi:10.20944/preprints202308.1481.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: lightning; VLF/LF; Blitzortung.org; flash rate; detection efficiency; location accuracy
Online: 21 August 2023 (16:09:12 CEST)
We evaluated the detection efficiency and location accuracy of lightning discharges in Japan using Blitzortung.org, a volunteer-based network for locating lightning discharges from sferics measured by VLF electromagnetic receivers that have been deployed worldwide in recent years. A comparison of the flash rate (the detected lightning rate per area and period) from Blitzortung.org with that from the satellite-based OTD/LIS and the ground-based World Wide Lightning Location Network (WWLLN) observations showed that Blitzortung.org clearly observed intense lightning activity in and around the Kanto area, including Tokyo, in summer, which is typical of Japanese lightning activity. However, it did not clearly observe lightning activity in and around the Nansei Islands, including Okinawa. Conversely, Blitzortung.org observed winter lightning activity in the Hokuriku area and off the Kanto. In addition, event studies have compared the detection efficiency and location accuracy of Blitzortung.org with those of the Japanese Lightning Location Network (JLDN) to infer their absolute values. The latest detection efficiency of Blitzortung.org in the Kanto area was estimated at roughly 90%. The mean location accuracy was estimated at up to 5.6 km.
ARTICLE | doi:10.20944/preprints202305.0949.v1
Subject: Biology And Life Sciences, Anatomy And Physiology Keywords: basketball skills; physical fitness; basketball testing; dynamic shooting; shooting accuracy
Online: 12 May 2023 (13:21:24 CEST)
Three-point shooting plays an important role in determining the outcome of the basketball games, and could be relevant for player selection. However, there has been little research into the relationship between basketball players' physical capacities, metabolic capacities and three-point shooting accuracy, particularly among female players. The aim of this study was to examine the relationship between physical capacities, metabolic capacities and dynamic three-point shooting accuracy in female professional basketball players. Twelve female professional basketball players from the Women’s Chinese Basketball Association (WCBA) league (age: 19.04±1.31 years, height: 181.33±4.90cm, playing experience: 7.83±1.7 years) were recruited for this study. Pearson correlations and multiple linear regression analysis were run to assess the relationship between physical capacities, metabolic capacities and dynamic three-point shooting. Results showed that coordination, balance, core strength and relative average power were positively correlated with the three-point shooting accuracy (r>0.58, P<0.05), while no other variables showed significant correlations. The current study suggests that coaching staff should consider coordination, balance, core strength and anaerobic capacities when selecting players as well as in their training periodization if three-point shooting accuracy is considered relevant.
ARTICLE | doi:10.20944/preprints202305.0366.v1
Subject: Public Health And Healthcare, Public Health And Health Services Keywords: medical tests; accuracy; cost-effectiveness analysis; matching intervention; lower bound
Online: 5 May 2023 (11:48:27 CEST)
Medical tests' cost-effectiveness analysis reveals a strong correlation between their accuracy and the analysis results. Theoretically, we provide two lower bounds that medical tests' accuracy must exceed to be considered cost-effective. First, a medical test's accuracy should surpass the matching lower bound for each patient to receive the most appropriate intervention based on their test results. Second, the overall lower bound should be exceeded for the medical test to be cost-effective for patients as a whole. Furthermore, the conclusions from this model also provide an economic explanation for the under-diagnosis of rare diseases and the challenges faced in implementing precision medicine for minority patients. A numerical simulation example validates the conclusions of this paper.
ARTICLE | doi:10.20944/preprints202303.0062.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: Underground space; information detection; fractional differentiation; high accuracy remote data
Online: 3 March 2023 (08:37:27 CET)
The quality of underground space information data has become a major problem affecting the safety of underground spaces. Currently, the main methods for high-precision, long-distance transmission of detection information are radar and optical methods. However, in practical applications, we found that the radar method suffers from large energy loss and poor anti-jamming ability, which limit the accuracy and range of information data transmission. The optical method is strongly affected by weather and can only be applied to static objects above ground; its applicable targets and operating environments are therefore limited. More importantly, current high-precision remote detection methods are limited to objects in above-ground space and are not applicable to the detection of the various information data found in underground space. In this study, we analyze the spectral properties of the fractional differential operator and find that it is suitable for studying non-linear, non-causal, and non-stationary signals. Applying the theory of fractional calculus to data processing, we establish a mathematical model of remote transmission and high-precision detection of information based on fractional differences, which realizes high-precision, long-distance detection of information. By fusing the detected information data in this long-distance, high-accuracy detection model, a mathematical model for stratum data processing that provides long-distance, high-accuracy data was established. Through application in engineering practice, the effectiveness of this method for detecting underground space information data was verified.
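As an illustration of the kind of operator involved, the sketch below applies a Grünwald-Letnikov-style fractional difference to a synthetic signal; the order and the test signal are assumptions for demonstration, not the paper's model.

```python
# Sketch of a Grünwald-Letnikov-style fractional difference applied to a
# sampled signal. Order alpha and the signal are illustrative placeholders.
import numpy as np

def fractional_difference(x: np.ndarray, alpha: float) -> np.ndarray:
    """Apply a causal fractional difference of order alpha to a signal."""
    n = len(x)
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):                       # GL binomial weights, recursively
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return np.convolve(x, w)[:n]                # keep the causal part only

t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t) + 0.05 * np.random.randn(t.size)
enhanced = fractional_difference(signal, alpha=0.5)
print(enhanced[:5])
```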
ARTICLE | doi:10.20944/preprints202202.0041.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: Laser scanning instrument; 3D scanner calibrator; surface reflectance; measurement accuracy
Online: 2 February 2022 (15:57:04 CET)
The calibrator is one of the most important factors in the calibration of various laser 3D scanning instruments, and the requirements for a diffuse reflection surface are specially emphasized in many national standards. In this study, comparative measurement experiments with spherical and plane calibrators were carried out. A black ceramic standard sphere, white ceramic standard sphere, metal standard sphere, metal standard plane and white ceramic standard plane were used to test the laser 3D scanner. In the spherical calibrator experiments, the results indicated that the RMS of the white ceramic spherical calibrator, with a reflectance of about 60%, is 10 times that of the metal spherical calibrator, with a reflectance of about 15%, while the RMS of the black ceramic spherical calibrator, with a reflectance of about 11%, is of the same order as that of the metal spherical calibrator. In the plane calibrator experiments, the RMS of the flatness measurement is 0.077 mm for the metal plane calibrator with a reflectance of 15%, and 2.915 mm for the ceramic plane calibrator with a reflectance of 60%. The results show that, when the optimal measurement distance and incident angle are selected, the reflectance of the calibrator has a great effect on the measurement results, for both outlines and profiles. Based on the experiments, it is recommended to use a spherical calibrator or standard plane with a reflectance of around 18% as the standard, which yields reasonable results. In addition, it is necessary to clearly provide the material category and surface reflectance of the standard when calibrating a scanner according to the measurement standard.
ARTICLE | doi:10.20944/preprints202202.0034.v1
Subject: Physical Sciences, Optics And Photonics Keywords: stray light; radiometric accuracy; Earth observation; correction algorithm; ghost reflections
Online: 2 February 2022 (12:58:46 CET)
Stray light is a critical aspect of high-performance optical instruments. When stray light control by design is insufficient to reach the performance requirement, correction by post-processing must be considered. This situation is encountered, for example, in the case of the Earth observation instrument 3MI, whose stray light properties are complex due to the presence of many ghosts distributed across the detector array. We implement an iterative correction method and discuss its convergence properties. Spatial and field binning can be employed to reduce the computation time, but at the cost of decreased performance. Interpolation of the stray light properties is required to achieve high-performance correction. For that, two methods are proposed and tested: the first interpolates the stray light in the field domain, while the second applies a scaling operation based on a local symmetry assumption. Ultimately, the scaling method is selected, and a stray light reduction by a factor of 58 is obtained at 2σ (129 at 1σ) for an extended scene illumination.
ARTICLE | doi:10.20944/preprints202105.0424.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Convolutional Neural Network (CNN); Emotion Recognition; Facial Expression; Classification; Accuracy
Online: 18 May 2021 (11:34:19 CEST)
Emotion recognition is defined as identifying human emotion and is directly related to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human-robot communication and many more. The purpose of this study is to propose a new facial emotion recognition model using a convolutional neural network. Our proposed model, "ConvNet", detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model's training accuracy over a small number of epochs, so that the authors can develop a real-time scheme that can easily fit the model and sense emotions. Furthermore, this work focuses on a person's mental or emotional state as expressed through behavioral aspects. To train the CNN model, we use the FER2013 database, and we test the system's success by identifying facial expressions in real time. ConvNet consists of four convolutional layers together with two fully connected layers. The experimental results show that ConvNet is able to achieve 96% training accuracy, which is much better than current existing models. ConvNet also achieved a validation accuracy of 65% to 70% (considering the different datasets used for the experiments), resulting in higher classification accuracy compared to other existing models. We have also made all the materials publicly accessible for the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
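A rough sketch of a network with this shape (four convolutional layers plus two fully connected layers, for 48x48 grayscale FER2013 inputs and seven classes) is given below; filter counts and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a ConvNet-style classifier: four convolutional layers followed by
# two fully connected layers, for 48x48 grayscale images and 7 emotion classes.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),    # fully connected layer 1
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),   # fully connected layer 2: 7 emotions
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```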
REVIEW | doi:10.20944/preprints202105.0316.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: intraoral scanners; digital dentistry; trueness; precision; accuracy; 3D printing; materials
Online: 14 May 2021 (08:05:37 CEST)
Introduction: The current generation of 3D printers are lighter, cheaper, and smaller, making them more accessible to the chairside digital dentist than ever before. 3D printers in the industrial and chairside settings can work with various types of materials, including metals, ceramics, and polymers. Evidence presented in many studies shows that an ideal material used for dental restorations is characterised by several properties related to durability, cost-effectiveness, and high performance. This review is the second part in a 3D Printing series that looks at the literature on material science and applications for these materials in 3D printing, as well as a discussion of the potential further development and future evolution of 3D printing materials. Conclusions: Current materials in 3D printing provide a wide range of possibilities for providing more predictable workflows as well as improving efficiency through less wasteful additive manufacturing in CAD/CAM procedures. Incorporating a 3D printer and a digital workflow into a dental practice is challenging, but the wide range of manufacturing options and materials available means that the dentist should be well prepared to treat patients with a more predictable and cost-effective treatment pathway. As 3D printing continues to become a commonplace addition to chairside dental clinics, the evolution of these materials, in particular reinforced PMMA, resin incorporating zirconia and glass-reinforced polymers, offers increased speed and improved aesthetics that will likely replace subtractive manufacturing milling machines for most procedures.
REVIEW | doi:10.20944/preprints202105.0221.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: 3D printing; intraoral scanners; digital dentistry; trueness; precision; accuracy; history
Online: 10 May 2021 (15:57:02 CEST)
Introduction: The term 3D printing is commonly used to describe a manufacturing method whereby the final form of an object results from the addition of successive layers that build up its frame. This procedure is more accurately described as additive manufacturing and is also referred to as rapid prototyping. The term 3D printing, however, is relatively new and has been an active part of recent developments in Dentistry. Much publicity surrounds the evolution of 3D printing, which is hailed as an innovation that will permanently change CAM manufacturing, including in the dental sector. This review is the first part in a 3D Printing series that looks at the history of 3D Printing, the technologies available, and the literature relating to the accuracy of these technologies. Conclusions: The recent advancement in digital dentistry to incorporate these tools has modernised dental practices by paving the way for computer-aided design (CAD) technology and rapid prototyping. The use of 3D printing has led to 3D digital models produced with intraoral scanners (IOS), which can be manipulated easily for diagnosis, treatment planning, mockups, and a multitude of other uses. Combining 3D printing with a 3D intraoral scan eliminates the need for physical storage and makes it easy to retrieve 3D models for use within all dental modalities.
ARTICLE | doi:10.20944/preprints202012.0105.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Built-up land; Fourier transformation; high-accuracy mapping; temporal correction
Online: 4 December 2020 (11:58:42 CET)
Long-term, high-accuracy mapping of built-up land dynamics is essential for understanding urbanization and its consequences for the environment. Despite advances in remote sensing and classification algorithms, built-up land mapping using early satellite imagery (e.g., from the 2000s and earlier) remains prone to uncertainty. We mapped the extent of built-up land in the North China Plain, one of China’s most important agricultural regions, from 1990 to 2019 at three-year intervals. Using dense time-stack Landsat data, we applied discrete Fourier transformation to create temporal predictors and reduce mapping uncertainties for early years. We improved overall accuracy by 8% compared to using spectral and indices predictors alone. We implemented a temporal correction algorithm to remove inconsistent pixel classifications, further improving accuracy to a consistently high level (>94%) across years. A cross-product comparison showed that our study achieved the highest levels of accuracy across years. Total built-up land in the North China Plain increased from 37,941 km2 in 1990–1992 to 131,578 km2 in 2017–2019. Consistent, high-accuracy built-up land mapping provides a reliable basis for policy planning in one of the most rapidly urbanizing regions of the planet.
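The sketch below illustrates how harmonic (discrete Fourier) temporal predictors can be derived from a per-pixel time series; the synthetic series and the number of harmonics are assumptions for demonstration, not the study's actual configuration.

```python
# Sketch: derive harmonic (discrete Fourier) temporal predictors from a dense
# per-pixel reflectance time series. The series below is synthetic; real
# inputs would be Landsat observations stacked through time.
import numpy as np

def fourier_predictors(series: np.ndarray, n_harmonics: int = 2) -> np.ndarray:
    """Return the mean plus amplitude and phase of the first harmonics."""
    coeffs = np.fft.rfft(series)
    feats = [series.mean()]
    for k in range(1, n_harmonics + 1):
        feats.append(2 * np.abs(coeffs[k]) / len(series))   # amplitude of harmonic k
        feats.append(np.angle(coeffs[k]))                   # phase of harmonic k
    return np.array(feats)

t = np.linspace(0, 1, 36, endpoint=False)                   # e.g. 3 years of observations
pixel = 0.3 + 0.1 * np.sin(2 * np.pi * t) + 0.02 * np.random.randn(t.size)
print(fourier_predictors(pixel))                            # features fed to the classifier
```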
ARTICLE | doi:10.20944/preprints201812.0056.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Low accuracy CDRs; Group movement pattern; Data mining; Travel behaviors
Online: 4 December 2018 (10:02:30 CET)
Identifying group movement patterns of crowds and understanding group behaviors is valuable for urban planners, especially when the groups are special such as tourist groups. In this paper, we present a framework to discover tourist groups and investigate the tourist behaviors using mobile phone call detail records (CDRs). Unlike GPS data, CDRs are relatively poor in spatial resolution with low sampling rates, which makes it a big challenge to identify group members from thousands of tourists. Moreover, since touristic trips are not on a regular basis, no historical data of the specific group can be used to reduce the uncertainty of trajectories. To address such challenges, we propose a method called group movement pattern mining based on similarity (GMPMS) to discover tourist groups. To avoid large amounts of trajectory similarity measurements, snapshots of the trajectories are firstly generated to extract candidate groups containing co-occurring tourists. Then, considering that different groups may follow the same itineraries, additional traveling behavioral features are defined to identify the group members. Finally, with Hainan province as an example, we provide a number of interesting insights of travel behaviors of group tours as well as individual tours, which will be helpful for tourism planning and management.
ARTICLE | doi:10.20944/preprints201705.0170.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: accuracy; depth data; RMS error; 3D vision sensors; stereo disparity
Online: 23 May 2017 (09:20:27 CEST)
We propose an approach for estimating the error in depth data provided by generic 3D sensors, modern devices capable of generating an image (RGB data) and a depth map (distance) or other similar 2.5D structure (e.g. stereo disparity) of the scene. Our approach starts by capturing images of a checkerboard pattern devised for the method. We then construct a dense depth map using functions that generally come with the device SDK (based on disparity or depth). Next, 2D processing of the RGB data is performed to find the checkerboard corners. Clouds of corner points are finally created (in 3D), over which an RMS error estimate is computed. We developed a multi-platform system and verified and evaluated it using the development kit of the NVIDIA Jetson TK1 board with the MS Kinect v1/v2 and the Stereolabs ZED camera. The main contribution is an error determination procedure that does not need any dataset or benchmark, relying only on data acquired on the fly. With a simple checkerboard, our approach is able to determine the error for any device. The envisioned application is 3D reconstruction for robotic vision, with a series of 3D vision sensors carried on robots (quadcopter UAVs and terrestrial robots) for high-precision map construction, which can be used for sensing and monitoring.
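A condensed sketch of the idea, assuming hypothetical file names and pinhole intrinsics, is given below: detect checkerboard corners in the RGB image, back-project the aligned depth at those pixels to 3D, and report the RMS distance of the corner cloud from a best-fit plane.

```python
# Sketch of the error-estimation idea for a generic RGB-D sensor. Intrinsics
# (fx, fy, cx, cy) and file names are placeholders, not a specific device.
import cv2
import numpy as np

rgb = cv2.imread("rgb.png")
depth = np.load("depth_aligned.npy")            # metres, same resolution as rgb
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5     # hypothetical pinhole intrinsics

gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (9, 6))
assert found, "checkerboard not detected"

u, v = corners[:, 0, 0], corners[:, 0, 1]
z = depth[v.astype(int), u.astype(int)]
pts = np.column_stack(((u - cx) * z / fx, (v - cy) * z / fy, z))  # 3D corner cloud

centered = pts - pts.mean(axis=0)
normal = np.linalg.svd(centered)[2][-1]          # plane normal via SVD
rms = np.sqrt(np.mean((centered @ normal) ** 2))
print(f"RMS depth error vs. best-fit plane: {rms * 1000:.2f} mm")
```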
ARTICLE | doi:10.20944/preprints201609.0103.v1
Subject: Computer Science And Mathematics, Computational Mathematics Keywords: Maximum entropy model; K-means clustering; accuracy; classification; sports forecasting
Online: 27 September 2016 (11:10:50 CEST)
Predicting the outcome of a future game between two National Basketball Association (NBA) teams poses a challenging problem of interest to statistical scientists as well as the general public. In this article, we formalize the problem of predicting game results as a classification problem, apply the principle of maximum entropy to construct an NBA maximum entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs with the NBAME model. The best NBAME model is able to correctly predict the winning team 74.4 percent of the time, compared with other machine learning algorithms that are correct 69.3 percent of the time.
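Because a maximum entropy model with expectation constraints on discrete features takes the same exponential form as multinomial logistic regression, a quick stand-in can be sketched with scikit-learn, as below; the features and data are placeholders, not the NBAME model or its inputs.

```python
# Sketch: discretize game statistics and fit a logistic (maximum-entropy-form)
# classifier for win/loss prediction. Data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # e.g. per-game team statistics
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)  # home win/loss

X_disc = KBinsDiscretizer(n_bins=5, encode="onehot-dense").fit_transform(X)  # discrete features
maxent = LogisticRegression(max_iter=1000).fit(X_disc, y)
print("training accuracy:", maxent.score(X_disc, y))
```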
ARTICLE | doi:10.20944/preprints202310.1184.v1
Subject: Computer Science And Mathematics, Geometry And Topology Keywords: Medical diagnosis; rough sets; Nano Beta-topological space; Decision-making; Accuracy
Online: 19 October 2023 (03:52:17 CEST)
In this paper we introduce the concepts of nano beta closure and nano beta interior, study the relationship between them, and examine some of their properties. Based on these notions, we introduce a topological approach to decision-making and its application in the medical field, which describes the use of topological methods in decision-making, specifically through the utilization of nano beta closure and nano beta interior. These mathematical tools are applied in the medical field to improve diagnostic accuracy and aid in treatment planning. We demonstrate the effectiveness of this approach through a case study on the diagnosis of chronic kidney disease. The results show that the topological approach using nano beta closure and nano beta interior provides a reliable and accurate method for medical decision-making. We then present a medical application for groups of patients with chronic kidney disease, aimed at identifying the main causative agent of the disease. Using this approach, the medical team can identify the specific areas of the kidney that are affected by the infection, as well as the extent of damage and any potential complications. This information, together with the medical history, is used to create an individualized treatment strategy customized to the specific requirements of the patient. We also present an algorithm that can be a helpful tool for doctors in diagnosing chronic kidney infections. By utilizing this approach, clinicians can provide accurate diagnoses, develop effective treatment plans, and ultimately improve patient outcomes. Finally, we argue that fractals hold significant importance for future nano topological modelling of the human factor: their unique properties and self-similar patterns offer insights into the complex organization of biological systems and provide a foundation for designing advanced nanosystems.
ARTICLE | doi:10.20944/preprints202308.0323.v1
Subject: Medicine And Pharmacology, Pathology And Pathobiology Keywords: urothelial carcinoma in situ; immunohistochemistry; meta-analysis; diagnostic test accuracy review
Online: 3 August 2023 (10:03:19 CEST)
This study aimed to evaluate the diagnostic roles of various immunohistochemical markers in urothelial carcinoma in situ (uCIS) through a meta-analysis and review of diagnostic test accuracy. Immunohistochemical markers CK20, CD44, AMACR, and p53 were evaluated in the present study. We analyzed the expression rates of immunohistochemical markers and compared their diagnostic accuracies. The estimated expression rates were 0.803 (95% confidence interval [CI]: 0.726–0.862), 0.142 (95% CI: 0.033–0.449), 0.824 (95% CI: 0.720–0.895), and 0.600 (95% CI: 0.510–0.683) for CK20, CD44, AMACR, and p53, respectively. In the comparison between uCIS and reactive/normal urothelium, the expression of CK20, AMACR, and p53 in uCIS was significantly higher than in reactive/normal urothelium. CD44 showed significantly lower expression in uCIS than in the reactive/normal urothelium. Among the markers, AMACR had the highest sensitivity, specificity, and diagnostic odds ratio. The AUC on SROC was the highest for CK20. In conclusion, immunohistochemical markers, such as CK20, CD44, AMACR, and p53, can be useful in differentiating uCIS from reactive/normal urothelium.
ARTICLE | doi:10.20944/preprints202307.1005.v1
Subject: Business, Economics And Management, Economics Keywords: willingness-to-pay; CVML method; low cost; high accuracy; improved environment
Online: 14 July 2023 (10:29:02 CEST)
This paper introduces an advanced method that integrates contingent valuation and machine learning (CVML) to estimate residents' demand for reducing or mitigating environmental pollution and climate change. CVML is an innovative hybrid machine-learning model that can leverage a limited amount of survey data for prediction and data enrichment purposes. The model comprises two interconnected modules: Module I, an unsupervised learning algorithm, and Module II, a supervised learning algorithm. Module I is responsible for grouping the data into groups based on common characteristics, thereby grouping the corresponding dependent variable, whereas Module II demonstrates the ability to predict and to appropriately assign new samples to their respective groups based on input attributes. Taking a 2019 survey on air pollution in Hanoi as an example, we found that CVML can predict households' willingness-to-pay for polluted-air mitigation with a high degree of accuracy (i.e., 98%). We also found that CVML can help users reduce costs or save resources because it makes use of secondary data available from many open-data sources for estimating WTP. These findings suggest that CVML is a sound and practical method that could potentially be applied in a wide range of fields, particularly environmental economics and sustainability science, in the years to come.
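A minimal sketch of the two-module idea, with placeholder data rather than the Hanoi survey, could look like this: a clustering step for Module I, a classifier for Module II, and a group-level WTP summary attached to new records.

```python
# Sketch of a two-module CVML-style pipeline: Module I groups respondents by
# shared characteristics (unsupervised); Module II learns to assign new
# samples to those groups (supervised). Data and names are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X_survey = rng.normal(size=(300, 6))             # respondent attributes
wtp = rng.gamma(2.0, 50.0, size=300)             # stated willingness-to-pay

groups = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X_survey)  # Module I
assigner = GradientBoostingClassifier().fit(X_survey, groups)                   # Module II

group_wtp = {g: wtp[groups == g].mean() for g in np.unique(groups)}
X_new = rng.normal(size=(5, 6))                  # e.g. records from open secondary data
print([group_wtp[g] for g in assigner.predict(X_new)])   # predicted WTP per new record
```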
ARTICLE | doi:10.20944/preprints202307.0374.v1
Subject: Computer Science And Mathematics, Hardware And Architecture Keywords: Floating point; hybrid floating point; FPGA; numerical precision; accuracy; mixed-precision
Online: 6 July 2023 (07:54:48 CEST)
Nowadays, devices have been implemented whose purpose is to perform massive computations while saving resources and reducing the latency of arithmetic operations. These devices are usually GPUs, FPGAs and other specialised devices such as "Coral". Neural networks, digital filters and numerical simulators take advantage of the massively parallel operations of such devices. One way to reduce the amount of resources used is to limit the size of the registers that store data. This has led to the proliferation of numeric formats shorter than 32 bits, known as short floating point or SFP. We have developed several SFPs for use in our neural network accelerator design, allowing for different levels of accuracy. We use a 16-bit format for data transfer, and different formats can be used simultaneously for internal operations, which can be performed in 16-, 20- and 24-bit formats. The use of registers larger than 16 bits allows the preservation of fractional information while increasing precision. By leveraging some of the FPGA's arithmetic resources, our design outperforms designs implemented from scratch and is competitive with specialized arithmetic circuits already implemented in the FPGA.
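As a rough illustration of what storing values in a narrower register implies (not the paper's actual formats), the sketch below rounds a float32 mantissa down to a chosen number of bits and shows the resulting loss of fractional precision.

```python
# Illustrative sketch only: emulate a "short floating point" value by keeping
# a reduced number of mantissa bits of a float32, with crude round-to-nearest.
import numpy as np

def quantize_mantissa(x, mantissa_bits: int):
    """Keep only `mantissa_bits` of the float32 mantissa."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    drop = 23 - mantissa_bits                    # float32 carries a 23-bit mantissa
    half = np.uint32(1 << (drop - 1))
    rounded = ((bits + half) >> np.uint32(drop)) << np.uint32(drop)
    return rounded.view(np.float32)

x = np.array([3.14159265, 0.1, 123.456], dtype=np.float32)
for m in (10, 7):                                # e.g. half-precision-like vs. smaller
    print(m, quantize_mantissa(x, m))
```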
ARTICLE | doi:10.20944/preprints202306.0300.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: CART algorithm; Accuracy; Synthetic Minority Over-sampling Technique; Particle Swarm Optimization
Online: 5 June 2023 (10:10:36 CEST)
Diabetes is a serious health problem throughout the world, including in Indonesia. The International Diabetes Federation (IDF) reports that the number of adults with diabetes is increasing every year. The Behavioral Risk Factor Surveillance System (BRFSS) is a survey conducted by the Centers for Disease Control and Prevention (CDC) in the United States. Classification methods from data mining are used here to distinguish diabetics from non-diabetics. The data mining process consists of preprocessing, feature selection, and dataset classification stages. In the preprocessing stage, data cleaning, data formatting, and data oversampling are carried out using the Synthetic Minority Over-sampling Technique (SMOTE). Next, feature selection is carried out using the Particle Swarm Optimization (PSO) algorithm to find the best attributes. The dataset classification stage is carried out using the CART decision tree algorithm. The performance of the CART algorithm is evaluated using the confusion matrix and the MAE value: the CART algorithm without SMOTE and PSO obtained a best accuracy of 75.34% and an MAE of 0.2466, while the CART algorithm with SMOTE and PSO increased accuracy by 10.94% to 86.28%, with an MAE of 0.1372.
ARTICLE | doi:10.20944/preprints202305.1409.v1
Subject: Engineering, Automotive Engineering Keywords: calibration device; kinematic calibration; on-site calibration; industrial robot; accuracy measurement
Online: 19 May 2023 (08:27:51 CEST)
MultiCal is an affordable, high-precision measuring device designed for the on-site calibration of industrial robots. Its design features a long measuring rod with a spherical tip that is attached to the robot. By restricting the rod’s tip to multiple fixed points under different rod orientations, the relative positions of these points are accurately measured beforehand. A common issue with MultiCal is the gravity deformation of the long measuring rod, which introduces measurement errors into the system. This problem becomes especially serious when calibrating large robots, as the length of the measuring rod needs to be increased to enable the robot to move in a sufficient space. To address this issue, we propose two improvements in this paper. Firstly, we suggest the use of a new design of the measuring rod that is lightweight yet has high rigidity. Secondly, we propose a deformation compensation algorithm. Experimental results have shown that the new measuring rod improves calibration accuracy by 20% to 39%, while by using the deformation compensation algorithm, the accuracy increases by 6% to 16%. In the best configuration, the calibration accuracy is similar to that of a measuring arm with a laser scanner, producing an average positioning error of 0.274 mm and a maximum positioning error of 0.838 mm. The improved design is cost-affordable, robust, and has sufficient accuracy, making MultiCal a more reliable tool for industrial robot calibration.
ARTICLE | doi:10.20944/preprints202304.1235.v1
Subject: Biology And Life Sciences, Agricultural Science And Agronomy Keywords: Genomic prediction; flavonoid pigmentation; Sorghum bicolor; prediction accuracy; marker-assisted selection
Online: 29 April 2023 (10:11:10 CEST)
Marker-assisted selection (MAS) and genomic selection (GS) have been used to select individuals with desirable traits. MAS uses a few markers associated with a specific trait, identified through genome-wide association studies (GWAS), to select individuals with desirable traits. In contrast, GS uses a large number of markers distributed across the genome to predict genomic breeding values for a further selection of individuals. In general, MAS has shown high prediction accuracy but is not suitable for traits controlled by multiple genes and has the additional constraint of requiring phenotypic data; GS, on the other hand, has not shown prediction accuracy as high as MAS, but it takes into account the effects of the multiple genes controlling a target trait and can be used without phenotypic data. Including GWAS-selected markers in GS can compensate for the reduced prediction accuracy that GS shows in comparison with MAS. Thus, the objective of this study was to compare the prediction accuracy of MAS and several genomic prediction models (gBLUP, gBLUP including GWAS-selected markers, and Bayesian models such as Bayes A, Bayes B, Bayesian LASSO and Bayesian Ridge Regression) in order to confirm whether incorporating GWAS-selected markers into GS increases its prediction accuracy. As a model system, we used data from sorghum, which shows population structure, to evaluate whether incorporating GWAS-selected markers into GS improves prediction accuracy. A sample of 6,000 SNPs was drawn from the 265,487 reported in the study conducted by Morris et al. (2013), and we also considered parameters that affect selection efficiency, such as the size of the training population, the heritability, and the number of QTNs. The GWAS-selected SNPs were identified using the BLINK model. The results showed that incorporating GWAS-selected markers enhanced the performance of genomic selection, with prediction accuracy similar to MAS. The number of QTNs and the size of the training population affected accuracy, with higher accuracy for larger training populations and fewer QTNs, whereas heritability appeared to have no impact on the model in which GWAS-selected SNPs were included in gBLUP.
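A compact sketch of the general approach is shown below, using simulated genotypes and ridge regression (RR-BLUP, equivalent to gBLUP under standard assumptions) with "GWAS-selected" markers appended as extra predictors; it is illustrative only, not the study's pipeline.

```python
# Sketch: ridge-regression genomic prediction with a few "GWAS-selected"
# markers added as covariates. Genotypes and phenotypes are simulated.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n, p, n_qtn = 300, 2000, 20
X = rng.integers(0, 3, size=(n, p)).astype(float)        # SNP genotypes coded 0/1/2
qtn = rng.choice(p, n_qtn, replace=False)
y = X[:, qtn] @ rng.normal(0, 0.5, n_qtn) + rng.normal(0, 1.0, n)  # phenotype

# "GWAS-selected" markers (here simply the true QTNs, for illustration) are
# appended so their effects sit alongside the genome-wide ridge term.
X_aug = np.column_stack([X, X[:, qtn]])

pred = cross_val_predict(Ridge(alpha=100.0), X_aug, y, cv=5)
print("prediction accuracy r =", round(pearsonr(pred, y)[0], 3))
```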
ARTICLE | doi:10.20944/preprints202303.0177.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: Integrated Manufacturing; Mobile Equipment; Production Information; Data Accuracy; Fractional Partial Differential
Online: 9 March 2023 (11:38:36 CET)
The accuracy of information data directly affects the working reliability of a production management system. Mobile equipment is an important part of an integrated manufacturing system; during its operation, the complex working environment and the dynamic change of its position lead to the important problem of information detection distortion. Because the fractional differential operator has the dual functions of enhancing signal strength and reducing data error, we put forward a data processing method that obtains production information data of various accuracies by changing the parameters of the fractional differential operator. We establish a mobile equipment detection data fusion processing model based on fractional-order partial differential equations and apply it in an experiment on processing the information detection data of mobile equipment in an integrated manufacturing system; the feasibility and effectiveness of the method are verified. We conclude that the fractional-order partial differential equations used to process the monitoring information data of mobile equipment in an integrated manufacturing system can provide production information data of various accuracies, which is of great significance for improving the working reliability and scientific decision-making of the integrated manufacturing system.
ARTICLE | doi:10.20944/preprints202212.0390.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: hydraulic geometry; rating curves; flood mapping; accuracy; data acquisition; data needs
Online: 21 December 2022 (06:59:11 CET)
Hydraulic relationships are important for water resource management, hazard prediction, and modelling. Since Leopold first identified power-law expressions that relate streamflow to top width, depth, and velocity, hydrologists have been estimating 'At-a-station Hydraulic Geometries' (AHG) to describe average flow hydraulics. As the amount of data, data sources, and application needs increase, the ability to apply, integrate and compare disparate and often noisy data is critical for applications ranging from reach to continental scales. However, even with quality data, the standard practice of solving each AHG relationship independently can lead to solutions that fail to conserve mass. The challenge addressed here is how to extend the physical properties of the AHG relations while improving the way they are hydrologically addressed and fit. We present a framework for minimizing error while ensuring mass conservation in reach-scale, or hydrologic Feature-scale, Hydraulic Geometry (FHG) relations that complies with current state-of-the-practice conceptual and logical models. Through this framework, FHG relations are fit for the United States Geological Survey's (USGS) Rating Curve database, the USGS HYDRoacoustic dataset in support of the Surface Water Oceanographic Topography satellite mission (HYDRoSWOT), and the hydraulic property tables produced as part of the NOAA/Oak Ridge Continental Flood Inundation Mapping framework. The paper describes and demonstrates the accuracy, interoperability, and application of these relationships to flood modelling and presents the framework in an R package.
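A small sketch of the underlying fitting problem is given below (in Python for illustration, although the paper's framework is delivered as an R package): fit the three AHG power laws by log-log least squares and check the continuity constraints that independent fits can violate.

```python
# Sketch: fit at-a-station hydraulic geometry power laws w = aQ^b, d = cQ^f,
# v = kQ^m by log-log least squares, then check the continuity constraints
# b + f + m = 1 and a*c*k = 1 implied by Q = w*d*v. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
Q = np.geomspace(1, 500, 60)                                # discharge
width    = 8.00 * Q**0.26 * np.exp(rng.normal(0, 0.05, Q.size))
depth    = 0.50 * Q**0.40 * np.exp(rng.normal(0, 0.05, Q.size))
velocity = 0.25 * Q**0.34 * np.exp(rng.normal(0, 0.05, Q.size))

def fit_power(x, q):
    slope, intercept = np.polyfit(np.log(q), np.log(x), 1)
    return np.exp(intercept), slope                         # coefficient, exponent

(a, b), (c, f), (k, m) = fit_power(width, Q), fit_power(depth, Q), fit_power(velocity, Q)
print(f"b + f + m = {b + f + m:.3f}  (mass conservation requires 1)")
print(f"a * c * k = {a * c * k:.3f}  (mass conservation requires 1)")
```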
ARTICLE | doi:10.20944/preprints202104.0138.v1
Subject: Business, Economics And Management, Accounting And Taxation Keywords: Energy consumption; BRICS; GM (1, 1); Fractional-order; GREY; Forecasting accuracy
Online: 5 April 2021 (13:51:38 CEST)
Brazil, Russia, India, China, and the Republic of South Africa (BRICS) represent developing economies facing different energy and economic development challenges. The current study aims to forecast energy consumption in BRICS at aggregate and disaggregate levels using an annual time series data set from 1992 to 2019 and to compare the results obtained from a set of models. The time-series data are from the British Petroleum (BP-2019) Statistical Review of World Energy. The forecasting methodology is based on a novel Fractional-order Grey Model (FGM) with different order parameters. This study contributes to the literature by comparing the forecasting accuracy and forecasting ability of FGM(1,1) with traditional models, such as the standard GM(1,1) and ARIMA(1,1,1). It also illustrates BRICS's energy consumption nexus at aggregate and disaggregate levels using the latest available data set, providing a reliable and broader perspective. The Diebold-Mariano test results confirmed the equal predictive ability of the FGM(1,1), for a specific range of order parameters, and the ARIMA(1,1,1) model, and the usefulness of both approaches for efficient energy consumption forecasting.
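For orientation, the sketch below implements the classical GM(1,1) model that the fractional-order variant generalizes, applied to an illustrative series rather than the BP energy data.

```python
# Sketch of the classical GM(1,1) grey forecasting model.
import numpy as np

def gm11_forecast(x0: np.ndarray, horizon: int) -> np.ndarray:
    """Fit GM(1,1) to series x0 and forecast `horizon` further steps."""
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development / grey input params
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # whitened-equation solution
    x1_hat = np.concatenate([[x0[0]], x1_hat])
    return np.diff(x1_hat)[-horizon:]                   # restore to the original scale

consumption = np.array([28.0, 29.1, 30.5, 31.2, 32.8, 34.1, 35.7, 36.9])  # illustrative
print(gm11_forecast(consumption, horizon=3))
```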
ARTICLE | doi:10.20944/preprints201910.0039.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: tree species; forest; biodiversity; time series; spatial autocorrelation; cross-validation; accuracy
Online: 3 October 2019 (13:56:18 CEST)
Mapping forest composition using multiseasonal optical time series is still challenging. Highly contrasting results are reported from one study to another, suggesting that the drivers of classification errors are still under-explored. We evaluated the performance of single-year Formosat-2 time series for discriminating tree species in temperate forests in France and investigated how predictions vary statistically and spatially across multiple years. Our objective was to better estimate the impact of spatial autocorrelation in the validation data on accuracy measurements and to understand which drivers in the time series are responsible for classification errors. The experiments were based on ten Formosat-2 image time series irregularly acquired during the seasonal vegetation cycle from 2006 to 2014. Due to extensive cloud cover in 2006, an alternative 2006 time series using only cloud-free images was added. Thirteen tree species were classified in each single-year dataset using the SVM algorithm. The performances were assessed using a spatial leave-one-out cross-validation (SLOO-CV) strategy, thereby guaranteeing full independence of the validation samples, and compared with standard non-spatial leave-one-out cross-validation (LOO-CV). The results show relatively close statistical performances from one year to the next despite the differences between the annual time series. Good agreement between years was observed in monospecific plantations of broadleaf species, versus high disparity in other forests composed of different species. A strong positive bias in the accuracy assessment (up to 0.4 of Overall Accuracy) was also found when spatial dependence in the validation data was not removed. Using the SLOO-CV approach, the average OA values per year ranged from 0.48 for 2006 to 0.60 for 2013, which satisfactorily represents the spatial instability of species prediction between years.
ARTICLE | doi:10.20944/preprints201708.0099.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: inequality; ratio; Bernoulli number; Riemann zeta function; Dirichlet eta function; accuracy
Online: 28 August 2017 (09:30:34 CEST)
In the paper, by virtue of some properties of the Riemann zeta function, the author finds a double inequality for the ratio of two consecutive Bernoulli numbers with even indices and analyzes the approximating accuracy of the double inequality.
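For context, the classical Euler formula connecting even-indexed Bernoulli numbers to the Riemann zeta function shows why two-sided bounds of this kind are natural; the identities below are standard facts, not a restatement of the paper's specific inequality.

```latex
\[
  |B_{2n}| = \frac{2\,(2n)!}{(2\pi)^{2n}}\,\zeta(2n),
  \qquad
  \frac{|B_{2n+2}|}{|B_{2n}|}
    = \frac{(2n+1)(2n+2)}{4\pi^{2}}\cdot\frac{\zeta(2n+2)}{\zeta(2n)} .
\]
```

Since ζ(2n) tends to 1 rapidly as n grows, the ratio is asymptotic to (2n+1)(2n+2)/(4π²), which is the quantity that a double inequality of this type brackets from both sides.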
ARTICLE | doi:10.20944/preprints202310.0530.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Bacteria Colony; Multi-feature selection; Classification Accuracy; Improved Salp Swarm Algorithm
Online: 9 October 2023 (14:53:39 CEST)
In this study, we introduce an advanced multi-feature selection technique for bacterial classification employing the Salp Swarm Algorithm (SSA). We enhance SSA's effectiveness by incorporating the Opposition-Based Learning (OBL) strategy and a Local Search Algorithm (LSA). The proposed technique encompasses three key stages, streamlining the automated categorization of bacteria based on their distinctive features. The research adopts a multi-feature selection approach bolstered by an enhanced iteration of the Salp Swarm Algorithm (ISSA). The enhancements include the use of Opposition-Based Learning (OBL) to increase population diversity during the search and Local Search Algorithms (LSA) to tackle local optimization challenges. The ISSA algorithm is designed to optimize multi-feature selection by increasing the number of selected features and improving classification accuracy. This study compares its performance with several other algorithms across ten distinct test datasets. The comparison results show that ISSA performs better in terms of classification accuracy on 3 datasets consisting of 19 features, with a value reaching 73.75%. Additionally, ISSA excels in determining the optimal feature count and producing a better fit, with a classification error rate of 0.249. Thus, the ISSA technique is expected to make a significant contribution to solving feature selection problems in bacterial analysis.
ARTICLE | doi:10.20944/preprints202309.1895.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: relative homogenization; ACMANT; homogenization accuracy; synthetic data; regional trend bias; automatic networking
Online: 28 September 2023 (02:24:54 CEST)
Homogenization of climatic time series aims to remove non-climatic biases that stem from technical changes in climate observations. The method comparison tests of the Spanish MULTITEST project (2015-2017) showed that ACMANT was likely the most accurate homogenization method available at that time, even though the tested ACMANTv4 version gave suboptimal results when the test data included synchronous breaks in several time series. The technique of combined time series comparison was introduced in ACMANTv5 in order to better treat this specific problem. Tests confirm that ACMANTv5 adequately treats synchronous inhomogeneities, but accuracy has slightly worsened in some other cases. Results for a known daily temperature test dataset for 4 U.S. regions show that the residual errors after homogenization may be larger with ACMANTv5 than with ACMANTv4. Further tests have been performed to learn more about the efficiencies of ACMANTv4 and ACMANTv5 and to find solutions to the problems appearing with the new version. Planned changes to ACMANTv5 are presented in the paper along with the related test results. The overall results indicate that combined time series comparison can be kept in ACMANT, but smaller networks should be generated by the method's automatic networking process. For further improvement of homogenization methods, and to obtain more reliable and more solid knowledge about their accuracies, more synthetic test datasets that fairly mimic the true spatio-temporal structures of real climatic data are needed.
CASE REPORT | doi:10.20944/preprints202303.0372.v1
Subject: Biology And Life Sciences, Agricultural Science And Agronomy Keywords: agricultural modeling; fitting quality; function; grain corn; prediction accuracy; soybeans; winter wheat
Online: 21 March 2023 (07:10:49 CET)
Crop yield prediction is a relevant subject of current agricultural science. There are various mathematical approaches to crop yield prediction, and regression analysis, notwithstanding the fact that it is somewhat outdated, is still one of the most used for this purpose. The quality of a predictive model is of great importance, and it depends strongly on the rational choice of the target function. The goal of this study is to find the best regression model for predicting winter wheat, soybean, and grain corn yields from the crops' water use. Data on actual crop yields and water use were collected from 1970 to 2020 at the experimental fields of the Institute of Climate-Smart Agriculture, Kherson region, Ukraine. In total, 145 data pairs were processed by best-subsets regression to find the best model in terms of fitting quality (assessed by Pearson's coefficient of correlation) and prediction accuracy (assessed by the minimum and maximum absolute errors and the mean absolute percentage error). As a result, it was established that the best fitting quality for all the studied crops is attributed to the cubic function, while the best accuracy is recorded for the hyperbolic (reverse) function in soybeans (mean absolute percentage error of 12.27%), quadratic and hyperbolic functions in winter wheat (mean absolute percentage error of 20.54%), and the cubic function in grain corn (mean absolute percentage error of 14.92%). Summing up the results of the study, it is recommended to apply the cubic regression function for modeling crop yields in agricultural studies.
ARTICLE | doi:10.20944/preprints202302.0348.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: multiple influencing factors; data detection error; fractional partial differential; high accuracy detection
Online: 21 February 2023 (03:04:58 CET)
In engineering practice, various types of information data are affected by many factors during the collection process. For example, information data measurement errors are caused by equipment performance and the working environment, and during the transmission of detection information, signal distortion caused by energy loss and signal interference introduces unpredictable detection errors into the collected data. Through the study of fractional calculus theory, it was found to be suitable for studying nonlinear, non-causal, and non-stationary signals, and to have the dual functions of improving detection information and enhancing signal strength. Therefore, to handle the influence of these many factors, we applied the fractional difference algorithm to information data processing. A multi-sensor detection data fusion algorithm based on fractional partial differential equations is established for online detection data; it effectively fuses the information data detection errors caused by the various influencing factors and greatly improves the detection accuracy of the information data. The effectiveness of this method is demonstrated through its application in an experiment.
ARTICLE | doi:10.20944/preprints202102.0570.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: Low-cost Metal Material Extrusion; Additive Manufacturing; Machine Learning; Dimensional Accuracy; Sintering
Online: 25 February 2021 (10:02:44 CET)
Additive manufacturing (AM) is an emerging layer-by-layer manufacturing process. However, its broad adoption is still hindered by limited material options, various fabrication defects, and inconsistent part quality. Material extrusion (ME) is one of the most widely used AM technologies and is therefore adopted in this research. Low-cost metal ME is a new AM technology used to fabricate metal composite parts from metal-infused filament material that is subsequently sintered. Since the involved materials and process are relatively new, there is a need to investigate the dimensional accuracy of ME-fabricated metal parts for real-world applications. Each step of the manufacturing process, from material extrusion to sintering, might significantly affect the dimensional accuracy. This research provides a comprehensive analysis of the dimensional changes of metal samples fabricated by the ME and sintering process, using statistical and machine learning algorithms. Machine learning (ML) methods can assist researchers in sophisticated pre-manufacturing planning and in product quality assessment and control. This study compares linear regression to neural networks for assessing and predicting the dimensional changes of ME-made components after 3D printing and sintering. The neural network produced the best predictions, with the highest accuracy compared to regression. The findings of this study can help researchers and engineers predict dimensional variations and optimize the printing and sintering process parameters to obtain high-quality metal parts fabricated by the low-cost ME process.
ARTICLE | doi:10.20944/preprints202009.0034.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: vertical accuracy; photogrammetric DTM; ASTER; SRTM; TanDEM-X; orthometric height; geoid height
Online: 2 September 2020 (08:30:48 CEST)
The quality of photogrammetric-based derived products such as orthophotos, digital terrain models (DTMs) and digital line maps, as well as global digital elevation models (DEMs), depends rigorously on the accuracy of image orientation. This paper evaluates the vertical accuracy of an aerial photogrammetric Digital Terrain Model (DTM), the Shuttle Radar Topography Mission (SRTM), the Advanced Spaceborne Thermal Emission and Reflectance Radiometer (ASTER), and TerraSAR-X's twin satellite TanDEM-X (TDX) datasets against in-situ orthometric heights computed from ellipsoidal heights and the 2008 Earth Gravitational Model (EGM2008) derived geoid heights in Ethiopia. The quality of the four global digital elevation models was also validated against the aerial photogrammetric DTM measurements. In addition, the accuracies of the photogrammetric DTM and the four DEM products were checked for compliance with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards as well as the Ethiopian national vertical data evaluation standards. The study showed that the photogrammetric DTM is in good agreement with the reference orthometric heights compared to the SRTM, ASTER and TDX datasets. More precisely, the result has an absolute accuracy of 1.67 m at the Linear Error (LE) 95% confidence level, while the absolute accuracy of SRTM 3 arc seconds (SRTM3) at LE 90% (11.91 m) is better than its product specification (16 m). The absolute accuracy of SRTM 1 arc second (SRTM1) (7.70 m at LE 90%) surpasses that of SRTM3, whereas the absolute accuracy of the ASTER DEM is somewhat below its product specification. TDX has a vertical accuracy (10.29 m at LE 90%) comparable to its product specification (10 m). Furthermore, the vertical accuracy of the photogrammetric DTM meets the 100 cm vertical accuracy of the 2015 ASPRS standard. However, it does not meet the Ethiopian national vertical data accuracy requirement, i.e., an RMSEz of ± 0.45 m. In general, the photogrammetric DTM, SRTM1, and TDX have proven superior to the SRTM3 and ASTER DEMs, and these products are better suited for applications requiring high precision and accuracy.
ARTICLE | doi:10.20944/preprints201810.0558.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: digital terrain models; DTM vertical accuracy; DTM comparison; hydrogeomorphological modelling; Mediterranean catchments
Online: 24 October 2018 (08:27:47 CEST)
Digital Terrain Models (DTMs) are currently a fundamental source of information in Earth Sciences. However, DTM-based studies can contain remarkable biases if the limitations and inaccuracies of these models are disregarded. In this work, four freely available datasets – SRTM C-SAR DEM, ASTER GDEM V2, and two airborne LiDAR-derived DTMs (at 5 and 1 m spatial resolution, respectively) – were analysed in a comparative study in three geomorphologically contrasting catchments located in Mediterranean geoecosystems under intensive human land use. Vertical accuracy, as well as the influence of each dataset's characteristics on hydrological and geomorphological modelling applicability, was assessed using classic geometric and morphometric parameters and the more recently proposed index of sediment connectivity. Overall vertical accuracy – expressed as Root Mean Squared Error (RMSE) and Normalized Median Absolute Deviation (NMAD) – revealed the highest accuracy for the 1 m (RMSE = 1.55 m; NMAD = 0.44 m) and 5 m LiDAR DTMs (RMSE = 1.73 m; NMAD = 0.84 m). The vertical accuracy of SRTM was lower (RMSE = 6.98 m; NMAD = 5.27 m) but considerably higher than that of ASTER (RMSE = 16.10 m; NMAD = 11.23 m). All datasets were affected by systematic distortions. As a consequence, propagation of these errors had negative impacts on flow routing, stream network and catchment delineation and, to a lesser extent, on the distribution of slope values. These limitations should be carefully considered when applying DTMs to hydrogeomorphological modelling.
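The two accuracy measures quoted here are easy to compute from a set of elevation differences; the sketch below uses illustrative values and the standard NMAD definition (1.4826 times the median absolute deviation), which is far less sensitive to outliers than RMSE.

```python
# Sketch: RMSE and NMAD from DTM-minus-reference elevation differences (m).
# The values below are illustrative and include one deliberate outlier.
import numpy as np

dh = np.array([0.4, -0.2, 0.9, -1.1, 0.3, 6.5, -0.5, 0.1])

rmse = np.sqrt(np.mean(dh**2))
nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))   # robust accuracy measure

print(f"RMSE = {rmse:.2f} m, NMAD = {nmad:.2f} m")      # NMAD barely reacts to the outlier
```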
ARTICLE | doi:10.20944/preprints201807.0488.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: heart rate variability; machine learning; abnormality detection; window shifting; high accuracy prediction
Online: 25 July 2018 (14:22:10 CEST)
The use of machine learning techniques in predictive health care is on the rise, with minimal data used for training machine-learning models to derive high accuracy predictions. In this paper, we propose such a system, which utilizes Heart Rate Variability (HRV) as features for training machine learning models. This paper further benchmarks the usefulness of HRV features calculated from basic heart-rate data using a window shifting method. The benchmarking has been conducted using different machine-learning classifiers such as artificial neural network, decision tree, k-nearest neighbour and naive Bayes classifiers. Empirical results using the MIT-BIH Arrhythmia database show that the proposed system can be used for highly accurate prediction of abnormalities in heartbeat data series.
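The window shifting idea can be sketched in a few lines: a fixed-length window slides over the RR-interval series with a given step, and standard time-domain HRV features are computed per window and passed to a classifier. The Python sketch below is an assumption about what such a feature extractor might look like (the paper's exact feature set and window parameters are not reproduced here); only textbook HRV definitions are used.

    import numpy as np

    def hrv_features_windowed(rr_ms, window=60, step=10):
        """Time-domain HRV features over shifting windows of RR intervals (in ms)."""
        rr = np.asarray(rr_ms, float)
        feats = []
        for start in range(0, len(rr) - window + 1, step):
            w = rr[start:start + window]
            diff = np.diff(w)
            feats.append({
                "mean_rr": w.mean(),                    # average RR interval
                "sdnn": w.std(ddof=1),                  # overall variability
                "rmssd": np.sqrt(np.mean(diff ** 2)),   # short-term variability
                "pnn50": np.mean(np.abs(diff) > 50.0),  # fraction of successive diffs > 50 ms
            })
        return feats

    # Synthetic RR series; the feature dicts can be fed to any scikit-learn classifier
    rr = np.random.default_rng(0).normal(820.0, 40.0, size=300)
    print(len(hrv_features_windowed(rr)), "feature windows")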
ARTICLE | doi:10.20944/preprints201803.0161.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: acoustic positioning system; three-dimensional assessment model; positioning accuracy; DOP; optimal configuration
Online: 19 March 2018 (11:43:18 CET)
This paper addresses the problem of assessing and optimizing an acoustic positioning system for underwater target localization with range measurements only. We present a new three-dimensional assessment model to assess whether the optimal geometric beacon formation meets user needs. For the sake of mathematical tractability, it is assumed that the measurements of the range between the target and the beacons are corrupted by white Gaussian noise whose variance is distance-dependent. Then, by adopting dilution of precision (DOP) parameters in the assessment model, the relationship between DOP parameters and positioning accuracy is derived. In addition, the optimal geometric beacon formation that will yield the best performance is obtained by minimizing the value of the geometric dilution of precision (GDOP), on condition that the position of the target is known and fixed. Next, in order to determine whether the estimated positioning accuracy over the region of interest satisfies the precision needed by the users, geometric positioning accuracy (GPA), horizontal positioning accuracy (HPA) and vertical positioning accuracy (VPA) are used to assess the optimal geometric beacon formation. Simulation examples are designed to illustrate the correctness of the conclusions. Unlike other work that only uses GDOP to optimize the formation and cannot assess the performance of specific dimensions, this new three-dimensional assessment model can assess the optimal geometric beacon formation in each dimension for any point in three-dimensional space, which provides users with guidance for optimizing the performance of every specified dimension.
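The DOP quantities referred to here follow the usual construction for range-only positioning: the rows of the geometry matrix are the unit line-of-sight vectors from the target to the beacons, and GDOP, HDOP and VDOP are read off the inverse normal matrix. The sketch below assumes a constant range-error variance for simplicity (the paper uses a distance-dependent variance); the beacon coordinates are illustrative.

    import numpy as np

    def dop_values(beacons, target):
        """GDOP, HDOP and VDOP for a target position and a beacon formation."""
        beacons = np.asarray(beacons, float)
        target = np.asarray(target, float)
        los = beacons - target
        H = los / np.linalg.norm(los, axis=1, keepdims=True)  # unit line-of-sight vectors
        Q = np.linalg.inv(H.T @ H)                            # covariance factor matrix
        gdop = np.sqrt(np.trace(Q))
        hdop = np.sqrt(Q[0, 0] + Q[1, 1])
        vdop = np.sqrt(Q[2, 2])
        return gdop, hdop, vdop

    # Four beacons around a target at the origin (illustrative geometry, metres)
    beacons = [[100, 0, -50], [0, 100, -50], [-100, 0, -50], [0, -100, -50]]
    print(dop_values(beacons, [0.0, 0.0, 0.0]))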
ARTICLE | doi:10.20944/preprints202311.0640.v1
Subject: Biology And Life Sciences, Biophysics Keywords: hyper convolution spectra; epithelial tissues; paramagnetism; diamagnetism; opto-magnetic images; cancer detection; accuracy
Online: 9 November 2023 (14:24:05 CET)
In this paper, a relatively new hyper convolution spectra method for the characterization of liquid, viscoelastic and solid samples is presented. Special attention is paid to the biophysical characterization of epithelial tissues and the early detection of cancer of the skin, cervix, colon, and oral cavity. The method is based on the use of two types of white light, diffuse and polarized, from the same LED light sources. Based on the subtraction of the intensity values of these two lights and using the hyper convolution algorithm, the ratio of paired and unpaired electrons in the tissue is obtained. This ratio correlates well with the state of the tissue: normal, irritated, precancerous, and cancerous. First, proof-of-concept studies were performed, and later clinical studies on samples from one thousand two hundred fifteen people were carried out. These showed that hyper Opto-Magnetic Imaging Spectroscopy (OMIS) can detect epithelial tissue cancers with an accuracy of 85-96%. Bearing in mind that the OMIS method also acquires the biophysical characteristics of the deeper skin layers (epidermis, basement membrane, dermis) by analysing the convolution spectra of the RGB channels of both diffuse and polarized white light, it is possible to apply this method in cosmetics (skin hydration, collagen and elastin states, etc.). During OMIS applications, we noticed places where improvements could be made to increase the method's accuracy using advanced machine learning methodologies.
ARTICLE | doi:10.20944/preprints202310.1195.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: powder bed fusion laser sintering; polyamide 11; reuse; dimensional accuracy; tensile properties; crystallinity
Online: 18 October 2023 (16:24:54 CEST)
Polyamide 11 (PA11) is a plant-based nylon made from castor beans. Powder bed fusion laser sintering (PBF-LS) is an additive manufacturing process used for PA11 which allows the reuse of unsintered powder. Mixing unsintered powder with virgin powder at different refresh rates has been studied extensively for most semi-crystalline polyamides. However, there is a lack of information on the effect of using 100% reused PA11 powder, and of the number of times it is reused on its own, during powder bed fusion laser sintering. This paper investigates the effect of reusing PA11 powder in PBF-LS, and of the number of times it is reused, on the dimensional accuracy, density, thermal and tensile properties. From 100% virgin powder to the 3rd use of the powder, there is a decrease in powder wastage, crystallinity and tensile strength. This was associated with the polymerisation and cross-linking of polymer chains upon exposure to high temperature, which results in a higher molecular weight and hence a higher density. From the 4th to the 10th use, the opposite was observed, which was attributed to an increase in high-viscosity and un-molten particles that resulted in defects in the PBF-LS parts.
ARTICLE | doi:10.20944/preprints202307.1043.v2
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Image classification; Land use/land cover mapping; Accuracy assessment; Landsat-8; Sentinel-2
Online: 14 August 2023 (09:01:24 CEST)
Satellite-based data classification performance remains a challenge for the research community in the field of land use/land cover mapping. Here we investigated supervised per-pixel classification performance under different scenarios, based on single-date and seasonal multispectral data combinations from different sensors (Landsat-8 OLI and Sentinel-2 MSI). In the case of Landsat, seasonal spectral indices (EVI and NDMI) were included. A typical Mediterranean watershed with a complex landscape comprised of various forest and wetland ecosystems, crops, artificial surfaces, and lake water was selected to test our approach. All available geospatial data from national databases (Forest Map, LPIS, Natura2000 habitats, cadastral parcels, etc.) were used as ancillary data for classification training and validation. We examined and compared the performance of ML, RF, KNN and SVM classifiers under different scenarios for land use/land cover mapping, according to the Copernicus Land Cover (CLC2018) nomenclature. In total, eight land use/land cover classes were identified in Landsat-8 OLI and nine in Sentinel-2a MSI, with an acceptable overall accuracy above 85%. A comparison of the overall classification accuracies shows that the Sentinel-2a overall accuracy was slightly higher than that of Landsat-8 (96.68% vs. 93.02%). The best-performing algorithm was ML for Sentinel-2, while for Landsat-8 it was KNN. However, the machine-learning algorithms yielded similar results regardless of the sensor type. We concluded that the best classification performance was achieved using seasonal multispectral data. Future research should be oriented towards integrating time-series multispectral data from different sensors and geospatial ancillary data for land use/land cover mapping.
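Overall accuracy in this kind of study is simply the proportion of correctly classified validation samples, usually read off a confusion matrix; Cohen's kappa is a common companion measure. A minimal scikit-learn sketch follows, with made-up class labels standing in for the validation data.

    import numpy as np
    from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

    # Hypothetical validation samples: reference classes vs. classifier output
    reference = np.array(["forest", "crop", "water", "forest", "urban", "crop", "wetland", "forest"])
    predicted = np.array(["forest", "crop", "water", "crop",  "urban", "crop", "wetland", "forest"])

    cm = confusion_matrix(reference, predicted)   # rows: reference, columns: predicted
    oa = accuracy_score(reference, predicted)
    kappa = cohen_kappa_score(reference, predicted)
    print(cm)
    print(f"Overall accuracy: {oa:.2%}, kappa: {kappa:.3f}")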
ARTICLE | doi:10.20944/preprints202303.0201.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: measurement of time intervals; clock generator; accuracy error compensation; correction of measurement results
Online: 13 March 2023 (02:01:30 CET)
A method of accuracy error compensation for time interval measurement is examined, which eliminates the dependence of the accuracy error on the duration of the measured intervals and minimises the influence of destabilising factors – ambient temperature changes and the initial deviation of the meter's clock generator frequency from its nominal value. The compensation method is based on a calibration procedure that measures precise time intervals under conditions of changing ambient temperature. The dependence of the accuracy error on temperature for a particular meter is then recorded. Based on these data, a correction table is compiled containing correction factors and the temperature values at which these factors were determined. Under real measurement conditions, the correction factor corresponding to the current temperature is taken from the table and used to correct the measurement results. The table of correction factor values can be stored in the memory of the meter or of the processing computer. Experimental verification of the method showed that applying the correction to a meter with a standard XO-class clock generator (certified instability of ±50 ppm) yields an equivalent clock generator instability of ±0.15 ppm. The method is efficient in cases where the use of high-end clocking to ensure the required measurement accuracy is not economically feasible.
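The correction step described here amounts to a lookup (or interpolation) in the calibration table, followed by a rescaling of the measured interval. The Python sketch below assumes linear interpolation between calibration points and a simple multiplicative correction; the table values and the sign convention are illustrative, not the paper's calibration data.

    import numpy as np

    # Hypothetical calibration table: ambient temperature (°C) -> correction factor (ppm)
    cal_temps_c = np.array([-20.0, 0.0, 20.0, 40.0, 60.0])
    cal_factors_ppm = np.array([18.0, 7.0, -1.5, -9.0, -16.0])

    def corrected_interval(measured_s, temperature_c):
        """Correct a measured time interval using the temperature-indexed factor table."""
        factor_ppm = np.interp(temperature_c, cal_temps_c, cal_factors_ppm)
        return measured_s * (1.0 - factor_ppm * 1e-6)

    print(corrected_interval(1.000_000_250, 27.5))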
ARTICLE | doi:10.20944/preprints202112.0206.v1
Subject: Engineering, Control And Systems Engineering Keywords: Motion capture camera; robotic total station; autonomous vehicle; 6 DoF pose estimation; accuracy
Online: 13 December 2021 (13:30:53 CET)
To validate the accuracy and reliability of onboard sensors for object detection and localization in driver assistance, as well as autonomous driving applications under realistic conditions (indoors and outdoors), a novel tracking system is presented. This tracking system is developed to determine the position and orientation of a slow-moving vehicle (e.g., a car during parking maneuvers), independently of the onboard sensors, during test maneuvers within a reference environment. One requirement is a 6 degree of freedom (DoF) pose with a position uncertainty below 5 mm (3σ), an orientation uncertainty below 0.3° (3σ), a frequency higher than 20 Hz, and a latency smaller than 500 ms. To compare the results from the reference system with the vehicle's onboard system, synchronization via the Precision Time Protocol (PTP) and interoperability with the Robot Operating System (ROS) are implemented. The developed system combines motion capture cameras mounted in a 360° panoramic-view set-up on the vehicle with robotic total stations. A point cloud of the test site serves as a digital twin of the environment, in which the movement of the vehicle is simulated. Results have shown that the fused measurements of these sensors complement each other, so that the accuracy requirements for the 6 DoF pose can be met, while allowing a flexible installation in different environments.
Subject: Physical Sciences, Acoustics Keywords: SAR Interferometry; Accuracy; Big Data; Deformation Monitoring, Sentinel-1; Fading Signal; Signal Decorrelation
Online: 27 October 2020 (15:26:30 CET)
We scrutinize the reliability of multilooked interferograms for deformation analysis. Using a simple approach to evaluate the accuracy of the estimated deformation signals, we reveal a prominent bias in the deformation velocity maps. The bias is the result of the propagation of small phase errors of multilooked interferograms through the time series and can add up to 6.5 mm/yr when error-prone short temporal baseline interferograms are used. We further discuss the role of the phase estimation algorithms in reducing the bias and recommend a unified intermediate InSAR product for achieving high-precision deformation monitoring.
DATA DESCRIPTOR | doi:10.20944/preprints201812.0148.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: Additive manufacturing; fused deposition modeling; FDM; dimensional accuracy; manufacturing process repeatability; polymer testing
Online: 12 December 2018 (12:58:13 CET)
This report describes the collection of a large dataset (6,930 measurements) on dimensional error in the fused deposition modeling (FDM) additive manufacturing process for full-density parts. Three different print orientations were studied, as well as seven raster angles (0°, 15°, 30°, 45°, 60°, 75°, and 90°) for the rectilinear infill pattern. All measurements were replicated ten times on ten different samples to ensure a comprehensive dataset. Eleven polymer materials were considered: acrylonitrile butadiene styrene (ABS), polylactic acid (PLA), high-temperature PLA, wood-composite PLA, carbon-fiber-composite PLA, copper-composite PLA, aluminum-composite PLA, high-impact polystyrene (HIPS), polyethylene terephthalate glycol-enhanced (PETG), polycarbonate, and synthetic polyamide (nylon). The samples were ASTM-standard impact testing samples, since this geometry allows the measurement of error on three different scales; the nominal dimensions were 3.25 mm thick, 63.5 mm long, and 12.7 mm wide. This dataset is intended to give engineers and product designers a benchmark for judging the accuracy and repeatability of the FDM process for use in the manufacturing of end-user products.
ARTICLE | doi:10.20944/preprints201806.0240.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Bit-serial; Low Power; Variable Accuracy Computing; FFT; Energy Harvesting; VLSI; Hardware Design
Online: 14 June 2018 (16:22:15 CEST)
In this paper, a new approach is proposed for designing an ultra-low-power FFT (Fast Fourier Transform) system suitable for use in energy-harvesting-powered sensors. A bit-serial architecture is adopted to reduce the power consumption of the butterfly operation. Simulation results show that, compared with state-of-the-art bit-serial and conventional parallel processors, the proposed technique is superior in terms of silicon area, power consumption, and dynamic energy use, owing to variable-precision arithmetic. A sample design of a 64-point FFT shows that the implementation can save about 40% area and 36% leakage power compared with a conventional parallel counterpart, accordingly achieving significant power benefits at a low sample rate and in the low voltage domain. Dynamic variation of the arithmetic precision can be achieved through a simple modification of the controller, with a hardware area overhead of 10% in gate count.
ARTICLE | doi:10.20944/preprints202307.0529.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: machine learning; accuracy; complexity; entropy; landslide susceptibility mapping; dimensionality reduction; Principal Component Analysis (PCA)
Online: 10 July 2023 (11:11:27 CEST)
In this study, our primary objective was to analyze the tradeoff between accuracy and complexity in machine learning models, with a specific focus on the impact of reducing complexity and entropy on the production of landslide susceptibility maps. We aimed to investigate how simplifying the model and reducing entropy can affect the capture of complex patterns in the susceptibility maps. To achieve this, we conducted a comprehensive evaluation of various machine-learning algorithms for classification tasks. We compared the performance of these algorithms in terms of accuracy and complexity, considering both "before" and "after" scenarios of dimensionality reduction using Principal Component Analysis (PCA). Our findings revealed that reducing complexity and lowering entropy can lead to an increase in model accuracy. However, we also observed that this reduction in complexity comes at the cost of losing important complex patterns in the produced landslide susceptibility maps. By simplifying the model and reducing entropy, certain intricate relationships and uncertain patterns may be overlooked, resulting in a loss of information and potentially compromising the accuracy of the susceptibility maps. The analysis encompassed a diverse range of machine learning algorithms, including Random Forest (RF), Extra Trees (EXT), XGboost, LightGBM, Catboost, Naive Bayes (NB), K-Nearest Neighbors (KNN), Gradient Boosting Machine (GBM), and Decision Trees (DT). Each algorithm was evaluated for its strengths and limitations, considering the tradeoff between accuracy and complexity. Before dimensionality reduction, the algorithms demonstrated promising results, with RF exhibiting excellent AUC/ROC scores and average accuracy. However, computational costs were noted as a potential drawback for RF, especially when dealing with large datasets. EXT showcased robust performance and good accuracy, while XGboost demonstrated its ability to handle complex relationships within large datasets, albeit requiring careful hyperparameter tuning. The efficiency and scalability of LightGBM made it a suitable choice for large datasets, although it displayed sensitivity to class imbalance. Catboost excelled in handling categorical features, but longer training times were observed for larger datasets. NB showcased simplicity and computational efficiency but assumed independence among features. KNN, known for its capability to capture local patterns and spatial relationships, was found to be sensitive to the choice of distance metric. GBM, while capturing complex relationships effectively, was prone to overfitting without proper regularization. DT, with its interpretability and ease of understanding, faced limitations in terms of overfitting and limited generalization. After dimensionality reduction, certain algorithms exhibited improvements in their AUC/ROC scores and average accuracy, including RF, EXT, XGboost, and LightGBM. However, for a few algorithms, such as NB and DT, a decrease in performance was observed. This study provides valuable insights into the performance characteristics, strengths, and limitations of various machine learning algorithms in classification tasks. Researchers and practitioners can utilize these findings to make informed decisions when selecting algorithms for their specific datasets and requirements. 
We also aim to identify the potential factors contributing to the high accuracy rates obtained from these ensemble algorithms and explore possible shortcomings of non-ensemble algorithms that may result in lower accuracy rates. By conducting a comprehensive analysis of these algorithms, we seek to provide valuable insights into the benefits and limitations of ensemble approaches for landslide susceptibility mapping. Our study sheds light on the challenges faced when balancing accuracy and complexity in machine learning models for landslide susceptibility mapping. It emphasizes the importance of carefully considering the level of complexity and entropy reduction in relation to the specific patterns and uncertainties present in the data. By providing insights into this tradeoff, our research aims to assist researchers and practitioners in making informed decisions regarding model complexity and entropy reduction, ultimately improving the quality and interpretability of landslide susceptibility maps.
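The "before/after PCA" comparison described above can be reproduced in outline with a scikit-learn pipeline: the same classifier is cross-validated on the raw factor table and on its PCA-reduced version. The sketch below uses a synthetic dataset as a stand-in for the landslide conditioning factors, so the numbers it prints carry no scientific meaning.

    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for a table of landslide conditioning factors
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc_before = cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean()
    auc_after = cross_val_score(make_pipeline(PCA(n_components=5), rf), X, y,
                                cv=5, scoring="roc_auc").mean()
    print(f"AUC before PCA: {auc_before:.3f} | AUC after PCA: {auc_after:.3f}")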
ARTICLE | doi:10.20944/preprints202304.0405.v1
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: multiple system atrophy; dysautonomia; functional neuroimaging testing; diagnostic accuracy; Ioflupane-123; cross-sectional study
Online: 17 April 2023 (04:58:29 CEST)
Background: Multiple system atrophy (MSA) is a rapidly progressive neurodegenerative disorder that has no curative treatment. Diagnosis is based on a set of criteria established by Gilman (1998 and 2008) and recently updated by Wenning (2022). We aim to determine the effectiveness of [123I]Ioflupane SPECT in MSA, especially at the initial clinical suspicion. Methods: Cross-sectional study of patients at initial clinical suspicion of MSA, referred for [123I]Ioflupane SPECT. Results: 139 patients (68 men, 71 women) were included, 104 being MSA-probable and 35 MSA-possible. MRI was normal in 89.2%, while SPECT was positive in 78.45%. SPECT showed high sensitivity (82.46%) and positive predictive value (86.24%), reaching maximum sensitivity in MSA-P (97.26%). Significant differences were found when relating both SPECT assessments in the healthy-sick and inconclusive-sick groups. We also found an association between SPECT and the subtype (MSA-C or MSA-P), as well as the presence of parkinsonian symptoms. Lateralization of striatal involvement was detected (left side). Conclusions: [123I]Ioflupane SPECT is a useful and reliable tool for diagnosing MSA, with good effectiveness and accuracy. Qualitative assessment shows a clear superiority when distinguishing between the healthy-sick categories, as well as between the parkinsonian (MSA-P) and cerebellar (MSA-C) subtypes at initial clinical suspicion.
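For readers unfamiliar with the reported quantities: sensitivity, specificity, PPV, NPV and accuracy all follow directly from the 2x2 table of test results against the reference diagnosis. The small Python helper below shows the standard formulas; the counts passed in are illustrative, not the study's data.

    def diagnostic_values(tp, fp, fn, tn):
        """Standard diagnostic metrics from a 2x2 contingency table."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, specificity, ppv, npv, accuracy

    # Illustrative counts only
    print(diagnostic_values(tp=94, fp=15, fn=20, tn=10))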
ARTICLE | doi:10.20944/preprints202208.0389.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Numerical weather prediction; Time integration; Filtering; Laplace transform; semi-implicit; semi-Lagrangian; Forecast accuracy
Online: 23 August 2022 (03:13:59 CEST)
A time integration scheme based on semi-Lagrangian advection and Laplace transform adjustment has been implemented in a baroclinic primitive equation model. The semi-Lagrangian scheme makes it possible to use large time steps. However, errors arising from the semi-implicit scheme increase with the time step size. In contrast, the errors using the Laplace transform adjustment remain relatively small for typical time steps used with semi-Lagrangian advection. Numerical experiments confirm the superior performance of the Laplace transform scheme relative to the semi-implicit reference model. The algorithmic complexity of the scheme is comparable to the reference model, making it computationally competitive, and indicating its potential for integrating weather and climate prediction models.
ARTICLE | doi:10.20944/preprints202105.0541.v1
Subject: Business, Economics And Management, Accounting And Taxation Keywords: Artificial intelligence; Accounting systems integration; Accounting systems accuracy; Financial statements; Aqaba Special Economic Zone
Online: 24 May 2021 (08:47:20 CEST)
The study aims to examine the effects of artificial intelligence (AI) on the consistency and analysis of financial statements in hotels in ASEZA, Jordan. This research is an exploratory, empirical study, which uses the methodology of data collection and interpretation to draw conclusions. The researchers used the arithmetic mean, standard deviation, T-test and ANOVA test to calculate the degree of significance of the study questions. The findings of a basic linear regression study of the impact of AI implemented in Jordanian hotels on the integration of accounting information systems and the association between AI and the integration of accounting information systems (R = 59.6%) also indicate that the fixed limit value amounted to (2.060) and the value of (Beta) for T-test
Subject: Computer Science And Mathematics, Mathematics Keywords: spectral collocation; Chebfun; singular Schrodinger; high index eigenpairs; multiple eigenpairs; accuracy; numerical stability
Online: 26 November 2020 (11:07:47 CET)
We are concerned with the use of some classical spectral collocation methods, as well as the new software system Chebfun, to compute high-order (high-index) eigenpairs of singular as well as regular Schrödinger eigenproblems. We want to highlight both the qualities and the shortcomings of these methods and evaluate them against the usual ones. In order to resolve a boundary singularity we use Chebfun with a simple domain truncation technique. Although this method is equally easy to apply with spectral collocation, things are more nuanced in the case of these methods: a special technique to introduce the boundary conditions, as well as a coordinate transform that maps an unbounded domain onto a finite one, are the ingredients. A challenging set of "hard" benchmark problems, for which the usual numerical methods (finite differences, finite elements, shooting, etc.) fail, is analysed. In order to separate "good" and "bad" eigenvalues we estimate the drift of the set of eigenvalues of interest with respect to the order of approximation and/or the domain scaling parameter. This automatically provides a measure of the error within which the eigenvalues are computed and a hint on numerical stability. We pay particular attention to problems with almost multiple eigenvalues as well as to problems with a mixed (continuous) spectrum; in the latter case we try to numerically highlight its existence. Special attention is also paid to the higher eigenpairs (an eigenvalue and the corresponding eigenfunction, approximated by an eigenvector spanning its nodal values).
ARTICLE | doi:10.20944/preprints202005.0052.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: COVID-19 infection; CT scan image; serial feature fusion; KNN classifier; segmentation; detection accuracy
Online: 5 May 2020 (02:32:05 CEST)
The Coronavirus disease (COVID-19), caused by a novel coronavirus, SARS-CoV-2, has been declared a global pandemic. Due to its infection rate and severity, it has emerged as one of the major global threats of the current generation. To support the current combat against the disease, this research aims to propose a machine-learning-based pipeline to detect COVID-19 infection using lung Computed Tomography scan images (CTI). The implemented pipeline consists of a number of sub-procedures ranging from segmenting the COVID-19 infection to classifying the segmented regions. The initial part of the pipeline implements the segmentation of the COVID-19-affected CTI using Social Group Optimization and Kapur's entropy thresholding, followed by k-means clustering and morphology-based segmentation. The next part of the pipeline implements feature extraction, selection and fusion to classify the infection. A PCA-based serial fusion technique is used to fuse the features, and the fused feature vector is then employed to train, test and validate four different classifiers, namely Random Forest, k-Nearest Neighbors (KNN), Support Vector Machine with Radial Basis Function, and Decision Tree. Experimental results using benchmark datasets show a high accuracy (> 91%) for the morphology-based segmentation task, and for the classification task the KNN offers the highest accuracy among the compared classifiers (> 87%). However, it should be noted that this method still awaits clinical validation, and therefore should not be used to clinically diagnose an ongoing COVID-19 infection.
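The serial fusion step mentioned above reduces each extracted feature set with PCA and concatenates the results into a single vector before classification. A hedged scikit-learn sketch of that idea follows; the feature arrays are random stand-ins (so the printed accuracy is meaningless), and the component counts are arbitrary choices, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    features_a = rng.normal(size=(200, 64))   # stand-in for one feature set (e.g. texture)
    features_b = rng.normal(size=(200, 32))   # stand-in for another (e.g. shape)
    labels = rng.integers(0, 2, size=200)

    # PCA-based serial fusion: reduce each set, then concatenate into one vector
    fused = np.hstack([PCA(n_components=10).fit_transform(features_a),
                       PCA(n_components=10).fit_transform(features_b)])

    X_train, X_test, y_train, y_test = train_test_split(fused, labels, test_size=0.3, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print("KNN accuracy on held-out split:", accuracy_score(y_test, knn.predict(X_test)))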
ARTICLE | doi:10.20944/preprints201810.0379.v1
Subject: Social Sciences, Psychology Keywords: surgical simulator training; individual performance trend; speed-accuracy function; automatic detection; performance feed-back
Online: 17 October 2018 (08:40:08 CEST)
Simulator training for image-guided surgical interventions may benefit from artificial intelligence systems that control the evolution of task skills in terms of time and precision of a trainee's performance on the basis of fully automatic feed-back systems. At the earliest stages of training, novice trainees frequently focus on getting faster at the task, and may thereby compromise the optimal evolution of the precision of their performance. For automatically guiding them towards attaining an optimal speed-accuracy trade-off, an effective control system for the reinforcement/correction of strategies must be able to exploit the right individual performance criteria in the right way, reliably detect individual performance trends at any given moment in time, and alert the trainee, as early as necessary, when to slow down and focus on precision, or when to focus on getting faster. This article addresses several aspects of this challenge for speed-accuracy controlled simulator training before any training on specific surgical tasks or clinical models should be envisaged. Analyses of individual learning curves from the simulator training sessions of novices and benchmark performance data of one expert surgeon, who had no specific training in the simulator task, validate the suggested approach.
ARTICLE | doi:10.20944/preprints202310.0099.v1
Subject: Engineering, Bioengineering Keywords: Intelligent Image Recognition; Left and Right Upper Limb Dislocation Surgery; Accuracy Rate; Recall Rate; IRB
Online: 3 October 2023 (09:17:35 CEST)
Our image recognition system uses a deep learning model to judge whether the upper limb in an image is the left or the right upper limb, so that the doctor can then confirm the correct surgical position. From the experimental results, the precision rate and recall rate of the intelligent image recognition system proposed in this paper for preventing left-right errors in upper limb dislocation surgery reach 98% and 93%, respectively. This demonstrates that our artificial intelligent image recognition system, AIIRS, can indeed assist orthopedic surgeons in preventing left-right confusion in upper limb surgery. Based on the prototype experimental results, the IRB application has been approved, and the second phase of human trials will be conducted in the future. These results indicate that the work presented in this paper will be of great benefit and research value to upper limb orthopedic surgery.
ARTICLE | doi:10.20944/preprints202309.0290.v1
Subject: Engineering, Bioengineering Keywords: Intelligent Image Recognition; Left and Right Upper Limb Dislocation Surgery; Accuracy Rate; Recall Rate; IRB
Online: 5 September 2023 (09:20:54 CEST)
Our image recognition system uses a deep learning model to judge whether the upper limb in an image is the left or the right upper limb, so that the doctor can then confirm the correct surgical position. From the experimental results, the precision rate and recall rate of the intelligent image recognition system proposed in this paper for preventing left-right errors in upper limb dislocation surgery reach 98% and 93%, respectively. This demonstrates that our intelligent image recognition system can indeed assist orthopedic surgeons in preventing left-right confusion in upper limb surgery. Based on the prototype experimental results, the IRB application has been approved, and the second phase of human trials will be conducted in the future. These results indicate that the work presented in this paper will be of great benefit and research value to upper limb orthopedic surgery.
ARTICLE | doi:10.20944/preprints202201.0060.v1
Subject: Medicine And Pharmacology, Urology And Nephrology Keywords: extra-prostatic extension; magnetic resonance imaging; radical prostatectomy; nerve-sparing; prostate cancer; staging; diagnostic accuracy
Online: 6 January 2022 (10:05:55 CET)
The accuracy of multi-parametric MRI (mpMRI) in the pre-operative staging of prostate cancer (PCa) remains controversial. Objective: To evaluate the ability of mpMRI to accurately predict PCa extra-prostatic extension (EPE) on a side-specific basis using a risk-stratified 5-point Likert scale. This study also aimed to assess the influence of mpMRI scan quality on diagnostic accuracy. Patients and Methods: We included 124 men who underwent robot-assisted radical prostatectomy (RARP) as part of the NeuroSAFE PROOF study at our centre. Three radiologists retrospectively reviewed mpMRI blinded to RP pathology and assigned a Likert score (1-5) for EPE on each side of the prostate. Each scan was also ascribed a Prostate Imaging Quality (PI-QUAL) score for assessing the quality of the mpMRI scan, where 1 represents the poorest and 5 the best diagnostic quality. Outcome measurements and statistical analyses: Diagnostic performance is presented for binary classification of EPE, including 95% confidence intervals and the area under the receiver operating characteristic curve (AUC). Results: A total of 231 lobes from 121 men (mean age 56.9 years) were evaluated; 39 men (32.2%) and 43 lobes (18.6%) had EPE. A Likert score ≥3 had sensitivity (SE), specificity (SP), NPV, and PPV of 90.4%, 52.3%, 96%, and 29.9%, respectively, and the AUC was 0.82 (95% CI: 0.77-0.86). The AUC was 0.63 (95% CI: 0.37-0.9), 0.77 (0.71-0.84) and 0.92 (0.88-0.96) for biparametric scans, PI-QUAL 1-3 and PI-QUAL 4-5 scans, respectively. Conclusions: MRI can be used effectively by genitourinary radiologists to rule out EPE and help inform surgical planning for men undergoing RARP. EPE prediction was more reliable when the MRI scan was a) multi-parametric and b) of higher image quality according to the PI-QUAL scoring system.
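The side-specific analysis boils down to scoring each lobe on the 5-point Likert scale, computing the AUC over the full scale, and dichotomising at a chosen cut-off (here >=3) to obtain sensitivity and specificity. The Python sketch below illustrates that workflow with invented scores and outcomes, not the study's data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical per-lobe Likert scores (1-5) and pathology-confirmed EPE status (0/1)
    likert = np.array([1, 2, 3, 5, 4, 2, 3, 1, 5, 2, 4, 3])
    epe = np.array([0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0])

    auc = roc_auc_score(epe, likert)   # AUC over the full 5-point scale
    pred = likert >= 3                 # dichotomise at Likert >= 3
    tp = np.sum(pred & (epe == 1)); fp = np.sum(pred & (epe == 0))
    fn = np.sum(~pred & (epe == 1)); tn = np.sum(~pred & (epe == 0))
    print(f"AUC={auc:.2f}, sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")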
ARTICLE | doi:10.20944/preprints202105.0165.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: Intraoral Scanners; Intra-Oral Scanners; CAD/CAM; Digital Dentistry; Trueness; Precision; Accuracy; Scanners; Lab Scanners
Online: 10 May 2021 (10:44:19 CEST)
(1) Background: The purpose of this study is to evaluate the full arch scan accuracy (precision and trueness) of nine digital intra-oral scanners and four lab scanners. Previous studies have compared the accuracy of some intra-oral scanners, but as this is a field of quickly developing technologies, a more up-to-date study was needed to assess the capabilities of currently available models. (2) Methods: The present in vitro study compared nine different intraoral scanners (Omnicam 4.6; Omnicam 5.1; Primescan; CS 3600; Trios 3; Trios 4; Runyes; i500 and DL206) as well as four lab light scanners (Einscan SE; 300e; E2 and Ineos X5) to investigate the accuracy of each scanner by examining overall trueness and precision. Ten aligned and cut scans from each of the intra-oral and lab scanners in the in vitro study were brought into CloudCompare. A comparison was made with the master STL using the CloudCompare 3D analysis best-fit algorithm. The results were recorded along with the individual standard deviation and a colorimetric map of the deviation across the surface of the STL mesh; the comparison with the master STL was quantified at specific points. (3) Results: In the present study, the Primescan had the best overall trueness (17.3 ± 4.9), followed by (in order of increasing deviation) the Trios 4 (20.8 ± 6.2), i500 (25.2 ± 7.3), CS3600 (26.9 ± 15.9), Trios 3 (27.7 ± 6.8), Runyes (47.2 ± 5.4), Omnicam 5.1 (55.1 ± 9.5), Omnicam 4.6 (57.5 ± 3.2) and Launca DL206 (58.5 ± 22.0). Regarding the lab light scanners, the Ineos X5 had the best overall trueness (0.0 ± 1.9), followed by (in order of increasing deviation) the 3Shape E2 (3.6 ± 2.2), Up3D 300E (12.8 ± 2.7), and Einscan SE (14.9 ± 9.5). (4) Conclusions: This study confirms that all current generations of intra-oral digital scanners can capture a reliable, reproducible full arch scan in dentate patients. Of the intra-oral scanners tested, none produced results significantly similar in trueness to the Ineos X5; however, the Primescan was the only one to be statistically of a similar level of trueness to the 3Shape E2 lab scanner. All scanners in the study had a mean trueness deviation of under 60 microns. While this study can compare the scanning accuracy of this sample in a dentate arch, the scanning of a fully edentulous arch is more challenging; the accuracy of these scanners in edentulous cases should be examined in further studies.
ARTICLE | doi:10.20944/preprints201802.0060.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: surveying; close-range photogrammetry; internal coincidence precision estimation; external coincidence accuracy estimation; experimental work; testing
Online: 7 February 2018 (10:28:16 CET)
Precision and accuracy estimation is an important index used to reflect the measurement performance and quality of a measurement system. To reveal the significance and connotations of the precision and accuracy estimation index of a close-range photogrammetry system, several common precision and accuracy estimation methods used in close-range photogrammetry are explained from a theoretical perspective, and the mechanism of the internal coincidence precision estimation and the external coincidence accuracy estimation are deduced, respectively. Through detailed experimental design and testing, the validity and reliability of the proposed precision and accuracy estimation methods are verified, which provides strong evidence for the quality control, optimisation, and evaluation of the measurement results from a close-range photogrammetry system. At the same time, it has significance for the further development of precision and accuracy estimation analysis of close-range photogrammetry systems.
REVIEW | doi:10.20944/preprints202311.1630.v1
Subject: Physical Sciences, Other Keywords: Nonlinear Schrödinger Equation; NLSE; energy-conserving methods; Hamiltonian Boundary Value Methods; HBVMs; spectral accuracy
Online: 28 November 2023 (01:44:26 CET)
In this review we collect some recent achievements in the accurate and efficient solution of the Nonlinear Schrödinger Equation (NLSE), with the preservation of its Hamiltonian structure. This is achieved by using the energy-conserving Runge-Kutta methods named Hamiltonian Boundary Value Methods (HBVMs) after a proper space semi-discretization. The main facts about HBVMs, along with their application for solving the given problem, are recalled and explained in detail. In particular, we recall their use as spectral methods in time, which allows the problems to be solved efficiently with spectral space-time accuracy.
ARTICLE | doi:10.20944/preprints202305.1001.v1
Subject: Computer Science And Mathematics, Computational Mathematics Keywords: Generalized Legendre wavelet; operational matrix of integration; linear differential equations; product operation matrix; convergence analysis; accuracy
Online: 15 May 2023 (08:05:45 CEST)
In this work, we offer a novel and accurate method for finding the solution of linear differential equations over the interval [0, 1), based on a generalization of Legendre wavelets. The mechanism rests on a workable implementation of the operational matrix of integration and its derivatives. This method reduces the problems to algebraic equations via the properties of the generalized Legendre wavelet (GLW) together with the operational matrix of integration. The function approximation has been chosen in such a way that the connection coefficients can be enumerated in a facile manner. The proposed numerical technique, based on the GLW, has been examined on three linear problems as part of this investigation. The outcomes have shown that this method, as opposed to some other existing numerical and analytical methods, is very useful and advantageous for tackling such problems.
ARTICLE | doi:10.20944/preprints202202.0197.v1
Subject: Public Health And Healthcare, Health Policy And Services Keywords: public health; occupational; Covid; SARS-CoV-2; work; job exposure matrix; JEM; compensation; predictivity; validity; accuracy
Online: 16 February 2022 (09:47:18 CET)
Background. We aimed to assess the validity of the Mat-O-Covid Job Exposure Matrix (JEM) on SARS-CoV-2 using compensation data from the French National Health Insurance compensation system for occupation-related COVID-19. Methods. Deidentified compensation data for occupational COVID-19 in France were obtained between August 2020 and August 2021. Acceptance of the claim was considered the reference. Mat-O-Covid is an expert-based French JEM on workplace exposure to SARS-CoV-2. Bivariate and multivariate models were used to study the association between the exposure assessed by Mat-O-Covid and the reference, as well as the Area Under the Curve (AUC), sensitivity, specificity, predictive values, and likelihood ratios. Results. In the 1140 cases included, there was a close association between the Mat-O-Covid index and the reference (p<0.0001). The overall predictivity was good, with an AUC of 0.78 and an optimal threshold at 13 per thousand. Using Youden's J statistic resulted in a sensitivity of 0.67 and a specificity of 0.87. Both the positive and negative likelihood ratios were significant: respectively 4.9 [2.4-6.4] and 0.4 [0.3-0.4]. Discussion. It was possible to assess Mat-O-Covid's validity using data from the national compensation system for occupational COVID-19. Though further studies are needed, the Mat-O-Covid exposure assessment appears to be accurate enough to be used in research.
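Youden's J and the likelihood ratios used here are simple functions of sensitivity and specificity, as the short helper below shows. The values passed in are of the same order as those reported, purely for illustration.

    def youden_and_likelihood_ratios(sensitivity, specificity):
        """Youden's J statistic and the positive/negative likelihood ratios."""
        j = sensitivity + specificity - 1.0
        lr_pos = sensitivity / (1.0 - specificity)
        lr_neg = (1.0 - sensitivity) / specificity
        return j, lr_pos, lr_neg

    print(youden_and_likelihood_ratios(sensitivity=0.67, specificity=0.87))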
REVIEW | doi:10.20944/preprints202004.0155.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: COVID-19; Coronavirus; False-negative; Nucleic Acid Test; Screening; Diagnostic Accuracy; Missed Diagnosis; Epidemic; Infectious Disease
Online: 9 April 2020 (14:37:56 CEST)
Reliable methods to confirm the diagnosis of COVID-19 are essential to the successful management and containment of the virus. Current diagnostic options are limited in type, supply, and reliability. This article explores the controversial unreliability of existing diagnostic methods and maintains that more reliable diagnostic methods, combinations, and sequencing are necessary to reduce the occurrence of patients being discharged on the basis of false-negative test results. This reduction would in turn reduce transmission of the disease.
Subject: Medicine And Pharmacology, Other Keywords: brain-computer Interface; cognitive aging; steady-state visual evoked potential, neural network; detection accuracy; band power
Online: 13 May 2019 (08:32:23 CEST)
Cognitive deterioration caused by illness or aging often occurs before symptoms arise, and its timely diagnosis is crucial to reducing its medical, personal, and societal impacts. Brain-Computer Interfaces (BCIs) stimulate and analyze key cerebral rhythms, enabling reliable cognitive assessment that can accelerate diagnosis. The BCI system presented here analyzes Steady-State Visually Evoked Potentials (SSVEPs) elicited in subjects of varying age to detect cognitive aging, predict its magnitude, and identify its relationship with SSVEP features (band power and frequency detection accuracy), which were hypothesized to indicate cognitive decline due to aging. The BCI system was tested with subjects of varying age to assess its ability to detect aging-induced cognitive deterioration. Rectangular stimuli flickering at theta, alpha, and beta frequencies were presented to subjects, and frontal and occipital EEG responses were recorded. These were processed to calculate the frequency detection accuracy for each subject and the SSVEP band power. A neural network was trained on these features to predict cognitive age. The results showed potential cognitive deterioration through age-related variations in the SSVEP features: frequency detection accuracy declined after the 20-40 age group, and band power declined across all age groups. SSVEPs generated at theta and alpha frequencies, especially 7.5 Hz, were the best indicators of cognitive deterioration; here, frequency detection accuracy consistently declined after the 20-40 age group, from an average of 96.64% to 69.23%. The presented system can be used as an effective diagnostic tool for age-related cognitive decline.
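Band power at a flicker frequency is typically estimated by integrating the EEG power spectral density over a narrow band around that frequency. The sketch below (Welch PSD via SciPy) is an assumption about how such a feature could be computed; the half-bandwidth, sampling rate and synthetic signal are illustrative choices, not the study's processing parameters.

    import numpy as np
    from scipy.signal import welch

    def ssvep_band_power(eeg, fs, target_hz, half_band=0.5):
        """Power of an EEG channel in a narrow band around the stimulus frequency."""
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)
        mask = (freqs >= target_hz - half_band) & (freqs <= target_hz + half_band)
        return np.sum(psd[mask]) * (freqs[1] - freqs[0])   # integrate PSD over the band

    # Synthetic 7.5 Hz SSVEP-like signal embedded in noise (for illustration only)
    fs = 250
    t = np.arange(0, 20, 1 / fs)
    eeg = 2.0 * np.sin(2 * np.pi * 7.5 * t) + np.random.default_rng(0).normal(size=t.size)
    print(ssvep_band_power(eeg, fs, target_hz=7.5))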
ARTICLE | doi:10.20944/preprints201712.0100.v2
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: cold-water coral; carbonate mound; habitat mapping; spatial prediction; image segmentation; GEOBIA; random forest; accuracy, confidence
Online: 18 January 2018 (16:08:36 CET)
Cold-water coral reefs are rich, yet fragile ecosystems found in colder oceanic waters. Knowledge of their spatial distribution on continental shelves, slopes, seamounts and ridge systems is vital for marine spatial planning and conservation. Cold-water corals frequently form conspicuous carbonate mounds of varying sizes, which are identifiable from multibeam echosounder bathymetry and derived geomorphometric attributes. However, the often large number of mounds makes manual interpretation and mapping a tedious process. We present a methodology that combines image segmentation and random forest spatial prediction with the aim to derive maps of carbonate mounds and an associated measure of confidence. We demonstrate our method based on multibeam echosounder data from Iverryggen on the mid-Norwegian shelf. We identified the image-object mean planar curvature as the most important predictor. The presence and absence of carbonate mounds is mapped with high accuracy (overall accuracy = 84.4%, sensitivity = 0.827 and specificity = 0.866). Spatially-explicit confidence in the predictions is derived from the predicted probability and whether the predictions are within or outside the modelled range of values and is generally high. We plan to apply the showcased method to other areas of the Norwegian continental shelf and slope where MBES data have been collected with the aim to provide crucial information for marine spatial planning.
ARTICLE | doi:10.20944/preprints202303.0236.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: Additive Manufacturing; Selective Paste Intrusion; Print Quality; Print Nozzle to Particle Bed Distance; Shape accuracy; 3D scanning
Online: 13 March 2023 (15:35:22 CET)
The Selective Paste Intrusion (SPI) method is a layer-by-layer additive manufacturing technique that allows the production of complex geometries in concrete elements by selectively bonding aggregates with cement paste in a particle bed. To create reinforced concrete, the Wire and Arc Additive Manufacturing (WAAM) process is to be integrated into SPI. This technique allows the production of almost freely formed reinforcement and thus complements SPI's ability to produce free-formed structures of almost any geometry. However, integrating WAAM into SPI poses a considerable challenge, as high temperatures are generated during the welding process. These temperatures can negatively affect the rheological properties of the cement paste, and in turn the penetration behavior of the paste in the particle bed and, subsequently, the mechanical properties of the hardened concrete. A possible passive cooling strategy is to increase the protruding length of the reinforcement bars out of the particle bed. This in turn requires that the distance from the print nozzle to the particle bed be increased, since the nozzle must be able to move across the reinforcement. The objective was thus to investigate the effect of that distance on print quality and to quantify the maximum allowable distance for adequate print quality (for the printer settings used) in terms of shape accuracy and concrete strength. Compressive and flexural strength tests, as well as geometrical measurements using a 3D scanning method, were performed on specimens printed with varying print-nozzle-to-particle-bed distances. For the SPI print heads, nozzle types and parameter settings used, the distance between the nozzle and the particle bed should not exceed 50 mm to ensure sufficient print quality in terms of both shape accuracy and mechanical strength.
ARTICLE | doi:10.20944/preprints202007.0656.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: COVID-19 infection; Chest X-ray image; generalized regression neural network; probabilistic neural network and detection accuracy
Online: 27 July 2020 (00:52:49 CEST)
Coronavirus disease (COVID-19) has infected more than 10 million people around the globe and killed at least 500,000 worldwide as of the end of June 2020. As the disease continues to evolve, scientists and researchers around the world are trying to find the most effective ways to combat it. Chest X-rays are a widely available modality for immediate care in diagnosing COVID-19, and precise detection and diagnosis of COVID-19 from these chest X-rays would be practical in the current situation. This paper proposes a one-shot cluster-based approach for the accurate detection of COVID-19 from chest X-rays. The main objective of one-shot learning (OSL) is to mimic the way humans learn in order to make classifications or predictions on a wide range of similar but novel problems. The core constraint of this type of task is that the algorithm must decide on the class of a test instance after seeing just one example. For this purpose we have experimented with the widely known Generalized Regression and Probabilistic Neural Networks. Experiments conducted with publicly available chest X-ray images demonstrate that the method can detect COVID-19 accurately with high precision. The obtained results outperform many of the existing convolutional neural network based methods proposed in the literature.
ARTICLE | doi:10.20944/preprints201811.0025.v1
Subject: Chemistry And Materials Science, Materials Science And Technology Keywords: additive manufacturing; selective laser melting; AlSi10Mg; Al6061; SLM process parameters; powder characterization; density, surface topology; dimensional accuracy
Online: 2 November 2018 (06:19:40 CET)
Additive manufacturing (AM) of high strength Al alloys promises to enhance the performance of critical components related to various aerospace and automotive applications. The key advantage of AM is its ability to generate lightweight, robust, and complex shapes. However, the characteristics of the as-built parts may represent an obstacle to satisfy the part quality requirements. The current study investigates the influence of selective laser melting (SLM) process parameters on the quality of parts fabricated from different Al alloys. A design of experiment (DOE) is used to analyze relative density, porosity, surface roughness, and dimensional accuracy according to the interaction effect between the SLM process parameters. The results show a range of energy densities and SLM process parameters for the AlSi10Mg and Al6061 alloys needed to achieve “optimum” values for each performance characteristic. A process map is developed for each material by combining the optimized range of SLM process parameters for each characteristic to ensure good quality of the as-built parts. The second part of this study investigates the effect of SLM process parameters on the microstructure and mechanical properties of the same Al alloys. This comprehensive study is also aimed at reducing the amount of post-processing needed.
ARTICLE | doi:10.20944/preprints202310.0302.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: recommendation systems, personalized recommendations, ResNetMF, Residual Network Matrix Factorization, deep residual network, recommendation accuracy, linear and nonlinear relationships
Online: 6 October 2023 (08:23:50 CEST)
In this paper, we introduce ResNetMF, a groundbreaking approach that harnesses the power of residual network matrix factorization to revolutionize recommendation systems. ResNetMF integrates residual networks, renowned for their ability to capture intricate patterns and features, with matrix factorization techniques that excel in modelling user-item interactions. This fusion presents a novel solution that surpasses the limitations of traditional recommendation systems. Through comprehensive experimentation and evaluation of diverse datasets, ResNetMF demonstrates remarkable enhancements in recommendation accuracy and efficiency. By effectively capturing both linear and nonlinear relationships in user-item interactions, ResNetMF provides superior recommendation quality. The outcomes from experiments unequivocally highlight the superiority of ResNetMF over existing state-of-the-art recommendation approaches, thereby validating its innovative nature and underscoring its potential to shape the future of recommendation systems. Through the integration of the deep residual network, ResNetMF approach facilitates the training of neural networks, enabling them to explore the underlying data layers more comprehensively. Extensive experimentation and evaluation across various datasets provide compelling evidence for the superiority of ResNetMF. Moreover, the proposed method utilized natural language processing (NLP) techniques for targeted information dissemination in recommendation systems, emphasizing the importance of personalized and relevant recommendations for user satisfaction and engagement.
ARTICLE | doi:10.20944/preprints202307.0214.v1
Subject: Medicine And Pharmacology, Cardiac And Cardiovascular Systems Keywords: 3D printing; accuracy; calcification; cardiovascular disease; computed tomography; coronary artery disease; coronary stenosis; micro-computed tomography; plaque; synchrotron radiation
Online: 4 July 2023 (11:15:11 CEST)
Synchrotron radiation computed tomography (SRCT) allows more accurate calcified plaque and coronary stenosis assessment as a result of its superior spatial resolution; however, typical micro-computed tomography (micro-CT) systems have even higher resolution. The purpose of this study was to compare the performance of high-resolution micro-CT with SRCT in the assessment of calcified plaques, using a previously published dataset of coronary stenosis assessment. This experimental study involved micro-CT scanning of three-dimensionally printed coronary artery models with calcification in situ, used in our previously published SRCT study on coronary stenosis assessment. Measurements of coronary stenosis obtained with the two modalities were compared using a paired-sample t-test. The degrees of stenosis measured on all but one micro-CT dataset were statistically significantly lower than the corresponding SRCT measurements reported in our previous paper (p<0.0005-0.05). This indicates that the superior spatial resolution of micro-CT was able to further reduce the over-estimation of stenosis caused by extensive calcification of the coronary arteries, and hence false-positive results. This study shows that high-resolution micro-CT outperforms SRCT in both calcified plaque and coronary stenosis assessment. This finding will become clinically important for cardiovascular event prediction and will enable the reclassification of individuals with low and intermediate risk into appropriate risk categories once the technical challenges of micro-CT in clinical practice, such as the small field of view and the high demands on image processing power, are addressed.
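The statistical comparison reported here is a standard paired-sample t-test on the per-model stenosis measurements from the two modalities. A minimal SciPy sketch follows; the percent-stenosis values are invented for illustration and are not the study's measurements.

    import numpy as np
    from scipy import stats

    # Hypothetical percent-stenosis measurements of the same printed models
    srct_stenosis = np.array([62.0, 55.5, 71.2, 48.9, 66.3])
    micro_ct_stenosis = np.array([58.1, 52.0, 66.7, 45.3, 63.0])

    t_stat, p_value = stats.ttest_rel(srct_stenosis, micro_ct_stenosis)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")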