ARTICLE | doi:10.20944/preprints202212.0123.v2
Online: 9 December 2022 (10:10:08 CET)
(1) Background: This study aims to validate the use of Bloom's revised taxonomy as an instrument for designing assessment tests; (2) Methods: The instrument was validated by external judges, as well as by teachers and students, using Aiken's V; (3) Results: Judges, teachers and students consider Bloom's revised taxonomy an effective tool for designing assessment tests; (4) Conclusions: Using Bloom's revised taxonomy as a model for designing assessment tests promotes learning.
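Aiken's V, the agreement coefficient used above, has a simple closed form: for n judges rating an item on a c-point scale, V = Σ(rating − lowest category) / (n(c − 1)). A minimal sketch with hypothetical ratings (a 1-5 scale is assumed; these are not the study's data):

```python
# Aiken's V for a single item: V = sum(rating - lowest) / (n * (c - 1)),
# where n is the number of judges and c the number of rating categories.
def aikens_v(ratings, lo=1, hi=5):
    n = len(ratings)
    c = hi - lo + 1
    s = sum(r - lo for r in ratings)
    return s / (n * (c - 1))

# five hypothetical judges rating an item on a 1-5 scale
print(aikens_v([5, 4, 5, 5, 4]))  # 0.9
```

Values close to 1 indicate strong agreement that the item is valid; cut-offs around 0.70-0.75 are commonly used in the validation literature.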
ARTICLE | doi:10.20944/preprints201807.0181.v1
Subject: Environmental And Earth Sciences, Oceanography Keywords: Salinity; Coastal; Upwelling; Validation
Online: 10 July 2018 (13:41:28 CEST)
Data from NASA’s Soil Moisture Active Passive Mission (SMAP) and from the California Cooperative Oceanic Fisheries Investigations (CalCOFI) were used to examine the freshening that occurred during 2015-2016 in the Southern California Current System. Overall, the freshening was found to be related to the 2014-2016 Northeast Pacific Warm Anomaly. The primary goal was to determine the feasibility of using SMAP data to observe the surface salinity signal associated with the warming. As a first step, direct comparisons were made with salinity from the CalCOFI data at one-meter depth. During 2015, SMAP was saltier than CalCOFI by 0.5 PSU, but biases were reduced to < 0.1 PSU during 2016. South of 33°N, and within 100 km of the coast, SMAP was fresher in 2015 by almost 0.2 PSU, while CalCOFI showed freshening of 0.1 PSU. North of 33°N, SMAP and CalCOFI saw significant freshening in 2016, SMAP by 0.4 PSU and CalCOFI by 0.2 PSU. Differences between SMAP and CalCOFI are consistent with the increased stratification in 2015 and changes in the mixed layer depth.
TECHNICAL NOTE | doi:10.20944/preprints202206.0097.v1
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: InSAR; deformation; validation; GPS; consistency
Online: 7 June 2022 (08:12:53 CEST)
InSAR and associated analytic methods can measure surface deformation from low Earth orbit with a claimed accuracy of centimeters to millimeters. The realized accuracy depends on the area being measured and on the choice of analytic method, suggesting that one should choose a method in response to the area being measured. Here we consider a specific fixed analytic method and compare the results it produces to measurements gathered by other means in a variety of settings. In particular, we compare Sentinel-1 InSAR with GPS at the Kilauea volcano around the 2018 eruption, with GPS in the city of Arica, Chile, and with public survey data at a decommissioned tailings mine. In addition, we compare two independent Sentinel-1 InSAR analyses for a railway station in Oslo, Norway. Our goal is to estimate the accuracy of a fully automated Sentinel-1 InSAR pipeline in various settings. We conclude that centimeter-level accuracy is a reasonable claim in many, but not all, settings, and that accuracy is typically not lost by using an automated pipeline instead of hand-selecting and tuning parameters.
CONCEPT PAPER | doi:10.20944/preprints202006.0294.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: metrics; algor-ethics; evaluation; validation
Online: 24 June 2020 (09:48:04 CEST)
Ethics is a research field that is attracting more and more attention in Computer Science due to the proliferation of artificial intelligence software, machine learning algorithms, robot agents (such as chatbots), and so on. Indeed, ethics research has so far produced a set of guidelines, such as ethical codes, to be followed by people involved in Computer Science. However, little effort has been spent on producing formal requirements to be included in the design process of software intended to act ethically with users. In this paper, we investigate the issues that make a software product ethical and propose a set of metrics to quantitatively evaluate whether a software product can be considered ethical or not.
ARTICLE | doi:10.20944/preprints201704.0159.v1
Subject: Environmental And Earth Sciences, Space And Planetary Science Keywords: YG-13A; geometric accuracy; validation
Online: 25 April 2017 (11:19:25 CEST)
YG-13A represents the highest level of Chinese SAR satellites to date. In this paper, we report on experiments conducted to improve and validate the ranging accuracy of YG-13A. We analyze the error sources in the YG-13A ranging system, such as atmospheric path delay and transceiver channel delay. A real-time atmospheric delay correction model is established to calculate the atmospheric path delay, considering both tropospheric and ionospheric delays. Six corner reflectors (CRs) were set up to ensure the accuracy of the validation methods. Pixel location accuracies of up to 0.479 m standard deviation can be achieved after a complete calibration. We further demonstrate that adjustment of the CRs can cause a marginal loss of ranging precision; after eliminating this error, the ranging accuracy improves to 0.237 m. YG-13A uses a single-frequency GPS receiver with a nominal orbital accuracy of 0.3 m, which is the largest factor restricting its ranging accuracy. Our results show that YG-13A can achieve decimeter-level ranging accuracy, below the centimeter-level accuracy of TerraSAR-X, which carries a dual-frequency GPS receiver. Nevertheless, YG-13A offers great convenience in accessing control points and locating targets without depending on ground equipment.
ARTICLE | doi:10.20944/preprints202308.2045.v1
Subject: Engineering, Automotive Engineering Keywords: Automotive Radar; Validation Measurements; Sensor Models
Online: 30 August 2023 (13:53:29 CEST)
Virtual validation of radar sensor models is becoming increasingly important for the safety validation of automated driving functions (ADFs). Therefore, methods for the quantitative comparison of radar measurements in the context of model validation need to be developed. This paper presents a novel methodology for assessing and quantifying validation measurements of radar sensor models. The method uses empirical distribution functions (EDFs) and the so-called DVM to effectively quantify deviations between distributions. By applying this metric, the study measures the consistency, reproducibility and repeatability of radar sensor measurements. Different interfaces and different levels of detail are investigated. By comparing radar signals from real-world experiments in which different objects are present, valuable insights are gained into the performance of the sensor. In particular, the research extends to assessing the impact of varying rain intensities on the measurement results, providing a comprehensive understanding of the sensor’s behaviour under these conditions. This holistic approach significantly advances the evaluation of radar sensor capabilities and enables quantification of the maximum required quality of radar simulation models.
ARTICLE | doi:10.20944/preprints202308.0914.v1
Subject: Public Health And Healthcare, Public, Environmental And Occupational Health Keywords: sustainable diet; behaviour; validation; university students
Online: 11 August 2023 (09:51:47 CEST)
Sustainable diet behaviour is crucial in ensuring food security and a healthy life with low environmental impacts, for present and future generations. However, the sustainable diet is a new concept both globally and in Malaysia. Therefore, this study aimed to adapt and validate a questionnaire on sustainable diet behaviours, as well as to assess the levels of, and willingness towards, sustainable diet behaviour and its association with socio-demographic characteristics among university students in a public university in Malaysia. The final questionnaire comprised three factors (perceived effectiveness, local/seasonal food and behavioural control). The questionnaire displayed acceptable factor loadings (0.57-0.79) with a total variance of 50.4% in the Exploratory Factor Analysis and demonstrated relative fitness in the Confirmatory Factor Analysis. These findings showed that the adapted questionnaire was valid and could be used in the assessment of sustainable diet behaviour among young adults. In the survey, the participants showed moderately positive sustainable diet behaviour. Females and Indian participants were more likely to purchase seasonal/local foods and choose sustainable food products (p<0.05). Participants in the preparation and action stages for sustainable diet behaviour had higher scores in all three factors, individually or combined. These findings should serve as the basis for future studies among the general population and for intervention programs promoting sustainable diet behaviour in the country.
ARTICLE | doi:10.20944/preprints202212.0262.v1
Subject: Social Sciences, Psychology Keywords: temperament; measurement; mood disorder; validation; Korean
Online: 15 December 2022 (03:05:48 CET)
Background and Objectives: The Temperament Evaluation of Memphis, Pisa, Paris and San Diego Autoquestionnaire (TEMPS-A) is designed to assess affective temperaments. The short version of TEMPS-A (TEMPS-A-SV) has been translated into diverse languages for broad application in research and clinical settings. However, no study has validated the Korean version of TEMPS-A-SV among patients with mood disorders. The purpose of this study is to examine the reliability and validity of the TEMPS-A-SV in Korean patients with mood disorders. Materials and Methods: In this cross-sectional retrospective study, a total of 715 patients (267 with major depressive disorder, 94 with bipolar I disorder, and 354 with bipolar II disorder) completed the Korean TEMPS-A-SV. Cronbach's alpha and McDonald's omega were used to assess reliability. Exploratory factor analysis (EFA) was also performed. Spearman's correlation coefficient was used to examine associations among the five temperaments. Differences in the five temperament scores between gender and diagnosis groups were analyzed, and correlations between the temperament scores and age were tested. Results: The Korean TEMPS-A-SV displayed good internal consistency (α = 0.65–0.88, ω = 0.66–0.90) and significant correlations between the subscales, except between hyperthymic and anxious. Using EFA, a two-factor structure was produced: Factor I (cyclothymic, depressive, irritable, and anxious) and Factor II (hyperthymic). The cyclothymic temperament score differed by gender, and the anxious temperament score was significantly correlated with age. All the temperaments except the irritable temperament showed significant differences between diagnosis groups. Conclusions: Overall, our findings indicate that the TEMPS-A-SV is a valid and reliable measure for estimating affective temperaments among Koreans and confirm its validity among Korean patients with mood disorders. However, more study is required on affective temperaments and associated characteristics in people with mood disorders.
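Cronbach's alpha, the internal-consistency estimate reported above, is k/(k−1) · (1 − Σ item variances / variance of total scores), for k items. A minimal sketch on hypothetical toy data (not the study's item scores):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total scores))
def cronbach_alpha(items):
    # items: one inner list per questionnaire item, aligned across respondents
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# hypothetical 3-item, 4-respondent toy data (not the study's scores)
alpha = cronbach_alpha([[2, 4, 3, 5], [3, 5, 4, 5], [2, 5, 3, 4]])
print(round(alpha, 2))  # 0.95
```

Higher inter-item covariance inflates the variance of the totals relative to the item variances, which is what pushes alpha toward 1 for internally consistent subscales.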
ARTICLE | doi:10.20944/preprints202308.1080.v1
Subject: Public Health And Healthcare, Public, Environmental And Occupational Health Keywords: CD-RISC; translation; Greek; validation; resilience; nurses
Online: 15 August 2023 (08:37:37 CEST)
Resilience has been defined as one's competence to maintain a state of mental health and overall well-being when undergoing grave stress or facing significant adversities. Numerous resilience-investigating research tools have been developed over the years, with the Connor-Davidson Resilience Scale (CD-RISC), a self-rated tool with valuable psychometric properties, remaining one of the most prominent. We aimed to translate and validate the brief CD-RISC-10 in a convenience sample of 584 nurses in Greece's secondary and tertiary health care system. We conducted a confirmatory factor analysis and known-groups validity testing and estimated the reliability of the CD-RISC-10. Our confirmatory factor analysis revealed that the scale had a unifactorial structure, since all the model fit indices were very good. Moreover, the reliability of the CD-RISC-10 was very good, since Cronbach's alpha was 0.924 and McDonald's omega was 0.925. Therefore, the Greek version of the CD-RISC-10 confirms the factor structure of the original and has very good validity and reliability.
ARTICLE | doi:10.20944/preprints202308.0284.v1
Subject: Biology And Life Sciences, Virology Keywords: VNNV; Diagnosis; Validation; dPCR; droplet digital PCR.
Online: 3 August 2023 (10:44:12 CEST)
The viral nervous necrosis virus (VNNV) is the causative agent of an important disease affecting fish species cultured worldwide. Early and accurate diagnosis is at present the most effective control and prevention tool, and molecular techniques have been widely adopted and accepted by official organizations. Among these, real-time quantitative PCR (qPCR) is nowadays displacing other molecular techniques. However, another PCR-based technology, droplet digital PCR (ddPCR), is gaining ground. It has many advantages over qPCR, such as higher sensitivity and more reliable quantification. Therefore, we designed and validated a protocol for the diagnosis and quantification of SJ- and RG-type VNNV using reverse transcription ddPCR (RT-ddPCR). We obtained an extremely low limit of detection, 10- to 100-fold lower than with RT-qPCR. Quantification by RT-ddPCR, with a dynamic range of 6.8 – 6.8 × 10⁴ (SJ type) or 1.04 × 10¹ – 1.04 × 10⁵ (RG type) cps/rctn, was more reliable than with RT-qPCR. The procedure was tested and validated on field samples, providing high clinical sensitivity and negative predictive values. In conclusion, we propose this method as a substitute for RT-qPCR protocols because it exceeds the expectations of qPCR in the diagnosis and quantification of VNNV.
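The absolute quantification that makes ddPCR attractive rests on Poisson statistics over droplet partitions: if a fraction f of droplets is negative, the mean copy number per droplet is λ = −ln f. A sketch of this calculation, assuming a typical droplet volume of 0.85 nL (an illustrative value, not one taken from the paper):

```python
import math

# Poisson quantification behind ddPCR: if a fraction f of droplets is
# negative, the mean copy number per droplet is lambda = -ln(f); dividing
# by the droplet volume yields the concentration. The 0.85 nL volume is a
# typical value assumed here, not a figure from the paper.
def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
    frac_negative = (n_total - n_positive) / n_total
    lam = -math.log(frac_negative)           # copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # copies per microlitre of reaction

print(round(ddpcr_copies_per_ul(3000, 15000), 1))  # 262.5
```

The Poisson correction is what lets ddPCR quantify without a standard curve, and it accounts for droplets that received more than one template copy.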
COMMUNICATION | doi:10.20944/preprints202210.0427.v1
Subject: Physical Sciences, Fluids And Plasmas Physics Keywords: Turbulence model; Reynolds stresses; RANS; validation rule
Online: 27 October 2022 (08:39:32 CEST)
In this paper, for the Reynolds-averaged Navier-Stokes (RANS) equations, a self-closed turbulence model without any adjustable parameters is formulated. A validation rule for self-closed turbulence models is rigorously derived from the Reynolds-averaged Navier-Stokes equation. The rule is not affected by the turbulence modelling of the Reynolds stresses.
REVIEW | doi:10.20944/preprints202206.0047.v1
Subject: Public Health And Healthcare, Physical Therapy, Sports Therapy And Rehabilitation Keywords: Rehabilitation; new technology; validation; study design; methods
Online: 3 June 2022 (11:12:44 CEST)
Important current limitations of the implementation of Evidence-Based Practice (EBP) in the rehabilitation field are related to the validation process of new technologies and interventions. Indeed, most of the strict guidelines that have been developed for the validation of new drugs (i.e., double or triple blinding, strict control of dose and intensity) cannot, or can only partially, be applied in rehabilitation. Well-powered, high-quality randomized controlled trials are more difficult to organize in rehabilitation (e.g., the longer duration of interventions, the difficulty of standardizing interventions compared with drug validation studies, and limited funding, since such trials are not sponsored by large pharmaceutical companies), which reduces the possibility of conducting systematic reviews and meta-analyses, as high-level evidence is currently sparse. The current limitations of EBP in rehabilitation are presented in this paper, and innovative solutions are suggested, such as technology-supported rehabilitation systems, continuous assessment, pragmatic trials, rehabilitation treatment specification systems, and advanced statistical methods, to tackle these limitations and increase the quality of research in rehabilitation. The development and implementation of new technologies should increase the quality of research and the level of evidence supporting rehabilitation, provided our research methodology is adapted accordingly.
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: InSAR; InSAR calibration/validation; atmosphere/troposphere variations
Online: 21 December 2020 (12:34:10 CET)
Atmospheric propagation phase variations are the dominant source of error for InSAR time-series analysis, generally exceeding uncertainties from poor SNR or signal decorrelation. The spatial properties of these errors have been well studied, but their temporal dependence and correction have received much less attention to date. We present here an evaluation of the magnitude of tropospheric artifacts in derived time series after compensation using an algorithm that requires only the InSAR data themselves. The level of artifact reduction equals or exceeds that of many weather-model-based methods, while avoiding the need to access fine-scale atmospheric parameters globally at all times. Our method consists of identifying all points in an InSAR stack with consistently high correlation, then computing and removing a fit of the phase at each of these points with respect to elevation. Comparison with GPS truth yields a threefold reduction, from an rms misfit of 5-6 cm to ~2 cm over time. This algorithm can be readily incorporated into InSAR processing flows without the need for outside information.
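The correction described, fitting phase against elevation at persistently coherent points and removing the fit, can be sketched on synthetic data; the linear phase-elevation model, point count, and noise levels below are illustrative assumptions, not the authors' parameters:

```python
import random

# Synthetic stack of coherent points: 1 cm of real deformation plus an
# elevation-dependent tropospheric delay (2e-5 per metre) and noise.
# All parameters are illustrative assumptions, not the authors' values.
random.seed(0)
n = 500
elev = [random.uniform(0, 2000) for _ in range(n)]
phase = [0.01 + 2e-5 * e + random.gauss(0, 0.002) for e in elev]

# least-squares slope of phase against elevation at the coherent points
me, mp = sum(elev) / n, sum(phase) / n
slope = sum((e - me) * (p - mp) for e, p in zip(elev, phase)) \
        / sum((e - me) ** 2 for e in elev)
corrected = [p - slope * e for e, p in zip(elev, phase)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# removing the elevation fit collapses the scatter to roughly the noise level
print(std(phase) > 0.01, std(corrected) < 0.005)  # True True
```

In practice the fit would be computed per epoch of the stack, and only at points whose correlation stays high across the whole time series; a linear phase-elevation model is the simplest choice.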
ARTICLE | doi:10.20944/preprints201810.0089.v2
Subject: Biology And Life Sciences, Endocrinology And Metabolism Keywords: Metabolomics; Benchtop NMR; Biomarkers; Biomolecules; Validation; Protocol
Online: 5 December 2018 (16:14:52 CET)
Novel sensing technologies for liquid biopsies offer a promising prospect for the early detection of metabolic conditions through -omics techniques. Indeed, high-field NMR facilities are routinely used for metabolomics investigations on a range of biofluids in order to rapidly recognize unusual metabolic patterns in patients suffering from a range of diseases. However, these techniques are restricted by the prohibitively large size and cost of such facilities, suggesting a possible role for smaller, low-field NMR instruments in biofluid analysis. Herein we describe selected biomolecule validation on a low-field benchtop NMR spectrometer (60 MHz), and present an associated protocol for the analysis of biofluids on compact NMR instruments. We successfully detect common markers of diabetic control at low-to-medium concentrations through optimized experiments, including glucose (≤ 2.6 mmol/L) and acetone (25 μmol/L), and additionally in readily accessible biofluids. We present a combined protocol for the analysis of these biofluids with low-field NMR spectrometers for metabolomics, and offer a perspective on the future of this technique with appeal for point-of-care applications.
ARTICLE | doi:10.20944/preprints201810.0632.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Modelica; heat pump; HiL; model validation; testbed
Online: 26 October 2018 (12:11:57 CEST)
Heating systems such as heat pumps and combined heat and power (CHP) systems represent key components of the future smart grid. Their capability to couple the electricity and heat sectors promises massive potential for the energy transition. Hence, these systems are continuously studied numerically and experimentally to quantify their potential and to develop optimal control methods. Although numerical simulations provide a time- and cost-effective solution for system development and optimization, they are exposed to several uncertainties. A hardware-in-the-loop (HiL) system enables system validation and evaluation under different real-life dynamic constraints and boundary conditions. In this paper, a HiL heat pump testbed is presented and used for two case studies. In the first case, the conventional heat pump testbed operation method is compared to the HiL operation method; energetic and dynamic analyses are performed to quantify the added value of the HiL approach and its necessity for dynamic analysis. In the second case, the HiL testbed is used to validate heat pump operation in a single-family house participating in a local energy market. It enables not only the dynamics of the heat pump and the space heating circuit to be validated, but also the building room temperature. The energetic analysis indicated deviations of 2% and 5% for heat generation and electricity consumption of the heat pump, respectively. The model dynamics emphasized the model's capability to represent the dynamics of a real system with a temporal distortion of 3%.
REVIEW | doi:10.20944/preprints201806.0191.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: rare disease; functional genomics; genetic variant validation
Online: 12 June 2018 (12:36:08 CEST)
Many insights into human disease have been built on experimental results in Drosophila, and research in fruit flies is often justified on the basis of its predictive value for questions related to human health. Additionally, there is now a growing recognition of the value of Drosophila for the study of rare human genetic diseases, either as a means of validating the causative nature of a candidate genetic variant found in patients, or as a means of obtaining functional information about a novel disease-linked gene when there is little known about it. For these reasons, funders in the US, Europe, and Canada have launched targeted programs to link human geneticists working on discovering new rare disease loci with researchers who work on the counterpart genes in Drosophila and other model organisms. Several of these initiatives are described here, as are a number of output publications that validate this new approach.
SHORT NOTE | doi:10.20944/preprints201801.0030.v1
Subject: Environmental And Earth Sciences, Space And Planetary Science Keywords: tri-stereo; DSM; validation; urban surface morphology
Online: 5 January 2018 (05:18:21 CET)
A very high-resolution DSM covering an area of 400 km² over the Athens Metropolitan Area has been produced using Pleiades 1B 0.5 m panchromatic tri-stereo images. Remote sensing and photogrammetry tools were applied, resulting in a 1×1 m DSM over the study area. DSM accuracy has been evaluated by comparison with elevations measured by D-GPS and with a reference DSM provided by the National Cadaster & Mapping Agency S.A. In addition, different combinations of stereo images have been prepared to further assess the quality of DSMs produced from stereo versus tri-stereo images. Results show that the DSM produced from the tri-stereo images has an RMSE of 1.17 m in elevation (z), which is among the best reported in the relevant literature; stereo-based DSMs from the same sensor perform worse in this respect. Satellite Remote Sensing (SRS) based DSMs over urban areas provide the most cost-effective approach in comparison to airborne-based datasets, owing to high spatial coverage, lower cost and high temporal coverage. Pleiades-based high-quality DSM products can serve the domains of urban planning/climate, hydrological modelling and natural hazards as major inputs for simulation models and morphological analysis at the local scale.
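The vertical-accuracy figure quoted (RMSE in z) is the root-mean-square of the DSM-minus-checkpoint elevation differences; a minimal sketch with hypothetical checkpoint values (not the study's measurements):

```python
# RMSE of DSM elevations against surveyed checkpoints (values in metres)
def rmse(dsm_z, ref_z):
    return (sum((a - b) ** 2 for a, b in zip(dsm_z, ref_z)) / len(dsm_z)) ** 0.5

# hypothetical checkpoint comparison (illustrative values only)
dsm  = [102.3, 98.7, 110.1, 95.0]
dgps = [101.5, 99.9, 109.0, 96.1]
print(round(rmse(dsm, dgps), 2))  # 1.06
```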
ARTICLE | doi:10.20944/preprints201706.0118.v1
Subject: Medicine And Pharmacology, Dietetics And Nutrition Keywords: dietary assessment; FFQ; recall; nutritional biomarker; validation
Online: 27 June 2017 (04:58:14 CEST)
The development of reliable Food Frequency Questionnaires (FFQs) requires detailed information about the level and variation of dietary food intake of the target population. However, these data are often limited. To facilitate the development of new high-quality FFQs and the validation of existing FFQs, we developed a comprehensive National Dietary Assessment Reference Database (NDARD) detailing the level and variation of dietary food intake of people aged 20-70 years in the general Dutch population. This paper describes the methods and characteristics of the population included in the NDARD database. A total of 1063 men and 985 women agreed to participate in this research. Dietary intake data were collected using different FFQs, web-based and telephone-based 24-hour recalls, as well as blood- and urine-based biomarkers. The baseline FFQ was completed by 1647 participants, whose mean BMI was 26±4 kg/m2; 1117 participants completed telephone-based recalls and 1781 participants completed web-based recalls. According to the baseline FFQ, the mean energy intake was 2051±605 kcal/day. The percentage of total energy intake from protein was 15±2 En%, from carbohydrates 43±6 En%, and from fat 36±5 En%. This database will enable researchers to validate existing FFQs and to develop new high-quality dietary assessment methods.
ARTICLE | doi:10.20944/preprints202309.0484.v1
Subject: Medicine And Pharmacology, Otolaryngology Keywords: Eustachian tube dysfunction; ETDQ7 questionnaire; validation; cultural adaptation
Online: 7 September 2023 (11:41:03 CEST)
Eustachian Tube Dysfunction is considered a common condition among ENT patients and requires careful history, clinical examination and appropriate investigations to obtain a diagnosis. The ETDQ-7 questionnaire is a useful tool to subjectively score severity of symptoms that are related to this pathology (Appendix A). This study aimed to adapt and validate the ETDQ-7 questionnaire in Greece to ensure correct and efficient use in the outpatient setting (Appendix B). The ETDQ-7 was translated into the Greek language following appropriate methodology. Data for the main study were collected from a sample of 75 Greek patients diagnosed with ETD as well as 25 patients that did not have ETD and who served as a control group. The participants completed the adapted ETDQ-7 questionnaire and underwent a clinical examination that was statistically correlated with their ETDQ-7 answers. Face and content validity were confirmed and the questionnaire was found easy to administer and to be completed by our patients. Test-retest reliability revealed similar internal consistency for the questions and good correlation between individual items and total score. Discriminative validity confirmed statistically significant difference between the two patient groups to ensure that the Greek version of ETDQ-7 is useful to confirm the diagnosis. The Greek version of the ETDQ-7 is safe and efficient to use among a variety of investigation methods for the diagnosis of ETD in the Greek speaking population.
ARTICLE | doi:10.20944/preprints202308.0580.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: retailing; customer behavior; clustering; segmentation; external validation indices
Online: 8 August 2023 (13:57:37 CEST)
While there are several ways to identify customer behaviors, few extract this value from information already in a database, much less extract relevant characteristics. This paper presents the development of a prototype using the recency, frequency, and monetary attributes for customer segmentation of a retail database. For this purpose, standard K-means, K-medoids, and MiniBatch K-means were evaluated. The standard K-means clustering algorithm was the most appropriate for clustering these data, as it remained stable up to solutions with six clusters. The quality of the clusters was evaluated using the internal validation indices Silhouette, Calinski-Harabasz, and Davies-Bouldin. Since no consensus was obtained, three external validation indices were applied: global stability, stability per cluster, and segment-level stability across solutions. Six customer segments were obtained, each identified by its unique behavior: lost customers, disinterested customers, recent customers, less recent customers, loyal customers, and best customers. Their behavior was evidenced and analyzed, indicating trends and preferences.
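The segmentation pipeline described (K-means on recency, frequency, monetary attributes) can be sketched in a few lines; the toy customers and k=2 below are illustrative, not the paper's six-cluster solution, and a real study would standardize the RFM features first:

```python
import random

# Minimal K-means on (recency, frequency, monetary) tuples.
def kmeans(points, k, iters=50, seed=1):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest center (squared Euclidean distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as the mean of its cluster (keep old if empty)
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# two obvious behavioural groups: lapsed low spenders vs. recent high spenders
customers = [(300, 1, 20), (310, 2, 25), (290, 1, 18),
             (10, 12, 500), (15, 10, 450), (5, 14, 520)]
centers, clusters = kmeans(customers, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

Internal indices such as Silhouette score the resulting partition from the data alone, which is why the paper falls back on stability-based external indices when the internal indices disagree.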
ARTICLE | doi:10.20944/preprints202307.1029.v1
Subject: Public Health And Healthcare, Nursing Keywords: self-care; self-management; hypertension; validation; scale development
Online: 17 July 2023 (12:11:43 CEST)
Background: The adoption of self-care behaviors among patients with arterial hypertension (AH) plays an important role in the management of their health condition. However, a lack of scales assessing self-care is observed. We aimed to develop and validate the Hippocratic hypertension self-care scale. Methods: From a pool of questions derived from a literature review, 18 items were included in the scale and reviewed by a committee of experts. Participants indicate the frequency at which they follow the self-care behavior prescribed in each statement on a five-point Likert scale. Data were collected between April 2019 and December 2019. Results: 202 consecutive adult patients with AH were enrolled in the study. The internal consistency of the scale was found to be 0.807, using Cronbach’s alpha coefficient. An exploratory factor analysis identified two domains that accounted for 92.94% of the variance of the scale items; however, each sub-scale could not be used as an independent scale. Finally, the test-retest of the scale showed a significant strong correlation (r=0.0095, p<0.001). Conclusion: Our data indicate that the scale is a reliable and valid tool for assessing self-care behaviors in patients with AH. Health professionals can use it in their clinical practice to improve the management of patients’ health condition.
ARTICLE | doi:10.20944/preprints202301.0563.v1
Subject: Biology And Life Sciences, Food Science And Technology Keywords: Dietary fiber; food frequency questionnaire; questionnaire screening; validation
Online: 31 January 2023 (02:22:50 CET)
Dietary fiber has been associated with health benefits; therefore, the availability of validated tools to assess the consumption of high-fiber foods would allow quantification of the intake of this functional nutrient, identification of risk groups and target populations, and the development of public policies and/or programs aimed at improving the health of the population. In this study, a short fiber intake food frequency questionnaire (FFQ) was translated into Spanish and its content validity was determined by a group of experts; a pilot test was subsequently conducted with 198 subjects aged 36±12.5 years residing in Chile (46 men and 150 women), with the purpose of quantifying dietary fiber intake. The global assessment of the FFQ revealed a validity coefficient of 0.98±0.02; after the pilot, the mean dietary fiber intake of adult Chilean residents was 13 g per day, similar to the results of the National Food Consumption Survey 2010 (12.5 g per day in men, and 11.5 g in women). The FFQ is a quick and valid tool to classify people on the basis of their habitual dietary fiber intake.
ARTICLE | doi:10.20944/preprints202301.0451.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: meloxicam; nimesulide; UV-spectrophotometric determination; cleaning validation samples
Online: 25 January 2023 (09:41:22 CET)
Spectrophotometric methods for the determination of the active pharmaceutical ingredients meloxicam and nimesulide were reviewed, and a simple UV-spectrophotometric method for the determination of these active pharmaceutical ingredients in industrial equipment cleaning validation samples is proposed. The method is based on extraction of residual quantities of meloxicam and nimesulide from the manufacturing equipment surface with concentrated sodium carbonate solution, and subsequent UV-spectrophotometric determination of the basic forms of the drugs at a wavelength of 362 nm for meloxicam and 397 nm for nimesulide. The calibration graphs are linear in the range from 5 to 25 mg/L for both nimesulide and meloxicam. The molar attenuation coefficients are 6100 m2/mol for nimesulide and 9100 m2/mol for meloxicam; the limit of detection is 0.8 mg/L for nimesulide and 1.9 mg/L for meloxicam; the limit of quantification is 2.5 mg/L for nimesulide and 5.8 mg/L for meloxicam. The method is selective with respect to common excipients, shows good accuracy (relative uncertainty not exceeding 4%) and precision (relative standard deviation not exceeding 5%), does not require lengthy sample preparation or sophisticated laboratory equipment, and is suitable for the routine analysis of cleaning validation samples.
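The figures of merit quoted (linear calibration range, LOD, LOQ) follow from an ordinary least-squares calibration line; under the common ICH convention, LOD ≈ 3.3σ/slope and LOQ ≈ 10σ/slope, with σ the residual standard deviation of the fit. A sketch with hypothetical absorbance readings (the abstract does not give the paper's raw data or its exact LOD/LOQ convention):

```python
# Ordinary least-squares calibration line with ICH-style LOD/LOQ estimates.
# Concentrations in mg/L; absorbances are hypothetical illustrative readings.
conc = [5, 10, 15, 20, 25]
absb = [0.151, 0.298, 0.452, 0.601, 0.748]

n = len(conc)
mx, my = sum(conc) / n, sum(absb) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, absb)) \
        / sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

# residual standard deviation of the fit (n - 2 degrees of freedom)
resid = [y - (slope * x + intercept) for x, y in zip(conc, absb)]
sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5

lod = 3.3 * sigma / slope  # limit of detection, mg/L
loq = 10 * sigma / slope   # limit of quantification, mg/L
print(round(slope, 4), lod < loq)  # 0.0299 True
```

A sanity check on any reported pair is that LOQ sits roughly 3× above LOD and below the lower end of the stated linear range, as it does in the abstract's figures.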
REVIEW | doi:10.20944/preprints202210.0034.v1
Subject: Biology And Life Sciences, Agricultural Science And Agronomy Keywords: insect; genome; biopesticide; silencing; topical; gene target; validation
Online: 5 October 2022 (10:57:47 CEST)
Global crop yields are estimated to be reduced by 30–40% per year on account of plant pests and pathogens. Agricultural insect pests raise concerns about global food security, with climate change contributing to the rise of infestations. Current management relies on plant breeding, associated or not with transgenes, and on chemical pesticides. Both approaches face serious technology obsolescence in the field due to resistance breakdown or the development of insecticide resistance. The need for new Modes of Action (MoA) in managing crop health grows each year, driven by market demands to reduce economic losses and by phytosanitary requirements to meet consumer expectations. Disabling pest genes through sequence-specific silencing of their expression is considered a promising tool for developing biopesticides that respect the environment and human health. The specificity conferred by long dsRNA-based solutions helps minimize effects on off-target genes in the insect pest genome and on the target gene in non-target organisms (NTOs). In this review, we summarize the current status of gene silencing by RNA interference (RNAi) for agricultural pest control. More specifically, we focus on the engineering, development and application of gene silencing to control Lepidoptera through non-transformative dsRNA technologies. Despite some delivery and stability drawbacks of topical applications, we review works showing convincing proof-of-concept results that point to imminent innovative solutions. Considerations about the regulation of ongoing research on dsRNA-based pesticides aimed at commercialized products for exogenous application are discussed. Academic and industry initiatives reveal a concerted effort to control lepidopteran pests with this new mode of action and to provide more sustainable and reliable technologies for field management. New genomic data on this taxon encourage the expansion of a customized portfolio of target genes.
As a case study, we illustrate how dsRNA and associated methodologies could be applied to control an important lepidopteran coffee pest.
ARTICLE | doi:10.20944/preprints202208.0424.v1
Subject: Social Sciences, Psychology Keywords: Aging; Attitudes; Subjective Well-being; Ageism; Psychometric Validation
Online: 25 August 2022 (03:17:06 CEST)
Scientific literature shows increased interest in the aged and in the aging phenomenon. The Aging Attitudes Questionnaire (AAQ) was validated for the Portuguese population to understand the importance of attitudes towards old age and their impact on the subjective well-being of the elderly. A sample of 400 subjects (aged 18 to 93 years) answered a socio-demographic questionnaire and the AAQ, which is composed of three subscales (psychosocial losses, physical change, and psychological growth). The CFA confirmed the tri-factorial structure with very good adjustment of the model to the data; the Cronbach's alpha of the total scale was .84, ranging from .65 to .77 for each factor. A total of 9 items were omitted for poor factor loadings (<0.50). Notwithstanding, 3 items below the criterion were maintained, as they conceptually fit into the factor. Of the final 15 AAQp items, 5 belong to the Psychosocial Loss factor, 6 to Physical Change, and 4 to Psychosocial Growth. This three-factor model explained 50.1% of the total variance. In conclusion, this study supports that the AAQ has acceptable validity, confirming the composite reliability and the discriminant validity, but not the convergent validity. Through multi-group analysis, the invariance of the scale was confirmed. This validation is of pivotal importance because it allows measuring attitudes towards aging, thus facilitating the promotion of well-being across the lifespan.
ARTICLE | doi:10.20944/preprints202206.0126.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: HPLC-DAD-UV; Verteporfin; ICH Q2 R; Validation
Online: 8 June 2022 (11:03:04 CEST)
The aim of this study was to develop and validate an HPLC-DAD-UV method for the determination of verteporfin in real samples (simulated body fluid, simulated tears, 0.9% isotonic sodium chloride solution, Lactated Ringer IV solution for infusion, 5% dextrose IV solution for infusion, lemon juice and drinking water). Method validation parameters such as specificity, linearity, precision, accuracy, robustness, limit of detection (LOD) and limit of quantitation (LOQ) for verteporfin were evaluated according to the International Conference on Harmonisation (ICH) Q2(R1) guidelines. The LOD and LOQ for verteporfin were found to be 0.06 µg/L and 0.2 µg/L, respectively. The recovery values obtained during optimization and validation for verteporfin were in the range of 97.5-100.7%. The relative standard deviations (RSD) for verteporfin were <1%. The developed method was successfully applied to real samples with high accuracy; the recoveries (%) from the real samples were 99.9, 100, 98.2, 99.2, 99.4, 98.8 and 99.4, respectively.
ARTICLE | doi:10.20944/preprints202203.0350.v1
Subject: Biology And Life Sciences, Insect Science Keywords: insecticide resistance; resistance monitoring; method validation; WHO tube
Online: 25 March 2022 (15:40:56 CET)
Accurately monitoring insecticide resistance in target mosquito populations is important for combating malaria and other vector-borne diseases, and robust methods are key. The “WHO susceptibility bioassay” has been used for over 60 years: mosquitoes of known physiological status are exposed to a discriminating concentration of insecticide. Several changes to the test procedures have been made historically which may seem minor but could impact bioassay results. The published test procedures and literature for this method were reviewed for methodological details. Areas where there was room for interpretation in the test procedures, or where the test procedures were not being followed, were assessed experimentally for impact on bioassay results: covering or uncovering of the tube end during exposure, number of mosquitoes per test unit, and mosquito age. Many publications do not cite the most recent test procedures, report methodological details that contradict the test procedures referenced, or do not fully report methodological details; as a result, the precise methodology is often unclear. Experimental testing showed that using fewer than the recommended 15-30 mosquitoes per test unit significantly reduced mortality, covering the exposure tube had no effect, and using mosquitoes older than 2-5 days increased mortality, particularly in the resistant strain. Recommendations are made for better reporting of experimental parameters.
ARTICLE | doi:10.20944/preprints202105.0771.v1
Subject: Physical Sciences, Acoustics Keywords: Validation; Kinematic; Inertial measurement units; motion analysis; gait
Online: 31 May 2021 (12:47:51 CEST)
Gait analysis has historically been implemented in laboratory settings with expensive instruments; recently, however, wearable sensors have allowed its integration into clinical applications and use in daily activities. Previous studies have shown poor validity for ankle joint angles measured with inertial measurement units (IMUs), especially over small movement ranges. The purpose of this study was to validate the ability of commercially available IMUs to accurately measure ankle joint angles during running. Ten healthy subjects participated in the study. Validation was performed by comparing the ankle joint angles measured using the wearable device with those obtained using a gold-standard motion capture system during running. Reliability was evaluated using the intraclass correlation coefficient and the standard error of measurement, whereas validity was evaluated using the Pearson correlation coefficient. Day-to-day reliability was excellent in both planes for the ankle joints, and validity was good in both the sagittal and frontal planes. The results suggest that the developed device might be used as an alternative tool to a 3D motion capture system.
ARTICLE | doi:10.20944/preprints202005.0431.v2
Subject: Public Health And Healthcare, Physical Therapy, Sports Therapy And Rehabilitation Keywords: Hip fracture; Casemix; Validation; Discrimination; Risk score; Calibration
Online: 9 July 2020 (17:23:04 CEST)
Objectives: Independent validation of risk scores after hip fracture is uncommon, particularly for evaluation of outcomes other than death. We aimed to assess the Nottingham Hip Fracture Score (NHFS) for prediction of mortality, physical function, length of stay and postoperative complications. Design: Analysis of routinely collected prospective data, partly collected by follow-up interviews. Setting and Participants: Consecutive hip fracture patients were identified from the Northumbria hip fracture database between 2014 and 2018. Patients were excluded if they were not surgically managed or if scores for predictive variables were missing. Methods: C-statistics were calculated to test the discriminant ability of the NHFS, Abbreviated Mental Test Score (AMTS), and ASA grade for in-hospital, 30-day and 120-day mortality, functional independence at discharge, 30 days and 120 days, length of stay, and postoperative complications. Results: We analysed data from 3208 individuals, mean age 82.6 years (SD 8.6); 2192 (70.9%) were female. 194 (6.3%) died during the first 30 days, 1686 (54.5%) were discharged to their own home, 211 (6.8%) had no mobility at 120 days, and 141 (4.6%) experienced a postoperative complication. The median length of stay was 18 days (IQR 8-28). For mortality, c-statistics for the NHFS ranged from 0.68 to 0.69, similar to ASA and AMTS. For postoperative mobility, c-statistics for the NHFS ranged from 0.74 to 0.83, similar to AMTS (0.61-0.82) and better than the ASA grade (0.68-0.71). Length of stay was significantly correlated with each score (p<0.001 by Jonckheere-Terpstra test); NHFS and AMTS showed inverted U-shaped relationships with length of stay. For postoperative complications, c-statistics for the NHFS (0.54-0.59) were similar to ASA grade (0.53-0.61) and AMTS (0.50-0.58). Conclusions and Implications: The NHFS performed consistently well in predicting functional outcomes, moderately in predicting mortality, but less well in predicting length of stay and complications.
There remains room for improvement by adding further predictors such as measures of physical performance in future analyses.
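The c-statistic used throughout the abstract above measures discrimination: the probability that a randomly chosen patient who experienced the outcome received a higher risk score than one who did not, with ties counting as half-concordant. A minimal sketch with invented scores (not data from the study):

```python
# c-statistic (equivalent to the area under the ROC curve) computed
# directly from its pairwise definition.
def c_statistic(scores_events, scores_nonevents):
    pairs = concordant = 0.0
    for e in scores_events:
        for n in scores_nonevents:
            pairs += 1
            if e > n:
                concordant += 1      # event case ranked higher: concordant
            elif e == n:
                concordant += 0.5    # tie: half credit
    return concordant / pairs

# Hypothetical risk scores for 4 patients who died vs 5 who survived:
print(c_statistic([6, 7, 5, 8], [4, 5, 3, 6, 2]))   # -> 0.9
```

A value of 0.5 is chance-level discrimination, which is why the 0.54-0.59 range reported for complications reads as near-uninformative while 0.74-0.83 for mobility is good.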
ARTICLE | doi:10.20944/preprints202003.0210.v1
Subject: Public Health And Healthcare, Public Health And Health Services Keywords: telemedicine; Questionnaires and Surveys; validation studies; health personnel
Online: 12 March 2020 (09:58:12 CET)
Background: Telemedicine is effective, can provide efficient care at lower cost, and enjoys a high acceptance rate among users. The Technology Acceptance Model is based on the two main concepts of ease of use and perceived usefulness and comprises three dimensions: the individual context, the technological context, and the implementation or organizational context. There is no short, validated questionnaire to assess the acceptance of telemedicine services among healthcare professionals using a technology acceptance model. Objective: To translate and validate a telemedicine acceptance questionnaire based on the technology acceptance model. Methods: The study included the following phases: adaptation and translation of the questionnaire into Catalan, and psychometric validation covering construct validity (exploratory factor analysis), internal consistency (Cronbach’s alpha) and stability (test-retest). Factor analysis was used to describe variability among the observed variables. Results: After removing incomplete responses, 144 responses were considered for analysis. Internal consistency was good, with a Cronbach’s alpha coefficient of 0.84 (95% CI: 0.79-0.84). The intraclass correlation coefficient was 0.93 (95% CI: 0.852-0.964). The Kaiser-Meyer-Olkin measure of sampling adequacy was acceptable (KMO = 0.818) and the Bartlett test of sphericity was significant (chi-square 424.188; df = 28; P < .001), indicating that the items were appropriate for a factor analysis. Conclusions: The questionnaire validated in this study has robust statistical features that make it a good predictive model of professionals’ satisfaction with telemedicine programs.
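The internal-consistency figure above is Cronbach's alpha: alpha = k/(k-1) * (1 - Σvar(item)/var(total)), where k is the number of items. A minimal sketch; the tiny response matrix (rows = respondents, columns = items) is invented purely to illustrate the computation, not taken from the study:

```python
import numpy as np

def cronbach_alpha(X):
    """X: respondents x items matrix of scores."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# Five hypothetical respondents answering three Likert items:
X = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(X), 2))   # -> 0.9
```

Alpha rises when items co-vary (respondents who score high on one item score high on the others), which is exactly what "internal consistency" means.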
ARTICLE | doi:10.20944/preprints201912.0349.v1
Subject: Engineering, Mechanical Engineering Keywords: verification and validation; computational thermal analysis; computational physics
Online: 26 December 2019 (02:35:47 CET)
In the power plant industry, the turbine inlet temperature (TIT) plays a key role in the efficiency of the gas turbine and, therefore, the overall—in most cases combined—thermal power cycle efficiency. Gas turbine efficiency increases by increasing TIT. However, an increase of TIT would increase the turbine component temperature, which can be critical (e.g., hot gas attack). Thermal barrier coatings (TBCs)—porous media coatings—can avoid this case and protect the surface of the turbine blade. The combination of TBC and film cooling produces better cooling performance than conventional cooling processes. The effective thermal conductivity of this composite is highly important in design and other thermal/structural assessments. In this article, the effective thermal conductivity of a simplified model of TBC is evaluated. This work details a numerical study on the steady-state thermal response of two-phase porous media in two dimensions using an in-house finite element analysis (FEA) code. Specifically, the system response quantity (SRQ) under investigation is the dimensionless effective thermal conductivity of the domain. A thermally conductive matrix domain is modeled with a thermally conductive circular pore arranged in a uniform packing configuration. Both the pore size and the pore thermal conductivity are varied over a range of values to investigate the relative effects on the SRQ. In this investigation, an emphasis is placed on using code and solution verification techniques to evaluate the obtained results. The method of manufactured solutions (MMS) was used to perform code verification for the study, showing the FEA code to be second-order accurate. Solution verification was performed using the grid convergence index (GCI) approach with the global deviation uncertainty estimator on a series of five systematically refined meshes for each porosity and thermal conductivity model configuration.
A comparison of the SRQs across all domain configurations is made, including uncertainty derived through the GCI analysis.
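The GCI procedure referenced above follows Roache's formulation: from the SRQ on three systematically refined meshes, one estimates the observed order of accuracy and converts the fine-medium relative error into an uncertainty band. A sketch for a constant refinement ratio r and safety factor Fs = 1.25; the SRQ values f1 (fine), f2 (medium), f3 (coarse) are illustrative, not from the paper:

```python
import math

def gci_fine(f1, f2, f3, r=2.0, Fs=1.25):
    """Observed order of accuracy and GCI on the fine mesh from three
    systematically refined solutions (f1 finest, f3 coarsest)."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
    e21 = abs((f2 - f1) / f1)                           # relative error, fine vs medium
    return p, Fs * e21 / (r**p - 1)                     # GCI (fractional uncertainty)

# Hypothetical dimensionless effective-conductivity values on 3 meshes:
p, gci = gci_fine(0.9713, 0.9702, 0.9658)
print(f"observed order p = {p:.2f}, GCI = {100 * gci:.3f}%")
```

With these invented values the observed order comes out near 2, consistent with the second-order accuracy the MMS study established; a small GCI then certifies that the fine-mesh SRQ is mesh-converged to within that percentage.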
Subject: Public Health And Healthcare, Nursing Keywords: Validation; communication; questionnaire; healthcare attention; patient satisfaction; nursing.
Online: 25 March 2019 (10:40:45 CET)
(1) Background: Healthcare attention is sometimes considered purely technical, but communication has proven to be closely related to clinical results and patient satisfaction. Therefore, evaluation of communication in the scope of healthcare is a priority. The purpose of this study was to validate and adapt, if necessary, the Spanish version of the Communication Styles Inventory (CSI) in a sample of nursing professionals. (2) Methods: The sample was made up of 2313 nursing professionals selected at random from various medical centers in Spain, all actively employed at the time the data were acquired. We started from the Communication Styles Inventory, a questionnaire for evaluating the predominance of certain individual communication behaviors on six scales (expressiveness, preciseness, verbal aggressiveness, questioningness, emotionality and impression manipulativeness). (3) Results: Confirmatory Factor Analysis of the proposed model showed good fit indices. The reliability of the model, shown by a Cronbach’s alpha of α=.81, was adequate, as was single-level and aggregate consistency. Finally, in the analysis of invariance by type of contract, configural, metric and scalar invariance were acceptable, but not strict invariance. (4) Conclusions: This instrument advances the measurement of non-technical attributes, such as communication styles, in nursing personnel.
ARTICLE | doi:10.20944/preprints201903.0018.v1
Subject: Physical Sciences, Optics And Photonics Keywords: Fast Forward Model, Infrared, Emissivity Spectrum, Satellite, Validation
Online: 4 March 2019 (08:42:15 CET)
Timely processing of observations from hyper-spectral imagers, such as SEVIRI (Spinning Enhanced Visible and Infrared Imager), largely depends on fast radiative transfer calculations. This paper mostly concerns the development and implementation of a new forward model for SEVIRI to be applied to real time processing of infrared radiances for the physical retrieval of surface temperature and emissivity. The new radiative transfer model improves computational time by a factor of ≈ 7 compared to the previous versions and makes it possible to process SEVIRI data at nearly real time. The new forward model has been applied for the simultaneous retrieval of surface temperature and emissivity in three infrared channels (8.7, 10.8, 12 μm). The inverse scheme relies on a Kalman filter approach, which allows us to exploit a sequential processing of SEVIRI observations. Based on the new forward model, the paper also presents a validation retrieval performed with in situ observations acquired during a field experiment carried out in 2017 at Gobabeb (Namib desert) validation station. Furthermore, a comparison with IASI (Infrared Atmospheric Sounder Interferometer) emissivity retrievals has been performed as well. It has been found that the retrieved emissivities are in good agreement with each other and with in situ observations, i.e. average differences are generally well below 0.01.
ARTICLE | doi:10.20944/preprints201809.0389.v1
Subject: Engineering, Mechanical Engineering Keywords: Extrapolative Predictions, Model Validation, Bayesian Inference, Structural Dynamics
Online: 19 September 2018 (16:05:50 CEST)
The creation of computer models is often driven by the need to make predictions in regions where there are no data (i.e. extrapolations). This makes validation challenging, as it is difficult to ensure that a model will be suitable when it is applied in a region where there are no observations of the system of interest. The current paper proposes a method that can reveal flaws in a model which may be difficult to identify using traditional approaches to model calibration and validation. The method specifically targets the situation where one is attempting to model a dynamical system that is believed to possess time-invariant calibration parameters. The proposed approach allows these parameters to vary with time, even though they are believed to be time-invariant. The aim of such an analysis is to identify key discrepancies: indications that a model has inherent flaws and, as a result, should not be used to influence decisions in regions where there are no data. The proposed method is not necessarily a predictor of extrapolation performance; rather, it is a stringent test that, the authors believe, should be applied before extrapolation is attempted. The approach could therefore form a useful part of wider validation frameworks in the future.
ARTICLE | doi:10.20944/preprints201805.0435.v1
Subject: Public Health And Healthcare, Other Keywords: Kinect; validation; assessment; functional evaluation; shoulder; markerless system
Online: 30 May 2018 (05:59:51 CEST)
Optoelectronic devices are the gold standard for 3D evaluation in clinics, but owing to the complexity of such hardware and its limited accessibility for patients, affordable, transportable and easy-to-use systems must be developed for wide use in daily clinical practice. The Kinect™ sensor presents several advantages compared with optoelectronic devices (price, transportability) but also some limitations: the (in)accuracy of skeleton detection and tracking, as well as the limited number of available points, makes 3D evaluation impossible. To overcome these limitations, a novel method has been developed to perform 3D evaluation of the upper limbs. This system is coupled with rehabilitation exercises, allowing functional evaluation while the patient performs physical rehabilitation. To validate this new approach, a two-step method was used. The first step was a laboratory validation, in which the results obtained with the Kinect™ were compared with those obtained with an optoelectronic device; 40 healthy young adults participated in this part. The second step was to determine the clinical relevance of this kind of measurement: results from the healthy subjects were compared with those from a group of 22 elderly adults and a group of 10 chronic stroke patients to determine whether different patterns could be observed. The new methodology and the different steps of the validation are presented in this paper.
COMMUNICATION | doi:10.20944/preprints201610.0106.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: vanillyl butyl ether; HPLC; method validation; cosmetic product
Online: 25 October 2016 (09:43:18 CEST)
A specific HPLC method has been developed and validated for the determination of vanillyl butyl ether in cosmetic products. The extraction procedure, with a 1:1 isopropanol-water mixture, is described. The method uses an RP-C18 column with isocratic elution and a UV detector. The mobile phase consists of a mixture of acetonitrile and buffer (20 mM Na2HPO4 in water) (30:70 v/v) with a variable flow rate. The method was validated with respect to accuracy, precision (repeatability and reproducibility), specificity and linearity. The procedure described here is simple, selective and reliable for routine quality control analysis and stability tests of commercially available cosmetic products.
ARTICLE | doi:10.20944/preprints201610.0078.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: calibration; validation; optical; instrument; processing; imagery; spatial; operational
Online: 19 October 2016 (10:59:29 CEST)
As part of the Copernicus programme of the European Union (EU), the European Space Agency (ESA) has developed and is currently operating the Sentinel-2 mission that is acquiring high spatial resolution optical imagery. This paper provides a description of the calibration activities and the current status of the mission products validation activities. Measured performances, from the validation activities, cover both Top-Of-Atmosphere (TOA) and Bottom-Of-Atmosphere (BOA) products. Results presented in this paper show the good quality of the mission products both in terms of radiometry and geometry and provide an overview on next mission steps related to data quality aspects.
ARTICLE | doi:10.20944/preprints202309.0264.v1
Subject: Engineering, Architecture, Building And Construction Keywords: ANN; energy consumption; optimization; direct fired absorption chiller; validation
Online: 5 September 2023 (11:28:53 CEST)
With increasing concern about global warming, there have been many attempts to reduce greenhouse gas emissions. Buildings consume about 30% of total energy, so much attention has been paid to reducing building energy consumption. While there are many ways of reducing building energy consumption, accurate energy consumption prediction is becoming more significant. As mechanical systems are the most energy-consuming components in a building, the present study developed a short-term energy consumption prediction model for a direct-fired absorption chiller using the ANN technique. The ANN model was optimized and validated with actual data collected through a building automation system (BAS). For the optimization, the number of input variables, the number of neurons, and the size of the training dataset were varied, and the predictive performance was analyzed for each setting. The outcome of the present study can be used to predict the energy consumption of the chiller and to develop a more accurate prediction model from a small dataset, thereby improving the efficiency of building energy management.
ARTICLE | doi:10.20944/preprints202308.1569.v1
Subject: Medicine And Pharmacology, Orthopedics And Sports Medicine Keywords: Cervical Pain; Patient-Reported Outcome Measure; Validation Study; Poland
Online: 22 August 2023 (11:33:55 CEST)
This study aimed to translate and psychometrically validate the Neck Outcome Score (NOOS) in the Polish population according to the recommendations of the American Academy of Orthopedic Surgeons (AAOS) for the Cross-Cultural Adaptation of Health Status Measures. Participants completed the NOOS, the Neck Disability Index (NDI), and a Visual Analogue Scale (VAS) for pain assessment. The questionnaires were completed by 57 women and 32 men with cervical spine ailments. A retest was performed in all patients after 48 hours. The analysis confirmed the high internal consistency (Cronbach's alpha of 0.95) of the Polish NOOS. No floor or ceiling effects were observed. The Polish NOOS showed a significant correlation with the NDI (0.87; p<0.001) and the VAS (0.79; p<0.001). The intraclass correlation coefficient (ICC) for the test-retest was high (0.97). The Polish version of the NOOS can be used for clinical and research purposes as an equivalent to the original English version. This study contributes to the set of patient-reported outcome measures available in the Polish language.
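Test-retest reliability figures like the ICC of 0.97 above are usually computed as a two-way random-effects, absolute-agreement, single-measure ICC, often written ICC(2,1). A sketch from the ANOVA decomposition; the abstract does not specify which ICC form was used, and the 6-patient, 2-session score matrix is invented for illustration:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): Y is a subjects x sessions matrix of scores."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    SSR = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between-subjects
    SSC = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between-sessions
    SSE = ((Y - grand) ** 2).sum() - SSR - SSC        # residual
    MSR = SSR / (n - 1)
    MSC = SSC / (k - 1)
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Hypothetical test / 48-hour retest scores for six patients:
scores = [[12, 13], [30, 29], [45, 47], [22, 21], [8, 9], [35, 34]]
print(round(icc_2_1(scores), 2))
```

Scores that barely move between sessions while differing strongly between patients drive the ICC toward 1, which is the pattern an ICC of 0.97 implies.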
ARTICLE | doi:10.20944/preprints202306.0754.v1
Subject: Medicine And Pharmacology, Medicine And Pharmacology Keywords: Rivaroxaban; Aspirin; fixed-dose combination; validation; RP HPLC method
Online: 12 June 2023 (04:20:14 CEST)
Rivaroxaban and aspirin are commonly used antithrombotic agents, prescribed in combination for the prevention of atherothrombotic events in adult patients after an acute coronary syndrome (ACS) with elevated cardiac biomarkers, and in high-risk patients with coronary artery disease (CAD) or symptomatic peripheral artery disease (PAD). The recommended dosage is 2.5 mg rivaroxaban twice daily with 75–100 mg aspirin daily. This study aimed to develop a fixed-dose combination tablet of rivaroxaban (2.5 mg) and aspirin (50 mg) to decrease the patient's pill burden and enhance medication adherence. The product formula was developed on the basis of compatibility studies of the active ingredients with each other and with the excipients. The formula and the manufacturing procedure were chosen based on a risk assessment for each active substance; wet granulation with both actives intragranular was found to give faster dissolution than direct-mix formulae. Furthermore, a validated reverse-phase HPLC stability-indicating method was developed to detect the APIs and their possible degradants in the formula.
ARTICLE | doi:10.20944/preprints202305.2055.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: remote sensing; ground-truth data; validation; superconducting gravimeters; evapotranspiration
Online: 30 May 2023 (05:31:13 CEST)
The practical utility of remote sensing techniques relies on validating them against ground-truth data. Validation requires that the ground measurements and the remote sensing product have similar spatial-temporal scales. Evapotranspiration (ET) estimates are commonly compared with weighing lysimeter data, which provide precise but localized measurements. To address this limitation, we propose using superconducting gravimeters (SGs) to obtain ground-truth ET data at larger spatial scales. SGs measure gravity acceleration with high resolution (tenths of nm/s2) over a footprint of a few hundred meters. Like lysimeters, gravimeters provide direct estimates of water mass changes for determining ET, without soil disturbance. To demonstrate the practical applicability of SG data, we conducted a case study in Buenos Aires Province, Argentina (-34.87, -58.14). We estimated cumulative ET values for 8-day and monthly intervals using gravity and precipitation data from the study site. Comparing these values with MODIS-based ET products (MOD16A2), we found very good agreement at the monthly scale, with an RMSE of 32.6 mm/month (1.1 mm/day). This study represents progress in the use of SGs for hydrogeological applications. The future development of lighter and smaller gravimeters is expected to further expand their use.
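The gravimetric approach above rests on the fact that a change in stored water changes local gravity. A common first-order conversion in hydro-gravimetry (the paper's actual reduction is not stated in the abstract, so treat this as an illustrative assumption) models the water layer as an infinite Bouguer slab, dg = 2πGρh:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0     # density of water, kg/m^3

def slab_dg_nm_s2(water_mm):
    """Gravity change (nm/s^2) from a water-column change of water_mm millimetres,
    under the infinite Bouguer slab approximation dg = 2*pi*G*rho*h."""
    h = water_mm / 1000.0                            # mm -> m
    return 2 * math.pi * G * RHO_WATER * h * 1e9     # m/s^2 -> nm/s^2

# ~0.42 nm/s^2 per millimetre of water:
print(round(slab_dg_nm_s2(1.0), 2))   # -> 0.42
```

Under this approximation the stated instrumental resolution of tenths of nm/s2 corresponds to sub-millimetre water-column changes, which is why an SG can resolve daily ET.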
REVIEW | doi:10.20944/preprints202305.1773.v1
Subject: Engineering, Energy And Fuel Technology Keywords: HALEU; High-Assay LEU; Advanced Reactors; Validation; DOE; NRC
Online: 25 May 2023 (08:45:44 CEST)
Many advanced reactor concept designs rely on high-assay low-enriched uranium (HALEU) fuel, enriched up to approximately 19.75% 235U by weight. Efforts are underway by the US government to increase HALEU production in the United States to meet anticipated needs. However, very little data exists for validation of computational models that include HALEU, beyond a few fresh fuel benchmark specifications in the International Reactor Physics Experiment Evaluation Project. Nevertheless, there is other data with potential value available for developing into quality benchmarks for use in data and software validation efforts. This paper reviews the available evaluated HALEU fuel benchmarks and some of the potentially relevant benchmarks for fresh highly enriched uranium. It then introduces experimental data for HALEU fuel irradiated at Idaho National Laboratory, from relatively recent irradiation programs at the Advanced Test Reactor. Such data should be evaluated and, if valuable, collected into detailed benchmark specifications to meet the needs of HALEU-based reactor designers.
COMMUNICATION | doi:10.20944/preprints202212.0351.v1
Subject: Social Sciences, Other Keywords: uncertainty principle; limits of mathematics; validation models; holistic approach
Online: 20 December 2022 (03:33:52 CET)
Science evolves over a gentle arc spanning centuries, with scientists building upon and extending the hypotheses and discoveries of their forebears while nurturing their own work from ideation to crystallization and, finally, implementation. However, evidence suggests several limitations of our modern academic pursuits, including considerable inertia and epistemological biases against implementing even major advancements. For instance, the transformative uncertainty principles of quantum mechanics are yet to be satisfactorily integrated into modern analyses and publications, almost a century after Heisenberg received the Nobel prize for them. Another example is the ever-expanding reliance on mathematics to validate the hypotheses of physics and to dismiss opinions to the contrary. In addition, modern science limits itself to the era after the fifteenth century and hastily rejects premodern achievements despite glaring counterexamples. This reluctance and inertia to capitalize on existing knowledge is a challenge that imperils our intellectual pursuits. A salient facet of science is "the willingness to admit ignorance"; only on this foundational principle can science meaningfully evolve. It is time we take a step back to evaluate widely accepted, foundational premises of modern science and institute structured processes to implement the treasure trove of knowledge amassed by our predecessors. This essay highlights some of the opportunities that can and should be seized by capitalizing on recent developments in computational and analytical capabilities, along with artificial intelligence.
BRIEF REPORT | doi:10.20944/preprints202010.0082.v1
Subject: Engineering, Automotive Engineering Keywords: exploratory analysis; model selection; MLR; K fold cross validation
Online: 5 October 2020 (12:16:38 CEST)
In this project, we use multiple linear regression to study the impact of eight predictors (relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, glazing area distribution) on the cooling load (CL) energy efficiency of residential buildings. We analyze and visualize the effect of each predictor on the response variable using classical statistical tools for describing linear models, in order to identify the most strongly related predictor variables. We first apply model selection by stepwise regression, comparing the AIC of the candidate models to identify the best among them. We then evaluate a classical linear regression approach: simulations on 768 diverse residential buildings show that CL can be predicted with low mean absolute error. Using ANOVA we examine the variation in the residuals, and a non-constant variance test verifies it. Furthermore, we check leverage points, influence points, and outliers, and compute Cook's distance for influential points. By applying a Box-Cox transformation and weights, we also introduce a WLS technique to fit the model for better results. Finally, we use 5-fold cross-validation to verify our model.
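The 5-fold cross-validation step described above can be sketched in plain NumPy. The 768-building, 8-predictor setup matches the abstract, but the generated data and coefficients below are purely illustrative stand-ins, not the study's dataset:

```python
import numpy as np

def kfold_mae(X, y, k=5, seed=0):
    """Mean absolute error of OLS regression under k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    maes = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit ordinary least squares with an intercept column.
        A = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ beta
        maes.append(np.mean(np.abs(pred - y[test])))
    return float(np.mean(maes))

# Synthetic stand-in for the 768-building, 8-predictor dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(768, 8))
coef = np.array([3.0, -2.0, 1.5, 0.0, 4.0, 0.1, 0.5, -1.0])  # hypothetical
y = X @ coef + rng.normal(scale=0.5, size=768)               # cooling-load proxy
mae = kfold_mae(X, y)
```

Because each fold is held out exactly once, the averaged MAE estimates out-of-sample error rather than fit quality on the training data.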
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: solar radiation; meteosat second generation; validation; land surface modelling
Online: 27 October 2019 (04:25:31 CET)
High-frequency knowledge of the spatio-temporal distribution of the Downwelling Surface Shortwave Flux (DSSF) and its diffuse fraction (fd) at the surface is nowadays essential for understanding climate processes at the surface-atmosphere interface, plant photosynthesis and the carbon cycle, and for the solar energy sector. The EUMETSAT Satellite Application Facility for Land Surface Analysis has delivered the MDSSFTD (Downwelling Surface Short-wave radiation Fluxes – Total and Diffuse fraction) product operationally since 2019. The retrieval method was presented in the companion paper. This Part 2 focuses on the evaluation of the MDSSFTD algorithm and compares its outputs, i.e. the total DSSF and diffuse fraction (fd) components, against in-situ measurements acquired at four BSRN stations over a seven-month period. The validation is performed on an instantaneous basis. We show that the satellite estimates of DSSF and fd meet the target requirements defined by the user community for all-sky (clear and cloudy) conditions. For DSSF, the requirements are 20 Wm-2 for DSSF < 200 Wm-2 and 10% for DSSF >= 200 Wm-2; the MBE and rMBE relative to the ground measurements are 3.618 Wm-2 and 0.252%, respectively. For fd, the requirements are 0.1 for fd < 0.5 and 20% for fd >= 0.5; the MBE and rMBE relative to the ground measurements are -0.044 and -17.699%, respectively. The study also analyzes the product's performance separately for clear-sky and cloudy-sky conditions, and discusses the importance of representing the cloud-aerosol radiative coupling in the MDSSFTD method. Finally, it is concluded that the currently available Aerosol Optical Depth (AOD) forecasts are accurate enough to obtain reliable diffuse solar flux estimates, a quality that was still a limitation a few years ago.
ARTICLE | doi:10.20944/preprints201910.0096.v1
Subject: Environmental And Earth Sciences, Water Science And Technology Keywords: Rainfall-runoff model; large-scale river basins; calibration; validation
Online: 9 October 2019 (10:16:32 CEST)
In this work a modified version of the well-known Simple Water Balance (SWB) model, comprising three parameters instead of one, was used. Although simple, the model was tested in large-scale river basins in east-central Greece, upstream of two hydrometric stations. The available historic runoff records comprised 19 hydrologic years each, on a monthly basis. Thirteen of them were used for calibrating the model and the six subsequent years for validating it. Two different efficiency criteria were used as measures of the modified model's performance. Their values, calculated for both the calibration and validation stages, were close and relatively high. Thus, keeping in mind both the size and complexity of the river basins studied, one can conclude that the modified model, despite its simplistic concept and lumped form, fits the historic runoff series satisfactorily.
Subject: Social Sciences, Cognitive Science Keywords: Validation; Questionnaire Design; Self-Perception; Diabetes Mellitus; Self Care.
Online: 25 March 2019 (10:00:07 CET)
Background: Perceived competence, as a basic psychological need, can trigger achievement of diabetes self-management goals. Due to the lack of a specific data collection tool to measure self-competence among Persian-speaking patients with diabetes, this study carried out the cross-cultural adaptation and psychometric assessment of the Persian version of the Perceived Competence Scale for Diabetes (PCSD-P). Methods: A standard translation/back-translation procedure was used to prepare a preliminary draft of the PCSD-P. Content and face validity of the early draft were checked by an expert panel of 15 scholars in health education and promotion and in nursing education with experience of working and research on diabetes. The final questionnaire was completed by 177 randomly selected patients with type 2 diabetes. Based on the collected data, the structural validity of the adapted version was appraised using exploratory and confirmatory factor analysis (EFA, CFA). Cronbach's alpha and intraclass correlation coefficients (ICC) were used to check the scale's reliability and internal consistency. Results: The estimated Content Validity Index (CVI = 0.95) and Content Validity Ratio (CVR = 0.8) were within the recommended acceptable limits. The EFA results demonstrated a single-factor solution according to the items' loadings on the component. The model fit indices (RMSEA = 0.000, CFI = 1, TLI = 1, GFI = 0.998, NFI = 0.999, RFI = 0.995) confirmed the hypothesized one-factor solution. The internal consistency and reliability coefficients were also within the acceptable range (α = 0.892, ICC = 0.886, P = 0.001). Conclusions: The findings revealed good internal validity and applicability of the PCSD-P for measuring the degree of self-competence among Persian-speaking patients with type 2 diabetes in managing the chronic disease. Due to the unrepresentativeness of the study sample, future cross-cultural testing of the PCSD-P on more diverse and broader Persian-speaking populations is recommended.
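Cronbach's α, the reliability coefficient used above, has a simple closed form that can be sketched in a few lines of NumPy. The simulated single-latent-trait responses below are hypothetical, not the study's field data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated responses: one latent trait plus item-level noise (hypothetical).
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
scores = latent + 0.6 * rng.normal(size=(300, 10))
alpha = cronbach_alpha(scores)
```

Because every item here loads on the same latent trait, the item covariances inflate the total-score variance and α comes out high, mirroring the one-factor structure reported for the PCSD-P.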
ARTICLE | doi:10.20944/preprints201806.0055.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: quality control; validation; reconstruction of missing data; temperature; precipitation
Online: 5 June 2018 (08:42:40 CEST)
This study provides a unique procedure for validating and reconstructing temperature and precipitation data. Although developed from data in Middle Italy, the validation method is intended to be universal, subject to appropriate calibration according to the climate zones analysed. This research is an attempt to create shared applicative procedures that are most of the time only theorized or included in some software without a clear definition of the methods. The purpose is to detect most types of errors according to the procedures for data validation prescribed by the World Meteorological Organization, defining practical operations for each of the five types of data controls: gross error checking, internal consistency check, tolerance test, temporal consistency, and spatial consistency. Temperature and precipitation data over the period 1931–2014 were investigated. The outcomes of this process have led to the removal of 375 records (0.02%) of temperature data from 40 weather stations and 1286 records (1.67%) of precipitation data from 118 weather stations, with 171 data points reconstructed. In conclusion, this work contributes to the development of standardized methodologies to validate climate data and provides an innovative procedure to reconstruct missing data in the absence of reliable reference time series.
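The first of the five WMO-style controls listed above, gross error checking, amounts to flagging records outside physically plausible bounds. A minimal sketch follows; the thresholds are illustrative assumptions, not the paper's calibrated limits for Middle Italy:

```python
import numpy as np

def gross_error_check(temps_c, lower=-30.0, upper=50.0):
    """Flag temperature records outside plausible bounds (illustrative limits).
    Returns a boolean mask: True marks a suspect record to be removed or
    queued for reconstruction."""
    t = np.asarray(temps_c, float)
    return (t < lower) | (t > upper)

flags = gross_error_check([12.3, 99.9, -45.0, 18.7])
```

The other four controls (internal consistency, tolerance, temporal and spatial consistency) follow the same pattern of mask-producing tests, so a station's record can be screened by OR-ing the masks together.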
ARTICLE | doi:10.20944/preprints201702.0026.v1
Subject: Engineering, Marine Engineering Keywords: wave energy; system identification; model validation; wave tank testing
Online: 8 February 2017 (17:00:08 CET)
Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on model quality. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data are then used to produce multiple models of the device's dynamics. These models are validated and their performance is compared against one another. While most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.
ARTICLE | doi:10.20944/preprints202308.1421.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Earth Observation; Remote Sensing; Calibration; Validation; Fiducial Reference Measurement; CEOS
Online: 21 August 2023 (05:00:04 CEST)
In recent years, the concept of Fiducial Reference Measurements (FRM) has been developed to highlight the need for precise and well-characterised measurements tailored explicitly to the post-launch calibration and validation (Cal/Val) of Earth observation satellite missions. The confidence that stems from robust, unambiguous uncertainty assessment of space observations is fundamental to assessing changes in the Earth system, to climate model prediction, and to delivering the essential evidence-based input for policy makers and society striving to mitigate and adapt to climate change. The underlying concept of an FRM has long been a core element of a Cal/Val program, providing a 'trustable' reference against which performance can be anchored or assessed. The 'FRM' label was created to embody in such a reference a set of key criteria. These criteria included the establishment of documented evidence of uncertainty with respect to a community-agreed reference (ideally SI-traceable) and specific tailoring to the needs of a satellite mission. It therefore facilitates comparison and interoperability between products and missions in a cost-efficient manner. The CEOS Working Group on Cal/Val (WGCV) is now putting in place a framework to assess the maturity and compliance of a 'Cal/Val reference measurement' against a set of community-agreed criteria which define it to be of CEOS-FRM quality. The assessment process is based on a maturity matrix that provides a visual assessment of the state of any FRM against each criterion, making visible where it is mature and where further evolution and effort are needed. This paper provides the overarching definition of what constitutes an FRM and introduces the new CEOS FRM assessment framework.
ARTICLE | doi:10.20944/preprints202305.0412.v1
Subject: Medicine And Pharmacology, Dietetics And Nutrition Keywords: non-nutritive sweeteners; food frequency questionnaire; survey validation; pregnant women
Online: 6 May 2023 (10:20:17 CEST)
Studies on the effects of non-nutritive sweeteners (NNSs) in pregnant women are scarce and conflicting. A major challenge is to accurately assess NNS intake, especially in countries where many foods and beverages have been reformulated, with sugar progressively replaced by NNSs following the implementation of new nutrition labelling policies for preventing obesity. This study aimed to develop and validate a food frequency questionnaire (FFQ) to examine the intake of NNSs in pregnant women. The questionnaire was tested in 29 women in their eighth month of gestation and compared with 3-day dietary records (3-DR). FFQ validity was assessed using Spearman's correlation coefficient, Lin's concordance correlation coefficient (CCC) and Bland-Altman plots. Spearman correlations between the NNS FFQ and 3-DR ranged from 0.50 for acesulfame K to 0.83 for saccharin. The CCC ranged from 0.22 to 0.66. Bland-Altman plots showed that, compared with the 3-DR, the NNS FFQ overestimated saccharin, sucralose and steviol glycoside intake and underestimated acesulfame K and aspartame. Overall, the most frequently consumed NNS was sucralose. None of the participants exceeded the acceptable daily intake for any of the NNSs evaluated. The NNS FFQ appears to be a reasonably valid tool for assessing NNS consumption in pregnant women.
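Two of the agreement statistics used above, Lin's CCC and the Bland-Altman bias with limits of agreement, can be computed directly from their definitions. The paired intake values below are hypothetical, not the study's data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement for paired methods."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical paired intakes (mg/day): FFQ estimate vs. 3-day dietary record.
ffq = np.array([12.0, 30.5, 8.2, 55.1, 20.0, 41.3, 15.8, 27.9])
rec = np.array([10.5, 28.0, 9.1, 50.2, 22.4, 39.0, 14.2, 25.5])
ccc = lins_ccc(ffq, rec)
bias, lo, hi = bland_altman(ffq, rec)
```

Unlike Spearman's correlation, the CCC penalizes a systematic offset between the two methods, which is why the abstract can report high correlations alongside only moderate concordance.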
ARTICLE | doi:10.20944/preprints202302.0225.v1
Subject: Medicine And Pharmacology, Psychiatry And Mental Health Keywords: health psychology; mental distress; stress; population; psychometric property; validation; Mongolia
Online: 14 February 2023 (03:03:17 CET)
Abstract: (1) Background: We hypothesized that measuring the extent of brain overwork can serve as a proxy for the burden of mental distress. This study aimed to develop a scale that measures mental distress and to validate it in the general population. (2) Methods: In this population-based cross-sectional study, we recruited a total of 739 adults aged 16-65 years from 64 sampling centers of a clinical cohort across Mongolia to validate a 10-item self-report questionnaire. Internal consistency was measured using McDonald's ω. Test-retest reliability was analyzed using intraclass correlation coefficients. Construct and convergent validity were examined using principal component analysis (PCA) and confirmatory factor analysis (CFA). The Hospital Anxiety and Depression Scale (HADS) and the abbreviated version of the World Health Organization Quality of Life (WHOQOL-BREF) were used to evaluate criterion validity. (3) Results: Among the participants, 70.9% were women, 22% held a bachelor's degree or higher, 38.8% were employed, and 66% were married. The overall McDonald's ω coefficient was 0.861, demonstrating excellent internal consistency. The total intraclass correlation coefficient of the test-retest analysis was 0.75, indicating moderate external reliability. PCA and CFA established a three-domain structure that provided an excellent fit to the data (RMSEA=0.033, TLI=0.984, CFI=0.989, χ2=58, p=0.003). This 10-item scale, the Brain Overwork Scale (BOS-10), measures mental distress in three dimensions: Excessive Thinking, Hypersensitivity, and Restless Behavior. All items had higher item-total correlations with their corresponding domain than with the other domains, and correlations between the domain scores ranged from 0.547 to 0.615. The BOS-10 correlated with the HADS, whereas it was inversely correlated with the WHOQOL-BREF.
(4) Conclusions: The results suggest that the BOS-10 is a valid and reliable instrument for assessing mental distress in the general population. The current findings also demonstrate that the BOS-10 is quantitative, simple, and applicable for large-group testing.
ARTICLE | doi:10.20944/preprints202211.0460.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Fundamental variability; homogeneity; variance; method validation; proficiency testing; measurement uncertainty
Online: 24 November 2022 (14:33:00 CET)
The question of whether a given set of test items can be considered "identical" is often addressed in terms of the homogeneity of the test material from which the items were taken. However, for some types of matrices – in particular, matrices consisting of minute separate particles, only some of which carry the analyte under consideration – an irreducible source of variability between test items may remain even when the test material is homogeneous: the fundamental variability. In this paper, the concept of fundamental variability is explained, and procedures for reducing and characterizing it are described.
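The origin of this irreducible variability can be illustrated with a toy simulation (an illustration of the concept only, not the paper's procedure): if the analyte is carried by rare particles whose count per test item is approximately Poisson-distributed, the relative standard deviation between perfectly "identical" items is about one over the square root of the mean particle count.

```python
import numpy as np

def item_rsd(mean_particles, n_items=100_000, seed=0):
    """Relative standard deviation between simulated test items when the
    analyte-bearing particle count per item is Poisson-distributed."""
    counts = np.random.default_rng(seed).poisson(mean_particles, size=n_items)
    return counts.std() / counts.mean()

rsd = item_rsd(100)  # mean of 100 analyte particles per item
```

Grinding the material finer (raising the particle count per item) shrinks this floor, which is the intuition behind the reduction procedures the paper describes.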
ARTICLE | doi:10.20944/preprints202202.0260.v1
Subject: Environmental And Earth Sciences, Oceanography Keywords: sea surface salinity; sampling mismatch; sub footprint variability; uncertainty; validation
Online: 22 February 2022 (02:44:05 CET)
Validation of satellite sea surface salinity (SSS) products is typically based on comparisons with in-situ measurements at a few meters depth, mostly made at a single location and time. The difference in spatio-temporal resolution between the in-situ near-surface salinity and the two-dimensional satellite SSS results in a sampling-mismatch uncertainty. The Climate Change Initiative (CCI) project has merged SSS from three satellite missions. Using optimal interpolation, weekly and monthly SSS fields and their uncertainties are estimated at 50 km spatial resolution over the global ocean. Over the 2016-2018 period the mean uncertainty on weekly CCI SSS is 0.13, whereas the standard deviation of weekly CCI minus in-situ Argo salinities is 0.24. Using high-resolution SSS simulations, we estimate the expected uncertainty due to the CCI versus Argo sampling mismatch. Most of the largest spatial variability of the satellite minus Argo salinity is observed in regions with large mismatch. A quantitative validation is performed by considering the statistical distribution of the CCI minus Argo salinity normalized by the sampling and retrieval uncertainties. This quantity should follow a Gaussian distribution with a standard deviation of 1 if all uncertainty contributions are properly accounted for. We find that 1) the sampling mismatch can explain most of the observed differences between Argo and CCI data, especially for monthly products and in dynamical regions (river plumes, fronts), and 2) overall, the uncertainties are well estimated in CCI version 3, a clear improvement over CCI version 2. A few dynamical regions remain where discrepancies persist and where the satellite SSS, the associated uncertainties and the sampling-mismatch estimates should be further validated.
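The normalization step at the heart of this validation can be sketched as follows: differences are divided by the retrieval and sampling-mismatch uncertainties summed in quadrature, and if both are well estimated the result should be close to a standard Gaussian. The salinity values below are synthetic; only the 0.13 weekly retrieval uncertainty is taken from the abstract, the sampling-mismatch value is an illustrative assumption:

```python
import numpy as np

def normalized_misfit(sat, insitu, sigma_retrieval, sigma_sampling):
    """Satellite-minus-in-situ differences normalized by the combined
    retrieval and sampling-mismatch uncertainties (quadrature sum)."""
    sigma = np.sqrt(sigma_retrieval**2 + sigma_sampling**2)
    return (sat - insitu) / sigma

rng = np.random.default_rng(0)
n = 5000
sigma_r = np.full(n, 0.13)  # retrieval uncertainty (weekly CCI, per abstract)
sigma_s = np.full(n, 0.20)  # sampling-mismatch uncertainty (illustrative)
truth = 35.0 + 0.2 * rng.normal(size=n)          # synthetic true SSS field
sat = truth + sigma_r * rng.normal(size=n)        # satellite retrieval
argo = truth - sigma_s * rng.normal(size=n)       # point in-situ measurement
z = normalized_misfit(sat, argo, sigma_r, sigma_s)
```

A standard deviation of z well above 1 would indicate underestimated uncertainties; well below 1, overestimated ones. That is exactly the diagnostic the abstract applies to CCI versions 2 and 3.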
ARTICLE | doi:10.20944/preprints202106.0419.v1
Subject: Business, Economics And Management, Accounting And Taxation Keywords: Aesthetic literacy; Informational literacy; Promotional literacy; Rhetorical literacy; Students; Validation
Online: 15 June 2021 (16:13:16 CEST)
Although advertising literacy fosters critical thinking in the face of advertising, no effort has so far been made in Iran to develop a tool for measuring this type of literacy. Our investigation determined that, although much research has been done on advertising, the lack of an appropriate instrument for measuring the level of advertising literacy is clearly evident. This research therefore provides a valid tool for measuring advertising literacy from the students' perspective. Drawing on the dimensions of advertising literacy proposed by Malmelin (2010) and the views of relevant professors, a questionnaire was developed, and confirmatory factor analysis was used to establish construct validity. The statistical population was high school students, chosen considering the impact of advertising on this age range. The resulting instrument has four dimensions: informational literacy, aesthetic literacy, rhetorical literacy and promotional literacy. Given its confirmation in the present study, the instrument can be used to examine the status of its items and their order and prioritization from the perspective of the mentioned population.
ARTICLE | doi:10.20944/preprints202012.0268.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Oenothera biennis; standardization; quercetin 3-glucuronide; ellagic acid; method validation
Online: 10 December 2020 (16:43:01 CET)
Toward the standardization of O. biennis sprout extract (OBS-E), we aimed to establish indicator compounds using a validated method. HPLC-UVD allowed simultaneous quantification of the indicator compounds quercetin 3-glucuronide and ellagic acid. The method was validated in terms of specificity, linearity, precision, accuracy, and limit of detection/limit of quantification (LOD/LOQ). High specificity and linearity were demonstrated, with correlation coefficients of 1.0000 for quercetin 3-glucuronide and 0.9998 for ellagic acid. The LOD/LOQ values were 0.486/1.472 μg/mL for quercetin 3-glucuronide and 1.003/3.039 μg/mL for ellagic acid. Intra-day and inter-day variability tests produced relative standard deviations of <2% for each compound, a generally accepted precision criterion. High recovery rates were also obtained, confirming accuracy. OBS-E samples prepared using various concentrations of ethanol were then analyzed. The 50% ethanol extract had the highest content of quercetin 3-glucuronide, whereas the 70% ethanol extract had the lowest. In contrast, the ellagic acid content was highest in the 70% ethanol extract and lowest in the 90% ethanol extract. Thus, quercetin 3-glucuronide and ellagic acid can be used industrially as indicator compounds for O. biennis sprout products, and our validated method can serve as a template for establishing indicator compounds for other natural products.
ARTICLE | doi:10.20944/preprints202008.0568.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: pedotransfer functions; inverse methods; gravity irrigation; model validation; experimental data
Online: 26 August 2020 (09:01:37 CEST)
In the present work, we evaluate the predictive capability of six pedotransfer functions (PTFs) reported in the literature for estimating saturated hydraulic conductivity (Ks). We used a database of 900 measured samples obtained from Irrigation District 023, in San Juan del Rio, Queretaro, Mexico. Additionally, six new PTFs for Ks were constructed from clay percentage, bulk density and saturation water content data. The results show that, among the evaluated models, one overestimates Ks for values >0.5 cm h-1, three underestimate Ks for values >1.0 cm h-1, and two correlate well (R2>0.98) but require three to four parameters for the optimization. In contrast, the models proposed in this work achieve a similar correlation with fewer parameters: the fit is much better than with the existing PTFs, reaching R2 = 0.9822 with only one variable and R2 = 0.9901 with two.
ARTICLE | doi:10.20944/preprints202005.0217.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: SEAIQR model; stability analysis; COVID-19; optimal control; model validation
Online: 12 May 2020 (13:02:12 CEST)
Background: The outbreak of Covid-19 is an ongoing global health emergency. At the end of December 2019, the first infection was reported in Wuhan; the world did not pay attention to this extremely contagious disease and failed to react rapidly. The world is now in a vulnerable state as the disease spreads, facing a great loss of lives as well as socio-economic damage. We have therefore proposed a potential mathematical model, supported by data analysis, to predict and control the outcome of this pandemic. Methods: The model describes the epidemic dynamics across multiple compartments. We collected publicly available online data for Spain. As a first step, we estimated the parameters using either data analysis or reference papers. We then performed a data-fitting analysis against the outcome of our mathematical results. The behavior of the system depends not only on the parameters but also on social consciousness. Results: Disease progression in this model is determined by the basic reproductive ratio $R_0$, the epidemic's actual $R_0$, and the effective reproduction number $R(t)$ for each day. If $R_0>1$, the number of latently infected individuals grows exponentially; if $R_0<1$, the endemic solution is stable and the infection rate decays. Optimal control theory shows that vaccination and treatment strategies are highly effective for reducing both the susceptible and infected populations and for raising the recovery rate. In Spain, after the state of alarm (quarantine) declared on 14 March 2020, reported cases increased for only 13 days; from the 14th day, daily reported cases started to decline, albeit with small fluctuations. Our proposed model estimates that the disease in Spain could be fully under control after July 2020. Conclusion: Bringing the outbreak under the control of the health care system will reduce the death rate and help ensure socio-economic stability.
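The threshold behavior at $R_0 = 1$ described above can be demonstrated with a minimal SEIR-type compartment model (a simplified stand-in for the paper's SEAIQR model; the rates and population below are illustrative assumptions, not the fitted Spanish parameters):

```python
def seir_attack_rate(beta, sigma=1/5.2, gamma=1/10, days=500, n=1e6, i0=10.0):
    """Forward-Euler (dt = 1 day) integration of a minimal SEIR model.
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate.
    Here R0 = beta / gamma; returns the final attack rate R(end)/N."""
    s, e, i, r = n - i0, 0.0, i0, 0.0
    for _ in range(days):
        infections = beta * s * i / n   # S -> E flow
        onsets = sigma * e              # E -> I flow
        recoveries = gamma * i          # I -> R flow
        s, e, i, r = (s - infections,
                      e + infections - onsets,
                      i + onsets - recoveries,
                      r + recoveries)
    return r / n

growing = seir_attack_rate(beta=0.25)  # R0 = 2.5: exponential growth phase
fading = seir_attack_rate(beta=0.05)   # R0 = 0.5: outbreak dies out
```

Quarantine measures such as Spain's state of alarm act by lowering the effective transmission rate, which in this sketch corresponds to pushing beta (and hence the effective reproduction number) below gamma.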
ARTICLE | doi:10.20944/preprints202003.0384.v1
Subject: Public Health And Healthcare, Nursing Keywords: quality; just culture; patient safety; nurses; hospital; measuring instrument validation
Online: 26 March 2020 (07:24:55 CET)
Purpose: "Just culture" is an element of safety culture and, in a broader sense, a part of quality culture. It is the subject of studies, especially in healthcare, yet the phenomenon is almost unknown in Polish medical facilities. For this reason, the aim of this article is to present the essence and significance of "just culture" in healthcare. A further aim is to present the results of validating a "just culture" assessment instrument used to recognize the "just culture" maturity level and to evaluate nurses' beliefs and behaviours against "just culture" criteria. Methodology/Approach: The verified questionnaire consisted of 28 statements on which respondents expressed their opinion on a 5-point Likert scale. The questionnaire was distributed among nurses in one of the largest hospitals in Pomorskie Voivodeship, Poland. The results, based on 68 responses, were statistically processed with Statistica 13.1 software. Findings: The obtained results confirmed the reliability of the assessment tool, identified the level of "just culture" as wisdom (68%), and indicated strengths and weaknesses of the observed beliefs and behaviours. On this basis, improvement actions were proposed. Originality/Value: We used an original, self-prepared questionnaire. This is the first study on "just culture" in healthcare in Poland. There are only a few studies devoted to patient safety culture in Poland and no research addressing the "just culture" phenomenon, either in Poland or in Central Europe. The results allow us to recommend the assessment tool for other hospitals and should help in understanding the essence of implementing "just culture".
ARTICLE | doi:10.20944/preprints201908.0294.v1
Subject: Physical Sciences, Mathematical Physics Keywords: time series; Colorado River; water supply; cross-validation; decadal prediction
Online: 28 August 2019 (11:32:10 CEST)
The future of the Colorado River water supply (WS) affects millions of people and the U.S. economy. A recent study suggested a cross-basin correlation between the Colorado River and its neighboring Great Salt Lake (GSL). Following that study, the feasibility of using the previously developed multi-year prediction of the GSL water level to forecast the Colorado River WS was tested. Time-series models were developed to predict the changes in WS out to 10 years. Regressive methods and the GSL water level data were used for the depiction of decadal variability of the Colorado River WS. Various time-series models suggest a decline in the 10-year-averaged WS since 2013 before starting to increase around 2020. Comparison between this WS prediction and the WS projection published in a 2012 government report (derived from climate models) reveals a widened imbalance between supply and demand by 2020. Further research to update similar multi-year prediction of the Colorado River WS is needed. Such information could aid in management decision making in the face of future water shortages.
ARTICLE | doi:10.20944/preprints201901.0241.v1
Subject: Engineering, Civil Engineering Keywords: energy piles; validation; floor slab heat loss; energy; computer simulations
Online: 23 January 2019 (14:04:51 CET)
As the energy efficiency demands for future buildings become increasingly stringent, preliminary assessments of energy consumption are mandatory. These are possible only through numerical simulations, whose reliability crucially depends on the boundary conditions. We therefore investigate their role in numerical estimates of geothermal energy usage, performing annual simulations of transient heat transfer for a building employing a geothermal heat pump plant and energy piles. Starting from actual measurements, we solve the heat equations in 2D and 3D using COMSOL Multiphysics and IDA-ICE, and find a negligible impact of the multiregional ground surface boundary conditions. Moreover, we verify that the thermal mass of the soil medium induces only a small vertical temperature gradient on the pile surfaces. We also find a roughly constant temperature on each horizontal cross-section, with nearly identical values whether the average temperature is integrated over the full plane or evaluated at a single point. Calculating the yearly heating need for an entire building, we then show that the chosen upper boundary condition affects the energy balance dramatically: using the pipes' outlet temperature directly induces a 54% overestimation of the heat flux, while using the exact ground surface temperature above the piles reduces the error to 0.03%.
ARTICLE | doi:10.20944/preprints201901.0071.v1
Subject: Social Sciences, Behavior Sciences Keywords: Facebook; Facebook intrusion; couple relationships; conflicts; jealousy; psychometric properties; validation
Online: 8 January 2019 (15:15:56 CET)
The present study evaluates the psychometric properties of the Conflicts in Romantic Relationships Over Facebook Use Scale with a sample of Puerto Rican adults. A total of 577 Puerto Ricans participated in this confirmatory and psychometric study. The results confirmed that the scale has a multidimensional structure with three dimensions: Partner Facebook intrusion, Conflict over Facebook use, and Jealousy over Facebook use. A total of 18 items complied with the discrimination criteria and presented appropriate factor loadings (6 items per dimension). The Cronbach's alpha indexes of the dimensions ranged between .87 and .95, and the omega coefficients ranged between .88 and .95. In summary, the instrument has the appropriate psychometric properties to proceed with validation studies and to be implemented in various areas of work, both theoretical and applied.
ARTICLE | doi:10.20944/preprints201808.0325.v1
Subject: Engineering, Energy And Fuel Technology Keywords: modelling; lead-acid battery; parameter identification; genetic algorithms; experimental validation
Online: 18 August 2018 (06:14:37 CEST)
Accurate and efficient battery modeling is essential to maximize the performance of isolated energy systems and to extend battery lifetime. This paper proposes a battery model that represents the charging and discharging process of a lead-acid battery bank. The model is validated against real measurements taken from a battery bank installed at a research center in "El Chocó", Colombia. To fit the model, three optimization algorithms (Particle Swarm Optimization, Cuckoo Search, and Particle Swarm Optimization+Perturbation) are implemented and compared, the last of which is a new proposal. This research shows that the model with the proposed algorithm is able to estimate and track the real battery characteristics, such as SOC and charging/discharging voltage. The comparison between simulations and real measurements shows that the model is able to absorb reading problems, signal delays, and scaling errors. The approach we present can be implemented in other types of batteries, especially those used in stand-alone systems.
ARTICLE | doi:10.20944/preprints201808.0243.v1
Subject: Social Sciences, Sociology Keywords: alcoholism; health professionals’ attitudes; social perception; Seaman-Mannello scale; validation
Online: 14 August 2018 (05:42:32 CEST)
Objective: The goal of this study was to analyse the attitudes and perceptions of emergency and mental health nurses towards alcoholics and other drug-dependent patients through the validation of the SM-GIBED scale in specialised care in Spain. Design and Setting: This cross-sectional study was developed using the Spanish hospital version of the Seaman-Mannello scale, denominated the SM-GIBED scale. Participants: 170 emergency and mental health nurses from five Spanish hospitals. Intervention: Self-administered questionnaire to analyse perceptions and attitudes towards drug-addicted and alcoholic patients. Primary and Secondary Outcome Measures: A descriptive and inferential analysis of the study variables was carried out, and a psychometric analysis was performed to validate the scale. Results: A total of 170 questionnaires were collected from 257 healthcare workers. Overall, 99.1% of the participants had contact with drug-dependent patients during their professional experience, and nearly 75% had difficulties in treating them. The psychometric analysis of the SM-GIBED scale in the Spanish context yielded KMO = 0.655 and a significant Bartlett's test (p < 0.001). A Cronbach's alpha of 0.738 was obtained from the reliability analysis, and an item-level reliability analysis found no question with an alpha lower than 0.71. In conclusion, positive aspects include an ingratiating attitude and subject-to-subject communication when nurses self-define as empathic and non-paternalistic. Among the negative aspects, there is a lack of communication skills and assertiveness with these patients, highlighting a certain degree of resignation and dissatisfaction when working with drug addicts.
ARTICLE | doi:10.20944/preprints202308.1854.v1
Subject: Public Health And Healthcare, Nursing Keywords: obstetrics labor; obstetric violence; surveys and questionnaires; validation studies as topic
Online: 28 August 2023 (10:02:28 CEST)
Obstetric violence refers to dehumanized or derogatory treatment of women during pregnancy, childbirth or the postpartum period, and may be manifested through the attitudes of health professionals or the performance of unjustified or outdated practices without maternal consent. Currently, there is no tool validated in Spain to measure women’s perception of obstetric violence. The objective of this study was to carry out the cultural adaptation and validation of an existing 14-item Obstetric Violence Scale to the Spanish context, and to evaluate its psychometric properties. The research was conducted in two phases: first, a methodological study designed to evaluate content validity, through assessments by 8 experts (calculating the Aiken V coefficient) and face validity in a sample of 20 women; second, a cross-sectional study to evaluate construct validity (through confirmatory factor analysis and Rasch analysis), divergent validity against a scale of birth satisfaction, known-groups validity and reliability. In Phase 1, Aiken V values higher than 0.71 were obtained for all items. Phase 2 was conducted on a sample of 256 women, and the fit values for the unidimensional model were RMSEA: 0.070 [95%CI: 0.059-0.105] and GFI: 0.982 [95%CI: 0.823-0.990]. The Rasch analysis indicated poor performance of item 2, which was removed. The omega and Cronbach's alpha coefficients were 0.863 and 0.860, respectively. A final 13-item version of the Obstetric Violence Scale was produced, with total scores ranging from 0 (no obstetric violence perception) to 52 (maximum obstetric violence perception). The Obstetric Violence Scale is a reliable and useful tool to measure women's perception of obstetric violence.
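A minimal sketch of the Aiken V content-validity coefficient used in Phase 1 (the judge ratings and the 4-point scale below are invented for illustration; only the 0.71 acceptance threshold comes from the abstract):

```python
def aiken_v(ratings, lo=1, hi=4):
    """Aiken's V for one item: sum(rating - lo) / (n_judges * (hi - lo)).

    ratings: one rating per judge on a scale from lo to hi; V ranges from 0 to 1.
    """
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Invented relevance ratings from 8 judges on a hypothetical 4-point scale:
item_ratings = [4, 4, 3, 4, 4, 3, 4, 4]
v = aiken_v(item_ratings)
print(round(v, 3), v > 0.71)  # item would be retained: V exceeds the threshold
```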
ARTICLE | doi:10.20944/preprints202305.1073.v1
Subject: Medicine And Pharmacology, Dietetics And Nutrition Keywords: continuous glucose monitoring; application in sports; carbohydrate management; active subjects; validation
Online: 16 May 2023 (03:52:33 CEST)
The objective of this pilot study was to compare glucose concentrations in capillary blood (CB) samples analysed in a laboratory by a validated method with glucose concentrations measured in the interstitial fluid (ISF) by continuous glucose monitoring under different physical activity levels in a postprandial state in healthy and active subjects without diabetes. Ten healthy, active subjects (26±4 years, 67±11 kg bodyweight (BW), 11±3 h) were included in the study. Within 14 days, they underwent six tests consisting of (a) resting fasted (R/Fast), (b) resting after intake of 1 g glucose/kg BW (R/Glc), and running for 60 minutes at (c) moderate (65/Glc) and (d) high (85/Glc) intensity after intake of 1 g glucose/kg BW. Data were collected in the morning, following a standardised dinner on the evening before the test day. Sensor-based glucose concentrations were compared to simultaneous capillary blood glucose concentrations. Pearson’s r correlation coefficient was highest for R/Glc (.92, p<.001) compared to R/Fast (.45, p<.001), 65/Glc (.60, p<.001) and 85/Glc (.70, p<.001). Mean absolute relative deviation (MARD) and standard deviation (SD) were smallest under resting fasted conditions and similar across all other conditions (R/Fast: 8±6%, R/Glc: 17±12%, 65/Glc: 22±24%, 85/Glc: 18±17%). However, Bland-Altman plot analysis showed a wider range between the lower and upper limits of agreement (95% confidence interval) of paired data under exercising compared to resting conditions. Under resting fasted conditions, both methods produce similar outcomes; under resting postprandial and exercising conditions, respectively, there are differences between the two methods. However, further data from healthy subjects, accounting for physical activity and nutrition status, need to be gathered.
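A hedged sketch of how a MARD value like those above is computed from paired sensor and reference readings (the glucose values below are invented, not study data):

```python
from statistics import mean, stdev

def mard(sensor, reference):
    """Mean absolute relative deviation (%) and its SD between sensor (ISF)
    and reference (capillary blood) glucose readings, paired by time point."""
    rel = [abs(s - r) / r * 100 for s, r in zip(sensor, reference)]
    return mean(rel), stdev(rel)

# Invented paired glucose readings (mg/dL) for one hypothetical condition:
cgm = [98, 110, 150, 170, 130]
cap = [100, 120, 140, 160, 140]
m, sd = mard(cgm, cap)
print(f"MARD = {m:.1f} ± {sd:.1f} %")
```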
ARTICLE | doi:10.20944/preprints202301.0212.v1
Subject: Public Health And Healthcare, Nursing Keywords: aphasia; surveys and questionnaires; standardised nursing terminology; nursing assessment; validation studies
Online: 12 January 2023 (06:34:25 CET)
(1) Background: The CEECCA questionnaire assesses the ability to communicate among individuals with aphasia. It was designed using the NANDA-I and NOC standardised nursing languages (SNLs), reaching high content validity index and representativeness index values. The questionnaire was pilot-tested, demonstrating its feasibility for use by nurses in any healthcare setting. This study aims to identify the psychometric properties of this instrument. (2) Methods: 47 individuals with aphasia were recruited from primary and specialist care facilities. The instrument was tested for construct validity and criterion validity, reliability, internal consistency, and responsiveness. The NANDA-I and NOC SNLs and the Boston test were used for criterion validity testing. (3) Results: Five language dimensions explained 78.6% of the total variance. Convergent criterion validity tests showed concordances of up to 94% (Cohen’s κ: 0.9; p<0.001) using the Boston test, concordances of up to 81% (Cohen’s κ: 0.6; p<0.001) using the defining characteristics of NANDA-I diagnoses, and concordances of up to 96% (Cohen’s κ: 0.9; p<0.001) using NOC indicators. Internal consistency (Cronbach’s alpha) was 0.98. Reliability tests revealed test-retest concordances of 76%-100% (p<0.001). (4) Conclusions: The CEECCA is an easy-to-use, valid, reliable instrument to assess the ability to communicate among individuals with aphasia.
ARTICLE | doi:10.20944/preprints202212.0316.v1
Subject: Environmental And Earth Sciences, Oceanography Keywords: sea surface salinity; ocean reanalysis; moored buoy; in situ measurements; validation
Online: 19 December 2022 (03:45:59 CET)
Sea surface salinity (SSS) is one of the Essential Climate Variables (ECVs) defined by the Global Climate Observing System (GCOS). Acquiring high-quality SSS datasets with high spatial-temporal resolution is crucial for research on the hydrological cycle and the Earth's climate. This study assessed the quality of SSS data provided by five high-resolution ocean reanalysis products: the Hybrid Coordinate Ocean Model (HYCOM) 1/12° global reanalysis, the Copernicus Global 1/12° Oceanic and Sea Ice GLORYS12 Reanalysis, the Simple Ocean Data Assimilation (SODA) reanalysis, the ECMWF Oceanic Reanalysis System 5 (ORAS5) product and the Estimating the Circulation and Climate of the Ocean Phase II (ECCO2) reanalysis. A regional comparison in the Mediterranean Sea shows that the reanalyses largely depict the accurate spatial SSS structure away from river mouths and coastal areas but slightly underestimate the mean SSS values. Reanalysis performance is better in the Levantine Sea, while larger SSS uncertainties are found in the Adriatic Sea and the Aegean Sea. The global comparison with CMEMS level-4 (L4) SSS shows generally consistent large-scale structures. The mean ΔSSS between monthly gridded reanalysis data and in situ analyzed data is -0.1 PSU in the open seas between 40°S and 40°N, with the mean Root Mean Square Deviation (RMSD) generally smaller than 0.3 PSU and the majority of correlation coefficients higher than 0.5. Comparison with collocated buoy salinity shows that the reanalysis products capture well the SSS variations at the locations of the tropical moored buoy arrays at the weekly scale. Among the five products, HYCOM reanalysis SSS has the highest quality in marginal seas, while GLORYS12 performs best in the global ocean, especially in tropical regions. Comparatively, ECCO2 has the worst overall performance in reproducing SSS states and variations, showing the largest discrepancies with CMEMS L4 SSS.
ARTICLE | doi:10.20944/preprints202012.0080.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: multivariate linear method; validation; diagnosis; discriminative; signatures of disease; schizophrenia; depression
Online: 3 December 2020 (10:38:31 CET)
In order to overcome this problem, our group designed a novel machine learning technique, the multivariate linear method (MLM), which can capture convergent data from voxel-based morphometry, functional resting-state and task-related neuroimaging, and the relevant clinical measures. In this paper we report results from convergent cross-validation of biological signatures of disease in a sample of patients with schizophrenia as compared to depression. Our model provides evidence that combining the neuroimaging and clinical data in an MLM analysis can inform the differential diagnosis, in terms of incremental validity, reaching 90% prediction accuracy.
ARTICLE | doi:10.20944/preprints202008.0713.v1
Subject: Chemistry And Materials Science, Food Chemistry Keywords: NMR; alcoholic beverages; ethanol; methanol; acetaldehyde; screening; validation; food control; PULCON
Online: 31 August 2020 (06:21:35 CEST)
Due to legal regulations, the rise of globalised (online) commerce and the need for public health protection, the analysis of spirits (alcoholic beverages >15% vol) is a task of growing importance for governmental and commercial laboratories. In this article, a newly developed method using nuclear magnetic resonance (NMR) spectroscopy for the simultaneous determination of 15 substances relevant to the quality and authenticity assessment of spirits is described. The new method starts with a simple and rapid sample preparation and does not need an internal standard. For each sample, a group of 1H-NMR spectra is recorded, among them a 2D spectrum for analyte identification and 1D spectra with suppression of solvent signals for quantification. Using the Pulse Length Based Concentration Determination (PULCON) method, concentrations are calculated from curve fits of the characteristic signals of each analyte. The optimisation of the spectra, their evaluation and the transfer of the results are fully automated. Glucose, fructose, sucrose, acetic acid, citric acid, formic acid, ethyl acetate, ethyl lactate, acetaldehyde, ethanol, methanol, n-propanol, isobutanol, isopentanol, 2-phenylethanol and 5-(hydroxymethyl)furfural (HMF) can be quantified with an overall accuracy better than 8%. This new NMR-based targeted quantification method enables the simultaneous and efficient quantification of the relevant spirits ingredients in their typical concentration ranges in one process with good accuracy. It has proven to be a reliable method for all kinds of spirits in routine food control.
ARTICLE | doi:10.20944/preprints202008.0376.v1
Subject: Social Sciences, Psychology Keywords: organizational skills; test development; general population; goal-directed behaviors; psychometric validation
Online: 18 August 2020 (05:12:27 CEST)
Organizational skills are a set of cognitive abilities responsible for goal-directed behaviors. While they are moderately studied in clinical settings, the assessment of organizational skills in the general population remains under-studied. This paper presents the new Durand Organizational Skills Questionnaire (DOSQ), which was developed to examine the factors associated with organizational abilities in the general population. Exploratory factor analysis, validated by a confirmatory factor analysis, suggests eight factors: Work Organization, Communication Clarity, Punctuality, Goal-Oriented Behavior, Assiduity, Workspace Organization, Strategies, and Attentiveness. Three studies using samples from the general population provided evidence for the reliability and validity of the DOSQ’s scores. Overall, the results suggest that the DOSQ offers a valid approach to measuring organizational skills in the general population.
ARTICLE | doi:10.20944/preprints202006.0351.v1
Subject: Medicine And Pharmacology, Oncology And Oncogenics Keywords: Brain Tumor; Machine Learning; Ensemble techniques; AdaBoost; Cross-Validation; Stratified technique
Online: 29 June 2020 (07:27:38 CEST)
Brain tumor is a severe disease whose occurrence threatens human life, and early detection can protect patients from unnecessary loss of life. Well-timed and swift disease detection and a suitable treatment strategy can lead to improved quality of life in these patients. This paper applies machine-learning-based ensemble approaches to recognising patients with brain tumor. An ensemble AdaBoost classifier combined with 10-fold stratified cross-validation in a single platform is proposed for the prediction of brain tumor. This prediction is compared against three baseline classifiers: Gradient Boost, Random Forest and Extra Trees. Experimental results show the superiority of the proposed model, with an accuracy of 98.97%, an F1-score of 0.99, a kappa statistic of 0.95 and an MSE of 0.0103.
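The 10-fold stratified cross-validation mentioned above can be sketched as follows; this shows only the fold-assignment step (the class labels are invented, and AdaBoost itself is not implemented here):

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=0):
    """Assign sample indices to k folds so that each fold preserves the
    class proportions of the full dataset (stratified cross-validation)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for pos, idx in enumerate(idxs):  # deal each class out round-robin
            folds[pos % k].append(idx)
    return folds

# Invented toy dataset: 70 "tumor" and 30 "healthy" samples.
labels = ["tumor"] * 70 + ["healthy"] * 30
folds = stratified_folds(labels, k=10)
print([sum(labels[i] == "tumor" for i in f) for f in folds])  # 7 tumor cases per fold
```

In practice each fold would serve once as the test set while a classifier (e.g. AdaBoost) is trained on the remaining nine.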
ARTICLE | doi:10.20944/preprints202006.0210.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: SIHR model; COVID-19; Basic reproduction number; stability analysis; model validation
Online: 17 June 2020 (08:16:59 CEST)
COVID-19 is an emerging zoonotic viral disease caused by the coronavirus strain SARS-CoV-2; it is classified as a human-to-human communicable disease and is currently a worldwide pandemic. In this paper, we propose a conceptual mathematical model of the epidemic dynamics with four compartments. We collected data from the Djibouti health ministry. We establish positivity and boundedness of the solutions and derive the basic reproduction number. Then, we study local and global stability and carry out a bifurcation analysis of the equilibria to examine their epidemiological relevance. Finally, to validate the model, estimate the important model parameters and make predictions about the disease, we fit the model to the real cases of Djibouti from 23 March to 10 June 2020.
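As a hedged illustration of a four-compartment model of this kind (the keywords name an SIHR model, but the equations and parameter values below are a generic invented formulation, not the paper's):

```python
def sihr_step(state, beta, gamma, eta, mu, dt=0.1):
    """One forward-Euler step of a generic SIHR model:
    S' = -beta*S*I/N;  I' = beta*S*I/N - (gamma + eta)*I;
    H' = eta*I - mu*H; R' = gamma*I + mu*H.
    Total population N is conserved by construction."""
    S, I, H, R = state
    N = S + I + H + R
    new_inf = beta * S * I / N
    dS = -new_inf
    dI = new_inf - (gamma + eta) * I
    dH = eta * I - mu * H
    dR = gamma * I + mu * H
    return (S + dS * dt, I + dI * dt, H + dH * dt, R + dR * dt)

# Invented parameters; for this formulation R0 = beta / (gamma + eta).
beta, gamma, eta, mu = 0.4, 0.1, 0.05, 0.1
print("R0 =", round(beta / (gamma + eta), 2))  # R0 > 1: the epidemic grows initially
state = (990.0, 10.0, 0.0, 0.0)
for _ in range(1000):  # integrate 100 time units with dt = 0.1
    state = sihr_step(state, beta, gamma, eta, mu)
print("final susceptibles:", round(state[0]))
```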
ARTICLE | doi:10.20944/preprints202001.0385.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: wildfires; susceptibility mapping; machine learning; random forest; model validation; Liguria region
Online: 31 January 2020 (11:40:30 CET)
Wildfire susceptibility maps display the probability of wildfire occurrence, ranked from low to high, under a given environmental context. Current studies in this field often rely on expert knowledge, which may or may not include statistical models to assess cause-effect correlations. Machine learning (ML) algorithms can perform very well and be more generalizable thanks to their capability of learning from, and making predictions on, data. Italy is highly affected by wildfires due to the high heterogeneity of its territory and to predisposing meteorological conditions. The main objective of the present study is to elaborate a wildfire susceptibility map for the Liguria region (Italy) by applying Random Forest, an ensemble ML algorithm based on decision trees. Susceptibility was assessed by evaluating the probability of an area burning in the future, considering where wildfires occurred in the past and which geo-environmental factors favor their spread. Different models were compared, with and without the neighboring vegetation, and using an increasing number of folds for the spatial cross-validation. Susceptibility maps for the two fire seasons were finally elaborated and validated, and the results are critically discussed, highlighting the capacity of the proposed approach to identify the efficiency of firefighting activities.
ARTICLE | doi:10.20944/preprints201910.0039.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: tree species; forest; biodiversity; time series; spatial autocorrelation; cross-validation; accuracy
Online: 3 October 2019 (13:56:18 CEST)
Mapping forest composition using multiseasonal optical time series is still challenging. Highly contrasting results are reported from one study to another, suggesting that the drivers of classification errors are still under-explored. We evaluated the performance of single-year Formosat-2 time series for discriminating tree species in temperate forests in France and investigated how predictions vary statistically and spatially across multiple years. Our objective was to better estimate the impact of spatial autocorrelation in the validation data on accuracy measurements and to understand which drivers in the time series are responsible for classification errors. The experiments were based on ten Formosat-2 image time series irregularly acquired during the seasonal vegetation cycle from 2006 to 2014. Due to heavy cloud cover in 2006, an alternative 2006 time series using only cloud-free images was added. Thirteen tree species were classified in each single-year dataset using the SVM algorithm. Performance was assessed with a spatial leave-one-out cross-validation (SLOO-CV) strategy, thereby guaranteeing full independence of the validation samples, and compared with standard non-spatial leave-one-out cross-validation (LOO-CV). The results show relatively close statistical performance from one year to the next despite the differences between the annual time series. Good agreement between years was observed in monospecific tree plantations of broadleaf species, versus high disparity in other forests composed of different species. A strong positive bias in the accuracy assessment (up to 0.4 in overall accuracy) was also found when spatial dependence in the validation data was not removed. Using the SLOO-CV approach, the average OA values per year ranged from 0.48 for 2006 to 0.60 for 2013, which satisfactorily represents the spatial instability of species prediction between years.
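The SLOO-CV idea, excluding from the training set every sample spatially close to the held-out validation sample, can be sketched as follows (the coordinates and buffer radius are invented for illustration):

```python
from math import hypot

def sloo_folds(coords, buffer_radius):
    """Spatial leave-one-out CV: for each validation sample, the training
    set excludes every sample within buffer_radius of it, removing the
    optimistic bias caused by spatial autocorrelation."""
    folds = []
    for i, (xi, yi) in enumerate(coords):
        train = [j for j, (xj, yj) in enumerate(coords)
                 if j != i and hypot(xi - xj, yi - yj) > buffer_radius]
        folds.append((train, i))
    return folds

# Invented sample coordinates in map units; two spatially separated clusters.
coords = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
folds = sloo_folds(coords, buffer_radius=2.0)
for train, test in folds:
    print("test:", test, "train:", train)  # neighbors of the test sample are dropped
```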
ARTICLE | doi:10.20944/preprints201808.0363.v1
Subject: Physical Sciences, Applied Physics Keywords: quality control; BSRN; solar radiation; satellite-retrieved irradiance; ground stations; validation
Online: 21 August 2018 (04:23:52 CEST)
Quality control (QC) can be a lengthy and tedious process; as a result, most data users use data from meteorological services without performing data quality checks. The South African Weather Service (SAWS) re-established the national solar radiometric network, comprising 13 new stations within the six climatic zones of the country. This study reports on the performance results of the Baseline Surface Radiation Network (BSRN) QC procedures applied to the solar radiation data within the SAWS radiometric network. The overall percentage performance of the SAWS solar radiation network based on the BSRN QC methodology is 97.79%, 93.64%, 91.6% and 92.23% for Long Wave Downward Irradiance (LWD), Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI) and Direct Normal Irradiance (DNI), respectively, with operational problems largely dominating the percentage of bad data. The overall average performance of the Surface Solar Radiation Dataset - Heliosat (SARAH) data records for GHI estimation across all stations showed a Mean Bias Deviation (MBD) of -8.28 W/m2, a Mean Absolute Deviation (MAD) of 9.06 W/m2 and a Root Mean Square Deviation (RMSD) of 11.02 W/m2. The correlation (quantified by R2) between the ground-based and SARAH-derived GHI time series was ~0.98. The established network has the potential to provide high-quality one-minute solar radiation datasets (GHI, DHI, DNI and LWD) and auxiliary hourly meteorological parameters vital for scientific and practical applications in renewable energy technologies in South Africa.
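A minimal sketch of the MBD, MAD and RMSD statistics used above (the irradiance pairs below are invented, not SAWS data):

```python
from math import sqrt

def deviation_stats(estimated, observed):
    """MBD, MAD and RMSD between satellite-derived and ground-measured
    irradiance values (same units, paired by timestamp)."""
    d = [e - o for e, o in zip(estimated, observed)]
    n = len(d)
    mbd = sum(d) / n                       # signed mean bias
    mad = sum(abs(x) for x in d) / n       # mean absolute deviation
    rmsd = sqrt(sum(x * x for x in d) / n) # root mean square deviation
    return mbd, mad, rmsd

# Invented GHI pairs in W/m2 (satellite estimate vs. ground station):
sarah = [510, 640, 720, 300]
ground = [520, 650, 700, 310]
mbd, mad, rmsd = deviation_stats(sarah, ground)
print(mbd, mad, rmsd)  # negative MBD: slight underestimation on average
```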
ARTICLE | doi:10.20944/preprints201805.0150.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Quantitative Precipitation Estimates; Validation; PERSIANN-CCS; meteorological radar; Satellite Rainfall Estimates
Online: 9 May 2018 (15:37:29 CEST)
Quantitative Precipitation Estimates (QPEs) obtained from remote sensing or ground-based radars can complement, or even be an alternative to, rain gauge readings. However, to be used in operational applications, a validation process has to be carried out, usually by comparing their estimates with those of a rain gauge network. In this paper, the accuracy of two QPEs is evaluated for three extreme precipitation events of the last decade in the southeast of the Iberian Peninsula. The first QPE is PERSIANN-CCS, a satellite-based QPE. The second is a meteorological radar with Doppler capabilities operating in the C band. Pixel-to-point comparisons are made between the values offered by the QPEs and those obtained by two networks of rain gauges. The results indicate that both QPEs were well below the rain gauge values, especially in extreme rainfall time slots. There seems to be a weak linear association between the size of the discrepancies and the precipitation value of the QPEs. Radar does not appear to be more accurate than PERSIANN-CCS, despite its higher spatial resolution and its commonly higher effectiveness. The main conclusion is that neither PERSIANN-CCS nor radar, without empirical calibration, is an acceptable QPE for the real-time monitoring of meteorological extremes in the southeast of the Iberian Peninsula.
ARTICLE | doi:10.20944/preprints202308.1627.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: machine learning; indoor localization; radio environment map; extra trees regressor; cross-validation
Online: 23 August 2023 (07:56:44 CEST)
As an established and widely available infrastructure, wireless local area networks (WLANs) have emerged as a viable option for the indoor localization of both mobile and stationary users. In planning a mobile communications network and radio system design, the critical role of coverage prediction becomes evident, empowering network operators to optimize cellular networks and elevate the overall customer experience. Moreover, WLANs present several challenges that must be addressed for localization based on Wi-Fi signals to yield a proper coverage prediction map. This paper presents a study on the application of extra trees regression (ETR) to indoor localization using coverage prediction maps. The aim of the proposed method is to accurately estimate a user’s position within a radio environment map (REM) area using collected received signal strength indicator (RSSI) values. The proposed scheme investigates several machine learning (ML) regression algorithms for localization, where the training dataset is obtained from the REM using the nearest neighbors method. Parameter tuning is conducted to optimize the performance of the ETR scheme using 10-fold cross-validation. In the numerical results, we first demonstrate the effectiveness of utilizing ML regression techniques for generating coverage maps, which enables accurate estimation of Wi-Fi signal strength in indoor environments. We then showcase the superior performance of the proposed ETR-based method compared to several other ML schemes for indoor localization using the REM. ML algorithms, including decision tree regression and the ETR, are compared to evaluate the system model; based on the error metrics, the proposed ETR-based approach exhibits the best performance among the evaluated techniques. The combination of coverage map generation and localization using regression techniques offers a powerful approach for analyzing the radio frequency (RF) environment in indoor spaces.
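As a hedged sketch of REM-based fingerprint localization (this uses a plain k-nearest-neighbors average rather than the paper's ETR model, and the fingerprints below are invented):

```python
def knn_locate(rssi_query, fingerprints, k=3):
    """Estimate a user's (x, y) position as the mean position of the k REM
    fingerprints whose RSSI vectors are closest to the query (Euclidean)."""
    def dist(fp):
        return sum((a - b) ** 2 for a, b in zip(rssi_query, fp[1])) ** 0.5
    nearest = sorted(fingerprints, key=dist)[:k]
    xs = [p[0][0] for p in nearest]
    ys = [p[0][1] for p in nearest]
    return sum(xs) / k, sum(ys) / k

# Invented REM fingerprints: ((x, y), [RSSI from 3 access points, dBm]).
rem = [((0, 0), [-40, -70, -80]),
       ((0, 5), [-50, -60, -80]),
       ((5, 0), [-70, -70, -50]),
       ((5, 5), [-75, -55, -55])]
print(knn_locate([-48, -62, -79], rem, k=2))  # -> (0.0, 2.5): mean of the two nearest fingerprints
```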
ARTICLE | doi:10.20944/preprints202308.1151.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: method development; optimisation and validation; parallel factor modelling; partial least squares modelling
Online: 16 August 2023 (07:18:19 CEST)
In the present protocol, we determined the presence and concentrations of bisphenol A (BPA) spiked into surface water samples using EEM fluorescence spectroscopy in conjunction with partial least squares (PLS) and parallel factor (PARAFAC) modelling. PARAFAC modelling of the EEM fluorescence data obtained from surface water samples contaminated with BPA unraveled four fluorophores, including BPA. The best outcomes were obtained for BPA concentration (R2 = 0.996; ratio of standard deviation to root-mean-square error of prediction (RPD) = 3.41; Pearson's r = 0.998). With these values of R2 and Pearson's r, the PLS model showed a strong correlation between the predicted and measured BPA concentrations. The detection and quantification limits of the method were 3.512 and 11.708 micromolar (µM), respectively. In conclusion, BPA can be precisely detected, and its concentration in surface water predicted, using the PARAFAC and PLS models developed in this study and fluorescence EEM data collected from BPA-contaminated water. It is necessary to spatially relate surface water contamination data with other datasets in order to connect drinking water quality issues with health, environmental restoration, and environmental justice concerns.
ARTICLE | doi:10.20944/preprints202306.0493.v1
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: Artificial fractured tight stones; equivalent medium theories; rock physics; experimental validation; anisotropy
Online: 7 June 2023 (05:45:05 CEST)
The study of fractures in the subsurface is very important in unconventional reservoirs, since fractures are the main conduits for hydrocarbon flow. For this reason, a variety of equivalent medium theories have been proposed for the estimation of fracture and fluid properties within reservoir rocks. We experimentally investigated the feasibility of applying the Galvin model to fractured tight rocks. For this purpose, three artificial fractured tight sandstone samples with the same background porosity (11.7% ± 1.2%) but different fracture densities of 0.00, 0.0312, and 0.0624 were manufactured. The fracture thickness was 0.06 mm and the fracture diameter was 3 mm in all the fractured samples. Ultrasonic P- and S-wave velocities were measured at 0.5 MHz in a laboratory setting, in dry and water-saturated conditions, in directions at 0°, 45°, and 90° to the fracture normal. The results were compared with the theoretical predictions of the Galvin model. The comparison showed that the model predictions significantly underestimated P- and S-wave velocities as well as P-wave anisotropy in water-saturated conditions, but overestimated P-wave anisotropy in dry conditions. By analyzing the differences between the measured results and the theoretical predictions, we modified the Galvin model by adding the squirt flow mechanism to it, and used the Thomsen model to obtain the elastic moduli in the high- and low-frequency limits. The modified model predictions showed good fits with the measured results.
ARTICLE | doi:10.20944/preprints202306.0153.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: programming learning; Java; JUnit; code writing problem; code validation; test data generation
Online: 2 June 2023 (08:10:28 CEST)
To assist the Java programming learning of novice students, we have developed the web-based Java programming learning assistant system (JPLAS). JPLAS provides several types of exercise problems to cultivate code reading/writing skills at various levels. In JPLAS, the code writing problem (CWP) asks a student to write a source code that will pass the test code given in the assignment, where correctness is verified by running them on JUnit. In this paper, to reduce teachers' marking load, we present an answer code validation program that verifies all the source codes submitted by the students for each assignment at once and reports the number of passing tests for each source code in a CSV file. Besides, to test a source code with various input data, we implement a test data generation algorithm that identifies the data type, generates new data, and substitutes it for each test datum in the test code. Furthermore, to verify the correctness of the procedure implemented in the source code, we introduce intermediate-state testing in the test code. For evaluation, we applied the proposal to source codes and test codes from a Java programming course at Okayama University, Japan, and confirmed its validity and effectiveness.
ARTICLE | doi:10.20944/preprints202208.0504.v1
Subject: Medicine And Pharmacology, Pharmacology And Toxicology Keywords: Amphetamine-related drugs; Forensic Toxicology; blood; UPLC-qTOF-MS; MMSPE; Validation; SWGTOX
Online: 30 August 2022 (04:09:57 CEST)
Abuse of amphetamine-related drugs (ARDs) causes traffic accidents, violence, and overdoses. In forensic toxicology, analysis of ARDs in biological samples can help identify those driving or performing other tasks under the influence of drugs, clarify the cause of death, and identify recent drug users. In this study, we validated a pseudo-isocratic UPLC-qTOF-MS method, following mixed-mode cation exchange (MMSPE) extraction, for the analysis of ARDs in blood. The procedure requires 250 μL of blood to achieve limits of quantification (LOQ) and detection (LOD) of 20 ng/mL for all analytes. In aged animal blood samples, extraction recoveries of 63-90% and matrix effects of 9-21% were observed. Precision and accuracy for all analytes were within 20% and 89-118%, respectively. The analytical method was developed and validated in accordance with the Scientific Working Group for Forensic Toxicology (SWGTOX) standard and has acceptable accuracy and precision for use in doping control and forensic toxicology.
ARTICLE | doi:10.20944/preprints202208.0179.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: In-house validation study; reproducibility precision; measurement uncertainty; prediction interval; uncertainty interval
Online: 9 August 2022 (10:56:40 CEST)
Measurement uncertainty is typically expressed in terms of a symmetric interval y ± U, where y denotes the measurement result and U the expanded uncertainty. However, in the case of heteroscedasticity, symmetric uncertainty intervals can be misleading. In this paper, a different approach to the calculation of uncertainty intervals is introduced. This approach is applicable when a validation study has been conducted with samples of known concentrations. It will be shown how, under certain circumstances, asymmetric uncertainty intervals arise quite naturally and lead to more reliable uncertainty intervals.
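One way such asymmetric intervals can arise, sketched under the strong assumption (not necessarily the paper's model) that the reproducibility standard deviation grows linearly with concentration:

```python
def asymmetric_interval(y, a, b, k=2.0):
    """Uncertainty interval for measurement result y when the reproducibility
    standard deviation grows linearly with concentration: sd(x) = a + b*x.
    The limits solve x + k*sd(x) = y (lower) and x - k*sd(x) = y (upper),
    which gives an interval wider on the high side whenever b > 0."""
    lower = (y - k * a) / (1 + k * b)
    upper = (y + k * a) / (1 - k * b)
    return lower, upper

# Invented calibration parameters from a hypothetical validation study:
lo, hi = asymmetric_interval(y=10.0, a=0.1, b=0.05, k=2.0)
print(round(10.0 - lo, 3), round(hi - 10.0, 3))  # lower arm shorter than upper arm
```

With b = 0 (homoscedastic case) the formula collapses back to the familiar symmetric y ± k·a interval.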
ARTICLE | doi:10.20944/preprints202208.0059.v1
Subject: Biology And Life Sciences, Immunology And Microbiology Keywords: Staphylococcus epidermidis; metabolic network validation; minimal cut sets; knock-outs; systems biology
Online: 2 August 2022 (09:33:09 CEST)
Systems biology is gaining relevance in basic and applied research. The combination of computational biology with wet-lab work produces a synergy that results in an exponential increase in knowledge of biological systems. Studying microorganisms such as Staphylococcus epidermidis RP62A enables researchers to better understand its metabolic network, which in turn allows the design of effective strategies to treat infections caused by this species or others. S. epidermidis is the second most common cause of infection in patients with joint implants, so controlling its proliferation is vital for public health. There are different approaches to the analysis of metabolic networks. Flux Balance Analysis (FBA) is one of the most widespread: it allows the study of large metabolic networks and their structural properties, the optimization of metabolic flux, and the search for intervention strategies to modify the state of the network. This work presents the validation of the Staphylococcus epidermidis RP62A metabolic network model elaborated by Díaz-Calvo et al. We then analyze and classify the network's essential reactions. Finally, we introduce some proposals to intervene in the network and design knock-outs.
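FBA reduces to a linear program: maximize an objective flux (typically biomass) subject to the steady-state constraint S·v = 0 and flux bounds. A toy sketch with a made-up three-reaction network (not the published RP62A model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA sketch (hypothetical network, not the RP62A model): maximize the
# flux through a "biomass" reaction subject to steady state (S @ v = 0)
# and flux bounds. Reactions: R1 uptake -> A, R2: A -> B, R3: B -> biomass.
S = np.array([[1, -1,  0],   # metabolite A balance
              [0,  1, -1]])  # metabolite B balance
c = [0, 0, -1]               # linprog minimizes, so negate the biomass flux
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake limited to 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal fluxes; biomass flux equals the uptake limit, 10
```

Knock-outs are simulated in the same framework by forcing a reaction's bounds to (0, 0) and re-solving.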
ARTICLE | doi:10.20944/preprints202202.0164.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: rare variants; genome-wide association study; validation test; SNP chip; genomic selection
Online: 11 February 2022 (15:59:26 CET)
The experiments described in this research article were designed to test the effect of rare variants on genomic prediction in dairy cattle. Common polymorphisms can explain only a small proportion of the underlying genetic variation of complex phenotypes. Variants representing functional mutations with large effects on complex phenotypes are expected to be rare due to natural (humans) or artificial (livestock) selection pressure. It is therefore important to check whether the use of rare variants could increase the accuracy of ranking animals, by providing a tool for more precise differentiation among bulls with high additive genetic merit. The goal of our study was to verify whether including rare variants in a genomic selection model allows for a more accurate description of the additive genetic background of traits under selection in dairy cattle. We used a linear mixed model to compare SNP estimates for Holstein-Friesian cattle between two data sets: a set containing only single nucleotide polymorphisms with a minor allele frequency ≥ 0.01, which is routinely used in the Polish genomic evaluation system (46,216 SNPs), and a set containing SNPs selected based only on the call rate (54,378 SNPs). Based on the SNP estimates, we also calculated DGV and GEBV and compared them between the two data sets. In all analyses we used production, fertility, conformation, and udder health traits. We also assessed the time required for the two most computationally demanding components of genomic selection, preparing genotype data and estimating SNP effects, for the two data sets. The results indicated that the analysis including rare variants changed the individual ranking of the top 100 male and female candidates, but had no effect on the quality of EBV prediction as expressed by the Interbull validation test.
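The MAF ≥ 0.01 filter that defines the routine SNP set is a one-line computation on the genotype matrix. A minimal sketch (the genotype matrix below is hypothetical; rows are animals, entries are minor-allele counts 0/1/2):

```python
import numpy as np

# Sketch of the MAF >= 0.01 filtering step described above, on a
# hypothetical genotype matrix (rows = animals, entries = allele counts).
def minor_allele_freq(genotypes):
    p = genotypes.mean(axis=0) / 2.0       # allele frequency per SNP
    return np.minimum(p, 1.0 - p)          # fold to the minor allele

G = np.array([[0, 2, 1],
              [0, 2, 1],
              [0, 2, 2],
              [1, 2, 1]])
maf = minor_allele_freq(G)
common = maf >= 0.01                       # SNPs kept in the routine set
print(maf, common)
```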
ARTICLE | doi:10.20944/preprints202105.0333.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: genome editing; CRISPR; protoplast; targeted mutagenesis; TREX2; construct validation; transient expression
Online: 14 May 2021 (13:44:26 CEST)
Cas endonuclease-mediated genome editing provides a long-awaited molecular biological approach to the modification of predefined genomic target sequences in living organisms. Although cas9/guide (g)RNA constructs are straightforward to assemble and can be customized to target virtually any site in the plant genome, the implementation of this technology can be cumbersome, especially in species like Triticale that are difficult to transform, for which only limited genome information is available, and/or which carry comparatively large genomes. To cope with these challenges, we pre-validated cas9/gRNA constructs (1) by frameshift restitution of a reporter gene co-introduced by ballistic DNA transfer into barley epidermis cells, and (2) via transfection in Triticale protoplasts followed by either a T7E1-based cleavage assay or deep-sequencing of target-specific PCR amplicons. For exemplification, we addressed the Triticale ABA 8’-hydroxylase 1 gene, one of the putative determinants of pre-harvest sprouting of grains. We further show that indel induction frequency in Triticale can be increased by TREX2 nuclease activity, which holds true for both well- and poorly performing gRNAs. The presented results constitute a sound basis for the targeted induction of heritable modifications in Triticale genes.
ARTICLE | doi:10.20944/preprints202006.0333.v1
Subject: Medicine And Pharmacology, Oncology And Oncogenics Keywords: Lung Cancer Prediction; Neural Network; Cross-validation; Gradient Boosting Classifier; Automated tool
Online: 28 June 2020 (09:56:30 CEST)
Lung cancer, also known as lung carcinoma, is a malignant tumor characterized by uncontrolled cell growth in lung tissue. It is one of the most prominent causes of death worldwide. Early detection of this disease can help medical care units and physicians provide countermeasures to patients. The objective of this paper is to develop an automated tool that takes influential causes of lung cancer as input and detects patients with a high probability of being affected by the disease. A neural network classifier combined with cross-validation is proposed as the predictive tool. The proposed method is then compared with a baseline Gradient Boosting classifier to assess its predictive performance.
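The comparison described above, a cross-validated neural network against a gradient boosting baseline, can be sketched with scikit-learn. Synthetic data stands in for the paper's lung-cancer risk factors, and the model settings are illustrative, not the authors':

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the lung-cancer risk-factor features.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

nn = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))
gb = GradientBoostingClassifier(random_state=0)

# Cross-validated accuracy for the neural network and the baseline.
for name, clf in [("NN", nn), ("GB", gb)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```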
ARTICLE | doi:10.20944/preprints202002.0178.v1
Subject: Medicine And Pharmacology, Pharmacology And Toxicology Keywords: DILIrank; DILI; drug hepatotoxicity; QSAR; nested cross-validation; virtual screening; in silico
Online: 14 February 2020 (02:24:04 CET)
Drug-induced liver injury (DILI) remains one of the challenges in the safety profile of both authorized and candidate drugs. Predicting hepatotoxicity from the chemical structure of a substance remains a challenge worth pursuing, coherent with the current tendency to replace non-clinical tests with in vitro or in silico alternatives. In 2016, a group of researchers from the FDA published an improved annotated list of drugs with respect to their DILI risk, DILIrank, constituting “the largest reference drug list ranked by the risk for developing drug-induced liver injury in humans”. This paper is one of the few attempting to predict liver toxicity using the DILIrank dataset. Molecular descriptors were computed with the Dragon 7.0 software, and a variety of feature selection and machine learning algorithms were implemented in the R computing environment. Nested (double) cross-validation was used to externally validate the selected models. A total of 78 models with reasonable performance were selected and stacked through several approaches, including the building of multiple meta-models. The performance of the stacked models was slightly superior to that of other published models. The models were applied in a virtual screening exercise on over 100,000 compounds from the ZINC database, and about 20% of them were predicted to be non-hepatotoxic.
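Nested (double) cross-validation separates hyperparameter tuning (inner loop) from performance estimation (outer loop), so the reported score is externally validated. A minimal sketch in scikit-learn rather than the paper's R environment; synthetic data and an SVM stand in for the DILIrank descriptors and the authors' model ensemble:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the molecular-descriptor matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=1)

# Inner loop: model selection. Outer loop: unbiased model assessment.
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested CV accuracy: {outer_scores.mean():.3f}")
```

Because the outer folds never see the data used to pick hyperparameters, the outer score does not inherit the optimistic bias of ordinary cross-validation.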
ARTICLE | doi:10.20944/preprints202309.0289.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: soil moisture; remote sensing; SMAP; Sentinel-1; soil-water retention curve; validation; Thailand
Online: 6 September 2023 (03:46:43 CEST)
Soil moisture plays a crucial role in various hydrological processes and in the energy partitioning of the global surface. Soil Moisture Active Passive-Sentinel (SMAP-Sentinel) remote sensing has demonstrated great potential for monitoring soil moisture at scales greater than 1 km. This capability can be applied to improve weather forecast accuracy, enhance water management for agriculture, and support responses to climate-related disasters. Despite the technique's increasing use worldwide, its accuracy still requires field validation in specific regions such as Thailand. In this paper, we report on extensive in-situ monitoring of soil moisture (from the surface down to 1 m depth) at 10 stations across Thailand spanning the years 2021 to 2023. The aim was to validate the SMAP surface soil moisture (SSM) Level 2 product over a period of two years. Using a one-month averaging approach, the study revealed linear relationships between the two measurement types, with the coefficient of determination (R-squared) varying from 0.13 to 0.58. Notably, areas with more uniform land use and topography, such as croplands, tended to show a better coefficient of determination. We also conducted detailed soil core characterization, including soil-water retention curves, permeability, porosity, and other physical properties. These soil properties were then used to estimate the correlation constants between SMAP and in-situ soil moisture using multiple linear regression. The results demonstrated R-squared values between 0.847 and 0.933. An upscaling approach for SMAP was proposed, which showed promising results when using a 3-month average of all cropland measurements together. The findings also suggest that SMAP-Sentinel remote sensing exhibits significant potential for accurate soil moisture monitoring in diverse applications. Further validation efforts and research, particularly in terms of root-zone depths and area-based assessments, especially in the agricultural sector, can greatly improve the technology's effectiveness and usefulness in the region.
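The multiple-linear-regression step above, modeling in-situ soil moisture from SMAP values plus soil properties and scoring it with R-squared, can be sketched in a few lines. All values below are synthetic placeholders, not the study's measurements:

```python
import numpy as np

# Sketch of the regression described above: in-situ soil moisture modeled
# from SMAP retrievals plus a soil property. Synthetic data throughout.
rng = np.random.default_rng(0)
n = 60
smap = rng.uniform(0.1, 0.4, n)            # SMAP surface soil moisture
porosity = rng.uniform(0.35, 0.55, n)      # soil-core property predictor
insitu = 0.8 * smap + 0.3 * porosity + rng.normal(0, 0.01, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), smap, porosity])
beta, *_ = np.linalg.lstsq(X, insitu, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((insitu - pred) ** 2) / np.sum((insitu - insitu.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```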
ARTICLE | doi:10.20944/preprints202305.1872.v1
Subject: Public Health And Healthcare, Other Keywords: radiomics; reproducibility; repeatability; validation; lung cancer; head and neck cancer; CT imaging
Online: 26 May 2023 (07:15:55 CEST)
Radiomics involves the extraction of information from medical images that is not visible to the human eye. There is evidence that these features can be used for treatment stratification and outcome prediction. However, there is much discussion about the reproducibility of results between different studies. This paper studies the reproducibility of CT texture features used in radiomics, comparing two feature extraction implementations, namely a Matlab toolkit and Pyradiomics, applied to independent datasets of patient CT scans: i) the open-access RIDER dataset, containing repeat CT scans taken 15 minutes apart (RIDER Scan 1 and Scan 2, respectively) for 31 patients treated for lung cancer, and ii) the open-access HN1 dataset, containing 137 patients treated for head and neck cancer. The gross tumor volume (GTV), manually outlined by an experienced observer, was available for both datasets. 43 radiomics features common to Matlab and Pyradiomics were calculated using 2 intensity-level quantization methods, with and without an intensity threshold. Cases were ranked for each feature for all combinations of quantization parameters, and Spearman's rank coefficient, rs, was calculated. A feature was defined as reproducible when it correlated highly in the RIDER dataset and also correlated highly in the HN1 dataset, and vice versa. 29 of the 43 reported stable features were found to be highly reproducible between the Matlab and Pyradiomics implementations, with consistently high correlation in rank ordering for RIDER Scan 1 and RIDER Scan 2 (rs > 0.8). 18/43 reported features were common to the RIDER and HN1 datasets, suggesting they may be agnostic to disease site. Useful radiomics features should be selected based on reproducibility. This study identified a set of features that meet this requirement and validated the methodology for evaluating reproducibility between datasets.
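The rank-based stability check above (a feature counts as reproducible when case rankings from two implementations or repeat scans correlate with rs > 0.8) is a direct application of Spearman's coefficient. A minimal sketch with synthetic feature values standing in for the Matlab and Pyradiomics outputs:

```python
import numpy as np
from scipy.stats import spearmanr

# Sketch of the reproducibility criterion: rank-correlate one feature's
# values across the same 31 cases from two implementations. Values are
# synthetic, not the study's extracted features.
rng = np.random.default_rng(42)
feature_impl_a = rng.normal(size=31)                      # e.g. Matlab
feature_impl_b = feature_impl_a + rng.normal(0, 0.1, 31)  # e.g. Pyradiomics

rs, _ = spearmanr(feature_impl_a, feature_impl_b)
print(f"rs = {rs:.2f}, reproducible: {rs > 0.8}")
```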
REVIEW | doi:10.20944/preprints202210.0390.v1
Subject: Biology And Life Sciences, Immunology And Microbiology Keywords: aflatoxins; APEDA; EU-ML; FSSAI-ML; mycotoxins; ochratoxin A; patulin; regulation; method validation
Online: 25 October 2022 (12:30:35 CEST)
Mycotoxins are deleterious fungal secondary metabolites that contaminate food and feed, thereby creating food safety concerns. Common fungal genera proliferate easily under Indian tropical and sub-tropical conditions, and scientific attention is warranted to curb their growth. To address this, two nodal governmental agencies, the Agricultural and Processed Food Products Export Development Authority (APEDA) and the Food Safety and Standards Authority of India (FSSAI), have over the past two decades developed and implemented analytical methods and quality control procedures to monitor mycotoxin levels in a range of food matrices and assess risks to human health. However, comprehensive information on these advancements in mycotoxin testing, and on the issues in implementing the associated regulations, is inadequately covered in the recent literature. The aim of this review is thus to present a systematic picture of the role played by the FSSAI and APEDA in mycotoxin control at the domestic level and in the promotion of international trade, along with certain challenges in mycotoxin monitoring. Additionally, it discusses various regulatory concerns regarding mycotoxin mitigation in India. Overall, it provides valuable insights for the Indian farming community, food supply chain stakeholders, and researchers into India's progress in controlling mycotoxins throughout the food supply chain.
ARTICLE | doi:10.20944/preprints201912.0093.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: mixed linear model; genotyping-by-sequencing; functional validation; RT-qPCR; resistance genes; GWAS
Online: 7 December 2019 (12:41:39 CET)
Meloidogyne javanica, a root-knot nematode of soybean, is an important problem in soybean-growing areas, leading to severe yield losses. Some accessions have been identified carrying resistance loci to this nematode species. In this study, a set of 317 soybean accessions was characterized for resistance to M. javanica. A genome-wide association study (GWAS) was performed using SNPs from genotyping-by-sequencing (GBS), and a region of 29.2 Kbp on chromosome 13 was identified. Haplotype analysis showed that the SNPs were able to discriminate susceptible and resistant accessions, with 25 accessions sharing the resistance locus. Furthermore, 5 accessions may be new sources of M. javanica resistance. Screening of the SNPs in the USDA soybean germplasm showed that several accessions previously reported as resistant to other nematodes also carried the resistance haplotype on chromosome 13. High levels of concordance were observed between the phenotypes of Brazilian cultivars and the SNPs on chromosome 13. An in silico analysis of the mapped region of the soybean genome revealed the presence of 5 genes with structural similarity to major resistance genes. Expression levels of the candidate genes in the interval pointed to a potential pseudogene and two other gene models up-regulated in the resistance source after pathogen infection. The SNPs associated with the resistance region are an important tool for introgression of the resistance by marker-assisted selection in soybean breeding programs.
ARTICLE | doi:10.20944/preprints201906.0036.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: digital elevation models; multi-source fusion; multi-scale fusion; global evaluation; accuracy validation.
Online: 5 June 2019 (10:26:30 CEST)
The quality of digital elevation models (DEMs) is inevitably affected by the limitations of the imaging modes and the generation methods. One effective way to address this problem is to merge the available datasets through data fusion. In this paper, a fusion-based global DEM dataset (82°S-82°N) is introduced, which we refer to as GSDEM-30. This is a 30-m DEM reconstructed mainly from the unfilled SRTM1, AW3D30, and ASTER GDEM v2 datasets by combining multi-source and multi-scale fusion techniques. A comprehensive evaluation of the GSDEM-30 data, as well as of the 30-m ASTER GDEM v2 and AW3D30 DEM, is presented. Global ICESat GLAS data and the local National Elevation Dataset (NED) were used as references for the vertical accuracy validation, while GlobeLand30 was introduced for the landscape analysis. Furthermore, we employed the maximum-slope approach to detect potential artefacts in the DEMs. The results show that the GDEM data are seriously affected by noise and artefacts. Owing to the use of multiple datasets and refined post-processing, GSDEM-30 is contaminated by fewer anomalies than both ASTER GDEM and AW3D30. The fusion techniques used can also be applied to the reconstruction of other fused DEM datasets.
ARTICLE | doi:10.20944/preprints201905.0309.v1
Subject: Environmental And Earth Sciences, Oceanography Keywords: MODIS; oceanography; remote sensing; Saildrone; sea surface salinity; sea surface temperature; SMAP; validation
Online: 27 May 2019 (10:19:17 CEST)
Traditional ways of validating satellite-derived sea surface temperature (SST) and sea surface salinity (SSS) products, using comparisons with buoy measurements, do not allow for evaluating the impact of mesoscale to submesoscale variability. Here we present the validation of remotely sensed SST and SSS data against measurements from the unmanned surface vehicle (USV) Saildrone during the Spring 2018 Baja deployment. More specifically, biases and root mean square differences (RMSD) were calculated between USV-derived SST and SSS values and six satellite-derived SST (MUR, OSTIA, CMC, K10, REMSS, and DMI) and three SSS (JPLSMAP, RSS40, RSS70) products. Biases between the USV SST and OSTIA/CMC/DMI were approximately zero, while MUR showed a bias of 0.2 °C. OSTIA showed the smallest RMSD of 0.36 °C, while DMI had the largest RMSD of 0.5 °C. An RMSD of 0.4 °C between Saildrone SST and the satellite-derived products could be explained by the daily variability in USV SST, which currently cannot be resolved by remote sensing measurements. For SSS, values from the JPLSMAP product showed salty biases of 0.2 PSU, while RSS40 and RSS70 showed fresh biases of 0.3 PSU. An RMSD of 0.4 PSU could not be explained solely by the daily variability of the USV-derived SSS. Coherences were significant at longer wavelengths, with a local maximum at 100 km that is most likely associated with mesoscale turbulence in the California Current System.
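The bias and RMSD statistics used in validations like the one above are simple aggregates of collocated differences. A minimal sketch with synthetic stand-ins for the Saildrone and satellite SST values:

```python
import numpy as np

# Sketch of the bias / RMSD statistics described above. The SST values
# below are synthetic placeholders, not the deployment's data.
def bias(satellite, usv):
    return np.mean(satellite - usv)

def rmsd(satellite, usv):
    return np.sqrt(np.mean((satellite - usv) ** 2))

usv_sst = np.array([15.1, 15.4, 16.0, 16.2, 15.8])
sat_sst = np.array([15.3, 15.5, 16.3, 16.1, 16.0])
print(f"bias = {bias(sat_sst, usv_sst):.2f} C, "
      f"RMSD = {rmsd(sat_sst, usv_sst):.2f} C")
```

Note that bias and RMSD answer different questions: a product can have near-zero bias (errors cancel) while still having a large RMSD (errors are large but balanced).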
ARTICLE | doi:10.20944/preprints202305.0987.v1
Subject: Biology And Life Sciences, Life Sciences Keywords: Maximum Entropy; species distribution model; ground validation; endemic; tree-frog; Protected Area; Biodiversity Hotspot
Online: 15 May 2023 (07:19:47 CEST)
Conservation of tropical endemic amphibians largely suffers from the Wallacean shortfall, a gap that predictive species distribution models have contributed significantly to bridging by delineating the probable distribution and underlying suitable habitat within a species' range. However, such prediction models are rarely ground-truthed to evaluate their predictive performance. Here we present a species distribution modeling approach using a maximum entropy algorithm corrected for small sample size to guide explorative surveys aimed at optimizing survey effort and discovering unrecorded populations of Zhangixalus suffry, a rhacophorid tree-frog endemic to the northeastern part of the Indian subcontinent, along with the factors limiting its distribution. With only 16 established historical locality records to model from (after spatial thinning to reduce autocorrelation) and a set of environmental predictors (climatic, topographic, and landscape composition), our model prediction enabled the successful discovery of seven new population records from unreported landscapes, extending the species' southernmost distributional limit over a considerable distance. The final composite distribution model combining all locality records (n=23) predicted similar core areas of suitable habitat, consistent with the known geographic distribution of the species, but showed poor representation under the existing Protected Area (PA) network in the region, with only 7% of suitable habitat under protection. Habitat suitability of a site was significantly governed by factors related to precipitation (precipitation seasonality and precipitation of the warmest quarter) and topographic factors that can influence it (elevation and aspect). This corroborates the known ecology of rhacophorid frogs, especially concerning their seasonal explosive reproductive strategy and foam nest-building behavior. Through this study, we propose explorative surveys guided by species distribution models to expedite the discovery of unknown populations of rare, tropical endemic amphibians, and the use of such taxa as surrogates in identifying conservation priority zones that can be directly applied to reserve design and to conservation and management planning.
COMMUNICATION | doi:10.20944/preprints202112.0420.v3
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Non-targeted methods; method validation; food fraud; food authenticity; mass spectrometry; spectroscopy; NGS; NMR
Online: 23 May 2022 (11:10:00 CEST)
As their suggestive name implies, non-targeted methods (NTMs) do not aim at a predefined "needle in the haystack"; instead, they exploit all the constituents of the haystack. This form of analytical method is increasingly finding applications in food and feed testing. However, the concepts, terms, and considerations related to this burgeoning field of analytical testing need to be propagated for the benefit of those involved in academic research, commercial development, and official control. This paper addresses frequently asked questions around the notations and terminologies surrounding NTMs. The widespread development and adoption of these methods also necessitates approaches to NTM validation, i.e., evaluating the performance characteristics of a method to determine whether it is fit for purpose. This work aims to provide a roadmap for approaching NTM validation. In doing so, the paper deliberates on the different considerations that influence the approach to validation and provides suggestions thereof.
ARTICLE | doi:10.20944/preprints202101.0545.v3
Subject: Engineering, Energy And Fuel Technology Keywords: Solar radiation; Satellite-derived irradiance; Global Horizontal Irradiance; Clear sky model; ground stations; validation
Online: 18 March 2021 (14:30:42 CET)
Access to reliable, clean, modern cooking enhances life chances. One option is photovoltaic cooking systems. Accurate solar data are needed to ascertain to what extent these can satisfy the needs of local people. This paper investigates how to choose the most accurate satellite-derived solar irradiance database for use in Africa. This is necessary because of a general shortage of ground measurements for Africa. The solar data are needed to model the output of solar cooking systems, for instance a solar panel, battery, and electric pressure cooker. Four easily accessible satellite databases are validated against ground measurements using a range of statistical tests. The results demonstrate the impact of the mathematical measure used and the phenomenon of balancing errors. Fitting the satellite model to the appropriate climate zone and/or nearby measurements improves accuracy, as does higher spatial and temporal resolution of the input parameters. That said, all four databases reviewed were found to be suitable for simulating PV yield in East Africa.
ARTICLE | doi:10.20944/preprints202102.0600.v1
Subject: Engineering, Automotive Engineering Keywords: Tunnel boring machine (TBM); correlation models; mechanical and operational parameters; performance prediction; model validation
Online: 26 February 2021 (09:31:43 CET)
The study takes into account different classes of tunnel boring machines (TBMs), with the aim of identifying correlation models which are meant to estimate, at a preliminary design phase, the construction time of a tunnel and to evaluate the mechanical and operational parameters of the TBMs, starting from the knowledge of the tunnel length and/or the excavation diameter. To achieve this goal, first of all a database was created, thanks to the collection of the most meaningful technical parameters from a large number of tunnels; afterward, it was statistically analysed through Microsoft Excel. In a first phase, forecasting models were identified for the three types of machines investigated, separately for compact rocks (open TBM) and fractured rocks (single and double shield TBM). Then, the mechanical parameters collected through the database were analysed, with the aim of obtaining models that take into account, in addition to the type of TBM, the geological aspect, and the type of rock characterising the rock mass. Finally, the validation of the study was proposed in a real case, represented by the Moncenisio base tunnel, a work included in the new Turin–Lyon connection line. The estimated values were compared with the real ones, in order to verify the accuracy of the experimental models identified.
ARTICLE | doi:10.20944/preprints202008.0693.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: Industry 4.0; Product Data Management; Product Life Cycle Management; Concurrent Engineering; Validation of Design
Online: 31 August 2020 (04:17:05 CEST)
All departments in a business work separately, but toward the same purpose. In this article, a system has been developed that gives not only the mechanical design department but also the manufacturing, storage, process planning, quality control, electrical design, and purchasing departments, among others, access to the required information. Initially, current manufacturing result information was collected from the project participants. Secondly, a workflow was designed based on the current data flow. All project stakeholders were introduced to the product data management system and invited to use it. The losses of time, scrap, and engineering effort that arise in the absence of such a system were investigated. The system allowed the company owners to be sure that no faulty design revision would be produced once it was in operation. In addition, automation of bill-of-materials generation provided the purchasing department with correct and up-to-date information about outsourced parts. Allowing different engineering disciplines to work together provided a more suitable environment; gradually, these conditions allowed all departments to work faster and bring new products to market much faster than before. Workflow tracing for management purposes is also handled by the system. A 'Validation of Design' process was modelled for the company.
ARTICLE | doi:10.20944/preprints202006.0360.v1
Subject: Business, Economics And Management, Business And Management Keywords: Term deposit subscription; 10-fold stratified cross-validation; Neural network; DT; MLP; k-NN
Online: 30 June 2020 (08:22:58 CEST)
Term deposits can accelerate the finance sector by maximizing profit from both the bank's and the customer's perspective. This paper focuses on the likelihood of customers subscribing to a term deposit. Bank campaign efforts and customer details are influential when considering the possibility of a term deposit subscription. An automated system is provided that predicts term deposit investment possibilities in advance. A neural network (NN) with stratified 10-fold cross-validation is proposed as the predictive model and is compared with benchmark classifiers: k-Nearest Neighbors (k-NN), a Decision Tree classifier (DT), and a Multi-Layer Perceptron classifier (MLP). The experimental study concluded that the proposed model provides significantly better predictions than the baseline models, with an accuracy of 88.32% and a mean squared error (MSE) of 0.1168.
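Stratified 10-fold cross-validation, as used above, preserves the class ratio in every fold, which matters because term-deposit subscribers are typically a minority class. A scikit-learn sketch with synthetic imbalanced data standing in for the bank-marketing features, and illustrative (not the authors') network settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic imbalanced stand-in for the bank-marketing dataset
# (~20% positive class, mimicking minority subscribers).
X, y = make_classification(n_samples=500, n_features=12, weights=[0.8, 0.2],
                           random_state=0)

# Stratified 10-fold CV keeps the class ratio constant across folds.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(MLPClassifier(max_iter=800, random_state=0),
                         X, y, cv=cv)
print(f"mean accuracy = {scores.mean():.4f}")
```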