TECHNICAL NOTE | doi:10.20944/preprints202206.0097.v1
Online: 7 June 2022 (08:12:53 CEST)
InSAR and associated analytic methods can measure surface deformation from low Earth orbit with a claimed accuracy of centimeters to millimeters. The realized accuracy depends on the area being measured and on the choice of analytic method, suggesting one choose a method in response to the area being measured. Here we consider a specific fixed analytic method and compare the results it produces to measurements gathered by other means in a variety of settings. In particular, we compare Sentinel-1 InSAR with GPS at the Kilauea volcano around the 2018 eruption, with GPS in the city of Arica, Chile, and with public survey data at a decommissioned tailings mine. In addition, we compare two independent Sentinel-1 InSAR analyses for a railway station in Oslo, Norway. Our goal is to estimate the accuracy of a fully automated Sentinel-1 InSAR pipeline in various settings. We conclude that centimeter-level accuracy is a reasonable claim in many, but not all, settings, and that accuracy is typically not lost by using an automated pipeline instead of hand-selecting and tuning parameters.
ARTICLE | doi:10.20944/preprints201711.0141.v1
Subject: Behavioral Sciences, Other Keywords: organizational culture; mission; consistency; involvement; adaptability
Online: 22 November 2017 (04:19:38 CET)
The main goal of this paper is to address how quickly, and to what extent, the international organizational cultures brought in by global companies after privatization are being assimilated into a single monolithic culture. For this purpose, the Denison model of organizational culture was adopted and applied; it was chosen because it emphasizes the balance between an organization's demand for stability and its need for flexibility. Considering that a different organizational culture reflects systematic change across an entire organization, this paper focuses on exploring the differences in culture dimensions between domestically owned and foreign-owned companies in Serbia. A sample of 1000 employees was statistically processed. Changes in organizational culture tend to be relatively slow. The results confirm that organizational culture, a complex working environment grounded in organizational values, is a fundamental element of organizations. Given that the changes in company ownership occurred fifteen years prior to the research, the obtained results show the effects of the interaction between national and organizational culture over this relatively short period. The results can be generalized to countries that are undergoing, or have recently completed, a transition and that share similar cultural characteristics.
ARTICLE | doi:10.20944/preprints202106.0185.v1
Subject: Life Sciences, Biochemistry Keywords: pituitary adenoma; consistency; magnetic resonance imaging; pharmacokinetic analysis; collagen.
Online: 7 June 2021 (13:25:15 CEST)
Prediction of tumor consistency is valuable for planning transsphenoidal surgery for pituitary adenoma. A prospective study was conducted involving 49 participants with pituitary adenoma to determine whether quantitative pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is useful for predicting the consistency of adenomas. Pharmacokinetic parameters in the adenomas were obtained, including the volume of extravascular extracellular space (EES) per unit volume of tissue (ve), the blood plasma volume per unit volume of tissue (vp), the volume transfer constant between blood plasma and EES (Ktrans), and the rate constant between EES and blood plasma (kep). The pharmacokinetic parameters and the histologic percentage of collagen content (PCC) were compared between soft and hard adenomas using the Mann–Whitney U test. Pearson's correlation coefficient was used to correlate the pharmacokinetic parameters with PCC. Hard adenomas showed significantly higher PCC (44.08 ± 15.14% vs. 6.62 ± 3.47%, p < 0.01), ve (0.332 ± 0.124 vs. 0.221 ± 0.104, p = 0.02), and Ktrans (0.775 ± 0.401/min vs. 0.601 ± 0.612/min, p = 0.02) than soft adenomas. Moreover, a significant positive correlation was found between ve and PCC (r = 0.601, p < 0.01). The ve derived using DCE-MRI may have predictive value for the consistency of pituitary adenoma.
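The key statistic in this abstract, Pearson's correlation between ve and collagen content, is simple to reproduce. The sketch below uses hypothetical ve and PCC values, not the study's measurements; it only illustrates the computation behind a reported r.

```python
# Illustrative sketch: Pearson's r between ve and percentage collagen
# content (PCC). The data are hypothetical, not the study's measurements.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

ve = [0.20, 0.25, 0.30, 0.35, 0.40]   # hypothetical ve values
pcc = [5.0, 12.0, 25.0, 38.0, 45.0]   # hypothetical collagen %
r = pearson_r(ve, pcc)
```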
ARTICLE | doi:10.20944/preprints201709.0131.v1
Subject: Earth Sciences, Environmental Sciences Keywords: terrestrial LiDAR; TLS; LAI; LAD; element size; bias; consistency; efficiency
Online: 26 September 2017 (15:42:47 CEST)
Terrestrial LiDAR is becoming increasingly popular for estimating leaf and plant area density. Voxel-based approaches account for vegetation heterogeneity, and significant work has been done in this recent research field, but no general theoretical analysis is available. Although estimators have been proposed and several causes of bias have been identified, their consistency and efficiency have not been evaluated, and confidence intervals are almost never provided. In the present paper, we solve the transmittance equation and use Maximum Likelihood Estimation (MLE) to derive unbiased estimators and confidence intervals for the attenuation coefficient, which is proportional to leaf area density. The new estimators and confidence intervals are defined at the voxel scale and account for the number of beams crossing the voxel, the inequality of path lengths within the voxel, the size of vegetation elements, and the variability of element positions between vegetation samples. They are complemented by numerous numerical simulations evaluating estimator consistency and efficiency and assessing the coverage probabilities of the confidence intervals.
• Although commonly used when the beam number is low, the usual estimators are strongly biased, and the 95% confidence intervals can be ≈ ±100% of the estimate.
• Our unbiased estimators are consistent over a wider range of validity than the usual ones, especially the unbiased MLE, which is consistent when the beam number is as low as 5. The unbiased MLE is efficient, meaning it reaches the lowest residual errors that can be expected for an unbiased estimator, and it does not require any bias correction when path lengths are unequal.
• When elements are small (or the voxel is large), 10³ beams entering the voxel lead to confidence intervals of ≈ ±10%; but when elements are larger (or the voxel smaller), intervals can remain wider than ±50%, even for a large beam number. This is explained by the variability of element positions between vegetation samples, and it shows that a significant part of the residual error can be attributed to random effects.
• Confidence intervals are much smaller (±5 to 10%) when LAD estimates are averaged over several small voxels, typically within a horizontal layer or the crown of an individual plant. In this context, our unbiased estimators halve the radius of the confidence intervals compared with the usual estimators.
Our study provides new ready-to-use estimators and confidence intervals for attenuation coefficients that are consistent and efficient within a fairly large range of parameter values. Consistency is achieved at a low beam number, which is promising for application to airborne LiDAR data, and the results help raise the level of understanding of, and confidence in, LAD estimation. Among other applications, these tools should help determine the most suitable voxel size for given vegetation types and scanning densities, whereas existing guidelines are highly variable among studies, probably because of differences in vegetation, scanning design, and estimators.
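The "usual" transmittance-based estimator the abstract contrasts with can be sketched in a few lines. This is a minimal illustration assuming equal path lengths and made-up beam counts, not the paper's unbiased MLE:

```python
# Minimal sketch of the usual voxel-scale attenuation estimator from
# beam transmittance, assuming equal path lengths (not the paper's
# unbiased MLE): lambda_hat = -ln(n_through / n_enter) / path_length.
import math

def attenuation_estimate(n_enter, n_through, path_length):
    """Transmittance-based estimate of the attenuation coefficient,
    which is proportional to leaf area density."""
    if n_enter == 0 or n_through == 0:
        raise ValueError("estimator undefined when no beam passes")
    transmittance = n_through / n_enter
    return -math.log(transmittance) / path_length

# 1000 beams enter a 0.5 m voxel; 600 exit un-intercepted.
lam = attenuation_estimate(1000, 600, 0.5)
```

With few beams, the ratio n_through/n_enter is a noisy transmittance estimate, which is exactly where the abstract reports the strong bias of this estimator.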
ARTICLE | doi:10.20944/preprints202105.0117.v2
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: decision trees; deep feed-forward network; neural trees; consistency; optimal rate of convergence
Online: 9 November 2021 (16:54:30 CET)
Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully-connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, intended to be much faster than deep feed-forward networks and to make it unnecessary to specify the number of hidden units in the hidden layers in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust. Additionally, the proposed NSNT model obtains a fast rate of convergence that is near-optimal up to a logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification datasets and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
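The key idea, "simulate the decision tree with a neural network", can be illustrated at its smallest scale: a single split x ≤ t becomes a steep-sigmoid neuron. The threshold, leaf values, and gain below are illustrative assumptions, not the paper's construction:

```python
# Hedged sketch: a decision stump (one split) reproduced by a single
# sigmoid neuron with large gain k. As k grows, the sigmoid approaches
# the step indicator of the split. All parameter values are illustrative.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def stump(x, t=0.5, left=1.0, right=0.0):
    """Decision stump: return `left` if x <= t else `right`."""
    return left if x <= t else right

def neural_stump(x, t=0.5, left=1.0, right=0.0, k=200.0):
    """Same stump, smoothed into a one-neuron network with gain k."""
    gate = sigmoid(k * (t - x))          # ~1 when x <= t, ~0 otherwise
    return left * gate + right * (1.0 - gate)

# Away from the threshold, the tree and its neural surrogate agree.
a = abs(stump(0.2) - neural_stump(0.2))
b = abs(stump(0.8) - neural_stump(0.8))
```

A full tree stacks such gating neurons, one per internal split, with leaf values combined in the output layer; the tree structure thus fixes the network's width, which is why the number of hidden units need not be chosen in advance.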
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: non-steady partial differential equation; higher order finite difference scheme; axial diffusion; convergence; consistency; stability
Online: 28 January 2020 (09:14:07 CET)
In the present study, a mathematical model based on a non-steady partial differential equation describing oxygen mass transport in the human pulmonary circulation is proposed. Mathematical modelling of this kind of problem leads to a non-steady partial differential equation, and finite differences are used for its numerical simulation. The aim is a rigorous numerical analysis of the scheme, in which consistency, stability, and convergence are established. The motivation is to raise the order of the numerical solution to a higher-order scheme: increasing the order makes the simulation more accurate, but it also makes it more complicated, and the numerical analysis at this higher order requires further research.
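The finite-difference approach described here can be sketched generically. The example below is not the paper's scheme; it is a standard explicit (FTCS) discretization of a 1-D diffusion equation u_t = D·u_xx, whose stability constraint r = D·Δt/Δx² ≤ 1/2 is the kind of condition such an analysis establishes. All parameters are illustrative:

```python
# Generic sketch of a finite-difference scheme and its stability
# condition (not the paper's model): explicit FTCS for u_t = D * u_xx,
# stable when r = D * dt / dx**2 <= 1/2. Parameters are illustrative.
def ftcs_step(u, r):
    """One explicit time step; Dirichlet boundary values held fixed."""
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# Diffuse an initial spike; with r = 0.25 the scheme is stable and the
# solution decays smoothly toward the zero boundary values.
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(50):
    u = ftcs_step(u, 0.25)
```

Raising the order of such a scheme (e.g. a wider stencil for u_xx) improves accuracy but, as the abstract notes, complicates the consistency and stability analysis.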
ARTICLE | doi:10.20944/preprints202111.0306.v1
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: COVID-19 Pandemic; Dietary patterns; BMI; Nutrition; Vitamins; Healthy Food; Dietary Supplements; Factor analyses; Internal Consistency; weight gain
Online: 17 November 2021 (12:09:58 CET)
Since its emergence in 2019, COVID-19 has been associated with significant changes in lifestyle-related behavior, including physical activity, diet, and sleep, which are vital to maintaining our well-being. This study measures lifestyle-related behavior during the COVID-19 pandemic lockdown using a 21-item questionnaire. Responses were collected from March 2021 to September 2021. Four hundred sixty-seven participants were engaged in assessing the changes caused by the pandemic and their effect on BMI. The validity and reliability of the questionnaire were tested on 71 participants; Cronbach's alpha values all exceeded 0.7, indicating good internal consistency. The effect of each question regarding physical activity and dietary habits on the BMI difference was studied using ANOVA. The study shows that more than half of the participants reported snacking more between meals and increasing their sitting and screen time, while 74% felt more stressed and anxious. These changes were associated with the increase in BMI among individuals under lockdown. In contrast, 62% of the participants showed more awareness of their health by increasing their intake of immunity-boosting foods, and 56% reported increased consumption of nutrition supplements. Female and married participants tended to be healthier, and their BMI remained more stable than that of other groups.
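Cronbach's alpha, the internal-consistency statistic the study reports (values above 0.7 taken as acceptable), is straightforward to compute. The item scores below are hypothetical Likert responses, not the study's data:

```python
# Cronbach's alpha from per-item score lists. The 5-point Likert
# responses below are hypothetical, not the study's data.
def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three hypothetical items answered by five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
```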
ARTICLE | doi:10.20944/preprints201612.0002.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: change point; estimation; consistency; panel data; short panels; boundary issue; structural change; bootstrap; non-life insurance; change in claim amounts
Online: 1 December 2016 (10:02:03 CET)
The panel data of interest consist of a moderate number of panels, each containing a small number of observations. An estimator of common breaks in panel means without a boundary issue is proposed for this scenario. In particular, the novel estimator is able to detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the elaborated change point estimator is that it returns the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established. The results are illustrated through a simulation study. As a by-product of the developed estimation technique, a theoretical utilization for correlation structure estimation, hypothesis testing, and bootstrapping in panel data is demonstrated. A practical application to non-life insurance is presented as well.
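The flavor of common-break estimation in panel means can be sketched with a simple argmax over candidate break times. This is a generic illustration, not the paper's boundary-issue-free estimator; it scores the squared gap between pre- and post-break panel means, summed over panels, and defaults to the last observation when no candidate improves on a zero score:

```python
# Generic sketch (not the paper's estimator) of common-break detection
# in panel means: score each candidate time by the squared pre/post
# mean gap summed over panels; with no break, return the last index.
def estimate_break(panels):
    T = len(panels[0])
    best_tau, best_score = T - 1, 0.0    # default: last observation
    for tau in range(1, T - 1):
        score = 0.0
        for p in panels:
            pre = sum(p[:tau]) / tau
            post = sum(p[tau:]) / (T - tau)
            score += (pre - post) ** 2
        if score > best_score:
            best_score, best_tau = score, tau
    return best_tau

# Ten short panels with a common mean shift after time 4 (0-indexed),
# and one constant panel with no break.
panels = [[0.0] * 4 + [1.0] * 6 for _ in range(10)]
tau_hat = estimate_break(panels)
no_break = estimate_break([[1.0] * 10])
```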
ARTICLE | doi:10.20944/preprints202109.0034.v3
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Artificial intelligence; CMAPSS; consistency and local accuracy; CUSUM chart; deep learning; prognostic and health management; RMSE; sensing and data extraction; SHAP; Uncertainty; XAI
Online: 12 January 2022 (10:22:47 CET)
Mistrust, amplified by numerous artificial intelligence (AI) related incidents, has caused the energy and industrial sectors to be amongst the slowest adopters of AI methods. Central to this issue is the black-box problem of AI, which impedes investment and is fast becoming a legal hazard for users. Explainable AI (XAI) is a recent paradigm to tackle this challenge. Being the backbone of the industry, the prognostic and health management (PHM) domain has recently been introduced to XAI. However, many deficiencies, particularly the lack of explanation assessment methods and uncertainty quantification, plague this young field. In this paper, we develop a framework for explainable anomaly detection and failure prognostics, employing a Bayesian deep learning model to generate local and global explanations from the PHM tasks. An uncertainty measure of the Bayesian model is used as a marker for anomalies, expanding the prognostic explanation scope to include the model's confidence. Also, the global explanation is used to improve prognostic performance, an aspect neglected in the handful of PHM-XAI publications. The quality of the explanation is finally examined using local accuracy and consistency properties. The method is tested on real-world gas turbine anomalies and synthetic turbofan failure-prediction data. Seven of the eight tested anomalies were successfully identified. Additionally, the prognostic outcome showed a 19% improvement in statistical terms and achieved the highest prognostic score amongst the best published results on the topic.
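The keywords mention a CUSUM chart, a natural way to flag anomalies from an uncertainty signal like the one described above. The sketch below is a standard one-sided CUSUM, not the paper's implementation; the target mean, slack k, and threshold h are assumptions:

```python
# Standard one-sided (upper) CUSUM chart applied to a hypothetical
# model-uncertainty signal. Target, slack k, and threshold h are
# illustrative assumptions, not the paper's settings.
def cusum_alarms(signal, target, k=0.5, h=4.0):
    """Return indices where the upper CUSUM statistic exceeds h."""
    s, alarms = 0.0, []
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target - k))
        if s > h:
            alarms.append(i)
    return alarms

# Uncertainty hovers near 1.0, then jumps when the model is "unsure".
signal = [1.0] * 10 + [3.0] * 5
alarms = cusum_alarms(signal, target=1.0)
```

The cumulative statistic trades detection delay for robustness: a single noisy spike rarely crosses h, but a sustained rise in uncertainty does, which matches the anomaly-marker role described in the abstract.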
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: geographic information fusion; data quality; data consistency checking; historic GIS; railway network; patrimonial data; crowdsourcing open data; volunteer geographic information VGI; wikipedia geo-spatial information extraction.
Online: 17 August 2020 (14:51:04 CEST)
Transportation of goods is as old as human civilization: past networks and their evolution shed light on long-term trends. Transportation has a major measured impact on climate change, as well as on the spread of pandemics. These two reasons motivate the importance of providing relevant and reliable historical geographic datasets of these networks. This paper focuses on reconstructing the railway network in France at its maximal extent, a century ago. The active stations and lines are well documented by the French SNCF in open public data. However, that information ignores past stations (pre-1980), which probably represent more stations than are recorded in the public data. Additional open data, individual or collaborative (e.g., Wikipedia), are particularly valuable, but they are not always geo-coded, and two more sources are necessary to complete that geo-coding: ancient maps and aerial photography. Therefore, remote sensing and volunteered geographic information are the two pillars of past railway reconstruction. The methods developed are adapted to the extraction of information from these sources: automated parsing of Wikipedia infoboxes, and data extraction from simple tables or even plain text. This series of sparse procedures can be merged into a comprehensive computer-assisted process. Beyond this, a large quality-control effort is necessary when merging these data: automated wherever possible, and otherwise visually checked against remote sensing imagery. The main output is a reliable dataset, under ODbL, of more than 9100 stations, which can be combined with information about the 35000 communes of France for a large variety of studies. This work demonstrates two theses: (a) it is possible to reconstruct transport network data from the past, and generic computer-assisted methods can be developed; (b) the value of remote sensing and volunteered geographic information is considerable (as archaeologists already know).
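One step described above, automated parsing of Wikipedia infoboxes, can be sketched for a simplified case. Real infobox wikitext is far messier (templates, units, nested markup), and the field names and station below are only illustrative:

```python
# Hedged sketch: extracting a station name and coordinates from a
# simplified Wikipedia infobox. Real wikitext is messier; the field
# names and values here are illustrative.
import re

INFOBOX = """{{Infobox gare
| nom = Gare de Montsoult - Maffliers
| latitude = 49.0706
| longitude = 2.3131
}}"""

def parse_infobox(text):
    """Collect `| key = value` lines into a small geo-coded record."""
    fields = dict(re.findall(r"\|\s*(\w+)\s*=\s*(.+)", text))
    return {
        "name": fields.get("nom"),
        "lat": float(fields["latitude"]),
        "lon": float(fields["longitude"]),
    }

station = parse_infobox(INFOBOX)
```

In practice, records parsed this way still need the quality control the abstract emphasizes, e.g. cross-checking the extracted coordinates against ancient maps or aerial photography before merging them into the dataset.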
ARTICLE | doi:10.20944/preprints201709.0170.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: business sustainability; research and development (R&D); multiple criteria decision-making (MCDM); financial objective; variable-consistency dominance-based rough set approach (VC-DRSA); internetwork relationship map (INRM); directional flow graph (DFG)
Online: 30 September 2017 (06:19:43 CEST)
The influence and importance of research and development (R&D) for business sustainability have gained increasing interest, especially in the high-tech sector. However, R&D efforts may have complex and mixed impacts on financial results, considering the associated expenses. Thus, this study aims to examine how R&D efforts may influence a business's financial performance with respect to dual objectives: gross and net profitability. This research integrated a rough-set-based soft computing technique and multiple criteria decision-making (MCDM) methods to explore this complex yet valuable issue. A group of publicly listed companies from Taiwan, all in the semiconductor sector, was analyzed as a case study. Initially, more than 30 variables were considered, and the adopted soft computing technique retrieved 14 core attributes for the dual profitability objectives to form the evaluation model. The importance of R&D for pursuing superior financial prospects is confirmed, and the empirical case demonstrates how this hybrid approach can guide an individual company in planning improvements to achieve long-term sustainability.