ARTICLE | doi:10.20944/preprints202311.1742.v1
Subject: Biology And Life Sciences, Neuroscience And Neurology Keywords: synchronization; modal completion; dACC; consistency
Online: 28 November 2023 (10:16:41 CET)
We addressed whether arbitrary top-down regulation based on self-referential thinking can be valid for sensorimotor synchronization. We hypothesized that the dorsal anterior cingulate cortex (dACC) regulates sensorimotor synchronization in accordance with internal models generated by self-referential processing. To test this hypothesis, we conducted a missing-oddball task with music students. Using the event-related deep-brain activity (ER-DBA) method together with the event-related potential (ERP) method, we found that dACC activation promotes repetitive tapping supported by modal completion, whereas dACC deactivation promotes precise tapping while avoiding erroneous responses to the missing pulses. Endogenous ERP components, including P150 as a marker of modal completion and N200 as a marker of response inhibition, supported this claim. Furthermore, combining the ER-DBA and ERP results suggested that the brain creates a coherent story to promote repetitive tapping by mediating sensory evidence via modal completion. Free-energy measurements theoretically support these findings, confirming the validity of our hypothesis.
TECHNICAL NOTE | doi:10.20944/preprints202206.0097.v1
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: InSAR; deformation; validation; GPS; consistency
Online: 7 June 2022 (08:12:53 CEST)
InSAR and associated analytic methods can measure surface deformation from low earth orbit with a claimed accuracy of centimeters to millimeters. The realized accuracy depends on the area being measured and on the choice of analytic method, suggesting one choose a method in response to the area being measured. Here we consider a specific fixed analytic method and compare the results it produces to measurements gathered by other means in a variety of settings. In particular we compare Sentinel-1 InSAR with GPS at the Kilauea volcano around the 2018 eruption, with GPS in the city of Arica, Chile, and with public survey data at a decommissioned tailings mine. In addition, we compare two independent Sentinel-1 InSAR analyses for a railway station in Oslo, Norway. Our goal is to estimate the accuracy of a fully automated Sentinel-1 InSAR pipeline in various settings. Our conclusions are that centimeter-level accuracy is a reasonable claim in many, but not all, settings, and that accuracy is typically not lost by using an automated pipeline instead of hand-selecting and tuning parameters.
ARTICLE | doi:10.20944/preprints201711.0141.v1
Subject: Social Sciences, Behavior Sciences Keywords: organizational culture; mission; consistency; involvement; adaptability
Online: 22 November 2017 (04:19:38 CET)
The main goal of this paper is to address how quickly and to what extent the international organizational cultures brought in by global companies after privatization are being merged into a single monolithic culture. For this purpose, the Denison model of organizational culture was adopted and applied; it was chosen because it emphasizes the need for balance between an organization's demand for stability and its required flexibility. Considering that a different organizational culture reflects systematic change across an entire organization, this paper focuses on exploring the differences in culture dimensions between domestically and foreign-owned companies in Serbia. Data from a sample of 1,000 employees were statistically processed. Changes in organizational culture tend to be relatively slow. The results confirm that organizational culture is a complex working environment, concerning organizational values, which represents a fundamental element of organizations. Given that the change of company ownership occurred fifteen years before the research was carried out, the obtained results show the effects of interaction between national and organizational culture over this relatively short period. The results can be generalized to countries that are undergoing, or have recently undergone, a transition and that are similar in cultural characteristics.
ARTICLE | doi:10.20944/preprints202306.0598.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: bias; consistency; efficiency; simulation design; volatility estimation
Online: 8 June 2023 (07:31:27 CEST)
This study rolls out a robust framework for simulation studies with the Generalised Autoregressive Conditional Heteroscedasticity (GARCH) model using the rugarch package. The package is thoroughly investigated, and novel findings are identified for improved and effective simulations. The focus of the study is to provide the necessary simulation steps for volatility estimation, namely "background (optional), defining the aim, research questions, method of implementation, and summarised conclusion". The method of implementation is a workflow that includes writing the code, setting the seed, setting the true parameters a priori, the data generation process, and performance assessment through meta-statistics. These novel, easy-to-understand steps are demonstrated on financial returns using an illustrative Monte Carlo simulation with empirical verification. Among the findings, the study shows that regardless of the arrangement of the seed values, the efficiency and consistency of an estimator generally remain the same as the sample size increases. The study also derives a new and flexible true-parameter-recovery measure which researchers can use to determine how well the MCS estimator recovers the true parameter. It is anticipated that the outcomes of this study will be broadly applicable in finance, with intuitive appeal in other areas, for volatility estimation.
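The workflow the abstract enumerates (set the seed, fix true parameters a priori, generate data, assess the estimator through meta-statistics) can be sketched independently of rugarch. The pure-Python GARCH(1,1) example below uses assumed parameter values and a deliberately simple estimator, the sample variance as an estimator of the unconditional variance, purely to illustrate the simulation design, not the paper's rugarch setup.

```python
import random
import statistics

# Assumed true GARCH(1,1) parameters (illustrative, not from the study)
OMEGA, ALPHA, BETA = 0.1, 0.08, 0.90          # alpha + beta < 1 (stationary)
TRUE_VAR = OMEGA / (1 - ALPHA - BETA)         # unconditional variance

def simulate_garch11(n, seed):
    """Data generation process: r_t = sigma_t * z_t,
    sigma2_{t+1} = omega + alpha*r_t^2 + beta*sigma2_t."""
    rng = random.Random(seed)                 # seed set per replication
    sigma2, returns = TRUE_VAR, []
    for _ in range(n):
        r = (sigma2 ** 0.5) * rng.gauss(0.0, 1.0)
        returns.append(r)
        sigma2 = OMEGA + ALPHA * r * r + BETA * sigma2
    return returns

def mc_bias(n, replications=200):
    """Meta-statistic: mean bias of the sample variance across replications."""
    estimates = [statistics.pvariance(simulate_garch11(n, seed))
                 for seed in range(replications)]
    return statistics.mean(estimates) - TRUE_VAR
```

Running `mc_bias` for increasing `n` is the consistency check the abstract describes: the estimator should concentrate around `TRUE_VAR` as the sample size grows, whatever the arrangement of seed values.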
ARTICLE | doi:10.20944/preprints202310.1985.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: video quality consistency; adaptive QP; perceptual-based RDO
Online: 31 October 2023 (06:49:33 CET)
In the Industry 4.0 era, video applications such as surveillance systems, video conferencing, and video broadcasting play a vital role. In these applications, for manipulating and tracking objects in decoded video, the quality of the decoded video should be consistent because it largely affects the performance of machine analysis. To cope with this problem, we propose a novel perceptual video coding (PVC) solution in which a full-reference quality metric, Video Multimethod Assessment Fusion (VMAF), is employed together with a deep convolutional neural network (CNN) to obtain consistent quality while still achieving high compression performance. First, to meet the consistent-quality requirement, we propose a CNN model with an expected VMAF as input to adaptively adjust the quantization parameter (QP) for each coding block. Afterwards, to increase the compression performance, the Lagrange coefficient of the rate-distortion optimization (RDO) mechanism is adaptively computed under Rate-QP and Quality-QP models. Experimental results show that the proposed PVC achieves both targets simultaneously: the quality of the video sequence is kept consistent with an expected quality level, and the bitrate saving of the proposed method is higher than that of traditional video coding standards and a relevant benchmark, notably around 10% bitrate saving on average.
ARTICLE | doi:10.20944/preprints202106.0185.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: pituitary adenoma; consistency; magnetic resonance imaging; pharmacokinetic analysis; collagen.
Online: 7 June 2021 (13:25:15 CEST)
Prediction of tumor consistency is valuable for planning transsphenoidal surgery for pituitary adenoma. A prospective study was conducted involving 49 participants with pituitary adenoma to determine whether quantitative pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is useful for predicting consistency of adenoma. Pharmacokinetic parameters in the adenomas including volume of extravascular extracellular space (EES) per unit volume of tissue (ve), blood plasma volume per unit volume of tissue (vp), volume transfer constant between blood plasma and EES (Ktrans), and rate constant between EES and blood plasma (kep) were obtained. The pharmacokinetic parameters and the histologic percentage of collagen content (PCC) were compared between soft and hard adenomas using Mann–Whitney U test. Pearson’s correlation coefficient was used to correlate pharmacokinetic parameters with PCC. Hard adenomas showed significantly higher PCC (44.08 ± 15.14% vs. 6.62 ± 3.47%, p < 0.01), ve (0.332 ± 0.124% vs. 0.221 ± 0.104%, p = 0.02), and Ktrans (0.775 ± 0.401/min vs. 0.601 ± 0.612/min, p = 0.02) than soft adenomas. Moreover, a significant positive correlation was found between ve and PCC (r = 0.601, p < 0.01). The ve derived using DCE-MRI may have predictive value for consistency of pituitary adenoma.
ARTICLE | doi:10.20944/preprints202307.2073.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: image registration; edge consistency features; multi-source remote sensing images
Online: 31 July 2023 (10:58:28 CEST)
Multi-source image registration often suffers from large radiometric and geometric differences. Specifically, grayscale and texture from similar landforms in images from different sources often show significantly different visual features, and these differences disturb corresponding-point extraction in the subsequent registration process. Considering that edges between heterogeneous images can provide homogeneous information, and that more consistent features can be extracted from image edges, an edge-consistency radiation-change-insensitive feature transform (EC-RIFT) method is proposed in this paper. Firstly, noise and texture interference are reduced by preprocessing according to the image characteristics. Secondly, image edges are extracted based on phase congruency, and an orthogonal Log-Gabor filter is applied in place of the global algorithm. Finally, the descriptors are built with a logarithmic partition of the feature-point neighborhood, which improves their robustness. Comparative experiments on datasets containing multi-source remote sensing image pairs show that the proposed EC-RIFT method outperforms other registration methods in precision and effectiveness.
ARTICLE | doi:10.20944/preprints202305.1446.v1
Subject: Physical Sciences, Theoretical Physics Keywords: conductivity; electromigration; effective charge; metal solutions; Drude-Sommerfeld; consistency rule
Online: 19 May 2023 (15:35:07 CEST)
The phenomena of electrical conductivity and electromigration in metallic systems are related, since in both cases the basic physical process is the scattering of conduction electrons by metal ions. Numerous searches have been made for equations connecting conductivity with electromigration. In the case of a liquid metal, using the Drude-Sommerfeld (DS) conductivity equation, it was not possible to obtain a correct quantitative relationship between these phenomena. Attempts to find such a relationship by taking into account the N. Mott correction (g-factor) in the DS equation were also unsuccessful. This article proposes a different correction (b-factor) to the DS equation, which takes into account the possibility of varying the momentum transferred by a conduction electron to a metal ion during scattering. This correction makes it possible to establish a quantitative relationship between conductivity and electromigration, as well as between electromigration in various binary systems with common components, in agreement with experiment. The proposed theory describes well, in particular, two- and multi-component metal systems of any concentration (the consistency rule for triangles A-B, B-C, C-A). The value of the b-factor changes smoothly with the metal's heat of vaporization per unit volume.
ARTICLE | doi:10.20944/preprints201709.0131.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: terrestrial LiDAR; TLS; LAI; LAD; element size; bias; consistency; efficiency
Online: 26 September 2017 (15:42:47 CEST)
Terrestrial LiDAR is becoming increasingly popular for estimating leaf and plant area density. Voxel-based approaches account for vegetation heterogeneity, and significant work has been done in this recent research field, but no general theoretical analysis is available. Although estimators have been proposed and several causes of bias have been identified, their consistency and efficiency have not been evaluated, and confidence intervals are almost never provided. In the present paper, we solve the transmittance equation and use Maximum Likelihood Estimation (MLE) to derive unbiased estimators and confidence intervals for the attenuation coefficient, which is proportional to leaf area density. The new estimators and confidence intervals are defined at voxel scale and account for the number of beams crossing the voxel, the inequality of path lengths within the voxel, the size of vegetation elements, and the variability of element positions between vegetation samples. They are complemented by numerous numerical simulations evaluating estimator consistency and efficiency, as well as the coverage probabilities of the confidence intervals.
• Although commonly used when the beam number is low, the usual estimators are strongly biased, and the 95% confidence intervals can be ≈±100% of the estimate.
• Our unbiased estimators are consistent over a wider range of validity than the usual ones, especially the unbiased MLE, which is consistent when the beam number is as low as 5. The unbiased MLE is efficient, meaning it reaches the lowest residual errors that can be expected for an unbiased estimator. The unbiased MLE also requires no bias correction when path lengths are unequal.
• When elements are small (or the voxel is large), 10³ beams entering the voxel leads to confidence intervals of ≈±10%, but when elements are larger (or the voxel smaller), intervals can remain wider than ±50%, even for a large beam number. This is explained by the variability of element positions between vegetation samples; a significant part of the residual error can thus be attributed to random effects.
• Confidence intervals are much smaller (±5 to 10%) when LAD estimates are averaged over several small voxels, typically within a horizontal layer or in the crown of an individual plant. In this context, our unbiased estimators halve the radius of the confidence intervals compared with the usual estimators.
Our study provides new ready-to-use estimators and confidence intervals for attenuation coefficients, which are consistent and efficient within a fairly large range of parameter values. Consistency is achieved at a low beam number, which is promising for application to airborne LiDAR data. These results help raise the level of understanding of, and confidence in, LAD estimation. Among other applications, they should help determine the most suitable voxel size for given vegetation types and scanning densities, whereas existing guidelines are highly variable among studies, probably because of differences in vegetation, scanning design and estimators.
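The bias of the usual estimator at low beam number can be reproduced in a few lines. The sketch below assumes a single voxel, equal path lengths, and an arbitrary true attenuation coefficient; the naive estimator is lambda_hat = -ln(transmittance)/path, and a Monte Carlo loop exposes its upward bias when only a handful of beams cross the voxel. None of the constants come from the paper, and the guard against ln(0) is an ad hoc choice for illustration.

```python
import math
import random
import statistics

TRUE_LAMBDA, PATH = 0.5, 1.0   # assumed attenuation coefficient and path length

def naive_estimate(n_beams, rng):
    """Usual estimator: lambda_hat = -ln(T_hat)/path, where T_hat is the
    fraction of beams transmitted (not intercepted) through the voxel."""
    p_pass = math.exp(-TRUE_LAMBDA * PATH)
    passed = sum(rng.random() < p_pass for _ in range(n_beams))
    t_hat = max(passed, 1) / n_beams       # crude guard against ln(0)
    return -math.log(t_hat) / PATH

def mean_bias(n_beams, reps, seed=0):
    rng = random.Random(seed)
    est = [naive_estimate(n_beams, rng) for _ in range(reps)]
    return statistics.mean(est) - TRUE_LAMBDA

bias_low = mean_bias(5, reps=20000)     # few beams: clear upward bias
bias_high = mean_bias(500, reps=2000)   # many beams: bias nearly vanishes
```

Because -ln is convex, Jensen's inequality makes the naive estimator biased upward at small beam counts, which is the low-beam-number regime the unbiased estimators of the paper are designed for.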
ARTICLE | doi:10.20944/preprints202306.1735.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: consistency; efficiency; score driven model; simulation design; time-varying parameter estimation
Online: 26 June 2023 (04:40:02 CEST)
In econometrics and finance, volatility modelling has long been a specialised field addressing a variety of issues pertaining to the risk and uncertainty of an asset. This study presents a robust framework, through a step-by-step design, for effective Monte Carlo simulation (MCS) with empirical verification to estimate volatility using the Generalized Autoregressive Score (GAS) model. The framework describes an organised approach to the MCS experiment that includes "background (optional), defining the aim, research questions, method of implementation, and summarised conclusion". The method of implementation is a workflow that consists of writing the code, setting the seed, setting the true parameter a priori, the data generation process, and performance assessment through meta-statistics. Among the findings, the study experimentally demonstrates that the GAS model with a lower unconditional shape parameter value can generate a dataset that adequately reflects the behaviour of financial time series data, relevant for volatility estimation. This dynamic framework is intended to help interested users carry out MCS experiments with the GAS model for reliable volatility estimation in finance and other areas.
ARTICLE | doi:10.20944/preprints202309.0412.v1
Subject: Engineering, Control And Systems Engineering Keywords: multiocular vision; random sampling consistency; 3D reconstruction; parallel robot; target localisation and grasping
Online: 6 September 2023 (09:27:00 CEST)
Some traditional robots rely on offline-programmed reciprocating motion, but with the continuous upgrading of vision technology, more and more such tasks are being taken over by machine vision. To address the insufficient accuracy of robot target localization based on binocular vision, an improved random sampling consistency (RANSAC) algorithm is proposed to accomplish parallel-robot target localization and grasping under the guidance of a multiocular vision system. Firstly, the RANSAC algorithm is improved based on the SURF algorithm; then the parallax gradient method is applied to iterate over the matched point pairs several times to further optimize the data; next, the 3D reconstruction is completed programmatically; finally, the obtained data are input into the robot arm, and the camera's internal and external parameters are obtained by the calibration method so that the robot can accurately locate and grasp the target. Experiments show that the improved algorithm has advantages in recognition accuracy and grasping success rate under the multiocular vision system.
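As a generic illustration of the random-sampling idea the abstract builds on (plain RANSAC, not the paper's improved variant), the sketch below estimates a 2-D translation from synthetic point matches contaminated with gross outliers; all thresholds and the synthetic data are assumptions.

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Plain RANSAC sketch: hypothesise a translation from one random match,
    count inliers, keep the largest consensus set, refine on it."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)    # minimal sample: 1 pair
        tx, ty = x2 - x1, y2 - y1
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in matches
                   if abs(c - a - tx) < tol and abs(d - b - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine: mean translation over the consensus set
    tx = sum(c - a for (a, _), (c, _) in best_inliers) / len(best_inliers)
    ty = sum(d - b for (_, b), (_, d) in best_inliers) / len(best_inliers)
    return (tx, ty), best_inliers

# synthetic matches: true translation (10, -4) plus 30 gross outliers
rng = random.Random(1)
matches = [((x, y), (x + 10 + rng.uniform(-0.5, 0.5),
                     y - 4 + rng.uniform(-0.5, 0.5)))
           for x, y in ((rng.uniform(0, 100), rng.uniform(0, 100))
                        for _ in range(70))]
matches += [((rng.uniform(0, 100), rng.uniform(0, 100)),
             (rng.uniform(0, 100), rng.uniform(0, 100))) for _ in range(30)]
(tx, ty), inliers = ransac_translation(matches)
```

The paper's parallax-gradient iteration plays a role analogous to the refinement step here: reusing the consensus set to tighten the estimate after the random sampling has rejected the bad correspondences.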
ARTICLE | doi:10.20944/preprints202305.1697.v1
Subject: Engineering, Marine Engineering Keywords: AUV Formation; Formation Avoidance; Artificial Potential Field Method; Auxiliary Potential Fields; Consistency Protocol
Online: 24 May 2023 (05:04:21 CEST)
Formation obstacle avoidance is one of the critical technologies for autonomous underwater vehicle (AUV) formations. To this end, a cooperative obstacle avoidance algorithm based on an improved artificial potential field method and a consistency protocol is proposed in this paper for the local obstacle avoidance problem of AUV formations. Firstly, to address the tendency of the traditional artificial potential field method to fall into local minima, an auxiliary potential field perpendicular to the AUV's direction of motion is designed, solving the problem that the combined potential-field force on an AUV can vanish at a local minimum. Secondly, a control law that keeps the speed and position of the AUV formation consistent is designed for the problem that the formation changes during local obstacle avoidance. The control conflict between the artificial potential field method and the consistency protocol is resolved by adjusting the desired formation of the consistency protocol through the potential-field force. Finally, a bounded energy function demonstrates the stability and convergence of the system. Simulations confirm that the AUV formation can achieve convergence of the formation state under local obstacle avoidance.
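The auxiliary-field idea can be sketched in a few lines: standard attractive and repulsive potential-field forces, plus a force perpendicular to the vehicle's velocity that activates only when the combined force nearly vanishes (the local-minimum situation). All gains and radii below are illustrative assumptions for a planar toy case, not the paper's control law.

```python
import math

def apf_force(pos, vel, goal, obstacle,
              k_att=1.0, k_rep=100.0, d0=5.0, k_aux=1.0, eps=1e-3):
    """Sketch of the improved artificial potential field: attractive force
    toward the goal, repulsive force inside radius d0 of the obstacle, and
    an auxiliary force perpendicular to the heading near a local minimum."""
    # attractive: proportional to the distance to the goal
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # repulsive: active only inside the influence radius d0
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 0 < d < d0:
        mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
        fx += mag * dx / d
        fy += mag * dy / d
    # auxiliary field: perpendicular to velocity, applied near a local minimum
    if math.hypot(fx, fy) < eps:
        s = math.hypot(*vel) or 1.0
        fx += -k_aux * vel[1] / s
        fy += k_aux * vel[0] / s
    return fx, fy
```

When the attractive and repulsive contributions cancel, the returned force is purely perpendicular to the current velocity, which is what pushes the vehicle off the zero-force line instead of leaving it stalled.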
ARTICLE | doi:10.20944/preprints202304.1076.v1
Subject: Medicine And Pharmacology, Cardiac And Cardiovascular Systems Keywords: Electrical Impedance Tomography; Motion Artifact Detection; Heart Rate; Cardiac Output; Hemodialysis; Source Consistency
Online: 27 April 2023 (10:54:52 CEST)
Electrical impedance tomography (EIT) can noninvasively monitor the real-time hemodynamic state of a conscious, spontaneously breathing patient. However, the cardiac volume signal (CVS) extracted from EIT images has a small amplitude and is sensitive to motion artifacts (MAs). This study aimed to develop a new algorithm to reduce MAs in the CVS for more accurate heart rate (HR) and cardiac output (CO) monitoring in patients undergoing hemodialysis, based on the source consistency between the electrocardiogram (ECG) and the CVS of heartbeats. The two signals were measured at different locations on the body through independent instruments and electrodes, but their frequency and phase matched when no MAs were present. As the number of motions per hour (MI) rose above 30, the proposed algorithm achieved a correlation of 0.83 and a precision of 1.65 beats per minute (BPM), compared with a correlation of 0.56 and a precision of 4.04 BPM for the conventional statistical algorithm. For CO monitoring, the precision and upper limit of the mean ∆CO were 3.41 and 2.82 liters per minute (LPM), respectively, compared with 4.05 and 3.82 LPM for the statistical algorithm. The developed algorithm can reduce MAs and improve the accuracy and reliability of HR/CO monitoring, particularly in high-motion environments.
ARTICLE | doi:10.20944/preprints202105.0117.v2
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: decision trees; deep feed-forward network; neural trees; consistency; optimal rate of convergence
Online: 9 November 2021 (16:54:30 CET)
Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully-connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, intended to be much faster than deep feed-forward networks and to avoid specifying the number of hidden units in the hidden layers of the neural network in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and to gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust in nature. Additionally, the proposed NSNT model obtains a fast rate of convergence, which is near-optimal up to a logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
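The core construction (simulate a decision tree with a neural network) can be shown for the smallest case, a single decision stump: one steep-sigmoid hidden unit reproduces an axis-aligned split. This is a generic sketch of the tree-to-network idea, with an assumed `hardness` knob, not the NSNT construction itself.

```python
import math

def stump_as_network(j, t, left_val, right_val, hardness=50.0):
    """Simulate the stump 'if x[j] <= t: left_val else: right_val' with one
    sigmoid hidden unit; larger `hardness` makes the sigmoid approximate the
    hard threshold more closely."""
    def predict(x):
        g = 1.0 / (1.0 + math.exp(-hardness * (t - x[j])))  # ~1 left of t, ~0 right
        return g * left_val + (1.0 - g) * right_val
    return predict

net = stump_as_network(j=0, t=0.5, left_val=-1.0, right_val=1.0)
```

A full tree would add one such gating unit per internal split and combine the gates per leaf, which is the sense in which a fitted tree can initialise a sparse network whose width is dictated by the tree rather than chosen in advance.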
ARTICLE | doi:10.20944/preprints202310.1660.v1
Subject: Engineering, Transportation Science And Technology Keywords: asphalt recycling; reclaimed asphalt pavement (RAP); recycling rate; MCDM; criteria weight; expert evaluation; opinion consistency
Online: 25 October 2023 (16:16:00 CEST)
Transport infrastructure's asphalt pavement deteriorates under the influence of destructive factors. Damage that occurs during the pavement's service life is repaired, and when further rehabilitation is economically and technically irrational, the asphalt pavement is recycled. The material from an asphalt pavement layer that has reached its limit state is milled out, or broken up and crushed, and then reused in the production of hot-mix asphalt (HMA) or warm-mix asphalt (WMA) mixtures. In this paper, the dynamics of the percentage recycling ratio (RR) of old asphalt pavement material were investigated. RR represents the quantity of reclaimed asphalt pavement (RAP) used in the production of HMA and WMA mixtures in Europe and the USA, divided by the total amount of RAP prepared in the country. Factors and goals affecting it are analyzed. An original system of 10 criteria that increase a country's RR has been created. By applying different multiple criteria decision making (MCDM) methods to the importance assigned to these criteria by 14 experts, the normalized subjective weights of the criteria were determined. The Analytic Hierarchy Process (AHP), rank correlation, Average Rank Transformation into Weight - Linear (ARTIW-L) and Non-linear (ARTIW-N), and Direct Percentage Weight (DPW) methods were used in the study. The results show that the RAP recycling rate is close to 100% in countries with a sustainable economic background. In the Baltic countries, it is raised mostly by the adequacy of regulatory documents, a national strategy promoting asphalt recycling, and the homogeneity and classification of RAP. The number and capacity of RAP stockpiles, the number and productivity of asphalt milling machines, and the wear of the asphalt pavement have the least influence on increasing RR. The experts' assessments of the significance of all criteria are consistent.
Averaging the criteria weights determined by four MCDM methods (AHP, ARTIW-L, ARTIW-N, DPW) made it possible to obtain more reliable results. These results can be used to make strategic decisions and to create a plan of practical actions for increasing the RAP recycling rate in developing countries.
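For the AHP part of the weighting, the standard computation can be sketched directly: a pairwise comparison matrix on the Saaty scale yields priority weights via row geometric means, and the consistency ratio CR = CI/RI is the usual check on expert-opinion consistency. The 3×3 matrix below is a made-up example, not the judgements of the study's 14 experts.

```python
import math

# Hypothetical 3x3 pairwise comparison matrix (Saaty scale), illustrative only
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]

def ahp_weights(A):
    """AHP priority vector by the row geometric mean method, plus the
    consistency ratio CR = CI / RI used to accept or reject the judgements."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    w = [g / total for g in gm]
    # lambda_max approximated from A @ w
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty random consistency indices
    return w, ci / ri

w, cr = ahp_weights(A)
```

A CR below the conventional 0.1 threshold is what "the opinions of experts ... are consistent" amounts to operationally in AHP.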
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: non-steady partial differential equation; higher order finite difference scheme; axial diffusion; convergence; consistency; stability
Online: 28 January 2020 (09:14:07 CET)
In the present study, a mathematical model based on a non-steady partial differential equation is proposed for the process of oxygen mass transport in the human pulmonary circulation. Mathematical modelling of this kind of problem leads to a non-steady partial differential equation, and for its numerical simulation we use finite differences. The aim is a rigorous numerical analysis of the scheme, in which consistency, stability and convergence are established. The motivation is to raise the numerical solution to a higher-order scheme: increasing the order makes the numerical simulation more accurate, but also more complicated. The numerical analysis at this order of accuracy therefore requires further research.
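As a minimal illustration of the consistency/stability analysis the abstract refers to, on a simpler model problem (u_t = D·u_xx with fixed boundary values, not the pulmonary transport equation itself): the explicit forward-time centred-space scheme below is consistent of order (Δt, Δx²) but stable only for r = D·Δt/Δx² ≤ 1/2.

```python
def ftcs_step(u, r):
    """One explicit forward-time centred-space (FTCS) step for u_t = D*u_xx,
    with r = D*dt/dx**2 and Dirichlet (fixed) boundary values."""
    return ([u[0]]
            + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

# a unit spike diffuses smoothly when r <= 0.5 ...
u = [0.0, 0.0, 1.0, 0.0, 0.0]
u1 = ftcs_step(u, r=0.25)
```

With r above 1/2 the same spike develops growing oscillations step after step, which is how the stability restriction of the von Neumann analysis shows up numerically, and why higher-order schemes demand their own stability proofs.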
ARTICLE | doi:10.20944/preprints202111.0306.v1
Subject: Medicine And Pharmacology, Dietetics And Nutrition Keywords: COVID-19 Pandemic; Dietary patterns; BMI; Nutrition; Vitamins; Healthy Food; Dietary Supplements; Factor analyses; Internal Consistency; weight gain
Online: 17 November 2021 (12:09:58 CET)
Since its emergence in 2019, COVID-19 has been associated with significant changes in lifestyle-related behavior, including physical activity, diet, and sleep, which are vital to maintaining well-being. This study measures lifestyle-related behavior during the COVID-19 pandemic lockdown using a 21-item questionnaire. Responses were collected from March 2021 to September 2021. Four hundred sixty-seven participants were engaged in assessing the changes caused by the pandemic and their effect on BMI. The validity and reliability of the questionnaire were tested on 71 participants. Cronbach's alpha values for the questionnaire all exceeded 0.7, demonstrating good validity and internal consistency. The effect of each question regarding physical activity and dietary habits on the BMI difference was studied using ANOVA. The study shows that more than half of the participants reported snacking more between meals and increasing their sitting and screen time, while 74% felt more stressed and anxious. These behaviors underlay the increase in BMI for individuals during the lockdown. In contrast, 62% of the participants showed more awareness of their health by increasing their intake of immunity-boosting foods, and 56% increased their consumption of nutrition supplements. Female and married individuals tended to be healthier, so their BMI remained stable compared with others of different gender or marital status.
ARTICLE | doi:10.20944/preprints201612.0002.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: change point; estimation; consistency; panel data; short panels; boundary issue; structural change; bootstrap; non-life insurance; change in claim amounts
Online: 1 December 2016 (10:02:03 CET)
The panel data of interest consist of a moderate number of panels, each containing a small number of observations. An estimator of common breaks in panel means, free of the boundary issue, is proposed for this scenario. In particular, the novel estimator can detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the elaborated change point estimator is that it returns the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established, and the results are illustrated through a simulation study. As a by-product of the developed estimation technique, theoretical applications to correlation structure estimation, hypothesis testing, and bootstrapping in panel data are demonstrated. A practical application to non-life insurance is presented as well.
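A generic least-squares version of the problem setting fits in a few lines: for each candidate split, sum the within-segment sums of squares over all panels and take the minimiser. This naive sketch uses assumed panel dimensions and an assumed mean shift, and, unlike the paper's estimator, it neither handles the no-break case (where the paper's estimator returns the last observation) nor avoids the boundary issue.

```python
import random

def common_break(panel):
    """Least-squares common break in panel means: choose the split t that
    minimises the pooled within-segment sum of squares. Returns the index
    of the first post-break observation."""
    T = len(panel[0])
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    costs = {t: sum(sse(row[:t]) + sse(row[t:]) for row in panel)
             for t in range(1, T)}
    return min(costs, key=costs.get)

# 50 short panels of length 10, common break after time 4 (mean shifts by 2)
rng = random.Random(0)
panel = [[rng.gauss(0.0, 1.0) for _ in range(4)]
         + [rng.gauss(2.0, 1.0) for _ in range(6)]
         for _ in range(50)]
tau = common_break(panel)
```

Pooling the split cost across many short panels is what makes detection feasible even though each individual panel is far too short to locate the break on its own.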
ARTICLE | doi:10.20944/preprints202109.0034.v3
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial intelligence; CMAPSS; consistency and local accuracy; CUSUM chart; deep learning; prognostic and health management; RMSE; sensing and data extraction; SHAP; Uncertainty; XAI
Online: 12 January 2022 (10:22:47 CET)
Mistrust, amplified by numerous artificial intelligence (AI) related incidents, has made the energy and industrial sectors among the slowest adopters of AI methods. Central to this issue is the black-box problem of AI, which impedes investment and is fast becoming a legal hazard for users. Explainable AI (XAI) is a recent paradigm to tackle this challenge. Being the backbone of industry, the prognostic and health management (PHM) domain has recently been introduced to XAI. However, many deficiencies, particularly a lack of explanation assessment methods and uncertainty quantification, plague this young field. In this paper, we elaborate a framework for explainable anomaly detection and failure prognostics, employing a Bayesian deep learning model to generate local and global explanations from the PHM tasks. An uncertainty measure of the Bayesian model is utilized as a marker for anomalies, expanding the scope of the prognostic explanation to include the model's confidence. In addition, the global explanation is used to improve prognostic performance, an aspect neglected in the handful of PHM-XAI publications. The quality of the explanation is finally examined using the local accuracy and consistency properties. The method is tested on real-world gas turbine anomalies and synthetic turbofan failure-prediction data. Seven out of eight of the tested anomalies were successfully identified. Additionally, the prognostic outcome showed a 19% improvement in statistical terms and achieved the highest prognostic score among the best published results on the topic.
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: geographic information fusion; data quality; data consistency checking; historic GIS; railway network; patrimonial data; crowdsourcing open data; volunteer geographic information VGI; wikipedia geo-spatial information extraction.
Online: 17 August 2020 (14:51:04 CEST)
The transportation of goods is as old as human civilization: past networks and their evolution shed light on long-term trends. Transportation has a major impact on climate change, as well as on the spread of pandemics. These two reasons motivate the importance of providing relevant and reliable historical geographic datasets of these networks. This paper focuses on reconstructing the railway network in France at its maximal extent, a century ago. The active stations and lines are well documented by the French SNCF in open public data. However, that information ignores past stations (pre-1980), which probably outnumber those recorded in the public data. Additional open data, individual or collaborative (e.g., Wikipedia), are particularly valuable, but they are not always geo-coded, and two more sources are necessary to complete that geo-coding: ancient maps and aerial photography. Remote sensing and volunteered geographic information are therefore the two pillars of past railway reconstruction. The methods developed are adapted to extracting information from these sources: automated parsing of Wikipedia infoboxes, and data extraction from simple tables and even from plain text. This series of sparse procedures can be merged into a comprehensive computer-assisted process. Beyond this, a considerable quality control effort is necessary when merging these data: automated wherever possible, and otherwise visually controlled against remote sensing imagery. The main output is a reliable dataset, released under the ODbL, of more than 9100 stations, which can be combined with information about the 35000 communes of France for a large variety of studies. This work demonstrates two theses: (a) it is possible to reconstruct transport network data from the past, and generic computer-assisted methods can be developed; (b) the value of remote sensing and volunteered geographic information is considerable (as archaeologists already know).
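The automated Wikipedia parsing is described only at a high level; a minimal sketch of flat infobox field extraction from raw wikitext might look like the following. The regex and the field names in the example are illustrative assumptions, and nested templates are not handled.

```python
import re

def parse_infobox(wikitext):
    """Extract key = value pairs from the first {{Infobox ...}} template.

    Minimal sketch: assumes a flat infobox with one '| key = value'
    field per line and no nested templates.
    """
    m = re.search(r"\{\{Infobox[^|]*\|(.*?)\}\}", wikitext, re.S)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        line = line.strip().lstrip("|").strip()
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields
```

Fields such as `latitude`/`longitude`, when present, supply the missing geo-coding directly; otherwise the extracted station name is matched against the map and aerial-photography sources.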
ARTICLE | doi:10.20944/preprints201709.0170.v1
Subject: Business, Economics And Management, Business And Management Keywords: business sustainability; research and development (R&D); multiple criteria decision-making (MCDM); financial objective; variable-consistency dominance-based rough set approach (VC-DRSA); internetwork relationship map (INRM); directional flow graph (DFG)
Online: 30 September 2017 (06:19:43 CEST)
The influence and importance of research and development (R&D) for business sustainability have attracted increasing interest, especially in the high-tech sector. However, R&D efforts may have complex and mixed impacts on financial results, given the associated expenses. This study therefore examines how R&D efforts may help a business improve its financial performance with respect to dual objectives: gross and net profitability. The research integrates a rough-set-based soft computing technique with multiple criteria decision-making (MCDM) methods to explore this complex yet valuable issue. A group of publicly listed companies from Taiwan, all in the semiconductor sector, was analyzed as a case study. Initially, more than 30 variables were considered, and the adopted soft computing technique retrieved 14 core attributes, for the dual profitability objectives, to form the evaluation model. The importance of R&D for pursuing superior financial prospects is confirmed, and the empirical case demonstrates how this hybrid approach can guide an individual company in planning improvements toward long-term sustainability.
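The variable-consistency machinery of VC-DRSA is beyond the scope of an abstract, but the dominance relation it builds on is simple. A minimal sketch, assuming all criteria are gain-type (higher is better):

```python
def dominates(a, b):
    """True if alternative a is at least as good as b on every criterion
    and strictly better on at least one (all criteria gain-type)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))
```

VC-DRSA relaxes the induced decision rules by allowing a controlled fraction of inconsistent objects, which is what makes it usable on noisy financial data such as the company attributes analyzed here.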