ARTICLE | doi:10.20944/preprints202208.0234.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Variational Bayesian Approach (VBA); Kullback–Leibler Divergence; Mean Field Approximation (MFA); Optimization Algorithm
Online: 12 August 2022 (10:26:02 CEST)
In many Bayesian computations, we first obtain the expression of the joint distribution of all the unknown variables given the observed data. In general, this expression is not separable in those variables. Thus, obtaining the marginals for each variable and computing the expectations are difficult and costly. This problem becomes even more difficult in high-dimensional settings, which is an important issue in inverse problems. We may then try to propose a surrogate expression with which we can do approximate computations. Often a separable approximation is useful enough. The Variational Bayesian Approximation (VBA) is a technique that approximates the joint distribution $p$ with a simpler one $q$, for example a separable one, by minimizing the Kullback–Leibler divergence $KL(q|p)$. When $q$ is separable in all the variables, the approximation is also called the Mean Field Approximation (MFA), and $q$ is then the product of the approximate marginals. A first standard and general algorithm is the alternate optimization of $KL(q|p)$ with respect to the factors $q_i$. A second general approach is its optimization on the Riemannian manifold. However, in this paper, for practical reasons, we consider the case where $p$ is in the exponential family, and so is $q$. In this case, $KL(q|p)$ becomes a function of the parameters $\boldsymbol{\theta}$ of the exponential family, and we can then use any other optimization algorithm to obtain those parameters. In this paper, we compare three optimization algorithms, namely standard alternate optimization, a gradient-based algorithm and a natural gradient algorithm, and study their relative performances on three examples.
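To make the first of these algorithms concrete, here is a minimal sketch of alternate (coordinate-ascent) optimization of the mean field factors on a toy target that the abstract does not use: a zero-mean bivariate Gaussian, for which the updates are the classical closed-form CAVI updates. Everything below is illustrative, not the paper's implementation.

```python
import numpy as np

# Mean Field Approximation of a zero-mean bivariate Gaussian p with precision
# matrix Lam by a product q(x1)q(x2) of Gaussians.  Alternate optimization of
# KL(q|p) gives the classical updates m_i <- -(Lam[i,j]/Lam[i,i]) * m_j, with
# the factor variances fixed at 1/Lam[i,i].
rho = 0.8
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # target covariance
Lam = np.linalg.inv(Sigma)                   # target precision

m = np.array([1.0, -1.0])                    # arbitrary initialization
for _ in range(50):
    m[0] = -(Lam[0, 1] / Lam[0, 0]) * m[1]   # update q_1 holding q_2 fixed
    m[1] = -(Lam[1, 0] / Lam[1, 1]) * m[0]   # update q_2 holding q_1 fixed

print("factor means:", m)                    # converges to the true mean (0, 0)
print("factor variances:", 1 / Lam[0, 0], 1 / Lam[1, 1])
```

The printed factor variances equal $1-\rho^2 < 1$, illustrating the well-known tendency of the MFA under $KL(q|p)$ to underestimate marginal variances.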
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Langevin equation; Mean Field Games system; kinetic Fokker-Planck equation
Online: 12 January 2021 (09:31:54 CET)
We consider a Mean Field Games model where the dynamics of the agents is given by a controlled Langevin equation and the cost is quadratic. A change of variables, introduced in previous work, transforms the Mean Field Games system into a system of two coupled kinetic Fokker-Planck equations. We prove an existence result for the latter system, obtaining as a consequence the existence of a solution for the Mean Field Games system.
ARTICLE | doi:10.20944/preprints202008.0653.v2
Subject: Physical Sciences, Acoustics Keywords: BCS superconductivity; mean field; current-current interaction; internal/external fields; stability; Meissner effect
Online: 30 September 2020 (15:12:50 CEST)
We show that implementing the 1/c² transverse current-current interaction between electrons into the standard self-consistent electron BCS model in bulk under thermal equilibrium ensures, in the stable superconductive phase, the full compensation of a constant external magnetic field by the internal magnetic field created by the electrons, i.e., the system behaves as an ideal diamagnet. However, no proof of the phenomenological London equation emerges within the bulk approach.
ARTICLE | doi:10.20944/preprints201804.0004.v1
Subject: Engineering, Marine Engineering Keywords: ship’s propeller jet; mean axial velocity of flow; prediction equations
Online: 1 April 2018 (16:07:01 CEST)
The propeller jet from a ship has a significant component directed upwards towards the free surface of the water, which can be used for ice management. This paper describes a comprehensive laboratory experiment in which the influences of operational factors on a propeller wake velocity field were investigated. The experiment was performed on a steady wake field to investigate the characteristics of the axial velocity of the fluid in the wake and the corresponding variability downstream of the propeller. The axial velocities and the recorded variability were time-averaged. Propeller rotational speed was found to be the most significant factor, followed by propeller inclination. The experimental results also show how the patterns of the mean axial velocity distribution change with the factors considered throughout the effective wake field, and provide relationships to predict the axial velocity for known factors.
ARTICLE | doi:10.20944/preprints202108.0140.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: K-Mean; Mean-Shift; Performance; Accuracy
Online: 5 August 2021 (11:00:32 CEST)
Clustering, also known as cluster analysis, is a learning problem that takes place without any human supervision. It has often been used, quite efficiently, in data analysis to observe and identify interesting, useful, or desired patterns in the data. The technique performs a structured division of the data into similar objects based on the characteristics it identifies. This process results in the formation of groups, and each group formed is called a cluster. A single cluster consists of objects from the data that are similar to other objects in the same cluster and differ from objects assigned to other clusters. Clustering is very significant in many aspects of data analysis, as it determines and presents the intrinsic grouping of objects, based on their attributes, in a batch of unlabeled raw data. No textbook criterion of a good clustering exists, because the process is highly customizable to each user's various needs. Likewise, there is no outright best clustering algorithm, as the choice depends massively on the user's scenario and needs. This paper is intended to compare and study two different clustering algorithms, k-means and mean shift. These algorithms are compared according to the following factors: time complexity, training, prediction performance, and accuracy.
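As a sketch of the kind of head-to-head comparison described here, the snippet below runs both algorithms on the same synthetic data and reports runtime and a clustering-accuracy proxy. The dataset, parameters, and the adjusted Rand index are illustrative choices, not the paper's exact protocol.

```python
import time
import numpy as np
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Synthetic data with known ground-truth labels for scoring.
X, y_true = make_blobs(n_samples=2000, centers=4, cluster_std=1.0, random_state=0)

t0 = time.time()
km_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
t_km = time.time() - t0

bw = estimate_bandwidth(X, quantile=0.2, random_state=0)  # mean shift needs a bandwidth
t0 = time.time()
ms_labels = MeanShift(bandwidth=bw).fit_predict(X)
t_ms = time.time() - t0

print(f"k-means   : {t_km:.3f}s  ARI={adjusted_rand_score(y_true, km_labels):.3f}")
print(f"mean shift: {t_ms:.3f}s  ARI={adjusted_rand_score(y_true, ms_labels):.3f}")
```

A typical observation on such data is that k-means is faster but needs the cluster count up front, while mean shift infers the number of clusters from the bandwidth.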
BRIEF REPORT | doi:10.20944/preprints202004.0455.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: statistics; mean; weighted mean; average; mathematical thinking
Online: 25 April 2020 (02:50:55 CEST)
This study explores students’ understanding of one measure of central tendency, the mean. A teaching experiment was conducted to understand how sixth-grade students made sense of this concept. Findings suggest that the students know how to solve mathematical problems related to the mean using procedural understanding but lack conceptual understanding.
ARTICLE | doi:10.20944/preprints201710.0199.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Ensemble mean; Analogue ensemble mean; Multi–member analogue ensemble mean; Quantitative rainfall prediction
Online: 31 October 2017 (16:28:54 CET)
Accurate and timely rainfall prediction enhances productivity and can aid proper planning in sectors such as agriculture, health, transport and water resources. This study is aimed at improving rainfall prediction using ensemble methods. It first assesses the performance of six convective schemes (Kain–Fritsch (KF); Betts–Miller–Janjić (BMJ); Grell–Freitas (GF); Grell 3D ensemble (G3); New Tiedtke (NT) and Grell–Devenyi (GD)) using the root mean square error (RMSE) and mean error (ME), focusing on the March–May 2013 rainfall period over Uganda. Eighteen ensemble members are generated from the three best performing convective schemes (i.e. KF, GF & G3). The performance of three ensemble methods (i.e. ensemble mean (EM); ensemble mean analogue (EMA) and multi–member analogue ensemble (MAEM)) is also analyzed using the RMSE and ME. The EM presented a smaller RMSE compared to individual schemes (EM: 10.02; KF: 23.96; BMJ: 26.04; GF: 25.85; G3: 24.07; NT: 29.13 & GD: 26.27) and a better bias (EM: -1.28; KF: -1.62; BMJ: -4.04; GF: -3.90; G3: -3.62; NT: -5.41 & GD: -4.07). The EMA and MAEM presented smaller RMSEs than the EM at 13 and 17 out of 21 stations, respectively, demonstrating additional improvement in predictive performance. The MAEM is a new approach proposed and described in the study.
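For readers unfamiliar with the two verification scores used throughout, the following sketch computes the RMSE and mean error (bias) of an ensemble mean; the member and observation arrays are placeholders, not the study's data.

```python
import numpy as np

def rmse(pred, obs):
    # root mean square error of a forecast against observations
    return np.sqrt(np.mean((pred - obs) ** 2))

def mean_error(pred, obs):
    # mean error (bias): negative means under-forecasting on average
    return np.mean(pred - obs)

members = np.random.rand(18, 100) * 30   # 18 members x 100 forecasts (fake data)
obs = np.random.rand(100) * 30
ens_mean = members.mean(axis=0)          # the EM: average over ensemble members

print("EM RMSE:", rmse(ens_mean, obs), " EM bias:", mean_error(ens_mean, obs))
```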
ARTICLE | doi:10.20944/preprints201809.0083.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: mean-field type game; non-zero-sum differential game; cooperative game; backward stochastic differential equations; linear-quadratic stochastic control; social cost; price of anarchy
Online: 5 September 2018 (04:59:41 CEST)
In this paper, mean-field type games between two players with backward stochastic dynamics are defined and studied. They make up a class of non-zero-sum differential games where the players' state dynamics solve backward stochastic differential equations (BSDEs) that depend on the marginal distributions of player states. Players try to minimize their individual cost functionals, also depending on the marginal state distributions. Under some regularity conditions, we derive necessary and sufficient conditions for existence of Nash equilibria. Player behavior is illustrated by numerical examples, and is compared to a centrally planned solution where the social cost, the sum of player costs, is minimized. The inefficiency of a Nash equilibrium, compared to socially optimal behavior, is quantified by the so-called price of anarchy. Numerical simulations of the price of anarchy indicate how the improvement in social cost achievable by a central planner depends on problem parameters.
ARTICLE | doi:10.20944/preprints201705.0039.v1
Subject: Mathematics & Computer Science, Analysis Keywords: Lévy--Khintchine representation; integral representation; bivariate mean; bivariate complex geometric mean; reciprocal; Heronian mean; application
Online: 4 May 2017 (08:44:25 CEST)
In the paper, the authors survey integral representations (including the Lévy--Khintchine representations) and applications of some bivariate means (including the logarithmic mean, the identric mean, Stolarsky's mean, the harmonic mean, the (weighted) geometric means and their reciprocals, and the Toader--Qi mean) and the multivariate (weighted) geometric means and their reciprocals, derive integral representations of bivariate complex geometric mean and its reciprocal, and apply these newly-derived integral representations to establish integral representations of Heronian mean of power 2 and its reciprocal.
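For orientation, one of the best-known representations in this family is the classical integral form of the logarithmic mean, a standard identity quoted here for context (not one of the paper's new results):

```latex
L(a,b) = \frac{b-a}{\ln b - \ln a} = \int_0^1 a^{1-t}\, b^{t} \,\mathrm{d}t , \qquad a, b > 0,\ a \neq b .
```

Representations of this type are what the survey extends to the bivariate complex geometric mean and to the Heronian mean of power 2.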
ARTICLE | doi:10.20944/preprints201809.0608.v1
Subject: Mathematics & Computer Science, Analysis Keywords: Hyers-Ulam stability; mean value theorem; Lagrange's mean value point; two-dimensional Lagrange's mean value point
Online: 30 September 2018 (10:42:59 CEST)
Using a theorem of Ulam and Hyers, we will prove the Hyers-Ulam stability of two-dimensional Lagrange's mean value points $(\eta, \xi)$ which satisfy the equation, $f(u, v) - f(p, q) = (u-p) f_x(\eta, \xi) + (v-q) f_y(\eta, \xi)$, where $(p, q)$ and $(u, v)$ are distinct points in the plane. Moreover, we introduce an efficient algorithm for applying our main result in practical use.
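As an illustration of how such a mean value point can be located numerically (a generic sketch under stated assumptions, not the paper's algorithm), one can restrict the search to the segment joining $(p, q)$ and $(u, v)$, where the mean value theorem guarantees a root, and solve a scalar root-finding problem:

```python
import numpy as np
from scipy.optimize import brentq

# Example function and its partial derivatives (all illustrative).
f  = lambda x, y: x**2 + x*y + y**2
fx = lambda x, y: 2*x + y
fy = lambda x, y: x + 2*y

p, q, u, v = 0.0, 0.0, 1.0, 2.0
lhs = f(u, v) - f(p, q)

def g(t):
    # residual of the mean value equation at (eta, xi) = (p,q) + t*((u,v)-(p,q))
    eta, xi = p + t*(u - p), q + t*(v - q)
    return (u - p)*fx(eta, xi) + (v - q)*fy(eta, xi) - lhs

t_star = brentq(g, 0.0, 1.0)   # a root exists in (0,1) by the MVT;
                               # the endpoint sign change holds for this example
eta, xi = p + t_star*(u - p), q + t_star*(v - q)
print("mean value point:", eta, xi)
```

For this quadratic example the root is at $t = 1/2$, i.e. the midpoint of the segment.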
ARTICLE | doi:10.20944/preprints201809.0099.v2
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Global, non-parametric, non-iterative optimization; Time-mean quantities; Small time-varying forcing; Ordinary differential equation system (ODEs); Eigenvalue problem
Online: 30 August 2019 (09:35:18 CEST)
This study demonstrates a global, non-parametric, non-iterative optimization of the time-mean value of a quantity driven by time-varying forcing. It is based on the fact that the (steady) forced vibration of non-autonomous ordinary differential equation systems is well approximated by an analytical solution when the amplitude of the forcing is sufficiently small and the base state without forcing is linearly stable and steady. The method is applied to optimize a time-averaged heat-transfer rate in a two-dimensional thermal convection field in a square cavity with a horizontal temperature difference, and the globally optimal vibrational forcing, i.e. the globally optimal spatial distribution of vibrational heat and vorticity sources, is obtained for the first time. The maximized vibrational thermal convection corresponds well to a state of internal gravity wave resonance. In contrast, the minimized thermal convection is weak, keeping the boundary layers on both sidewalls thick.
ARTICLE | doi:10.20944/preprints202106.0573.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Mean; Mean of Trapezoidal Fuzzy Numbers; Trapezoidal Fuzzy Numbers; Transportation Problem; Fuzzy Transportation Problem
Online: 23 June 2021 (11:17:57 CEST)
In this paper, an improved matrix reduction method is proposed for the solution of the fuzzy transportation problem in which all inputs are taken as fuzzy numbers. Since the ranking of fuzzy numbers is an important tool in decision making, the trapezoidal fuzzy numbers are converted into crisp values using a mean technique, and the resulting fuzzy transportation problem is solved by the proposed method. We give a suitable numerical example for the unbalanced case and compare the optimal value with other techniques. The results show that the optimal profit of the transportation problem obtained by the proposed technique under the robust ranking method is better than that of the other methods. Novelty: the numerical illustration demonstrates the newly proposed method for managing transportation problems with fuzzy algorithms.
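One common mean-based defuzzification for a trapezoidal fuzzy number $(a, b, c, d)$ is simply the average of its four defining points; the sketch below shows this standard choice, which may differ from the exact ranking technique used in the paper.

```python
# Crisp representative of a trapezoidal fuzzy number (a, b, c, d) as the
# average of its defining points -- a standard mean-based ranking, shown
# for illustration only.
def trapezoidal_mean(a, b, c, d):
    return (a + b + c + d) / 4.0

# e.g. a fuzzy transportation cost (2, 4, 6, 8) defuzzifies to 5.0
print(trapezoidal_mean(2, 4, 6, 8))
```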
ARTICLE | doi:10.20944/preprints201807.0405.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: interval-valued intuitionistic fuzzy set; aggregation operator; Heronian mean; geometric Heronian mean; multi-attribute decision making
Online: 23 July 2018 (05:29:46 CEST)
The Pythagorean fuzzy set (PFS), which is characterized by a membership degree and a non-membership degree whose squares sum to at most one, can act as an effective tool to express decision makers’ fuzziness and uncertainty. Considering that the Heronian mean (HM) is a powerful aggregation operator that can capture the interrelationship between any two arguments, we study the HM in the Pythagorean fuzzy environment and propose new operators for aggregating interval-valued Pythagorean fuzzy information. First, we investigate the HM and the geometric HM (GHM) under the interval-valued intuitionistic fuzzy environment and develop a series of aggregation operators for interval-valued intuitionistic fuzzy numbers (IVIFNs), including the interval-valued intuitionistic fuzzy Heronian mean (IVIFHM), interval-valued intuitionistic fuzzy geometric Heronian mean (IVIFGHM), interval-valued intuitionistic fuzzy weighted Heronian mean (IVIFWHM) and interval-valued intuitionistic fuzzy weighted geometric Heronian mean (IVIFWGHM). Second, some desirable and important properties of these aggregation operators are discussed. Third, based on these aggregation operators, a novel approach to multi-attribute decision making (MADM) is proposed. Finally, to demonstrate the validity of the approach, a numerical example is provided and discussed. Moreover, we discuss several real-world applications of these operators within policy-making contexts.
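For reference, the classical bivariate Heronian mean that underlies all of these operators is the standard definition

```latex
\mathrm{He}(a,b) = \frac{a + \sqrt{ab} + b}{3}, \qquad a, b \ge 0 ,
```

which interpolates between the arithmetic and geometric means; the paper's operators lift this form to interval-valued fuzzy arguments.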
ARTICLE | doi:10.20944/preprints201701.0126.v1
Subject: Biology, Animal Sciences & Zoology Keywords: anemia; iron deficiency; pregnancy; serum ferritin; mean corpuscular volume (MCV); mean corpuscular hemoglobin (MCH); Northern Pakistan
Online: 27 January 2017 (03:46:07 CET)
Abstract: The aim of this study was to determine the incidence of anemia in pregnant women of Swat District and to analyze iron variations and their dietary effects. Data were collected during January–September 2016. The sample comprised 250 pregnant women in different trimesters. A blood sample was collected from each woman, and a full blood count (FBC) was performed with a Mindray BC-3000 Plus hematology analyzer for all pregnant individuals. Confirmed anemic cases were then examined for iron deficiency anemia (IDA) using serum ferritin, serum iron and total iron binding capacity (TIBC) via a Randox kit, and serum transferrin saturation was estimated by the formula (serum transferrin saturation = serum iron × 100/TIBC). Of the 50 participants in the first trimester, 26 women suffered from IDA, a prevalence rate of 52% (mean Hb concentration 9.602 ± 0.87 g/dl). The rates of IDA were 63.3% (mean Hb concentration 8.48 ± 1.24 g/dl) and 54% (mean Hb concentration 9.18 ± 1.28 g/dl) among the 150 and 50 participants in the second and third trimesters, respectively. Significant correlations were found between serum ferritin and Hb, serum ferritin and MCV, and serum ferritin and MCH. The highest prevalence of anemia, 78.2%, was found in the 26–30 age group, followed by the 36–40 age group (78.2%), compared with other age groups in the second trimester. In this study, the prevalence of IDA in the third trimester was lower than in the first and second trimesters.
ARTICLE | doi:10.20944/preprints201706.0002.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: probability; inference; information theory; Bayesian; generalized mean
Online: 1 June 2017 (05:49:21 CEST)
An approach to the assessment of probabilistic inference is described which quantifies the performance on the probability scale. From both information theory and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the actual outcome and is referred to as the “Accuracy.” Upper and lower error bars on the accuracy are provided by the arithmetic mean and the -2/3 mean. The arithmetic mean is called the “Decisiveness” due to its similarity with the cost of a decision, and the -2/3 mean is called the “Robustness” due to its sensitivity to outlier errors. Visualization of inference performance is facilitated by plotting the reported model probabilities versus the histogram-calculated source probabilities. The calibration between model and source is summarized on both axes by the arithmetic, geometric, and -2/3 means. From information theory, the performance of the inference is related to the cross-entropy between the model and source distributions. Just as the cross-entropy is the sum of the entropy and the divergence, the accuracy of a model can be decomposed into a component due to the source uncertainty and the divergence between the source and the model. Translated to the probability domain, these quantities are plotted as the average model probability versus the average source probability. The divergence probability is the average model probability divided by the average source probability. When an inference is over/under-confident, the arithmetic mean of the model increases/decreases, while the -2/3 mean decreases/increases, respectively.
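Since all three summaries are power (generalized) means of the probabilities the model reported for the actual outcomes, they are simple to compute; the sketch below uses made-up probabilities and the standard power-mean formula.

```python
import numpy as np

def power_mean(x, p):
    # generalized mean of order p; p -> 0 is the geometric-mean limit
    x = np.asarray(x, dtype=float)
    if p == 0:
        return np.exp(np.mean(np.log(x)))
    return np.mean(x ** p) ** (1.0 / p)

reported = [0.9, 0.7, 0.4, 0.85, 0.6]  # model probability of each actual outcome
print("Accuracy    :", power_mean(reported, 0))      # geometric mean
print("Decisiveness:", power_mean(reported, 1))      # arithmetic mean
print("Robustness  :", power_mean(reported, -2/3))   # -2/3 mean
```

By the power-mean inequality the three values are ordered Robustness ≤ Accuracy ≤ Decisiveness, which is why the latter two serve as error bars around the geometric mean.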
ARTICLE | doi:10.20944/preprints202101.0611.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Mean surface temperature; CMIP6; evaluation; projections; East Africa
Online: 29 January 2021 (11:35:29 CET)
This study evaluates the historical mean surface temperature (hereafter T2m) and examines how T2m changes over East Africa (EA) in the 21st century using CMIP6 models. An evaluation was conducted based on mean state, trends, and statistical metrics (bias, correlation coefficient, root mean square difference, and Taylor skill score). For future projections over EA, the five best performing CMIP6 models (based on their performance ranking in historical mean temperature simulations) under the shared socioeconomic pathways SSP2-4.5 and SSP5-8.5 scenarios were employed. The historical simulations reveal an overestimation of the mean annual T2m cycle over the study region, with a few models depicting underestimations. Further, CMIP6 models reproduce the spatial and temporal trends within close proximity of the observed range. Overall, the best performing models are as follows: FGOALS-g3, HadGEM-GC31-LL, MPI-ESM2-LR, CNRM-CM6-1, and IPSL-CM6A-LR. Across the three time slices under consideration, the Multi-Model Ensemble (MME) projects the largest changes during the late period (2080–2100), with expected mean changes of 2.4 °C for SSP2-4.5 and 4.4 °C for the SSP5-8.5 scenario. The magnitude of change based on Sen’s slope estimator and the Mann-Kendall test reveals significant increasing tendencies, with projections of 0.24 °C per decade (0.65 °C per decade) under the SSP2-4.5 (SSP5-8.5) scenarios. The findings from this study illustrate higher warming in the latest model outputs of CMIP6 relative to its predecessor, despite identical instantaneous radiative forcing.
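The two trend statistics named here are available off the shelf; the sketch below applies Sen's (Theil–Sen) slope and a Kendall-tau monotonic-trend test to a synthetic temperature series, as an illustration rather than the study's computation.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

# Synthetic annual-mean temperature series with a ~0.24 C/decade trend.
years = np.arange(1980, 2015)
t2m = 24.0 + 0.024 * (years - years[0]) + np.random.normal(0, 0.1, years.size)

slope, intercept, lo, hi = theilslopes(t2m, years)   # Sen's slope (deg C / yr)
tau, p_value = kendalltau(years, t2m)                # monotonic-trend test

print(f"Sen's slope: {slope*10:.2f} C/decade  (tau={tau:.2f}, p={p_value:.3g})")
```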
ARTICLE | doi:10.20944/preprints202005.0347.v1
Subject: Engineering, Mechanical Engineering Keywords: deep learning; maximum mean discrepancy; gearbox; fault detection
Online: 22 May 2020 (05:21:56 CEST)
In past years, various intelligent machine learning and deep learning algorithms have been developed and widely applied for gearbox fault detection and diagnosis. However, the real-time application of these intelligent algorithms has been limited, mainly because a model developed using data from one machine or one operating condition suffers serious diagnosis performance degradation when applied to another machine, or to the same machine under a different operating condition. The reason for this poor model generalization is the distribution discrepancy between the training and testing data. This paper proposes to address the issue using a deep learning based cross-domain adaptation approach for gearbox fault diagnosis. Labelled data from the training dataset and unlabeled data from the testing dataset are used to achieve the cross-domain adaptation task. A deep convolutional neural network (CNN) is used as the main architecture. Maximum mean discrepancy is used as a measure to minimize the distribution distance between the labelled training data and the unlabeled testing data. The study proposes to reduce the discrepancy between the two domains in multiple layers of the designed CNN, so that the representations learned from the training data transfer to the testing data. The proposed approach is evaluated using experimental data from a gearbox under significant speed variation and multiple health conditions. An appropriate benchmarking against both traditional machine learning methods and other domain adaptation methods demonstrates the superiority of the proposed method.
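The distance being minimized has a compact empirical form; below is a minimal numpy sketch of the (biased) squared maximum mean discrepancy with a Gaussian kernel. The feature arrays and bandwidth are illustrative assumptions; the paper computes this on CNN layer activations.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # biased empirical estimate of squared MMD with a Gaussian kernel
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

X = np.random.normal(0.0, 1.0, (100, 8))   # "source domain" features
Y = np.random.normal(0.5, 1.0, (100, 8))   # shifted "target domain" features
print("MMD^2:", mmd2(X, Y))                # near 0 only if the domains match
```

In the paper's setting this quantity is added to the classification loss at several layers, so training drives the two domains' representations together.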
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: extinction; permanence in mean; stability; stochastic epidemic model
Online: 30 May 2019 (10:53:58 CEST)
In this paper, we propose a new mathematical model based on the association between susceptible and recovered individuals, where the association is disturbed by white noise. The model accounts for demographic changes and is used to study long-term behavior. We study the stability of the equilibria of the deterministic model and prove conditions for the extinction of the disease. Then, we investigate and obtain critical conditions of the stochastic epidemic model for the extinction and the permanence in mean of the disease under white noise. To verify our results, we present some numerical simulations using real disease data.
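Such stochastic models are typically simulated with the Euler–Maruyama scheme; the sketch below shows the idea on a generic SIS-type model with a noisy transmission term. The drift, noise structure, and parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Euler-Maruyama for dI = [beta*I*(1-I) - gamma*I] dt + sigma*I*(1-I) dW,
# a toy stochastic SIS model: I is the infected fraction of the population.
beta, gamma, sigma = 0.5, 0.2, 0.1     # transmission, recovery, noise strength
dt, n_steps = 0.01, 20000
I = 0.01                               # initial infected fraction

rng = np.random.default_rng(0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
    I += (beta * I * (1 - I) - gamma * I) * dt + sigma * I * (1 - I) * dW
    I = min(max(I, 0.0), 1.0)          # keep the fraction in [0, 1]

print("long-run infected fraction:", I)  # near 1 - gamma/beta for small noise
```

Extinction versus permanence in mean is then read off from whether such trajectories decay to zero or fluctuate around a positive level as the noise intensity varies.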
ARTICLE | doi:10.20944/preprints201707.0090.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: portfolio optimization; Kelly criterion; differential evolution; mean-variance
Online: 31 July 2017 (11:22:09 CEST)
Kelly's Criterion is well known among gamblers and investors as a method for maximizing the returns one would expect to observe over long periods of betting or investing. These ideas are conspicuously absent from portfolio optimization problems in the financial and automation literature. This paper will show how Kelly's Criterion can be incorporated into standard portfolio optimization models. The model developed here combines risk and return into a single objective function by incorporating a risk parameter. This model is then solved for a portfolio of 10 stocks from a major stock exchange using a differential evolution algorithm. Monte Carlo calculations are used to verify the accuracy of the results obtained from differential evolution. The results show that evolutionary algorithms can be successfully applied to solve a portfolio optimization problem where returns are calculated by applying Kelly's Criterion to each of the assets in the portfolio.
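For intuition, the underlying criterion has a closed form in the simplest binary-bet setting; the sketch below shows that textbook case (the paper embeds the idea in a mean-variance portfolio objective instead).

```python
# Kelly's Criterion for a bet paying b-to-1, won with probability p:
# the growth-optimal fraction of wealth to stake is f* = (b*p - q) / b,
# where q = 1 - p.  This is the classical formula, not the paper's model.
def kelly_fraction(p, b):
    q = 1.0 - p
    return (b * p - q) / b

# e.g. a 60%-probability bet at even odds -> stake 20% of wealth
print(kelly_fraction(p=0.6, b=1.0))
```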
ARTICLE | doi:10.20944/preprints202010.0355.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Probability Distribution Function; Weibull Distribution; Parameter Estimation; Mean Time Between Failures; Failure Rate; Mean Time To Repair; Downtime and Reliability
Online: 16 October 2020 (14:52:26 CEST)
Reliability analysis techniques are customary standard tools used for evaluating the performance of different equipment and devices in order to minimize their downtime. To predict reliability, life data from a sample that is satisfactorily representative of the equipment should be fitted to a suitable statistical distribution. The parameterized distribution may then be used to estimate essential characteristics such as the failure rate, the probability of failure at a precise time, and system reliability. In the current study, the Weibull++/ALTA software package is used as a novel tool to fit the available data set and estimate the best fitting probability density function (PDF), using maximum likelihood estimation (MLE) for the parameters. The fitted distributions are then assessed with goodness-of-fit tests to determine how well they fit the available data set; there are multiple methods for determining goodness-of-fit. The parameters of the Weibull distribution and its special cases have a direct effect on lifetimes.
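The core fitting step is easy to reproduce with open tools; the sketch below estimates the Weibull shape and scale by maximum likelihood from simulated failure times and derives the mean time between failures as MTBF = η·Γ(1 + 1/β). The data are synthetic, and the paper itself uses Weibull++/ALTA rather than SciPy.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

# Simulated failure times; in practice these come from field or test records.
rng = np.random.default_rng(1)
failure_times = weibull_min.rvs(c=1.8, scale=1000.0, size=200, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero (two-parameter Weibull).
shape, loc, scale = weibull_min.fit(failure_times, floc=0)
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.1f}")

# MTBF follows from the fitted parameters: eta * Gamma(1 + 1/beta).
print("MTBF estimate:", scale * gamma(1 + 1 / shape))
```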
ARTICLE | doi:10.20944/preprints202106.0571.v2
Subject: Medicine & Pharmacology, Ophthalmology Keywords: glaucoma progression; nycthemeral intraocular pressure; mean ocular perfusion pressure
Online: 1 July 2021 (11:08:41 CEST)
Purpose: Nycthemeral (24-hour) glaucoma inpatient intraocular pressure (IOP) monitoring has been used in Europe for more than 100 years to detect peaks missed during regular office hours. Data supporting this practice is lacking, partially because it is difficult to correlate manually drawn IOP curves to objective glaucoma progression. To address this, we deployed automated IOP data extraction tools and tested for a correlation to a progressive retinal nerve fiber layer loss on spectral-domain optical coherence tomography (SDOCT). Methods: We created and deployed a machine-learning image analysis software to extract IOP data from hand-drawn, nycthemeral IOP curves of 225 retrospectively identified glaucoma patients. The relationship between demographic parameters, IOP and mean ocular perfusion pressure (MOPP) data to SDOCT data was analyzed. Sensitivities and specificities for the historical cut-off values of 15 mmHg and 22 mmHg in detecting glaucoma progression were calculated. Results: IOP data could be extracted efficiently. The IOP average was 15.2±4.0 mmHg, nycthemeral IOP variation was 6.9±4.2 mmHg, and MOPP was 59.1±8.9 mmHg. Peak IOP occurred at 10 AM and trough at 9 PM. Disease progression occurred mainly in the temporal-superior and -inferior SDOCT sectors. No correlation could be established between demographic, IOP, or MOPP parameters and SDOCT disease progression. The sensitivity and specificity of both cut-off points (15 and 22 mmHg) were insufficient to be clinically useful. Outpatient IOPs were non-inferior to nycthemeral IOPs. Conclusion: IOP data obtained during a single visit make for a poor diagnostic tool, no matter whether obtained using nycthemeral measurements or during outpatient hours.
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Central tendency; Weighted geometric mean; means; variance; non-parametric statistics
Online: 6 April 2021 (14:02:04 CEST)
Various means (the arithmetic mean, the geometric mean, the harmonic mean, the power means) are often used as central tendency statistics. A new statistic of this type is offered for a sample from a distribution on the positive semi-axis: the gamma-weighted geometric mean. This statistic is a certain weighted geometric mean with adaptive weights.
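The family the statistic belongs to is easy to state; below is a generic weighted geometric mean, the exponential of a weighted average of logs. The paper's adaptive, gamma-based weights are not reproduced here; uniform weights are shown for illustration.

```python
import numpy as np

def weighted_geometric_mean(x, w):
    # exp of the w-weighted average of log(x); reduces to the plain
    # geometric mean when all weights are equal
    x, w = np.asarray(x, float), np.asarray(w, float)
    return np.exp(np.sum(w * np.log(x)) / np.sum(w))

sample = np.array([1.2, 3.4, 0.7, 2.9])
print(weighted_geometric_mean(sample, np.ones_like(sample)))
```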
ARTICLE | doi:10.20944/preprints202001.0208.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: pig; behavior analysis; hourglass; stacked dense-net; K-mean sampler
Online: 19 January 2020 (04:40:15 CET)
Animal behavior analysis is a crucial task for industrial farming. In an indoor farm setting, extracting the key joints of an animal is essential for tracking it over long periods of time. In this paper, we propose a deep network that exploits transfer learning and is trained for pig skeleton extraction in an end-to-end fashion. The backbone of the architecture is based on an hourglass stacked dense-net. In order to train the network, key frames are selected from the test data using a K-means sampler. In total, nine keypoints are annotated, which enables detailed behavior analysis in the farm setting. Extensive experiments are conducted, and the quantitative results show that the network has the potential to increase tracking performance by a substantial margin.
ARTICLE | doi:10.20944/preprints201906.0195.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: adaptive bilateral; marker watershed; PSO; fuzzy C-mean; GLCM; SVM
Online: 20 June 2019 (09:22:05 CEST)
Medical image processing is now used extensively in many areas; early detection and treatment of disease depend on finding abnormality issues in such images. A number of segmentation methods are available to detect lung nodules in computed tomography (CT) images. As a first result of this paper, for early detection of lung nodules, pre-processing by the top-hat transform, median filtering and adaptive bilateral filtering were compared, and the adaptive bilateral filter proved the most suitable method for CT images. The proposed segmentation technique uses a novel strip method in which the image is split into 3, 4, 5 or 6 strips, together with a marker-watershed method based on PSO and fuzzy C-means clustering. Firstly, noise in the input image is reduced and the image smoothed; the filtered image is processed with the strip method and then segmented by the marker-watershed method. Secondly, an enhanced PSO technique is used to locate more accurate values of the cluster centers for fuzzy C-means. In the final stage, with the accurate centers and an enhanced target function, the small regions of the segmented object are clustered by fuzzy C-means. The segmentation algorithm presented in this paper achieves a 95% accuracy rate in detecting lung nodules when the strip count is 5.
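The clustering step at the heart of this pipeline is standard fuzzy C-means; below is a bare-bones numpy version of its membership/center iteration (fuzzifier m = 2, synthetic data). The paper additionally tunes the cluster centers with PSO, which is not shown.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # u_ik = 1 / sum_j (d_ik / d_ij)^p  -- the standard FCM update
        U = 1.0 / (d**p * (1.0 / d**p).sum(axis=1, keepdims=True))
    return centers, U

X = np.vstack([np.random.normal(0, 1, (50, 2)), np.random.normal(5, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)   # two centers near (0, 0) and (5, 5)
```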
ARTICLE | doi:10.20944/preprints201711.0124.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: neutrosophic number; neutrosophic number harmonic mean operator (NNHMO); neutrosophic number weighted harmonic mean operator (NNWHMO); cosine function; score function; multi criteria group decision making
Online: 20 November 2017 (09:53:31 CET)
The concept of a neutrosophic number is a significant mathematical tool to deal with real scientific problems, because it can tackle the indeterminate and incomplete information that generally exists in real problems. In this article, we use neutrosophic numbers (a + bI), where a and bI denote the determinate and indeterminate components, respectively. We explore situations in which the input information needs to be expressed in terms of neutrosophic numbers. We define score functions and accuracy functions for ranking neutrosophic numbers. We then define a cosine function to determine unknown criteria weights. We define neutrosophic number harmonic mean operators and prove their basic properties. Then, we develop two novel MCGDM strategies using the proposed aggregation operators. We solve a numerical example to demonstrate the feasibility and effectiveness of the two proposed strategies. Sensitivity analysis with variation of “I” on neutrosophic numbers is performed to demonstrate how the preference ranking order of alternatives is sensitive to the change of “I”. The efficiency of the developed strategies is ascertained by comparing the results obtained from the proposed strategies with those of the existing strategies in the literature.
ARTICLE | doi:10.20944/preprints202009.0024.v1
Subject: Earth Sciences, Atmospheric Science Keywords: AWS; land cover; LDAPS; mean bias error; temperature; topography; wind speed
Online: 2 September 2020 (05:00:09 CEST)
We investigated the characteristics of surface wind speeds and temperatures predicted by the local data assimilation and prediction system (LDAPS) operated by the Korea Meteorological Administration. First, we classified automated weather stations (AWSs) into four categories [urban flat (Uf), rural flat (Rf), rural mountainous (Rm), and rural coastal (Rc) terrains] based on the surrounding land cover and topography, and selected 25 AWSs representing each category. Then we calculated the mean bias error of wind speed (WE) and temperature (TE) using AWS observations and LDAPS predictions for the 25 AWSs in each category for a period of 1 year (January–December 2015). We found that LDAPS overestimated wind speed (average WE = 1.26 m s^-1) and underestimated temperature (average TE = -0.63 °C) at Uf AWSs located on flat terrain in urban areas because it failed to reflect the drag and local heating caused by buildings. At Rf stations, located on flat terrain in rural areas, LDAPS showed the best performance in predicting surface wind speed and temperature (average WE = 0.42 m s^-1, average TE = 0.12 °C). In mountainous rural terrain (Rm), WE and TE were strongly correlated with the differences between LDAPS and actual altitude. LDAPS underestimated (overestimated) wind speed (temperature) for LDAPS altitudes lower than the actual altitude, and vice versa. In rural coastal terrain (Rc), LDAPS temperature predictions depended on whether the grid was on land or sea, whereas wind speed did not depend on grid location. LDAPS underestimated temperature at grid points on the sea, with smaller TE obtained for grid points on sea than on land.
ARTICLE | doi:10.20944/preprints201912.0015.v1
Subject: Social Sciences, Other Keywords: school sports facility; assessment; t-sne; fuzzy c mean; unsupervised learning
Online: 3 December 2019 (05:24:26 CET)
The aim of this study is (a) to develop, test, and employ a combined method of unsupervised machine learning to objectively assess the condition of sports facilities in primary schools (PSSFC) and (b) to examine the geographical and typological associations with PSSFC. Based on the Sixth National Sports Facility Census (NSFC), six PSSFC indicators (indoor and outdoor facilities included) were selected as the measurements and decomposed using t-stochastic neighbor embedding (t-SNE). Thereafter, the fuzzy C-means (FCM) algorithm was used to cluster the same types of PSSFC, selecting the optimal number of evaluation levels. In total, 845 primary schools in Shanghai, China were recruited and tested with this combined approach of unsupervised machine learning. In addition, a two-way analysis of covariance was used to examine how school location and type are associated with the PSSFC variables at each level. The combined method was found to have acceptable reliability and good interpretability, differentiating PSSFC into five gradient levels. The characteristics of PSSFC differ by school location and type. Our findings are conducive to regionalized and personalized interventions to promote children's physical activity (PA), tailored to the practical situation of particular schools.
ARTICLE | doi:10.20944/preprints201708.0066.v1
Subject: Engineering, Other Keywords: non-homogeneous poisson process; software reliability; weibull function; mean square error
Online: 18 August 2017 (13:05:46 CEST)
The main focus when developing software is to improve the reliability and stability of a software system. When software systems are introduced, these systems are used in field environments that are the same as or close to those used in the development-testing environment; however, they may also be used in many different locations that may differ from the environment in which they were developed and tested. In this paper, we propose a new software reliability model that takes into account the uncertainty of operating environments. The explicit mean value function solution for the proposed model is presented. Examples are presented to illustrate the goodness-of-fit of the proposed model and several existing non-homogeneous Poisson process (NHPP) models and confidence intervals of all models based on two sets of failure data collected from software applications. The results show that the proposed model fits the data more closely than other existing NHPP models to a significant extent.
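For context, NHPP software reliability models of this family are specified by their mean value function m(t), the expected cumulative number of failures by time t. A well-known baseline is the Goel–Okumoto form (a standard example quoted for orientation, not the paper's new model):

```latex
m(t) = a\left(1 - e^{-bt}\right),
```

where $a$ is the expected total number of faults and $b$ the per-fault detection rate; the proposed model generalizes this kind of form to account for the uncertainty of operating environments.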
ARTICLE | doi:10.20944/preprints202109.0021.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Effect size; correlation coefficient; association measure; covariance; mean square contingency coefficient; mean square effect half-size; Pearson’s Phi; 2 × 2 table; binary crosstab; gross crosstab; contingency table
Online: 1 September 2021 (14:28:47 CEST)
Evidence-based medicine (EBM) is in crisis, in part due to bad methods, which are understood as misuse of statistics that is considered correct in itself. This article exposes two related common misconceptions in statistics, the effect size (ES) based on correlation (CBES) and a misconception of contingency tables (MCT). CBES is a fallacy based on misunderstanding of correlation and ES and confusion with 2 × 2 tables, which makes no distinction between gross crosstabs (GCTs) and contingency tables (CTs). This leads to misapplication of Pearson’s Phi, designed for CTs, to GCTs and confusion of the resulting gross Pearson Phi, or mean-square effect half-size, with the implied Pearson mean square contingency coefficient. Generalizing this binary fallacy to continuous data and the correlation in general (Pearson’s r) resulted in flawed equations directly expressing ES in terms of the correlation coefficient, which is impossible without including covariance, so these equations and the whole CBES concept are fundamentally wrong. MCT is a series of related misconceptions due to confusion with 2 × 2 tables and misapplication of related statistics. The misconceptions are threatening because most of the findings from contingency tables, including CBES-based meta-analyses, can be misleading. Problems arising from these fallacies are discussed and the necessary changes to the corpus of statistics are proposed resolving the problem of correlation and ES in paired binary data. Since exposing these fallacies casts doubt on the reliability of the statistical foundations of EBM in general, we urgently need to revise them.
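The statistic at the center of the argument is Pearson's phi for a 2×2 table; the textbook definition is easy to compute, and the paper's point is that it is valid for contingency tables (CTs) but misapplied to gross crosstabs (GCTs). The counts below are invented.

```python
import numpy as np

def phi_coefficient(table):
    # Pearson's phi (mean square contingency) for a 2x2 table [[a, b], [c, d]]:
    # phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))
    (a, b), (c, d) = np.asarray(table, dtype=float)
    return (a*d - b*c) / np.sqrt((a+b) * (c+d) * (a+c) * (b+d))

print(phi_coefficient([[30, 10], [15, 45]]))
```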
ARTICLE | doi:10.20944/preprints201909.0228.v1
Subject: Physical Sciences, Condensed Matter Physics Keywords: topological insulators; Floquet states; Dynamical Mean Field Theory; semiconductors; strongly correlated electronics
Online: 19 September 2019 (15:46:56 CEST)
Spatially uniform optical excitations can induce Floquet topological band structures within insulators, which can develop characteristics similar or equal to those known from three-dimensional topological insulators. In this article we theoretically derive the development of Floquet topological quantum states for electromagnetically driven semiconductor bulk matter, and we present results for the lifetime of these states and their occupation in non-equilibrium. The direct physical impact of the mathematical precision of the Floquet-Keldysh theory is evident when we solve the driven system of a generalized Hubbard model with our framework of dynamical mean field theory (DMFT) in non-equilibrium for the case of ZnO. The physical consequences of the topological non-equilibrium effects in our results for correlated systems are explained together with their impact on optoelectronic applications.
ARTICLE | doi:10.20944/preprints201805.0296.v1
Subject: Mathematics & Computer Science, Other Keywords: normal intuitionistic fuzzy numbers; Heronian mean; Hamacher t-conorm; Hamacher t-norm
Online: 22 May 2018 (10:15:21 CEST)
The Hamacher operations, which generalize the algebraic and Einstein operations, provide a large family of arithmetic operations for uncertain information, and the Heronian mean can handle correlations between the input arguments or different criteria without introducing computational redundancy; meanwhile, normal intuitionistic fuzzy numbers (NIFNs) can distinctively describe normally distributed information in practical decision making. In this paper, a multi-criteria group decision-making (MCGDM) problem is studied in the NIFN environment, and a new MCGDM approach is introduced on the basis of the Hamacher operations. Firstly, according to the Hamacher t-conorm and t-norm, some operational laws for NIFNs are presented. Secondly, we note that the Heronian mean not only takes into account the mutual relation between attribute values but also considers the correlation between an input argument and itself. We therefore develop some operators for aggregating normal intuitionistic fuzzy information and study their properties: the Hamacher Heronian mean (NIFHHM), Hamacher weighted Heronian mean (NIFHWHM), Hamacher geometric Heronian mean (NIFHGHM) and Hamacher weighted geometric Heronian mean (NIFHWGHM). Furthermore, we apply the proposed operators to the MCGDM problem and present a new method. The main characteristics of this new method are: (1) it is suitable for decision making in the NIFN environment and aggregates normally distributed information more reliably and reasonably; (2) it utilizes the Hamacher operations, which provide more reliable and flexible decision-making results and offer an effective and powerful mathematical tool for MCGDM under uncertainty; (3) it uses the Heronian mean operator, which considers the relationships between the input arguments or attributes without introducing redundancy. Lastly, an application is given to show the feasibility and effectiveness of the method presented in this paper.
ARTICLE | doi:10.20944/preprints201709.0084.v1
Subject: Engineering, Control & Systems Engineering Keywords: Passive Sonar; Target Detection; Adaptive Threshold; Bayesian Classifier; K-Mean; Particle Filter
Online: 18 September 2017 (17:04:13 CEST)
This paper presents the results of an experimental investigation of target detection with passive sonar in the Persian Gulf. Detecting propagated sounds in the water is one of the basic challenges for researchers in the sonar field. This challenge becomes more complex in shallow water (like the Persian Gulf) and for quiet vessels. Generally, passive sonar detects targets through the sonar equation with a constant threshold, which increases the detection error in shallow water. The purpose of this study is to propose a new method for detecting targets in passive sonar using an adaptive threshold. In this method, the target signal (sound) is processed in the time and frequency domains. For classification, a Bayesian classifier is used and the prior distribution is estimated by a maximum likelihood algorithm. Finally, the target is detected by combining the detection points in both domains using an LMS adaptive filter. The results show that the proposed method improves the true detection rate by about 27% compared with the best existing detection methods.
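The final combining step uses a standard least-mean-squares adaptive filter; the sketch below is the textbook LMS update on synthetic signals. The tap count, step size, and signals are illustrative assumptions, not the sonar data.

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    # classic LMS: adapt weights w to minimize the instantaneous squared
    # error between the desired signal d and the filter output y
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]        # most recent samples first
        y[n] = w @ u
        w += mu * (d[n] - y[n]) * u      # LMS weight update
    return y, w

t = np.arange(1000)
d = np.sin(0.05 * t)                     # desired signal
x = d + 0.3 * np.random.randn(1000)      # noisy observation
y, w = lms(x, d)
print("final weights:", w)
```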
ARTICLE | doi:10.20944/preprints201703.0119.v1
Subject: Mathematics & Computer Science, Analysis Keywords: Lévy–Khintchine representation; integral representation; Bernstein function; Stieltjes function; Toader–Qi mean; weighted geometric mean; Bessel function of the first kind; probabilistic interpretation; application in engineering; inequality
Online: 16 March 2017 (11:31:31 CET)
In the paper, by virtue of a Lévy–Khintchine representation and an alternative integral representation for the weighted geometric mean, the authors establish a Lévy–Khintchine representation and an alternative integral representation for the Toader–Qi mean. Moreover, the authors also collect a probabilistic interpretation and applications in engineering of the Toader–Qi mean.
Subject: Social Sciences, Accounting Keywords: Short-term trading; mean reversion; VIX; SPY; linear stochastic process; MACD; Bollinger Bands
Online: 29 July 2021 (16:24:34 CEST)
One of the key challenges of stock trading is that stock prices follow a random walk process, which is a special case of a stochastic process, and are highly sensitive to new information. A random walk process is difficult to predict in the short term. Many linear process models used to predict financial time series are structural models that provide an important decision boundary, albeit without adequately considering the correlation or causal effect of market sentiment on stock prices. This research seeks to increase the predictive capability of linear process models using the SPDR S&P 500 ETF (SPY) and the CBOE Volatility Index (VIX) as a proxy for market sentiment. Three econometric models are considered to forecast SPY prices: (i) Auto-Regressive Integrated Moving Average (ARIMA), (ii) Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH), and (iii) Vector Autoregression (VAR). These models are integrated into two technical indicators, Bollinger Bands and Moving Average Convergence Divergence (MACD), focusing on forecast performance. The profitability of various algorithmic trading strategies is compared based on a combination of these two indicators. This research finds that linear process models that incorporate the VIX Index do not improve the performance of algorithmic trading strategies.
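As a sketch of the first model family named above, the snippet fits an ARIMA(1,1,1) to a synthetic price series with statsmodels and produces a one-step forecast; the paper fits SPY prices and brings in VIX as an exogenous sentiment proxy, which is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic random-walk "price" series standing in for SPY.
prices = pd.Series(100 + np.cumsum(np.random.normal(0, 1, 500)))

model = ARIMA(prices, order=(1, 1, 1)).fit()   # ARIMA(p=1, d=1, q=1)
print(model.forecast(steps=1))                 # next-period price forecast
```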
ARTICLE | doi:10.20944/preprints202002.0319.v1
Subject: Earth Sciences, Oceanography Keywords: altimeter; sea surface wind speed; significant wave height; mean wave period; atmospheric instability
Online: 23 February 2020 (11:09:10 CET)
Spaceborne altimeters are an important data source for obtaining global sea surface wind speeds (U10). Although many altimeter U10 algorithms have been proposed and perform well, there is still room for improvement. In this study, data from ten altimeters were collocated with buoys to investigate the error of the altimeter U10 retrievals. The U10 residuals were found to depend significantly on many oceanic and atmospheric parameters. Because these oceanic and atmospheric parameters are inter-correlated, an asymptotic strategy was used to isolate the impact of the different parameters and establish a neural-network-based correction model for altimeter U10. The results indicated that significant wave heights and mean wave periods are effective in correcting U10 retrievals, probably due to the tilting modulation of long waves on the sea surface. After the wave correction, the root-mean-square error of the retrieved U10 was reduced from 1.42 m/s to 1.24 m/s, and the impacts of thermodynamic parameters, such as sea surface (air) temperature, became negligible. The U10 residuals after correction showed that atmospheric instability can lead to errors in extrapolated buoy U10. Buoy measurements with large air-sea temperature differences need to be excluded in the Cal/Val of remotely sensed U10.
ARTICLE | doi:10.20944/preprints201808.0071.v3
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulence; mean velocity; fluctuation velocity; the Reynolds stress tensor; vorticity; turbulence closure problem
Online: 9 July 2019 (05:00:29 CEST)
This paper proposes an explicit and simple representation of the velocity fluctuation and the Reynolds stress tensor in terms of the mean velocity field. The proposed turbulence equations are closed. The proposed formulations reveal that the mean vorticity is the key source of turbulence production; there would be no velocity fluctuations and no turbulence without vorticity. As a natural consequence, the laminar–turbulence transition condition is obtained in a rational way.
ARTICLE | doi:10.20944/preprints201807.0030.v1
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulence; mean velocity; fluctuation velocity; the Reynolds stress tensor; vorticity; turbulence closure problem
Online: 3 July 2018 (08:39:49 CEST)
Based on author's previous work [Sun, B. The Reynolds Navier-Stokes Turbulence Equations of Incompressible Flow Are Closed Rather Than Unclosed. Preprints 2018, 2018060461 (doi: 10.20944/preprints201806.0461.v1)], this paper proposed an explicit representation of velocity fluctuation and formulated the Reynolds stress tensor in terms of the mean velocity field. The proposed closed Reynolds Navier-Stokes turbulence formulations reveal that the mean vorticity is the key source of producing turbulence.
ARTICLE | doi:10.20944/preprints201806.0159.v1
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: 4-space; the third Laplace-Beltrami operator; rotational hypersurface; Gaussian curvature; mean curvature
Online: 11 June 2018 (12:35:51 CEST)
We consider a rotational hypersurface in four-dimensional Euclidean space. We calculate the mean curvature and the Gaussian curvature, and obtain some relations for the rotational hypersurface. Moreover, we define the third Laplace-Beltrami operator and apply it to the rotational hypersurface.
ARTICLE | doi:10.20944/preprints202201.0336.v1
Subject: Physical Sciences, Condensed Matter Physics Keywords: q-state clock model; entropy; Berezinskii-Kosterlitz-Thouless transition; Otto engine; Mean-field approximation
Online: 24 January 2022 (09:28:39 CET)
The present work explores the performance of a thermal-magnetic engine of Otto type, considering as a working substance an effective interacting spin model corresponding to the q-state clock model. We obtain all the thermodynamic quantities for the q = 2, 4, 6, 8 cases on a small lattice (3×3 with free boundary conditions) by using the exact partition function calculated from the energies of all the accessible microstates of the system. The extension to bigger lattices is performed using the mean-field approximation. Our results indicate that the total work extraction of the cycle is highest for the q = 4 case, while the performance for the Ising model (q = 2) is the lowest of all the cases studied. These results are strongly linked to the phase diagram of the working substance and the location of the cycle in the different magnetic phases present, where we find that a transition from the ferromagnetic to the paramagnetic phase extracts more work than one of the Berezinskii–Kosterlitz–Thouless to paramagnetic type. Additionally, as the size of the lattice increases, the extracted work is lower than for smaller lattices for all values of q presented in this study.
ARTICLE | doi:10.20944/preprints202103.0302.v2
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Searaser; Flow-3D; Prediction; Long short term memory; deep neural network; Root mean error.
Online: 13 April 2021 (09:51:25 CEST)
Accurate forecasts of ocean wave energy can not only reduce investment costs but are also essential for the management and operation of electrical power. This paper presents an innovative approach based on Long Short-Term Memory (LSTM) to predict the power generation of an economical wave energy converter named “Searaser”. The data for the analysis are provided by experimental data collected in another study and data extracted from a numerical simulation of Searaser. The simulation is performed with the Flow-3D software, which is highly capable of analyzing fluid–solid interactions. The lack of a relation between wind speed and output power in previous studies needed to be investigated in this field; therefore, in this study the wind speed and output power are related with an LSTM method. Moreover, it can be inferred that the LSTM network is able to predict power in terms of wave height more accurately and faster than the numerical solution. The network outputs show good agreement, with a root mean square value of 0.49 for the mean value, reflecting the accuracy of the LSTM method. Furthermore, the mathematical relation between the generated power and the wave height is introduced by fitting a power function to the results of the LSTM method.
ARTICLE | doi:10.20944/preprints202011.0466.v1
Subject: Engineering, Other Keywords: trust-based recommender system; pearson correlation coefficient; confidence; mean absolute error; precision; recall; coverage
Online: 18 November 2020 (10:50:52 CET)
Information overload is one of the biggest challenges nowadays for any website, especially e-commerce websites. This challenge arises from the fast growth of information on the web (WWW) together with easy access to the internet. Collaborative-filtering-based recommender systems are among the most useful applications for solving the information overload problem by filtering relevant information for users according to their interests. However, existing systems face some significant limitations, such as data sparsity, low accuracy, cold start, and malicious attacks. To alleviate these issues, trust relationships, which may hold between users or between items, are incorporated into the system; such a system is known as a trust-based recommender system (TBRS). From the user perspective, the motive of a TBRS is to utilize the reliability between users to generate more accurate and trusted recommendations. The aim of this paper is to present a comparative analysis of different trust metrics in the context of the types of trust definition in TBRSs. The study covers twenty-four trust metrics in terms of methodology, trust properties and measurement, validation approaches, and the datasets used for the experiments.
Subject: Mathematics & Computer Science, Other Keywords: Yang-Baxter equation; Euler’s formula; dual numbers; non-associative algebras; UJLA structures; mean inequalities
Online: 9 October 2019 (11:18:43 CEST)
This paper is a continuation of previous papers on unification theories published in AXIOMS. We present results about the (modified) Yang-Baxter equation, Euler’s formula, dual numbers, coalgebra structures, non-associative structures, differential geometry, and (mean) inequalities. We also attempt to relate our discussion to some brain studies and machine learning.
ARTICLE | doi:10.20944/preprints201902.0243.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Convex function; Ostrowski inequality; Holder's inequality; Power mean inequality; Conformable integrals; Midpoint formula
Online: 26 February 2019 (13:10:40 CET)
In the article, by applying the concept of strongly convex functions and a known identity, we establish several Ostrowski-type inequalities involving conformable fractional integrals. As applications, some new error estimations for the midpoint formula are provided as well.
ARTICLE | doi:10.20944/preprints201807.0274.v1
Subject: Life Sciences, Other Keywords: asymmetry; mean skin temperature; non-uniform; outdoor environment; physiological response; skin temperature; solar radiation
Online: 16 July 2018 (10:46:36 CEST)
Depending on human body conditions and environmental conditions, it is sometimes difficult to conduct subject experiments. In such cases, it is effective to use a thermal manikin. There are few studies that investigate the effect of the non-uniform and asymmetric outdoor thermal environment on the mean skin temperature. The purpose of this study is to clarify the influence of the non-uniform and asymmetric thermal radiation of short-wavelength solar radiation in an outdoor environment on the calculation of the mean skin temperature. The skin temperature of the front of the coronal surface, which was facing the sun and where the body received direct short-wavelength solar radiation, and the skin temperature of the rear of the coronal surface, which was in the shadow and did not receive direct short-wavelength solar radiation were respectively measured. The feet, upper arm, forearm, hand and lower leg, which are susceptible to short-wavelength solar radiation in a standing posture, had a noticeable difference in skin temperature between sites in the sun and in shade. The mean skin temperature of sites facing the sun was significantly higher than the mean skin temperature of those in the shade.
ARTICLE | doi:10.20944/preprints202208.0094.v1
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: Linear analysis; Non-linear analysis; Detrended fluctuation analysis; Entropy; Recurrence plot; Root mean square; Fractals
Online: 4 August 2022 (03:32:16 CEST)
This study aimed to apply different complexity-based methods to surface electromyography (EMG) in order to detect neuromuscular changes after realistic warm-up and stretching procedures. Sixteen volunteers conducted two experimental sessions. They were tested before, after a standardized warm-up, and after a stretching exercise (static or neuromuscular nerve gliding technique). Tests included measurements of the knee flexion torque and EMG of the biceps femoris (BF) and semitendinosus (ST) muscles. EMG was analyzed using the root mean square (RMS), sample entropy (SampEn), the percentage of recurrence and determinism from a recurrence quantification analysis (%Rec and %Det), and the α scaling parameter from a detrended fluctuation analysis (DFA). Torque was significantly greater after warm-up as compared to baseline and after stretching. RMS was not affected by the experimental procedure. In contrast, SampEn was significantly greater after warm-up and stretching as compared to baseline values. %Rec was not modified, but %Det for the BF muscle was significantly greater after stretching as compared to baseline. The α scaling parameter was significantly lower after warm-up as compared to baseline for the ST muscle. From the present results, complexity-based methods applied to the EMG provide additional information beyond linear-based methods. They appeared sensitive in detecting EMG complexity increases following warm-up.
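Of the complexity measures listed, sample entropy is the least standardized across toolboxes; a minimal reference implementation follows, with the tolerance r given as a fraction of the standard deviation (a common convention assumed here, not necessarily the paper's exact settings).

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy (SampEn) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def count_pairs(length):
        # n - m templates for both lengths, per the Richman-Moorman convention
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (self-matches excluded)
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b, a = count_pairs(m), count_pairs(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)
```

Higher SampEn indicates a less regular, more complex signal, which is the direction of the warm-up effect reported above.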
ARTICLE | doi:10.20944/preprints202202.0322.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: cumulative entropy; cumulative residual entropy; extropy; gini mean difference; tsallis entropy; weighted cumulative residual entropy
Online: 25 February 2022 (04:44:39 CET)
In this work, we introduce a generalized measure of cumulative residual entropy and study its properties. We show that several existing measures of entropy, such as cumulative residual entropy, weighted cumulative residual entropy and cumulative residual Tsallis entropy, are all special cases of the generalized cumulative residual entropy. We also propose a measure of generalized cumulative entropy, which includes cumulative entropy, weighted cumulative entropy and cumulative Tsallis entropy as special cases. We discuss a generating function approach, using which we derive different entropy measures. Finally, using the newly introduced entropy measures, we establish some relationships between entropy and extropy measures.
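As background, and hedged because the paper's generalized measure is not reproduced here: the classical cumulative residual entropy of a non-negative random variable $X$ with survival function $\bar F$ is

$$\mathcal{E}(X) = -\int_0^\infty \bar F(x)\,\log \bar F(x)\,dx,$$

and the weighted variant inserts a factor of $x$ into the integrand; the proposed generalized measures reportedly recover these, and the Tsallis variants, as special cases.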
ARTICLE | doi:10.20944/preprints202102.0138.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Mean time to failure; Poisson shock; Steady-state availability; Steady-state frequency; Supplementary variable technique.
Online: 4 February 2021 (13:07:59 CET)
This article examines the impact of some system parameters on an industrial system composed of two dissimilar parallel units with one repairman. The active unit may fail due to essential factors like aging or deterioration, or exterior phenomena such as Poisson shocks occurring at various time periods. Whenever the value of a shock is larger than the specified threshold of the active unit, the active unit fails. The article assumes that the repairman may take one of two decisions at the beginning of system operation: either take a vacation if the two units are working normally, or stay in the system to monitor it until the first system failure. If either of the two units fails during the absence of the repairman, the failed unit has to wait until the repairman is called back to work. The values of the shocks are assumed to be i.i.d. with some known distribution. The length of the repairman’s vacation, the repair time and the recall time follow arbitrary distributions. Various reliability measures are calculated by the supplementary variable technique and the theory of Markov vector processes. Finally, numerical computation and graphical analysis are given for a particular case to validate the derived indices.
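A minimal Monte Carlo sketch of the shock-failure ingredient of this model; exponential shock magnitudes and a single unit are illustrative assumptions here, whereas the paper's full system has two dissimilar units, repair and vacations.

```python
import numpy as np

rng = np.random.default_rng(0)

def mttf_under_shocks(rate=1.0, threshold=2.5, scale=1.0, n_runs=100_000):
    """Monte Carlo MTTF for a unit failed by the first Poisson shock whose
    i.i.d. magnitude (exponential with the given scale) exceeds a threshold."""
    p_fatal = np.exp(-threshold / scale)        # P(shock magnitude > threshold)
    # The index of the first fatal shock is geometric(p_fatal); the sum of that
    # many exponential(rate) inter-shock times is gamma-distributed.
    n_shocks = rng.geometric(p_fatal, size=n_runs)
    times = rng.gamma(shape=n_shocks, scale=1.0 / rate)
    return times.mean()

# Analytically MTTF = 1 / (rate * p_fatal) = e^2.5 ~ 12.18 for these values
print(mttf_under_shocks())
```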
ARTICLE | doi:10.20944/preprints202010.0637.v1
Subject: Keywords: Mean Opinion Score (MOS); Quality of Experience (QoE); bandwidth; bandwidth cost; Quality of Service (QoS)
Online: 30 October 2020 (12:55:01 CET)
Quality of Service (QoS) metrics deal with network quantities, e.g. latency and loss, whereas Quality of Experience (QoE) provides a proxy metric for end-user experience. Many papers in the literature have proposed mappings between various QoS metrics and QoE. This paper goes further in providing analysis of QoE versus bandwidth cost. We measure QoE using the widely accepted Mean Opinion Score (MOS) rating. Our results naturally show that increasing bandwidth increases MOS. However, we extend this understanding by providing analysis for internet access scenarios using TCP, varying the number of TCP sources multiplexed together. For these target scenarios, our analysis indicates the MOS increase obtained from further expenditure on bandwidth. We anticipate that this will be of considerable value to commercial organizations responsible for bandwidth purchase and allocation.
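As an illustration of the diminishing-returns relationship the paper quantifies, here is a hypothetical logarithmic bandwidth-to-MOS mapping (a Weber-Fechner-style model; the constants and functional form are assumptions, not the paper's fitted curves).

```python
import numpy as np

def mos(bandwidth_mbps, a=1.2, b=2.0):
    """Illustrative logarithmic bandwidth-to-MOS mapping, clipped to [1, 5].
    a and b are hypothetical calibration constants."""
    return np.clip(1.0 + a * np.log1p(b * bandwidth_mbps), 1.0, 5.0)

# Diminishing returns: each doubling of bandwidth buys a smaller MOS gain,
# which is the trade-off relevant to bandwidth-purchase decisions.
for bw in (1, 2, 4, 8, 16):
    print(bw, round(float(mos(bw)), 2))
```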
ARTICLE | doi:10.20944/preprints201901.0326.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Law of Large Numbers; weak or Kolmogorov mean; Abel's Theorem; mollifiers; summation methods; stable distributions
Online: 31 January 2019 (10:58:52 CET)
The aim of this work is to study generalizations of the notion of the mean. Kolmogorov proposed a generalization based on an improper integral with a decay rate for the tail probabilities. This weak or Kolmogorov mean relates to the Weak Law of Large Numbers in the same way that the ordinary mean relates to the Strong Law. We propose a further generalization, also based on an improper integral, called the doubly weak mean, applicable to heavy-tailed distributions such as the Cauchy distribution and the other symmetric stable distributions. We also consider generalizations arising from Abel-Feynman type mollifiers that damp the behavior at infinity, and alternative formulations of the mean in terms of the cumulative distribution and the characteristic function.
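One plausible reading of the improper-integral construction, offered as illustration rather than the paper's exact definition: the ordinary mean requires $\int |x|\,dF(x) < \infty$, whereas the symmetric (principal-value) limit

$$\mu_w = \lim_{T\to\infty}\int_{-T}^{T} x\,dF(x)$$

exists for every symmetric stable distribution; for the Cauchy distribution it equals the center of symmetry even though the ordinary mean is undefined.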
ARTICLE | doi:10.20944/preprints202112.0369.v1
Subject: Physical Sciences, Nuclear & High Energy Physics Keywords: inhomogeneous phases; chiral imbalance; isospin imbalance; 2+1 dimensional field theories; Gross-Neveu model; mean-field
Online: 22 December 2021 (13:15:54 CET)
We study the μ-μ45-T phase diagram of the 2+1-dimensional Gross-Neveu model, where μ denotes the ordinary chemical potential, μ45 the chiral chemical potential and T the temperature. We use the mean-field approximation and two different lattice regularizations with naive chiral fermions. An inhomogeneous phase at finite lattice spacing is found for one of the two regularizations. Our results suggest that there is no inhomogeneous phase in the continuum limit. We show that a chiral chemical potential is equivalent to an isospin chemical potential. Thus, all results presented in this work can also be interpreted in the context of isospin imbalance.
ARTICLE | doi:10.20944/preprints202109.0425.v1
Subject: Mathematics & Computer Science, Other Keywords: Covid-19; fractal analysis; epidemic curves; box-counting dimension; reproduction rate; global radiation; daily mean temperature
Online: 24 September 2021 (11:19:52 CEST)
The present paper proposes a fractal analysis of the Covid-19 dynamics in 45 European countries. We introduce a new idea of using the box-counting dimension of the epidemiologic curves as a means of classifying the Covid-19 pandemic in the countries taken into consideration. The classification can be a useful tool in deciding upon the quality and accuracy of the available data. We also investigated the reproduction rate, which proves to have significant fractal features, thus enabling another perspective on this epidemic characteristic. Moreover, we studied the correlation between two meteorological parameters (global radiation and daily mean temperature) and two Covid-19 indicators (daily new cases and reproduction rate). The fractal dimension differences between the analysed time series graphs could represent a preliminary analysis criterion, increasing research efficiency. Daily global radiation was found to be more strongly linked to new Covid-19 cases than air temperature (correlation coefficient −0.386 versus −0.318), and consequently it is recommended as the first-choice meteorological variable for prediction models.
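A minimal box-counting sketch for the graph of a time series, with both axes rescaled to the unit square; this is the generic estimator, not necessarily the paper's exact pipeline.

```python
import numpy as np

def box_counting_dimension(series, eps_list=(1/4, 1/8, 1/16, 1/32, 1/64)):
    """Estimate the box-counting dimension of the graph of a (non-constant)
    time series; boxes are counted at the sampled points only."""
    series = np.asarray(series, dtype=float)
    t = np.linspace(0.0, 1.0, len(series))
    y = (series - np.min(series)) / np.ptp(series)
    counts = []
    for eps in eps_list:
        boxes = {(int(ti / eps), int(yi / eps)) for ti, yi in zip(t, y)}
        counts.append(len(boxes))
    # Slope of log N(eps) against log(1/eps) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(eps_list)), np.log(counts), 1)
    return slope
```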
ARTICLE | doi:10.20944/preprints202109.0407.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: cloud computing; cloud resource management; task scheduling; ecosystem; geometric mean; symbiotic organisms search algorithm; convergence speed
Online: 23 September 2021 (12:31:06 CEST)
The symbiotic organisms search (SOS) algorithm, based on the interactions of symbiotic organisms, is a relatively recent bio-inspired algorithm in the swarm intelligence field for solving numerical optimization problems. It optimizes problems by simulating the symbiotic relationships among the distinct species in an ecosystem. Here, a modified SOS algorithm is developed to solve independent task scheduling problems: this paper proposes a modified symbiotic organisms search based scheduling algorithm for the efficient mapping of heterogeneous tasks to cloud resources of different capacities. The significant contribution of this technique is the simplified representation of the algorithm's mutualism process, which uses equity as a measure of the relationship characteristics or efficiency of the species in the current ecosystem that move to the next generation. These relational characteristics are achieved by replacing the original mutual vector, which uses an arithmetic mean to measure the mutual characteristics, with a geometric mean that enhances the survival advantage of two distinct species. The modified symbiotic organisms search algorithm (G_SOS) aims to minimize the task execution time (makespan), response time, degree of imbalance and cost, and to improve the convergence speed towards an optimal solution in an IaaS cloud. The performance of the proposed technique was evaluated using the CloudSim toolkit simulator, and the solutions are found to be better than those of the standard SOS technique and PSO.
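A sketch of the mutualism phase with the geometric-mean mutual vector described above; standard SOS uses the arithmetic mean (x_i + x_j)/2, and the benefit factors and element-wise absolute value below are conventional choices assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutualism_step(x_i, x_j, x_best, bf1=1, bf2=2):
    """Mutualism phase of SOS with a geometric-mean mutual vector.
    x_i, x_j: positions of two interacting organisms; x_best: best solution.
    bf1, bf2 are the usual benefit factors drawn from {1, 2}."""
    mutual = np.sqrt(np.abs(x_i * x_j))       # geometric mean, element-wise
    x_i_new = x_i + rng.random(x_i.shape) * (x_best - mutual * bf1)
    x_j_new = x_j + rng.random(x_j.shape) * (x_best - mutual * bf2)
    return x_i_new, x_j_new

# Example: two candidate task-to-VM assignment vectors in a continuous encoding
xi, xj, xb = rng.random(5), rng.random(5), rng.random(5)
print(mutualism_step(xi, xj, xb))
```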
ARTICLE | doi:10.20944/preprints202105.0543.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Blind Source Separation (BSS); Minimum Mean Square Error (MMSE); convolutive mixture; source prior; generalized Gaussian distribution
Online: 24 May 2021 (08:50:37 CEST)
This paper proposes a novel, efficient multistage algorithm to extract source speech signals from a noisy convolutive mixture. The proposed approach comprises two stages: Blind Source Separation (BSS) and denoising. A hybrid source prior model separates the source signals from the noisy reverberant mixture in the BSS stage; we model the low- and high-energy components by generalized multivariate Gaussian and super-Gaussian models, respectively. We use Minimum Mean Square Error (MMSE) filtering to reduce noise in the noisy convolutive mixture signal in the denoising stage. Furthermore, two arrangements of the proposed stages are investigated for performance gain. In the first, the speech signal is separated from the observed noisy convolutive mixture in the BSS stage, followed by suppression of noise in the estimated source signals in the denoising module. In the second, the noise is reduced using the MMSE filtering technique on the received noisy convolutive mixture at the denoising stage, followed by separation of the source signals from the denoised reverberant mixture at the BSS stage. We evaluate the performance of the proposed scheme in terms of the signal-to-distortion ratio (SDR) with respect to other well-known multistage BSS methods. The results show the superior performance of the proposed algorithm over the other state-of-the-art methods.
Subject: Physical Sciences, Other Keywords: econophysics; market dynamics; market networks; price variation; Monte Carlo simulations; mean-field theory; statistical physics models
Online: 19 January 2020 (14:26:32 CET)
We study in this paper the time evolution of stock markets using a statistical physics approach. We consider an ensemble of agents who sell or buy a good according to several factors acting on them: the majority of their neighbors, the market ambiance, the variation of the price, and a specific measure applied at a given time. Each agent is represented by a spin having a number of discrete states q, or continuous states, describing the tendency of the agent to buy or sell. The market ambiance is represented by a parameter T which plays the role of the temperature in physics: low T corresponds to a calm market, high T to a turbulent one. We show that there is a critical value of T, say Tc, where strong fluctuations between individual states lead to a disordered situation in which there is no majority: the numbers of sellers and buyers are equal, namely the market clearing. The specific measure, by the government or by economic organisms, is parameterized by $H$, applied to the market at time t1 and removed at time t2. We have used Monte Carlo simulations to study the time evolution of the price as a function of those parameters. In particular, we show that the price strongly fluctuates near Tc and that there exists a critical value Hc above which the boosting effect remains after H is removed. Our model replicates stylized facts in finance: time-independent price variation, volatility clustering (time-dependent dynamics) and the persistent effect of a temporary shock. The second part of the paper deals with the price variation using a time-dependent mean-field theory. By supposing that the sellers and the buyers belong to two distinct communities with different characteristics in both intra-group and inter-group interactions, we find price oscillations with time. Results are shown and discussed.
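A compact Metropolis sketch of the agent model in its simplest two-state (Ising-like) reduction, with the measure H switched on during [t1, t2); the paper's q-state and continuous versions are richer than this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(L=32, T=1.0, H=0.5, t1=100, t2=200, steps=400):
    """Spin +1 = buyer, -1 = seller; the neighbours' majority acts as a local
    field, T is the market ambiance, H a temporary measure during [t1, t2).
    Returns the magnetization (excess demand) per sweep."""
    s = rng.choice([-1, 1], size=(L, L))
    m = []
    for t in range(steps):
        h_ext = H if t1 <= t < t2 else 0.0
        for _ in range(L * L):                 # one Metropolis sweep
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2.0 * s[i, j] * (nb + h_ext)  # energy cost of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        m.append(s.mean())
    return np.array(m)
```

The returned magnetization m(t), i.e. the excess of buyers over sellers, can then drive the price, e.g. via a relative price change proportional to m(t).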
Subject: Engineering, Electrical & Electronic Engineering Keywords: circular membrane mems devices; electrostatic actuator; boundary non-linear second-order differential problems; singularities; mean curvature
Online: 8 November 2019 (10:33:32 CET)
In the framework of 2D circular membrane Micro-Electro-Mechanical Systems (MEMS), a new non-linear second-order differential model with a singularity in the steady-state case is presented in this paper. In particular, starting from the fact that the electric field magnitude is locally proportional to the curvature of the membrane, the problem is formalized in terms of the mean curvature. Then, a result on the existence of at least one solution is achieved. Finally, two different approaches prove that the uniqueness of the solutions is not ensured.
ARTICLE | doi:10.20944/preprints201901.0002.v1
Subject: Engineering, Control & Systems Engineering Keywords: final control element; electro-pneumatic transducer; controller effort; control quality factors; wear; mean-time-between-failures
Online: 3 January 2019 (08:45:42 CET)
For many years, programmable positioners have been widely applied in the structures of modern electro-pneumatic final control elements. The positioner consists of an electro-pneumatic transducer, an embedded controller and measuring instrumentation. Electro-pneumatic transducers used in positioners are characterized by a relatively short mean time-to-failure. A practical and economical method for a reasonable prolongation of this time is proposed in this paper. It is principally based on assessing and minimizing the effort of the embedded controller. For this purpose, the control value variability and the mean-time and cumulative controller's effort were introduced. Diminishing the controller effort has significant practical repercussions, because it reduces the intensity of mechanical wear of the final control element components. On the other hand, the reduction of the cumulative effort is important in the context of process economy, due to the limitation of the consumption of energy of the compressed air supplying the final control element. Therefore, the minimization of the introduced effort factors has an impact on increasing the functional safety and economics of the controlled process. As a result of the performed simulations, recommendations regarding the selection of the structure and tuning of the positioner controller were elaborated. The simulations were performed in the Matlab-Simulink environment with the use of a liquid level control system in which a phenomenological model of a final control element was deployed. It has been proven that, under appropriate conditions, it is possible to significantly extend the lifetime of the final control element and simultaneously enhance the control quality factors.
ARTICLE | doi:10.20944/preprints201907.0185.v1
Subject: Earth Sciences, Atmospheric Science Keywords: summer-mean Arctic circulation patterns; extra-tropical synoptic cyclones; self-organizing maps (SOMs); cyclone detection and tracking
Online: 15 July 2019 (15:24:28 CEST)
The contribution of extra-tropical synoptic cyclones to the formation of summer-mean atmospheric circulation patterns in the Arctic is investigated by clustering the dominant Arctic circulation patterns with self-organizing maps (SOMs) using the daily mean sea level pressure (MSLP) in the Arctic domain (≥ 60°N). Three SOM patterns are identified: one with prevalent low pressure anomalies in the Arctic Circle (SOM1) and two opposite dipoles with primary high pressure anomalies covering the Arctic Ocean (SOM2 and SOM3). The time series of summertime occurrence frequencies demonstrate the largest inter-annual variation in the SOM1, a slight decreasing trend in the SOM2, and an abrupt upswing after 2007 in the SOM3. The relevant analyses with the produced cyclone track data confirm this vital contribution. Arctic cyclone activity is enhanced in the SOM1 because the meridional temperature gradient increases over the land–Arctic Ocean boundaries co-located with the major extra-tropical cyclone pathways. The composite daily synoptic evolutions for each SOM reveal that the persistence of all three SOMs is less than 5 days on average. These evolutionary short-term weather patterns have substantial variability at inter-annual and longer timescales. Therefore, the synoptic-scale activity is central to forming the seasonal-mean climate of the Arctic.
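A sketch of the clustering step using the minisom package (an assumed tool; the paper does not state its software), with a 1x3 map so that each day is assigned to one of three patterns.

```python
import numpy as np
from minisom import MiniSom  # assumed tooling, not stated in the paper

# daily_mslp: array of shape (n_days, n_gridpoints); each row is a flattened
# daily MSLP anomaly field for the domain north of 60N (hypothetical input)
daily_mslp = np.load("daily_mslp_anomalies.npy")      # placeholder file name

som = MiniSom(1, 3, daily_mslp.shape[1], sigma=0.5, learning_rate=0.5)
som.random_weights_init(daily_mslp)
som.train_random(daily_mslp, num_iteration=10_000)

# Assign each day to SOM1..SOM3 and compute occurrence frequencies
labels = np.array([som.winner(day)[1] for day in daily_mslp])
freq = np.bincount(labels, minlength=3) / len(labels)
print(dict(zip(["SOM1", "SOM2", "SOM3"], freq.round(3))))
```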
ARTICLE | doi:10.20944/preprints201806.0464.v1
Subject: Engineering, Mechanical Engineering Keywords: harmonic identification; adaptive linear neural network; least mean M-estimate; electro-hydraulic servo shaking table; harmonic distortion
Online: 28 June 2018 (10:55:10 CEST)
Since the electro-hydraulic servo shaking table contains many nonlinear elements, such as dead zone, friction and backlash, its acceleration response contains higher harmonics which result in acceleration harmonic distortion when the electro-hydraulic system is excited by a sinusoidal signal. To suppress the harmonic distortion and precisely identify the harmonics, a combination of the adaptive linear neural network and the least mean M-estimate (ADALINE-LMM) is proposed to identify the amplitude and phase of each harmonic component. Namely, Hampel’s three-part M-estimator is applied to provide thresholds for detecting and suppressing the error signal. Harmonic generators are used by this harmonic identification scheme to create input vectors, and the value of the identified acceleration signal is subtracted from the true value of the system acceleration response to construct the criterion function. The weight vector of the ADALINE is updated iteratively by the LMM algorithm, and the amplitude and phase of each harmonic, and even the harmonic components themselves, can be computed directly online. Simulation and experiment were performed to validate the performance of the proposed algorithm. According to the experimental results, the above method of harmonic identification possesses great real-time performance, and it has not only good convergence performance but also high identification precision.
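A sketch of the ADALINE-LMM scheme as described: sin/cos harmonic generators form the input vector, and the weight update passes the error through Hampel's three-part M-estimator (the threshold values below are illustrative, not the paper's tuned settings).

```python
import numpy as np

def hampel_psi(e, t1=1.0, t2=2.0, t3=4.0):
    """Hampel's three-part influence function: pass small errors, clamp
    moderate ones, taper large ones to zero (suppressing outliers)."""
    a = abs(e)
    if a <= t1:
        return e
    if a <= t2:
        return t1 * np.sign(e)
    if a <= t3:
        return t1 * (t3 - a) / (t3 - t2) * np.sign(e)
    return 0.0

def adaline_lmm(y, f0, fs, n_harm=5, mu=0.01):
    """Identify amplitude and phase of the first n_harm harmonics of f0 in y.
    Input vector: sin/cos harmonic generators; update: M-estimated LMS."""
    t = np.arange(len(y)) / fs
    h = np.arange(1, n_harm + 1)
    w = np.zeros(2 * n_harm)
    for k, yk in enumerate(y):
        x = np.concatenate([np.sin(2 * np.pi * f0 * h * t[k]),
                            np.cos(2 * np.pi * f0 * h * t[k])])
        e = yk - w @ x                      # identification error
        w += mu * hampel_psi(e) * x         # robust weight update
    amp = np.hypot(w[:n_harm], w[n_harm:])  # per-harmonic amplitude
    phase = np.arctan2(w[n_harm:], w[:n_harm])
    return amp, phase
```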
ARTICLE | doi:10.20944/preprints202209.0083.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: RF energy harvesting; wireless power transfer; path-loss; shadowing; multi-path fading; unmodulated carrier; AWGN; mean; variance; correlation
Online: 6 September 2022 (08:39:55 CEST)
In the past few years, the possibility to transfer power wirelessly has experienced growing interest from the research community. Since the wireless channel is subject to a large number of random phenomena, a crucial aspect is the statistical characterization of the energy that can be harvested by a given device. For this characterization to be reliable, a powerful model of the propagation channel is necessary. The recently proposed Generalized-K model has proven to be very useful, as it encompasses the effects of path-loss, shadowing and fast fading for a broad set of wireless scenarios, and it is analytically tractable. Accordingly, the purpose of this paper is to characterize, from a statistical point of view, the energy harvested by a static device from an unmodulated carrier signal generated by a dedicated source, assuming that the wireless channel obeys the Generalized-K propagation model. Specifically, using simulation-validated analytical methods, this paper provides exact closed-form expressions for the average and variance of the energy harvested over an arbitrary time period. The derived formulation can be used to determine a power transfer plan that allows multiple or even massive numbers of low-power devices to operate continuously, as expected from future network scenarios such as IoT or 5G/6G.
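A simulation-style sketch of the statistical characterization, using the standard representation of Generalized-K fading power as the product of two independent gamma variates; parameter values are illustrative, and the paper derives exact closed forms rather than Monte Carlo estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def harvested_energy_stats(p_tx=1.0, eta=0.6, T=1.0, m=2.0, k=3.0,
                           omega=1e-3, n=1_000_000):
    """Monte Carlo mean/variance of the energy harvested over time T under
    Generalized-K fading: channel power = gamma (multipath, shape m, unit
    mean) times gamma (shadowing, shape k, mean = omega, the mean path gain)."""
    g_fast = rng.gamma(m, 1.0 / m, n)        # multipath component
    g_shadow = rng.gamma(k, omega / k, n)    # shadowing incl. path loss
    energy = eta * p_tx * g_fast * g_shadow * T
    return energy.mean(), energy.var()

mean_e, var_e = harvested_energy_stats()
print(mean_e, var_e)  # mean should approach eta * p_tx * omega * T
```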
Subject: Physical Sciences, Acoustics Keywords: Fibonacci numbers; the Golden Ratio; the Golden Mean; dimensionality; quasiparticles; anyons; non-Abelian; Fibonacci; excitations; Dimensional Gate Operators
Online: 22 February 2021 (15:21:52 CET)
The importance and near ubiquity of the Golden Ratio in disciplines like chemistry and biology is well known, but only recently has it come to light in areas pertaining to the quantum domain. By using a modified tool-kit of hyper-complex numbers (known as Dimensional Gate Operators) and numerical analysis, we uncover a connection between hyper-dimensional objects, the Fibonacci sequence, and quasiparticles and excitations. The results show that dimensionality increases in step with the Fibonacci sequence.
ARTICLE | doi:10.20944/preprints201803.0158.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: ANFIS; artificial neural network; brushless DC motor; FPA; maximum power point tracking; photovoltaic system; root mean square error
Online: 19 March 2018 (11:04:32 CET)
In this research paper, a hybrid Artificial Neural Network (ANN)-Fuzzy Logic Control (FLC) tuned Flower Pollination Algorithm (FPA) is employed as a Maximum Power Point Tracker (MPPT) to reduce the root mean square error (RMSE) of photovoltaic (PV) modeling. Moreover, Gaussian membership functions have been considered for the fuzzy controller design. This paper describes a Luo converter fed brushless DC motor (BLDC) directed PV water pump application. Experimental responses certify the effectiveness of the suggested motor-pump system under diverse operating states. The Luo converter is a newly developed dc-dc converter with high power density, better voltage gain transfer and a superior output waveform, and it is able to track optimal power from PV modules. For BLDC speed control, no extra circuitry or phase current sensors are required in this scheme. The novelty of this attempt is that an adaptive neuro-fuzzy inference system (ANFIS)-FPA operated BLDC directed PV pump with an advanced Luo converter has not been previously reported.
ARTICLE | doi:10.20944/preprints202105.0261.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: forecasting; forecast evaluation; forecast bias; mean bias; median bias; MPE; AvgRel-metrics; AvgRelAME; AvgRelAMdE; RelAME; RelMdE; AvgRelME; AvgRelMdE; OPc
Online: 12 May 2021 (09:48:29 CEST)
Measuring bias is important as it helps identify flaws in quantitative forecasting methods or judgmental forecasts. It can, therefore, potentially help improve forecasts. Despite this, bias tends to be under-represented in the literature: many studies focus solely on measuring accuracy. Methods for assessing bias in single series are relatively well-known and well-researched, but for datasets containing thousands of observations for multiple series, the methodology for measuring and reporting bias is less obvious. We compare alternative approaches against a number of criteria when rolling-origin point forecasts are available for different forecasting methods and for multiple horizons over multiple series. We focus on relatively simple, yet interpretable and easy-to-implement metrics and visualization tools that are likely to be applicable in practice. To study the statistical properties of alternative measures we use theoretical concepts and simulation experiments based on artificial data with predetermined features. We describe the difference between mean and median bias, describe the connection between metrics for accuracy and bias, provide suitable bias measures depending on the loss function used to optimise forecasts, and suggest which measures for accuracy should be used to accompany bias indicators. We propose several new measures and provide our recommendations on how to evaluate forecast bias across multiple series.
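A sketch of the basic quantities, with one plausible reading of the AvgRel-style aggregation; the exact definitions of the AvgRel metrics are the paper's, and the geometric-mean form below is an assumption made for illustration.

```python
import numpy as np

def mean_bias(actuals, forecasts):
    """Mean error (ME): positive = under-forecasting, with e = actual - forecast."""
    return float(np.mean(np.asarray(actuals) - np.asarray(forecasts)))

def median_bias(actuals, forecasts):
    """Median error (MdE): the bias notion matching absolute-loss-optimal forecasts."""
    return float(np.median(np.asarray(actuals) - np.asarray(forecasts)))

def avg_rel_ame(errors_method, errors_benchmark):
    """One plausible reading of AvgRelAME: geometric mean, across series, of
    |ME_method| / |ME_benchmark| (assumes nonzero benchmark bias per series)."""
    ratios = [abs(np.mean(e)) / abs(np.mean(b))
              for e, b in zip(errors_method, errors_benchmark)]
    return float(np.exp(np.mean(np.log(ratios))))
```

The mean/median distinction matters because a method optimized under absolute loss can look biased under ME while being unbiased under MdE, which is one connection between loss functions and bias measures the paper discusses.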
ARTICLE | doi:10.20944/preprints201808.0507.v1
Subject: Earth Sciences, Environmental Sciences Keywords: hydrologic forecast verification; mean squared forecast error; methods of forecast error estimation; comparison of hydrologic forecasting methods; forecast applicability assessment
Online: 29 August 2018 (16:11:24 CEST)
This paper presents methods of estimating the mean square error of hydrological forecasts, allowing for the assessment of their practical applicability. Depending upon the amount and composition of the available hydrometeorological data, an appropriate method for forecast error estimation is chosen. A system of statistical tests for the comparison of different forecasting methods for the same hydrologic characteristic with the same lead time is presented. These tests allow for choosing an optimal and most accurate forecasting method. Hydrological forecasting method efficiency is estimated by comparing the forecast error with the error of a climatology or inertial (persistence) forecast, using the presented tests.
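The efficiency comparison against a reference forecast reduces to a skill score; a minimal sketch with a persistence reference follows (the specific test statistics are the paper's).

```python
import numpy as np

def mse(obs, fc):
    obs, fc = np.asarray(obs, float), np.asarray(fc, float)
    return np.mean((obs - fc) ** 2)

def skill_vs_reference(obs, fc, ref):
    """Skill score S = 1 - MSE_forecast / MSE_reference; S > 0 means the
    forecast beats the reference (climatology or persistence)."""
    return 1.0 - mse(obs, fc) / mse(obs, ref)

# Persistence reference: the forecast for time t is the observation at t - lead
obs = np.array([3.1, 2.8, 3.5, 4.0, 3.7])
fc = np.array([3.0, 2.9, 3.4, 3.8, 3.9])
persistence = np.roll(obs, 1)[1:]          # shifted observations
print(skill_vs_reference(obs[1:], fc[1:], persistence))
```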
Subject: Mathematics & Computer Science, Other Keywords: Gaussian noise; Speckle noise; Mean square error (MSE); denoising filters; Maximum difference value (MD); Peak signal to noise ratio (PSNR)
Online: 4 June 2020 (05:52:55 CEST)
Noise reduction in medical images is a perplexing undertaking for researchers in digital image processing. Noise generates critical disturbances and affects the quality of medical images, particularly ultrasound images in the field of biomedical imaging. An image is normally considered a gathering of data, and the existence of noise degrades the image quality. It is vital to remove noise and restore the original image in order to obtain maximum information from images. Medical images are degraded by noise during transmission and acquisition. Noise reduces image contrast and resolution, thereby decreasing the diagnostic value of the medical image. This paper mainly focuses on Gaussian noise, pepper noise, uniform noise, and salt and speckle noise. Different filtering techniques can be adopted for noise reduction to improve the visual quality as well as the reorganization of images. Here, four types of noise have been considered and applied to medical images. Furthermore, numerous filtering methods, such as the Gaussian, median, mean and Wiener filters, are applied for noise reduction, and filter performance is evaluated through parameters such as the mean square error (MSE), peak signal to noise ratio (PSNR), average difference value (AD) and maximum difference value (MD), to diminish the noise without corrupting the medical image data.
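The four evaluation parameters named above have standard textbook definitions; a minimal sketch:

```python
import numpy as np

def quality_metrics(original, denoised, max_val=255.0):
    """MSE, PSNR, average difference (AD) and maximum difference (MD)
    between an original and a denoised image."""
    o = np.asarray(original, dtype=float)
    d = np.asarray(denoised, dtype=float)
    mse = np.mean((o - d) ** 2)
    psnr = np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
    ad = np.mean(o - d)              # signed: reveals systematic brightening
    md = np.max(np.abs(o - d))       # worst-case pixel deviation
    return {"MSE": mse, "PSNR": psnr, "AD": ad, "MD": md}
```

Lower MSE/MD and higher PSNR indicate better restoration; AD near zero indicates no systematic intensity shift.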
ARTICLE | doi:10.20944/preprints202108.0161.v1
Subject: Physical Sciences, Condensed Matter Physics Keywords: long-range memory; 1/f noise; absolute value estimator; anomalous diffusion; ARFIMA; first-passage times; fractional Lévy stable motion; Higuchi's method; mean squared displacement; multiplicative point process
Online: 6 August 2021 (11:22:25 CEST)
In the face of the upcoming 30th anniversary of econophysics, we review our contributions and other related works on the modeling of the long-range memory phenomenon in physical, economic and other social complex systems. Our group has shown that the long-range memory phenomenon can be reproduced using various Markov processes, such as point processes, stochastic differential equations and agent-based models, reproduced well enough to match other statistical properties of the financial markets, such as return and trading activity distributions and first-passage time distributions. This research has led us to question whether the observed long-range memory is a result of an actual long-range memory process or just a consequence of the non-linearity of Markov processes. As our most recent result, we discuss the long-range memory of the order flow data in the financial markets and other social systems from the perspective of fractional Lévy stable motion. We test widely used long-range memory estimators on discrete fractional Lévy stable motion represented by ARFIMA sample series. Our newly obtained results seem to indicate that new estimators of self-similarity and long-range memory have to be developed for analyzing systems with non-Gaussian distributions.
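Of the estimators named in the keywords, Higuchi's method is among the simplest to state; a standard implementation sketch (not the group's exact code) follows.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi's fractal dimension of a time series: average normalized curve
    length L(k) over coarse-graining delays k, then fit L(k) ~ k^(-D)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                    # k offset sub-series per delay
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        lk.append(np.mean(lengths))
    # Slope of log L(k) against log(1/k) estimates the dimension D
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope
```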
ARTICLE | doi:10.20944/preprints201801.0074.v1
Subject: Mathematics & Computer Science, Analysis Keywords: beta function; extended beta function; hypergeometric function; extended hypergeometric function; confluent hypergeometric function; extended confluent hypergeometric function; Mellin transform; beta distribution; mean; variance; transformation formula; summation formula
Online: 9 January 2018 (07:08:48 CET)
The main objective of this paper is to introduce a further extension of the extended (p, q)-beta function by considering two Mittag-Leffler functions in the kernel. We investigate various properties of this newly defined beta function, such as integral representations, summation formulas and the Mellin transform. We define the extended beta distribution and obtain its mean, variance and moment generating function with the help of the extended beta function. Also, we establish extensions of the extended (p, q)-hypergeometric and (p, q)-confluent hypergeometric functions by using the extension of the beta function. Various properties of the newly defined extended hypergeometric and confluent hypergeometric functions, such as integral representations, Mellin transforms, differentiation formulas, and transformation and summation formulas, are investigated.
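For orientation, the extended (p, q)-beta function commonly used in this literature is

$$B(x, y; p, q) = \int_0^1 t^{x-1}(1-t)^{y-1}\exp\!\left(-\frac{p}{t}-\frac{q}{1-t}\right)dt, \qquad \Re(p) > 0,\ \Re(q) > 0,$$

which reduces to the classical beta function as p, q → 0; the paper's further extension replaces the exponential kernel with two Mittag-Leffler functions (that kernel is not reproduced here).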
ARTICLE | doi:10.20944/preprints202204.0314.v1
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: Emergency Use Authorization; endemic; false omission; false omission rate; home testing; point-of-care testing (POCT); positive predictive value geometric mean-squared; prevalence boundary; recursive protocol; tier; visual logistics
Online: 30 April 2022 (08:42:08 CEST)
Goals: To use visual logistics for interpreting COVID-19 molecular and rapid antigen test (RAgT) performance, determine prevalence boundaries where risk exceeds expectations, and evaluate benefits of recursive testing along home, community, and emergency spatial care paths. Methods: Mathematica/open access software helped graph relationships, compare performance patterns, and perform recursive computations. Results: Tiered sensitivity/specificity comprise: T1) 90%/95%; T2) 95%/97.5%; and T3) 100%/≥99%, respectively. In emergency medicine, median RAgT performance peaks at 13.2% prevalence, then falls below T1, generating risky prevalence boundaries. RAgTs in pediatric ERs/EDs parallel this pattern with asymptomatic worse than symptomatic performance. In communities, RAgTs display large uncertainty with median prevalence boundary of 14.8% for 1/20 missed diagnoses, and at prevalence >33.3-36.9% risk 10% false omissions for symptomatic subjects. Recursive testing improves home RAgT performance. Home molecular tests elevate performance above T1, but lack adequate validation. Conclusions: Widespread RAgT availability encourages self-testing. Asymptomatic RAgT and PCR-based saliva testing present the highest chance of missed diagnoses. Home testing twice, once just before mingling, and molecular-based self-testing help avoid false omissions. Community and ER/ED RAgTs can identify contagiousness in low prevalence (<22%). Real-world trials of performance, cost-effectiveness, and public health impact could identify home molecular diagnostics as the optimal diagnostic portal.
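The prevalence-boundary idea follows from Bayes' rule; a sketch using the Tier 1 performance stated above (90% sensitivity, 95% specificity) and a 1/20 false-omission ceiling:

```python
def false_omission_rate(prevalence, sensitivity, specificity):
    """FOR = P(disease | negative test), from Bayes' rule."""
    fn = (1.0 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1.0 - prevalence)          # true negatives
    return fn / (fn + tn)

def prevalence_boundary(sensitivity, specificity, max_for=0.05):
    """Smallest prevalence at which FOR exceeds max_for (e.g. 1/20 missed
    diagnoses); FOR is increasing in prevalence, so bisection applies."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if false_omission_rate(mid, sensitivity, specificity) < max_for:
            lo = mid
        else:
            hi = mid
    return hi

print(prevalence_boundary(0.90, 0.95, max_for=1 / 20))  # ~0.333 for Tier 1
```

Above the returned prevalence, a negative result from a test with this performance no longer keeps the missed-diagnosis risk within the chosen ceiling, which is the sense in which risk exceeds expectations in the abstract.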