ARTICLE | doi:10.20944/preprints202310.0444.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Monocular Vision; Binocular Vision; Forward Projection; Inverse Projection; Displacement Projection
Online: 8 October 2023 (10:19:31 CEST)
A human eye has about 120 million rod cells and 6 million cone cells. This enormous number of light-sensing cells continuously produces a huge quantity of visual signals that flow into the brain for processing, yet the real-time processing of these signals causes the brain no fatigue. This suggests that human-like vision does not rely on complicated formulas to compute depth, displacement, colour, and so on. On the other hand, a human eye resembles a PTZ camera, where PTZ stands for pan, tilt and zoom. In computer vision, each set of PTZ parameters (i.e., coefficients of pan, tilt and zoom) requires a dedicated calibration to determine the camera's projection matrix. Since a human eye can produce an unlimited range of PTZ configurations, it is unlikely that the brain stores a separate calibration matrix for each of them. It is therefore an interesting question whether simpler formulas for computing depth and displacement exist, formulas that are also calibration-friendly (i.e., easy to calibrate on the fly or on the go). In this paper, we disclose an important discovery of a new solution to 3D projection in a human-like binocular vision system. The purpose of 3D projection in binocular vision is to perform forward and inverse transformations (or mappings) between coordinates in 2D digital images and coordinates in a 3D analogue scene. The formulas underlying the new solution are accurate, easily computable, easily tunable (i.e., calibratable on the fly or on the go) and could readily be implemented by a neural system (i.e., a network of neurons). Experimental results have validated the discovered formulas.
ARTICLE | doi:10.20944/preprints202306.0433.v1
Subject: Environmental And Earth Sciences, Other Keywords: Hengduan mountains; rainfall erosivity; distribution; projection
Online: 6 June 2023 (09:51:42 CEST)
The spatiotemporal variations of rainfall erosivity over the past 30 years in the Hengduan Mountains, characterized by rugged terrain and high potential soil erosion risks, were examined. The changing trends of rainfall erosivity for 2025-2040 were also investigated under the comprehensive scenario of moderate socio-economic development combined with medium-low radiative forcing (SSP2-4.5), using four global climate models (GCMs) based on CMIP6. The results indicated that: (1) The annual distribution of rainfall erosivity in the Hengduan Mountains exhibited significant seasonal variations, with erosivity ordered summer > autumn > spring > winter on a seasonal scale. (2) Over the past 30 years, there has been a slight decrease in annual precipitation and a slight increase in rainfall erosivity, with periodic extreme values occurring every 6-8 years. (3) Rainfall erosivity showed a decreasing gradient from southeast to northwest in its spatial distribution. There was a significant positive correlation between rainfall erosivity and precipitation, and a significant negative correlation with elevation in the vertical direction. Moreover, rainfall erosivity showed an increasing trend in the northeastern part of the Hengduan Mountains and a decreasing trend in the southern region. (4) Under the joint driving forces of increased precipitation and erosive rainfall events, rainfall erosivity is expected to increase significantly in the future, posing a more severe risk of soil erosion in the Hengduan Mountains.
ARTICLE | doi:10.20944/preprints202004.0504.v1
Subject: Biology And Life Sciences, Virology Keywords: COVID-19; Pakistan; exponential growth; projection
Online: 29 April 2020 (10:38:30 CEST)
The observed data of COVID-19 progression in Pakistan for the first 50 days after the first patient was reported show quite an unusual trend, in opposition to the clear exponential spread pattern typical of an infectious disease. The positive-case data for the first 50 days of disease progression were collected from the COVID-19 dashboard of Pakistan and analyzed to examine the graphical trend and to forecast the behaviour of disease progression for the next 30 days. Mathematical equations for exponential growth are used to analyse the disease progression, and different possible trajectories are plotted to understand the approximate trend pattern. The projections estimate 20k-456k positive cases within 80 days of disease spread in Pakistan. Although the disease progression pattern is not perfectly exponential, it still threatens a major fraction of the susceptible population and demands effective strategic planning and control.
ARTICLE | doi:10.20944/preprints202207.0403.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: Climate Projection; Downscaling; Drought; Runoff; Snow; Wildfire
Online: 26 July 2022 (10:42:21 CEST)
Snowpack loss in midlatitude mountains is ubiquitously projected by Earth system models, though the magnitudes, persistence and time horizons of decline vary. Using daily downscaled hydroclimate and snow projections we examine changes in snow seasonality across the U.S. Pacific Southwest region during a simulated severe 20-year dry spell in the 21st century (2051–2070) developed as part of the 4th California Climate Change Assessment to provide a "stress test" for water resources. Across California’s mountains, substantial declines (30–100% loss) in median peak annual snow water equivalent accompany changes in snow seasonality throughout the region compared to the historic period. We find 80% of historic seasonal snowpacks transition to ephemeral conditions. Subsetting empirical-statistical wildfire projections for California by snow seasonality transition regions indicates a two-to-four-fold increase in burned area, consistent with recent observations of high elevation wildfires following extended drought conditions. By analyzing six of the major California snow-fed river systems we demonstrate snowpack reductions and seasonality transitions result in concomitant declines in annual runoff (47-58% of historic values). The negative impacts to statewide water supply reliability by the projected dry spell will likely be magnified by changes in snowpack seasonality and increased wildfire activity.
ARTICLE | doi:10.20944/preprints201710.0029.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: Climate change; HBV; climate projection; Ethiopian highland
Online: 5 October 2017 (13:50:02 CEST)
This study assessed the impact of climate change on water availability and variability in two subbasins in the Upper Blue Nile Basin of Ethiopia. Downscaled future climate data from HadCM3 under the A2 (medium-high) and B2 (medium-low) emission scenarios were compared to observed climate data for a baseline period (1961 to 1990). The emission scenario best representing the baseline period was used to predict future climate and as input to a hydrologic model to estimate the impact of future climate on streamflow at three future time horizons: 2020-2045, 2045-2070 and 2070-2100. Results suggest that the medium-high emission scenario best represents the local rainfall and temperature pattern. Under the A2 scenario, daily maximum and minimum temperatures will increase throughout the future time horizons. The minimum and maximum temperature will increase by 3.6 °C and 2.4 °C, respectively, towards the end of the 21st century. Consequently, potential evapotranspiration is expected to increase by 7.8%, though annual rainfall shows no statistically meaningful trend between years. A notable seasonality was found in the rainfall pattern: dry season rainfall amounts are likely to increase and wet season rainfall to decrease. The hydrological model indicated that the local hydrology of the study watersheds will be significantly influenced by climate change. Overall, at the end of the century, streamflow will increase in both rivers by up to 64% in dry seasons and decrease by 19% in wet seasons.
Subject: Engineering, Automotive Engineering Keywords: drought; drought indices; South Asia; prediction; projection; teleconnection
Online: 1 March 2021 (17:52:21 CET)
South Asian countries have experienced frequent drought incidents in recent years, and for this reason many scientific studies have been carried out to explore drought in South Asia. In this context, we review scientific studies related to drought in South Asia. The study first identifies the importance of drought-related studies and discusses drought types for South Asian regions. Representative examples of drought events and their severity, frequency, and duration in South Asian countries are identified. The Standardized Precipitation Index (SPI) has mostly been adopted in South Asian countries to quantify and monitor droughts. Nevertheless, the absence of drought quantification studies in Bhutan and the Maldives is of great concern. Future studies to generate a combined drought severity map for the South Asian region are required. Moreover, drought prediction and projection in the region are rarely studied. Further, the teleconnection between drought and large-scale atmospheric circulations in the South Asian area has not been discussed in detail in most of the scientific literature. Therefore, as a take-home message, there is an urgent need for scientific studies on drought quantification for some regions in South Asia, on prediction and projection of drought for individual countries (or the region as a whole), and on drought teleconnection to atmospheric circulation.
ARTICLE | doi:10.20944/preprints202310.1228.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Rainfall; Temperature; Potential evapotranspiration; Soil water content; Climate Projection
Online: 19 October 2023 (07:02:24 CEST)
In Ethiopia, climate change risks are anticipated to have significant consequences for agriculture and food security. This study investigated past (1981-2010) and future (2041-2070) climate change trends and their influence on the crop length of growing season in the North-Western (NW) Ethiopian highlands. Climate data were obtained from the National Meteorological Agency of Ethiopia, and the most valid, high-resolution CMIP5 RCP6 (Coupled Model Intercomparison Project, representative concentration pathway six) model data were extracted and applied for the analysis. Standard statistical methods were then applied to compute soil water content and to evaluate climate variability and trends and their impact on the crop Length of Growing Season (LGS). Inter-annual anomalies of maximum temperature (tasmax) and minimum temperature (tasmin) show that the region experienced more cool years than hot years in the past. In the future, however, the coolest years will decrease by -1.2 °C while the hottest years increase by +1.3 °C. During the major rainfall season (JJAS), the area received an adequate amount of rainfall in the past and is very likely to get similar rainfall in the future. The February to May (FMAM) season supports only early planting, while the October to January (ONDJ) season can lengthen the JJAS growing season if properly utilized; otherwise, these seasons may destroy crops before and during harvest time. The future soil water content remains close to past conditions. The length of growing season has less variable onset and cessation dates, and the projected length of growing period (LGP) of 174 to 177 days will be suitable for short-cycle crops, long-cycle crops and double cropping, which could benefit crop production in the NW Ethiopian highlands in the future.
ARTICLE | doi:10.20944/preprints202101.0324.v4
Subject: Physical Sciences, Quantum Science And Technology Keywords: Information; Quantum Physics; Biochemical Projection; Neural Interpretation; Consciousness; Reality
Online: 25 August 2021 (11:24:41 CEST)
How does the world around us work, and what is real? This question has preoccupied humanity since its beginnings. From the 16th century onwards, it has periodically been necessary to revise the prevailing worldview. But things became very strange at the beginning of the 20th century with the advent of relativity theory and quantum physics. The current focus is on the role of information, with a debate about whether it is ontological or epistemological. A theory has recently been formulated in which spacetime and gravity emerge from microscopic quantum information, more specifically from quantum entanglement via entanglement entropy. A more recent theory describes the emergence of reality itself through first-person-perspective experiences and algorithmic information theory. In quantum physics, perception and observation play a central role. Perception, as interaction with the environment, requires an exchange of information. Via biochemical projection, information is given an interpretation that is necessary to make life and consciousness possible. The world around us is not at all what it seems.
Subject: Computer Science And Mathematics, Computer Science Keywords: COVID-19; description; prediction; causal inference; extrapolation; simulation; projection
Online: 10 August 2020 (10:44:46 CEST)
The models used to estimate disease transmission, susceptibility and severity determine what epidemiology can (and cannot) tell us about COVID-19. These include: ‘model organisms’ chosen for their phylogenetic/aetiological similarities; multivariable statistical models to estimate the strength/direction of (potentially causal) relationships between variables (through ‘causal inference’), and the (past/future) value of unmeasured variables (through ‘classification/prediction’); and a range of modelling techniques to predict beyond the available data (through ‘extrapolation’), compare different hypothetical scenarios (through ‘simulation’), and estimate key features of dynamic processes (through ‘projection’). Each of these models: addresses different questions using different techniques; involves assumptions that require careful assessment; and is vulnerable to generic and specific biases that can undermine the validity and interpretation of its findings. It is therefore necessary that the models used: can actually address the questions posed; and have been competently applied. In this regard, it is important to stress that extrapolation, simulation and projection cannot offer accurate predictions of future events when the underlying mechanisms (and the contexts involved) are poorly understood and subject to change. Given the importance of understanding such mechanisms/contexts, and the limited opportunity for experimentation during outbreaks of novel diseases, the use of multivariable statistical models to estimate the strength/direction of potentially causal relationships between two variables (and the biases incurred through their misapplication/misinterpretation) warrants particular attention.
Such models must be carefully designed to address: ‘selection-collider bias’, ‘unadjusted confounding bias’ and ‘inferential mediator adjustment bias’ – all of which can introduce effects capable of enhancing, masking or reversing the estimated (true) causal relationship between the two variables examined. Selection-collider bias occurs when these two variables independently cause a third (the ‘collider’), and when this collider determines/reflects the basis for selection in the analysis. It is likely to affect all incompletely representative samples, although its effects will be most pronounced wherever selection is constrained (e.g. analyses focusing on infected/hospitalised individuals). Unadjusted confounding bias disrupts the estimated (true) causal relationship between two variables when: these share one (or more) common cause(s); and when the effects of these causes have not been adjusted for in the analyses (e.g. whenever confounders are unknown/unmeasured). Inferentially similar biases can occur when: one (or more) variable(s) (or ‘mediators’) fall on the causal path between the two variables examined (i.e. when such mediators are caused by one of the variables and are causes of the other); and when these mediators are adjusted for in the analysis. Such adjustment is commonplace when: mediators are mistaken for confounders; prediction models are mistakenly repurposed for causal inference; or mediator adjustment is used to estimate direct and indirect causal relationships (in a mistaken attempt at ‘mediation analysis’). These three biases are central to ongoing and unresolved epistemological tensions within epidemiology. All have substantive implications for our understanding of COVID-19, and the future application of artificial intelligence to ‘data-driven’ modelling of similar phenomena. 
Nonetheless, competently applied and carefully interpreted, multivariable statistical models may yet provide sufficient insight into mechanisms and contexts to permit more accurate projections of future disease outbreaks.
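The selection-collider bias described above is easy to reproduce numerically: two independent variables become spuriously correlated once the sample is restricted on a collider they both cause. The simulation below is an illustrative construction, not an analysis from the paper:

```python
import numpy as np

def collider_bias_demo(n=100_000, seed=0):
    """Simulates selection-collider bias: x and y are generated independently,
    both raise the collider x + y, and selection keeps only cases where the
    collider exceeds a threshold. Within the selected sample, x and y become
    strongly negatively correlated despite being independent overall."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    selected = (x + y) > 1.0          # selection determined by the collider
    r_all = np.corrcoef(x, y)[0, 1]   # ~0: no true relationship
    r_sel = np.corrcoef(x[selected], y[selected])[0, 1]  # strongly negative
    return r_all, r_sel
```

This mirrors analyses restricted to infected or hospitalised individuals, where selection is constrained by outcomes that several exposures jointly influence.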
ARTICLE | doi:10.20944/preprints201905.0243.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: Machine Vision; Morphological image filtering; Galvanic Industry; Rear-projection.
Online: 20 May 2019 (11:46:34 CEST)
In the fashion field, the use of electroplated small metal parts such as studs, clips and buckles is widespread. The plate is often made of a precious metal, such as gold or platinum. Due to the high cost of these materials, it is strategically relevant and of primary importance for manufacturers to avoid any waste by depositing only the strictly necessary amount of material. To this aim, companies need to know the overall number of items to be electroplated so that the parameters driving the galvanic process can be properly set. Accordingly, the present paper describes a Machine Vision-based method able to automatically count small metal parts arranged on a galvanic frame. The devised method relies on the definition of a proper acquisition system and on the development of image processing-based routines. Such a system is then implemented on a counting machine that is meant to be adopted in galvanic industrial practice to properly define a suitable set of working parameters (such as current, voltage and deposition time) for the electroplating machine and, thereby, to assure the desired plate thickness on one side and to avoid material waste on the other.
ARTICLE | doi:10.20944/preprints201608.0046.v1
Subject: Social Sciences, Cognitive Science Keywords: visual symmetry; affine projection; fractals; visual sensation; aesthetics; preference
Online: 5 August 2016 (05:15:32 CEST)
Evolution and geometry generate complexity in similar ways. Evolution drives natural selection while geometry may capture the logic of this selection and express it visually, in terms of specific generic properties representing some kind of advantage. Geometry is ideally suited for expressing the logic of evolutionary selection for symmetry, which is found in the shape curves of vein systems and other natural objects such as leaves, cell membranes, or tunnel systems built by ants. The topology and geometry of symmetry is controlled by numerical parameters, which act in analogy with a biological organism's DNA. The introductory part of this paper reviews findings from experiments illustrating the critical role of two-dimensional design parameters and shape symmetry for visual or tactile shape sensation, and for perception-based decision making in populations of experts and non-experts. Thereafter, results from a pilot study on the effects of fractal symmetry, referred to herein as the symmetry of things in a thing, on aesthetic judgments and visual preference are presented. In a first experiment (psychophysical scaling procedure), non-expert observers had to rate (scale from 0 to 10) the perceived beauty of a random series of 2D fractal trees with varying degrees of fractal symmetry. In a second experiment (two-alternative forced choice procedure), they had to express their preference for one of two shapes from the series. The shape pairs were presented successively in random order. Results show that the smallest possible fractal deviation from "symmetry of things in a thing" significantly reduces the perceived attractiveness of such shapes. The potential of future studies where different levels of complexity of fractal patterns are weighed against different degrees of symmetry is pointed out in the conclusion.
ARTICLE | doi:10.20944/preprints202305.2029.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: discrete global grid system; equidistant cylindrical projection; yin-yang; distortion
Online: 30 May 2023 (03:56:22 CEST)
The rapid growth of Earth's global geospatial data necessitates an efficient system for organizing the data, facilitating data fusion from diverse sources, and promoting interoperability. Mapping the spheroidal surface of the planet presents significant challenges, as it involves balancing distortion and splitting the surface into multiple partitions. The distortion decreases as the number of partitions increases, but at the same time the complexity of data processing increases, since each partition represents a separate data set and is defined in its own local coordinate system. In this paper, we propose the dual orthogonal equidistant cylindrical projection method to mitigate distortion and reduce the number of partitions. Additionally, we use the rotation of projection cylinders to effectively minimize average angular and areal distortions of the Earth’s landmass and reduce the interruption of continental plates caused by partition edges. By incorporating auxiliary latitudes and proposing an approximate authalic latitude, we further enhance the mapping of the ellipsoid onto the sphere, simplifying calculations. Experimental results demonstrate a substantial reduction in distortion and interruption of continental plates. With only two partitions, an average landmass angular distortion of less than 3.56 degrees and an average normalized surface distortion of less than 1.07 were achieved.
ARTICLE | doi:10.20944/preprints202110.0159.v2
Subject: Business, Economics And Management, Finance Keywords: Maritime Silk Road; investment environment; dynamic evaluation; projection pursuit cluster
Online: 14 October 2021 (10:47:02 CEST)
Understanding and evaluating urban investment environment is essential for effectively improving the efficiency of resource allocation between cities and promoting overall development of the regional economy. This paper takes 15 node cities on maritime Silk Road covered by the “Belt and Road” as the research object, establishes a dynamic evaluation index system for investment environment, and uses projection pursuit cluster to analyze and evaluate the investment environment of the cities. It is found that the investment environment potential of a city is directly related to the level of social development, economic development, and the degree of opening to the outside world. It is recommended that node cities should seize the important opportunity of the construction of the Maritime Silk Road, introduce world-wide human, financial and material resources to promote regional resources allocation and flow, and continuously improve and upgrade the investment environment quality.
REVIEW | doi:10.20944/preprints202105.0666.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: muon tomography; time projection chamber; Micromegas; cosmic rays; geophysics; dam
Online: 27 May 2021 (13:11:57 CEST)
Tomography based on cosmic muon absorption is a rising technique because of its versatility and its consolidation as a geophysics tool over the past decade. It makes it possible to address major societal issues such as the long-term stability of natural and man-made large infrastructures or sustainable underwater management. Traditionally, muon trackers consist of hodoscopes or multilayer detectors. For applications with challenging available volumes or a wide required field of view, a thin time projection chamber (TPC) associated with a Micromegas readout plane can provide a good tradeoff between compactness and performance. This paper details the design of such a TPC, aiming at maximizing the primary signal and minimizing track reconstruction artifacts. The results of the measurements performed during a case study addressing the aforementioned applications are discussed. The current lines of work and perspectives of the project are also presented.
ARTICLE | doi:10.20944/preprints201903.0093.v1
Subject: Computer Science And Mathematics, Computational Mathematics Keywords: projection; optimization; generalization; box constraints; declipping; desaturation; proximal splitting; sparsity
Online: 7 March 2019 (12:11:19 CET)
In theory and applications, it is often inevitable to work with projectors onto convex sets, where a linear transform is involved. In this article, a novel projector is presented, which generalizes previous results in that it admits a broader family of linear transforms, but on the other hand it is limited to box-type convex sets in the transformed domain. The new projector has an explicit formula and it can be interpreted within the framework of proximal optimization. The benefit of the new projector is demonstrated on an example from signal processing, where it was possible to speed up the convergence of a signal declipping algorithm by a factor of more than two.
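For the special case where the linear transform has orthonormal rows (A Aᵀ = I, e.g. a tight Parseval frame), a projector of the kind described has the closed form x + Aᵀ(clip(Ax) − Ax). The sketch below covers only this simplified case, not the broader family of transforms the paper generalizes to:

```python
import numpy as np

def project_box_transformed(x, A, lower, upper):
    """Projects x onto the set {z : lower <= A z <= upper}, assuming A has
    orthonormal rows (A @ A.T == I). Under that assumption the projection
    has the explicit formula  x + A.T @ (clip(A x) - A x), since the
    correction acts only in the range of A.T. Simplified illustration of
    the idea, not the paper's more general projector."""
    Ax = A @ x
    return x + A.T @ (np.clip(Ax, lower, upper) - Ax)
```

With A the identity, this reduces to ordinary elementwise clipping, the projector used in declipping and desaturation problems.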
ARTICLE | doi:10.20944/preprints201810.0249.v2
Subject: Chemistry And Materials Science, Metals, Alloys And Metallurgy Keywords: resistance projection welding; nugget size; maximum failure load; welding parameter
Online: 22 October 2018 (11:32:18 CEST)
The aim of this paper is first to evaluate the influence of three key parameters, weld current, weld time and electrode force, on nugget diameter and tensile strength in resistance projection welding. Then, a 2-D axisymmetric finite element model is developed to simulate the projection welding and predict the nugget diameter. Finally, the FEM results are compared to experimental data to verify the simulation model and simulated results. In the finite element model, temperature-dependent material properties were taken into account.
ARTICLE | doi:10.20944/preprints202210.0411.v1
Subject: Engineering, Mechanical Engineering Keywords: Multiple exposure image fusion; Fringe projection profilometry; High reflective surface measurement
Online: 26 October 2022 (10:13:48 CEST)
Fringe projection profilometry (FPP) has been extensively applied in various fields for its superior speed, high accuracy and high data density. However, measuring objects with highly reflective or high-dynamic-range surfaces remains challenging for FPP. Several multiple-exposure image fusion methods have been proposed and have successfully improved measurement performance for these kinds of objects. Normally, these methods have a relatively fixed sequence of exposure settings determined by artificial experience or trial-and-error experiments, which may decrease the efficiency of the entire measurement process and may be less robust to varying environmental lighting conditions and object reflective properties. In this paper, a novel self-adaptive multiple-exposure image fusion method is proposed, with two main improvements: adaptively optimizing the initial exposure and the exposure sequence. By introducing the theory of information entropy, combined with an analysis of the characteristics of fringe image entropy, an adaptive initial exposure search method is first proposed. Then, an exposure sequence generation method based on dichotomy is described. On the basis of these two improvements, a novel self-adaptive multiple-exposure image fusion method for FPP, together with its detailed procedure, is given. Experimental results validate the performance of the proposed method by measuring objects with different surface reflectivity under different ambient lighting conditions.
ARTICLE | doi:10.20944/preprints201806.0198.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: quantile regression; quantile time series; demographics; mortality; longevity; modelling mortality projection
Online: 12 June 2018 (15:05:09 CEST)
This paper has three objectives. The first is to present a detailed overview, in the form of a tutorial, of several key quantile time series modelling approaches. The second is to develop a general framework representing such quantile models in a unifying manner, in order to easily develop extensions and connections between existing models that can then be further extended in practice. In this regard, the core theme of the paper is to give a general audience a perspective on the core components that go into the construction of a quantile time series model, and then to explore each of these components in detail. The paper does not address the estimation of these models, as there is existing literature on these aspects in many settings; we provide references to relevant works for several classes of model. The focus is instead on providing a unified framework for constructing such models for practitioners, and therefore on the properties of the models and the links between them from a constructive perspective. The third objective is to compare and discuss the application of the different quantile time series models on several interesting demographic and mortality time series data sets of relevance to life insurance analysis. The exploration includes detailed mortality, fertility, birth and morbidity data in several countries, with more detailed analysis of regional data in England, Wales and Scotland.
ARTICLE | doi:10.20944/preprints201705.0188.v1
Subject: Engineering, Mechanical Engineering Keywords: Creep; Composite constitutive model; θ projection method; low and intermediate temperature
Online: 25 May 2017 (17:35:30 CEST)
The creep behaviors of TA2 and R60702 at low and intermediate temperature are presented and discussed in this paper. Experimental results indicated that an apparent threshold stress was exhibited in the creep deformation of R60702, while the primary creep phase was found to be the main pattern in the room-temperature creep behavior of TA2. Compared with the exponential law, the power law proved to be a proper constitutive model for describing the primary creep phase. The θ projection method, in turn, showed a significant advantage in evaluating the accelerated creep stage. Thus, a composite model combining the power law with the θ projection method was applied to the evaluation of creep curves at low and intermediate temperature. Based on the multiaxial creep deformation results, the model was modified and discussed; a linear relationship exists between the composite model parameters and the applied load. Finally, the creep life could be accurately predicted, and the composite model method is suitable for application in creep life analysis at low and intermediate temperature.
ARTICLE | doi:10.20944/preprints201612.0015.v1
Subject: Computer Science And Mathematics, Mathematics Keywords: FM-BEM; variable restart parameter; GMRES(m); error vector projection; convergence
Online: 2 December 2016 (09:01:51 CET)
To efficiently solve the large-scale linear equations involved in the Fast Multipole Boundary Element Method (FM-BEM), an iterative method named the generalized minimal residual GMRES(m) algorithm with a Variable Restart Parameter (VRP-GMRES(m) algorithm) is proposed. By properly changing the restart parameter of the GMRES(m) algorithm, the iteration stagnation problem resulting from an improper selection of the parameter is resolved efficiently. Based on the framework of the VRP-GMRES(m) algorithm and the relevant properties of the generalized inverse matrix, the projection of the error vector r_(m+1) on r_m is deduced. The result proves that the proposed algorithm is not only rapidly convergent but also highly accurate. Numerical experiments further show that the new algorithm can significantly improve computational efficiency and accuracy. Its superiority becomes even more remarkable when it is used to solve larger-scale problems. It therefore has extensive prospects in the FM-BEM field and in other scientific and engineering computing.
ARTICLE | doi:10.20944/preprints202309.0041.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Auxiliary venipuncture; vein segmentation; improved U-Net; coaxial correction; vein projection system
Online: 1 September 2023 (10:23:22 CEST)
Vein segmentation and projection correction constitute the core algorithms of an auxiliary venipuncture device, providing accurate venous positioning to assist puncture and to reduce the number of punctures and the pain of patients. This paper proposes an improved U-Net for vein segmentation and a coaxial correction for image alignment in a self-built vein projection system. The proposed U-Net embeds Gabor convolution kernels in the shallow layers to enhance segmentation accuracy. Additionally, to mitigate the semantic information loss caused by channel reduction, the network model is made lightweight by replacing conventional convolutions with inverted residual blocks. For the visualization process, this paper proposes a method combining coaxial correction and a homography matrix to address the non-planarity of the dorsal hand. First, a hot mirror is used to adjust the light paths of the projector and camera to be coaxial; then the projected image is aligned with the dorsal hand using a homography matrix. With this approach, the device requires only a single calibration before use. With the improved segmentation method, an accuracy rate of 95.12% is achieved on the dataset, and the intersection-over-union ratio between the segmented and original images reaches 90.07%. The entire segmentation process is completed in 0.09 seconds, and the largest distance error of the vein projection onto the dorsal hand is 0.53 mm. Experiments show that the device reaches practical accuracy and has research and application value.
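Homography-based alignment of the kind described can be sketched with the standard direct linear transform (DLT): the 3x3 matrix is the null vector of a constraint matrix stacked from point correspondences. This is a generic estimator, not the paper's calibration code.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 point pairs) via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply a homography to 2D points (homogeneous divide included)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With exactly four non-degenerate correspondences the constraint matrix has a one-dimensional null space, so the homography is recovered exactly up to scale; this matches the "single calibration before use" workflow.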
ARTICLE | doi:10.20944/preprints202207.0455.v1
Subject: Engineering, Control And Systems Engineering Keywords: reduced-order control; rank constraint; linear matrix inequality; alternating projection; convex optimization
Online: 29 July 2022 (09:50:27 CEST)
In this paper, we propose an efficient numerical computation method of reduced-order controller design for linear time-invariant systems. The design problem is described by linear matrix inequalities (LMIs) with a rank constraint on a structured matrix, due to which the problem is NP-hard. Instead of the heuristic method that approximates the matrix rank by the nuclear norm, we propose a numerical projection onto the rank-constrained set based on the alternating direction method of multipliers (ADMM). Then the controller is obtained by alternating projection between the rank-constrained set and the LMI set. We show the effectiveness of the proposed method compared with existing heuristic methods, by using 95 benchmark models from the COMPLeib library.
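The projection onto the rank-constrained set is, by the Eckart-Young theorem, a truncated SVD. The toy alternating projection below uses the set of symmetric matrices as a simple stand-in for the LMI set; the paper's actual LMI projection (via ADMM) is far richer.

```python
import numpy as np

def project_rank(M, r):
    """Eckart-Young: closest (Frobenius) rank-r matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def project_symmetric(M):
    """Projection onto symmetric matrices (a simple convex stand-in set)."""
    return 0.5 * (M + M.T)

def alternating_projection(M, r, iters=50):
    """Alternate between the convex set and the nonconvex rank-r set."""
    for _ in range(iters):
        M = project_rank(project_symmetric(M), r)
    return M
```

Convergence of alternating projections is only local when one set is nonconvex, which is why the abstract pairs the scheme with a careful ADMM-based projection rather than a bare SVD truncation.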
ARTICLE | doi:10.20944/preprints202106.0438.v1
Subject: Physical Sciences, Acoustics Keywords: holography; hologram; computer-generated hologram; holographic projection; time-division; digital micromirror device
Online: 16 June 2021 (10:33:00 CEST)
Holographic projection can serve as a simple form of projection because it enlarges or reduces reconstructed images without using a zoom lens. However, one major problem associated with this projection is the deterioration of image quality as the reconstructed image is enlarged. In this paper, we propose a time-division holographic projection, in which the original image is divided into blocks and the holograms of each block are calculated. Using a digital micromirror device (DMD), the holograms are projected at high speed to obtain the entire reconstructed image. However, the holograms on the DMD need to be binarized, which causes uneven brightness between the divided blocks. We correct this by controlling the display time of each hologram. Additionally, by combining the proposed method with a noise reduction method, the image quality of the reconstructed image was improved. Results from simulations and optical reconstructions show that we obtained a full-color reconstructed image with reduced noise and uneven brightness.
ARTICLE | doi:10.20944/preprints201810.0253.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: adaptive filtering; set-membership filtering; affine projection; data censoring; big data; outliers
Online: 12 October 2018 (04:57:08 CEST)
In this paper, the set-membership affine projection (SM-AP) algorithm is utilized to censor non-informative data in big data applications. To this end, the probability distribution of the additive noise signal and the excess of mean-squared error (EMSE) in steady-state are employed in order to estimate the threshold parameter of the single threshold SM-AP (ST-SM-AP) algorithm aiming at attaining the desired update rate. Furthermore, by defining an acceptable range for the error signal, the double threshold SM-AP (DT-SM-AP) algorithm is proposed to detect very large errors due to the irrelevant data such as outliers. The DT-SM-AP algorithm can censor non-informative and irrelevant data in big data applications, and it can improve misalignment and convergence rate of the learning process with high computational efficiency. The simulation and numerical results corroborate the superiority of the proposed algorithms over traditional algorithms.
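The single-threshold set-membership idea (update only when the error magnitude exceeds a bound γ, censoring the rest) can be sketched with an SM-NLMS-style update. The threshold value here is hand-picked for illustration, not the EMSE-derived estimate the abstract describes, and the double-threshold outlier test is omitted.

```python
import numpy as np

def sm_nlms(x, d, order, gamma, eps=1e-8):
    """Set-membership NLMS: update only when |error| exceeds gamma,
    censoring non-informative samples (single-threshold variant)."""
    w = np.zeros(order)
    updates = 0
    for k in range(order - 1, len(x)):
        u = x[k - order + 1 : k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ u
        if abs(e) > gamma:                   # informative datum: update
            mu = 1.0 - gamma / abs(e)        # step that lands on the constraint set
            w += mu * e * u / (u @ u + eps)
            updates += 1
        # else: censor the datum (no update, computation saved)
    return w, updates
```

The update counter makes the data-censoring effect visible: after convergence most samples fall inside the error bound and are skipped.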
ARTICLE | doi:10.20944/preprints202302.0078.v2
Subject: Computer Science And Mathematics, Information Systems Keywords: COVID-19; Particle Filtering; Machine Learning; Epidemiologic Modeling; Compartmental Model; Projection and Intervention
Online: 7 September 2023 (10:25:49 CEST)
COVID-19 transmission models have conferred great value in informing public health understanding, planning, and response. However, the pandemic also demonstrated the infeasibility of basing public health decision-making on transmission models with pre-set assumptions. No matter how favourably evidenced when built, a model with fixed assumptions is challenged by numerous factors that are difficult to predict. Ongoing planning associated with rolling back and re-instituting measures, initiating surge planning, and issuing public health advisories can benefit from approaches that allow state estimates for transmission models to be continuously updated in light of unfolding time series. A model continuously regrounded by empirical data in this way can provide a consistent, integrated depiction of the evolving underlying epidemiology and acute care demand; offer the ability to project such a depiction forward in a fashion suitable for triggering the deployment of acute care surge capacity or public health measures; and support quantitative evaluation of tradeoffs associated with prospective interventions in light of the latest estimates of the underlying epidemiology. We describe here the design, implementation, and multi-year daily use for public health and clinical decision-making support of a particle-filtered COVID-19 compartmental model, which served Canadian federal and provincial governments via regular reporting starting in June 2020. The use of the Bayesian Sequential Monte Carlo algorithm of particle filtering allows the model to be re-grounded daily and to adapt to new trends within daily incoming data, including test volumes and positivity rates, endogenous and travel-related cases, hospital census and admissions flows, daily counts of dose-specific vaccinations administered, measured concentrations of SARS-CoV-2 in wastewater, and mortality.
Important model outputs include estimates (via sampling) of the count of undiagnosed infectives, the count of individuals at different stages of the natural history of frankly and pauci-symptomatic infection, the current force of infection, the effective reproductive number, and current and cumulative infection prevalence. Following a brief description of the model design, we describe how the machine learning algorithm of particle filtering is used to continually reground estimates of the dynamic model state, to support probabilistic model projection of epidemiology and of health system capacity utilization and service demand, and to probabilistically evaluate trade-offs between potential intervention scenarios. We further note aspects of model use in practice as an effective reporting tool parameterized by jurisdiction, including support for a scripting pipeline that fully automates reporting, except for security-restricted retrieval of new data, with automated model deployment, data validity checks, and automatic post-scenario scripting and reporting. As demonstrated by this multi-year deployment of the Bayesian machine learning algorithm of particle filtering to provide industrial-strength reporting informing public health decision-making across Canada, such methods offer strong support for evidence-based public health decision-making informed by ever-current articulated transmission models whose probabilistic state and parameter estimates are continually regrounded by diverse data streams.
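The re-grounding mechanism can be illustrated with a minimal bootstrap particle filter on a toy one-dimensional state-space model. The compartmental structure, data streams, and resampling details of the deployed system are far richer than this sketch assumes; only the predict-weight-resample cycle is shown.

```python
import numpy as np

def particle_filter(obs, n_particles=2000, proc_std=0.1, obs_std=0.5, seed=0):
    """Bootstrap (Sequential Monte Carlo) filter for a 1D random-walk state
    observed in Gaussian noise; a toy stand-in for re-grounding a
    compartmental model's state with daily data."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        # Predict: propagate each particle through the process model.
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Weight: likelihood of the observation under each particle.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample (systematic) to re-ground the ensemble on the data.
        c = np.cumsum(w)
        c[-1] = 1.0
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(c, positions)]
    return np.array(estimates)
```

Because each daily observation reweights and resamples the ensemble, the filtered state estimate tracks the latent signal more closely than the raw noisy observations do, which is the property the abstract exploits at scale.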
Subject: Physical Sciences, Acoustics Keywords: exponential atmosphere; acoustic wave; diagnostics; projection operators; artificial periodic irregularities; neutral temperature; density
Online: 26 July 2021 (18:04:42 CEST)
The main result of this work is the estimation of the entropy mode accompanying a wave disturbance observed in the atmospheric height range of 90-120 km. The study is a direct continuation and development of recent results on the diagnostics of acoustic waves with separation by direction of propagation. The estimation of the entropy mode contribution relies upon measurements of three dynamic variables (the temperature, density, and vertical velocity perturbations) of the neutral atmosphere, obtained by the method of resonant scattering of radio waves on artificial periodic irregularities of the ionospheric plasma. The measurement of the atmospheric dynamic parameters was carried out at the SURA heating facility. The mathematical foundation of the mode-separation algorithm is the dynamic projecting operator technique, with the operators constructed from the eigenvectors of the coordinate evolution operator of the transformed system of balance equations of hydro-thermodynamics.
ARTICLE | doi:10.20944/preprints202307.0276.v1
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: cell death, corticostriatal projection, motor cortex, neocortical pyramidal neuron, neonatal encephalopathy, RNA binding protein
Online: 5 July 2023 (11:21:17 CEST)
The effects of hypothermia on neonatal encephalopathy may vary regionally, spatially, and cytopathologically in the gyrencephalic neocortex, with manifestations potentially influenced by seizures that alter the severity and distribution of neuropathology. We developed a neonatal piglet survival model of hypoxic-ischemic (HI) encephalopathy and hypothermia with continuous encephalographic (cEEG) monitoring for seizures to study injury in the neocortex. Neonatal piglets were randomized to naïve, HI-normothermia (NT), overnight hypothermia (HT) initiated 2 hours after HI, sham-NT, or sham-HT treatments. Some piglets within the sham and HI groups received cEEG monitoring during recovery. Survival was 2-7 days (piglets were unmedicated, and those with poor recovery and unresolving seizures were euthanized early); there were no differences in survival among groups (p>0.078). Neuropathology was assessed by hematoxylin and eosin staining and immunohistochemistry for RNA Binding FOX-1 Homolog 3 (Rbfox3/NeuN). Normal and ischemic-necrotic neurons were counted (layers II-VI collectively) in somatosensory, motor, and prefrontal cortices, identified by cytoarchitecture and connectomics, and in inferior parietal cortex by layer. Seizure burden was determined. HI-NT piglets had a reduced normal/total neuron ratio and an increased ischemic-necrotic/total neuron ratio relative to naïve, sham-NT, and sham-HT piglets in anterior and posterior motor and somatosensory cortices. After HI-NT, the frontal cortex was vulnerable to ischemic necrosis in the prefrontal lateral bank. HI-HT piglets had higher normal/total neuron ratios and lower ischemic-necrotic/total neuron ratios than HI-NT piglets in anterior/posterior motor and somatosensory cortices. Total normal neuron density in layer III of the inferior parietal cortex was reduced in HI-NT piglets compared to sham piglets and was protected by HT.
Laminar analysis of Rbfox3 in somatosensory cortex revealed three types of neurons: Rbfox3-positive/normal, Rbfox3-positive/ischemic-necrotic, and Rbfox3-depleted. HI piglets had an increased Rbfox3-depleted/total neuron ratio in layers II and III compared to sham-NT. Cortical neuron Rbfox3 depletion was partly rescued by HT. Seizure burden was more severe in HI piglets compared to sham piglets, but HI-NT and HI-HT piglets were similar. We conclude that 1) the neonatal piglet neocortex has a suprasylvian spatial vulnerability to HI and seizures; 2) HT protects against neuropathology in functionally different regions of the neonatal gyrencephalic neocortex; 3) neurons in the neonatal neocortex have a limited cytopathology repertoire seen by H&E staining; 4) higher seizure burden correlates with more ischemic-necrotic neurons in somatosensory cortex; 5) seizure presence is associated with damage spread to the inferior parietal cortex; 6) seizure presence is insensitive to HT; and 7) Rbfox3 immunophenotyping identifies a novel neuronal RNA splicing protein nuclear-depletion pathology that appears reversible by HT. This work demonstrates that HT protection of the neocortex in neonatal HI is topographic and laminar, does not mitigate seizures, and restores a depleted neuronal RNA splicing factor.
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: Koopman Operator; Dynamic Mode Decomposition(DMD); Johnson-Lindenstrauss Lemma; Random Projection; Data-driven method
Online: 24 September 2021 (09:14:01 CEST)
A data-driven analysis method known as dynamic mode decomposition (DMD) approximates the linear Koopman operator on a projected space. In the spirit of the Johnson-Lindenstrauss lemma, we use random projection to estimate the DMD modes in a reduced-dimensional space. In practical applications, snapshots lie in a high-dimensional observable space and the DMD operator matrix is massive; computing DMD with the full spectrum is therefore infeasible, so our main computational goal is estimating the eigenvalues and eigenvectors of the DMD operator in a projected domain. We generalize the current algorithm to estimate a projected DMD operator, focusing on a powerful and simple random projection algorithm that reduces the computational and storage cost. While a random projection clearly simplifies the algorithmic complexity of a detailed optimal projection, we show that the results can nonetheless generally be excellent, and that their quality can be understood through a well-developed theory of random projections. We demonstrate that the modes can be calculated at low cost from the projected data with sufficient dimension.
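A minimal sketch of DMD on randomly projected snapshots, assuming exact linear dynamics confined to a low-dimensional subspace: a Gaussian sketching matrix compresses the snapshot pair, and the exact-DMD operator is formed by a pseudoinverse. These are standard choices, not necessarily the authors' exact algorithm.

```python
import numpy as np

def projected_dmd_eigs(X, Y, k, seed=0):
    """DMD eigenvalues after a Gaussian random projection to k dimensions
    (Johnson-Lindenstrauss-style sketching of the snapshot matrices)."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((k, X.shape[0])) / np.sqrt(k)
    Xp, Yp = P @ X, P @ Y
    # Exact DMD on the sketched data: A_tilde = Yp @ pinv(Xp).
    A_tilde = Yp @ np.linalg.pinv(Xp)
    return np.linalg.eigvals(A_tilde)
```

When the data lie exactly in an r-dimensional invariant subspace and k >= r, the nonzero eigenvalues of the sketched operator coincide with the dynamics eigenvalues, which is the essence of why random projection can work so well here.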
ARTICLE | doi:10.20944/preprints201809.0114.v1
Subject: Chemistry And Materials Science, Polymers And Plastics Keywords: crystalline gel; 3D printing; mask-projection stereolithography; thermal energy storage; phase change material; thermoregulation
Online: 6 September 2018 (12:04:00 CEST)
Most phase change materials (PCMs) have been limited to use as functional additives or sealed in containers, and extra auxiliary equipment or a supporting matrix is needed. The emergence of 3D printing techniques has dramatically advanced materials development and simplified production processes. This study focuses on a novel strategy to model thermal energy storage crystalline gels with three-dimensional architecture directly from liquid resin, without supporting materials, through a light-induced polymerization 3D printing technique. A mask-projection stereolithography printer was used for the 3D printing tests, and the printing characteristics of crystalline thermal energy storage P(SA-DMAA) gels with different molar ratios were evaluated. For the P(SA-DMAA) gels with a small fraction of SA, 3D fabrication was realized with high printing precision on both milli- and micrometer scales. For comparison with the 3D printed samples, P(SA-DMAA) gels made by two other methods, post-UV curing treatment after 3D printing and UV curing in a conventional mold, were prepared. The 3D printed P(SA-DMAA) gels showed high crystallinity. Post-UV curing treatment was beneficial to the full curing of 3D printed gels, but did not further improve the crystal structure toward higher crystallinity. A P(SA-DMAA) crystalline gel with an energy storage enthalpy reaching 69.6 J·g−1 was developed, and its good thermoregulation property in the temperature range from 25 to 40 °C was demonstrated. The P(SA-DMAA) gels are feasible for practical applications as a 3D printing material with thermal energy storage and thermoregulation functionality.
ARTICLE | doi:10.20944/preprints202208.0435.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: genomic DNAs; stochastics; tensor-unitary transformation; quantum informatics; fractal; projection operators; gestalt phenomena; stochastic determinism
Online: 25 August 2022 (11:52:12 CEST)
The article is devoted to algebraic modeling of universal rules of stochastic organization of genomic DNA of higher and lower organisms, previously published by the author. The proposed algebraic apparatus uses formalisms of quantum mechanics and quantum informatics and is based on so-called tensor-unitary transformations of vectors, which generate families of interrelated stochastic-deterministic vectors of increasing dimension. The features of the vectors' interconnections in these families model the stochastic-deterministic properties of the named phenomenological rules. New approaches are presented for modeling developing multi-parameter biosystems whose number of parameters increases in the course of step-by-step development. In light of the presented materials, issues of fractal-like organization in genetically inherited biosystems are considered, and the development of the theory of stochastic determinism as an antipode of deterministic chaos is discussed.
ARTICLE | doi:10.20944/preprints202311.1917.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: empirical equations; generalized least‐squares regression; weight matrix; systematic errors; covariance propagation; projection matrix; residual structure
Online: 30 November 2023 (09:16:25 CET)
Empirical equations representing, interpolating and smoothing groups of measurement results by regression methods are in widespread use in metrology and other fields of science and engineering. Standard equations for the propagation of measurement uncertainties to values computed from such equations are available but suffer from a lack of general acceptance and are only infrequently applied in practice. One reason for the slow uptake is the lack of clear methods to account for systematic error. In this paper, uncertainty propagation equations in terms of covariance matrices are generalized to allow for systematic errors in least-squares regression, and effects of the resulting uncertainties are investigated analytically. A stochastic ensemble model of systematic error is proposed for the computation of the non-diagonal elements of the weight matrix of the Generalized Least-Squares method (GLS) from the measurement uncertainty. A GLS projector formalism is described which separates the effects of measurement scatter on values calculated by the equation from those on the related “residuals”, i.e., the residual errors in the fitted data. The same projectors act similarly on the associated uncertainties and covariance matrices and permit the effects of systematic errors on the simulation covariance to be quantified. It is demonstrated that, in order to include systematic errors in the uncertainty estimates of GLS, covariance matrices may be substituted by novel, specifically defined dispersion matrices that are not specified yet by the GUM. Systematic measurement errors may be estimated from various sources; a particularly easy way suggested here involves analyzing the structure in the regression residuals. It is demonstrated that GLS projectors exhibit inherent features of “error blindness” also with respect to systematic errors.  GUM: Guide to the Expression of Uncertainty in Measurement, http://www.bipm.org/en/publications/guides
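The GLS estimator and its oblique projector can be sketched directly from the weight matrix W = C⁻¹. The idempotency of the "hat" projector and the W-orthogonality of the residuals verified below are the textbook properties that the abstract's projector formalism builds on; the systematic-error dispersion matrices of the paper are beyond this sketch.

```python
import numpy as np

def gls_fit(X, y, C):
    """Generalized least squares with weight matrix W = inv(C):
    returns coefficients, the GLS 'hat' projector H, and the residual projector."""
    W = np.linalg.inv(C)
    G = np.linalg.inv(X.T @ W @ X) @ X.T @ W   # GLS estimator matrix
    beta = G @ y
    H = X @ G                                  # oblique projector onto col(X)
    return beta, H, np.eye(len(y)) - H
```

H separates the part of the data explained by the empirical equation from the residuals, while I - H acts on the same measurement scatter to produce the residual structure the paper analyzes.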
ARTICLE | doi:10.20944/preprints202305.1436.v1
Subject: Engineering, Aerospace Engineering Keywords: Small-field telescope; Space target detection; Image preprocessing; Target signal enhancement; Multi frame projection; Adaptive filtering
Online: 19 May 2023 (10:57:42 CEST)
Compared to wide-field telescopes, small-field detection systems have higher spatial resolution, resulting in stronger detection capability and higher positioning accuracy. When detecting in synchronous orbit with a small field of view, both space debris and fixed stars are imaged as point targets, making them difficult to distinguish. In addition, with the improvement of detection capabilities, the number of stars in the background increases rapidly, which places higher demands on recognition algorithms. Star detection is therefore indispensable for identifying and locating space debris in complex backgrounds. To address these difficulties, this paper proposes a real-time star extraction method based on adaptive filtering and multi-frame projection. We use bad-pixel repair and background suppression algorithms to preprocess the star images, then analyze and enhance the target signal-to-noise ratio (SNR). Multi-frame projection is used to fuse information, after which adaptive filtering, adaptive morphology, and adaptive median filtering algorithms are proposed to detect trajectories. Finally, the projection is released to locate the target. Our recognition algorithm has been verified on real star images captured with a small-field telescope, and the experimental results demonstrate its effectiveness. We successfully extracted the star HIP 27066, which has a magnitude of about 12 and an SNR of about 1.5. Compared with existing methods, our algorithm has advantages in both recognition rate and false-alarm rate, and can be used as a real-time target recognition algorithm for a space-based synchronous orbit detection payload.
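The multi-frame projection step can be illustrated as a pixel-wise maximum over a frame stack: static stars stay single points while a moving target leaves a trail revealing its trajectory. This toy version omits the preprocessing, SNR enhancement, and adaptive filtering stages of the paper's pipeline.

```python
import numpy as np

def max_projection(frames):
    """Pixel-wise maximum over a stack of frames: static stars remain points,
    a moving target accumulates into a trail along its trajectory."""
    return np.max(np.stack(frames), axis=0)

# Toy sequence: one static 'star' and one target moving along a row.
frames = []
for k in range(3):
    f = np.zeros((8, 8))
    f[2, 2] = 1.0            # static star: same pixel in every frame
    f[5, 1 + 2 * k] = 0.8    # target: shifts 2 pixels per frame
    frames.append(f)
proj = max_projection(frames)
```

In the projected image the target's successive positions form a linear trail, which is what the subsequent adaptive morphology stage would detect; "releasing" the projection then maps the trail back to per-frame positions.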
REVIEW | doi:10.20944/preprints202106.0448.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: Parkinson’s disease; abnormal involuntary movements; dopaminergic signaling; basal ganglia; spiny projection neurons; neurotransmission; deep brain stimulation
Online: 16 June 2021 (14:17:30 CEST)
Levodopa remains the primary drug for controlling motor symptoms in Parkinson's disease throughout the whole course of the disease, but over time complications develop in the form of dyskinesias, which gradually become more frequent and severe. These abnormal, involuntary, hyperkinetic movements are mostly characteristic of the ON phase and reflect an excess of exogenous levodopa. They may also occur during the OFF phase, or in both phases. Over the past 10 years, the issue of levodopa-induced dyskinesia has been the subject of research into both the substrate of this pathology and potential remedial strategies. The purpose of the present study was to review the results of recent research on the background and treatment of dyskinesia. To this end, databases were reviewed using a search strategy that included both relevant keywords related to the topic and appropriate filters to limit results to English-language literature published since 2010. Based on the selected papers, the current state of knowledge on the morphological, functional, genetic, and clinical features of levodopa-induced dyskinesia, as well as pharmacological and genetic treatment and other therapies such as deep brain stimulation, is described.
REVIEW | doi:10.20944/preprints202005.0488.v1
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: striatal development; Huntington’s disease; spiny projection neurons; medium spiny neurons; neuronal excitability; striosomes; matrix; basal ganglia
Online: 31 May 2020 (18:20:17 CEST)
Huntington's disease (HD) is an inherited neurodegenerative disorder that usually starts during midlife with progressive alterations of motor and cognitive functions. The disease is caused by a CAG repeat expansion within the huntingtin gene leading to severe striatal neurodegeneration. Recent studies conducted on pre-HD children highlight early striatal developmental alterations starting as early as 6 years of age, the earliest age assessed. These findings, in line with data from mouse models of HD, raise the question of when during development the first disease-related striatal alterations emerge and whether they contribute to the later appearance of the neurodegenerative features of the disease. In this review we describe the different stages of striatal network development and then discuss recent evidence for its alterations in rodent models of the disease. We argue that a better understanding of the striatum's development should help in assessing aberrant neurodevelopmental processes linked to the HD mutation.
ARTICLE | doi:10.20944/preprints202012.0086.v1
Subject: Engineering, Automotive Engineering Keywords: Velocity–pressure coupling; Fully coupled solvers; Augmented Lagrangian; two-phase flows; saddle point; projection method; preconditioning; smooth VOF
Online: 3 December 2020 (13:50:52 CET)
In this paper, we investigate the accuracy and robustness of three classes of methods for solving two-phase incompressible flows on a staggered grid. Here, the unsteady two-phase flow equations are simulated by finite volumes and penalty methods using implicit and monolithic approaches (such as the augmented Lagrangian and the fully coupled methods), where all velocity components and pressure variables are solved simultaneously (as opposed to segregated methods). The interface tracking is performed with a Volume-of-Fluid (VOF) method, using the Piecewise Linear Interface Construction (PLIC) technique. The in-house code Fugu is used for implementing the various methods. Our target application is the simulation of two-phase flows at high density and viscosity ratios, which are known to be challenging to simulate. The resulting monolithic strategies prove to be considerably better suited for these two-phase cases, and they also allow the use of larger time steps than segregated methods.
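Independently of the monolithic-versus-segregated comparison, the projection step itself can be sketched spectrally on a unit periodic grid: solve a Poisson equation for a pressure-like potential whose Laplacian equals the velocity divergence, then subtract its gradient. This is a generic Chorin-type projection for illustration, not the Fugu implementation.

```python
import numpy as np

def project_divergence_free(u, v):
    """Chorin-style projection on the unit periodic grid: subtract grad(phi)
    where laplacian(phi) = div(u), computed spectrally with FFTs."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)          # wavenumbers for domain [0,1)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid dividing the mean mode
    phi_h = -div_h / k2
    phi_h[0, 0] = 0.0
    return (np.fft.ifft2(uh - 1j * kx * phi_h).real,
            np.fft.ifft2(vh - 1j * ky * phi_h).real)
```

Segregated schemes apply this kind of projection once per time step, whereas the monolithic approaches of the paper solve velocity and pressure together; the projected field is divergence-free to machine precision either way.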
ARTICLE | doi:10.20944/preprints202307.2083.v1
Subject: Computer Science And Mathematics, Analysis Keywords: Parallel subgradient-like extragradient approach; Variational inequality problem; Inertial effect; Bregman relatively asymptotically nonexpansive mapping; Bregman distance; Bregman projection
Online: 2 August 2023 (08:07:21 CEST)
In a p-uniformly convex and uniformly smooth Banach space, let the pair of variational inequality and fixed point problems (VIFPPs) consist of two variational inequality problems (VIPs) involving two uniformly continuous and pseudomonotone mappings and two fixed point problems implicating two uniformly continuous and Bregman relatively asymptotically nonexpansive mappings. This article designs two parallel subgradient-like extragradient algorithms with inertial effect for solving this pair of VIFPPs, where each algorithm consists of two parts that are mutually symmetric in structure. Under mild conditions, we prove weak and strong convergence of the suggested algorithms, respectively, to a common solution of this pair of VIFPPs. Lastly, an illustrative example is furnished to verify the applicability and implementability of our proposed approaches.
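In the Euclidean (Hilbert-space) special case, the extragradient template reduces to Korpelevich's two-projection step. The sketch below uses a ball as the feasible set and omits the inertial terms, Bregman distances, and subgradient machinery of the paper.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of given radius (the feasible set C)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def extragradient(F, x0, tau, iters=500, project=project_ball):
    """Korpelevich extragradient for the VI: find x* in C with
    <F(x*), y - x*> >= 0 for all y in C (two projections per iteration)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - tau * F(x))   # predictor (extrapolation) step
        x = project(x - tau * F(y))   # corrector step
    return x
```

The extrapolation step is what lets the method handle monotone but non-symmetric operators, for which plain projected gradient iteration merely rotates without converging.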
ARTICLE | doi:10.20944/preprints202306.0388.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: matrix population model; Androsace albana; life cycle graph; population projection matrix; matrix average; constrained optimization; linear programming; exact solution
Online: 6 June 2023 (05:44:29 CEST)
Given several nonnegative matrices with a single pattern of allocation among their zero/nonzero elements, the average matrix should have the same pattern, too. This is the first tenet of the pattern-multiplicative average (PMA) concept, while the second suggests the multiplicative (or geometric) nature of the averaging. The original concept of PMA was motivated by the practice of matrix population models as a tool to assess population viability from long-term monitoring data. The task reduces to searching for an approximate solution to an overdetermined system of polynomial equations for the unknown elements of the average matrix G, hence to a nonlinear constrained minimization problem for the matrix norm. Former practical solutions faced certain technical problems, which required sophisticated algorithms yet returned acceptable estimates. The fresh idea of using the eigenvalue approximation also ensues from the basic equation of averaging, which determines the exact value of λ1(G), the dominant eigenvalue of the matrix G, together with the corresponding eigenvector. These are bound by well-known linear equations, which reduce the task to a standard problem of linear programming (LP). The LP approach is realized for 13 fixed-pattern matrices gained in a case study of Androsace albana, an alpine short-lived perennial monitored on permanent plots during 14 years. A standard software routine reveals the unique exact solution, rather than an approximate one, to the PMA problem, which turns the LP approach into “the best of versatile optimization tools”. The exact solution turns out to be peculiar in reaching the zero bounds for certain nonnegative entries of G, which deserves a modified problem formulation separating the lower bounds from zero.
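The two tenets (pattern preservation and multiplicative averaging) admit a naive elementwise sketch: a geometric mean taken over the common nonzero pattern. The paper's actual PMA problem couples the entries through the averaging equations and is solved by LP, so this is only a first illustration of the idea.

```python
import numpy as np

def pattern_multiplicative_average(matrices):
    """Elementwise geometric mean of same-pattern nonnegative matrices:
    zeros stay zero, nonzero entries are averaged multiplicatively."""
    stack = np.stack([np.asarray(M, dtype=float) for M in matrices])
    pattern = stack[0] > 0
    G = np.zeros_like(stack[0])
    # Geometric mean via the mean of logarithms on the nonzero pattern.
    G[pattern] = np.exp(np.mean(np.log(stack[:, pattern]), axis=0))
    return G
```

Preserving the zero pattern keeps the averaged matrix interpretable as a life-cycle graph of the same species, which is the biological point of the first tenet.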
ARTICLE | doi:10.20944/preprints202106.0029.v1
Subject: Computer Science And Mathematics, Discrete Mathematics And Combinatorics Keywords: discrete-structured population; matrix population model; population projection matrices; calibration; net reproductive rate; reproductive uncertainty; colony excavation; Diophantine systems
Online: 1 June 2021 (11:49:45 CEST)
The notion of a potential-growth indicator came into being in the field of matrix population models long ago, almost simultaneously with the pioneering Leslie model for age-structured population dynamics, albeit the term was coined and the theory developed only in recent years. The indicator represents an explicit function, R(L), of the elements of matrix L that indicates the position of the spectral radius of L relative to 1 on the real axis, thus signifying population growth, decline, or stabilization. Some indicators have turned out to be useful in theoretical layouts and practical applications prior to calculating the spectral radius itself. The most senior (1994) and popular indicator, R0(L), is known as the net reproductive rate, and we consider two more, R1(L) and RRT(L), developed later on. All three differ in their simplicity and level of generality, and we illustrate them with a case study of Calamagrostis epigeios, a long-rhizome perennial weed actively colonizing open spaces in the temperate zone. While R0(L) and R1(L) fail, respectively, because of complexity and insufficient generality, RRT(L) does succeed, justifying the merit of indication.
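The net reproductive rate indicator can be sketched directly: split the projection matrix L = T + F into transitions and fertilities, form R0(L) as the spectral radius of F(I − T)⁻¹, and check that R0 − 1 and λ1 − 1 agree in sign. The matrices below are illustrative, not the Calamagrostis epigeios data.

```python
import numpy as np

def net_reproductive_rate(T, F):
    """R0 for a projection matrix L = T + F (T: transitions, F: fertilities):
    the spectral radius of F (I - T)^{-1}."""
    N = np.linalg.inv(np.eye(T.shape[0]) - T)   # fundamental matrix
    return np.max(np.abs(np.linalg.eigvals(F @ N)))

def spectral_radius(L):
    return np.max(np.abs(np.linalg.eigvals(L)))
```

The value of the indicator is that R0 is often available in closed form from the matrix entries, so the growth/decline verdict precedes any eigenvalue computation.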
ARTICLE | doi:10.20944/preprints201902.0113.v1
Subject: Environmental And Earth Sciences, Geography Keywords: Cascais tide gauge; sea level rise; sea level acceleration; sea level projection; SLR probability density function; uplift derived from SLR
Online: 13 February 2019 (10:45:09 CET)
Data collected at the Cascais tide gauge, located on the west coast of mainland Portugal, have been analyzed and sea level rise rates have been updated. Based on a bootstrapping linear regression model and on polynomial adjustments, the time series are used to calculate different empirical projections for the 21st-century sea level rise by estimating the initial velocity and its corresponding acceleration. The results are consistent with an accelerated sea level rise, showing evidence of a faster rise than previous century estimates. Based on different numerical methods of second-order polynomial fitting, it is possible to build a set of projection models of relative sea level rise. Applying the same methods to regional sea level anomalies from satellite altimetry, additional projections are also built with good consistency. Both data sets, tide gauge and satellite altimetry data, enabled the development of an ensemble of projection models. The relative sea level rise projections are crucial for national coastal planning and management, since extreme sea level scenarios can potentially cause erosion and flooding. Based on absolute vertical velocities obtained by integrating global sea level models, neo-tectonic studies, and permanent Global Positioning System (GPS) station time series, it is possible to transform relative into absolute sea level rise scenarios, and vice versa, allowing the generation of absolute sea level rise projection curves and their comparison with already established global projections. The sea level rise observed at the Cascais tide gauge has always shown a significant correlation with global sea level rise observations, evidencing relatively low rates of composed vertical land velocity from tectonic and post-glacial isostatic adjustment, and residual synoptic regional dynamic effects rather than a trend.
An ensemble of sea level projection models for the 21st century is proposed, with corresponding probability density functions, for both relative and absolute sea level rise on the west coast of mainland Portugal.
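The velocity-and-acceleration projection described above can be sketched numerically. The following minimal illustration uses entirely synthetic (hypothetical) tide gauge data to fit a second-order polynomial to an annual mean sea level series and extrapolate it to 2100; it does not reproduce the paper's bootstrapping procedure or its actual rates.

```python
import numpy as np

# Hypothetical annual mean sea level series (mm), generated with an assumed
# velocity (2.1 mm/yr) and acceleration (0.02 mm/yr^2) plus observation noise.
rng = np.random.default_rng(0)
years = np.arange(1950, 2020)
t = years - years[0]
msl = 2.1 * t + 0.5 * 0.02 * t**2 + rng.normal(0, 5, t.size)

# Second-order polynomial fit: msl ~ c0 + v*t + (a/2)*t^2
coeffs = np.polyfit(t, msl, deg=2)        # returns [a/2, v, c0]
accel, vel = 2 * coeffs[0], coeffs[1]

# Empirical projection of relative sea level rise at year 2100
proj_2100 = np.polyval(coeffs, 2100 - years[0])
print(f"velocity ~ {vel:.2f} mm/yr, acceleration ~ {accel:.3f} mm/yr^2")
```

An ensemble of such fits over bootstrap resamples of the series would then yield the probability density function of the projection.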
ARTICLE | doi:10.20944/preprints202303.0297.v2
Subject: Computer Science And Mathematics, Computational Mathematics Keywords: linear programming; apex method; iterative method; projection-type method; Fejér mapping; parallel algorithm; cluster computing system; scalability evaluation; Netlib-LP repository
Online: 24 March 2023 (03:48:13 CET)
The article presents a new scalable iterative method for linear programming called the “apex method”. The key feature of this method is that it constructs a path close to optimal on the surface of the feasible region, from a certain starting point to the exact solution of the linear programming problem. The optimal path is the path of minimum length according to the Euclidean metric. The apex method is based on the predictor-corrector framework and proceeds in two stages: quest (predictor) and target (corrector). The quest stage calculates a rough initial approximation of the solution of the linear programming problem. The target stage refines this initial approximation to a given precision. The main operation used in the apex method calculates the pseudoprojection, a generalization of the metric projection onto a convex closed set. This operation is used in both the quest and target stages. A parallel algorithm using a Fejér mapping to compute the pseudoprojection is presented, and an analytical estimate of its degree of parallelism is obtained. An algorithm implementing the target stage is also given, and its convergence is proven. An experimental study of the scalability of the apex method on a cluster computing system is described, and the results of applying the apex method to problems from the Netlib-LP repository are presented.
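The pseudoprojection operation at the heart of the apex method can be sketched for the special case of a polytope {x : Ax ≤ b}. The following minimal illustration, with hypothetical toy constraints, iterates one plausible Fejér mapping (the average of the metric projections onto the currently violated half-spaces); it is not the paper's parallel algorithm.

```python
import numpy as np

def fejer_map(x, A, b):
    """One application of a Fejér mapping for the polytope {x : Ax <= b}:
    average the metric projections onto the half-spaces whose constraints
    x currently violates."""
    residuals = A @ x - b
    violated = residuals > 1e-12
    if not violated.any():
        return x, True
    norms_sq = (np.linalg.norm(A[violated], axis=1) ** 2)[:, None]
    steps = (residuals[violated, None] / norms_sq) * A[violated]
    return x - steps.mean(axis=0), False

def pseudoprojection(x, A, b, max_iter=10000):
    """Iterate the Fejér mapping until x (approximately) enters the polytope."""
    for _ in range(max_iter):
        x, feasible = fejer_map(x, A, b)
        if feasible:
            break
    return x

# Toy example: project an exterior point onto the unit box [0, 1]^2
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
z = pseudoprojection(np.array([3.0, -2.0]), A, b)   # converges toward (1, 0)
```

Because each half-space projection is independent, the per-iteration work parallelizes naturally over the violated constraints, which is the property the paper exploits on a cluster system.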
ARTICLE | doi:10.20944/preprints202307.1835.v1
Subject: Computer Science And Mathematics, Analysis Keywords: Modified inertial-type subgradient extragradient method; Variational inequality problem; Finite Bregman relatively nonexpansive mappings; Bregman relatively demicontractive mapping; Bregman distance; Bregman projection.
Online: 27 July 2023 (05:46:03 CEST)
In this paper, we design two inertial-type subgradient extragradient algorithms with a line-search process for solving pseudomonotone variational inequality problems (VIPs) and the common fixed-point problem (CFPP) of finitely many Bregman relatively nonexpansive mappings and a Bregman relatively demicontractive mapping in p-uniformly convex and uniformly smooth Banach spaces, which are more general than Hilbert spaces. Under mild conditions, we derive weak and strong convergence of the suggested algorithms to a common solution of the VIPs and the CFPP, respectively. Additionally, an illustrative example is furnished to demonstrate the feasibility and implementability of our proposed methods.
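As background for readers less familiar with extragradient schemes, the following minimal sketch shows the classical (non-inertial, non-subgradient) extragradient step in R^2, the simplest Hilbert-space special case of the setting above. The operator F, step size, and starting point are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# F(x) = Ax with a skew-symmetric A is monotone but not the gradient of any
# function; the solution of the VI over the unit ball is x* = 0.  A plain
# projected-gradient iteration fails here, while the extragradient
# predictor-corrector pair converges.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x

def proj_ball(x):
    """Metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([0.9, 0.3])
tau = 0.3                              # step size with tau * ||A|| < 1
for _ in range(200):
    y = proj_ball(x - tau * F(x))      # predictor (extrapolation) step
    x = proj_ball(x - tau * F(y))      # corrector step
```

The paper's algorithms replace the metric projection with the Bregman projection and add inertial and line-search terms to this basic two-step pattern.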
ARTICLE | doi:10.20944/preprints201910.0145.v1
Subject: Biology And Life Sciences, Forestry Keywords: clumping index; crown architecture; crown projection area; lidar-based crown metrics; discrete-return lidar; fire severity; leaf area density; post-fire effects
Online: 13 October 2019 (15:34:43 CEST)
Fire-tolerant eucalypt forests of south-eastern Australia are assumed to fully recover from even the most intense fires, but surprisingly few studies have quantitatively assessed that recovery. Accurate assessment of the horizontal and vertical attributes of tree crowns after fire is essential to understand a fire's legacy effects on tree growth and forest structure. In this study, we quantitatively assessed individual tree crowns 8.5 years after a 2009 wildfire that burnt extensive areas of eucalypt forest in temperate Australia. We used airborne lidar data, validated with field measurements, to estimate multiple metrics quantifying the cover, density, and vertical distribution of individual tree crowns in 51 plots of 0.05 ha in fire-tolerant eucalypt forest across four wildfire severity types (unburnt, low, moderate, high). Significant differences in the field-assessed mean height of fire scarring as a proportion of tree height, and in the proportions of trees with epicormic (stem) resprouts, were consistent with the gradation in fire severity. Linear mixed-effects models indicated persistent effects of both moderate- and high-severity wildfire on tree crown architecture. Trees at high-severity sites had significantly less crown projection area and live crown width as a proportion of total crown width than those at unburnt and low-severity sites. Significant differences in lidar-based metrics (crown cover, evenness, leaf area density profiles) indicated that tree crowns at moderate- and high-severity sites were comparatively narrow and more evenly distributed down the tree stem. These conical crowns contrasted sharply with the rounded crowns of trees at unburnt and low-severity sites, and likely influenced both tree productivity and the accuracy of biomass allometric equations for nearly a decade after the fire.
Our data provide a clear example of the utility of airborne lidar data for quantifying the impacts of disturbances at the scale of individual trees. Quantified effects of contrasting fire severities on the structure of resprouter tree crowns provide a strong basis for interpreting post-fire patterns in forest canopies and vegetation profiles in lidar and other remotely-sensed data at larger scales.
ARTICLE | doi:10.20944/preprints202004.0032.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: indoor positioning system; image-based positioning system; computer vision; SIFT; feature detection; feature description; cell phone camera; PnP problem; projection matrix; epipolar geometry; OpenCV
Online: 3 April 2020 (11:59:48 CEST)
As people grow accustomed to effortless outdoor navigation, there is rising demand for a similar capability indoors. Unfortunately, indoor localization, one of the necessary prerequisites for navigation, remains a problem without a clear solution. In this article we propose a method for an indoor positioning system that uses a single image. This is made possible by a small preprocessed database of images with known control points, the only preprocessing needed. Using feature detection with the SIFT algorithm, we search the database for the image most similar to the image taken by the user. The pair of images is then used to find the coordinates of the database image by solving the PnP problem. Furthermore, the projection and essential matrices are determined, allowing the user image to be localized, i.e., the position of the user in the indoor environment to be determined. The benefit of this approach lies in the single image being the only input required from the user and in the absence of any new on-site infrastructure, which enables a simpler realization for building management.
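The geometric core of such a pipeline, estimating a projection matrix from known control points, can be sketched without OpenCV. The following minimal illustration applies the Direct Linear Transform (DLT) to synthetic 3D-2D correspondences; the paper's SIFT matching and PnP solving steps are not reproduced, and all data here are hypothetical.

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Recover the 3x4 projection matrix P (up to scale) from six or more
    3D control points and their 2D image observations via the DLT: each
    correspondence contributes two linear equations in the entries of P,
    and the solution is the null-space vector of the stacked system."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)

# Synthetic check: project points with a known P, then recover it.
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
pts3d = np.random.default_rng(1).uniform(-1, 1, (8, 3))
homog = np.hstack([pts3d, np.ones((8, 1))])
proj = homog @ P_true.T
pts2d = proj[:, :2] / proj[:, 2:3]
P_est = dlt_projection_matrix(pts3d, pts2d)
P_est /= P_est[2, 3] / P_true[2, 3]      # fix the overall scale and sign
```

With noisy real correspondences one would instead use a robust PnP solver (e.g. RANSAC-wrapped, as OpenCV provides), but the linear system above is the underlying model.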
ARTICLE | doi:10.20944/preprints202112.0067.v1
Subject: Physical Sciences, Mathematical Physics Keywords: category; topos; presheaf; probability; validity; truth; conditional expectation; measurement; quantum mechanics; information; entropy; reduction; collapse; projection; logic; algebra; Wiener; Bayes; Boole; Heyting; Brownian motion; filter; crible; capacity; reservation; context
Online: 6 December 2021 (12:13:38 CET)
Research toward a theory of quantum gravity has recently led to the use of presheaf toposes. Quantum uncertainty is linked to the truth values of intuitionistic logic. This paper proposes transposing this model into a classical probability context, that of conditional mathematical expectations. A simulation of Brownian motion is offered for illustrative purposes.
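A Brownian motion illustration of conditional expectation can be sketched as follows. This is a minimal, hypothetical example, not the paper's construction: averaging the terminal values of simulated paths that share (approximately) the same value at an intermediate time recovers that value, reflecting the martingale property E[W_T | W_t] = W_t.

```python
import numpy as np

# Simulate many standard Brownian paths on [0, 1] by summing Gaussian
# increments, then condition on the value at an intermediate time.
rng = np.random.default_rng(42)
n_paths, n_steps, dt = 20000, 100, 0.01
W = np.cumsum(rng.normal(0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

t_idx = 49                                  # conditioning time t = 0.5
Wt, WT = W[:, t_idx], W[:, -1]
band = np.abs(Wt - 0.3) < 0.05              # paths with W_t near 0.3
cond_mean = WT[band].mean()                 # empirical E[W_T | W_t ~ 0.3]
```

The restriction to the band of paths plays the role of selecting a context: within it, the future average collapses to the observed present value, up to Monte Carlo error.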
ARTICLE | doi:10.20944/preprints202308.1564.v2
Subject: Physical Sciences, Mathematical Physics Keywords: Spinor; vector; complex vector; spin; quantum mechanics; isotropic vectors; Hilbert space; division of vector; inversion of vector; three-dimensional number; quaternion; ket (physics); inner product; outer product; cross product; dot product; geometric algebra; vector projection; Stern-Gerlach analyzer; Bloch sphere; General quantum state vector
Online: 30 August 2023 (02:36:14 CEST)
Spinors are used for the computation of probability in quantum mechanics. They are treated as elements of a complex vector space, in contrast to a real vector space. Spinor theory is abstract mathematics with an ambiguous interpretation. The overall phase of a spinor does not affect the computation of probability in quantum mechanics. If two spinors have an overall phase of the imaginary number i, then they can be treated like vectors in a real vector space. The square of the magnitude of the number formed as the sum of the dot product and the term having basis vector i of the cross product of two such vectors is equal to the probability computed using the corresponding spinors. Therefore, the geometry of such spinors can easily be depicted in three-dimensional space, like vectors in a real vector space. Spinors are not isotropic vectors in Hilbert space. The sum of the dot product and cross product of two complex numbers is equivalent to the quotient of the division of one complex number by another. Similarly, the sum of the dot product and cross product of two vectors is the quotient of the division of one vector by another. From the rules of division of vectors, we can derive the rules of multiplication of vectors. The rules of multiplication and division of the basis vector i match those of the imaginary number i; therefore, the basis vector i is nothing but the imaginary number i. Multiplying the dot and cross products of two vectors by the second vector gives the components of the first vector that are parallel and orthogonal to the second vector. Vectors, like spinors, are also made up of complex numbers. The reason for taking the dot product of complex numbers in the process of computing the probability is to ignore the overall phase, in contrast to the phase difference; this is misconstrued as a complex vector space in quantum mechanics. The 3-D number, which is nothing but a spinor in a new notation, is an extension of vector algebra.
It can have a real number as a term in addition to the terms in i, j and k; in all other respects, it is a vector. 3-D numbers are part of the number system, like real numbers and complex numbers: real numbers are one-dimensional numbers, complex numbers are two-dimensional numbers, and 3-D numbers, as well as vectors and spinors, are three-dimensional numbers. As polarisation and spin are one and the same, we can directly apply 3-D numbers to the polarisation of light. 3-D numbers will greatly simplify the study of areas of science where three-dimensional space is involved.
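One concrete way to realise the "division of vectors" described above is through quaternion algebra, where the product of two pure quaternions (vectors) has a scalar part (minus the dot product) and a vector part (the cross product), and the inverse of a vector v is -v/|v|². The following minimal sketch, under that sign convention, is an illustrative interpretation and not the author's own notation.

```python
import numpy as np

def vec_mul(u, v):
    """Product of two vectors as pure quaternions:
    returns (scalar part, vector part) = (-u.v, u x v)."""
    return -np.dot(u, v), np.cross(u, v)

def vec_div(u, v):
    """Quotient u / v = u * v^{-1}, with v^{-1} = -v / |v|^2; the result
    again combines a dot-product term and a cross-product term."""
    return vec_mul(u, -v / np.dot(v, v))

i, j, k = np.eye(3)       # basis vectors as rows of the identity
s, w = vec_mul(i, i)      # basis vector i squares to a pure scalar -1,
                          # matching the imaginary unit i^2 = -1
```

Under this convention, dividing any vector by itself yields the scalar 1, and the products of i, j, k reproduce the familiar quaternion multiplication table.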