ARTICLE | doi:10.20944/preprints202308.2030.v1
Subject: Physical Sciences, Fluids And Plasmas Physics Keywords: aerodynamics; hypersonic flow; shock wave; physical and chemical processes; chemical kinetics; relaxation
Online: 30 August 2023 (08:54:09 CEST)
Steady-state one-dimensional flows of five-component air behind a normal shock wave are considered with a one-temperature model. A mathematical model is formulated to describe the relaxation of a five-component air mixture in the one-temperature non-equilibrium approximation. A numerical study of non-equilibrium flows of a reacting five-component air mixture behind shock waves at different altitudes and free-stream velocities is performed. The contribution of different types of reactions to the overall relaxation of the mixture is discussed, and the distributions of the flow macro-parameters behind the shock wave front are calculated. The lengths of the relaxation zones behind the shock wave front are compared for different initial conditions.
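For orientation, one-temperature models of this kind couple the steady conservation relations across the shock front with species relaxation equations; a standard textbook formulation (not necessarily the authors' exact notation, with Y_i the species mass fractions, m_i the species masses, and omega_i the chemical production rates) reads:

```latex
\rho_1 v_1 = \rho_2 v_2, \qquad
\rho_1 v_1^2 + p_1 = \rho_2 v_2^2 + p_2, \qquad
h_1 + \frac{v_1^2}{2} = h_2 + \frac{v_2^2}{2}, \qquad
\rho v \,\frac{\mathrm{d} Y_i}{\mathrm{d} x} = m_i\,\omega_i .
```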
ARTICLE | doi:10.20944/preprints202003.0368.v1
Subject: Chemistry And Materials Science, Food Chemistry Keywords: phase equilibrium; in vitro lipid digestion; fats and oils
Online: 25 March 2020 (04:14:17 CET)
The absorption of medium-chain fatty acids (MCFA) depends on the solubility of these components in the gastric fluid. Parameters such as the total MCFA concentration, carboxyl ionization level, and carbon chain length affect the solubility of these molecules. Moreover, the enzymatic lipolysis of solubilized triacylglycerol (TAG) molecules may depend on the carbon chain length of the fatty acid (FA) components and their positions on the glycerol backbone. The present study aimed to investigate the effect of electrolytes usually formed during the gastric digestion phase on the solubility of MCFA, and to evaluate the influence of the FA carbon chain length on the lipolysis rate during in vitro digestion simulation. The results showed that increasing electrolyte concentrations tends to decrease the mutual solubility of systems composed of caproic and caprylic fatty acids + sodium chloride, sodium bicarbonate, and potassium chloride solutions. We also observed that a conventional version of the thermodynamic UNIQUAC model was able to correlate the liquid-liquid phase behavior of the electrolyte solutions. Regarding the in vitro digestion simulation, the experimental data indicated that the action of the pancreatic enzyme occurred preferentially on TAG molecules comprising short- and medium-chain fatty acids.
ARTICLE | doi:10.20944/preprints201911.0148.v2
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Nash equilibria; game and mechanism design; simulated annealing; fuzzy ASA; artificial inference; global machine learning
Online: 18 November 2019 (07:34:00 CET)
This work presents significant results obtained by applying global optimization techniques to the design of finite normal-form games with mixed strategies. To that end, the Fuzzy ASA global optimization method is applied to several design examples of strategic games, demonstrating its effectiveness in obtaining payoff functions whose corresponding games present a previously established Nash equilibrium. In other words, the game designer becomes able to choose a convenient Nash equilibrium for a generic finite strategic game, and the proposed method computes payoff functions that will realize the desired equilibrium, making it possible for the players to reach the favorable conditions represented by the chosen equilibrium. Game theory is a very significant approach for modeling interactions between competing agents, and Nash equilibrium is a powerful solution concept, portraying situations in which joint strategies are optimal in the sense that no player can benefit from individually modifying their current strategy while the other players keep theirs unchanged; it is therefore natural to infer that the proposed method may be very useful for strategists in general. In summary, it is a genuine instance of artificial inference of payoff functions after a process of global machine learning applied to their numerical components.
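To make the design idea concrete, here is a minimal sketch (not the authors' Fuzzy ASA implementation; SciPy's generic annealer stands in) that searches for 2x2 bimatrix payoffs under which a chosen fully mixed profile satisfies the players' indifference conditions, and is therefore a Nash equilibrium:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Target fully mixed equilibrium: row plays p*, column plays q* (illustrative values).
p_star = np.array([0.6, 0.4])
q_star = np.array([0.3, 0.7])

def objective(z):
    # z packs the two 2x2 payoff matrices A (row player) and B (column player).
    A, B = z[:4].reshape(2, 2), z[4:].reshape(2, 2)
    # For a fully mixed profile, indifference over all pure strategies implies NE:
    # both row strategies earn the same against q*, both column strategies against p*.
    row_gap = (A @ q_star)[0] - (A @ q_star)[1]
    col_gap = (p_star @ B)[0] - (p_star @ B)[1]
    return row_gap**2 + col_gap**2

result = dual_annealing(objective, bounds=[(-5.0, 5.0)] * 8, seed=0)
A, B = result.x[:4].reshape(2, 2), result.x[4:].reshape(2, 2)
print("payoffs realizing the target equilibrium:\n", A, "\n", B)
```

For fully mixed profiles, equal expected payoffs across each player's pure strategies are exactly the equilibrium conditions, so driving both gaps to zero realizes the target equilibrium.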
ARTICLE | doi:10.20944/preprints202308.0747.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: automatic license plate detection and recognition; automatic vehicle logo detection and recognition; deep learning; transfer learning; convolutional neural network
Online: 9 August 2023 (10:54:13 CEST)
Recently, the number of vehicles on the road, especially in urban centers, has increased dramatically due to the growing trend towards urbanization. As a result, manual detection and recognition of vehicles (i.e., license plates and vehicle manufacturers) has become an arduous task beyond human capabilities. In this paper, we develop a system using transfer-learning-based deep learning techniques for automatic identification of Jordanian vehicles. The YOLOv3 (You Only Look Once) model was retrained using transfer learning to accomplish license plate detection, character recognition, and vehicle logo detection, while the VGG16 (Visual Geometry Group) model was retrained to accomplish vehicle logo recognition. To train and test these models, four datasets were collected. The first dataset consists of 7,035 Jordanian vehicle images, the second dataset consists of 7,176 Jordanian license plates, and the third dataset consists of 8,271 Jordanian vehicle images; these datasets were used to train and test the YOLOv3 model for Jordanian license plate detection, character recognition, and vehicle logo detection, respectively. The fourth dataset consists of 158,230 vehicle logo images used to train and test the VGG16 model for vehicle logo recognition. Precision, recall, and F-measure were used to evaluate the performance of the developed system, and the mean average precision (mAP) measure was used to evaluate the YOLOv3 model on the detection tasks (i.e., license plate detection and vehicle logo detection). For license plate detection, the precision, recall, F-measure, and mAP were 99.6%, 100%, 99.8%, and 99.9%, respectively, while for character recognition, the precision, recall, and F-measure were 100%, 99.9%, and 99.95%, respectively. The performance of the license plate recognition stage was evaluated by evaluating these two sub-stages as a sequence, where the precision, recall, and F-measure were all 99.8%. Furthermore, for vehicle logo detection, the precision, recall, F-measure, and mAP were 99%, 99.6%, 99.3%, and 99.1%, respectively, while for vehicle logo recognition, the precision, recall, and F-measure were all 98%. The performance of the vehicle logo recognition stage was evaluated by evaluating these two sub-stages as a sequence, where the precision, recall, and F-measure were 95.3%, 99.5%, and 97.4%, respectively.
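The retraining details are not spelled out in the abstract; a minimal Keras sketch of the VGG16 transfer-learning pattern described (frozen ImageNet base plus a new classification head; the class count is a placeholder) might look like:

```python
import tensorflow as tf

NUM_LOGO_CLASSES = 20  # placeholder; the actual number of logo classes is not stated here

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the ImageNet features, retrain only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_LOGO_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, ...) on the logo dataset
```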
ARTICLE | doi:10.20944/preprints202310.1986.v1
Subject: Medicine And Pharmacology, Clinical Medicine Keywords: brain diseases; wavelet; feature extraction; signal denoising; classification and diagnosis; time-frequency analysis
Online: 31 October 2023 (03:34:53 CET)
This paper first introduces the main classes of brain diseases and their main causes. It then turns to wavelet analysis, a mathematical technique that transforms data using wave-like basis functions and can decompose a signal into wavelet components, each associated with a specific scale or frequency. Wavelet analysis has been applied in many fields such as image processing, data analysis, and engineering, and it is also used in the analysis of biological signals. Applied to brain signals such as EEG or fMRI data, it breaks them down into frequency components at different scales and provides time-frequency localization. Wavelet denoising can effectively separate noise from the underlying brain signal, usually by a thresholding method. Pattern recognition can be enhanced by isolating salient features in the data, and researchers can train machine learning models on these wavelet-derived features to recognize patterns associated with different neurological disorders. Wavelet analysis can also track changes by continuously evaluating the frequency content of neuroimaging data over time. Through continuous efforts, wavelet theory and technology have become valuable tools in neuroscience and brain disease research.
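As a concrete illustration of the thresholding approach mentioned above (a generic sketch with PyWavelets, not the specific pipeline of any study cited here):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # noisy test signal

coeffs = pywt.wavedec(signal, "db4", level=4)      # multi-scale wavelet decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest detail level
thr = sigma * np.sqrt(2 * np.log(signal.size))     # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")             # reconstruct the denoised signal
```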
REVIEW | doi:10.20944/preprints202109.0376.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: Alzheimer’s disease; Parkinson’s disease; Aβ cascade hypothesis; a-synuclein aggregation and spreading; transcriptomics of nervous system
Online: 22 September 2021 (11:33:22 CEST)
Alzheimer’s and Parkinson's diseases (AD and PD) are among the most prevalent neurodegenerative diseases. One-third of PD patients are diagnosed with dementia, a pre-symptom of AD, but the underlying mechanism is elusive. Amyloid beta (Aβ) and α-synuclein are two of the most investigated proteins, whose pathological aggregation and spreading are crucial to the pathogenesis of AD and PD, respectively. Transcriptomic studies of the mammalian central nervous system shed light on gene expression profiles at the molecular level, addressing the complexity of neuronal morphologies and electrophysiological inputs/outputs. In the last decade, the booming of single-cell RNA sequencing techniques has helped to understand gene expression patterns, alternative splicing, novel transcripts, and signaling pathways in the nervous system at the single-cell level, providing insight into the molecular taxonomy and mechanistic targets of the degenerating nervous system. Here, we revisit the cell-cell transmission mechanisms of Aβ and α-synuclein in mediating disease propagation, summarize recent single-cell transcriptome sequencing work from different perspectives, and discuss its contribution to the understanding of neurodegenerative diseases.
REVIEW | doi:10.20944/preprints202311.1975.v1
Subject: Medicine And Pharmacology, Surgery Keywords: Surgical robotics; Laparoscopy Bariatric; Bariatric techniques; Surgical complications; Bariatric revision; Obesity and comorbidities
Online: 30 November 2023 (10:41:34 CET)
This article examines the evolution of bariatric surgery, with a focus on emerging technologies such as robotics and laparoscopy. In the case of gastric bypass, no significant differences have emerged between the two techniques in terms of hospitalization duration, weight loss, weight regain, or 30-day mortality. Robotic surgery, while requiring more time in the operating room, has been associated with lower rates of bleeding, mortality, transfusions, and infections. In revisional bariatric surgery, the robotic approach has shown fewer complications, shorter hospital stays, and a reduced need for conversion to open surgery. In the case of sleeve gastrectomy, robotic procedures have required more time and longer post-operative stays but have recorded lower rates of transfusions and bleeding compared to laparoscopy. However, robotic surgeries have proven to be more costly and potentially more complex in terms of post-operative complications. The review has also addressed the topic of the single anastomosis duodeno-ileal switch (SADIS), finding comparable results between robotic and laparoscopic techniques, although robotic procedures have required more time in the operating room. Robotic technology has proven to be safe and effective, albeit with slightly longer operating times in some cases.
ARTICLE | doi:10.20944/preprints201803.0101.v2
Subject: Chemistry And Materials Science, Chemical Engineering Keywords: onion; drying; bioactive; nutritional and organoleptic
Online: 9 April 2018 (09:51:30 CEST)
Onion (Allium cepa L.) is a strong-flavoring vegetable consumed in different ways, mainly owing to its distinctive flavor, or simply its pungency. Onion also contains important natural compounds with medical functions such as inhibition of bone resorption and a lower risk of cardiovascular disease and cancer, an importance directly related to its high content of organo-sulphur compounds. The shelf life of fresh onion bulbs is short, about two weeks at ambient storage conditions in the Fogera district, Amhara region, Ethiopia, mainly due to the high moisture content of fresh onion bulbs. Postharvest loss of onion bulbs reaches up to 50% in the production season in the Fogera district. Consequently, onion bulbs have extremely variable market prices between the production season and the off-season in the district, which directly affects both growers and consumers. In this study, the effects of different drying techniques on the nutritional and volatile components of onion were evaluated. The effects of different drying techniques on the protein, carbohydrate, total sugar, fat, pyruvic acid, ascorbic acid, total phenol, and total flavonol contents, as well as the rehydration ratio, color, and sensory properties of onion slices, were evaluated and found insignificant (P > 0.05) for microwave and modified direct solar dryers, taking fresh onion bulbs as a control. However, the oven drying method had a significant effect on onion physicochemical quality attributes (P < 0.05) compared with fresh onion bulbs.
ARTICLE | doi:10.20944/preprints202209.0014.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: No-show; Medical Appointments; Healthcare; Artificial Intelligence; Data processing and management
Online: 1 September 2022 (08:57:07 CEST)
No-show appointments in healthcare are a problem faced by medical centers around the world, and understanding the factors associated with no-show behavior is essential. In recent decades, artificial intelligence has taken its place in the medical field, and machine learning algorithms can work as an efficient tool to understand patient behavior and to achieve better medical appointment allocation in scheduling systems. In this work, we provide a systematic literature review (SLR) of machine learning techniques applied to no-show appointments, aiming to establish the current state of the art. Based on an SLR following the Kitchenham methodology, 24 articles were found and analyzed, and the characteristics of the databases, algorithms, and performance metrics of each study were synthesized. Results regarding which factors have a higher impact on missed appointment rates were analyzed as well. The results indicate that the most appropriate algorithms for building the models are decision tree algorithms. Furthermore, the most significant determinants of no-show were the patient's age, whether the patient missed a previous appointment, and the interval between scheduling and the appointment.
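A minimal sketch of the favored approach, using scikit-learn with synthetic stand-ins for the determinants the review highlights (age, prior no-show, scheduling lead time); the label rule is invented purely to make the example runnable:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins: patient age, whether a previous appointment was missed,
# and the lead time (days) between scheduling and the appointment.
X = np.column_stack([rng.integers(18, 90, n),
                     rng.integers(0, 2, n),
                     rng.integers(0, 120, n)])
# Toy label rule just for runnability; real models are fit to clinic data.
y = ((X[:, 1] == 1) & (X[:, 2] > 30)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```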
ARTICLE | doi:10.20944/preprints202106.0698.v1
Subject: Medicine And Pharmacology, Cardiac And Cardiovascular Systems Keywords: Machine Learning, Deep Learning, Syntactic Pattern Recognition, Pattern Primitives and Heart Disease
Online: 29 June 2021 (11:44:07 CEST)
Cardiovascular disease (CVD) may sometimes cause unexpected loss of life. It affects the heart and blood vessels of the body and is a major cause of human death, so early detection of this disease is necessary to protect patients' lives. In this chapter, two entirely different methods are proposed for the detection of heart disease: the first is a syntactic pattern recognition approach based on grammatical concepts, and the second is a machine learning approach. In the syntactic pattern recognition approach, the ECG wave from different leads is first decomposed into pattern primitives based on diagnostic criteria. These primitives are then used as terminals of the proposed grammar and serve as its input. The parsing table is created in tabular form and finally indicates whether the patient has any of the diseases or is normal; here, five diseases besides normal are considered. Different machine learning (ML) approaches may be used for detecting patients with CVD and for assisting healthcare systems; they are useful for learning and utilizing the patterns discovered in large databases. ML is applied to a set of information in order to recognize underlying relationship patterns in the information set; this is basically a learning stage, after which unknown incoming sets of patterns can be tested. Due to its self-adaptive structure, deep learning (DL) can process information with minimal processing time; DL exemplifies the use of neural networks. A predictive model follows DL techniques for analyzing and assessing patients with heart disease. A hybrid approach based on a convolutional layer and a Gated Recurrent Unit (GRU) is used in this work for diagnosing heart disease.
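A minimal Keras sketch of the Conv1D-plus-GRU hybrid described (window length, lead count, and layer sizes are placeholders, not the paper's configuration):

```python
import tensorflow as tf

TIMESTEPS, LEADS, CLASSES = 500, 12, 6  # placeholders: ECG window, leads, 5 diseases + normal

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(TIMESTEPS, LEADS)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GRU(64),                      # recurrent summary of the convolved sequence
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```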
ARTICLE | doi:10.20944/preprints201810.0572.v1
Subject: Engineering, Control And Systems Engineering Keywords: wind turbine system; hydroelectric plant simulator; model-based control; data-driven approach; self-tuning control; robustness and reliability
Online: 24 October 2018 (11:26:20 CEST)
Interest in the use of renewable energy resources is increasing, especially towards wind and hydro power, which should be efficiently converted into electric energy via suitable technology tools. To this aim, self-tuning control techniques represent viable strategies, given the features of these nonlinear dynamic processes, which work over a wide range of operating conditions and are driven by stochastic inputs, excitations, and disturbances. Some of the considered methods were already verified on wind turbine systems, and important advantages may thus derive from the appropriate implementation of the same control schemes for hydroelectric plants. This is the key point of the work, which provides guidelines on the design and application of these control strategies to these energy conversion systems. Indeed, investigations related to wind and hydraulic energy seem to share few common aspects, leading to little exchange of possible common points; this is particularly true of the more established wind area when compared with hydroelectric systems. The work therefore recalls the models of the wind turbine and the hydroelectric system and investigates the application of different control solutions. The scope is to analyse common points in the control objectives and the results achievable from the application of different solutions. Another important point of this investigation is the analysis of the exploited benchmark models, their control objectives, and the development of the control solutions. The working conditions of these energy conversion systems are also taken into account in order to highlight the reliability and robustness characteristics of the developed control strategies, which are especially interesting for the remote and relatively inaccessible locations of many installations.
ARTICLE | doi:10.20944/preprints202009.0373.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: active power loss; total generation cost; emission index; optimal power flow; equilibrium optimizer; solar PV integrated IEEE 30-bus system; wind integrated IEEE 30-bus system; hybrid wind and solar PV integrated IEEE 30-bus system
Online: 17 September 2020 (05:15:42 CEST)
Over the last decades, the energy market around the world has been reshaped to accommodate the high penetration of renewable energy resources. Although renewable energy sources have brought various benefits, including the low operating cost of wind and solar PV power plants and reduced environmental risks compared with conventional power resources, they have introduced a wide range of difficulties in power system planning and operation. The classical optimal power flow (OPF) problem is inherently nonlinear, and integrating renewable energy resources with conventional thermal power generators escalates its difficulty due to the uncertain and intermittent nature of these resources. To address the complexity of integrating renewable energy resources into classical electric power systems, two probability distribution functions (Weibull and lognormal) are used to forecast the power output of wind and solar photovoltaic plants, respectively. OPF including renewable energy is formulated as both a single-objective and a multi-objective problem in which several objective functions are considered, such as minimizing fuel cost, emission, real power loss, and voltage deviation. Real power generation, bus voltages, load tap changer ratios, and shunt compensator values are optimized under various power system constraints. This paper aims to solve the OPF problem and examines the effect of renewable energy resources on the above-mentioned objective functions. Models of a wind-integrated IEEE 30-bus system, a solar-PV-integrated IEEE 30-bus system, and a hybrid wind and solar PV integrated IEEE 30-bus system are solved using the equilibrium optimizer (EO) technique and five other heuristic search methods. A comparison of the simulation and statistical results of EO with the other optimization techniques shows that EO is more effective and superior.
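As an illustration of the Weibull-based wind forecasting step (illustrative shape/scale values and a generic turbine power curve, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
k, c = 2.0, 9.0                   # illustrative Weibull shape and scale (m/s)
v = c * rng.weibull(k, 10_000)    # sampled wind speeds

# Generic turbine power curve: cut-in 3, rated 12, cut-out 25 m/s; rated power 3 MW.
v_in, v_r, v_out, P_r = 3.0, 12.0, 25.0, 3.0
P = np.where(v < v_in, 0.0,
    np.where(v < v_r, P_r * (v**3 - v_in**3) / (v_r**3 - v_in**3),
    np.where(v < v_out, P_r, 0.0)))

print(f"expected wind power: {P.mean():.2f} MW")
```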
ARTICLE | doi:10.20944/preprints202009.0344.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: active power loss; total generation cost; emission index; optimal power flow; equilibrium optimizer; solar PV integrated IEEE 30-bus system; wind integrated IEEE 30-bus system; hybrid wind and solar PV integrated IEEE 30-bus system
Online: 16 September 2020 (03:50:46 CEST)
Over the last decades, the energy market around the world has been reshaped to accommodate the high penetration of renewable energy resources. Although renewable energy sources have brought various benefits, including the low operating cost of wind and solar PV power plants and reduced environmental risks compared with conventional power resources, they have introduced a wide range of difficulties in power system planning and operation. The classical optimal power flow (OPF) problem is inherently nonlinear, and integrating renewable energy resources with conventional thermal power generators escalates its difficulty due to the uncertain and intermittent nature of these resources. To address the complexity of integrating renewable energy resources into classical electric power systems, two probability distribution functions (Weibull and lognormal) are used to forecast the power output of wind and solar photovoltaic plants, respectively. OPF including renewable energy is formulated as both a single-objective and a multi-objective problem in which several objective functions are considered, such as minimizing fuel cost, emission, real power loss, and voltage deviation. Real power generation, bus voltages, load tap changer ratios, and shunt compensator values are optimized under various power system constraints. This paper aims to solve the OPF problem and examines the effect of renewable energy resources on the above-mentioned objective functions. Models of a wind-integrated IEEE 30-bus system, a solar-PV-integrated IEEE 30-bus system, and a hybrid wind and solar PV integrated IEEE 30-bus system are solved using the equilibrium optimizer (EO) technique and five other heuristic search methods. A comparison of the simulation and statistical results of EO with the other optimization techniques shows that EO is more effective and superior.
CONCEPT PAPER | doi:10.20944/preprints202012.0535.v1
Subject: Social Sciences, Safety Research Keywords: Employer Preparedness, health and safety, emergencies and disasters, planning, Total Worker Health
Online: 21 December 2020 (15:45:33 CET)
Background: Recent disasters have demonstrated gaps in employers’ preparedness to protect employees and promote their well-being in the face of emergencies and disasters affecting the workplace and their communities. Total Worker Health (TWH), a comprehensive perspective developed by the National Institute for Occupational Safety and Health, is a helpful framework for addressing employer preparedness. It includes attention to health and safety at work, and the promotion of the health and well-being of the employee in the context of social determinants of health, such as work-life balance. Methods: TWH concepts, including the domains of TWH and the TWH Hierarchy of Controls, were investigated for their relevance to protecting employees and promoting their well-being during and after crises such as weather disasters, pandemics, and acts of terrorism. Building upon TWH concepts, an employer preparedness framework and model is proposed. Findings: The Model emphasizes upstream prevention, workplace-community linkages, social and economic impacts, and employer leadership through a cyclical planning process. Conclusions/Application to Practice: The Model can assist employers in advancing their preparedness for all hazards through self-assessment and planning agendas based upon the proposed domains.
ARTICLE | doi:10.20944/preprints202311.1860.v1
Subject: Engineering, Energy And Fuel Technology Keywords: Solid-state batteries; lithium halides; temperature; pressure; In-situ XRD and XPS
Online: 29 November 2023 (10:51:31 CET)
Abstract: In recent years, lithium-ion solid-state batteries have demonstrated significant advancements in properties such as safety, long-term endurance, and energy density. These properties depend on the solid-state electrolyte and especially on ionic transport at its interfaces with the cathode and anode materials. Solid-state electrolytes based on lithium halides offer new opportunities due to unique features such as a broad electrochemical stability window, high lithium-ion conductivity, and elasticity at temperatures close to the melting point, which help to improve lithium-ion transport at interfaces. A comparative study of lithium indium halide (Li3InCl6) electrolytes synthesized by a mechano-thermal method using different optimization parameters revealed a significant effect of ball-milling time, temperature, and pressure on lithium-ion transport. Based on Electrochemical Impedance Spectroscopy (EIS) data in the temperature range of 25-100 °C, the optimized Li3InCl6 electrolyte phase demonstrates high ionic conductivity, reaching 0.98 mS cm−1 at room temperature. However, at 70 °C a phase transformation was observed, leading to significant changes in the activation energy for lithium-ion transport. In-situ X-ray diffraction and in-situ/operando X-ray photoelectron spectroscopy confirmed the temperature-dependent behavior of the synthesized Li3InCl6. These observations provide critical information for practical applications of solid-state electrolytes and nanocomposites based on Li3InCl6 over a broad temperature range in lithium-ion solid-state batteries with improved morphology, chemical interactions, and structural integrity.
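As a sketch of how activation energies are typically extracted from such EIS data, assuming a simple Arrhenius form sigma = sigma0 * exp(-Ea / (kB * T)) and illustrative (not measured) conductivities:

```python
import numpy as np
from scipy.stats import linregress

kB = 8.617333e-5  # Boltzmann constant, eV/K

# Illustrative EIS-derived conductivities (S/cm) vs. temperature (K); not measured data.
T = np.array([298.0, 313.0, 333.0, 353.0])
sigma = np.array([0.98e-3, 1.8e-3, 3.9e-3, 7.5e-3])

# ln(sigma) is linear in 1/T; the slope gives -Ea/kB.
fit = linregress(1.0 / T, np.log(sigma))
Ea = -fit.slope * kB
print(f"activation energy: {Ea:.3f} eV")
```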
REVIEW | doi:10.20944/preprints202203.0285.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Large power transformer; condition monitoring; transformer fault diagnosis; diagnostic techniques; mechanical or electrical integrity of the core and windings
Online: 21 March 2022 (10:47:56 CET)
Large power transformers are generally associated with a maximum capacity rating of 100 MVA or higher. These large liquid-dielectric power transformers are custom-built pieces of equipment, thus very expensive, and backbone elements of the power grid. Permanently monitoring their condition, particularly in extreme cases such as severe geomagnetic disturbances, enhances their electrical reliability and resilience and guarantees efficient management of their life cycle. However, some traditional monitoring/diagnosis techniques have singular features when applied to large power transformers and their interlinked subsystems. In this context, and since this information is rarely highlighted and compiled in the literature, this paper reviews the particularities of monitoring and diagnosing those assets.
ARTICLE | doi:10.20944/preprints202108.0347.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Parkinson’s Disease; Freeze of Gait; Deep Learning; Ensemble Learning; Wearable Sensor Data; Detection and Prediction
Online: 16 August 2021 (16:48:14 CEST)
Freezing of Gait (FOG) is an impairment that affects the majority of patients in the advanced stages of Parkinson’s Disease (PD). FOG can lead to sudden falls and injuries, negatively impacting the quality of life of patients and their families. Rhythmic Auditory Stimulation (RAS) can be used to help patients recover from FOG and resume normal gait, but it might be ineffective due to the latency between the start of a FOG event, its detection, and the initialization of RAS. We propose a system capable of both FOG prediction and detection using signals from tri-axial accelerometer sensors, useful for initializing RAS with minimal latency. We compared the performance of several time-frequency analysis techniques, including moving windows extracted from the signals, handcrafted features, Recurrence Plots (RP), Short Time Fourier Transform (STFT), Discrete Wavelet Transform (DWT), and Pseudo Wigner-Ville Distribution (PWVD), combined with Deep Learning (DL) based Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). We also propose three ensemble network architectures that combine all the time-frequency representations and DL architectures. Experimental results show that our ensemble architectures significantly improve performance compared with existing techniques. We also present the results of applying our method, trained on a publicly available dataset, to data collected from patients using wearable sensors in collaboration with A.T. Still University.
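For example, the STFT representation feeding such networks can be computed with SciPy (the sampling rate and window sizes are assumptions, and the random signal stands in for real accelerometer data):

```python
import numpy as np
from scipy.signal import stft

fs = 64.0                                  # assumed wearable accelerometer rate (Hz)
rng = np.random.default_rng(0)
acc = rng.standard_normal(int(10 * fs))    # stand-in for one 10 s accelerometer channel

# Short Time Fourier Transform: time-frequency magnitudes used as model input.
f, t, Z = stft(acc, fs=fs, nperseg=128, noverlap=64)
features = np.abs(Z)                       # spectrogram-like array (freq bins x time frames)
print(features.shape)
```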
ARTICLE | doi:10.20944/preprints202009.0729.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Data Envelopment Analysis; Machine learning; Optimization; Parametric and non-parametric methods; Supervised and unsupervised models; CVS model
Online: 30 September 2020 (08:19:51 CEST)
The main purpose of this paper is to propose a novel optimization model, combined with a new machine learning approach in the first section, to achieve the best results for financial institutions in the second section. Since the efficiency estimates derived from parametric and non-parametric methods are not consistent, this paper provides a scientific assessment in the optimization section and proposes a novel combined parametric and non-parametric model, a new experiment in the literature. A scientific assessment of banks is introduced, based on a combination of the efficiency measurement methods CCR (Charnes, Cooper, and Rhodes) or CRS (Constant Returns to Scale) and BCC (Banker, Charnes, and Cooper) or VRS (Variable Returns to Scale) in Data Envelopment Analysis (DEA), as well as the Stochastic Frontier Approach (SFA), for 65 banks from February to July 2020. For analyzing the performance of the parametric and non-parametric approaches, we considered linear regression and the Unreplicated Linear Functional Relationship (ULFR). In the machine learning section, a novel four-layer data-mining filtering preprocess is applied to selected supervised classification and unsupervised clustering algorithms to increase accuracy and to remove unrelated attributes and data. For the four preprocessing approaches of unsupervised attributes, supervised attributes, supervised instances, and unsupervised instances, we chose discretization, attribute selection, stratified remove-folds, and resample filters, respectively. Based on the nature of the suggested financial institution dataset and attributes, the most appropriate preprocessing filter in each layer for achieving the highest performance is suggested. Finally, the superior bank, the best-performing model, and the most accurate algorithm are identified. The results indicate that bank number 56 is the superior bank. Among the proposed techniques, the recommended novel CVS model, compared with the CCR-BCC and SFA models, has a stronger positive correlation with profit risk and shows higher coefficient-of-determination values. The Sequential Minimal Optimization (SMO) algorithm achieves the highest accuracy in all four suggested filtering layers.
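For reference, the CCR (CRS) efficiency score used here can be computed as an input-oriented envelopment linear program; a minimal sketch with toy data (not the 65-bank dataset) via scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs (banks), 2 inputs, 1 output; illustrative numbers only.
X = np.array([[20, 30], [15, 25], [40, 50], [22, 20], [30, 35]], float)  # inputs
Y = np.array([[100], [90], [160], [110], [120]], float)                  # outputs

def ccr_efficiency(j0):
    """Input-oriented CCR LP: min theta s.t. X'lam <= theta*x0, Y'lam >= y0, lam >= 0."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lambda_1..lambda_n]
    A_in = np.c_[-X[j0], X.T]                # sum_j lambda_j * x_j <= theta * x0
    A_out = np.c_[np.zeros(s), -Y.T]         # sum_j lambda_j * y_j >= y0
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for j in range(len(X)):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```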
ARTICLE | doi:10.20944/preprints201612.0050.v1
Subject: Engineering, Control And Systems Engineering Keywords: modelling and simulation for control; advanced control design; model–based and data-driven approaches; artificial intelligence; thermal unit nonlinear system
Online: 9 December 2016 (03:17:38 CET)
The paper presents the design and implementation of different advanced control strategies applied to a nonlinear model of a thermal unit. A data-driven grey-box identification approach provided the physically meaningful nonlinear continuous-time model, which represents the benchmark exploited in this work. The control problem of this thermal unit is important since it constitutes the key element of passive air conditioning systems. The advanced control schemes analysed in this paper are used to regulate the outflow air temperature of the thermal unit by exploiting the inflow air speed, whilst the inflow air temperature is considered an external disturbance. The reliability and robustness of the suggested control methodologies are verified with a Monte Carlo analysis simulating modelling uncertainty, disturbance, and measurement errors. The achieved results demonstrate the effectiveness and viable application of the suggested control solutions to air conditioning systems. The benchmark model represents one of the key issues of this study and is exploited for benchmarking different model-based and data-driven advanced control methodologies through extensive simulations. Moreover, this work highlights the main features of the proposed control schemes, while providing practitioners and heating, ventilating and air conditioning engineers with tools to design robust control strategies for air conditioning systems.
ARTICLE | doi:10.20944/preprints202211.0520.v1
Subject: Biology And Life Sciences, Biology And Biotechnology Keywords: Cryopreservation; Nitrogen vapor; Buffalo semen; Bangladeshi buffalo; Diluents and extenders.
Online: 29 November 2022 (01:25:56 CET)
Cryopreservation has been used extensively for cattle in Bangladesh, albeit no study has been conducted on cryopreservation techniques for buffalo. This study compares two freezing methods and the effects of diluents on buffalo semen quality. In the first freezing protocol, semen was frozen in two steps: from 37 °C to 5 °C for 30 minutes in a BLRI-developed equilibration chamber, and from 5 °C to −120 °C in a Styrofoam box using liquid nitrogen vapor at different heights (0.5, 1.5, 1.6, 2, and 3 inches) above the liquid nitrogen. In parallel, semen was frozen in a programmable freezer in three steps. The semen samples were then evaluated for motility and morphological quality by CASA. In another experiment, the efficacy of one locally developed Tris-fructose-egg-yolk-based diluent (TFE) and three commercial diluents (Andromed, Triladyl, and Steridyl) was evaluated. The highest number of motile sperm (62.67 ± 1.12; P < 0.01) and the highest progressive motility (38.97 ± 1.10; P < 0.001) were observed at 1.6 inches above the liquid nitrogen. There was no significant difference in overall, progressive, and slow motility between semen cryopreserved with the low-cost nitrogen vapor technique (57.49 ± 5.67, 38.70 ± 4.04, and 3.83 ± 0.63, respectively) and the expensive automated technique (65.94 ± 4.65, 45.54 ± 3.64, and 2.43 ± 0.36, respectively). The highest recovery and conception rates were observed in semen diluted with TFE (82.4% and 80%, respectively). Hence, the cryopreservation technique using nitrogen vapor and TFE diluent is a cost-effective and suitable option for freezing buffalo semen to produce superior semen for artificial insemination.
ARTICLE | doi:10.20944/preprints202108.0022.v1
Subject: Engineering, Automotive Engineering Keywords: permanent-magnet motors; electrical drives; torque and speed control; multiphase machine; 6-phase machine; field-oriented control; multiphase variable speed drive
Online: 2 August 2021 (11:52:35 CEST)
The paper presents a comparison of the two most widely used field-oriented control techniques for 6-phase electric drives, with their pros and cons as well as their differences in construction and behaviour. Both approaches have been realized. Frequency- and step-response analyses have been demonstrated on a 6-phase permanent magnet synchronous machine. Experimental results have been compared with simulations based on a mathematical model.
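Field-oriented control rests on rotating-frame current transforms; as a minimal illustration, the standard three-phase Park transform is sketched below (the 6-phase schemes compared in the paper extend this with additional subspaces, which are not shown):

```python
import numpy as np

def park(i_abc, theta):
    """Amplitude-invariant Park transform: phase currents -> rotating (d, q) frame."""
    ia, ib, ic = i_abc
    i_d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3) + ic*np.cos(theta + 2*np.pi/3))
    i_q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3) + ic*np.sin(theta + 2*np.pi/3))
    return i_d, i_q

# Balanced sinusoidal currents aligned with the rotor angle map to constant (d, q) values.
theta = 0.7
i_abc = [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)]
print(park(i_abc, theta))   # ~ (1.0, 0.0)
```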
ARTICLE | doi:10.20944/preprints201801.0143.v1
Subject: Physical Sciences, Mathematical Physics Keywords: current-quark masses; Helmholtz equation; roots of Bessel and Neumann functions
Online: 16 January 2018 (13:20:36 CET)
Current-quark masses are compared to the rest masses allowed by the Helmholtz equation in a polar model. Within the uncertainty of the current u quark mass determination, the current u quark mass coincides with the rest mass allowed by the Helmholtz equation in the polar model in accordance with the second root of the zeroth-order Neumann function. The current d quark mass coincides with the rest mass calculated in accordance with the third root of the zeroth-order Bessel function. On the basis of a comparison of these results with the results obtained earlier for ordinary real particles, the stability of u and d quarks is discussed.
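The specific roots invoked above are readily computed with SciPy, reading the zero-order Neumann and Bessel functions as Y0 and J0 (an interpretation of the abstract's wording):

```python
from scipy.special import jn_zeros, yn_zeros

j0_roots = jn_zeros(0, 3)   # first three positive roots of J0
y0_roots = yn_zeros(0, 2)   # first two positive roots of Y0

print("third root of J0 :", j0_roots[2])   # ~8.6537, tied above to the d quark mass
print("second root of Y0:", y0_roots[1])   # ~3.9577, tied above to the u quark mass
```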
ARTICLE | doi:10.20944/preprints201901.0267.v1
Subject: Engineering, Control And Systems Engineering Keywords: wind turbine system; hydroelectric plant simulator; model-based control; data-driven approach; self-tuning control; robustness and reliability
Online: 26 January 2019 (10:08:46 CET)
Interest in the use of renewable energy resources is increasing, especially towards wind and hydro power, which should be efficiently converted into electric energy via suitable technology tools. To this aim, data-driven control techniques represent viable strategies, given the features of these nonlinear dynamic processes, which work over a wide range of operating conditions and are driven by stochastic inputs, excitations, and disturbances. Some of the considered methods, such as fuzzy and adaptive self-tuning controllers, were already verified on wind turbine systems, and similar advantages may thus derive from their appropriate implementation and application to hydroelectric plants. These issues represent the key features of the work, which provides guidelines on the design and application of these control strategies to these energy conversion systems. The working conditions of these systems are also taken into account in order to highlight the reliability and robustness characteristics of the developed control strategies, which are especially interesting for the remote and relatively inaccessible locations of many installations.
REVIEW | doi:10.20944/preprints202202.0280.v2
Subject: Physical Sciences, Particle And Field Physics Keywords: Standard and Multi-Level Models; Wigner-Segal method; quark matter; generations of quarks and leptons; chronometric proton’s wave function; proton shape
Online: 20 April 2022 (09:10:20 CEST)
The Multi-Level Model (=MLM) was suggested by A. Levichev as a description of quarks and gluons. The review recalls MLM-terminology while MLM-findings (for quarks and leptons) are compared to the theoretical and experimental data as accepted by the Standard Model (=SM). MLM is based on Segal’s compact space-time U(2) and on the sequence of embeddings: U(2) into U(3), U(2) into U(4), etc. These groups were called the levels (of matter): U(2) - the 0th (that is, our mundane world), U(3) - the 1st, U(4) - the 2nd, etc. Such a convention matches the SM-quarks' generations list. Each SM-quark is viewed either as a sunken proton, or as a captured proton. The MLM-proton is elementary and indestructible (hence no need for confinement). For MLM-quarks, in terms of the notion of a ruling group, flavor and color are defined mathematically. The number of colors (and of flavors) is level-dependent. For levels U(3) through U(6) the correspondence with the SM-quarks is established. Three new quarks and two new leptons are predicted. The SM-puzzle of quark and lepton generations is solved. Using the Han-Nambu scheme, the notion of the quark’s electric charge is expressed in terms of color charges. The original part of the review suggests studying the proton’s properties (like mass, shape and inner pressure) on the basis of its wave function.
ARTICLE | doi:10.20944/preprints201709.0089.v1
Subject: Engineering, Control And Systems Engineering Keywords: Wind turbine simulator; data-driven and model-based approaches; fuzzy identification; on-line estimation; robustness and reliability
Online: 19 September 2017 (15:47:14 CEST)
Wind turbine plants are complex dynamic and uncertain processes driven by stochastic inputs and disturbances, as well as different loads represented by gyroscopic, centrifugal, and gravitational forces. Moreover, as their aerodynamic models are nonlinear, both modelling and control become challenging problems. On the one hand, high-fidelity simulators should contain different parameters and variables in order to accurately describe the main dynamic system behaviour, so the development of modelling and control for wind turbine systems should consider these complexity aspects. On the other hand, these control solutions have to include the main wind turbine dynamic characteristics without becoming too complicated. The main point of this paper is thus to provide two practical examples of the development of robust control strategies applied to a simulated wind turbine plant. Experiments with the wind turbine simulator and Monte Carlo tools represent the instruments for assessing the robustness and reliability of the developed control methodologies when the model-reality mismatch and measurement errors are also considered. Advantages and drawbacks of these regulation methods are also highlighted with respect to different control strategies via proper performance metrics.
ARTICLE | doi:10.20944/preprints201609.0038.v2
Subject: Environmental And Earth Sciences, Environmental Science Keywords: SAR offset and speckle tracking; glacier velocity; Radarsat-2 Wide Fine; Svalbard
Online: 10 September 2016 (05:03:14 CEST)
Glacier dynamics play an important role in the mass balance of many glaciers, ice caps, and ice sheets. In this study we exploit Radarsat-2 (RS-2) Wide Fine (WF) data to determine the surface speed of Svalbard glaciers in the winters of 2012/2013 and 2013/2014 using Synthetic Aperture Radar (SAR) offset and speckle tracking. The RS-2 WF mode combines the advantages of the large spatial coverage of the Wide mode (150 x 150 km) and the high pixel resolution (9 m) of the Fine mode, and thus has major potential for glacier velocity monitoring from space through offset and speckle tracking. The faster-flowing glaciers (1.95 m d⁻¹ to 2.55 m d⁻¹) studied in detail are Nathorstbreen, Kronebreen, Kongsbreen, and Monacobreen. Using our Radarsat-2 WF dataset, we compare the performance of two SAR tracking algorithms, namely the GAMMA Remote Sensing software and a custom-written MATLAB script (GRAY method) that has primarily been used in the Canadian Arctic. Both algorithms provide comparable results, especially for the faster-flowing glaciers and the termini of slower tidewater glaciers. A comparison of the WF data to RS-2 Ultrafine and Wide mode data reveals the superiority of RS-2 WF data over the Wide mode data.
REVIEW | doi:10.20944/preprints201904.0005.v1
Subject: Physical Sciences, Nuclear And High Energy Physics Keywords: standard model; first and second horizons; cosmological constant
Online: 1 April 2019 (09:55:46 CEST)
We present a brief history of the construction of models of the universe, followed by calculations of quantitative characteristics of basic geometric and kinematic properties of the Standard Cosmological Model ($\Lambda$CDM). Using the Friedmann equations of uniform space, we derive equations characterizing a $\Lambda$CDM model that describes a universe corresponding to current observational data. The equations take into account the effects of radiation and ultra-relativistic neutrinos. It is shown that the universe at very early and late stages can be described to sufficient accuracy by simple formulas. Certain important moments of cosmic evolution are determined: the times when densities of the gravitational components of the universe become equal, when they contribute equally to the gravitational force, when the accelerating expansion of space begins, and several others. The dependences of different distances on redshift and the scale factor of space are derived. The distance to the sphere that expands with the speed of light (the Hubble distance), and its current and future acceleration, are found. Concepts of a horizon, second inflation, and second horizon are discussed. We consider the remote future of the universe and the opportunity, in principle, of connection with extraterrestrial civilizations.
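Several of the evolutionary moments mentioned follow from simple algebra on the Friedmann equation; a sketch with illustrative flat-ΛCDM density parameters (not the paper's fitted values):

```python
import numpy as np

# Illustrative flat-LCDM density parameters (not the paper's fitted values).
Om, Or, OL = 0.3, 9e-5, 0.7

a_mr  = Or / Om                       # matter-radiation density equality
a_mL  = (Om / OL) ** (1/3)            # matter-Lambda density equality
a_acc = (Om / (2 * OL)) ** (1/3)      # onset of accelerated expansion (q = 0)

for name, a in [("matter-radiation equality", a_mr),
                ("acceleration begins", a_acc),
                ("matter-Lambda equality", a_mL)]:
    print(f"{name}: a = {a:.4f}, z = {1/a - 1:.2f}")
```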
ARTICLE | doi:10.20944/preprints201608.0153.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: Eco-security; Land use and cover change (LUCC); Sustainability development and assessment
Online: 15 August 2016 (12:41:51 CEST)
Land use and cover change (LUCC) is an important approach for investigating the causes of global environmental change. We utilized the emergy ecological footprint (EEF) model to construct a land-use change model to be used as a systematic measuring tool for monitoring sustainable development trends. In particular, we estimated the eco-security of the Cing-jing region as a case study so that responsible agencies can use it to maintain a balance between ecological preservation and tourism development. The results indicated the following. First, the ecological environment of the Cing-jing region satisfied the safety standard in 2008–2014; however, the related indices increased considerably. Second, the grey model predicted a decrease in the ecological carrying capacity of Cing-jing over 2015–2024 and a large increase in per capita EFs, resulting in a larger ecological deficit and a higher EFI. The eco-security grade for 2015–2024 was higher than that of 7 years earlier and is predicted to reach the Grade 2 intermediate level in 2022; thus the Cing-jing region is gradually becoming ecologically unsustainable. Strengths of our study include the use of EEF theory in a quantitative analysis of slope lands for the effective evaluation of ecological security. Finally, we expand our research to include broader ecological security issues.
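The grey prediction step in such studies is typically a GM(1,1) model; a self-contained sketch with made-up footprint numbers (the study's data are not reproduced here):

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """GM(1,1) grey prediction: fit on series x0, forecast `horizon` further steps."""
    x1 = np.cumsum(x0)                                # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development & control coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time-response of accumulated series
    return np.diff(x1_hat, prepend=0.0)[len(x0):]     # inverse AGO -> forecast values

# Illustrative per-capita ecological footprint series (made-up numbers).
ef = np.array([2.10, 2.25, 2.43, 2.60, 2.81, 3.02, 3.27])
print(gm11_forecast(ef, 3))
```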
ARTICLE | doi:10.20944/preprints202308.1845.v1
Subject: Engineering, Metallurgy And Metallurgical Engineering Keywords: Localised corrosion and inhibition; wire beam electrode (WBE); scanning vibrating electrode technique (SVET); scanning electrochemical microscopy (SECM)
Online: 28 August 2023 (09:54:49 CEST)
The inhibition of localised corrosion is known to involve multiple processes taking place over a range of time and length scales that are difficult to study by conventional electrochemical methods. This work demonstrates an approach to probing complex localised electrochemical processes by combining an electrochemically integrated multi-electrode array (also known as a wire beam electrode, WBE), the scanning vibrating electrode technique, scanning electrochemical microscopy, and surface analytical techniques. Each technique reveals certain aspects of the dynamically changing, multi-scale, inhibitor-microstructure interactions and local chemistry over a heterogeneous AA2024-T3 alloy surface in the presence of an environmentally friendly inhibitor, cerium diphenyl phosphate (Ce(dpp)3).
ARTICLE | doi:10.20944/preprints202303.0023.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: Game Design; Variational AutoEncoder (VAE); Image and Video Generation; Bayesian Algorithm; Loss Function; Data Clustering; Data and Image Analytics; MNIST database; Generator and Discriminator
Online: 1 March 2023 (11:17:12 CET)
In recent decades, the Variational AutoEncoder (VAE) model has shown good potential and capability in image generation and dimensionality reduction. The combination of VAE and various machine learning frameworks has also worked effectively in different daily-life applications; however, its possibility and effectiveness in modern game design have seldom been explored or assessed, and the use of its feature extractor for data clustering has likewise received little discussion in the literature. This paper first explores different mathematical properties of the VAE model, in particular the theoretical framework of the encoding and decoding processes, the achievable lower bound, and the loss functions of different applications; it then applies the established VAE model to generating new game levels within two well-known game settings, and validates the effectiveness of its data clustering mechanism with the aid of the Modified National Institute of Standards and Technology (MNIST) database. Respective statistical metrics and assessments were also utilized for evaluating the performance of the proposed VAE model in the aforementioned case studies. Based on the statistical and spatial results, several potential drawbacks and future enhancements of the established model are outlined, with the aim of maximizing the strengths and advantages of VAE for future game design tasks and relevant industrial missions.
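For concreteness, the two terms of the VAE objective discussed above (reconstruction error plus the KL divergence of the Gaussian posterior from the standard-normal prior) can be written as a small numeric sketch:

```python
import numpy as np

def elbo_terms(x, x_hat, mu, logvar):
    """Negative-ELBO pieces for a Gaussian-prior VAE:
    reconstruction error plus KL( N(mu, exp(logvar)) || N(0, I) )."""
    recon = np.sum((x - x_hat) ** 2)                      # Gaussian reconstruction (up to constants)
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon, kl

x = np.array([0.0, 1.0, 0.5])
x_hat = np.array([0.1, 0.9, 0.6])
mu = np.array([0.2, -0.1])
logvar = np.array([-0.5, 0.3])
recon, kl = elbo_terms(x, x_hat, mu, logvar)
print(f"loss = recon + kl = {recon + kl:.4f}")
```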
ARTICLE | doi:10.20944/preprints202103.0169.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: head and neck cancer; radiotherapy; IMRT; SIB; hypofractionation; toxicity
Online: 4 March 2021 (16:10:05 CET)
Abstract: Background: Intensity modulated radiotherapy (IMRT) is still a standard of care for radiotherapy in locally advanced head and neck cancer (LA-HNSCC). Simultaneous integrated boost (SIB) and moderate hypofractionation offer an opportunity for individual dose painting and a reduction in overall treatment time. We present retrospective data on toxicity and locoregional control of a patient cohort with LA-HNSCC treated with an IMRT-SIB concept in comparison to normofractionated 3D-conformal radiotherapy (3D-RT) after a long-term follow-up. Methods: Between 2012 and 2014, n=67 patients with HNSCC (stages III/IV without distant metastases) were treated with IMRT-SIB either definitively (single/total doses: 2.2/66 Gy, 2.08/62.4 Gy, 1.8/54 Gy in 30 fractions) or in the postoperative setting (2.08/62.4 Gy, 1.92/57.6 Gy, 1.8/54 Gy). These patients' clinical courses were matched (for sex, primary, and treatment concept) as part of a matched-pair analysis with patients treated before mid-2012 with normofractionated 3D-CRT (definitive: 2/50 Gy followed by a sequential boost up to 70 Gy; postoperative: 2/60-64 Gy). Chemotherapy/immunotherapy was given concomitantly in both groups in the definitive situation (postoperative dependent on risk factors). The primary endpoints were acute and late toxicity; the secondary endpoint was locoregional control (LRC). Results: Sixty-seven patients treated with IMRT-SIB (n = 20 definitive, n = 47 adjuvant) were matched with 67 patients treated with 3D-RT. There were minor imbalances between the groups concerning nonmatching variables such as extracapsular extension (ECE) and chemotherapy in IMRT-SIB. Significantly less toxicity was found in favor of IMRT-SIB concerning dysphagia, radiation dermatitis, xerostomia, fibrosis, and lymphedema. After a median follow-up of 63 months, the median LRC was not reached (IMRT-SIB) vs. 69.5 months (3D-RT) (p=0.63). Conclusions: This moderately hypofractionated IMRT-SIB concept was shown to be feasible with less toxicity than conventional 3D-RT in this long-term follow-up observation.
Subject: Computer Science And Mathematics, Mathematics Keywords: equipment vendor selection; fuzzy TOPSIS; fuzzy weighted average left and right score; multi-choice goal programming; multi-aspiration goal programming
Online: 30 May 2019 (08:42:27 CEST)
The airport ground handling service (AGHS) equipment vendor selection (AGHSEVS) problem is critical for ramp work safety management, because AGHS equipment malfunctions affect airport ramp work safety. Appropriate vendor selection can prevent aircraft damage and delays in airline schedules, and ensure reliable, high-quality ground handling service. AGHSEVS is a time-consuming and complex process that requires professional knowledge and experience to make judgments; specifically, it is a multi-criteria decision-making (MCDM) problem. Previous research has seldom integrated MCDM methods with linear and goal programming to solve the AGHSEVS problem. The objective of this study was to develop a new system evaluation model for AGHSEVS by considering both qualitative and quantitative methods. We test the proposed approach on an AGHS company in Taiwan.
ARTICLE | doi:10.20944/preprints202109.0462.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: modified theories of gravity; renormalizability; unitarity; astrophysical and cosmological scales
Online: 28 September 2021 (10:39:47 CEST)
We analyze the R + R² model of quantum gravity, where terms quadratic in the curvature tensor are added to the General Relativity action. This model was recently proved to be a self-consistent quantum theory of gravitation, being both renormalizable and unitary. The model can be made practically indistinguishable from General Relativity at astrophysical and cosmological scales by a proper choice of parameters.
ARTICLE | doi:10.20944/preprints201911.0276.v1
Subject: Public Health And Healthcare, Public, Environmental And Occupational Health Keywords: occupational health and safety education; quality of health and safety education; health and safety education best practices
Online: 24 November 2019 (13:14:27 CET)
Research into professionalization in health and safety has recently gained interest. Graduate training is one of the factors that determines or conditions the role of the safety professional, thus intervening in the professionalization process. This article is the result of a workshop and the discussions of nine academic directors of safety education programs about quality evaluation. It introduces the issue with a historic overview of safety education, presents a synthesis of nine selected education programs, discusses quality evaluation of health and safety education programs, proposes a quality evaluation framework, and finally proposes a process for designing a quality safety education program with an associated model of the learning objectives. The outcomes are of interest to anyone concerned with health and safety education and quality evaluation and give insights into how safety professionals are educated.
ARTICLE | doi:10.20944/preprints202308.1151.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: method development; optimisation and validation; parallel factor modelling; partial least squares modelling
Online: 16 August 2023 (07:18:19 CEST)
In the present protocol, we determined the presence and concentration of bisphenol A (BPA) spiked into surface water samples using EEM fluorescence spectroscopy in conjunction with partial least squares (PLS) and parallel factor (PARAFAC) modelling. PARAFAC modelling of the EEM fluorescence data obtained from surface water samples contaminated with BPA unraveled four fluorophores, including BPA. The best outcomes for BPA concentration were obtained with the PLS model (R² = 0.996; ratio of standard deviation to root-mean-square error of prediction (RPD) = 3.41; Pearson's r = 0.998). With these values of R² and Pearson's r, the PLS model showed a strong correlation between the predicted and measured BPA concentrations. The detection and quantification limits of the method were 3.512 and 11.708 micromolar (µM), respectively. In conclusion, BPA can be precisely detected, and its concentration in surface water predicted, using the PARAFAC and PLS models developed in this study together with fluorescence EEM data collected from BPA-contaminated water. It is necessary to spatially relate surface water contamination data with other datasets in order to connect drinking water quality issues with health, environmental restoration, and environmental justice concerns.
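A minimal sketch of the PLS regression and the RPD statistic reported above, using scikit-learn on synthetic stand-in data (the real inputs are unfolded EEM intensities):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for EEM fluorescence data: 200 samples x 50 unfolded intensity variables.
X = rng.random((200, 50))
y = X[:, :5].sum(axis=1) + 0.05 * rng.standard_normal(200)   # synthetic "BPA concentration"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_te - y_pred) ** 2))
rpd = np.std(y_te) / rmsep          # ratio of SD to prediction error, as reported above
print(f"RMSEP = {rmsep:.4f}, RPD = {rpd:.2f}")
```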
ARTICLE | doi:10.20944/preprints202308.0212.v1
Subject: Computer Science And Mathematics, Mathematical And Computational Biology Keywords: whooping cough; Atangana-Baleanu derivative; vaccination; existence and uniqueness; stability and sensitivity analysis; Toufik-Atangana scheme; optimal control
Online: 3 August 2023 (08:28:46 CEST)
In this study, we construct a new Atangana-Baleanu fractional model for whooping cough disease to predict the future dynamics of the disease and to suggest strategies for eliminating it in an optimal way. We prove that the proposed model has a unique solution which is positive and bounded. To measure the contagiousness of the disease, we determine the reproduction number R0 and use it to examine the local and global stability at the equilibrium points, which have symmetry. Through sensitivity analysis, we determine the parameters of the model that are most sensitive to R0. The ultimate aim of this research is to analyze different disease prevention approaches in order to find the most suitable one. For this, we include vaccination and quarantine compartments in the proposed model and formulate an optimal control problem to assess the effect of vaccination and quarantine rates on disease control in three distinct scenarios. Firstly, we study the impact of the vaccination strategy and conclude the findings with graphical results. Secondly, we examine the impact of the quarantine strategy on whooping cough infection, with possible elimination from society. Lastly, we implement the vaccination and quarantine strategies together to visualize their combined effect on infection control. In addition to the optimal control problem, we examine the effect of the fractional order on the disease dynamics, as well as the impact of constant vaccination and quarantine rates on disease transmission and control. We determine that the optimal control strategy with the three controls is more effective in reducing the spread of whooping cough infection. The implementation of a Toufik-Atangana-type numerical scheme for both the state and adjoint equations is another contribution of this article.
ARTICLE | doi:10.20944/preprints202103.0163.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Remote sensing and Geographical Information System; hydro-geomorphology; Weightage Index Overlay; decision maker
Online: 4 March 2021 (14:10:08 CET)
Remote sensing and Geographical Information Systems (GIS) have played an important role in the exploration and management of groundwater resources. In this study, we present the modeling of groundwater potential zones in the Khoyrasol block of Birbhum district, West Bengal, using remote sensing and GIS techniques. The objective of the study is to explore groundwater as well as surface water availability in different geomorphic units. Thematic maps of geology, hydro-geomorphology, lineaments, slope, land use/land cover (LULC), depth to water level and soil are prepared, and groundwater potential zones are obtained by overlaying all thematic maps using the Weightage Index Overlay (WIO) method. All the thematic map classes are assigned weightages according to their role in groundwater occurrence. Finally, the groundwater potential zones are classified into four categories, viz. excellent, good to medium, medium to poor, and poor. The outcome of the present research will help local farmers, decision-makers, researchers and planners in the exploration, monitoring and management of groundwater resources in this study area.
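To make the overlay step concrete, here is a minimal sketch of a weightage index overlay on toy raster layers with NumPy; the layer names, weights, and class scores are illustrative assumptions, not the values used in the study.

```python
# Weightage Index Overlay (WIO) sketch on toy raster layers.
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)

# Each thematic layer holds class scores already assigned by the analyst
# (e.g., hydro-geomorphology, slope, LULC); values here are synthetic.
layers = {name: rng.integers(1, 5, shape).astype(float)
          for name in ("geomorphology", "slope", "lulc", "lineament")}

# Hypothetical relative weights reflecting each layer's role in
# groundwater occurrence; they must sum to 1.
weights = {"geomorphology": 0.4, "slope": 0.25, "lulc": 0.2, "lineament": 0.15}

index = sum(weights[name] * layer for name, layer in layers.items())

# Classify the composite index into four potential zones.
bounds = np.quantile(index, [0.25, 0.5, 0.75])
zones = np.digitize(index, bounds)  # 0 = poor ... 3 = excellent
print(np.bincount(zones.ravel()))
```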
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: health care; hospital; blockchain technology; security and privacy
Online: 20 October 2020 (15:04:03 CEST)
One of the notable trends in health care is the increasing migration of data and services to the cloud, mainly for convenience (for example, providing a complete patient medical record without interruption) and savings (for example, more economical management of health care data). However, there are limitations to using common cryptographic primitives and access control models to address security and privacy concerns in an increasingly cloud-based environment. In this paper, we explore the potential and capacity of blockchain technology to protect health care data hosted in the cloud. We also explain the practical challenges of such an approach and the further research that is needed. Health care is a highly data-dependent domain, with large amounts of data being created, published, stored and accessed daily. For example, data are created when a patient undergoes a series of examinations (such as computed tomography scans), and the data need to be sent to the radiologist and then to a physician. The visit results are then stored at the hospital and may need to be accessed later by another physician at another hospital within the network. It is clear that technology can play an important role in improving the quality of care for patients (for example, using data analytics to make informed medical decisions) and in reducing costs by allocating resources such as personnel and equipment more efficiently. Paper-based records, by contrast, are difficult to bring into such systems (doing so is costly and may involve data entry errors), and archiving them and accessing them as needed is expensive. These challenges may cause medical decisions to be made on incomplete information, requiring repeated tests when information is missing or data are stored at another hospital in another state or country, at the expense of increased costs and inconvenience for patients. Because of the nature of the industry, it is important to ensure the security, privacy and integrity of health care data. As a result, there is a clear need for a safe and secure data management system.
ARTICLE | doi:10.20944/preprints202309.0430.v1
Subject: Computer Science And Mathematics, Computer Networks And Communications Keywords: Vehicle-to-Vehicle (V2V) Communication; Vehicular Ad-Hoc Networks (VANETs); Internet of Things (IoT); Internet of Vehicle (IoV); Communication Protocols; Security Protocols; Intrusion Detection Systems; Safety and Security; Efficiency in V2V Communication
Online: 7 September 2023 (02:52:57 CEST)
Vehicle-to-Vehicle (V2V) communication plays a pivotal role in modern intelligent transportation systems, enabling seamless information exchange among vehicles to enhance road safety, traffic efficiency, and overall driving experience. However, the secure transmission of sensitive data between vehicles remains a critical concern due to potential security threats and vulnerabilities. This research paper focuses on investigating the security protocols employed in Vehicle-to-Vehicle communication systems. A comprehensive review and analysis of relevant literature and research papers are conducted to gather information on existing V2V communication security protocols and techniques. The analysis encompasses key areas, including authentication mechanisms, encryption algorithms, key management protocols, and intrusion detection systems specifically applicable to V2V communication networks. Within the context of real-world V2V environments, this study delves into the challenges and limitations associated with implementing these protocols. Moreover, to foster a deeper understanding, the paper investigates present communication protocols in the field of the Internet of Things (IoT) that are tailored for V2V. Various parameters, such as bandwidth consumption, energy consumption, latency, and message size, are considered during the evaluation of these protocols to gauge their effectiveness. The research outcomes aim to provide a comprehensive understanding of the strengths and weaknesses of the current V2V communication security protocols. Furthermore, based on the findings, this paper proposes improvements and recommendations to enhance the security measures of the V2V communication protocol. Ultimately, this research will contribute to the development of more secure and reliable V2V communication systems, propelling the advancement of intelligent transportation technology.
ARTICLE | doi:10.20944/preprints202210.0440.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: HIV/AIDS Mathematical Model; Basic Reproduction Number; Stationary Points; Local and Global Stability Analysis.
Online: 28 October 2022 (07:05:39 CEST)
In this paper, a mathematical analysis of the HIV/AIDS deterministic model presented in [Espitia, C. et al., Mathematical Model of HIV/AIDS Considering Sexual Preferences Under Antiretroviral Therapy, a Case Study in San Juan de Pasto, Colombia, Journal of Computational Biology 29 (2022) 483–493] is carried out. The objective is to gain insight into the qualitative dynamics of the model and to determine the conditions for the persistence or effective control of the disease in the community. The study covers basic properties such as positivity and boundedness, the calculation of the basic reproduction number, the stationary points, namely the disease-free equilibrium (DFE), boundary equilibria (BE) and endemic equilibrium (EE), and the local asymptotic stability (LAS) of the disease-free equilibrium. The research allows us to conclude that the best way to reduce contagion, and consequently to reach a DFE, is thought to be reducing the rate of homosexual partner acquisition, as this is the population most affected by the virus and therefore the most likely to become infected and to spread the infection. Increasing the departure rate of infected individuals leads to a decrease in untreated infected heterosexual men and untreated infected women.
ARTICLE | doi:10.20944/preprints202011.0702.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: standard model; fermion mass hierarchy; mixing matrices; nonlinear dynamics and chaos; bifurcations; Feigenbaum’s universality
Online: 27 November 2020 (20:17:38 CET)
As a paradigm of complex behavior, Self-organized Criticality (SOC) reflects the ability of nonequilibrium dynamical systems to self-adjust into metastable states that are scale independent. The goal of this report is to tentatively show that the hierarchy of Standard Model masses and mixing angles follows from the universal scaling attributes of SOC.
ARTICLE | doi:10.20944/preprints202310.0347.v1
Subject: Business, Economics And Management, Finance Keywords: business and financial risks, capital structure; Modigliani–Miller (MM) theory; Brusov–Filatova–Orekhova (BFO) theory; risk and profitability, CAPM; Fama–French model
Online: 9 October 2023 (08:54:05 CEST)
The famous Capital Asset Pricing Model (CAPM) takes into account only business risk. In practice, companies use debt financing and operate at non-zero levels of leverage, which means that the financial risk associated with the use of debt financing must be taken into account along with the business risk. The purpose of this paper is to account for both risks simultaneously. A new approach to CAPM has been developed that takes into account both business and financial risk. We combine the CAPM with the Modigliani–Miller (MM) theory. The first is based on portfolio analysis and accounts for business risk in relation to the market (or industry). The second describes a specific company and takes into account the financial risks associated with the use of debt financing. Combining these two different approaches makes it possible to account for both types of risk: business and financial. We combine the two approaches analytically, whereas Hamada did so phenomenologically. Using the Modigliani–Miller theory, it is shown that Hamada's model, the first model used for this purpose half a century ago, is incorrect. In addition to the renormalization of the beta coefficient obtained in the Hamada model, two additional terms are found: a renormalized risk-free return and a term dependent on the cost of debt kd. A critical analysis of the Hamada model is carried out. The vast majority of listed companies use debt financing and are levered; the Hamada model is not applicable to them, in contrast to the new approach, which is applicable to leveraged companies. The new approach is applied to specific companies, and a comparison of its results with those of the conventional CAPM is shown. Two versions of CAPM (market or industry) are considered.
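For orientation, below is a minimal sketch of the conventional Hamada adjustment inside the standard CAPM, i.e., the half-century-old baseline the paper critiques, not the authors' new model; the numeric inputs are hypothetical.

```python
# Conventional Hamada levering of beta inside the standard CAPM
# (the baseline discussed in the paper; toy numbers only).

def hamada_levered_beta(beta_u: float, tax: float, debt: float, equity: float) -> float:
    """Hamada: beta_L = beta_U * (1 + (1 - t) * D / E)."""
    return beta_u * (1.0 + (1.0 - tax) * debt / equity)

def capm_expected_return(rf: float, beta: float, market_return: float) -> float:
    """CAPM: k_e = r_f + beta * (k_m - r_f)."""
    return rf + beta * (market_return - rf)

beta_u = 0.9          # unlevered (business-risk) beta, hypothetical
beta_l = hamada_levered_beta(beta_u, tax=0.25, debt=40.0, equity=60.0)
ke = capm_expected_return(rf=0.03, beta=beta_l, market_return=0.09)
print(f"levered beta = {beta_l:.3f}, cost of equity = {ke:.2%}")
```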
CASE REPORT | doi:10.20944/preprints202002.0028.v1
Subject: Chemistry And Materials Science, Biomaterials Keywords: Geometry optimization of scaffolds; allograft; block bone grafts; custom made bone; design techniques for scaffold; precision and translational medicine
Online: 3 February 2020 (09:46:05 CET)
Background: The aim of the present investigation was to evaluate the clinical success of horizontal ridge augmentation in the severely atrophic maxilla (Cawood and Howell class IV) using freeze-dried custom-made bone harvested from cadaver donors' tibial hemiplateaus, and to analyze the marginal bone level gain prior to dental implant placement, 9 months after bone grafting and before prosthetic rehabilitation. Methods: A 52-year-old woman received custom-made bone grafts. The patient underwent CT scans 2 weeks prior to and 9 months after surgery for graft volume and density analysis. Results: The clinical and radiographic bone observations showed a very low rate of resorption after bone grafting and implant placement. Conclusions: The custom-made allograft material was a highly effective modality for restoring the alveolar horizontal ridge, thereby reducing the need to obtain autogenous bone from a secondary site, with a predictable procedure. Further studies are needed to investigate its behavior at longer time points.
ARTICLE | doi:10.20944/preprints202105.0619.v1
Subject: Business, Economics And Management, Finance Keywords: Betting, Dawson model, Football, xG, Pitch partitioning, possession sequences, expected goal model and player evaluation
Online: 25 May 2021 (15:33:19 CEST)
One of the most significant developments in the sports world over the last two decades has been the use of mathematical methods in conjunction with the massive amounts of data now available to analyze performances, identify trends and patterns, and forecast results. Football analytics has advanced significantly in recent years and continues to evolve as it becomes a more recognized and integral part of the game. Football analytics is also used to forecast game outcomes, allowing bettors to make educated guesses. This article describes mathematical concepts related to football analytics that enable better betting strategies. We explain how the pitch is partitioned into different zones and we define possession sequences. Furthermore, we explain what an expected goals model is and which expected goals model we use in this research. We then define two general characteristics of a player evaluation method, each corresponding to one of the equations of the Dawson model. Based on these characteristics, we describe the development of several general approaches for evaluating players in the context of the Dawson model.
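As a toy illustration of what an expected goals (xG) model does, the sketch below fits a logistic regression of goal outcome on shot distance and angle using synthetic shots; the features and data are assumptions for illustration, not the model used in the article.

```python
# Toy expected-goals (xG) model: probability a shot is scored,
# as a function of shot distance and opening angle (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
dist = rng.uniform(5, 35, n)        # metres from goal (assumed feature)
angle = rng.uniform(0.1, 1.2, n)    # opening angle in radians (assumed feature)

# Synthetic ground truth: closer shots with wider angles score more often.
p_true = 1.0 / (1.0 + np.exp(-(1.5 - 0.15 * dist + 1.2 * angle)))
goal = rng.random(n) < p_true

X = np.column_stack([dist, angle])
xg_model = LogisticRegression().fit(X, goal)

# xG of a hypothetical shot from 12 m with a 0.8 rad opening angle.
print(xg_model.predict_proba([[12.0, 0.8]])[0, 1])
```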
ARTICLE | doi:10.20944/preprints202102.0552.v1
Subject: Medicine And Pharmacology, Medicine And Pharmacology Keywords: modified SIR model, epidemic, death and source.
Online: 24 February 2021 (15:59:09 CET)
The original purpose of this article was to modify the original SIR equations to allow for a direct source of infection (without which the original equations have no solutions unless one starts with an already infected population) and also to see to what extent one could obtain multiple outbreaks of an infectious disease. In the course of developing the basic ideas, several other factors arose to take prominent roles. Perhaps the most salient is the point that the choice of a time to change conditions, say from a lock-down to less stringent social behavior such as allowing partial or complete opening of businesses and schools, should be based on knowledge of the disease and its evolution. Such decisions are usually made by politicians who have less than full information concerning the consequences of their actions. Several examples are given to illustrate these points.
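A minimal sketch of one way such a modification can look, an SIR system with a constant external infection source term added to the infected compartment, is given below; the parameter values and the exact form of the source term are assumptions, not the authors' equations.

```python
# SIR model with a direct external source of infection (illustrative form).
import numpy as np
from scipy.integrate import odeint

def sir_with_source(y, t, beta, gamma, sigma):
    S, I, R = y
    new_inf = beta * S * I + sigma * S   # sigma*S: assumed external source term
    return [-new_inf, new_inf - gamma * I, gamma * I]

t = np.linspace(0.0, 200.0, 2001)
y0 = [1.0, 0.0, 0.0]                     # fully susceptible, no initial infected
beta, gamma, sigma = 0.3, 0.1, 1e-4      # hypothetical rates

# Without the source term (sigma = 0), I(0) = 0 would stay zero forever;
# the source seeds and can re-seed the outbreak.
S, I, R = odeint(sir_with_source, y0, t, args=(beta, gamma, sigma)).T
print(f"peak infected fraction: {I.max():.3f} at t = {t[I.argmax()]:.0f}")
```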
ARTICLE | doi:10.20944/preprints202306.1829.v1
Subject: Environmental And Earth Sciences, Geography Keywords: Glacial Lake Inventory (Rapstreng, Thorthomi, and Luggye); Sentinel-2 MSI; Normalized difference water index; Semi-automatic delineation
Online: 26 June 2023 (14:52:50 CEST)
Bhutan, located in the Hindu Kush Himalayan (HKH) region, has numerous glaciers and glacial lakes at higher elevations. With the rapid change in global temperature, glaciers have been found to melt at an accelerated rate. The resulting meltwater accumulates behind weak moraine walls, forming glacial lakes that pose a major threat to downstream communities. As per the Bhutan Glacial Lake Inventory 2021, Bhutan has 567 glacial lakes. Furthermore, the Phochhu basin has the most glacial lakes (0.05%), of which 9 are PDGL. Hence the need for a timely monitoring system is imminent. With the availability of free high-resolution satellite imagery and advanced remote sensing tools, such monitoring of glacial lakes in high-altitude areas has become a sine qua non. Therefore, using the Google Earth Engine and QGIS platforms, a semi-automated technique was used to generate a glacial lake inventory for the Phochhu sub-basin for the year 2021. We found that there are 166 glacial lakes covering an area of 24.051 km².
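Semi-automatic delineation of lakes from Sentinel-2 MSI typically relies on the normalized difference water index, NDWI = (Green − NIR)/(Green + NIR); the sketch below shows the thresholding step on toy reflectance arrays, with the band choice and threshold being common conventions rather than the study's exact settings.

```python
# NDWI-based water masking sketch (Sentinel-2 convention: B3 = green, B8 = NIR).
import numpy as np

rng = np.random.default_rng(3)
green = rng.uniform(0.02, 0.25, (200, 200))  # toy surface reflectance
nir = rng.uniform(0.02, 0.40, (200, 200))

ndwi = (green - nir) / (green + nir + 1e-9)  # small epsilon avoids divide-by-zero

water_mask = ndwi > 0.0        # commonly used starting threshold (assumed)
pixel_area_m2 = 10 * 10        # Sentinel-2 MSI 10 m bands
lake_area_km2 = water_mask.sum() * pixel_area_m2 / 1e6
print(f"water pixels: {water_mask.sum()}, area: {lake_area_km2:.3f} km^2")
```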
ARTICLE | doi:10.20944/preprints202101.0003.v1
Subject: Engineering, Automotive Engineering Keywords: Microgrids; Power Quality and Reliability; Model Predictive Control; Interconnected systems; Harmonics; Power System Control
Online: 4 January 2021 (08:32:21 CET)
In this paper, the power quality of interconnected microgrids is managed using a Model Predictive Control (MPC) methodology which manipulates the power converters of the microgrids in order to meet the requirements. The control algorithm is developed for the microgrids' working modes: grid-connected, islanded and interconnected. The results and simulations also cover the transitions between the different working modes. In order to show the potential of the control algorithm, a comparison study is carried out against classical Proportional-Integral Pulse Width Modulation (PI-PWM) based controllers. The proposed control algorithm not only improves the transient response in comparison with classical methods but also shows an optimal behavior in all the working modes, minimizing the harmonic content in current and voltage even in the presence of unbalanced and harmonic-distorted three-phase voltage and current systems.
ARTICLE | doi:10.20944/preprints202305.0058.v1
Subject: Engineering, Control And Systems Engineering Keywords: Autonomous Vehicles (AVs); LiDAR (Light Detection and Ranging); Point Clouds; Clustering Algorithms; Multi-Target Tracking (MTT); Object Detection; Sensor Fusion; Deep Learning; 3D Point Cloud Segmentation
Online: 2 May 2023 (05:04:57 CEST)
The safe and reliable operation of autonomous vehicles in complex and dynamic environments is heavily dependent upon exteroceptive and proprioceptive sensors. A LiDAR system generates accurate 3D point clouds, which are crucial for the detection, classification, and tracking of multiple targets. LiDAR data, however, presents significant challenges due to its density, noise, and varying sampling rates. In this study, various clustering and MTT techniques for LiDAR point clouds are identified and classified within the context of autonomous driving to assess the key challenges and performances of state-of-the-art methods. We have categorized the clustering and MTT methods used in AV applications, identified research gaps, and analyzed existing algorithms and their limitations in detail. Researchers and practitioners in the field of autonomous driving will find this review to be a valuable resource.
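Among the clustering families such a review covers, density-based methods like DBSCAN are a common baseline for grouping LiDAR returns into objects; below is a minimal sketch on a synthetic point cloud, where the parameters (eps, min_samples) are illustrative assumptions, not tuned values from the survey.

```python
# DBSCAN clustering of a synthetic LiDAR-like point cloud.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
# Two toy "objects" plus scattered noise points, coordinates in metres.
car = rng.normal([10.0, 2.0, 0.8], 0.3, (300, 3))
pedestrian = rng.normal([6.0, -1.5, 0.9], 0.15, (80, 3))
noise = rng.uniform([-5, -5, 0], [20, 5, 3], (60, 3))
points = np.vstack([car, pedestrian, noise])

# eps is the neighbourhood radius; label -1 marks noise returns.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters, {np.sum(labels == -1)} noise points")
```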
Subject: Engineering, Bioengineering Keywords: bioprocess models; model validation; model calibration; Quality by Design; mechanistical and statistical models; hybrid models; chemometric models; Biopharmaceutical engineering; regulatory guidance
Online: 10 May 2021 (09:57:09 CEST)
In bioprocess engineering, the Quality by Design (QbD) initiative encourages the use of models to define design spaces. However, clear guides on how models for QbD are validated are still missing. In this review we provide a comprehensive overview of the validation methods, mathematical approaches and metrics currently applied in bioprocess modeling. The methods cover analytics for the data used for modeling, model training and selection, and measures of predictiveness and model uncertainty. We point out general issues in model validation and calibration for different types of models and put this into the context of existing health authority recommendations. This review provides the starting point for developing guidance on model validation approaches. There is no one-size-fits-all approach, but this review should help to identify the best-fitting validation method, or combination of methods, for the specific task and the type of bioprocess model being developed.
ARTICLE | doi:10.20944/preprints202112.0199.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: Affine quantization; Ashtekar-like variables; gravity and field theories
Online: 13 December 2021 (12:36:39 CET)
A half-harmonic oscillator, which gets its name because its coordinate is strictly positive, has been quantized, and this quantization has been shown to be physically correct. This positive result was found using affine quantization (AQ). The main purpose of this paper is to compare the results of this new quantization procedure with those of canonical quantization (CQ). Using Ashtekar-like classical variables and CQ, we quantize the same toy model. While these two quantizations lead to different results, they both reduce to the same classical Hamiltonian as $\hbar\rightarrow0$. Since the two quantizations give differing results, only one of them can be physically correct. Two brief sections illustrate how AQ can correctly help quantum gravity and the quantization of most field theory problems.
ARTICLE | doi:10.20944/preprints201909.0105.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Simultaneous Localization And Mapping; voxel grids; scan-to-model; Partition of Unity
Online: 10 September 2019 (03:48:29 CEST)
Purpose: Localization and mapping with LiDAR data is a fundamental building block for autonomous vehicles. Though LiDAR point clouds can often encode scene depth more accurately and stably than visual information, laser-based Simultaneous Localization And Mapping (SLAM) remains challenging, as the data is usually sparse, variable in density and less discriminative. The purpose of this paper is to propose an accurate and reliable laser-based SLAM solution. Design/methodology/approach: The method starts with constructing voxel grids based on the 3D input point cloud. These voxels are then classified into three types indicating different physical objects according to the spatial distribution of the points contained in each voxel. A global environment model with a Partition of Unity (POU) implicit surface is maintained throughout the process, and located voxels are merged into it from stage to stage through scan-to-model matching implemented with the Levenberg-Marquardt method. Findings: The method uses the POU implicit surface representation to build the model and is evaluated on the KITTI odometry benchmark without loop closure. Experimental results indicate that it achieves accuracy comparable to state-of-the-art methods. Originality/value: We propose a novel, low-drift SLAM method that falls into the scan-to-model matching paradigm and operates on point clouds obtained from a Velodyne HDL64. The method is of value to researchers developing SLAM systems for autonomous vehicles.
ARTICLE | doi:10.20944/preprints202307.1631.v1
Subject: Engineering, Mining And Mineral Processing Keywords: waste dump design and site selection; hybrid model; optimisation; genetic algorithm
Online: 24 July 2023 (16:50:56 CEST)
Waste management is an unavoidable technological operation in the process of raw material extraction. The main characteristic of this operation is the handling of large quantities of waste material, which can amount to several hundred million cubic metres. Working with this amount of material usually requires high-capacity systems for excavation and loading, a large fleet of trucks for haulage, the construction and maintenance of a complex road network, the use of a significant area of land in order to achieve the required capacities, etc. At the same time, this operation must comply with all administrative and environmental standards. Therefore, optimising waste rock management (particularly haulage and dumping) has the potential to significantly improve the overall value of the project. This paper presents a hybrid model for the optimisation of waste dump design and site selection. The model is based on different mathematical methods (genetic algorithm, analytic hierarchy process and heuristic methods) adapted to different aspects of the problem. The main objective of the model is to provide a solution (in analytical and graphical form) for the draft waste dump design, on the basis of which the final waste dump design can be defined.
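As a sketch of the genetic-algorithm component, the snippet below evolves a binary site-selection vector against a toy cost function; the encoding, fitness terms, and GA settings are illustrative assumptions, not the paper's hybrid model.

```python
# Toy genetic algorithm for selecting waste dump sites (binary encoding).
import numpy as np

rng = np.random.default_rng(11)
n_sites, pop_size, n_gen = 12, 40, 80
haul_cost = rng.uniform(1.0, 5.0, n_sites)     # hypothetical cost per site
capacity = rng.uniform(10.0, 60.0, n_sites)    # hypothetical capacity per site
required = 150.0                               # total capacity needed (assumed)

def fitness(sel):
    cap = capacity @ sel
    penalty = max(0.0, required - cap) * 10.0  # penalize capacity shortfall
    return -(haul_cost @ sel + penalty)        # lower cost = higher fitness

pop = rng.integers(0, 2, (pop_size, n_sites))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
    cut = rng.integers(1, n_sites, pop_size // 2)
    kids = np.array([np.concatenate([parents[i % len(parents)][:c],
                                     parents[(i + 1) % len(parents)][c:]])
                     for i, c in enumerate(cut)])            # one-point crossover
    mutate = rng.random(kids.shape) < 0.02                   # bit-flip mutation
    kids = np.where(mutate, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
print("selected sites:", np.flatnonzero(best), "cost:", haul_cost @ best)
```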
ARTICLE | doi:10.20944/preprints202212.0430.v1
Subject: Biology And Life Sciences, Other Keywords: EAQD-PC; Metacognitive skills; train and learning model
Online: 22 December 2022 (13:49:40 CET)
Biology is so complex that it has to be studied in different ways, with a focus on processes. Self-regulation skills in learning are important metacognitive skills for students studying Biology. Learning models that train students to be strategic learners are a must, so that they gain meaningful learning experiences through exploration and analysis of new information, applying self-questioning strategies to monitor why new information can be trusted. The synthesis of old and new knowledge is enhanced through peer coaching in class. This research was conducted to develop an EAQD-PC learning model (exploring, analyzing, questioning, defining; peer coaching) and accompanying tools that are valid, practical, and effective. The model development procedure included a preliminary research stage, a prototype stage, and an assessment stage. Model validation data were collected using expert validation instruments, while the practicality of the model components was measured with an implementation observation sheet. The effectiveness of the model was measured using a metacognitive skill test and a questionnaire. The model was applied to students majoring in Biology Education at Muhammadiyah University of Bone in the even semester of 2019/2020. The results showed that (a) the components of the EAQD-PC model were valid, with a score of 3.87, and the supporting tools were valid, with a score of 3.83; (b) practicality scored 3.89; and (c) effectiveness scored 3.97. This study shows that the EAQD-PC model is practical and effective for training students' metacognitive skills.
ARTICLE | doi:10.20944/preprints202301.0132.v1
Subject: Business, Economics And Management, Business And Management Keywords: Maturity Model; Sustainability assessment; Supply Chain; Intra- and inter-organizational perspective; TBL dimensions
Online: 9 January 2023 (01:27:50 CET)
Nowadays, frameworks and models are critical to enabling organizations to identify their current sustainability integration into business and to follow up on these initiatives over time. In this context, the maturity models offer a structured way of analyzing how a supply chain meets specific sustainability requirements and which areas demand attention to reach maturity levels. This study proposes a five-level maturity model to help supply chains identify their level of engagement with sustainability practices combining three perspectives: intra and inter-organizational sustainability practices, triple-bottom-line approach and critical areas for sustainability. All the steps followed in constructing the maturity model were based on a literature review, and case studies supported its improvement, application, and testing. The proposed model presents many advantages, such as being used as a self-assessment tool, a roadmap for sustainability behaviour improvement, and a benchmarking tool to evaluate and compare standards and best practices among organizations and supply chains.
ARTICLE | doi:10.20944/preprints202305.1799.v1
Subject: Business, Economics And Management, Finance Keywords: Carbon tax; Carbon tax and NDC; CGE Cobb-Douglas model; Carbon tax and the United States Government
Online: 25 May 2023 (10:39:12 CEST)
Our study shows how the United States government can achieve its Nationally Determined Contribution (NDC) goals for 2025, 2030, and 2050 by reducing energy consumption through a pure carbon tax. To achieve its emissions reduction goals, it is necessary for the US to impose a long-term carbon tax that balances taxes on labour, capital, energy, and carbon. Therefore, in this study, the carbon tax rate is set through a two-layer CGE Cobb-Douglas model while balancing the production and profit functions of government, businesses, and households. This study concludes that the carbon price will increase from US$ 0.4391/kg CO2 in 2020 to US$ 2.5671/kg CO2 in 2050 as the CO2 emissions reduction target is raised from a 17% reduction in 2020 to an 83% reduction in 2050 for the US.
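To illustrate the flavor of the underlying mechanism, the toy computation below shows how a tax on the energy input shifts the cost-minimizing input mix of a Cobb-Douglas producer; the functional form, shares, and prices are textbook assumptions, not the paper's calibrated two-layer CGE model.

```python
# Toy Cobb-Douglas cost minimization: effect of a carbon/energy tax
# on the demanded energy input (not the paper's calibrated CGE model).

alpha, beta, gamma = 0.5, 0.3, 0.2   # assumed output elasticities (sum to 1)
w, r, pe = 1.0, 1.0, 1.0             # base prices of labour, capital, energy
budget = 100.0                       # total expenditure (hypothetical)

def input_demands(tax_on_energy):
    # With Cobb-Douglas technology, expenditure shares equal the elasticities,
    # so demand for each input is (share * budget) / effective price.
    pe_eff = pe * (1.0 + tax_on_energy)
    return (alpha * budget / w, beta * budget / r, gamma * budget / pe_eff)

for tax in (0.0, 0.44, 2.57):        # roughly spanning the paper's 2020-2050 range
    L, K, E = input_demands(tax)
    print(f"tax {tax:4.2f}: labour {L:5.1f}, capital {K:5.1f}, energy {E:5.1f}")
```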
ARTICLE | doi:10.20944/preprints202307.0600.v1
Subject: Business, Economics And Management, Finance Keywords: Bachelier’s market model; Bachelier’s partial differential equation and option pricing; Bachelier’s term structure of interest rates
Online: 10 July 2023 (11:02:16 CEST)
This paper delves into the dynamics of asset pricing within Bachelier’s market model (BMM), elucidating the representation of risky asset price dynamics and the definition of riskless assets. It highlights the fundamental differences between BMM and the Black-Scholes-Merton market model (BSMMM), including the extension of BMM to handle assets yielding a simple dividend. Our investigation further explores Bachelier’s term structure of interest rates (BTSIR), introducing a novel version of Bachelier’s Heath-Jarrow-Morton model and adapting the Hull-White interest rate model to fit BMM. The study concludes by examining the applicability of BMM in real-world scenarios, such as those involving environmental, social, and governance (ESG)-adjusted stock prices and commodity spreads.
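For reference, the classical Bachelier call-price formula (with a normally, rather than lognormally, distributed underlying) is C = (F − K)Φ(d) + σ√T φ(d) with d = (F − K)/(σ√T); a small sketch follows, where the discounting convention and numeric inputs are illustrative assumptions.

```python
# Bachelier (normal) model call price on a forward F with strike K.
from math import erf, exp, pi, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x: float) -> float:
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bachelier_call(F: float, K: float, sigma: float, T: float, r: float = 0.0) -> float:
    """Call price under the Bachelier model; sigma is an absolute (normal) vol."""
    s = sigma * sqrt(T)
    d = (F - K) / s
    return exp(-r * T) * ((F - K) * norm_cdf(d) + s * norm_pdf(d))

# Hypothetical inputs: forward 100, strike 105, normal vol 15 (price units), 1 year.
print(f"{bachelier_call(100.0, 105.0, 15.0, 1.0, r=0.02):.4f}")
```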
ARTICLE | doi:10.20944/preprints201906.0307.v2
Subject: Physical Sciences, Applied Physics Keywords: granular flow; drag and lift forces; discrete element method
Online: 2 July 2019 (11:12:46 CEST)
Both drag and lift forces act on an inclined plane when it is dragged through a granular bed. In this paper, the following results have been obtained: the drag and lift forces grow with the velocity of motion; when the immersion depth is constant, the inclination angle has no effect on the drag force, but the lift force increases linearly with this angle; and the ratio of the drag and lift forces is exactly equal to the tangent of the inclination angle. In order to describe this physical process macroscopically, a continuum wedge model based on the Coulomb model is established to predict the drag and lift forces. In particular, the dynamic friction angle in the assumed shear band is predicted as a function of both the inclination angle and the moving velocity.
ARTICLE | doi:10.20944/preprints202211.0406.v1
Subject: Engineering, Civil Engineering Keywords: finite element model updating; soil-structure interaction; system identification; joint system and input identification; Bayesian estimation; Millikan Library
Online: 22 November 2022 (04:23:22 CET)
We present a finite element model updating technique for soil-structure system identification of the Millikan Library building using the seismic data recorded during the 2002 Yorba Linda earthquake. A detailed finite element (FE) model of the Millikan Library building is developed in OpenSees and updated using a sequential Bayesian estimation approach for joint parameter and input identification. A two-step system identification approach is devised. First, the fixed-base structural model is updated to estimate the structural model parameters (including the effective elastic modulus of structural components, distributed floor mass, and Rayleigh damping parameters) and some uncertain components of the foundation-level motion. Then, the identified structural model is used for soil-structure model updating, wherein the Rayleigh damping parameters, the stiffness and viscosity of the soil subsystem (modeled using a substructure approach), and the foundation input motions (FIMs) are estimated. The identified model parameters are compared with state-of-practice recommendations. While a specific application is made to the Millikan Library, the present work offers a framework for integrating large-scale FE models with measurement data for model inversion. By utilizing this framework for different civil structures and earthquake records, key structural model parameters can be estimated from real-world recorded data, which can subsequently be used for assessing and improving, as necessary, state-of-the-art seismic analysis and structural modeling techniques. This paper thus presents an effort towards using real-world measurements for large-scale FE model updating of the soil and structure in the time domain for joint parameter and input estimation, and paves the way for future applications in system identification, health monitoring and diagnosis of civil structures.
ARTICLE | doi:10.20944/preprints201806.0499.v1
Subject: Engineering, Mechanical Engineering Keywords: lumped parameter simulation; aircraft hybrid propulsion; fuel economy; propulsion and propellant systems
Online: 30 June 2018 (15:04:34 CEST)
This paper describes a case study of applying a hybrid-electric propulsion system to a general aviation aircraft. The work was performed by a joint team of CIRA and the Department of Industrial Engineering of the University of Naples “Federico II”. Electric and hybrid-electric propulsion for aircraft has gained widespread and significant attention over the past decade. The driver for industry interest has principally been the need to reduce combustion engine exhaust emissions and noise, but studies have increasingly revealed potential for overall improvement in the energy efficiency and mission flexibility of new aircraft types. The project goal was to demonstrate the feasibility of aeronautic parallel hybrid-electric propulsion for a light aircraft, varying the mission profiles and the electric configuration. Through the creation and application of a global model in the AMESim® software, representing all of the components chosen by the industrial partners, some interesting considerations are drawn. In particular, it was confirmed that, with the integration of state-of-the-art technologies alone, the advantages of hybrid-electric propulsion for light aircraft are notable for some particular missions.
ARTICLE | doi:10.20944/preprints201611.0097.v1
Subject: Business, Economics And Management, Economics Keywords: forest industry and ecology; environmental capacity; interaction mechanism; SEM; PSIR
Online: 18 November 2016 (10:01:42 CET)
By introducing an environmental mediator to avoid randomness in selecting indicators and subjectivity in deciding interaction path coefficients, and to reveal the interaction mechanism among forest industry, ecology and environmental capacity, a structural Pressure–State–Impact–Response (PSIR) framework and quantitative Structural Equation Modeling (SEM) are applied to data on thirty-one provinces of China. We find that: (i) forest industry has a negative influence on ecology, whereas ecology has a positive influence on industry; (ii) the destructive influence that industry has on ecology is much greater than the positive effect ecology has on industry; and (iii) environmental capacity plays a partial mediating role between industry and ecology. From these results we conclude that: (i) forest industry and ecology are not in a symbiotic relationship and have not reached a level of ecological security; (ii) the interaction between industry and ecology is in a period of transition from antagonistic to beneficial; and (iii) industry dominates ecology. Accordingly, we offer the following advice: (i) the methods of the forest industry should be changed; (ii) new modes of industrial integration and circular economy should be developed; and (iii) the ability of the environment to sustain itself should be enhanced. In future work, we propose to measure the level of forest ecological security in order to discuss the symbiotic mechanism of industry and ecology.
ARTICLE | doi:10.20944/preprints202006.0323.v1
Subject: Social Sciences, Sociology Keywords: diversity and inclusion; science; social pain; theory of inclusion; framework model of inclusion; practice of inclusion; meaning of inclusion
Online: 28 June 2020 (08:50:35 CEST)
The diversity and inclusion discussion permeates many sectors of society. Within this dialogue, science and scientists are acutely aware of the value of diversity and the need for inclusion. While demographic diversity in science has received considerable recent attention, very little research and understanding exists on inclusion. Our study presents empirical data on the meaning of inclusion using a crowdsourcing approach that sought responses to the question “What does inclusion mean to you?”. The most prominent concepts were those of empathy, warmth, support, love, acceptance and curiosity; diverse perspectives; and participation. We clustered conceptual elements of inclusion into four themes: access and participation, embracing diverse perspectives, a welcome environment, and team belonging. On the basis of these data, we theorize a conceptual framework model from which inclusion may be put into practice. Our model suggests a dynamic process of inclusion operating from principal structural elements of 1) a foundation that involves place, access and participation, and space; 2) reciprocal engagement as an engine for inclusion; and 3) expression of inclusion as culture. The framework model demonstrates a means by which the practice of diversity can be more than shifts in demographic statistics, and instead promote the full expression of benefits derived when the many dimensions of diversity are truly included.
ARTICLE | doi:10.20944/preprints201911.0250.v2
Subject: Physical Sciences, Applied Physics Keywords: quantum information; shannon entropy; quantum physics; paraconsistent logic; mathematics and computing
Online: 24 February 2020 (04:04:11 CET)
In this work, we present a model of the atom that is based on a nonclassical logic called paraconsistent logic (PL), which has the main property of accepting the contradiction in logical interpretations without the conclusions being annulled. The proposed model is constructed with an extension of PL called paraconsistent annotated logic with annotation of two values (PAL2v), which is associated with an interlaced bilattice of four vertices. We use the logarithmic function of the Shannon entropy H(s) to construct the paraconsistent equations and thus adapt a probabilistic model for representations in quantum physics. Through analyses of the interlaced bilattice, comparative values are obtained for some of the phenomena and effects of quantum mechanics, such as superposition of states, quantum entanglement, wave functions, and equations that determine the energy levels of the layers of an atom. At the end of this article, we use the hydrogen atom as a basis of the representation of the PAL2v model, where the values of the energy levels in six orbital layers are obtained. As an example, we present a possible method of applying the PAL2v model to the use of Raman spectroscopy signals in the detection of lubricating mineral oil quality.
ARTICLE | doi:10.20944/preprints202305.1741.v1
Subject: Computer Science And Mathematics, Analysis Keywords: tactic analysis; gaming tree; nash equilibrium; badminton analysis and prediction; computer-based analysis
Online: 25 May 2023 (05:35:09 CEST)
Badminton tactics refer to the techniques and strategies employed by players to win a match. Analyzing these tactics can help players improve their performance and outsmart their opponents. To study the tactics of top players, we use a gaming tree to analyze matches between two of the most powerful badminton players in history: Lin and Lee. By employing the Nash Equilibrium, we can discover the most beneficial strategies for both players, which reflect their most powerful techniques. Additionally, with the help of this gaming tree, we can precisely predict how players will implement their tactics. Empirical experimental results demonstrate that our proposed method not only evaluates and identifies each player's weaknesses and strengths but also has powerful capabilities to predict their tactics.
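As a small illustration of the equilibrium computation such an analysis relies on, the sketch below solves a 2x3 zero-sum "rally" game for its mixed-strategy Nash equilibrium via linear programming; the payoff matrix and tactic labels are invented for illustration, not taken from the Lin-Lee matches.

```python
# Mixed-strategy Nash equilibrium of a zero-sum game via linear programming.
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoffs to the row player (e.g., rally-winning scores,
# shifted to be positive); rows/columns are invented tactic choices.
A = np.array([[3.0, 1.0, 4.0],
              [2.0, 3.0, 1.0]])

m, n = A.shape
# Row player maximizes v subject to A^T x >= v, sum(x) = 1, x >= 0.
# Standard LP form: minimize -v over the variables [x_1..x_m, v].
c = np.zeros(m + 1); c[-1] = -1.0
A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("row strategy:", np.round(x, 3), "game value:", round(v, 3))
```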
ARTICLE | doi:10.20944/preprints202205.0244.v1
Subject: Engineering, Automotive Engineering Keywords: failure mode and effect analysis (FMEA); model-based design; automatic generation tool; fault injection simulation
Online: 18 May 2022 (12:40:58 CEST)
In the development of safety-critical systems, it is important to perform the Failure Modes and Effects Analysis (FMEA) process to identify potential failures. However, traditional FMEA activities tend to be difficult and time-consuming tasks. To compensate for the difficulty of the FMEA task, various types of tools are used to increase the quality and effectiveness of the FMEA reports. This paper explains an Automatic FMEA tool which integrates Model-based Design (MBD), FMEA, and simulated fault injection techniques in a single environment. The Automatic FMEA tool has the following advantages over existing FMEA analysis tools. First, it generates FMEA reports automatically, unlike traditional spreadsheet-based FMEA tools. Second, it analyzes the causality between failure modes and failure effects by performing model-based fault injection simulation. In order to demonstrate the applicability of the Automatic FMEA tool, we used an electronic fuel injection (EFI) system Simulink model, and the results were compared to those of a legacy FMEA.
ARTICLE | doi:10.20944/preprints202304.0202.v1
Subject: Public Health And Healthcare, Health Policy And Services Keywords: model development; TB–HIV integrated model; TB and HIV; model; quantitative and qualitative data
Online: 11 April 2023 (05:39:06 CEST)
Few studies have examined the pros and cons of integrated TB and HIV service delivery in public healthcare facilities, and even fewer have proposed conceptual models for improved integration. This study intends to fill that vacuum by outlining the development of a facility-based model for integrating TB and HIV patient services. The design of the proposed model proceeded in stages that involved the evaluation of an existing TB-HIV integration model and the synthesis of quantitative and qualitative data from the study sites, which were selected public healthcare facilities in both rural and peri-urban settings in the Oliver Reginald (O.R.) Tambo District Municipality in the Eastern Cape, South Africa. Secondary data on 2009-2013 TB-HIV clinical outcomes were obtained from multiple sources for quantitative analysis. Qualitative data came from focus group discussions among patients and healthcare staff, which were thematically analysed. The development and validation of the model show that the district's health system was reinforced by the model's guiding principles, which place a strong emphasis on inputs, processes, outcomes, and integration effects. The model is adaptable to different healthcare delivery systems but will require support from healthcare stakeholders and professionals to be successful.
ARTICLE | doi:10.20944/preprints202308.0309.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: feature identification and extraction; Copula analysis; multi-energy loads; model fusion
Online: 3 August 2023 (10:13:57 CEST)
To improve the accuracy of short-term multi-energy load prediction models for integrated energy systems, the historical development of the multi-energy loads must be considered. Moreover, understanding the complex coupling correlations among the different loads in multi-energy systems and accounting for other load-influencing factors, such as weather, may further improve the forecasting performance of such models. In this study, a two-stage fuzzy optimization method is proposed for the feature selection and identification of the multi-energy loads. To enrich the information content of the prediction input features, we introduced a copula correlation feature analysis into the proposed framework, which extracts the complex dynamic coupling correlations of the multi-energy loads and applies the Akaike information criterion (AIC) to evaluate the adaptability of the different copula models presented. Furthermore, we combined a NARX neural network with Bayesian optimization and an extreme learning machine model optimized using a genetic algorithm to effectively improve the feature fusion performance of the proposed multi-energy load prediction model. The effectiveness of the proposed short-term prediction model was confirmed by experimental results obtained using the multi-energy load time-series data of an actual integrated energy system.
ARTICLE | doi:10.20944/preprints201901.0204.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: gross domestic product (GDP); lending rates; savings; loans and advances; ARDL
Online: 21 January 2019 (10:02:52 CET)
In the econometrics literature, the Autoregressive Distributed Lag (ARDL) model is often applied in economic analyses to study short- and long-run relationships. This is because the ARDL model can deal with economic variables that are integrated of different orders (I(0), I(1) or a combination of both) and is robust when there is a single long-run relationship between the underlying variables in a small sample. This study applied the ARDL model to examine the contributions of commercial banks to GDP growth in Nigeria. To achieve this, annual data covering 1981 to 2015 on loans and advances, savings, lending rates and GDP of financial institutions were collected from the CBN bulletin. The ADF test revealed that the variables are I(1), except for the lending rate, which is I(0). The ARDL(1,1,1,2) model revealed that loans and advances, and lending rates, are significantly positively related to GDP in Nigeria, but savings was not significant in the model. The model revealed some evidence of short-run relationships, while the ecm(-1) coefficient was -0.6156 (p-value = 0.0038 < 0.05), which means that the speed of adjustment to equilibrium is 61.56% annually. The estimated model is free from serial correlation, multicollinearity and heteroscedasticity, and it is stable with normally distributed residuals. The study recommends that savings and a savings culture be encouraged in Nigeria, since economic theory states that savings and investment are related in any economic development.
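For readers wanting to reproduce this kind of analysis, a minimal sketch with statsmodels' ARDL class on synthetic data follows; the lag order and variable names are placeholders, not the study's CBN series, and the ARDL class assumes statsmodels 0.13 or later.

```python
# ARDL estimation sketch on synthetic macro-style data (statsmodels >= 0.13 assumed).
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(42)
n = 35                                   # e.g., annual observations, 1981-2015
loans = np.cumsum(rng.normal(0.5, 1.0, n))
savings = np.cumsum(rng.normal(0.3, 1.0, n))
gdp = 0.6 * loans + 0.2 * savings + np.cumsum(rng.normal(0, 0.5, n))

df = pd.DataFrame({"gdp": gdp, "loans": loans, "savings": savings})

# ARDL(1, 1, 1): one lag of the dependent variable and of each regressor.
model = ARDL(df["gdp"], lags=1, exog=df[["loans", "savings"]], order=1)
res = model.fit()
print(res.summary())
```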
ARTICLE | doi:10.20944/preprints202006.0039.v2
Subject: Computer Science And Mathematics, Analysis Keywords: Epidemiology; infectious disease; compartmental model; mathematical modelling and optimisation; COVID-19; SARS-CoV-2
Online: 12 June 2020 (12:14:28 CEST)
A compartmental epidemiological model with seven groups is introduced herein, to account for the dissemination of diseases similar to the Coronavirus disease 2019 (COVID-19). In its simplified version, the model contains ten parameters, four of which relate to characteristics of the virus, whereas another four are transition probabilities between the groups; the last two parameters enable the empirical modelling of the effective transmissibility, associated in this study with the cumulative number of fatalities due to the disease within each country. The application of the model to the fatality data (the main input herein) of five countries (to be specific, of those which had suffered most fatalities by April 30, 2020) enabled the extraction of an estimate for the basic reproduction number $R_0$ for the COVID-19 disease: $R_0=4.91(33)$.
ARTICLE | doi:10.20944/preprints202212.0135.v1
Subject: Engineering, Civil Engineering Keywords: Hydrodynamic model; marine and coastal tourism; analysis hierarchy process
Online: 7 December 2022 (14:47:31 CET)
Poso Regency, Central Sulawesi, Indonesia, has a coastal area with marine tourism potential to be developed. It is expected that marine tourism can bring socio-economic benefits to the community. This research was conducted with the objective of assessing the suitability of the area to be developed as a marine and coastal tourism site that benefits the coastal community. A hydrodynamic model is used in this research for coastal area mapping. As an approach, the Analytic Hierarchy Process (AHP) is utilized, whose parameters consist of depth, coast type, coast width, brightness, current speed, water base materials, observation of dangerous biota and availability of fresh water. Based on the overall mapping area of 98,644 ha, the research results show that 7,979 ha can be utilized in the very suitable category, while a further 1,045 ha can still be classified as suitable.
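A minimal sketch of the AHP weighting step is shown below: a pairwise comparison matrix is reduced to priority weights via its principal eigenvector and checked with the consistency ratio; the 3x3 matrix and criteria names are invented for illustration, not the study's judgments.

```python
# AHP priority weights from a pairwise comparison matrix (toy example).
import numpy as np

# Hypothetical pairwise judgments for three criteria
# (depth, current speed, brightness) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # principal eigenvector -> priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index for n criteria
print("weights:", np.round(w, 3), "CR:", round(ci / ri, 3))
```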
ARTICLE | doi:10.20944/preprints202306.0894.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Educational policies; Learning and development; Machine Learning Techniques; Skillset; IT Governance; Team Members
Online: 13 June 2023 (08:19:19 CEST)
Software governance is a management structure that guides projects in terms of their accountability and responsibility. The prime motivation of this approach is to improve the skillset of the team members through software governance policies and to increase the overall success rate of software projects. The scope of skill development spans the pillars of governance: structure, people, and information. The primary focus of this paper is on the skillset development of project team members through educational policies in software governance. As part of the governance process, educational policies are defined for the skillset development of project team members. The JIRA dataset was used to determine the skillset development of the team members. Machine learning techniques, such as J48, Random Forest, Decision Table, Logistics, and Naïve Bayes, were applied to the JIRA dataset and processed using the WEKA open-source software. Based on the results, it was concluded that the J48 algorithm can be applied to multiple projects/programs to monitor and track the skill development process, and that a machine learning model such as J48 is required to apply this approach at an organizational level. The skillset development of project team members should be aligned with IT governance and educational policies. Overall upskilling and reskilling strategies are provided to demonstrate the impact of skillset development through software governance.
COMMUNICATION | doi:10.20944/preprints202107.0667.v1
Subject: Engineering, Automotive Engineering Keywords: 5G and beyond/6G wireless networks; greencom; IoT; passive repeater; relaying systems; SWIPT
Online: 29 July 2021 (14:30:56 CEST)
In order to support a massive number of resource-constrained Internet-of-Things (IoT) devices and machine-type devices, it is crucial to design future beyond 5G/6G wireless networks in an energy-efficient manner while incorporating suitable network coverage expansion methodologies. To this end, this invited paper proposes a novel two-hop hybrid active-and-passive relaying scheme to facilitate simultaneous wireless information and power transfer (SWIPT) considering both the time-switching (TS) and power-splitting (PS) receiver architectures, while dynamically modelling the involved dual-hop time-period (TP) metric. An optimization problem is formulated to jointly optimize the throughput, harvested energy, and transmit power of a SWIPT-enabled system with the proposed hybrid scheme. In this regard, we provide two distinct ways to obtain suitable solutions based on the Lagrange dual technique and Dinkelbach method assisted convex programming, respectively, where both the approaches yield an appreciable solution within polynomial computational-time. The experimental results are obtained by directly solving the primal problem using a non-linear optimizer. Our numerical results in terms of weighted utility function show the superior performance of proposed hybrid scheme over passive repeater-only and active relay-only schemes, while also depicting their individual performance benefits over the corresponding benchmark SWIPT systems with the fixed-TP.
ARTICLE | doi:10.20944/preprints202306.2058.v1
Subject: Public Health And Healthcare, Public Health And Health Services Keywords: Text Messaging Vaccination Reminder and Recall System; Technology Acceptance Model; Nurses’ Attitude and Intention; Nurses’ Acceptance; Artificial Neural Network
Online: 28 June 2023 (16:46:22 CEST)
Malaysian healthcare institutions still use ineffective paper-based vaccination systems to manage childhood immunization schedules. This may lead to missed appointments, incomplete vaccinations, and outbreaks of preventable diseases among infants. To address this issue, we proposed a text messaging vaccination reminder and recall system named Virtual Health Connect (VHC) to simplify and accelerate immunization administration for nurses, which may improve the completion and timeliness of immunizations among infants. Considering the limited research on the acceptance of such systems in the healthcare sector, we examined the factors influencing nurses’ attitude and intention to use VHC using the extended technology acceptance model (TAM). The novelty of the conceptual model lies in proposing new predictors of attitude, namely, perceived compatibility and perceived privacy and security issues. We conducted a survey among 121 nurses in Malaysian government hospitals and clinics to test the model. We analyzed the collected data using partial least squares-structural equation modeling (PLS-SEM) to examine the significant factors influencing nurses’ attitude and intention to use VHC. Moreover, we applied an artificial neural network (ANN) to determine the most significant factors of acceptance with higher accuracy, allowing us to offer more accurate insights to decision-makers in the healthcare sector for the advancement of health services. Our results highlighted that the compatibility of VHC with the current work setting of nurses developed their positive perspectives toward the system. Moreover, the nurses felt optimistic about the system when considering it to be useful and easy to use in the workplace. Finally, their attitude toward using VHC played a pivotal role in raising their intention. Based on the ANN models, we also found that perceived compatibility was the most significant factor influencing nurses’ attitude towards using VHC, followed by perceived ease of use and perceived usefulness.
ARTICLE | doi:10.20944/preprints202311.0778.v1
Subject: Environmental And Earth Sciences, Water Science And Technology Keywords: Nitrogen and phosphorus discharges; green port; fertilizers; BRT models; eutrophication
Online: 13 November 2023 (10:32:00 CET)
Marine eutrophication is a pervasive and growing threat to global sustainability, and nutrient discharges to the marine environment should therefore be reduced to a minimum. When fertilizers are loaded onto vessels in ports, a significant amount of nutrients is released into the sea, but so far these operations have received little attention. Here, we employed Boosted Regression Trees (BRT) modeling to define relationships between fertilizer loading, loading area, rain intensity and nutrient discharge to the marine environment, and then used the established relationships to predict daily nutrient discharge due to fertilizer loading. The studied subject was a port in the Gulf of Finland, where significant amounts of both nitrogen and phosphorus are loaded onto vessels. The BRT models accounted for a significant proportion of the variability of nutrient discharge. As expected, the nutrient discharge increased with the amount of fertilizer loaded and the intensity of rain. On the other hand, with increasing loading area the total nitrogen discharge increased but the phosphorus discharge decreased; the latter result may be due to the different characteristics of the loading areas of different terminals. The model predicted that at the studied port the total nitrogen and phosphorus discharges into the marine environment due to fertilizer loading were 272,906 and 196 kg per year, respectively. Importantly, the developed model can be used to predict nutrient loads for different future scenarios and to propose the best mitigation methods for nutrient discharges to the sea.
ARTICLE | doi:10.20944/preprints202109.0088.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: Iodine; leaching; HYDRUS 1D model; Simulation; Organic and inorganic amendments
Online: 6 September 2021 (12:07:01 CEST)
This study investigated the ability of a HYDRUS-1D model to predict the vertical distribution of potassium iodide (200 ppm) in soil columns after amendment with five common remediation materials (gypsum, lime, fly ash, charcoal, and sawdust) at a rate of 2.5% (w/w), relative to an unamended control soil. Results show that, relative to the unamended soil, iodine leaching was decreased by all amendments, but the magnitude of the decrease varied with the amendment applied. Iodine content was highest in the upper layer of the soil columns and decreased progressively with soil depth. The model was evaluated by comparing its simulated values with measured values from the soil column studies. The HYDRUS-1D model efficiency was close to 1, indicating that the simulated results were near the measured values; overall, iodine leaching through a soil could be described well using the HYDRUS-1D model. However, the model overpredicted iodine leaching, resulting in a weaker correspondence between the simulated and measured results for iodine leaching. This suggests that the HYDRUS-1D model does not accurately account for the differently amended organic and inorganic soils and the preferential flow that occurs in these columns. This may be because the Freundlich isotherm, which is part of the transport equations, does not sufficiently describe the mechanism of iodine adsorption onto soil particles. This study should help in selecting amendments for an effective management strategy to reduce exogenous iodine losses from agro-ecosystems, and it improves understanding of iodine transport in the soil profile.
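The "model efficiency close to 1" quoted above is conventionally the Nash-Sutcliffe efficiency; a minimal sketch of that statistic, with invented depth-profile numbers, is:

```python
# Nash-Sutcliffe model efficiency: 1 means simulation matches observation.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

# Hypothetical iodine contents by depth (mg/kg): observed vs simulated.
obs = [42.0, 30.5, 18.2, 9.1, 4.0]
sim = [40.8, 31.9, 17.5, 10.0, 4.6]
print(round(nash_sutcliffe(obs, sim), 3))   # close to 1 -> good agreement
```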
ARTICLE | doi:10.20944/preprints201705.0188.v1
Subject: Engineering, Mechanical Engineering Keywords: Creep; Composite constitutive model; θ projection method; low and intermediate temperature
Online: 25 May 2017 (17:35:30 CEST)
The creep behaviors of TA2 and R60702 at low and intermediate temperatures are presented and discussed in this paper. Experimental results indicated that an apparent threshold stress is exhibited in the creep deformation of R60702, while the primary creep phase was found to be the dominant pattern in the room-temperature creep behavior of TA2. Compared with the exponential law, the power law proved to be the more suitable constitutive model for describing the primary creep phase, while the θ projection method showed a significant advantage in evaluating the accelerated (tertiary) creep stage. A composite model combining the power law with the θ projection method was therefore applied to evaluate the creep curves at low and intermediate temperatures. Based on the multiaxial creep deformation results, the model was modified and discussed; a linear relationship was found between the composite model parameters and the applied load. Finally, the creep life could be accurately predicted, and the composite model method is suitable for creep life analysis at low and intermediate temperatures.
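The θ projection method referred to above represents a creep curve as ε(t) = θ1(1 − e^(−θ2·t)) + θ3(e^(θ4·t) − 1), the first term capturing primary creep and the second the accelerating tertiary stage. A minimal fitting sketch on invented data (not the TA2/R60702 measurements):

```python
# Theta-projection fit on synthetic creep data via nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, th1, th2, th3, th4):
    # Decaying primary term plus accelerating tertiary term.
    return th1 * (1 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1)

t = np.linspace(0, 100, 25)                       # hours (illustrative)
strain = theta_projection(t, 0.01, 0.08, 0.002, 0.03) \
         + np.random.default_rng(2).normal(0, 1e-4, t.size)
popt, _ = curve_fit(theta_projection, t, strain, p0=[0.01, 0.1, 0.001, 0.01])
print(popt.round(4))                              # recovered theta values
```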
ARTICLE | doi:10.20944/preprints202110.0237.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Software reliability; deep learning; long short-term memory; project similarity and clustering; cross-project prediction
Online: 18 October 2021 (10:33:39 CEST)
Software reliability is an important characteristic for ensuring the quality of software products. Predicting the potential number of bugs from the beginning of a development project allows practitioners to make appropriate decisions regarding testing activities. In the initial development phases, applying traditional software reliability growth models (SRGMs) with limited past data does not always provide reliable prediction results for decision making. To overcome this, we propose a new software reliability modeling method called the deep cross-project software reliability growth model (DC-SRGM). DC-SRGM is a cross-project prediction method that uses features of previous projects' data through project similarity. Specifically, the proposed method applies cluster-based project selection to choose the training data source and models it with a deep learning method. Experiments involving 15 real datasets from a company and 11 open source software datasets show that DC-SRGM can describe the reliability of ongoing development projects more precisely than existing traditional SRGMs and the LSTM model.
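A minimal sketch of the deep-learning component, assuming a sliding-window LSTM over scaled cumulative defect counts (the window length and data are illustrative; DC-SRGM's cluster-based source selection is omitted):

```python
# Next-step defect-count prediction with a small LSTM; data are synthetic.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

bugs = np.cumsum(np.random.default_rng(3).poisson(2.0, 60)).astype("float32")
bugs /= bugs.max()                                  # scale to [0, 1]
w = 5                                               # sliding-window length
X = np.stack([bugs[i:i + w] for i in range(len(bugs) - w)])[..., None]
y = bugs[w:]

model = Sequential([Input((w, 1)), LSTM(16), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)
print(model.predict(X[-1:], verbose=0))             # predicted next value
```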
ARTICLE | doi:10.20944/preprints202309.0737.v1
Subject: Engineering, Civil Engineering Keywords: construction enterprise; digital transformation maturity; AHP-Decision-Making Trial and Evaluation Laboratory (AHP-DEMATEL)
Online: 12 September 2023 (07:21:55 CEST)
With the continuous development of digital transformation and upgrading of Chinese construction enterprises, it is becoming increasingly important to measure their digital level, find the problems in the enterprise transformation process, and identify the key factors of enterprise digital capacity enhancement. This paper constructs a construction enterprise digital transformation maturity evaluation model from six first-level indicators and 20 second-level indicators, including digital strategy, digital business application, digital technology capability, data capability, digital organization capability, and change management. Digital maturity is divided into five levels: business management, process operation, intelligent construction, intelligent scene application, and industrial ecological collaboration. A detailed process of digital maturity evaluation based on the Analytic Hierarchy Process (AHP)-Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is then developed. A questionnaire survey of 25 experts is used to weight the various parameters in the model, which is then demonstrated on an example construction enterprise. The model comprehensively reflects digital levels in the context of the digital economy. Its application will help enterprises understand the advantages and disadvantages they face in digital transformation, enabling targeted measures to improve their digital transformation capability and efficiency, enhance their core competitiveness, and promote the development of digital transformation in the construction industry.
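The AHP step of such a model derives indicator weights from a pairwise comparison matrix via its principal eigenvector and then checks judgment consistency; a sketch on a toy 3x3 matrix (not the 25-expert data) follows.

```python
# AHP weighting: principal eigenvector of a pairwise comparison matrix.
import numpy as np

A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                # normalized indicator weights
n = A.shape[0]
CI = (vals.real[k] - n) / (n - 1)           # consistency index
CR = CI / 0.58                              # random index RI = 0.58 for n=3
print(w.round(3), round(CR, 3))             # CR < 0.1 -> judgments usable
```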
REVIEW | doi:10.20944/preprints201702.0002.v1
Subject: Physical Sciences, Astronomy And Astrophysics Keywords: Cosmology; Observational cosmology; Origin, formation, and abundances of the elements; dark matter; dark energy; superclusters; large-scale structure of the Universe
Online: 1 February 2017 (16:06:52 CET)
The main foundations of the standard $\Lambda$CDM model of cosmology are that: 1) the redshifts of the galaxies are due to the expansion of the Universe plus peculiar motions; 2) the cosmic microwave background radiation and its anisotropies derive from the high-energy primordial Universe when matter and radiation became decoupled; 3) the abundance pattern of the light elements is explained in terms of primordial nucleosynthesis; and 4) the formation and evolution of galaxies can be explained only in terms of gravitation within an inflation+dark matter+dark energy scenario. Numerous tests have been carried out on these ideas and, although the standard model works fairly well in fitting many observations, there are also many data that present apparent caveats that remain to be understood within it. In this paper, I offer a review of these tests and problems, as well as some examples of alternative models.
REVIEW | doi:10.20944/preprints202202.0287.v1
Subject: Medicine And Pharmacology, Orthopedics And Sports Medicine Keywords: integration of sports and health care; sports; health; community
Online: 23 February 2022 (07:06:51 CET)
(1) Background: With the continuous globalization and modernization of people's lives, lifestyles have changed dramatically, with decreased physical activity and increased unhealthy eating patterns in many nations throughout the world. With the COVID-19 pandemic and the changes taking place in people's health and lifestyles around the world, the need for rehabilitation is expected to rise in the coming years. (2) Methods: This paper analyzes the integration model of sports and health care using theoretical analysis, literature review, logical reasoning, and other methods. (3) Results: The integration of sports and health care in China has entered the stage of practical implementation after many years of development, forming a few representative integration patterns. Governments, communities, community hospitals, hospitals, and third-party institutions are the main participants, with the community playing an important role in the integration; pharmacies, sports venues, and schools with sufficient staff have a relatively low participation rate. (4) Conclusion: Graded treatment has been applied in health management and sports rehabilitation. Based on the development of digital medicine, a government-led graded treatment model centered on a "health management center" can promote the participation of multiple actors in the integration of sports and health care, solving the problems in the current integration process to a certain extent.
ARTICLE | doi:10.20944/preprints202307.0259.v1
Subject: Engineering, Marine Engineering Keywords: PMMA; J-C constitutive Model; Impact; Loss and Damage; Numerical Simulation
Online: 5 July 2023 (08:50:16 CEST)
Polymethyl methacrylate (PMMA) polymer is widely used in many fields today. To thoroughly characterize the impact performance of PMMA structures, this paper builds on the Johnson-Cook (J-C) constitutive model and damage failure model, confirming the J-C constitutive and damage failure model parameters of PMMA from material test data. The dynamic process of a steel bullet impacting a PMMA plate structure is then analyzed with the finite element software ABAQUS. The numerical simulation results agree well with the experimental test data, supporting the feasibility and accuracy of impact analysis of PMMA structures based on the J-C constitutive and damage failure models. Finally, the variation of the residual velocity of the bullet with the PMMA plate thickness is analyzed in depth; the results show that the residual velocity of the bullet has an approximately linear relationship with the plate thickness.
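For reference, the standard Johnson-Cook flow stress has the multiplicative form sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m); the sketch below evaluates it with placeholder constants, not the calibrated PMMA parameters from the paper.

```python
# Johnson-Cook flow stress with placeholder constants (illustrative only).
import numpy as np

def jc_stress(strain, strain_rate, T, A=90.0, B=50.0, n=0.5, C=0.02,
              m=1.0, rate0=1.0, T_room=293.0, T_melt=430.0):
    T_star = (T - T_room) / (T_melt - T_room)    # homologous temperature
    return (A + B * strain**n) \
         * (1 + C * np.log(strain_rate / rate0)) \
         * (1 - T_star**m)

print(jc_stress(strain=0.05, strain_rate=100.0, T=300.0))  # MPa-scale value
```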
DATA DESCRIPTOR | doi:10.20944/preprints202308.1383.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: Hausman test; random effect model; Wald test; fixed effect model and least squares dummy variable
Online: 18 August 2023 (12:39:06 CEST)
The impacts of the COVID-19 (novel coronavirus) epidemic could not have been more severe, with the globe experiencing both economic and health crises. This study examines the trends and correlations in the number of COVID-19-related deaths and the number of COVID-19-infected patients in all 37 regions of Tamil Nadu state, India, in the month of August 2020, on the basis of a panel regression model.
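A sketch of the implied fixed- vs random-effects choice via a Hausman test, using the linearmodels package on a synthetic district-by-day panel (the column names and data are assumptions, not the study's dataset):

```python
# Hausman test on a synthetic panel; a small p-value favors fixed effects.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(4)
idx = pd.MultiIndex.from_product([range(37), range(31)],
                                 names=["district", "day"])
df = pd.DataFrame({"infected": rng.poisson(100, len(idx))}, index=idx)
df["deaths"] = 0.02 * df["infected"] + rng.normal(0, 1, len(df))

fe = PanelOLS(df["deaths"], df[["infected"]], entity_effects=True).fit()
re = RandomEffects(df["deaths"], df[["infected"]]).fit()
b = (fe.params - re.params).values
V = (fe.cov - re.cov).values
H = float(b @ np.linalg.inv(V) @ b)          # Hausman statistic
print(H, stats.chi2.sf(H, df=len(b)))        # statistic and p-value
```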
ARTICLE | doi:10.20944/preprints202307.1481.v1
Subject: Engineering, Transportation Science And Technology Keywords: Macroscopic Pedestrian Flow Model; Passenger Collection and Distribution; Dynamic network loading model; Optimal control theory
Online: 21 July 2023 (08:46:16 CEST)
A macroscopic network loading model for multiple flow lines and time-varying pedestrian congestion is proposed. The station hub is abstracted as a network of different types of nodes, and the passenger flow at each node is calculated in real time to simulate the hub's collection and distribution process. For the correct transmission of passenger flow over heterogeneous networks, three types of indexes are proposed to distinguish the nodes and match them to the corresponding fundamental diagrams. The paper divides the update process of the dynamic network loading (DNL) model into multiple processes by flow line, which improves the computational speed of the DNL model. The proposed model is applied to the simulation of passenger flow collection and distribution in an actual hub station with multiple flow lines. The analysis results illustrate that the model can accurately reflect realistic congestion at facilities and explain the formation of high-density areas. A rolling passenger flow control model based on optimal control theory is also proposed, and its effectiveness is verified on simulation data.
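As a stand-in for the node-specific fundamental diagrams mentioned above, the toy function below uses a Greenshields-type speed-density relation; the free-flow speed and jam density are generic pedestrian values, not the paper's calibrated diagrams.

```python
# Toy fundamental-diagram step of a pedestrian network-loading model.
V_FREE = 1.4    # free-flow walking speed, m/s (generic value)
K_JAM = 5.4     # jam density, ped/m^2 (generic value)

def outflow(density, width):
    """Flow (ped/s) through a node of given effective width (m)."""
    speed = V_FREE * max(0.0, 1.0 - density / K_JAM)
    return density * speed * width

for k in (0.5, 2.0, 4.0):                  # low, moderate, near-jam density
    print(k, round(outflow(k, width=3.0), 2))
```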
ARTICLE | doi:10.20944/preprints202103.0135.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Machine Learning Applications; Quality Assurance Methodology; Process Model; Automotive Industry and Academia; Best Practices; Guidelines
Online: 3 March 2021 (14:11:09 CET)
Machine learning is an established and frequently used technique in industry and academia, but a standard process model to improve the success and efficiency of machine learning applications is still missing. Project organizations and machine learning practitioners need guidance throughout the life cycle of a machine learning application to meet business expectations. We therefore propose a process model for the development of machine learning applications that covers six phases, from defining the scope to maintaining the deployed machine learning application. The first phase combines business and data understanding, as data availability often affects the feasibility of the project. The sixth phase covers state-of-the-art approaches for monitoring and maintaining a machine learning application, as the risk of model degradation in a changing environment is imminent. For each task of the process, we propose quality assurance methodology suitable for addressing challenges in machine learning development that we identify in the form of risks. The methodology is drawn from practical experience and scientific literature and has proven to be general and stable. The process model expands on CRISP-DM, a data mining process model that enjoys strong industry support but fails to address machine-learning-specific tasks. Our work proposes an industry- and application-neutral process model tailored to machine learning applications, with a focus on technical tasks for quality assurance.
ARTICLE | doi:10.20944/preprints202010.0616.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Spike-and-wave; Generalized Gaussian distribution; EEG; Morlet wavelet; k-nearest neighbors classifier; Epilepsy
Online: 29 October 2020 (14:05:54 CET)
Spike-and-wave discharge (SWD) pattern detection in electroencephalography (EEG) signals is a key signal processing problem. It is particularly important for overcoming the time-consuming, difficult, and error-prone manual analysis of long-term EEG recordings. This paper presents a new SWD detection method with low computational complexity that can be easily trained with data from standard medical protocols. Specifically, EEG signals are divided into time segments to which the Morlet 1-D wavelet decomposition is applied. The generalized Gaussian distribution (GGD) statistical model is fitted to the resulting wavelet coefficients, and a k-nearest neighbors (k-NN) self-supervised classifier is trained on the GGD parameters to detect the spike-and-wave pattern. Experiments were conducted using 106 spike-and-wave signals and 106 non-spike-and-wave signals for training, and another 96 annotated EEG segments from six human subjects for testing. The proposed SWD classification methodology achieved 95% sensitivity (true positive rate), 87% specificity (true negative rate), and 92% accuracy. These results pave the way for new research into the causes underlying so-called absence epilepsy in long-term EEG recordings.
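A compact sketch of the described pipeline, here using PyWavelets for the Morlet transform and SciPy's gennorm for the GGD fit (library choices and all signal parameters are assumptions; the signals are synthetic, not clinical EEG):

```python
# Morlet CWT -> generalized-Gaussian fit -> k-NN, on synthetic segments.
import numpy as np
import pywt
from scipy.stats import gennorm
from sklearn.neighbors import KNeighborsClassifier

def ggd_features(segment, scales=np.arange(1, 16)):
    coeffs, _ = pywt.cwt(segment, scales, "morl")
    beta, _, scale = gennorm.fit(coeffs.ravel())
    return [beta, scale]          # shape and spread of coefficient histogram

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 256)
swd = [np.sign(np.sin(2 * np.pi * 3 * t)) + 0.1 * rng.normal(size=t.size)
       for _ in range(20)]                       # crude 3 Hz SWD surrogate
background = [rng.normal(size=t.size) for _ in range(20)]
X = np.array([ggd_features(s) for s in swd + background])
y = np.array([1] * 20 + [0] * 20)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.score(X, y))                           # training accuracy
```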
ARTICLE | doi:10.20944/preprints202108.0178.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: Spatial regression model; Influential observation; Outlier; Leverage; prediction residual; Masking and swamping; Diagnostic
Online: 9 August 2021 (07:57:56 CEST)
Influential observations, which are outliers in the x direction, the y direction, or both, remain a hitch in classical regression model fitting. The spatial regression model, whose outliers have a peculiar local nature, is not free from the effect of such influential observations. Researchers have adapted some classical regression techniques to spatial models with satisfactory results; however, masking and/or swamping remain stumbling blocks for such methods. We derive the spatial representation of the classical regression diagnostic measures in the general spatial model. The commonly used diagnostic measure in spatial diagnostics, Cook's distance, is compared to some robust methods, Hi² (using robust and non-robust measures), and to classification based on generalized residuals and diagnostic generalized potentials, ISRs-Posi and ESRs-Posi, with the help of the obtained spatial prediction residuals and the spatial leverage term. Results of simulations and applications to real data show the advantage of ISRs-Posi and ESRs-Posi, owing to their classification of outliers, over Cook's distance and the non-robust Hsi1², which suffer from masking, and over the robust Hsi2², which suffers from swamping, in the general spatial model.
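For orientation, the classical (non-spatial) Cook's distance that these spatial diagnostics generalize can be computed directly with statsmodels; the planted outlier below is illustrative.

```python
# Classical Cook's distance baseline with one planted influential point.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(6)
x = rng.normal(size=50)
y = 2 * x + rng.normal(size=50)
y[0] += 8                                  # plant a y-direction outlier
res = sm.OLS(y, sm.add_constant(x)).fit()
d, _ = OLSInfluence(res).cooks_distance    # distances (and p-like values)
print(int(np.argmax(d)), round(float(d.max()), 2))   # index 0 stands out
```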
CONCEPT PAPER | doi:10.20944/preprints202007.0515.v1
Subject: Business, Economics And Management, Economics Keywords: Public Health Intervention; Health Education and Promotion; Behavior Change Intervention; Intervention Design; Multifaceted Intervention; Repeated Intervention; Mental Model Mapping; Low- and Medium-Income Country (LMIC)
Online: 22 July 2020 (10:58:58 CEST)
Improving the effectiveness of health interventions is a major challenge in public health research and program development. A large body of literature has found low or no impact of health education and promotional interventions. We aim to develop a conceptual framework in support of intervention designs for preventive health behavior improvement programs and outcomes. The proposed approach is based on a narrative review of empirical literature assessing the limitations of less effective or ineffective field experiments regarding preventive health education and promotion interventions. We found three major limitations regarding the mental model’s balance of treatment and comparison groups, treatment groups’ willingness to adopt suggested behaviors, and the type, length, frequency, intensity, and sequence of treatments. To minimize the influence of these concerns, we propose a mental model-based repeated multifaceted (MRM) intervention design framework to provide an intervention design for improving health education and promotional programs.
Subject: Computer Science And Mathematics, Information Systems Keywords: Academic Analytics; data storage; education and big data; analysis of data; learning analytics
Online: 19 July 2020 (20:37:39 CEST)
Business Intelligence, defined by  as "the ability to understand the interrelations of the facts that are presented in such a way that it can guide the action towards achieving a desired goal", has been used since 1958 to transform data into information, and information into knowledge, for decision making in a business environment. But what would happen if we took the same principles of business intelligence and applied them to the academic environment? The answer would be the creation of Academic Analytics, a term defined by  as the process of evaluating and analyzing organizational information from university systems for reporting and decision making. Its characteristics allow it to be used more and more in institutions, since the information they accumulate about their students and teachers includes data such as academic performance, student success, persistence, and retention. Academic Analytics enables an analysis of data that is very important for decision making in the educational institutional environment, aggregating valuable information for academic research activity and providing easy-to-use business intelligence tools. This article presents a proposal for an information system based on Academic Analytics, using ASP.Net technology and storage in the database engine Microsoft SQL Server, and designs a model supported by Academic Analytics for the collection and analysis of data from the information systems of educational institutions. The proposed system is capable of displaying statistics on the historical data of students and teachers across academic periods, without direct access to institutional databases, with the purpose of gathering the information that directors, teachers, and ultimately students need for decision making. The model was validated with information from students and teachers over the last five years, with data exported as pdf, csv, and xls files. The findings allow us to state that it is extremely important to analyze the data in the information systems of educational institutions for decision making. After the validation of the model, it was established that students must know the reports of their academic performance in order to carry out a process of self-evaluation; that teachers should be able to see the results of the data obtained in order to carry out self-evaluation and adapt content and dynamics in the classroom; and that, finally, the head of the program can use the reports to make decisions.
ARTICLE | doi:10.20944/preprints202105.0627.v1
Subject: Engineering, Automotive Engineering Keywords: Sensitivity analysis; Motion artifacts; Electrocardiography; Instrumentation and measurement; Bioelectromagnetism
Online: 26 May 2021 (10:22:00 CEST)
Wearable vital-signs monitoring, and especially the electrocardiogram (ECG), has taken on an important role due to the information it provides about high-risk diseases; this is evidenced by the need to increase health service coverage in home care, as encouraged by the WHO. Some wearable devices have been developed to monitor the ECG in which the locations of the measurement electrodes are modified with respect to the Einthoven model. However, mislocation of the electrodes on the torso can lead to modification of the acquired signals, diagnostic mistakes, and misinterpretation of the information in the signal. This work presents a volume conductor evaluation and an ECG signal waveform comparison when the electrode locations are changed, in order to find an electrode placement that reduces distortion of the signals of interest. In addition, the effects of motion artifacts and electrode location on signal acquisition are evaluated. A group of volunteers was recorded to obtain ECG signals, and the results were compared with a computational model of heart behavior through the EA ECG, DTW, and SNR methods to quantitatively determine signal distortion. It was found that as long as the Einthoven method is followed, it is possible to acquire the ECG signal from the patient's torso or back without a significant difference, and the electrode positions can be moved at most 6 cm from the location suggested by the Einthoven triangle in the Mason-Likar method.
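Of the comparison metrics named above, DTW is the least standard to compute by hand; a plain dynamic-time-warping distance is sketched below on two synthetic, time-shifted waveforms standing in for measured and simulated leads.

```python
# Plain DTW distance between two 1-D signals (O(n*m) dynamic program).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 100)
lead_ref = np.sin(2 * np.pi * t)                 # stand-in reference lead
lead_shift = np.sin(2 * np.pi * (t - 0.05))      # slightly delayed copy
print(round(dtw_distance(lead_ref, lead_shift), 3))
```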
ARTICLE | doi:10.20944/preprints202011.0310.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Home health care; Routing and scheduling; Nurse downgrading; Epsilon-constraint method; Bi-objective optimization.
Online: 10 November 2020 (12:20:43 CET)
In recent years, the management of health systems has been a main concern of governments and decision makers. Home health care is one of the newest methods of providing services to patients in developed societies, responding to the individual lifestyles of the modern age and increasing life expectancy. The home health care routing and scheduling problem is a generalized version of the vehicle routing problem, extended into a complex problem by the special features and constraints of health care. The problem involves multiple stakeholders, such as nurses, whose satisfaction level is very important to maintain. In this study, a mathematical model is developed that extends traditional home health care routing and scheduling models to downgrading cost aspects by adding the objective of minimizing the difference between the actual and potential skills of the nurses. Downgrading can lead to nurse dissatisfaction; in addition, skillful nurses have higher salaries, and high-level services increase equipment costs and require more expensive training and nursing certificates. Downgrading can therefore impose large hidden costs on the managers of a company. To solve the bi-objective model, an ε-constraint-based approach is suggested, and the applicability of the model and its ability to solve the problem at various sizes are discussed. A sensitivity analysis on the ε parameter is conducted to analyze its effect on the problem. Finally, some managerial insights are presented to help managers in this field, and some directions for future studies are mentioned as well.
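The ε-constraint idea is easy to see on a toy bi-objective linear program: minimize cost while capping a second "downgrading" objective at successively tighter ε values. The objectives and constraints below are invented for illustration, not the paper's model.

```python
# Toy epsilon-constraint sweep with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

c_cost = np.array([4.0, 3.0])         # f1: routing/operating cost
c_down = np.array([1.0, 2.0])         # f2: downgrading (skill mismatch)
A_ub, b_ub = [[-1.0, -1.0]], [-5.0]   # x1 + x2 >= 5 (demand coverage)

for eps in (10.0, 8.0, 6.0):
    res = linprog(c_cost,
                  A_ub=A_ub + [list(c_down)],   # add constraint f2 <= eps
                  b_ub=b_ub + [eps],
                  bounds=[(0, None), (0, None)])
    print(eps, res.x.round(2), round(res.fun, 2))  # cost rises as eps drops
```

Sweeping ε in this way traces out the Pareto front between the two objectives.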
ARTICLE | doi:10.20944/preprints202103.0589.v1
Subject: Chemistry And Materials Science, Biomaterials Keywords: LLDPE; quasi-static and dynamic experimental tests; impact energy absorption; material parameter identification; constitutive material model; validation; simulation
Online: 24 March 2021 (13:38:40 CET)
Current industrial trends bring new challenges for energy-absorbing systems. Polymer materials, the traditional packaging materials, seem promising due to their low weight, structure, and production price. Based on a review, linear low-density polyethylene was identified as the most promising material for absorbing impact energy. The present paper addresses the identification of the material parameters and the development of a constitutive material model to be used in future design by virtual prototyping. The paper deals with the experimental measurement of the stress-strain relations of linear low-density polyethylene under static and dynamic loading. The quasi-static measurement is performed in two perpendicular principal directions and is supplemented by a test measurement in the 45-degree direction, i.e., exactly between the principal directions. The quasi-static stress-strain curves are analyzed as an initial step toward dynamic, strain-rate-dependent material behavior. The dynamic response is tested in a drop tower using a spherical impactor hitting the flat multi-layered material specimen at two different energy levels. The strain-rate-dependent material model is identified by optimization, extending the static material response with the data obtained in the dynamic experiments. The material model is validated by virtual reconstruction of the experiments and by comparing the numerical results to the experimental ones.
REVIEW | doi:10.20944/preprints202306.1332.v1
Subject: Biology And Life Sciences, Other Keywords: Head and neck squamous cell carcinoma; natural products; phytochemicals; chemotherapeutics; chemoprevention
Online: 19 June 2023 (09:04:03 CEST)
Head and neck squamous cell carcinoma (HNSCC) is a type of cancer that arises from the epithelial lining of the oral cavity, hypopharynx, oropharynx, and larynx. Despite advances in current treatments for HNSCC, such as surgery, chemotherapy, and radiotherapy, the overall survival rate remains poor due to late diagnosis and acquired resistance to treatment. Natural products have been extensively explored as a safer and more acceptable alternative to current treatments, with numerous studies displaying their potential against HNSCC. This review highlights preclinical studies from the past 5 years involving natural products against HNSCC and explores the signalling pathways altered by these products. It also addresses challenges and future directions for the use of natural products as chemotherapeutic and chemoprevention agents against HNSCC.
ARTICLE | doi:10.20944/preprints202108.0014.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Beta-alanine; supplementation; nutrition; aerobic and anaerobic performance
Online: 2 August 2021 (10:18:40 CEST)
Ergogenic aids are used by cyclists to improve body metabolism and hemodynamic factors, with micro-supplements influencing reactions in body muscle mass and limb muscle. Muscle power development draws mainly on the fast glycolytic and short-duration oxidative energy systems. During competition, cyclists therefore use specific drinks, which must sustain them over long race times, and athletes need fluid and supplement intake to prevent the rapid metabolic breakdown of dynamic physiological performance factors. Beta-alanine supplementation can directly affect muscle performance development through anaerobic metabolism and capacity, so it should be determined how cyclists can use it during competition and training periods to increase their specific sprint and endurance race performance. This study examined cohort studies of the effects of beta-alanine supplementation on aerobic and anaerobic power output in cyclists. We searched the PubMed, Scopus, and Medline databases (initial search on 10 August 2020), assessed the quality and risk of bias of the included work, and summarized results as effect sizes (ES) with 95% confidence intervals (CI). Participants (N = 66), aged 25 to 38, used beta-alanine during training periods, and endurance muscle performance, aerobic power, anaerobic power, and sprint time trials were assessed. Beta-alanine improved anaerobic and aerobic power output in 4-week, time-dependent trial performance conditions. Significance was accepted at alpha < 0.05, with p-values from standardized pre-post analyses.
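For reference, the effect-size summary used in such syntheses is typically Cohen's d with a 95% confidence interval; the sketch below computes it on invented pre/post power values, not the review's data.

```python
# Cohen's d with an approximate 95% CI; all numbers are hypothetical.
import numpy as np

def cohens_d_ci(m1, s1, n1, m2, s2, n2):
    s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical mean anaerobic peak power (W) after vs before 4 weeks.
d, ci = cohens_d_ci(m1=1050, s1=90, n1=33, m2=1000, s2=95, n2=33)
print(round(d, 2), tuple(round(x, 2) for x in ci))
```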
ARTICLE | doi:10.20944/preprints202310.0411.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: hub covering location-routing; simultaneous delivery and pickup; Two-stage stochastic programming
Online: 9 October 2023 (09:26:45 CEST)
In this paper, a model of the hub covering location-routing problem with simultaneous delivery and pickup is designed to minimize the total costs, minimize the maximum vehicle travel time, and minimize the amount of CO2 gas emissions. Two-stage stochastic programming is used to handle the uncertain parameters of the problem. The mathematical model decides the appropriate locations of hubs and the vehicle routing for simultaneous delivery and pickup of products. The results obtained from the LP-metric method show a conflict between the objective functions: as CO2 emissions and the maximum vehicle travel time are reduced, the total costs increase. Moreover, examining the economic factor showed that reducing this factor changes the robustness factors and reduces the total network costs. MOIWO and MOALO were used to solve the problem at large scale; a significant difference was found between their average computation times, and both methods were far more efficient than the LP-metric method, with a maximum relative difference of less than 2.83%. The results of numerical examples show that MOALO is more efficient than MOIWO.
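The LP-metric method referred to above scalarizes the objectives as a weighted p-norm of relative deviations from the ideal point; a minimal sketch with placeholder ideal values and weights:

```python
# LP-metric scalarization of three objectives (placeholder numbers).
import numpy as np

def lp_metric(f, f_ideal, w, p=2):
    """Weighted relative deviation from the ideal point, p-norm."""
    rel = (np.asarray(f) - np.asarray(f_ideal)) / np.asarray(f_ideal)
    return float(np.sum(np.asarray(w) * rel**p) ** (1 / p))

# f = (total cost, max travel time, CO2 emissions) for two candidate plans.
f_ideal = [1.0e6, 12.0, 500.0]
for f in ([1.2e6, 14.0, 620.0], [1.1e6, 16.0, 540.0]):
    print(round(lp_metric(f, f_ideal, w=[0.4, 0.3, 0.3]), 4))
```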
ARTICLE | doi:10.20944/preprints201812.0203.v1
Subject: Chemistry And Materials Science, Organic Chemistry Keywords: peptide/RNA world; prebiotic information system; translation and the genetic code; coevolution of translation machine and the genetic code; MVC architecture pattern and biological information; numerical codons; AnyLogic software for computer simulation of translation machine
Online: 17 December 2018 (16:03:16 CET)
Information is the currency of life, but the origin of prebiotic information remains a mystery. We propose transitional pathways from the cosmic building blocks of life to the complex prebiotic organic chemistry that led to the origin of information systems. The prebiotic information system, specifically the genetic code, is segregated, linear, and digital, and probably appeared during biogenesis four billion years ago. In the peptide/RNA world, lipid membranes randomly encapsulated amino acids, RNA, and protein molecules, drawn from the prebiotic soup, to initiate a molecular symbiosis inside the protocells. This endosymbiosis led to the hierarchical emergence of several requisite components of the translation machine: tRNAs, aaRS, mRNAs, and ribosomes. When assembled in the right order, the translation machine created biosynthetic polypeptides, a process that transferred information from mRNAs to proteins. This was the beginning of the prebiotic information age. The molecular attraction between tRNA and amino acids led to different stages of the translation machines and the genetic code. tRNA is an ancient molecule that designed and built mRNA for storing the information of its cognate amino acid. Each mRNA strand became the storage device for the genetic information that encoded the amino acid sequences in triplet nucleotides. As information appeared in the digital language of the codons within mRNA, and the genetic code for protein synthesis evolved, prebiotic chemistry became more organized and directional. The origin of the genetic code is enigmatic; herein we propose an evolutionary explanation: the demand for a wide range of specific enzymes in the peptide/RNA world was the main selective pressure for the origin of information-directed protein synthesis. We review three main concepts on the origin and evolution of the genetic code: the stereochemical theory, the coevolution theory, and the adaptive theory. These three theories are compatible with our coevolution model of the translation machines and the genetic code. We suggest biosynthetic pathways as the origin of the specific translation machines, which provided the framework for the origin of the genetic code. During translation, the genetic code developed in three stages coincident with the refinement of the translation machines: the GNC code developed by the pre-tRNA/pre-aaRS/pre-mRNA machine, the SNS code by the tRNA/aaRS/mRNA machine, and finally the universal genetic code by the tRNA/aaRS/mRNA/ribosome machine. Our hypothesis provides the logical and incremental steps for the origin of programmed protein synthesis. In order to understand the prebiotic information system better, we converted letter codons into numerical codons in the Universal Genetic Code Table. We have developed software called CATI (Codon-Amino Acid-Translator-Imitator) to translate randomly chosen numerical codons into corresponding amino acids and vice versa. This conversion has granted us insight into how translation might have worked in the peptide/RNA world. There is great potential in the application of numerical codons to bioinformatics, such as barcoding, DNA mining, or DNA fingerprinting. We constructed the likely biochemical pathways for the origin of translation and the genetic code using the Model-View-Controller (MVC) software framework, and built the translation machinery step by step. Using AnyLogic software, we were able to simulate and visualize the entire evolution of the translation machines and the genetic code.
The results indicate that the emergence of the information age from the peptide/RNA world was a watershed event in the origin of life about four billion years ago.
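A toy version of the letter-to-numerical codon conversion is shown below; the base-4 digit assignment (U=0, C=1, A=2, G=3) and the four-entry table are assumptions for illustration, since the CATI software described above may use a different numbering.

```python
# Toy numerical-codon translator (tiny subset of the 64-codon table).
DIGIT = {"U": 0, "C": 1, "A": 2, "G": 3}              # assumed mapping
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "Stop"}

def to_number(codon):
    """Encode a triplet as a base-4 integer, e.g. AUG -> 2*16 + 0*4 + 3."""
    a, b, c = (DIGIT[x] for x in codon)
    return 16 * a + 4 * b + c

for codon, aa in CODON_TABLE.items():
    print(codon, to_number(codon), aa)                # e.g. AUG 35 Met
```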
ARTICLE | doi:10.20944/preprints202312.0054.v1
Subject: Medicine And Pharmacology, Oncology And Oncogenics Keywords: Ex Vivo Resistance Platform; CAF-mediated Resistance; Combinations of cytotoxic and targeted drugs
Online: 1 December 2023 (07:57:41 CET)
Drug resistance in tumor cells is a significant roadblock in the clinical management of advanced or recurrent disease in endometrial cancers. As part of the tumor-stromal ecosystem, tumor cells are ecologically connected to the cancer-associated fibroblasts (CAFs), which form contributory elements of the tumor microenvironment (TME). We recently reported a novel model of patient-derived CAF-based 2-cell hybrid co-culture (HyCC) to evaluate the role of CAFs in the development of drug resistance and to understand the personalized tumor cell-CAF dialogue. Using our 2-cell HyCC model of patient-derived endometrial CAFs, we present data demonstrating the direct counter-inhibitory effect of CAFs against combinations of cytotoxic and targeted drugs in the development of drug resistance. CAFs derived from resected endometrial tumor samples were first characterized passage-wise, before and after freeze-thaw, for their expression patterns of positive and negative CAF markers. Paclitaxel and its combinations with copanlisib, TAK228, lenvatinib, and trametinib were used to test the 3D clonogenic growth of endometrial AN3CA cells on endometrial CAFs. We demonstrate that CAF-mediated resistance to antitumor drugs occurs via direct and indirect contact with CAFs in HyCC. Our data establish the strength of the 2-cell HyCC model of patient-derived CAFs in solid tumors and provide a model of resistance to antitumor drugs tailored on a patient-to-patient basis.
ARTICLE | doi:10.20944/preprints202107.0314.v1
Subject: Business, Economics And Management, Accounting And Taxation Keywords: Small and Marginal farmers, Minimum Support Price (MSP), Agriculture, Economics, Poverty, Agriculture Land-holdings
Online: 14 July 2021 (09:55:38 CEST)
The rural population percentage decreased from 82.7% to 68.9% in 2011, even though the total rural population increased to 833.7 million, now more than three times the population of seven decades ago. Another observation is the decrease in the percentage of cultivators from 71.9% to 45.1%, while agricultural labour increased from 28.1% to 54.9% during the same period. Despite the increase in irrigated land and net area sown, the average size of farmers' holdings is continuously decreasing, and the reasons require study. This research probes the role of the Minimum Support Price (MSP) in supporting farmers and measures the market price above MSP needed to keep marginal and small farmers above the poverty level. It explains how different market rates above MSP have different impacts on different categories of agricultural landholding. The study develops a common model that relates the impact of MSP to different farmer categories. The model can be generalized to all crops and regions and is useful in designing policies that focus on uplifting the income of agricultural farmers.
ARTICLE | doi:10.20944/preprints201709.0087.v1
Subject: Engineering, Civil Engineering Keywords: physical model test; rock joint; strata and surface movement; final slope mining; surface settlement
Online: 19 September 2017 (07:30:14 CEST)
Strata and surface movement induced by mining under an open-pit final slope is a huge threat to mine safety. The physical model test is an important method to study mining-induced strata and surface movement laws. Because rock joints predominantly control rock mass deformation and failure, a physical model test that leaves rock joints out of consideration can hardly reflect their influence on rock mass deformation. Therefore, this paper presents a three-dimensional physical model test considering simplified dominant rock joints. The test process includes the design of the testing equipment, the construction of a physical model with dominant rock joint sets, the conduct of mining, and deformation monitoring. Mining under the eastern final slope of the Yanqianshan iron mine was selected as a case to study the behavior of mining-induced strata and surface movement.
ARTICLE | doi:10.20944/preprints202101.0145.v1
Subject: Physical Sciences, Acoustics Keywords: General theory of Relativity; Bianchi Type I model; Isotropic and Anisotropic cosmology; Perfect fluid; Fluid mechanics; Quintessence model; cosmological inflation; Viscosity; Gravitational physics
Online: 8 January 2021 (11:19:51 CET)
We propose the Hamiltonian formalism of the Bianchi type I cosmological model for a perfect fluid. We consider both the equation of state parameter ω and the cosmological constant Λ as functions of the volume V(t), defined as the product of the three scale factors of the Bianchi type I line element. We propose a Lagrangian for the anisotropic Bianchi type I model in the form of a variable mass moving in a variable potential. The anisotropic expansion can be decomposed into expansion and shearing motion through the Lagrangian mechanism. We consider a canonical transformation from the expanding scale factor to a scalar field ø, which gives a proper classical definition of that scalar field in terms of the scale factors of the model. This definition helps explain cosmological inflation. Assuming large anisotropy (as in the Bianchi models), we prove that cosmic inflation is not possible under such large anisotropy. We therefore conclude that the extent of anisotropy in our universe must be small; otherwise, the inflation theory that resolved the limitations of the Big Bang could not hold. Part II contains some analysis of the Lagrangian derived in Part I, applied to the quintessence model.
ARTICLE | doi:10.20944/preprints202309.0267.v1
Subject: Business, Economics And Management, Business And Management Keywords: Digitization; Spatial and urban planning; eSpace; ePlan; Geospatial Data; Data pro-duction; Data distribution; Value co-creation
Online: 5 September 2023 (08:10:22 CEST)
The introduction of digitization has changed all spheres of business at the global level, including geospatial data. In the Republic of Serbia, the process should begin with the introduction of the terms ePlan and eSpace. The general goal of the paper is the construction and implementation of the ePlan as a future part of the eSpace for digital management of geospatial data, through value co-creation. For this purpose, the authors conducted structured online research in which representatives of local self-governments and holders of public authority participated. The focus was on the digitization of urban and planning documents and the establishment of a central database of spatial and planning documents in electronic format, along with its further distribution through one system. Easy access to digital plan data expands the user community and enables communication with different stakeholder groups. According to the results of the research, the authors point out that it is necessary to form a new model for managing geospatial data through an eSpace system. This is achieved by shifting the focus from urban and spatial planning to other databases and registers related to geospatial data. The paper indicates that it is necessary to raise the awareness of society and to introduce the concept of value co-creation, because this creates the conditions for implementing all measures aimed at digitization and the management of electronic services in a sustainable project society, in all countries.
ARTICLE | doi:10.20944/preprints201810.0127.v1
Subject: Environmental And Earth Sciences, Other Keywords: hierarchical origin of life; RNA/protein world; biological information system; translation and the genetic code; coevolution of translation machine and the genetic code; MVC architecture pattern and biological information; AnyLogic software for computer simulation of translation machine
Online: 8 October 2018 (05:33:22 CEST)
The Late Heavy Bombardment period (4.1 to 3.8 billion years ago) of heightened impact cratering activity on the young Earth is likely the driving force for the origin of life. During the Eoarchean, asteroids such as carbonaceous chondrites delivered the building blocks of life and water to the early Earth. Asteroid collisions created innumerable hydrothermal crater lakes in the Eoarchean crust, which inadvertently became the perfect cradle for prebiotic chemistry. These hydrothermal crater lakes were filled with cosmic water and the building blocks of life, forming a thick prebiotic soup. The unique combination of the exogenous delivery of extraterrestrial building blocks of life and endogenous biosynthesis in hydrothermal impact crater lakes very likely gave rise to life. A new symbiotic model for the origin of life within the hydrothermal crater lakes is here proposed. In this scenario, life arose around four billion years ago through five hierarchical stages of increasing molecular complexity: cosmic, geologic, chemical, information, and biological. During the prebiotic synthesis, membranes first appeared in the hydrothermal crater lakes, followed by the simultaneous origin of RNA and protein molecules, creating the RNA/protein world. These proteins were noncoded protein enzymes that facilitated chemical reactions. RNA molecules formed in the hydrothermal crater basin by polymerization of the nucleotides on the montmorillonite mineral substrate. Similarly, the initial synthesis of abiotic protein enzymes was mediated by the condensation of amino acids on pyrite surfaces. The regular wet-dry cycles within the crater lakes assisted further concentration, condensation, and polymerization of the RNAs and proteins. Lipid membranes randomly encapsulated amino acids, RNA, and protein molecules from the prebiotic soup to initiate a molecular symbiosis inside the protocells; this led to the hierarchical emergence of several cell components. As the role of protein enzymes became essential for catalytic processes in the RNA/protein world, Darwinian selection from noncoded to coded protein synthesis led to translation systems and the genetic code, heralding the information stage. In this stage, the biochemical pathways suggest the successive emergence of translation machineries such as tRNAs, aaRS, mRNAs, and ribosomes for protein synthesis. The molecular attraction between tRNA and amino acid led to the emergence of the translation machinery and the genetic code. tRNA is an ancient molecule that created mRNA for the purpose of storing amino acid information like a digital strip. Each mRNA strand became the storage device for genetic information that encoded the amino acid sequences in triplet nucleotides. As information became available in the digital language of the codons within mRNA, biosynthesis became less random and more organized and directional. The original translation machinery, before the emergence of the ribosome, was simpler than that of today. We review three main concepts on the origin and evolution of the genetic code: the stereochemical theory, the coevolution theory, and the adaptive theory. We believe that these three theories are not mutually exclusive but are compatible with our coevolution model of the translation machines and the genetic code. We suggest biosynthetic pathways as the origin of the translation machine that provided the framework for the origin of the genetic code.
During translation, the genetic code developed in three stages coincident with the refinement of the translation machinery: the GNC code, with four codons and four amino acids, developed during interactions of pre-tRNA/pre-aaRS/pre-mRNA; the SNS code, consisting of 16 codons and 10 amino acids, appeared during the tRNA/aaRS/mRNA interaction; and finally the universal genetic code evolved with the emergence of the tRNA/aaRS/mRNA/ribosome machine. The universal code consists of 64 codons and 20 amino acids, with a redundancy that minimizes errors in translation. To address the question of the origin of the biological information system in the RNA/protein world, we converted letter codons into numerical codons in the Universal Genetic Code Table. We developed software called CATI (Codon-Amino Acid-Translator-Imitator) to translate randomly chosen numerical codons into corresponding amino acids and vice versa, gaining insight into how translation might have worked in the RNA/protein world. We simulated the likely biochemical pathways for the origin of translation and the genetic code using the Model-View-Controller (MVC) software framework, building the translation machinery step by step. We used AnyLogic software to simulate and visualize the evolution of the translation machines and the genetic code. We conclude that the emergence of the information age from the RNA/protein world was a watershed event in the origin of life about four billion years ago.