ARTICLE | doi:10.20944/preprints202208.0177.v1
Subject: Engineering, Other Keywords: model-based system engineering (MBSE); model-based systems architecting (MBSA); model-based pattern language (MBPL); system architecture; logical architecture; SysML patterns; pattern library; systems engineering (SE); pattern language; logical decomposition
Online: 9 August 2022 (09:26:54 CEST)
This paper presents an approach that applies Model-Based Systems Engineering (MBSE) and Model-Based Systems Architecting (MBSA) principles to develop a Model-Based Pattern Language (MBPL). Developing a new system from scratch takes systems engineers and architects a long time, particularly for new space-based systems derived from existing space system architectures. A pattern language offers a holistic view of reusable logical model artifacts, many of which are interdisciplinary; combining application-specific logical solutions from such patterns can yield the best possible overall solution. The main benefit of the pattern language is reducing the time and validation effort required to generate a new space-based system architecture; the approach also supports developing top-level requirements in the initial phase of system development. The methodology proposed in the paper is as follows: collect and decompose published literature and other open-source information on space system architectures and system models; develop SysML models at the system, subsystem, product, assembly, and subassembly levels, together with mission-specific requirements, using the CAMEO SysML software; and arrange these patterns into a functional ontology and a logical architecture pattern library. With this approach, a SysML pattern language was created, updated, and managed, and its ability to expedite new model construction was evaluated. Our objective is to develop a logical pattern language from public-domain information and to evaluate the patterns by constructing a new space mission concept, for example a planetary surface habitat.
Subject: Engineering, Control & Systems Engineering Keywords: Model-based systems engineering (MBSE); Model informatics and analytics; Model-based collaboration
Online: 12 March 2021 (16:52:34 CET)
In MBSE there is as yet no converged terminology, and the term 'system model' is used in different contexts in the literature. In this study we examined the definitions and usages of the term 'system model' in order to find a common definition. 104 publications were analyzed in depth for their usage and definition of the term, as well as their metadata (e.g., publication year and publication background), to find common patterns. While the term has gained interest in recent years, it is used in a broad range of contexts for both analytical and synthetic use cases. Based on this, three categories of system models were defined and integrated into a more precise definition.
ARTICLE | doi:10.20944/preprints202201.0455.v1
Subject: Engineering, Energy & Fuel Technology Keywords: energy modeling; biomass transformation efficiency; global change assessment model; integrated assessment model; cooking fuel
Online: 31 January 2022 (12:45:00 CET)
The building sector of most tropical countries still uses predominantly primary biomass as the principal fuel. This has adverse effects such as CO2 emissions and deforestation and is associated with poverty, ill health, and a low standard of living. Energy policies therefore try to improve the efficiency of firewood and charcoal end-use technologies to mitigate these negative effects. In this research, the Global Change Assessment Model (GCAM) is used to investigate the impact of efficiency improvement on the energy consumption pattern of the building sector in developing countries. The aim of the study is to provide empirical data that would better inform policymakers on the effects of modernizing these primary fuels. The study developed three scenarios with different levels of efficiency improvement. The results show that efficiency improvement actually increases primary biomass consumption and CO2 emissions, although consumption of traditional biomass falls in the second half of the modelling period. The increase in consumption of biomass-based fuels was linked to their affordability. Policymakers therefore need not only to develop policies that improve biomass efficiency but also to introduce and promote other clean cooking fuels such as butane, biogas, and electricity.
ARTICLE | doi:10.20944/preprints202005.0171.v2
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: COVID-19; coronavirus; case-based reasoning; ontology; natural language processing
Online: 15 June 2020 (11:16:23 CEST)
Coronavirus disease, known as COVID-19, has been declared a pandemic by the World Health Organization (WHO). At the time of this study, over 1.6 million cases had been recorded and more than 105,000 people had died, with these figures rising daily across the globe. The burden of this highly contagious respiratory disease is that it presents both symptomatically and asymptomatically in those already infected, leading to an exponential rise in infections and fatalities. It is therefore crucial to expedite early detection and diagnosis of the disease across the world. Case-based reasoning (CBR) is an effective paradigm that reuses specific knowledge from previously experienced, concrete problem situations, such as specific patient cases, to solve new cases. This study therefore aims to leverage the rich database of COVID-19 cases to solve new ones. The approach adopted in this study employs an improved CBR model for the classification of suspected COVID-19 cases. The CBR model builds on a novel feature-selection and semantic-based mathematical model, proposed in this study, for case similarity computation. The case archive was initially populated with 68 cases obtained from the Italian Society of Medical and Interventional Radiology (SIRM) repository. The results revealed that the proposed approach classified suspected cases into their categories with an accuracy of 97.10%. The study found that the proposed model can support physicians in diagnosing suspected COVID-19 cases from their medical records without subjecting specimens to laboratory tests. As a result, the contagion rate occasioned by slow testing can be minimized globally, and the false positive rates of diagnosed cases observed in some parts of the globe reduced.
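The retrieve-and-reuse step at the heart of a CBR loop can be sketched minimally as follows. This is a hypothetical illustration with invented binary symptom features and plain cosine similarity, not the paper's semantic similarity model: each archived case is a feature vector with a label, and a new case reuses the label of its most similar neighbour.

```python
# Minimal CBR retrieve-and-reuse sketch (illustrative features and labels).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(case_base, query):
    """Return (label, similarity) of the most similar archived case."""
    best = max(case_base, key=lambda c: cosine(c["features"], query))
    return best["label"], cosine(best["features"], query)

# Toy archive: binary symptom features (fever, cough, dyspnoea, anosmia).
archive = [
    {"features": [1, 1, 1, 1], "label": "confirmed"},
    {"features": [0, 1, 0, 0], "label": "suspected"},
    {"features": [0, 0, 0, 0], "label": "negative"},
]
label, sim = retrieve(archive, [1, 1, 0, 1])
```

A real system would weight features (as the paper's feature-selection model does) and adapt the retrieved solution rather than reuse it verbatim.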
ARTICLE | doi:10.20944/preprints202209.0309.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: machine learning; natural language processing; commit messages; change prediction model
Online: 20 September 2022 (14:52:49 CEST)
Version control and source code management systems such as GitHub contain large amounts of unstructured historical information about software projects. Recent studies have introduced Natural Language Processing (NLP) to help software engineers retrieve information from very large collections of unstructured data. In this study, we extend our previous work by enlarging our datasets and the set of ML and clustering techniques. Method: We followed a multi-step methodology. Starting from the raw commit messages, we employed NLP techniques to build a structured database, extracted the main features, and used them as input to different clustering algorithms. Once each entry was labelled, we applied supervised machine learning techniques to build a prediction and classification model. Results: We developed a machine learning-based model that automatically classifies the commit messages of a software project. Our model exploits a ground-truth dataset of commit messages obtained from various GitHub projects belonging to the HEP context. Conclusions: The contribution of this paper is two-fold: it proposes a ground-truth database, and it provides a machine learning prediction model; together they automatically identify the most change-prone areas of code. Our model obtained very high average precision, recall, and F1-score.
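The pipeline described above (messages to features to labels to classifier) can be sketched with the standard library alone. This is an assumed simplification, not the study's actual method: keyword rules stand in for the clustering step, bag-of-words vectors stand in for the NLP features, and a nearest-centroid rule stands in for the supervised model.

```python
# Stdlib-only sketch of a commit-message classification pipeline.
from collections import Counter
import math

def bow(msg):
    """Bag-of-words features for one commit message."""
    return Counter(msg.lower().split())

def label(msg):
    """Keyword rules standing in for the clustering/labelling step."""
    text = msg.lower()
    if "fix" in text or "bug" in text:
        return "corrective"
    if "add" in text or "implement" in text:
        return "feature"
    return "other"

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(messages):
    """One bag-of-words centroid per label."""
    centroids = {}
    for m in messages:
        centroids.setdefault(label(m), Counter()).update(bow(m))
    return centroids

def classify(centroids, msg):
    v = bow(msg)
    return max(centroids, key=lambda k: cosine(centroids[k], v))

history = ["fix crash in parser", "add unit tests",
           "implement new cache", "bug in login fixed"]
model = train(history)
```

The real study uses richer features and proper clustering and ML libraries; the sketch only shows how labelled history turns new messages into predictions.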
ARTICLE | doi:10.20944/preprints201907.0223.v1
Subject: Engineering, Civil Engineering Keywords: Physical model; chute structure; scour hole; divergence; downstream
Online: 19 July 2019 (10:15:33 CEST)
The effects of the divergence of chute sidewalls, with three different baffle block geometries (standard USBR, trihedral, and semicircular blocks), on the depth and dimensions of the scour hole downstream of the chute were studied using a physical model. For this purpose, 9 baffled chute models were designed and constructed with divergence ratios of 1.45, 1.75, and 2.45, and without divergence (a divergence ratio of 1). Comparing the results on the effect of block geometry at different divergence ratios revealed that using the blocks proposed in this study instead of standard USBR blocks reduced the mean and maximum scour hole dimensions by 50%. For a given block geometry, the mean depth, maximum depth, and length of the scour hole were reduced by 75%, 58%, and 50%, respectively.
ARTICLE | doi:10.20944/preprints201912.0043.v1
Subject: Social Sciences, Education Studies Keywords: socio-educational resilience; protective factors; risk factors; multilevel logistic model; SIMCE
Online: 4 December 2019 (08:00:35 CET)
Framed in the context of an emerging economy with a high percentage of school failure, this study aimed to identify the factors that turn students with socioeconomic disadvantages into resilient students. Two questions guided the research: Can resilience be fostered in students in adverse socioeconomic situations? What factors influence building resilient students? A cross-sectional study was carried out from 2011 to 2015 in Chile using a three-level multilevel logistic regression model that accounts for the hierarchical data structure. The behavior of 63,100 to 76,400 sampled students was analyzed. The results show five relevant factors in building resilience: self-efficacy in language, fewer aggressions and less violence perceived by students, the norms and values of the establishment, interest in study, and self-efficacy. Risk factors identified included an atmosphere of less respect and trust, and engagement in physical education activities together with good performance in carrying them out. These results could orient educational leaders interested in supporting the educational community in improving the academic success of disadvantaged students.
ARTICLE | doi:10.20944/preprints202001.0032.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: model based diagnosis; applications; diagnosis; physiotherapy; education
Online: 4 January 2020 (06:34:25 CET)
Many physiotherapy treatments begin with a diagnosis process. The patient describes symptoms, upon which the physiotherapist decides which tests to perform until a final diagnosis is reached. The relationships between anatomical components are too complex to keep in mind, and the possible actions are abundant. A trainee physiotherapist with little experience naively applies multiple tests to reach the root cause of the symptoms, which is a highly inefficient process. This work proposes to assist students with this challenge through three main contributions: (1) a formulation of the neuromuscular system as components of a system in a Model-Based Diagnosis problem; (2) PhysIt, an AI-based tool that provides interactive visualization and diagnosis to assist trainee physiotherapists; and (3) an empirical evaluation comprising a performance analysis and a user study. The performance analysis is based on the evaluation of simulated cases and common scenarios taken from anatomy exams. The user study evaluates the efficacy of the system in assisting students at the beginning of their clinical studies. The results show that our system significantly decreases the number of candidate diagnoses without discarding the correct diagnosis, and that students in their clinical studies find PhysIt helpful in the diagnosis process.
ARTICLE | doi:10.20944/preprints202007.0306.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: COVID-19; Linear and non-Linear trend; Spline function; Autoregressive Time series model; Bayesian inference
Online: 14 July 2020 (11:41:59 CEST)
A vast majority of countries are facing economic and health crises due to the current epidemic of coronavirus disease 2019 (COVID-19). The present study analyzes COVID-19 using time series, an essential tool for understanding the growth of the infection and its changing behavior, especially its trend. We consider an autoregressive model with a non-linear time trend component that is approximately converted into a linear trend using a spline function. The spline function splits the COVID-19 series into piecewise segments between respective knots and fits a linear time trend to each segment. We first obtain the number of knots and their locations in the COVID-19 series, and then estimate the parameters of the best-fitting model under a Bayesian setup. The results suggest that the proposed methodology is a useful procedure for converting the non-linear time trend of new coronavirus cases into a linear pattern for various countries in the COVID-19 pandemic.
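The trend idea can be illustrated with a small sketch: a curve whose slope changes is fit as a piecewise linear trend using truncated "hinge" basis functions at chosen knots. The knot location and data here are invented for illustration, and plain least squares stands in for the paper's Bayesian estimation.

```python
# Piecewise-linear trend via hinge basis functions (illustrative data).
import numpy as np

def spline_design(t, knots):
    """Design matrix: intercept, linear trend, and one hinge per knot."""
    cols = [np.ones_like(t, dtype=float), t.astype(float)]
    cols += [np.maximum(t - k, 0.0) for k in knots]
    return np.column_stack(cols)

def fit_piecewise_trend(t, y, knots):
    X = spline_design(t, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

t = np.arange(20)
# Synthetic series: slope 1 before t = 10, slope 3 afterwards.
y = np.where(t < 10, t, 10 + 3 * (t - 10)).astype(float)
beta = fit_piecewise_trend(t, y, knots=[10])
# beta = [intercept, baseline slope, slope change after the knot].
```

With the hinge at the true break point, the fit recovers the baseline slope of 1 and the slope increase of 2 exactly; the paper's contribution is choosing the number and location of such knots and estimating the coefficients in a Bayesian setup.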
ARTICLE | doi:10.20944/preprints202112.0021.v1
Subject: Earth Sciences, Environmental Sciences Keywords: bioeconomy; footprint analysis; land use modelling; Multi-Regional Input-Output (MRIO) model; land conversion; biodiversity; ecosystem functions
Online: 1 December 2021 (18:08:00 CET)
Footprints are powerful indicators for evaluating the impact of a country's bioeconomy on environmental goods, domestically and abroad. In this study, we apply a hybrid approach combining a Multi-Regional Input-Output model and land use modelling to compute the agricultural land footprint (aLF). Furthermore, we added information on land-use change to the analysis and allocated land conversion to specific commodities. The German case study shows that the aLF abroad is larger by a factor of 2.5 to 3 than the aLF in Germany. In 2005 and 2010, conversion of natural and semi-natural land-cover types abroad allocated to Germany due to import increases was 2.5 times higher than the global average. Import increases to Germany slowed down in 2015 and 2020, reducing land conversion attributed to the German bioeconomy to the global average. The case study shows that the applied land footprint provides clear and meaningful information for policymakers and other stakeholders. The presented methodological approach can be applied to other countries and regions covered in the underlying EXIOBASE database, and can also be adapted to assess other ecosystem functions, such as water or soil fertility.
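The input-output mechanics behind such a footprint can be shown with a toy two-sector example (all numbers invented, not EXIOBASE data): total output satisfies x = (I - A)^-1 y for technical coefficients A and final demand y, and the land footprint applies per-unit land intensities to that output.

```python
# Toy two-sector Leontief footprint calculation (illustrative numbers).
def leontief_inverse_2x2(A):
    """(I - A)^-1 for a 2x2 coefficient matrix, by the adjugate formula."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[0.1, 0.2],             # inputs of each sector per unit of output
     [0.05, 0.1]]
y = [100.0, 50.0]            # final demand by sector
land_intensity = [0.3, 0.8]  # hectares per unit of output

L = leontief_inverse_2x2(A)
x = [L[0][0] * y[0] + L[0][1] * y[1],   # total output needed for demand
     L[1][0] * y[0] + L[1][1] * y[1]]
footprint = sum(e * xi for e, xi in zip(land_intensity, x))
```

The footprint (87.5 ha here) exceeds the direct land use of final demand because the Leontief inverse also counts land embodied in intermediate inputs; the MRIO version does the same across regions, which is what lets the study separate the aLF abroad from the aLF in Germany.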
ARTICLE | doi:10.20944/preprints201703.0124.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: metabolic flux analysis; model misspecification; constraint-based model; stoichiometric model; Chinese hamster ovary cell culture
Online: 16 March 2017 (17:38:36 CET)
Background: Metabolic flux analysis (MFA) is an indispensable tool in metabolic engineering. The simplest variant of MFA relies on an overdetermined stoichiometric model of the cell's metabolism under the pseudo-steady-state assumption to evaluate the intracellular flux distribution. Despite its long history, the issue of model error in overdetermined MFA, particularly misspecification of the stoichiometric matrix, has not received much attention. Method: We evaluated the performance of statistical tests from linear least squares regression, namely the Ramsey RESET test, the F-test, and the Lagrange multiplier test, in detecting model misspecifications in overdetermined MFA, particularly missing reactions. We further proposed an iterative procedure using the F-test to correct such issues. Result: Using Chinese hamster ovary and random metabolic networks, we demonstrated that: (1) a statistically significant regression does not guarantee high accuracy of the flux estimates; (2) the removal of a reaction with a low flux magnitude can cause disproportionately large biases in the flux estimates; (3) the F-test can efficiently detect missing reactions; and (4) the proposed iterative procedure can robustly resolve the omission of reactions. Conclusion: Our work demonstrated that statistical analysis and tests can be used to systematically assess, detect, and resolve model misspecifications in overdetermined MFA.
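A toy overdetermined MFA problem makes the setting concrete (network and numbers invented for illustration): the pseudo-steady-state balance S v = 0 on internal metabolites is stacked with noisy flux measurements, least squares yields the flux estimates, and the residual imbalance is the quantity the statistical tests above interrogate.

```python
# Toy overdetermined MFA: one internal metabolite, three measured fluxes.
import numpy as np

# Fluxes: v1 (uptake A->B), v2 (B->C), v3 (B->D).
# Pseudo-steady-state balance on internal metabolite B: v1 - v2 - v3 = 0.
S = np.array([[1.0, -1.0, -1.0]])

# Noisy measurements of all three fluxes (true values 10, 6, 4).
measured = np.array([10.1, 5.9, 4.1])

# Stack the balance equation (= 0) with the measurement equations and
# solve the overdetermined system by least squares.
X = np.vstack([S, np.eye(3)])
b = np.concatenate([[0.0], measured])
v_hat, *_ = np.linalg.lstsq(X, b, rcond=None)

# Residual imbalance: nonzero because the measurements are inconsistent
# with the balance; its magnitude is what model-error tests examine.
imbalance = v_hat[0] - v_hat[1] - v_hat[2]
```

If a true reaction consuming B were missing from S, this imbalance would grow systematically rather than reflect mere measurement noise, which is the signature the paper's F-test-based procedure exploits.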
TECHNICAL NOTE | doi:10.20944/preprints202103.0116.v2
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: DAPT; workflow; agent-based modeling; model exploration; crowdsourcing
Online: 10 May 2021 (09:47:54 CEST)
Modern agent-based models (ABMs) and other simulation models require evaluation and testing of many different parameters. Managing that testing for large-scale parameter sweeps (grid searches), as well as storing the simulation data, requires multiple, potentially customizable steps that may vary across simulations. Furthermore, parameter testing, processing, and analysis are slowed if simulation and processing jobs cannot be shared across teammates or computational resources. While high-performance computing (HPC) has become increasingly available, models can often be tested faster by combining multiple ordinary computers with HPC resources. To address these issues, we created the Distributed Automated Parameter Testing (DAPT) Python package. By hosting parameters in an online (and often free) "database", multiple individuals can run parameter sets simultaneously in a distributed fashion, enabling ad hoc crowdsourcing of computational power. Combining this with a flexible, scriptable tool set, teams can evaluate models and assess their underlying hypotheses quickly. Here we describe DAPT and provide an example demonstrating its use.
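The core idea, a shared pool of parameter sets claimed by independent workers, can be sketched with the standard library. This is not the actual DAPT API: a local queue stands in for the online database, threads stand in for teammates' machines, and `run_model` is a placeholder for a real simulation.

```python
# Sketch of distributed parameter testing: workers claim parameter sets
# from a shared pool until it is empty (queue stands in for the database).
import queue
import threading

def run_model(params):
    """Placeholder for an ABM/simulation run."""
    return params["a"] * params["b"]

def worker(tasks, results, lock):
    while True:
        try:
            params = tasks.get_nowait()   # claim the next parameter set
        except queue.Empty:
            return
        out = run_model(params)
        with lock:                        # shared result store
            results.append((params, out))

tasks = queue.Queue()
for a in (1, 2, 3):
    for b in (10, 20):
        tasks.put({"a": a, "b": b})       # 3x2 grid search

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every parameter set is evaluated exactly once regardless of how many workers run, which is what lets additional machines join a sweep ad hoc.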
ARTICLE | doi:10.20944/preprints201703.0027.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Fischer-Tropsch synthesis; kinetics model; cobalt based catalyst
Online: 6 March 2017 (06:47:14 CET)
A comprehensive kinetic model of the Fischer-Tropsch synthesis (FTS) is developed for a fixed-bed reactor under typical operating conditions (temperature 230–235 °C, pressure 20–25 bar, gas hourly space velocity 4000–5000 cm3(STP)/h/gcatalyst, H2/CO feed molar ratio 2.1) over a Co-based catalyst. Reaction rate equations based on an Eley-Rideal (ER) type model for the initiation step and a Langmuir-Hinshelwood-Hougen-Watson (LHHW) type model for the propagation and termination steps of the FTS reactions were considered, and the readsorption of olefins was taken into account. A model reported in the literature was modified in order to explain many significant deviations from the Anderson-Schulz-Flory (ASF) distribution. Optimum parameters were obtained by a Genetic Algorithm (GA). The calculated activation energies for producing n-paraffins and 1-olefins were in the ranges 82.24 to 90.68 kJ/mol and 100.66 to 105.24 kJ/mol, respectively. The hydrocarbon distribution in the FTS reactions was satisfactorily predicted, particularly for paraffins.
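The deviations mentioned above are measured against the ideal Anderson-Schulz-Flory (ASF) distribution, which for a single chain-growth probability alpha gives the mass fraction of chains of length n in closed form:

```python
# Ideal ASF mass fraction: w_n = n * (1 - alpha)^2 * alpha^(n - 1).
def asf_mass_fraction(n, alpha):
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

# Mass fractions sum to 1 over all chain lengths (tail is negligible here).
alpha = 0.9
total = sum(asf_mass_fraction(n, alpha) for n in range(1, 2000))
```

Real FTS product spectra, with enhanced methane, depressed C2, and olefin readsorption effects, depart from this single-alpha line on a log plot, which is why the kinetic model above modifies the literature model rather than assuming the ideal distribution.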
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Model-Based Systems Engineering; Category Theory; Object-Process Methodology; Model Analytics; Concept-Model-Graph-View-Concept; Graph Data Structures; Graph Query; Decision Support Matrix; Matrix-Based Analysis
Online: 18 February 2021 (12:27:50 CET)
We introduce the Concept-Model-Graph-View Cycle (CMGVC). The CMGVC facilitates coherent architecture analysis, reasoning, insight, and decision-making based on conceptual models that are transformed into a generic, robust graph data structure (GDS). The GDS is then transformed into multiple views of the model, which inform stakeholders in various ways. This GDS-based approach decouples the view from the model and constitutes a powerful enhancement of model-based systems engineering (MBSE). The CMGVC applies the rigorous foundations of Category Theory, a mathematical framework of representations and transformations. We show that modeling languages are categories, drawing an analogy to programming languages. The CMGVC architecture is superior to direct transformations and language-coupled common representations. We demonstrate the CMGVC to transform a conceptual system architecture model built with the Object Process Modeling Language (OPM) into dual graphs and a stakeholder-informing matrix that stimulates system architecture insight.
ARTICLE | doi:10.20944/preprints202107.0259.v1
Subject: Engineering, Automotive Engineering Keywords: Driveability; low-frequency; energy path analysis; powertrain; model-based engineering
Online: 12 July 2021 (12:21:24 CEST)
Vehicle driveability is one of the important vehicle attributes in range-extender electric vehicles owing to the electric motor's torque characteristics at low-speed events. Validating and rectifying vehicle driveability attributes typically relies on a physical vehicle prototype, which can be expensive and require several design iterations. In this paper, a model-based energy method to assess vehicle driveability is presented, based on a high-fidelity 49 degree-of-freedom model of the powertrain and vehicle systems. Multibody dynamics components were built according to their true centre of gravity relative to the vehicle datum to provide accurate system interaction. The work covered frequencies below 20 Hz. The results on which components dominate each frequency band are structured and examined to identify the low-frequency sensitivity to different operating parameters, such as the road surface coefficient. An energy path technique was also applied to the dominant component by decoupling its compliances to study the effect on vehicle driveability and the low-frequency response. The outcomes of the research provide a good understanding of the interaction across sub-system levels. The powertrain rubber mounts were the dominant components controlling the low-frequency content (< 15.33 Hz) and could change the vehicle driveability quality.
ARTICLE | doi:10.20944/preprints201608.0080.v2
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: left ventricle; myofibre; myocardium structure; rule-based model; mathematical anatomy
Online: 20 October 2016 (08:22:39 CEST)
Computer simulation of normal and diseased human heart activity requires a 3D anatomical model of the myocardium, including myofibres. For clinical applications, such a model has to be constructed from routine methods of cardiac visualisation such as sonography. Symmetrical models have been shown to be too rigid, so an analytical non-symmetrical model with enough flexibility is necessary. Based on previously developed anatomical models of the left ventricle, we propose a new, much more flexible spline-based analytical model. The model is fully described and verified against DT-MRI data, and we show how to construct it from sonography data. To use this model in further physiological simulations, we propose a finite-difference numerical method for solving the reaction-diffusion problem, together with an example simulation of scroll wave dynamics.
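The numerical idea can be illustrated in one dimension (the paper solves the problem on its 3D ventricle geometry with physiological kinetics; the logistic reaction term and all constants here are invented for the sketch): an explicit finite-difference step for u_t = D u_xx + f(u), with a local excitation spreading as a travelling front.

```python
# Explicit finite-difference step for 1-D reaction-diffusion (illustrative).
def step(u, dt, dx, D):
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2  # discrete u_xx
        new[i] = u[i] + dt * (D * lap + u[i] * (1 - u[i]))  # logistic f(u)
    new[0], new[-1] = new[1], new[-2]   # no-flux boundaries
    return new

u = [0.0] * 50
u[25] = 1.0                 # local excitation in the middle of the fibre
for _ in range(200):
    u = step(u, dt=0.01, dx=0.5, D=0.1)
# The excited region persists and spreads outward as a travelling front.
```

The time step is chosen so that D*dt/dx^2 is far below the explicit-scheme stability limit of 0.5; on the 3D anatomical model the same stencil is applied along the spline-defined fibre geometry.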
ARTICLE | doi:10.20944/preprints202002.0273.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: linguistic knowledge; neural machine translation model; machine translation tasks; knowledge-based encoder; model representation ability
Online: 19 February 2020 (10:51:41 CET)
Incorporating source-side linguistic knowledge into a neural machine translation (NMT) model has recently achieved impressive performance on machine translation tasks. One popular method is to generalize the word embedding layer of the encoder to encode each word together with its linguistic features. Another is to change the architecture of the encoder to encode syntactic information. However, the former cannot explicitly balance the contributions of the word and its linguistic features, and the latter cannot flexibly utilize various types of linguistic information. Addressing these issues, this paper proposes a novel NMT approach that models the words in parallel with the linguistic knowledge by using two separate encoders. Compared with a single-encoder NMT model, the proposed approach additionally employs a knowledge-based encoder dedicated to encoding linguistic features. Moreover, it shares parameters across the encoders to enhance the model's ability to represent the source-side language. Extensive experiments show that the approach achieves significant improvements of up to 2.4 and 1.1 BLEU points on Turkish→English and English→Turkish machine translation tasks, respectively, indicating that it better utilizes external linguistic knowledge and effectively improves machine translation quality.
ARTICLE | doi:10.20944/preprints201904.0326.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: complex systems modeling; systems architecture; system’s model complexity; visualization; agent-based systems; system’s model evolution
Online: 30 April 2019 (11:15:20 CEST)
This work presents some characteristics of MoNet, a computerized platform for modeling and visualizing complex systems. Emphasis is placed on the ideas that allowed the successful progressive development of this modeling platform, alongside applications to the modeling of several studied systems. The platform can represent different aspects of systems modeled at different observation scales. This tool favors the perception of the phenomenon of the emergence of information associated with changes of scale. Some criteria used in the construction of the modeling platform are included. The power of current computers has made it practical to use graphic resources such as shapes, line thickness, overlay text tags, colors, and transparencies in the graphical modeling of systems made up of many elements. By visualizing diagrams designed to highlight contrasts, such modeling platforms allow the recognition of patterns that drive our understanding of systems and their structure. Graphs illustrating the tool's benefits for visualizing systems at different observation scales are presented to illustrate the application of the platform.
ARTICLE | doi:10.20944/preprints201805.0156.v1
Subject: Earth Sciences, Environmental Sciences Keywords: rule-based system; reservoir management model; land management model; SWAT (Soil and Water Assessment Tool)
Online: 10 May 2018 (06:27:38 CEST)
Decision tables have been used for many years in data processing and business applications to simulate complex rule sets. Several computer languages have been developed based on rule systems and they are easily programmed in several current languages. Land management and river-reservoir models simulate complex land management operations and reservoir management in highly regulated river systems. Decision tables are a precise yet compact way to model the rule sets and corresponding actions found in these models. In this study, we discuss the suitability of decision tables to simulate management in the river basin scale Soil and Water Assessment Tool (SWAT+) model. Decision tables are developed to simulate automated irrigation and reservoir releases. A simple auto irrigation application of decision tables was developed using plant water stress as a condition for irrigating corn in Texas. Sensitivity of the water stress trigger and irrigation application amounts were shown on soil moisture and corn yields. In addition, the Grapevine Reservoir near Dallas, Texas was used to illustrate the use of decision tables to simulate reservoir releases. The releases were conditioned on reservoir volumes and flood season. The release rules as implemented by the decision table realistically simulated flood releases as evidenced by a daily NSE (Nash-Sutcliffe Efficiency) of 0.52 and a percent bias of -1.1%. Using decision tables to simulate management in land, river and reservoir models was shown to have several advantages over current approaches including: 1) mature technology with considerable literature and applications, 2) ability to accurately represent complex, real world decision making, 3) code that is efficient, modular and easy to maintain, and 4) tables that are easy to maintain, support, and modify.
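A decision table pairs condition rows with action rows; in code, that pairing can be sketched minimally as follows (thresholds, condition names, and actions are invented for illustration, not SWAT+'s actual tables): the first rule whose conditions all hold fires.

```python
# Minimal decision-table sketch: (conditions, action) rows, first match fires.
RULES = [
    (lambda s: s["water_stress"] > 0.8 and s["in_season"], {"irrigate_mm": 25}),
    (lambda s: s["reservoir_vol"] > 0.9, {"release_m3s": 150}),
    (lambda s: True, {}),  # default row: no action
]

def decide(state):
    for condition, action in RULES:
        if condition(state):
            return action

state = {"water_stress": 0.85, "in_season": True, "reservoir_vol": 0.5}
action = decide(state)
```

Keeping the rules as data rather than hard-coded branches is what gives the advantages listed above: the table can be inspected, extended, or swapped without touching the simulation code.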
ARTICLE | doi:10.20944/preprints202009.0418.v1
Subject: Engineering, Automotive Engineering Keywords: large-sized lithium-ion battery; physics-based model; life prediction; scale-up model; reduced order cell model; electric vehicles
Online: 18 September 2020 (04:29:49 CEST)
Large lithium-ion batteries (LIBs) in electric vehicles and energy storage systems exhibit different performance and lifetime compared with small LIB cells, owing to size effects generated by the electrical configuration and property imbalance. However, the calculation time for performing life predictions with three-dimensional (3D) cell models is undesirably long. In this paper, a lumped cell model with equivalent resistances (LER cell model) is proposed as a reduced-order version of the 3D cell model, enabling accurate and fast life predictions of large LIBs. The developed LER cell model is validated by comparison with the results of the 3D cell model for a 20-Ah commercial pouch cell (NCM/graphite) and with experimental values. In addition, the LER cell models are applied to different cell types and sizes, such as a 20-Ah cylindrical cell and a 60-Ah pouch cell.
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Unsupervised anomalous sound detection; classification-based model; Outlier classifier; ID classifier
Online: 17 August 2021 (08:36:44 CEST)
The task of unsupervised anomalous sound detection (ASD), detecting anomalous sounds in a large audio database without any annotated anomalous training data, is challenging. Many unsupervised methods have been proposed, but previous works have confirmed that classification-based models far exceed unsupervised models in ASD. In this paper, we adopt two classification-based anomaly detection models: (1) an outlier classifier that distinguishes anomalous sounds or outliers from normal ones; and (2) an ID classifier that identifies anomalies using both the confidence of the classification and the similarity of hidden embeddings. We conduct experiments on task 2 of the DCASE 2020 challenge, and our ensemble method achieves an average area under the curve (AUC) of 95.82% and an average partial AUC (pAUC) of 92.32%, outperforming state-of-the-art models.
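For reference, the headline AUC metric has a simple probabilistic reading: it is the probability that a randomly chosen anomalous clip receives a higher anomaly score than a randomly chosen normal one (the Mann-Whitney form), and pAUC restricts the same comparison to a low false-positive-rate band. The scores below are invented for the sketch.

```python
# AUC from raw anomaly scores via the Mann-Whitney pairwise-comparison form.
def auc(scores_normal, scores_anomalous):
    wins = 0.0
    for a in scores_anomalous:
        for n in scores_normal:
            if a > n:
                wins += 1.0       # anomaly correctly ranked above normal
            elif a == n:
                wins += 0.5       # ties count half
    return wins / (len(scores_anomalous) * len(scores_normal))

normal = [0.1, 0.2, 0.3, 0.4]     # anomaly scores of normal clips
anomalous = [0.35, 0.5, 0.9]      # anomaly scores of anomalous clips
score = auc(normal, anomalous)
```

Here one anomalous clip (0.35) is out-scored by one normal clip (0.4), so 11 of the 12 pairs are ranked correctly and the AUC is 11/12.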
ARTICLE | doi:10.20944/preprints202104.0535.v1
Subject: Social Sciences, Accounting Keywords: model-based learning; mental health; physical activity; cognitive functions; active learning.
Online: 20 April 2021 (11:39:03 CEST)
This study examined the effect of an educational hybrid physical education (PE) intervention on cognitive performance and academic achievement in adolescents. A 9-month group-randomized controlled trial was conducted with 150 participants (age: 14.63 ± 1.38) allocated to a control group (CG, n = 37) and an experimental group (EG, n = 113). Inhibition, verbal fluency, planning, and academic achievement were assessed. Significant post-test differences in favour of the EG were observed for cognitive inhibition, verbal fluency for animals, and average verbal fluency. Over the intervention, verbal fluency for animals, verbal fluency for vegetables, average verbal fluency, cognitive inhibition, language, the average of all subjects, the average of all subjects except PE, and the average of the core subjects increased significantly in the EG. The last five variables (the academic ones and cognitive inhibition) also increased in the CG, in addition to mathematics. This study contributes to the knowledge by suggesting that both methodologies produced improvements in the measured variables, but that a hybrid program based on TPSR and gamification strategies produces additional improvements in cognitive performance, specifically in cognitive inhibition and verbal fluency.
REVIEW | doi:10.20944/preprints202102.0179.v1
Subject: Medicine & Pharmacology, Allergology Keywords: evidence-based practice; clinical reasoning; causal model; intervention theory; Concept Mapping
Online: 8 February 2021 (10:35:52 CET)
Significant efforts in the past decades to teach evidence-based practice (EBP) implementation have emphasized increasing knowledge of EBP and developing interventions to support adoption in practice. These efforts have resulted in only limited sustained improvements in the daily use of evidence-based interventions in clinical practice in most health professions. Many new interventions with limited evidence of effectiveness are readily adopted each year, indicating that openness to change is not the problem. The selection of an intervention is the outcome of an elaborate and complex cognitive process, shaped by how clinicians represent the problem in their minds, that is mostly invisible to others. Therefore, the complex thinking process that supports appropriate adoption of interventions should be taught more explicitly. Making the process visible to clinicians increases the acquisition of the skills required to judiciously select one intervention over others. The purpose of this paper is to provide a review of the selection process and the critical analysis required to appropriately decide whether or not to trial new intervention strategies with patients.
ARTICLE | doi:10.20944/preprints201705.0098.v1
Subject: Earth Sciences, Environmental Sciences Keywords: rule-based classification model; wetland remote sensing; SVM; TC-Wetness; China
Online: 11 May 2017 (08:03:34 CEST)
Wetlands are among the most biodiverse and most productive ecosystems on earth, making their monitoring a high priority for conservation, protection and management interests. Although visual interpretation of satellite images is generally precise for monitoring wetlands, recent works have emphasized computerized classification methods because they reduce analyst time. However, it is difficult to automatically identify wetlands solely from spectral characteristics due to the complexity of wetland ecosystems. The ability to extract wetland information rapidly and accurately is the basis of, and the key to, wetland mapping at a large scale. Here we propose an operational method to map China's wetlands based on Landsat TM data and ancillary data. On the basis of a theoretical analysis of automatic wetland classification, we developed a revised multi-layer wetland classification scheme and a rule-based classification model. In the latter, supervised classification (SVM and decision tree) and unsupervised classification (ISODATA) methods were tested. Four Landsat TM images, representing various wetland eco-regions in China (i.e. the Sanjiang Plain in northeast China, the North China Plain, the Zoige Plateau in southwest China and the Pearl River Estuary in southeast China), were automatically classified. The overall classification accuracies were 86.57%, 96.00%, 84.51% and 88.30%, respectively, which we consider satisfactory. Our results indicate that issues such as the resolution of geographic data and the understanding of wetland samples should be carefully addressed in the future.
ARTICLE | doi:10.20944/preprints202208.0490.v1
Subject: Engineering, Mechanical Engineering Keywords: cardiovascular 0-D model; pulmonary arterial pressure; gradient-based optimization; automatic differentiation
Online: 29 August 2022 (10:57:18 CEST)
Reliable quantification of pulmonary arterial pressure is essential in the diagnostic and prognostic assessment of a range of cardiovascular pathologies, including rheumatic heart disease, yet an accurate and routinely available method for its quantification remains elusive. This work proposes an approach to infer pulmonary arterial pressure based on scientific machine learning techniques and non-invasive, clinically available measurements. A 0-D multicompartment model of the cardiovascular system was optimized using several optimization algorithms, subject to forward-mode automatic differentiation. Measurement data were synthesized from known parameters to represent the healthy, mitral regurgitant, aortic stenosed and combined valvular disease situations, with and without pulmonary hypertension. Eleven model parameters were selected for optimization based on 95 % explained variation in mean pulmonary arterial pressure. A hybrid Adam and limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer yielded the best results with input data including valvular flow rates, heart chamber volume changes and systemic arterial pressure. Mean absolute percentage errors ranged from 1.8 % to 3.78 % over the simulated test cases. The model was able to capture pressure dynamics under hypertensive conditions, with average percentage errors in pulmonary arterial systolic, diastolic and mean pressure of 1.12 %, 2.49 % and 2.14 %, respectively. These relatively low errors highlight the potential of the proposed model to recover pulmonary pressures under diseased heart valve and pulmonary hypertensive conditions.
Subject: Mathematics & Computer Science, Other Keywords: reinforcement learning; bitrate streaming; world-models; video streaming; model-based reinforcement learning
Online: 20 August 2020 (07:02:57 CEST)
Adaptive bitrate (ABR) algorithms optimize the quality of streaming experiences for users in client-side video players, especially in unreliable or slow mobile networks. Several rule-based heuristic algorithms can achieve stable performance, but they sometimes fail to adapt properly to changing network conditions. Fluctuating bandwidth may cause such algorithms to default to behavior that creates a negative experience for the user. ABR algorithms can instead be generated with reinforcement learning, a decision-making paradigm in which an agent learns to make optimal choices through interactions with an environment. Training reinforcement learning algorithms for bitrate streaming requires building a simulator in which an agent can experience interactions quickly; training an agent in the real environment is infeasible due to its long step times. This project explores using supervised learning to construct a world-model, or learned simulator, from recorded interactions. A reinforcement learning agent trained inside the learned model, rather than a hand-built simulator, can outperform rule-based heuristics. Furthermore, agents trained inside the learned world-model can outperform model-free agents in low-sample regimes. This work highlights the potential of world-models to quickly learn simulators and to generate optimal policies.
ARTICLE | doi:10.20944/preprints201810.0668.v1
Subject: Engineering, Other Keywords: climate change; carbon emissions; low carbon city; sustainability; strategy-based model; SLCM
Online: 29 October 2018 (09:55:25 CET)
Low carbon cities are increasingly forming a distinct strand of the sustainability literature, and models have been developed to measure their performance. The purpose of this paper is to formulate a strategy-based model to evaluate the current performance and predict the future conditions of low carbon cities. It examines the dynamic interrelationships between key performance indicators (KPIs), induces changes to city plan targets and then instantly predicts the outcome of these changes. Designed to be generic and flexible, the proposed model shows how low carbon targets could guide the transformation of low carbon cities under four strategies: (1) passive intervention, (2) problem solving, (3) trend modifying and (4) opportunity seeking. The model has been applied to 17 cities and then tested on 5: London, New York, Barcelona, Dubai and Istanbul. The paper concludes with policy implications to realign city plans and support low carbon innovation.
ARTICLE | doi:10.20944/preprints201808.0550.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Food Safety; Agent-Based Model; Social Networking; Recommendation; the wisdom of crowd.
Online: 31 August 2018 (14:37:36 CEST)
"The wisdom of the crowd" is often observed in social discourses and activities around us. Its manifestations are, however, so intrinsically embedded and behaviorally accepted that elaborating a social phenomenon evidencing such wisdom is often cheered as a discovery, or at least an astonishing fact. One such scenario is explored here, namely the conceptualization and modeling of a food safety system, a system directly related to social cognition. Food safety is an area of concern these days. Models representing food safety systems have recently been published to study the effect of interactions between important entities of the system. For example, Knowles's model finds conditions leading to a more efficient and dependable system of entities such as consumers, regulators and stores, with a specific focus on regulators' behavior and their impact on food safety. The first contribution of this paper is a reevaluation of Knowles's model towards a more conscious understanding of the effects of "the wisdom of the crowd" on inspection and consuming behaviors. The second contribution is augmenting the model with social networking capabilities, which act as a medium to spread information about stores and help consumers find stores that are not contaminated. Simulation results reveal that stores' respect for social cognition improves the effectiveness of the food safety system for both consumers and stores. Simulation findings also reveal that an active society has the capability to self-organize effectively even in the absence of any regulatory compulsion.
ARTICLE | doi:10.20944/preprints201901.0067.v1
Subject: Engineering, Energy & Fuel Technology Keywords: wind farm production maximisation; coordinated control; $C_P$-based optimisation; yaw-based optimisation; wake effects; turbulence intensity; Jensen model; particle swarm optimisation
Online: 8 January 2019 (11:34:39 CET)
A practical wind farm controller for production maximisation based on coordinated control is presented. The farm controller emphasises computational efficiency without compromising accuracy. The controller combines Particle Swarm Optimisation (PSO) with a turbulence-intensity-based Jensen wake model (TI-JM) to exploit the benefits of either curtailing upstream turbines via the coefficient of power ($C_P$) or deflecting wakes by applying yaw offsets, maximising net farm production. First, TI-JM is evaluated against conventional-control benchmarking with WindPRO and real-time SCADA data from three operating wind farms. Then the optimised strategies are evaluated using simulations based on TI-JM and PSO. The innovative control strategies can optimise a medium-size wind farm, Lillgrund, consisting of 48 wind turbines, in less than 50 seconds per simulation, increasing farm efficiency by up to 6% in full-wake conditions.
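For context, a minimal sketch of the classic Jensen (top-hat) wake model that TI-JM extends (the function name and the fixed wake-decay constant k are illustrative; the paper's variant makes the decay depend on turbulence intensity):

```python
import math

def jensen_wake_speed(u0, ct, rotor_d, x, k=0.05):
    """Wind speed at distance x directly downstream of a turbine,
    per the classic Jensen top-hat wake model.
    u0: free-stream speed [m/s], ct: thrust coefficient,
    rotor_d: rotor diameter [m], k: wake-decay constant."""
    deficit = (1.0 - math.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / rotor_d) ** 2
    return u0 * (1.0 - deficit)
```

In a farm-level optimiser such as the PSO described above, this downstream speed feeds each wake-affected turbine's power curve, while the decision variables are the upstream $C_P$ curtailments or yaw offsets.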
ARTICLE | doi:10.20944/preprints201906.0049.v1
Subject: Social Sciences, Geography Keywords: mobile phone data; residents commuting behavior; agent-based model; urban planning; traffic congestion
Online: 6 June 2019 (11:31:48 CEST)
Commuting in big cities often brings tidal traffic pressure or congestion. Understanding the causes behind this phenomenon is of great significance for urban space optimization. Various kinds of spatial big data make possible a fine-grained description of urban residents' travel behaviors, and bring new approaches to related studies. The present study focuses on two aspects: obtaining relatively accurate features of commuting behaviors from mobile phone data, and simulating residents' commuting behaviors through an agent-based model to infer the causes of congestion. Taking the Baishazhou area of Wuhan, a local area of a mega city in China, as a case study, the travel behaviors of commuters are simulated: the spatial context of the model is set up using the existing urban road network and by dividing the area into travel units; then, using a month of mobile phone call detail records (CDR), statistics of residents' travel during four time slots on working-day mornings are acquired and used to generate an origin-destination (OD) matrix of travels for each time slot; the data are then imported into the model for simulation. With preset congestion rules, the agent-based model can effectively simulate the traffic conditions at each intersection, and can also infer the causes of traffic congestion from the simulation results and the OD matrix. Finally, the model is used to evaluate road network optimization, which shows clear effects of the adopted measures in relieving congestion, and thus also proves the value of this method in urban studies.
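The OD-matrix step can be sketched as follows (a minimal illustration; the trip list stands in for origin-destination unit pairs extracted from the CDR data, and the function name is ours):

```python
import numpy as np

def od_matrix(trips, n_units):
    """Count trips between travel units into an origin-destination matrix.
    trips: iterable of (origin_unit, destination_unit) index pairs,
    one pair per observed commuter movement in a given time slot."""
    od = np.zeros((n_units, n_units), dtype=int)
    for origin, dest in trips:
        od[origin, dest] += 1
    return od
```

One such matrix per time slot then drives the agents' departure points and destinations in the simulation.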
ARTICLE | doi:10.20944/preprints201812.0133.v1
Subject: Earth Sciences, Geoinformatics Keywords: radiation risk analysis; GIS-based model; thermal power plant; surface radiation; remedial measures
Online: 11 December 2018 (13:57:09 CET)
Coal combustion in thermal power plants releases ash, which is reported to cause various adverse health effects in humans and other organisms. Owing to the presence of radionuclides, it is also considered a potential radiation hazard. In this study, based on surface radiation measurements and relevant ancillary data, expected radiation risk zones were identified with regard to the human population residing near the thermal power plant. With population density as the risk-determining criterion, about 20% of the study area was in the 'High' risk zone and another 20% in the 'Low' risk zone; the remaining 60% was in the medium-risk zone. Based on the findings, remedial measures that may be adopted are suggested.
REVIEW | doi:10.20944/preprints202006.0050.v1
Subject: Behavioral Sciences, Other Keywords: competitive learning and memory functions; cognitive development; basal ganglia; medial temporal lobe; prefrontal cortex; model-based learning; model-free learning
Online: 5 June 2020 (14:10:15 CEST)
There has been a growing interest in incorporating psychological and neuroscientific knowledge about the development of cognitive functions into educational policies and academic practices. In this paper, we argue that current knowledge about the interactions between these functions and their neurodevelopmental characteristics should also be considered in order to develop practices better suited to pupils depending on their age. To facilitate this, we review current neuroscientific knowledge on the competitive interactions between two neural circuits underlying distinct learning functions, their developmental trajectories, and how they are linked to other functions such as cognitive control. Incorporating this knowledge into education could help improve academic outcomes.
ARTICLE | doi:10.20944/preprints202009.0381.v1
Subject: Life Sciences, Biotechnology Keywords: high throughput screening; rapid phenotyping; model-based experimental design; Escherichia coli; automated bioprocess development
Online: 17 September 2020 (07:34:19 CEST)
In bioprocess development, the host and the genetic construct for a new biomanufacturing process are selected in the early developmental stages. This decision, made at the screening scale with very limited information about the performance of the selected cell factory in larger reactors, has a major influence on the performance of the final process. To overcome this, scale-down approaches are essential for running screenings that show the real cell factory performance under industrial-like conditions. We present a fully automated robotic facility with 24 parallel mini-bioreactors that is operated by a model-based adaptive input design framework for the characterization of clone libraries under scale-down conditions. The cultivation operation strategies are computed and continuously refined based on a macro-kinetic growth model that is continuously re-fitted to the available experimental data. The added value of the approach is demonstrated with 24 parallel fed-batch cultivations in a mini-bioreactor system with eight different Escherichia coli strains in triplicate. The 24 fed-batches ran under the desired conditions, generating sufficient information to identify the fastest-growing strain in an environment with varying glucose concentrations similar to industrial-scale bioreactors.
REVIEW | doi:10.20944/preprints201607.0012.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: role-based access control; attribute-based access control; attribute-based encryption
Online: 8 July 2016 (10:12:21 CEST)
Cloud computing is a promising and emerging technology that is rapidly being adopted by many IT companies due to the benefits it provides, such as large storage space, low investment cost, virtualization and resource sharing. Users are able to store a vast amount of data and information in the cloud and access it from anywhere, anytime, on a pay-per-use basis. Since many users share the data and resources stored in the cloud, there arises a need to grant access to the data only to those users who are authorized to access it. This can be done through access control schemes, which allow authenticated and authorized users to access the data and deny access to unauthorized users. In this paper, a comprehensive review and analysis of existing access control schemes is presented.
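As a minimal sketch of the role-based access control (RBAC) family of schemes such reviews cover (all user, role and permission names below are illustrative), access is granted only through a role that carries the requested permission:

```python
# Roles map to permission sets; users map to role sets.
ROLE_PERMS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def can(user, permission):
    """Grant access only if some role assigned to the user carries the permission."""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Attribute-based schemes generalize this by evaluating predicates over user and resource attributes instead of a fixed role table.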
ARTICLE | doi:10.20944/preprints202205.0244.v1
Subject: Engineering, Automotive Engineering Keywords: failure mode and effect analysis (FMEA); model-based design; automatic generation tool; fault injection simulation
Online: 18 May 2022 (12:40:58 CEST)
In the development of safety-critical systems, it is important to perform the Failure Modes and Effects Analysis (FMEA) process to identify potential failures. However, traditional FMEA activities tend to be difficult and time-consuming. To compensate for the difficulty of the FMEA task, various types of tools are used to increase the quality and effectiveness of FMEA reports. This paper describes an Automatic FMEA tool that integrates Model-Based Design (MBD), FMEA, and simulated fault injection techniques in a single environment. The Automatic FMEA tool has the following advantages over existing FMEA analysis tools. First, it generates FMEA reports automatically, unlike traditional spreadsheet-based FMEA tools. Second, it analyzes the causality between failure modes and failure effects by performing model-based fault injection simulation. To demonstrate the applicability of the Automatic FMEA, we used an electronic fuel injection (EFI) system Simulink model. The results of the Automatic FMEA were compared to those of a legacy FMEA.
Subject: Engineering, Control & Systems Engineering Keywords: hybrid energy storage system; L2-gain disturbance attenuation; passivity-based control; port-controlled Hamiltonian model
Online: 16 April 2020 (06:36:09 CEST)
Battery/supercapacitor (SC) current tracking control is a key issue for hybrid energy storage systems (HESS) in electric vehicles. An innovative passivity-based L2-gain adaptive control (PBL2AC) scheme, based on a port-controlled Hamiltonian model with dissipativity (PCHD), is presented for reference current tracking and bus voltage stability in a HESS. The developed PCHD model considers both parameter variations and external disturbances. By using L2-gain disturbance attenuation, the PBL2AC ensures robust reference current tracking and a stable bus voltage. Moreover, an adaptive mechanism is adopted to estimate the electrical parameters. To validate the proposed control scheme, simulations and experiments were conducted and compared with traditional PID and sliding mode control under several typical driving cycles; the results confirm the effectiveness of the proposed controller.
ARTICLE | doi:10.20944/preprints201910.0118.v1
Subject: Earth Sciences, Geology Keywords: statistics-based estimation model (sem); different geological condition; permeability coefficient; shearing strength; landslide-triggering factor
Online: 10 October 2019 (14:53:30 CEST)
In South Korea, landslides are caused by localized heavy rainfall and typhoons, which often occur in the summer season on natural slopes in mountainous areas and artificial slopes in urban surroundings. Flow-type landslides frequently occur in mountainous areas. To evaluate flow-type landslides, it is essential to identify the physical characteristics of the soil, with a focus on the top soil layers of various types of slope. This study surveys and analyzes the characteristics of landslides that occurred in the study area under the different geological conditions of granite and gneiss. The characteristics of soil in areas that have and have not undergone landslides are also evaluated for each geological condition. Based on these characteristics and a statistical method, it extracts the triggering factors, permeability coefficients (k), and shearing strength, with cohesion (c) and internal friction angle (φ), of soils that are highly related to landslides in weathered soil layers. As a result, the permeability coefficients show significant relevance to void ratio (e), effective grain size (D10), and uniformity coefficient (cu), while the shearing strength relates to the proportion of fine-grained soil (Fines), uniformity coefficient (cu), degree of saturation (S), dry weight density (rd), and void ratio (e). The study then uses regression analysis to suggest models estimating the permeability coefficients and shearing strength. For the gneiss area, the statistics-based estimation model (SEM) is proposed as kgn = (1.488 × 10⁻² × e) + (1.076 × D10) + (−1.629 × 10⁻⁴ × cu) − (1.893 × 10⁻²) for the permeability coefficient; cgn = (−0.712 × Fines) + (−0.131 × cu) + 15.335 for cohesion; and φgn = (27.01 × rd) + (−12.594 × e) + 6.018 for the internal friction angle of soils.
For the granite area, the SEM is proposed as kgr = (8.281 × 10⁻³ × e) + (0.639 × D10) + (−2.766 × 10⁻⁵ × cu) − (9.907 × 10⁻³) for the permeability coefficient; cgr = (−0.689 × Fines) + (−0.0744 × S) + 18.59 for cohesion; and φgr = (33.640 × rd) + (−0.875 × e) − 9.685 for the internal friction angle of soils. The use of statistics-based estimation models (SEMs) for landslide-triggering factors will support simple calculation of the permeability coefficient and shearing strength (cohesion and internal friction angle), requiring only information about the physical properties of soil on natural slopes with different geological features such as the gneiss and granite areas.
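The gneiss-area SEM equations can be transcribed directly (a sketch; variable names follow the abstract's symbols, and input units are those of the original study):

```python
def sem_gneiss(e, d10, cu, fines, rd):
    """SEM estimates for the gneiss area, using the regression coefficients
    reported in the abstract.
    e: void ratio, d10: effective grain size, cu: uniformity coefficient,
    fines: proportion of fine-grained soil, rd: dry weight density."""
    k = 1.488e-2 * e + 1.076 * d10 - 1.629e-4 * cu - 1.893e-2    # permeability coefficient
    c = -0.712 * fines - 0.131 * cu + 15.335                     # cohesion
    phi = 27.01 * rd - 12.594 * e + 6.018                        # internal friction angle
    return k, c, phi
```

The granite-area model has the same form with its own coefficients, so a single parameterized function could serve both geological conditions.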
ARTICLE | doi:10.20944/preprints201811.0479.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: mixing; CFD-simulation; surrogate-based optimization; compartmental modeling; competing reaction system; optimization; model order reduction
Online: 20 November 2018 (05:07:13 CET)
Mixing is considered a critical process parameter (CPP) during process development due to its significant influence on reaction selectivity and process safety. Nevertheless, mixing issues are difficult to identify and solve owing to their complexity and their dependence on knowledge of kinetics and hydrodynamics. In this paper, we propose an optimization methodology using Computational Fluid Dynamics (CFD)-based compartmental modelling to improve mixing and reaction selectivity. More importantly, we demonstrate that, through surrogate-based optimization, the proposed methodology can serve as a computationally non-intensive route to rapid process development of reaction unit operations. For illustration purposes, the reaction selectivity of a process with the Bourne competitive reaction network is discussed. Results demonstrate that reaction selectivity can be improved by dynamically controlling the rates and locations of feeding in the reactor. The proposed methodology incorporates mechanistic understanding of the reaction kinetics together with an efficient optimization algorithm to determine the optimal process operation and can thus serve as a tool for quality-by-design (QbD) during the product development stage.
ARTICLE | doi:10.20944/preprints201809.0481.v1
Subject: Engineering, Other Keywords: brain-computer interfaces; spectrogram-based convolutional neural network model (pCNN); deep learning; EEG; LSTM; RCNN
Online: 25 September 2018 (08:58:34 CEST)
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g. hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is difficult due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models for decoding motor imagery movements directly from raw EEG signals without (manual) feature engineering: (1) a long short-term memory network (LSTM); (2) a proposed spectrogram-based convolutional neural network model (pCNN); and (3) a recurrent convolutional neural network (RCNN). Results were evaluated on our own publicly available EEG data collected from 20 subjects and on the existing 2b EEG dataset from "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
ARTICLE | doi:10.20944/preprints201611.0092.v2
Subject: Keywords: semantic spatial trajectory; role-based access control; Bell-LaPadula model; multi-policy; Web Ontology Language
Online: 17 November 2016 (15:19:51 CET)
With the proliferation of locating devices, more and more raw spatial trajectories are being generated. Many works enrich these raw trajectories with semantics and mine patterns from both raw and semantic trajectories, but access control for spatial trajectories has not yet been considered. We present a multi-policy security model for semantic spatial trajectories. In our model, Mandatory Access Control, Role-Based Access Control and Discretionary Access Control are all enforced, separately and in combination, and we represent the model semi-formally in the Web Ontology Language.
ARTICLE | doi:10.20944/preprints202004.0398.v1
Subject: Social Sciences, Other Keywords: COVID-19; Perception-based questionnaire; principal component analysis (PCA); Linear regression model; social panic; social conflict
Online: 22 April 2020 (09:55:38 CEST)
The COVID-19 pandemic situation, disease intensity, weak healthcare facilities, unawareness, and misinformation have led people in Bangladesh to fear and anxiety. This study intended to capture people's perceptions of the psychosocial, socio-economic and environmental crisis amidst the pandemic. An online questionnaire was surveyed nationwide (1,066 respondents). Datasets were analyzed through Principal Component Analysis (PCA), hierarchical Cluster Analysis (CA), Pearson's correlation matrix (PCM) and Linear Regression Analysis (LRA), and psychometric characteristics were examined with Classical Test Theory (CTT) analysis. There were good associations among the psychosocial, socio-economic and environmental parameters. A significant association was found between fear of COVID-19 and a struggling healthcare system (p<0.05). A negative association between a fragile health system and the government's ability to deal with the pandemic (p<0.05) reveals poor governance. Further, a positive association of shutdown and social distancing with fear of losing one's life and of lacking health treatment (p<0.05) indicates that shutdown hampers normal activities, which may lead to mental and economic stress. A positive association of the socio-economic impact of the shutdown with poor people's suffering, price hikes of basic needs, and disruption of formal education (p<0.05) may lead to severe socio-economic and health crises. There is a possibility of a climate-induced disaster during or after the pandemic, which would create severe food insecurity (p<0.01). Daily wage earners and the poor will suffer most from food and nutritional deficiency, and the country may face a huge economic burden. Proper risk assessment and communication are needed to alleviate fear and anxiety, and both financial and psychological support are required.
ARTICLE | doi:10.20944/preprints201810.0341.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: sustainable transformative business model; shared-value, digitization; innovation management; dynamic capabilities; transformation management; resource based view
Online: 16 October 2018 (08:23:41 CEST)
We examine how external triggers, including the digital imperative and the need for more sustainable resource and stakeholder employment, spark the development of transformative sustainable business models. Drawing on the resource-based view and the shared-value approach, we conceptualize a multifaceted framework that helps to identify key determinants and coherent layers of transformative sustainable business models. Our theoretical arguments integrate recent research findings on external dynamics, such as digital technological advances and rising global competitive dynamics, with internal capabilities at both the organizational and the individual level, allowing for a more complete understanding of transformative potentials at the firm level. We propose that key determinants of sustainable transformative business models adhere to both an innovative, value-creating reconstructionist logic and a sustainable shared-value logic, and include elements such as co-creation with customers, usage-based pricing, agile and adaptive behavior, closed-loop resource employment, asset sharing, and collaborative business ecosystems. At the same time, the organizational, economic, and environmental layers encompassing sustainable business models need to be both horizontally and vertically coherent to unfold their full potential.
ARTICLE | doi:10.20944/preprints201802.0174.v1
Subject: Social Sciences, Geography Keywords: environmental stress; human exposure; agent-based model; air pollution; urban heat wave; exposure modeling; climate change
Online: 27 February 2018 (05:12:24 CET)
The importance of predicting exposure to environmental hazards is highlighted by issues like global climate change, public health problems caused by environmental stresses, and property damage and depreciation. Several approaches have been used to assess potential exposure and achieve optimal results under various conditions, for example for different scales, groups of people, or certain points in time. Micro-simulation tools, in which each person is simulated individually and continuously, are becoming increasingly important in human exposure assessment. This paper describes an agent-based model (ABM) framework that can dynamically simulate human exposure levels, along with daily activities, in urban areas characterized by environmental stresses such as air pollution and heat stress. Within the framework, decision-making processes can be included for each individual based on rule-based behavior to achieve goals under changing environmental conditions. The ideas described in this paper are implemented in the free and open-source NetLogo platform. A simplified modeling scenario of the ABM framework in Hamburg, Germany, further demonstrates its utility across various urban environments and individual activity patterns, and its portability to other models, programs and frameworks. The prototype model can potentially be extended to support environmental incident management by exploring the daily routines of different groups of citizens and comparing the effectiveness of different strategies. Further research is needed to fully develop an operational version of the model.
ARTICLE | doi:10.20944/preprints202204.0300.v1
Subject: Social Sciences, Other Keywords: agent-based model; electric vehicles; traffic simulation; energy intake; urban environment; fuel costs; public policy; electric mobility
Online: 29 April 2022 (11:05:15 CEST)
By 2020, over 100 countries had expanded electric and plug-in hybrid electric vehicle (EV/PHEV) technologies, with global sales surpassing 7 million units. Governments are adopting cleaner vehicle technologies due to the proven environmental and health impacts of internal combustion engine vehicles (ICEVs), as evidenced by the recent COP26 meeting. This article proposes an agent-based model of vehicle activity as a tool for quantifying energy consumption by simulating a fleet of EV/PHEVs within an urban street network at various spatio-temporal resolutions. Driver behaviour plays a significant role in fuel consumption; thus, simulating various levels of individual behaviour to enhance heterogeneity should provide more accurate estimates of potential energy demand in cities. The study found that (1) energy consumption is lowest when speed-limit adherence increases (low variance in behaviour) and highest when acceleration/deceleration patterns vary (high variance in behaviour), and (2) on average, for the tested vehicles, EV/PHEVs were £116.33 cheaper to run than ICEVs across all experiment conditions. The difference in average fuel costs (electricity and petrol) shrinks at the vehicle level as driver behaviour becomes less varied (more homogeneous). This research should allow policymakers to quantify the demand for energy and subsequent fuel costs in cities.
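The behavioural-variance effect reported above can be illustrated with a toy consumption model (not the authors' traffic simulator): per-driver speeds are drawn around a target, and energy per kilometre grows with the square of speed, so higher behavioural variance raises mean consumption. All parameter values below are assumptions.

```python
import random

def trip_energy_kwh(target_speed_kmh, behaviour_sd, distance_km, seed=0):
    """Toy per-driver energy model: energy per km grows with the square of speed,
    so variance in speed-limit adherence raises mean fleet consumption."""
    rng = random.Random(seed)
    base_kwh_per_km = 0.15                    # hypothetical EV baseline at 50 km/h
    n_drivers = 1000
    total = 0.0
    for _ in range(n_drivers):
        speed = max(10.0, rng.gauss(target_speed_kmh, behaviour_sd))
        # consumption scaled quadratically relative to a 50 km/h reference speed
        total += base_kwh_per_km * (speed / 50.0) ** 2 * distance_km
    return total / n_drivers

homogeneous = trip_energy_kwh(50.0, 1.0, 10.0)    # high adherence, low variance
heterogeneous = trip_energy_kwh(50.0, 15.0, 10.0) # varied driver behaviour
```

Because the speed-to-energy mapping is convex, the heterogeneous fleet consumes more on average even though its mean speed is unchanged, matching the qualitative finding in the abstract.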
REVIEW | doi:10.20944/preprints202111.0044.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: deep reinforcement learning; model-based RL; hierarchy; trading; cryptocurrency; foreign exchange; stock market; risk; prediction; reward shaping
Online: 2 November 2021 (10:57:23 CET)
Deep reinforcement learning (DRL) has achieved significant results in many Machine Learning (ML) benchmarks. In this short survey, we provide an overview of DRL applied to trading on financial markets, including a short meta-analysis using Google Scholar, with an emphasis on using hierarchy to divide the problem space as well as using model-based RL to learn a world model of the trading environment which can be used for prediction. In addition, multiple risk measures are defined and discussed, which not only provide a way of quantifying the performance of various algorithms but can also act as (dense) reward-shaping mechanisms for the agent. We discuss in detail the various state representations used for financial markets, which we consider critical for the success and efficiency of such DRL agents. The market in focus for this survey is the cryptocurrency market.
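A dense, risk-adjusted shaping reward of the kind discussed can be sketched as a rolling Sharpe-like term; the class name, window length, and risk weight below are illustrative assumptions rather than a construction taken from the survey.

```python
from collections import deque
from statistics import mean, stdev

class SharpeReward:
    """Dense reward shaping: per-step return penalized by rolling volatility."""
    def __init__(self, window=20, risk_weight=0.5):
        self.returns = deque(maxlen=window)
        self.risk_weight = risk_weight

    def __call__(self, step_return):
        self.returns.append(step_return)
        if len(self.returns) < 2:
            return step_return          # not enough history to estimate risk
        volatility = stdev(self.returns)
        return mean(self.returns) - self.risk_weight * volatility

shaper = SharpeReward(window=5)
rewards = [shaper(r) for r in [0.01, 0.02, -0.01, 0.03, 0.00]]
```

Feeding such a shaped signal to the agent at every step, instead of a sparse end-of-episode profit, is one way risk measures double as dense rewards.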
ARTICLE | doi:10.20944/preprints202106.0733.v1
Subject: Engineering, Automotive Engineering Keywords: Discrete multiphysics; smooth particle hydrodynamics; Lattice Spring Model; Fluid-structure interaction; particle-based method; Coronary stent; Atherosclerosis
Online: 30 June 2021 (11:55:59 CEST)
Stenting is a common method for treating atherosclerosis. A metal or polymer stent is deployed to open the stenosed artery or vein. After the stent is deployed, the blood flow dynamics influence the mechanics by compressing and expanding the structure. If the stent does not respond properly to the resulting stress, vascular wall injury or re-stenosis can occur. In this work, Discrete Multiphysics is used to study the mechanical deformation of the coronary stent and its relationship with the blood flow dynamics. The major parameters responsible for deforming the stent are sorted in terms of dimensionless numbers, and a relationship between the elastic forces in the stent and pressure forces in the fluid is established. The blood flow and the stiffness of the stent material contribute significantly to the stent deformation and affect the rate of deformation. The stress distribution in the stent is not uniform, with higher stresses occurring at the nodes of the structure.
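One plausible dimensionless grouping relating fluid pressure forces to stent elastic forces is a Cauchy-number-style ratio; the exact grouping used in the paper may differ, and the material and flow values below are generic assumptions.

```python
def cauchy_number(density, velocity, elastic_modulus):
    """Cauchy-number-style ratio of fluid inertial/pressure forces to elastic forces.
    Larger values suggest larger structural deformation for a given flow."""
    return density * velocity ** 2 / elastic_modulus

# Hypothetical blood flow past a polymer stent vs a stainless-steel stent
blood_density = 1060.0        # kg/m^3
peak_velocity = 0.5           # m/s
ca_polymer = cauchy_number(blood_density, peak_velocity, 2.0e9)    # polymer, ~2 GPa
ca_metal = cauchy_number(blood_density, peak_velocity, 190.0e9)    # steel, ~190 GPa
```

The two-orders-of-magnitude gap between the ratios mirrors the abstract's point that stent stiffness strongly conditions how much the flow deforms the structure.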
ARTICLE | doi:10.20944/preprints201802.0178.v1
Subject: Medicine & Pharmacology, General Medical Research Keywords: Human Papillomavirus; vaccine refusal; hesitancy; women; school based; Health Belief Model; gynaecologist; general practitioner; survey; catch up
Online: 27 February 2018 (09:02:41 CET)
In Italy, HPV vaccination has been offered to girls since 2007, but coverage has remained below the recommended level. Sicily is one of the Italian administrative regions with lower vaccination coverage, ranging from 59% to 62% in the 1996–1999 birth cohorts. The aim of the study was to investigate factors associated with refusal of anti-HPV vaccination among young adult women in Palermo, Italy. A cross-sectional study was conducted through the administration of a telephone questionnaire consisting of 23 items on HPV infection and vaccination knowledge, based on the Health Belief Model framework. The eligible population was young women who had received at least one of the vaccinations included in the Sicilian vaccination schedule but had not started or completed the anti-HPV vaccination schedule. Overall, 141 young women were enrolled; 84.4% of them were unvaccinated and 15.6% had received at least one dose of HPV vaccine. In multivariate analysis, factors associated with failure to receive the HPV vaccination were educational level (degree; OR = 10.2, p = 0.041), lower participation in school seminars on HPV (OR = 0.2, p = 0.047), and lower perception of anti-HPV vaccine benefits (OR = 0.4, p = 0.048). Public health educational programs focused on and tailored to the perception of anti-HPV vaccine benefits and HPV disease severity, especially if carried out at school, can improve HPV vaccination uptake.
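The reported ORs come from a multivariate logistic model; as a simpler illustration of what an odds ratio measures, a crude OR can be computed from a 2x2 table. The counts below are hypothetical, not the study's data.

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude odds ratio from a 2x2 table (multivariate models adjust this for covariates)."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical counts: rows = attended a school seminar on HPV (exposed) or not,
# columns = unvaccinated (case) or vaccinated (non-case)
or_seminar = odds_ratio(10, 30, 40, 20)   # attendance appears protective: OR < 1
```

An OR below 1 for seminar attendance corresponds to the abstract's finding that participation in school seminars was associated with lower odds of remaining unvaccinated.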
ARTICLE | doi:10.20944/preprints201710.0060.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: available bandwidth estimation techniques; active bandwidth estimation techniques; passive bandwidth estimation techniques; model based bandwidth estimation techniques
Online: 10 October 2017 (10:37:41 CEST)
Wireless communication has proliferated due to the colossal growth of smartphones, mobile phones, tablets, laptops, etc. Multimedia applications such as live video, audio, still images, animated graphics, and live communications all require quality of service (QoS) guarantees as well as reliable end-to-end data transfer. Information about the availability of channel resources is essential for improving QoS, which can be characterized in terms of channel capacity, available bandwidth (ABW), and bulk transfer capacity (BTC). Bandwidth availability is a key factor in improving the QoS of a network, and the performance of a multimedia application is directly affected by it. ABW is one of the most important QoS characteristics of a wireless route and can be defined as the least unused capacity among the links constituting the route. Many bandwidth estimation techniques have been proposed in the literature to increase network performance. They fall into three leading classifications: (i) active probing bandwidth estimation techniques, (ii) passive bandwidth estimation techniques, and (iii) model-based bandwidth estimation techniques. Each of these is briefly discussed in this paper.
ARTICLE | doi:10.20944/preprints201709.0089.v1
Subject: Engineering, Control & Systems Engineering Keywords: Wind turbine simulator; data-driven and model-based approaches; fuzzy identification; on-line estimation; robustness and reliability
Online: 19 September 2017 (15:47:14 CEST)
Wind turbine plants are complex dynamic and uncertain processes driven by stochastic inputs and disturbances, as well as different loads represented by gyroscopic, centrifugal, and gravitational forces. Moreover, as their aerodynamic models are nonlinear, both modelling and control become challenging problems. On one hand, high-fidelity simulators should contain different parameters and variables in order to accurately describe the main dynamic system behaviour. Therefore, the development of modelling and control for wind turbine systems should consider these complexity aspects. On the other hand, these control solutions have to include the main wind turbine dynamic characteristics without becoming too complicated. The main point of this paper is thus to provide two practical examples of the development of robust control strategies applied to a simulated wind turbine plant. Experiments with the wind turbine simulator and Monte Carlo tools represent the instruments for assessing the robustness and reliability of the developed control methodologies when model-reality mismatch and measurement errors are also considered. Advantages and drawbacks of these regulation methods are also highlighted with respect to different control strategies via proper performance metrics.
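The Monte Carlo robustness assessment described above can be sketched with a toy proportional controller on a first-order plant, standing in for the paper's more elaborate designs: the plant gain (model-reality mismatch) and the sensor noise are randomized per run, and a tracking cost is averaged over runs. All numeric values are assumptions.

```python
import random

def simulate(controller_gain, plant_gain, noise_sd, rng, steps=200, setpoint=1.0):
    """Discrete first-order plant under proportional control with noisy measurements;
    returns the sum of squared tracking errors (smaller is better)."""
    state, sse = 0.0, 0.0
    for _ in range(steps):
        measured = state + rng.gauss(0.0, noise_sd)     # measurement error
        u = controller_gain * (setpoint - measured)
        state = 0.9 * state + plant_gain * u
        sse += (setpoint - state) ** 2
    return sse

def monte_carlo(controller_gain, runs=100, seed=1):
    """Perturb the plant gain (model mismatch) and sensor noise across runs."""
    rng = random.Random(seed)
    costs = [simulate(controller_gain,
                      plant_gain=rng.uniform(0.08, 0.12),   # +/-20% gain mismatch
                      noise_sd=rng.uniform(0.0, 0.02),
                      rng=rng)
             for _ in range(runs)]
    return sum(costs) / len(costs)

mean_cost = monte_carlo(controller_gain=2.0)
```

Comparing `mean_cost` across candidate controllers is the essence of the robustness and reliability analysis: a design that keeps the cost low over the whole perturbation ensemble, not just at nominal parameters, is preferred.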
ARTICLE | doi:10.20944/preprints201708.0034.v1
Subject: Engineering, Control & Systems Engineering Keywords: wind turbines; hydroelectric systems; nonlinear modelling; model-based control; data-driven approach; advanced control; robustness and reliability
Online: 9 August 2017 (04:42:58 CEST)
Increasingly, there is a focus on utilising renewable energy resources in a bid to fulfil increasing energy requirements and mitigate the climate change impacts of fossil fuels. While most renewable resources are free, the technology used to usefully convert such resources is not, and there is an increasing focus on improving the conversion economy and efficiency. To this end, advanced control technology can have a significant impact and is already relatively mature for wind turbines. Though hydroelectric plants can use simple regulation systems, significant benefits have been shown to accrue from the appropriate use of the same control methods designed for wind turbine plants. This represents the key point of the paper. In fact, to date, the application communities connected with wind and hydraulic energies have had little communication, resulting in little cross-fertilisation of control ideas and experience, particularly from the more mature wind area to hydrodynamic systems. Therefore, this paper examines the models and the application of control technology across both domains, from both a comparative and a contrasting point of view, with the aim of identifying commonalities in models and control objectives, as well as potential solutions. Key comparative reference points include the articulation of the employed models, specification of control objectives, development of high-fidelity simulators, and development of solution concepts. Not least among realistic system requirements are the physical and operational constraints under which such renewable energy systems must operate, and the need to provide reliable and robust control solutions that respect the often remote and relatively inaccessible locations of many onshore and offshore deployments.
Subject: Medicine & Pharmacology, Allergology Keywords: In vitro–In vivo Correlation; Physiologically Based Pharmacokinetic Model; BCS Class II; Rivaroxaban; Xarelto; Food Effect; Population Kinetics
Online: 25 January 2021 (09:41:41 CET)
The present work evaluates the food effect on the absorption of rivaroxaban (Riva), a BCS class II drug, from the orally administered commercial immediate-release tablet (Xarelto IR) using physiologically based pharmacokinetic (PBPK) and conventional in vitro-in vivo correlation (IVIVC) models. The bioavailability of Riva upon oral administration of the Xarelto IR tablet is reported to exhibit a positive food effect. The PBPK model for Riva was developed and verified using previously reported in vivo data for the oral solution (5 and 10 mg) and the Xarelto IR tablet (5 and 10 mg dose strengths). Once the PBPK model was established, the in vivo performance of the higher-strength tablet formulation (Xarelto IR tablet 20 mg, in fasted and fed states) was predicted using experimentally obtained in vitro permeability, biorelevant solubility, and dynamic dissolution data from a United States Pharmacopeia (USP) IV flow-through cell apparatus. In addition, a mathematical IVIVC model was developed using the in vitro dissolution and in vivo profiles of the 20 mg strength Xarelto IR tablet in the fasted condition. Using the developed IVIVC model, the pharmacokinetic (PK) profile of the Xarelto IR tablet in the fed condition was predicted and compared with the PK parameters obtained via the PBPK model. A virtual in vivo PK study was designed using a single-dose, 3-treatment cross-over trial in 50 subjects to predict the PK profile of the Xarelto® IR tablet in the fed state. Overall, the results obtained from the IVIVC model were found to be comparable with those from the PBPK model, and the outcomes from both models pointed to a positive food effect on the in vivo profile of Riva. The developed models can thus be effectively extended to establish bioequivalence for marketed and novel complex formulations of Riva, such as amorphous solid dispersions.
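A Level A IVIVC of the kind developed here typically maps the fraction dissolved in vitro to the fraction absorbed in vivo via a linear model. The sketch below fits such a model by ordinary least squares on hypothetical fraction data, not the study's measurements.

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b (a Level A IVIVC is often linear)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical fraction dissolved (in vitro) vs fraction absorbed (in vivo), fasted state
f_dissolved = [0.10, 0.30, 0.55, 0.80, 0.95]
f_absorbed  = [0.08, 0.28, 0.50, 0.76, 0.90]
slope, intercept = fit_linear(f_dissolved, f_absorbed)

# Use the fitted correlation to predict absorption at near-complete dissolution,
# e.g. for a dissolution profile measured under fed-state media
predicted = slope * 0.99 + intercept
```

Once validated, the fitted relation lets a new dissolution profile (here, a fed-state one) be converted into a predicted absorption profile without a fresh in vivo study, which is the role the IVIVC model plays in the abstract.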
ARTICLE | doi:10.20944/preprints201901.0267.v1
Subject: Engineering, Control & Systems Engineering Keywords: wind turbine system; hydroelectric plant simulator; model-based control; data-driven approach; self-tuning control; robustness and reliability
Online: 26 January 2019 (10:08:46 CET)
Interest in the use of renewable energy resources is increasing, especially towards wind and hydro power, which should be efficiently converted into electric energy via suitable technology tools. To this aim, data-driven control techniques represent viable strategies that can be employed for this purpose, due to the features of these nonlinear dynamic processes working over a wide range of operating conditions, driven by stochastic inputs, excitations and disturbances. Some of the considered methods, such as fuzzy and adaptive self-tuning controllers, were already verified on wind turbine systems, and similar advantages may thus derive from their appropriate implementation and application to hydroelectric plants. These issues represent the key features of the work, which provides some guidelines on the design and application of these control strategies to these energy conversion systems. The working conditions of these systems are also taken into account in order to highlight the reliability and robustness characteristics of the developed control strategies, which are especially interesting for the remote and relatively inaccessible locations of many installations.
ARTICLE | doi:10.20944/preprints201810.0572.v1
Subject: Engineering, Control & Systems Engineering Keywords: wind turbine system; hydroelectric plant simulator; model-based control; data-driven approach; self-tuning control; robustness and reliability
Online: 24 October 2018 (11:26:20 CEST)
Interest in the use of renewable energy resources is increasing, especially towards wind and hydro power, which should be efficiently converted into electric energy via suitable technology tools. To this aim, self-tuning control techniques represent viable strategies that can be employed for this purpose, due to the features of these nonlinear dynamic processes working over a wide range of operating conditions, driven by stochastic inputs, excitations and disturbances. Some of the considered methods were already verified on wind turbine systems, and important advantages may thus derive from the appropriate implementation of the same control schemes for hydroelectric plants. This represents the key point of the work, which provides some guidelines on the design and application of these control strategies to these energy conversion systems. In fact, investigations related to wind and hydraulic energies appear to share only a reduced number of common aspects, leading to little exchange of possible common solutions. This consideration is particularly valid with reference to the more established wind area when compared to hydroelectric systems. This work therefore recalls the models of the wind turbine and the hydroelectric system, and investigates the application of different control solutions. The scope is to analyse common points in the control objectives and the results achievable from the application of different solutions. Another important point of this investigation regards the analysis of the exploited benchmark models, their control objectives, and the development of the control solutions. The working conditions of these energy conversion systems are also taken into account in order to highlight the reliability and robustness characteristics of the developed control strategies, which are especially interesting for the remote and relatively inaccessible locations of many installations.
ARTICLE | doi:10.20944/preprints202110.0336.v1
Subject: Biology, Ecology Keywords: nature-based solutions; climate change adaptation; biodiversity; ecosystem-based adaptation
Online: 23 October 2021 (14:19:30 CEST)
Nature-based solutions (NbS) are increasingly recognised for their potential to address both the climate and biodiversity crises. These outcomes are interdependent, and both rely on the capacity of NbS to support and enhance the health of an ecosystem: its biodiversity, the condition of its abiotic and biotic elements, and its capacity to function normally despite environmental change. However, while understanding of ecosystem health outcomes of nature-based interventions for climate change mitigation is growing, the outcomes of those implemented for adaptation remain poorly understood, with evidence scattered across multiple disciplines. To address this, we conducted a systematic review of the outcomes of 109 nature-based interventions for climate change adaptation using 33 indicators of ecosystem health across eight broad categories (e.g. diversity, biomass, ecosystem functioning and population dynamics). We showed that 88% of interventions with positive outcomes for climate change adaptation also reported measurable benefits for ecosystem health. We also showed that interventions were associated with a 67% average increase in local species richness. All eight studies that reported benefits in terms of both climate change mitigation and adaptation also supported ecosystem health, leading to a triple win. However, there were also trade-offs, mainly for forest management and creation of novel ecosystems such as monoculture plantations of non-native species. Our review highlights two major limitations of research to date. First, only a limited selection of metrics is used to assess ecosystem health, and these rarely include key aspects such as functional diversity and habitat connectivity. Second, taxonomic coverage is poor: 67% of outcomes assessed only plants and 57% did not distinguish between native and non-native species. Future research addressing these issues will allow the design and adaptive management of NbS to support healthy and resilient ecosystems, and thereby enhance their effectiveness for meeting both climate and biodiversity targets.
REVIEW | doi:10.20944/preprints202202.0212.v1
Subject: Mathematics & Computer Science, Analysis Keywords: Knowledge Graphs; Link Prediction; Semantic-Based Models; Translation Based Embedded Models
Online: 17 February 2022 (11:49:24 CET)
For disciplines like biological science, security, and the medical field, link prediction is a popular research area, and many methods have been proposed to address it. Among those reviewed in this paper are the TransE, ComplEx, DistMult, and DensE models, each of which approaches link prediction from a different perspective. We assess the practical performance potential of these methods under similar parameter values, using fine-tuning to evaluate the reliability and reproducibility of their results. We describe the methods and experiments, providing theoretical proofs and experimental examples that demonstrate how current link prediction methods work in such settings. We use standard evaluation metrics to test each model's ability.
REVIEW | doi:10.20944/preprints202112.0027.v2
Subject: Biology, Animal Sciences & Zoology Keywords: Zoo animal welfare; Five Domains; Validity; Animal-based; Resource-based; Scoring
Online: 22 December 2021 (11:59:32 CET)
Zoos are increasingly putting in place formalized animal welfare assessment programs to allow monitoring of welfare over time, as well as to aid in resource prioritization. These programs tend to rely on assessment tools that incorporate resource-based and observational animal-focused measures, since it is rarely feasible to obtain measures of physiology in zoo-housed animals. A range of assessment tools is available, commonly based on the Five Domains framework. A comprehensive review of the literature was conducted to bring together recent studies examining welfare assessment methods in zoo animals. A summary of these methods is provided, with the advantages and limitations of each approach presented. We then highlight practical considerations with respect to implementing these tools in practice, for example scoring schemes, weighting of criteria, and innate animal factors for consideration. It is concluded that there would be value in standardizing guidelines for the development of welfare assessment tools, since zoo accreditation bodies rarely prescribe these. There is also a need to develop taxon- or species-specific assessment tools to inform welfare management.
ARTICLE | doi:10.20944/preprints202109.0129.v1
Subject: Medicine & Pharmacology, Other Keywords: antimicrobial resistance; antibiotic prescribing; acute non-complicated infections; primary care; data-based feedback; mixed logistic regression model; multi-faceted intervention
Online: 7 September 2021 (13:54:02 CEST)
The three-armed cluster-randomized trial ARena (Sustainable reduction of antibiotic-induced antimicrobial resistance) aimed to foster appropriate antibiotic use and reduce overprescribing in German ambulatory care to counter antibiotic resistance. Multi-faceted interventions targeted primary care physicians, teams and patients. This study examined the effectiveness of the implementation program. ARena was conducted in 14 primary care networks with 196 practices. All arms received data-based feedback on antibiotic prescribing and quality circles; arms II and III each received different add-on components. The primary outcome was the prescribing rate for systemic antibiotics in cases of non-complicated acute infections (upper respiratory tract, bronchitis, sinusitis, tonsillitis, otitis media). Secondary outcomes were the prescribing rates of quinolones and guideline-recommended antibiotics. Based on pseudonymized quarterly claims data, mixed logistic regression models examined pre-post intervention changes in antibiotic prescribing rates and compared them with matched standard care. A significant rate reduction (arm I 11.7%; arm II 9.9%; arm III 12.7%) and significantly lower prescribing rates were observed for all arms (20.1%, 18.9% and 23.6%) compared to matched standard care (29.4%). Fluoroquinolone prescribing was reduced in all intervention arms, and rates for recommended substances generally increased. No significant post-interventional difference between intervention arms was detected. Findings indicate that the implementation program had an impact compared to standard care. Trial registration: ISRCTN, ISRCTN58150046
ARTICLE | doi:10.3390/sci2040061
Subject: Keywords: industry4.0; fault detection; fault diagnosis; random forest; diagnostic graph; distributed diagnosis; model-based; data-driven; hybrid approach; hydraulic test rig
Online: 24 September 2020 (00:00:00 CEST)
In this work, a hybrid component Fault Detection and Diagnosis (FDD) approach for industrial sensor systems is established and analyzed, to provide a hybrid schema that combines the advantages and eliminates the drawbacks of both model-based and data-driven methods of diagnosis. Moreover, it shines the light on a new utilization of Random Forest (RF) together with model-based diagnosis, beyond its ordinary data-driven application. RF is trained and hyperparameter tuned using three-fold cross validation over a random grid of parameters using random search, to finally generate diagnostic graphs as the dynamic, data-driven part of this system. This is followed by translating those graphs into model-based rules in the form of if-else statements, SQL queries or semantic queries such as SPARQL, in order to feed the dynamic rules into a structured model essential for further diagnosis. The RF hyperparameters are consistently updated online using the newly generated sensor data to maintain the dynamicity and accuracy of the generated graphs and rules thereafter. The architecture of the proposed method is demonstrated in a comprehensive manner, and the dynamic rules extraction phase is applied using a case study on condition monitoring of a hydraulic test rig using time-series multivariate sensor readings.
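The rule-translation step described above can be sketched without any ML library: a single learned threshold split, a one-node stand-in for the trees of the trained random forest, is emitted as an if-else rule and an SQL query. The feature name, table name (`sensor_log`), and readings below are hypothetical.

```python
def best_stump(samples, labels):
    """Find the single (feature, threshold) split minimizing misclassifications:
    a one-node stand-in for the trees inside a random forest."""
    n_features = len(samples[0])
    best = (0, 0.0, float("inf"))      # (feature index, threshold, error count)
    for f in range(n_features):
        for s in samples:
            thr = s[f]
            errors = sum((x[f] > thr) != y for x, y in zip(samples, labels))
            if errors < best[2]:
                best = (f, thr, errors)
    return best[0], best[1]

def to_rules(feature_names, feature, threshold):
    """Translate the learned split into an if-else rule and an SQL query."""
    name = feature_names[feature]
    rule = f"if {name} > {threshold}: fault = True"
    sql = f"SELECT * FROM sensor_log WHERE {name} > {threshold}"
    return rule, sql

# Hypothetical hydraulic-rig pressure readings (bar) labelled faulty/healthy
readings = [(140.0,), (150.0,), (182.0,), (195.0,)]
faulty = [False, False, True, True]
feat, thr = best_stump(readings, faulty)
rule, sql = to_rules(["pressure_bar"], feat, thr)
```

In the paper's pipeline the splits come from a hyperparameter-tuned random forest and are regenerated online as new sensor data arrive; the translation into if-else, SQL, or SPARQL rules then feeds the structured model-based diagnosis, exactly as this miniature version emits its two rule strings.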
ARTICLE | doi:10.20944/preprints202007.0548.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: industry4.0; fault detection; fault diagnosis; random forest; diagnostic graph; distributed diagnosis; model-based; data-driven; hybrid approach; hydraulic test rig
Online: 23 July 2020 (11:26:41 CEST)
In this work, a hybrid component Fault Detection and Diagnosis (FDD) approach for industrial sensor systems is established and analyzed, to provide a hybrid schema that combines the advantages and eliminates the drawbacks of both model-based and data-driven methods of diagnosis. Moreover, it shines a light on a new utilization of Random Forest (RF) together with model-based diagnosis, beyond its ordinary data-driven application. RF is trained and hyperparameter tuned using 3-fold cross-validation over a random grid of parameters using random search, to finally generate diagnostic graphs as the dynamic, data-driven part of this system. This is followed by translating those graphs into model-based rules in the form of if-else statements, SQL queries or semantic queries such as SPARQL, in order to feed the dynamic rules into a structured model essential for further diagnosis. The RF hyperparameters are consistently updated online using the newly generated sensor data, in order to maintain the dynamicity and accuracy of the generated graphs and rules thereafter. The architecture of the proposed method is demonstrated in a comprehensive manner, and the dynamic rules extraction phase is applied using a case study on condition monitoring of a hydraulic test rig using time-series multivariate sensor readings.
ARTICLE | doi:10.20944/preprints201806.0116.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: liquid metal sensors; liquid wire; wearables; athletic training; ankle complex; plantar flexion; resistance-based sensors; human ankle model; sensor substrate
Online: 7 June 2018 (11:15:27 CEST)
Interviews with strength and conditioning coaches across all levels of athletic competition identified their two biggest concerns with the current state of wearable technology: (a) the lack of solutions that accurately capture data "from the ground up" and (b) a lack of trust due to inconsistent measurements. The purpose of this research is to investigate the use of liquid metal sensors, specifically Liquid Wire sensors, as a potential solution for accurately capturing ankle complex movements such as plantar flexion, dorsiflexion, inversion, and eversion. Sensor stretch linearity was validated using a micro-ohm meter and a Wheatstone bridge circuit. Sensors made from different substrates were also tested and found to be linear at multiple temperatures. An ankle complex model and a computing unit for measuring resistance values were developed to determine sensor output based on simulated plantar flexion movement. The sensors were found to have a significant relationship between positional change and resistance values for plantar flexion movement. The results of the study ultimately confirm the researchers' hypothesis that liquid metal sensors, and Liquid Wire sensors specifically, can serve as a viable substitute for inertial measurement unit (IMU)-based solutions that attempt to capture specific joint angles and movements.
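The Wheatstone bridge measurement mentioned above follows the standard bridge equation; the sketch below computes the bridge output for a stretched sensor and inverts it to recover the sensor resistance. The excitation voltage and arm resistances are assumed values, not the study's circuit.

```python
def bridge_output(v_in, r1, r2, r3, r_sensor):
    """Wheatstone bridge output voltage with the resistance-based sensor in one arm."""
    return v_in * (r_sensor / (r3 + r_sensor) - r2 / (r1 + r2))

def sensor_resistance(v_in, v_out, r1, r2, r3):
    """Invert the bridge equation: recover the sensor resistance from the voltage."""
    ratio = v_out / v_in + r2 / (r1 + r2)
    return ratio * r3 / (1.0 - ratio)

# Hypothetical 5 V bridge with 100-ohm fixed arms; sensor stretched to 110 ohm
v = bridge_output(5.0, 100.0, 100.0, 100.0, 110.0)
recovered = sensor_resistance(5.0, v, 100.0, 100.0, 100.0)
```

If the sensor's resistance varies linearly with stretch (as the validation in the abstract found), this inversion converts the measured bridge voltage directly into a joint-angle estimate.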
ARTICLE | doi:10.20944/preprints201706.0113.v1
Subject: Engineering, Control & Systems Engineering Keywords: conceptual modeling; cyber-physical systems; cyber-physical gap; Object-Process Methodology; model-based systems engineering; Three Mile Island 2 Accident
Online: 26 June 2017 (04:59:29 CEST)
The cyber-physical gap (CPG) is the difference between the 'real' state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is an inevitable issue in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potential detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach for capturing, explaining, and mitigating the CPG, on top of and in sync with the conventional system model, and as an inherent systems engineering activity. This approach enhances the systems engineer's ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions, actions, and hazardous implications. CPGAME is a generic, conceptual approach, specified and demonstrated with Object Process Methodology (OPM). OPM is a holistic conceptual modeling paradigm for multidisciplinary, complex, dynamic systems, which is also standardized as ISO 19450. We analyze the 1979 Three Mile Island 2 nuclear accident as a prime example of the disastrous consequences of unmitigated CPGs in complex systems.
ARTICLE | doi:10.20944/preprints201612.0050.v1
Subject: Engineering, Control & Systems Engineering Keywords: modelling and simulation for control; advanced control design; model–based and data-driven approaches; artificial intelligence; thermal unit nonlinear system
Online: 9 December 2016 (03:17:38 CET)
The paper presents the design and implementation of different advanced control strategies applied to a nonlinear model of a thermal unit. A data-driven grey-box identification approach provided the physically meaningful nonlinear continuous-time model, which represents the benchmark exploited in this work. The control problem of this thermal unit is important since it constitutes the key element of passive air conditioning systems. The advanced control schemes analysed in this paper are used to regulate the outflow air temperature of the thermal unit by exploiting the inflow air speed, whilst the inflow air temperature is considered an external disturbance. The reliability and robustness of the suggested control methodologies are verified with a Monte Carlo analysis simulating modelling uncertainty, disturbance and measurement errors. The achieved results demonstrate the effectiveness and viable application of the suggested control solutions to air conditioning systems. The benchmark model represents one of the key elements of this study, exploited for benchmarking different model-based and data-driven advanced control methodologies through extensive simulations. Moreover, this work highlights the main features of the proposed control schemes, while providing practitioners and heating, ventilating and air conditioning engineers with tools to design robust control strategies for air conditioning systems.
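The regulation task described above (driving the outflow air temperature to a setpoint via the inflow air speed, with the inflow air temperature acting as a disturbance) can be sketched with a discrete PI loop on a toy first-order thermal model; the paper's identified model and controllers are more sophisticated, and all gains and coefficients below are assumptions.

```python
def run_pi(setpoint=24.0, steps=500, kp=0.8, ki=0.05):
    """Toy thermal unit: outflow temperature relaxes toward the inflow temperature,
    offset by a cooling term proportional to the inflow air speed (control input)."""
    t_out, integral = 30.0, 0.0
    t_inflow = 30.0                      # external disturbance: warm inflow air
    for _ in range(steps):
        error = t_out - setpoint         # positive when the outflow is too warm
        integral += error
        speed = max(0.0, kp * error + ki * integral)   # air speed cannot be negative
        # first-order plant: drift toward inflow temperature minus speed-driven cooling
        t_out += 0.1 * (t_inflow - t_out) - 0.05 * speed
    return t_out

final_temp = run_pi()
```

The integral term is what rejects the constant inflow-temperature disturbance: a purely proportional controller would settle with a steady-state offset, whereas here the accumulated error drives the air speed up until the outflow temperature reaches the setpoint.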
ARTICLE | doi:10.20944/preprints202010.0148.v2
Subject: Social Sciences, Accounting Keywords: Sustainable Teaching; multidisciplinary; multicultural; teams; Case-based Learning; Problem-based Learning; teamwork
Online: 26 April 2021 (15:38:20 CEST)
This article investigates the prospect of implementing multidisciplinary and multicultural student teamwork (MMT) involving Case-based Learning (CBL) and Problem-based Learning (PBL) as a sustainable teaching practice. Based on a mixed-methods approach, which includes direct observation (both physical and virtual), questionnaire distribution and focus-group interviews, the study reveals that MMT through CBL and PBL can both facilitate and hinder sustainable learning. Our findings show that while MMT enhances knowledge sharing, it also poses a wide range of challenges, raising questions about its social significance as a sustainable teaching practice. The study suggests the implementation of certain mechanisms, such as 'Teamwork Training' and 'Pedagogical Mentors', aiming to strengthen the sustainable orientation of MMT through CBL and PBL.
ARTICLE | doi:10.20944/preprints201807.0523.v1
Subject: Mathematics & Computer Science, Other Keywords: game-based learning; game design; project-based teaching; informatics and society; cybersecurity
Online: 26 July 2018 (16:38:48 CEST)
This article discusses the use of game design as a method for interdisciplinary project-based teaching in secondary school education to convey informatics and society topics. There is a lot of knowledge about learning games but little background on project-based teaching using game design as a method. We present the results of an analysis of student-created games and an evaluation of a student-authored database on learning contents found in commercial off-the-shelf games. We further contextualise these findings using a group discussion with teachers. Results underline the effectiveness of project-based teaching to raise awareness for informatics and society topics. We further outline informatics and society topics that are particularly interesting to students, genre preferences and potentially engaging game mechanics stemming from our analyses.
ARTICLE | doi:10.20944/preprints201709.0074.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: recommendation system; context awareness; location-based services; mobile computing; cloud-based computing
Online: 18 September 2017 (08:54:04 CEST)
The ubiquity of mobile sensors (such as GPS, accelerometers and gyroscopes), together with increasing computational power, has enabled easier access to contextual information, which has proved its value in the next generation of recommender applications. The importance of contextual information has been recognized by researchers in many disciplines, such as ubiquitous and mobile computing, to filter query results and provide recommendations based on the user's status. A context-aware recommendation system (CoARS) provides a personalized service to each individual user, driven by his or her particular needs and interests at any location and any time; a contextual recommendation system therefore changes in real time as a user's circumstances change. CoARS is one of the major applications that has been refined over the years due to evolving geospatial techniques and big data management practices. In this paper, a CoARS is designed and implemented to combine context information from smartphone sensors with user preferences to improve the efficiency and usability of recommendation. The proposed approach combines the user's context information (such as location, time, and transportation mode), personalized preferences (based on the individual's past behavior), and item-based recommendations (such as an item's ranking and type) to personally filter the item list. The context-aware methodology is based on preprocessing and filtering of raw data, context extraction and context reasoning. This study examined the application of such a system to recommending a suitable restaurant using both web-based and Android platforms. The implemented system uses CoARS techniques to provide beneficial and accurate recommendations to users, and its capabilities were successfully evaluated with a recommendation experiment and a usability test.
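A minimal sketch of the kind of context-aware filtering described above: combine a context match, a user preference, and an item rating into a single score and rank the items. All field names, weights, and data are illustrative assumptions, not the paper's actual pipeline or API.

```python
def recommend(items, context, prefs, top_k=3):
    """Score each item as context-match x user-preference x rating,
    then return the top_k items. A toy stand-in for the paper's
    context extraction and reasoning stages."""
    def score(it):
        ctx = 1.0 if it["area"] == context["area"] else 0.3  # location fit
        pref = prefs.get(it["cuisine"], 0.1)                 # past behavior
        return ctx * pref * it["rating"]                     # item ranking
    return sorted(items, key=score, reverse=True)[:top_k]

# toy restaurant data (hypothetical)
items = [
    {"name": "A", "area": "downtown", "cuisine": "thai", "rating": 4.5},
    {"name": "B", "area": "suburb", "cuisine": "thai", "rating": 4.9},
    {"name": "C", "area": "downtown", "cuisine": "pizza", "rating": 4.0},
]
context = {"area": "downtown"}
prefs = {"thai": 0.9, "pizza": 0.4}
print(recommend(items, context, prefs, top_k=1)[0]["name"])  # → A
```

The multiplicative form means a poor context fit suppresses even a highly rated item, which is the intuition behind contextual post-filtering.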
REVIEW | doi:10.20944/preprints202201.0073.v1
Subject: Medicine & Pharmacology, Other Keywords: Messenger RNA; hospital-based mRNA therapeutics; circular mRNA; self-amplifying mRNA; RNA-based CAR T-cell; RNA-based gene-editing tools
Online: 6 January 2022 (11:20:59 CET)
Hospital-based programs democratize mRNA therapeutics by facilitating the processes needed to translate a novel RNA idea from the bench to the clinic. Because mRNA is essentially biological software, therapeutic RNA constructs can be rapidly developed. The generation of small batches of clinical-grade mRNA to support IND applications and first-in-man clinical trials, as well as personalized mRNA therapeutics delivered at the point of care, is feasible at a modest scale of cGMP manufacturing. Advances in mRNA manufacturing science and innovations in mRNA biology are increasing the scope of mRNA clinical applications.
ARTICLE | doi:10.20944/preprints202208.0523.v1
Subject: Mathematics & Computer Science, Other Keywords: angle-based outlier detection; percentile-based outlier detection; MultiPhiLDA; noise; irrelevant software requirements
Online: 30 August 2022 (11:25:24 CEST)
Noise in requirements has long been recognized as a defect in software requirements specifications (SRS), and detecting defects at an early stage is crucial in the software development process. Noise can take the form of irrelevant requirements included within an SRS. A previous study attempted to detect noise in SRS by treating noise as an outlier, but the resulting method demonstrated only moderate reliability, because unique actor words were overshadowed by unique action words in the topic-word distribution. In this study, we propose a framework to identify irrelevant requirements based on the MultiPhiLDA method. The proposed framework separates the topic-word distribution of actor words and action words into two distinct topic-word distributions with two multinomial probability functions, using weights to maintain a proportional contribution of actor and action words. We also explore the use of two outlier detection methods, namely Percentile-based Outlier Detection (PBOD) and Angle-based Outlier Detection (ABOD), to distinguish irrelevant from relevant requirements. The experimental results show that the proposed framework exhibits better performance than previous methods. Furthermore, combining ABOD as the outlier detection method with topic coherence as the approach for estimating the optimal number of topics and iterations outperformed the other combinations, obtaining sensitivity, specificity, F1-score, and G-mean values of 0.59, 0.65, 0.62, and 0.62, respectively.
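For reference, a minimal sketch of Angle-Based Outlier Detection (ABOD), one of the two outlier detectors named above: an outlier sits at the border of the data and sees the remaining points under a narrow range of angles, so a low variance of distance-weighted angles marks an outlier. This is the generic ABOD idea on toy 2-D data, not the paper's requirement-specific pipeline.

```python
import numpy as np

def abod_scores(X):
    """For each point, compute the variance of distance-weighted cosines
    over all pairs of other points (Kriegel et al.'s ABOD factor).
    LOW variance indicates an outlier."""
    n = len(X)
    scores = np.empty(n)
    for i in range(n):
        vals = []
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                a, b = X[j] - X[i], X[k] - X[i]
                na, nb = np.linalg.norm(a), np.linalg.norm(b)
                if na == 0 or nb == 0:
                    continue
                # cosine weighted by squared distances, as in ABOD
                vals.append(np.dot(a, b) / (na**2 * nb**2))
        scores[i] = np.var(vals)
    return scores

# tight cluster plus one far-away point: the outlier gets the lowest score
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, size=(20, 2)), [[5.0, 5.0]]])
print(int(np.argmin(abod_scores(X))))  # → 20 (index of the injected outlier)
```

The cubic cost over all point triples is why practical implementations use the approximate FastABOD variant on larger data.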
ARTICLE | doi:10.20944/preprints202111.0196.v1
Subject: Life Sciences, Other Keywords: crocodilian; animal welfare; animal-based measure; animal-based indicator; welfare assessment; welfare measure
Online: 10 November 2021 (08:46:54 CET)
Animal-based measures are the measures of choice in animal welfare assessment protocols, as they can often be applied independently of the housing or production system employed. Although there has been a small body of work on potential animal-based measures for farmed crocodilians [1-3], such measures have not been studied in the context of an animal welfare assessment protocol. Potential animal-based measures that could reflect the welfare state of farmed crocodilians were identified and aligned with the Welfare Quality® principles of good housing, good health, good feeding and appropriate behaviour. A consultation process with a panel of experts was used to evaluate and score the potential measures in terms of validity and feasibility. This resulted in a toolbox of measures being identified for further development and integration into on-farm animal welfare assessment. Animal-based measures related to 'good feeding' and 'good health' received the highest scores for validity and feasibility from the experts; there was less agreement on the animal-based measures that could reflect 'appropriate behaviour'. Where no animal-based measures were deemed to reliably reflect a welfare criterion or to be useful as an on-farm measure, additional measures of resources or management were suggested as alternatives. Future work in this area should focus on the reliability of the proposed measures and involve further evaluation of their validity and feasibility as they relate to different species of crocodilian and farming systems.
REVIEW | doi:10.20944/preprints201810.0175.v1
Subject: Chemistry, Analytical Chemistry Keywords: biosensors; enzyme-based systems; receptor-based systems; toxins; food analysis; environmental monitoring; nanotechnology
Online: 9 October 2018 (05:59:30 CEST)
The exploitation of lipid membranes in biosensors has made it possible to reconstitute a considerable part of their functionality to detect traces of food toxicants and environmental pollutants. Nanotechnology has enabled sensor miniaturization and extended the range of biological moieties that can be immobilized within a lipid bilayer device. This chapter reviews recent progress in biosensor technologies based on lipid membranes suitable for environmental applications and food quality monitoring. Numerous biosensing applications are presented, with emphasis on novel systems, new sensing techniques and nanotechnology-based transduction schemes. The range of analytes that can currently be detected includes insecticides, pesticides, herbicides, metals, toxins, antibiotics, microorganisms, hormones, dioxins, etc. Technology limitations and future prospects are discussed, focusing on the evaluation/validation and eventual commercialization of the proposed sensors.
REVIEW | doi:10.20944/preprints201808.0069.v1
Subject: Chemistry, Analytical Chemistry Keywords: biosensors, enzyme-based systems, receptor-based systems, toxins, food analysis, environmental monitoring, nanotechnology
Online: 3 August 2018 (14:20:04 CEST)
The exploitation of lipid membranes in biosensors has made it possible to reconstitute a considerable part of their functionality to detect traces of food toxicants and environmental pollutants. Nanotechnology has enabled sensor miniaturization and extended the range of biological moieties that can be immobilized within a lipid bilayer device. This chapter reviews recent progress in biosensor technologies based on lipid membranes suitable for environmental applications and food quality monitoring. Numerous biosensing applications are presented, with emphasis on novel systems, new sensing techniques and nanotechnology-based transduction schemes. The range of analytes that can currently be detected includes insecticides, pesticides, herbicides, metals, toxins, antibiotics, microorganisms, hormones, dioxins, etc. Technology limitations and future prospects are discussed, focusing on the evaluation/validation and eventual commercialization of the proposed sensors.
ARTICLE | doi:10.20944/preprints201807.0307.v1
Subject: Social Sciences, Marketing Keywords: sustainable outcomes; dedication-based mechanism; constraint-based mechanism; perceived switching costs; loyalty program
Online: 17 July 2018 (10:55:47 CEST)
Given the increase in consumers’ preferences for coffee, it is becoming important to understand their decision-making processes in the coffee chain context. To deepen the understanding of sustainable outcomes in this context, this study investigates the role of dedication- and constraint-based mechanisms in forming consumers’ repurchase and positive word-of-mouth (WOM) intentions, two critical sustainable outcomes. We examined the effects of coffee quality, the quality of the physical environment, and service quality in accelerating the formation of dedication-based factors. Moreover, this study offers an in-depth understanding of the enablers of perceived switching costs. Data collected from 238 university students who frequently visit coffee chains are empirically tested against the proposed theoretical framework by using structural equation modeling. The results confirm that both dedication- and constraint-based factors substantially predict consumers’ sustainable outcomes in the coffee chain context. Brand image and perceived switching costs play an important role in enhancing consumers’ repurchase and positive WOM intentions compared with customer satisfaction. Coffee quality is significantly associated with both customer satisfaction and brand image, whereas the quality of the physical environment and service quality are only significantly associated with brand image. Habit is found to be the key enabler of perceived switching costs, while loyalty programs have no significant impact on perceived switching costs.
ARTICLE | doi:10.20944/preprints201608.0069.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Rubber (Hevea brasiliensis) plantation; phenology; Xishuangbanna; Landsat; object-based approach; pixel-based approach
Online: 6 August 2016 (11:54:28 CEST)
Effectively mapping and monitoring rubber plantations remains challenging. Previous studies have explored the potential of phenology features for rubber plantation mapping through a pixel-based approach (pixel-based phenology approach). However, in the fragmented mountainous terrain of Xishuangbanna, a pixel-based approach can introduce noise and lower the accuracy of the resultant maps. In this study, we investigated the capability of an integrated approach combining phenology information with an object-based approach (object-based phenology approach) to map rubber plantations in Xishuangbanna. Moderate Resolution Imaging Spectroradiometer (MODIS) data were first used to acquire the temporal profiles and phenological features of rubber plantations and natural forests, which delineate the time windows of the defoliation and foliation phases. Landsat images were then used to apply a phenology-based algorithm, comparing three approaches for separating rubber plantations from natural forests: pixel-based phenology, object-based phenology, and extended object-based phenology. The results showed that the two object-based approaches achieved higher accuracy than the pixel-based approach, with overall accuracies of 96.4%, 97.4%, and 95.5%, respectively. This study demonstrated the reliability of phenology-based rubber mapping in fragmented landscapes with a distinct dry/cool season using Landsat images, and indicated that object-based phenology approaches can effectively improve the accuracy of the resultant maps in fragmented landscapes.
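The core phenology rule above, that rubber defoliates during the dry/cool season while natural forest stays evergreen, can be sketched as a toy NDVI-drop classifier. The threshold, window, and NDVI values below are illustrative assumptions, not the windows derived from MODIS in the paper.

```python
import numpy as np

def classify_rubber(ndvi_series, defoliation_window, drop_threshold=0.2):
    """Label a pixel/object 'rubber' if its NDVI drops markedly inside
    the defoliation window relative to its annual maximum; evergreen
    natural forest shows no such drop. Toy rule, illustrative values."""
    window = ndvi_series[defoliation_window]
    drop = ndvi_series.max() - window.min()
    return "rubber" if drop > drop_threshold else "natural forest"

# monthly NDVI: rubber dips in Jan-Mar; natural forest stays high
rubber = np.array([0.45, 0.40, 0.50, 0.70, 0.80, 0.85,
                   0.85, 0.85, 0.80, 0.80, 0.75, 0.60])
forest = np.array([0.80, 0.78, 0.80, 0.82, 0.85, 0.85,
                   0.86, 0.85, 0.84, 0.83, 0.82, 0.80])
win = slice(0, 3)  # hypothetical Jan-Mar defoliation window
print(classify_rubber(rubber, win), classify_rubber(forest, win))
# → rubber natural forest
```

The object-based variant in the paper applies the same kind of rule to segment means rather than single pixels, which is what suppresses the per-pixel noise in fragmented terrain.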
ARTICLE | doi:10.20944/preprints201907.0131.v1
Online: 9 July 2019 (14:15:17 CEST)
Saudi Arabia is an oil-reliant nation, as a large percentage of its GDP comes from oil resources. Oil dependency leaves a country at the mercy of the international crude market, and a decrease in the price of crude can seriously destabilize the economy of such nations. An example is the case of Venezuela, whose dependence on oil caused a national disaster (McCarthy, 2017): the nation's exports, GDP, and government revenue are primarily dependent on oil revenue, and the recent decrease in oil prices has reduced Venezuela's national revenue, resulting in economic collapse as well as inflation. A shift from a resource-based economy to a knowledge-based economy will help Saudi Arabia become less reliant on its oil revenues for its economic stability and growth (Nurunnabi, 2017).
ARTICLE | doi:10.20944/preprints202012.0021.v1
Subject: Mathematics & Computer Science, Analysis Keywords: Coronavirus disease; COVID-19; outbreak model; Gaussian-SIRD model; SIRD model; epidemiological model
Online: 1 December 2020 (13:11:02 CET)
The eruption of COVID-19 in 215 countries worldwide has created an urgent need for robust predictive methods that can estimate, as early as possible, the size and duration of the contagious disease and provide precise predictions. Much of the recent literature on COVID-19 misses one or more essential parts of such an investigation. One crucial element of any predictive method is that it should fit simultaneously as much data as possible; these data could be total infected cases, daily hospitalized cases, cumulative recovered cases, deceased cases, and so on. Other crucial elements include the sensitivity and precision of such methods with respect to the amount of data available as the contagious disease evolves day by day. To show the importance of these aspects, we have evaluated the standard SIRD model and a newly introduced Gaussian-SIRD model on the development of COVID-19 in Kuwait. It is observed that the SIRD model quickly picks up the main trends of COVID-19 development, while the Gaussian-SIRD model provides precise predictions over a longer period of time.
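For reference, the standard SIRD compartment model evaluated above can be sketched with a forward-Euler integration; the parameter values below are illustrative, not fitted to the Kuwait data.

```python
import numpy as np

def sird(beta, gamma, mu, S0, I0, days, dt=0.1):
    """Forward-Euler integration of the standard SIRD model:
      dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - (gamma + mu)*I,
      dR/dt = gamma*I,      dD/dt = mu*I.
    Returns daily samples of (S, I, R, D)."""
    N = S0 + I0
    S, I, R, D = S0, I0, 0.0, 0.0
    out = [(S, I, R, D)]
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            new_inf = beta * S * I / N * dt   # S -> I flow
            rec = gamma * I * dt              # I -> R flow
            dead = mu * I * dt                # I -> D flow
            S -= new_inf
            I += new_inf - rec - dead
            R += rec
            D += dead
        out.append((S, I, R, D))
    return np.array(out)

traj = sird(beta=0.3, gamma=0.1, mu=0.01, S0=4.2e6, I0=10, days=200)
# total population is conserved by construction at every daily sample
print(np.allclose(traj.sum(axis=1), 4.2e6 + 10))  # → True
```

Fitting such a model to multiple data streams at once (infected, recovered, deceased) is exactly the simultaneous-fit requirement the abstract emphasizes.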
ARTICLE | doi:10.20944/preprints202104.0028.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Software Engineering; Model; Model-Driven; Model Driven Development; MDD; MDA
Online: 1 April 2021 (14:47:55 CEST)
In Model-Driven Development (MDD), models, their generation, and the imposition of changes on them (model transformation) are used to develop software. Models provide a framework for moving from imagination and abstraction to the creation and accomplishment of the final system. Models create a slow and steady transition from whatness to howness, i.e., they follow the natural path of software generation. To support this path, the logic and functionality of the software must remain changeable during its evolution. Here we provide a brief introduction to the concept of Model-Driven Development.
Subject: Physical Sciences, Acoustics Keywords: Winterberg Model, Extended Gravity Model, Gravitational Susceptibility Model, Cosmic Vacuum Model, Dark Energy, Dark Matter
Online: 16 February 2021 (13:40:55 CET)
Assuming a Winterberg model for space, in which the vacuum consists of a very stiff two-component superfluid made up of positive and negative mass planckions, the theory hypothesizes that the Planck charge was created at the same time as the Planck mass. Moreover, the repulsive force that like-mass planckions experience is, in reality, due to the electrostatic force of repulsion between like charges. These forces also give rise to what appears to be a gravitational force of attraction between two like planckions, but this is an illusion: in this model, gravity is electrostatic in origin. We determine the spring constant associated with the planckion masses, expressed in terms of Apéry's constant and the relaxed (zero-field) number density of the positive and negative mass planckions. For the present epoch, we estimate this number density and the corresponding relaxed distance of separation between nearest-neighbor positive, or negative, planckion pairs. These values were determined using box quantization for the positive and negative mass planckions and by considering transitions between energy states, much as in the hydrogen atom. For the cosmos as a whole, given the net smeared macroscopic gravitational field due to all the ordinary, bound matter contained within the observable universe, an average displacement of the planckion masses from equilibrium within the vacuum is found; a corresponding displacement is found on the surface of the Earth. All of these displacements are due to increased gravitational pressure within the vacuum, which in turn is caused by applied gravitational fields. The gravitational potential is also derived and directly related to the gravitational pressure.
ARTICLE | doi:10.20944/preprints202112.0046.v1
Subject: Chemistry, Analytical Chemistry Keywords: Paperfluidics; Parafilm; Paper-based Analytical Devices
Online: 3 December 2021 (09:58:36 CET)
Paper-based analytical devices have been substantially developed in recent decades, and many fabrication techniques for them have been demonstrated and reported. Herein we report a relatively rapid, simple, and inexpensive method for fabricating paper-based analytical devices using parafilm hot pressing. We studied and optimized the effect of the key fabrication parameters, namely pressure, temperature, and pressing time. The optimal conditions were a pressure of 3.8 MPa (3 tons), a temperature of 80 °C, and 3 minutes of pressing time, with the smallest hydrophobic barrier size (821 µm) being governed by the laminate mask and by parafilm dispersal under pressure and heat. Physical and biochemical properties were evaluated to substantiate the paper's functionality for analytical devices. The wicking speed in the fabricated paper strips was slightly slower than that of non-processed paper, resulting from reduced paper pore size. A colorimetric immunological assay was performed to demonstrate the protein-binding capacity of the paper-based device after exposure to the pressure and heat of fabrication. Moreover, mixing in a two-dimensional paper-based device and flow in a three-dimensional counterpart were thoroughly investigated, demonstrating that paper devices from this fabrication process are potentially applicable as analytical devices for biomolecule detection. Fast, easy, and inexpensive parafilm hot-press fabrication presents an opportunity for researchers to develop paper-based analytical devices in resource-limited environments.
ARTICLE | doi:10.20944/preprints202109.0490.v1
Subject: Chemistry, Physical Chemistry Keywords: Hydroxyapatite; Ca-based catalyst; stability; polyglycerol.
Online: 29 September 2021 (11:26:01 CEST)
Calcium-based catalysts are of high interest for glycerol polymerization due to their high catalytic activity and wide availability. However, their poor stability under reaction conditions is an issue. In the present study, we investigated the stability and catalytic activity of Ca-hydroxyapatites (HAps), one of the most abundant Ca-sources in nature. Stoichiometric, Ca-deficient, and Ca-rich HAps were synthesized and tested as catalysts in the glycerol polymerization reaction. Deficient and stoichiometric HAps exhibited a remarkable 100% selectivity to triglycerol at 15% glycerol conversion at 245 °C after 8 h of reaction in the presence of 0.5 mol% of catalyst. Moreover, under the same reaction conditions, Ca-rich HAp showed a high selectivity (88%) to di- and triglycerol at a glycerol conversion of 27%. Most importantly, these catalysts were unexpectedly stable towards leaching under the reaction conditions, based on the ICP-OES results. However, based on the catalytic tests and the characterization analyses performed by XRD, XPS, IR, TGA-DSC and ICP-OES, we found that HAps can be deactivated by the reaction products themselves, i.e., water and polymers.
ARTICLE | doi:10.20944/preprints202108.0050.v1
Subject: Arts & Humanities, Anthropology & Ethnography Keywords: SDG; Gender Equality; project-based methodology
Online: 2 August 2021 (14:45:06 CEST)
A project-based module on Sustainable Development Goal number 5, Gender Equality, was implemented with 5 different groups of Business English students in higher education, comprising a total of 62 students. The main purpose of this project was to raise awareness of this goal by means of a flipped method in which students were required to carry out research on specific areas of the aforementioned goal and work in teams to prepare oral presentations. Once their findings were shared in class, students answered a written questionnaire of open-ended questions as part of a qualitative analysis. Results of this survey showed that not only did 90% of the students gain in-depth knowledge of this goal, but also 85% had built a positive attitude toward taking initiative and 80% were optimistic about future gender equality. Finally, 70% of students suggested further social action to curb the problem of gender discrimination. On the whole, the flipped-classroom method combined with project-based group work has proven to be an effective way to raise awareness of this goal, create a more positive attitude, and increase students' willingness to take action, as well as widen their English lexical resources.
ARTICLE | doi:10.20944/preprints201709.0139.v1
Online: 27 September 2017 (16:45:25 CEST)
Object-Based Image Analysis (OBIA) has been successfully used to map slums. In general, the occurrence of uncertainties in producing geographic data is inevitable; however, most studies concentrate solely on assessing classification accuracy and neglect these inherent uncertainties. Our research analyses the impact of uncertainties in measuring the accuracy of OBIA-based slum detection. We selected Jakarta as our case study area because of a national policy of slum eradication, which is causing rapid changes in slum areas. Our research comprises four parts: slum conceptualization, ruleset development, implementation, and accuracy and uncertainty measurements. Existential and extensional uncertainty arise when producing reference data. Comparing manual expert delineations of slums with the OBIA slum classification yields four combinations: True Positive, False Positive, True Negative and False Negative. However, the higher the True Positive (which leads to better accuracy), the lower the certainty of the results. This demonstrates the impact of extensional uncertainties. Our study also demonstrates the role of non-observable indicators (i.e., land tenure) in assisting slum detection, particularly in areas where uncertainties exist. In conclusion, uncertainty increases when aiming for higher classification accuracy by matching manual delineation and OBIA classification.
REVIEW | doi:10.20944/preprints201608.0173.v1
Online: 18 August 2016 (06:07:05 CEST)
ARTICLE | doi:10.20944/preprints201902.0210.v1
Online: 21 February 2019 (13:30:48 CET)
Given the urgency of the current situation, commercial banks need an effective credit risk management tool to limit risk. The authors propose a set of factors affecting the debt repayment ability of individual customers and conducted a survey, yielding a data set of 240 observations. SPSS software was used to clean the data and run a model based on the binary logistic regression published by Maddala in 1984, in order to determine the impact of each individual customer characteristic on the ability to repay such debts. The authors also specify the order of influence of each factor determining the repayment ability of individual customers, thereby helping bank managers gain a clearer view when making lending decisions and limiting risk.
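A minimal sketch of the binary logistic model underlying the analysis above, fitted by gradient descent on toy data; the single feature and its labels are illustrative stand-ins, not the survey's actual factors.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Binary logistic regression: models P(y=1|x) = sigmoid(w.x + b),
    the same family as the Maddala-style binary logit used to score
    repayment ability. Fitted by plain batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy data: one feature (e.g. income, hypothetical) separates repayers
rng = np.random.default_rng(1)
X = rng.normal(0, 1, size=(200, 1))
y = (X[:, 0] > 0).astype(float)
w, b = fit_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print((pred == y).mean() >= 0.95)  # → True
```

In the credit-scoring setting, the magnitude and sign of each fitted coefficient is what gives the ranking of factor influence that the abstract describes.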
ARTICLE | doi:10.20944/preprints201808.0246.v3
Subject: Earth Sciences, Oceanography Keywords: property-carrying particle model; coupled models; ecosystem simulation; biophysical modeling; Sandusky bay; great lakes
Online: 17 September 2018 (11:23:12 CEST)
Current numerical methods for simulating biophysical processes in aquatic environments are typically constructed in a grid-based Eulerian framework or as an individual-based model in a particle-based Lagrangian framework. Often, the biogeochemical processes and the physical (hydrodynamic) processes occur at different time and space scales, and changes in biological processes do not affect the hydrodynamic conditions. Therefore, it is possible to develop an alternative strategy to grid-based approaches for linking hydrodynamic and biogeochemical models that can significantly improve computational efficiency for this type of linked biophysical model. In this work, we utilize a new technique that links hydrodynamic effects and biological processes through a property-carrying particle model (PCPM) in a Lagrangian/Eulerian framework. The model is tested in idealized cases and its utility is demonstrated in a practical application to Sandusky Bay. Results show the integration of Lagrangian and Eulerian approaches allows for a natural coupling of mass transport (represented by particle movements and random walk) and biological processes in the water column, the latter described by a nutrient-phytoplankton-zooplankton-detritus (NPZD) biological model. This method is far more efficient than traditional tracer-based Eulerian biophysical models for 3-D simulation, particularly for a large domain and/or ensemble simulations.
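The Lagrangian transport component described above (particle movement plus random walk) can be sketched as follows; the sqrt(2*D*dt) step is the standard random-walk discretization of diffusion, and the parameter values are illustrative rather than the Sandusky Bay configuration.

```python
import numpy as np

def random_walk_transport(x0, velocity, D, dt, steps, rng):
    """1-D Lagrangian transport sketch: each property-carrying particle
    is advected by the flow velocity and diffused by a random-walk step
    of standard deviation sqrt(2*D*dt) per time step."""
    x = x0.copy()
    for _ in range(steps):
        x += velocity * dt + rng.normal(0, np.sqrt(2 * D * dt), size=x.shape)
    return x

rng = np.random.default_rng(3)
x = random_walk_transport(np.zeros(10000), velocity=0.0, D=1.0,
                          dt=0.1, steps=100, rng=rng)
# variance of a pure random walk grows as 2*D*t = 2 * 1.0 * 10 = 20
print(abs(x.var() - 20.0) < 1.0)  # → True
```

In a PCPM, each particle would additionally carry NPZD state variables updated by the biological model between transport steps, which is what decouples the biological and hydrodynamic time scales.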
ARTICLE | doi:10.20944/preprints201807.0501.v1
Online: 26 July 2018 (04:22:14 CEST)
The predictability of wind information at a given location is essential for the evaluation of a wind power project. Predicting wind speed accurately improves the planning of wind power generation, reducing costs and improving the use of resources. This paper seeks to predict the mean hourly wind speed at anemometric towers (at a height of 50 meters) in two locations: a coastal region and one with complex terrain characteristics. To this end, Holt-Winters (HW), Artificial Neural Network (ANN) and hybrid time-series models were used. Observational data were compared with the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis at the same height as the towers. The results show that the hybrid model performed better than the others, including in the comparison with MERRA-2; in terms of statistical residuals, for example, its RMSE and MAE were 0.91 and 0.62 m/s, respectively. As such, hybrid models are a good method for forecasting wind speed data for wind power generation.
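For reference, a minimal sketch of the Holt-Winters family used above, reduced to Holt's double exponential smoothing (level plus trend, no seasonal term); the smoothing constants are illustrative, not those tuned in the paper.

```python
import numpy as np

def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
    """Holt's double exponential smoothing: maintain a level l and a
    trend t estimate, then extrapolate h steps ahead as l + h*t."""
    l, t = y[0], y[1] - y[0]
    for x in y[1:]:
        l_prev = l
        l = alpha * x + (1 - alpha) * (l + t)   # update level
        t = beta * (l - l_prev) + (1 - beta) * t  # update trend
    return l + horizon * t

# on a purely linear series the one-step forecast is exact
series = np.arange(12, dtype=float)  # 0, 1, ..., 11
print(abs(holt_forecast(series, horizon=1) - 12.0) < 0.5)  # → True
```

The full Holt-Winters method adds a third smoothed seasonal component, which matters for hourly wind data with a diurnal cycle; hybrid schemes like the paper's then model the residuals of such a forecast with an ANN.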
ARTICLE | doi:10.20944/preprints201810.0076.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Business Process Modelling; Declarative Model; Imperative Model; Model Configuration; Constraint Programming
Online: 4 October 2018 (14:18:32 CEST)
Configuration techniques have been used in several fields, such as the design of business process models. Sometimes these models depend on data dependencies, making it easier to describe "what" has to be done instead of "how". Configuration models enable the use of a declarative representation of business processes, deciding the most appropriate workflow in each case. Unfortunately, the data dependencies among the activities, and how they can affect the correct execution of the process, have been overlooked in the declarative specifications and configurable systems found in the literature. In order to find the best process configuration for optimizing the execution time of processes according to data dependencies, we propose the use of the Constraint Programming paradigm, with the aim of obtaining an imperative model adapted to the data dependencies of the declaratively described activities.
ARTICLE | doi:10.20944/preprints202010.0550.v2
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: expectation maximization (EM) algorithm; finite mixture model; conditional mixture model; regression model; adaptive regressive model (ARM)
Online: 28 October 2020 (11:18:04 CET)
The expectation maximization (EM) algorithm is a powerful mathematical tool for estimating statistical parameters when a data sample contains a hidden part and an observed part. EM is applied to learn finite mixture models, in which the whole distribution of the observed variable is a weighted sum of partial distributions, with the coverage ratio of every partial distribution specified by the probability of the hidden variable. An application of the mixture model is soft clustering, in which a cluster is modeled by the hidden variable; each data point can be assigned to more than one cluster, and the degree of such assignment is represented by the probability of the hidden variable. However, this probability in the traditional mixture model is simplified as a parameter, which can cause loss of valuable information. Therefore, in this research I propose a so-called conditional mixture model (CMM), in which the probability of the hidden variable is modeled as a full probability density function (PDF) with its own parameters. CMM aims to extend the mixture model. I also propose an application of CMM called the adaptive regressive model (ARM). A traditional regression model is effective when the data sample is scattered evenly; if data points are grouped into clusters, a regression model tries to learn a unified regression function that goes through all data points. Obviously, such a unified function is not effective for evaluating the response variable on grouped data points. The term "adaptive" in ARM means that ARM solves this ineffectiveness by first selecting the best cluster of data points and then evaluating the response variable within that cluster. In other words, ARM reduces the estimation space of the regression model so as to gain high accuracy.
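A minimal sketch of EM for a two-component 1-D Gaussian mixture, the baseline that CMM extends: the E-step computes the posterior (responsibility) of the hidden component for each point, and the M-step re-estimates the weights, means, and variances. Initialization and data are illustrative.

```python
import numpy as np

def em_gmm(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture.
    Returns (mixing weight of component 0, means, variances)."""
    pi = 0.5
    m = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    v = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: unnormalized component densities, then responsibilities
        d = np.stack([np.exp(-(x - m[k])**2 / (2 * v[k])) / np.sqrt(v[k])
                      for k in (0, 1)])
        d[0] *= pi
        d[1] *= 1 - pi
        r = d / d.sum(axis=0)
        # M-step: re-estimate weight, means, variances from responsibilities
        nk = r.sum(axis=1)
        pi = nk[0] / len(x)
        m = (r * x).sum(axis=1) / nk
        v = (r * (x - m[:, None])**2).sum(axis=1) / nk
    return pi, m, v

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
_, means, _ = em_gmm(x)
print(np.allclose(sorted(means), [-3, 3], atol=0.3))  # → True
```

The mixing weight `pi` here is exactly the "probability of the hidden variable simplified as a parameter" that the abstract criticizes; CMM replaces it with a full parametric density.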
ARTICLE | doi:10.20944/preprints202206.0426.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: event-based vision; object detection and tracking; high-temporal resolution tracking; frame-based vision; hybrid approach
Online: 30 June 2022 (09:54:14 CEST)
Event-based vision is an emerging field of computer vision that offers unique properties such as asynchronous visual output, high temporal resolution, and dependence on brightness changes to generate data. These properties can enable robust high-temporal-resolution object detection and tracking when combined with frame-based vision. In this paper, we present a hybrid, high-temporal-resolution object detection and tracking approach that combines learned and classical methods using synchronized images and event data. Off-the-shelf frame-based object detectors are used for initial object detection and classification. Then, event masks, generated for each detection, are used to enable inter-frame tracking at varying temporal resolutions using the event data. Detections are associated across time using a simple low-cost association metric. Moreover, we collect and label a traffic dataset using the hybrid sensor DAVIS 240c. This dataset is utilized for quantitative evaluation using state-of-the-art detection and tracking metrics. We provide ground truth bounding boxes and object IDs for each vehicle annotation. Further, we generate high-temporal-resolution ground truth data to analyze the tracking performance at different temporal rates. Our approach shows promising results with minimal performance deterioration at higher temporal resolutions (48–384 Hz) when compared with the baseline frame-based performance at 24 Hz.
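The abstract does not specify its "simple low-cost association metric". A common choice in tracking-by-detection is greedy intersection-over-union (IoU) matching between track and detection boxes; the following sketch illustrates that standard idea, not the authors' exact metric:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily match each detection to the best-overlapping unused track."""
    matches, used = [], set()
    for di, det in enumerate(detections):
        best, best_iou = None, thresh
        for ti, trk in enumerate(tracks):
            if ti in used:
                continue
            o = iou(trk, det)
            if o > best_iou:
                best, best_iou = ti, o
        if best is not None:
            used.add(best)
            matches.append((best, di))  # (track index, detection index)
    return matches
```

Greedy IoU matching costs O(tracks × detections) per step, which keeps the per-update cost low enough for the high update rates (48–384 Hz) discussed in the paper.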
REVIEW | doi:10.20944/preprints202203.0032.v1
Subject: Chemistry, Medicinal Chemistry Keywords: artificial intelligence; machine learning; drug design; covid-19; structure-based drug design; ligand-based drug design
Online: 2 March 2022 (03:00:37 CET)
The recent COVID-19 crisis has taught academia and industry important lessons regarding digital reorganization. Among the most fascinating of these lessons is the huge potential of data analytics and artificial intelligence. The crisis exponentially accelerated the adoption of analytics and artificial intelligence, and this momentum is predicted to continue through the 2020s and beyond. Moreover, drug development is a costly and time-consuming business, and only a minority of approved drugs return revenue that exceeds the research and development costs. As a result, there is a huge drive to make drug discovery cheaper and faster. With modern algorithms and hardware, it is not surprising that artificial intelligence and other computational simulation tools can help drug developers. In only two years of COVID-19 research, many novel molecules have been designed or identified using artificial intelligence methods, with astonishing results in terms of time and effectiveness. This paper reviews the most significant research on artificial intelligence in de novo drug design for COVID-19 pharmaceutical research.
DATA DESCRIPTOR | doi:10.20944/preprints202104.0351.v1
Subject: Keywords: lecture based instruction; actual community-based instruction; maternal and child care; social competency skills; community awareness
Online: 13 April 2021 (12:47:52 CEST)
Maternal and child care is one of the foundations of primary health care, and nurses must be competent in the skills they have been taught. Community awareness is an important part of preventive healthcare, and nurses must be aware of the factors that impact the health of the community. This study examines the effectiveness of lecture-based instruction in maternal and child care and its implications for students' social competency skills and community awareness in nursing colleges in Nueva Ecija, Philippines. The researcher used a survey questionnaire and employed a descriptive design in which fifteen (15) nursing students and five (5) teachers were purposively selected. The findings revealed that the weighted mean for the effectiveness of lecture-based instruction in maternal and child care was 3.91, with a verbal description of "Effective"; the effect of lecture-based instruction in maternal and child care on students' social competency skills and community awareness had a weighted mean of 3.87, interpreted as "Very Satisfactory"; and the effectiveness of actual community-based instruction was "Very Effective", with a weighted mean of 4.25, higher than that of lecture-based instruction. The results also revealed that students and teachers were challenged by lecture-based instruction in maternal and child care during distance learning. Recommendations for enhancing lecture-based instruction in maternal and child care with respect to social competency skills and community awareness were also made.
REVIEW | doi:10.20944/preprints202104.0203.v1
Subject: Engineering, Automotive Engineering Keywords: Additive manufacturing; Fused Deposition Modelling; Robot-based additive manufacturing; Polylactic acid (PLA) and PLA-based composite.
Online: 7 April 2021 (12:24:16 CEST)
Over the last decade, a significant literature has emerged that advocates the potential of different additive manufacturing (AM) technologies and printable polymeric materials. Nevertheless, large-scale printing and complex geometric shapes, with curvatures and non-planar layer deposition, are a challenging process for traditional gantry-based machines. The 3-degrees-of-freedom Cartesian configuration restricts their capability to planar layered printing and limits part dimensions. To date, many researchers have used industrial robots to overcome this limitation. This review gives the reader an overview of the FDM technique, chosen for its scalability, cost efficiency, and wide range of printable materials. A strong emphasis is laid on PLA and PLA-based composites as promising materials for FDM process applications. The second part of this paper links the successful use of these materials in the traditional printing process to large-scale printing using the robot-based FDM process. This survey presents representative setups for robot-based AM and works that have used these setups for non-planar material deposition. Finally, we conclude by identifying opportunities for realizing new functional capabilities by exploiting robot-based AM, and we also present future trends in this area.
ARTICLE | doi:10.20944/preprints202002.0249.v1
Subject: Biology, Agricultural Sciences & Agronomy Keywords: Fungal diversity; Saccharomyces; genetic diversity; glyphosate-based herbicides; copper-based fungicides; RoundUp Ready™ corn; phylogenetics
Online: 17 February 2020 (15:37:11 CET)
Saccharomyces cerevisiae is a phenotypically diverse species that adapts to a wide variety of environments by exploiting standing genetic diversity and selecting for advantageous mutations. Glyphosate- and copper-based herbicides/fungicides affect non-target organisms, and these incidental exposures can impact microbial populations. In this study, glyphosate resistance was found in a historical collection of yeast assembled over the last century, but only in yeast isolated after the introduction of glyphosate. The most glyphosate-resistant yeasts were isolated from agricultural sites; however, herbicide application at these sites was not recorded. In an effort to assess glyphosate resistance and its impact on non-target microorganisms, yeast were harvested from 15 areas with known herbicidal histories, including an organic farm, a conventional farm, a remediated coal mine, suburban locations, a state park, and a national forest. Yeast representing 23 genera were isolated from 237 samples of plant, soil, spontaneous fermentation, nut, flower, fruit, feces, and tree material. Saccharomyces, Candida, Metschnikowia, Kluyveromyces, Hanseniaspora, and Pichia were among the genera commonly found across the sampled environments. Managed areas had lower species diversity, and at the brewery only Saccharomyces and Pichia were isolated. A conventional farm growing RoundUp Ready™ corn had the lowest phylogenetic diversity and the highest glyphosate resistance. The mine was sprayed with multiple herbicides including a commercial formulation of glyphosate; however, its yeast did not have elevated glyphosate resistance. In contrast to the conventional farm, the mine was exposed to glyphosate only one year prior to sample isolation. Glyphosate resistance is an example of the anthropogenic selection of non-target organisms.
REVIEW | doi:10.20944/preprints201812.0129.v1
Subject: Life Sciences, Biochemistry Keywords: food safety; gel-based proteomics; LC-based proteomics; post-translational modifications; proteomics; seed ageing; seed quality
Online: 11 December 2018 (11:00:26 CET)
For centuries, crop plants have represented the basis of the daily human diet. Among them, cereals and legumes, accumulating oils, proteins and carbohydrates in their seeds, distinctly dominate modern agronomic practice. Indeed, these plants play an essential role in the food industry and fuel production. Therefore, the seeds of crop plants are intensively studied by food chemists, biologists, biochemists, and nutritional physiologists. Accordingly, not only seed development and germination, but also age- and stress-related alterations in seed vigor, longevity, nutritional value and safety can be addressed by a broad panel of analytical, biochemical and physiological methods. Currently, functional genomics is one of the most powerful tools, giving direct access to characteristic metabolic changes, accompanying plant development, senescence and response to biotic or environmental stress. Among individual methodological platforms, proteomics represents one of the most effective ones, giving access to cellular metabolism at the level of proteins. Here we discuss the main methodological approaches employed by seed proteomics in the context of physiological changes related to seed development, ageing and response to environmental stress.
Subject: Keywords: COVID-19; coronavirus; SARS-CoV2; model; transmission model; mathematical model; lockdown; quarantine
Online: 16 May 2020 (18:46:59 CEST)
Objective: To use mathematical models to predict the epidemiological impact of lifting the lockdown in London, UK, and of alternative strategies, to help inform policy in the UK. Methods: We developed a mathematical model for the transmission of SARS-CoV-2 in London. The model was parametrised using data on notified cases, deaths, contacts, and mobility to analyse the epidemic in the UK capital. We investigated the impact of multiple non-pharmaceutical interventions (NPIs), and combinations of these measures, on the future incidence of COVID-19. Results: Immediate action at the early stages of the epidemic in the affected districts would have curbed spread. While an extended lockdown is highly effective, other measures such as shielding older populations, universal testing, and facemasks can all potentially contribute to a reduction in infections and deaths. However, based on current evidence, it seems unlikely they will be as effective as continued lockdown. In order to achieve elimination and lift lockdown within 5 months, the best strategy appears to be a combination of weekly universal testing, contact tracing, and use of facemasks, with concurrent lockdown; this approach could potentially reduce deaths by 48% compared with continued lockdown alone. Conclusions: A combination of NPIs such as universal testing, contact tracing, and mask use while under lockdown would be associated with the fewest deaths and infections. This approach would require high uptake and sustained local effort, but it is potentially feasible, as it may lead to elimination on a relatively short time scale.
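The paper's model is parametrised with London case, death, contact, and mobility data; the following is only a minimal SEIR-type sketch, with purely illustrative parameters, of the compartmental structure such transmission models share and of how an NPI such as lockdown can be represented as a reduced effective transmission rate:

```python
def seir(beta, sigma=0.2, gamma=0.1, days=300, n=9_000_000, i0=100):
    """Discrete-day Euler integration of an SEIR model (illustrative sketch).

    beta: transmission rate; sigma: 1/latent period; gamma: 1/infectious period.
    Returns the final attack rate (fraction ever infected) and the peak
    number simultaneously infectious.
    """
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    peak = 0.0
    for _ in range(days):
        ds = -beta * s * i / n          # new exposures leave S
        de = -ds - sigma * e            # exposed progress to infectious
        di = sigma * e - gamma * i      # infectious recover
        dr = gamma * i
        s, e, i, r = s + ds, e + de, i + di, r + dr
        peak = max(peak, i)
    return r / n, peak

# Lockdown modelled as a lower effective beta (hypothetical values).
open_ar, _ = seir(beta=0.3)    # R0 = beta/gamma = 3: large epidemic
lock_ar, _ = seir(beta=0.08)   # R0 = 0.8 < 1: epidemic dies out
```

With `beta/gamma < 1` the outbreak cannot sustain itself, which is the qualitative mechanism behind the lockdown scenarios compared in the paper.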
REVIEW | doi:10.20944/preprints202209.0201.v1
Subject: Chemistry, Medicinal Chemistry Keywords: ligand-based pharmacophores; structure-based pharmacophores; virtual screening; drug design; machine learning; molecular dynamics; de novo design
Online: 14 September 2022 (09:10:58 CEST)
G protein-coupled receptors (GPCRs) are amongst the most pharmaceutically relevant and well-studied protein targets, yet unanswered questions in the field leave significant gaps in our understanding of their nuanced structure and function. 3D pharmacophore models are powerful computational tools for in silico drug discovery, presenting myriad opportunities for the integration of GPCR structural biology and cheminformatics. This review highlights success stories in the application of 3D pharmacophore modeling to de novo drug design, the discovery of biased and allosteric ligands, scaffold hopping, QSAR analysis, hit-to-lead optimization, GPCR de-orphanization, the mechanistic understanding of GPCR pharmacology, and the elucidation of ligand-receptor interactions. Furthermore, advances in the incorporation of dynamics and machine learning are highlighted. The review also analyzes challenges in the field of GPCR drug discovery, detailing how 3D pharmacophore modeling can be used to address them. Finally, we present opportunities afforded by 3D pharmacophore modeling for advancing our understanding and targeting of GPCRs.
ARTICLE | doi:10.20944/preprints202111.0569.v1
Subject: Earth Sciences, Environmental Sciences Keywords: ecosystem dynamics; discrete-event model; qualitative modelling; boolean model; state-and-transition model
Online: 30 November 2021 (12:39:11 CET)
Sub-Saharan social-ecological systems are undergoing changes in environmental conditions, including modifications in rainfall patterns and biodiversity loss. The consequences of such changes depend on complex causal chains, which call for integrated management strategies whose efficiency could benefit from ecosystem dynamics modelling. However, ecosystem models often require a great deal of quantitative information for estimating parameters, and this information is often unavailable. Alternatively, qualitative modelling frameworks have proved useful for explaining ecosystem response to perturbations while requiring less information and providing more general predictions. However, current qualitative methods have shortcomings that may limit their utility for specific issues. In this paper, we propose the Ecological Discrete-Event Network (EDEN), an innovative qualitative dynamic modelling framework based on "if-then" rules which generates many alternative event sequences (trajectories). Based on expert knowledge, observations, and literature, we use this framework to assess the effect of permanent changes in surface water and herbivore diversity on vegetation and socio-economic transitions in an East African savanna. Results show that water availability drives changes in vegetation and socio-economic transitions, while herbivore functional groups have highly contrasting effects depending on the group. This first use of EDEN in a savanna context is promising for bridging expert knowledge and ecosystem modelling.
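The EDEN formalism itself is not given in the abstract; the following sketch only illustrates the general idea of an "if-then" rule system over qualitative states that branches into alternative trajectories. The rules and facts are hypothetical toy examples, not the paper's savanna model:

```python
# States are sets of qualitative facts; each rule is (condition, add, remove):
# if `condition` holds, one possible next event adds `add` and removes `remove`.
RULES = [
    ({"water"}, {"grass"}, set()),              # water enables grass growth
    ({"grass", "grazers"}, set(), {"grass"}),   # grazing removes grass
]

def successors(state, rules):
    """Each applicable rule yields one alternative next state (a branch)."""
    return [(state - rem) | add for cond, add, rem in rules if cond <= state]

def trajectories(state, rules, depth):
    """Enumerate all event sequences (trajectories) up to a given depth."""
    if depth == 0:
        return [[state]]
    paths = []
    for nxt in successors(state, rules) or [state]:  # no rule => state persists
        for tail in trajectories(nxt, rules, depth - 1):
            paths.append([state] + tail)
    return paths
```

Because every applicable rule spawns its own branch, the model produces the "many alternative event sequences" the abstract describes rather than a single deterministic run.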
ARTICLE | doi:10.20944/preprints202203.0239.v1
Subject: Engineering, Civil Engineering Keywords: ATO; Performance Evaluation; Scenario-based Testing; Simulation
Online: 17 March 2022 (02:42:05 CET)
There is increasing interest in automating the train operations of mainline services, e.g. to increase network capacity. Automatic train operation (ATO) has already been achieved in several pilot projects but has not been implemented on a large scale. Before the general introduction of new or adapted technologies can have a transformative effect on the operation of a system as complex as mainline train operation, they have to pass functional, interoperability, and performance tests. A virtual preliminary analysis is one way to ensure a smooth and safe introduction and implementation. This paper presents an approach for the performance testing of ATO systems. To this end, methods and test standards for technologies enabling automatic operation in other transport sectors are reviewed. The main findings have been adapted, transformed, and combined into a general strategy for virtual performance testing in the railway sector. Specifically, universal performance indicators, namely punctuality, accuracy, energy consumption, safety, and comfort, are presented. A layer model for scenario description is adapted from the automotive sector, as are definitions of different scenario types. Lastly, factors that can influence the performance of an ATO algorithm are identified. To demonstrate the developed approach, a straightforward case study is conducted using a microscopic train simulator in combination with an ATO algorithm.
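The paper's performance indicators (punctuality, accuracy, etc.) are named but not formalised in the abstract. As a hedged illustration of how two of them might be computed from simulation output, with hypothetical definitions and thresholds:

```python
def punctuality(scheduled, actual, tolerance=60.0):
    """Share of stops reached within `tolerance` seconds of schedule
    (illustrative definition; times in seconds)."""
    on_time = sum(1 for s, a in zip(scheduled, actual) if abs(a - s) <= tolerance)
    return on_time / len(scheduled)

def stopping_accuracy(target_pos, stop_pos):
    """Absolute stopping-point error in metres (illustrative definition)."""
    return abs(stop_pos - target_pos)

# Example: three scheduled stop times vs. simulated arrivals (seconds).
p = punctuality([0.0, 600.0, 1200.0], [30.0, 700.0, 1210.0])
```

In a scenario-based test campaign, such indicators would be evaluated per simulated scenario and aggregated across the scenario catalogue.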
ARTICLE | doi:10.20944/preprints202107.0698.v1
Online: 30 July 2021 (11:43:12 CEST)
Background: In an age where information is generally accessible, much recent interest has focused on how accessible and convenient technology can be. Small and personal, mobile devices can transform our perception of learning by combining mobility and convenience. Mobile learning is part of the digital learning landscape alongside e-learning and serious games. However, knowledge about the effective design of mobile learning experiences remains of interest, with a focus on appropriate design models and the forms they can take to achieve the intended educational outcomes. Exploring the instructor's perspective on mobile learning is essential. Therefore, the aim of this study was to investigate Moroccan instructors' perception and practice of mobile learning to inform the development of an ecologically valid mobile learning integration model. Methods: Higher education instructors (n=41) were recruited to the study. The Moroccan instructors' perceptions and experiences regarding their adoption of mobile learning were collected using an online survey. The analysis focused on their mobile use, perceived IT competency, and opinions on mobile learning. Results: We described most of the instructors' considerations regarding integrating mobile technologies into their teaching activities. We found that most of the mobile learning activities described by the respondents corresponded to relatively advanced uses of mobile devices. More promisingly, instructors have found innovative ways to exploit the educational potential of mobile devices. However, the adoption of mobile devices still faced challenges: poor or absent Wi-Fi connectivity, a limited number of devices or restricted access, and occasional fees or application incompatibility were identified as obstacles to mobile learning usage. Conclusion: Mobile learning is mostly perceived positively among Moroccan instructors, enabling many applications and uses to enhance teaching and learning.
This study provides a better understanding of the aspects and factors influencing the integration of mobile learning in the Moroccan educational context, helping to further the development of an ecologically valid mobile learning integration model. Future work on mobile learning should consider the fast-paced evolution of mobile technologies, emphasizing the flexibility of integration frameworks to support instructors and learners.
Subject: Keywords: gender-based violence, coping, abuse, survival, resilient
Online: 2 July 2021 (14:00:57 CEST)
Gender-based violence is considered a serious social and public health problem. Overcoming this situation implies a process that results in favorable biopsychosocial rehabilitation, the resilience of women. The objective of this study was to analyze the tools, resources, and personal and psychosocial mechanisms used by women survivors of gender-based violence. The design was an interpretative phenomenology, carried out with 22 women who had overcome gender-based violence. Data were collected through personal interviews and narratives. The results were grouped into four themes: "Process of violence", "Social resources for coping and overcoming GBV", "Personal tools for coping and overcoming GBV", and "Feelings identified, from the abuse stage to the survival stage". Several studies have concluded that overcoming abuse is influenced by a woman's social network, and the actions of these people can determine her survival of gender violence. Despite the recognized usefulness of these available resources, it would be desirable to strengthen them in order to drive more women toward survival, reinforcing coping and overcoming, without forgetting the importance of other support mechanisms such as family and group therapies.
ARTICLE | doi:10.20944/preprints202012.0437.v1
Subject: Medicine & Pharmacology, Allergology Keywords: malnutrition; translation; physiologically based pharmacokinetics; PBPK; pediatrics
Online: 17 December 2020 (16:03:40 CET)
Malnutrition in children is a global health problem, particularly in developing countries. The effects of an insufficient supply of nutrients on body composition and physiological functions may have implications for drug disposition and ultimately affect the clinical outcome in this vulnerable population. Physiologically based pharmacokinetic (PBPK) modeling can be used to predict the effect of malnutrition, as it links physiological changes to pharmacokinetic (PK) consequences. However, the absence of detailed information on body composition and the limited availability of controlled clinical trials in malnourished children complicate the establishment and evaluation of a generic PBPK model in this population. In this manuscript we describe the creation of a physiologically based bridge to a malnourished pediatric population by combining information on a) the differences in body composition between healthy and malnourished adults and b) the differences in physiology between healthy adults and children. Model performance was confirmed using clinical reference data. This study presents a physiologically based translational framework for predicting drug disposition in malnourished children. The model is readily applicable to dose recommendation strategies addressing the urgent medicinal needs of this vulnerable population.
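The paper's PBPK model is far richer than any single equation, but one standard building block of physiologically based scaling, which it is reasonable to illustrate here, is allometric weight scaling of clearance. This sketch shows only that classic textbook relationship, not the authors' model:

```python
def scaled_clearance(cl_adult, weight_kg, adult_weight_kg=70.0, exponent=0.75):
    """Classic allometric scaling of clearance by body weight:
    CL_child = CL_adult * (W_child / W_adult) ** 0.75."""
    return cl_adult * (weight_kg / adult_weight_kg) ** exponent

# Hypothetical example: an adult clearance of 10 L/h scaled to an 8.75 kg child.
cl_child = scaled_clearance(10.0, 8.75)
```

A full PBPK approach goes beyond such lumped scaling by representing individual organ volumes, blood flows, and composition, which is precisely what allows it to capture malnutrition-related changes.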
ARTICLE | doi:10.20944/preprints202007.0326.v1
Subject: Engineering, Control & Systems Engineering Keywords: mobile robot; vision-based navigation; cascade classifiers
Online: 15 July 2020 (09:16:44 CEST)
This work presents the development and implementation of a distributed navigation system based on computer vision. The autonomous system consists of a wheeled mobile robot with an integrated colour camera. The robot navigates through a laboratory scenario in which the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that processes them and calculates the corresponding speeds of the robot using a cascade of trained classifiers. These speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. The classifier cascade must be trained before experimentation with two sets of positive and negative images. The number of images in these sets must be chosen carefully, both to limit the duration of the training stage and to avoid overtraining the system.
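The decision structure of a trained classifier cascade (Viola-Jones style) can be sketched in a few lines: cheap stages run first and reject most negative windows early, so only promising windows reach the expensive later stages. The toy features and thresholds below are hypothetical, not the paper's trained cascade:

```python
def cascade_predict(window, stages):
    """Evaluate a cascade: each stage is (weak_learners, stage_threshold),
    where weak_learners is a list of (feature_fn, weight) pairs."""
    for weak_learners, stage_thresh in stages:
        score = sum(w for f, w in weak_learners if f(window))
        if score < stage_thresh:
            return False  # rejected early: cheap negatives exit here
    return True  # passed every stage: positive detection

# Hypothetical toy features over a pixel window (a list of intensities).
bright = lambda win: sum(win) / len(win) > 128    # mean intensity test
contrast = lambda win: max(win) - min(win) > 50   # intensity range test
stages = [
    ([(bright, 1.0)], 1.0),                    # stage 1: very cheap filter
    ([(bright, 0.5), (contrast, 0.5)], 1.0),   # stage 2: stricter check
]
```

In practice each stage is trained (e.g. by boosting) so that it keeps almost all positives while discarding a large fraction of negatives, which is what makes the cascade fast enough for the real-time client-server loop described in the abstract.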