Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Model-Based Systems Engineering; Category Theory; Object-Process Methodology; Model Analytics; Concept-Model-Graph-View-Concept; Graph Data Structures; Graph Query; Decision Support Matrix; Matrix-Based Analysis
Online: 18 February 2021 (12:27:50 CET)
We introduce the Concept-Model-Graph-View Cycle (CMGVC). The CMGVC facilitates coherent architecture analysis, reasoning, insight, and decision-making based on conceptual models that are transformed into a generic, robust graph data structure (GDS). The GDS is then transformed into multiple views of the model, which inform stakeholders in various ways. This GDS-based approach decouples the view from the model and constitutes a powerful enhancement of model-based systems engineering (MBSE). The CMGVC applies the rigorous foundations of Category Theory, a mathematical framework of representations and transformations. We show that modeling languages are categories, drawing an analogy to programming languages. The CMGVC architecture is superior to direct transformations and language-coupled common representations. We demonstrate the CMGVC by transforming a conceptual system architecture model, built with the Object-Process Modeling Language (OPM), into dual graphs and a stakeholder-informing matrix that stimulates system architecture insight.
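As a sketch of the pipeline this abstract describes, the fragment below flattens a toy conceptual model into a generic graph data structure and derives one matrix view from it. The model triples, node names, and the adjacency-matrix view are invented for illustration; this is not the authors' implementation.

```python
# Illustrative sketch only (not the CMGVC implementation): a conceptual model
# is flattened into a generic graph data structure (GDS), from which a
# matrix view is derived independently of the modeling language.

# Hypothetical OPM-like model fragment as (source, relation, target) triples.
model = [
    ("Controller", "handles", "Sensing"),
    ("Sensing", "yields", "Measurement"),
    ("Controller", "handles", "Actuation"),
]

# Generic graph data structure: node set plus edge list.
nodes = sorted({n for s, _, t in model for n in (s, t)})
edges = [(s, t) for s, _, t in model]

# One possible stakeholder view: a binary adjacency (DSM-like) matrix.
index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for s, t in edges:
    matrix[index[s]][index[t]] = 1

def reachable(a, b):
    """Transitive reachability over the GDS, usable for architecture queries."""
    seen, stack = set(), [a]
    while stack:
        cur = stack.pop()
        if cur == b:
            return True
        if cur not in seen:
            seen.add(cur)
            stack.extend(t for s, t in edges if s == cur)
    return False
```

Because the views (matrix, reachability query) read only the GDS, swapping the modeling language changes only the model-to-GDS step, which is the decoupling the abstract emphasizes.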
ARTICLE | doi:10.20944/preprints202208.0177.v1
Subject: Engineering, Other Keywords: model-based system engineering (MBSE); model-based systems architecting (MBSA); model-based pattern language (MBPL); system architecture; logical architecture; SysML patterns; pattern library; systems engineering (SE); pattern language; logical decomposition
Online: 9 August 2022 (09:26:54 CEST)
This paper presents an approach that applies Model-Based Systems Engineering (MBSE) and Model-Based Systems Architecting (MBSA) principles to develop a Model-Based Pattern Language (MBPL). Developing a new system from scratch takes systems engineers and architects too long, particularly for new space-based systems derived from existing space system architectures. A pattern language offers a holistic view of reusable logical model artifacts; existing pattern collections are mostly interdisciplinary and introductory, if available at all. The resulting architecture is largely a combination of application-specific logical solutions, which together yield the best possible overall solution. The main benefit of the pattern language is reducing the time and validation effort required to generate a new space-based system architecture; the approach also supports developing top-level requirements in the initial phase of system development. The methodology proposed in this paper is as follows: collect and decompose published literature and other open-source information on space system architectures and system models; develop SysML models for system, subsystem, product, assembly, and subassembly levels and for mission-specific requirements using the CAMEO SysML software; and arrange these patterns into a functional ontology to construct a logical architecture pattern library. The resulting SysML pattern language was created, updated, and managed, and its ability to expedite new model construction was evaluated. Our objective is to develop a logical pattern language using public-domain information and to evaluate the patterns by constructing a new space mission concept, for example a planetary surface habitat.
ARTICLE | doi:10.20944/preprints201904.0326.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: complex systems modeling; systems architecture; system’s model complexity; visualization; agent-based systems; system’s model evolution
Online: 30 April 2019 (11:15:20 CEST)
This work presents some characteristics of MoNet, a computerized platform for modeling and visualizing complex systems. Emphasis is on the ideas that enabled the successful progressive development of this platform, which proceeded alongside the implementation of applications modeling several studied systems. The platform can represent different aspects of systems modeled at different observation scales. This tool favors the perception of the emergence of information associated with changes of scale. Some criteria used in the construction of the platform are included. The power of current computers has made it practical to use graphic resources such as shapes, line thickness, overlaid text tags, colors, and transparencies in the graphical modeling of systems made up of many elements. By visualizing diagrams designed to highlight contrasts, such modeling platforms allow the recognition of patterns that drive our understanding of systems and their structure. Graphs illustrating the tool's benefits for visualizing systems at different observation scales are presented to illustrate the application of the platform.
ARTICLE | doi:10.20944/preprints201612.0077.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: rule based models; gene expression data; bayesian networks; parsimony
Online: 15 December 2016 (08:21:24 CET)
The comprehensibility of good predictive models learned from high-dimensional gene expression data is attractive because it can lead to biomarker discovery. Several good classifiers provide comparable predictive performance but differ in their abilities to summarize the observed data. We extend a Bayesian Rule Learning (BRL-GSS) algorithm, previously shown to be a significantly better predictor than other classical approaches in this domain. It searches a space of Bayesian networks, using a decision-tree representation of its parameters with global constraints, and infers a set of IF-THEN rules. The number of parameters, and therefore the number of rules, is combinatorial in the number of predictor variables in the model. We relax these global constraints to a more generalizable local structure (BRL-LSS). BRL-LSS entails a more parsimonious set of rules because it does not have to generate all combinatorial rules. The search space of local structures is much richer than the space of global structures. We design BRL-LSS with the same worst-case time complexity as BRL-GSS while exploring this richer and more complex model space. We measure predictive performance using the area under the ROC curve (AUC) and accuracy. We measure model parsimony as the average number of rules and variables needed to describe the observed data. We evaluate the predictive and parsimony performance of BRL-GSS, BRL-LSS, and the state-of-the-art C4.5 decision-tree algorithm using 10-fold cross-validation on ten microarray gene-expression diagnostic datasets. In these experiments, we observe that BRL-LSS is similar to BRL-GSS in predictive performance while generating a much more parsimonious set of rules to explain the same observed data. BRL-LSS also needs fewer variables than C4.5 to explain the data with similar predictive performance.
We also conduct a feasibility study to demonstrate the general applicability of our BRL methods on the newer RNA sequencing gene-expression data.
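To make the IF-THEN representation concrete, here is a minimal sketch of reading rules off a decision-tree parameterization. The tree, variable names, and class labels are invented; the point of the "low" branch is that a local structure need not split on every variable, which is where the parsimony comes from.

```python
# Toy sketch (not the BRL code): reading IF-THEN rules off a decision-tree
# representation of a model's parameters.

def extract_rules(tree, conditions=()):
    """Depth-first walk: each leaf of the tree yields one IF-THEN rule."""
    if isinstance(tree, str):                  # leaf: predicted class label
        return ["IF " + (" AND ".join(conditions) or "TRUE")
                + f" THEN class={tree}"]
    var, branches = tree                       # node: (variable, {value: subtree})
    rules = []
    for value, subtree in branches.items():
        rules += extract_rules(subtree, conditions + (f"{var}={value}",))
    return rules

# Hypothetical local structure over two gene-expression variables; the "low"
# branch does not split on GENE_B, so 3 rules suffice instead of 4.
toy_tree = ("GENE_A", {
    "high": ("GENE_B", {"high": "tumor", "low": "normal"}),
    "low": "normal",
})
rules = extract_rules(toy_tree)
```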
Subject: Engineering, Control & Systems Engineering Keywords: Model-based systems engineering (MBSE); Model informatics and analytics; Model-based collaboration
Online: 12 March 2021 (16:52:34 CET)
In MBSE there is as yet no converged terminology. The term 'system model' is used in different contexts in the literature. In this study we analyzed the definitions and usages of the term 'system model' to find a common definition. 104 publications were analyzed in depth for their usage and definition of the term, as well as for their metadata, e.g., publication year and publication background, to find common patterns. While the term has gained interest in recent years, it is used in a broad range of contexts for both analytical and synthetic use cases. On this basis, three categories of system models were defined and integrated into a more precise definition.
ARTICLE | doi:10.20944/preprints201810.0572.v1
Subject: Engineering, Control & Systems Engineering Keywords: wind turbine system; hydroelectric plant simulator; model-based control; data-driven approach; self-tuning control; robustness and reliability
Online: 24 October 2018 (11:26:20 CEST)
Interest in the use of renewable energy resources is increasing, especially wind and hydro power, which should be efficiently converted into electric energy via suitable technology. To this aim, self-tuning control techniques represent viable strategies, given that these nonlinear dynamic processes work over a wide range of operating conditions and are driven by stochastic inputs, excitations, and disturbances. Some of the considered methods were already verified on wind turbine systems, and important advantages may thus derive from the appropriate implementation of the same control schemes in hydroelectric plants. This is the key point of the work, which provides guidelines on the design and application of these control strategies to these energy conversion systems. In fact, investigations of wind and hydraulic energy seem to share few common aspects, leading to little exchange of possible common points; this is particularly true of the more established wind area compared to hydroelectric systems. The work therefore recalls the models of the wind turbine and the hydroelectric system and investigates the application of different control solutions. The scope is to analyse common points in the control objectives and the results achievable from the different solutions. Another important point of this investigation is the analysis of the exploited benchmark models, their control objectives, and the development of the control solutions. The working conditions of these energy conversion systems are also taken into account in order to highlight the reliability and robustness of the developed control strategies, which are especially important for the remote and relatively inaccessible locations of many installations.
ARTICLE | doi:10.20944/preprints201608.0155.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: component-based software development; dependability attributes; availability; reliability; integrity; confidentiality; safety; maintainability
Online: 15 August 2016 (12:21:45 CEST)
The software industry has adopted component-based software development (CBSD) to rapidly build and deploy large and complex software systems with significant savings and minimal engineering effort, cost, and time. However, CBSD encounters security and trust issues, mainly with respect to dependability attributes. A system is considered dependable when it can produce the outputs for which it was designed with no adverse effect on its intended environment. Dependability comprises several attributes: availability, confidentiality, integrity, reliability, safety, and maintainability. These attributes must be embedded in a CBSD model to develop dependable component software. Motivated by their importance, this paper pursues two objectives: to design a model for developing a dependable system that mitigates the vulnerabilities of software components, and to evaluate the proposed model. The model proposed in this study is labelled developing dependable component-based software (2DCBS). To develop this model, the CBSD architectural phases and processes are framed and the six dependability attributes embedded according to best-practice methods. An expert-opinion approach was applied to evaluate the 2DCBS framing. In addition, the 2DCBS model was applied to the development of an information and communication technology (ICT) portal through an empirical study. Vulnerability assessment tools (VATs) were employed to verify the dependability attributes of the developed ICT portal. Results show that the 2DCBS model can be adopted to develop web application systems and to mitigate the vulnerabilities of the developed systems. This study contributes to CBSD and facilitates the specification and evaluation of dependability attributes throughout model development. Furthermore, the reliability of the dependable model can increase industry confidence in the use of CBSD.
ARTICLE | doi:10.20944/preprints201708.0034.v1
Subject: Engineering, Control & Systems Engineering Keywords: wind turbines; hydroelectric systems; nonlinear modelling; model-based control; data-driven approach; advanced control; robustness and reliability
Online: 9 August 2017 (04:42:58 CEST)
Increasingly, there is a focus on utilising renewable energy resources in a bid to fulfil increasing energy requirements and mitigate the climate-change impacts of fossil fuels. While most renewable resources are free, the technology used to usefully convert them is not, and there is an increasing focus on improving the economy and efficiency of conversion. To this end, advanced control technology can have a significant impact and is already relatively mature for wind turbines. Though hydroelectric plants can use simple regulation systems, significant benefits have been shown to accrue from the appropriate use of the control methods designed for wind turbine plants. This is the key point of the paper. In fact, to date, the application communities connected with wind and hydraulic energy have had little communication, resulting in little cross-fertilisation of control ideas and experience, particularly from the more mature wind area to hydrodynamic systems. Therefore, this paper examines the models and the application of control technology across both domains, from both comparative and contrasting points of view, with the aim of identifying commonalities in models and control objectives, as well as potential solutions. Key comparative reference points include the articulation of the exploited models, the specification of control objectives, the development of high-fidelity simulators, and the development of solution concepts. Not least among the realistic system requirements are the physical constraints under which such renewable energy systems must operate and the need for reliable and robust control solutions that respect the often remote and relatively inaccessible locations of many onshore and offshore deployments.
ARTICLE | doi:10.20944/preprints201901.0267.v1
Subject: Engineering, Control & Systems Engineering Keywords: wind turbine system; hydroelectric plant simulator; model-based control; data-driven approach; self-tuning control; robustness and reliability
Online: 26 January 2019 (10:08:46 CET)
Interest in the use of renewable energy resources is increasing, especially wind and hydro power, which should be efficiently converted into electric energy via suitable technology. To this aim, data-driven control techniques represent viable strategies, given that these nonlinear dynamic processes work over a wide range of operating conditions and are driven by stochastic inputs, excitations, and disturbances. Some of the considered methods, such as fuzzy and adaptive self-tuning controllers, were already verified on wind turbine systems, and similar advantages may thus derive from their appropriate implementation and application to hydroelectric plants. These issues represent the key features of the work, which provides guidelines on the design and application of these control strategies to these energy conversion systems. The working conditions of these systems are also taken into account in order to highlight the reliability and robustness of the developed control strategies, which are especially important for the remote and relatively inaccessible locations of many installations.
ARTICLE | doi:10.20944/preprints201806.0116.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: liquid metal sensors; liquid wire; wearables; athletic training; ankle complex; plantar flexion; resistance-based sensors; human ankle model; sensor substrate
Online: 7 June 2018 (11:15:27 CEST)
Interviews with strength and conditioning coaches across all levels of athletic competition identified their two biggest concerns with the current state of wearable technology: (a) the lack of solutions that accurately capture data "from the ground up" and (b) the lack of trust due to inconsistent measurements. The purpose of this research is to investigate the use of liquid metal sensors, specifically Liquid Wire sensors, as a potential solution for accurately capturing ankle complex movements such as plantar flexion, dorsiflexion, inversion, and eversion. Sensor stretch linearity was validated using a micro-ohm meter and a Wheatstone bridge circuit. Sensors made from different substrates were also tested and found to be linear at multiple temperatures. An ankle complex model and a computing unit for measuring resistance values were developed to determine sensor output based on simulated plantar flexion movement. The sensors were found to have a significant relationship between positional change and resistance values for plantar flexion movement. The results of the study ultimately confirm the researchers' hypothesis that liquid metal sensors, and Liquid Wire sensors specifically, can serve as a viable alternative to inertial measurement unit (IMU)-based solutions that attempt to capture specific joint angles and movements.
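For reference, the standard Wheatstone bridge relation used when reading a resistance-based stretch sensor placed in one bridge arm (textbook form; the resistor labels here are generic, not the authors' circuit):

```latex
% Sensor resistance R_x in one arm of a bridge excited by V_in:
V_\mathrm{out} \;=\; V_\mathrm{in}\!\left( \frac{R_x}{R_3 + R_x} \;-\; \frac{R_2}{R_1 + R_2} \right)
```

With the other three resistances fixed, a linear R_x(stretch) yields a nearly linear V_out over small deflections, consistent with the linearity validation described above.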
ARTICLE | doi:10.20944/preprints202105.0271.v1
Subject: Engineering, Other Keywords: Micro-mobility; Ride-sharing; Agent-based modelling; Crowdsourcing
Online: 12 May 2021 (13:48:39 CEST)
Substantial research is required to ensure that micro-mobility ride sharing better fulfills user needs. This study proposes a novel crowdsourcing model for a ride-sharing system in which light vehicles such as scooters and bikes are crowdsourced. The proposed model consists of three entities: suppliers, customers, and a management party responsible for receiving and booking rental requests and matching demand with offered resources. It allows suppliers to define the location of their private e-scooters/e-bikes and the period of time they are available for rent. Using a dataset of over 9 million e-scooter trips in Austin, Texas, we ran an agent-based simulation six times, using three maximum battery ranges (i.e., 35, 45, and 60 km) and two numbers of e-scooters (i.e., 50 and 100) at each origin. Computational results show that the proposed model is promising and might advantageously shift the charging and maintenance efforts to a crowd of suppliers.
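As a minimal illustration of the matching step such a management party performs, the sketch below assigns a trip request to the nearest available crowdsourced scooter with enough remaining battery range. The fleet data, positions, and ranges are invented; the paper's agent-based simulation is far richer.

```python
import math

def match_trip(trip_origin, trip_km, scooters):
    """Assign a request to the nearest available scooter that can cover it."""
    best = None
    for s in scooters:
        if not s["available"] or s["range_km"] < trip_km:
            continue  # supplier's vehicle is rented out or too low on charge
        d = math.dist(trip_origin, s["pos"])
        if best is None or d < best[0]:
            best = (d, s)
    if best is None:
        return None   # unmet demand
    best[1]["available"] = False
    best[1]["range_km"] -= trip_km
    return best[1]

fleet = [
    {"id": "A", "pos": (0.0, 0.0), "range_km": 35.0, "available": True},
    {"id": "B", "pos": (1.0, 1.0), "range_km": 2.0, "available": True},
]
# Scooter B is nearer but cannot cover a 5 km trip, so A is chosen.
chosen = match_trip((0.9, 0.9), trip_km=5.0, scooters=fleet)
```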
ARTICLE | doi:10.20944/preprints202112.0323.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Intrusion Detection System (IDS); HNADAM-SDG (Hybrid Nesterov-Accelerated Adaptive Moment Estimation–Stochastic Gradient Descent); Network-based Intrusion Detection System (NIDS); Host-based Intrusion Detection System (HIDS); Signature-based Intrusion Detection System (SIDS); Anomaly-based Intrusion Detection System (AIDS); Algorithms; Machine Learning.
Online: 21 December 2021 (11:45:39 CET)
Information security is of pivotal concern for consistently streaming information over the widespread internetwork. The bottleneck flow of incoming and outgoing data traffic introduces the issue of malicious activities carried out by intruders, hackers, and attackers in the form of authenticity desecration, gridlocked data traffic, vandalized data, and crashed networks. Emerging suspicious activities are managed by the domain of Intrusion Detection Systems (IDS). An IDS consistently monitors the network to identify suspicious activities and generates alarms and indications in the presence of malicious threats and worms. The performance of an IDS is improved by using different signature-based machine learning algorithms. In this paper, the performance of an IDS model is determined using a hybridization of the Nesterov-accelerated adaptive moment estimation and stochastic gradient descent (HNADAM-SDG) algorithm. The performance of the algorithm is compared with other classification algorithms, such as logistic regression, ridge classifier, and an ensemble algorithm, by adapting feature selection and optimization techniques.
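The abstract does not spell out the hybridization, so the following is only a toy sketch of the general idea: blending a Nesterov-accelerated Adam (NAdam-style) direction with a plain SGD gradient step, here on a one-dimensional quadratic loss. All constants are illustrative and not taken from the paper.

```python
import math

def grad(w):
    """Gradient of the toy loss f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def hybrid_step(w, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, mix=0.5):
    g = grad(w)
    m = b1 * m + (1 - b1) * g          # first moment (momentum)
    v = b2 * v + (1 - b2) * g * g      # second moment
    # Nesterov-style look-ahead on the first moment, as in NAdam:
    m_hat = (b1 * m + (1 - b1) * g) / (1 - b1 ** (t + 1))
    v_hat = v / (1 - b2 ** t)
    adam_dir = m_hat / (math.sqrt(v_hat) + eps)
    # Hybrid update: convex mix of the adaptive direction and raw SGD.
    w = w - lr * (mix * adam_dir + (1 - mix) * g)
    return w, m, v

w, m, v = 0.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = hybrid_step(w, m, v, t)   # w approaches the minimizer w* = 3
```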
ARTICLE | doi:10.20944/preprints201806.0025.v1
Subject: Engineering, Energy & Fuel Technology Keywords: high pressure hydrogen; metal hydride-based high pressure compression; techno-economic analysis; Ti-based AB2 metal hydrides; mini-channel heat exchanger
Online: 4 June 2018 (09:36:54 CEST)
Traditional high-pressure mechanical compressors account for over half of a hydrogen fueling station's cost, have insufficient reliability, and are not feasible for a large-scale fuel cell market. An alternative technology, employing a two-stage hybrid system based on electrochemical and metal hydride compression technologies, represents an excellent alternative to conventional compressors. The high-pressure stage, operating at 100-875 bar, is based on a metal hydride thermal system. A techno-economic analysis of the metal hydride system is presented and discussed. A model of the metal hydride system was developed, integrating a lumped-parameter mass and energy balance model with an economic model. A novel metal hydride heat exchanger configuration is also presented, based on mini-channel heat transfer systems, allowing for effective high-pressure compression. Several metal hydrides were analyzed and screened, demonstrating that one selected material, namely (Ti0.97Zr0.03)1.1Cr1.6Mn0.4, is likely the best candidate material for high-pressure compressors under the specific conditions. System efficiency and costs were assessed based on the properties of currently available materials at industrial levels. Results show that the system can reach pressures on the order of 875 bar with thermal power provided at approximately 150 °C. The system cost is comparable with that of current mechanical compressors and can be reduced in several ways, as discussed in the paper.
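The thermodynamic driver behind such thermal compression is the standard van 't Hoff relation for metal hydride equilibrium pressure, shown here in its textbook form (not the paper's full model): heating the hydride bed raises the equilibrium pressure exponentially,

```latex
% Equilibrium (plateau) pressure of a metal hydride at temperature T,
% with desorption enthalpy \Delta H and entropy \Delta S (van 't Hoff):
\ln\!\frac{P_\mathrm{eq}}{P_0} \;=\; -\,\frac{\Delta H}{R\,T} \;+\; \frac{\Delta S}{R}
```

which is why thermal power at roughly 150 °C can drive the 100-875 bar stage described above.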
ARTICLE | doi:10.3390/sci2040061
Subject: Keywords: industry4.0; fault detection; fault diagnosis; random forest; diagnostic graph; distributed diagnosis; model-based; data-driven; hybrid approach; hydraulic test rig
Online: 24 September 2020 (00:00:00 CEST)
In this work, a hybrid component Fault Detection and Diagnosis (FDD) approach for industrial sensor systems is established and analyzed, to provide a hybrid schema that combines the advantages and eliminates the drawbacks of both model-based and data-driven methods of diagnosis. Moreover, it shines a light on a new utilization of Random Forest (RF) together with model-based diagnosis, beyond its ordinary data-driven application. RF is trained and hyperparameter-tuned using three-fold cross-validation over a random grid of parameters using random search, to finally generate diagnostic graphs as the dynamic, data-driven part of this system. This is followed by translating those graphs into model-based rules in the form of if-else statements, SQL queries, or semantic queries such as SPARQL, in order to feed the dynamic rules into a structured model essential for further diagnosis. The RF hyperparameters are consistently updated online using the newly generated sensor data to maintain the dynamicity and accuracy of the generated graphs and rules thereafter. The architecture of the proposed method is demonstrated in a comprehensive manner, and the dynamic rules extraction phase is applied using a case study on condition monitoring of a hydraulic test rig with time-series multivariate sensor readings.
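The translation step can be pictured with a deliberately simplified sketch: a "diagnostic graph" reduced to threshold edges, rendered both as if-else pseudocode and as a SQL query. The sensor names, thresholds, fault labels, and the readings table are invented for illustration and are not taken from the paper.

```python
# Hypothetical diagnostic-graph edges: (sensor, comparison, threshold, fault).
graph_rules = [
    ("pressure_bar", ">", 180.0, "accumulator_fault"),
    ("pump_temp_c", ">", 75.0, "cooler_degradation"),
]

def to_if_else(rules):
    """Render each graph edge as an if-else style diagnostic rule."""
    return [f"if {sensor} {op} {thr}: diagnose('{fault}')"
            for sensor, op, thr, fault in rules]

def to_sql(sensor, op, thr, fault, table="readings"):
    """Render one graph edge as a SQL query over a sensor-readings table."""
    return (f"SELECT ts, '{fault}' AS fault FROM {table} "
            f"WHERE {sensor} {op} {thr};")

sql = to_sql(*graph_rules[0])
if_rules = to_if_else(graph_rules)
```

When the RF is retrained online, regenerating `graph_rules` and re-rendering them keeps the structured model's rules current, matching the update loop the abstract describes.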
ARTICLE | doi:10.20944/preprints202007.0548.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: industry4.0; fault detection; fault diagnosis; random forest; diagnostic graph; distributed diagnosis; model-based; data-driven; hybrid approach; hydraulic test rig
Online: 23 July 2020 (11:26:41 CEST)
In this work, a hybrid component Fault Detection and Diagnosis (FDD) approach for industrial sensor systems is established and analyzed, to provide a hybrid schema that combines the advantages and eliminates the drawbacks of both model-based and data-driven methods of diagnosis. Moreover, it shines a light on a new utilization of Random Forest (RF) together with model-based diagnosis, beyond its ordinary data-driven application. RF is trained and hyperparameter-tuned using 3-fold cross-validation over a random grid of parameters using random search, to finally generate diagnostic graphs as the dynamic, data-driven part of this system. This is followed by translating those graphs into model-based rules in the form of if-else statements, SQL queries, or semantic queries such as SPARQL, in order to feed the dynamic rules into a structured model essential for further diagnosis. The RF hyperparameters are consistently updated online using the newly generated sensor data in order to maintain the dynamicity and accuracy of the generated graphs and rules thereafter. The architecture of the proposed method is demonstrated in a comprehensive manner, and the dynamic rules extraction phase is applied using a case study on condition monitoring of a hydraulic test rig with time-series multivariate sensor readings.
ARTICLE | doi:10.20944/preprints202203.0056.v1
Subject: Engineering, Mechanical Engineering Keywords: Airborne wind energy; crosswind kite; induction factor; wake model; aerodynamic performance; CFD; analytical model
Online: 3 March 2022 (07:50:24 CET)
This paper presents results from a computational fluid dynamics (CFD) model of a multi-megawatt crosswind kite spinning on a circular path in a straight downwind configuration. The unsteady Reynolds-averaged Navier-Stokes equations, closed by the k−ω SST turbulence model, are solved in three-dimensional space using ANSYS Fluent. The flow behaviour is examined at the rotation plane, and the overall (or global) induction factor is obtained by taking the weighted average of the induction factors on multiple annuli over the swept area. The wake flow behaviour is also discussed in some detail using velocity and pressure contour plots. In addition to the CFD model, an analytical model is developed for calculating the average flow velocity and the radii of the annular wake downstream of the kite. The model is formulated based on the widely used Jensen model (Technical Report Risø-M; No. 2411, 1983), which was developed for conventional wind turbines, and thus has a simple form. Expressions for the dimensionless wake flow velocity and wake radii are obtained by assuming self-similarity of the flow velocity and linear wake expansion. Comparisons between numerical results from the analytical model and those from the CFD simulation show a reasonably good level of agreement. Such computational and analytical models are indispensable for kite farm layout design and optimization, where aerodynamic interactions between kites must be considered.
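For reference, the classical Jensen (top-hat) wake model on which the analytical formulation builds can be written in its standard wind-turbine form (the paper's kite-specific annular-wake expressions generalize this):

```latex
% Linear wake expansion and wake velocity behind a rotor of radius r_0,
% induction factor a, free-stream velocity u_\infty, expansion coefficient k:
r_w(x) = r_0 + k\,x, \qquad
u_w(x) = u_\infty \left[\, 1 - 2a \left( \frac{r_0}{r_0 + k\,x} \right)^{\!2} \right]
```

The assumptions of self-similar velocity deficit and linear expansion named in the abstract are exactly what give this model its simple closed form.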
Subject: Social Sciences, Accounting Keywords: cyber-physical systems; digital twin; subject orientation; agent-based systems
Online: 7 December 2020 (09:00:51 CET)
Cyber-Physical Systems (CPS) form the new backbone of digital ecosystems. Their design can be coupled with engineering activities to facilitate dynamic adaptation and (re-)configuration. Behavior-oriented technologies enable highly distributed yet coupled operation of systems. Utilizing them for digital twins, as self-contained design entities with federation capabilities, makes them promising candidates for developing and running highly functional CPS. In this paper we discuss mapping CPS components to behavior-based digital twin constituents, mirroring integration and implementation through subject-oriented models. These models, inspired by agent-oriented systems thinking, can be executed and increase transparency at design time and runtime. Patterns recognizing environmental factors and operation details facilitate the configuration of CPS. Subject-oriented runtime support enables dynamic adaptation and federated use.
ARTICLE | doi:10.20944/preprints202009.0295.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Streamline-based simulation; Nanoparticle transport; Reservoir simulation; Field-scale simulation
Online: 13 September 2020 (16:03:33 CEST)
Nanoparticle (NP) transport is increasingly relevant to subsurface engineering applications such as aquifer characterization, fracture electromagnetic imaging, and environmental remediation. An efficient field-scale simulation framework is critical for predicting NP performance and designing subsurface applications. In this work, for the first time, a streamline-based model is presented to simulate NP transport in field-scale subsurface systems. It considers a series of behaviors exhibited by engineered nanoparticles (NPs), including time-triggered encapsulation, retention, formation damage effects, and variable nanofluid viscosity. The key methods employed by the algorithm are streamline-based simulation (SLS) and an operator-splitting (OS) technique for modeling NP transport. SLS has proven efficient for solving transport in large and heterogeneous systems: the pressure and velocity fields are first solved on the underlying grid using finite-difference (FD) methods, and after tracing streamlines, one-dimensional (1D) NP transport is solved independently along each streamline. The adoption of OS enhances the flexibility of the solution procedure by allowing different numerical schemes to solve different governing equations efficiently and accurately. For the NP transport model, an explicit FD scheme is used for the advection term, an implicit FD scheme for the diffusion term, and adaptive numerical integration for the retention terms. The model is implemented in an in-house streamline-based code, which is verified against analytical solutions, a commercial FD reservoir simulator (ECLIPSE), and an academic FD colloid transport code (MNMs). For a 1D homogeneous case, the effluent breakthrough curves (BTC) produced by the in-house simulator are in good agreement with both the analytical solution and MNMs.
For a two-dimensional (2D) heterogeneous case, the BTC and concentration pattern of the in-house simulator match well with the solution produced by the commercial simulator. Simulations on a synthetic three-dimensional (3D) nanocapsule engineering design case are performed to investigate the effect of fluid and NP properties on the displacement pattern of an existing subsurface fluid.
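The operator-splitting idea behind the 1D transport solve can be sketched in a few lines: each time step applies an explicit upwind advection substep and then an implicit (backward-Euler) diffusion substep solved with the Thomas tridiagonal algorithm. Grid size, velocities, and boundary treatment here are toy choices, not the in-house simulator's.

```python
def advect_explicit(c, v, dt, dx):
    """First-order upwind advection (v > 0) with inflow concentration 1."""
    out = c[:]
    for i in range(len(c)):
        upstream = c[i - 1] if i > 0 else 1.0
        out[i] = c[i] - v * dt / dx * (c[i] - upstream)
    return out

def diffuse_implicit(c, D, dt, dx):
    """Backward-Euler diffusion, zero-gradient ends, Thomas algorithm."""
    n, r = len(c), D * dt / dx ** 2
    a = [-r] * n                 # sub-diagonal
    b = [1 + 2 * r] * n          # main diagonal
    cc = [-r] * n                # super-diagonal
    b[0] = b[-1] = 1 + r         # zero-gradient boundary rows
    d = c[:]
    for i in range(1, n):        # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * cc[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n                # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - cc[i] * x[i + 1]) / b[i]
    return x

c = [0.0] * 50                   # initial concentration along one streamline
for _ in range(40):              # split step: advection, then diffusion
    c = advect_explicit(c, v=1.0, dt=0.5, dx=1.0)
    c = diffuse_implicit(c, D=0.1, dt=0.5, dx=1.0)
```

Splitting lets each substep use the scheme that suits it (explicit for advection under a CFL limit, unconditionally stable implicit for diffusion), which is the flexibility the abstract attributes to the OS technique.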
ARTICLE | doi:10.20944/preprints202001.0032.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: model based diagnosis; applications; diagnosis; physiotherapy; education
Online: 4 January 2020 (06:34:25 CET)
Many physiotherapy treatments begin with a diagnosis process. The patient describes symptoms, upon which the physiotherapist decides which tests to perform until a final diagnosis is reached. The relationships between the anatomical components are too complex to keep in mind, and the possible actions are abundant. A trainee physiotherapist with little experience naively applies multiple tests to reach the root cause of the symptoms, which is a highly inefficient process. This work proposes to assist students in this challenge through three main contributions: (1) a compilation of the neuromuscular system as components of a system in a model-based diagnosis problem; (2) PhysIt, an AI-based tool that enables interactive visualization and diagnosis to assist trainee physiotherapists; and (3) an empirical evaluation that comprises a performance analysis and a user study. The performance analysis is based on evaluating simulated cases and common scenarios taken from anatomy exams. The user study evaluates the efficacy of the system in assisting students at the beginning of their clinical studies. The results show that our system significantly decreases the number of candidate diagnoses without discarding the correct diagnosis, and that students in their clinical studies find PhysIt helpful in the diagnosis process.
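The candidate-narrowing effect reported in the evaluation can be illustrated with a minimal consistency-based filter; the anatomical "model" and symptoms below are invented and far simpler than PhysIt's neuromuscular compilation.

```python
# Hypothetical component -> symptoms its fault could produce.
model = {
    "tibialis_anterior": {"weak_dorsiflexion"},
    "gastrocnemius": {"weak_plantar_flexion", "calf_pain"},
    "soleus": {"weak_plantar_flexion"},
}

def candidates(observed):
    """Single-fault candidates: components that explain every observation."""
    return sorted(comp for comp, symptoms in model.items()
                  if observed <= symptoms)

initial = candidates({"weak_plantar_flexion"})                 # 2 candidates
after_test = candidates({"weak_plantar_flexion", "calf_pain"}) # narrowed to 1
```

Each additional test result shrinks the candidate set without discarding a consistent diagnosis, mirroring the behavior reported in the results.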
ARTICLE | doi:10.20944/preprints201811.0407.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: participatory sensing; smart city; Internet of Things; distributed event-based systems
Online: 16 November 2018 (11:23:30 CET)
Since smart cities aim at becoming self-monitoring and self-responding systems, their deployment relies on close resource monitoring through large-scale urban sensing. The subsequent gathering of massive amounts of data makes it essential to develop event-filtering mechanisms that enable the selection of what is relevant and trustworthy. Due to the rise of mobile event producers, location information has become a valuable filtering criterion, as it not only offers extra information on the event described but also enhances trust in the producer. Implementing mechanisms that validate the quality of location information therefore becomes imperative. The lack of such strategies in cloud architectures compels the adoption of new communication schemes for IoT-based urban services. To serve the demand for location verification in urban distributed event-based systems (DEBS), we have designed three different fog architectures that combine proximity and cloud communication. Moreover, we have successfully assessed their performance using network simulations with realistic urban traces.
ARTICLE | doi:10.20944/preprints202105.0103.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Converter-driven stability; hybrid renewable energy source (HRES) system; modal resonance; full converter-based wind power generation (FCWG); full converter-based photovoltaic generation (FCPV)
Online: 6 May 2021 (15:14:24 CEST)
Various renewable energy sources such as wind power and photovoltaics (PV) have been increasingly integrated into the power system through power electronic converters in recent years. However, power electronic converter-driven stability issues under specific circumstances, such as modal resonance, might deteriorate the dynamic performance of power systems or even threaten their overall stability. In this paper, the impact of integrating a hybrid renewable energy source (HRES) system on modal interaction and converter-driven stability is investigated in an IEEE 16-machine 68-bus power system. Firstly, an HRES system is introduced, which consists of full converter-based wind power generation (FCWG) and full converter-based photovoltaic generation (FCPV). The equivalent dynamic models of FCWG and FCPV are then established, followed by linearized state-space modeling. On this basis, converter-driven stability analyses are performed to reveal the modal resonance mechanisms of the interconnected power systems and the modal interaction phenomenon. Additionally, time-domain simulations are conducted to verify the effectiveness of the dynamic models and support the converter-driven stability analysis results. To avoid detrimental modal resonances, an optimization strategy is further proposed that retunes the controller parameters of the HRES system. The overall results demonstrate the modal interaction effect between the external AC power system and the HRES system and its various impacts on converter-driven stability.
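The core of the linearized analysis above can be illustrated on a toy state-space model: the eigenvalues of the state matrix A yield each mode's frequency and damping ratio, which is how weakly damped resonant modes are identified. The two-state oscillator below is a hypothetical stand-in, not the 16-machine 68-bus model.

```python
import numpy as np

def modal_analysis(A):
    """Eigenvalues of the state matrix A -> (frequency_Hz, damping_ratio) per mode.

    For an eigenvalue lambda = sigma + j*omega:
      frequency     f    = |omega| / (2*pi)
      damping ratio zeta = -sigma / sqrt(sigma**2 + omega**2)
    """
    modes = []
    for lam in np.linalg.eigvals(A):
        sigma, omega = lam.real, lam.imag
        f = abs(omega) / (2 * np.pi)
        zeta = -sigma / np.hypot(sigma, omega) if abs(lam) > 0 else 1.0
        modes.append((f, zeta))
    return modes

# toy two-state oscillator: x'' + 2*zeta*wn*x' + wn**2*x = 0, wn = 2*pi rad/s
wn, zeta_true = 2 * np.pi, 0.05
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta_true * wn]])
modes = modal_analysis(A)
```

A mode with a damping ratio near zero (here 5%) is the kind of lightly damped oscillation whose interaction with converter controls produces the resonances discussed above.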
CONCEPT PAPER | doi:10.20944/preprints201607.0078.v2
Subject: Engineering, Electrical & Electronic Engineering Keywords: MIMO system Model; 2N–MIMO system model; Outage Probability; Channel Capacity Ratio (CCR)
Online: 10 August 2016 (10:11:12 CEST)
Finding a good MIMO system model is a major issue in wireless communication systems, in particular one that performs well in terms of capacity for a given transmitting-antenna configuration. In this paper, we analyze the channel capacity of various MIMO system models at constant SNR levels and outage probabilities. We establish a novel 2N-MIMO system model and determine how the channel capacity changes for different numbers of transmitting antennas at constant SNR and outage probability. The channel capacity ratio (CCR) is presented on the basis of the 2N-MIMO channel capacity model. It is well known that capacity grows as the number of transmitting antennas in a MIMO system increases; this paper expresses that change in capacity in a simple form.
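For reference, the standard MIMO ergodic capacity formula, C = E[log2 det(I + (SNR/Nt) H Hᴴ)], can be evaluated by Monte Carlo for different antenna counts. This sketch assumes an i.i.d. Rayleigh channel and equal power allocation; it is not the paper's 2N-MIMO model.

```python
import numpy as np

def mimo_capacity(nt, nr, snr_linear, trials=2000, rng=None):
    """Average capacity (bit/s/Hz) of an i.i.d. Rayleigh MIMO channel:
    C = E[ log2 det(I + (SNR/nt) * H @ H^H) ], H is an nr x nt complex Gaussian matrix."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        M = np.eye(nr) + (snr_linear / nt) * H @ H.conj().T
        total += np.log2(np.linalg.det(M).real)
    return total / trials

c2x2 = mimo_capacity(2, 2, snr_linear=10.0, trials=300, rng=0)
```

Running this for growing antenna counts at fixed SNR reproduces the well-known near-linear capacity growth that the abstract alludes to.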
ARTICLE | doi:10.20944/preprints201703.0044.v1
Subject: Engineering, Mechanical Engineering Keywords: advanced high strength steel; yield function; hardening model; springback; deformation mode
Online: 8 March 2017 (04:51:37 CET)
The objective of this study is to evaluate the effect of constitutive equations on the prediction accuracy for springback in cold stamping with various deformation modes. In this study, two types of yield functions—Hill’48 and Yld2000-2d—were considered to describe yield behavior. Isotropic and kinematic hardening models based on the Yoshida–Uemori model were also adopted to describe hardening behavior. Various material tests (such as uniaxial tension, tension-compression, loading-unloading, and hydraulic bulging tests) were carried out to determine the material parameters of the models. The obtained parameters were implemented in the finite element (FE) simulation to predict springback, and the results were compared with experimental data. U-bending and T-shape drawing were employed to evaluate the springback prediction accuracy. The results clearly showed that springback prediction accuracy is strongly influenced by the constitutive equations. Therefore, it is important to choose appropriate constitutive equations for an accurate description of material behavior in FE simulations.
REVIEW | doi:10.20944/preprints202205.0365.v1
Subject: Biology, Physiology Keywords: Fluid flow; conservation laws; Bidomain model; glymphatic system
Online: 26 May 2022 (10:42:20 CEST)
Biology is about structure. Structures within structures. Organs within animals, tissues within organs, cells within tissues, and molecules, often proteins, within cells. The structures are so complex that they can only be described by numbers. No numbers are of more importance than those that describe proteins: the coordinates of their atoms, often determined by Patterson functions (which are inverse Fourier transforms of intensities) of crystal diffraction. Without these numbers, structural biology would hardly exist. Without numbers, engineering would not exist. Numbers are surely needed by the engineers who produce the x-rays diffracting from atoms of protein crystals. Devices of engineering have function. They are built to implement equations. Engineering devices use structures to implement equations; when power is supplied at the right places, this produces appropriate flows. Flows are the essence of life. Equilibrium means death in most living systems. Flows in biological structures are hard to analyze because we do not know input-output equations in advance. Sometimes we do not know their function. Flows, forces, and structures of life (like those of engineering) are related by the field equations of conservation laws, partial differential equations constrained by the location and properties of structures. Constraints are boundary conditions located on the complicated domain of biological structure. Dealing with this complexity is simplified if one systematically analyzes structure using the most general field theory known: electricity, described by the Maxwell equations without significant known error. Currents are involved because the flows of biology usually involve migration of charges, convection of water and solutes, diffusion of the ions that form the plasma of life, and their interactions. Interactions can dominate function. Here I show how a few complex structures can be understood in engineering detail.
This approach may be useful in dealing with biological and medical issues in many other cases. In one limited case—the clearance of a toxic waste (potassium ions) from the optic nerve—this approach seems to have succeeded.
TECHNICAL NOTE | doi:10.20944/preprints202012.0296.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Systems Biology; Numerical Solver; Java™; API Library; SBML; SED-ML; OMEX; Constraint-Based Modeling; Stochastic Simulation; Ordinary Differential Equation Systems
Online: 11 December 2020 (19:41:58 CET)
Summary: Studying biological systems generally relies on computational modeling and simulation, e.g., for model-driven discovery and hypothesis testing. Progress in standardization efforts led to the development of interrelated file formats to exchange and reuse models in systems biology, such as SBML, the Simulation Experiment Description Markup Language (SED-ML), or the Open Modeling EXchange format (OMEX). Conducting simulation experiments based on these formats requires efficient and reusable implementations to make them accessible to the broader scientific community and to ensure the reproducibility of the results. The Systems Biology Simulation Core Library (SBSCL) provides interpreters and solvers for these standards as a versatile open-source API in Java™. The library simulates even complex bio-models and supports deterministic Ordinary Differential Equations (ODEs); Stochastic Differential Equations (SDEs); constraint-based analyses; recent SBML and SED-ML versions; exchange of results, and visualization of in silico experiments; open modeling exchange formats (COMBINE archives); hierarchically structured models; and compatibility with standard testing systems, including the Systems Biology Test Suite and published models from the BioModels and BiGG databases. Availability and implementation: SBSCL is freely available at https://draeger-lab.github.io/SBSCL/.
ARTICLE | doi:10.20944/preprints201703.0124.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: metabolic flux analysis, model misspecification, constraint-based model, stoichiometric model, Chinese hamster ovary cell culture
Online: 16 March 2017 (17:38:36 CET)
Background: Metabolic flux analysis (MFA) is an indispensable tool in metabolic engineering. The simplest variant of MFA relies on an overdetermined stoichiometric model of the cell’s metabolism under the pseudo-steady state assumption, to evaluate the intracellular flux distribution. Despite its long history, the issue of model error in the overdetermined MFA, particularly misspecifications of the stoichiometric matrix, has not received much attention. Method: We evaluated the performance of statistical tests from linear least square regressions, namely Ramsey RESET test, F-test and Lagrange multiplier test, in detecting model misspecifications in the overdetermined MFA, particularly missing reactions. We further proposed an iterative procedure using the F-test to correct such an issue. Result: Using Chinese hamster ovary and random metabolic networks, we demonstrated that: (1) a statistically significant regression does not guarantee high accuracy of the flux estimates, (2) the removal of a reaction with a low flux magnitude can cause disproportionately large biases in the flux estimates, (3) the F-test could efficiently detect missing reactions, and (4) the proposed iterative procedure could robustly resolve the omission of reactions. Conclusion: Our work demonstrated that statistical analysis and tests could be used to systematically assess, detect and resolve model misspecifications in the overdetermined MFA.
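The overdetermined least-squares step and the F-test for a candidate missing reaction can be sketched as follows. The 6×2 stoichiometric matrix, the candidate column, and the noise values are toy data, not the CHO network from the paper.

```python
import numpy as np

def lsq_ssr(S, r):
    """Least-squares flux estimate and residual sum of squares for S @ v ≈ r."""
    v, res, *_ = np.linalg.lstsq(S, r, rcond=None)
    ssr = float(np.sum((S @ v - r) ** 2))
    return v, ssr

def f_stat_added_reaction(S, s_new, r):
    """F-statistic for adding one candidate reaction (column s_new) to the model:
    F = (SSR_restricted - SSR_full) / (SSR_full / (n - p - 1))."""
    n, p = S.shape
    _, ssr0 = lsq_ssr(S, r)                              # restricted model
    _, ssr1 = lsq_ssr(np.column_stack([S, s_new]), r)    # augmented model
    return (ssr0 - ssr1) / (ssr1 / (n - p - 1))

# toy network: measured rates r include a contribution from a reaction missing in S
S = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.], [2., 1.], [0., 2.]])
s_missing = np.array([1., 0., 0., 0., 0., 0.])
r = S @ np.array([1.0, 2.0]) + 3.0 * s_missing + 0.001 * np.arange(6)
F = f_stat_added_reaction(S, s_missing, r)   # a large F flags the omission
```

The iterative procedure the authors propose can be imagined as repeating this test over a library of candidate columns and adding the most significant one until no candidate passes the threshold.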
TECHNICAL NOTE | doi:10.20944/preprints202103.0116.v2
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: DAPT; workflow; agent-based modeling; model exploration; crowdsourcing
Online: 10 May 2021 (09:47:54 CEST)
Modern agent-based models (ABM) and other simulation models require evaluation and testing of many different parameters. Managing that testing for large scale parameter sweeps (grid searches) as well as storing simulation data requires multiple, potentially customizable steps that may vary across simulations. Furthermore, parameter testing, processing, and analysis are slowed if simulation and processing jobs cannot be shared across teammates or computational resources. While high-performance computing (HPC) has become increasingly available, models can often be tested faster through the use of multiple computers and HPC resources. To address these issues, we created the Distributed Automated Parameter Testing (DAPT) Python package. By hosting parameters in an online (and often free) "database", multiple individuals can run parameter sets simultaneously in a distributed fashion, enabling ad hoc crowdsourcing of computational power. Combining this with a flexible, scriptable tool set, teams can evaluate models and assess their underlying hypotheses quickly. Here we describe DAPT and provide an example demonstrating its use.
ARTICLE | doi:10.20944/preprints201703.0027.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Fischer-Tropsch synthesis; kinetics model; cobalt based catalyst
Online: 6 March 2017 (06:47:14 CET)
A comprehensive kinetic model of the Fischer-Tropsch synthesis (FTS) is developed for a fixed-bed reactor under the operating conditions (temperature 230–235 °C, pressure 20–25 bar, gas hourly space velocity 4000–5000 cm3(STP)/h/gcatalyst, H2/CO feed molar ratio 2.1) over a Co-based catalyst. Reaction rate equations based on an Eley-Rideal (ER) type model for the initiation step and a Langmuir-Hinshelwood-Hougen-Watson (LHHW) type model for the propagation and termination steps of the FTS reactions have been considered, and the readsorption of olefins was taken into account. A model reported in the literature was modified in order to explain significant deviations from the Anderson-Schulz-Flory (ASF) distribution. Optimum parameters have been obtained by genetic algorithms (GA). The calculated activation energies for producing n-paraffins and 1-olefins were in the ranges of 82.24 to 90.68 kJ/mol and 100.66 to 105.24 kJ/mol, respectively. The hydrocarbon distribution in the FTS reactions was satisfactorily predicted, particularly for paraffins.
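The ASF distribution that the modified model deviates from is simple to state: the mass fraction of carbon number n is w_n = n(1-α)²αⁿ⁻¹ for a chain-growth probability α. A minimal sketch (the value α = 0.85 is chosen arbitrarily for illustration):

```python
import numpy as np

def asf_mass_fractions(alpha, n_max=100):
    """Anderson-Schulz-Flory mass fraction of carbon number n:
    w_n = n * (1 - alpha)**2 * alpha**(n - 1), n = 1..n_max."""
    n = np.arange(1, n_max + 1)
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

w = asf_mass_fractions(0.85, n_max=500)
```

Deviations such as the elevated methane (n = 1) and depressed C2 fractions observed in practice are exactly the departures from this ideal curve that motivate modifying the literature model.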
ARTICLE | doi:10.20944/preprints202102.0299.v1
Subject: Physical Sciences, Acoustics Keywords: Emergence; Ising Model; Information; Computation; State Space
Online: 12 February 2021 (11:11:50 CET)
The exact dynamics of emergence remains one of the most prominent outstanding questions for the field of complexity science. I first discuss perspectives on emergence in various contexts, then offer a different perspective on understanding emergence in a graph-theoretic representation. From the discussion, an observer's choice of state space appears to affect that observer's ability to detect emergent behavior. To test these ideas, I analyze the dynamics of all possible spatial state spaces near the critical temperature in an Ising model. As a result, state-space topologies that appear more deterministic flip more bits than topologies that appear more random, which is contrary to our intuitions about randomness. In addition, the sizes of different state spaces constrain a system's ability to explore various states within the same time frame. These results are important for understanding emergent phenomena in biological systems, which are layered with various state spaces and observational perspectives.
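The kind of dynamics analyzed above can be generated with a minimal Metropolis simulation of the 2D Ising model near the Onsager critical temperature. The lattice size, sweep count, and periodic boundaries below are illustrative choices; the state-space/bit-flip analysis itself is not reproduced here.

```python
import numpy as np

def metropolis_sweep(spins, T, rng):
    """One Metropolis sweep of a 2D Ising model with periodic boundaries (J = kB = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb          # energy change of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(1)
L = 16
Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))      # Onsager critical temperature ~ 2.269
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    metropolis_sweep(spins, Tc, rng)
m = abs(spins.mean())
```

Each accepted flip here is one "bit flip" of the microstate; recording which spins flip per sweep is the raw material for the flip-count comparison across state-space topologies described in the abstract.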
ARTICLE | doi:10.20944/preprints201901.0303.v1
Subject: Engineering, Civil Engineering Keywords: solid waste management; performance assessment; fuzzy rule-based modeling; performance indicators; Simulink MATLAB
Online: 30 January 2019 (06:55:00 CET)
Most of the municipalities in the Gulf region are facing performance-related issues in their municipal solid waste management (MSWM) systems. They lack deliberate inter-municipality benchmarking processes. Instead of identifying the performance gaps in their key components (e.g., personnel productivity, operational reliability, etc.) and adopting proactive measures, the municipalities primarily rely on an efficient emergency response. A novel hierarchical modeling framework, based on deductive reasoning, is developed for performance assessment of MSWM systems. Fuzzy rule-based modeling using Simulink-MATLAB was used for performance inferencing at the different levels, i.e., components, sub-components, etc. The model is capable of handling the inherent uncertainties due to limited data and an imprecise knowledge base. The model's outcomes can assist managers working at different levels of the organizational hierarchy with effective decision-making. Performance of the key components assists senior management in assessing the overall compliance level of performance objectives. Subsequently, operations management can hone in on the sub-components to acquire useful information for intra-municipality performance management. Individual indicators, meanwhile, are useful for inter-municipality benchmarking. The model has been implemented for two municipalities operating in the Qassim Region, Saudi Arabia. The results demonstrate the model's pragmatism for continuous performance improvement of MSWM systems in the country and elsewhere.
CASE REPORT | doi:10.20944/preprints202103.0430.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Queue-Systems; SAN model; Mobius tool; Simulation
Online: 16 March 2021 (14:58:52 CET)
We study a system comprising two M/M(a)/1/K queues with a due date and an FCFS service policy in each queue. Using simulation and mathematical analysis, solution diagrams were obtained for a fixed due date θ = 1 second applied during the phase from arrival up to the commencement of service, with limited capacities K1 = 5 and K2 = 4, under two scenarios of different service rates. For the analytical solution, a SAN model of the system was first created in the Mobius tool. For the simulation, a Java discrete-event simulation code was used. Adjustment of the initial values, selection of the service-rate scenarios, the number of customers in the simulation, etc., were all carried out in the simulation code.
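The simulation side can be illustrated with a plain discrete-event model of a single M/M/1/K FCFS queue (the due-date logic and the two-queue coupling of the system above are omitted); the blocking probability it estimates can be checked against the closed-form M/M/1/K result.

```python
import random

def mm1k_blocking(lam, mu, K, n_customers, seed=0):
    """Discrete-event simulation of an M/M/1/K FCFS queue.
    Returns the fraction of arrivals blocked because the system already held K customers."""
    rng = random.Random(seed)
    t, in_system, blocked, arrivals = 0.0, 0, 0, 0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")            # server idle
    while arrivals < n_customers:
        if next_arrival <= next_departure:   # next event: arrival
            t = next_arrival
            arrivals += 1
            if in_system >= K:
                blocked += 1                 # system full: customer lost
            else:
                in_system += 1
                if in_system == 1:           # server was idle: start service
                    next_departure = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:                                # next event: departure
            t = next_departure
            in_system -= 1
            next_departure = t + rng.expovariate(mu) if in_system else float("inf")
    return blocked / arrivals

p_block = mm1k_blocking(lam=1.0, mu=1.5, K=5, n_customers=20000, seed=42)
```

For λ = 1, μ = 1.5 and K = 5 the analytic blocking probability is (1-ρ)ρᴷ/(1-ρᴷ⁺¹) with ρ = λ/μ, roughly 0.048, which the simulation should approach for long runs.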
ARTICLE | doi:10.20944/preprints202010.0072.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: indoor positioning; access point placement; path loss model; optimization
Online: 5 October 2020 (11:34:03 CEST)
Indoor Positioning Systems (IPSs) are designed to provide solutions for location-based services. Wireless local area network (WLAN)-based positioning systems are the most widespread around the globe and are commonly found to have a ready-to-use infrastructure composed mostly of access points (APs). They provide useful information on signal strength to be processed by adequate location algorithms, which are not always capable of achieving the desired localization error only by themselves. In this sense, this paper proposes a new method to improve the accuracy of IPSs by optimizing some of their most relevant infrastructure components. Included are the arrangement of APs over the environment, the number of reference points (RPs), and the number of samples per location estimation test. A simulation environment is also proposed, in which the impact of key influencing factors on system accuracy is analyzed. Finally, a case study is simulated to validate an optimal combination of design parameters and its compliance with the requirements of localization error and the limited number of access points. Our simulation results clearly show that the desired localization accuracy, which is set as a goal, can be achieved while maintaining the factors already mentioned at minimal levels, which decreases both system deployment costs and computational effort.
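A common modeling choice in such WLAN simulation environments is the log-distance path-loss model, RSSI(d) = P(d₀) − 10·n·log₁₀(d/d₀); whether the paper uses exactly this form is not stated, so the parameters below (reference power at 1 m, path-loss exponent n = 3) are illustrative only.

```python
import math

def rssi_dbm(d, tx_power_dbm=-30.0, n=3.0, d0=1.0):
    """Log-distance path-loss model: RSSI(d) = P(d0) - 10*n*log10(d/d0).
    tx_power_dbm is the received power at the reference distance d0 (assumed values)."""
    return tx_power_dbm - 10.0 * n * math.log10(d / d0)

def distance_from_rssi(rssi, tx_power_dbm=-30.0, n=3.0, d0=1.0):
    """Invert the model to estimate the distance to an access point from a reading."""
    return d0 * 10.0 ** ((tx_power_dbm - rssi) / (10.0 * n))
```

With distance estimates to three or more APs, a location algorithm (e.g., least-squares trilateration) produces the position fix; AP placement matters because it controls how well-conditioned that inversion is across the floor plan.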
ARTICLE | doi:10.20944/preprints201809.0556.v1
Subject: Engineering, Control & Systems Engineering Keywords: wave energy converter; model predictive control; comparitive of robustness; embedded integrator; mathematical model; identification methodology; real time series
Online: 28 September 2018 (08:21:41 CEST)
This work is situated in a growing sector within the field of renewable energies: wave energy converters (WECs). Specifically, it focuses on one of the point absorber WECs (PAWs) of the hybrid platform W2POWER. With the aim of maximising the mechanical power extracted from the waves by these WECs and reducing their mechanical fatigue, five different model predictive controllers (MPCs) with hard and soft constraints have been designed. As a contribution of this paper, two of the MPCs have been designed with the addition of an embedded integrator. In order to validate the MPCs, an exhaustive study of performance and robustness is carried out through simulations in which uncertainties in the WEC dynamics are considered. Furthermore, to make these simulations realistic, an identification methodology for PAWs is proposed and validated by means of real time series from a scale prototype.
ARTICLE | doi:10.20944/preprints201803.0088.v1
Subject: Engineering, Civil Engineering Keywords: extreme water level; hydrodynamic model; Monte Carlo; joint probability; model calibration and verification; Danshuei River system
Online: 12 March 2018 (07:56:58 CET)
Estimates of extreme water level return periods in river systems are crucial for hydraulic engineering design and planning. Recorded historical water level data of Taiwan’s rivers are not long enough for traditional frequency analyses when predicting extreme water levels for different return periods. In this study, the integration of a one-dimensional flash flood routing hydrodynamic model with the Monte Carlo simulation was developed to predict extreme water levels in the Danshuei River system of northern Taiwan. The numerical model was calibrated and verified with observed water levels using four typhoon events. The results indicated a reasonable agreement between the model simulation and observation data. Seven parameters, including the astronomical tide and surge height at the mouth of the Danshuei River and the river discharge at five gauge stations, were adopted to calculate the joint probability and generate stochastic scenarios via the Monte Carlo simulation. The validated hydrodynamic model driven by the stochastic scenarios was then used to simulate extreme water levels for further frequency analysis. The design water level was estimated using different probability distributions in the frequency analysis at five stations. The design high-water levels for a 200-year return period at Guandu Bridge, Taipei Bridge, Hsin-Hai Bridge, Da-Zhi Bridge, and Chung-Cheng Bridge were 2.90 m, 5.13 m, 6.38 m, 6.05 m, and 9.94 m, respectively. The estimated design water levels plus the freeboard are proposed and recommended for further engineering design and planning.
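The frequency-analysis step, fitting a probability distribution to simulated extremes and reading off a return level, can be sketched with a Gumbel (EV1) fit by the method of moments. The annual-maxima values below are made up for illustration, and the paper evaluates several candidate distributions rather than this one alone.

```python
import math
import numpy as np

def gumbel_return_level(annual_maxima, T):
    """T-year return level from a Gumbel (EV1) fit by the method of moments:
    beta = sqrt(6)*std/pi,  mu = mean - 0.5772*beta,
    x_T  = mu - beta * ln(-ln(1 - 1/T))."""
    x = np.asarray(annual_maxima, dtype=float)
    beta = math.sqrt(6.0) * x.std(ddof=1) / math.pi
    mu = x.mean() - 0.5772 * beta
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# hypothetical annual maximum water levels (m) at one station
levels = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]
x200 = gumbel_return_level(levels, 200)   # 200-year design level
```

The Monte Carlo step in the paper effectively lengthens the "record" fed into this kind of fit: stochastic boundary-condition scenarios drive the hydrodynamic model, and the resulting simulated maxima replace the short observed series.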
ARTICLE | doi:10.20944/preprints202012.0750.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Solar thermal; flat-plate collector; stagnation; steam range; two-phase mixture model; thermal-hydraulic model.
Online: 30 December 2020 (10:02:25 CET)
Stagnation is the transient state of a solar thermal system under high solar irradiation where the useful solar gain is zero. Both flat-plate collectors with selective absorber coatings and vacuum-tube collectors exhibit stagnation temperatures far above the saturation temperature of the glycol-based heat carriers within the range of typical system pressures. Therefore, stagnation is always associated with vaporization and propagation of vapor into the pipes of the solar circuit. It is therefore essential to design the system in such a way that vapor never reaches components that cannot withstand high temperatures. In this article, a thermal-hydraulic model based on the integral form of a two-phase mixture model and a drift-flux correlation is presented. The model is applicable to solar thermal flat-plate collectors with meander-shaped absorber tubes and selective absorber coatings. Experimental data from stagnation experiments on two systems, which are identical except for the optical properties of the absorber coating, allowed comparison with simulations carried out under the same boundary conditions. The absorber of one system features a conventional highly selective coating, while the absorber of the other system features a thermochromic coating, which exhibits a significantly lower stagnation temperature. Comparison of simulation results and experimental data show good conformity. This model is implemented into an open-source software tool called “THD” for the thermal-hydraulic dimensioning of solar systems. The latest version of THD, updated by the results of this article, enables planners to achieve cost-optimal design of solar thermal systems and to ensure failsafe operation by predicting the steam range under the initial and boundary conditions of worst-case scenarios.
Subject: Engineering, Civil Engineering Keywords: Sampling frequency; deterministic approach; simulation model; water quality.
Online: 5 June 2019 (10:29:22 CEST)
This paper proposes a novel deterministic methodology for estimating the optimal sampling frequency (SF) of water quality monitoring systems. The proposed methodology is based on employing two-dimensional contaminant transport simulation models to determine the minimum SF considering all the potential changes in the boundary conditions of a water body. A two-dimensional contaminant transport simulation model (RMA4) was implemented to estimate the distribution patterns of the total dissolved solids (TDS) within the Al-Hammar Marsh in the southern part of Iraq for 30 cases of potential boundary conditions. Using geographical information system (GIS) tools, a spatiotemporal analysis approach was applied to the results of the RMA4 model to determine the minimum SF of the monitoring stations with an accuracy level of detectable change in TDS concentration (ALC) of 5%, 10% and 15%. The proposed methodology specified minimum and maximum SFs for each monitoring station (MS) of 12 and 33 times per year, respectively. Additionally, increasing the ALC to 10% and 15% increases the minimum SF for some MSs by approximately 18% and 21%, respectively. The proposed methodology covers all the potential values and cases of boundary conditions, which increases the certainty of the monitoring system and the efficiency of the SF schedule. Moreover, the proposed methodology can be effectively applied to all types of surface water resources.
Subject: Engineering, Automotive Engineering Keywords: functional dependency; network-based linear dependency modelling; internet of things; micro mort model; goal-oriented approach; transformation roadmap; cyber risk regulations; empirical analysis; cyber risk self-assessment; cyber risk target state.
Online: 25 December 2020 (11:35:48 CET)
The Internet-of-Things (IoT) triggers new types of cyber risks. Therefore, the integration of new IoT devices and services requires a self-assessment of IoT cyber security posture. By security posture, this article refers to the cybersecurity strength of an organisation in predicting, preventing and responding to cyberthreats. At present, there is a gap in the state of the art, because there are no self-assessment methods for quantifying IoT cyber risk posture. To address this gap, an empirical analysis is performed of 12 cyber risk assessment approaches. The results and the main findings from the analysis are presented as the current and a target risk state for IoT systems, followed by conclusions and recommendations on a transformation roadmap describing how IoT systems can achieve the target state with a new goal-oriented dependency model. By target state, we refer to the cyber security target that matches the generic security requirements of an organisation. The paper studies and adapts four alternatives for IoT risk assessment and identifies goal-oriented dependency modelling as the dominant approach among the risk assessment models studied. The new goal-oriented dependency model in this article enables the assessment of uncontrollable risk states in complex IoT systems and can be used for a quantitative self-assessment of IoT cyber risk posture.
ARTICLE | doi:10.20944/preprints201811.0296.v1
Subject: Physical Sciences, Condensed Matter Physics Keywords: q-states clock model; Entropy; Berezinskii-Kosterlitz-Thouless transition
Online: 13 November 2018 (05:09:16 CET)
In this paper, we revisit the q-states clock model for small systems. We present results for the thermodynamics of the q-states clock model from $q=2$ to $q=20$ for small square lattices $L \times L$, with L ranging from $L=3$ to $L=64$ with free-boundary conditions. Energy, specific heat, entropy and magnetization are measured. We found that the Berezinskii-Kosterlitz-Thouless (BKT)-like transition appears for $q>5$ regardless of lattice size, while the transition at $q=5$ is lost for $L<10$; for $q\leq 4$ the BKT transition is never present. We report the phase diagram in terms of $q$ showing the transition from the ferromagnetic (FM) to the paramagnetic (PM) phases at a critical temperature T$_1$ for small systems which turns into a transition from the FM to the BKT phase for larger systems, while a second phase transition between the BKT and the PM phases occurs at T$_2$. We also show that the magnetic phases are well characterized by the two dimensional (2D) distribution of the magnetization values. We make use of this opportunity to do an information theory analysis of the time series obtained from the Monte Carlo simulations. In particular, we calculate the phenomenological mutability and diversity functions. Diversity characterizes the phase transitions, but the phases are less detectable as $q$ increases. Free boundary conditions are used to better mimic the reality of small systems (far from any thermodynamic limit). The role of size is discussed.
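For a very small system, the thermodynamics of the clock model can even be computed exactly by enumerating all states, which is a useful cross-check on Monte Carlo results. A sketch for a 2×2 free-boundary lattice (the paper's lattices start at L = 3; J = kB = 1 assumed):

```python
import itertools
import math

def clock_model_2x2(q, T):
    """Exact thermodynamics of the q-state clock model on a 2x2 free-boundary
    lattice by full enumeration (q**4 states, 4 bonds, J = kB = 1).
    Energy: E = -sum over bonds of cos(2*pi*(s_i - s_j)/q).
    Returns (mean energy, entropy) at temperature T."""
    bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]   # site indices, row-major 2x2
    Z, E_sum = 0.0, 0.0
    for s in itertools.product(range(q), repeat=4):
        E = -sum(math.cos(2 * math.pi * (s[a] - s[b]) / q) for a, b in bonds)
        w = math.exp(-E / T)
        Z += w
        E_sum += E * w
    E_mean = E_sum / Z
    entropy = math.log(Z) + E_mean / T         # S = ln Z + <E>/T
    return E_mean, entropy

e, s = clock_model_2x2(4, 1.0)
```

In the high-temperature limit the entropy approaches 4·ln q (all q⁴ states equally likely) and the mean energy approaches zero, while at low temperature the system locks into one of the q aligned ground states with E = -4, bracketing the Monte Carlo measurements described above.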
ARTICLE | doi:10.20944/preprints201912.0336.v1
Subject: Physical Sciences, Condensed Matter Physics Keywords: non-equilibrium thermodynamics; Ising model; Kuramoto model; Rayleigh-Bénard convection; pattern formation
Online: 25 December 2019 (06:56:02 CET)
Soft-matter systems, when driven out of equilibrium, often give rise to structures that usually lie in between the macroscopic scale of the material and the microscopic scale of its constituents. In this paper we review three such systems: the two-dimensional square-lattice Ising model, the Kuramoto model and the Rayleigh-Bénard convection system, each of which, when driven out of equilibrium, gives rise to emergent spatio-temporal order through self-organization. A common feature of these systems is that the entities that self-assemble are coupled to one another in some way, either through local interactions or through a continuous medium. Therefore, the general nature of the non-equilibrium fluctuations of the intrinsic variables in these systems is found to follow similar trends as order emerges. Through this paper, we attempt to look for connections among these systems, and with systems in general that give rise to emergent order when driven out of equilibrium.
ARTICLE | doi:10.20944/preprints201810.0632.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Modelica; heat pump; HiL; model validation; testbed
Online: 26 October 2018 (12:11:57 CEST)
Heating systems such as heat pumps and combined heat and power (CHP) systems represent key components of the future smart grid. Their capability to couple the electricity and heat sectors promises massive potential for the energy transition. Hence, these systems are continuously studied numerically and experimentally to quantify their potential and develop optimal control methods. Although numerical simulations provide a time- and cost-effective solution for system development and optimization, they are exposed to several uncertainties. Hardware-in-the-loop (HiL) systems enable system validation and evaluation under different real-life dynamic constraints and boundary conditions. In this paper, a HiL heat pump testbed is presented and used for two case studies. In the first case, the conventional heat pump testbed operation method is compared to the HiL operation method; energetic and dynamic analyses are performed to quantify the added value of the HiL approach and its necessity for dynamics analysis. In the second case, the HiL testbed is used to validate the heat pump operation in a single-family house participating in a local energy market. It enables validation not only of the dynamics of the heat pump and the space heating circuit but also of the building room temperature. The energetic analysis indicated deviations of 2% and 5% for heat generation and electricity consumption of the heat pump, respectively. The dynamic analysis emphasized the model's capability to reproduce the dynamics of a real system with a temporal distortion of 3%.
ARTICLE | doi:10.20944/preprints202104.0648.v1
Online: 26 April 2021 (10:22:54 CEST)
The runtime environment is an important concern for self-adaptive systems (SASs). Although researchers have proposed many approaches for developing SASs that address the issue of uncertain runtime environments, the understanding of these environments varies depending on the objectives, perspectives, and assumptions of the research. Thus, the current understanding of the environment in SAS development is ambiguous and abstract. To make this understanding more concrete, we describe the landscape in this area through a systematic literature review (SLR). We examined 128 primary studies and 14 unique environment models. We investigated the concepts of the environment depicted in the primary studies and the proposed environment models based on their ability to aid understanding. This illustrates the characteristics of the SAS environment, the associated emerging environmental uncertainties, and what is expressed in the existing environment models. This paper makes explicit the implicit understanding of the environment held by the SAS research community, and organizes and visualizes it.
CONCEPT PAPER | doi:10.20944/preprints201702.0065.v1
Subject: Social Sciences, Sociology Keywords: Systems Theory; multi-factor model; sustainable development; DRR; CCA
Online: 17 February 2017 (07:22:56 CET)
This article considers the concepts of sustainability and sustainable development in relation to disaster risk reduction and climate change adaptation. We conceptualize sustainability from a social systemic perspective, that is, from a perspective that encompasses the multiple functionalities of a social system and their interrelationships in particular environmental contexts. The systems perspective is applied in our consideration and analysis of disaster risk reduction (DRR), climate change adaptation (CCA), and sustainable development (SD). Section 1 briefly introduces sustainability and sustainable development, followed by a brief presentation of the theory of complex social systems (Section 2). The theory conceptualizes interdependent subsystems, their multiple functionalities, and the agential and systemic responses to internal and external stressors on a social system. Section 3 considers disaster risk reduction (DRR) and climate change adaptation (CCA), emerging in response to one or more systemic stressors. It illustrates DRR with the cases of food and chemical security regulation in the EU. CCA is illustrated by initiatives and developments on the island of Gotland, Sweden and in the Gothenburg Metropolitan area, which go beyond a limited CCA perspective by taking into account long-term sustainability issues. Section 4 discusses the limitations of DRR and CCA, not only their technical limitations but also their economic, socio-cultural, and political limitations, as informed by a sustainability perspective. It is argued that DRRs are only partial subsystems and must be considered and assessed in the context of a more encompassing systemic perspective. Part of the discussion is focused on the distinction between sustainable and non-sustainable DRRs and CCAs.
Section 5 presents a few concluding remarks about the importance of a systemic perspective in analyzing DRR and CCA as well as other similar subsystems in terms of sustainable development.
ARTICLE | doi:10.20944/preprints202107.0259.v1
Subject: Engineering, Automotive Engineering Keywords: Driveability; low-frequency; energy path analysis; powertrain; model-based engineering
Online: 12 July 2021 (12:21:24 CEST)
Vehicle driveability is one of the important vehicle attributes in range-extender electric vehicles due to the electric motor torque characteristics at low-speed events. Validating and rectifying vehicle driveability attributes typically relies on a physical vehicle prototype, which can be expensive and require several design iterations. In this paper, a model-based energy method to assess vehicle driveability is presented, based on a high-fidelity 49-degree-of-freedom model of the powertrain and vehicle systems. Multibody dynamics components were built according to their true centre of gravity relative to the vehicle datum to provide accurate system interaction. The work covered frequencies below 20 Hz. The results, consisting of the dominant component frequencies, are structured and examined to identify the low-frequency sensitivity to different operating parameters such as the road surface coefficient. An energy path technique was also applied to the dominant component by decoupling its compliances to study the effect on vehicle driveability and the low-frequency response. The outcomes of the research provide a good understanding of the interaction across sub-system levels. The powertrain rubber mounts were the dominant components controlling the low-frequency content (< 15.33 Hz) and can change the vehicle driveability quality.
ARTICLE | doi:10.20944/preprints201608.0080.v2
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: left ventricle; myofibre; myocardium structure; rule-based model; mathematical anatomy
Online: 20 October 2016 (08:22:39 CEST)
Computer simulation of normal and diseased human heart activity requires a 3D anatomical model of the myocardium, including myofibres. For clinical applications, such a model has to be constructed based on routine methods of cardiac visualisation such as sonography. Symmetrical models are shown to be too rigid, so an analytical non-symmetrical model with enough flexibility is necessary. Based on previously made anatomical models of the left ventricle, we propose a new, much more flexible spline-based analytical model. The model is fully described and verified against DT-MRI data. We show a way to construct it on the basis of sonography data. To use this model in further physiological simulations, we propose a numerical method to utilise finite differences in solving the reaction-diffusion problem together with an example of scroll wave dynamics simulation.
ARTICLE | doi:10.20944/preprints202002.0273.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: linguistic knowledge; neural machine translation model; machine translation tasks; knowledge-based encoder; model representation ability
Online: 19 February 2020 (10:51:41 CET)
Incorporating source-side linguistic knowledge into the neural machine translation (NMT) model has recently achieved impressive performance on machine translation tasks. One popular method is to generalize the word embedding layer of the encoder to encode each word together with its linguistic features. Another is to change the architecture of the encoder to encode syntactic information. However, the former cannot explicitly balance the contributions of the word and its linguistic features, while the latter cannot flexibly utilize various types of linguistic information. Focusing on these issues, this paper proposes a novel NMT approach that models the words in parallel with the linguistic knowledge by using two separate encoders. Compared with the single-encoder NMT model, the proposed approach additionally employs a knowledge-based encoder to specifically encode linguistic features. Moreover, it shares parameters across encoders to enhance the model's ability to represent the source-side language. Extensive experiments show that the approach achieves significant improvements of up to 2.4 and 1.1 BLEU points on Turkish→English and English→Turkish machine translation tasks, respectively, which indicates that it is capable of better utilizing external linguistic knowledge and effectively improving machine translation quality.
ARTICLE | doi:10.20944/preprints201805.0156.v1
Subject: Earth Sciences, Environmental Sciences Keywords: rule-based system; reservoir management model; land management model; SWAT (Soil and Water Assessment Tool)
Online: 10 May 2018 (06:27:38 CEST)
Decision tables have been used for many years in data processing and business applications to simulate complex rule sets. Several computer languages have been developed around rule systems, and decision tables are easily programmed in several current languages. Land management and river-reservoir models simulate complex land management operations and reservoir management in highly regulated river systems. Decision tables are a precise yet compact way to model the rule sets and corresponding actions found in these models. In this study, we discuss the suitability of decision tables to simulate management in the river basin scale Soil and Water Assessment Tool (SWAT+) model. Decision tables are developed to simulate automated irrigation and reservoir releases. A simple auto-irrigation application of decision tables was developed using plant water stress as a condition for irrigating corn in Texas. The sensitivity of soil moisture and corn yields to the water stress trigger and irrigation application amounts was shown. In addition, the Grapevine Reservoir near Dallas, Texas was used to illustrate the use of decision tables to simulate reservoir releases, with releases conditioned on reservoir volumes and flood season. The release rules as implemented by the decision table realistically simulated flood releases, as evidenced by a daily NSE (Nash-Sutcliffe Efficiency) of 0.52 and a percent bias of -1.1%. Using decision tables to simulate management in land, river and reservoir models was shown to have several advantages over current approaches, including: 1) a mature technology with considerable literature and applications, 2) the ability to accurately represent complex, real-world decision making, 3) code that is efficient, modular and easy to maintain, and 4) tables that are easy to maintain, support, and modify.
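The condition-action structure of such an auto-irrigation table can be sketched in a few lines (a hypothetical toy rule set, not the SWAT+ implementation; the stress thresholds and depths below are invented for illustration):

```python
# Hypothetical decision table: ordered (condition, action) rows.
# Condition: a predicate on the model state; action: irrigation depth in mm.
IRRIGATION_TABLE = [
    (lambda s: s["plant_water_stress"] > 0.8, 25.0),  # severe stress: full application
    (lambda s: s["plant_water_stress"] > 0.5, 10.0),  # moderate stress: light application
]

def irrigation_depth(state):
    """Evaluate rows in order; the first matching condition fires its action."""
    for condition, depth in IRRIGATION_TABLE:
        if condition(state):
            return depth
    return 0.0  # no rule fired: no irrigation
```

Evaluating rows in order and firing the first match keeps the rule set compact, auditable, and easy to extend — the maintainability advantage noted in the abstract.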
ARTICLE | doi:10.20944/preprints201904.0303.v1
Subject: Biology, Other Keywords: selective biomineralization; recovery of Au(III); AuNP; bacillus licheniformis FZUL-63; aqua regia-based metal wastewater
Online: 28 April 2019 (08:36:49 CEST)
The recovery of precious metals is a project with both economic and environmental significance. In this paper, we present how to use bacterial mineralization to selectively recover gold from multi-ionic aqueous systems. Bacillus licheniformis FZUL-63, isolated from a landscape lake at Fuzhou University, was shown to selectively mineralize and precipitate gold from coexisting ions in aqueous solution. The removal of Au(III) occurred almost entirely within the first hour, and FTIR data show that the amino, carboxyl and phosphate groups on the surface of the bacteria are involved in the adsorption of gold ions. XPS results implied that Au(III) ions are reduced to the monovalent state, and the resulting Au(I) is adsorbed on the bacterial surface during the initial stage (the first hour). XRD results showed that gold biomineralization began about 10 hours after the interaction between Au(III) ions and the bacteria. The Au(III) mineralization was scarcely influenced by other co-existing metal ions. TEM analysis shows that the gold nanoparticles have a polyhedral structure with a particle size of ~20 nm. Bacillus licheniformis FZUL-63 could selectively mineralize and recover 478 mg/g (dry biomass) of gold from aqua regia-based metal wastewater over four cycles, indicating great potential for practical application.
ARTICLE | doi:10.20944/preprints202009.0418.v1
Subject: Engineering, Automotive Engineering Keywords: large sized lithium-ion battery; physic-based model; life prediction; scale-up model; reduced order cell model; electric vehicles
Online: 18 September 2020 (04:29:49 CEST)
Large lithium-ion batteries (LIBs) in electric vehicles and energy storage systems demonstrate different performance and lifetime compared to small LIB cells, owing to size effects generated by the electrical configuration and property imbalance. However, the calculation time for performing life predictions with three-dimensional (3D) cell models is undesirably long. In this paper, a lumped cell model with equivalent resistances (LER cell model) is proposed as a reduced-order version of the 3D cell model, enabling accurate and fast life predictions of large LIBs. The developed LER cell model is validated against results of the 3D cell model for a simulated 20-Ah commercial pouch cell (NCM/graphite) and against experimental values. In addition, the LER cell model is applied to different cell types and sizes, such as a 20-Ah cylindrical cell and a 60-Ah pouch cell.
ARTICLE | doi:10.20944/preprints202011.0680.v1
Subject: Engineering, Other Keywords: Urban Drainage Systems; Sustainable Stormwater Management; Costa Rica; Place-based research; Transition Stages
Online: 27 November 2020 (09:02:24 CET)
Green infrastructure promotes the use of natural functions and processes as potential solutions to reduce the negative effects of anthropocentric interventions such as urbanization. In the cities of Latin America, for example, the need for more environmentally sound infrastructure is evident given their degree of urbanization, the degradation of ecosystems, and the alteration of the local water cycle. In this study, an experimental approach to the implementation of a prototype is presented. The experiment took place in a highly urbanized watershed located in the Metropolitan Area of Costa Rica. Initially, the characteristics of the study area at different scales were understood by applying the Urban Water System Transition Framework to identify the existing level of development of the urban water infrastructure and potential future stages. Subsequently, preferences related to spatial locations and technologies were identified from different local decision-makers. Those insights were used to identify a potential area for implementation of the prototype. The experiment consisted of an adaptation of the local sewer to act as a temporary reservoir to reduce the effects of rapid stormwater runoff generation. Unexpected events not considered in the initial design are reported in this study as a means to identify necessary adaptations of the methodology. Our study shows, from an experimental learning experience, that the relations between the different actors advocating for such technologies influence the implementation and operation of non-conventional technologies. Furthermore, the perception of security associated with green spaces was found to be a key driver in increasing the willingness of residents to modify their urban environments. Consequently, these aspects should be carefully considered in the design of engineering elements when they relate to complex socio-ecological urban systems.
ARTICLE | doi:10.20944/preprints201912.0381.v1
Subject: Physical Sciences, Mathematical Physics Keywords: singlet correlations; twisted Malus law; EPR-B experiments; local hidden variables; spinning coloured disk model; spinning coloured ball model; simulation models
Online: 29 December 2019 (11:28:25 CET)
The famous singlet correlations of a composite quantum system consisting of two spatially separated components exhibit notable features of two kinds. The first kind consists of striking certainty relations: perfect correlation and perfect anti-correlation in certain settings. The second kind consists of a number of symmetries, in particular, invariance under rotation, as well as invariance under exchange of components, parity, or chirality. In this note, I investigate the class of correlation functions that can be generated by classical composite physical systems when we restrict attention to systems which reproduce the certainty relations exactly, and for which the rotational invariance of the correlation function is the manifestation of rotational invariance of the underlying classical physics. I call such correlation functions classical EPR-B correlations. It turns out that the other three (binary) symmetries can then be obtained "for free": they are exhibited by the correlation function, and can be imposed on the underlying physics by adding an underlying randomisation level. We end up with a simple probabilistic description of all possible classical EPR-B correlations in terms of a "spinning coloured disk" model, and a research programme: describe these functions in a concise analytic way.
ARTICLE | doi:10.20944/preprints201701.0015.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: distributed generators; risk assessment; distribution systems; developed PEM-based method; optimal power flow algorithm
Online: 4 January 2017 (07:31:09 CET)
The intermittency and variability of permeated Distributed Generators (DGs) can pose serious security and economic risks to distribution systems. This paper applies a mathematical distribution to model the output variability and uncertainty of DGs. Four risk indices (EENS, PLC, EFLC and SI) are established to reflect the risk level of the distribution system. For the given mathematical distribution of the DGs' output power, a developed PEM (point estimate method)-based method is proposed to calculate these four system risk indices. In this method, an enumeration method is used to list the states of the distribution system, an improved PEM is presented to deal with the uncertainties of DGs, and the value of load curtailment in the distribution system is calculated by an optimal power flow algorithm. Finally, the effectiveness and advantages of the proposed PEM-based method for distribution system risk assessment were verified by tests on a modified IEEE 30-bus system. Simulation results show that the proposed PEM-based method has high computational accuracy and substantially reduced computational cost compared with other risk assessment methods, and is very effective for risk assessment.
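The core idea behind a point estimate method can be sketched in a few lines (a generic two-point, Rosenblueth-style estimate under a symmetric-input assumption — not the paper's improved PEM): moments of an output quantity are approximated from just two deterministic evaluations per uncertain input, instead of thousands of Monte Carlo samples, which is where the reduced computational cost comes from.

```python
def pem2(g, mean, std):
    """Two-point estimate of E[g(X)] for a symmetric input X with the
    given mean and standard deviation: evaluate g at the two
    concentration points mean - std and mean + std, weight 1/2 each."""
    return 0.5 * (g(mean - std) + g(mean + std))
```

For a quadratic output the estimate is exact: with X of mean 2 and standard deviation 1, E[X^2] = 2^2 + 1^2 = 5, and `pem2(lambda x: x * x, 2.0, 1.0)` returns 5.0. In a power-system setting, `g` would be an optimal power flow evaluation and `X` a DG's output power.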
ARTICLE | doi:10.20944/preprints202010.0303.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: fuzzy genetic algorithm; reachability property; deadlock; model checking
Online: 14 October 2020 (10:58:37 CEST)
Model checking techniques are often used for the verification of software systems and offer several advantages. However, state space explosion (SSE) is one of the main drawbacks of model checking. In recent years, several methods based on evolutionary and meta-heuristic algorithms have been proposed to solve this problem. In this paper, a hybrid approach is presented to cope with the SSE problem in model checking of systems modeled by graph transformation systems (GTS) with a large state space. Most existing methods that aim to verify such systems are applied to detect deadlocks via graph transformations. The proposed approach is based on a fuzzy genetic algorithm and is designed to refute the safety property by verifying the reachability property and detecting deadlocks. In this solution, the state space of the system is searched by a fuzzy genetic algorithm to find the state in which the specified property is refuted or verified. To implement and evaluate the suggested approach, GROOVE is used as a powerful design and model checking toolset for GTS. The experimental results indicate that the presented hybrid fuzzy method improves speed and performance compared with other techniques.
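Genetic search over a state space can be sketched as follows (a deliberately simplified toy: plain bit-vector states and a match-count fitness stand in for GTS states and the paper's fuzzy fitness):

```python
import random

def ga_search(goal, pop_size=40, generations=300, seed=1):
    """Toy genetic search for a goal state in a boolean state space.
    States are bit vectors; fitness counts bits matching the goal."""
    random.seed(seed)
    n = len(goal)

    def fitness(state):
        return sum(a == b for a, b in zip(state, goal))

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:              # goal state reached
            return pop[0]
        parents = pop[:pop_size // 2]         # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]         # one-point crossover
            child[random.randrange(n)] ^= 1   # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Instead of enumerating the whole state space, the search is guided toward the region where the target (e.g. a deadlock) lies, which is how such heuristics sidestep state space explosion at the cost of completeness.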
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Unsupervised anomalous sound detection; classification-based model; Outlier classifier; ID classifier
Online: 17 August 2021 (08:36:44 CEST)
The task of unsupervised anomalous sound detection (ASD) — detecting anomalous sounds in a large audio database without any annotated anomalous training data — is challenging. Many unsupervised methods have been proposed, but previous works have confirmed that classification-based models far exceed unsupervised models in ASD. In this paper, we adopt two classification-based anomaly detection models: (1) an outlier classifier distinguishes anomalous sounds, or outliers, from normal ones; (2) an ID classifier identifies anomalies using both the confidence of classification and the similarity of hidden embeddings. We conduct experiments on task 2 of the DCASE 2020 challenge, and our ensemble method achieves an averaged area under the curve (AUC) of 95.82% and an averaged partial AUC (pAUC) of 92.32%, which outperforms state-of-the-art models.
ARTICLE | doi:10.20944/preprints202104.0535.v1
Subject: Social Sciences, Accounting Keywords: model-based learning; mental health; physical activity; cognitive functions; active learning.
Online: 20 April 2021 (11:39:03 CEST)
This study examined the effect of an educational hybrid physical education (PE) intervention on cognitive performance and academic achievement in adolescents. A 9-month group-randomized controlled trial was conducted with 150 participants (age: 14.63 ± 1.38) allocated to a control group (CG, n = 37) and an experimental group (EG, n = 113). Inhibition, verbal fluency, planning and academic achievement were assessed. Significant differences were observed in the post-test for cognitive inhibition, verbal fluency in animals, and average verbal fluency in favour of the EG. With regard to the intervention, verbal fluency in animals, verbal fluency in vegetables, average verbal fluency, cognitive inhibition, language, the average of all subjects, the average of all subjects except PE, and the average of the core subjects increased significantly in the EG. The last five variables (the academic ones and cognitive inhibition) also increased in the CG, in addition to mathematics. This study contributes to the knowledge by suggesting that both methodologies produced improvements in the measured variables, but the use of a hybrid program based on TPSR and gamification strategies produced improvements in cognitive performance, specifically in cognitive inhibition and verbal fluency.
REVIEW | doi:10.20944/preprints202102.0179.v1
Subject: Medicine & Pharmacology, Allergology Keywords: evidence-based practice; clinical reasoning; causal model; intervention theory; Concept Mapping
Online: 8 February 2021 (10:35:52 CET)
Significant efforts in past decades to teach evidence-based practice (EBP) implementation have emphasized increasing knowledge of EBP and developing interventions to support its adoption in practice. These efforts have resulted in only limited sustained improvements in the daily use of evidence-based interventions in clinical practice in most health professions. Many new interventions with limited evidence of effectiveness are readily adopted each year, indicating that openness to change is not the problem. The selection of an intervention is the outcome of an elaborate and complex cognitive process, which is shaped by how clinicians represent the problem in their minds and is mostly invisible to others. Therefore, the complex thinking processes that support appropriate adoption of interventions should be taught more explicitly. Making the process visible to clinicians increases the acquisition of the skills required to judiciously select one intervention over others. The purpose of this paper is to provide a review of the selection process and the critical analysis required to appropriately decide whether or not to trial new intervention strategies with patients.
ARTICLE | doi:10.20944/preprints201705.0098.v1
Subject: Earth Sciences, Environmental Sciences Keywords: rule-based classification model; wetland remote sensing; SVM; TC-Wetness; China
Online: 11 May 2017 (08:03:34 CEST)
Wetlands are among the most bio-diverse and highest-productivity ecosystems on earth, making their monitoring a high priority for conservation, protection and management interests. Although visual interpretation of satellite images is generally precise for monitoring wetlands, recent works have emphasized computerized classification methods because of the reduction in analyst time. However, it is difficult to identify wetlands automatically based solely on spectral characteristics due to the complexity of wetland ecosystems. The ability to extract wetland information rapidly and accurately is the basis and key to wetland mapping at a large scale. Here we propose an operational method to map China's wetlands based on Landsat TM data and ancillary data. On the basis of a theoretical analysis of automatic wetland classification, we developed a revised multi-layer wetland classification scheme and a rule-based classification model. In the latter, supervised classification (SVM and decision tree) and unsupervised classification (ISODATA) methods were tested. Four Landsat TM images representing various wetland eco-regions in China (i.e. the Sanjiang Plain in northeast China, the North China Plain, the Zoige Plateau in southwest China and the Pearl River Estuary in southeast China) were automatically classified. The overall classification accuracies were 86.57%, 96.00%, 84.51% and 88.30%, respectively, which we consider satisfactory. Our results indicate that issues such as the resolution of geographic data and the understanding of wetland samples should be carefully addressed in the future.
ARTICLE | doi:10.20944/preprints202204.0229.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Solar energy; Refrigeration; Absorption-compression; Energy saving; Thermodynamic model
Online: 26 April 2022 (06:03:27 CEST)
Solar-assisted hybrid cooling systems are promising for the energy saving of refrigeration systems. In most cases, the solar thermal gain is only able to power the heat-driven process of a facility during part of the working period. Therefore, the reduction of compressor power strongly depends on the duration of the heat-driven processes, which has not been addressed properly. Motivated by this knowledge gap, the thermodynamic understanding of solar-assisted hybrid cooling systems is deepened by considering the duration of heat-driven processes. Three absorption-compression integrated cooling cycles were taken as examples. It is found that the optimal parameters, e.g., inter-stage pressure and temperature, corresponding to various performance indicators tend to be identical when the duration of heat-driven processes is taken into account. Furthermore, the optimal parameters for different working conditions were obtained. The dimensionless optimal intermediate temperature of the layout with the cascade condensation process varies only slightly, e.g., by 4%, across conditions. Moreover, the reduction in compressor power over the entire working period is nearly independent of the intermediate temperature. This paper supports the efficient design and operation of solar-assisted hybrid cooling systems.
REVIEW | doi:10.20944/preprints201912.0048.v2
Subject: Engineering, Energy & Fuel Technology Keywords: thermal desalination; reverse osmosis; advanced heat transfer fluids; sustainable desalination practices; integrated solar thermal nanofluids based desalination
Online: 9 January 2020 (08:39:19 CET)
Desalination accounts for 1% of total global water consumption and is an energy-intensive process, with the majority of operational expenses attributed to energy consumption. Moreover, at present, a significant portion of the power comes from traditional fossil fuel-fired power plants, and the greenhouse gas emissions associated with power production, along with concentrated brine discharge from the process, pose a severe threat to the environment. Given the dramatic impact of climate change, there is a major opportunity to develop sustainable desalination processes to combat the issues of brine discharge and greenhouse gas emissions, along with a reduction in energy consumption per unit of freshwater produced. Nanotechnology can play a vital role in reducing specific energy consumption, as the application of nanofluids increases the overall heat transfer coefficient, enabling the production of more water from a desalination plant of the same size. Furthermore, concentrated brine discharge harms marine ecosystems, and hence this problem must also be solved to support the objective of sustainable desalination. Several studies have been carried out in the past several years in the field of nanotechnology applications for desalination, brine treatment and the role of renewable energy in desalination. This paper aims to review the major advances in the field of nanotechnology for desalination. Furthermore, a hypothesis for developing an integrated solar thermal and nanofluid-based sustainable desalination system, based on a circular economy model, is proposed.
ARTICLE | doi:10.20944/preprints201811.0146.v1
Subject: Engineering, Control & Systems Engineering Keywords: model predictive control; HVAC; climate control; flexible control technologies
Online: 7 November 2018 (06:40:45 CET)
The following paper describes an economical multiple model predictive control (EMMPC) for the air conditioning system of a confectionery manufacturer in Germany. The application is a packaging hall for chocolate bars, in which a new local conveyor belt air conditioning system is used, so that the temperature and humidity limits in the hall can be significantly extended. The EMMPC calculates the energy- or cost-optimal humidity and temperature set points in the hall. For this purpose, time-discrete state space models and an economic objective function, with which it is possible to react to flexible electricity prices in a cost-optimised manner, are created. A possible future electricity price model for Germany with a flexible EEG levy was used as the flexible electricity price. The flexibility potential is determined by variable temperature and humidity limits in the hall, which are oriented towards the comfort zone for persons performing light work, and by the building mass. The building mass of the created room model is used as a thermal energy store. Considering electricity price and weather forecasts as well as internal, production plan-dependent load forecasts, the model predictive controller directly controls the heating and cooling register and the humidifier of the air conditioning system.
ARTICLE | doi:10.20944/preprints202203.0019.v1
Subject: Engineering, Control & Systems Engineering Keywords: Autonomous surface vehicles (ASV); autonomous underwater vehicle (AUV); Control and guidance; nonlinear control; deterministic artificial intelligence (D.A.I.); model-following; R.L.S.; marine actuators
Online: 1 March 2022 (11:15:37 CET)
This study determines the threshold for the computational rate of actuator motor controllers for unmanned underwater vehicles necessary to accurately follow discontinuous square wave commands. Motors must track challenging square-wave inputs, and identification of key computational rates permits application of deterministic artificial intelligence (D.A.I.) to achieve tracking to a machine-precision degree of accuracy in direct comparison to other state-of-the-art approaches. All modeling approaches are validated in MATLAB simulations where the motor process is discretized at varying step sizes (inversely proportional to computational rate). At a large step size (slow computational rate), discrete D.A.I. shows a mean error more than three times larger than that of a ubiquitous model-following approach. Yet, at a smaller step size (faster computational rate), the mean error decreases by a factor of 10, only three percent larger than that of continuous D.A.I. Hence, the performance of discrete D.A.I. is critically affected by the sampling period used for discretization of the system equations and by the computational rate. Discrete D.A.I. should be avoided when small step-size discretization is unavailable. In fact, continuous D.A.I. surpassed all modeling approaches, which makes it the safest and most viable solution for future commercial applications in unmanned underwater vehicles.
ARTICLE | doi:10.20944/preprints201812.0058.v1
Subject: Engineering, Mechanical Engineering Keywords: big data; parameter estimation; model updating; system identification; sequential Monte Carlo sampler
Online: 4 December 2018 (11:17:24 CET)
In this paper the authors present a method which facilitates computationally efficient parameter estimation of dynamical systems from a continuously growing set of measurement data. It is shown that the proposed method, which utilises Sequential Monte Carlo samplers, is guaranteed to be fully parallelisable (in contrast to Markov chain Monte Carlo methods) and can be applied to a wide variety of scenarios within structural dynamics. Its ability to allow one's parameter estimates to converge as more data are analysed sets it apart from other sequential methods (such as the particle filter).
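A minimal sketch of the sequential reweighting idea, not the authors' implementation: each new datum multiplies every particle's weight by an independent likelihood factor, which is why the update parallelises trivially across particles. The Gaussian toy problem, prior, and all numbers are assumptions for illustration:

```python
import math
import random

def smc_update(particles, weights, datum, sigma=1.0):
    """One reweighting step: each particle's weight is multiplied by the
    likelihood of the new datum. Each product is independent of the others,
    so this loop is trivially parallelisable (unlike an MCMC chain).
    """
    new_w = [w * math.exp(-0.5 * ((datum - p) / sigma) ** 2)
             for p, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

random.seed(0)
true_theta = 2.0
particles = [random.gauss(0.0, 3.0) for _ in range(2000)]   # prior draws
weights = [1.0 / len(particles)] * len(particles)
for _ in range(50):                          # continuously growing data set
    weights = smc_update(particles, weights, random.gauss(true_theta, 1.0))
posterior_mean = sum(p * w for p, w in zip(particles, weights))
```

A practical SMC sampler would add resampling and move steps to fight weight degeneracy; this fragment only illustrates the parallelisable weight update as data arrive.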
REVIEW | doi:10.20944/preprints202009.0652.v3
Subject: Mathematics & Computer Science, Other Keywords: quantified self; health; physical activity; behavior change; model; support system; persuasive design; user centered design
Online: 7 December 2020 (10:53:52 CET)
Since the emergence of the quantified-self movement, users have aimed at health behavior change, but only those who are sufficiently motivated and competent with the tools succeed. Our literature review shows that theoretical models for the quantified self exist, but they are too abstract to guide the design of effective user support systems. Here, we propose principles linking theory and implementation to arrive at a hierarchical model for an adaptable and personalized self-quantification system for physical activity support. We show that such a modeling approach should include a multi-factor user model (activity, context, personality, motivation), a hierarchy of multiple time scales (week, day, hour), and a multi-criteria decision analysis (user activity preference, user measured activity, external parameters). This theoretical groundwork, which should facilitate the design of more effective solutions, must now be validated by further empirical research.
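The multi-criteria decision analysis mentioned above can be sketched as a weighted-sum score over the three criteria the review names. The candidate activities, criterion values, and weights below are hypothetical, not taken from the paper:

```python
def recommend(activities, weights):
    """Toy weighted-sum multi-criteria scoring: each candidate activity is
    scored on (user preference, measured past activity, external suitability),
    all on a 0-1 scale, and the highest-scoring activity is recommended.
    """
    def score(criteria):
        return sum(w * c for w, c in zip(weights, criteria))
    return max(activities, key=lambda item: score(item[1]))[0]

candidates = [
    ("walk",    (0.9, 0.8, 0.7)),   # (preference, measured activity, weather)
    ("run",     (0.4, 0.3, 0.7)),
    ("cycling", (0.7, 0.5, 0.2)),
]
best = recommend(candidates, weights=(0.5, 0.3, 0.2))
```

A real system would adapt the weights per user and time scale, as the proposed hierarchical model suggests; the weighted sum is only the simplest possible aggregation.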
ARTICLE | doi:10.20944/preprints202208.0490.v1
Subject: Engineering, Mechanical Engineering Keywords: cardiovascular 0-D model; pulmonary arterial pressure; gradient-based optimization; automatic differentiation
Online: 29 August 2022 (10:57:18 CEST)
Reliable quantification of pulmonary arterial pressure is essential in the diagnostic and prognostic assessment of a range of cardiovascular pathologies including rheumatic heart disease, yet an accurate and routinely available method for its quantification remains elusive. This work proposes an approach to infer pulmonary arterial pressure based on scientific machine learning techniques and non-invasive, clinically available measurements. A 0-D multicompartment model of the cardiovascular system was optimized using several optimization algorithms, subject to forward-mode automatic differentiation. Measurement data were synthesized from known parameters to represent healthy, mitral regurgitation, aortic stenosis, and combined valvular disease conditions, with and without pulmonary hypertension. Eleven model parameters were selected for optimization based on explaining 95% of the variation in mean pulmonary arterial pressure. A hybrid Adam and limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer yielded the best results with input data including valvular flow rates, heart chamber volume changes and systemic arterial pressure. Mean absolute percentage errors ranged from 1.8% to 3.78% over the simulated test cases. The model was able to capture pressure dynamics under hypertensive conditions, with average percentage errors for systolic, diastolic, and mean pulmonary arterial pressure of 1.12%, 2.49% and 2.14%, respectively. The relatively low errors highlight the potential of the proposed model to recover pulmonary pressures for diseased heart valve and pulmonary hypertensive conditions.
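The forward-mode automatic differentiation driving the optimization can be sketched with dual numbers. The toy objective below (fitting a single resistance parameter so that pressure = resistance x flow matches data) is an assumption standing in for the paper's 0-D cardiovascular model; only the differentiation mechanism is the point:

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    each value carries its derivative, propagated by the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def loss(r):
    # toy objective: fit resistance r so model pressure r*q matches data
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (flow q, pressure p)
    total = Dual(0.0)
    for q, p in data:
        e = r * q - p
        total = total + e * e
    return total

# gradient descent, seeding dr/dr = 1 for forward mode
r = 1.0
for _ in range(200):
    g = loss(Dual(r, 1.0)).dot      # exact derivative via dual numbers
    r -= 0.02 * g
```

The iteration converges to the true resistance of 2.0; production-grade work would use an AD library (as the paper does) rather than a hand-rolled dual-number class.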
Subject: Mathematics & Computer Science, Other Keywords: reinforcement learning; bitrate streaming; world-models; video streaming; model-based reinforcement learning
Online: 20 August 2020 (07:02:57 CEST)
Adaptive bitrate (ABR) algorithms optimize the quality of streaming experiences for users in client-side video players, especially in unreliable or slow mobile networks. Several rule-based heuristic algorithms achieve stable performance, but they sometimes fail to adapt properly to changing network conditions. Fluctuating bandwidth may cause such algorithms to default to behavior that creates a negative experience for the user. ABR algorithms can instead be generated with reinforcement learning, a decision-making paradigm in which an agent learns to make optimal choices through interactions with an environment. Training reinforcement learning algorithms for bitrate streaming requires building a simulator in which an agent can experience interactions quickly; training an agent in the real environment is infeasible due to long step times. This project explores using supervised learning to construct a world-model, or learned simulator, from recorded interactions. A reinforcement learning agent trained inside the learned model, rather than a hand-built simulator, can outperform rule-based heuristics. Furthermore, agents trained inside the learned world-model can outperform model-free agents in low-sample regimes. This work highlights the potential of world-models as quickly learned simulators that can be used to generate optimal policies.
ARTICLE | doi:10.20944/preprints201810.0668.v1
Subject: Engineering, Other Keywords: climate change; carbon emissions; low carbon city; sustainability; strategy-based model; SLCM
Online: 29 October 2018 (09:55:25 CET)
Low carbon cities are increasingly forming a distinct strand of the sustainability literature. Models have been developed to measure the performance of low carbon cities. The purpose of this paper is to formulate a strategy-based model to evaluate the current performance and predict future conditions of low carbon cities. It examines the dynamic interrelationships between key performance indicators (KPIs), induces changes to city plan targets and then instantly predicts the outcome of these changes. Designed to be generic and flexible, the proposed model shows how low carbon targets could be used to guide the transformation of low carbon cities under four strategies: (1) passive intervention, (2) problem solving, (3) trend modifying and (4) opportunity seeking. Further, the model has been applied to 17 cities and then tested on 5 cities: London, New York, Barcelona, Dubai and Istanbul. The paper concludes with policy implications to realign city plans and support low carbon innovation.
ARTICLE | doi:10.20944/preprints201808.0550.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Food Safety; Agent-Based Model; Social Networking; Recommendation; wisdom of the crowd
Online: 31 August 2018 (14:37:36 CEST)
"The wisdom of the crowd" is often observed in the social discourses and activities around us. Its manifestations are, however, so intrinsically embedded and behaviorally accepted that an elaboration of a social phenomenon evidencing such wisdom is often cheered as a discovery, or at least an astonishing fact. One such scenario is explored here, namely the conceptualization and modeling of a food safety system, a system directly related to social cognition. Food safety is a current area of concern, and models representing food safety systems have recently been published to study the effect of interactions between important entities of the system. For example, Knowles's model finds conditions leading to a more efficient and dependable system of entities such as consumers, regulators and stores, with a specific focus on regulators' behavior and its impact on food safety. The first contribution of this paper is a reevaluation of Knowles's model towards a more conscious understanding of the effects of "the wisdom of the crowd" on inspection and consuming behaviors. The second contribution is the augmentation of the model with social networking capabilities, which act as a medium to spread information about stores and help consumers find stores that are not contaminated. Simulation results reveal that stores' respect for social cognition improves the effectiveness of the food safety system for both consumers and stores. The findings also reveal that an active society has the capability to self-organize effectively even in the absence of any regulatory compulsion.
ARTICLE | doi:10.20944/preprints202010.0288.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Fuzzy Adaptive Particle Swarm Optimization; Graph Transformation System; Model Checking; Reachability Property; State Space Explosion
Online: 14 October 2020 (08:21:28 CEST)
Nowadays, model checking is applied as an accurate technique to verify software systems. The main problem of model checking techniques is state space explosion, which occurs due to the exponential memory usage of the model checker. In this situation, using meta-heuristic and evolutionary algorithms to search for a state in which a property is satisfied or violated is a promising solution. Recently, different evolutionary algorithms such as GA and PSO have been applied to find deadlock states; although useful, most of them concentrate only on finding deadlocks. This paper proposes a fuzzy algorithm to analyze reachability properties in systems specified through graph transformation systems (GTS) with enormous state spaces. To do so, we first extend the existing PSO algorithm (for checking deadlocks) to analyze reachability properties. Then, to increase accuracy, we employ a fuzzy adaptive PSO algorithm to determine which state and path should be explored in each step to find the corresponding reachable state. These two approaches are implemented in GROOVE, an open-source toolset for designing and model checking GTS. The experimental results indicate that the hybrid fuzzy approach improves speed and accuracy in analyzing reachability properties compared with other techniques based on meta-heuristic algorithms, such as GA and the PSO-GSA hybrid.
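The PSO search underlying the approach can be sketched on a toy problem. Here particles explore a continuous two-dimensional space and the fitness is the distance to a target state, a stand-in for "distance to a state satisfying the reachability property"; the swarm parameters and target are illustrative assumptions, and the paper's fuzzy adaptation of the parameters is not shown:

```python
import random

def pso(fitness, dim=2, swarm=30, iters=200, seed=1):
    """Minimal particle swarm optimisation: particles move under inertia
    plus attraction to their personal best and the global best position."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# toy "reachability goal": find the state (3, -1); fitness is distance to it
target = (3.0, -1.0)
best = pso(lambda s: sum((a - b) ** 2 for a, b in zip(s, target)))
```

In the paper's setting the search space is the discrete state space of a GTS rather than a continuous box, so the position/velocity update has to be adapted accordingly.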
ARTICLE | doi:10.20944/preprints202009.0687.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: mathematical model; reduction; identification procedure; incorrectness; neural network; ordinary differential equation (ODE)
Online: 28 September 2020 (14:12:12 CEST)
The paper proposes a new principle for finding and removing elements of a mathematical model that are redundant in terms of the parametric identification of the model. This allows the computational and time complexity of applications built on the model to be reduced, which is especially important for AI-based systems, systems based on IoT solutions, distributed systems, etc. Moreover, the complexity reduction allows the accuracy of the implemented mathematical models to be increased. Although model order reduction methods are well known, they depend strongly on the problem area; the proposed reduction principle, by contrast, can be used in different areas, as demonstrated in this paper. The proposed method for the reduction of mathematical models of dynamic systems also allows the assessment of the requirements for the parameters of simulator elements to ensure a specified accuracy of dynamic similarity. The efficiency of the principle is shown on ordinary differential equations and on a neural network model. The given examples demonstrate the efficient normalizing properties of the reduction principle for mathematical models in the form of neural networks.
ARTICLE | doi:10.20944/preprints201901.0067.v1
Subject: Engineering, Energy & Fuel Technology Keywords: wind farm production maximisation; coordinated control; $C_P$-based optimisation; yaw-based optimisation; wake effects; turbulence intensity; Jensen model; particle swarm optimisation
Online: 8 January 2019 (11:34:39 CET)
A practical wind farm controller for production maximisation based on coordinated control is presented. The farm controller emphasises computational efficiency without compromising accuracy. The controller combines Particle Swarm Optimisation (PSO) with a turbulence-intensity-based Jensen wake model (TI-JM) to exploit the benefits of either curtailing upstream turbines via the coefficient of power ($C_P$) or deflecting wakes by applying yaw offsets, so as to maximise net farm production. First, TI-JM is evaluated against the conventional control benchmarking tool WindPRO and real-time SCADA data from three operating wind farms. Then the optimised strategies are evaluated using simulations based on TI-JM and PSO. The innovative control strategies can optimise a medium-size wind farm, Lillgrund, consisting of 48 wind turbines, requiring less than 50 seconds for a single simulation and increasing farm efficiency by up to 6% in full wake conditions.
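The classic Jensen (top-hat) wake model at the core of TI-JM can be written in a few lines. The thrust coefficient, distances, and wake-decay constant below are illustrative values; the paper's TI-JM variant additionally makes the decay constant a function of turbulence intensity, which is not shown here:

```python
import math

def jensen_deficit(ct, x, r0, k=0.05):
    """Classic Jensen wake model: fractional wind-speed deficit seen at
    distance x downstream of a rotor of radius r0 with thrust coefficient
    ct; k is the wake-decay constant."""
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

u_inf = 8.0                                  # free-stream wind speed, m/s
u_wake = u_inf * (1.0 - jensen_deficit(ct=0.8, x=400.0, r0=40.0))
```

Curtailing an upstream turbine lowers its `ct`, which shrinks the deficit its neighbours see; this is the trade-off the PSO explores across the whole farm.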
ARTICLE | doi:10.20944/preprints202107.0087.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Electric Vehicles; Stationary Battery Energy Storage System; Battery Automated System; Online State Estimation; Thermal Modeling; First-order model; Second-order Model; Kalman Filtering
Online: 5 July 2021 (10:11:31 CEST)
Estimation of core and surface temperature is one of the crucial functionalities of a lithium-ion Battery Management System (BMS) towards providing effective thermal management, fault detection and operational safety. While it is impractical to measure core temperature using physical sensors, implementing a complex estimation strategy in an on-board low-cost BMS is challenging due to the high computational cost and the cost of implementation. Typically, a temperature estimation scheme consists of a heat generation model and a heat transfer model. Several researchers have already proposed a range of thermal models with different levels of accuracy and complexity. Broadly, there are first-order and second-order heat capacitor-resistor-based thermal models of lithium-ion batteries (LIBs) for core and surface temperature estimation. This paper presents a detailed comparative study between these two models, using extensive laboratory test data and simulation studies, to assess their suitability for online prediction in an on-board BMS. The aim is to determine whether it is worth investing in developing a second-order model instead of a first-order model with respect to prediction accuracy, considering modelling complexity, the experiments required and the computational cost. Both thermal models, along with the parameter estimation scheme, are modelled and simulated in the MATLAB/Simulink environment. The models are validated using laboratory test data from a cylindrical 18650 LIB cell. Further, a Kalman filter with appropriate process and measurement noise levels is used to estimate the core temperature from the measured surface and ambient temperatures. Results from the first-order and second-order models are analyzed for comparison.
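The combination of a first-order thermal model with a Kalman filter can be sketched as a scalar filter. The model coefficients, heat-generation term, and noise levels below are toy assumptions, not the paper's identified parameters; the sketch only shows the predict/update structure for estimating an unmeasurable core temperature from a noisy surface measurement:

```python
import random

def kalman_core_temp(meas, t_amb, q_gen, a=0.95, h=0.5, q_var=0.01, r_var=0.25):
    """Scalar Kalman filter for a toy first-order RC thermal model:
    core state  T[k+1] = a*T[k] + (1-a)*t_amb + q_gen
    measurement z[k]   = h*T[k] + (1-h)*t_amb + noise (surface temperature)
    """
    t_est, p = t_amb, 1.0
    history = []
    for z in meas:
        # predict step through the thermal model
        t_est = a * t_est + (1.0 - a) * t_amb + q_gen
        p = a * a * p + q_var
        # update step with the surface-temperature measurement
        innov = z - (h * t_est + (1.0 - h) * t_amb)
        s = h * h * p + r_var
        k_gain = p * h / s
        t_est += k_gain * innov
        p *= (1.0 - k_gain * h)
        history.append(t_est)
    return history

# synthetic truth: the same model driven to a steady core temperature
random.seed(3)
t_amb, q_gen = 25.0, 0.5
truth, t = [], t_amb
for _ in range(300):
    t = 0.95 * t + 0.05 * t_amb + q_gen
    truth.append(t)
meas = [0.5 * t + 0.5 * t_amb + random.gauss(0, 0.5) for t in truth]
est = kalman_core_temp(meas, t_amb, q_gen)
```

A second-order model would add a second thermal state (and a 2x2 covariance); the paper's question is whether that extra fidelity pays for its identification and computation cost.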
ARTICLE | doi:10.20944/preprints201902.0150.v1
Subject: Engineering, Marine Engineering Keywords: preventive maintenance model; LNG cargo containment system; aging effect; dock specification; natural language processing
Online: 18 February 2019 (09:21:26 CET)
The high demand for liquefied natural gas (LNG) requires more LNG carriers (LNGCs) to be in operation. During transportation, there is a high risk due to the extremely low temperatures required and the explosive nature of LNG cargo. Moreover, when there is a lack of experience in operating old LNGCs, there is a serious concern regarding operational accidents. A systematic maintenance strategy, especially for LNG cargo containment systems, is crucial for maintaining safe LNG transportation at sea. The purpose of this study is to develop preventive maintenance models for LNG cargo containment systems by using dock specifications from LNGCs of various ages. The dock specifications from a conventional LNGC repair dock were analyzed using natural language processing techniques in order to develop the preventive maintenance models. From these results, and by considering the ship's age, it was found that for young LNGCs the repair priority should focus on checking routine consumable spare parts through tank inspections, whereas for older LNGCs the focus should be on tank condition maintenance rather than on other facilities. These results are expected to be useful in the development of a preventive maintenance strategy for LNG cargo containment systems in maritime LNG transportation.
ARTICLE | doi:10.20944/preprints201906.0049.v1
Subject: Social Sciences, Geography Keywords: mobile phone data; residents commuting behavior; agent-based model; urban planning; traffic congestion
Online: 6 June 2019 (11:31:48 CEST)
Commuting in big cities often brings tidal traffic pressure or congestion. Understanding the causes behind this phenomenon is of great significance for urban space optimization. Various sources of spatial big data make possible a fine-grained description of urban residents' travel behaviors and bring new approaches to related studies. The present study focuses on two aspects: obtaining relatively accurate features of commuting behaviors from mobile phone data, and simulating residents' commuting behaviors through an agent-based model to infer the causes of congestion. Taking the Baishazhou area of Wuhan, a local area of a mega city in China, as a case study, the travel behaviors of commuters are simulated as follows: the spatial context of the model is set up using the existing urban road network and by dividing the area into travel units; using a month of mobile phone call detail records (CDR), statistics of residents' travel during four time slots on working day mornings are acquired and used to generate OD matrices of travel at the different time slots; and the data are then imported into the model for simulation. With preset congestion rules, the agent-based model can effectively simulate the traffic conditions at each intersection, and it can also infer the causes of traffic congestion from the simulation results and the OD matrices. Finally, the model is used to evaluate road network optimization, showing evident effects of the adopted optimization measures in relieving congestion and thus demonstrating the value of this method in urban studies.
ARTICLE | doi:10.20944/preprints201812.0133.v1
Subject: Earth Sciences, Geoinformatics Keywords: Radiation risk analysis; GIS based model; thermal power plant; surface radiation; remedial measures
Online: 11 December 2018 (13:57:09 CET)
Coal combustion in thermal power plants releases ash, which is reported to cause various adverse health effects in humans and other organisms. Owing to the presence of radionuclides, it is also considered a potential radiation hazard. In this study, based on surface radiation measurements and relevant ancillary data, expected radiation risk zones were identified for the human population residing near the thermal power plant. With population density as the risk-determining criterion, about 20% of the study area fell in the 'High' risk zone and another 20% in the 'Low' risk zone, with the remaining 60% in the medium risk zone. Based on these findings, remedial measures that may be adopted have been suggested.
ARTICLE | doi:10.20944/preprints202108.0259.v1
Subject: Life Sciences, Other Keywords: SBML; kinetic models; time-course simulation; steady-state simulation; parameter estimation; model calibration; software; web application
Online: 11 August 2021 (12:19:38 CEST)
In systems biology, biological phenomena are often modeled by ODEs and distributed in the de facto standard file format SBML. The primary analyses performed with such models are dynamic simulation, steady-state analysis, and parameter estimation. These methodologies are mathematically formalized, and libraries for such analyses have been published. Several tools exist to create, simulate, or visualize models encoded in SBML. However, setting up and establishing analysis environments is a crucial hurdle for non-modelers, so easy access to fundamental analyses of ODE models remains a significant challenge. To address this issue, we developed SBMLWebApp, a web-based service that executes SBML-based simulations, steady-state analysis, and parameter estimation directly in the browser without any setup or prior knowledge. SBMLWebApp visualizes the result and a numerical table for each analysis and provides the results for download. It also allows users to select and analyze SBML models directly from the BioModels Database. Taken together, SBMLWebApp provides barrier-free access to an SBML analysis environment for simulation, steady-state analysis, and parameter estimation of SBML models. SBMLWebApp is implemented in Java™ on an Apache Tomcat® web server using COPASI, the SBSCL, and LibSBMLSim as simulation engines. SBMLWebApp is licensed under MIT with source code available from https://github.com/TakahiroYamada/SBMLWebApp. The program runs online at http://simulate-biology.org.
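The time-course simulation such tools perform can be sketched for a one-reaction kinetic model. The reaction (S -> P with mass-action rate k*S), rate constant, and initial amounts below are illustrative assumptions and are hand-coded rather than parsed from an SBML file:

```python
def simulate_ode(rates, y0, dt=0.01, t_end=10.0):
    """Explicit-Euler time-course simulation of a toy kinetic model:
    a single mass-action reaction S -> P with flux v = k*S."""
    k = rates["k"]
    s, p = y0
    for _ in range(int(t_end / dt)):
        v = k * s          # mass-action reaction flux
        s -= v * dt        # substrate consumed
        p += v * dt        # product formed
    return s, p

s_end, p_end = simulate_ode({"k": 0.8}, y0=(10.0, 0.0))
```

Note that total mass S + P is conserved exactly by this update; real simulators use adaptive, higher-order integrators, but the structure (rates evaluated from the current state, then integrated forward) is the same.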
ARTICLE | doi:10.20944/preprints202208.0123.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: systems analysis; model predictive control; transcranial electrical stimulation; functional near infrared spectroscopy; pupillometry
Online: 5 August 2022 (14:26:00 CEST)
Individual differences in the responsiveness of the brain to transcranial electrical stimulation (tES) are increasingly evident in the large variability of tES effects. Anatomically detailed computational brain models have been developed to address this variability; however, static brain models are not 'realistic' in accounting for the dynamic state of the brain. Therefore, human-in-the-loop optimization is proposed in this perspective article, based on an extensive systems analysis of the tES neurovascular effects. First, modal analysis was conducted using a physiologically detailed neurovascular model; it found stable modes in the 0 Hz to 0.05 Hz range for the vessel-response pathway through the smooth muscle cells, measured with functional near-infrared spectroscopy (fNIRS). tES effects in the 0 Hz to 0.05 Hz range can also be measured with functional magnetic resonance imaging (fMRI)-tDCS data with a maximum TR of 10 s. Therefore, we investigated an open-source fMRI-tDCS dataset that used a TR of 3.36 s. We found that both the anodal tDCS condition and the sham tDCS condition had similar finite impulse response estimates at the region of interest underlying the anode and at a remote location, which indicated a global hemodynamic effect of sham tDCS beyond the intended transient sensations. Since transient sensations can have arousal effects on the hemodynamics, we conducted a healthy case series for black-box modeling of fNIRS-pupillometry of short-duration tDCS effects. The block exogeneity test rejected the claim that tDCS is not a 1-step Granger-cause of the fNIRS total hemoglobin changes (HbT) and pupil dilation changes (p<0.05). Also, grey-box modeling using fNIRS of the tDCS effects in chronic stroke showed the HbT response to be significantly different (paired-sample t-test, p<0.05) between the ipsilesional and contralesional hemispheres for primary motor cortex tDCS and cerebellar tDCS, which was subserved by the smooth muscle cells.
Here, our perspective is that various physiological pathways subserving tES effects can lead to state-trait variability that can be challenging for clinical translation. Therefore, we conducted a case study on human-in-the-loop optimization using our reduced dimension model and a stochastic, derivative-free Covariance Matrix Adaptation Evolution Strategy. Future studies need to investigate human-in-the-loop optimization of tES for reducing inter-subject and intra-subject variability in tES effects.
ARTICLE | doi:10.20944/preprints201806.0272.v1
Subject: Social Sciences, Other Keywords: environmental attitudes; social-ecological systems; coral reef; scale development; item-response theory; reliability; generalized structural equation model
Online: 18 June 2018 (15:26:36 CEST)
This study addresses the latent construct of attitudes towards environmental conservation based on study participants' responses. We measured and evaluated the latent scale based on an 18-item scale instrument over four experimental strata (N=945) in the US Virgin Islands and the Caribbean. We estimated the latent scale's reliability and validity. We further fitted multiple alternative two-parameter logistic (2PL) and graded response models (GRM) from Item-Response Theory. We finally constructed and fitted equivalent structural and generalized structural equation models (SEM/GSEM) for the attitudinal latent scale. All scale measures (composite, alpha-based, IRT-based and SEM-based) were consistently reliable and valid measures of the study participants' latent attitudes toward conservation. We found statistically significant differences among participants' attributes relating to socio-demographic, physical and core environmental characteristics. We assert that there is a substantive relationship between cognitive attitudes and both individual and social behavior related to environmental conservation.
REVIEW | doi:10.20944/preprints202006.0050.v1
Subject: Behavioral Sciences, Other Keywords: competitive learning and memory functions; cognitive development; basal ganglia; medial temporal lobe; prefrontal cortex; model-based learning; model-free learning
Online: 5 June 2020 (14:10:15 CEST)
There has been growing interest in incorporating psychological and neuroscientific knowledge about the development of cognitive functions into educational policies and academic practices. In this paper, we argue that current knowledge about the interactions between these functions and their neurodevelopmental characteristics should also be considered in order to develop practices better suited to pupils depending on their age. To facilitate this, we review current neuroscientific knowledge on the competitive interactions between two neural circuits underlying distinct learning functions, their developmental trajectories, and how they are linked to other functions such as cognitive control. Incorporating this knowledge into education could help improve academic outcomes.
REVIEW | doi:10.20944/preprints201908.0166.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: machine learning; deep learning; big data; hydrology; climate change; global warming; hydrological model; earth systems
Online: 15 August 2019 (05:50:48 CEST)
Artificial intelligence methods and applications have recently contributed greatly to the modeling and prediction of hydrological processes, climate change, and earth systems. Among them, deep learning and machine learning methods have mainly been reported as essential for achieving higher accuracy, robustness, and efficiency at lower computational cost, and for better overall model performance. This paper presents the state of the art of machine learning and deep learning methods and applications in this realm, and discusses the current state and future trends. The advances in machine learning and deep learning are surveyed through a novel classification of methods. The paper concludes that deep learning is still in its early stages of development and the research is still progressing, whereas machine learning methods are already established in the field and novel methods with higher performance are emerging through ensemble techniques and hybridization.
ARTICLE | doi:10.20944/preprints201808.0545.v2
Subject: Engineering, Electrical & Electronic Engineering Keywords: model intercomparison; renewable energy; production cost modeling; security-constrained unit commitment; open-source software
Online: 24 December 2018 (10:55:11 CET)
Background: New open-source electric-grid planning models have the potential to improve power system planning and bring a wider range of stakeholders into the planning process for next-generation, high-renewable power systems. However, it has not yet been established whether open-source models perform similarly to the more established commercial models for power system analysis. This reduces their credibility and attractiveness to stakeholders, postponing the benefits they could offer. In this paper, we report the first model intercomparison between an open-source power system model and an established commercial production cost model. Results: We compare the open-source Switch 2.0 to GE Energy Consulting's Multi Area Production Simulation (MAPS) for production-cost modeling, considering hourly operation under 17 scenarios of renewable energy adoption in Hawaii. We find that after configuring Switch with similar inputs to MAPS, the two models agree closely on hourly and annual production from all power sources. Comparing production gave a coefficient of determination of 0.996 across all energy sources and scenarios, indicating that the two models agree on 99.6% of the variation. For individual energy sources, the coefficient of determination ranged from 69% to 100%. Conclusions: Although some disagreement remains between the two models, this work indicates that Switch is a viable choice for renewable integration modeling, at least for the small power systems considered here.
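The coefficient of determination used for the intercomparison is straightforward to compute. The hourly-production numbers below are hypothetical stand-ins for the MAPS and Switch outputs, not the Hawaii data:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: the fraction of variance in one
    model's output explained by the other model's output."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

maps_mwh = [120.0, 95.0, 88.0, 130.0, 101.0]      # hypothetical MAPS output
switch_mwh = [119.0, 96.0, 87.5, 131.0, 100.0]    # hypothetical Switch output
r2 = r_squared(maps_mwh, switch_mwh)
```

An R² of 0.996, as the paper reports across all sources and scenarios, means only 0.4% of the variation in one model's production is unexplained by the other.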
ARTICLE | doi:10.20944/preprints202009.0381.v1
Subject: Life Sciences, Biotechnology Keywords: high throughput screening; rapid phenotyping; model-based experimental design; Escherichia coli; automated bioprocess development
Online: 17 September 2020 (07:34:19 CEST)
In bioprocess development, the host and the genetic construct for a new biomanufacturing process are selected in the early developmental stages. This decision, made at the screening scale with very limited information about the performance of the selected cell factory in larger reactors, has a major influence on the performance of the final process. To overcome this, scale-down approaches are essential for running screenings that show the real cell factory performance under industrial-like conditions. We present a fully automated robotic facility with 24 parallel mini-bioreactors that is operated by a model-based adaptive input design framework for the characterization of clone libraries under scale-down conditions. The cultivation operation strategies are computed and continuously refined based on a macro-kinetic growth model that is continuously re-fitted to the available experimental data. The added value of the approach is demonstrated with 24 parallel fed-batch cultivations in a mini-bioreactor system with eight different Escherichia coli strains in triplicate. The 24 fed-batches ran under the desired conditions, generating sufficient information to identify the fastest-growing strain in an environment with varying glucose concentrations similar to industrial-scale bioreactors.
ARTICLE | doi:10.20944/preprints202103.0531.v2
Subject: Engineering, Energy & Fuel Technology Keywords: Sector coupling; 100% renewable; Sub-national energy model; Energy transition; Open science.
Online: 24 March 2021 (13:32:30 CET)
The energy transition requires the integration of different energy carriers, including the electricity, heat, and transport sectors. Energy modeling methods and tools are essential to provide clear insight into the energy transition. However, these methodologies often overlook the details of small-scale energy systems. This study presents an innovative approach to facilitate sub-national energy systems with 100% renewable penetration and sectoral integration. An optimization model, OSeEM-SN, is developed under the Oemof framework and validated using the case study of Schleswig-Holstein. The study assumes three scenarios, representing 25%, 50%, and 100% of the total available biomass potential. OSeEM-SN reaches feasible solutions without additional offshore wind investment, indicating that offshore wind capacity can be reserved for supplying other states' energy demand. The annual investment cost varies between €1.02 bn and €1.44 bn per year across the three scenarios. Electricity generation decreases by 17%, indicating that with a high share of biomass-based combined heat and power plants, the curtailment of other renewable plants can be decreased. Ground source heat pumps dominate the heat mix; however, their installation decreases by 28% as biomass penetrates fully into the energy mix. The validation confirms OSeEM-SN as a useful tool for examining different scenarios for sub-national energy systems.
ARTICLE | doi:10.20944/preprints201802.0018.v1
Subject: Materials Science, General Materials Science Keywords: surface energy, interfacial energy, surface tension, wetting model, wetting thermodynamics, sessile drop shape, microgravity
Online: 2 February 2018 (13:05:40 CET)
In this study, the interfacial energies of seven different polymer-water systems obtained by Sessile Drop Accelerometry (SDACC) are compared with the values obtained by the Young's-equation-based Owens-Wendt method. The SDACC laboratory instrument (a combination of a drop shape analyzer with a high-speed camera and a microgravity tower) and its evaluation algorithms are designed to measure interfacial energies from the geometrical changes of a sessile droplet's shape caused by “switching off” gravity during the experiment. The method is based on the thermodynamics of interfaces and differs from the conventional approach of the two-hundred-year-old Young's equation in that it assumes a thermodynamic equilibrium between interfaces, rather than a balance of forces at a point on the solid-liquid-gas contact line. A comparison of the mathematical model that supports the SDACC method with the widely accepted Young's equation is discussed in detail in this study.
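For context, the two models being compared can be written compactly. Young's equation balances interfacial tensions at the three-phase contact line, and the Owens-Wendt method splits the surface energies into dispersive (d) and polar (p) components (standard textbook forms, not necessarily the paper's exact notation):

```latex
% Young's equation: force balance at the contact line
\gamma_{sv} = \gamma_{sl} + \gamma_{lv}\cos\theta
% Owens-Wendt geometric-mean combining rule
\gamma_{sl} = \gamma_{s} + \gamma_{l}
            - 2\sqrt{\gamma_{s}^{d}\,\gamma_{l}^{d}}
            - 2\sqrt{\gamma_{s}^{p}\,\gamma_{l}^{p}}
```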
REVIEW | doi:10.20944/preprints201607.0012.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: role-based access control; attribute-based access control; attribute-based encryption
Online: 8 July 2016 (10:12:21 CEST)
Cloud computing is a promising and emerging technology that is being rapidly adopted by many IT companies due to the benefits it provides, such as large storage space, low investment cost, virtualization, and resource sharing. Users can store vast amounts of data and information in the cloud and access them from anywhere, anytime, on a pay-per-use basis. Since many users share the data and resources stored in the cloud, access must be granted only to those users who are authorized. This can be done through access control schemes, which allow authenticated and authorized users to access the data and deny access to unauthorized users. In this paper, we present a comprehensive review and analysis of existing access control schemes.
ARTICLE | doi:10.20944/preprints202001.0265.v1
Subject: Social Sciences, Economics Keywords: climate change; sustainable intensification (SI); smallholders; meta-analysis; random-effect model; Adoption, Southern Africa Development Community (SADC); effect size
Online: 23 January 2020 (14:03:51 CET)
Climate change and environmental degradation are major threats to sustainable agricultural development in Southern Africa. Thus, the concept of sustainable intensification (SI), i.e., getting more output from less input using practices such as agroforestry, organic fertilizer, and sustainable water management, has become an important topic among researchers and policy makers in the region over the last three decades. A comprehensive review of the literature on the adoption of SI in the region identifies nine relevant drivers of adoption among (smallholder) farmers: (i) age, (ii) size of arable land, (iii) education, (iv) extension services, (v) gender, (vi) household size, (vii) income, (viii) membership in a farming organization, and (ix) access to credit. We present the results of a meta-analysis of 21 papers on the impact of these determinants on SI adoption among (smallholder) farmers in the Southern African Development Community (SADC), using random-effects estimation of the true effect size. Our results suggest that variables such as extension services, education, age, and household size may influence the adoption of SI in SADC, and that access to credit is also of great importance. Decision-makers should therefore concentrate efforts on these factors in promoting SI across the SADC, including increasing the efficiency of public extension services and involving the private sector in extension service delivery. Furthermore, both public and private agricultural financing models should consider sustainability indicators in their assessment processes.
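A minimal sketch of random-effects pooling of the kind used in such a meta-analysis, assuming the common DerSimonian-Laird estimator for the between-study variance (the abstract does not state which estimator is used, and the study-level effects below are fabricated):

```python
import numpy as np

def random_effects_pool(y, v):
    """DerSimonian-Laird random-effects pooled effect size.
    y: per-study effect sizes; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# illustrative, made-up effects of a driver (e.g. extension services) on adoption
pooled, se, tau2 = random_effects_pool([0.30, 0.45, 0.10, 0.55],
                                       [0.02, 0.04, 0.03, 0.05])
print(pooled, se, tau2)
```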
ARTICLE | doi:10.20944/preprints202205.0244.v1
Subject: Engineering, Automotive Engineering Keywords: failure mode and effect analysis (FMEA); model-based design; automatic generation tool; fault injection simulation
Online: 18 May 2022 (12:40:58 CEST)
In the development of safety-critical systems, it is important to perform the Failure Modes and Effects Analysis (FMEA) process to identify potential failures. However, traditional FMEA activities tend to be difficult and time-consuming. To compensate for this, various tools are used to increase the quality and effectiveness of FMEA reports. This paper presents an Automatic FMEA tool that integrates Model-Based Design (MBD), FMEA, and simulated fault injection techniques in a single environment. The Automatic FMEA tool has the following advantages over existing FMEA analysis tools. First, it automatically generates FMEA reports, unlike traditional spreadsheet-based FMEA tools. Second, it analyzes the causality between failure modes and failure effects by performing model-based fault injection simulation. To demonstrate the applicability of the Automatic FMEA, we used an electronic fuel injection (EFI) system Simulink model and compared its results to those of a legacy FMEA.
Subject: Engineering, Control & Systems Engineering Keywords: hybrid energy storage system; L2-gain disturbance attenuation; passivity-based control; port-controlled Hamiltonian model
Online: 16 April 2020 (06:36:09 CEST)
Battery/supercapacitor (SC) current tracking control is a key issue for hybrid energy storage systems (HESS) in electric vehicles. An innovative passivity-based L2-gain adaptive control (PBL2AC), built on a port-controlled Hamiltonian model with dissipation (PCHD), is presented for reference current tracking and bus voltage stability in a HESS. The developed PCHD model considers both parameter variations and external disturbances. By using L2-gain disturbance attenuation, the PBL2AC ensures robust reference current tracking and a stable bus voltage. Moreover, an adaptive mechanism is adopted to estimate the electrical parameters. To validate the proposed control scheme, simulations and experiments were conducted and compared with traditional PID and sliding mode control under several typical driving cycles; the results confirm the effectiveness of the proposed controller.
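The PCHD structure underlying such a controller is the standard port-controlled Hamiltonian form (generic notation; the paper's model additionally carries parameter variations and disturbance inputs):

```latex
\dot{x} = \bigl[J(x) - R(x)\bigr]\,\frac{\partial H}{\partial x}(x) + g(x)\,u,
\qquad
y = g^{\top}(x)\,\frac{\partial H}{\partial x}(x)
```

where J(x) = -J^T(x) captures the lossless interconnection, R(x) >= 0 the dissipation, and H(x) the stored (Hamiltonian) energy; passivity-based design shapes H and injects damping through R.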
ARTICLE | doi:10.20944/preprints201910.0118.v1
Subject: Earth Sciences, Geology Keywords: statistics-based estimation model (sem); different geological condition; permeability coefficient; shearing strength; landslide-triggering factor
Online: 10 October 2019 (14:53:30 CEST)
In South Korea, landslides are caused by localized heavy rainfall and typhoons, which often occur in the summer season on natural slopes in mountainous areas and artificial slopes in urban surroundings. Flow-type landslides frequently occur in mountainous areas. To evaluate flow-type landslides, it is essential to identify the physical characteristics of the soil, focusing on the top soil layers of various types of slope. This study surveys and analyzes the characteristics of landslides that occurred in a study area with the differing geological conditions of granite and gneiss. The characteristics of soils in areas that have and have not undergone landslides are also evaluated for each geological condition. Based on these characteristics and a statistical method, the study extracts the triggering factors, permeability coefficient (k), and shearing strength, with cohesion (c) and internal friction angle (φ), of soils that are highly related to landslides in weathered soil layers. The permeability coefficients show significant relevance to void ratio (e), effective grain size (D10), and uniformity coefficient (cu), while the shearing strength relates to the proportion of fine-grained soil (Fines), uniformity coefficient (cu), degree of saturation (S), dry density (rd), and void ratio (e). From these results, the study uses regression analysis to suggest models for estimating the permeability coefficient and shearing strength. For the gneiss area, the statistics-based estimation model (SEM) is proposed as kgn = (1.488 × 10⁻² × e) + (1.076 × D10) − (1.629 × 10⁻⁴ × cu) − (1.893 × 10⁻²) for the permeability coefficient; cgn = −(0.712 × Fines) − (0.131 × cu) + 15.335 for cohesion; and φgn = (27.01 × rd) − (12.594 × e) + 6.018 for the internal friction angle of soils.
For the granite area, the SEM is proposed as kgr = (8.281 × 10⁻³ × e) + (0.639 × D10) − (2.766 × 10⁻⁵ × cu) − (9.907 × 10⁻³) for the permeability coefficient; cgr = −(0.689 × Fines) − (0.0744 × S) + 18.59 for cohesion; and φgr = (33.640 × rd) − (0.875 × e) − 9.685 for the internal friction angle of soils. The use of these SEMs for landslide-triggering factors supports the simple calculation of the permeability coefficient and shearing strength (cohesion and internal friction angle), requiring only information about the physical properties of the soil at natural slopes with different geological features, such as the gneiss and granite areas.
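The published SEMs are plain linear formulas, so they can be evaluated directly. A sketch using the gneiss-area coefficients quoted above (the input values are illustrative, not from the study, and units for k, c, and φ follow the paper's conventions, which are not restated here):

```python
def sem_gneiss(e, d10, cu, fines, rd):
    """Statistics-based estimation model (SEM) for the gneiss area,
    using the regression coefficients quoted in the abstract."""
    k = 1.488e-2 * e + 1.076 * d10 - 1.629e-4 * cu - 1.893e-2   # permeability coefficient
    c = -0.712 * fines - 0.131 * cu + 15.335                     # cohesion
    phi = 27.01 * rd - 12.594 * e + 6.018                        # internal friction angle
    return k, c, phi

# illustrative soil properties (invented for demonstration)
k, c, phi = sem_gneiss(e=0.8, d10=0.05, cu=10.0, fines=10.0, rd=1.4)
print(k, c, phi)
```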
ARTICLE | doi:10.20944/preprints201811.0479.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: mixing; CFD-simulation; surrogate-based optimization; compartmental modeling; competing reaction system; optimization; model order reduction
Online: 20 November 2018 (05:07:13 CET)
Mixing is considered a critical process parameter (CPP) during process development due to its significant influence on reaction selectivity and process safety. Nevertheless, mixing issues are difficult to identify and solve owing to their complexity and their dependence on knowledge of kinetics and hydrodynamics. In this paper, we propose an optimization methodology using Computational Fluid Dynamics (CFD)-based compartmental modelling to improve mixing and reaction selectivity. More importantly, we demonstrate that, through surrogate-based optimization, the proposed methodology can serve as a computationally non-intensive route to rapid process development of reaction unit operations. For illustration purposes, the reaction selectivity of a process with a Bourne competitive reaction network is discussed. Results demonstrate that reaction selectivity can be improved by dynamically controlling the rates and locations of feeding in the reactor. The proposed methodology combines mechanistic understanding of the reaction kinetics with an efficient optimization algorithm to determine the optimal process operation, and can thus serve as a tool for quality-by-design (QbD) during the product development stage.
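The surrogate idea can be illustrated in a few lines: replace an expensive simulator with a cheap fitted model and optimize that instead. A minimal sketch with a polynomial surrogate standing in for the CFD-based compartment model (the objective function below is a made-up placeholder, not the Bourne system):

```python
import numpy as np

def expensive_objective(x):
    """Placeholder for a costly CFD/compartment-model evaluation."""
    return (x - 2.0) ** 2 + 1.0   # true optimum at x = 2

# 1) sample the expensive model at a handful of design points
xs = np.array([0.0, 1.0, 2.5, 3.5, 5.0])
ys = expensive_objective(xs)

# 2) fit a cheap quadratic surrogate to those samples
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))

# 3) optimize the surrogate instead of the expensive model
grid = np.linspace(0.0, 5.0, 1001)
x_opt = grid[np.argmin(surrogate(grid))]
print(x_opt)
```

In practice the surrogate is refitted as new expensive samples arrive near the candidate optimum; the loop above shows only a single fit-then-optimize pass.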
ARTICLE | doi:10.20944/preprints201809.0481.v1
Subject: Engineering, Other Keywords: Brain-Computer Interfaces, spectrogram-based convolutional neural network model(pCNN), Deep Learning, EEG, LSTM, RCNN
Online: 25 September 2018 (08:58:34 CEST)
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: 1) a long short-term memory network (LSTM); 2) a proposed spectrogram-based convolutional neural network model (pCNN); and 3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without (manual) feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on the existing 2b EEG dataset from the "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
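The spectrogram input that gives the pCNN its name can be sketched without any deep learning framework: a short-time FFT of the raw signal yields a 2-D time-frequency image for the CNN. A minimal sketch on a synthetic "EEG-like" signal (window length, overlap, and sampling rate are arbitrary choices, not the paper's settings):

```python
import numpy as np

def stft_spectrogram(x, nperseg=64, noverlap=32):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns an array of shape (n_freqs, n_frames) suitable as CNN input."""
    step = nperseg - noverlap
    window = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * window
              for i in range(0, len(x) - nperseg + 1, step)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

fs = 250                                   # Hz, a typical EEG sampling rate
t = np.arange(0, 2.0, 1 / fs)              # 2 s of signal
x = np.sin(2 * np.pi * 10 * t)             # 10 Hz alpha-band-like oscillation
spec = stft_spectrogram(x)
print(spec.shape)                          # (frequencies, time frames)
```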
ARTICLE | doi:10.20944/preprints201611.0092.v2
Subject: Keywords: semantic spatial trajectory; role based access control; Bell-Lapadula model; multi-policy; Web Ontology Language
Online: 17 November 2016 (15:19:51 CET)
With the proliferation of locating devices, more and more raw spatial trajectories are being generated. Many works enrich these raw trajectories with semantics and mine patterns from both raw and semantic trajectories, but access control for spatial trajectories has not yet been considered. We present a multi-policy security model for semantic spatial trajectories. In our model, Mandatory Access Control, Role-Based Access Control, and Discretionary Access Control are all enforced, separately and in combination, and we represent the model semi-formally in the Web Ontology Language (OWL).
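A toy sketch of enforcing two of these policies together: a Bell-LaPadula MAC check ("no read up, no write down") combined with an RBAC permission check, with access granted only when both agree. Labels, roles, and levels are invented for illustration, not taken from the paper's model:

```python
LEVELS = {"public": 0, "confidential": 1, "secret": 2}            # MAC lattice
ROLE_PERMS = {"analyst": {"read"}, "editor": {"read", "write"}}   # RBAC table

def mac_allows(subject_level, object_level, action):
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if action == "read":    # simple-security property: no read up
        return s >= o
    if action == "write":   # *-property: no write down
        return s <= o
    return False

def rbac_allows(role, action):
    return action in ROLE_PERMS.get(role, set())

def allowed(role, subject_level, object_level, action):
    """Multi-policy decision: both MAC and RBAC must permit the access."""
    return mac_allows(subject_level, object_level, action) and rbac_allows(role, action)

print(allowed("analyst", "secret", "confidential", "read"))   # True
print(allowed("analyst", "secret", "confidential", "write"))  # False
```

A combined decision like this is conservative by construction: adding a third policy (e.g. discretionary ACLs) is another conjunct in `allowed`.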
ARTICLE | doi:10.20944/preprints202010.0010.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Totem-pole power factor correction; energy storage systems (ESS); digital control; Gallium Nitride (GaN) based; current harmonic distortion mitigation; efficiency and power quality improvement
Online: 1 October 2020 (09:12:36 CEST)
With the unceasing advancement of wide-bandgap (WBG) semiconductor technology, the minimal reverse-recovery charge Qrr and other superior characteristics of WBG transistors enable the totem-pole bridgeless PFC to become a dominant solution for energy storage systems (ESS). This paper focuses on the design and implementation of a control structure for a totem-pole boost PFC with enhancement-mode Gallium Nitride (eGaN) FETs, not only to simplify the control implementation but also to achieve high power quality and efficiency. The converter is designed to convert a 90-264 VAC input to a 385 VDC output at 2.6 kW output power. Finally, to validate the methodology, an experimental prototype is fabricated and characterized. The peak efficiency at 230 VAC reaches 99.14%. The lowest total harmonic distortion in the current (ITHD) at the high-line condition (230 V) is 1.52%, while the power factor reaches 0.9985.
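The reported THD and power factor figures are linked by the standard decomposition of power factor into displacement and distortion terms (assuming a sinusoidal line voltage):

```latex
\mathrm{PF} = \cos\varphi \cdot \frac{1}{\sqrt{1 + \mathrm{THD}_i^{\,2}}}
```

At THD_i = 1.52%, the distortion term is about 0.99988, so the reported PF of 0.9985 is set almost entirely by the displacement factor cos φ.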
ARTICLE | doi:10.20944/preprints201808.0136.v2
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: acute toxicity; cardiovascular depression; intravenous lipid emulsion; propofol; rat model; respiratory depression
Online: 23 October 2018 (09:34:43 CEST)
Background and objective: Propofol is an anesthetic agent that is frequently used in anesthesia induction, maintenance, and sedation. Propofol has severe side effects such as hypotension, bradycardia, and respiratory depression. Although propofol is commonly used, there is no known antidote for its toxic effects, so an approach to preventing them would be beneficial. The aim of this study was to assess the effects of intravenous lipid emulsion (ILE) therapy in preventing the depressive effects of propofol on the cardiovascular and respiratory systems. Materials and methods: Twenty-eight Sprague-Dawley adult rats were randomly divided into 4 groups. The saline-administered group served as the Control group. The second group was administered propofol (PP group); the third group was administered ILE (ILE group); and the fourth was administered propofol with ILE therapy (ILE+PP group). Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial blood pressure (MAP), respiratory rate (RR), heart rate (HR), and mortality were recorded at 10 time points over 60 minutes. A repeated-measures linear mixed-effect model with unstructured covariance was used to compare the groups. Results: In the PP group, SBP, DBP, MAP, RR, and HR levels declined steadily, and all rats in this group died within 60 minutes. In the ILE+PP group, the initially decreased SBP, DBP, MAP, RR, and HR levels recovered after a while. SBP, DBP, MAP, RR, and HR levels of the PP group were found to be significantly lower than those of the other groups (p<0.01). The mortality rate was 100% (surviving period, 60 min) for the PP group, versus 0% for the ILE, ILE+PP, and Control groups. Conclusion: Our results suggest that undesirable side effects that can be seen after propofol administration, such as hypotension, bradycardia, and respiratory depression, might be prevented by using ILE therapy.
ARTICLE | doi:10.20944/preprints201611.0131.v1
Subject: Social Sciences, Other Keywords: adaptation; mental model refinement; food systems; knowledge management participatory modeling; system dynamics; systems thinking
Online: 27 November 2016 (04:12:49 CET)
Food systems will need to undergo considerable transformation. To be better prepared for and resilient to uncertainty and disturbances in the future, resource users and managers need to further develop knowledge about the food and farming system, with its dominating feedback structures and complexities, and to test robust and integrated system-based solutions. This paper investigates how participatory system dynamics modeling can be adapted to groups at the community level with little or no formal educational background. The paper also analyzes the refinement of workshop participants’ mental models as a consequence of a participatory system dynamics intervention. For this purpose, we ran two workshops with small-scale farmers in Zambia. Analysis of workshop data and post-workshop interviews shows that participatory system dynamics is well adaptable to support an audience-specific learning-by-doing approach. The use of pictures, objects, and water glasses in combination with the basic aspects of causal loop diagramming makes for a well-balanced toolbox. Participants acquire understanding that is also relevant beyond systems thinking in that it offers a range of practical insights, such as a critical evaluation of common food security strategies.
ARTICLE | doi:10.20944/preprints202004.0398.v1
Subject: Social Sciences, Other Keywords: COVID-19; Perception-based questionnaire; principal component analysis (PCA); Linear regression model; social panic; social conflict
Online: 22 April 2020 (09:55:38 CEST)
The COVID-19 pandemic situation, disease intensity, weak healthcare facilities, unawareness, and misinformation have led people to fear and anxiety in Bangladesh. This study aimed to capture people’s perceptions of the psychosocial, socio-economic, and environmental crises amidst the pandemic. An online questionnaire was surveyed nationwide (1,066 respondents). The datasets were analyzed through Principal Component Analysis (PCA), hierarchical Cluster Analysis (CA), Pearson’s correlation matrix (PCM), and linear regression analysis (LRA), and psychometric characteristics were examined via Classical Test Theory (CTT) analysis. There were strong associations among the psychosocial, socio-economic, and environmental parameters. A significant association was found between fear of COVID-19 and the struggling healthcare system (p<0.05). A negative association between the fragile health system and the government’s ability to deal with the pandemic (p<0.05) reveals poor governance. A positive association of shutdown and social distancing with fear of losing life and of lacking health treatment (p<0.05) indicates that shutdown hampers normal activities, which may lead to mental and economic stress. Moreover, a positive association of the socio-economic impact of the shutdown with poor people’s suffering, price hikes of basic needs, and disruption of formal education (p<0.05) may lead to a severe socio-economic and health crisis. There is a possibility of a climate-induced disaster during or after the pandemic, which would create severe food insecurity (p<0.01). Daily wage earners and the poor will suffer most from food and nutritional deficiency, and the country may face a huge economic burden. Proper risk assessment and communication are needed to alleviate fear and anxiety, together with financial support and mental-health boosting.
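A minimal sketch of the PCA step on standardized questionnaire responses, computed via the SVD (the data matrix here is random noise, used only to show the shapes and the explained-variance computation, not the survey data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))            # 100 respondents x 6 survey items (dummy data)

# standardize each item, as is usual before PCA on questionnaire data
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD of the standardized matrix
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)          # explained-variance ratio per component
scores = Xs @ Vt.T                       # component scores per respondent

print(explained.round(3), scores.shape)
```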
ARTICLE | doi:10.20944/preprints201810.0341.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: sustainable transformative business model; shared-value, digitization; innovation management; dynamic capabilities; transformation management; resource based view
Online: 16 October 2018 (08:23:41 CEST)
We examine how external triggers, including the digital imperative and the need for more sustainable resource and stakeholder employment, spark the development of transformative sustainable business models. Drawing on the resource-based view and the shared-value approach, we conceptualize a multifaceted framework that helps to identify key determinants and coherent layers of transformative sustainable business models. Our theoretical arguments integrate recent research findings on external dynamics, such as digital technological advances and rising global competitive dynamics, with internal capabilities on both the organizational and the individual level, allowing for a more complete understanding of transformative potentials at the firm level. We propose that key determinants of sustainable transformative business models adhere to both an innovative, value-creating reconstructionist logic and a sustainable shared-value logic, and include elements such as co-creation with customers, usage-based pricing, agile and adaptive behavior, closed-loop resource employment, asset-sharing, and collaborative business ecosystems. At the same time, the organizational, economic, and environmental layers encompassing sustainable business models need to be both horizontally and vertically coherent to unfold their full potential.
ARTICLE | doi:10.20944/preprints201802.0174.v1
Subject: Social Sciences, Geography Keywords: environmental stress; human exposure; agent-based model; air pollution; urban heat wave; exposure modeling; climate change
Online: 27 February 2018 (05:12:24 CET)
The importance of predicting exposure to environmental hazards is highlighted by issues like global climate change, public health problems caused by environmental stresses, and property damage and depreciation. Several approaches have been used to assess potential exposure and achieve optimal results under various conditions, for example, for different scales, groups of people, or certain points in time. Micro-simulation tools, in which each person is simulated individually and continuously, are becoming increasingly important in human exposure assessment. This paper describes an agent-based model (ABM) framework that can dynamically simulate human exposure levels, along with daily activities, in urban areas characterized by environmental stresses such as air pollution and heat stress. Within the framework, decision-making processes can be included for each individual based on rule-based behavior to achieve goals under changing environmental conditions. The ideas described in this paper are implemented on the free and open-source NetLogo platform. A simplified modeling scenario of the ABM framework in Hamburg, Germany, further demonstrates its utility in various urban environments and individual activity patterns, and its portability to other models, programs, and frameworks. The prototype model can potentially be extended to support environmental incidence management by exploring the daily routines of different groups of citizens and comparing the effectiveness of different strategies. Further research is needed to fully develop an operational version of the model.
REVIEW | doi:10.20944/preprints202106.0055.v1
Subject: Social Sciences, Accounting Keywords: Case-study analysis; Citizen engagement; Collaborative ecosystem; Governance; Innovation systems; n-Helix model; Smart city
Online: 2 June 2021 (08:49:42 CEST)
Despite the rising interest in smart city initiatives worldwide, governmental theories and the managerial perspectives of city planning are largely lacking in the literature. The adoption of configurational pathways towards ‘smart governance’ models is required as a key factor and facilitator of smartness in modern cities. In this manuscript, we present an exhaustive analysis of the importance of n-Helix models, together with a critical benchmarking approach through selected European case studies. Through the literature review, the study revealed the lack of exhaustive analyses for methodologically investigating, identifying, and adopting the most appropriate governance model and collaborative approaches per project, and for creating modular frameworks to efficiently address continuous urban challenges, such as rapid urbanization and climate change.
ARTICLE | doi:10.20944/preprints202106.0209.v1
Subject: Social Sciences, Accounting Keywords: transit; entrepreneurship; rail; Effectuation; Entrepreneur Rail Model; finance; PPP; Transit-Activated Corridor; corridor transit; urban planning.
Online: 8 June 2021 (10:34:35 CEST)
The need for Transit Oriented Development (TOD) around railway stations is well accepted and continues to matter in cities looking to regenerate both transit and urban development. Large parts of suburban areas remain without quality transit along main roads, which are usually filled with traffic, resulting in reduced urban value. The need to regenerate both mobility and land development along such roads is likely to be the next big agenda in transport policy. This paper learns from century-old experiences in public-private approaches to railway systems around the world, along with new insights from entrepreneurship theory and urban planning, to create the notion of a ‘Transit Activated Corridor’ (TAC). TACs prioritise fast transit and a string of station precincts along urban main roads. TODs were primarily a government role, whereas TACs will be primarily a private-sector, entrepreneurship role. The core policy processes for a TAC are outlined with some early case studies. Five design principles for delivering a TAC are presented in this paper: three from entrepreneurship theory and two from urban planning. The potential for Trackless Trams to enable TACs is used to illustrate how these design processes can be an effective approach for designing, financing, and delivering a Transit Activated Corridor.
ARTICLE | doi:10.20944/preprints202204.0300.v1
Subject: Social Sciences, Other Keywords: agent-based model; electric vehicles; traffic simulation; energy intake; urban environment; fuel costs; public policy; electric mobility
Online: 29 April 2022 (11:05:15 CEST)
By 2020, over 100 countries had adopted electric and plug-in hybrid electric vehicle (EV/PHEV) technologies, with global sales surpassing 7 million units. Governments are adopting cleaner vehicle technologies due to the proven environmental and health implications of internal combustion engine vehicles (ICEVs), as evidenced by the recent COP26 meeting. This article proposes an agent-based model of vehicle activity as a tool for quantifying energy consumption by simulating a fleet of EV/PHEVs within an urban street network at various spatio-temporal resolutions. Driver behaviour plays a significant role in fuel consumption; thus, simulating various levels of individual behaviour to enhance heterogeneity should provide more accurate estimates of potential energy demand in cities. The study found that 1) energy consumption is lowest when speed-limit adherence increases (low variance in behaviour) and highest when acceleration/deceleration patterns vary (high variance in behaviour), and 2) on average, for the tested vehicles, EV/PHEVs were £116.33 cheaper to run than ICEVs across all experiment conditions. The difference in average fuel costs (electricity and petrol) shrinks at the vehicle level as driver behaviour becomes less varied (more homogeneous). This research should allow policymakers to quantify the demand for energy and the subsequent fuel costs in cities.
REVIEW | doi:10.20944/preprints202111.0044.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: deep reinforcement learning; model-based RL; hierarchy; trading; cryptocurrency; foreign exchange; stock market; risk; prediction; reward shaping
Online: 2 November 2021 (10:57:23 CET)
Deep reinforcement learning (DRL) has achieved significant results in many Machine Learning (ML) benchmarks. In this short survey we provide an overview of DRL applied to trading on financial markets, including a short meta-analysis using Google Scholar, with an emphasis on using hierarchy to divide the problem space and on using model-based RL to learn a world model of the trading environment that can be used for prediction. In addition, multiple risk measures are defined and discussed, which not only provide a way of quantifying the performance of various algorithms but can also act as (dense) reward-shaping mechanisms for the agent. We discuss in detail the various state representations used for financial markets, which we consider critical to the success and efficiency of such DRL agents. The market in focus for this survey is the cryptocurrency market.
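Two risk measures commonly used in this literature, and usable as dense reward-shaping signals, can be computed in a few lines (the return series below is fabricated, and the √252 annualization assumes daily bars):

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a per-period return series (risk-free rate 0)."""
    r = np.asarray(returns, float)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

def max_drawdown(returns):
    """Largest peak-to-trough equity loss, as a positive fraction."""
    equity = np.cumprod(1.0 + np.asarray(returns, float))
    peak = np.maximum.accumulate(equity)
    return float(np.max((peak - equity) / peak))

rets = [0.01, -0.02, 0.015, -0.05, 0.03]   # made-up daily returns
print(sharpe_ratio(rets), max_drawdown(rets))
```

As reward shaping, a per-step penalty proportional to the running drawdown (or a Sharpe-like moving ratio) gives the agent a dense risk signal instead of sparse terminal profit.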