ARTICLE | doi:10.20944/preprints201802.0105.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: multi-objective multi-level programming; fuzzy parameters; TOPSIS; fuzzy goal programming; multi-objective decision making
Online: 15 February 2018 (20:29:20 CET)
The paper proposes a TOPSIS method for solving the multi-objective multi-level programming problem (MO-MLPP) with fuzzy parameters via fuzzy goal programming (FGP). First, the α-cut method is used to transform the fuzzily described MO-MLPP into a deterministic MO-MLPP. Then, for a specific α, we construct the membership functions of the distance functions from the positive ideal solution (PIS) and the negative ideal solution (NIS) for all level decision makers (DMs). Thereafter, an FGP-based multi-objective decision model is established for each level DM to obtain an individual optimal solution. A possible relaxation of decisions for all DMs is taken into account to reach a satisfactory solution. Subsequently, two FGP models are developed and compromise optimal solutions are found by minimizing the sum of negative deviational variables. To identify the better compromise optimal solution, the concept of distance functions is utilized. Finally, a novel algorithm for MO-MLPP involving fuzzy parameters is provided and an illustrative example is solved to verify the proposed procedure.
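As a generic illustration of the TOPSIS distance idea used in this abstract (a minimal sketch, not the paper's FGP-based multi-level formulation), the relative closeness of each alternative to the positive and negative ideal solutions can be computed as follows:

```python
# Minimal TOPSIS sketch: relative closeness to PIS/NIS.
# Rows are alternatives, columns are benefit-type criteria that are
# assumed already weighted and normalized (illustrative data only).
import math

def topsis_closeness(matrix):
    cols = list(zip(*matrix))
    pis = [max(c) for c in cols]      # positive ideal solution (PIS)
    nis = [min(c) for c in cols]      # negative ideal solution (NIS)
    scores = []
    for row in matrix:
        d_pos = math.dist(row, pis)   # Euclidean distance to PIS
        d_neg = math.dist(row, nis)   # Euclidean distance to NIS
        scores.append(d_neg / (d_pos + d_neg))
    return scores

m = [[0.4, 0.5], [0.6, 0.3], [0.2, 0.9]]
print(topsis_closeness(m))  # higher closeness = better alternative
```

The alternative with the largest closeness coefficient lies nearest the PIS relative to the NIS, which is the ranking criterion TOPSIS is built on.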
ARTICLE | doi:10.20944/preprints202005.0331.v3
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: optimization; multi-objective optimization; decision making; Time
Online: 3 February 2022 (17:30:32 CET)
Multi-objective optimization (MOO) is optimization involving the minimization or maximization of several objective functions, rather than the single objective of conventional optimization, and it is useful in many fields. Many current methodologies address challenges and solutions that attempt to optimize several objectives simultaneously, with multiple constraints attached to each. MOO problems are generally subject to linear inequality, equality, and/or bound constraints that prevent all objectives from being optimized at once. This paper reviews some recent articles in the area of MOO and presents a deep analysis of random and uniform entry-exit times of objectives. It further breaks the process down into sub-processes and then provides some new concepts for solving problems in MOO that arise from periodical objectives, which do not persist for the entire process lifetime, unlike permanent objectives, which are optimized once for the entire process duration. A methodology based on partial optimization, which optimizes each objective iteratively, and a weight convergence method, which optimizes sub-groups of objectives, are given. Furthermore, another method is introduced that involves objective classification, ranking, estimation and prediction: objectives are classified based on their properties, ranked using given criteria, estimated for an optimal weight point (Pareto optimal point) if they satisfy a desired optimal weight point, and finally predicted to find how far they deviate from the estimated optimal weight point. A sample mathematical tri-objective problem and a real-world optimization problem were analyzed using the partial method and the ranking and classification method; the results showed that an objective can be added or removed without affecting previous or existing optimal solutions. The approach is therefore suitable for handling time-governed MOO.
Although this paper presents conceptual work only, and its practical application is beyond its scope, the analysis and examples presented suggest that the concept is worthy of igniting further research and application.
ARTICLE | doi:10.20944/preprints201611.0139.v2
Subject: Engineering, Automotive Engineering Keywords: clutch; diaphragm spring; multi-objective; optimization; NSGA-II
Online: 29 November 2016 (05:02:44 CET)
In traditional optimization, the weight coefficients of the diaphragm spring depend on experience. This method not only cannot guarantee an optimal solution, it is also not universal. Therefore, a new optimization target function is proposed. The new function takes the minimum variation of the average pressing force of the spring and the minimum separation force as its objectives. Based on this optimization function, the clutch diaphragm spring of a car is analyzed by the non-dominated sorting genetic algorithm (NSGA-II) and a Pareto solution set is obtained. The results show that the pressing force of the diaphragm spring is improved by 4.09% by the new algorithm and the steering separation force is improved by 6.55%, giving better stability and steering portability. The problem of the weight coefficient in the traditional empirical design is solved. The pressing force of the optimized diaphragm spring varied only slightly over the abrasion range of the friction film, and the manipulation became remarkably light.
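The non-dominated sorting that NSGA-II builds on can be illustrated with a minimal sketch (toy objective values, not the spring model; both objectives are minimized):

```python
# Pareto non-dominance filtering: the core ranking step underlying NSGA-II.
def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

NSGA-II repeats this ranking on successive fronts and adds crowding-distance selection on top of it, which is how a solution set like the Pareto set reported above is produced.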
Subject: Social Sciences, Business And Administrative Sciences Keywords: vendor selection; product life cycle; multi-objective linear programming; Multi-choice goal programming.
Online: 3 June 2019 (09:52:41 CEST)
The framework of product life cycle (PLC) cost analysis is one of the most important evaluation tools for a contemporary high-tech company in an increasingly competitive market environment. The PLC-purchasing strategy provides the framework for a procurement plan and examines the sourcing strategy of a firm. The marketing literature emphasizes that ongoing technological change and shortened life cycles are important elements in commercial organizations. From a strategic viewpoint, the vendor occupies an important position between supplier, buyer and manufacturer. The buyer seeks to procure products from a set of vendors to take advantage of economies of scale and to exploit opportunities for strategic relationships. However, previous studies have seldom considered vendor selection (VS) based on PLC cost (VSPLCC) analysis. The purpose of this paper is to solve VSPLCC problems in a single-buyer, multiple-supplier situation. To this end, a new VSPLCC procurement model and solution procedure are derived to minimize net cost, rejection rate, late delivery and PLC cost subject to vendor capacities and budget constraints. Moreover, a real case in Taiwan is provided to show how to solve the VSPLCC procurement problem.
ARTICLE | doi:10.20944/preprints202008.0462.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Volt-VAr Control; Volt-VAr Optimization; Multi-Objective optimization
Online: 20 August 2020 (13:14:16 CEST)
Recent research has enabled the integration of traditional Volt-VAr Control (VVC) resources, such as capacitor banks and transformer tap changers, with Distributed Energy Resources (DERs), such as photovoltaic sources and energy storage, in order to achieve various Volt-VAr Optimization (VVO) targets, such as Conservation Voltage Reduction (CVR), minimizing VAr flow at the transformer, minimizing grid losses, minimizing asset operations and more. When more than one target function can be optimized, the question of multi-objective optimization arises. In this work, we propose a general formulation of the multi-objective Volt-VAr optimization problem. We consider the applicability of various multi-objective optimization techniques and discuss the operational interpretation of their solutions. We demonstrate the methods using simulation on a test feeder.
ARTICLE | doi:10.20944/preprints201801.0094.v2
Subject: Engineering, Energy & Fuel Technology Keywords: Direct Methanol Fuel Cell; Operation strategy; Multi-objective optimization
Online: 8 May 2018 (16:14:24 CEST)
An adaptive operation strategy for on-demand control of a direct methanol fuel cell (DMFC) system is proposed as an alternative method to enhance voltage stability. Based on a single-cell DMFC stack, a newly simplified semi-empirical model is developed from uniform-design experimental results to describe the I-V relationship. Integrated with this model, a multi-objective optimization method is used to develop an adaptive operation strategy. Although voltage instability is frequently encountered in unoptimized operation, the voltage deviation is successfully decreased to a required level by adaptive operation with operational adjustments. Moreover, adaptive operation is also found to be able to extend the range of operating current density or to decrease the voltage deviation according to one's requirements. Numerical simulations are implemented to investigate the underlying mechanisms of the proposed adaptive operation strategy, and experimental adaptive operations are also performed on another DMFC system to validate it. A preliminary experimental study shows a rapid response of the DMFC system to the operational adjustment, which further validates the effectiveness and feasibility of the adaptive operation strategy in practical applications. The proposed strategy contributes a guideline for better control of the output voltage of operating DMFC systems.
ARTICLE | doi:10.20944/preprints201804.0298.v1
Subject: Engineering, Energy & Fuel Technology Keywords: multi-objective optimisation; NSGAII; MCDM; TOPSIS; life cycle cost
Online: 23 April 2018 (12:58:03 CEST)
This research proposes a framework to assist wind energy developers in selecting the optimum deployment site of a wind farm, considering the Round 3 zones in the UK. The framework includes optimisation techniques, decision-making methods and experts' input in order to help stakeholders with investment decisions. Techno-economic, Life Cycle Cost (LCC) and physical aspects of each location are considered along with experts' opinions to provide deeper insight into the decision-making process. A process for criteria selection is also presented, and seven conflicting criteria are considered in the TOPSIS method in order to suggest the optimum location among those produced by the NSGA-II algorithm. Seagreen Alpha was the most probable solution, followed by Moray Firth Eastern Development Area 1, demonstrating by example the effectiveness of the newly introduced framework, which is also transferable and generic. The outcomes are expected to help stakeholders and decision makers make more informed and cost-effective decisions under uncertainty when investing in offshore wind energy in the UK.
Subject: Materials Science, Biomaterials Keywords: plastics thermoforming; sheet thickness distribution; evolutionary algorithms; multi-objective optimization
Online: 1 June 2021 (09:41:57 CEST)
The practical application of a multi-objective optimization strategy based on evolutionary algorithms is proposed to optimize the plastics thermoforming process. For that purpose, the various steps of the process were considered individually, and the optimization strategy was applied to the determination of the final part thickness distribution with the aim of demonstrating the validity of the proposed methodology. The preliminary results obtained considering three different theoretical initial sheet shapes indicate clearly that the proposed methodology is valid, as it provides solutions with physical meaning and with great potential to be applied in real practice.
ARTICLE | doi:10.20944/preprints202011.0031.v1
Subject: Engineering, Automotive Engineering Keywords: BIM; Insulation Design; Building Envelope; Multi-objective; Optimisation; Pareto-front
Online: 2 November 2020 (11:26:02 CET)
Insulation systems for the floor, roof and external walls play a prominent role in providing a thermal barrier for the building envelope. Design decisions made for the insulation material type and thickness can alleviate potential impacts on the embodied energy and improve the building's thermal performance. This design problem is often addressed using a BIM-integrated optimisation approach. However, one major weakness of current studies is that BIM is merely used as the source of design parameter input. This study proposes a BIM-based envelope insulation optimisation design framework using common software, Revit, to find the trade-off between the total embodied energy of the insulation system and the thermal performance of the envelope by considering the material type and thickness. In addition, the framework also permits data visualisation in a BIM environment, and subsequent material library mapping together with instantiating the optimal insulation designs. The framework is tested on a case study based in Sydney, Australia. By analysing sample designs from the Pareto front, it is found that a slight improvement in thermal performance (from 1.3399 to 1.2112 GJ/m2) would cause the embodied energy to increase by more than 50 times.
ARTICLE | doi:10.20944/preprints201810.0420.v1
Subject: Engineering, Energy & Fuel Technology Keywords: District Heating; multi-objective evolutionary optimization; distributed cogeneration; optimal operation.
Online: 18 October 2018 (11:53:15 CEST)
The paper deals with the modelling and optimization of an integrated multi-component energy system. On-off operation and the presence or absence of components must be described by means of binary decision variables, besides equality and inequality constraints; furthermore, the synthesis and the operation of the energy system should be optimized at the same time. In this paper a hierarchical optimization strategy is used, adopting a genetic algorithm at the higher optimization level to choose the main binary decision variables, whilst a MILP algorithm is used at the lower level to choose the optimal operation of the system and to supply the merit function to the genetic algorithm. The method is then applied to a distributed generation system designed for a set of users located in the centre of a small town in the north-east of Italy. The results show the advantage of distributed cogeneration when the optimal synthesis and operation of the whole system are adopted, and a significant reduction in computing time using the proposed two-level optimization procedure.
ARTICLE | doi:10.20944/preprints201806.0365.v1
Subject: Engineering, General Engineering Keywords: ARIMA model; data forecasting; multi-objective genetic algorithm; regression model
Online: 24 June 2018 (07:48:49 CEST)
The aim of this study has been to develop a novel two-level multi-objective genetic algorithm (GA) to optimize time series forecasting data for fans used in road tunnels by the Swedish Transport Administration (Trafikverket). Level 1 is for the process of forecasting time series cost data, while level 2 evaluates the forecasts. Level 1 implements either a multi-objective GA based on the ARIMA model or a multi-objective GA based on the dynamic regression model. Level 2 utilises a multi-objective GA based on different forecasting error rates to identify a proper forecast. Our method is compared with using the ARIMA model only. The results show the drawbacks of time series forecasting using only the ARIMA model. In addition, the results of the two-level model show the drawbacks of forecasting using a multi-objective GA based on the dynamic regression model. A multi-objective GA based on the ARIMA model produces better forecasting results. In level 2, five forecasting accuracy functions help in selecting the best forecast. Selecting a proper forecasting methodology is based on the averages of the forecasted data, the historical data, the actual data and the polynomial trends. The forecasted data can be used for life cycle cost (LCC) analysis.
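Forecasting accuracy functions of the kind level 2 could rank candidate forecasts with can be sketched generically; the three metrics below (MAE, RMSE, MAPE) are common choices assumed for illustration, not taken verbatim from the paper:

```python
# Common forecasting error rates used to score a forecast against actuals.
import math

def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error (penalizes large misses more)."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    """Mean absolute percentage error (requires non-zero actuals)."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual   = [100.0, 110.0, 120.0]
forecast = [ 98.0, 113.0, 118.0]
print(mae(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
```

A multi-objective GA can treat each such error rate as one objective and search for forecasts that trade them off, which is the spirit of the level 2 evaluation described above.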
ARTICLE | doi:10.20944/preprints201706.0016.v1
Subject: Mathematics & Computer Science, Other Keywords: pumped storage hydro unit; guide vane closing schemes; multi-objective optimization; enhanced multi-objective bacterial-foraging chemotaxis gravitational search algorithm (EMOBCGSA); hydraulic and mechanical constraints
Online: 2 June 2017 (07:56:05 CEST)
The optimization of guide vane closing schemes (OGVCS) for pumped storage hydro units (PSHUs) lies in the research field of cooperative control and optimal operation of pumped storage, wind power and solar power generation. This paper presents an OGVCS model of a PSHU considering the rise rate of the unit rotational speed and the specific node pressure of each hydraulic unit, as well as various complicated hydraulic and mechanical constraints. The OGVCS model is formulated as a multi-objective optimization problem with conflicting objectives, i.e., unit rotational speed and water hammer pressure criteria. To solve the OGVCS model efficiently, an enhanced multi-objective bacterial-foraging chemotaxis gravitational search algorithm (EMOBCGSA) is proposed, which adopts population reconstruction, an adaptive-selection chemotaxis operator with a local searching strategy and an elite archive set. In particular, a novel constraint-handling strategy with elimination and local search based on violation ranking is used to balance the various hydraulic and mechanical constraints. Finally, simulation cases of complex extreme operating conditions (i.e., load rejection and pump outage) of a 'single tube-double units' PSHU system are conducted to verify the feasibility and effectiveness of the proposed EMOBCGSA in solving the OGVCS problem. The simulation results indicate that the proposed EMOBCGSA provides a lower rise rate of the unit rotational speed and smaller water hammer pressure than other recently established methods while handling the various complex constraints of the OGVCS problem.
ARTICLE | doi:10.20944/preprints202110.0245.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Electric Vehicle; Power Grid; Carbon Reduction Benefit; Multi-objective Optimization Model
Online: 18 October 2021 (13:12:29 CEST)
Under the goals of carbon peaking and carbon neutrality, the carbon emission reduction of the automobile industry has attracted more and more attention in recent years. Electric vehicles have the dual attributes of power load and energy storage unit. As the number of electric vehicles increases, reducing carbon emissions through collaborative interaction between electric vehicles and the power network will become an important way to control carbon emissions in the automotive field. In this study, an optimization model of emission reduction benefits based on the integrated development of electric vehicles and the power grid is proposed, which explores the best technical way to achieve synergy between the power grid and electric vehicles, achieves the best carbon reduction effect and provides a model basis for large-scale demonstration applications. Numerical simulations based on a real case in Beijing are conducted to validate the effectiveness of the proposed method.
ARTICLE | doi:10.20944/preprints202106.0457.v1
Subject: Engineering, Automotive Engineering Keywords: Machine Learning; Multi-objective optimization; Low Pressure Turbine; Transition; Turbulence Modeling
Online: 17 June 2021 (10:39:40 CEST)
Existing Reynolds-averaged Navier-Stokes based transition models do not accurately predict separation-induced transition for low pressure turbines. Therefore, in this study, a novel framework based on computational fluid dynamics driven machine learning, coupled with multi-expression and multi-objective optimization, is explored to develop models that can improve the transition prediction for the T106A low pressure turbine at an isentropic exit Reynolds number of Re2is = 100,000. Model formulations are proposed for the transfer and laminar eddy viscosity terms of the laminar kinetic energy transition model using seven non-dimensional pi groups. The multi-objective optimization approach makes use of cost functions based on the suction-side wall-shear stress and the pressure coefficient. A family of solutions is thus developed, whose performance is assessed using Pareto analysis and in terms of the physical characteristics of separated-flow transition. Two models are found that bring the wall-shear stress profile in the separated region at least two times closer to the reference high-fidelity data than the baseline transition model. As these models are able to accurately predict the flow coming off the blade trailing edge, they are also able to significantly enhance the wake-mixing prediction over the baseline model.
ARTICLE | doi:10.20944/preprints202011.0738.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: parking management system; smart parking; multi-objective method; decision-making method
Online: 30 November 2020 (16:04:41 CET)
Due to tremendous progress in the automotive industry and the growth of the urban population, the number of vehicles is increasing, which creates parking challenges. Intelligent parking management systems offer an optimal solution for finding empty parking spaces so that drivers can quickly find a space for their car. To solve these problems, it is necessary to design an intelligent parking system that, in addition to providing comfort to drivers, is also economically viable. This paper proposes an intelligent multi-storey car parking system, based on RFID technology and an examination of user preferences, that can effectively solve car parking problems. The proposed method is a multi-objective decision-making method for reducing the car parking problem, called MODM-RPCP. The proposed MODM-RPCP method can allocate the best stopping place for each user by using the decision-making system and the priorities specified by the users. The simulation results show that MODM-RPCP reduces the average booking time by more than 19.2% and 27.1%, and decreases the response time of the central parking management server by more than 20.1% and 29.78%, compared to the MOGWOLA and ODPP approaches, respectively.
ARTICLE | doi:10.20944/preprints201807.0034.v2
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: multi-objective optimization; resource efficiency; metal mines; production process; NSGA-II
Online: 29 November 2018 (10:59:56 CET)
The optimization of the production process of metal mines has traditionally been driven only by economic benefits, ignoring resource efficiency. However, the industry has become increasingly aware of the importance of resource efficiency, since mineral resource reserves continue to decrease while demand continues to grow. To better utilize mineral resources for sustainable development, this paper proposes a multi-objective optimization model of the production process of metal mines considering both economic benefits and resource efficiency. Specifically, the goals of the proposed model are to maximize the profit and the resource utilization rate. Then, the fast and elitist Non-Dominated Sorting Genetic Algorithm (NSGA-II) is used to optimize the multi-objective optimization model. The proposed model has been applied to the optimization of the production process of a stage in the Huogeqi Copper Mine. The optimization results provide a set of Pareto-optimal solutions that can meet the varying needs of decision makers. Moreover, compared with the current production indicators, the profit and resource utilization rate at some points of the optimization results increase by 2.99% and 2.64%, respectively. Additionally, the effects of the decision variables (geological cut-off grade, minimum industrial grade and loss ratio) on the objective functions (profit and resource utilization rate) were discussed using variance analysis. The sensitivities of the Pareto-optimal solutions to the unit copper concentrate price were studied. The results show that the Pareto-optimal solutions at higher profits (with lower resource utilization rates) are more sensitive to the unit copper concentrate price than those in regions with lower profits.
ARTICLE | doi:10.20944/preprints201811.0094.v2
Subject: Engineering, Biomedical & Chemical Engineering Keywords: user intent recognition; transfemoral prosthesis; multi-objective optimization; biogeography-based optimization
Online: 16 November 2018 (11:27:10 CET)
One control challenge in prosthetic legs is seamless transition from one gait mode to another. User intent recognition (UIR) is a high-level controller that tells a low-level controller to switch to the identified activity mode, depending on the user’s intent and environment. We propose a new framework to design an optimal UIR system with simultaneous maximum performance and parsimony for gait mode recognition. We use multi-objective optimization (MOO) to find an optimal feature subset that creates a trade-off between these two conflicting objectives. The main contribution of this paper is two-fold: (1) a new gradient-based multi-objective feature selection (GMOFS) method for optimal UIR design; and (2) the application of advanced evolutionary MOO methods for UIR. GMOFS is an embedded method that simultaneously performs feature selection and classification by incorporating an elastic net in multilayer perceptron neural network training. Experimental data are collected from six subjects, including three able-bodied subjects and three transfemoral amputees. We implement GMOFS and four variants of multi-objective biogeography-based optimization (MOBBO) for optimal feature subset selection, and we compare their performances using normalized hypervolume and relative coverage. GMOFS demonstrates competitive performance compared to the four MOBBO methods. We achieve a mean classification accuracy of 97.14% ± 1.51% and 98.45% ± 1.22% with the optimal selected subset for able-bodied and amputee subjects, respectively, while using only 23% of the available features. Results thus indicate the potential of advanced optimization methods to simultaneously achieve accurate, reliable, and compact UIR for locomotion mode detection of lower-limb amputees with prostheses.
ARTICLE | doi:10.20944/preprints202009.0219.v1
Subject: Engineering, Energy & Fuel Technology Keywords: solar energy; micro-cogeneration; exergy; multi-objective optimization; PVT collector; PV panel
Online: 10 September 2020 (04:42:24 CEST)
A photovoltaic-thermal (PVT) collector is a solar-based micro-cogeneration system which simultaneously generates heat and power for buildings. The novelty of this paper is to conduct an energy and exergy analysis of PVT collector performance under two different European climate conditions. The performance of the PVT collector is compared to a PV panel. Finally, the PVT design is optimized in terms of thermal and electrical exergy efficiencies, and the optimized PVT designs are compared to the PV panel performance as well. The main focus is to find out whether the PVT is still competitive with the PV panel's electrical output after maximizing its thermal exergy efficiency. The PVT collector is modelled in Matlab/Simulink to evaluate its performance under varying weather conditions. The PV panel is modelled with the CARNOT toolbox library. The optimization is conducted using the Matlab gamultiobj function based on the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II). The results indicated 7.7% higher annual energy production in Strasbourg. However, the exergy analysis revealed a better quality of thermal energy in Tampere, with 72.9% higher thermal exergy production. The electrical output of the PVT is higher than that of the PV during the summer months. The thermal exergy-driven PVT design is still competitive compared to the PV panel's electrical output.
ARTICLE | doi:10.20944/preprints201911.0235.v1
Subject: Engineering, Mechanical Engineering Keywords: fuzzy based AHP (FAHP); multi-objective decision making; path planning; mobile robot
Online: 20 November 2019 (07:34:53 CET)
This study presents a path planning method for a mobile robot operated effectively through a multi-objective decision-making problem. Specifically, the proposed fuzzy analytic hierarchy process (FAHP) determines an optimal position as a sub-goal within the multi-objective boundary. The key feature of the proposed FAHP is evaluating the candidates according to the fuzzified relative importance among objectives to select an optimal solution. To incorporate FAHP into path planning, an AHP framework is defined, which includes the highest level (goal), middle level (objectives), and lowest level (alternatives). The distance to the target, the robot's rotation, and safety against collision with obstacles are considered as objective functions. Comparative results obtained from artificial potential field and AHP/FAHP simulations show that FAHP is much preferable to typical AHP for the mobile robot's path planning.
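The crisp AHP priority step that FAHP fuzzifies can be sketched via the geometric-mean approximation of the principal eigenvector; the pairwise judgments below are hypothetical and only mirror the three objectives named in the abstract:

```python
# AHP priority weights from a pairwise comparison matrix
# (geometric-mean approximation of the principal eigenvector).
import math

def ahp_weights(pairwise):
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments: distance-to-target 3x as important as rotation,
# 2x as important as safety against collision.
P = [[1.0, 3.0, 2.0],
     [1 / 3, 1.0, 1 / 2],
     [1 / 2, 2.0, 1.0]]
print(ahp_weights(P))  # weights sum to 1; distance dominates
```

FAHP replaces the crisp entries of such a matrix with fuzzy numbers, so the relative importance among objectives carries uncertainty before the candidates are ranked.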
ARTICLE | doi:10.20944/preprints201807.0045.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: verbal decision analysis; multi-objective optimization; software release planning; ZAPROS III-i
Online: 3 July 2018 (12:24:02 CEST)
The activity of prioritizing software requirements should be done as efficiently as possible. Selecting the most stable requirements for the customers most important to the development company can be a positive factor when we consider that the available resources do not always allow the implementation of all requirements. There are many quantitative methods for software release prioritization in the field of Search-Based Software Engineering (SBSE). However, we show that it is possible to use qualitative Verbal Decision Analysis (VDA) methods to solve this same type of problem. Moreover, we use the ZAPROS III-i method to prioritize requirements considering the opinion of the decision-maker, who participates in this process. The results obtained with the VDA-structured methods were quite satisfactory when compared to the methods using SBSE. A comparison of results between the quantitative and qualitative methods is made and discussed.
ARTICLE | doi:10.20944/preprints201804.0137.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: eddy current loss; multi-objective optimization (MOO); electromagnetic analysis; equivalent hierarchical method
Online: 11 April 2018 (05:45:35 CEST)
The eddy current loss should be minimized to ensure the stability of the permanent magnet in the rotor of a high-speed permanent magnet synchronous motor (HSPMSM) and to ensure the high efficiency and low temperature of the motor. This paper analyzes the eddy current distribution in the rotor, considering the conflict between the thickness of the sleeve and the diameter of the rotor, and calculates the eddy current loss (ECL) and the thermal distribution via the separation-of-variables method for solving Maxwell's equations, with an analytical hierarchical model of the ECL constructed. The optimized ECL of an HSPMSM with a rated power of 30 kW and a rated speed of 48,000 r/min is obtained by a multi-objective optimization method that combines the weighting coefficient method and a traversal algorithm based on chaotic local search particle swarm optimization (CLSPSO), utilizing the ECL analytical model and other analytical constraints. Related experiments and measurements have been carried out with a new loss-separation approach.
ARTICLE | doi:10.20944/preprints202211.0103.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: multi-objective optimization; hypervolume indicator; Newton method; evolutionary algorithms; constraint handling; hypervolume scalarization
Online: 7 November 2022 (04:20:09 CET)
Recently, the Hypervolume Newton method (HVN) has been proposed as a fast and precise indicator-based method for solving unconstrained bi-objective optimization problems with objective functions that are at least twice continuously differentiable. The HVN is defined on the space of (vectorized) fixed-cardinality sets of decision space vectors for a given multi-objective optimization problem (MOP) and seeks to maximize the hypervolume indicator by adopting the Newton-Raphson method for deterministic numerical optimization. To extend its scope to non-convex optimization problems, the HVN method was hybridized with a multi-objective evolutionary algorithm (MOEA), which resulted in a competitive solver for continuous unconstrained bi-objective optimization problems. In this paper, we extend the HVN to constrained MOPs with, in principle, any number of objectives. We demonstrate the applicability of the extended HVN on a set of challenging benchmark problems and show that the new method can readily be applied to solve problems with equality constraints with high precision, and to some extent also inequalities. We finally use the HVN as a local search engine within a MOEA and show the benefit of this hybrid method on several benchmark problems.
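In the bi-objective case, the hypervolume indicator that HVN maximizes reduces to a sum of rectangle areas between the front and a reference point; a minimal sketch for a minimization front (toy data, not the HVN method itself):

```python
# 2-D hypervolume (minimization): area dominated by the front and bounded
# by the reference point. Assumes a mutually non-dominated front.
def hypervolume_2d(front, ref):
    pts = sorted(front)           # ascending in f1 => descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one rectangle slice per point
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # → 6.0
```

Because the indicator is differentiable in the objective values for such fronts, a Newton-type method can ascend it directly, which is the idea HVN builds on.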
ARTICLE | doi:10.20944/preprints202201.0402.v1
Subject: Engineering, Other Keywords: project scheduling; underground mine; random breakdown simulation; wolf colony algorithm; multi-objective optimization
Online: 26 January 2022 (14:02:22 CET)
Due to production space and operating environment requirements, mine production equipment often breaks down, which seriously affects the mine’s production schedule. To ensure the smooth completion of the haulage operation plan under abnormal conditions, a model of the haulage equipment rescheduling plan based on the random simulation of equipment breakdowns is established in this paper. The model aims to accomplish both the maximum completion rate of the original mining plan and the minimum fluctuation of the ore grade during the rescheduling period. This model is optimized by improving the wolf colony algorithm and changing the location update formula of the individuals in the wolf colony. Then, the optimal model solution can be used to optimize the rescheduling of the haulage plan by considering equipment breakdowns. The application of the proposed method in an underground mine revealed that the completion rate of the mine’s daily mining plan reached 83.40% without increasing the amount of equipment, while the ore grade remained stable. Moreover, the improved optimization algorithm converged quickly and was characterized by high robustness.
ARTICLE | doi:10.20944/preprints202112.0506.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: multi-objective; evolutionary algorithms; Pareto optimality; Wasserstein distance; network vulnerability; resilience; sensor placement.
Online: 31 December 2021 (11:01:51 CET)
This paper focuses on two topics that are highly relevant in water distribution networks (WDNs): vulnerability assessment and the optimal placement of water quality sensors. The main novelty of this paper is to represent the data of the problem, in this case all objects in the graph underlying a water distribution network, as discrete probability distributions. For vulnerability (and the related issue of resilience), the metrics from network theory, widely studied and largely adopted in the water research community, reflect connectivity expressed as closeness centrality or betweenness centrality based on the average values of shortest paths between all pairs of nodes. Network efficiency and the related vulnerability measures are likewise related to averages of inverse distances. In this paper we propose a different approach based on the discrete probability distribution, for each node, of the node-to-node distances. For the optimal sensor placement, the elements represented as discrete probability distributions are sub-graphs given by the locations of water quality sensors. The objective functions, detection time and its variance as a proxy of risk, are accordingly represented as discrete probability distributions over contamination events. This problem is usually dealt with by evolutionary algorithms (EAs). We show that a probabilistic distance, specifically the Wasserstein (WST) distance, naturally allows an effective formulation of genetic operators. In the optimal sensor placement considered in the literature, each node is usually associated with a scalar real number, the average detection time; but in many applications node labels are more naturally expressed as histograms or probability distributions: the water demand at each node is naturally seen as a histogram over the 24-hour cycle. The main aim of this paper is twofold: first, to show how different problems in WDNs can take advantage of the representational flexibility inherent in WST spaces; second, to show how this flexibility translates into computational procedures.
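As a minimal sketch of the probabilistic distance involved (an illustration, not the paper's implementation), the 1-D Wasserstein distance between two discrete distributions on a common, unit-spaced support reduces to the summed absolute difference of their cumulative distribution functions:

```python
def wasserstein_1d(p, q):
    """W1 distance between two discrete distributions defined on the
    same ordered, unit-spaced support: sum of |CDF differences|."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    cdf_p = cdf_q = 0.0
    dist = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi
        cdf_q += qi
        dist += abs(cdf_p - cdf_q)
    return dist

# All mass at position 0 vs. all mass at position 2: the mass must
# travel 2 support steps, so W1 = 2.
print(wasserstein_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # prints 2.0
```

For non-unit spacing, each CDF-difference term would be weighted by the gap between consecutive support points.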
ARTICLE | doi:10.20944/preprints202002.0225.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: selective laser melting; 316L stainless steel; multi-objective optimization; relative density; surface roughness
Online: 16 February 2020 (15:52:05 CET)
Although the concept of additive manufacturing has been proposed for several decades, momentum behind selective laser melting (SLM) is finally starting to build. In SLM, density and surface roughness, important quality indexes of SLMed parts, depend on the processing parameters. However, there are few studies in the previous literature on their collaborative optimization in SLM to obtain high relative density and low surface roughness simultaneously. In this work, the response surface method was adopted to study the influences of different processing parameters (laser power, scanning speed and hatch space) on the density and surface roughness of 316L stainless steel parts fabricated by SLM. A statistical relationship model between processing parameters and manufacturing quality is established, and a multi-objective collaborative optimization strategy considering both density and surface roughness is proposed. The experimental results show that the main effects of the processing parameters on density and surface roughness are similar. It is noted that the effects of laser power and scanning speed on these quality objectives are highly significant, while hatch space has an insignificant impact. Based on this optimization, 316L stainless steel parts with excellent surface roughness and relative density can be obtained by SLM with optimized processing parameters.
ARTICLE | doi:10.20944/preprints201709.0063.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: design of experiments; multi-objective optimization; Fisher information matrix; curvature; biological processes; mathematical modeling
Online: 15 September 2017 (11:07:52 CEST)
The bottleneck in creating dynamic models of biological networks and processes often lies in estimating unknown kinetic model parameters from experimental data. In this regard, experimental conditions have a strong influence on parameter identifiability and should therefore be optimized to give the maximum information for parameter estimation. Existing model-based design of experiment (MBDOE) methods commonly rely on the Fisher Information Matrix (FIM) for defining a metric of data informativeness. When the model behavior is highly nonlinear, FIM-based criteria may lead to suboptimal designs since the FIM only accounts for the linear variation of the model outputs with respect to the parameters. In this work, we developed a multi-objective optimization (MOO) MBDOE, where model nonlinearity was taken into consideration through the use of curvature. The proposed MOO MBDOE involved maximizing data informativeness using a FIM-based metric and at the same time minimizing the model curvature. We demonstrated the advantages of the MOO MBDOE over existing FIM-based and other curvature-based MBDOEs in an application to the kinetic modeling of fed-batch fermentation of Baker's yeast.
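As an illustrative sketch of the FIM-based informativeness metric discussed above (the sensitivity matrix, noise model, and D-criterion choice here are assumptions, not the paper's exact formulation), the FIM for Gaussian measurement noise and its log-determinant can be computed as:

```python
import numpy as np

def fisher_information(S, sigma2=1.0):
    """FIM under i.i.d. Gaussian measurement noise: F = S^T S / sigma^2,
    where S[i, j] = d(output_i)/d(parameter_j) is the sensitivity matrix."""
    return S.T @ S / sigma2

def d_optimality(F):
    """D-criterion: log-determinant of the FIM (to be maximized).
    Returns -inf for a singular FIM (non-identifiable parameters)."""
    sign, logdet = np.linalg.slogdet(F)
    return logdet if sign > 0 else -np.inf

# Toy sensitivities: two outputs, each sensitive to one parameter.
S = np.eye(2)
print(d_optimality(fisher_information(S)))  # prints 0.0
```

A curvature-aware MOO MBDOE as described would pair a criterion like this with a nonlinearity measure as a second objective; the curvature computation itself is model-specific and omitted here.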
ARTICLE | doi:10.20944/preprints202201.0130.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: multi objective optimization; VRP; compromise programming; genetic algorithm; tabu search; local search; metaheuristics; combinatorial optimization
Online: 10 January 2022 (18:38:55 CET)
The Vehicle Routing Problem (VRP) and its variants arise in many fields, especially logistics. In this study, we introduce an adaptive method for a complex VRP that combines multi-objective optimization and several forms of VRP with the practical requirements of an urban shipment system. The optimizer needs to consider terrain and traffic conditions. The proposed model also treats customers' expectations and shipper considerations as goals, alongside common goals such as transportation cost. We apply compromise programming to the multi-objective problem by decomposing the original problem into a minimized distance-based problem. We design a hybrid of the Genetic Algorithm and a Local Search algorithm to solve the proposed problem, and evaluate its effectiveness against the Tabu Search algorithm and the original Genetic Algorithm on the test dataset. The results show that our method is an effective decision-making tool for the multi-objective VRP and an effective solver for the new VRP variant.
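The compromise programming decomposition mentioned above can be sketched as follows (an illustration under assumed objective vectors and weights, not the paper's model): each candidate solution's objective vector is scored by its weighted distance to the ideal point, turning the multi-objective problem into a single scalar minimization:

```python
def compromise_distance(objectives, ideal, weights, p=2):
    """Weighted L_p distance of a solution's objective vector from the
    ideal point; compromise programming minimizes this scalar."""
    return sum(
        (w * abs(f - z)) ** p
        for f, z, w in zip(objectives, ideal, weights)
    ) ** (1.0 / p)

# Two candidate routes scored against an ideal point of (0, 0)
# (e.g. zero cost, zero lateness); the second is preferred.
print(compromise_distance((3.0, 4.0), (0.0, 0.0), (1.0, 1.0)))  # prints 5.0
print(compromise_distance((1.0, 1.0), (0.0, 0.0), (1.0, 1.0)))
```

With p=1 this is a weighted sum; as p grows, the metric increasingly penalizes the single worst objective.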
REVIEW | doi:10.20944/preprints202108.0441.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Artificial & Biological Intelligence; Complex Coevolutionary Systems Engineering; Sustainability; Multi-Objective Optimization; Sustainable Universal Intelligent Agents
Online: 23 August 2021 (13:23:42 CEST)
The strong couplings among ecological, economic, social and technological processes explain the complexification of human-made systems, and phenomena such as globalization, climate change, the increased urbanization and inequality of human societies, the power of information, and the COVID-19 syndemics. Among complexification’s essential features are non-decomposability, asynchronous behavior, components with many degrees of freedom, increased likelihood of catastrophic events, irreversibility, nonlinear phase spaces with immense combinatorial sizes, and the impossibility of long-term, detailed prediction. Sustainability for complex systems implies enough efficiency to explore and exploit their dynamic phase spaces and enough flexibility to coevolve with their environments. This in turn means solving intractable nonlinear semi-structured dynamic multi-objective optimization problems, with conflicting, incommensurable, non-cooperative objectives and purposes, under dynamic uncertainty, restricted access to materials, energy and information, and a given time horizon, aiming at enhancing the co-evolutionary power of the Biosphere and its human subsystems. Given the high stakes, the need for effective, efficient, diverse solutions, their local-global and present-future effects, and their unforeseen short-, medium- and long-term impacts, achieving sustainable complex systems implies the need for sustainability-designed Universal Intelligent Agents, harnessing the strong functional coupling between human, artificial and nonhuman biological intelligence in a non-zero-sum game to achieve sustainability.
ARTICLE | doi:10.20944/preprints202008.0108.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: sustainable distribution; food perishability; multi-objective optimization; temperature prediction; shelf life; food waste; NSGA-II
Online: 5 August 2020 (04:34:06 CEST)
The food distribution process is responsible for significant quality loss in perishable products. However, preserving quality is costly and consumes a tremendous amount of energy. To tackle the challenge of minimizing transportation costs and CO2 emissions while also maximizing product freshness, a novel multi-objective model is proposed. The model integrates a vehicle routing problem with temperature, shelf life, and energy consumption prediction models, thereby enhancing its accuracy. Non-dominated sorting genetic algorithm II is adapted to solve the proposed model for the set of Solomon test data. The conflicting nature of these objectives and the sensitivity of the model to shelf life and shipping container temperature settings are analyzed. The results show that optimizing the freshness objective degrades the cost and emission objectives, and that the distribution of perishable foods is sensitive to the shelf life of the products and the temperature settings inside the container.
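The non-dominated sorting step at the core of NSGA-II can be sketched as follows (a generic illustration with toy objective vectors, not the paper's adapted algorithm): solutions are repeatedly partitioned into Pareto fronts, with front 0 containing the solutions dominated by no one:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Rank a list of objective vectors into successive Pareto fronts,
    returning lists of indices (NSGA-II style ranking)."""
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy (cost, emissions) vectors: (3, 3) is dominated by (2, 2).
print(non_dominated_sort([(1, 5), (2, 2), (5, 1), (3, 3)]))  # prints [[0, 1, 2], [3]]
```

Full NSGA-II additionally uses crowding distance within each front to preserve diversity; this simple O(n^2)-per-front version is for clarity, not speed.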
ARTICLE | doi:10.20944/preprints201904.0221.v1
Subject: Engineering, Energy & Fuel Technology Keywords: fuel cell; wind energy; solar energy; hybrid energy system; Colombian caribbean region; multi-objective optimization
Online: 19 April 2019 (11:40:02 CEST)
The hybrid system is analyzed and optimized to produce electric energy in non-interconnected zones of the Colombian Caribbean region, contributing both to the reduction of greenhouse gas emissions and to the rational use of energy. A comparative analysis of the performance of these systems was carried out using a dynamic model with real wind and solar data. The model comprises a Southwest Wind Power Inc. AIR 403 wind turbine, a proton exchange membrane (PEM) fuel cell, an electrolyzer, a solar panel and a charge regulator based on PID controllers that manipulate oxygen and hydrogen flows in the cell. The transient responses of the cell voltage, current, and power were obtained for a demand of 200 W under changes in solar radiation and wind speed for all days of the year 2013 at the Ernesto Cortissoz airport, Puerto Bolívar, Alfonso López airport and Simón Bolívar airport, by regulating the flow of hydrogen and oxygen into the fuel cell. The maximum contribution of power generation from the fuel cell occurred at the Simón Bolívar airport in November, with a value of 158.358 W (9.45%). A multi-objective design optimization over a Pareto front is presented for each site studied to minimize the levelized cost of energy and CO2 emissions, where the decision variables are the number of panels in the PV system and of stacks in the PEM fuel cell.
ARTICLE | doi:10.20944/preprints201811.0123.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: multi-objective approach, sustainable purchasing, lot sizing, Cap and Trade, Economic Order Quantity, Carbon Price
Online: 5 November 2018 (15:12:06 CET)
Sustainability in material purchasing is a growing area of research. Goods purchasing decisions strongly affect transportation path flows, vehicle consolidation, inventory levels and related obsolescence costs. Within a global sourcing context, companies need new decision-making approaches capable of considering a large variety of factors, including those linked with society and the environment. Environmental impact assessment has become a key requirement for materials purchasing and transportation decisions, since global warming is a rising concern in both academic and industrial research. In fact, it is well known that the freight transport industry is responsible for large amounts of carbon emissions contributing to global warming. In this paper, we initially analyse and compare the environmental economic policies established by international governments in relation to the carbon trading systems adopted. Then, we develop a multi-objective lot sizing approach, useful in practice, to define the sustainable quantity to purchase when a Cap and Trade mitigation policy is present. We further analyse the model behaviour under different carbon price values, demonstrating that carbon prices are still far too low to motivate managers towards sustainable purchasing choices.
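A minimal sketch of how a carbon price enters a lot-sizing trade-off (an illustration under assumed cost structure and parameter values, not the paper's model): pricing the emissions of each order and of each unit held simply inflates the effective ordering and holding costs in the classical EOQ formula, shifting the optimal lot size:

```python
import math

def eoq_with_carbon(demand, order_cost, hold_cost,
                    carbon_price, e_order, e_hold):
    """EOQ where each order emits e_order units of CO2 and each unit
    held per period emits e_hold, priced at carbon_price per unit CO2."""
    K = order_cost + carbon_price * e_order   # effective ordering cost
    h = hold_cost + carbon_price * e_hold     # effective holding cost
    return math.sqrt(2 * demand * K / h)

# With a zero carbon price this reduces to the classical EOQ.
q0 = eoq_with_carbon(1000, 100.0, 2.0, 0.0, 0.0, 0.0)
# A positive carbon price on ordering emissions raises the lot size
# (fewer, larger orders amortize the per-order emission cost).
q1 = eoq_with_carbon(1000, 100.0, 2.0, 5.0, 10.0, 0.0)
print(q0, q1)
```

The paper's finding that current carbon prices barely move purchasing decisions corresponds, in this toy form, to carbon_price terms that are small relative to order_cost and hold_cost.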
ARTICLE | doi:10.20944/preprints202010.0111.v1
Subject: Engineering, Automotive Engineering Keywords: multi-objective; optimisation; revit; dynamo; BIM; window design; window type; window position; window-to-wall ratio
Online: 6 October 2020 (09:10:02 CEST)
Windows account for a significant proportion of the total energy lost in buildings. The interaction of window type, window-to-wall ratio (WWR) and window placement height influences the natural lighting and heat transfer through windows. This is a pressing issue for non-tropical regions, considering their high emissions and distinct climatic characteristics. A limitation of the common simulation-based optimisation approaches in the literature is that they are hardly accessible to practitioners. This article develops a numerical window design optimisation model using a common Building Information Modelling (BIM) platform adopted throughout the industry, focusing on non-tropical regions of Australia. Three objective functions are proposed: the first is to maximise the available daylight, and the other two address the undesirable heat transfer through windows in summer and winter respectively. The developed model is tested on a case study located in Sydney, Australia, and a set of Pareto-optimal solutions is obtained. Through the use of the proposed model, energy savings of up to 16.43% are achieved. Key findings from the case example indicate that leveraging winter heat gain to reduce annual energy consumption should not be the top priority when designing windows for Sydney.
ARTICLE | doi:10.20944/preprints201804.0303.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: multi-objective optimization; optimal configuration; improved gravitational search algorithm (IGSA); wind-solar-battery system; demand response
Online: 24 April 2018 (04:05:05 CEST)
This study presents the application of a demand response strategy in a standalone wind-solar-battery hybrid energy system (HES). Inputs for the designed HES are wind speed, solar radiation, temperature and a time-varying load demand. Hourly values of meteorological data and hourly load demand over one year are considered. An improved gravitational search algorithm (IGSA) is used to optimize the configuration of the standalone wind-solar-battery hybrid power system. The optimization objectives are the life-cycle cost of the system, the loss of power supply probability (LPSP) and the energy excess percentage (EXC). The effect of demand response on the economic benefit and energy storage allocation of the standalone wind-solar-battery system is studied. The obtained optimal configuration of the proposed HES provides minimal energy cost with excellent performance, reduced waste and reduced unmet load.
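The LPSP reliability objective used in sizing studies like this one is commonly defined as the unserved energy over the total demand across the simulated horizon; the sketch below is a generic illustration with made-up hourly series, not the paper's data:

```python
def lpsp(demand, supply):
    """Loss of Power Supply Probability over a time series:
    total unserved energy divided by total demand."""
    unserved = sum(max(d - s, 0.0) for d, s in zip(demand, supply))
    return unserved / sum(demand)

# Two toy hours: 1 kWh short in the first, surplus in the second
# (surplus cannot offset an earlier shortage without storage).
print(lpsp([2.0, 2.0], [1.0, 3.0]))  # prints 0.25
```

In a wind-solar-battery HES, `supply` would be the renewable output plus battery discharge, so the battery capacity chosen by the optimizer directly drives this metric down.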
ARTICLE | doi:10.20944/preprints201905.0100.v1
Subject: Engineering, Control & Systems Engineering Keywords: Dual-Miller cycle; thermodynamic analysis; power; ecological coefficient of performance; thermal efficiency; entropy generation; multi-objective optimization
Online: 9 May 2019 (11:27:49 CEST)
Although different assessments and evaluations of the Dual-Miller cycle have been performed, the specified output power and thermal performance associated with the engine are determined here. In addition, a multi-objective optimization of thermal efficiency, ecological coefficient of performance (ECOP) and ecological function is performed by means of the NSGA-II technique and thermodynamic analysis. The best optimum solution on the Pareto optimal frontier is chosen by the fuzzy Bellman-Zadeh, LINMAP, and TOPSIS decision-making techniques. Based on the results, the performance of dual-Miller cycles and their optimization are improved.
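A simplified sketch of the TOPSIS selection step used above (an illustration with assumed, already-normalized benefit scores, not the paper's data): each Pareto solution is ranked by its relative closeness to the ideal point, with the best score going to the solution closest to the ideal and farthest from the anti-ideal:

```python
def topsis(matrix, weights):
    """Rank alternatives by closeness to the ideal solution.
    matrix[i][j]: normalized benefit score of alternative i on criterion j."""
    n_crit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(n_crit)]
    worst = [min(row[j] for row in matrix) for j in range(n_crit)]
    scores = []
    for row in matrix:
        d_best = sum((weights[j] * (row[j] - best[j])) ** 2
                     for j in range(n_crit)) ** 0.5
        d_worst = sum((weights[j] * (row[j] - worst[j])) ** 2
                      for j in range(n_crit)) ** 0.5
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Three Pareto candidates on two criteria; the third is best on both.
print(topsis([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 1.0]))
```

Full TOPSIS also includes a vector-normalization step for raw criterion values and a sign flip for cost-type criteria, both omitted here for brevity.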
ARTICLE | doi:10.20944/preprints202110.0365.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Model predictive control; Mixed-integer linear programming; Multi-objective optimization; Energy storage management; Load management; More electric aircraft; Demand-side flexibility
Online: 25 October 2021 (15:43:38 CEST)
Safety issues related to the electrification of more electric aircraft (MEA) need to be addressed because of the increasing complexity of aircraft electrical power systems and the growing number of safety-critical sub-systems that need to be powered. Managing the energy storage systems and the flexibility on the load side plays an important role in preserving the system’s safety when facing an energy shortage. This paper presents a system-level centralized operation management strategy based on model predictive control (MPC) for MEA to schedule battery systems and exploit demand-side flexibility while satisfying time-varying operational requirements. The proposed online control strategy aims to maintain energy storage (ES) and prolong the battery life cycle while minimizing load shedding, with fewer switching activities to improve device lifetime and avoid unnecessary transients. Using a mixed-integer linear programming (MILP) formulation, different objective functions are proposed to realize the control targets, with soft constraints improving the robustness of the model. In addition, an evaluation framework is proposed to analyze the effects of various objective functions and of the prediction horizon on system performance, which provides the designers and users of MEA and other complex systems with new insights into the formulation of operation management problems.
ARTICLE | doi:10.20944/preprints201805.0152.v1
Subject: Behavioral Sciences, Other Keywords: sitting time; occupational; sedentary fragmentation; objective measurement
Online: 10 May 2018 (05:18:52 CEST)
Prolonged sedentary behaviour (SB) has been shown to be detrimental to health. Nevertheless, population levels of SB are high and interventions to decrease SB are needed. This study aimed to explore the effect of an individualized consultation intervention aimed at reducing SB and increasing breaks in SB among college employees. A pre-experimental study design was used. Participants (n=36) were recruited at a college in Massachusetts, USA. SB was measured over 7 consecutive days using an activPAL3 accelerometer. Following baseline measures, all participants received an individualized SB consultation which focused on limiting bouts of SB >30 minutes; participants also received weekly follow-up e-mails. Post-intervention measures were taken after 16 weeks. Primary outcome variables were sedentary minutes/day and SB bouts >30 minutes. Differences between baseline and follow-up were analyzed using paired t-tests. The intervention did not change daily sedentary time (-0.48%; p>0.05). The number of sedentary bouts >30 minutes decreased significantly, by 0.52 bouts/day (p=0.015). In this study, a consultation-based SB intervention was successful in reducing the number of bouts of SB >30 minutes; however, daily sedentary time did not decrease significantly. These results indicate that consultation-based interventions may be effective if focused on a specific component of SB.
ARTICLE | doi:10.20944/preprints202210.0481.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Controlled Elitism Non-Dominated Sorting Genetic Algorithm; CENSGA; NSGA-II; Variable-length chromosome (VLC); metaheuristic; multi-objective optimization; Pulse vaccination; allocation; scheduling; planning
Online: 31 October 2022 (10:16:41 CET)
Seasonal influenza (a.k.a. flu) is responsible for considerable morbidity and mortality across the globe. The three recognized pathogens that cause epidemics during the winter season are influenza A, B and C. The influenza virus is particularly dangerous due to its mutability. Vaccines are an effective tool in preventing seasonal influenza, and their formulas are updated yearly according to WHO recommendations. However, to facilitate decision-making in the planning of the intervention, policymakers need information on the projected costs and quantities related to introducing the influenza vaccine, in order to help governments obtain an optimal allocation of the vaccine each year. In this paper, an approach based on a Controlled Elitism Non-Dominated Sorting Genetic Algorithm (CENSGA) model is introduced to optimize the allocation of influenza vaccination. A bi-objective model is formulated to control the infection volume and reduce the unit cost of the vaccination campaign. An SIR (Susceptible–Infected–Recovered) model is employed to represent a potential epidemic. The model constraints are based on the epidemiological model, time management, and vaccine quantity. A two-phase optimization process is proposed: guardian control followed by contingent controls. The proposed approach is an evolutionary metaheuristic multi-objective optimization algorithm with a local search procedure based on a hash table. Moreover, in order to optimize the scheduling of a set of policies over a predetermined time to form a complete campaign, an extended CENSGA is introduced with a variable-length chromosome (VLC) along with mutation and crossover operations. To validate the applicability of the proposed CENSGA, it is compared with the classical Non-Dominated Sorting Genetic Algorithm (NSGA-II).
The results are analyzed using graphical and statistical comparisons in terms of cardinality, convergence, distribution and spread quality metrics, illustrating that the proposed CENSGA is effective and useful for determining the optimal vaccination allocation campaigns.
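The SIR dynamics underlying the vaccination model can be sketched with a single explicit Euler step extended by a pulse vaccination term (a generic illustration with assumed rates, not the paper's calibrated model):

```python
def sir_step(s, i, r, beta, gamma, dt=1.0, vacc=0.0):
    """One Euler step of the SIR model on normalized compartments.
    `vacc` moves a fraction of the susceptibles directly to the
    recovered class, modeling a vaccination pulse in this step."""
    new_inf = beta * s * i * dt   # S -> I
    new_rec = gamma * i * dt      # I -> R
    v = vacc * s                  # S -> R via vaccination
    return s - new_inf - v, i + new_inf - new_rec, r + new_rec + v

# Vaccinating 50% of susceptibles with no transmission this step:
s, i, r = sir_step(0.9, 0.1, 0.0, beta=0.0, gamma=0.0, vacc=0.5)
print(s, i, r)
```

A bi-objective formulation like the one described would simulate such trajectories under candidate vaccination schedules, scoring each schedule on total infections and campaign cost.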
ARTICLE | doi:10.20944/preprints202207.0202.v1
Subject: Engineering, Civil Engineering Keywords: Seismic vulnerability; Urban areas; Objective risk; Perceived risk
Online: 14 July 2022 (03:25:12 CEST)
The assessment of seismic risk in urban areas with high seismicity is certainly one of the most important problems that territorial managers have to face. A reliable evaluation of this risk is the basis for the design of both specific seismic improvement interventions and emergency management plans. Inappropriate seismic risk assessments may provide misleading results and induce bad decisions with relevant economic and social impact. The seismic risk in urban areas is mainly linked to three factors, namely, “hazard”, “exposure” and “vulnerability”. Hazard measures the potential of an earthquake to produce harm; exposure evaluates the amount of population exposed to harm; vulnerability represents the proneness of the buildings considered to suffer damage in case of an earthquake. Estimates of these factors may not always coincide with the risk perceived by the resident population. The propensity to implement structural seismic improvement interventions aimed at reducing the vulnerability of buildings depends significantly on the perceived risk. This paper investigates the difference between objective and perceived risk and highlights some critical issues. The aim of this study is to calibrate appropriate policies that allow addressing the most suitable seismic risk mitigation options with reference to current levels of perceived risk. We propose the introduction of a Seismic Policy Prevention index (SPPi). The methodology is applied to a case study focused on a densely populated district of the city of Catania (Italy).
ARTICLE | doi:10.20944/preprints202111.0169.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: discrete mathematics; scheduling; optimization; interpolation; approximation; objective function.
Online: 9 November 2021 (13:24:19 CET)
An approach to estimating the objective function value of the maximum-lateness minimization problem is proposed. It is shown how to use transformed instances to define a new continuous objective function, after which the approach itself is formulated in terms of this new function. We calculate the objective function value for some polynomially solvable transformed instances and use them as interpolation nodes to estimate the objective function of the initial instance. Moreover, two new polynomial cases that are easy to use in the approach are proposed. At the end of the paper, numerical experiments are described and their results are provided.
REVIEW | doi:10.20944/preprints202008.0627.v1
Subject: Earth Sciences, Geoinformatics Keywords: pathfinding; algorithms; multi-criteria; multi-modal; multi-network; transportation
Online: 28 August 2020 (09:09:37 CEST)
Pathfinding is a significant process in daily travel and activities, and pathfinding algorithms are often used to calculate transportation routes. They have now evolved to handle most pathfinding situations and related problems. This review describes previous and recent studies on pathfinding algorithms and traces their development through a classification based on their usage. The aim is to provide a summary of the application of pathfinding algorithms that readers interested in the subject can use as a supplement.
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: photogrammetry; metrology; underwater 3D reconstruction; structure-from-motion; navigation fusion; multi-objective BA; laser scalers; Monte-Carlo simulation; uncertainty estimation; scale drift evaluation; laser spot detection
Online: 15 July 2019 (05:22:16 CEST)
Rapid developments in the field of underwater photogrammetry have given scientists the ability to produce accurate 3-dimensional (3D) models, which are now increasingly used in the representation and study of local areas of interest. This paper addresses the lack of systematic analysis of 3D reconstruction and navigation fusion strategies, as well as the associated error evaluation of models produced at larger scales in GPS-denied environments using a monocular camera (often in deep-sea scenarios). Based on our prior work on automatic scale estimation of Structure from Motion (SfM)-based 3D models using laser scalers, an automatic scale accuracy framework is presented. The confidence level for each of the scale error estimates is independently assessed through the propagation of the uncertainties associated with image features and laser spot detections using a Monte Carlo simulation. The number of iterations used in the simulation was validated through the analysis of the final estimate behaviour. To facilitate the detection and uncertainty estimation of even greatly attenuated laser beams, an automatic laser spot detection method mitigating the effects of scene texture was developed, with the main novelty of estimating the uncertainties based on the recovered characteristic shapes of laser spots with radially decreasing intensities. The effects of four different reconstruction strategies, resulting from the combinations of incremental/global SfM and the a priori/a posteriori use of navigation data, were analyzed using two distinct survey scenarios captured during the SUBSAINTES 2017 cruise (doi: 10.17600/17001000). The study demonstrates that surveys with multiple overlaps of non-sequential images result in a nearly identical solution regardless of the strategy (SfM or navigation fusion), while surveys with weakly connected, sequentially acquired images are prone to broad-scale deformation (doming effect) when navigation is not included in the optimization. 
Thus, scenarios with complex survey patterns benefit substantially from multi-objective BA navigation fusion. In all cases, the errors in the models are below 5%, and often around 1%. The effects of combining data from multiple surveys were also evaluated. The introduction of additional vectors in the optimization of multi-survey problems successfully accounted for offset changes present in the underwater USBL-based navigation data and thus minimized the effect of contradicting navigation priors. Our results also illustrate the importance of collecting a multitude of evaluation data at different locations and moments during the survey.
ARTICLE | doi:10.20944/preprints202104.0627.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Multiple Objective programming problems; Fuzzy numbers; Generalized Hukuhara differentiability.
Online: 23 April 2021 (10:05:28 CEST)
In this paper we deal with the resolution of a fuzzy multiobjective programming problem using level-set optimization. We compare it to other optimization strategies studied to date and propose an algorithm to identify possible Pareto-efficient optimal solutions.
ARTICLE | doi:10.20944/preprints201801.0059.v2
Subject: Arts & Humanities, Religious Studies Keywords: multi-faith spaces; secularisation; multi-faith paradigm; unaffiliated; multi-belief
Online: 15 January 2018 (08:24:56 CET)
Multi-Faith Spaces (MFS) are a relatively recent invention that quickly gained in significance. On the one hand, they offer a convenient solution for satisfying the needs of people with diverse beliefs in the institutional context of hospitals, schools, airports, etc. On the other hand, as Andrew Crompton pointed out, they are politically significant because the multi-faith paradigm “is replacing Christianity as the face of public religion in Europe” (2012, p. 493). Due to their ideological entanglement, MFS are often used as a means to promote either a more privatised version of religion or a certain denominational preference. Two distinct designs are used to achieve these ends: negative in the case of the former, and positive in the latter. Neither is without problems, and neither adequately fulfils its primary purpose of serving diverse groups of believers. Both, however, seem to follow the biases and main problems of secularism. In this paper, I analyse recent developments of MFS to detail their main problems and answer the question of whether the MFS, and the underlying multi-faith paradigm, can be classified as a continuation of secularism.
ARTICLE | doi:10.20944/preprints202212.0312.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Access control; Blockchain; Multi-Blockchain; Multi-Authority; Multi-Domain; Attribute-Based Encryption
Online: 19 December 2022 (03:19:23 CET)
Although the literature offers several access control systems for flexible policy management in multi-authority and multi-domain environments, achieving interoperability and scalability without relying on strong trust assumptions is still an open challenge. We present HMBAC, a distributed fine-grained access control model for shared and dynamic multi-authority and multi-domain environments, along with Janus, a practical system for HMBAC policy enforcement. The HMBAC model supports: (a) dynamic trust management between different authorities; (b) flexible access control policy enforcement, defined at the domain and cross-domain level; and (c) a global source of truth for all entities, supported by an immutable, audit-friendly mechanism. Janus implements the HMBAC model and relies on the effective fusion of two core components: first, a Hierarchical Multi-Blockchain architecture that acts as a single access point that cannot be bypassed by users or authorities; second, a Multi-Authority Attribute-Based Encryption protocol that supports flexible shared multi-owner encryption, where attribute keys from different authorities are combined to decrypt data stored in a distributed manner across different authorities. Our approach was implemented using Hyperledger Fabric as the underlying blockchain, with the system components placed in Kubernetes Docker container pods. We experimentally validated the effectiveness and efficiency of Janus, and we provide fully reproducible artifacts of both our implementation and our measurements.
ARTICLE | doi:10.20944/preprints201907.0287.v1
Subject: Earth Sciences, Geoinformatics Keywords: Airspace Reconfiguration; irregular boundary smoothing; dynamic Monte Carlo method by changing location of flexible vertices; Monte Carlo method by radius changing; Voronoi diagram; graph cutting; multi-objective optimization
Online: 25 July 2019 (10:22:38 CEST)
With the growth of air traffic demand in busy airspace, there is an urgent need for airspace sectorization to increase air traffic throughput and ease the pressure on controllers. The purpose of this paper is to develop a framework that can perform airspace sectorization automatically and reasonably, serving as an advisory tool for controllers, especially for eliminating the irregular sector shapes generated by a simulated annealing algorithm (SAA) based on the region-growth method. Two graph-cutting methods, a dynamic Monte Carlo method by changing the location of flexible vertices (MC-CLFV) and a Monte Carlo method by radius changing (MC-RC), were developed to eliminate these irregular sector shapes in post-processing. The experimental results show that the proposed framework can automatically and reasonably generate sector design schemes that meet the design criteria. Our framework and software can provide assistive design and analysis tools for airspace planners, improve the reliability and efficiency of airspace design, and reduce planners' workload. In addition, this lays the foundation for reconstructing airspace with more intelligent methods.
ARTICLE | doi:10.20944/preprints202204.0261.v1
Subject: Earth Sciences, Atmospheric Science Keywords: PM2.5; Aerosol Optical Depth; Data assimilation; MODIS; satellite data; Objective analysis
Online: 27 April 2022 (11:32:49 CEST)
We used the objective analysis method in conjunction with the successive correction method to assimilate MODerate resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth (AOD) data into the Chimère model, in order to improve the modeling of fine particulate matter (PM2.5) concentrations and the AOD field over Europe. A data assimilation module was developed to adjust the daily initial total-column aerosol concentrations based on a forecast-analysis cycling scheme. The model is then evaluated over a one-month winter period to examine how the data assimilation pushes model results closer to surface observations. This comparison showed that the mean biases of surface PM2.5 concentrations and of the AOD field could be reduced from -34% to -15% and from -45% to -27%, respectively. The assimilation, however, leads to false alarms because of the difficulty of distributing AOD550 over different particle sizes. The impact of the influence radius is found to be small and depends on the density of satellite data. This work, although preliminary, is important for near-real-time air quality forecasting with the Chimère model and can be further developed to improve modeled PM2.5 and ozone concentrations.
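The successive correction idea can be sketched in one dimension: each pass nudges a background field toward the observations within a shrinking influence radius, using Cressman-style weights. This is a generic illustration of the scheme, not the module developed for Chimère; the grid, observations, and radii below are made up.

```python
import numpy as np

def successive_correction(background, grid, obs_vals, obs_locs, radii):
    """Cressman-style successive correction on a 1-D grid: each pass nudges
    the analysis toward the observations within a shrinking influence radius."""
    analysis = background.copy()
    for R in radii:
        # Observation-minus-analysis increments, recomputed once per pass
        innov = obs_vals - np.interp(obs_locs, grid, analysis)
        for i, x in enumerate(grid):
            d2 = (x - obs_locs) ** 2
            w = np.maximum((R**2 - d2) / (R**2 + d2), 0.0)  # Cressman weights
            if w.sum() > 0:
                analysis[i] += np.sum(w * innov) / np.sum(w)
    return analysis

grid = np.linspace(0.0, 10.0, 11)
background = np.zeros(11)
analysis = successive_correction(
    background, grid,
    obs_vals=np.array([1.0, -1.0]), obs_locs=np.array([2.0, 8.0]),
    radii=[4.0, 2.0],
)
```

After the passes, the analysis matches each observation at its location and blends their influence in between, which is the forecast-analysis adjustment the abstract describes at the level of the initial aerosol field.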
ARTICLE | doi:10.20944/preprints202012.0566.v3
Subject: Physical Sciences, General & Theoretical Physics Keywords: quantum measurement; objective reality; Wigner’s friend; irreversibility; waveform collapse; Many-Worlds
Online: 4 January 2021 (10:58:22 CET)
Background: Recently some photon models of a Wigner's friend experiment have led investigators to suggest objective reality does not exist, and to publish non-academic articles with such claims. The public is not equipped to evaluate the severe limitations of these experiments. The separation of Wigner from the experiment and use of only reversible coherent processes for the friend allow operations that are not possible in ordinary reality according to the latest quantum research. Methods: We suggest directly testing the implied claim that objective reality, including incoherent objects with irreversible non-destructive memory, can be held in superposition. We suspect it will fail, but provide for a graduated approach that may discover something about the conditions for superposition collapse. To this end we design a thought experiment to model the objective world, investigating under what conditions experimenters in the same world (ensemble member) will be able to record a result and find it does not appear to change. An observer has a viewing apparatus and a memory apparatus. A second uncorrelated viewer of the same recorded result is employed to obtain objectivity. By hypothesis the uncorrelated second viewer obtains the same view of the measurement record as the first observer. There are not two measurements. This is not an investigation of hidden variables. Results: To model the objective world, incoherent and irreversible processes must be included. To test for superposition, coherence has to be established. These seem to present a contradiction. Conclusions: The thought experiment has suggested new places to look other than size for the origin of objective reality from the quantum world, casts doubt on the Many-Worlds interpretation, and provides a method of testing it.
ARTICLE | doi:10.20944/preprints202012.0615.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: RPL; Contiki Operating System; Objective Function; Wireless Sensor Networks; Routing Metrics
Online: 24 December 2020 (11:25:19 CET)
A Wireless Sensor Network (WSN) is a network of resource-constrained nodes that forms the foundation of the Internet of Things (IoT). The Routing Protocol for Low-power and Lossy Networks (RPL) is responsible for generating and managing data routing paths. Nodes implementing RPL use an Objective Function (OF) to select the preferred next-hop node (the parent node) and the optimal routing path to the destination node. If routing decisions are not made efficiently, the collision domain grows, leading to packet losses and retransmissions that impair the network's operational lifetime. In this study, we use ContikiRPL, the RPL implementation of the Contiki Operating System (OS), a state-of-the-art OS for the IoT, to investigate the performance of RPL with respect to its two OFs: Objective Function Zero (OF0) and the Minimum Rank with Hysteresis Objective Function (MRHOF). The performance of these OFs was evaluated on the following metrics: Packet Delivery Ratio (PDR), power consumption, and network latency. The results show that MRHOF outperformed OF0 on all metrics, with an overall average PDR of 91.5%, a latency of 44 ms, and a power consumption of 1.72 mW across all nodes, yielding optimal network performance with an improved network operational lifetime.
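The hysteresis in MRHOF can be illustrated with a small sketch: a node keeps its current parent unless a candidate improves the path ETX (expected transmission count) by more than a threshold, which prevents route flapping. The threshold value and ETX figures below are illustrative assumptions, not Contiki's actual constants.

```python
# Illustrative value only; ContikiRPL defines its own switch threshold.
PARENT_SWITCH_THRESHOLD = 0.5

def select_parent(current, candidates):
    """candidates: dict parent_id -> path ETX; current: current parent id.
    Switch parents only when the improvement exceeds the hysteresis band."""
    best = min(candidates, key=candidates.get)
    if current is None:
        return best
    if candidates[current] - candidates[best] > PARENT_SWITCH_THRESHOLD:
        return best
    return current

etx = {"A": 2.0, "B": 1.8, "C": 3.1}
first = select_parent(None, etx)   # no parent yet: pick the lowest-ETX node
kept = select_parent("A", etx)     # a 0.2 gain is within the hysteresis band
etx["B"] = 1.2
switched = select_parent("A", etx) # a 0.8 gain exceeds the threshold
```

OF0, by contrast, ranks parents on hop-style rank alone, which is why the two OFs can produce different routes over the same topology.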
ARTICLE | doi:10.20944/preprints201912.0405.v1
Subject: Engineering, Marine Engineering Keywords: Optimal design procedure; monitoring network; water quality; graphical optimization; objective mapping
Online: 31 December 2019 (10:17:53 CET)
A semi-enclosed estuary is very susceptible to changes in the physical and environmental characteristics of the inflow from the land and of coastal sea weather, so continuous and comprehensive monitoring is necessary for managing the estuary. Nevertheless, no standard procedure or schematic framework has been proposed to determine how many instruments are necessary and where they need to be deployed to detect critical changes. The present work therefore proposes a systematic strategy for deploying a monitoring array by combining graphical optimization with the objective mapping technique. To reflect the spatiotemporal characteristics of the bay, the representative variables and eigenvectors are determined by Empirical Orthogonal Function (EOF) analysis, and the cosine angle between them is calculated as a design index for optimization. At the recommended locations, the sampled representative variables are interpolated to reconstruct their spatiotemporal distribution and compared with the distributions of the true values. The analysis confirms that the selected locations, even with a small number of points, can be used for on-site monitoring. The framework also suggests how to determine installable regions for real-time monitoring stations that reflect the global and local characteristics of the semi-enclosed estuary.
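The EOF step can be sketched with a plain SVD: the EOFs are the dominant spatial patterns of the anomaly field, and their amplitudes over time are the principal components. The field below is synthetic (one planted spatial mode plus noise), for illustration only.

```python
import numpy as np

def eof_decompose(field):
    """field: (time, space) matrix. Returns the EOF spatial patterns, the
    principal components, and the variance fraction of each mode."""
    anomalies = field - field.mean(axis=0)           # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt                                        # rows: spatial patterns
    pcs = u * s                                      # time-varying amplitudes
    var_frac = s**2 / np.sum(s**2)
    return eofs, pcs, var_frac

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 50)
pattern = np.array([1.0, 0.5, -0.5, -1.0])           # one dominant mode
field = np.outer(np.sin(t), pattern) + 0.01 * rng.standard_normal((50, 4))
eofs, pcs, var_frac = eof_decompose(field)
```

In the paper's setting, design indices such as the cosine angle would then be computed between the leading eigenvectors to rank candidate monitoring locations.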
ARTICLE | doi:10.20944/preprints201907.0075.v1
Subject: Medicine & Pharmacology, Gastroenterology Keywords: hepatocellular carcinoma; objective response; modified RECIST; sorafenib; hepatic arterial infusion chemotherapy
Online: 4 July 2019 (10:56:53 CEST)
Background: In SILIUS (NCT01214343), the combination of sorafenib and hepatic arterial infusion chemotherapy did not significantly improve overall survival (OS) in patients with advanced hepatocellular carcinoma (HCC) compared with sorafenib alone. In this study, we explored the relationship between objective response by mRECIST and OS in the sorafenib group, the combination group, and all patients in the SILIUS trial. Methods: The association between objective response and OS was analyzed in patients treated with sorafenib (n = 103), the combination (n = 102), and all patients (n = 205). The median OS of responders was compared with that of non-responders. Landmark analyses according to objective response at several fixed time points were performed as sensitivity analyses, and the effect on OS was evaluated by Cox regression analysis with objective response as a time-dependent covariate, adjusted for other prognostic factors. Results: In the sorafenib group, OS of responders (n = 18) was significantly better than that of non-responders (n = 78) (p < 0.0001); median OS was 27.2 (95% CI, 16.0–not reached) months for responders and 8.9 (95% CI, 6.5–12.6) months for non-responders. HRs from landmark analyses at 4, 6, and 8 months were 0.45 (p = 0.0330), 0.37 (p = 0.0053), and 0.36 (p = 0.0083), respectively. Objective response was an independent predictor of OS based on unstratified Cox regression analyses. Similar results were obtained for all patients and for the combination group. Conclusion: In the SILIUS trial, objective response was an independent prognostic factor for OS in patients with HCC.
ARTICLE | doi:10.20944/preprints201702.0061.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: multi-target tracking; multi-Bernoulli filter; sequential Monte-Carlo
Online: 16 February 2017 (09:39:29 CET)
We develop an interactive likelihood (ILH) for sequential Monte-Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, AFL, and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (OSPA and CLEAR MOT). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
ARTICLE | doi:10.20944/preprints202103.0506.v1
Subject: Social Sciences, Accounting Keywords: street view image; subjective and objective perceptions; housing prices; machine learning; computer vision
Online: 22 March 2021 (10:16:03 CET)
The relationship between the street environment and the health, education, mobility, and criminal behavior of citizens has long been investigated by economists, sociologists, and urban planners, and home buyers have been found to pay a premium for better street appearance. Prior streetscape studies mainly focus on objective measures such as the number of nearby trees, the tree canopy area, or the view index of physical features such as greenery, sky, or buildings. However, subjective perceptions may relate to physical features in complex or subtle ways; individual physical features, or simply summing them up, do not capture people's comprehensive perception. In contrast, this study proposes a new urban-scale approach to quantify both subjectively and objectively measured streetscape scores for six important perception qualities: Greenness, Walkability, Safety, Imageability, Enclosure, and Complexity. Building on prior quantitative studies of urban design quality and on emerging applications of deep learning and open-source street view imagery for urban perception, we integrate existing frameworks to (1) effectively collect and evaluate both subjectively and objectively measured perceptions; (2) investigate the coherence and divergence between ML-predicted subjective scores and formula-derived objective scores; and (3) compare their effects on house prices, taking Shanghai as a case study using a large-scale dataset of home transactions. The results imply, first, that the percentage increase in sales price attributable to street scores is significant for both subjective and objective measurements. In general, subjective scores explained more variance than structural attributes and objective scores in the hedonic price model. In particular, objective Greenness and subjective Safety and Imageability scores positively affected house prices.
Second, for the Greenness and Imageability scores, the subjective and objective measures exhibited opposite signs in affecting house prices, implying that there may be mechanisms related to the psychological and socio-demographic characteristics of street users that are not fully captured by objective measures based on view indices or their recombination. In addition, an objective measure may outperform its subjective counterpart when the connotation of the perception is self-evident and uncomplicated, as for Greenness; for concepts less familiar to the average person, the subjective framework performs better. This is the first study to comprehensively extend the hedonic price method with both subjectively and objectively measured streetscape qualities. It suggests that city authorities could levy a street environment tax to replenish the public budget invested in street environments from which developers secure a price premium. This study enriches our understanding of the economic value of subjective and objective measures of street quality, and it sheds light on promising future research in which the coherence and divergence of the two measurements should be further stressed.
ARTICLE | doi:10.20944/preprints202103.0283.v1
Subject: Social Sciences, Accounting Keywords: Cloud Manufacturing(CMfg); 3D Printing Device Resources; HPSO; Muti-objective Optimization; Baldwin effect
Online: 10 March 2021 (13:20:59 CET)
Motivated by service control factors, rapid changes in manufacturing environments, the difficulty of resource allocation evaluation, and resource optimization for 3D printing services (3DPSs) in cloud manufacturing environments, an indicator evaluation framework is proposed for the cloud 3D printing (C3DP) order task execution process, based on a Pareto optimal set algorithm that optimizes and evaluates remotely distributed 3D printing equipment resources. Combined with a multi-objective data normalization method, an optimization model for C3DP order execution based on the Pareto optimal set algorithm is constructed, exploiting the agents' dynamic autonomy and distributed processing. The model can automatically match and optimize candidate services, and it is dynamic and reliable throughout the C3DP order task execution process. Finally, a case study is designed to test the applicability and effectiveness of the C3DP order task execution process based on the analytic hierarchy process and technique for order of preference by similarity to ideal solution (AHP-TOPSIS) optimal set algorithm and the Baldwin effect.
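At the core of any Pareto optimal set algorithm is the extraction of the non-dominated candidates. A minimal sketch, assuming all objectives are to be minimized (the candidate points below are illustrative, not data from the paper):

```python
def pareto_front(points):
    """Return the non-dominated subset, assuming all objectives are minimized."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Candidate services scored on two costs, e.g. (print time, price).
candidates = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(candidates)   # (3, 4) and (5, 5) are dominated
```

In the C3DP setting, each point would be a candidate 3D printing service scored on the normalized objectives, and the front is the set handed to the downstream AHP-TOPSIS ranking step.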
Keywords: UAV; multi-spectral imageries; multi-locational; Maize yield; smallholder; vegetation indices
Online: 19 October 2020 (16:00:27 CEST)
Rapid assessment of maize yields in smallholder farming systems is important for understanding their spatial and temporal variability and for timely agronomic decision support. Imagery acquired with unmanned aerial vehicles (UAVs) offers an opportunity to assess agronomic variables at field scale; however, it is not clear whether this can be translated into reliable yield assessment on smallholder farms, where field conditions, maize genotypes, and management practices vary within short distances. In this study, we assessed the predictability of maize grain yield using UAV-derived vegetation indices (VIs), with and without biophysical variables, on smallholder farms. High-resolution images were acquired with a UAV-borne multispectral sensor at 4 and 8 weeks after sowing (WAS) on 31 farmer-managed fields (FMFs) and 12 nearby Nutrient Omission Trials (NOTs), distributed across 5 locations within the core maize region of Nigeria. The NOTs included non-fertilized and fertilized plots (with and without micronutrients), sown with open-pollinated or hybrid maize genotypes. The acquired multispectral images were post-processed into three vegetation indices: the normalized difference vegetation index (NDVI), the normalized difference red-edge index (NDRE), and the green normalized difference vegetation index (GNDVI). The biophysical variables plant height (Ht) and percent canopy cover (CC) were measured, and the georeferenced plot locations were recorded. In the NOTs, nutrient status, not genotype, drove the grain yield variability. The maximum grain yield observed in the NOTs was 9.3 t ha-1, compared with 5.4 t ha-1 in the FMFs. Without accounting for between- and within-field variation, there was no relationship between UAV-derived VIs and grain yield at 4 WAS (r < 0.02, p > 0.1), but significant correlations were observed at 8 WAS (r ≤ 0.3; p < 0.001).
Ht was positively correlated with grain yield at 4 WAS (r = 0.5, R2 = 0.25, p < 0.001), and more strongly at 8 WAS (r = 0.7, R2 = 0.55, p < 0.001), while the relationship between CC and yield was significant only at 8 WAS. After accounting for within- and between-field variation in the NOTs and FMFs (separately) through linear mixed-effects modeling, the predictability of grain yield from UAV-derived VIs remained generally low (R2 ≤ 0.24); however, including a ground-measured biophysical variable (mainly Ht) improved the explained yield variability (R2 ≥ 0.62, RMSEP ≤ 0.35) in the NOTs but not in the FMFs. We conclude that yield prediction with UAV-acquired imagery (before harvest) is more reliable under controlled experimental conditions (NOTs) than in actual farmer-managed fields, where various confounding agronomic factors can amplify noise within the vegetation canopy signal.
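The role of between-field variation can be illustrated with a simplified fixed-effects stand-in for the paper's linear mixed-effects model: per-field intercepts absorb between-field differences while NDVI and plant height (Ht) enter as shared slopes. All numbers below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fields, per_field = 5, 20
field = np.repeat(np.arange(n_fields), per_field)
ndvi = rng.uniform(0.3, 0.8, field.size)
ht = rng.uniform(1.0, 2.5, field.size)                  # plant height, m
field_offset = rng.normal(0.0, 0.8, n_fields)[field]    # between-field shift
# Synthetic yield: known slopes (4.0 for NDVI, 1.5 for Ht) plus noise.
yield_t_ha = (1.0 + 4.0 * ndvi + 1.5 * ht
              + field_offset + 0.1 * rng.standard_normal(field.size))

# Design matrix: one dummy intercept per field, plus the two covariates.
X = np.column_stack([np.eye(n_fields)[field], ndvi, ht])
coef, *_ = np.linalg.lstsq(X, yield_t_ha, rcond=None)
ndvi_slope, ht_slope = coef[-2], coef[-1]
```

Because the field intercepts soak up the between-field offsets, the covariate slopes are recovered close to their true values, which is the effect the mixed-effects modeling in the study exploits.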
ARTICLE | doi:10.20944/preprints202008.0209.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Sign Language Recognition; Multi-modality; Late Fusion; multi-sensor; Gesture Recognition
Online: 8 August 2020 (17:28:00 CEST)
In this work, we show that a late fusion approach to multi-modality in sign language recognition improves on the singular approaches of Computer Vision (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive the best topology for each: the vision model is implemented by a CNN with an optimised MLP, and the Leap Motion model by an evolutionary search over deep MLP topologies. The two best networks are then fused for synchronised processing, which yields a better overall result (94.44%), since complementary features are learnt in addition to the original task. The hypothesis is further supported by applying the three models to a set of completely unseen data, where the multi-modality approach achieves the best results relative to the single-sensor methods. When transfer learning from the weights trained on BSL, all three models outperform standard random weight initialisation when classifying ASL, and the best model overall for ASL classification is the transfer-learned multi-modality approach, which scored 82.55% accuracy.
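A common baseline form of late fusion, simpler than the learned fusion network described above, is to average the class probabilities from each modality; the per-class scores below are made up for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-class scores from two single-modality networks
# for one 3-gesture example (values invented for illustration).
vision_logits = np.array([2.0, 1.0, 0.1])  # vision model favours class 0
leap_logits = np.array([0.2, 2.5, 0.1])    # Leap Motion model favours class 1

# Equal-weight averaging of the two probability vectors (an assumption here;
# the paper instead fuses the two best networks into a combined model).
fused = 0.5 * (softmax(vision_logits) + softmax(leap_logits))
prediction = int(np.argmax(fused))
```

Here the Leap model's confidence outweighs the vision model's weaker preference, so the fused prediction follows class 1, illustrating how complementary modalities can override each other.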
ARTICLE | doi:10.20944/preprints202101.0413.v1
Online: 21 January 2021 (09:31:21 CET)
Urgent environmental challenges and emerging additive manufacturing (AM) technologies push research towards better-performing and new materials. In metallurgy, high-entropy alloys (HEAs) have recently been a topic of intense research because of their promising properties, such as high-temperature strength and stability. Moreover, this class of multi-principal element alloys (MPEAs) has opened the research community up to unexplored compositional spaces, spawning a rich literature of high-throughput methodologies and tools for rapidly screening large numbers of alloys. However, none of these methods has so far been aimed at designing new MPEAs for the AM process known as selective laser melting (SLM). Here we conducted nanoindentation testing on single scan tracks of elemental powder blends and of pre-alloyed powders obtained by ball milling of AlTiCuNb and AlTiVNb. The results show that nanoindentation can be an effective technique for gaining information about phase evolution during laser scanning, helping to accelerate the development of new MPEAs.
ARTICLE | doi:10.20944/preprints202203.0161.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: multi-agent systems; multi-agent reinforcement learning; internet of vehicles; urban area
Online: 11 March 2022 (05:13:15 CET)
Smart Internet of Vehicles (IoV) combined with Artificial Intelligence (AI) will contribute to vehicle decision-making in the Intelligent Transportation System (ITS). Multi-Vehicle Pursuit (MVP) games, in which multiple vehicles cooperate to capture mobile targets, are gradually becoming a hot research topic. Although there are achievements in the field of MVP in open-space environments, urban areas bring complicated road structures and restricted moving spaces as challenges to the resolution of MVP games. We define an Observation-constrained MVP (OMVP) problem in this paper and propose a Transformer-based Time and Team Reinforcement Learning scheme (T3OMVP) to address it. First, a new multi-vehicle pursuit model is constructed based on decentralized partially observable Markov decision processes (Dec-POMDP) to instantiate the problem. Second, by introducing and modifying a transformer-based observation sequence, QMIX is redefined to adapt to the complicated road structure, restricted moving spaces, and constrained observations, so as to control vehicles pursuing the target based on their observations. Third, a multi-intersection urban environment is built to verify the proposed scheme. Extensive experimental results demonstrate that the proposed T3OMVP scheme achieves improvements of 9.66%–106.25% over state-of-the-art QMIX approaches. Code is available at https://github.com/pipihaiziguai/T3OMVP.
REVIEW | doi:10.20944/preprints202101.0033.v1
Subject: Engineering, Other Keywords: Desalination; Multi Effect Distillation; Multi Stage Flash; Vapor Compression Distillation; Renewable Energies.
Online: 4 January 2021 (12:33:03 CET)
Thermal desalination remains a reliable technology for treating brackish water and seawater; however, its high energy requirements have caused it to lag behind non-thermal technologies such as reverse osmosis. This review outlines the development and trends of the three most commercially used thermal (phase-change) technologies worldwide: Multi Effect Distillation (MED), Multi Stage Flash (MSF), and Vapor Compression Distillation (VCD). First, the state of water stress suffered by regions with little fresh-water availability is described, along with existing desalination technologies that could provide an alternative solution. The most recent studies published for each commercial thermal technology are then presented, focusing on optimizing the desalination process, improving efficiency, and reducing energy demand. Next, an overview is given of the use of renewable energy and its potential for integration into both commercial and non-commercial desalination systems. Finally, research trends and their orientation towards hybridization of technologies and the use of renewable energies as a relevant alternative to the current problems of brackish water desalination are discussed. This updated review gives researchers a detailed state of the art of the subject and a starting point for their research, since current advances and trends in thermal desalination are shown.
ARTICLE | doi:10.20944/preprints202011.0310.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Home health care; Routing and scheduling; Nurse downgrading; Epsilon-constraint method; Bi-objective optimization.
Online: 10 November 2020 (12:20:43 CET)
In recent years, the management of health systems has been a main concern of governments and decision makers. Home health care is one of the newest ways of providing services to patients in developed societies, responding to modern lifestyles and increasing life expectancy. The home health care routing and scheduling problem is a generalized version of the vehicle routing problem, extended into a complex problem by the special features and constraints of health care. The problem involves multiple stakeholders, such as nurses, whose satisfaction level is very important to increase. In this study, a mathematical model is developed that extends traditional home health care routing and scheduling models with downgrading cost aspects, by adding the objective of minimizing the difference between the actual and potential skills of the nurses. Downgrading can lead to nurse dissatisfaction. In addition, skillful nurses have higher salaries, and high-level services increase equipment costs and require more expensive training and nursing certificates; downgrading can therefore impose large hidden costs on the managers of a company. To solve the bi-objective model, an ε-constraint-based approach is suggested, and the model's applicability and its ability to solve the problem at various sizes are discussed. A sensitivity analysis on the ε parameter is conducted to analyze its effect on the problem. Finally, some managerial insights are presented to help managers in this field, and directions for future studies are given.
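The ε-constraint mechanics can be sketched on a toy discrete bi-objective instance: minimize one objective while bounding the other by ε, then sweep ε to trace out the trade-off. The solution names and cost pairs below are illustrative, not instances from the paper.

```python
# Hypothetical solutions scored as (f1: routing cost, f2: downgrading cost).
solutions = {
    "s1": (10, 8),
    "s2": (12, 5),
    "s3": (15, 2),
    "s4": (18, 2),
}

def eps_constraint(eps):
    """Minimize f1 subject to f2 <= eps; None if the bound is infeasible."""
    feasible = {k: v for k, v in solutions.items() if v[1] <= eps}
    return min(feasible, key=lambda k: feasible[k][0]) if feasible else None

# Sweeping eps over the range of f2 values exposes the trade-off curve.
front = {eps: eps_constraint(eps) for eps in (2, 5, 8)}
```

Tightening ε forces cheaper downgrading at the cost of routing, which is exactly the sensitivity behavior the abstract analyzes for the ε parameter.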
ARTICLE | doi:10.20944/preprints201805.0240.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: background reconstruction; image quality assessment; image dataset; subjective evaluation; perceptual quality; objective quality metric
Online: 17 May 2018 (09:36:33 CEST)
With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied for evaluating the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings in existing metrics and propose a full reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality in the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark to evaluate the performance of future metrics that are developed to evaluate the perceived quality of reconstructed background images.
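The probability summation model named above can be sketched as pooling per-scale detection probabilities into the probability that at least one scale detects a distortion; the per-scale values here are made up for illustration, and this is only the combination rule, not the full RBQI pipeline.

```python
import numpy as np

def probability_summation(p_scales):
    """Pool per-scale distortion-detection probabilities: the distortion is
    perceived if at least one scale detects it (independence assumed)."""
    p = np.asarray(p_scales)
    return 1.0 - np.prod(1.0 - p)

# Hypothetical detection probabilities at three spatial scales.
p_detect = probability_summation([0.1, 0.3, 0.5])
```

The pooled score grows monotonically with every scale's contribution, so a distortion visible at any single scale still penalizes the final quality index.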
ARTICLE | doi:10.20944/preprints202208.0307.v1
Subject: Engineering, Other Keywords: constrained optimization; multi-operator; multi-parameter adaptation; ensemble constraint handling techniques; Evolutionary Algorithms
Online: 17 August 2022 (08:35:44 CEST)
Real-world optimization problems are often governed by one or more constraints. Over the last few decades, extensive research on Constrained Optimization Problems (COPs) has been fueled by advances in computational intelligence. In particular, Evolutionary Algorithms (EAs) are a preferred tool for practitioners solving COPs within practicable time limits. We propose an ensemble multi-method hybrid EA framework with four mutation operators, two crossover operators, a multi-search optimization algorithm [Differential Evolution (DE) and Gaining Sharing Knowledge (GSK)], and an ensemble of constraint handling techniques to solve global real-world constrained optimization problems. The proposed framework, FEPEA, benefits from multiple adaptation strategies for the control parameters and search mechanisms, uses two sub-populations, and employs a knowledge-sharing mechanism between junior and senior phases. The algorithm also combines the power of four popular constraint handling techniques (CHTs) and uses a voting mechanism to select a particular CHT. On top of that, it applies both linear and non-linear population size reduction at every step of the evolutionary process. We test our method on the 57 real-world problems provided as part of the CEC 2020 special session and competition on real-world constrained optimization. Experimental results indicate that FEPEA achieves state-of-the-art performance on real-world constrained global optimization when compared against other well-known real-world constrained optimizers.
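One of the operator families such frameworks ensemble is classic DE; a minimal sketch of one DE/rand/1/bin step is shown below. The F and CR values are fixed illustrative choices, not the adapted parameters of FEPEA.

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin trial vector for individual i of population pop."""
    rng = rng or np.random.default_rng()
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])       # differential mutation
    cross = rng.random(pop.shape[1]) < CR            # binomial crossover mask
    cross[rng.integers(pop.shape[1])] = True         # keep >= 1 mutant gene
    return np.where(cross, mutant, pop[i])

rng = np.random.default_rng(42)
pop = rng.uniform(-5, 5, (10, 4))                    # 10 individuals, 4 dims
trial = de_rand_1_bin(pop, 0, rng=rng)
```

In a full COP solver, the trial would replace `pop[0]` only if the chosen constraint handling technique judges it no worse, which is where the ensemble of CHTs and the voting mechanism enter.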
Subject: Engineering, Control & Systems Engineering Keywords: bond graph; multi bond; vector bond; hybrid; switching; multi-body; dynamics; system; model
Online: 23 April 2020 (10:33:06 CEST)
The hybrid bond graph has been studied in depth for scalar bond graphs, but how does this translate to the multi-bond graph? Here, the controlled junction – used to model structural switching such as contact – is extended to the multi-bond case. This is a simple process, assuming that all bonds switch simultaneously (which makes physical sense). A controlled 0-junction is applied to the multi-bond graph of a car, which can lose contact with the ground in cornering. Dynamic causality arises, but this can be accommodated using an equational submodel in 20-Sim (in a manner similar to that used with scalar bond graphs). The junction is proposed for subsequent work to develop a validated multi-body dynamics car model in cornering.
ARTICLE | doi:10.20944/preprints201906.0036.v1
Subject: Earth Sciences, Geoinformatics Keywords: digital elevation models; multi-source fusion; multi-scale fusion; global evaluation; accuracy validation.
Online: 5 June 2019 (10:26:30 CEST)
The quality of digital elevation models (DEMs) is inevitably affected by the limitations of the imaging modes and the generation methods. One effective way to address this problem is to merge the available datasets through data fusion. In this paper, a fusion-based global DEM dataset (82°S-82°N) is introduced, which we refer to as GSDEM-30. This is a 30-m DEM reconstructed mainly from the unfilled SRTM1, AW3D30, and ASTER GDEM v2 datasets by combining multi-source and multi-scale fusion techniques. A comprehensive evaluation of the GSDEM-30 data, as well as the 30-m ASTER GDEM v2 and AW3D30 DEM, is presented. Global ICESat GLAS data and the local National Elevation Dataset (NED) were used as the reference for the vertical accuracy validation, while GlobeLand30 was introduced for the landscape analysis. Furthermore, we employed the maximum-slope approach to detect potential artefacts in the DEMs. The results show that the GDEM data are seriously affected by noise and artefacts. Thanks to the multiple input datasets and the refined post-processing, GSDEM-30 is contaminated with fewer anomalies than both ASTER GDEM and AW3D30. The fusion techniques used can also be applied to the reconstruction of other fused DEM datasets.
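A maximum-slope artefact screen of the kind mentioned above can be sketched as follows; the 8-neighbour window, 30 m cell size, and 60° threshold are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def max_slope_artefacts(dem, cellsize=30.0, threshold_deg=60.0):
    """Flag DEM cells whose steepest slope to any 8-neighbour exceeds a
    threshold -- a minimal artefact screen (parameters are assumptions)."""
    h, w = dem.shape
    max_slope = np.zeros_like(dem, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(dem, dy, axis=0), dx, axis=1)
            dist = cellsize * np.hypot(dy, dx)  # diagonal neighbours are farther
            slope = np.abs(dem - shifted) / dist
            max_slope = np.maximum(max_slope, slope)
    # interior cells only, since np.roll wraps around the array edges
    mask = np.zeros((h, w), dtype=bool)
    mask[1:-1, 1:-1] = np.degrees(np.arctan(max_slope))[1:-1, 1:-1] > threshold_deg
    return mask

dem = np.zeros((5, 5))
dem[2, 2] = 500.0  # a 500 m spike on flat terrain: an obvious artefact
print(max_slope_artefacts(dem).sum())  # the spike and its 8 neighbours flagged
```

Real artefact detection would also mask voids and water bodies before thresholding.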
ARTICLE | doi:10.20944/preprints201607.0059.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: multi-slope sliding-mode control (MSSMC); single-phase inverter; multi-slope function (MS)
Online: 19 July 2016 (04:54:06 CEST)
In this paper, a new approach to the sliding-mode control of single-phase inverters under linear and non-linear loads is introduced. The main idea is to employ a non-linear, flexible, multi-slope function in the controller structure. This non-linear function enables the controller to regulate the inverter through a non-linear multi-slope sliding surface. In general, this sliding surface has two parts with a different slope in each, and its flexibility allows the multi-slope sliding-mode controller (MSSMC) to reduce the total harmonic distortion, improve the tracking accuracy, and prevent the overshoots and undesirable transient states in the output voltage that occur when the load current rises sharply. To further improve the tracking accuracy and reduce the steady-state error, an integral term of the multi-slope function is added to the sliding surface. The improved performance of the proposed controller is confirmed by simulations, and the results are compared with a conventional SMC and an SRFPI controller.
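A two-segment sliding surface of the kind described can be sketched as below; the break point and gains are illustrative values (not the paper's), and the integral term the authors add is omitted for brevity:

```python
import numpy as np

def multi_slope(e, e_break=0.1, k_small=5.0, k_large=20.0):
    """Two-segment multi-slope function of the tracking error e:
    gentle gain near zero error, steep gain for large errors.
    Break point and gains are illustrative assumptions."""
    e = np.asarray(e, dtype=float)
    small = np.abs(e) <= e_break
    # offset keeps the function continuous at |e| = e_break
    offset = (k_large - k_small) * e_break * np.sign(e)
    return np.where(small, k_small * e, k_large * e - offset)

def sliding_surface(e, e_dot, lam=1.0):
    """Sliding surface s = e_dot + lam * ms(e); the paper's integral
    term for steady-state error reduction is omitted here."""
    return e_dot + lam * multi_slope(e)

print(float(sliding_surface(0.05, 0.0)))  # inside the gentle region: 0.25
```

The steep outer segment drives large errors back quickly, while the gentle inner segment limits overshoot near the surface.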
ARTICLE | doi:10.20944/preprints201809.0200.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: objective clustering; biclustering; gene regulatory networks; reconstruction; validation; gene expression profiles; noise component; systems stability
Online: 11 September 2018 (13:48:12 CEST)
ARTICLE | doi:10.20944/preprints202204.0159.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: relaxation; spectral methods; multi-domain
Online: 18 April 2022 (08:28:47 CEST)
In gravitational theory and astrophysical dynamics, singular initial value problems (IVPs) are frequently encountered. Finding solutions to this class of IVPs can be challenging due to their complex nature. This study circumvents the complexity by proposing a numerical method for solving such problems. The proposed approach seeks solutions to the IVP by partitioning the domain [0,L] of the problem into two intervals and solving the problem on each. A closed-form solution to the IVP is sought in the interval containing the singular point. On the domain not containing the singularity, a linearization technique and piecewise partitioning are applied to the nonlinear IVP, and the resulting linearized differential equation is solved using the Chebyshev spectral collocation method. Some examples are presented to illustrate the efficiency of the proposed method. Numerical analysis of the solution and residual errors is presented to ascertain convergence and accuracy. The results suggest that the technique gives accurate, convergent solutions using few collocation points.
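The Chebyshev spectral collocation step relies on Gauss-Lobatto points and a differentiation matrix. A minimal sketch of this standard building block (Trefethen's classic construction, not the authors' full solver):

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the spectral
    differentiation matrix D, so (D @ u) approximates u'(x)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)       # collocation points
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                    # negative-sum trick for diagonal
    return D, x

D, x = cheb(8)
# Spectral differentiation is exact (to rounding) for low-degree polynomials
print(np.max(np.abs(D @ x**2 - 2 * x)))  # tiny (machine precision)
```

For a linearized boundary-value step, one replaces derivatives by powers of `D`, imposes the initial conditions in the first rows, and solves the resulting linear system.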
ARTICLE | doi:10.20944/preprints202105.0400.v1
Online: 18 May 2021 (09:31:15 CEST)
Dracunculiasis (also known as Guinea worm disease) is caused by the parasite Dracunculus medinensis and spreads through drinking water containing Guinea worm larvae. The lack of safe water facilities, prevention, and treatment has had serious consequences in its endemic regions, where economies suffer from reduced agricultural production caused by the poor health of field workers. In this study, a multi-epitope vaccine against Dracunculus medinensis was designed using immunoinformatics. The vaccine was constructed from T-cell and B-cell epitopes derived from Dracunculus medinensis proteins (a lactamase-B domain-containing protein, a G-domain-containing protein, and ferrochelatase), together with adjuvants and linkers. The tertiary structure, physiochemical properties, and immunogenic elements of the vaccine were determined, and the tertiary structure was validated for quality. The worldwide population coverage of the parasite's CTL and HTL epitopes is 95.61%. The stability of the chimeric vaccine was improved through disulfide engineering. Molecular docking of the vaccine with Toll-like receptor 4 (TLR-4) showed efficient binding, followed by molecular dynamics simulation. Immune simulation suggested mediated cell immunity and repeated antigen clearance. Finally, the optimized codon sequence was used in in silico cloning to ensure high expression of the vaccine in the E. coli strain K12. With further assessment, the proposed multi-epitope vaccine is believed to be strongly immunogenic and could help control Dracunculus medinensis, improving the social and economic conditions of endemic regions.
ARTICLE | doi:10.20944/preprints202011.0327.v1
Online: 12 November 2020 (08:24:40 CET)
Introduction: Tuberculosis is common in Pakistan. Due to various factors, including socioeconomic ones, compliance with anti-tuberculosis drugs is poor, leading to resistance. We aim to determine the prevalence of multidrug-resistant (MDR) tuberculosis in the Pakistani population. Methods: A prospective observational study was conducted from April 1, 2019, to December 31, 2019, in the Pulmonology department of a tertiary care hospital in Pakistan. Culture and sensitivity were assessed using a sputum sample or, when no sputum sample was available, bronchoalveolar lavage. Results: Approximately 71.3% of patients with tuberculosis were resistant to isoniazid, and around 48.6% did not respond to rifampin. Multi-drug resistance was found in 29.4% of participants. Conclusion: Multi-drug-resistant tuberculosis is highly prevalent in Pakistan, which may increase the burden on the healthcare system and lead to various complications of tuberculosis.
ARTICLE | doi:10.20944/preprints201807.0614.v1
Online: 31 July 2018 (09:49:06 CEST)
Phenotypic studies require large datasets for accurate inference and prediction. Collecting plant data on a farm can be very labor-intensive and costly. This paper presents the design, architecture (hardware and software), and deployment of a distributed modular agricultural multi-robot system for row-crop field data collection. The proposed system has been deployed at a soybean research farm at Iowa State University.
COMMUNICATION | doi:10.20944/preprints202207.0333.v1
Subject: Medicine & Pharmacology, Other Keywords: residual dizziness; labyrintholithiasis; cupololithiasis; otocones; BPPV; liberating maneuvers; utricle; VEMPs objective vertical visual VVS; bucket test
Online: 22 July 2022 (08:19:45 CEST)
Residual dizziness after a liberating maneuver is often referred to by patients as a more disabling set of symptoms than the positional vertigo itself. This situation seems to involve more than half of subjects with labyrintholithiasis. The authors examine the hypothesis according to which residual dizziness involves subjects with labyrintholithiasis on the basis of otoconia.
ARTICLE | doi:10.20944/preprints202001.0275.v1
Subject: Social Sciences, Finance Keywords: customer- oriented service; behaviour; distribution channel; commissions and fees; objective and subjective advice; sustainable insurance brokerage
Online: 24 January 2020 (10:36:31 CET)
This research focuses on the customer orientation of insurance brokers, whose activity is regulated by Law 26/2006 of July 17 on the mediation of private insurance and reinsurance. The goal is to ascertain whether the intermediation inherent in the insurance broker's activity, which implies a customer-oriented service, entails a positive behaviour that transcends the immediate environment and reaches society. This study presents a comparative analysis between the insurance brokerage society, characterised by providing a personalised customer service, and banks' advisory services on insurance. To this end, we study the evolution of the total volume of business and new production, compared for a portfolio of insurance products. The results suggest that customers value the advisory service provided by the broker. However, for a particular business segment of standardised insurance products and products related to banking assets, customers are more likely to resort to the bank's services. In addition, the results indicate that the commission percentages applied by the entities operating in the banking insurance channel exceed those received by the insurance broker. With all this, intermediation in the development of the insurer's activity can entail ethical and social behaviour that involves customer orientation and, possibly, social service, which does not always benefit the insurer.
Subject: Keywords: Single image deraining; Multi-layer Laplacian pyramid; Multi-scale feature extraction module; Channel attention module.
Online: 31 May 2021 (11:41:25 CEST)
Deep convolutional neural networks (CNNs) have shown great advantages in the single-image deraining task. However, most existing CNN-based single-image deraining methods still suffer from residual rain streaks and loss of detail. In this paper, we propose a deep neural network comprising a multi-scale feature extraction module and a channel attention module, embedded in the feature extraction sub-network and the rain removal sub-network, respectively. In the feature extraction sub-network, the multi-scale feature extraction module is constructed from a multi-layer Laplacian pyramid, and the multi-scale feature maps are then integrated by a feature fusion module. In the rain removal sub-network, the channel attention module, which assigns different weights to different channels, is introduced to preserve image details. Visual and quantitative experimental comparisons demonstrate that the proposed method performs favorably against other state-of-the-art approaches.
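A channel attention module of the kind described (per-channel reweighting of feature maps) can be sketched in the squeeze-and-excitation style; the weight shapes and reduction ratio are our assumptions, and the paper's exact module may differ:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention in plain NumPy:
    global-average-pool each channel, pass through a small two-layer
    gate, and rescale the channels by the resulting weights."""
    C, H, W = feat.shape
    z = feat.mean(axis=(1, 2))              # squeeze: one scalar per channel, (C,)
    s = np.maximum(w1 @ z, 0.0)             # excitation, ReLU, (C // r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # sigmoid gate in (0, 1), (C,)
    return feat * g[:, None, None]          # per-channel rescale

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))     # C=8 channels, 16x16 maps
w1 = rng.standard_normal((2, 8))            # reduction ratio r=4 → bottleneck of 2
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the gate lies in (0, 1), each channel is attenuated according to its learned importance, which is how such modules favor detail-bearing channels.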
ARTICLE | doi:10.20944/preprints202101.0157.v1
Subject: Engineering, Automotive Engineering Keywords: Multi-frequency eddy current; lift-off inversion; coating thickness; non-destructive testing; multi-layer conductor.
Online: 8 January 2021 (13:08:37 CET)
Defect detection in ferromagnetic substrates is often hampered by non-magnetic coating thickness variation when using the conventional eddy current testing technique. The lift-off distance between the sample and the sensor is one of the main obstacles to measuring the thickness of non-magnetic coatings on ferromagnetic substrates with eddy current testing. Based on the eddy-current thin-skin effect and the lift-off insensitive inductance (LII), a simplified iterative algorithm is proposed for reducing the effect of lift-off variation using a multi-frequency sensor. Compared with previous techniques that compensate for the lift-off error while retrieving the thickness (e.g., the lift-off point of intersection), the simplified inductance algorithm avoids the computational burden of integration and is used as an embedded algorithm for the online retrieval of lift-offs via each frequency channel. The LII is determined by the dimension and geometry of the sensor, thus eliminating the need for empirical calibration. The method is validated by experimental measurements of the inductance of coatings with different materials and thicknesses on ferrous substrates (dual-phase alloy). The error of the calculated coating thickness is kept within 3% for an extended lift-off range of up to 10 mm.
ARTICLE | doi:10.20944/preprints201902.0027.v1
Subject: Materials Science, Other Keywords: porous; ceramics; additive manufacturing; multi-material; multi-property; CerAMfacturing; CerAM VPP; CerAM T3DP; CerAM Replica
Online: 4 February 2019 (11:31:43 CET)
Porous ceramics can be realized by different methods and are used in manifold applications, such as cross-flow membranes, wall-flow filters, porous burners, solar receivers, structural design elements, and catalytic supports. In this paper, three alternative process routes are presented that can be used to manufacture porous ceramic components with different properties or even graded porosity. The first process route is based on additive manufacturing (AM) of macro-porous ceramic components; the second on AM of a polymeric template, which is used to manufacture porous ceramic components via the replica technique. Finally, the third process route is based on an AM technology that allows the manufacturing of multi-material or multi-property ceramic components, such as components with dense and porous volumes in one complex-shaped component.
ARTICLE | doi:10.20944/preprints202206.0075.v1
Subject: Materials Science, Nanotechnology Keywords: hydrated multi-dimensional nanoparticles; advanced electronics
Online: 6 June 2022 (09:10:17 CEST)
The paper considers new effects of the nanoscale state of matter, which open up prospects for creating electronic devices based on new physical principles. The contact between chemically homogeneous hydrated nanoparticles of yttria-stabilized zirconium oxide (ZrO2 – x mol% Y2O3, x = 0, 3, 4, 8; YSZ) of different sizes – particle-size pairs of 7.5 nm and 7.5 nm; 7.5 nm and 9 nm; 7.5 nm and 11 nm; 7.5 nm and 14 nm – in the form of compacts obtained under high hydrostatic pressure (HP compacts, 300 MPa) was studied under direct and alternating current. A unique size effect of the nonlinear (semiconductor) dependence of the electrical properties (in the region U < 2.5 V, I ≤ 2.7 mA) of the contact of different-sized YSZ nanoparticles of the same chemical composition is revealed, which indicates the possibility of creating a new type of semiconductor structure based on chemically homogeneous nanostructured systems. The electronic structure of the near-surface regions of nanoparticles of a special type of oxide material, and the possibility of obtaining specifically rectifying contact properties on this basis, were studied theoretically. Models of Tamm-type surface states are constructed, taking into account the long-range Coulomb interaction. The discovered dispersion and its dependence on the curvature of the nanoparticle surface made it possible to study the conditions for the formation of a contact potential difference for nanoparticles of the same radius (synergistic effect) and of different radii (doped and undoped variants), as well as to describe a group of powder particles of the material within the Anderson model. The established effect makes it possible to address the diffusion instability of semiconductor heterojunctions and opens up prospects for creating electronic devices with a fundamentally new level of properties for use in various fields of the national economy and breakthrough critical technologies.
ARTICLE | doi:10.20944/preprints202205.0006.v1
Subject: Life Sciences, Biophysics Keywords: structured illumination; fluorescence; brain; multi-camera
Online: 4 May 2022 (12:24:22 CEST)
Fluorescence microscopy provides an unparalleled tool for imaging biological samples. However, producing high-quality volumetric images quickly and without excessive complexity remains a challenge. Here, we demonstrate a simple multi-camera structured illumination microscope (SIM) capable of simultaneously imaging multiple focal planes, allowing for the capture of 3D fluorescent images without any axial movement of the sample. This simple setup allows for the acquisition of many different 3D imaging modes, including 3D time lapses, high-axial-resolution 3D images, and large 3D mosaics.
ARTICLE | doi:10.20944/preprints202111.0381.v1
Online: 22 November 2021 (11:08:58 CET)
Our work uses Iterative Boltzmann Inversion (IBI) to study the coarse-grained interaction between 20 amino acids and the representative carbon nanotube CNT55L3. IBI is a multi-scale simulation method that has attracted the attention of many researchers in recent years and can effectively refine a coarse-grained model derived from the potential of mean force (PMF). IBI starts from the target distribution function obtained by all-atom molecular dynamics simulation: an initial potential is extracted from the PMF and then refined iteratively against this target. After more than 100 iterations, the distributions obtained by coarse-grained molecular dynamics (CGMD) simulation overlap well with the results of all-atom molecular dynamics (AAMD) simulation. In addition, our work lays the foundation for developing coarse-grained force fields for simulating very large proteins and other important nanoparticles.
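The IBI refinement loop updates the pair potential from the mismatch between the current and target radial distribution functions. One update step can be sketched as follows (the damping factor `alpha` is a common practical choice, not necessarily the authors'):

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, alpha=0.5):
    """One Iterative Boltzmann Inversion step on a radial grid:
        V_{i+1}(r) = V_i(r) + alpha * kT * ln(g_i(r) / g_target(r)).
    Distributions are assumed strictly positive on the grid; alpha
    damps the correction to stabilize the iteration."""
    return V + alpha * kT * np.log(g_current / g_target)

# Where the current RDF is too high, the potential is raised (more repulsive);
# where it is too low, the potential is lowered (more attractive).
V = np.zeros(3)
g_cur = np.array([1.2, 1.0, 0.8])
g_tgt = np.array([1.0, 1.0, 1.0])
print(ibi_update(V, g_cur, g_tgt))  # signs: [+, 0, -]
```

The starting potential is typically the Boltzmann inversion of the target itself, V_0(r) = -kT ln g_target(r), which is the PMF extraction the abstract describes.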
REVIEW | doi:10.20944/preprints202009.0030.v1
Subject: Medicine & Pharmacology, Other Keywords: multi-morbidity; CGA; frailty; polypharmacy; deprescribing
Online: 2 September 2020 (06:04:17 CEST)
Multi-morbidity and polypharmacy are common in older people and pose a challenge for health and social care systems, especially in the context of global population ageing. They are complex and interrelated concepts in the care of older people that require early detection and patient-centred decision making, underpinned by the principles of multidisciplinary-led comprehensive geriatric assessment (CGA). Personalised care plans need to remain responsive and adaptable to the needs of a patient, enabling an individual to maintain their independence.
ARTICLE | doi:10.20944/preprints201709.0134.v1
Subject: Earth Sciences, Atmospheric Science Keywords: multi-sensor fusion; satellite; radar; precipitation
Online: 27 September 2017 (04:09:22 CEST)
This paper presents a new and enhanced fusion module for the Multi-Sensor Precipitation Estimator (MPE) that objectively blends real-time satellite quantitative precipitation estimates (SQPE) with radar and gauge estimates. The module consists of a preprocessor that mitigates systematic bias in SQPE, and a two-way blending routine that statistically fuses the adjusted SQPE with radar estimates. The preprocessor not only corrects systematic bias in SQPE but also improves the spatial distribution of precipitation so that it closely resembles radar-based observations. The module uses a more sophisticated radar-satellite merging technique to blend the preprocessed datasets and provides a better overall QPE product. The performance of the new satellite-radar-gauge blending module is assessed using independent rain gauge data over the 5-year period 2003-2007. The assessment compares the accuracy of the newly developed satellite-radar-gauge (SRG) blended products with that of radar-gauge products (which represent the MPE algorithm currently used in NWS operations) over two regions: I) inside the effective radar coverage and II) immediately outside it. The outcomes indicate that a) ingesting SQPE over areas within effective radar coverage improves the quality of QPE by mitigating the errors in radar estimates in region I; and b) blending radar, gauge, and satellite estimates over region II reduces errors relative to bias-corrected SQPE. In addition, the new module alleviates the discontinuities along the boundaries of radar effective coverage otherwise seen when SQPE is used directly to fill the areas outside effective radar coverage.
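Statistical fusion of two precipitation fields is often done by inverse-error-variance weighting; the sketch below illustrates this generic idea only, not the MPE module's exact two-way blending scheme:

```python
import numpy as np

def blend_qpe(radar, satellite, var_radar, var_sat):
    """Inverse-error-variance weighted blend of two precipitation
    fields: the lower-variance (more trusted) source gets the larger
    weight. A generic statistical-fusion sketch with assumed inputs."""
    w_r = 1.0 / var_radar
    w_s = 1.0 / var_sat
    return (w_r * radar + w_s * satellite) / (w_r + w_s)

radar = np.array([10.0, 12.0])   # mm, radar estimate at two grid cells
sat = np.array([14.0, 8.0])      # mm, bias-corrected satellite estimate
# trust radar three times more than satellite
print(blend_qpe(radar, sat, var_radar=1.0, var_sat=3.0))
```

Near the edge of radar coverage, the radar error variance grows, so a scheme like this naturally hands weight over to the satellite estimate, smoothing the coverage-boundary discontinuity the abstract mentions.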
REVIEW | doi:10.20944/preprints202007.0459.v1
Subject: Chemistry, Electrochemistry Keywords: current-potential curve; multi-enzymatic cascades; multi-analyte detection; mass-transfer-controlled amperometric response; potentiometric coulometry
Online: 20 July 2020 (08:16:47 CEST)
Bioelectrocatalysis provides the intrinsic catalytic-functions of redox enzymes to non-specific electrode reactions and is the most important and basic concept for biosensors. This review starts by describing fundamental characteristics of bioelectrocatalytic reactions in mediated and direct electron transfer types from a theoretical viewpoint and summarizes amperometric biosensors based on multi-enzymatic cascades and for multi-analyte detection. The review also introduces prospective aspects of two new concepts of biosensors: mass-transfer-controlled (pseudo)steady-state amperometry at microelectrodes with enhanced enzymatic activity without calibration curves and potentiometric coulometry at enzyme/mediator-immobilized biosensors for absolute determination.
ARTICLE | doi:10.20944/preprints201901.0236.v1
Subject: Earth Sciences, Geoinformatics Keywords: 3D models; multi-sensor; multi-scale; SLAM; MMS; LiDAR; UAV; data integration; data fusion; cultural heritage
Online: 23 January 2019 (10:08:42 CET)
This article proposes a multi-scale, multi-sensor approach to collecting and modelling 3D data over wide and complex areas in order to obtain a variety of metric information in a single 3D archive based on one coordinate system. The uses of these 3D georeferenced products are multifaceted, and the fusion or integration of data from different sensors, scales, and resolutions is promising: it could be useful for generating a model that could be defined as hybrid. The geometry, accuracy, radiometry, and weight of the data models are evaluated here by comparing integrated processes and results from Terrestrial Laser Scanning (TLS), a Mobile Mapping System (MMS), an Unmanned Aerial Vehicle (UAV), and terrestrial photogrammetry, using a Total Station (TS) and Global Navigation Satellite System (GNSS) for the topographic survey. The entire analysis underlines the potential of integrating and fusing different solutions and is a crucial part of the “Torino 1911” project, whose main purpose is mapping and virtually reconstructing the 1911 Great Exhibition held in the Valentino Park in Turin (Italy).
ARTICLE | doi:10.20944/preprints202201.0413.v1
Subject: Engineering, Mechanical Engineering Keywords: Burr; deburring; abrasive flow machining; objective function; intersecting holes with offset; ANSYS Fluent; prediction of deburring performance
Online: 27 January 2022 (11:09:30 CET)
Burrs form due to plastic deformation of materials during machining processes such as milling and drilling. Deburring can be very difficult when the burrs are not easily accessible for removal. In this study, abrasive flow machining (AFM) was adopted for deburring the edges of milling specimens. Based on experimental observations on AL6061 specimens, the deburring performance was characterized in terms of flow speed, the local curvature of the streamline near the burr edge, and shear stress. A new objective function that can predict the extent of deburring is proposed based on these characteristics and validated through experiments. Using this objective function, the deburring performance on the burr edges of intersecting holes with offset was predicted. The predicted and experimentally observed results were in reasonable agreement.
COMMUNICATION | doi:10.20944/preprints202007.0709.v1
Subject: Biology, Other Keywords: intrinsic multi-drug resistance; acquired multi-drug resistance; circulating tumor cells; single cells; cell clusters; cell monolayer; multi-cellular spheroids; cytometry of reaction rate constant; ovarian cancer
Online: 30 July 2020 (09:01:50 CEST)
Does cell clustering influence intrinsic and acquired multi-drug resistance (MDR) differently? To address this question, we studied cultured monolayers (representing individual cells) and cultured spheroids (representing clusters) formed by drug-naïve (intrinsic MDR) and drug-exposed (acquired MDR) lines of ovarian cancer A2780 cells by cytometry of reaction rate constant (CRRC). MDR efflux was characterized by accurate and robust “cell number vs. MDR efflux rate constant (kMDR)” histograms. Both drug-naïve and drug-exposed monolayer cells presented unimodal histograms; the histogram of drug-exposed cells was shifted towards higher kMDR value suggesting greater MDR activity. Spheroids of drug-naïve cells presented a bimodal histogram indicating the presence of two subpopulations with different MDR activity. In contrast, spheroids of drug-exposed cells presented a unimodal histogram qualitatively similar to that of the monolayers of drug-exposed cells but with a moderate shift towards greater MDR activity. The observed greater effect of cell clustering on intrinsic than on acquired MDR can help guide the development of new therapeutic strategies targeting clusters of circulating tumor cells.
ARTICLE | doi:10.20944/preprints202008.0706.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Euler polynomials; Degenerate multi-polyexponential functions; Degenerate multi-poly-Euler polynomials; Degenerate Stirling numbers; Degenerate Whitney numbers
Online: 31 August 2020 (05:15:55 CEST)
In this paper, we consider a new class of polynomials called the multi-poly-Euler polynomials and investigate some of their properties and relations. We show that the type 2 degenerate multi-poly-Euler polynomials equal a linear combination of the degenerate Euler polynomials of higher order and the degenerate Stirling numbers of the first kind. Moreover, we provide an addition formula and a derivative formula. Furthermore, in a special case, we obtain a correlation between the type 2 degenerate multi-poly-Euler polynomials and the degenerate Whitney numbers.
ARTICLE | doi:10.20944/preprints202008.0057.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Bernoulli polynomials; Degenerate multi-polyexponential functions; Degenerate multi-poly-Bernoulli polynomials; Degenerate Stirling numbers; Degenerate Whitney numbers
Online: 3 August 2020 (00:28:39 CEST)
Inspired by the definition of the degenerate multi-poly-Genocchi polynomials, given in terms of the degenerate multi-polyexponential functions, in this paper we consider a new generating function for the degenerate multi-poly-Bernoulli polynomials, called the type 2 degenerate multi-poly-Bernoulli polynomials, by means of the degenerate multiple polyexponential functions. We then investigate some of their properties and relations. We show that the type 2 degenerate multi-poly-Bernoulli polynomials equal a linear combination of the weighted degenerate Bernoulli polynomials and the Stirling numbers of the first kind. Moreover, we provide an addition formula and a derivative formula. Furthermore, in a special case, we obtain a correlation between the type 2 degenerate multi-poly-Bernoulli numbers and the degenerate Whitney numbers.
ARTICLE | doi:10.20944/preprints201806.0282.v1
Subject: Earth Sciences, Geoinformatics Keywords: land-use/land-cover; multi-decadal change analysis; irrigation ponds; textural features; supervised classification; multi-source data
Online: 18 June 2018 (16:40:31 CEST)
A multi-decadal change analysis of the irrigation ponds in Taoyuan, Taiwan was conducted by using multi-source data including digitized ancient maps, declassified single-band CORONA satellite images, and multispectral SPOT images. Supervised LULC classifications were conducted using four textural features derived from the single-band CORONA images and spectral features derived from SPOT images. Post-classification analysis revealed that the number of irrigation ponds in the study area decreased during the post-World War II farmland consolidation period (1945 – 1965) and the subsequent industrialization period (1970 – 2000). However, efforts on restoration of irrigation ponds in recent years have resulted in gradual increases in the number (9%) and total area (12%) of irrigation ponds in the study area.
ARTICLE | doi:10.20944/preprints202207.0308.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: micro-video classification; 3D CNN; multi-modal
Online: 21 July 2022 (03:09:34 CEST)
With the popularity of the Internet, people encounter micro-videos in more and more contexts, and a huge amount of micro-video data has emerged. Micro-videos have gradually become the Internet content preferred by the public, and a large number of micro-video apps, such as TikTok and Kwai, have appeared. Intelligent classification and mining of micro-videos can greatly enhance the user experience and improve business operation efficiency. Through deep intelligent analysis and mining, important information in micro-videos can be extracted to provide a basis for video beautification, content appreciation, video recommendation, content search, and more. In the past, content understanding for short videos often relied on manual annotation, but in recent years, following the great success of deep convolutional neural networks in image recognition, short-video content understanding based on such networks has gradually developed; deep-learning-based micro-video recognition has surpassed the iDT algorithm, and traditional methods have faded from view. Most current recognition algorithms extract the feature representation of each frame independently and then fuse them; however, some low-level semantic features are lost in the process, which prevents the algorithm from accurately distinguishing the category of a video. In this paper, for the micro-video classification task, a new network model is proposed that concatenates the features of each modality into overall per-modality features and then fuses the modal features with an attention mechanism to obtain a whole micro-video representation, which is used for classification. Experiments conducted on a public dataset verify the effectiveness of the proposed algorithm.
ARTICLE | doi:10.20944/preprints202202.0151.v1
Subject: Mathematics & Computer Science, Analysis Keywords: high-performance; heritable; multi-environments; credibility interval
Online: 10 February 2022 (11:14:21 CET)
The great challenge in breeding flood-irrigated rice is to identify superior genotypes that combine high yield with specific grain qualities, resistance to abiotic and biotic stresses, and excellent adaptation to the target environment. Thus, the objectives of this study were to propose a Bayesian multi-trait model, estimate genetic parameters, and select flood-irrigated rice genotypes with the best genetic potential across different evaluation environments. Twenty-five rice genotypes belonging to the flood-irrigated rice improvement program were evaluated. Grain yield, grain length, width, and thickness, and 100-grain weight were measured in the 2016/2017 agricultural year. The experimental design in all experiments was a randomized block design with three replications. A Markov chain Monte Carlo algorithm estimated the genetic parameters and genetic values. Grain thickness was considered highly heritable, with heritability estimates of h² = 0.9480, 0.9440, and 0.8610 in environments 1, 2, and 3, respectively. Correlation estimates of grain yield with grain thickness and with 100-grain weight were ρ = 0.5477, 0.5762, and 0.5618 and ρ = 0.5973, 0.5247, and 0.5632 in environments 1, 2, and 3, respectively. The Bayesian multi-trait model proved to be an adequate strategy for the genetic improvement of flood-irrigated rice. Genotypes 2 and 15 had similar potential in the three environments and should be selected as high-performance multi-trait genotypes for the program's flood-irrigated rice breeding.
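The credibility intervals reported above come from posterior draws produced by the MCMC algorithm. A minimal sketch of how an equal-tailed credibility interval is computed from such draws (the sampler itself is out of scope here):

```python
import numpy as np

def credibility_interval(samples, mass=0.95):
    """Equal-tailed Bayesian credibility interval from posterior draws.

    samples: 1-D array of MCMC draws of a parameter (e.g. heritability h^2).
    Returns (lower, upper) bounds containing `mass` posterior probability.
    """
    tail = (1.0 - mass) / 2.0
    return (float(np.quantile(samples, tail)),
            float(np.quantile(samples, 1.0 - tail)))
```

Applied to the chain of h² draws for each environment, this yields the per-environment interval endpoints quoted in the abstract.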
ARTICLE | doi:10.20944/preprints202103.0525.v1
Subject: Engineering, Control & Systems Engineering Keywords: Wildlife Monitoring; Multi-UAV System; Optimal Transport
Online: 22 March 2021 (11:59:55 CET)
This paper addresses the problem of efficiently monitoring wildlife with a team of UAVs. UAVs have become an increasingly popular tool for monitoring wildlife compared with traditional methods such as satellite imagery-based sensing or GPS trackers. However, unsolved problems remain regarding how the UAVs should cover a spacious domain to detect as many animals as possible. In this paper, we propose an optimal transport-based wildlife monitoring strategy for a multi-UAV system that prioritizes monitoring areas while incorporating complementary information such as GPS trackers and satellite-based sensing. Through the proposed scheme, the UAVs can explore a large domain effectively and collaboratively according to a given priority. The time-varying positions of the animals are modeled as a stochastic process, which the proposed work includes to reflect the spatio-temporal evolution of their position estimates. In this way, the proposed monitoring plan leads to efficient wildlife monitoring with a high detection rate. Various simulation results, including statistical data, are provided to validate the proposed work.
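The core of a transport-based assignment is matching UAVs to regions so that total priority-weighted travel cost is minimized. The toy below brute-forces the assignment for a handful of UAVs; it is only a stand-in for a real optimal transport solver, and the priority weighting is an illustrative choice, not the paper's formulation.

```python
import itertools
import numpy as np

def assign_uavs(uav_pos, region_pos, priority):
    """Toy transport-style assignment of UAVs to monitoring regions.

    Minimizes total priority-weighted travel distance by brute force
    (fine for a handful of UAVs; real optimal transport solvers scale
    far better). Dividing by `priority` makes high-priority regions
    cheaper to serve, so they are covered first.
    """
    n = len(uav_pos)
    cost = np.linalg.norm(uav_pos[:, None, :] - region_pos[None, :, :], axis=2)
    cost = cost / priority[None, :]   # high priority -> lower effective cost
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        c = sum(cost[i, perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return list(best_perm)            # region index assigned to each UAV
```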
REVIEW | doi:10.20944/preprints202101.0521.v1
Subject: Life Sciences, Molecular Biology Keywords: Data integration; multi-omics; integration strategies; genomics
Online: 25 January 2021 (16:19:31 CET)
Metabolomics deals with the multiple and complex chemical reactions within living organisms and how these are influenced by external or internal perturbations. It lies at the heart of omics profiling technologies, not only as the underlying biochemical layer that reflects information expressed by the genome, the transcriptome, and the proteome, but also as the layer closest to the phenome. The combination of metabolomics data with the information available from genomics, transcriptomics, and proteomics offers unprecedented possibilities to enhance current understanding of biological functions, elucidate their underlying mechanisms, and uncover hidden associations between omics variables. As a result, a vast array of computational tools has been developed to assist with the integrative analysis of metabolomics data with different omics. Here, we review and propose five criteria – hypothesis, data types, strategies, study design, and study focus – to classify statistical multi-omics data integration approaches into state-of-the-art classes under which all existing statistical methods fall. The purpose of this review is to examine the various aspects that guide the choice of the statistical integrative analysis pipeline in terms of these classes. We draw particular attention to metabolomics and genomics data to assist those new to this field in choosing an integrative analysis pipeline.
REVIEW | doi:10.20944/preprints201911.0385.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: autonomous robots; multi-robot systems; teamwork; coordination
Online: 30 November 2019 (09:47:30 CET)
The increasing number of robots around us will soon create a demand for connecting these robots in order to achieve goal-driven teamwork in heterogeneous multi-robot systems. In this paper, we focus on robot teamwork specifically in dynamic environments. While the conceptual modeling of multi-agent teamwork has been studied extensively during the last two decades, related engineering concerns have not received the same degree of attention. Therefore, this paper makes two contributions. The analysis part discusses general design challenges that apply to robot teamwork in dynamic application domains. The constructive part presents a review of existing engineering approaches for challenges that arise with dynamically changing runtime conditions. An exhaustive survey of robot teamwork aspects would be beyond the scope of this paper. Instead, we aim at creating awareness for the manifold dimensions of the design space and highlight state-of-the-art technical solutions for dynamically adaptive teamwork, thus pointing at open research questions that need to be tackled in future work.
ARTICLE | doi:10.20944/preprints201907.0067.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: multi Server; remote user; mutual authentication; attack
Online: 3 July 2019 (12:08:40 CEST)
Traditionally, the electric grid developed as a one-way system in which electricity flows from generators to users at the far end. However, this one-way process is not consumer-centric, because consumers have no channel to communicate back to the utility. With the digital revolution, the grid has evolved into the smart grid and the meter into the smart meter. In the smart grid, the protocol follows a bidirectional mode of communication that involves consumers in the system. In 2016, Jo et al. proposed a privacy-preserving scheme for the smart grid system and claimed it to be efficient and secure. In this paper, we analyze the scheme of Jo et al., show that it is vulnerable to a replay attack, and then present a change to the protocol that withstands this attack.
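Replay resistance is conventionally obtained by binding each message to a fresh timestamp and nonce under a MAC. The sketch below illustrates that standard countermeasure only; it is not Jo et al.'s protocol nor the fix proposed in the paper, and the key and field names are invented for the example.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"   # illustrative pre-shared key, not a real credential
FRESHNESS_WINDOW = 30      # seconds a message stays valid
_seen_nonces = set()

def make_message(payload, nonce, ts):
    """Tag a smart-meter payload with a timestamp, nonce, and MAC."""
    mac = hmac.new(SHARED_KEY, f"{payload}|{ts}|{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "ts": ts, "nonce": nonce, "mac": mac}

def verify(msg, now):
    """Reject bad MACs, stale timestamps, and repeated nonces."""
    expected = hmac.new(SHARED_KEY,
                        f'{msg["payload"]}|{msg["ts"]}|{msg["nonce"]}'.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["mac"]):
        return False                      # tampered or forged
    if abs(now - msg["ts"]) > FRESHNESS_WINDOW:
        return False                      # too old: possible replay
    if msg["nonce"] in _seen_nonces:
        return False                      # nonce reuse: definite replay
    _seen_nonces.add(msg["nonce"])
    return True
```

Replaying a captured message fails the nonce check even within the freshness window.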
ARTICLE | doi:10.20944/preprints201904.0091.v4
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Preference learning; Multi-label ranking; Neural network; Kendall’s tau; Preference mining
Online: 24 December 2021 (16:08:06 CET)
Multi-label ranking with equality and incomparability has not previously been introduced to learning. This paper proposes a new native ranking neural network to address the problem of multi-label ranking, including incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem in which labels may have indifference preference orders or subgroups that are equally ranked. PNN is a non-deep network with multiple-value neurons, a single middle layer, and one or more output layers. PNN uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function and uses preference orders and the Spearman ranking correlation as objective functions. It is introduced in two types: Type A follows a traditional NN architecture, while Type B uses an expanding architecture that introduces a new type of hidden neuron with multiple activation functions in the middle layer and duplicated output layers, reinforcing the ranking by increasing the number of weights. PNN accepts a single data instance as input; the output neurons correspond to the labels, and each output value represents a preference value. PNN is evaluated using a new preference-mining dataset that contains repeated label values, which has not been experimented on before. PSS and SS speed up the learning, and PNN outperforms five previously proposed methods for strict label ranking in terms of accuracy with high computational efficiency.
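A smooth staircase activation can be built as a sum of steep sigmoids, so outputs cluster near integer preference values. The sketch below is an illustrative reconstruction of that idea, assuming unit steps at 0.5, 1.5, ...; the exact PSS/SS functions used by PNN may differ.

```python
import math

def positive_smooth_staircase(x, steps=3, steepness=25.0):
    """Sum-of-sigmoids approximation of a positive smooth staircase (PSS).

    Rises from 0 to `steps` in near-unit jumps at x = 0.5, 1.5, ...,
    so outputs concentrate near the integer preference values 0..steps,
    which is what makes the activation suitable for ranking outputs.
    """
    return sum(1.0 / (1.0 + math.exp(-steepness * (x - (k + 0.5))))
               for k in range(steps))
```

For example, inputs near 1.0 map to outputs near 1.0, and large inputs saturate at `steps`.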
ARTICLE | doi:10.20944/preprints201812.0350.v1
Online: 28 December 2018 (15:55:25 CET)
The concept of transit-oriented development (TOD) has been widely recognized in recent years for its role in reducing car traffic, improving public transportation, and enhancing traffic sustainability. This paper conducts empirical research on a developed rail transit network, using Shanghai as a case study. In addition to traditional TOD features, other factors based on urban rail transit are introduced, including multi-level modeling (MLM), which is used to analyze the possible factors influencing rail patronage. To avoid the bias of research results led by the correlation between independent variables, factors are divided into two levels. The first level includes three groups of variables: the built environment, station characteristics, and socioeconomic and demographic characteristics. The second level includes a set of variables which are regional characteristics. Results show that the most significant impact on train patronage is station location in the business district area. Other factors that have a positive effect on promoting rail transit travel include the number of service facilities around the station, degree of employment around the station, economic level, intensity of residential development, if the station is a transfer station, the operating period of the station, and the size of the large transportation hub around the station.
ARTICLE | doi:10.20944/preprints201810.0107.v1
Subject: Earth Sciences, Geophysics Keywords: multibeam echosounder; backscatter; multi-frequency; machine-learning
Online: 5 October 2018 (16:09:53 CEST)
We propose a probabilistic graphical model for discriminative substrate characterization, to support geological and biological habitat mapping in aquatic environments. The model, called a fully connected conditional random field (CRF), is demonstrated using multispectral and monospectral acoustic backscatter from heterogeneous seafloors in Patricia Bay, British Columbia, and Bedford Basin, Nova Scotia. Unlike previously proposed discriminative machine learning algorithms, the CRF model considers both the relative backscatter magnitudes of different substrates and their relative proximities. The model therefore combines the statistical flexibility of a machine learning algorithm with an inherently spatial treatment of the substrate. The CRF model predicts substrates such that nearby locations with similar backscattering characteristics are likely to be in the same substrate class. The degree of proximity and the allowable backscatter similarity are controlled by parameters that are learned from the data. CRF model results were evaluated against a popular generative model known as a Gaussian mixture model (GMM), which does not include spatial dependencies, only covariance between substrate backscattering responses over different frequencies. Both models are used in conjunction with sparse bed observations/samples in a supervised classification. A detailed accuracy assessment, including a leave-one-out cross-validation analysis, was performed using both models. Using multispectral backscatter, the GMM trained on 50% of the bed observations produced average accuracies of 75% and 89% in Patricia Bay and Bedford Basin, respectively. The same metrics for the CRF model were 78% and 95%. Further, the CRF model achieved a 91% mean cross-validation accuracy across four substrate classes at Patricia Bay and a 99.5% mean accuracy across three substrate classes at Bedford Basin, which suggests that the CRF model generalizes extremely well to new data.
This analysis also showed that the CRF model was much less sensitive to the specific number and locations of bed observations than the generative model, owing to its ability to incorporate spatial autocorrelation in substrates. The CRF approach therefore may prove to be a powerful 'spatially aware' alternative to other discriminative classifiers.
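The generative baseline rests on fitting a Gaussian (mean plus frequency covariance) per substrate class and classifying by likelihood. The toy below illustrates that per-class Gaussian idea with synthetic data; it is not the paper's GMM implementation, and class labels and dimensions are invented.

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Fit a per-class multivariate Gaussian (mean + covariance) to
    multi-frequency backscatter samples X with substrate labels y."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify(params, x):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    def loglik(mu, prec, logdet):
        d = x - mu
        return -0.5 * (d @ prec @ d + logdet)
    return max(params, key=lambda c: loglik(*params[c]))
```

Note this classifier treats each location independently; the CRF's advantage is precisely the spatial coupling this sketch lacks.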
ARTICLE | doi:10.20944/preprints201704.0174.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Hierarchical search; Image retrieval; Multi-feature fusion
Online: 26 April 2017 (18:51:42 CEST)
To address the poor generalization, low retrieval accuracy, and high time consumption of existing content-based image retrieval systems, this paper proposes a hierarchical image retrieval method based on multi-feature fusion. The retrieval accuracies on Corel5K, UKBench, and Holidays are 68.23 (Top-1), 3.73 (N-S), and 88.20 (mAP), respectively. The experimental results show that the proposed method can effectively remedy the deficiency of single-feature retrieval and save significant time at the cost of a small loss of accuracy.
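Multi-feature fusion for retrieval typically ranks database images by a weighted combination of per-feature distances. The sketch below shows that generic pattern; the feature names, weights, and normalization are illustrative, not the paper's exact fusion scheme.

```python
import numpy as np

def fused_retrieval(query_feats, db_feats, weights, top_k=3):
    """Rank database images by a weighted sum of per-feature distances.

    query_feats / db_feats: dict feature_name -> array; each db array
    has one row per image. `weights` balances the features (e.g. color
    vs. texture). Each feature's distances are rescaled to [0, 1] so no
    single feature dominates by raw magnitude.
    """
    n = next(iter(db_feats.values())).shape[0]
    dist = np.zeros(n)
    for name, w in weights.items():
        d = np.linalg.norm(db_feats[name] - query_feats[name], axis=1)
        dist += w * d / (d.max() + 1e-12)
    return np.argsort(dist)[:top_k]         # indices of best matches
```

A hierarchical variant would first filter candidates with a cheap feature, then re-rank the survivors with the fused distance.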
ARTICLE | doi:10.20944/preprints201703.0160.v1
Subject: Earth Sciences, Environmental Sciences Keywords: visibility; PM; MLH; multi-cities; northeast China
Online: 20 March 2017 (11:57:10 CET)
The variations of visibility, PM mass concentration, and mixing layer height (MLH) at four major urban-industrial regions (Shenyang, Anshan, Benxi, and Fushun) among the multi-cities of central Liaoning in northeast China were evaluated from 2009 to 2012 to characterize the dynamic effects on air pollution. The annual mean visibilities were about 13.7±7.8 km, 13.5±6.5 km, 12.8±6.1 km, and 11.5±6.8 km in Shenyang, Anshan, Benxi, and Fushun, respectively. The pollution load (PM×MLH) showed weaker vertical diffusion in Anshan, with a higher PM concentration near the surface. High concentrations of fine-mode particles may be partially attributed to biomass-burning emissions from September onward in Liaoning Province and the surrounding regions of northeast China, as well as to coal burning during the heating period, when the MLH is lower in winter. Increasing wind speed affected aerosol vertical diffusion in a manner similar to an increasing mixing layer height. Visibility on non-haze-fog days was about 2.5-3.0 times higher than on haze and fog days. The fine-particle concentrations of PM2.5 and PM1.0 on haze and fog days were ~1.8-1.9 times and ~1.5 times higher, respectively, than on non-haze-fog days. The MLH during fog pollution showed a stronger declining trend than during haze pollution compared with non-haze-fog days. The results of this study provide useful information for better recognizing the effects of vertical pollutant diffusion on air quality in the multi-cities of central Liaoning in northeast China.
ARTICLE | doi:10.20944/preprints201702.0060.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: microgrid; multi-microgrid; measured admittance; protection scheme
Online: 16 February 2017 (09:17:26 CET)
Multi-microgrids have many new characteristics, such as bi-directional power flows, flexible operation modes, and fault currents that vary with the control strategies of inverter-interfaced distributed generations (IIDGs). All of these features pose challenges to multi-microgrid protection. In this paper, the current and voltage characteristics of different feeders are analyzed for faults at different positions in a multi-microgrid. Based on the voltage and current distribution characteristics of the line parameters, a new protection scheme for internal faults of a multi-microgrid is proposed, which takes the change in phase difference and amplitude of the measured bus admittance as the criterion. The scheme offers high sensitivity and reliability, has a simple principle, and is easy to adjust. PSCAD/EMTDC is used for simulation analysis, and the simulation results verify the correctness and effectiveness of the protection scheme.
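The measured-admittance criterion amounts to computing Y = I/V from the relay's voltage and current phasors and flagging a fault when its amplitude or phase departs from normal operation. The sketch below illustrates that idea; the thresholds are invented for the example, not taken from the paper's settings.

```python
import cmath

def measured_admittance(v_phasor, i_phasor):
    """Bus admittance as seen by the relay: Y = I / V (complex)."""
    return i_phasor / v_phasor

def fault_detected(y_normal, y_now, mag_ratio=2.0, phase_shift=0.5):
    """Flag an internal fault when the measured admittance changes
    markedly in amplitude or phase relative to normal operation.
    Thresholds (ratio 2.0, 0.5 rad) are illustrative only."""
    ratio = abs(y_now) / abs(y_normal)
    dphi = abs(cmath.phase(y_now) - cmath.phase(y_normal))
    return ratio > mag_ratio or dphi > phase_shift
```

An internal fault collapses the bus voltage while the fault current rises, so the admittance magnitude jumps sharply, which is what the criterion detects.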
ARTICLE | doi:10.20944/preprints202208.0216.v1
Subject: Earth Sciences, Geoinformatics Keywords: block cokriging; clay composition; granulometry; multi-collocated cokriging; multi-collocated factorial cokriging; regularization; SIDSAM; VIS-NIR-SWIR spectroscopy
Online: 11 August 2022 (11:30:23 CEST)
Traditional soil characterization methods are time-consuming, laborious, and invasive and do not allow long-term repeatability of measurements. The overall aim of this paper was to assess and model the spatial variability of the soil in an olive grove in southern Italy using data from two sensors of different types (a multi-spectral on-board drone radiometer and a hyperspectral visible-near infrared-shortwave infrared (VIS-NIR-SWIR) reflectance radiometer), together with sample data, to arrive at a delineation of homogeneous areas. The hyperspectral data were processed using the continuum removal methodology to obtain information about the content and composition of clay. In contrast, the multispectral data were first upscaled to the support of the soil data using geostatistics, taking change of support into account. Second, the data from the two sensors were integrated with soil granulometric properties using the multivariate geostatistical techniques of multi-collocated cokriging and factorial cokriging, in order to achieve a more exhaustive and finer-scale soil characterisation. The paper shows the impact of change of support on the uncertainty of soil prediction, which can have a significant effect on decision making in Precision Agriculture. Moreover, four regionalised factors at two different scales (two per scale) were retained and mapped. Each factor provided a different delineation of the field into areas characterised by different granulometry and clay composition. The applied method is sufficiently flexible that it could be applied to any number and type of sensors.
REVIEW | doi:10.20944/preprints202202.0048.v1
Subject: Life Sciences, Genetics Keywords: Plant Breeding; Speed Breeding; Training Population; Field Design; Multi-Environment; Multi-Trait; Deep Learning; High-Throughput Phenotyping; Genetic Gain
Online: 3 February 2022 (10:41:44 CET)
Plant geneticists and breeders have used marker technology since the 1980s for quantitative trait locus (QTL) identification. Marker-assisted selection is effective for large-effect QTL but has been challenging to use with quantitative traits controlled by multiple minor-effect alleles. Therefore, genomic selection (GS) was proposed to estimate all markers simultaneously, thereby capturing all their effects. However, breeding programs are still struggling to identify the best strategy for implementing it, and traditional breeding programs need to be optimized to implement GS effectively. This review explores the optimization of breeding programs for variety release based on aspects of the breeder's equation. Optimizations include reorganizing field designs, training populations, increasing the number of lines evaluated, and leveraging the large amounts of genomic and phenotypic data collected across different growing seasons and environments to increase heritability estimates, selection intensity, and selection accuracy. Breeding programs can leverage their phenotypic and genotypic data to maximize genetic gain and selection accuracy through GS methods utilizing multi-trait and multi-environment models, high-throughput phenotyping, and deep learning approaches. Overall, this review describes various methods that plant breeders can utilize to increase genetic gain and effectively implement GS in breeding programs.
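The breeder's equation underlying these optimizations can be written per unit time as ΔG = (i · r · σ_a) / L, where i is selection intensity, r is selection accuracy, σ_a is the additive genetic standard deviation, and L is cycle length in years. A minimal worked sketch (example values are illustrative):

```python
def genetic_gain(selection_intensity, accuracy, genetic_sd, cycle_years):
    """Expected genetic gain per year from the breeder's equation,
    dG = (i * r * sigma_a) / L. GS-driven optimizations target each
    term: raise accuracy r, raise intensity i, shorten the cycle L."""
    return selection_intensity * accuracy * genetic_sd / cycle_years
```

For example, with i = 2.06, r = 0.5, and σ_a = 1.0, halving the cycle length from four years to two doubles the annual gain, which is why speed breeding pairs naturally with GS.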