ARTICLE | doi:10.20944/preprints202102.0260.v3
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Feature Selection; Discrete Data; Heuristics; Running average
Online: 7 December 2021 (11:28:35 CET)
By applying a running average with window size d, discrete data can be transformed into broad-range, continuous values. When a dataset has more than two columns and one of them contains classification labels (the class column), we can compare and rank the features (the non-class columns) by the R2 coefficient of a regression fitted to the running averages. Parameter tuning helps select the best features, i.e., the non-class columns with the strongest correlation with the class column; both the window size and the row ordering can be tuned to this end. This optimization problem is hard, so an algorithm (or heuristic) is needed to simplify the tuning. We present a novel heuristic, called Simulated Distillation (SimulaD), which achieves good results on this optimization problem.
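As a rough illustration of the scoring step described above (a minimal sketch, not the paper's code: sorting rows by the feature itself and regressing the smoothed class column against the smoothed feature are assumptions, since the abstract does not fix these details):

```python
def running_average(xs, d):
    """Moving average with window size d."""
    return [sum(xs[i:i + d]) / d for i in range(len(xs) - d + 1)]

def r2_linear(x, y):
    """R^2 of a simple least-squares line y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    if sxx == 0 or syy == 0:
        return 0.0
    return sxy * sxy / (sxx * syy)

def score_feature(feature, labels, d):
    """Sort rows by the feature, smooth both columns, then correlate."""
    order = sorted(range(len(feature)), key=lambda i: feature[i])
    f = running_average([feature[i] for i in order], d)
    c = running_average([labels[i] for i in order], d)
    return r2_linear(f, c)
```

A feature whose ordering separates the classes yields smoothed labels that ramp monotonically and hence a high R2; a feature unrelated to the labels yields a score near zero.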
ARTICLE | doi:10.20944/preprints202012.0721.v1
Subject: Earth Sciences, Geoinformatics Keywords: Remote sensing; Global discrete grid; Accuracy evaluation; Hexagon grid
Online: 29 December 2020 (09:19:49 CET)
With the rapid development of earth observation, satellite navigation, mobile communication and other technologies, the order of magnitude of the spatial data we acquire and accumulate keeps increasing, and higher requirements are being placed on the application and storage of spatial data. Under these circumstances, a new form of spatial data organization has emerged: the global discrete grid. This form of data management can be used for the efficient storage and application of large-scale global spatial data; it is a digital, multi-resolution geo-reference model that helps to establish a new mode of data association and fusion, and it is expected to make up for the shortcomings in the organization, processing and application of current spatial data. Grid systems can be classified by their division form into global discrete grids with equal latitude and longitude, global discrete grids with variable latitude and longitude, and global discrete grids based on regular polyhedrons. However, there is as yet no accuracy evaluation index system for remote sensing images expressed on a global discrete grid. This paper is dedicated to finding a suitable way to express remote sensing data on discrete grids, and to establishing an accuracy evaluation system for modeling remote sensing data based on hexagonal grids. The results show that this accuracy evaluation method can evaluate and analyze hexagonal-grid-based remote sensing data at multiple levels; the comprehensive similarity coefficient of the images before and after conversion is greater than 98%, which further demonstrates the usability of hexagonal-grid-based representations of remote sensing images. Among the three sampling methods, the image obtained by nearest-neighbor interpolation has the highest correlation with the original image.
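The abstract reports that nearest-neighbor interpolation preserved the original image best among the three sampling methods. A minimal sketch of nearest-neighbor resampling (illustrative only; the hexagonal cell centers are abstracted here into arbitrary fractional (row, col) sample points, which is a simplifying assumption):

```python
def nearest_resample(img, points):
    """Sample a 2D raster at fractional (row, col) points via nearest neighbor."""
    h, w = len(img), len(img[0])
    out = []
    for r, c in points:
        # round to the nearest pixel index, clamped to the raster bounds
        i = min(max(int(round(r)), 0), h - 1)
        j = min(max(int(round(c)), 0), w - 1)
        out.append(img[i][j])
    return out
```

In a grid-conversion pipeline, `points` would be the back-projected centers of the hexagonal cells in the source image's pixel coordinates.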
ARTICLE | doi:10.20944/preprints202208.0019.v1
Subject: Engineering, Civil Engineering Keywords: discrete choice modeling; mode choice; travel behavior; city tourism; sustainable tourism; revealed preference data
Online: 1 August 2022 (10:13:39 CEST)
With growing city tourism, there is an increasing need for urban travel demand models to consider traffic generated by visitors. Existing research has concentrated on socio-demographic and journey-related factors to determine what influences the mode choice of tourists. In contrast, revealed preference data, such as travel time, is almost never considered. In this article, we present the results of discrete choice modeling of city tourists’ mode choice based on revealed preference data from a survey we conducted in Kassel, Germany. We used multinomial logit models and determined the model parameters using maximum likelihood estimation. Surprisingly, travel time played a smaller role in mode choice than established knowledge about everyday mobility suggests. In the final model, travel time was significant only for the walking alternative. Most other socio-demographic and journey-related variables also showed no significant influence. The final model reproduced the mode choice, but the goodness of fit was lower than expected from other research. We conclude that modeling the travel behavior of tourists is more complex than modeling everyday mobility. An alternative approach that we suggest is to model trip chains rather than single trips.
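The multinomial logit machinery mentioned above can be sketched as follows (illustrative only; in practice the utilities come from estimated coefficients multiplied by attributes such as travel time, and the estimation maximizes the log-likelihood over those coefficients):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def log_likelihood(choice_sets, chosen):
    """Sum of log choice probabilities over observations."""
    ll = 0.0
    for utils, k in zip(choice_sets, chosen):
        ll += math.log(mnl_probabilities(utils)[k])
    return ll
```

Maximum likelihood estimation then searches for the utility coefficients that make `log_likelihood` as large as possible over the observed choices.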
ARTICLE | doi:10.20944/preprints201709.0003.v1
Subject: Keywords: business workflows; discrete event systems; event logs; configurable process models; configurable process trees; process mining; business processes
Online: 1 September 2017 (17:14:50 CEST)
Configurable process models are frequently used to represent business workflows and other discrete event systems across different branches of large organizations: they unify commonalities shared by all branches and, at the same time, describe their differences. The configuration of such models is usually done manually, which is challenging. On the one hand, as the number of configurable nodes in the configurable process model grows, the size of the search space increases exponentially. On the other hand, the person performing the configuration may lack the holistic perspective to make the right choice for all configurable nodes at once, since choices influence each other. Nowadays, information systems that support the execution of business processes create event data reflecting how processes are performed. In this article, we propose three strategies (based on exhaustive search, genetic algorithms, and a greedy heuristic) that use event data to automatically derive, from a configurable process model, a process model that better represents the characteristics of the process in a specific branch. These strategies have been implemented in our proposed framework and tested on both business-like event logs recorded in a higher-education ERP system and a real case scenario involving a set of Dutch municipalities.
ARTICLE | doi:10.20944/preprints202110.0204.v2
Subject: Mathematics & Computer Science, Analysis Keywords: Discrete; Double Hilbert transform; Circle method; exponential sums; discrete double Hilbert transform; discrete double exponential sums; Newton diagram
Online: 17 November 2021 (11:06:10 CET)
Discrete double Hilbert exponential sums along polynomials. This is a preprint version (not peer reviewed), in which we check whether the proof of (ii) of Lemma 3.2 is really true.
ARTICLE | doi:10.20944/preprints201710.0097.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: HEVC; Interpolation filter; Sinc; DCT (discrete cosine transform); DST (discrete sine transform)
Online: 14 October 2017 (13:53:17 CEST)
High Efficiency Video Coding (HEVC) uses an 8-point filter and a 7-point filter, which are based on the discrete cosine transform (DCT), for the 1/2-pixel and 1/4-pixel interpolations, respectively. In this paper, discrete sine transform (DST)-based interpolation filters (IF) are proposed. The first proposed DST-based IFs (DST-IFs) use 8-point and 7-point filters for the 1/2-pixel and 1/4-pixel interpolations, respectively. The final proposed DST-IFs use 12-point and 11-point filters for the 1/2-pixel and 1/4-pixel interpolations, respectively. These DST-IF methods are proposed to improve the motion-compensated prediction in HEVC. The 8-point and 7-point DST-IF methods showed average BD-rate reductions of 0.7% and 0.3% in the random access (RA) and low delay B (LDB) configurations, respectively. The 12-point and 11-point DST-IF methods showed average BD-rate reductions of 1.4% and 1.2% in the RA and LDB configurations for the Luma component, respectively.
ARTICLE | doi:10.20944/preprints201907.0270.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: DISCRETE EVENT, SIMULATION, ROUTING BEHAVIOR
Online: 24 July 2019 (10:47:42 CEST)
Several factors influence traffic congestion and overall traffic dynamics. Simulation modeling has been used to understand traffic performance parameters during congestion. This paper focuses on driver route-selection behavior by differentiating three distinguishable decisions: shortest-distance routing, shortest-time routing and less-crowded-road routing. This research generated 864 different scenarios to capture various traffic dynamics under collective route-selection behavior. Factors such as vehicle arrival rate, behavior at the system boundary and traffic light phasing were considered. The simulation results revealed that the shortest-time routing scenario offered the best solution considering all forms of interaction among the factors. Overall, this routing behavior reduces traffic wait time and total time (by 69.5% and 65.72%, respectively) compared to shortest-distance routing.
ARTICLE | doi:10.20944/preprints201910.0145.v1
Subject: Biology, Forestry Keywords: clumping index; crown architecture; crown projection area; lidar-based crown metrics; discrete-return lidar; fire severity; leaf area density; post-fire effects
Online: 13 October 2019 (15:34:43 CEST)
Fire-tolerant eucalypt forests of south eastern Australia are assumed to fully recover from even the most intense fires but surprisingly very few studies have quantitatively assessed that recovery. Accurate assessment of horizontal and vertical attributes of tree crowns after fire is essential to understand the fire’s legacy effects on tree growth and on forest structure. In this study, we quantitatively assessed individual tree crowns 8.5 years after a 2009 wildfire that burnt extensive areas of eucalypt forest in temperate Australia. We used airborne lidar data validated with field measurements to estimate multiple metrics that quantified the cover, density, and vertical distribution of individual-tree crowns in 51 plots of 0.05 ha in fire-tolerant eucalypt forest across four wildfire severity types (unburnt, low, moderate, high). Significant differences in the field-assessed mean height of fire scarring as a proportion of tree height, and in the proportions of trees with epicormic (stem) resprouts were consistent with the gradation in fire severity. Linear mixed-effects models indicated persistent effects of both moderate- and high-severity wildfire on tree crown architecture. Trees at high-severity sites had significantly less crown projection area and live crown width as a proportion of total crown width than those at unburnt and low-severity sites. Significant differences in lidar-based metrics (crown cover, evenness, leaf area density profiles) indicated that tree crowns at moderate- and high-severity sites were comparatively narrow and more evenly distributed down the tree stem. These conical-shaped crowns contrasted sharply with the rounded crowns of trees at unburnt and low-severity sites, and likely influenced both tree productivity and the accuracy of biomass allometric equations for nearly a decade after the fire. 
Our data provide a clear example of the utility of airborne lidar data for quantifying the impacts of disturbances at the scale of individual trees. Quantified effects of contrasting fire severities on the structure of resprouter tree crowns provide a strong basis for interpreting post-fire patterns in forest canopies and vegetation profiles in lidar and other remotely-sensed data at larger scales.
ARTICLE | doi:10.20944/preprints201712.0173.v2
Subject: Mathematics & Computer Science, Analysis Keywords: Fourier Transform; Discrete-Time Fourier Transform (DTFT); Discrete Fourier Transform (DFT); Fourier Series; Poisson Summation Formula; discretization; periodization; discrete function; periodic function; periodization trick
Online: 23 May 2018 (07:42:15 CEST)
Four Fourier transforms are usually defined: the integral Fourier transform, the Discrete-Time Fourier Transform (DTFT), the Discrete Fourier Transform (DFT) and the integral Fourier transform for periodic functions. However, starting from their definitions, we show that all four can in fact be reduced to only one Fourier transform: the Fourier transform in the distributional sense.
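One standard instance of this reduction, stated here for illustration: sampling a function f on the integers and taking the Fourier transform in the distributional sense yields exactly the DTFT of the sample sequence,

```latex
\mathcal{F}\Big\{\sum_{k\in\mathbb{Z}} f(k)\,\delta(t-k)\Big\}(\omega)
  \;=\; \sum_{k\in\mathbb{Z}} f(k)\,e^{-i\omega k},
```

so the DTFT appears as a special case of the distributional Fourier transform applied to a weighted Dirac comb.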
ARTICLE | doi:10.20944/preprints201907.0189.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Fourier Theory; DFT in polar coordinates; polar coordinates; multidimensional DFT; discrete Hankel transform; discrete Fourier transform; orthogonality
Online: 16 July 2019 (08:00:00 CEST)
The theory of the continuous two-dimensional (2D) Fourier Transform in polar coordinates has been recently developed but no discrete counterpart exists to date. In the first part of this two-paper series, we proposed and evaluated the theory of the 2D discrete Fourier Transform (DFT) in polar coordinates. The theory of the actual manipulated quantities was shown, including the standard set of shift, modulation, multiplication, and convolution rules. In this second part of the series, we address the computational aspects of the 2D DFT in polar coordinates. Specifically, we demonstrate how the decomposition of the 2D DFT as a DFT, Discrete Hankel Transform (DHT) and inverse DFT sequence can be exploited for efficient code. We also demonstrate how the proposed 2D DFT can be used to approximate the continuous forward and inverse Fourier transform in polar coordinates in the same manner that the 1D DFT can be used to approximate its continuous counterpart.
ARTICLE | doi:10.20944/preprints201907.0151.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: fourier theory; DFT in polar coordinates; polar coordinates; multidimensional DFT; discrete hankel transform; discrete fourier transform; orthogonality
Online: 11 July 2019 (05:09:11 CEST)
The theory of the continuous two-dimensional (2D) Fourier Transform in polar coordinates has been recently developed but no discrete counterpart exists to date. In this paper, we propose and evaluate the theory of the 2D discrete Fourier Transform (DFT) in polar coordinates. This discrete theory is shown to arise from discretization schemes that have been previously employed with the 1D DFT and the discrete Hankel Transform (DHT). The proposed transform possesses orthogonality properties, which leads to invertibility of the transform. In the first part of this two-part paper, the theory of the actual manipulated quantities is shown, including the standard set of shift, modulation, multiplication, and convolution rules. Parseval and modified Parseval relationships are shown, depending on which choice of kernel is used. Similar to its continuous counterpart, the 2D DFT in polar coordinates is shown to consist of a 1D DFT, DHT and 1D inverse DFT.
ARTICLE | doi:10.20944/preprints202108.0121.v1
Subject: Keywords: Variable-order fractional-discrete time systems; Synchronization and Anti-Synchronization; Lyapunov-Krasovskii Stability; Fractional Order Caputo Derivative; Time-Delay Fractional-Discrete Systems; Fractional Order Discrete Time PID Control
Online: 4 August 2021 (20:16:06 CEST)
In this research article, we solve the problem of synchronization and anti-synchronization of chaotic systems described by discrete, time-delayed, variable fractional order differential equations. To guarantee the synchronization and anti-synchronization of these systems, we use well-known PID control theory and Lyapunov-Krasovskii stability theory for discrete systems of variable fractional order. We illustrate the results through simulated examples, in which it can be seen that our results are satisfactory, achieving synchronization and anti-synchronization of chaotic systems of variable fractional order with discrete time delay.
ARTICLE | doi:10.20944/preprints202206.0077.v1
Subject: Physical Sciences, General & Theoretical Physics Keywords: Fundamental Constants; Planck length; Discrete Space-Time
Online: 6 June 2022 (09:36:40 CEST)
In this work, the hypothesis that the universe is made up of 4-dimensional spheres of space, whose diameter is the Planck length, allows us to calculate most of the constants used in current physical theories. Calculated constants include: the elementary charge, fine structure constant, electron mass, Planck constant, gravitational constant, electric constant, Boltzmann constant, the masses and charges of the up and down quarks, the muon mass, and the Higgs boson mass. All of these depend on the speed of light and the Planck length, which shows that all the constants are related and not due to chance.
CONCEPT PAPER | doi:10.20944/preprints202203.0129.v2
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Discrete Fourier Transform; Fourier Transform; Twiddle factor
Online: 11 March 2022 (11:03:46 CET)
Computation of the Discrete Fourier Transform (DFT) is a challenging task, especially on computational machines/embedded systems where resources are limited. The importance of the Fourier Transform (FT) in the field of signal processing cannot be denied. This paper proposes a technique that can compute the Discrete Fourier Transform of a matrix or vector with the help of matrix multiplication. Moreover, this paper discusses the trivial methods used for computing the DFT, along with matrix-multiplication-based methods and their shortcomings. The proposed method helps in the calculation of a Discrete Fourier Transform matrix by truncation of values from the proposed generic method, which supports computing DFTs of vectors of varying lengths. The proposed methodology can be implemented on legacy computing machines and programming environments that support matrix multiplication.
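A minimal sketch of the matrix-multiplication view of the DFT (the standard textbook construction, not the paper's truncation method):

```python
import cmath

def dft_matrix(n):
    """n x n DFT matrix of twiddle factors W^(j*k), with W = exp(-2*pi*i/n)."""
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (j * k) for k in range(n)] for j in range(n)]

def dft(vec):
    """DFT of a vector computed as a plain matrix-vector product."""
    m = dft_matrix(len(vec))
    return [sum(m_jk * x for m_jk, x in zip(row, vec)) for row in m]
```

Because the transform is just a matrix-vector product, any environment with matrix multiplication support can compute it, at O(n^2) cost versus the FFT's O(n log n).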
ARTICLE | doi:10.20944/preprints202103.0371.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: biofortification; discrete choice; fruits; health claims; micronutrients
Online: 15 March 2021 (11:34:41 CET)
Selenium and iodine are essential micronutrients for humans. They are often deficient in the food supply due to low phytoavailable concentrations in soil. Agronomic biofortification of food crops is one approach to overcome micronutrient malnutrition. This study focused on German consumers’ willingness to purchase selenium- and/or iodine-biofortified apples. For this purpose, an online survey was carried out in which consumers were asked to choose their most preferred apple product from a set card of product alternatives (Discrete Choice Experiment). The multinomial logit model results demonstrated that German consumers have a preference in particular for iodine-biofortified apples. Furthermore, apple choice was mainly influenced by price, health claims, and plastic-free packaging material. Viewed individually, selenium did not exert an effect on product choice, whereas positive interactions between the two micronutrients exist.
ARTICLE | doi:10.20944/preprints202103.0308.v1
Subject: Engineering, Automotive Engineering Keywords: discrete-impulse energy; hydromechanic; process; milk products
Online: 11 March 2021 (10:52:37 CET)
The basis of the discrete-impulse energy supply (DIES) concept is the efficient use of supplied energy. The references describe in detail the general principles of DIES, examine the energy and thermodynamic aspects, and cover the main mechanisms of intensification that can be initiated on the basis of this principle. DIES mechanisms can conveniently be divided into hard and soft ones. The former should be used to stimulate hydromechanical processes, and the latter to accelerate the processes of phase heat and mass transfer, or for intensive mixing of multicomponent media. The authors have studied the possibility of using DIES to intensify hydromechanical processes, in particular the emulsification of milk fat (homogenization of milk, preparation of spreads) and the processing of cream cheese masses. The objects of research were whole non-homogenized milk, fat emulsions and cream cheese mass. In order to evaluate the efficiency of milk homogenization, the change in the homogenization coefficient was studied; it was determined by the centrifugation method as the most affordable and accurate one. Emulsions were evaluated according to the degree of destabilization, resistance and dispersion of the fat phase. The rheological characteristics of the cheese masses were evaluated by the change in effective viscosity.
ARTICLE | doi:10.20944/preprints202111.0169.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: discrete mathematics; scheduling; optimization; interpolation; approximation; objective function.
Online: 9 November 2021 (13:24:19 CET)
An approach to estimating the objective function value of the maximum-lateness minimization scheduling problem is proposed. It is shown how to use transformed instances to define a new continuous objective function, after which the approach itself is formulated using this new objective function. We calculate the objective function value for some polynomially solvable transformed instances and use them as interpolation nodes to estimate the objective function of the initial instance. Moreover, two new polynomial cases that are easy to use in the approach are proposed. At the end of the paper, numerical experiments are described and their results are provided.
Subject: Physical Sciences, Mathematical Physics Keywords: discrete fragmentation; multicomponent; partition function; multiplicity of distribution
Online: 8 September 2020 (11:13:13 CEST)
We formulate the statistics of the discrete multicomponent fragmentation event using a methodology borrowed from statistical mechanics. We generate the ensemble of all feasible distributions that can be formed when a single integer multicomponent mass is broken into a fixed number of fragments and calculate the combinatorial multiplicity of all distributions in the set. We define random fragmentation by the condition that the probability of a distribution be proportional to its multiplicity, and obtain the partition function and the mean distribution in closed form. We then introduce a functional that biases the probability of a distribution so as to produce, in a systematic manner, fragment distributions that deviate to any arbitrary degree from the random case. We corroborate the results of the theory by Monte Carlo simulation and demonstrate examples in which components in sieve cuts of the fragment distribution undergo preferential mixing or segregation relative to the parent particle.
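For the single-component case, the ensemble and its multiplicities can be enumerated directly for small masses (a brute-force sketch; the paper's closed-form partition function and the multicomponent treatment are not reproduced here):

```python
from math import factorial

def partitions(m, n, smallest=1):
    """All multisets (non-decreasing tuples) of n positive integers summing to m."""
    if n == 1:
        if m >= smallest:
            yield (m,)
        return
    for first in range(smallest, m // n + 1):
        for rest in partitions(m - first, n - 1, first):
            yield (first,) + rest

def multiplicity(p):
    """Number of ordered fragment sequences realizing multiset p: n! / prod(n_s!)."""
    counts = {}
    for s in p:
        counts[s] = counts.get(s, 0) + 1
    num = factorial(len(p))
    for c in counts.values():
        num //= factorial(c)
    return num

def mean_distribution(m, n):
    """Mean number of fragments of each size, weighting each multiset by its multiplicity."""
    total = 0
    mean = [0.0] * (m + 1)
    for p in partitions(m, n):
        w = multiplicity(p)
        total += w
        for s in p:
            mean[s] += w
    return [x / total for x in mean]
```

For example, breaking mass 4 into 2 fragments yields the multisets (1,3) with multiplicity 2 and (2,2) with multiplicity 1, so each size 1-3 appears 2/3 times on average.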
ARTICLE | doi:10.20944/preprints202006.0037.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Discrete Event Simulation; Performance Analysis; WIP; Model; Healthcare
Online: 4 June 2020 (13:46:19 CEST)
This paper deals with service performance analysis and improvement using discrete event simulation. The healthcare simulation was done with the Arena Master Development version 14 software. The performance measures for this study are patient output, service rate and service efficiency, which are directly related to patients' waiting time at each service station, work in progress and resource utilization. A simulation model was built for the Bahir Dar clinic, and a proposed model was then prepared for the system. Based on the simulation run results, the output of the existing healthcare service system is low due to bottlenecks in the service system; the stations with the largest queues and high resource utilization were identified as bottlenecks. The identified bottlenecks were reduced by reassigning existing resources, adding new resources, and merging similar services with low resource utilization (nurses). Finally, the researchers proposed a developed model from different scenarios, the best of which combines scenarios 2 and 3. As a result, the service efficiency of the healthcare facility increased by 9.86 percent, work in progress (WIP) was reduced by 3 patients, and the service capacity of the system increased from 34 to 40 patients per day due to the reduction of bottleneck stations.
ARTICLE | doi:10.20944/preprints202005.0502.v1
Subject: Keywords: Online exam, cheating prevention, discrete optimization, social distancing
Online: 31 May 2020 (20:20:46 CEST)
Cheating prevention in online exams is often hard and costly to tackle with proctoring, and it even sometimes involves privacy issues, especially in social distancing due to the pandemic of COVID-19. Here we propose a low-cost and privacy-preserving anti-cheating scheme by programmatically minimizing the cheating gain. A novel anti-cheating scheme we developed theoretically ensures that the cheating gain of all students can be controlled below a desired level aided by the prior knowledge of students’ abilities and a proper assignment of question sequences. Furthermore, a heuristic greedy algorithm we developed can refine an assignment of questions from a cyclic pool of question sequences to efficiently reduce the cheating gain. Compared to the integer linear programming and min-max matching methods in a small-scale simulation, our heuristic algorithm provides results close to the optimal solutions offered by the two standard discrete optimization methods. Hence, our anti-cheating approach could potentially be a cost-effective solution to the well-known cheating problem even without proctoring.
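A toy sketch of the cyclic-pool idea mentioned above (hypothetical; the paper's actual greedy algorithm additionally uses prior knowledge of students' abilities to bound the cheating gain, which is not modeled here):

```python
def cyclic_pool(questions):
    """All rotations of a base question sequence (the cyclic pool)."""
    n = len(questions)
    return [questions[i:] + questions[:i] for i in range(n)]

def assign_rows(pool, n_students):
    """Walk the pool cyclically so adjacent students never share a sequence."""
    return [pool[i % len(pool)] for i in range(n_students)]
```

Every student still answers the same set of questions, but at any moment neighbors are working on different ones, which lowers the payoff of copying.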
ARTICLE | doi:10.20944/preprints201905.0089.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: fixed-bed reactor; blender; Discrete Element Method; CFD
Online: 8 May 2019 (10:00:00 CEST)
A common reactor type in the chemical and process industry is the fixed-bed reactor. Accurate modeling can be achieved with particle-resolved Computational Fluid Dynamics (CFD) simulations; however, the underlying bed morphology plays a paramount role. Synthetic bed-generation methods are much more flexible and faster than image-based approaches. In this study, we look critically at two different bed-generation methods: the Discrete Element Method (DEM) (in the commercial software STAR-CCM+) and the rigid-body model (in the open-source software Blender). The synthetically generated beds from the two approaches are compared against experimental data on overall and radial porosity, particle orientation, and radial velocities. Both models show accurate agreement for the porosity. However, only Blender produces particle orientations similar to the experimental results. The main drawbacks of the DEM are the long calculation time and the shape approximation with composite particles.
TECHNICAL NOTE | doi:10.20944/preprints202101.0504.v2
Subject: Engineering, General Engineering Keywords: VIT transform, discrete-time signals, linear time-varying systems
Online: 19 October 2021 (10:15:05 CEST)
This addendum contains clarifications and a sharpening of some of the results on the VIT transform framework developed in . The focus is on the right-coefficient and left-coefficient forms of the transform, the extraction of a first-order term from a left polynomial fraction, and the application to linear time-varying systems.
ARTICLE | doi:10.20944/preprints201906.0307.v2
Subject: Physical Sciences, Applied Physics Keywords: granular flow; drag and lift forces; discrete element method
Online: 2 July 2019 (11:12:46 CEST)
Both drag and lift forces act on an inclined plane when it is dragged through a granular bed. In this paper, the following results are obtained: the drag and lift forces grow with the velocity of motion; when the immersion depth is constant, the inclination angle has no effect on the drag force, while the lift force increases linearly with the inclination angle; and the ratio of the drag and lift forces is exactly equal to the tangent of the inclination angle. In order to describe this physical process macroscopically, a continuum wedge model based on the Coulomb model is established to predict the drag and lift forces. In particular, the dynamic friction angle in the assumed shear band is predicted as a function of both the inclination angle and the moving velocity.
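With $F_D$ the drag force, $F_L$ the lift force and $\theta$ the inclination angle, one reading of the stated ratio that is consistent with the lift vanishing at zero inclination is

```latex
\frac{F_L}{F_D} \;=\; \tan\theta ,
```

which also matches the reported independence of drag from the inclination angle together with the linear growth of lift.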
ARTICLE | doi:10.20944/preprints201802.0156.v1
Subject: Social Sciences, Microeconomics And Decision Sciences Keywords: Burkina Faso; discrete choice; education; food insecurity; monetary poverty
Online: 26 February 2018 (09:09:43 CET)
Given the income-enabling nature of education, as stipulated by human capital theory, it can be postulated that, ceteris paribus, households with formally educated heads experience less food insecurity and monetary poverty than those with uneducated heads. We test this claim in the case of Burkina Faso, using the 2014 National Survey on Households Living Conditions, along with semi-parametric modeling techniques. In its design, the study uses a household's "willingness and ability" to spend annually on food consumption a per-capita amount above the food poverty line of 102,040 CFA Franc to characterize "household food security", and a household's "unwillingness or inability" to spend above the overall poverty line of 153,530 CFA Franc to characterize "monetary poverty". In addition, the study relies not only on single-equation multivariate probit and logit specifications, but also on both fully parametric and semi-parametric bivariate probit representations of food insecurity and monetary poverty. The results show that relaxing the linearity and independence assumptions through joint semi-parametric bivariate modeling better captures the true effects of household heads' educational attainment on households' food insecurity and monetary poverty. In fact, compared to households headed by someone with no education, those headed by someone with a primary, secondary or higher education are respectively 19.8%, 49.7% and 118.9% less likely to experience food insecurity, and respectively 40.1%, 77.0% and 172.3% less likely to experience monetary poverty in Burkina Faso. In addition, the experience of food insecurity and that of monetary poverty are highly correlated, at 92.7%, suggesting that educational policies that alleviate poverty in Burkina Faso should also positively impact food security in the country.
ARTICLE | doi:10.20944/preprints202108.0094.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Siamese networks; Ensemble of classifiers; Loss function; Discrete cosine transform
Online: 3 August 2021 (15:49:22 CEST)
In this paper, we examine two strategies for boosting the performance of ensembles of Siamese networks (SNNs) for image classification using two loss functions (Triplet and Binary Cross Entropy) and two methods for building the dissimilarity spaces (FULLY and DEEPER). With FULLY, the distance between a pattern and a prototype is calculated by comparing two images using the fully connected layer of the Siamese network. With DEEPER, each pattern is described using a deeper layer combined with dimensionality reduction. The basic design of the SNNs takes advantage of supervised k-means clustering for building the dissimilarity spaces that train a set of support vector machines, which are then combined by sum rule for a final decision. The robustness and versatility of this approach are demonstrated on several cross-domain image data sets, including a portrait data set, two bioimage data sets and two animal vocalization data sets. Results show that the strategies employed in this work to increase the performance of dissimilarity-space image classification using SNNs close the gap with standalone CNNs. Moreover, when our best system is combined with an ensemble of CNNs, the resulting performance is superior to the ensemble of CNNs alone, demonstrating that our new strategy extracts additional information.
BRIEF REPORT | doi:10.20944/preprints202107.0169.v1
Subject: Engineering, Automotive Engineering Keywords: Discrete events simulation; Logistics port; Merchandise transports; Unloading of goods.
Online: 7 July 2021 (08:33:45 CEST)
Today, maritime transport is responsible for moving approximately 80 percent of the volume of world trade, which has favored Colombia: thanks to its geographical position, the country has become one of the most competitive and dynamic economies in South America. For this reason, this investigation was carried out in a Colombian port, with the aim of identifying and analyzing restrictions or critical points that may cause delays in the unloading of goods, which would in turn delay the loading of vehicles. The work began with a diagnosis of the current situation through interviews with the port's operational personnel; a simulation model was then designed in the ARENA software, which showed that queues formed at the weighing activities, causing delays in unloading. As possible solutions, it is recommended to senior management that the operation of the weighing equipment be properly verified so that goods can be unloaded more quickly, and that a replacement weighing unit be purchased. With these changes, it would be possible to reduce the inefficiencies and cost overruns caused by unloading delays, which reduce the fluidity of the goods and the competitiveness of the organization.
REVIEW | doi:10.20944/preprints202103.0204.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Hybrid Automata; Formal Modeling; Discrete State; Continuous State; Formal Methods
Online: 5 March 2021 (21:45:11 CET)
In this paper, Hybrid Automata, a formal model for hybrid systems, is introduced. A summary of its theory is presented, some of its special and important classes are listed, and some properties that can be studied and checked for it are mentioned. Finally, the purposes of use, the most widely used application areas, and the tools that provide H.A. support are addressed.
ARTICLE | doi:10.20944/preprints202001.0080.v1
Subject: Engineering, Energy & Fuel Technology Keywords: shale gas; MRST; embedded discrete fracture model; open-source implementation
Online: 9 January 2020 (09:59:37 CET)
We present a generic and open-source framework for the numerical modeling of the expected transport and storage mechanisms in unconventional gas reservoirs. These unconventional reservoirs typically contain natural fractures at multiple scales. Considering the importance of these fractures in shale gas production, we perform a rigorous study on the accuracy of different fracture models. The framework is validated against an industrial simulator and is used to perform a history-matching study on the Barnett shale. This work presents an open-source code that leverages cutting-edge numerical modeling capabilities like automatic differentiation, stochastic fracture modeling, multi-continuum modeling and other explicit and discrete fracture models. We modified the conventional mass balance equation to account for the physical mechanisms that are unique to organic-rich source rocks. Some of these include the use of an adsorption isotherm, a dynamic permeability-correction function, and an embedded discrete fracture model (EDFM) with fracture-well connectivity. We explore the accuracy of the EDFM for modeling hydraulically-fractured shale-gas wells, which could be connected to natural fractures of finite or infinite conductivity, and could deform during production. Simulation results indicate that although the EDFM provides a computationally efficient model for describing flow in natural and hydraulic fractures, it could be inaccurate under three conditions: (1) when the fracture conductivity is very low, (2) when the fractures are not orthogonal to the underlying Cartesian grid blocks, and (3) when sharp pressure drops occur in large grid blocks with insufficient mesh refinement. Each of these results is very significant considering that most of the fluids in these ultra-low matrix permeability reservoirs are produced through the interconnected natural fractures, which are expected to have very low fracture conductivities.
We also expect sharp pressure drops near the fractures in these shale gas reservoirs, and it is very unrealistic to expect the hydraulic fractures or complex fracture networks to be orthogonal to any structured grid. In conclusion, this paper presents an open-source numerical framework to facilitate the modeling of the expected physical mechanisms in shale-gas reservoirs. The code was validated against published results and a commercial simulator. We also performed a history-matching study on a naturally-fractured Barnett shale-gas well considering adsorption, gas slippage & diffusion and fracture closure as well as proppant embedment, using the framework presented. This work provides the first open-source code that can be used to facilitate the modeling and optimization of fractured shale-gas reservoirs. To provide the numerical flexibility to accurately model stochastic natural fractures that are connected to hydraulically-fractured wells, it is built atop other related open-source codes. We also present the first rigorous study on the accuracy of using EDFM to model both hydraulic fractures and natural fractures that may or may not be interconnected.
ARTICLE | doi:10.20944/preprints201811.0155.v1
Subject: Materials Science, Biomaterials Keywords: discrete dipole approximation (DDA); up-conversion nanoparticles (UCNP); lanthanide-gold
Online: 7 November 2018 (09:34:20 CET)
Up-conversion nanoparticles (UCNP) under near-infrared (NIR) light irradiation have been well investigated in the field of bio-imaging. However, the low up-conversion luminescence (UCL) intensity limits applications. Plasmonic modulation has been proposed as an effective tool to adjust the luminescence intensity and lifetime. In this study, the discrete dipole approximation (DDA) was explored to guide the design of UCNP@mSiO2-Au structures with enhanced UCL intensity. The extinction effects of gold shells could be tuned by adjusting the distance between the UCNPs and the Au NPs via synthesized tunable mesoporous silica (mSiO2) spacers. Enhanced UCL was obtained under 808 nm irradiation. Theoretical predictions could not be demonstrated to full extent by the experimental data, indicating that simulation models need to better take into account inhomogeneities in particle morphologies.
ARTICLE | doi:10.20944/preprints201803.0087.v1
Subject: Social Sciences, Organizational Economics & Management Keywords: affordable care act; discrete choice modeling; employer health insurance; ubuntu
Online: 12 March 2018 (07:33:47 CET)
This article takes an approach to explaining the behavioral manifestations of decision making in US companies' offer of health insurance that is grounded not only in their cost-minimizing behavior, but also in a humanness dimension based on the African concept of Ubuntu. In this way, we define an Ubuntu-based Random Utility modeling framework, describing the choice process as a tripartite decision making, implemented using a nationally representative random sample of 1,061 American companies from the Dun & Bradstreet business data, supplied by Survey Sampling International to the Associated Press-NORC Center for Public Affairs Research. The results from the three sequentially implemented specifications showed that the relationship between management culture and health plan offering strategy depends on other relevant covariates, which, when left out, lead to the problem of omitted variables bias. However, when all variables are included but assumed to enter the relationship exogenously, management culture has no statistically significant effect on companies' decisions about the scope of health plan offering. When the exogeneity assumption is relaxed through a recursive bivariate probit model, the system of two equations produces a highly significant management culture effect. In fact, in this latter case we see that companies with a groups-and-formal-committee management culture are 1.58 times less likely to choose a multiple-plan strategy over a single-plan strategy, hence failing to show the more wholesome plan offering that would theoretically prevail under Ubuntu-style management.
ARTICLE | doi:10.20944/preprints201608.0088.v1
Subject: Engineering, Other Keywords: Face Recognition; Discrete Cosine Transform (DCT); singular value decomposition (SVD)
Online: 9 August 2016 (11:37:36 CEST)
In this paper, we propose the fusion of two projection-based face recognition algorithms: local binary patterns in the DCT domain and singular value decomposition (SVD), characterized by their simplicity and efficiency. Experiments on the standard ORL database show that the proposed system achieves more accurate face recognition than each individual method.
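The two feature extractors named above can be sketched as follows. This is a hedged illustration only: the block size (8x8), the number of retained DCT coefficients and singular values, and the plain concatenation fusion are assumptions, not the paper's exact settings, and a random array stands in for a face image.

```python
import numpy as np
from scipy.fftpack import dct

rng = np.random.default_rng(0)
face = rng.random((32, 32))          # stand-in for a grayscale face image

def dct2(block):
    # separable 2-D type-II DCT
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# low-frequency DCT coefficients of each 8x8 block (16 blocks x 9 coeffs)
dct_feat = np.concatenate([
    dct2(face[i:i + 8, j:j + 8])[:3, :3].ravel()
    for i in range(0, 32, 8) for j in range(0, 32, 8)
])

# leading singular values summarize the image's global structure
svd_feat = np.linalg.svd(face, compute_uv=False)[:10]

# "fusion": concatenate both descriptors into one feature vector
fused = np.concatenate([dct_feat, svd_feat])
print(fused.shape)   # → (154,)
```

A nearest-neighbour or SVM classifier would then operate on `fused` vectors, one per gallery image.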
ARTICLE | doi:10.20944/preprints202110.0179.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Well placement; CO2-EGS; water-EGS; Discrete fracture networks; THM modeling
Online: 12 October 2021 (12:40:20 CEST)
Well placement optimization in a given geological setting is a prerequisite for enhanced geothermal operations in a fractured reservoir. The high computational cost of fully coupled thermo-hydraulic-mechanical (THM) simulation of fractured reservoirs has left well positioning largely unaddressed in field-scale investigations. In this study, we shed light on this topic by examining different injection-production well (doublet) positions in a given real fracture network. Water and CO2 are used as working fluids, and the importance of well position is examined using coupled THM numerical simulations for both fluids. The results are assessed through the thermal breakthrough time, mass flux and energy extraction potential to quantify the impact of well position in a two-dimensional reservoir framework. Nearly a tenfold difference in the final amount of extracted heat is observed between well positions with the same well spacing and geological characteristics. Furthermore, the stress field is a strong function of well position, which is important with respect to the possibility of unwanted stress development. As part of the MEET project, this study recommends performing a similar well placement optimization for each fracture set, in a fully coupled THM manner, before drilling field wells.
ARTICLE | doi:10.20944/preprints202108.0490.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Discrete gamma distribution; correlated counts; sparse-grid quadrature; empirical Bayes estimators
Online: 25 August 2021 (11:49:27 CEST)
The normal and Poisson distribution assumptions in the normal-Poisson mixed effects regression model are often too restrictive for many real count data. Several works have independently relaxed the Poisson conditional distribution assumption for counts or the normal distribution assumption for random effects. This work couples recent advances in these two regards to develop a skew t–discrete gamma regression model in which the count outcomes have full dispersion flexibility and the random effects can be skewed and heavy tailed. Inference in the model is achieved by maximum likelihood using pseudo-adaptive Gaussian quadrature. The use of the proposal is demonstrated on a popular owl sibling negotiation data set. It appears that, for this example, the proposed approach outperforms models based on normal random effects and the Poisson or negative binomial count distribution.
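A discrete gamma distribution of the kind used above can be illustrated with one common construction: discretize a continuous gamma by differencing its CDF, P(Y = k) = F(k + 1) - F(k). The shape and scale values below are arbitrary, and the paper's exact parametrization may differ.

```python
import numpy as np
from scipy.stats import gamma

shape, scale = 2.0, 1.5
k = np.arange(0, 200)                 # truncation level for the support

# pmf by CDF differencing; the sum telescopes to F(200), essentially 1
pmf = gamma.cdf(k + 1, a=shape, scale=scale) - gamma.cdf(k, a=shape, scale=scale)

mean = (k * pmf).sum()
var = ((k - mean) ** 2 * pmf).sum()
print(round(pmf.sum(), 6))            # mass captured by the truncation
print(var > mean)                     # overdispersed relative to a Poisson
```

Unlike the Poisson, whose variance equals its mean, this discretized gamma's dispersion is controlled by the shape parameter, which is what gives the regression model its flexibility.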
ARTICLE | doi:10.20944/preprints202009.0224.v1
Subject: Physical Sciences, Optics Keywords: photon catalyzing; discrete modulation; continuous-variable; quantum key distribution; quantum communications
Online: 10 September 2020 (06:01:48 CEST)
Establishing global high-rate secure communications is a potential application of continuous-variable quantum key distribution (CVQKD) but remains challenging for long-distance transmission in metropolitan areas. Discrete modulation (DM) can compensate for the shortage of transmission distance and has a unique advantage against all side-channel attacks; however, further performance improvement requires source preparation in the presence of noise and loss. Here, we consider the effects of photon catalysis (PC) on DM-involved source preparation for lengthening the maximal transmission distance of the CVQKD system. We address a zero-photon catalysis (ZPC)-based source preparation for enhancing the DM-CVQKD system. The statistical fluctuation due to the finite length of data is taken into account for the practical security analysis. Numerical simulations show that the ZPC-based DM-CVQKD system can not only achieve an extended maximal transmission distance, but also a reasonable increase of the secret key rate. This approach enables the DM-CVQKD to tolerate lower reconciliation efficiency, which may promote practical implementation solutions compatible with classical optical communications using state-of-the-art technology.
ARTICLE | doi:10.20944/preprints202009.0113.v1
Subject: Engineering, Control & Systems Engineering Keywords: age of information; cached files updating; stationary distribution; discrete time model
Online: 5 September 2020 (04:22:43 CEST)
In this paper, using a discrete time model, we consider the average age of all files for a cached-files-updating system where a server generates N files and transmits them to a local cache. To keep the cached files fresh, in each time slot the server updates files with certain probabilities. The age of a file, or its age of information (AoI), is defined as the time the file has stayed in the cache since it was last sent there. Assume that each file in the cache has a corresponding request popularity. In this paper, we obtain the distribution function of the popularity-weighted average age over all files, which gives a complete description of this average age. For the random age of a single file, both the mean and the distribution have been derived before by establishing a simple Markov chain. Using the same idea, we show that an N-dimensional stochastic process can be constituted to characterize the changes of the N file ages simultaneously. By solving the steady state of the resulting process, we obtain the explicit expression of the stationary probability for an arbitrary state vector. Then, the distribution function of the popularity-weighted average age can be derived by merging a proper set of stationary probabilities. As a possible application, the distribution function can be utilized to calculate the probability that the average age violates a certain statistical guarantee.
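The single-file age chain mentioned above can be sketched numerically. In this minimal model (an illustration, with an assumed update probability p, not the paper's N-file process): in each slot the file is refreshed with probability p, resetting its age to 1, otherwise the age grows by one. Truncating the chain gives a finite transition matrix whose stationary vector approximates the age distribution.

```python
import numpy as np

p, K = 0.3, 60                        # update probability, truncation level
P = np.zeros((K, K))
for a in range(K):                    # state a corresponds to age a + 1
    P[a, 0] = p                       # refresh: age resets to 1
    P[a, min(a + 1, K - 1)] += 1 - p  # otherwise age increases by one

# stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

mean_age = (np.arange(1, K + 1) * pi).sum()
print(round(mean_age, 2))             # geometric age: mean is 1/p
```

The resulting age is geometric with mean 1/p, matching the intuition that less frequent updates (smaller p) proportionally increase the average staleness.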
ARTICLE | doi:10.20944/preprints201905.0187.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: dynamic travelling salesman problem; pheromone; discrete particle swarm optimization; heterogeneous; homogeneous
Online: 15 May 2019 (10:41:10 CEST)
This paper presents a discrete particle swarm optimization (DPSO) algorithm with heterogeneous (non-uniform) parameter values for solving the dynamic travelling salesman problem (DTSP). The DTSP can be modelled as a sequence of static sub-problems, each of which is an instance of the TSP. We present a method for automatically setting the values of the DPSO parameters, apart from three parameters which can be defined based on the size of the problem, the size of the particle swarm, the number of iterations, and the particle neighbourhood size. We show that the diversity of parameter values has a positive effect on the quality of the generated results. We compare the performance of the proposed heterogeneous DPSO with two ant colony optimization (ACO) algorithms. The proposed algorithm outperforms the base DPSO and is competitive with the ACO.
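One common way to run PSO on a tour-based problem like the TSP is the "random keys" encoding: each particle is a real-valued vector whose argsort yields a tour, so the standard velocity update applies unchanged. This is an illustrative discretization with homogeneous parameters on a random instance, not the paper's heterogeneous DPSO.

```python
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((12, 2))
D = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

def tour_len(keys):
    t = np.argsort(keys)              # decode the keys into a tour
    return D[t, np.roll(t, -1)].sum() # closed-tour length

n_part, dim = 30, len(cities)
x = rng.random((n_part, dim)); v = np.zeros((n_part, dim))
pbest, pbest_f = x.copy(), np.array([tour_len(p) for p in x])
g = pbest[pbest_f.argmin()].copy(); g_f = pbest_f.min()

for _ in range(200):
    r1, r2 = rng.random((2, n_part, dim))
    # inertia 0.7, cognitive/social weights 1.5 (assumed, not tuned)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    f = np.array([tour_len(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    if f.min() < g_f:
        g, g_f = x[f.argmin()].copy(), f.min()

rand_avg = np.mean([tour_len(rng.random(dim)) for _ in range(100)])
print(g_f < rand_avg)                 # optimized tour beats a random one
```

In the dynamic setting, the same swarm would simply continue iterating after the distance matrix `D` changes, re-evaluating `pbest` and `g` on the new sub-problem.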
REVIEW | doi:10.20944/preprints201811.0079.v1
Subject: Physical Sciences, Applied Physics Keywords: interpersonal coordination; competition; dynamical systems; discrete dynamics; continuous dynamics; sporting activity
Online: 5 November 2018 (03:30:07 CET)
Complex human behavior, including interlimb and interpersonal coordination, has been studied from a dynamical system perspective. We review the applications of a dynamical system approach to a sporting activity, which includes continuous, discrete, and switching dynamics. Continuous dynamics identified switching between in- and anti-phase synchronization, controlled by an interpersonal distance of 0.1 m during expert kendo matches, using a relative phase analysis. As discrete dynamics, return map analysis was applied to the time series of movements during kendo matches, and six coordination patterns were classified. Furthermore, state transition probabilities were calculated based on the two states, which clarified the coordination patterns and switching behavior. We introduced switching dynamics with temporal inputs to clarify the simple rules underlying the complex behavior corresponding to switching inputs in a striking action as a non-autonomous system. As a result, we determined that the time evolution of the striking action was characterized as fractal-like movement patterns generated by a simple Cantor set rule with rotation. Finally, we propose a switching hybrid dynamics to understand both court-net sports, as strongly coupled interpersonal competition, and weakly coupled sports, such as martial arts.
ARTICLE | doi:10.20944/preprints201810.0081.v1
Subject: Materials Science, Metallurgy Keywords: Powder compaction; Discrete element method (DEM); Cohesive contact models; LIGGGHTS; EDEM
Online: 4 October 2018 (15:02:44 CEST)
The purpose of this work was to analyse the compaction of a cohesive material using different DEM simulators, to determine equivalent contact models and to identify how some simulation parameters affect the compaction results (maximum force and compact appearance) and computational costs. For that purpose, three cohesion contact models were tested (‘linear cohesion’ in EDEM; ‘SJKR’ and ‘SJKR2’ in LIGGGHTS). The influence of the particle size distribution (PSD) on the results was also investigated. Further assessments were performed on the effect of selecting different timesteps, using distinct conversion tolerances for exporting the 3D models to STL files, and moving the punch at different speeds. Consequently, it was possible to determine that a timestep equal to 10% of the Rayleigh timestep, a conversion tolerance of 0.01 mm and a punch speed of 0.2 m/s are adequate for simulating the compaction process using the contact models in this work. In addition, the results showed that the maximum force was influenced by the PSD because of the rearrangement of the particles. The PSD was also related to the computational cost because of the number of simulated particles and their sizes. Finally, an equivalence was found between the linear cohesion and SJKR2 contact models.
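The "10% Rayleigh timestep" criterion above can be made concrete with the Rayleigh critical timestep expression commonly used in DEM. The material values below are illustrative placeholders, not the paper's data.

```python
import math

def rayleigh_timestep(radius, density, shear_modulus, poisson):
    # common DEM approximation of the Rayleigh-wave critical timestep
    return (math.pi * radius * math.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

# 1 mm particle with assumed mineral-like properties
R, rho, G, nu = 1e-3, 2500.0, 1e8, 0.3
dt_sim = 0.10 * rayleigh_timestep(R, rho, G, nu)   # 10% of Rayleigh, per the paper
print(f"{dt_sim:.2e} s")
```

Running at a fixed fraction of this critical value keeps the explicit integration stable; smaller fractions are safer but multiply the number of steps, which is the cost trade-off the paper evaluates.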
ARTICLE | doi:10.20944/preprints201807.0480.v1
Subject: Materials Science, Nanotechnology Keywords: surface plasmon resonance; core–shell nanoparticles; discrete-dipole approximation; aspect ratio
Online: 25 July 2018 (11:53:51 CEST)
In this work, numerical simulations for the absorption and scattering efficiencies of spheroid core–shell nanoparticles (CSNs) were conducted and studied using the discrete-dipole approximation method. The characteristics of surface plasmon resonances (SPR) depend upon shell thickness, the compositions of the core and shell materials, and the aspect ratio of the constructed CSNs. We used different core@shell compositions, specifically Au@SiO2, Ag@SiO2, Au@TiO2, Ag@TiO2, Au@Ag, and Ag@Au, for extinction spectra analysis. We also investigated coupled resonance mode wavelengths by adjusting the composition’s layer thickness and aspect ratio. In this study, we show that the extinction efficiency of the Ag@TiO2 core–shell nanoparticles (CSNPs) was higher than that of the others, and we examined the impact of TiO2 shell thickness and Ag core radius on SPR peak positions. From the extinction spectra we found that the Ag@TiO2 nanoparticle had better refractive index sensitivity and figure of merit when the aspect ratio was set to 0.3. All of the experimental results proved that the tunability of these plasmonic resonances was highly dependent on the material used, the layer thickness, and the aspect ratio of the core@shell CSNPs.
Subject: Engineering, Automotive Engineering Keywords: Discrete element method; simulation; multibody dynamics; particle replacement; high-pressure grinding rolls
Online: 23 June 2021 (13:36:08 CEST)
It is known that the performance of High-Pressure Grinding Rolls (HPGR) varies as a function of the method used to laterally confine the rolls, their diameter/length (aspect) ratio, and their condition, whether new or worn. However, quantifying these effects through direct experimentation in machines with reasonably large dimensions is not straightforward, given the challenge, among others, of guaranteeing that the feed material remains unchanged. The present work couples the discrete element method (DEM) to multibody dynamics (MBD) and a novel particle replacement model (PRM) to simulate the performance of pilot-scale HPGRs grinding pellet feed. It shows that rotating side plates, in particular when fitted with studs, allow reaching more uniform forces along the bed, which also translates into a more constant product size along the rolls as well as higher throughput. It also shows that the edge effect is relatively constant with roll length, leading to proportionally larger edge regions for high-aspect-ratio rolls. On the other hand, the product from the center region of such rolls was found to be finer when pressed at identical specific forces. Finally, worn rolls following the commonly observed trapezoidal profile were found to have higher throughput but to generate a coarser product. The approach used in industry to compensate for roller wear by increasing the specific force and roll speed was thus demonstrated to be effective in maintaining, and potentially even increasing, product fineness and throughput, as long as the minimum safety gap is not reached.
ARTICLE | doi:10.20944/preprints202001.0246.v1
Subject: Engineering, Civil Engineering Keywords: Cell Method (CM); Discrete Element Method (DEM); multiscale modeling; periodic composite continua
Online: 21 January 2020 (11:53:52 CET)
This paper addresses the study of the stress field in composite continua with the multiscale approach of the DECM (Discrete Element modeling with the Cell Method). The analysis focuses on composites consisting of a matrix with inclusions of various shapes, to investigate whether and how the shape of the inclusions changes the stress field. The purpose is to provide a numerical explanation for some of the main failure mechanisms of concrete, which is precisely a composite consisting of a cement-based matrix and aggregates of various shapes. Actually, while extensive experimental campaigns have detailed the shape effect of concrete aggregates in the past, so far it has not been possible to model the stress field within the inclusions and on the interfaces accurately. The reason for this lies in the limits of the differential formulation, which is the basis of the most commonly used numerical methods. The Cell Method (CM), on the contrary, is an algebraic method that provides descriptions down to the micro-scale, independently of the presence of rheological discontinuities or concentrated sources. This makes the CM useful for describing the shape effect of the inclusions on the micro-scale. When used together with a multiscale approach, it also models the macro-scale behavior of periodic composite continua, without losing accuracy on the micro-scale. The DECM uses discrete elements precisely to provide the CM with a multiscale approach.
REVIEW | doi:10.20944/preprints201807.0606.v1
Subject: Chemistry, General & Theoretical Chemistry Keywords: protein-DNA interactions; facilitated diffusion; protein target search; discrete-state stochastic models
Online: 31 July 2018 (05:39:04 CEST)
Protein-DNA interactions are critical for the successful functioning of all natural systems. The key role in these interactions is played by processes of protein search for specific sites on DNA. Although it has been studied for many years, only recently microscopic aspects of these processes became more clear. In this work, we present a review on current theoretical understanding of the molecular mechanisms of the protein target search. A comprehensive discrete-state stochastic method to explain the dynamics of the protein search phenomena is introduced and explained. Our theoretical approach utilizes a first-passage analysis and it takes into account the most relevant physical-chemical processes. It is able to describe many fascinating features of the protein search, including unusually high effective association rates, high selectivity and specificity, and the robustness in the presence of crowders and sequence heterogeneity.
ARTICLE | doi:10.20944/preprints201801.0230.v1
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: QAP, PSO, OpenCL, GPU calculation, particle swarm optimization, multi-swarm, discrete optimization
Online: 24 January 2018 (19:04:47 CET)
This paper presents a multi-swarm PSO algorithm for the Quadratic Assignment Problem (QAP) implemented on the OpenCL platform. Our work was motivated by results of time efficiency tests performed for a single-swarm algorithm implementation, which showed clearly that the benefits of a parallel execution platform can be fully exploited provided the processed population is large. The described algorithm can be executed in two modes: with independent swarms or with migration. We discuss the algorithm construction and report results of tests performed on several problem instances from the QAPLIB library. During the experiments the algorithm was configured to process large populations. This allowed us to collect statistical data related to the values of the goal function reached by individual particles. We use them to demonstrate on two test cases that although single particles seem to behave chaotically during the optimization process, when the whole population is analyzed, the probability that a particle will select a near-optimal solution grows.
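The QAP goal function the swarms minimize can be stated in a few lines: assign facilities to locations so that the flow-weighted sum of distances, sum over i,j of F[i,j] * D[perm[i], perm[j]], is minimal. The random flow/distance matrices below are stand-ins for a QAPLIB instance, and the 2-swap move sketched afterwards is one typical local step for permutation-encoded particles, not necessarily this paper's operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
F = rng.integers(0, 10, (n, n))       # flow between facilities
D = rng.integers(0, 10, (n, n))       # distance between locations

def qap_cost(perm):
    # sum_ij F[i, j] * D[perm[i], perm[j]]
    return int((F * D[np.ix_(perm, perm)]).sum())

perm = rng.permutation(n)
base = qap_cost(perm)

# evaluate the full 2-swap neighbourhood of the incumbent permutation
best = base
for i in range(n):
    for j in range(i + 1, n):
        p2 = perm.copy()
        p2[[i, j]] = p2[[j, i]]
        best = min(best, qap_cost(p2))
print(best <= base)                   # the best 2-swap never worsens it
```

On a GPU, this neighbourhood evaluation parallelizes naturally, which is what makes large populations pay off on OpenCL.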
ARTICLE | doi:10.20944/preprints201704.0094.v1
Subject: Engineering, Civil Engineering Keywords: alkali silica reaction; lattice discrete particle model; concrete; creep; shrinkage; aging; deterioration
Online: 17 April 2017 (06:12:48 CEST)
Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades and is influenced by changes in the environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, the coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansion (zero macroscopic stress), a significant degree of coupling exists because ASR-induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches.
ARTICLE | doi:10.20944/preprints201608.0001.v1
Subject: Social Sciences, Econometrics & Statistics Keywords: discrete-time hazard models; labour market transitions; duration of unemployment spells; immigration
Online: 1 August 2016 (09:47:20 CEST)
This paper studies the duration patterns of unemployment spells for immigrants and the determinants of unemployment's completion into one of a number of studied labour market states in Finland. We estimate a duration model for unemployment with competing risks of its terminating into employment, labour market training or economic inactivity. Given the wide observation period, the opportunity to analyse labour market integration during various phases of economic development in Finland, and the individualistic character of immigrants' labour careers, this research yields many findings concerning the labour market integration of immigrants. The approach undertaken has a dualistic “descriptive-dynamic” character, under which integration is understood as a never-ending process, conditioned by long-term existence over time and a context of solitary action. We find that transitions out of unemployment spells have a cyclical character; after every new “cycle” of unemployment, the probability of terminating unemployment decreases further. We also find that ascriptive factors matter in the job-placement of immigrants out of unemployment. Therefore, the gender, education and age of immigrants, as well as the period in which first unemployment occurred, potentially predict transitions out of unemployment and the further labour market integration of immigrants.
ARTICLE | doi:10.20944/preprints202112.0303.v1
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Discrete fractional-order system; Caputo delta fractional difference; Hidden attractor; Dihedral symmetry D3
Online: 20 December 2021 (10:20:56 CET)
In this paper the D3 dihedral logistic map of fractional order is introduced. The map presents a dihedral symmetry D3. It is numerically shown that the construction and interpretation of the bifurcation diagram versus the fractional order require special attention. The system stability is determined and the problem of hidden attractors is analyzed. Also, analytical and numerical results show that the chaotic attractor of integer order, with D3 symmetries, loses its symmetry in the fractional-order variant.
ARTICLE | doi:10.20944/preprints202111.0569.v1
Subject: Earth Sciences, Environmental Sciences Keywords: ecosystem dynamics; discrete-event model; qualitative modelling; boolean model; state-and-transition model
Online: 30 November 2021 (12:39:11 CET)
Sub-Saharan social-ecological systems are undergoing changes in environmental conditions, including modifications in rainfall pattern and biodiversity loss. The consequences of such changes depend on complex causal chains, which call for integrated management strategies whose efficiency could benefit from ecosystem dynamic modelling. However, ecosystem models often require large amounts of quantitative information for estimating parameters, which is often unavailable. Alternatively, qualitative modelling frameworks have proved useful for explaining ecosystem response to perturbations, while requiring less information and providing more general predictions. However, current qualitative methods have some shortcomings which may limit their utility for specific issues. In this paper, we propose the Ecological Discrete-Event Network (EDEN), an innovative qualitative dynamic modelling framework based on "if-then" rules which generates many alternative event sequences (trajectories). Based on expert knowledge, observations and literature, we use this framework to assess the effect of permanent changes in surface water and herbivore diversity on vegetation and socio-economic transitions in an East African savanna. Results show that water availability drives changes in vegetation and socio-economic transitions, while herbivore functional groups have highly contrasting effects depending on the group. This first use of EDEN in a savanna context is promising for bridging expert knowledge and ecosystem modelling.
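The "if-then" rule mechanism described above can be sketched minimally: a state is a set of facts, a rule fires when its condition holds, and enumerating every applicable rule at every state yields the alternative trajectories. The savanna facts and rules below are invented placeholders, not EDEN's actual rule base.

```python
RULES = [
    # (condition facts, (operation, fact)) — all illustrative
    (frozenset({"water", "grass"}), ("add", "herbivores")),
    (frozenset({"herbivores"}), ("remove", "grass")),
    (frozenset({"water"}), ("add", "grass")),
]

def successors(state):
    # every rule whose condition is satisfied yields one possible next state
    for cond, (op, fact) in RULES:
        if cond <= state:
            yield state | {fact} if op == "add" else state - {fact}

def trajectories(state, depth):
    # enumerate all alternative event sequences up to a given depth
    if depth == 0:
        return [[state]]
    out = []
    for nxt in set(successors(state)):
        out += [[state] + t for t in trajectories(nxt, depth - 1)]
    return out or [[state]]

paths = trajectories(frozenset({"water"}), 3)
print(len(paths))   # several alternative event sequences from one start state
```

Even three toy rules branch into multiple trajectories, which is the property EDEN exploits to explore alternative ecosystem futures without numerical parameters.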
ARTICLE | doi:10.20944/preprints202104.0592.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Flexible count regression; balanced discrete gamma distribution; deviance statistic; latent equidispersion; likelihood ratio
Online: 22 April 2021 (08:55:29 CEST)
Most existing flexible count regression models allow only approximate inference. Balanced discretization is a simple method to produce a mean-parametrizable flexible count distribution starting from a continuous probability distribution. This makes it easy to define flexible count regression models allowing exact inference under various types of dispersion (equi-, under- and overdispersion). This study describes maximum likelihood (ML) estimation and inference in count regression based on the balanced discrete gamma (BDG) distribution and introduces a likelihood-ratio-based latent equidispersion (LE) test to identify the parsimonious dispersion model for a particular dataset. A series of Monte Carlo experiments was carried out to assess the performance of the ML estimates and the LE test in the BDG regression model, as compared to the popular Conway-Maxwell-Poisson (CMP) model. The results show that the two evaluated models recover population effects even under misspecification of dispersion-related covariates, with coverage rates of the asymptotic 95% confidence intervals approaching the nominal level as the sample size increases. The BDG regression approach nevertheless outperforms CMP regression in very small samples (n = 15 − 30), mostly in overdispersed data. The LE test proves appropriate to detect latent equidispersion, with rejection rates converging to the nominal level as the sample size increases. Two applications on real data are given to illustrate the use of the proposed approach to count regression analysis.
ARTICLE | doi:10.20944/preprints202104.0452.v1
Subject: Social Sciences, Accounting Keywords: Informal employment; social security; state effectiveness; Maghreb countries; individual preferences; discrete choice model
Online: 16 April 2021 (22:29:59 CEST)
State legitimacy and effectiveness can be seen in the way welfare is delivered to citizens to mitigate social grievances that could eventually lead to conflicts (Kivimäki, 2021). Social security systems in Maghreb countries are quite similar in their architecture and aim to provide social insurance to all workers in the labor market. However, they suffer from the same main problem: the low rate of enrollment of workers. Many workers (employees and self-employed) work informally without any social security coverage. The issue of whether informal jobs are chosen voluntarily by workers or as a strategy of last resort is controversial. Many authors recognize that the informal sector is heterogeneous and is made up of workers who voluntarily choose it and others who are pushed into it because of entry barriers to the formal sector (Günther & Launov, 2012). Using the SAHWA survey and discrete choice models, this article confirms the heterogeneity of the informal labor market in three Maghreb countries: Algeria, Morocco, and Tunisia. Furthermore, this article highlights the profiles of workers who voluntarily choose informality, which is missing from previous studies. Finally, this article proposes policy recommendations to extend social security to informal workers and to include them in the formal labor market.
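The discrete choice machinery behind such analyses can be sketched as a binary logit of formal versus informal employment, here fitted by maximum likelihood on synthetic data; the covariates, coefficients, and data below are invented stand-ins, not the SAHWA variables or estimates:

```python
import math, random

# Hedged sketch: binary logit of formal (1) vs informal (0) employment on
# synthetic covariates; nothing here reproduces the SAHWA survey results.
random.seed(0)

def simulate(n=2000, beta=(-1.0, 1.5, 0.8)):
    data = []
    for _ in range(n):
        educ = random.random()   # education proxy, rescaled to [0, 1] (invented)
        firm = random.random()   # firm-size proxy, rescaled to [0, 1] (invented)
        u = beta[0] + beta[1] * educ + beta[2] * firm
        p = 1.0 / (1.0 + math.exp(-u))
        data.append((educ, firm, 1 if random.random() < p else 0))
    return data

def fit_logit(data, lr=0.5, iters=400):
    # maximum likelihood via plain gradient ascent on the log-likelihood
    b, n = [0.0, 0.0, 0.0], len(data)
    for _ in range(iters):
        g = [0.0, 0.0, 0.0]
        for educ, firm, y in data:
            p = 1.0 / (1.0 + math.exp(-(b[0] + b[1] * educ + b[2] * firm)))
            for k, xk in enumerate((1.0, educ, firm)):
                g[k] += (y - p) * xk
        b = [bk + lr * gk / n for bk, gk in zip(b, g)]
    return b
```

In an actual application the fitted coefficients' signs and significance are what identify which worker profiles voluntarily choose informality.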
ARTICLE | doi:10.20944/preprints201912.0014.v3
Subject: Engineering, Civil Engineering Keywords: Discrete Element Method (DEM); Cell Method (CM); multiscale modeling; periodic composite materials; nonlocality
Online: 10 February 2020 (10:09:46 CET)
This paper presents a new numerical method for multiscale modeling of composite materials. The new numerical model, called DECM, consists of a Discrete Element Method (DEM) approach to the Cell Method (CM) and combines the main features of both the DEM and the CM. In particular, it offers the same degree of detail as the CM, on the microscale, and manages the discrete elements individually as the DEM does—allowing finite displacements and rotations—on the macroscale. Moreover, the DECM is able to activate crack propagation until complete detachment and automatically recognizes new contacts. Unlike other DEM approaches for modeling failure mechanisms in continuous media, the DECM does not require prior knowledge of the failure position. Furthermore, the DECM solves the problems in the space domain directly. Therefore, it does not require any dynamic relaxation techniques to obtain the static solution. For the sake of example, the paper shows the results offered by the DECM for axial and shear loading of a composite two-dimensional domain with periodic round inclusions. The paper also offers some insights into how the inclusions modify the stress field in composite continua.
ARTICLE | doi:10.20944/preprints201805.0026.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: weighted distribution; Poisson-Lindley distribution; discrete distribution; weighted negative binomial Poisson-Lindley distribution
Online: 2 May 2018 (11:47:51 CEST)
This study introduces a new discrete distribution which is a weighted version of Poisson-Lindley distribution. The weighted distribution is obtained using the negative binomial weight function and can be fitted to count data with over-dispersion. The p.m.f., p.g.f. and simulation procedure of the new weighted distribution, namely weighted negative binomial Poisson-Lindley (WNBPL), are provided. The maximum likelihood method for parameter estimation is also presented. The WNBPL distribution is fitted to several insurance datasets, and is compared to the Poisson and negative binomial distributions in terms of several statistical tests.
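The base Poisson–Lindley distribution that the paper weights has the closed-form pmf P(X = k) = θ²(k + θ + 2)/(θ + 1)^(k+3); as a hedged sketch (the negative binomial weighting defining the WNBPL itself is not reproduced here), the pmf and an inverse-CDF sampler are:

```python
import random

# Sketch: base Poisson-Lindley pmf and sampling; this is NOT the paper's
# weighted (WNBPL) distribution, only the unweighted starting point.
def pl_pmf(k, theta):
    # P(X = k) = theta^2 (k + theta + 2) / (theta + 1)^(k + 3), k = 0, 1, 2, ...
    return theta**2 * (k + theta + 2) / (theta + 1) ** (k + 3)

def pl_sample(theta, rng):
    # inverse-CDF sampling by accumulating the pmf
    u, k, c = rng.random(), 0, 0.0
    while True:
        c += pl_pmf(k, theta)
        if u < c:
            return k
        k += 1
```

A quick sanity check is that the pmf sums to 1 and that the empirical mean of samples approaches the known mean (θ + 2)/(θ(θ + 1)).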
ARTICLE | doi:10.20944/preprints202012.0096.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: AHB LED constant-current driver; Digital Current-programmed control; discrete-time modeling; Modulation effect
Online: 4 December 2020 (10:09:24 CET)
The high-power asymmetric half-bridge converter (AHBC) LED constant-current driver controlled by digital current mode is a fourth-order system. The static operating point, parasitic resistance, load characteristics, sampling effect, modulation mode and loop delay all have a great influence on its dynamic performance. In this paper, the small-signal pulse transfer function of the driver is established by the discrete-time modeling method for the two operating points corresponding to the three modulation modes: trailing edge, leading edge and double edge. The effects of parasitic parameters, delay, sampling and load are fully considered in the modeling. For the large number of complex matrix exponential operations, the first-order Taylor formula is used for approximate calculation after the coefficient matrix is obtained by substituting the data. Then, Matlab software is used to compare the discrete-time model with the discrete-average model. The results show that the proposed discrete-time model characterizes the resonant peak and high-frequency dynamic characteristics more accurately, and is well suited to the design of high-frequency digital controllers.
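The approximation step — replacing the matrix exponential over one switching period with its first-order Taylor expansion e^{AT} ≈ I + AT — can be checked numerically; the state matrix and period below are arbitrary stand-ins, not the AHBC coefficient matrix:

```python
import numpy as np

# Sketch: first-order Taylor approximation of e^{AT} vs a truncated power
# series, mimicking the simplification used in discrete-time modeling.
def expm_series(A, terms=30):
    # matrix exponential via its power series (adequate for small ||A||)
    n = A.shape[0]
    out, term = np.eye(n), np.eye(n)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-4.0, -0.5]])  # toy state matrix (assumed)
T = 1e-3                                  # one switching period (assumed)
exact = expm_series(A * T)
taylor = np.eye(2) + A * T                # first-order Taylor formula
err = np.max(np.abs(exact - taylor))
```

The error scales like ||AT||², which is why the Taylor shortcut is acceptable when the switching period is short relative to the system dynamics.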
ARTICLE | doi:10.20944/preprints201805.0324.v1
Subject: Engineering, Other Keywords: CO2 geological storage, fractured carbonates, CO2 migration plume, updated geological model, Discrete Fracture Network
Online: 23 May 2018 (16:43:00 CEST)
Investigation into geological storage of CO2 is underway at the Technology Development Plant (TDP) at Hontomín (Burgos, Spain), the only current onshore injection site in the European Union. The storage reservoir is a deep saline aquifer located within Lower Jurassic formations (Lias and Dogger), formed by fractured carbonates with low matrix permeability. Understanding the processes involved in CO2 migration within this kind of low-primary-permeability carbonates influenced by fractures and faults is key to ensure safe operation and reliable plume prediction. During the hydraulic characterization tests, 2300 tons of liquid CO2 and 14000 m3 of synthetic brine were co-injected on site in various sequences to characterize the pressure response of the seal-storage pair [de Dios et al, 2017]. The injection tests were analyzed with a compositional dual media model which accounts for both temperature effects (as the CO2 is liquid at the bottom of the wellbore) and multiphase flow hysteresis (to effectively simulate the alternating brine and CO2 injection tests that were performed). The pressure and temperature responses of the storage formation were history-matched mainly through the petrophysical characteristics of the fracture network [Le Gallo et al, 2017]. The dynamic characterization of the fracture network dominates the CO2 migration, while the matrix does not appear to significantly contribute to the storage capacity. This initial modeling approach was improved using the characterization workflow developed within the European FP7 CO2ReMove project for sandstone fractured reservoirs [Ringrose et al., 2011; Deflandre et al., 2011]. Fractured reservoirs are challenging to handle because of their high level of heterogeneity that conditions the reservoir behaviour during the injection. In particular, natural fractures have a significant impact on well performance [Ray et al, 2012].
Furthermore, the understanding of the processes involved in CO2 migration within relatively low-permeability storage influenced by fractures and faults is essential for enabling safe storage operation [Iding and Ringrose, 2010]. As part of the European H2020 ENOS project, the site geological model is updated by integration of the recently acquired data such as the image log interpretations from injection and observation wells. The geological model is generated through the analysis and integration of data including borehole images and well test data. Following a methodology developed for naturally fractured hydrocarbon reservoirs [Ray et al., 2012], the image log analysis identified two sets of diffuse fractures. A Discrete Fracture Network [Bourbiaux et al., 2005] was built around both wells which encompass the caprock, storage and underburden formations. The fracture characteristics of the two sets of diffuse fractures, such as orientations, densities and conductivities, are calibrated upon the interpretation of the injection tests [Le Gallo et al, 2017]. For each facies, the DFN characteristics were then upscaled and propagated to the full-field reservoir simulation model as 3D fracture properties (fracture porosity, fracture permeability and equivalent block size).
ARTICLE | doi:10.20944/preprints201711.0052.v1
Subject: Physical Sciences, Other Keywords: information entropy production; Discrete Markov Chains; spike train statistics; Gibbs measures; maximum entropy principle
Online: 8 November 2017 (04:25:12 CET)
Experimental recordings of the collective activity of interacting spiking neurons exhibit random behavior and memory effects; thus the stochastic process modeling the spiking activity is expected to show some degree of time irreversibility. We use the thermodynamic formalism to build a framework, in the context of spike train statistics, to quantify the degree of irreversibility of any parametric maximum entropy measure under arbitrary constraints, and provide an explicit formula for the information entropy production of the inferred Markov maximum entropy process. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike train statistics.
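For a stationary Markov chain, the standard (Schnakenberg) entropy production rate is σ = Σᵢⱼ πᵢPᵢⱼ ln(πᵢPᵢⱼ / (πⱼPⱼᵢ)), which vanishes exactly when detailed balance (time reversibility) holds. A hedged sketch on toy chains, not the inferred spike-train models of the paper:

```python
import numpy as np

# Sketch: entropy production rate of a stationary Markov chain; assumes
# P[i,j] > 0 iff P[j,i] > 0 (true for the toy examples below).
def entropy_production(P, iters=2000):
    P = np.asarray(P, float)
    pi = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):          # power iteration for the stationary law
        pi = pi @ P
    s = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[0]):
            if P[i, j] > 0 and P[j, i] > 0:
                s += pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
    return s
```

A symmetric transition matrix satisfies detailed balance and gives σ = 0, while a biased cycle (probability flowing preferentially one way around the states) gives σ > 0 — the signature of time irreversibility.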
ARTICLE | doi:10.20944/preprints201609.0025.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: numerical cognition; numerical distance effect; numerical size effect; analogue number system; discrete semantic system
Online: 7 September 2016 (11:29:41 CEST)
Human number understanding is thought to rely on the analogue number system (ANS), working according to Weber’s law. We propose an alternative account, suggesting that symbolic mathematical knowledge is based on a discrete semantic system (DSS), a representation that stores values in a semantic network, similar to the mental lexicon or to a conceptual network. Here, focusing on the phenomena of numerical distance and size effects in comparison tasks, we first discuss how a DSS model could explain these numerical effects. Second, we demonstrate that the DSS model can give quantitatively as appropriate a description of the effects as the ANS model. Finally, we show that the symbolic numerical size effect is mainly influenced by the frequency of the symbols, and not by the ratios of their values. This last result suggests that numerical distance and size effects cannot be caused by the ANS, while the DSS model might be an alternative approach that can explain the frequency-based size effect.
ARTICLE | doi:10.20944/preprints201608.0163.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: emergence; uniqueness; existence of solutions; input/output system; system specifications; discrete event system specification
Online: 17 August 2016 (11:29:23 CEST)
Conditions under which compositions of component systems form a well-defined system-of-systems are here formulated at a fundamental level. Stating what defines a well-defined composition, together with sufficient conditions guaranteeing such a result, offers insight into exemplars that can be found in special cases such as differential equation and discrete event systems. For any given global state of a composition, two requirements can be stated informally as: 1) the system can leave this state, i.e., there is at least one trajectory defined that starts from the state, and 2) the trajectory evolves over time without getting stuck at a point in time. Considered for every global state, these conditions determine whether the resultant is a well-defined system and, if so, whether it is non-deterministic or deterministic. We formulate these questions within the framework of iterative specifications for mathematical system models that are shown to be behaviorally equivalent to the Discrete Event System Specification (DEVS) formalism. This formalization supports definitions and proofs of the aforementioned conditions. Implications are drawn at the fundamental level of existence, where the emergence of a system from an assemblage of components can be characterized. We focus on systems with feedback coupling, where existence and uniqueness of solutions is problematic.
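A minimal sketch of the autonomous part of a DEVS atomic model — time advance ta(s), output λ(s), and internal transition δ_int(s) — with a toy simulation loop (external events and coupling, which the paper's feedback-coupling analysis concerns, are deliberately omitted):

```python
# Hedged sketch of a DEVS atomic model (autonomous behavior only); the model
# and simulator are illustrative, not the paper's iterative specifications.
class Counter:
    def __init__(self):
        self.state = 0

    def time_advance(self):   # ta(s): time until the next internal event
        return 1.5

    def output(self):         # lambda(s): output emitted at that event
        return self.state

    def internal(self):       # delta_int(s): state after the event
        self.state += 1

def simulate(model, t_end):
    # advance simulated time event by event, collecting (time, output) pairs
    t, trace = 0.0, []
    while t + model.time_advance() <= t_end:
        t += model.time_advance()
        trace.append((t, model.output()))
        model.internal()
    return trace
```

Requirement 1) above corresponds to ta(s) being defined in every state, and requirement 2) to the event times not accumulating at a finite instant (no Zeno behavior).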
ARTICLE | doi:10.20944/preprints202205.0010.v1
Subject: Engineering, Control & Systems Engineering Keywords: age of information; discrete time status updating system; probabilistic preemption; probability generation function; stationary distribution
Online: 4 May 2022 (13:15:56 CEST)
The age of information (AoI) metric was proposed to measure the freshness of messages obtained at the terminal node of a status updating system. In this paper, the AoI of a discrete time status updating system with probabilistic packet preemption is investigated by analyzing the steady state of a three-dimensional discrete stochastic process. The queue used in the system is assumed to be Ber/Geo/1/2*/η, which represents that the system size is 2 and the packet in the buffer can be preempted by a fresher packet with probability η. Instead of considering the system's AoI separately, we use a three-dimensional state vector (n,m,l) to simultaneously track the real-time changes of the AoI, the age of the packet in the server, and the age of the packet waiting in the buffer. We give the explicit expression of the system's average AoI, and show that the average AoI of the system without packet preemption is obtained by letting η=0. When η is set to 1, the mean AoI of the system with the Ber/Geo/1/2* queue is obtained as well. Combining the results we have obtained and comparing them with the corresponding average continuous AoIs, we propose a possible relationship between the average discrete AoI with a Ber/Geo/1/c queue and the average continuous AoI with an M/M/1/c queue. For each of the two extreme cases where η=0 and η=1, we also determine the stationary distribution of the AoI using the probability generation function (PGF) method. The relations between the average AoI and the packet preemption probability η, as well as the AoI's distribution curves in the two extreme cases, are illustrated by numerical simulations. Notice that probabilistic packet preemption may occur, for example, in an energy harvesting (EH) node of a wireless sensor network, where the packet in the buffer can be replaced only when the node collects enough energy.
In particular, to exhibit the usefulness of our idea and methods and to highlight the merits of considering discrete time systems, we give detailed explanations of how the results for continuous AoI are derived by analyzing the corresponding discrete time system, and how the discrete age analysis is generalized to the system with multiple sources. In terms of the packet service process, we also propose an approach to analyze the system's AoI when the service time distribution is relaxed to be arbitrary.
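The discrete-time dynamics can be made concrete with a hedged simulation sketch of a size-2 system with Bernoulli arrivals, geometric service, and probabilistic preemption of the buffered packet; the event ordering within a slot is an assumption of this sketch, not the paper's exact model:

```python
import random

# Sketch: average AoI of a discrete-time status updating system with a
# one-slot server + one-slot buffer and buffer preemption probability eta.
def avg_aoi(p, q, eta, slots=200_000, seed=1):
    rng = random.Random(seed)
    srv = buf = None          # ages of the packets in server / buffer (None = empty)
    age, total = 0, 0         # receiver's age of information
    for _ in range(slots):
        age += 1                               # all ages grow by one slot
        if srv is not None: srv += 1
        if buf is not None: buf += 1
        if rng.random() < p:                   # Bernoulli(p) arrival, age 0
            if srv is None:
                srv = 0
            elif buf is None:
                buf = 0
            elif rng.random() < eta:           # probabilistic preemption
                buf = 0                        # fresher packet replaces buffer
        if srv is not None and rng.random() < q:   # Geo(q) service completes
            age = srv                          # AoI resets to delivered age
            srv, buf = buf, None               # buffered packet enters service
        total += age
    return total / slots
```

With a common random seed the η=1 run delivers packets that are never older than in the η=0 run, so preemption by fresher packets can only lower the average AoI — the monotonicity the closed-form expressions quantify.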
ARTICLE | doi:10.20944/preprints202107.0206.v1
Subject: Materials Science, Biomaterials Keywords: damage detection; concrete-like structures; coda waves; ultrasound; wave propagation; discrete element modeling; sensitivity study
Online: 8 July 2021 (15:26:10 CEST)
Ultrasonic measurements are used in civil engineering for structural health monitoring of concrete infrastructures. The late portion of the ultrasonic wavefield, the coda, is sensitive to small changes in the elastic moduli of the material. Coda Wave Interferometry (CWI) correlates these small changes in the coda with the wavefield recorded in intact, or unperturbed, concrete specimens to reveal the amount of velocity change that occurred. CWI has the potential to detect localised damage and global velocity reductions alike. In this study, the sensitivity of CWI to different types of concrete mesostructures and their damage levels is investigated numerically. Realistic numerical models of concrete specimens are generated and damage evolution is simulated using the discrete element method. In the virtual concrete lab, the simulated ultrasonic wavefield is propagated from one transducer using a realistic source signal and recorded at a second transducer. Different damage scenarios reveal a different slope in the decorrelation of waveforms with the observed reduction in velocities in the material. Finally, the impact and possible generalizations of the findings are discussed and recommendations are given for a potential application of CWI in concrete at structural scale.
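The core CWI operation can be sketched with the stretching technique on a synthetic signal: a reference "coda" is compared against a recording stretched by a known relative velocity change dv/v, and the correlation maximum recovers that change. The toy waveform below is an assumption; real applications use measured ultrasonic coda:

```python
import numpy as np

# Hedged sketch of the CWI stretching technique on synthetic data.
t = np.linspace(0.0, 1.0, 4000)
u0 = np.sin(2 * np.pi * 40 * t) * np.exp(-2 * t)   # toy decaying coda (assumed)
true_eps = 0.004                                    # imposed dv/v = 0.4 %
u1 = np.interp(t * (1 + true_eps), t, u0)           # "perturbed" recording

def cc(eps):
    # correlation of the perturbed trace with the reference stretched by eps
    u0s = np.interp(t * (1 + eps), t, u0)
    return np.corrcoef(u0s, u1)[0, 1]

grid = np.linspace(-0.01, 0.01, 201)                # candidate dv/v values
best = grid[np.argmax([cc(e) for e in grid])]       # recovered velocity change
```

The slope of the decorrelation away from the maximum is exactly the kind of quantity the study relates to different damage scenarios.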
ARTICLE | doi:10.20944/preprints202107.0066.v1
Subject: Engineering, Automotive Engineering Keywords: ballasted track; unsupported sleepers; sleeper-ballast dynamic impact; dynamic simulation; analytic solution; discrete element modelling
Online: 2 July 2021 (15:39:20 CEST)
Unsupported sleepers or void zones in ballasted tracks are among the most recent and frequent track failures. Void failures have the property of intensive development and, without timely maintenance measures, can cause the appearance of cost-intensive local instabilities such as subgrade damage. The reason for the intensive void development lies in the mechanics of the sleeper and ballast bed interaction. A particularity of the interaction is a dynamic impact that occurs due to void closure. Additionally, void zones cause an inhomogeneous ballast pressure distribution between the void zone and the fully supported neighbouring zones. The present paper is devoted to studying the mechanism of the sleeper-ballast dynamic impact in the void zone. The results of experimental in-situ measurements of rail deflections showed significant impact accelerations in the zone even for light-weight slow vehicles. A simple 3-beam numerical model of track and rolling stock interaction reproduced a dynamic interaction similar to the experimental measurements. Moreover, the model shows that the sleeper accelerations are more than 3 times higher than the corresponding wheel accelerations and that the impact appears before the wheel enters the impact point. The analysis of ballast loadings shows a specific impact behaviour in combination with the quasistatic part that differs between the void zone and the neighbouring zones, which are characterised by highly pre-stressed ballast conditions. The analysis of the influence of void sizes demonstrates that the impact loadings and the maximal wheel and sleeper accelerations appear at a certain void depth, after which the values decrease. The quasistatic ballast loading analysis indicates a more than twofold increase of the ballast loading in the neighbouring zones for long voids and almost full quasistatic unloading for short voids. However, the imitation model used cannot explain the nature of the dynamic impact.
The mechanism of the void impact is clearly explained by an analytic solution using a simple clamped beam. A simplified analytical expression of the void impact velocity shows that it is linearly related to the wheel speed and loading. The comparison with the numerically simulated impact velocities shows good agreement and confirms the existence of a void depth with maximal impact. An estimation of the long-term influences is performed for the cases of normal sleeper loading, high ballast pre-stress with quasistatic loading in the neighbouring zones, and high impact inside the void.
ARTICLE | doi:10.20944/preprints201812.0209.v2
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: neural population coding; mutual information; Kullback-Leibler divergence; Rényi divergence; Chernoff divergence; approximation; discrete variables
Online: 7 March 2019 (07:36:24 CET)
Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback--Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may potentially bring convenience to the applications of information theory to many practical and theoretical problems.
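The identity underlying the Kullback–Leibler route is I(X;Y) = Σₓ p(x) · D_KL(p(y|x) ‖ p(y)), which works directly for discrete variables; a hedged sketch on a small joint table (the neural-population approximation formulas of the paper are not reproduced):

```python
import numpy as np

# Sketch: Shannon mutual information as an average KL divergence over a
# discrete joint distribution p(x, y) given as a 2-D table.
def mutual_information(joint):
    joint = np.asarray(joint, float)
    px = joint.sum(axis=1)                    # marginal p(x)
    py = joint.sum(axis=0)                    # marginal p(y)
    mi = 0.0
    for i, pxi in enumerate(px):
        cond = joint[i] / pxi                 # conditional p(y | x = i)
        mask = cond > 0                       # 0 * log 0 = 0 convention
        mi += pxi * np.sum(cond[mask] * np.log(cond[mask] / py[mask]))
    return mi
```

Independent variables give I = 0 and a perfectly correlated pair of binary variables gives I = ln 2, the two sanity checks any approximation formula must also pass.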
ARTICLE | doi:10.20944/preprints201804.0379.v2
Subject: Physical Sciences, Particle & Field Physics Keywords: spacetime models; discrete spacetime; relativity theory; causal models; quantum field theory; spin networks; quantum loops
Online: 12 June 2018 (12:43:15 CEST)
Based on a local causal model of the dynamics of curved discrete spacetime, a causal model of quantum field theory in curved discrete spacetime is described. On the elementary level, space(-time) is assumed to consist of interconnected space points. Each space point is connected to a small discrete set of neighboring space points. The density distribution of the space points and the lengths of the space point connections depend on the distance from the gravitational sources. This leads to curved spacetime in accordance with general relativity. Dynamics of spacetime (i.e., the emergence of space and the propagation of space changes) dynamically assigns "in-connections" and "out-connections" to the affected space points. Emergence and propagation of quantum fields (including particles) are mapped to the emergence and propagation of space changes by utilizing identical paths of in/out-connections. Compatibility with standard quantum field theory (QFT) requests the adjustment of the QFT techniques (e.g., Feynman diagrams, Feynman rules, creation/annihilation operators), which typically apply to three in/out connections, to n > 3 in/out connections. In addition, QFT computation in position space has to be adapted to a curved discrete spacetime.
ARTICLE | doi:10.20944/preprints201805.0100.v1
Subject: Physical Sciences, Particle & Field Physics Keywords: quantum field theory; local causal models; general relativity theory; spacetime models; discrete spacetime; computer simulations
Online: 7 May 2018 (05:45:19 CEST)
Based on a local causal model of the dynamics of curved discrete spacetime, a causal model of quantum field theory in curved discrete spacetime is described. At the elementary level, space(-time) is assumed to consist of interconnected space points. Each space point is connected to a small discrete set of neighboring space points. The density distribution of the space points and the lengths of the space point connections depend on the distance from the gravitational sources. This leads to curved spacetime in accordance with general relativity. Dynamics of spacetime (i.e., the emergence of space and the propagation of space changes) dynamically assigns "in-connections" and "out-connections" to the affected space points. Emergence and propagation of quantum fields (including particles) are mapped to the emergence and propagation of space changes by utilizing identical paths of in/out-connections. Compatibility with standard quantum field theory (QFT) requests the adjustment of the QFT techniques (e.g., Feynman diagrams, Feynman rules, creation/annihilation operators), which typically apply to three in/out connections, to n > 3 in/out connections. In addition, QFT computation in position space has to be adapted to a curved discrete spacetime.
ARTICLE | doi:10.20944/preprints201802.0150.v1
Subject: Mathematics & Computer Science, Analysis Keywords: discrete inverse Sumudu transform; Whittaker equation; Zettl equation; Gauss hypergeometric series and modified Struve function
Online: 24 February 2018 (07:31:17 CET)
Inverse Sumudu transform multiple shifting properties are used to design a methodology for solving ordinary differential equations. The algorithm is then applied to the Whittaker and Zettl equations to obtain new exact solutions, whose profiles are shown through Maple complex plots. A table of inverse Sumudu transforms of elementary functions is given to support solving differential equations with the inverse Sumudu transform.
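The Sumudu transform itself is S[f](u) = ∫₀^∞ f(ut) e^{-t} dt, so table entries such as S[tⁿ] = n! uⁿ can be checked numerically with Gauss–Laguerre quadrature; this sketch illustrates the forward transform only, not the paper's inverse-transform shifting methodology:

```python
import numpy as np

# Sketch: numerical Sumudu transform S[f](u) = int_0^inf f(u t) e^{-t} dt
# via Gauss-Laguerre quadrature (weight e^{-t} is built into the rule).
x, w = np.polynomial.laguerre.laggauss(60)

def sumudu(f, u):
    return float(np.sum(w * f(u * x)))
```

Because the 60-point rule integrates polynomials against e^{-t} exactly up to high degree, S[t](u) = u and S[t²](u) = 2u² come out to machine precision.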
ARTICLE | doi:10.20944/preprints202103.0393.v1
Subject: Engineering, Automotive Engineering Keywords: Carbon anode production; Crack generation; Discrete element method; Failure analysis; Second-order work criterion; Strain localization
Online: 15 March 2021 (13:53:17 CET)
An in-depth study of the failure of granular materials, which is known as a mechanism to generate defects, can reveal the facts about the origin of the imperfections such as cracks in the carbon anodes. The initiation and propagation of the cracks in the carbon anode, especially the horizontal cracks below the stub-holes, reduce the anode efficiency during the electrolysis process. In order to avoid the formation of cracks in the carbon anodes, the failure analysis of coke aggregates can be employed to determine the appropriate recipe and operating conditions. In this paper, it will be shown that a particular failure mode can be responsible for the crack generation in the carbon anodes. The second-order work criterion is employed to analyze the failure of the coke aggregate specimens and the relationships between the second-order work, the kinetic energy, and the instability of the granular material are investigated. In addition, the coke aggregates are modeled by exploiting the discrete element method (DEM) to reveal the micro-mechanical behavior of the dry coke aggregates during the compaction process. The optimal number of particles required for the failure analysis in the DEM simulations is determined. The effects of the confining pressure and the strain rate as two important compaction process parameters on the failure are studied. The results reveal that increasing the confining pressure enhances the probability of the diffusing mode of the failure in the specimen. On the other hand, the increase of strain rate augments the chance of the strain localization mode of the failure in the specimen.
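The second-order work criterion reduces, per increment, to the double contraction d²W = dσ : dε of the stress and strain increment tensors, with vanishing or negative values signalling instability; a hedged sketch with toy increments (not DEM output from the coke-aggregate simulations):

```python
import numpy as np

# Sketch: Hill's second-order work d2W = dsigma : deps for one loading
# increment; the increment tensors below are invented illustrative numbers.
def second_order_work(dsigma, deps):
    return float(np.tensordot(dsigma, deps, axes=2))

# hardening-like step: stress and strain increments aligned -> d2W > 0
d2w_stable = second_order_work(np.diag([1.0, 0.5, 0.5]) * 1e3,
                               np.diag([2.0, 1.0, 1.0]) * 1e-4)
# softening-like step: stress decreases while strain keeps growing -> d2W < 0
d2w_unstable = second_order_work(np.diag([-1.0, -0.5, -0.5]) * 1e3,
                                 np.diag([2.0, 1.0, 1.0]) * 1e-4)
```

In the DEM setting the same scalar is evaluated per specimen (or per region) along the compaction path, and its sign change is correlated with the onset of kinetic energy bursts and strain localization.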
ARTICLE | doi:10.20944/preprints202009.0474.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: cancer modeling; combined treatment model; discrete time delay; stability conditions; Lyapunov functionals; linear matrix inequalities (LMIs)
Online: 20 September 2020 (14:38:56 CEST)
We use a systems biology approach to translate the interaction of Bacillus Calmette-Guérin (BCG) + interleukin 2 (IL-2) for the treatment of bladder cancer into a mathematical model. The model is presented as a system of differential equations with the following variables: number of tumor cells, bacterial cells, immune cells, and cytokines involved in the tumor-immune response. This work investigates the delay effect induced by the proliferation of tumor antigen-specific effector cells after the immune system destroys BCG-infected urothelium cells following BCG and IL-2 immunotherapy in the treatment of bladder cancer. For the proposed model, three equilibrium states are found analytically. The stability of all equilibria is analyzed using the method of Lyapunov functionals construction and the method of linear matrix inequalities (LMIs).
ARTICLE | doi:10.20944/preprints202004.0107.v1
Subject: Engineering, General Engineering Keywords: Discrete Multiphysics Modelling; Smoothed Particle Hydrodynamics; Lattice Spring Model; Particle-base method; Aortic Valve; Calcification; Stenosis
Online: 7 April 2020 (13:33:02 CEST)
This study proposes a 3D particle-based (discrete) multiphysics approach for modelling calcification in the aortic valve. Different stages of calcification (from mild to severe) were simulated, and their effect on the cardiac output assessed. The cardiac flow rate decreases with the level of calcification. In particular, there is a critical level of calcification below which the flow rate decreases dramatically. Mechanical stress on the membrane is also calculated. The results show that, as calcification progresses, spots of high mechanical stress appear. First, they concentrate in the regions connecting two leaflets; when severe calcification is reached, they extend to the area at the basis of the valve.
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: probability exponent; multi-server parallel system; discrete time model; arrangement of multiple sets; large deviation theory
Online: 26 December 2019 (10:51:23 CET)
A multi-server parallel system dispatches each incoming job, which contains kn tasks, to n servers. A job is considered computed if all the tasks associated with the job are processed. One job’s tasks can be encoded into at least kn “replicas” such that the job is considered served if any kn replicas finish computation. In this paper, we analyze the random scheduling policy of a multi-server computing system under a discrete time model in terms of the Quality of Exponent (QoE), which is defined as the probability exponent that a typical job can be computed within a given number of time slots. We let kn/n be a constant, assume that any task of any job can be randomly dispatched by a “scheduler” to any server, and that computing each task takes exactly one time slot. We divide the calculation of the probability exponent into two parts, the exponent of the numerator and the exponent of the denominator. For the denominator, we give the almost exact exponent using the Lagrange multiplier method, while for the numerator an upper bound on the exponent is provided. In addition, we express the exponents in terms of information-theoretic quantities and reconsider both in the context of large deviation theory.
ARTICLE | doi:10.20944/preprints202107.0570.v1
Subject: Keywords: Face Detection; Euclidean Distance; Fast Fourier Transformation; Discrete Cosine Transformation; Facial Parts Detection; Frequency domain; Spatial domain
Online: 26 July 2021 (11:47:11 CEST)
Face detection is among the most important tasks in today’s world. Due to chromosomal disorders, a human face sometimes suffers from different abnormalities: for example, one eye is bigger than the other, a cleft face, different chin lengths, variation in nose length, or differing lip length or width. Detecting normal and abnormal faces and facial parts from an input image is currently a challenging task for computer vision. In this research paper, a method is proposed that can detect normal or abnormal faces from a frontal input image. The method uses the Fast Fourier Transform (FFT) and the Discrete Cosine Transform (DCT), combining frequency-domain and spatial-domain analysis, to detect those faces.
ARTICLE | doi:10.20944/preprints202106.0733.v1
Subject: Engineering, Automotive Engineering Keywords: Discrete multiphysics; smooth particle hydrodynamics; Lattice Spring Model; Fluid-structure interaction; particle-based method; Coronary stent; Atherosclerosis
Online: 30 June 2021 (11:55:59 CEST)
Stenting is a common method for treating atherosclerosis. A metal or polymer stent is deployed to open the stenosed artery or vein. After the stent is deployed, the blood flow dynamics influence the mechanics by compressing and expanding the structure. If the stent does not respond properly to the resulting stress, vascular wall injury or re-stenosis can occur. In this work, Discrete Multiphysics is used to study the mechanical deformation of the coronary stent and its relationship with the blood flow dynamics. The major parameters responsible for deforming the stent are sorted in terms of dimensionless numbers, and a relationship between the elastic forces in the stent and the pressure forces in the fluid is established. The blood flow and the stiffness of the stent material contribute significantly to the stent deformation and affect the rate of deformation. The stress distribution in the stent is not uniform, with the higher stresses occurring at the nodes of the structure.
ARTICLE | doi:10.20944/preprints202101.0370.v1
Subject: Social Sciences, Accounting Keywords: consumer preferences; red meat; food consumption; discrete choice experiment (DCE); willingness to pay (WTP); random utility model
Online: 19 January 2021 (10:52:41 CET)
Food consumption in Europe is changing. Red meat consumption has been steadily decreasing in the past decades. The rising interest of consumers in healthier and more sustainable meat products provides red meat producers with the opportunity to differentiate their offers by ecolabels, origin and health claims. This international study analyses European consumer preferences for red meat (beef, lamb and goat) in seven countries: Finland, France, Greece, Italy, Spain, Turkey and the United Kingdom. Through a choice experiment, 2,900 responses were collected. Mixed multinomial logit models were estimated to identify heterogeneous preferences among consumers at the country level. Results indicate substantial differences in the most relevant attributes for the average consumer, as well as in the willingness to pay for them in each country. Nevertheless, national origin and organic labels were highly valued in most countries.
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: discrete degenerate random variables; degenerate binomial random variable; degenerate Poisson random variable; new type degenerate Bell polynomials
Online: 15 November 2019 (16:43:03 CET)
In this paper, we introduce two discrete degenerate random variables, namely the degenerate binomial and degenerate Poisson random variables. We deduce the expectations of the degenerate binomial random variables. We compute the generating function of the moments of the degenerate Poisson random variables, which leads us to define the new type degenerate Bell polynomials, and hence obtain explicit expressions for the moments of those random variables in terms of such polynomials. We also get the variances of the degenerate Poisson random variables. Finally, we illustrate two examples of the degenerate Poisson random variables.
ARTICLE | doi:10.20944/preprints201806.0321.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: fast discrete stockwell transform; cardio-vascular disease; unique ECG signatures; self-organizing maps; sinus rhythm; ECG arrhythmia
Online: 20 June 2018 (10:24:44 CEST)
The diagnosis of cardiovascular diseases (CVDs) is highly dependent on the analysis of ECG signals. ECG analysis can be helpful in estimating the underlying cause and the condition of the heart in cardiac abnormality. The effectiveness of ECG signal analysis in the detection of CVDs is widely accepted by professional healthcare providers. Many algorithms have been proposed, but almost all of them have some kind of limitation, and these limitations largely influence the effectiveness of ECG analysis. This research work is dedicated to the design of a unique self-organizing map (SOM) based neural network for the classification of arrhythmia from a particular ECG signal; the generation of the SOMs is based on certain unique signatures of ECG signals, and the approach has the potential to classify different cardiac conditions. For the extraction of unique features from ECG signals, we propose to use the Fast Discrete Stockwell Transform (FDST). Since the proposed technique combines two different techniques, it is called a hybrid technology. The purpose of using the FDST is to identify unique signatures of ECG signals in a more effective manner than existing methods, as it has several advantages over existing techniques such as wavelet- and Fourier-transform-based methods. Results obtained from the implementation of the technique are capable of visualizing the ECG sinus rhythm and arrhythmia conditions in the form of a unique SOM for each associated arrhythmia condition. This unique SOM-based classification makes the maps ideal for use as a diagnostic tool. This ability to classify arrhythmia using the FDST and SOMs makes the technique unique and useful, providing valuable information about the patient’s condition. The proposed technology may later enable a portable diagnostic tool for monitoring patients on site, which would improve patients’ quality of life by diagnosing their cardiac condition.
ARTICLE | doi:10.20944/preprints201612.0115.v1
Subject: Materials Science, General Materials Science Keywords: discrete element method; hypervelocity impact; debris cloud; fragmentation; space debris; multiscale modeling; computer simulation; high performance computing
Online: 23 December 2016 (10:21:21 CET)
In this paper we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on the HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
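The core DEM ingredient described above, spherical particles interacting via pairwise potentials under an energy-conserving integrator, can be sketched as follows. The linear repulsive potential, the velocity-Verlet parameters and the two-particle setup are illustrative assumptions, not the paper's calibrated material model:

```python
import math

def pair_force(dx, dy, dz, k=1.0, r0=1.0):
    """Linear repulsive spring force when two particles overlap (r < r0)."""
    r = math.sqrt(dx*dx + dy*dy + dz*dz)
    if r >= r0 or r == 0.0:
        return (0.0, 0.0, 0.0)
    f = k * (r0 - r) / r          # magnitude per unit of the separation vector
    return (f*dx, f*dy, f*dz)

def step(pos, vel, dt=0.01, mass=1.0):
    """One velocity-Verlet step over all particle pairs (energy-conserving)."""
    n = len(pos)
    def forces(p):
        acc = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                fx, fy, fz = pair_force(p[i][0]-p[j][0], p[i][1]-p[j][1], p[i][2]-p[j][2])
                for a, f in zip(range(3), (fx, fy, fz)):
                    acc[i][a] += f / mass   # equal and opposite forces:
                    acc[j][a] -= f / mass   # momentum is conserved exactly
        return acc
    a0 = forces(pos)
    pos = [[p[a] + vel[i][a]*dt + 0.5*a0[i][a]*dt*dt for a in range(3)]
           for i, p in enumerate(pos)]
    a1 = forces(pos)
    vel = [[vel[i][a] + 0.5*(a0[i][a] + a1[i][a])*dt for a in range(3)] for i in range(n)]
    return pos, vel

# Two overlapping particles repel each other along the x axis.
pos, vel = [[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]], [[0.0]*3, [0.0]*3]
for _ in range(100):
    pos, vel = step(pos, vel)
separation = pos[1][0] - pos[0][0]
```

In the paper's setting the same pairwise-potential loop runs over millions of particles discretizing the sphere and the plate; the sketch only shows the interaction and integration pattern.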
ARTICLE | doi:10.20944/preprints202106.0197.v1
Subject: Physical Sciences, Acoustics Keywords: confinement; discrete reference frame; still more general principle of relativity; quantum-information conservation; symmetries of information; the ether
Online: 8 June 2021 (08:55:31 CEST)
The paper considers the symmetries of a bit of information corresponding to one, two or three qubits of quantum information and identifiable as the three basic symmetries of the Standard Model, U(1), SU(2), and SU(3), respectively. They refer to “empty qubits” (or the free variable of quantum information), i.e. those in which no point is chosen (recorded). The choice of a certain point violates those symmetries. It can be represented, furthermore, as the choice of a privileged reference frame (e.g. that of the Big Bang), which can be described exhaustively by means of 16 numbers (4 for position, 4 for velocity, and 8 for acceleration) independently of time, but in the space-time continuum, and one more, 17th number is necessary for the rest mass of the observer in it. The same 17 numbers, describing exhaustively a privileged reference frame thus granted to be “zero” (respectively, a certain violation of all three symmetries of the Standard Model, or the “record” in a qubit in general), can be represented as 17 elementary wave functions (or classes of wave functions) after the bijection of natural and transfinite natural (ordinal) numbers in Hilbert arithmetic, and further identified as those corresponding to the 17 elementary particles of the Standard Model. Two generalizations of the relevant concepts of general relativity are introduced: (1) a “discrete reference frame”, generalizing the class of all arbitrarily accelerated reference frames constituting a smooth manifold; (2) a “still more general principle of relativity”, generalizing the general principle of relativity and meaning the conservation of quantum information across all discrete reference frames, as well as across the smooth manifold of all reference frames of general relativity.
Then, the bijective transition from an accelerated reference frame to the 17 elementary wave functions of the Standard Model can be interpreted, by the still more general principle of relativity, as the equivalent redescription of a privileged reference frame: a smooth one into a discrete one. The conservation of quantum information related to the generalization of the concept of reference frame can be interpreted as restoring the concept of the ether, the absolutely immovable medium and reference frame of Newtonian mechanics, relative to which motion can be interpreted as absolute, or logically: relations as properties. The new ether is to consist of qubits (or quantum information). One can track the conceptual pathway of the “ether” from Newtonian mechanics via special relativity, general relativity and quantum mechanics to the theory of quantum information (or “quantum mechanics and information”). The identification of entanglement and gravity can also be considered a “byproduct” implied by the transition from the smooth “ether” of special and general relativity to the “flat” ether of quantum mechanics and information. The qubit ether is outside the “temporal screen” in general and is depicted on it as both matter and energy, both dark and visible.
ARTICLE | doi:10.20944/preprints201911.0215.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Discrete Fracture Network (DFN); fractured rock hydrology; Boundary Element Method (BEM); Domain Decomposition Method (DDM); subsurface fluid flow
Online: 19 November 2019 (02:55:27 CET)
Modeling fluid flow in three-dimensional (3D) Discrete Fracture Networks (DFNs) is of relevance in many engineering applications, such as hydraulic fracturing, oil/gas production, geothermal energy extraction, nuclear waste disposal and CO2 sequestration. A new Boundary Element Method (BEM) technique with discontinuous quadratic elements and a parallel Domain Decomposition Method (DDM) is presented herein for the simulation of steady-state fluid flow in 3D DFN systems with wellbores, consisting of planar fractures having arbitrary properties and wellbore trajectories. Numerical examples characterized by DFNs of increasing complexity are investigated to evaluate the accuracy and efficiency of the presented technique. The results show that accurate solutions can be obtained with fewer nodes than with mesh-based methods (e.g. the Finite Element Method). In addition, the DDM algorithm used provides quite fast convergence. The simulation results for the fluid flow around intersections among traces (linear intersections between fractures), intersections between traces and fracture boundaries, and wellbore intersections are accurate. Source code is available at: https://github.com/BinWang0213/PyDFN3D.
ARTICLE | doi:10.20944/preprints201810.0687.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Discrete Wavelet Transform (DWT); Adaptive Neuro-Fuzzy Inference System (ANFIS); Fuzzy Logic system (FLS); High Impedance Fault (HIF).
Online: 29 October 2018 (13:44:25 CET)
This paper presents a method to detect and classify the high impedance faults that occur in medium-voltage distribution networks using the discrete wavelet transform (DWT) and an adaptive neuro-fuzzy inference system (ANFIS). The network is designed using Matlab software, and various faults, such as high impedance, symmetrical and unsymmetrical faults, have been applied to study the effectiveness of the proposed ANFIS classifier method. This is achieved by training the ANFIS classifier using the features (standard deviation values) extracted from the three-phase fault current signal by the DWT technique for various fault cases with different values of fault resistance in the system. The success and discrimination rates obtained for identifying and classifying the high impedance fault with the proposed method are 100%, whereas the values are 66.7% and 85%, respectively, for the conventional fuzzy-based approach. The results indicate that the proposed method is more efficient in identifying and accurately discriminating the high impedance fault from other power system faults in the system.
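The feature-extraction stage described above can be sketched in a self-contained way. The sketch below assumes a single-level Haar transform as a stand-in for the multi-level db4 DWT the paper uses, and takes the standard deviation of the detail coefficients as the feature fed to the classifier; the signals are synthetic illustrations:

```python
import math
import statistics

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[2*i] + signal[2*i+1]) / s for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i+1]) / s for i in range(len(signal) // 2)]
    return approx, detail

def std_feature(signal):
    """Standard deviation of the detail band: large for abrupt fault transients."""
    _, detail = haar_dwt(signal)
    return statistics.pstdev(detail)

# A smooth 50 Hz current vs. the same current with a high-frequency fault burst,
# sampled at 1 kHz.
t = [i / 1000.0 for i in range(512)]
healthy = [math.sin(2*math.pi*50*x) for x in t]
faulty  = [y + (0.5*math.sin(2*math.pi*400*x) if 0.2 < x < 0.3 else 0.0)
           for x, y in zip(t, healthy)]

f_healthy = std_feature(healthy)
f_faulty  = std_feature(faulty)
```

The fault transient inflates the detail-band standard deviation, which is exactly why such features separate fault cases for a downstream classifier.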
ARTICLE | doi:10.20944/preprints202110.0414.v1
Subject: Keywords: Chattering reduction; discrete-time sliding mode control; magnetic levitation system; multirate output feedback; robust control; sliding mode control (SMC)
Online: 27 October 2021 (13:33:34 CEST)
This paper presents three types of sliding mode controllers for a magnetic levitation system. First, a proportional-integral sliding mode controller (PI-SMC) is designed using a new switching surface and a proportional plus power rate reaching law. The PI-SMC is more robust than a feedback linearization controller in the presence of mismatched uncertainties and outperforms the SMC schemes reported recently in the literature in terms of the convergence rate and settling time. Next, to reduce the chattering phenomenon in the PI-SMC, a state feedback-based discrete-time SMC algorithm is developed. However, the disturbance rejection ability is compromised to some extent. Furthermore, to improve the robustness without compromising the chattering reduction benefits of the discrete-time SMC, mismatched uncertainties like sensor noise and track input disturbance are incorporated in a robust discrete-time SMC design using multirate output feedback (MROF). With this technique, it is possible to realize the effect of a full-state feedback controller without incurring the complexity of a dynamic controller or an additional discrete-time observer. Also, the MROF-based discrete-time SMC strategy can stabilize the magnetic levitation system with excellent dynamic and steady-state performance and superior robustness in the presence of mismatched uncertainties. The stability of the closed-loop system under the proposed controllers is proved using Lyapunov stability theory. The simulation results and analytical comparisons demonstrate the effectiveness and robustness of the proposed control schemes.
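For reference, the proportional plus power rate reaching law mentioned above has a standard generic form in the SMC literature; the gains and exponent below are generic symbols, not the paper's tuned values:

```latex
% Generic proportional plus power rate reaching law for sliding variable s:
\dot{s} \;=\; -\,k_1\, s \;-\; k_2\,\lvert s\rvert^{\alpha}\,\operatorname{sgn}(s),
\qquad k_1,\, k_2 > 0,\quad 0 < \alpha < 1 .
```

The proportional term dominates far from the surface (fast reaching), while the power rate term softens the control action near s = 0, which is the chattering-reduction mechanism the abstract alludes to.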
ARTICLE | doi:10.20944/preprints202106.0029.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: discrete-structured population; matrix population model; population projection matrices; calibration; net reproductive rate; reproductive uncertainty; colony excavation; Diophantine systems
Online: 1 June 2021 (11:49:45 CEST)
The notion of a potential-growth indicator came into being in the field of matrix population models long ago, almost simultaneously with the pioneering Leslie model for age-structured population dynamics, although the term was coined and the theory developed only in recent years. The indicator represents an explicit function, R(L), of the elements of the matrix L and indicates the position of the spectral radius of L relative to 1 on the real axis, thus signifying population growth, decline, or stabilization. Some indicators have turned out to be useful in theoretical layouts and practical applications prior to calculating the spectral radius itself. The most senior (1994) and popular indicator, R0(L), is known as the net reproductive rate, and we consider two more, R1(L) and RRT(L), developed later on. The three differ in their simplicity and level of generality, and we illustrate them with a case study of Calamagrostis epigeios, a long-rhizome perennial weed actively colonizing open spaces in the temperate zone. While R0(L) and R1(L) fail because of complexity and insufficient generality, respectively, RRT(L) succeeds, justifying the merit of indication.
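The indicator idea can be sketched with a minimal example, assuming an illustrative 2-stage Leslie-type model L = T + F (T: transitions/survival, F: fecundity; all numbers invented for illustration). The net reproductive rate R0, the spectral radius of F(I − T)⁻¹, lies on the same side of 1 as the spectral radius of L, so it signals growth or decline without computing ρ(L) itself:

```python
import math

f1, f2 = 0.5, 2.0   # stage fecundities (illustrative)
s = 0.4             # survival from stage 1 to stage 2 (illustrative)
# L = [[f1, f2], [s, 0]];  T = [[0, 0], [s, 0]];  F = [[f1, f2], [0, 0]]

# For this structure, F (I - T)^{-1} = [[f1 + s*f2, f2], [0, 0]], hence:
R0 = f1 + s * f2

# Spectral radius of the 2x2 matrix L, from its characteristic equation
# lambda^2 - f1*lambda - s*f2 = 0:
rho_L = (f1 + math.sqrt(f1*f1 + 4*s*f2)) / 2.0

grows_by_R0  = R0 > 1.0      # indicator verdict
grows_by_rho = rho_L > 1.0   # direct spectral-radius verdict
```

Here R0 = 1.3 and ρ(L) ≈ 1.18: both exceed 1, so the indicator and the spectral radius agree that the population grows, which is the whole point of using the cheaper explicit function.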
Subject: Behavioral Sciences, Clinical Psychology Keywords: involuntary memories; causal logic and semiotical logic; unconscious; mathematical model of the mind-matter relation; idiotope; category; discrete cofibration
Online: 14 February 2020 (11:44:16 CET)
Using classical clinical observations, we first outline an elementary conceptual model for the Mind Representation System, then move to a more elaborate mathematical model that refers to discrete cofibration with enriched fibers.
ARTICLE | doi:10.20944/preprints201805.0143.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: depth-image-based rendering (DIBR); 3D content; curvelet transform; 1D-discrete cosine transform (1D-DCT); template watermark; DIBR watermarking
Online: 9 May 2018 (09:00:10 CEST)
Several depth image based rendering (DIBR) watermarking methods have been proposed, but they have various drawbacks, such as non-blindness, low imperceptibility, and vulnerability to signal or geometric distortion. This paper proposes a template-based DIBR watermarking method that overcomes the drawbacks of previous methods. The proposed method exploits two properties to resist DIBR attacks: pixels are only moved horizontally by DIBR, and smaller blocks are not distorted by DIBR. The one-dimensional (1D) discrete cosine transform (DCT) and curvelet domains are adopted to utilize these two properties. A template is inserted in the curvelet domain to restore the synchronization error caused by geometric distortion. A watermark is inserted in the 1D DCT domain to embed and detect a message in the DIBR image. Experimental results of the proposed method show high imperceptibility and robustness against various attacks, such as signal and geometric distortions. The proposed method is also robust to DIBR distortion and DIBR configuration adjustments, such as depth image preprocessing and baseline distance adjustment.
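The first property the method exploits (DIBR moves pixels only horizontally) is what makes a row-wise 1D DCT natural: a horizontal shift perturbs only the coefficients of the affected row. A minimal sketch with an orthonormal DCT-II and a toy image follows; the paper's actual embedding rule is not reproduced here:

```python
import math

def dct_1d(row):
    """Orthonormal 1D DCT-II of one image row."""
    n = len(row)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(row[i] * math.cos(math.pi * (i + 0.5) * k / n)
                           for i in range(n)))
    return out

def rowwise_dct(image):
    """Transform each row independently, mirroring DIBR's horizontal-only moves."""
    return [dct_1d(row) for row in image]

# Toy 4x8 "image".
image = [[float((x + y) % 7) for x in range(8)] for y in range(4)]
coeffs = rowwise_dct(image)

# A horizontal (circular) shift of one row changes only that row's coefficients.
shifted = [row[:] for row in image]
shifted[1] = shifted[1][1:] + shifted[1][:1]
coeffs_shifted = rowwise_dct(shifted)
unchanged_rows = [r for r in range(4) if coeffs[r] == coeffs_shifted[r]]
```

Because the distortion stays confined to individual rows, a watermark spread over row-wise 1D DCT coefficients degrades gracefully under DIBR, unlike one embedded in a 2D transform of the whole image.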
ARTICLE | doi:10.20944/preprints201910.0148.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: static synchronous compensator (STATCOM); discrete wavelet transform (DWT); multi-layer perceptron neural network (MLP); Bayes and Naive Bayes (NB) classifier
Online: 13 October 2019 (16:22:41 CEST)
This paper presents a methodology to detect and identify the type of fault that occurs in a transmission line with a shunt-connected static synchronous compensator (STATCOM) using a combination of the Discrete Wavelet Transform (DWT) and a Naive Bayes classifier. To study this, the network model is designed using Matlab/Simulink. Different faults, such as Line to Ground (LG), Line to Line (LL), Double Line to Ground (LLG) and three-phase (LLLG) faults, are applied at different zones of the system, with and without STATCOM, considering the effect of varying fault resistance. The three-phase fault current waveforms obtained are decomposed into several levels using the Daubechies db4 mother wavelet to extract features such as standard deviation and energy values. The extracted features are used to train classifiers, namely the Multi-Layer Perceptron neural network (MLP), Bayes and Naive Bayes (NB) classifiers, to classify the type of fault that occurs in the system. The results reveal that the proposed NB classifier outperforms the MLP and Bayes classifiers in terms of accuracy rate, misclassification rate, kappa statistics, mean absolute error (MAE), root mean square error (RMSE), relative absolute error (RAE) and root-relative squared error (RRSE).
ARTICLE | doi:10.20944/preprints202005.0048.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Arena software; Discrete event simulation; Design of simulation experiment; Metamodeling; Regression metamodel; Simulation modeling; NYTIL; Resolution V design; Experimental design; Throughput
Online: 4 May 2020 (10:03:37 CEST)
Today, the competitive advantage of ready-made garment industries depends on the ability to improve the efficiency and effectiveness of resource utilization. Ready-made garment industries have historically adopted fewer technological and process advancements compared to the automotive, electronics and semiconductor industries. Simulation modeling of garment assembly line systems has attracted a number of researchers as one way to gain insight into system behaviour and improve its performance. However, most simulation studies have considered ill-defined experimental designs which cannot fully explore the assembly line design alternatives and do not uncover the interaction effects of the input variables. Simulation metamodeling is an approach to assembly line design which has recently been of interest to many researchers. However, its application in garment assembly line design has never been well explored. In this paper, simulation metamodeling of a trouser assembly line with 72 operations is demonstrated. The linear regression metamodel technique with a resolution-V design was used. The effects of five factors (bundle size, job release policy, task assignment pattern, machine number and helper number) on the production throughput of the trouser assembly line were studied. An increase of 28.63% in production throughput was achieved for the best factor settings of the metamodel.
ARTICLE | doi:10.20944/preprints201712.0026.v2
Subject: Engineering, Electrical & Electronic Engineering Keywords: fault diagnosis; condition monitoring; short time Fourier transform; slepian window; prolate spheroidal wave functions; discrete prolate spheroidal sequences; time-frequency distributions
Online: 6 December 2017 (05:34:29 CET)
The aim of this paper is to introduce a new methodology for the fault diagnosis of induction machines working in transient regime when time-frequency analysis tools are used. The proposed method relies on the use of the optimized Slepian window for performing the short-time Fourier transform (STFT) of the stator current signal. It is shown that, for a given sequence length of finite duration, the Slepian window has the maximum concentration of energy, greater than can be reached with a gated Gaussian window, which is usually used as the analysis window. In this paper the use and optimization of the Slepian window for fault diagnosis of induction machines are theoretically introduced and experimentally validated through the test of a 3.15 MW induction motor with broken bars during the start-up transient. The theoretical analysis and the experimental results show that the use of the Slepian window can highlight the fault components in the current's spectrogram with a significant reduction of the required computational resources.
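The energy-concentration criterion behind the window choice can be illustrated numerically. The sketch below compares a rectangular and a Hann window as stand-ins (the paper's optimized Slepian/DPSS window maximizes exactly this in-band concentration and is available in practice as scipy.signal.windows.dpss), using a plain zero-padded DFT; all sizes are illustrative:

```python
import math
import cmath

def band_concentration(window, pad=16, band_bins=4):
    """Fraction of total spectral energy within +/- band_bins (unpadded) DFT
    bins of DC, computed from a zero-padded DFT of the window itself."""
    n = len(window) * pad
    spec = [abs(sum(w * cmath.exp(-2j * math.pi * k * i / n)
                    for i, w in enumerate(window)))**2
            for k in range(n)]
    total = sum(spec)
    half = band_bins * pad
    in_band = sum(spec[k] for k in range(n) if k <= half or k >= n - half)
    return in_band / total

m = 32
rect = [1.0] * m
hann = [0.5 - 0.5 * math.cos(2 * math.pi * i / (m - 1)) for i in range(m)]

conc_rect = band_concentration(rect)
conc_hann = band_concentration(hann)
```

The tapered window leaks far less energy into distant bins than the rectangular one; the Slepian window pushes this concentration to its theoretical maximum for a chosen bandwidth, which is why it separates closely spaced fault components in the spectrogram.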
Subject: Social Sciences, Accounting Keywords: parcel locker; last mile delivery; home delivery; City Logistics; urban freight transport; stated preference; discrete choice modelling; consumer behaviour; e-commerce; channel choice; collection points
Online: 28 May 2021 (12:23:05 CEST)
The surge in e-commerce sales represents a huge challenge for urban freight transport. Parcel lockers constitute a valid solution for addressing the challenges that home deliveries imply. In fact, eliminating courier-consumer contact (also relevant for health-related issues, as made evident by the COVID-19 pandemic) and delivering to a few predefined places might substantially help cope with missed deliveries. Furthermore, this option enables consolidated shipping and reduces delivery trip costs. This paper analyses and compares consumers' preferences for alternative collection strategies. It investigates home delivery vs parcel locker use and forecasts their future market shares. This is performed based on both customers' socio-economic variables and the attributes characterising these alternative logistic fulfilment strategies. The case study rests upon a stated preference survey deployed in the city of Rome. The investigation specifically targets young people (i.e., the population under 30 years of age), since they represent early adopters. Discrete choice models allow both quantifying the monetary value of parcel locker attributes (i.e., willingness-to-pay measures) and estimating the potential demand for this innovative delivery scheme. Results show that distance and accessibility are the main choice determinants. Furthermore, there is an overall high propensity to adopt parcel lockers. This research can support policymakers when implementing such solutions.
REVIEW | doi:10.3390/sci2040076
Subject: Keywords: academic journals; publishing; seal of approval; impact factor; h-index; anonymous refereeing; continuous and discrete frequency of publications; avoidance of time wasting; seeking adventure; open access; academic publishing as a continuous dynamic process; improving research after publication; internet
Online: 15 October 2020 (00:00:00 CEST)
Many academics are critical of the current publishing system, but it is difficult to create a better alternative. This review relates to the Sciences and Social Sciences, and discusses the primary purpose of academic journals as providing a seal of approval for perceived quality, impact, significance, and importance. The key issues considered include the role of anonymous refereeing, continuous rather than discrete frequency of publications, avoidance of time wasting, and seeking adventure. Here we give recommendations about the organization of journal articles, the roles of associate editors and referees, measuring the time frame for refereeing submitted articles in days and weeks rather than months and years, encouraging open access internet publishing, emphasizing the continuity of publishing online, academic publishing as a continuous dynamic process, and how to improve research after publication. Citations and functions thereof, such as the journal impact factor and h-index, are the benchmark for evaluating the importance and impact of academic journals and published articles. Even in the very top journals, a high proportion of published articles are never cited, not even by the authors themselves. Top journal publications do not guarantee that published articles will make significant contributions, or that they will ever be highly cited. The COVID-19 world should encourage academics worldwide not only to rethink academic teaching, but also to re-evaluate key issues associated with academic journal publishing in the future.