The Euler-Type Universal Numerical Integrator (E-TUNI) with Backward Integration


Submitted: 20 December 2024. Posted: 23 December 2024.


Abstract
The Euler-Type Universal Numerical Integrator (E-TUNI) is a discrete numerical structure that couples a first-order Euler-type numerical integrator with a feed-forward neural network architecture. E-TUNI can therefore be used to model non-linear dynamic systems when the real-world plant's analytical model is unknown. The discrete solution provided by E-TUNI can be propagated either forward or backward in time. In this article, we use E-TUNI in a backward integration framework to model autonomous non-linear dynamic systems. Three case studies, including the dynamics of the non-linear inverted pendulum, were developed to validate the proposed model numerically and computationally.

1. Introduction

The Euler-Type Universal Numerical Integrator (E-TUNI) is a particular case of a Universal Numerical Integrator (UNI). In turn, the UNI is the coupling of a conventional numerical integrator (e.g., Euler, Runge-Kutta, Adams-Bashforth, Predictive-Corrector, among others) with some kind of universal approximator of functions (e.g., MLP neural networks, SVM, RBF, wavelets, fuzzy inference systems, among others). Furthermore, UNIs generally work exclusively with neural networks with feed-forward architecture through supervised learning with input/output patterns.
In [1], the first work involving the mathematical modeling of an artificial neuron is presented. In [2,3], it is mathematically demonstrated, independently, that feed-forward neural networks with Multi-Layer Perceptron (MLP) architecture are universal approximators of functions. For the mathematical demonstration of the universality of feed-forward networks, a crucial starting point is the Stone-Weierstrass theorem [4]. An interesting work on the universality of neural networks involving the ReLU neuron can be found in [5]. In summary, many neural architectures were developed in the last half of the twentieth century, involving shallow neural networks [6]. On the other hand, in this century, many architectures of deep neural networks demonstrated the great power of neural networks [7].
Artificial neural networks thus solve an extensive range of computational problems. In this article, however, we restrict the application of feed-forward neural networks to mathematical problems involving the modeling of autonomous non-linear dynamic systems governed by a system of ordinary differential equations. With neural networks, it is possible to solve the inverse problem of modeling dynamical systems: given the solution of the dynamical system, obtain the instantaneous derivative functions or the mean derivative functions of the system.
The starting point for the proper design of a Universal Numerical Integrator (UNI) is the conventional numerical integrators [8,9,10,11,12,13,14,15]. In [16], a qualitative and very detailed description of the design of a UNI is presented. Also, in [16], an appropriate classification is given for the various types of Universal Numerical Integrators (UNIs) found in the literature.
According to these authors, the UNIs can be divided into three large groups, giving rise to three different methodologies, namely: (i) the NARMAX methodology (Nonlinear Auto Regressive Moving Average with eXogenous inputs); (ii) the instantaneous derivatives methodology (e.g., Runge-Kutta neural network, Adams-Bashforth neural network, Predictive-Corrector neural network, among others); and (iii) the mean derivatives methodology, or E-TUNI. For a detailed understanding of the similarities and differences between the existing UNI types, see again [16].
In practice, the leading scientific works involving the proper design of a UNI are: (i) the NARMAX methodology in [17,18,19,20,21]; (ii) the Runge-Kutta neural network in [22,23,24,25]; (iii) the Adams-Bashforth neural network in [26,27,28]; and (iv) the E-TUNI in [29,30,31,32,33,34]. For this article, it should be noted that E-TUNI works exclusively with mean derivative functions instead of instantaneous derivative functions. Furthermore, the NARMAX and mean derivative methodologies use a fixed integration step in the simulation phase, while the instantaneous derivative methodologies allow a variable integration step in the simulation phase [16]. All of them, however, use a fixed step in the training phase.
In this paragraph, we briefly describe the existing work on E-TUNI. Reference [29] is the doctoral thesis that gave rise to E-TUNI. Reference [30] is a technological application of E-TUNI using neural networks with MLP architecture. Reference [32] is a technological application of E-TUNI using fuzzy logic and genetic algorithms; there, the fuzzy membership functions are adjusted automatically by genetic algorithms through supervised learning with input/output patterns. In [34], E-TUNI is designed with an MLP neural architecture and later applied in a predictive control structure.
However, none of the previous works used E-TUNI with backward integration. Therefore, this article’s originality lies in using E-TUNI in a backward integration process coupled with a shallow neural network with MLP architecture. As far as we know, this has not yet been done.
This article is divided as follows: Section 2 briefly describes the basic structure of a Universal Numerical Integrator (UNI). Section 3 describes the detailed mathematical development of E-TUNI, designed with backward integration. Section 4 presents the numerical and computational results validating the proposed model. Section 5 presents the main conclusions of this work.

2. Universal Numerical Integrator (UNI)

Figure 1 illustrates a general scheme for adequately designing a Universal Numerical Integrator (UNI). Notice in Figure 1 that, when one couples a conventional numerical integrator (e.g., Euler, Runge-Kutta, Adams-Bashforth, Predictive-Corrector, among others) to some kind of universal approximator of functions (e.g., MLP neural networks, RBF, SVM, fuzzy inference systems, among others), the UNI can achieve high numerical accuracy through supervised learning using input/output training patterns. This allows for solving real-world problems involving the modeling of non-linear dynamic systems.
As previously mentioned, a Universal Numerical Integrator (UNI) can be of three types, thus giving rise to three distinct methodologies, namely: (i) the NARMAX model, (ii) the instantaneous derivatives methodology (e.g., Runge-Kutta neural network, Adams-Bashforth neural network, Predictive-Corrector neural network, among others), and (iii) the mean derivatives methodology, or E-TUNI. The three types of universal numerical integrators are equally precise: since the Artificial Neural Network (ANN) is a universal approximator of functions [2,3,4,5], every UNI can have its approximation error made as close to zero as one likes.
To fully understand the difference between the instantaneous and the mean derivatives methodologies, see Figure 2, which also presents the geometric difference between the instantaneous and mean derivative functions. The mean derivative is the slope of the secant line between two distinct points of a mathematical function, while the instantaneous derivative is the slope of the tangent line at a point of a mathematical function.
The mean derivatives methodology occurs when a universal approximator of functions is coupled to the Euler-type first-order integrator. In this case, the first-order integrator would have a substantial error if instantaneous derivative functions were used. However, when the mean derivative functions replace the instantaneous derivative functions, the integrator error is reduced to zero [16,30,31]. In this way, it is possible to train an artificial neural network to learn the mean derivative functions through two possible approaches [16]: (i) the direct approach and (ii) the indirect or empirical approach.
Figure 3 schematically represents the direct approach, and Figure 4 schematically represents the indirect or empirical approach. In the direct approach, the artificial neural network is trained separately from the first-order integrator; in the indirect or empirical approach, it is trained while coupled to the first-order integrator. Both approaches are equivalent to each other. However, in the indirect or empirical approach, the back-propagation algorithm must be recalculated, whereas in the direct approach this is unnecessary [16,29].
This difference between the direct and indirect approach also applies to the instantaneous derivatives methodology. However, in this case, the direct approach only applies to theoretical models, as instantaneous derivative functions cannot be determined directly in real-world problems [16]. Furthermore, this difference between the direct and indirect approach does not apply to the NARMAX model. However, the NARMAX model is also a particular case of UNI. This is because, although a conventional integrator is not coupled to the NARMAX model, the universal approximator of functions behaves as if it were a numerical integrator [16].
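As a concrete illustration of the direct approach, the sketch below (our own illustrative Python, not code from the cited works; the trajectory array, the regressor choice, and all identifiers are assumptions for the example) builds the secant-slope targets from a sampled trajectory and fits a feed-forward network to them, entirely decoupled from the integrator, following the scheme of Figure 3:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def mean_derivative_targets(Y, dt):
    """Build (state -> mean derivative) training pairs from a trajectory.

    Y  : (N, n) array, one row per sampled time step.
    dt : the fixed sampling/integration step.
    """
    X = Y[:-1]                  # states at step k
    T = (Y[1:] - Y[:-1]) / dt   # secant slopes toward step k+1
    return X, T

def train_direct(Y, dt):
    # Direct approach: the network learns the mean derivatives on its
    # own; it is coupled to the first-order integrator only afterwards.
    X, T = mean_derivative_targets(Y, dt)
    net = MLPRegressor(hidden_layer_sizes=(30,), activation="tanh",
                       max_iter=20000, tol=1e-12)
    return net.fit(X, T)
```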

3. The Euler-Type Universal Numerical Integrator (E-TUNI)

Equation (1) represents a system of $n$ autonomous non-linear ordinary differential equations with dependent variables $y_1, y_2, \ldots, y_n$. Based on it, it is possible to establish an initial value problem for a first-order system, also described by (1).
$$\begin{aligned}
\dot{y}_1 &= f_1(y_1, y_2, \ldots, y_n), & y_1(a) &= \eta_1 \\
\dot{y}_2 &= f_2(y_1, y_2, \ldots, y_n), & y_2(a) &= \eta_2 \\
&\;\;\vdots \\
\dot{y}_n &= f_n(y_1, y_2, \ldots, y_n), & y_n(a) &= \eta_n
\end{aligned} \tag{1}$$
The solution of the differential equation (1) can be obtained through a first-order approximation. Such a solution was obtained by the mathematician Leonhard Euler in 1768 and, keeping the convention described by Figure 2, is given by equation (2).
$$\begin{aligned}
{}_{k+1}y_1^i &\approx \tan\,{}_k\theta_1^i \cdot \Delta t + {}_k y_1^i \\
{}_{k+1}y_2^i &\approx \tan\,{}_k\theta_2^i \cdot \Delta t + {}_k y_2^i \\
&\;\;\vdots \\
{}_{k+1}y_n^i &\approx \tan\,{}_k\theta_n^i \cdot \Delta t + {}_k y_n^i
\end{aligned} \tag{2}$$
However, the accuracy of equation (2) is notoriously poor. If the instantaneous derivative functions are replaced by the mean derivative functions, the Euler-type first-order integrator becomes exact [16,29,31]. This new solution is given by equation (3).
$$\begin{aligned}
{}_{k+1}y_1^i &= \tan_{\Delta t}\,{}_k\alpha_1^i \cdot \Delta t + {}_k y_1^i \\
{}_{k+1}y_2^i &= \tan_{\Delta t}\,{}_k\alpha_2^i \cdot \Delta t + {}_k y_2^i \\
&\;\;\vdots \\
{}_{k+1}y_n^i &= \tan_{\Delta t}\,{}_k\alpha_n^i \cdot \Delta t + {}_k y_n^i
\end{aligned} \tag{3}$$
Notice that equation (3) is straightforward to obtain, as the mean derivative functions are easily computable for real-world problems via equation (4), for $j = 1, 2, \ldots, n$. Here, the variable $n$ is the total number of state variables of the dynamical system given by (1).
$$\tan_{\Delta t}\,{}_k\alpha_j^i = \frac{{}_{k+1}y_j^i - {}_k y_j^i}{\Delta t} \tag{4}$$
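As a hypothetical numerical check of equations (3) and (4): if ${}_k y_j^i = 1.0$, ${}_{k+1} y_j^i = 1.2$, and $\Delta t = 0.5$, then $\tan_{\Delta t}\,{}_k\alpha_j^i = (1.2 - 1.0)/0.5 = 0.4$, and the first-order update $0.4 \cdot 0.5 + 1.0 = 1.2$ recovers ${}_{k+1} y_j^i$ exactly, with no truncation error.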
It is crucial to notice that a neural network can compute the mean derivative functions, as schematized by Figure 5. In this way, the E-TUNI works with a first-order numerical integrator coupled to a neural network with any feed-forward architecture (e.g., MLP neural network, RBF, SVM, wavelets, fuzzy inference systems, Convolutional Neural Network (CNN), among others).
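To make this coupling concrete, below is a minimal sketch of the forward recursion of equation (3) (our own illustrative Python; the network is assumed to expose a scikit-learn-style `predict`, e.g., the one fitted in the sketch of Section 2):

```python
import numpy as np

def etuni_forward(net, y0, dt, steps):
    """E-TUNI forward recursion, equation (3).

    net   : trained regressor mapping a state to its mean derivatives.
    y0    : (n,) initial state.
    dt    : the SAME fixed step used to build the training targets.
    steps : number of forward Euler-type steps.
    """
    traj = [np.asarray(y0, dtype=float)]
    for _ in range(steps):
        y = traj[-1]
        slope = net.predict(y.reshape(1, -1))[0]  # tan_dt(alpha) at y
        traj.append(y + slope * dt)               # first-order update
    return np.array(traj)                         # shape (steps+1, n)
```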

3.1. The E-TUNI with Backward Integration

Consider the autonomous system of non-linear differential equations
$$\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y}) \tag{5}$$
where $\mathbf{y} = [y_1\ y_2\ \cdots\ y_n]^{T}$ and $\mathbf{f}(\mathbf{y}) = [f_1(\mathbf{y})\ f_2(\mathbf{y})\ \cdots\ f_n(\mathbf{y})]^{T}$.
By definition, let also $y_j^i = y_j^i(t)$, for $j = 1, 2, \ldots, n$, be a trajectory from the family of solutions of the non-linear differential equations system $\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y})$ that passes through $y_j^i(t_0)$ at the initial instant $t_0$. Here, the index $j$ indicates the variable's name, and $i = 1, 2, \ldots$ indexes the discretizations of the initial condition, which can be as many as one likes. Theorem 1 (T1) is very important here, as it deals with the uniqueness of the solutions of the differential equation (5).
Theorem 1 (T1). Assume that each of the functions $f_1(y_1, y_2, \ldots, y_n), \ldots, f_n(y_1, y_2, \ldots, y_n)$ has continuous partial derivatives with respect to $y_1, y_2, \ldots, y_n$. Then the initial value problem $\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y})$ with initial condition $\mathbf{y}(t_0)$ has one and only one solution $\mathbf{y}^i = \mathbf{y}^i(t)$ in $\mathbb{R}$, starting from each $\mathbf{y}^i(t_0)$. If two solutions $\mathbf{y} = \boldsymbol{\phi}(t)$ and $\mathbf{y} = \boldsymbol{\psi}(t)$ have a common point, they must be identical.
Theorem 2 (The Differential Mean Value Theorem). If a function $g_j^i(t)$, for $j = 1, 2, \ldots, n$, is continuous on the closed interval $[t_k, t_{k+1}]$ and differentiable on the open interval $(t_k, t_{k+1})$, then there exists at least one number $t_k^{*}$, with $t_k < t_k^{*} < t_k + \Delta t = t_{k+1}$, such that $\dot{g}_j^i(t_k^{*}) = \frac{{}_{k+1}g_j^i - {}_k g_j^i}{\Delta t}$.
Theorem 3 (The Mean Integral Value Theorem). If a function $\dot{g}_j^i(t)$, for $j = 1, 2, \ldots, n$, is continuous on the closed interval $[t_k, t_{k+1}]$, then there exists at least one interior point $t_k^{x} \in [t_k, t_{k+1}]$ such that $\dot{g}_j^i(t_k^{x}) \cdot \Delta t = \int_{t_k}^{t_{k+1}} \dot{g}_j^i(t)\,dt$.
In general, as $\dot{\mathbf{y}}^i = \mathbf{f}(\mathbf{y}^i)$ has no analytic solution, it is expected that only a discrete set of points of $\mathbf{y}^i = \mathbf{y}^i(t)$ is known, $[\,\mathbf{y}^i(t + k\Delta t)\;\; \mathbf{y}^i(t + (k+1)\Delta t)\;\; \cdots\;\; \mathbf{y}^i(t + (k+n)\Delta t)\,] = [\,{}_k\mathbf{y}^i\;\; {}_{k+1}\mathbf{y}^i\;\; \cdots\;\; {}_{k+n}\mathbf{y}^i\,]$, with ${}_k\mathbf{y}^i = \mathbf{y}^i(t + k\Delta t)$, over the horizon $[t_k, t_{k+n}]$ and with $\Delta t = \frac{t_{k+n} - t_k}{n}$ constant; thus, the vector in question is a piece of a discrete solution of $\dot{\mathbf{y}}^i = \mathbf{f}(\mathbf{y}^i)$, where

$${}_k\mathbf{y}^i = [\,{}_k y_1^i\;\; {}_k y_2^i\;\; \cdots\;\; {}_k y_n^i\,] \tag{6}$$
By definition [35,36], the secant between two points ${}_k y_j^i$ and ${}_{k+1} y_j^i$, for $j = 1, 2, \ldots, n$, belonging to the curve $y_j^i(t)$ is the line segment joining these two points. Then, the tangents of the secants between the points ${}_k y_j^i$ and ${}_{k+1} y_j^i$, for $j = 1, 2, \ldots, n$, are defined as:
$$\tan_{\Delta t}\boldsymbol{\alpha}^i(t + k\Delta t) = \tan_{\Delta t}\,{}_k\boldsymbol{\alpha}^i = [\,\tan_{\Delta t}\,{}_k\alpha_1^i\;\; \tan_{\Delta t}\,{}_k\alpha_2^i\;\; \cdots\;\; \tan_{\Delta t}\,{}_k\alpha_n^i\,] \tag{7}$$
where $\tan_{\Delta t}\,{}_k\alpha_j^i = \frac{{}_{k+1}y_j^i - {}_k y_j^i}{\Delta t}$, for $j = 1, 2, \ldots, n$.
The variable ${}_k\alpha_j^i$ is the angle of the secant that joins the two points ${}_k y_j^i$ and ${}_{k+1} y_j^i$ belonging to the curve $y_j^i(t)$. Thus, it is possible to demonstrate Theorem 4 (T4) concerning the solution of the dynamical system governed by autonomous non-linear ordinary differential equations [37].
Theorem 4 (T4). The solution ${}_{k+1}y_j^i$, for $j = 1, 2, \ldots, n$, of the non-linear differential equations system $\dot{\mathbf{y}}^i = \mathbf{f}(\mathbf{y}^i)$ can be determined exactly through the relation ${}_{k+1}y_j^i = \tan_{\Delta t}\,{}_k\alpha_j^i \cdot \Delta t + {}_k y_j^i$ for a given ${}_k\mathbf{y}^i$ and $\Delta t$.
Proof. It is presented in detail in [29,37]. However, there is an error in the proof of this theorem in [29], which was corrected in [37]. Additionally, we can sketch a simplified proof here. When the general solution of the non-linear autonomous dynamical system is known, then $\dot{y}_i = f_i(\mathbf{y})$ can be replaced by some function of the form $\dot{y}_i = g_i(t)$. This allows us to apply theorems T2 and T3 systematically and recursively to the original differential equation (1) and arrive at the desired proof. Because of the uniqueness theorem T1, there is no other mathematical form than this one. In an approach involving supervised learning with input/output training patterns, it is not absurd to know in advance the solution one wants to arrive at.
Therefore, it is important to relate mathematically the mean derivative functions to the instantaneous derivative functions (see again Figure 2). It is easily seen that, if $\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y})$, then
$${}_k\dot{y}_j^i = f_j({}_k\mathbf{y}^i) = \lim_{\Delta t \to 0} \tan_{\Delta t}\,{}_k\alpha_j^i = \lim_{\Delta t \to 0} \frac{{}_{k+1}y_j^i - {}_k y_j^i}{\Delta t} \quad \text{for } j = 1, 2, \ldots, n \tag{8}$$
or
$$\lim_{\Delta t \to 0} \tan_{\Delta t}\,{}_k\alpha_j^i = \tan\,{}_k\theta_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{9}$$
Figure 2 graphically illustrates the convergence of the mean derivative functions to the instantaneous derivative functions as $\Delta t$ tends to zero. Based on this, it is possible to state two basic types of first-order (Euler) numerical integrators: an Euler integrator that uses instantaneous derivative functions, given by $\tan\,{}_k\theta_j^i = f_j({}_k\mathbf{y}^i)$ for $j = 1, 2, \ldots, n$, and an Euler integrator that uses mean derivative functions, given by $\tan_{\Delta t}\,{}_k\alpha_j^i = \frac{{}_{k+1}y_j^i - {}_k y_j^i}{\Delta t} = g_j[{}_k\mathbf{y}^i, {}_{k+1}\mathbf{y}^i, \Delta t]$.
A significant difference between the mean derivative functions and the instantaneous derivative functions is that the instantaneous derivative function $f_j({}_k\mathbf{y}^i)$, for $j = 1, 2, \ldots, n$, depends only on ${}_k\mathbf{y}^i$, whereas the mean derivative function $g_j[{}_k\mathbf{y}^i, {}_{k+1}\mathbf{y}^i, \Delta t]$ depends on ${}_k\mathbf{y}^i$, ${}_{k+1}\mathbf{y}^i$, and $\Delta t$. As will be seen later, this difference is decisive.
The first important point to be considered here is the possibility of carrying out both forward and backward integration with the Euler integrator based on instantaneous derivatives. The instantaneous derivative functions at the points $y_j^i(t)$, for $j = 1, 2, \ldots, n$, can be defined in two different ways:
$${}_k\dot{y}_j^i = f_j^{+}({}_k\mathbf{y}^i) = \lim_{\Delta t \to 0} \frac{{}_{k+1}y_j^i - {}_k y_j^i}{\Delta t} = \tan^{+}\,{}_k\theta_j^i \tag{10}$$
or
$${}_k\dot{y}_j^i = f_j^{-}({}_k\mathbf{y}^i) = \lim_{\Delta t \to 0} \frac{{}_k y_j^i - {}_{k-1}y_j^i}{\Delta t} = \tan^{-}\,{}_k\theta_j^i \tag{11}$$
However, if the instantaneous derivative function $f_j(\cdot)$ exists at the point ${}_k\mathbf{y}^i$, then necessarily $f_j^{+}({}_k\mathbf{y}^i) = f_j^{-}({}_k\mathbf{y}^i) = f_j({}_k\mathbf{y}^i)$, or $\tan^{+}\,{}_k\theta_j^i = \tan^{-}\,{}_k\theta_j^i = \tan\,{}_k\theta_j^i$ (see Figure 6). In this way, the expressions of the forward- and backward-propagating Euler integrators, designed with the instantaneous derivative functions, are given, respectively, by (i.e., with low numerical precision):
$${}_{k+1}y_j^i \approx f_j({}_k\mathbf{y}^i)\cdot\Delta t + {}_k y_j^i \quad \Longleftrightarrow \quad {}_{k+1}y_j^i \approx \tan\,{}_k\theta_j^i\cdot\Delta t + {}_k y_j^i \tag{12}$$
$${}_{k-1}y_j^i \approx -f_j({}_k\mathbf{y}^i)\cdot\Delta t + {}_k y_j^i \quad \Longleftrightarrow \quad {}_{k-1}y_j^i \approx -\tan\,{}_k\theta_j^i\cdot\Delta t + {}_k y_j^i \tag{13}$$
Notice that equation (13) can also be obtained directly from equation (12) by the substitution $\Delta t \to -\Delta t$.
Additionally, since $f_j^{+}({}_k\mathbf{y}^i) = f_j^{-}({}_k\mathbf{y}^i) = f_j({}_k\mathbf{y}^i)$, the Euler integrator with instantaneous derivative functions has a symmetric solution between the intervals $[t_{k-1}, t_k]$ and $[t_k, t_{k+1}]$, thus implying low numerical precision for the outputs ${}_{k-1}y_j^i$ and ${}_{k+1}y_j^i$, for $j = 1, 2, \ldots, n$. To say that a solution is symmetric at the point $t_k$ means that $|{}_k y_j^i - {}_{k-1}y_j^i| = |{}_{k+1}y_j^i - {}_k y_j^i|$ for $j = 1, 2, \ldots, n$.
This symmetry property of the Euler integrator only occurs when instantaneous derivative functions are used. In the Euler integrator designed with mean derivative functions, this symmetry of the solution around the point $t_k$ is not verified (see Figure 6 again). This matters because, in the Euler integrator designed with mean derivative functions, the backward solution does not necessarily decrease when the forward solution grows.
In [38], several difference operators that can be applied to a function ${}_k y_j^i$, for $j = 1, 2, \ldots, n$, are presented. The two operators that interest us are the forward difference and the backward difference, given below, respectively:
$$\Delta\,{}_k y_j^i = {}_{k+1}y_j^i - {}_k y_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{14}$$
$$\nabla\,{}_k y_j^i = {}_k y_j^i - {}_{k-1}y_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{15}$$
Thus, using the finite difference operators, there are two different ways to represent the secant at the point ${}_k y_j^i$ with integration step $\Delta t$:
$$\tan_{\Delta t}^{+}\,{}_k\alpha_j^i = \frac{\Delta\,{}_k y_j^i}{\Delta t} = \frac{{}_{k+1}y_j^i - {}_k y_j^i}{\Delta t} \quad \text{for } j = 1, 2, \ldots, n \tag{16}$$
$$\tan_{\Delta t}^{-}\,{}_k\alpha_j^i = \frac{\nabla\,{}_k y_j^i}{\Delta t} = \frac{{}_k y_j^i - {}_{k-1}y_j^i}{\Delta t} \quad \text{for } j = 1, 2, \ldots, n \tag{17}$$
Comparing equation (16) with the result of Theorem T4, one obtains the following recursive and exact equation representing the solution of an autonomous non-linear ordinary differential equations system in a forward integration process (i.e., with high numerical accuracy):
$${}_{k+1}y_j^i = \tan_{\Delta t}^{+}\,{}_k\alpha_j^i \cdot \Delta t + {}_k y_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{18}$$
or
$${}_{k+1}y_j^i = \frac{\Delta\,{}_k y_j^i}{\Delta t} \cdot \Delta t + {}_k y_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{19}$$
Equations (18) and (19) represent an Euler-type integrator that uses mean derivative functions and forward integration. To obtain the Euler integrator with backward recursion, it is enough to analyze the asymmetry of the mean derivative functions at the point ${}_k y_j^i$, as illustrated in Figure 6. In this way, we obtain the following equation to perform the backward integration of E-TUNI (i.e., with high numerical precision):
$${}_{k-1}y_j^i = -\tan_{\Delta t}^{-}\,{}_k\alpha_j^i \cdot \Delta t + {}_k y_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{20}$$
or
$${}_{k-1}y_j^i = -\frac{\nabla\,{}_k y_j^i}{\Delta t} \cdot \Delta t + {}_k y_j^i \quad \text{for } j = 1, 2, \ldots, n \tag{21}$$
Thus, the iterative equations of the first-order Euler integrator using mean derivatives, performing forward and backward recursions, are, respectively, equations (18) and (20). The negative sign appearing in equation (20) does not necessarily mean that ${}_{k-1}y_j^i$ decreases when ${}_{k+1}y_j^i$ increases, because the two mean derivatives are produced by two neural networks independent of each other. In general, $|\tan_{\Delta t}^{+}\,{}_k\alpha_j^i| \neq |\tan_{\Delta t}^{-}\,{}_k\alpha_j^i|$, and therefore Euler integrators with mean derivative functions do not have a symmetric solution; that is, in general, $|{}_{k+1}y_j^i - {}_k y_j^i| \neq |{}_k y_j^i - {}_{k-1}y_j^i|$ for $j = 1, 2, \ldots, n$. As can easily be seen in Figure 6, the negative mean derivative has a different slope from the positive mean derivative at the same point $t_k$.
It is interesting to notice that the negative sign appearing in equations (13), (20), and (21) is due to the inverted direction of the flow of time (from present to past) in backward integration.
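A minimal sketch of the backward recursion of equations (20) and (21) (again our own illustrative Python; `net_b` is assumed to have been trained, independently of the forward network, on the negative mean derivatives of equation (17)):

```python
import numpy as np

def etuni_backward(net_b, y0, dt, steps):
    """E-TUNI backward recursion, equations (20)-(21).

    net_b : regressor trained on the NEGATIVE mean derivatives,
            i.e., the secant slopes from step k-1 up to step k.
    Walks from the present state back into the past.
    """
    traj = [np.asarray(y0, dtype=float)]
    for _ in range(steps):
        y = traj[-1]
        slope = net_b.predict(y.reshape(1, -1))[0]  # tan_dt^-(alpha)
        traj.append(y - slope * dt)  # minus sign: time flows backward
    return np.array(traj)
```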

4. Results and Analysis

In this section, we perform a numerical and computational analysis of the Universal Numerical Integrator (UNI) that exclusively uses mean derivative functions, that is, the Euler-Type Universal Numerical Integrator (E-TUNI). We present three case studies: (i) a one-dimensional simple case, (ii) the non-linear simple pendulum (two state variables), and (iii) the non-linear inverted pendulum (four state variables).
For all experiments presented below, the MLP neural networks were standardized with the traditional Levenberg-Marquardt training algorithm [39] from 1994. Concerning the architecture of the neural network, all experiments used a Multi-Layer Perceptron (MLP) with a single hidden layer of thirty neurons with hyperbolic tangent activation. In all experiments, 80% of the total patterns were used for training the MLP neural network, 10% for testing, and 10% for validation. Concerning the architecture of E-TUNI, the direct approach was used exclusively; the indirect or empirical approach was not tested in this article. The general algorithm for training and testing the methodology presented in this article can be briefly described by the following steps (a condensed code sketch is given right after them):
Step 1. Using a high-order Runge-Kutta 4-5 integrator over the considered autonomous non-linear ordinary differential equation system, randomly generate $p$ input/output training patterns for E-TUNI concerning the initial instants $\mathbf{y}^i(t_0)$. Perform both forward and backward integration.
Step 2. Determine both the values of the positive mean derivatives $\tan_{\Delta t}^{+}\,{}_k\alpha_j^i$ and the values of the negative mean derivatives $\tan_{\Delta t}^{-}\,{}_k\alpha_j^i$ with respect to the initial instants $\mathbf{y}^i(t_0)$, for $i = 1, 2, \ldots, p$.
Step 3. Train two distinct neural networks. One is to output positive mean derivative functions, and the other is to output negative mean derivative functions. Do this following the graphic diagram in Figure 5.
Step 4. Simulate, for several different integration steps $\Delta t$, the performance of E-TUNI with both forward and backward integration. Observe that if, starting from an initial instant $\mathbf{y}^i(t_0)$, the solution is propagated forward over $n$ distinct instants and then returned over the same $n$ instants with backward integration, then, by the uniqueness theorem, one should recover the original initial instant $\mathbf{y}^i(t_0)$ exactly. This trick will be used to test the efficiency of E-TUNI when it is propagated backward.
Step 5. Do the same using the Euler-type first-order integrator, but now with the original instantaneous derivative functions of the dynamical system in question. Then compare E-TUNI with the conventional Euler integrator by checking the accuracy of the discrete numerical solutions obtained, to verify the superiority of E-TUNI over the conventional Euler integrator.
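A condensed, illustrative realization of Steps 1 to 5 for the one-dimensional dynamics of Example 1 below (a sketch under stated assumptions: scikit-learn's `MLPRegressor` with its default solver stands in for the Levenberg-Marquardt-trained MLP of [39], and the pattern counts, ranges, and tolerances are simplified):

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

rhs = lambda t, y: np.sin(y)        # Example 1 dynamics, equation (22)
dt, steps = 0.1, 50

# Step 1: reference trajectories from a high-order Runge-Kutta 4-5 solver.
ts = np.linspace(0.0, steps * dt, steps + 1)
Y = np.stack([
    solve_ivp(rhs, (ts[0], ts[-1]), [y0], t_eval=ts,
              rtol=1e-10, atol=1e-12).y[0]
    for y0 in np.random.uniform(-1.0, 13.0, size=200)
])

# Step 2: positive and negative mean derivatives at interior points.
X  = Y[:, 1:-1].reshape(-1, 1)                  # states  _k y
Tp = ((Y[:, 2:]   - Y[:, 1:-1]) / dt).ravel()   # tan+, equation (16)
Tn = ((Y[:, 1:-1] - Y[:, :-2])  / dt).ravel()   # tan-, equation (17)

# Step 3: one network per integration direction.
mk = lambda: MLPRegressor((30,), activation="tanh",
                          max_iter=20000, tol=1e-12)
net_f, net_b = mk().fit(X, Tp), mk().fit(X, Tn)

# Steps 4-5: propagate forward n steps, then backward n steps; by the
# uniqueness theorem we should land back on the initial state.
y = np.array([2.0])
for _ in range(steps):
    y = y + net_f.predict(y.reshape(1, -1)) * dt
for _ in range(steps):
    y = y - net_b.predict(y.reshape(1, -1)) * dt
print("return error:", abs(float(y[0]) - 2.0))
```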
Next, we briefly describe the systems of non-linear differential equations of the three dynamical systems considered here to test the proposed algorithm for the backward integration of E-TUNI. Table 1 summarizes the training data of the eight experiments conducted to test the proposed methodology.
Example 1. Simulate the one-dimensional non-linear dynamics problem given by equation (22). The variable $y$ was trained on the range $[-1, 13]$. Notice that this dynamic has infinitely many stability points, since $\sin y = 0$ has infinitely many solutions, and that, near such stability points, recovering the initial condition through backward integration is very difficult for any integrator.
$$\dot{y} = \sin(y) \tag{22}$$
Example 2. Simulate the non-linear simple pendulum problem given by the system of differential equations (23). Notice that this example does not involve control variables. The acceleration due to gravity and the pendulum's length are, respectively, $g = 9.81\ \mathrm{m/s^2}$ and $l = 0.30\ \mathrm{m}$. The variable $\theta$ was trained on the range $[-1.2, 1.2]\ \mathrm{rad}$ and the variable $\dot{\theta}$ on the range $[-5.2, +5.2]\ \mathrm{rad/s}$.
$$\begin{aligned}
\dot{\theta} &= \omega \\
\dot{\omega} &= -\frac{g}{l}\sin\theta
\end{aligned} \tag{23}$$
Example 3. Do the same for the non-linear inverted pendulum problem described by the system of differential equations (24). The constants adopted for this problem were $M = 1.0\ \mathrm{kg}$, $m = 0.4\ \mathrm{kg}$, $l = 0.5\ \mathrm{m}$, $I = 0.05\ \mathrm{kg \cdot m^2}$, $b = 0.1\ \mathrm{N \cdot s/m}$, $g = 9.81\ \mathrm{m/s^2}$, and $F = 1\ \mathrm{N}$. This problem has four state variables ($x_1 = x$, $x_2 = \dot{x}$, $x_3 = \theta$, and $x_4 = \dot{\theta}$) and a constant control ($u_1 = F = 1\ \mathrm{N}$). The state variables were trained on the intervals $x_1 \in [-1, +35]\ \mathrm{m}$, $x_2 \in [-1, +7]\ \mathrm{m/s}$, $x_3 \in [-1, +1]\ \mathrm{rad}$, and $x_4 \in [-1.2, +1.2]\ \mathrm{rad/s}$.
$$\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= \frac{\left[F - b x_2 + m l x_4^2 \sin(x_3)\right](I + m l^2) + (m l)^2 g \sin(x_3)\cos(x_3)}{(I + m l^2)(M + m) - \left[m l \cos(x_3)\right]^2} \\
\dot{x}_3 &= x_4 \\
\dot{x}_4 &= \frac{\left[\left(F - b x_2 + m l x_4^2 \sin(x_3)\right)\cos(x_3) + (M + m) g \sin(x_3)\right] m l}{\left[m l \cos(x_3)\right]^2 - (I + m l^2)(M + m)}
\end{aligned} \tag{24}$$
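For generating training patterns, system (24) can be transcribed directly into a right-hand-side function for a conventional integrator. A sketch (our own Python; constants as listed above, with the signs following the reconstruction of (24)):

```python
import numpy as np
from scipy.integrate import solve_ivp

M, m, l, I, b, g, F = 1.0, 0.4, 0.5, 0.05, 0.1, 9.81, 1.0

def inverted_pendulum(t, x):
    """Right-hand side of system (24); x = [x1, x2, x3, x4]."""
    x1, x2, x3, x4 = x
    s, c = np.sin(x3), np.cos(x3)
    f = F - b * x2 + m * l * x4**2 * s            # shared force term
    den = (I + m * l**2) * (M + m) - (m * l * c)**2
    x2_dot = (f * (I + m * l**2) + (m * l)**2 * g * s * c) / den
    x4_dot = m * l * (f * c + (M + m) * g * s) / (-den)
    return [x2, x2_dot, x4, x4_dot]

# Usage: one reference trajectory for pattern generation (Step 1).
sol = solve_ivp(inverted_pendulum, (0.0, 5.0), [0.0, 0.0, 0.5, 0.0],
                t_eval=np.linspace(0.0, 5.0, 51))
```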
Figure 7 graphically illustrates the numerical simulation obtained for Example 1. This figure shows not the result obtained by E-TUNI, but the one produced by the Runge-Kutta 4-5 integrator available in Matlab to numerically solve equation (22); the E-TUNI result will later be compared, numerically and graphically, with this figure. In Figure 7, the following convention is used: (i) the blue "x" markers represent the high-precision forward propagation of Runge-Kutta 4-5 over the dynamics of equation (22), (ii) the blue "o" markers represent the high-precision backward propagation of Runge-Kutta 4-5 over the same dynamics, and (iii) the solid red lines represent the slope of the mean derivative functions between two consecutive points.
Notice that the integration step graphically presented was $\Delta t = 0.1$. It is essential to notice, however, that Runge-Kutta 4-5 works internally with a variable integration step much smaller than this one to achieve the high numerical accuracy shown in Figure 7. Both forward and backward integration were performed from $t_0 = 0$ to $t_f = 5$. In this case, as can be seen, the result obtained is perfect, as the points obtained by backward integration overlap perfectly with the points obtained by forward integration.
Figure 8 presents the first graphic result of E-TUNI for the dynamics given by equation (22). The blue markers represent the results obtained by E-TUNI and follow the same convention as in the previous figure. The green dots represent the results obtained by the conventional Euler integrator, using the instantaneous derivative functions also given by equation (22). The two black dash-dot horizontal lines delimit the E-TUNI training range. Notice that within the training range the E-TUNI result was perfect, with both forward and backward integration. The line of Table 1 with Number = 1 gives the training data for this case: 400 training patterns were generated, and 80% of them (320 patterns) were effectively used for training. In Table 1, the symbols $MSE_F$ and $MSE_B$ denote, respectively, the mean square error of forward integration and the mean square error of backward integration.
Notice that, outside the training region, E-TUNI becomes inaccurate; to solve this problem, the training domain of the state variable should be increased. Notice also that the conventional Euler integrator presented a slight deviation of the backward integration with respect to the forward integration in the given domain. In the conventional Euler integrator, this deviation becomes more critical and accentuated as the integration step increases; in E-TUNI, however, the result does not deteriorate as the integration step increases.
Figure 9 presents the same result as the previous figure, but now using an integration step $\Delta t = 0.5$. The E-TUNI result was perfect, but the conventional Euler result was even worse, confirming what was said in the previous paragraph. See line Number = 2 of Table 1 for the main parameters used in this training. Figure 10 presents the same result, but now using an integration step $\Delta t = 0.01$. In this case, the result achieved by E-TUNI is still perfect, and so is that of the conventional Euler integrator; the latter was perfect because a tiny integration step, $\Delta t = 0.01$, was used. Figure 11 presents a simulation using the integration step $\Delta t = 2 > 1$. Again, E-TUNI produced a perfect result, but the conventional Euler solution degenerated.
Figure 12 and Figure 13 deserve a little more attention. In Figure 12, as can be seen, the solution obtained by E-TUNI through backward integration degenerated (see the zoom); the blue circle should lie perfectly on the solid red line. In Figure 13, the backward integration solution obtained by Runge-Kutta 4-5 also degenerated. Both solutions turned out badly because the integration process was conducted for too long over a point of stability. The zoom made in Figure 12 will be used to explain this better.
What is explained for Figure 12 is also valid for Figure 13. In Figure 12, the integration process was conducted from $t_0 = 0$ to $t_f = 20$; the biggest problem occurred when returning from $t_f = 20$ to $t_0 = 0$. For example, all families of curves confined between $y(0) = 0$ and $y(0) = 2\pi$ at the initial instant $t = 0$ were confined in a much smaller interval at the instant $t_f = 20$. Therefore, to work correctly, the backward integration would have to be trained with a much smaller mean squared error than the one achieved in Table 1 (line Number = 5). However, this is impossible to achieve using only the algorithm of [39]. Notice that this task was also impossible for Runge-Kutta 4-5, as shown in Figure 13. Therefore, the problem presented here is critical regardless of the integrator used.
Figure 14 presents the first graphic result of E-TUNI for the dynamics given by equation (23), which represents the non-linear simple pendulum. In this case, the superiority of E-TUNI over the conventional Euler integrator is evident; an integration step $\Delta t = 0.02$ was used. Figure 15 concerns the same example, but using an integration step $\Delta t = 5$. The conventional Euler result was not presented in this figure because it was so poor that it would impair the scale of the graph. Still referring to Figure 15, the main reason the backward integration turned out somewhat poor at the end of the integration process is that the average training error became slightly too large (see Table 1, line Number = 7). To solve this problem, it is enough to reduce the average training error of the artificial neural network used, but that is not always an easy task. Figure 16 demonstrates that, even for Runge-Kutta 4-5, the backward integration process of the simple pendulum can be imperfect (see the blue circles misaligned with the blue "x" markers).
Notice that the solution of equation (23), graphically presented in Figure 15, was much worse than the other solutions given in Table 1. This can be seen by checking the training mean square errors in Table 1 (line Number = 7), which were too high. Most likely, this was caused by two fundamental reasons. The first is that the E-TUNI local error is amplified by the square of the integration step (see the $(\Delta t)^2 \cdot MSE_F$ and $(\Delta t)^2 \cdot MSE_B$ columns of Table 1); the proof of this result can be found in [29] or [37], while the analysis of the global error would require further studies. The other reason, which is purely speculative, is the following: it is believed that, when the E-TUNI integration step is very large, if the solution of the real dynamical system becomes a non-convex function (with respect to the line of mean derivatives over this same interval $\Delta t$), then the training becomes more difficult. This last argument, however, still needs to be verified empirically and theoretically.
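As a quick check of this scaling against values already in Table 1: for line Number = 1, $(\Delta t)^2 \cdot MSE_F = (0.1)^2 \times 1.951 \times 10^{-10} = 1.951 \times 10^{-12}$, and for line Number = 7, $(5)^2 \times 1.845 \times 10^{-6} = 4.613 \times 10^{-5}$, exactly the tabulated values.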
Figure 17 presents the first graphic result for the dynamics given by equation (24), which represents the non-linear inverted pendulum. As can be seen in this figure, Runge-Kutta 4-5 simulated this dynamic perfectly, for both forward and backward integration. Figure 18 shows the same simulation performed by E-TUNI. Theoretically, Figure 18 is the same as Figure 17; they are just on different scales. Notice that in Figure 18 the E-TUNI simulation was perfect, while the conventional Euler simulation (green dots) became confusing due to the numerical imprecision it reached, which caused an exaggerated change in the scale of the figure.
Table 1. Training data summary.
| N. | Simul. | $\tan_{\Delta t}^{+}\,{}_k\alpha_j$ ($MSE_F$) | $(\Delta t)^2 \cdot MSE_F$ | $\tan_{\Delta t}^{-}\,{}_k\alpha_j$ ($MSE_B$) | $(\Delta t)^2 \cdot MSE_B$ | Patterns |
|---|---|---|---|---|---|---|
| 1 | Figure 8 | $1.951 \times 10^{-10}$ | $1.951 \times 10^{-12}$ | $1.035 \times 10^{-11}$ | $1.035 \times 10^{-13}$ | 320 |
| 2 | Figure 9 | $2.157 \times 10^{-13}$ | $5.393 \times 10^{-14}$ | $1.322 \times 10^{-14}$ | $3.305 \times 10^{-15}$ | 320 |
| 3 | Figure 10 | $3.097 \times 10^{-12}$ | $3.097 \times 10^{-16}$ | $4.768 \times 10^{-14}$ | $4.768 \times 10^{-18}$ | 320 |
| 4 | Figure 11 | $8.636 \times 10^{-12}$ | $3.454 \times 10^{-11}$ | $2.169 \times 10^{-12}$ | $8.676 \times 10^{-12}$ | 320 |
| 5 | Figure 12 | $8.636 \times 10^{-12}$ | $3.454 \times 10^{-11}$ | $2.169 \times 10^{-12}$ | $8.676 \times 10^{-12}$ | 320 |
| 6 | Figure 14 | $7.921 \times 10^{-11}$ | $3.168 \times 10^{-14}$ | $6.770 \times 10^{-11}$ | $2.708 \times 10^{-14}$ | 580 |
| 7 | Figure 15 | $1.845 \times 10^{-6}$ | $4.613 \times 10^{-5}$ | $1.892 \times 10^{-6}$ | $4.730 \times 10^{-5}$ | 580 |
| 8 | Figure 18 | $1.774 \times 10^{-10}$ | $1.774 \times 10^{-12}$ | $6.645 \times 10^{-11}$ | $6.645 \times 10^{-13}$ | 640 |

5. Conclusion

As far as we know, all previous work on E-TUNI was developed using only forward integration. Therefore, we present here, for the first time, a work that uses E-TUNI in a backward integration process. Three case studies were carried out. Non-linear and autonomous ordinary differential equations govern all three dynamical systems considered in this study.
The Euler-type integrator designed with instantaneous derivatives is not very accurate, whereas the E-TUNI has numerical precision equivalent to that of the highest-order integrators. It is interesting to notice that the theory of ordinary differential equations gains power when combined with the theory of universal approximators of functions. This property is possible because artificial neural networks allow working with real-world input/output training patterns, which makes it possible to refine the final solution obtained by the first-order Euler integrator. Because of this, the E-TUNI can also be used to predict real-world dynamics (e.g., river flow and stock market forecasts, among others).
Additionally, we can call the forward mean derivative functions positive mean derivative functions and, equivalently, the backward mean derivative functions negative mean derivative functions. We also conclude that the mean derivative functions depend on the integration step and are asymmetric. This property requires training two distinct neural networks to perform forward and backward integration, whereas the positive and negative instantaneous derivative functions are symmetric to each other.
Finally, it would be interesting to carry out a future study of E-TUNI using deep neural networks instead of shallow ones, aiming at two essential points: (i) increasing the number of training patterns (and, consequently, the domain of the state variables) and (ii) reducing the training mean squared error, since this quantity is critical when modeling non-linear dynamic systems with artificial neural networks. Interestingly, E-TUNI is an excellent option to replace the NARMAX model and even the Runge-Kutta Neural Network (RKNN).

Author Contributions

P.T. designed the proposed model and wrote the main part of the mathematical model. J. M., L.D., and A.C. supervised the writing of the paper and its technical and grammatical review. P.T. and G.G. (supervised by J. M., L.D., and A.C.) developed the software and performed several computer simulations to validate the proposed model.

Funding

The authors thank the Brazilian Aeronautics Institute of Technology (Instituto Tecnológico de Aeronáutica - ITA); the Casimiro Montenegro Filho Foundation (Fundação Casimiro Montenegro Filho - FCMF); and the Brazilian Enterprise Ecossistema Negócios Digitais Ltda for their support and infrastructure, which motivate the challenges and innovations of this research project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Professor and great friend Atair Rios Neto for his valuable tips for improving this article. We would also like to thank the reviewers of this journal for their valuable suggestions for improvement. The authors would also like to thank God for making all of this possible.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABNN Adams-Bashforth Neural Network
E-TUNI Euler-Type Universal Numerical Integrator
L-MTA Levenberg-Marquardt Training Algorithm
MSE Mean Square Error
MLP Multi-Layer Perceptron
NARMAX Nonlinear Auto Regressive Moving Average with eXogenous inputs
PCNN Predictive-Corrector Neural Network
RBF Radial Basis Function
RKNN Runge-Kutta Neural Network
SVM Support Vector Machine
UNI Universal Numerical Integrator

References

  1. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biol. 1943, 5(6), 115–133. [CrossRef]
  2. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals, Syst. 1989, 2(4), 303–314. [CrossRef]
  3. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2(5), 359–366. [CrossRef]
  4. Cotter, N. E. The Stone-Weierstrass and its application to neural networks. IEEE Trans. Neural Netw. 1990, 1(4), 290–295. [CrossRef]
  5. Hanin, B. Universal function approximation by deep neural nets with bounded width and ReLU activations. Mathematics 2019, 7(10), 1–9. [CrossRef]
  6. Haykin, S. Neural Networks: A Comprehensive Foundation. Publisher: Prentice-Hall, Inc., New Jersey, USA, 1999.
  7. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning. Publisher: MIT Press, Cambridge, MA, 2017.
  8. Henrici, P. Elements of Numerical Analysis. Publisher: John Wiley and Sons, New York, USA, 1964.
  9. Lapidus, L.; Seinfeld, J. H. Numerical Solution of Ordinary Differential Equations. Publisher: Academic Press, New York and London, 1971.
  10. Lambert, J. D. Computational Methods in Ordinary Differential Equations. Publisher: John Wiley and Sons, New York, USA, 1973.
  11. Vidyasagar, M. Nonlinear Systems Analysis. Publisher: Prentice-Hall, Inc., Electrical Engineering Series, New Jersey, USA, 1978. [CrossRef]
  12. Chen, D. J. L.; Chang, J. C.; Cheng, C. H. Higher order composition Runge-Kutta methods. Tamkang Journal of Mathematics 2008, 39(3), 199–211. [CrossRef]
  13. Misirli, E.; Gurefe, Y. Multiplicative Adams Bashforth-Moulton methods. Numer. Algor. 2011, 57(4), 425–439. [CrossRef]
  14. Ming, Q.; Yang, Y.; Fang, Y. An optimized Runge-Kutta method for the numerical solution of the radial Schrödinger equation. Mathematical Problems in Engineering 2012, 2012(1), 1–12. [CrossRef]
  15. Polla, G. Comparing accuracy of differential equation results between Runge-Kutta Fehlberg methods and Adams-Moulton methods. Applied Mathematical Sciences 2013, 7(103), 5115–5127. [CrossRef]
  16. Tasinaffo, P. M.; Gonçalves, G. S.; Cunha, A. M.; Dias, L. A. V. An introduction to universal numerical integrators. Int. J. Innov. Comput. Inf. Control 2019, 15(1), 383–406. [CrossRef]
  17. Chen, S.; Billings, S. A. Representations of nonlinear systems: the NARMAX model. Int. J. Control 1989, 49(3), 1013–1032. https://api.semanticscholar.org/CorpusID:122070021.
  18. Chen, S.; Billings, S. A.; Cowan, C. F. N.; Grant, P. M. Practical identification of NARMAX models using radial basis functions. Int. J. Control 1990, 52(6), 1327–1350. https://api.semanticscholar.org/CorpusID:16430646.
  19. Narendra, K. S.; Parthasarathy, K. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1990, 1(1), 4–27. [CrossRef]
  20. Hunt, K. J.; Sbarbaro, D.; Zbikowski, R.; Gawthrop, P. J. Neural networks for control systems – A survey. Automatica 1992, 28(6), 1083–1112. https://api.semanticscholar.org/CorpusID:207511431.
  21. Norgaard, M.; Ravn, O.; Poulsen, N. K.; Hansen, L. K. Neural Networks for Modelling and Control of Dynamic Systems. Publisher: Springer, London, UK, 2000. https://link.springer.com/book/9781852332273.
  22. Wang, Y.-J.; Lin, C.-T. Runge-Kutta neural network for identification of dynamical systems in high accuracy. IEEE Trans. Neural Netw. 1998, 9(2), 294–307. [CrossRef]
  23. Uçak, K. A Runge-Kutta neural network-based control method for nonlinear MIMO systems. Soft Computing 2019, 23(17), 7769–7803. [CrossRef]
  24. Uçak, K.; Günel, G. O. An adaptive state feedback controller based on SVR for nonlinear systems. In Proceedings of the 6th International Conference on Control Engineering and Information Technology (CEIT), Istanbul, Turkey, 2018, 25–27. https://api.semanticscholar.org/CorpusID:195775331.
  25. Uçak, K. A novel model predictive Runge-Kutta neural network controller for nonlinear MIMO systems. Neural Processing Letters 2020, 51(2), 1789–1833. [CrossRef]
  26. Rios Neto, A.; Tasinaffo, P. M. Feedforward neural networks combined with a numerical integrator structure for dynamic systems modeling. In Proceedings of the 17th International Congress of Mechanical Engineering (COBEM), São Paulo/SP, Brazil, 2003, 1–7.
  27. Melo, R. P.; Tasinaffo, P. M. Uma metodologia de modelagem empírica utilizando o integrador neural de múltiplos passos do tipo Adams-Bashforth. Sociedade Brasileira de Automática (SBA) 2010, 21(5), 487–509. [CrossRef]
  28. Tasinaffo, P. M.; Rios Neto, A. Adams-Bashforth neural networks applied in a predictive control structure with only one horizon. Int. J. Innov. Comput. Inf. Control 2019, 15(2), 445–464. [CrossRef]
  29. Tasinaffo, P. M. Estruturas de Integração Neural Feedforward Testadas em Problemas de Controle Preditivo. Doctoral Thesis, INPE-10475-TDI/945, São José dos Campos/SP, Brazil, 2003. http://urlib.net/sid.inpe.br/jeferson/2004/02.18.14.07.
  30. Tasinaffo, P. M.; Rios Neto, A. Mean derivatives based neural Euler integrator for nonlinear dynamic systems modeling. Learning and Nonlinear Models 2005, 3(2), 98–109.
  31. Tasinaffo, P. M.; Guimarães, R. S.; Dias, L. A. V.; Strafacci Júnior, V. Discrete and exact general solution for nonlinear autonomous ordinary differential equations. Int. J. Innov. Comput. Inf. Control. 2016, 12(5), 1703–1719. [CrossRef]
  32. de Figueiredo, M. O.; Tasinaffo, P. M.; Dias, L. A. V. Modeling autonomous nonlinear dynamic systems using mean derivatives, fuzzy logic and genetic algorithms. Int. J. Innov. Comput. Inf. Control 2016, 12(5), 1721–1743. [CrossRef]
  33. Tasinaffo, P. M.; Guimarães, R. S.; Dias, L. A. V.; Strafacci Júnior, V. Mean derivatives methodology by using Euler integrator improved to allow the variation in the size of integration step. Int. J. Innov. Comput. Inf. Control. 2016, 12(6), 1881–1891. [CrossRef]
  34. Tasinaffo, P. M.; Rios Neto, A. Predictive control with mean derivative based neural Euler integrator dynamic model. Sociedade Brasileira de Automática (SBA) 2006, 18(1), 94–105. [CrossRef]
  35. Munem, M. A.; Foulis, D. J. Calculus with Analytic Geometry (Volumes I and II). Publisher: Worth Publishers, Inc., New York, USA, 1978.
  36. Wilson, E. Advanced Calculus. Publisher: Dover Publication, New York, USA, 1958.
  37. Tasinaffo, P. M.; Dias, L. A. V.; da Cunha A. M. A Quantitative Approach to Universal Numerical Integrators (UNIs) with Computational Application. Human-Centric Intelligent Systems (preprint) 2024, 4, 1–20.
  38. Ames, W. F. Numerical Methods for Partial Differential Equations. Publisher: Academic Press, 2. ed., New York, USA, 1988.
  39. Hagan, M. T.; Menhaj, M. B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5(6), 989–993. [CrossRef]
Figure 1. General graphic diagram of the operation of a Universal Numerical Integrator (UNI) (Source: [16,29]).
Figure 2. Basic difference between the instantaneous derivative and the mean derivative (Source: [16,29]).
Figure 3. The E-TUNI designed according to the direct approach (Source: [16,29,30]).
Figure 4. The E-TUNI designed according to the indirect or empirical approach (Source: [16,29]).
Figure 5. An MLP neural network designed with the concept of mean derivative functions.
Figure 6. Geometric finding that instantaneous derivative functions are symmetric, but mean derivative functions are not.
Figure 7. The dynamics of example 1 solved with Runge-Kutta 4-5, available in Matlab with step automation through the "ode45.m" function, using forward and backward integrations.
Figure 8. The dynamics of example 1, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 0.1$.
Figure 9. The dynamics of example 1, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 0.5$.
Figure 10. The dynamics of example 1, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 0.01$.
Figure 11. The dynamics of example 1, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 2 > 1$ and being integrated up to $t_f = 10$.
Figure 12. The dynamics of example 1, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 2 > 1$ and being integrated up to $t_f = 20$.
Figure 13. The same example as in Figure 7, but instead of being integrated up to $t_f = 5$, it was integrated up to $t_f = 18$ (notice the dissolution of the solution).
Figure 14. The dynamics of example 2, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 0.02$.
Figure 15. The dynamics of example 2, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 5 > 1$.
Figure 16. The dynamics of example 2 solved with Runge-Kutta 4-5, available in Matlab with step automation through the "ode45.m" function, using forward and backward integrations.
Figure 17. The dynamics of example 3 solved with Runge-Kutta 4-5, available in Matlab with step automation through the "ode45.m" function, using forward and backward integrations.
Figure 18. The dynamics of example 3, comparing the conventional Euler integrator with E-TUNI, using forward and backward integrations with $\Delta t = 0.1$.