Preprint
Article

This version is not peer-reviewed.

A Training Algorithm for Locally Recurrent Neural Networks Based on the Explicit Gradient of the Loss Function

A peer-reviewed article of this preprint also exists.

Submitted: 08 January 2025
Posted: 09 January 2025


Abstract

In this paper, a new algorithm for the training of Locally Recurrent Neural Networks (LRNNs) is presented, which aims to reduce the computational complexity and, at the same time, to guarantee the stability of the network during training. The main feature of the proposed algorithm is the capability to express the gradient of the error in an explicit form. The algorithm builds on the interpretation of the Fibonacci sequence as the output of a second-order IIR filter, which makes it possible to use Binet's formula, whereby the generic term of the sequence can be calculated directly. Thanks to this approach, the gradient of the loss function during training can be calculated explicitly and expressed in terms of the parameters that control the stability of the neural network.


1. Introduction

Machine learning techniques, when applied to dynamic systems, should preferably have a dynamic structure themselves, that is, the time variable should be included in the algebraic structure of the model. More specifically, in the case of Artificial Neural Networks, the dynamics of the model is obtained by including delay blocks in the structure [1]. The dynamics can be introduced in the neural network while maintaining its feedforward structure (Time Delay Neural Networks, TDNN [2,3,4,5]), or by introducing feedback [6]. In the latter case, the delays are mandatory, otherwise the calculation of the neuron outputs cannot be resolved. Both feedforward and feedback neural networks are suitable for modeling dynamic systems, so the choice of paradigm depends on the requirements of the problem at hand and on the available resources. The feedforward paradigm has the advantage of relying on the same algorithms used for training static NNs. In particular, when the delay blocks appear only in a delay line at the input (Focused Time Delay Neural Networks, FTDNN [7]), the downstream part of the NN is structured as a static one, so any static paradigm can be implemented. A different training strategy is adopted depending on whether the behavior of the system to be modeled is stationary or non-stationary. In the first case, the whole evolution of the system assumed as training set can be used to train the NN iteratively in batch mode, in the same way static NNs are trained. If the physical system is not stationary, the batch mode is no longer suitable, and the NN model must be adapted dynamically during the evolution of the system [8]. To this purpose, the training set at each iteration consists only of the last few samples of the physical signal, while the sensitivity of the model to past samples is allowed to vanish with time. From this point of view, the training strategy is the same as that of adaptive filters [9,10], with the added value of exploiting nonlinearity. From a topological point of view, FTDNNs can be seen as the cascade of a Finite Impulse Response (FIR) linear filter and a Multi-Layer Perceptron [11]. As with FIRs, this kind of NN has a short-term memory, represented by the samples stored in the delay line, and a long-term memory, represented by the weights of the connections. FIR filters take their name from the fact that the impulse response has a duration equal to the number of delays in the delay line (the memory depth), after which it is null. This implies that a proper number of delays must be defined a priori to guarantee that the dynamics of the system under study will be properly modeled. As such dynamics are unknown a priori, a trial-and-error procedure is adopted to design the delay line. As an alternative, feedback can be introduced in the structure of the NN [12,13,14]. Different strategies are used depending on whether the feedback connects every pair of neurons in the network, connects neurons belonging to different layers, feeds the output of the NN back to the input, or is localized within the neurons. This last category is called Locally Recurrent Neural Networks (LRNNs [15,16]), and for many applications it represents the best compromise between performance and computational burden. The advantage of feedback is that the memory depth can be adjusted by modifying a parameter, rather than by changing the topology of the NN.
The drawbacks are a larger computational cost with respect to feedforward NNs, and the fact that they are subject to instabilities [4,17]. LRNNs allow one to limit the computational cost, but the stability issue remains. The global structure of these networks is the same as that of a Multi-Layer Perceptron (MLP), but internally the neurons are structured as an Infinite Impulse Response (IIR) linear filter, optionally combined with a FIR filter, while the nonlinearity is placed downstream of the linear filter. IIR filters have a delay line like FIR filters, but the taps are connected to the input rather than to the output. They take their name from the fact that the duration of the impulse response is theoretically infinite, although after a sufficiently long time the response becomes negligible. The main advantage of IIR filters is that the vanishing time of the impulse response can be set by changing the feedback parameters while keeping the topology unchanged. Unfortunately, the values of such parameters could make the impulse response unstable, and therefore some measures are needed to prevent this event.
The standard algorithm for the training of feedback NNs is Backpropagation Through Time (BPTT [18]), which can be adapted to any feedback topology. This algorithm has been adapted to LRNNs in [17,18,19,20], essentially by unfolding the feedback loop a number of times sufficient for the impulse response to be considered extinct. This measure allows one to adapt the structure even though the feedback structure is non-causal, and it makes it possible to use for training the same procedures defined for feedforward structures. Nonetheless, some issues remain: the number of times the loop should be unfolded is unknown a priori, so the assumed value could be too small, giving rise to interference among different impulse responses, or too large, thereby oversizing the computational cost. Furthermore, the stability of the impulse response remains a main issue to be solved [21].
In the present work, a new training algorithm is presented which simultaneously overcomes the training and stability problems, thereby extending the applicability of LRNNs. The organization of the paper is as follows. In Section 2 the neural model is presented. In Section 3, the method is applied to the forecasting of a chaotic series. In Section 4 the results are discussed and some conclusions are drawn.

2. Neural Model

The global structure of an LRNN is like that of an MLP, where neurons are organized in layers, but dynamic properties are achieved using neurons with internal feedback. In Figure 1 the assumed structure of the NN is shown. For the sake of simplicity and without loss of generality, the NN has a single-input single-output structure, only one hidden layer, where the dynamic element of the network is concentrated, and a linear activation function is assigned to the output neuron. In the rest of the paper we will refer to this neural structure, since such a treatment has the advantage of simplicity and meets the needs of the paper. The dynamic part of the network is an ARMA filter, where the parameters $a_i$ form the IIR part and the $b_i$ the FIR part. The FIR part is a feedforward structure, for which the literature provides a range of efficient methodologies; therefore, this work will focus only on the IIR part.
Let us consider the calculation of the state $x_t$ of the $k$-th hidden neuron:
$$x_t = \underline{a}^{T} \cdot \underline{x}_{t-1} + s_k(t) \qquad (1)$$
where $\underline{a}$ is the vector of feedback gains, $(\cdot)^{T}$ indicates the transpose operator, $\underline{x}_{t-1}$ is the state vector of the delay line, and $s_k(t)$ is the current input of the neuron. Referring to Figure 1(b), the output of the neuron is calculated as:
$$y_k(t) = f\left(b_0 \cdot s_k(t) + \underline{b}^{T} \cdot \underline{x}_{t-1}\right) \qquad (2)$$
where $\underline{b}$ is the vector of forward gains, $b_0$ is the a-dynamic weight, and $f$ is the activation function of the neuron. Equation (2) allows us to write a dynamic loss function to be used for the training (or adaptation) of the NN. Let $\{d\}$ be the desired output sequence of the NN in response to the input sequence $\{i\}$. A loss function can be defined as the mean squared error of the output with respect to the desired sequence:
$$J = \frac{1}{2} \sum_{t=1}^{T} \left(u_t - d_t\right)^2 \qquad (3)$$
where $T$ is the duration of the sequence on which the NN must be trained. The simplest procedure to minimize (3) is based on the gradient of $J$ calculated with respect to all the parameters of the NN. Referring to Figure 1, no difficulty arises in calculating the derivatives of $J$ with respect to the global parameters $w_k$ and $v_k$, with $k = 1, \dots, K$, where $K$ is the number of hidden neurons, nor with respect to the internal parameters $b_j$, with $j = 0, \dots, r$, where $r$ is the number of delays. Instead, the derivatives with respect to the internal parameters $a_j$, with $j = 1, \dots, r$, are troublesome, because the derivative of one sample depends on all the previous ones:
$$\frac{\partial J}{\partial a_{km}} = \sum_{t=1}^{T} \left(u_t - d_t\right) \cdot v_k \cdot \sum_{j=1}^{r} f'(t-j)\, b_j\, \frac{\partial x_{t-j}}{\partial a_{km}} \qquad (4)$$
The last derivative cannot be computed explicitly, as each state $x_{t-j}$, because of the feedback, depends on the whole previous sequence. To make (4) explicit, a formula is needed that allows one to calculate the generic term of the impulse response of the IIR filter. This is obtained by leveraging Binet's formula for the Fibonacci sequence, as described in the next subsection.
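Before moving to that derivation, the following minimal sketch illustrates the forward pass of a single locally recurrent hidden neuron, Eqs. (1)–(2), and the loss (3). The number of delays, the tanh activation, and the parameter and signal values are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

r = 3                               # number of delays in the neuron (assumed)
a = rng.uniform(-0.3, 0.3, r)       # feedback gains, IIR part, Eq. (1)
b = rng.uniform(-0.5, 0.5, r)       # forward gains, FIR part, Eq. (2)
b0 = 0.5                            # a-dynamic weight (assumed)
f = np.tanh                         # activation of the hidden neuron (assumed)

def neuron_forward(s):
    """Run one locally recurrent neuron over the input sequence s."""
    x_delay = np.zeros(r)                               # delay-line state, x_{t-1}
    y = np.zeros_like(s)
    for t, s_t in enumerate(s):
        x_t = a @ x_delay + s_t                         # Eq. (1): new state
        y[t] = f(b0 * s_t + b @ x_delay)                # Eq. (2): neuron output
        x_delay = np.concatenate(([x_t], x_delay[:-1])) # shift the delay line
    return y

s = rng.standard_normal(200)                 # input sequence (illustrative)
d = np.sin(np.linspace(0, 4 * np.pi, 200))   # desired sequence (illustrative)
u = neuron_forward(s)                        # network output; here a single neuron plays that role
J = 0.5 * np.sum((u - d) ** 2)               # loss of Eq. (3)
print(J)
```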

2.1. The Fibonacci Sequence and Binet's Formula

The Fibonacci sequence (1, 1, 2, 3, 5, 8, ...) is a numeric sequence which owes its popularity to the fact that it reflects a ubiquitous growth pattern in nature. It is described analytically by the following finite difference equation:
$$x_n = x_{n-1} + x_{n-2} \qquad (5)$$
and, as can be seen, all the previous terms are required to calculate the generic term. As can be easily demonstrated, (5) is the expression of the impulse response of a second-order IIR filter with unitary feedback gains, such as the one represented in Figure 2.
Jacques Philippe Marie Binet (1786-1856) provided a formula which allows one to calculate any term of the Fibonacci sequence directly, without calculating the previous ones. The development of that formula is briefly summarized here.
Demonstration of Binet's formula. Let us assume that an explicit function providing the terms of the Fibonacci sequence exists and has the general expression:
$$h_j = C z^j \qquad (6)$$
with $C$ and $z$ constant values to be determined. By substituting (6) into (5), the following expression is obtained:
$$C z^j = C z^{j-1} + C z^{j-2}$$
and then:
$$C z^{j-2} \left(z^2 - z - 1\right) = 0 \qquad (7)$$
Equation (7) has two trivial solutions ($C = 0$; $z = 0$), which must be excluded because they cannot generate the sequence, and two non-trivial solutions, namely the roots of the polynomial within brackets. These two solutions are:
$$z_1 = \frac{1 + \sqrt{5}}{2}; \quad z_2 = \frac{1 - \sqrt{5}}{2} \qquad (8)$$
It is worth noting that the first root $z_1$ in (8) is the golden ratio. Let us now assume that the sought function is obtained as a linear combination of the two solutions corresponding to the two roots $z_1$ and $z_2$:
$$h_j = C_1 z_1^j + C_2 z_2^j \qquad (9)$$
with $C_1$ and $C_2$ to be determined. To this end, we can impose the correspondence with two arbitrary values of the sequence, for example the first two: 1, 1.
$$\begin{cases} h_1 = C_1 z_1 + C_2 z_2 = 1 \\ h_2 = C_1 z_1^2 + C_2 z_2^2 = 1 \end{cases} \qquad (10)$$
The solution of the system (10) is $C_1 = \frac{1}{\sqrt{5}}$, $C_2 = -\frac{1}{\sqrt{5}}$, from which the following expression of Binet's formula follows:
$$h_j = \frac{1}{\sqrt{5}} \left[ \left(\frac{1 + \sqrt{5}}{2}\right)^{j} - \left(\frac{1 - \sqrt{5}}{2}\right)^{j} \right] \qquad (11)$$
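As a quick numerical check of Eq. (11), the sketch below compares Binet's closed form with the plain recurrence (5); the function names are arbitrary.

```python
import math

def fib_recurrence(n):
    """n-th Fibonacci term via the finite difference equation (5)."""
    x = [1, 1]
    for _ in range(n - 2):
        x.append(x[-1] + x[-2])
    return x[n - 1]

def fib_binet(j):
    """j-th Fibonacci term via Binet's formula, Eq. (11)."""
    sqrt5 = math.sqrt(5)
    z1, z2 = (1 + sqrt5) / 2, (1 - sqrt5) / 2
    return (z1 ** j - z2 ** j) / sqrt5

for j in range(1, 11):
    print(j, fib_recurrence(j), round(fib_binet(j)))
```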

2.2. Exploiting Binet's Formula to Calculate the IIR Impulse Response

The method used to derive Binet's formula can be applied, without formal changes, to calculate the impulse response of a generic IIR filter.
Consider an IIR filter such as the one described in Figure 3, with $r$ delays, whose impulse response satisfies:
$$x_n = a_1 x_{n-1} + \dots + a_r x_{n-r} \qquad (12)$$
Let us assume that a function exists which provides the arbitrary term of its impulse response:
$$h_j = C z^j \qquad (13)$$
By substituting (13) into (12), it results:
$$C z^j = a_1 C z^{j-1} + \dots + a_r C z^{j-r} \qquad (14)$$
and finally:
$$C \cdot z^{j-r} \left(z^r - a_1 z^{r-1} - \dots - a_r\right) = 0 \qquad (15)$$
The non-trivial solutions of (15) are the $r$ roots of the polynomial between brackets. From them, the following general solution can be obtained:
$$h_j = C_1 z_1^j + \dots + C_r z_r^j \qquad (16)$$
with $C_1, \dots, C_r$ to be determined. To this end, the first $r$ samples of the impulse response are calculated and imposed in the following system of linear equations:
$$\begin{cases} h_1 = C_1 z_1 + \dots + C_r z_r \\ \;\;\vdots \\ h_r = C_1 z_1^r + \dots + C_r z_r^r \end{cases} \qquad (17)$$
The solution of (17) provides the set of combination coefficients of the sought function:
$$h_j = C_1 z_1^j + \dots + C_r z_r^j \qquad (18)$$
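The procedure of Eqs. (12)–(18) can be verified numerically. The sketch below chooses arbitrary stable feedback gains, extracts the roots of the characteristic polynomial (15), solves the system (17), and checks the closed form (18) against the recursion (12); the coefficient values and the truncation length are illustrative assumptions.

```python
import numpy as np

a = np.array([0.9, -0.5, 0.1])       # feedback gains a_1..a_r (assumed, stable)
r = len(a)
N = 20                               # number of impulse-response samples to compare

# Impulse response by the recursion of Eq. (12), unit impulse applied at n = 1
h = np.zeros(N + 1)                  # h[n] stores h_n; index 0 is the zero sample before the impulse
h[1] = 1.0
for n in range(2, N + 1):
    h[n] = sum(a[j - 1] * h[n - j] for j in range(1, r + 1) if n - j >= 0)

# Roots of the characteristic polynomial z^r - a_1 z^(r-1) - ... - a_r, Eq. (15)
z = np.roots(np.concatenate(([1.0], -a)))

# Combination coefficients C_1..C_r from the first r samples, Eq. (17)
V = np.vander(z, r + 1, increasing=True)[:, 1:].T    # V[j-1, m-1] = z_m^j
C = np.linalg.solve(V, h[1:r + 1])

# Closed form of Eq. (18) versus the recursion
for j in range(1, N + 1):
    h_closed = float(np.real(np.sum(C * z ** j)))
    print(j, round(h[j], 6), round(h_closed, 6))
```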

2.3. Derivative of the Loss Function with Respect to the Feedback Parameters

Equation (4) describes the derivative of the loss function with respect to the generic feedback parameter. As remarked in the previous sections, such derivatives require the sequence of all the previous samples, so an explicit calculation is impossible unless a limit is imposed on the duration of the impulse response. As said before, the vanishing time of the impulse response depends on the very feedback parameters we are calculating, so that such an assumption is subject to uncertainty. The procedure described in Section 2.2 allows one to fix this problem. The IIR filter is a linear system, therefore its response to a generic input sequence can be expressed as the convolution product between the input signal and the impulse response:
$$x_t = \left(s * h\right)(t) \qquad (19)$$
Equation (19) allows us to calculate the derivatives with respect to the roots $z_j$ rather than the feedback coefficients $a_j$. This makes it possible to keep the stability of the NN under control. In fact, as the state of the neurons depends on the roots $z_j$, if these are constrained to have modulus less than 1, the stability of the network is guaranteed regardless of the other parameters. Therefore, it is convenient to train the network by adapting the roots, and then calculating the corresponding feedback parameters, which are the coefficients of the polynomials having the $z_j$ as roots. Equation (4) for the derivatives of the loss function with respect to the feedback parameters is then replaced by the following one:
$$\frac{\partial J}{\partial z_{km}} = \sum_{t=1}^{T} \left(u_t - d_t\right) \cdot v_k \cdot f'_k(t) \sum_{j=1}^{r} b_j \left(s_k * \frac{\partial h_k}{\partial z_{km}}\right)(t-j) \qquad (20)$$
where $f'_k(\cdot)$ is the derivative of the activation function of the hidden layer. From (18) it follows that $h_k(j)$ is a polynomial in $z_k$, and thus only one term of its derivative is non-null. The derivative finally writes:
$$\frac{\partial J}{\partial z_{km}} = \sum_{t=1}^{T} \left(u_t - d_t\right) \cdot v_k \cdot w_k \cdot f'_k(t) \cdot C_{km} \sum_{j=1}^{r} b_j \left(i_{t-j} * z_{km}^{\,t-j-1}\right) \qquad (21)$$
The convolution product in (21) implies that a limit must still be assumed for the duration of the impulse response, but thanks to the use of the roots this limit can be established a priori.
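The key ingredient of Eqs. (20)–(21) is the derivative of the closed-form impulse response (18) with respect to one root. The sketch below evaluates it directly, treating the combination coefficient as fixed as the paper does, and checks it against a finite difference; the root values, coefficients, and truncation length are assumptions made for illustration.

```python
import numpy as np

z = np.array([0.8, -0.4])        # roots of the hidden-neuron IIR (assumed, modulus < 1)
C = np.array([0.7, 0.3])         # combination coefficients from Eq. (17) (held fixed)
L = 60                           # truncation of the impulse response (safe, since |z| < 1)
n = np.arange(1, L + 1)

def h(z, C):
    """Closed-form impulse response of Eq. (18), samples h_1..h_L."""
    return (C[:, None] * z[:, None] ** n).sum(axis=0)

m = 0                                            # root we differentiate with respect to
dh_analytic = C[m] * n * z[m] ** (n - 1)         # d h_n / d z_m, with C_m held fixed

eps = 1e-6                                       # finite-difference check
z_pert = z.copy()
z_pert[m] += eps
dh_numeric = (h(z_pert, C) - h(z, C)) / eps

print(np.max(np.abs(dh_analytic - dh_numeric)))  # should be of the order of 1e-5 or smaller
```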

2.4. The Training Algorithm

A simple gradient descent method [22] is used to train the LRNN:
$$\underline{\Gamma}_{p+1} = \underline{\Gamma}_{p} - \eta \nabla J \qquad (22)$$
where $\underline{\Gamma}$ is the set of all the parameters of the NN, both forward and feedback, $p$ is the iteration index, and $\eta$ is the learning rate. The gradient is calculated with respect to all the independent parameters, considering that both the feedback coefficients $a_j$ and the coefficients $C_j$ of the impulse response are univocally determined once the values of the roots $z_j$ have been established. It is worth noting that the values of the coefficients $C_j$ cannot affect the stability of the NN. The only parameters which affect stability are the roots $z_j$, so the iterative procedure applying (22) must be understood in constrained terms: when updating the set of parameters $\underline{\Gamma}$, roots with modulus greater than 1 must be avoided, either by truncating the increment or by properly projecting the update. The use of more advanced procedures, although possible, is beyond the scope of this study.
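A minimal sketch of the constrained update (22) on the roots is shown below: the roots are moved along the negative gradient and, if any of them leaves the unit circle, its modulus is rescaled back inside it; the learning rate, the safety radius, and the gradient values are placeholders, not values from the paper.

```python
import numpy as np

eta = 0.01          # learning rate (assumed)
rho_max = 0.99      # maximum allowed modulus of the roots (assumed safety margin)

def update_roots(z, grad_z):
    """One constrained gradient-descent step on the roots, Eq. (22)."""
    z_new = z - eta * grad_z
    mod = np.abs(z_new)
    outside = mod >= rho_max
    z_new[outside] *= rho_max / mod[outside]     # project back inside the unit circle
    return z_new

def roots_to_feedback(z):
    """Feedback coefficients a_1..a_r of the polynomial having z as roots, Eq. (15)."""
    p = np.poly(z)                               # coefficients [1, -a_1, ..., -a_r]
    return -np.real(p[1:])

z = np.array([0.85, -0.60])                      # current roots of one neuron (illustrative)
grad_z = np.array([-20.0, 5.0])                  # dJ/dz from Eq. (21) (placeholder values)
z = update_roots(z, grad_z)
print(z, roots_to_feedback(z))
```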

3. Results

As a case study, the Willamowski-Rössler reaction [23,24] has been chosen. This benchmark is widely known because it was the first case showing that deterministic chaos can be generated by a chemical reaction. The process consists of a multi-step catalytic reaction in an open system, which involves five species among initiators and products, plus three intermediates. The following equations describe the five steps of the process:
$$\begin{aligned}
A_1 + X &\;\underset{K_{-1}}{\overset{K_1}{\rightleftharpoons}}\; 2X \\
X + Y &\;\underset{K_{-2}}{\overset{K_2}{\rightleftharpoons}}\; 2Y \\
A_5 + Y &\;\underset{K_{-3}}{\overset{K_3}{\rightleftharpoons}}\; A_2 \\
X + Z &\;\underset{K_{-4}}{\overset{K_4}{\rightleftharpoons}}\; A_3 \\
A_4 + Z &\;\underset{K_{-5}}{\overset{K_5}{\rightleftharpoons}}\; 2Z
\end{aligned}$$
where $A_1$, $A_4$ and $A_5$ are the initiators, $A_2$ and $A_3$ are the products, and $X$, $Y$ and $Z$ are the intermediates. By combining the five step equations, the following overall reaction is obtained:
$$A_1 + A_4 + A_5 \rightarrow A_2 + A_3$$
In an open system, the concentrations of both initiators and products are constant in nominal conditions, while the concentrations of the three intermediate species exhibit chaotic behavior. By taking $X$, $Y$ and $Z$ as state variables, the phase space can be represented graphically. The evolution of the state variables can be studied by means of the following set of differential equations:
$$\begin{aligned}
\dot{X} &= K_1 X - K_{-1} X^2 - K_2 X Y + K_{-2} Y^2 - K_4 X Z + K_{-4} \\
\dot{Y} &= K_2 X Y - K_{-2} Y^2 - K_3 Y + K_{-3} \\
\dot{Z} &= -K_4 X Z + K_5 Z - K_{-5} Z^2 + K_{-4}
\end{aligned}$$
In Figure 4 the evolution in the phase space is shown, corresponding to the following reaction rates: $K_1 = 30$, $K_{-1} = 0.25$, $K_2 = 1$, $K_{-2} = 10^{-4}$, $K_3 = 10$, $K_{-3} = 10^{-3}$, $K_4 = 1$, $K_{-4} = 10^{-3}$, $K_5 = 16.5$, $K_{-5} = 0.5$, and initial conditions $X_0 = 0.21$, $Y_0 = 0.01$ and $Z_0 = 0.12$ [24].
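The chaotic trajectory used as a benchmark can be reproduced with a standard ODE solver. The sketch below integrates the rate equations above with the reaction rates and initial conditions reported in the text; the solver choice and the tolerances are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reaction rates and initial conditions as reported in the text
K1, Km1 = 30.0, 0.25
K2, Km2 = 1.0, 1e-4
K3, Km3 = 10.0, 1e-3
K4, Km4 = 1.0, 1e-3
K5, Km5 = 16.5, 0.5

def willamowski_rossler(t, state):
    X, Y, Z = state
    dX = K1 * X - Km1 * X**2 - K2 * X * Y + Km2 * Y**2 - K4 * X * Z + Km4
    dY = K2 * X * Y - Km2 * Y**2 - K3 * Y + Km3
    dZ = -K4 * X * Z + K5 * Z - Km5 * Z**2 + Km4
    return [dX, dY, dZ]

t_eval = np.arange(0.0, 20.0, 0.005)          # 20 s sampled every 5 ms, as in the text
sol = solve_ivp(willamowski_rossler, (0.0, 20.0), [0.21, 0.01, 0.12],
                method="LSODA",               # the system can be stiff (assumed solver choice)
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

X, Y, Z = sol.y                               # sampled state sequences used to train the LRNNs
print(sol.y.shape)
```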
Figure 4. Phase space trajectory of the Willamowski-Rössler model.
In this process, chaotic behavior is necessary to obtain an efficient mixing of the reacting species while avoiding the expenditure of a great quantity of energy. Therefore, it is important to forecast any variation of the parameters before the chaotic behavior of the system is lost. In Figure 5 the evolution of the state due to a variation of one reaction rate is reported.
Three LRNNs have been trained, one for each state variable, to predict the value of that variable one time step ahead. The optimal structure of the network has been determined by means of a trial-and-error procedure, so as to optimize the forecasting capability. The neural network used to perform the test has a 1-25-1 structure, with only one delay for each hidden neuron and no delay in the output neuron. As activation function, the hyperbolic tangent has been assigned to the hidden neurons, while the output neuron is linear. A sequence of 20 s of each state variable of the system in nominal conditions has been acquired with a sample time of 5 ms, obtaining sequences of 1000 samples, and these sequences have been used to train the neural networks. In Figure 6 the evolution of the Mean Squared Error (MSE) during the training phase is shown for the variable X. As can be seen, the adapted NN is able to perform a good approximation of the chaotic behavior of the system.
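As a sketch of how the sampled sequences can be arranged into one-step-ahead training pairs for each predictor (the helper name and the placeholder signal are assumptions, not details given in the paper):

```python
import numpy as np

def one_step_ahead_pairs(seq):
    """Input/target pairs for one-step-ahead prediction of a sampled state variable."""
    return seq[:-1], seq[1:]        # input at time t, target at time t + 1

# `X` stands for the sampled sequence of one state variable (e.g., from the integration above);
# a placeholder signal is used here so that the snippet runs on its own.
X = np.sin(np.linspace(0.0, 40.0, 1000))
inputs, targets = one_step_ahead_pairs(X)
print(inputs.shape, targets.shape)
```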
Figure 6. Prediction error of the state variable X during the adaptation of the NN.

4. Discussion and Conclusion

The present work introduces a new paradigm for the training of locally recurrent neural networks. The relevance of such neural networks comes from the fact that they combine the advantages of feedback with a relatively moderate computational complexity. This is because the dynamics is obtained by means of local linear structures, such as FIR and IIR filters. A wide body of literature confirms that this simplification does not reduce the potential of these networks with respect to globally recurrent ones, while better performance is achieved because they can be trained with a moderate computational cost. Nonetheless, open issues remain, because of the implicit dependence on past samples and because stability cannot be established a priori. The method introduced in this paper provides a solution to both issues. By extending Binet's formula for the calculation of the Fibonacci sequence terms, a method is presented which allows one to directly calculate the terms of the impulse response of the IIR structure, so that the effect of a change in the parameters of the filter can be estimated a priori. Furthermore, rather than in terms of feedback coefficients, the IIR is expressed in terms of the roots of its characteristic polynomial, which makes it possible to evaluate a priori the effect of a parameter change in terms of stability. The efficiency of the method has been evaluated by developing a predictor of the chaotic behavior of a chemical reactor. The aim of the present paper was to present the theoretical basis of the method and to demonstrate its efficiency in solving non-trivial problems. An extensive comparison with other methods from the literature was beyond the scope of the paper and will be the subject of future work.

Author Contributions

Conceptualization, A.M.; methodology, S.C. and A.M.; software, S.C. and A.M.; validation, S.C.; formal analysis, A.M.; investigation, S.C.; resources, S.C.; data curation, S.C.; writing—original draft preparation, A.M.; writing—review and editing, S.C.; visualization, S.C.; supervision, A.M. All authors have read and agreed to the published version of the manuscript.

References

1. Han, Y.; Huang, G.; Song, S.; Yang, L.; Wang, H.; Wang, Y. Dynamic Neural Networks: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022, 44, 7436–7456.
2. Waibel, A. Modular Construction of Time-Delay Neural Networks for Speech Recognition. Neural Computation 1989, 1, 39–46.
3. Bromley, J.; Guyon, I.; LeCun, Y.; Säckinger, E.; Shah, R. Signature Verification using a “Siamese” Time Delay Neural Network. In Proceedings of the Advances in Neural Information Processing Systems; Morgan-Kaufmann, 1993; Vol. 6.
4. Cao, J.; Wang, J. Global asymptotic and robust stability of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I: Regular Papers 2005, 52, 417–426.
5. Huang, W.; Yan, C.; Wang, J.; Wang, W. A time-delay neural network for solving time-dependent shortest path problem. Neural Networks 2017, 90, 21–28.
6. Nerrand, O.; Roussel-Ragot, P.; Personnaz, L.; Dreyfus, G.; Marcos, S. Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms. Neural Computation 1993, 5, 165–199.
7. Htike, K.K.; Khalifa, O.O. Rainfall forecasting models using focused time-delay neural networks. In Proceedings of the International Conference on Computer and Communication Engineering (ICCCE’10); 2010; pp. 1–6.
8. Grossberg, S. Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks 1988, 1, 17–61.
9. Haykin, S.S. Adaptive Filter Theory; Pearson Education, 2002; ISBN 978-81-317-0869-9.
10. Widrow, B.; Glover, J.R.; McCool, J.M.; Kaunitz, J.; Williams, C.S.; Hearn, R.H.; Zeidler, J.R.; Eugene Dong, Jr.; Goodlin, R.C. Adaptive noise cancelling: Principles and applications. Proceedings of the IEEE 1975, 63, 1692–1716.
11. Pedro, J.C.; Maas, S.A. A comparative overview of microwave and wireless power-amplifier behavioral modeling approaches. IEEE Transactions on Microwave Theory and Techniques 2005, 53, 1150–1163.
12. Krotov, D. A new frontier for Hopfield networks. Nature Reviews Physics 2023, 5, 366–367.
13. Jordan, M.I. Chapter 25 - Serial Order: A Parallel Distributed Processing Approach. In Advances in Psychology; Donahoe, J.W., Packard Dorsel, V., Eds.; Neural-Network Models of Cognition; North-Holland, 1997; Vol. 121, pp. 471–495.
14. Elman, J.L. Finding Structure in Time. Cognitive Science 1990, 14, 179–211.
15. Back, A.D.; Tsoi, A.C. FIR and IIR Synapses, a New Neural Network Architecture for Time Series Modeling. Neural Computation 1991, 3, 375–385.
16. Tsoi, A.C. Recurrent neural network architectures: An overview. In Adaptive Processing of Sequences and Data Structures: International Summer School on Neural Networks “E.R. Caianiello”, Vietri sul Mare, Salerno, Italy, September 6–13, 1997, Tutorial Lectures; Giles, C.L., Gori, M., Eds.; Springer: Berlin, Heidelberg, 1998; ISBN 978-3-540-69752-7.
17. Campolucci, P.; Piazza, F. Intrinsic stability-control method for recursive filters and neural networks. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 2000, 47, 797–802.
18. Werbos, P.J. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 1990, 78, 1550–1560.
19. Campolucci, P.; Uncini, A.; Piazza, F. Causal back propagation through time for locally recurrent neural networks. In Proceedings of the 1996 IEEE International Symposium on Circuits and Systems (ISCAS 96); 1996; Vol. 3, pp. 531–534.
20. Campolucci, P.; Uncini, A.; Piazza, F. Fast adaptive IIR-MLP neural networks for signal processing applications. In Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing; 1996; Vol. 6, pp. 3529–3532.
21. Campolucci, P.; Piazza, F. Intrinsic stability-control method for recursive filters and neural networks. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 2000, 47, 797–802.
22. Battiti, R. First- and Second-Order Methods for Learning: Between Steepest Descent and Newton’s Method. Neural Computation 1992, 4, 141–166.
23. Willamowski, K.-D.; Rössler, O.E. Irregular Oscillations in a Realistic Abstract Quadratic Mass Action System. Zeitschrift für Naturforschung A 1980, 35, 317–318.
24. Niu, H.; Wang, H.; Zhang, Q. The Chaos Anti-control in the Willamowski-Rössler Reaction. In Proceedings of the 2010 International Workshop on Chaos-Fractal Theories and Applications; 2010; pp. 87–91.
Figure 1. Structure of the LRNN. In (a) the global structure is shown, which is identical to that of an MLP. Without loss of generality, a one-input one-output structure is assumed. The symbols between curly brackets are time sequences, while the values without brackets indicate the weights. In (b) the internal structure of a hidden neuron is represented. The blocks labeled $z^{-1}$ constitute the delay line. The bank of $b_i$ gains contains the weights of the FIR filter, while the $a_i$ gains are the IIR weights. The gain $b_0$ represents the a-dynamic connection with the output, and $f$ is the nonlinear activation function of the neuron.
Figure 2. IIR filter whose impulse response corresponds to the Fibonacci sequence.
Figure 3. IIR filter with a generic memory depth.
Figure 5. State evolution in the presence of a 3% deviation of the reaction rate $K_1$.