Preprint
Article

This version is not peer-reviewed.

Parametric Identifiability of Dynamic Systems Based on Adaptive Systems

Submitted:

22 June 2025

Posted:

23 June 2025


Abstract
Many publications have been devoted to the problem of parametric identifiability (PI), with the major focus on a priori identifiability. Parametric identifiability from experimental data (so-called practical identifiability, PID) is less studied; it is essentially a parametric identification task. PI has not previously been studied on current data (in adaptive systems). We propose an approach to estimating PI based on the application of Lyapunov functions and show the role of constant excitation. Conditions of local parametric identifiability (LPI) for a class of linear dynamical systems are obtained from current experimental data. Both the case where the state vector is measured and the input-output case are considered. Estimates are obtained for the parametric residual, and the case of limiting LPI on the set of current data is studied. The influence of initial conditions on PI is analysed, and the case of m-parametric identifiability is studied. An approach to estimating the PI of linear dynamical systems and of systems with periodic coefficients based on the application of Lyapunov exponents is proposed. The LPI of decentralised systems is analysed. Examples are given.

1. Introduction

Estimation of model parameters is possible if the conditions guaranteeing their recovery are met. Many publications have been devoted to identifiability (see, for example, [1,2,3,4,5,6,7,8,9,10]). Much attention is paid to the analysis of a priori identifiability (AI) (in the literature, structural identifiability). AI conditions often have an algebraic form. To obtain them, approaches such as differential algebra [11], time series analysis [12] and some others [4,13,14,15] are used. The role of observability [16] in identifiability problems has been noted.
Some authors study the identifiability problem on the basis of experimental data (see the review [4]). This is practical identifiability. PID is based on obtaining a mathematical model and verifying it. This approach gives good results for systems with a known structure. In [17], low-order models are used to handle unidentifiable parameters. This approach relies on performing many adjustments.
Statistical hypotheses and criteria are used to estimate unidentifiable parameters. The likelihood profile of a parameter is used in [18]. Markov chain Monte Carlo methods [19] are used to estimate unidentifiable parameters. Applying these approaches involves certain difficulties.
The Fisher information matrix is used to solve PID problems [20]. Other statistical approaches are discussed in [4]. The result of solving the PID problem is a model with an accurate forecast; if this is not achieved, the structural identification problem is solved. A more complete analysis of the state of the PID problem is given in the review [4]. Note that this interpretation of the PID problem does not reflect it accurately: it is a parametric identification problem with decision-making elements.
As the presented analysis shows, the emphasis is on the study of the AI problem. Practical identifiability has not been sufficiently investigated. The focus is on synthesizing a mathematical model by various methods and evaluating its predictive properties. Various statistics, methods, and criteria are used for decision-making about PI. If the parametric identifiability condition is not met, various multistep procedures are proposed. These approaches are not always effective. For more complex classes of systems (multidimensional, decentralized, and interconnected), this problem requires further investigation. PI issues have not been considered in adaptive systems.
In this paper, we study the PI problem for a class of adaptive models. An approach is proposed for obtaining conditions of local PI based on a class of adaptive algorithms. Conditions for limiting LPI are obtained. We show the dependence of the properties of an adaptive identification system (ASI) on the initial conditions. A generalization of the results is given for the case of m-parametric identifiability. The case of linear systems with periodic parameters is considered; the solution of the PI problem is reduced to the application of Lyapunov exponents.

2. Problem Statement

Consider the system
$$\dot X = \bar A X + B u, \qquad y = C^T X, \tag{1}$$
where $u, y$ are the input and output, $X \in \mathbb{R}^n$ is the state vector, $C = [1\ 0\ \cdots\ 0]^T$, $B = [0\ 0\ \cdots\ 0\ b]^T$, $\bar A \in \mathbb{R}^{n \times n}$.
The set of experimental data is
$$I(t) = \{u(t),\ y(t),\ t \in J = [t_0, t_k]\}. \tag{2}$$
Assumption 1.
$\bar A$ is a Hurwitz matrix.
Problem. Evaluate the parametric identifiability of system (1) based on analysis of the set $I(t)$.

3. Approach to PI Estimation

The following representation in the space $(u, y)$ is valid for system (1):
$$\dot y = A^T P, \tag{4}$$
where $A \in \mathbb{R}^{2n}$ is the vector of parameters and $P \in \mathbb{R}^{2n}$ is the generalised input vector, obtained by processing $(u, y)$ with a system of auxiliary filters.
To estimate the elements of the vector $A$, introduce a model based on the set $I_t = \{u(t), y(t)\}$ for each $t$:
$$\dot{\hat y} = -k(\hat y - y) + \hat A^T P, \tag{5}$$
where $\hat A \in \mathbb{R}^{2n}$ is the vector of model parameters and $k > 0$ is a parameter setting the properties of the model.
The equation for the identification (prediction) error $e = \hat y - y$ is
$$\dot e = -k e + \Delta A^T P, \tag{6}$$
where $\Delta A = \hat A - A$.
Let the elements $p_i \in P$ be constantly excited (CE):
$$\mathrm{CE}_{p_i}:\quad \underline{\alpha}_i \le p_i^2(t) \le \bar\alpha_i \quad \forall t \in [t_0, T], \tag{7}$$
where $\underline{\alpha}_i > 0$, $\bar\alpha_i > 0$.
Notation:
(i) $S_X(\bar A)$ is the class of systems (1);
(ii) $S_y(A)$ is the congruent representation (4) on the set $I_t = \{u(t), y(t)\},\ t > t_0$;
(iii) $\Omega_i$ is the frequency spectrum of the element $p_i \in P$;
(iv) $p_i \in \mathrm{CE}_{p_i}$, or briefly $p_i \in \mathrm{CE}$, if the CE condition (7) holds for $p_i$;
(v) $p_i \notin \mathrm{CE}$ if the CE condition does not hold;
(vi) the variable $u(t) \in \mathrm{CE}$ if it has a non-degenerate frequency spectrum for all $t \in J$.
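Whether a candidate regressor satisfies an excitation condition can be checked numerically. A minimal sketch, using the common integral (Gram-matrix) form of persistent excitation as a stand-in for the pointwise bounds of the CE condition; the regressors and the window length are assumed purely for illustration:

```python
import numpy as np

def excitation_bounds(P_samples, dt):
    """P_samples: (N, m) array of regressor samples over one window.
    Returns the extreme eigenvalues of the Gram matrix dt * P^T P,
    playing the role of the lower/upper excitation levels."""
    G = dt * P_samples.T @ P_samples
    w = np.linalg.eigvalsh(G)
    return w[0], w[-1]

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
P_rich = np.column_stack([np.sin(t), np.sin(2.3 * t)])   # two frequencies
P_poor = np.column_stack([np.sin(t), 2.0 * np.sin(t)])   # linearly dependent

lo_rich, hi_rich = excitation_bounds(P_rich, dt)
lo_poor, hi_poor = excitation_bounds(P_poor, dt)
print(lo_rich > 0.1, lo_poor < 1e-6)  # rich regressor is excited, poor one is not
```

A regressor built from two distinct frequencies keeps the Gram matrix well conditioned, while a rank-deficient regressor drives its smallest eigenvalue to zero.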
Definition 1.
The system (1) of class $S_X(\bar A)$ is locally parametrically identifiable on the set $I_t$ if the condition
$$A \in G_A = \{A \in \mathbb{R}^{2n} : \|A - A^*\| \le \varepsilon_A\} \quad \forall t \ge t^* > t_0\ \&\ I_t \tag{8}$$
is fulfilled for its representation (4) in the class $S_y(A)$, where $A^*$ is some reference vector of the parameters of system (4) and $\varepsilon_A \ge 0$.
We see that if there is an algorithm for identifying the vector $\hat A$ of the model (5) on $I(t) = I_t$, $t > t_0$, for some $\hat A(t_0)$, then starting from the moment $t^*$ the condition (8) is fulfilled for the estimates of the vector $A$.
Consider the Lyapunov function $V_e(t) = 0.5\, e^2(t)$.
Theorem 1.
Let 1) Assumption 1 hold, i.e., the matrix $\bar A$ in (1) be Hurwitz; 2) the system (1) have the representation (4) on the set $I(t)$; 3) the identification system be described by equation (6); 4) $u \in \mathrm{CE}_u$, $y \in \mathrm{CE}_y$, $P \in \mathrm{CE}_P$. Then the system (4) is locally parametrically identifiable in the region $G_A$ if $\Delta A = 0$ follows from $\Delta A^T P = 0$ and the condition
$$\|\Delta A(t)\|^2 \le \frac{2 k^2}{\underline{\alpha}_P}\, V_e(t) \tag{9}$$
is satisfied, where $P(t) P^T(t) \ge \underline{\alpha}_P I_n$, $\underline{\alpha}_P > 0$, and $I_n \in \mathbb{R}^{n \times n}$ is the identity matrix.
The proof of Theorem 1 is given in Appendix A.
Remark 1.
Reconstruction of the vector $X(t)$ in (1) based on (4) and schemes proposed in the literature gives estimates that do not correspond to the components of $X(t)$; this follows directly from (4). Therefore, adaptive control laws based on the elements of the vector $P$ are applied in control systems. The estimate $x_2 = \dot y$, $x_2 \in X$, can be obtained directly from (4). The remaining components of $X(t)$ are determined by symbolic differentiation.
Corollary of Theorem 1.
If the adaptive algorithm
$$\dot{\hat A} = -\Gamma e P \tag{10}$$
is used to estimate the vector $A$ in (4), then the local parametric identifiability of the system (1) follows from the estimate
$$V(t) \le V(t_0) - 2k \int_{t_0}^{t} V_e(\tau)\, d\tau,$$
where $V(t) = V_e(t) + V_\Delta(t)$, $V_\Delta(t) = 0.5\, \Delta A^T \Gamma^{-1} \Delta A$, and $\Gamma = \Gamma^T > 0$ is a diagonal matrix.
Let the elements of the vector $P(t)$ be measurable for each $t$. Then the system (4) is detectable, and observability, detectability and recoverability of the system (1) follow from the properties of system (4).
The proof of the corollary of Theorem 1 is given in Appendix B.
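The identification loop formed by the representation (4), the model (5) and the adaptive algorithm (10) can be sketched numerically. In this minimal simulation the true vector $A$, the gains $k$ and $\Gamma$, and a two-frequency regressor standing in for the filter outputs $P(t)$ are all assumed for illustration:

```python
import numpy as np

dt, T = 1e-3, 60.0
A_true = np.array([2.0, -1.0])           # assumed "true" parameter vector A
k = 5.0                                  # model gain k > 0
gamma = np.array([20.0, 20.0])           # diagonal of Gamma > 0

y, y_hat = 0.0, 0.0
A_hat = np.zeros(2)                      # adjustable parameters of model (5)
for i in range(int(T / dt)):
    t = i * dt
    P = np.array([np.sin(t), np.sin(2.3 * t)])  # CE regressor (two frequencies)
    e = y_hat - y                        # prediction error of (6)
    dy = A_true @ P                      # system (4): dy/dt = A^T P
    dy_hat = -k * e + A_hat @ P          # model (5)
    dA_hat = -gamma * e * P              # adaptive algorithm (10)
    y += dy * dt
    y_hat += dy_hat * dt
    A_hat += dA_hat * dt

print(np.round(A_hat, 2))  # approaches A_true when P(t) is constantly excited
```

With a CE regressor the estimates enter a small neighbourhood of the reference vector, which is the behaviour Definition 1 and the corollary describe.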
Consider the Lyapunov function (LF) $V_E = 0.5\, E^T R E$, where $R = R^T > 0$.
The structures of the classes $S_X(\bar A)$ and $S_y(A)$ are congruent, so the following statement is valid for system (1).
Theorem 2.
Let 1) the conditions of Theorem 1 be fulfilled for the system (4) of class $S_y(A)$; 2) the classes $S_X(\bar A)$ and $S_y(A)$ be congruent; 3) $u \in \mathrm{CE}_u$, $X(t) \in \mathrm{CE}_X$. Then system (1) is locally parametrically identifiable on the class $S_X(\bar A)$ if
$$\bar\alpha_X \|\Delta \bar A\|^2 + \bar\alpha_u \|\Delta B\|^2 \le 4 \mu^2 V_E,$$
where $E^T Q E \ge \mu E^T R E$, $\mu > 0$, $R K + K^T R = -Q$, $Q = Q^T > 0$, $\|\Delta \bar A\| = \sqrt{\mathrm{tr}(\Delta \bar A^T \Delta \bar A)}$, and $\mathrm{tr}$ is the matrix trace.
The proof of Theorem 2 is given in Appendix C.
Consider the adaptive model
$$\dot{\hat X} = K E + \hat{\bar A} X + \hat B u, \tag{11}$$
and apply the integral class of algorithms
$$\dot{\hat{\bar A}} = -\Gamma_{\bar A} R E X^T, \qquad \dot{\hat B} = -\Gamma_B R E u \tag{12}$$
to tune the matrices $\hat{\bar A}$, $\hat B$.
The identification system is described by the equation
$$\dot E = K E + \Delta \bar A X + \Delta B u, \tag{13}$$
where $E = \hat X - X$, $\hat X \in \mathbb{R}^n$ is the state vector of model (11), and $K \in H$ is a Hurwitz matrix of dimension $n \times n$.
Corollary of Theorem 2.
If the conditions of Theorem 2 are fulfilled and the class of algorithms (12) is used to tune the parameters of model (11), then the local parametric identifiability of the system (1) follows from the estimate
$$V_W(t) \le V_W(t_0) - 2\mu \int_{t_0}^{t} V_E(\tau)\, d\tau. \tag{14}$$
The proof of the corollary of Theorem 2 is given in Appendix D.
We see that local PI depends on the choice of initial conditions and on fulfilment of the requirements on the system variables and input.
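The state-measured case — model (11) tuned by the integral algorithms (12) — can also be sketched numerically. The second-order Hurwitz plant, the matrices $K$ and $R$, the gains and the two-frequency input below are all assumed for illustration:

```python
import numpy as np

dt, T = 1e-3, 150.0
A_true = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz plant matrix (Assumption 1)
B_true = np.array([0.0, 1.0])
K = -4.0 * np.eye(2)                           # Hurwitz model gain K
R = np.eye(2)                                  # R = R^T > 0 from the LF V_E
gA, gB = 40.0, 40.0                            # adaptation gains (diag Gamma)

X = np.zeros(2); X_hat = np.zeros(2)
A_hat = np.zeros((2, 2)); B_hat = np.zeros(2)
for i in range(int(T / dt)):
    t = i * dt
    u = np.sin(t) + np.sin(2.7 * t)            # CE input with two frequencies
    E = X_hat - X                              # error of (13)
    dX = A_true @ X + B_true * u               # system (1)
    dX_hat = K @ E + A_hat @ X + B_hat * u     # model (11)
    dA_hat = -gA * np.outer(R @ E, X)          # algorithms (12)
    dB_hat = -gB * (R @ E) * u
    X += dX * dt; X_hat += dX_hat * dt
    A_hat += dA_hat * dt; B_hat += dB_hat * dt

print(np.round(A_hat, 1), np.round(B_hat, 1))  # approach A_true, B_true
```

Because the full state is measured, all entries of $\bar A$ and $B$ are adjusted simultaneously; convergence again hinges on the excitation of $(X, u)$.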
Remark 2.
The presented results differ from the results of [4], which are based on AI methods. If the decision is made from experimental data, then various statistics [4] are used. In this paper, we apply an approach to PI analysis based on current data. This approach has not previously been used in PI tasks.
If the conditions of Theorem 2 are fulfilled, the class of algorithms (12) will be called locally identifying.
In what follows, for convenience of reference, the adaptive algorithm (10) is assigned to the class $AIA_y$ and the law (12) to the class $AIA_X$.
Definition 2.
A system (1) of class $S_X(\bar A)$ is extremely locally parametrically identifiable (ELPI) on the set $I_t$ if the condition
$$A \in \tilde G_A = \{A \in \mathbb{R}^{2n} : \lim_{t \to \infty} \|A - A^*\| \le \tilde\varepsilon,\ \tilde\varepsilon \in O(0)\} \quad \forall t \ge t^* > t_0\ \&\ I_t \tag{15}$$
is satisfied for its representation (4) in the class $S_y(A)$, where $A^*$ is some reference vector of the parameters of system (4) and $O(0)$ is a neighbourhood of zero.
Here identifiability of the vector $A$ is understood as its limit proximity to $A^*$. Under certain conditions, global PI of the vector $A$ follows from (15).
Consider again the system (6) and the LF $V_\Delta(t) = 0.5\, \Delta A^T(t) \Gamma^{-1} \Delta A(t)$.
Theorem 3.
Let the conditions of Theorem 1 be fulfilled and (i) there exist a Lyapunov function $V_\Delta$ admitting an infinitesimal upper limit; (ii) there exist $\vartheta > 0$ such that for sufficiently large $t$, in some neighbourhood of zero, $e\, \Delta A^T P = \vartheta\left(\|\Delta A\|^2 + e^2\right)$; (iii) $P \in \mathrm{CE}_P$ with parameters $\underline{\alpha}_P, \bar\alpha_P$; (iv) the inequality
$$\dot V_\Delta \le -\frac{3}{4}\,\vartheta\,\underline{\alpha}_P\,\underline{\lambda}_\Gamma V_\Delta + \frac{4\,\bar\alpha_P}{3\,\vartheta\,\underline{\alpha}_P} V_e \tag{16}$$
hold for the trajectories of the adaptive system (6), (10), where $\underline{\lambda}_\Gamma$ is the minimum eigenvalue of the matrix $\Gamma$. Then the system (6), (10) is locally parametrically identifiable on the set $I_t$ with the estimate $V_\Delta(t) \le S_\Delta(t)$ if the functional condition
$$\frac{16\,\bar\alpha_P^2}{9\,\underline{\lambda}_\Gamma\,\underline{\alpha}_P} V_e \le V_\Delta \tag{17}$$
is satisfied, where
$$S_\Delta(t) = e^{-\sigma(t - t_0)} S_\Delta(t_0) + \pi \int_{t_0}^{t} e^{-\sigma(t - \tau)} V_e(\tau)\, d\tau \tag{18}$$
is the upper solution of the comparison system $\dot S_\Delta = -\sigma S_\Delta + \pi V_e$ for (16), provided $S_\Delta(t_0) \ge V_\Delta(t_0)$, with $\pi = \dfrac{4\,\bar\alpha_P}{3\,\vartheta\,\underline{\alpha}_P}$ and $\sigma = 0.75\,\vartheta\,\underline{\alpha}_P\,\underline{\lambda}_\Gamma$.
The proof of Theorem 3 is given in Appendix E.
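The role of the comparison system behind the estimate (18) can be illustrated numerically: the closed-form upper solution coincides with direct integration of $\dot S_\Delta = -\sigma S_\Delta + \pi V_e$. The constants and the decaying error functional $V_e$ below are assumed purely for illustration:

```python
import numpy as np

sigma, pi_c = 0.75, 0.4       # assumed comparison-system constants
dt, T = 1e-4, 10.0
S = 1.0                       # S_Delta(t0), chosen >= V_Delta(t0)

# forward Euler integration of dS/dt = -sigma S + pi V_e
for i in range(int(T / dt)):
    t = i * dt
    V_e = np.exp(-0.5 * t)    # assumed decaying error functional V_e(t)
    S += (-sigma * S + pi_c * V_e) * dt

# closed form (18): e^{-sigma T} S0 + pi * int_0^T e^{-sigma(T-tau)} V_e(tau) dtau
closed = np.exp(-sigma * T) * 1.0 + pi_c * (
    np.exp(-0.5 * T) - np.exp(-sigma * T)) / (sigma - 0.5)
print(round(S, 4), round(closed, 4))  # the two values agree
```

Since $V_e$ decays, the upper solution decays as well, which is what forces the parametric residual estimate $V_\Delta$ into a shrinking neighbourhood.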
We see that PI in the class of algorithms (10) or (12) depends on the initial conditions and the properties of the information set. LPI is guaranteed for systems of classes $S_X(\bar A)$ and $S_y(A)$ with asymptotic stability in the error. However, the estimates of the matrix elements will only belong to the domain $G_A$. This is a typical situation for adaptive identification systems based on the class of algorithms (10), (12).
Remark 3.
The region $G_A$ can be contracted to $\tilde G_A$, and limiting conditions for LPI can be obtained, if conditions (9) or (17) for the ASI are fulfilled. In real-world conditions, the ASI guarantees almost extremely local parametric identifiability.

4. On ELPI

The fulfilment of ELPI guarantees the transition to global PI (GPI). For static procedures (the least squares method, the maximum likelihood method), ELPI is ensured by the properties of the information matrix. For methods based on the class $AIA$, the properties of the information matrix are not directly applicable, because the processes in an ASI are complex.
By GPI we understand fulfilment of the condition
$$A \in G_A = \{A \in \mathbb{R}^{2n} : \|A - A^*\| = 0\} \quad \forall t \ge t^* > t_0\ \&\ I_t.$$
The proposed interpretation of GPI, as the parameters of model (6) belonging to the set $G_A$, is linked to the absolute stability of the adaptive system.
Global parametric identifiability of systems of class $S_y(A)$ follows from Theorem 4.
Theorem 4.
Let 1) the conditions of Theorems 1 and 3 be fulfilled; 2) the system of inequalities
$$\begin{bmatrix} \dot V_e \\ \dot V_\Delta \end{bmatrix} \le \underbrace{\begin{bmatrix} -k & \dfrac{1}{k}\,\bar\alpha_P\,\bar\lambda_\Gamma \\[6pt] \dfrac{4\,\bar\alpha_P}{3\,\vartheta\,\underline{\alpha}_P} & -\dfrac{3}{4}\,\vartheta\,\underline{\alpha}_P\,\underline{\lambda}_\Gamma \end{bmatrix}}_{A_G} \underbrace{\begin{bmatrix} V_e \\ V_\Delta \end{bmatrix}}_{W_G}$$
be valid for processes in the system (6), (10), where $\bar\lambda_\Gamma, \underline{\lambda}_\Gamma$ are the maximum and minimum eigenvalues of the matrix $\Gamma$. Then (a) the system (4) is globally parametrically identifiable on the class $S_y(A)$, and (b) the system (6), (10) is exponentially stable with the estimate
$$W_G(t) \le e^{A_G (t - t_0)} S_G(t_0),$$
if
$$9\, k^2\, \underline{\alpha}_P^2\, \underline{\lambda}_\Gamma \ge 16\, \bar\alpha_P^2\, \bar\lambda_\Gamma,$$
where $S_G \in \mathbb{R}^2$ is the state vector of the comparison system $\dot S_G(t) = A_G S_G(t)$, $S_G(t_0) \ge W_G(t_0)$.
The proof of Theorem 4 is given in Appendix F.
From Theorem 4, we obtain GPI on the set of initial conditions and ELPI. Since the classes are congruent, this conclusion is also valid for systems (1) of class $S_X(\bar A)$. To substantiate this statement, the approach of [22] can be applied.

5. About m-Parametric Identifiability

Let the CE condition not be fulfilled. Then the identifiability (and identification) problem must still be solved. Consider an approach to this problem using the example of a system of class $S_y(A)$.
Let the system and the model have the forms (4) and (5), and assume $u \notin \mathrm{CE}$. The term $\Delta A^T P$ in (6) is represented as
$$\Delta A^T P = [\Delta \tilde A^T\ \ \delta A^T]\, [\tilde P^T\ \ \bar P^T]^T,$$
where $\tilde P(t) \in \mathrm{CE}_{\tilde P}$, $\bar P(t) \notin \mathrm{CE}$, and $[\Delta \tilde A^T\ \ \delta A^T]$ is the partition of $\Delta A$ corresponding to the partition of the vector $P(t)$.
Transform the error equation (6) to the form
$$\dot e = -k e + \Delta \tilde A^T \tilde P + \omega(A, P),$$
where $\omega = \delta A^T \bar P$ is the uncertainty caused by non-fulfilment of the condition $u \in \mathrm{CE}$, $\Delta \tilde A = \hat{\tilde A} - \tilde A$, $\hat{\tilde A} \subset \hat A$ is the part of the vector $\hat A$ estimated by the class $AIA_y$, $\hat{\tilde A} \in \mathbb{R}^{2m}$, $m < n$.
Let $|\omega(A, P)| \le \varepsilon_\omega$, where $\varepsilon_\omega \ge 0$.
Definition 3.
A system of class $S_y(A)$ is $m$-locally parametrically identifiable on the set $\tilde I_t = \{\tilde P(t),\ u(t) \notin \mathrm{CE},\ t > t_0\}$ if the condition
$$\tilde A \in G_{\tilde A} = \{\tilde A \in \mathbb{R}^{2m} : \|\tilde A - \tilde A^*\| \le \varepsilon_m,\ \varepsilon_m > 0\} \quad \forall t \ge t^* > t_0\ \&\ \tilde I_t$$
is satisfied for its representation (4) in the class $S_y(A)$.
Theorem 5.
Let (i) the system (1) be stable; (ii) the Lyapunov function $\tilde V_\Delta = 0.5\, \Delta \tilde A^T \Gamma_{\tilde A}^{-1} \Delta \tilde A$ admit an infinitesimal limit, where $\Gamma_{\tilde A}^T = \Gamma_{\tilde A} > 0$ is a diagonal matrix; (iii) $u(t) \notin \mathrm{CE}$. Then the system (4) is locally parametrically identifiable in the domain $G_{\tilde A}$ if
$$0.5\, \upsilon_P \|\Delta \tilde A\|^2 + 0.5\, \varepsilon_\omega^2 \le 2 k_m V_e$$
and all trajectories of the system (4) belong to the region
$$G_\Xi = \{e(t), \Delta A \in \mathbb{R}^{2n} : V_v(t) \le V_v(t_0) - 2 k_m \eta^{-1} V_e + 0.5\, \eta^{-1} \varepsilon_\omega^2\},$$
where $V_v = V_e + \tilde V_\Delta$, $\eta = \min\{1, 2 \underline{\lambda}_\Gamma\}$.
The proof of Theorem 5 is given in Appendix G.
From Theorem 5 we see that the PI domain depends on fulfilment of the CE condition for the information set of the system. If the CE condition is not fulfilled, the parameter $\varepsilon_m$ increases because of the effect of the parametric uncertainty $\omega$. In this case estimate (14) is more realistic and, under certain conditions, ELPI is possible with estimate (18).
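A minimal numerical illustration of this effect, with assumed values: the third regressor component is identically zero (not CE), so an adaptive algorithm of type (10) can recover only the excited sub-vector, which is exactly the $m$-parametric identifiability of Definition 3:

```python
import numpy as np

dt, T = 1e-3, 60.0
A_true = np.array([1.5, -0.5, 0.8])   # assumed parameters; the third is "dark"
k, gamma = 5.0, 20.0
y, y_hat = 0.0, 0.0
A_hat = np.zeros(3)
for i in range(int(T / dt)):
    t = i * dt
    # first two components are CE; the third is never excited
    P = np.array([np.sin(t), np.sin(2.3 * t), 0.0])
    e = y_hat - y
    y += (A_true @ P) * dt
    y_hat += (-k * e + A_hat @ P) * dt
    A_hat += (-gamma * e * P) * dt    # gradient update is zero for p3

err = np.abs(A_hat - A_true)
print(np.round(err, 2))  # first two entries small; third stays at |a3|
```

The estimate of the non-excited parameter never moves from its initial value: the residual on that component equals the full a priori uncertainty, mirroring the growth of $\varepsilon_m$ described above.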
Remark 4.
In biological systems, structural identifiability issues are considered. Most often, linear systems with numerous parameters are studied. Various algorithms are proposed, and identifiability conditions are investigated, to reduce the number of estimated parameters. In ASI, a multiplicative approach is used to identify a system with many parameters [23]. Here, PI is understood as parametric identifiability in some parametric domain $G \subseteq G_A$ depending on the vector of multiplicative parameters (MPV). As a rule, MPV estimates belong to a certain limited area, which is formed on the basis of a priori information and analysis of the information set. This identifiability applies to systems satisfying specified quality requirements.

6. Lyapunov Exponents in PI Problem

6.1. Stationary System of Class S X A ¯

Lyapunov characteristic exponents (LE) are a characteristic of a dynamical system, and they give an indirect PI estimate of the system. This approach to PI has not been considered in the literature. The application of LE has its own peculiarities in the proposed PI paradigm. In particular, it is necessary to consider the detectability, recoverability and identifiability of LE based on the information set of the system; identifiability is understood here as the detectability of the Lyapunov exponents. Known approaches allow estimating only the maximum (largest) LE [24]. A more promising approach is based on the analysis of geometric frameworks (GF) reflecting the change in LE [24]. Issues of LE detectability based on GF analysis are presented in [24] and are therefore not considered here. Detectability is the key issue for evaluating LE.
In [24], criteria for $LP$-detectability of Lyapunov exponents are presented. We understand $LP$-detectability and recoverability as the ability to estimate the LE. $LP$-detectability imposes certain requirements on the experimental data. The approach allows us to obtain the full range of LE.
Let $m = n - \upsilon$, where $\upsilon$ is the number of non-recoverable LE.
Definition 4.
The system (1) is called $m$-detectable with a $\upsilon$-non-recoverability level if the $\upsilon$ lineals (LE) have an insignificant level.
It follows from Definition 4 that if a system of class $S_X(\bar A)$ is $m$-detectable with a $\upsilon$-non-recoverability level, then this is a sufficient condition for $m$-parametric identifiability of the system. The CE requirement plays an important role, as it guarantees S-synchronizability and structural identifiability of the nonlinear system.
Remark 5.
Definition 4 provides sufficient conditions for evaluating the PI of systems of class $S_X(\bar A)$. This issue requires further study. Note that the LE (for the classes $S_X(\bar A)$ under consideration, the Lyapunov exponents are the eigenvalues of the matrix) depend on the system parameters.
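The observation in Remark 5 — that for a stationary system of class $S_X(\bar A)$ the LE are given by the eigenvalues of the matrix — can be checked directly. A sketch with an assumed Hurwitz matrix (eigenvalue real parts $-1$ and $-2$), estimating the largest exponent from the growth rate of $\ln \|X(t)\|$ along the free motion:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed Hurwitz matrix; eig = -1, -2
dt, T = 1e-4, 20.0
X = np.array([1.0, 0.0])                   # free motion of (1) with u = 0
for _ in range(int(T / dt)):
    X = X + (A @ X) * dt                   # forward Euler integration

chi_max = np.log(np.linalg.norm(X)) / T    # empirical largest Lyapunov exponent
print(round(chi_max, 2))                   # close to max Re(eig(A)) = -1
```

The slowest eigenmode dominates the trajectory norm, so the log-growth rate recovers the largest LE; the remaining exponents require the GF-based machinery cited above.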
Analysing nonstationary (periodic) systems is more difficult, since it is harder to isolate the parametric space there. For periodic systems (PS), using LE translates the PI problem into the space of Lyapunov exponents [25].

6.2. PS for Class S X A ¯

Consider the system (1) with the matrix $\bar A(t)$. For convenience, this system will be denoted by $S_{per}$.
Assumptions.
A1. $\bar A(t)$ is a bounded continuous Frobenius matrix:
$$\|\bar A(t)\| \le \alpha_A,$$
where $\alpha_A > 0$ and $\|\cdot\|$ is a matrix norm.
A2. $\bar A(t)$ is almost periodic, i.e., from any sequence $\bar A(t - \tau_i)$ a subsequence can be selected that converges uniformly on the entire axis to some almost periodic matrix $\bar A(t)$.
A3. $\bar A(t)$ is a Hurwitz matrix for almost all $t \ge 0$.
Let $R_{\bar A} = \{\chi_i(t)\}$, $i = \overline{1, n}$, $t > t_0$, be the spectrum of the LE $\chi_i$, $i = \overline{1, n}$.
Definition 5.
The function $\chi_i(t)$ is almost periodic in the Bohr sense [26], or a $BF$-function [25], if for every $\delta > 0$ there exists a positive number $l = l(\delta)$ such that any segment $[a, a + l]$ contains at least one number $T_f$ for which
$$|f(x + T_f) - f(x)| < \delta \quad \forall t \in [0, \infty).$$
If $\chi_i(t)$ is a $BF$-function, then it is $\alpha\pi$-almost periodic [26], where $\alpha, \pi$ are positive numbers.
Let the order of the system $S_{per}$ be known. To decide on the spectrum $R_{\bar A}$, apply the geometric structure $SK_{\Delta k_s, \rho}$ [26]. Here $k_s(t, \rho) = \rho_{\hat y_g} / t$, $\rho_{\hat y_g} = \rho_g = \ln \hat y_g(t)$, where $\hat y_g(t)$ is an estimate of the general solution of the system (1).
$SK_{\Delta k_s, \rho}$ is described by the function $f_{sk}(t): k_s \to \Delta k_s$. Since $k_s(t)$ is a $BF$ $\alpha\pi$-function, $f_{sk}(t)$ contains regions $D_{sk}$ where a drastic change takes place.
Theorem 6.
If the system $S_{per}$ is stable and recoverable, and the function $f_{sk}(t)$ contains on the interval $[t_0, t^*] \subseteq \bar J_g$, $t^* \le \bar t$, at least $m$ regions $D_{sk}$, then the system $S_{per}$ has order $m$ and is $LP$-identifiable ($LP$-detectable).
In Theorem 6, $\bar J_g$ is the time interval on which an estimate of the general solution of the system $S_{per}$ is obtained.
It follows from Theorem 6 that the system $S_{per}$ is identifiable on the set $SK_{k_s, \rho_i}$. As shown in [25], the locations of local minima on $SK_{k_s, \rho_i}$ coincide with the regions $D_{sk}$ of the structure $SK_{\Delta k_s, \rho}$. This result allows us to obtain a set $M_{LE}$ containing LE estimates of the system $S_{per}$. The cardinality of $M_{LE}$ may not match the number of LE of the system; $M_{LE}$ characterizes the set of lineals of the system $S_{per}$.
The detectability (identifiability) of the periodic system (1) with the matrix $\bar A(t)$ follows from [25].
Theorem 7.
Let 1) assumptions A1–A3 be fulfilled for the system; 2) the system $S_{per}$ be recoverable; 3) $u(t) \in \mathrm{CE}_u$; 4) the elements of the set $\{k_{s,\rho_i}(t)\}$ be $BF$ $\alpha\pi$-functions; 5) the structure $SK_{\Delta k_s, \rho_i}$ contain at least $v$ regions $D_{sk}^v$ to which local minima of the structure $SK_{k_s, \rho_i}$ correspond. Then the set $M_{LE}$ is $LP$-detectable or fully detectable.
Corollary of Theorem 7.
If the structures $SL_{\Delta k_s, \rho_i}$ contain only $m$ regions $D_{sk}^v$ to which local minima on $SK_{k_s, \rho_i}$ correspond, then the system $S_{per}$ is $m$-detectable with a $\upsilon$-non-recoverability level.
Remark 6.
The eigenvalues $\lambda_i(t)$ of the matrix $\bar A(t)$ are periodic functions of time. Therefore, the lineals $L_i(t)$ and $L_{i+1}(t)$ corresponding to these functions may overlap, which can generate an infinite range of LE. One must therefore determine the acceptable range for $M_{LE}$ and the number determining the mobility of the largest Lyapunov exponent. The upper bound of the set $M_{LE}$ is determined by the allowable mobility limit of the largest LE $\chi_1$; the following estimate holds for $\chi_1$ [25]:
$$\chi_1 \le \sup J^1_{k_{s,\rho_i}},$$
where $J^1_{k_{s,\rho_i}}$ is the interval of change of the $i$-th indicator $k_{s,\rho_i}$. The lower boundary of the region $M_{LE}$ is bounded by the smallest LE $\chi_n$ [25].
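The behaviour of the indicator $k_s(t, \rho) = \rho_g / t$ can be illustrated on a synthetic signal. A sketch with an assumed "general solution" $y_g(t) = e^{-0.3 t}(2 + \cos 2\pi t)$, whose largest exponent is $-0.3$; the indicator settles near that value, while its early transient is where the regions of drastic change live:

```python
import numpy as np

t = np.arange(1.0, 50.0, 0.01)                       # avoid division by t = 0
y_g = np.exp(-0.3 * t) * (2.0 + np.cos(2 * np.pi * t))  # assumed general solution
k_s = np.log(np.abs(y_g)) / t                        # indicator k_s(t, rho)
print(round(k_s[-1], 2))                             # settles near chi_1 = -0.3
```

The almost periodic factor only perturbs $k_s$ by a term of order $\ln(\cdot)/t$, so for large $t$ the indicator is dominated by the exponent, which is what makes the structure $SK_{\Delta k_s, \rho}$ usable for LE estimation.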
Definition 6.
The system $S_{per}$ of class $S_X(\bar A)$ with matrix $\bar A(t)$ satisfying assumptions A1–A3 is locally $M_{LE}$-identifiable on the set $\tilde I_t = \{X(t), u(t), t > t_0\}$ if there exists a spectrum $R_{\bar A}$ of LE belonging to the class of $BF$-functions such that
$$\chi_i(\bar A) \in M_{LE} = \{\chi_i(t) \in R_{\bar A} : |\chi_i - \chi_i^*| \le \varepsilon_\chi\} \quad \forall t \in [t^*, t^{**}],\ t^* > t_0\ \&\ I(t), \quad i = \overline{1, n},$$
where $\varepsilon_\chi \ge 0$.
The problem of assessing LE adequacy has its own specifics. Let $S_{\hat y_g, \dot{\hat y}_g}$ be the phase portrait of the system $S_{per}$.
Definition 7 [25].
Estimates of the Lyapunov exponents $\chi_i$ are $\chi$-adequate in the space if their regions of definition coincide with the $\alpha\pi$-almost-periodicity regions of the structure $S_{\hat y_g, \dot{\hat y}_g}$.
Theorem 8 [25].
Let (i) the system $S_{per}$ be stable and recoverable; (ii) the set $M_{LE}$ be $LP$-detectable; (iii) the definition regions $D_{sl}^j$ on the structure $SL_{\Delta k_s, \rho_i}$ coincide with the $\alpha\pi$-almost-periodicity regions of the structure $S_{\hat y_g, \dot{\hat y}_g}$. Then the estimates of the elements of the set $M_{LE}$ are $\chi$-adequate to the $\alpha\pi$-almost-periodicity regions of $S_{\hat y_g, \dot{\hat y}_g}$.
Remark 7.
We have considered only one approach to assessing the PI of periodic systems. A PS can also be considered as a system with an interval parametric domain, and identifiability can then be estimated within the specified limits; here the approaches described in Sections 4 and 5 apply.
Thus, the problem of estimating LPI is reduced to the more adequate task of estimating the LE of these systems. The PI conditions are obtained in a special space, and methods for their estimation are given. The concept of adequacy of LE estimates is introduced, and the region where the LE are located is delineated. We have shown the existence of an LE set for the system $S_{per}$.

7. About LPI for Decentralized Systems

Consider a decentralized system (DS)
$$S_i:\quad \dot X_i = A_i X_i + B_i u_i + \sum_{j=1, j \ne i}^{m} \bar A_{ij} X_j + F_i(X_i), \qquad Y_i = C_i X_i, \tag{32}$$
where $X_i \in \mathbb{R}^{n_i}$, $Y_i \in \mathbb{R}^{q_i}$ are the state and output vectors of the subsystem $S_i$, $u_i$ is the control, $i = \overline{1, m}$, $\sum_{i=1}^{m} n_i = n$. The elements of the matrices $A_i \in \mathbb{R}^{n_i \times n_i}$, $B_i \in \mathbb{R}^{n_i}$, $\bar A_{ij} \in \mathbb{R}^{n_i \times n_j}$ are unknown; $C_i \in \mathbb{R}^{q_i \times n_i}$. The matrix $\bar A_{ij}$ reflects the influence of the subsystem $S_j$, $F_i(X_i) \in \mathbb{R}^{n_i}$ accounts for the nonlinear state of the subsystem $S_i$, and $A_i \in H$ is a Hurwitz (stable) matrix.
Assumption 2.
$F_i(X_i)$ belongs to the class
$$N_F(\pi_1, \pi_2) = \{F(X) \in \mathbb{R}^n : \pi_1 \|X\| \le \|F(X)\| \le \pi_2 \|X\|,\ F(0) = 0\}$$
and satisfies the quadratic (sector) condition
$$\left(\pi_2 X - F(X)\right)^T \left(F(X) - \pi_1 X\right) \ge 0,$$
where $\pi_1 > 0$, $\pi_2 > 0$.
The information set of measurements for the subsystem $S_i$ has the form
$$I_{o,i} = \{X_i(t),\ u_i(t),\ X_j(t),\ t \in J = [t_0, t_k]\}.$$
The mathematical model is
$$\dot{\hat X}_i = K_i(\hat X_i - X_i) + \hat A_i X_i + \hat B_i u_i + \sum_{j=1, j \ne i}^{m} \hat{\bar A}_{ij} X_j + \hat F_i(X_i), \tag{35}$$
where $K_i \in H$ is a matrix with known elements; $\hat A_i$, $\hat B_i$, $\hat{\bar A}_{ij}$ are tunable matrices, and $\hat F_i$ is an a priori defined nonlinear vector function.
Problem. Obtain PI estimates for the system (32) based on analysis of the set $I_{o,i}$.
DS (32) is nonlinear, so the CE condition (7) is represented as
$$\mathrm{EC}_{\underline{\alpha}_{u_i}, \bar\alpha_{u_i}}:\quad \underline{\alpha}_{u_i} \le u_i^2(\tau) \le \bar\alpha_{u_i}\ \ \&\ \ \Omega_{u_i}(\omega) \subseteq \Omega_S(\omega),$$
where $\Omega_{u_i}(\omega)$ is the set of frequencies of $u_i$ and $\Omega_S(\omega)$ is the set of admissible frequencies of the input $u_i$ ensuring S-synchronizability of the system.
The equation for the error $E_i = \hat X_i - X_i$ is
$$\dot E_i = K_i E_i + \Delta A_i X_i + \Delta B_i u_i + \sum_{j=1, j \ne i}^{m} \Delta \bar A_{ij} X_j + \Delta F_i(X_i), \tag{37}$$
where $\Delta A_i = \hat A_i - A_i$, $\Delta B_i = \hat B_i - B_i$, $\Delta \bar A_{ij} = \hat{\bar A}_{ij} - \bar A_{ij}$, $\Delta F_i = \hat F_i - F_i$ are the parametric residuals.
Consider the system (37) and the LF $V_i(E_i) = 0.5\, E_i^T R_i E_i$, where $R_i = R_i^T > 0$ is a symmetric positive definite matrix.
Let $\|\Delta A_i\| = \sqrt{\mathrm{tr}(\Delta A_i^T \Delta A_i)}$ and $\|\Delta \bar A_{ij}\| = \sqrt{\mathrm{tr}(\Delta \bar A_{ij}^T \Delta \bar A_{ij})}$ be the norms of the matrices $\Delta A_i$, $\Delta \bar A_{ij}$, where $\mathrm{tr}(\cdot)$ is the matrix trace.
The following modification of Theorem 1 [27] is true.
Theorem 9.
Let 1) the matrix $A_i \in H$; 2) $X_i(t) \in \mathrm{CE}(\underline{\alpha}_{X_i}, \bar\alpha_{X_i})$, $X_j(t) \in \mathrm{PE}(\underline{\alpha}_{X_j}, \bar\alpha_{X_j})$, $u_i(t) \in \mathrm{PE}(\underline{\alpha}_{u_i}, \bar\alpha_{u_i})$; 3) $F_i(X_i) \in N_F(\pi_1, \pi_2)$ and
$$\|F_i(X_i)\|^2 \le \eta\, \bar\alpha_{X_i}, \qquad \Delta F_i^T \Delta F_i \le 2 \eta\, \bar\alpha_{X_i} + \delta_{F_i},$$
where $\pi_1 > 0$, $\pi_2 > 0$, $\eta = \eta(\pi_1, \pi_2) > 0$, $\bar\alpha_{X_i} = \bar\alpha_{X_i}(X_i) > 0$, $\eta = 2\bar\pi + \pi^2$, $\bar\pi = \pi_1 \pi_2$, $\pi = \pi_1 + \pi_2$, $\delta_{F_i} > 0$. Then subsystem (32) is locally parametrically identifiable on the set $I_{o,i}$ if
$$2\,\bar\alpha_{X_i} \|\Delta A_i\|^2 + \bar\alpha_{u_i} \|\Delta B_i\|^2 + \sum_{j=1, j \ne i}^{m} \bar\alpha_{X_j} \|\Delta \bar A_{ij}\|^2 + 2 \eta\, \bar\alpha_{X_i} + \delta_{F_i} \le \underline{\lambda}_{Q_i} V_i,$$
where $\underline{\lambda}_{Q_i} = \lambda_{Q_i} - k_i$, $\lambda_{Q_i} > 0$ is the minimum eigenvalue of the matrix $Q_i$, $k_i > 0$, $R_i K_i + K_i^T R_i = -Q_i$, $Q_i = Q_i^T > 0$.
The proof of Theorem 9 is given in Appendix H.
Corollary 1 of Theorem 9.
Let the conditions of Theorem 9 be fulfilled. Then the nonlinearity $F_i(X_i)$ is locally structurally identifiable in the parametric sector
$$S_{X_i}(\pi_1, \pi_2) = \{X_i \in \mathbb{R}^{n_i} : \pi_1 \|X_i\| \le \|F_i(X_i)\| \le \pi_2 \|X_i\|\},$$
if
$$\|\Delta F_i\|^2 \le 0.25\, \underline{\lambda}_i V_i + z_i,$$
where $z_i \ge 0$.
The proof of Corollary 1 of Theorem 9 is given in Appendix I.
Consider the system (37) and the class $AIA_{S_i}$ of algorithms for tuning its parameters:
$$\dot{\hat A}_i = -\Gamma_{A_i} R_i E_i X_i^T, \qquad \dot{\hat{\bar A}}_{ij} = -\Gamma_{\bar A_{ij}} R_i E_i X_j^T, \qquad \dot{\hat B}_i = -\Gamma_{B_i} R_i E_i u_i, \tag{41}$$
where $\Gamma_{A_i}$, $\Gamma_{\bar A_{ij}}$, $\Gamma_{B_i}$ are diagonal matrices of the corresponding dimensions.
The Lyapunov function for the analysis of the system (37), (41) is $W_{S_i} = V_i + V_{\Delta,i}$, where
$$V_{\Delta,i} = 0.5\, \mathrm{tr}\!\left(\Delta A_i^T \Gamma_{A_i}^{-1} \Delta A_i\right) + 0.5 \sum_{j=1, j \ne i}^{m} \mathrm{tr}\!\left(\Delta \bar A_{ij}^T \Gamma_{\bar A_{ij}}^{-1} \Delta \bar A_{ij}\right) + 0.5\, \Delta B_i^T \Gamma_{B_i}^{-1} \Delta B_i, \qquad V_i(E_i) = 0.5\, E_i^T R_i E_i.$$
Corollary 2 of Theorem 9.
Let 1) the conditions of Theorem 9 be fulfilled; 2) the class $AIA_{S_i}$ of algorithms be used to tune the parameters of the model (35). Then the system (32) is locally parametrically identifiable if
$$\sigma_i \chi_i (t - t_0) \le 2 \tilde\mu_i \sigma_i \int_{t_0}^{t} V(\tau)\, d\tau,$$
where $\chi_i = 2 \eta\, \bar\alpha_{X_i} + \delta_{F_i}$, $\gamma_i^{-1} = \min\{1, 2 \underline{\lambda}_{R_i}\}$, $\sigma = \mu_i \underline{\lambda}_{R_i} > 0$, and $\underline{\lambda}_{R_i}, \bar\lambda_{R_i}$ are the minimum and maximum eigenvalues of the matrix $R_i$, and the estimate
$$W_{S_i}(t) \le W_{S_i}(t_0) - 2 \tilde\mu_i \sigma_i \int_{t_0}^{t} V(\tau)\, d\tau + \sigma_i \chi_i (t - t_0)$$
holds.
The proof of Corollary 2 of Theorem 9 is given in Appendix J.
As follows from Theorem 9, system (32) is LPI, and the nonlinearity $F_i(X_i)$ is structurally identifiable in the parametric sector $S_{X_i}(\pi_1, \pi_2)$, on the set of initial conditions and $I_{o,i}$.
If we perform nonlinearity factorization (see, for example, [27])
$$\hat F_i(X_i) = \tilde F_i^T(X_i, \hat N_{i,1})\, \hat N_{i,2},$$
where $\hat N_{i,1} \in \mathbb{R}^{n_{i,1}}$ is an a priori estimate of the known parameters, $\hat N_{i,2} \in \mathbb{R}^{n_{i,2}}$ is the vector of tunable parameters, and the structure $\tilde F_i(X_i, N_{i,1})$ is formed a priori taking into account the known vector $N_{i,1}$, and apply the algorithm
$$\dot{\hat N}_{i,2} = -\Gamma_{F_i} \tilde F_i^T(X_i, \hat N_{i,1})\, R_i E_i, \tag{44}$$
where $\Gamma_{F_i}$ is a diagonal matrix with positive diagonal elements, then we obtain conditions for global parametric identifiability of the DS on the class of algorithms $AIA_{S_i}$ and (44). These follow from an adaptation of the results of [27].
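A minimal sketch of this decentralized scheme for two scalar interconnected subsystems (model (35) with algorithms of type (41)); the nonlinear terms $F_i$ are omitted, and all numerical values (local dynamics, couplings, gains, inputs) are assumed for illustration:

```python
import numpy as np

dt, T = 1e-3, 100.0
a = np.array([-2.0, -3.0])        # stable local dynamics a_i (Hurwitz)
b = np.array([1.0, 2.0])          # input gains b_i
c = np.array([0.5, -0.4])         # couplings: c[0] acts on S1, c[1] on S2
k, g = 4.0, 50.0                  # model gain K_i and adaptation gain

x = np.zeros(2); x_hat = np.zeros(2)
th = np.zeros((2, 3))             # th[i] estimates (a_i, b_i, coupling)
for step in range(int(T / dt)):
    t = step * dt
    u = np.array([np.sin(t) + np.sin(2.7 * t),
                  np.cos(1.3 * t) + np.sin(3.4 * t)])   # exciting local inputs
    e = x_hat - x
    dx = a * x + b * u + c * x[::-1]                    # DS (32), scalar case
    for i in range(2):
        phi = np.array([x[i], u[i], x[1 - i]])          # local regressor
        x_hat[i] += (-k * e[i] + th[i] @ phi) * dt      # model (35)
        th[i] += -g * e[i] * phi * dt                   # algorithms of type (41)
    x += dx * dt

print(np.round(th, 1))  # rows approach (a_i, b_i, coupling)
```

Each subsystem tunes its own parameters using only its local error and the measured neighbour state, which is the decentralized structure of the set $I_{o,i}$.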

8. Examples

1. Consider an engine control system with Bouc–Wen hysteresis:
$$m \ddot x + c \dot x + F(x, z, t) = f(t), \tag{45}$$
$$F(x, z, t) = \alpha k x(t) + (1 - \alpha) k d z(t), \tag{46}$$
$$\dot z = d^{-1}\left(a \dot x - \beta |\dot x|\, |z|^n \mathrm{sign}(z) - \gamma \dot x\, |z|^n\right), \tag{47}$$
where $m > 0$ is the mass, $c > 0$ is the damping, $F(x, z, t)$ is the restoring force, $d > 0$, $n > 0$, $k > 0$, $\alpha \in (0, 1)$, $f(t)$ is the exciting force, and $a, \beta, \gamma$ are some numbers. The set of experimental data is $I_o = \{f(t), y(t), t \in J\}$. The vector of parameters is $A = [m, c, a, k, \alpha, \beta, \gamma, n]^T$.
To estimate PI on the set $I_o$, equation (45) is transformed to the form [28]
$$\dot x = a_1 x + a_2 p_x + a_3 p_z + b\, p_f, \tag{48}$$
$$\dot p_x = -\mu p_x + x, \qquad \dot p_f = -\mu p_f + f, \qquad \dot p_z = -\mu p_z + z, \qquad \mu > 0,$$
where $a_1 = -(c - \mu m)/m$, $a_2 = -(\alpha k - \mu c + \mu^2 m)/m$, $a_3 = -(1 - \alpha) k / m$.
The model for identification of system (48) is
$$\dot{\hat x} = -k_x(\hat x - x) + \hat a_1 x + \hat a_2 p_x + \hat a_3 p_z + \hat b\, p_f, \tag{49}$$
where $k_x > 0$ and $\hat a_i(t)$, $i = 1, 2, 3$, $\hat b(t)$ are adjustable model parameters. Let $e = \hat x - x$. From (48) and (49), we obtain the equation for the identification error:
$$\dot e = -k_x e + \Delta a_1 x + \Delta a_2 p_x + \Delta a_3 p_z + \Delta b\, p_f, \tag{50}$$
where $\Delta a_1 = \hat a_1(t) - a_1$, $\Delta a_2 = \hat a_2(t) - a_2$, $\Delta a_3 = \hat a_3(t) - a_3$, $\Delta b = \hat b(t) - b$.
The variable $z$ is not measured. To estimate $z$, apply the model
$$\dot{\hat x}_{\bar z} = -k_x(\hat x_{\bar z} - x) + \hat a_1 x + \hat a_2 p_x + \hat b\, p_f \tag{51}$$
and introduce the residual $\varepsilon_z = x - \hat x_{\bar z}$. Let $\varepsilon_z$ serve as a current estimate of $z$. Then we obtain the model for evaluating $z$:
$$\dot{\hat z} = -k_z(\hat z - \varepsilon_z) + \dot{\tilde x} - \hat\beta\, |\dot{\tilde x}|\, |\hat z|^n \mathrm{sign}(\hat z) - \hat\gamma\, \dot{\tilde x}\, |\hat z|^n, \tag{52}$$
where $k_z > 0$; $\hat\beta, \hat\gamma$ are estimates of the hysteresis parameters in (47); $\dot{\tilde x} = (x(t + \tau) - x(t))/\tau$, where $\tau$ is the integration step.
Introduce the residual $\varepsilon = \hat z - \varepsilon_z$, which satisfies the equations
$$\dot\varepsilon = -k_z \varepsilon + \Delta\dot x + \Delta\beta\, |\dot{\tilde x}|\, |\hat z|^n \mathrm{sign}(\hat z) + \beta \eta_\beta + \Delta\gamma\, \dot{\tilde x}\, |\hat z|^n + \gamma \eta_\gamma,$$
$$\eta_\beta = |\dot x|\, |z|^n \mathrm{sign}(z) - |\dot{\tilde x}|\, |\hat z|^n \mathrm{sign}(\hat z), \qquad \eta_\gamma = \dot x\, |z|^n - \dot{\tilde x}\, |\hat z|^n,$$
where $\Delta\dot x = \dot{\tilde x} - \dot x$, $\Delta\beta = \beta - \hat\beta$, $\Delta\gamma = \gamma - \hat\gamma$. Present (49) as
$$\dot{\hat x} = -k_x(\hat x - x) + \hat a_1 x + \hat a_2 p_x + \hat a_3 p_{\hat z} + \hat b\, p_f, \qquad \dot p_{\hat z} = -\mu p_{\hat z} + \hat z, \tag{55}$$
and (50) is written as
$$\dot e = -k_x e + \Delta a_1 x + \Delta a_2 p_x + \Delta a_3 p_{\hat z} + \Delta b\, p_f. \tag{56}$$
Evaluate the identification quality using the Lyapunov function $V_\varepsilon(t) = 0.5\varepsilon^{2}(t)$. From the condition $\dot{V}_\varepsilon<0$, we obtain the adaptive algorithms
$\Delta\dot{\beta} = \chi_\beta\,\varepsilon\,|\tilde{\dot{x}}|\,|\hat{z}|^{n}\,\mathrm{sign}(\hat{z})$, $\quad \Delta\dot{\gamma} = \chi_\gamma\,\varepsilon\,\tilde{\dot{x}}\,|\hat{z}|^{n}$,
where $\chi_\beta>0$, $\chi_\gamma>0$ are parameters ensuring the stability of the algorithms (57).
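Such Lyapunov-based gradient adaptation can be illustrated on a scalar toy problem. The plant, gains and sinusoidal (persistently exciting) input below are assumptions for the sketch, not the system (45)–(49) itself:

```python
import math

def identify(b_true=2.0, k=5.0, chi=20.0, tau=1e-3, steps=200_000):
    """Estimate the unknown gain b in dx/dt = -x + b*u from the error e = x_hat - x."""
    x = x_hat = b_hat = 0.0
    for n in range(steps):
        u = math.sin(n * tau)                  # persistently exciting input
        e = x_hat - x
        dx = -x + b_true * u                   # plant
        dx_hat = -k * e - x_hat + b_hat * u    # adjustable model
        db_hat = -chi * e * u                  # gradient (Lyapunov-based) adaptation
        x += tau * dx
        x_hat += tau * dx_hat
        b_hat += tau * db_hat
    return b_hat

b_est = identify()
```

With $V = 0.5e^{2} + \Delta b^{2}/(2\chi)$ this choice gives $\dot V = -(k+1)e^{2}\le 0$, and the excited input drives $\hat b$ to the true value.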
Consider the functionals
$\Delta_{D_a}(t) = \left(\Delta a_1^{2}(t)+\Delta a_2^{2}(t)+\Delta a_3^{2}(t)+\Delta b^{2}(t)\right)^{0.5}$, $\quad \Delta_{D_{\gamma,\beta}}(t) = \left(\Delta\gamma^{2}(t)+\Delta\beta^{2}(t)\right)^{0.5}$.
Figure 1 and Figure 2 present PI evaluations of the system (45)–(47). The ASI has two loops: the main one (variable $e$) and the auxiliary one (variable $\varepsilon$). Figure 1 shows the structure $J_{e,\Delta_{D_a}}$ described by the function $\varphi_a: |e|\to\Delta_{D_a}$, and Figure 2 presents the structure $J_{\varepsilon,\Delta_{D_{\gamma,\beta}}}$ described by the function $\varphi_{\gamma,\beta}: |\varepsilon|\to\Delta_{D_{\gamma,\beta}}$.
The presented structures confirm that the conditions of Theorem 1 are fulfilled, since the trajectories of the system enter the region $G$ for sufficiently large $t$.
2. Consider the system whose phase portrait is shown in Figure 3. The set of experimental data $I(t)$ is known, and the input is $u(t) = 5 + 2\sin(0.2\pi t)$. Figure 3 shows oscillations in the system whose frequency differs from that of the input. Therefore, the system is a system with periodic coefficients.
To determine LE, we apply the approach of [25] and obtain estimates of the general solution and its derivative:
$\hat{y}_q(t) = [0.75;\ 0.07;\ 0.22]\,[1\ \ u(t)\ \ \dot{u}(t)]^{T}$, $\quad \hat{\dot{y}}_q(t) = [0.394;\ 0.059;\ 0.078]\,[1\ \ u(t)\ \ \dot{u}(t)]^{T}$.
The coefficients of determination for these models equal 0.99. Next, we determine estimates of the free motion of the system. The estimate of the system order follows from Figure 4.
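Regressions of this form are ordinary least squares on the basis $[1,\ u,\ \dot u]$. A self-contained sketch on synthetic data (the "true" coefficients below reuse the printed values only for illustration; the solver and sampling grid are assumptions):

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit(ts, c_true=(0.75, 0.07, 0.22)):
    """Least-squares fit of y(t) = c0 + c1*u(t) + c2*du(t) via the normal equations."""
    rows = [(1.0,
             5 + 2 * math.sin(0.2 * math.pi * t),
             0.4 * math.pi * math.cos(0.2 * math.pi * t)) for t in ts]
    y = [sum(c * r for c, r in zip(c_true, row)) for row in rows]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    return solve3(AtA, Atb)

coef = fit([0.01 * n for n in range(2000)])
```

On noise-free data the fit recovers the coefficients exactly, which corresponds to a coefficient of determination of 1; noisy data would lower it toward the 0.99 reported here.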
It follows from $SK_{\Delta k_s,\rho_1}$ that the system has third order. From $SK_{\Delta k_s,\rho_1}$ and $SK_{\Delta k_s,\rho_2}$, we obtain the set of LE (Figure 4).
The upper estimate for $\kappa_m$ is 2.04. The mobility limit for $\chi_1$ is –0.8. The $\chi$-adequacy confirmation of the LE estimates is shown in Figure 5. The eigenvalues of the state matrix of the system $S_{per}$ are
$M_{LE} = \{2.04;\ 1.842;\ 1.77;\ 1.167;\ 0.878\}$.
So, the set $M_{LE}$ is $LP$-detectable, and the system has third order. Since the elements of the set $M_{LE}$ are recoverable and detectable, the system $S_{per}$ is LPI.
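In the simplest case, the largest Lyapunov exponent used in this example can be estimated from the growth or decay rate of a solution, $\kappa = \ln\|x(t)\|/t$ for large $t$. A toy sketch (the test system $\dot x = -2x$ and all names are assumptions for illustration, not the system $S_{per}$):

```python
import math

def largest_le(lam=-2.0, tau=1e-3, t_end=5.0, x0=1.0):
    """Estimate the largest Lyapunov exponent of dx/dt = lam*x from its free motion."""
    x, n = x0, int(t_end / tau)
    for _ in range(n):
        x += tau * lam * x          # explicit Euler integration of the free motion
    return math.log(abs(x)) / t_end # log-slope estimate of the exponent

kappa = largest_le()
```

For the linear test system the estimate approaches the exact exponent $\lambda = -2$ as $\tau\to 0$.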

9. Conclusion

The problem of estimating parametric identifiability based on current experimental data is considered. Methods of a priori identifiability based on the analysis of the information matrix are not applicable in this case. We consider an approach based on the application of the second Lyapunov method to the PI study. LPI conditions are obtained by applying adaptive identification to a linear dynamical system. In the PI problem, we analyse data on the state vector and current information on the input and output. Conditions and estimates have been obtained that guarantee PI and LPI.
The case of $m$-parametric identifiability is considered when the condition of constant excitation is not fulfilled. PI estimates are obtained for decentralised nonlinear systems and for systems with periodic parameters. We show that Lyapunov exponents should be used for the PI analysis of systems with periodic parameters.
Modelling examples are presented that confirm the efficiency of the proposed approach.

Appendix A

Proof of Theorem 1.
Consider the LF $V_e(t) = 0.5e^{2}(t)$. For $\dot{V}_e$, we get
$\dot{V}_e = -ke^{2} + e\,\Delta A^{T}P$
or, applying condition 4 of Theorem 1,
$\dot{V}_e \le -kV_e + \frac{1}{2k}\,\bar{\alpha}_P\,\|\Delta A\|^{2}$,
where $\underline{\alpha}_P \le \|P(t)\|^{2} \le \bar{\alpha}_P$, $\underline{\alpha}_P>0$, $\bar{\alpha}_P>0$.
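The passage to this bound is a standard Young-inequality step; under the assumption $\|P(t)\|^{2}\le\bar{\alpha}_P$ it can be written out as:

```latex
e\,\Delta A^{T}P
 \le \frac{k}{2}\,e^{2} + \frac{1}{2k}\left(\Delta A^{T}P\right)^{2}
 \le k V_e + \frac{1}{2k}\,\bar{\alpha}_P\,\lVert\Delta A\rVert^{2},
\qquad
\dot V_e = -2kV_e + e\,\Delta A^{T}P
 \le -kV_e + \frac{1}{2k}\,\bar{\alpha}_P\,\lVert\Delta A\rVert^{2},
```

where the middle step uses the Cauchy–Schwarz bound $(\Delta A^{T}P)^{2}\le\lVert\Delta A\rVert^{2}\lVert P\rVert^{2}$.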
Let $u(t)$ and $y(t)$ be representable by Fourier series with the frequency sets $\Omega_u$, $\Omega_y$, where $\Omega_y$ depends on the spectrum of $u(t)$. System (1) is a frequency filter and $A\in H$. Therefore, the frequency spectra of the elements of the vector $P$ will differ, and the sets $\Omega_{p_i}$, where $p_i\in P$, will not overlap. Then $\Delta A = 0$ follows from the identifiability condition $\Delta A^{T}P = 0$.
So, condition 4 is necessary for the local identifiability of the system (6). As condition 4 is fulfilled, for the boundedness of the trajectories (identifiability) of the system (4), it is necessary that
$\|\Delta A(t)\|^{2} \le 2\kappa^{2}\rho\,V(t)$.

Appendix B

Proof of Corollary 1 from Theorem 1.
For $\dot{V}_e$, we get
$\dot{V}_e = -ke^{2} + e\,\Delta A^{T}P = -ke^{2} - \Delta\dot{A}^{T}\Gamma^{-1}\Delta A$, i.e. $\dot{V}_e = -2kV_e - \dot{V}_\Delta$,
where $V_e = 0.5e^{2}$, $V_\Delta = 0.5\,\Delta A^{T}\Gamma^{-1}\Delta A$. Let $V = V_e + V_\Delta$. Present (B.1) as
$\dot{V}_e \le -2kV_e - \dot{V}_\Delta$, $\quad 0.5\dot{V} \le -kV_e$,
whence
$V(t) \le V(t_0) - 2k\int_{t_0}^{t}V_e(\tau)\,d\tau$
or $V(t)\le V(t_0)$.■

Appendix C

Proof of Theorem 2. The derivative of the LF $V_E = 0.5E^{T}RE$ has the form
$\dot{V}_E = -E^{T}QE + E^{T}R\left(\Delta A\,X + \Delta B\,u\right)$
or
$\dot{V}_E \le -\mu E^{T}RE + E^{T}R\left(\Delta A\,X + \Delta B\,u\right)$,
where $E^{T}QE \ge \mu E^{T}RE$, $\mu>0$, $Q=Q^{T}>0$ is a positive definite matrix satisfying the equation $RK + K^{T}R = -Q$, $R=R^{T}>0$. Then (C.2):
$\dot{V}_E \le -\mu E^{T}RE + E^{T}R\left(\Delta A\,X + \Delta B\,u\right) \le -2\mu V_E + 0.5\,\mu^{-1}\left(\bar{\alpha}_X\|\Delta A\|^{2} + \bar{\alpha}_u\|\Delta B\|^{2}\right)$,
where $\|\Delta A\|^{2} = \mathrm{tr}\left(\Delta A^{T}\Delta A\right)$, $\|\Delta B\|^{2} = \Delta B^{T}\Delta B$, $\underline{\alpha}_X I_n \le X(t)X^{T}(t) \le \bar{\alpha}_X I_n$, and $I_n$ is the identity matrix.
From (C.3), we obtain the condition of LPI:
$\|\Delta A\|^{2}\,\bar{\alpha}_X + \|\Delta B\|^{2}\,\bar{\alpha}_u \le 4\mu^{2}V_E$.

Appendix D

Proof of Corollary 1 from Theorem 2.
Consider the LF $V_W = V_E + V_{A,B}$, where
$V_{A,B} = 0.5\,\mathrm{tr}\left(\Delta A^{T}\Gamma_A^{-1}\Delta A\right) + 0.5\,\Delta B^{T}\Gamma_B^{-1}\Delta B$,
and $\Gamma_A$, $\Gamma_B$ are diagonal matrices with positive diagonal elements.
If we consider (12), then (C.1) is written as
$\dot{V}_E = -E^{T}QE + E^{T}R\left(\Delta A\,X + \Delta B\,u\right) = -E^{T}QE - \mathrm{tr}\left(\Delta\dot{A}^{T}\Gamma_A^{-1}R\,\Delta A\right) - \Delta\dot{B}^{T}\Gamma_B^{-1}R\,\Delta B = -E^{T}QE - \dot{V}_{A,B}$.
We obtain
$\dot{V}_W \le -2\mu V_E$, $\quad V_W(t) \le V_W(t_0) - 2\mu\int_{t_0}^{t}V_E(\tau)\,d\tau$.

Appendix E

Proof of Theorem 3. Apply algorithm (10) and represent the derivative of $V_\Delta = 0.5\,\Delta A^{T}\Gamma^{-1}\Delta A$ as
$\dot{V}_\Delta = -e\,\Delta A^{T}P$.
Let $\vartheta>0$ exist such that in some region $\Omega_0$ the condition $e\,\Delta A^{T}P = \vartheta\left(\|\Delta A\|^{2}\|P\|^{2} + e^{2}\right)$ is satisfied.
Then (E.1):
$\dot{V}_\Delta = -\frac{3}{4}\vartheta\|\Delta A\|^{2}\|P\|^{2} - \frac{1}{4}\vartheta\|\Delta A\|^{2}\|P\|^{2} - \vartheta e^{2} \le -\frac{3}{4}\vartheta\|\Delta A\|^{2}\|P\|^{2} + \vartheta\,|e|\,\|\Delta A\|\,\|P\|$.
As $P\in CE_P$ and $\underline{\alpha}_P \le \|P(t)\|^{2} \le \bar{\alpha}_P$, then
$\dot{V}_\Delta \le -\frac{3}{4}\vartheta\,\underline{\alpha}_P\,\|\Delta A\|^{2} + \vartheta\,|e|\,\sqrt{\bar{\alpha}_P}\,\|\Delta A\|$.
Apply the inequality [29]
$-az^{2} + bz \le -\frac{a}{2}z^{2} + \frac{1}{2a}b^{2}$, $\quad a>0,\ b>0,\ z>0$.
Then
$\dot{V}_\Delta \le -\frac{3}{8}\vartheta\,\underline{\alpha}_P\,\|\Delta A\|^{2} + \frac{2\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P}\,e^{2}$.
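This auxiliary inequality follows from completing the square; a one-line check:

```latex
\frac{a}{2}\left(z - \frac{b}{a}\right)^{2} \ge 0
\;\Longleftrightarrow\;
\frac{a}{2}z^{2} - bz + \frac{b^{2}}{2a} \ge 0
\;\Longleftrightarrow\;
-az^{2} + bz \le -\frac{a}{2}z^{2} + \frac{1}{2a}\,b^{2}.
```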
As $\|\Delta A\|^{2} \ge 2\underline{\lambda}_\Gamma V_\Delta$, where $\underline{\lambda}_\Gamma$ is the minimum eigenvalue of the matrix $\Gamma$, we obtain:
$\dot{V}_\Delta \le -\frac{3}{4}\vartheta\,\underline{\alpha}_P\underline{\lambda}_\Gamma\,V_\Delta + \frac{4\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P}\,V_e$.
It follows from (E.5) that PI is guaranteed on a certain set $\hat{A}(t_0)$ and on the set $I_t$ if
$V_\Delta \ge \frac{16\,\bar{\alpha}_P}{9\,\underline{\alpha}_P^{2}\,\underline{\lambda}_\Gamma}\,V_e$
and the evaluation $V_\Delta(t)\le S_\Delta(t)$ is fair, where
$S_\Delta(t) = e^{-\sigma(t-t_0)}S_\Delta(t_0) + \frac{4\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P}\int_{t_0}^{t}e^{-\sigma(t-\tau)}V_e(\tau)\,d\tau$,
$\sigma = 0.75\,\vartheta\,\underline{\alpha}_P\underline{\lambda}_\Gamma$, and $S_\Delta(t)$ is the upper solution of the comparison system $\dot{S} = -\sigma S + \pi V_e$ for $V_\Delta(t)$ in (E.5), if $S_\Delta(t_0)\ge V_\Delta(t_0)$, $\pi = \frac{4\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P}$.■

Appendix F

Proof of Theorem 4. From the proofs of the corollary of Theorem 1 and of Theorem 3, we obtain
$\dot{V}_e \le -kV_e + \frac{1}{2k}\,\bar{\alpha}_P\,\|\Delta A\|^{2}$, $\quad \dot{V}_\Delta \le -\frac{3}{4}\vartheta\,\underline{\alpha}_P\underline{\lambda}_\Gamma\,V_\Delta + \frac{4\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P}\,V_e$.
As $\|\Delta A\|^{2} = \Delta A^{T}\Gamma\Gamma^{-1}\Delta A \le \bar{\lambda}_\Gamma\,\Delta A^{T}\Gamma^{-1}\Delta A \le 2\bar{\lambda}_\Gamma V_\Delta$, then (F.1) takes the form
$\dot{W}_G \le A_G W_G$, $\quad W_G = \begin{bmatrix}V_e\\ V_\Delta\end{bmatrix}$, $\quad A_G = \begin{bmatrix}-k & \frac{1}{k}\,\bar{\alpha}_P\bar{\lambda}_\Gamma\\[2pt] \frac{4\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P} & -\frac{3}{4}\vartheta\,\underline{\alpha}_P\underline{\lambda}_\Gamma\end{bmatrix}$.
The matrix $A_G$ is an $M$-matrix [30] if the conditions $(-1)^{i}\Delta_i\left(A_G\right)>0$ are fulfilled for the leading principal minors. We obtain
$k>0$, $\quad 9k^{2}\,\underline{\alpha}_P^{2}\,\underline{\lambda}_\Gamma > 16\,\bar{\alpha}_P^{2}\,\bar{\lambda}_\Gamma$.
If the conditions (F.3) are fulfilled, then the adaptive system (6), (10) is exponentially stable (ES). As follows from ES, the estimates of the vector $A$ in (4) are limiting locally parametrically identifiable under the given initial conditions. The estimate (21) is obtained using the approach described in the proof of Theorem 3.■
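For the reader's convenience, the minor computation behind (F.3) can be sketched (assuming $\vartheta>0$ and the entries of $A_G$ as above):

```latex
(-1)^{1}\Delta_{1} = k > 0,
\qquad
(-1)^{2}\Delta_{2} = \det A_G
 = \frac{3}{4}\,k\,\vartheta\,\underline{\alpha}_P\underline{\lambda}_\Gamma
 - \frac{1}{k}\,\bar{\alpha}_P\bar{\lambda}_\Gamma\cdot
   \frac{4\,\vartheta\,\bar{\alpha}_P}{3\,\underline{\alpha}_P} > 0
\;\Longleftrightarrow\;
9k^{2}\,\underline{\alpha}_P^{2}\,\underline{\lambda}_\Gamma
 > 16\,\bar{\alpha}_P^{2}\,\bar{\lambda}_\Gamma .
```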

Appendix G

Proof of Theorem 5.
Consider (23) and represent the derivative of the LF $V_e$ as
$\dot{V}_e = -ke^{2} + e\,\Delta\tilde{A}^{T}P + e\,\omega$
or
$\dot{V}_e \le -ke^{2} + e^{2} + 0.5\,\upsilon_P\,\|\Delta\tilde{A}\|^{2} + 0.5\,\omega^{2}$,
where $k_m = k-1>0$, $\|P(t)\|^{2}\le\upsilon_P$, $\upsilon_P\ge\bar{\alpha}_P$. From (G.2), we obtain the condition of $m$-local PI:
$0.5\,\upsilon_P\,\|\Delta\tilde{A}\|^{2} + 0.5\,\omega^{2} \le 2k_m V_e$.
Represent (G.1) in the form
$\dot{V}_e = -ke^{2} + e\left(\Delta\tilde{A}^{T}\tilde{P} + \delta A^{T}\bar{P}\right) = -ke^{2} - \Delta\dot{\tilde{A}}^{T}\Gamma^{-1}\Delta\tilde{A} + e\,\omega$.
Then (G.4):
$\dot{V}_e \le -2kV_e - 2\underline{\lambda}_\Gamma\,\dot{\tilde{V}}_\Delta + 0.5\,e^{2} + 0.5\,\omega^{2}$.
Transform (G.5):
$\dot{V}_e \le -2k_m V_e - 2\underline{\lambda}_\Gamma\,\dot{\tilde{V}}_\Delta + 0.5\,\omega^{2}$, $\quad \dot{V}_e + 2\underline{\lambda}_\Gamma\,\dot{\tilde{V}}_\Delta \le -2k_m V_e + 0.5\,\omega^{2}$.
Let $\eta = \min\{1,\ 2\underline{\lambda}_\Gamma\}$, $V_v = V_e + \tilde{V}_\Delta$ and $k_m = k-1$. Then:
$\dot{V}_v \le -2k_m\eta^{-1}V_e + 0.5\,\eta^{-1}\omega^{2}$.
The estimate for $V_v$ (see (26)) follows from (G.7). ■

Appendix H

Proof of Theorem 9.
$\dot{V}_i$ has the form:
$\dot{V}_i = -E_i^{T}Q_iE_i + E_i^{T}R_i\left(\Delta A_iX_i + \Delta B_iu_i + \sum_{j=1,j\ne i}^{m}\Delta\bar{A}_{ij}X_j + \Delta F_iX_i\right)$
or, using $E_i^{T}Q_iE_i \ge \lambda_iV_i$ and bounding the cross term,
$\dot{V}_i \le -\lambda_iV_i + k_iV_i + 0.5\left\|\Delta A_iX_i + \Delta B_iu_i + \sum_{j=1,j\ne i}^{m}\Delta\bar{A}_{ij}X_j + \Delta F_iX_i\right\|^{2}$,
where $\lambda_i>0$ is the minimum eigenvalue of the matrix $Q_i$.
Apply the Cauchy–Bunyakovsky–Schwarz inequality and Titu's lemma to the last term in (H.2) and get
$0.5\left\|\Delta A_iX_i + \Delta B_iu_i + \sum_{j=1,j\ne i}^{m}\Delta\bar{A}_{ij}X_j + \Delta F_iX_i\right\|^{2} \le 2\left(\|\Delta A_i\|^{2}\|X_i\|^{2} + \|\Delta B_i\|^{2}u_i^{2} + \sum_{j=1,j\ne i}^{m}\|\Delta\bar{A}_{ij}\|^{2}\|X_j\|^{2} + \|\Delta F_iX_i\|^{2}\right)$.
Considering condition 1) of Theorem 9, write $\dot{V}_i$ as
$\dot{V}_i \le -\bar{\lambda}_iV_i + 2\left(\bar{\alpha}_{X_i}\|\Delta A_i\|^{2} + \bar{\alpha}_{u_i}\|\Delta B_i\|^{2} + \sum_{j=1,j\ne i}^{m}\bar{\alpha}_{X_j}\|\Delta\bar{A}_{ij}\|^{2} + \|\Delta F_iX_i\|^{2}\right)$,
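The factor 4 (hence the coefficient 2 after multiplying by 0.5) comes from bounding the squared norm of a sum of four vectors; the cross-coupling sum is then split term-by-term as in the text:

```latex
\Bigl\lVert \textstyle\sum_{s=1}^{4} v_s \Bigr\rVert^{2}
 \le \Bigl( \textstyle\sum_{s=1}^{4} \lVert v_s \rVert \Bigr)^{2}
 \le 4 \textstyle\sum_{s=1}^{4} \lVert v_s \rVert^{2},
```

with $v_1 = \Delta A_iX_i$, $v_2 = \Delta B_iu_i$, $v_3 = \sum_{j\ne i}\Delta\bar{A}_{ij}X_j$, $v_4 = \Delta F_iX_i$.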
where $\bar{\lambda}_i = \lambda_i - k_i$. Apply Lemmas 1, 2 [26] and get for $\|\Delta F_i\|^{2}$:
$\Delta F_i^{T}\Delta F_i = \|\Delta F_i\|^{2} \le 2\eta\,\bar{\alpha}_{X_i} + \delta_{F_i}$,
where $\eta = 2\bar{\pi} + \pi^{2}$, $\pi = \pi_1 + \pi_2$, $\bar{\pi} = \pi_1\pi_2$, $\delta_{F_i}\ge 0$.
Then (H.4):
$\dot{V}_i \le -\bar{\lambda}_iV_i + 2\left(\bar{\alpha}_{X_i}\|\Delta A_i\|^{2} + \bar{\alpha}_{u_i}\|\Delta B_i\|^{2} + \sum_{j=1,j\ne i}^{m}\bar{\alpha}_{X_j}\|\Delta\bar{A}_{ij}\|^{2} + 2\eta\,\bar{\alpha}_{X_i} + \delta_{F_i}\right)$.
If the state variables are CE and condition (38) is fulfilled, then the system (32) is LPI on the set $I_{o,i}$.■

Appendix I

Proof of Corollary 1 from Theorem 9.
As follows from Theorem 9, the DS is locally parametrically identifiable if condition (38) is satisfied. Apply Lemmas 1, 2 [26] to the last terms in (H.6) and get:
$2\eta\,\bar{\alpha}_{X_i} + \delta_{F_i} = 2\left(\eta\,\bar{\alpha}_{X_i} + \delta_{F_i}\right) - \delta_{F_i} = 2\|\Delta F_i\|^{2} - \delta_{F_i}$.
Therefore,
$\|\Delta F_i\|^{2} \le 0.25\,\bar{\lambda}_iV_i + 0.5\,\delta_{F_i} - \chi_i\theta_i - \zeta_{\alpha,j}\zeta_j$,
where $\theta_i = \min_i\left(\|\Delta A_i\|^{2} + \|\Delta B_i\|^{2}\right)$, $\zeta_j = \min_j\sum_{j=1,j\ne i}^{m}\|\Delta\bar{A}_{ij}\|^{2}$, $\zeta_{\alpha,j} = \min_j\sum_{j=1,j\ne i}^{m}\bar{\alpha}_{X_j}$.
As $\zeta_{\alpha,j}\zeta_j \le \delta_{F_i} - \chi_i\theta_i$, then $\|\Delta F_i\|^{2} \le 0.25\,\bar{\lambda}_iV_i + z_i$. ■

Appendix J

Proof of Corollary 2 from Theorem 9.
Represent $\dot{V}_i$ from (H.2) as:
$\dot{V}_i = -E_i^{T}Q_iE_i + E_i^{T}R_i\left(\Delta A_iX_i + \Delta B_iu_i + \sum_{j=1,j\ne i}^{m}\Delta\bar{A}_{ij}X_j\right) + E_i^{T}R_i\Delta F_iX_i = -E_i^{T}Q_iE_i - \mathrm{tr}\left(\Delta\dot{A}_i^{T}\Gamma_{A_i}^{-1}R_i\Delta A_i\right) - \sum_{j=1,j\ne i}^{m}\mathrm{tr}\left(\Delta\dot{\bar{A}}_{ij}^{T}\Gamma_{\bar{A}_{ij}}^{-1}R_i\Delta\bar{A}_{ij}\right) - \Delta\dot{B}_i^{T}\Gamma_{B_i}^{-1}R_i\Delta B_i + E_i^{T}R_i\Delta F_iX_i$.
Let $E_i^{T}Q_iE_i \ge \mu_iE_i^{T}R_iE_i \ge 2\mu_iV_i$, where $\mu_i>0$. Then:
$\dot{V}_i \le -2\mu_iV_i - 2\underline{\lambda}_{R_i}\dot{V}_{\Delta,i} + E_i^{T}R_i\Delta F_iX_i \le -2\mu_iV_i - 2\underline{\lambda}_{R_i}\dot{V}_{\Delta,i} + \bar{\lambda}_{R_i}V_i + 0.5\,\|\Delta F_i\|^{2}$,
where $\underline{\lambda}_{R_i}$, $\bar{\lambda}_{R_i}$ are the minimum and maximum eigenvalues of the matrix $R_i$.
The estimate (H.5) is fair for $\Delta F_i$. Therefore, (J.2) is represented as:
$\dot{V}_i \le -2\left(\mu_i - \bar{\lambda}_{R_i}\right)V_i - 2\underline{\lambda}_{R_i}\dot{V}_{\Delta,i} + \eta\,\bar{\alpha}_{X_i} + \delta_{F_i}$, $\quad \dot{V}_i + 2\underline{\lambda}_{R_i}\dot{V}_{\Delta,i} \le -2\sigma_iV_i + \chi_i$,
where $\chi_i = 2\eta\,\bar{\alpha}_{X_i} + \delta_{F_i}$. Let $\gamma_i^{-1} = \min\{1,\ 2\underline{\lambda}_{R_i}\}$, $\sigma_i = \mu_i - \bar{\lambda}_{R_i}>0$. Transform (J.3):
$\dot{W}_{S_i} \le -2\sigma_i\gamma_iV_i + \gamma_i\chi_i$.
Then
$W_{S_i}(t) \le W_{S_i}(t_0) - 2\tilde{\mu}_i\sigma_i\int_{t_0}^{t}V(\tau)\,d\tau + \sigma_i\chi_i\left(t-t_0\right)$
if $\sigma_i\chi_i\left(t-t_0\right) \le 2\tilde{\mu}_i\sigma_i\int_{t_0}^{t}V(\tau)\,d\tau$.■

References

  1. Villaverde A. F. Observability and structural identifiability of nonlinear biological systems. arXiv:1812.04525, 2018. [CrossRef]
  2. Renardy M., Kirschner D., and Eisenberg M. Structural identifiability analysis of age-structured PDE epidemic models. Journal of Mathematical Biology, 2022, 84(1-2); 2022. [CrossRef]
  3. Weerts, H. H. M., Dankers, A. G., &, Van den Hof, P. M. J. Identifiability in dynamic network identification. IFAC-PapersOnLine, 2015, 48(28), pp. 1409-1414. [CrossRef]
  4. Wieland F.-G., Hauber A. L., Rosenblatt M., Tönsing C. and Timmer J. On structural and practical identifiability. Current opinion in systems biology, 2021, 2, pp. 60–69. [CrossRef]
  5. Miao H., Xia X., Perelson A. S., and Wu H. On identifiability of nonlinear ode models and applications in viral dynamics. SIAM Rev Soc Ind Appl Math, 2011; 53(1), pp. 3–39. [CrossRef]
  6. Bellman R, Åström K: On structural identifiability. Math Biosci. 1970, 7, pp. 329–339. [CrossRef]
  7. Anstett-Collina F., Denis-Vidalc L., Millérioux G. A priori identifiability: An overview on definitions and approaches. Annual reviews in control, 2021, 50, pp.139-149. [CrossRef]
  8. Hong H., Ovchinnikov A., Pogudin G., Yap C. Global identifiability of differential models. Communications on Pure and Applied Mathematics, 2020, 73(9), pp. 1831–1879. [CrossRef]
  9. Denis-Vidal L., Joly-Blanchard G., Noiret C., System identifiability (symbolic computation) and parameter estimation (numerical computation). Numerical Algorithms, 2003, 34(2-4), pp. 283–292. [CrossRef]
  10. Boubaker O., Fourati A., Structural identifiability of nonlinear systems: an overview. In: Proc. of the IEEE International Conference on Industrial Technology, ICIT’04, December 8-10, 2004, pp. 1244–1248. [CrossRef]
  11. Audoly S., Bellu G., D'Angiò L., Saccomani M. P., Cobelli C. Global identifiability of nonlinear models of biological systems. IEEE Trans. Biomed. Eng., 2001, 48(1), pp. 55–65. [CrossRef]
  12. Chis O.-T., Banga J. R., and Balsa-Canto E. GenSSI: a software toolbox for structural identifiability analysis of biological models. Bioinformatics, 2011, 27(18), pp. 2610–2611. [CrossRef]
  13. Denis-Vidal L., Joly-Blanchard G., and Noiret C. Some effective approaches to check the identifiability of uncontrolled nonlinear systems. Math. Comput. Simul, 2001, 57(1), pp. 35–44. [CrossRef]
  14. Xia X., Moog C. H. Identifiability of nonlinear systems with application to HIV/AIDS models. IEEE Trans. Autom. Control, 2003, 48(2), pp. 330–336. [CrossRef]
  15. Stigter J. D. and Molenaar J. A fast algorithm to assess local structural identifiability. Automatica, 2015, 58, pp. 118–124. [CrossRef]
  16. Villaverde A. F. Observability and structural identifiability of nonlinear biological systems. Complexity, 2019, Article ID 8497093. [CrossRef]
  17. Hengl S., Kreutz C., Timmer J. and Maiwald T. Data-based identifiability analysis of non-linear dynamical models. Bioinformatics, 2007, 23(19), pp. 2612–2618. [CrossRef]
  18. Murphy S.A, van der Vaart AW: On profile likelihood. Journal of the American Statistical Association, 2000, 95, pp. 449-465. [CrossRef]
  19. Raue A, Kreutz C, Theis F.J., Timmer J. Joining forces of Bayesian and frequentist methodology: a study for inference in the presence of non-identifiability. Phil Trans R Soc A, 2013, 371:20110544. [CrossRef]
  20. Neale MC, Miller MB: The use of likelihood-based confidence intervals in genetic models. Behav Genet, 1997, 27, pp. 113–120. [CrossRef]
  21. Cedersund G: Prediction uncertainty estimation despite unidentifiability: an overview of recent developments. In Uncertainty in biology, 2016, pp. 449–466. [CrossRef]
  22. Karabutov N. Identification of decentralized control systems. Preprints ID 143582. Preprints202412.1808.v1. [CrossRef]
  23. Karabutov N. Adaptive observers for linear time-varying dynamic objects with uncertainty estimation. International journal of intelligent systems and applications, 2017, 9(6), pp. 1-15. [CrossRef]
  24. Karabutov N.N. Identifiability and Detectability of Lyapunov Exponents for Linear Dynamical Systems. Mekhatronika, Avtomatizatsiya, Upravlenie, 2022, 23(7), pp. 339-350. [CrossRef]
  25. Karabutov N. Identifiability and detectability of Lyapunov exponents in robotics. In Design and control advances in robotics, Ed. Mohamed Arezki Mellal, IGI Global Scientific Publishing, 2022, 10(11), pp. 152-174. [CrossRef]
  26. Bohr H. Almost periodic functions. Moscow: Librocom, 2009.
  27. Karabutov N. Adaptive identification of decentralized systems. In Advances in mathematics research. Volume 37. Ed. A. R. Baswell. New York: Nova Science Publishers, Inc. 2025. 79-115.
  28. Karabutov N.N., Shmyrin A.M. Application of adaptive observers for system identification with Bouc-Wen hysteresis. Bulletin of the Voronezh State Technical University, 2019, 15(6), pp. 7-13.
  29. Barbashin E. A. Lyapunov functions. Moscow: Nauka Publ., 1970.
  30. Gantmakher F. R. Theory of matrices. Moscow: Nauka, 2010.
Figure 1. Structure $J_{e,\Delta_{D_a}}$.
Figure 2. Structure $J_{\varepsilon,\Delta_{D_{\gamma,\beta}}}$.
Figure 3. Phase portrait.
Figure 4. LE set.
Figure 5. Estimate of $\chi$-adequacy of LE.