Preprint
Article

This version is not peer-reviewed.

Identification of Decentralised Control Systems

Submitted: 20 December 2024
Posted: 23 December 2024


Abstract
The identification problem of decentralised control systems (DS) is considered. Analysis shows that this problem has not been given sufficient attention. The complexity of systems and a priori uncertainty require the development of approaches and methods. DS parametric identifiability (PI) requires a solution. We propose the approach to the PI assessment based on the fulfilment of the constant excitation condition and consider relationships in the subsystems. PI conditions are got and algorithms for parametric and signal adaptive identification are received. We consider DS with non-linearities satisfying the quadratic condition. The exponential dissipativity of the identification system is proved using Lyapunov vector functions. The influence of interrelations considers on properties of parameter estimates. Examples are given. A method is proposed for the construction of adaptive algorithms under functional constraints.
Keywords: 

1. Introduction

Decentralised control systems (DCS) are widely used to solve various tasks. Ensuring DCS stability and quality is the principal goal of control. Such systems operate under incomplete a priori information. Thus, an adaptive robust DCS with a reference model is proposed for interconnected time-delayed systems in [1], and the asymptotic stability of the system is proved. A similar problem of stabilising the DS output using feedback is considered in [2]. The control laws are based on the application of nonlinear damping, an adaptive state observer and Lyapunov functions. Various variants of the adaptive control problem under uncertainty are studied in [3, 4]. In [5], a design method is proposed for adaptive decentralised regulators based on an identifier and a reference model. Recurrent neural networks [6] are used to control large-scale systems under uncertainty; the algorithms are applied to control a planar robot with two degrees of freedom.
Robust DS control of a nonlinear multidimensional object is proposed in [7], where the system identification is based on the frequency approach. The model approach [8] is recommended for the control of unknown large-scale DS. Correlation analysis and the least squares method [9] are the basis for DS identification. Stochastic procedures for DS identification are proposed in [10, 11]. The identification of DS with feedback [12] is based on the analysis of transient characteristics. Adaptive control of nonlinear large-scale systems (LSS) with bounded perturbations is considered in [13]. The asymptotic tracking problem for LSS based on nonlinear output feedback is considered in [14, 15], where adaptive algorithms guarantee disturbance compensation.
We see that various identification procedures and methods are used for DS. Parametric uncertainties are compensated by adjusting the parameters of the adaptive control law. The applied methods of retrospective identification do not always consider the current state of the system. The properties of the proposed algorithms, the identifiability of the system, and the influence of the interconnections in the system must be studied. These difficulties are compensated using multistep identification procedures.
We consider the adaptive identification problem for DS with nonlinearities (NDS) for which the quadratic condition is satisfied (Section 2). Section 3 contains a solution to the parametric identifiability (PI) problem for NDS. We study the influence of the information space on PI. An approach to the synthesis of adaptive identification algorithms based on the second Lyapunov method is proposed. We analyse parametric and signal algorithms, and study the properties of the identification system. We prove the exponential dissipativity of the adaptive identification system (ASI).

2. Problem Statement

Consider a system comprising $m$ interconnected subsystems
$$S_i:\quad \dot X_i = A_iX_i + B_iu_i + \sum_{j=1,\,j\ne i}^{m}\bar A_{ij}X_j + F_i\left(X_i\right),\qquad Y_i = C_iX_i,\tag{1}$$
where $X_i\in\mathbb R^{n_i}$, $Y_i\in\mathbb R^{q_i}$ are the state and output vectors of the subsystem $S_i$, $u_i$ is a control, $i=\overline{1,m}$, $\sum_{i=1}^{m}n_i=n$. The parameters of the matrices $A_i\in\mathbb R^{n_i\times n_i}$, $B_i\in\mathbb R^{n_i}$, $\bar A_{ij}\in\mathbb R^{n_i\times n_j}$ are unknown, and $C_i\in\mathbb R^{q_i\times n_i}$. The matrix $\bar A_{ij}$ reflects the influence of the subsystem $S_j$ on $S_i$. The term $F_i\left(X_i\right)\in\mathbb R^{n_i}$ describes the nonlinearity of the subsystem $S_i$, and $A_i$ is a Hurwitz (stable) matrix.
Assumption 1. $F_i\left(X_i\right)$ belongs to the class
$$\mathcal N_F\left(\pi_1,\pi_2\right)=\left\{F(X)\in\mathbb R^{n}:\ \pi_1\|X\|\le\|F(X)\|\le\pi_2\|X\|,\ F(0)=0\right\}\tag{2}$$
and satisfies the quadratic condition
$$\left(\pi_2X-F(X)\right)^{T}\left(F(X)-\pi_1X\right)\ge 0,\tag{3}$$
where $\pi_1>0$, $\pi_2>0$ are set numbers.
The information set of measurements for the subsystems is $\mathbb I_{o,i}=\left\{X_i(t),u_i(t),X_j(t),\ t\in J=\left[t_0,t_k\right]\right\}$.
The mathematical model for (1) is
$$\dot{\hat X}_i=K_i\left(\hat X_i-X_i\right)+\hat A_iX_i+\hat B_iu_i+\sum_{j=1,\,j\ne i}^{m}\hat{\bar A}_{ij}X_j+\hat F_i\left(X_i\right),\tag{4}$$
where $K_i\in H$ is a Hurwitz matrix with known parameters (reference model); $\hat A_i$, $\hat B_i$, $\hat{\bar A}_{ij}$ are tuned matrices of the corresponding dimensions, and $\hat F_i$ is an a priori given nonlinear vector function.
Problem: find algorithms for estimating the parameters of model (4), based on the analysis of the set $\mathbb I_{o,i}$ and the fulfilment of Assumption 1, such that
$$\lim_{t\to\infty}\left\|\hat X_i(t)-X_i(t)\right\|\le\delta_i,$$
where $\delta_i\ge 0$.

3. On the Identifiability of the $S_i$-Subsystem

Identifiability is the basis for estimating the parameters of the $S_i$-subsystem. It is known that fulfilment of the constant excitation (CE) condition for $\mathbb I_{o,i}$ guarantees the identifiability of the DS parameters.
The CE condition is
$$u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right):\quad \underline\alpha_{u_i}\le u_i^{2}(t)\le\bar\alpha_{u_i}\quad\forall t\in\left[t_0,t_0+T\right],\tag{5}$$
where $\underline\alpha_{u_i}$, $\bar\alpha_{u_i}$ are positive numbers and $T>0$. Below, condition (5) is written as $u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$. If $u_i(t)$ does not have the CE property, then we write $u_i(t)\notin \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$ or $u_i(t)\notin \mathrm{EC}$.
Remark 1. Condition (5) requires a modification that takes S-synchronisation for NDS into account [16]. We write property (5) as
$$\mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)_S:\quad \underline\alpha_{u_i}\le u_i^{2}(t)\le\bar\alpha_{u_i}\ \&\ \Omega_{u_i}(\omega)\subseteq\Omega_S(\omega),$$
where $\Omega_{u_i}(\omega)$ is the set of frequencies of $u_i$, and $\Omega_S(\omega)$ is the set of admissible frequencies for $u_i$ guaranteeing S-synchronisability. Below we denote the CE property by $\mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$, assuming that it guarantees $\mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)_S$.
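A condition of the type (5) can be screened numerically with sliding-window statistics of $u^2(t)$. The sketch below is an illustration only and is not part of the paper: it uses windowed averages of $u^2$ (rather than the pointwise bounds of (5)) to flag a sinusoid as constantly excited and a decaying exponential as not; the signals, step size and horizon are assumptions of this example.

```python
import numpy as np

def excitation_bounds(u, dt, T):
    """Windowed averages of u^2 over a horizon T: returns (alpha_low, alpha_up).
    A signal is treated as constantly excited when alpha_low stays above zero."""
    w = max(1, int(round(T / dt)))
    means = [float(np.mean(u[k:k + w] ** 2)) for k in range(0, len(u) - w + 1)]
    return min(means), max(means)

dt = 0.01
t = np.arange(0.0, 20.0, dt)
lo_sin, up_sin = excitation_bounds(np.sin(t), dt, T=2 * np.pi)   # persistently exciting
lo_dec, up_dec = excitation_bounds(np.exp(-t), dt, T=2 * np.pi)  # excitation dies out
```

For the sinusoid both bounds stay near 0.5 (the mean of $\sin^2$ over a period), while for the decaying signal the lower bound collapses to zero, i.e. the CE property is lost.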
To obtain identifiability conditions, consider model (4). The error equation is
$$\dot E_i=K_iE_i+\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right),\tag{6}$$
where $E_i=\hat X_i-X_i$; $\Delta A_i=\hat A_i-A_i$, $\Delta B_i=\hat B_i-B_i$, $\Delta\bar A_{ij}=\hat{\bar A}_{ij}-\bar A_{ij}$, $\Delta F_i=\hat F_i-F_i$ are parametric residuals.
Lemma 1. If the nonlinearity $F(X)\in\mathbb R^{n}$, $X\in\mathbb R^{n}$, belongs to the class $\mathcal N_F\left(\pi_1,\pi_2\right)$ and
$$\pi_1\|X\|\le\|F(X)\|\le\pi_2\|X\|,\tag{7}$$
then
$$\|F(X)\|^{2}\le\eta\bar\alpha_X,\tag{8}$$
where $\pi_1>0$, $\pi_2>0$, $\eta=\eta\left(\pi_1,\pi_2\right)>0$, $\bar\alpha_X=\bar\alpha_X(X)>0$.
The proof of Lemma 1 is presented in Appendix A.
Lemma 2. If the conditions of Lemma 1 are satisfied, then the estimate $\Delta F^{T}\Delta F\le 2\eta\bar\alpha_X+\delta_F$ is valid for $\Delta F(X)$, where $\eta=2\bar\pi+\pi^{2}$, $\bar\pi=\pi_1\pi_2$, $\pi=\pi_1+\pi_2$, $\delta_F>0$.
The proof of Lemma 2 is presented in Appendix B.
Consider system (6) and the Lyapunov function (LF) $V_i\left(E_i\right)=0.5E_i^{T}R_iE_i$, where $R_i=R_i^{T}>0$ is a positive definite symmetric matrix. Let $\|\Delta A_i\|^{2}=\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i\right)$ and $\|\Delta\bar A_{ij}\|^{2}=\mathrm{Sp}\left(\Delta\bar A_{ij}^{T}\Delta\bar A_{ij}\right)$ be the matrix norms of $\Delta A_i$, $\Delta\bar A_{ij}$.
Theorem 1. Let 1) $A_i\in H$; 2) $X_i(t)\in \mathrm{EC}\left(\underline\alpha_{X_i},\bar\alpha_{X_i}\right)$, $X_j(t)\in \mathrm{EC}\left(\underline\alpha_{X_j},\bar\alpha_{X_j}\right)$, $u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$; 3) the conditions of Lemmas 1 and 2 be satisfied for $F_i\left(X_i\right)$. Then subsystem (1) is identifiable on the set $\mathbb I_{o,i}$ if
$$2\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}\right)+2\eta\bar\alpha_{X_i}+\delta_{F_i}\le\bar\lambda_iV_i,\tag{9}$$
where $\bar\lambda_i=\lambda_i-k_i$, $\lambda_i>0$ is the minimum eigenvalue of the matrix $Q_i$, $k_i>0$, $R_iK_i+K_i^{T}R_i=-Q_i$, $Q_i$ is a positive definite symmetric matrix, $\eta=2\bar\pi+\pi^{2}$, $\pi=\pi_1+\pi_2$, $\bar\pi=\pi_1\pi_2$, $\delta_{F_i}\ge 0$.
The proof of Theorem 1 is presented in Appendix C.
If the conditions of Theorem 1 are fulfilled, then the subsystem $S_i$ is identifiable on the set $\mathbb I_{o,i}$, or $PI_{X_i}$-identifiable.
Consider the identifiability of the $S_i$-subsystem on the set $\mathbb I_{o,Y_i}=\left\{Y_i(t),u_i(t),Y_j(t),\ t\in J=\left[t_0,t_k\right]\right\}$.
The representation of the $S_i$-subsystem on $\mathbb I_{o,Y_i}$ is
$$\dot Y_i=A_{\#,i}Y_i+B_{\#,i}u_i+\sum_{j=1,\,j\ne i}^{m}\bar A_{\#,ij}Y_j+\tilde C_i^{\#}F_i\left(\tilde C_iY_i\right),\tag{10}$$
where $\tilde C_i=\left(C_i^{T}C_i\right)^{\#}C_i^{T}$, $A_{\#,i}=\tilde C_i^{\#}A_i\tilde C_i$, $B_{\#,i}=\tilde C_i^{\#}B_i$, $\bar A_{\#,ij}=\tilde C_i^{\#}\bar A_{ij}\tilde C_j$, and ${}^{\#}$ denotes the matrix pseudo-inverse.
The model for (10) has a similar structure. Introduce the error $E_{\#,i}=\hat Y_i-Y_i$ and the LF $V_{\#,i}\left(E_{\#,i}\right)=0.5E_{\#,i}^{T}R_iE_{\#,i}$.
Theorem 2. Let 1) $A_{\#,i}\in H$; 2) $Y_i(t)\in \mathrm{EC}\left(\underline\alpha_{Y_i},\bar\alpha_{Y_i}\right)$, $Y_j(t)\in \mathrm{EC}\left(\underline\alpha_{Y_j},\bar\alpha_{Y_j}\right)$, $u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$; 3) $F_i\left(X_i\right)$ satisfy the conditions of Lemmas 1 and 2; 4) the subsystem $S_i$ be observable. Then subsystem (1) is identifiable on the set $\mathbb I_{o,Y_i}$ if
$$2\left(\bar\alpha_{Y_i}\|\Delta A_{\#,i}\|^{2}+\bar\alpha_{u_i}\|\Delta B_{\#,i}\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{Y_j}\|\Delta\bar A_{\#,ij}\|^{2}\right)+2\eta\bar\alpha_{Y_i}+\delta_{F_i}\le\bar\lambda_{\#,i}V_{\#,i},$$
where $\Delta A_{\#,i}=\hat A_{\#,i}-A_{\#,i}$, $\Delta B_{\#,i}=\hat B_{\#,i}-B_{\#,i}$, $\Delta\bar A_{\#,ij}=\hat{\bar A}_{\#,ij}-\bar A_{\#,ij}$, $\Delta F_i=\hat F_i-F_i$, $\bar\lambda_{\#,i}>0$.
The proof of Theorem 2 coincides with the proof of Theorem 1.

4. Synthesis of Adaptation Algorithms

Consider the LF $V_i\left(E_i\right)=0.5E_i^{T}R_iE_i$ and
$$\dot V_i=-E_i^{T}Q_iE_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right).$$
We require that the functional constraint $\dot V_i\le-\chi\left(\Delta A_i,\Delta\bar A_i,\Delta B_i,\Delta F_i\right)$ be satisfied for all $t\ge t_0$, where
$$\chi\left(\Delta A_i,\Delta\bar A_i,\Delta B_i,\Delta F_i\right)=0.5\left(\varphi_{A_i}(t)\|\Delta A_i(t)\|^{2}+\varphi_{\bar A_i}(t)\|\Delta\bar A_i(t)\|^{2}+\varphi_{B_i}(t)\|\Delta B_i\|^{2}+\varphi_{F_i}(t)\|\Delta F_i(t)\|^{2}\right)$$
and $\varphi_{A_i}(t)$, $\varphi_{\bar A_i}(t)$, $\varphi_{B_i}(t)$, $\varphi_{F_i}(t)$ are bounded non-negative functions. Then
$$\dot V_i=-E_i^{T}Q_iE_i-\chi\left(\Delta A_i,\Delta\bar A_i,\Delta B_i,\Delta F_i\right)+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right).\tag{11}$$
From (11), we obtain the adaptive algorithms
$$\Delta\dot A_i=-\Gamma_{A_i}\left(\varphi_{A_i}(t)\Delta A_i+R_iE_iX_i^{T}\right),\quad \Delta\dot{\bar A}_{ij}=-\Gamma_{\bar A_{ij}}\left(\varphi_{\bar A_{ij}}(t)\Delta\bar A_{ij}+R_iE_iX_j^{T}\right),\quad \Delta\dot B_i=-\Gamma_{B_i}\left(\varphi_{B_i}(t)\Delta B_i+R_iE_iu_i\right),\tag{12}$$
where $\Gamma_{A_i}$, $\Gamma_{\bar A_{ij}}$, $\Gamma_{B_i}$ are diagonal matrices of the corresponding dimensions with positive diagonal elements, ensuring the stability of the adaptation processes.
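For a scalar plant, the structure of identifier (4) with adaptation laws of the type (12) (with $\varphi_i=0$ and $R_i=1$) reduces to a gradient identifier. The following sketch is an illustration only and is not the paper's experiment: the plant values $a=-2$, $b=1$, the gains, the step size and the sinusoidal input are assumptions of this example.

```python
import numpy as np

# Scalar sketch of identifier (4) with adaptation laws of type (12), phi = 0, R = 1.
a, b = -2.0, 1.0               # "unknown" plant: dx/dt = a*x + b*u  (assumed values)
k, gamma, dt = -5.0, 10.0, 1e-3
x, xh, ah, bh = 0.0, 0.0, 0.0, 0.0
for n in range(int(200.0 / dt)):
    u = np.sin(n * dt)          # persistently exciting input
    e = xh - x                  # identification error E = x_hat - x
    x_new = x + dt * (a * x + b * u)          # Euler step of the plant
    xh_new = xh + dt * (k * e + ah * x + bh * u)  # Euler step of the model
    ah += dt * (-gamma * e * x)  # gradient adaptation of a_hat, cf. (12)
    bh += dt * (-gamma * e * u)  # gradient adaptation of b_hat, cf. (12)
    x, xh = x_new, xh_new
```

Under the CE input, both the identification error and the parametric residuals decay, which is the behaviour the synthesis above is designed to guarantee.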

4.1. Parametric Algorithm for $F_i$

Parametric and signal algorithms can be used to estimate $F_i$. Consider the parametric approach [17].
Assumption 2. The function $F_i\left(X_i\right)$ is given on the set
$$\mathcal F_{F_i}=\left\{F_i\in\mathcal N_F\left(\pi_1,\pi_2\right):\ F_i\left(X_i\right)=\tilde F_i^{T}\left(X_i,N_{i,1}\right)N_{i,2},\ N_i=\left[N_{i,1}^{T},N_{i,2}^{T}\right]^{T},\ N_i\in\mathcal I_a\right\},\tag{13}$$
where $\mathcal I_{i,a}=\left\{N_i\in\mathbb R^{n}:\ \underline N_i\le N_i\le\bar N_i\right\}$ is the a posteriori generated parametric domain for $F_i$; $\underline N$, $\bar N$ are the vector boundaries for $N$, understood elementwise as $\underline n_i\in\underline N_i$, $\bar n_i\in\bar N_i$; $N_{i,1}$ is an a priori set vector of nonlinearity parameters; $N_{i,2}$ is an a priori unknown set of parameters, which we consider as a vector to be estimated. Some elements of $\bar N_i$, $\underline N_i$ may be unknown. The structure of $\tilde F_i\left(X_i,N_{i,1}\right)$ is formed a priori using the known vector $N_{i,1}$.
As follows from (13), the estimate of the function $F_i\left(X_i\right)$ is defined in the form
$$\hat F_i\left(X_i\right)=\tilde F_i^{T}\left(X_i,\hat N_{i,1}\right)\hat N_{i,2},\tag{14}$$
where $\hat N_{i,1}\in\mathbb R^{n_{i,1}}$ is an a priori estimate of the known parameters and $\hat N_{i,2}\in\mathbb R^{n_{i,2}}$ is the vector of tuned parameters.
We assume $\mathcal I_i=\mathcal I_{i,1}\cup\mathcal I_{i,2}$. The set $\mathcal I_{i,1}\subset\mathbb R^{n_{i,1}}$, $N_{i,1}\in\mathcal I_{i,1}$, contains elements that are not available for adjustment. Estimates of the elements $N_{i,2}\in\mathcal I_{i,2}\subset\mathbb R^{n_{i,2}}$ are obtained at the identification stage. The matrix $\tilde F_i\left(X_i,\hat N_{i,1}\right)$ is formed at the stage of the structural synthesis (analysis) of the system. Representation (14) is a consequence of the proposed parametric concept for $F_i\left(X_i\right)$.
Remark 2. The vector $\hat N_{i,1}$ can be adjusted iteratively based on the coercion algorithm [17].
As $\Delta F_i=\tilde F_i^{T}\left(X_i,\hat N_{i,1}\right)\hat N_{i,2}-F_i\left(X_i\right)$, we obtain an adaptive algorithm for $\hat N_{i,2}$ from the condition $\dot V_i\le 0$:
$$\dot{\hat N}_{i,2}=-\Gamma_{F_i}\left(\varphi_{F_i}\hat N_{i,2}+\tilde F_i\left(X_i,\hat N_{i,1}\right)R_iE_i\right),\tag{15}$$
where $\Gamma_{F_i}$ is a diagonal matrix with positive diagonal elements. Denote the system (6), (11), (15) by $AS_{AF_i}$.

4.2. Signal Algorithm

Consider the model
$$\dot{\hat X}_i=K_i\left(\hat X_i-X_i\right)+\hat A_iX_i+\hat B_iu_i+\sum_{j=1,\,j\ne i}^{m}\hat{\bar A}_{ij}X_j+U_i\tag{16}$$
and the equation for the error
$$\dot E_i=K_iE_i+\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+U_i-F_i\left(X_i\right).\tag{17}$$
Then
$$\dot V_i=-E_i^{T}Q_iE_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+U_i-F_i\right).\tag{18}$$
Choose the algorithm for $U_i$ in the form
$$U_i=-D_iR_iE_i,\tag{19}$$
where $D_i\in\mathbb R^{n_i\times n_i}$ is a diagonal matrix with positive diagonal elements. As $F_i\left(X_i\right)\in\mathcal N_F\left(\pi_1,\pi_2\right)$, Lemma 1 is valid for $F_i\left(X_i\right)$.
Apply the approach of [18]. Select the elements of the matrix $D_i$ from the condition $D_i\ge d_i\ge\eta\bar\alpha_{X_i}$. Then
$$\dot V_i=-E_i^{T}Q_iE_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j\right)-E_i^{T}R_iD_iR_iE_i-E_i^{T}R_iF_i.\tag{20}$$
As $E_i^{T}R_iD_iR_iE_i\ge 2\eta\bar\alpha_{X_i}\underline\lambda_{R_i}V_i$, we obtain
$$\dot V_i\le-\sigma V_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j\right),\tag{21}$$
where $\underline\lambda_{R_i}$ is the smallest eigenvalue of the matrix $R_i$ and $\sigma=\lambda_{Q_i}+2\eta\bar\alpha_{X_i}\underline\lambda_{R_i}$. Applying the approach outlined in the proof of Theorem 1, we get
$$\dot V_i\le-\sigma V_i+2\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}\right).$$
If
$$2\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}\right)+\eta\bar\alpha_{X_i}\le\sigma V_i,$$
then system (16) is parametrically identifiable on the set $\left\{u_i(t),X_i(t),X_j(t)\right\}$ on the class of algorithms (12), (19), if $u_i(t),X_i(t),X_j(t)\in \mathrm{EC}$.
Denote the system (6), (12), (19) by $AS_{AS_i}$.
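A scalar illustration of the compensating effect of a term of the type (19): the identifier below is assumed to know the linear part and uses only the signal term $u_s=-d\,e$ to suppress an unmodelled bounded nonlinearity. The plant, the tanh nonlinearity and all numerical values are assumptions of this example, not taken from the paper.

```python
import numpy as np

def max_error(d, T=40.0, dt=1e-3):
    """Steady-state identification error of a scalar identifier that compensates
    an unknown bounded f(x) only through the signal term u_s = -d*e, cf. (19)."""
    a, k = -1.0, -2.0
    x, xh, emax = 0.0, 0.0, 0.0
    for n in range(int(T / dt)):
        t = n * dt
        u = np.sin(t)
        f = np.tanh(3.0 * x)           # "unknown" bounded nonlinearity (assumed)
        e = xh - x
        x_new = x + dt * (a * x + u + f)
        xh_new = xh + dt * (k * e + a * x + u - d * e)  # linear part assumed known
        x, xh = x_new, xh_new
        if t > 20.0:                   # record error after the transient
            emax = max(emax, abs(xh - x))
    return emax

e_without = max_error(0.0)    # no signal compensation
e_with = max_error(20.0)      # strong signal compensation
```

The error dynamics are $\dot e=(k-d)e-f(x)$, so increasing $d$ shrinks the residual error bound roughly as $\sup|f|/(|k|+d)$, which the two runs demonstrate.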

5. Functional Restriction and Synthesis of Adaptive Algorithms

The described method for synthesising adaptive algorithms (AA) is typical for adaptive identification. Another approach is based on accounting for the limitations imposed on the ASI. This approach requires some a priori knowledge and does not always provide workable algorithms. We propose a method based on the consideration of requirements for the ASI. We show the algorithm synthesis method using the example of the matrix $\Delta A_i$.
Consider the LF
$$V_{i\Delta}\left(E_i,\Delta A_i,\Delta\dot A_i\right)=0.5E_i^{T}R_iE_i+0.5\,\mathrm{Sp}\left(\Delta A_i^{T}\Delta\dot A_i\right).$$
Let
$$\dot V_{i\Delta}\le-\chi_\Delta=-\alpha_\Delta\,\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i\right),\qquad \alpha_\Delta>0.\tag{22}$$
Denote $\eta_\Delta=\dot V_{i\Delta}+\chi_\Delta$ and obtain
$$\eta_\Delta=-E_i^{T}Q_iE_i+E_i^{T}R_i\Delta A_iX_i+\mathrm{Sp}\left(\Delta\dot A_i^{T}\Delta\dot A_i\right)+\mathrm{Sp}\left(\Delta A_i^{T}\Delta\ddot A_i\right)+\alpha_\Delta\,\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i\right).$$
The adaptive algorithm for $\Delta A_i$ is
$$\Delta\ddot A_i=-\Delta\dot A_i-\alpha_\Delta\Delta A_i-\Gamma_{A_i}R_iE_iX_i^{T}.\tag{23}$$
Let $\Delta A_i=Z_1$. Then
$$S_{AA}:\quad \dot Z_1=Z_2,\qquad \dot Z_2=-\alpha_\Delta Z_1-Z_2-\Gamma_{A_i}R_iE_iX_i^{T}.\tag{24}$$
So, if the functional restriction $\chi_\Delta\ge 0$ is imposed on the ASI, then the AA is described by the system $S_{AA}$. The use of the $S_{AA}$-algorithm is associated with difficulties of application. Therefore, using $S_{AA}$ requires its modification.
Let $\chi_{e,\Delta}=\alpha_{e,\Delta}\,\mathrm{Sp}\left(\Delta A_i^{T}D_E\Delta\dot A_i\right)$ and $\eta_{e,\Delta}=\dot V_i+\chi_{e,\Delta}$. Then the $A_M$-algorithm is presented as
$$\Delta\dot A_i=-\Gamma_{A_i}R_iE_iX_i^{T}-\alpha_{e,\Delta}\Gamma_{\dot A_i}D_E\Delta\dot A_i\tag{25}$$
or
$$\Delta\dot A_i=-M^{-1}\Gamma_{A_i}R_iE_iX_i^{T},\tag{26}$$
where $M=I_{n\times n}+\alpha_{e,\Delta}\Gamma_{\dot A_i}D_E$, $D_E$ is the diagonal matrix formed from the vector $E$, $I_{n\times n}$ is the identity matrix, and $\Gamma_{\dot A_i}=\Gamma_{\dot A_i}^{T}>0$.
We describe the $A_M$-algorithm (25) as
$$\Delta\dot A_i(t)=-\Gamma_{A_i}R_iE_i(t)X_i^{T}(t)-\omega_{e,\Delta}\Gamma_{\dot A_i}D_E\left(\Delta A_i(t)-\Delta A_i(t-\tau)\right),\tag{27}$$
where $\omega_{e,\Delta}=\tilde\alpha_{e,\Delta}\tau^{-1}$. It is difficult to evaluate the properties of algorithm (27). If the matrix $D_E$ is the identity matrix, then
$$\Delta\dot A_i(t)=-\Gamma_{A_i}R_iE_i(t)X_i^{T}(t)-\omega_{e,\Delta}\Gamma_{\dot A_i}\left(\Delta A_i(t)-\Delta A_i(t-\tau)\right).\tag{27a}$$
The convergence conditions for the estimates of algorithm (27a) are given below for $\omega_{e,\Delta}=1$ and a diagonal matrix $\Gamma_{A_i}$.
Theorem 3. Let 1) $A_i\in H$; 2) $X_i(t)\in \mathrm{EC}\left(\underline\alpha_{X_i},\bar\alpha_{X_i}\right)$, $X_j(t)\in \mathrm{EC}\left(\underline\alpha_{X_j},\bar\alpha_{X_j}\right)$, $u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$; 3) $F_i\left(X_i\right)$ satisfy the conditions of Lemmas 1 and 2; 4) $V_{\Delta,i}=\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Delta A_i(t)\right)$; 5) there exist $\upsilon>0$ such that
$$\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Gamma_{A_i}R_iE_iX_i^{T}\right)=\upsilon\left(\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Gamma_{A_i}\Delta A_i\right)+E_i^{T}R_i^{2}E_i\|X_i\|^{2}\right)$$
holds for $t\ge t_0$. Then the estimates of algorithm (27a) are bounded if
$$\left\|\Delta A_i(t-\tau)\right\|^{2}\le\eta V_{\Delta,i}-\frac{\upsilon^{2}\sigma_e}{3\sigma_\Delta}V_i\qquad\forall t>t_0,$$
where $\eta=\underline\lambda_{\Gamma_{\dot A_i}}\left(1+\frac{3}{4}\upsilon\sigma_\Delta\right)$, $\sigma_e=2\bar\alpha_{X_i}\bar\lambda_{R_i}$, $\sigma_\Delta=\bar\lambda_{\Gamma_{\dot A_i}}\bar\lambda_{\Gamma_{A_i}}$, and $\underline\lambda_{\Gamma_{\dot A_i}}$ is the minimum eigenvalue of the matrix $\Gamma_{\dot A_i}$.
The proof of Theorem 3 is presented in Appendix D.
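A scalar sketch of the delayed algorithm (27a): the parameter update combines the gradient term with a damping term built from the difference $\hat a(t)-\hat a(t-\tau)$, implemented with a circular delay buffer. This is an illustration only; the plant $a=-2$, the gains, the delay $\tau$ and the input are assumptions of this example.

```python
import numpy as np

# Scalar analogue of (27a):
#   da_hat/dt = -gamma*e*x - omega*(a_hat(t) - a_hat(t - tau))
# Plant dx/dt = a*x + u (assumed known input gain); model dx_hat/dt = k*e + a_hat*x + u.
a, k, gamma, omega, tau, dt = -2.0, -5.0, 5.0, 1.0, 0.1, 1e-3
buf = [0.0] * int(tau / dt)         # circular delay line storing a_hat(t - tau)
x, xh, ah = 0.0, 0.0, 0.0
for n in range(int(100.0 / dt)):
    u = np.sin(n * dt)
    e = xh - x
    ah_delayed = buf[n % len(buf)]  # value written tau seconds ago
    x_new = x + dt * (a * x + u)
    xh_new = xh + dt * (k * e + ah * x + u)
    ah_new = ah + dt * (-gamma * e * x - omega * (ah - ah_delayed))
    buf[n % len(buf)] = ah          # overwrite slot; reused after tau seconds
    x, xh, ah = x_new, xh_new, ah_new
```

With a small $\tau$ the delayed difference acts approximately as $\omega\tau\,\dot{\hat a}$, i.e. as extra damping of the parameter motion, and the estimate remains bounded and converges under the CE input.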
Remark 3. Algorithm (27) is a differential equation with aftereffect. Discrete analogues of equation (27) have been proposed by various authors for regression models; they are based on the intuition of the researcher.

6. Properties of Adaptive System

6.1. System $AS_{AF_i}$

Consider the systems $AS_{AF_i}$, $AS_{AF_j}$ and the LF $V_i\left(E_i\right)=0.5E_i^{T}R_iE_i$,
$$V_{\Delta,i}=0.5\,\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_i^{-1}\Delta A_i\right)+0.5\sum_{j=1}^{m}\mathrm{Sp}\left(\Delta\bar A_{ij}^{T}\Gamma_{ij}^{-1}\Delta\bar A_{ij}\right)+0.5\Delta B_i^{T}\Gamma_i^{-1}\Delta B_i+0.5\Delta N_{i,2}^{T}\Gamma_{N_{i,2}}^{-1}\Delta N_{i,2},\tag{28}$$
where $\mathrm{Sp}(\cdot)$ is the trace of a matrix. We assume that the interconnection matrix $\bar A_{ij}$ ensures the stability of the $S_i$-subsystem.
Theorem 4. Let (i) the Lyapunov functions $V_i(t)$, $V_{\Delta,i}(t)$ admit an infinitesimal upper limit; (ii) $A_i\in H$; (iii) $\bar A_{ij}$ ensure the stability of the subsystem $S_i$; (iv) $X_i(t)\in \mathrm{EC}\left(\underline\alpha_{X_i},\bar\alpha_{X_i}\right)$, $X_j(t)\in \mathrm{EC}\left(\underline\alpha_{X_j},\bar\alpha_{X_j}\right)$, $u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$; (v) $F_i\in\mathcal F_{F_i}$; (vi) the system of inequalities
$$\begin{pmatrix}\dot V_i\\ \dot V_{\Delta,i}\end{pmatrix}\le\underbrace{\begin{pmatrix}-\mu_i & 2\mu_i\kappa_i\\ \vartheta_{\chi\alpha,i}\rho_i & -\beta_{\lambda\chi,i}\end{pmatrix}}_{A_{W_i}}\begin{pmatrix}V_i\\ V_{\Delta,i}\end{pmatrix}+\underbrace{\begin{pmatrix}2\mu_i\bar\upsilon_{\tilde F_i}\\ 0.5\delta_{N,i}\end{pmatrix}}_{L_i}\tag{29}$$
be valid for the Lyapunov vector function $W_i=\left[V_i,V_{\Delta,i}\right]^{T}$, where $\mu_i$, $\kappa_i$, $\vartheta_{\chi\alpha,i}$, $\rho_i$, $\beta_{\lambda\chi,i}$, $\bar\upsilon_{\tilde F_i}$, $\delta_{N,i}$ are positive numbers depending on the parameters of the subsystem $S_i$ and the properties of the set $\mathbb I_{o,i}$; (vii) the upper solution for $W_i$ satisfy the system of equations $\dot S_{W_i}=A_{W_i}S_{W_i}+L_i$ if
$$w_\rho(t)\le s_\rho(t)\ \forall t\ge t_0\ \&\ w_\rho\left(t_0\right)\le s_\rho\left(t_0\right),$$
$w_\rho\in W_i$, $s_\rho\in S_{W_i}$, $\rho=\left\{e,i;\Delta,i\right\}$. Then the $AS_{F,i}$-system is exponentially dissipative with the estimate
$$W_i(t)\le e^{A_{W_i}\left(t-t_0\right)}S_{W_i}\left(t_0\right)+\int_{t_0}^{t}e^{A_{W_i}(t-\tau)}L_i\,d\tau,\tag{30}$$
if
$$\mu_i\beta_{\lambda\chi,i}\ge 2\mu_i\kappa_i\vartheta_{\chi\alpha,i}\rho_i.\tag{31}$$
As follows from (30), the limiting properties of $AS_{F,i}$ are determined by the elements of the vector $L_i$. If the structure and parameters of the vector $\hat N_{i,1}$ are known, then the $AS_{F,i}$-subsystem is exponentially stable if $\tilde F_i\in \mathrm{EC}$.
The proof of Theorem 4 is presented in Appendix E.
Consider the system $AS_{F,i,j}$ with the subsystems $AS_{F,i}$ and $AS_{F,j}$. We have the system of inequalities for $AS_{F,i,j}$:
$$\begin{pmatrix}\dot W_i\\ \dot W_j\end{pmatrix}\le\begin{pmatrix}A_{W_i} & 0\\ 0 & A_{W_j}\end{pmatrix}\begin{pmatrix}W_i\\ W_j\end{pmatrix}+\begin{pmatrix}L_i\\ L_j\end{pmatrix},\tag{32}$$
where $A_{W_i}$ and $L_i$ have the form (29). The exponential dissipativity conditions are
$$\mu_i\beta_{\lambda\chi,i}\ge 2\mu_i\kappa_i\vartheta_{\chi\alpha,i}\rho_i,\qquad \mu_j\beta_{\lambda\chi,j}\ge 2\mu_j\kappa_j\vartheta_{\chi\alpha,j}\rho_j.$$

6.2. System $AS_{AS_i}$

Consider the LF $V_i(t)$ and
$$V_{\Delta S,i}=0.5\,\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_i^{-1}\Delta A_i\right)+0.5\sum_{j=1}^{m}\mathrm{Sp}\left(\Delta\bar A_{ij}^{T}\Gamma_{ij}^{-1}\Delta\bar A_{ij}\right)+0.5\Delta B_i^{T}\Gamma_i^{-1}\Delta B_i+0.5\int_{t_0}^{t}\Delta U_{iF_i}^{T}(\tau)\Delta U_{iF_i}(\tau)\,d\tau,$$
where $\Delta U_{iF_i}=U_i-F_i$.
Theorem 5. Let (i) the Lyapunov functions $V_i(t)$ and $V_{\Delta S,i}(t)$ have an infinitesimal upper limit; (ii) $A_i\in H$; (iii) $\bar A_{ij}$ ensure the stability of the subsystem $S_i$; (iv) $F_i\left(X_i\right)\in\mathcal N_F\left(\pi_1,\pi_2\right)$; (v) $u_i(t)\in \mathrm{EC}\left(\underline\alpha_{u_i},\bar\alpha_{u_i}\right)$, $X_i(t)\in \mathrm{EC}\left(\underline\alpha_{X_i},\bar\alpha_{X_i}\right)$, $X_j(t)\in \mathrm{EC}\left(\underline\alpha_{X_j},\bar\alpha_{X_j}\right)$; (vi) there exist $\nu_i>0$ such that the condition $F_i^{T}\Delta U_{iF_i}=\nu_i\left(\|F_i\|^{2}+\left\|\Delta U_{iF_i}\right\|^{2}\right)$ is satisfied for $t\gg t_0$ in some neighbourhood of the origin; (vii) the system of inequalities
$$\dot W_{S_i}=\begin{pmatrix}\dot V_i\\ \dot V_{\Delta S,i}\end{pmatrix}\le\underbrace{\begin{pmatrix}-\mu_i & 2\mu_i\kappa_{\Delta S,i}\\ \vartheta_{\Delta S,i}\rho_i & -\beta_{\Delta S,i}\end{pmatrix}}_{A_{W_{S_i}}}\begin{pmatrix}V_i\\ V_{\Delta S,i}\end{pmatrix}+\underbrace{\begin{pmatrix}0\\ 0.125\,v_i\eta_i\bar\alpha_{X_i}\end{pmatrix}}_{L_{S,i}}\tag{33}$$
be valid for the Lyapunov vector function $W_{S,i}=\left[V_i,V_{\Delta S,i}\right]^{T}$, where $\mu_i$, $\kappa_{\Delta S,i}$, $\vartheta_{\Delta S,i}$, $\rho_i$, $\beta_{\Delta S,i}$, $v_i$, $\eta_i$ are positive numbers depending on the parameters of the subsystem $S_i$ and the properties of the set $\mathbb I_{o,i}$; (viii) the upper solution for $W_{S_i}$ satisfy the system of equations $\dot S_{W_{S,i}}=A_{W_{S_i}}S_{W_{S,i}}+L_{S,i}$ if
$$w_\rho(t)\le s_\rho(t)\ \forall t\ge t_0\ \&\ w_\rho\left(t_0\right)\le s_\rho\left(t_0\right),$$
$\rho=\left\{e,i;\Delta S,i\right\}$ for the elements of $W_{S_i}$, $w_\rho\in W_{S_i}$, $s_\rho\in S_{W_{S,i}}$. Then the $AS_{AS_i}$-system is exponentially dissipative with the estimate
$$W_{S_i}(t)\le e^{A_{W_{S_i}}\left(t-t_0\right)}S_{W_{S_i}}\left(t_0\right)+\int_{t_0}^{t}e^{A_{W_{S_i}}(t-\sigma)}L_{S,i}\,d\sigma,\tag{34}$$
if $\mu_i\beta_{\Delta S,i}\ge 2\mu_i\kappa_{\Delta S,i}\vartheta_{\Delta S,i}\rho_i$.
As follows from Theorem 5, the application of the $AS_{AS_i}$-system gives biased estimates of the parameters of the $S_i$-subsystem.
Theorem 5 proof is presented in Appendix F.
Remark 4. Signal algorithms (SA) are widely used in adaptive control systems (see the review [19]). The rationale for SA is based on ensuring the non-positivity of the derivative of the LF. This is a feature of quadratic LFs, which do not fully reflect the specifics of the processes in the system $AS_{AS_i}$. The Lyapunov function $V_{\Delta S,i}$ proposed in this paper allows the properties of the adaptive system to be proved.
Remark 5. Algorithm (19) is a compensating control. Therefore, the term "signal adaptation" reflects only the gain factor in (19). In identification systems, the use of SA depends on the quality requirements imposed on the identification system.
Remark 6. The analysis of the properties of algorithms (12) with $\varphi_i=0$ is based on the results obtained in [20].

7. Example

Consider the system
$$S_1:\quad \begin{pmatrix}\dot x_{11}\\ \dot x_{12}\end{pmatrix}=\begin{pmatrix}0 & 1\\ a_{21} & a_{22}\end{pmatrix}\begin{pmatrix}x_{11}\\ x_{12}\end{pmatrix}+\begin{pmatrix}0\\ \bar a_1\end{pmatrix}x_2+\begin{pmatrix}0\\ b_1\end{pmatrix}u_1+\begin{pmatrix}0\\ c_1\end{pmatrix}f_1\left(x_{11}\right),\qquad y_1=x_{11},$$
$$S_2:\quad \dot x_2=a_2x_2+\bar a_2x_{11}+b_2u_2+c_2f_2\left(x_2\right),\qquad y_2=x_2,\tag{35}$$
where $X_1=\left[x_{11},x_{12}\right]^{T}$ and $y_1$ are the state vector and output of the subsystem $S_1$; $u_1$ is the input (control); $f_1\left(x_{11}\right)=\mathrm{sat}\left(x_{11}\right)$ is the saturation function; $f_2\left(x_2\right)=\mathrm{sign}\left(x_2\right)$ is the sign function; $y_2$ is the output of the subsystem $S_2$. The parameters of system (35) are $a_{21}=-2$, $a_{22}=-3$, $\bar a_1=1.5$, $b_1=1$, $c_1=1$, $a_2=-1.25$, $\bar a_2=0.2$, $b_2=1$, $c_2=0.25$. The inputs $u_i(t)$ are sinusoidal.
Since the variable $x_{12}$ is not measured, the subsystem $S_1$ is converted to a form in which only observable variables are used [20]. Subsystem $S_1$ has the following form in the input-output space:
$$\dot y_1=\alpha_1y_1+\alpha_2p_{y_1}+\beta_{12}p_{x_2}+b_1p_{u_1}+c_1p_{f_1},\tag{36}$$
where $\alpha_1$, $\alpha_2$, $\beta_{12}$, $b_1$, $c_1$ are unknown coefficients, $\mu>0$, and
$$\dot p_{y_1}=-\mu p_{y_1}+y_1,\quad \dot p_{x_2}=-\mu p_{x_2}+x_2,\quad \dot p_{u_1}=-\mu p_{u_1}+u_1,\quad \dot p_{f_1}=-\mu p_{f_1}+f_1.\tag{37}$$
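The variables in (37) are first-order state-variable filters; for a sinusoidal input of frequency $\omega$ their steady-state gain is $1/\sqrt{\mu^2+\omega^2}$. The quick numerical check below is an illustration of this edit, not part of the paper; the value $\mu=1$, the input and the step size are assumptions.

```python
import numpy as np

# First-order state-variable filter from (37): dp/dt = -mu*p + y.
mu, dt = 1.0, 1e-3
t = np.arange(0.0, 40.0, dt)
y = np.sin(t)                      # input with omega = 1
p = np.zeros_like(y)
for n in range(1, len(t)):
    p[n] = p[n - 1] + dt * (-mu * p[n - 1] + y[n - 1])  # explicit Euler step

amp = p[int(20.0 / dt):].max()           # steady-state amplitude after transients
expected = 1.0 / np.sqrt(mu**2 + 1.0)    # transfer-function gain 1/sqrt(mu^2 + omega^2)
```

The simulated amplitude matches the frequency-domain prediction, confirming that the filters supply smoothed, measurable surrogates for the unmeasured signals in (36).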
We present the phase portrait of $S_1$ in Figure 1. The processes in $S_1$ are nonlinear. There is a relationship between $y_1$ and $y_2$ (the coefficient of determination is 75%). This is reflected in the properties of the subsystem $S_1$ (see Figure 1). In particular, $y_2$ affects the S-synchronisability and the parameter estimation. Applying the approach of [16], we find that the subsystem $S_1$ is structurally identifiable.
The models for the subsystems $S_1$ and $S_2$ are
$$\dot{\hat y}_1=-k_1e_1+\hat a_{11}y_1+\hat a_{12}p_{y_1}+\hat\beta_{12}p_{x_2}+\hat b_1p_{u_1}+\hat c_1p_{f_1},\tag{38}$$
$$\dot{\hat y}_2=-k_2e_2+\hat a_2y_2+\hat{\bar a}_2y_1+\hat b_2u_2+\hat c_2f_2,\tag{39}$$
where $k_1$, $k_2$ are a priori set positive numbers (reference model); $e_1=\hat y_1-y_1$, $e_2=\hat y_2-y_2$ are identification errors; $\hat a_i$, $\hat{\bar a}_i$, $\hat b_i$, $\hat c_i$ are tuned parameters.
Apply algorithms (12) with $\varphi_i=0$:
$$\dot{\hat a}_{11}=-\gamma_{a_{11}}e_1y_1,\quad \dot{\hat a}_{12}=-\gamma_{a_{12}}e_1p_{y_1},\quad \dot{\hat\beta}_{12}=-\gamma_{\beta_{12}}e_1p_{x_2},\quad \dot{\hat b}_1=-\gamma_{b_1}e_1p_{u_1},\quad \dot{\hat c}_1=-\gamma_{c_1}e_1p_{f_1},\tag{40}$$
$$\dot{\hat a}_2=-\gamma_{a_2}e_2y_2,\quad \dot{\hat{\bar a}}_2=-\gamma_{\bar a_2}e_2y_1,\quad \dot{\hat b}_2=-\gamma_{b_2}e_2u_2,\quad \dot{\hat c}_2=-\gamma_{c_2}e_2f_2,\tag{41}$$
where $\gamma_{a_{ij}}>0$, $\gamma_{\bar a_i}>0$, $\gamma_{\beta_{ij}}>0$, $\gamma_{b_i}>0$, $\gamma_{c_i}>0$ are the gain factors of the adaptation subsystem.
Figure 2 shows the tuning of the parameters of model (38) for $S_1$.
The adequacy estimation of models (38), (39) is shown in Figure 3.
Figure 4 shows the dynamics of the tuning of the parameters of model (39) as functions of $e_2$. We see that the processes in the adaptive identification system (ASI) for $S_2$ are nonlinear. The tuning process is more regular in the ASI for the subsystem $S_1$.
Models (38) and (39) adaptation processes have different speeds (Figure 5).
Consider an ASI with signal adaptation for $S_2$. Apply the model
$$\dot{\hat y}_2=-k_2e_2+\hat a_2y_2+\hat{\bar a}_2y_1+\hat b_2u_2+u_{s,2},\tag{39a}$$
where $u_{s,2}=-d_2e_2$, $d_2>0$.
We show the results for this ASI in Figures 6-9. The adaptation of the parameters of model (38) and the adequacy of the models in the output space are shown in Figures 6 and 7. Figure 8 reflects the dynamics in the ASI and the change of the SA as a function of $e_2$ for the subsystem $S_2$.
We see that the output of the $S_2$-subsystem with SA affects the adaptation in the ASI of the $S_1$-subsystem. Therefore, an ASI with SA should be applied with the quality requirements of the identification process taken into account. Despite the compensating properties of the SA, it can lead to more complicated processes in the ASI.

8. Conclusions

We consider a class of nonlinear decentralised control systems for which the quadratic condition is valid. The identifiability problem for the $S_i$-subsystem of a DS is studied. We note the role of the constant excitation condition in the parametric identifiability analysis of decentralised systems. Quadratic estimates are obtained for the nonlinear part of the $S_i$-subsystem. Parametric identifiability conditions are obtained. Algorithms of parametric and signal adaptation are synthesised, and the properties of the identification system are studied. The exponential dissipativity of the adaptive identification system is proved. We present simulation results confirming the efficiency of the proposed approach. The appendices contain the proofs.

Appendix A. Lemma 1 Proof

As $\pi_1\|X\|\le\|F(X)\|\le\pi_2\|X\|$, then
$$\left(\pi_2X-F(X)\right)^{T}\left(F(X)-\pi_1X\right)\ge 0,\tag{A1}$$
where $\pi_1>0$, $\pi_2>0$.
After simple transformations, we get
$$\chi=\pi_2X^{T}F(X)-\pi_1\pi_2X^{T}X-F^{T}(X)F(X)+\pi_1X^{T}F(X)=\pi X^{T}F(X)-\bar\pi X^{T}X-F^{T}(X)F(X)\ge 0,\tag{A2}$$
where $\pi=\pi_1+\pi_2$. Let $X(t)\in \mathrm{EC}\left(\underline\alpha_X,\bar\alpha_X\right)$. Denote $X^{T}X=\|X\|^{2}$, $\underline\alpha_X\le\|X\|^{2}\le\bar\alpha_X$, $\bar\pi=\pi_1\pi_2$, where $\underline\alpha_X$, $\bar\alpha_X$ are positive numbers.
From (A2), $F^{T}(X)F(X)\le\pi X^{T}F(X)-\bar\pi X^{T}X$. Completing the square and using $\pi X^{T}F(X)\le 0.5F^{T}(X)F(X)+0.5\pi^{2}X^{T}X$, we obtain
$$F^{T}(X)F(X)\le\left(\pi^{2}-2\bar\pi\right)X^{T}X\le\left(2\bar\pi+\pi^{2}\right)X^{T}X=\eta X^{T}X\le\eta\bar\alpha_X.$$

Appendix B. Lemma 2 Proof

As $\hat F=F+\Delta F$, we obtain from (7): $\left(\pi_2X-\hat F\right)^{T}\left(\hat F-\pi_1X\right)\ge 0$. Transform this inequality to the form
$$\left(f_2-\Delta F\right)^{T}\left(f_1+\Delta F\right)\ge 0,\tag{B1}$$
where $f_1=F-\pi_1X$, $f_2=\pi_2X-F$. Then
$$f_2^{T}f_1+f_2^{T}\Delta F-f_1^{T}\Delta F-\Delta F^{T}\Delta F\ge 0,\qquad f_2^{T}f_1+f_{12}^{T}\Delta F-\Delta F^{T}\Delta F\ge 0,\tag{B2}$$
where $f_{12}=f_2-f_1$.
Transform (B2) using $f_{12}^{T}\Delta F\le 0.5\Delta F^{T}\Delta F+0.5f_{12}^{T}f_{12}$:
$$0.5\Delta F^{T}\Delta F\le f_2^{T}f_1+0.5f_{12}^{T}f_{12}.$$
As $f_2^{T}f_1\le\eta\bar\alpha_X$ (see Lemma 1), then
$$\Delta F^{T}\Delta F\le 2\eta\bar\alpha_X+f_{12}^{T}f_{12}.$$
For $f_{12}^{T}f_{12}$ we obtain
$$f_{12}^{T}f_{12}=\left(\pi X-2F\right)^{T}\left(\pi X-2F\right)=\pi^{2}X^{T}X-4\pi X^{T}F+4F^{T}F,\qquad f_{12}^{T}f_{12}=\left\|\pi X-2F\right\|^{2}\le\delta_F,$$
where $\delta_F\ge 0$. So
$$\Delta F^{T}\Delta F=\left\|\Delta F\right\|^{2}\le 2\eta\bar\alpha_X+\delta_F.$$

Appendix C. Theorem 1 Proof

The derivative of $V_i$ is
$$\dot V_i=-E_i^{T}Q_iE_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right),\tag{C1}$$
where $R_iK_i+K_i^{T}R_i=-Q_i$, $Q_i=Q_i^{T}>0$. Transform (C1):
$$\dot V_i\le-\lambda_iV_i+k_iV_i+0.5\left\|\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right\|^{2},\tag{C2}$$
where $\lambda_i>0$ is the minimum eigenvalue of the matrix $Q_i$.
We apply the Cauchy-Bunyakovsky-Schwarz inequality to the last term in (C2):
$$0.5\left\|\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right\|^{2}\le 2\left(\|\Delta A_i\|^{2}\|X_i\|^{2}+\|\Delta B_i\|^{2}u_i^{2}+\sum_{j=1,\,j\ne i}^{m}\|\Delta\bar A_{ij}\|^{2}\|X_j\|^{2}+\left\|\Delta F_i\left(X_i\right)\right\|^{2}\right).\tag{C3}$$
Since the conditions of Theorem 1 are fulfilled,
$$\dot V_i\le-\bar\lambda_iV_i+2\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}\right)+\left\|\Delta F_i\left(X_i\right)\right\|^{2},\tag{C4}$$
where $\bar\lambda_i=\lambda_i-k_i$.
Apply Lemmas 1 and 2. Then
$$\Delta F_i^{T}\Delta F_i=\left\|\Delta F_i\right\|^{2}\le 2\eta\bar\alpha_{X_i}+\delta_i,\tag{C5}$$
where $\eta=2\bar\pi+\pi^{2}$, $\pi=\pi_1+\pi_2$, $\bar\pi=\pi_1\pi_2$, $\delta_i\ge 0$.
We get the estimate for (C4):
$$\dot V_i\le-\bar\lambda_iV_i+2\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}\right)+2\eta\bar\alpha_{X_i}+\delta_i.\tag{C6}$$
As follows from (C6), if the state variables have the CE property and
$$2\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}\right)+2\eta\bar\alpha_{X_i}+\delta_i\le\bar\lambda_iV_i,$$
then subsystem (1) is identifiable on the set $\mathbb I_{o,i}$, or $PI_{X_i}$-identifiable. ■

Appendix D. Theorem 3 Proof

Consider the LF $V_i=0.5E_i^{T}R_iE_i$ and $V_{\Delta,i}=\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Delta A_i(t)\right)$.
The derivative of $V_i$ has the form
$$\dot V_i=-E_i^{T}Q_iE_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right),$$
and after simple transformations (see Appendix E)
$$\dot V_i\le-\mu_iV_i+2\mu_i\kappa_iV_{\Delta,i}+2\mu_i\bar\upsilon_{\tilde F_i}.$$
For $\dot V_{\Delta,i}$, algorithm (27a) with $\omega_{e,\Delta}=1$ gives
$$\dot V_{\Delta,i}=-\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i(t)\right)-\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Gamma_{A_i}R_iE_iX_i^{T}\right)+\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i(t-\tau)\right).\tag{D1}$$
Let $E_i^{T}R_i^{2}E_i\le\bar\lambda_{R_i}E_i^{T}R_iE_i=2\bar\lambda_{R_i}V_i$,
$$\underline\lambda_{\Gamma_{\dot A_i}}V_{\Delta,i}\le\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i(t)\right)\le\bar\lambda_{\Gamma_{\dot A_i}}V_{\Delta,i}\quad\text{and}\quad \mathrm{Sp}\left(\Delta A_i^{T}(t)\Delta A_i(t-\tau)\right)\le 0.5\left(\|\Delta A_i\|^{2}+\left\|\Delta A_i(t-\tau)\right\|^{2}\right),$$
where $\underline\lambda_{\Gamma_{\dot A_i}}$ is the minimum eigenvalue of the matrix $\Gamma_{\dot A_i}$.
Then (D1) gives
$$\dot V_{\Delta,i}\le-\underline\lambda_{\Gamma_{\dot A_i}}V_{\Delta,i}-\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Gamma_{A_i}R_iE_iX_i^{T}\right)+0.5\left(\|\Delta A_i\|^{2}+\left\|\Delta A_i(t-\tau)\right\|^{2}\right).\tag{D2}$$
Apply condition 5) of Theorem 3 and get
$$\dot V_{\Delta,i}\le-\underline\lambda_{\Gamma_{\dot A_i}}V_{\Delta,i}+0.5\left\|\Delta A_i(t-\tau)\right\|^{2}+0.5\|\Delta A_i\|^{2}-\upsilon\left(\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{\dot A_i}^{-1}\Gamma_{A_i}\Delta A_i\right)+E_i^{T}R_i^{2}E_i\|X_i\|^{2}\right),\tag{D3}$$
or
$$\dot V_{\Delta,i}\le-\underline\lambda_{\Gamma_{\dot A_i}}V_{\Delta,i}+0.5\left\|\Delta A_i(t-\tau)\right\|^{2}+0.5\|\Delta A_i\|^{2}-\upsilon\sigma_\Delta\|\Delta A_i\|^{2}-\upsilon\sigma_eV_i,$$
where $\sigma_\Delta=\bar\lambda_{\Gamma_{\dot A_i}}\bar\lambda_{\Gamma_{A_i}}$, $\sigma_e=2\bar\alpha_{X_i}\bar\lambda_{R_i}$. After majorising the terms in $\|\Delta A_i\|^{2}$ and $V_i$, we obtain
$$\dot V_{\Delta,i}\le-\eta V_{\Delta,i}+\left\|\Delta A_i(t-\tau)\right\|^{2}+\frac{\upsilon^{2}\sigma_e}{3\sigma_\Delta}V_i,\tag{D4}$$
where $\eta=\underline\lambda_{\Gamma_{\dot A_i}}\left(1+\frac{3}{4}\upsilon\sigma_\Delta\right)$.
The system is stable if the functional limitation
$$\left\|\Delta A_i(t-\tau)\right\|^{2}\le\eta V_{\Delta,i}-\frac{\upsilon^{2}\sigma_e}{3\sigma_\Delta}V_i$$
is fulfilled. ■

Appendix E. Theorem 4 Proof

The derivative of $V_i=0.5E_i^{T}R_iE_i$ is
$$\dot V_i=-E_i^{T}Q_iE_i+E_i^{T}R_i\left(\Delta A_iX_i+\Delta B_iu_i+\sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j+\Delta F_i\left(X_i\right)\right).\tag{E1}$$
Apply the inequality
$$-az^{2}+bz\le-\frac{az^{2}}{2}+\frac{b^{2}}{2a},\qquad a>0,\ b\ge 0,\ z\ge 0.\tag{E2}$$
As
$$\Delta F_i=\tilde F_i^{T}\Delta N_{i,2}+\Delta\tilde F_i^{T}N_{i,2}=\tilde F_i^{T}\Delta N_{i,2}+D_{\Delta F_i},$$
applying the conditions of Theorem 4 we get
$$\dot V_i\le-\mu_iV_i+2\mu_i\left(\bar\alpha_{X_i}\|\Delta A_i\|^{2}+\bar\alpha_{u_i}\|\Delta B_i\|^{2}+\sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ij}\|^{2}+\left\|\tilde F_i^{T}\left(X_i,\hat N_{i,1}\right)\Delta N_{i,2}+D_{\Delta F_i}\right\|^{2}\right),\tag{E3}$$
where $E_i^{T}Q_iE_i\ge\mu_iE_i^{T}R_iE_i$, $\|\Delta A_i\|^{2}=\mathrm{Sp}\left(\Delta A_i^{T}\Delta A_i\right)$, and $\left\|D_{\Delta\tilde F_i}\right\|^{2}\le\bar\upsilon_{\Delta\tilde F_i}$ follows from the system construction, $\bar\upsilon_{\Delta\tilde F_i}\ge 0$. For $\|\Delta A_i\|^{2}$ we obtain
$$\|\Delta A_i\|^{2}=\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{A_i}^{-1}\Gamma_{A_i}\Delta A_i\right)\le\bar\lambda_{\Gamma_{A_i}}\mathrm{Sp}\left(\Delta A_i^{T}\Gamma_{A_i}^{-1}\Delta A_i\right),\tag{E4}$$
where $\bar\lambda_{\Gamma_{A_i}}$ is the maximum eigenvalue of the matrix $\Gamma_{A_i}$. Estimates for $\|\Delta B_i\|^{2}$, $\|\Delta\bar A_{ij}\|^{2}$ are obtained similarly. For $\tilde F_i^{T}\Delta N_{i,2}$ in (E3), we have
$$\left\|\tilde F_i^{T}\Delta N_{i,2}\right\|^{2}\le\left\|\tilde F_i\right\|^{2}\left\|\Delta N_{i,2}\right\|^{2}\le\vartheta_{\tilde F_i}\Delta N_{i,2}^{T}\Gamma_{i,2}^{-1}\Gamma_{i,2}\Delta N_{i,2}\le\vartheta_{\tilde F_i}\bar\lambda_{\Gamma_{i,2}}\Delta N_{i,2}^{T}\Gamma_{i,2}^{-1}\Delta N_{i,2},$$
where $\bar\lambda_{\Gamma_{i,2}}$ is the maximum eigenvalue of the matrix $\Gamma_{i,2}$, and $\tilde F_i$ is bounded by construction, i.e. $\left\|\tilde F_i\right\|^{2}\le\vartheta_{\tilde F_i}$. Therefore, (E3) gives
$$\dot V_i\le-\mu_iV_i+2\mu_i\kappa_iV_{\Delta,i}+2\mu_i\bar\upsilon_{\tilde F_i},\tag{E5}$$
where $\kappa_i=\max\left\{\bar\alpha_{X_i}\bar\lambda_{\Gamma_{A_i}},\ \bar\alpha_{u_i}\bar\lambda_{\Gamma_{B_i}},\ \bar\alpha_{X_j}\bar\lambda_{\Gamma_{\bar A_{ij}}},\ \bar\alpha_{\tilde F_i}\bar\lambda_{\Gamma_{N_{i,2}}}\right\}$.
V ˙ Δ , i haves the form
V ˙ Δ , i = Sp Δ A i T φ A i Δ A i + R i E i X i T j = 1 m Sp Δ A ¯ i j T φ A ˜ i j Δ A ˜ i j + R i E i X j T Δ B i T φ B i j Δ B i + R i E i u i Δ N i , 2 T φ F i Δ N i , 2 + F ˜ i R i E i = ω 1 , i + ω 2 , i . . (E6)
Consider the first components in (E6), i.e. $r_1 = \omega_{1,i}^{(1)} + \omega_{2,i}^{(1)}$:
$$r_1 = -\varphi_{A_i}\,\mathrm{Sp}\bigl(\Delta A_i^T\Delta A_i\bigr) - \mathrm{Sp}\bigl(\Delta A_i^T R_i E_i X_i^T\bigr) = -0.5\varphi_{A_i}\,\mathrm{Sp}\bigl(\Delta A_i^T\Delta A_i\bigr) - \Bigl(0.5\varphi_{A_i}\,\mathrm{Sp}\bigl(\Delta A_i^T\Delta A_i\bigr) + \mathrm{Sp}\bigl(\Delta A_i^T R_i E_i X_i^T\bigr) \pm \frac{1}{4\cdot 0.5\varphi_{A_i}}X_i^T X_i\,E_i^T R_i R_i E_i\Bigr) \le -0.5\varphi_{A_i}\,\mathrm{Sp}\bigl(\Delta A_i^T\Delta A_i\bigr) + \frac{1}{4\cdot 0.5\varphi_{A_i}}X_i^T X_i\,E_i^T R_i R_i E_i. \quad (E7)$$
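The completing-the-square step of type (E7) can be verified numerically: for any $\varphi > 0$, $-\varphi\,\mathrm{Sp}(\Delta A^T\Delta A) - \mathrm{Sp}(\Delta A^T R E X^T) \le -0.5\varphi\,\mathrm{Sp}(\Delta A^T\Delta A) + \tfrac{1}{2\varphi}\,(X^T X)(E^T R^T R E)$. A randomized sketch (illustrative only; the sampling is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    n = int(rng.integers(2, 6))
    phi = rng.uniform(0.1, 5.0)           # adaptation-gain scalar phi > 0
    dA = rng.standard_normal((n, n))      # parameter error Delta A
    R = rng.standard_normal((n, n))
    E = rng.standard_normal(n)            # identification error E
    X = rng.standard_normal(n)            # regressor X
    RE = R @ E
    r1 = -phi * np.trace(dA.T @ dA) - np.trace(dA.T @ np.outer(RE, X))
    bound = -0.5 * phi * np.trace(dA.T @ dA) + (X @ X) * (RE @ RE) / (2 * phi)
    ok &= r1 <= bound + 1e-9
print(ok)  # True
```

The cross term $\mathrm{Sp}(\Delta A^T R E X^T)$ has Frobenius bound $\|\Delta A\|\,\|R E\|\,\|X\|$, so Young's inequality (E2) absorbs it into the two quadratic terms.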
Using the transformations performed above for $\dot V_i$, we obtain
$$r_1 \le -0.5\varphi_{A_i}\,\mathrm{Sp}\bigl(\Delta A_i^T\Delta A_i\bigr) + \frac{1}{4\cdot 0.5\varphi_{A_i}}X_i^T X_i\,E_i^T R_i R_i E_i \le -0.5\underline\lambda\bigl(\Gamma_{A_i}\bigr)\underline\chi_{A_i}\,\mathrm{Sp}\bigl(\Delta A_i^T\Gamma_{A_i}^{-1}\Delta A_i\bigr) + \underline\chi_{A_i}^{-1}\bar\alpha_{X_i}\rho_i V_i, \quad (E8)$$
where $\underline\chi_{A_i} \le \varphi_{A_i}(t) \le \bar\chi_{A_i}$, $\|R_i\| \le \rho_i$, and hence $\varphi_{A_i}^{-1} \le \underline\chi_{A_i}^{-1}$.
Estimate $r_2 = \omega_{1,i}^{(2)} + \omega_{2,i}^{(2)}$ using the approach applied to $r_1$:
$$r_2 = -\sum_{j=1}^{m}\varphi_{\bar A_{ij}}\,\mathrm{Sp}\bigl(\Delta\bar A_{ij}^T\Delta\bar A_{ij}\bigr) - \sum_{j=1}^{m}\mathrm{Sp}\bigl(\Delta\bar A_{ij}^T R_i E_i X_j^T\bigr) \le -0.5\sum_{j=1}^{m}\varphi_{\bar A_{ij}}\,\mathrm{Sp}\bigl(\Delta\bar A_{ij}^T\Delta\bar A_{ij}\bigr) + 0.5\,E_i^T R_i R_i E_i\sum_{j=1}^{m}\varphi_{\bar A_{ij}}^{-1}X_j^T X_j. \quad (E9)$$
Let $\underline\chi_{\bar A_{ij}} \le \varphi_{\bar A_{ij}}(t) \le \bar\chi_{\bar A_{ij}}$. Then:
$$r_2 \le -0.5\sum_{j=1}^{m}\underline\lambda\bigl(\Gamma_{\bar A_{ij}}\bigr)\underline\chi_{\bar A_{ij}}\,\mathrm{Sp}\bigl(\Delta\bar A_{ij}^T\Gamma_{\bar A_{ij}}^{-1}\Delta\bar A_{ij}\bigr) + \rho_i V_i\sum_{j=1}^{m}\underline\chi_{\bar A_{ij}}^{-1}\bar\alpha_{X_{ij}}. \quad (E10)$$
The estimate for $r_3 = -\varphi_{B_i}\Delta B_i^T\Delta B_i - \Delta B_i^T R_i E_i u_i$ is
$$r_3 \le -0.5\underline\lambda\bigl(\Gamma_{B_i}\bigr)\underline\chi_{B_i}\,\Delta B_i^T\Gamma_{B_i}^{-1}\Delta B_i + \underline\chi_{B_i}^{-1}\bar\alpha_{u_i}\rho_i V_i, \quad (E11)$$
where $\underline\chi_{B_i} \le \varphi_{B_i}(t) \le \bar\chi_{B_i}$.
Consider $r_4 = -\varphi_{F_i}\Delta N_{i,2}^T\Delta N_{i,2} - \Delta N_{i,2}^T\tilde F_i R_i E_i - N_{i,2}^T\Delta\tilde F_i R_i E_i$:
$$r_4 \le -0.5\underline\lambda\bigl(\Gamma_{N_{i,2}}\bigr)\underline\chi_{F_i}\,\Delta N_{i,2}^T\Gamma_{N_{i,2}}^{-1}\Delta N_{i,2} + \underline\chi_{F_i}^{-1}\bar\alpha_{\tilde F_i}\rho_i V_i - N_{i,2}^T\Delta\tilde F_i R_i E_i \le -0.5\underline\lambda\bigl(\Gamma_{N_{i,2}}\bigr)\underline\chi_{F_i}\,\Delta N_{i,2}^T\Gamma_{N_{i,2}}^{-1}\Delta N_{i,2} + \underline\chi_{F_i}^{-1}\bar\alpha_{\tilde F_i}\rho_i V_i + 0.5 N_{i,2}^T\Delta\tilde F_i\Delta\tilde F_i^T N_{i,2} + 0.5 E_i^T R_i R_i E_i,$$
where $\underline\chi_{F_i} \le \varphi_{F_i}(t) \le \bar\chi_{F_i}$ and $\|\tilde F_i\|^2 \le \bar\alpha_{\tilde F_i}$. Considering the designations introduced above and
$$0.5 N_{i,2}^T\Delta\tilde F_i\Delta\tilde F_i^T N_{i,2} \le \delta_{N,i}, \qquad \delta_{N,i} \ge 0,$$
we obtain
$$r_4 \le -0.5\underline\lambda\bigl(\Gamma_{N_{i,2}}\bigr)\underline\chi_{F_i}\,\Delta N_{i,2}^T\Gamma_{N_{i,2}}^{-1}\Delta N_{i,2} + \bigl(\underline\chi_{F_i}^{-1}\bar\alpha_{\tilde F_i} + 1\bigr)\rho_i V_i + 0.5\delta_{N,i}. \quad (E12)$$
Let
$$\beta_{\lambda\chi,i} = \min\bigl\{\underline\lambda(\Gamma_{A_i})\underline\chi_{A_i},\ \underline\chi_{\bar A_{ij}}\underline\lambda(\Gamma_{\bar A_{ij}}),\ \underline\lambda(\Gamma_{B_i})\underline\chi_{B_i},\ \underline\lambda(\Gamma_{N_{i,2}})\underline\chi_{F_i}\bigr\},$$
$$\vartheta_{\chi\alpha,i} = \max\Bigl\{\underline\chi_{A_i}^{-1}\bar\alpha_{X_i},\ \sum_{j=1}^{m}\underline\chi_{\bar A_{ij}}^{-1}\bar\alpha_{X_{ij}},\ \underline\chi_{B_i}^{-1}\bar\alpha_{u_i},\ \underline\chi_{F_i}^{-1}\bar\alpha_{\tilde F_i} + 1\Bigr\}.$$
Then
$$\dot V_{\Delta,i} \le -\beta_{\lambda\chi,i}V_{\Delta,i} + \vartheta_{\chi\alpha,i}\rho_i V_i + 0.5\delta_{N,i}. \quad (E13)$$
We obtain a system of inequalities for $AS_{F,i}$ from (E5) and (E13):
$$\underbrace{\begin{bmatrix}\dot V_i \\ \dot V_{\Delta,i}\end{bmatrix}}_{\dot W_i} \le \underbrace{\begin{bmatrix}-\mu_i & \dfrac{2}{\mu_i}\kappa_i \\ \vartheta_{\chi\alpha,i}\rho_i & -\beta_{\lambda\chi,i}\end{bmatrix}}_{A_{W_i}}\underbrace{\begin{bmatrix}V_i \\ V_{\Delta,i}\end{bmatrix}}_{W_i} + \underbrace{\begin{bmatrix}\dfrac{2}{\mu_i}\bar\upsilon_{\tilde F_i} \\ 0.5\delta_{N,i}\end{bmatrix}}_{L_i}. \quad (E14)$$
The $AS_{F,i}$-subsystem is asymptotically stable if $(-1)^q D_m^{(q)}\bigl(A_{W_i}\bigr) > 0$, where $D_m^{(q)}$ is the $q$-th leading minor of the matrix $A_{W_i}$. From these conditions, we obtain the exponential dissipativity condition: $\mu_i^2\beta_{\lambda\chi,i} > 2\kappa_i\vartheta_{\chi\alpha,i}\rho_i$.
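Assuming $A_{W_i}$ has the $2\times2$ form of (E14) with entries $-\mu$, $\tfrac{2}{\mu}\kappa$, $\vartheta\rho$, $-\beta$ (a reconstruction of the garbled matrix, not the authors' code), the minor condition is equivalent to $\mu^2\beta > 2\kappa\vartheta\rho$, since the trace is always negative and the determinant is $\mu\beta - \tfrac{2}{\mu}\kappa\vartheta\rho$. A randomized cross-check of this equivalence:

```python
import numpy as np

rng = np.random.default_rng(2)
agree = True
for _ in range(10_000):
    mu, beta, kappa, theta, rho = rng.uniform(0.05, 3.0, size=5)
    # comparison matrix A_W with positive off-diagonal entries (Metzler)
    A_W = np.array([[-mu, 2.0 / mu * kappa],
                    [theta * rho, -beta]])
    hurwitz = bool(np.all(np.linalg.eigvals(A_W).real < 0))
    condition = mu**2 * beta > 2.0 * kappa * theta * rho
    agree &= (hurwitz == condition)
print(agree)  # True
```

Equality cases ($\det A_W = 0$) are measure-zero under this sampling, so the two criteria coincide on every draw.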
The upper solution for $W_i$ satisfies the comparison system
$$\dot S_{W_i} = A_{W_i}S_{W_i} + L_i, \quad (E15)$$
if $w_\rho(t) \le s_\rho(t)\ \forall t \ge t_0$ and $w_\rho(t_0) \le s_\rho(t_0)$, where $w_\rho \in W_i$, $s_\rho \in S_{W_i}$, $\rho \in \{(e,i);\,(\Delta,i)\}$. Then the $AS_{F,i}$-system is exponentially dissipative with the estimate
$$W_i(t) \le e^{A_{W_i}(t-t_0)}S_{W_i}(t_0) + \int_{t_0}^{t}e^{A_{W_i}(t-\tau)}L_i\,d\tau. \;\blacksquare \quad (E16)$$
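The comparison-system estimate of type (E16) can be illustrated numerically. The sketch below (sample gains and the $2\times2$ Metzler form of $A_{W_i}$ are assumptions, not the paper's simulation) integrates a trajectory that satisfies the differential inequality with a nonnegative slack term, and checks it componentwise against the closed-form bound $e^{A t}S_0 + A^{-1}(e^{A t}-I)L$:

```python
import numpy as np

# sample gains; dissipativity condition mu^2*beta > 2*kappa*theta*rho holds
mu, beta, kappa, theta, rho = 1.0, 2.0, 0.2, 0.3, 0.5
A_W = np.array([[-mu, 2 / mu * kappa], [theta * rho, -beta]])
L = np.array([0.1, 0.05])

vals, vecs = np.linalg.eig(A_W)          # A_W is Hurwitz, diagonalizable here
vecs_inv = np.linalg.inv(vecs)
def expm(t):                              # matrix exponential e^{A_W t}
    return np.real(vecs * np.exp(vals * t) @ vecs_inv)

dt, steps = 1e-3, 10_000
W = np.array([0.5, 0.4])                  # Lyapunov pair obeying W' <= A_W W + L
S0 = W.copy()                             # comparison system starts at same point
A_inv = np.linalg.inv(A_W)
ok = True
for k in range(1, steps + 1):
    W = W + dt * (A_W @ W + L - 0.02 * W)   # Euler step; 0.02*W >= 0 is slack
    t = k * dt
    # closed-form bound (E16): integral of e^{A(t-s)} L equals A^{-1}(e^{At}-I)L
    S = expm(t) @ S0 + A_inv @ (expm(t) - np.eye(2)) @ L
    ok &= bool(np.all(W <= S + 1e-6))
print(ok)  # True
```

Because $A_{W_i}$ is Metzler (nonnegative off-diagonal entries), the componentwise comparison principle applies, which is why the trajectory with slack stays below the comparison solution.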

Appendix F. Theorem 5 Proof

Consider $\dot V_i$:
$$\dot V_i = -E_i^T Q_i E_i + E_i^T R_i\Bigl(\Delta A_i X_i + \Delta B_i u_i + \sum_{j=1,\,j\ne i}^{m}\Delta\bar A_{ij}X_j + \Delta U_i F_i\Bigr). \quad (F1)$$
Apply the approach from Appendix D and get
$$\dot V_i \le -\mu_i V_i + \frac{2}{\mu_i}\Bigl(\|\Delta A_i X_i\|^2 + \|\Delta B_i u_i\|^2 + \sum_{j=1,\,j\ne i}^{m}\|\Delta\bar A_{ij}X_j\|^2 + \|\Delta U_i F_i\|^2\Bigr), \quad (F2)$$
where $\|\Delta A_i\|^2 = \mathrm{Sp}\bigl(\Delta A_i^T\Delta A_i\bigr)$ and $\|\Delta\bar A_{ij}\|^2 = \mathrm{Sp}\bigl(\Delta\bar A_{ij}^T\Delta\bar A_{ij}\bigr)$.
According to Section 4, $U_i$ has the form (19), where $D_i \in \mathbb{R}^{n_i\times n_i}$ is a diagonal matrix with positive elements. We select the matrix $D_i$ from the condition $\|D_i\| \le d_i \le \eta\bar\alpha_{X_i}$, and $F_i(X_i) \in \mathcal{N}_F(\pi_1,\pi_2)$ satisfies the conditions of Lemmas 1 and 2. Given the choice of $U_i$, we get $\|\Delta U_i F_i\|^2 \le \sigma_{F_i}$, where $\sigma_{F_i} \ge 0$. Applying the mean value theorem,
$$\overline{\|\Delta U_i F_i\|^2} = \tau^{-1}\int_t^{t+\tau}\|\Delta U_i F_i(s)\|^2\,ds \le \tau^{-1}I_{\Delta F_i},$$
where $\tau > 0$ and $I_{\Delta F_i} \le \varphi$, $\varphi > 0$. Let $\tau^{-1}\varphi \le \phi$. Then the estimate for $\dot V_i$ is:
$$\dot V_i \le -\mu_i V_i + \frac{2}{\mu_i}\Bigl(\|\Delta A_i X_i\|^2 + \|\Delta B_i u_i\|^2 + \sum_{j=1,\,j\ne i}^{m}\|\Delta\bar A_{ij}X_j\|^2 + \tau^{-1}I_{\Delta F_i}\Bigr). \quad (F3)$$
Using the proof scheme from Appendix E, we get:
$$\bar\alpha_{X_i}\|\Delta A_i\|^2 + \bar\alpha_{u_i}\|\Delta B_i\|^2 + \sum_{j=1,\,j\ne i}^{m}\bar\alpha_{X_j}\|\Delta\bar A_{ji}\|^2 + \tau^{-1}I_{\Delta F_i} \le \kappa_{\Delta S,i}V_{\Delta S,i}, \quad (F4)$$
where $\kappa_{\Delta S,i} > 0$, and
$$\dot V_i \le -\mu_i V_i + \frac{2}{\mu_i}\kappa_{\Delta S,i}V_{\Delta S,i}. \quad (F5)$$
Consider
$$\dot V_{\Delta S,i} = -\mathrm{Sp}\bigl(\Delta A_i^T(\varphi_{A_i}\Delta A_i + R_i E_i X_i^T)\bigr) - \sum_{j=1}^{m}\mathrm{Sp}\bigl(\Delta\bar A_{ij}^T(\varphi_{\bar A_{ij}}\Delta\bar A_{ij} + R_i E_i X_j^T)\bigr) - \Delta B_i^T\bigl(\varphi_{B_i}\Delta B_i + R_i E_i u_i\bigr) + \bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i. \quad (F6)$$
Lemma 3. The estimate
$$\bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i \le \bigl(\pi_i + 0.25 v_i\bigr)\|\Delta U_i F_i\|^2 + 2\pi_i\rho_i d_i V_i + 0.125 v_i\eta_i\bar\alpha_{X_i}$$
is valid for $\bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i$.
Proof of Lemma 3. $\bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i$ has the form
$$\bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i = -E_i^T R_i D_i\Delta U_i F_i - F_i^T\Delta U_i F_i.$$
Estimate $E_i^T R_i D_i\Delta U_i F_i$. Let the equality
$$E_i^T R_i D_i\Delta U_i F_i = \pi_i\bigl(\|E_i^T R_i D_i\|^2 + \|\Delta U_i F_i\|^2\bigr)$$
hold in the domain $O_\nu(O)$, where $\pi_i > 0$, $O = \{0,\,0_{n_i\times n_i}\}$, $0_{n_i\times n_i} \in \mathbb{R}^{n_i\times n_i}$ is the zero matrix, $O_\nu$ is some neighbourhood of the point $O$, and $t \in [0,\infty) = J_{0,\infty}$.
Then, completing the square,
$$\pi_i\bigl(\|E_i^T R_i D_i\|^2 + \|\Delta U_i F_i\|^2\bigr) = 0.5\pi_i\|\Delta U_i F_i\|^2 + \pi_i\bigl(0.5\|\Delta U_i F_i\|^2 + \|E_i^T R_i D_i\|^2\bigr). \quad (F7)$$
Applying the inequality (E2) to (F7) gives
$$E_i^T R_i D_i\Delta U_i F_i \le \pi_i\|\Delta U_i F_i\|^2 + \pi_i\|E_i^T R_i D_i\|^2.$$
Since $\|D_i\| \le d_i$ and $\|R_i\| \le \rho_i$,
$$E_i^T R_i D_i\Delta U_i F_i \le \pi_i\|\Delta U_i F_i\|^2 + 2\pi_i\rho_i d_i V_i. \quad (F8)$$
As $\|\Delta U_i F_i\|^2 \le \tau^{-1}I_{\Delta F_i}$, (F8) yields
$$E_i^T R_i D_i\Delta U_i F_i \le \pi_i\tau^{-1}I_{\Delta F_i} + 2\pi_i\rho_i d_i V_i. \quad (F9)$$
Estimate $F_i^T\Delta U_i F_i$. Let $\nu_i > 0$ and $v_i > 0$ exist such that
$$F_i^T\Delta U_i F_i = -\nu_i\|F_i\|^2 - v_i\|\Delta U_i F_i\|^2$$
is fulfilled for $t \gg t_0$. Then, completing the square,
$$F_i^T\Delta U_i F_i = -0.5 v_i\|\Delta U_i F_i\|^2 - \bigl(0.5 v_i\|\Delta U_i F_i\|^2 + \nu_i\|F_i\|^2\bigr) \le 0.25 v_i\|\Delta U_i F_i\|^2 + 0.125 v_i\|F_i\|^2. \quad (F10)$$
Combining (F9) and (F10), we get the desired estimate for $\bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i$:
$$\bigl(\Delta U_i F_i\bigr)^T\Delta U_i F_i \le \bigl(\pi_i + 0.25 v_i\bigr)\|\Delta U_i F_i\|^2 + 2\pi_i\rho_i d_i V_i + 0.125 v_i\eta_i\bar\alpha_{X_i},$$
where $\|F_i\|^2 \le \eta_i\bar\alpha_{X_i}$. ■
Use the estimates obtained in Appendix E. Introduce the notations
$$\beta_{\Delta S,i} = \min\bigl\{\underline\lambda(\Gamma_{A_i})\underline\chi_{A_i},\ \underline\chi_{\bar A_{ij}}\underline\lambda(\Gamma_{\bar A_{ij}}),\ \underline\lambda(\Gamma_{B_i})\underline\chi_{B_i},\ (\pi_i + 0.25 v_i)\tau^{-1}\bigr\},$$
$$\vartheta_{\Delta S,i} = \max\Bigl\{\underline\chi_{A_i}^{-1}\bar\alpha_{X_i},\ \sum_{j=1}^{m}\underline\chi_{\bar A_{ij}}^{-1}\bar\alpha_{X_{ij}},\ \underline\chi_{B_i}^{-1}\bar\alpha_{u_i},\ 2\pi_i d_i\Bigr\},$$
and apply Lemma 3. Then (F6) gives
$$\dot V_{\Delta S,i} \le -\beta_{\Delta S,i}V_{\Delta S,i} + \vartheta_{\Delta S,i}\rho_i V_i + 0.125 v_i\eta_i\bar\alpha_{X_i}, \quad (F11)$$
where $v_i \ge 0$ is defined by the representation $F_i^T\Delta U_i F_i = -\nu_i\|F_i\|^2 - v_i\|\Delta U_i F_i\|^2$.
We obtain the system of inequalities for $AS_{AS_i}$:
$$\underbrace{\begin{bmatrix}\dot V_i \\ \dot V_{\Delta S,i}\end{bmatrix}}_{\dot W_{S_i}} \le \underbrace{\begin{bmatrix}-\mu_i & \dfrac{2}{\mu_i}\kappa_{\Delta S,i} \\ \vartheta_{\Delta S,i}\rho_i & -\beta_{\Delta S,i}\end{bmatrix}}_{A_{W_{S_i}}}\underbrace{\begin{bmatrix}V_i \\ V_{\Delta S,i}\end{bmatrix}}_{W_{S_i}} + \underbrace{\begin{bmatrix}0 \\ 0.125 v_i\eta_i\bar\alpha_{X_i}\end{bmatrix}}_{L_{S,i}}. \quad (F12)$$
The exponential dissipativity condition for $AS_{AS_i}$ is
$$\mu_i^2\beta_{\Delta S,i} > 2\vartheta_{\Delta S,i}\rho_i\kappa_{\Delta S,i}. \quad (F13)$$
If we introduce a comparison system (see Appendix E), then we get the estimate for $AS_{AS_i}$:
$$W_{S_i}(t) \le e^{A_{W_{S_i}}(t-t_0)}S_{W_{S_i}}(t_0) + \int_{t_0}^{t}e^{A_{W_{S_i}}(t-\sigma)}L_{S,i}\,d\sigma. \;\blacksquare$$

References

  1. Hua, C.; Guan, X.; Shi, P. Decentralised robust model reference adaptive control for interconnected time-delay systems. Proceedings of the 2004 American Control Conference, Boston, Massachusetts, 30 June – 2 July 2004.
  2. Yang, Q.; Zhu, M.; Jiang, T.; He, J.; Yuan, J.; Han, J. Decentralised robust adaptive output feedback stabilization for interconnected nonlinear systems with uncertainties. Journal of Control Science and Engineering, article ID 3656578.
  3. Wu, H. Decentralised adaptive robust control of uncertain large-scale non-linear dynamical systems with time-varying delays. IET Control Theory and Applications 2012, 6, 629–640.
  4. Fan, H.; Han, L.; Wen, C.; Xu, L. Decentralised adaptive output-feedback controller design for stochastic nonlinear interconnected systems. Automatica 2012, 48, 2866–2873.
  5. Bronnikov, A.M.; Bukov, V.N. Decentralised adaptive control with identification and model coordination in interconnected systems. 2010. https://www.researchgate.net/publication/273038561_Decentralizovannoe_adaptivnoe_upravlenie_s_identifikaciej_i_modelnoj_koordinaciej_v_mnogosvaznyh_sistemah.
  6. Benitez, V.H.; Sanchez, E.N.; Loukianov, A.G. Decentralised adaptive recurrent neural control structure. Engineering Applications of Artificial Intelligence 2007, 20, 1125–1132.
  7. Lamara, A.; Colin, G.; Lanusse, P.; Chamaillard, Y.; Charlet, A. Decentralised robust control-system for a non-square MIMO system, the air-path of a turbocharged Diesel engine. 2012 IFAC Workshop on Engine and Powertrain Control, Simulation and Modelling, Rueil-Malmaison, France, 23–25 October 2012.
  8. Nguyen, T.; Mukhopadhyay, S. Identification and optimal control of large-scale systems using selective decentralization. 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 2016.
  9. Gudi, R.D.; Rawlings, J.B.; Venkat, A.; Jabbar, N. Identification for decentralised MPC. IFAC Proceedings Volumes 2004, 37, 299–304.
  10. Cho, S.; Park, J.-W.; Sim, S.-H. Decentralised system identification using stochastic subspace identification for wireless sensor networks. Sensors 2015, 15, 8131–8145.
  11. Mao, X.; He, J. Decentralised system identification method for large-scale networks. 2022 American Control Conference (ACC), Atlanta, GA, USA, 2022, 5173–5178.
  12. Mei, H.; Cai, W.-J.; Xiong, Q. Decentralised closed-loop parameter identification for multivariable processes from step responses. Mathematics and Computers in Simulation 2005.
  13. Ioannou, P.A. Decentralised adaptive control of interconnected systems. IEEE Transactions on Automatic Control 1986, 31, 291–298.
  14. Jiang, Z.-P. Decentralised and adaptive nonlinear tracking of large-scale systems via output feedback. IEEE Transactions on Automatic Control 2000, 45, 2122–2128.
  15. Li, X.-J.; Yan, G.-H. Adaptive decentralised control for a class of interconnected nonlinear systems via backstepping approach and graph theory. Automatica 2017, 76, 87–95.
  16. Karabutov, N.N. Introduction to Structural Identifiability of Nonlinear Systems. Moscow: URSS/Lenand, 2021.
  17. Karabutov, N. Structural-parametrical design method of adaptive observers for nonlinear systems. I.J. Intelligent Systems and Applications 2018, 2, 1–16.
  18. Gromyko, V.D.; Sankovsky, E.A. Self-Adjusting Systems with a Model. Moscow: Energiya, 1974.
  19. Podval'ny, S.L.; Vasil'ev, E.M. Multi-level signal adaptation in non-stationary control systems. Bulletin of Voronezh State Technical University 2022, 18, 38–45.
  20. Karabutov, N.N. On adaptive identification of systems having multiple nonlinearities. Russ. Technol. J. 2023, 11, 94–105.
Figure 1. Phase portrait of subsystem $S_1$.
Figure 2. Tuning model (31) parameters.
Figure 3. Adequacy of models (38), (39).
Figure 4. Tuning model (39) parameters.
Figure 5. ASI phase portrait in error space.
Figure 6. Tuning model (38) parameters.
Figure 7. Adequacy of models (38), (39a).
Figure 8. Phase portraits of the ASI and the output of the signal adaptation.
Figure 9. Tuning model (39a) parameters.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.