Preprint Article (not peer-reviewed)

Synthesis of Adaptive Algorithms with Given Quality

Submitted: 29 April 2025. Posted: 07 May 2025.

Abstract
Integral adaptive algorithms (IAA) are widely used in control systems. Some authors apply proportional-integral algorithms obtained heuristically from features of the particular system under consideration. Most IAA are derived using Lyapunov's second method, and modifications of numerical procedures serve as tuning laws for the control-law parameters. In this paper, we substantiate several well-known algorithms and procedures by taking the requirements for the adaptive identification system into account. The requirements are stated as functional constraints and are incorporated during synthesis. A representation of the adaptive algorithm as a system in the state space is obtained. Classes of potential algorithms (PA) are considered; a PA has a general form and requires adaptation at the implementation stage. Variants of PA are presented. The boundedness of trajectories in the adaptive system and their exponential stability are studied. Simulation results are provided.

1. Introduction

The integral adaptive algorithm (IAA) is widely used in control systems:

$$\dot{\hat A} = -\Gamma E^T R X, \qquad (1)$$

where $\hat A \in \mathbb{R}^n$ is the vector of tuned parameters; $E \in \mathbb{R}^q$ is the error in estimating the system output; $X \in \mathbb{R}^n$ is the vector of observed variables (input, control); $\Gamma \in \mathbb{R}^{n \times n}$ is a positive definite gain matrix; $R = R^T > 0$ is the matrix of the loss function (optimisation criterion).
Equation (1) describes the IAA class, which is determined by minimising a quadratic loss function, adopting heuristic assumptions, or ensuring the stability of the identification system (see, for example, [1,2,3,4,5,6,7,8,9,10]).
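A minimal scalar sketch may help to see how (1) operates in closed loop. Assuming a first-order plant $\dot x = a x + u$ with unknown $a$, a series-parallel model, and scalar gains (all values below are illustrative choices, not taken from the cited works):

```python
import math

# Scalar instance of IAA (1): plant dx/dt = a*x + u with unknown a,
# series-parallel model dxh/dt = k*(xh - x) + ah*x + u, error e = xh - x,
# and gradient adaptation dah/dt = -g*e*x (here Gamma = g, R = 1).
a, k, g = -2.0, -3.0, 5.0          # true parameter, Hurwitz gain, adaptation gain
x, xh, ah = 0.0, 0.0, 0.0          # plant state, model state, estimate
dt, T, t = 1e-3, 300.0, 0.0
while t < T:
    u = math.sin(t)                # persistently exciting input
    e = xh - x
    dx  = a * x + u
    dxh = k * e + ah * x + u
    dah = -g * e * x               # IAA: gradient step along -e*x
    x  += dt * dx
    xh += dt * dxh
    ah += dt * dah
    t  += dt
```

With the persistently exciting input $\sin t$, the estimate drifts to the true value; the product $-e\,x$ is the scalar content of $-\Gamma E^T R X$.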
In [6], the system

$$X^{(n)} = F\big(X^{(n-1)}\big) + \Delta F\big(X^{(n-1)}\big) + H^T\big(X^{(n-1)}\big) A + L\big(X^{(n-1)}\big) U \qquad (2)$$

is considered, where $X \in \mathbb{R}^m$ and $U \in \mathbb{R}^m$ are the state and control vectors; $F\big(X^{(n-1)}\big) \in \mathbb{R}^r$ is a continuous vector function; $H\big(X^{(n-1)}\big) \in \mathbb{R}^{m \times r}$ and $L\big(X^{(n-1)}\big) \in \mathbb{R}^{r \times r}$ are continuous matrix functions; $\Delta F\big(X^{(n-1)}\big) \in \mathbb{R}^r$ is an uncertain function; $A(t) \in \mathbb{R}^m$ is an unknown parameter vector. A preliminary estimate $A_0 = A_0(t) \in \mathbb{R}^m$ of $A$ is known.
A robust adaptive algorithm (AA) is proposed in [6]:

$$\dot{\hat A} = H P_{L^T}(A) X - \mu^{-1}\big(\hat A - A_0\big) + \dot A_0, \qquad U_a = -H^T(X)\hat A, \qquad (3)$$

where $\hat A$ is the estimate of the vector $A$, $U_a$ is the control, $\mu > 0$, and $P_{L^T}(A)$ is a matrix that determines the quality of control.
Implementing algorithm (3) under uncertainty is difficult, since the current quality of adaptation is not taken into account. A modification of (3) and its simplifications are proposed in [6].
In [10], IAA (1) and its proportional $\sigma$-modification [11] were considered:

$$\dot{\hat A}_i = -\Gamma E_i^T R_i X_i - \sigma \hat A_i, \qquad (4)$$

where $\sigma > 0$ guarantees damping of the tuning process.
A generalization of the $\sigma$-modification for the case when the constant excitation condition is not fulfilled is given in [12]. In [12], the design of the AA is based on requirements imposed on the derivative of a Lyapunov function (LF).
An adaptive law for a first-order system is proposed in the form

$$\dot{\hat A}_1 = -e_1 X_1 - \gamma |e_1| \hat A_1, \quad \gamma > 0, \qquad (5)$$

where $\dot e_1 = -k_m e_1 + \Delta A_1^T X_1$, $e_1$ is the error in predicting the system output, $X_1 \in \mathbb{R}^m$ is the generalised system input, and $k_m > 0$. This AA is introduced intuitively, without a formalised synthesis method.
Projection variants of IAA are proposed in [13]. Modifications of IAA are considered in [15,16,17,18]. A normalized version of algorithm (1) (a projection algorithm) is proposed in [16]:

$$\dot{\hat A} = -\frac{\Gamma E^T R X}{1 + \|X\|^2}, \qquad (6)$$

where $\|X\|^2 = X^T X$. Various projection AA are considered in [19].
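The effect of the normalisation in (6) can be seen in a scalar sketch: dividing by $1 + x^2$ keeps the parameter update bounded even when the regressor is large. All values below are illustrative.

```python
import math

# Scalar version of the normalized algorithm (6):
#   dah/dt = -g*e*x / (1 + x*x)
# Even with a large input amplitude the per-step parameter update
# stays bounded. Values are illustrative.
a, k, g = -2.0, -3.0, 5.0
x, xh, ah = 0.0, 0.0, 0.0
dt, T, t = 1e-3, 300.0, 0.0
max_update = 0.0
while t < T:
    u = 50.0 * math.sin(t)             # large-amplitude excitation
    e = xh - x
    dah = -g * e * x / (1.0 + x * x)   # normalization bounds the step
    max_update = max(max_update, abs(dah))
    x  += dt * (a * x + u)
    xh += dt * (k * e + ah * x + u)
    ah += dt * dah
    t  += dt
```

Without the denominator, the update magnitude would scale with $x^2$; with it, the estimate still converges while the step size remains moderate.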
Independently of [12], an analytical method for the synthesis of adaptive algorithms that takes functional constraints into account is proposed in [20].
Remark 1. Sometimes signal adaptation (SA) algorithms [21] are used. As shown in [20], the use of SA in adaptive identification systems is not always effective.
The analysis shows that IAA are mainly obtained from the least-squares method and its modifications, or from an LF. Gradient algorithms based on numerical optimisation methods are often implemented, and the AA parameters are selected to ensure the stability of the adaptive process.
In this paper, we generalise the approach proposed in [20] to the formalised synthesis of AA. The properties of the adaptive identification system (AIS) are investigated using the example of a decentralised system.

2. Problem Statement

Consider the AIS

$$S_X:\quad \dot X = A X + B u, \qquad y = C^T X,$$

where $X \in \mathbb{R}^n$ is the state vector, $A \in \mathbb{R}^{n \times n}$ is a matrix with constant elements, $B \in \mathbb{R}^n$, $C \in \mathbb{R}^n$, and $y$ is the system output.
The set of experimental data is

$$I_o = \big\{ y(t),\ u(t),\ t \in J = [t_0, t_k] \big\}.$$
The model for estimating the elements of $(A, B)$ is

$$S_A:\quad \dot{\hat X} = K\big(\hat X - X\big) + \hat A X + \hat B u, \qquad \hat y = C^T \hat X, \qquad (7)$$

where $K \in \mathbb{R}^{n \times n}$ is a Hurwitz matrix, $\hat X \in \mathbb{R}^n$ is the model state vector, and $\hat A \in \mathbb{R}^{n \times n}$, $\hat B \in \mathbb{R}^n$ are the tuned matrices.
Problem: find tuning laws for $\hat A$ and $\hat B$ such that

$$\lim_{t \to \infty} |e(t)| = \lim_{t \to \infty} \big|\hat y(t) - y(t)\big| \le \delta_e, \qquad \delta_e \ge 0.$$

3. The Constant Excitation Condition

Consider the requirements for $X$, $u$. The estimation of the system parameters by (1) depends on the identifiability of the $S_X$-system. This property is guaranteed if the constant excitation (CE) condition is satisfied on $I_o$.
Let $u(t)$ satisfy the CE condition

$$u(t) \in \mathrm{CE}_{\underline\alpha_u, \bar\alpha_u}:\quad \underline\alpha_u \le u^2(t) \le \bar\alpha_u \quad \forall t \in [t_0, t_0 + T], \qquad (8)$$

where $\underline\alpha_u > 0$, $\bar\alpha_u > 0$, $T > 0$. If $u(t)$ does not have the CE property, we write $u(t) \notin \mathrm{CE}_{\underline\alpha_u, \bar\alpha_u}$ or $u(t) \notin \mathrm{CE}$.
Remark 2. If a nonlinear system is considered, then condition (8) is transformed [22].
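Condition (8) can be verified numerically on recorded data. The sketch below checks the windowed (averaged) form of the excitation condition over every window of length $T$, which is the form commonly checked in practice; the thresholds and test signals are illustrative choices.

```python
import math

def is_ce(u, dt, T, a_lo, a_hi):
    """Check a windowed excitation condition: for every window of
    length T, a_lo <= (1/T) * integral of u^2 <= a_hi."""
    w = int(round(T / dt))              # samples per window
    if w <= 0 or w > len(u):
        return False
    s = [x * x for x in u]
    acc = sum(s[:w])                    # running window sum of u^2
    if not (a_lo <= acc * dt / T <= a_hi):
        return False
    for i in range(w, len(s)):
        acc += s[i] - s[i - w]          # slide the window by one sample
        if not (a_lo <= acc * dt / T <= a_hi):
            return False
    return True

dt = 0.01
ts = [i * dt for i in range(4000)]               # t in [0, 40)
sine  = [math.sin(t) for t in ts]                # persistent excitation
decay = [math.exp(-t) for t in ts]               # excitation dies out
print(is_ce(sine, dt, 2 * math.pi, 0.3, 1.0))    # True
print(is_ce(decay, dt, 2 * math.pi, 0.3, 1.0))   # False
```

A sinusoid keeps its average power ($\approx 0.5$) in every window, while a decaying signal eventually violates the lower bound, so it is not CE.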

4. AA Synthesis Based on LF

The error equation for the system $S_X$ and model (7) is

$$\dot E = K E + \Delta A\, X + \Delta B\, u, \qquad (9)$$

where $E = \hat X - X$; $\Delta A = \hat A - A$ and $\Delta B = \hat B - B$ are the parametric residuals.
Let a functional constraint $\chi(E, \Delta A, \Delta B) \ge 0$ be imposed on the system (9), where $\chi(\cdot)$ is a continuously differentiable function reflecting the quality of the tuning process. The task reduces to meeting the target condition $\lim_{t\to\infty} |e(t)| = \lim_{t\to\infty} |\hat y(t) - y(t)| \le \delta_e$, $\delta_e \ge 0$.
Introduce the LF

$$V(E, \Delta A, \Delta B, t) = 0.5\, E^T P E + 0.5\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + 0.5\, \Delta B^T \Gamma_B^{-1} \Delta B, \qquad (10)$$

where $P = P^T > 0$ is a symmetric positive definite matrix, $\mathrm{tr}$ is the trace of a matrix, and $\Gamma_A = \Gamma_A^T > 0$, $\Gamma_B = \Gamma_B^T > 0$ are diagonal matrices.
The derivative of $V$ is

$$\dot V = -E^T Q E + \mathrm{tr}\big[\big(P E X^T + \Gamma_A^{-1}\Delta\dot A\big)\Delta A^T\big] + \big(E^T P u + \Delta\dot B^T \Gamma_B^{-1}\big)\Delta B, \qquad (11)$$

where $K^T P + P K = -Q$ and $Q = Q^T > 0$ is a symmetric positive definite matrix. From $\dot V \le 0$ we obtain the AA

$$\Delta\dot A = -\Gamma_A P E X^T, \qquad \Delta\dot B = -\Gamma_B P E u. \qquad (12)$$
Thus, the IAA is synthesized from the stability condition of the system (9). The AIS is stable in the space $(t, E)$. This is the classical AA synthesis scheme.
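The classical scheme (7), (9), (12) can be sketched for a two-state system. Taking $K = -2I$ lets $P = I$ satisfy $K^T P + P K = -Q$ with $Q = 4I$; the matrices, gains, and input below are illustrative choices.

```python
import math

# Lyapunov-based identification (7), (9), (12) for a 2-state system,
# with K = -2*I and P = I (so K^T P + P K = -Q, Q = 4*I).
# Plain nested lists; all values are illustrative.
A  = [[0.0, 1.0], [-2.0, -3.0]]
B  = [0.0, 1.0]
Ah = [[0.0, 0.0], [0.0, 0.0]]      # estimate of A
Bh = [0.0, 0.0]                    # estimate of B
X  = [0.0, 0.0]
Xh = [0.0, 0.0]
k, gA, gB = -2.0, 10.0, 10.0
dt, T, t = 1e-3, 100.0, 0.0
while t < T:
    u = math.sin(t) + 0.5 * math.sin(2.3 * t)    # sufficiently rich input
    E = [Xh[i] - X[i] for i in range(2)]
    dX  = [sum(A[i][j] * X[j] for j in range(2)) + B[i] * u for i in range(2)]
    dXh = [k * E[i] + sum(Ah[i][j] * X[j] for j in range(2)) + Bh[i] * u
           for i in range(2)]
    for i in range(2):
        for j in range(2):
            Ah[i][j] += dt * (-gA * E[i] * X[j])   # (12): dAh = -G_A P E X^T
        Bh[i] += dt * (-gB * E[i] * u)             # (12): dBh = -G_B P E u
        X[i]  += dt * dX[i]
        Xh[i] += dt * dXh[i]
    t += dt
err = math.sqrt(sum(v * v for v in (Xh[i] - X[i] for i in range(2))))
```

The output error converges regardless of excitation richness; parameter convergence additionally requires the CE condition of Section 3.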
Remark 3. In [18], a quadratic condition is imposed on $\dot V$ to develop a control algorithm. Functional restrictions (FR) for obtaining AA are not considered there.

5. FR and AA Structure

Consider the LF $V(E, \Delta A, \Delta B, t)$ and apply the approach of [20]. Let an FR be imposed on the AIS:

$$\dot V \le -\chi(E, \Delta A, \Delta B), \qquad (13)$$

where $\chi(E, \Delta A, \Delta B)$ is a quality function of the processes in the adaptive system.
We describe the approach to AA synthesis using the example of identifying the matrix $A$ of the $S_X$-system. Let $V = V_\Delta(E, \Delta A, t)$. Consider examples of the function $\chi(E, \Delta A)$.
1. $\chi = \chi_\Delta = \alpha_\Delta\,\mathrm{tr}\big(\Delta A^T \Delta A\big)$, where $\alpha_\Delta > 0$. Let $\eta_\Delta = \dot V_\Delta + \chi_\Delta$. Then

$$\eta_\Delta = -E^T Q E + E^T R \Delta A X + \mathrm{tr}\big(\Delta\dot A^T \Delta\dot A\big) + \mathrm{tr}\big(\Delta A^T \Delta\ddot A\big) + \alpha_\Delta\,\mathrm{tr}\big(\Delta A^T \Delta A\big). \qquad (14)$$

From the condition $\eta_\Delta \le 0$, we obtain

$$\Delta\ddot A = -\Delta\dot A - \alpha_\Delta \Delta A - \Gamma_A R E X^T. \qquad (15)$$

Let $\Delta A = Z_1$. We obtain the state-space representation of equation (15):

$$S_{AA}:\quad \dot Z_1 = Z_2, \qquad \dot Z_2 = -\alpha_\Delta Z_1 - Z_2 - \Gamma_A R E X^T. \qquad (16)$$
Thus, if an FR is imposed on the AIS, the AA is described by the system $S_{AA}$. In this form, applying the system (16) is difficult; therefore, a structural modification of (16) is necessary.
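Since $\Delta A = \hat A - A$ contains the unknown $A$, (16) cannot be implemented directly, which is exactly the structural difficulty just noted. For analysis, the scalar error system (9) coupled with (16) can still be simulated when $A$ is known; everything below is such an analysis sketch with illustrative values, not an implementable identifier.

```python
import math

# Scalar analysis simulation of the second-order algorithm S_AA (16):
#   de/dt  = k*e + da*x
#   dz1/dt = z2,   dz2/dt = -alpha*z1 - z2 - g*e*x,   z1 = da
# Requires the true parameter (analysis only); values are illustrative.
a, k, alpha, g = -2.0, -3.0, 1.0, 2.0
x, e = 0.0, 0.0
z1, z2 = 2.0, 0.0        # initial parametric error da(0) = 2
dt, T, t = 1e-3, 200.0, 0.0
while t < T:
    u = math.sin(t)
    dx  = a * x + u
    de  = k * e + z1 * x
    dz1 = z2
    dz2 = -alpha * z1 - z2 - g * e * x
    x += dt * dx; e += dt * de; z1 += dt * dz1; z2 += dt * dz2
    t += dt
```

The damped second-order dynamics in $Z_1, Z_2$ drive the parametric error to zero; this is the quality behaviour that the FR $\chi_\Delta$ enforces by design.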
Remark 4. There are various modifications to the S A A -system that depend on FR.
2. $\chi_{e,\Delta} = \alpha_{e,\Delta} \min_t \|E\|\,\mathrm{tr}\big(\Delta A^T \Delta\dot A\big)$ and $\eta_{e,\Delta} = \dot V_\Delta + \chi_{e,\Delta}$. Then we obtain the $A_\chi$-algorithm

$$\Delta\dot A = -\Gamma_A R E X^T - \tilde\alpha_{e,\Delta} \Gamma_{\dot A} \Delta\dot A, \qquad (17)$$

where $\tilde\alpha_{e,\Delta} = \alpha_{e,\Delta} \min_t \|E\|$ and $\|E\| = \sqrt{E^T E}$ is the Euclidean norm of the vector $E$.
Modifications of the $A_\chi$-algorithm:
(a) the integral $A_{\chi I}$-algorithm

$$\Delta\dot A = -M^{-1} \Gamma_A R E X^T, \qquad (18)$$

where $M = I_n + \tilde\alpha_{e,\Delta} \Gamma_{\dot A}$, $\Gamma_{\dot A} = \Gamma_{\dot A}^T > 0$;
(b) the $A_\chi$-algorithm as a delayed system (a modification of (17)):

$$\Delta\dot A(t) = -\Gamma_A R E(t) X^T(t) - \omega_e \Gamma_{\dot A}\big(\Delta A(t) - \Delta A(t-\tau)\big), \qquad (19)$$

where $\omega_e = \tilde\alpha_{e,\Delta}\tau^{-1}$ and $\tau > 0$ is the time lag;
(c)

$$\Delta\dot A = -\Gamma_A R E X^T - \alpha_{e,\Delta} D_{e_i} \Gamma_{\dot A} \Delta\dot A. \qquad (20)$$
Remark 5. The implementation of AA (17) and its modifications depends on the identified system and on the properties of the set $\{X(t), u(t)\}$. At the beginning of adaptation, variant (17) is applied. If the initial conditions can be chosen successfully, then algorithms (18)–(20) are applied.
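Among the modifications, (19) is notable because $A$ is constant, so $\Delta A(t) - \Delta A(t-\tau) = \hat A(t) - \hat A(t-\tau)$: the delayed difference can be computed from the estimate's own history. A scalar sketch with a ring buffer for the delay (values illustrative):

```python
import math
from collections import deque

# Scalar version of the delayed algorithm (19):
#   dah/dt = -g*e*x - w*(ah(t) - ah(t - tau))
# Since A is constant, ah(t) - ah(t-tau) equals dA(t) - dA(t-tau),
# so the law needs only the estimate's own history. Values illustrative.
a, k, g, w, tau = -2.0, -3.0, 5.0, 0.5, 0.1
x, xh, ah = 0.0, 0.0, 0.0
dt, T, t = 1e-3, 300.0, 0.0
n = int(tau / dt)
hist = deque([0.0] * n, maxlen=n)   # ah history; hist[0] is ah(t - tau)
while t < T:
    u = math.sin(t)
    e = xh - x
    dah = -g * e * x - w * (ah - hist[0])
    x  += dt * (a * x + u)
    xh += dt * (k * e + ah * x + u)
    hist.append(ah)                 # push current value, drop the oldest
    ah += dt * dah
    t  += dt
```

The delayed difference acts as an approximate derivative of $\hat A$, adding damping of order $\omega_e \tau\,\dot{\hat A}$ to the plain gradient law.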
3. $\chi_{E,\Delta} = \alpha_{E,\Delta}\,\mathrm{tr}\big(\Delta A^T D_{e_i} \Delta A\big)$, where $\alpha_{E,\Delta} > 0$ and $D_{e_i}$ is the diagonal matrix formed from the elements of the vector $E$. $\chi_{E,\Delta}$ corresponds to the $A_E$-algorithm

$$\Delta\dot A = -\Gamma_A R E X^T - \alpha_{E,\Delta} D_{e_i} \Delta A. \qquad (21)$$
Remark 5 is valid for (21). Modifications and simplifications of the $A_E$-algorithm are possible. As a special case, algorithm (5) follows from (21).
Remark 6. The FR can have a different form. Above, we considered only some examples of the restriction $\chi(E, \Delta A)$ for the adaptive identification of the matrix $A$. The described approach is also valid for identifying the vector $B$. If in variant 2(b) we take $\chi_{X,\Delta} = \alpha_{X,\Delta}\,\mathrm{tr}\big(\Delta A^T D_{x_i} \Delta A\big)$, then from (20) we obtain an analog of algorithm (18), a special case of which is equation (6).

6. AIS Properties

We evaluate the boundedness of the AIS trajectories under algorithm (19) and

$$\Delta\dot B(t) = -\Gamma_B R E(t) u(t) - \omega_e \Gamma_B^{-1}\big(\Delta B(t) - \Delta B(t-\tau)\big). \qquad (22)$$
Theorem 1. Let: 1) the matrix $A$ be Hurwitz; 2) the Lyapunov functions $V_\Delta(t) = 0.5\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + 0.5\,\Delta B^T \Gamma_B^{-1} \Delta B$ and $V_E = 0.5\,E^T R E$ admit an infinitesimal upper limit, where $\Gamma_A$, $\Gamma_B$ are diagonal matrices with positive diagonal elements; 3) $X(t) \in \mathrm{CE}_{\underline\alpha_X, \bar\alpha_X}$ and $u(t) \in \mathrm{CE}_{\underline\alpha_u, \bar\alpha_u}$; 4) there exist $\upsilon > 0$ such that the condition

$$\mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A E R X^T\big) = \upsilon\big[\mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A \Delta A\big) - E^T R^2 E\,\|X\|^2\big] \qquad (23)$$

holds for sufficiently large $t$ in some neighbourhood $O(0)$ of zero. Then the trajectories of the system (9), (19), (22) are bounded on some set of initial conditions if

$$\|\Delta A(t-\tau)\|^2 \le \eta V_\Delta - \upsilon^2 \sigma_e^3 \sigma_\Delta V_E, \qquad (24)$$

where $\eta = \bar\lambda_{\Gamma_{\dot A}^{-1}} + \tfrac34\,\upsilon\sigma_\Delta$, $\sigma_e = \bar\alpha_X^2 \bar\lambda_R$, $\sigma_\Delta = \underline\lambda_{\Gamma_{\dot A}} \underline\lambda_{\Gamma_A}$, and $\underline\lambda_{(\cdot)}$, $\bar\lambda_{(\cdot)}$ denote the minimum and maximum eigenvalues of the corresponding matrix.
The proof of Theorem 1 is presented in Appendix A.
Theorem 1 confirms the boundedness of trajectories in the system (9), (19), (22) and the possibility of local identifiability of the model parameters. These statements are valid on some set of initial conditions, since the AIS is a system with delay.
Consider the system (9), (15). To simplify the presentation, we assume that the vector $\hat B$ of the model (7) is precisely tuned (i.e., $\Delta B = 0$), so that

$$\dot E = K E + \Delta A X. \qquad (25)$$

Algorithm (15) can be presented in the form (see Appendix B)

$$\Delta\dot A = -\upsilon \Delta A - d^{-1} \Gamma_A E R X^T - \bar\kappa\,\Delta A(t-\tau). \qquad (26)$$

In (26), the argument $t$ is omitted, and $t - \tau$ is written to emphasise the delay.
Consider the function $V_E = 0.5\,E^T R E$ and the Lyapunov functional

$$V_{\Delta,\upsilon,\tau} = V_{\Delta,\upsilon} + V_{\Delta,\tau} = 0.5\,\mathrm{tr}\big(\Delta A^T \Delta A\big) + c \int_{-\tau}^{0} \mathrm{tr}\big(\Delta A^T(t+s)\Delta A(t+s)\big)\,ds, \qquad (27)$$

which depends on the initial conditions for $\Delta A(t-\tau)$.
Theorem 2. Let the conditions of Theorem 1 be fulfilled, with the functional (27) used instead of $V_\Delta$, and let: 1) there exist $\omega > 0$ such that the condition

$$\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big) = \omega\big[\mathrm{tr}\big(\Delta A^T \Delta A\big) + \mathrm{tr}\big(\Delta A^T(t-\tau)\Delta A(t-\tau)\big)\big]$$

hold for sufficiently large $t$ in some neighbourhood $O(0)$ of zero; 2) the system of inequalities

$$\begin{bmatrix}\dot V_E\\ \dot V_{\Delta,\upsilon,\tau}\end{bmatrix} \le \underbrace{\begin{bmatrix}-\mu_E & \bar\alpha_X\tilde\rho\\[4pt] \dfrac{\bar\lambda_{\Gamma_A}^2\bar\alpha_X\bar\lambda_R}{8\upsilon d^2} & -2\omega_\Delta\end{bmatrix}}_{A_W}\begin{bmatrix}V_E\\ V_{\Delta,\upsilon,\tau}\end{bmatrix} \qquad (28)$$

be fair, where $\tilde\rho > 0$ and $\omega_\Delta > 0$ are numbers depending on the parameters of the adaptive system; 3) the upper solution for the Lyapunov vector function $W = [V_E\ \ V_{\Delta,\upsilon,\tau}]^T$ satisfy the comparison system $\dot S_W = A_W S_W$ with $s_{W,k}(t_0) \ge w_{W,k}(t_0)$, where $s_{W,k}(t_0)$, $w_{W,k}(t_0)$ are the initial conditions for the elements of the corresponding vectors. Then the adaptive system (25), (19), (26) is exponentially stable with the estimate

$$W(t) \le e^{A_W (t - t_0)} S_W(t_0)$$

if $\mu_E, \omega_\Delta > 0$ and $2\mu_E\omega_\Delta > \bar\alpha_X\tilde\rho\,\bar\lambda_{\Gamma_A}^2\bar\alpha_X\bar\lambda_R / (8\upsilon d^2)$.
The proof of Theorem 2 is presented in Appendix C.
Theorem 2 proves the exponential stability of the adaptive system (AS) with algorithm (26); the Lyapunov functional (27) is applied to establish this property.
Remark 7. Adding a tuning algorithm for $\hat B$ of the form (26) does not change the statement of Theorem 2; in this case, the system of inequalities (28) remains valid.
So, we have justified the application of AA in the form of dynamic systems. These algorithms improve the quality of the tuning process for the model parameters.
Consider the AIS (9), (21) with the algorithm

$$\Delta\dot B = -\Gamma_B R E u, \qquad (29)$$

where $\Gamma_B = \Gamma_B^T > 0$ is a diagonal matrix.
Theorem 3. Let conditions 1)–3) of Theorem 1 be fulfilled, and let: 1) there exist $\upsilon_u > 0$ such that the condition $\Delta B^T R E u = 0.5\,\upsilon_u\big(\Delta B^T \Delta B + \|R E u\|^2\big)$ hold for sufficiently large $t$ in some neighbourhood $O(0)$ of zero; 2) the system of inequalities

$$\begin{bmatrix}\dot V_E\\ \dot V_\Delta\end{bmatrix} \le \underbrace{\begin{bmatrix}-\mu_E & 2\rho\\ \eta_u & -\kappa\end{bmatrix}}_{A_\Xi}\begin{bmatrix}V_E\\ V_\Delta\end{bmatrix}$$

be fair, where $\mu_E$, $\rho$, $\kappa$ are positive numbers depending on the parameters of the adaptive system; 3) the upper solution for the Lyapunov vector function $W_\Xi = [V_E\ \ V_\Delta]^T$ satisfy the comparison system $\dot S_\Xi = A_\Xi S_\Xi$ with $s_{\Xi,k}(t_0) \ge w_{\Xi,k}(t_0)$, where $s_{\Xi,k}(t_0)$, $w_{\Xi,k}(t_0)$ are the initial conditions for the elements of the corresponding vectors, $k \in \{E, \Delta\}$. Then the adaptive system (9), (21), (29) is exponentially stable with the estimate

$$W_\Xi(t) \le e^{A_\Xi (t - t_0)} S_\Xi(t_0)$$

if $\mu_E > 0$, $\kappa > 0$, $\eta_u = \bar\alpha_X\bar\lambda_R / (8\tilde\alpha) - \upsilon_u\underline\alpha_u\underline\lambda_R$, and $\mu_E\kappa \ge 2\rho\eta_u$.
Remark 8. The properties of IAA obtained without restrictions depend on fulfilment of the CE condition.

7. Simulation Results

Consider the system

$$S_1:\quad \begin{bmatrix}\dot x_{11}\\ \dot x_{12}\end{bmatrix} = \begin{bmatrix}0 & 1\\ a_{21} & a_{22}\end{bmatrix}\begin{bmatrix}x_{11}\\ x_{12}\end{bmatrix} + \begin{bmatrix}0\\ \bar a_1\end{bmatrix}x_2 + \begin{bmatrix}0\\ b_1\end{bmatrix}u_1 + \begin{bmatrix}0\\ c_1\end{bmatrix}f_1(x_{11}), \qquad y_1 = x_{11},$$

$$S_2:\quad \dot x_2 = a_2 x_2 + \bar a_2 x_{11} + b_2 u_2 + c_2 f_2(x_2), \qquad y_2 = x_2, \qquad (31)$$

where $X_1 = [x_{11}\ \ x_{12}]^T$ and $y_1$ are the state vector and output of subsystem $S_1$; $u_1$ is the input (control); $f_1(x_{11}) = \mathrm{sat}(x_{11})$ is the saturation function; $f_2(x_2) = \mathrm{sign}(x_2)$ is the sign function; $y_2$ is the output of subsystem $S_2$; $a_{21} = 2$, $a_{22} = 3$, $\bar a_1 = 1.5$, $b_1 = 1$, $c_1 = 1$, $a_2 = 1.25$, $\bar a_2 = 0.2$, $b_2 = 1$, $c_2 = 0.25$. The inputs $u_i(t)$ were sinusoidal.
The subsystem $S_1$ equation is represented [17] as

$$\dot y_1 = \alpha_1 y_1 + \alpha_2 p_{y_1} + \beta_{12} p_{x_2} + b_1 p_{u_1} + c_1 p_{f_1}, \qquad (32)$$

where $\alpha_1, \alpha_2, \beta_{12}, b_1, c_1$ are the estimated coefficients, $\mu > 0$, and

$$\dot p_{y_1} = -\mu p_{y_1} + y_1, \quad \dot p_{x_2} = -\mu p_{x_2} + x_2, \quad \dot p_{u_1} = -\mu p_{u_1} + u_1, \quad \dot p_{f_1} = -\mu p_{f_1} + f_1.$$
Models for the system (31):

$$\dot{\hat y}_1 = -k_1 e_1 + \hat a_{11} y_1 + \hat a_{12} p_{y_1} + \hat\beta_{12} p_{x_2} + \hat b_1 p_{u_1} + \hat c_1 p_{f_1}, \qquad (33)$$

$$\dot{\hat y}_2 = -k_2 e_2 + \hat a_2 y_2 + \hat{\bar a}_2 y_1 + \hat b_2 u_2 + \hat c_2 f_2, \qquad (34)$$

where $k_1, k_2 > 0$, $e_1 = \hat y_1 - y_1$, $e_2 = \hat y_2 - y_2$, and $\hat a_i$, $\hat{\bar a}_i$, $\hat b_i$, $\hat c_i$ are the tuned parameters.
Adaptive algorithms:

$$\dot{\hat a}_{11} = -\gamma_{a_{11}} e_1 y_1 - \bar\gamma_{a_{11}} \hat a_{11}(t-\tau), \quad \dot{\hat a}_{12} = -\gamma_{a_{12}} e_1 p_{y_1}, \quad \dot{\hat\beta}_{12} = -\gamma_{\beta_{12}} e_1 p_{x_2}, \quad \dot{\hat b}_1 = -\gamma_{b_1} e_1 p_{u_1}, \quad \dot{\hat c}_1 = -\gamma_{c_1} e_1 p_{f_1} - \bar\gamma_{c_1} \hat c_1(t-\tau), \qquad (35)$$

$$\dot{\hat a}_2 = -\gamma_{a_2} e_2 y_2, \quad \dot{\hat{\bar a}}_2 = -\gamma_{\bar a_2} e_2 y_1, \quad \dot{\hat b}_2 = -\gamma_{b_2} e_2 u_2, \quad \dot{\hat c}_2 = -\gamma_{c_2} e_2 f_2, \qquad (36)$$

where $\gamma_{a_{11}} = 0.026$, $\gamma_{\beta_{12}} = 0.025$, $\gamma_{b_1} = 0.002$, $\gamma_{c_1} = 0.05$, $\bar\gamma_{c_1} = 0.03$, $\bar\gamma_{a_{11}} = 0.5$.
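To keep a runnable sketch of the loop (34), (36) self-contained, the code below identifies only subsystem $S_2$, replaces the interconnection output $y_1$ by a measured sinusoid, assumes the stable value $a_2 = -1.25$ (the sign of $a_2$ is an assumption, since signs were lost in the source), and uses enlarged gains for a short run; all of these choices are illustrative.

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

# Identification of subsystem S2 with model (34) and algorithm (36).
# y1 stands in for the measured output of S1; a2 = -1.25 is an assumed
# stable value; gains are enlarged illustrative values.
a2, a2bar, b2, c2 = -1.25, 0.2, 1.0, 0.25
ah, abarh, bh, ch = 0.0, 0.0, 0.0, 0.0
k2, gam = 3.0, 2.0
y2, yh2 = 0.0, 0.0
dt, T, t = 1e-3, 200.0, 0.0
while t < T:
    u2 = math.sin(t)
    y1 = 0.8 * math.sin(0.7 * t) + 0.3     # measured interconnection signal
    f2 = sign(y2)
    e2 = yh2 - y2
    dy2  = a2 * y2 + a2bar * y1 + b2 * u2 + c2 * f2
    dyh2 = -k2 * e2 + ah * y2 + abarh * y1 + bh * u2 + ch * f2
    ah    += dt * (-gam * e2 * y2)          # (36)
    abarh += dt * (-gam * e2 * y1)
    bh    += dt * (-gam * e2 * u2)
    ch    += dt * (-gam * e2 * f2)
    y2  += dt * dy2
    yh2 += dt * dyh2
    t += dt
```

The output error $e_2$ converges by the Lyapunov argument of Section 4; how close the individual parameters get to their true values depends on the joint excitation of the four regressors $y_2$, $y_1$, $u_2$, $f_2$.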
The results are shown in Figure 1, Figure 2 and Figure 3. Figure 1 reflects the adequacy of the models (33), (34).
Figure 1. Adequacy of models (33) and (34): 1 is model (33), 2 is model (34).
Results of tuning parameters for models (33) and (34) are shown in Figure 2 and Figure 3.
Figure 2. Tuning parameters of model (33).
Figure 3. Tuning parameters of model (34).
The simulation results confirm the performance of the proposed algorithms (35) and (36). The tuning process can be linear or nonlinear; its efficiency is determined by the properties of the adaptive system and the parameters of the signals.
In Figure 4, Figure 5 and Figure 6, we present identification results for the system (31) with algorithms (35), (36), where the algorithms for tuning $\hat a_{11}$ and $\hat c_1$ in (35) have the form

$$\dot{\hat a}_{11} = \begin{cases} -\big(1 - \bar\gamma_{a_{11}} |e_1|\big)\,\gamma_{a_{11}} e_1 y_1, & |e_1 / y_1| > 0.1,\\[2pt] -\gamma_{a_{11}} e_1 y_1, & |e_1 / y_1| \le 0.1,\end{cases} \qquad \dot{\hat c}_1 = -\gamma_{c_1} e_1 p_{f_1}. \qquad (37)$$
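A scalar sketch of a switched law in the spirit of (37): a damped gain is used while the relative error $|e_1/y_1|$ is large, and the plain gradient step once it is small. The plant, threshold, and gains are illustrative choices, not the paper's configuration.

```python
import math

# Scalar sketch of a switched law like (37): damp the gain while the
# relative error |e/x| is large, use the plain gradient step otherwise.
# Threshold and gains are illustrative.
a, k, g, gbar = -2.0, -3.0, 5.0, 0.5
x, xh, ah = 1.0, 0.0, 0.0
dt, T, t = 1e-3, 300.0, 0.0
while t < T:
    u = math.sin(t)
    e = xh - x
    if abs(x) > 1e-6 and abs(e / x) > 0.1:
        dah = -(1.0 - gbar * abs(e)) * g * e * x   # damped branch
    else:
        dah = -g * e * x                            # plain gradient branch
    x  += dt * (a * x + u)
    xh += dt * (k * e + ah * x + u)
    ah += dt * dah
    t  += dt
```

The switch trades speed for smoothness: the damped branch limits early transients, and the plain branch restores the full gradient rate near convergence.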
To ensure the S-synchronizability of the system (31), we changed the parameters to $b_1 = 2.9$ and $b_2 = 1.4$. The tuning of the parameters of models (33) and (34) is shown in Figure 4 and Figure 5. The adequacy of the models is reflected in Figure 6.
Figure 4. Tuning parameters of model (33): 1– a ^ 11 , 2 – a ^ 12 , 3 – β ^ 12 , 4 – b ^ 1 , 5 – c ^ 1 .
Figure 5. Tuning parameters of model (34): 1 is b ^ 2 , 2 is a ^ 1 , 3 is c ^ 2 , 4 is a ¯ ^ 2 .
Figure 6. Adequacy of models (33) and (34).
We see that the outputs of subsystems affect adaptation processes.
Figure 7 shows phase portraits of the AIS in the spaces $(e_1, \hat a_{11})$, $(e_2, \hat c_1)$ and $(e_2, \hat{\bar a}_2)$. We see that the adaptation processes for $S_2$ are nonlinear, while they are almost linear for the $S_1$ system.
Figure 7. Phase portraits of the AIS in the spaces $(e_1, \hat a_{11})$, $(e_2, \hat c_1)$ and $(e_2, \hat{\bar a}_2)$.
So, the simulation results confirm the efficiency of the proposed algorithms.

8. Conclusion

An approach to the synthesis of adaptive algorithms based on requirements for the adaptation process is proposed. These requirements are presented as functional constraints (FR). It is shown that, for the considered class of FR, the adaptive algorithm is described by a system in the state space. Special cases of FR are considered, and the corresponding AA are obtained. For one class of adaptive algorithms, a representation as a dynamic system with aftereffect is given. The properties of adaptive identification systems are studied, and the boundedness of trajectories and exponential stability are proved. Simulation results confirm the efficiency of the adaptive algorithms.

Appendix A

Proof of Theorem 1. Consider the Lyapunov functions

$$V_\Delta = 0.5\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + 0.5\,\Delta B^T \Gamma_B^{-1} \Delta B, \qquad V_E = 0.5\,E^T R E. \qquad (A1)$$

For $\dot V_E$, we get

$$\dot V_E = -E^T Q E + E^T R\big(\Delta A X + \Delta B u\big) \le -E^T Q E + 0.5\big(\|R E\|^2 + \|\Delta A X + \Delta B u\|^2\big), \qquad (A2)$$

where $\|R E\|^2 = (R E)^T R E \le 2\bar\lambda_R V_E$, $K^T R + R K = -Q$, and $Q = Q^T > 0$ is a symmetric positive definite matrix.
Let $E^T Q E \ge \mu E^T R E$, $\mu > 0$. As

$$\|\Delta A X + \Delta B u\|^2 \le \bar\alpha_X \|\Delta A\|^2 + \bar\alpha_u \|\Delta B\|^2 \le \bar\alpha_X \bar\lambda_{\Gamma_A}\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + \bar\alpha_u \bar\lambda_{\Gamma_B}\,\Delta B^T \Gamma_B^{-1} \Delta B \le 2\rho V_\Delta, \qquad (A3)$$

where $\rho = \max\{\bar\alpha_X \bar\lambda_{\Gamma_A},\ \bar\alpha_u \bar\lambda_{\Gamma_B}\}$ and $\bar\lambda_{\Gamma_A}$, $\bar\lambda_{\Gamma_B}$ are the maximum eigenvalues of the matrices $\Gamma_A$, $\Gamma_B$, we obtain

$$\dot V_E \le -\mu_E V_E + 2\rho V_\Delta, \qquad (A4)$$

where $\mu_E = \mu - 2\bar\lambda_R > 0$.
$\dot V_\Delta$ is

$$\dot V_\Delta = \mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta\dot A\big) + \Delta B^T \Gamma_B^{-1} \Delta\dot B.$$

The component of $\dot V_\Delta$ depending on $\Delta A$ is

$$\dot V_{\Delta,1} = -\mathrm{tr}\big[\Delta A^T \Gamma_{\dot A}^{-1}\big(\omega_e \Gamma_{\dot A} \Delta A + \Gamma_A E R X^T - \omega_e \Gamma_{\dot A} \Delta A(t-\tau)\big)\big] = -\omega_e\,\mathrm{tr}\big(\Delta A^T \Delta A\big) - \mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A E R X^T\big) + \omega_e\,\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big).$$

Then

$$\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big) \le 0.5\big[\mathrm{tr}\big(\Delta A^T \Delta A\big) + \|\Delta A(t-\tau)\|^2\big] \le 0.5\,\bar\lambda_{\Gamma_A}\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + 0.5\,\|\Delta A(t-\tau)\|^2$$

and

$$\dot V_{\Delta,1} \le -\omega_A\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) - \mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A E R X^T\big) + 0.5\,\|\Delta A(t-\tau)\|^2,$$

where $\omega_A = \omega_e\big(\underline\lambda_{\Gamma_A} - 0.5\,\bar\lambda_{\Gamma_A}\big)$. Condition (23) applies to $\mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A E R X^T\big)$. Therefore,

$$\dot V_{\Delta,1} \le -\omega_A\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) - \upsilon\big[\mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A \Delta A\big) - E^T R^2 E\,\|X\|^2\big] + 0.5\,\|\Delta A(t-\tau)\|^2. \qquad (A5)$$
As

$$\mathrm{tr}\big(\Delta A^T \Gamma_{\dot A}^{-1} \Gamma_A \Delta A\big) \ge \underline\lambda_{\Gamma_{\dot A}^{-1}} \underline\lambda_{\Gamma_A}^2\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) \quad \text{and} \quad E^T R^2 E\,\|X\|^2 \le 2\bar\lambda_R \bar\alpha_X V_E,$$

inequality (A5) gives

$$\dot V_{\Delta,1} \le -\tilde\omega\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + 2\upsilon \bar\lambda_R \bar\alpha_X V_E + 0.5\,\|\Delta A(t-\tau)\|^2, \qquad (A6)$$

where $\tilde\omega = \omega_A + \upsilon\,\underline\lambda_{\Gamma_{\dot A}^{-1}} \underline\lambda_{\Gamma_A}^2$. Completing the square in (A6), discarding the nonnegative completed term, and applying the inequality

$$-a z^2 + b z \le -\frac{a z^2}{2} + \frac{b^2}{2a}, \qquad a > 0,\ b \ge 0,\ z \ge 0, \qquad (A7)$$

with $z = \sqrt{\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big)}$, we get

$$\dot V_{\Delta,1} \le -\tfrac38\,\tilde\omega\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) + \tfrac13\,\kappa_E V_E + 0.5\,\|\Delta A(t-\tau)\|^2, \qquad (A8)$$

where $\kappa_E = \upsilon \bar\lambda_R \bar\alpha_X$.
The component of $\dot V_\Delta$ depending on $\Delta B$ is

$$\dot V_{\Delta,2} = -\Delta B^T R E(t) u - \omega_e\,\Delta B^T \Gamma_B^{-1} \Gamma_B^{-1} \Delta B + \omega_e\,\Delta B^T \Gamma_B^{-1} \Gamma_B^{-1} \Delta B(t-\tau).$$

Completing the square in the same manner and using

$$\omega_e\,\Delta B^T \Gamma_B^{-1} \Gamma_B^{-1} \Delta B(t-\tau) \le 0.5\,\omega_e \bar\lambda_{\Gamma_B^{-1}}\,\Delta B^T \Gamma_B^{-1} \Delta B + 0.5\,\omega_e \bar\lambda_{\Gamma_B^{-1}}^2\,\|\Delta B(t-\tau)\|^2,$$

we get

$$\dot V_{\Delta,2} \le -\chi\,\Delta B^T \Gamma_B^{-1} \Delta B + \frac{\bar\alpha_u \bar\lambda_R}{8\,\omega_e \underline\lambda_{\Gamma_B^{-1}}}\,V_E + 0.5\,\omega_e \bar\lambda_{\Gamma_B^{-1}}^2\,\|\Delta B(t-\tau)\|^2, \qquad (A9)$$

where $\chi = 0.5\,\omega_e\big(1.5\,\underline\lambda_{\Gamma_B^{-1}} - \bar\lambda_{\Gamma_B^{-1}}\big) > 0$ and $\underline\lambda_{\Gamma_B^{-1}}$, $\bar\lambda_{\Gamma_B^{-1}}$ are the minimum and maximum eigenvalues of $\Gamma_B^{-1}$.
Let $\eta = \min\{0.375\,\tilde\omega,\ \chi\}$, $\tilde\kappa = \tfrac13\,\kappa_E + \dfrac{\bar\alpha_u \bar\lambda_R}{8\,\omega_e \underline\lambda_{\Gamma_B^{-1}}}$, and $\vartheta = \max\{1,\ \omega_e \bar\lambda_{\Gamma_B^{-1}}^2\}$. Then, considering (A8) and (A9), we get

$$\dot V_\Delta \le -\eta V_\Delta + \tilde\kappa V_E + 0.5\,\vartheta\big(\|\Delta A(t-\tau)\|^2 + \|\Delta B(t-\tau)\|^2\big). \qquad (A10)$$

From (A10) we obtain that the trajectories of the system (9), (19), (22) are bounded if the condition

$$0.5\,\vartheta\big(\|\Delta A(t-\tau)\|^2 + \|\Delta B(t-\tau)\|^2\big) \le \eta V_\Delta - \tilde\kappa V_E \qquad (A11)$$

is satisfied on a certain set of initial conditions. □

Appendix B

Derivation of algorithm (26). Present algorithm (15) as

$$\Delta\dot A = -\Delta\ddot A - \alpha_\Delta \Delta A - \Gamma_A E R X^T. \qquad (B1)$$

Let $\Delta\ddot A(t) = \kappa\big(\Delta\dot A(t) - \Delta\dot A(t-\tau)\big)$, $\kappa = \Delta t^{-1}$, where $\tau > 0$ and $\Delta t$ is the discreteness step. Then (B1) can be presented as

$$\Delta\dot A(t) = -\alpha_\Delta d^{-1} \Delta A(t) - d^{-1} \Gamma_A E(t) R X^T(t) + \kappa d^{-1} \Delta\dot A(t-\tau), \qquad (B2)$$

where $d = 1 + \kappa$. Algorithm (B2) is rewritten as

$$\Delta\dot A = -\upsilon \Delta A - d^{-1} \Gamma_A E R X^T - \bar\kappa\,\Delta A(t-\tau), \qquad (B3)$$

where $\upsilon = d^{-1} \alpha_\Delta \kappa$ and $\bar\kappa = \kappa d^{-1}$.

Appendix C

Proof of Theorem 2. Following the proof of Theorem 1, we obtain for $\dot V_E$:

$$\dot V_E = -E^T Q E + E^T R \Delta A X \le -E^T Q E + 0.5\big(\|R E\|^2 + \|\Delta A X\|^2\big). \qquad (C1)$$

Let $E^T Q E \ge \mu E^T R E$, $\mu > 0$, $\|R E\|^2 \le 2\bar\lambda_R V_E$, and $\|\Delta A X\|^2 \le \bar\alpha_X\,\mathrm{tr}\big(\Delta A^T \Delta A\big)$. As $\mathrm{tr}\big(\Delta A^T \Delta A\big) \le \tilde\rho V_{\Delta,\upsilon,\tau}$, then

$$\dot V_E \le -\mu_E V_E + \bar\alpha_X \tilde\rho V_{\Delta,\upsilon,\tau}, \qquad (C2)$$

where $\mu_E = \mu - 2\bar\lambda_R > 0$ and $\tilde\rho > 0$.
Consider $V_{\Delta,\upsilon,\tau}$. For $\dot V_{\Delta,\upsilon}$ we get

$$\dot V_{\Delta,\upsilon} = -\upsilon\,\mathrm{tr}\big(\Delta A^T \Delta A\big) - d^{-1}\,\mathrm{tr}\big(\Delta A^T \Gamma_A E R X^T\big) - \bar\kappa\,\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big). \qquad (C3)$$

Transform (C3) by completing the square and discarding the nonnegative completed term:

$$\dot V_{\Delta,\upsilon} \le -\tfrac34\,\upsilon\,\mathrm{tr}\big(\Delta A^T \Delta A\big) + \frac{1}{16\,\upsilon d^2}\,\big\|\Gamma_A E R X^T\big\|^2 - \bar\kappa\,\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big). \qquad (C4)$$

Let $\|\Gamma_A E R X^T\|^2 \le \bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R\, E^T R E = 2\bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R V_E$. Then (C4) gives

$$\dot V_{\Delta,\upsilon} \le -\tfrac34\,\upsilon\,\mathrm{tr}\big(\Delta A^T \Delta A\big) + \frac{\bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R}{8\,\upsilon d^2}\,V_E - \bar\kappa\,\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big). \qquad (C5)$$

Let

$$\mathrm{tr}\big(\Delta A^T \Delta A(t-\tau)\big) = \omega\big[\mathrm{tr}\big(\Delta A^T \Delta A\big) + \mathrm{tr}\big(\Delta A^T(t-\tau)\Delta A(t-\tau)\big)\big], \qquad \omega \ge 0. \qquad (C6)$$

Then

$$\dot V_{\Delta,\upsilon,\tau} \le -2\omega_\Delta V_{\Delta,\upsilon} + \frac{\bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R}{8\,\upsilon d^2}\,V_E - c_\tau\,\mathrm{tr}\big(\Delta A^T(t-\tau)\Delta A(t-\tau)\big), \qquad (C7)$$

where $\omega_\Delta = 0.75\,\upsilon + \omega\bar\kappa$ and $c_\tau = \omega\bar\kappa + c > 0$. As $\underline m \le \mathrm{tr}\big(\Delta A^T(t-\tau)\Delta A(t-\tau)\big) \le \bar m$, by the integral mean value theorem

$$\mathrm{tr}\big(\Delta A^T(t-\tau)\Delta A(t-\tau)\big) \ge \tau^{-1} \int_{-\tau}^{0} \mathrm{tr}\big(\Delta A^T(t+s)\Delta A(t+s)\big)\,ds.$$

Let $\beta = \min\{2\omega_\Delta,\ c_\tau \tau^{-1}\}$. Then

$$\dot V_{\Delta,\upsilon,\tau} \le -2\beta V_{\Delta,\upsilon,\tau} + \frac{\bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R}{8\,\upsilon d^2}\,V_E.$$
So, the system of inequalities

$$\begin{bmatrix}\dot V_E\\ \dot V_{\Delta,\upsilon,\tau}\end{bmatrix} \le \underbrace{\begin{bmatrix}-\mu_E & \bar\alpha_X\tilde\rho\\[4pt] \dfrac{\bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R}{8\,\upsilon d^2} & -2\beta\end{bmatrix}}_{A_W}\begin{bmatrix}V_E\\ V_{\Delta,\upsilon,\tau}\end{bmatrix} \qquad (C8)$$

is valid for the system (25), (26). The upper solution of (C8) satisfies the vector comparison system $\dot S_W = A_W S_W$ if $s_i(t_0) \ge w_i(t_0)$, where $s_i(t_0)$, $w_i(t_0)$ are the initial conditions for the elements of the vectors $S_W$, $W$. The adaptive system is exponentially stable with the estimate

$$W(t) \le e^{A_W (t - t_0)} S_W(t_0) \qquad (C9)$$

if $\mu_E, \omega_\Delta > 0$ and $2\mu_E\omega_\Delta > \bar\alpha_X\tilde\rho\,\bar\lambda_{\Gamma_A}^2 \bar\alpha_X \bar\lambda_R / (8\,\upsilon d^2)$. □

Appendix D

Proof of Theorem 3. Consider the AS (9), (21), (29). Applying the Lyapunov functions of Theorem 1, we obtain for $\dot V_E$ (see (A4)):

$$\dot V_E \le -\mu_E V_E + 2\rho V_\Delta, \qquad (D1)$$

where $\rho = \max\{\bar\alpha_X \bar\lambda_{\Gamma_A},\ \bar\alpha_u \bar\lambda_{\Gamma_B}\}$, $\bar\lambda_{\Gamma_A}$, $\bar\lambda_{\Gamma_B}$ are the maximum eigenvalues of $\Gamma_A$, $\Gamma_B$, $E^T Q E \ge \mu E^T R E$, and $\mu_E = \mu - 2\bar\lambda_R > 0$.
For $\dot V_\Delta$ we have

$$\dot V_\Delta = -\mathrm{tr}\big(\Delta A^T R E X^T\big) - \alpha_{E,\Delta}\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} D_{e_i} \Delta A\big) - \Delta B^T R E u. \qquad (D2)$$

Let

$$\alpha_{E,\Delta}\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} D_{e_i} \Delta A\big) \ge \tilde\alpha\,\mathrm{tr}\big(\Delta A^T \Delta A\big),$$

where $\min_i |e_i| = \delta_{e_i}$ and $\tilde\alpha = \alpha_{E,\Delta}\delta_{e_i}$. Then, completing the square in (D2) and discarding the nonnegative completed term, we get

$$\dot V_\Delta \le -\tfrac34\,\tilde\alpha\,\mathrm{tr}\big(\Delta A^T \Delta A\big) + \frac{\bar\alpha_X \bar\lambda_R}{8\,\tilde\alpha}\,V_E - \Delta B^T R E u. \qquad (D3)$$
Let $\Delta B^T R E u = 0.5\,\upsilon_u\big(\Delta B^T \Delta B + \|R E u\|^2\big)$, $\upsilon_u > 0$. Using $u^2 \ge \underline\alpha_u$ and $\|R E\|^2 \ge 2\underline\lambda_R V_E$, so that $-0.5\,\upsilon_u \|R E u\|^2 \le -\upsilon_u \underline\alpha_u \underline\lambda_R V_E$, we obtain

$$\dot V_\Delta \le -\tfrac34\,\tilde\alpha\,\mathrm{tr}\big(\Delta A^T \Delta A\big) - 0.5\,\upsilon_u \Delta B^T \Delta B + \eta_u V_E, \qquad (D4)$$

where $\eta_u = \bar\alpha_X \bar\lambda_R / (8\,\tilde\alpha) - \upsilon_u \underline\alpha_u \underline\lambda_R$. As

$$-\tfrac34\,\tilde\alpha\,\mathrm{tr}\big(\Delta A^T \Delta A\big) - 0.5\,\upsilon_u \Delta B^T \Delta B \le -\tfrac34\,\tilde\alpha\,\underline\lambda_{\Gamma_A}\,\mathrm{tr}\big(\Delta A^T \Gamma_A^{-1} \Delta A\big) - 0.5\,\upsilon_u \underline\lambda_{\Gamma_B}\,\Delta B^T \Gamma_B^{-1} \Delta B,$$

then

$$\dot V_\Delta \le -\kappa V_\Delta + \eta_u V_E, \qquad (D5)$$

where $\kappa = \min\{1.5\,\tilde\alpha\,\underline\lambda_{\Gamma_A},\ \upsilon_u \underline\lambda_{\Gamma_B}\}$ and $\underline\lambda_{(\cdot)}$ is the minimum eigenvalue of the corresponding matrix.
So, we obtain the system of inequalities

$$\begin{bmatrix}\dot V_E\\ \dot V_\Delta\end{bmatrix} \le \underbrace{\begin{bmatrix}-\mu_E & 2\rho\\ \eta_u & -\kappa\end{bmatrix}}_{A_\Xi}\begin{bmatrix}V_E\\ V_\Delta\end{bmatrix}. \qquad (D6)$$

The estimate of exponential stability for the system is

$$W_\Xi(t) \le e^{A_\Xi (t - t_0)} S_\Xi(t_0), \qquad (D7)$$

where $S_\Xi(t)$ is the solution of the comparison system $\dot S_\Xi = A_\Xi S_\Xi$ for (D6), provided $s_{\Xi,k}(t_0) \ge w_{\Xi,k}(t_0)$, where $s_{\Xi,k}(t_0)$, $w_{\Xi,k}(t_0)$ are the initial conditions for the elements of the corresponding vectors. The estimate (D7) is valid if $\mu_E > 0$, $\kappa > 0$ and $\mu_E\kappa \ge 2\rho\eta_u$. □

References

  1. Narendra K. S., Annaswamy A. M. Robust adaptive control in the presence of bounded disturbances. IEEE Transactions on automatic control, 1986; 31(4); 306-315.
  2. Nikolić T., Nikolić G., Petrović B. Adaptive controller based on LMS algorithm for grid-connected and islanding inverters. Proceedings of the 8th Small Systems Simulation Symposium 2020, Niš, Serbia, 12th-14th February 2020, 2020;107-110.
  3. Polston J. D., Hoagg J. B. Decentralized adaptive disturbance rejection for relative-degree-one local subsystems. 2014 American Control Conference (ACC) June 4-6, 2014. Portland, Oregon, USA. 2014; 1316-1321. [CrossRef]
  4. Lozano R., Brogliato B. Adaptive control of robot manipulators with flexible joints. IEEE Transactions on automatic control, 1992; 37(2);171-181. [CrossRef]
  5. Landau I.D., Lozano R., M’Saad M., Karimi A. Adaptive control algorithms, analysis and applications. Second edition. Springer: London Dordrecht Heidelberg New York, 2011.
  6. Duan G. High-order fully actuated system approaches: Part V. Robust adaptive control. International journal of systems science, 2021;52(10); 2129–2143. [CrossRef]
  7. Landau Y. D. Adaptive Control: The Model reference approach. Dekker, 1979.
  8. Kaufman H., Barkana I., Sobel K. Direct adaptive control algorithms: theory and applications. Second edition. Springer, 1998.
  9. Hua C., Ning P., Li K. Adaptive prescribed-time control for a class of uncertain nonlinear systems. IEEE transactions on automatic control, 2022;67(11); 6159-6166.
  10. Weise C., Kaufmann T., Reger J. Model reference adaptive control with proportional integral adaptation law. 2024 European Control Conference (ECC) June 25-28, 2024. Stockholm, Sweden, 2024; 486-492.
  11. Ioannou P., Kokotovic P., Instability analysis and improvement of robustness of adaptive control, Automatica, 1984;20(5); 583–594. [CrossRef]
  12. Narendra K., Annaswamy A. A new adaptive law for robust adaptation without persistent excitation, IEEE Transactions on automatic control, 1987;32(2);134–145. [CrossRef]
  13. Lavretsky E., Gibson T. E., Annaswamy A. M. Projection operator in adaptive systems. 2012, arXiv:1112.4232. [CrossRef]
  14. Chen Z., Yang T. Y., Xiao Y., Pan X., Yang W. Model reference adaptive hierarchical control framework for shake table tests. Earthquake Engng Struct Dyn. 2025;5(4);346–362. [CrossRef]
  15. Brahmi B., Ghommam J., Saad M. Adaptive observer-based backstepping-super twisting control for robust trajectory tracking in robot manipulators. TechRxiv. September 18, 2024. [CrossRef]
  16. Narendra K. S., Han Z. Adaptive control using collective information obtained from multiple models. Proceedings of the 18th World Congress the International Federation of Automatic Control Milano (Italy) August 28 - September 2, 2011; 362-367.
  17. Narendra K. S., Tian Z. Adaptive identification and control of linear periodic systems. Proceedings of the 45th IEEE Conference on Decision & Control Manchester Grand Hyatt Hotel San Diego, CA, USA, December 13-15, 2006; 465-470.
  18. Krstić M., Kanellakopoulos I., Kokotović P. Nonlinear and adaptive control design. John Wiley & Sons, Inc, 1995.
  19. Hovakimyan N., Cao C. L1 adaptive control theory: guaranteed robustness with fast adaptation. SIAM, 2010.
  20. Karabutov N. Identification of decentralized control systems. Preprints.org (www.preprints.org), 2024. [CrossRef]
  21. Podval’ny S.L., Vasil’ev E.M. Multi-level signal adaptation in non-stationary control systems. Voronezh state technical university bulletin, 2022;18(5);38-47.
  22. Karabutov N. N. Structural Identifiability evaluation of system with nonsymmetric nonlinearities. Mekhatronika, Avtomatizatsiya, Upravlenie, 2024;25(2);55-64. [CrossRef]