Preprint

The Effect of the Cost Functional on Asymptotic Solution to One Class of Zero-Sum Linear-Quadratic Cheap Control Differential Games

Submitted: 01 July 2025. Posted: 03 July 2025.
Abstract
A finite-horizon zero-sum linear-quadratic differential game is considered. The feature of this game is that the cost of the control of the minimizing player (the minimizer) in the game’s cost functional is much smaller than the cost of the control of the maximizing player (the maximizer) and the cost of the state variable. This smallness is due to a positive small multiplier (a small parameter) for the quadratic form of the minimizer’s control in the integrand of the cost functional. Two cases of the game's cost functional are studied: (i) the current state cost in the integrand of the cost functional is a positive definite quadratic form; (ii) the current state cost in the integrand of the cost functional is a positive semidefinite (but non-zero) quadratic form. For each of these cases, an asymptotic solution of the considered game with respect to the small parameter is formally constructed and justified. These solutions are compared with each other. An illustrative example is presented.

1. Introduction

In this paper, a two-player finite-horizon zero-sum linear-quadratic differential game is considered. The feature of the considered game is that the control cost of the minimizer (the minimizing player) in the cost functional is small in comparison with the cost of the state variable and with the control cost of the maximizer (the maximizing player). Such a feature means that the game under consideration is a cheap control game. In the most general formulation, a cheap control problem is an extremal control problem in which the control cost of at least one decision maker is much smaller than the cost of the state variable in at least one cost functional of the problem.
Cheap control problems have a considerable importance in qualitative and quantitative analysis of many topics in the theory of optimal control, the theory of $H_\infty$ control and the theory of differential games. For instance, such problems are important in the following topics: (1) existence analysis and analytical/numerical computation of singular controls and arcs (see, e.g., [1,2,3,4,5,6,7,8,9,10,11,12,13,14]); (2) derivation of limiting forms and maximally achievable accuracy of optimal regulators and filters (see, e.g., [15,16,17,18,19,20,21]); (3) study of inverse optimal control problems (see, e.g., [22]); (4) solution of optimal control problems with high gain control in dynamics (see, e.g., [23,24]).
The Hamilton boundary-value problem and the Hamilton–Jacobi–Bellman–Isaacs equation, associated with a cheap control problem by solvability (control optimality) conditions, are singularly perturbed because of the smallness of the control cost. This feature means that cheap control problems can be (and they really are) sources of novel classes of singularly perturbed differential equations. Thus, cheap control problems are also of considerable interest and importance in the theory of differential equations.
As mentioned above, in this paper we study a cheap control differential game. Cheap control differential games are extensively studied in the literature. Thus, in [12,13,14,25,26,27,28,29,30,31] various zero-sum cheap control games were analyzed. Different cheap control Nash equilibrium games were studied in [32,33,34]. A cheap control Stackelberg equilibrium differential game was studied in [35].
In most of the works devoted to the study of cheap control problems, the following two types of the quadratic cost of the "fast" state variable in the integral part of the cost functional are considered: (i) the cost is a positive definite quadratic form (see, e.g., [4,9,12,13,14,15,25,26,32,33,35] and references therein); (ii) the cost is zero (see, e.g., [27,29,31,34,36] and references therein). In the present paper, the intermediate case is studied, namely, the case where the quadratic cost of the "fast" state variable in the integral part of the cost functional is a positive semidefinite (but non-zero) quadratic form.
More precisely, in this paper we consider the finite-horizon zero-sum linear-quadratic differential game with the cheap control of the minimizer. The dynamics of the considered game is non-homogeneous. The dimension of the minimizer’s control coincides with the dimension of the state vector and the matrix-valued coefficient of the minimizer’s control in the equation of dynamics has full rank. Hence, the entire state variable is a "fast" one. Two cases of the quadratic cost of the state variable in the integral part of the cost functional are treated, namely, (a) positive definite quadratic form and (b) positive semi-definite (but non-zero) quadratic form. Due to the solvability conditions of the game, the derivation of its state-feedback saddle-point is reduced to solution of terminal-value problems for three differential equations: matrix Riccati equation, vector linear equation and scalar trivial equation. For these differential equations with the terminal conditions, asymptotic solutions are formally constructed and justified. In the aforementioned cases (a) and (b), the algorithms of constructing the asymptotic solutions and the solutions themselves differ considerably from each other. Based on these asymptotic solutions, asymptotic approximation of the game’s value and approximate-saddle point are derived in each of the cases (a) and (b).
The following should be noted. The cheap control differential game in the case (a) was treated in the literature (and even in a more general form than in the present paper). However, to the best of our knowledge, the version of such a game with non-homogeneous dynamics, including the derivation of an approximate saddle point, has not yet been considered in the literature. Furthermore, to the best of our knowledge, the case (b) is completely novel. This case yields new types of singularly perturbed Riccati matrix and linear vector differential equations. For these equations, essentially novel approaches to the derivation of the asymptotic solutions are proposed. Moreover, along with the separate analysis of the cases (a) and (b), a comparison of the algorithms for the derivation of the aforementioned asymptotic solutions and of the solutions themselves is presented. In this comparison, the case (a) serves as a reference clearly showing the considerable novelty of the case (b) and its analysis.
The paper is organized as follows. In the next section (Section 2), the cheap control differential game is rigorously formulated. Main definitions are presented. In Section 3, the non-singular (invertible) transformation of the initially formulated game is carried out. This transformation yields a new cheap control differential game which is considerably simpler than the initially formulated game. The equivalence of both games to each other is proven. It should be noted that in both games the state variable is the "fast" one. In the sequel of the paper, the new game is considered as an original cheap control differential game. In Section 4, the solvability conditions of this game are presented. These conditions contain terminal-value problems for three differential equations: matrix Riccati equation, vector linear equation and scalar trivial equation. Due to the cheap control of the minimizing player, these differential equations are perturbed by a small positive parameter ε . Along with the aforementioned terminal-value problems, the solvability conditions of the game contain the expressions for the components of the state-feedback saddle point and the value of the game. In Section 5, the asymptotic analysis with respect to ε of these solvability conditions is carried out in the case where the current state cost in the integrand of the cost functional is a positive definite quadratic form. In Section 6, such an analysis is carried out in the case where the current state cost in the integrand of the cost functional is a positive semi-definite (but non-zero) quadratic form. In both sections, the asymptotic analysis includes asymptotic solutions of the aforementioned terminal-value problems, obtaining asymptotic approximations of the game value and derivation of approximate-saddle point. In Section 7, an illustrative example is presented. Section 8 is devoted to conclusions.
The following main notations are applied in the paper.
  • $\mathbb{R}^n$ denotes the $n$-dimensional real Euclidean space;
  • $\|\cdot\|$ denotes the Euclidean norm either of a vector ($\|z\|$) or of a matrix ($\|A\|$);
  • the superscript "$T$" denotes the transposition either of a vector ($z^T$) or of a matrix ($A^T$);
  • $I_n$ denotes the identity matrix of dimension $n$;
  • $\mathrm{col}(x,y)$, where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$, denotes the column block-vector of the dimension $n+m$ with the upper block $x$ and the lower block $y$;
  • $\mathrm{diag}(a_1,\ldots,a_n)$ denotes the diagonal matrix of the dimension $n \times n$ with the main diagonal entries $a_1,\ldots,a_n$;
  • $L^2[t_1,t_2;\mathbb{R}^n]$ denotes the space of all functions $z(\cdot): [t_1,t_2] \to \mathbb{R}^n$ square integrable in the interval $[t_1,t_2]$.

2. Initial Game Formulation and Main Definitions

Consider the following differential system controlled by two decision makers:
$$\frac{d\zeta(t)}{dt} = A(t)\zeta(t) + B(t)w(t) + C(t)v(t) + \phi(t), \quad t \in [0,t_f], \qquad \zeta(0) = \zeta_0, \tag{1}$$
where $\zeta(t) \in \mathbb{R}^n$ is a state variable; $w(t) \in \mathbb{R}^n$ and $v(t) \in \mathbb{R}^m$ are controls of the decision makers (players); $A(t)$, $B(t)$ and $C(t)$ are given matrices of corresponding dimensions, while $\phi(t)$ is a given vector of corresponding dimension; $\zeta_0 \in \mathbb{R}^n$ is a given constant vector; $t_f > 0$ is a given time instant; the matrix-valued functions $A(t)$, $B(t)$, $C(t)$ and the vector-valued function $\phi(t)$ are continuous in the interval $[0,t_f]$; for all $t \in [0,t_f]$, $\det B(t) \neq 0$.
The cost functional, to be minimized by the control $w$ (the minimizer’s control) and maximized by the control $v$ (the maximizer’s control), has the form
$$J(w,v) = \int_0^{t_f} \Big[\zeta^T(t)D(t)\zeta(t) + \varepsilon^2 w^T(t)G_w(t)w(t) - v^T(t)G_v(t)v(t)\Big]dt, \tag{2}$$
where $D(t)$, $G_w(t)$ and $G_v(t)$ are given matrices of corresponding dimensions; for all $t \in [0,t_f]$, $D(t)$ is symmetric and positive definite/positive semidefinite, while $G_w(t)$ and $G_v(t)$ are symmetric and positive definite; the matrix-valued functions $D(t)$, $G_w(t)$ and $G_v(t)$ are continuous in the interval $[0,t_f]$; $\varepsilon > 0$ is a small parameter.
We assume that both players know perfectly all the data appearing in (1)-(2), as well as the current (state, time)-position of the system (1).
Consider the set $\widetilde{W}$ of all functions $w = \tilde{w}(\zeta,t): \mathbb{R}^n \times [0,t_f] \to \mathbb{R}^n$, which are measurable with respect to $t \in [0,t_f]$ for any given $\zeta \in \mathbb{R}^n$ and satisfy the local Lipschitz condition with respect to $\zeta \in \mathbb{R}^n$ uniformly in $t \in [0,t_f]$. Similarly, let $\widetilde{V}$ be the set of all functions $v = \tilde{v}(\zeta,t): \mathbb{R}^n \times [0,t_f] \to \mathbb{R}^m$, which are measurable with respect to $t \in [0,t_f]$ for any given $\zeta \in \mathbb{R}^n$ and satisfy the local Lipschitz condition with respect to $\zeta \in \mathbb{R}^n$ uniformly in $t \in [0,t_f]$.
Based on the results of the book [14], we introduce the following definitions.
Definition 1. 
By $\widetilde{(WV)}$, we denote the set of all pairs $\big\{\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big\}$, $(\zeta,t) \in \mathbb{R}^n \times [0,t_f]$, satisfying the following conditions: (i) $\tilde{w}(\zeta,t) \in \widetilde{W}$, $\tilde{v}(\zeta,t) \in \widetilde{V}$; (ii) the initial-value problem (1) for $w(t) = \tilde{w}(\zeta,t)$, $v(t) = \tilde{v}(\zeta,t)$ and any $\zeta_0 \in \mathbb{R}^n$ has the unique absolutely continuous solution $\zeta_{wv}(t;\zeta_0)$ in the entire interval $[0,t_f]$; (iii) $\tilde{w}\big(\zeta_{wv}(t;\zeta_0),t\big) \in L^2[0,t_f;\mathbb{R}^n]$; (iv) $\tilde{v}\big(\zeta_{wv}(t;\zeta_0),t\big) \in L^2[0,t_f;\mathbb{R}^m]$. We call $\widetilde{(WV)}$ the set of all admissible pairs of the players’ state-feedback controls in the game (1)-(2).
For a given $\tilde{w}(\zeta,t) \in \widetilde{W}$, we consider the set
$$\widetilde{K}_v\big[\tilde{w}(\zeta,t)\big] = \Big\{\tilde{v}(\zeta,t) \in \widetilde{V}: \big\{\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big\} \in \widetilde{(WV)}\Big\}.$$
Let us denote
$$\widetilde{L}_w = \Big\{\tilde{w}(\zeta,t) \in \widetilde{W}: \widetilde{K}_v\big[\tilde{w}(\zeta,t)\big] \neq \emptyset\Big\}.$$
Similarly, for a given $\tilde{v}(\zeta,t) \in \widetilde{V}$, we consider the set
$$\widetilde{K}_w\big[\tilde{v}(\zeta,t)\big] = \Big\{\tilde{w}(\zeta,t) \in \widetilde{W}: \big\{\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big\} \in \widetilde{(WV)}\Big\}.$$
Let us denote
$$\widetilde{L}_v = \Big\{\tilde{v}(\zeta,t) \in \widetilde{V}: \widetilde{K}_w\big[\tilde{v}(\zeta,t)\big] \neq \emptyset\Big\}.$$
Definition 2. 
For a given $\tilde{w}(\zeta,t) \in \widetilde{L}_w$, the value
$$J_w\big[\tilde{w}(\zeta,t); \zeta_0\big] = \sup_{\tilde{v}(\zeta,t) \in \widetilde{K}_v[\tilde{w}(\zeta,t)]} J\big(\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big)$$
is called the guaranteed result of $\tilde{w}(\zeta,t)$ in the game (1)-(2).
Definition 3. 
For a given $\tilde{v}(\zeta,t) \in \widetilde{L}_v$, the value
$$J_v\big[\tilde{v}(\zeta,t); \zeta_0\big] = \inf_{\tilde{w}(\zeta,t) \in \widetilde{K}_w[\tilde{v}(\zeta,t)]} J\big(\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big)$$
is called the guaranteed result of $\tilde{v}(\zeta,t)$ in the game (1)-(2).
Definition 4. 
A pair $\big\{\tilde{w}^*(\zeta,t), \tilde{v}^*(\zeta,t)\big\} \in \widetilde{(WV)}$ is called a saddle-point solution of the game (1)-(2) if the guaranteed results of $\tilde{w}^*(\zeta,t)$ and $\tilde{v}^*(\zeta,t)$ in this game are equal to each other for all $\zeta_0 \in \mathbb{R}^n$, i.e.,
$$J_w\big[\tilde{w}^*(\zeta,t); \zeta_0\big] = J_v\big[\tilde{v}^*(\zeta,t); \zeta_0\big] \quad \forall\, \zeta_0 \in \mathbb{R}^n.$$
If this equality is valid, then the value
$$J^*(\zeta_0) = J_w\big[\tilde{w}^*(\zeta,t); \zeta_0\big] = J_v\big[\tilde{v}^*(\zeta,t); \zeta_0\big]$$
is called a value of the game (1)-(2). The solution of the initial-value problem (1) with $w(t) = \tilde{w}^*(\zeta,t)$, $v(t) = \tilde{v}^*(\zeta,t)$ is called a saddle-point trajectory of the game (1)-(2).

3. Transformation of the Differential Game (1)-(2)

In what follows, we assume that:
A1. The matrix-valued functions $A(t)$, $C(t)$, $G_v(t)$ are twice continuously differentiable in the interval $[0,t_f]$.
A2. The matrix-valued functions $B(t)$, $D(t)$, $G_w(t)$ are three times continuously differentiable in the interval $[0,t_f]$.
A3. The vector-valued function $\phi(t)$ is twice continuously differentiable in the interval $[0,t_f]$.
Remark 1. 
By $G_w^{1/2}(t)$, let us denote the unique symmetric positive definite square root of the positive definite matrix $G_w(t)$, $t \in [0,t_f]$. The inverse matrix of $G_w^{1/2}(t)$ is denoted as $G_w^{-1/2}(t)$. It is clear that $G_w^{-1/2}(t)$ also is symmetric and positive definite. Moreover, due to the assumption A2, the matrix-valued functions $G_w^{1/2}(t)$ and $G_w^{-1/2}(t)$ are three times continuously differentiable in the interval $[0,t_f]$.
Remark 2. 
Since the matrix $D(t)$ is symmetric for all $t \in [0,t_f]$, then the matrix
$$G_w^{-1/2}(t)B^T(t)D(t)B(t)G_w^{-1/2}(t)$$
also is symmetric for all $t \in [0,t_f]$. Therefore, due to the results of [38], there exists an orthogonal matrix $H(t)$, $t \in [0,t_f]$ $\big(H^{-1}(t) = H^T(t)\big)$ such that
$$H^T(t)G_w^{-1/2}(t)B^T(t)D(t)B(t)G_w^{-1/2}(t)H(t) = \Lambda(t) = \mathrm{diag}\big(\lambda_1(t), \lambda_2(t), \ldots, \lambda_n(t)\big), \quad t \in [0,t_f], \tag{3}$$
where $\lambda_i(t)$, $(i = 1,2,\ldots,n)$ are eigenvalues of the matrix $G_w^{-1/2}(t)B^T(t)D(t)B(t)G_w^{-1/2}(t)$. Due to the assumption A2 and the results of [39], the matrix-valued function $H(t)$ and the functions $\lambda_i(t)$, $(i = 1,2,\ldots,n)$ are three times continuously differentiable in the interval $[0,t_f]$. Moreover, since the matrix $D(t)$ is at least positive semidefinite, then $\lambda_i(t) \geq 0$, $(i = 1,2,\ldots,n)$, $t \in [0,t_f]$.
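The diagonalization of Remark 2 can be reproduced numerically. The following sketch (hypothetical matrices chosen for illustration; numpy assumed) computes $G_w^{-1/2}$, forms the symmetric matrix of Remark 2, and diagonalizes it by an orthogonal matrix, as in the equation (3).

```python
import numpy as np

# Numerical sketch of Remark 2 (hypothetical data): compute G_w^{-1/2}, form the
# symmetric matrix M = G_w^{-1/2} B^T D B G_w^{-1/2}, and diagonalize it with an
# orthogonal matrix H, as in equation (3).
def diagonalize(B, D, Gw):
    w, V = np.linalg.eigh(Gw)                    # Gw symmetric positive definite
    Gw_inv_half = V @ np.diag(w ** -0.5) @ V.T   # inverse of the unique s.p.d. root (Remark 1)
    M = Gw_inv_half @ B.T @ D @ B @ Gw_inv_half  # symmetric, at least positive semidefinite
    lam, H = np.linalg.eigh(M)                   # H orthogonal, H^T M H = diag(lam)
    return H, np.diag(lam)

B = np.array([[1.0, 0.5], [0.0, 2.0]])           # det B != 0
D = np.diag([2.0, 1.0])                          # positive definite state cost
Gw = np.diag([4.0, 1.0])
H, Lam = diagonalize(B, D, Gw)
```

With a positive definite $D$ and invertible $B$, all diagonal entries of the resulting $\Lambda$ are positive, which corresponds to the Case I studied below.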
Let us make the following state and control transformations in the game (1)-(2):
$$\zeta(t) = R_z(t)z(t), \qquad R_z(t) = B(t)G_w^{-1/2}(t)H(t), \quad t \in [0,t_f], \tag{4}$$
$$w(t) = R_u(t)u(t), \qquad R_u(t) = G_w^{-1/2}(t)H(t), \quad t \in [0,t_f], \tag{5}$$
where $z(t)$ is a new state variable and $u(t)$ is a new control variable.
Since the matrices $B(t)$, $G_w^{-1/2}(t)$ and $H(t)$ are invertible, then the transformations (4) and (5) are invertible.
Lemma 1. 
Let the assumptions A1-A3 be valid. Then, the transformations (4) and (5) convert the system (1) and the cost functional (2) to the following system and cost functional:
$$\frac{dz(t)}{dt} = \mathcal{A}(t)z(t) + u(t) + \mathcal{C}(t)v(t) + f(t), \quad t \in [0,t_f], \qquad z(0) = z_0, \tag{6}$$
$$J(u,v) = \int_0^{t_f} \Big[z^T(t)\Lambda(t)z(t) + \varepsilon^2 u^T(t)u(t) - v^T(t)G_v(t)v(t)\Big]dt, \tag{7}$$
where
$$\mathcal{A}(t) = H^T(t)G_w^{1/2}(t)B^{-1}(t)\Big[A(t)B(t)G_w^{-1/2}(t)H(t) - \frac{d}{dt}\big(B(t)G_w^{-1/2}(t)H(t)\big)\Big], \tag{8}$$
$$\mathcal{C}(t) = H^T(t)G_w^{1/2}(t)B^{-1}(t)C(t), \tag{9}$$
$$f(t) = H^T(t)G_w^{1/2}(t)B^{-1}(t)\phi(t), \tag{10}$$
$$z_0 = H^T(0)G_w^{1/2}(0)B^{-1}(0)\zeta_0. \tag{11}$$
The matrix-valued functions $\mathcal{A}(t)$, $\mathcal{C}(t)$ and the vector-valued function $f(t)$ are twice continuously differentiable in the interval $[0,t_f]$.
Proof. 
Differentiating (4) yields
$$\frac{d\zeta(t)}{dt} = \frac{d}{dt}\big(B(t)G_w^{-1/2}(t)H(t)\big)z(t) + B(t)G_w^{-1/2}(t)H(t)\frac{dz(t)}{dt}, \quad t \in [0,t_f]. \tag{12}$$
Substituting this expression for $d\zeta(t)/dt$, as well as (4) and (5), into the system (1), we obtain
$$\frac{d}{dt}\big(B(t)G_w^{-1/2}(t)H(t)\big)z(t) + B(t)G_w^{-1/2}(t)H(t)\frac{dz(t)}{dt} = A(t)B(t)G_w^{-1/2}(t)H(t)z(t) + B(t)G_w^{-1/2}(t)H(t)u(t) + C(t)v(t) + \phi(t), \quad t \in [0,t_f], \qquad B(0)G_w^{-1/2}(0)H(0)z(0) = \zeta_0. \tag{13}$$
Now, resolving the first equation in (13) with respect to $dz(t)/dt$, the second equation in (13) with respect to $z(0)$ and using the orthogonality of the matrix $H(t)$, we directly prove the equations (6),(8)-(11).
Furthermore, substitution of (4),(5) into the cost functional (2) and use of Remark 1 and the equation (3) immediately yield the cost functional (7).
Finally, the smoothness of the matrices $\mathcal{A}(t)$, $\mathcal{C}(t)$, $f(t)$, claimed in the lemma, is a direct consequence of the equations (8)-(10) and the assumptions A1-A3. □
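For constant coefficient matrices (so that $H$ is constant and the derivative term vanishes), the state-cost identity used in the proof, $\zeta^T D \zeta = z^T \Lambda z$, reduces to $R_z^T D R_z = \Lambda$. A quick numerical check with hypothetical data (numpy assumed):

```python
import numpy as np

# Check (hypothetical constant data) that the transformation (4) turns the state
# cost zeta^T D zeta into z^T Lambda z, i.e. that R_z^T D R_z = Lambda with
# R_z = B G_w^{-1/2} H.
B = np.array([[1.0, 0.5], [0.0, 2.0]])          # det B != 0
D = np.diag([2.0, 1.0])                         # positive definite state cost
Gw_inv_half = np.diag([0.5, 1.0])               # G_w^{-1/2} for G_w = diag(4, 1)
M = Gw_inv_half @ B.T @ D @ B @ Gw_inv_half     # the symmetric matrix of Remark 2
lam, H = np.linalg.eigh(M)                      # H orthogonal, H^T M H = diag(lam)
Rz = B @ Gw_inv_half @ H                        # transformation matrix of (4)
Lambda_check = Rz.T @ D @ Rz                    # should equal diag(lam) = Lambda
```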
Remark 3. 
The cost functional (7) is minimized by the control $u$ (the minimizer’s control) and maximized by the control $v$ (the maximizer’s control). Similarly to the game (1)-(2), we assume that in the game (6)-(7) both players know perfectly all the data appearing in the system (6) and the cost functional (7), as well as the current (state, time)-position of the system (6).
Consider the set $U$ of all functions $u = u(z,t): \mathbb{R}^n \times [0,t_f] \to \mathbb{R}^n$, which are measurable with respect to $t \in [0,t_f]$ for any given $z \in \mathbb{R}^n$ and satisfy the local Lipschitz condition with respect to $z \in \mathbb{R}^n$ uniformly in $t \in [0,t_f]$. Similarly, we consider the set $V$ of all functions $v = v(z,t): \mathbb{R}^n \times [0,t_f] \to \mathbb{R}^m$, which are measurable with respect to $t \in [0,t_f]$ for any given $z \in \mathbb{R}^n$ and satisfy the local Lipschitz condition with respect to $z \in \mathbb{R}^n$ uniformly in $t \in [0,t_f]$.
Similarly to Definitions 1-4, we introduce the following definitions.
Definition 5. 
Let $(UV)$ be the set of all pairs $\big\{u(z,t), v(z,t)\big\}$, $(z,t) \in \mathbb{R}^n \times [0,t_f]$, satisfying the following conditions: (i) $u(z,t) \in U$, $v(z,t) \in V$; (ii) the initial-value problem (6) for $u(t) = u(z,t)$, $v(t) = v(z,t)$ and any $z_0 \in \mathbb{R}^n$ has the unique absolutely continuous solution $z_{uv}(t;z_0)$ in the entire interval $[0,t_f]$; (iii) $u\big(z_{uv}(t;z_0),t\big) \in L^2[0,t_f;\mathbb{R}^n]$; (iv) $v\big(z_{uv}(t;z_0),t\big) \in L^2[0,t_f;\mathbb{R}^m]$. We call $(UV)$ the set of all admissible pairs of the players’ state-feedback controls in the game (6)-(7).
For a given $u(z,t) \in U$, we consider the set
$$K_v\big[u(z,t)\big] = \Big\{v(z,t) \in V: \big\{u(z,t), v(z,t)\big\} \in (UV)\Big\}.$$
Let us denote
$$L_u = \Big\{u(z,t) \in U: K_v\big[u(z,t)\big] \neq \emptyset\Big\}.$$
Similarly, for a given $v(z,t) \in V$, we consider the set
$$K_u\big[v(z,t)\big] = \Big\{u(z,t) \in U: \big\{u(z,t), v(z,t)\big\} \in (UV)\Big\}.$$
Let us denote
$$L_v = \Big\{v(z,t) \in V: K_u\big[v(z,t)\big] \neq \emptyset\Big\}.$$
Definition 6. 
For a given $u(z,t) \in L_u$, the value
$$J_u\big[u(z,t); z_0\big] = \sup_{v(z,t) \in K_v[u(z,t)]} J\big(u(z,t), v(z,t)\big)$$
is called the guaranteed result of $u(z,t)$ in the game (6)-(7).
Definition 7. 
For a given $v(z,t) \in L_v$, the value
$$J_v\big[v(z,t); z_0\big] = \inf_{u(z,t) \in K_u[v(z,t)]} J\big(u(z,t), v(z,t)\big)$$
is called the guaranteed result of $v(z,t)$ in the game (6)-(7).
Definition 8. 
A pair $\big\{u^*(z,t), v^*(z,t)\big\} \in (UV)$ is called a saddle-point solution of the game (6)-(7) if the guaranteed results of $u^*(z,t)$ and $v^*(z,t)$ in this game are equal to each other for all $z_0 \in \mathbb{R}^n$, i.e.,
$$J_u\big[u^*(z,t); z_0\big] = J_v\big[v^*(z,t); z_0\big] \quad \forall\, z_0 \in \mathbb{R}^n.$$
If this equality is valid, then the value
$$J^*(z_0) = J_u\big[u^*(z,t); z_0\big] = J_v\big[v^*(z,t); z_0\big] \tag{14}$$
is called a value of the game (6)-(7). The solution of the initial-value problem (6) with $u(t) = u^*(z,t)$, $v(t) = v^*(z,t)$ is called a saddle-point trajectory of the game (6)-(7).
Let $\zeta_0 \in \mathbb{R}^n$ and $z_0 \in \mathbb{R}^n$ be any prechosen vectors satisfying the equation (11).
The following assertion is a direct consequence of Definition 1, Definition 5 and Lemma 1.
Corollary 1. 
Let the assumptions A1-A3 be valid. Let $\big\{\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big\}$ be an admissible pair of the players’ state-feedback controls in the game (1)-(2), i.e., $\big\{\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big\} \in \widetilde{(WV)}$. Let $\zeta_{wv}(t;\zeta_0)$, $t \in [0,t_f]$ be the solution of the initial-value problem (1) generated by this pair of the players’ controls. Then the pair $\big\{R_u^{-1}(t)\tilde{w}\big(R_z(t)z,t\big), \tilde{v}\big(R_z(t)z,t\big)\big\}$ is an admissible pair of the players’ state-feedback controls in the game (6)-(7), meaning that $\big\{R_u^{-1}(t)\tilde{w}\big(R_z(t)z,t\big), \tilde{v}\big(R_z(t)z,t\big)\big\} \in (UV)$. Furthermore, $\zeta_{wv}(t;\zeta_0) = R_z(t)z_{uv}(t;z_0)$, $t \in [0,t_f]$, where $z_{uv}(t;z_0)$, $t \in [0,t_f]$ is the unique solution of the initial-value problem (6) generated by the players’ controls $u(t) = R_u^{-1}(t)\tilde{w}\big(R_z(t)z,t\big)$, $v(t) = \tilde{v}\big(R_z(t)z,t\big)$. Moreover, $J\big(\tilde{w}(\zeta,t), \tilde{v}(\zeta,t)\big) = J\big(R_u^{-1}(t)\tilde{w}(R_z(t)z,t), \tilde{v}(R_z(t)z,t)\big)$. Vice versa: let $\big\{u(z,t), v(z,t)\big\} \in (UV)$ and $z_{uv}(t;z_0)$, $t \in [0,t_f]$ be the solution of the initial-value problem (6) generated by this pair of the players’ controls. Then $\big\{R_u(t)u\big(R_z^{-1}(t)\zeta,t\big), v\big(R_z^{-1}(t)\zeta,t\big)\big\} \in \widetilde{(WV)}$ and $z_{uv}(t;z_0) = R_z^{-1}(t)\zeta_{wv}(t;\zeta_0)$, $t \in [0,t_f]$, where $\zeta_{wv}(t;\zeta_0)$, $t \in [0,t_f]$ is the unique solution of the initial-value problem (1) generated by the players’ controls $w(t) = R_u(t)u\big(R_z^{-1}(t)\zeta,t\big)$, $v(t) = v\big(R_z^{-1}(t)\zeta,t\big)$. Moreover, $J\big(u(z,t), v(z,t)\big) = J\big(R_u(t)u(R_z^{-1}(t)\zeta,t), v(R_z^{-1}(t)\zeta,t)\big)$.
Lemma 2. 
Let the assumptions A1-A3 be valid. Let the pair $\big\{\tilde{w}^*(\zeta,t), \tilde{v}^*(\zeta,t)\big\}$ be a saddle-point of the game (1)-(2). Then the pair $\big\{R_u^{-1}(t)\tilde{w}^*\big(R_z(t)z,t\big), \tilde{v}^*\big(R_z(t)z,t\big)\big\}$ is a saddle-point of the game (6)-(7). Vice versa: let the pair $\big\{u^*(z,t), v^*(z,t)\big\}$ be a saddle-point of the game (6)-(7). Then the pair $\big\{R_u(t)u^*\big(R_z^{-1}(t)\zeta,t\big), v^*\big(R_z^{-1}(t)\zeta,t\big)\big\}$ is a saddle-point of the game (1)-(2).
Proof. 
We start with the first lemma’s statement. Since the pair $\big\{\tilde{w}^*(\zeta,t), \tilde{v}^*(\zeta,t)\big\}$ is a saddle-point of the game (1)-(2), then this pair of the players’ controls is admissible in this game. Hence, due to Corollary 1, the pair of the players’ controls $\big\{R_u^{-1}(t)\tilde{w}^*\big(R_z(t)z,t\big), \tilde{v}^*\big(R_z(t)z,t\big)\big\}$ is admissible in the game (6)-(7) and the following equality is valid: $J\big(\tilde{w}^*(\zeta,t), \tilde{v}^*(\zeta,t)\big) = J\big(R_u^{-1}(t)\tilde{w}^*(R_z(t)z,t), \tilde{v}^*(R_z(t)z,t)\big)$. Moreover, by Definitions 2-3, Definitions 6-7 and Corollary 1, we obtain
$$J_w\big[\tilde{w}^*(\zeta,t); \zeta_0\big] = J_u\big[R_u^{-1}(t)\tilde{w}^*\big(R_z(t)z,t\big); z_0\big], \qquad J_v\big[\tilde{v}^*(\zeta,t); \zeta_0\big] = J_v\big[\tilde{v}^*\big(R_z(t)z,t\big); z_0\big]. \tag{15}$$
The equalities in (15), along with Definitions 4 and 8, directly yield the first statement of the lemma. The second statement is proven quite similarly. □
Remark 4. 
Due to Lemma 2, the initially formulated game (1)-(2) is equivalent to the new game (6)-(7). Along with this equivalence, due to Lemma 1, the latter game is simpler than the former one. Therefore, in what follows, we deal with the game (6)-(7), which we consider as an original one and call it the Cheap Control Differential Game (CCDG). In the next section, ε-dependent solvability conditions of the CCDG are presented.
Remark 5. 
By the nonsingular control transformation $\breve{u}(t) = \varepsilon u(t)$ ($\breve{u}(t)$ is a new control of the minimizer), the CCDG can be converted to the equivalent zero-sum differential game consisting of the dynamic system
$$\varepsilon\frac{dz(t)}{dt} = \varepsilon\big[\mathcal{A}(t)z(t) + \mathcal{C}(t)v(t) + f(t)\big] + \breve{u}(t), \quad t \in [0,t_f], \qquad z(0) = z_0,$$
and the cost functional
$$\breve{J}(\breve{u},v) = \int_0^{t_f} \Big[z^T(t)\Lambda(t)z(t) + \breve{u}^T(t)\breve{u}(t) - v^T(t)G_v(t)v(t)\Big]dt.$$
In this game, the dynamic equation is singularly perturbed and the state variable $z(t)$ is a fast state variable (see, e.g., [37]). Therefore, we call the state variable of the CCDG a fast state variable. Thus, the cost functional $\breve{J}(\breve{u},v)$ contains the cost of the fast state variable $z(t)$, the cost of the maximizer’s control $v(t)$ and the non-small cost of the minimizer’s control $\breve{u}(t)$, while the cost functional $J(u,v)$ of the CCDG contains the cost of the fast state variable $z(t)$, the cost of the maximizer’s control $v(t)$ and the small cost of the minimizer’s control $u(t)$.

4. Solvability Conditions of the CCDG

Consider the following matrices:
$$S_u(\varepsilon) = \frac{1}{\varepsilon^2}I_n, \qquad S_v(t) = \mathcal{C}(t)G_v^{-1}(t)\mathcal{C}^T(t), \qquad S(t,\varepsilon) = S_u(\varepsilon) - S_v(t), \tag{16}$$
where $t \in [0,t_f]$, $\varepsilon > 0$.
Using the data of the CCDG (see the equations (6)-(7)) and the matrices in (16), we construct the terminal-value problem for the Riccati matrix differential equation
$$\frac{dK}{dt} = -K\mathcal{A}(t) - \mathcal{A}^T(t)K + KS(t,\varepsilon)K - \Lambda(t), \quad t \in [0,t_f], \qquad K(t_f) = 0. \tag{17}$$
In what follows, we assume
A4. For a given $\varepsilon > 0$, the terminal-value problem (17) has the symmetric solution $K = K(t,\varepsilon)$ in the entire interval $[0,t_f]$.
Remark 6. 
Since the right-hand side of the differential equation in (17) is a smooth function with respect to the unknown matrix $K$, then the aforementioned solution $K = K(t,\varepsilon)$ is unique.
Using the assumption A4, as well as the data of the CCDG and the equation (16), we construct the terminal-value problem for the linear vector-valued differential equation
$$\frac{dq}{dt} = -\big[\mathcal{A}^T(t) - K(t,\varepsilon)S(t,\varepsilon)\big]q - 2K(t,\varepsilon)f(t), \quad t \in [0,t_f], \qquad q(t_f) = 0. \tag{18}$$
The problem (18) has the unique solution $q = q(t,\varepsilon)$ in the entire interval $[0,t_f]$. Using this solution, we construct the terminal-value problem for the scalar differential equation
$$\frac{ds}{dt} = \frac{1}{4}q^T(t,\varepsilon)S(t,\varepsilon)q(t,\varepsilon) - f^T(t)q(t,\varepsilon), \quad t \in [0,t_f], \qquad s(t_f) = 0. \tag{19}$$
This problem has the unique solution $s = s(t,\varepsilon)$ in the entire interval $[0,t_f]$.
Consider the functions
$$u_\varepsilon^*(z,t) = -\frac{1}{\varepsilon^2}K(t,\varepsilon)z - \frac{1}{2\varepsilon^2}q(t,\varepsilon) \in U, \quad (z,t) \in \mathbb{R}^n \times [0,t_f], \tag{20}$$
and
$$v_\varepsilon^*(z,t) = G_v^{-1}(t)\mathcal{C}^T(t)K(t,\varepsilon)z + \frac{1}{2}G_v^{-1}(t)\mathcal{C}^T(t)q(t,\varepsilon) \in V, \quad (z,t) \in \mathbb{R}^n \times [0,t_f]. \tag{21}$$
Based on the results of [14,40,41], we immediately have the following assertion.
Proposition 1. 
Let the assumptions A1-A4 be valid. Then the pair $\big\{u_\varepsilon^*(z,t), v_\varepsilon^*(z,t)\big\}$ is the saddle point of the CCDG. The value of this game has the form
$$J_\varepsilon^*(z_0) = J\big(u_\varepsilon^*(z,t), v_\varepsilon^*(z,t)\big) = z_0^T K(0,\varepsilon)z_0 + z_0^T q(0,\varepsilon) + s(0,\varepsilon). \tag{22}$$
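Proposition 1 can be exercised numerically. The sketch below (hypothetical scalar data, $n = m = 1$; the constants and step count are illustrative choices, not taken from the paper) integrates the terminal-value problems (17)-(19) backward in time with a fixed-step RK4 scheme and evaluates the game value (22).

```python
# Numerical sketch of the solvability conditions (hypothetical scalar data, n = m = 1):
# integrate the terminal-value problems (17)-(19) backward in time with a fixed-step
# RK4 scheme and evaluate the game value (22).
eps, tf = 0.1, 1.0
A, C, Gv, lam, f = 1.0, 1.0, 1.0, 1.0, 0.5     # constant illustrative coefficients
S = 1.0 / eps**2 - C * (1.0 / Gv) * C          # S(t, eps) = S_u(eps) - S_v(t), eq. (16)

def rhs(y):
    K, q, s = y
    dK = -2.0 * A * K + S * K * K - lam        # scalar form of the Riccati equation (17)
    dq = -(A - K * S) * q - 2.0 * K * f        # linear equation (18)
    ds = 0.25 * S * q * q - f * q              # scalar equation (19)
    return (dK, dq, ds)

def integrate_backward(y, t_from, t_to, steps=2000):
    h = (t_to - t_from) / steps                # h < 0: integration from t_f down to 0
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
                  for i in range(3))
    return y

K0, q0, s0 = integrate_backward((0.0, 0.0, 0.0), tf, 0.0)
z0 = 1.0
J_value = K0 * z0**2 + q0 * z0 + s0            # game value (22)
```

With these constants the algebraic (outer) Riccati equation $SK^2 - 2AK - \lambda = 0$ has the attracting root $K = 1/9$, which $K(0,\varepsilon)$ approaches away from the boundary layer at $t = t_f$; this is consistent with the expansion of Section 5.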
In the forthcoming sections, we derive an asymptotic solution of the CCDG with respect to the small parameter $\varepsilon > 0$ in the following two cases:
$$\text{Case I:} \quad \lambda_i(t) > 0, \quad i = 1,2,\ldots,n, \quad t \in [0,t_f], \tag{23}$$
$$\text{Case II:} \quad \lambda_j(t) > 0, \quad j = 1,\ldots,l, \quad 1 \leq l < n, \quad t \in [0,t_f]; \qquad \lambda_k(t) \equiv 0, \quad k = l+1,\ldots,n, \quad t \in [0,t_f], \tag{24}$$
where $\lambda_i(t)$, $(i = 1,2,\ldots,n)$ are the entries of the diagonal matrix $\Lambda(t)$ (see the equations (3) and (7)).
We start the asymptotic solution of the CCDG with the simpler case, namely, case I.

5. Asymptotic Solution of the CCDG in the Case I

5.1. Transformation of the Terminal-Value Problems (17)-(19)

First of all, let us note the following. Due to the equation (16), the differential equations in the problems (17)-(19) have singularities with respect to $\varepsilon$ in their right-hand sides for $\varepsilon = 0$. To remove these singularities, we look for the solutions of the problems (17) and (18) in the form
$$K(t,\varepsilon) = \varepsilon P(t,\varepsilon), \quad t \in [0,t_f], \tag{25}$$
$$q(t,\varepsilon) = \varepsilon p(t,\varepsilon), \quad t \in [0,t_f], \tag{26}$$
where $P(t,\varepsilon)$ and $p(t,\varepsilon)$ are a new unknown matrix-valued function and a new unknown vector-valued function, respectively.
Substitution of (25)-(26) into the problems (17)-(19) yields the following new terminal-value problems:
$$\varepsilon\frac{dP(t,\varepsilon)}{dt} = -\varepsilon P(t,\varepsilon)\mathcal{A}(t) - \varepsilon\mathcal{A}^T(t)P(t,\varepsilon) + P(t,\varepsilon)\big[I_n - \varepsilon^2 S_v(t)\big]P(t,\varepsilon) - \Lambda(t), \quad t \in [0,t_f], \qquad P(t_f,\varepsilon) = 0, \tag{27}$$
$$\varepsilon\frac{dp(t,\varepsilon)}{dt} = -\varepsilon\mathcal{A}^T(t)p(t,\varepsilon) + P(t,\varepsilon)\big[I_n - \varepsilon^2 S_v(t)\big]p(t,\varepsilon) - 2\varepsilon P(t,\varepsilon)f(t), \quad t \in [0,t_f], \qquad p(t_f,\varepsilon) = 0, \tag{28}$$
$$\frac{ds(t,\varepsilon)}{dt} = \frac{1}{4}p^T(t,\varepsilon)\big[I_n - \varepsilon^2 S_v(t)\big]p(t,\varepsilon) - \varepsilon f^T(t)p(t,\varepsilon), \quad t \in [0,t_f], \qquad s(t_f,\varepsilon) = 0. \tag{29}$$
Moreover, substitution of (25)-(26) into the expressions for the components of the CCDG saddle point and into the expression for the CCDG value (see the equations (20),(21) and (22)) yields the following new expressions for the components of the saddle point and for the game value:
$$u_\varepsilon^*(z,t) = -\frac{1}{\varepsilon}P(t,\varepsilon)z - \frac{1}{2\varepsilon}p(t,\varepsilon) \in U, \quad (z,t) \in \mathbb{R}^n \times [0,t_f], \tag{30}$$
$$v_\varepsilon^*(z,t) = \varepsilon G_v^{-1}(t)\mathcal{C}^T(t)P(t,\varepsilon)z + \frac{\varepsilon}{2}G_v^{-1}(t)\mathcal{C}^T(t)p(t,\varepsilon) \in V, \quad (z,t) \in \mathbb{R}^n \times [0,t_f], \tag{31}$$
$$J_\varepsilon^*(z_0) = J\big(u_\varepsilon^*(z,t), v_\varepsilon^*(z,t)\big) = \varepsilon z_0^T P(0,\varepsilon)z_0 + \varepsilon z_0^T p(0,\varepsilon) + s(0,\varepsilon). \tag{32}$$

5.2. Asymptotic Solution of the Terminal-Value Problem (27)

The problem (27) is a singularly perturbed terminal-value problem. Based on the Boundary Functions Method [37], we look for the first-order asymptotic solution of (27) in the form
$$P_1(t,\varepsilon) = P_0^o(t) + P_0^b(\tau) + \varepsilon\big[P_1^o(t) + P_1^b(\tau)\big], \tag{33}$$
where
$$\tau = \frac{t - t_f}{\varepsilon}. \tag{34}$$
Remark 7. 
In (33), the terms with the superscript $o$ constitute the so-called outer solution, while the terms with the superscript $b$ are the boundary corrections in the left-hand neighborhood of $t = t_f$. Equations and conditions for the asymptotic solution terms are obtained by substituting $P_1(t,\varepsilon)$ into the problem (27) instead of $P(t,\varepsilon)$ and equating the coefficients of the same powers of $\varepsilon$ on both sides of the resulting equations, separately the coefficients depending on $t$ and on $\tau$. Additionally, we note the following. For any $t \in [0,t_f)$ and $\varepsilon > 0$, $\tau < 0$. Moreover, if $\varepsilon \to +0$, then, for any $t \in [0,t_f)$, $\tau \to -\infty$.

5.2.1. Obtaining the Outer Solution Term $P_0^o(t)$

Due to Remark 7, we have the following matrix Riccati algebraic equation for $P_0^o(t)$:
$$0 = \big(P_0^o(t)\big)^2 - \Lambda(t), \quad t \in [0,t_f], \tag{35}$$
yielding
$$P_0^o(t) = \Lambda^{1/2}(t) = \mathrm{diag}\big(\lambda_1^{1/2}(t), \lambda_2^{1/2}(t), \ldots, \lambda_n^{1/2}(t)\big), \quad t \in [0,t_f]. \tag{36}$$
Remark 8. 
Due to Remark 2 and the condition (23), the matrix $P_0^o(t)$ is positive definite for all $t \in [0,t_f]$. Moreover, the matrix-valued function $P_0^o(t)$ is three times continuously differentiable in the interval $[0,t_f]$.

5.2.2. Obtaining the Boundary Correction $P_0^b(\tau)$

Taking into account Remark 7 and the equations (34),(36), we directly derive the following terminal-value problem for $P_0^b(\tau)$:
$$\frac{dP_0^b(\tau)}{d\tau} = \Lambda^{1/2}(t_f)P_0^b(\tau) + P_0^b(\tau)\Lambda^{1/2}(t_f) + \big(P_0^b(\tau)\big)^2, \quad \tau \leq 0, \qquad P_0^b(0) = -\Lambda^{1/2}(t_f). \tag{37}$$
The differential equation in (37) is a matrix Bernoulli differential equation [42]. Using this feature, we obtain the solution of the problem (37):
$$P_0^b(\tau) = -2\Lambda^{1/2}(t_f)\exp\big(2\Lambda^{1/2}(t_f)\tau\big)\Big[I_n + \exp\big(2\Lambda^{1/2}(t_f)\tau\big)\Big]^{-1}, \quad \tau \leq 0. \tag{38}$$
Due to the positive definiteness of Λ 1 / 2 ( t f ) , the matrix-valued function P 0 b ( τ ) is exponentially decaying for τ , i.e.,
P 0 b ( τ ) a exp ( 2 β τ ) , τ 0 ,
where a > 0 is some constant;
β = min_{i ∈ {1, 2, ..., n}} λ_i^{1/2}(t_f) > 0 .
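Since Λ(t_f) is diagonal, the boundary correction (38) and the decay estimate (39)-(40) can be verified entrywise. The following Python sketch uses hypothetical eigenvalues; note the leading minus sign of P_0^b, which is required by the terminal condition P_1(t_f, ε) = 0. It checks the Bernoulli equation (37) by central finite differences and the exponential bound (39):

```python
import numpy as np

lam_tf = np.array([1.0, 4.0, 9.0])       # hypothetical eigenvalues lambda_i(t_f) > 0
a = np.sqrt(lam_tf)                      # diagonal of Lambda^{1/2}(t_f)

def P0b(tau):
    """Boundary correction (38): diagonal entries
    -2 a_i exp(2 a_i tau) / (1 + exp(2 a_i tau)), tau <= 0."""
    e = np.exp(2.0 * a * tau)
    return np.diag(-2.0 * a * e / (1.0 + e))

# Terminal condition of (37): P_0^b(0) = -Lambda^{1/2}(t_f).
assert np.allclose(P0b(0.0), -np.diag(a))

# Finite-difference check of the Bernoulli equation (37):
# dP_0^b/dtau = Lambda^{1/2} P_0^b + P_0^b Lambda^{1/2} + (P_0^b)^2.
tau, h = -0.7, 1e-6
lhs = (P0b(tau + h) - P0b(tau - h)) / (2.0 * h)
L12 = np.diag(a)
rhs = L12 @ P0b(tau) + P0b(tau) @ L12 + P0b(tau) @ P0b(tau)
assert np.abs(lhs - rhs).max() < 1e-5

# Exponential decay (39): entries are bounded by 2 a_max exp(2 beta tau), beta = min_i a_i.
beta = a.min()
for s in (-1.0, -5.0, -10.0):
    assert np.abs(P0b(s)).max() <= 2.0 * a.max() * np.exp(2.0 * beta * s)
```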

5.2.3. Obtaining the Outer Solution Term P 1 o ( t )

Using the equation (36) and Remark 8, we have (similarly to (35)) the matrix linear algebraic equation for P 1 o ( t )
d Λ^{1/2}(t) / dt = − Λ^{1/2}(t) A(t) − A^T(t) Λ^{1/2}(t) + Λ^{1/2}(t) P_1^o(t) + P_1^o(t) Λ^{1/2}(t) , t ∈ [0, t_f] .
Using the results of [43] and taking into account the equations (23),(36), we obtain the solution of the equation (41)
P_1^o(t) = ∫_0^{+∞} exp( − Λ^{1/2}(t) ξ ) [ d Λ^{1/2}(t) / dt + Λ^{1/2}(t) A(t) + A^T(t) Λ^{1/2}(t) ] exp( − Λ^{1/2}(t) ξ ) dξ , t ∈ [0, t_f] .
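The equation (41) is a Lyapunov equation with the diagonal positive definite coefficient Λ^{1/2}(t), so the integral (42) reduces entrywise to Q_ij / (λ_i^{1/2} + λ_j^{1/2}), where Q denotes the bracketed right-hand side of (42). The following Python sketch (with a hypothetical symmetric Q) compares this entrywise solution with a direct quadrature of (42):

```python
import numpy as np

# Hypothetical data at a fixed t: diagonal Lambda^{1/2}(t) and a symmetric right-hand
# side Q(t) = dLambda^{1/2}(t)/dt + Lambda^{1/2}(t) A(t) + A^T(t) Lambda^{1/2}(t).
a = np.array([1.0, 2.0, 3.0])                # sqrt(lambda_i(t))
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 3))
Q = Q + Q.T

# Entrywise solution of the Lyapunov equation (41): X_ij = Q_ij / (a_i + a_j).
X = Q / (a[:, None] + a[None, :])

# Integral representation (42): X = int_0^inf exp(-L xi) Q exp(-L xi) dxi, L = diag(a).
xi = np.linspace(0.0, 20.0, 200001)
h = xi[1] - xi[0]
w = np.full(xi.size, h)
w[0] = w[-1] = h / 2.0                       # composite trapezoidal weights
E = np.exp(-np.outer(a, xi))                 # E[i, k] = exp(-a_i * xi_k)
X_quad = (Q[:, :, None] * E[:, None, :] * E[None, :, :]) @ w

assert np.abs(X - X_quad).max() < 1e-4
# Residual check of (41): diag(a) X + X diag(a) = Q.
assert np.allclose(np.diag(a) @ X + X @ np.diag(a), Q)
```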

5.2.4. Obtaining the Boundary Correction P 1 b ( τ )

Using Remark 7 and the equations (34),(36),(38),(42), we derive (similarly to the equation (37)) the following terminal-value problem for P 1 b ( τ ) :
d P_1^b(τ) / dτ = ( P_0^o(t_f) + P_0^b(τ) ) P_1^b(τ) + P_1^b(τ) ( P_0^o(t_f) + P_0^b(τ) ) + Ψ(τ) , τ ≤ 0 , P_1^b(0) = − P_1^o(t_f) ,
where
Ψ(τ) = − P_0^b(τ) A(t_f) − A^T(t_f) P_0^b(τ) + τ ( d P_0^o(t) / dt )|_{t = t_f} P_0^b(τ) + τ P_0^b(τ) ( d P_0^o(t) / dt )|_{t = t_f} + P_0^b(τ) P_1^o(t_f) + P_1^o(t_f) P_0^b(τ) .
Due to the inequality (39), the matrix-valued function Ψ ( τ ) is estimated as:
‖ Ψ(τ) ‖ ≤ a_1 exp( β τ ) , τ ≤ 0 ,
where a 1 > 0 is some constant; the constant β is given in (40).
Solving the problem (43) and using the results of [44] and the symmetry of the matrices P 0 o ( t ) , P 0 b ( τ ) , we obtain
P_1^b(τ) = − Φ(0, τ) P_1^o(t_f) Φ^T(0, τ) + ∫_0^τ Φ(σ, τ) Ψ(σ) Φ^T(σ, τ) dσ , τ ≤ 0 ,
where, for any τ 0 , the n × n matrix-valued function Φ ( σ , τ ) is the unique solution of the problem
d Φ(σ, τ) / dσ = − ( P_0^o(t_f) + P_0^b(σ) ) Φ(σ, τ) , σ ∈ [τ, 0] , Φ(τ, τ) = I_n .
Solving this problem and taking into account the expressions for P 0 o ( t ) and P 0 b ( τ ) (see the equations (36) and (38)), we have
Φ(σ, τ) = exp( Λ^{1/2}(t_f) (τ − σ) ) Θ^{−1}(τ) Θ(σ) , 0 ≥ σ ≥ τ > −∞ ,
where
Θ(χ) = I_n + exp( 2 Λ^{1/2}(t_f) χ ) , χ ≤ 0 .
The matrix-valued function Φ ( σ , τ ) satisfies the inequality
‖ Φ(σ, τ) ‖ ≤ a_2 exp( β (τ − σ) ) , 0 ≥ σ ≥ τ > −∞ ,
where a 2 > 0 is some constant; the constant β is given in (40).
Using the equation (46) and the inequalities (45),(49) yields the following estimate for P 1 b ( τ ) :
‖ P_1^b(τ) ‖ ≤ a_2^2 ‖ P_1^o(t_f) ‖ exp( 2 β τ ) + ( a_1 / β ) exp( β τ ) ( 1 − exp( β τ ) ) , τ ≤ 0 ,
meaning that P_1^b(τ) decays exponentially as τ → −∞.
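Since all the matrices entering (47)-(48) are diagonal, Φ(σ, τ) can also be verified entrywise. The following Python sketch (hypothetical eigenvalues) checks by finite differences that the formula (48) satisfies the transition problem (47), written here with the stable sign dΦ/dσ = −(P_0^o(t_f) + P_0^b(σ))Φ, and the exponential bound (49) with a_2 = 2:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])            # sqrt(lambda_i(t_f)), hypothetical values
beta = a.min()

def theta(chi):
    return 1.0 + np.exp(2.0 * a * chi)   # diagonal of Theta(chi) in (48)

def P0b(tau):                            # diagonal of the boundary correction (38)
    e = np.exp(2.0 * a * tau)
    return -2.0 * a * e / (1.0 + e)

def Phi(sigma, tau):
    """Diagonal of (48): exp(Lambda^{1/2}(t_f)(tau - sigma)) Theta^{-1}(tau) Theta(sigma)."""
    return np.exp(a * (tau - sigma)) * theta(sigma) / theta(tau)

tau, sigma, h = -3.0, -1.0, 1e-6
# Finite-difference check of (47): dPhi/dsigma = -(P_0^o(t_f) + P_0^b(sigma)) Phi.
lhs = (Phi(sigma + h, tau) - Phi(sigma - h, tau)) / (2.0 * h)
rhs = -(a + P0b(sigma)) * Phi(sigma, tau)
assert np.abs(lhs - rhs).max() < 1e-5
assert np.allclose(Phi(tau, tau), 1.0)   # Phi(tau, tau) = I_n

# Exponential bound (49) with a_2 = 2: Phi(sigma, tau) <= 2 exp(beta (tau - sigma)).
for s in (-0.5, -1.0, -2.5):
    assert np.all(Phi(s, tau) <= 2.0 * np.exp(beta * (tau - s)))
```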

5.2.5. Justification of the Asymptotic Solution to the Problem (27)

Similarly to the results of [14] (Lemma 4.2), we have the following lemma.
Lemma 3. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, there exists a positive number ε_10 such that, for all ε ∈ (0, ε_10], the terminal-value problem (27) has the unique solution P(t, ε) in the entire interval [0, t_f]. This solution satisfies the inequality
‖ P(t, ε) − P_1(t, ε) ‖ ≤ a_10 ε^2 , t ∈ [0, t_f] ,
where P 1 ( t , ε ) is given by (33); a 10 > 0 is some constant independent of ε.
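Lemma 3 can be illustrated numerically on a scalar constant-coefficient instance. The sketch below assumes that the Riccati problem (27) has the form ε dP/dt = −εPA − εA^T P + P^2 − ε^2 P S_v P − Λ, P(t_f, ε) = 0, consistent with the limit equations (35) and (37); for constant coefficients the first-order terms simplify to P_0^o = λ^{1/2}, P_1^o = A and Ψ ≡ 0. All numerical data are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar constant-coefficient toy instance (hypothetical data):
lam, A, Sv, tf = 1.0, 0.5, 0.3, 1.0

def exact_P(eps, t_grid):
    # Assumed form of the Riccati problem (27), integrated backward from t = tf:
    # eps dP/dt = -2 eps A P + P^2 - eps^2 Sv P^2 - lam,  P(tf) = 0.
    rhs = lambda t, P: (-2*eps*A*P + P**2 - eps**2*Sv*P**2 - lam) / eps
    sol = solve_ivp(rhs, [tf, 0.0], [0.0], t_eval=t_grid[::-1],
                    rtol=1e-10, atol=1e-12, method="Radau")
    return sol.y[0][::-1]

def asymptotic_P1(eps, t_grid):
    a = np.sqrt(lam)
    tau = (t_grid - tf) / eps
    e = np.exp(2*a*tau)
    P0o = a                              # outer term (36)
    P0b = -2*a*e / (1 + e)               # boundary correction (38)
    P1o = A                              # (42): the Lyapunov equation gives P_1^o = A here
    Phi0 = 2*np.exp(a*tau) / (1 + e)     # Phi(0, tau) from (48)
    P1b = -A * Phi0**2                   # (46) with Psi = 0 for constant coefficients
    return P0o + P0b + eps*(P1o + P1b)

t_grid = np.linspace(0.0, tf, 2001)
errs = []
for eps in (0.05, 0.025):
    errs.append(np.abs(exact_P(eps, t_grid) - asymptotic_P1(eps, t_grid)).max())
assert errs[0] < 0.05                    # the error is already small at eps = 0.05
assert errs[1] < errs[0]                 # and decreases with eps (O(eps^2) by Lemma 3)
```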

5.3. Asymptotic Solution of the Terminal-Value Problem (28)

Like the problem (27), the problem (28) is a singularly perturbed terminal-value problem. Based on the Boundary Functions Method [37], we look for the first-order asymptotic solution of (28) in the form
p_1(t, ε) = p_0^o(t) + p_0^b(τ) + ε [ p_1^o(t) + p_1^b(τ) ] ,
where the variable τ is given by (34); the terms in (51) have the same meaning as the corresponding terms in (33). These terms are obtained by substituting p_1(t, ε) and P_1(t, ε) into the problem (28) instead of p(t, ε) and P(t, ε), respectively, and equating coefficients of like powers of ε on both sides of the resulting equations, separately for the terms depending on t and on τ.

5.3.1. Obtaining the Outer Solution Term p 0 o ( t )

For this term, we have the following linear algebraic equation:
0 = P 0 o ( t ) p 0 o ( t ) , t [ 0 , t f ] .
Since P_0^o(t) is an invertible matrix for all t ∈ [0, t_f] (see the equations (23) and (36)), the equation (52) yields
p_0^o(t) ≡ 0 , t ∈ [0, t_f] .

5.3.2. Obtaining the Boundary Correction p 0 b ( τ )

Taking into account the equations (34),(53), we directly obtain the following terminal-value problem for p 0 b ( τ ) :
d p_0^b(τ) / dτ = ( P_0^o(t_f) + P_0^b(τ) ) p_0^b(τ) , τ ≤ 0 , p_0^b(0) = 0 ,
yielding
p_0^b(τ) ≡ 0 , τ ≤ 0 .

5.3.3. Obtaining the Outer Solution Term p 1 o ( t )

Using the equation (53), we have (similarly to (52)) the linear algebraic equation for p 1 o ( t )
0 = P_0^o(t) p_1^o(t) − 2 P_0^o(t) f(t) , t ∈ [0, t_f] .
Since P_0^o(t) is an invertible matrix for all t ∈ [0, t_f],
p 1 o ( t ) = 2 f ( t ) , t [ 0 , t f ] .

5.3.4. Obtaining the Boundary Correction p 1 b ( τ )

Using the equations (34),(53),(55),(56), we derive (similarly to the equation (54)) the following terminal-value problem for p 1 b ( τ ) :
d p_1^b(τ) / dτ = ( P_0^o(t_f) + P_0^b(τ) ) p_1^b(τ) , τ ≤ 0 , p_1^b(0) = − 2 f(t_f) .
Solving the problem (57), we have
p_1^b(τ) = − 2 Φ(0, τ) f(t_f) , τ ≤ 0 ,
where the matrix-valued function Φ ( σ , τ ) is given by (47).
Thus, using the equations (47),(48),(58), we obtain after routine algebra
p_1^b(τ) = − 4 exp( Λ^{1/2}(t_f) τ ) Θ^{−1}(τ) f(t_f) , τ ≤ 0 ,
which yields the inequality
‖ p_1^b(τ) ‖ ≤ b exp( β τ ) , τ ≤ 0 .
In this inequality, b > 0 is some constant; the constant β is given by (40).
Thus, p_1^b(τ) decays exponentially as τ → −∞.
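As with P_0^b, the boundary correction (59) can be checked entrywise; the leading minus sign is required by the terminal condition p_1(t_f, ε) = 0. The following Python sketch (hypothetical data) verifies the terminal-value problem (57) by finite differences and the decay estimate (60):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])            # sqrt(lambda_i(t_f)), hypothetical values
f_tf = np.array([0.5, -1.0, 2.0])        # hypothetical value of f(t_f)

def theta(chi):
    return 1.0 + np.exp(2.0 * a * chi)   # diagonal of Theta(chi)

def P0b(tau):                            # diagonal of the boundary correction (38)
    e = np.exp(2.0 * a * tau)
    return -2.0 * a * e / (1.0 + e)

def p1b(tau):
    """(59): -4 exp(Lambda^{1/2}(t_f) tau) Theta^{-1}(tau) f(t_f)."""
    return -4.0 * np.exp(a * tau) / theta(tau) * f_tf

# Terminal condition of (57): p_1^b(0) = -2 f(t_f).
assert np.allclose(p1b(0.0), -2.0 * f_tf)

# Finite-difference check of (57): dp_1^b/dtau = (P_0^o(t_f) + P_0^b(tau)) p_1^b.
tau, h = -0.8, 1e-6
lhs = (p1b(tau + h) - p1b(tau - h)) / (2.0 * h)
rhs = (a + P0b(tau)) * p1b(tau)
assert np.abs(lhs - rhs).max() < 1e-5

# Exponential decay (60): ||p_1^b(tau)|| <= b exp(beta tau), beta = min_i sqrt(lambda_i(t_f)).
beta = a.min()
for s in (-1.0, -4.0, -8.0):
    assert np.abs(p1b(s)).max() <= 4.0 * np.abs(f_tf).max() * np.exp(beta * s)
```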

5.3.5. Justification of the Asymptotic Solution to the Problem (28)

Using the equations (51),(53),(55), we can rewrite the vector-valued function p 1 ( t , ε ) as:
p_1(t, ε) = ε [ p_1^o(t) + p_1^b(τ) ] .
Lemma 4. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, for all ε ( 0 , ε 10 ] ( ε 10 > 0 is introduced in Lemma 3), the terminal-value problem (28) has the unique solution p ( t , ε ) in the entire interval [ 0 , t f ] . Moreover, there exists a positive number ε 20 ε 10 such that, for all ε ( 0 , ε 20 ] , this solution satisfies the inequality
‖ p(t, ε) − p_1(t, ε) ‖ ≤ b_10 ε^2 , t ∈ [0, t_f] ,
where p 1 ( t , ε ) is given by (61); b 10 > 0 is some constant independent of ε.
Proof. 
First of all, let us note that the existence and the uniqueness of the solution to the problem (28) for all ε ( 0 , ε 10 ] directly follow from its linearity and from the existence and the uniqueness of the solution to the problem (27) (see Lemma 3).
We proceed to the proof of the inequality (62). Let us make the following transformation of the state variable in the problem (28):
p ( t , ε ) = p 1 ( t , ε ) + δ p ( t , ε ) , t [ 0 , t f ] , ε ( 0 , ε 10 ] ,
where δ p ( t , ε ) is a new state variable.
The transformation (63) converts the problem (28) to the equivalent terminal-value problem with respect to δ p ( t , ε )
ε d δ p ( t , ε ) d t = P 1 ( t , ε ) δ p ( t , ε ) + g 1 δ p ( t , ε ) , t , ε + g 2 ( t , ε ) + g 3 ( t , ε ) , t [ 0 , t f ] , δ p ( t f , ε ) = 0 ,
where P 1 ( t , ε ) is given by (33), τ is given by (34),
g 1 δ p ( t , ε ) , t , ε = ε A T ( t ) Δ P ( t , ε ) + ε 2 P ( t , ε ) S v ( t ) δ p ( t , ε ) , g 2 ( t , ε ) = ε 2 d p 1 o ( t ) d t A T ( t ) p 1 ( t , ε ) , g 3 ( t , ε ) = ε d p 1 b ( τ ) d τ + P ( t , ε ) I n ε 2 S v ( t ) p 1 o ( t ) + p 1 b ( τ ) 2 P ( t , ε ) f ( t ) , Δ P ( t , ε ) = P ( t , ε ) P 1 ( t , ε ) .
Using Lemma 3, as well as the equations (56),(59),(61), we directly obtain the following estimates of g 1 δ p ( t , ε ) , t , ε and g 2 ( t , ε ) for all ε ( 0 , ε 10 ] :
g 1 δ p ( t , ε ) , t , ε b 1 ε δ p ( t , ε ) , t [ 0 , t f ] , g 2 ( t , ε ) b 2 ε 2 , t [ 0 , t f ] ,
where b 1 > 0 and b 2 > 0 are some constants independent of ε .
Now, let us estimate g 3 ( t , ε ) . Using Lemma 3, as well as the equations (56),(57),(59),(61) and the inequalities (39),(60), we have for all ε ( 0 , ε 10 ]
g 3 ( t , ε ) ε 3 P ( t , ε ) S v ( t ) p 1 o ( t ) + p 1 b ( τ ) + ε d p 1 b ( τ ) d τ + P ( t , ε ) p 1 o ( t ) + p 1 b ( τ ) 2 P ( t , ε ) f ( t ) ε 3 P ( t , ε ) S v ( t ) p 1 o ( t ) + p 1 b ( τ ) + ε P 0 o ( t ) P 0 o ( t f ) p 1 b ( τ ) + ε 2 P 1 o ( t ) + P 1 b ( τ ) p 1 o ( t ) + p 1 b ( τ ) 2 f ( t ) + ε Δ P ( t , ε ) p 1 o ( t ) + p 1 b ( τ ) 2 f ( t ) , t [ 0 , t f ] .
To complete the estimate of g_3(t, ε), one has to estimate the expression ( P_0^o(t) − P_0^o(t_f) ) p_1^b(τ). Using the smoothness of P_0^o(t) (see Remark 8) and the equation (34), we obtain for any t ∈ [0, t_f]
P_0^o(t) − P_0^o(t_f) = P_0^o(t_f + ε τ) − P_0^o(t_f) = ε τ ( d P_0^o(χ) / dχ )|_{χ = t_1(t)} , t_1(t) ∈ [t, t_f] .
The latter, along with the boundedness of d P 0 o ( t ) d t in the interval [ 0 , t f ] and the inequality (60), yields
‖ ( P_0^o(t) − P_0^o(t_f) ) p_1^b(τ) ‖ = ε ‖ ( d P_0^o(s) / ds )|_{s = t_1(t)} τ p_1^b(τ) ‖ ≤ b̄_3 ε , t ∈ [0, t_f] , ε ∈ (0, ε_10] ,
where b ¯ 3 > 0 is some constant independent of ε .
Thus, the inequalities (66),(67) and Lemma 3 immediately imply
g 3 ( t , ε ) b 3 ε 2 , t [ 0 , t f ] , ε ( 0 , ε 10 ] ,
where b 3 > 0 is some constant independent of ε .
The problem (64) can be rewritten in the equivalent integral form as:
δ_p(t, ε) = (1/ε) ∫_{t_f}^t Ω(t, σ, ε) [ g_1( δ_p(σ, ε), σ, ε ) + g_2(σ, ε) + g_3(σ, ε) ] dσ ,
where for any given σ [ t , t f ] and ε ( 0 , ε 10 ] , the n × n -matrix-valued function Ω ( t , σ , ε ) is the unique solution of the terminal-value problem
ε d Ω ( t , σ , ε ) d t = P 1 ( t , ε ) Ω ( t , σ , ε ) , t [ 0 , σ ] , Ω ( σ , σ , ε ) = I n .
Based on the results of [45] and using the inequalities in (23) and the equation (36), we obtain the following estimate of Ω ( t , σ , ε ) for all 0 t σ t f :
‖ Ω(t, σ, ε) ‖ ≤ b_Ω exp( β_Ω (t − σ) / ε ) , ε ∈ (0, ε̄_20] ,
where 0 < ε ¯ 20 ε 10 is some sufficiently small number; b Ω > 0 and β Ω > 0 are some constants independent of ε .
Applying the method of successive approximations to the equation (69), we construct the following sequence of vector-valued functions { δ_{p,α}(t, ε) }_{α=0}^{+∞}:
δ p , α + 1 ( t , ε ) = 1 ε t f t Ω ( t , σ , ε ) g 1 δ p , α ( σ , ε ) , σ , ε + g 2 ( σ , ε ) + g 3 ( σ , ε ) d σ , α = 0 , 1 , . . . , t [ 0 , t f ] , ε ( 0 , ε ¯ 20 ] , δ p , 0 ( t , ε ) 0 .
Using the inequalities (65),(68),(70), we obtain the existence of a positive number ε_20 ≤ ε̄_20 such that, for any ε ∈ (0, ε_20], the sequence { δ_{p,α}(t, ε) }_{α=0}^{+∞} converges in the linear space of all n-dimensional vector-valued functions continuous in the interval [0, t_f]. Furthermore, the following inequalities are fulfilled:
‖ δ_{p,α}(t, ε) ‖ ≤ b_10 ε^2 , α = 1, 2, ... , t ∈ [0, t_f] , ε ∈ (0, ε_20] ,
where b 10 > 0 is some constant independent of ε .
Due to the aforementioned convergence of the sequence δ p , α ( t , ε ) α = 0 + , its limit δ p ( t , ε ) = lim α + δ p , α ( t , ε ) is, for all ε ( 0 , ε 20 ] , the solution of the integral equation (69) and, therefore, of the terminal-value problem (64) in the entire interval [ 0 , t f ] . Since the problem (64) is linear, its solution δ p ( t , ε ) is unique. Moreover, by virtue of the inequalities in (71), we directly have
‖ δ_p(t, ε) ‖ ≤ b_10 ε^2 , t ∈ [0, t_f] , ε ∈ (0, ε_20] .
Finally, this inequality, along with the equation (63), yields the inequality (62), which completes the proof of the lemma. □
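The successive-approximation argument of the proof can be illustrated on a scalar analogue of the integral equation (69), with the kernel Ω(t, σ, ε) = exp(p(t − σ)/ε) (which decays for t < σ, as in (70)), a Lipschitz perturbation of size O(ε) playing the role of g_1, and a forcing of size O(ε^2) playing the role of g_2 + g_3. All constants below are hypothetical; the iteration converges and the fixed point is uniformly O(ε^2), as in (71):

```python
import numpy as np

# Scalar analogue of the integral equation (69), hypothetical data:
# delta(t) = (1/eps) int_{tf}^{t} Omega(t,sigma) [ eps*b1*delta(sigma) + c*eps^2 ] dsigma,
# Omega(t,sigma) = exp(p (t - sigma)/eps), p > 0.
p, b1, c, tf, eps = 1.0, 0.4, 1.0, 1.0, 0.01

t = np.linspace(0.0, tf, 2001)
h = t[1] - t[0]
Omega = np.exp(np.minimum(p * (t[:, None] - t[None, :]) / eps, 0.0))

def picard_step(delta):
    forcing = eps * b1 * delta + c * eps**2
    # int_{tf}^{t} (...) dsigma = -int_{t}^{tf} (...) dsigma; keep only sigma >= t.
    upper = np.triu(Omega * forcing[None, :])
    return -(1.0 / eps) * upper.sum(axis=1) * h

delta = np.zeros_like(t)                 # delta_{p,0} = 0
for _ in range(20):
    delta = picard_step(delta)

# The iterates stay uniformly O(eps^2), as in the bound (71):
assert np.abs(delta).max() < 5.0 * c * eps**2 / p
# The scheme has converged: one more step changes nothing appreciably.
assert np.abs(picard_step(delta) - delta).max() < 1e-10
```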

5.4. Asymptotic Solution of the Terminal-Value Problem (29)

Solving the problem (29) and taking into account Lemma 4, we obtain
s(t, ε) = ∫_{t_f}^t [ (1/4) p^T(σ, ε) ( I_n − ε^2 S_v(σ) ) p(σ, ε) − ε f^T(σ) p(σ, ε) ] dσ , t ∈ [0, t_f] , ε ∈ (0, ε_20] .
Let us consider the function
s̄(t, ε) = ε^2 ∫_{t_f}^t [ (1/4) ( p_1^o(σ) )^T p_1^o(σ) − f^T(σ) p_1^o(σ) ] dσ , t ∈ [0, t_f] , ε ∈ (0, ε_20] .
Using (56), this function can be represented as:
s̄(t, ε) = ε^2 ∫_t^{t_f} f^T(σ) f(σ) dσ , t ∈ [0, t_f] , ε ∈ (0, ε_20] .
Lemma 5. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, for all ε ( 0 , ε 20 ] ( ε 20 > 0 is introduced in Lemma 4), the following inequality is satisfied:
| s(t, ε) − s̄(t, ε) | ≤ c_10 ε^3 , t ∈ [0, t_f] , ε ∈ (0, ε_20] ,
where c 10 > 0 is some constant independent of ε.
Proof. 
Using the equations (72),(73) and taking into account the equation (61), we obtain
s ( t , ε ) s ¯ ( t , ε ) = ε 2 t f t [ 1 2 p 1 o ( σ ) T p 1 b ( σ t f ) / ε + 1 4 p 1 b ( σ t f ) / ε T p 1 b ( σ t f ) / ε p 1 T ( σ , ε ) S v ( σ ) p 1 ( σ , ε ) f T ( σ ) p 1 b ( σ t f ) / ε ] d σ , t [ 0 , t f ] , ε ( 0 , ε 20 ] .
The latter, along with the equation (61) and the inequality (60), yields the statement of the lemma. □

5.5. Asymptotic Approximation of the CCDG value

Consider the following value, depending on z 0 :
J app I ( z 0 ) = ε z 0 T P 1 ( 0 , ε ) z 0 + ε z 0 T p 1 ( 0 , ε ) + s ¯ ( 0 , ε ) ,
where P 1 ( t , ε ) , p 1 ( t , ε ) and s ¯ ( t , ε ) are given by (33), (51) and (73), respectively.
Using the equations (32) and (75), as well as Lemmas 3, 4, 5, we directly obtain the following assertion.
Theorem 1. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, for all ε ( 0 , ε 20 ] ( ε 20 > 0 is introduced in Lemma 4), the following inequality is satisfied:
| J_ε^*(z_0) − J_app^I(z_0) | ≤ ε^3 ( a_10 ‖z_0‖^2 + b_10 ‖z_0‖ + c_10 ) .
Consider the following matrix and vector:
P ¯ 1 ( ε ) = P 0 o ( 0 ) + ε P 1 o ( 0 ) , p ¯ 1 ( ε ) = ε p 1 o ( 0 ) .
Based on this matrix and vector, let us construct the following value, depending on z_0:
J app , 1 I ( z 0 ) = ε z 0 T P ¯ 1 ( ε ) z 0 + ε z 0 T p ¯ 1 ( ε ) + s ¯ ( 0 , ε ) .
Corollary 2. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, there exists a positive number ε ¯ 20 ε 20 such that, for all ε ( 0 , ε ¯ 20 ] , the following inequality is satisfied:
| J_ε^*(z_0) − J_app,1^I(z_0) | ≤ ε^3 ( ā_10 ‖z_0‖^2 + b̄_10 ‖z_0‖ + c_10 ) ,
where a ¯ 10 > 0 and b ¯ 10 > 0 are some constants independent of ε.
Proof. 
First of all, let us note that, for β > 0 (see the equation (40)) and all sufficiently small ε > 0 , the following inequality is valid:
exp ( β t f / ε ) < ε .
This inequality, along with the equations (33),(34),(61),(76), the inequalities (39),(50),(60) and Lemmas 3, 4, yields the fulfillment of the inequalities
‖ P(0, ε) − P̄_1(ε) ‖ ≤ ā_10 ε^2 , ‖ p(0, ε) − p̄_1(ε) ‖ ≤ b̄_10 ε^2 , ε ∈ (0, ε̄_20] ,
where ε ¯ 20 ( 0 , ε 20 ] is some sufficiently small number; a ¯ 10 and b ¯ 10 are some positive numbers independent of ε .
These inequalities and Theorem 1 directly imply the statement of the corollary. □

5.6. Approximate-Saddle Point of the CCDG

Consider the following controls of the minimizer and the maximizer, respectively:
ũ_ε(z, t) = − (1/ε) P_1(t, ε) z − ( 1/(2ε) ) p_1(t, ε) ∈ U , ṽ_ε(z, t) = ε G_v^{−1}(t) C^T(t) P_1(t, ε) z + ( ε/2 ) G_v^{−1}(t) C^T(t) p_1(t, ε) ∈ V , (z, t) ∈ R^n × [0, t_f] , ε ∈ (0, ε_20] ,
where P 1 ( t , ε ) and p 1 ( t , ε ) are given by (33) and (61), respectively.
Remark 9. 
The controls u ˜ ε ( z , t ) and v ˜ ε ( z , t ) are obtained from the controls u ε * ( z , t ) and v ε * ( z , t ) (see the equations (30) and (31)) by replacing there P ( t , ε ) with P 1 ( t , ε ) and p ( t , ε ) with p 1 ( t , ε ) .
Due to the linearity of these controls with respect to z R n for any t [ 0 , t f ] , ε ( 0 , ε 20 ] and their continuity with respect to t [ 0 , t f ] for any z R n , ε ( 0 , ε 20 ] , the pair u ˜ ε ( z , t ) , v ˜ ε ( z , t ) is admissible in the CCDG.
Substitution of (u(t), v(t)) = ( ũ_ε(z(t), t), ṽ_ε(z(t), t) ) into the system (6) and the cost functional (7), as well as using the equation (16) and taking into account the symmetry of the matrix P_1(t, ε), yields after routine algebra the following system and cost functional:
d z ( t ) d t = A ˜ ( t , ε ) z ( t ) + f ˜ ( t , ε ) , t [ 0 , t f ] , z ( 0 ) = z 0 ,
J ˜ ( z 0 ) = 0 t f z T ( t ) Λ ˜ ( t , ε ) z ( t ) + z T ( t ) g ˜ ( t , ε ) + e ˜ ( t , ε ) d t ,
where
A ˜ ( t , ε ) = A ( t ) ε S ( t , ε ) P 1 ( t , ε ) , f ˜ ( t , ε ) = f ( t ) ε 2 S ( t , ε ) p 1 ( t , ε ) , Λ ˜ ( t , ε ) = Λ ( t ) + ε 2 P 1 ( t , ε ) S ( t , ε ) P 1 ( t , ε ) , g ˜ ( t , ε ) = ε 2 P 1 ( t , ε ) S ( t , ε ) p 1 ( t , ε ) , e ˜ ( t , ε ) = ε 2 4 p 1 T ( t , ε ) S ( t , ε ) p 1 ( t , ε ) .
Based on these functions, we construct the following terminal-value problems:
d L ˜ ( t , ε ) d t = L ˜ ( t , ε ) A ˜ ( t , ε ) A ˜ T ( t , ε ) L ˜ ( t , ε ) Λ ˜ ( t , ε ) , L ˜ ( t , ε ) R n × n , t [ 0 , t f ] , L ˜ ( t f , ε ) = 0 ,
d η ˜ ( t , ε ) d t = A ˜ T ( t , ε ) η ˜ ( t , ε ) 2 L ˜ ( t , ε ) f ˜ ( t , ε ) g ˜ ( t , ε ) , η ˜ ( t , ε ) R n , t [ 0 , t f ] , η ˜ ( t f , ε ) = 0 ,
d κ ˜ ( t , ε ) d t = f ˜ T ( t , ε ) η ˜ ( t , ε ) , κ ˜ ( t , ε ) R , t [ 0 , t f ] , κ ˜ ( t f , ε ) = 0 t f e ˜ ( σ , ε ) d σ ,
where ε ( 0 , ε 20 ] .
Remark 10. 
Due to the linearity, the problem (82) has the unique solution L ˜ ( t , ε ) in the entire interval [ 0 , t f ] for all ε ( 0 , ε 20 ] . Therefore, the problems (83) and (84) also have the unique solutions η ˜ ( t , ε ) and κ ˜ ( t , ε ) , respectively, in the entire interval [ 0 , t f ] for all ε ( 0 , ε 20 ] .
Lemma 6. 
The value J ˜ ( z 0 ) , given by the equations (79)-(80), can be represented in the form
J ˜ ( z 0 ) = z 0 T L ˜ ( 0 , ε ) z 0 + z 0 T η ˜ ( 0 , ε ) + κ ˜ ( 0 , ε ) , ε ( 0 , ε 20 ] .
The proof of the lemma is presented in Section 5.7.
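Lemma 6 is the standard representation of a linear-quadratic cost via the terminal-value problems (82)-(84); it can be checked numerically on a scalar instance with constant coefficients. All data below are hypothetical, and the signs of the right-hand sides follow the reconstruction dL̃/dt = −L̃Ã − Ã^T L̃ − Λ̃, dη̃/dt = −Ã^T η̃ − 2 L̃ f̃ − g̃, dκ̃/dt = −f̃^T η̃:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar instance with hypothetical constant data standing in for (81):
At, ft, Lw, gt, et, tf, z0 = -1.0, 0.5, 2.0, 0.3, 0.1, 1.0, 1.2

# Terminal-value problems (82)-(84), integrated backward from t = tf:
def backward(t, y):
    L, eta, kappa = y
    return [-2.0 * At * L - Lw,              # (82): dL/dt = -L*A - A^T*L - Lambda
            -At * eta - 2.0 * L * ft - gt,   # (83)
            -ft * eta]                       # (84), with kappa(tf) = int_0^{tf} e dt

L0, eta0, kappa0 = solve_ivp(backward, [tf, 0.0], [0.0, 0.0, et * tf],
                             rtol=1e-10, atol=1e-12).y[:, -1]

# Closed-loop system (79) and cost functional (80), integrated forward:
def forward(t, y):
    z = y[0]
    return [At * z + ft, Lw * z**2 + gt * z + et]

J_direct = solve_ivp(forward, [0.0, tf], [z0, 0.0],
                     rtol=1e-10, atol=1e-12).y[1, -1]

# Lemma 6: J = z0^T L(0) z0 + z0^T eta(0) + kappa(0).
assert abs(J_direct - (L0 * z0**2 + eta0 * z0 + kappa0)) < 1e-6
```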
Lemma 7. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, there exists a positive number ε 30 ε 20 ( ε 20 > 0 is introduced in Lemma 4) such that, for all ε ( 0 , ε 30 ] , the following inequality is satisfied:
‖ ε P(t, ε) − L̃(t, ε) ‖ ≤ a_L ε^5 , t ∈ [0, t_f] ,
where P ( t , ε ) is the solution of the terminal-value problem (27) mentioned in Lemma 3; a L > 0 is some constant independent of ε.
Proof. 
For any ε ( 0 , ε 20 ] , let us consider the matrix-valued function
Δ_PL(t, ε) = ε P(t, ε) − L̃(t, ε) , t ∈ [0, t_f] .
Using the problems (27) and (82), we obtain after routine rearrangement the terminal-value problem for Δ_PL(t, ε)
d Δ P L ( t , ε ) d t = Δ P L ( t , ε ) A ˜ ( t , ε ) A ˜ T ( t , ε ) Δ P L ( t , ε ) + P ( t , ε ) P 1 ( t , ε ) ε 2 S ( t , ε ) P ( t , ε ) P 1 ( t , ε ) , t [ 0 , t f ] , Δ P L ( t f , ε ) = 0 ,
where P 1 ( t , ε ) is given by (33).
Solving the problem (87) and using the results of [44], we have
Δ_PL(t, ε) = ∫_{t_f}^t Γ^T(σ, t, ε) ( P(σ, ε) − P_1(σ, ε) ) ε^2 S(σ, ε) ( P(σ, ε) − P_1(σ, ε) ) Γ(σ, t, ε) dσ , 0 ≤ t ≤ σ ≤ t_f , ε ∈ (0, ε_20] ,
where for any given t [ 0 , t f ) and ε ( 0 , ε 20 ] , the matrix-valued function Γ ( σ , t , ε ) is the unique solution of the terminal-value problem
d Γ ( σ , t , ε ) d σ = A ˜ ( σ , ε ) Γ ( σ , t , ε ) , σ [ t , t f ] , Γ ( t , t , ε ) = I n .
Based on the results of [45] and using the inequalities in (23) and the equations (33),(36),(81), we obtain the following estimate of Γ(σ, t, ε) for all 0 ≤ t ≤ σ ≤ t_f:
‖ Γ(σ, t, ε) ‖ ≤ b_Γ exp( β_Γ (t − σ) / ε ) , ε ∈ (0, ε_30] ,
where 0 < ε 30 ε 20 is some sufficiently small number; b Γ > 0 and β Γ > 0 are some constants independent of ε .
Using the equations (16),(88), as well as Lemma 3 and the inequality (90), we directly obtain the inequality
‖ Δ_PL(t, ε) ‖ ≤ a_L ε^5 , t ∈ [0, t_f] , ε ∈ (0, ε_30] ,
where a L > 0 is some constant independent of ε .
Thus, the equation (86) and the inequality (91) immediately yield the statement of the lemma, which completes its proof. □
Lemma 8. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, for all ε ( 0 , ε 30 ] ( ε 30 > 0 is introduced in Lemma 7), the following inequalities are satisfied:
‖ ε p(t, ε) − η̃(t, ε) ‖ ≤ a_η ε^5 , t ∈ [0, t_f] ,
| s(0, ε) − κ̃(0, ε) | ≤ a_{κ,1} ε^4 + a_{κ,2} ε^5 , a_{κ,1} = (1/4) b_10^2 t_f , a_{κ,2} = a_η ( (1/4) t_f + ∫_0^{t_f} ‖ f(σ) ‖ dσ ) ,
where p ( t , ε ) and s ( t , ε ) are the solutions of the terminal-value problems (28) and (29) mentioned in Lemma 4 and Lemma 5, respectively; a η > 0 is some constant independent of ε; the constant b 10 > 0 is introduced in Lemma 4.
Proof. 
We start the proof with the inequality (92).
For any ε ( 0 , ε 30 ] , let us consider the vector-valued function
Δ_pη(t, ε) = ε p(t, ε) − η̃(t, ε) , t ∈ [0, t_f] .
Using the problems (28) and (83), we obtain after routine rearrangement the terminal-value problem for Δ_pη(t, ε)
d Δ p η ( t , ε ) d t = A ˜ T ( t , ε ) Δ p η ( t , ε ) + 2 L ˜ ( t , ε ) ε P ( t , ε ) f ( t ) + ε P ( t , ε ) L ˜ ( t , ε ) ε S ( t , ε ) p 1 ( t , ε ) + P ( t , ε ) P 1 ( t , ε ) ε 2 S ( t , ε ) p ( t , ε ) p 1 ( t , ε ) , t [ 0 , t f ] , Δ p η ( t f , ε ) = 0 ,
where P 1 ( t , ε ) and p 1 ( t , ε ) are given by (33) and (61), respectively.
Solving the problem (95), we have
Δ p η ( t , ε ) = t f t Γ T ( σ , t , ε ) [ 2 L ˜ ( σ , ε ) ε P ( σ , ε ) f ( σ ) + ε P ( σ , ε ) L ˜ ( σ , ε ) ε S ( σ , ε ) p 1 ( σ , ε ) + P ( σ , ε ) P 1 ( σ , ε ) ε 2 S ( σ , ε ) p ( σ , ε ) p 1 ( σ , ε ) ] d σ , 0 t σ t f , ε ( 0 , ε 30 ] ,
where for any given t [ 0 , t f ) and ε ( 0 , ε 20 ] , the matrix-valued function Γ ( σ , t , ε ) is the unique solution of the terminal-value problem (89).
Using Lemmas 3, 4, 7 and the inequality (90), we obtain the inequality
‖ Δ_pη(t, ε) ‖ ≤ a_η ε^5 , t ∈ [0, t_f] , ε ∈ (0, ε_30] ,
where a η > 0 is some constant independent of ε .
The equation (94) and the inequality (96) directly imply the inequality (92).
We proceed to the proof of the inequality (93). From the equation (19), we have
s ( 0 , ε ) = t f 0 1 4 p T ( σ , ε ) I n ε 2 S v ( σ ) p ( σ , ε ) ε f T ( σ ) p ( σ , ε ) d σ .
Using the equations (16),(81),(84), we obtain
κ ˜ ( 0 , ε ) = t f 0 [ f T ( σ ) η ˜ ( σ , ε ) 1 2 ε p 1 T ( σ , ε ) I n ε 2 S v ( σ ) η ˜ ( σ , ε ) + 1 4 p 1 T ( σ , ε ) I n ε 2 S v ( σ ) p 1 ( σ , ε ) ] d σ .
Using these expressions for s ( 0 , ε ) and κ ˜ ( 0 , ε ) , as well as the inequalities (62) and (92), we obtain the following chain of the inequalities for all ε ( 0 , ε 30 ] :
| s ( 0 , ε ) κ ˜ ( 0 , ε ) | 0 t f f ( σ ) ε p ( σ , ε ) η ˜ ( σ , ε ) d σ + 1 4 0 t f | p T ( σ , ε ) I n ε 2 S v ( σ ) p ( σ , ε ) 2 ε p 1 T ( σ , ε ) I n ε 2 S v ( σ ) ε p ( σ , ε ) + η ˜ ( σ , ε ) ε p ( σ , ε ) + p 1 T ( σ , ε ) I n ε 2 S v ( σ ) p 1 ( σ , ε ) | d σ a η 0 t f f ( σ ) d σ ε 5 + 1 4 0 t f | p ( σ , ε ) p 1 ( σ , ε ) T I n ε 2 S v ( σ ) p ( σ , ε ) p 1 ( σ , ε ) | d σ + 1 4 0 t f ε p ( σ , ε ) η ˜ ( σ , ε ) d σ a η 0 t f f ( σ ) d σ ε 5 + 1 4 b 10 2 t f ε 4 + 1 4 a η t f ε 5 ,
which directly yields the inequality (93).
Thus, the lemma is proven. □
Theorem 2. 
Let the assumptions A1-A3 and the case I (see the equation (23)) be valid. Then, for all ε ( 0 , ε 30 ] ( ε 30 > 0 is introduced in Lemma 7), the following inequality is satisfied:
| J_ε^*(z_0) − J̃(z_0) | ≤ ε^4 [ a_{κ,1} + ( a_L ‖z_0‖^2 + a_η ‖z_0‖ + a_{κ,2} ) ε ] .
Proof. 
The statement of the theorem directly follows from the equations (32),(85) and Lemmas 7, 8. □
Remark 11. 
Due to Theorem 2, the outcome J̃(z_0) of the CCDG, generated by the pair of the controls ( ũ_ε(z, t), ṽ_ε(z, t) ), approximates the CCDG value J_ε^*(z_0) with high accuracy for all sufficiently small ε > 0. This observation allows us to call the pair ( ũ_ε(z, t), ṽ_ε(z, t) ) an approximate saddle point of the CCDG.

5.7. Proof of Lemma 6

First, let us calculate the value J ˜ ( z 0 ) using its definition, i.e., the equations (79)-(80).
Solving the initial-value problem (79), we obtain
z(t) = z(t, ε) = Γ(t, 0, ε) [ z_0 + ∫_0^t Γ^{−1}(σ, 0, ε) f̃(σ, ε) dσ ] , t ∈ [0, t_f] , ε ∈ (0, ε_20] ,
where the n × n -matrix-valued function Γ ( t , 0 , ε ) is given by the equation (89). This function can be represented as:
Γ(t, 0, ε) = ( Y(0, ε) Y^{−1}(t, ε) )^T , t ∈ [0, t_f] ,
where the n × n -matrix-valued function Y ( t , ε ) is the unique solution of the terminal-value problem
d Y(t, ε) / dt = − Ã^T(t, ε) Y(t, ε) , t ∈ [0, t_f] , Y(t_f, ε) = I_n .
Substituting (98) into (97), we obtain
z(t, ε) = ( Y^{−1}(t, ε) )^T [ Y^T(0, ε) z_0 + ∫_0^t Y^T(σ, ε) f̃(σ, ε) dσ ] , t ∈ [0, t_f] .
Substituting this expression of z(t, ε) into (80) yields after routine rearrangement the following expression for J̃(z_0):
J ˜ ( z 0 ) = z 0 T H ˜ 1 ( ε ) z 0 + z 0 T H ˜ 2 ( ε ) + H ˜ 3 ( ε ) , H ˜ 1 ( ε ) = Y ( 0 , ε ) 0 t f Y 1 ( t , ε ) D ˜ ( t , ε ) Y 1 ( t , ε ) T d t Y T ( 0 , ε ) , H ˜ 2 ( ε ) = 2 Y ( 0 , ε ) 0 t f Y 1 ( t , ε ) D ˜ ( t , ε ) Y 1 ( t , ε ) T 0 t Y T ( σ , ε ) f ˜ ( σ , ε ) d σ d t + Y ( 0 , ε ) 0 t f Y 1 ( t , ε ) g ˜ ( t , ε ) d t , H ˜ 3 ( ε ) = 0 t f [ 0 t f ˜ T ( σ , ε ) Y ( σ , ε ) d σ Y 1 ( t , ε ) D ˜ ( t , ε ) Y 1 ( t , ε ) T × 0 t Y T ( σ , ε ) f ˜ ( σ , ε ) d σ ] d t + 0 t f 0 t f ˜ T ( σ , ε ) Y ( σ , ε ) d σ Y 1 ( t , ε ) g ˜ ( t , ε ) d t + 0 t f e ˜ ( t , ε ) d t .
Now, let us calculate the expression in the right-hand side of the equation (85). To do this, we should solve the terminal-value problems (82),(83) and (84).
Using (99), we obtain the solution of the problem (82) in the form
L ˜ ( t , ε ) = Y ( t , ε ) t t f Y 1 ( σ , ε ) D ˜ ( σ , ε ) Y 1 ( σ , ε ) T d σ Y T ( t , ε ) , t [ 0 , t f ] .
Substituting this expression for L ˜ ( t , ε ) into (83) and solving the resulting problem yield after some rearrangement its solution as:
η ˜ ( t , ε ) = Y ( t , ε ) [ 2 t t f σ t f Y 1 ( ξ , ε ) D ˜ ( ξ , ε ) Y 1 ( ξ , ε ) T d ξ Y T ( σ , ε ) f ˜ ( σ , ε ) d σ + t t f Y 1 ( σ , ε ) g ˜ ( σ , ε ) d σ ] = Y ( t , ε ) [ 2 t t f Y 1 ( ξ , ε ) D ˜ ( ξ , ε ) Y 1 ( ξ , ε ) T t ξ Y T ( σ , ε ) f ˜ ( σ , ε ) d σ d ξ + t t f Y 1 ( σ , ε ) g ˜ ( σ , ε ) d σ ] , t [ 0 , t f ] .
Finally, substituting the above obtained expression for η ˜ ( t , ε ) into (84) and solving the resulting problem, we have its solution in the form
κ ˜ ( t , ε ) = κ ˜ ( t f , ε ) + κ ˜ 1 ( t , ε ) + κ ˜ 2 ( t , ε ) , t [ 0 , t f ] , κ ˜ 1 ( t ) = 2 t t f f ˜ T ( σ , ε ) Y ( σ , ε ) [ σ t f Y 1 ( ξ , ε ) D ˜ ( ξ , ε ) Y 1 ( ξ , ε ) T × σ ξ Y T ( σ 1 , ε ) f ˜ ( σ 1 , ε ) d σ 1 d ξ ] d σ , κ ˜ 2 ( t ) = t t f f ˜ T ( σ , ε ) Y ( σ , ε ) σ t f Y 1 ( σ 1 , ε ) g ˜ ( σ 1 , ε ) d σ 1 d σ .
Let us show that κ ˜ 1 ( t , ε ) can be represented as:
κ ˜ 1 ( t , ε ) = t t f [ t ξ f ˜ T ( σ , ε ) Y ( σ , ε ) d σ Y 1 ( ξ , ε ) D ˜ ( ξ , ε ) Y 1 ( ξ , ε ) T × t ξ Y T ( σ , ε ) f ˜ ( σ , ε ) d σ ] d ξ , t [ 0 , t f ] .
First, we observe that κ ˜ 1 ( t , ε ) , given in (103), and the expression in the right-hand side of (104) become zero at t = t f . Differentiation of κ ˜ 1 ( t , ε ) , given in (103), yields
d κ ˜ 1 ( t , ε ) d t = 2 f ˜ T ( t , ε ) Y ( t , ε ) t t f Y 1 ( ξ , ε ) D ˜ ( ξ , ε ) Y 1 ( ξ , ε ) T × t ξ Y T ( σ 1 , ε ) f ˜ ( σ 1 , ε ) d σ 1 d ξ , t [ 0 , t f ] .
The same expression is obtained by the differentiation of the function in the right-hand side of (104). This feature, along with the aforementioned observation, immediately yields the validity of (104).
Now, using the equation (100) and the equations (101)-(104), we obtain the following equalities:
L ˜ ( 0 , ε ) = H ˜ 1 ( ε ) , η ˜ ( 0 , ε ) = H ˜ 2 ( ε ) , κ ˜ ( 0 , ε ) = H ˜ 3 ( ε ) , ε ( 0 , ε 20 ] .
These equalities directly yield the statement of the lemma.

6. Asymptotic Solution of the CCDG in the Case II

6.1. Transformation of the Terminal-Value Problems (17)-(19)

As mentioned in Section 5.1, due to the equation (16), the differential equations in the problems (17)-(19) have singularities with respect to ε in their right-hand sides at ε = 0. To remove these singularities, the transformations (25) and (26) of the variables in the problems (17) and (18) were proposed in Section 5.1. These transformations are applicable in the case I of the matrix Λ(t) (see Remark 2 and the equation (23)). However, for the asymptotic analysis of the problems (17)-(19) in the case II (see the equation (24)), we need other transformations that remove the aforementioned singularities.
Namely, the transformation of the variable in the problem (17) is
K = K(t, ε) = [ ε K̂_1(t, ε)   ε^2 K̂_2(t, ε) ; ε^2 K̂_2^T(t, ε)   ε^2 K̂_3(t, ε) ] , t ∈ [0, t_f] ,
where, for all t [ 0 , t f ] and sufficiently small ε > 0 , the matrices K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) and K ^ 3 ( t , ε ) are of the dimensions l × l , l × ( n l ) and ( n l ) × ( n l ) , respectively; K ^ 1 T ( t , ε ) = K ^ 1 ( t , ε ) , K ^ 3 T ( t , ε ) = K ^ 3 ( t , ε ) ; the functions K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) and K ^ 3 ( t , ε ) are new unknown matrix-valued functions.
The transformation of the variable in the problem (18) is
q = q(t, ε) = [ ε q̂_1(t, ε) ; ε q̂_2(t, ε) ] , t ∈ [0, t_f] ,
where, for all t [ 0 , t f ] and sufficiently small ε > 0 , the vectors q ^ 1 ( t , ε ) and q ^ 2 ( t , ε ) are of the dimensions l and ( n l ) , respectively; the functions q ^ 1 ( t , ε ) and q ^ 2 ( t , ε ) are new unknown vector-valued functions.
Let us partition the matrices A ( t ) , S v ( t ) and Λ ( t ) into blocks as follows:
A(t) = [ A_1(t)   A_2(t) ; A_3(t)   A_4(t) ] , S_v(t) = [ S_v1(t)   S_v2(t) ; S_v2^T(t)   S_v3(t) ] , Λ(t) = [ Λ_1(t)   0 ; 0   0 ] , t ∈ [0, t_f] ,
where the matrices A 1 ( t ) , A 2 ( t ) , A 3 ( t ) and A 4 ( t ) are of the dimensions l × l , l × ( n l ) , ( n l ) × l and ( n l ) × ( n l ) , respectively; the matrices S v 1 ( t ) , S v 2 ( t ) and S v 3 ( t ) are of the dimensions l × l , l × ( n l ) and ( n l ) × ( n l ) , respectively; S v 1 T ( t ) = S v 1 ( t ) , S v 3 T ( t ) = S v 3 ( t ) ; the matrix Λ 1 ( t ) has the form
Λ_1(t) = diag( λ_1(t), ..., λ_l(t) ) .
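The block partition (107) can be sketched in Python as follows (hypothetical dimensions n = 5, l = 2 and random matrices standing in for A(t) and S_v(t) at a fixed t):

```python
import numpy as np

# Hypothetical dimensions: n = 5 total states, l = 2 states penalized in Lambda(t).
n, l = 5, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))                    # A(t) at a fixed t
Sv = rng.standard_normal((n, n))
Sv = Sv @ Sv.T                                     # symmetric S_v(t)
lam = np.array([2.0, 3.0])                         # positive lambda_1(t), ..., lambda_l(t)

# Block partition (107):
A1, A2 = A[:l, :l], A[:l, l:]
A3, A4 = A[l:, :l], A[l:, l:]
Sv1, Sv2, Sv3 = Sv[:l, :l], Sv[:l, l:], Sv[l:, l:]
Lambda = np.zeros((n, n))
Lambda[:l, :l] = np.diag(lam)                      # Lambda_1(t) block; the rest is zero

# Sanity: the blocks reassemble the original matrices, and Sv1, Sv3 are symmetric.
assert np.allclose(np.block([[A1, A2], [A3, A4]]), A)
assert np.allclose(np.block([[Sv1, Sv2], [Sv2.T, Sv3]]), Sv)
assert np.allclose(Sv1, Sv1.T) and np.allclose(Sv3, Sv3.T)
```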
Using the equations (16),(105),(107), we can rewrite the terminal-value problem (17) in the following equivalent form:
ε d K ^ 1 ( t , ε ) d t = ε K ^ 1 ( t , ε ) A 1 ( t ) ε 2 K ^ 2 ( t , ε ) A 3 ( t ) ε A 1 T ( t ) K ^ 1 ( t , ε ) ε 2 A 3 T ( t ) K ^ 2 T ( t , ε ) + K ^ 1 ( t , ε ) 2 ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) K ^ 1 ( t , ε ) ε 3 K ^ 2 ( t , ε ) S v 2 T ( t ) K ^ 1 ( t , ε ) ε 3 K ^ 1 ( t , ε ) S v 2 ( t ) K ^ 2 T ( t , ε ) + ε 2 K ^ 2 ( t , ε ) K ^ 2 T ( t , ε ) ε 4 K ^ 2 ( t , ε ) S v 3 ( t ) K ^ 2 T ( t , ε ) Λ 1 ( t ) , t [ 0 , t f ] , K ^ 1 ( t f , ε ) = 0 ,
ε d K ^ 2 ( t , ε ) d t = K ^ 1 ( t , ε ) A 2 ( t ) ε K ^ 2 ( t , ε ) A 4 ( t ) ε A 1 T ( t ) K ^ 2 ( t , ε ) ε A 3 T ( t ) K ^ 3 ( t , ε ) + K ^ 1 ( t , ε ) K ^ 2 ( t , ε ) ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) K ^ 2 ( t , ε ) ε 3 K ^ 2 ( t , ε ) S v 2 T ( t ) K ^ 2 ( t , ε ) ε 2 K ^ 1 ( t , ε ) S v 2 ( t ) K ^ 3 ( t , ε ) + ε K ^ 2 ( t , ε ) K ^ 3 ( t , ε ) ε 3 K ^ 2 ( t , ε ) S v 3 ( t ) K ^ 3 ( t , ε ) , t [ 0 , t f ] , K ^ 2 ( t f , ε ) = 0 ,
d K ^ 3 ( t , ε ) d t = K ^ 2 T ( t , ε ) A 2 ( t ) K ^ 3 ( t , ε ) A 4 ( t ) A 2 T ( t ) K ^ 2 ( t , ε ) A 4 T ( t ) K ^ 3 ( t , ε ) + K ^ 2 T ( t , ε ) K ^ 2 ( t , ε ) ε 2 K ^ 2 T ( t , ε ) S v 1 ( t ) K ^ 2 ( t , ε ) ε 2 K ^ 3 ( t , ε ) S v 2 T ( t ) K ^ 2 ( t , ε ) ε 2 K ^ 2 T ( t , ε ) S v 2 ( t ) K ^ 3 ( t , ε ) + K ^ 3 ( t , ε ) 2 ε 2 K ^ 3 ( t , ε ) S v 3 ( t ) K ^ 3 ( t , ε ) , t [ 0 , t f ] , K ^ 3 ( t f , ε ) = 0 .
Let us partition the vector f ( t ) into blocks as:
f ( t ) = f 1 ( t ) f 2 ( t ) , t [ 0 , t f ] ,
where the vectors f 1 ( t ) and f 2 ( t ) are of the dimensions l and ( n l ) , respectively.
Using the equations (16),(105),(106),(107),(111), we can rewrite the terminal-value problem (18) in the following equivalent form:
ε d q ^ 1 ( t , ε ) d t = K ^ 1 ( t , ε ) ε A 1 T ( t ) ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) ε 3 K ^ 2 ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) ε A 3 T ( t ) K ^ 2 ( t , ε ) + ε K ^ 1 ( t , ε ) S v 2 ( t ) + ε 2 K ^ 2 ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) 2 ε K ^ 1 ( t , ε ) f 1 ( t ) 2 ε 2 K ^ 2 ( t , ε ) f 2 ( t ) , t [ 0 , t f ] , q ^ 1 ( t f , ε ) = 0 ,
d q ^ 2 ( t , ε ) d t = A 2 T ( t ) K ^ 2 T ( t , ε ) + ε 2 K ^ 2 T ( t , ε ) S v 1 ( t ) + ε 2 K ^ 3 ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) A 4 T ( t ) K ^ 3 ( t , ε ) + ε 2 K ^ 2 T ( t , ε ) S v 2 ( t ) + ε 2 K ^ 3 ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) 2 ε K ^ 2 T ( t , ε ) f 1 ( t ) 2 ε K ^ 3 ( t , ε ) f 2 ( t ) , t [ 0 , t f ] , q ^ 2 ( t f , ε ) = 0 .
Finally, using the equations (16),(106),(107),(111), we can rewrite the terminal-value problem (19) in the following equivalent form:
d s ( t , ε ) d t = 1 4 q ^ 1 T ( t , ε ) q ^ 1 ( t , ε ) + q ^ 2 T ( t , ε ) q ^ 2 ( t , ε ) ε 2 4 ( q ^ 1 T ( t , ε ) S v 1 ( t ) q ^ 1 ( t , ε ) + q ^ 1 T ( t , ε ) S v 2 ( t ) q ^ 2 ( t , ε ) + q ^ 2 T ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) + q ^ 2 T ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) ) ε f 1 T ( t ) q ^ 1 ( t , ε ) + f 2 T ( t ) q ^ 2 ( t , ε ) , t [ 0 , t f ] , s ( t f , ε ) = 0 ,

6.2. Asymptotic Solution of the Terminal-Value Problem (108)-(110)

Similarly to the problem (27), the problem (108)-(110) is also singularly perturbed. However, in contrast with the former, the latter contains both fast and slow state variables. Namely, the state variables K ^ 1 ( t , ε ) and K ^ 2 ( t , ε ) , the derivatives of which are multiplied by the small parameter ε > 0 , are fast state variables, while the state variable K ^ 3 ( t , ε ) is a slow state variable.
Similarly to (33)-(34), we look for the first-order asymptotic solution of (108)-(110) in the form
K ^ i , 1 ( t , ε ) = K ^ i , 0 o ( t ) + K ^ i , 0 b ( τ ) + ε K ^ i , 1 o ( t ) + K ^ i , 1 b ( τ ) , τ = t t f ε , i = 1 , 2 , 3 .
The terms in (115) have the same meaning as the corresponding terms in (33). These terms are obtained by substituting K ^ i , 1 ( t , ε ) into the problem (108)-(110) instead of K ^ i ( t , ε ) , ( i = 1 , 2 , 3 ) , and equating the coefficients of like powers of ε on both sides of the resulting equations, separately for the terms depending on t and on τ .
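The interplay between an outer solution and an exponentially decaying boundary correction, as in (115), can be seen on a scalar model problem. The sketch below is purely illustrative (the equation ε y' = y − f(t), y(t_f) = 0 and the function f are our own choices, not the paper's system): it integrates the problem backward numerically and compares the result with the zeroth-order composite approximation f(t) − f(t_f) exp((t − t_f)/ε).

```python
import numpy as np

def solve_exact(eps, f, tf=1.0, steps_per_eps=400):
    """Integrate eps*y' = y - f(t), y(tf) = 0, backward in t with RK4."""
    h = eps / steps_per_eps
    n = int(np.ceil(tf / h))
    h = tf / n
    rhs = lambda t, y: (y - f(t)) / eps
    t, y = tf, 0.0
    ts, ys = [t], [y]
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t - h / 2, y - h / 2 * k1)
        k3 = rhs(t - h / 2, y - h / 2 * k2)
        k4 = rhs(t - h, y - h * k3)
        y -= h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t -= h
        ts.append(t)
        ys.append(y)
    return np.array(ts[::-1]), np.array(ys[::-1])

f = lambda t: np.sin(t) + 2.0
tf = 1.0
errs = {}
for eps in (0.1, 0.05):
    ts, ys = solve_exact(eps, f, tf)
    # outer term f(t) plus boundary correction -f(tf)*exp(tau), tau = (t - tf)/eps
    composite = f(ts) - f(tf) * np.exp((ts - tf) / eps)
    errs[eps] = np.max(np.abs(ys - composite))
print(errs)
```

Halving ε roughly halves the maximal error, consistent with a first-order remainder of the zeroth-order composite expansion.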

6.2.1. Obtaining the Boundary Correction K ^ 3 , 0 b ( τ )

This boundary correction satisfies the equation
d K ^ 3 , 0 b ( τ ) d τ = 0 , τ 0 .
To obtain a unique solution of this equation, we need an additional condition on K ^ 3 , 0 b ( τ ) . By virtue of the Boundary Function Method [37], such a condition is K ^ 3 , 0 b ( τ ) → 0 as τ → − ∞ . Subject to this condition, the equation (116) yields the solution
K ^ 3 , 0 b ( τ ) = 0 , τ 0 .

6.2.2. Obtaining the Outer Solution Terms K ^ 1 , 0 o ( t ) , K ^ 2 , 0 o ( t ) , K ^ 3 , 0 o ( t )

For these terms, we have the following equations in the time-interval [ 0 , t f ] :
0 = K ^ 1 , 0 o ( t ) 2 Λ 1 ( t ) ,
0 = K ^ 1 , 0 o ( t ) A 2 ( t ) + K ^ 1 , 0 o ( t ) K ^ 2 , 0 o ( t ) ,
d K ^ 3 , 0 o ( t ) d t = K ^ 2 , 0 o ( t ) T A 2 ( t ) K ^ 3 , 0 o ( t ) A 4 ( t ) A 2 T ( t ) K ^ 2 , 0 o ( t ) A 4 T ( t ) K ^ 3 , 0 o ( t ) + K ^ 2 , 0 o ( t ) T K ^ 2 , 0 o ( t ) + K ^ 3 , 0 o ( t ) 2 , K ^ 3 , 0 o ( t f ) = 0 .
The equation (118) yields the solution
K ^ 1 , 0 o ( t ) = Λ 1 1 / 2 ( t ) = diag λ 1 1 / 2 ( t ) , . . . , λ l 1 / 2 ( t ) , t [ 0 , t f ] .
Solving the equation (119) and taking into account the invertibility of K ^ 1 , 0 o ( t ) = Λ 1 1 / 2 ( t ) for all t [ 0 , t f ] , we obtain
K ^ 2 , 0 o ( t ) = A 2 ( t ) , t [ 0 , t f ] .
Substitution of (122) into (120) yields the following terminal-value problem with respect to K ^ 3 , 0 o ( t ) :
d K ^ 3 , 0 o ( t ) d t = K ^ 3 , 0 o ( t ) A 4 ( t ) A 4 T ( t ) K ^ 3 , 0 o ( t ) + K ^ 3 , 0 o ( t ) 2 A 2 T ( t ) A 2 ( t ) , t [ 0 , t f ] , K ^ 3 , 0 o ( t f ) = 0 .
Since A 2 T ( t ) A 2 ( t ) is a positive semidefinite (and, for A 2 ( t ) of full column rank, positive definite) matrix for all t [ 0 , t f ] , by virtue of the results of [46], the problem (123) has the unique solution K ^ 3 , 0 o ( t ) in the entire interval [ 0 , t f ] .
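Terminal-value Riccati problems of the type (123) are solved numerically by backward integration from t = t_f. A scalar sketch under an assumed sign pattern (we take dk/dt = k² − c with k(t_f) = 0, mimicking (123) with A₄ ≡ 0 and A₂ᵀA₂ = c; both the signs and the data are illustrative, not taken from the paper): this scalar problem has the closed-form solution k(t) = √c · tanh(√c (t_f − t)), which a backward RK4 sweep reproduces.

```python
import numpy as np

c, tf, h = 2.0, 1.0, 1e-3          # illustrative data: dk/dt = k**2 - c, k(tf) = 0
rhs = lambda t, k: k ** 2 - c

t, k = tf, 0.0
for _ in range(int(round(tf / h))):  # RK4 sweep backward from tf to 0
    k1 = rhs(t, k)
    k2 = rhs(t - h / 2, k - h / 2 * k1)
    k3 = rhs(t - h / 2, k - h / 2 * k2)
    k4 = rhs(t - h, k - h * k3)
    k -= h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t -= h

closed_form = np.sqrt(c) * np.tanh(np.sqrt(c) * tf)  # exact solution at t = 0
err = abs(k - closed_form)
print(k, closed_form, err)
```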

6.2.3. Obtaining the Boundary Corrections K ^ 1 , 0 b ( τ ) and K ^ 2 , 0 b ( τ )

Using the equations (121) and (122), we derive the following terminal-value problem for these corrections:
d K ^ 1 , 0 b ( τ ) d τ = K ^ 1 , 0 b ( τ ) Λ 1 1 / 2 ( t f ) + Λ 1 1 / 2 ( t f ) K ^ 1 , 0 b ( τ ) + K ^ 1 , 0 b ( τ ) 2 , τ 0 , K ^ 1 , 0 b ( 0 ) = K ^ 1 , 0 o ( t f ) = Λ 1 1 / 2 ( t f ) ,
d K ^ 2 , 0 b ( τ ) d τ = Λ 1 1 / 2 ( t f ) + K ^ 1 , 0 b ( τ ) K ^ 2 , 0 b ( τ ) , τ 0 , K ^ 2 , 0 b ( 0 ) = K ^ 2 , 0 o ( t f ) = A 2 ( t f ) .
This problem consists of two subproblems, which can be solved consecutively. First, the subproblem (124) is solved. Then, using its solution K ^ 1 , 0 b ( τ ) , the subproblem (125) is solved. The subproblem (124) is a terminal-value problem for a Bernoulli-type matrix differential equation (see, e.g., [42]) yielding the unique solution
K ^ 1 , 0 b ( τ ) = 2 Λ 1 1 / 2 ( t f ) exp 2 Λ 1 1 / 2 ( t f ) τ I l + exp 2 Λ 1 1 / 2 ( t f ) τ 1 , τ 0 .
Substituting (126) into the subproblem of (125) and solving the obtained terminal-value problem with respect to K ^ 2 , 0 b ( τ ) yield
K ^ 2 , 0 b ( τ ) = 2 exp Λ 1 1 / 2 ( t f ) τ I l + exp 2 Λ 1 1 / 2 ( t f ) τ 1 A 2 ( t f ) , τ 0 .
Since the matrix Λ 1 1 / 2 ( t f ) is positive definite, the matrix-valued functions K ^ 1 , 0 b ( τ ) and K ^ 2 , 0 b ( τ ) are exponentially decaying, i.e.,
‖ K ^ 1 , 0 b ( τ ) ‖ ≤ a ^ 1 , 0 exp ( 2 β ^ τ ) , ‖ K ^ 2 , 0 b ( τ ) ‖ ≤ a ^ 2 , 0 exp ( β ^ τ ) , τ ≤ 0 ,
where a ^ 1 , 0 > 0 and a ^ 2 , 0 > 0 are some constants;
β ^ = min i { 1 , . . . , l } λ i 1 / 2 ( t f ) > 0 .
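In the scalar case l = 1 with Λ₁^{1/2}(t_f) = λ > 0 (an illustrative value), our reading of (126) is k_b(τ) = −2λ e^{2λτ}/(1 + e^{2λτ}); the sign convention here is an assumption on our part, since only magnitudes enter the estimates (128). The check below verifies numerically that this function satisfies the scalar Bernoulli equation dk_b/dτ = 2λ k_b + k_b² with k_b(0) = −λ, and that it obeys a decay bound of the type (128) with β̂ = λ and â = 2λ.

```python
import numpy as np

lam = 1.5   # illustrative scalar value of Lambda_1^{1/2}(t_f)
kb = lambda tau: -2 * lam * np.exp(2 * lam * tau) / (1 + np.exp(2 * lam * tau))

taus = np.linspace(-8.0, 0.0, 2001)
h = 1e-5
deriv = (kb(taus + h) - kb(taus - h)) / (2 * h)          # central finite difference
residual = np.max(np.abs(deriv - (2 * lam * kb(taus) + kb(taus) ** 2)))

terminal_gap = abs(kb(0.0) + lam)                        # k_b(0) should equal -lam
decay_ok = bool(np.all(np.abs(kb(taus)) <= 2 * lam * np.exp(2 * lam * taus) + 1e-12))
print(residual, terminal_gap, decay_ok)
```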

6.2.4. Obtaining the Boundary Correction K ^ 3 , 1 b ( τ )

Using the equations (117) and (122) yields, after a routine algebra, the equation for this boundary correction
d K ^ 3 , 1 b ( τ ) d τ = K ^ 2 , 0 b ( τ ) T K ^ 2 , 0 b ( τ ) , τ 0 .
Substituting (127) into (130) and taking into account the diagonal form of the matrix Λ 1 1 / 2 ( t ) , we obtain the following differential equation for K ^ 3 , 1 b ( τ ) :
d K ^ 3 , 1 b ( τ ) d τ = 4 A 2 T ( t f ) exp 2 Λ 1 1 / 2 ( t f ) τ I l + exp 2 Λ 1 1 / 2 ( t f ) τ 2 A 2 ( t f ) , τ 0 .
The solution of this equation with an unknown value K ^ 3 , 1 b ( 0 ) is
K ^ 3 , 1 b ( τ ) = K ^ 3 , 1 b ( 0 ) + A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) 2 A 2 T ( t f ) Λ 1 1 / 2 ( t f ) I l + exp 2 Λ 1 1 / 2 ( t f ) τ 1 A 2 ( t f ) , τ 0 ,
where Λ 1 − 1 / 2 ( t f ) is the inverse of the matrix Λ 1 1 / 2 ( t f ) .
Due to the Boundary Function Method [37], we choose the unknown matrix K ^ 3 , 1 b ( 0 ) such that K ^ 3 , 1 b ( τ ) → 0 as τ → − ∞ . Thus, using (131) and taking into account the positive definiteness of the matrix Λ 1 1 / 2 ( t f ) , we have
lim τ K ^ 3 , 1 b ( τ ) = K ^ 3 , 1 b ( 0 ) A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) = 0 ,
implying
K ^ 3 , 1 b ( 0 ) = A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) .
The latter, along with the equation (131), yields, after a routine rearrangement,
K ^ 3 , 1 b ( τ ) = 2 A 2 T ( t f ) Λ 1 1 / 2 ( t f ) exp 2 Λ 1 1 / 2 ( t f ) τ I l + exp 2 Λ 1 1 / 2 ( t f ) τ 1 A 2 ( t f ) ,
where τ 0 .
Since Λ 1 1 / 2 ( t f ) is a positive definite matrix, the matrix-valued function K ^ 3 , 1 b ( τ ) decays exponentially as τ → − ∞ , i.e.,
‖ K ^ 3 , 1 b ( τ ) ‖ ≤ a ^ 3 exp ( 2 β ^ τ ) , τ ≤ 0 ,
where a ^ 3 > 0 is some constant; the constant β ^ > 0 is given by (129).

6.2.5. Obtaining the Outer Solution Terms K ^ 1 , 1 o ( t ) , K ^ 2 , 1 o ( t ) , K ^ 3 , 1 o ( t )

Using the equations (121), (122) and (132), we obtain the following equations for these terms in the time-interval [ 0 , t f ] :
d Λ 1 1 / 2 ( t ) d t = Λ 1 1 / 2 ( t ) A 1 ( t ) A 1 T ( t ) Λ 1 1 / 2 ( t ) + Λ 1 1 / 2 ( t ) K ^ 1 , 1 o ( t ) + K ^ 1 , 1 o ( t ) Λ 1 1 / 2 ( t ) ,
d A 2 ( t ) d t = A 2 ( t ) A 4 ( t ) A 1 T ( t ) A 2 ( t ) A 3 T ( t ) K ^ 3 , 0 o ( t ) + Λ 1 1 / 2 ( t ) K ^ 2 , 1 o ( t ) + A 2 ( t ) K ^ 3 , 0 o ( t ) ,
d K ^ 3 , 1 o ( t ) d t = K ^ 3 , 1 o ( t ) K ^ 3 , 0 o ( t ) A 4 ( t ) + K ^ 3 , 0 o ( t ) A 4 T ( t ) K ^ 3 , 1 o ( t ) , K ^ 3 , 1 o ( t f ) = K ^ 3 , 1 b ( 0 ) = A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) .
Using the results of [43] and taking into account the equation (24), we obtain the solution of the equation (134)
K ^ 1 , 1 o ( t ) = 0 + exp Λ 1 1 / 2 ( t ) ξ [ d Λ 1 1 / 2 ( t ) d t + Λ 1 1 / 2 ( t ) A 1 ( t ) + A 1 T ( t ) Λ 1 1 / 2 ( t ) ] exp Λ 1 1 / 2 ( t ) ξ d ξ , t [ 0 , t f ] .
Furthermore, taking into account the invertibility of the matrix Λ 1 1 / 2 ( t ) for all t [ 0 , t f ] , we obtain the solution of the equation (135)
K ^ 2 , 1 o ( t ) = Λ 1 1 / 2 ( t ) [ d A 2 ( t ) d t + A 2 ( t ) A 4 ( t ) + A 1 T ( t ) A 2 ( t ) + A 3 T ( t ) K ^ 3 , 0 o ( t ) A 2 ( t ) K ^ 3 , 0 o ( t ) ] , t [ 0 , t f ] .
Finally, solving the problem (136), we obtain
K ^ 3 , 1 o ( t ) = Φ ^ ( t ) A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) Φ ^ T ( t ) , t [ 0 , t f ] ,
where the matrix-valued function Φ ^ ( t ) satisfies the terminal-value problem
d Φ ^ ( t ) d t = K ^ 3 , 0 o ( t ) A 4 T ( t ) Φ ^ ( t ) , t [ 0 , t f ] , Φ ^ ( t f ) = I l .
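The representation (139)-(140) is an instance of a general fact: if Φ solves dΦ/dt = M(t)Φ with Φ(t_f) = I, then K(t) = Φ(t) C Φᵀ(t) solves the linear terminal-value problem dK/dt = M(t)K + K Mᵀ(t), K(t_f) = C. The sketch below verifies this numerically with arbitrary illustrative 2×2 data M(t) and C (not the paper's matrices), integrating both problems backward from t_f and comparing at t = 0.

```python
import numpy as np

tf, n = 1.0, 2000
h = tf / n
M = lambda t: np.array([[0.0, 1.0], [-1.0, -0.5 * t]])   # illustrative M(t)
C = np.array([[1.0, 0.3], [0.3, 2.0]])                   # illustrative symmetric terminal value

def rk4_back(F, Y, t):
    """One backward RK4 step of size h for dY/dt = F(t, Y)."""
    k1 = F(t, Y)
    k2 = F(t - h / 2, Y - h / 2 * k1)
    k3 = F(t - h / 2, Y - h / 2 * k2)
    k4 = F(t - h, Y - h * k3)
    return Y - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

Phi, K, t = np.eye(2), C.copy(), tf
for _ in range(n):
    Phi = rk4_back(lambda s, Y: M(s) @ Y, Phi, t)              # Phi' = M Phi
    K = rk4_back(lambda s, Y: M(s) @ Y + Y @ M(s).T, K, t)     # K' = M K + K M^T
    t -= h

gap = np.max(np.abs(K - Phi @ C @ Phi.T))
print(gap)
```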

6.2.6. Obtaining the Boundary Corrections K ^ 1 , 1 b ( τ ) and K ^ 2 , 1 b ( τ )

The correction K ^ 1 , 1 b ( τ ) satisfies the following terminal-value problem:
d K ^ 1 , 1 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) K ^ 1 , 1 b ( τ ) + K ^ 1 , 1 b ( τ ) K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) + Ψ ^ 1 ( τ ) , τ 0 , K ^ 1 , 1 b ( 0 ) = K ^ 1 , 1 o ( t f ) ,
where
Ψ ^ 1 ( τ ) = τ d K ^ 1 , 0 o ( t ) d t | t = t f + K ^ 1 , 1 o ( t f ) A 1 T ( t f ) K ^ 1 , 0 b ( τ ) + K ^ 1 , 0 b ( τ ) τ d K ^ 1 , 0 o ( t ) d t | t = t f + K ^ 1 , 1 o ( t f ) A 1 ( t f ) .
Due to the first inequality in (128), we can estimate the matrix-valued function Ψ ^ 1 ( τ ) as:
‖ Ψ ^ 1 ( τ ) ‖ ≤ a ^ Ψ , 1 exp ( β ^ τ ) , τ ≤ 0 ,
where a ^ Ψ , 1 > 0 is some constant; the constant β ^ > 0 is given by (129).
Solving the problem (141) and using the results of [44] and the symmetry of the matrices K ^ 1 , 0 o ( t ) and K ^ 1 , 0 b ( τ ) , we obtain, similarly to (46),
K ^ 1 , 1 b ( τ ) = Φ ^ 1 ( 0 , τ ) K ^ 1 , 1 o ( t f ) Φ ^ 1 ( 0 , τ ) + 0 τ Φ ^ 1 ( σ , τ ) Ψ ^ 1 ( σ ) Φ ^ 1 ( σ , τ ) d σ , τ 0 ,
where, for any τ ≤ 0 , the l × l matrix-valued function Φ ^ 1 ( σ , τ ) is the unique solution of the problem
d Φ ^ 1 ( σ , τ ) d σ = − [ K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( σ ) ] Φ ^ 1 ( σ , τ ) , σ ∈ [ τ , 0 ] , Φ ^ 1 ( τ , τ ) = I l .
Solving this problem and taking into account the expressions for K ^ 1 , 0 o ( t ) and K ^ 1 , 0 b ( τ ) (see the equations (121) and (126)), we have
Φ ^ 1 ( σ , τ ) = exp Λ 1 1 / 2 ( t f ) ( τ − σ ) Θ ^ − 1 ( τ ) Θ ^ ( σ ) , 0 ≥ σ ≥ τ > − ∞ ,
where
Θ ^ ( χ ) = I l + exp 2 Λ 1 1 / 2 ( t f ) χ , χ ≤ 0 .
The matrix-valued function Φ ^ 1 ( σ , τ ) satisfies the inequality
‖ Φ ^ 1 ( σ , τ ) ‖ ≤ a ^ Φ , 1 exp β ^ ( τ − σ ) , 0 ≥ σ ≥ τ > − ∞ ,
where a ^ Φ , 1 > 0 is some constant; the constant β ^ is given in (129).
Using the equation (144) and the inequalities (143),(148), we obtain the following estimate for K ^ 1 , 1 b ( τ ) :
K ^ 1 , 1 b ( τ ) a ^ Φ , 1 2 K ^ 1 , 1 o ( t f ) exp ( 2 β ^ τ ) + a ^ Ψ , 1 β ^ exp ( β ^ τ ) 1 exp ( β ^ τ ) , τ 0 ,
meaning that K ^ 1 , 1 b ( τ ) is an exponentially decaying function as τ → − ∞ .
We proceed to the correction K ^ 2 , 1 b ( τ ) . Using the equation (122), we obtain, after some rearrangement, the terminal-value problem for this correction
d K ^ 2 , 1 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) K ^ 2 , 1 b ( τ ) + Ψ ^ 2 ( τ ) , τ 0 , K ^ 2 , 1 b ( 0 ) = K ^ 2 , 1 o ( t f ) ,
where
Ψ ^ 2 ( τ ) = τ d K ^ 1 , 0 o ( t ) d t | t = t f + K ^ 1 , 1 o ( t f ) + K ^ 1 , 1 b ( τ ) A 1 T ( t f ) K ^ 2 , 0 b ( τ ) + K ^ 1 , 0 b ( τ ) K ^ 2 , 1 o ( t f ) K ^ 2 , 0 b ( τ ) A 4 ( t f ) .
The analysis and solution of the problem (150) are similar to those of the problem (141) presented above. Namely, the matrix-valued function Ψ ^ 2 ( τ ) can be estimated as:
‖ Ψ ^ 2 ( τ ) ‖ ≤ a ^ Ψ , 2 exp ( ( β ^ / 2 ) τ ) , τ ≤ 0 ,
where a ^ Ψ , 2 > 0 is some constant; the constant β ^ > 0 is given by (129).
The solution of the problem (150) is
K ^ 2 , 1 b ( τ ) = Φ ^ 1 ( 0 , τ ) K ^ 2 , 1 o ( t f ) + 0 τ Φ ^ 1 ( σ , τ ) Ψ ^ 2 ( σ ) d σ , τ 0 ,
where the matrix-valued function Φ ^ 1 ( σ , τ ) is given by (145)-(147) and satisfies the inequality (148).
Using the equation (153) and the inequalities (152),(148), we obtain the following estimate of K ^ 2 , 1 b ( τ ) for all τ 0 :
K ^ 2 , 1 b ( τ ) a ^ Φ , 1 K ^ 2 , 1 o ( t f ) exp ( β ^ τ ) + 2 a ^ Ψ , 2 β ^ exp ( β ^ / 2 ) τ ) 1 exp ( β ^ / 2 ) τ ,
meaning that K ^ 2 , 1 b ( τ ) is an exponentially decaying function as τ → − ∞ .

6.2.7. Justification of the Asymptotic Solution to the Problem (108)-(110)

Lemma 9. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, there exists a positive number ε ^ 10 such that, for all ε ∈ ( 0 , ε ^ 10 ] , the problem (108)-(110) has the unique solution K ^ i ( t , ε ) , ( i = 1 , 2 , 3 ) , in the entire interval [ 0 , t f ] . This solution satisfies the inequalities ‖ K ^ i ( t , ε ) − K ^ i , 1 ( t , ε ) ‖ ≤ a ^ K , i ε 2 , t ∈ [ 0 , t f ] , where K ^ i , 1 ( t , ε ) , ( i = 1 , 2 , 3 ) , are given by (115); a ^ K , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε.
Proof. 
Let us make the transformation of variables in the problem (108)-(110)
K ^ i ( t , ε ) = K ^ i , 1 ( t , ε ) + Δ K , i ( t , ε ) , i = 1 , 2 , 3 ,
where Δ K , i ( t , ε ) , ( i = 1 , 2 , 3 ) are new unknown matrix-valued functions.
Consider the block-form matrix-valued function
Δ K ( t , ε ) = ε Δ K , 1 ( t , ε ) ε 2 Δ K , 2 ( t , ε ) ε 2 Δ K , 2 T ( t , ε ) ε 2 Δ K , 3 ( t , ε ) .
Substituting (155) into the problem (108)-(110), using the equations for the outer solution terms and boundary corrections (see (117)-(120),(124)-(125),(130),(134)-(136),(141),(154)) and using the expressions for the matrices S ( t , ε ) , K ( t , ε ) , A ( t ) , S v ( t ) , Λ ( t ) (see the equations (16),(105),(107)) yields, after a routine algebra, the terminal-value problem for Δ K ( t , ε )
d Δ K ( t , ε ) d t = Δ K ( t , ε ) A ^ K ( t , ε ) A ^ K T ( t , ε ) Δ K ( t , ε ) + Δ K ( t , ε ) S ( t , ε ) Δ K ( t , ε ) D ^ K ( t , ε ) , t [ 0 , t f ] , Δ K ( t f , ε ) = 0 ,
where
A ^ K ( t , ε ) = A ( t ) S ( t , ε ) K ^ 1 ( t , ε ) , K ^ 1 ( t , ε ) = ε K ^ 1 , 1 ( t , ε ) ε 2 K ^ 2 , 1 ( t , ε ) ε 2 K ^ 2 , 1 T ( t , ε ) ε 2 K ^ 3 , 1 ( t , ε ) ;
the matrix-valued function D ^ K ( t , ε ) is expressed in a known form in terms of the matrix-valued functions K ^ 1 ( t , ε ) , S u ( ε ) and S v ( t ) ; for any ε > 0 , D ^ K ( t , ε ) is a continuous function of t ∈ [ 0 , t f ] ; for any t ∈ [ 0 , t f ] and ε > 0 , the matrix D ^ K ( t , ε ) is symmetric.
Let us represent the matrix D ^ K ( t , ε ) in the block form as:
D ^ K ( t , ε ) = D ^ K , 1 ( t , ε ) D ^ K , 2 ( t , ε ) D ^ K , 2 T ( t , ε ) D ^ K , 3 ( t , ε ) ,
where the dimensions of the blocks are the same as the dimensions of the corresponding blocks in the matrix K ^ 1 ( t , ε ) .
Using the inequalities (128),(133),(149),(154), we obtain the following estimates for the blocks of D ^ K ( t , ε ) :
‖ D ^ K , 1 ( t , ε ) ‖ ≤ b ^ D , 1 ε 2 , ‖ D ^ K , 2 ( t , ε ) ‖ ≤ b ^ D , 2 ε 3 , ‖ D ^ K , 3 ( t , ε ) ‖ ≤ b ^ D , 3 ε 3 ( ε + exp ( ( β ^ / 2 ) τ ) ) , τ = ( t − t f ) / ε , t ∈ [ 0 , t f ] , ε ∈ ( 0 , ε ^ D ] ,
where b ^ D , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε ; the constant β ^ > 0 is given by (129); ε ^ D > 0 is some sufficiently small number.
Due to the results of [44], we can rewrite the terminal-value problem (157) in the equivalent integral form
Δ K ( t , ε ) = t f t Ω ^ T ( σ , t , ε ) [ Δ K ( σ , ε ) S ( σ , ε ) Δ K ( σ , ε ) D ^ K ( σ , ε ) ] Ω ^ ( σ , t , ε ) d σ , t [ 0 , t f ] , ε ( 0 , ε ^ D ] ,
where, for any given t [ 0 , t f ] and ε ( 0 , ε ^ D ] , the n × n -matrix-valued function Ω ^ ( σ , t , ε ) is the unique solution of the problem
d Ω ^ ( σ , t , ε ) d σ = A ^ K ( σ , ε ) Ω ^ ( σ , t , ε ) , σ [ t , t f ] , Ω ^ ( t , t , ε ) = I n .
Let Ω ^ 1 ( σ , t , ε ) , Ω ^ 2 ( σ , t , ε ) , Ω ^ 3 ( σ , t , ε ) and Ω ^ 4 ( σ , t , ε ) be the upper left-hand, upper right-hand, lower left-hand and lower right-hand blocks of the matrix Ω ^ ( σ , t , ε ) of the dimensions l × l , l × ( n l ) , ( n l ) × l and ( n l ) × ( n l ) , respectively. By virtue of the results of [45], we have the following estimates of these blocks for all 0 t σ t f :
‖ Ω ^ 1 ( σ , t , ε ) ‖ ≤ b Ω [ ε + exp ( β ^ ( t − σ ) / ε ) ] , ‖ Ω ^ k ( σ , t , ε ) ‖ ≤ b Ω ε , k = 2 , 3 , ‖ Ω ^ 4 ( σ , t , ε ) ‖ ≤ b Ω , ε ∈ ( 0 , ε ^ Ω ] ,
where b Ω > 0 is some constant independent of ε ; ε ^ Ω > 0 is some sufficiently small number.
Applying the method of successive approximations to the equation (160), let us consider the sequence of the matrix-valued functions Δ K j ( t , ε ) j = 0 + given as:
Δ K j + 1 ( t , ε ) = t f t Ω ^ T ( σ , t , ε ) [ Δ K j ( σ , ε ) S ( σ , ε ) Δ K j ( σ , ε ) D ^ K ( σ , ε ) ] Ω ^ ( σ , t , ε ) d σ , j = 0 , 1 , . . . , t [ 0 , t f ] , ε ∈ ( 0 , ε ^ D ] ,
where Δ K 0 ( t , ε ) = 0 , t [ 0 , t f ] , ε ( 0 , ε ^ D ] ; the matrices Δ K j ( σ , ε ) have the block form
Δ K j ( σ , ε ) = ε Δ K , 1 j ( t , ε ) ε 2 Δ K , 2 j ( t , ε ) ε 2 Δ K , 2 j ( t , ε ) T ε 2 Δ K , 3 j ( t , ε ) , ( j = 1 , 2 , . . . ) ,
and the dimensions of the blocks in each of these matrices are the same as the dimensions of the corresponding blocks in (156).
Using the block representations of all the matrices appearing in the equation (163), as well as the inequalities (159) and (162), we obtain the existence of a positive number ε ^ 10 ≤ min { ε ^ D , ε ^ Ω } such that for any ε ∈ ( 0 , ε ^ 10 ] the sequence Δ K j ( t , ε ) j = 0 + converges in the linear space of all n × n -matrix-valued functions continuous in the interval [ 0 , t f ] . Moreover, the following inequalities are fulfilled:
‖ Δ K , i j ( t , ε ) ‖ ≤ a ^ K , i ε 2 , i = 1 , 2 , 3 , j = 1 , 2 , . . . , t ∈ [ 0 , t f ] ,
where a ^ K , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε and j.
Thus, for any ε ( 0 , ε ^ 10 ] ,
Δ K * ( t , ε ) = lim j + Δ K j ( t , ε )
is a solution of the equation (160) and, therefore, of the problem (157) in the entire interval [ 0 , t f ] . Moreover, this solution has the block form similar to (156) and satisfies the inequalities
‖ Δ K , i * ( t , ε ) ‖ ≤ a ^ K , i ε 2 , i = 1 , 2 , 3 , t ∈ [ 0 , t f ] .
Since the right-hand side of the differential equation in the problem (157) is smooth w.r.t. Δ K uniformly in t [ 0 , t f ] , this problem cannot have more than one solution. Therefore, Δ K * ( t , ε ) defined by (164) is the unique solution of the problem (157). This observation, along with the equation (155) and the inequalities in (165), proves the lemma. □
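The successive-approximation argument of the proof can be mirrored on a scalar analogue of the integral equation (160): Δ(t) = ∫_t^{t_f} (Δ(σ)² − d(σ)) dσ with a small forcing term d, for which the map defined by the right-hand side is a contraction in the sup norm. The data below are purely illustrative.

```python
import numpy as np

tf, n = 1.0, 4001
t = np.linspace(0.0, tf, n)
d = 0.1 * t                       # small illustrative forcing term

def tail_integral(g):
    """Trapezoid approximation of the integral of g from t to tf."""
    cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(t))))
    return cum[-1] - cum

delta = np.zeros(n)               # Delta^0 = 0, as in the proof
diffs = []
for _ in range(25):
    new = tail_integral(delta ** 2 - d)
    diffs.append(np.max(np.abs(new - delta)))
    delta = new
print(diffs[0], diffs[-1])
```

The sup-norm distance between consecutive iterates shrinks geometrically, which is exactly the mechanism behind the convergence of the sequence in (163).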

6.2.8. Comparison of the Asymptotic Solutions to the Terminal-Value Problem (17) in the Cases I and II

Comparing the asymptotic solutions of the problem (17) in the cases I and II, we can observe the following.
In the case I, the problem (17) is reduced to a singularly perturbed terminal-value problem with only a fast state variable (see the equation (27)). This feature yields a uniform algorithm for constructing the entire matrix asymptotic solution and a similar form of all its entries. In particular, the outer solution terms are obtained from algebraic (not differential) equations. The zero-order (with respect to ε ) boundary corrections appear in all the entries of the asymptotic solution.
In the case II (in contrast with the case I), the problem (17) is reduced to a singularly perturbed terminal-value problem with two types of state variables: two fast matrix state variables and one slow matrix state variable (see the equations (108)-(110)). In this case, the outer solution terms corresponding to the slow state variable are obtained from differential equations. The zero-order (with respect to ε ) boundary correction corresponding to the slow state variable equals zero. The outer solution terms corresponding to the fast state variables are obtained from algebraic equations. The zero-order boundary corrections corresponding to these state variables are not zero.
The aforementioned observation means a considerable difference in the derivation and the form of the asymptotic solutions to the problem (17) in the cases I and II.
Remark 12. 
For the particular block form of the matrix A ( t ) , where A 2 ( t ) ≡ 0 , A 3 ( t ) ≡ 0 , A 4 ( t ) ≡ 0 , the second and third components of the solution to the problem (108)-(110) become identically zero, i.e., K ^ 2 ( t , ε ) ≡ 0 and K ^ 3 ( t , ε ) ≡ 0 . Hence, the problem (108)-(110) is reduced to the much simpler terminal-value problem
ε d K ^ 1 ( t , ε ) d t = ε K ^ 1 ( t , ε ) A 1 ( t ) ε A 1 T ( t ) K ^ 1 ( t , ε ) + K ^ 1 ( t , ε ) 2 ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) K ^ 1 ( t , ε ) Λ 1 ( t ) , t [ 0 , t f ] , K ^ 1 ( t f , ε ) = 0 .
This problem has the same form as the terminal-value problem (27). The asymptotic solution of the latter can be obtained from the asymptotics of K ^ 1 ( t , ε ) by replacing there A 1 ( t ) with A ( t ) and Λ 1 ( t ) with Λ ( t ) . However, for the sake of better readability of the paper (including a clearer explanation of the differences between the asymptotic analysis in the case I and in the case II), we present the case I as a separate case with the proper details.

6.3. Asymptotic Solution of the Terminal-Value Problem (112)-(113)

Similarly to the problem (28), the problem (112)-(113) is also singularly perturbed. However, in contrast with (28), the problem (112)-(113) contains not only a fast state variable but also a slow one. Namely, the state variable q ^ 1 ( t , ε ) is a fast state variable, while the state variable q ^ 2 ( t , ε ) is a slow state variable.
Similarly to (51), we look for the first-order asymptotic solution of (112)-(113) in the form
q ^ k , 1 ( t , ε ) = q ^ k , 0 o ( t ) + q ^ k , 0 b ( τ ) + ε q ^ k , 1 o ( t ) + q ^ k , 1 b ( τ ) , τ = t t f ε , k = 1 , 2 .
The terms in (166) have the same meaning as the corresponding terms in (51). These terms are obtained by substituting q ^ k , 1 ( t , ε ) and K ^ i , 1 ( t , ε ) into the problem (112)-(113) instead of q ^ k ( t , ε ) , ( k = 1 , 2 ) , and K ^ i ( t , ε ) , ( i = 1 , 2 , 3 ) , and equating the coefficients of like powers of ε on both sides of the resulting equations, separately for the terms depending on t and on τ .

6.3.1. Obtaining the Boundary Correction q ^ 2 , 0 b ( τ )

This boundary correction satisfies the equation
d q ^ 2 , 0 b ( τ ) d τ = 0 , τ 0 ,
which, subject to the condition lim τ → − ∞ q ^ 2 , 0 b ( τ ) = 0 , yields the solution
q ^ 2 , 0 b ( τ ) = 0 , τ 0 .

6.3.2. Obtaining the Outer Solution Terms q ^ 1 , 0 o ( t ) and q ^ 2 , 0 o ( t )

Taking into account the equation (122), these terms satisfy the following equations:
0 = K ^ 1 , 0 o ( t ) q ^ 1 , 0 o ( t ) , t [ 0 , t f ] ,
d q ^ 2 , 0 o ( t ) d t = A 4 T ( t ) K ^ 3 , 0 o ( t ) q ^ 2 , 0 o ( t ) , t [ 0 , t f ] , q ^ 2 , 0 o ( t f ) = 0 .
Solving these equations and taking into account that K ^ 1 , 0 o ( t ) = Λ 1 1 / 2 ( t ) is an invertible matrix for all t [ 0 , t f ] , we directly have
q ^ 1 , 0 o ( t ) = 0 , q ^ 2 , 0 o ( t ) = 0 , t [ 0 , t f ] .

6.3.3. Obtaining the Boundary Correction q ^ 1 , 0 b ( τ )

Using the equation (168) yields the terminal-value problem for q ^ 1 , 0 b ( τ )
d q ^ 1 , 0 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) q ^ 1 , 0 b ( τ ) , τ 0 , q ^ 1 , 0 b ( 0 ) = 0 ,
implying
q ^ 1 , 0 b ( τ ) = 0 , τ 0 .

6.3.4. Obtaining the Boundary Correction q ^ 2 , 1 b ( τ )

Using the equations (167) and (169), we derive the equation for q ^ 2 , 1 b ( τ )
d q ^ 2 , 1 b ( τ ) d τ = 0 , τ 0 ,
which, subject to the condition lim τ → − ∞ q ^ 2 , 1 b ( τ ) = 0 , yields the solution
q ^ 2 , 1 b ( τ ) = 0 , τ 0 .

6.4. Obtaining the Outer Solution Terms q ^ 1 , 1 o ( t ) and q ^ 2 , 1 o ( t )

Using the equations (122),(168) and (170), we have the equations for q ^ 1 , 1 o ( t ) and q ^ 2 , 1 o ( t )
0 = K ^ 1 , 0 o ( t ) q ^ 1 , 1 o ( t ) 2 K ^ 1 , 0 o ( t ) f 1 ( t ) , t [ 0 , t f ] ,
d q ^ 2 , 1 o ( t ) d t = A 4 T ( t ) K ^ 3 , 0 o ( t ) q ^ 2 , 1 o ( t ) 2 A 2 T ( t ) f 1 ( t ) 2 K ^ 3 , 0 o ( t ) f 2 ( t ) , t [ 0 , t f ] , q ^ 2 , 1 o ( t f ) = 0 .
The equation (171) immediately yields
q ^ 1 , 1 o ( t ) = 2 f 1 ( t ) , t [ 0 , t f ] .
The terminal-value problem (172) has the unique solution q ^ 2 , 1 o ( t ) in the entire interval [ 0 , t f ] :
q ^ 2 , 1 o ( t ) = 2 t f t Φ ^ ( t ) Φ ^ 1 ( σ ) A 2 T ( σ ) f 1 ( σ ) + K ^ 3 , 0 o ( σ ) f 2 ( σ ) d σ ,
where the matrix-valued function Φ ^ ( t ) is given by (140).
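Formula (174) is a variation-of-constants representation: if Φ solves dΦ/dt = M(t)Φ with Φ(t_f) = 1, then q(t) = ∫_t^{t_f} Φ(t) Φ⁻¹(σ) g(σ) dσ solves dq/dt = M(t)q − g(t), q(t_f) = 0. The scalar check below uses illustrative m and g of our own choosing (the signs and factors in (172) and (174) may differ from this model) and compares a quadrature of the formula with direct backward integration.

```python
import numpy as np

tf = 1.0
m = lambda t: 0.5 - t                        # illustrative coefficient
g = lambda t: np.cos(3 * t)                  # illustrative forcing

# fundamental solution: Phi' = m*Phi, Phi(tf) = 1 (the antiderivative of m is known here)
Mint = lambda t: 0.5 * t - t ** 2 / 2
Phi = lambda t: np.exp(Mint(t) - Mint(tf))

# variation-of-constants value q(0) = integral_0^tf Phi(0)/Phi(s) * g(s) ds (trapezoid rule)
s = np.linspace(0.0, tf, 20001)
vals = Phi(0.0) / Phi(s) * g(s)
q_voc = float(np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(s)))

# direct backward RK4 for q' = m(t) q - g(t), q(tf) = 0
rhs = lambda t, q: m(t) * q - g(t)
h, t, q = 1e-4, tf, 0.0
for _ in range(int(round(tf / h))):
    k1 = rhs(t, q)
    k2 = rhs(t - h / 2, q - h / 2 * k1)
    k3 = rhs(t - h / 2, q - h / 2 * k2)
    k4 = rhs(t - h, q - h * k3)
    q -= h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t -= h

gap = abs(q - q_voc)
print(q, q_voc, gap)
```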

6.4.1. Obtaining the Boundary Correction q ^ 1 , 1 b ( τ )

This correction satisfies the following terminal-value problem:
d q ^ 1 , 1 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) q ^ 1 , 1 b ( τ ) , τ 0 , q ^ 1 , 1 b ( 0 ) = q ^ 1 , 1 o ( t f ) = 2 f 1 ( t f ) .
The solution of this problem is
q ^ 1 , 1 b ( τ ) = 2 Φ ^ 1 ( 0 , τ ) f 1 ( t f ) , τ 0 ,
where the matrix-valued function Φ ^ 1 ( σ , τ ) is given by (145)-(147) and satisfies the inequality (148). This inequality, along with the equation (175), yields
‖ q ^ 1 , 1 b ( τ ) ‖ ≤ 2 a ^ Φ , 1 ‖ f 1 ( t f ) ‖ exp ( β ^ τ ) , τ ≤ 0 .

6.4.2. Justification of the Asymptotic Solution to the Problem (112)-(113)

Using the equations (166),(167),(168),(169),(170), we can rewrite the vector-valued functions q ^ 1 , 1 ( t , ε ) and q ^ 2 , 1 ( t , ε ) as:
q ^ 1 , 1 ( t , ε ) = ε q ^ 1 , 1 o ( t ) + q ^ 1 , 1 b ( τ ) , q ^ 2 , 1 ( t , ε ) = ε q ^ 2 , 1 o ( t ) .
Using this equation, as well as the equations (171),(172),(175) and the inequality (176), we obtain, similarly to Lemma 9, the following assertion.
Lemma 10. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, for all ε ∈ ( 0 , ε ^ 10 ] ( ε ^ 10 > 0 is introduced in Lemma 9), the terminal-value problem (112)-(113) has the unique solution col q ^ 1 ( t , ε ) , q ^ 2 ( t , ε ) in the entire interval [ 0 , t f ] . Moreover, there exists a positive number ε ^ 20 ≤ ε ^ 10 such that, for all ε ∈ ( 0 , ε ^ 20 ] , this solution satisfies the inequalities ‖ q ^ j ( t , ε ) − q ^ j , 1 ( t , ε ) ‖ ≤ b ^ q , j ε 2 , t ∈ [ 0 , t f ] , ( j = 1 , 2 ) , where b ^ q , j > 0 , ( j = 1 , 2 ) , are some constants independent of ε.

6.4.3. Comparison of the Asymptotic Solutions to the Terminal-Value Problem (18) in the Cases I and II

Comparing the asymptotic solutions of the problem (18) in the cases I and II, we can observe the following.
In the case I, the problem (18) is reduced (like the problem (17)) to a singularly perturbed terminal-value problem with only a fast state variable (see the equation (28)). This feature yields a uniform algorithm for constructing the entire vector asymptotic solution and a similar form of all its entries. In particular, the outer solution terms are obtained from algebraic (not differential) equations. The first-order (with respect to ε ) boundary corrections appear in all the entries of the asymptotic solution.
In the case II (in contrast with the case I), the problem (18) is reduced (like the problem (17)) to a singularly perturbed terminal-value problem with two types of state variables. The transformed problem has one fast vector state variable and one slow vector state variable (see the equations (112)-(113)). In this case, the outer solution terms corresponding to the slow state variable are obtained from differential equations. The first-order (with respect to ε ) boundary correction corresponding to the slow state variable equals zero. The outer solution terms corresponding to the fast state variable are obtained from algebraic equations. The first-order boundary correction corresponding to this state variable is not zero.
The aforementioned observation means a considerable difference in the derivation and the form of the asymptotic solutions to the problem (18) in the cases I and II.
Remark 13. 
Similarly to Remark 12, for the particular block form of the matrix A ( t ) , where A 2 ( t ) ≡ 0 , A 3 ( t ) ≡ 0 , A 4 ( t ) ≡ 0 , the second component of the solution to the problem (112)-(113) becomes identically zero, i.e., q ^ 2 ( t , ε ) ≡ 0 . Due to this feature and the identities K ^ 2 ( t , ε ) ≡ 0 , K ^ 3 ( t , ε ) ≡ 0 , the problem (112)-(113) is reduced to the much simpler terminal-value problem
ε d q ^ 1 ( t , ε ) d t = K ^ 1 ( t , ε ) ε A 1 T ( t ) ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) q ^ 1 ( t , ε ) 2 ε K ^ 1 ( t , ε ) f 1 ( t ) , t [ 0 , t f ] , q ^ 1 ( t f , ε ) = 0 .
This problem has the same form as the terminal-value problem (28). The asymptotic solution of the latter can be obtained from the asymptotics of q ^ 1 ( t , ε ) by replacing there f 1 ( t ) with f ( t ) and Φ ^ 1 ( σ , t ) with Φ ( σ , t ) given in (47). However, for the sake of better readability of the paper (including a clearer explanation of the differences between the asymptotic analysis in the case I and in the case II), we present the case I as a separate case with the proper details.

6.5. Asymptotic Solution of the Terminal-Value Problem (114)

Solving the problem (114) and taking into account Lemma 10, we obtain
s ( t , ε ) = t f t [ 1 4 q ^ 1 T ( σ , ε ) q ^ 1 ( σ , ε ) + q ^ 2 T ( σ , ε ) q ^ 2 ( σ , ε ) ε 2 4 ( q ^ 1 T ( σ , ε ) S v 1 ( σ ) q ^ 1 ( σ , ε ) + q ^ 1 T ( σ , ε ) S v 2 ( σ ) q ^ 2 ( σ , ε ) + q ^ 2 T ( σ , ε ) S v 2 T ( σ ) q ^ 1 ( σ , ε ) + q ^ 2 T ( σ , ε ) S v 3 ( σ ) q ^ 2 ( σ , ε ) ) ε f 1 T ( σ ) q ^ 1 ( σ , ε ) + f 2 T ( σ ) q ^ 2 ( σ , ε ) ] d σ , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] .
Let us consider the function
s ^ ( t , ε ) = ε 2 t f t [ 1 4 q ^ 1 , 1 o ( σ ) T q ^ 1 , 1 o ( σ ) + q ^ 2 , 1 o ( σ ) T q ^ 2 , 1 o ( σ ) f 1 T ( σ ) q ^ 1 , 1 o ( σ ) f 2 T ( σ ) q ^ 2 , 1 o ( σ ) ] d σ , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] .
Using (173), this function can be represented as:
s ^ ( t , ε ) = ε 2 t f t 1 4 q ^ 2 , 1 o ( σ ) T q ^ 2 , 1 o ( σ ) f 1 T ( σ ) f 1 ( σ ) f 2 T ( σ ) q ^ 2 , 1 o ( σ ) d σ , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] .
The following assertion is proven similarly to Lemma 5, using the equations (177),(178) and Lemma 10.
Lemma 11. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, for all ε ( 0 , ε ^ 20 ] ( ε ^ 20 > 0 is introduced in Lemma 10), the following inequality is satisfied:
| s ( t , ε ) − s ^ ( t , ε ) | ≤ c ^ 10 ε 3 , t ∈ [ 0 , t f ] , ε ∈ ( 0 , ε ^ 20 ] ,
where c ^ 10 > 0 is some constant independent of ε.
Remark 14. 
Comparison of the equation (73) and Lemma 5 with the equation (178) and Lemma 11, respectively, directly shows that the asymptotic solutions of the problem (19) in the cases I and II considerably differ from each other. However, due to Remarks 12 and 13, if A 2 ( t ) ≡ 0 , A 3 ( t ) ≡ 0 , A 4 ( t ) ≡ 0 , then the asymptotic solution of the problem (19) in the case I is obtained from the asymptotic solution of this problem in the case II by replacing A 1 ( t ) with A ( t ) and Λ 1 ( t ) with Λ ( t ) .

6.6. Asymptotic Approximation of the CCDG Value

Consider the following value, depending on z 0 :
J app I I ( z 0 ) = z 0 T K ^ 1 ( 0 , ε ) z 0 + z 0 T Q ^ 1 ( 0 , ε ) + s ^ ( 0 , ε ) ,
where K ^ 1 ( t , ε ) is given in (158), s ^ ( t , ε ) is given by (178), and
Q ^ 1 ( t , ε ) = col ε q ^ 1 , 1 ( t , ε ) , ε q ^ 2 , 1 ( t , ε ) .
Using the equations (22),(105),(106),(180), as well as Lemmas 9, 10 and 11, we directly have the following assertion.
Theorem 3. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, for all ε ( 0 , ε ^ 20 ] ( ε ^ 20 > 0 is introduced in Lemma 10), the following inequality is satisfied:
| J ε * ( z 0 ) − J app I I ( z 0 ) | ≤ ε 3 ( a ^ K , 1 ‖ z 0 , 1 ‖ 2 + b ^ q , 1 ‖ z 0 , 1 ‖ + b ^ q , 2 ‖ z 0 , 2 ‖ + c ^ 10 ) + ε 4 ( 2 a ^ K , 2 ‖ z 0 , 1 ‖ ‖ z 0 , 2 ‖ + a ^ K , 3 ‖ z 0 , 2 ‖ 2 ) ,
where z 0 , 1 ∈ R l is the upper block of the vector z 0 , while z 0 , 2 ∈ R n − l is the lower block of this vector.
Consider the following matrix and vector:
K ¯ 1 ( ε ) = ε K ^ 1 , 0 o ( 0 ) + ε K ^ 1 , 1 o ( 0 ) ε 2 K ^ 2 , 0 o ( 0 ) + ε K ^ 2 , 1 o ( 0 ) ε 2 K ^ 2 , 0 o ( 0 ) + ε K ^ 2 , 1 o ( 0 ) T ε 2 K ^ 3 , 0 o ( 0 ) + ε K ^ 3 , 1 o ( 0 ) , Q ¯ 1 ( ε ) = col ε 2 q ^ 1 , 1 o ( 0 ) , ε 2 q ^ 2 , 1 o ( 0 ) .
Based on this matrix and vector, let us construct the following value, depending on z 0 :
J app , 1 I I ( z 0 ) = z 0 T K ¯ 1 ( ε ) z 0 + z 0 T Q ¯ 1 ( ε ) + s ^ ( 0 , ε ) .
Similarly to Corollary 2, we have the following assertion.
Corollary 3. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, there exists a positive number ε ˇ 20 ≤ ε ^ 20 such that, for all ε ∈ ( 0 , ε ˇ 20 ] , the following inequality is satisfied:
| J ε * ( z 0 ) − J app , 1 I I ( z 0 ) | ≤ ε 3 ( a ˇ K , 1 ‖ z 0 , 1 ‖ 2 + b ˇ q , 1 ‖ z 0 , 1 ‖ + b ˇ q , 2 ‖ z 0 , 2 ‖ + c ^ 10 ) + ε 4 ( 2 a ˇ K , 2 ‖ z 0 , 1 ‖ ‖ z 0 , 2 ‖ + a ˇ K , 3 ‖ z 0 , 2 ‖ 2 ) ,
where a ˇ K , i > 0 , ( i = 1 , 2 , 3 ) and b ˇ q , k > 0 , ( k = 1 , 2 ) are some constants independent of ε.

6.7. Approximate-Saddle Point of the CCDG

Consider the following controls of the minimizer and the maximizer, respectively:
$$\hat u_\varepsilon(z,t)=-\frac{1}{\varepsilon^2}\hat K_1(t,\varepsilon)z-\frac{1}{2\varepsilon^2}\hat Q_1(t,\varepsilon)\in U,\qquad \hat v_\varepsilon(z,t)=G_v^{-1}(t)C^T(t)\hat K_1(t,\varepsilon)z+\frac12 G_v^{-1}(t)C^T(t)\hat Q_1(t,\varepsilon)\in V,$$
$$(z,t)\in\mathbb R^n\times[0,t_f],\qquad \varepsilon\in(0,\varepsilon_{20}].$$
Remark 15. 
The controls u ^ ε ( z , t ) and v ^ ε ( z , t ) are obtained from the controls u ε * ( z , t ) and v ε * ( z , t ) (see the equations (20) and (21)) by replacing there K ( t , ε ) with K ^ 1 ( t , ε ) and q ( t , ε ) with Q ^ 1 ( t , ε ) .
Due to the linearity of these controls with respect to z R n for any t [ 0 , t f ] , ε ( 0 , ε 20 ] and their continuity with respect to t [ 0 , t f ] for any z R n , ε ( 0 , ε 20 ] , the pair u ^ ε ( z , t ) , v ^ ε ( z , t ) is admissible in the CCDG.
Substituting ( u ( t ) , v ( t ) ) = u ^ ε z ( t ) , t , v ^ ε z ( t ) , t into the system (6) and the cost functional (7), using the equation (16) and taking into account the symmetry of the matrix K ^ 1 ( t , ε ) , we obtain (similarly to (79)-(81)) the following system and cost functional:
$$\frac{dz(t)}{dt}=\hat A_K(t,\varepsilon)z(t)+\hat f(t,\varepsilon),\quad t\in[0,t_f],\qquad z(0)=z_0,$$
$$\hat J(z_0)=\int_0^{t_f}\Big(z^T(t)\hat\Lambda(t,\varepsilon)z(t)+z^T(t)\hat g(t,\varepsilon)+\hat e(t,\varepsilon)\Big)dt,$$
where A ^ K ( t , ε ) is given in (158), and
$$\hat f(t,\varepsilon)=f(t)-\frac12 S(t,\varepsilon)\hat Q_1(t,\varepsilon),\qquad \hat\Lambda(t,\varepsilon)=\Lambda(t)+\hat K_1(t,\varepsilon)S(t,\varepsilon)\hat K_1(t,\varepsilon),$$
$$\hat g(t,\varepsilon)=\hat K_1(t,\varepsilon)S(t,\varepsilon)\hat Q_1(t,\varepsilon),\qquad \hat e(t,\varepsilon)=\frac14\hat Q_1^T(t,\varepsilon)S(t,\varepsilon)\hat Q_1(t,\varepsilon).$$
Using these functions, we construct (similarly to (82)-(84)) the following terminal-value problems:
$$\frac{d\hat L(t,\varepsilon)}{dt}=-\hat L(t,\varepsilon)\hat A_K(t,\varepsilon)-\hat A_K^T(t,\varepsilon)\hat L(t,\varepsilon)-\hat\Lambda(t,\varepsilon),\quad \hat L(t,\varepsilon)\in\mathbb R^{n\times n},\ t\in[0,t_f],\qquad \hat L(t_f,\varepsilon)=0,$$
$$\frac{d\hat\eta(t,\varepsilon)}{dt}=-\hat A_K^T(t,\varepsilon)\hat\eta(t,\varepsilon)-2\hat L(t,\varepsilon)\hat f(t,\varepsilon)-\hat g(t,\varepsilon),\quad \hat\eta(t,\varepsilon)\in\mathbb R^{n},\ t\in[0,t_f],\qquad \hat\eta(t_f,\varepsilon)=0,$$
$$\frac{d\hat\kappa(t,\varepsilon)}{dt}=-\hat f^T(t,\varepsilon)\hat\eta(t,\varepsilon),\quad \hat\kappa(t,\varepsilon)\in\mathbb R,\ t\in[0,t_f],\qquad \hat\kappa(t_f,\varepsilon)=\int_0^{t_f}\hat e(\sigma,\varepsilon)\,d\sigma,$$
where ε ∈ ( 0 , ε ^ 20 ] .
Remark 16. 
Due to the linearity, the problem (186) has the unique solution L ^ ( t , ε ) in the entire interval [ 0 , t f ] for all ε ( 0 , ε ^ 20 ] . Therefore, the problems (187) and (188) also have the unique solutions η ^ ( t , ε ) and κ ^ ( t , ε ) , respectively, in the entire interval [ 0 , t f ] for all ε ( 0 , ε ^ 20 ] .
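Since all three problems in Remark 16 are linear terminal-value problems, they can be integrated numerically by reversing time, so that a standard forward ODE stepper applies. The following sketch illustrates only this reversal scheme; the RK4 stepper and the scalar test problem are illustrative assumptions, not the paper's actual computations.

```python
import math

# Sketch: integrating a terminal-value problem backward in time.
# The substitution r = t_f - t turns dX/dt = F(t, X), X(t_f) = X_f
# into dY/dr = -F(t_f - r, Y), Y(0) = X_f, which any forward ODE
# stepper can handle.

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve_terminal_value(F, t_f, x_f, n_steps=2000):
    """Solve dX/dt = F(t, X) on [0, t_f] with X(t_f) = x_f.
    Returns X(0), obtained by forward integration in reversed time."""
    g = lambda r, y: -F(t_f - r, y)   # reversed-time right-hand side
    h = t_f / n_steps
    y, r = x_f, 0.0
    for _ in range(n_steps):
        y = rk4_step(g, r, y, h)
        r += h
    return y

# Illustrative linear scalar problem dL/dt = -2aL - lam, L(t_f) = 0,
# whose exact solution is L(t) = lam/(2a) * (exp(2a(t_f - t)) - 1).
a, lam, t_f = 1.0, 3.0, 2.0
L0 = solve_terminal_value(lambda t, L: -2 * a * L - lam, t_f, 0.0)
exact = lam / (2 * a) * (math.exp(2 * a * t_f) - 1)
```

The same reversal applies entrywise to the matrix problem (186) and to the vector and scalar problems (187)-(188).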
Similarly to Lemma 6, we have the following assertion.
Lemma 12. 
The value J ^ ( z 0 ) , given by the equations (184)-(185), can be represented in the form
$$\hat J(z_0)=z_0^T\hat L(0,\varepsilon)z_0+z_0^T\hat\eta(0,\varepsilon)+\hat\kappa(0,\varepsilon),\qquad \varepsilon\in(0,\hat\varepsilon_{20}].$$
Taking into account the symmetry of the matrix L ^ ( t , ε ) , let us represent this matrix in the block form as:
$$\hat L(t,\varepsilon)=\begin{pmatrix}\varepsilon\hat L_1(t,\varepsilon)&\varepsilon^2\hat L_2(t,\varepsilon)\\ \varepsilon^2\hat L_2^T(t,\varepsilon)&\varepsilon^2\hat L_3(t,\varepsilon)\end{pmatrix},$$
where the matrices L ^ 1 ( t , ε ) , L ^ 2 ( t , ε ) and L ^ 3 ( t , ε ) are of the dimensions l × l , l × ( n − l ) and ( n − l ) × ( n − l ) , respectively; L ^ 1 T ( t , ε ) = L ^ 1 ( t , ε ) , L ^ 3 T ( t , ε ) = L ^ 3 ( t , ε ) .
Lemma 13. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, there exists a positive number ε ^ 30 ≤ ε ^ 20 ( ε ^ 20 > 0 is introduced in Lemma 10) such that, for all ε ∈ ( 0 , ε ^ 30 ] , the following inequalities are satisfied:
$$\big\|\hat K_i(t,\varepsilon)-\hat L_i(t,\varepsilon)\big\|\le\hat a_{L,i}\,\varepsilon^4,\quad i=1,2,3,\quad t\in[0,t_f],$$
where K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) , K ^ 3 ( t , ε ) is the solution of the terminal-value problem (108)-(110); the matrix-valued function K ( t , ε ) , given by (105), is the solution of the terminal-value problem (17); a ^ L , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε.
Proof. 
For any ε ( 0 , ε ^ 20 ] , let us consider the matrix-valued function
$$\hat\Delta_{KL}(t,\varepsilon)=K(t,\varepsilon)-\hat L(t,\varepsilon),\quad t\in[0,t_f].$$
Similarly to (87), we obtain the terminal-value problem for Δ ^ K L ( t , ε )
$$\frac{d\hat\Delta_{KL}(t,\varepsilon)}{dt}=-\hat\Delta_{KL}(t,\varepsilon)\hat A_K(t,\varepsilon)-\hat A_K^T(t,\varepsilon)\hat\Delta_{KL}(t,\varepsilon)+\big(K(t,\varepsilon)-\hat K_1(t,\varepsilon)\big)S(t,\varepsilon)\big(K(t,\varepsilon)-\hat K_1(t,\varepsilon)\big),\quad t\in[0,t_f],\qquad \hat\Delta_{KL}(t_f,\varepsilon)=0,$$
where K ^ 1 ( t , ε ) is given in (158).
Solving this terminal-value problem and using the results of [44], we have
$$\hat\Delta_{KL}(t,\varepsilon)=-\int_t^{t_f}\hat\Omega^T(\sigma,t,\varepsilon)\big(K(\sigma,\varepsilon)-\hat K_1(\sigma,\varepsilon)\big)S(\sigma,\varepsilon)\big(K(\sigma,\varepsilon)-\hat K_1(\sigma,\varepsilon)\big)\hat\Omega(\sigma,t,\varepsilon)\,d\sigma,\quad 0\le t\le\sigma\le t_f,\ \varepsilon\in(0,\hat\varepsilon_{20}],$$
where for any given t [ 0 , t f ) and ε ( 0 , ε ^ 20 ] , the matrix-valued function Ω ^ ( σ , t , ε ) is the unique solution of the terminal-value problem (161); the blocks of the matrix-valued function Ω ^ ( σ , t , ε ) satisfy the inequalities (162).
Using the equations (16),(105),(158),(190),(192),(193), as well as Lemma 9 and the inequalities (162), we obtain by a routine algebra the validity of the inequalities in (191) with ε ^ 30 = min { ε ^ 20 , ε ^ Ω } . This completes the proof of the lemma. □
Let us represent the vector η ^ ( t , ε ) in the block form as:
$$\hat\eta(t,\varepsilon)=\operatorname{col}\big(\varepsilon\hat\eta_1(t,\varepsilon),\,\varepsilon\hat\eta_2(t,\varepsilon)\big),\quad t\in[0,t_f],\ \varepsilon\in(0,\hat\varepsilon_{20}],$$
where the vectors η ^ 1 ( t , ε ) and η ^ 2 ( t , ε ) are of the dimensions l and ( n − l ) , respectively.
Lemma 14. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, for all ε ∈ ( 0 , ε ^ 30 ] ( ε ^ 30 > 0 is introduced in Lemma 13), the following inequalities are satisfied:
$$\big\|\hat q_j(t,\varepsilon)-\hat\eta_j(t,\varepsilon)\big\|\le\hat a_{\eta,j}\,\varepsilon^4,\quad j=1,2,\quad t\in[0,t_f],$$
$$\big|s(0,\varepsilon)-\hat\kappa(0,\varepsilon)\big|\le\hat a_{\kappa,1}\varepsilon^4+\hat a_{\kappa,2}\varepsilon^5,\qquad \hat a_{\kappa,1}=\frac14\big(\hat b_{q,1}^2+\hat b_{q,2}^2\big)t_f,\qquad \hat a_{\kappa,2}=\big(\hat a_{\eta,1}+\hat a_{\eta,2}\big)\Big(\frac14\,t_f+\int_0^{t_f}\|f(\sigma)\|\,d\sigma\Big),$$
where col q ^ 1 ( t , ε ) , q ^ 2 ( t , ε ) is the solution of the terminal-value problem (112)-(113); the vector-valued function q ( t , ε ) , given by (106), is the solution of the terminal-value problem (18); the scalar function s ( t , ε ) is the solution of the terminal-value problem (19) and of the equivalent problem (114); a ^ η , j > 0 , ( j = 1 , 2 ) are some constants independent of ε; the constants b ^ q , 1 > 0 and b ^ q , 2 > 0 are introduced in Lemma 10.
Proof. 
We start the proof with the inequalities (195).
For any ε ( 0 , ε ^ 30 ] , let us consider the vector-valued function
$$\hat\Delta_{q\eta}(t,\varepsilon)=q(t,\varepsilon)-\hat\eta(t,\varepsilon),\quad t\in[0,t_f].$$
Similarly to (95), we obtain the terminal-value problem for Δ ^ q η ( t , ε )
$$\frac{d\hat\Delta_{q\eta}(t,\varepsilon)}{dt}=-\hat A_K^T(t,\varepsilon)\hat\Delta_{q\eta}(t,\varepsilon)+2\big(\hat L(t,\varepsilon)-K(t,\varepsilon)\big)f(t)+\big(K(t,\varepsilon)-\hat L(t,\varepsilon)\big)S(t,\varepsilon)\hat Q_1(t,\varepsilon)+\big(K(t,\varepsilon)-\hat K_1(t,\varepsilon)\big)S(t,\varepsilon)\big(q(t,\varepsilon)-\hat Q_1(t,\varepsilon)\big),\quad t\in[0,t_f],\qquad \hat\Delta_{q\eta}(t_f,\varepsilon)=0,$$
where K ^ 1 ( t , ε ) and Q ^ 1 ( t , ε ) are given in (158) and (181), respectively.
Solving this terminal-value problem, we have
$$\hat\Delta_{q\eta}(t,\varepsilon)=-\int_t^{t_f}\hat\Omega^T(\sigma,t,\varepsilon)\Big[2\big(\hat L(\sigma,\varepsilon)-K(\sigma,\varepsilon)\big)f(\sigma)+\big(K(\sigma,\varepsilon)-\hat L(\sigma,\varepsilon)\big)S(\sigma,\varepsilon)\hat Q_1(\sigma,\varepsilon)+\big(K(\sigma,\varepsilon)-\hat K_1(\sigma,\varepsilon)\big)S(\sigma,\varepsilon)\big(q(\sigma,\varepsilon)-\hat Q_1(\sigma,\varepsilon)\big)\Big]d\sigma,\quad 0\le t\le\sigma\le t_f,\ \varepsilon\in(0,\hat\varepsilon_{30}],$$
where for any given t ∈ [ 0 , t f ) and ε ∈ ( 0 , ε ^ 30 ] , the matrix-valued function Ω ^ ( σ , t , ε ) is the unique solution of the terminal-value problem (161); the blocks of this matrix-valued function satisfy the inequalities (162).
Using the equations (16),(105),(158),(181),(194),(197), as well as Lemmas 9, 10, Lemma 13 and the inequalities (162), we obtain by a routine algebra the validity of the inequalities in (195).
The inequality (196) is shown similarly to the inequality (93) (see the proof of Lemma 8).
Thus, the lemma is proven. □
Theorem 4. 
Let the assumptions A1-A3 and the case II (see the equation (24)) be valid. Then, for all ε ∈ ( 0 , ε ^ 30 ] ( ε ^ 30 > 0 is introduced in Lemma 13), the following inequality is satisfied:
$$\big|J_\varepsilon^*(z_0)-\hat J(z_0)\big|\le\varepsilon^4\hat a_{\kappa,1}+\varepsilon^5\Big(\hat a_{L,1}\|z_{0,1}\|^2+\hat a_{\eta,1}\|z_{0,1}\|+\hat a_{\eta,2}\|z_{0,2}\|+\hat a_{\kappa,2}\Big)+\varepsilon^6\Big(2\hat a_{L,2}\|z_{0,1}\|\,\|z_{0,2}\|+\hat a_{L,3}\|z_{0,2}\|^2\Big).$$
Proof. 
The statement of the theorem directly follows from the equations (22),(189), as well as the equations (105),(106),(190),(194) and Lemmas 13, 14. □
Remark 17. 
Due to Theorem 4, the outcome J ^ ( z 0 ) of the CCDG, generated by the pair of controls u ^ ε ( z , t ) , v ^ ε ( z , t ) , approximates the CCDG value J ε * ( z 0 ) with high accuracy for all sufficiently small ε > 0 . This observation allows us to call the pair u ^ ε ( z , t ) , v ^ ε ( z , t ) an approximate-saddle point of the CCDG.

7. Example

In this section, we consider a particular case of CCDG (see (6)-(7)) with the following data:
$$n=2,\quad m=2,\quad t_f=2,\quad \Lambda(t)\equiv\Lambda=\operatorname{diag}(\lambda_1,\lambda_2),\quad A(t)\equiv A=\begin{pmatrix}1&1\\3&2\end{pmatrix},\quad C(t)\equiv C=\begin{pmatrix}4&0\\0&4\end{pmatrix},$$
$$G_v(t)\equiv G_v=\begin{pmatrix}8&0\\0&8\end{pmatrix},\quad f(t)=\begin{pmatrix}2t\\t\end{pmatrix},\quad z_0=\begin{pmatrix}1\\1\end{pmatrix}.$$
In this example, the symmetric matrix-valued functions K ( t , ε ) and P ( t , ε ) , given by the terminal-value problems (17) and (27), respectively, are of the dimension 2 × 2 . The vector-valued functions q ( t , ε ) and p ( t , ε ) , given by the terminal-value problems (18) and (28), respectively, have the dimension 2.

7.1. Case I of the Matrix Λ

In this subsection, we treat the differential game (6)-(7),(198) in the case I (see (23)), i.e., for λ 1 > 0 and λ 2 > 0 . We choose
λ 1 = 9 , λ 2 = 9 .
We start the asymptotic solution of the differential game (6)-(7),(198),(199) with the asymptotic solution of the terminal-value problem for P ( t , ε ) (see the equation (27)).
Using the equations (36),(38),(42) and the data of the example (198),(199), we directly have
$$P_0^o(t)\equiv P_0^o=3I_2,\qquad P_0^b(\tau)=-\frac{6\exp(6\tau)}{1+\exp(6\tau)}\,I_2,\qquad P_1^o(t)\equiv P_1^o=\begin{pmatrix}1&-2\\-2&2\end{pmatrix}.$$
We proceed to obtaining P 1 b ( τ ) , which is based on the equations (44),(46),(47) and the data of the example (198),(199).
Using the equations (44) and (200), we obtain by routine matrix calculations that Ψ ( τ ) ≡ 0 . From (47) and (199), we have
$$\Phi(\sigma,\tau)=\frac{\big(1+\exp(6\sigma)\big)\exp\big(3(\tau-\sigma)\big)}{1+\exp(6\tau)}\,I_2.$$
Thus, due to (46),
$$P_1^b(\tau)=-\frac{4\exp(6\tau)}{\big(1+\exp(6\tau)\big)^2}\begin{pmatrix}1&-2\\-2&2\end{pmatrix}.$$
From the above-derived expressions for P 0 b ( τ ) and P 1 b ( τ ) , we can see that both matrix-valued functions decay exponentially as τ → −∞ .
Based on the equations (33),(200),(201), let us construct the asymptotic solution P 1 ( t , ε ) of the problem (27) subject to the data (198),(199). For this purpose, we represent this matrix-valued function in the block form
$$P_1(t,\varepsilon)=\begin{pmatrix}P_{1,11}(t,\varepsilon)&P_{1,12}(t,\varepsilon)\\ P_{1,12}(t,\varepsilon)&P_{1,22}(t,\varepsilon)\end{pmatrix}.$$
The latter yields
$$P_{1,11}(t,\varepsilon)=3-\frac{6\exp(6\tau)}{1+\exp(6\tau)}+\varepsilon\Big(1-\frac{4\exp(6\tau)}{\big(1+\exp(6\tau)\big)^2}\Big),\qquad P_{1,12}(t,\varepsilon)=\varepsilon\Big(-2+\frac{8\exp(6\tau)}{\big(1+\exp(6\tau)\big)^2}\Big),$$
$$P_{1,22}(t,\varepsilon)=3-\frac{6\exp(6\tau)}{1+\exp(6\tau)}+\varepsilon\Big(2-\frac{8\exp(6\tau)}{\big(1+\exp(6\tau)\big)^2}\Big),$$
where τ = ( t − t f ) / ε = ( t − 2 ) / ε .
In Figure 1, the absolute errors
$$\Delta P_{ij}(\varepsilon)=\max_{t\in[0,t_f]}\big|P_{ij}(t,\varepsilon)-P_{1,ij}(t,\varepsilon)\big|,\qquad \{ij\}=\{11\},\{12\},\{22\},$$
are depicted for ε ∈ [ 0.015 , 0.1 ] along with their mutual estimate function 5ε² . The figure illustrates Lemma 3.
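The terminal-value consistency of the expansion (202)-(203) can also be checked directly: at t = t_f (i.e., τ = 0) the outer and boundary-layer terms cancel, matching the terminal condition P(t_f, ε) = 0, while away from t_f the boundary-layer terms are exponentially negligible. A minimal sketch of this check (the signs in the entries below follow the reconstruction of (203) and are therefore an assumption):

```python
import math

def P1_blocks(t, eps, t_f=2.0):
    """Entries of the asymptotic approximation P1(t, eps) from (203),
    with the boundary-layer variable tau = (t - t_f)/eps <= 0."""
    tau = (t - t_f) / eps
    e6 = math.exp(6 * tau)
    p11 = 3 - 6 * e6 / (1 + e6) + eps * (1 - 4 * e6 / (1 + e6) ** 2)
    p12 = eps * (-2 + 8 * e6 / (1 + e6) ** 2)
    p22 = 3 - 6 * e6 / (1 + e6) + eps * (2 - 8 * e6 / (1 + e6) ** 2)
    return p11, p12, p22

eps = 0.05
# At t = t_f all entries must vanish (terminal condition P(t_f) = 0).
vals_at_tf = P1_blocks(2.0, eps)
# At t = 0 the boundary-layer terms ~ exp(-12/eps) are negligible,
# so P1 is determined by the outer terms alone.
vals_at_0 = P1_blocks(0.0, eps)
```

This kind of endpoint check is a cheap safeguard when assembling outer and boundary-layer terms of a matched asymptotic expansion.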
We proceed to the construction of the asymptotic solution to the terminal-value problem (28) subject to the data (198),(199).
Using the equations (53),(55),(56),(59) and the equation (48), we obtain
$$p_0^o(t)\equiv 0,\qquad p_0^b(\tau)\equiv 0,\qquad p_1^o(t)=\begin{pmatrix}4t\\2t\end{pmatrix},\qquad p_1^b(\tau)=-\frac{\exp(3\tau)}{1+\exp(6\tau)}\begin{pmatrix}16\\8\end{pmatrix}.$$
Using these results, as well as the equation (51) and the block representation of the vector-valued function p 1 ( t , ε )
$$p_1(t,\varepsilon)=\begin{pmatrix}p_{1,1}(t,\varepsilon)\\ p_{1,2}(t,\varepsilon)\end{pmatrix},$$
we obtain
$$p_{1,1}(t,\varepsilon)=\varepsilon\Big(4t-\frac{16\exp(3\tau)}{1+\exp(6\tau)}\Big),\qquad p_{1,2}(t,\varepsilon)=\varepsilon\Big(2t-\frac{8\exp(3\tau)}{1+\exp(6\tau)}\Big),$$
where (as in (203)) τ = ( t − 2 ) / ε .
In Figure 2, the absolute errors
$$\Delta p_1(\varepsilon)=\max_{t\in[0,t_f]}\big|p_{up}(t,\varepsilon)-p_{1,1}(t,\varepsilon)\big|,\qquad \Delta p_2(\varepsilon)=\max_{t\in[0,t_f]}\big|p_{low}(t,\varepsilon)-p_{1,2}(t,\varepsilon)\big|,\qquad p(t,\varepsilon)=\begin{pmatrix}p_{up}(t,\varepsilon)\\ p_{low}(t,\varepsilon)\end{pmatrix},$$
are depicted for ε ∈ [ 0.015 , 0.1 ] along with their mutual estimate function 1.7ε² . The figure illustrates Lemma 4.
To complete the construction of the asymptotic solutions to the terminal-value problems associated with the solvability conditions of the CCDG, we should construct the asymptotic solution to the problem (29) subject to the data (198),(199). Using the equation (73), we directly obtain
$$\bar s(t,\varepsilon)=\frac{5\varepsilon^2}{3}\big(8-t^3\big),\quad t\in[0,2].$$
In Figure 3, the absolute error
$$\Delta\bar s(\varepsilon)=\max_{t\in[0,t_f]}\big|s(t,\varepsilon)-\bar s(t,\varepsilon)\big|$$
is depicted for ε ∈ [ 0.015 , 0.1 ] along with the estimate function 6.6ε³ . The figure illustrates the inequality (74) in Lemma 5 with c 10 = 6.6 .
Now, using the equations (75),(77), as well as the equations (202)-(203),(204)-(205),(206) and the data (198), we obtain the following two approximations of the CCDG value:
$$J_{app}^{I}(z_0)=6\varepsilon\Big(1-\frac{2\exp(-12/\varepsilon)}{1+\exp(-12/\varepsilon)}\Big)+\varepsilon^2\bigg(\frac{37}{3}+\frac{4\exp(-12/\varepsilon)}{\big(1+\exp(-12/\varepsilon)\big)^2}-\frac{24\exp(-6/\varepsilon)}{1+\exp(-12/\varepsilon)}\bigg),\qquad J_{app,1}^{I}(z_0)=6\varepsilon+\frac{37\varepsilon^2}{3}.$$
The components of the approximate-saddle point have the form (78), where P 1 ( t , ε ) is given by (202)-(203), p 1 ( t , ε ) is given by (204)-(205), z R 2 , t [ 0 , 2 ] .
The game value J ε * , the values J app I , J app , 1 I and the outcome of the game J ˜ , generated by the approximate-saddle point, are shown in Table 1 for ε = 0.1 , 0.05 , 0.015 (the initial state position z 0 is fixed by (198), yielding the simplified notation). Note that J app I and J app , 1 I are not distinguishable in the table, because the differences
$$J_{app}^{I}(\varepsilon)-J_{app,1}^{I}(\varepsilon)=-\frac{12\varepsilon\exp(-12/\varepsilon)}{1+\exp(-12/\varepsilon)}+\varepsilon^2\bigg(\frac{4\exp(-12/\varepsilon)}{\big(1+\exp(-12/\varepsilon)\big)^2}-\frac{24\exp(-6/\varepsilon)}{1+\exp(-12/\varepsilon)}\bigg)$$
are negligibly small ( 2.1·10⁻²⁷ , 4.6·10⁻⁵⁴ , and 1.03·10⁻¹⁷⁶ , respectively).
In Table 2, the absolute and the relative errors of the game value approximations in the case I
$$\Delta J_{app}^{I}(\varepsilon)=\big|J_\varepsilon^*-J_{app}^{I}\big|,\qquad \delta J_{app}^{I}(\varepsilon)=\frac{\Delta J_{app}^{I}(\varepsilon)}{J_\varepsilon^*}\cdot 100\%,\qquad \Delta J_{app,1}^{I}(\varepsilon)=\big|J_\varepsilon^*-J_{app,1}^{I}\big|,\qquad \delta J_{app,1}^{I}(\varepsilon)=\frac{\Delta J_{app,1}^{I}(\varepsilon)}{J_\varepsilon^*}\cdot 100\%,$$
$$\Delta\tilde J(\varepsilon)=\big|J_\varepsilon^*-\tilde J\big|,\qquad \delta\tilde J(\varepsilon)=\frac{\Delta\tilde J(\varepsilon)}{J_\varepsilon^*}\cdot 100\%,$$
are presented. It is seen that all the errors decrease with decreasing ε . The approximation J ˜ , calculated by employing the approximate-saddle point controls, is more accurate than J app I and J app , 1 I (whose accuracies are identical). The relative errors are not larger than 0.52% for J app I and J app , 1 I , and not larger than 0.029% for J ˜ .

7.2. Case II of the Matrix Λ

In this subsection, we treat the differential game (6)-(7),(198) in the case II (see (24)), i.e., for λ 1 > 0 and λ 2 = 0 . We choose
λ 1 = 9 .
We start the asymptotic solution of the differential game (6)-(7),(198),(207) with the asymptotic solution of the terminal-value problem for K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) , K ^ 3 ( t , ε ) (see the equations (108)-(110)).
Using the equations (117),(121),(122), we directly have
$$\hat K_{3,0}^b(\tau)\equiv 0,\qquad \hat K_{1,0}^o(t)\equiv\hat K_{1,0}^o=3,\qquad \hat K_{2,0}^o(t)\equiv\hat K_{2,0}^o=1.$$
To obtain K ^ 3 , 0 o ( t ) , we should solve the terminal-value problem (120) subject to the data (198),(207). This problem has the form
$$\frac{d\hat K_{3,0}^o(t)}{dt}=4\hat K_{3,0}^o(t)+\big(\hat K_{3,0}^o(t)\big)^2-1,\quad t\in[0,2],\qquad \hat K_{3,0}^o(2)=0,$$
yielding the unique solution
$$\hat K_{3,0}^o(t)=-2+\sqrt{5}-\frac{2\sqrt{5}}{(9+4\sqrt{5})\exp\big(-2\sqrt{5}\,(t-2)\big)+1},\quad t\in[0,2].$$
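The closed form (209) can be cross-checked against a direct backward integration of (208). Note that both the signs of the scalar Riccati equation and the constants in the closed form below follow the reconstruction given here (the roots −2 ± √5 of K² + 4K − 1 = 0 fix these constants), so the sketch is a consistency check under that assumption:

```python
import math

def rhs(t, K):
    """Reconstructed scalar Riccati right-hand side of (208)."""
    return K * K + 4 * K - 1

def K30_closed(t):
    """Reconstructed closed form (209); satisfies K30_closed(2) = 0."""
    s5 = math.sqrt(5)
    return -2 + s5 - 2 * s5 / ((9 + 4 * s5) * math.exp(-2 * s5 * (t - 2)) + 1)

def integrate_back(t, n_steps=4000, t_f=2.0):
    """RK4 integration of (208) from the terminal time t_f back to t."""
    h = (t - t_f) / n_steps   # negative step: integrate backward
    K, s = 0.0, t_f
    for _ in range(n_steps):
        k1 = rhs(s, K)
        k2 = rhs(s + h / 2, K + h / 2 * k1)
        k3 = rhs(s + h / 2, K + h / 2 * k2)
        k4 = rhs(s + h, K + h * k3)
        K += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return K
```

Going backward from the terminal condition, the solution rises toward the attracting equilibrium √5 − 2 ≈ 0.236, which is the limit value of (209) as t decreases.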
Subject to the data (198),(207), the equations (126),(127),(131) yield
$$\hat K_{1,0}^b(\tau)=-\frac{6\exp(6\tau)}{1+\exp(6\tau)},\qquad \hat K_{2,0}^b(\tau)=-\frac{2\exp(3\tau)}{1+\exp(6\tau)},\qquad \hat K_{3,1}^b(\tau)=-\frac{(2/3)\exp(6\tau)}{1+\exp(6\tau)}.$$
Furthermore, using the equations (137)-(140) and the data (198),(207), we obtain
$$\hat K_{1,1}^o(t)\equiv\hat K_{1,1}^o=1,\qquad \hat K_{2,1}^o(t)=\frac13\Big(3+2\hat K_{3,0}^o(t)\Big),\qquad \hat K_{3,1}^o(t)=\frac13\big(\hat\Phi(t)\big)^2,\quad t\in[0,2],$$
where Φ ^ ( t ) is the solution of the terminal-value problem
$$\frac{d\hat\Phi(t)}{dt}=\Big(\hat K_{3,0}^o(t)+2\Big)\hat\Phi(t),\quad t\in[0,2],\qquad \hat\Phi(2)=1.$$
Finally, using the equations (142),(144),(146),(147),(151),(153) and taking into account the data (198),(207) and the equations (210)-(211), we derive K ^ 1 , 1 b ( τ ) and K ^ 2 , 1 b ( τ )
$$\hat K_{1,1}^b(\tau)=-\frac{4\exp(6\tau)}{\big(1+\exp(6\tau)\big)^2},\qquad \hat K_{2,1}^b(\tau)=-\frac{\exp(3\tau)}{1+\exp(6\tau)}\bigg(\frac{4}{3\big(1+\exp(6\tau)\big)}+2\exp(3\tau)-4\tau-\frac23\bigg).$$
Thus, due to the equations (115),(208)-(213), we obtain the asymptotic solution of the problem (108)-(110) subject to the data (198),(207)
$$\hat K_{1,1}(t,\varepsilon)=3-\frac{6\exp(6\tau)}{1+\exp(6\tau)}+\varepsilon\Big(1-\frac{4\exp(6\tau)}{\big(1+\exp(6\tau)\big)^2}\Big),$$
$$\hat K_{2,1}(t,\varepsilon)=1-\frac{2\exp(3\tau)}{1+\exp(6\tau)}+\varepsilon\bigg[\frac13\Big(3+2\hat K_{3,0}^o(t)\Big)-\frac{\exp(3\tau)}{1+\exp(6\tau)}\bigg(\frac{4}{3\big(1+\exp(6\tau)\big)}+2\exp(3\tau)-4\tau-\frac23\bigg)\bigg],$$
$$\hat K_{3,1}(t,\varepsilon)=\hat K_{3,0}^o(t)+\varepsilon\Big(\frac13\big(\hat\Phi(t)\big)^2-\frac{(2/3)\exp(6\tau)}{1+\exp(6\tau)}\Big),$$
where τ = ( t − 2 ) / ε , K ^ 3 , 0 o ( t ) is given by (209), and Φ ^ ( t ) is the solution of the terminal-value problem (212).
In Figure 4, the absolute errors
$$\Delta\hat K_i(\varepsilon)=\max_{t\in[0,t_f]}\big|\hat K_i(t,\varepsilon)-\hat K_{i,1}(t,\varepsilon)\big|,\quad i=1,2,3,$$
are depicted for ε ∈ [ 0.015 , 0.1 ] along with their mutual estimate function 5.75ε² . The figure illustrates Lemma 9.
We proceed to the construction of the asymptotic solution to the terminal-value problem (112)-(113) subject to the data (198),(207).
Using the equations (167),(168),(169),(170),(173),(174),(175) we directly have
$$\hat q_{2,0}^b(\tau)\equiv 0,\quad \hat q_{1,0}^o(t)\equiv 0,\quad \hat q_{2,0}^o(t)\equiv 0,\quad \hat q_{1,0}^b(\tau)\equiv 0,\quad \hat q_{2,1}^b(\tau)\equiv 0,$$
$$\hat q_{1,1}^o(t)=4t,\qquad \hat q_{2,1}^o(t)=2\int_t^{2}\frac{\hat\Phi(t)}{\hat\Phi(\sigma)}\Big(\frac{\sigma}{2}-\sigma\hat K_{3,0}^o(\sigma)\Big)d\sigma,\qquad \hat q_{1,1}^b(\tau)=-\frac{16\exp(3\tau)}{1+\exp(6\tau)},$$
where Φ ^ ( t ) is the solution of the terminal-value problem (212); K ^ 3 , 0 o ( t ) is given by (209).
Thus, due to the equations (166),(215), we obtain the asymptotic solution of the problem (112)-(113) subject to the data (198),(207)
$$\hat q_{1,1}(t,\varepsilon)=\varepsilon\Big(4t-\frac{16\exp(3\tau)}{1+\exp(6\tau)}\Big),\qquad \hat q_{2,1}(t,\varepsilon)=2\varepsilon\int_t^{2}\frac{\hat\Phi(t)}{\hat\Phi(\sigma)}\Big(\frac{\sigma}{2}-\sigma\hat K_{3,0}^o(\sigma)\Big)d\sigma,$$
where τ = ( t − 2 ) / ε .
In Figure 5, the absolute errors
$$\Delta\hat q_i(\varepsilon)=\max_{t\in[0,t_f]}\big|\hat q_i(t,\varepsilon)-\hat q_{i,1}(t,\varepsilon)\big|,\quad i=1,2,$$
are depicted for ε ∈ [ 0.015 , 0.1 ] along with their mutual estimate function 6ε² . The figure illustrates Lemma 10.
Using the equation (178) and the equation (215), we obtain the asymptotic solution of the problem (114) subject to the data (198),(207)
$$\hat s(t,\varepsilon)=\varepsilon^2\int_t^{2}\Big(-\frac14\big(\hat q_{2,1}^o(\sigma)\big)^2+4\sigma^2+\sigma\,\hat q_{2,1}^o(\sigma)\Big)d\sigma,\quad t\in[0,2].$$
In Figure 6, the absolute error
$$\Delta\hat s(\varepsilon)=\max_{t\in[0,t_f]}\big|s(t,\varepsilon)-\hat s(t,\varepsilon)\big|$$
is depicted for ε ∈ [ 0.015 , 0.1 ] along with the estimate function 5.2ε³ . The figure illustrates the inequality (179) in Lemma 11 with c ^ 10 = 5.2 .
Now, using the equations (180),(182),(214),(216),(217) and the data (198), we obtain the following two approximations of the CCDG value:
$$J_{app}^{II}(z_0)=\varepsilon\hat K_{1,1}(0,\varepsilon)+2\varepsilon^2\hat K_{2,1}(0,\varepsilon)+\varepsilon^2\hat K_{3,1}(0,\varepsilon)+\varepsilon\hat q_{1,1}(0,\varepsilon)+\varepsilon\hat q_{2,1}(0,\varepsilon)+\hat s(0,\varepsilon),$$
$$J_{app,1}^{II}(z_0)=\varepsilon(3+\varepsilon)+2\varepsilon^2\Big(1+\frac{\varepsilon}{3}\big(3+2\hat K_{3,0}^o(0)\big)\Big)+\varepsilon^2\Big(\hat K_{3,0}^o(0)+\varepsilon\hat K_{3,1}^o(0)\Big)+\varepsilon^2\hat q_{2,1}^o(0)+\hat s(0,\varepsilon).$$
Due to the equation (183) and the data (198), the components of the approximate-saddle point have the form
$$\hat u_\varepsilon(z,t)=\begin{pmatrix}-(1/\varepsilon)\hat K_{1,1}(t,\varepsilon)z_1-\hat K_{2,1}(t,\varepsilon)z_2-\big(1/(2\varepsilon)\big)\hat q_{1,1}(t,\varepsilon)\\ -\hat K_{2,1}(t,\varepsilon)z_1-\hat K_{3,1}(t,\varepsilon)z_2-\big(1/(2\varepsilon)\big)\hat q_{2,1}(t,\varepsilon)\end{pmatrix},$$
$$\hat v_\varepsilon(z,t)=\frac12\begin{pmatrix}\varepsilon\hat K_{1,1}(t,\varepsilon)z_1+\varepsilon^2\hat K_{2,1}(t,\varepsilon)z_2+(\varepsilon/2)\hat q_{1,1}(t,\varepsilon)\\ \varepsilon^2\hat K_{2,1}(t,\varepsilon)z_1+\varepsilon^2\hat K_{3,1}(t,\varepsilon)z_2+(\varepsilon/2)\hat q_{2,1}(t,\varepsilon)\end{pmatrix},$$
where K ^ i , 1 ( t , ε ) , ( i = 1 , 2 , 3 ) are given in (214); q ^ k , 1 ( t , ε ) , ( k = 1 , 2 ) are given in (216); z 1 and z 2 are upper and lower scalar blocks, respectively, of the state vector z.
The values of J ε * , J app I I , J app , 1 I I and the outcome of the game J ^ , generated by the approximate-saddle point, are shown in Table 3 for ε = 0.1 , 0.05 , 0.015 . In this case, the difference between J app I I ( ε ) and J app , 1 I I ( ε ) is also negligible.
The absolute and the relative errors of the game value approximations in the Case II
$$\Delta J_{app}^{II}(\varepsilon)=\big|J_\varepsilon^*-J_{app}^{II}\big|,\qquad \delta J_{app}^{II}(\varepsilon)=\frac{\Delta J_{app}^{II}(\varepsilon)}{J_\varepsilon^*}\cdot 100\%,\qquad \Delta J_{app,1}^{II}(\varepsilon)=\big|J_\varepsilon^*-J_{app,1}^{II}\big|,\qquad \delta J_{app,1}^{II}(\varepsilon)=\frac{\Delta J_{app,1}^{II}(\varepsilon)}{J_\varepsilon^*}\cdot 100\%,$$
$$\Delta\hat J(\varepsilon)=\big|J_\varepsilon^*-\hat J\big|,\qquad \delta\hat J(\varepsilon)=\frac{\Delta\hat J(\varepsilon)}{J_\varepsilon^*}\cdot 100\%,$$
are presented in Table 4.
It is seen that all the errors decrease with decreasing ε . The approximation J ^ , calculated by employing the approximate-saddle point controls, is more accurate than J app I I and J app , 1 I I (whose accuracies are identical). The relative errors are not larger than 1.22% for J app I I and J app , 1 I I , and not larger than 0.27% for J ^ .

8. Conclusions

In this paper, a two-player finite-horizon zero-sum linear-quadratic differential game was studied in the case where the control cost of the minimizing player (the minimizer) in the cost functional is much smaller than the state cost and the control cost of the maximizing player. This smallness is represented by the presence of the small multiplier ε > 0 in the control cost of the minimizer. Due to this feature of the minimizer's control cost, the considered game is a cheap control game. The differential equation of the considered game is non-homogeneous. The dimension of the minimizer's control equals the dimension of the state vector, and the matrix-valued coefficient of the minimizer's control in the differential equation has full rank. This means that the entire state variable is a "fast" one. For this game, the state-feedback saddle point and the game value were sought. By proper changes of the state and minimizer's control variables, the initially formulated game was equivalently transformed into a much simpler zero-sum cheap control differential game. In this new game, the matrix-valued coefficient of the minimizer's control in the differential equation is the identity matrix, while the matrix-valued coefficient of the state cost in the integral part of the game's cost functional is diagonal. In the sequel of the paper, this new game was considered as an original cheap control game. The following two cases of the matrix-valued coefficient of the state cost in the integral part of the game's cost functional were treated: (a) all the entries of the main diagonal are positive; (b) only part of these entries are positive, while the rest are zero. In each of the cases, the asymptotic analysis with respect to the small parameter ε > 0 of the state-feedback solution of the original cheap control game was carried out.
This analysis includes: (i) the first-order asymptotic solutions of the terminal-value problems for the three differential equations appearing in the game's solvability conditions; (ii) the derivation of asymptotic approximations of the game value; (iii) the derivation of an approximate-saddle point. The approaches to this analysis in the cases (a) and (b), as well as the results of this analysis, were compared with each other. This comparison clearly shows the essential novelty of the case (b) and of its analysis. The analysis of the case (b) also shows that the assumption on the positive definiteness of the quadratic cost of the "fast" state variable in the integral part of the cost functional is not necessary: positive semi-definiteness of this quadratic cost is not an obstacle to the asymptotic analysis of the cheap control game. Along with this, the property of this quadratic cost (positive definiteness or positive semi-definiteness) considerably affects the asymptotic solution of the cheap control game.

References

  1. O’Malley, R.E. Cheap control, singular arcs, and singular perturbations. In Optimal Control Theory and its Applications; Kirby, B.J., Ed.; Lecture Notes in Economics and Mathematical Systems, Volume 106; Springer: Berlin, Heidelberg, Germany, 1974.
  2. Bell, D.J.; Jacobson, D.H. Singular Optimal Control Problems; Academic Press: Cambridge, MA, USA, 1975.
  3. O’Malley, R.E. The singular perturbation approach to singular arcs. In International Conference on Differential Equations; Antosiewicz, H.A., Ed.; Elsevier Inc.: Amsterdam, Netherlands, 1975, pp. 595–611.
  4. O’Malley, R.E.; Jameson, A. Singular perturbations and singular arcs, I. IEEE Trans. Automat. Control 1975, 20, 218–226.
  5. O’Malley, R.E. A more direct solution of the nearly singular linear regulator problem. SIAM J. Control Optim. 1976, 14, 1063–1077.
  6. O’Malley, R.E.; Jameson, A. Singular perturbations and singular arcs, II. IEEE Trans. Automat. Control 1977, 22, 328–337.
  7. Kurina, G.A. A degenerate optimal control problem and singular perturbations. Soviet Math. Dokl. 1977, 18, 1452–1456.
  8. Sannuti, P.; Wason, H.S. Multiple time-scale decomposition in cheap control problems – singular control. IEEE Trans. Automat. Control 1985, 30, 633–644. [CrossRef]
  9. Saberi, A.; Sannuti, P. Cheap and singular controls for linear quadratic regulators. IEEE Trans. Automat. Control 1987, 32, 208–219. [CrossRef]
  10. Smetannikova, E.N.; Sobolev, V.A. Regularization of cheap periodic control problems. Automat. Remote Control 2005, 66, 903–916. [CrossRef]
  11. Glizer, V.Y. Stochastic singular optimal control problem with state delays: Regularization, singular perturbation, and minimizing sequence. SIAM J. Control Optim. 2012, 50, 2862–2888. [CrossRef]
  12. Glizer, V.Y. Saddle-point equilibrium sequence in one class of singular infinite horizon zero-sum linear-quadratic differential games with state delays. Optimization 2019, 68, 349–384. [CrossRef]
  13. Shinar, J.; Glizer, V.Y.; Turetsky, V. Solution of a singular zero-sum linear-quadratic differential game by regularization. Int. Game Theory Rev. 2014, 16, 1–32. [CrossRef]
  14. Glizer, V.Y.; Kelis, O. Singular Linear-Quadratic Zero-Sum Differential Games and H∞ Control Problems: Regularization Approach; Birkhauser: Basel, Switzerland, 2022.
  15. Kwakernaak, H.; Sivan, R. The maximally achievable accuracy of linear optimal regulators and linear optimal filters. IEEE Trans. Autom. Control 1972, 17, 79–86. [CrossRef]
  16. Francis, B. The optimal linear-quadratic time-invariant regulator with cheap control. IEEE Trans. Autom. Control 1979, 24, 616–621. [CrossRef]
  17. Saberi, A.; Sannuti, P. Cheap control problem of a linear uniform rank system: Design by composite control. Automatica 1986, 22, 757–759. [CrossRef]
  18. Lee, J.T.; Bien, Z.N. A quadratic regulator with cheap control for a class of nonlinear systems. J. Optim. Theory Appl. 1987, 55, 289–302. [CrossRef]
  19. Braslavsky, J.H.; Seron, M.M.; Mayne, D.Q.; Kokotovic, P.V. Limiting performance of optimal linear filters. Automatica 1999, 35, 189–199. [CrossRef]
  20. Seron, M.M.; Braslavsky, J.H.; Kokotovic, P.V.; Mayne, D.Q. Feedback limitations in nonlinear systems: From Bode integrals to cheap control. IEEE Trans. Autom. Control 1999, 44, 829–833. [CrossRef]
  21. Glizer, V.Y.; Kelis, O. Asymptotic properties of an infinite horizon partial cheap control problem for linear systems with known disturbances. Numer. Algebra Control Optim. 2018, 8, 211–235. [CrossRef]
  22. Moylan, P.J.; Anderson, B.D.O. Nonlinear regulator theory and an inverse optimal control problem. IEEE Trans. Autom. Control 1973, 18, 460–465. [CrossRef]
  23. Young, K.D.; Kokotovic, P.V.; Utkin, V.I. A singular perturbation analysis of high-gain feedback systems. IEEE Trans. Autom. Control 1977, 22, 931–938. [CrossRef]
  24. Kokotovic, P.V.; Khalil, H.K.; O’Reilly, J. Singular Perturbation Methods in Control: Analysis and Design; Academic Press: London, UK, 1986.
  25. Petersen, I.R. Linear-quadratic differential games with cheap control. Syst. Control Lett. 1986, 8, 181–188. [CrossRef]
  26. Glizer, V.Y. Asymptotic solution of zero-sum linear-quadratic differential game with cheap control for the minimizer. NoDEA Nonlinear Diff. Equ. Appl. 2000, 7, 231–258. [CrossRef]
  27. Turetsky, V.; Shinar, J. Missile guidance laws based on pursuit-evasion game formulations. Automatica 2003, 39, 607–618. [CrossRef]
  28. Turetsky, V. Upper bounds of the pursuer control based on a linear-quadratic differential game. J. Optim. Theory Appl. 2004, 121, 163–191. [CrossRef]
  29. Turetsky, V.; Glizer, V.Y. Robust solution of a time-variable interception problem: A cheap control approach. Int. Game Theory Rev. 2007, 9, 637–655. [CrossRef]
  30. Turetsky, V.; Glizer, V.Y.; Shinar, J. Robust trajectory tracking: Differential game/cheap control approach. Int. J. Systems Sci. 2014, 45, 2260–2274. [CrossRef]
  31. Turetsky, V.; Glizer, V.Y. Cheap control in a non-scalarizable linear-quadratic pursuit-evasion game: Asymptotic analysis. Axioms 2022, 11, 214. [CrossRef]
  32. Glizer, V.Y. Nash equilibrium sequence in a singular two-person linear-quadratic differential game. Axioms 2021, 10, 132. [CrossRef]
  33. Glizer, V.Y. Nash equilibrium in a singular infinite horizon two-person linear-quadratic differential game. Pure Appl. Funct. Anal. 2022, 7, 1657–1698.
  34. Glizer, V.Y. Solution of one class of singular two-person Nash equilibrium games with state and control delays: regularization approach. Appl. Set-Valued Anal. Optim. 2023, 5, 401–438.
  35. Glizer, V.Y.; Turetsky, V. One class of Stackelberg linear-quadratic differential games with cheap control of a leader: asymptotic analysis of open-loop solution. Axioms 2024, 13, 801. [CrossRef]
  36. Glizer, V.Y. Solution of a singular minimum energy control problem for time delay system: regularization approach. Pure Appl. Funct. Anal. 2023, 8, 1413–1435.
  37. Vasil’eva, A.B.; Butuzov, V.F.; Kalachev, L.V. The Boundary Function Method for Singular Perturbation Problems; SIAM Books: Philadelphia, PA, USA, 1995.
  38. Bellman, R. Introduction to Matrix Analysis; SIAM Books: Philadelphia, PA, USA, 1997.
  39. Sibuya, Y. Some global properties of matrices of functions of one variable. Math. Annalen 1965, 161, 67–77. [CrossRef]
  40. Basar, T.; Olsder, G.J. Dynamic Noncooperative Game Theory; Academic Press: London, UK, 1992.
  41. Zhukovskii, V.I. Analytic design of optimum strategies in certain differential games. I. Autom. Remote Control 1970, 4, 533–536.
  42. Derevenskii, V.P. Matrix Bernoulli equations, I. Russian Math. 2008, 52, 12–21. [CrossRef]
  43. Gajic, Z.; Qureshi, M.T.J. Lyapunov Matrix Equation in System Stability and Control; Dover Publications: Mineola, NY, USA, 2008.
  44. Abou-Kandil, H.; Freiling, G.; Ionescu, V.; Jank, G. Matrix Riccati Equations in Control and Systems Theory; Birkhauser: Basel, Switzerland, 2003.
  45. Glizer, V.Y. Asymptotic solution of a cheap control problem with state delay. Dynam. Control 1999, 9, 339–357. [CrossRef]
  46. Kwakernaak, H.; Sivan, R. Linear Optimal Control Systems; Wiley-Interscience: New York, NY, USA, 1972.
Figure 1. Absolute errors of the asymptotic expansion P 1 ( t , ε ) of P ( t , ε ) .
Figure 2. Absolute errors of the asymptotic expansion p 1 ( t , ε ) of p ( t , ε ) .
Figure 3. Absolute errors of the asymptotic expansion s ¯ ( t , ε ) of s ( t , ε ) .
Figure 4. Absolute errors Δ K ^ 1 ( ε ) , Δ K ^ 2 ( ε ) and Δ K ^ 3 ( ε ) .
Figure 5. Absolute errors Δ q ^ 1 ( ε ) and Δ q ^ 2 ( ε ) .
Figure 5. Absolute errors Δ q ^ 1 ( ε ) and Δ q ^ 2 ( ε ) .
Preprints 166064 g005
Figure 6. Absolute error of the asymptotic expansion s ^ ( t , ε ) of s ( t , ε ) .
Figure 6. Absolute error of the asymptotic expansion s ^ ( t , ε ) of s ( t , ε ) .
Preprints 166064 g006
Table 1. Values of J*_ε, J^I_app, J^I_app,1, and J̃ in Case I.

ε        J*_ε       J^I_app    J^I_app,1   J̃
0.1      0.7271     0.7233     0.7233      0.7273
0.05     0.3312     0.3308     0.3308      0.3312
0.015    0.09282    0.09278    0.09278     0.09282

Table 2. Absolute and relative errors of the J*_ε approximations in Case I.

ε        ΔJ^I_app(ε)   ΔJ^I_app,1(ε)   ΔJ̃(ε)        δJ^I_app(ε)   δJ^I_app,1(ε)   δJ̃(ε)
0.1      3.78·10⁻³     3.78·10⁻³       2.1·10⁻⁴      0.52          0.52            0.029
0.05     3.42·10⁻⁴     3.42·10⁻⁴       7.31·10⁻⁶     0.10          0.10            0.0022
0.015    4.11·10⁻⁵     4.11·10⁻⁵       2.46·10⁻⁸     0.044         0.044           2.65·10⁻⁵
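As a sanity check, the entries of Table 2 can be reproduced from the values in Table 1, assuming (as the tabulated magnitudes suggest) that each absolute error is Δ = |J*_ε − approximation| and each relative error δ is given in percent of J*_ε. The following sketch uses the ε = 0.1 row; small discrepancies against the tabulated errors arise because the tables report rounded values.

```python
# Reproduce the eps = 0.1 row of Table 2 (Case I) from Table 1's values.
# Assumption: Delta = |J*_eps - approximation|, delta = 100 * Delta / |J*_eps|.
J_star = 0.7271      # exact game value J*_eps (Table 1)
J_app = 0.7233       # approximation J^I_app (Table 1)
J_tilde = 0.7273     # approximation J~ (Table 1)

delta_app = abs(J_star - J_app)                # absolute error of J^I_app
rel_app = 100.0 * delta_app / abs(J_star)      # relative error of J^I_app, percent

delta_tilde = abs(J_star - J_tilde)            # absolute error of J~
rel_tilde = 100.0 * delta_tilde / abs(J_star)  # relative error of J~, percent

print(delta_app, rel_app, delta_tilde, rel_tilde)
```

Up to the rounding of the tabulated inputs, this recovers 3.78·10⁻³ and 0.52 for J^I_app, and 2.1·10⁻⁴ and 0.029 for J̃.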
Table 3. Values of J*_ε, J^II_app, J^II_app,1, and Ĵ in Case II.

ε        J*_ε       J^II_app    J^II_app,1   Ĵ
0.1      0.3588     0.3545      0.3545       0.3598
0.05     0.16503    0.1646      0.1646       0.16508
0.015    0.046399   0.0463718   0.0463718    0.0463999

Table 4. Absolute and relative errors of the J*_ε approximations in Case II.

ε        ΔJ^II_app(ε)   ΔJ^II_app,1(ε)   ΔĴ(ε)        δJ^II_app(ε)   δJ^II_app,1(ε)   δĴ(ε)
0.1      4.38·10⁻³      4.38·10⁻³        9.68·10⁻⁴     1.22           1.22             0.27
0.05     4.55·10⁻⁴      4.55·10⁻⁴        5.46·10⁻⁵     0.28           0.28             0.033
0.015    2.77·10⁻⁵      2.77·10⁻⁵        4.07·10⁻⁷     0.06           0.06             8.78·10⁻⁴
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.