Characterization of relationships between the domains of two linear matrix-valued functions with applications

One of the typical forms of linear matrix expressions (linear matrix-valued functions) is given by A + B1X1C1 + · · · + BkXkCk, where X1, . . . , Xk are independent variable matrices of appropriate sizes; this form includes almost all matrices with unknown entries as special cases. The domain of such a matrix expression is defined to be the set of all possible values of the expression with respect to X1, . . . , Xk. In this article, we approach some problems on the relationships between the domains of two linear matrix expressions by means of the block matrix method (BMM), the matrix rank method (MRM), and the matrix equation method (MEM). As applications, we discuss some topics on the relationships among the general solutions of some linear matrix equations and their reduced equations.

Mathematics Subject Classifications (2000): 15A09; 15A24; 15A27


Introduction
Throughout this article, we denote by C^{m×n} the set of all m × n complex matrices; by A*, r(A), and R(A) the conjugate transpose, the rank, and the range (column space) of a matrix A ∈ C^{m×n}, respectively; by I_m the identity matrix of order m; by [A, B] a row block matrix consisting of A and B; and by [A; C] a column block matrix consisting of A and C. A matrix A ∈ C^{m×m} is said to be EP (or range Hermitian) if R(A*) = R(A) holds. We next introduce the definition and notation of generalized inverses of a matrix. The Moore-Penrose inverse of A ∈ C^{m×n}, denoted by A†, is the unique matrix X ∈ C^{n×m} satisfying the four Penrose equations

(i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA. (1.1)

A matrix X that satisfies the equations (i), . . . , (j) among the four equations in (1.1) is called an {i, . . . , j}-inverse of A and is denoted by A^(i,...,j). The matrices

A†, A^(1,3,4), A^(1,2,4), A^(1,2,3), A^(1,4), A^(1,3), A^(1,2), A^(1) (1.2)

are usually called the eight commonly-used types of generalized inverses of A in the literature; see e.g., [4,5,18]. In addition, we denote by P_A = I_m − AA† and Q_A = I_n − A†A the orthogonal projectors (Hermitian idempotent matrices) induced from A. The Kronecker product of any two matrices A and B is defined to be A ⊗ B = (a_{ij}B). The vectorization operator applied to a matrix A = [a_1, . . . , a_n] is defined by vec(A) = [a_1^T, . . . , a_n^T]^T, where T denotes the (plain) transpose. A well-known property of the vec operator on a triple matrix product is vec(AXB) = (B^T ⊗ A)vec(X); see e.g., [3,20].

Linear matrix expressions that involve variable matrices arise in a variety of problems in pure and applied mathematics. In the present paper, we pursue our study of general linear matrix expressions of the form

f(X_1, . . . , X_k) = A + B_1X_1C_1 + · · · + B_kX_kC_k, (1.3)

where A ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n} are given, and X_i ∈ C^{p_i×q_i} are variable matrices, i = 1, 2, . . . , k. The domain of (1.3) is the matrix set

D(f) = {A + B_1X_1C_1 + · · · + B_kX_kC_k : X_i ∈ C^{p_i×q_i}, i = 1, . . . , k}. (1.4)

Eq. (1.3) includes many kinds of well-known matrix expressions with variable entries as special cases, such as A + BX, A + BXC, and A + BX + YC, see e.g., [25,27], as well as various partially specified matrices, such as [A, B; C, ?], [A, ?; ?, D], [A, ?; ?, ?], etc., see e.g., [1,7,8,11,12]. There are many natural questions concerning the relations between two linear matrix expressions in mathematics and applications. Here we mention an example: (a) When do two solvable linear matrix equations A_1X_1B_1 = C_1 and A_2X_2B_2 = C_2, where X_1 and X_2 have the same size, have a common solution?
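Before proceeding, we illustrate the notation numerically. The following minimal Python/NumPy sketch (an illustration we add at this point; the matrix sizes are arbitrary and all tolerances are NumPy defaults) checks the four Penrose equations in (1.1) for A† and the vec identity vec(AXB) = (B^T ⊗ A)vec(X) on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 2))

Ad = np.linalg.pinv(A)  # Moore-Penrose inverse A†

# The four Penrose equations: AXA = A, XAX = X, (AX)* = AX, (XA)* = XA
assert np.allclose(A @ Ad @ A, A)
assert np.allclose(Ad @ A @ Ad, Ad)
assert np.allclose((A @ Ad).conj().T, A @ Ad)
assert np.allclose((Ad @ A).conj().T, Ad @ A)

def vec(M):
    """Stack the columns of M into a single column vector."""
    return M.flatten(order="F")

# vec(AXB) = (B^T ⊗ A) vec(X); note the plain transpose of B
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
print("Penrose equations and the vec identity verified numerically.")
```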
These facts show that the algebraic features and behaviors of the matrix set in (1.4) are worth investigating from both theoretical and applied points of view. In fact, a class of fundamental and meaningful problems in the matrix calculus is the characterization of relationships between two given LMVFs under various assumptions. In view of the above facts, the present author intends to investigate the relationships between two domains D_1 and D_2 generated from some special cases of (1.3) by means of the matrix range and rank methodology. We also discuss the connections among the general solutions of some linear matrix equations and their reduced linear matrix equations.

Preliminaries
Block matrices, ranks of matrices, and matrix equations are basic concepts in linear algebra, while the block matrix method (BMM), the matrix rank method (MRM), and the matrix equation method (MEM) are three fundamental and traditional analytic methods that are widely used in matrix theory and its applications, because they give one the ability to construct and analyze various complicated matrix expressions and matrix equalities in a subtle and computationally tractable way. We next present a group of well-known results on ranks of matrices and matrix equations, described by way of generalized inverses, the MRM, and the BMM, which we shall use to deal with various matrix expressions and matrix equalities.

Lemma 2.1 ([13]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n}. Then

(a) r[A, B] = r(A) + r(P_AB) = r(B) + r(P_BA), (2.1)
(b) r[A; C] = r(A) + r(CQ_A) = r(C) + r(AQ_C), (2.2)
(c) r[A, B; C, 0] = r(B) + r(C) + r(P_BAQ_C). (2.3)

In particular, the following results hold: r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA†B = B, and r[A; C] = r(A) ⟺ R(C*) ⊆ R(A*) ⟺ CA†A = C.
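As a quick numerical illustration (added here; random real matrices and NumPy's default rank tolerance are assumed), the following sketch checks the three rank formulas (2.1)-(2.3), with the projectors P and Q built from pinv exactly as defined in the introduction.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, l = 6, 5, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))

r = np.linalg.matrix_rank
PA = np.eye(m) - A @ np.linalg.pinv(A)   # P_A = I_m - A A†
QA = np.eye(n) - np.linalg.pinv(A) @ A   # Q_A = I_n - A† A
PB = np.eye(m) - B @ np.linalg.pinv(B)
QC = np.eye(n) - np.linalg.pinv(C) @ C

# (2.1): r[A, B] = r(A) + r(P_A B)
assert r(np.hstack([A, B])) == r(A) + r(PA @ B)
# (2.2): r[A; C] = r(A) + r(C Q_A)
assert r(np.vstack([A, C])) == r(A) + r(C @ QA)
# (2.3): r[A, B; C, 0] = r(B) + r(C) + r(P_B A Q_C)
M = np.block([[A, B], [C, np.zeros((l, k))]])
assert r(M) == r(B) + r(C) + r(PB @ A @ QC)
print("Lemma 2.1 rank formulas hold on this sample.")
```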
Lemma 2.2. Let A = [A_1, . . . , A_k], where A_i ∈ C^{m×n_i}, and let

Y_i = [A_1, . . . , A_{i−1}, 0, A_{i+1}, . . . , A_k], i = 1, . . . , k. (2.4)

Then the following three statements are equivalent: (i) r(A) = r(A_1) + · · · + r(A_k); (ii) (k − 1)r(A) = r(Y_1) + · · · + r(Y_k); (iii) R(A_i) ∩ R(Y_i) = {0} for i = 1, . . . , k.

We also use the following well-known results in the sequel.

Lemma 2.3. Let

AX = B (2.5)

be a given linear matrix equation, where A ∈ C^{m×n} and B ∈ C^{m×p} are known matrices, and X ∈ C^{n×p} is an unknown matrix. Then the following statements are equivalent: (i) Eq. (2.5) is consistent; (ii) R(B) ⊆ R(A); (iii) r[A, B] = r(A); (iv) AA†B = B. In this case, the general solution of the equation can be written in the parametric form

X = A†B + Q_AU, (2.6)

where U ∈ C^{n×p} is an arbitrary matrix. In particular, (2.5) holds for all matrices X ∈ C^{n×p} if and only if A = 0 and B = 0.

Lemma 2.4. Let

AXB = C (2.7)

be a given linear matrix equation, where A ∈ C^{m×n}, B ∈ C^{p×q}, and C ∈ C^{m×q} are given. Then the following statements are equivalent: (i) Eq. (2.7) is consistent; (ii) R(C) ⊆ R(A) and R(C*) ⊆ R(B*); (iii) r[A, C] = r(A) and r[B; C] = r(B); (iv) AA†CB†B = C. In this case, the general solution of (2.7) can be written in the parametric form

X = A†CB† + Q_AU + VP_B, (2.8)

where U, V ∈ C^{n×p} are arbitrary matrices. In particular, (2.7) holds for all matrices X ∈ C^{n×p} if and only if C = 0 and either A = 0 or B = 0.
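The following hedged NumPy sketch (our illustration; a consistent equation is manufactured from a random X0) checks the consistency test (iv) of Lemma 2.4 and verifies that every member of the parametric family (2.8) solves AXB = C.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 5, 4, 3, 6
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
X0 = rng.standard_normal((n, p))
C = A @ X0 @ B                      # force consistency of AXB = C

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
QA = np.eye(n) - Ad @ A             # Q_A = I_n - A† A
PB = np.eye(p) - B @ Bd             # P_B = I_p - B B†

# Consistency test of Lemma 2.4(iv): A A† C B† B = C
assert np.allclose(A @ Ad @ C @ Bd @ B, C)

# Every member of the parametric family (2.8) solves the equation
for _ in range(5):
    U = rng.standard_normal((n, p))
    V = rng.standard_normal((n, p))
    X = Ad @ C @ Bd + QA @ U + V @ PB
    assert np.allclose(A @ X @ B, C)
print("General solution formula for AXB = C verified on random parameters.")
```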
Lemma 2.5. Let

B_1X_1C_1 + B_2X_2C_2 = A (2.10)

be a given linear matrix equation, where A ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n} are given. Then the following results hold.

(a) Eq. (2.10) is consistent if and only if

r[A, B_1, B_2] = r[B_1, B_2], r[A; C_1; C_2] = r[C_1; C_2], r[A, B_1; C_2, 0] = r(B_1) + r(C_2), r[A, B_2; C_1, 0] = r(B_2) + r(C_1), (2.11)

or equivalently,

P_{[B_1,B_2]}A = 0, AQ_{[C_1;C_2]} = 0, P_{B_1}AQ_{C_2} = 0, P_{B_2}AQ_{C_1} = 0. (2.12)

(b) Eq. (2.10) holds for all matrices X_1 and X_2 if and only if one of the following four block matrix equalities holds:

[A, B_1, B_2] = 0, [A, B_1; C_2, 0] = 0, [A, B_2; C_1, 0] = 0, [A; C_1; C_2] = 0. (2.13)

Lemma 2.6 ([14]). The matrix equation

A_1X_1B_1 + A_2X_2B_2 = C (2.14)

is consistent if and only if the following four conditions hold:

r[C, A_1, A_2] = r[A_1, A_2], r[C; B_1; B_2] = r[B_1; B_2], r[C, A_1; B_2, 0] = r(A_1) + r(B_2), r[C, A_2; B_1, 0] = r(A_2) + r(B_1), (2.15)

or equivalently,

P_{[A_1,A_2]}C = 0, CQ_{[B_1;B_2]} = 0, P_{A_1}CQ_{B_2} = 0, P_{A_2}CQ_{B_1} = 0. (2.16)

Lemma 2.7 ([10,22]). Equation (2.14) holds for all matrices X_1 and X_2 if and only if one of the following four block matrix equalities holds:

[C, A_1, A_2] = 0, [C, A_1; B_2, 0] = 0, [C, A_2; B_1, 0] = 0, [C; B_1; B_2] = 0. (2.17)

In addition, the consistency of a pair of matrix equations of the form A_1XB_1 = C_1 and A_2XB_2 = C_2 was characterized in [21] through four conditions of the same type; we refer the reader to [21] for the explicit statement.

3 Some fundamental results on relationships between domains of two linear matrix-valued functions

We start with two groups of known results on the relationships between two matrix sets generated from the two simplest cases of (1.3).

Lemma 3.1 ([25]). Given two domains of LMVFs

D_1 = {A_1 + B_1X_1 : X_1 ∈ C^{p_1×n}}, D_2 = {A_2 + B_2X_2 : X_2 ∈ C^{p_2×n}}, (3.1)

where A_1, A_2 ∈ C^{m×n}, B_1 ∈ C^{m×p_1}, and B_2 ∈ C^{m×p_2} are known matrices, and X_1 ∈ C^{p_1×n} and X_2 ∈ C^{p_2×n} are variable matrices, we have the following results:

(a) D_1 ∩ D_2 ≠ ∅, i.e., there exist X_1 and X_2 such that A_1 + B_1X_1 = A_2 + B_2X_2, if and only if r[A_1 − A_2, B_1, B_2] = r[B_1, B_2].
(b) D_1 ⊆ D_2 if and only if r[A_1 − A_2, B_1, B_2] = r(B_2).
(c) D_1 = D_2 if and only if r[A_1 − A_2, B_1, B_2] = r(B_1) = r(B_2).
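To make Lemma 3.1(a) concrete, the following NumPy sketch (our illustration; the domains are forced to intersect by construction) checks the rank criterion and then exhibits an explicit common element by solving B_1X_1 − B_2X_2 = A_2 − A_1 with a least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p1, p2 = 5, 4, 2, 3
B1 = rng.standard_normal((m, p1))
B2 = rng.standard_normal((m, p2))
# Build A1 - A2 inside R([B1, B2]) so that the criterion of (a) holds
A2 = rng.standard_normal((m, n))
A1 = A2 + B1 @ rng.standard_normal((p1, n)) - B2 @ rng.standard_normal((p2, n))

r = np.linalg.matrix_rank
assert r(np.hstack([A1 - A2, B1, B2])) == r(np.hstack([B1, B2]))

# Exhibit a concrete common element: solve B1 X1 - B2 X2 = A2 - A1
Z, *_ = np.linalg.lstsq(np.hstack([B1, -B2]), A2 - A1, rcond=None)
X1, X2 = Z[:p1], Z[p1:]
assert np.allclose(A1 + B1 @ X1, A2 + B2 @ X2)
print("Common element of D1 and D2 found, matching Lemma 3.1(a).")
```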

Lemma 3.2 ([25]). Given two domains of LMVFs

D_i = {A_i + B_iX_iC_i : X_i ∈ C^{p_i×q_i}}, i = 1, 2, (3.2)

where A_i ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n} are given, and X_i ∈ C^{p_i×q_i} are variable matrices, i = 1, 2, we have the following results:

(a) D_1 ∩ D_2 ≠ ∅ if and only if the following four conditions hold:
r[A_1 − A_2, B_1, B_2] = r[B_1, B_2], r[A_1 − A_2; C_1; C_2] = r[C_1; C_2], r[A_1 − A_2, B_1; C_2, 0] = r(B_1) + r(C_2), r[A_1 − A_2, B_2; C_1, 0] = r(B_2) + r(C_1).

(b) D_1 ⊆ D_2 if and only if one of the following three conditions holds:
(i) r[A_1 − A_2, B_1, B_2] = r(B_2) and r[A_1 − A_2; C_1; C_2] = r(C_2);
(ii) B_1 = 0, r[A_1 − A_2, B_2] = r(B_2), and r[A_1 − A_2; C_2] = r(C_2);
(iii) C_1 = 0, r[A_1 − A_2, B_2] = r(B_2), and r[A_1 − A_2; C_2] = r(C_2).

(c) D_1 = D_2 if and only if one of the following five conditions holds:
(i) r[A_1 − A_2, B_1, B_2] = r(B_1) = r(B_2) and r[A_1 − A_2; C_1; C_2] = r(C_1) = r(C_2);
(ii) A_1 = A_2 and B_1 = B_2 = 0;
(iii) A_1 = A_2, B_1 = 0, and C_2 = 0;
(iv) A_1 = A_2, C_1 = 0, and B_2 = 0;
(v) A_1 = A_2 and C_1 = C_2 = 0.

As an extension, we have the following result on relationships between the domains of two general matrix-valued functions, which we shall use in the latter part of the article.

Theorem 3.3. Given two domains of LMVFs

D_i = {A_i + B_iX_iC_i + D_iY_iE_i : X_i ∈ C^{p_i×q_i}, Y_i ∈ C^{u_i×v_i}}, i = 1, 2, (3.3)

where A_i ∈ C^{m×n}, B_i ∈ C^{m×p_i}, C_i ∈ C^{q_i×n}, D_i ∈ C^{m×u_i}, and E_i ∈ C^{v_i×n} are known matrices, we have the following results:

(a) D_1 ∩ D_2 ≠ ∅ if and only if the linear matrix equation
B_1X_1C_1 + D_1Y_1E_1 − B_2X_2C_2 − D_2Y_2E_2 = A_2 − A_1 (3.4)
is consistent, which is characterized by four rank conditions in the given matrices.
(b) D_1 ⊇ D_2 if and only if one of four block rank conditions holds.
(c) D_1 ⊆ D_2 if and only if one of four block rank conditions holds, obtained from those in (b) by interchanging the roles of the two domains.

Proof (sketch). Result (a) follows directly from the definitions of the two domains in (3.3). For (b), note that D_1 ⊇ D_2 holds if and only if the equation B_1X_1C_1 + D_1Y_1E_1 = A_2 − A_1 + B_2X_2C_2 + D_2Y_2E_2 is solvable for X_1 and Y_1 for every choice of X_2 and Y_2. By Lemma 2.5(a), its solvability for fixed X_2 and Y_2 amounts to four rank equalities; requiring each of them to hold for all X_2 and Y_2 and applying Lemma 2.5(b) to the resulting statements, we obtain, through the rank formula in Lemma 2.1(c), four explicit block rank conditions in the given matrices. Result (c) follows by the same argument with the subscripts 1 and 2 interchanged: by Lemma 2.5(b), each of the resulting rank equalities holds for all X_1 and Y_1 if and only if one of a small family of block matrix conditions holds, and combining these families yields the four conditions in (c).
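The four intersection conditions in Lemma 3.2(a) can be observed numerically. In the sketch below (our illustration; the two domains are forced to intersect by construction, so all four checks should come out True), the conditions are evaluated with NumPy block matrices.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 5
B1, C1 = rng.standard_normal((m, 2)), rng.standard_normal((3, n))
B2, C2 = rng.standard_normal((m, 3)), rng.standard_normal((2, n))
# Force D1 and D2 to intersect: A1 + B1 X1 C1 = A2 + B2 X2 C2 for some X1, X2
A1 = rng.standard_normal((m, n))
A2 = A1 + B1 @ rng.standard_normal((2, 3)) @ C1 - B2 @ rng.standard_normal((3, 2)) @ C2

r = np.linalg.matrix_rank
D = A1 - A2
z = np.zeros
checks = [
    r(np.hstack([D, B1, B2])) == r(np.hstack([B1, B2])),
    r(np.vstack([D, C1, C2])) == r(np.vstack([C1, C2])),
    r(np.block([[D, B1], [C2, z((2, 2))]])) == r(B1) + r(C2),
    r(np.block([[D, B2], [C1, z((3, 3))]])) == r(B2) + r(C1),
]
print("Four intersection conditions of Lemma 3.2(a):", checks)
```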
The results in Lemmas 3.1 and 3.2 and Theorem 3.3 can be used, as demonstrated below, to solve many concrete problems on the relationships between solutions of matrix equations, as well as on relations between generalized inverses of matrices.

4 Relationships between solutions of two fundamental linear matrix equations
It has been well known since Penrose [19] that the general solutions of linear matrix equations can be represented as certain linear matrix expressions composed of the given matrices in the equations and their generalized inverses. In this situation, we can use the previous results to characterize various relationships between the solutions of linear matrix equations. There are many linear matrix equations whose general solutions can be written explicitly as linear matrix-valued functions, as is the case for the equations in (4.1) below. In this section, we present a variety of results and facts on relationships between linear transformations of solutions of some fundamental linear matrix equations.
Theorem 4.1. Assume that the following two matrix equations are consistent, respectively:

A_1X_1 = B_1, A_2X_2 = B_2, (4.1)

where A_i ∈ C^{m_i×n_i} and B_i ∈ C^{m_i×p} are given, i = 1, 2. Also denote by

D_i = {S_iX_i + T_i : A_iX_i = B_i}, i = 1, 2, (4.2)

the domains of two constrained LMVFs, where S_i ∈ C^{s×n_i} and T_i ∈ C^{s×p} are given, i = 1, 2. Then the following results hold:

(a) D_1 ∩ D_2 ≠ ∅ if and only if r[T_1 − T_2, S_1, S_2; −B_1, A_1, 0; B_2, 0, A_2] = r[S_1, S_2; A_1, 0; 0, A_2].
(b) D_1 ⊆ D_2 if and only if r[T_1 − T_2, S_1, S_2; −B_1, A_1, 0; B_2, 0, A_2] = r(A_1) + r[S_2; A_2].
(c) D_1 = D_2 if and only if r[T_1 − T_2, S_1, S_2; −B_1, A_1, 0; B_2, 0, A_2] = r(A_1) + r[S_2; A_2] = r(A_2) + r[S_1; A_1].
Proof. By Lemma 2.3, the general solutions of the two equations in (4.1) can be expressed as

X_i = A_i†B_i + Q_{A_i}U_i, i = 1, 2, (4.3)

where U_1 ∈ C^{n_1×p} and U_2 ∈ C^{n_2×p} are arbitrary matrices. Then the two sets in (4.2) can be represented as

D_i = {S_iA_i†B_i + T_i + S_iQ_{A_i}U_i : U_i ∈ C^{n_i×p}}, i = 1, 2. (4.4)

Applying Lemma 3.1(a) to (4.4), we obtain that D_1 ∩ D_2 ≠ ∅ if and only if

r[Δ, S_1Q_{A_1}, S_2Q_{A_2}] = r[S_1Q_{A_1}, S_2Q_{A_2}], (4.5)

where Δ = S_1A_1†B_1 − S_2A_2†B_2 + T_1 − T_2. By (2.2) and block elementary operations, using A_iA_i†B_i = B_i (which holds by the consistency of (4.1)),

r[Δ, S_1Q_{A_1}, S_2Q_{A_2}] = r[T_1 − T_2, S_1, S_2; −B_1, A_1, 0; B_2, 0, A_2] − r(A_1) − r(A_2), (4.6)
r[S_1Q_{A_1}, S_2Q_{A_2}] = r[S_1, S_2; A_1, 0; 0, A_2] − r(A_1) − r(A_2). (4.7)

Substituting (4.6) and (4.7) into (4.5) yields the rank equality in (a). Applying Lemma 3.1(b) to (4.4), we obtain that D_1 ⊆ D_2 if and only if r[Δ, S_1Q_{A_1}, S_2Q_{A_2}] = r(S_2Q_{A_2}), where by (2.2), r(S_2Q_{A_2}) = r[S_2; A_2] − r(A_2). Substituting (4.6) and this equality into the previous one yields the rank equality in (b). Result (c) follows from combining (b) with the corresponding characterization of D_1 ⊇ D_2, which is obtained by a similar approach.
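The key simplification step (4.6) can be checked numerically. The sketch below (our illustration; both equations are made consistent by construction, and all sizes are arbitrary) compares the projector form and the block form of the rank on random data.

```python
import numpy as np

rng = np.random.default_rng(5)
m1, m2, n1, n2, p, s = 4, 5, 6, 7, 3, 4
A1, A2 = rng.standard_normal((m1, n1)), rng.standard_normal((m2, n2))
X1, X2 = rng.standard_normal((n1, p)), rng.standard_normal((n2, p))
B1, B2 = A1 @ X1, A2 @ X2                      # make both equations consistent
S1, S2 = rng.standard_normal((s, n1)), rng.standard_normal((s, n2))
T1, T2 = rng.standard_normal((s, p)), rng.standard_normal((s, p))

r = np.linalg.matrix_rank
Q1 = np.eye(n1) - np.linalg.pinv(A1) @ A1
Q2 = np.eye(n2) - np.linalg.pinv(A2) @ A2
Delta = S1 @ np.linalg.pinv(A1) @ B1 + T1 - S2 @ np.linalg.pinv(A2) @ B2 - T2

big = np.block([
    [T1 - T2, S1, S2],
    [-B1, A1, np.zeros((m1, n2))],
    [B2, np.zeros((m2, n1)), A2],
])
# The block-rank simplification (4.6) used in the proof of Theorem 4.1
assert r(np.hstack([Delta, S1 @ Q1, S2 @ Q2])) == r(big) - r(A1) - r(A2)
print("Block-rank simplification verified.")
```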
Corollary 4.2. Assume that A_1X_1 = B_1 and A_2X_2 = B_2 in (4.1), with n_1 = n_2 = n, are consistent, respectively, and denote by D_i = {X_i ∈ C^{n×p} : A_iX_i = B_i}, i = 1, 2, the sets of all solutions of the two equations, respectively. Then the following results hold:

(a) The two equations in (4.1) have a common solution, i.e., D_1 ∩ D_2 ≠ ∅, if and only if r[A_1, B_1; A_2, B_2] = r[A_1; A_2].
(b) D_1 ⊆ D_2 if and only if r[A_1, B_1; A_2, B_2] = r(A_1).
(c) D_1 = D_2 if and only if r[A_1, B_1; A_2, B_2] = r(A_1) = r(A_2).
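A minimal numerical check of Corollary 4.2(a) (our illustration; the two equations are built to share the solution X0, so the criterion should hold) reads as follows.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 5, 2
X0 = rng.standard_normal((n, p))
A1 = rng.standard_normal((3, n)); B1 = A1 @ X0   # both equations share X0
A2 = rng.standard_normal((4, n)); B2 = A2 @ X0

r = np.linalg.matrix_rank
stacked = np.vstack([np.hstack([A1, B1]), np.hstack([A2, B2])])
# Corollary 4.2(a): common solution exists iff r[A1, B1; A2, B2] = r[A1; A2]
assert r(stacked) == r(np.vstack([A1, A2]))
print("Common-solution criterion confirmed on a pair sharing X0.")
```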

Corollary 4.3.
Let A ∈ C^{m×n} and B ∈ C^{m×p} be given, and suppose that AX = B is consistent. Also denote

D_1 = {SX : X ∈ C^{n×p}, AX = B}, D_2 = {SX : X ∈ C^{n×p}, MAX = MB}, (4.8)

where M ∈ C^{t×m} and S ∈ C^{s×n}. Then the following results hold.
(a) D_1 ⊆ D_2 always holds.
(b) D_1 = D_2 if and only if r(SQ_A) = r(SQ_{MA}).
(c) Equivalently, by (2.2), D_1 = D_2 if and only if r[S; A] + r(MA) = r[S; MA] + r(A).

Corollary 4.4. Let A ∈ C^{m×n} and B ∈ C^{m×p} be given, suppose that AX = B is consistent, and denote

D_1 = {X ∈ C^{n×p} : AX = B}, D_2 = {X ∈ C^{n×p} : MAX = MB}, (4.9)

where M ∈ C^{s×m}. Then, the following results hold:
(a) D_1 ⊆ D_2 always holds.
(b) D_1 = D_2 if and only if r(MA) = r(A).
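The following sketch (our illustration; for a random M of this size, r(MA) = r(A) holds generically, so the hypothesis of Corollary 4.4(b) is satisfied) demonstrates that solutions of AX = B remain solutions of the pre-multiplied equation MAX = MB, and checks the rank hypothesis.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, p, t = 5, 4, 3, 6
A = rng.standard_normal((m, n))
X0 = rng.standard_normal((n, p))
B = A @ X0                                   # AX = B is consistent
M = rng.standard_normal((t, m))              # here r(MA) = r(A) generically

r = np.linalg.matrix_rank
# Corollary 4.4(a): every solution of AX = B solves MAX = MB
QA = np.eye(n) - np.linalg.pinv(A) @ A
X = np.linalg.pinv(A) @ B + QA @ rng.standard_normal((n, p))
assert np.allclose(M @ A @ X, M @ B)
# Corollary 4.4(b): with r(MA) = r(A), the two solution sets coincide
assert r(M @ A) == r(A)
print("Pre-multiplying by M keeps the solution set in this sample.")
```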

Assume that the matrix equation in (2.5) is consistent, and partition it as

A_1X_1 + · · · + A_kX_k = B, (4.13)

where A_i ∈ C^{m×n_i} are the column blocks of A = [A_1, . . . , A_k], and X_i ∈ C^{n_i×p} are the corresponding unknown row blocks of X, with n = n_1 + · · · + n_k. Pre-multiplying (4.13) with P_{Y_i} yields the following reduced linear matrix equations

P_{Y_i}A_iX_i = P_{Y_i}B, i = 1, . . . , k, (4.14)

where Y_i = [A_1, . . . , A_{i−1}, 0, A_{i+1}, . . . , A_k], i = 1, . . . , k, so that P_{Y_i}A_j = 0 for all j ≠ i. If (4.13) is consistent, then the family of equations in (4.14) are consistent as well. In this case, we denote by

D_i = {X_i ∈ C^{n_i×p} : there exist X_j, j ≠ i, such that (4.13) holds}, H_i = {X_i ∈ C^{n_i×p} : (4.14) holds}, i = 1, . . . , k, (4.15)

the matrix sets composed of the partial solutions X_i of (4.13) and (4.14), respectively, and denote by

D = {X ∈ C^{n×p} : (4.13) holds}, H = {X ∈ C^{n×p} : the row blocks X_i of X satisfy (4.14), i = 1, . . . , k}. (4.16)

In what follows, we first discuss the relationships between D_i and H_i in (4.15), i = 1, . . . , k, and then the relationship between the two sets in (4.16).
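The construction of the reduced equations (4.14) can be traced numerically. In the sketch below (our illustration with k = 3; dropping the zero block of Y_i does not change P_{Y_i}, since the range is the same), we verify that P_{Y_i} annihilates every A_j with j ≠ i and that each reduced equation inherits the solution blocks of (4.13).

```python
import numpy as np

rng = np.random.default_rng(8)
m, p = 6, 2
blocks = [rng.standard_normal((m, ni)) for ni in (2, 3, 2)]   # A1, A2, A3
Xs = [rng.standard_normal((Ai.shape[1], p)) for Ai in blocks]
B = sum(Ai @ Xi for Ai, Xi in zip(blocks, Xs))                # consistent (4.13)

for i, Ai in enumerate(blocks):
    Yi = np.hstack([Aj for j, Aj in enumerate(blocks) if j != i])
    PYi = np.eye(m) - Yi @ np.linalg.pinv(Yi)                 # P_{Y_i}
    # P_{Y_i} annihilates every A_j with j != i ...
    assert all(np.allclose(PYi @ Aj, 0) for j, Aj in enumerate(blocks) if j != i)
    # ... so the i-th reduced equation (4.14) is solved by the block X_i
    assert np.allclose(PYi @ Ai @ Xs[i], PYi @ B)
print("Reduced equations P_{Y_i} A_i X_i = P_{Y_i} B are consistent.")
```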

Theorem 4.5. Assume that the matrix equation in (4.13) is consistent, and let D_i and H_i be as given in (4.15). Then

D_i = H_i, i = 1, . . . , k. (4.17)

Proof. For each i, let S_i = [0, . . . , 0, I_{n_i}, 0, . . . , 0] ∈ C^{n_i×n}, so that D_i = {S_iX : AX = B} and, since P_{Y_i}A = [0, . . . , 0, P_{Y_i}A_i, 0, . . . , 0], also H_i = {S_iX : P_{Y_i}AX = P_{Y_i}B}. By Lemma 2.1, r[S_i; A] = n_i + r(Y_i), r[S_i; P_{Y_i}A] = n_i, and r(P_{Y_i}A) = r(P_{Y_i}A_i) = r(A) − r(Y_i). Hence r[S_i; A] + r(P_{Y_i}A) = n_i + r(A) = r[S_i; P_{Y_i}A] + r(A). Thus (4.17) holds by Corollary 4.3(c).
Theorem 4.6. Assume that the matrix equation in (4.13) is consistent, and let D and H be as given in (4.16). Then the following results hold.
(a) D ⊆ H always holds.
(b) The following four statements are equivalent: (i) D = H; (ii) (k − 1)r(A) = r(Y_1) + · · · + r(Y_k); (iii) r(A) = r(A_1) + · · · + r(A_k); (iv) R(A_i) ∩ R(Y_i) = {0}, i = 1, . . . , k.
Proof. By Lemma 2.3, the general solutions of the equations in (4.14) are given by

X_i = (P_{Y_i}A_i)†P_{Y_i}B + Q_{P_{Y_i}A_i}U_i, i = 1, . . . , k, (4.18)

where U_i ∈ C^{n_i×p} are arbitrary, i = 1, . . . , k. Substituting (2.6) and (4.18) into (4.16) gives parametric representations of D and H of the form treated in Lemma 3.1. Applying Lemma 3.1(b) to these representations, we see that D ⊆ H holds if and only if a certain rank equality holds; expanding both of its sides by (2.2) shows that this equality is an identity, thus establishing (a). Substituting (4.18) into (4.13) gives

A_1Q_{P_{Y_1}A_1}U_1 + · · · + A_kQ_{P_{Y_k}A_k}U_k = B − A_1(P_{Y_1}A_1)†P_{Y_1}B − · · · − A_k(P_{Y_k}A_k)†P_{Y_k}B. (4.19)

It is obvious that D ⊇ H holds if and only if the matrix equation in (4.19) holds for all U_1, . . . , U_k, which is equivalent to A_iQ_{P_{Y_i}A_i} = 0, i.e., r(P_{Y_i}A_i) = r(A_i), for i = 1, . . . , k; the constant terms in (4.19) then match automatically by the consistency of (4.13). Since r(P_{Y_i}A_i) = r[Y_i, A_i] − r(Y_i) = r(A) − r(Y_i) by Lemma 2.1, and since r(Y_i) + r(A_i) − r(A) = dim[R(Y_i) ∩ R(A_i)] ≥ 0 always holds, the k conditions r(P_{Y_i}A_i) = r(A_i) are satisfied simultaneously if and only if R(A_i) ∩ R(Y_i) = {0} for i = 1, . . . , k. Combining this fact with (a) leads to the equivalence of (i) and (iv) in (b). The equivalence of (ii), (iii), and (iv) in (b) follows from Lemma 2.2.
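The rank criterion (ii) of Theorem 4.6(b) can be probed numerically. In the sketch below (our illustration; `criterion` is an ad hoc helper, and the two test cases are built so that rank additivity holds in the first and fails in the second), the expected output is `True False`.

```python
import numpy as np

rng = np.random.default_rng(9)
r = np.linalg.matrix_rank

def criterion(blocks):
    """Check (k-1) r(A) = r(Y_1) + ... + r(Y_k) for A = [A_1, ..., A_k]."""
    A = np.hstack(blocks)
    k = len(blocks)
    s = sum(r(np.hstack([Aj for j, Aj in enumerate(blocks) if j != i]))
            for i in range(k))
    return (k - 1) * r(A) == s

# Column-independent blocks: ranks add, so D = H by Theorem 4.6(b)
independent = [rng.standard_normal((9, ni)) for ni in (2, 3, 2)]
# Overlapping blocks: a shared column space breaks rank additivity
shared = rng.standard_normal((9, 2))
overlapping = [shared, np.hstack([shared, rng.standard_normal((9, 1))]),
               rng.standard_normal((9, 2))]
print(criterion(independent), criterion(overlapping))   # True False expected
```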
5 Relationships among solutions of A1X1B1 + A2X2B2 = C and its four reduced equations

Eq. (2.14) is well known in matrix theory and applications; its solvability conditions and general solution were precisely established by means of the ranks, ranges, and generalized inverses of the given matrices in the equation; see e.g., [2,14,22,23,28] and the relevant literature quoted there. It is easy to see that we can construct from (2.14) various smaller or transformed linear matrix equations. For instance, pre- and post-multiplying (2.14) with P_{A_i} and Q_{B_i}, i = 1, 2, respectively, yields the following four reduced matrix equations:

P_{A_2}A_1X_1B_1 = P_{A_2}C, (5.1)
A_1X_1B_1Q_{B_2} = CQ_{B_2}, (5.2)
P_{A_1}A_2X_2B_2 = P_{A_1}C, (5.3)
A_2X_2B_2Q_{B_1} = CQ_{B_1}. (5.4)

If the matrix equation in (2.14) is consistent, then each of (5.1)-(5.4) is consistent as well. Concerning the relationships among the solutions of (2.14) and (5.1)-(5.4), we have the following results.
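The following sketch (our illustration; a consistent instance of (2.14) is manufactured from a random solution pair) forms the four reduced equations (5.1)-(5.4) and verifies that the original pair solves each of them.

```python
import numpy as np

rng = np.random.default_rng(10)
A1, A2 = rng.standard_normal((5, 3)), rng.standard_normal((5, 4))
B1, B2 = rng.standard_normal((2, 6)), rng.standard_normal((3, 6))
X1, X2 = rng.standard_normal((3, 2)), rng.standard_normal((4, 3))
C = A1 @ X1 @ B1 + A2 @ X2 @ B2                      # consistent (2.14)

P = lambda M: np.eye(M.shape[0]) - M @ np.linalg.pinv(M)   # P_M
Q = lambda M: np.eye(M.shape[1]) - np.linalg.pinv(M) @ M   # Q_M

# The four reduced equations (5.1)-(5.4) inherit the solution pair (X1, X2)
assert np.allclose(P(A2) @ A1 @ X1 @ B1, P(A2) @ C)        # (5.1)
assert np.allclose(A1 @ X1 @ B1 @ Q(B2), C @ Q(B2))        # (5.2)
assert np.allclose(P(A1) @ A2 @ X2 @ B2, P(A1) @ C)        # (5.3)
assert np.allclose(A2 @ X2 @ B2 @ Q(B1), C @ Q(B1))        # (5.4)
print("All four reduced equations are satisfied by the original pair.")
```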
Theorem 5.1. Assume that the matrix equation in (2.14) is consistent, and denote by

D = {(X_1, X_2) : A_1X_1B_1 + A_2X_2B_2 = C}, H = {(X_1, X_2) : X_1 and X_2 satisfy (5.1)-(5.4)},

the collections of all pairs of solutions of (2.14) and of (5.1)-(5.4), respectively, and by D_1, D_2 and H_1, H_2 the corresponding sets of partial solutions X_1 and X_2, where H_1 is determined by (5.1) and (5.2) and H_2 by (5.3) and (5.4). Then the following results hold:

(a) D_1 = H_1 and D_2 = H_2;
(b) D ⊆ H always holds;
(c) D = H if and only if r[A_1, A_2] = r(A_1) + r(A_2) or r[B_1; B_2] = r(B_1) + r(B_2).

Proof. Applying the vec operator to (2.14), the equation can equivalently be expressed as

(B_1^T ⊗ A_1)vec(X_1) + (B_2^T ⊗ A_2)vec(X_2) = vec(C), (5.5)

which is a special case of (4.13) with k = 2. Pre-multiplying (5.5) with P_{B_2^T⊗A_2} and P_{B_1^T⊗A_1} yields the two reduced linear matrix equations of the form (4.14) associated with (5.5). Since (B_i^T ⊗ A_i)(B_i^T ⊗ A_i)† = (B_i†B_i)^T ⊗ A_iA_i†, a direct verification shows that

P_{B_2^T⊗A_2}(B_1^T ⊗ A_1)vec(X_1) = P_{B_2^T⊗A_2}vec(C)

holds if and only if A_2A_2†(A_1X_1B_1 − C)B_2†B_2 = A_1X_1B_1 − C, that is, if and only if (5.1) and (5.2) hold simultaneously; the analogous statement relates the second reduced equation of (5.5) to (5.3) and (5.4). Hence the partial solution sets and the joint solution sets of (5.5) and its reduced equations coincide, after devectorization, with the sets D_i, H_i, D, and H defined above. Result (a) now follows from applying Theorem 4.5 to (5.5), and result (b) from Theorem 4.6(a).

For (c), applying Theorem 4.6(b) to (5.5) shows that D = H if and only if

r[B_1^T ⊗ A_1, B_2^T ⊗ A_2] = r(B_1^T ⊗ A_1) + r(B_2^T ⊗ A_2) = r(A_1)r(B_1) + r(A_2)r(B_2).

Since R(X ⊗ Y) = R(X) ⊗ R(Y) for any matrices X and Y,

dim[R(B_1^T ⊗ A_1) ∩ R(B_2^T ⊗ A_2)] = (r(B_1) + r(B_2) − r[B_1; B_2])(r(A_1) + r(A_2) − r[A_1, A_2]),

and the rank additivity above holds if and only if this product vanishes, which is precisely the condition in (c). Alternatively, (b) and (c) can be established directly by the MRM: substituting the general solutions of (5.1)-(5.4), obtained from Lemma 2.4, into (2.14) and requiring the resulting equality to hold for all the parameter matrices leads, by Lemma 2.5(b), to four block matrix equalities; computing the ranks of their left-hand sides through Lemma 2.1 yields the same rank conditions as in (c).
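The vectorized reformulation (5.5) and the Kronecker projector factorization used in the proof can be checked numerically. In the sketch below (our illustration; the same consistent instance of (2.14) as before is reused), both identities are verified with NumPy's kron and pinv.

```python
import numpy as np

rng = np.random.default_rng(11)
A1, A2 = rng.standard_normal((5, 3)), rng.standard_normal((5, 4))
B1, B2 = rng.standard_normal((2, 6)), rng.standard_normal((3, 6))
X1, X2 = rng.standard_normal((3, 2)), rng.standard_normal((4, 3))
C = A1 @ X1 @ B1 + A2 @ X2 @ B2

vec = lambda M: M.flatten(order="F")
G1, G2 = np.kron(B1.T, A1), np.kron(B2.T, A2)
# (2.14) in vectorized form (5.5): G1 vec(X1) + G2 vec(X2) = vec(C)
assert np.allclose(G1 @ vec(X1) + G2 @ vec(X2), vec(C))

# The projector factorization: G2 G2† = (B2† B2)^T ⊗ A2 A2†
assert np.allclose(G2 @ np.linalg.pinv(G2),
                   np.kron((np.linalg.pinv(B2) @ B2).T,
                           A2 @ np.linalg.pinv(A2)))

# The vec-reduced equation for X1 holds for the original solution pair
PG2 = np.eye(G2.shape[0]) - G2 @ np.linalg.pinv(G2)
print("Vectorized reduction checks out:",
      np.allclose(PG2 @ (G1 @ vec(X1)), PG2 @ vec(C)))
```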
In addition to the LMVF in (1.3), there are many types of multilinear and nonlinear matrix-valued functions that occur in matrix theory and applications, such as f(X_1, . . . , X_k) = (A_1 + B_1X_1C_1)(A_2 + B_2X_2C_2) · · · (A_k + B_kX_kC_k), etc. In these cases, it would be of interest, but also challenging, to investigate the connections between a pair of such matrix-valued functions under various specified assumptions.