Preprint
Review

This version is not peer-reviewed.

A Comprehensive Review on the Generalized Sylvester Equation AX-YB=C

A peer-reviewed version of this preprint was published in:
Symmetry 2025, 17(10), 1686. https://doi.org/10.3390/sym17101686

Submitted: 12 August 2025
Posted: 15 August 2025


Abstract
Since Roth’s work on the generalized Sylvester equation AX −YB = C (GSE) in 1952, related research has consistently attracted significant attention. Building on this, this review systematically summarizes relevant research on GSE from five perspectives: research methods, constrained solutions, various generalizations, iterative algorithms, and applications. Furthermore, we provide comments on current research, put forward several intriguing questions, and offer prospects for future research trends. We hope this work can fill the gap in the review literature on GSE and offer some inspiration for subsequent studies in the field.

1. Introduction

This review commences with the famous Sylvester equation
$AX - XB = C$
with unknown X, named after the British mathematician James Joseph Sylvester (1814–1897) [251]. For a detailed discussion of this equation, refer to the review article [14] by Rajendra Bhatia, a Fellow of the Indian National Science Academy. The core of our review revolves around the generalized Sylvester equation (abbreviated as GSE):
$AX - YB = C$
with unknowns X and Y.
In 1952, Roth established the necessary and sufficient conditions for the solvability of GSE over a field by means of block matrix equivalence. Since then, a large number of papers on GSE have been published in rapid succession. The solvability conditions and explicit expressions of the general solution for GSE have been investigated from diverse perspectives: linear transformations, generalized inverses, matrix decompositions, real (complex) representations, determinant representations, and semi-tensor products, etc. Extensive studies have been conducted on various constrained solutions (e.g., -congruent, symmetric, self-adjoint, positive (semi)definite, per(skew)symmetric, bi(skew)symmetric, Re-(non)negative definite, Re-(non)positive definite, η -Hermitian, η -skew-Hermitian, ϕ -Hermitian, and equality-constrained solutions) of GSE, as well as its various best approximate solutions when the solvability conditions are not satisfied. Furthermore, the generalizations of GSE in different algebraic structures (e.g., unit regular rings, principal ideal domains, division rings, module-finite rings, commutative rings, Artinian rings, noncommutative rings, dual numbers, dual quaternions), operator equations, tensor equations, polynomial matrix equations, Sylvester-polynomial-conjugate matrix equations, and formal aspects have also shown remarkable vitality. Iterative algorithms for solving various solutions to GSE and its extended forms have also attracted considerable attention, with continuous updates and optimizations. Finally, the relevant results of GSE have demonstrated extraordinary significance in both theoretical applications (e.g., solvability of matrix equations, dual number matrix factorizations, and microlocal triangularization of pseudo-differential systems) and practical applications (e.g., hand-eye calibration problems, and encryption and decryption of color images).
Qing-Wen Wang, the first author of this paper, has been engaged in research and teaching on the theory of matrix equations since the early 1990s and has published over 100 related articles. Numerous scholars, both domestic and international, have suggested that we systematically present the theory of solving linear matrix equations. To date, our team has published three review articles, focusing respectively on solving the equation AXB = C [284], the equations AX = C and XB = D [278], and the system A₁XB₁ = C₁ and A₂XB₂ = C₂ [279].
As mentioned earlier, over the history of research on GSE, its conclusions are interrelated, mutually inspiring, mutually promotive, and progressively advanced, forming a complex, intertwined, and extensive network. By reviewing, analyzing, comparing, summarizing, questioning, and looking ahead across a large body of literature, we strive to identify the commonalities, main threads, and approaches in these studies, aiming to provide insights and inspiration for subsequent research on GSE; we therefore believe this work is of significant value. In this paper, we focus on GSE as the core theme and unfold its solution methods, solution categories, generalizations, iterative algorithms, and theoretical and practical applications step by step, so as to provide a comprehensive overview.
The remainder of this paper consists of 8 sections. Section 2 presents some necessary notations and notes. In Section 3, we introduce Roth’s work on GSE published in [228], which serves as the starting point for this paper. In Section 4, we summarize seven methods for studying GSE, demonstrating diverse perspectives and research schemes. Various solutions to GSE are discussed in detail in Section 5. We devote Section 6 to introducing the generalizations of GSE in certain algebraic aspects. In Section 7, we enumerate several classic iterative algorithms for solving various solutions to GSE and its generalized forms. The theoretical and practical applications of GSE are presented in Section 8. Conclusions are stated in Section 9.

2. Preliminary

This section recalls some notation and definitions used throughout the paper. Additional terminology needed in later (sub)sections will be introduced where it first arises. A few bulky symbols are referenced by their indices in the original sources rather than reproduced here; this does not hinder understanding and keeps the paper concise and readable.
Let F be a ring; let F[λ] be the polynomial ring over F in the variable λ; let F^{m×n} be the set of all m × n matrices over F; and let F^{m×n}[λ] be the set of all m × n polynomial matrices over F in the variable λ. In particular, F^m = F^{m×1}. Let deg f(λ) be the degree of f(λ) ∈ F[λ]; let rank(A(λ)) be the rank of A(λ) ∈ F^{m×n}[λ]; let A^T be the transpose of A ∈ F^{m×n}. The symbol det(A(λ)) denotes the determinant of A(λ) ∈ F^{n×n}[λ] for a field F. The component-wise representation of A ∈ F^{m×n} can be written in the following three forms:
$A = [a_{ij}] = [a_{i,j}] = [a_{i,j}]_{m\times n} \in F^{m\times n}.$
The Kronecker product of A = [a_{ij}] ∈ F^{m×n} and B ∈ F^{s×t} is defined by
$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix} \in F^{ms\times nt}.$
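As a quick numerical sanity check of this definition (a sketch we add, not taken from the cited sources), the following NumPy snippet compares np.kron with the blockwise formula and verifies the well-known mixed-product property (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K = np.kron(A, B)  # (ms) x (nt) = 4 x 4 here

# The (i, j) block of A ⊗ B is a_ij * B.
manual = np.block([[A[0, 0] * B, A[0, 1] * B],
                   [A[1, 0] * B, A[1, 1] * B]])
assert np.array_equal(K, manual)

# Mixed-product property: (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D).
C = np.array([[1, 1], [0, 1]])
D = np.array([[2, 0], [1, 1]])
assert np.array_equal(np.kron(A @ C, B @ D), np.kron(A, B) @ np.kron(C, D))
```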
The matrices A F m × n and B F m × n are equivalent if there exist two nonsingular matrices P F m × m and Q F n × n such that A = P B Q . Moreover, A ( λ ) F m × n [ λ ] and B ( λ ) F m × n [ λ ] are equivalent if there exist two invertible polynomial matrices P ( λ ) F m × m [ λ ] and Q ( λ ) F n × n [ λ ] such that A ( λ ) = P ( λ ) B ( λ ) Q ( λ ) .
For a subspace T ⊆ F^n, let P_T be the orthogonal projector onto T, and let T^⊥ and dim(T) be the orthogonal complement and dimension of T, respectively. In addition, denote AT = {At ∣ t ∈ T} for A ∈ F^{m×n}. For two vector spaces V and W over a field F, let τ be a linear transformation from V to W; then Ker(τ) and Im(τ) represent the kernel and the image of τ, respectively.
Let Z₊, N, R, and C be the sets of all positive integers, natural numbers, real numbers, and complex numbers, respectively. The symbol sign(a) stands for the sign of a ∈ R, and ∅ represents the empty set. Let the set of all (real) quaternions be
$\mathbb{H} = \{\, q_1 + q_2 i + q_3 j + q_4 k \mid i^2 = j^2 = k^2 = ijk = -1,\ q_1, q_2, q_3, q_4 \in \mathbb{R} \,\},$
which is a four-dimensional noncommutative division algebra over R [227]. Let the conjugate of a quaternion q = q₁ + q₂i + q₃j + q₄k ∈ H be
$\bar{q} = q_1 - q_2 i - q_3 j - q_4 k.$
Let A = [a_{ij}] ∈ H^{m×n}. The conjugate of A is Ā = [ā_{ij}] ∈ H^{m×n}, and the conjugate transpose of A is A* = Ā^T. Let
$A^{\eta} = -\eta A \eta \quad \text{and} \quad A^{\eta*} = -\eta A^{*} \eta,$
where η ∈ {i, j, k}. If A = A^{η*} (A = −A^{η*}) for η ∈ {i, j, k}, then A is called η-Hermitian (η-anti-Hermitian) [127]. An η-anti-Hermitian matrix is also called an η-skew-Hermitian matrix. Moreover, A^{i*} = A* and A^{j*} = A^{k*} = A^T for A ∈ C^{m×n}.
Denote
$P_A = AA^{\dagger}, \quad Q_A = A^{\dagger}A, \quad L_A = I - A^{\dagger}A, \quad R_A = I - AA^{\dagger},$
where A† is the Moore-Penrose inverse [215] of a matrix A. The symbols A^{-1}, rank(A), R(A), and N(A) denote the inverse, rank, range, and null space of a matrix A, respectively. In addition, Ā, A*, and ‖A‖_F are the conjugate, conjugate transpose, and Frobenius norm of A ∈ C^{m×n}, respectively. Let I and 0 be the identity matrix and the null matrix, respectively, with appropriate orders. Specifically, I_n denotes the n × n identity matrix. For a symmetric matrix A ∈ R^{m×m}, A > 0 and A ≥ 0 mean that A is positive definite and positive semidefinite, respectively. In addition, A ≥ B means A − B ≥ 0 for matrices A and B. The matrix
$A = [A_1, A_2, \ldots, A_n] = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix}$
denotes a new matrix formed by arranging the matrix blocks A₁, A₂, …, A_n (with the same number of rows) column-wise. The symbol diag(σ₁, …, σ_r) denotes a (block) diagonal matrix with diagonal entries σ₁, …, σ_r.
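The projectors above are easy to illustrate numerically. The following sketch (an illustration we add, with A† computed by np.linalg.pinv) verifies the defining Penrose equations of A† and the idempotency of L_A and R_A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # 4 x 5, rank <= 3

Ad = np.linalg.pinv(A)  # A†
ok = lambda X, Y: np.allclose(X, Y, atol=1e-8)

# The four Penrose equations characterizing A†.
assert ok(A @ Ad @ A, A) and ok(Ad @ A @ Ad, Ad)
assert ok((A @ Ad).T, A @ Ad) and ok((Ad @ A).T, Ad @ A)

# L_A = I - A†A and R_A = I - AA† are idempotent and annihilate A.
L_A = np.eye(5) - Ad @ A
R_A = np.eye(4) - A @ Ad
assert ok(L_A @ L_A, L_A) and ok(R_A @ R_A, R_A)
assert ok(A @ L_A, np.zeros((4, 5))) and ok(R_A @ A, np.zeros((4, 5)))
```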
For a ring F and a ∈ F, a is regular (or inner-invertible) if there exists a⁻ ∈ F such that a a⁻ a = a; a is (1,2)-invertible (or reflexive) if there exists a^{(1,2)} ∈ F such that a a^{(1,2)} a = a and a^{(1,2)} a a^{(1,2)} = a^{(1,2)}. The characteristic of a field F is denoted by char(F).
An equation or a system of equations is said to be solvable (consistent) if it has at least one solution. The symbols “⇒" and “⇔" denote “imply" and “if and only if", respectively.
At the end of this section, we present three specific notes regarding this review.
(1)
Though some research does not focus directly on GSE itself, its traces are easily detectable. Thus, we regard such content as an integral part of GSE-related research.
(2)
We selected core literature relevant to this review. Results closely related to the theme are rigorously presented as theorems, while less relevant conclusions are briefly summarized narratively. Furthermore, the proofs of these theorems are omitted here.
(3)
The remarks in this paper include comments and suggestions on relevant results, encompassing both previous researchers’ views and our reflections, questions, and prospects.

3. Roth’s Equivalence Theorem

First, we specifically present the paper's core equation:
$AX - YB = C, \quad (1)$
where A = [a_{i,j}] ∈ F^{m×r}, B = [b_{i,j}] ∈ F^{s×n}, and C = [c_{i,j}] ∈ F^{m×n} are given over a ring F.
Let F be a field. In 1952, Roth [228] first studied the necessary and sufficient conditions for the solvability of the polynomial matrix form of Eq. (1), i.e.,
$A(\lambda)X(\lambda) - Y(\lambda)B(\lambda) = C(\lambda), \quad (2)$
where A(λ), B(λ), C(λ) ∈ F^{n×n}[λ], by using the normal form of polynomial matrices.
Theorem 1. 
[228] Eq. (2) has a solution pair X ( λ ) , Y ( λ ) F n × n [ λ ] if and only if the polynomial matrices
$\begin{bmatrix} A(\lambda) & C(\lambda) \\ 0 & B(\lambda) \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A(\lambda) & 0 \\ 0 & B(\lambda) \end{bmatrix}$
are equivalent.
Roth ([228], Theorem 1) stated that Theorem 1 remains valid for rectangular polynomial matrices of appropriate orders. Thus, the following theorem can be derived immediately.
Theorem 2. 
([228], Theorem 1) Let A ∈ F^{m×r}, B ∈ F^{s×n}, and C ∈ F^{m×n}. Then,
$\text{Eq. (1) has a solution pair } X \in F^{r\times n} \text{ and } Y \in F^{m\times s} \quad (3)$
if and only if
$\begin{bmatrix} A & C \\ 0 & B \end{bmatrix} \ \text{and} \ \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} \ \text{are equivalent.} \quad (4)$
We call this theorem Roth’s equivalence theorem (abbreviated as RET).
Remark 1. 
It is easy to find that the study of Eq. (1) is equivalent to that of the equation
$AX + YB = C \quad (5)$
with given A = [a_{i,j}] ∈ F^{m×r}, B = [b_{i,j}] ∈ F^{s×n}, and C = [c_{i,j}] ∈ F^{m×n}, by regarding −B in Eq. (1) as B in Eq. (5). Thus, in this paper, we collectively refer to Eqs. (1) and (5) as GSE.

4. Different Methods on GSE

Roth [228] considered Eq. (1) based on the canonical forms of polynomial matrices. Subsequently, many scholars have provided different proofs of RET from various perspectives. These proofs demonstrate the effectiveness and distinctive character of different mathematical methods applied to Eq. (1). In addition, these methods have had a profound impact on the subsequent study of different Sylvester-type equations. This is precisely one of the enchantments of mathematics: reaching the same endpoint through different paths, or even through completely unrelated ones. Furthermore, mathematicians ceaselessly pursue groundbreaking innovations and perpetually strive to discover the most elegant path.
Remark 2. 
It is notable that (3) implies (4), since
$\begin{bmatrix} I & -Y \\ 0 & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} \begin{bmatrix} I & X \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & AX - YB \\ 0 & B \end{bmatrix}.$
Therefore, one only needs to consider the sufficiency of RET, i.e., (4) ⇒ (3).
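The block identity above can be checked numerically for random matrices; the following NumPy sketch (an illustration we add) does so:

```python
import numpy as np

rng = np.random.default_rng(1)
m, r, s, n = 3, 4, 2, 5
A = rng.standard_normal((m, r))
B = rng.standard_normal((s, n))
X = rng.standard_normal((r, n))
Y = rng.standard_normal((m, s))

left = np.block([[np.eye(m), -Y], [np.zeros((s, m)), np.eye(s)]])
mid = np.block([[A, np.zeros((m, n))], [np.zeros((s, r)), B]])
right = np.block([[np.eye(r), X], [np.zeros((n, r)), np.eye(n)]])

product = left @ mid @ right
expected = np.block([[A, A @ X - Y @ B], [np.zeros((s, r)), B]])
assert np.allclose(product, expected)
```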

4.1. Method by Linear Transformations and Subspace Dimensions

Let F be a field. Flanders and Wimmer [80] proved RET for m = r and s = n by means of linear transformations and subspace dimension arguments. This method is more fundamental and elementary than Roth’s method.
Proof of RET ([80]).
Step 1: 
Define ψ_i (i = 0, 1) on pairs of matrices U, W ∈ F^{(r+s)×(r+s)} by
$\psi_0(U, W) = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} U - W \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} \quad \text{and} \quad \psi_1(U, W) = \begin{bmatrix} A & C \\ 0 & B \end{bmatrix} U - W \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}.$
Then, condition (4) yields
$\dim(\operatorname{Ker}(\psi_0)) = \dim(\operatorname{Ker}(\psi_1)).$
Step 2: 
Let
$U = \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix} \quad \text{and} \quad W = \begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix}.$
Then,
$\operatorname{Ker}(\psi_0) = \bigl\{ (U, W) \mid AU_{11} = W_{11}A,\; AU_{12} = W_{12}B,\; BU_{21} = W_{21}A,\; BU_{22} = W_{22}B \bigr\},$
$\operatorname{Ker}(\psi_1) = \bigl\{ (U, W) \mid AU_{11} + CU_{21} = W_{11}A,\; AU_{12} + CU_{22} = W_{12}B,\; BU_{21} = W_{21}A,\; BU_{22} = W_{22}B \bigr\}.$
Let
$Z = \left\{ \begin{bmatrix} U_{21} & U_{22} \\ W_{21} & W_{22} \end{bmatrix} \;\middle|\; BU_{21} = W_{21}A,\; BU_{22} = W_{22}B \right\}.$
For i = 0, 1, define
$\nu_i : \operatorname{Ker}(\psi_i) \to Z, \quad \nu_i(U, W) = \begin{bmatrix} U_{21} & U_{22} \\ W_{21} & W_{22} \end{bmatrix}.$
Then Im(ν₁) ⊆ Im(ν₀) = Z and Ker(ν₁) = Ker(ν₀). So, Im(ν₁) = Im(ν₀).
Step 3: 
Ker(ψ₀) contains a pair (U, W) with U₂₁ = 0, U₂₂ = I, W₂₁ = 0, and W₂₂ = I; hence, by Step 2, Ker(ψ₁) contains such a pair as well. For that pair, AU₁₂ + C = W₁₂B, i.e., A(−U₁₂) − (−W₁₂)B = C, so (3) holds. □
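The dimension count in Step 1 can be illustrated numerically: after vectorization, ψ₀ and ψ₁ become ordinary linear maps whose nullities can be compared. The sketch below (an illustration we add; C is constructed to be solvable, so condition (4) holds) does this with NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
r, s = 3, 2                                                     # square case m = r, s = n
A = rng.standard_normal((r, 2)) @ rng.standard_normal((2, r))   # rank-deficient A
B = rng.standard_normal((s, 1)) @ rng.standard_normal((1, s))   # rank-deficient B
C = A @ rng.standard_normal((r, s)) - rng.standard_normal((r, s)) @ B  # solvable

k = r + s
M0 = np.block([[A, np.zeros((r, s))], [np.zeros((s, r)), B]])
M1 = np.block([[A, C], [np.zeros((s, r)), B]])

def nullity(M):
    # psi(U, W) = M U - W M0; column-major vectorization gives
    # vec(M U) = (I ⊗ M) vec(U) and vec(W M0) = (M0^T ⊗ I) vec(W).
    T = np.hstack([np.kron(np.eye(k), M), -np.kron(M0.T, np.eye(k))])
    return 2 * k * k - np.linalg.matrix_rank(T)

# Step 1: since (4) holds for a solvable C, the two kernels have equal dimension.
assert nullity(M0) == nullity(M1)
```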
Remark 3. 
(1)
In [80], Flanders and Wimmer mentioned that by making small modifications to the above proof, one can similarly obtain the proof of RET under the condition of rectangular matrices A, B, and C.
(2)
In terms of linear transformations and subspace dimensions, Dmytryshyn et al. discussed two highly complex systems of equations (see [59], Theorem 1.1 and [58], Theorem 1).

4.2. Method by Generalized Inverses

Let F be a field. In 1955, Penrose [215] concisely defined the Moore-Penrose inverse of any rectangular matrix through four matrix equations.
Definition 1. 
([9,215]) Let A ∈ F^{m×n}. If there exists X ∈ F^{n×m} such that
$(1)\ AXA = A, \quad (2)\ XAX = X, \quad (3)\ (AX)^{*} = AX, \quad (4)\ (XA)^{*} = XA,$
then X is called the Moore-Penrose (in short, MP) inverse of A and is denoted by A†. Especially, if X ∈ F^{n×m} satisfies only the above Eq. (1), then X is called an inner inverse of A and is denoted by A⁻. Clearly, A† is a particular inner inverse of A.
Since then, the theory of generalized inverses has flourished (see [9,40,223,265]), and it has been widely used to study the solvability conditions of linear equations and represent the explicit expressions of their general solutions when solvable. Penrose’s study on the matrix equation A X B = C [215] is the most renowned.
In 1979, Baksalary and Kala [6] naturally utilized inner inverses to establish solvability conditions and representations of the general solution for Eq. (1).
Theorem 3. 
([6]) Eq. (1) has a solution pair X ∈ F^{r×n} and Y ∈ F^{m×s} if and only if
$(I - AA^{-})\, C\, (I - B^{-}B) = 0, \quad (6)$
in which case,
$X = A^{-}C + A^{-}ZB + (I - A^{-}A)W,$
$Y = -(I - AA^{-})CB^{-} + Z - (I - AA^{-})ZBB^{-},$
where W and Z are arbitrary matrices with appropriate orders.
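Theorem 3 is easy to test numerically. In the sketch below (an illustration we add; the Moore-Penrose inverse serves as the particular inner inverses A⁻ and B⁻), a consistent C is generated, condition (6) is checked, and the general-solution formulas are verified for random parameter matrices W and Z:

```python
import numpy as np

rng = np.random.default_rng(3)
m, r, s, n = 4, 3, 2, 5
A = rng.standard_normal((m, r))
B = rng.standard_normal((s, n))
C = A @ rng.standard_normal((r, n)) - rng.standard_normal((m, s)) @ B  # consistent

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
Im = np.eye(m)

# Solvability condition (6): (I - AA⁻) C (I - B⁻B) = 0.
assert np.allclose((Im - A @ Ad) @ C @ (np.eye(n) - Bd @ B), 0, atol=1e-8)

# The general solution for arbitrary W (r x n) and Z (m x s).
W = rng.standard_normal((r, n))
Z = rng.standard_normal((m, s))
X = Ad @ C + Ad @ Z @ B + (np.eye(r) - Ad @ A) @ W
Y = -(Im - A @ Ad) @ C @ Bd + Z - (Im - A @ Ad) @ Z @ B @ Bd
assert np.allclose(A @ X - Y @ B, C, atol=1e-8)
```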
Remark 4. 
In view of RET and Theorem 3, Baksalary and Kala ([6], Remark 2) noted that
$(6) \Leftrightarrow (4) \Leftrightarrow \operatorname{rank}\begin{bmatrix} A & C \\ 0 & B \end{bmatrix} = \operatorname{rank}(A) + \operatorname{rank}(B).$
They also pointed out that this result can be directly derived from [197], Formula (8.7) in a particular case, i.e.,
$\operatorname{rank}\begin{bmatrix} A & C \\ 0 & B \end{bmatrix} = \operatorname{rank}(A) + \operatorname{rank}(B) + \operatorname{rank}\bigl((I - AA^{-})C(I - B^{-}B)\bigr). \quad (7)$
Moreover, they remarked that the proof using generalized inverses is simpler than Roth's [228] and Flanders et al.'s [80]; in terms of both the simplicity and the length of the proof, this is indeed the case.
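The rank characterization and formula (7) can likewise be checked numerically (a NumPy sketch we add, again with A†, B† standing in for the inner inverses A⁻, B⁻):

```python
import numpy as np

rng = np.random.default_rng(4)
m, r, s, n = 4, 3, 2, 5
A = rng.standard_normal((m, r))
B = rng.standard_normal((s, n))
Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
rkA, rkB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)

bordered = lambda C: np.block([[A, C], [np.zeros((s, r)), B]])
residual = lambda C: (np.eye(m) - A @ Ad) @ C @ (np.eye(n) - Bd @ B)

# For a consistent C, the bordered rank collapses to rank(A) + rank(B).
C1 = A @ rng.standard_normal((r, n)) - rng.standard_normal((m, s)) @ B
assert np.linalg.matrix_rank(bordered(C1)) == rkA + rkB

# For an arbitrary C, formula (7) holds.
C2 = rng.standard_normal((m, n))
lhs = np.linalg.matrix_rank(bordered(C2))
rhs = rkA + rkB + np.linalg.matrix_rank(residual(C2))
assert lhs == rhs
```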
Remark 5. 
Under the hypotheses of Theorem 3, it is easy to obtain that
$(6) \Leftrightarrow \mathcal{R}\bigl(C(I - B^{-}B)\bigr) \subseteq \mathcal{N}(I - AA^{-}) \Leftrightarrow C\,\mathcal{N}(B) \subseteq \mathcal{R}(A),$
which was also proved by Woude ([260], Lemma 3.2) by an elementary method. In addition, Woude applied this result to a control problem that occurs in almost non-stationary stochastic processes via measurement feedback (see [260], Theorems 3.3 and 4.1).
Remark 6. 
It is easy to show that the solvability of Eq. (1) is equivalent to the existence of X and Y such that
$\begin{bmatrix} I & -Y \\ 0 & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} \begin{bmatrix} I & X \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & C \\ 0 & B \end{bmatrix}. \quad (8)$
In terms of a geometrical method, Olshevsky gave a cyclic argument over C as follows:
$(3) \Rightarrow (8) \Rightarrow (4) \Rightarrow (7) \Rightarrow (3)$
(see [209], Proof of Theorem 1.2).
Let F = C . Meyer [201] revealed an interesting result, that is, the solvability of Eq. (1) is equivalent to the existence of the upper block inner inverse of a certain block matrix.
Theorem 4. 
([201], Theorems 1 and 2) The block triangular complex matrix
$T = \begin{bmatrix} A & C \\ 0 & B \end{bmatrix}$
has an upper block triangular inner inverse if and only if
$\operatorname{rank}(T) = \operatorname{rank}(A) + \operatorname{rank}(B),$
in which case,
$T^{-} = \begin{bmatrix} A^{-} & -A^{-}CB^{-} \\ 0 & B^{-} \end{bmatrix}$
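Theorem 4's block inner inverse is also easy to verify numerically (a sketch we add, with A⁻ = A† and B⁻ = B†):

```python
import numpy as np

rng = np.random.default_rng(5)
m, r, s, n = 4, 3, 2, 5
A = rng.standard_normal((m, r))
B = rng.standard_normal((s, n))
C = A @ rng.standard_normal((r, n)) - rng.standard_normal((m, s)) @ B  # consistent

T = np.block([[A, C], [np.zeros((s, r)), B]])
assert np.linalg.matrix_rank(T) == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
T_inner = np.block([[Ad, -Ad @ C @ Bd],
                    [np.zeros((n, m)), Bd]])  # upper block triangular

assert np.allclose(T @ T_inner @ T, T, atol=1e-8)  # inner-inverse property
```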

4.3. Method by Singular Value Decompositions

Let F = R . The singular value decomposition (abbreviated as SVD) [89], as an important tool for the study of matrix theory, also plays a significant role in the research on solutions of matrix equations. Chu [34] utilized the SVD to study the solvability conditions and the representation of the general solution of Eq. (5).
In fact, let the SVDs of A and B given in Eq. (5) be
$A = U_A D_A V_A^{T} \quad \text{and} \quad B = U_B D_B V_B^{T}, \quad (9)$
where U_A, U_B, V_A, and V_B are orthogonal matrices with appropriate orders,
$D_A = \begin{bmatrix} \Sigma_A & 0 \\ 0 & 0 \end{bmatrix}, \quad D_B = \begin{bmatrix} \Sigma_B & 0 \\ 0 & 0 \end{bmatrix}, \quad (10)$
Σ_A = diag(α₁, …, α_k) with α_i > 0 (1 ≤ i ≤ k), and Σ_B = diag(δ₁, …, δ_l) with δ_j > 0 (1 ≤ j ≤ l). Then,
$(5) \Leftrightarrow U_A D_A V_A^{T} X + Y U_B D_B V_B^{T} = C \Leftrightarrow D_A \tilde{X} + \tilde{Y} D_B = \tilde{C},$
where X̃ = V_A^T X V_B, Ỹ = U_A^T Y U_B, and C̃ = U_A^T C V_B. Partitioning the above equation analogously to the partitioning of D_A and D_B, we obtain
$\begin{bmatrix} \Sigma_A & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{X}_{11} & \tilde{X}_{12} \\ \tilde{X}_{21} & \tilde{X}_{22} \end{bmatrix} + \begin{bmatrix} \tilde{Y}_{11} & \tilde{Y}_{12} \\ \tilde{Y}_{21} & \tilde{Y}_{22} \end{bmatrix} \begin{bmatrix} \Sigma_B & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} \tilde{C}_{11} & \tilde{C}_{12} \\ \tilde{C}_{21} & \tilde{C}_{22} \end{bmatrix} \Leftrightarrow \begin{bmatrix} \Sigma_A \tilde{X}_{11} + \tilde{Y}_{11}\Sigma_B & \Sigma_A \tilde{X}_{12} \\ \tilde{Y}_{21}\Sigma_B & 0 \end{bmatrix} = \begin{bmatrix} \tilde{C}_{11} & \tilde{C}_{12} \\ \tilde{C}_{21} & \tilde{C}_{22} \end{bmatrix}.$
Based on the above discussion, the following theorem can be obtained.
Theorem 5. 
([34], Theorem 2) Under the notations in (9) and (10), let
$U_A = [U_{A1}, U_{A2}] \quad \text{and} \quad V_B = [V_{B1}, V_{B2}].$
(1)
Then Eq. (5) is solvable if and only if
$\tilde{C}_{22} = 0, \ \text{i.e.,} \ U_{A2}^{T} C V_{B2} = 0. \quad (11)$
(2)
Suppose that (11) holds. Denote
$\tilde{C} = (\tilde{c}_{ij}) \quad \text{and} \quad M_{ij} = [\alpha_i, \ \delta_j].$
Then X̃₂₁, X̃₂₂, Ỹ₁₂, and Ỹ₂₂ are arbitrary,
$\tilde{X}_{12} = \Sigma_A^{-1}\tilde{C}_{12}, \quad \tilde{Y}_{21} = \tilde{C}_{21}\Sigma_B^{-1}, \quad \tilde{X}_{11} = (\tilde{x}_{ij}), \quad \text{and} \quad \tilde{Y}_{11} = (\tilde{y}_{ij}),$
where
$\begin{bmatrix} \tilde{x}_{ij} \\ \tilde{y}_{ij} \end{bmatrix} = M_{ij}^{\dagger}\tilde{c}_{ij} + (I - M_{ij}^{\dagger}M_{ij})Z_{ij}$
for arbitrary Z_{ij}. Moreover, if Ỹ₁₁ is arbitrary, then
$\tilde{X}_{11} = \Sigma_A^{-1}(\tilde{C}_{11} - \tilde{Y}_{11}\Sigma_B).$
(3)
If (11) holds, then
$X = V_A \begin{bmatrix} \Sigma_A^{-1}(U_{A1}^{T}CV_{B1} - Z_1\Sigma_B) & \Sigma_A^{-1}U_{A1}^{T}CV_{B2} \\ Z_2 & Z_3 \end{bmatrix} V_B^{T}, \quad Y = U_A \begin{bmatrix} Z_1 & Z_4 \\ U_{A2}^{T}CV_{B1}\Sigma_B^{-1} & Z_5 \end{bmatrix} U_B^{T},$
where Z₁, …, Z₅ are arbitrary matrices with appropriate orders.
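Chu's SVD procedure can be turned into a short numerical solver for Eq. (5). The sketch below (an illustration we add; all free blocks and Ỹ₁₁ are set to zero to obtain one particular solution) follows the partitioning above:

```python
import numpy as np

rng = np.random.default_rng(7)
m, r, s, n = 4, 3, 2, 5
A = rng.standard_normal((m, r))
B = rng.standard_normal((s, n))
C = A @ rng.standard_normal((r, n)) + rng.standard_normal((m, s)) @ B  # consistent

UA, sa, VAt = np.linalg.svd(A)  # full SVD: A = UA diag(sa) VAt
UB, sb, VBt = np.linalg.svd(B)
VA, VB = VAt.T, VBt.T
k, l = int((sa > 1e-12).sum()), int((sb > 1e-12).sum())  # k = rank(A), l = rank(B)

Ct = UA.T @ C @ VB                             # C~ = UA^T C VB
assert np.allclose(Ct[k:, l:], 0, atol=1e-8)   # solvability condition (11)

Xt, Yt = np.zeros((r, n)), np.zeros((m, s))
Xt[:k, :l] = Ct[:k, :l] / sa[:k, None]         # Sigma_A^{-1} C~11 (with Y~11 = 0)
Xt[:k, l:] = Ct[:k, l:] / sa[:k, None]         # Sigma_A^{-1} C~12
Yt[k:, :l] = Ct[k:, :l] / sb[None, :l]         # C~21 Sigma_B^{-1}

X, Y = VA @ Xt @ VB.T, UA @ Yt @ UB.T          # transform back
assert np.allclose(A @ X + Y @ B, C, atol=1e-8)
```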
Remark 7. 
Interestingly, Chu in [34], Theorem 1 utilized the generalized singular value decomposition (abbreviated as GSVD) [212] to study the extended form of Eq. (5) over R, i.e.,
$AXE + FYB = C, \quad (12)$
where A, E, F, B, and C are given real matrices with appropriate orders. Eleven years later, Xu et al. [323] in turn discussed the solvability conditions, the general solution, and least-squares solutions of Eq. (12) over C by using the canonical correlation decomposition (abbreviated as CCD) introduced in [90].
Remark 8. 
Inspired by solving Eq. (5) via the SVD, one can utilize equivalent normal forms of A and B to study Eq. (5) over a field, along with an approach similar to Theorem 5. In fact, we only need to regard the orthogonal matrices U_A, U_B, V_A, and V_B in (9) as invertible matrices, the transposes V_A^T and V_B^T as the inverses V_A^{-1} and V_B^{-1}, and Σ_A and Σ_B as identity matrices of appropriate orders.
It can be seen that using equivalent normal forms of matrices to solve matrix equations is also a very convenient method. This viewpoint has been confirmed repeatedly in subsequent research. Wang et al. used elementary row and column transformations of matrices to give the equivalent normal form of a matrix triplet
$\begin{bmatrix} A & B & C \end{bmatrix}$
with the same row number over an arbitrary division ring F (see [273], Theorem 2.1). Similarly, the equivalent normal form of another matrix triplet
$\begin{bmatrix} D^{T} & E^{T} & F^{T} \end{bmatrix}^{T}$
with the same column number can also be obtained (see [273], Theorem 2.2). These two equivalent normal forms were applied to solve the matrix equation
$AXD + BYE + CZF = G, \quad (13)$
where X, Y, and Z are unknown (see [273], Theorem 3.2).
Interestingly, He et al. utilized [273], Theorem 2.1 to propose a simultaneous decomposition of the seven matrices G, A, B, C, D, E, and F over H (see [125], Theorem 2.3), and discussed Eq. (13) once again by means of this decomposition (see [125], Theorem 3.1). Compared with the method in [273], which directly applies the equivalent normal forms of the two matrix triplets to the matrix G, the simultaneous decomposition is more concise. In addition, Ref. [114] once again brilliantly demonstrates the important role of simultaneous decompositions in solving matrix equations.

4.4. Method by Simultaneous Decompositions

Let F be a field. Remark 8 notes that one can use equivalent canonical forms of the matrices A and B to solve Eq. (1). Can we find an equivalent canonical form simultaneously related to the matrix triplet
$\begin{bmatrix} C & A \\ B & \end{bmatrix} \quad (14)$
so as to discuss Eq. (1) more simply? Gustafson [97] gave a positive answer to this problem.
Theorem 6. 
([97]) Let A, B, and C be given in Eq. (1). Then there exist invertible matrices T, U, V, and W of appropriate orders such that
$\begin{bmatrix} TCW & TAU \\ VBW & \end{bmatrix} = \begin{bmatrix} \tilde{C} & \tilde{A} \\ \tilde{B} & \end{bmatrix}, \quad (15)$
where
$\tilde{A} = \begin{bmatrix} I_z & 0 & 0 & 0 \\ 0 & I_{t_3} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{r_2} \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} I_z & 0 & 0 & 0 & 0 & 0 \\ 0 & I_{t_1} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{r_1} & 0 & 0 \end{bmatrix}, \quad \tilde{C} = \begin{bmatrix} I_z & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{r_1} & 0 & 0 \\ 0 & 0 & 0 & 0 & I_{r_2} & 0 \\ 0 & 0 & 0 & 0 & 0 & I_{t_2} \end{bmatrix},$
z + t₃ + r₂ = rank(A), z + t₁ + r₁ = rank(B), and z + t₂ + r₁ + r₂ = rank(C). We call (15) the simultaneous decomposition of (14).
Remark 9. 
By elementary matrix row and column transformations, Gustafson designed a 12-step algorithm to obtain the simultaneous decomposition of (14) (see [97], Section 3).
Applying the simultaneous decomposition of (14) to Eq. (1) yields
$(1) \Leftrightarrow T^{-1}\tilde{A}U^{-1}X - YV^{-1}\tilde{B}W^{-1} = T^{-1}\tilde{C}W^{-1} \Leftrightarrow \tilde{A}\tilde{X} - \tilde{Y}\tilde{B} = \tilde{C}, \quad (16)$
where X̃ = U^{-1}XW and Ỹ = TYV^{-1}. Thus, X = UX̃W^{-1} and Y = T^{-1}ỸV.
Theorem 7. 
([97]) Eq. (16) is solvable if and only if t₂ = 0, in which case,
$\tilde{A} = \begin{bmatrix} I_z & 0 & 0 & 0 \\ 0 & I_{t_3} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{r_2} \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} I_z & 0 & 0 & 0 & 0 \\ 0 & I_{t_1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{r_1} & 0 \end{bmatrix}, \quad \tilde{C} = \begin{bmatrix} I_z & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{r_1} & 0 \\ 0 & 0 & 0 & 0 & I_{r_2} \end{bmatrix},$
$\tilde{X} = \begin{bmatrix} X_{11} & X_{12} & 0 & X_{14} & 0 \\ X_{21} & X_{22} & 0 & X_{24} & 0 \\ X_{31} & X_{32} & X_{33} & X_{34} & X_{35} \\ X_{41} & X_{42} & 0 & X_{44} & I_{r_2} \end{bmatrix}, \quad \tilde{Y} = \begin{bmatrix} X_{11} - I_z & X_{12} & Y_{13} & X_{14} \\ X_{21} & X_{22} & Y_{23} & X_{24} \\ 0 & 0 & Y_{33} & 0 \\ 0 & 0 & Y_{43} & -I_{r_1} \\ X_{41} & X_{42} & Y_{53} & X_{44} \end{bmatrix},$
where the X_{ij} and Y_{ij} are arbitrary matrices with appropriate orders.
Remark 10. 
The quiver theory introduced by Gabriel [84] is an important tool in the representation theory of algebras. Gustafson [97] gave a novel interpretation of the existence of the simultaneous decomposition of (14) from the perspective of quiver theory. Finally, he gave a necessary and sufficient condition for Eq. (1) to have a solution by using the corresponding representation of arrows (see [97], Section 10). Note that Refs. [59,115] also make use of graph theory to discuss linear matrix equations.
Remark 11. 
Wang et al. in [276], Theorem 2.1 continued the idea of simultaneous decomposition and decomposed the following matrices over a division ring F:
$\begin{bmatrix} A & B & C \\ D & & \end{bmatrix}, \quad (17)$
where A, B, and C are of the same row number, and A and D are of the same column number. So, Theorem 6 is a corollary of [276], Theorem 2.1 (see [276], Corollary 2.2). Also, it should be noted that [273], Theorem 2.1 is indeed a special case of [276], Theorem 2.1 (see [276], Corollary 2.3). In addition, Wang et al. applied the simultaneous decomposition of (17) to solving two types of systems of matrix equations. Interestingly, He et al. [116] further refined the simultaneous decomposition presented by [276].
He et al. [126] further considered the simultaneous decompositions of two more general block arrays built from five matrices A, B, C, D, and E over H, where matrices in the same row (or column) of each array have the same number of rows (or columns), and applied them to solving systems of quaternion matrix equations. Recently, Huo et al. [135] generalized the simultaneous decomposition of (17) to quaternion tensors under the Einstein product.

4.5. Method by Real (Complex) Representations

It is well known that an associative algebra A of finite dimension over a field F is isomorphic to a subalgebra of the algebra F^{n×n}, where n is the dimension of A over F (see [81]). We now consider the case of complex numbers, i.e., F = C.
Let A = A₀ + A₁i ∈ C^{m×r}, where A₀, A₁ ∈ R^{m×r}, and i is the imaginary unit with i² = −1. Define a map
$\phi : \mathbb{C}^{m\times r} \to \mathbb{R}^{2m\times 2r} \quad \text{with} \quad \phi(A) = \phi(A_0 + A_1 i) = \begin{bmatrix} A_0 & -A_1 \\ A_1 & A_0 \end{bmatrix}.$
We call φ(A) a real representation of the complex matrix A [191]. One can check that φ(·) is an isomorphism of the real algebra C^{m×r} onto the real subalgebra
$\left\{ \begin{bmatrix} S_0 & -S_1 \\ S_1 & S_0 \end{bmatrix} \;\middle|\; S_0, S_1 \in \mathbb{R}^{m\times r} \right\}.$
Then,
$(5) \Leftrightarrow \phi(A)\phi(X) + \phi(Y)\phi(B) = \phi(C). \quad (18)$
On the other hand, suppose that a real matrix pair
$\hat{X} = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} \quad \text{and} \quad \hat{Y} = \begin{bmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{bmatrix}$
is a solution pair of the following equation:
$\phi(A)\hat{X} + \hat{Y}\phi(B) = \phi(C).$
Then, K₂ᵣ⁻¹X̂K₂ₙ and K₂ₘ⁻¹ŶK₂ₛ also form a solution pair of this equation, where
$K_{2t} = \begin{bmatrix} 0 & -I_t \\ I_t & 0 \end{bmatrix} \quad \text{for} \quad t = m, n, r, s.$
Let X̄ = ½(X₁₁ + X₂₂) + ½(X₂₁ − X₁₂)i and Ȳ = ½(Y₁₁ + Y₂₂) + ½(Y₂₁ − Y₁₂)i. Then,
$\phi(\bar{X}) = \frac{1}{2}\begin{bmatrix} X_{11} + X_{22} & X_{12} - X_{21} \\ X_{21} - X_{12} & X_{11} + X_{22} \end{bmatrix} = \frac{1}{2}\bigl(\hat{X} + K_{2r}^{-1}\hat{X}K_{2n}\bigr) \quad \text{and} \quad \phi(\bar{Y}) = \frac{1}{2}\begin{bmatrix} Y_{11} + Y_{22} & Y_{12} - Y_{21} \\ Y_{21} - Y_{12} & Y_{11} + Y_{22} \end{bmatrix} = \frac{1}{2}\bigl(\hat{Y} + K_{2m}^{-1}\hat{Y}K_{2s}\bigr)$
satisfy (18); hence, X̄ and Ȳ satisfy Eq. (5).
Based on the above analysis, a complex matrix equation is transformed into a real matrix equation by using the real representation of complex matrices. Following this idea, Liu [191] proposed the following theorem for solving Eq. (5) over C.
Theorem 8. 
([191], Lemma 1.3) Let
$A = A_0 + A_1 i, \quad B = B_0 + B_1 i, \quad \text{and} \quad C = C_0 + C_1 i$
be given in Eq. (5). Then, Eq. (5) is consistent over C if and only if
$\begin{bmatrix} A_0 & -A_1 \\ A_1 & A_0 \end{bmatrix} \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} + \begin{bmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{bmatrix} \begin{bmatrix} B_0 & -B_1 \\ B_1 & B_0 \end{bmatrix} = \begin{bmatrix} C_0 & -C_1 \\ C_1 & C_0 \end{bmatrix} \quad (19)$
is consistent over R, in which case,
$X = X_0 + X_1 i = \tfrac{1}{2}(X_{11} + X_{22}) + \tfrac{1}{2}(X_{21} - X_{12})i, \quad Y = Y_0 + Y_1 i = \tfrac{1}{2}(Y_{11} + Y_{22}) + \tfrac{1}{2}(Y_{21} - Y_{12})i, \quad (20)$
where X₁₁, X₁₂, X₂₁, X₂₂, Y₁₁, Y₁₂, Y₂₁, and Y₂₂ form the general solution of Eq. (19). Furthermore, the explicit forms of X and Y given in (20) are
$X_0 = \tfrac{1}{2}P_1\phi(A)^{\dagger}\phi(C)Q_1 + \tfrac{1}{2}P_2\phi(A)^{\dagger}\phi(C)Q_2 + [U_1, U_2]\begin{bmatrix} \phi(B)Q_1 \\ \phi(B)Q_2 \end{bmatrix} + [P_1F_{\phi(A)}, P_2F_{\phi(A)}]\begin{bmatrix} V_1 \\ V_2 \end{bmatrix},$
$X_1 = \tfrac{1}{2}P_2\phi(A)^{\dagger}\phi(C)Q_1 - \tfrac{1}{2}P_1\phi(A)^{\dagger}\phi(C)Q_2 + [U_1, U_2]\begin{bmatrix} \phi(B)Q_2 \\ -\phi(B)Q_1 \end{bmatrix} + [P_2F_{\phi(A)}, P_1F_{\phi(A)}]\begin{bmatrix} V_1 \\ V_2 \end{bmatrix},$
$Y_0 = \tfrac{1}{2}S_1E_{\phi(A)}\phi(C)\phi(B)^{\dagger}T_1 + \tfrac{1}{2}S_2E_{\phi(A)}\phi(C)\phi(B)^{\dagger}T_2 - [S_1\phi(A), S_2\phi(A)]\begin{bmatrix} \hat{U}_1 \\ \hat{U}_2 \end{bmatrix} + [W_1, W_2]\begin{bmatrix} E_{\phi(B)}T_1 \\ E_{\phi(B)}T_2 \end{bmatrix},$
$Y_1 = \tfrac{1}{2}S_2E_{\phi(A)}\phi(C)\phi(B)^{\dagger}T_1 - \tfrac{1}{2}S_1E_{\phi(A)}\phi(C)\phi(B)^{\dagger}T_2 + [S_2\phi(A), S_1\phi(A)]\begin{bmatrix} \hat{U}_1 \\ \hat{U}_2 \end{bmatrix} + [W_1, W_2]\begin{bmatrix} E_{\phi(B)}T_1 \\ E_{\phi(B)}T_2 \end{bmatrix},$
where E_M = I − MM† and F_M = I − M†M, P₁ = [I_r, 0], P₂ = [0, I_r], S₁ = [I_m, 0], S₂ = [0, I_m],
$Q_1 = \begin{bmatrix} I_n \\ 0 \end{bmatrix}, \quad Q_2 = \begin{bmatrix} 0 \\ I_n \end{bmatrix}, \quad T_1 = \begin{bmatrix} I_s \\ 0 \end{bmatrix}, \quad T_2 = \begin{bmatrix} 0 \\ I_s \end{bmatrix},$
and U_i, V_i, Û_i, W_i (i = 1, 2) are arbitrary matrices over R with appropriate orders.
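The real-representation approach can be demonstrated numerically. The sketch below (an illustration we add; the real system (19) is solved here by a least-squares vectorization rather than by the explicit formulas of Theorem 8) recovers a complex solution pair via the mapping (20):

```python
import numpy as np

rng = np.random.default_rng(8)
m, r, s, n = 3, 2, 2, 4
cx = lambda p, q: rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
A, B = cx(m, r), cx(s, n)
C = A @ cx(r, n) + cx(m, s) @ B  # consistent right-hand side

def phi(M):
    # real representation phi(M0 + M1 i) = [[M0, -M1], [M1, M0]]
    return np.block([[M.real, -M.imag], [M.imag, M.real]])

pA, pB, pC = phi(A), phi(B), phi(C)
# vec(pA Xh + Yh pB) = [I ⊗ pA, pB^T ⊗ I] [vec(Xh); vec(Yh)] (column-major vec)
M = np.hstack([np.kron(np.eye(2 * n), pA), np.kron(pB.T, np.eye(2 * m))])
v = np.linalg.lstsq(M, pC.flatten(order="F"), rcond=None)[0]
Xh = v[:4 * r * n].reshape(2 * r, 2 * n, order="F")
Yh = v[4 * r * n:].reshape(2 * m, 2 * s, order="F")
assert np.allclose(pA @ Xh + Yh @ pB, pC, atol=1e-8)

# Map back as in (20): X = (X11 + X22)/2 + (X21 - X12)/2 i, similarly for Y.
X = (Xh[:r, :n] + Xh[r:, n:]) / 2 + 1j * (Xh[r:, :n] - Xh[:r, n:]) / 2
Y = (Yh[:m, :s] + Yh[m:, s:]) / 2 + 1j * (Yh[m:, :s] - Yh[:m, s:]) / 2
assert np.allclose(A @ X + Y @ B, C, atol=1e-8)
```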
Remark 12. 
Liu [191] further used Theorem 8 to discuss the maximal and minimal ranks of the solutions of Eq. (5), as seen in Subsection 5.6 of this paper.
Remark 13. 
We know that the quaternion algebra (i.e., the quaternion division ring) cannot be a field because multiplication of quaternions is not commutative. Nevertheless, when studying matrix equations over H, the method of converting quaternion matrix equations into real (or complex) matrix equations by means of the real (or complex) representation of quaternions [227] is widely applied. For example, [337] and [333] studied the special least-squares solutions of a class of quaternion matrix equations by using the real representation and the complex representation of quaternions, respectively. Moreover, the real (or complex) representations of some other kinds of quaternions mentioned in Remark 43 have been explored in [190,226,264,332].
On the other hand, because real (or complex) representations have the drawback of significantly increasing the computational load, Wei et al. [292] introduced real structure-preserving methods over H for LU decomposition, QR decomposition, and SVD, thereby solving quaternion linear systems.
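The real-representation idea for quaternions can be illustrated at the scalar level (matrices work blockwise in the same way). The sketch below uses one standard 4 × 4 real representation of a quaternion; the particular matrix layout is our choice for illustration, and conventions differ across the papers cited above:

```python
import numpy as np

def phi(q):
    # Real 4x4 representation of q = (a, b, c, d), i.e., a + bi + cj + dk.
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]], dtype=float)

def qmul(p, q):
    # Hamilton product derived from i^2 = j^2 = k^2 = ijk = -1.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

rng = np.random.default_rng(9)
p, q = tuple(rng.standard_normal(4)), tuple(rng.standard_normal(4))

# phi is an algebra homomorphism: phi(p) phi(q) = phi(pq).
assert np.allclose(phi(p) @ phi(q), phi(qmul(p, q)), atol=1e-10)
# Quaternion conjugation corresponds to matrix transposition.
qbar = (q[0], -q[1], -q[2], -q[3])
assert np.allclose(phi(qbar), phi(q).T)
```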

4.6. Method by Determinantal Representations

As is well known, Cramer's rule for the linear system Ax = b with unknown vector x is an effective means of expressing its unique solution. In addition, as seen in Subsection 4.2, the theory of generalized inverses is closely related to the study of Eq. (1). Naturally, one may ask whether a Cramer-type rule for the solutions of Eq. (1) can be obtained through determinantal representations of generalized inverses.
Generalized inverses (especially the MP inverse) over fields (particularly the complex field) have been thoroughly discussed and applied to characterize various solutions of matrix equations (see [9,249,265]). Notably, Kyrchei [169] presented the determinantal representations of the MP inverse and the Drazin inverse [62] from the new perspective of limit representations of generalized inverses in 2008.
When we consider the determinantal representations of generalized inverses in the quaternion algebra, it relies on the theory of determinants of quaternion matrices. However, due to the noncommutativity of quaternions, the determinants of quaternion matrices become much more complicated (see [4,37]). It was not until several decades later, after Kyrchei [168,171] introduced the theory of column-row determinants over H , that this problem was effectively solved.
Definition 2 
(Column-row determinants over H; [168], Definitions 2.4 and 2.5). Let A = [a_{ij}] ∈ H^{n×n}, and let S_n be the symmetric group on I_n = {1, 2, …, n}.
(1)
For i = 1, 2, …, n, the i-th row determinant of A is defined by
$\operatorname{rdet}_i A = \sum_{\sigma \in S_n} (-1)^{n-r} \bigl(a_{i\,i_{k_1}} a_{i_{k_1} i_{k_1+1}} \cdots a_{i_{k_1+l_1}\,i}\bigr) \cdots \bigl(a_{i_{k_r} i_{k_r+1}} \cdots a_{i_{k_r+l_r}\,i_{k_r}}\bigr),$
$\sigma = \bigl(i\; i_{k_1}\; i_{k_1+1} \cdots i_{k_1+l_1}\bigr)\bigl(i_{k_2}\; i_{k_2+1} \cdots i_{k_2+l_2}\bigr) \cdots \bigl(i_{k_r}\; i_{k_r+1} \cdots i_{k_r+l_r}\bigr),$
where i_{k₂} < i_{k₃} < ⋯ < i_{k_r}, and i_{k_t} < i_{k_t+s} for t = 2, …, r and s = 1, …, l_t.
(2)
For j = 1, 2, …, n, the j-th column determinant of A is defined by
$\operatorname{cdet}_j A = \sum_{\tau \in S_n} (-1)^{n-r} \bigl(a_{j_{k_r}\,j_{k_r+l_r}} \cdots a_{j_{k_r+1}\,j_{k_r}}\bigr) \cdots \bigl(a_{j\,j_{k_1+l_1}} \cdots a_{j_{k_1+1}\,j_{k_1}} a_{j_{k_1}\,j}\bigr),$
$\tau = \bigl(j_{k_r+l_r} \cdots j_{k_r+1}\; j_{k_r}\bigr) \cdots \bigl(j_{k_2+l_2} \cdots j_{k_2+1}\; j_{k_2}\bigr)\bigl(j_{k_1+l_1} \cdots j_{k_1+1}\; j_{k_1}\; j\bigr),$
where j_{k₂} < j_{k₃} < ⋯ < j_{k_r} and j_{k_t} < j_{k_t+s} for t = 2, …, r and s = 1, …, l_t.
Remark 14. 
Kyrchei in [168], Theorem 3.1 showed that if A H n × n is Hermitian, i.e.,  A = A * , then
rdet 1 A = = rdet n A = cdet 1 A = = cdet n A R .
Thus, [168], Remark 3.1 defines the determinant of a Hermitian matrix A by
det A = rdet i A = cdet i A , ( i = 1 , 2 , … , n ) .
We now introduce some necessary notations. Let A ∈ H m × n . Then, R l ( A ) , R r ( A ) , N l ( A ) , and  N r ( A ) denote the left row space, the right column space, the left null space, and  the right null space of A, respectively. Let
α = { α 1 , … , α k } ⊆ { 1 , … , m } and β = { β 1 , … , β k } ⊆ { 1 , … , n } ,
where 1 ≤ k ≤ min { m , n } . By A β α , we denote the submatrix of A whose rows are indexed by α and whose columns are indexed by β . If A is Hermitian, then | A α α | is the corresponding principal minor of A. For 1 ≤ k ≤ n , let
L k , n = { α ∣ α = ( α 1 , … , α k ) , 1 ≤ α 1 < ⋯ < α k ≤ n } .
And, for  i ∈ α and j ∈ β , let
I r , m { i } = { α ∣ α ∈ L r , m , i ∈ α } and J r , n { j } = { β ∣ β ∈ L r , n , j ∈ β } .
We denote the j-th column of A by a . j and its i-th row by a i . . Moreover, A . j ( b ) is the matrix obtained from A by replacing its j-th column with the column vector b ∈ H m × 1 , and  A i . ( b ) is the matrix obtained from A by replacing its i-th row with the row vector b ∈ H 1 × n . The symbols a · j * and a i · * denote the j-th column and the i-th row of A * , respectively.
Kyrchei [173] obtained Cramer's rules for some left, right, and two-sided quaternion matrix equations using the theory of column-row determinants over H . Song et al. [246] then utilized the results in [173] to further study Cramer's rule for the quaternion matrix equation:
A X E + F Y B = C ,
where A ∈ H m × n , E ∈ H s × q , F ∈ H m × q , B ∈ H t × q , and  C ∈ H m × q are given.
Theorem 9. 
[246], Theorem 3.1 Suppose that Eq. (21) is consistent. Let
T = A * ( I + R F ) A , S = E ( I + L B ) E * , A 11 = A * R F A T , A 22 = A * A T , E 11 = S E E * , E 22 = S E L B E * , Y 10 = A 22 A 11 A * C L B E * + L A 22 A * R F C E * E 22 E 11 , Y 20 = A 11 A 22 A * R F C E * + L A 11 A * C L B E * E 11 E 22 .
Let K * , L, M * , and N be full column rank matrices over H such that
N r ( T ) = R r ( K * ) , N r ( S ) = R r ( L ) , N r ( F ) = R r ( M * ) , N r ( B ) = R r ( N ) .
Then, the general solution of Eq. (21) is
x i j = rdet j ( S + L L * ) j . ( c i . A ) det ( T + K * K ) det ( S + L L * ) = cdet i ( T + K * K ) . i ( c . j E ) det ( T + K * K ) det ( S + L L * ) , y h l = rdet l ( B B * + N N * ) l . ( g h . A ) det ( F * F + M * M ) det ( B B * + N N * ) = cdet h ( F * F + M * M ) . h ( g . l E ) det ( F * F + M * M ) det ( B B * + N N * ) ,
where
c i . A = cdet i ( T + K * K ) . i ( d . 1 ) , , cdet i ( T + K * K ) . i ( d . s ) , g h . A = cdet h ( F * F + M * M ) . h ( k . 1 ) , , cdet h ( F * F + M * M ) . h ( k . t ) , c . j E = rdet j ( S + L L * ) j . ( d 1 . ) , , rdet j ( S + L L * ) j . ( d n . ) T , g . l E = rdet l ( B B * + N N * ) l . ( k 1 . ) , , rdet l ( B B * + N N * ) l . ( k q . ) T
with d i . and d . j are the i-th row and j-th column of
( T + K * K ) T ( A * R F C E * + A * C L B E * + Y 10 + Y 20 ) S ( S + L L * ) + M + W ,
respectively, and  k . i and k j . are the i-th column and j-th row of
( F * F + M * M ) ( F ( C A X E ) B ) ( B B * + N N * ) + Q ,
respectively, for  i = 1 , , m , j = 1 , , s , h = 1 , , q , and  l = 1 , , t , where
M = ( T + K * K ) T ( L A 22 V 1 R E 11 + L A 11 V 2 R E 22 ) S ( S + L L * ) , W = T Z L L * + K * K Z S + K * K Z L L * , Q = F * F H N N * + M * M H ( B B * + N N * )
for arbitrary V 1 , V 2 , Z, and H with appropriate orders.
Remark 15. 
When E and F in Eq. (21) are identity matrices, Theorem 9 immediately yields Cramer's rule for the matrix equation over H :
A X + Y B = C .
Note that Theorem 9 uses the auxiliary matrices K, L, M, and N to derive the determinantal representation of the general solution of Eq. (22). However, these auxiliary matrices are not always easy to obtain in practical applications. Can we then obtain Cramer's rule for Eq. (22) using only its given coefficient matrices?
To address this question, let us first look at another work of Kyrchei [170]. Based on the column-row determinant theory and the limit representation of the MP inverse, he gave the determinantal representation of the MP inverse over H in [170]. This work has greatly promoted the study of Cramer's rules for various solutions of matrix equations over H (see [164,165,166,172,243,244,245]). Denote
H r m × n = { A ∈ H m × n ∣ rank ( A ) = r } .
Theorem 10. 
[170], Theorem 5 Let A ∈ H r m × n and A + = [ a i , j + ] ∈ H n × m . Then,
a i , j + = β J r , n { i } cdet i ( A * A ) · i ( a · j * ) β β β J r , n ( A * A ) β β = α I r , m { j } rdet j ( A A * ) j · ( a i · * ) α α α I r , m ( A A * ) α α ,
where i = 1 , 2 , … , n and j = 1 , 2 , … , m .
A result similar to Theorem 3 was given by Kyrchei in [165].
Theorem 11. 
[165], Lemma 5.1 The following are equivalent:
(1)
Eq. (22) is solvable;
(2)
R A C L B = 0 ;
(3)
rank A C 0 B = rank A 0 0 B ;
in which case,
X = A † C − A † V B + L A U and Y = R A C B † + A A † V + W R B ,
where U, V, and W are arbitrary matrices of appropriate orders over H .
It is easy to see that
X = A † C and Y = C B † − A A † C B †
are a partial solution pair of Eq. (22), obtained by taking U, V, and W to be zero matrices in (23). Then, by applying Theorem 10 to (24), Kyrchei [165] presented a new Cramer's rule for this partial solution of Eq. (22), which makes use of only the coefficient matrices A, B, and C.
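Over R or C, the analogous partial solution pair X = A†C, Y = CB† − AA†CB† is easy to verify numerically; a sketch assuming a consistent right-hand side, with numpy's `pinv` standing in for the quaternion MP inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
B = rng.standard_normal((2, 4))
# Build a consistent right-hand side C = A X0 + Y0 B.
X0 = rng.standard_normal((3, 4))
Y0 = rng.standard_normal((5, 2))
C = A @ X0 + Y0 @ B

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
X = Ap @ C                       # X = A^+ C
Y = C @ Bp - A @ Ap @ C @ Bp     # Y = C B^+ - A A^+ C B^+
```

By construction the pair satisfies AX + YB = C whenever the equation is consistent.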
Theorem 12. 
[165], Theorem 5.2 Let A ∈ H r 1 m × n and B ∈ H r 2 t × q . Then, the  partial solution pair X = [ x i j ] ∈ H n × q and Y = [ y g f ] ∈ H m × t in (24) can be expressed as
x i j = β J r 1 , n { i } cdet i ( A * A ) . i ( c . j ( 1 ) ) β β β J r 1 , n ( A * A ) β β ,
where c . j ( 1 ) is the j-th column of A * C , and 
y g f = α I r 2 , t { f } rdet f ( B B * ) . f ( c g . ( 2 ) ) α α α I r 2 , t ( B B * ) α α l = 1 m α I r 1 , m { l } rdet l ( A A * ) l . ( a ¨ g . ( 1 ) ) α α α I r 2 , t { f } rdet f ( B B * ) . f ( c l . ( 2 ) ) α α α I r 1 , m ( A A * ) α α α I r 2 , t ( B B * ) α α ,
where c g . ( 2 ) and a ¨ g . ( 1 ) are the g-th rows of C B * and A A * , respectively.
Remark 16. 
Note that although Kyrchei [165] gave the determinantal representation of this particular solution of Eq. (22), the problem posed in Remark 15 is not completely solved. That is to say, a Cramer's rule representing the general solution of Eq. (22) using only the coefficient matrices remains an open problem.
Interestingly, Song [247] considered the determinantal representation for the general solution of Eq. (22) under restricted conditions, i.e.,
Eq . ( 22 ) subject to R r ( X ) ⊆ T 1 , N r ( X ) ⊇ S 1 , R l ( Y ) ⊆ T 2 , N l ( Y ) ⊇ S 2 ,
where T 1 ⊆ H n , S 1 ⊆ H p , T 2 ⊆ H 1 × t , and  S 2 ⊆ H 1 × m . Let P T ∈ H n × n ( Q T ∈ H n × n ) denote the right (left) H -orthogonal projector onto a right (left) H -vector subspace T ⊆ H n × 1 ( T ⊆ H 1 × n ) along its complement.
Theorem 13. 
[247], Theorem 3.2 Let A, B, C, T 1 , T 2 , S 1 , and  S 2 be given in (25), and let
M = R A P T 1 Q S 2 , N = Q T 2 B P S 1 , T = A * R Q S 2 A + A * A , S = I + L Q T 2 B ,
and Y 1 be such that
A * R Q S 2 A T Y 1 = A * A T A * R Q S 2 C and Y 1 L Q T 2 B = A * C L Q T 2 B .
(1)
The restricted Eq. (25) is solvable if and only if
R M R A P T 1 C = 0 , R A P T 1 C L Q T 2 B = 0 , C L P S 1 L N = 0 , R Q S 2 C L Q T 2 B = 0 ,
in which case,
X = ( T P T 1 ) C ˜ P S 1 + P T 1 L A P T 1 U 1 P S 1 + P T 1 V 1 P S 1 = ( T P T 1 ) C ˜ P S 1 + P T 1 Z 1 P S 1 P T 11 Z 1 P S 1 = ( T P T 1 ) C ˜ P S 1 + P T 1 N r ( A ) U 2 P S 1 + P T 1 V 2 P S 1 , Y = Q S 2 ( C A X ) ( Q T 2 B ) + Q S 2 V 3 R Q T 2 B Q T 2 ,
where
C ˜ = A * R Q S 2 C + A * C L Q T 2 B + A * R Q S 2 C L Q T 2 B + Y 1 S 1 ,
and Z 1 , V 1 , U 1 , V 2 , U 2 , and  V 3 are arbitrary matrices over H with appropriate dimensions.
(2)
Let C 1 * , K 1 * , C 1 , and  K 2 be full column rank matrices such that
T 1 = N r ( C 1 ) , T 1 N r ( T ) = R r ( K 1 * ) , T 2 = N l ( C 2 ) , T 2 N r ( B ) = R l ( K 2 * ) .
Denote X = [ x i j ] H n × q and Y = [ y k l ] H m × t . If Eq. (25) is solvable, then
x i j = cdet i ( A * A + C 1 * C 1 + K 1 * K 1 ) . i ( d . j ) det ( A * A + C 1 * C 1 + K 1 * K 1 ) , y k l = rdet l ( B B * + C 2 C 2 * + K 2 K 2 * ) l . ( d k . ) det ( B B * + C 2 C 2 * + K 2 K 2 * )
with d . j is the j-th column of
T A * R Q S 2 C + A * C L Q T 2 B + A * R Q S 2 C L Q T 2 B + Y 1 + K 1 * K 1 Z 1 ,
and d k . is the k-th row of
( C A X ) B * + Z 2 K 2 K 2 * ,
where i = 1 , , n , j = 1 , , q , k = 1 , , m , l = 1 , , t , and  Z 1 and Z 2 are arbitrary matrices over H with appropriate orders.

4.7. Method by Semi-Tensor Products

To effectively handle multidimensional arrays and nonlinear problems, the Chinese mathematician Daizhan Cheng [30] pioneered a new matrix product in 2001: the semi-tensor product (abbreviated as STP). It coincides exactly with the traditional matrix product when the two factor matrices meet the dimension requirements. Due to the excellent properties of STP [26], namely:
(1)
It applies to any two matrices;
(2)
It has certain commutative properties;
(3)
It inherits all properties of the conventional matrix product;
(4)
It enables easy expression of multilinear functions (mappings);
STP has been effectively applied to: Boolean (control) networks, logical dynamic systems, systems biology, graph theory, formation control, finite automata and symbolic dynamics, circuit design and failure detection, coding and cryptography, fuzzy control, engineering, game theory, and so on (see [25,26,27,31,32] and references therein).
Definition 3. 
[30] Let F be a ring, A ∈ F m × n , and  B ∈ F p × q . The left STP of A and B is defined as
A ⋉ B = ( A ⊗ I t / n ) ( B ⊗ I t / p ) ,
where t = lcm ( n , p ) is the least common multiple of n and p.
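Definition 3 translates directly into code; a minimal sketch of the left STP (the helper name `stp` is ours):

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product: (A kron I_{t/n}) @ (B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When n = p the STP reduces to the ordinary matrix product.
A2 = np.arange(6.0).reshape(2, 3)
B2 = np.arange(6.0).reshape(3, 2)
```

For mismatched dimensions, e.g. a 1×2 row times a 4×3 matrix, t = lcm(2, 4) = 4 and the result is 2×3.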
Remark 17. 
Similarly, Cheng [25] defined the right STP of A and B, i.e.,
A ⋊ B = ( I t / n ⊗ A ) ( I t / p ⊗ B ) .
However, its mathematical properties are inferior to those of the left STP in certain respects, which explains why most studies focus on the left STP. Therefore, the STP referred to in this paper specifically denotes the left STP.
Remark 18. 
As a generalization of the traditional matrix product, and given its extensive application significance, the study of matrix equations under STP has emerged as an inevitable and pivotal research direction. In 2016, Yao et al. [331] were the first to conduct research on the classical complex matrix equation under STP:
A X = B ,
where X is unknown. Subsequently, building on the research framework of [331], investigations into diverse matrix equations under STP have flourished (see [137,144,180,266,268,271]).
Regrettably, no studies directly addressing Eq. (5) under STP have been published to date. However, when the dimension of Y is constrained to equal that of X T , the exploration of Sylvester-transpose matrix equation [137]
A X + X T B = C
with unknown X may provide valuable guidance for this problem.
Notably, in 2019, Cheng and Liu [29], inspired by the research on cross-dimensional linear systems [28], further proposed the second matrix-matrix semi-tensor product (abbreviated as MM-2 STP), denoted by l . Subsequently, Wang [267] investigated the complex matrix equation under the MM-2 STP:
A l X = B ,
where X is unknown, and A and B are given. So, exploring Eq. (1) under MM-2 STP emerges as a potential research topic. Additionally, Cheng [26] introduced the dimension-free matrix theory, which systematically elucidates the deep mathematical insights underlying STP.
Since 2020, the Liaocheng University team led by Professors Ying Li and Jianli Zhao has utilized STP to propose several novel representations of complex, quaternion, and octonion matrices. These representations have been applied to solving the corresponding matrix equations and have yielded effective numerical results.
Ding et al. [55] first proposed the real vector representation for quaternion matrices, whose properties were characterized by STP, as follows:
Definition 4. 
[55], Definitions 3.1-3.3 Let x = x 1 + x 2 i + x 3 j + x 4 k ∈ H . Denote
v R ( x ) = x 1 x 2 x 3 x 4 T .
Let x = [ x 1 ⋯ x n ] ∈ H 1 × n and y = [ y 1 ⋯ y n ] T ∈ H n . Denote
v R ( x ) = v R ( x 1 ) v R ( x n ) and v R ( y ) = v R ( y 1 ) v R ( y n ) .
For A H m × n , the real column stacking form v c R ( A ) and the real row stacking form v r R ( A ) of A are defined as
v c R ( A ) = v R ( Col 1 ( A ) ) v R ( Col 2 ( A ) ) v R ( Col n ( A ) ) and v r R ( A ) = v R ( Row 1 ( A ) ) v R ( Row 2 ( A ) ) v R ( Row m ( A ) ) ,
where Col j ( A ) ( j = 1 , … , n ) and Row i ( A ) ( i = 1 , … , m ) are the j-th column and i-th row of A, respectively.
Remark 19. 
For A ∈ H m × n and B ∈ H n × p , [55], Theorem 3.3(3) shows
v r R ( A B ) = G v r R ( A ) v c R ( B ) ,
where
G = F ( δ m 1 ) T I 4 m n ( δ p 1 ) T F ( δ m 1 ) T I 4 m n ( δ p p ) T F ( δ m m ) T I 4 m n ( δ p 1 ) T F ( δ m m ) T I 4 m n ( δ p p ) T , F = M Q i = 1 n ( δ n i ) T ( I 4 n ( δ n i ) T ) ,
M Q = 1 0 0 0 0 − 1 0 0 0 0 − 1 0 0 0 0 − 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 − 1 0 0 0 1 0 0 0 0 − 1 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 − 1 0 0 1 0 0 0 ,
and δ m i ( i = 1 , . . . , m ) is the i-th column of I m .
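The matrix M Q encodes the quaternion multiplication table: for scalars, v R ( x y ) = M Q ( v R ( x ) ⊗ v R ( y ) ). A sketch with the signs of the quaternion product written out explicitly (our reconstruction; the helper names are ours):

```python
import numpy as np

# Rows give the 1, i, j, k components of the product xy
# from the Kronecker product v_R(x) (x) v_R(y).
M_Q = np.array([
    [1, 0, 0, 0,  0, -1, 0, 0,   0, 0, -1, 0,  0, 0,  0, -1],
    [0, 1, 0, 0,  1,  0, 0, 0,   0, 0,  0, 1,  0, 0, -1,  0],
    [0, 0, 1, 0,  0,  0, 0, -1,  1, 0,  0, 0,  0, 1,  0,  0],
    [0, 0, 0, 1,  0,  0, 1, 0,   0, -1, 0, 0,  1, 0,  0,  0],
], dtype=float)

def qmul(x, y):
    """Quaternion product via v_R(xy) = M_Q (v_R(x) kron v_R(y))."""
    return M_Q @ np.kron(x, y)

one, qi, qj, qk = np.eye(4)   # unit quaternions 1, i, j, k as real 4-vectors
```

The usual identities i j = k, j i = −k, and i² = −1 serve as a quick sanity check.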
Using the real vector representation for quaternion matrices, Ding et al. [55] further discussed the special least-squares solutions of quaternion matrix equation
A X B + C Y D = E ,
where A ∈ H m × n , B ∈ H n × s , C ∈ H m × k , D ∈ H k × s , and  E ∈ H m × s . Denote
δ k [ i 1 , , i s ] = [ δ k i 1 , , δ k i s ] ,
W [ m , n ] = δ m n [ 1 , , ( n 1 ) m + 1 , , m , , n m ] ,
and
J η = J 1 η J m η J n η , J m η = J 1 m η J r m η J n m η , R η = R 1 η R m η R n η , R m η = R 1 m η R r m η R n m η , m = 1 , 2 , , n ,
where
for η = i , J r m i = δ n ( n + 1 ) / 2 ( r 1 ) ( 2 n r + 2 ) 2 + m r + 1 T R 4 , r < m , δ n ( n + 1 ) / 2 ( m 1 ) ( 2 n m + 2 ) 2 + r m + 1 T I 4 , r m , R 4 = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ; for η = j , J r m j = δ n ( n + 1 ) / 2 ( r 1 ) ( 2 n r + 2 ) 2 + m r + 1 T L 4 , r < m , δ n ( n + 1 ) / 2 ( m 1 ) ( 2 n m + 2 ) 2 + r m + 1 T I 4 , r m , L 4 = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ; for η = k , J r m k = δ n ( n + 1 ) / 2 ( r 1 ) ( 2 n r + 2 ) 2 + m r + 1 T S 4 , r < m , δ n ( n + 1 ) / 2 ( m 1 ) ( 2 n m + 2 ) 2 + r m + 1 T I 4 , r m , S 4 = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ;
for η = i , R r m i = δ n ( n + 1 ) / 2 ( r 1 ) ( 2 n r + 2 ) 2 + m r + 1 T R 4 , r < m , δ n ( n + 1 ) / 2 ( m 1 ) ( 2 n m + 2 ) 2 + r m + 1 T I 4 , r m , R 4 = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ; for η = j , R r m j = δ n ( n + 1 ) / 2 ( r 1 ) ( 2 n r + 2 ) 2 + m r + 1 T L 4 , r < m , δ n ( n + 1 ) / 2 ( m 1 ) ( 2 n m + 2 ) 2 + r m + 1 T I 4 , r m , L 4 = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ; for η = k , R r m k = δ n ( n + 1 ) / 2 ( r 1 ) ( 2 n r + 2 ) 2 + m r + 1 T S 4 , r < m , δ n ( n + 1 ) / 2 ( m 1 ) ( 2 n m + 2 ) 2 + r m + 1 T I 4 , r m , S 4 = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 .
Theorem 14. 
[55], Theorem 4.3 and Corollary 4.4 Let A, B, C, D, and E be given in Eq. (26), and let
M ^ = M 1 M 2 ,
where
M 1 = G 2 G 3 v r R ( A ) W [ 4 n s , 4 n 2 ] v c R ( B ) J η , M 2 = G 4 G 5 v r R ( C ) W [ 4 k s , 4 k 2 ] v c R ( D ) R η ,
and G i has the same structure as G given in Remark 19 but differs in orders.
(1)
Then, Eq. (26) is consistent if and only if
( M ^ M ^ † − I 4 m s ) v r R ( E ) = 0 .
(2)
Let
S M = ( X , Y ) ∣ X = X η * , Y = Y η * , ‖ A X B + C Y D − E ‖ = min .
Then,
S M = ( X , Y ) ∣ v s R ( X ) v s R ( Y ) = M ^ † v r R ( E ) + I 2 ( n 2 + k 2 ) + 2 ( n + k ) − M ^ † M ^ y , y ∈ R 2 ( n 2 + k 2 ) + 2 ( n + k ) .
(3)
If ( X ^ , Y ^ ) S M satisfies
‖ X ^ ‖ 2 + ‖ Y ^ ‖ 2 = min ( X , Y ) ∈ S M ‖ X ‖ 2 + ‖ Y ‖ 2 ,
then
v s R ( X ^ ) v s R ( Y ^ ) = M ^ † v r R ( E ) .
Remark 20. 
Setting B and C in Eq. (26) as identity matrices, Theorem 14 yields the corresponding results for Eq. (5) over H .
Remark 21. 
In the same period, Ding et al. [57] and Wang et al. [263] also developed the real vector representation of quaternion matrices and applied this method to address problems in quaternion matrix equations. Inspired by this idea, Liu et al. [189] proposed the left and right real element representations of octonion matrices to solve the classical octonion matrix equation
A X B = C ,
where X is unknown. Recently, Chen and Song [23] used the real vector representation of quaternion matrices to study the least-squares lower (upper) triangular Toeplitz solutions and (anti)centrosymmetric solutions of quaternion matrix equations.
Remark 22. 
It is known that real representations of quaternion matrices are not unique. For instance, Liu et al. [188] defined three real representations of quaternion matrices. Interestingly, Fan et al. [70,71] first proposed the L -representation for quaternion matrices by STP to systematically study the real representations of quaternion matrices. Moreover, the L -representation serves as an effective tool in solving quaternion matrix equations. Meanwhile, Zhang et al. [339] also defined the L -representation for commutative quaternion matrices and applied it to solving the corresponding matrix equations.
Following this idea, Fan et al. [72] established the C -representation of quaternion matrices, which generalizes the complex representation of quaternion matrices. This new representation is also applied to study η-Hermitian solutions of the quaternion matrix equation
A X B = C ,
with unknown X. Similarly, Xi et al. [316] defined the L C -representation of reduced biquaternion matrices to investigate mixed solutions of the reduced biquaternion matrix equation
i = 1 n A i X i B i = E ,
where X i ( i = 1 , . . . , n ) is unknown. Evidently, Eq. (5) is the special case of Eq. (27) over reduced biquaternions.
Contemporaneously with [70,71], Fan et al. [73] also derived the minimal norm least-squares (anti-)Hermitian solution of the quaternion matrix equation directly from the vectorization properties of STP (i.e., [73], Theorems 2.9 and 2.10) and the complex representation of quaternion matrices. Recently, Liu et al. [192] directly used the vectorization properties of STP to investigate (skew) bisymmetric solutions of the generalized Lyapunov quaternion matrix equation.
We contend that the four aforementioned STP-based methods—specifically, L -representation, C -representation, L C -representation, and vectorization properties of STP—provide distinct perspectives for the investigation of Eq. (1).

5. Constrained Solutions of GSE

When GSE fails to satisfy the solvability conditions, a common research approach is to seek its least-squares solution under a certain matrix norm; since such solutions are generally non-unique, one further seeks the minimum-norm least-squares solution (also known as the best approximate solution). Furthermore, in practical applications, specific constraints are often imposed on the solutions of GSE (e.g., symmetric solutions, Re-(non)positive definite solutions, equality-constrained solutions). Thus, this section focuses on various constrained solutions of GSE.
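For Eq. (5) over R or C, the minimum-norm least-squares pair just described can be computed by vectorization; a sketch under our choice of column-major vec (the function name `min_norm_ls` is ours):

```python
import numpy as np

def min_norm_ls(A, B, C):
    """Minimum-norm least-squares pair (X, Y) for A X + Y B ~ C.

    Column-major vectorization: vec(A X) = (I_n kron A) vec(X) and
    vec(Y B) = (B^T kron I_m) vec(Y); pinv yields the min-norm LS solution.
    """
    m, r = A.shape
    s, n = B.shape
    M = np.hstack([np.kron(np.eye(n), A), np.kron(B.T, np.eye(m))])
    z = np.linalg.pinv(M) @ C.flatten('F')
    X = z[:r * n].reshape(r, n, order='F')
    Y = z[r * n:].reshape(m, s, order='F')
    return X, Y

# Consistent example: the residual vanishes and the computed pair has minimal norm.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3)); B = rng.standard_normal((2, 5))
X0 = rng.standard_normal((3, 5)); Y0 = rng.standard_normal((4, 2))
C = A @ X0 + Y0 @ B
X, Y = min_norm_ls(A, B, C)
```

Since (X0, Y0) is itself a solution, the returned pair can never have larger combined Frobenius norm.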

5.1. Chebyshev Solutions and L p -Solutions

Let F = R . Ziętak studied the Chebyshev solutions [344] and the l p -solutions [345] of Eq. (5) by using the Chebyshev norm and the l p -norm of a matrix, respectively.
Definition 5. 
Let A = [ a i , j ] ∈ R m × n and 1 < p < ∞ . The Chebyshev norm of A, denoted by ‖ A ‖ ∞ , is defined as
‖ A ‖ ∞ = max 1 ≤ i ≤ m , 1 ≤ j ≤ n | a i , j | ,
where | a i , j | is the absolute value of a i , j for 1 ≤ i ≤ m and 1 ≤ j ≤ n . And, the  l p -norm of A, denoted by ‖ A ‖ p , is defined as
‖ A ‖ p = ∑ i = 1 m ∑ j = 1 n | a i , j | p 1 / p .
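Both norms are one-liners in numpy; a sketch (the helper names are ours):

```python
import numpy as np

def chebyshev_norm(A):
    """Chebyshev norm: max_{i,j} |a_ij|."""
    return np.abs(A).max()

def lp_norm(A, p):
    """Entrywise l_p norm: (sum_{i,j} |a_ij|^p)^(1/p)."""
    return (np.abs(A) ** p).sum() ** (1.0 / p)

A = np.array([[1.0, -2.0], [3.0, -4.0]])
```

For p = 2 the entrywise l_p norm coincides with the Frobenius norm.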
Theorem 15. 
[344], Theorem 2.2 Let r < m and s < n . Suppose that (4) is not satisfied. Then the matrices X = [ x k j ] and Y = [ y i l ] are a Chebyshev solution pair of Eq. (5), i.e.,
‖ A X + Y B − C ‖ ∞ = min X , Y ‖ A X + Y B − C ‖ ∞ ,
if and only if there exists V = [ v i j ] ( ≠ 0 ) ∈ R m × n such that V T A = 0 , V B T = 0 ,
v i j sign ( r i j ) > 0 for ( i , j ) ∈ J 1 , and v i j = 0 for ( i , j ) ∉ J 1 ,
where
r i j = ∑ k = 1 r a i k x k j + ∑ l = 1 s y i l b l j − c i j ,
and J 1 is an appropriate subset of J = ( i , j ) ∣ | r i j | = ‖ A X + Y B − C ‖ ∞ .
Remark 23. 
Moreover, Ziętak formulated the equivalent conditions for the Chebyshev solution of Eq. (5) by [344], Theorem 3.3 and [344], Theorem 4.1 under the assumption:
m = r + 1 , n = s + 1 , rank ( A ) = r , rank ( B ) = s ,
and another assumption:
A ≠ 0 , B ≠ 0 , r = s = 1 ,
respectively.
Theorem 16. 
[345], Theorem 2.1 Let r < m , s < n , and  1 < p < ∞ . Then the matrices X p = [ x i , j ] and Y p = [ y i , j ] are an l p -solution of Eq. (5), i.e.,
‖ A X p + Y p B − C ‖ p = min X , Y ‖ A X + Y B − C ‖ p ,
if and only if
V T A = 0 and V B T = 0 ,
where
V = [ v i , j ] m × n = sign ( r i , j ) | r i , j | p − 1 m × n and r i , j = ∑ k = 1 r a i , k x k , j + ∑ l = 1 s y i , l b l , j − c i , j
for i = 1 , 2 , , m and j = 1 , 2 , , n .
Additional characterizations of the l p -solutions of Eq. (5) are presented by Ziętak in [345], Theorems 2.2 and 2.3.
Remark 24. 
Since
A ( X + ( I − A † A ) W − A † Z B ) + ( Y + Z ( I − B B † ) + A A † Z B B † ) B = A X + Y B
for arbitrary W and Z, neither the Chebyshev solution nor the l p -solution of Eq. (5) is unique.
Remark 25. 
Note that in [323], Theorem 2.1, Xu et al. gave the explicit expression of the l 2 -solutions of Eq. (12). Moreover, in  [184], Theorems 4.2 and 4.3, Liao et al. also considered the best approximate solution of (12) to a given matrix pair ( X f , Y f ) by using GSVD and CCD. Therefore, when both E and F are identity matrices, we immediately obtain decomposed expressions for the l 2 -solutions and the best approximate solution of Eq. (5).

5.2. -Congruent Solutions

We now discuss Eq. (5) under the constraint condition Y = X ⋆ , i.e.,
A X + X ⋆ B = C ,
where X ⋆ denotes either X T or X * . We call X satisfying Eq. (30) a ⋆-congruent solution of Eq. (5).
Wimmer was the first to study necessary and sufficient conditions for the solvability of Eq. (30) with ⋆ = * over C (see [299], Theorem 2). After that, De Terán and Dopico [46] generalized Wimmer's work to a field F with char ( F ) ≠ 2 , in which case X ⋆ denotes the transpose of X, except in the particular case F = C , where it may be either the transpose or the conjugate transpose of X. Moreover, we say that A ∈ F n × n and B ∈ F n × n are ⋆-congruent if there exists a nonsingular matrix P ∈ F n × n such that P ⋆ A P = B .
Theorem 17. 
[46], Theorem 2.3 Let F be a field with char ( F ) ≠ 2 . Then Eq. (30) is solvable if and only if
C A B 0 and 0 A B 0
are ★-congruent.
Remark 26. 
Additionally, in  [17], Lemma 5.10, Byers and Kressner established equivalent conditions for the existence of a unique solution to Eq. (30) only in the case ⋆ = T . Kressner et al. generalized this result in [161] to include the case ⋆ = * . In  [41], Cvetković-Ilić investigated Eq. (30) for bounded linear operators under certain conditions, in which case X ⋆ denotes the adjoint operator of X.
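For ⋆ = T over R with square matrices, the consistency of Eq. (30) can also be tested directly by vectorization, using the commutation matrix K with vec(X^T) = K vec(X), rather than via the congruence test of Theorem 17; a sketch (the helper name is ours):

```python
import numpy as np

def solve_T_congruent(A, B, C):
    """Least-squares X for A X + X^T B = C (star = T, real n x n case).

    Column-major vec; the commutation matrix K satisfies vec(X^T) = K vec(X).
    """
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + i * n, i + j * n] = 1.0   # vec(X^T)[j + i*n] = X[i, j]
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ K
    x = np.linalg.lstsq(M, C.flatten('F'), rcond=None)[0]
    return x.reshape(n, n, order='F')

rng = np.random.default_rng(4)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
X0 = rng.standard_normal((3, 3))
C = A @ X0 + X0.T @ B            # consistent by construction
X = solve_T_congruent(A, B, C)
```

A zero least-squares residual certifies solvability; a nonzero one certifies inconsistency.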

5.3. (Minimum-Norm Least-Squares) Symmetric Solutions

Let F = R , s = m , r = n , and  A = B for Eq. (5), i.e.,
A X + Y A = C .
Chang and Wang [20] studied the symmetric, minimum-2-norm symmetric, least-squares symmetric, and minimum-2-norm least-squares symmetric solutions of Eq. (31) by SVD over R . Let SR m × m be the set of all m × m real symmetric matrices, and let
A ∘ B = [ a i j b i j ] ∈ F m × n
denote the Hadamard product of A = [ a i j ] F m × n and B = [ b i j ] F m × n .
Theorem 18. 
[20], Theorem 2.1 Let the SVD of A be
A = U Σ 0 0 0 V T ,
where Σ = diag ( σ 1 , , σ r ) > 0 , r = rank ( A ) , and 
U = U 1 U 2 R m × m and V = V 1 V 2 R n × n
are real orthogonal with
U 1 R m × r , U 2 R m × ( m r ) , V 1 R n × r , and V 2 R n × ( n r ) .
Denote
W 1 = Σ U 1 T C V 1 V 1 T C T U 1 Σ , W 2 = Σ 1 U 1 T C V 1 + V 1 T C T U 1 , M 1 = Σ V 1 T C T U 1 U 1 T C V 1 Σ , M 2 = Σ 1 U 1 T C V 1 .
Define
Φ = [ φ i j ] R r × r and Ψ = [ ψ i j ] R r × r ,
where
φ i j = 1 / ( σ i 2 − σ j 2 ) , σ i ≠ σ j , 0 , σ i = σ j , and ψ i j = 1 − | sign ( σ i − σ j ) |
for i , j = 1 , , r .
(1)
Let
L I = { [ X , Y ] ∣ X ∈ SR n × n , Y ∈ SR m × m , A X + Y A = C } .
Then L I if and only if
U 2 T C V 2 = 0 and Ψ U 1 T C V 1 V 1 T C T U 1 = 0 ,
in which case,
L I = { [ V Φ W 1 + Ψ M 2 Y 11 Σ 1 U 1 T C V 2 V 2 T C T U 1 Σ 1 X 22 V T , U Φ M 1 + Ψ Y 11 Σ 1 V 1 T C T U 2 U 2 T C V 1 Σ 1 Y 22 U T ] X 22 SR ( n r ) × ( n r ) , Y 11 SR r × r , Y 22 SR ( m r ) × ( m r ) } .
(2)
If [ X ^ , Y ^ ] L I satisfies
‖ [ X ^ , Y ^ ] ‖ F = min [ X , Y ] ∈ L I ( ‖ X ‖ F 2 + ‖ Y ‖ F 2 ) 1 / 2 ,
then [ X ^ , Y ^ ] L I is unique and
X ^ = V Φ W 1 + 1 2 Ψ M 2 Σ 1 U 1 T C V 2 V 2 T C T U 1 Σ 1 0 V T , Y ^ = U Φ M 1 + 1 2 Ψ M 2 Σ 1 V 1 T C T U 2 U 2 T C V Σ 1 0 U T .
(3)
Let
L I L S = { [ X , Y ] ∣ X ∈ SR n × n , Y ∈ SR m × m , ‖ A X + Y A − C ‖ F = min } .
Then,
L I L S = { [ V Φ W 1 + Ψ ( 1 2 W 2 Y 11 ) Σ 1 U 1 T C V 2 V 2 T C T U 1 Σ 1 X 22 V T , U Φ M 1 + Ψ Y 11 Σ 1 V 1 T C T U 2 U 2 T C V 1 Σ 1 Y 22 U T ] X 22 SR ( n r ) × ( n r ) , Y 11 SR r × r , Y 22 SR ( m r ) × ( m r ) } .
(4)
If [ X ^ , Y ^ ] L I L S satisfies
‖ [ X ^ , Y ^ ] ‖ F = min [ X , Y ] ∈ L I L S ( ‖ X ‖ F 2 + ‖ Y ‖ F 2 ) 1 / 2 ,
then [ X ^ , Y ^ ] is unique and
X ^ = V Φ W 1 + 1 4 Ψ W 2 Σ 1 U 1 T C V 2 V 2 T C T U 1 Σ 1 0 V T , Y ^ = U Φ M 1 + 1 4 Ψ W 2 Σ 1 V 1 T C T U 2 U 2 T C V 1 Σ 1 0 U T .

5.4. Self-Adjoint and Positive (Semi)Definite Solutions

In this subsection, we consider Eq. (1) with r = n , s = m and C = 0 , i.e.,
A X Y B = 0 ,
which is required in optimal control theory [138].
Jameson et al. [139] explored the symmetric, positive semidefinite, and positive definite real solutions of Eq. (32) over R . Subsequently, Dobovišek [60] further studied the self-adjoint, positive semidefinite, positive definite, minimal, and  extreme solutions of Eq. (32) over C .
Definition 6. 
[60] If the nonnegative matrix Y m is such that ( X , Y m ) is a solution pair of Eq. (32), and  Y m ≤ Y (or Y m ≥ Y ) for all other solution pairs ( X , Y ) of Eq. (32) with the same X, then Y m is called a minimal (or maximal) solution of Eq. (32). The minimal and maximal solutions of Eq. (32) are collectively referred to as the extreme solutions of Eq. (32).
Theorem 19. 
[60] Let A be of full column rank, m ≥ n ,
A = 0 I , B = B 1 B 2 , and Y = Y 11 Y 12 Y 12 * Y 22 .
(1)
[60], Theorem 6 Assume that A B * has at least one real eigenvalue or at least one conjugate pair of eigenvalues. Then Eq. (32) has a non-zero self-adjoint solution pair, i.e.,  X * = X and Y * = Y .
(2)
[60], Theorem 7 Eq. (32) has a nonzero positive semidefinite solution Y, i.e.,  Y ≥ 0 , if and only if A B * has at least one real eigenvalue.
(3)
[60], Theorem 8 Eq. (32) has a positive definite solution Y, i.e.,  Y > 0 , if and only if A B * has only real eigenvalues and is diagonalisable.
(4)
[60], Theorem 9 Eq. (32) has a positive semidefinite solution, i.e.,  X ≥ 0 , if and only if the matrix A has at least one real eigenvalue, in which case, X ≥ 0 is nonzero if rank ( A B * ) > rank ( B 2 ) .
(5)
[60], Theorem 10 Eq. (32) has a positive definite solution, i.e.,  X > 0 , if and only if all eigenvalues of B 2 * are real, B 2 * is diagonalisable, and  rank ( A B * ) = rank ( A ) . Moreover, if  Y 0 , then X > 0 if and only if all eigenvalues of B * are positive and B * is diagonalisable.
(6)
[60], Theorem 11 If rank ( B ) < m , then Eq. (32) with a fixed solution X does not have an extreme solution for Y. If  rank ( B ) = m , the solution Y is unique.
(7)
[60], Theorem 12 If Eq. (32) has a solution pair ( X , Y ) and Y ≥ 0 , then there exists a minimal solution Y m ≥ 0 .
Remark 27. 
[60], Theorems 6-10 also present the expressions of the self-adjoint, positive semidefinite, and positive definite solutions when the solvability conditions are met. Additionally, Dobovišek discussed these solutions for m < n (see [60], Theorems 13-17) and for matrices A and B without full rank (see [60], Theorems 18-20).

5.5. Per(Skew)Symmetric and Bi(Skew)Symmetric Solutions

It is well known that (skew)selfconjugate, per(skew)symmetric, and centro(skew)symmetric matrices have applications in information theory, linear system theory, and numerical analysis (see [3,18,225,291]). Let F = Ω be a finite-dimensional central algebra with an involution σ and char ( Ω ) ≠ 2 (see [63]). Wang et al. in [287] and [288] gave necessary and sufficient conditions for the existence of per(skew)symmetric solutions and bi(skew)symmetric solutions to Eq. (1) over Ω , respectively.
Definition 7. 
For A = [ a i , j ] ∈ Ω m × n , let
A * = [ σ ( a i , j ) ] ∈ Ω n × m , A ( * ) = [ σ ( a n − j + 1 , m − i + 1 ) ] ∈ Ω n × m , A # = [ a m − i + 1 , n − j + 1 ] ∈ Ω m × n .
Then, A is said to be (skew)selfconjugate if A = A * ( A = − A * ), per(skew)symmetric if A = A ( * ) ( A = − A ( * ) ), and centro(skew)symmetric if A = A # ( A = − A # ). If  A is both (skew)selfconjugate and per(skew)symmetric, then A is said to be bi(skew)symmetric. Moreover, a solution pair ( X , Y ) of Eq. (1) is said to be per(skew)symmetric (or bi(skew)symmetric) if both X and Y are per(skew)symmetric (or bi(skew)symmetric).
Theorem 20. 
[287], Corollary 2.3 Let A , B , C Ω m × n [ λ ] .
(1)
[287] Eq. (1) has a persymmetric solution pair ( X , Y ) if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that
Q A C 0 B P − 1 = A 0 0 B ,
P I n 0 0 I n P ( * ) = I n 0 0 I n ,
Q I m 0 0 I m Q ( * ) = I m 0 0 I m .
(2)
[287], Corollary 2.8 Eq. (1) has a perskewsymmetric solution pair ( X , Y ) if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that (33), P ( * ) P = I 2 n , and  Q ( * ) Q = I 2 m .
(3)
[287], Corollary 2.13 Eq. (1) has a solution pair ( X , Y ) such that X is persymmetric and Y is perskewsymmetric if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that (33), (34), and  Q ( * ) Q = I 2 m .
(4)
[287], Corollary 2.16 Eq. (1) has a solution pair ( X , Y ) such that X is perskewsymmetric and Y is persymmetric if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that (33), (35), and  P ( * ) P = I 2 n .
Theorem 21. 
[288] Let A , B , C Ω m × n [ λ ] .
(1)
[288], Corollary 2 Eq. (1) has a bisymmetric solution pair ( X , Y ) if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that
Q A C 0 B P − 1 = A 0 0 B ,
P 0 I n I n 0 P * = 0 I n I n 0 ,
P I n 0 0 I n P ( * ) = I n 0 0 I n ,
P # 0 I n I n 0 P − 1 = 0 I n I n 0 ,
Q 0 I m I m 0 Q * = 0 I m I m 0 ,
Q I m 0 0 I m Q ( * ) = I m 0 0 I m ,
Q # 0 I m I m 0 Q − 1 = 0 I m I m 0 .
(2)
[288], Corollary 5 Eq. (1) has a biskewsymmetric solution pair ( X , Y ) if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that (36),
P ( * ) P = I 2 n , P 0 I n I n 0 P * = 0 I n I n 0 = P # 0 I n I n 0 P − 1 ,
Q ( * ) Q = I 2 m , Q 0 I m I m 0 Q * = I m 0 0 I m = Q # 0 I m I m 0 Q − 1 .
(3)
[288], Corollary 8 Eq. (1) has a solution pair ( X , Y ) such that X is bisymmetric and Y is biskewsymmetric if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that (36), (37), (38), (39), and (44).
(4)
[288], Corollary 10 Eq. (1) has a solution pair ( X , Y ) such that X is biskewsymmetric and Y is bisymmetric if and only if there exist invertible matrices P Ω 2 n × 2 n and Q Ω 2 m × 2 m such that (36), (40), (41), (42), and (43).

5.6. Maximal and Minimal Ranks of the General Solution

Let F = C , and let a solution pair ( X , Y ) of Eq. (5) be
X = X 0 + X 1 i ∈ C r × n and Y = Y 0 + Y 1 i ∈ C m × s ,
where X 0 , X 1 ∈ R r × n and Y 0 , Y 1 ∈ R m × s . Liu [191] determined the maximal and minimal ranks for X, Y, X 0 , X 1 , Y 0 , and  Y 1 .
Theorem 22. 
[191], Theorems 2.1 and 2.2 Let Eq. (5) be consistent.
(1)
Then,
max A X + Y B = C rank ( X ) = min n , r , r − rank ( A ) + rank B C , max A X + Y B = C rank ( Y ) = min { m , s , s − rank ( B ) + rank [ A , C ] } , min A X + Y B = C rank ( X ) = rank B C − rank ( B ) , min A X + Y B = C rank ( Y ) = rank [ A , C ] − rank ( A ) .
(2)
Let
S 1 = X 0 ∈ R r × n ∣ A ( X 0 + i X 1 ) + ( Y 0 + i Y 1 ) B = C , S 2 = X 1 ∈ R r × n ∣ A ( X 0 + i X 1 ) + ( Y 0 + i Y 1 ) B = C .
Then,
max X 0 ∈ S 1 r ( X 0 ) = min r , n , rank B 0 0 B 1 0 C 1 A 0 C 0 A 1 − 2 rank ( A ) + r , min X 0 ∈ S 1 r ( X 0 ) = rank B 0 0 B 1 0 C 1 A 0 C 0 A 1 − rank A 0 A 1 − rank B 0 B 1 ,
max X 1 ∈ S 2 r ( X 1 ) = min r , n , rank B 0 0 B 1 0 C 0 A 0 C 1 A 1 − 2 rank ( A ) + r , min X 1 ∈ S 2 r ( X 1 ) = rank B 0 0 B 1 0 C 0 A 0 C 1 A 1 − rank A 0 A 1 − rank B 0 B 1 .
(3)
Let
S 3 = Y 0 R m × s A ( X 0 + i X 1 ) + ( Y 0 + i Y 1 ) B = C , S 4 = Y 1 R m × s A ( X 0 + i X 1 ) + ( Y 0 + i Y 1 ) B = C .
Then,
max Y 0 S 3 r ( Y 0 ) = min m , s , rank A 0 A 1 C 1 C 0 0 0 B 0 B 1 2 rank ( B ) + s , min Y 0 S 3 r ( Y 0 ) = rank A 0 A 1 C 1 C 0 0 0 B 0 B 1 rank A 0 , A 1 rank B 0 , B 1 , max Y 1 S 4 r ( Y 1 ) = min m , s , rank A 0 A 1 C 0 C 1 0 0 B 0 B 1 2 rank ( B ) + s , min Y 1 S 4 r ( Y 1 ) = rank A 0 A 1 C 0 C 1 0 0 B 0 B 1 rank A 0 , A 1 rank B 0 , B 1 .
Remark 28. 
In [191], Corollary 2.3, Liu also presented equivalent conditions for Eq. (5) to have a (all) real solution pair(s), i.e.,
X = X 0 and Y = Y 0 ,
and a (all) pure imaginary solution pair(s), i.e.,
X = i X 1 and Y = i Y 1 .
However, in [286], Section 3, Wang et al. provided two counterexamples to illustrate that the items (a) and (c) in [191], Corollary 2.3 are incorrect.
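As a numerical sanity check of the extremal-rank formulas in Theorem 22(1), one can generate a consistent instance of Eq. (5) over R and compare a particular solution against the bounds. The sizes, the random data, and the Moore-Penrose-based particular solution below are our own illustrative choices, not part of [191]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for Eq. (5), AX + YB = C: A is m x r, B is s x n, C is m x n.
m, r, s, n = 5, 3, 2, 4
A = rng.standard_normal((m, r))
B = rng.standard_normal((s, n))

# Build a consistent right-hand side from a known solution pair.
C = A @ rng.standard_normal((r, n)) + rng.standard_normal((m, s)) @ B

# A particular solution via Moore-Penrose inverses:
#   X = A^+ C,  Y = (I - A A^+) C B^+  solves AX + YB = C whenever Eq. (5) is consistent.
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
X = Ap @ C
Y = (np.eye(m) - A @ Ap) @ C @ Bp
assert np.allclose(A @ X + Y @ B, C)

rank = np.linalg.matrix_rank
# Extremal-rank bounds of Theorem 22(1); [B; C] stacks B on top of C, [A, C] is side by side.
min_rx = rank(np.vstack([B, C])) - rank(B)
max_rx = min(n, r, r - rank(A) + rank(np.vstack([B, C])))
min_ry = rank(np.hstack([A, C])) - rank(A)
max_ry = min(m, s, s - rank(B) + rank(np.hstack([A, C])))

# Any solution pair must respect these bounds.
assert min_rx <= rank(X) <= max_rx
assert min_ry <= rank(Y) <= max_ry
```

The pinv-based pair is only one solution; the theorem asserts that the bounds are attained over all solution pairs.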

5.7. Re-(non)negative and Re-(non)positive definite solutions

Let F = C . For a Hermitian matrix A ∈ C n × n , i + ( A ) , i − ( A ) , and  i 0 ( A ) represent the numbers of the positive, negative, and zero eigenvalues of A, respectively. In [280], Corollary 5.7, Wang and He established the maximal and minimal values of
i ± ( X + X * ) and i ± ( Y + Y * )
for a solution pair ( X , Y ) of the complex matrix equation
A X B + C Y D = E ,
where A C m × k 2 , B C k 2 × q , C C m × k 3 , D C k 3 × q , and  E C m × q are given. This result directly yields the equivalent conditions for Re-positive definite, Re-negative definite, Re-nonnegative definite, and Re-nonpositive definite solutions to Eq. (45).
Definition 8. 
[280] Let A ∈ C n × n and H ( A ) = A + A * . Then A is said to be Re-positive definite if H ( A ) > 0 , Re-nonnegative definite if H ( A ) ≥ 0 , Re-negative definite if H ( A ) < 0 , and  Re-nonpositive definite if H ( A ) ≤ 0 .
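Definition 8 is easy to test numerically: H(A) is always Hermitian, so its eigenvalues are real and Re-definiteness reduces to their signs. A small sketch (the helper names are ours):

```python
import numpy as np

# A matrix is Re-positive definite when its Hermitian part H(A) = A + A*
# is positive definite; eigvalsh returns the (real) eigenvalues of H(A).
def hermitian_part(A):
    return A + A.conj().T

def re_positive_definite(A):
    return bool(np.all(np.linalg.eigvalsh(hermitian_part(A)) > 0))

# A non-Hermitian example: the identity plus a skew-symmetric perturbation.
A = np.eye(2) + np.array([[0.0, 1.0], [-1.0, 0.0]])
assert not np.allclose(A, A.conj().T)   # A itself is not Hermitian
assert re_positive_definite(A)          # but H(A) = 2I > 0
assert not re_positive_definite(-A)     # -A is Re-negative definite
```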
Theorem 23. 
[280], Corollary 5.8 Let X C k 2 × k 2 and Y C k 3 × k 3 be a solution pair of Eq. (45). Denote,
G 1 = 0 D C * 0 D * 0 E * 0 C E 0 A 0 0 A * 0 , G 2 = 0 C * D 0 C 0 E 0 D * E * 0 B * 0 0 B 0 , G 3 = 0 B A * 0 B * 0 E * 0 A E 0 C 0 0 C * 0 , G 4 = 0 A * B 0 A 0 E 0 B * E * 0 D * 0 0 D 0 ,
s 1 = rank 0 A * B 0 0 A 0 E C 0 B * E * 0 0 D * , s 2 = rank 0 B A * 0 0 A E 0 C 0 B * 0 E * 0 D * 0 0 C * 0 0 , s 3 = rank 0 B A * 0 0 A E 0 C 0 B * 0 E * 0 D * 0 D 0 0 0 , s 4 = rank A B * ,
w 1 = rank 0 C * D 0 0 C 0 E A 0 D * E * 0 0 B * , w 2 = rank 0 D C * 0 0 C E 0 A 0 D * 0 E * 0 B * 0 0 A * 0 0 , w 3 = rank 0 D C * 0 0 C E 0 A 0 D * 0 E * 0 B * 0 B 0 0 0 , w 4 = rank C D * .
Then:
(1)
X is Re-positive definite if and only if
i + ( G 3 ) = rank A C + rank ( B ) and i + ( G 4 ) ≥ rank D B + rank ( A ) , or i + ( G 3 ) ≥ rank A C + rank ( B ) and i + ( G 4 ) = rank D B + rank ( A ) .
(2)
X is Re-negative definite if and only if
i − ( G 3 ) = rank A C + rank ( B ) and i − ( G 4 ) ≥ rank D B + rank ( A ) , or i − ( G 3 ) ≥ rank A C + rank ( B ) and i − ( G 4 ) = rank D B + rank ( A ) .
(3)
X is Re-nonnegative definite if and only if
s 1 − s 4 + i − ( G 3 ) − s 2 = 0 and s 1 − s 4 + i − ( G 4 ) − s 3 ≥ 0 , or s 1 − s 4 + i − ( G 3 ) − s 2 ≥ 0 and s 1 − s 4 + i − ( G 4 ) − s 3 = 0 .
(4)
X is Re-nonpositive definite if and only if
s 1 − s 4 + i + ( G 3 ) − s 2 = 0 and s 1 − s 4 + i + ( G 4 ) − s 3 ≥ 0 , or s 1 − s 4 + i + ( G 3 ) − s 2 ≥ 0 and s 1 − s 4 + i + ( G 4 ) − s 3 = 0 .
(5)
Y is Re-positive definite if and only if
i + ( G 1 ) = rank A C + rank ( D ) and i + ( G 2 ) ≥ rank D B + rank ( C ) , or i + ( G 1 ) ≥ rank A C + rank ( D ) and i + ( G 2 ) = rank D B + rank ( C ) .
(6)
Y is Re-negative definite if and only if
i − ( G 1 ) = rank A C + rank ( D ) and i − ( G 2 ) ≥ rank D B + rank ( C ) , or i − ( G 1 ) ≥ rank A C + rank ( D ) and i − ( G 2 ) = rank D B + rank ( C ) .
(7)
Y is Re-nonnegative definite if and only if
w 1 − w 4 + i − ( G 1 ) − w 2 = 0 and w 1 − w 4 + i − ( G 2 ) − w 3 ≥ 0 , or w 1 − w 4 + i − ( G 1 ) − w 2 ≥ 0 and w 1 − w 4 + i − ( G 2 ) − w 3 = 0 .
(8)
Y is Re-nonpositive definite if and only if
w 1 − w 4 + i + ( G 1 ) − w 2 = 0 and w 1 − w 4 + i + ( G 2 ) − w 3 ≥ 0 , or w 1 − w 4 + i + ( G 1 ) − w 2 ≥ 0 and w 1 − w 4 + i + ( G 2 ) − w 3 = 0 .
Remark 29. 
When both B and C in Theorem 23 are taken as identity matrices with k 2 = q and m = k 3 , we can immediately obtain the equivalent conditions for the existence of Re-positive definite, Re-negative definite, Re-nonnegative definite, and  Re-nonpositive definite solutions of Eq. (5).

5.8. η -Hermitian and η -skew-Hermitian solutions

Let F = H . The η -Hermitian matrices have been employed in statistical multichannel processing and widely linear modelling (see [255,256]). Yuan and Wang [335] considered the least-squares η -Hermitian solution with the least norm to the quaternion matrix equation
A X E + F X B = C
with unknown X. After that, He and Wang [118] investigated the η -Hermitian solutions of the matrix equation
A X A η * + B Y B η * = C ,
where A, B, and C are given quaternion matrices with appropriate orders. Moreover, a solution pair ( X , Y ) of Eq. (46) is said to be η -Hermitian ( η -skew-Hermitian) if both X and Y are η -Hermitian ( η -skew-Hermitian).
Theorem 24. 
[118], Corollaries 3.5 and 4.3 Let A, B, and  C be given over H such that C = C η * . Set
M = R A B and S = B L M .
Then, the following are equivalent:
(1)
Eq. (46) has an η-Hermitian solution pair ( X , Y ) ;
(2)
R M R A C = 0 and R A C ( R B ) η * = 0 ;
(3)
rank A C 0 B η * = rank ( A ) + rank ( B ) and rank A B C = rank A B .
In this case,
X = A C ( A ) η * 1 2 A B M C [ I + ( B ) η * S η * ] ( A ) η * 1 2 A ( I + S B ) C ( M ) η * B η * ( A ) η * A S W 2 S η * ( A ) η * + L A U + U η * ( L A ) η , Y = 1 2 M C ( B ) η * [ I + ( S S ) η ] + 1 2 ( I + S S ) B C ( M ) η * + L M W 2 ( L M ) η + V L B η + L B V η * + L M L S W 1 + W 1 η * ( L S ) η ( L M ) η ,
where W 1 , U, V, and  W 2 = W 2 η * are arbitrary quaternion matrices with appropriate sizes, and 
min A X A η * + B Y B η * = C rank ( X ) = 2 rank C B rank 0 B η * B C , min A X A η * + B Y B η * = C rank ( Y ) = 2 rank A C rank 0 A η * A C .
Remark 30. 
Inspired by Theorem 24, we now consider the η-Hermitian solutions of Eq. (5) over H , namely, finding X and Y over H such that
A X + Y B = C , X = X η * , and Y = Y η * ,
where A, B, and C are given over H . By means of
X = 1 2 ( X ^ + X ^ η * ) and Y = 1 2 ( Y ^ + Y ^ η * ) ,
it is easy to check that the following are equivalent:
(1)
The statement (47) holds.
(2)
There exist the matrices X ^ and Y ^ over H such that
A X ^ + Y ^ B = C and X ^ A η * + B η * Y ^ = C η * .
(3)
There exist the matrices X ^ and Y ^ over H such that
A I X ^ I 0 0 A η * + I B η * Y ^ B 0 0 I = C C η * .
Therefore, solving the η-Hermitian solutions of Eq. (5) reduces to solving an equation of the form
A X B + C Y D = E ,
which has been solved by Baksalary and Kala [7]. By the same method, we have:
(1)
There exists a matrix pair ( X , Y ) such that
A X + Y B = C and X = X η * ,
if and only if there exist the matrices X ^ , Y ^ , and  Z ^ such that
A X ^ + Y ^ B = C and X ^ A η * + B η * Z ^ = C η * .
(2)
There exists a matrix pair ( X , Y ) such that
A X + Y B = C and Y = Y η * ,
if and only if there exist the matrices X ^ , Y ^ , and  Z ^ such that
A X ^ + Y ^ B = C and Z ^ A η * + B η * Y ^ = C η * .
Moreover, solvability conditions and expressions of the general solution to Eqs. (48) and (49) can be obtained by [119].
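The reduction in Remark 30 hinges on η∗ being a real-linear, product-reversing involution, so that X = ( X̂ + X̂ η* ) / 2 is automatically η-Hermitian. A minimal scalar sketch over H , with quaternions stored as 4-vectors and η = i (all helper names are ours):

```python
import numpy as np

# Quaternions as 4-vectors (a0, a1, a2, a3) ~ a0 + a1 i + a2 j + a3 k.
def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

ETA = np.array([0.0, 1.0, 0.0, 0.0])   # eta = i

def eta_star(q):
    # q^{eta*} = -eta q* eta, with q* the quaternion conjugate
    return -qmul(qmul(ETA, qconj(q)), ETA)

rng = np.random.default_rng(1)
p, q, x_hat = (rng.standard_normal(4) for _ in range(3))

# eta* is an involution and reverses products ...
assert np.allclose(eta_star(eta_star(x_hat)), x_hat)
assert np.allclose(eta_star(qmul(p, q)), qmul(eta_star(q), eta_star(p)))

# ... hence the symmetrization used in Remark 30 is eta-Hermitian:
x = 0.5 * (x_hat + eta_star(x_hat))
assert np.allclose(eta_star(x), x)
```

The same identities hold entrywise for quaternion matrices, which is exactly what makes the two-equation system in Remark 30 equivalent to the constrained problem (47).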
Kyrchei [167] further discussed the η -skew-Hermitian solutions of Eq. (46) under C = − C η * using a method similar to Theorem 24.
Theorem 25. 
[167], Corollary 4.4 Under the hypotheses of Theorem 24 with C = − C η * in place of C = C η * , the following are equivalent:
(1)
Eq. (46) has an η-skew-Hermitian solution pair ( X , Y ) ;
(2)
R M R A C = 0 and R A C ( R B ) η * = 0 ;
(3)
rank A C 0 B η * = rank ( A ) + rank ( B ) and rank A B C = rank A B ;
in which case, the η-skew-Hermitian solutions of Eq. (46) are
X = A C ( A ) η * 1 2 A B M C ( A ) η * + A C ( A B M ) η * 1 2 A B M C ( A S B ) η * + A S B C ( A B M ) η * A S W 2 ( A S ) η * L A U + ( L A U ) η * , Y = 1 2 M C ( B ) η * + B C ( M ) η * + 1 2 M C ( B ) η * ( Q S ) η * + Q S B C ( M ) η * + L M W 2 L M + V L B L B V η * + L M L S W 1 W 1 η * L S L M η * ,
where W 1 , U, V, and  W 2 = W 2 η * are arbitrary over H with appropriate sizes.
Remark 31. 
Using a method similar to that in Remark 30, one can also consider the η-skew-Hermitian solutions of Eq. (5) over H , i.e., finding X and Y such that
A X + Y B = C , X = − X η * , and Y = − Y η * .
Remark 32. 
In terms of determinant representations of the MP inverse over H (i.e., Theorem 10), Kyrchei [167] also presented Cramer’s rules for the partial η-Hermitian and η-skew-Hermitian solutions to Eq. (46) under C = C η * and C = − C η * , respectively.

5.9. ϕ -Hermitian solutions

Let F = H . Rodman [227] introduced the quaternion matrix A ϕ over H , a generalization of A * and A η * , and defined the ϕ -Hermitian quaternion matrix as follows:
Definition 9. 
[227] Let ϕ : H H be a map.
(1)
We call ϕ an anti-endomorphism if for any α , β H , ϕ satisfies
ϕ ( α β ) = ϕ ( β ) ϕ ( α ) and ϕ ( α + β ) = ϕ ( β ) + ϕ ( α ) .
An anti-endomorphism ϕ is called an involution if ϕ 2 is the identity map.
(2)
Let ϕ be a nonzero involution. Then ϕ can be represented as a matrix in R 4 × 4 with respect to the basis { 1 , i , j , k } , i.e.,
ϕ = 1 0 0 T ,
where either T = − I 3 (in which case ϕ is called a standard involution), or  T ∈ R 3 × 3 is an orthogonal symmetric matrix with the eigenvalues { 1 , 1 , − 1 } (in which case ϕ is called a nonstandard involution).
(3)
Let ϕ be a nonstandard involution and A = [ a i , j ] H m × n . Define
ϕ ( A ) = [ ϕ ( a i , j ) ] H m × n and A ϕ = ϕ ( A T ) H n × m .
If A = A ϕ with m = n , then A is called a ϕ-Hermitian matrix.
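As a concrete instance of Definition 9, consider the map that fixes 1, i, k and negates j, i.e., T = diag(1, −1, 1) on the basis {1, i, j, k} (our choice of T for illustration). One can verify numerically that it is an involutive anti-endomorphism of H (the quaternion helpers are ours):

```python
import numpy as np

# Quaternions as 4-vectors (a0, a1, a2, a3) ~ a0 + a1 i + a2 j + a3 k.
def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

# phi = diag(1, T) on the basis {1, i, j, k} with T = diag(1, -1, 1):
# it fixes 1, i, k and negates j.
def phi(q):
    return np.array([q[0], q[1], -q[2], q[3]])

rng = np.random.default_rng(2)
a, b = rng.standard_normal(4), rng.standard_normal(4)

assert np.allclose(phi(qmul(a, b)), qmul(phi(b), phi(a)))  # phi(ab) = phi(b) phi(a)
assert np.allclose(phi(phi(a)), a)                          # phi^2 = id
```

Additivity of ϕ is immediate from the componentwise definition, so the two assertions cover the anti-endomorphism and involution requirements of Definition 9.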
He et al. [122] considered the ϕ -Hermitian solution Z = Z ϕ of the following system
A 1 X − Y B 1 = C 1 , A 2 Z − Y B 2 = C 2 ,
where A i , B i , and  C i ( i = 1 , 2 ) are given matrices over H with appropriate orders.
Theorem 26. 
[122], Theorem 4.5 Let
A 11 = R B 2 B 1 , B 11 = R A 2 A 2 , C 11 = B 1 L A 11 , D 11 = R A 1 ( R A 2 C 2 B 2 B 1 C 1 ) L A 11 , A 22 = [ L A 2 , ( R C 11 B 2 ) ϕ ] , B 22 = R C 11 B 2 ( L A 2 ) ϕ , C 22 = ( A 2 C 2 L B 2 ) ϕ + ( B 2 ) ϕ ( C 11 ) ϕ D 22 ( B 11 ) ϕ A 2 C 2 B 11 D 11 C 11 B 2 , A = R A 22 L B 11 , B = B 2 L B 22 , C = R A 22 ( B 2 ) ϕ , D = ( L B 11 ) ϕ L B 22 , E = R A 22 C 22 L B 22 , M = R A C ,
Then the following are equivalent:
(1)
The system (50) has a solution ( X , Y , Z ) such that Z = Z ϕ .
(2)
The following rank equalities hold:
rank C i A i B i 0 = rank ( A i ) + rank ( B i ) , i = 1 , 2 , rank C 1 C 2 A 1 A 2 B 1 B 2 0 0 = rank A 1 A 2 + rank B 1 B 2 , rank C 1 C 2 ( A 2 ) ϕ A 2 ( C 2 ) ϕ A 1 A 2 ( B 2 ) ϕ B 1 B 2 ( A 2 ) ϕ 0 0 = rank A 1 A 2 ( B 2 ) ϕ + rank B 1 B 2 ( A 2 ) ϕ ) ,
rank C 2 ( A 2 ) ϕ A 2 ( C 2 ) ϕ A 2 ( B 2 ) ϕ B 2 ( A 2 ) ϕ 0 = 2 rank ( A 2 ( B 2 ) ϕ ) , rank C 1 C 2 ( A 2 ) ϕ A 2 ( C 2 ) ϕ A 1 A 2 ( B 2 ) ϕ 0 ( C 1 ) ϕ 0 ( B 1 ) ϕ B 1 B 2 ( A 2 ) ϕ 0 0 0 ( A 1 ) ϕ 0 0 = 2 rank A 1 A 2 ( B 2 ) ϕ 0 ( B 1 ) ϕ .
(3)
The following equations hold:
R A 2 C 2 L B 2 = 0 , D 11 L C 11 = 0 , R B 11 D 11 = 0 , R M R A E = 0 , R C E L B = 0 , R A E L D = 0 .
In this case,
X = ( X 1 + ( X 5 ) ϕ ) / 2 , Y = ( X 2 + ( X 4 ) ϕ ) / 2 , and Z = ( X 3 + ( X 3 ) ϕ ) / 2 ,
where X 1 , X 2 , . . . , X 5 are given in [122], Formulas (4.24)–(4.39).
Remark 33. 
(1)
When A 1 = B 1 = C 1 = 0 , Theorem 26 yields the result for
A 2 Z − Y B 2 = C 2 subject to Z = Z ϕ ,
which can be regarded as Eq. (1) under the constraint that X is ϕ-Hermitian, i.e.,
A X − Y B = C subject to X = X ϕ .
(2)
Note that ϕ-Hermitian matrices are a generalization of Hermitian matrices. In [119], Theorems 5.1 and 5.2, He and Wang investigated the following problem over C :
A X − Y B = C subject to X = X * ( or Y = Y * ) ,
which is clearly similar to the problem (52).
(3)
By the same method as in Remark 30, we can also discuss the following problem:
A X − Y B = C subject to X = X ϕ and Y = Y ϕ .

5.10. Equality-constrained solutions

Let F = C . Wang et al. [270] considered the solvability conditions and the general solution for Eq. (5) over C under the following equality constraints:
A 1 X = C 1 , Y B 2 = C 2 , A 3 X B 3 = C 3 , and A 4 Y B 4 = C 4 ,
where A 1 , A 3 , A 4 , B 2 , B 3 , B 4 , C 1 , C 2 , C 3 , and  C 4 are given.
Theorem 27. 
[270], Theorem 3.2 Let A 1 , B 1 , C 1 , B 2 , C 2 , A 3 , B 3 , C 3 , A 4 , B 4 , C 4 , A, B, and  C be given matrices over C with appropriate sizes. Set
T = A 3 L A 1 , K = R B 2 B 4 , ϕ 1 = A 1 C 1 + L A 1 T ( C 3 A 3 A 1 C 1 B 3 ) B 3 , ϕ 2 = C 2 B 2 + A 4 ( C 4 A 4 C 2 B 2 B 4 ) K R B 2 , A 11 = A L A 1 L T , B 11 = R K R B 2 B , C 33 = A L A 1 , D 33 = R B 3 , C 44 = L A 4 , D 44 = R B 2 B , E 11 = C A ϕ 1 ϕ 2 B , A a = R A 11 C 33 , B b = D 33 L B 11 , C c = R A a C 44 , D d = D 44 L B 11 , E = R A 11 E 11 L B 11 , M = R A a C c , N = D d L B b , S = C c L M .
Then the following are equivalent:
(1)
Eq. (5) under the constraints (53) is consistent.
(2)
The following rank equations hold:
rank [ A 1 , C 1 ] = rank ( A 1 ) , rank C 3 B 3 = rank ( B 3 ) , rank A 1 C 1 B 3 A 3 C 3 = rank A 1 A 3 ,
rank C 2 B 2 = rank ( B 2 ) , rank [ A 4 , C 4 ] = rank ( A 4 ) , rank C 4 A 4 C 2 B 4 B 2 = rank [ B 4 , B 2 ] ,
rank 0 B 2 B B 3 A C 2 C B 3 A 3 0 C 3 A 1 0 C 1 B 3 = rank 0 B 2 B B 3 A 0 0 A 3 0 0 A 1 0 0 , rank 0 B B 2 A C C 2 A 1 C 1 0 = rank 0 B B 2 A 0 0 A 1 0 0 ,
rank 0 B B 4 B 2 A 4 A A 4 C C 4 A 4 C 2 A 1 C 1 0 0 = rank 0 B B 4 B 2 A 4 A 0 0 0 A 1 0 0 0 , rank 0 B 4 B 2 B B 3 A 3 0 0 C 3 A 1 0 0 C 1 B 3 A 4 A C 4 A 4 C 2 A 4 C B 3 = rank 0 B 4 B 2 B B 3 A 3 0 0 0 A 1 0 0 0 A 4 A 0 0 0 .
(3)
The following equations hold:
R A 1 C 1 = 0 , R T ( C 3 A 3 A 1 C 1 B 3 ) = 0 , C 3 L B 3 = 0 , C 2 L B 2 = 0 , R A 4 C 4 = 0 , ( C 4 A 4 C 2 B 2 B 4 ) L K = 0 , R M R A a E = 0 , E L B b L N = 0 , R A a E L D d = 0 , R C c E L B b = 0 .
In this case,
X = A 1 C 1 + L A 1 T ( C 3 A 3 A 1 C 1 B 3 ) B 3 + L A 1 L T Z 1 + L A 1 W 1 R B 3 , Y = C 2 B 2 + A 4 ( C 4 A 4 C 2 B 2 B 4 ) K R B 2 + L A 4 W 2 R B 2 + Z 2 R K R B 2 , Z 1 = A 11 ( E 11 C 33 W 1 D 33 C 44 W 2 D 44 ) A 11 V 7 B 1 + L A 11 V 6 , Z 2 = R A 11 ( E 11 C 32 W 1 D 33 C 44 W 2 D 44 ) B 11 + A 11 A 11 V 7 + V 8 R B 11 , W 1 = A a E B b A a C c M E B b A a S C c E N D d B b A a S V 4 R N D d B b + L A a V 1 + V 2 R B b , W 2 = M E D d + S S C c E N + L M L S V 3 + L M V 4 R N + V 5 R D d ,
where V 1 , . . . , V 8 are arbitrary matrices over C with appropriate orders.
Remark 34. 
Inspired by Theorem 27, using the Kronecker product and the vectorization operation, Wang et al. [282] investigated the minimum-norm least-squares solution for the quaternion tensor system under the Einstein product:
A * N X + Y * N B = C subject to A 1 * N X = C 1 , A 3 * N X * N B 3 = C 3 , Y * N B 2 = C 2 , A 4 * N Y * N B 4 = C 4 ,
where X and Y are unknown tensors and the others are given tensors over H . Thus, the minimum-norm least-squares solution of the tensor equation (54) is directly given in [282], Corollary 3.4, which is also expounded in Subsection 6.5.

6. Various Generalizations of GSE

This section shows the generalizations of GSE to diverse domains such as various rings, dual numbers, dual quaternions, linear operators, tensors, and matrix polynomials, as well as its more general forms. This embodies another enchantment of mathematics: constantly exploring more general problems to ultimately reveal the most essential conclusions.

6.1. Generalizing RET Over Different Rings

RET given in Section 3 characterizes the equivalent condition for the solvability of GSE over a field. This subsection mainly surveys generalizations of RET to the following algebraic structures: unit regular rings, principal ideal rings, commutative rings, division rings, Artinian rings, etc. To simplify expressions, we first introduce Guralnick’s definition in [93].
Definition 10. 
[93] If (3) is equivalent to (4) over a ring F , then we say that F has the equivalence property.

6.1.1. Generalizing RET over unit regular rings

A ring F is called unit regular if for any a ∈ F , there exists a unit u ∈ F such that
a = a u a .
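For intuition, over a field every square matrix is unit regular, and a suitable unit u can be read off from the SVD. The construction below is a standard illustration of the definition over R (our sketch, not Hartwig's argument):

```python
import numpy as np

def unit_inner_inverse(a, tol=1e-10):
    # SVD a = P diag(s) Q^T; replace each nonzero singular value by its
    # reciprocal and each zero by 1, giving an *invertible* u with a u a = a.
    P, s, Qt = np.linalg.svd(a)
    d = np.array([1.0 / si if si > tol else 1.0 for si in s])
    return Qt.T @ np.diag(d) @ P.T

rng = np.random.default_rng(3)
a = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # a rank-deficient 4x4 matrix
u = unit_inner_inverse(a)

assert abs(np.linalg.det(u)) > 1e-10   # u is a unit (invertible)
assert np.allclose(a @ u @ a, a)       # a = a u a, so a is unit regular
```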
Hartwig [110] generalized RET to a unit regular ring, also extending Theorems 3 and 4.
Theorem 28. 
[110] Let F be a unit regular ring, and 
M = a c 0 b F 2 × 2 .
Then, the following statements are equivalent:
(1)
M has an inner inverse with the form of r s 0 t F 2 × 2 ;
(2)
a x − y b = c has a solution pair x , y ∈ F ;
(3)
( 1 − a a − ) c ( 1 − b − b ) = 0 for all inner inverses a − and b − ;
(4)
a 1 c b 1 = 0 , where a 1 ∈ { x ∈ F ∣ x a = 0 } and b 1 ∈ { x ∈ F ∣ b x = 0 } ;
(5)
M = p a 0 0 b q , where p , q F are invertible;
(6)
( 1 − a a ( 1 , 2 ) ) c ( 1 − b ( 1 , 2 ) b ) = 0 for all a ( 1 , 2 ) and b ( 1 , 2 ) ;
(7)
M ( 1 , 2 ) = a ( 1 , 2 ) a ( 1 , 2 ) c b ( 1 , 2 ) 0 b ( 1 , 2 ) is a reflexive inverse of M.
If F is a skewfield (or a commutative ring without zero divisors) and a , c , b ∈ F n × n , then items (1)–(7) are also equivalent to
(5a)
rank ( M ) = rank ( a ) + rank ( b ) ;
(5b)
rank ( ( 1 − a a ( 1 , 2 ) ) c ( 1 − b ( 1 , 2 ) b ) ) = 0 .
Remark 35. 
In the conclusions of [110], Hartwig mentioned that RET also holds for matrices over Euclidean domains and unit regular rings. These rings are finite and elementary divisor rings satisfying the cancellation law. However, whether these properties are sufficient to ensure the validity of RET remains an open problem. In [93], Guralnick considered parts of this problem.

6.1.2. Generalizing RET over Principal Ideal Domains

Building on [207], Theorem 2, Feinberg [76] considered RET over principal ideal domains and further extended it to a more general form.
Theorem 29. 
[76], Theorems 1 and 2 Let F be a principal ideal domain.
(1)
Let A F r × r , B F s × s , and  C F r × s . Then, the matrix equation
A X + Y B = C
is consistent if and only if
A C 0 B and A 0 0 B
are equivalent.
(2)
Let M i j ∈ F r i × r j for 1 ≤ i ≤ j ≤ t . Then,
M 11 M 12 M 1 t 0 M 22 M 2 t 0 0 M t t and M 11 0 0 0 M 22 0 0 0 M t t
are equivalent if and only if there exist X i j , Y i j F r i × r j such that
M i j = M i i X i j + ∑ k = i + 1 j Y i k M k j
for 1 ≤ i < j ≤ t .
Remark 36. 
Feinberg [76] also noted that it is easy to generalize Theorem 1 to a principal ideal domain by a method similar to that of [76], Theorem 1.
Let F = C . In view of Remark 6, we can see that (4) implies that the equivalence matrices can be chosen to be upper block triangular, i.e.,
I Y 0 I and I X 0 I .
Olshevsky [209] generalized the above conclusion by showing that if any block upper triangular matrix is equivalent to its block diagonal part, then the equivalent matrices can also be chosen in the form of a block upper triangular matrix.
Theorem 30. 
[209], Theorem 3.2 Let A i j ∈ C n i × n j for 1 ≤ i ≤ j ≤ k . If 
G = A 11 0 0 0 A 22 0 0 0 0 0 0 A k k and H = A 11 A 12 A 1 k 0 A 22 A 23 A 2 k 0 0 0 A k k
are equivalent, then there exist X i j , Y i j ∈ C n i × n j ( 1 ≤ i < j ≤ k ) such that
I n 1 Y 12 Y 1 k 0 I n 2 Y 23 Y 2 k 0 0 0 I n k G I n 1 X 12 X 1 k 0 I n 2 X 23 X 2 k 0 0 0 I n k = H .
Remark 37. 
Theorem 30 can also be derived from Theorem 29(2).
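One direction of these Roth-type results is a direct computation: a solution pair of AX + YB = C yields unit upper block-triangular transforms carrying diag(A, B) to the triangular block matrix. A quick numerical check (square blocks and random data are our choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Square blocks as in Theorem 29(1): A is r x r, B is s x s, C is r x s.
r, s = 3, 2
A, B = rng.standard_normal((r, r)), rng.standard_normal((s, s))
X, Y = rng.standard_normal((r, s)), rng.standard_normal((r, s))
C = A @ X + Y @ B                      # makes AX + YB = C consistent by construction

# Unit upper block-triangular equivalence transforms:
#   [I Y; 0 I] [A 0; 0 B] [I X; 0 I] = [A, AX + YB; 0, B] = [A C; 0 B].
P = np.block([[np.eye(r), Y], [np.zeros((s, r)), np.eye(s)]])
Q = np.block([[np.eye(r), X], [np.zeros((s, r)), np.eye(s)]])
D = np.block([[A, np.zeros((r, s))], [np.zeros((s, r)), B]])
M = np.block([[A, C], [np.zeros((s, r)), B]])

assert np.allclose(P @ D @ Q, M)
```

The converse direction, that equivalence forces solvability, is the substantive part of Theorems 29 and 30.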

6.1.3. Generalizing RET over Division and Module-Finite Rings

Let F be a ring with identity. For G F a × b and H F c × d , let
E R ( G , H ) = ( T , S ) T F c × a , S F d × b and T G = H S .
Denote
M = A C 0 B ,
where A, B, and C are given in Eq. (1). For ( T , S ) E R ( M , A ) , denote
T = T 1 T 2 and S = S 1 S 2 ,
where T 1 F m × m , T 2 F m × s , S 1 F r × r , and  S 2 F r × n . Then,
T M = A S ⟺ T 1 A = A S 1 and T 1 C + T 2 B = A S 2 .
Define a map
g M , R : E R ( M , A ) → E R ( A , A ) by g M , R ( ( T 1 , T 2 ) , ( S 1 , S 2 ) ) = ( T 1 , S 1 ) .
Gustafson et al. [98] gave a general characterization of the solvability of Eq. (1) over F based on the map g M , R , which is essentially similar to the method used by Hartwig in [110].
Theorem 31. 
[98], Lemma 1 Eq. (1) is consistent if and only if the map g M , R defined in (55) is surjective.
Using Theorem 31, Gustafson et al. [98] further proved that RET also holds for a division ring and for a ring which is finitely generated as modules over its center.
Theorem 32. 
[98] A division ring (or a ring that is module-finite over its center) has the equivalence property.

6.1.4. Generalizing RET over Commutative Rings

Let F be a commutative ring with identity. Gustafson [96] showed that RET remains valid over the commutative ring F . By reducing this problem to Artinian rings, Gustafson employed a simple argument based on composition length. This approach parallels that of Flanders and Wimmer [80], who used linear transformations and subspace dimensions to discuss RET over fields.
Theorem 33. 
[96], Theorem 1 A commutative ring with identity has the equivalence property.
Guralnick [95] further generalized Theorem 33 to finite sets of matrices over the commutative ring F . Let F [ x 1 , x 2 , . . . , x t ] be the polynomial ring over F , where the x i ( i = 1 , . . . , t ) commute with each other and with every element of F . For
C ˜ = { C i ∈ F m × n ∣ 1 ≤ i ≤ r } and D ˜ = { D i ∈ F m × n ∣ 1 ≤ i ≤ r } ,
we say that C ˜ and D ˜ are simultaneously equivalent if there exist invertible matrices U ∈ F m × m and V ∈ F n × n such that U C i V = D i for any 1 ≤ i ≤ r (see [95,177]).
Theorem 34. 
[95], Theorem B(i) For A i F m × r , B i F s × n , and  C i F m × n , let
M i = A i C i 0 B i and N i = A i 0 0 B i ,
where 1 i t . Suppose that the polynomial ring F [ x 1 , x 2 , . . . , x t ] has the equivalence property. Then, the system of matrix equations
A 1 X − Y B 1 = C 1 , A 2 X − Y B 2 = C 2 , … , A r X − Y B r = C r ,
has a common solution pair X ∈ F r × n and Y ∈ F m × s if and only if
M ˜ = M i 1 i r and N ˜ = N i 1 i r
are simultaneously equivalent.
Remark 38. 
Dmytryshyn and his collaborators [58,59] have made significant contributions to further generalizations of the system (56). Moreover, Dmytryshyn was awarded the SIAM Student Paper Prize 2015 for his work [59].
Let F be a field with char ( F ) ≠ 2 . We stipulate that in the following systems from [58,59], except for the unknown matrices, the remaining matrices are given with appropriate orders over F . In [59], Theorem 4.1, Dmytryshyn et al. first studied the system
A i X k ± X j B i = C i ,
where i = 1 , , n , k , j { 1 , , m } , 1 m 2 n , and X 1 , , X m are unknown. Their method for solving the system (57) extends that used in [80]. Based on the system (57), they then considered the main research subject of [59], i.e.,
A i X k ± X j B i = C i , i = 1 , , n 1 , F i X k ± X j G i = H i , i = 1 , , n 2 ,
where
(i)
k , j , k ′ , j ′ ∈ { 1 , … , m } , 1 ≤ m ≤ ( 2 n 1 + 2 n 2 ) , and  X 1 , … , X m are unknown;
(ii)
for 1 ≤ l ≤ m , the symbol X l ⋆ denotes the matrix transpose X l T and, for the complex number field, also the matrix conjugate transpose X l * ,
(see [59], Theorem 1.1). In [59], Theorem 6.1, they further generalized the system (58) to the following form:
A i X k K i L i X j B i = C i , i = 1 , , n 1 , F i X k M i + N i X j G i = H i , i = 1 , , n 2 ,
where j , k , j ′ , k ′ ∈ { 1 , . . . , m } , 1 ≤ m ≤ ( 2 n 1 + 2 n 2 ) , and  X 1 , X 2 , … , X m are unknown.
Let F be a skew field of char ( F ) ≠ 2 that is finite dimensional over its center. Interestingly, two years later, Dmytryshyn et al. [58] generalized the system (59) over F , including the complex conjugation of unknown matrices, i.e.,
A i X i ′ ε i M i − N i X i ″ δ i B i = C i ,
(i)
of complex matrix equations, in which ε i , δ i { 1 , C , T , * } and X C = X ¯ is the complex conjugate of X,
(ii)
of quaternion matrix equations, in which ε i , δ i { 1 , * } and X * is the quaternion conjugate transpose of X,
where i ′ , i ″ ∈ { 1 , … , t } , i = 1 , … , s , 1 ≤ t ≤ 2 s , and  X 1 , … , X t are unknown (see [58], Theorem 2). The system (58) is also extended over F (see [58], Theorem 1).

6.1.5. Generalizing RET over Artinian and Noncommutative Rings

Let N be a subgroup of a finitely generated abelian group M. If M and N ⊕ M / N are isomorphic, is N then a direct summand of M? This problem was raised by H. Matsumura and subsequently answered affirmatively by H. Toda. Miyata [202] showed that this result can be generalized to modules, i.e.,  if F is a commutative Noetherian ring and M is a finitely presented F -module with a submodule N, then
M ≅ N ⊕ M / N ⟹ N is a direct summand of M .
We say that F has the extension property if (60) holds.
Guralnick [93] proved the equivalence between the equivalence property and the extension property for an Artinian ring F .
Theorem 35. 
[93], Corollary 2.7 Let F be a right Artinian ring. Then, F has the extension property if and only if F has the equivalence property.
Remark 39. 
In the proof of [93], Theorem 3.4, Guralnick proposed a new perspective to prove Theorem 33. Moreover, Guralnick in [93], Theorem 3.5 showed that a commutative ring has the extension property. Differing from Miyata [202] and Gustafson [96], this proof avoids the completion of a local Noetherian ring.
Guralnick [93] found that two special classes of Artinian rings (i.e., semisimple Artinian rings and Artinian principal ideal rings) possess the equivalence property.
Theorem 36. 
[93], Theorem 2.4 and Corollary 4.6
(1)
A semisimple Artinian ring has the equivalence property.
(2)
An Artinian principal ideal ring has the equivalence property.
Guralnick [93] finally discussed the more general case where F is a regular ring, which generalizes Theorem 36 as well as the corresponding items of Theorem 28.
Theorem 37. 
[93], Theorem 4.3 Let F be a regular ring. Then, F has the equivalence property if and only if F n × n is directly finite for all n.
Interestingly, Guralnick [94] gave a generalized definition of the equivalence property, i.e.,  the generalized equivalence property.
Definition 11. 
[94] Let F be a ring with identity. For  A i j ∈ F n i × m j ( 1 ≤ i ≤ j ≤ k ), denote
A ˜ = A 11 0 0 A k k and B ˜ = A 11 A i j 0 A k k .
We say that F has the generalized equivalence property if the equivalence of A ˜ and B ˜ implies that there exist X i j ∈ F n i × n j and Y i j ∈ F m i × m j such that
I X i j 0 I A ˜ = B ˜ I Y i j 0 I .
Guralnick [94] then proved that not only semisimple Artinian rings and Artinian principal ideal rings but also module-finite R-algebras over a commutative ring R possess the generalized equivalence property. This evidently generalizes Theorem 36.
Theorem 38. 
[94], Theorems 3.3, 3.6 and 3.7 A semisimple Artinian ring, an Artinian principal ideal ring, or  a module finite R-algebra for a commutative ring R has the generalized equivalence property.
Remark 40. 
It is worth noting that the generalized form of Eq. (5), i.e.,
A X B + C Y D = E
with unknown X and Y, has also been discussed over fields, principal ideal domains, simple Artinian rings, regular rings with identity, and associative rings with unit by [7,132,210,277], and [42], respectively.

6.2. Generalizing RET to a rank minimization problem

In a brief three-page article [186], Lin and Wimmer revealed that RET is essentially a special case of a rank minimization problem over a field F . Let GI ( k ) be the set of all invertible matrices of order k.
Theorem 39. 
[186], Theorem 2 Let F be a field, and let A F m × m , B F n × n , and  C F m × n . Then,
min { rank ( A X − Y B − C ) ∣ X , Y ∈ F m × n } = min { rank ( [ A , C ; 0 , B ] − P [ A , 0 ; 0 , B ] Q ) ∣ P , Q ∈ GI ( m + n ) } , where [ A , C ; 0 , B ] denotes the 2 × 2 block matrix with rows ( A , C ) and ( 0 , B ) .
Subsequently, Ito and Wimmer [136] generalized Theorem 39 to Bezout domains under the condition that A and B are regular. An integral domain F with identity is called a Bezout domain if every finitely generated ideal of F is principal.
Theorem 40. 
[136], Theorem 3.1 Let F be a Bezout domain, and let A F m × m , B F n × n , and  C F m × n . If A and B are regular over F , then
min rank A X Y B C X , Y F m × n = min rank P A C 0 B A 0 0 B Q P , Q GI ( m + n ) = rank A C 0 B rank A 0 0 B = dim R ( A ) + C N ( B ) rank ( A ) .
Remark 41. 
Theorem 40 also yields the equivalence of (3), (4), and (7) over a Bezout domain directly (see [136], Corollary 3.2).

6.3. GSE over Dual Numbers and Dual Quaternions

In 1843, the Irish mathematician William Rowan Hamilton [109] invented quaternions H (also called Hamilton quaternions or real quaternions). The set H is a noncommutative associative division algebra, and it also generalizes the real field R and the complex field C . The quaternion algebra has been effectively applied to mechanics, optics, color image processing, signal processing, computer graphics, flight mechanics, quantum physics, and so on (see [1,85,163,183,227,262,292]).
On the other hand, the British mathematician William Kingdon Clifford [35] invented dual numbers and dual quaternions in 1873. Up to now, dual numbers have been widely used in fields such as kinematics, statics, dynamics, robotics, and  brain dynamics (see [38,79,92,258,259,293]).
Definition 12. 
[35] A dual number is defined as
a ^ = a 0 + a 1 ε ,
where a 0 , a 1 R , and ε is the dual unit such that
ε ≠ 0 , 0 ε = ε 0 = 0 , 1 ε = ε 1 = ε , and ε 2 = 0 .
In this case, a 0 is called the primal (real, standard) part of a ^ , and  a 1 is called the dual (infinitesimal) part of a ^ . The set of all dual numbers is denoted by D , which is a commutative ring.
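Definition 12 can be turned into a tiny arithmetic sketch; note how the nilpotency of ε makes dual parts combine by the product rule (the class and names are ours):

```python
from dataclasses import dataclass

# Dual numbers a0 + a1*eps with eps^2 = 0; the dual part of a product obeys
# the product rule, which is why D underlies forward-mode differentiation.
@dataclass
class Dual:
    a0: float  # primal part
    a1: float  # dual part

    def __add__(self, o):
        return Dual(self.a0 + o.a0, self.a1 + o.a1)

    def __mul__(self, o):
        # (a0 + a1 eps)(b0 + b1 eps) = a0 b0 + (a0 b1 + a1 b0) eps
        return Dual(self.a0 * o.a0, self.a0 * o.a1 + self.a1 * o.a0)

eps = Dual(0.0, 1.0)
assert (eps * eps) == Dual(0.0, 0.0)   # eps is nonzero but nilpotent
x = Dual(3.0, 1.0)                     # x = 3 + eps
assert (x * x).a1 == 6.0               # dual part of x^2 is 2*3 = 6
```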
Fan et al. [69] established solvability conditions and expressions of the general solution to Eq. (5) over D by using the MP inverse and SVD.
Theorem 41. 
[69], Theorem 3 Let
A = A 0 + A 1 ε D m × r , B = B 0 + B 1 ε D s × n , C = C 0 + C 1 ε D m × n ,
where A i R m × r , B i R s × n , and  C i R m × n ( i = 0 , 1 ) . The SVDs of A 0 and B 0 are given by
A 0 = P Ω 0 0 0 Q T and B 0 = M Λ 0 0 0 N T ,
where Ω = diag ( ω 1 , , ω l ) , l = rank ( A 0 ) , Λ = diag ( λ 1 , , λ t ) , t = rank ( B 0 ) , and  P R m × m , Q R r × r , M R s × s , and  N R n × n are orthogonal. Let
P = P 1 P 2 , Q = Q 1 Q 2 , M = M 1 M 2 , N = N 1 N 2 ,
where P 2 R m × ( m l ) , Q 2 R r × ( r l ) , M 2 R s × ( s t ) , and  N 2 R n × ( n t ) . Denote
J = C 1 A 1 A 0 C 0 R A 0 C 0 B 0 B 1 , K 1 = P 2 T A 1 Q 2 , K 2 = M 2 T B 1 N 2 .
Then, Eq. (5) has a solution pair X D r × n and Y D m × s if and only if
R A 0 C 0 L B 0 = 0 and R K 1 P 2 T J N 2 L K 2 = 0 ,
in which case,
X = X 0 + X 1 ε and Y = Y 0 + Y 1 ε
with
X 0 = A 0 C 0 + Q 2 K 1 P 2 T J L B 0 A 0 P 1 V 11 M 1 T B 0 + Q 2 V 23 N 1 T + Q 2 ( L K 1 W 4 K 1 W 3 K 2 ) N 2 T , Y 0 = R A 0 C 0 B 0 + P 2 R K 1 P 2 T J N 2 K 2 M 2 T + P 1 ( V 11 M 1 T + V 12 M 2 T ) + P 2 ( W 3 R K 1 W 3 K 2 K 2 ) M 2 T , X 1 = A 0 J A 0 A 1 Q 2 ( K 1 P 2 T J N 2 K 1 W 3 K 2 + L K 1 W 4 ) N 2 T A 0 P 1 ( V 11 M 1 T + V 12 M 2 T ) B 1 A 0 R 1 B 0 + L A 0 R 2 + A 0 A 1 ( A 0 P 1 V 11 M 1 T B 0 Q 2 V 23 N 1 T ) , Y 1 = R A 0 J B 0 + R A 0 A 1 A 0 P 1 V 11 M 1 T B 0 Q 2 V 23 N 1 T B 0 P 2 R K 1 P 2 T J N 2 K 2 M 2 T B 1 B 0 P 2 W 3 R K 1 W 3 K 2 K 2 M 2 T B 1 B 0 + R 1 R A 0 R 1 B 0 B 0 ,
where V 11 , V 12 , V 23 , W 3 , W 4 , R 1 , and  R 2 are arbitrary matrices over D with appropriate sizes.
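The structure of Theorem 41 comes from splitting the dual equation into its primal and dual parts: since ε 2 = 0, ( A 0 + A 1 ε )( X 0 + X 1 ε ) + ( Y 0 + Y 1 ε )( B 0 + B 1 ε ) = C 0 + C 1 ε holds if and only if A 0 X 0 + Y 0 B 0 = C 0 and A 0 X 1 + A 1 X 0 + Y 0 B 1 + Y 1 B 0 = C 1. A numerical check of this splitting (sizes and data are our choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Dual matrices as pairs (M0, M1) ~ M0 + M1*eps.
m, r, s, n = 4, 3, 2, 3
A0, A1 = rng.standard_normal((m, r)), rng.standard_normal((m, r))
B0, B1 = rng.standard_normal((s, n)), rng.standard_normal((s, n))
X0, X1 = rng.standard_normal((r, n)), rng.standard_normal((r, n))
Y0, Y1 = rng.standard_normal((m, s)), rng.standard_normal((m, s))

# The two real component equations of Eq. (5) over D:
C0 = A0 @ X0 + Y0 @ B0
C1 = A0 @ X1 + A1 @ X0 + Y0 @ B1 + Y1 @ B0

def dual_mul(P, Q):
    # (P0 + P1 eps)(Q0 + Q1 eps) = P0 Q0 + (P0 Q1 + P1 Q0) eps, since eps^2 = 0
    return P[0] @ Q[0], P[0] @ Q[1] + P[1] @ Q[0]

def dual_add(P, Q):
    return P[0] + Q[0], P[1] + Q[1]

lhs = dual_add(dual_mul((A0, A1), (X0, X1)), dual_mul((Y0, Y1), (B0, B1)))
assert np.allclose(lhs[0], C0) and np.allclose(lhs[1], C1)
```

Solving the primal equation first and then the dual equation for (X1, Y1) is exactly where the extra solvability condition involving J, K1, and K2 in Theorem 41 arises.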
At the same time, due to the excellent property that dual number matrices can represent both rotation and translation, the theory of dual quaternions is not only one of the most powerful tools for handling rigid-body motion but also finds applications in computer graphics, medical procedures, neural networks, proximity operations in spacecraft, modern robotics, and so on (see [75,155,206]).
Definition 13. 
[35] A dual quaternion is defined as
q ^ = q 0 + q 1 ε ,
where q 0 , q 1 ∈ H . The set of all dual quaternions is denoted by DH , which is a noncommutative ring with zero divisors.
Recently, Xie et al. [317], inspired by the hand-eye calibration problem in robotics research, studied Eq. (1) over DH .
Theorem 42. 
[317], Theorem 3.1 Let
A = A 0 + A 1 ε DH m × r , B = B 0 + B 1 ε DH s × n , C = C 0 + C 1 ε DH m × n ,
where A i ∈ H m × r , B i ∈ H s × n , and  C i ∈ H m × n ( i = 0 , 1 ). Set
A 11 = A 1 L A 0 , A 2 = R A 0 A 11 , A 3 = R A 0 , C 3 = R A 0 C 0 , B 2 = R B 0 B 1 , A 4 = R A 2 R A 0 , A 5 = A 4 A 1 A 0 , C 4 = A 4 ( A 1 A 0 C 0 + C 0 B 0 B 1 C 1 ) , A 6 = R A 4 A 5 , C 5 = R A 4 C 4 , B 3 = B 2 L B 0 , B 4 = C 4 L B 0 .
Then the following are equivalent:
(1)
Eq. (1) has a solution pair X DH r × n and Y DH m × s ;
(2)
C 3 L B 0 = 0 and B 4 L B 3 = 0 ;
(3)
The following rank equations hold:
rank B 0 0 C 0 A 0 = rank ( B 0 ) + rank ( A 0 ) , rank B 0 0 0 0 B 1 B 0 0 0 C 1 C 0 A 0 A 1 0 0 0 A 0 = rank B 0 0 B 1 B 0 + rank A 0 A 1 0 A 0 ;
in which case,
X = X 0 + X 1 ε and Y = Y 0 + Y 1 ε
with
X 0 = A 0 ( C 0 + Y 0 B 0 ) + L A 0 W ,
X 1 = A 0 [ C 1 + Y 0 B 1 + Y 1 B 0 A 1 A 0 ( C 0 + Y 0 B 0 ) A 11 W ] + L A 0 W 1 , Y 0 = A 3 C 3 B 0 + L A 3 U 1 + U 2 R B 0 , Y 1 = A 4 ( C 4 A 5 U 1 B 0 A 4 U 2 B 2 ) B 0 + L A 4 W 3 + W 4 R B 0 , W = A 2 A 3 [ C 1 + Y 0 B 1 + Y 1 B 0 A 1 A 0 ( C 0 + Y 0 B 0 ) ] + L A 2 W 2 , U 1 = A 6 C 5 B 0 + L A 6 W 5 + W 6 R B 0 , U 2 = A 4 B 4 B 3 + L A 4 W 7 + W 8 R B 3 ,
where W 1 , W 2 , . . . , W 8 are arbitrary matrices over DH with appropriate sizes.
Remark 42. 
Subsequently, Xie and Wang [318] further investigated a more general form of Eq. (1) over DH , namely,
A X + E X F = C Y + D ,
where X and Y are unknown. Additionally, the systematic introduction to Eq. (64) and its more general forms can be found in the book [64].
Remark 43. 
Following Hamilton’s quaternions, various other concepts of quaternions have been proposed, greatly enriching quaternion theory, such as biquaternions (also called complexified quaternions), split quaternions, commutative quaternions (also called Segre biquaternions or reduced biquaternions), generalized commutative quaternions, degenerate quaternions, degenerate pseudo-quaternions, doubly degenerate quaternions, and quaternion algebras over a field (see [36,108,175,190,230,254,326]).
Interestingly, in [216], Remark 8.2.7, Pottmann and Wallner introduced the concept of generalized quaternions, which, in special cases, coincide with Hamilton quaternions, split quaternions, degenerate quaternions, pseudo-degenerate quaternions, and doubly degenerate quaternions.
The theory of dual numbers has also developed rapidly, giving rise to three-dimensional dual numbers (also known as hyper-dual numbers), n-dimensional dual numbers (also known as higher dimensional dual numbers), interval dual numbers, fuzzy dual numbers, neutrosophic dual numbers, and  finite complex modulo integer neutrosophic dual numbers (see [152] and references therein).
One can see that dual quaternions are essentially a combination of dual numbers and quaternions. It is then natural to ask: given such a rich variety of quaternions and dual numbers, what sparks will fly when they interact, and what applications will emerge?

6.4. Linear Operator Equations on Hilbert spaces

Generalizing matrix equations to operator equations on Hilbert spaces or Hilbert C * -modules has been a mainstream research direction. For a C * -algebra A , a Hilbert C * -module E [74,176] is a right A -module equipped with an A -valued inner product
⟨ · , · ⟩ : E × E → A
such that E is complete with respect to the induced norm ‖x‖ = ‖⟨x, x⟩‖^{1/2} .
The theory of generalized inverses also serves as an effective tool for studying operator equations. Notably, most research requires the condition that the ranges of related operators are closed to ensure the existence of their MP inverses (see [43,324]). Interestingly, Douglas [61] pioneered an alternative approach in Hilbert spaces without the strong condition of closed ranges, which is known as the Douglas theorem. This work has provided valuable inspiration for subsequent research on operator equations on Hilbert C * -modules (see [74,205]).
Let A be a C * -algebra, and  let E and F be Hilbert C * -modules. The set of all bounded A -linear maps
A : E → F
is denoted by L ( E , F ) . Particularly, L ( E ) = L ( E , E ) . The adjoint of A L ( E , F ) is a map A * L ( F , E ) such that
⟨ A x , y ⟩ = ⟨ x , A* y ⟩ for all x ∈ E , y ∈ F .
The range and the null space of an operator A are denoted by R ( A ) and N ( A ) , respectively. A closed submodule H of E is said to be orthogonally complemented [176] if
E = H ⊕ H^⊥ ,
where H^⊥ = { x ∈ E : ⟨x, y⟩ = 0 for all y ∈ H }, and H¯ denotes the closure of H. Let P_{A*} be the projection of E onto R(A*)¯, and set R_A = I − P_{A*}, where I is the identity operator on E .
Let A , B , C L ( E ) be such that A and B are adjointable. Mousavi  et al. [205] investigated Eq. (5) in Hilbert C * -modules E , where only the range closures of adjointable operators need to be orthogonally complemented.
Theorem 43. 
[205], Theorem 3.3 Let R(A)¯, R(B)¯, R(A*)¯, and R(B*)¯ be orthogonally complemented. If
R( C R_B ) ⊆ R(A) and R( P_{B*} C* ) ⊆ R(B*) ,
then Eq. (5) is consistent, in which case,
X = X h + X p and Y = Y h + Y p ,
where X_p and Y_p satisfy A X_p = P_A C R_B and B* Y_p* = P_{B*} C* ,
X h = R A W 1 + W 2 P B * and Y h = P A W 3 + W 4 R B * ,
where W_1 , W_2 , W_3 , W_4 ∈ L(E) are arbitrary operators satisfying A W_2 P_{B*} + P_A W_3 B = 0 .
Remark 44. 
In [205], Example 2.1, Mousavi et al. gave an example showing that, in a Hilbert C*-module, an operator’s range closure being orthogonally complemented is a strictly weaker condition than its range being closed.
Let A L ( E , F ) satisfy that R ( A ) is closed. In view of the orthogonal decompositions of closed submodules, i.e.,
E = R(A*) ⊕ N(A) and F = R(A) ⊕ N(A*) ,
Karizaki et al. in [153], Corollary 1.2 showed that the operator A can be decomposed into the following matrix form
A = A 1 0 0 0 : R ( A * ) N ( A ) R ( A ) N ( A * ) ,
where A 1 is invertible, and thus the MP inverse A L ( F , E ) of A is
A = A 1 1 0 0 0 : R ( A ) N ( A * ) R ( A * ) N ( A ) .
Interestingly, four years after [205], Moghani et al. [203] supplemented the conclusions on solving the operator Eq. (5) on Hilbert C * -modules using the matrix forms of adjointable operators and generalized inverses.
Theorem 44. 
[203], Theorem 3.2 Let A L ( E , F ) , B L ( F , E ) , and  C L ( F ) be such that R ( A ) and R ( B ) are closed, R ( A ) = R ( B * ) , and  R ( A * ) = R ( B ) . Then, the operator Eq. (5) has a solution pair X L ( F , E ) and Y L ( E , F ) if and only if
( I − A A^† ) C ( I − B^† B ) = 0 ,
in which case,
X = 1 2 A C + 1 2 A C ( I B B ) + 1 2 W B + ( I A A ) Z , Y = 1 2 A A C B + ( I A A ) C B 1 2 A W B B + V ( I B B ) ,
where Z L ( F , E ) and V L ( E , F ) are arbitrary, and  W L ( E ) satisfies
( I A A ) W B B = 0 .
Remark 45. 
In [203], Section 4, Moghani et al. further studied the following operator equation
A X E + F Y B = C ,
where X and Y are unknown operators between Hilbert C * -modules.
Let H be an infinite dimensional separable Hilbert space. Recently, An  et al. [2] revisited the operator Eq. (1) in H . For A L ( H ) , a  ( 1 , 2 ) -inverse A ( 1 , 2 ) of A is an operator in L ( H ) satisfying A A ( 1 , 2 ) A = A and A ( 1 , 2 ) A A ( 1 , 2 ) = A ( 1 , 2 ) .
Theorem 45. 
[2], Theorem 2.1 Let A, B, and  C L ( H ) . Then, Eq. (1) has a solution pair
X = A^{(1,2)} C and Y = −( I − A A^{(1,2)} ) C B^{(1,2)}
if and only if
( I − A A^{(1,2)} ) C ( I − B^{(1,2)} B ) = 0 .
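In finite dimensions the Moore–Penrose inverse is one particular (1,2)-inverse, so Theorem 45 can be checked numerically. The sketch below is illustrative (the matrix sizes and the use of numpy's pinv are our choices): it builds a consistent instance of Eq. (1) in the form AX − YB = C and verifies the stated solution pair, with the sign convention Y = −(I − AA^{(1,2)})CB^{(1,2)}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A[:, 0] = 0.0   # a singular A
B = rng.standard_normal((n, n)); B[0, :] = 0.0   # a singular B
X0, Y0 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = A @ X0 - Y0 @ B                              # consistent by construction

Ai, Bi = np.linalg.pinv(A), np.linalg.pinv(B)    # pinv is a (1,2)-inverse
I = np.eye(n)

# solvability criterion of Theorem 45
assert np.allclose((I - A @ Ai) @ C @ (I - Bi @ B), 0)

# the stated solution pair for AX - YB = C
X = Ai @ C
Y = -(I - A @ Ai) @ C @ Bi
assert np.allclose(A @ X - Y @ B, C)
```

Indeed, AX − YB = AA^{(1,2)}C + (I − AA^{(1,2)})CB^{(1,2)}B = C once the criterion holds.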
For A , B ∈ L(H) , we say that the operator pair ( A , B ) has the generalized Fuglede-Putnam property [2,148] if
A X = Y B for X , Y ∈ L(H) implies A* X = Y B* .
An et al. then presented an interesting connection between the generalized orthogonality and the solvability of the operator Eq. (1) in H .
Theorem 46. 
[2], Theorem 2.9 Let A, B, and  C L ( H ) .
(1)
If the operator Eq. (1) is consistent, then there exist invertible operators U , V ∈ L(H ⊕ H) such that
U [ A 0 ; 0 B ] = [ A C ; 0 B ] V .
(2)
Suppose that ( B , A ) and ( B , B ) satisfy the generalized Fuglede-Putnam property.
If there exist invertible operators T , S ∈ L(H ⊕ H) such that
T [ A 0 ; 0 B ] = [ A C ; 0 B ] S ,
and the ( 2 , 2 ) -entry of S T * is invertible, then the operator Eq. (1) is consistent.
Example 1. 
Note that Olshevsky in [209], Section 2 designed an example to show that RET does not hold in infinite-dimensional spaces. Indeed, let H be an infinite-dimensional separable Hilbert space with the orthonormal basis { e_i }_{i=1}^∞ . Define the operators A , C ∈ L(H) as
A e_{3k+1} = 0 , A e_{3k+2} = 0 , A e_{3k+3} = e_{3k+2} ( k = 0, 1, 2, … ) , C e_1 = e_1 , and C e_i = 0 ( i ≠ 1 ) .
Put B = A . Let
E = [ A 0 ; 0 B ] and F = [ A C ; 0 B ] .
Then, one can observe that the operator E has only the eigenvalue λ_0 = 0. Corresponding to this eigenvalue, there are countably many Jordan chains of lengths 1 and 2. Additionally, the vectors of these chains form an orthonormal basis of H ⊕ H. So, F has the same properties. Thus, E and F are equivalent. On the other hand, the operator Eq. (1) is not solvable. In fact, assume X and Y satisfy Eq. (1). Then, A X e_1 = e_1, which contradicts e_1 ∉ R(A).
Inspired by Bhatia’s characterization of the unique solution of the Sylvester equation, i.e., [13], Theorem VII.2.3, An et al. further proposed an integral expression for the solution of the operator Eq. (1) under specific conditions. For A ∈ L(H), σ(A) denotes the spectrum of A.
Theorem 47. 
[2], Theorem 2.17 Let A, B, and  C L ( H ) .
(1)
If the spectra of A and B are contained in the open right half-plane and the open left half-plane, respectively, then the operator Eq. (1) has the solution pair
X = ∫_0^∞ e^{−2tA} C d t and Y = ∫_0^∞ C e^{2tB} d t .
(2)
Suppose that A and B are Hermitian operators such that
σ ( A ) σ ( B ) = and α + 1 2 = β ,
where α and β are eigenvalues of A and B, respectively. Assume that for an absolutely integrable function f defined on R , its Fourier transform f ^ ( s ) satisfies
f̂(s) = 1/s ,
where s ∈ σ(A) − σ(B). Then, the operator Eq. (1) has the solution pair
X = e i t A C f ( t ) d t and Y = C e i t B f ( t ) d t .
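The integral formulas in part (1) can be tested numerically. The sketch below is illustrative only: it assumes the signs e^{−2tA}, e^{2tB} as reconstructed above, takes diagonal A and B so that the matrix exponentials reduce to entrywise scalar exponentials, and truncates the integrals at a point of our choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = np.diag(rng.uniform(0.5, 2.0, n))    # sigma(A) in the open right half-plane
B = np.diag(-rng.uniform(0.5, 2.0, n))   # sigma(B) in the open left half-plane
C = rng.standard_normal((n, n))

# trapezoidal approximation of the integrals on [0, 40] (tails are ~ e^{-40})
t = np.linspace(0.0, 40.0, 200001)
dt = t[1] - t[0]
trap = lambda F: dt * (F[0] / 2 + F[1:-1].sum(axis=0) + F[-1] / 2)

expA = np.exp(-2.0 * np.outer(t, np.diag(A)))   # diagonals of e^{-2tA}
expB = np.exp(2.0 * np.outer(t, np.diag(B)))    # diagonals of e^{2tB}

X = np.diag(trap(expA)) @ C    # X = int_0^inf e^{-2tA} C dt  ( = (2A)^{-1} C )
Y = C @ np.diag(trap(expB))    # Y = int_0^inf C e^{2tB} dt   ( = -C (2B)^{-1} )
assert np.allclose(A @ X - Y @ B, C, atol=1e-3)
```

For these diagonal matrices the integrals evaluate in closed form to (2A)^{−1}C and −C(2B)^{−1}, so AX − YB = C/2 + C/2 = C.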

6.5. Tensor Equations

From the end of the 19th century to the present, the understanding of tensors in multilinear algebra and physics has essentially developed along three lines: as multi-indexed objects satisfying certain transformation rules, as multilinear maps, and as elements of tensor products of vector spaces (see [185] and references therein). On the other hand, the direct interpretation of "tensors" as multidimensional arrays (or hypermatrices) has also been widely accepted by many scholars (see [21,56,217] and references therein). Following this convention, the tensors in this paper are understood as multidimensional arrays.
Definition 14. 
Let F be a ring. An N-order I_1 × ⋯ × I_N-dimensional tensor A over F is defined as a multidimensional array with I_1 I_2 ⋯ I_N entries, i.e.,
A = a i 1 i N 1 i j I j ( j = 1 , , N ) ,
where a i 1 i N F for 1 i j I j and j = 1 , , N . Moreover, denote
A i 1 i N = a i 1 i N .
The set of all N-order I 1 × × I N -dimensional tensors over F is denoted by F I 1 × × I N .
Tensor theory has been effectively applied in diverse fields: image processing [313], handwritten digit classification [229], hypergraphs [217], extreme learning machines [134], signal processing and machine learning [237], quantum physics and mechanics [219], etc. In addition, the review article [160] by Kolda and Bader introduced the theory of tensor decomposition and its applications in psychometrics, chemometrics, numerical linear algebra, computer vision, numerical analysis, neuroscience, and so on.
In the 2017 preprint [120], He et al. first discussed SVD and the MP inverse of quaternion tensors under the Einstein product, establishing a fundamental framework for solving quaternion tensor equations under the Einstein product.
Definition 15. 
[65] Let
A = a i 1 i N j 1 j N H I 1 × × I N × J 1 × × J N and B = b j 1 j N k 1 k M H J 1 × × J N × K 1 × × K M .
The Einstein product of A and B is defined as
A * N B = c i 1 i N k 1 k M H I 1 × × I N × K 1 × × K M ,
where
c i 1 i N k 1 k M = j 1 j N a i 1 i N j 1 j N b j 1 j N k 1 k M ,
for 1 i j I j , j = 1 , , N , 1 k t K t , and  t = 1 , , M .
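Over the reals, the Einstein product is exactly a contraction of the trailing N indices of A against the leading N indices of B, which numpy's tensordot performs directly. A minimal sketch (real tensors are used for illustration in place of quaternion ones; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
# A: I1 x I2 x J1 x J2 and B: J1 x J2 x K1, i.e. N = 2 and M = 1
A = rng.standard_normal((2, 3, 4, 5))
B = rng.standard_normal((4, 5, 6))

# Einstein product A *_2 B: contract the trailing (J1, J2) axes of A
# against the leading (J1, J2) axes of B
C = np.tensordot(A, B, axes=2)
assert C.shape == (2, 3, 6)

# entrywise check against the defining sum
c = sum(A[1, 2, j1, j2] * B[j1, j2, 4] for j1 in range(4) for j2 in range(5))
assert np.isclose(C[1, 2, 4], c)
```

For quaternion tensors the same index bookkeeping applies, but the entrywise products must use the noncommutative quaternion multiplication.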
The conjugate transpose A * of A = a i 1 i N j 1 j M H I 1 × × I N × J 1 × × J M is
A * = b j 1 j M i 1 i N H J 1 × × J M × I 1 × × I N with b j 1 j M i 1 i N = a ¯ i 1 i N j 1 j M .
The MP inverse [250] of A H I 1 × × I N × J 1 × × J N is A H J 1 × × J N × I 1 × × I N satisfying
A *_N A^† *_N A = A ,  A^† *_N A *_N A^† = A^† ,  ( A *_N A^† )^* = A *_N A^† ,  ( A^† *_N A )^* = A^† *_N A .
Moreover, let the unit (or identity) tensor be
I I N = e i 1 i N j 1 j N H I 1 × × I N × I 1 × × I N ,
where all diagonal entries e_{i_1 … i_N i_1 … i_N} are 1 and all off-diagonal entries are 0. Define
L_A = I − A^† *_N A and R_A = I − A *_N A^† ,
where I denotes the unit tensor with appropriate dimensions.
Theorem 48. 
[120], Corollary 5.3 Let
A H I 1 × × I N × J 1 × × J N , B H H 1 × × H M × L 1 × × L M , C H I 1 × × I N × L 1 × × L M .
Then, Eq. (5) over quaternion tensors under the Einstein product, i.e.,
A * N X + Y * M B = C
has a solution pair
X H J 1 × × J N × L 1 × × L M and Y H I 1 × × I N × H 1 × × H M
if and only if
R A * N C * M L B = 0 ,
in which case,
X = A^† *_N C − U_1 *_M B + L_A *_N U_2 , Y = R_A *_N C *_M B^† + A *_N U_1 + U_3 *_M R_B ,
where U 1 , U 2 , and  U 3 are arbitrary tensors over H with appropriate dimensions.
Remark 46. 
(1)
Theorem 48 is a direct corollary of [120], Theorem 5.1, which establishes the solvability conditions and the general solution for the following quaternion tensor equation:
A * N X * M D + E * N Y * M B = C ,
where X and Y are unknown and other tensors are given over H .
(2)
Inspired by the transformation between tensors and matrices over R (see [16], Definition 2.8), He et al. [117,120] defined an analogous transformation over H , i.e., the transformation f is a map defined as
f : H^{I_1 × ⋯ × I_N × J_1 × ⋯ × J_N} → H^{(I_1 I_2 ⋯ I_N) × (J_1 J_2 ⋯ J_N)} ,  A ∈ H^{I_1 × ⋯ × I_N × J_1 × ⋯ × J_N} ↦ A = f(A) ∈ H^{(I_1 I_2 ⋯ I_N) × (J_1 J_2 ⋯ J_N)} ,
where the components of A are given by
( A )_{i_1 … i_N j_1 … j_N} = f(A)_{ i_1 + ∑_{k=2}^{N} (i_k − 1) ∏_{s=1}^{k−1} I_s ,  j_1 + ∑_{k=2}^{N} (j_k − 1) ∏_{s=1}^{k−1} J_s } .
[120], Lemma 2.2 shows that the transformation f is a bijection satisfying
f ( A + B ) = f ( A ) + f ( B ) and f ( A * N C ) = f ( A ) f ( C )
for A , B ∈ H^{I_1 × ⋯ × I_N × J_1 × ⋯ × J_N} and C ∈ H^{J_1 × ⋯ × J_N × L_1 × ⋯ × L_N} . The transformation f ingeniously bridges quaternion tensors under the Einstein product and quaternion matrices under the ordinary product. By virtue of this isomorphism property, f serves as a powerful tool for studying problems related to quaternion tensors under the Einstein product.
(3)
The work [120] on quaternion tensor equations has profoundly influenced subsequent research on tensor equations over H (see [112,199,220,274,275,320,321]).
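The flattening map f of Remark 46(2) can be imitated over R with a plain reshape: any fixed ordering applied consistently to the row-index group and to the column-index group yields the same additivity and multiplicativity, f(A *_N B) = f(A) f(B). A sketch under our simplifications (real tensors in place of quaternion ones, and numpy's row-major ordering, whereas the formula in the text uses the column-major convention):

```python
import numpy as np

rng = np.random.default_rng(3)
I1, I2, J1, J2, K1, K2 = 2, 3, 4, 5, 3, 2
A = rng.standard_normal((I1, I2, J1, J2))
B = rng.standard_normal((J1, J2, K1, K2))

C = np.tensordot(A, B, axes=2)          # Einstein product A *_2 B

# flatten each index group with one fixed ordering (numpy's row-major here)
fA = A.reshape(I1 * I2, J1 * J2)
fB = B.reshape(J1 * J2, K1 * K2)
fC = C.reshape(I1 * I2, K1 * K2)

# multiplicativity: f(A *_N B) = f(A) f(B); additivity is immediate
assert np.allclose(fC, fA @ fB)
```

The key point is only that the flattening of A's column group matches the flattening of B's row group; this is exactly why f turns Einstein products into ordinary matrix products.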
Subsequently, Wang et al. [282] discussed the minimum-norm least-squares solution of the quaternion tensor equation of the form (65), i.e.,
A * N X + Y * N B = D
with unknowns X and Y . We now introduce some notation.
Let A = [ a i 1 i N j 1 j M ] H I 1 × × I N × J 1 × × J M . The transpose A T of A is
A T = [ b j 1 j M i 1 i N ] H J 1 × × J M × I 1 × × I N ,
where b j 1 j M i 1 i N = a i 1 i N j 1 j M , and  the conjugate A ¯ of A is
A ¯ = [ a ¯ i 1 i N j 1 j M ] H I 1 × × I N × J 1 × × J M .
The symbol
A ( i 1 i N | : ) = [ a i 1 i N : : ] H J 1 × × J M
stands for a subblock of A , and Vec ( A ) is a new tensor obtained by lining up all subtensors in a column, where the t-th subblock of Vec ( A ) is A ( i_1 … i_N | : ) for
t = i N + K = 1 N 1 ( i K 1 ) L = K + 1 N I L .
Since A = A 1 + A 2 j for A 1 , A 2 C I 1 × × I N × J 1 × × J M , the complex representation tensor of A is defined as
f ( A ) = A 1 A 2 A ¯ 2 A ¯ 1 C 2 I 1 × × 2 I N × 2 J 1 × × 2 J M .
Let Θ A = [ A 1 , A 2 ] , A = ( Re A 1 , Im A 1 , Re A 2 , Im A 2 ) ,
Vec ( A ) = Vec ( Re A 1 ) Vec ( Im A 1 ) Vec ( Re A 2 ) Vec ( Im A 2 ) , and K J M = I i I J M 0 0 0 0 I J M i I J M I J M i I J M 0 0 0 0 I J M i I J M ,
where i is the imaginary unit, i.e., i^2 = −1 .
Let A R I 1 × × I N × J 1 × × J N . The Frobenius norm · F of A is
A F = i 1 i N j 1 j N a i 1 i N j 1 j N 2 1 / 2 .
For B = B 1 + B 2 j H J 1 × × J N × K 1 × × K M , define
A f ( B ) = A B 1 A B 2 A ( B ¯ 2 ) A B ¯ 1 ,
where ⊗ denotes the Kronecker product of two tensors. The inverse of A ∈ C^{I_1 × ⋯ × I_N × I_1 × ⋯ × I_N} is the tensor A^{−1} ∈ C^{I_1 × ⋯ × I_N × I_1 × ⋯ × I_N} satisfying
A * N A 1 = A 1 * N A = I .
Theorem 49. 
[282], Corollary 3.4 Let A , B , D H J 1 × × J N × J 1 × × J N , and  let
H L 1 = X , Y
be the set of all X , Y H J 1 × × J N × J 1 × × J N such that
A * N X + Y * N B D F 2 = min X 1 , Y 1 H I 1 × × I N × J 1 × × J N A * N X 1 + Y 1 * N B D F 2 .
Denote A = A 1 + A 2 j . Put
P 01 = A 1 f ( I J N ) T A 2 f ( I J N j ) * * N K J N , Q 01 = I J N f ( B ) T 0 * N K J N , T 11 = Re P 01 Re Q 01 , T 12 = Im P 01 Im Q 01 , E 1 = Vec ( Re Θ D ) Vec ( Im Θ D ) , R 1 = I T 11 * N T 11 * N T 12 T , H 1 = R 1 + I R 1 * N R 1 * N Z 1 * N T 12 * N T 11 * N T 11 T * N I T 12 T * N R 1 , Z 1 = I + I R 1 * N R 1 * N T 12 * N T 11 * N T 11 T * N T 12 T * N I R 1 * N R 1 1 .
(1)
Then,
H L 1 = { [ X , Y ] | Vec ( X ) Vec ( Y ) = T 11 H 1 T * N T 12 * N T 11 H 1 T * N E 1 + I T 11 * N T 11 R 1 * N R 1 * N W 1 } ,
where W 1 is arbitrary with appropriate dimensions.
(2)
If [ X l 1 , Y l 1 ] H L 1 satisfies
[ X l 1 , Y l 1 ] F 2 = min [ X , Y ] H L 1 X F 2 + Y F 2 ,
then [ X l 1 , Y l 1 ] H L 1 is unique and
Vec ( X l 1 ) Vec ( Y l 1 ) = T 11 H 1 T * N T 12 * N T 11 H 1 T * N E 1 .
Remark 47. 
Recently, some scholars have extended quaternion tensor equations under the Einstein product to different settings. For instance, Jia and Wang [140] investigated split quaternion tensor equations, while Yang et al. [329] explored dual split quaternion tensor equations. On the other hand, tensor theory encompasses a variety of product operations, including the Einstein product [65], k-mode product [217], contracted product [5], T-product [158], Qt-product [221], general product [233], cosine transform product (c-product) [156], and M-product [145,157]. Combined with Remark 43, the investigation of tensor equations over diverse quaternion algebras under various tensor products reveals significant untapped research potential.

6.6. Polynomial matrix equations

As described in Section 3, Roth was the first to study the solvability conditions of polynomial matrix Eq. (2) via the equivalence of two block polynomial matrices. Since then, theoretical research and practical applications regarding this polynomial matrix equation have gradually become more extensive and enriched. For instance, it has been successfully applied to multivariable linear discrete systems in stochastic control [162], the algebraic regulator problem [10], etc. Next, we mainly discuss different approaches to studying this polynomial matrix equation.

6.6.1. By the divisibility of polynomials

Let F be a field. Cheng and Pearson [33], in their research on the regulator problem with internal stability, provided an equivalent characterization of the solvability of the polynomial matrix equation
B X + Y D = P
with given polynomial matrices B ∈ F^{p×n}[λ], D ∈ F^{p×p}[λ], and P ∈ F^{p×p}[λ], by the divisibility of a series of polynomials. Assume that rank(B) = r and rank(D) = p. By the Smith normal form theorem for polynomial matrices [33], there exist unimodular matrices M_b , N_b , M_d , and N_d such that
M b B N b = B d = B 0 0 0 0 and M d D N d = diag ( d 1 , d 2 , , d p ) ,
where B_0 = diag( b_1 , b_2 , … , b_r ), b_1 | b_2 | ⋯ | b_r ≠ 0, and d_1 | d_2 | ⋯ | d_p ≠ 0. Left-multiplying (66) by M_b and right-multiplying it by N_d yields
B_d X′ + Y′ D_d = P′ ,
where X′ = N_b^{−1} X N_d , Y′ = M_b Y M_d^{−1} , and P′ = M_b P N_d . For a , b ∈ F[λ] , a | b denotes that a divides b.
Theorem 50. 
[33], Lemma 6 Let B, D, and P be given in (66) with rank(B) = r and rank(D) = p. Let b_1 , b_2 , … , b_r and d_1 , d_2 , … , d_p be given in (67), and let P′ = [ p′_{ij} ] be given in (68). Then, Eq. (66) has a solution pair ( X , Y ) if and only if
(1)
g_{ij} | p′_{ij} for i = 1, …, r and j = 1, …, p, and
(2)
d_j | p′_{ij} if p > r, for i = r + 1, …, p and j = 1, …, p,
where g_{ij} is the monic greatest common divisor of b_i and d_j .
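The scalar core of Theorem 50 is the classical Bézout fact that b x + y d = p is solvable in F[λ] if and only if gcd(b, d) divides p. A sympy sketch (the particular polynomials are made up for illustration):

```python
import sympy as sp

lam = sp.symbols('lambda')

def solvable(b, d, p):
    # b*x + y*d = p has polynomial solutions x, y iff gcd(b, d) divides p
    return sp.rem(p, sp.gcd(b, d, lam), lam) == 0

b = (lam - 1)**2 * (lam + 2)
d = (lam - 1) * (lam + 3)
assert sp.gcd(b, d, lam) == lam - 1        # the monic gcd g

assert solvable(b, d, (lam - 1) * (lam**2 + 1))   # divisible by g
assert not solvable(b, d, lam + 5)                # not divisible by g

# a Bezout certificate u*b + v*d = g from the extended Euclidean algorithm
u, v, g = sp.gcdex(b, d, lam)
assert sp.expand(u * b + v * d - g) == 0
```

The extended Euclidean step is also how one constructs an explicit solution pair entry by entry once the divisibility conditions of Theorem 50 hold.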
Remark 48. 
Notably, in [33], Theorem 2, Cheng and Pearson equivalently transformed the solvability of a restricted regulator problem with internal stability into the solvability of Eq. (66) in a special form.

6.6.2. By skew-prime polynomial matrices

Let F be a field. In 1978, Wolovich [301] proposed a new approach to studying the polynomial matrix equation
A ( λ ) X ( λ ) + Y ( λ ) B ( λ ) = C ( λ )
with given A ( λ ) F p × m [ λ ] , B ( λ ) F q × t [ λ ] , and  C ( λ ) F p × t [ λ ] , based on skew-prime polynomial matrices.
Definition 16. 
[300,301] Let A ( λ ) F p × m [ λ ] and B ( λ ) F q × p [ λ ] with q + m > p . Assume that there exist M ( λ ) F m × p [ λ ] and N ( λ ) F p × q [ λ ] such that
A ( λ ) M ( λ ) + N ( λ ) B ( λ ) = I p .
Then A ( λ ) and B ( λ ) are called externally skew prime (or B ( λ ) and A ( λ ) are called internally skew prime). Moreover, A ( λ ) and N ( λ ) are called relatively left prime while M ( λ ) and B ( λ ) are called relatively right prime.
Suppose that A ( λ ) is nonsingular with p = m. Then A^{−1}(λ) C(λ) can be factored in dual prime form:
A^{−1}(λ) C(λ) = C̄(λ) Ā^{−1}(λ) ,
where C̄(λ) ∈ F^{p×t}[λ] and Ā(λ) ∈ F^{t×t}[λ] are relatively right prime.
Theorem 51. 
[301], Theorem 3 and Corollary 3 Let A(λ) ∈ F^{p×p}[λ] be nonsingular and let Ā(λ) be given in (70).
(1)
If Ā(λ) and B(λ) are externally skew prime, then Eq. (69) is consistent.
(2)
Suppose that A(λ) and C(λ) are relatively left prime. Then, Eq. (69) is consistent if and only if Ā(λ) and B(λ) are externally skew prime.
Remark 49. 
Note that Wolovich proved the sufficiency statement, item (1) of Theorem 51, via a constructive method, which yields a procedure for finding a solution pair of Eq. (69). Moreover, when C(λ) = I, all solutions of Eq. (69) are characterized in [301], Section 5, and this characterization is further used to obtain the unique solution of Eq. (69).

6.6.3. By the realization of matrix fraction descriptions

Let F be a field. For  given Q F p × q [ λ ] , R F m × t [ λ ] , and  Φ F p × t [ λ ] , consider the polynomial matrix equation
X R + Q Y = Φ ,
where X F p × m [ λ ] and Y F q × t [ λ ] are unknown. According to the realization of matrix fraction descriptions presented in [82], Emre and Silverman [66] transformed Eq. (71) into a set of linear matrix equations when Q is nonsingular.
Let F^q[λ] and F^q(λ) be the sets of all q-tuples of polynomials in λ with coefficients in F and of all q-tuples of rational functions in λ over F, respectively. Assume that Q is nonsingular with p = q. Let
F_Q = { x ∈ F^p[λ] : Q^{−1} x is strictly proper } .
For X 1 F p × m [ λ ] satisfying that Q 1 X 1 is strictly proper, define the following F -linear maps:
G : F m F Q , u X 1 u for u F m , π : F p ( λ ) F p ( λ ) , q strictly proper part of q , π Q : F p [ λ ] F p [ λ ] , x Q π ( Q 1 x ) , F : F Q F Q , x π Q ( z x ) , H : F Q F p , x ( Q 1 x ) 1 ,
where ( Q 1 x ) 1 is the coefficient of λ 1 in the formal power series of Q 1 x in λ 1 . For Z = Q 1 X 1 , we call Σ = ( F , G , H ) the Q-realization of Z [82].
Let S F p × n [ λ ] be such that its columns are a basis of F Q . Let ( F , G 1 , H ) be the Q-realization of Q 1 S . Let F ^ , G ^ 1 , and  H ^ denote the matrix representations of F, G 1 , and H, respectively, with respect to the canonical bases of F m and F p , and  the columns of S serving as a basis of F Q . Put
R = j = 0 r u j λ j ,
where u j F m × p . Define Φ ^ F n × p uniquely by π Q ( Φ ) = S Φ ^ , and for the unique polynomial matrix Φ 1 , express Φ as
Φ = Q Φ 1 + S Φ ^ .
Moreover, let the linear equations be as follows:
j = 0 r F ^ j G ^ u j = Φ ^ ,
where G ^ F n × m are unknown.
Theorem 52. 
[66], Theorem 2.5 Let R, Q, and Φ be given in (71). Suppose that Q is nonsingular with p = q . Denote
Ē( Q , R ) = { ( X_1 , Y_1 ) | X_1 R + Q Y_1 = Φ and Q^{−1} X_1 is strictly proper } .
The following are equivalent:
(1)
( X 1 , Y 1 ) E ¯ ( Q , R ) ;
(2)
Eq. (73) has a solution G ^ such that
X_1 = S Ĝ and Y_1 = Φ_1 − Q_p ,
where Φ_1 satisfies (72), and Q_p is the polynomial part of Q^{−1} X_1 R .
Remark 50. 
(1)
Under the hypotheses of Theorem 52, let
E ( Q , R ) = { ( X , Y ) | X R + Q Y = Φ } .
In terms of [66], Lemma 2.2, Emre and Silverman have shown that
E( Q , R ) = { ( X_1 , Y_1 ) + ( X̄ , Ȳ ) | ( X_1 , Y_1 ) ∈ Ē( Q , R ) and ( X̄ , Ȳ ) ∈ H̄( Q , R ) } ,
where H̄( Q , R ) = { ( Q Q_1 , −Q_1 R ) | Q_1 is an arbitrary polynomial matrix } . This implies that to characterize E( Q , R ), it is sufficient to characterize Ē( Q , R ).
(2)
In [66], Section 3, Eq. (71) is further generalized to the case where Q is a general polynomial matrix. In fact, for  Q F p × q [ λ ] , there exist unimodular polynomial matrices M 1 and M 2 such that
M 1 Q M 2 = Q ^ 0 0 0 ,
where Q̂ is a nonsingular polynomial matrix. Let
X ^ = M 1 X = X ^ 1 X ^ 2 , Y ^ = M 2 1 Y = Y ^ 1 Y ^ 2 , M 1 Φ = Φ ˜ 1 Φ ˜ 2 .
Then,
Eq. (71) ⟺ X̂ R + [ Q̂ 0 ; 0 0 ] [ Ŷ_1 ; Ŷ_2 ] = [ Φ̃_1 ; Φ̃_2 ] ⟺ X̂_2 R = Φ̃_2 and X̂_1 R + Q̂ Ŷ_1 = Φ̃_1 . (74)
So, Theorem 52 can be applied to the second equation of (74); see [67] for the first equation of (74).

6.6.4. By the Unilateral Polynomial Matrix Equation

Let F be a field. For given A , B , C F n × n [ λ ] , consider the following polynomial matrix equation (also called the bilateral polynomial matrix equation):
A X + Y B = C ,
where X , Y F n × n [ λ ] are unknown. Żak [336] proposed an algorithm for finding the unique solution pair ( X , Y ) of Eq. (75). In fact, let
A = i = 0 N λ i A i and B = j = 0 M λ j B j .
For C = [ c i j ] n × n F n × n [ λ ] , denote
vec r ( C ) = c 11 c 1 n c 21 c 2 n c n 1 c n n T .
Using the Kronecker product, Eq. (75) can be transformed to the following unilateral polynomial matrix equation:
A X + B Y = C ,
where X = vec r ( X ) , Y = vec r ( Y ) , C = vec r ( C ) ,
A = ∑_{i=0}^{N} λ^i ( A_i ⊗ I_n ) , and B = ∑_{j=0}^{M} λ^j ( I_n ⊗ B_j^T ) .
Let
X = ∑_{i=0}^{M_1 − 1} λ^i X_i and Y = ∑_{i=0}^{N_1 − 1} λ^i Y_i .
Denote
A i = A i I n and B j = I n B j T ,
where i = 0, 1, …, N and j = 0, 1, …, M . Then, by comparing coefficients of like powers of λ, (76) can be rewritten as
B 0 A 0 B 1 B 0 A 1 A 0 B 1 B 0 A 1 A 0 B M B 1 A N A 1 B M A N B M A N Y 0 Y N 1 1 X 0 X M 1 1 = C 0 C 1 . N 1 + M 1 blocks
Assume, without loss of generality, that B_0 is nonsingular; this implies that B_0 = I_n ⊗ B_0^T is also nonsingular. As shown by Feinstein and Bar-Ness in [78], performing a series of elementary row operations on Eq. (77) yields the following form:
I n I n L 0 L 1 L 0 L 0 0 L y x = p 1 p 2 .
Therefore, a  necessary and sufficient condition for the solvability of Eq. (75) is
rank [ L  p_2 ] = rank L ,
in which case, we can obtain x by solving L x = p_2 and then compute y recursively. Furthermore, the upper bound on the degree of Y in (76) is also given as follows:
Theorem 53. 
[336] For A and B given in (76), assume that
(1)
A and B are relatively left prime;
(2)
B is nonsingular and satisfies that B 1 is strictly proper;
(3)
A 1 ( λ ) B 1 1 ( λ ) is the right coprime factorization of B 1 A , where B 1 ( λ ) is row reduced.
If Eq. (76) is consistent, then it has a solution pair X ( λ ) , Y ( λ ) such that
deg r i X ( λ ) < deg r i B 1 ( λ ) and deg Y ( λ ) < deg A 1 ( λ ) ,
where deg r i denotes the degree of the i-th row.
Remark 51. 
Żak [336] also noted that the number of equations in (78) can be further reduced by using additional information about A and B given in (75). For instance, if a row of B has degree zero, then the corresponding row of X is identically zero, so the relevant column and row of L given in (78) can be discarded.
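The Kronecker-product vectorization used to pass from (75) to (76) can be illustrated in the degree-zero (constant-coefficient) case, where vec_r(AX) = (A ⊗ I_n) vec_r(X) and vec_r(YB) = (I_n ⊗ B^T) vec_r(Y). A numpy sketch (the sizes and matrices are arbitrary, and lstsq stands in for the elementary row operations of [78]):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X0, Y0 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = A @ X0 + Y0 @ B               # a consistent right-hand side

vecr = lambda M: M.reshape(-1)    # row-wise vectorization vec_r

# AX + YB = C becomes (A kron I) vec_r(X) + (I kron B^T) vec_r(Y) = vec_r(C)
I = np.eye(n)
K = np.hstack([np.kron(A, I), np.kron(I, B.T)])
sol, *_ = np.linalg.lstsq(K, vecr(C), rcond=None)
X, Y = sol[:n * n].reshape(n, n), sol[n * n:].reshape(n, n)
assert np.allclose(A @ X + Y @ B, C)
```

In the polynomial case the same identities are applied coefficient by coefficient, which is exactly how the block system (77) arises.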

6.6.5. By the equivalence of block polynomial matrices

Let F be a field. Building upon Theorem 1, Wimmer [296] investigated the constant solutions of the following polynomial matrix equation:
A ( λ ) X Y B ( λ ) = C ( λ ) ,
where A ( λ ) F m × n [ λ ] , B ( λ ) F p × k [ λ ] , and  C ( λ ) F m × k [ λ ] are given.
Theorem 54. 
[296], Lemma 2.1 Eq. (79) has a constant solution pair
X F n × k and Y F m × p
if and only if there exist two nonsingular constant matrices
R F ( n + k ) × ( n + k ) and S F ( m + p ) × ( m + p )
such that
A ( λ ) 0 0 B ( λ ) R = S A ( λ ) C ( λ ) 0 B ( λ ) .
Remark 52. 
Theorem 54 also appeared as a lemma in the earlier article [295], Lemma 3, though no complete proof was provided.
Remark 53. 
Let A i F m × n , B i F p × k , and  C i F m × k for i = 1 , 2 . Using Theorem 54, Wimmer [296], Theorem 1.1 showed that the system of matrix equations
A 1 X Y B 1 = C 1 , A 2 X Y B 2 = C 2 ,
has a solution pair X F n × k and Y F m × p if and only if there exist two nonsingular matrices R F ( n + k ) × ( n + k ) and S F ( m + p ) × ( m + p ) such that
S A 1 C 1 0 B 1 λ A 2 C 2 0 B 2 = A 1 0 0 B 1 λ A 2 0 0 B 2 R .
Obviously, this result is also a generalization of RET. A similar research idea is also introduced in Subsection 8.1.

6.6.6. By Jordan Systems of Polynomial Matrices

Let A ∈ C^{m×m}[λ], B ∈ C^{n×n}[λ], and C ∈ C^{m×n}[λ] be such that det(A) ≢ 0 and det(B) ≢ 0. According to Jordan systems of polynomial matrices, Wimmer [297] discussed the solvability conditions for the polynomial matrix equation
A X − Y B = C ,
where X , Y ∈ C^{m×n}[λ] are unknown. Let
σ(B) = { λ ∈ C : det( B(λ) ) = 0 } ,
and let the elementary divisors corresponding to λ_1 ∈ σ(B) be
(λ − λ_1)^{l_1} , … , (λ − λ_1)^{l_q} ,
where l_1 ≥ ⋯ ≥ l_q ≥ 1 and l = l_1 + ⋯ + l_q satisfies
det(B) = (λ − λ_1)^l c(λ) with c(λ_1) ≠ 0 .
For r ∈ Z^+, let N_r denote the r × r nilpotent Jordan block with ones on the superdiagonal and zeros elsewhere. Let the Jordan matrix of B associated to λ_1 be
J = diag( λ_1 I + N_{l_1} , … , λ_1 I + N_{l_q} ) .
Then, there exist H ∈ C^{n×l} and Ĥ ∈ C^{n×l}[λ] such that
B H = Ĥ ( λ I − J ) ,
where the columns of H ^ are C -linearly independent (see [294]). Thus, H is called a right Jordan system of B corresponding to λ 1 . Then, G is called a left Jordan system of A corresponding to λ 1 σ ( A ) if G T is a right Jordan system of A T .
For λ_1 ∈ σ(A) ∩ σ(B), let G and H be left and right Jordan systems of A and B corresponding to λ_1 , respectively. Then, ( G , H ) is called a pair of Jordan systems of ( A , B ) corresponding to λ_1 . According to (81), partition
H = [ H_1  H_2  ⋯  H_q ]
with H_j = [ h_{j0} , … , h_{j, l_j − 1} ] for j = 1, …, q. Similarly, assume that the elementary divisors belonging to λ_1 ∈ σ(A) are
(λ − λ_1)^{k_1} , … , (λ − λ_1)^{k_p} ,
where k_1 ≥ ⋯ ≥ k_p ≥ 1. Partition
G = [ G_1  ⋯  G_p ] with G_i = [ g_{i0}  ⋯  g_{i, k_i − 1} ] for i = 1, …, p .
The k-th derivative of C(λ) is denoted by C^{(k)}(λ). We say that ( G , H ) has the property ( Σ ) if
∑_{ν + σ + τ = r_{ij}} g_{iν} ( C^{(σ)}(λ_1) / σ! ) h_{jτ} = 0 ,
where r_{ij} = 0, 1, …, min( k_i , l_j ) − 1 , i = 1, …, p, and j = 1, …, q .
Theorem 55. 
[297], Theorem 1.1 Let A, B, and C be given in (80). Then, the following are equivalent:
(1)
Eq. (80) is consistent;
(2)
There exists a pair of Jordan systems ( G , H ) of ( A , B ) with property ( Σ ) for each λ ∈ σ(A) ∩ σ(B) ;
(3)
All pairs of Jordan systems of ( A , B ) have property ( Σ ) for each λ ∈ σ(A) ∩ σ(B) .
Remark 54. 
Wimmer [297] pointed out that Theorem 55 can also be extended to matrices over the ring O ( G ) of complex holomorphic functions in a domain G .

6.6.7. By linear matrix equations

Let F be a field. Assume that
A ( λ ) = i = 0 m A i λ i , B ( λ ) = i = 0 n B i λ i , and C ( λ ) = i = 0 k C i λ i ,
where A_i , B_i , C_i ∈ F^{r×r} with A_m ≠ 0, B_n ≠ 0, and C_k ≠ 0. Consider the polynomial matrix equation:
A ( λ ) X ( λ ) + Y ( λ ) B ( λ ) = C ( λ ) ,
where X ( λ ) , Y ( λ ) F r × r [ λ ] are unknown.
Barnett [8] provided an equivalent condition for Eq. (83) to have a unique solution pair ( X ( λ ) , Y ( λ ) ) with
deg X ( λ ) < n and deg Y ( λ ) < m .
For H(λ) = ∑_{i=0}^{m} H_i λ^i with H_m ≠ 0, we say that H(λ) is regular if det( H_m ) ≠ 0 .
Theorem 56. 
[8] Let A(λ), B(λ), and C(λ) given in (82) be such that A(λ) and B(λ) are regular and deg C(λ) ≤ n + m − 1. Then, Eq. (83) has a unique solution pair ( X(λ), Y(λ) ) satisfying (84) if and only if det( A(λ) ) and det( B(λ) ) are relatively prime (i.e., their greatest common divisor is a constant independent of λ).
Remark 55. 
Note that the condition in Theorem 56, i.e.,  det ( A ( λ ) ) and det ( B ( λ ) ) are relatively prime, implies that A ( λ ) and B ( λ ) are nonsingular (see [77]).
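The coprimality condition in Theorem 56 is mechanical to test: det(A(λ)) and det(B(λ)) are relatively prime exactly when their gcd is a nonzero constant, equivalently when their resultant is nonzero. A sympy sketch (the matrices are made up for illustration):

```python
import sympy as sp

lam = sp.symbols('lambda')

# regular 2x2 polynomial matrices A(lam) and B(lam)
A = sp.Matrix([[lam + 1, 1], [0, lam + 2]])
B = sp.Matrix([[lam - 1, 0], [2, lam - 3]])

detA = sp.factor(A.det())    # (lam + 1)*(lam + 2)
detB = sp.factor(B.det())    # (lam - 1)*(lam - 3)

# relatively prime <=> gcd is a nonzero constant <=> resultant is nonzero
assert sp.degree(sp.gcd(detA, detB, lam), lam) == 0
assert sp.resultant(detA, detB, lam) != 0

# a shared root (here lam = -1) destroys coprimality
B2 = sp.Matrix([[lam + 1, 0], [0, lam - 3]])
assert sp.resultant(detA, B2.det(), lam) == 0
```

The resultant test is convenient because it avoids computing the gcd explicitly and extends verbatim to determinants of any degree.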
Subsequently, Feinstein and Bar-Ness [77] conducted further research on the solutions to Eq. (83) with (84); such special solutions are also called the minimal solutions.
Theorem 57. 
[77], Theorem II Let A(λ), B(λ), and C(λ) given in (82) be such that A(λ) and B(λ) are nonsingular and deg C(λ) ≤ n + m − 1. Assume that A(λ) (or B(λ)) is regular. Then, Eq. (83) has a unique minimal solution if and only if det(A(λ)) and det(B(λ)) are relatively prime.
Remark 56. 
In [77], Theorem III, Feinstein and Bar-Ness showed that if Eq. (83) has a minimal solution ( X ( λ ) , Y ( λ ) ) , then ( X ( λ ) , Y ( λ ) ) is not unique if and only if both A ( λ ) and B ( λ ) are not regular.
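For intuition, in the scalar case r = 1 Eq. (83) with (84) reduces to the classical Bézout-type identity a(λ)x(λ) + y(λ)b(λ) = c(λ), and equating coefficients yields a linear system whose matrix is a Sylvester resultant matrix, nonsingular exactly when gcd(a, b) is a nonzero constant, in line with Theorem 56. A minimal NumPy sketch (our own illustration, not code from [8] or [77]):

```python
import numpy as np

# Scalar (r = 1) instance of Eq. (83): a(λ)x(λ) + y(λ)b(λ) = c(λ) with
# deg x < n and deg y < m, solved by equating coefficients.
# Coefficient vectors are ordered by increasing degree.
a = np.array([1.0, 0.0, 1.0])   # a(λ) = λ² + 1   (m = 2, regular)
b = np.array([-2.0, 1.0])       # b(λ) = λ − 2    (n = 1, regular)
c = np.array([1.0, 1.0, 1.0])   # c(λ) = λ² + λ + 1, deg c ≤ m + n − 1

m, n = len(a) - 1, len(b) - 1
# Column j < n holds the coefficients of λ^j·a(λ); column n + j holds those
# of λ^j·b(λ). M is a Sylvester resultant matrix: it is nonsingular exactly
# when gcd(a, b) is a nonzero constant.
M = np.zeros((m + n, m + n))
for j in range(n):
    M[j:j + m + 1, j] = a
for j in range(m):
    M[j:j + n + 1, n + j] = b
sol = np.linalg.solve(M, c)
x, y = sol[:n], sol[n:]          # x(λ) = x₀, y(λ) = y₀ + y₁λ
# Verify a·x + y·b = c via coefficient convolution.
assert np.allclose(np.convolve(a, x) + np.convolve(y, b), c)
```

Here det(M) equals the resultant of a and b up to sign, so the unique-solvability criterion of Theorem 56 is visible directly in the linear algebra.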
Motivated by Theorem 56, Chen and Tian [22] proved that Eq. (83) can be reduced to a linear matrix equation. For A(λ) and B(λ) given in (82), let
A_R = \begin{pmatrix} 0 & 0 & \cdots & 0 & A_0 \\ I & 0 & \cdots & 0 & A_1 \\ 0 & I & \cdots & 0 & A_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I & A_{m-1} \end{pmatrix} \quad\text{and}\quad B_L = \begin{pmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I \\ B_0 & B_1 & B_2 & \cdots & B_{n-1} \end{pmatrix}.
Theorem 58. 
[22], Lemma 3.1 and Theorems 1.1 and 1.3 Let A ( λ ) , B ( λ ) , and  C ( λ ) given in (82) be such that A m = B n = I r , and  let H C ( λ ) be the set of all solutions ( X ( λ ) , Y ( λ ) ) of Eq. (83) with (84).
(1)
Let k < m + n. If Eq. (83) is solvable, then H_C(λ) ≠ ∅.
(2)
Let k < m. There exists X(λ) satisfying (X(λ), Y(λ)) ∈ H_C(λ) if and only if
A_R^n Y + ∑_{i=0}^{n−1} A_R^i Y B_i = C,
where
Y(λ) = ∑_{i=0}^{m−1} Y_i λ^i,  Y = \begin{pmatrix} Y_0 \\ \vdots \\ Y_{m-1} \end{pmatrix},  C = \begin{pmatrix} C_0 \\ \vdots \\ C_{m-1} \end{pmatrix}.
(3)
Let k < n. There exists Y(λ) satisfying (X(λ), Y(λ)) ∈ H_C(λ) if and only if
X B_L^m + ∑_{i=0}^{m−1} A_i X B_L^i = C̃,
where X(λ) = ∑_{i=0}^{n−1} X_i λ^i, X = (X_0 ⋯ X_{n−1}), and C̃ = (C_0 ⋯ C_{n−1}).
Remark 57. 
(1)
For A = [a_{ij}] ∈ F^{p×q}, let
row(A) = (a_{11} ⋯ a_{p1}, a_{12} ⋯ a_{p2}, …, a_{1q} ⋯ a_{pq})
and vec(A) = row(A)^T. Then,
Eq. (85) is equivalent to (I ⊗ A_R^n + ∑_{i=0}^{n−1} B_i^T ⊗ A_R^i) vec(Y) = vec(C), and Eq. (86) is equivalent to row(X)(I ⊗ B_L^m + ∑_{i=0}^{m−1} A_i ⊗ B_L^i) = row(C̃).
(2)
The explicit solutions to Eqs. (85) and (86) have been studied in [131,298], which also serve as a starting point of SubSection 6.7 in this paper.
(3)
Moreover, the authors of [22] mentioned that Theorem 58 still holds when the field F is extended to a commutative ring with identity.
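The Kronecker reformulations in Remark 57 rest on the standard rule vec(AXB) = (Bᵀ ⊗ A) vec(X) for the column-stacking vec defined above. A quick NumPy check of this rule (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 2))

# Column-stacking vec, matching vec(A) = row(A)^T above.
vec = lambda M: M.flatten(order="F")

# vec(AXB) = (Bᵀ ⊗ A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```

Applying this rule term by term to (85) and (86) produces exactly the coefficient matrices I ⊗ A_R^n + Σ B_i^T ⊗ A_R^i and I ⊗ B_L^m + Σ A_i ⊗ B_L^i stated in the remark.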

6.6.8. By Root Functions of Polynomial Matrices

Let L 1 n × n [ a , b ] denote the space of n × n matrix-valued functions that are Lebesgue integrable over the interval [ a , b ] . Let
B(λ) = I_n + ∫_{−ω}^{0} e^{iλt} b(t) dt,  D(λ) = I_n + ∫_{0}^{ω} e^{iλt} d(t) dt,  G(λ) = ∫_{−ω}^{ω} e^{iλt} g(t) dt,
where b ∈ L_1^{n×n}[−ω, 0], d ∈ L_1^{n×n}[0, ω], and g ∈ L_1^{n×n}[−ω, ω]. Consider the following linear entire matrix function equation
X ( λ ) B ( λ ) + D ( λ ) Y ( λ ) = G ( λ ) , λ C ,
where X and Y are unknown n × n matrix functions
X(λ) = ∫_{0}^{ω} e^{iλt} x(t) dt and Y(λ) = ∫_{−ω}^{0} e^{iλt} y(t) dt
with x ∈ L_1^{n×n}[0, ω] and y ∈ L_1^{n×n}[−ω, 0]. Gohberg [86] proposed necessary and sufficient conditions for the solvability of Eq. (87) using root functions of the coefficients.
Definition 17. 
[86,87] Let an n × n matrix function H(λ) be analytic at λ_0 ∈ C. A C^n-valued function φ is called a root function of H(λ) at λ_0 of order (at least) k ∈ Z_+ if φ is analytic at λ_0, φ(λ_0) ≠ 0, and H(λ)φ(λ) has a zero at λ_0 of order (at least) k.
Theorem 59. 
[86], Theorem 1.1 Eq. (87) is consistent if and only if for each common zero λ 0 of det ( B ( λ ) ) and det ( D ( λ ) ) , if φ is a root function of B ( λ ) at λ 0 of order p and ψ is a root function of D ( λ ) T at λ 0 of order q, then ψ ( λ ) T G ( λ ) φ ( λ ) has a zero at λ 0 of order at least min { q , p } .
Let H(λ) be an analytic r × r matrix function, and let λ_0 be a point in the domain of analyticity of H(λ). If det(H(λ_0)) = 0, then λ_0 is called an eigenvalue of H(λ). Let a C^r-vector valued function ϕ(λ) be analytic in a neighborhood of an eigenvalue λ_0 of H(λ). If ϕ(λ_0) ≠ 0 and H(λ_0)ϕ(λ_0) = 0, then ϕ(λ) is called a right root function of H(λ) at λ_0. The order (at least) k of the right root function ϕ(λ) at λ_0 is the order (at least) k of λ_0 as a zero of the analytic function H(λ)ϕ(λ). Similarly, left root functions can be defined (see [149]).
Utilizing right and left root functions of polynomial matrices, Kaashoek and Lerer [149] then presented a discrete version of Theorem 59. Let
L(λ) = ∑_{j=0}^{l} λ^j L_j,  M(λ) = ∑_{j=0}^{m} λ^j M_j,  G(λ) = ∑_{j=0}^{l+m−1} λ^j G_j,
where L j , M j , G j C r × r with L l 0 and M m 0 . Consider the following polynomial matrix equation:
X ( λ ) L ( λ ) + M ( λ ) Y ( λ ) = G ( λ ) ,
where X ( λ ) and Y ( λ ) are unknown. Define
L̂(λ) = λ^l L(λ^{−1}) = ∑_{j=0}^{l} λ^j L_{l−j},  M̂(λ) = λ^m M(λ^{−1}) = ∑_{j=0}^{m} λ^j M_{m−j},  Ḡ(λ) = λ^{l+m−1} G(λ^{−1}) = ∑_{j=0}^{l+m−1} λ^j G_{l+m−1−j}.
Theorem 60. 
[149], Theorem 1.1 For L ( λ ) and M ( λ ) given in (88), assume that both det ( L ( λ ) ) and det ( M ( λ ) ) do not vanish identically. Then, Eq. (89) has a solution pair ( X ( λ ) , Y ( λ ) ) such that
deg X(λ) ≤ m − 1 and deg Y(λ) ≤ l − 1
if and only if both of the following two conditions hold:
(1)
For each λ 0 C satisfying det ( L ( λ 0 ) ) = det ( M ( λ 0 ) ) = 0 , if f ( λ ) is a right root function of L ( λ ) at λ 0 of order s and h ( λ ) is a left root function of M ( λ ) at λ 0 of order t, then h ( λ ) G ( λ ) f ( λ ) has a zero at λ 0 of order at least min { s , t } ;
(2)
If f ( λ ) is a right root function of L ^ ( λ ) at zero of order s and h ( λ ) is a left root function of M ^ ( λ ) at zero of order t , then h ( λ ) G ¯ ( λ ) f ( λ ) has a zero of order at least min { s , t } .
Remark 58. 
Kaashoek and Lerer proved Theorem 60 by means of [149], Theorem 1.2, which is a direct corollary of [88], Theorems 3.2 and 4.1. Although this proof strategy can be regarded as a discrete version of the proof of Theorem 59, in the discrete case the common spectrum at infinity plays a crucial role, whereas in [86] there are no common root functions at infinity.

6.7. Sylvester-Polynomial-Conjugate Matrix Equations

For given matrices A, B i ( i = 1 , . . . , k ) , and C, the matrix equation
i = 0 k A i X B i = C
has been thoroughly investigated for its important role in control theory (see [129,130,131,133,298]). Based on results in [131], Wu et al. [302] defined Sylvester sums and Kronecker maps, and used them to discuss the following matrix equation over R :
∑_{i=0}^{n_1} A_i X F^i = ∑_{i=0}^{n_2} B_i R F^i,
where X is unknown and other matrices are given over R . Building on the work in [302], Wu et al. [309,310,312] established the framework of conjugate products and Sylvester-conjugate sums over C . Specifically, for  A C m × n and k N , define
A * k = A , for even k , A ¯ , for odd k .
The conjugate product [310,312] of
A ( λ ) = i = 0 m A i λ i C p × q [ λ ] and B ( λ ) = j = 0 n B j λ j C q × r [ λ ]
is defined as
A(λ) ⊛ B(λ) = ∑_{i=0}^{m} ∑_{j=0}^{n} A_i B_j^{*i} λ^{i+j}.
For A ∈ C^{n×n} and k ∈ N, define
A^{⟨k⟩} = A^{k−2⌊k/2⌋} (Ā A)^{⌊k/2⌋},
where ⌊·⌋ is the floor function (downward rounding). For Z ∈ C^{r×p}, F ∈ C^{p×p}, and
T ( λ ) = i = 0 t T i λ i C n × r [ λ ] ,
we define the Sylvester-conjugate sum [310] of T(λ) and Z with respect to F by
T(λ) ⊙_F Z = ∑_{i=0}^{t} T_i Z^{*i} F^{⟨i⟩}.
Remark 59. 
[310], Lemma 7 reveals an intriguing relationship between the conjugate product and the Sylvester-conjugate sum. Specifically, if A ( λ ) C l × q [ λ ] , B ( λ ) C q × r [ λ ] , F C p × p , and  Z C r × p , then
(A(λ) ⊛ B(λ)) ⊙_F Z = A(λ) ⊙_F (B(λ) ⊙_F Z).
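The identity in Remark 59 can be checked numerically from the definitions alone. In the sketch below (our own illustration; the helper names are ours), the alternating power of F is written as the product σ^{i−1}(F) ⋯ σ^1(F) σ^0(F), with σ the entrywise complex conjugation:

```python
import numpy as np

def star(M, k):
    """M^{*k}: M for even k, its entrywise conjugate for odd k."""
    return M if k % 2 == 0 else M.conj()

def fpow(F, k):
    """Alternating power of F: σ^{k−1}(F) ⋯ σ^1(F) σ^0(F), σ = conjugation."""
    P = np.eye(F.shape[0], dtype=complex)
    for t in range(k):
        P = star(F, t) @ P
    return P

def conj_product(Ac, Bc):
    """Conjugate product of polynomial matrices (coefficient lists):
    C_k = Σ_{i+j=k} A_i B_j^{*i}."""
    Cc = [np.zeros((Ac[0].shape[0], Bc[0].shape[1]), dtype=complex)
          for _ in range(len(Ac) + len(Bc) - 1)]
    for i, Ai in enumerate(Ac):
        for j, Bj in enumerate(Bc):
            Cc[i + j] += Ai @ star(Bj, i)
    return Cc

def sconj_sum(Tc, F, Z):
    """Sylvester-conjugate sum: Σ_i T_i Z^{*i} (alternating power of F)."""
    return sum(Ti @ star(Z, i) @ fpow(F, i) for i, Ti in enumerate(Tc))

rng = np.random.default_rng(1)
cm = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
Ac = [cm(2, 3) for _ in range(3)]   # A(λ) of degree 2 with 2×3 coefficients
Bc = [cm(3, 4) for _ in range(2)]   # B(λ) of degree 1 with 3×4 coefficients
Z, F = cm(4, 5), cm(5, 5)

lhs = sconj_sum(conj_product(Ac, Bc), F, Z)   # (A ⊛ B) acted on Z
rhs = sconj_sum(Ac, F, sconj_sum(Bc, F, Z))   # A acted on (B acted on Z)
assert np.allclose(lhs, rhs)                  # the identity of Remark 59
```

The check also makes clear why the alternating power is essential: with the plain power F^i in place of the alternating product, the two sides no longer agree.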
On this basis, Wu et al. [310] investigated the following Sylvester-polynomial-conjugate matrix equation:
∑_{i=0}^{ϕ_1} A_i X^{*i} F^{⟨i⟩} + ∑_{j=0}^{ϕ_2} B_j Y^{*j} F^{⟨j⟩} = ∑_{k=0}^{ϕ_3} C_k R^{*k} F^{⟨k⟩},
where A i C n × n ( i = 0 , 1 , , ϕ 1 ) , B j C n × r ( j = 0 , 1 , , ϕ 2 ) , C k C n × m ( k = 0 , 1 , , ϕ 3 ) , R C m × p , and  F C p × p are given, and  X C n × p and Y C r × p are unknown. Denote
A ( λ ) = i = 0 ϕ 1 A i λ i , B ( λ ) = i = 0 ϕ 2 B i λ i , C ( λ ) = i = 0 ϕ 3 C i λ i ,
where A_i, B_i, and C_i are given in (91). Thus, by conjugate products and Sylvester-conjugate sums, Eq. (91) can be directly rewritten as
A(λ) ⊙_F X + B(λ) ⊙_F Y = C(λ) ⊙_F R.
Additionally, we say that A ( λ ) C n × n [ λ ] and B ( λ ) C n × r [ λ ] are left coprime [312] if all their greatest common left divisors are unimodular, which is also equivalent to the existence of C ( λ ) C n × r [ λ ] and D ( λ ) C r × r [ λ ] such that
A ( λ ) C ( λ ) + B ( λ ) D ( λ ) = I .
Theorem 61. 
[310], Theorem 2 Assume that A ( λ ) and B ( λ ) given in (92) are left coprime. Suppose that the unimodular polynomial matrix U ( λ ) C ( n + r ) × ( n + r ) [ λ ] satisfies
A ( λ ) B ( λ ) U ( λ ) = I 0 .
Then, all solutions of Eq. (93) (or (91)) are
\begin{pmatrix} X \\ Y \end{pmatrix} = \left( U(λ) ⊛ \begin{pmatrix} C(λ) & 0 \\ 0 & I \end{pmatrix} \right) ⊙_F \begin{pmatrix} R \\ Z \end{pmatrix},
where Z ∈ C^{r×p} is arbitrary. Furthermore, if we partition
U(λ) = \begin{pmatrix} H(λ) & N(λ) \\ L(λ) & D(λ) \end{pmatrix}
with N(λ) ∈ C^{n×r}[λ] and D(λ) ∈ C^{r×r}[λ], then (94) can be rewritten as
X = (H(λ) ⊛ C(λ)) ⊙_F R + N(λ) ⊙_F Z,  Y = (L(λ) ⊛ C(λ)) ⊙_F R + D(λ) ⊙_F Z,
for arbitrary Z C r × p .
Remark 60. 
(1)
[312], Theorem 9 guarantees the existence of the polynomial matrix U ( λ ) in Theorem 61.
(2)
Taking
ϕ 1 = 0 , ϕ 2 = 1 , B 0 = 0 , B 1 = I , and ϕ 3 = 0 ,
Eq. (91) over R reduces to
A X + Y B = C ,
where A = A 0 , B = F , and  C = C 0 R . Clearly, Theorem 61 is also a generalization of RET over R .
(3)
In [310], Theorem 1, Wu et al. characterized the homogeneous case of Eq. (91) more specifically via a pair of right coprime polynomial matrices. Moreover, in [310], Remark 4, they utilized the same method to discuss a more general form of Eq. (91), i.e.,
∑_{k=1}^{θ} ∑_{i=0}^{ω_k} A_{ki} X_k^{*i} F^{⟨i⟩} = ∑_{j=0}^{c} C_j R^{*j} F^{⟨j⟩},
where X k ( k = 1 , , θ ) are unknown and others are given.
(4)
It can be observed that [310], Lemmas 11 and 12 are crucial for proving Theorem 61 and [312], Theorem 1. Meanwhile, it should be noted that [310], Lemmas 11 and 12 provide only necessary conditions for left and right coprimeness, respectively. Thus, we contend that exploring the converse problems of these two lemmas is interesting.
(5)
Eq. (91) generalizes a class of complex conjugate matrix equations (see [11,12,141,305,306,307,308]). For systematic research on complex conjugate matrix equations and their applications in discrete-time antilinear systems, refer to the monograph [304] by Wu and Zhang.
Inspired by [310], Mazurek [198] recently generalized Theorem 61 based on groupoids and vector spaces.
Theorem 62. 
[198], Theorem 2 Let M_{11}, M_{12}, M_{21}, and M_{22} be groupoids with binary operations commonly denoted by ⊕, and let V_1 and V_2 be finite-dimensional vector spaces over a field F. Assume that for any i, j, k ∈ {1, 2}, two operations
⊙ : M_{ij} × V_j → V_i and ⊛ : M_{ij} × M_{jk} → M_{ik}
are given such that
(i)
(s ⊕ t) ⊙ v = s ⊙ v + t ⊙ v for i, j ∈ {1, 2}, s, t ∈ M_{ij}, and v ∈ V_j;
(ii)
s ⊙ (ku + lv) = k(s ⊙ u) + l(s ⊙ v) for i, j ∈ {1, 2}, s ∈ M_{ij}, k, l ∈ F, and u, v ∈ V_j;
(iii)
s ⊙ (t ⊙ v) = (s ⊛ t) ⊙ v for i, j, k ∈ {1, 2}, s ∈ M_{ij}, t ∈ M_{jk}, and v ∈ V_k.
Suppose that for a ∈ M_{11} and b ∈ M_{12}, there exist p ∈ M_{11}, g ∈ M_{12}, d, q ∈ M_{21}, and h, w ∈ M_{22} such that
(iv)
((a ⊛ p) ⊕ (b ⊛ q)) ⊙ v = v for any v ∈ V_1,
(v)
((d ⊛ g) ⊕ (w ⊛ h)) ⊙ u = u for any u ∈ V_2,
(vi)
((a ⊛ g) ⊕ (b ⊛ h)) ⊙ u = 0 for any u ∈ V_2.
Then, for c ∈ V_1, all solutions of the equation
(a ⊙ x) + (b ⊙ y) = c
are
(x, y) = (p ⊙ c + g ⊙ z, q ⊙ c + h ⊙ z),
where z ∈ V_2 is arbitrary.
For a ring F with unity and a ring endomorphism σ of F , the skew polynomial ring F [ λ ; σ ] is the set of polynomials over F in the indeterminate λ with the usual addition and multiplication subject to λ a = σ ( a ) λ for any a F (see [174]). Specifically, the  multiplication in F [ λ ; σ ] is given by
i = 0 n a i λ i j = 0 m b j λ j = i = 0 n j = 0 m a i σ i ( b j ) λ i + j
for ∑_{i=0}^{n} a_i λ^i, ∑_{j=0}^{m} b_j λ^j ∈ F[λ; σ]. The set of m × n matrices over the skew polynomial ring F[λ; σ] is denoted by F^{m×n}[λ; σ]. Assume that F is also a finite-dimensional vector space over a field K, and put
M_{11} = F^{n×n}[λ; σ],  M_{12} = F^{n×m}[λ; σ],  M_{21} = F^{m×n}[λ; σ],  M_{22} = F^{m×m}[λ; σ],  V_1 = F^{n×p},  V_2 = F^{m×p},
with the usual addition (denoted by ⊕) and the skew multiplication (95) (denoted by ⊙) of polynomial matrices. Then, applying Theorem 62, Mazurek obtained the relevant result (i.e., [198], Theorem 3) for the equation
A ( λ ) X + B ( λ ) Y = C ,
where A ( λ ) F n × n [ λ ; σ ] , B ( λ ) F n × m [ λ ; σ ] , and C F n × p are given, and  X F n × p and Y F m × p are unknown.
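The multiplication rule (95) is easy to exercise on coefficient sequences. The sketch below (our own illustration) takes σ to be complex conjugation, a ring automorphism of C, and checks both the commutation rule λa = σ(a)λ and associativity:

```python
def skew_mul(a, b, sigma):
    """Multiply p(λ) = Σ a_i λ^i and q(λ) = Σ b_j λ^j in F[λ; σ]:
    since λ·f = σ(f)·λ, one has (a_i λ^i)(b_j λ^j) = a_i σ^i(b_j) λ^{i+j}.
    Coefficient lists are ordered by increasing degree."""
    c = [0j] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            t = bj
            for _ in range(i):      # apply σ^i to the right-hand coefficient
                t = sigma(t)
            c[i + j] += ai * t
    return c

conj = lambda z: z.conjugate()

# λ · a = σ(a) · λ:
assert skew_mul([0j, 1 + 0j], [2 + 3j], conj) == [0j, 2 - 3j]

# Associativity, as required of a ring multiplication:
p, q, r = [1 + 1j, 2j], [3 - 1j, 1j, 2 + 0j], [1j, 4 + 2j]
lhs = skew_mul(skew_mul(p, q, conj), r, conj)
rhs = skew_mul(p, skew_mul(q, r, conj), conj)
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```

Associativity holds precisely because σ is multiplicative; with an arbitrary map σ the same code would generally fail the second assertion.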
Remark 61. 
In [198], Proof of Theorem 1, Mazurek showed that Theorem 61 is an immediate corollary of [198], Theorem 3.
Moreover, analogous to the conjugate product of complex matrices, Wu et al. [303] further defined the j-conjugate product of quaternion matrices. The j-conjugate of A ∈ H^{m×n} is
A^{σ_j} = −jAj.
For k ∈ Z_+, inductively define A^{σ_j^k} = (A^{σ_j^{k−1}})^{σ_j} with A^{σ_j^0} = A. Then, the j-conjugate product [303] of
A ( λ ) = i = 0 m A i λ i H p × q [ λ ] and B ( λ ) = j = 0 n B j λ j H q × r [ λ ]
is defined as
A(λ) ⊛ B(λ) = ∑_{i=0}^{m} ∑_{j=0}^{n} A_i B_j^{σ_j^i} λ^{i+j}.
We say that A(λ) ∈ H^{n×n}[λ] and B(λ) ∈ H^{n×m}[λ] are ⊛-left coprime [198] if there exists a unimodular polynomial matrix U(λ) ∈ H^{(n+m)×(n+m)}[λ] such that
A ( λ ) B ( λ ) U ( λ ) = I 0 .
Let the map σ : H → H be given by σ(h) = −jhj for h ∈ H. Then, σ is an automorphism of H, and the j-conjugate product is exactly the product of matrices over H[λ; σ]. Similar to (90), Mazurek [198] defined the Sylvester-j-conjugate sum over H,
T(λ) ⊙_F V = T_0 V + ∑_{m=1}^{t} T_m σ^m(V) σ^{m−1}(F) ⋯ σ^1(F) σ^0(F)
for T ( λ ) = i = 0 t T i λ i H n × r [ λ ] , V H r × p , and  F H p × p . Applying [198], Theorem 3 to matrices over H [ λ ; σ ] yields the following result immediately.
Theorem 63. 
[198], Theorem 4 Let A ( λ ) H n × n [ λ ] and B ( λ ) H n × m [ λ ] be ⊛-left coprime. Then, there exist
P ( λ ) H n × n [ λ ] , G ( λ ) H n × m [ λ ] , D ( λ ) , Q ( λ ) H m × n [ λ ] , H ( λ ) , W ( λ ) H m × m [ λ ]
such that
A ( λ ) P ( λ ) + B ( λ ) Q ( λ ) = I n , D ( λ ) G ( λ ) + W ( λ ) H ( λ ) = I m , A ( λ ) G ( λ ) + B ( λ ) H ( λ ) = 0 .
Moreover, given F H p × p and C H n × p , the general solution of the matrix equation
A(λ) ⊙_F X + B(λ) ⊙_F Y = C
is
X = P(λ) ⊙_F C + G(λ) ⊙_F Z,  Y = Q(λ) ⊙_F C + H(λ) ⊙_F Z,
where Z H m × p is arbitrary.
Remark 62. 
Theorem 63 not only generalizes Theorem 61 to H , but also presents a more general result than the relevant results in [83,142,143,188,238,239,240,241,242,334].

6.8. Generalized forms of GSE

In SubSection 4.3, we saw that the SVD plays an important role in solving Eq. (1). Will the development of the SVD then promote research on more general forms of Eq. (1)? The work in [45,113,114] shows that the answer is affirmative.
De Moor and Zha [45] established the GSVD for an arbitrary finite number k ∈ Z_+ of matrices over C.
Theorem 64 
(GSVD for any k matrices). [45], Theorem 1 Let
A 1 C n 0 × n 1 , A 2 C n 1 × n 2 , , A k 1 C n k 2 × n k 1 , and A k C n k 1 × n k .
Then, there exist unitary matrices U 1 C n 0 × n 0 and V k C n k × n k , and nonsingular matrices X j C n j × n j ( j = 1 , , k 1 ) such that
U 1 * A 1 X 1 = Λ 1 , Z 1 A 2 X 2 = Λ 2 , , Z i 1 A i X i = Λ i , , Z k 1 A k V k = Λ k ,
where Z j = X j * (or = X j 1 ) for j = 1 , 2 , , k 1 (i.e., both choices are always possible),
Λ_j (j = 1, 2, …, k−1) is an n_{j−1} × n_j block matrix whose only nonzero blocks are identity matrices I_{r_j^1}, …, I_{r_j^j} arranged in the staircase pattern of the canonical form, with r_0 = 0 and r_j = ∑_{i=1}^{j} r_j^i = rank(A_j), and Λ_k is the block matrix obtained from the same pattern with the identity blocks replaced by Λ_k^1, …, Λ_k^k, where r_k = ∑_{i=1}^{k} r_k^i = rank(A_k) and the Λ_k^i ∈ C^{r_k^i × r_k^i} (i = 1, 2, …, k) are diagonal with positive diagonal elements.
It is easy to see that there exists an elementary column transformation turning Λ_k given in (96) into a matrix Λ_k′ consisting only of zero and identity blocks. So, under the hypotheses of Theorem 64, there exist nonsingular matrices U_1 ∈ C^{n_0×n_0}, V_k ∈ C^{n_k×n_k}, and X_j ∈ C^{n_j×n_j} (j = 1, …, k−1) such that
U_1^* A_1 X_1 = Λ_1,  Z_1 A_2 X_2 = Λ_2,  …,  Z_{i−1} A_i X_i = Λ_i,  …,  Z_{k−1} A_k V_k = Λ_k′.
In this case, Λ_1, …, Λ_{k−1}, and Λ_k′ have only zero and identity blocks.
Following an idea analogous to that in (97), He [113], Lemma 2.1 considered the pure product singular value decomposition (PSVD) of four quaternion matrices A_1 ∈ H^{q_1×q_2}, A_2 ∈ H^{q_2×q_3}, A_3 ∈ H^{q_3×q_4}, and A_4 ∈ H^{q_4×q_5}. Using the PSVD, He further investigated the system of generalized Sylvester matrix equations over H:
X_1 A_1 − B_1 X_2 = C_1,  X_2 A_2 − B_2 X_3 = C_2,  X_3 A_3 − B_3 X_4 = C_3,  X_4 A_4 − B_4 X_5 = C_4,
where X 1 , X 2 , . . . , X 5 are unknown (see [113], Theorems 4.1 and 4.2).
Inspired by He's aforementioned work, and since the GSVD of an arbitrary number k of matrices has been established, can we then consider a system of k generalized Sylvester equations? That is, consider
X_1 A_1 − B_1 X_2 = C_1,  X_2 A_2 − B_2 X_3 = C_2,  …,  X_k A_k − B_k X_{k+1} = C_k,
where k Z + , and  X 1 , X 2 , . . . , X k + 1 are unknown.
To answer this question, let us first take a look at the work of He et al. in [114]. In terms of Theorem 64, He et al. in [114], Theorem 2.2 gave the simultaneous decomposition of the fifteen matrices A_i, B_i, C_i, D_i, E_i (i = 1, 2, 3), given over C with appropriate orders and arranged in a suitable block array. By this simultaneous decomposition, they further studied the following system of complex matrix equations:
A 1 X 1 B 1 + C 1 X 2 D 1 = E 1 , A 2 X 2 B 2 + C 2 X 3 D 2 = E 2 , A 3 X 3 B 3 + C 3 X 4 D 3 = E 3 ,
where X_1, …, X_4 are unknown (see [114], Theorems 3.1 and 3.6). Interestingly, by the same means, they also demonstrated the simultaneous decomposition of the 5k matrices A_i, B_i, C_i, D_i, E_i (i = 1, 2, …, k), given over C with appropriate orders, where k ∈ Z_+ (see [114], Theorem 4.1). Next, it is natural to consider the system
A 1 X 1 B 1 + C 1 X 2 D 1 = E 1 , A 2 X 2 B 2 + C 2 X 3 D 2 = E 2 , A k X k B k + C k X k + 1 D k = E k ,
where X 1 , X 2 , . . . , X k + 1 are unknown. Obviously, Problem (98) is a special case of the system (99). However, only one conjecture on the solvability conditions of the system (99) was presented, i.e., [114], Conjecture 4.2.
Subsequently, in 2017, He posted the preprint [121], providing a complete proof of this conjecture; it was not formally published in Linear and Multilinear Algebra until 2024 (see [115], Corollary 4.4). This completely solved Problem (98). Moreover, in [115], Theorem 2.1, He et al. further investigated
(99) subject to G_1 X_1 = P_1, G_2 X_2 = P_2, …, G_{k+1} X_{k+1} = P_{k+1}, X_1 H_1 = Q_1, X_2 H_2 = Q_2, …, X_{k+1} H_{k+1} = Q_{k+1},
where X 1 , X 2 , . . . , X k + 1 are unknown.
It is worth noting that after [121], Wang and Xie published the preprint [285] to further extend the research on the system (99) over H , that is, to consider the system over H :
A 1 X 1 + Y 1 B 1 + C 1 Z 1 D 1 + F 1 Z 2 G 1 = E 1 , A 2 X 2 + Y 2 B 2 + C 2 Z 2 D 2 + F 2 Z 3 G 2 = E 2 , A k X k + Y k B k + C k Z k D k + F k Z k + 1 G k = E k ,
where k ∈ Z_+, and X_i, Y_i, Z_i (i = 1, 2, …, k) and Z_{k+1} are unknown. In addition, the main system studied in [322] is actually a special case of (101) with k = 3.
Up to this point, we can observe that the systems (100) and (101) encompass most of the current formal generalizations of Eq. (1) without considering differences in number sets, such as [119,123,124,200,224,272,281,283].
Remark 63. 
However, there are currently no published studies on the more generalized system as follows:
∑_{j=1}^{l} A_{1,j} X_{1,j} B_{1,j} + A_{1,l+1} X_{1,l+1} B_{1,l+1} + A_{1,l+2} X_{2,l+1} B_{1,l+2} = C_1,
∑_{j=1}^{l} A_{2,j} X_{2,j} B_{2,j} + A_{2,l+1} X_{2,l+1} B_{2,l+1} + A_{2,l+2} X_{3,l+1} B_{2,l+2} = C_2,
⋮
∑_{j=1}^{l} A_{i,j} X_{i,j} B_{i,j} + A_{i,l+1} X_{i,l+1} B_{i,l+1} + A_{i,l+2} X_{i+1,l+1} B_{i,l+2} = C_i,
⋮
∑_{j=1}^{l} A_{k,j} X_{k,j} B_{k,j} + A_{k,l+1} X_{k,l+1} B_{k,l+1} + A_{k,l+2} X_{k+1,l+1} B_{k,l+2} = C_k,
subject to G_{i,j} X_{i,j} = P_{i,j} and X_{i,j} H_{i,j} = Q_{i,j} for 1 ≤ i ≤ k, 1 ≤ j ≤ l+1, together with G_{k+1,l+1} X_{k+1,l+1} = P_{k+1,l+1} and X_{k+1,l+1} H_{k+1,l+1} = Q_{k+1,l+1},
where l, k ∈ Z_+; A_{i,j}, B_{i,j}, C_i (1 ≤ i ≤ k, 1 ≤ j ≤ l+2), G_{i,j}, P_{i,j}, H_{i,j}, Q_{i,j} (1 ≤ i ≤ k, 1 ≤ j ≤ l+1), G_{k+1,l+1}, P_{k+1,l+1}, H_{k+1,l+1}, and Q_{k+1,l+1} are given; and X_{i,j} (1 ≤ i ≤ k, 1 ≤ j ≤ l+1) and X_{k+1,l+1} are unknown. So, this is also an interesting problem.
Remark 64. 
Through the continuous research on the generalizations of Eq. (1), it can be seen that the GSVD gradually becomes ineffective, while generalized inverses have remained an effective tool. However, we also find that generalized inverse theory has had little success in studying systems of equations that simultaneously couple multiple (≥ 2) unknown matrices.
For instance, for  the given matrices A, B, and C, the system
A X − Y B = C,  X − Y = 0,
is consistent if and only if the Sylvester equation
A X − X B = C
is consistent. At present, there are almost no articles that directly represent the general solution of the Sylvester equation using only the generalized inverses of the coefficient matrices.
However, we can intuitively observe that using the GSVD (or the simultaneous equivalence canonical forms of A and B) to discuss the Sylvester equation is a feasible option.
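As a concrete small-scale illustration of Remark 64 (our own sketch, not taken from the cited works), the Sylvester equation (104) can be solved by the Kronecker linearization vec(AX − XB) = (I ⊗ A − Bᵀ ⊗ I) vec(X); the pair (X, X) then solves the coupled system (103):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# vec(AX − XB) = (I_n ⊗ A − Bᵀ ⊗ I_m) vec(X) for the column-stacking vec;
# the system is uniquely solvable when σ(A) ∩ σ(B) = ∅, which holds
# almost surely for random data.
K = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))
X = np.linalg.solve(K, C.flatten(order="F")).reshape((m, n), order="F")
assert np.allclose(A @ X - X @ B, C)
# (X, Y) = (X, X) then solves AX − YB = C with X − Y = 0, i.e. system (103).
```

Forming the Kronecker matrix costs O(m²n²) storage, so for larger problems Bartels–Stewart-type solvers (e.g., `scipy.linalg.solve_sylvester`, which handles AX + XB = Q) are preferable.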
It is noted that Liu put forward an open problem in [191], that is, to study the equivalent conditions for the solvability of the system of matrix equations over C :
A 1 X + Y B 1 = C 1 , A 2 X + Y B 2 = C 2 ,
where X and Y are unknown. Clearly, the system (105) is a generalization of both Eq. (5) and the system (103). Later, using rank equalities, Wang et al. [286] solved this problem over H under a certain condition, i.e.,
rank\begin{pmatrix} A_1 & A_2 \end{pmatrix} = rank(A_1) + rank(A_2) and rank\begin{pmatrix} B_1 \\ B_2 \end{pmatrix} = rank(B_1) + rank(B_2).
This wonderful work is illuminating for completely solving Eq. (104) and the system (105) by using rank equalities or the generalized inverses.
Theorem 65. 
[286], Theorem 2.8 Let A_1, A_2 ∈ H^{m×p}, B_1, B_2 ∈ H^{q×n}, and C_1, C_2 ∈ H^{m×n} be such that every matrix equation in the system (105) is consistent. If (106) holds, then the system (105) is consistent if and only if
rank\begin{pmatrix} B_1 & 0 \\ B_2 & 0 \\ C_1 & A_1 \\ C_2 & A_2 \end{pmatrix} = rank\begin{pmatrix} A_1 & A_2 \end{pmatrix} + rank\begin{pmatrix} B_1 \\ B_2 \end{pmatrix},  rank\begin{pmatrix} A_1 & A_2 & C_1 & C_2 \\ 0 & 0 & B_1 & B_2 \end{pmatrix} = rank\begin{pmatrix} A_1 & A_2 \end{pmatrix} + rank\begin{pmatrix} B_1 \\ B_2 \end{pmatrix},
rank\begin{pmatrix} 0 & B_1 & B_2 \\ A_1 & 0 & 0 \\ A_2 & 0 & F \end{pmatrix} = rank\begin{pmatrix} A_1 & A_2 \end{pmatrix} + rank\begin{pmatrix} B_1 \\ B_2 \end{pmatrix},  rank\begin{pmatrix} 0 & B_1 & B_2 \\ A_1 & 0 & 0 \\ A_2 & 0 & F̂ \end{pmatrix} = rank\begin{pmatrix} A_1 & A_2 \end{pmatrix} + rank\begin{pmatrix} B_1 \\ B_2 \end{pmatrix},
where
F = A_1 (A_2^{(1,2)} C_2 − A_1^{(1,2)} C_1) \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}^{(1,2)} \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} + Ω B_1,  F̂ = A_2 (A_2^{(1,2)} C_2 − A_1^{(1,2)} C_1) \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}^{(1,2)} \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} + Ω B_2,
Ω = \begin{pmatrix} A_1 & A_2 \end{pmatrix} \begin{pmatrix} A_1 & A_2 \end{pmatrix}^{(1,2)} (R_{A_2} C_2 B_2^{(1,2)} − R_{A_1} C_1 B_1^{(1,2)})
with R_{A_i} = I − A_i A_i^{(1,2)}.
Remark 65. 
For more research on Eq. (105), please refer to [150,296].

7. Iterative Algorithms

Analytical solutions of GSE over various algebras have been introduced in Section 5 and Section 6. However, in practical applications, challenges such as high computational complexity, stability, and robustness issues often arise. Therefore, investigating numerical solutions to GSE is imperative. In this section, we primarily present the relevant results on the numerical solutions of GSE through a concise enumeration. Several iterative algorithms directly addressing GSE are presented first.
(1)
In 1984, Ziętak [345], Section 3 proposed an algorithm to compute the l_p-solutions of Eq. (5) over R using [345], Theorem 2.3. In the same period, analogous to Algorithm R in [343] for a nonlinear matrix equation, Ziętak [344] devised Algorithm T. Using this algorithm, [344], Theorems 5.2 and 5.3 yield a Chebyshev solution of Eq. (5) under the conditions (29) and (28), respectively.
(2)
In 2008, using Kronecker products of matrices, Yang and Huang [330], Theorem 1 derived the normwise backward errors of the approximate solutions of Eq. (5) over R , as well as their upper and lower bounds.
(3)
In 2011, Li  et al. [179], Theorems 1, 2, and 3 extended the classical conjugate gradient least-squares algorithm to compute the optimal solution of Eq. (5) over R with symmetric pattern constraints.
(4)
In 2017, inspired by [111], Ke and Ma [154], Theorem 4.1 proposed an alternating direction method to find the nonnegative solutions of Eq. (5) over R .
In Section 6, we observe that GSE has numerous generalizations. Consequently, the iterative algorithms for these generalized forms specialize to the corresponding results of GSE as special cases. Thus, a few key algorithms for these generalizations are enumerated as follows:
(I)
In 2006, Peng  et al. [214] showed an efficient iterative algorithm for solving the following matrix equation over R :
A X B + C Y D = E ,
where X and Y are unknown. They noted that the algorithm can also be used to construct symmetric, antisymmetric, and bisymmetric solutions of (107) with only minor changes.
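The flavor of such iterative methods can be conveyed by a minimal steepest-descent sketch on f(X, Y) = ‖E − AXB − CYD‖_F² (our own illustration, not the exact algorithm of [214]); the partial gradients are −2AᵀRBᵀ and −2CᵀRDᵀ for the residual R = E − AXB − CYD:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Well-conditioned coefficients and a right-hand side made consistent
# by construction from a known pair (Xs, Ys).
A, B, C, D = (np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(4))
Xs, Ys = rng.standard_normal((n, n)), rng.standard_normal((n, n))
E = A @ Xs @ B + C @ Ys @ D

# Step size kept below the stability bound governed by the spectral norms.
mu = 0.9 / (np.linalg.norm(A, 2)**2 * np.linalg.norm(B, 2)**2
            + np.linalg.norm(C, 2)**2 * np.linalg.norm(D, 2)**2)

X, Y = np.zeros((n, n)), np.zeros((n, n))
for _ in range(5000):
    R = E - A @ X @ B - C @ Y @ D     # current residual
    X += mu * A.T @ R @ B.T           # descent step in X
    Y += mu * C.T @ R @ D.T           # descent step in Y

assert np.linalg.norm(E - A @ X @ B - C @ Y @ D) < 1e-6
```

Since the equation is consistent and f is a convex quadratic, the residual is driven to zero; the hierarchical and conjugate-gradient-type algorithms surveyed below accelerate this basic mechanism considerably.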
(II)
The condition number is an important topic in numerical analysis, characterizing the worst-case sensitivity of a problem's solution to perturbations in the input data. A large condition number indicates an ill-conditioned problem. Consider the following matrix equation:
A X − Y B = C,  D X − Y E = F,
where X and Y are unknown.
(i)
In 1996, Kågström and Poromaa [151] presented LAPACK-style algorithms and software for solving Eq. (108) over C .
(ii)
In 2007, Lin and Wei [187] studied the perturbation analysis for Eq. (108) over R , explicitly deriving the expressions and upper bounds for normwise, mixed, and componentwise condition numbers.
(iii)
In 2013, Diao et al. [52] developed the small sample statistical condition estimation algorithm to evaluate the normwise, mixed, and componentwise condition numbers of Eq. (108) over R . In [52], they also investigated the effective condition number for Eq. (108) and derived sharp perturbation bounds using this condition number.
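As a brute-force baseline for small problems (our own sketch; practical solvers such as the LAPACK-style routines of [151] avoid forming these large Kronecker matrices), system (108) can be assembled into one linear system over (vec X, vec Y) and solved directly:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 2
A, D = rng.standard_normal((m, m)), rng.standard_normal((m, m))
B, E = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Xs, Ys = rng.standard_normal((m, n)), rng.standard_normal((m, n))
C, F = A @ Xs - Ys @ B, D @ Xs - Ys @ E   # consistent right-hand sides

vec = lambda M: M.flatten(order="F")
Im, In = np.eye(m), np.eye(n)
# Stack vec(AX − YB) = vec(C) and vec(DX − YE) = vec(F).
K = np.block([[np.kron(In, A), -np.kron(B.T, Im)],
              [np.kron(In, D), -np.kron(E.T, Im)]])
rhs = np.concatenate([vec(C), vec(F)])
sol, *_ = np.linalg.lstsq(K, rhs, rcond=None)   # min-norm solution if singular
X = sol[:m * n].reshape((m, n), order="F")
Y = sol[m * n:].reshape((m, n), order="F")
assert np.allclose(A @ X - Y @ B, C) and np.allclose(D @ X - Y @ E, F)
```

A least-squares solve is used because the coupled system need not have a unique solution; consistency of the data guarantees a zero-residual solution is found.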
(III)
In 2008, Dehghan and Hajarian [47] proposed an iterative algorithm to solve the reflexive solutions of Eq. (108) over R .
1.
In 2010, Dehghan and Hajarian [49] presented an iterative algorithm for solving the generalized bisymmetric solutions of the generalized coupled Sylvester matrix equation over R :
A X B + C Y D = M , E X F + G Y H = N ,
where X and Y are unknown generalized bisymmetric matrices.
(IV)
In 2012, inspired by the least-squares QR-factorization algorithm in [211], Li and Huang [181] proposed an iterative method to find the best approximate solution of Eq. (109) over R , where unknown matrices X and Y are constrained to be symmetric, generalized bisymmetric, or  ( R , S ) -symmetric.
(V)
In 2018, inspired by [128,338], Lv and Ma [194], Section 3 proposed a parametric iterative algorithm for Eq. (108) over R . Moreover, in [194], Section 4, they developed an accelerated iterative algorithm based on this parametric approach. Note that Ref. [338] is a monograph on iterative algorithms for constrained solutions of matrix equations.
(VI)
Interestingly, in 2024, Ma et al. [195] proposed a Newton-type splitting iterative method for the coupled Sylvester-like absolute value equations over R:
A 1 X B 1 + C 1 | Y | D 1 = E 1 , A 2 Y B 2 + C 2 | X | D 2 = E 2 ,
where X and Y are unknown. Here, |A| denotes the entrywise absolute value of a matrix A.
(VII)
The algorithms in [53,54,234] suffer from parameter-tuning challenges, particularly for large-scale problems. To address this limitation, in 2025, Shirilord and Dehghan [235] proposed an advanced gradient-descent-based, parameter-free method to solve Eq. (108) over R.
In the final part of this section, we further enumerate some iterative algorithms for solving the generalizations of GSE, which involve an arbitrary number of unknown (or coefficient) matrices.
(A)
In 2005–2006, using the hierarchical identification principle, Ding and Chen [53,54] presented a large family of iterative methods for the more general form of Eq. (5) over R , i.e.,
A i , 1 X 1 B i , 1 + A i , 2 X 2 B i , 2 + + A i , p X p B i , p = C i , i = 1 , 2 , , p ,
where X 1 , X 2 , . . . , X p are unknown. These iterative methods subsume the well-known Jacobi and Gauss-Seidel iterations. Subsequent scholars have conducted more extensive research on numerical algorithms for Eq. (110).
(a)
In 2010, based on the conjugate gradient method, Dehghan and Hajarian [48] constructed an iterative method for Eq. (110) over R with the generalized bisymmetric solutions ( X 1 , X 2 , . . , X p ) .
(b)
In 2012, Dehghan and Hajarian [50] introduced two iterative methods for solving (110) over R with the generalized centro-symmetric and central antisymmetric solutions ( X 1 , X 2 , . . , X p ) .
(c)
In 2014, Hajarian [100] solved Eq. (110) over C by the matrix form of the conjugate gradients squared method.
(d)
In 2016, Hajarian [103] presented the generalized conjugate direction algorithm for computing Eq. (110) with the symmetric solutions ( X 1 , X 2 , . . , X p ) .
(e)
In 2017, based on the Hestenes-Stiefel version of the biconjugate residual (BCR) algorithm, Hajarian [105] solved the generalized Sylvester matrix equation
i = 1 f ( A i X B i ) + j = 1 g ( C j Y D j ) = E
over R with the generalized reflexive solutions ( X , Y ) . In 2018, Lv and Ma [193] introduced another Hestenes-Stiefel version of BCR method for computing the centrosymmetric or anti-centrosymmetric solutions of Eq. (110) over R .
(f)
In 2018, inspired by [208], Sheng [234] proposed a relaxed gradient-based iterative (RGI) algorithm to solve Eq. (108), and further generalized this algorithm to Eq. (110). Moreover, numerical examples in [234] demonstrate that the RGI algorithm outperforms the iterative algorithm in [54] in terms of speed, elapsed time, and iterative steps.
(g)
In 2018, Hajarian [106] extended the Lanczos version of the BCR algorithm to find the symmetric solutions (X, Y, Z) of the matrix equation over R:
A i X B i + C i Y D i + E i Z F i = G i , i = 1 , 2 , . . . , t .
In 2020, Yan and Ma [327] also used the Lanczos version of BCR algorithm to study Eq. (110) over R with the (anti-)reflexive solutions ( X 1 , X 2 , . . , X p ) .
(B)
In 2009, from an optimization perspective, Zhou et al. [340] developed a novel iterative method for solving Eq. (110) over R and its more general form, i.e.,
j = 1 s i , 1 A i , 1 , j X 1 B i , 1 , j + j = 1 s i , 2 A i , 2 , j X 2 B i , 2 , j + + j = 1 s i , p A i , p , j X p B i , p , j = C i , i = 1 , 2 , , p
with unknown X_1, X_2, …, X_p, which contains the iterative methods in [53,54] as special cases. In 2015, by extending the generalized product biconjugate gradient algorithm, Hajarian [102] gave four effective matrix algorithms for the coupled matrix equation over R:
∑_{j=1}^{l} (A_{i,1,j} X_1 B_{i,1,j} + A_{i,2,j} X_2 B_{i,2,j} + ⋯ + A_{i,l,j} X_l B_{i,l,j}) = D_i,  i = 1, 2, …, l,
where X 1 , X 2 , , X l are unknown.
(C)
In 2011, Wu et al. [311] constructed an iterative algorithm to solve the coupled Sylvester-conjugate matrix equation over C :
j = 1 p A i j X j B i j + C i j X ¯ j D i j = F i , i = 1 , 2 , , n ,
where X 1 , X 2 , . . . , X p are unknown. In 2021, inspired by [311], Yan and Ma [328] proposed an iterative algorithm for the generalized Hamiltonian solutions of the generalized coupled Sylvester-conjugate matrix equations over C :
j = 1 q A i j X j B i j + C i j Y ¯ j D i j = M i , j = 1 q E i j Y j F i j + G i j X ¯ j H i j = N i ,
where i = 1, …, p, and X_j and Y_j (j = 1, …, q) are unknown generalized Hamiltonian matrices.
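Conjugate terms make such equations linear over R rather than over C. A common small-scale device, sketched below with illustrative names (a real-representation baseline, not the iterations of [311] or [328]), splits vec(X) into real and imaginary parts and solves one real linear system.

```python
import numpy as np

def solve_sylvester_conjugate(A, B, C, D, F):
    # A X B + C conj(X) D = F: with M1 = kron(B.T, A) acting on vec(X) and
    # M2 = kron(D.T, C) acting on conj(vec(X)), split vec(X) = u + i v and
    # solve one real block system for (u, v).
    M1, M2 = np.kron(B.T, A), np.kron(D.T, C)
    S, T = M1 + M2, M1 - M2
    n2 = M1.shape[1]
    K = np.block([[S.real, -T.imag], [S.imag, T.real]])
    f = F.reshape(-1, order="F")
    uv = np.linalg.solve(K, np.concatenate([f.real, f.imag]))
    return (uv[:n2] + 1j * uv[n2:]).reshape(F.shape, order="F")

rng = np.random.default_rng(7)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
              for _ in range(4))
X_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = A @ X_true @ B + C @ np.conj(X_true) @ D
X_sol = solve_sylvester_conjugate(A, B, C, D, F)
err = np.linalg.norm(X_sol - X_true)
```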
(D)
In 2015, inspired by [19,147], Hajarian [101] obtained an iterative method for the coupled Sylvester-transpose matrix equations over R :
\sum_{k=1}^{l} \left( A_{1,k} X B_{1,k} + C_{1,k} X^{T} D_{1,k} + E_{1,k} Y F_{1,k} \right) = M_1, \quad \sum_{k=1}^{l} \left( A_{2,k} X B_{2,k} + C_{2,k} X^{T} D_{2,k} + E_{2,k} Y F_{2,k} \right) = M_2
with unknown X and Y, by developing the biconjugate A-orthogonal residual and the conjugate A-orthogonal residual squared methods. Based on these developed methods, Hajarian [101] also considered the coupled periodic Sylvester matrix equations over R :
A_{1,j} X_j B_{1,j} + C_{1,j} X_{j+1} D_{1,j} + E_{1,j} Y_j F_{1,j} = M_{1,j}, \quad A_{2,j} X_j B_{2,j} + C_{2,j} X_{j+1} D_{2,j} + E_{2,j} Y_j F_{2,j} = M_{2,j}, \quad \text{for } j = 1, 2, \ldots,
where X_j and Y_j are unknown periodic matrices with a given period.
(E)
Discrete-time periodic matrix equations are an important tool for analyzing and designing periodic systems [15]. More related studies are as follows:
(a)
In 2017, Hajarian [104] introduced a generalized conjugate direction method for solving the general coupled Sylvester discrete-time periodic matrix equations over R :
\sum_{j=1}^{m} \left( A_{ij} X_i B_{ij} + C_{ij} X_{i+1} D_{ij} + E_{ij} Y_i F_{ij} + G_{ij} Y_{i+1} H_{ij} \right) = M_i, \quad \sum_{j=1}^{m} \left( \hat{A}_{ij} X_i \hat{B}_{ij} + \hat{C}_{ij} X_{i+1} \hat{D}_{ij} + \hat{E}_{ij} Y_i \hat{F}_{ij} + \hat{G}_{ij} Y_{i+1} \hat{H}_{ij} \right) = \hat{M}_i, \quad i = 1, 2, \ldots,
where X_i and Y_i are unknown periodic matrices with a given period.
(b)
In 2022, Ma and Yan [196] proposed a modified conjugate gradient algorithm for solving the general discrete-time periodic Sylvester matrix equations over R :
\sum_{j=1}^{h} \left( A_{ij} X_i B_{ij} + C_{ij} X_{i+1} D_{ij} + E_{ij} Y_i F_{ij} + G_{ij} Y_{i+1} H_{ij} \right) = M_i, \quad i = 1, 2, \ldots, T,
where X i and Y i are unknown periodic matrices of period T.
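Conjugate-direction methods of this kind are, at heart, conjugate gradients applied to the normal equations of a linear matrix operator, carried out entirely in matrix form. The following is a generic matrix-form CGNR sketch for the basic GSE AX − YB = C (illustrative only, not the specific modified CG scheme of [196]):

```python
import numpy as np

def cgnr_gse(A, B, C, iters=200, tol=1e-20):
    # CG on the normal equations of L(X, Y) = A X - Y B, with adjoint
    # L*(R) = (A.T R, -R B.T); all iterates stay in matrix form.
    X = np.zeros((A.shape[1], C.shape[1]))
    Y = np.zeros((C.shape[0], B.shape[0]))
    R = C - (A @ X - Y @ B)
    GX, GY = A.T @ R, -R @ B.T
    PX, PY = GX.copy(), GY.copy()
    g2 = np.sum(GX * GX) + np.sum(GY * GY)
    for _ in range(iters):
        Q = A @ PX - PY @ B                 # L applied to the direction
        alpha = g2 / np.sum(Q * Q)
        X, Y = X + alpha * PX, Y + alpha * PY
        R = R - alpha * Q
        GX, GY = A.T @ R, -R @ B.T
        g2_new = np.sum(GX * GX) + np.sum(GY * GY)
        if g2_new < tol:
            break
        PX, PY = GX + (g2_new / g2) * PX, GY + (g2_new / g2) * PY
        g2 = g2_new
    return X, Y

rng = np.random.default_rng(2)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
C = A @ rng.standard_normal((4, 4)) - rng.standard_normal((4, 4)) @ B
X, Y = cgnr_gse(A, B, C)
res = np.linalg.norm(A @ X - Y @ B - C)
```

Unlike plain gradient iterations, conjugate directions terminate (in exact arithmetic) after at most as many steps as there are unknowns.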
(F)
Interestingly, in 2014, Dehghani-Madiseh and Dehghan [51] presented the generalized interval Gauss-Seidel iteration method for the outer estimation of the AE-solution set of the interval generalized Sylvester matrix equation over R :
\sum_{i=1}^{p} A_i X_i + \sum_{j=1}^{q} Y_j B_j = C,
where X i ( i = 1 , , p ) and Y j ( j = 1 , , q ) are unknown interval matrices.
(H)
In 2018, Hajarian [107] established the biconjugate residual algorithm for solving the matrix equation over R :
\sum_{i=1}^{s} A_i X B_i + \sum_{j=1}^{t} C_j Y D_j = M,
where X and Y are the unknown generalized reflexive and anti-reflexive matrices, respectively.
(I)
In 2022, based on Kronecker product approximations, Li et al. [182] established a preconditioned modified conjugate residual method for solving the following tensor equation over R :
\begin{cases} X_1 \times_1 A_{11} + X_2 \times_2 A_{12} + \cdots + X_{n-1} \times_{n-1} A_{1(n-1)} + X_n \times_n A_{1n} = B_1, \\ X_2 \times_1 A_{21} + X_3 \times_2 A_{22} + \cdots + X_n \times_{n-1} A_{2(n-1)} + X_1 \times_n A_{2n} = B_2, \\ \quad \vdots \\ X_n \times_1 A_{n1} + X_1 \times_2 A_{n2} + \cdots + X_{n-2} \times_{n-1} A_{n(n-1)} + X_{n-1} \times_n A_{nn} = B_n, \end{cases}
where X_1, …, X_n are unknown. Here, A ×_k B denotes the k-mode product [217] of a tensor A ∈ R^{I_1 × I_2 × ⋯ × I_n} and a matrix B ∈ R^{m × I_k}.
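For reference, the k-mode product X ×_k A contracts the second index of A against mode k of X; a brief numpy sketch (zero-based mode indexing, illustrative helper name):

```python
import numpy as np

def mode_k_product(X, A, k):
    # (X x_k A)[..., j, ...] = sum_m A[j, m] * X[..., m, ...] along mode k;
    # tensordot contracts A's column index with mode k of X, and moveaxis
    # restores the mode ordering.
    return np.moveaxis(np.tensordot(A, X, axes=(1, k)), 0, k)

X = np.arange(24.0).reshape(2, 3, 4)
A = np.ones((5, 3))                 # maps mode 1 (size 3) to size 5
Y = mode_k_product(X, A, 1)         # shape (2, 5, 4)
```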
Remark 66. 
We believe that the iterative algorithms for numerical solutions of GSE and its generalizations summarized in this section can also provide certain inspiration and guidance for the study of numerical solutions to other linear equations.

8. Applications to GSE

In this section, we mainly introduce several applications of GSE in both theoretical and practical problems. It is also worth noting that the generalizations of GSE find applications in pole and eigenstructure assignment [261], scalar functional observer design [257], robots and acoustic source localization [146], pseudo-differential systems [159], control theory [304], parametric control [91], model reference tracking control [248], etc.

8.1. Theoretical Applications

8.1.1. Solvability of Matrix Equations

In 1988, Wimmer [295] utilized the solvability condition of the polynomial matrix form of Eq. (1) (i.e., Theorem 54) to give a necessary and sufficient condition for the consistency of the matrix equation over a field F :
X - AXB = C,
where A ∈ F^{p×p}, B ∈ F^{q×q}, and C ∈ F^{p×q}.
Theorem 66. 
[295], Theorem 2 Eq. (113) has a solution X F p × q if and only if there exist nonsingular matrices S , R F ( p + q ) × ( p + q ) such that
S \begin{pmatrix} \lambda I + A & C \\ 0 & I + \lambda B \end{pmatrix} R = \begin{pmatrix} \lambda I + A & 0 \\ 0 & I + \lambda B \end{pmatrix}.
Remark 67. 
The core of the proof of Theorem 66 is that
(113) \Longleftrightarrow X - AY = C, \; Y = XB \Longleftrightarrow (A + \lambda I) Y - X(I + \lambda B) = -C.
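The reduction behind Remark 67 can be checked numerically: if X solves X − AXB = C and Y = XB, then X(I + λB) − (λI + A)Y = C holds for every scalar λ. A small verification, solving Eq. (113) by vectorization (sizes and the spectral scaling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 3, 4
A = 0.4 * rng.standard_normal((p, p))   # scaled so I - kron(B.T, A)
B = 0.4 * rng.standard_normal((q, q))   # stays nonsingular
C = rng.standard_normal((p, q))

# solve X - A X B = C via vec(A X B) = kron(B.T, A) vec(X)
M = np.eye(p * q) - np.kron(B.T, A)
X = np.linalg.solve(M, C.reshape(-1, order="F")).reshape(p, q, order="F")
Y = X @ B

lam = 1.7                               # the identity holds for every lam
err = np.linalg.norm(X @ (np.eye(q) + lam * B) - (lam * np.eye(p) + A) @ Y - C)
```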
Furthermore, using a special polynomial matrix form of Eq. (5), Huang and Liu [133] considered the solvability condition for the matrix equation
\sum_{i=0}^{k} A^{i} X B_i = C
with unknown X over a ring with identity.
Theorem 67. 
[133], Theorems 1 and 2 Let F be a ring with identity, and let A ∈ F^{n×n}, B_i ∈ F^{m×q} (i = 0, 1, …, k), and C ∈ F^{n×q}. Denote
B(\lambda) = \sum_{i=0}^{k} B_i \lambda^{i} \in F^{m \times q}[\lambda].
(1)
Then, Eq. (114) is solvable if and only if the polynomial matrix equation
(\lambda I - A) X(\lambda) + Y(\lambda) B(\lambda) = C
has a solution pair X ( λ ) F n × q [ λ ] and Y ( λ ) F n × m [ λ ] .
(2)
Suppose that F is a division ring and A is algebraic (or F is finitely generated as a module over its center). Then, Eq. (114) is solvable if and only if
\begin{pmatrix} \lambda I - A & C \\ 0 & B(\lambda) \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \lambda I - A & 0 \\ 0 & B(\lambda) \end{pmatrix}
are equivalent.
Remark 68. 
(1)
Theorem 67 remains valid when F is a finite dimensional central simple algebra over a field (see [129]).
(2)
In [133], Corollaries 1, 2 and 3, Huang and Liu indicated that relevant results regarding the solvability of the equations AX − XB = C, X − AXB = C, and AXB = C can be directly derived from Theorem 67.

8.1.2. UTV Decomposition of Dual Matrices

In 2024, Xu et al. [325] presented the UTV decomposition of dual complex matrices based on the solvability conditions and general solution representations of Eq. (1) over C (i.e., Theorem 3).
For a_s, a_i ∈ C, a = a_s + a_i ε represents a dual complex number, where ε is the dual unit given in (61). The set of all dual complex numbers is denoted by DC. For A = A_s + A_i ε ∈ DC^{n×p}, A has unitary columns if n ≥ p and A^* A = I_p, where A^* = A_s^* + A_i^* ε is the conjugate transpose of A. For A = A_s + A_i ε ∈ DC^{n×n}, A is unitary if A^* A = A A^* = I_n; A is diagonal if both A_s and A_i are diagonal; and A is nonsingular if A A^{-1} = A^{-1} A = I_n for some A^{-1} ∈ DC^{n×n}.
We say that A = A_s + A_i ε ∈ DC^{m×n} has the UTV decomposition [325] if
A = U T V * ,
where U = U_s + U_i ε ∈ DC^{m×k} has unitary columns, T = T_s ∈ C^{k×k} is triangular and nonsingular, and V = V_s + V_i ε ∈ DC^{n×k} has a unitary standard part V_s.
Theorem 68. 
[325], Theorem 3.1 Let A = A_s + A_i ε ∈ DC^{m×n}. Assume that the UTV decomposition of A_s is given by
A_s = U_s T_s V_s^{*},
where both U_s ∈ C^{m×k} and V_s ∈ C^{n×k} have unitary columns, and T_s ∈ C^{k×k} is triangular and nonsingular. Then, the UTV decomposition of A exists if and only if
( I_m - U_s U_s^{*} ) A_i ( I_n - V_s V_s^{*} ) = 0,
in which case,
U_i = ( I_m - U_s U_s^{*} ) A_i V_s T_s^{-1} + U_s P \quad \text{and} \quad V_i = A_i^{*} U_s ( T_s^{-1} )^{*} - V_s T_s^{*} P^{*} ( T_s^{-1} )^{*},
where P ∈ C^{k×k} is an arbitrary skew-Hermitian matrix.
Remark 69. 
[325], Proof of Theorem 3.1 shows that the pivotal step in proving Theorem 68 is that A has the UTV decomposition if and only if the matrix equation
U_i T_s V_s^{*} + U_s T_s V_i^{*} = A_i
is consistent for unknown U_i ∈ C^{m×k} and V_i ∈ C^{n×k}.
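A quick numerical sanity check of Theorem 68 and Remark 69: build A_s = U_s T_s V_s^*, choose an infinitesimal part A_i satisfying the solvability condition, and verify that the stated U_i and V_i solve U_i T_s V_s^* + U_s T_s V_i^* = A_i (sizes and the choice of the skew-Hermitian parameter P are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 5, 4, 3

def cmat(r, c):
    return rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))

Us = np.linalg.qr(cmat(m, k))[0]            # unitary columns
Vs = np.linalg.qr(cmat(n, k))[0]
Ts = np.triu(cmat(k, k)) + 2 * np.eye(k)    # triangular, nonsingular
Pu, Pv = Us @ Us.conj().T, Vs @ Vs.conj().T # orthogonal projectors

G = cmat(m, n)
Ai = Pu @ G + (np.eye(m) - Pu) @ G @ Pv     # forces (I-Pu) Ai (I-Pv) = 0

S = cmat(k, k)
P = S - S.conj().T                          # skew-Hermitian free parameter
Ti = np.linalg.inv(Ts)
Ui = (np.eye(m) - Pu) @ Ai @ Vs @ Ti + Us @ P
Vi = Ai.conj().T @ Us @ Ti.conj().T - Vs @ Ts.conj().T @ P.conj().T @ Ti.conj().T
err = np.linalg.norm(Ui @ Ts @ Vs.conj().T + Us @ Ts @ Vi.conj().T - Ai)
```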

8.1.3. Microlocal Triangularization of Pseudo-Differential Systems

In 2013, using the solvability of Eq. (1) over C , Kiran [159] constructed a recursive scheme to factorize a pseudo-differential system into lower and upper triangular systems (LU factorization) independent of lower order terms.
Let OPS_N^0(Ω) be the set of all N × N pseudo-differential systems of order 0 defined on an open subset Ω of R^n. In addition, the notation and terminology in this subsection follow those in [222,253].
Definition 18. 
[159], Definition 2.2 A matrix-valued operator A ∈ OPS_N^0(Ω) admits LU factorization if
A = L U ,
where L ∈ OPS_N^0(Ω) is an elliptic lower triangular matrix whose principal symbol has the identity on the diagonal entries, and U ∈ OPS_N^0(Ω) is an upper triangular matrix.
Theorem 69. 
[159], Theorem 2.3 Let λ_1(x, ξ), …, λ_N(x, ξ) be N sections of eigenvalues of the principal symbol of A ∈ OPS_N^0(Ω), including multiplicities, in a conic neighborhood Γ of (x_0, ξ_0) ∈ T^*Ω ∖ {0}. If
\lambda_i^{-1}(0) \cap \lambda_j^{-1}(0) \cap \Gamma = \emptyset, \quad i \neq j,
then A is microlocally triangularizable in Γ independent of lower order terms. Moreover, the system admits LU factorization independent of lower order terms in Γ if and only if the principal symbol of A admits an LU factorization where the first N − 1 eigenvalues of the upper triangular matrix do not vanish in Γ.
Remark 70. 
(1)
In [159], Sections 3.3 and 3.4, Kiran showed that the triangularization scheme in Theorem 69 can also be applied to symbolic hierarchies.
(2)
[159], Lemma 2.5 shows that Eq. (1) over C has a unique solution X if and only if A or B is nonsingular. However, there is a simple counterexample to its sufficiency. Indeed, if both A and B are identity matrices (and thus nonsingular), the solution X of Eq. (1) is obviously not unique for a given C. For instance, take X = C and Y = 0 , or  X = 2 C and Y = C . This minor error, however, does not affect the existence of solutions to Eq. (1).
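The counterexample in Remark 70 (2) in concrete form: with A = B = I (both nonsingular), Eq. (1) reduces to X − Y = C, which admits infinitely many solution pairs.

```python
import numpy as np

C = np.arange(9.0).reshape(3, 3)
pairs = [(C, np.zeros((3, 3))),          # X = C,  Y = 0
         (2 * C, C)]                     # X = 2C, Y = C
# with A = B = I, the residual A X - Y B - C reduces to X - Y - C
residuals = [np.linalg.norm(X - Y - C) for X, Y in pairs]
```

Both residuals vanish, so uniqueness indeed fails even though A and B are nonsingular.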

8.2. Practical Applications

8.2.1. Calibration Problems

The late 1970s to early 1980s witnessed a surge of interest in deploying robotic manipulators for automated manufacturing. However, integrating robots as core components in flexible manufacturing systems remained challenging across many industrial applications, prompting extensive research into manipulator calibration [204].
Inspired by [236,341], Zhuang et al. [342] first solved a specialized form of Eq. (1) to address a robot calibration problem: the calibration of the robot/world (i.e., the BASE transformation) and tool/flange (i.e., the TOOL transformation). In fact, robot manipulator calibration refers to the procedure of enhancing a robot manipulator’s accuracy by adjusting its control software.
Figure 1 provides a schematic illustration of the geometry of a robotic cell. The world coordinate frame serves as an external reference frame. The base coordinate frame is defined within the robot structure. The flange coordinate frame is defined on the mounting surface of the robot end effector. The tool frame is positioned at a point inside the end effector.
Then, the robot kinematic model can be transformed into the following form:
A X = Y B ,
where
(i)
A is the known homogeneous transformation from end effector pose measurements,
(ii)
B is derived from the calibrated manipulator internal-link forward kinematics,
(iii)
X is the unknown transformation from the tool frame to the flange frame,
(iv)
Y is the unknown transformation from the world frame to the base frame.
Assume that there are n pose measurements indexed by i = 1, 2, …, n. Thus, the calibration problem is reduced to solving the system of equations
A_i X = Y B_i, \quad i = 1, 2, \ldots, n.
Subsequently, Zhuang et al. elaborated on the solution of Eq. (116) using the rotational properties of quaternions in [342], Section 3.
Since then, the robot manipulator calibration problem has been further investigated by considering Eq. (116) through different approaches: dual quaternion method [24,44,178,218,289], new hybrid calibration method [269], least-squares approach [68], Kronecker product method [232], 3D position measurements [315], nonlinear optimization and evolutionary computation [252], 2D positional features [231], dual Lie algebra [39], symbolic method [314], linear matrix inequality and semi-definite programming optimization [213], probabilistic framework [99], transference principle [290], etc.
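Ignoring the rigid-motion structure that the methods above carefully exploit, Eq. (116) is linear and homogeneous in (X, Y), so a basic baseline stacks the vectorized equations and extracts an SVD null vector. The sketch below (function names and the normalization step are illustrative assumptions) recovers a consistent pair from synthetic transforms:

```python
import numpy as np

def rigid(rng):
    # random rigid homogeneous transform (rotation + translation)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Q, rng.standard_normal(3)
    return T

def solve_ax_yb(As, Bs):
    # stack vec(A_i X) - vec(Y B_i) = 0 and take the SVD null vector
    d = As[0].shape[0]
    M = np.vstack([np.hstack([np.kron(np.eye(d), A), -np.kron(B.T, np.eye(d))])
                   for A, B in zip(As, Bs)])
    z = np.linalg.svd(M)[2][-1]
    X = z[:d * d].reshape(d, d, order="F")
    Y = z[d * d:].reshape(d, d, order="F")
    s = X[-1, -1]                    # rescale so X stays homogeneous
    return X / s, Y / s

rng = np.random.default_rng(5)
X_true, Y_true = rigid(rng), rigid(rng)
Bs = [rigid(rng) for _ in range(3)]
As = [Y_true @ B @ np.linalg.inv(X_true) for B in Bs]
X_hat, Y_hat = solve_ax_yb(As, Bs)
res = max(np.linalg.norm(A @ X_hat - Y_hat @ B) for A, B in zip(As, Bs))
```

Practical calibration methods additionally enforce that X and Y are rigid transformations (via quaternions, dual quaternions, Lie algebra, etc.), which this linear sketch does not.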

8.2.2. Encryption and Decryption Schemes for Color Images

The RGB (Red, Green, Blue) color channels can be directly mapped to the imaginary parts ( i , j , k ) of a pure imaginary quaternion matrix. Naturally, a dual quaternion matrix can represent two color images since both its standard and infinitesimal parts are quaternion matrices. By solving Eq. (1) over DH , Xie et al. [317] proposed the encryption and decryption schemes for color images, as shown in Figure 2.
The specific encryption and decryption processes for color images are presented in the following two algorithms. To ensure the uniqueness of the decrypted color images ( X̂_0 , X̂_1 ) output by Algorithm 2, the encryption dual quaternion matrices A and B must be restricted to satisfy Condition P, i.e., both the standard parts and the infinitesimal parts of A and B are of either full row rank or full column rank.
Algorithm 1 Color image encryption scheme
1:
Input two original color images X 0 and X 1 , two color images Y 0 and Y 1 as keys, and two encryption dual quaternion matrices A and B satisfying Condition P;
2:
Output two encrypted color images C 0 and C 1 by
A ( X_0 + X_1 \varepsilon ) - ( Y_0 + Y_1 \varepsilon ) B = C_0 + C_1 \varepsilon .
Algorithm 2 Color image decryption scheme
1:
Input the encryption matrices A and B, the keys Y 0 and Y 1 , and the encrypted color images C 0 and C 1 from Algorithm 1.
2:
Output two decrypted color images X ^ 0 and X ^ 1 by (62) and (63) in Theorem 42.
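A stripped-down real-matrix analogue of the scheme (purely illustrative: the actual scheme of [317] operates on dual quaternion matrices, so that one matrix carries two color images): encryption computes C = AX − YB from the payload X and key Y, and, when A has full column rank, decryption recovers X exactly.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 8, 8
X = rng.integers(0, 256, size=(m, n)).astype(float)  # "image" payload
Y = rng.standard_normal((m, n))                      # key image
A = rng.standard_normal((m, m))                      # encryption matrices,
B = rng.standard_normal((n, n))                      # full rank a.s.

C_enc = A @ X - Y @ B                                # encryption
# decryption: with Y, A, B known and A of full column rank,
# X = A^+ (C + Y B)
X_hat = np.linalg.lstsq(A, C_enc + Y @ B, rcond=None)[0]
err = np.linalg.norm(X_hat - X)
```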
Remark 71. 
Recently, research on encrypting and decrypting color images and color videos by solving matrix or tensor equations has attracted attention in [140,319,329].

9. Conclusions

Research on GSE and its generalizations has long been a vibrant field, boasting both profound theoretical value and extensive practical application prospects. This comprehensive review encompasses 88 theorems, 71 remarks, and 345 references spanning from 1952 to 2025, covering pure mathematics (linear algebra, abstract algebra, operators, tensors, semi-tensor products, polynomial matrices, etc.), computational mathematics (iterative algorithms, condition numbers, etc.), and applied mathematics (robotics, image processing, encryption/decryption schemes, etc.). Centered on solving GSE, this paper elaborates on five dimensions (methods, constraints, generalizations, algorithms, and applications), distilling the field’s essence through point-by-point analysis and synthesis. A network diagram (i.e., Figure 3) intuitively illustrates the core framework of GSE research.
Numerous researchers worldwide have made significant contributions to GSE-related studies. Owing to the authors’ limitations, we have not been able to cover all GSE research findings, and there may inevitably be omissions, for which we apologize. Moreover, as GSE research advances rapidly with continuous innovations, this paper’s framework and content will grow increasingly rich and substantial as the field develops. Finally, we kindly invite experts and readers to offer their valuable insights, helping us further refine the summarization work in this field.

Author Contributions

Conceptualization, Qing-Wen Wang and Jiale Gao; Methodology, Qing-Wen Wang and Jiale Gao; Investigation, Qing-Wen Wang and Jiale Gao; Writing-Original Draft Preparation, Qing-Wen Wang and Jiale Gao; Writing-Review and Editing, Qing-Wen Wang and Jiale Gao; Supervision, Qing-Wen Wang; Funding Acquisition, Qing-Wen Wang.

Funding

This research was funded by the National Natural Science Foundation of China grant number [12371023].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. S.L. Adler. Quaternionic quantum mechanics and quantum fields, Oxford University Press, New York. 1995.
  2. I. J. An, E. Ko, and J.E. Lee. On the generalized Sylvester operator equation AX-YB=C. Linear Multilinear Algebra 2022, 72, 585–596. [Google Scholar]
  3. A. L. Andrew. Centrosymmetric matrices. SIAM Rev. 1998, 40, 697–698. [Google Scholar] [CrossRef]
  4. H. Aslaksen. Quaternionic determinants. Math. Intell. 1996, 18, 57–65. [Google Scholar] [CrossRef]
  5. B. W. Bader and T.G. Kolda. Algorithm 862: Matlab tensor classes for fast algorithm prototyping. ACM Trans. Math. Softw. 2006, 32, 635–653. [Google Scholar] [CrossRef]
  6. J. K. Baksalary and R. Kala. The matrix equation AX-YB=C. Linear Algebra Appl. 1979, 25, 41–43. [Google Scholar] [CrossRef]
  7. J. K. Baksalary and R. Kala. The matrix equation AXB+CYD=E. Linear Algebra Appl. 1980, 30, 141–147. [Google Scholar] [CrossRef]
  8. S. Barnett. Regular polynomial matrices having relatively prime determinants. Proc. Camb. Philos. Soc. 1969, 65, 585–590. [Google Scholar] [CrossRef]
  9. A. Ben-Israel and T.N.E Greville. Generalized Inverses: Theory and Applications, Springer, New York, 2nd edition. 2003.
  10. G. Bengtsson. Output regulation and internal models–a frequency domain approach. Automatica 1977, 13, 333–345. [Google Scholar] [CrossRef]
  11. J.H. Bevis, F.J. Hall, and R.E. Hartwig. Consimilarity and the matrix equation AX-XB=C. In F. Uhlig and R. Grone, editors, Current Trends in Matrix Theory, pages 51–64. North-Holland, New York, 1987.
  12. J. H. Bevis, F.J. Hall, and R.E. Hartwig. The matrix equation AX¯-XB=C and its special cases. SIAM J. Matrix Anal. Appl. 1988, 9, 348–359. [Google Scholar]
  13. R. Bhatia. Matrix Analysis, Springer, New York. 1997.
  14. R. Bhatia and P. Rosenthal. How and why to solve the operator equation AX-XB=Y. Bull. Lond. Math. Soc. 1997, 29, 1–21. [Google Scholar] [CrossRef]
  15. S. Bittanti and P. Colaneri. Periodic Systems: Filtering and Control, Springer, London. 2009.
  16. M. Brazell, N. Li, C. Navasca, and C. Tamon. Solving multilinear systems via tensor inversion. SIAM J. Matrix Anal. Appl. 2013, 34, 542–570. [Google Scholar] [CrossRef]
  17. R. Byers and D. Kressner. Structured condition numbers for invariant subspaces. SIAM J. Matrix Anal. Appl. 2006, 28, 326–347. [Google Scholar] [CrossRef]
  18. A. Cantoni and P. Butler. Eigenvalues and eigenvectors of symmetric centrosymmetric matrices. Linear Algebra Appl. 1976, 13, 275–288. [Google Scholar] [CrossRef]
  19. B. Carpentieri, Y.F. Jing, and T.Z. Huang. The BiCOR and CORS iterative algorithms for solving nonsymmetric linear systems. SIAM J. Sci. Comput. 2011, 33, 3020–3036. [Google Scholar] [CrossRef]
  20. X. W. Chang and J.S. Wang. The symmetric solution of the matrix equations AX+YA=C, AXAT+BYBT=C, and (ATXA,BTXB)=(C,D). Linear Algebra Appl. 1993, 179, 171–189. [Google Scholar] [CrossRef]
  21. M. Che and Y. Wei. Theory and Computation of Complex Tensors and its Applications, Springer, Singapore. 2020.
  22. S. Chen and Y. Tian. On solutions of generalized Sylvester equation in polynomial matrices. J. Frankl. Inst. 2014, 351, 5376–5385. [Google Scholar] [CrossRef]
  23. W. Chen and C. Song. STP method for solving the least squares special solutions of quaternion matrix equations. Adv. Appl. Clifford Algebr. 2025, 35, 6. [Google Scholar] [CrossRef]
  24. Z. Chen, C. Ling, L. Qi, and H. Yan. A regularization-patching dual quaternion optimization method for solving the hand-eye calibration problem. J. Optim. Theory Appl. 2024, 200, 1193–1215. [Google Scholar] [CrossRef]
  25. D. Cheng. Matrix and Polynomial Approach to Dynamic Control Systems, Science Press, Beijing. 2002.
  26. D. Cheng. From Dimension-Free Matrix Theory to Cross-Dimensional Dynamic Systems, Elsevier, London. 2019.
  27. D. Cheng and H. Qi. Semi-tensor Product of Matrices–Theory and Applications, Science Press, Beijing, 2nd edition, 2011. In Chinese.
  28. D. Cheng, Z. Xu, and T. Shen. Equivalence-based model of dimension-varying linear systems. IEEE Trans. Autom. Control 2020, 65, 5444–5449. [Google Scholar] [CrossRef]
  29. D. Cheng and Z. Liu. A new semi-tensor product of matrices. Control Theory Technol. 2019, 17, 4–12. [Google Scholar] [CrossRef]
  30. D. Cheng. Semi-tensor product of matrices and its application to Morgan’s problem. Sci. China (Ser. F) 2001, 44, 195–212. [Google Scholar]
  31. D. Cheng, H. Qi, and Z. Li. Analysis and Control of Boolean Networks: A Semi-tensor Product Approach, 2011.
  32. D. Cheng, H. Qi, and A. Xue. A survey on semi-tensor product of matrices. J. Syst. Sci. Complex. 2007, 20, 304–322. [Google Scholar] [CrossRef]
  33. L. Cheng and J. Pearson. Frequency domain synthesis of multivariable linear regulators. IEEE Trans. Autom. Control 1978, 23, 3–15. [Google Scholar] [CrossRef]
  34. K. E. Chu. Singular value and generalized singular value decompositions and the solution of linear matrix equations. Linear Algebra Appl. 1987, 88–89, 83–98. [Google Scholar]
  35. M. A. Clifford. Preliminary sketch of biquaternions. Proc. Lond. Math. Soc. 1873, 4, 381–395. [Google Scholar]
  36. J. Cockle. On systems of algebra involving more than one imaginary and on equations of the fifth degree. London, Edinburgh, Dublin Philos. Mag. J. Sci. 1849, 35, 434–437. [Google Scholar] [CrossRef]
  37. N. Cohen and S.D. Leo. The quaternionic determinant. Electron. J. Linear Algebra 2000, 7, 100–111. [Google Scholar]
  38. D. Condurache and A. Burlacu. Dual tensors based solutions for rigid body motion parameterization. Mech. Mach. Theory 2014, 74, 390–412. [Google Scholar] [CrossRef]
  39. D. Condurache and I.A. Ciureanu. A novel solution for AX=YB sensor calibration problem using dual Lie algebra. In 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT’19), pages 302–307, Paris, France, 2019.
  40. D.S. Cvetković-Ilić and Y. Wei. Algebraic Properties of Generalized Inverses, Springer, Singapore. 2017.
  41. D. S. Cvetković-Ilić. The solutions of some operator equations. J. Korean Math. Soc. 2008, 45, 1417–1425. [Google Scholar] [CrossRef]
  42. A. Dajić. Common solutions of linear equations in a ring, with applications. Electron. J. Linear Algebra 2015, 30, 66–79. [Google Scholar]
  43. A. Dajić and J. J. Koliha. Positive solutions to the equations AX=C and XB=D for Hilbert space operators. J. Math. Anal. Appl. 2007, 333, 567–576. [Google Scholar] [CrossRef]
  44. K. Daniilidis. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
  45. B. De Moor and H. Zha. A tree of generalization of the ordinary singular value decomposition. Linear Algebra Appl. 1991, 147, 469–500. [Google Scholar] [CrossRef]
  46. F. De Terán and F. M. Dopico. Consistency and efficient solution of the Sylvester equation for -congruence. Electron. J. Linear Algebra 2011, 22, 849–863. [Google Scholar]
  47. M. Dehghan and M. Hajarian. An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation. Appl. Math. Comput. 2008, 202, 571–588. [Google Scholar]
  48. M. Dehghan and M. Hajarian. The general coupled matrix equations over generalized bisymmetric matrices. Linear Algebra Appl. 2010, 432, 1531–1552. [Google Scholar] [CrossRef]
  49. M. Dehghan and M. Hajarian. An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices. Appl. Math. Modell. 2010, 34, 639–654. [Google Scholar] [CrossRef]
  50. M. Dehghan and M. Hajarian. Iterative algorithms for the generalized centro-symmetric and central anti-symmetric solutions of general coupled matrix equations. Eng. Comput. 2012, 29, 528–560. [Google Scholar] [CrossRef]
  51. M. Dehghani-Madiseh and M. Dehghan. Generalized solution sets of the interval generalized Sylvester matrix equation ∑i=1pAiXi+∑j=1qYjBj=C and some approaches for inner and outer estimations. Comput. Math. Appl. 2014, 68, 1758–1774. [Google Scholar] [CrossRef]
  52. H. Diao, X. Shi, and Y. Wei. Effective condition numbers and small sample statistical condition estimation for the generalized Sylvester equation. Sci. China-Math. 2013, 56, 967–982. [Google Scholar] [CrossRef]
  53. F. Ding and T. Chen. Iterative least-squares solutions of coupled Sylvester matrix equations. Syst. Control Lett. 2005, 54, 95–107. [Google Scholar] [CrossRef]
  54. F. Ding and T. Chen. On iterative solutions of general coupled matrix equations. SIAM J. Control Optim. 2006, 44, 2269–2284. [Google Scholar] [CrossRef]
  55. W. Ding, Y. Li, D. Wang, and A. Wei. Constrained least squares solution of Sylvester equation. Math. Model. Control 2021, 1, 112–120. [Google Scholar] [CrossRef]
  56. W. Ding and Y. Wei. Theory and Computation of Tensors: Multi-dimensional Arrays, Elsevier/Academic Press, London,. 2016.
  57. W. Ding, Y. Li, and D. Wang. A real method for solving quaternion matrix equation X-AX^B=C based on semi-tensor product of matrices. Adv. Appl. Clifford Algebr. 2021, 31, 78. [Google Scholar] [CrossRef]
  58. A. Dmytryshyn, V. Futorny, T. Klymchuk, and V.V. Sergeichuk. Generalization of Roth’s solvability criteria to systems of matrix equations. Linear Algebra Appl. 2017, 527, 294–302. [Google Scholar] [CrossRef]
  59. A. Dmytryshyn and B. Kågström. Coupled Sylvester-type matrix equations and block diagonalization. SIAM J. Matrix Anal. Appl. 2015, 36, 580–593. [Google Scholar] [CrossRef]
  60. M. Dobovišek. On minimal solutions of the matrix equation AX-YB=0. Linear Algebra Appl. 2001, 325, 81–99. [Google Scholar] [CrossRef]
  61. R. G. Douglas. On majorization, factorization and range inclusion of operators on Hilbert space. Proc. Amer. Math. Soc. 1966, 17, 413–416. [Google Scholar] [CrossRef]
  62. M. P. Drazin. Pseudo-inverses in associative rings and semigroups. Am. Math. Mon. 1958, 65, 506–514. [Google Scholar] [CrossRef]
  63. P.K. Draxl. Skew Field. Cambridge University Press, London, 1983.
  64. G.R. Duan. Generalized Sylvester Equations: Unified Parametric Solutions, Taylor and Francis Group/CRC Press, Boca Raton. 2015.
  65. A. Einstein. The foundation of the general theory of relativity. In A.J. Kox, M.J. Klein, and R. Schulmann, editors, The Collected Papers of Albert Einstein (Vol. 6), pages 146–200. Princeton University Press, Princeton, 1997.
  66. E. Emre and L.M. Silverman. The equation XR+QY=Φ: A characterization of solutions. SIAM J. Control Optim. 1981, 19, 33–38. [Google Scholar]
  67. E. Emre. The polynomial equation QQc+RPc=Φ with application to dynamic feedback. SIAM J. Control Optim. 1980, 18, 611–620. [Google Scholar] [CrossRef]
  68. F. Ernst, L. Richter, L. Matthäus, V. Martens, et al. Non-orthogonal tool/flange and robot/world calibration. Int. J. Med Robot. Comput. Assist. Surg. 2012, 8, 407–420. [Google Scholar] [CrossRef] [PubMed]
  69. R. Fan, M. Zeng, and Y. Yuan. The solutions to some dual matrix equations. Miskolc Math. Notes 2024, 25, 679–691. [Google Scholar] [CrossRef]
  70. X. Fan, Y. Li, Z. Liu, and J. Zhao. Solving quaternion linear system based on semi-tensor product of quaternion matrices. Symmetry 2022, 14, 1359. [Google Scholar] [CrossRef]
  71. X. Fan, Y. Li, Z. Liu, and J. Zhao. The (anti)-η-Hermitian solution of quaternion linear system. Filomat 2024, 38, 4679–4695. [Google Scholar] [CrossRef]
  72. X. Fan, Y. Li, J. Sun, and J. Zhao. Solving quaternion linear system AXB=E based on semi-tensor product of quaternion matrices. Banach J. Math. Anal. 2023, 17, 25. [Google Scholar] [CrossRef]
  73. X. Fan, Y. Li, M. Zhang, and J. Zhao. Solving the least squares (anti)-Hermitian solution for quaternion linear systems. Comput. Appl. Math. 2022, 41, 371. [Google Scholar] [CrossRef]
  74. X. Fang, J. Yu, and H. Yao. Solutions to operator equations on Hilbert C*-modules. Linear Algebra Appl. 2009, 431, 2142–2153. [Google Scholar] [CrossRef]
  75. J. G. Farias, E.D. Pieri, and D. Martins. A review on the applications of dual quaternions. Machines 2024, 12, 402. [Google Scholar] [CrossRef]
  76. R.B. Feinberg. Equivalence of partitioned matrices. J. Res. Nat. Bur. Stand.–B. Math. Sci. 1976; 97.
  77. J. Feinstein and Y. Bar-Ness. On the uniqueness minimal solution of the matrix polynomial equation A(λ)X(λ)+Y(λ)B(λ)=C(λ). J. Frankl. Inst. 1980, 310, 131–134. [Google Scholar] [CrossRef]
  78. J. Feinstein and Y. Bar-Ness. The solution of the matrix polynomial A(s)X(s)+B(s)Y(s)=C(s). IEEE Trans. Autom. Control.
  79. I. Fischer. Dual-Number Methods in Kinematics, Statics and Dynamics, CRC Press, Boca Raton. 1999.
  80. H. Flanders and H.K. Wimmer. On the matrix equations AX-XB=C and AX-YB=C. SIAM J. Appl. Math. 1977, 32, 707–710. [Google Scholar]
  81. C. Flaut and V. Shpakivskyi. Real matrix representations for the complex quaternions. Adv. Appl. Clifford Algebr. 2013, 23, 657–671. [Google Scholar] [CrossRef]
  82. P. A. Fuhrmann. Algebraic system theory: An analyst’s point of view. J. Frankl. Inst. 1976, 301, 521–540. [Google Scholar] [CrossRef]
  83. V. Futorny, T. Klymchuk, and V.V. Sergeichuk. Roth’s solvability criteria for the matrix equations AX-X^B=C and X-AX^B=C over the skew field of quaternions with an involutive automorphism qq^. Linear Algebra Appl. 2016, 510, 246–258. [Google Scholar] [CrossRef]
  84. P. Gabriel. Unzerlegbare Darstellungen I. Manuscripta Math. 1972, 6, 71–103. [Google Scholar] [CrossRef]
  85. P.R. Girard. Quaternions, Clifford Algebras and Relativistic Physics, Birkhäuser, Basel, Switzerland. 2007.
  86. I. Gohberg, M.A. Kaashoek, and L. Lerer. On a class of entire matrix function equations. Linear Algebra Appl. 2007, 425, 434–442. [Google Scholar] [CrossRef]
  87. I. Gohberg, M.A. Kaashoek, and F. Schagen. Partially Specified Matrices and Operators: Classification, Completion, Applications, Birkhäuser Verlag, Basel. 1995. [Google Scholar]
  88. I. Gohberg, M.A. Kaashoek, and L. Lerer. The resultant for regular matrix polynomials and quasi commutativity. Indiana Univ. Math. J. 2008, 57, 2793–2813. [Google Scholar] [CrossRef]
  89. G.H. Golub and C.F.V. Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 1983.
  90. G. H. Golub and H. Zha. Perturbation analysis of the canonical correlations of matrix pairs. Linear Algebra Appl. 1994, 210, 3–28. [Google Scholar] [CrossRef]
  91. D. Gu, D. Zhang, and Q. Liu. Parametric control to permanent magnet synchronous motor via proportional plus integral feedback. Trans. Inst. Meas. Control 2021, 43, 925–932. [Google Scholar] [CrossRef]
  92. Y.L. Gu and J.Y.S. Luh. Dual-number transformations and its applications to robotics. IEEE Journal of Robotics and Automation, -3.
  93. R. M. Guralnick. Roth’s theorems and decomposition of modules. Linear Algebra Appl. 1980, 39, 155–165. [Google Scholar]
  94. R. M. Guralnick. Matrix equivalence and isomorphism of modules. Linear Algebra Appl. 1982, 43, 125–136. [Google Scholar] [CrossRef]
  95. R. M. Guralnick. Roth’s theorems for sets of matrices. Linear Algebra Appl. 1985, 71, 113–117. [Google Scholar] [CrossRef]
  96. W. H. Gustafson. Roth’s theorems over commutative rings. Linear Algebra Appl. 1979, 23, 245–251. [Google Scholar] [CrossRef]
  97. W. H. Gustafson. Quivers and matrix equations. Linear Algebra Appl. 1995, 231, 159–174. [Google Scholar] [CrossRef]
  98. W. H. Gustafson. and J.M. Zelmanowitz. On matrix equivalence and matrix equations. Linear Algebra Appl. 1979, 27, 219–224. [Google Scholar] [CrossRef]
  99. J. Ha. Probabilistic framework for hand-eye and robot-world calibration AX=YB. IEEE Trans. Robot. 2023, 39, 1196–1211. [Google Scholar] [CrossRef]
  100. M. Hajarian. Matrix form of the CGS method for solving general coupled matrix equations. Appl. Math. Lett. 2014, 34, 37–42. [Google Scholar] [CrossRef]
  101. M. Hajarian. Developing BiCOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations. Appl. Math. Modell. 2015, 39, 6073–6084. [Google Scholar] [CrossRef]
  102. M. Hajarian. Matrix GPBiCG algorithms for solving the general coupled matrix equations. IET Control Theory Appl. 2015, 9, 74–81. [Google Scholar] [CrossRef]
  103. M. Hajarian. Generalized conjugate direction algorithm for solving the general coupled matrix equations over symmetric matrices. Numer. Algor. 2016, 73, 591–609. [Google Scholar] [CrossRef]
  104. M. Hajarian. Convergence analysis of generalized conjugate direction method to solve general coupled Sylvester discrete-time periodic matrix equations. Int. J. Adapt. Control Signal Process. 2017, 31, 985–1002. [Google Scholar] [CrossRef]
  105. M. Hajarian. Convergence of HS version of BCR algorithm to solve the generalized Sylvester matrix equation over generalized reflexive matrices. J. Frankl. Inst. 2017, 354, 2340–2357. [Google Scholar] [CrossRef]
  106. M. Hajarian. Computing symmetric solutions of general Sylvester matrix equations via Lanczos version of biconjugate residual algorithm. Comput. Math. Appl. 2018, 76, 686–700. [Google Scholar] [CrossRef]
  107. M. Hajarian. Convergence properties of BCR method for generalized Sylvester matrix equation over generalized reflexive and anti-reflexive matrices. Linear Multilinear Algebra 2018, 66, 1975–1990. [Google Scholar] [CrossRef]
  108. W. R. Hamilton. II. On quaternions; or on a new system of imaginaries in algebra. London, Edinburgh, Dublin Philos. Mag. J. Sci. 1844, 25, 10–13. [Google Scholar] [CrossRef]
  109. W.R. Hamilton. Lectures on Quaternions, Hodges and Smith, Dublin. 1853.
  110. R. E. Hartwig. Roth’s equivalence problem in unit regular rings. Proc. Amer. Math. Soc. 1976, 59, 39–44. [Google Scholar]
  111. R. E. Hartwig. A note on light matrices. Linear Algebra Appl. 1987, 97, 153–169. [Google Scholar] [CrossRef]
  112. Z. H. He. The general solution to a system of coupled Sylvester-type quaternion tensor equations involving η-Hermicity. Bull. Iran. Math. Soc. 2019, 45, 1407–1430. [Google Scholar] [CrossRef]
  113. Z. H. He. Pure PSVD approach to Sylvester-type quaternion matrix equations. Electron. J. Linear Algebra 2019, 35, 266–284. [Google Scholar] [CrossRef]
  114. Z. H. He, O.M. Agudelo, Q.W. Wang, and B. De Moor. Two-sided coupled generalized Sylvester matrix equations solving using a simultaneous decomposition for fifteen matrices. Linear Algebra Appl. 2016, 496, 549–593. [Google Scholar] [CrossRef]
  115. Z. H. He, A. Dmytryshyn, and Q.W. Wang. A new system of Sylvester-like matrix equations with arbitrary number of equations and unknowns over the quaternion algebra. Linear Multilinear Algebra 2024, 73, 1269–1309. [Google Scholar]
  116. Z. H. He, Q.W. Wang, and Y. Zhang. The complete equivalence canonical form of four matrices over an arbitrary division ring. Linear Multilinear Algebra 2018, 66, 74–95. [Google Scholar] [CrossRef]
  117. Z. H. He, C. Navasca, and X.X. Wang. Decomposition for a quaternion tensor triplet with applications. Adv. Appl. Clifford Algebr. 2022, 32, 9. [Google Scholar] [CrossRef]
  118. Z. H. He and Q.W. Wang. A real quaternion matrix equation with applications. Linear Multilinear Algebra 2013, 61, 725–740. [Google Scholar] [CrossRef]
  119. Z. H. He and Q.W. Wang. A pair of mixed generalized Sylvester matrix equations. J. Shanghai Univ. (Natural Sci.) 2014, 20, 138–156. [Google Scholar]
  120. Z. H. He, C. Navasca, and Q.W. Wang. Tensor decompositions and tensor equations over quaternion algebra. arXiv 2017. [Google Scholar]
  121. Z. H. He. Sylvester-type quaternion matrix equations with arbitrary equations and arbitrary unknowns. arXiv 2020, arXiv:2006.00189v1. [Google Scholar]
  122. Z. H. He, J. Liu, and T.Y. Tam. The general ϕ-Hermitian solution to mixed pairs of quaternion matrix Sylvester equations. Electron. J. Linear Algebra 2017, 32, 475–499. [Google Scholar] [CrossRef]
  123. Z. H. He and Q.W. Wang. A system of periodic discrete-time coupled Sylvester quaternion matrix equations. Algebra Colloq. 2017, 24, 169–180. [Google Scholar] [CrossRef]
  124. Z. H. He, Q.W. Wang, and Y. Zhang. A system of quaternary coupled Sylvester-type real quaternion matrix equations. Automatica 2018, 87, 25–31. [Google Scholar] [CrossRef]
  125. Z. H. He, Q.W. Wang, and Y. Zhang. A simultaneous decomposition for seven matrices with applications. J. Comput. Appl. Math. 2019, 349, 93–113. [Google Scholar] [CrossRef]
  126. Z. H. He, Y.Z. Xu, Q.W. Wang, and C.Q. Zhang. The equivalence canonical forms of two sets of five quaternion matrices with applications. Math. Meth. Appl. Sci. 2025, 48, 5483–5505. [Google Scholar] [CrossRef]
  127. R. A. Horn and F. Zhang. A generalization of the complex Autonne-Takagi factorization to quaternion matrices. Linear Multilinear Algebra 2012, 60, 1239–1244. [Google Scholar] [CrossRef]
  128. J. Huang. On parameter iteration method for solving the mixed-type Lyapunov matrix equation. Math. Numer. Sin. 2007, 29, 285–292. [Google Scholar]
  129. L. Huang. The solvability of linear matrix equation over a central simple algebra. Linear Multilinear Algebra 1996, 40, 353–363. [Google Scholar] [CrossRef]
  130. L. Huang. The quaternion matrix equation ∑AiXBi=E. Acta Math. Sin. New Ser. 1998, 14, 91–98. [Google Scholar] [CrossRef]
  131. L. Huang. The explicit solutions and solvability of linear matrix equations. Linear Algebra Appl. 2000, 311, 195–199. [Google Scholar] [CrossRef]
  132. L. Huang and Q. Zeng. The matrix equation AXB+CYD=E over a simple Artinian ring. Linear Multilinear Algebra 1995, 38, 225–232. [Google Scholar] [CrossRef]
  133. L. Huang and J. Liu. The extension of Roth’s theorem for matrix equations over a ring. Linear Algebra Appl. 1997, 259, 229–235. [Google Scholar] [CrossRef]
  134. S. Huang, G. Zhao, and M. Chen. Tensor extreme learning design via generalized Moore-Penrose inverse and triangular type-2 fuzzy sets. Neural Comput. Appl. 2019, 31, 5641–5651. [Google Scholar] [CrossRef]
  135. J. W. Huo, Y.Z. Xu, and Z.H. He. A simultaneous decomposition for a quaternion tensor quaternity with applications. Mathematics 2025, 13, 1679. [Google Scholar] [CrossRef]
  136. N. Ito and H.K. Wimmer. Rank minimization of generalized Sylvester equations over Bezout domains. Linear Algebra Appl. 2013, 439, 592–599. [Google Scholar] [CrossRef]
  137. J. Jaiprasert and P. Chansangiam. Solving the Sylvester-transpose matrix equation under the semi-tensor product. Symmetry 2022, 14, 1094. [Google Scholar] [CrossRef]
  138. A. Jameson and E. Kreindler. Inverse problem of linear optimal control. SIAM J. Control 1973, 11, 1–19. [Google Scholar] [CrossRef]
  139. A. Jameson, E. Kreindler, and P. Lancaster. Symmetric, positive semidefinite, and positive definite real solutions of AX=XAT and AX=YB. Linear Algebra Appl. 1992, 160, 189–215. [Google Scholar] [CrossRef]
  140. Z. R. Jia and Q.W. Wang. The general solution to a system of tensor equations over the split quaternion algebra with applications. Mathematics 2025, 13, 644. [Google Scholar] [CrossRef]
  141. T. Jiang and M. Wei. On solutions of the matrix equations X-AXB=C and X-AX¯B=C. Linear Algebra Appl. 2003, 367, 225–233. [Google Scholar]
  142. T. Jiang and S. Ling. On a solution of the quaternion matrix equation AX˜-XB=C and its applications. Adv. Appl. Clifford Algebr. 2013, 23, 689–699. [Google Scholar] [CrossRef]
  143. T. S. Jiang and M.S. Wei. On a solution of the quaternion matrix equation X-AX˜B=C and its application. Acta Math. Sin. Engl. Ser. 2005, 21, 483–490. [Google Scholar] [CrossRef]
  144. Z. Ji, J. Li, X. Zhou, F. Duan, and T. Li. On solutions of matrix equation AXB=C under semi-tensor product. Linear Multilinear Algebra 2021, 69, 1935–1963. [Google Scholar] [CrossRef]
  145. H. Jin, S. Xu, Y. Wang, and X. Liu. The Moore-Penrose inverse of tensors via the M-product. Comput. Appl. Math. 2023, 42, 294. [Google Scholar] [CrossRef]
  146. L. Jin, J. Yan, X. Du, X. Xiao, et al. RNN for solving time-variant generalized Sylvester equation with applications to robots and acoustic source localization. IEEE Trans. Ind. Inform. 2020, 16, 6359–6369. [Google Scholar] [CrossRef]
  147. Y. F. Jing, T.Z. Huang, Y. Zhang, L. Li, et al. Lanczos-type variants of the COCR method for complex nonsymmetric linear systems. J. Comput. Phys. 2009, 228, 6376–6394. [Google Scholar] [CrossRef]
  148. S. Jo, Y. Kim, and E. Ko. On Fuglede-Putnam properties. Positivity 2015, 19, 911–925. [Google Scholar] [CrossRef]
  149. M. A. Kaashoek and L. Lerer. On a class of matrix polynomial equations. Linear Algebra Appl. 2013, 439, 613–620. [Google Scholar] [CrossRef]
  150. B. Kågström. A perturbation analysis of the generalized Sylvester equation (AR-LB,DR-LE)=(C,F). SIAM J. Matrix Anal. Appl. 1994, 15, 1045–1060. [Google Scholar] [CrossRef]
  151. B. Kågström and P. Poromaa. Lapack-style algorithms and software for solving the generalized Sylvester equation and estimating the separation between regular matrix pairs. ACM Trans. Math. Softw. 1996, 22, 78–103. [Google Scholar] [CrossRef]
  152. W.B.V. Kandasamy and F. Smarandache. Dual Numbers, Zip Publishing, Ohio, USA. 2012.
  153. M. M. Karizaki, M. Hassani, M. Amyari, and M. Khosravi. Operator matrix of Moore-Penrose inverse operators on Hilbert C*-modules. Colloq. Math. 2015, 140, 171–182. [Google Scholar] [CrossRef]
  154. Y. Ke and C. Ma. An alternating direction method for nonnegative solutions of the matrix equation AX+YB=C. Comput. Appl. Math. 2017, 36, 359–365. [Google Scholar] [CrossRef]
  155. B. Kenwright. A beginner’s guide to dual-quaternions: What they are, how they work, and how to use them for 3D character hierarchies. In 20th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2012.
  156. E. Kernfeld, M. Kilmer, and S. Aeron. Tensor-tensor products with invertible linear transforms. Linear Algebra Appl. 2015, 485, 545–570. [Google Scholar] [CrossRef]
  157. M. Kilmer, L. Horesh, H. Avron, and E. Newman. Tensor-tensor products for optimal representation and compression. arXiv 2019, arXiv:2001.00046v1.
  158. M. Kilmer and C.D. Martin. Factorization strategies for third-order tensors. Linear Algebra Appl. 2011, 435, 641–658. [Google Scholar] [CrossRef]
  159. N. U. Kiran. Simultaneous triangularization of pseudo-differential systems. J. Pseudo-Differ. Oper. Appl. 2013, 4, 45–61. [Google Scholar] [CrossRef]
  160. T. G. Kolda and B.W. Bader. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  161. D. Kressner, C. Schröder, and D.S. Watkins. Implicit QR algorithms for palindromic and even eigenvalue problems. Numer. Algor. 2009, 51, 209–238. [Google Scholar] [CrossRef]
  162. V. Kučera. Algebraic approach to discrete stochastic control. Kybernetika 1975, 11, 114–147. [Google Scholar]
  163. J.B. Kuipers. Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace and Virtual Reality, Princeton University Press, Princeton. 1999.
  164. I. Kyrchei. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra Appl. 2013, 438, 136–152. [Google Scholar] [CrossRef]
  165. I. Kyrchei. Cramer’s rules for Sylvester quaternion matrix equation and its special cases. Adv. Appl. Clifford Algebr. 2018, 28, 90. [Google Scholar] [CrossRef]
  166. I. Kyrchei. Determinantal representations of solutions to systems of quaternion matrix equations. Adv. Appl. Clifford Algebr. 2018, 28, 23. [Google Scholar] [CrossRef]
  167. I. Kyrchei. Cramer’s rules of η-(skew-)Hermitian solutions to the quaternion Sylvester-type matrix equations. Adv. Appl. Clifford Algebr. 2019, 29, 56. [Google Scholar] [CrossRef]
  168. I. I. Kyrchei. Cramer’s rule for quaternionic systems of linear equations. J. Math. Sci. 2008, 155, 839–858. [Google Scholar] [CrossRef]
  169. I. I. Kyrchei. Analogs of the adjoint matrix for generalized inverses and corresponding Cramer rules. Linear Multilinear Algebra 2008, 56, 453–469. [Google Scholar] [CrossRef]
  170. I. I. Kyrchei. Determinantal representations of the Moore-Penrose inverse matrix over the quaternion skew field. J. Math. Sci. 2012, 180, 23–33. [Google Scholar] [CrossRef]
  171. I.I. Kyrchei. The theory of the column and row determinants in a quaternion linear algebra. In A.R. Baswell, editor, Advances in Mathematics Research, volume 15, pages 301–358. Nova Science Publisher, New York, 2012.
  172. I. I. Kyrchei. Explicit determinantal representation formulas for the solution of the two-sided restricted quaternionic matrix equation. J. Appl. Math. Comput. 2018, 58, 335–365. [Google Scholar] [CrossRef]
  173. I. I. Kyrchei. Cramer’s rule for some quaternion matrix equations. Appl. Math. Comput. 2010, 217, 2024–2030. [Google Scholar]
  174. T.Y. Lam. A First Course in Noncommutative Rings, Springer, New York. 1991.
  175. T.Y. Lam. Introduction to Quadratic Forms over Fields, American Mathematical Society, Providence. 2005.
  176. E.C. Lance. Hilbert C*-Modules: A Toolkit for Operator Algebraists, Cambridge University Press, Cambridge. 1995.
  177. S. G. Lee and Q.P. Vu. Simultaneous solutions of matrix equations and simultaneous equivalence of matrices. Linear Algebra Appl. 2012, 437, 2325–2339. [Google Scholar] [CrossRef]
  178. A. Li, L. Wang, and D. Wu. Simultaneous robot-world and hand-eye calibration using dual-quaternions and Kronecker product. Int. J. Phys. Sci. 2010, 5, 1530–1536. [Google Scholar]
  179. J. F. Li, X.Y. Hu, and X.F. Duan. A symmetric preserving iterative method for generalized Sylvester equation. Asian J. Control 2011, 13, 408–417. [Google Scholar] [CrossRef]
  180. J. Li, L. Tao, W. Li, Y. Chen, and R. Huang. Solvability of matrix equations AX=B,XC=D under semi-tensor product. Linear Multilinear Algebra 2017, 65, 1705–1733. [Google Scholar] [CrossRef]
  181. S. K. Li and T.Z. Huang. LSQR iterative method for generalized coupled Sylvester matrix equations. Appl. Math. Modell. 2012, 36, 3545–3554. [Google Scholar] [CrossRef]
  182. T. Li, Q.W. Wang, and X.F. Zhang. A modified conjugate residual method and nearest Kronecker product preconditioner for the generalized coupled Sylvester tensor equations. Mathematics 2022, 10, 1730. [Google Scholar] [CrossRef]
  183. W. Li. Quaternion Matrices, National University of Defense Technology Press, Changsha, China. 2002. (In Chinese).
  184. A. P. Liao, Z.Z. Bai, and Y. Lei. Best approximate solution of matrix equation AXB+CYD=E. SIAM J. Matrix Anal. Appl. 2005, 27, 675–688. [Google Scholar] [CrossRef]
  185. L. H. Lim. Tensors in computations. Acta Numer. 2021, 30, 555–764. [Google Scholar] [CrossRef]
  186. M. Lin and H.K. Wimmer. The generalized Sylvester matrix equation, rank minimization and Roth’s equivalence theorem. Bull. Aust. Math. Soc. 2011, 84, 441–443. [Google Scholar] [CrossRef]
  187. Y. Lin and Y. Wei. Condition numbers of the generalized Sylvester equation. IEEE Trans. Autom. Control 2007, 52, 2380–2385. [Google Scholar] [CrossRef]
  188. X. Liu, Q.W. Wang, and Y. Zhang. Consistency of quaternion matrix equations AX-XB=C and X-AXB=C. Electron. J. Linear Algebra 2019, 35, 394–407. [Google Scholar]
  189. X. Liu, Y. Li, W. Ding, and R. Tao. A real method for solving octonion matrix equation AXB=C based on semi-tensor product of matrices. Adv. Appl. Clifford Algebr. 2024, 34, 12. [Google Scholar] [CrossRef]
  190. X. Liu and Y. Zhang. Matrices over quaternion algebras. In M.S. Moslehian, editor, Matrix and Operator Equations and Applications, pages 139–183. Springer, Switzerland, 2023.
  191. Y. H. Liu. Ranks of solutions of the linear matrix equation AX+YB=C. Comput. Math. Appl. 2006, 52, 861–872. [Google Scholar] [CrossRef]
  192. Z. Liu, Y. Li, X. Fan, and W. Ding. A new method of solving special solutions of quaternion generalized Lyapunov matrix equation. Symmetry 2022, 14, 1120. [Google Scholar] [CrossRef]
  193. C. Q. Lv and C.F. Ma. BCR method for solving generalized coupled Sylvester equations over centrosymmetric or anti-centrosymmetric matrix. Comput. Math. Appl. 2018, 75, 70–88. [Google Scholar] [CrossRef]
  194. C. Lv and C. Ma. Two parameter iteration methods for coupled Sylvester matrix equations. East Asian J. Appl. Math. 2018, 8, 336–351. [Google Scholar] [CrossRef]
  195. C. Ma, Y. Wu, and Y. Xie. The Newton-type splitting iterative method for a class of coupled Sylvester-like absolute value equation. J. Appl. Anal. Comput. 2024, 14, 3306–3331. [Google Scholar]
  196. C. Ma and T. Yan. A finite iterative algorithm for the general discrete-time periodic Sylvester matrix equations. J. Frankl. Inst. 2022, 359, 4410–4432. [Google Scholar] [CrossRef]
  197. G. Marsaglia and G.P.H. Styan. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 1974, 2, 269–292. [Google Scholar] [CrossRef]
  198. R. Mazurek. A general approach to Sylvester-polynomial-conjugate matrix equations. Symmetry 2024, 16, 246. [Google Scholar] [CrossRef]
  199. M. S. Mehany, Q. Wang, and L. Liu. A system of Sylvester-like quaternion tensor equations with an application. Front. Math. 2024, 19, 749–768. [Google Scholar] [CrossRef]
  200. M. S. Mehany and Q.W. Wang. Three symmetrical systems of coupled Sylvester-like quaternion matrix equations. Symmetry 2022, 14, 550. [Google Scholar] [CrossRef]
  201. C. D. Meyer, Jr. Generalized inverses of block triangular matrices. SIAM J. Appl. Math. 1970, 19, 741–750. [Google Scholar] [CrossRef]
  202. T. Miyata. Note on direct summands of modules. J. Math. Kyoto Univ. 1967, 7, 65–69. [Google Scholar]
  203. Z. N. Moghani, M.M. Karizaki, and M. Khanehgir. Solutions of the Sylvester equation in C*-Modular operators. Ukr. Math. J. 2021, 73, 354–369. [Google Scholar]
  204. B.W. Mooring, Z.S. Roth, and M.R. Driels. Fundamentals of Manipulator Calibration, Wiley, New York. 1991.
  205. Z. Mousavi, R. Eskandari, M.S. Moslehian, and F. Mirzapour. Operator equations AX+YB=C and AXA*+BYB*=C in Hilbert C*-modules. Linear Algebra Appl. 2017, 517, 85–98. [Google Scholar] [CrossRef]
  206. R. Mukundan. Quaternions: From classical mechanics to computer graphics, and beyond. In Proceedings of the 7th Asian Technology Conference in Mathematics, pages 97–106. 2002.
  207. M. Newman. The Smith normal form of a partitioned matrix. J. Res. Nat. Bur. Stand.–B. Math. Sci. 1974, 78B, 3–6.
  208. Q. Niu, X. Wang, and L.Z. Lu. A relaxed gradient based algorithm for solving Sylvester equations. Asian J. Control 2011, 13, 461–464. [Google Scholar] [CrossRef]
  209. V. Olshevsky. Similarity of block diagonal and block triangular matrices. Integr. Equat. Oper. Th. 1992, 15, 853–863. [Google Scholar] [CrossRef]
  210. A. B. Özgüler. The matrix equation AXB+CYD=E over a principal ideal domain. SIAM J. Matrix Anal. Appl. 1991, 12, 581–591. [Google Scholar] [CrossRef]
  211. C. C. Paige and M.A. Saunders. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 1982, 8, 43–71. [Google Scholar] [CrossRef]
  212. C. C. Paige and M.A. Saunders. Towards a generalized singular value decomposition. SIAM J. Numer. Anal. 1981, 18, 398–405. [Google Scholar] [CrossRef]
  213. J. Pan, Z. Fu, H. Yue, X. Lei, et al. Toward simultaneous coordinate calibrations of AX=YB problem by the LMI-SDP optimization. IEEE Trans. Autom. Sci. Eng. 2023, 20, 2445–2453. [Google Scholar] [CrossRef]
  214. Z. Peng and Y. Peng. An efficient iterative method for solving the matrix equation AXB+CYD=E. Numer. Linear Algebra Appl. 2006, 13, 473–485. [Google Scholar] [CrossRef]
  215. R. Penrose. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  216. H. Pottmann and J. Wallner. Computational Line Geometry, Springer, Berlin. 2001.
  217. L. Qi and Z. Luo. Tensor Analysis: Spectral theory and special tensors, SIAM, Philadelphia. 2017.
  218. L. Qi. Standard dual quaternion optimization and its applications in hand-eye calibration and SLAM. Commun. Appl. Math. Comput. 2023, 5, 1469–1483. [Google Scholar] [CrossRef]
  219. L. Qi, H. Chen, and Y. Chen. Tensor Eigenvalues and Their Applications, Springer, Singapore. 2018.
  220. J. Qin and Q.W. Wang. Solving a system of two-sided Sylvester-like quaternion tensor equations. Comput. Appl. Math. 2023, 42, 232. [Google Scholar] [CrossRef]
  221. Z. Qin, Z. Ming, and L. Zhang. Singular value decomposition of third order quaternion tensors. Appl. Math. Lett. 2022, 123, 107597. [Google Scholar] [CrossRef]
  222. H. Radjavi and P. Rosenthal. Simultaneous triangularization, Springer, New York. 2000.
  223. C.R. Rao and S.K. Mitra. Generalized Inverse of Matrices and its Applications, Wiley, New York. 1971.
  224. A. Rehman, Q.W. Wang, I. Ali, M. Akram, et al. A constraint system of generalized Sylvester quaternion matrix equations. Adv. Appl. Clifford Algebr. 2017, 27, 3183–3196. [Google Scholar] [CrossRef]
  225. R. M. Reid. Some eigenvalues properties of persymmetric matrices. SIAM Rev. 1997, 39, 313–316. [Google Scholar] [CrossRef]
  226. B. Y. Ren, Q.W. Wang, and X.Y. Chen. The η-anti-Hermitian solution to a constrained matrix equation over the generalized segre quaternion algebra. Symmetry 2023, 15, 592. [Google Scholar] [CrossRef]
  227. L. Rodman. Topics in Quaternion Linear Algebra, Princeton University Press, Princeton. 2014.
  228. W. E. Roth. The equations AX-YB=C and AX-XB=C in matrices. Proc. Amer. Math. Soc. 1952, 3, 392–396. [Google Scholar] [CrossRef]
  229. B. Savas and L. Eldén. Handwritten digit classification using higher order singular value decomposition. Pattern Recognit. 2007, 40, 993–1003. [Google Scholar] [CrossRef]
  230. C. Segre. The real representations of complex elements and extension to bicomplex systems. Math. Ann. 1892, 40, 413–467. [Google Scholar]
  231. M. Shah, R. Bostelman, S. Legowik, and T. Hong. Calibration of mobile manipulators using 2D positional features. Measurement 2018, 124, 322–328. [Google Scholar] [CrossRef] [PubMed]
  232. M. Shah. Solving the robot-world/hand-eye calibration problem using the Kronecker product. J. Mech. Robot. 2013, 5, 031007. [Google Scholar] [CrossRef]
  233. J. Y. Shao. A general product of tensors with applications. Linear Algebra Appl. 2013, 439, 2350–2366. [Google Scholar] [CrossRef]
  234. X. Sheng. A relaxed gradient based algorithm for solving generalized coupled Sylvester matrix equations. J. Frankl. Inst. 2018, 355, 4282–4297. [Google Scholar] [CrossRef]
  235. A. Shirilord and M. Dehghan. Gradient descent-based parameter-free methods for solving coupled matrix equations and studying an application in dynamical systems. Appl. Numer. Math. 2025, 212, 29–59. [Google Scholar] [CrossRef]
  236. Y. C. Shiu and S. Ahmad. Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX=XB. IEEE Trans. Robot. Automat. 1989, 5, 16–29. [Google Scholar] [CrossRef]
  237. N. D. Sidiropoulos, L.D. Lathauwer, X. Fu, K. Huang, et al. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
  238. C. Song and G. Chen. On solutions of matrix equation XF-AX=C and XF-AX˜=C over quaternion field. J. Appl. Math. Comput. 2011, 37, 57–68. [Google Scholar] [CrossRef]
  239. C. Song and G. Chen. Solutions to matrix equations X-AXB=CY+R and X-AX^B=CY+R. J. Comput. Appl. Math. 2018, 343, 488–500. [Google Scholar] [CrossRef]
  240. C. Song and J. Feng. On solutions to the matrix equations XB-AX=CY and XB-AX^=CY. J. Frankl. Inst. 2016, 353, 1075–1088. [Google Scholar] [CrossRef]
  241. C. Song, J. Feng, X. Wang, and J. Zhao. A real representation method for solving Yakubovich-j-conjugate quaternion matrix equation. Abstr. Appl. Anal. 2014, 2014, 285086. [Google Scholar]
  242. C. Song, G. Chen, and Q. Liu. Explicit solutions to the quaternion matrix equations X-AXF=C and X-AX˜F=C. Int. J. Comput. Math. 2012, 89, 890–900. [Google Scholar] [CrossRef]
  243. G. J. Song and C.Z. Dong. New results on condensed Cramer’s rule for the general solution to some restricted quaternion matrix equations. J. Appl. Math. Comput. 2017, 53, 321–341. [Google Scholar] [CrossRef]
  244. G. J. Song and Q.W. Wang. Condensed Cramer rule for some restricted quaternion linear equations. Appl. Math. Comput. 2011, 218, 3110–3121. [Google Scholar]
  245. G. J. Song, Q.W. Wang, and H.X. Chang. Cramer rule for the unique solution of restricted matrix equations over the quaternion skew field. Comput. Math. Appl. 2011, 61, 1576–1589. [Google Scholar] [CrossRef]
  246. G. J. Song, Q.W. Wang, and S.W. Yu. Cramer’s rule for a system of quaternion matrix equations with applications. Appl. Math. Comput. 2018, 336, 490–499. [Google Scholar]
  247. G. J. Song. Determinantal expression of the general solution to a restricted system of quaternion matrix equations with applications. Bull. Korean Math. Soc. 2018, 55, 1285–1301. [Google Scholar]
  248. W. Song and A. Jin. Observer-based model reference tracking control of the Markov jump system with partly unknown transition rates. Appl. Sci. 2023, 13, 914. [Google Scholar] [CrossRef]
  249. P. S. Stanimirovic. General determinantal representation of pseudoinverses of matrices. Mat. Vesn. 1996, 48, 1–9. [Google Scholar]
  250. L. Sun, B. Zheng, C. Bu, and Y. Wei. Moore-Penrose inverse of tensors via Einstein product. Linear Multilinear Algebra 2016, 64, 686–698. [Google Scholar] [CrossRef]
  251. J.J. Sylvester. Sur l’équation en matrices px=xq. C. R. Acad. Sci. Paris 1884, 99, 67–71, 115–116.
  252. N. Tan, X. Gu, and H. Ren. Simultaneous robot-world, sensor-tip, and kinematics calibration of an underactuated robotic hand with soft fingers. IEEE Access 2018, 6, 22705–22715. [Google Scholar] [CrossRef]
  253. M. Taylor. Pseudo differential operators, Springer, Heidelberg. 1974.
  254. Y. Tian, X. Liu, and Y. Zhang. Least-squares solutions of the generalized reduced biquaternion matrix equations. Filomat 2023, 37, 863–870. [Google Scholar] [CrossRef]
  255. C. C. Took and D.P. Mandic. Augmented second-order statistics of quaternion random signals. Signal Process. 2011, 91, 214–224. [Google Scholar] [CrossRef]
  256. C. C. Took, D.P. Mandic, and F. Zhang. On the unitary diagonalisation of a special class of quaternion matrices. Appl. Math. Lett. 2011, 24, 1806–1809. [Google Scholar] [CrossRef]
  257. H. Trinh, T.D. Tran, and S. Nahavandi. Design of scalar functional observers of order less than (ν-1). Int. J. Control 2006, 79, 1654–1659. [Google Scholar] [CrossRef]
  258. F. E. Udwadia. Dual generalized inverses and their use in solving systems of linear dual equations. Mech. Mach. Theory 2021, 156, 104158. [Google Scholar] [CrossRef]
  259. F. E. Udwadia, E. Pennestri, and D. de Falco. Do all dual matrices have dual Moore-Penrose inverses? Mech. Mach. Theory 2020, 151, 103878. [Google Scholar] [CrossRef]
  260. J. W. van der Woude. Almost non-interacting control by measurement feedback. Syst. Control Lett. 1987, 9, 7–16. [Google Scholar] [CrossRef]
  261. A. Varga. A numerically reliable approach to robust pole assignment for descriptor systems. Futur. Gener. Comp. Syst. 2003, 19, 1221–1230. [Google Scholar]
  262. J. Voight. Quaternion Algebras, Springer, Switzerland. 2021.
  263. D. Wang, Y. Li, and W.X. Ding. Several kinds of special least squares solutions to quaternion matrix equation AXB=C. J. Appl. Math. Comput. 2022, 68, 1881–1899. [Google Scholar] [CrossRef]
  264. G. Wang, Z. Guo, D. Zhang, and T. Jiang. Algebraic techniques for least-squares problem over generalized quaternion algebras: A unified approach in quaternionic and split quaternionic theory. Math. Meth. Appl. Sci. 2020, 43, 1124–1137. [Google Scholar] [CrossRef]
  265. G. Wang, Y. Wei, and S. Qiao. Generalized Inverses: Theory and Computations, Springer, Singapore. 2018.
  266. J. Wang, J. Feng, and H. Huang. Solvability of the matrix equation AX2=B with semi-tensor product. Electron. Res. Arch. 2020, 29, 2249–2267. [Google Scholar]
  267. J. Wang. On solutions of the matrix equation AlX=B with respect to MM-2 semitensor product. J. Math. 2021, 2021, 6651434. [Google Scholar]
  268. J. Wang. Least squares solutions of matrix equation AXB=C under semi-tensor product. Electron. Res. Arch. 2024, 32, 2976–2993. [Google Scholar] [CrossRef]
  269. J. Wang, D. Qu, and F. Xu. A new hybrid calibration method for extrinsic camera parameters and hand-eye transformation. In IEEE International Conference on Mechatronics and Automation, volume 4, pages 1981–1985, Niagara Falls, ON, Canada, 2005.
  270. L. Wang, Q. Wang, and Z. He. The common solution of some matrix equations. Algebra Colloq. 2016, 23, 71–81. [Google Scholar] [CrossRef]
  271. N. Wang. Solvability of the Sylvester equation AX-XB=C under left semi-tensor product. Math. Model. Control 2022, 2, 81–89. [Google Scholar] [CrossRef]
  272. Q. W. Wang and Z.H. He. Systems of coupled generalized Sylvester matrix equations. Automatica 2014, 50, 2840–2844. [Google Scholar] [CrossRef]
  273. Q. W. Wang, J.W. van der Woude, and S.W. Yu. An equivalence canonical form of a matrix triplet over an arbitrary division ring with applications. Sci. China-Math. 2011, 54, 907–924. [Google Scholar] [CrossRef]
  274. Q. W. Wang and X. Wang. A system of coupled two-sided Sylvester-type tensor equations over the quaternion algebra. Taiwan. J. Math. 2020, 24, 1399–1416. [Google Scholar]
  275. Q. W. Wang, X. Wang, and Y. Zhang. A constraint system of coupled two-sided Sylvester-like quaternion tensor equations. Comput. Appl. Math. 2020, 39, 317. [Google Scholar] [CrossRef]
  276. Q. W. Wang, X. Zhang, and J.W. van der Woude. A new simultaneous decomposition of a matrix quaternity over an arbitrary division ring with applications. Commun. Algebra 2012, 40, 2309–2342. [Google Scholar] [CrossRef]
  277. Q. W. Wang. A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity. Linear Algebra Appl. 2004, 384, 43–54. [Google Scholar] [CrossRef]
  278. Q. W. Wang, Z.H. Gao, and J. Gao. A comprehensive review on solving the system of equations AX=C and XB=D. Symmetry 2025, 17, 625. [Google Scholar] [CrossRef]
  279. Q.W. Wang, Z.H. Gao, and Y.F. Li. An overview of methods for solving the system of matrix equations A1XB1=C1 and A2XB2=C2. Preprints 2025. [CrossRef]
  280. Q. W. Wang and Z.H. He. Some matrix equations with applications. Linear Multilinear Algebra 2012, 60, 1327–1353. [Google Scholar] [CrossRef]
  281. Q. W. Wang and Z.H. He. Solvability conditions and general solution for mixed Sylvester equations. Automatica 2013, 49, 2713–2719. [Google Scholar] [CrossRef]
  282. Q. W. Wang, R.Y. Lv, and Y. Zhang. The least-squares solution with the least norm to a system of tensor equations over the quaternion algebra. Linear Multilinear Algebra 2022, 70, 1942–1962. [Google Scholar] [CrossRef]
  283. Q. W. Wang, A. Rehman, Z.H. He, and Y. Zhang. Constraint generalized Sylvester matrix equations. Automatica 2016, 69, 60–64. [Google Scholar] [CrossRef]
  284. Q. W. Wang, L.M. Xie, and Z.H. Gao. A survey on solving the matrix equation AXB=C with applications. Mathematics 2025, 13, 450. [Google Scholar] [CrossRef]
  285. Q.W. Wang and M. Xie. A system of k Sylvester-type quaternion matrix equations with 3k+1 variables. arXiv 2020, arXiv:2007.14536v2.
  286. Q. W. Wang, H.S. Zhang, and G.J. Song. A new solvable condition for a pair of generalized Sylvester equations. Electron. J. Linear Algebra 2009, 18, 289–301. [Google Scholar]
  287. Q. W. Wang and S.Z. Li. Persymmetric and perskewsymmetric solutions to sets of matrix equations over a finite central algebra. Acta Math. Sin. 2004, 47, 27–34. [Google Scholar]
  288. Q. W. Wang, J.H. Sun, and S.Z. Li. Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra. Linear Algebra Appl. 2002, 353, 169–182. [Google Scholar] [CrossRef]
  289. X. Wang, J. Huang, and H. Song. Simultaneous robot-world and hand-eye calibration based on a pair of dual equations. Measurement 2021, 181, 109623. [Google Scholar] [CrossRef]
  290. X. Wang and H. Song. One-step solving the robot-world and hand-eye calibration based on the principle of transference. J. Mech. Robot. 2024, 17, 031014. [Google Scholar]
  291. J. R. Weaver. Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues, eigenvectors. Am. Math. Mon. 1985, 92, 711–717. [Google Scholar] [CrossRef]
  292. M.S. Wei, Y. Li, F. Zhang, and J. Zhao. Quaternion Matrix Computations, Nova Science Publishers, New York. 2018.
  293. T. Wei, W. Ding, and Y. Wei. Singular value decomposition of dual matrices and its application to traveling wave identification in the brain. SIAM J. Matrix Anal. Appl. 2024, 45, 634–660. [Google Scholar] [CrossRef]
  294. H. K. Wimmer. The structure of nonsingular polynomial matrices. Math. Syst. Theory 1981, 14, 367–379. [Google Scholar] [CrossRef]
  295. H. K. Wimmer. The matrix equation X-AXB=C and an analogue of Roth’s theorem. Linear Algebra Appl. 1988, 109, 145–147. [Google Scholar] [CrossRef]
  296. H. K. Wimmer. Consistency of a pair of generalized Sylvester equations. IEEE Trans. Autom. Control 1994, 39, 1014–1016. [Google Scholar] [CrossRef]
  297. H. K. Wimmer. The generalized Sylvester equation in polynomial matrices. IEEE Trans. Autom. Control 1996, 41, 1372–1376. [Google Scholar] [CrossRef]
  298. H. K. Wimmer. Explicit solutions of the matrix equation ∑AiXDi=C. SIAM J. Matrix Anal. Appl. 1992, 13, 1123–1130. [Google Scholar] [CrossRef]
  299. H. K. Wimmer. Roth’s theorems for matrix equations with symmetry constraints. Linear Algebra Appl. 1994, 199, 357–362. [Google Scholar] [CrossRef]
  300. W.A. Wolovich. Linear Multivariable Systems, Springer, New York. 1974.
  301. W.A. Wolovich. Skew prime polynomial matrices. IEEE Trans. Autom. Control 1978, AC-23.
  302. A. G. Wu, G.R. Duan, and Y. Xue. Kronecker maps and Sylvester-polynomial matrix equations. IEEE Trans. Autom. Control 2007, 52, 905–910. [Google Scholar] [CrossRef]
  303. A. G. Wu, W. Liu, C. Li, and G.R. Duan. On j-conjugate product of quaternion polynomial matrices. Appl. Math. Comput. 2013, 219, 11223–11232. [Google Scholar]
  304. A.G. Wu and Y. Zhang. Complex Conjugate Matrix Equations for Systems and Control, Springer, Singapore. 2017.
  305. A. G. Wu, G.R. Duan, and H.H. Yu. On solutions of the matrix equations XF-AX=C and XF-AX¯=C. Appl. Math. Comput. 2006, 183, 932–941. [Google Scholar]
  306. A. G. Wu, G. Feng, J. Hu, and G.R. Duan. Closed-form solutions to the nonhomogeneous Yakubovich-conjugate matrix equation. Appl. Math. Comput. 2009, 214, 442–450. [Google Scholar]
  307. A. G. Wu, Y.M. Fu, and G.R. Duan. On solutions of matrix equations V-AVF=BW and V-AV¯F=BW. Math. Comput. Model. 2008, 47, 1181–1197. [Google Scholar]
  308. A. G. Wu, H.Q. Wang, and G.R. Duan. On matrix equations X-AXF=C and X-AX¯F=C. J. Comput. Appl. Math. 2009, 230, 690–698. [Google Scholar]
  309. A. G. Wu, G.R. Duan, G. Feng, and W. Liu. On conjugate product of complex polynomials. Appl. Math. Lett. 2011, 24, 735–741. [Google Scholar] [CrossRef]
  310. A. G. Wu, G. Feng, W. Liu, and G.R. Duan. The complete solution to the Sylvester-polynomial-conjugate matrix equations. Math. Comput. Model. 2011, 53, 2044–2056. [Google Scholar] [CrossRef]
  311. A. G. Wu, B. Li, Y. Zhang, and G.R. Duan. Finite iterative solutions to coupled Sylvester-conjugate matrix equations. Appl. Math. Modell. 2011, 35, 1065–1080. [Google Scholar] [CrossRef]
  312. A. G. Wu, W. Liu, and G.R. Duan. On the conjugate product of complex polynomial matrices. Math. Comput. Model. 2011, 53, 2031–2043. [Google Scholar] [CrossRef]
  313. F. Wu, C. Li, and Y. Li. Manifold regularization nonnegative triple decomposition of tensor sets for image compression and representation. J. Optim. Theory Appl. 2022, 192, 979–1000. [Google Scholar] [CrossRef]
  314. J. Wu, M. Liu, Y. Zhu, Z. Zou, et al. Globally optimal symbolic hand-eye calibration. IEEE-ASME Trans. Mechatron. 2021, 26, 1369–1379. [Google Scholar] [CrossRef]
  315. L. Wu and H. Ren. Finding the kinematic base frame of a robot by hand-eye calibration using 3D position data. IEEE Trans. Autom. Sci. Eng. 2017, 14, 314–324. [Google Scholar] [CrossRef]
  316. Y. Xi, Z. Liu, Y. Li, R. Tao, et al. On the mixed solution of reduced biquaternion matrix equation ∑i=1nAiXiBi=E with sub-matrix constraints and its application. AIMS Math. 2023, 8, 27901–27923. [Google Scholar] [CrossRef]
  317. L. M. Xie, Q.W. Wang, and Z.H. He. The generalized hand-eye calibration matrix equation AX-YB=C over dual quaternions. Comput. Appl. Math. 2025, 44, 137. [Google Scholar] [CrossRef]
  318. L.M. Xie and Q.W. Wang. A generalized Sylvester dual quaternion matrix equation with applications. Preprints, 2025. [CrossRef]
  319. L. M. Xie and Q.W. Wang. Some novel results on a classical system of matrix equations over the dual quaternion algebra. Filomat 2025, 39, 1477–1490. [Google Scholar] [CrossRef]
  320. M. Xie and Q.W. Wang. Reducible solution to a quaternion tensor equation. Front. Math. China 2020, 15, 1047–1070. [Google Scholar] [CrossRef]
  321. M. Xie, Q.W. Wang, and Y. Zhang. The minimum-norm least squares solutions to quaternion tensor systems. Symmetry 2022, 14, 1460. [Google Scholar] [CrossRef]
  322. M. Y. Xie, Q.W. Wang, Z.H. He, and M.M. Saad. A system of Sylvester-type quaternion matrix equations with ten variables. Acta Math. Sin.-Engl. Ser. 2022, 38, 1399–1420. [Google Scholar] [CrossRef]
  323. G. Xu, M. Wei, and D. Zheng. On solutions of matrix equation AXB+CYD=F. Linear Algebra Appl. 1998, 279, 93–109. [Google Scholar]
  324. Q. Xu. Common Hermitian and positive solutions to the adjointable operator equations AX=C, XB=D. Linear Algebra Appl. 2008, 429, 1–11. [Google Scholar] [CrossRef]
  325. R. Xu, T. Wei, Y. Wei, and H. Yan. UTV decomposition of dual matrices and its applications. Comput. Appl. Math. 2024, 43, 41. [Google Scholar] [CrossRef]
  326. I.M. Yaglom. Complex Numbers in Geometry, Academic Press, New York. 1968.
  327. T. Yan and C. Ma. The BCR algorithms for solving the reflexive or anti-reflexive solutions of generalized coupled Sylvester matrix equations. J. Frankl. Inst. 2020, 357, 12787–12807. [Google Scholar] [CrossRef]
  328. T. Yan and C. Ma. An iterative algorithm for generalized Hamiltonian solution of a class of generalized coupled Sylvester-conjugate matrix equations. Appl. Math. Comput. 2021, 411, 126491. [Google Scholar]
  329. L. Yang, Q.W. Wang, and Z. Kou. A system of tensor equations over the dual split quaternion algebra with an application. Mathematics 2024, 12, 3571. [Google Scholar] [CrossRef]
  330. X. Yang and W. Huang. Backward error analysis of the matrix equations for Sylvester and Lyapunov. J. Sys. Sci. Math. Scis. 2008, 28, 524–534. [Google Scholar]
  331. J. Yao, J. Feng, and M. Meng. On solutions of the matrix equation AX=B with respect to semi-tensor product. J. Frankl. Inst. 2016, 353, 1109–1131. [Google Scholar] [CrossRef]
  332. C. Yu, X. Liu, and Y. Zhang. The generalized quaternion matrix equation AXB+CXD=E. Math. Meth. Appl. Sci. 2020, 43, 8506–8517. [Google Scholar] [CrossRef]
  333. S. Yuan. Least squares pure imaginary solution and real solution of the quaternion matrix equation AXB+CXD=E with the least norm. J. Appl. Math. 2014, 2014, 857081. [Google Scholar]
  334. S. Yuan and A. Liao. Least squares solution of the quaternion matrix equation X-AX̂B=C with the least norm. Linear Multilinear Algebra 2011, 59, 985–998. [Google Scholar] [CrossRef]
  335. S. F. Yuan and Q.W. Wang. Two special kinds of least squares solutions for the quaternion matrix equation AXB+CXD=E. Electron. J. Linear Algebra 2012, 23, 257–274. [Google Scholar]
  336. S.H. Żak. On the polynomial matrix equation AX+YB=C. IEEE Trans. Autom. Control.
  337. F. Zhang, W. Mu, Y. Li, and J. Zhao. Special least squares solutions of the quaternion matrix equation AXB+CXD=E. Comput. Math. Appl. 2016, 72, 1426–1435. [Google Scholar] [CrossRef]
  338. K. Zhang. Iterative Algorithms for Constrained Solutions of Matrix Equations, National Defense Industry Press, Beijing, 2015. In Chinese.
  339. M. Zhang, Y. Li, J. Sun, X. Fan, et al. A new method based on the semi-tensor product of matrices for solving the commutative quaternion matrix equation ∑i=1kAiXBi=C and its application. Bull. Sci. Math. 2025, 199, 103576. [Google Scholar] [CrossRef]
  340. B. Zhou, G.R. Duan, and Z.Y. Li. Gradient based iterative algorithm for solving coupled matrix equations. Syst. Control Lett. 2009, 58, 327–333. [Google Scholar] [CrossRef]
  341. H. Zhuang and Z.S. Roth. Comments on “Calibration of wrist-mounted robotic sensors by solving homogeneous transformation equations of the form AX=XB". IEEE Trans. Robot. Autom. 1991, 7, 877–878. [Google Scholar] [CrossRef]
  342. H. Zhuang, Z.S. Roth, and R. Sudhakar. Simultaneous robot/world and tool/flange calibration by solving homogeneous transformation equations of the form AX=YB. IEEE Trans. Robot. Autom. 1994, 10, 549–554. [Google Scholar] [CrossRef]
  343. K. Ziętak. The properties of the minimax solution of a non-linear matrix equation XY=A. IMA J. Numer. Anal. 1983, 3, 229–244. [Google Scholar] [CrossRef]
  344. K. Ziętak. The Chebyshev solution of the linear matrix equation AX+YB=C. Numer. Math. 1985, 46, 455–478. [Google Scholar] [CrossRef]
  345. K. Ziętak. The lp-solution of the linear matrix equation AX+YB=C. Computing 1984, 32, 153–162. [Google Scholar] [CrossRef]
Figure 1. Geometry of a robotic system
Figure 2. Encrypting and decrypting color images
Figure 3. Core framework of GSE research
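As a minimal computational companion to the GSE AX − YB = C that is the focus of this review, the equation can be rewritten as one linear system via Kronecker vectorization, using vec(AX) = (I ⊗ A)vec(X) and vec(YB) = (Bᵀ ⊗ I)vec(Y), and then solved in the least-squares sense. The following NumPy sketch is an illustration added by the editor, not a method from any of the cited works; the function name `solve_gse` and the dense least-squares approach are assumptions for demonstration only.

```python
import numpy as np

def solve_gse(A, B, C):
    """Least-squares pair (X, Y) for A X - Y B = C via Kronecker vectorization.

    A is m x n, B is q x p, C is m x p; then X is n x p and Y is m x q.
    Illustrative only: dense vectorization scales poorly for large matrices.
    """
    m, n = A.shape
    q, p = B.shape
    # [I_p (x) A, -(B^T (x) I_m)] [vec(X); vec(Y)] = vec(C), column-major vec
    M = np.hstack([np.kron(np.eye(p), A), -np.kron(B.T, np.eye(m))])
    z, *_ = np.linalg.lstsq(M, C.reshape(-1, order="F"), rcond=None)
    X = z[: n * p].reshape(n, p, order="F")
    Y = z[n * p :].reshape(m, q, order="F")
    return X, Y

# Consistent test data built from a known pair (X0, Y0)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 4))
X0 = rng.standard_normal((2, 4))
Y0 = rng.standard_normal((3, 2))
C = A @ X0 - Y0 @ B

X, Y = solve_gse(A, B, C)
print(np.allclose(A @ X - Y @ B, C))  # residual vanishes for consistent C
```

When C does not satisfy the solvability conditions, the same call returns a least-squares approximate pair, which connects to the best-approximation results surveyed in this review.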
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.