Preprint. This version is not peer-reviewed.

Bridging Gauss-Jordan Reduction and Determinant Methods Through Cross-Multiplication-Flip (CMF) Method in Matrix Inversion and Solving Systems of Linear Equations

Submitted: 03 July 2023
Posted: 04 July 2023


Abstract
In this paper, we introduce, as a pedagogical strategy, an internally division-free, straightforward, and symmetrically progressing algorithm for manually computing the matrix inverse and solving systems of linear equations, developed by revisiting the application of elementary row operations in the Gauss-Jordan reduction method and connecting it to the determinant method. The proposed cross-multiplication-flip (CMF) algorithm employs cross-multiplication, similar to the butterfly movement used in computing determinants, as a strategic application of elementary row operations that efficiently reduces the rows; it then flips rows and entries to put an upper triangular matrix into lower triangular form so that the reduction process can continue.

1. Introduction

The concept of the matrix inverse is essential in the solution of systems of linear equations and in various other practical applications. For a nonsingular matrix $A$, there is a unique matrix $A^{-1}$, called the usual inverse of $A$, for which the condition $A^{-1}A = AA^{-1} = I$ holds [1,2]. The usual matrix inverse has the following properties:
$$(A^{-1})^{-1} = A; \qquad (A^T)^{-1} = (A^{-1})^T; \qquad (A^*)^{-1} = (A^{-1})^*; \qquad (AB)^{-1} = B^{-1}A^{-1},$$
where $A^T$ is the transpose of $A$ and $A^*$ is its conjugate transpose. While $A^{-1}$ is generally thought of as unique to nonsingular matrices, there are cases where approximations of the matrix inverse for singular or rectangular matrices are necessary; hence the concept of a generalized matrix inverse $\widetilde{A^{-1}}$, defined as follows [3]:
$$A\,\widetilde{A^{-1}}A = A.$$
In a system of linear equations $Ax = b$, where $A$ is a nonsingular $n \times n$ matrix, left multiplication of the equation by $A^{-1}$ yields $A^{-1}Ax = A^{-1}b$, and with $AA^{-1} = A^{-1}A = I$, the solution is given by $x = A^{-1}b$ [1,2]. When $A$ is not necessarily nonsingular but satisfies $A\,\widetilde{A^{-1}}A = A$, then $Ax = b$ has a solution if and only if $A\,\widetilde{A^{-1}}b = b$ [3]. Consequently, Penrose [4] introduced the Moore-Penrose matrix inverse, denoted by $A^P$, with the following properties:
$$AA^PA = A; \qquad A^PAA^P = A^P; \qquad (AA^P)^* = AA^P; \qquad (A^PA)^* = A^PA.$$
Other generalized matrix inverses have been introduced depending on the relations and applications investigated, such as those arising in differential and difference equations, cryptography, Markov chains, and numerical analysis [5,7]. For instance, the Drazin matrix inverse, denoted by $A^D$, satisfies the following conditions [6]:
$$A^kA^DA = A^k; \qquad A^DAA^D = A^D; \qquad AA^D = A^DA,$$
where $k$, called the index of $A$, is the least nonnegative integer such that $\operatorname{rank} A^{k+1} = \operatorname{rank} A^k$. (For diagonalizable matrices, the rank of $A$ equals the number of its nonzero eigenvalues.) When $k = 0$, the Drazin inverse reduces to the usual matrix inverse; that is, $A^D = A^{-1}$.
With technological tools now applied to the computation of matrix inverses for specific purposes, accurate estimates and consistent generalized matrix inverses can be readily obtained. The goal of this paper, however, is directed to the pedagogical context. Hence, we revisit the fundamental concepts and procedures to facilitate conceptual understanding of mathematical principles and fluency in procedural knowledge. With $x = A^{-1}b$, as the matrix size becomes larger, manually solving systems of linear equations by separately computing the inverse of the coefficient matrix becomes tedious, and the approach becomes less efficient in terms of teaching time. One may instead consider common approaches to solving systems of linear equations without matrix inversion, such as Gaussian elimination, Gauss-Jordan reduction, and LU decomposition.
In the Gauss-Jordan reduction method, solving a linear system and computing a matrix inverse can be addressed independently. For a linear system $Ax = b$, an augmented matrix $[A : b]$ is formed and elementary row operations are performed to transform $A$ into reduced row echelon form. The process in effect multiplies $b$ by $A^{-1}$ without specifically computing the entries of $A^{-1}$.
In matrix inversion, we extend the Gauss-Jordan reduction method by forming the linear system $AX = B$. With $B = I$, then by definition $X = A^{-1}$. Taking $A$ as the coefficient matrix and augmenting it with $I$, we produce $[A : I]$. Here, the columns of $I$ represent $n$ different sets of right-hand-side values for the equations of the linear system. Forming $[A : I]$ and transforming $A$ into reduced row echelon form amounts to solving a system of linear equations with $n$ different right-hand sides at the same time: pre-multiplying $AX = I$ by $A^{-1}$ gives $A^{-1}AX = A^{-1}I$, and finally $X = A^{-1}$. From this point of view, solving a system of linear equations is but a special case of matrix inversion.
The second method of computing the matrix inverse involves the determinant, as indicated in the relation below [1,2]:
$$A^{-1} = \frac{1}{\det A}(\operatorname{adj} A) = \begin{bmatrix} \frac{A_{11}}{\det(A)} & \frac{A_{12}}{\det(A)} & \cdots & \frac{A_{1n}}{\det(A)} \\ \frac{A_{21}}{\det(A)} & \frac{A_{22}}{\det(A)} & \cdots & \frac{A_{2n}}{\det(A)} \\ \vdots & \vdots & & \vdots \\ \frac{A_{n1}}{\det(A)} & \frac{A_{n2}}{\det(A)} & \cdots & \frac{A_{nn}}{\det(A)} \end{bmatrix},$$
where $A_{ij} = A_{ji}$, $A_{ji} = (-1)^{j+i}\det(M_{ji})$, and $M_{ji}$ is the matrix formed by removing the column and row of $A$ in which $a_{ji}$ lies; $A_{ji}$ is the cofactor of $a_{ji}$. Current texts in linear algebra treat these two usually employed methods of matrix inversion, the Gauss-Jordan reduction method through elementary row operations and the determinant method, independently. In this paper, we develop a general approach, applied to matrices with numerical entries, that connects the Gauss-Jordan reduction and determinant methods, and we propose a more straightforward and pedagogically efficient algorithm for manually computing the matrix inverse.
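As a quick illustration of the determinant method, the adjugate formula can be checked mechanically. The following is a minimal sketch using SymPy (the test matrix is the one used in Illustration 1 below; `adjugate()` is SymPy's built-in transposed cofactor matrix):

```python
# A minimal check of the determinant (adjugate) method with SymPy.
import sympy as sp

A = sp.Matrix([[2, 3, 1], [1, 2, 1], [1, -2, 2]])

# adjugate() returns the transposed matrix of cofactors, so
# A^{-1} = adj(A) / det(A).
inv_by_determinant = A.adjugate() / A.det()
assert inv_by_determinant == A.inv()
```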

2. Materials and Methods

Matrix Reduction and Inversion by Cross-Multiplication-Flip (CMF) Method
We compute the inverse of an $n \times n$ matrix $A$ with the augmented matrix $[A:I]_n$ represented as follows:
$$[A:I]_n:\quad \begin{array}{cccc|ccccc} a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \vdots & \vdots & & & & \vdots \\ a_{n-1\,1} & a_{n-1\,2} & \cdots & a_{n-1\,n} & 0 & 0 & \cdots & 1 & 0 \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 & 0 & \cdots & 0 & 1 \end{array}$$
In the Gauss-Jordan reduction, the process of matrix inversion involves transforming $A$ into reduced row echelon form, which in the case of nonsingular matrices is the identity matrix, by performing elementary row operations on the rows of the augmented matrix. We start the reduction by zeroing out the entries under $a_{11}$. Assuming the first entry of each row of $A$ is nonzero, we take the straightforward approach of working on two successive rows at a time, beginning at the top. We multiply $R_2$ by $a_{11}$ and $R_1$ by $a_{21}$; subtracting the results produces a row whose first entry is zero. We continue the process with $R_2$ and $R_3$, then $R_3$ and $R_4$, until $R_{n-1}$ and $R_n$. We write the resulting rows under $[A:I]_n$ and label this $[A:B]_{n-1}$, a submatrix with $n-1$ rows and $n-1$ columns on the left, obtained by disregarding the first column of zero entries.
The process of multiplying the $(i+1)$th row by $a_{i1}$ and the $i$th row by $a_{i+1\,1}$ and subtracting the results leads to a scheme for individually computing each entry of the succeeding submatrix $[A:B]_{n-1}$, which we refer to here as cross-multiplication. The resulting entry of the left matrix, $a_{ij}^{(m)}$, is given by
$$a_{ij}^{(m)} = a_{i1}^{(m-1)}a_{i+1\,j}^{(m-1)} - a_{i+1\,1}^{(m-1)}a_{ij}^{(m-1)},$$
where $(m)$ indicates the number of reductions performed. In determinant form, we can also denote $a_{ij}^{(m)}$ as
$$a_{ij}^{(m)} = \begin{vmatrix} a_{i1}^{(m-1)} & a_{ij}^{(m-1)} \\ a_{i+1\,1}^{(m-1)} & a_{i+1\,j}^{(m-1)} \end{vmatrix}, \qquad a_{ij}^{(0)} = a_{ij},$$
which exhibits the symmetric (butterfly) movement of the algorithm; hence the term cross-multiplication. We also denote the entries of the right matrix by $b_{ij}^{(m)}$, determined in a similar manner, since cross-multiplication is extended across the entire augmented matrix. We now represent the initial results as follows:
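The cross-multiplication step is easy to mechanize. Below is a minimal Python sketch (the function name is our own choice, not the paper's) that produces the next submatrix from the current one, assuming every first entry is nonzero:

```python
def cross_multiply_step(rows):
    """One CMF reduction: pair each row with the next one, form
    a_ij^(m) = a_i1 * a_(i+1)j - a_(i+1)1 * a_ij for every column
    (including the right-hand block), and drop the zeroed first column.
    Assumes every first entry is nonzero (no standby rows)."""
    reduced = []
    for i in range(len(rows) - 1):
        top, bot = rows[i], rows[i + 1]
        reduced.append([top[0] * bot[j] - bot[0] * top[j]
                        for j in range(1, len(top))])
    return reduced
```

Applied to the augmented matrix of Illustration 1 below, one call reproduces $[A:B]_2$ and a second call reproduces $[A:B]_1$.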
$$[A:I]_n:\quad \begin{array}{cccc|ccccc} a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \vdots & \vdots & & & & \vdots \\ a_{n-1\,1} & a_{n-1\,2} & \cdots & a_{n-1\,n} & 0 & 0 & \cdots & 1 & 0 \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 & 0 & \cdots & 0 & 1 \end{array}$$
$$[A:B]_{n-1}:\quad \begin{array}{cccc|cccc} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1\,n-1}^{(1)} & b_{11}^{(1)} & b_{12}^{(1)} & \cdots & b_{1n}^{(1)} \\ a_{21}^{(1)} & a_{22}^{(1)} & \cdots & a_{2\,n-1}^{(1)} & b_{21}^{(1)} & b_{22}^{(1)} & \cdots & b_{2n}^{(1)} \\ \vdots & & & \vdots & \vdots & & & \vdots \\ a_{n-1\,1}^{(1)} & a_{n-1\,2}^{(1)} & \cdots & a_{n-1\,n-1}^{(1)} & b_{n-1\,1}^{(1)} & b_{n-1\,2}^{(1)} & \cdots & b_{n-1\,n}^{(1)} \end{array}$$
$$\vdots$$
$$[A:B]_2:\quad \begin{array}{cc|cccc} a_{11}^{(n-2)} & a_{12}^{(n-2)} & b_{11}^{(n-2)} & b_{12}^{(n-2)} & \cdots & b_{1n}^{(n-2)} \\ a_{21}^{(n-2)} & a_{22}^{(n-2)} & b_{21}^{(n-2)} & b_{22}^{(n-2)} & \cdots & b_{2n}^{(n-2)} \end{array}$$
$$[A:B]_1:\quad \begin{array}{c|cccc} a_{11}^{(n-1)} & b_{11}^{(n-1)} & b_{12}^{(n-1)} & \cdots & b_{1n}^{(n-1)} \end{array}$$
At this point, the last row of $A^{-1}$ can already be determined from $[A:B]_1$, and it should be
$$\begin{array}{cccc} \dfrac{b_{11}^{(n-1)}}{a_{11}^{(n-1)}} & \dfrac{b_{12}^{(n-1)}}{a_{11}^{(n-1)}} & \cdots & \dfrac{b_{1n}^{(n-1)}}{a_{11}^{(n-1)}} \end{array}$$
But we do not compute the last row of $A^{-1}$ yet, to avoid fractions. We now do the first part of the FLIP by taking the first row of each submatrix, beginning from the lowest submatrix $[A:B]_1$, to form an augmented matrix $[C:D]_n$.
$$[C:D]_n:\quad \begin{array}{ccccc|cccc} 0 & 0 & \cdots & 0 & a_{11}^{(n-1)} & b_{11}^{(n-1)} & b_{12}^{(n-1)} & \cdots & b_{1n}^{(n-1)} \\ 0 & 0 & \cdots & a_{11}^{(n-2)} & a_{12}^{(n-2)} & b_{11}^{(n-2)} & b_{12}^{(n-2)} & \cdots & b_{1n}^{(n-2)} \\ \vdots & & & & \vdots & \vdots & & & \vdots \\ 0 & a_{11}^{(1)} & \cdots & a_{1\,n-2}^{(1)} & a_{1\,n-1}^{(1)} & b_{11}^{(1)} & b_{12}^{(1)} & \cdots & b_{1n}^{(1)} \\ a_{11} & a_{12} & \cdots & a_{1\,n-1} & a_{1n} & 1 & 0 & \cdots & 0 \end{array}$$
Note that we cannot perform cross-multiplication here, since the first entries of the left matrix are zero except for that of the last row. We now do the second part of the FLIP by flipping the entries of the left matrix to the left (reversing the order of its columns) to obtain a matrix in lower triangular form.
$$[C:D]_n:\quad \begin{array}{ccccc|cccc} a_{11}^{(n-1)} & 0 & \cdots & 0 & 0 & b_{11}^{(n-1)} & b_{12}^{(n-1)} & \cdots & b_{1n}^{(n-1)} \\ a_{12}^{(n-2)} & a_{11}^{(n-2)} & \cdots & 0 & 0 & b_{11}^{(n-2)} & b_{12}^{(n-2)} & \cdots & b_{1n}^{(n-2)} \\ \vdots & \vdots & & & \vdots & \vdots & & & \vdots \\ a_{1\,n-1}^{(1)} & a_{1\,n-2}^{(1)} & \cdots & a_{11}^{(1)} & 0 & b_{11}^{(1)} & b_{12}^{(1)} & \cdots & b_{1n}^{(1)} \\ a_{1n} & a_{1\,n-1} & \cdots & a_{12} & a_{11} & 1 & 0 & \cdots & 0 \end{array}$$
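The FLIP is likewise a purely mechanical rearrangement. The sketch below (again our own naming; `m` is the width of the right-hand block) collects the first row of each submatrix from the smallest upward, restores the dropped zero columns, and reverses the left block of every row:

```python
def flip(submatrices, m):
    """FLIP: take the first row of each submatrix, from the smallest
    ([A:B]_1) up to the original ([A:I]_n), pad the left block back to
    full width with the zeros that were dropped, then reverse the left
    block so the upper triangular shape becomes lower triangular."""
    n = len(submatrices)          # full width of the left matrix
    flipped = []
    for rows in reversed(submatrices):
        row = rows[0]
        left, right = list(row[:-m]), list(row[-m:])
        left = [0] * (n - len(left)) + left   # restore dropped zero columns
        flipped.append(left[::-1] + right)    # flip the left block only
    return flipped
```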
We repeat the process of performing cross-multiplication through the rows of $[C:D]_n$ to obtain the rows of $[C:D]_{n-1}$. Here, we denote the entries of the left matrix by $a_{ij}^{(m)\prime}$ and those of the right matrix by $b_{ij}^{(m)\prime}$.
$$[C:D]_n:\quad \begin{array}{ccccc|cccc} a_{11}^{(n-1)} & 0 & \cdots & 0 & 0 & b_{11}^{(n-1)} & b_{12}^{(n-1)} & \cdots & b_{1n}^{(n-1)} \\ a_{12}^{(n-2)} & a_{11}^{(n-2)} & \cdots & 0 & 0 & b_{11}^{(n-2)} & b_{12}^{(n-2)} & \cdots & b_{1n}^{(n-2)} \\ \vdots & \vdots & & & \vdots & \vdots & & & \vdots \\ a_{1\,n-1}^{(1)} & a_{1\,n-2}^{(1)} & \cdots & a_{11}^{(1)} & 0 & b_{11}^{(1)} & b_{12}^{(1)} & \cdots & b_{1n}^{(1)} \\ a_{1n} & a_{1\,n-1} & \cdots & a_{12} & a_{11} & 1 & 0 & \cdots & 0 \end{array}$$
$$[C:D]_{n-1}:\quad \begin{array}{cccc|ccc} a_{11}^{(1)\prime} & 0 & \cdots & 0 & b_{11}^{(1)\prime} & b_{12}^{(1)\prime} & \cdots \\ a_{21}^{(1)\prime} & a_{22}^{(1)\prime} & \cdots & 0 & b_{21}^{(1)\prime} & b_{22}^{(1)\prime} & \cdots \\ \vdots & & & \vdots & \vdots & & \\ a_{n-2\,1}^{(1)\prime} & \cdots & a_{n-2\,n-2}^{(1)\prime} & 0 & b_{n-2\,1}^{(1)\prime} & b_{n-2\,2}^{(1)\prime} & \cdots \\ a_{n-1\,1}^{(1)\prime} & \cdots & a_{n-1\,n-2}^{(1)\prime} & a_{n-1\,n-1}^{(1)\prime} & b_{n-1\,1}^{(1)\prime} & b_{n-1\,2}^{(1)\prime} & \cdots \end{array}$$
$$\vdots$$
$$[C:D]_2:\quad \begin{array}{cc|ccc} a_{11}^{(n-2)\prime} & 0 & b_{11}^{(n-2)\prime} & b_{12}^{(n-2)\prime} & \cdots \\ a_{21}^{(n-2)\prime} & a_{22}^{(n-2)\prime} & b_{21}^{(n-2)\prime} & b_{22}^{(n-2)\prime} & \cdots \end{array}$$
$$[C:D]_1:\quad \begin{array}{c|ccc} a_{11}^{(n-1)\prime} & b_{11}^{(n-1)\prime} & b_{12}^{(n-1)\prime} & \cdots \end{array}$$
Since the left matrix in $[C:D]_n$ is in lower triangular form, the first row of every succeeding submatrix $[C:D]_i$ takes the form
$$[C:D]_i:\quad \begin{array}{cccc|ccc} a_{11}^{(i)\prime} & 0 & \cdots & 0 & b_{11}^{(i)\prime} & b_{12}^{(i)\prime} & \cdots \end{array}$$
that is, $a_{11}^{(i)\prime} \neq 0$ and $0$ elsewhere.
To restore the original sequence of rows and columns, we now write the rows of the augmented matrix leading to the inverse of A by collecting the first rows of [ C : D ] i beginning from [ C : D ] 1 and doing the FUL movement.
$$\begin{array}{ccccc|cccc} a_{11}^{(n-1)\prime} & 0 & \cdots & 0 & 0 & b_{11}^{(n-1)\prime} & b_{12}^{(n-1)\prime} & \cdots & b_{1n}^{(n-1)\prime} \\ 0 & a_{11}^{(n-2)\prime} & \cdots & 0 & 0 & b_{11}^{(n-2)\prime} & b_{12}^{(n-2)\prime} & \cdots & b_{1n}^{(n-2)\prime} \\ \vdots & & & & \vdots & \vdots & & & \vdots \\ 0 & 0 & \cdots & a_{11}^{(1)\prime} & 0 & b_{11}^{(1)\prime} & b_{12}^{(1)\prime} & \cdots & b_{1n}^{(1)\prime} \\ 0 & 0 & \cdots & 0 & a_{11}^{(n-1)} & b_{11}^{(n-1)} & b_{12}^{(n-1)} & \cdots & b_{1n}^{(n-1)} \end{array}$$
Note that the left matrix is a diagonal matrix, and we can now specify the inverse by dividing each row of $[C:D]_n$ by the corresponding diagonal entry of $C$. Thus
$$[I:A^{-1}]_n:\quad \begin{array}{ccccc|cccc} 1 & 0 & \cdots & 0 & 0 & \frac{b_{11}^{(n-1)\prime}}{a_{11}^{(n-1)\prime}} & \frac{b_{12}^{(n-1)\prime}}{a_{11}^{(n-1)\prime}} & \cdots & \frac{b_{1n}^{(n-1)\prime}}{a_{11}^{(n-1)\prime}} \\ 0 & 1 & \cdots & 0 & 0 & \frac{b_{11}^{(n-2)\prime}}{a_{11}^{(n-2)\prime}} & \frac{b_{12}^{(n-2)\prime}}{a_{11}^{(n-2)\prime}} & \cdots & \frac{b_{1n}^{(n-2)\prime}}{a_{11}^{(n-2)\prime}} \\ \vdots & & & & \vdots & \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 & 0 & \frac{b_{11}^{(1)\prime}}{a_{11}^{(1)\prime}} & \frac{b_{12}^{(1)\prime}}{a_{11}^{(1)\prime}} & \cdots & \frac{b_{1n}^{(1)\prime}}{a_{11}^{(1)\prime}} \\ 0 & 0 & \cdots & 0 & 1 & \frac{b_{11}^{(n-1)}}{a_{11}^{(n-1)}} & \frac{b_{12}^{(n-1)}}{a_{11}^{(n-1)}} & \cdots & \frac{b_{1n}^{(n-1)}}{a_{11}^{(n-1)}} \end{array}$$
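Putting the two pieces together gives a compact driver for the whole method, with divisions appearing only in this final step, exactly as in the derivation above. The following is a hedged sketch under the same no-zero-first-entry assumption, reusing `cross_multiply_step` and `flip` from the sketches above:

```python
from fractions import Fraction

def cmf_inverse(A):
    """Invert a square matrix by the CMF method: reduce by
    cross-multiplication, FLIP, reduce again, FLIP again, then divide
    each right-hand row by the diagonal entry on the left."""
    n = len(A)
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(A)]

    def reduce_all(rows):
        subs = [rows]
        while len(subs[-1]) > 1:
            subs.append(cross_multiply_step(subs[-1]))
        return subs

    lower = flip(reduce_all(aug), n)    # upper -> lower triangular left block
    diag = flip(reduce_all(lower), n)   # diagonal left block
    return [[row[n + j] / row[i] for j in range(n)]
            for i, row in enumerate(diag)]
```

Exact `Fraction` arithmetic is our own choice here; it keeps every intermediate entry an integer ratio, mirroring the division-free hand computation.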
Cases with Zero First Entries
The derivation of the algorithm assumes that no zero first entries occur. If a row has a zero first entry, it is considered a standby row and is not involved in cross-multiplication within the submatrix where it belongs. Such a row is then transferred to the next submatrix with its first (zero) entry dropped. Below, $a_{21} = 0$; hence, the row is transferred as the first row of the next submatrix, and cross-multiplication is performed between the first and third rows of $[A:I]_n$. To avoid confusion, we can also first move a row with a zero first entry to the bottom of its submatrix before commencing cross-multiplication, especially if it is the first row of the submatrix. If all of the first entries of a submatrix are zero, then the matrix is not invertible.
$$[A:I]_n:\quad \begin{array}{cccc|ccccc} a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 & 0 \\ 0 & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \vdots & \vdots & & & & \vdots \\ a_{n-1\,1} & a_{n-1\,2} & \cdots & a_{n-1\,n} & 0 & 0 & \cdots & 1 & 0 \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 & 0 & \cdots & 0 & 1 \end{array}$$
$$[A:B]_{n-1}:\quad \begin{array}{cccc|ccccc} a_{22} & a_{23} & \cdots & a_{2n} & 0 & 1 & 0 & \cdots & 0 \\ a_{21}^{(1)} & a_{22}^{(1)} & \cdots & a_{2\,n-1}^{(1)} & b_{21}^{(1)} & b_{22}^{(1)} & \cdots & & b_{2n}^{(1)} \\ \vdots & & & \vdots & \vdots & & & & \vdots \\ a_{n-1\,1}^{(1)} & a_{n-1\,2}^{(1)} & \cdots & a_{n-1\,n-1}^{(1)} & b_{n-1\,1}^{(1)} & b_{n-1\,2}^{(1)} & \cdots & & b_{n-1\,n}^{(1)} \end{array}$$
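For completeness, the reduction step can be extended to handle standby rows as just described. The following hedged variant is ours, not the paper's; it adopts one reasonable arrangement (standby rows placed at the bottom of the next submatrix, as happens in Illustration 2 below), and degenerate cases may still require the row permutations the text mentions:

```python
def cross_multiply_step_standby(rows):
    """Cross-multiplication with standby rows: rows whose first entry is
    zero skip cross-multiplication and pass to the next submatrix with
    the leading zero dropped (placed here at the bottom)."""
    active = [r for r in rows if r[0] != 0]
    standby = [r[1:] for r in rows if r[0] == 0]
    if not active:
        raise ValueError("all first entries are zero: matrix is not invertible")
    reduced = [[active[i][0] * active[i + 1][j] - active[i + 1][0] * active[i][j]
                for j in range(1, len(active[i]))]
               for i in range(len(active) - 1)]
    return reduced + standby
```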

3. CMF and Determinant Methods

For this illustration, we make use of a $3 \times 3$ matrix denoted as follows:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
We proceed with reduction by cross-multiplication through the rows of the augmented matrix.
$$[A:I]_3:\quad \begin{array}{ccc|ccc} a_{11} & a_{12} & a_{13} & 1 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 1 & 0 \\ a_{31} & a_{32} & a_{33} & 0 & 0 & 1 \end{array}$$
$$[A:B]_2:\quad \begin{array}{cc|ccc} a_{11}^{(1)} & a_{12}^{(1)} & -a_{21} & a_{11} & 0 \\ a_{21}^{(1)} & a_{22}^{(1)} & 0 & -a_{31} & a_{21} \end{array}$$
$$[A:B]_1:\quad \begin{array}{c|ccc} a_{11}^{(2)} & a_{21}^{(1)}a_{21} & -a_{11}^{(1)}a_{31} - a_{21}^{(1)}a_{11} & a_{11}^{(1)}a_{21} \end{array}$$
We let
$$A^{-1} = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix},$$
and by the determinant method,
$$A^{-1} = \begin{bmatrix} \frac{A_{11}}{\det(A)} & \frac{A_{12}}{\det(A)} & \frac{A_{13}}{\det(A)} \\ \frac{A_{21}}{\det(A)} & \frac{A_{22}}{\det(A)} & \frac{A_{23}}{\det(A)} \\ \frac{A_{31}}{\det(A)} & \frac{A_{32}}{\det(A)} & \frac{A_{33}}{\det(A)} \end{bmatrix}.$$
Now we introduce the following lemma:
Lemma 1.
If $A$ is a $3 \times 3$ matrix, then $\det(A) = \dfrac{a_{11}^{(2)}}{a_{21}}$, where
$$a_{ij}^{(m)} = \begin{vmatrix} a_{i1}^{(m-1)} & a_{ij}^{(m-1)} \\ a_{i+1\,1}^{(m-1)} & a_{i+1\,j}^{(m-1)} \end{vmatrix}; \qquad a_{ij}^{(0)} = a_{ij}.$$
Proof.
By the above definition of $a_{ij}^{(m)}$,
$$\begin{aligned} a_{11}^{(2)} &= a_{11}^{(1)}a_{22}^{(1)} - a_{21}^{(1)}a_{12}^{(1)} \\ &= (a_{11}a_{22} - a_{21}a_{12})(a_{21}a_{33} - a_{31}a_{23}) - (a_{21}a_{32} - a_{31}a_{22})(a_{11}a_{23} - a_{21}a_{13}) \\ &= a_{21}(a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}) \\ &= a_{21}\det(A). \end{aligned}$$
Thus, $\det(A) = a_{11}^{(2)}/a_{21}$. $\square$
Note that each $a_{ij}^{(1)}$ can be expressed in terms of the cofactors $A_{ij}$: $a_{11}^{(1)} = A_{33}$, $a_{12}^{(1)} = -A_{32}$, $a_{21}^{(1)} = A_{13}$, $a_{22}^{(1)} = -A_{12}$, and therefore $a_{11}^{(2)} = -A_{33}A_{12} + A_{13}A_{32} = A_{13}A_{32} - A_{33}A_{12}$.
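Lemma 1 can also be confirmed symbolically. Here is a small SymPy check, where `cmf` is our own helper implementing one cross-multiplication pass in the form defined above:

```python
# Symbolic check of Lemma 1: a11^(2) = a21 * det(A) for a generic 3x3 A.
import sympy as sp

entries = sp.symbols('a11 a12 a13 a21 a22 a23 a31 a32 a33')
A = sp.Matrix(3, 3, list(entries))

def cmf(M):
    """One cross-multiplication pass in determinant form."""
    return sp.Matrix([[M[i, 0] * M[i + 1, j] - M[i + 1, 0] * M[i, j]
                       for j in range(1, M.cols)]
                      for i in range(M.rows - 1)])

a11_2 = cmf(cmf(A))[0, 0]                 # the single surviving entry
assert sp.expand(a11_2 - A[1, 0] * A.det()) == 0
```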
From $[A:B]_1$, using Lemma 1 (so that $a_{21}/a_{11}^{(2)} = 1/\det(A)$) and the convention $A_{ij} = A_{ji}$, we can specify the entries in the last row of $A^{-1}$ as follows:
$$b_{31} = \frac{a_{21}^{(1)}a_{21}}{a_{11}^{(2)}} = \frac{(a_{21}a_{32} - a_{31}a_{22})\,a_{21}}{a_{11}^{(2)}} = \frac{A_{13}\,a_{21}}{a_{11}^{(2)}} = \frac{A_{31}}{\det(A)},$$
$$b_{32} = \frac{-a_{11}^{(1)}a_{31} - a_{21}^{(1)}a_{11}}{a_{11}^{(2)}} = \frac{-(a_{11}a_{22} - a_{21}a_{12})a_{31} - (a_{21}a_{32} - a_{31}a_{22})a_{11}}{a_{11}^{(2)}} = \frac{(a_{12}a_{31} - a_{32}a_{11})\,a_{21}}{a_{11}^{(2)}} = \frac{A_{23}\,a_{21}}{a_{11}^{(2)}} = \frac{A_{32}}{\det(A)},$$
and
$$b_{33} = \frac{a_{11}^{(1)}a_{21}}{a_{11}^{(2)}} = \frac{(a_{11}a_{22} - a_{21}a_{12})\,a_{21}}{a_{11}^{(2)}} = \frac{A_{33}\,a_{21}}{a_{11}^{(2)}} = \frac{A_{33}}{\det(A)}.$$
To verify the entries in the first and second rows of $A^{-1}$, we do the FUL technique. For simplicity of notation, we use the cofactor equivalents of the $a_{ij}^{(1)}$.
$$[C:D]_3:\quad \begin{array}{ccc|ccc} a_{11}^{(2)} & 0 & 0 & a_{21}^{(1)}a_{21} & -a_{11}^{(1)}a_{31} - a_{21}^{(1)}a_{11} & a_{11}^{(1)}a_{21} \\ a_{12}^{(1)} & a_{11}^{(1)} & 0 & -a_{21} & a_{11} & 0 \\ a_{13} & a_{12} & a_{11} & 1 & 0 & 0 \end{array}$$
In cofactor form,
$$[C:D]_3:\quad \begin{array}{ccc|ccc} a_{11}^{(2)} & 0 & 0 & A_{13}a_{21} & -A_{33}a_{31} - A_{13}a_{11} & A_{33}a_{21} \\ -A_{32} & A_{33} & 0 & -a_{21} & a_{11} & 0 \\ a_{13} & a_{12} & a_{11} & 1 & 0 & 0 \end{array}$$
$$[C:D]_2:\quad \begin{array}{cc|ccc} a_{11}^{(2)}A_{33} & 0 & -a_{11}^{(2)}a_{21} + A_{32}A_{13}a_{21} & a_{11}^{(2)}a_{11} - A_{32}(A_{33}a_{31} + A_{13}a_{11}) & A_{32}A_{33}a_{21} \\ -A_{32}a_{12} - a_{13}A_{33} & -A_{32}a_{11} & -A_{32} + a_{13}a_{21} & -a_{13}a_{11} & 0 \end{array}$$
Finally, $[C:D]_1$ consists of the single row with left entry $-a_{11}^{(2)}A_{33}A_{32}a_{11}$ and right entries
$$a_{11}^{(2)}A_{33}(-A_{32} + a_{13}a_{21}) + (A_{32}a_{12} + a_{13}A_{33})(-a_{11}^{(2)}a_{21} + A_{32}A_{13}a_{21}),$$
$$-a_{11}^{(2)}A_{33}a_{13}a_{11} + (A_{32}a_{12} + a_{13}A_{33})\left(a_{11}^{(2)}a_{11} - A_{32}(A_{33}a_{31} + A_{13}a_{11})\right),$$
$$(A_{32}a_{12} + a_{13}A_{33})\,A_{32}A_{33}a_{21}.$$
We now verify the results for the remaining entries in the rows of $A^{-1}$, dividing each right entry by the diagonal entry of the corresponding submatrix. For the entries of the second row (diagonal entry $a_{11}^{(2)}A_{33}$), using $a_{11}^{(2)} = A_{13}A_{32} - A_{33}A_{12}$, we have
$$b_{21} = \frac{-a_{21}(A_{13}A_{32} - A_{33}A_{12}) + A_{32}A_{13}a_{21}}{a_{11}^{(2)}A_{33}} = \frac{a_{21}A_{12}}{a_{11}^{(2)}} = \frac{A_{21}}{\det(A)};$$
$$b_{22} = \frac{a_{11}^{(2)}a_{11} - A_{32}(A_{33}a_{31} + A_{13}a_{11})}{a_{11}^{(2)}A_{33}} = \frac{-(A_{12}a_{11} + A_{32}a_{31})}{a_{11}^{(2)}} = \frac{(a_{21}a_{33} - a_{31}a_{23})a_{11} + (a_{11}a_{23} - a_{21}a_{13})a_{31}}{a_{11}^{(2)}} = \frac{a_{21}(a_{11}a_{33} - a_{13}a_{31})}{a_{11}^{(2)}} = \frac{a_{21}A_{22}}{a_{11}^{(2)}} = \frac{A_{22}}{\det(A)};$$
$$b_{23} = \frac{A_{32}A_{33}a_{21}}{a_{11}^{(2)}A_{33}} = \frac{A_{32}a_{21}}{a_{11}^{(2)}} = \frac{-(a_{11}a_{23} - a_{21}a_{13})\,a_{21}}{a_{11}^{(2)}} = \frac{A_{23}}{\det(A)}.$$
For the entries of the first row (diagonal entry $-a_{11}^{(2)}A_{33}A_{32}a_{11}$), using $\det(A) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}$,
$$b_{11} = \frac{a_{11}^{(2)}A_{33}(-A_{32} + a_{13}a_{21}) + (A_{32}a_{12} + a_{13}A_{33})(-a_{11}^{(2)}a_{21} + A_{32}A_{13}a_{21})}{-a_{11}^{(2)}A_{33}A_{32}a_{11}} = \frac{a_{11}^{(2)} - a_{21}(a_{12}A_{12} + a_{13}A_{13})}{a_{11}^{(2)}a_{11}} = \frac{a_{21}\,a_{11}A_{11}}{a_{11}^{(2)}a_{11}} = \frac{a_{21}A_{11}}{a_{11}^{(2)}} = \frac{A_{11}}{\det(A)};$$
$$b_{12} = \frac{-a_{11}^{(2)}A_{33}a_{13}a_{11} + (A_{32}a_{12} + a_{13}A_{33})\left(a_{11}^{(2)}a_{11} - A_{32}(A_{33}a_{31} + A_{13}a_{11})\right)}{-a_{11}^{(2)}A_{33}A_{32}a_{11}} = \frac{-a_{11}a_{21}(a_{12}a_{33} - a_{13}a_{32})}{a_{11}^{(2)}a_{11}} = \frac{a_{21}A_{21}}{a_{11}^{(2)}} = \frac{A_{12}}{\det(A)};$$
$$b_{13} = \frac{(A_{32}a_{12} + a_{13}A_{33})\,A_{32}A_{33}a_{21}}{-a_{11}^{(2)}A_{33}A_{32}a_{11}} = \frac{a_{21}a_{11}(a_{12}a_{23} - a_{13}a_{22})}{a_{11}^{(2)}a_{11}} = \frac{a_{21}A_{31}}{a_{11}^{(2)}} = \frac{A_{13}}{\det(A)}.$$

4. CMF and Solving Systems of Linear Equations

Earlier in this paper, we noted that computing the matrix inverse is an extension of the Gauss-Jordan reduction method of solving systems of linear equations, obtained by forming the system $AX = I$. With the developed algorithm, we can now address the problem of solving linear systems by replacing the columns of $I$ with sets of right-hand-side values of the equations of the system (below, two right-hand sides, with entries $b_{i1}$ and $c_{i1}$, are carried simultaneously).
$$[A:b]_n:\quad \begin{array}{cccc|cc} a_{11} & a_{12} & \cdots & a_{1n} & b_{11} & c_{11} \\ a_{21} & a_{22} & \cdots & a_{2n} & b_{21} & c_{21} \\ \vdots & & & \vdots & \vdots & \vdots \\ a_{n-1\,1} & a_{n-1\,2} & \cdots & a_{n-1\,n} & b_{n-1\,1} & c_{n-1\,1} \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_{n1} & c_{n1} \end{array}$$
$$[A:b]_{n-1}:\quad \begin{array}{cccc|cc} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1\,n-1}^{(1)} & b_{11}^{(1)} & c_{11}^{(1)} \\ a_{21}^{(1)} & a_{22}^{(1)} & \cdots & a_{2\,n-1}^{(1)} & b_{21}^{(1)} & c_{21}^{(1)} \\ \vdots & & & \vdots & \vdots & \vdots \\ a_{n-1\,1}^{(1)} & a_{n-1\,2}^{(1)} & \cdots & a_{n-1\,n-1}^{(1)} & b_{n-1\,1}^{(1)} & c_{n-1\,1}^{(1)} \end{array}$$
$$\vdots$$
$$[A:b]_2:\quad \begin{array}{cc|cc} a_{11}^{(n-2)} & a_{12}^{(n-2)} & b_{11}^{(n-2)} & c_{11}^{(n-2)} \\ a_{21}^{(n-2)} & a_{22}^{(n-2)} & b_{21}^{(n-2)} & c_{21}^{(n-2)} \end{array}$$
$$[A:b]_1:\quad \begin{array}{c|cc} a_{11}^{(n-1)} & b_{11}^{(n-1)} & c_{11}^{(n-1)} \end{array}$$
Here $x_n = \dfrac{b_{11}^{(n-1)}}{a_{11}^{(n-1)}}$ for the first right-hand side, or $x_n = \dfrac{c_{11}^{(n-1)}}{a_{11}^{(n-1)}}$ for the second, and so on. Working our way up, we can solve for $x_{n-1}, x_{n-2}, \ldots, x_2, x_1$ by back substitution. Alternatively, by doing the FLIP, we complete the Gauss-Jordan reduction to readily obtain the solution.
$$[C:d]_n:\quad \begin{array}{ccccc|cc} a_{11}^{(n-1)} & 0 & \cdots & 0 & 0 & b_{11}^{(n-1)} & c_{11}^{(n-1)} \\ a_{12}^{(n-2)} & a_{11}^{(n-2)} & \cdots & 0 & 0 & b_{11}^{(n-2)} & c_{11}^{(n-2)} \\ \vdots & \vdots & & & \vdots & \vdots & \vdots \\ a_{1\,n-1}^{(1)} & a_{1\,n-2}^{(1)} & \cdots & a_{11}^{(1)} & 0 & b_{11}^{(1)} & c_{11}^{(1)} \\ a_{1n} & a_{1\,n-1} & \cdots & a_{12} & a_{11} & b_{11} & c_{11} \end{array}$$
$$[C:d]_{n-1}:\quad \begin{array}{cccc|cc} a_{11}^{(1)\prime} & 0 & \cdots & 0 & b_{11}^{(1)\prime} & c_{11}^{(1)\prime} \\ a_{21}^{(1)\prime} & a_{22}^{(1)\prime} & \cdots & 0 & b_{21}^{(1)\prime} & c_{21}^{(1)\prime} \\ \vdots & & & \vdots & \vdots & \vdots \\ a_{n-2\,1}^{(1)\prime} & \cdots & a_{n-2\,n-2}^{(1)\prime} & 0 & b_{n-2\,1}^{(1)\prime} & c_{n-2\,1}^{(1)\prime} \\ a_{n-1\,1}^{(1)\prime} & \cdots & a_{n-1\,n-2}^{(1)\prime} & a_{n-1\,n-1}^{(1)\prime} & b_{n-1\,1}^{(1)\prime} & c_{n-1\,1}^{(1)\prime} \end{array}$$
$$\vdots$$
$$[C:d]_2:\quad \begin{array}{cc|cc} a_{11}^{(n-2)\prime} & 0 & b_{11}^{(n-2)\prime} & c_{11}^{(n-2)\prime} \\ a_{21}^{(n-2)\prime} & a_{22}^{(n-2)\prime} & b_{21}^{(n-2)\prime} & c_{21}^{(n-2)\prime} \end{array}$$
$$[C:d]_1:\quad \begin{array}{c|cc} a_{11}^{(n-1)\prime} & b_{11}^{(n-1)\prime} & c_{11}^{(n-1)\prime} \end{array}$$
Here, for instance, $x_1 = \dfrac{b_{11}^{(n-1)\prime}}{a_{11}^{(n-1)\prime}}$, $x_2 = \dfrac{b_{11}^{(n-2)\prime}}{a_{11}^{(n-2)\prime}}$, ..., $x_{n-1} = \dfrac{b_{11}^{(1)\prime}}{a_{11}^{(1)\prime}}$, and $x_n = \dfrac{b_{11}^{(n-1)}}{a_{11}^{(n-1)}}$ for the first right-hand side, and similarly with the $c$ entries for the second.
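In code, the linear-system case only changes the width of the right-hand block. Here is a hedged sketch reusing `cross_multiply_step` and `flip` from Section 2, with a single right-hand side (several columns work the same way):

```python
from fractions import Fraction

def cmf_solve(A, b):
    """Solve A x = b by CMF: the same reduction/FLIP machinery as matrix
    inversion, with the single column b in place of the identity block."""
    n = len(A)
    aug = [[Fraction(x) for x in row] + [Fraction(bi)]
           for row, bi in zip(A, b)]

    def reduce_all(rows):
        subs = [rows]
        while len(subs[-1]) > 1:
            subs.append(cross_multiply_step(subs[-1]))
        return subs

    lower = flip(reduce_all(aug), 1)    # one right-hand column
    diag = flip(reduce_all(lower), 1)
    # Row i of the final system reads (diagonal entry) * x_i = rhs.
    return [row[n] / row[i] for i, row in enumerate(diag)]
```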

5. Verifying the Proposed Algorithm with Numerical Examples

We test the developed algorithm on specific cases and verify the results using the definition of the matrix inverse.
Illustration 1.
3 × 3  matrix.
$$A = \begin{bmatrix} 2 & 3 & 1 \\ 1 & 2 & 1 \\ 1 & -2 & 2 \end{bmatrix}$$
$$[A:I]_3:\quad \begin{array}{ccc|ccc} 2 & 3 & 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 & 1 & 0 \\ 1 & -2 & 2 & 0 & 0 & 1 \end{array}$$
$$[A:B]_2:\quad \begin{array}{cc|ccc} 1 & 1 & -1 & 2 & 0 \\ -4 & 1 & 0 & -1 & 1 \end{array}$$
$$[A:B]_1:\quad \begin{array}{c|ccc} 5 & -4 & 7 & 1 \end{array}$$
We collect the first row of each submatrix and do the FLIP:
$$[C:D]_3:\quad \begin{array}{ccc|ccc} 5 & 0 & 0 & -4 & 7 & 1 \\ 1 & 1 & 0 & -1 & 2 & 0 \\ 1 & 3 & 2 & 1 & 0 & 0 \end{array}$$
$$[C:D]_2:\quad \begin{array}{cc|ccc} 5 & 0 & -1 & 3 & -1 \\ 2 & 2 & 2 & -2 & 0 \end{array}$$
$$[C:D]_1:\quad \begin{array}{c|ccc} 10 & 12 & -16 & 2 \end{array}$$
We then collect the first row of each submatrix and do the FLIP again:
$$\begin{array}{ccc|ccc} 10 & 0 & 0 & 12 & -16 & 2 \\ 0 & 5 & 0 & -1 & 3 & -1 \\ 0 & 0 & 5 & -4 & 7 & 1 \end{array}$$
$$[I:A^{-1}]:\quad \begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{6}{5} & -\frac{8}{5} & \frac{1}{5} \\ 0 & 1 & 0 & -\frac{1}{5} & \frac{3}{5} & -\frac{1}{5} \\ 0 & 0 & 1 & -\frac{4}{5} & \frac{7}{5} & \frac{1}{5} \end{array}$$
We verify the result by the definition $AA^{-1} = A^{-1}A = I$:
$$\begin{bmatrix} 2 & 3 & 1 \\ 1 & 2 & 1 \\ 1 & -2 & 2 \end{bmatrix} \begin{bmatrix} \frac{6}{5} & -\frac{8}{5} & \frac{1}{5} \\ -\frac{1}{5} & \frac{3}{5} & -\frac{1}{5} \\ -\frac{4}{5} & \frac{7}{5} & \frac{1}{5} \end{bmatrix} = \begin{bmatrix} \frac{6}{5} & -\frac{8}{5} & \frac{1}{5} \\ -\frac{1}{5} & \frac{3}{5} & -\frac{1}{5} \\ -\frac{4}{5} & \frac{7}{5} & \frac{1}{5} \end{bmatrix} \begin{bmatrix} 2 & 3 & 1 \\ 1 & 2 & 1 \\ 1 & -2 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
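The same check can be run mechanically with the `cmf_inverse` sketch from Section 2; since it uses exact `Fraction` arithmetic, the comparison with the identity is exact:

```python
# Verifying Illustration 1 with the cmf_inverse sketch from Section 2.
A = [[2, 3, 1], [1, 2, 1], [1, -2, 2]]
Ainv = cmf_inverse(A)   # [[6/5, -8/5, 1/5], [-1/5, 3/5, -1/5], [-4/5, 7/5, 1/5]]

n = len(A)
product = [[sum(A[i][k] * Ainv[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
assert product == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```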
Illustration 2.
4 × 4 matrix with resulting zero first entry.
$$A = \begin{bmatrix} 1 & 2 & 1 & 3 \\ 2 & -2 & 0 & 1 \\ 1 & -1 & 2 & 3 \\ 3 & 1 & 1 & 4 \end{bmatrix}$$
$$[A:I]_4:\quad \begin{array}{cccc|cccc} 1 & 2 & 1 & 3 & 1 & 0 & 0 & 0 \\ 2 & -2 & 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & -1 & 2 & 3 & 0 & 0 & 1 & 0 \\ 3 & 1 & 1 & 4 & 0 & 0 & 0 & 1 \end{array}$$
$$[A:B]_3:\quad \begin{array}{ccc|cccc} -6 & -2 & -5 & -2 & 1 & 0 & 0 \\ 0 & 4 & 5 & 0 & -1 & 2 & 0 \\ 4 & -5 & -5 & 0 & 0 & -3 & 1 \end{array}$$
Row 2 in $[A:B]_3$ is a standby row.
$$[A:B]_2:\quad \begin{array}{cc|cccc} 38 & 50 & 8 & -4 & 18 & -6 \\ 4 & 5 & 0 & -1 & 2 & 0 \end{array}$$
$$[A:B]_1:\quad \begin{array}{c|cccc} -10 & -32 & -22 & 4 & 24 \end{array}$$
We collect one row from each submatrix and do the FLIP. Since each row of an augmented matrix represents an equation, we may reduce to lowest terms any row whose entries have a common factor.
$$[C:D]_4:\quad \begin{array}{cccc|cccc} 5 & 0 & 0 & 0 & 16 & 11 & -2 & -12 \\ 5 & 4 & 0 & 0 & 0 & -1 & 2 & 0 \\ 5 & 2 & 6 & 0 & 2 & -1 & 0 & 0 \\ 3 & 1 & 2 & 1 & 1 & 0 & 0 & 0 \end{array}$$
$$[C:D]_3:\quad \begin{array}{ccc|cccc} 20 & 0 & 0 & -80 & -60 & 20 & 60 \\ -10 & 30 & 0 & 10 & 0 & -10 & 0 \\ -1 & -8 & 5 & -1 & 3 & 0 & 0 \end{array}$$
Reducing to lowest terms:
$$\begin{array}{ccc|cccc} 1 & 0 & 0 & -4 & -3 & 1 & 3 \\ 1 & -3 & 0 & -1 & 0 & 1 & 0 \\ 1 & 8 & -5 & 1 & -3 & 0 & 0 \end{array}$$
$$[C:D]_2:\quad \begin{array}{cc|cccc} -3 & 0 & 3 & 3 & 0 & -3 \\ 11 & -5 & 2 & -3 & -1 & 0 \end{array}$$
$$[C:D]_1:\quad \begin{array}{c|cccc} 15 & -39 & -24 & 3 & 33 \end{array}$$
Collect the first rows and do the FLIP:
$$\begin{array}{cccc|cccc} 15 & 0 & 0 & 0 & -39 & -24 & 3 & 33 \\ 0 & -3 & 0 & 0 & 3 & 3 & 0 & -3 \\ 0 & 0 & 1 & 0 & -4 & -3 & 1 & 3 \\ 0 & 0 & 0 & 5 & 16 & 11 & -2 & -12 \end{array}$$
We express the results with a common denominator of 5:
$$[I:A^{-1}]:\quad \begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & -\frac{13}{5} & -\frac{8}{5} & \frac{1}{5} & \frac{11}{5} \\ 0 & 1 & 0 & 0 & -\frac{5}{5} & -\frac{5}{5} & 0 & \frac{5}{5} \\ 0 & 0 & 1 & 0 & -\frac{20}{5} & -\frac{15}{5} & \frac{5}{5} & \frac{15}{5} \\ 0 & 0 & 0 & 1 & \frac{16}{5} & \frac{11}{5} & -\frac{2}{5} & -\frac{12}{5} \end{array}$$
We verify the result as follows:
$$\begin{bmatrix} 1 & 2 & 1 & 3 \\ 2 & -2 & 0 & 1 \\ 1 & -1 & 2 & 3 \\ 3 & 1 & 1 & 4 \end{bmatrix} \cdot \frac{1}{5}\begin{bmatrix} -13 & -8 & 1 & 11 \\ -5 & -5 & 0 & 5 \\ -20 & -15 & 5 & 15 \\ 16 & 11 & -2 & -12 \end{bmatrix} = \frac{1}{5}\begin{bmatrix} -13 & -8 & 1 & 11 \\ -5 & -5 & 0 & 5 \\ -20 & -15 & 5 & 15 \\ 16 & 11 & -2 & -12 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 & 1 & 3 \\ 2 & -2 & 0 & 1 \\ 1 & -1 & 2 & 3 \\ 3 & 1 & 1 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Illustration 3.
Solving a system of linear equations.
$$\begin{aligned} x_1 - x_2 + x_3 + x_4 &= 1 \\ 2x_1 + x_2 - x_3 - x_4 &= 2 \\ x_1 + 2x_2 + x_3 + 2x_4 &= 10 \\ 2x_1 - 2x_2 - x_3 - x_4 &= -4 \end{aligned}$$
We form the augmented matrix and proceed with cross-multiplication as follows
$$[A:b]_4:\quad \begin{array}{cccc|c} 1 & -1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -1 & 2 \\ 1 & 2 & 1 & 2 & 10 \\ 2 & -2 & -1 & -1 & -4 \end{array}$$
$$[A:b]_3:\quad \begin{array}{ccc|c} 3 & -3 & -3 & 0 \\ 3 & 3 & 5 & 18 \\ -6 & -3 & -5 & -24 \end{array} \qquad \text{(reduce before flipping)}$$
$$[A:b]_2:\quad \begin{array}{cc|c} 18 & 24 & 54 \\ 9 & 15 & 36 \end{array} \qquad \text{(reduce before flipping)}$$
$$[A:b]_1:\quad \begin{array}{c|c} 54 & 162 \end{array} \qquad \text{(reduce before flipping)}$$
Reducing $[\,54 \mid 162\,]$ to lowest terms gives $[\,1 \mid 3\,]$, so $x_4 = 3$. At this point, one may consider back substitution instead of flipping. Continuing instead with the FLIP (with the collected rows reduced to lowest terms, and writing $[C:d]$ for the flipped matrices as in Section 4):
$$[C:d]_4:\quad \begin{array}{cccc|c} 1 & 0 & 0 & 0 & 3 \\ 4 & 3 & 0 & 0 & 9 \\ 1 & 1 & -1 & 0 & 0 \\ 1 & 1 & -1 & 1 & 1 \end{array} \qquad x_4 = 3$$
$$[C:d]_3:\quad \begin{array}{ccc|c} 3 & 0 & 0 & -3 \\ 1 & -4 & 0 & -9 \\ 0 & 0 & 1 & 1 \end{array} \qquad x_3 = -1;\ \text{the third row is a standby row and already gives } x_1 = 1$$
$$[C:d]_2:\quad \begin{array}{cc|c} -12 & 0 & -24 \\ 0 & 1 & 1 \end{array} \qquad x_2 = 2, \quad x_1 = 1$$
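This solution can be confirmed with the `cmf_solve` sketch from Section 4, which flips instead of back-substituting and skips the lowest-terms reductions since exact fractions are used:

```python
# Verifying Illustration 3 with the cmf_solve sketch from Section 4.
A = [[1, -1, 1, 1],
     [2, 1, -1, -1],
     [1, 2, 1, 2],
     [2, -2, -1, -1]]
b = [1, 2, 10, -4]

assert cmf_solve(A, b) == [1, 2, -1, 3]   # x1 = 1, x2 = 2, x3 = -1, x4 = 3
```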

6. Conclusions

The computation of the matrix inverse by CMF is an innovation on the Gauss-Jordan reduction method that individually computes the entries of the succeeding reduced augmented matrices through
$$a_{ij}^{(m)} = a_{i1}^{(m-1)}a_{i+1\,j}^{(m-1)} - a_{i+1\,1}^{(m-1)}a_{ij}^{(m-1)} \quad \text{or} \quad a_{ij}^{(m)} = \begin{vmatrix} a_{i1}^{(m-1)} & a_{ij}^{(m-1)} \\ a_{i+1\,1}^{(m-1)} & a_{i+1\,j}^{(m-1)} \end{vmatrix}; \qquad a_{ij}^{(0)} = a_{ij}.$$
Beginning with the augmented matrix $[A:I]_n$, the algorithm continues until a matrix with one row, $[A:B]_1$, is obtained. Taking the first row from each submatrix yields an augmented matrix whose left matrix is in upper triangular form. Doing the FLIP, flipping the rows up and then flipping the rows of the left matrix to the left, produces an augmented matrix whose left matrix is in lower triangular form, $[C:D]_n$. The process of reduction by cross-multiplication is then performed until the last submatrix $[C:D]_1$ is obtained. Collecting the first row from each submatrix $[C:D]_i$ and doing the FLIP movement generates an augmented matrix from which the inverse is determined by dividing the entries of the right matrix by the corresponding diagonal entries of the left matrix.
Solving systems of linear equations by CMF follows the same algorithm as matrix inversion, treating each column of $I$ as a different set of right-hand-side values of the equations of the linear system.
For $n = 3$, CMF proceeds by the same process as the determinant method of matrix inversion, thus bridging the Gauss-Jordan reduction and determinant methods.
The limitation of cross-multiplication occurs when a zero first entry arises, in which case cross-multiplication cannot be performed on that row. This can be compensated for by permuting the rows: the row with the zero first entry is treated as a standby row and placed at the bottom of the succeeding augmented submatrix.

Acknowledgments

The author acknowledges the invaluable service of peer reviewers of this paper.

Conflicts of Interest

The author declares no conflict of interest. This paper was developed solely by the author.

References

  1. Kolman, B.; Hill, D. Elementary Linear Algebra with Applications, 9th ed.; Pearson Education Ltd.: London, UK, 2014.
  2. Anton, H.; Rorres, C.; Kaul, A. Elementary Linear Algebra, 12th ed.; John Wiley and Sons: NY, USA, 2019.
  3. Ben-Israel, A.; Greville, T. Generalized Inverses: Theory and Applications; Springer: New York, NY, USA, 2003.
  4. Penrose, R. A generalized inverse for matrices. Mathematical Proceedings of the Cambridge Philosophical Society 1955, 51.
  5. Campbell, S.L.; Meyer, C.D. Generalized Inverses of Linear Transformations; Pitman: London, UK, 1979.
  6. Drazin, M.P. Pseudo-inverses in associative rings and semigroups. The American Mathematical Monthly 1958, 65.
  7. Hartwig, R.E.; Levine, J. Applications of Drazin inverse to the Hill cryptographic systems, Part III. Cryptologia 1981, 5.