Preprint

Rigorous Asymptotic Perturbation Bounds for Hermitian Matrix Eigendecompositions

This version is not peer-reviewed; a peer-reviewed article of this preprint also exists.

Submitted: 12 September 2025
Posted: 15 September 2025
Abstract
In this paper, we present rigorous asymptotic componentwise perturbation bounds for regular Hermitian indefinite matrix eigendecompositions, obtained by the method of the splitting operators. The asymptotic bounds are derived from the exact nonlinear expressions for the perturbations and make possible to bound each entry of every matrix eigenvector in case of distinct eigenvalues. In contrast to the perturbation analysis of the Schur form of a nonsymmetric matrix, the bounds obtained do not make use of the Kronecker product which reduces significantly the necessary memory and volume of computations. This allows to analyze efficiently the sensitivity of high order problems. The eigenvector perturbation bounds are applied to obtain bounds on the angles between the perturbed and unperturbed one-dimensional invariant subspaces spanned by the corresponding eigenvectors. To reduce the bound conservatism in case of high order problems, we propose to implement probabilistic perturbation bounds based on the Markoff inequality. The analysis is illustrated by two numerical experiments of order 5000.

1. Introduction

The eigenvalue perturbation analysis of real symmetric and complex Hermitian matrices is a well-established part of matrix analysis which is presented in depth in the fundamental works of Kato [12], Wilkinson [31], Parlett [20], Stewart and Sun [27], Bhatia [4], Stewart [26] and Chatelin [7], the surveys of Sun [29] and Li [14], as well as in numerous papers, see for instance [2,11,16,25,28,30] and the references therein. The perturbation analysis of Hermitian decompositions is usually simpler than the analysis of non-symmetric matrix decompositions. (For a survey of the perturbation theory of non-symmetric matrix decompositions, see [10].) The aim of such an analysis is to find perturbation bounds on the eigenvalues and eigenvectors of Hermitian matrices in the case when these matrices are subject to Hermitian or non-Hermitian perturbations. Depending on the problem solved, such an analysis can be a priori, when we want to predict in advance the changes in the eigendecomposition before the perturbation is applied, or a posteriori, when we attempt to find the errors in eigenvalues and eigenvectors based on the computed perturbed quantities [7]. Here we will be concerned with the first type of analysis, the second one being related to the determination of residual bounds as considered by several authors, see Davis and Kahan [8], Parlett [20, Ch. 11], Stewart and Sun [27, Ch. 5], Chatelin [7], and Nakatsukasa [18]. On the other hand, the perturbation analysis can be asymptotic, when we are interested in the effect of vanishing perturbations, or global, when we want to find guaranteed bounds in the case of sufficiently large perturbations. We note that in most cases the authors are concerned mainly with the eigenvalue perturbations, and the eigenvector perturbation analysis is reduced to the sensitivity analysis of the corresponding invariant subspace.
In several cases this sensitivity analysis is restricted to bounding the distance between the unperturbed and perturbed subspaces [28] or to finding the angles between these subspaces [14,26,27,29]. However, in some applications it is necessary to have bounds on the individual elements of a specific eigenvector, not only on the sensitivity of the corresponding invariant subspace. This leads to the necessity of performing a componentwise perturbation analysis of the matrix eigensystem.
In this paper we are interested in carrying out a componentwise perturbation analysis of indefinite Hermitian matrices that allows one to bound each element of every corresponding eigenvector. This analysis is done only for regular eigenvalue problems, i.e., perturbation problems for matrices with distinct eigenvalues. The singular problems, corresponding to multiple eigenvalues, are treated by different techniques, see [6,15]. The regular problems can be solved simply and efficiently by using the method of splitting operators [13], which has already been applied to several other matrix perturbation problems. It is important that the bounds on the eigenvector elements obtained by this method can also be used to bound the sensitivity of the invariant subspace spanned by the corresponding eigenvector.
Theoretically, the perturbation analysis of Hermitian decompositions can be done by using some of the methods intended for non-Hermitian problems. This, however, may require much larger memory and volume of computations. For instance, the similar method for perturbation analysis of the Schur form proposed in [21] requires the construction of large matrices via the Kronecker product. The method presented in this paper avoids the use of such products, which makes it possible to analyze much larger problems.
Consider a Hermitian matrix A ∈ C^{n×n}. Using a unitary transformation matrix U ∈ C^{n×n}, the matrix A is reduced to the diagonal form

Λ = U^H A U = diag(λ_1, λ_2, ..., λ_n),    (1)

where λ_1, λ_2, ..., λ_n are the eigenvalues of A and U = [U_1, U_2, ..., U_n] is the matrix of the corresponding orthonormal eigenvectors. Note that the eigenvalues of a Hermitian matrix are always real. If A is real, then U is orthogonal [9,20]. The diagonal decomposition (1) is also referred to as the symmetric Schur decomposition. Further on, without loss of generality, we will assume that the eigenvalues of A are ordered so that λ_1 ≥ λ_2 ≥ ... ≥ λ_n.
If the matrix A is subject to a Hermitian perturbation δA ∈ C^{n×n}, δA^H = δA, then instead of the decomposition (1) we have the new diagonal decomposition of Ã = A + δA,

Λ̃ = Ũ^H Ã Ũ = diag(λ̃_1, λ̃_2, ..., λ̃_n),    (2)

where λ̃_1 ≥ λ̃_2 ≥ ... ≥ λ̃_n are the perturbed eigenvalues and Ũ = U + δU = [Ũ_1, Ũ_2, ..., Ũ_n] is the matrix of the perturbed eigenvectors.
The aim of the perturbation analysis is to determine bounds on the eigenvalue perturbations δΛ = diag(δλ_j), δλ_j = λ̃_j − λ_j, j = 1, 2, ..., n, and the corresponding eigenvector perturbations δU_j = Ũ_j − U_j. The perturbation problem for Hermitian matrix eigendecompositions is regular if and only if the eigenvalues of A are distinct. An additional important problem is to determine the sensitivity of the one-dimensional invariant subspaces spanned by the eigenvectors U_j, j = 1, 2, ..., n. According to the Wielandt–Hoffman theorem [9], the eigenvalues of a Hermitian matrix are always well conditioned, since their perturbations obey

|λ̃_j − λ_j| ≤ ‖δA‖_2.
However, the eigenvectors can be very sensitive to perturbations of A, depending on the separation of the eigenvalues, see for instance [25].
According to the splitting operator method [13], it is possible to derive separate perturbation bounds on |δU| and |δΛ|, which allows one to obtain tighter perturbation bounds on the eigenvectors. For this aim, it is appropriate to introduce the matrix

δW = U^H δU =
[ U_1^H δU_1  U_1^H δU_2  ⋯  U_1^H δU_n ]
[ U_2^H δU_1  U_2^H δU_2  ⋯  U_2^H δU_n ]
[     ⋮            ⋮      ⋱       ⋮     ]
[ U_n^H δU_1  U_n^H δU_2  ⋯  U_n^H δU_n ] ∈ C^{n×n},

which is unitarily equivalent to the unknown matrix δU. Finding estimates of the entries of this matrix allows us to find sharp bounds on the entries of δU thanks to the unitarity of the matrix U.
Further on, we shall use the perturbation parameter vector

x = vec(Low(δW)) = [U_2^H δU_1, ..., U_n^H δU_1 | U_3^H δU_2, ..., U_n^H δU_2 | ⋯ | U_n^H δU_{n−1}]^T ∈ C^ν,  ν = n(n−1)/2,

where the components of x are the entries of the strictly lower triangular part of δW, stacked column by column. Using this vector, it is convenient to find bounds on various quantities related to the perturbation analysis of A.
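The column-by-column ordering of vec(Low(·)) and the index map ℓ = i + (j − 1)n − j(j + 1)/2 used in the sequel can be sketched in a few lines of NumPy (our own illustrative code, not part of the paper; the function names are ours):

```python
import numpy as np

def ell(i, j, n):
    """1-based position of the pair (i, j), 1 <= j < i <= n, in x = vec(Low(dW))."""
    return i + (j - 1) * n - j * (j + 1) // 2

def vec_low(W):
    """Stack the strictly lower triangular entries of W column by column."""
    n = W.shape[0]
    return np.array([W[i - 1, j - 1]          # 0-based storage, 1-based math indices
                     for j in range(1, n)
                     for i in range(j + 1, n + 1)])

# e.g. for n = 5 the parameter U_3^H dU_2 sits at position ell(3, 2, 5) = 5
```

For n = 5 this yields ν = 10 parameters, ordered exactly as in the vector x above.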
The paper is organized as follows. In Section 2, we derive asymptotic bounds on the perturbation parameters, which are then used in the perturbation analysis of the eigenvalues and eigenvectors of a Hermitian matrix. The eigenvector perturbation bounds are used to find asymptotic bounds on the perturbations of the invariant subspaces spanned by the eigenvectors. In Section 3, we briefly present some results concerning the determination of realistic probabilistic bounds on the entries of a random matrix using its Frobenius norm. We describe the implementation of such bounds in the derivation of probabilistic perturbation bounds on the eigenvector entries, the eigenvalues and the invariant subspaces. We illustrate the theoretical results in Section 4 by presenting two numerical experiments with matrices of order 5000, demonstrating the performance of the derived bounds. Some conclusions are made in Section 5.
All computations in the paper are done with MATLAB® Version 9.9 (R2020b) [17] using IEEE double precision arithmetic on a machine equipped with a 12th Gen Intel(R) Core(TM) i5-1240P CPU running at 1.70 GHz and 32 GB of RAM. M-files implementing the perturbation bounds described in the paper can be obtained from the authors.

2. Asymptotic Perturbation Bounds

2.1. Asymptotic Bounds for the Perturbation Parameters

Theorem 1.
Let A ∈ C^{n×n} be a Hermitian matrix (A = A^H) with distinct eigenvalues λ_1, λ_2, ..., λ_n, which is decomposed as A = U Λ U^H, where U is unitary and Λ is diagonal. Assume that the matrix A is perturbed by a Hermitian perturbation δA, so that

Ã = A + δA = Ũ Λ̃ Ũ^H.

Denote by F = −U^H δA U the transformed perturbation matrix and construct the vector

f = vec(Low(F)) ∈ C^ν,  f_ℓ = −U_i^H δA U_j,  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n.

Then the perturbation parameters

x_ℓ = U_i^H δU_j,  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n,

satisfy the equation

M x = f + Δx,    (3)

where

M = diag(μ_21, μ_31, ..., μ_n1, μ_32, ..., μ_n2, ..., μ_{n,n−1}) ∈ C^{ν×ν}

is a diagonal matrix with μ_ij = λ_i − λ_j, and Δx ∈ C^ν is a vector containing second order terms in δU_i, δU_j.
Proof. From (1) and (2) we have that

Ũ_i^H (A + δA) Ũ_j = U_i^H A U_j = 0,  j = 1, 2, ..., n−1,  i = j+1, j+2, ..., n.

Hence

U_i^H A δU_j + δU_i^H A U_j + δU_i^H A δU_j = −Ũ_i^H δA Ũ_j.    (4)

From the diagonal decomposition (1) we have that

U_i^H A = λ_i U_i^H,  i = 1, 2, ..., n,    A U_j = λ_j U_j,  j = 1, 2, ..., n.    (5)

The substitution of these expressions into (4) gives

λ_i U_i^H δU_j + λ_j δU_i^H U_j + δU_i^H A δU_j = −Ũ_i^H δA Ũ_j.    (6)

Since the matrices U and Ũ are unitary, it follows that

U^H δU = −δU^H U − δU^H δU    (7)

and hence

δU_i^H U_j = −U_i^H δU_j − δU_i^H δU_j,  i = 1, 2, ..., n−1,  j = i, i+1, ..., n.

Replacing this expression for δU_i^H U_j in (6), we obtain that

(λ_i − λ_j) U_i^H δU_j = λ_j δU_i^H δU_j − δU_i^H A δU_j − Ũ_i^H δA Ũ_j

or

(λ_i − λ_j) U_i^H δU_j = λ_j δU_i^H δU_j − δU_i^H A δU_j − U_i^H δA U_j − U_i^H δA δU_j − δU_i^H δA U_j − δU_i^H δA δU_j.    (9)

The expression (9) can be rewritten as a system of ν = n(n−1)/2 nonlinear algebraic equations for the unknown quantities

x_ℓ = U_i^H δU_j,  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n,

in the form (3), where the component Δx_ℓ, ℓ = i + (j−1)n − j(j+1)/2, of the nonlinear term Δx is equal to

Δx_ℓ = λ_j δU_i^H δU_j − δU_i^H A δU_j − U_i^H δA δU_j − δU_i^H δA U_j − δU_i^H δA δU_j. ❒
We note that the solution of the system of equations (3) does not require the explicit formation of the matrix M. In particular, the asymptotic bound x_ℓ^lin in the inequality

|x_ℓ| ≤ x_ℓ^lin,  ℓ = 1, 2, ..., ν,

can be obtained easily by using the expression

x_ℓ^lin = ‖δA‖_F / |λ_i − λ_j|,  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n.    (11)
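As an illustration, the bounds above can be evaluated with a few lines of NumPy (our own sketch; the function name and interface are ours, not from the paper):

```python
import numpy as np

def x_lin(eigvals, dA_fro):
    """Asymptotic bounds x_l^lin = ||dA||_F / |lam_i - lam_j|, 1 <= j < i <= n,
    listed in the same column-by-column order as x = vec(Low(dW))."""
    lam = np.asarray(eigvals, dtype=float)
    n = lam.size
    return np.array([dA_fro / abs(lam[i] - lam[j])
                     for j in range(n - 1)        # 0-based index j
                     for i in range(j + 1, n)])   # 0-based index i > j
```

For eigenvalues 4, 3, 1 and ‖δA‖_F = 1 this gives the three bounds 1, 1/3 and 1/2.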
Example 1.
Consider the 5 × 5 matrix

A =
[ 2.401616  1.198384  0.398384  0.402384  0.398424 ]
[ 1.198384  2.201616  0.001616  0.002384  0.001576 ]
[ 0.398384  0.001616  1.801616  0.797616  0.801576 ]
[ 0.402384  0.002384  0.797616  1.803616  0.797576 ]
[ 0.398424  0.001576  0.801576  0.797576  1.801636 ].

The eigenvalues of this matrix are

λ_1 = 4.0,  λ_2 = 3.0,  λ_3 = 1.01,  λ_4 = 1.0001,  λ_5 = 1.0.
Note the closeness of the last three eigenvalues, which suggests that the corresponding eigenvectors may be ill conditioned.
The perturbation matrix is taken as

δA = 10^{−9} × (δA_0 + δA_0^T),

where δA_0 is a matrix whose elements are normally distributed random numbers.
For this perturbation problem, the matrix M^{−1}, which determines the perturbation parameter vector x in (3), has a 2-norm equal to 1 × 10^4, which confirms that the problem is relatively ill conditioned.
The exact perturbation parameters x_ℓ, ℓ = 1, 2, ..., ν, and their asymptotic approximations x_ℓ^lin computed by using (11) are shown to eight decimal digits in Table 1. Note the good agreement between the magnitudes of the corresponding elements of both vectors. The differences between the values of x_ℓ^lin and x_ℓ are due to the bounding of the elements of the vector f by the value of ‖δA‖_F and the neglect of the nonlinear term Δx.
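An experiment of the same kind is easy to reproduce in NumPy (our own sketch, using an arbitrary random symmetric matrix rather than the matrix of Example 1; the 1% slack absorbs the neglected second order terms):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A0 = rng.standard_normal((n, n))
A = (A0 + A0.T) / 2                        # random symmetric test matrix
lam, U = np.linalg.eigh(A)                 # A = U diag(lam) U^T

dA0 = rng.standard_normal((n, n))
dA = 1e-9 * (dA0 + dA0.T)                  # small symmetric perturbation
_, Ut = np.linalg.eigh(A + dA)
Ut = Ut * np.sign(np.sum(U * Ut, axis=0))  # fix the sign ambiguity of eigenvectors

dW = U.T @ (Ut - U)                        # exact perturbation parameters
fro = np.linalg.norm(dA, 'fro')
for j in range(n - 1):
    for i in range(j + 1, n):
        assert abs(dW[i, j]) <= 1.01 * fro / abs(lam[i] - lam[j])
```

Every exact parameter |x_ℓ| = |U_i^T δU_j| indeed stays below its asymptotic bound ‖δA‖_F / |λ_i − λ_j|.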

2.2. Asymptotic Componentwise Eigenvector Bounds

Theorem 2.
Under the conditions of Theorem 1, a strict asymptotic bound on the perturbation of each eigenvector U_j, j = 1, 2, ..., n, of A under a perturbation δA is given by

|Ũ_j − U_j| ⪯ δU_j^lin = |U| δW_j^lin,

where δW_j^lin is the jth column of the matrix

δW^lin =
[ 0             x_1^lin        x_2^lin        ⋯  x_{n−2}^lin    x_{n−1}^lin ]
[ x_1^lin       0              x_n^lin        ⋯  x_{2n−4}^lin   x_{2n−3}^lin ]
[ x_2^lin       x_n^lin        0              ⋯  x_{3n−7}^lin   x_{3n−6}^lin ]
[    ⋮              ⋮              ⋮          ⋱      ⋮              ⋮       ]
[ x_{n−2}^lin   x_{2n−4}^lin   x_{3n−7}^lin   ⋯  0              x_ν^lin     ]
[ x_{n−1}^lin   x_{2n−3}^lin   x_{3n−6}^lin   ⋯  x_ν^lin        0           ],

and the elements of the vector x^lin = [x_1^lin, x_2^lin, ..., x_ν^lin]^T are determined from

x_ℓ^lin = ‖δA‖_F / |λ_i − λ_j|,  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n.
Proof. Consider the auxiliary matrix

δW = U^H δU := [δW_1, δW_2, ..., δW_n],  δW_j ∈ C^n,

introduced above. As already noted, the strictly lower part of this matrix contains elements of the form

U_i^H δU_j,  j = 1, 2, ..., n−1,  i = j+1, j+2, ..., n,

which can be substituted by the corresponding elements x_ℓ, ℓ = i + (j−1)n − j(j+1)/2, of the vector x. The elements of the strictly upper part of δW are of the form

U_i^H δU_j,  i = 1, 2, ..., n−1,  j = i+1, i+2, ..., n,

which, according to the unitary condition (7), can be represented as

U_i^H δU_j = −δU_i^H U_j − δU_i^H δU_j

or

U_i^H δU_j = −\overline{U_j^H δU_i} − δU_i^H δU_j,

where the term \overline{U_j^H δU_i}, j > i, is the complex conjugate of the element x_ℓ. Hence the matrix δW can be represented as

δW = δV − δD − δY,    (13)

where

δV =
[ 0          −x̄_1       −x̄_2       ⋯  −x̄_{n−1}  ]
[ x_1        0           −x̄_n       ⋯  −x̄_{2n−3} ]
[ x_2        x_n         0           ⋯  −x̄_{3n−6} ]
[  ⋮           ⋮            ⋮        ⋱      ⋮      ]
[ x_{n−1}    x_{2n−3}    x_{3n−6}    ⋯  0          ] ∈ C^{n×n},

and the matrices

δD = diag(δU_1^H δU_1 / 2, δU_2^H δU_2 / 2, ..., δU_n^H δU_n / 2) ∈ C^{n×n},

δY =
[ 0  δU_1^H δU_2  δU_1^H δU_3  ⋯  δU_1^H δU_n       ]
[ 0  0            δU_2^H δU_3  ⋯  δU_2^H δU_n       ]
[ 0  0            0            ⋯  δU_3^H δU_n       ]
[ ⋮      ⋮            ⋮        ⋱       ⋮            ]
[ 0  0            0            ⋯  δU_{n−1}^H δU_n   ]
[ 0  0            0            ⋯  0                 ] ∈ C^{n×n}
contain second order terms in δU_j, j = 1, 2, ..., n. Note that, for definiteness, we have assumed that Ũ is chosen so that the diagonal elements of δW are real, so that

U_i^H δU_i + \overline{U_i^H δU_i} = 2 U_i^H δU_i = −δU_i^H δU_i,

since the unitarity condition (7) does not restrict the imaginary part of U_i^H δU_i.
According to (13), the matrix |δW| can be estimated as

|δW| ⪯ |δV| + δW̄,

where

δW̄ = |δD| + |δY|

contains second order terms in the elements of x. Thus, since to first order the moduli of the entries of δW are bounded by the corresponding entries of δW^lin, an asymptotic (linear) bound on the matrix |δU| = |U U^H δU| ⪯ |U| |δW| can be determined as

δU^lin = |U| δW^lin. ❒    (15)
Note that the bounds derived are also valid for real symmetric matrices.
Example 2.
For the same matrix A and perturbation δA as in Example 1, the absolute values of the exact changes of the entries of the matrix U and their linear estimates, obtained by using (15), are, respectively,

|δU| = 10^{−4} ×
[ 0.0000045  0.0000028  0.0025383  0.0928278  0.0953600 ]
[ 0.0000037  0.0000020  0.0025204  0.0928299  0.0953488 ]
[ 0.0000025  0.0000070  0.0010687  0.0928181  0.1381715 ]
[ 0.0000016  0.0000139  0.0025323  0.0942843  0.0904961 ]
[ 0.0000071  0.0000066  0.0023290  0.1407080  0.0953601 ]

and

δU^lin = 10^{−4} ×
[ 0.0000879  0.0001319  0.0088754  0.4437818  0.4438262 ]
[ 0.0001099  0.0001099  0.0088791  0.4437855  0.4438298 ]
[ 0.0000952  0.0001209  0.0110648  0.4437745  0.6634911 ]
[ 0.0000953  0.0001210  0.0088680  0.4459712  0.4460378 ]
[ 0.0000952  0.0001209  0.0110869  0.6634467  0.4438189 ].

For all i, j = 1, 2, ..., 5 it is fulfilled that |δU_{i,j}| < δU_{i,j}^lin.
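The assembly of the eigenvector bound is straightforward in code. The following NumPy sketch (our own, with a random symmetric matrix and a 5% slack for second order effects) builds δW^lin and checks the entrywise inequality:

```python
import numpy as np

def dW_lin(eigvals, dA_fro):
    """Bound matrix with x_l^lin = ||dA||_F / |lam_i - lam_j| in both triangles."""
    lam = np.asarray(eigvals, dtype=float)
    n = lam.size
    W = np.zeros((n, n))
    for j in range(n - 1):
        for i in range(j + 1, n):
            W[i, j] = W[j, i] = dA_fro / abs(lam[i] - lam[j])
    return W

rng = np.random.default_rng(1)
n = 6
A0 = rng.standard_normal((n, n)); A = (A0 + A0.T) / 2
lam, U = np.linalg.eigh(A)
dA0 = rng.standard_normal((n, n)); dA = 1e-9 * (dA0 + dA0.T)
_, Ut = np.linalg.eigh(A + dA)
Ut = Ut * np.sign(np.sum(U * Ut, axis=0))   # align eigenvector signs

dU_lin = np.abs(U) @ dW_lin(lam, np.linalg.norm(dA, 'fro'))
assert np.all(np.abs(Ut - U) <= 1.05 * dU_lin)
```

Every entry of the exact |δU| stays below the corresponding entry of |U| δW^lin, as in the example above.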

2.3. Eigenvalue Sensitivity

For the changes in the elements of the perturbed diagonal form Λ̃ one has the following expressions:

δλ_i = λ̃_i − λ_i = Ũ_i^H (A + δA) Ũ_i − U_i^H A U_i,  i = 1, 2, ..., n.

Hence

δλ_i = U_i^H A δU_i + δU_i^H A U_i + δU_i^H A δU_i + U_i^H δA U_i + U_i^H δA δU_i + δU_i^H δA U_i + δU_i^H δA δU_i.

Thus, we obtain

δλ_i = U_i^H δA U_i + δ_i^d,  i = 1, 2, ..., n,

where, taking into account equations (5), we have that

δ_i^d = λ_i (U_i^H δU_i + δU_i^H U_i) + δU_i^H A δU_i + U_i^H δA δU_i + δU_i^H δA U_i + δU_i^H δA δU_i.
According to (7), it is fulfilled that

δU_i^H U_i + U_i^H δU_i = −δU_i^H δU_i,

which gives

δ_i^d = −λ_i δU_i^H δU_i + δU_i^H A δU_i + U_i^H δA δU_i + δU_i^H δA U_i + δU_i^H δA δU_i.    (19)

The quantity δ_i^d contains second order terms in δU_i. If the perturbation δA is known, expression (19) allows one to obtain a bound on δ_i^d using the eigenvector bound (15).
In the asymptotic eigenvalue analysis the higher order terms are neglected and one has

δλ_i^lin = U_i^H δA U_i,  i = 1, 2, ..., n.

Hence,

|δλ_i| ≤ δλ^lin = ‖δA‖_2,  i = 1, 2, ..., n,

which reduces to the well-known corollary of the Wielandt–Hoffman theorem [9].
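This eigenvalue bound is easy to confirm numerically; a small NumPy check of our own (the matrix and perturbation sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A0 = rng.standard_normal((n, n)); A = (A0 + A0.T) / 2        # symmetric test matrix
dA0 = rng.standard_normal((n, n)); dA = 1e-6 * (dA0 + dA0.T)  # symmetric perturbation

lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
lamt = np.linalg.eigvalsh(A + dA)
# |lamt_i - lam_i| <= ||dA||_2 for every i
assert np.max(np.abs(lamt - lam)) <= np.linalg.norm(dA, 2)
```

Unlike the eigenvector bounds, this inequality is not merely asymptotic: it holds for perturbations of any size.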

2.4. Sensitivity of One Dimensional Invariant Subspaces

The estimate of the eigenvector perturbation δU_j can be used to find an estimate of the sensitivity of the one-dimensional (simple) invariant subspace associated with the eigenvector U_j.
Consider the one-dimensional invariant subspace X_j = R(U_j), j = 1, 2, ..., n. The sensitivity of this subspace is measured by the angle between the perturbed and unperturbed invariant subspaces. Since

Ũ_j = U_j + δU_j,

we have that [5,26]

sin(Θ(X̃_j, X_j)) = ‖U_{⊥j}^H Ũ_j‖_2 = ‖U_{⊥j}^H δU_j‖_2,    (20)

where

U_{⊥j} = [U_1, U_2, ..., U_{j−1}, U_{j+1}, ..., U_n] ∈ C^{n×(n−1)}

is the orthogonal complement of U_j, U_{⊥j}^H U_j = 0_{(n−1)×1}.
Equation (20) shows that the sensitivity of the one-dimensional invariant subspace X_j is connected to the values of the perturbation parameters x_ℓ = U_i^H δU_j. Consequently, if bounds on the perturbation parameters are known, it is possible to find sensitivity estimates of all invariant subspaces. More specifically, to first order we have that

U_{⊥1}^H δU_1 = [x_1, x_2, ..., x_{n−1}]^T,  U_{⊥2}^H δU_2 ≈ [−x̄_1, x_n, ..., x_{2n−3}]^T,  ...,  U_{⊥n}^H δU_n ≈ [−x̄_{n−1}, −x̄_{2n−3}, −x̄_{3n−6}, ..., −x̄_ν]^T.
Then we obtain that the angle between the jth perturbed and unperturbed invariant subspaces has the following asymptotic estimate:

Θ(X̃_j, X_j) ≤ Θ^lin(X̃_j, X_j) = arcsin(‖δW^lin(1:n, j)‖_2),  j = 1, 2, ..., n.

We note that this estimate produces practically the same results as the linear bounds derived in [3,28,29].
Example 3.
For the same matrix and perturbation as in the previous examples, the exact angles between the perturbed and unperturbed one-dimensional invariant subspaces and their linear estimates are shown in Table 2.
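The subspace-angle computation can be sketched as follows (NumPy, our own code; a random symmetric matrix stands in for the example matrix, and the 1% slack absorbs second order effects):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A0 = rng.standard_normal((n, n)); A = (A0 + A0.T) / 2
lam, U = np.linalg.eigh(A)
dA0 = rng.standard_normal((n, n)); dA = 1e-9 * (dA0 + dA0.T)
_, Ut = np.linalg.eigh(A + dA)
Ut = Ut * np.sign(np.sum(U * Ut, axis=0))           # align eigenvector signs

fro = np.linalg.norm(dA, 'fro')
for j in range(n):
    Uperp = np.delete(U, j, axis=1)                 # orthogonal complement of U_j
    sin_theta = np.linalg.norm(Uperp.T @ Ut[:, j])  # sine of the subspace angle
    # column j of dW_lin: entries fro / |lam_i - lam_j| for i != j
    col = np.array([fro / abs(lam[i] - lam[j]) for i in range(n) if i != j])
    assert sin_theta <= 1.01 * np.linalg.norm(col)
```

Comparing the sines directly is equivalent to comparing the angles, since arcsin is monotone on [0, 1].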

3. Probabilistic Asymptotic Bounds

Numerical experiments with symmetric matrices show that the estimates obtained by using Theorem 2 can become very pessimistic as the matrix order increases. For instance, if the matrix is of order 2000, the ratio between the bound (15) and the actual values of the entries of |δU| may become of order 10^5, which makes the computed bound useless. A further reduction of the perturbation bounds can be achieved by implementing probabilistic perturbation bounds, which for large n allow one to obtain sufficiently close bounds with high probability. For this aim we will make use of the probabilistic matrix bounds proposed in [23,24], which are based on the Markoff inequality [19].
Consider briefly the essence of the approach presented in [23]. The aim is to reduce the size of the entries of an estimate ΔA of the matrix perturbation δA, at the price that some entries of |ΔA| may be smaller than the corresponding entries of |δA|. We have the following result.
Theorem 3.
Let ΔA = [Δa_ij] be an estimate of the m × n random perturbation δA and let P{|δa_ij| < Δa_ij} be the probability that |δa_ij| < Δa_ij. If the entries of ΔA are chosen as

Δa_ij = ‖δA‖_F / Ξ,

where

Ξ = (1 − P_ref) √(mn),    (22)

and 0 < P_ref < 1 is a desired probability, then it is fulfilled that

P{|δa_ij| < Δa_ij} ≥ P_ref,  i = 1, 2, ..., m,  j = 1, 2, ..., n.
If the number mn is sufficiently large so that Ξ > 1, Theorem 3 allows one to decrease the mean value of the bound ΔA, and hence the magnitude of its entries, by the scaling factor Ξ, choosing the desired probability P_ref less than 1. Note that the probability bound produced by the Markoff inequality is conservative, the actual results usually being much better than those predicted by the quantity P_ref. This is due to the fact that the Markoff inequality is valid for the worst possible distribution of the random variables, while in the given case we do not impose restrictions on the probability distribution of the entries of δA.
According to Theorem 3, using the scaling factor (22) guarantees that the inequality

|δa_ij| < Δa_ij

holds for each i and j with probability no less than P_ref.
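A quick Monte Carlo experiment (our own sketch; standard normal entries are assumed) illustrates how conservative the Markoff-based bound is in practice:

```python
import numpy as np

rng = np.random.default_rng(4)
m = n = 200
P_ref = 0.8
Xi = (1 - P_ref) * np.sqrt(m * n)        # scaling factor (22)

trials = 20
coverage = 0.0
for _ in range(trials):
    dA = rng.standard_normal((m, n))
    Da = np.linalg.norm(dA, 'fro') / Xi  # common entrywise probabilistic bound
    coverage += np.mean(np.abs(dA) < Da)
coverage /= trials                       # empirical P{|da_ij| < Da_ij}
assert coverage >= P_ref                 # guaranteed level, far exceeded in practice
```

With Gaussian entries the empirical coverage is essentially 100%, well above the guaranteed 80%, in line with the remark above on the conservatism of the Markoff inequality.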
Using Theorem 3, a probabilistic bound on the perturbation parameters |x_ℓ|, ℓ = 1, 2, ..., ν, can be determined by the following theorem.
Theorem 4.
If the asymptotic estimate of the perturbation parameter vector x is chosen as

x_ℓ^est = ‖δA‖_F / (Ξ |λ_i − λ_j|),  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n,

where Ξ is determined according to

Ξ = n (1 − P_ref),

then

P{|x_ℓ| ≤ x_ℓ^est} ≥ P_ref.    (25)
The inequality (25) shows that the probabilistic estimate of the component |x_ℓ| can be determined if, in the linear estimate (11), we replace the perturbation norm ‖δA‖_F by the probabilistic estimate ‖δA‖_F / Ξ, where the scaling factor Ξ is taken as shown in (22) for a specified probability P_ref.
The probabilistic perturbation bounds on the elements of the vector x allow us to find probabilistic bounds on the elements of δU.
Theorem 5.
An asymptotic bound, holding with probability at least P_ref, on the perturbation of each eigenvector U_j, j = 1, 2, ..., n, of A under a perturbation δA is given by

|Ũ_j − U_j| ⪯ δU_j^est = |U| δW_j^est,

where δW_j^est is the jth column of the matrix

δW^est =
[ 0             x_1^est        x_2^est        ⋯  x_{n−2}^est    x_{n−1}^est ]
[ x_1^est       0              x_n^est        ⋯  x_{2n−4}^est   x_{2n−3}^est ]
[ x_2^est       x_n^est        0              ⋯  x_{3n−7}^est   x_{3n−6}^est ]
[    ⋮              ⋮              ⋮          ⋱      ⋮              ⋮       ]
[ x_{n−2}^est   x_{2n−4}^est   x_{3n−7}^est   ⋯  0              x_ν^est     ]
[ x_{n−1}^est   x_{2n−3}^est   x_{3n−6}^est   ⋯  x_ν^est        0           ],

and the elements of the vector x^est = [x_1^est, x_2^est, ..., x_ν^est]^T are determined from

x_ℓ^est = ‖δA‖_F / (Ξ |λ_i − λ_j|),  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j < i ≤ n.
The probabilistic bound on δA produces the following asymptotic perturbation bounds on the eigenvalues of A:

|δλ_i| ≤ δλ_i^est = ‖δA‖_F / Ξ,  i = 1, 2, ..., n.
Finally, we obtain that the angle between the jth perturbed and unperturbed one-dimensional invariant subspaces satisfies the probabilistic asymptotic estimate

Θ_j(X̃_j, X_j) ≤ Θ_j^est(X̃_j, X_j) = arcsin(‖δW^est(1:n, j)‖_2),  j = 1, 2, ..., n.
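Putting the pieces together, the following NumPy sketch (our own; a moderate order n = 300 is used so that it runs quickly) checks the probabilistic eigenvalue bound with the scaling factor Ξ = n(1 − P_ref):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
P_ref = 0.8
V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
lam = np.linspace(1.0, 6.0, n)                     # prescribed eigenvalues
A = (V * lam) @ V.T                                # A = V diag(lam) V^T

dA0 = rng.standard_normal((n, n))
dA = 1e-9 * (dA0 + dA0.T)
lamt = np.linalg.eigvalsh(A + dA)                  # ascending, like lam

Xi = n * (1 - P_ref)
dlam_est = np.linalg.norm(dA, 'fro') / Xi          # probabilistic eigenvalue bound
coverage = np.mean(np.abs(lamt - lam) < dlam_est)  # fraction of eigenvalues covered
assert coverage >= P_ref
```

As in the experiments of the next section, the empirical coverage typically far exceeds the requested level P_ref.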

4. Numerical Experiments

In this section, we present two numerical experiments for determining the perturbation bounds for the eigendecompositions of matrices of order 5000. The matrices are constructed in the form A = V D V^T, where V is orthogonal and D is a diagonal matrix containing the desired eigenvalues. The perturbation δA is taken as 10^{−9} × (δA_0 + δA_0^T), where δA_0 is a matrix whose elements are normally distributed random numbers.
Example 4.
In this example the eigenvalues of A are taken uniformly between 1 and 5.999 with an increment equal to 0.001, which results in a matrix M^{−1} with a 2-norm equal to 10^3.
In Figure 1 we show the entries of the exact perturbation |δU| = |Ũ − U| and the entries of the asymptotic bound δU^lin and the probabilistic bound δU^est. The desired lower bound on the probability is set to P_ref = 0.8, and the scaling factor used in finding the probabilistic estimates is equal to 1000. As a result, the actual probability is equal to 100%, since the bounds of all entries of δU^est exceed the corresponding entries of |δU|. As already mentioned, this phenomenon is due to the conservatism of the Markoff inequality. The eigenvalue perturbations δλ_i = λ̃_i − λ_i, along with their asymptotic estimate δλ^lin = 2.8287199 × 10^{−6} and probabilistic estimate δλ^est = 7.0717998 × 10^{−9}, are shown in Figure 2. There are no eigenvalues whose probabilistic perturbation bound is smaller than the actual δλ, which gives 100% accuracy instead of the reference value of 80%. Due to the equal spacing of the eigenvalues, in the given case the sensitivity of all invariant subspaces is the same (Figure 3). Clearly, the probabilistic estimates are much closer to the actual perturbations than the linear deterministic bounds.
Example 5.
In this example the eigenvalues of A are taken as λ_i = 1 + 500/i and are spread between 1.1 and 501, the norm of the matrix M^{−1} being equal to 4.999 × 10^4. The eigenvalues of A are shown in Figure 4.
In Figure 5, Figure 6 and Figure 7 we show the matrix |δU|, the eigenvalue perturbations and the angles between the perturbed and unperturbed invariant subspaces, respectively, as well as their asymptotic and probabilistic bounds for P_ref = 0.8. Due to the decreasing distance between the eigenvalues, the sensitivity of the invariant subspaces increases with the index i. The actual probabilities for the eigenvector matrix and the eigenvalue bounds are again equal to 100%, as in the previous example.
Note that in both examples, the linear estimates reflect correctly the changes of the corresponding exact quantities.

5. Conclusions

In this paper, we present strict asymptotic perturbation bounds for the eigenvalues, eigenvectors and invariant subspaces of symmetric and Hermitian matrices. The implementation of the splitting operator method makes it possible to unify the analysis with the perturbation analysis of the Schur form [21], the generalized Schur form [32], the singular value decomposition [1] and the QR decomposition of a matrix [22]. Since the deterministic bounds derived can be conservative, especially for high-order matrices, we present probabilistic bounds based on the Markoff inequality. The probabilistic bounds are found easily from the asymptotic bounds and allow us to decrease significantly the perturbation bounds with a guaranteed probability, which in practice is near to 1. An attractive feature of the bounds for symmetric matrices is that they can be determined with low requirements for memory and volume of computations.

6. Notation

C , the set of complex numbers;
C n × m , the space of n × m complex matrices;
A = [ a i j ] , a matrix with entries a i j ;
A j , the jth column of A;
A i , 1 : n , the ith row of an m × n matrix A;
A 1 : m , j , the jth column of an m × n matrix A;
Low ( A ) , the strictly lower triangular part of A;
| A | , the matrix of absolute values of the elements of A;
A^H, the Hermitian (conjugate) transpose of A;
0 m × n , the zero m × n matrix;
I n , the unit n × n matrix;
δ A , the perturbation of A;
A 2 , the spectral norm of A;
A F , the Frobenius norm of A;
: = , equal by definition;
⪯, relation of partial order: if a, b ∈ R^n, then a ⪯ b means a_i ≤ b_i, i = 1, 2, ..., n;
X = R ( X ) , the subspace spanned by the columns of X;
U , the orthogonal complement of U, U H U = 0 ;
❒, the end of a proof.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during the current study are available from the author upon reasonable request.

Conflicts of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. V. Angelova and P. Petkov, Componentwise perturbation analysis of the Singular Value Decomposition of a matrix, Applied Sciences, 14 (2024), 1417. [CrossRef]
  2. J. Barlow and I. Slapničar, Optimal perturbation bounds for the Hermitian eigenvalue problem, Lin. Alg. Appl., 309 (2000), pp. 19–43. [CrossRef]
  3. Z. Bai and J. Demmel and A. Mckenney, On computing condition numbers for the nonsymmetric eigenproblem, ACM Trans. Math. Software, 19 (1993), 202–223. [CrossRef]
  4. R. Bhatia, Perturbation Bounds for Matrix Eigenvalues, Society of Industrial and Applied Mathematics, Philadelphia, PA, 2007. [CrossRef]
  5. A. Björck and G. Golub, Numerical methods for computing angles between linear subspaces, Math. Comp., 27 (1973), pp. 579–594. [CrossRef]
  6. M. Carlsson, Spectral perturbation theory of Hermitian matrices, in: Bridging Eigenvalue Theory and Practice – Applications in Modern Engineering, B. Carpentieri, ed., IntechOpen, London, 2025. [CrossRef]
  7. F. Chatelin, Eigenvalues of Matrices, Society of Industrial and Applied Mathematics, Philadelphia, PA, 2012. ISBN 978-1-611972-45-0. [CrossRef]
  8. C. Davis and W. M. Kahan, The Rotation of Eigenvectors by a Perturbation. III, SIAM J. Numer. Anal., 7 (1970), pp. 46. [CrossRef]
  9. G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, fourth ed., 2013. ISBN 978-1-4214-0794-4.
  10. A. Greenbaum and R.-C. Li and M.L. Overton, First-order perturbation theory for eigenvalues and eigenvectors, SIAM Review, 62 (2020), pp. 463–482. [CrossRef]
  11. I.C.F. Ipsen, An overview of relative sin(Θ) theorems for invariant subspaces of complex matrices. J. Comp. Appl. Math., 123 (2000), pp. 131–153. [CrossRef]
  12. T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, second ed., 1995. ISBN 978-0-540-58661-6.
  13. M. Konstantinov and P. Petkov, Perturbation Methods in Matrix Analysis and Control, NOVA Science Publishers, Inc., New York, 2020. https://novapublishers.com/shop/perturbation-methods-in-matrix-analysis-and-control.
  14. R. Li, Matrix perturbation theory, in Handbook of Linear Algebra, L. Hogben, ed., Discrete Math. Appl., CRC Press, Boca Raton, FL, second ed., 2014, pp. (21–1)–(21–20).
  15. R.-C. Li, Y. Nakatsukasa, N. Truhar and W.-g. Wang, Perturbation of multiple eigenvalues of Hermitian matrices, Linear Algebra Appl., 437 (2012), pp. 202–213. [CrossRef]
  16. R. Mathias, Quadratic residual bounds for the Hermitian eigenvalue problem, SIAM J. Matrix Anal. Appl., 19 (1998), pp. 541–550. [CrossRef]
  17. The MathWorks, Inc., MATLAB Version 9.9.0.1538559 (R2020b), Natick, MA, 2020. https://www.mathworks.com.
  18. Y. Nakatsukasa, Sharp error bounds for Ritz vectors and approximate singular vectors, Math. Comput., 89 (2018), pp. 1843–1866. [CrossRef]
  19. A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw Hill, Inc., New York, 3rd edition, 1991. ISBN 0-07-048477-5.
  20. B.N. Parlett, The Symmetric Eigenvalue Problem, Society of Industrial and Applied Mathematics, Philadelphia, PA, 1998. [CrossRef]
  21. P. Petkov, Componentwise perturbation analysis of the Schur decomposition of a matrix, SIAM J. Matrix Anal. Appl., 42 (2021), pp. 108–133. [CrossRef]
  22. P. Petkov, Componentwise perturbation analysis of the QR decomposition of a matrix, Mathematics, 10 (2022). [CrossRef]
  23. P. Petkov, Probabilistic perturbation bounds of matrix decompositions, Numer. Linear Algebra Appl., 31(2024), pp. 1–40. [CrossRef]
  24. P. Petkov, Probabilistic perturbation bounds for invariant, deflating and singular subspaces, Axioms, 13(2024), 597. [CrossRef]
  25. G.W. Stewart, Error and perturbation bounds for subspaces associated with certain eigenvalue problems, SIAM Review, 15 (1973), pp. 727–764. [CrossRef]
  26. G. Stewart, Matrix Algorithms; Vol. II: Eigensystems, SIAM: Philadelphia, PA, 2001; ISBN 0-89871-503-2. [Google Scholar]
  27. G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory, Academic Press, New York, 1990. ISBN 978-0126702309.
  28. J.-g. Sun, Perturbation expansions for invariant subspaces, Linear Algebra Appl., 153 (1991), pp. 85–97. [CrossRef]
  29. J.-g. Sun, Stability and Accuracy. Perturbation Analysis of Algebraic Eigenproblems. Technical Report, Department of Computing Science, Umeå University, Umeå, Sweden, 1998, pp. 1–210.
  30. K. Veselić and I. Slapničar, Floating-point perturbations of Hermitian matrices, Linear Algebra Appl., 195 (1993), pp. 81–116. [CrossRef]
  31. J. Wilkinson, The Algebraic Eigenvalue Problem. Clarendon Press, Oxford, UK, 1965. ISBN 978-0-19-853418-1.
  32. G. Zhang and H. Li and Y. Wei, Componentwise perturbation analysis for the generalized Schur decomposition, Calcolo, 59 (2022). [CrossRef]
Figure 1. The entries of the matrix | δ U | and their asymptotic and probabilistic bounds for Example 4
Figure 2. The eigenvalue perturbations and their asymptotic and probabilistic bounds for Example 4
Figure 3. Angles between the perturbed and unperturbed invariant subspaces and their asymptotic and probabilistic bounds for Example 4
Figure 4. The matrix eigenvalues for Example 5
Figure 5. The entries of the matrix | δ U | and their asymptotic and probabilistic bounds for Example 5
Figure 6. The eigenvalue perturbations and their asymptotic and probabilistic bounds for Example 5
Figure 7. Angles between the perturbed and unperturbed invariant subspaces and their asymptotic and probabilistic bounds for Example 5
Table 1. Exact perturbation parameters x related to the matrix δW and their linear estimates for ‖δA‖_F = 1.0983610 × 10^-8

x = U_i^T δU_j        |x|                  x_lin                |Δx|
x_1  = U_2^T δU_1     7.6707043 × 10^-11   1.0983610 × 10^-8    4.7003350 × 10^-19
x_2  = U_3^T δU_1     6.0721878 × 10^-10   3.6734482 × 10^-9    4.2499607 × 10^-19
x_3  = U_4^T δU_1     6.9954664 × 10^-10   3.6612033 × 10^-9    2.0826081 × 10^-19
x_4  = U_5^T δU_1     2.5735541 × 10^-10   3.6613255 × 10^-9    1.0760803 × 10^-18
x_5  = U_3^T δU_2     1.1842760 × 10^-9    5.5194023 × 10^-9    5.8765348 × 10^-18
x_6  = U_4^T δU_2     9.0330987 × 10^-10   5.4918052 × 10^-9    7.9079719 × 10^-18
x_7  = U_5^T δU_2     8.5971180 × 10^-10   5.4920797 × 10^-9    8.2028102 × 10^-18
x_8  = U_4^T δU_3     1.4635852 × 10^-7    1.0983611 × 10^-6    1.7153442 × 10^-15
x_9  = U_5^T δU_3     4.8612969 × 10^-7    1.1094556 × 10^-6    3.5872417 × 10^-15
x_10 = U_5^T δU_4     2.3352890 × 10^-5    1.0983610 × 10^-4    9.1061463 × 10^-14
Table 2. Exact angles between perturbed and unperturbed invariant subspaces and their linear estimates

j    Θ                    Θ_lin
1    9.6446662 × 10^-10   1.2686356 × 10^-8
2    1.7214722 × 10^-9    1.4540507 × 10^-8
3    5.0768557 × 10^-7    1.5611959 × 10^-6
4    2.3353348 × 10^-5    1.0984160 × 10^-4
5    2.3357949 × 10^-5    1.0984171 × 10^-4
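The kind of quantities tabulated above can be illustrated in outline: for a Hermitian matrix with distinct eigenvalues, the parameters x = U_i^T δU_j measure the components of each eigenvector perturbation along the other unperturbed eigenvectors, and Θ_j is the angle between the perturbed and unperturbed one-dimensional invariant subspaces spanned by the j-th eigenvectors. The following NumPy sketch computes these definitions on a small randomly generated symmetric matrix; the matrix, perturbation size, and seed are illustrative choices, not those of the paper's examples.

```python
import numpy as np

# Illustrative sketch (not the paper's examples): a random real symmetric
# matrix with a small symmetric perturbation of Frobenius norm 1e-8.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # Hermitian (real symmetric) matrix
E = rng.standard_normal((n, n))
E = (E + E.T) / 2
dA = 1e-8 * E / np.linalg.norm(E, 'fro')   # ||dA||_F = 1e-8

w, U = np.linalg.eigh(A)                   # unperturbed eigendecomposition
wp, Up = np.linalg.eigh(A + dA)            # perturbed eigendecomposition
Up *= np.sign(np.sum(U * Up, axis=0))      # resolve eigenvector sign ambiguity
dU = Up - U                                # eigenvector perturbations

# X[i, j] = U_i^T dU_j: components of dU_j along the unperturbed eigenvectors
X = U.T @ dU

# Theta_j: angle between span{U_j} and span{Up_j} for each j
theta = np.arccos(np.clip(np.abs(np.sum(U * Up, axis=0)), 0.0, 1.0))
print(np.max(theta))
```

With well-separated eigenvalues the angles scale roughly with ‖δA‖ divided by the eigenvalue gap, which is why the entries of Table 2 grow as the gaps shrink.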
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.