Componentwise Perturbation Analysis of the Singular Value Decomposition of a Matrix

Submitted: 13 November 2023. Posted: 14 November 2023.
Abstract
A rigorous perturbation analysis of the singular value decomposition of a real matrix of full column rank is presented. It is shown that the SVD perturbation problem is well posed only in the case of distinct singular values. The analysis involves the solution of coupled systems of linear equations and produces asymptotic (local) componentwise perturbation bounds on the entries of the orthogonal matrices participating in the decomposition of the given matrix and on its singular values. Local bounds are derived for the sensitivity of the singular subspaces measured by the angles between the unperturbed and perturbed subspaces. An iterative scheme is described to find global bounds on the respective perturbations. The analysis implements the same methodology used previously to determine componentwise perturbation bounds of the Schur form and the QR decomposition of a matrix.

1. Introduction

As is well known [5, Ch. 2], [13, Ch. 1], [4, Ch. 2], each real m × n (m ≥ n) matrix A can be represented by the singular value decomposition (SVD) in the factorized form
A = U Σ V^T,
where the m × m matrix U and the n × n matrix V are orthogonal and the m × n matrix Σ is diagonal:
\Sigma = \begin{bmatrix} \Sigma_n \\ 0 \end{bmatrix}, \quad \Sigma_n = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n).
The numbers σ i 0 are called singular values of the matrix A. The columns of
U : = [ u 1 , u 2 , , u m ] , u j R m ,
are called left singular vectors and the columns of
V : = [ v 1 , v 2 , , v n ] , v j R n
are the right singular vectors. The subspaces spanned by sets of left and right singular vectors are called left and right singular subspaces, respectively.
The usual assumption is that by an appropriate ordering of the columns u_j and v_j, the singular values appear in the order
σ_max := σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n =: σ_min,
but in this paper we will not impose such a requirement.
The SVD has many properties which make it an invaluable tool in matrix analysis and matrix computations, see the references cited above. Among them are the fact that the rank of A is equal to the number of its nonzero singular values, as well as the equality ‖A‖_2 = σ_max. The singular value decomposition has a long and interesting history which is described in [12].
In this paper we are interested in the case when the matrix A is subject to an additive perturbation δ A . In such a case there exists another pair of orthogonal matrices U ˜ and V ˜ and a diagonal matrix Σ ˜ , such that
\tilde{A} := A + \delta A = \tilde{U}\,\tilde{\Sigma}\,\tilde{V}^T.
The perturbation analysis of the singular value decomposition consists in determining the changes of the quantities related to the elements of the decomposition due to the perturbation δA. This includes determining bounds on the changes of the entries of the orthogonal matrices which reduce the original matrix to diagonal form and bounds on the perturbations of the singular values. Hence, the aim of the analysis is to find bounds on the sizes of δU = Ũ − U, δV = Ṽ − V and δΣ = Σ̃ − Σ as functions of the size of δA. According to Weyl's theorem [13, Ch. 1], we have that
|δσ_i| = |σ̃_i − σ_i| ≤ ‖δA‖_2, i = 1, 2, …, n,
which shows that the singular values are perturbed by no more than the 2-norm of the perturbation of A, i.e., the singular values are always well conditioned. The SVD perturbation analysis is well defined if the matrix A is of full column rank n, i.e., σ_min ≠ 0, since otherwise the corresponding left singular vectors are undetermined.
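As a quick numerical illustration of Weyl's bound, the following MATLAB sketch (with an arbitrarily chosen test matrix and perturbation level, not taken from the examples of this paper) compares the changes of the singular values with ‖δA‖_2:

    % Illustration of Weyl's inequality |sigma_i(A+dA) - sigma_i(A)| <= ||dA||_2
    m = 6; n = 4;
    A  = randn(m, n);          % test matrix, of full column rank with probability 1
    dA = 1e-8*randn(m, n);     % small additive perturbation
    s  = svd(A);               % unperturbed singular values
    st = svd(A + dA);          % perturbed singular values
    fprintf('max |dsigma| = %.3e,  ||dA||_2 = %.3e\n', max(abs(st - s)), norm(dA));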
The size of the perturbations δA, δU, δV and δΣ is usually measured by some matrix norm, which leads to the so-called normwise perturbation analysis. In several cases we are interested in the size of the perturbations of the individual entries of δU, δV and δΣ, so that it is necessary to implement a componentwise perturbation analysis. This analysis has an advantage in cases where the individual components of δU and δV differ greatly in magnitude and the normwise estimates do not produce tight bounds on the perturbations.
The first results in the perturbation analysis of the singular value decomposition were obtained by Wedin [16] and Stewart [10], who developed estimates of the sensitivity of pairs of singular subspaces, see also [14, Ch. V]. Other results concerning the sensitivity of the SVD can be found in [3] and [15]. Several results concerning the sensitivity of the SVD are summarized in [6], and a survey of the perturbation theory of the singular value decomposition can be found in [11]. It should be pointed out that, apart from several results about the sensitivity of the singular values, a componentwise perturbation analysis of the SVD is not available at the moment.
In this paper we present a rigorous perturbation analysis of the orthogonal matrices, singular subspaces and singular values of a real matrix of full column rank. It is shown that the SVD perturbation problem is well posed only in the case of distinct (simple) singular values. The analysis produces asymptotic (local) componentwise perturbation bounds on the entries of the orthogonal matrices U and V and on the singular values of the given matrix. Local bounds are derived for the sensitivity of a pair of singular subspaces measured by the angles between the unperturbed and perturbed subspaces. An iterative scheme is described to find global bounds on the respective perturbations, and results of numerical experiments are presented. The analysis performed in the paper implements the same methodology as the one used previously in [8,9] to determine componentwise perturbation bounds of the Schur form and the QR decomposition of a matrix. However, the SVD perturbation analysis has some distinctive features which make it a problem in its own right.
The paper is organized as follows. In Section 2 we derive the basic nonlinear algebraic equations used to perform the perturbation analysis of the SVD. After introducing in Section 3 the perturbation parameters that determine the perturbations of the matrices U and V, we derive a system of coupled equations for these parameters in Section 4. The solution of the equations for the first-order terms of the perturbation parameters allows us to find asymptotic bounds on the parameters in Section 5, on the singular values in Section 6 and on the perturbations in the matrices U and V in Section 7. Using the bounds on the perturbation parameters, in Section 8 we derive bounds on the sensitivity of singular subspaces. In Section 9, we develop an iterative scheme for finding global bounds on the perturbations, and in Section 10 we present the results of two higher order examples illustrating the proposed analysis. Some conclusions are drawn in Section 11.

2. Basic Equations

The perturbed singular value decomposition of A (2) can be written as
U ˜ T A ˜ V ˜ = Σ ˜ ,
where
Ũ = U + δU := [ũ_1, ũ_2, …, ũ_m], ũ_j ∈ R^m,  δU := [δu_1, δu_2, …, δu_m], δu_j ∈ R^m,  ũ_j = u_j + δu_j, j = 1, 2, …, m,
Ṽ = V + δV := [ṽ_1, ṽ_2, …, ṽ_n], ṽ_j ∈ R^n,  δV := [δv_1, δv_2, …, δv_n], δv_j ∈ R^n,  ṽ_j = v_j + δv_j, j = 1, 2, …, n,
and
\tilde{\Sigma} = \Sigma + \delta\Sigma = \begin{bmatrix} \tilde{\Sigma}_n \\ 0 \end{bmatrix}, \quad \tilde{\Sigma}_n = \Sigma_n + \delta\Sigma_n = \mathrm{diag}(\tilde{\sigma}_1, \tilde{\sigma}_2, \ldots, \tilde{\sigma}_n), \quad \delta\Sigma_n = \mathrm{diag}(\delta\sigma_1, \delta\sigma_2, \ldots, \delta\sigma_n), \quad \tilde{\sigma}_i = \sigma_i + \delta\sigma_i.
Equation (4) is rewritten as
\delta U^T A V + U^T A\,\delta V + \delta F = \begin{bmatrix} \delta\Sigma_n \\ 0_{(m-n)\times n} \end{bmatrix} + \Delta_0,
where δ F = U T δ A V and the matrix Δ 0
Δ_0 = −δU^T A δV − U^T δA δV − δU^T δA V − δU^T δA δV ∈ R^{m×n}
contains only higher-order terms in the elements of δ A , δ U and δ V .
Let the matrices U and δ U be divided as U = [ U 1 , U 2 ] , U 1 R m × n and δ U = [ δ U 1 , δ U 2 ] , δ U 1 R m × n , respectively. Since the matrix A can be represented as A = U 1 Σ n V T , the matrix U 2 is not well determined but should satisfy the orthogonality condition [ U 1 , U 2 ] T [ U 1 , U 2 ] = I m . The perturbation δ U 2 is also undefined, so that we can bound only the perturbations of the entries in the first n columns of U, i.e., the entries of δ U 1 . Further on, we shall use (5) to determine componentwise bounds on δ U 1 , δ V and δ Σ n .

3. Perturbation Parameters and Perturbed Orthogonal Matrices

In the perturbation analysis of the SVD, it is convenient first to find componentwise bounds on the entries of the matrices δW_U := U^T δU_1 and δW_V := V^T δV, which are related to the corresponding perturbations δU_1 and δV by orthogonal transformations. Working with the matrices δW_U and δW_V allows us to find bounds on
δ U 1 = U δ W U
and
δ V = V δ W V
using orthogonal transformations without increasing the norms of δ W U and δ W V . This helps to determine bounds on δ U 1 and δ V which are as tight as possible.
Consider first the matrix
\delta W_U = U^T \delta U_1 = \begin{bmatrix}
u_1^T\delta u_1 & u_1^T\delta u_2 & \cdots & u_1^T\delta u_n \\
u_2^T\delta u_1 & u_2^T\delta u_2 & \cdots & u_2^T\delta u_n \\
\vdots & \vdots & & \vdots \\
u_n^T\delta u_1 & u_n^T\delta u_2 & \cdots & u_n^T\delta u_n \\
u_{n+1}^T\delta u_1 & u_{n+1}^T\delta u_2 & \cdots & u_{n+1}^T\delta u_n \\
\vdots & \vdots & & \vdots \\
u_m^T\delta u_1 & u_m^T\delta u_2 & \cdots & u_m^T\delta u_n
\end{bmatrix} \in \mathbb{R}^{m\times n}.
Further on, we shall use the vector of the subdiagonal entries of the matrix δ W U ,
x := [x_1, x_2, \ldots, x_p]^T = [\underbrace{u_2^T\delta u_1, u_3^T\delta u_1, \ldots, u_m^T\delta u_1}_{m-1}, \underbrace{u_3^T\delta u_2, \ldots, u_m^T\delta u_2}_{m-2}, \ldots, \underbrace{u_{n+1}^T\delta u_n, \ldots, u_m^T\delta u_n}_{m-n}]^T \in \mathbb{R}^p,
where
p = \sum_{i=1}^{n}(m - i) = n(n-1)/2 + (m-n)n = n(2m - n - 1)/2.
As will become clear later on, together with the orthogonality condition
[ U 1 + δ U 1 ] T [ U 1 + δ U 1 ] = I n ,
the vector x contains the whole information which is necessary to find the perturbation δ U 1 . This vector may be expressed as
x = vec ( Low ( δ W U ) ) ,
or, equivalently,
x = Ω x vec ( δ W U ) ,
where
\Omega_x := \mathrm{diag}(\omega_1, \omega_2, \ldots, \omega_n) \in \mathbb{R}^{p\times mn}, \quad \omega_i := \big[\, 0_{(m-i)\times i}, \; I_{m-i} \,\big] \in \mathbb{R}^{(m-i)\times m}, \; i = 1, 2, \ldots, n, \quad \Omega_x \Omega_x^T = I_p, \quad \|\Omega_x\|_2 = 1
is a matrix that "pulls out" the p elements of x from the mn elements of δW_U (we consider 0_{0×i} as a non-existing matrix).
We have that
x_k = u_i^T δu_j,  k = i + (j − 1)m − j(j + 1)/2,  1 ≤ j ≤ n, j < i ≤ m, 1 ≤ k ≤ p.
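The index map between the pairs (i, j) and the positions k in x can be checked directly. The following MATLAB sketch extracts x from δW_U column by column (the variable names dWU and U1t, denoting the perturbed U_1, and the explicit loops are ours; U, m and n are assumed available):

    % Extract the perturbation parameters x = vec(Low(dWU)), dWU = U'*dU1,
    % and verify the index formula k = i + (j-1)*m - j*(j+1)/2
    dWU = U'*(U1t - U(:, 1:n));          % = U'*dU1
    p   = n*(2*m - n - 1)/2;
    x   = zeros(p, 1);
    k   = 0;
    for j = 1:n                          % column by column, entries below the diagonal
        for i = j+1:m
            k    = k + 1;
            x(k) = dWU(i, j);
            assert(k == i + (j-1)*m - j*(j+1)/2);
        end
    end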
In a similar way, we introduce the vector of the subdiagonal entries of the matrix δ W V = V T δ V (note that V is a square matrix),
y = [y_1, y_2, \ldots, y_q]^T = [\underbrace{v_2^T\delta v_1, v_3^T\delta v_1, \ldots, v_n^T\delta v_1}_{n-1}, \underbrace{v_3^T\delta v_2, \ldots, v_n^T\delta v_2}_{n-2}, \ldots, \underbrace{v_n^T\delta v_{n-1}}_{1}]^T \in \mathbb{R}^q,
where q = n(n − 1)/2. It is fulfilled that
y = vec ( Low ( δ W V ) )
or, equivalently,
y = Ω y vec ( δ W V ) ,
where
\Omega_y := \mathrm{diag}(\omega_1, \omega_2, \ldots, \omega_n) \in \mathbb{R}^{q\times n^2}, \quad \omega_i := \big[\, 0_{(n-i)\times i}, \; I_{n-i} \,\big] \in \mathbb{R}^{(n-i)\times n}, \; i = 1, 2, \ldots, n, \quad \Omega_y \Omega_y^T = I_q, \quad \|\Omega_y\|_2 = 1.
In this case
y_ℓ = v_i^T δv_j,  ℓ = i + (j − 1)n − j(j + 1)/2,  1 ≤ j ≤ n − 1, j < i ≤ n, 1 ≤ ℓ ≤ q.
Further on the quantities x k , k = 1 , 2 , , p and y , = 1 , 2 , , q will be referred to as perturbation parameters since they determine the perturbations δ U 1 and δ V , as well as the sensitivity of the singular values and singular subspaces.
Consider the matrix
δ W U = U T δ U 1 : = [ δ w u 1 , δ w u 2 , , δ w u n ] , δ w u j R m .
Using the vector x of the perturbation parameters, this matrix is written in the form
\delta W_U = \begin{bmatrix}
\boxed{u_1^T\delta u_1} & \boxed{u_1^T\delta u_2} & \cdots & \boxed{u_1^T\delta u_n} \\
x_1 & \boxed{u_2^T\delta u_2} & \cdots & \boxed{u_2^T\delta u_n} \\
x_2 & x_m & \ddots & \vdots \\
\vdots & \vdots & & \vdots \\
x_{n-1} & x_{m+n-3} & \cdots & \boxed{u_n^T\delta u_n} \\
x_n & x_{m+n-2} & \cdots & x_{(n-1)(2m-n)/2+1} \\
\vdots & \vdots & & \vdots \\
x_{m-1} & x_{2m-3} & \cdots & x_p
\end{bmatrix} \in \mathbb{R}^{m\times n},
where the diagonal and super-diagonal entries, which are not determined up to this moment, are highlighted in boxes.
Consider how to determine the diagonal elements of the matrix δ W U ,
d u j j : = u j T δ u j , j = 1 , 2 , , n
by the elements of x. Since δ u j T u j = u j T δ u j , according to (9), we have that
2 u_j^T δu_j = −δu_j^T δu_j, j = 1, 2, …, n
or
d_{ujj} = −‖δu_j‖_2^2 / 2.
The above expression shows that d u j j is always negative and is of second order of magnitude in δ u j 2 .
Let us determine now the entries of the super-diagonal part of δW_U. Since Ũ^T Ũ = I_m, it follows that
U^T δU = −δU^T U − δU^T δU
and
δu_i^T u_j = −u_i^T δu_j − δu_i^T δu_j, 1 ≤ i ≤ m, i < j ≤ m.
According to the orthogonality condition (9), the entries of the strictly upper triangular part of δ W U can be represented as
u_i^T δu_j = −u_j^T δu_i − δu_i^T δu_j, 1 ≤ i ≤ n, i < j ≤ n.
Thus, the matrix δW_U can be represented as the sum
δW_U = δQ_U + δD_U − δN_U,
where the matrix
\delta Q_U = \begin{bmatrix}
0 & -x_1 & -x_2 & \cdots & -x_{n-1} \\
x_1 & 0 & -x_m & \cdots & -x_{m+n-3} \\
x_2 & x_m & 0 & \cdots & -x_{2m+n-6} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{n-1} & x_{m+n-3} & x_{2m+n-6} & \cdots & 0 \\
x_n & x_{m+n-2} & x_{2m+n-5} & \cdots & x_{(n-1)(2m-n)/2+1} \\
\vdots & \vdots & \vdots & & \vdots \\
x_{m-1} & x_{2m-3} & x_{3m-6} & \cdots & x_p
\end{bmatrix} := [\delta q_{u1}, \delta q_{u2}, \ldots, \delta q_{un}], \quad \delta q_{uj} \in \mathbb{R}^m
has entries depending only on the perturbation parameters x,
\delta D_U = \begin{bmatrix} \mathrm{diag}(d_{u11}, d_{u22}, \ldots, d_{unn}) \\ 0_{(m-n)\times n} \end{bmatrix} \in \mathbb{R}^{m\times n},
and the matrix
\delta N_U = \begin{bmatrix}
0 & \delta u_1^T\delta u_2 & \delta u_1^T\delta u_3 & \cdots & \delta u_1^T\delta u_n \\
0 & 0 & \delta u_2^T\delta u_3 & \cdots & \delta u_2^T\delta u_n \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 0 & \delta u_{n-1}^T\delta u_n \\
0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 0
\end{bmatrix} \in \mathbb{R}^{m\times n}
contains second order terms in δ u j , j = 1 , 2 , , n .
Similarly, for the matrix
δ W V = V T δ V : = [ δ w v 1 , δ w v 2 , , δ w v n ] , δ w v j R n ,
similarly to the case of δW_U, it is possible to show that
δW_V = δQ_V + δD_V − δN_V,
where
\delta Q_V = \begin{bmatrix}
0 & -y_1 & -y_2 & \cdots & -y_{n-1} \\
y_1 & 0 & -y_n & \cdots & -y_{2n-3} \\
y_2 & y_n & 0 & \cdots & -y_{3n-6} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
y_{n-1} & y_{2n-3} & y_{3n-6} & \cdots & 0
\end{bmatrix} := [\delta q_{v1}, \delta q_{v2}, \ldots, \delta q_{vn}], \quad \delta q_{vj} \in \mathbb{R}^n
has elements depending only on the perturbation parameters y,
\delta D_V = \mathrm{diag}(d_{v11}, d_{v22}, \ldots, d_{vnn}) \in \mathbb{R}^{n\times n},
d_{vjj} = v_j^T δv_j, j = 1, 2, …, n, and the matrix
\delta N_V = \begin{bmatrix}
0 & \delta v_1^T\delta v_2 & \delta v_1^T\delta v_3 & \cdots & \delta v_1^T\delta v_n \\
0 & 0 & \delta v_2^T\delta v_3 & \cdots & \delta v_2^T\delta v_n \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 0 & \delta v_{n-1}^T\delta v_n \\
0 & 0 & \cdots & 0 & 0
\end{bmatrix} \in \mathbb{R}^{n\times n},
contains second order terms in δ v j , j = 1 , 2 , , n . The diagonal entries of δ D V are determined as in the case of the matrix δ D U .

4. Equations for the Perturbation Parameters

The elements of the perturbation parameter vectors x and y can be determined from equation (5). For this aim it is appropriate to transform this equation as follows. Taking into account that A V = U Σ = U 1 Σ n and U T A = Σ V T , the equation is represented as
\delta U^T U_1 \Sigma_n + \begin{bmatrix} \Sigma_n \\ 0_{(m-n)\times n} \end{bmatrix} V^T \delta V + \delta F = \begin{bmatrix} \tilde{\Sigma}_n - \Sigma_n \\ 0_{(m-n)\times n} \end{bmatrix} + \Delta_0,
where
\delta F := \begin{bmatrix} \delta F_1 \\ \delta F_2 \end{bmatrix} = \begin{bmatrix}
\delta f_{11} & \delta f_{12} & \cdots & \delta f_{1n} \\
\delta f_{21} & \delta f_{22} & \cdots & \delta f_{2n} \\
\vdots & \vdots & & \vdots \\
\delta f_{n1} & \delta f_{n2} & \cdots & \delta f_{nn} \\
\delta f_{n+1,1} & \delta f_{n+1,2} & \cdots & \delta f_{n+1,n} \\
\vdots & \vdots & & \vdots \\
\delta f_{m1} & \delta f_{m2} & \cdots & \delta f_{mn}
\end{bmatrix}.
According to (8), we have that
δU^T U_1 = −U^T δU_1 − δU^T δU_1.
Substituting in (12) the term δ U T U 1 by the sum in the right hand side of (13), we obtain
U T δ U 1 Σ n + Σ n 0 ( m n ) × n V T δ V + δ F = Σ ˜ n Σ n 0 ( m n ) × n + Δ ,
where
Δ = Δ_0 + δU^T δU_1 Σ_n = δU^T δU_1 Σ_n − δU^T A δV − U^T δA δV − δU^T δA V − δU^T δA δV
contains higher order terms in the entries of δ A , δ U and δ V .
Replacing the matrices U T δ U 1 and V T δ V by δ W U and δ W V , respectively, (14) is rewritten as
δ W U Σ n + Σ n 0 ( m n ) × n δ W V + δ F = Σ ˜ n Σ n 0 ( m n ) × n + Δ
or
δ Q U Σ n + Σ n 0 ( m n ) × n δ Q V + δ F = Σ ˜ n Σ n 0 ( m n ) × n + ( δ D U δ N U ) Σ n Σ ( δ D V δ N V ) + Δ .
Note that the matrices δ D U , δ N U , δ D V , δ N V and Δ contain only higher order terms in the entries of δ A , δ U and δ V .
The entries of the matrices δQ_U and δQ_V can be substituted by the corresponding elements x_k, k = i + (j − 1)m − j(j + 1)/2, and y_ℓ, ℓ = i + (j − 1)n − j(j + 1)/2, of the vectors x and y, as shown in the previous section. This leads to the representation of equation (16) as two matrix equations with respect to two groups of the entries of x,
\delta Q_{U1}\Sigma_n + \Sigma_n\,\delta Q_V + \delta F_1 = \tilde{\Sigma}_n - \Sigma_n + (\delta D_{U1} - \delta N_{U1})\Sigma_n - \Sigma_n(\delta D_V - \delta N_V) + \Delta_1
and
σ 1 x n σ 2 x m + n 2 σ 3 x 2 m + n 5 σ n x ( n 1 ) ( 2 m n ) / 2 + 1 σ 1 x n + 1 σ 2 x m + n 1 σ 3 x 2 m + n 4 σ n x ( n 1 ) ( 2 m n ) / 2 + 2 σ 1 x m 1 σ 2 x 2 m 3 σ 3 x 3 m 6 σ n x p = δ f n + 1 , 1 δ f n + 1 , 2 δ f n + 1 , 3 δ f n + 1 , n δ f n + 2 , 1 δ f n + 2 , 2 δ f n + 2 , 3 δ f n + 2 , n δ f m 1 δ f m 2 δ f m 3 δ f m n + Δ 2 ,
where
Δ_1 = δU_1^T δU_1 Σ_n − δU_1^T A δV − U_1^T δA δV − δU_1^T δA V − δU_1^T δA δV,
Δ_2 = δU_2^T δU_1 Σ_n − δU_2^T A δV − U_2^T δA δV − δU_2^T δA V − δU_2^T δA δV,
\Delta := \begin{bmatrix} \Delta_1 \\ \Delta_2 \end{bmatrix}, \quad \Delta_1 \in \mathbb{R}^{n\times n}, \quad \Delta_2 \in \mathbb{R}^{(m-n)\times n}.
We note that the estimation of Δ_2 requires knowledge of an estimate of δU_2, which is undetermined.
Equations (17) and (18) are the basic equations of the SVD perturbation analysis. They can be used to obtain asymptotic as well as global perturbation bounds on the elements of the vectors x and y.
Let us introduce the vectors
x^{(1)} = Ω_1 x and x^{(2)} = Ω_2 x,
where
\Omega_1 := \mathrm{diag}(\omega_1, \omega_2, \ldots, \omega_n) \in \mathbb{R}^{q\times p}, \quad \omega_i := \big[\, I_{n-i}, \; 0_{(n-i)\times(m-n)} \,\big] \in \mathbb{R}^{(n-i)\times(m-i)}, \; i = 1, 2, \ldots, n, \quad \Omega_1\Omega_1^T = I_q, \quad \|\Omega_1\|_2 = 1,
\Omega_2 := \mathrm{diag}(\omega_1, \omega_2, \ldots, \omega_n) \in \mathbb{R}^{(m-n)n\times p}, \quad \omega_i := \big[\, 0_{(m-n)\times(n-i)}, \; I_{m-n} \,\big] \in \mathbb{R}^{(m-n)\times(m-i)}, \; i = 1, 2, \ldots, n, \quad \Omega_2\Omega_2^T = I_{(m-n)n}, \quad \|\Omega_2\|_2 = 1.
The vector x ( 1 ) contains the elements of the unknown vector x participating in (17), while x ( 2 ) contains the elements of x participating in (18). It is easy to prove that
x = Ω 1 T x ( 1 ) + Ω 2 T x ( 2 ) .
Taking into account that
Low((δD_{U1} − δN_{U1})Σ_n − Σ_n(δD_V − δN_V)) = 0,  Up(δD_{U1}Σ_n − Σ_n δD_V) = 0,
the strictly lower part of (17) can be represented columnwise as a system of linear equations with respect to the unknown vectors x^{(1)} and y,
S 1 x ( 1 ) + S 2 y = f + vec ( Low ( Δ 1 ) ) ,
where
S_1 = \mathrm{diag}(\underbrace{\sigma_1, \sigma_1, \ldots, \sigma_1}_{n-1}, \underbrace{\sigma_2, \ldots, \sigma_2}_{n-2}, \ldots, \underbrace{\sigma_{n-1}}_{1}),
S_2 = \mathrm{diag}(\underbrace{\sigma_2, \sigma_3, \ldots, \sigma_n}_{n-1}, \underbrace{\sigma_3, \ldots, \sigma_n}_{n-2}, \ldots, \underbrace{\sigma_n}_{1}), \quad S_i \in \mathbb{R}^{q\times q},
and
f = \mathrm{vec}(\mathrm{Low}(\delta F_1)) = \Omega_3\,\mathrm{vec}(\delta F_1) \in \mathbb{R}^q, \quad \Omega_3 := \mathrm{diag}(\omega_1, \omega_2, \ldots, \omega_n) \in \mathbb{R}^{q\times n^2}, \quad \omega_i := \big[\, 0_{(n-i)\times i}, \; I_{n-i} \,\big] \in \mathbb{R}^{(n-i)\times n}, \; i = 1, 2, \ldots, n, \quad \Omega_3\Omega_3^T = I_q, \quad \|\Omega_3\|_2 = 1.
Similarly, the strictly upper part of (17) is represented rowwise as the system of equations
S 2 x ( 1 ) S 1 y = g vec ( ( Up ( δ N U 1 Σ n Σ n δ N V Δ 1 ) ) T ) ,
where
g = vec((Up(δF_1))^T) = Ω_4 vec(δF_1) ∈ R^q, where Ω_4 ∈ R^{q×n²} is the 0–1 selection matrix that extracts the entries of the strictly upper triangular part of δF_1 row by row; Ω_4 Ω_4^T = I_q, ‖Ω_4‖_2 = 1.
It should be noted that the operators Low and Up used in (21), (24), take only the entries of the strict lower and strict upper part, respectively, of the corresponding matrix which are placed by the operator vec column by column excluding the zeros above or under the diagonal, respectively. For instance, if n = 4 the elements of the vectors f and g satisfy
f = [δf_{21}, δf_{31}, δf_{41}, δf_{32}, δf_{42}, δf_{43}]^T,  g = [δf_{12}, δf_{13}, δf_{14}, δf_{23}, δf_{24}, δf_{34}]^T.
In this way, the solution of (17) reduces to the solution of the two coupled equations (21) and (24) with diagonal matrices of size q × q . The equation (18) can be solved independently yielding
x ( 2 ) = vec ( ( δ F 2 Δ 2 ) Σ n 1 ) .
Note that the elements of x ( 1 ) depend on the elements of y and vice versa, while x ( 2 ) does not depend on y.
Thanks to the diagonal structure of the matrices S 1 , S 2 and Σ , the equations (21) - (25) can be solved efficiently with high accuracy.
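A minimal MATLAB sketch of how the diagonal matrices S_1 and S_2 of (22), (23) can be formed from a vector sigma holding the singular values is given below; since both matrices are diagonal, the coupled equations (21) and (24) split into q independent 2 × 2 linear systems, one for each pair of corresponding entries of x^(1) and y (the variable names are ours):

    % Diagonal matrices S1 and S2 built from the singular values sigma(1..n)
    q  = n*(n-1)/2;
    s1 = zeros(q, 1);  s2 = zeros(q, 1);  k = 0;
    for j = 1:n-1
        for i = j+1:n
            k     = k + 1;
            s1(k) = sigma(j);            % sigma_j repeated n-j times
            s2(k) = sigma(i);            % sigma_{j+1}, ..., sigma_n
        end
    end
    S1 = diag(s1);  S2 = diag(s2);       % q-by-q diagonal matrices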

5. Asymptotic Bounds of the Perturbation Parameters

Equations (21) and (24) can be used to determine asymptotic approximations of the vectors x ( 1 ) and y. The exact solution of these equations satisfies
x ( 1 ) = S x f ( f + vec ( Low ( Δ 1 ) ) + S x g ( g vec ( ( Up ( δ N U 1 Σ n Σ n δ N V Δ 1 ) ) T ) ,
y = S y f ( f + vec ( Low ( Δ 1 ) ) + S y g ( g vec ( ( Up ( δ N U 1 Σ n Σ n δ N V Δ 1 ) ) T ) ,
where taking into account that S 1 and S 2 commute, we have that
S x f = ( S 2 S 1 S 2 1 S 1 ) 1 S 1 S 2 1 = ( S 2 2 S 1 2 ) 1 S 1 R q × q , S x g = ( S 2 S 1 S 2 1 S 1 ) 1 = ( S 2 2 S 1 2 ) 1 S 2 R q × q , S y f = S x g , S y g = S x f .
Exploiting these expressions, it is possible to show that
S x f = diag σ 1 σ 2 2 σ 1 2 , σ 1 σ 3 2 σ 1 2 , , σ 1 σ n 2 σ 1 2 n 1 , σ 2 σ 2 2 σ 1 2 , , σ 2 σ n 2 σ 2 2 n 2 , , σ n 1 σ n 2 σ 2 2 1 , S x g = diag σ 2 σ 2 2 σ 1 2 , σ 3 σ 3 2 σ 1 2 , , σ n σ n 2 σ 1 2 n 1 , σ 3 σ 3 2 σ 2 2 , , σ n σ n 2 σ 2 2 n 2 , , σ n σ n 2 σ n 1 2 1 .
Let us consider the conditions for existence of a solution of equations (21), (24). These equations have a unique solution for x^(1) and y if and only if the matrix
\begin{bmatrix} S_1 & S_2 \\ S_2 & S_1 \end{bmatrix}
is nonsingular, or equivalently, the matrices S_1, S_2 and S_2^2 − S_1^2 are nonsingular. The matrices S_1 and S_2 are nonsingular, since the matrix A has nonzero singular values. In turn, a condition for nonsingularity of the matrix S_2^2 − S_1^2 can be found taking into account the structure of the matrices S_xf and S_xg shown above. Clearly, the denominators of the first n − 1 diagonal entries of S_xf and S_xg will be different from zero if σ_1 is distinct from σ_2, σ_3, …, σ_n. Similarly, the denominators of the next group of n − 2 diagonal entries will be different from zero if σ_2 is distinct from σ_3, …, σ_n, and so on. Finally, σ_{n−1} should be different from σ_n. Thus we come to the conclusion that the matrices S_xf and S_xg will exist and the equations (21), (24) will have a unique solution if and only if the singular values of A are distinct. This conclusion should not come as a surprise, since U is the matrix of the transformation of A A^T to Schur (diagonal) form U Σ Σ^T U^T and V is the matrix of the transformation of A^T A to diagonal form V Σ^T Σ V^T. On the other hand, the perturbation problem for the Schur form is well posed only when the matrix eigenvalues (the diagonal elements of Σ Σ^T or Σ^T Σ) are distinct.
Neglecting the higher order terms in (26), (27) and approximating each element of f and g by the perturbation norm δ A 2 , we obtain the linear estimates
x ( 1 ) l i n = ( | S x f | + | S x g | ) h ,
y l i n = ( | S y f | + | S y g | ) h ,
where
h = [ 1 , 1 , , 1 ] q T × δ A 2 .
Clearly, if the matrices | S x f | , | S x g | , | S y f | , | S y g | have large diagonal elements, then the estimates of the perturbation parameters will be large. Using the expressions for S x f and S x g , we may show that
‖S_xf‖_2 = max_{i,j} σ_i / |σ_j^2 − σ_i^2|,  ‖S_xg‖_2 = max_{i,j} σ_j / |σ_j^2 − σ_i^2|,  i = 1, 2, …, n − 1, j = i + 1, i + 2, …, n.
Note that the norms of S_xf and S_xg can be considered as condition numbers of the vectors x^(1) and y with respect to the perturbation δA.
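These condition numbers are cheap to evaluate; a small MATLAB sketch (sigma is assumed to hold the n distinct singular values; the names are ours) is:

    % Condition numbers ||S_xf||_2 and ||S_xg||_2 of the parameter vectors
    condxf = 0;  condxg = 0;
    for i = 1:n-1
        for j = i+1:n
            gap    = abs(sigma(j)^2 - sigma(i)^2);
            condxf = max(condxf, sigma(i)/gap);
            condxg = max(condxg, sigma(j)/gap);
        end
    end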
An asymptotic estimate of the vector x ( 2 ) is obtained neglecting the higher order term Δ 2 and approximating its elements according to (25) as
x^{(2)}_{lin} = \mathrm{vec}(Z), \quad Z = \|\delta A\|_2 \begin{bmatrix} 1/\sigma_1 & 1/\sigma_2 & \cdots & 1/\sigma_n \\ \vdots & \vdots & & \vdots \\ 1/\sigma_1 & 1/\sigma_2 & \cdots & 1/\sigma_n \end{bmatrix} \in \mathbb{R}^{(m-n)\times n}.
Equation (30) shows that a group of n elements of x ( 2 ) l i n will be large, if the singular value participating in the corresponding column of Z is small. The presence of large elements in the vector x leads to large entries in δ W U and consequently in the estimate of δ U . This observation is in accordance with the well known fact that the sensitivity of a singular subspace is inversely proportional to the smallest singular value associated with this subspace.
As a result of solving the linear systems (28) - (30), we obtain an asymptotic approximation of the vector x as
x l i n = Ω 1 T x ( 1 ) l i n + Ω 2 T x ( 2 ) l i n .
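In MATLAB terms, the estimate (30) can be formed as in the following sketch (dA, sigma, m and n are assumed available; the variable names are ours):

    % Asymptotic estimate (30) of the parameters x^(2)
    normdA = norm(dA);                            % ||dA||_2
    Z      = normdA*ones(m-n, 1)*(1./sigma(:)');  % (m-n)-by-n matrix with entries 1/sigma_j
    x2lin  = Z(:);                                % vec(Z), taken column by column
    % the full vector x_lin is then assembled from x1lin and x2lin according to (31)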
Example 1.
Consider the 6 × 4 matrix
A = 3 3 3 6 3 1 1 8 3 1 0 9 3 1 2 11.1 6 4 6 11.9 3 1 1 10.1
and assume that it is perturbed by
δ A = 10 c × A 0 , A 0 = 5 3 1 2 7 4 9 5 3 8 5 4 2 6 3 7 5 3 1 9 4 8 2 6 , δ A 2 = 2.081981 × 10 c ,
where c is a varying parameter. (For convenience, the entries of the matrix A 0 are taken as integers.)
The singular value decompositions of the matrices A and A + δA are computed by the function svd of MATLAB® [7]. The singular values of A are
σ 1 = 25.460186918120350 , σ 2 = 7.684342946021752 , σ 3 = 1.248776186923002 , σ 4 = 0.017709242587950 .
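For reference, the exact perturbation quantities used in the examples can be formed as in the following minimal sketch (A and dA are assumed given; in practice the signs of the computed singular vectors may have to be aligned, since svd determines them only up to a factor ±1):

    % Unperturbed and perturbed decompositions and the exact changes
    [U,  S,  V ] = svd(A);
    [Ut, St, Vt] = svd(A + dA);
    dU1    = Ut(:, 1:n) - U(:, 1:n);                   % change of the first n left singular vectors
    dV     = Vt - V;                                   % change of the right singular vectors
    dsigma = diag(St(1:n, 1:n)) - diag(S(1:n, 1:n));   % changes of the singular values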
In the given case the matrices S 1 and S 2 in equations (22), (23), are
S 1 = diag ( 25.4601869 , 25.4601869 , 25.4601869 , 7.6843429 , 7.6843429 , 1.2487762 ) , S 2 = diag ( 7.6843429 , 1.2487762 , 0.0177092 , 1.2487762 , 0.0177092 , 0.0177092 ) ,
and the matrices participating in (26), (27) are
S x f = diag ( 0.0432135 , 0.0393717 , 0.0392770 , 0.1336647 , 0.1301354 , 0.8009451 ) , S x g = diag ( 0.0130426 , 0.0019311 , 0.0000273 , 0.0217217 , 0.0002999 , 0.0113584 ) S y f = S x g , S y g = S x f .
The matrix
\begin{bmatrix} S_1 & S_2 \\ S_2 & S_1 \end{bmatrix},
which determines the solution for x^(1) and y, has a condition number equal to 26.9234179. The exact parameters x_k and their linear approximations x_k^lin, computed by using (28) and (30), are shown to eight decimal digits for two perturbation sizes in Table 1 (the elements x_k corresponding to i > n are obtained from equation (30)). The differences between the values of x_k^lin and x_k are due to the bounding of the elements of the vectors f and g by the value of ‖δA‖_2 and taking the terms in (28) - (30) with positive signs. Both approximations are necessary to ensure that |x| ⪯ x^lin for arbitrarily small perturbations.
Similarly, in Table 2 we show for the same perturbations of A the exact perturbation parameters y and their linear approximations obtained from (29).

6. Bounding the Perturbations of the Singular Values

Equation (17) can also be used to determine linear and nonlinear estimates of the perturbations of the singular values. Considering the diagonal elements of this equation and taking into account that diag(δN_U1) = 0_n, diag(δN_V) = 0_n, we obtain
δ Σ n = diag ( δ F 1 ) diag ( δ D U 1 Σ n Σ n δ D V + Δ 1 )
or
δ σ i = δ f i i ( δ D U 1 Σ n Σ n δ D V + Δ 1 ) i i , i = 1 , 2 , , n ,
where ( . ) i i denotes the ith diagonal element of ( . ) . Neglecting the higher order terms, we determine the componentwise asymptotic estimate
δ σ i l i n = | δ f i i | , i = 1 , 2 , , n .
Bounding each diagonal element |δf_ii| by ‖δA‖_2, we find the normwise estimate of δσ_i,
δσ̂_i = ‖δA‖_2,
which is in accordance with Weyl's theorem (see (3)).
From (32) we also have that
i = 1 n δ σ i 2 δ F 1 F + δ D U 1 Σ n Σ n δ D V F + Δ 1 F .
In Table 3 we show the exact perturbations of the singular values of the matrix A of Example 1 along with the normwise bound δ σ ^ i = δ A 2 and the asymptotic estimate δ σ i l i n obtained from (33) under the assumption that δ A is known. The exact perturbations and their linear bounds are very close.
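When δA itself is available, the asymptotic estimates (33) amount to the following simple computation (a sketch with our variable names):

    % Componentwise asymptotic estimates of the singular value perturbations
    dF        = U'*dA*V;                    % transformed perturbation, m-by-n
    dsigmalin = abs(diag(dF(1:n, 1:n)));    % |df_ii|, i = 1, ..., n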

7. Asymptotic Bounds on the Perturbations of U 1 and V

Having componentwise estimates for the elements of x and y, it is possible to find bounds on the entries of the matrices δU_1 and δV. An asymptotic bound δW_U^lin = |δQ_U| on the absolute value of the matrix δW_U = U^T δU_1 is given by
δ W U l i n = 0 | x 1 | | x 2 | | x n 1 | | x 1 | 0 | x m | | x m + n 3 | | x 2 | | x m | 0 | x 2 m + n 6 | | x n 1 | | x m + n 3 | | x m + 2 n 6 | 0 | x n | | x m + n 2 | | x 2 m + n 5 | | x ( n 1 ) ( 2 m n ) / 2 + 1 | | x m 1 | | x 2 m 3 | | x 3 m 6 | | x p | R m × n .
From (6), a linear approximation of the matrix | δ U 1 | is determined as
| δ U 1 | δ U 1 l i n = | U | | U T δ U 1 | = | U | | δ W U | .
The matrix δ U 1 l i n gives bounds on the perturbations of the individual elements of the orthogonal transformation matrix U.
In a similar way we have that the linear approximation of the matrix δ W V is given by
δ W V l i n = | δ Q V | = 0 | y 1 | | y 2 | | y n 1 | | y 1 | 0 | y n | | y 2 n 3 | | y 2 | | y n | 0 | y 3 n 6 | | y n 1 | | y 2 n 3 | | y 3 n 6 | 0 R n × n .
From (7) we obtain that
| δ V | δ V l i n = | V | | δ W V |
Hence, the entries of the matrix δ V l i n give asymptotic estimates of the perturbations of the entries of V.
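In MATLAB form, the estimates (34) and (35) are simply (dWUlin and dWVlin denote the already computed bounds on |δW_U| and |δW_V|; the names are ours):

    % Componentwise asymptotic bounds on |dU1| and |dV|
    dU1lin = abs(U)*dWUlin;      % m-by-n bound on |dU1|
    dVlin  = abs(V)*dWVlin;      % n-by-n bound on |dV|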
For the matrix A of Example 1, we obtain that the absolute values of the exact changes of the entries of the matrix U_1 for the perturbation δA = 10^{−5} A_0 satisfy
|δU_1| = 10^{−3} × \begin{bmatrix}
0.0020130 & 0.0013238 & 0.0330402 & 0.2407168 \\
0.0009548 & 0.0070093 & 0.0356216 & 0.9425346 \\
0.0006562 & 0.0004107 & 0.0576633 & 0.1185295 \\
0.0000996 & 0.0028839 & 0.0020370 & 0.1517173 \\
0.0004584 & 0.0024637 & 0.0316821 & 0.3649600 \\
0.0004974 & 0.0014734 & 0.0630623 & 0.1877213
\end{bmatrix}
and their asymptotic componentwise estimates, found by using (34), are
δU_1^lin = 10^{−2} × \begin{bmatrix}
0.0018494 & 0.0052303 & 0.0215558 & 0.9532765 \\
0.0012398 & 0.0044262 & 0.0221403 & 1.3988805 \\
0.0014125 & 0.0045112 & 0.0219550 & 0.5264197 \\
0.0017421 & 0.0041695 & 0.0159661 & 0.5316375 \\
0.0015062 & 0.0033947 & 0.0117806 & 0.6365127 \\
0.0016843 & 0.0050286 & 0.0180145 & 0.9544088
\end{bmatrix}.
Also, for the same δA we have that
|δV| = 10^{−4} × \begin{bmatrix}
0.0188135 & 0.0293793 & 0.2621967 & 0.0365476 \\
0.0348050 & 0.0032323 & 0.0065525 & 0.2572914 \\
0.0153096 & 0.0121733 & 0.0836556 & 0.0994065 \\
0.0036567 & 0.0106224 & 0.0446413 & 0.0002219
\end{bmatrix}
and, according to (35),
δV^lin = 10^{−3} × \begin{bmatrix}
0.0110821 & 0.0341290 & 0.1616097 & 0.0370981 \\
0.0130707 & 0.0300485 & 0.0179106 & 0.1608764 \\
0.0161344 & 0.0250801 & 0.0803532 & 0.1020436 \\
0.0056685 & 0.0191880 & 0.0672176 & 0.0164314
\end{bmatrix}.
It is seen that the magnitudes of the entries of δU_1^lin and δV^lin reflect correctly the magnitudes of the corresponding entries of |δU_1| and |δV|, respectively. Note that the perturbations of the columns of U and V tend to increase as the column number increases.

8. Sensitivity of Singular Subspaces

The sensitivity of a left
U_r = span(u_1, u_2, …, u_r), r ≤ min{m − 1, n},
or right
V_r = span(v_1, v_2, …, v_r), r ≤ n − 1,
singular subspace of dimension r is measured by the angles between the corresponding unperturbed and perturbed subspaces [14, Ch. V].
Let the unperturbed left singular subspace corresponding to the first r singular values be denoted by U_r and its perturbed counterpart by Ũ_r, and let U^(r) and Ũ^(r) be orthonormal bases for U_r and Ũ_r, respectively. Then the maximum angle between U_r and Ũ_r is determined from
cos(Φ_max(U_r, Ũ_r)) = σ_min((U^(r))^T Ũ^(r)).
The expression (36) has the disadvantage that if Φ_max is small, then cos(Φ_max) ≈ 1 and the angle Φ_max is not well determined. To avoid this difficulty, instead of
cos(Φ_max(U_r, Ũ_r)) it is preferable to work with sin(Φ_max(U_r, Ũ_r)). It is possible to show that [2]
sin(Φ_max(U_r, Ũ_r)) = σ_max((U_⊥^(r))^T Ũ^(r)),
where U_⊥^(r) is the orthogonal complement of U^(r), (U_⊥^(r))^T U^(r) = 0. Since
U ˜ ( r ) = U ( r ) + δ U ( r ) ,
we have that
sin(Φ_max(U_r, Ũ_r)) = σ_max((U_⊥^(r))^T δU^(r)).
Equation (38) shows that the sensitivity of the left singular subspace U_r is connected to the values of the perturbation parameters x_k = u_i^T δu_j, k = i + (j − 1)m − j(j + 1)/2, i > r, j = 1, 2, …, r. In particular, for r = 1 the sensitivity of the first column of U (the left singular vector corresponding to σ_1) is determined as
sin ( Φ max ( U 1 , U ˜ 1 ) ) = σ max ( δ W U 2 : m , 1 ) ,
for r = 2 one has
sin ( Φ max ( U 2 , U ˜ 2 ) ) = σ max ( δ W U 3 : m , 1 : 2 )
and so on, see Figure 1, where the matrices (U_⊥^(r))^T δU^(r) = (δW_U)_{r+1:m, 1:r} for different values of r are highlighted in boxes and (*) denotes entries which are not used.
In a similar way, utilizing the matrix δ W V , it is possible to find the sine of the maximum angle between the perturbed V ˜ r and unperturbed right singular subspace V r ,
sin ( Θ max ( V r , V r ˜ ) ) = σ max ( δ W V r + 1 : n , 1 : r )
(see Figure 2). Hence, if the perturbation parameters are determined, it is possible to find at once sensitivity estimates for the nested singular subspaces
U 1 = span ( U ( 1 ) ) = span ( u 1 ) , U 2 = span ( U ( 2 ) ) = span ( u 1 , u 2 ) , U r = span ( U ( r ) ) = span ( u 1 , u 2 , , u r ) , r = min { m 1 , n }
and
V 1 = span ( V ( 1 ) ) = span ( v 1 ) , V 2 = span ( V ( 2 ) ) = span ( v 1 , v 2 ) , V n 1 = span ( V ( n ) ) = span ( v 1 , v 2 , , v n 1 ) .
Specifically, as
δ W U = * * * * x 1 * * * x 2 x m * * x n 1 x m + n 3 x 2 m + n 6 * x m 1 x 2 m 3 x 3 m 6 x p R m × n ,
we have that the exact maximum angle between the perturbed and unperturbed left singular subspace of dimension r is given by
Φ max ( U r , U ˜ r ) = arcsin ( σ max ( δ W U r + 1 : m , 1 : r ) ) .
Similarly, the maximum angle between the perturbed and unperturbed right singular subspace of dimension r is
Θ_max(V_r, Ṽ_r) = arcsin(σ_max((δW_V)_{r+1:n, 1:r})).
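Given the matrices δW_U and δW_V, the exact angles (39), (40) for all nested singular subspaces can be evaluated at once, for instance as in the following sketch (dWU, dWV, m and n are assumed available; the names are ours):

    % Maximum angles between the unperturbed and perturbed nested singular subspaces
    PhiMax   = zeros(min(m-1, n), 1);
    ThetaMax = zeros(n-1, 1);
    for r = 1:min(m-1, n)
        PhiMax(r) = asin(norm(dWU(r+1:m, 1:r)));     % norm(.) = sigma_max(.)
    end
    for r = 1:n-1
        ThetaMax(r) = asin(norm(dWV(r+1:n, 1:r)));
    end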
To find an asymptotic estimate of Φ max ( U r , U ˜ r ) , in the expression for the matrix δ W U the elements x k are replaced by their linear approximations (31). Let
δ W U l i n = * * * * x 1 l i n * * * x 2 l i n x m l i n * * x n 1 l i n x m + n 3 l i n x 2 m + n 6 l i n * x m 1 l i n x 2 m 3 l i n x 3 m 6 l i n x p l i n R m × n
Then the following asymptotic estimate holds,
Φ_max^lin(U_r, Ũ_r) = arcsin(‖(δW_U^lin)_{r+1:m, 1:r}‖_2).
In particular, for the sensitivity of the range R ( A ) of A we obtain that
sin ( Φ max l i n ( U n , U ˜ n ) ) = δ W U n + 1 : m , 1 : n l i n 2 .
Similarly, for the angles between the unperturbed and perturbed right singular subspaces we obtain the linear estimates
Θ_max^lin(V_r, Ṽ_r) = arcsin(‖(δW_V^lin)_{r+1:n, 1:r}‖_2).
We note that the use of separate x and y parameters decouples the SVD perturbation problem and makes it possible to determine the sensitivity estimates of the left and right singular subspaces independently. This is important in cases when the left or right subspace in a pair of singular subspaces is much more sensitive than its counterpart.
Consider the same perturbed matrix A as in Example 1. Computing the matrices δW_U^lin and δW_V^lin, it is possible to estimate at once the sensitivity of all four pairs of singular subspaces of dimensions r = 1, 2, 3, 4 corresponding to the chosen ordering of the singular values. In Table 4 we show the actual sensitivity of the left and right singular subspaces and the computed asymptotic estimates (41) of this sensitivity. To determine the sensitivity of other singular subspaces, it is necessary to reorder the singular values in the initial decomposition so that the desired subspace appears in the set of nested singular subspaces. Also, the computations related to determining the linear estimates have to be repeated.

9. Global Perturbation Bounds

Since analytical expressions for the global perturbation bounds of the singular value decomposition are not known up to this moment, we present an iterative procedure for finding estimates of these bounds based on the asymptotic analysis presented above. This procedure is similar to the corresponding iterative schemes proposed in [8,9], but is more complicated since the determination of bounds on the parameter vectors x and y must be done simultaneously, due to the fact that the equations for these parameters are coupled.

9.1. Perturbation Bounds of the Entries of U 2

The main difficulty in determining global bounds of x and y is to find an appropriate approximation of the higher order term Δ_2 in (18). As is seen from (20), the determination of such an estimate requires knowledge of the perturbation δU_2, which is not well determined since it contains the columns of the matrix δU_2 = Ũ_2 − U_2. This perturbation satisfies the equations
( U 1 + δ U 1 ) T ( U 2 + δ U 2 ) = 0 ,
( U 2 + δ U 2 ) T ( U 2 + δ U 2 ) = I m n
which follow from the orthogonality of the matrix
U ˜ = [ U 1 + δ U 1 , U 2 + δ U 2 ] .
An estimate of δ U 2 can be found based on a suitable approximation of
X = U T δ U 2 : = X 1 X 2 .
As shown in [9], a first order approximation of the matrix X can be determined using the estimates
X 1 a p p r = ( I n + δ W 1 T ) 1 δ W 2 T R n × ( m n ) ,
X 2 a p p r = X 1 a p p r T X 1 a p p r / 2 R ( m n ) × ( m n ) ,
where δ W 1 = U 1 T δ U 1 , δ W 2 = U 2 T δ U 1 and for sufficiently small perturbation δ U 1 the matrix I n + δ W 1 T is nonsingular. (Note that δ W U = [ δ W 1 T δ W 2 T ] T is already estimated.) Thus, we have that
δ U 2 a p p r = U X a p p r .

9.2. Iterative Procedure for Finding Global Bounds of x and y

Global componentwise perturbation bounds of the matrices U and V can be found using nonlinear estimates of the matrices δW_U and δW_V, determined by (10) and (11), respectively. Such estimates are found by correcting the linear estimates of the perturbation parameters x_k = u_i^T δu_j and y_ℓ = v_i^T δv_j at each iteration step in a way similar to the one presented in [8] and [9].
Consider the case of estimating the matrix δ W U . It is convenient to substitute the terms containing the perturbations δ u j in (10) by the quantities
δ w u j = U T δ u j , j = 1 , 2 , , n ,
which have the same magnitude as δ u j . Since
δ u i T δ u j = δ u i T U U T δ u j = δ w u i T δ w u j ,
the absolute value of the matrix δW_U ∈ R^{m×n} (10) can be bounded as
|δW_U^nonl| = |U^T δU_1| = [|δw_u1|, |δw_u2|, …, |δw_un|] ⪯ |δQ_U| + |δD_U| + |δN_U|,
where
| δ Q U | : = δ Q U 1 δ Q U 2 = 0 | x 1 | | x 2 | | x n 1 | | x 1 | 0 | x m | | x m + n 3 | | x 2 | | x m | 0 | x 2 m + n 6 | | x n 1 | | x m + n 3 | | x 2 m + n 6 | 0 | x n | | x m + n 2 | | x 2 m + n 5 | | x ( n 1 ) ( 2 m n ) / 2 + 1 | | x m 1 | | x 2 m 3 | | x 3 m 6 | | x p | R m × n ,
| δ D U | : = δ D U 1 δ D U 2 = | d u 11 | 0 0 0 0 | d u 22 | 0 0 0 0 | d u 33 | 0 0 0 0 | d u n n | 0 0 0 0 0 0 0 0 R m × n ,
| δ N U | : = δ N U 1 δ N U 2 = 0 | δ w u 1 T | | δ w u 2 | | δ w u 1 T | | δ w u 3 | | δ w u 1 T | | δ w u n | 0 0 | δ w u 2 T | | δ w u 3 | | δ w u 2 T | | δ w u n | 0 0 0 | δ w u 3 T | | δ w u n | 0 0 0 | δ w u n 1 T | | δ w u n | 0 0 0 0 0 0 0 0 R m × n .
Since the unknown column estimates | δ w j | participate in both sides of (48), it is possible to obtain them as follows. The first column of δ W U is determined from
| δ w u 1 | = | δ q u 1 | + | δ d u 1 | ,
where | δ q u 1 | , | δ d 1 | are the first columns of | δ Q U | , | δ D U | , respectively. Then the next column estimates | δ w j | , j = 2 , 3 , n can be determined recursively from
| δ w u j | = | δ q u j | + | δ d u j | + | δ w u 1 T | | δ w u j | | δ w u 2 T | | δ w u j | | δ w u j 1 T | | δ w u j | 0 0 ,
which is equivalent to solving the linear system
| S U j | | δ w u j | = | δ q u j | + | δ d u j | ,
where
| S U j | = e 1 T | δ w u 1 T | e 2 T | δ w u 2 T | e j 1 T | δ w u j 1 T | e j T e n T R m × m
and e j is the jth column of I m . The matrix | S U j | is upper triangular with unit diagonal and if | δ w u i | , i = 1 , 2 , , j 1 have small norms, then the matrix | S U j | is diagonally dominant. Hence, it is very well conditioned with condition number close to 1.
As a result we obtain that
| δ w u j | | S U j | 1 ( | δ q u j | + | δ d u j | )
which produces the jth column of | δ W U n o n l | .
A similar recursive procedure can be used to determine the quantities | δ w v j | = | v i T δ v j | . In this case for each j it is necessary to solve the nth order linear system
| S V j | | δ w v j | = | δ q v j | + | δ d v j | .
The estimates of | δ w u j | , j = 1 , 2 , , m and | δ w v j | , j = 1 , 2 , , n thus obtained, are used to bound the absolute values of the nonlinear elements Δ 1 and Δ 2 given in (19) and (20), respectively. Utilizing the approximation of U T δ U 2 , it is possible to find an approximation of the matrix U T δ U as
Z = [ δ W U n o n l , δ U 2 a p p r ] : = [ Z 1 , Z 2 ] , Z 1 R m × n , Z 2 R m × ( m n )
where δ U 2 a p p r = U X a p p r ,
X a p p r = X 1 a p p r X 2 a p p r
and X 1 a p p r , X 2 a p p r are given by (45), (46). Then the elements of Δ 1 , Δ 2 are bounded according to (19) and (20) as
| Δ 1 i j n o n l | σ j | Z 1 i T Z 1 j | + | Z 1 i T Σ Y j | + ( Z 1 i 2 + Y j 2 + Z 1 i 2 Y j 2 ) δ A 2 , i = 1 , 2 , , n , j = 1 , 2 , , n ,
| Δ 2 i j n o n l | σ j | Z 2 i T Z 1 j | + | Z 2 i T Σ Y j | + ( Z 2 i 2 + Y j 2 + Z 2 i 2 Y j 2 ) δ A 2 . i = 1 , 2 , , m n , j = 1 , 2 , , n .
Utilizing (26) and (27), the nonlinear corrections of the vectors x ( 1 ) and y can be determined from
δx^(1) = |S_xf| vec(Low(|Δ_1^nonl|)) + |S_xg| vec((Up(|δN_U1| Σ_n + Σ_n |δN_V| + |Δ_1^nonl|))^T),
δy = |S_yf| vec(Low(|Δ_1^nonl|)) + |S_yg| vec((Up(|δN_U1| Σ_n + Σ_n |δN_V| + |Δ_1^nonl|))^T),
where |δN_U1| is estimated by using the corresponding expression (48) and |δN_V| by a similar expression.
The nonlinear correction of x ( 2 ) is found from
\delta x^{(2)} = \mathrm{vec}(Z), \quad Z = \|\Delta_2^{nonl}\|_2 \begin{bmatrix} 1/\sigma_1 & 1/\sigma_2 & \cdots & 1/\sigma_n \\ \vdots & \vdots & & \vdots \\ 1/\sigma_1 & 1/\sigma_2 & \cdots & 1/\sigma_n \end{bmatrix} \in \mathbb{R}^{(m-n)\times n}.
and the total correction vector is determined from
δ x = Ω 1 T δ x ( 1 ) + Ω 2 T δ x ( 2 ) .
Now, the nonlinear estimates of the vectors x and y are found from
x n o n l = x l i n + δ x ,
y n o n l = y l i n + δ y .
In this way we obtain an iterative scheme for finding simultaneously nonlinear estimates of the coupled perturbation parameter vectors x and y involving the equations (48) - (51), (52) - (57). In the numerical experiments presented below, the initial conditions are chosen as x_0^nonl = eps × [1, 1, …, 1]^T and y_0^nonl = eps × [1, 1, …, 1]^T, where eps = 2^{−52} is the machine precision returned by the MATLAB® function eps. The stopping criteria for the x- and y-iterations are taken as
err_x = ‖x_s^nonl − x_{s−1}^nonl‖_2 / ‖x_{s−1}^nonl‖_2 < tol,  err_y = ‖y_s^nonl − y_{s−1}^nonl‖_2 / ‖y_{s−1}^nonl‖_2 < tol,
where tol = 10 × eps. The scheme converges for perturbations δA of restricted size. It is possible that y converges while x does not converge.
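The stopping test itself is elementary; one possible MATLAB formulation (xs, xprev, ys, yprev denote the current and previous nonlinear estimates; the names are ours) is the sketch

    % Relative-error stopping test for the x- and y-iterations
    tol  = 10*eps;
    errx = norm(xs - xprev)/norm(xprev);
    erry = norm(ys - yprev)/norm(yprev);
    converged = (errx < tol) && (erry < tol);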
The nonlinear estimate of the higher term Δ 1 can be used to obtain nonlinear corrections of the singular value perturbations. Based on (32), a nonlinear correction of each singular value can be determined as
δ σ i c o r r = ( | δ D U 1 | Σ n + Σ n | δ D V | + | Δ 1 | ) i i ,
so that the corresponding singular value perturbation is estimated as
δ σ i n o n l = σ i l i n + δ σ i c o r r .
Note that δσ_i^lin = |δf_ii| is known only when the entries of the perturbation δA are known, which is usually not fulfilled in practice. Nevertheless, the nonlinear correction (58) can be useful in estimating the sensitivity of a given singular value.
In Table 5, we present the number of iterations necessary to find the global bound Δ^nonl for the problem considered in Example 1 with perturbations δA = 10^c × A_0, c = −10, −9, …, −3. In the last two columns of the table we give the norm of the exact higher order term Δ and its approximation ‖Δ^nonl‖_2 computed according to (50), (51) (the approximation is given for the last iteration). In particular, for the perturbation δA = 10^{−5} A_0, the exact higher order term Δ, found using (19) and (20), is
|Δ| = 10^{−7} × \begin{bmatrix}
0.0045563 & 0.0006839 & 0.0146195 & 0.0257942 \\
0.0012737 & 0.0008692 & 0.0080543 & 0.0003708 \\
0.0000993 & 0.0015257 & 0.0086067 & 0.0085598 \\
0.0019107 & 0.0007767 & 0.0258012 & 0.0170781 \\
0.0032494 & 0.0006701 & 0.0423789 & 0.2046100 \\
0.0064032 & 0.0004567 & 0.0384765 & 0.0071094
\end{bmatrix}, \quad ‖Δ‖_2 = 2.12746 × 10^{−8}.
Implementing the described iterative procedure, after 10 iterations we obtain the nonlinear bound
Δ^nonl = 10^{−5} × \begin{bmatrix}
0.0019716 & 0.0021343 & 0.0049111 & 0.0047911 \\
0.0041273 & 0.0053422 & 0.0069174 & 0.0069309 \\
0.0189857 & 0.0186412 & 0.0222135 & 0.0179503 \\
0.8671798 & 0.8680878 & 0.8664829 & 0.8607932 \\
0.5146400 & 0.5162827 & 0.5207565 & 0.2618101 \\
0.5146400 & 0.5162827 & 0.5207565 & 0.2618101
\end{bmatrix}, \quad ‖Δ^nonl‖_2 = 2.16338 × 10^{−5},
computed according to (50) and (51) on the base of the nonlinear bound x n o n l .
The global bounds x n o n l 2 and y n o n l 2 , found for different perturbations along with the values of x 2 and y 2 , are shown in Table 6. The results confirm that the global estimates of x and y are close to the corresponding asymptotic estimates.
In Figure 3 and Figure 4 we show the convergence of the relative errors
‖x_s^nonl − x_{s−1}^nonl‖ / ‖x_{s−1}^nonl‖
and
‖y_s^nonl − y_{s−1}^nonl‖ / ‖y_{s−1}^nonl‖,
respectively, at step s of the iterative process for different perturbations δA = 10^c × A_0. For the given example the iteration behaviours of x and y are close.
As is seen from the figures, with increasing perturbation size the convergence worsens, and for c = −3 (‖δA‖_2 = 2.08198 × 10^{−2}) the iteration diverges. This demonstrates the restricted usefulness of the nonlinear estimates, which are valid only for limited perturbation magnitudes.
In Table 7 we give normwise perturbation bounds of the singular values along with the actual singular value perturbations and their global bounds found for two perturbations of A under the assumption that the linear bounds of all singular values are known. As it can be seen from the table, the nonlinear estimates of the singular values are very tight.

9.3. Global Perturbation Bounds of δ U 1 and δ V

Having nonlinear bounds of x, y, | δ W U | and | δ W V | , we may find nonlinear bounds on the perturbations of the entries of U 1 and V according to the relationships
δ U 1 n o n l = | U | | δ W U n o n l | ,
δ V n o n l = | V | | δ W V n o n l | .
For the perturbations of the orthogonal matrices of Example 1 we obtain the nonlinear componentwise bounds
δU_1^nonl = 10^{−2} × \begin{bmatrix}
0.0018632 & 0.0052760 & 0.0300043 & 0.9595752 \\
0.0012438 & 0.0044395 & 0.0270841 & 1.4007065 \\
0.0014395 & 0.0046009 & 0.0279076 & 0.5379336 \\
0.0017550 & 0.0042123 & 0.0215531 & 0.5375519 \\
0.0015075 & 0.0033988 & 0.0146480 & 0.6375027 \\
0.0016913 & 0.0050519 & 0.0262884 & 0.9579537
\end{bmatrix}
and
δV^nonl = 10^{−3} × \begin{bmatrix}
0.0110870 & 0.0341508 & 0.1619665 & 0.0371843 \\
0.0130758 & 0.0300693 & 0.0179405 & 0.1612236 \\
0.0161397 & 0.0250977 & 0.0805035 & 0.1022483 \\
0.0056705 & 0.0191966 & 0.0673461 & 0.0164515
\end{bmatrix}.
These bounds are close to the linear estimates δU_1^lin and δV^lin obtained in Section 7.
Based on (39), (40), global estimates of the maximum angles between the unperturbed and perturbed singular subspaces of dimension r can be obtained using the nonlinear bounds δ W U n o n l and δ W V n o n l of the matrices δ W U and δ W V , respectively. For the pair of left and right singular subspaces we obtain that
Φ_max^nonl(U_r, Ũ_r) = arcsin(‖(δW_U^nonl)_{r+1:m, 1:r}‖_2),  r ≤ min{m − 1, n},
Θ_max^nonl(V_r, Ṽ_r) = arcsin(‖(δW_V^nonl)_{r+1:n, 1:r}‖_2),  r ≤ n − 1.
In Table 8 we give the exact angles between the perturbed and unperturbed left and right singular subspaces of different dimensions and their nonlinear bounds computed using (61) and (62) for the matrix A from Example 1 and two perturbations δA = 10^c × A_0, c = −10, −5. The comparison with the corresponding linear bounds given in Table 4 shows that the two types of bounds produce close results. As in the estimation of the other elements of the singular value decomposition, the global perturbation bounds are slightly larger than the corresponding asymptotic estimates, but they give guaranteed bounds on the changes of the respective elements, although for limited sizes of δA.

10. Two Higher Order Examples

In this section, we present the results of two numerical experiments with higher order matrices to illustrate the properties of the asymptotic and global estimates obtained in the paper.
Example 2.
Consider a 50 × 20 matrix A, taken as
A = U_0 \begin{bmatrix} \Sigma_0 \\ 0 \end{bmatrix} V_0^T,
where Σ 0 = diag ( 1 , 1 , , 1 ) , the matrices U 0 and V 0 are constructed as proposed in [1],
U_0 = M_2 S_U M_1, M_1 = I_m − 2ee^T/m, M_2 = I_m − 2ff^T/m, e = [1, 1, 1, …, 1]^T, f = [1, −1, 1, …, (−1)^{m−1}]^T, S_U = diag(1, σ, σ^2, …, σ^{m−1}),
V_0 = N_2 S_V N_1, N_1 = I_n − 2gg^T/n, N_2 = I_n − 2hh^T/n, g = [1, 1, 1, …, 1]^T, h = [1, −1, 1, …, (−1)^{n−1}]^T, S_V = diag(1, τ, τ^2, …, τ^{n−1}),
and the orthogonal and symmetric matrices M_1, M_2, N_1, N_2 are Householder reflections. The condition numbers of U_0 and V_0 with respect to inversion are controlled by the variables σ and τ and are equal to σ^{m−1} and τ^{n−1}, respectively. In the given case, σ = 1.2, τ = 1.3 and cond(U_0) = 7583.7, cond(V_0) = 146.2. The minimum singular value of the matrix A is σ_min(A) = 0.114319461998077. The perturbation of A is taken as δA = 10^c × A_0, where c is a negative number and A_0 is a matrix with random entries generated by the MATLAB® function rand.
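A sketch of this construction in MATLAB (following the formulas above, with Σ_0 = I_n as stated; the variable names are ours) is:

    % Test matrix of Example 2: A = U0*[Sigma0; 0]*V0'
    m = 50;  n = 20;  sig = 1.2;  tau = 1.3;
    e  = ones(m, 1);   f  = (-1).^(0:m-1)';           % defining vectors of the reflections
    g  = ones(n, 1);   h  = (-1).^(0:n-1)';
    M1 = eye(m) - 2*(e*e')/m;   M2 = eye(m) - 2*(f*f')/m;
    N1 = eye(n) - 2*(g*g')/n;   N2 = eye(n) - 2*(h*h')/n;
    U0 = M2*diag(sig.^(0:m-1))*M1;                    % cond(U0) = sig^(m-1)
    V0 = N2*diag(tau.^(0:n-1))*N1;                    % cond(V0) = tau^(n-1)
    A  = U0*[eye(n); zeros(m-n, n)]*V0';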
In Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 we show several results related to the perturbations of the singular value decomposition of A for 30 values of c between −12 and −3. As particular examples, in Figure 5 we display the perturbations of the entry U_{20,15}, which is an element of the matrix U_1, and in Figure 6 the perturbations of the entry V_{20,15}, both as functions of ‖δA‖_2. The componentwise linear bounds reflect correctly the behavior of the actual perturbations and are valid for wide ranges of the perturbation size. Note that this holds for all elements of U and V. The global (nonlinear) bounds practically coincide with the linear bounds but do not exist for perturbations whose size is larger than 2 × 10^{−3}.
In Figure 7 and Figure 8 we show the angles between the perturbed and unperturbed left
U 15 = span ( u 1 , u 2 , , u 15 )
and right
V 15 = span ( v 1 , v 2 , , v 15 )
singular subspaces of dimension 15. Again, the linear bounds on the angles |Φ_max(U_15, Ũ_15)| and |Θ_max(V_15, Ṽ_15)| are valid for perturbation magnitudes from ‖δA‖_2 = 10^{−13} to ‖δA‖_2 = 10^{−1}, and this also holds for singular subspaces of other dimensions.
In Figure 9 and Figure 10 we show the perturbations of the singular values and their nonlinear bounds for perturbations with ‖δA‖_2 = 2.7282784 × 10^{−10} and ‖δA‖_2 = 4.3932430 × 10^{−4}. While in the first case the nonlinear bound δσ_i^nonl coincides with the actual change δσ_i of the singular values, in the second case the bound becomes significantly greater than the actual change due to the overestimation of the higher order term Δ_1.
Example 3.
Consider a 150 × 100 matrix A, constructed as in the previous example for σ = 1.05 and τ = 1.0. The perturbation of A is taken as δA = 10^{−9} × A_0, where A_0 is a matrix with random entries.
In Figure 11 and Figure 12 we show the entries of the matrices |δU_1| and |δV|, respectively, along with the corresponding componentwise estimates δU^lin and δV^lin. The absolute values of the exact changes in all 100 singular values, along with the bounds |δσ_i^nonl| and ‖δA‖_2, are shown in Figure 13. The nonlinear bounds of δU_1 and δV are found in only 7 iterations and are visually indistinguishable from the corresponding linear bounds.

11. Conclusions

The SVD perturbation analysis presented in this paper makes it possible to determine componentwise perturbation bounds of the orthogonal matrices, singular values and singular subspaces of a full rank matrix. The analysis performed has some peculiarities which make it a challenging problem. On one hand, the SVD analysis is simpler than some other problems, like the QR decomposition perturbation problem. This is due to the diagonal form of the decomposed matrix which, among other things, allows the equations for the perturbation parameters to be solved easily, avoiding the use of the Kronecker product. On the other hand, the presence of two matrices in the decomposition requires the introduction of two different parameter vectors which are mutually dependent due to the relationship between the perturbations of the two orthogonal matrices. This makes it necessary to solve a coupled system of equations for the parameter vectors, which complicates the analysis.
The analysis presented in the paper can be extended with minor complications to the case of complex matrices.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable

Informed Consent Statement

Not applicable

Data Availability Statement

The datasets generated during the current study are available from the authors on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Notation
R , the set of real numbers;
R m × n , the space of m × n real matrices ( R n = R n × 1 );
R ( A ) , the range of A;
span(v_1, v_2, …, v_n), the subspace spanned by the vectors v_1, v_2, …, v_n;
X , the orthogonal complement of the subspace X ;
| A | , the matrix of absolute values of the elements of A;
A^T, the transpose of A;
A 1 , the inverse of A;
a j , the jth column of A;
A i , 1 : n , the ith row of m × n matrix A;
A i 1 : i 2 , j 1 : j 2 , the part of matrix A from row i 1 to i 2 and from column j 1 to j 2 ;
δ A , perturbation of A;
O ( δ A 2 ) , a quantity of second order of magnitude with respect to δ A ;
0 m × n , the zero m × n matrix;
I n , the unit n × n matrix;
e j , the jth column of I n ;
σ i ( A ) , the ith singular value of A;
σ min ( A ) , σ max ( A ) , the minimum and maximum singular values of A, respectively;
⪯, relation of partial order. If a, b ∈ R^n, then a ⪯ b means a_i ≤ b_i, i = 1, 2, …, n;
Low ( A ) , the strictly lower triangular part of A R n × n ;
Up ( A ) , the strictly upper triangular part of A R n × n ;
vec ( A ) , the vec mapping of A R m × n . If A is partitioned columnwise as
A = [ a 1 , a 2 , a n ] , then vec ( A ) = [ a 1 T , a 2 T , , a n T ] T ,
x , the Euclidean norm of x R n ;
A 2 , the spectral norm of A;
Θ max ( X , Y ) , the maximum angle between subspaces X and Y .

References

  1. C. Bavely and G. Stewart, An algorithm for computing reducing subspaces by block diagonalization, SIAM J. Numer. Anal., 16 (1979), pp. 359–367. [CrossRef]
  2. A. Björck and G. Golub, Numerical methods for computing angles between linear subspaces, Math. Comp., 27 (1973), pp. 579–594. [CrossRef]
  3. J.-G. Sun, Perturbation analysis of singular subspaces and deflating subspaces, Numer. Math., 73 (1996), pp. 235–263. [CrossRef]
  4. G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, fourth ed., 2013. ISBN 978-1-4214-0794-4.
  5. R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, second ed., 2013. ISBN 978-0-521-83940-2.
  6. R. Li, Matrix perturbation theory, in Handbook of Linear Algebra, L. Hogben, ed., Discrete Math. Appl., CRC Press, Boca Raton, FL, second ed., 2014, pp. (21–1)–(21–20).
  7. The MathWorks, Inc., MATLAB Version 9.9.0.1538559 (R2020b), Natick, MA, 2020, http://www.mathworks.com.
  8. P. Petkov, Componentwise perturbation analysis of the Schur decomposition of a matrix, SIAM J. Matrix Anal. Appl., 42 (2021), pp. 108–133. [CrossRef]
  9. P. Petkov, Componentwise perturbation analysis of the QR decomposition of a matrix, Mathematics, 10 (2022). [CrossRef]
  10. G. Stewart, Error and perturbation bounds for subspaces associated with certain eigenvalue problems, SIAM Rev., 15 (1973), pp. 727–764. [CrossRef]
  11. G. Stewart, Perturbation theory for the singular value decomposition. Technical Report CS-TR 2539, University of Maryland, College Park, MD, 1990.
  12. G. Stewart, On the early history of the singular value decomposition, SIAM Rev., 35 (1993), pp. 551–566. [CrossRef]
  13. G. Stewart, Matrix Algorithms, vol. I: Basic Decompositions, SIAM, Philadelphia, PA, 1998. ISBN 0-89871-414-1.
  14. G. Stewart and J.-G. Sun, Matrix Perturbation Theory, Academic Press, New York, 1990. ISBN 978-0126702309.
  15. R. Vaccaro, A second-order perturbation expansion for the SVD, SIAM J. Matrix Anal. Appl., 15 (1994), pp. 661–671. [CrossRef]
  16. P.-A. Wedin, Perturbation bounds in connection with singular value decomposition, BIT Numerical Mathematics, 12 (1972), pp. 99–111. [CrossRef]
Figure 1. Sensitivity estimations of the left singular subspaces.
Figure 2. Sensitivity estimations of the right singular subspaces.
Figure 3. Iterations for finding the global bounds of x.
Figure 4. Iterations for finding the global bounds of y.
Figure 5. Exact values of |δU_{20,15}| and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 6. Exact values of |δV_{20,15}| and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 7. Exact values of Φ_max(U_15, Ũ_15) and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 8. Exact values of Θ_max(V_15, Ṽ_15) and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 9. Perturbations of the singular values and their nonlinear bounds for ‖δA‖_2 = 2.7282784 × 10^{−10}.
Figure 10. Perturbations of the singular values and their nonlinear bounds for ‖δA‖_2 = 4.3932430 × 10^{−4}.
Figure 11. Values of |δU_{ij}| and the corresponding linear estimates.
Figure 12. Values of |δV_{ij}| and the corresponding linear estimates.
Figure 13. Perturbations and perturbation bounds of the singular values.
Table 1. Exact perturbation parameters x_k related to the matrix δW_U and their linear estimates.

c = 10, ‖δA‖_2 = 2.0819813 × 10^-9:
  x_1  = u_2^T δu_1:  |x_1|  = 5.9651314 × 10^-12,  x_1^lin  = 1.1712418 × 10^-10
  x_2  = u_3^T δu_1:  |x_2|  = 1.2358232 × 10^-11,  x_2^lin  = 8.5991735 × 10^-11
  x_3  = u_4^T δu_1:  |x_3|  = 2.5843633 × 10^-12,  x_3^lin  = 8.1830915 × 10^-11
  x_4  = u_5^T δu_1:  |x_4|  = 7.5119150 × 10^-13,  x_4^lin  = 8.1773996 × 10^-11
  x_5  = u_6^T δu_1:  |x_5|  = 1.9764954 × 10^-11,  x_5^lin  = 8.1773996 × 10^-10
  x_6  = u_3^T δu_2:  |x_6|  = 2.5769606 × 10^-11,  x_6^lin  = 3.2351171 × 10^-10
  x_7  = u_4^T δu_2:  |x_7|  = 6.7934677 × 10^-12,  x_7^lin  = 2.7156394 × 10^-10
  x_8  = u_5^T δu_2:  |x_8|  = 7.1177269 × 10^-11,  x_8^lin  = 2.7093810 × 10^-10
  x_9  = u_6^T δu_2:  |x_9|  = 3.0802793 × 10^-11,  x_9^lin  = 2.7093810 × 10^-10
  x_10 = u_4^T δu_3:  |x_10| = 4.9634458 × 10^-10,  x_10^lin = 1.6912007 × 10^-9
  x_11 = u_5^T δu_3:  |x_11| = 6.0967501 × 10^-10,  x_11^lin = 1.6672173 × 10^-9
  x_12 = u_6^T δu_3:  |x_12| = 6.6936869 × 10^-10,  x_12^lin = 1.6672173 × 10^-9
  x_13 = u_5^T δu_4:  |x_13| = 1.0685519 × 10^-8,   x_13^lin = 1.1756467 × 10^-7
  x_14 = u_6^T δu_4:  |x_14| = 1.0010811 × 10^-9,   x_14^lin = 1.1756467 × 10^-7
c = 5, ‖δA‖_2 = 2.0819812 × 10^-4:
  x_1  = u_2^T δu_1:  |x_1|  = 5.9650702 × 10^-7,   x_1^lin  = 1.1712418 × 10^-5
  x_2  = u_3^T δu_1:  |x_2|  = 1.2358210 × 10^-6,   x_2^lin  = 8.5991735 × 10^-6
  x_3  = u_4^T δu_1:  |x_3|  = 2.5843445 × 10^-7,   x_3^lin  = 8.1830915 × 10^-6
  x_4  = u_5^T δu_1:  |x_4|  = 7.5103866 × 10^-8,   x_4^lin  = 8.1773996 × 10^-6
  x_5  = u_6^T δu_1:  |x_5|  = 1.9765187 × 10^-6,   x_5^lin  = 8.1773996 × 10^-6
  x_6  = u_3^T δu_2:  |x_6|  = 2.5769484 × 10^-6,   x_6^lin  = 3.2351171 × 10^-5
  x_7  = u_4^T δu_2:  |x_7|  = 6.7937087 × 10^-7,   x_7^lin  = 2.7156394 × 10^-5
  x_8  = u_5^T δu_2:  |x_8|  = 7.1177424 × 10^-6,   x_8^lin  = 2.7093809 × 10^-5
  x_9  = u_6^T δu_2:  |x_9|  = 3.0802821 × 10^-6,   x_9^lin  = 2.7093809 × 10^-5
  x_10 = u_4^T δu_3:  |x_10| = 4.9636511 × 10^-5,   x_10^lin = 1.6912007 × 10^-4
  x_11 = u_5^T δu_3:  |x_11| = 6.0971010 × 10^-5,   x_11^lin = 1.6672173 × 10^-4
  x_12 = u_6^T δu_3:  |x_12| = 6.6939951 × 10^-5,   x_12^lin = 1.6672173 × 10^-4
  x_13 = u_5^T δu_4:  |x_13| = 1.0673940 × 10^-3,   x_13^lin = 1.1756467 × 10^-2
  x_14 = u_6^T δu_4:  |x_14| = 1.0014883 × 10^-4,   x_14^lin = 1.1756467 × 10^-2
Table 2. Exact perturbation parameters y related to the matrix δW_V and their linear estimates.

c = 10, ‖δA‖_2 = 2.0819813 × 10^-9:
  y_1 = v_2^T δv_1:  |y_1| = 5.9724939 × 10^-13,  y_1^lin = 1.1712418 × 10^-10
  y_2 = v_3^T δv_1:  |y_2| = 4.0539576 × 10^-11,  y_2^lin = 8.5991735 × 10^-11
  y_3 = v_4^T δv_1:  |y_3| = 1.3009739 × 10^-11,  y_3^lin = 8.1830915 × 10^-11
  y_4 = v_3^T δv_2:  |y_4| = 3.5278691 × 10^-12,  y_4^lin = 3.2351171 × 10^-10
  y_5 = v_4^T δv_2:  |y_5| = 3.3493810 × 10^-11,  y_5^lin = 2.7156394 × 10^-10
  y_6 = v_4^T δv_3:  |y_6| = 2.7589325 × 10^-10,  y_6^lin = 1.6912007 × 10^-9
c = 5, ‖δA‖_2 = 2.0819812 × 10^-4:
  y_1 = v_2^T δv_1:  |y_1| = 5.9734613 × 10^-8,   y_1^lin = 1.1712418 × 10^-5
  y_2 = v_3^T δv_1:  |y_2| = 4.0539817 × 10^-6,   y_2^lin = 8.5991735 × 10^-6
  y_3 = v_4^T δv_1:  |y_3| = 1.3009948 × 10^-6,   y_3^lin = 8.1830915 × 10^-6
  y_4 = v_3^T δv_2:  |y_4| = 3.5285443 × 10^-7,   y_4^lin = 3.2351171 × 10^-5
  y_5 = v_4^T δv_2:  |y_5| = 3.3493454 × 10^-6,   y_5^lin = 2.7156394 × 10^-5
  y_6 = v_4^T δv_3:  |y_6| = 2.7590797 × 10^-5,   y_6^lin = 1.6912007 × 10^-4
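The parameters reported in Tables 1 and 2 can be reproduced directly from two computed SVDs of A and A + δA. The NumPy sketch below is an illustration only: the experiments in the paper reference MATLAB [7], and the function name svd_perturbation_parameters and the sign-fixing convention are choices made here, not part of the analysis in the paper. It forms the quantities x_k = u_i^T δu_j and y_k = v_i^T δv_j after resolving the sign ambiguity of the computed singular vectors.

```python
import numpy as np

def svd_perturbation_parameters(A, dA):
    """Illustrative sketch: form x = u_i^T du_j (i > j) and y = v_i^T dv_j
    (i > j) from the full SVDs of A and A + dA.  Assumes A has full column
    rank and distinct singular values."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A)              # A = U @ Sigma @ Vt
    Up, sp, Vpt = np.linalg.svd(A + dA)      # perturbed decomposition
    V, Vp = Vt.T, Vpt.T
    # Resolve the sign ambiguity of the computed singular vectors:
    # flip each perturbed pair (u~_j, v~_j) together so that u_j^T u~_j > 0.
    signs = np.sign(np.einsum('ij,ij->j', U[:, :n], Up[:, :n]))
    Up[:, :n] *= signs
    Vp *= signs
    dU = Up[:, :n] - U[:, :n]                # delta u_j, j = 1..n
    dV = Vp - V                              # delta v_j, j = 1..n
    x = {(i + 1, j + 1): U[:, i] @ dU[:, j]
         for j in range(n) for i in range(j + 1, m)}
    y = {(i + 1, j + 1): V[:, i] @ dV[:, j]
         for j in range(n) for i in range(j + 1, n)}
    return x, y, sp[:n] - s[:n]              # parameters and delta sigma
```

The dictionaries x and y collect the strictly lower-triangular products in the column-major order used in Tables 1 and 2; the returned vector of singular value differences corresponds to the quantities δσ_i listed in the tables that follow.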
Table 3. Perturbations of the singular values and their linear estimates.
(Columns: bound δσ̂_i = ‖δA‖_2, exact perturbation |δσ_i|, linear estimate δσ_i^lin = |δf_ii|.)

c = 10, δσ̂_i = ‖δA‖_2 = 2.0819812 × 10^-9:
  |δσ_1| = 0.1543977 × 10^-8,    δσ_1^lin = 0.1543978 × 10^-8
  |δσ_2| = 0.4299672 × 10^-10,   δσ_2^lin = 4.2995882 × 10^-10
  |δσ_3| = 0.6114032 × 10^-9,    δσ_3^lin = 0.6114030 × 10^-9
  |δσ_4| = 0.1736317 × 10^-9,    δσ_4^lin = 0.1736318 × 10^-9
c = 5, δσ̂_i = ‖δA‖_2 = 2.0819812 × 10^-4:
  |δσ_1| = 0.1543975 × 10^-3,    δσ_1^lin = 0.1543978 × 10^-3
  |δσ_2| = 0.4299891 × 10^-5,    δσ_2^lin = 0.4299588 × 10^-5
  |δσ_3| = 0.6113327 × 10^-4,    δσ_3^lin = 0.6140303 × 10^-4
  |δσ_4| = 0.1737508 × 10^-4,    δσ_4^lin = 0.1736318 × 10^-4
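The column δσ̂_i = ‖δA‖_2 in Tables 3 and 7 is the Weyl bound on the singular value perturbations. A minimal check of this bound for a given pair (A, δA) can be sketched as follows; the helper name weyl_bound_check is purely illustrative.

```python
import numpy as np

def weyl_bound_check(A, dA):
    """Check that |sigma~_i - sigma_i| <= ||dA||_2 for all i,
    i.e. the bound delta sigma^_i = ||dA||_2 tabulated above."""
    s = np.linalg.svd(A, compute_uv=False)        # unperturbed singular values
    sp = np.linalg.svd(A + dA, compute_uv=False)  # perturbed singular values
    bound = np.linalg.norm(dA, 2)                 # spectral norm of dA
    dsigma = np.abs(sp - s)
    return dsigma, bound, bool(np.all(dsigma <= bound))
```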
Table 4. Sensitivity of the singular subspaces.

Left singular subspaces: Φ_max(U_r, Ũ_r) and its linear estimate Φ_max^lin(U_r, Ũ_r).
c = 10, ‖δA‖_2 = 2.0819813 × 10^-9:
  Φ_1 = 0.0022393 × 10^-8,   Φ_1^lin = 0.0002029 × 10^-6
  Φ_2 = 0.0075282 × 10^-8,   Φ_2^lin = 0.0005938 × 10^-6
  Φ_3 = 0.0581566 × 10^-8,   Φ_3^lin = 0.0029428 × 10^-6
  Φ_4 = 0.9560907 × 10^-8,   Φ_4^lin = 0.1662787 × 10^-6
c = 5, ‖δA‖_2 = 2.0819812 × 10^-4:
  Φ_1 = 0.0022393 × 10^-3,   Φ_1^lin = 0.0020294 × 10^-2
  Φ_2 = 0.0075282 × 10^-3,   Φ_2^lin = 0.0059380 × 10^-2
  Φ_3 = 0.0581602 × 10^-3,   Φ_3^lin = 0.0294279 × 10^-2
  Φ_4 = 0.9550611 × 10^-3,   Φ_4^lin = 1.6628641 × 10^-2
Right singular subspaces: Θ_max(V_r, Ṽ_r) and its linear estimate Θ_max^lin(V_r, Ṽ_r).
c = 10, ‖δA‖_2 = 2.0819813 × 10^-9:
  Θ_1 = 0.0420929 × 10^-9,   Θ_1^lin = 0.0166760 × 10^-8
  Θ_2 = 0.0485250 × 10^-9,   Θ_2^lin = 0.0438688 × 10^-8
  Θ_3 = 0.2788775 × 10^-9,   Θ_3^lin = 0.1714819 × 10^-8
c = 5, ‖δA‖_2 = 2.0819812 × 10^-4:
  Θ_1 = 0.0420932 × 10^-4,   Θ_1^lin = 0.0166760 × 10^-3
  Θ_2 = 0.0485253 × 10^-4,   Θ_2^lin = 0.0438688 × 10^-3
  Θ_3 = 0.2788921 × 10^-4,   Θ_3^lin = 0.1714819 × 10^-3
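Reading Φ_max(U_r, Ũ_r) and Θ_max(V_r, Ṽ_r) as the largest canonical (principal) angles between the unperturbed and perturbed singular subspaces spanned by the first r left and right singular vectors, such angles can be evaluated numerically as in the sketch below. This uses SciPy's subspace_angles routine; the helper name largest_canonical_angles and the exact choice of subspaces are assumptions made for illustration, not taken from the paper.

```python
import numpy as np
from scipy.linalg import subspace_angles

def largest_canonical_angles(A, dA, r):
    """Largest canonical angle between the r-dimensional left (and right)
    singular subspaces of A and A + dA, spanned here by the first r
    left/right singular vectors."""
    U, _, Vt = np.linalg.svd(A)
    Up, _, Vpt = np.linalg.svd(A + dA)
    # subspace_angles returns the canonical angles in descending order,
    # so the first entry is the largest one.
    phi_max = subspace_angles(U[:, :r], Up[:, :r])[0]
    theta_max = subspace_angles(Vt.T[:, :r], Vpt.T[:, :r])[0]
    return phi_max, theta_max
```

The angles are invariant to the sign ambiguity of the computed singular vectors, so no sign alignment is needed here.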
Table 5. Convergence of the global bounds and higher order terms.

c     ‖δA‖_F             Number of iterations   ‖Δ‖_2              ‖Δ^nonl‖_2
10    2.08198 × 10^-9     4                      2.12806 × 10^-18   2.09530 × 10^-15
 9    2.08198 × 10^-8     4                      2.12806 × 10^-16   2.09531 × 10^-13
 8    2.08198 × 10^-7     5                      2.12806 × 10^-14   2.09537 × 10^-11
 7    2.08198 × 10^-6     5                      2.12806 × 10^-12   2.09595 × 10^-9
 6    2.08198 × 10^-5     7                      2.12800 × 10^-10   2.10185 × 10^-7
 5    2.08198 × 10^-4     10                     2.12746 × 10^-8    2.16338 × 10^-5
 4    2.08198 × 10^-3     29                     2.12189 × 10^-6    3.23713 × 10^-3
 3    2.08198 × 10^-2     no convergence         -                  -
Table 6. Global bounds of x and y.

c     ‖x‖_2               ‖x^nonl‖_2           ‖y‖_2                ‖y^nonl‖_2
10    1.0782203 × 10^-8    1.6628799 × 10^-7    2.8118399 × 10^-10   1.7511069 × 10^-9
 9    1.0782176 × 10^-7    1.6628817 × 10^-6    2.8118379 × 10^-9    1.7511072 × 10^-8
 8    1.0782167 × 10^-6    1.6628999 × 10^-5    2.8118393 × 10^-8    1.7511098 × 10^-7
 7    1.0782065 × 10^-5    1.6308164 × 10^-4    2.8118405 × 10^-7    1.7511358 × 10^-6
 6    1.0781037 × 10^-4    1.6649057 × 10^-3    2.8118536 × 10^-6    1.7513967 × 10^-5
 5    1.0770771 × 10^-3    1.6837998 × 10^-2    2.8119844 × 10^-5    1.7540888 × 10^-4
 4    1.0668478 × 10^-2    1.9739131 × 10^-1    2.8132907 × 10^-4    1.7960592 × 10^-3
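Tables 5 and 6 record the number of iterations needed to compute the global bounds and the loss of convergence for c = 3. The qualitative behaviour, namely more iterations and eventually no fixed point as the perturbation grows, can be illustrated by a generic quadratic fixed-point iteration of the kind commonly used to turn a first-order bound into a global one. The sketch below is schematic only: the coefficients a and b are placeholders, and the actual coupled systems and majorants are the ones derived in the paper.

```python
def global_bound_iteration(a, b, tol=1e-14, max_iter=100):
    """Schematic fixed-point iteration  rho_{k+1} = a + b * rho_k**2,
    rho_0 = 0, where a plays the role of the first-order bound and
    b * rho**2 majorizes the neglected higher-order terms."""
    rho = 0.0
    for k in range(1, max_iter + 1):
        rho_new = a + b * rho ** 2
        if abs(rho_new - rho) <= tol * max(rho_new, 1.0):
            return rho_new, k          # converged global bound
        rho = rho_new
    return None, max_iter              # no convergence: perturbation too large
```

For this model the iteration converges (to the smaller root of ρ = a + bρ²) precisely when 4ab ≤ 1; for a larger perturbation, i.e. a larger first-order bound a, no fixed point exists, which mirrors the "no convergence" entry for c = 3 in Table 5.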
Table 7. Perturbations of the singular values and their nonlinear estimates.
(Columns: bound δσ̂_i = ‖δA‖_2, exact perturbation |δσ_i|, nonlinear estimate δσ_i^nonl.)

c = 10, δσ̂_i = ‖δA‖_2 = 2.0819812 × 10^-9:
  |δσ_1| = 0.1543977 × 10^-8,    δσ_1^nonl = 0.1543978 × 10^-8
  |δσ_2| = 0.4299672 × 10^-10,   δσ_2^nonl = 0.4299589 × 10^-10
  |δσ_3| = 0.6114032 × 10^-9,    δσ_3^nonl = 0.6114031 × 10^-9
  |δσ_4| = 0.1736317 × 10^-9,    δσ_4^nonl = 0.1736329 × 10^-9
c = 5, δσ̂_i = ‖δA‖_2 = 2.0819812 × 10^-4:
  |δσ_1| = 0.1543975 × 10^-3,    δσ_1^nonl = 0.1544194 × 10^-3
  |δσ_2| = 0.4299891 × 10^-5,    δσ_2^nonl = 0.4359172 × 10^-5
  |δσ_3| = 0.6113327 × 10^-4,    δσ_3^nonl = 0.6140033 × 10^-4
  |δσ_4| = 0.1737508 × 10^-4,    δσ_4^nonl = 0.2848015 × 10^-4
Table 8. Nonlinear sensitivity estimates of the singular subspaces.

Left singular subspaces: Φ_max(U_r, Ũ_r) and its nonlinear estimate Φ_max^nonl(U_r, Ũ_r).
c = 10, ‖δA‖_2 = 2.0819813 × 10^-9:
  Φ_1 = 0.0022393 × 10^-8,   Φ_1^nonl = 0.0002029 × 10^-6
  Φ_2 = 0.0075282 × 10^-8,   Φ_2^nonl = 0.0005938 × 10^-6
  Φ_3 = 0.0581566 × 10^-8,   Φ_3^nonl = 0.0029428 × 10^-6
  Φ_4 = 0.9560907 × 10^-8,   Φ_4^nonl = 0.1662788 × 10^-6
c = 5, ‖δA‖_2 = 2.0819812 × 10^-4:
  Φ_1 = 0.0022393 × 10^-3,   Φ_1^nonl = 0.0020601 × 10^-2
  Φ_2 = 0.0075282 × 10^-3,   Φ_2^nonl = 0.0060635 × 10^-2
  Φ_3 = 0.0581602 × 10^-3,   Φ_3^nonl = 0.0303251 × 10^-2
  Φ_4 = 0.9550611 × 10^-3,   Φ_4^nonl = 1.6683781 × 10^-2
Right singular subspaces: Θ_max(V_r, Ṽ_r) and its nonlinear estimate Θ_max^nonl(V_r, Ṽ_r).
c = 10, ‖δA‖_2 = 2.0819813 × 10^-9:
  Θ_1 = 0.0420929 × 10^-9,   Θ_1^nonl = 0.0166760 × 10^-8
  Θ_2 = 0.0485250 × 10^-9,   Θ_2^nonl = 0.0438688 × 10^-8
  Θ_3 = 0.2788775 × 10^-9,   Θ_3^nonl = 0.1714819 × 10^-8
c = 5, ‖δA‖_2 = 2.0819812 × 10^-4:
  Θ_1 = 0.0420932 × 10^-4,   Θ_1^nonl = 0.0166819 × 10^-3
  Θ_2 = 0.0485253 × 10^-4,   Θ_2^nonl = 0.0438972 × 10^-3
  Θ_3 = 0.2788921 × 10^-4,   Θ_3^nonl = 0.1718211 × 10^-3