
Decrease in Computational Load and Increase in Accuracy for Filtering of Random Signals


Submitted: 20 March 2025
Posted: 21 March 2025


Abstract
This paper describes methods for optimal filtering of random signals that involve large matrices. We develop a procedure that allows us to significantly decrease the computational load needed to numerically realize the associated filter and to increase the associated accuracy. The procedure is based on the reduction of a large covariance matrix to a collection of smaller matrices. It is done in such a way that the filter equation with large matrices is equivalently represented by a set of equations with smaller matrices. The filter $F_p$ we develop is represented by $F_p(v_1,\ldots,v_p)=\sum_{j=1}^{p}M_jv_j$ and minimizes the associated error over all matrices $M_1,\ldots,M_p$. As a result, the proposed optimal filter has two degrees of freedom to increase the associated accuracy. They are associated, first, with the optimal determination of the matrices $M_1,\ldots,M_p$ and, second, with the increase in the number $p$ of components in the filter $F_p$. The error analysis and results of numerical simulations are provided.

1. Introduction

1.1. Preliminaries

Although there is an extensive literature on methods for filtering random signals, the problem we consider appears to differ from the known ones. The objective of this paper is to formulate and solve this problem under scenarios that involve a reduction of computational complexity, for the case of large covariance matrices, and an increase in the filtering accuracy.
Let $\Omega$ be a set of outcomes in a probability space $(\Omega,\Sigma,\mu)$, where $\Sigma$ is a $\sigma$-algebra of measurable subsets of $\Omega$ and $\mu:\Sigma\to[0,1]$ is an associated probability measure. Let $x\in L^2(\Omega,\mathbb{R}^m)$ be a signal of interest and $y\in L^2(\Omega,\mathbb{R}^n)$ be an observable signal. Here $L^2(\Omega,\mathbb{R}^m)$ is the space of square-integrable functions defined on $\Omega$ with values in $\mathbb{R}^m$, i.e., such that $\int_\Omega\sum_{i=1}^{m}[x^{(i)}(\omega)]^2\,d\mu(\omega)<\infty$.
Further, let us write $x=[x^{(1)},\ldots,x^{(m)}]^T$, where $x^{(i)}\in L^2(\Omega,\mathbb{R})$ for $i=1,\ldots,m$. We assume without loss of generality that all random vectors have zero mean. Let us denote
$\|x\|_\Omega^2=\int_\Omega\sum_{i=1}^{m}[x^{(i)}(\omega)]^2\,d\mu(\omega) \quad\text{and}\quad E_{xy}=\int_\Omega x(\omega)[y(\omega)]^T\,d\mu(\omega),$  (1)
where $E_{xy}$ is the covariance matrix formed from $x$ and $y$.
Let the filter $F$ be represented by $F(y)=My$, where $M\in\mathbb{R}^{m\times n}$. Note that each matrix $M\in\mathbb{R}^{m\times n}$ defines a bounded linear transformation $\mathcal{M}:L^2(\Omega,\mathbb{R}^n)\to L^2(\Omega,\mathbb{R}^m)$. It is customary to write $M$ rather than $\mathcal{M}$ since $[\mathcal{M}(y)](\omega)=M[y(\omega)]$ for each $\omega\in\Omega$.
The known problem of finding the optimal filter $F$ is formulated as follows. Given large covariance matrices $E_{xy}$ and $E_{yy}$, find $M$ that solves
$\min_{M}\|x-My\|_\Omega^2.$  (2)
The minimal Frobenius norm solution to (2) is given by (see, for example, [1,2,3,4,5])
$M=E_{xy}E_{yy}^\dagger,$  (3)
where $E_{yy}^\dagger$ is the Moore-Penrose pseudo-inverse of $E_{yy}$.
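For illustration, the following is a minimal MATLAB sketch of the filter in (3) with covariances estimated from zero-mean samples; the synthetic model, the variable names X, Y and the sample size N are ours and are not part of the paper.

% Minimal sketch (not the authors' code) of the filter M = E_xy * pinv(E_yy) in (3).
% X (m x N) and Y (n x N) hold N zero-mean sample realizations of x and y.
N = 2000; m = 20; n = 30;
A = randn(m, n);
Y = randn(n, N);                   % observable signal samples
X = A*Y + 0.1*randn(m, N);         % signal of interest (synthetic model, for illustration)
Exy = (X*Y')/N;                    % sample covariance E_xy
Eyy = (Y*Y')/N;                    % sample covariance E_yy
M = Exy * pinv(Eyy);               % optimal filter matrix, eq. (3)
err = norm(X - M*Y, 'fro')^2 / N;  % empirical estimate of the error in (2)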

1.2. Motivations

In (3), $M$ contains $m\times n$ entries. In other words, the filter defined by (3) contains $m\times n$ parameters to optimize the associated accuracy.
Motivation 1: Increase in accuracy. Since $M$ in (3) is optimal, there is no other linear filter that can provide a better associated accuracy. At the same time, it may happen that the accuracy is not satisfactory. Then a logical way to increase the accuracy is to increase the number of parameters to optimize. Since $m$ is fixed, only $n$ can be varied; $n$ can be increased to, say, $q$ with $q>n$, if $y$ is replaced with another vector $v\in L^2(\Omega,\mathbb{R}^q)$ which is unknown. In this case, we need to determine both $M$ and $v$. In particular, $v$ can be represented as $v=[v_1^T,\ldots,v_p^T]^T$ where, for $j=1,\ldots,p$, $v_j\in L^2(\Omega,\mathbb{R}^{q_j})$ and $q=q_1+\cdots+q_p$. Then the associated filter $F_p$ is represented by the equation
$F_p(v_1,\ldots,v_p)=Mv=[M_1,\ldots,M_p][v_1^T,\ldots,v_p^T]^T=\sum_{j=1}^{p}M_jv_j,$  (4)
where, for $j=1,\ldots,p$, $M_j\in\mathbb{R}^{m\times q_j}$. We call $p$ the degree of the filter $F_p$.
Therefore, the preliminary statement of the problem considered here and in [6] is as follows. Given $E_{xv}$ and $E_{vv}$, find $M_1,\ldots,M_p$ and $v_1,\ldots,v_p$ that solve
$\min_{M_1,\ldots,M_p,\,v_1,\ldots,v_p}\Big\|x-\sum_{j=1}^{p}M_jv_j\Big\|_\Omega^2.$  (5)
Note that, since $E_{xv}$ and $E_{vv}$ are given, then, for $i,j=1,\ldots,p$, $E_{xv_j}$ and $E_{v_iv_j}$ are known as blocks of $E_{xv}$ and $E_{vv}$.
Motivation 2: Decrease in computational load. In (2) and (3), the dimensions $m$ and $n$ of $x$ and $y$, respectively, can be very large. In a number of important applied problems, such as those in biology, ecology, finance, sociology and medicine (see e.g., [7,8]), $m,n=O(10^3)$ and greater. For example, measurements of gene expression can contain tens of hundreds of samples. Each sample can contain tens of thousands of genes. As a result, in such cases, the associated covariance matrices in (3) are very large. Further, in (4) and (5), the dimension $q_j$ of $v_j$ might be of the same order as that of the observable signal $y$. For instance, one of the $v_j$ can be equal to $y$. Based on (2)–(5), it is natural to predict that a solution of problem (5) contains the $q\times q$ pseudo-inverse $E_{vv}^\dagger$ and the pseudo-inverse of the $m\times q$ matrix $M$. It is indeed so, as shown in Section 4.1.6 and [6]. The matrices $E_{vv}$ and $M$ are usually larger than $E_{yy}$. Then the computation of a large pseudo-inverse matrix leads to a quite slow computing procedure, and in some cases the computer runs out of memory. In particular, the associated well-known phenomenon of the “curse of dimensionality” [9] states that many problems become exponentially difficult in high dimensions.
Of course, in the case of high dimensional signals, some well known dimensionality reduction techniques can be used (such as those, for example, in [2,3,5]) as an intermediate step for the filtering. At the same time, those techniques also involve large pseudo-inverse covariance matrices of the same size as that in (3). Additionally, in applying the dimensionality reduction procedure, intrinsic associated errors appear. The errors may involve, in particular, an undesirable distortion of the data. Thus, an application of dimensionality reduction techniques to the problem under consideration does not allow us to avoid the computation of large pseudo-inverse covariance matrices. Therefore, we wish to develop techniques that reduce the computation of the large pseudo-inverses associated with $E_{vv}$ and $M$ without involving any additional associated error.
The following example illustrates Motivation 2. Simulations in this example, and also in Example 2 of Section 4.1.7, were run on a Dell Latitude 7400 Business Laptop (manufactured in 2020).
Example 1.
Let $E_{vv}\in\mathbb{R}^{q\times q}$ be a matrix whose entries are normally distributed with mean 0 and variance 1. In Table 1, we provide the time (in seconds) needed for Matlab to compute the pseudo-inverse matrix $E_{vv}^\dagger$ for different values of q.
We observe that computation of, e.g., a $5000\times5000$ pseudo-inverse matrix requires 63 seconds, while computation of five pseudo-inverse matrices of size $1000\times1000$ requires about $2=5\times0.4$ seconds. Thus, a procedure that would allow us to reduce the computation of $E_{vv}^\dagger\in\mathbb{R}^{5000\times5000}$ to the computation of, say, five pseudo-inverse matrices of size $1000\times1000$ would be faster. At the same time, such a procedure would additionally contain associated matrix multiplications which are time consuming as well. This observation is further elaborated in Section 4.1.2 and in Example 2 of Section 4.1.7.
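For reference, a minimal MATLAB sketch of the kind of timing experiment reported in Table 1 is given below; the measured times depend on the machine, so the numbers it prints are not expected to reproduce the table.

% Sketch of the timing experiment behind Table 1 (illustration only).
for q = [500 1000 2000]            % extend to the larger q of Table 1 as time allows
    Evv = randn(q);                % entries ~ N(0,1), as in Example 1
    tic;
    P = pinv(Evv);                 % q x q pseudo-inverse
    fprintf('q = %d: pinv time %.2f s\n', q, toc);
end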

2. Statement of the Problem

The solution of the problem in (5) is divided into two parts. In this paper, we consider the first part of the solution. It mainly concerns a technique for the decrease in the computational load associated with the computation of $E_{vv}^\dagger$. The second part of the solution of problem (5) is considered in [6]. It represents an extension of the technique considered here to the solution of the original problem in (5). More precisely, here, we consider
$\min_{M_1,\ldots,M_p}\Big\|x-\sum_{j=1}^{p}M_jv_j\Big\|_\Omega^2.$  (6)
In terms of notation (4), the known minimal Frobenius norm solution of (6) is given by (see, e.g., [1,2,3,4,5])
$M=[M_1\ \cdots\ M_p]=E_{xv}E_{vv}^\dagger.$  (7)
As mentioned in Section 1.2, in (7) (and in (8) below), the $q\times q$ matrix $E_{vv}$ might be very large. Then the pseudo-inverse $E_{vv}^\dagger$ may be difficult to compute. In this case, the formula in (7) is not useful in practice.
Therefore, the problem is as follows. Given large $E_{xv}$ and $E_{vv}$, find a computational procedure for solving (6) which significantly decreases the computational complexity of the evaluation of $E_{vv}^\dagger$ compared to that needed by the filter represented by (7).
While problem (6) arises in the solution provided in [6], it is also important in its own right. In particular, if $x=[x_1^T,\ldots,x_p^T]^T$ where, for $j=1,\ldots,p$, $x_j\in L^2(\Omega,\mathbb{R}^{m_j})$ and $m=m_1+\cdots+m_p$, then (6) represents the problem of finding the optimal multi-input multi-output (MIMO) filter. In this case, $v_j$ and $x_j$, for $j=1,\ldots,p$, are an input signal and output signal, respectively.
The problem in (6) can also be interpreted as the system identification problem [10,11,12,13]. In this case, $v_1,\ldots,v_p$ are the inputs of the system, $M=[M_1,\ldots,M_p]$ is the input-output map of the system, and $x$ is the system output. For example, in environmental monitoring, $v_1,\ldots,v_p$ may represent random data coming from nodes measuring temperature, light or pressure variations, and $x$ may represent a random signal obtained after a merging of the received data in order to estimate required data parameters within a prescribed accuracy. Similar scenarios occur, e.g., in target localization and tracking [14].
As mentioned above, in all problems considered in this paper, covariance matrices are assumed to be known. This assumption is a well-known and significant limitation in problems dealing with the optimal filtering of random signals; see [15,16,17,18] in this regard. The covariances can be estimated in various ways, and particular techniques are given in many papers; we cite [19,20,21,22,23,24] as examples. Estimation of covariance matrices is an important problem which is not considered in this paper.

3. Contribution and Novelty

The main conceptual novelty of the approach developed in this paper is the technique that allows us to decrease the computational load needed for the numerical realization of the proposed filter. The procedure is based on the reduction of the large matrix $E_{vv}$ to a collection of smaller matrices. It is done so that the filter equation with large matrices is equivalently represented by a set of equations with smaller matrices. As a result, as shown in Section 4.1.7, the proposed p-th degree filter requires the computation of p pseudo-inverse matrices with sizes smaller than the size of the large matrix $E_{vv}$. In this regard, see Section 4.1.2 and Examples 1 and 2 for more details. The associated Matlab code is given in Section 4.1.7.
In Theorems 2, 3 and 4, we also show that the accuracy of the proposed filter increases with an increase in the degree of the filter. Details are provided in Section 4.1 and, more specifically, in Section 4.1.2. In particular, the algorithm for the optimal determination of $M_1,\ldots,M_p$ is given in Section 4.1.7. Note that the algorithm is easy to implement numerically. The error analysis associated with the filter under consideration is provided in Section 4.1.3, Section 4.1.5 and Section 4.1.8.
Remark 1.
Note that for every outcome $\omega\in\Omega$, realizations $v_1(\omega),\ldots,v_p(\omega)$ and $x(\omega)$ of the signals $v_1,\ldots,v_p$ and $x$, respectively, occur with certain probabilities. Thus, the random signals $v_j$, for $j=1,\ldots,p$, and $x$ are associated with infinite sets of realizations $V_j=\{v_j(\omega)\,|\,\omega\in\Omega\}\subseteq\mathbb{R}^{q_j}$ and $X=\{x(\omega)\,|\,\omega\in\Omega\}\subseteq\mathbb{R}^m$, respectively. For different ω, the values $v_j(\omega)$, for $j=1,\ldots,p$, and $x(\omega)$ are, in general, different, and in many cases will span the entire spaces $\mathbb{R}^{q_j}$ and $\mathbb{R}^m$, respectively. Therefore, for each $x$ and $v_j$, for $j=1,\ldots,p$, the filter given by (4) can be interpreted as a map $\mathbb{R}^{q_1}\times\cdots\times\mathbb{R}^{q_p}\to\mathbb{R}^m$ where probabilities are assigned to each vector of $\mathbb{R}^{q_j}$ and $\mathbb{R}^m$. Importantly, the filter is invariant with respect to the outcome $\omega\in\Omega$, and therefore, is the same for all $v_1(\omega),\ldots,v_p(\omega)$ and $x(\omega)$ with different outcomes $\omega\in\Omega$.

4. Solution of the Problem

We wish that, in (5) and (6), $\sum_{j=1}^{p}M_jv_j$ would never be the zero vector. To this end, we need to extend the known definition of the linear independence of vectors as follows.
Let $M^{(1)}\in\mathbb{R}^{m\times q_1},\ldots,M^{(p)}\in\mathbb{R}^{m\times q_p}$ be some matrices. For $j=1,\ldots,p$, let $\mathcal{N}(M^{(j)})$ be the null space of the matrix $M^{(j)}$.
A linear combination in the generalized sense of vectors $v_1,\ldots,v_p$ is a vector
$\hat v=M^{(1)}v_1(\omega)+\cdots+M^{(p)}v_p(\omega).$
We say that the linear combination in the generalized sense is non-trivial if $v_j(\omega)\notin\mathcal{N}(M^{(j)})$ for at least one of $j=1,\ldots,p$. Note that if $M^{(j)}\neq O$, where $O$ is the zero matrix, then it is still, of course, possible that $v_j(\omega)\in\mathcal{N}(M^{(j)})$. Therefore, the condition $v_j(\omega)\notin\mathcal{N}(M^{(j)})$ is more general than the condition $M^{(j)}\neq O$.
Definition 1.
Random vectors v 1 , , v p are called linearly independent in the generalized sense if there is no non-trivial linear combination in the generalized sense of these vectors equal to the zero vector; in other words, if
$M^{(1)}v_1(\omega)+\cdots+M^{(p)}v_p(\omega)=O,$
for almost all $\omega\in\Omega$, implies that $v_j(\omega)\in\mathcal{N}(M^{(j)})$, for each $j=1,\ldots,p$, and almost all $\omega\in\Omega$.
All random vectors considered below are assumed to be linearly independent in the generalized sense.

4.1. Solution of Problem in (6)

In (7), the matrix $M$ is the minimum Frobenius norm solution of the equation
$ME_{vv}=E_{xv}.$  (8)
Here, we exploit the specific structure of the original system (8) to develop a special block elimination procedure that allows us to reduce the original system of equations (8) to a collection of independent smaller subsystems. Their solution implies less computational load than that needed for (7).
To this end, let us represent E v v and E x v in blocked forms as follows:
$E_{vv}=\begin{bmatrix} E_{11} & \cdots & E_{1p}\\ \vdots & \ddots & \vdots\\ E_{p1} & \cdots & E_{pp}\end{bmatrix} \quad\text{and}\quad E_{xv}=[E_1\ \cdots\ E_p],$  (9)
where $E_{ij}=E_{v_iv_j}\in\mathbb{R}^{q_i\times q_j}$ and $E_j=E_{xv_j}\in\mathbb{R}^{m\times q_j}$, for $i,j=1,\ldots,p$. Then (8) implies
$[M_1\ \cdots\ M_p]\begin{bmatrix} E_{11} & \cdots & E_{1p}\\ \vdots & \ddots & \vdots\\ E_{p1} & \cdots & E_{pp}\end{bmatrix}=[E_1\ \cdots\ E_p].$  (10)
Thus, the solution of the problem in (8) is equivalent to the solution of equation (10).
We provide a specific solution of equation (10) in the following way. First, in Section 4.1.1, a generic basis for the solution is given. In Section 4.1.4, the solution of equation (10) is considered for p = 3 . This is a preliminary step for the solution of equation (10) for an arbitrary p which is provided in Section 4.1.6.
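In practice, the block partition in (9) can be formed with Matlab's mat2cell; the short sketch below assumes equal block sizes $q_j=q/p$ (the same choice is used later in Example 2) and is an illustration rather than part of the paper.

% Sketch: partition Evv (q x q) and Exv (m x q) into the blocks of (9),
% assuming q is divisible by p and q_j = q/p for all j.
q = 12; m = 4; p = 3; qj = q/p;
Evv = randn(q); Evv = Evv*Evv';                    % a sample symmetric covariance-like matrix
Exv = randn(m, q);
Eij = mat2cell(Evv, qj*ones(1,p), qj*ones(1,p));   % Eij{i,j} = E_{v_i v_j}
Ej  = mat2cell(Exv, m, qj*ones(1,p));              % Ej{j}   = E_{x v_j}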

4.1.1. Generic Basis for the Solution of Problem (8): Case p = 2 in (10). Second Degree Filter

To provide a generic basis for the solution of equation (10), let us consider the case p = 2 in (4) and (6), i.e., the second degree filter. Then (10) becomes
$[M_1\ M_2]\begin{bmatrix} E_{11} & E_{12}\\ E_{21} & E_{22}\end{bmatrix}=[E_1\ E_2].$  (11)
We denote
$E_{22}^{(1)}=E_{22}-E_{21}E_{11}^\dagger E_{12} \quad\text{and}\quad E_2^{(1)}=E_2-E_1E_{11}^\dagger E_{12}.$  (12)
Recall that it has been shown in [4] that, for any random vectors $g$ and $h$,
$E_{gh}=E_{gh}E_{hh}^\dagger E_{hh}.$  (13)
Lemma 1.
Let $M_1$ and $M_2$ satisfy equation (11). Then (11) decomposes into the equations
$M_1E_{11}=E_1-M_2E_{21} \quad\text{and}\quad M_2E_{22}^{(1)}=E_2^{(1)}.$  (14)
Proof. 
Let us denote $\bar M_1=M_1+M_2E_{21}E_{11}^\dagger$. Then
$[\bar M_1\ M_2]=[M_1\ M_2]\begin{bmatrix} I & O\\ E_{21}E_{11}^\dagger & I\end{bmatrix}$  (15)
and
$[M_1\ M_2]=[\bar M_1\ M_2]\begin{bmatrix} I & O\\ -E_{21}E_{11}^\dagger & I\end{bmatrix},$  (16)
where by (13),
$E_{21}E_{11}^\dagger E_{11}=E_{21}.$
Thus, (11) is represented as
$[\bar M_1\ M_2]\begin{bmatrix} I & O\\ -E_{21}E_{11}^\dagger & I\end{bmatrix}\begin{bmatrix} E_{11} & E_{12}\\ E_{21} & E_{22}\end{bmatrix}=[E_1\ E_2],$
which implies
$[\bar M_1\ M_2]\begin{bmatrix} I & O\\ -E_{21}E_{11}^\dagger & I\end{bmatrix}\begin{bmatrix} E_{11} & E_{12}\\ E_{21} & E_{22}\end{bmatrix}\begin{bmatrix} I & -E_{11}^\dagger E_{12}\\ O & I\end{bmatrix}=[E_1\ E_2]\begin{bmatrix} I & -E_{11}^\dagger E_{12}\\ O & I\end{bmatrix}.$  (17)
On the basis of (17), we see that (8), for $p=2$, reduces to the equation
$[\bar M_1\ M_2]\begin{bmatrix} E_{11} & O\\ O & E_{22}^{(1)}\end{bmatrix}=[E_1\ E_2^{(1)}].$  (18)
The latter implies (14).
Remark 2.
The above proof is based on equation (15), which implies the equivalence of equations (11) and (14). In turn, this allows us to equivalently obtain (14) as follows. Let us multiply (11) from the right by $N_{21}=\begin{bmatrix} I & -E_{11}^\dagger E_{12}\\ O & I\end{bmatrix}$. Then
$[M_1\ M_2]\begin{bmatrix} E_{11} & E_{12}\\ E_{21} & E_{22}\end{bmatrix}N_{21}=[E_1\ E_2]\,N_{21},$  (19)
i.e.,
$[M_1\ M_2]\begin{bmatrix} E_{11} & O\\ E_{21} & E_{22}^{(1)}\end{bmatrix}=[E_1\ E_2^{(1)}].$  (20)
Then (14) follows from (20).
Theorem 1.
For $p=2$, the minimal Frobenius norm solution of problem (8) is represented by
$M_2=\tilde M_2=E_2^{(1)}\big(E_{22}^{(1)}\big)^\dagger, \qquad M_1=\tilde M_1=\big(E_1-\tilde M_2E_{21}\big)E_{11}^\dagger.$  (21)
Proof. 
By Lemma 1, matrices $M_1$ and $M_2$ that solve (11) satisfy the equations in (14). By [2,3,17,18], their minimal Frobenius norm solutions are given by (21).
It follows from (21) that the second degree filter requires the computation of two pseudo-inverse matrices, $E_{11}^\dagger\in\mathbb{R}^{q_1\times q_1}$ and $\big(E_{22}^{(1)}\big)^\dagger\in\mathbb{R}^{q_2\times q_2}$, of sizes that are smaller than the size of $E_{vv}^\dagger\in\mathbb{R}^{q\times q}$ (if, of course, $q_1\neq 0$ and $q_2\neq 0$).
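For illustration, the following MATLAB sketch computes $\tilde M_2$ and $\tilde M_1$ from (12) and (21); the block variables E11, E12, E21, E22, E1, E2 are assumed to have been extracted beforehand (e.g., with mat2cell as sketched in Section 4.1), and the sketch is ours rather than the paper's.

% Sketch of the second degree filter, eqs. (12) and (21).
% E11, E12, E21, E22 are the blocks of Evv; E1, E2 are the blocks of Exv.
E11d  = pinv(E11);                 % q1 x q1 pseudo-inverse
E22_1 = E22 - E21*E11d*E12;        % E_{22}^{(1)} in (12)
E2_1  = E2  - E1 *E11d*E12;        % E_{2}^{(1)}  in (12)
M2 = E2_1 * pinv(E22_1);           % \tilde M_2 in (21)
M1 = (E1 - M2*E21) * E11d;         % \tilde M_1 in (21)
M  = [M1, M2];                     % the second degree filter [M_1 M_2]

Note that only the $q_1\times q_1$ and $q_2\times q_2$ pseudo-inverses are formed; the full $q\times q$ pseudo-inverse $E_{vv}^\dagger$ never appears.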

4.1.2. Decrease in Computational Load

Here, we wish to compare the computational load needed for a numerical realization of the second degree filter given, for $p=2$, by (4) and (21), with that of the filter represented by (7).
By [25] (p. 254), the product of $m\times n$ and $n\times q$ matrices requires, for large $m$, $n$ and $q$, $2mnq$ flops. For large $q$, the Golub-Reinsch SVD used for the computation of the $q\times q$ matrix pseudo-inverse requires $21q^3$ flops (see [25], p. 254).
In (12), for $q_1=q_2=q/2$, the evaluation of $E_{22}^{(1)}$ implies the evaluation of $E_{11}^\dagger$, the matrix multiplications in $E_{21}E_{11}^\dagger E_{12}$ and a matrix subtraction. These operations require $21(q/2)^3$ flops, $2(q/2)^3+2(q/2)^3$ flops and $q^2/4$ flops, respectively. Similarly, the computation of $E_2^{(1)}$ requires $2mq^2/4+2q^3/8+mq/2$ flops. In (21), the computations of $M_2$ and $M_1$ require $21q^3/8+2mq^2/4$ flops and $2mq^2/4+2mq^2/4+mq/2$ flops, respectively.
Let us denote by $C(A)$ the number of flops required to compute an operation $A$. In total, the second degree filter requires $C(E_{22}^{(1)})+C(E_2^{(1)})+C(M_2)+C(M_1)$ flops, where
$C(E_{22}^{(1)})=q^2/4+25q^3/8,$
$C(E_2^{(1)})=mq/2+2mq^2/4+2q^3/8,$
$C(M_2)=2mq^2/4+21q^3/8,$
$C(M_1)=mq/2+mq^2.$
Thus, the number of flops needed for the numerical realization of the second degree filter is equal to $mq+q^2/4+2mq^2+6q^3$.
The computational load required by the filter given by (7) is $2mq^2+21q^3$ flops. It is customary to choose $m\leq q$. In this case, clearly, $mq+q^2/4+2mq^2+6q^3<2mq^2+21q^3$. In particular, if $m=q$ then the latter inequality becomes $5q^2/4+8q^3<23q^3$. That is, for sufficiently large $m$ and $q$, the proposed second degree filter requires, roughly, up to 2.8 times fewer flops than the filter given by (7). In the Matlab simulations provided in Example 2 below, the second degree filter takes about half the time required by the filter given by (7). A command that would allow us to count the number of flops is not available in Matlab.
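As a quick numeric check of these counts (our illustration, not part of the paper), the two flop totals can be compared directly in MATLAB:

% Numeric check of the flop-count comparison for m = q (illustration only).
q = 5000; m = q;
flops_p2 = m*q + q^2/4 + 2*m*q^2 + 6*q^3;          % second degree filter
flops_p1 = 2*m*q^2 + 21*q^3;                       % filter (7)
fprintf('flop ratio: %.2f\n', flops_p1/flops_p2);  % close to 2.8 for large q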
In the following Section 4.1.4 and Section 4.1.6, the procedure described in Section 4.1.1 is extended to the case of the filter of higher degrees. For given q, this will imply, obviously, smaller sizes of blocks of matrices E v v and E x v than those for the second degree filter, and as a result, a further decrease in the associated computational load.

4.1.3. Error Associated with the Second Degree Filter Determined by (4)

Now, we wish to show that the error associated with the second degree filter determined, for $p=2$, by (4) and (21), is less than the error associated with the filter determined by (3).
Let us denote
$\epsilon=\min_{M}\|x-My\|_\Omega^2, \qquad \varepsilon_2=\min_{M_1,M_2}\left\|x-[M_1\ M_2]\begin{bmatrix} v_1\\ v_2\end{bmatrix}\right\|_\Omega^2.$  (22)
The Frobenius norm of a matrix $M$ is denoted by $\|M\|$.
Theorem 2.
Let $v_1$ and $v_2$ be such that
$\sum_{j=1}^{2}\big\|E_j^{(j-1)}\big[(E_{jj}^{(j-1)})^\dagger\big]^{1/2}\big\|^2>\big\|E_{xy}\big(E_{yy}^\dagger\big)^{1/2}\big\|^2,$  (23)
where $E_1=E_1^{(0)}$ and $E_{11}=E_{11}^{(0)}$. Then
$\varepsilon_2<\epsilon.$  (24)
Proof. 
It is known (see, for example, [1,2,3,4,5]) that, for $M$ determined by (3), and for $\tilde M_1$ and $\tilde M_2$ determined by (21),
$\epsilon=\|E_{xx}^{1/2}\|^2-\mathrm{tr}\{E_{xy}E_{yy}^\dagger E_{yx}\}, \qquad \varepsilon_2=\|E_{xx}^{1/2}\|^2-\mathrm{tr}\{E_{xv}E_{vv}^\dagger E_{vx}\},$
where $v=\begin{bmatrix} v_1\\ v_2\end{bmatrix}$ and, bearing in mind (2), $[\tilde M_1\ \tilde M_2]=E_{xv}E_{vv}^\dagger$. Therefore, by (21),
$\mathrm{tr}\{E_{xv}E_{vv}^\dagger E_{vx}\}=\mathrm{tr}\{[\tilde M_1\ \tilde M_2]E_{vx}\}=\mathrm{tr}\{\tilde M_1E_1^T+\tilde M_2E_2^T\}$
$=\mathrm{tr}\{(E_1E_{11}^\dagger-\tilde M_2E_{21}E_{11}^\dagger)E_1^T+\tilde M_2E_2^T\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T\}+\mathrm{tr}\{\tilde M_2(E_2^T-E_{21}E_{11}^\dagger E_1^T)\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T\}+\mathrm{tr}\{(E_2-E_1E_{11}^\dagger E_{12})(E_{22}-E_{21}E_{11}^\dagger E_{12})^\dagger(E_2^T-E_{21}E_{11}^\dagger E_1^T)\}$
$=\|E_1(E_{11}^\dagger)^{1/2}\|^2+\|E_2^{(1)}[(E_{22}^{(1)})^\dagger]^{1/2}\|^2.$
Then
$\varepsilon_2=\|E_{xx}^{1/2}\|^2-\Big(\|E_1(E_{11}^\dagger)^{1/2}\|^2+\|E_2^{(1)}[(E_{22}^{(1)})^\dagger]^{1/2}\|^2\Big).$  (27)
At the same time,
$\epsilon=\|E_{xx}^{1/2}\|^2-\|E_{xy}(E_{yy}^\dagger)^{1/2}\|^2.$  (28)
As a result, (24) follows from (23), (27) and (28).
Remark 3.
The condition in (23) can practically be satisfied by, in particular, a suitable choice of $v_1$ and $v_2$, and/or of their dimensions. The following Corollary 1 demonstrates such a particular choice of $v_1$.
Corollary 1.
Let $v_1=y$ and let $v_2$ be a nonzero signal. Then (24) is always true.
Proof. 
The proof follows directly from (23), (27) and (28).
In general, the proposed solution of equation (10) is based on a combination of Lemma 1 and Remark 2 with the extension of the idea of Gaussian elimination [25] to the case of matrices with the specific blocks $E_{ij}$ and $E_j$. This allows us to build the special procedure considered below in Section 4.1.4 and Section 4.1.6.
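A minimal MATLAB check of Corollary 1 is sketched below; the synthetic model and sample covariances are ours and serve only to illustrate that, with $v_1=y$ and a nonzero $v_2$, the error of the second degree filter does not exceed the error of the filter (3).

% Numerical illustration of Corollary 1 (not from the paper).
N = 5000; m = 8; n = 10; q2 = 6;
x  = randn(m, N);
y  = randn(n, m)*x + 0.5*randn(n, N);              % observable signal correlated with x
v2 = randn(q2, m)*x + 0.5*randn(q2, N);            % an extra nonzero observation
v  = [y; v2];                                      % v1 = y, v2 as above
Exx = (x*x')/N; Exy = (x*y')/N; Eyy = (y*y')/N;
Exv = (x*v')/N; Evv = (v*v')/N;
eps1 = trace(Exx) - trace(Exy*pinv(Eyy)*Exy');     % error of filter (3), cf. (28)
eps2 = trace(Exx) - trace(Exv*pinv(Evv)*Exv');     % error of the second degree filter, cf. (27)
fprintf('eps = %.4f, eps2 = %.4f\n', eps1, eps2);  % expect eps2 <= eps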

4.1.4. Solution of Equation (10) for p = 3

To clarify the method of solution of equation (10) for an arbitrary p, let us now consider the particular case with $p=3$. The solution is based on the development of the procedure represented by (19) and (20).
For p = 3 , equation (10) is as follows:
$[M_1\ M_2\ M_3]\begin{bmatrix} E_{11} & E_{12} & E_{13}\\ E_{21} & E_{22} & E_{23}\\ E_{31} & E_{32} & E_{33}\end{bmatrix}=[E_1\ E_2\ E_3].$  (29)
First, consider a reduction of (29) to the equation with the block-lower triangular matrix.
Step 1. By (13), for $i=2,3$, $E_{i1}=E_{i1}E_{11}^\dagger E_{11}$. Define
$N_{3,1}=\begin{bmatrix} I & -E_{11}^\dagger E_{12} & -E_{11}^\dagger E_{13}\\ O & I & O\\ O & O & I\end{bmatrix}.$  (30)
Now, in (29), for $i=1,2,3$ and $j=2,3$, update the block-matrices $E_{ij}$ and $E_j$ to $E_{ij}^{(1)}$ and $E_j^{(1)}$, respectively, so that
$\begin{bmatrix} E_{11} & O & O\\ E_{21} & E_{22}^{(1)} & E_{23}^{(1)}\\ E_{31} & E_{32}^{(1)} & E_{33}^{(1)}\end{bmatrix}=\begin{bmatrix} E_{11} & E_{12} & E_{13}\\ E_{21} & E_{22} & E_{23}\\ E_{31} & E_{32} & E_{33}\end{bmatrix}N_{3,1}$  (31)
and
$[E_1\ E_2^{(1)}\ E_3^{(1)}]=[E_1\ E_2\ E_3]\,N_{3,1},$  (32)
where $E_{kj}^{(1)}=E_{kj}-E_{k1}E_{11}^\dagger E_{1j}$, for $k=2,3$ and $j=2,3$, and $E_j^{(1)}=E_j-E_1E_{11}^\dagger E_{1j}$, for $j=2,3$. Note that $E_{12}^{(1)}=E_{13}^{(1)}=O$ by (13).
Step 2. Set
$N_{3,2}=\begin{bmatrix} I & O & O\\ O & I & -(E_{22}^{(1)})^\dagger E_{23}^{(1)}\\ O & O & I\end{bmatrix}.$  (33)
By (13), $E_{32}^{(1)}=E_{32}^{(1)}(E_{22}^{(1)})^\dagger E_{22}^{(1)}$. Taking the latter into account, update the block-matrices $E_{ij}^{(1)}$ and $E_j^{(1)}$ in the LHS of (31) and (32), for $i=2,3$ and $j=3$, so that
$\begin{bmatrix} E_{11} & O & O\\ E_{21} & E_{22}^{(1)} & O\\ E_{31} & E_{32}^{(1)} & E_{33}^{(2)}\end{bmatrix}=\begin{bmatrix} E_{11} & O & O\\ E_{21} & E_{22}^{(1)} & E_{23}^{(1)}\\ E_{31} & E_{32}^{(1)} & E_{33}^{(1)}\end{bmatrix}N_{3,2}$
and
$[E_1\ E_2^{(1)}\ E_3^{(2)}]=[E_1\ E_2^{(1)}\ E_3^{(1)}]\,N_{3,2},$
where $E_{33}^{(2)}=E_{33}^{(1)}-E_{32}^{(1)}(E_{22}^{(1)})^\dagger E_{23}^{(1)}$ and $E_3^{(2)}=E_3^{(1)}-E_2^{(1)}(E_{22}^{(1)})^\dagger E_{23}^{(1)}$.
As a result, we finally obtain the updated matrix equation with a block-lower triangular matrix,
$[M_1\ M_2\ M_3]\begin{bmatrix} E_{11} & O & O\\ E_{21} & E_{22}^{(1)} & O\\ E_{31} & E_{32}^{(1)} & E_{33}^{(2)}\end{bmatrix}=[E_1\ E_2^{(1)}\ E_3^{(2)}].$  (34)
Solution of equation (34). Using a back substitution, equation (34) is represented by the three equations as follows:
$M_3E_{33}^{(2)}=E_3^{(2)},$  (35)
$M_2E_{22}^{(1)}+M_3E_{32}^{(1)}=E_2^{(1)},$  (36)
$M_1E_{11}+M_2E_{21}+M_3E_{31}=E_1,$  (37)
where $E_{11}$, $E_{21}$, $E_{31}$ and $E_1$ are the original blocks in (29).
It follows from (35)–(37) that the solution of equation (10), for $p=3$, is represented by the minimal Frobenius norm solutions to the problems
$\min_{M_3}\|E_3^{(2)}-M_3E_{33}^{(2)}\|^2,$  (38)
$\min_{M_2}\|(E_2^{(1)}-\tilde M_3E_{32}^{(1)})-M_2E_{22}^{(1)}\|^2,$  (39)
$\min_{M_1}\Big\|\Big(E_1-\sum_{j=2}^{3}\tilde M_jE_{j1}\Big)-M_1E_{11}\Big\|^2,$  (40)
where $\tilde M_3$ and $\tilde M_2$ are the solutions to the problems in (38) and (39), respectively. The matrices $\tilde M_j$, for $j=1,2,3$, that solve (38)–(40) are as follows:
$\tilde M_3=E_3^{(2)}\big[E_{33}^{(2)}\big]^\dagger,$  (41)
$\tilde M_2=\big(E_2^{(1)}-\tilde M_3E_{32}^{(1)}\big)\big(E_{22}^{(1)}\big)^\dagger,$  (42)
$\tilde M_1=\Big(E_1-\sum_{j=2}^{3}\tilde M_jE_{j1}\Big)E_{11}^\dagger.$  (43)
Thus, the third degree filter requires the computation of three pseudo-inverse matrices, $E_{11}^\dagger\in\mathbb{R}^{q_1\times q_1}$, $\big(E_{22}^{(1)}\big)^\dagger\in\mathbb{R}^{q_2\times q_2}$ and $\big[E_{33}^{(2)}\big]^\dagger\in\mathbb{R}^{q_3\times q_3}$.
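For reference, a minimal MATLAB sketch of the two elimination steps and the back substitution (35)–(43) is given below; the blocks are stored in cell arrays E{i,j} and Ex{j} (variable names of ours), with E{i,j} holding $E_{v_iv_j}$ and Ex{j} holding $E_{xv_j}$.

% Sketch of the third degree filter, eqs. (30)-(43) (illustration only).
P1 = pinv(E{1,1});
for i = 2:3, for j = 2:3, E{i,j} = E{i,j} - E{i,1}*P1*E{1,j}; end, end   % E_{ij}^{(1)}
for j = 2:3, Ex{j} = Ex{j} - Ex{1}*P1*E{1,j}; end                        % E_{j}^{(1)}
P2 = pinv(E{2,2});
E{3,3} = E{3,3} - E{3,2}*P2*E{2,3};                                      % E_{33}^{(2)}
Ex{3}  = Ex{3}  - Ex{2}*P2*E{2,3};                                       % E_{3}^{(2)}
M3 = Ex{3}*pinv(E{3,3});                                                 % eq. (41)
M2 = (Ex{2} - M3*E{3,2})*P2;                                             % eq. (42)
M1 = (Ex{1} - M2*E{2,1} - M3*E{3,1})*P1;                                 % eq. (43)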

4.1.5. Error Associated with the Third Degree Filter Determined by (41)–(43)

We wish to show that the error associated with the filter determined by (41)–(43) is less than $\epsilon$, the error associated with the filter determined by (3).
To this end, let us denote
$\varepsilon_3=\min_{M_1,M_2,M_3}\|x-[M_1\ M_2\ M_3]v\|_\Omega^2,$  (44)
where $v=[v_1^T,v_2^T,v_3^T]^T$. As before, $E_1=E_1^{(0)}$ and $E_{11}=E_{11}^{(0)}$.
Theorem 3.
Let $v_1$, $v_2$ and $v_3$ be such that
$\sum_{j=1}^{3}\big\|E_j^{(j-1)}\big[(E_{jj}^{(j-1)})^\dagger\big]^{1/2}\big\|^2>\big\|E_{xy}\big(E_{yy}^\dagger\big)^{1/2}\big\|^2.$  (45)
Then
$\varepsilon_3<\epsilon.$  (46)
Proof. 
We have
$\varepsilon_3=\|E_{xx}^{1/2}\|^2-\mathrm{tr}\{E_{xv}E_{vv}^\dagger E_{vx}\},$  (47)
and, bearing in mind an obvious extension of Remark 2, for $p=3$, $[\tilde M_1\ \tilde M_2\ \tilde M_3]=E_{xv}E_{vv}^\dagger$. Therefore, by (41)–(43),
$\mathrm{tr}\{E_{xv}E_{vv}^\dagger E_{vx}\}=\mathrm{tr}\{[\tilde M_1\ \tilde M_2\ \tilde M_3]E_{vx}\}=\mathrm{tr}\{\tilde M_1E_1^T+\tilde M_2E_2^T+\tilde M_3E_3^T\}$
$=\mathrm{tr}\{(E_1E_{11}^\dagger-[\tilde M_2E_{21}+\tilde M_3E_{31}]E_{11}^\dagger)E_1^T+\tilde M_2E_2^T+\tilde M_3E_3^T\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T\}+\mathrm{tr}\{\tilde M_2(E_2^T-E_{21}E_{11}^\dagger E_1^T)+\tilde M_3(E_3^T-E_{31}E_{11}^\dagger E_1^T)\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T\}+\mathrm{tr}\{\big(E_2^{(1)}(E_{22}^{(1)})^\dagger-\tilde M_3E_{32}^{(1)}(E_{22}^{(1)})^\dagger\big)(E_2^{(1)})^T+\tilde M_3(E_3^T-E_{31}E_{11}^\dagger E_1^T)\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T+E_2^{(1)}(E_{22}^{(1)})^\dagger(E_2^{(1)})^T+\tilde M_3[E_3^T-E_{31}E_{11}^\dagger E_1^T-E_{32}^{(1)}(E_{22}^{(1)})^\dagger(E_2^{(1)})^T]\}$
$=\|E_1(E_{11}^\dagger)^{1/2}\|^2+\|E_2^{(1)}[(E_{22}^{(1)})^\dagger]^{1/2}\|^2+\mathrm{tr}\{E_3^{(2)}(E_{33}^{(2)})^\dagger(E_3^{(2)})^T\}$
$=\|E_1(E_{11}^\dagger)^{1/2}\|^2+\|E_2^{(1)}[(E_{22}^{(1)})^\dagger]^{1/2}\|^2+\|E_3^{(2)}[(E_{33}^{(2)})^\dagger]^{1/2}\|^2.$  (48)
Thus, (46) follows from (45), (47), (48) and the expression for $\epsilon$ in (28).
Corollary 2.
If $v_1=y$, and at least one of $v_2$ or $v_3$ is a nonzero vector, then (46) is always true.
Proof. 
The proof follows from (45), (48) and (28).

4.1.6. Solution of Equation (10) for Arbitrary p

Here, we extend the procedure considered in Section 4.1.4 to the case of an arbitrary p. First, we reduce the equation in (10) to an equation with a block-lower triangular matrix. We do it by the following steps, where each step is justified by an obvious extension of Lemma 1.
Step 1. First, in (10), for $k=1,\ldots,p$ and $j=2,\ldots,p$, we wish to update the blocks $E_{kj}$ and $E_k$ to $E_{kj}^{(1)}$ and $E_k^{(1)}$, respectively. To this end, define
$N_{p,1}=\begin{bmatrix} I & -E_{11}^\dagger E_{12} & \cdots & -E_{11}^\dagger E_{1p}\\ O & I & \cdots & O\\ \vdots & \vdots & \ddots & \vdots\\ O & O & \cdots & I\end{bmatrix}.$
Bearing in mind that, by (13), $E_{i1}=E_{i1}E_{11}^\dagger E_{11}$ for $i=2,\ldots,p$, we obtain
$\begin{bmatrix} E_{11} & O & \cdots & O\\ E_{21} & E_{22}^{(1)} & \cdots & E_{2p}^{(1)}\\ \vdots & \vdots & & \vdots\\ E_{p-1,1} & E_{p-1,2}^{(1)} & \cdots & E_{p-1,p}^{(1)}\\ E_{p1} & E_{p2}^{(1)} & \cdots & E_{pp}^{(1)}\end{bmatrix}=\begin{bmatrix} E_{11} & E_{12} & \cdots & E_{1p}\\ E_{21} & E_{22} & \cdots & E_{2p}\\ \vdots & \vdots & & \vdots\\ E_{p-1,1} & E_{p-1,2} & \cdots & E_{p-1,p}\\ E_{p1} & E_{p2} & \cdots & E_{pp}\end{bmatrix}N_{p,1}$  (49)
and
$[E_1\ E_2^{(1)}\ \cdots\ E_p^{(1)}]=[E_1\ E_2\ \cdots\ E_p]\,N_{p,1},$  (50)
where, for $k=1,\ldots,p$ and $j=2,\ldots,p$,
$E_{kj}^{(1)}=E_{kj}-E_{k1}E_{11}^\dagger E_{1j}$  (51)
and
$E_j^{(1)}=E_j-E_1E_{11}^\dagger E_{1j}.$  (52)
Step 2. Now we wish to update the blocks $E_{kj}^{(1)}$ and $E_k^{(1)}$, for $k=2,\ldots,p$ and $j=3,\ldots,p$, of the matrices in the LHS of (49) and (50), respectively.
Set
$N_{p,2}=\begin{bmatrix} I & O & O & \cdots & O\\ O & I & -(E_{22}^{(1)})^\dagger E_{23}^{(1)} & \cdots & -(E_{22}^{(1)})^\dagger E_{2p}^{(1)}\\ O & O & I & \cdots & O\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ O & O & O & \cdots & I\end{bmatrix}.$
Because, by (13), $E_{i2}^{(1)}=E_{i2}^{(1)}(E_{22}^{(1)})^\dagger E_{22}^{(1)}$ for $i=3,\ldots,p$, we have
$\begin{bmatrix} E_{11} & O & O & \cdots & O\\ E_{21} & E_{22}^{(1)} & O & \cdots & O\\ E_{31} & E_{32}^{(1)} & E_{33}^{(2)} & \cdots & E_{3p}^{(2)}\\ \vdots & \vdots & \vdots & & \vdots\\ E_{p1} & E_{p2}^{(1)} & E_{p3}^{(2)} & \cdots & E_{pp}^{(2)}\end{bmatrix}=\begin{bmatrix} E_{11} & O & O & \cdots & O\\ E_{21} & E_{22}^{(1)} & E_{23}^{(1)} & \cdots & E_{2p}^{(1)}\\ E_{31} & E_{32}^{(1)} & E_{33}^{(1)} & \cdots & E_{3p}^{(1)}\\ \vdots & \vdots & \vdots & & \vdots\\ E_{p1} & E_{p2}^{(1)} & E_{p3}^{(1)} & \cdots & E_{pp}^{(1)}\end{bmatrix}N_{p,2}$
and
$[E_1\ E_2^{(1)}\ E_3^{(2)}\ \cdots\ E_p^{(2)}]=[E_1\ E_2^{(1)}\ E_3^{(1)}\ \cdots\ E_p^{(1)}]\,N_{p,2},$
where, for $k=2,\ldots,p$ and $j=3,\ldots,p$,
$E_{kj}^{(2)}=E_{kj}^{(1)}-E_{k2}^{(1)}(E_{22}^{(1)})^\dagger E_{2j}^{(1)}$  (55)
and
$E_j^{(2)}=E_j^{(1)}-E_2^{(1)}(E_{22}^{(1)})^\dagger E_{2j}^{(1)}.$  (56)
We continue this updating procedure up to the final Step $(p-1)$. In Step $(p-2)$, the following matrices are obtained:
$\begin{bmatrix} E_{11} & O & \cdots & O & O\\ E_{21} & E_{22}^{(1)} & \cdots & O & O\\ \vdots & \vdots & & \vdots & \vdots\\ E_{p-1,1} & E_{p-1,2}^{(1)} & \cdots & E_{p-1,p-1}^{(p-2)} & E_{p-1,p}^{(p-2)}\\ E_{p1} & E_{p2}^{(1)} & \cdots & E_{p,p-1}^{(p-2)} & E_{pp}^{(p-2)}\end{bmatrix}$  (57)
and
$[E_1\ E_2^{(1)}\ \cdots\ E_{p-2}^{(p-3)}\ E_{p-1}^{(p-2)}\ E_p^{(p-2)}].$  (58)
Step $(p-1)$. By (13),
$E_{p-1,p}^{(p-2)}=E_{p-1,p-1}^{(p-2)}\big(E_{p-1,p-1}^{(p-2)}\big)^\dagger E_{p-1,p}^{(p-2)}.$
In (57) and (58), we now update only two blocks, $E_{p-1,p}^{(p-2)}$ and $E_{pp}^{(p-2)}$, and the single block $E_p^{(p-2)}$. To this end, define
$N_{p,p-1}=\begin{bmatrix} I & O & \cdots & O & O\\ O & I & \cdots & O & O\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ O & O & \cdots & I & -\big(E_{p-1,p-1}^{(p-2)}\big)^\dagger E_{p-1,p}^{(p-2)}\\ O & O & \cdots & O & I\end{bmatrix},$  (59)
and obtain
$\begin{bmatrix} E_{11} & O & \cdots & O & O\\ E_{21} & E_{22}^{(1)} & \cdots & O & O\\ \vdots & \vdots & & \vdots & \vdots\\ E_{p-1,1} & E_{p-1,2}^{(1)} & \cdots & E_{p-1,p-1}^{(p-2)} & O\\ E_{p1} & E_{p2}^{(1)} & \cdots & E_{p,p-1}^{(p-2)} & E_{pp}^{(p-1)}\end{bmatrix}=\begin{bmatrix} E_{11} & O & \cdots & O & O\\ E_{21} & E_{22}^{(1)} & \cdots & O & O\\ \vdots & \vdots & & \vdots & \vdots\\ E_{p-1,1} & E_{p-1,2}^{(1)} & \cdots & E_{p-1,p-1}^{(p-2)} & E_{p-1,p}^{(p-2)}\\ E_{p1} & E_{p2}^{(1)} & \cdots & E_{p,p-1}^{(p-2)} & E_{pp}^{(p-2)}\end{bmatrix}N_{p,p-1}$  (60)
and
$[E_1\ E_2^{(1)}\ \cdots\ E_{p-2}^{(p-3)}\ E_{p-1}^{(p-2)}\ E_p^{(p-1)}]$
$=[E_1\ E_2^{(1)}\ \cdots\ E_{p-2}^{(p-3)}\ E_{p-1}^{(p-2)}\ E_p^{(p-2)}]\,N_{p,p-1},$
where
$E_{pp}^{(p-1)}=E_{pp}^{(p-2)}-E_{p,p-1}^{(p-2)}\big(E_{p-1,p-1}^{(p-2)}\big)^\dagger E_{p-1,p}^{(p-2)}$  (61)
and
$E_p^{(p-1)}=E_p^{(p-2)}-E_{p-1}^{(p-2)}\big(E_{p-1,p-1}^{(p-2)}\big)^\dagger E_{p-1,p}^{(p-2)}.$  (62)
As a result, on the basis of (59)–(62), equation (10) is reduced to the equation with the block-lower triangular matrix as follows:
$\hat M\begin{bmatrix} E_{11} & O & \cdots & O & O\\ E_{21} & E_{22}^{(1)} & \cdots & O & O\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ E_{p-1,1} & E_{p-1,2}^{(1)} & \cdots & E_{p-1,p-1}^{(p-2)} & O\\ E_{p1} & E_{p2}^{(1)} & \cdots & E_{p,p-1}^{(p-2)} & E_{pp}^{(p-1)}\end{bmatrix}=\hat E,$  (63)
where
$\hat M=[M_1\ M_2\ \cdots\ M_{p-1}\ M_p]$
and
$\hat E=[E_1\ E_2^{(1)}\ \cdots\ E_{p-1}^{(p-2)}\ E_p^{(p-1)}].$

4.1.7. Solution of Equation (10)

Using a back substitution, (63) is represented by p equations as follows:
$M_pE_{pp}^{(p-1)}=E_p^{(p-1)},$
$M_{p-1}E_{p-1,p-1}^{(p-2)}+M_pE_{p,p-1}^{(p-2)}=E_{p-1}^{(p-2)},$
$\vdots$
$M_2E_{22}^{(1)}+\cdots+M_{p-1}E_{p-1,2}^{(1)}+M_pE_{p2}^{(1)}=E_2^{(1)},$
$M_1E_{11}+\cdots+M_{p-1}E_{p-1,1}+M_pE_{p1}=E_1.$
Therefore, the solution of equation (10) is represented by solutions to the problems
$\min_{M_p}\|E_p^{(p-1)}-M_pE_{pp}^{(p-1)}\|^2,$  (64)
$\min_{M_{p-1}}\|(E_{p-1}^{(p-2)}-\tilde M_pE_{p,p-1}^{(p-2)})-M_{p-1}E_{p-1,p-1}^{(p-2)}\|^2,$  (65)
$\vdots$
$\min_{M_2}\Big\|\Big(E_2^{(1)}-\sum_{j=3}^{p}\tilde M_jE_{j2}^{(1)}\Big)-M_2E_{22}^{(1)}\Big\|^2,$  (66)
$\min_{M_1}\Big\|\Big(E_1-\sum_{j=2}^{p}\tilde M_jE_{j1}\Big)-M_1E_{11}\Big\|^2.$  (67)
In (65)–(67), $\tilde M_p,\ldots,\tilde M_2$ are the solutions to the problems in (64)–(66), respectively. On the basis of [26], the minimal Frobenius norm solutions $\tilde M_p,\ldots,\tilde M_1$ to (64)–(67) are given by
$\tilde M_p=E_p^{(p-1)}\big(E_{pp}^{(p-1)}\big)^\dagger,$  (68)
$\tilde M_{p-1}=\big(E_{p-1}^{(p-2)}-\tilde M_pE_{p,p-1}^{(p-2)}\big)\big(E_{p-1,p-1}^{(p-2)}\big)^\dagger,$  (69)
$\vdots$
$\tilde M_2=\Big(E_2^{(1)}-\sum_{j=3}^{p}\tilde M_jE_{j2}^{(1)}\Big)\big(E_{22}^{(1)}\big)^\dagger,$  (70)
$\tilde M_1=\Big(E_1-\sum_{j=2}^{p}\tilde M_jE_{j1}\Big)E_{11}^\dagger.$  (71)
Thus, the p-th degree filter requires the computation of p pseudo-inverse matrices, $E_{11}^\dagger\in\mathbb{R}^{q_1\times q_1}$, $\big(E_{22}^{(1)}\big)^\dagger\in\mathbb{R}^{q_2\times q_2}$, ..., $\big(E_{pp}^{(p-1)}\big)^\dagger\in\mathbb{R}^{q_p\times q_p}$.
The algorithm that follows is based on formulas (51), (52), (55), (56), (61), (62) and (68)–(71).
Algorithm 1: Solution of equation (10)
Input: $p$; $E_{kj}$ for $k,j=1,\ldots,p$; and $E_k$ for $k=1,\ldots,p$.
Output: $M_j$ for $j=1,\ldots,p$.
Note that the algorithm is quite simple. Its output is constructed from the successive computation of the matrices $E_{kj}$ and $E_k$.
  • The algorithm first updates the matrices $E_{kj}$ and $E_k$ similarly to the particular cases given by (51), (52), (55), (56), (61), (62).
  • It then calculates the matrix $M_p$ as in (68).
  • Finally, it calculates the matrices $M_{p-1},\ldots,M_1$ as in (69)–(71).
Importantly, each $\big(E_{ss}^{(s-1)}\big)^\dagger$ is a $q_s\times q_s$ matrix with $q_s<q$; therefore, it requires less computation than the $q\times q$ pseudo-inverse $E_{vv}^\dagger$ in (7).
In the Matlab code below, $m$, $q$, $p$ are as above, and Evv.mat, Exv.mat are files containing the matrices $E_{vv}$ and $E_{xv}$, respectively.
Listing 1. MATLAB Code for Solving Equation (10).
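Since the original listing is not reproduced here, the following is a minimal MATLAB sketch of the same block-elimination and back-substitution procedure (cf. (51)–(62) and (68)–(71)); it assumes Evv.mat and Exv.mat store variables named Evv and Exv and that the block sizes are equal, $q_j=q/p$. It is our illustration, not the authors' original code.

% Sketch of Algorithm 1 / Listing 1 (illustration only). Assumes q divisible by p.
load('Evv.mat');                   % q x q covariance E_vv (variable Evv assumed)
load('Exv.mat');                   % m x q covariance E_xv (variable Exv assumed)
p = 5; q = size(Evv, 1); qj = q/p;
E  = mat2cell(Evv, qj*ones(1,p), qj*ones(1,p));   % blocks E_{ij}
Ex = mat2cell(Exv, size(Exv,1), qj*ones(1,p));    % blocks E_j
Pinv = cell(1, p);                 % the p small pseudo-inverses
for s = 1:p-1                      % block elimination, cf. (51)-(62)
    Pinv{s} = pinv(E{s,s});
    for k = s+1:p
        for j = s+1:p
            E{k,j} = E{k,j} - E{k,s}*Pinv{s}*E{s,j};
        end
    end
    for j = s+1:p
        Ex{j} = Ex{j} - Ex{s}*Pinv{s}*E{s,j};
    end
end
Pinv{p} = pinv(E{p,p});
M = cell(1, p);                    % back substitution, cf. (68)-(71)
for s = p:-1:1
    R = Ex{s};
    for j = s+1:p
        R = R - M{j}*E{j,s};       % E{j,s} holds E_{js}^{(s-1)} (original blocks if s = 1)
    end
    M{s} = R*Pinv{s};
end
Mfull = cell2mat(M);               % the p-th degree filter [M_1 ... M_p]

For $p=1$ the elimination loop is skipped and Mfull reduces to $E_{xv}E_{vv}^\dagger$ as in (7), which is the baseline timed in Table 2.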
Example 2.
Here, we wish to numerically illustrate the above algorithm and Matlab code. To this end, in equation (10), we consider matrices $E_{vv}\in\mathbb{R}^{q\times q}$ and $E_{xv}\in\mathbb{R}^{m\times q}$ whose entries are normally distributed with mean 0 and variance 1. We choose $m=1000$, $q=1000,\ldots,5000$, $p=1,2,4,5$ and $q_j=q/p$, for $j=1,\ldots,p$.
In Table 2 and Figure 1, for $p=1,2,4,5$, the time needed by the above algorithm is presented versus different values of the dimension q. Recall that the simulations were carried out on the Dell Latitude 7400 Business Laptop (manufactured in 2020). The time decreases with the increase in p. In other words, the computational load needed for the numerical realization of the proposed p-th degree filter decreases with the increase in the degree p. Note that the proposed p-th degree filter requires the computation of p pseudo-inverse matrices and also the matrix multiplications determined by the above algorithm. In particular, for $q=5000$ and $p=5$, the proposed filter is around four times faster than the filter defined by (7).

4.1.8. The Error Associated with the Filter Represented by (4) and (68)–(71)

Let us denote by
$\varepsilon_p=\min_{M_1,\ldots,M_p}\|x-[M_1,\ldots,M_p]\,v\|_\Omega^2$  (72)
the error associated with the filter determined by (4) and (68)–(71). We wish to analyze the error.
Theorem 4.
The error associated with the filter determined by (4) and (68)–(71) is given by
$\varepsilon_p=\|E_{xx}^{1/2}\|^2-\sum_{j=1}^{p}\big\|E_j^{(j-1)}\big[(E_{jj}^{(j-1)})^\dagger\big]^{1/2}\big\|^2.$  (73)
Let $v_p$ be a nonzero vector. Then the error decreases if the degree p of the proposed filter increases, i.e.,
$\varepsilon_p<\varepsilon_{p-1}, \quad\text{for } p=2,3,\ldots.$  (74)
In particular, if $v_1,\ldots,v_p$ are such that
$\sum_{j=1}^{p}\big\|E_j^{(j-1)}\big[(E_{jj}^{(j-1)})^\dagger\big]^{1/2}\big\|^2>\big\|E_{xy}\big(E_{yy}^\dagger\big)^{1/2}\big\|^2,$  (75)
then $\varepsilon_p$ is less than ϵ, the error associated with the filter given by (3), i.e.,
$\varepsilon_p<\epsilon.$  (76)
Proof. 
We have
$\varepsilon_p=\|E_{xx}^{1/2}\|^2-\mathrm{tr}\{E_{xv}E_{vv}^\dagger E_{vx}\},$  (77)
where, similarly to the proofs of Theorems 2 and 3, $[\tilde M_1\ \cdots\ \tilde M_p]=E_{xv}E_{vv}^\dagger$. Note that, as before, $E_1=E_1^{(0)}$ and $E_{11}=E_{11}^{(0)}$. Therefore, by (68)–(71),
$\mathrm{tr}\{E_{xv}E_{vv}^\dagger E_{vx}\}=\mathrm{tr}\{[\tilde M_1\ \cdots\ \tilde M_p]E_{vx}\}=\mathrm{tr}\{\tilde M_1E_1^T+\cdots+\tilde M_pE_p^T\}$
$=\mathrm{tr}\{(E_1E_{11}^\dagger-[\tilde M_2E_{21}+\cdots+\tilde M_pE_{p1}]E_{11}^\dagger)E_1^T+\tilde M_2E_2^T+\cdots+\tilde M_pE_p^T\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T\}+\mathrm{tr}\{\tilde M_2(E_2^T-E_{21}E_{11}^\dagger E_1^T)+\cdots+\tilde M_p(E_p^T-E_{p1}E_{11}^\dagger E_1^T)\}$
$=\mathrm{tr}\{E_1E_{11}^\dagger E_1^T\}+\mathrm{tr}\{\big[E_2^{(1)}(E_{22}^{(1)})^\dagger-\tilde M_3E_{32}^{(1)}(E_{22}^{(1)})^\dagger-\cdots-\tilde M_pE_{p2}^{(1)}(E_{22}^{(1)})^\dagger\big](E_2^{(1)})^T+\cdots+\tilde M_p(E_p^T-E_{p1}E_{11}^\dagger E_1^T)\}$
$=\cdots=\|E_1(E_{11}^\dagger)^{1/2}\|^2+\|E_2^{(1)}[(E_{22}^{(1)})^\dagger]^{1/2}\|^2+\cdots+\|E_{p-1}^{(p-2)}[(E_{p-1,p-1}^{(p-2)})^\dagger]^{1/2}\|^2+\mathrm{tr}\{E_p^{(p-1)}(E_{pp}^{(p-1)})^\dagger(E_p^{(p-1)})^T\}$
$=\|E_1(E_{11}^\dagger)^{1/2}\|^2+\|E_2^{(1)}[(E_{22}^{(1)})^\dagger]^{1/2}\|^2+\cdots+\|E_p^{(p-1)}[(E_{pp}^{(p-1)})^\dagger]^{1/2}\|^2.$  (78)
As a result, (77) and (78) imply (73). In turn, (73) implies (74). Inequality (76) follows from (75), (28), (77) and (78).
Remark 4.
Of course, in practice, the condition in (75) can easily be satisfied by a variation of the dimensions of the vectors $v_1,\ldots,v_p$ and/or by their choice. The following Corollary 3 illustrates this observation.
Corollary 3.
If $v_1=y$ and at least one of $v_2,\ldots,v_p$ is not the zero vector, then (76) is always true.
Proof. 
The proof follows directly from (28) and (73).
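A minimal MATLAB check of the monotone behaviour in (74) is sketched below; the synthetic model, sample covariances and block size are ours and serve only as an illustration.

% Numerical illustration of (74): adding a block v_p does not increase the error.
N = 20000; m = 6; qj = 3; pmax = 4;
x = randn(m, N);
v = randn(pmax*qj, m)*x + 0.5*randn(pmax*qj, N);    % v = [v_1; ...; v_pmax], each block qj x N
Exx = (x*x')/N; Exv = (x*v')/N; Evv = (v*v')/N;
for p = 1:pmax
    idx = 1:p*qj;                                   % use the first p blocks v_1, ..., v_p
    eps_p = trace(Exx) - trace(Exv(:,idx)*pinv(Evv(idx,idx))*Exv(:,idx)');
    fprintf('p = %d: eps_p = %.4f\n', p, eps_p);    % expect a non-increasing sequence
end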

5. Conclusion

In a number of important applications, such as those in biology and medicine, the associated random signals are high-dimensional. Therefore, the covariance matrices formed from those signals are high-dimensional as well. Large matrices also appear because of the filter structure. As a result, the numerical realization of the filter that processes high-dimensional signals might be very time consuming, and in some cases the computer runs out of memory. In this paper, we have presented a method of optimal filter determination which targets the case of high-dimensional random signals. In particular, we have developed a procedure that significantly decreases the computational load needed to numerically realize the filter (see Section 4.1.7 and Example 2). The procedure is based on the reduction of the filter equation with large matrices to its equivalent representation by a set of equations with smaller matrices. The associated algorithm has been provided. The algorithm is easy to implement numerically. We have also shown that the filter accuracy is improved with an increase in the filter degree, i.e., in the number of filter parameters (see Theorem 4).
The provided results demonstrate that, in high-dimensional random signal settings, a number of existing applications may benefit from using the proposed technique. The Matlab code is given in Section 4.1.7. In [6], the technique provided here is extended to the case of filtering which is optimal with respect to the minimization over both the matrices $M_1,\ldots,M_p$ and the random vectors $v_1,\ldots,v_p$.

Author Contributions

Conceptualization, P.H, A.T. and P.S.Q.; methodology, P.H. and A.T.; software, A.T. and P.S.Q; validation, P.H. and A.T.; writing – original draft, A.T.; visualization, P.S.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

This work was financially supported by Vicerrectoría de Investigación y Extensión from Instituto Tecnológico de Costa Rica (Research #1440054).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fomin, V.N.; Ruzhansky, M.V. Abstract Optimal Linear Filtering. SIAM Journal on Control and Optimization 2000, 38, 1334–1352.
  2. Hua, Y.; Nikpour, M.; Stoica, P. Optimal reduced-rank estimation and filtering. IEEE Transactions on Signal Processing 2001, 49, 457–469.
  3. Brillinger, D.R. Time Series: Data Analysis and Theory; 2001.
  4. Torokhti, A.; Howlett, P. An Optimal Filter of the Second Order. IEEE Transactions on Signal Processing 2001, 49, 1044–1048.
  5. Scharf, L. The SVD and reduced rank signal processing. Signal Processing 1991, 25, 113–133.
  6. Howlett, P.; Torokhti, A.; Soto-Quiros, P. Decrease in computational load and increase in accuracy for filtering of random signals. Part II: Increase in accuracy. Signal Processing (submitted).
  7. Leclercq, M.; Vittrant, B.; Martin-Magniette, M.L.; Boyer, M.P.S.; Perin, O.; Bergeron, A.; Fradet, Y.; Droit, A. Large-Scale Automatic Feature Selection for Biomarker Discovery in High-Dimensional OMICs Data. Frontiers in Genetics 2019, 10.
  8. Artoni, F.; Delorme, A.; Makeig, S. Applying dimension reduction to EEG data by Principal Component Analysis reduces the quality of its subsequent Independent Component decomposition. Neuroimage 2018, 175, 176–187.
  9. Bellman, R.E. Dynamic Programming, 2nd ed.; Courier Corporation, 2003.
  10. Stoica, P.; Jansson, M. MIMO system identification: State-space and subspace approximation versus transfer function and instrumental variables. IEEE Transactions on Signal Processing 2000, 48, 3087–3099.
  11. Billings, S.A. Nonlinear System Identification – Narmax Methods in the Time, Frequency, and Spatio-temporal Domains; John Wiley and Sons, Ltd., 2013.
  12. Schoukens, M.; Tiels, K. Identification of Block-oriented Nonlinear Systems Starting from Linear Approximations: A Survey. Automatica 2017, 85, 272–292.
  13. Howlett, P.G.; Torokhti, A.P.; Pearce, C.E.M. A Philosophy for the Modelling of Realistic Non-linear Systems. Proceedings of the American Mathematical Society 2003, 131, 353–363.
  14. Li, D.; Wong, K.D.; Hu, Y.H.; Sayeed, A.M. Detection, classification, and tracking of targets. IEEE Signal Processing Magazine 2002, 19, 17–29.
  15. Mathews, V.J.; Sicuranza, G.L. Polynomial Signal Processing; John Wiley & Sons, Inc.: New York, 2001.
  16. Marelli, D.E.; Fu, M. Distributed weighted least-squares estimation with fast convergence for large-scale systems. Automatica 2015, 51, 27–39.
  17. Torokhti, A.P.; Howlett, P.G. Best Operator Approximation in Modelling of Nonlinear Systems. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 2002, 49, 1792–1798.
  18. Torokhti, A.; Howlett, P. Best approximation of identity mapping: the case of variable memory. Journal of Approximation Theory 2006, 143, 111–123.
  19. Perlovsky, L.; Marzetta, T. Estimating a covariance matrix from incomplete realizations of a random vector. IEEE Transactions on Signal Processing 1992, 40, 2097–2100.
  20. Ledoit, O.; Wolf, M. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis 2004, 88, 365–411.
  21. Ledoit, O.; Wolf, M. Nonlinear shrinkage estimation of large-dimensional covariance matrices. The Annals of Statistics 2012, 40, 1024–1060.
  22. Vershynin, R. How Close is the Sample Covariance Matrix to the Actual Covariance Matrix? Journal of Theoretical Probability 2012, 25, 655–686.
  23. Won, J.-H.; Lim, J.; Kim, S.-J.; Rajaratnam, B. Condition-number-regularized covariance estimation. Journal of the Royal Statistical Society: Series B 2013, 75, 427–450.
  24. Schneider, M.K.; Willsky, A.S. A Krylov Subspace Method for Covariance Approximation and Simulation of Random Processes and Fields. Multidimensional Systems and Signal Processing 2003, 14, 295–318.
  25. Golub, G.; Van Loan, C. Matrix Computations, 4th ed.; Johns Hopkins Studies in the Mathematical Sciences; Johns Hopkins University Press, 2013.
  26. Friedland, S.; Torokhti, A. Generalized Rank-Constrained Matrix Approximations. SIAM Journal on Matrix Analysis and Applications 2007, 29, 656–659.
Figure 1. Time needed by the above algorithm, for $p=1,2,4,5$, versus the dimension q.
Table 1. Time (in seconds) needed for Matlab to compute $E_{vv}^\dagger\in\mathbb{R}^{q\times q}$ for different dimensions q.

q          500    1000   2000   3000   4000   5000   6000
Time (s)   0.01   0.4    4      14     33     63     115
Table 2. Time (in seconds) needed by the above algorithm for different degrees p and dimensions q.

              q = 1000   2000   3000   4000   5000   6000
p = 1             1       4      14     33     63.5   116
p = 2             1       2      7      17     35     58
p = 4             0.7     1.8    4      9      19     37
p = 5             0.5     1.5    3.7    8      15     28
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.