Preprint
Article

This version is not peer-reviewed.

Correct Degree Selection for Koopman Mode Decomposition

Submitted:

10 December 2025

Posted:

18 December 2025


Abstract
Fourier Decomposition (FD) and Koopman Mode Decomposition (KMD) are important tools for time series data analysis, applied across a broad spectrum of applications. Both aim to decompose time series functions into superpositions of countably many wave functions, with strikingly similar mathematical foundations. These methodologies derive from the linear decomposition of functions within specific function spaces: FD uses a fixed basis of sine and cosine functions, while KMD employs eigenfunctions of the Koopman linear operator. A notable distinction lies in their scope: FD is confined to periodic functions, while KMD can decompose functions into exponentially amplifying or damping waveforms, making it potentially better suited for describing phenomena beyond FD’s capabilities. However, practical applications of KMD often show that despite accurate approximation of training data, its prediction accuracy is limited. This paper clarifies that this issue is closely related to the number of wave components used in decomposition, referred to as the degree of a KMD. Existing methods use predetermined, arbitrary, or ad hoc values for this degree. We demonstrate that using a degree different from a uniquely determined value for the data allows infinite KMDs to accurately approximate training data, explaining why current methods, which select a single KMD from these candidates, struggle with prediction accuracy. Furthermore, we introduce mathematically supported algorithms to determine the correct degree. Simulations verify that our algorithms can identify the right degrees and generate KMDs that can make accurate predictions, even with noisy data.

1. Introduction

A wide range of natural and social phenomena are observed as superpositions of multiple nonlinear elemental processes. For example, recorded audio signals typically include not only the target speech—for instance, a conversation between individuals—but also various environmental noise components. Similarly, variations in the geomagnetic field arise from both internal processes, such as temporal fluctuations in the Earth’s main magnetic field, and external perturbations, such as solar flares and solar wind. Decomposing such observations into constituent processes and extracting only the components relevant to the study is a fundamental procedure in scientific research. Among such methods, frequency analysis—where time-series data are decomposed into countably (often finitely) many frequency components—plays a central role in data science.
A foundational principle across the natural sciences involves reducing nonlinear phenomena to linear problems, enabling analysis via linear algebra. For instance, kernel methods in machine learning embed data into high-dimensional (often infinite-dimensional Hilbert) spaces to facilitate linear solutions. Likewise, neural networks—which approximate arbitrary continuous (and thus potentially nonlinear) functions via linear combinations followed by nonlinear activations—rely on efficient linear transformations during training. The backpropagation algorithm, essential for learning from large-scale data, exemplifies this reliance.
A well-established and widely applied technique based on this principle is Fourier decomposition, which forms the foundation of frequency analysis across a wide range of fields. Grounded in functional analysis, it represents a function as a linear combination of frequency components, typically expressed in terms of an orthonormal basis of trigonometric functions. This decomposition facilitates tasks such as signal characterization and noise reduction and has found broad applications in speech recognition and compression, image processing, radar and sonar analysis, time-series forecasting, and medical imaging.
Koopman Mode Decomposition (KMD) has recently attracted considerable theoretical attention as a powerful extension of Fourier decomposition. Its origin traces back to the 1931 work of B.O. Koopman, who formulated a representation of nonlinear dynamical systems through linear operators acting on function spaces—now referred to as Koopman operators. The theoretical foundation of this framework was subsequently formalized, and beginning in the 1990s, research by I. Mezić and collaborators renewed interest in its potential to reveal latent dynamics in nonlinear systems. The development of Dynamic Mode Decomposition (DMD) by P.J. Schmid ignited a new wave of research and led to advanced extensions such as Extended DMD (EDMD), which enable practical estimation of Koopman spectral components from data beyond the original limitations of DMD. Although KMD often provides accurate representations of observed data, it has been noted that its predictive accuracy may deteriorate under certain conditions.
The present study aims to address this limitation by identifying the sources of prediction error in KMD and proposing efficient algorithms to extract those Koopman modes that, if present, are the only viable candidates for accurate forecasting.
To illustrate Koopman Mode Decomposition and our contributions, we begin by recalling the concept of Fourier Decomposition (FD). Let $f(t)$ be a $2\pi$-periodic, complex-valued function in $L^2$. That is, $f(t+2\pi) = f(t)$ for all $t$, and
$$\int_{-\pi}^{\pi} |f(t)|^2 \, dt < \infty.$$
The space of such functions forms a Hilbert space, where the inner product between $g(t)$ and $h(t)$ is defined by
$$\langle g, h\rangle = \int_{-\pi}^{\pi} g(t)\,\overline{h(t)}\,dt,$$
with $\bar{x}$ denoting the complex conjugate of $x$. Then, $f(t)$ admits the decomposition:
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{int}. \tag{1}$$
This decomposition is justified by the fact that the family
$$\left\{ \tfrac{1}{\sqrt{2\pi}}\, e^{int} \right\}_{n \in \mathbb{Z}}$$
forms a countable orthonormal basis for the Hilbert space of $2\pi$-periodic functions in $L^2([0, 2\pi])$. The convergence in Equation (1) is understood in the $L^2$-norm. Although such convergence does not imply pointwise convergence, Riesz's theorem [1] [Theorem 3.12] guarantees that a subsequence of the partial sums converges pointwise almost everywhere.
Koopman Mode Decomposition (KMD) [2] is similar to FD in that it expresses a function as a sum of oscillatory components. However, unlike FD, KMD allows for exponentially growing or decaying components. Hence, if a KMD of $f(t)$ exists, it takes the form:
$$f(t) = \sum_{n=0}^{\infty} c_n \lambda_n^t, \tag{2}$$
where $\lambda_n^t = |\lambda_n|^t e^{i \arg(\lambda_n) t}$. Thus, unless $|\lambda_n| = 1$, each term represents an exponentially growing (if $|\lambda_n| > 1$) or decaying (if $|\lambda_n| < 1$) wave (see Figure 1). The $\lambda_n$ constitute a countable subset of the spectrum of the so-called Koopman operator [2].
KMD is expected to provide a more flexible framework for representing diverse phenomena and has been applied in a wide array of domains, including: fluid dynamics [3,4,5,6], chaotic systems [7], neuroscience [8], plasma physics [9,10,11], sports analytics [12], robotics [13], and video processing [14].
In practical settings, both FD and KMD rely on a finite number of observations. Without restricting the summations in Equation (1) and Equation (2) to finitely many terms, the decomposition becomes ill-posed. We therefore approximate the function by a finite superposition of $\ell$ oscillatory components, where $\ell$ is called the degree of the decomposition.
In the case of the Discrete Fourier Transform (DFT), we assume observations $f(t_0), \dots, f(t_{T-1})$ at $t_k = \frac{2\pi k}{T}$, for $k = 0, \dots, T-1$. Since $e^{i m t_k} = e^{i n t_k}$ whenever $m \equiv n \pmod{T}$, the problem of finding the coefficients $c_0, \dots, c_{T-1}$ reduces to solving the linear system:
$$\bigl( f(t_0), \dots, f(t_{T-1}) \bigr) = (c_0 \;\cdots\; c_{T-1}) \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & e^{i\frac{2\pi}{T}} & \cdots & e^{i\frac{2\pi(T-1)}{T}} \\ \vdots & \vdots & & \vdots \\ 1 & e^{i\frac{2\pi(T-1)}{T}} & \cdots & e^{i\frac{2\pi(T-1)^2}{T}} \end{pmatrix}. \tag{3}$$
This system has a unique solution, as the coefficient matrix is a square Vandermonde matrix over distinct T-th roots of unity.
In general, an $m \times n$ matrix
$$V_n\langle a_1, \dots, a_m\rangle = \begin{pmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_m & a_m^2 & \cdots & a_m^{n-1} \end{pmatrix}$$
is referred to as a Vandermonde matrix, whose determinant when $m = n$ is given by
$$\det V_m\langle a_1, \dots, a_m\rangle = \prod_{i > j} (a_i - a_j).$$
The square matrix on the right-hand side of Equation (3) is a Vandermonde matrix, and Equation (3) can be restated as
$$(x_0 \;\cdots\; x_{T-1}) = (c_0 \;\cdots\; c_{T-1})\, V_T\langle \alpha^0, \alpha^1, \dots, \alpha^{T-1}\rangle,$$
where $\alpha = e^{2\pi i/T}$ is a primitive $T$-th root of unity.
By the aforementioned invertibility of the Vandermonde matrix generated by distinct points $\alpha^0, \dots, \alpha^{T-1}$, Equation (3) admits a unique solution:
$$(c_0 \;\cdots\; c_{T-1}) = \frac{1}{T}\,(x_0 \;\cdots\; x_{T-1})\, V_T\langle \alpha^0, \alpha^1, \dots, \alpha^{T-1}\rangle^*.$$
Here, $M^*$ denotes the conjugate transpose (i.e., Hermitian transpose) of a matrix $M$.
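As a sanity check, this closed-form inverse can be verified numerically. The following minimal sketch (assuming NumPy; variable names are ours) confirms that the rows of $V_T\langle \alpha^0, \dots, \alpha^{T-1}\rangle$ are orthogonal, so $V^{-1} = V^*/T$, and that the recovered coefficients reproduce the observations:

```python
import numpy as np

T = 8
alpha = np.exp(2j * np.pi / T)          # primitive T-th root of unity
# V_T<alpha^0, ..., alpha^(T-1)>: entry (n, t) is alpha^(n t)
V = alpha ** (np.arange(T)[:, None] * np.arange(T)[None, :])

# Orthogonality of the rows gives V V* = T I, hence V^{-1} = V*/T
assert np.allclose(V @ V.conj().T, T * np.eye(T))

rng = np.random.default_rng(0)
x = rng.standard_normal(T) + 1j * rng.standard_normal(T)  # observations x_0..x_{T-1}

c = (x @ V.conj().T) / T        # the closed-form solution c = (1/T) x V*
assert np.allclose(c @ V, x)    # the coefficients reproduce the observations
```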
In contrast, the KMD problem can be formulated analogously as:
$$[x_0 \;\cdots\; x_{T-1}] = [m_1 \;\cdots\; m_\ell]\, V_T\langle \lambda_1, \dots, \lambda_\ell\rangle, \tag{6}$$
with the following distinctions:
  • Each observable $x_t$ is an $m$-dimensional vector. We denote the matrix of observations by $X = [x_0 \;\cdots\; x_{T-1}]$.
  • The eigenvalues $\lambda_1, \dots, \lambda_\ell$ and the corresponding modes $m_1, \dots, m_\ell$ are unknown and must be determined.
  • The choice $\ell = T$, which is required for DFT, is entirely unsuitable for KMD: for any distinct set of $\lambda_1, \dots, \lambda_T$, there always exists a corresponding set of modes $m_1, \dots, m_T$ such that Equation (6) holds exactly.
Despite the increased complexity of the KMD problem, several numerical methods exist to solve Equation (6) for a given degree $\ell$, including: Dynamic Mode Decomposition (DMD), which is typically applicable when $\ell = \operatorname{rank} X$; the Arnoldi method, applicable when $\ell = T - 1$; and the vector Prony method, which allows arbitrary $\ell$. These methods yield approximate solutions minimizing the residual sum of squares (RSS), especially in the presence of observation noise.
However, what remained unresolved was how to determine an optimal degree $\ell$. We illustrate its importance through the following example, highlighting the predictive risk of an inappropriate choice.
Example 1.
Consider one-dimensional observables given by $X = [1\;\;1\;\;1\;\;1\;\;3\;\;5\;\;7]$. It is readily verified that Equation (6) admits no solution if $\ell \le 3$. For $\ell = 4$, the roots $\lambda_1, \dots, \lambda_4$ of the following equation
$$f(x; \alpha) = x^4 - x^3 - x - 1 + \alpha(x - 1) = 0$$
uniquely determine the modes $m_1, \dots, m_4$ such that Equation (6) is satisfied, thereby yielding a valid KMD for any value of the parameter $\alpha$. As illustrated in Figure 2, all such KMDs exactly reproduce the observed sequence $X$ for $t = 0, \dots, 6$, yet their extrapolations for $t \ge 7$ differ significantly.
This example highlights a key issue: even if an algorithm happens to return a single quartic KMD, it is merely one among infinitely many KMDs that fit the observed data. Consequently, the forecast made by such a KMD is almost certainly different from the ground truth, and the chance of accurate prediction is negligibly small.
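The continuum of Example 1 is easy to reproduce numerically. In the sketch below (assuming NumPy; the helper `kmd_prediction` is our name), for each $\alpha$ we take the roots of $f(x; \alpha)$ as Koopman eigenvalues, fit the modes by least squares, verify that the training sequence is reproduced exactly, and extrapolate to $t = 7$; the underlying recurrence implies the forecast is $11 - 2\alpha$, so different $\alpha$ give different predictions:

```python
import numpy as np

X = np.array([1, 1, 1, 1, 3, 5, 7], dtype=float)   # observations x_0..x_6
T = len(X)

def kmd_prediction(alpha, t):
    """Fit the degree-4 KMD induced by f(x; alpha) and evaluate it at time t."""
    # roots of x^4 - x^3 - x - 1 + alpha*(x - 1)  (the Koopman eigenvalues)
    lam = np.roots([1.0, -1.0, 0.0, alpha - 1.0, -(1.0 + alpha)])
    V = lam[:, None] ** np.arange(T)            # 4 x 7 Vandermonde matrix
    m = X[None, :] @ np.linalg.pinv(V)          # modes solving X = m V
    assert np.allclose(m @ V, X, atol=1e-6)     # training data reproduced exactly
    return float((m * lam[None, :] ** t).sum().real)

# Every alpha yields a KMD fitting X, but the extrapolations differ:
# the recurrence implied by f(x; alpha) gives x_7 = 11 - 2*alpha.
print(kmd_prediction(0.0, 7))   # close to 11.0
print(kmd_prediction(2.0, 7))   # close to 7.0
```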
Thus, for the sake of predictive accuracy, it is crucial to select $\ell$ such that it is uniquely feasible, defined as follows:
Definition 1.
Given an observable matrix $X$, a degree $\ell$ is said to be feasible if there exists at least one solution $(\lambda_1, \dots, \lambda_\ell;\, m_1, \dots, m_\ell)$ to Equation (6). Moreover, if this solution is unique, then $\ell$ is said to be uniquely feasible.
This paper develops a theoretical framework for uniquely feasible degrees and, based on this foundation, proposes efficient and practical algorithms to determine whether a given set of observables X admits a uniquely feasible degree—and if so, to identify it. We also demonstrate through simulations that the KMD selected by our algorithms can yield highly accurate predictions.

2. Theoretical Frameworks Underlying Koopman Mode Decomposition

A key significance of Koopman Mode Decomposition (KMD) is its ability to analyze the dynamics of a nonlinear system using only methods from linear algebra. In this section, we provide a brief review of the theoretical framework of KMD, which bridges nonlinear dynamics and linear algebra.

2.1. Temporal Transition of States and Semigroup Property

Let $Z$ denote a (possibly unobservable) state space. Under a deterministic assumption, once a state $\zeta \in Z$ is observed at some time, the state of the system after an elapsed time $t \ge 0$ is uniquely determined and is denoted by $\zeta_t$. Accordingly, the temporal evolution of the system is described by the mapping
$$\hat{\zeta} : Z \times [0, \infty) \ni (\zeta, t) \mapsto \zeta_t \in Z.$$
In the discrete-time setting, $\hat{\zeta}$ is instead defined on $Z \times (\{0\} \cup \mathbb{N})$, which can be regarded as a special case of the continuous-time formulation.
While the notation $\hat{\zeta}(\zeta, t)$ emphasizes the bivariate nature of the mapping, the map $t \mapsto \zeta_t$ is essentially regarded as a univariate function of $t$, with the initial state $\zeta$ fixed. The deterministic assumption also requires the identity $\zeta_{s+t} = (\zeta_s)_t$, which is equivalently expressed as
$$\hat{\zeta}(\zeta, s + t) = \hat{\zeta}\bigl(\hat{\zeta}(\zeta, s), t\bigr)$$
for all $s, t \ge 0$. This implies that if we define $\sigma_t := \hat{\zeta}(\cdot, t) : Z \to Z$ for $t \ge 0$, then the family
$$\{\, \sigma_t : Z \to Z \mid t \in [0, \infty) \,\}$$
forms a one-parameter semigroup under composition, that is,
$$\sigma_{s+t} = \sigma_t \circ \sigma_s$$
holds for all $s, t \ge 0$.

2.2. Koopman Operator

We denote the space of $\mathbb{C}$-valued functions defined over $Z$ by $\mathbb{C}^Z$. The function space $\mathbb{C}^Z$ forms a $\mathbb{C}$-algebra, equipped with addition, multiplication, and scalar multiplication, defined as follows for $f, g \in \mathbb{C}^Z$ and $z \in \mathbb{C}$:
$$(f + g)(\zeta) = f(\zeta) + g(\zeta), \quad (fg)(\zeta) = f(\zeta)\,g(\zeta), \quad\text{and}\quad (zf)(\zeta) = z\, f(\zeta).$$
In particular, $\mathbb{C}^Z$ is a vector space over $\mathbb{C}$.
The Koopman operator parameterized by time $t$ is defined as
$$U^t : \mathbb{C}^Z \ni f \mapsto f \circ \sigma_t \in \mathbb{C}^Z.$$
It is straightforward to verify that the Koopman operator is a C -algebra homomorphism and, in particular, a linear operator. Furthermore, we have:
Proposition 1.
The collection of Koopman operators $\{U^t \mid t \in [0, \infty)\}$ forms a one-parameter semigroup. That is, $U^{s+t} = U^t \circ U^s$ holds for any $s, t \in [0, \infty)$.
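Proposition 1 can be illustrated on a toy flow. In the sketch below (assuming NumPy; the flow $\sigma_t(z) = z e^{-t}$ and the observable are arbitrary choices of ours), we check $U^{s+t} f = U^t (U^s f)$ pointwise on a grid of states:

```python
import numpy as np

# Toy flow sigma_t(z) = z * exp(-t) on the state space Z = R (our choice)
def sigma(t):
    return lambda z: z * np.exp(-t)

# Koopman operator U^t: f -> f o sigma_t
def U(t, f):
    return lambda z: f(sigma(t)(z))

f = lambda z: z ** 2 + np.sin(z)        # an arbitrary observable
zs = np.linspace(-2.0, 2.0, 9)          # sample states
s, t = 0.3, 1.1

# Semigroup property: U^{s+t} f = U^t (U^s f)
lhs = U(s + t, f)(zs)
rhs = U(t, U(s, f))(zs)
assert np.allclose(lhs, rhs)
```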

2.3. Koopman Generator

In general, when a one-parameter semigroup $T = \{T_t \mid t \in [0, \infty)\}$ is defined on a Banach space $B$, it is said to be strongly continuous if, for every $x \in B$, the following norm convergence holds:
$$\lim_{t \to 0^+} \| T_t x - x \| = 0.$$
A strongly continuous one-parameter semigroup $T$ has several important properties [15] [Chapter 13]:
  • Each $T_t$ is bounded; that is, the operator norm $\|T_t\|$ is well-defined. More precisely, there exist constants $M \ge 1$ and $\omega \in \mathbb{R}$ such that $\|T_t\| \le M e^{\omega t}$ for all $t \ge 0$.
  • The set $D(A)$ of all $x \in B$ for which
    $$A x := \lim_{t \to 0^+} \frac{T_t x - x}{t}$$
    exists is a dense linear subspace of $B$, and $A$ is a closed linear operator with domain $D(A)$. This operator $A$ is called the infinitesimal generator of $\{T_t\}$.
  • If $\{T_t\}$ is bounded, i.e., $\sup_{t \ge 0} \|T_t\| < \infty$, then $D(A) = B$.
  • For every $x \in D(A)$, the derivative
    $$\frac{d}{dt} T_t x = \lim_{\tau \to 0} \frac{T_{t+\tau}\, x - T_t\, x}{\tau}$$
    exists, and we have
    $$\frac{d}{dt} T_t x = T_t A x = A T_t x$$
    for all $t \ge 0$ and $x \in D(A)$.
  • If $A x = \lambda x$ for some $x \ne 0$ and $\lambda \in \mathbb{C}$ (that is, $\lambda$ is an eigenvalue of $A$), then
    $$T_t x = e^{\lambda t} x.$$
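These properties can be checked concretely on a two-dimensional toy semigroup. In the sketch below (assuming NumPy; the generator $A$ is our choice), $T_t = e^{tA}$ is the rotation semigroup: we verify the semigroup law, the derivative identity $\frac{d}{dt} T_t x = A T_t x$ by finite differences, and the eigenpair relation $T_t x = e^{\lambda t} x$:

```python
import numpy as np

# Generator of the rotation semigroup on C^2 (toy example): A^2 = -I,
# so exp(tA) = cos(t) I + sin(t) A in closed form.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def T(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

# Semigroup property T_{s+t} = T_t T_s
s, t = 0.4, 0.9
assert np.allclose(T(s + t), T(t) @ T(s))

# d/dt T_t x = A T_t x  (finite-difference check)
x = np.array([1.0, 2.0])
h = 1e-6
deriv = (T(t + h) @ x - T(t) @ x) / h
assert np.allclose(deriv, A @ (T(t) @ x), atol=1e-4)

# Eigenpair: A v = i v with v = (1, i), hence T_t v = e^{it} v
v = np.array([1.0, 1.0j])
assert np.allclose(A @ v, 1j * v)
assert np.allclose(T(t) @ v, np.exp(1j * t) * v)
```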
When we say that the Koopman operator semigroup $U = \{U^t \mid t \in [0, \infty)\}$ is strongly continuous, we assume that it acts on a Banach space $B$ whose elements can be regarded as functions in $\mathbb{C}^Z$ in some way (e.g., $B \subseteq \mathbb{C}^Z$), and that for each $f \in B$ the limit
$$\lim_{t \to 0^+} U^t f = f$$
holds in the norm of $B$. Pointwise convergence of functions is a more primitive notion, and although these two modes of convergence are generally independent, they are closely related in certain settings.
  • Let $X$ be a compact topological space and let $C(X)$ denote the space of continuous functions on $X$. Since every continuous function on a compact space is bounded, we may equip $C(X)$ with the supremum norm:
    $$\|f\|_\infty := \sup_{x \in X} |f(x)|.$$
    With this norm, $C(X)$ is a Banach space. In this setting, convergence in norm is equivalent to uniform convergence, and in particular, uniform convergence implies pointwise convergence.
  • Let $(X, \Sigma, \mu)$ be a measure space, and let $B = L^p(\mu)$ for $1 \le p < \infty$. Elements of $L^p(\mu)$ are equivalence classes of measurable functions that are equal almost everywhere. Thus, any statement about pointwise convergence should be interpreted in terms of representatives of these equivalence classes, that is, convergence almost everywhere. If a sequence $\{f_n\}_{n \in \mathbb{N}} \subseteq L^p(\mu)$ satisfies
    $$\lim_{n \to \infty} \|f_n - f\|_{L^p} = 0,$$
    then there exists a subsequence that converges to $f$ pointwise almost everywhere. This follows from the completeness of $L^p$ spaces and is sometimes referred to as a version of the Riesz convergence theorem (see [1] [Theorem 3.12]).
If the Koopman operator semigroup $\{U^t \mid t \in [0, \infty)\}$ is bounded, then its infinitesimal generator, referred to as the Koopman generator and defined by
$$K f = \lim_{t \to 0^+} \frac{U^t f - f}{t}, \qquad f \in D(K),$$
is defined on the entire Banach space.
We next consider conditions under which the Koopman operators are bounded. Let $(Z, \Sigma, \mu)$ be a measure space, and suppose that for each $t \ge 0$ the map $\sigma_t : Z \to Z$ is measurable. We examine the boundedness of the associated Koopman operator $U^t$ acting on $L^p(\mu)$.
A sufficient condition for $U^t$ to be bounded on $L^p(\mu)$ is that $\sigma_t$ is non-expansive with respect to $\mu$, meaning that
$$\mu\bigl((\sigma_t)^{-1}(A)\bigr) \le \mu(A) \quad\text{for all } A \in \Sigma.$$
In this case, for any $f \in L^p(\mu)$,
$$\|U^t f\|_{L^p(\mu)}^p = \int_Z |f(\sigma_t(z))|^p \, d\mu(z) = \int_Z |f(z)|^p \, d(\sigma_t)_* \mu(z),$$
where $(\sigma_t)_* \mu$ denotes the pushforward of $\mu$ by $\sigma_t$. If $\sigma_t$ is non-expansive, then $(\sigma_t)_* \mu \le \mu$ (as measures), and hence
$$\|U^t f\|_{L^p(\mu)} \le \|f\|_{L^p(\mu)},$$
so in particular
$$\sup_{t \ge 0} \|U^t\|_{L^p(\mu) \to L^p(\mu)} \le 1 < \infty.$$
(For $p = \infty$, the same argument shows $\|U^t\|_{L^\infty \to L^\infty} \le 1$.)
Conversely, if $\sigma_t$ is expansive, i.e., there exists $A \in \Sigma$ such that
$$\mu\bigl((\sigma_t)^{-1}(A)\bigr) > \mu(A),$$
then the family $\{U^t\}_{t \ge 0}$ may fail to be uniformly bounded in $t$ (even though each fixed $U^t$ can still be bounded).
In many applications, especially in ergodic theory and dynamical systems, $\sigma_t$ is assumed to be measure-preserving, that is,
$$\mu\bigl((\sigma_t)^{-1}(A)\bigr) = \mu(A) \quad\text{for all } A \in \Sigma.$$
This implies $(\sigma_t)_* \mu = \mu$, and hence $U^t$ acts as an isometry on $L^p(\mu)$ for every $1 \le p \le \infty$. On $L^2(\mu)$, the Koopman operators are therefore unitary. If, in addition, $\{\sigma_t\}_{t \in \mathbb{R}}$ forms a measure-preserving flow (so that $\{U^t\}_{t \in \mathbb{R}}$ is a strongly continuous unitary group), then by Stone's theorem [1] [Theorem 13.40] the Koopman generator $K$ is skew-adjoint, i.e., $K^* = -K$. Since unitary and skew-adjoint operators are normal, the spectral theorem applies and provides the functional-analytic foundation for the Koopman Mode Decomposition.
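A minimal finite illustration of this unitarity (assuming NumPy; the construction is ours): the cyclic shift on $\{0, \dots, N-1\}$ with the uniform counting measure is measure-preserving, so its Koopman operator on $L^2$ is a unitary permutation matrix whose spectrum lies on the unit circle:

```python
import numpy as np

# Cyclic shift sigma(j) = j+1 mod N on Z = {0,...,N-1} with uniform measure:
# a measure-preserving map, so the Koopman operator on L^2 is unitary.
N = 6
U = np.zeros((N, N))
for j in range(N):
    U[j, (j + 1) % N] = 1.0   # (U f)(j) = f(sigma(j))

# U is a permutation matrix, hence unitary ...
assert np.allclose(U @ U.conj().T, np.eye(N))
# ... and all its eigenvalues lie on the unit circle (N-th roots of unity)
eigvals = np.linalg.eigvals(U)
assert np.allclose(np.abs(eigvals), np.ones(N))
```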

2.4. Koopman Mode Decomposition and Spectral Theorem

To introduce the Koopman mode decomposition, we assume that the Koopman operator semigroup $\{U^t \mid t \in [0, \infty)\}$ defined on a Banach space $B$ is strongly continuous in the norm of $B$ and induces a Koopman generator defined on the entire $B$. For example, this holds when the semigroup $\{U^t\}$ is bounded.
Let $\sigma_p(K) \subseteq \mathbb{C}$ denote the point spectrum of $K$, and let $V_\lambda \subseteq B$ be the eigenspace corresponding to $\lambda \in \sigma_p(K)$. When $f$ belongs to the completion in $B$ of the linear span of $\bigcup_{\lambda \in \sigma_p(K)} V_\lambda$, that is, when
$$f = \sum_{\lambda \in \sigma_p(K)} \phi_\lambda, \qquad \phi_\lambda \in V_\lambda,$$
holds in the norm of $B$, only countably many $\phi_\lambda$ are nonzero. We denote the corresponding eigenvalues by $\{\lambda_n \mid n = 1, 2, \dots\}$. Then the Koopman mode decomposition of $f$ is expressed as
$$f = \sum_{n=1}^{N} \phi_{\lambda_n}, \qquad N \in \mathbb{N} \cup \{\infty\},$$
and the following relations hold:
$$K f = \sum_{n=1}^{N} \lambda_n \phi_{\lambda_n}, \qquad U^t f = \sum_{n=1}^{N} e^{\lambda_n t} \phi_{\lambda_n}, \quad t \in [0, \infty).$$
If, in addition, every element of the one-parameter semigroup $\{\sigma_t \mid t \in [0, \infty)\}$ is measure-preserving on a measure space $(Z, \Sigma, \mu)$, then the Koopman operators $U^t$ and the Koopman generator $K$ defined on $L^2(Z)$ are unitary and skew-adjoint, respectively; that is, they are normal linear operators defined on the entire space. Hence, the Koopman mode decomposition can be understood in the context of the spectral theorem.
The spectral theorem asserts that a normal operator $T$ defined on a Hilbert space $H$ can be represented as
$$T = \int_{\sigma(T)} \lambda \, dE(\lambda),$$
where $E$ is a projection-valued measure, which plays the role of a Borel measure defined on the Borel $\sigma$-algebra $\mathcal{B}$ of $\sigma(T) \subseteq \mathbb{C}$. For each $B \in \mathcal{B}$, the value $E(B)$ is an orthogonal projection operator on $H$ rather than a real number, and the measure $E$ satisfies the following properties:
  • Orthogonality: $E(B_1)\, E(B_2) = E(B_1 \cap B_2)$.
  • Countable additivity: For any countable mutually disjoint family $\{B_i\}_{i \in I} \subseteq \mathcal{B}$,
    $$E\Bigl(\bigcup_{i \in I} B_i\Bigr) = \sum_{i \in I} E(B_i),$$
    where the convergence is in the strong operator topology.
Based on the projection-valued measure $E$, the integral of a measurable function $F$ over $\mathbb{C}$ is defined as
$$F(T) = \int_{\sigma(T)} F(\lambda) \, dE(\lambda),$$
in complete analogy with the Lebesgue integral.
This integral representation is essential, because the spectrum $\sigma(T)$ may be distributed continuously in $\mathbb{C}$. Furthermore, letting $\sigma_d(T) \subseteq \sigma_p(T)$ denote the set of all isolated eigenvalues of $T$, we can express the decomposition as
$$F(T) = \sum_{\lambda \in \sigma_d(T)} F(\lambda)\, E(\{\lambda\}) + \int_{\sigma(T) \setminus \sigma_d(T)} F(\lambda) \, dE(\lambda), \tag{9}$$
where each $E(\{\lambda\})$ for $\lambda \in \sigma_d(T)$ coincides with the orthogonal projection onto the eigenspace of $\lambda$.
For $f \in L^2(Z)$ (an equivalence class of functions equal almost everywhere), the Koopman mode decomposition of $f$ and the actions of the Koopman generator $K$ and the Koopman operators $U^t$ are obtained by setting $T = K$ and assuming that the integral part of Equation (9) vanishes (or is negligible). Since $E(\{\lambda\}) f$ is nonzero only for a countable subset of $\sigma_d(K)$, we label the corresponding eigenvalues as $\{\lambda_n\}_{n=1}^N$ for $N \in \mathbb{N} \cup \{\infty\}$, and write $\phi_{\lambda_n} := E(\{\lambda_n\}) f$.
  • For $F(x) = 1$, we obtain the Koopman mode decomposition of $f$:
    $$f = \sum_{\lambda \in \sigma_d(T)} E(\{\lambda\})\, f = \sum_{n=1}^{N} \phi_{\lambda_n}.$$
  • For $F(x) = x$, we obtain the action of $K$:
    $$K f = \sum_{\lambda \in \sigma_d(T)} \lambda\, E(\{\lambda\})\, f = \sum_{n=1}^{N} \lambda_n \phi_{\lambda_n}.$$
  • For $F(x) = e^{tx}$, we obtain the action of $U^t$:
    $$e^{tK} f = \sum_{\lambda \in \sigma_d(T)} e^{t\lambda}\, E(\{\lambda\})\, f = \sum_{n=1}^{N} e^{t\lambda_n} \phi_{\lambda_n}.$$
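The three identities above can be checked on a finite-dimensional stand-in for $K$, where the projection-valued measure reduces to a finite sum of rank-one eigenprojections $E(\{\lambda\}) = v v^*$ (sketch assuming NumPy; the matrix $K$ is our toy example, skew-adjoint and hence normal, with distinct eigenvalues):

```python
import numpy as np

# A skew-adjoint (hence normal) stand-in for the Koopman generator on C^3
K = np.array([[0.0, 2.0, 0.0],
              [-2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(K.conj().T, -K)          # skew-adjointness

lam, vecs = np.linalg.eig(K)                # eigenvalues: 2i, -2i, 0
f = np.array([1.0, 2.0, 3.0], dtype=complex)
# phi_n = E({lambda_n}) f with rank-one projections v v*
phi = [np.outer(v, v.conj()) @ f for v in vecs.T]

t = 0.7
# F(x) = 1:      f     = sum_n phi_n
assert np.allclose(sum(phi), f)
# F(x) = x:      K f   = sum_n lambda_n phi_n
assert np.allclose(K @ f, sum(l * p for l, p in zip(lam, phi)))
# F(x) = e^{tx}: U^t f = sum_n e^{t lambda_n} phi_n
Ut = vecs @ np.diag(np.exp(t * lam)) @ np.linalg.inv(vecs)
assert np.allclose(Ut @ f, sum(np.exp(t * l) * p for l, p in zip(lam, phi)))
```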

3. Discrete Koopman Mode Decomposition

The objective of Discrete Koopman Mode Decomposition (DKMD) is analogous to that of the Discrete Fourier Transform (DFT): to obtain a decomposition that fits a given finite sequence of samples of an unknown function. We begin by recalling the formulation of the DFT.

3.1. DFT and Vandermonde Matrix

Let $(x_0, \dots, x_{T-1})$ denote the values of an unknown function observed at times $t = 0, 1, \dots, T-1$. The DFT seeks coefficients $c_0, c_{\pm 1}, c_{\pm 2}, \dots$ such that
$$x_t = \sum_{n=-\infty}^{\infty} c_n e^{i \frac{2\pi n t}{T}}, \qquad t = 0, \dots, T-1.$$
Since this is an underdetermined system with infinitely many unknowns, we restrict to a finite sum. Moreover, since $n \equiv n' \pmod{T}$ implies $e^{i \frac{2\pi n t}{T}} = e^{i \frac{2\pi n' t}{T}}$, we may assume $n = 0, 1, \dots, T-1$. Let $\omega := e^{i \frac{2\pi}{T}}$. Then
$$x_t = \sum_{n=0}^{T-1} c_n \omega^{nt}, \qquad t = 0, \dots, T-1.$$
In matrix form, we can write
$$(x_0 \;\; x_1 \;\cdots\; x_{T-1}) = (c_0 \;\; c_1 \;\cdots\; c_{T-1}) \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{T-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(T-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \omega^{T-1} & \omega^{2(T-1)} & \cdots & \omega^{(T-1)^2} \end{pmatrix}. \tag{10}$$
The coefficient matrix above is an instance of the Vandermonde matrix.
Definition 2.
Let $a_1, \dots, a_m \in \mathbb{C}$ and $n \in \mathbb{N}$. The associated Vandermonde matrix is defined as
$$V_n\langle a_1, \dots, a_m\rangle = \begin{pmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_m & a_m^2 & \cdots & a_m^{n-1} \end{pmatrix}.$$
Basic properties. For pairwise distinct nodes $a_1, \dots, a_m$, the Vandermonde matrix satisfies:
  • $\det V_m\langle a_1, \dots, a_m\rangle = \prod_{1 \le j < i \le m} (a_i - a_j)$.
  • $\operatorname{rank} V_n\langle a_1, \dots, a_m\rangle = \min\{m, n\}$.
  • Viewing $V_n\langle a_1, \dots, a_m\rangle$ as a linear map $F : \mathbb{C}^m \to \mathbb{C}^n$, $F$ is injective if and only if $m \le n$, surjective if and only if $m \ge n$, and bijective if and only if $m = n$.
In particular,
$$\det V_T\langle \omega^0, \omega^1, \dots, \omega^{T-1}\rangle = \prod_{0 \le m < n < T} (\omega^n - \omega^m) \ne 0,$$
so Equation (10) has a unique solution for $(c_0, \dots, c_{T-1})$.
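Numerically, solving the Vandermonde system (10) agrees with the discrete Fourier transform up to normalization. A sketch (assuming NumPy; the normalization `fft(x)/T` matches the basis $\omega^{nt}$ with $\omega = e^{i 2\pi/T}$ used here):

```python
import numpy as np

T = 8
omega = np.exp(1j * 2 * np.pi / T)
# Coefficient matrix of Equation (10): entry (n, t) equals omega^(n t)
V = omega ** (np.arange(T)[:, None] * np.arange(T)[None, :])

rng = np.random.default_rng(1)
x = rng.standard_normal(T)

# Solve (x_0 ... x_{T-1}) = (c_0 ... c_{T-1}) V for the unique coefficients
c = np.linalg.solve(V.T, x)

assert np.allclose(c @ V, x)
# The solution coincides with the (suitably normalized) DFT of x
assert np.allclose(c, np.fft.fft(x) / T)
```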

3.2. Formulation of DKMD

Assume that the Koopman mode decomposition of a function $f$ is expressed as a countably infinite sum of eigenfunctions $\{\phi_n\}_{n \in \mathbb{N}}$ of the Koopman generator $K$. For example, assume that the spectrum of $K$ consists only of a countably infinite set of isolated eigenvalues $\{\lambda_n\}_{n \in \mathbb{N}}$:
$$f = \sum_{n=1}^{\infty} \phi_n.$$
Then, without loss of generality, we may assume that the values $\{e^{\lambda_n}\}_{n \in \mathbb{N}}$ are pairwise distinct.
Given finitely many samples $x_t = f(t)$ for $t = 0, 1, \dots, T-1$, DKMD seeks to satisfy
$$x_t = \sum_{n=1}^{\infty} c_n e^{\lambda_n t}, \qquad\text{where } c_n := \phi_n(0).$$
In the same way as in the DFT case, since the infinitely many unknowns $\{c_n\}_{n=1}^{\infty}$ and $\{\lambda_n\}_{n=1}^{\infty}$ make the system indefinite, we restrict their number to a finite $\ell < \infty$. Then the system can be written compactly as
$$(x_0, x_1, \dots, x_{T-1}) = (c_1, c_2, \dots, c_\ell)\, V_T\langle e^{\lambda_1}, \dots, e^{\lambda_\ell}\rangle. \tag{13}$$
If $e^{\lambda_1}, \dots, e^{\lambda_\ell}$ are pairwise distinct and $\ell \ge T$, the Vandermonde matrix represents a surjective linear mapping from $\mathbb{C}^\ell$ onto $\mathbb{C}^T$. Then we have the following:
Proposition 2.
For any $(x_0, \dots, x_{T-1}) \in \mathbb{C}^T$, pairwise distinct $e^{\lambda_1}, \dots, e^{\lambda_\ell}$, and $\ell \ge T$, there exists $(c_1, \dots, c_\ell) \in \mathbb{C}^\ell$ satisfying Equation (13).
Thus, in contrast to the DFT case, where the given values $x_0, \dots, x_{T-1}$ are decomposed into exactly $T$ Fourier components, performing DKMD requires determining the number of Koopman modes within the range $\ell < T$. Specifically, in the nonlinear system (13), the Koopman degree $\ell$ itself appears as an additional unknown, alongside $e^{\lambda_1}, \dots, e^{\lambda_\ell}$ and $c_1, \dots, c_\ell$.
In this regard, the primary contribution of this paper is to establish criteria for determining $\ell$, and to present computationally efficient methods for estimating it both exactly and approximately, particularly when the given observations are contaminated by noise.
In Section 5, we review an existing method from the literature for solving the equation in the special case where $\ell$ is known.

4. Definitions and Notations

The following definitions and descriptions are essential for the subsequent sections.
We extend the aforementioned formalization of DKMD from the decomposition of C -valued functions to that of C m -valued functions for a dimension m 1 . This modification does not alter the fundamental nature of the problem but better aligns with real-world applications of DKMD, such as fluid dynamics.
Definition 3
(DKMD). Let $[x_0 \;\cdots\; x_{T-1}]$ be a matrix of observables at times $t = 0, \dots, T-1$, where each $x_t$ is an $m$-dimensional column vector. The discrete Koopman mode decomposition (DKMD) of this observable matrix is a matrix factorization given by
$$[x_0 \;\cdots\; x_{T-1}] = [m_1 \;\cdots\; m_\ell]\, V_T\langle \mu_1, \dots, \mu_\ell\rangle, \tag{14}$$
where $\mu_1, \dots, \mu_\ell \in \mathbb{C}$ are pairwise distinct.
Definition 4
(Koopman Eigenvalues, Modes, and Degree). Given a DKMD as in Equation (14), we refer to $\ell$ as the Koopman degree, $\mu_1, \dots, \mu_\ell$ as the Koopman eigenvalues, and $m_1, \dots, m_\ell$ as the Koopman modes.
For convenience, let $X$ and $M$ denote the observable matrix and the matrix of modes, respectively:
$$X = [\, x_0 \;\cdots\; x_{T-1} \,], \qquad M = [\, m_1 \;\cdots\; m_\ell \,].$$
Then the DKMD can be expressed compactly as
$$X = M\, V_T\langle \mu_1, \dots, \mu_\ell\rangle. \tag{15}$$
Table 1 summarizes the principal notations used throughout this article.

5. Computing DKMD for Known Degrees (Related Work)

In this section, we introduce the vector Prony method [16], which estimates the unknown Koopman eigenvalues $\mu_1, \dots, \mu_\ell$ and the Koopman mode matrix $M$ that satisfy Equation (15), given the observable matrix $X$ and the Koopman degree $\ell$.
The procedure first computes the Koopman eigenvalues, and subsequently computes the Koopman modes based on the obtained eigenvalues.

5.1. Computing the Koopman Eigenvalues

We first introduce the characteristic polynomial associated with a DKMD, whose roots correspond to the Koopman eigenvalues.
Definition 5
(Characteristic Polynomial). For a DKMD $X = M\, V_T\langle \mu_1, \dots, \mu_\ell\rangle$, the polynomial
$$f(X) = \prod_{i=1}^{\ell} (X - \mu_i)$$
is called the characteristic polynomial of the DKMD.
If the characteristic polynomial of a DKMD is expressed as
$$f(X) = X^\ell + a_{\ell-1} X^{\ell-1} + \cdots + a_0,$$
then the following recurrence relation holds:
Proposition 3.
For all integers $j$ with $0 \le j \le T - \ell - 1$,
$$x_{j+\ell} + a_{\ell-1}\, x_{j+\ell-1} + \cdots + a_0\, x_j = 0.$$
Proof. 
The statement follows from
$$x_{j+\ell} + a_{\ell-1}\, x_{j+\ell-1} + \cdots + a_0\, x_j = M \,\mathrm{diag}(\mu_1^j, \dots, \mu_\ell^j) \begin{pmatrix} f(\mu_1) \\ \vdots \\ f(\mu_\ell) \end{pmatrix} = 0.$$
   □
Using Hankel matrices (Definition 6), the statement of Proposition 3 can be compactly expressed as
$$H_0\langle X_\ell^{T-1}\rangle = -\,(a_0 \;\cdots\; a_{\ell-1})\, H_{\ell-1}\langle X_0^{T-2}\rangle. \tag{16}$$
Definition 6
(Hankel Matrix). Given a matrix $X = [x_0, \dots, x_{T-1}]$ and writing $X_i^j := [\, x_i \;\cdots\; x_j \,]$, the $k$th Hankel matrix of $X$ is defined for $0 \le k \le T-1$ as
$$H_k\langle X\rangle = \bigl[\, (X_0^k)^T \;\; (X_1^{k+1})^T \;\cdots\; (X_{T-k-1}^{T-1})^T \,\bigr],$$
where $H_k\langle X\rangle \in \mathbb{C}^{(k+1) \times m(T-k)}$.
Although coefficients $(a_0, \dots, a_{\ell-1})$ satisfying Equation (16) may not exist, and even if they exist they may not be unique, the following least-squares optimality always holds:
$$-\,H_0\langle X_\ell^{T-1}\rangle\, H_{\ell-1}\langle X_0^{T-2}\rangle^{+} \in \operatorname*{arg\,min}_{a_0, \dots, a_{\ell-1}} \bigl\| H_0\langle X_\ell^{T-1}\rangle + (a_0 \;\cdots\; a_{\ell-1})\, H_{\ell-1}\langle X_0^{T-2}\rangle \bigr\|_F.$$
This represents the least-squares solution for the characteristic coefficients when Equation (16) is inconsistent.
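The eigenvalue step can be sketched as follows (assuming NumPy; the helpers `hankel` and `prony_eigenvalues` are our names, and the Hankel layout follows our reading of Definition 6). The characteristic coefficients come from the least-squares relation above via a pseudoinverse, and the Koopman eigenvalues are the roots of the resulting polynomial:

```python
import numpy as np

def hankel(X, k):
    """H_k<X>: row i is the concatenation of x_i^T, x_{i+1}^T, ... (Definition 6)."""
    m, T = X.shape
    return np.vstack([np.hstack([X[:, i + j] for j in range(T - k)])
                      for i in range(k + 1)])

def prony_eigenvalues(X, ell):
    """Estimate degree-ell Koopman eigenvalues by the vector Prony method."""
    m, T = X.shape
    b = hankel(X[:, ell:], 0)           # H_0<X_ell^{T-1}>
    H = hankel(X[:, :T - 1], ell - 1)   # H_{ell-1}<X_0^{T-2}>
    a = -b @ np.linalg.pinv(H)          # least-squares characteristic coefficients
    # roots of x^ell + a_{ell-1} x^{ell-1} + ... + a_0
    return np.roots(np.concatenate(([1.0], a.ravel()[::-1])))

# Synthetic check: recover known eigenvalues from 2-dimensional observables
mu_true = np.array([1.0, 0.8, -0.5])
T = 10
V = mu_true[:, None] ** np.arange(T)
M = np.array([[1.0, 2.0, 0.5], [0.5, -1.0, 1.0]])
X = M @ V
mu_est = np.sort(prony_eigenvalues(X, 3).real)
assert np.allclose(mu_est, np.sort(mu_true), atol=1e-6)
```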

5.2. Computing the Koopman Modes

Once the eigenvalues $\mu_1, \dots, \mu_\ell$ are computed, the linear equation (14) determines $M$. Since
$$X\, V_T\langle \mu_1, \dots, \mu_\ell\rangle^{+} \in \operatorname*{arg\,min}_{M} \bigl\| X - M\, V_T\langle \mu_1, \dots, \mu_\ell\rangle \bigr\|_F$$
holds, if a solution exists, $X\, V_T\langle \mu_1, \dots, \mu_\ell\rangle^{+}$ provides one possible solution, which is not necessarily unique.
Equation (15) can be viewed as a factorization of linear mappings:
$$X : \mathbb{C}^T \to \mathbb{C}^m, \qquad V_T\langle \mu_1, \dots, \mu_\ell\rangle : \mathbb{C}^T \to \mathbb{C}^\ell, \qquad M : \mathbb{C}^\ell \to \mathbb{C}^m.$$
The condition for the existence of such an $M$ is
$$\ker X \supseteq \ker V_T\langle \mu_1, \dots, \mu_\ell\rangle.$$
Under this condition, the solution is unique if and only if $V_T\langle \mu_1, \dots, \mu_\ell\rangle$ is surjective, which holds if and only if $\ell \le T$ and the nodes $\mu_1, \dots, \mu_\ell$ are pairwise distinct.
Proposition 4.
For an observable matrix $X$ and a Koopman degree $\ell$ with $\ell \le T$, a DKMD of $X$ with Koopman eigenvalues $\mu_1, \dots, \mu_\ell$, if it exists, is unique.
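The mode step is a single pseudoinverse once the eigenvalues are available. A sketch (assuming NumPy; the data are synthetic) confirming that $X\, V_T\langle \mu_1, \dots, \mu_\ell\rangle^{+}$ recovers the modes exactly when $\ell \le T$ and the eigenvalues are distinct:

```python
import numpy as np

# Distinct eigenvalues and known modes (our synthetic test case)
mu = np.array([0.9, 0.6, -0.4])
T = 8
V = mu[:, None] ** np.arange(T)                 # ell x T Vandermonde matrix
M_true = np.array([[1.0, -2.0, 0.5],
                   [0.3, 1.0, 2.0]])
X = M_true @ V                                  # 2-dimensional observables

# Mode matrix via the pseudoinverse; unique here since ell <= T and
# the nodes are pairwise distinct (Proposition 4)
M_est = X @ np.linalg.pinv(V)
assert np.allclose(M_est, M_true, atol=1e-6)
assert np.allclose(M_est @ V, X)
```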

6. The Contributions of This Article

Given an observable matrix, the vector Prony method introduced in the previous section computes a DKMD for a specified Koopman degree $\ell$. However, a DKMD does not always exist for the given degree, and even when it does exist, it may not be unique.
Definition 7
(Feasible Degree). A Koopman degree $\ell$ is said to be feasible for the observables if a DKMD of degree $\ell$ exists for the given observable matrix.
Although the Koopman degree must satisfy $\ell < T$ (Proposition 2), a DKMD may exist for multiple values of $\ell$. Thus, selecting the optimal feasible degree is essential to obtain an optimal DKMD.
For this selection, we consider two independent principles:
Minimality:
The optimal degree should be the smallest among all feasible degrees. This principle is analogous to Occam’s razor, favoring the simplest representation that adequately explains the observations.
Uniqueness:
The optimal degree should correspond to a unique DKMD. If multiple DKMDs exist for a given $\ell$, as described in a later section, the set of such decompositions forms a continuum, where different DKMDs yield distinct eigenvalues and modes. Consequently, any particular DKMD extracted from this continuum, such as one obtained by the vector Prony method, may fail to reproduce the true dynamics precisely.
In this article, we demonstrate that if a degree satisfies the uniqueness criterion, it also satisfies the minimality criterion, but the converse does not necessarily hold. Thus, between these two principles, we adopt the uniqueness criterion as the standard for selecting the optimal Koopman degree.
Definition 8
(Uniquely Feasible Degree). A Koopman degree is said to be uniquely feasible if a DKMD of that degree exists and is unique for the given observables.
In summary, the objective of this paper is to establish a theoretical framework for uniquely feasible degrees. Specifically, we demonstrate and present the following:
  • A uniquely feasible degree for a given observable matrix, if it exists, is the smallest among all feasible degrees.
  • Several structural properties of uniquely feasible degrees lead to computationally efficient algorithms for determining them.
  • These algorithms are further extended to handle noisy observables via least-squares formulations.

7. Finding Uniquely Feasible Degrees

7.1. Key Indices: Hankel Dimension and Codimension

We first summarize the results of the previous sections in a theorem that provides a necessary and sufficient condition for an observable matrix to admit a DKMD.
For notational convenience, we first introduce the following definition.
Definition 9
(Square-free coefficient vector). A vector $(a_0 \;\cdots\; a_{n-1} \;\; 1) \in \mathbb{C}^{n+1}$ is called a square-free coefficient vector if the algebraic equation
$$X^n + a_{n-1} X^{n-1} + \cdots + a_0 = 0$$
has no repeated roots.
Theorem 1
(Feasibility condition). Let X be an observable matrix and ℓ < T. The following statements are equivalent:
1.
X admits a DKMD with Koopman degree ℓ; equivalently, ℓ is feasible for X.
2.
There exists a square-free coefficient vector (a_0, …, a_{ℓ-1}, 1) satisfying
(a_0 a_1 ⋯ a_{ℓ-1} 1) H_ℓ X = 0.    (17)
Proof. 
(1 ⇒ 2) Suppose X admits a DKMD with Koopman eigenvalues μ_1, …, μ_ℓ. The coefficient vector a = (a_0, …, a_{ℓ-1}, 1) of the characteristic polynomial of this DKMD is square-free by the definition of DKMD (Definition 3), and the assertion of Proposition 3 can be restated as
Σ_{j=0}^{ℓ} a_j x_{i+j} = 0 for i = 0, …, T - ℓ - 1 (with a_ℓ = 1),
which further implies Equation (17).
(2 ⇒ 1) Let μ_1, …, μ_ℓ denote the distinct roots of the square-free polynomial x^ℓ + a_{ℓ-1} x^{ℓ-1} + ⋯ + a_0 = 0. We consider X, V_T(μ_1, …, μ_ℓ), and M as representing linear mappings F : C^T → C^m, G : C^T → C^ℓ, and H : C^ℓ → C^m, respectively. Then, the rank–nullity theorem implies
ker G = Σ_{i=0}^{T-ℓ-1} C (0_i  a^T  0_{T-ℓ-1-i})^T,
and ker G ⊆ ker F follows from
X (0_i  a^T  0_{T-ℓ-1-i})^T = Σ_{j=0}^{ℓ} a_j x_{i+j} = 0
for all i = 0, …, T - ℓ - 1, which implies the existence of H.    □
The square-free coefficient vector in Equation (17) lies in the orthogonal complement V(H_ℓ X), which is the subspace of C^{ℓ+1} consisting of all vectors orthogonal to every column of H_ℓ X. Theorem 1 thus shows that the existence of a DKMD depends on the structure of V(H_ℓ X) and its orthogonal complement. This motivates the following key indices.
Definition 10
(Hankel dimension and codimension). Given an observable matrix X, the Hankel dimension and codimension of order k (k < T) are defined as
dim H_k X = rank H_k X,    codim H_k X = k + 1 - dim H_k X.
Using these indices, we can restate Theorem 1 as follows:
Corollary 1.
For an observable matrix X and a Koopman degree ℓ < T, a necessary condition for the existence of an ℓ-degree DKMD of X is
codim H_ℓ X ≥ 1.
However, the condition codim H_ℓ X ≥ 1 is not a sufficient condition, for the following two reasons:
  • Even if we can find (a_0, a_1, …, a_ℓ) with
    (a_0 a_1 ⋯ a_ℓ) H_ℓ X = 0,
    it may happen that a_ℓ = 0, meaning that the polynomial corresponding to (a_0, a_1, …, a_ℓ) is of degree lower than ℓ, which cannot induce a DKMD of degree ℓ.
  • Even if
    (a_0 a_1 ⋯ a_{ℓ-1} 1) H_ℓ X = 0,
    the resulting polynomial
    x^ℓ + a_{ℓ-1} x^{ℓ-1} + ⋯ + a_0 = 0
    may have repeated roots, rendering it unsuitable as a characteristic polynomial.
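These rank conditions are simple to check numerically. The following is a minimal sketch, assuming X is stored as a NumPy array with one observable per row; the function names hankel_matrix and hankel_codim are ours, not from the paper:

```python
import numpy as np

def hankel_matrix(X, k):
    """H_k X: k + 1 rows; row i stacks the columns x_i, ..., x_{i+T-k-1}."""
    m, T = X.shape
    return np.array([X[:, i:i + T - k].T.ravel() for i in range(k + 1)])

def hankel_codim(X, k, tol=1e-9):
    """codim H_k X = k + 1 - rank H_k X (Definition 10)."""
    return int(k + 1 - np.linalg.matrix_rank(hankel_matrix(X, k), tol=tol))

# A signal with two exponential modes, x_t = 2^t + (-1)^t, T = 8:
# the codimension first becomes positive at k = 2, the number of modes.
X = np.array([[2.0 ** t + (-1.0) ** t for t in range(8)]])
print([hankel_codim(X, k) for k in range(1, 5)])  # -> [0, 1, 2, 3]
```

The tolerance argument matters only for noisy data; for exact observables the rank is unambiguous.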
Our purpose here is to identify a Koopman degree that admits a unique DKMD. We can represent the condition for a uniquely feasible degree using the Hankel codimension, assuming that ℓ is feasible, that is, that an ℓ-degree DKMD exists.
Corollary 2.
For a feasible degree ℓ of an observable matrix X with ℓ < T, the following hold:
1.
The DKMD is unique if and only if codim H_ℓ X = 1.
2.
The set of ℓ-degree DKMDs forms a continuum if and only if codim H_ℓ X > 1.
Proof. 
Since ℓ is feasible, V(H_ℓ X) is nontrivial, and there exists a coefficient vector
(a_0 ⋯ a_{ℓ-1} 1)^T ∈ V(H_ℓ X)
which determines the characteristic polynomial of a DKMD of X.
If codim H_ℓ X = 1, that is, if dim V(H_ℓ X) = 1, then only (a_0 ⋯ a_{ℓ-1} 1) can determine a characteristic polynomial, implying that the DKMD is unique by Proposition 4.
On the other hand, if codim H_ℓ X > 1, there exists a coefficient vector
(b_0 ⋯ b_{ℓ-1} 1) ∈ V(H_ℓ X) ∖ C (a_0 ⋯ a_{ℓ-1} 1).
When we define
f_α(x) = (1 - α) f(x) + α (x^ℓ + b_{ℓ-1} x^{ℓ-1} + ⋯ + b_0),
since f_0(x) is square-free, there exists ε > 0 such that f_α(x) remains square-free for any α ∈ (-ε, ε). In particular, each distinct value of α yields a distinct DKMD.    □

7.2. The Koopman Dimension and Codimension for m = 1

In this section, we investigate the Hankel dimension and codimension in the restricted case where m = 1, i.e., when X consists of a single row with T components. These results will be extended to the general case m ≥ 1 in Section 7.4.6.
Lemma 1.
Let X be an observable matrix with one row and T columns. Furthermore, suppose that X admits a DKMD of the form
X = (m_1 ⋯ m_ℓ) V_T(μ_1, …, μ_ℓ)
such that 2ℓ ≤ T + 1 and each m_i is nonzero. Then, the leftmost ℓ × ℓ block submatrix of H_{ℓ-1} X has nonzero determinant.
Proof. 
The leftmost ℓ × ℓ block submatrix is expressed as:
(x_{i+j})_{i,j=0}^{ℓ-1} = V(μ_1, …, μ_ℓ)^T diag(m_1, …, m_ℓ) V(μ_1, …, μ_ℓ),
where V(μ_1, …, μ_ℓ) denotes the square ℓ × ℓ Vandermonde matrix. The determinant of this matrix is nonzero, because μ_1, …, μ_ℓ are mutually distinct, implying det V(μ_1, …, μ_ℓ) ≠ 0, and m_1, …, m_ℓ are nonzero, implying det diag(m_1, …, m_ℓ) ≠ 0.    □
Theorem 2.
Let X, m_1, …, m_ℓ, and μ_1, …, μ_ℓ be as in Lemma 1. Assume further that 2ℓ ≤ T + 1. Then, the Hankel dimension and codimension are given by:
dim H_k X = k + 1 for k ∈ {0, …, ℓ-1}; ℓ for k ∈ {ℓ, …, T-ℓ}; T - k for k ∈ {T-ℓ+1, …, T-1};
codim H_k X = 0 for k ∈ {0, …, ℓ-1}; k + 1 - ℓ for k ∈ {ℓ, …, T-ℓ}; 2k - T + 1 for k ∈ {T-ℓ+1, …, T-1}.
Proof. 
Note that H_k X consists of k + 1 rows and T - k columns.
If 0 ≤ k ≤ ℓ - 1, equivalently if k + 1 ≤ ℓ and T - k ≥ ℓ, the leftmost (k+1) × ℓ submatrix of H_k X coincides with the top (k+1) × ℓ submatrix of H_{ℓ-1} X, implying its rank is k + 1 by Lemma 1. Thus, all k + 1 rows of H_k X are linearly independent, and dim H_k X = k + 1 follows.
If T - ℓ + 1 ≤ k ≤ T - 1, equivalently if k + 1 > ℓ and T - k < ℓ, the entire H_k X has T - k columns. Its top ℓ × (T-k) submatrix coincides with the leftmost ℓ × (T-k) submatrix of H_{ℓ-1} X, implying its rank is T - k by Lemma 1. Thus, all T - k columns of H_k X are linearly independent, and dim H_k X = T - k follows.
If ℓ ≤ k ≤ T - ℓ, equivalently if k + 1 > ℓ and T - k ≥ ℓ, the top-left ℓ × ℓ submatrix of H_k X coincides with that of H_{ℓ-1} X, and hence, Lemma 1 implies that the leftmost ℓ columns of H_k X are linearly independent.
On the other hand, we let
∏_{i=1}^{ℓ} (x - μ_i) = x^ℓ - a_{ℓ-1} x^{ℓ-1} - ⋯ - a_0
denote the characteristic polynomial. Then each eigenvalue μ_i satisfies μ_i^ℓ = a_{ℓ-1} μ_i^{ℓ-1} + ⋯ + a_0. For the i-th element x_i of X with i ≥ ℓ,
x_i = Σ_{j=1}^{ℓ} m_j μ_j^i = Σ_{j=1}^{ℓ} m_j μ_j^{i-ℓ} · μ_j^ℓ = Σ_{j=1}^{ℓ} m_j μ_j^{i-ℓ} Σ_{n=0}^{ℓ-1} a_n μ_j^n = Σ_{n=0}^{ℓ-1} a_n Σ_{j=1}^{ℓ} m_j μ_j^{i-ℓ+n} = Σ_{n=0}^{ℓ-1} a_n x_{i-ℓ+n}.
This recurrence relation shows that for any i ≥ ℓ, the element x_i is determined by x_{i-ℓ}, …, x_{i-1}. Consequently, any column of H_k X is a linear combination of the leftmost ℓ columns, which have been proven to be linearly independent. Therefore, dim H_k X = ℓ.
The claims about the Hankel codimension follow directly from the definition codim H_k X = k + 1 - dim H_k X.    □
Corollary 3.
If 2ℓ ≤ T, then k = ℓ is the unique value satisfying codim H_k X = 1.
If 2ℓ = T + 1, then X admits no uniquely feasible degree.
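The piecewise profile of Theorem 2 can be checked numerically. In the sketch below (NumPy assumed; the series and variable names are our choices), a single-row series with ℓ = 3 distinct eigenvalues and T = 10 reproduces dim H_k X = min(k + 1, ℓ, T - k), which is exactly the three regimes of the theorem:

```python
import numpy as np

# A single-row series with l = 3 modes: x_t = 1 + 2 * 0.5^t + (-1)^t, T = 10.
T, l = 10, 3
mu = np.array([1.0, 0.5, -1.0])
modes = np.array([1.0, 2.0, 1.0])
x = (modes[:, None] * mu[:, None] ** np.arange(T)).sum(axis=0)

def hankel_dim(k):
    H = np.array([x[i:i + T - k] for i in range(k + 1)])  # H_k X for m = 1
    return int(np.linalg.matrix_rank(H))

dims = [hankel_dim(k) for k in range(T)]
print(dims)                                              # [1, 2, 3, 3, 3, 3, 3, 3, 2, 1]
print(dims == [min(k + 1, l, T - k) for k in range(T)])  # True
```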

7.3. Examples

In this section, we introduce three examples of an observable matrix X , each demonstrating different properties regarding the existence of a uniquely feasible degree:
  • No Koopman degree ℓ satisfies codim H_ℓ X = 1, meaning that no uniquely feasible degree exists (Example 2).
  • A Koopman degree ℓ with codim H_ℓ X = 1 exists, but the corresponding characteristic polynomial is not square-free. As a result, a uniquely feasible degree does not exist (Example 3).
  • A uniquely feasible degree exists, ensuring that a DKMD is uniquely determined for the degree (Example 4).
These examples illustrate the conditions under which a DKMD is uniquely determined and the role of Hankel codimension in establishing uniqueness.
Example 2.
Consider the observable matrix
X = (1  1  1  1  3  5  7).
The Hankel matrices for ℓ = 1, 2, 3 are computed as:
H_1 X = \begin{pmatrix} 1 & 1 & 1 & 1 & 3 & 5 \\ 1 & 1 & 1 & 3 & 5 & 7 \end{pmatrix}, \quad H_2 X = \begin{pmatrix} 1 & 1 & 1 & 1 & 3 \\ 1 & 1 & 1 & 3 & 5 \\ 1 & 1 & 3 & 5 & 7 \end{pmatrix}, \quad H_3 X = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 3 \\ 1 & 1 & 3 & 5 \\ 1 & 3 & 5 & 7 \end{pmatrix}.
These computations yield:
codim H_1 X = codim H_2 X = codim H_3 X = 0,
implying that X admits no DKMD for these degrees by Corollary 1.
On the other hand, for ℓ = 4, 5, 6, we have
H_4 X = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 3 \\ 1 & 3 & 5 \\ 3 & 5 & 7 \end{pmatrix}, \quad H_5 X = \begin{pmatrix} 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 3 \\ 3 & 5 \\ 5 & 7 \end{pmatrix}, \quad H_6 X = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 3 \\ 5 \\ 7 \end{pmatrix}.
It follows that
codim H_4 X = 2, codim H_5 X = 4, and codim H_6 X = 6,
implying that DKMDs form a continuum by Corollary 2.
For example, for ℓ = 4,
V(H_4 X) = C (-1  -1  0  -1  1)^T ⊕ C (-1  1  0  0  0)^T.
Thus, the possible candidates for characteristic polynomials take the form:
f_a(x) = x^4 - x^3 - x - 1 + a(x - 1),
whose discriminant is computed as:
D = -27a^4 + 54a^3 - 783a^2 - 900a - 500.
Since f_a(x) = 0 has no repeated roots if and only if D ≠ 0, and D vanishes for only finitely many a, it follows that 4-degree DKMDs form a continuum.
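The computations of this example are easy to replay numerically. The sketch below (NumPy assumed; names are ours) reproduces the codimension sequence and exhibits two distinct members of the continuum, the coefficient vectors for a = 0 and a = 1, both of which annihilate every length-5 window of the data:

```python
import numpy as np

x = np.array([1, 1, 1, 1, 3, 5, 7], float)
T = len(x)

def codim(k):
    H = np.array([x[i:i + T - k] for i in range(k + 1)])  # H_k X for m = 1
    return int(k + 1 - np.linalg.matrix_rank(H))

print([codim(k) for k in range(1, 7)])                    # -> [0, 0, 0, 2, 4, 6]

# Coefficient vectors (a_0, ..., a_3, 1) for a = 0 and a = 1: both satisfy
# the data exactly, yet they define polynomials with different roots.
for a in ([-1, -1, 0, -1, 1], [-2, 0, 0, -1, 1]):
    print([float(np.dot(a, x[i:i + 5])) for i in range(3)])  # -> [0.0, 0.0, 0.0]
```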
Example 3.
When we let X = (-1  1  3  5  7), the Hankel matrices for ℓ = 1, 2, 3 are computed as:
H_1 X = \begin{pmatrix} -1 & 1 & 3 & 5 \\ 1 & 3 & 5 & 7 \end{pmatrix}, \quad H_2 X = \begin{pmatrix} -1 & 1 & 3 \\ 1 & 3 & 5 \\ 3 & 5 & 7 \end{pmatrix}, \quad H_3 X = \begin{pmatrix} -1 & 1 \\ 1 & 3 \\ 3 & 5 \\ 5 & 7 \end{pmatrix},
implying
codim H_1 X = 0, codim H_2 X = 1, and codim H_3 X = 2.
In fact, we have V(H_1 X) = {0}, and furthermore,
V(H_2 X) = C (1  -2  1)^T and V(H_3 X) = C (1  -2  1  0)^T ⊕ C (0  1  -2  1)^T.
This implies that there exists no DKMD for any of these degrees because:
1.
For ℓ = 1, no characteristic polynomial exists.
2.
For ℓ = 2, if a DKMD existed, the corresponding characteristic polynomial would be
x^2 - 2x + 1 = (x - 1)^2,
which is not square-free.
3.
For ℓ = 3, if a DKMD existed, the corresponding characteristic polynomial would be of the form
x^3 - 2x^2 + x + a(x^2 - 2x + 1) = (x + a)(x - 1)^2,
which is also not square-free.
On the other hand, for ℓ = 4, we have H_4 X = X^T and
V(H_4 X) = C (7  0  0  0  1)^T ⊕ C (1  -2  1  0  0)^T ⊕ C (0  1  -2  1  0)^T ⊕ C (0  0  1  -2  1)^T,
implying that there exists a DKMD whose characteristic polynomial is
x^4 + 7 = ∏_{k=0}^{3} (x - 7^{1/4} e^{iπ(2k+1)/4}).
Furthermore, all possible characteristic polynomials of DKMDs for these observables are of the form
f_{a,b,c}(x) = a(x^4 + 7) + (1 - a)(x^4 - 2x^3 + x^2) + b(x^3 - 2x^2 + x) + c(x^2 - 2x + 1).
Since f_{a,b,c}(x) is square-free for (a, b, c) = (1, 0, 0), and since the set of (a, b, c) ∈ C^3 for which f_{a,b,c}(x) is square-free is an open subset of C^3, we conclude that the set of possible four-degree DKMDs for X forms a continuum.
Example 4.
We expand the X of Example 2 by adding one more dimension to the observables. Let the observable matrix X be given by
X = \begin{pmatrix} 1 & 1 & 2 & 3 & 5 & 8 & 13 \\ 1 & 1 & 1 & 1 & 3 & 5 & 7 \end{pmatrix}.
For ℓ = 1, 2, 3, the corresponding Hankel matrices are computed as:
H_1 X = \begin{pmatrix} 1 & 1 & 1 & 1 & 2 & 1 & 3 & 1 & 5 & 3 & 8 & 5 \\ 1 & 1 & 2 & 1 & 3 & 1 & 5 & 3 & 8 & 5 & 13 & 7 \end{pmatrix}, \quad H_2 X = \begin{pmatrix} 1 & 1 & 1 & 1 & 2 & 1 & 3 & 1 & 5 & 3 \\ 1 & 1 & 2 & 1 & 3 & 1 & 5 & 3 & 8 & 5 \\ 2 & 1 & 3 & 1 & 5 & 3 & 8 & 5 & 13 & 7 \end{pmatrix}, \quad H_3 X = \begin{pmatrix} 1 & 1 & 1 & 1 & 2 & 1 & 3 & 1 \\ 1 & 1 & 2 & 1 & 3 & 1 & 5 & 3 \\ 2 & 1 & 3 & 1 & 5 & 3 & 8 & 5 \\ 3 & 1 & 5 & 3 & 8 & 5 & 13 & 7 \end{pmatrix},
implying
codim H_1 X = codim H_2 X = codim H_3 X = 0.
On the other hand,
codim H_4 X = 1, codim H_5 X = 2, and codim H_6 X = 5
follows from
H_4 X = \begin{pmatrix} 1 & 1 & 1 & 1 & 2 & 1 \\ 1 & 1 & 2 & 1 & 3 & 1 \\ 2 & 1 & 3 & 1 & 5 & 3 \\ 3 & 1 & 5 & 3 & 8 & 5 \\ 5 & 3 & 8 & 5 & 13 & 7 \end{pmatrix}, \quad H_5 X = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 2 & 1 \\ 2 & 1 & 3 & 1 \\ 3 & 1 & 5 & 3 \\ 5 & 3 & 8 & 5 \\ 8 & 5 & 13 & 7 \end{pmatrix}, \quad H_6 X = \begin{pmatrix} 1 & 1 \\ 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 5 & 3 \\ 8 & 5 \\ 13 & 7 \end{pmatrix}.
Therefore, if X admits a uniquely feasible degree, it must be four. In fact,
V(H_4 X) = C (-1  -1  0  -1  1)^T
holds, and the corresponding characteristic polynomial is
f(x) = x^4 - x^3 - x - 1,
which is square-free, implying that ℓ = 4 is uniquely feasible.
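Numerically, the uniquely feasible degree of this example and its characteristic polynomial can be recovered from the left null space of H_4 X. The sketch below (NumPy assumed; names are ours) does so via an SVD:

```python
import numpy as np

X = np.array([[1, 1, 2, 3, 5, 8, 13],
              [1, 1, 1, 1, 3, 5, 7]], float)
m, T = X.shape

def hankel(k):
    return np.array([X[:, i:i + T - k].T.ravel() for i in range(k + 1)])

codims = [int(k + 1 - np.linalg.matrix_rank(hankel(k))) for k in range(1, 7)]
print(codims)                        # -> [0, 0, 0, 1, 2, 5]

# codim H_4 X = 1: the left null space of H_4 X is one-dimensional and yields
# the coefficient vector (a_0, ..., a_3, 1) of x^4 - x^3 - x - 1.
_, _, Vt = np.linalg.svd(hankel(4).T)
a = Vt[-1].conj()
a = a / a[-1]                        # normalize the leading coefficient to 1
print(np.round(a.real, 6))           # -> [-1. -1.  0. -1.  1.]
```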

7.4. Important Properties of The Hankel Dimension and Codimension

In this subsection, we introduce important properties of the Hankel dimension and Hankel codimension, which play a crucial role in designing algorithms to determine uniquely feasible degrees, including the following. For the convenience of description, we denote the minimum feasible degree as:
L = min { ℓ ∈ [0, T) : codim H_ℓ X > 0 }.

Best Possible Upper Bound of a Uniquely Feasible Degree.

Let r = rank X. If ℓ > ⌊rT/(r+1)⌋, then codim H_ℓ X > 1. Hence, no uniquely feasible degree can exceed ⌊rT/(r+1)⌋, which is also the sharpest possible upper bound.

Monotonic Increase of the Hankel Codimension.

The Hankel codimension codim H_ℓ X is strictly increasing with respect to ℓ over the interval [L, T).

Equivalence Between Unique and Minimal Feasibility.

The monotonicity of the codimension implies that, if ℓ is uniquely feasible, then ℓ = L. In particular, if codim H_L X > 1, no uniquely feasible degree exists.

Saturation of the Hankel Dimension.

If 2L ≤ T and ℓ ∈ [L, T - L], then
dim H_ℓ X = L,    codim H_ℓ X = ℓ + 1 - L.
In particular, codim H_L X = 1 holds, which implies L is the only candidate for a uniquely feasible degree.

7.4.1. Invariance under Basis Transformations

For X ∈ C^{m×T}, Y ∈ C^{n×T}, and A ∈ C^{n×m}, we assume
Y = A X.
Their corresponding Hankel matrices are computed as
H_k Y = H_k X diag(A^T, …, A^T),
where diag(A^T, …, A^T), with T - k diagonal blocks, is defined as the m(T-k) × n(T-k) matrix
diag(A^T, …, A^T) = \begin{pmatrix} A^T & & 0 \\ & \ddots & \\ 0 & & A^T \end{pmatrix}.
This implies, in particular,
dim H_k Y ≤ dim H_k X and codim H_k Y ≥ codim H_k X.
Furthermore, if a square-free polynomial f(x) = x^k + a_{k-1} x^{k-1} + ⋯ + a_0 is a characteristic polynomial of a DKMD for X, it is also a characteristic polynomial of a DKMD for Y. In fact, we have
(a_0 ⋯ a_{k-1} 1) H_k Y = (a_0 ⋯ a_{k-1} 1) H_k X diag(A^T, …, A^T) = 0.
Furthermore, if in addition there exists B ∈ C^{m×n} such that X = B Y, then
dim H_k Y = dim H_k X and codim H_k Y = codim H_k X
hold, and the correspondence of characteristic polynomials is bijective.
The condition for Y = A X is that the row space of Y is contained in the row space of X , and symmetrically, the condition for X = B Y is that the row space of Y contains the row space of X . Hence, both Y = A X and X = B Y hold if and only if the row space of Y and the row space of X are identical. In other words, under the hypothesis Y = A X , the necessary and sufficient condition for the existence of B C m × n such that X = B Y is rank Y = rank X .
If Y = A X and rank Y = rank X hold, the minimum min_B ‖X - B Y‖_F is zero, and hence, ‖X - X Y^+ Y‖_F = 0 holds, implying
X = X Y^+ Y.
Furthermore, this bijective correspondence between characteristic polynomials yields a bijective correspondence between DKMDs. In fact, a DKMD X = M V_T(μ_1, …, μ_k) of X yields
Y = (A M) V_T(μ_1, …, μ_k),
which is a DKMD of Y. In reverse, a DKMD Y = M′ V_T(μ_1, …, μ_k) of Y corresponds to a DKMD of X:
X = (X Y^+ M′) V_T(μ_1, …, μ_k).
The latter correspondence is the inverse of the former by Proposition 4.
Thus, we have:
Theorem 3.
If rank Y = rank X holds for Y = A X , then the following statements hold:
1.
dim H k X = dim H k Y for each k { 0 , 1 , , T 1 } .
2.
codim H k X = codim H k Y for each k { 0 , 1 , , T 1 } .
3.
The set of characteristic polynomials for X is identical with that for Y .
4.
A DKMD X = M V_T(μ_1, …, μ_k) of X can be converted to a DKMD Y = (A M) V_T(μ_1, …, μ_k) of Y, while a DKMD Y = M′ V_T(μ_1, …, μ_k) of Y can be converted to a DKMD X = (X Y^+ M′) V_T(μ_1, …, μ_k) of X. These conversions yield a bijective correspondence between the set of DKMDs for X and that for Y.
As an application of Theorem 3, the following two cases are particularly important:
  • When A is a nonsingular m × m matrix, we have rank Y = rank X automatically, and thus Theorem 3 provides the invariance of the Hankel dimension, Hankel codimension, characteristic polynomials, and DKMDs under basis transformations in C m .
  • When A C r × m with r = rank X is selected to satisfy rank ( A X ) = rank X , the matrix Y = A X has fewer rows than X , and thus Y requires less computation to obtain DKMDs than X .
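The first case is easy to confirm numerically. In the sketch below (NumPy assumed; the specific mode matrix M, eigenvalues, and transformation A are our choices), a nonsingular basis change leaves the entire codimension profile unchanged, as Theorem 3 asserts:

```python
import numpy as np

def codim(X, k):
    m, T = X.shape
    H = np.array([X[:, i:i + T - k].T.ravel() for i in range(k + 1)])
    return int(k + 1 - np.linalg.matrix_rank(H))

# Three modes observed through four observables (T = 10, rank X = 3).
V = np.vander([0.9, -0.5, 1.1], 10, increasing=True)   # rows are mu_i^t
M = np.array([[1., 2., 1.], [0., 1., 1.], [1., 0., 2.], [2., 1., 0.]])
X = M @ V                                              # X = M V_T(mu_1, mu_2, mu_3)

A = np.array([[2., 1., 0., 0.], [0., 1., 1., 0.],
              [0., 0., 1., 1.], [1., 0., 0., 1.]])     # nonsingular (det = 1)
Y = A @ X
print([codim(X, k) for k in range(1, 6)])              # -> [0, 0, 1, 2, 3]
print([codim(Y, k) for k in range(1, 6)])              # -> [0, 0, 1, 2, 3]
```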


7.4.3. The Best Possible Upper Bound for A Uniquely Feasible Degree

Although we have seen that a uniquely feasible degree is always less than T, we can determine the best possible upper bound. First, we see:
Proposition 5.
For an observable matrix X with T columns, we have r = rank X < T if X admits a uniquely feasible degree.
Proof. 
If ℓ is a uniquely feasible degree, Proposition 2 implies ℓ < T. On the other hand, X = M V_T(μ_1, …, μ_ℓ) implies r ≤ ℓ. The assertion follows.    □
Based on this Proposition, we now establish the best possible upper bound for a Koopman degree that admits a unique DKMD.
Theorem 5.
If codim H_ℓ X = 1, then ℓ ≤ ⌊rT/(r+1)⌋. Furthermore, ⌊rT/(r+1)⌋ is the best possible upper bound for a uniquely feasible degree.
Proof. 
When we take Y = A X ∈ C^{r×T} with rank Y = r, Theorem 3 implies
dim H_ℓ X = dim H_ℓ Y = rank H_ℓ Y ≤ r(T - ℓ),
equivalently,
codim H_ℓ X ≥ ℓ + 1 - r(T - ℓ).
Since codim H_ℓ X = 1 by hypothesis, ℓ + 1 - r(T - ℓ) ≤ 1, which yields
ℓ ≤ rT/(r+1), and hence ℓ ≤ ⌊rT/(r+1)⌋.
To show that ⌊rT/(r+1)⌋ is the best possible upper bound for a uniquely feasible degree, we construct an r × T observable matrix X with rank X = r and dim H_ℓ X = ℓ.
First, we determine a sufficiently long series Y with mutually distinct eigenvalues (μ_1, …, μ_ℓ) and nonzero modes (m_1, …, m_ℓ):
Y = (x_0 ⋯ x_{T̂-1}) = (m_1 ⋯ m_ℓ) V_{T̂}(μ_1, …, μ_ℓ),
where T̂ is chosen sufficiently large so that we can construct an observable matrix
X = \begin{pmatrix} x_{a_1} & ⋯ & x_{a_1+T-1} \\ x_{a_2} & ⋯ & x_{a_2+T-1} \\ ⋮ & & ⋮ \\ x_{a_r} & ⋯ & x_{a_r+T-1} \end{pmatrix}.
Without loss of generality, we let 0 = a_1 < a_2 < ⋯ < a_r and further suppose a_{i+1} - a_i ≤ T - ℓ for i = 1, …, r - 1.
We now verify that the columns of H_ℓ X correspond exactly to the first a_r + T - ℓ columns of H_ℓ Y. The i-th row of X contributes T - ℓ consecutive columns to H_ℓ X, where the last such column is (x_{a_i+T-ℓ-1} ⋯ x_{a_i+T-1})^T. On the other hand, the first column contributed by the (i+1)-th row is (x_{a_{i+1}} ⋯ x_{a_{i+1}+ℓ})^T. Since a_{i+1} - a_i ≤ T - ℓ implies a_{i+1} ≤ a_i + T - ℓ, we have a_{i+1} ≤ (a_i + T - ℓ - 1) + 1, which means the columns contributed by the i-th and (i+1)-th rows are contiguous (or overlapping) in H_ℓ Y, with no gaps. Consequently, the set of columns of H_ℓ X coincides exactly with the first a_r + T - ℓ columns of H_ℓ Y.
Thus, rank H_ℓ X = ℓ holds if and only if
ℓ ≤ a_r + T - ℓ
by Theorem 2. Since a_r satisfies r - 1 ≤ a_r ≤ (r-1)(T-ℓ), the condition for such an a_r to exist is that
2ℓ - T ≤ (r-1)(T-ℓ), i.e., ℓ ≤ rT/(r+1).
Since ℓ must be an integer, the largest feasible value is ℓ = ⌊rT/(r+1)⌋, completing the proof.    □
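The sliding-window construction can be instantiated concretely (a sketch assuming NumPy; the eigenvalues and window starts are our choices). With r = 2 and T = 6 the bound is ⌊rT/(r+1)⌋ = 4, and windows of a four-mode series starting at a_1 = 0 and a_2 = 2 give codim H_4 X = 1, so the bound is attained:

```python
import numpy as np

mu = np.array([1.0, -1.0, 0.5, 2.0])                  # four distinct eigenvalues
y = np.vander(mu, 8, increasing=True).T @ np.ones(4)  # y_t = sum_n mu_n^t, t = 0..7
X = np.vstack([y[0:6], y[2:8]])                       # windows at a_1 = 0, a_2 = 2

def codim(X, k):
    m, T = X.shape
    H = np.array([X[:, i:i + T - k].T.ravel() for i in range(k + 1)])
    return int(k + 1 - np.linalg.matrix_rank(H))

print([codim(X, k) for k in range(1, 6)])             # -> [0, 0, 0, 1, 4]
```

The unit codimension appears exactly at ℓ = 4, the degree equal to the bound, so degree 4 is uniquely feasible here.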

7.4.4. Monotonic increase of the Hankel codimension

In this section, we establish that the Hankel codimension increases strictly monotonically beyond a certain threshold. This property plays a crucial role in identifying uniquely feasible degrees.
Theorem 6.
In the domain {L, L+1, …, T-1}, codim H_ℓ X is strictly monotonically increasing in ℓ.
Proof. 
Let L ≤ ℓ < k < T. For a vector v and s, t ≥ 0, let v^{s,t} denote (0_s v^T 0_t)^T, the vector v padded with s leading and t trailing zeros. First, for a nonzero vector w ∈ V(H_L X), define
w^{0, ℓ-L} = (w^T  0 ⋯ 0)^T ∈ C^{ℓ+1}.
This is a nonzero vector in V(H_ℓ X), so we have codim H_ℓ X > 0.
Next, let v_1, …, v_d be a basis of V(H_ℓ X), where d = codim H_ℓ X. By reordering if necessary, we may assume that
j_1 := max { j : (v_1)_j ≠ 0 } ≥ max { j : (v_i)_j ≠ 0 }
holds for all i ∈ {1, …, d}.
We claim that the following d + (k - ℓ) vectors are linearly independent:
v_1^{1, k-ℓ-1}, v_1^{2, k-ℓ-2}, …, v_1^{k-ℓ, 0}, v_1^{0, k-ℓ}, v_2^{0, k-ℓ}, …, v_d^{0, k-ℓ},
which all belong to V(H_k X) ⊆ C^{k+1}. To prove the claim, let us consider
Σ_{s=1}^{k-ℓ} c_s v_1^{s, k-ℓ-s} + Σ_{i=1}^{d} c′_i v_i^{0, k-ℓ} = 0.
The vector v_1^{s, k-ℓ-s} has the form (0_s  v_1^T  0_{k-ℓ-s})^T ∈ C^{k+1}, so its (j_1 + s)-th component equals (v_1)_{j_1} ≠ 0.
First, we prove c_s = 0 for s = k-ℓ, k-ℓ-1, …, 1 by backward induction on s.
For s = k - ℓ, consider the (j_1 + k - ℓ)-th component of the left-hand side. Among all vectors in the sum, only v_1^{k-ℓ, 0} has a nonzero value at this position, namely (v_1)_{j_1}. The vectors v_i^{0, k-ℓ} for i = 1, …, d have the form (v_i^T  0_{k-ℓ})^T, and since k - ℓ ≥ 1, we have j_1 + k - ℓ > j_1, which implies that their (j_1 + k - ℓ)-th component is zero. Similarly, for t < k - ℓ, the vector v_1^{t, k-ℓ-t} has nonzero components only up to position j_1 + t < j_1 + k - ℓ, so its (j_1 + k - ℓ)-th component is also zero. Therefore, c_{k-ℓ} (v_1)_{j_1} = 0, which gives c_{k-ℓ} = 0.
For s < k - ℓ, assuming c_{s+1} = ⋯ = c_{k-ℓ} = 0 by the induction hypothesis, we consider the (j_1 + s)-th component of the left-hand side. By the same argument as above, only v_1^{s, k-ℓ-s} has a nonzero value at position j_1 + s, namely (v_1)_{j_1}. Therefore, c_s (v_1)_{j_1} = 0, which gives c_s = 0.
This reduces the equation to Σ_{i=1}^{d} c′_i v_i^{0, k-ℓ} = 0, which implies c′_i = 0 for i = 1, …, d by the linear independence of v_1, …, v_d.
Therefore,
codim H_k X ≥ d + (k - ℓ) = codim H_ℓ X + (k - ℓ) > codim H_ℓ X,
which completes the proof.    □

7.4.5. Equivalence Between Unique and Minimal Feasibility

If a uniquely feasible degree exists, it must coincide with the minimum feasible degree L. This conclusion follows directly from Theorem 6 as an immediate corollary.
Corollary 4.
If X admits a uniquely feasible degree, it must be L.
Proof. 
While Corollary 2 states that a uniquely feasible degree ℓ must satisfy codim H_ℓ X = 1, Theorem 6 ensures that this condition can be met only when ℓ = L.    □

7.4.6. Saturation of dim H X

In this section, we present a theorem that extends Theorem 2. This result plays a crucial role in the development of algorithms for determining uniquely feasible degrees, particularly in the case where 2L ≤ T + 1 for the minimum feasible degree L.
Theorem 7.
If 2L ≤ T + 1 and an L-degree DKMD exists, then:
dim H_ℓ X = ℓ + 1 for 0 ≤ ℓ ≤ L - 1, and L for L ≤ ℓ ≤ T - L;
codim H_ℓ X = 0 for 0 ≤ ℓ ≤ L - 1, and ℓ - L + 1 for L ≤ ℓ ≤ T - L.
Proof. 
We express the L-degree DKMD given by hypothesis in the form
X = M V_T(μ_1, …, μ_L).
No column of M is a zero vector, since L is the minimum feasible degree. Indeed, if
X = M′ V_T(μ_1, …, μ_{L-1})
held for some M′ ∈ C^{m×(L-1)}, then the coefficient vector
(b_0 b_1 ⋯ b_{L-2} 1)
of the polynomial
(x - μ_1) ⋯ (x - μ_{L-1}) = x^{L-1} + b_{L-2} x^{L-2} + ⋯ + b_0
would belong to V(H_{L-1} X), contradicting the definition of L.
By Theorem 3, we can transform X by multiplying by a matrix A such that rank X = rank(A X) without changing the assertion. In particular, we may assume the following:
  • X ∈ C^{r×T} with rank X = r. To achieve this, take i_1, …, i_r such that the rows X_{i_1}, …, X_{i_r} are linearly independent. Then, define A ∈ C^{r×m} so that the j-th row of A has 1 as the i_j-th component and 0 for the other components.
  • The first row of M has no zero component: m_{1i} ≠ 0 for i = 1, 2, …, L. Since each column of M is nonzero and X ∈ C^{r×T}, we can find a nonsingular matrix A ∈ C^{r×r} such that the first row of A M has no zero component.
After this transformation, we apply Theorem 2 to the first row of X:
X_1 = (m_{11} ⋯ m_{1L}) V_T(μ_1, …, μ_L),
and obtain
dim H_ℓ X ≥ dim H_ℓ X_1 = ℓ + 1 for 0 ≤ ℓ ≤ L - 1, and L for L ≤ ℓ ≤ T - L,
since the columns of H_ℓ X_1 are among those of H_ℓ X. For 0 ≤ ℓ ≤ L - 1, in particular, the equality dim H_ℓ X = ℓ + 1 holds because dim H_ℓ X ≤ ℓ + 1 by definition.
Leveraging this lower bound, the claim dim H_ℓ X = L for L ≤ ℓ ≤ T - L can be proven by induction on ℓ.
For the base case ℓ = L, the definition of L gives codim H_L X ≥ 1, which implies dim H_L X = L + 1 - codim H_L X ≤ L. Combined with dim H_L X ≥ L shown above, we have dim H_L X = L.
For the inductive step, assume ℓ > L and dim H_{ℓ-1} X = L, i.e., codim H_{ℓ-1} X = ℓ - L. By Theorem 6, codim H_ℓ X > codim H_{ℓ-1} X = ℓ - L, which gives
dim H_ℓ X = ℓ + 1 - codim H_ℓ X < ℓ + 1 - (ℓ - L) = L + 1,
and consequently, dim H_ℓ X ≤ L. Combined with dim H_ℓ X ≥ L, we conclude dim H_ℓ X = L.
Finally, the expressions for codim H X follow directly from the definition of the Hankel codimension.    □
Although Theorem 7 requires 2L ≤ T + 1, if 2L = T + 1 (which occurs only when T is odd), then L = (T+1)/2 > (T-1)/2 = T - L holds, implying that the range L ≤ ℓ ≤ T - L in Theorem 7 is empty. In this boundary case, the theorem provides information only for ℓ = 0, 1, …, L - 1 = (T-1)/2, namely that dim H_ℓ X = ℓ + 1 and codim H_ℓ X = 0 in this range.
For 2L < T + 1, which for integer L is equivalent to L ≤ ⌊T/2⌋, we have:
Corollary 5.
If L is feasible and satisfies L ≤ ⌊T/2⌋, then L is a uniquely feasible degree.
Proof. 
Since L is feasible, an L-degree DKMD exists. The condition L ≤ ⌊T/2⌋ implies 2L ≤ T + 1 and L ≤ T - L, so Theorem 7 applies and gives codim H_L X = 1. By Corollary 2, this means that L is uniquely feasible.    □

8. Algorithms

Leveraging the five properties mentioned in Section 7.4, we develop efficient algorithms to search for a uniquely feasible degree L and to determine an L-degree characteristic polynomial, which provides eigenvalues of a DKMD. Once mutually distinct eigenvalues are obtained, the associated Koopman modes are calculated as described in Section 5.2. Our algorithms are categorized into:
  • One that applies to the case where L ≤ ⌊T/2⌋ and determines L by dim H_{⌊T/2⌋} X;
  • Another that performs a binary search to determine L in the case when L > ⌊T/2⌋.
To introduce the algorithms, we start by addressing a theoretical scenario where the observables exactly consist of a finite number of wave components. In this case, an exact DKMD is obtained. Subsequently, we consider a practical scenario where the observables comprise a finite number of dominant wave components, an infinite number of minor wave components, and noise. Here, our algorithms focus on extracting only the dominant wave components, effectively filtering out the minor components and the noise, resulting in an approximate DKMD.

8.1. A Theoretical Scenario

We first present Algorithm 1, which reduces the dimension of each observable so that decomposing the reduced observable matrix is equivalent to decomposing the original one. This reduction is practically useful for more efficient computation and clearer understanding of the underlying structure. We then present algorithms that decompose an observable matrix for two cases: L ≤ ⌊T/2⌋ (Algorithms 2 and 3) and ⌊T/2⌋ < L ≤ ⌊rT/(r+1)⌋ (Algorithm 4).

8.1.1. Dimension Reduction

Although each observable vector, i.e., a column vector of X , is of dimension m, the rank r of X can be smaller than m. In this case, Algorithm 1 determines a dimensionally reduced observable matrix Y C r × T and a conversion matrix A C r × m such that Y = A X and rank Y = rank X = r . By Theorem 3, the Hankel dimensions and codimensions, as well as the set of characteristic polynomials, are invariant under this conversion, and DKMDs for X and those for Y are mutually converted by matrix multiplication by A and X Y + .
Such A and Y can be constructed by selecting r linearly independent rows of X and defining A so that these rows become the rows of Y = A X . Since these r rows of X are linearly independent and form all the rows of Y , we have rank Y = r = rank X .
Algorithm 1 Dimension reduction of the observable matrix.
Require: 
X C m × T
Ensure: 
Matrices A C r × m and Y C r × T for r = rank X with Y = A X and rank Y = r
1:
Find i 1 , , i r { 1 , 2 , , m } such that the row vectors X i 1 , , X i r are linearly independent;
2:
Determine a matrix A C r × m such that the ( j , i j ) component is 1 and all other components are 0 for j = 1 , , r ;
3:
return A and Y = A X .
By applying Algorithm 1, we can reduce the problem of decomposing X C m × T to the problem of decomposing Y C r × T with r = rank X m . This reduction provides benefits in terms of computational efficiency and clearer understanding of the data structure when executing the algorithms presented below. However, these algorithms are formulated in general terms and do not require that such a dimension reduction has been performed.
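A direct transcription of Algorithm 1 might look as follows (a sketch assuming NumPy; the greedy scan is one simple way to find r linearly independent rows):

```python
import numpy as np

def reduce_dimension(X, tol=1e-9):
    """Algorithm 1: select r linearly independent rows of X and build
    A in C^{r x m} with Y = A X and rank Y = rank X = r."""
    m, T = X.shape
    rows, rank = [], 0
    for i in range(m):                      # greedy search for independent rows
        if np.linalg.matrix_rank(X[rows + [i], :], tol=tol) > rank:
            rows.append(i)
            rank += 1
    A = np.zeros((rank, m))
    A[np.arange(rank), rows] = 1.0          # the (j, i_j) components equal 1
    return A, A @ X

# Four observables spanning a rank-2 row space.
X = np.array([[1., 2., 4., 8.],
              [2., 4., 8., 16.],            # = 2 * row 0
              [1., 1., 1., 1.],
              [0., 1., 3., 7.]])            # = row 0 - row 2
A, Y = reduce_dimension(X)
print(A.shape, int(np.linalg.matrix_rank(Y)))  # -> (2, 4) 2
```

Here Y consists of rows 0 and 2 of X, and the subsequent degree-selection algorithms can be run on the smaller Y.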

8.1.2. Case L T 2

Algorithm 2 first investigates whether L ≤ ⌊T/2⌋ by leveraging the equivalence between L ≤ ⌊T/2⌋ and codim H_{⌊T/2⌋} X > 0. Indeed, if L ≤ ⌊T/2⌋, then codim H_{⌊T/2⌋} X ≥ codim H_L X > 0 holds by Theorem 6. Conversely, if codim H_{⌊T/2⌋} X > 0, then L ≤ ⌊T/2⌋ by the definition of L.
If this investigation reveals L ≤ ⌊T/2⌋, the algorithm identifies L as dim H_{⌊T/2⌋} X by Theorem 7, and then executes Algorithm 3 to determine whether L is uniquely feasible. If L > ⌊T/2⌋, the algorithm returns the value continue, indicating that Algorithm 4 should be used.
When invoked, Algorithm 3 verifies the following:
  • A vector (a_0, a_1, …, a_{L-1}, 1)^T exists in V(H_L X). This can be efficiently verified by performing a QR decomposition of H_L X.
  • If such a vector exists, verify that the polynomial
    x^L + a_{L-1} x^{L-1} + ⋯ + a_1 x + a_0 = 0
    has no repeated roots.
If both conditions are satisfied, this confirms that L is feasible, and we can then apply Theorem 7. As a result, we have codim H L X = 1 , meaning that L is uniquely feasible. Thus, the polynomial obtained in the verification is square-free and serves as the characteristic polynomial of the unique L-degree DKMD of X . In this case, the algorithm returns the obtained characteristic polynomial. Otherwise, it returns the value no_solution, indicating that no uniquely feasible degree exists.
Algorithm 2 Search for an L-degree characteristic polynomial when L ≤ ⌊T/2⌋.
Require: 
X C m × T
Ensure: 
The signal continue if L > ⌊T/2⌋; the characteristic polynomial if L ≤ ⌊T/2⌋ is uniquely feasible; no_solution if L ≤ ⌊T/2⌋ is not uniquely feasible.
1:
if  codim H_{⌊T/2⌋} X > 0  then
2:
    Let L = dim H_{⌊T/2⌋} X;
3:
    Execute Algorithm 3;
4:
    return the return value of Algorithm 3;
5:
else
6:
    return continue
7:
end if
Algorithm 3 Determine the characteristic polynomial.
Require: 
X C m × T and L
Ensure: 
An L-degree characteristic polynomial or no_solution
1:
if  (a_0 a_1 ⋯ a_{L-1} 1)^T ∈ V(H_L X)  then
2:
    Let f(x) = x^L + a_{L-1} x^{L-1} + ⋯ + a_1 x + a_0;
3:
    if  f ( x ) = 0 has no repeated roots then
4:
        return  f ( x )
5:
    end if
6:
end if
7:
return no_solution
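Algorithms 2 and 3 can be sketched together as follows (NumPy assumed; an SVD stands in for the QR decomposition mentioned above, and the tolerances are illustrative choices):

```python
import numpy as np

def hankel(X, k):
    m, T = X.shape
    return np.array([X[:, i:i + T - k].T.ravel() for i in range(k + 1)])

def char_poly(X, L, tol=1e-8):
    """Algorithm 3: the monic coefficient vector (a_0, ..., a_{L-1}, 1) of an
    L-degree characteristic polynomial, or None (no_solution)."""
    U, s, _ = np.linalg.svd(hankel(X, L))
    if s[-1] > tol * s[0]:
        return None                      # V(H_L X) is trivial
    a = np.conj(U[:, -1])                # a^T H_L X = 0
    if abs(a[-1]) < tol:
        return None                      # leading coefficient vanishes
    a = a / a[-1]
    roots = np.roots(a[::-1])
    if np.min(np.abs(np.subtract.outer(roots, roots) + np.eye(L))) < 1e-6:
        return None                      # repeated (or indistinguishable) roots
    return a

def algorithm2(X):
    """Algorithm 2: the case L <= floor(T/2)."""
    m, T = X.shape
    rank = int(np.linalg.matrix_rank(hankel(X, T // 2)))
    if T // 2 + 1 - rank == 0:
        return "continue"                # L > floor(T/2): use Algorithm 4
    return char_poly(X, rank)            # L = dim H_{floor(T/2)} X

coeffs = algorithm2(np.array([[1., 1., 2., 3., 5., 8., 13.]]))
print(np.round(coeffs.real, 6))          # -> [-1. -1.  1.], i.e. x^2 - x - 1
```

Applied to the Fibonacci row, the codimension test at ⌊T/2⌋ = 3 succeeds, L = 2 is identified, and the returned vector encodes x^2 - x - 1. For a series whose only low-degree annihilating polynomial has a repeated root, char_poly returns None.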

8.1.3. Case ⌊T/2⌋ < L ≤ ⌊rT/(r+1)⌋

Algorithm 4 details the procedure for the case $L > T/2$. Note that if L is uniquely feasible, then $L \le rT/(r+1)$ holds by Theorem 5, and this gives the best possible upper bound.
The algorithm first verifies whether $\operatorname{codim} H_{rT/(r+1)}(X) > 0$. If this condition does not hold, no uniquely feasible degree exists, and the algorithm returns the signal no_solution.
If $\operatorname{codim} H_{rT/(r+1)}(X) > 0$, then L lies in the range $T/2 < L \le rT/(r+1)$. The algorithm utilizes a binary search to find L, leveraging the fact that $\operatorname{codim} H_\ell(X)$ is a strictly increasing function of $\ell$ by Theorem 6.
Since the identified L does not necessarily satisfy $\operatorname{codim} H_L(X) = 1$, the algorithm must verify $\operatorname{codim} H_L(X) = 1$ before executing Algorithm 3 to confirm that L is uniquely feasible.
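The binary search just described can be sketched as follows. This is our sketch under the same assumed Hankel layout as before; the parameter r of Theorem 5 (defined earlier in the paper) is taken as an argument, and None stands in for no_solution.

```python
import numpy as np

def hankel_codim(X, k, tol=1e-8):
    # codim H_k(X) = (k + 1) - rank H_k(X), with the assumed Hankel layout:
    # row i concatenates x_i, ..., x_{i+T-k-1}.
    m, T = X.shape
    H = np.vstack([X[:, i:i + T - k].T.reshape(1, -1) for i in range(k + 1)])
    return (k + 1) - np.linalg.matrix_rank(H, tol=tol)

def search_degree(X, r):
    # Binary search over T/2 < k <= rT/(r+1), exploiting that the Hankel
    # codimension is increasing in k (Theorem 6 of the paper).
    m, T = X.shape
    lo, hi = T // 2, (r * T) // (r + 1)
    if hankel_codim(X, hi) == 0:
        return None              # no uniquely feasible degree in range
    while hi - lo > 1:
        k = (lo + hi) // 2
        c = hankel_codim(X, k)
        if c == 0:
            lo = k               # codim still zero: L lies above k
        elif c == 1:
            return k             # codim exactly one: candidate degree found
        else:
            hi = k               # overshot: L lies below k
    return hi if hankel_codim(X, hi) == 1 else None
```

On exact data with $L$ distinct eigenvalues and $L > T/2$, the search returns $L$.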

8.2. A Practical Scenario

In practice, even if the Koopman operator has only discrete eigenvalues, the number of eigenvalues can be infinite, and in addition, observables can contain error signals. In such situations, the purpose of DKMD is to find a finite number of major wave components that most significantly affect the observables. From the viewpoint of executing our algorithms, the presence of minor components and errors makes direct computation of Hankel dimensions via QR decomposition impractical. In fact, $\operatorname{rank} H_\ell(X) = \ell + 1$ may always hold, which makes it impossible to identify the Hankel dimensions. In this section, we present a method to estimate the Hankel dimensions of the major components via singular value decomposition (SVD) rather than QR decomposition.
We assume that the observables are represented as
$$x_t = \sum_{n=1}^{N} \hat{m}_n \hat{\lambda}_n^t + \sum_{n=1}^{N'} \tilde{m}_n \tilde{\lambda}_n^t + \varepsilon_t,$$
Algorithm 4 Search for an L-degree characteristic polynomial when $L \in \left( T/2,\ rT/(r+1) \right]$.
Require: $X \in \mathbb{C}^{m \times T}$ with $\operatorname{codim} H_{T/2}(X) = 0$
Ensure: Either the characteristic polynomial of the DKMD of $X$ for the uniquely feasible degree $L > T/2$, if present, or no_solution, otherwise.
1: if $\operatorname{codim} H_{rT/(r+1)}(X) = 0$ then
2:     return no_solution
3: end if
4: Let $l = T/2$;
5: Let $h = rT/(r+1)$;
6: while $h - l > 1$ do
7:     Let $k = \lfloor (l + h)/2 \rfloor$;
8:     if $\operatorname{codim} H_k(X) = 0$ then
9:         Let $l = k$;
10:     else if $\operatorname{codim} H_k(X) = 1$ then
11:         Let $L = k$;
12:         Execute Algorithm 3;
13:         return the return value of Algorithm 3;
14:     else
15:         Let $h = k$;
16:     end if
17: end while
18: if $\operatorname{codim} H_h(X) = 1$ then
19:     Let $L = h$;
20:     Execute Algorithm 3;
21:     return the return value of Algorithm 3;
22: else
23:     return no_solution;
24: end if
where the $\hat{m}_n \hat{\lambda}_n^t$ and the $\tilde{m}_n \tilde{\lambda}_n^t$ represent the major and minor wave components, respectively, and $\varepsilon_t$ is noise. We define:
$$\hat{x}_t = \sum_{n=1}^{N} \hat{m}_n \hat{\lambda}_n^t, \quad \hat{X} = [\hat{x}_0 \cdots \hat{x}_{T-1}]; \qquad \tilde{x}_t = \sum_{n=1}^{N'} \tilde{m}_n \tilde{\lambda}_n^t + \varepsilon_t, \quad \tilde{X} = [\tilde{x}_0 \cdots \tilde{x}_{T-1}],$$
and our basic assumption is that $\tilde{x}_t$ is a small perturbation. Our aim is to estimate $\operatorname{rank} H(\hat{X})$ from $H(X)$, taking advantage of the fact that $\operatorname{rank} H(\hat{X})$ equals the number of positive singular values of $H(\hat{X})$.
We consider $A = \hat{A} + \tilde{A} \in \mathbb{C}^{m \times n}$ with $m \le n$. Let $\sigma_1 \ge \cdots \ge \sigma_m \ge 0$, $\hat{\sigma}_1 \ge \cdots \ge \hat{\sigma}_m \ge 0$, and $\tilde{\sigma}_1 \ge \cdots \ge \tilde{\sigma}_m \ge 0$ be the singular values of $A$, $\hat{A}$, and $\tilde{A}$, respectively. Then,
$$|\sigma_i - \hat{\sigma}_i| \le \tilde{\sigma}_1$$
holds for all $i \in \{1, \ldots, m\}$. This can be proven as follows. By the Courant–Fischer min-max theorem [17], the $i$-th singular value of $A$ satisfies
$$\sigma_i = \min_{\substack{V \subseteq \mathbb{C}^n \\ \dim V = n - i + 1}} \ \max_{\substack{v \in V \\ \|v\| = 1}} \|A v\|.$$
Since $A = \hat{A} + \tilde{A}$, we have $\sigma_i \le \hat{\sigma}_i + \tilde{\sigma}_1$ as follows:
$$\sigma_i = \min_{\substack{V \subseteq \mathbb{C}^n \\ \dim V = n - i + 1}} \max_{\substack{v \in V \\ \|v\| = 1}} \|A v\| \le \min_{\substack{V \subseteq \mathbb{C}^n \\ \dim V = n - i + 1}} \max_{\substack{v \in V \\ \|v\| = 1}} \left( \|\hat{A} v\| + \|\tilde{A} v\| \right) \le \min_{\substack{V \subseteq \mathbb{C}^n \\ \dim V = n - i + 1}} \max_{\substack{v \in V \\ \|v\| = 1}} \|\hat{A} v\| + \tilde{\sigma}_1 = \hat{\sigma}_i + \tilde{\sigma}_1.$$
By the same reasoning applied to $\hat{A} = A - \tilde{A}$, we also obtain $\hat{\sigma}_i \le \sigma_i + \tilde{\sigma}_1$, and the two inequalities together give the bound above.
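This two-sided bound (Weyl's perturbation inequality for singular values) can be checked numerically on a random example; the matrix sizes and noise scale below are arbitrary choices of ours.

```python
import numpy as np

# Numerical check of |sigma_i - sigma_hat_i| <= sigma_tilde_1 on a random example.
rng = np.random.default_rng(0)
A_hat = rng.normal(size=(5, 8))            # the "clean" matrix A-hat
A_tilde = 1e-3 * rng.normal(size=(5, 8))   # a small perturbation A-tilde
A = A_hat + A_tilde

s = np.linalg.svd(A, compute_uv=False)          # sigma_1 >= ... >= sigma_5
s_hat = np.linalg.svd(A_hat, compute_uv=False)
s_tilde = np.linalg.svd(A_tilde, compute_uv=False)
bound_holds = np.all(np.abs(s - s_hat) <= s_tilde[0] + 1e-12)
```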
Furthermore, if $\operatorname{rank} \hat{A} = r < m$, that is,
$$\hat{\sigma}_1 \ge \cdots \ge \hat{\sigma}_r > 0 = \hat{\sigma}_{r+1} = \cdots = \hat{\sigma}_m,$$
then we have
$$\sigma_i \begin{cases} \ge \hat{\sigma}_i - \tilde{\sigma}_1 \ge \hat{\sigma}_r - \tilde{\sigma}_1, & \text{if } i \le r, \\ \le \tilde{\sigma}_1, & \text{if } i > r. \end{cases}$$
Therefore, if $\hat{\sigma}_r$ is sufficiently larger than $\tilde{\sigma}_1$, there exists a large gap between the $\sigma_i$ with $i \le r$ and those with $i > r$, and hence we can estimate $r$ from $\sigma_1, \ldots, \sigma_m$. Thus, if we can assume that the smallest positive singular value of $H(\hat{X})$ is sufficiently larger than the largest singular value of $H(\tilde{X})$, we can apply this method to estimate $\dim H(\hat{X})$.
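The gap-based rank estimate can be sketched as follows. The gap criterion (largest multiplicative gap in the spectrum) and the toy matrices are our choices, not the paper's.

```python
import numpy as np

def estimate_rank_by_gap(A):
    # Estimate rank(A_hat) from a noisy A = A_hat + A_tilde by locating the
    # largest multiplicative gap in the singular value spectrum of A.
    s = np.linalg.svd(A, compute_uv=False)
    ratios = s[:-1] / np.maximum(s[1:], np.finfo(float).tiny)
    return int(np.argmax(ratios)) + 1

# Toy check: a rank-6 matrix plus small noise.
rng = np.random.default_rng(0)
A_hat = rng.normal(size=(20, 6)) @ rng.normal(size=(6, 30))
A = A_hat + 1e-6 * rng.normal(size=(20, 30))
```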

8.3. Time Complexities

The computationally intensive operations in these algorithms are the QR decomposition (QRD), the singular value decomposition (SVD), and equation solving (EqS). Table 2 shows that the algorithms perform these operations only a small number of times, and they are consequently highly efficient.

9. Simulations

Through simulations, we investigate the accuracy of Algorithms 2 to 4 in estimating Koopman eigenvalues and making predictions.

9.1. Synthetic Datasets Used in the Simulations

The synthetic datasets of observables are constructed with m = 2 and T = 50 , which makes X a 2 × 50 matrix, and are randomly generated by performing the following steps:
  • Sample as many distinct Koopman eigenvalues, each classified as either major or minor, as specified in Table 3. Let $\lambda_1, \ldots, \lambda_N$ denote these eigenvalues. For each $\lambda_i$, its complex conjugate $\bar{\lambda}_i$ must also be in the set. Furthermore, every conjugate pair $(\lambda, \bar{\lambda})$ is sampled independently as follows:
    • $|\lambda| = |\bar{\lambda}|$ is sampled according to a log-normal distribution with parameters $\mu = 0$ and $\sigma = 0.01$, whose probability density function is $\frac{1}{x \sigma \sqrt{2\pi}} \exp\left( -\frac{(\ln x)^2}{2\sigma^2} \right)$. The median, mean, and variance of this distribution are $e^0 = 1$, $e^{\sigma^2/2}$, and $e^{2\sigma^2} - e^{\sigma^2}$, respectively.
    • $\arg \lambda = -\arg \bar{\lambda}$ is sampled uniformly from the interval $(0, \pi)$.
    The distribution of $|\lambda|$ is designed to restrict the occurrence of samples too far from 1, because $|\lambda|$ much larger than 1 causes the observables to diverge, while a component with $|\lambda|$ much smaller than 1 decays rapidly.
  • Determine the Koopman mode $m_i = (m_{1i}, m_{2i})$ corresponding to $\lambda_i$ under the following constraints:
    • $m_{1j} = \bar{m}_{1i}$ and $m_{2j} = \bar{m}_{2i}$ hold whenever $\lambda_j = \bar{\lambda}_i$;
    • The modes associated with the major eigenvalues must have significant magnitudes, while those associated with the minor eigenvalues must have smaller magnitudes.
    To satisfy the second requirement, we use a function $\varsigma : \mathbb{C} \to [0, 1]$, defined below, which has sharp peaks only at the sampled major eigenvalues $\lambda_{i_1}, \ldots, \lambda_{i_k}$:
    $$\varsigma(\lambda \mid \lambda_{i_1}, \ldots, \lambda_{i_k}) = \max_{j = 1, \ldots, k} \frac{2}{1 + e^{100 \left( (|\lambda| - |\lambda_{i_j}|)^2 + (\arg \lambda - \arg \lambda_{i_j})^2 \right)}}.$$
    For each Koopman eigenvalue $\lambda_i$, the mode $m_i$ is determined by sampling the argument of each component uniformly at random, while setting the magnitude to $\varsigma(\lambda_i \mid \lambda_{i_1}, \ldots, \lambda_{i_k})$, i.e., $|m_{1i}| = |m_{2i}| = \varsigma(\lambda_i \mid \lambda_{i_1}, \ldots, \lambda_{i_k})$.
  • Construct $X$ as $X = [m_1, \ldots, m_N] \, V_T(\lambda_1, \ldots, \lambda_N)$. If the inclusion of noise is required, add to $X$ a noise matrix $[\varepsilon_{it}]$ with $i \in \{1, 2\}$ and $t \in \{0, 1, \ldots, T-1\}$, where each $\varepsilon_{it}$ is independently sampled from a normal distribution $N(0, 0.01^2)$.
In addition, to evaluate the predictive accuracy of our algorithms, we compute observable values for $t \in \{50, 51, \ldots, 79\}$ using the Koopman eigenvalues and Koopman modes determined above.
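The steps above can be sketched in a simplified form. In this sketch of ours, all eigenvalues are treated as major and mode magnitudes are set to 1 (the weighting function $\varsigma$ is omitted), so only the conjugate-pair structure and the Vandermonde construction are illustrated; all names are ours.

```python
import numpy as np

def sample_eigenvalues(n_pairs, rng):
    # |lambda| ~ LogNormal(mu=0, sigma=0.01); arg(lambda) ~ Uniform(0, pi);
    # each sampled eigenvalue is paired with its complex conjugate.
    mod = rng.lognormal(mean=0.0, sigma=0.01, size=n_pairs)
    arg = rng.uniform(0.0, np.pi, size=n_pairs)
    lam = mod * np.exp(1j * arg)
    return np.concatenate([lam, lam.conj()])

def build_observables(lams, modes, T):
    # X = [m_1 ... m_N] V_T(lambda_1, ..., lambda_N)
    V = lams[:, None] ** np.arange(T)[None, :]      # N x T Vandermonde matrix
    return modes @ V

rng = np.random.default_rng(0)
lams = sample_eigenvalues(5, rng)                     # N = 10 eigenvalues
phases = rng.uniform(0.0, 2.0 * np.pi, size=(2, 5))   # random mode arguments
modes = np.concatenate([np.exp(1j * phases), np.exp(-1j * phases)], axis=1)
X = build_observables(lams, modes, T=50).real         # conjugate pairs give real data
X_noisy = X + rng.normal(0.0, 0.01, size=X.shape)     # optional noise, N(0, 0.01^2)
```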

9.2. Simulation Scenarios

We conduct simulations under the following four distinct scenarios:
  • Scenarios 1 and 2 investigate the case where an exact DKMD is obtained via QR decomposition (Section 8.1).
  • Scenarios 3 and 4 investigate the case where an approximated DKMD is obtained via singular value decomposition (Section 8.2).
  • Scenarios 1 and 3 are used to investigate Algorithm 2.
  • Scenarios 2 and 4 are used to investigate Algorithm 4.

9.3. Results of the Simulations

We have obtained excellent results in the simulations for all scenarios. In the case of estimating an exact DKMD, the estimated Koopman eigenvalues and the predictions for $t \in \{50, 51, \ldots, 79\}$ are identical to the ground truth within numerical precision. In the case of estimating an approximated DKMD, the estimated Koopman eigenvalues and the predictions for $t \in \{50, 51, \ldots, 79\}$ show close agreement with the ground truth.
Scenarios 1 and 2:
For both scenarios, Algorithms 2 and 4 correctly identify the ground-truth uniquely feasible degrees. Furthermore, we observe that the estimated eigenvalues (the upper-right panels of Figure 3(a,b)) and the predictions for $t \in \{50, 51, \ldots, 79\}$ (the bottom panels of Figure 3(a,b)) are identical to the ground truth within numerical precision.
Scenario 3:
The upper-left panel in Figure 4(a) depicts the logarithm of the singular values of $H_{25}(X)$. We observe a significant gap between the top ten singular values and those that follow, leading to the conclusion that $\dim H_{25}(X) = 10$, indicating that $L = 10$ is the uniquely feasible degree. Furthermore, the estimated eigenvalues (the upper-right panel) and the predictions (the bottom panels) show excellent agreement with the ground truth.
Scenario 4:
The upper-left panel in Figure 4(b) depicts the result of a singular value decomposition of $H_k(X)$ when the binary search of Algorithm 4 visits $k = 30$. Since the top 30 singular values are significantly greater than those that follow, we can conclude that $\operatorname{codim} H_{30}(X) = 1$, meaning that $L = 30$ is the uniquely feasible degree. The estimated eigenvalues and the predictions also show excellent agreement with the ground truth.
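The prediction step used in the scenarios above can be illustrated in isolation: once the eigenvalues are obtained as the roots of the characteristic polynomial, the modes are recovered by least squares against the Vandermonde matrix on the training window and the observables are extrapolated. The sketch below is ours and assumes toy, already-known eigenvalues.

```python
import numpy as np

def fit_modes_and_predict(X, lams, T_pred):
    # Recover Koopman modes by least squares against the Vandermonde matrix
    # on the training window, then extrapolate T_pred further steps.
    m, T = X.shape
    V_train = lams[:, None] ** np.arange(T)[None, :]          # N x T
    modes = X @ np.linalg.pinv(V_train)                       # m x N
    V_future = lams[:, None] ** np.arange(T, T + T_pred)[None, :]
    return modes @ V_future

# Toy check with known eigenvalues on the unit circle.
rng = np.random.default_rng(0)
lams = np.exp(2j * np.pi * np.arange(4) / 10)
M = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
X = M @ (lams[:, None] ** np.arange(50)[None, :])
pred = fit_modes_and_predict(X, lams, T_pred=30)
truth = M @ (lams[:, None] ** np.arange(50, 80)[None, :])
```

On exact data the extrapolation reproduces the ground truth to numerical precision.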

10. Conclusions

We have developed a theoretical framework to estimate the correct degree (the uniquely feasible degree) of Discrete Koopman Mode Decomposition (DKMD) from a given dataset of observables, where the degree of a DKMD is the number of Koopman eigenvalues involved. We demonstrated that unless the correct degree is used, infinitely many DKMDs with different Koopman eigenvalues and modes can fit the training data; because these DKMDs diverge in their predictions for future time steps, any single choice among them yields unreliable forecasts. Furthermore, the theory provides efficient algorithms for identifying uniquely feasible degrees.

References

  1. Rudin, W. Real and Complex Analysis, 3rd ed.; McGraw-Hill: New York, 1987.
  2. Koopman, B.O. Hamiltonian systems and transformation in Hilbert space. Proceedings of the National Academy of Sciences of the United States of America 1931, 17, 315.
  3. Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 2010, 656, 5–28.
  4. Rowley, C.W.; Mezić, I.; Bagheri, S.; Schlatter, P.; Henningson, D.S. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics 2009, 641, 115–127.
  5. Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics 2014, 1, 391.
  6. Taira, K.; Brunton, S.L.; Dawson, S.T.; Rowley, C.W.; Colonius, T.; McKeon, B.J.; Schmidt, O.T.; Gordeyev, S.; Theofilis, V.; Ukeiley, L.S. Modal analysis of fluid flows: An overview. AIAA Journal 2017, 55, 4013–4041.
  7. Brunton, S.L.; Brunton, B.W.; Proctor, J.L.; Kaiser, E.; Kutz, J.N. Chaos as an intermittently forced linear system. Nature Communications 2017, 8, 1–9.
  8. Brunton, B.W.; Johnson, L.A.; Ojemann, J.G.; Kutz, J.N. Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods 2016, 258, 1–15.
  9. Taylor, R.; Kutz, J.N.; Morgan, K.; Nelson, B.A. Dynamic mode decomposition for plasma diagnostics and validation. Review of Scientific Instruments 2018, 89, 053501.
  10. Kaptanoglu, A.A.; Morgan, K.D.; Hansen, C.J.; Brunton, S.L. Characterizing magnetized plasmas with dynamic mode decomposition. Physics of Plasmas 2020, 27, 032108.
  11. Kusaba, A.; Shin, K.; Shepard, D.; Kuboyama, T. Predictive Nonlinear Modeling by Koopman Mode Decomposition. In Proceedings of the 2020 International Conference on Data Mining Workshops (ICDMW); IEEE, 2020; pp. 811–819.
  12. Fujii, K.; Takeishi, N.; Kibushi, B.; Kouzaki, M.; Kawahara, Y. Data-driven spectral analysis for coordinative structures in periodic human locomotion. Scientific Reports 2019, 9, 1–14.
  13. Berger, E.; Sastuba, M.; Vogt, D.; Jung, B.; Ben Amor, H. Estimation of perturbations in robotic behavior using dynamic mode decomposition. Advanced Robotics 2015, 29, 331–343.
  14. Takeishi, N.; Kawahara, Y.; Yairi, T. Sparse nonnegative dynamic mode decomposition. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP); IEEE, 2017; pp. 2682–2686.
  15. Rudin, W. Functional Analysis, 2nd ed.; McGraw-Hill Series in Higher Mathematics; McGraw-Hill: New York, 1991.
  16. Susuki, Y.; Mezic, I. A Prony approximation of Koopman mode decomposition. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC); IEEE, 2015; pp. 7022–7027.
  17. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press, 2012.
Figure 1. Comparison between FD and KMD.
Figure 2. Illustration of multiple valid KMDs with divergent predictions.
Figure 3. Results of simulations for Scenarios 1 and 2.
Figure 4. Results of simulations for Scenarios 3 and 4.
Table 1. Notation summary.
Notation: Description
$\ell$: Koopman degree.
$x_0, \ldots, x_{T-1} \in \mathbb{C}^m$: Column vectors of observables.
$\mu_1, \ldots, \mu_\ell \in \mathbb{C}$: Koopman eigenvalues of an $\ell$-degree DKMD.
$m_1, \ldots, m_\ell \in \mathbb{C}^m$: Koopman modes of an $\ell$-degree DKMD.
$X \in \mathbb{C}^{m \times T}$: Observable matrix $[x_0 \cdots x_{T-1}]$.
$M \in \mathbb{C}^{m \times \ell}$: Mode matrix $[m_1 \cdots m_\ell]$.
$X_i^j \in \mathbb{C}^{m \times (j - i + 1)}$: Submatrix $[x_i \cdots x_j]$ for $0 \le i \le j < T$.
$X_i \in \mathbb{C}^T$: The $i$th row vector of $X$.
$V_n(a_1, \ldots, a_k) \in \mathbb{C}^{k \times n}$: Vandermonde matrix (Definition 2).
$H_k(X) \in \mathbb{C}^{(k+1) \times m(T-k)}$: Hankel matrix (Definition 6).
$\dim H_k(X)$: $k$th Hankel dimension of $X$, defined as $\operatorname{rank} H_k(X)$ (Definition 10).
$\operatorname{codim} H_k(X)$: $k$th Hankel codimension of $X$, defined as $k + 1 - \dim H_k(X)$ (Definition 10).
$L$: The smallest $\ell$ such that $\operatorname{codim} H_\ell(X) > 0$.
$A_{ij}$: Entry of a matrix $A$ at row $i$ and column $j$.
$\|A\|_F$: Frobenius norm of $A$: $\|A\|_F = \sqrt{\sum_{i,j} |A_{ij}|^2}$.
$A^+$: Moore–Penrose pseudoinverse of $A$; $B = C A^+$ minimizes $\|C - B A\|_F$.
$X^T \in \mathbb{C}^{n \times m}$: Transpose of $X \in \mathbb{C}^{m \times n}$.
$X^* \in \mathbb{C}^{n \times m}$: Conjugate transpose of $X \in \mathbb{C}^{m \times n}$.
$V(M)$: Subspace spanned by the column vectors of $M$.
$[A\ B] \in \mathbb{C}^{k \times (m+n)}$: Matrix obtained by appending the $n$ columns of $B \in \mathbb{C}^{k \times n}$ to $A \in \mathbb{C}^{k \times m}$.
$V^\perp \subseteq \mathbb{C}^n$: Orthogonal complement of a subspace $V \subseteq \mathbb{C}^n$.
$v_i$: The $i$th component of a vector $v$.
$\mathbf{0}_n \in \mathbb{C}^n$: $n$-dimensional zero row vector $(0 \cdots 0)$.
$v^{a,b} \in \mathbb{C}^{n+a+b}$: Column vector defined as $v^{a,b} = (\mathbf{0}_a\ v^T\ \mathbf{0}_b)^T$ for $v \in \mathbb{C}^n$.
$\operatorname{diag}(A_1, \ldots, A_k)$: Block diagonal matrix with diagonal blocks $A_1, \ldots, A_k$.
Table 2. Time complexity of the algorithms in the theoretical and practical scenarios.
Algorithm      Theoretical: QRD    Theoretical: EqS    Practical: SVD    Practical: EqS
Algorithm 1    1                   0                   1                 0
Algorithm 2    1                   0                   1                 0
Algorithm 3    1                   1                   1                 1
Algorithm 4    $< \log_2 T$        0                   $< \log_2 T$      0
Table 3. Simulation scenarios.
Scenario number    Type           # Major eigenvalues    # Minor eigenvalues    Noise inclusion
1                  $L \le T/2$    10                     0                      No
2                  $L > T/2$      30                     0                      No
3                  $L \le T/2$    10                     90                     Yes
4                  $L > T/2$      30                     70                     Yes
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.