Spectral Degeneracy Operators for Interpretable Turbulence Modeling

Submitted: 23 September 2025
Posted: 24 September 2025


Abstract
Spectral Degeneracy Operators (SDOs) provide a unified framework for embedding physical symmetries and adaptive singularities into neural network layers. In this work we develop the theoretical and computational underpinnings of SDOs and demonstrate their role in neural symmetrization and turbulence modeling. We establish a generalized spectral decomposition for vector-valued SDOs, derive Lipschitz stability estimates for the inverse calibration of degeneracy points from sparse or boundary data, and prove a neural–turbulence correspondence theorem showing that SDO-based networks can approximate turbulence closure operators while preserving incompressibility. This combination of rigorous spectral theory, inverse problem analysis, and physics-informed neural design bridges harmonic analysis, degenerate PDEs, and fluid dynamics. Our results provide both mathematical guarantees and practical algorithms for constructing data-driven yet physically consistent turbulence models. By enforcing modewise stability under training and exploiting Green’s function representations, SDO layers act as adaptive spectral filters that learn anisotropic structures without violating conservation laws. The proposed framework opens new directions for interpretable deep learning architectures grounded in the spectral properties of underlying physical operators.

1. Introduction

The intersection of degenerate partial differential equations (PDEs), neural network symmetrization, and turbulence modeling offers a rich ground for mathematical innovation. Recent advances in inverse problems for degenerate PDEs [1] and physics-informed neural networks (PINNs) [2] have enabled the integration of adaptive diffusion mechanisms into machine learning models. Yet, a unified theoretical framework connecting these areas remains largely unexplored.

1.1. Degenerate PDEs and Inverse Problems

Degenerate PDEs naturally arise in contexts such as anisotropic diffusion [3], geometric singularities [4], and phase transitions [5]. Cannarsa et al. [1] established Lipschitz stability for reconstructing degeneracy points in parabolic equations of the form
$\partial_t w - \partial_x\big(|x-a|^{\theta}\,\partial_x w\big) - c\,w = 0, \qquad \theta \in [1,2),$
from boundary measurements of $\partial_x w(1,t)$. Previous works on inverse source problems [6] and coefficient identification [7] mainly addressed scalar, one-dimensional domains, leaving multi-dimensional, vector-valued problems largely uncharted.

1.2. Neural Symmetrization

Equivariant neural networks [8] and the broader field of geometric deep learning [9] embed symmetry principles into machine learning models. Conventional group-convolution approaches [10] struggle with continuous symmetries such as $SO(3)$ and anisotropic phenomena, including turbulent shear layers. Neural operators [11] and PINNs [2] offer promise for turbulence modeling, but often lack structural guarantees like rotation equivariance, energy conservation, or adaptivity to localized singularities.

1.3. Turbulence Modeling

Classical turbulence models—LES [12] and RANS [13]—rely heavily on empirical closures, which poorly capture intermittency and anisotropic dissipation. Data-driven neural closures [14,15] improve predictive performance but can violate fundamental physical constraints. Our approach enforces incompressibility via
$\nabla \cdot \big(\mathcal{T}_{\mathrm{NN}}(u)\big) = 0,$
where $\mathcal{T}_{\mathrm{NN}}$ is a degeneracy-aware neural operator designed to respect the underlying PDE structure.

1.4. Contributions

We introduce spectral degeneracy operators (SDOs)—differential operators that encode both physical symmetries and adaptive singularities—and demonstrate their application to:
  • Neural symmetrization, through SDO-based activation and layer design,
  • Turbulence closure modeling, via data-driven calibration and spectral filtering,
  • Inverse problem formulation, for reconstructing degeneracy points from sparse or boundary observations.
The key theoretical contributions of this work are:
  • Generalized spectral decomposition for vector-valued SDOs (Section 2),
  • Lipschitz stability results for inverse calibration in turbulence models (Section 3),
  • A neural-turbulence correspondence theorem, connecting learned SDO parameters to underlying turbulent structures (Section 3).

2. Spectral Degeneracy Operators (SDOs)

2.1. Definition and Spectral Properties

Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain. Denote by $a \in L^\infty(\Omega;\Omega)$ the degeneracy points and by $\theta \in L^\infty(\Omega;[1,2)^d)$ the degeneracy exponents. Define the spectral degeneracy operator (SDO) as
$\mathcal{L}_{a,\theta} u := -\nabla \cdot \big( |x-a|^{\theta} \odot \nabla u \big),$
where
$\big(|x-a|^{\theta}\big)_i := |x_i-a_i|^{\theta_i}, \quad i=1,\dots,d,$
and $\odot$ denotes the Hadamard (componentwise) product. The natural energy space associated with $\mathcal{L}_{a,\theta}$ is
$H^1_\theta(\Omega) := \big\{\, u \in L^2(\Omega) : |x-a|^{\theta/2} \odot \nabla u \in L^2(\Omega;\mathbb{R}^d),\ u|_{\partial\Omega} = 0 \,\big\},$
with inner product
$\langle u, v \rangle_{H^1_\theta} := \int_\Omega u\,v\,dx + \int_\Omega \big(|x-a|^{\theta/2} \odot \nabla u\big) \cdot \big(|x-a|^{\theta/2} \odot \nabla v\big)\,dx.$
Theorem 2.1
(Spectral Decomposition of SDOs). Let $a \in \mathrm{int}(\Omega)$ and $\theta \in [1,2)^d$. Then the operator
$\mathcal{L}_{a,\theta} u := -\nabla \cdot \big(D_{a,\theta}\nabla u\big), \qquad D_{a,\theta} = \mathrm{diag}\big(|x_1-a_1|^{\theta_1},\dots,|x_d-a_d|^{\theta_d}\big),$
with domain $H^1_0(\Omega)$, is self-adjoint, positive semi-definite, and has a compact resolvent on $L^2(\Omega)$.
Moreover, there exists a countable set of eigenpairs $\{(\lambda_k,\phi_k)\}_{k\in\mathbb{N}^d}$ forming an orthonormal basis of $L^2(\Omega)$ such that
$\phi_k(x;a,\theta) = \prod_{i=1}^{d} \phi^{(i)}_{k_i}(x_i;a_i,\theta_i),$
where each $\phi^{(i)}_{k_i}$ satisfies the 1D Bessel-type Sturm–Liouville problem
$-\big(|x_i-a_i|^{\theta_i}\,(\phi^{(i)}_{k_i})'\big)' = \lambda_{k_i}\,\phi^{(i)}_{k_i}, \quad x_i\in(0,1), \qquad \phi^{(i)}_{k_i}(0)=\phi^{(i)}_{k_i}(1)=0.$
The eigenvalues admit the asymptotic behavior
$\lambda_k(a,\theta) \sim \sum_{i=1}^{d}\left(\frac{j_{\nu_i,k_i}}{|1-a_i|}\right)^2, \qquad \nu_i = \frac{\theta_i-1}{2-\theta_i},$
where $j_{\nu_i,k_i}$ denotes the $k_i$-th positive zero of the Bessel function $J_{\nu_i}$ of the first kind.
Proof. 
Assume $u(x) = \prod_{i=1}^{d} u_i(x_i)$. Then the operator acts as
$\mathcal{L}_{a,\theta} u(x) = -\sum_{i=1}^{d} \Big(\prod_{j\neq i} u_j(x_j)\Big)\,\big(|x_i-a_i|^{\theta_i}\,u_i'(x_i)\big)'.$
Thus, the $d$-dimensional spectral problem reduces to $d$ decoupled 1D weighted Sturm–Liouville problems of the form stated in the theorem.
Each 1D operator
$L_i u_i := -\big(|x_i-a_i|^{\theta_i}\,u_i'\big)', \qquad u_i \in H^1_0(0,1),$
is self-adjoint with respect to the weighted inner product
$\langle u_i, v_i \rangle_{L^2((0,1),\,|x_i-a_i|^{\theta_i}dx_i)} := \int_0^1 u_i(x_i)\,v_i(x_i)\,|x_i-a_i|^{\theta_i}\,dx_i.$
The operator is positive semi-definite because
$\langle L_i u_i, u_i \rangle = \int_0^1 |x_i-a_i|^{\theta_i}\,|u_i'|^2\,dx_i \ge 0.$
Compactness of the resolvent follows from the compact embedding $H^1_0 \hookrightarrow L^2$ for singular-weighted Sobolev spaces [1].
By tensorizing the 1D eigenfunctions $\phi^{(i)}_{k_i}$, we obtain
$\phi_k(x) = \prod_{i=1}^{d} \phi^{(i)}_{k_i}(x_i),$
with eigenvalues
$\lambda_k = \sum_{i=1}^{d} \lambda_{k_i}.$
Orthonormality in $L^2(\Omega)$ follows from the product structure and the orthonormality of each 1D family.
For each 1D Sturm–Liouville problem with weight $|x_i-a_i|^{\theta_i}$, standard singular Bessel function theory [16] implies
$\lambda_{k_i} \sim \left(\frac{j_{\nu_i,k_i}}{|1-a_i|}\right)^2, \qquad \nu_i = \frac{\theta_i-1}{2-\theta_i}.$
Summing over $i$ yields the asymptotic formula stated in the theorem.
The compact resolvent and orthonormality guarantee that $\{\phi_k\}_{k\in\mathbb{N}^d}$ forms a complete orthonormal basis of $L^2(\Omega)$, establishing the spectral decomposition. □
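As a minimal numerical sketch (not part of the analysis above), one can discretize the 1D problem $-(|x-a|^{\theta}u')'=\lambda u$ on $(0,1)$ with a conservative finite-difference scheme and compare the lowest discrete eigenvalues with the Bessel-zero expression $(j_{\nu,k}/|1-a|)^2$, $\nu=(\theta-1)/(2-\theta)$. The asymptotics are stated for large $k$ and the constant depends on normalization conventions, so only the growth trend should be compared; grid size and parameter values below are illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zeros(nu, count, x_max=60.0, step=0.05):
    """First `count` positive zeros of J_nu, via sign-change bracketing and brentq."""
    zeros, x_prev, f_prev = [], step, jv(nu, step)
    x = 2 * step
    while len(zeros) < count and x < x_max:
        f = jv(nu, x)
        if f_prev * f < 0.0:
            zeros.append(brentq(lambda t: jv(nu, t), x_prev, x))
        x_prev, f_prev = x, f
        x += step
    return np.array(zeros)

# 1D degenerate Sturm-Liouville problem: -(|x-a|^theta u')' = lambda*u on (0,1), u(0)=u(1)=0
a, theta = 0.3, 1.2
n = 4000                                      # interior grid points (illustrative)
h = 1.0 / (n + 1)
xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)   # staggered half-nodes
w = np.abs(xh - a) ** theta                   # degenerate weight at half-nodes

# conservative finite-difference stiffness matrix (symmetric positive definite)
L = diags([-w[1:-1] / h**2, (w[:-1] + w[1:]) / h**2, -w[1:-1] / h**2],
          offsets=[-1, 0, 1], format="csc")
lam_num = np.sort(eigsh(L, k=5, sigma=0.0, return_eigenvectors=False))

# Bessel-zero prediction of Theorem 2.1: lambda_k ~ (j_{nu,k}/|1-a|)^2, nu = (theta-1)/(2-theta)
nu = (theta - 1.0) / (2.0 - theta)
lam_bessel = (bessel_zeros(nu, 5) / abs(1.0 - a)) ** 2

print("discrete eigenvalues:", np.round(lam_num, 2))
print("Bessel-zero formula :", np.round(lam_bessel, 2))
```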

2.2. Neural Symmetrization via SDOs

Define SDO layers by
$[\mathcal{L}_{a,\theta} u](x) = -\nabla \cdot \big(|x-a|^{\theta} \odot \nabla u(x)\big), \qquad x \in \Omega,$
with trainable $a \in \Omega$ and $\theta \in [1,2)^d$. Let $G(x,y;a,\theta)$ be the Green's function:
$\mathcal{L}_{a,\theta}\,G(\cdot,y) = \delta(\cdot - y), \qquad G|_{\partial\Omega} = 0.$
Definition 2.2
(SDO-Net). An SDO-Net contains layers of the form
$u^{l+1} = \sigma\big(\mathcal{L}_{a_l,\theta_l} u^{l} + W_l u^{l} + b_l\big),$
with Lipschitz activation $\sigma$, weight $W_l$, bias $b_l$, and trainable $a_l,\theta_l$.
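As a concrete illustration of Definition 2.2 (a sketch, not the paper's implementation), the code below assembles a one-dimensional finite-difference version of $\mathcal{L}_{a,\theta}$ and applies one layer $u^{l+1}=\sigma(\mathcal{L}_{a,\theta}u^{l}+W u^{l}+b)$. The helper names, the crude rescaling of the operator, and all parameter values are illustrative assumptions.

```python
import numpy as np

def sdo_matrix_1d(a, theta, n):
    """Dense finite-difference matrix for L_{a,theta} u = -(|x-a|^theta u')' on (0,1),
    homogeneous Dirichlet conditions, n interior nodes (illustration only)."""
    h = 1.0 / (n + 1)
    xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)   # staggered half-nodes
    w = np.abs(xh - a) ** theta                   # degenerate weight
    L = np.diag((w[:-1] + w[1:]) / h**2)
    L -= np.diag(w[1:-1] / h**2, 1) + np.diag(w[1:-1] / h**2, -1)
    return L

def sdo_layer(u, a, theta, W, b, sigma=np.tanh):
    """One layer of Definition 2.2: u_{l+1} = sigma(L_{a,theta} u_l + W u_l + b).
    The operator is rescaled by its largest entry so tanh does not saturate; this
    normalization (or using L^{-1} as in Theorem 2.4) is an implementation choice."""
    L = sdo_matrix_1d(a, theta, u.size)
    L = L / np.abs(L).max()
    return sigma(L @ u + W @ u + b)

# toy forward pass; a and theta would be trainable parameters of an SDO-Net
n = 128
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
u0 = np.sin(np.pi * x)
W = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b = np.zeros(n)
u1 = sdo_layer(u0, a=0.4, theta=1.3, W=W, b=b)
print(u1.shape, float(np.abs(u1).max()))
```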
Lemma 2.3
(Weighted SDO Solve). Let $a \in \mathrm{int}(\Omega)$ and $\theta \in [1,2)^d$. For any $f \in L^2(\Omega)$, the boundary-value problem
$\mathcal{L}_{a,\theta} u = f, \qquad u|_{\partial\Omega} = 0,$
admits a unique solution $u \in H^1_\theta(\Omega)$, where the weighted Sobolev space is defined as
$H^1_\theta(\Omega) := \Big\{\, v \in L^2(\Omega) : \sum_{i=1}^{d}\int_\Omega |x_i-a_i|^{\theta_i}\,|\partial_{x_i} v|^2\,dx < \infty,\ v|_{\partial\Omega} = 0 \,\Big\}.$
Moreover, there exists a constant $C_\theta > 0$ such that
$\|u\|_{H^1_\theta(\Omega)} \le C_\theta\,\|f\|_{L^2(\Omega)}.$
Proof. 
Consider the bilinear form
$a(u,v) := \int_\Omega \sum_{i=1}^{d} |x_i-a_i|^{\theta_i}\,\partial_{x_i}u\,\partial_{x_i}v\,dx, \qquad u,v \in H^1_\theta(\Omega).$
The weak form of the boundary-value problem is: find $u \in H^1_\theta(\Omega)$ such that
$a(u,v) = \int_\Omega f\,v\,dx, \qquad \forall\, v \in H^1_\theta(\Omega).$
By definition,
$|a(u,v)| \le \|u\|_{H^1_\theta}\,\|v\|_{H^1_\theta}.$
Also, $a(u,u) = \|u\|_{H^1_\theta}^2$ shows coercivity.
The Lax–Milgram theorem then guarantees a unique $u \in H^1_\theta(\Omega)$ solving the weak formulation.
Taking $v = u$ in the weak formulation and applying the Cauchy–Schwarz inequality,
$\|u\|_{H^1_\theta}^2 = a(u,u) = \int_\Omega f\,u\,dx \le \|f\|_{L^2}\,\|u\|_{L^2}.$
Using a weighted Poincaré inequality [1],
$\|u\|_{L^2} \le C_P\,\|u\|_{H^1_\theta},$
we obtain
$\|u\|_{H^1_\theta} \le C_\theta\,\|f\|_{L^2},$
which proves the stability estimate.
Alternatively, using the spectral decomposition $\{(\lambda_k,\phi_k)\}$ from Theorem 2.1, one can write
$u = \sum_{k\in\mathbb{N}^d} \frac{\langle f,\phi_k\rangle_{L^2}}{\lambda_k}\,\phi_k,$
which satisfies the same norm estimate by orthonormality and positivity of the $\lambda_k$. □
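Numerically, Lemma 2.3 corresponds to a weighted elliptic solve. A minimal 1D sketch under the same conservative finite-difference discretization as in the earlier snippets is given below; the source term and parameter values are illustrative only.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def sdo_solve_1d(f_vals, a, theta):
    """Solve L_{a,theta} u = f, u(0) = u(1) = 0, with a conservative finite-difference scheme."""
    n = f_vals.size
    h = 1.0 / (n + 1)
    xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)
    w = np.abs(xh - a) ** theta
    L = diags([-w[1:-1] / h**2, (w[:-1] + w[1:]) / h**2, -w[1:-1] / h**2],
              offsets=[-1, 0, 1], format="csc")
    return spsolve(L, f_vals)

n = 2000
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
f = np.where(np.abs(x - 0.7) < 0.05, 1.0, 0.0)    # localized source term
u = sdo_solve_1d(f, a=0.3, theta=1.5)
# crude check of the stability estimate ||u|| <= C_theta ||f|| from Lemma 2.3
print("||u||_L2 / ||f||_L2 =", round(float(np.linalg.norm(u) / np.linalg.norm(f)), 3))
```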
Theorem 2.4
(Well-Posedness of SDO Layers). Let $u^{l} \in H^1_{\theta_l}(\Omega)$, and let the SDO layer be defined by
$u^{l+1} = \sigma\big(\mathcal{L}_{a_l,\theta_l}^{-1}(W_l u^{l} + b_l)\big),$
where $\sigma$ is Lipschitz continuous with constant $L_\sigma$, $W_l$ is a linear operator, and $b_l \in L^2(\Omega)$. Then:
  • Existence and uniqueness:  there exists a unique $u^{l+1} \in H^1_{\theta_l}(\Omega)$ satisfying the layer equation.
  • Lipschitz bound:
    $\|u^{l+1}\|_{H^1_{\theta_l}} \le L_\sigma\,C_{\theta_l}\,\|W_l u^{l} + b_l\|_{L^2},$
    where $C_{\theta_l}$ is the constant from Lemma 2.3.
  • Continuous dependence on parameters:
    $\big\|\mathcal{L}_{a_l,\theta_l}^{-1} - \mathcal{L}_{\tilde a_l,\tilde\theta_l}^{-1}\big\|_{L^2 \to H^1_{\theta_l}} \to 0 \quad \text{as } a_l \to \tilde a_l,\ \theta_l \to \tilde\theta_l.$
Proof. 
From Lemma 2.3, for any $f \in L^2(\Omega)$, the problem
$\mathcal{L}_{a_l,\theta_l} u = f, \qquad u|_{\partial\Omega} = 0,$
admits a unique solution $u \in H^1_{\theta_l}(\Omega)$ with
$\|u\|_{H^1_{\theta_l}} \le C_{\theta_l}\,\|f\|_{L^2}.$
Setting $f = W_l u^{l} + b_l$, we obtain existence and uniqueness of $u^{l+1}$ before activation.
Applying the Lipschitz continuous activation $\sigma$ with constant $L_\sigma$ gives
$\|u^{l+1}\|_{H^1_{\theta_l}} = \big\|\sigma\big(\mathcal{L}_{a_l,\theta_l}^{-1}(W_l u^{l} + b_l)\big)\big\|_{H^1_{\theta_l}} \le L_\sigma\,\big\|\mathcal{L}_{a_l,\theta_l}^{-1}(W_l u^{l} + b_l)\big\|_{H^1_{\theta_l}}.$
Using Lemma 2.3,
$\big\|\mathcal{L}_{a_l,\theta_l}^{-1}(W_l u^{l} + b_l)\big\|_{H^1_{\theta_l}} \le C_{\theta_l}\,\|W_l u^{l} + b_l\|_{L^2},$
which yields the Lipschitz bound.
Let $u = \mathcal{L}_{a_l,\theta_l}^{-1} f$ and $\tilde u = \mathcal{L}_{\tilde a_l,\tilde\theta_l}^{-1} f$. Then
$\mathcal{L}_{\tilde a_l,\tilde\theta_l}(u - \tilde u) = \big(\mathcal{L}_{\tilde a_l,\tilde\theta_l} - \mathcal{L}_{a_l,\theta_l}\big)u.$
Taking the $H^1_{\theta_l}$ norm and applying the stability estimate from Lemma 2.3,
$\|u - \tilde u\|_{H^1_{\theta_l}} \le C_{\theta_l}\,\big\|\big(\mathcal{L}_{\tilde a_l,\tilde\theta_l} - \mathcal{L}_{a_l,\theta_l}\big)u\big\|_{L^2}.$
Since $\mathcal{L}_{a_l,\theta_l}$ depends continuously on $a_l$ and $\theta_l$, we have
$\|u - \tilde u\|_{H^1_{\theta_l}} \to 0 \quad \text{as } a_l \to \tilde a_l,\ \theta_l \to \tilde\theta_l,$
which establishes the continuous dependence on parameters. □
Lemma 2.5
(Spectral Decomposition of SDO Layer). Let $\mathcal{L}_{a_l,\theta_l}$ be the SDO at layer $l$ as in Theorem 2.1, and let $u^{l} \in L^2(\Omega)$. Then $u^{l}$ admits the spectral decomposition
$u^{l}(x) = \sum_{k\in\mathbb{N}^d} \langle u^{l}, \phi_k \rangle_{L^2(\Omega)}\,\phi_k(x;a_l,\theta_l),$
and the action of the operator satisfies
$\mathcal{L}_{a_l,\theta_l} u^{l}(x) = \sum_{k\in\mathbb{N}^d} \lambda_k(a_l,\theta_l)\,\langle u^{l}, \phi_k \rangle_{L^2(\Omega)}\,\phi_k(x;a_l,\theta_l),$
where $(\lambda_k,\phi_k)$ are the eigenpairs of $\mathcal{L}_{a_l,\theta_l}$ as in Theorem 2.1.
Proof. 
From Theorem 2.1, $\mathcal{L}_{a_l,\theta_l}$ is self-adjoint, positive semi-definite, and has a compact resolvent in $L^2(\Omega)$. Therefore, the spectral theorem for compact self-adjoint operators applies, ensuring the existence of a countable orthonormal basis of eigenfunctions $\{\phi_k\}_{k\in\mathbb{N}^d}$ with corresponding eigenvalues $\{\lambda_k\}_{k\in\mathbb{N}^d}$.
Since $\{\phi_k\}$ forms an orthonormal basis of $L^2(\Omega)$, any $u^{l} \in L^2(\Omega)$ can be expanded as
$u^{l}(x) = \sum_{k\in\mathbb{N}^d} \langle u^{l}, \phi_k \rangle_{L^2(\Omega)}\,\phi_k(x;a_l,\theta_l),$
with convergence in $L^2(\Omega)$. This proves the first identity.
Applying $\mathcal{L}_{a_l,\theta_l}$ to the expansion termwise gives
$\mathcal{L}_{a_l,\theta_l} u^{l}(x) = \sum_{k\in\mathbb{N}^d} \langle u^{l}, \phi_k \rangle_{L^2(\Omega)}\,\mathcal{L}_{a_l,\theta_l}\phi_k(x;a_l,\theta_l) = \sum_{k\in\mathbb{N}^d} \lambda_k(a_l,\theta_l)\,\langle u^{l}, \phi_k \rangle_{L^2(\Omega)}\,\phi_k(x;a_l,\theta_l),$
where we used the eigenvalue equation $\mathcal{L}_{a_l,\theta_l}\phi_k = \lambda_k\phi_k$. Convergence is in $L^2(\Omega)$.
The decomposition allows efficient representation of SDO layers and is crucial for the spectral analysis of deep networks. The dependence of $\phi_k$ and $\lambda_k$ on $a_l$ and $\theta_l$ is continuous by the parameter-dependence properties of SDOs (see Theorem 2.1 and Theorem 2.4). This completes the proof. □
Corollary 2.6
(Spectral Stability under Training). Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain. For each layer $l$, denote by
$\mathcal{L}_{a,\theta} u := -\nabla \cdot \big(D_{a,\theta}\nabla u\big) \quad\text{with}\quad D_{a,\theta}(x) = \mathrm{diag}\big(|x_1-a_1|^{\theta_1},\dots,|x_d-a_d|^{\theta_d}\big),$
acting on $H^1_0(\Omega)$, and let $\{\lambda_k(a,\theta)\}_{k\ge 1}$ be the eigenvalues enumerated in nondecreasing order with associated orthonormal eigenfunctions $\{\phi_k(\cdot;a,\theta)\}_{k\ge 1}$. Assume that between two training steps $t$ and $t+1$ the parameters change by
$\delta := \|a^{(t+1)} - a^{(t)}\|_{\mathbb{R}^d} + \|\theta^{(t+1)} - \theta^{(t)}\|_{\mathbb{R}^d}.$
Then, for each fixed $k \in \mathbb{N}$,
$\frac{\lambda_k\big(a^{(t+1)},\theta^{(t+1)}\big)}{\lambda_k\big(a^{(t)},\theta^{(t)}\big)} = 1 + O_k(\delta) \quad \text{as } \delta \to 0.$
In particular, the spectrum of each mode remains stable under sufficiently small parameter updates during training.
Remark 2.7
(Physics-Informed Kernels and Modewise Filtering). Using the Green's function representation, each layer admits the expansion
$u^{l+1}(x) = \sum_{k\in\mathbb{N}^d} \frac{\langle u^{l}, \phi_k(\cdot;a_l,\theta_l)\rangle_{L^2(\Omega)}}{\lambda_k(a_l,\theta_l)}\,\phi_k(x;a_l,\theta_l).$
Thus each SDO layer acts as a modewise filter whose anisotropy adapts to the learned parameters. When the eigenvalues vary smoothly with $(a_l,\theta_l)$, the gains $1/\lambda_k$ of the dominant modes change only by $O(\delta)$ for small updates, ensuring stable physics-informed filtering across layers.
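The modewise filter can be reproduced on a discretized 1D operator: the eigendecomposition of the finite-difference matrix stands in for $\{(\lambda_k,\phi_k)\}$, and applying the gains $1/\lambda_k$ coincides with applying $\mathcal{L}_{a,\theta}^{-1}$. The following toy check is a sketch under these discretization assumptions.

```python
import numpy as np

# Modewise filtering of Remark 2.7: u_{l+1} = sum_k <u_l, phi_k> / lambda_k(a,theta) * phi_k.
# Dense 1D sketch; grid, weight, and discretization choices are illustrative only.
n, a, theta = 200, 0.5, 1.1
h = 1.0 / (n + 1)
xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)
w = np.abs(xh - a) ** theta                                   # degenerate weight
L = (np.diag(w[:-1] + w[1:]) - np.diag(w[1:-1], 1) - np.diag(w[1:-1], -1)) / h**2

lam, phi = np.linalg.eigh(L)                                  # eigenpairs of the discrete SDO
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
u = np.sign(np.sin(3 * np.pi * x))                            # rough input with jumps

u_filtered = phi @ ((phi.T @ u) / lam)                        # gains 1/lambda_k damp rough modes
print(np.allclose(u_filtered, np.linalg.solve(L, u)))         # same as applying L^{-1}
```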

3. Main Results

3.1. Universality for Turbulence Closure

Theorem 3.1
(Universality of SDO-Nets). Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain, and let $\mathcal{T}: L^2(\Omega;\mathbb{R}^d) \to L^2(\Omega;\mathbb{R}^{d\times d})$ be a turbulence closure operator mapping velocity fields to stress tensors. Then, for any compact set $\mathcal{U} \subset L^2(\Omega;\mathbb{R}^d)$ and any $\varepsilon > 0$, there exists an SDO-Net $\mathcal{T}_{\mathrm{NN}}$ such that
$\sup_{u\in\mathcal{U}} \|\mathcal{T}(u) - \mathcal{T}_{\mathrm{NN}}(u)\|_{L^2(\Omega)} < \varepsilon,$
with $\mathcal{T}_{\mathrm{NN}}(u)$ weakly divergence-free:
$\nabla \cdot \mathcal{T}_{\mathrm{NN}}(u) = 0 \quad \text{in } \mathcal{D}'(\Omega).$
Proof. 
By Theorem 2.1, the set of eigenfunctions $\{\phi_k\}_{k\in\mathbb{N}^d}$ of the spectral degeneracy operator $\mathcal{L}_{a,\theta}$ forms a complete orthonormal basis of $L^2(\Omega)$. Thus, for each component $\mathcal{T}_{ij}(u)$ of the closure tensor, and for any $u \in \mathcal{U}$, we can approximate
$\mathcal{T}_{ij}(u) \approx \sum_{|k|\le N} c_k^{(ij)}(u)\,\phi_k(x;a,\theta),$
with coefficients $c_k^{(ij)}(u) \in \mathbb{R}$ and error arbitrarily small in the $L^2$ norm by choosing $N$ sufficiently large.
Each coefficient map $u \mapsto c_k^{(ij)}(u)$ is a continuous functional on the compact set $\mathcal{U} \subset L^2(\Omega;\mathbb{R}^d)$. By the universal approximation theorem for feedforward networks [17], for any $\delta > 0$ there exists a neural network $N_k^{(ij)}$ such that
$\sup_{u\in\mathcal{U}} \big|c_k^{(ij)}(u) - N_k^{(ij)}(u)\big| < \delta.$
Let $F(x) \in L^2(\Omega;\mathbb{R}^{d\times d})$ be any tensor field. By the Hodge decomposition (or Helmholtz projection) in bounded Lipschitz domains, there exists a unique decomposition
$F = F_{\mathrm{div}} + \nabla\varphi,$
with $\nabla \cdot F_{\mathrm{div}} = 0$ and potential $\varphi$ vanishing on $\partial\Omega$. Applying this projection to the neural approximation of the spectral expansion above, we obtain a divergence-free SDO-Net:
$\mathcal{T}_{\mathrm{NN}}(u) = P_{\mathrm{div}}\Big(\sum_{|k|\le N} N_k(u)\,\phi_k(x)\Big),$
where $P_{\mathrm{div}}$ denotes the $L^2$-orthogonal projection onto divergence-free tensor fields.
By the triangle inequality,
$\|\mathcal{T}(u) - \mathcal{T}_{\mathrm{NN}}(u)\|_{L^2} \le \Big\|\mathcal{T}(u) - \sum_{|k|\le N} c_k(u)\phi_k\Big\|_{L^2} + \Big\|\sum_{|k|\le N}\big(c_k(u) - N_k(u)\big)\phi_k\Big\|_{L^2} + \Big\|\sum_{|k|\le N} N_k(u)\phi_k - P_{\mathrm{div}}\sum_{|k|\le N} N_k(u)\phi_k\Big\|_{L^2}.$
Each term can be made smaller than $\varepsilon/3$ by choosing $N$ sufficiently large and sufficiently accurate neural networks $N_k$. Hence $\mathcal{T}_{\mathrm{NN}}$ satisfies the approximation bound and is weakly divergence-free by construction. □
Remark 3.2
(Interpretation). This theorem shows that SDO-Nets provide a universal approximator for turbulence closure operators while exactly enforcing incompressibility. The spectral degeneracy basis allows adaptive representation of anisotropic and localized structures, critical for turbulent flows.
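The projection $P_{\mathrm{div}}$ is easiest to illustrate on a periodic box, where it reduces to a Fourier-space (Leray-type) projection; the setting of the theorem, a Lipschitz domain with boundary conditions and tensor-valued fields, would require a proper Hodge-decomposition solver applied row-wise. The sketch below, under the periodic-box assumption, projects a 2D vector field onto its divergence-free part.

```python
import numpy as np

def leray_project_2d(vx, vy):
    """Project a 2D vector field onto its divergence-free part via FFT.
    Assumes a periodic box; this is only a stand-in for the paper's P_div on a
    Lipschitz domain, which would need a Hodge decomposition with boundary conditions."""
    n = vx.shape[0]
    k = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid division by zero for the mean mode
    vxh, vyh = np.fft.fft2(vx), np.fft.fft2(vy)
    div_h = (kx * vxh + ky * vyh) / k2           # (k . v_hat) / |k|^2
    return (np.real(np.fft.ifft2(vxh - kx * div_h)),
            np.real(np.fft.ifft2(vyh - ky * div_h)))

# quick check: the projected field is (spectrally) divergence-free
n = 64
rng = np.random.default_rng(1)
vx, vy = rng.standard_normal((n, n)), rng.standard_normal((n, n))
px, py = leray_project_2d(vx, vy)
k = np.fft.fftfreq(n) * n
kx, ky = np.meshgrid(k, k, indexing="ij")
div_hat = kx * np.fft.fft2(px) + ky * np.fft.fft2(py)
print(float(np.abs(div_hat).max()) < 1e-8)
```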

3.2. Inverse Calibration of Degeneracy Points

Theorem 3.3
(Lipschitz Stability of Degeneracy Points). Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain, and let $u_1, u_2$ be solutions to the degenerate Navier–Stokes system
$\partial_t u + (u\cdot\nabla)u + \nabla p - \nabla \cdot \big(|x-a|^{\theta} \odot \nabla u\big) = f, \qquad \nabla \cdot u = 0,$
with degeneracy points $a_1, a_2 \in \Omega$ and identical initial/boundary conditions.
Assume that the boundary measurements satisfy
$\big\|\big(|x-a_1|^{\theta} \odot \nabla u_1 - |x-a_2|^{\theta} \odot \nabla u_2\big)\cdot n\big\|_{L^2(\partial\Omega\times(0,T))} \le \delta.$
Then there exist constants $C > 0$ and $\gamma \in (0,1]$, depending on $\Omega$, $\theta$, and the spectral gap of the SDO, such that
$\|a_1 - a_2\|_{\mathbb{R}^d} \le C\,\delta^{\gamma}.$
Proof. 
We outline a rigorous four-step argument.
Set $w = u_1 - u_2$. Subtracting the two systems gives
$\partial_t w + (u_1\cdot\nabla)w + (w\cdot\nabla)u_2 + \nabla(p_1 - p_2) - \nabla \cdot \big(|x-a_1|^{\theta} \odot \nabla w\big) = \nabla \cdot \Big(\big(|x-a_2|^{\theta} - |x-a_1|^{\theta}\big) \odot \nabla u_2\Big).$
Multiplying by $w$ and integrating over $\Omega$,
$\frac{1}{2}\frac{d}{dt}\|w\|_{L^2}^2 + \int_\Omega |x-a_1|^{\theta}\,|\nabla w|^2\,dx \le \int_\Omega \big|\,|x-a_1|^{\theta} - |x-a_2|^{\theta}\big|\,|\nabla u_2|\,|\nabla w|\,dx + \mathcal{N},$
where $\mathcal{N} = \int_\Omega \big|\big((w\cdot\nabla)u_2\big)\cdot w\big|\,dx$ is the nonlinear advection term.
Applying the Cauchy–Schwarz and Young inequalities,
$\int_\Omega \big|\,|x-a_1|^{\theta} - |x-a_2|^{\theta}\big|\,|\nabla u_2|\,|\nabla w|\,dx \le \epsilon\,\|\nabla w\|_{L^2(|x-a_1|^{\theta}dx)}^2 + C_\epsilon\,\|a_1 - a_2\|_{\mathbb{R}^d}^{2\theta}\,\|\nabla u_2\|_{L^2}^2.$
By Theorem 2.1, the SDO has a positive first eigenvalue $\lambda_1 > 0$:
$\int_\Omega |x-a_1|^{\theta}\,|\nabla w|^2\,dx \ge \lambda_1\,\|w\|_{L^2}^2.$
Combining the three estimates above and bounding the nonlinear term by $\|\nabla u_2\|_{L^\infty}\,\|w\|_{L^2}^2$, we obtain a Grönwall inequality
$\frac{d}{dt}\|w\|_{L^2}^2 + \lambda_1\,\|w\|_{L^2}^2 \le C\,\|a_1 - a_2\|^{2\theta} + C\,\|w\|_{L^2}^2.$
Using the boundary measurement condition and standard elliptic estimates (or Carleman inequalities for SDOs [1]), the interior norm $\|w\|_{L^2(\Omega)}$ is controlled in terms of $\delta$. Solving the Grönwall inequality then gives
$\|a_1 - a_2\|_{\mathbb{R}^d} \le C\,\delta^{\gamma}$
for some $\gamma \in (0,1]$, completing the proof. □
Remark 3.4
(Interpretation). This result guarantees Lipschitz-type stability for inverse calibration of degeneracy points: small changes in boundary fluxes lead to controlled shifts in the inferred centers $a_l$. It forms the theoretical foundation for learning adaptive attention centers in SDO-Nets from partial or boundary measurements.
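A toy version of this calibration can be set up in one dimension for the stationary problem $-(|x-a|^{\theta}u')' = f$: the weighted flux at $x=1$ is "measured" for a hidden center and the center is recovered by minimizing the misfit over candidates. The sketch below uses strong simplifications (steady, scalar, essentially noiseless data) and a plain grid search, so it only illustrates the calibration idea, not the Navier–Stokes statement of Theorem 3.3.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def boundary_flux(a, theta=1.4, n=2000):
    """Boundary observation |x-a|^theta u'(1) for -(|x-a|^theta u')' = f, u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)
    w = np.abs(xh - a) ** theta
    L = diags([-w[1:-1] / h**2, (w[:-1] + w[1:]) / h**2, -w[1:-1] / h**2],
              offsets=[-1, 0, 1], format="csc")
    f = np.exp(-100.0 * (x - 0.6) ** 2)          # fixed, known source term
    u = spsolve(L, f)
    return w[-1] * (0.0 - u[-1]) / h             # one-sided flux at x = 1 (u(1) = 0)

a_true = 0.35
obs = boundary_flux(a_true) + 1e-6               # "measured" flux with a tiny perturbation

# inverse calibration by grid search over candidate degeneracy points
candidates = np.linspace(0.05, 0.95, 181)
misfit = [abs(boundary_flux(a) - obs) for a in candidates]
print("recovered a =", candidates[int(np.argmin(misfit))], " true a =", a_true)
```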

3.3. Neural-Turbulence Correspondence

Theorem 3.5
(Neural-Turbulence Correspondence – Refined). Let $\mathcal{T}_{\mathrm{NN}}: L^2(\Omega;\mathbb{R}^d) \to L^2(\Omega;\mathbb{R}^{d\times d})$ be an SDO-Net trained to minimize the residual energy functional
$E_N(\mathcal{T}_{\mathrm{NN}}) = \big\|\partial_t \bar u + (\bar u\cdot\nabla)\bar u + \nabla\bar p - \nabla \cdot \big(|x-a|^{\theta} \odot \nabla\bar u\big) - \nabla \cdot \mathcal{T}_{\mathrm{NN}}(\bar u)\big\|_{L^2(\Omega)}.$
Assume:
  • The dataset $\{\bar u_i\}_{i=1}^{N}$ is dense in the function space of resolved velocities as $N \to \infty$.
  • The SDO-Net satisfies the Lipschitz stability property from Theorem 3.3.
  • The loss functional $E_N$ is equi-coercive and lower semicontinuous with respect to the degeneracy points $a_N$.
Then, as $N \to \infty$, the learned degeneracy points $a_N$ converge to the true turbulence structures $a^*$ in $L^1$:
$\lim_{N\to\infty} \|a_N - a^*\|_{L^1(\Omega)} = 0.$
Proof. 
The proof is based on a three-step argument combining consistency, stability, and variational convergence:
By the Germano identity [18], the true subgrid stress satisfies
$\nabla \cdot \mathcal{T}^{*}(\bar u) = \lim_{N\to\infty} \nabla \cdot \mathcal{T}_{\mathrm{NN}}(\bar u_N),$
ensuring that the SDO-Net can approximate the exact residual as $N \to \infty$.
Applying Theorem 3.3 to the mapping
$a \mapsto -\nabla \cdot \big(|x-a|^{\theta} \odot \nabla\bar u\big) + \nabla \cdot \mathcal{T}_{\mathrm{NN}}(\bar u)$
guarantees that small residual errors in $E_N$ induce controlled deviations in the inferred $a_N$:
$\|a_N - a^*\|_{\mathbb{R}^d} \le C\,E_N(\mathcal{T}_{\mathrm{NN}})^{\gamma}.$
Let $E(a) = \lim_{N\to\infty} E_N(a)$. By equi-coercivity and lower semicontinuity,
$a_N \to \arg\min_a E(a) = a^*.$
Combining the last two displays yields
$\|a_N - a^*\|_{L^1(\Omega)} \le C\,E_N(\mathcal{T}_{\mathrm{NN}})^{\gamma} \to 0 \quad \text{as } N \to \infty.$
Hence, the SDO-Net learns degeneracy points that asymptotically align with true turbulent structures. □
Remark 3.6
(Practical Implication). The theorem rigorously justifies the use of trainable degeneracy points in SDO-Nets: during large-data training, these points act as learned attention centers, automatically localizing key turbulence structures in a physically consistent way.
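The calibration loop suggested here (trainable degeneracy points fitted against resolved data) can be caricatured in one dimension. The sketch below replaces the Navier–Stokes residual functional $E_N$ with a simple output least-squares misfit against data generated from a hidden center $a^*$, and optimizes the scalar center with a bounded scalar minimizer; identifiability of this surrogate, and every name and value in the snippet, are assumptions made only for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve
from scipy.optimize import minimize_scalar

def solve_sdo(a, theta, f):
    """u = L_{a,theta}^{-1} f on (0,1) with Dirichlet conditions (conservative FD scheme)."""
    n = f.size
    h = 1.0 / (n + 1)
    xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)
    w = np.abs(xh - a) ** theta
    L = diags([-w[1:-1] / h**2, (w[:-1] + w[1:]) / h**2, -w[1:-1] / h**2],
              offsets=[-1, 0, 1], format="csc")
    return spsolve(L, f)

n, theta, a_true = 800, 1.3, 0.62
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = 1.0 + np.sin(2 * np.pi * x)
u_bar = solve_sdo(a_true, theta, f)               # "resolved" data from the hidden center a*

def calibration_loss(a):
    """Simplified stand-in for E_N: data misfit of the model with candidate center a."""
    return h * float(np.sum((solve_sdo(a, theta, f) - u_bar) ** 2))

res = minimize_scalar(calibration_loss, bounds=(0.05, 0.95), method="bounded")
print("learned degeneracy point:", round(res.x, 4), " true:", a_true)
```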

4. Results

We summarize here the main theoretical and computational contributions developed in the preceding sections:
  • Spectral theory of SDOs. We proved self-adjointness, compact resolvent, and a complete tensor-product basis of eigenfunctions with Bessel-type asymptotics, enabling efficient spectral representations of SDO layers.
  • Stability and inverse calibration. Using min–max principles and Carleman-type estimates we derived Lipschitz-type stability bounds for recovering degeneracy points from boundary measurements in degenerate Navier–Stokes systems.
  • Universal approximation for turbulence closure. We established that SDO-Nets approximate arbitrary divergence-free closure operators on compact subsets of L 2 , combining spectral expansions with neural coefficient maps.
  • Neural–turbulence correspondence. We proved convergence of trainable degeneracy points to true turbulent structures as the amount of training data increases, rigorously justifying their interpretation as learned attention centers.
These results jointly demonstrate that SDO-based architectures retain the expressive power of modern neural operators while inheriting structural guarantees from the underlying PDEs.

5. Conclusions

We have introduced and analyzed Spectral Degeneracy Operators as a bridge between degenerate PDE theory and physics-informed neural networks. Our analysis shows that SDO layers act as stable, adaptive, and interpretable spectral filters capable of representing anisotropic and intermittent features of turbulence without violating conservation laws. The inverse stability results provide a theoretical foundation for data-driven calibration of degeneracy points, and the universality theorem establishes the approximation power of SDO-Nets for turbulence closures. Future work will extend these ideas to fully three-dimensional, time-dependent turbulent flows, incorporate stochastic parameterizations, and explore SDO-based architectures in other domains such as geophysical fluid dynamics and nuclear-reactor thermo-hydraulics.

Symbols and Nomenclature

$\Omega \subset \mathbb{R}^d$: bounded Lipschitz domain.
$x = (x_1,\dots,x_d)$: spatial coordinates.
$a \in \Omega$: degeneracy (attention) centers; trainable layer parameters.
$\theta \in [1,2)^d$: degeneracy exponents controlling anisotropy.
$D_{a,\theta}(x)$: diagonal degeneracy matrix $\mathrm{diag}(|x_1-a_1|^{\theta_1},\dots,|x_d-a_d|^{\theta_d})$.
$\mathcal{L}_{a,\theta}$: spectral degeneracy operator $\mathcal{L}_{a,\theta} = -\nabla \cdot (D_{a,\theta}\nabla\,\cdot)$ on $H^1_0(\Omega)$.
$H^1_\theta(\Omega)$: weighted Sobolev space associated with $\mathcal{L}_{a,\theta}$.
$\lambda_k(a,\theta)$: $k$-th eigenvalue of $\mathcal{L}_{a,\theta}$ (nondecreasing order).
$\phi_k(\cdot;a,\theta)$: corresponding eigenfunction, orthonormal in $L^2(\Omega)$.
$G(x,y;a,\theta)$: Green's function solving $\mathcal{L}_{a,\theta}G(\cdot,y) = \delta(\cdot-y)$ with homogeneous Dirichlet boundary conditions.
$u^{l}, u^{l+1}$: input / output at layer $l$ of an SDO-Net.
$W_l, b_l$: linear weight operator and bias at layer $l$.
$\sigma(\cdot)$: Lipschitz activation function.
$\mathcal{T}(u)$: true turbulence closure operator (velocity ↦ stress tensor).
$\mathcal{T}_{\mathrm{NN}}(u)$: SDO-Net approximation of $\mathcal{T}(u)$.
$P_{\mathrm{div}}$: Helmholtz–Hodge projection onto divergence-free tensor fields.
$j_{\nu,k}$: $k$-th positive zero of the Bessel function $J_\nu$.
$\delta$: update norm between training steps, $\delta = \|a^{(t+1)} - a^{(t)}\|_{\mathbb{R}^d} + \|\theta^{(t+1)} - \theta^{(t)}\|_{\mathbb{R}^d}$.
Notation. Bold lowercase denotes vectors; bold uppercase denotes operators or matrices. The inner product is $\langle\cdot,\cdot\rangle_{L^2(\Omega)}$. Norms default to $L^2(\Omega)$ unless otherwise stated. Superscript $(t)$ denotes the training iteration index.

Acknowledgments

Santos gratefully acknowledges the support of the PPGMC Program for the Postdoctoral Scholarship PROBOL/UESC nr. 218/2025. Sales would like to express his gratitude to CNPq for the financial support under grant 30881/2025-0.

References

  1. Cannarsa, P., Doubova, A., & Yamamoto, M. (2024). Reconstruction of degenerate conductivity region for parabolic equations. Inverse Problems, 40(4), 045033. [CrossRef]
  2. Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707. [CrossRef]
  3. DiBenedetto, E. (2012). Degenerate parabolic equations. Springer Science & Business Media.
  4. Díaz, J. I. (1985). Nonlinear partial differential equations and free boundaries. Elliptic Equations. Research Notes in Math., 1, 106.
  5. Oleinik, O. (2012). Second-order equations with nonnegative characteristic form. Springer Science & Business Media.
  6. Hussein, M. S., Lesnic, D., Kamynin, V. L., & Kostin, A. B. (2020). Direct and inverse source problems for degenerate parabolic equations. Journal of Inverse and Ill-Posed Problems, 28(3), 425-448. [CrossRef]
  7. Kamynin, V. L. (2018). On inverse problems for strongly degenerate parabolic equations under the integral observation condition. Computational Mathematics and Mathematical Physics, 58(12), 2002-2017. [CrossRef]
  8. Finzi, M., Stanton, S., Izmailov, P., & Wilson, A. G. (2020, November). Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In International conference on machine learning (pp. 3165-3176). PMLR.
  9. Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., & Vandergheynst, P. (2017). Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4), 18-42. [CrossRef]
  10. Cohen, T., & Welling, M. (2016, June). Group equivariant convolutional networks. In International conference on machine learning (pp. 2990-2999). PMLR.
  11. Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485. [CrossRef]
  12. Sagaut, P. (2006). Large eddy simulation for incompressible flows: an introduction. Berlin, Heidelberg: Springer Berlin Heidelberg.
  13. Pope, S. B. (2000). Turbulent Flows. Cambridge University Press.
  14. Beck, A. D., Flad, D. G., & Munz, C. D. (2018). Deep neural networks for data-driven turbulence models. arXiv preprint arXiv:1806.04482. [CrossRef]
  15. Xiao, M. J., Yu, T. C., Zhang, Y. S., & Yong, H. (2023). Physics-informed neural networks for the Reynolds-Averaged Navier–Stokes modeling of Rayleigh–Taylor turbulent mixing. Computers & Fluids, 266, 106025.
  16. Watson, G. N. (1944). A Treatise on the Theory of Bessel Functions (2nd ed.). Cambridge University Press.
  17. Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4), 303-314. [CrossRef]
  18. Germano, M., Piomelli, U., Moin, P., & Cabot, W. H. (1991). A dynamic subgrid-scale eddy viscosity model. Physics of fluids a: Fluid dynamics, 3(7), 1760-1765. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.