1. Introduction
The mathematical modeling of turbulent flows has long relied on closure operators that mediate the interaction between resolved and unresolved scales [12,17]. Classical frameworks such as Reynolds-Averaged Navier–Stokes (RANS) and Large Eddy Simulation (LES) are built upon phenomenological assumptions which, despite their practical relevance, often fail to accurately capture key features of turbulence, including anisotropic dissipation, intermittency, and localized coherent structures. In recent years, data-driven methods, most notably physics-informed neural networks (PINNs) [7] and neural operator architectures [6], have provided powerful alternatives by embedding physical laws into machine learning models. Nevertheless, these approaches typically lack a unified analytic foundation that simultaneously: (i) introduces physically meaningful, trainable anisotropic singularities into neural representations; (ii) preserves the spectral and variational structure of the underlying partial differential equations (PDEs); and (iii) admits rigorous inverse, stability, and convergence guarantees.
A major step toward bridging this gap was achieved in [1], where the authors introduced Spectral Degeneracy Operators (SDOs) as a mathematically rigorous and computationally viable framework that couples degenerate PDE theory, spectral analysis, and modern neural architectures for turbulence modeling. The central idea is to embed adaptive singularities and symmetry structures directly into neural layers via differential operators with trainable degeneracy centers. This construction yields layers that act as anisotropic, data-adaptive spectral filters while retaining physical interpretability.
The SDO framework establishes a complete spectral theory, including self-adjointness, compact resolvent, and a tensorized Bessel-type eigenbasis; derives Lipschitz stability estimates for inverse calibration of degeneracy points from boundary measurements; proves a universality theorem ensuring that SDO-based networks approximate any divergence-free turbulence closure operator; and introduces a neural–turbulence correspondence principle demonstrating that learned degeneracy centers converge to physically meaningful turbulent structures during training. Consequently, SDOs provide a mathematically principled and physically consistent alternative to traditional turbulence closures and standard neural networks, combining expressive power with structural guarantees.
In this work, we significantly advance the analytical foundations of Spectral Degeneracy Operators (SDOs), a class of anisotropic, degenerate elliptic operators that serve both as fundamental objects of PDE theory and as parametric modules in neural architectures. For a bounded Lipschitz domain $\Omega \subset \mathbb{R}^d$, a degeneracy center $a = (a_1,\dots,a_d) \in \Omega$, and exponents $\theta = (\theta_1,\dots,\theta_d)$ with $\theta_i \in [0,1)$, we consider
$$\mathcal{L}_{a,\theta}\, u \;=\; -\sum_{i=1}^{d} \partial_{x_i}\!\left( |x_i - a_i|^{\theta_i}\, \partial_{x_i} u \right).$$
Operators of this form naturally blend Bessel-type singular Sturm–Liouville behavior with directionally adaptive spectral structure (cf. [2,11,18]). While prior work outlined their spectral decomposition and relevance to turbulence modeling, a complete analytic theory addressing regularity, stability, and variational limits remained undeveloped. This manuscript fills this gap by establishing: (1) a comprehensive fractional regularity theory for elliptic SDO problems; (2) a detailed spectral stability analysis for deep compositions of SDO networks under parameter perturbations; and (3) a full variational limit theory, in the sense of Γ-convergence, for SDO-induced energy functionals.
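To fix ideas, the following minimal sketch (our own illustration, not code from [1]; grid size, center, and exponent are assumed values) assembles a finite-difference discretization of the one-dimensional model operator $-\partial_x(|x-a|^{\theta}\partial_x u)$ on $(0,1)$ with homogeneous Dirichlet conditions and inspects its low-lying spectrum.

```python
import numpy as np

def sdo_matrix_1d(n=400, a=0.5, theta=0.5):
    """Finite-difference matrix for -d/dx(|x-a|^theta du/dx) on (0,1), Dirichlet BCs."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)               # interior nodes
    xm = np.linspace(h / 2, 1.0 - h / 2, n + 1)  # flux (midpoint) nodes
    w = np.abs(xm - a) ** theta                  # degenerate weight at midpoints
    L = (np.diag(w[:-1] + w[1:])
         - np.diag(w[1:-1], 1)
         - np.diag(w[1:-1], -1)) / h**2
    return x, L

x, L = sdo_matrix_1d()
print(np.linalg.eigvalsh(L)[:5])   # low-lying eigenvalues of the discrete SDO
```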
1.1. Mathematical Motivations and Principal Contributions
Three core mathematical motivations drive this study. First, degeneracy centers (trainable parameters a) act as adaptive attention loci within a physically grounded operator basis; understanding the sensitivity of eigenvalues and eigenmodes with respect to variations in a is crucial for ensuring robust training and model interpretability. Second, regularity estimates in weighted Sobolev and Besov spaces provide rigorous control of approximation errors and justify spectral truncation strategies employed in practical implementations. Third, variational convergence connects discrete, network-induced energy landscapes with continuum turbulence limits, ensuring consistency between training objectives and physical modeling.
Accordingly, our principal contributions are:
A comprehensive fractional regularity theory for solutions of $\mathcal{L}_{a,\theta} u = f$ in weighted Sobolev and Besov scales, establishing interior and boundary estimates (Theorem 3.5).
A spectral stability theorem for deep SDO networks that quantifies how compositions of SDO inverses and linear readouts preserve modal amplitudes and eigenvalue clusters under small perturbations of $(a, \theta)$, yielding practical bounds for training stability (Theorem 4.2).
A complete Γ-convergence framework linking discrete and parametric SDO energy functionals to a continuum variational limit associated with turbulent energy dissipation.
Strengthened and fully rigorous proofs of inverse calibration stability and universality of divergence-free SDO networks.
1.2. Related Mathematical Work
The present analysis lies at the intersection of several mature areas of mathematical research. First, our framework builds on the classical theory of degenerate elliptic and parabolic PDEs, where foundational results on well-posedness, weighted Sobolev spaces, and regularity were established in [11] and further developed in boundary layer settings by [14]. These works provide the analytic backbone for handling degeneracies such as those induced by SDO weights.
Second, the stability of inverse problems for degenerate operators is essential for the recovery of degeneracy centers. Recent advances include Lipschitz-type reconstruction results for parabolic equations with degeneracy [2], direct and inverse source problems [4], and integral-observation-based stability results in strongly degenerate settings [8]. These studies motivate and inform our inverse calibration stability theorems for SDOs.
Third, our spectral analysis draws on singular Sturm–Liouville and Bessel theory, with Watson’s classical treatment [18] providing the primary asymptotic tools necessary to characterize the eigenstructure of separable degenerate operators. This spectral foundation is crucial for developing fractional powers, weighted Besov scales, and modewise stability estimates.
In the context of modern machine learning, our work connects with the emerging field of neural operator approximation. Neural operators and graph kernel networks [6] have demonstrated the ability to learn solution operators to PDEs in infinite-dimensional function spaces, establishing universal approximation properties and stability guarantees. Complementary to this, physics-informed neural networks (PINNs) [7] enforce differential constraints via variational residual minimization, motivating the integration of PDE structure directly into training objectives.
Finally, our variational limit theory is grounded in the classical De Giorgi–Dal Maso framework for Γ-convergence [15]. By adapting the classical compactness and lower semicontinuity techniques of that framework to anisotropic, degenerate Dirichlet energies, we obtain variational limits tailored to SDO parameterizations.
2. Mathematical Preliminaries and Functional Analytic Framework
We consider a bounded Lipschitz domain $\Omega \subset \mathbb{R}^d$ and denote spatial variables by $x = (x_1, \dots, x_d)$. For a measurable weight function $w : \Omega \to (0, \infty)$, we define $L^2_w(\Omega)$ as the weighted $L^2$ space equipped with the norm
$$\|u\|_{L^2_w(\Omega)}^2 \;=\; \int_\Omega w(x)\, |u(x)|^2\, dx.$$
Our primary focus will be on separable weights of the form
$$w(x) \;=\; \prod_{i=1}^{d} |x_i - a_i|^{\theta_i}, \qquad a = (a_1,\dots,a_d) \in \Omega, \quad \theta_i \in [0,1),$$
associated with a degeneracy center $a$.
2.1. Weighted Sobolev Spaces and Trace Theory
To rigorously analyze degenerate elliptic problems, we require specialized function spaces that account for the anisotropic weighting in the differential operator. This subsection introduces the weighted Sobolev space framework and establishes fundamental inequalities that underpin the well-posedness and regularity theory.
Definition 2.1 (Weighted Sobolev Space).
Let $\Omega \subset \mathbb{R}^d$ be a bounded domain, $a \in \Omega$, and $\theta = (\theta_1, \dots, \theta_d)$ with $\theta_i \in [0,1)$ for each $i$. The weighted Sobolev space $H^1_{a,\theta}(\Omega)$
is defined as the completion of $C^\infty(\overline{\Omega})$ with respect to the norm
$$\|u\|_{H^1_{a,\theta}(\Omega)}^2 \;=\; \|u\|_{L^2(\Omega)}^2 \;+\; \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx. \tag{2.3}$$
This space is a Hilbert space when equipped with the inner product
$$(u, v)_{H^1_{a,\theta}} \;=\; (u, v)_{L^2(\Omega)} \;+\; \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, \partial_{x_i} u\; \partial_{x_i} v\, dx.$$
We write $H^1_{a,\theta,0}(\Omega)$ for the closure of $C^\infty_c(\Omega)$ in this norm, encoding homogeneous Dirichlet boundary conditions.
Remark 2.2. The restriction $\theta_i \in [0,1)$ ensures:
The weight $|x_i - a_i|^{\theta_i}$ is integrable near the degeneracy set $\{x_i = a_i\}$ (critical for the $L^2$-theory),
The space $H^1_{a,\theta}(\Omega)$ is compactly embedded in $L^2(\Omega)$ (via a weighted Rellich–Kondrachov theorem),
The Poincaré inequality holds uniformly (see Lemma 2.3).
For $\theta_i \ge 1$, the reciprocal weight $|x_i - a_i|^{-\theta_i}$ may fail to be locally integrable, complicating the analysis.
Standard density results extend to this weighted setting under natural assumptions on $a$ and $\theta$. Specifically, $C^\infty(\overline{\Omega})$ is dense in $H^1_{a,\theta}(\Omega)$ provided the weights are locally integrable and the boundary is sufficiently regular (e.g., Lipschitz). For a comprehensive treatment of weighted Sobolev spaces, we refer to [2,11].
The following weighted Poincaré inequality is fundamental for establishing coercivity of the bilinear form associated with degenerate operators:
Lemma 2.3 (Weighted Poincaré Inequality).
There exists a constant $C_P > 0$, depending only on Ω, $a$, and θ, such that for all $u \in H^1_{a,\theta,0}(\Omega)$,
$$\|u\|_{L^2(\Omega)}^2 \;\le\; C_P \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx. \tag{2.4}$$
Proof. The proof proceeds in three steps:
1. One-Dimensional Reduction. Fix a direction $i \in \{1, \dots, d\}$ and consider the one-dimensional weighted Poincaré inequality along the $x_i$-direction. For each $x' = (x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_d)$, define the slice
$$\Omega_{x'} \;=\; \{\, t \in \mathbb{R} : (x_1, \dots, x_{i-1}, t, x_{i+1}, \dots, x_d) \in \Omega \,\}.$$
By Fubini’s theorem, it suffices to establish the inequality for almost every slice $\Omega_{x'}$. For fixed $x'$, the one-dimensional weighted Poincaré inequality states:
$$\int_{\Omega_{x'}} |u|^2\, dt \;\le\; C \int_{\Omega_{x'}} |t - a_i|^{\theta_i}\, |\partial_t u|^2\, dt,$$
where $C$ depends on $\theta_i$ and the length of $\Omega_{x'}$. This follows from classical one-dimensional weighted Hardy inequalities (see [11]), valid since $\theta_i < 1$.
2. Partition of Unity. To globalize the estimate, we employ a partition of unity subordinate to a cover of $\overline{\Omega}$ by coordinate-aligned cylinders. For each cylinder $Q$, we apply the one-dimensional inequality along the $x_i$-direction, summing over all directions and cylinders. The condition $\theta_i \in [0,1)$ ensures that the weight controls the $L^2$-norm locally, while the Dirichlet boundary condition (in the trace sense) guarantees that the Poincaré constant is uniform across slices.
3. Synthesis. Summing the one-dimensional inequalities over all coordinates and integrating over the transverse variables, we obtain:
$$\|u\|_{L^2(\Omega)}^2 \;\le\; C_P \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx.$$
The constant $C_P$ depends on the diameter of $\Omega$, the maximum of the $\theta_i$, and the geometry of the partition. This completes the proof. □
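As a sanity check on Lemma 2.3, the optimal one-dimensional constant can be estimated numerically: $C_P$ is the reciprocal of the smallest Dirichlet eigenvalue of the weighted operator. The sketch below is our own construction under assumed parameters (degeneracy placed at the left endpoint, unit interval, finite-difference discretization).

```python
import numpy as np
from scipy.linalg import eigh

def poincare_constant(theta, n=1000):
    """Estimate the optimal C_P for -(x^theta u')' on (0,1) with Dirichlet BCs."""
    h = 1.0 / (n + 1)
    xm = (np.arange(n + 1) + 0.5) * h            # flux (midpoint) nodes
    w = xm ** theta                              # weight degenerating at x = 0
    L = (np.diag(w[:-1] + w[1:])
         - np.diag(w[1:-1], 1)
         - np.diag(w[1:-1], -1)) / h**2
    lam_min = eigh(L, eigvals_only=True, subset_by_index=[0, 0])[0]
    return 1.0 / lam_min                         # C_P = 1 / smallest eigenvalue

for theta in [0.0, 0.5, 0.9]:
    print(theta, poincare_constant(theta))       # C_P grows as theta increases
```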
Corollary 2.4 (Equivalence of Weighted Norms with Explicit Domain Dependence).
Let $\Omega \subset \mathbb{R}^d$ be a bounded domain contained in the ball $B_R(0)$ for some $R > 0$. Let $H^1_{a,\theta,0}(\Omega)$ be the weighted Sobolev space with weights $|x_i - a_i|^{\theta_i}$, endowed with the norm
$$\|u\|_{H^1_{a,\theta}}^2 \;=\; \|u\|_{L^2(\Omega)}^2 \;+\; \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx. \tag{2.5}$$
Define the corresponding semi-norm
$$|u|_{H^1_{a,\theta}}^2 \;=\; \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx. \tag{2.6}$$
Then there exists a constant $0 < c_1 \le 1$, depending explicitly on $R$, Ω, and the weights $\theta_i$, such that
$$c_1\, \|u\|_{H^1_{a,\theta}} \;\le\; |u|_{H^1_{a,\theta}} \;\le\; \|u\|_{H^1_{a,\theta}} \qquad \text{for all } u \in H^1_{a,\theta,0}(\Omega). \tag{2.7}$$
Proof. The upper bound in (2.7) is immediate from the definition (2.5), since all integrals are nonnegative.
For the lower bound, we apply the weighted Poincaré inequality (see (2.4)), which explicitly depends on the domain radius $R$ and the weights $\theta_i$: there exists a constant $C_P(R, \theta) > 0$ such that
$$\|u\|_{L^2(\Omega)}^2 \;\le\; C_P(R, \theta)\, |u|_{H^1_{a,\theta}}^2. \tag{2.8}$$
Using (2.8), we have
$$\|u\|_{H^1_{a,\theta}}^2 \;=\; \|u\|_{L^2(\Omega)}^2 + |u|_{H^1_{a,\theta}}^2 \;\le\; \big(1 + C_P(R, \theta)\big)\, |u|_{H^1_{a,\theta}}^2.$$
Defining $c_1 = \big(1 + C_P(R, \theta)\big)^{-1/2}$ gives the lower bound
$$c_1\, \|u\|_{H^1_{a,\theta}} \;\le\; |u|_{H^1_{a,\theta}}.$$
Hence, the equivalence (2.7) holds with constants depending explicitly on the domain radius $R$ and weights $\theta_i$. □
2.2. Bessel Potential and Weighted Besov Scales
Bessel potential spaces $H^s(\Omega)$ and Besov spaces $B^s_{p,q}(\Omega)$ provide a natural framework for quantifying fractional-order regularity. To handle degeneracies induced by the weights $|x_i - a_i|^{\theta_i}$, we employ anisotropic weighted Bessel potential spaces, denoted by $H^s_{a,\theta}(\Omega)$ and $B^s_{p,q;a,\theta}(\Omega)$. These spaces can be constructed via localization and pullback to one-dimensional model problems near each degeneracy center $a_i$, combined with standard extension operators and partitions of unity.
Formally, for $s \ge 0$, the weighted Bessel potential space $H^s_{a,\theta}(\Omega)$ is defined via fractional powers of the weighted Laplacian $\mathcal{L}_{a,\theta}$:
$$H^s_{a,\theta}(\Omega) \;=\; \Big\{\, u \in L^2(\Omega) \;:\; (I + \mathcal{L}_{a,\theta})^{s/2} u \in L^2(\Omega) \,\Big\},$$
where
$$\|u\|_{H^s_{a,\theta}(\Omega)} \;=\; \big\| (I + \mathcal{L}_{a,\theta})^{s/2} u \big\|_{L^2(\Omega)}, \tag{2.12}$$
and similarly for weighted Besov spaces $B^s_{p,q;a,\theta}(\Omega)$, using interpolation and a modulus of smoothness adapted to the weights (see [18]). These constructions ensure that the weighted fractional spaces inherit embedding, interpolation, and compactness properties analogous to the classical unweighted scales, with explicit dependence on the weights $\theta_i$.
2.3. Spectral Theory for Weighted SDOs
Consider the differential operator
$$\mathcal{L}_{a,\theta}\, u \;=\; -\sum_{i=1}^{d} \partial_{x_i}\big( |x_i - a_i|^{\theta_i}\, \partial_{x_i} u \big)$$
and its associated bilinear form
$$\mathfrak{a}(u, v) \;=\; \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, \partial_{x_i} u\; \partial_{x_i} v\, dx, \qquad u, v \in H^1_{a,\theta,0}(\Omega).$$
The form $\mathfrak{a}$ is symmetric, continuous, and coercive on $H^1_{a,\theta,0}(\Omega)$ by the weighted Poincaré inequality of Lemma 2.3. Applying the Lax–Milgram theorem and using the compact embedding
$$H^1_{a,\theta,0}(\Omega) \hookrightarrow\hookrightarrow L^2(\Omega), \tag{2.16}$$
which follows from weighted Rellich–Kondrachov-type theorems, we deduce that $\mathcal{L}_{a,\theta}$ with Dirichlet boundary conditions is self-adjoint with compact resolvent.
Consequently, there exists a complete orthonormal basis $\{\phi_k\}_{k \in \mathbb{N}}$ of $L^2(\Omega)$ satisfying
$$\mathcal{L}_{a,\theta}\, \phi_k \;=\; \lambda_k\, \phi_k, \qquad 0 < \lambda_1 \le \lambda_2 \le \cdots \to \infty. \tag{2.17}$$
The eigenfunctions $\phi_k$ are smooth away from the degeneracy set $\{x_i = a_i\}$, and their tensorized Bessel-type asymptotics in coordinate-separable domains follow from one-dimensional singular Sturm–Liouville theory (see [18]). These asymptotics provide the foundation for defining fractional powers $\mathcal{L}_{a,\theta}^{s/2}$ and for characterizing $H^s_{a,\theta}(\Omega)$ and $B^s_{p,q;a,\theta}(\Omega)$ in terms of spectral expansions.
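The spectral calculus above is easy to emulate discretely. The sketch below is our own illustration under assumed parameters (the discrete eigenbasis of the 1D finite-difference SDO stands in for $\{\phi_k\}$); it diagonalizes the operator and evaluates the spectral Bessel norm $\big(\sum_k (1+\lambda_k)^s |\langle u,\phi_k\rangle|^2\big)^{1/2}$ of a smooth test field.

```python
import numpy as np

n, a, theta, s = 800, 0.5, 0.6, 0.75
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
xm = np.linspace(h / 2, 1 - h / 2, n + 1)
w = np.abs(xm - a) ** theta
L = (np.diag(w[:-1] + w[1:]) - np.diag(w[1:-1], 1) - np.diag(w[1:-1], -1)) / h**2

lam, phi = np.linalg.eigh(L)          # ascending eigenvalues, orthonormal columns
phi = phi / np.sqrt(h)                # renormalize in the discrete L^2(0,1) product

u = np.sin(np.pi * x) * np.exp(-x)    # smooth test field
coef = h * (phi.T @ u)                # spectral coefficients <u, phi_k>
print(np.sqrt(np.sum((1 + lam) ** s * coef**2)))   # discrete spectral Bessel norm
```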
2.4. Equivalence of Weighted Spectral and Bessel Norms
For $s \ge 0$ and $u \in L^2(\Omega)$, we define the spectral norm
$$\|u\|_{s,\mathrm{spec}}^2 \;=\; \sum_{k=1}^{\infty} (1 + \lambda_k)^s\, |\langle u, \phi_k \rangle|^2, \tag{2.18}$$
where $\langle \cdot, \cdot \rangle$ denotes the weighted $L^2$ inner product. Then the Bessel potential norm (2.12) and the spectral norm (2.18) are equivalent: there exist constants $0 < c \le C$, depending explicitly on $s$, θ, and the domain radius, such that
$$c\, \|u\|_{s,\mathrm{spec}} \;\le\; \|u\|_{H^s_{a,\theta}(\Omega)} \;\le\; C\, \|u\|_{s,\mathrm{spec}}. \tag{2.19}$$
Proof of (2.19). Let $u \in H^s_{a,\theta}(\Omega)$ with spectral decomposition
$$u \;=\; \sum_{k=1}^{\infty} \hat{u}_k\, \phi_k, \qquad \hat{u}_k = \langle u, \phi_k \rangle, \tag{2.20}$$
where $\phi_k$ are the eigenfunctions of $\mathcal{L}_{a,\theta}$ satisfying (2.17).
Lower bound:
By definition of the weighted Bessel potential norm (2.12) and the spectral decomposition (2.20), we have
$$\|u\|_{H^s_{a,\theta}}^2 \;=\; \big\| (I + \mathcal{L}_{a,\theta})^{s/2} u \big\|_{L^2}^2 \;\ge\; c^2 \sum_{k=1}^{\infty} (1 + \lambda_k)^s\, |\hat{u}_k|^2. \tag{2.21}$$
Hence, we can take $c$ depending only on the normalization of the eigenbasis, so that
$$c\, \|u\|_{s,\mathrm{spec}} \;\le\; \|u\|_{H^s_{a,\theta}}. \tag{2.22}$$
Upper bound:
We decompose the sum in (2.20) into low and high modes relative to a cutoff index $N$:
$$u \;=\; \sum_{k \le N} \hat{u}_k\, \phi_k \;+\; \sum_{k > N} \hat{u}_k\, \phi_k \;=:\; u_{\mathrm{low}} + u_{\mathrm{high}}. \tag{2.23}$$
The weighted Rellich embedding (2.16) implies
$$\|u_{\mathrm{low}}\|_{H^s_{a,\theta}}^2 \;\le\; C_N \sum_{k \le N} (1 + \lambda_k)^s\, |\hat{u}_k|^2, \tag{2.24}$$
and the high modes satisfy
$$\|u_{\mathrm{high}}\|_{H^s_{a,\theta}}^2 \;\le\; C \sum_{k > N} (1 + \lambda_k)^s\, |\hat{u}_k|^2. \tag{2.25}$$
Combining (2.24) and (2.25), we obtain
$$\|u\|_{H^s_{a,\theta}} \;\le\; C\, \|u\|_{s,\mathrm{spec}}. \tag{2.26}$$
Anisotropic case:
For anisotropic exponents $\theta = (\theta_1, \dots, \theta_d)$, the operator $\mathcal{L}_{a,\theta}$ is separable along coordinates. Using tensorization of one-dimensional spectral estimates and one-dimensional weighted Poincaré inequalities, the constants can be chosen to depend explicitly on each $\theta_i$ and the size of the domain in the $i$-th coordinate. This ensures that (2.19) holds uniformly for anisotropic weights.
Combining (2.22) and (2.26), we conclude the equivalence
$$c\, \|u\|_{s,\mathrm{spec}} \;\le\; \|u\|_{H^s_{a,\theta}(\Omega)} \;\le\; C\, \|u\|_{s,\mathrm{spec}},$$
as claimed. □
3. Weighted Poincaré Inequality and Asymptotic Analysis of $C_P(\theta)$
3.1. Weighted Poincaré Inequality
We establish a weighted Poincaré inequality, fundamental for coercivity of the bilinear form in $H^1_{a,\theta,0}(\Omega)$.
Lemma 3.1 (Weighted Poincaré Inequality).
Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain, and let $w \in L^\infty(\Omega)$ satisfy $0 < w_{\min} \le w(x) \le w_{\max}$ almost everywhere. Then there exists a constant $C_w > 0$ such that
$$\|u\|_{L^2_w(\Omega)}^2 \;\le\; C_w \int_\Omega w\, |\nabla u|^2\, dx \qquad \text{for all } u \in H^1_0(\Omega). \tag{3.1}$$
Furthermore, if $u \in H^1_0(\Omega)$, the weighted norm satisfies the two-sided bound
$$w_{\min}\, \|u\|_{L^2(\Omega)}^2 \;\le\; \|u\|_{L^2_w(\Omega)}^2 \;\le\; w_{\max}\, \|u\|_{L^2(\Omega)}^2. \tag{3.2}$$
Proof. Since $\Omega$ is bounded and Lipschitz, the standard Poincaré inequality ensures the existence of $C_P > 0$ such that
$$\|u\|_{L^2(\Omega)}^2 \;\le\; C_P\, \|\nabla u\|_{L^2(\Omega)}^2 \qquad \text{for all } u \in H^1_0(\Omega). \tag{3.3}$$
By definition, $H^1_0(\Omega)$ is the closure of $C^\infty_c(\Omega)$ in $H^1(\Omega)$, so (3.3) applies to all $u \in H^1_0(\Omega)$.
For $u \in H^1_0(\Omega)$, define the weighted norm
$$\|u\|_{L^2_w(\Omega)}^2 \;=\; \int_\Omega w(x)\, |u(x)|^2\, dx. \tag{3.4}$$
Since $w_{\min} \le w \le w_{\max}$,
$$w_{\min}\, \|u\|_{L^2(\Omega)}^2 \;\le\; \|u\|_{L^2_w(\Omega)}^2 \;\le\; w_{\max}\, \|u\|_{L^2(\Omega)}^2. \tag{3.5}$$
Combining (3.3) and (3.5), we have
$$\|u\|_{L^2_w(\Omega)}^2 \;\le\; w_{\max}\, C_P\, \|\nabla u\|_{L^2(\Omega)}^2.$$
Applying the lower bound in (3.5) to $\nabla u$,
$$\|\nabla u\|_{L^2(\Omega)}^2 \;\le\; \frac{1}{w_{\min}} \int_\Omega w\, |\nabla u|^2\, dx.$$
Hence,
$$\|u\|_{L^2_w(\Omega)}^2 \;\le\; \frac{w_{\max}\, C_P}{w_{\min}} \int_\Omega w\, |\nabla u|^2\, dx.$$
Conversely, using $w \ge w_{\min}$,
$$\|u\|_{L^2(\Omega)}^2 \;\le\; \frac{1}{w_{\min}}\, \|u\|_{L^2_w(\Omega)}^2.$$
Defining
$$C_w \;=\; \frac{w_{\max}\, C_P}{w_{\min}},$$
we conclude the weighted Poincaré inequality (3.1)–(3.2). □
3.2. Asymptotic Analysis of the Weighted Constant
We now study the asymptotic behavior of the weighted Poincaré constant as the anisotropic weights approach degenerate limits ($\theta_i \to 1^-$).
Proposition 3.2 (Asymptotic Behavior of $C_P(\theta)$).
Let $\Omega = \prod_{i=1}^{d} (0, \ell_i)$ be a rectangular domain, $a = 0$, and $\theta_i \in [0, 1)$ for $i = 1, \dots, d$. Denote by $C_P(\theta)$ the optimal constant in the weighted Poincaré inequality
$$\|u\|_{L^2(\Omega)}^2 \;\le\; C_P(\theta) \sum_{i=1}^{d} \int_\Omega x_i^{\theta_i}\, |\partial_{x_i} u|^2\, dx, \qquad u \in H^1_{a,\theta,0}(\Omega). \tag{3.11}$$
Then there exist constants $0 < c \le C$, depending only on the aspect ratios $\ell_1, \dots, \ell_d$, such that
$$c\, \max_{1 \le i \le d} \frac{\ell_i^{2-\theta_i}}{1 - \theta_i} \;\le\; C_P(\theta) \;\le\; C\, \max_{1 \le i \le d} \frac{\ell_i^{2-\theta_i}}{1 - \theta_i}. \tag{3.12}$$
Proof. Consider the one-dimensional weighted inequality
$$\int_0^{\ell_i} |v|^2\, dt \;\le\; C_i(\theta_i) \int_0^{\ell_i} t^{\theta_i}\, |v'|^2\, dt, \qquad v(0) = v(\ell_i) = 0. \tag{3.13}$$
A standard Hardy-type inequality (see [13]) implies
$$C_i(\theta_i) \;\le\; \frac{C\, \ell_i^{2-\theta_i}}{1 - \theta_i}. \tag{3.14}$$
For the rectangular domain $\Omega = \prod_{i=1}^{d} (0, \ell_i)$, the multi-dimensional weighted Poincaré inequality can be obtained by tensorizing the one-dimensional estimates:
$$\|u\|_{L^2(\Omega)}^2 \;\le\; C_\ast \sum_{i=1}^{d} \int_\Omega x_i^{\theta_i}\, |\partial_{x_i} u|^2\, dx, \tag{3.15}$$
where $C_\ast \le \max_i C_i(\theta_i)$. Taking the maximum over $i$ yields
$$C_P(\theta) \;\le\; C\, \max_{1 \le i \le d} \frac{\ell_i^{2-\theta_i}}{1 - \theta_i}. \tag{3.16}$$
To see that the scaling $(1 - \theta_i)^{-1}$ is sharp, consider test functions depending only on the most degenerate coordinate $x_{i^\ast}$, with $\theta_{i^\ast} = \max_i \theta_i$. Then the weighted Poincaré inequality reduces to (3.13), giving
$$C_P(\theta) \;\ge\; c\, \frac{\ell_{i^\ast}^{2-\theta_{i^\ast}}}{1 - \theta_{i^\ast}}. \tag{3.17}$$
Combining (3.16) and (3.17) establishes the asymptotic estimate (3.12). □
Remark 3.3. This analysis shows explicitly that the weighted Poincaré constant blows up as $\theta_i \to 1^-$, with the precise scaling $(1 - \theta_i)^{-1}$. Such estimates are essential for understanding the coercivity of degenerate bilinear forms and for bounding constants in spectral estimates for SDOs in highly anisotropic regimes.
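The predicted blow-up can be observed numerically. In the hedged experiment below (our own sketch, assuming the 1D Dirichlet model on $(0,1)$ with degeneracy at the boundary), the product $C_P(\theta)\,(1-\theta)$ should remain roughly constant as $\theta \to 1^-$ if the scaling of Proposition 3.2 holds.

```python
import numpy as np
from scipy.linalg import eigh

def c_p(theta, n=2000):
    """Optimal 1D weighted Poincare constant: 1/lambda_min of -(x^theta u')'."""
    h = 1.0 / (n + 1)
    xm = (np.arange(n + 1) + 0.5) * h
    w = xm ** theta                              # weight degenerating at x = 0
    L = (np.diag(w[:-1] + w[1:])
         - np.diag(w[1:-1], 1)
         - np.diag(w[1:-1], -1)) / h**2
    return 1.0 / eigh(L, eigvals_only=True, subset_by_index=[0, 0])[0]

for theta in [0.9, 0.95, 0.99]:
    print(theta, c_p(theta) * (1 - theta))       # roughly constant if scaling holds
```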
3.3. Caccioppoli-Type Estimates and Interior Regularity
A fundamental tool for establishing interior regularity of weak solutions to degenerate elliptic equations is the following degenerate Caccioppoli inequality. This inequality provides local control of the weighted gradient of the solution in terms of the solution itself and the right-hand side.
Lemma 3.4 (Degenerate Caccioppoli Inequality).
Let $B_R = B_R(x_0)$ be a ball such that $B_{2R} = B_{2R}(x_0) \subset \Omega$. Suppose $u \in H^1_{a,\theta}(B_{2R})$ is a weak solution to
$$\mathcal{L}_{a,\theta}\, u = f \quad \text{in } B_{2R}, \tag{3.18}$$
where the degenerate elliptic operator is defined by
$$\mathcal{L}_{a,\theta}\, u \;=\; -\sum_{i=1}^{d} \partial_{x_i}\big( w_i(x_i)\, \partial_{x_i} u \big), \qquad w_i(x_i) = |x_i - a_i|^{\theta_i}, \tag{3.19}$$
with $a \in \Omega$ and $\theta_i \in [0, 1)$ for each $i$. Assume $f \in L^2(B_{2R})$. Then, for any cutoff function $\eta \in C^\infty_c(B_{2R})$ with $0 \le \eta \le 1$, $\eta \equiv 1$ on $B_R$, and $|\nabla \eta| \le C_0 / R$, the following inequality holds:
$$\sum_{i=1}^{d} \int_{B_R} w_i\, |\partial_{x_i} u|^2\, dx \;\le\; \frac{C}{R^2} \int_{B_{2R}} u^2\, dx \;+\; C R^2 \int_{B_{2R}} f^2\, dx, \tag{3.20}$$
where $C = C(d, \theta_{\max})$ and $\theta_{\max} = \max_i \theta_i$.
Proof. The proof is structured in four steps.
1. Testing the equation. Multiply (3.18) by $\eta^2 u$ and integrate over $B_{2R}$:
$$\int_{B_{2R}} (\mathcal{L}_{a,\theta}\, u)\, \eta^2 u\, dx \;=\; \int_{B_{2R}} f\, \eta^2 u\, dx. \tag{3.21}$$
Substituting the expression for $\mathcal{L}_{a,\theta}$ from (3.19), we obtain
$$-\sum_{i=1}^{d} \int_{B_{2R}} \partial_{x_i}\big( w_i\, \partial_{x_i} u \big)\, \eta^2 u\, dx \;=\; \int_{B_{2R}} f\, \eta^2 u\, dx. \tag{3.22}$$
2. Integration by parts. Integrate by parts in the left-hand side of (3.22):
$$\sum_{i=1}^{d} \int_{B_{2R}} w_i\, \partial_{x_i} u\; \partial_{x_i}(\eta^2 u)\, dx \;=\; \int_{B_{2R}} f\, \eta^2 u\, dx. \tag{3.23}$$
Expanding $\partial_{x_i}(\eta^2 u) = \eta^2\, \partial_{x_i} u + 2 \eta u\, \partial_{x_i} \eta$, we get:
$$\sum_{i=1}^{d} \int_{B_{2R}} w_i\, \eta^2\, |\partial_{x_i} u|^2\, dx \;=\; -2 \sum_{i=1}^{d} \int_{B_{2R}} w_i\, \eta u\, \partial_{x_i} u\; \partial_{x_i} \eta\, dx \;+\; \int_{B_{2R}} f\, \eta^2 u\, dx. \tag{3.24}$$
3. Young’s inequality. For each $i$, apply Young’s inequality to the cross term in (3.24):
$$2 \left| \int_{B_{2R}} w_i\, \eta u\, \partial_{x_i} u\; \partial_{x_i} \eta\, dx \right| \;\le\; \frac{1}{2} \int_{B_{2R}} w_i\, \eta^2\, |\partial_{x_i} u|^2\, dx \;+\; 2 \int_{B_{2R}} w_i\, u^2\, |\partial_{x_i} \eta|^2\, dx. \tag{3.25}$$
Absorbing the first term on the right-hand side into the left-hand side of (3.24), we obtain
$$\frac{1}{2} \sum_{i=1}^{d} \int_{B_{2R}} w_i\, \eta^2\, |\partial_{x_i} u|^2\, dx \;\le\; 2 \sum_{i=1}^{d} \int_{B_{2R}} w_i\, u^2\, |\partial_{x_i} \eta|^2\, dx \;+\; \left| \int_{B_{2R}} f\, \eta^2 u\, dx \right|. \tag{3.26}$$
4. Estimating the right-hand side. For the source term on the right-hand side of (3.26), use the Cauchy–Schwarz and Young inequalities:
$$\left| \int_{B_{2R}} f\, \eta^2 u\, dx \right| \;\le\; \frac{R^2}{2} \int_{B_{2R}} f^2\, dx \;+\; \frac{1}{2 R^2} \int_{B_{2R}} u^2\, dx. \tag{3.27}$$
Since $0 \le \eta \le 1$ and $|\partial_{x_i} \eta| \le C_0 / R$, we have
$$\int_{B_{2R}} w_i\, u^2\, |\partial_{x_i} \eta|^2\, dx \;\le\; \frac{C}{R^2} \int_{B_{2R}} u^2\, dx, \tag{3.28}$$
absorbing the bounded factor $\sup_{B_{2R}} w_i$ into $C$. Thus,
$$\sum_{i=1}^{d} \int_{B_{2R}} w_i\, \eta^2\, |\partial_{x_i} u|^2\, dx \;\le\; \frac{C}{R^2} \int_{B_{2R}} u^2\, dx \;+\; R^2 \int_{B_{2R}} f^2\, dx. \tag{3.29}$$
Since $\eta \equiv 1$ on $B_R$, the left-hand side of (3.26) dominates the integral over $B_R$:
$$\sum_{i=1}^{d} \int_{B_R} w_i\, |\partial_{x_i} u|^2\, dx \;\le\; \sum_{i=1}^{d} \int_{B_{2R}} w_i\, \eta^2\, |\partial_{x_i} u|^2\, dx. \tag{3.30}$$
Combining (3.26), (3.29), and (3.30), and using the scaling $R^2$ for the $f$-term and $R^{-2}$ for the $u$-term, we obtain:
$$\sum_{i=1}^{d} \int_{B_R} w_i\, |\partial_{x_i} u|^2\, dx \;\le\; \frac{C}{R^2} \int_{B_{2R}} u^2\, dx \;+\; C R^2 \int_{B_{2R}} f^2\, dx. \tag{3.31}$$
This completes the proof. □
3.4. Remarks
The constant C in (3.20) depends on the dimension $d$ and the maximum degeneracy exponent $\theta_{\max} = \max_i \theta_i$, but not on $R$.
If $\theta_i = 0$ for all $i$, the inequality reduces to the classical Caccioppoli inequality for non-degenerate elliptic equations.
The separable structure of the weight allows for coordinate-wise estimates, simplifying the analysis.
The result in (3.20) is a key ingredient in proving Hölder continuity or fractional regularity for solutions of (3.18); a numerical sanity check is sketched below.
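The following hedged check (our own sketch, with illustrative parameters) solves the discrete problem $\mathcal{L}u = f$ on $(0,1)$ and compares the weighted gradient energy over an inner interval with the right-hand side of (3.20); a bounded ratio is consistent with the inequality.

```python
import numpy as np

n, a, theta = 1000, 0.5, 0.6
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
xm = np.linspace(h / 2, 1 - h / 2, n + 1)
w = np.abs(xm - a) ** theta
L = (np.diag(w[:-1] + w[1:]) - np.diag(w[1:-1], 1) - np.diag(w[1:-1], -1)) / h**2
f = np.ones(n)
u = np.linalg.solve(L, f)                        # discrete weak solution of L u = f

R, c = 0.2, 0.5                                  # inner interval of radius R about c
up = np.diff(np.concatenate(([0.0], u, [0.0]))) / h   # gradient at flux nodes
inner = np.abs(xm - c) < R
outer = np.abs(x - c) < 2 * R
lhs = h * np.sum(w[inner] * up[inner] ** 2)           # weighted energy on B_R
rhs = h * (np.sum(u[outer] ** 2) / R**2 + R**2 * np.sum(f[outer] ** 2))
print(lhs, rhs, lhs / rhs)                       # bounded ratio, consistent with (3.20)
```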
3.5. Fractional Regularity Theory: Statement and Proof
We present a comprehensive fractional regularity result for spectral degeneracy operators (SDOs).
Theorem 3.5 (Fractional Regularity for Spectral Degeneracy Operators).
Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain, $a \in \Omega$, and $\theta_i \in [0, 1)$. Suppose $f \in H^{s}_{a,\theta}(\Omega)$ for some $s \ge 0$, and let $u \in H^1_{a,\theta,0}(\Omega)$ be the weak solution of
$$\mathcal{L}_{a,\theta}\, u = f \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega. \tag{3.32}$$
Then there exists $\varepsilon = \varepsilon(\theta) > 0$ and a constant $C > 0$ (depending on Ω, a, θ, and s) such that
$$\|u\|_{H^{s+\varepsilon}_{a,\theta}(\Omega)} \;\le\; C\, \|f\|_{H^{s}_{a,\theta}(\Omega)}. \tag{3.33}$$
More generally, for any $1 \le p, q \le \infty$, we have
$$\|u\|_{B^{s+\varepsilon}_{p,q;a,\theta}(\Omega)} \;\le\; C\, \|f\|_{B^{s}_{p,q;a,\theta}(\Omega)}. \tag{3.34}$$
Proof. The proof proceeds in three main steps, with explicit formulas.
1. Localization and Flattening. Cover $\overline{\Omega}$ with finitely many coordinate charts $\{U_m\}_{m=1}^{M}$ and choose a partition of unity $\{\chi_m\}_{m=1}^{M}$ subordinate to this cover. Near a degeneracy center $a$, we translate coordinates so that $a$ corresponds to the origin. In each chart, the local problem reads
$$\mathcal{L}_{a,\theta}(\chi_m u) \;=\; \chi_m f + [\mathcal{L}_{a,\theta}, \chi_m]\, u, \tag{3.35}$$
where $[\mathcal{L}_{a,\theta}, \chi_m]$ denotes the commutator.
2. One-Dimensional Fractional Regularity. Consider the one-dimensional model operator
$$\mathcal{L}_i\, v \;=\; -\partial_t\big( |t|^{\theta_i}\, \partial_t v \big), \qquad t \in (-1, 1), \tag{3.36}$$
with Dirichlet boundary conditions. Standard singular Sturm–Liouville theory [18] implies that if $\mathcal{L}_i v \in H^{s}_{\theta_i}(-1,1)$, then the solution $v$ satisfies
$$\|v\|_{H^{s+\varepsilon_i}_{\theta_i}(-1,1)} \;\le\; C\, \|\mathcal{L}_i v\|_{H^{s}_{\theta_i}(-1,1)} \tag{3.37}$$
for some $\varepsilon_i = \varepsilon_i(\theta_i) > 0$. Here $H^{s}_{\theta_i}$ denotes the weighted Bessel space along coordinate $x_i$.
3. Tensorization and Interpolation. For the full $d$-dimensional operator, we have the coordinate-separable structure
$$\mathcal{L}_{a,\theta} \;=\; \sum_{i=1}^{d} I \otimes \cdots \otimes \mathcal{L}_i \otimes \cdots \otimes I.$$
Tensorizing (3.37) along each coordinate and applying the real interpolation method gives a fractional gain $\varepsilon = \min_i \varepsilon_i$ for each local cube $Q$. Using the Caccioppoli inequality (Lemma 3.4) to control lower-order derivatives, and summing over the partition of unity, we obtain the global estimate (3.33).
Finally, embedding the weighted Bessel spaces into weighted Besov spaces using standard theory [13] yields (3.34). The constants depend explicitly on θ, the Lipschitz character of ∂Ω, and the localization radii of the coordinate charts. □
4. Deep SDO Networks and Spectral Stability Analysis
We investigate architectures that compose multiple SDO-based layers. In continuous form, an SDO layer maps an input field $u$ to
$$\Phi(u) \;=\; \sigma\big( W\, \mathcal{L}_{a,\theta}^{-1} u + b \big),$$
where $W$ is a bounded linear operator (e.g., convolution or spectral projection), $b$ is a bias term, and σ is a Lipschitz activation applied pointwise.
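For concreteness, the sketch below (our own illustration; `sdo_layer`, `W`, `b`, and all sizes are hypothetical choices, not the paper's implementation) realizes one such layer on a 1D grid by applying $\mathcal{L}^{-1}$ through a truncated discrete eigenbasis, followed by an affine map and a pointwise tanh activation.

```python
import numpy as np

n, a, theta = 400, 0.5, 0.6
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
xm = np.linspace(h / 2, 1 - h / 2, n + 1)
w = np.abs(xm - a) ** theta
L = (np.diag(w[:-1] + w[1:]) - np.diag(w[1:-1], 1) - np.diag(w[1:-1], -1)) / h**2
lam, phi = np.linalg.eigh(L)                   # discrete eigenpairs of the SDO

def sdo_layer(u, W, b, k_max=64):
    """One SDO layer: sigma(W L^{-1} u + b), with L^{-1} applied modewise."""
    coef = phi[:, :k_max].T @ u                # project onto leading modes
    v = phi[:, :k_max] @ (coef / lam[:k_max])  # modewise scaling by 1/lambda_k
    return np.tanh(W @ v + b)                  # pointwise Lipschitz activation

rng = np.random.default_rng(0)
W = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # near-identity readout
b = np.zeros(n)
print(sdo_layer(np.sin(2 * np.pi * x), W, b)[:3])
```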
4.1. Compositional Structure and Modewise Action
Using the spectral decomposition $\{(\lambda_k, \phi_k)\}_{k \in \mathbb{N}}$ of $\mathcal{L}_{a,\theta}$, the layer operation reads
$$\Phi(u) \;=\; \sigma\Big( \sum_{k=1}^{\infty} \lambda_k^{-1}\, \langle u, \phi_k \rangle\, W \phi_k \;+\; b \Big).$$
Hence, each mode $k$ is scaled by $\lambda_k^{-1}$ and receives a coefficient determined by the projection $\langle u, \phi_k \rangle$. Composing such blocks across multiple layers produces multiplicative mode gains; controlling these gains under parameter perturbations is essential for stability.
4.2. Perturbation Theory and Kato-Type Estimates
Let $(a, \theta)$ and $(a', \theta')$ be two parameter tuples with
$$|a - a'| + |\theta - \theta'| \;\le\; \delta.$$
Standard analytic perturbation theory for self-adjoint operators ensures that simple eigenvalues $\lambda_k$ depend analytically on smooth parameter variations [10]. For degenerate weights, eigenpairs vary continuously, and first-order perturbations can be captured by resolvent differences.
Proposition 4.1 (Resolvent Perturbation Bound).
For ζ in the resolvent set of both $\mathcal{L}_{a,\theta}$ and $\mathcal{L}_{a',\theta'}$, we have
$$\big\| (\mathcal{L}_{a,\theta} - \zeta)^{-1} - (\mathcal{L}_{a',\theta'} - \zeta)^{-1} \big\|_{L^2 \to L^2} \;\le\; C\, \delta, \tag{4.4}$$
where C depends on spectral gaps and domain geometry.
Proof. We use the resolvent identity:
$$(\mathcal{L}_{a,\theta} - \zeta)^{-1} - (\mathcal{L}_{a',\theta'} - \zeta)^{-1} \;=\; (\mathcal{L}_{a,\theta} - \zeta)^{-1} \big( \mathcal{L}_{a',\theta'} - \mathcal{L}_{a,\theta} \big) (\mathcal{L}_{a',\theta'} - \zeta)^{-1}. \tag{4.5}$$
Since $\mathcal{L}_{a',\theta'} - \mathcal{L}_{a,\theta}$ is a multiplication-differential operator whose coefficients are Lipschitz in $(a, \theta)$, we have
$$\big\| \mathcal{L}_{a',\theta'} - \mathcal{L}_{a,\theta} \big\|_{H^1_{a,\theta,0} \to H^{-1}} \;\le\; C\, \delta. \tag{4.6}$$
Combining (4.5)–(4.6) and the resolvent bounds yields (4.4). □
4.3. Deep Spectral Stability Theorem
Consider a network with $L$ SDO layers:
$$\mathcal{N}_L(u) \;=\; \Phi_L \circ \Phi_{L-1} \circ \cdots \circ \Phi_1(u), \qquad \Phi_\ell(u) = \sigma\big( W_\ell\, \mathcal{L}_{a_\ell,\theta_\ell}^{-1} u + b_\ell \big).$$
Theorem 4.2 (Deep Spectral Stability of SDO Networks).
Let each $W_\ell$ be bounded with $\|W_\ell\| \le M_W$, σ Lipschitz with constant $L_\sigma$, and assume spectral gap bounds
$$\lambda_k(a_\ell, \theta_\ell) \;\ge\; \gamma > 0 \qquad \text{for all } k \text{ and } \ell.$$
For perturbed parameters $(a'_\ell, \theta'_\ell)$ with
$$\max_{1 \le \ell \le L} \big( |a_\ell - a'_\ell| + |\theta_\ell - \theta'_\ell| \big) \;\le\; \delta,$$
the k-th spectral coefficient at network output satisfies
$$\big| \alpha_k(\mathcal{N}_L u) - \alpha_k(\mathcal{N}'_L u) \big| \;\le\; C_L\, \delta\, \|u\|_{L^2(\Omega)}, \tag{4.10}$$
where $C_L$ depends polynomially on $L$, $M_W$, $L_\sigma$, and $\gamma^{-1}$.
Proof.
Base case ($L = 1$). The $k$-th output coefficient before activation is
$$\alpha_k \;=\; \big\langle W_1\, \mathcal{L}_{a_1,\theta_1}^{-1} u + b_1,\; \phi_k \big\rangle.$$
Using Proposition 4.1, we have
$$|\alpha_k - \alpha'_k| \;\le\; M_W\, \big\| \big( \mathcal{L}_{a_1,\theta_1}^{-1} - \mathcal{L}_{a'_1,\theta'_1}^{-1} \big) u \big\|_{L^2} \;\le\; C\, \delta\, \|u\|_{L^2}.$$
Inductive step. Assume the claim holds for $L - 1$ layers. Let
$$v = \mathcal{N}_{L-1}(u), \qquad v' = \mathcal{N}'_{L-1}(u);$$
then
$$\|v - v'\|_{L^2} \;\le\; C_{L-1}\, \delta\, \|u\|_{L^2}.$$
Applying the base-case argument to the final layer with Lipschitz constant $L_\sigma$ gives
$$\big| \alpha_k(\Phi_L v) - \alpha_k(\Phi'_L v') \big| \;\le\; L_\sigma \big( M_W\, \gamma^{-1}\, \|v - v'\|_{L^2} + C\, \delta\, \|v'\|_{L^2} \big).$$
Iterating and absorbing constants proves (4.10). □
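The δ-linear bound (4.10) can be probed numerically. In the hedged experiment below (our own sketch; `operator`, `network`, the layer count, and all parameter values are illustrative assumptions), the degeneracy center is shifted by δ and the resulting drift of output spectral coefficients, measured in a fixed reference eigenbasis, stays roughly linear in δ.

```python
import numpy as np

def operator(a, theta=0.6, n=300):
    h = 1.0 / (n + 1)
    xm = np.linspace(h / 2, 1 - h / 2, n + 1)
    w = np.abs(xm - a) ** theta
    return (np.diag(w[:-1] + w[1:])
            - np.diag(w[1:-1], 1)
            - np.diag(w[1:-1], -1)) / h**2

def network(u, a, layers=3):
    Lmat = operator(a)
    for _ in range(layers):
        u = np.tanh(np.linalg.solve(Lmat, u))    # sigma(L^{-1} u); W = I, b = 0
    return u

n = 300
x = np.linspace(0, 1, n + 2)[1:-1]
u = np.sin(np.pi * x)
lam0, phi0 = np.linalg.eigh(operator(0.5))       # fixed reference eigenbasis
base = phi0.T @ network(u, a=0.5)
for delta in [1e-3, 1e-2, 1e-1]:
    drift = np.max(np.abs(phi0.T @ network(u, a=0.5 + delta) - base))
    print(delta, drift / delta)                  # roughly constant ratio => O(delta)
```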
Corollary 4.3 (Training Stability).
Let $\Theta^{(n)} = \big( a^{(n)}_\ell, \theta^{(n)}_\ell, W^{(n)}_\ell, b^{(n)}_\ell \big)_{\ell=1}^{L}$ be the network parameters at training iteration $n$, and assume that
$$\big\| \Theta^{(n+1)} - \Theta^{(n)} \big\| \;\le\; \delta_n \qquad \text{for a sequence } \delta_n > 0. \tag{4.16}$$
Then, for any input $u \in L^2(\Omega)$, the spectral coefficients of the network output satisfy
$$\big| \alpha^{(n+1)}_k(u) - \alpha^{(n)}_k(u) \big| \;\le\; C_L\, \delta_n\, \|u\|_{L^2(\Omega)}, \tag{4.17}$$
where $C_L$ depends polynomially on $L$, $M_W$, $L_\sigma$, and the inverse spectral gap $\gamma^{-1}$.
Proof. By Theorem 4.2, the difference between the outputs of two networks with parameters differing by $\delta_n$ satisfies
$$\big| \alpha^{(n+1)}_k(u) - \alpha^{(n)}_k(u) \big| \;\le\; C_L\, \delta_n\, \|u\|_{L^2}. \tag{4.18}$$
Here, we identify the perturbed network as the one at iteration $n + 1$ and the nominal network as the one at iteration $n$.
Inductively, for each layer ℓ, the spectral difference in intermediate outputs $v_\ell$ satisfies
$$\big\| v^{(n+1)}_\ell - v^{(n)}_\ell \big\|_{L^2} \;\le\; L_\sigma\, M_W\, \gamma^{-1}\, \big\| v^{(n+1)}_{\ell-1} - v^{(n)}_{\ell-1} \big\|_{L^2} \;+\; C\, \delta_n\, \|u\|_{L^2}, \tag{4.19}$$
where $M_W = \max_\ell \|W_\ell\|$ and $L_\sigma$ is the Lipschitz constant of the activation. Iterating (4.19) over $\ell = 1, \dots, L$ yields (4.17).
Therefore, small parameter updates guarantee that the spectral coefficients vary smoothly, and the accumulation of errors across layers remains controlled, preventing catastrophic spectral drift during training. □
5. Variational Limits: Γ-Convergence of Degenerate Energies
We formulate an energy functional naturally associated with SDO parameterizations and investigate its variational limit as resolution increases.
5.1. Energy Functional and Discretization Framework
Let $\{\Theta_N\}_{N \in \mathbb{N}}$ denote a family of finite-dimensional parameterizations (e.g., spectral truncation at frequency $N$ or spatial discretization with mesh size $h \sim 1/N$) of SDO networks. For parameters $\vartheta \in \Theta_N$, we define the degenerate energy functional
$$E_N(u) \;=\; \frac{1}{2} \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx \;+\; \frac{\mu}{2}\, \big\| u - R_N(\vartheta) \big\|_{L^2(\Omega)}^2, \qquad u \in H^1_{a,\theta}(\Omega), \tag{5.1}$$
extended by $E_N(u) = +\infty$ for $u \in L^2(\Omega) \setminus H^1_{a,\theta}(\Omega)$, where $R_N(\vartheta)$ denotes the SDO-network reconstruction at resolution $N$ and $\mu > 0$ is a fidelity parameter. The first term represents the anisotropic degenerate Dirichlet energy, and the second term enforces agreement with the network reconstruction.
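The discrete energy is straightforward to evaluate. The sketch below is our own 1D discretization under assumed parameters; the synthetic field `R` stands in for the network reconstruction $R_N(\vartheta)$, which is not specified here.

```python
import numpy as np

def energy(u, R, a=0.5, theta=0.6, mu=10.0):
    """Discrete E_N(u) = 1/2 int |x-a|^theta |u'|^2 + mu/2 ||u - R||^2 on (0,1)."""
    n = u.size
    h = 1.0 / (n + 1)
    up = np.diff(np.concatenate(([0.0], u, [0.0]))) / h   # gradient, Dirichlet BCs
    xm = np.linspace(h / 2, 1 - h / 2, n + 1)             # midpoints for the weight
    dirichlet = 0.5 * h * np.sum(np.abs(xm - a) ** theta * up**2)
    fidelity = 0.5 * mu * h * np.sum((u - R) ** 2)
    return dirichlet + fidelity

x = np.linspace(0, 1, 402)[1:-1]
R = np.sin(np.pi * x)                                     # synthetic reconstruction
print(energy(R, R), energy(0.8 * R, R))                   # fidelity penalizes mismatch
```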
5.2. Γ-Convergence Result
Theorem 5.1 (Γ-Convergence to a Degenerate Variational Limit).
Assume that the parameterizations $\{\Theta_N\}$ yield reconstructions $R_N$ that are uniformly bounded in $H^1_{a,\theta}(\Omega)$ and converge (along subsequences) to $R_\infty$ in $L^2(\Omega)$, and suppose $\mu > 0$. Then, as $N \to \infty$, the sequence of functionals $\{E_N\}$ Γ-converges in $L^2(\Omega)$ to
$$E_\infty(u) \;=\; \frac{1}{2} \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx \;+\; \frac{\mu}{2}\, \|u - R_\infty\|_{L^2(\Omega)}^2 \tag{5.2}$$
with respect to the strong topology. Consequently, minimizers of $E_N$ converge (along subsequences) to minimizers of $E_\infty$.
Proof. We verify the conditions of Γ-convergence, the liminf and limsup inequalities, and then deduce convergence of minimizers.
Liminf inequality. Let $u_N \to u$ strongly in $L^2(\Omega)$. We may assume $\{u_N\}$ is bounded in $H^1_{a,\theta}(\Omega)$, since otherwise $\liminf_N E_N(u_N) = +\infty$ and there is nothing to prove. By lower semicontinuity of the weighted Dirichlet integral [15], we have
$$\liminf_{N \to \infty}\, \frac{1}{2} \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u_N|^2\, dx \;\ge\; \frac{1}{2} \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx. \tag{5.3}$$
For the fidelity term, strong convergence of $u_N$ and convergence of the reconstructions give
$$\lim_{N \to \infty} \frac{\mu}{2}\, \|u_N - R_N\|_{L^2(\Omega)}^2 \;=\; \frac{\mu}{2}\, \|u - R_\infty\|_{L^2(\Omega)}^2. \tag{5.4}$$
Combining (5.3) and (5.4) yields the liminf inequality:
$$\liminf_{N \to \infty} E_N(u_N) \;\ge\; E_\infty(u). \tag{5.5}$$
Limsup inequality. Given $u \in L^2(\Omega)$ with $E_\infty(u) < \infty$, there exists a recovery sequence $u_N \to u$ in $L^2(\Omega)$ such that
$$\lim_{N \to \infty} \frac{1}{2} \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u_N|^2\, dx \;=\; \frac{1}{2} \sum_{i=1}^{d} \int_\Omega |x_i - a_i|^{\theta_i}\, |\partial_{x_i} u|^2\, dx,$$
by density of smooth functions in weighted Sobolev spaces. The fidelity term converges as in (5.4), and summing gives
$$\limsup_{N \to \infty} E_N(u_N) \;\le\; E_\infty(u).$$
Convergence of minimizers. Let $u^*_N$ be a minimizer of $E_N$. By the coercivity of the weighted Dirichlet term and boundedness of the fidelity term, $\{u^*_N\}$ is bounded in $H^1_{a,\theta}(\Omega)$:
$$\sup_N \|u^*_N\|_{H^1_{a,\theta}(\Omega)} < \infty.$$
Hence, there exists a subsequence $\{u^*_{N_j}\}$ converging weakly in $H^1_{a,\theta}(\Omega)$ and strongly in $L^2(\Omega)$ to some $u^*$. Using the liminf inequality (5.5) together with the recovery sequences above, $u^*$ is a minimizer of $E_\infty$.
This completes the proof of Γ-convergence and convergence of minimizers. □
Remark 5.2. The Γ-limit (5.2) shows that as model resolution increases, the discrete SDO learning objective converges variationally to a continuum degenerate energy. This provides a rigorous justification for stability, interpretability, and the approximation of degeneracy centers $a$ by minimizing the discrete network loss; a small numerical illustration follows.
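The following hedged illustration (our own sketch; `minimizer` and all parameter values are assumptions) exploits the fact that the minimizer of the discrete quadratic energy solves the Euler–Lagrange system $(\mathcal{L} + \mu I)\, u = \mu R$, and checks that minimizers stabilize as the resolution increases, as Theorem 5.1 predicts.

```python
import numpy as np

def minimizer(nn, a=0.5, theta=0.6, mu=50.0):
    """Minimize the discrete E_N via its Euler-Lagrange system (L + mu I) u = mu R."""
    h = 1.0 / (nn + 1)
    x = np.linspace(h, 1 - h, nn)
    xm = np.linspace(h / 2, 1 - h / 2, nn + 1)
    w = np.abs(xm - a) ** theta
    L = (np.diag(w[:-1] + w[1:])
         - np.diag(w[1:-1], 1)
         - np.diag(w[1:-1], -1)) / h**2
    R = np.sin(np.pi * x)                        # stand-in reconstruction R_N
    return x, np.linalg.solve(L + mu * np.eye(nn), mu * R)

x1, u1 = minimizer(200)
x2, u2 = minimizer(400)
print(np.max(np.abs(np.interp(x1, x2, u2) - u1)))  # small: minimizers stabilize
```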
6. Results
This work establishes several fundamental mathematical advances with direct implications for physics-informed turbulence modeling:
- Complete Functional Analytic Framework: We developed a rigorous foundation for Spectral Degeneracy Operators, including:
  - Fractional Regularity Theory: Theorem 3.5 demonstrates that solutions of $\mathcal{L}_{a,\theta} u = f$ gain $\varepsilon$ fractional derivatives, with $\varepsilon = \varepsilon(\theta) > 0$ for $\theta_i \in [0,1)$. This result, established via degenerate Caccioppoli estimates and tensorized Sturm–Liouville theory, provides rigorous error control for spectral approximations.
  - Explicit Asymptotics: Proposition 3.2 reveals the precise scaling of weighted Poincaré constants as $\theta_i \to 1^-$, offering crucial control over coercivity in highly anisotropic regimes.
  - Deep Spectral Stability: Theorem 4.2 guarantees bounded mode amplitude variations of order $O(\delta)$ under parameter perturbations of size $\delta$ in deep SDO networks. The practical consequence (Corollary 4.3) ensures training stability by preventing catastrophic spectral drift.
  - Variational Convergence: Theorem 5.1 establishes Γ-convergence of discrete SDO energies to continuum limits, connecting neural network training with physically meaningful degenerate energy minimization.
- Strengthened Classical Results: We provided complete proofs with explicit constants for universality of divergence-free closures and inverse calibration stability, improving convergence rates under spectral gap assumptions.
7. Conclusions
This work establishes a comprehensive mathematical foundation for Spectral Degeneracy Operators, bridging degenerate PDE theory with modern physics-informed neural networks for turbulence modeling. Our three principal contributions (fractional regularity theory, deep spectral stability, and variational Γ-convergence) provide rigorous guarantees that address fundamental challenges in neural operator architectures:
The fractional regularity theory resolves fundamental questions about solution smoothness in weighted anisotropic spaces, enabling rigorous error control for spectral approximations. The deep spectral stability theorem ensures that SDO-based networks maintain consistent spectral characteristics during training, preventing the mode distortion that often plagues deep neural operators. The Γ-convergence framework establishes a variational interpretation of SDO learning, connecting discrete network objectives with continuum energy principles.
These theoretical advances position SDOs as a mathematically principled alternative to traditional turbulence closures, combining the expressive power of neural networks with the interpretability and robustness of physically-grounded operator theory. The framework ensures that learned degeneracy centers converge to meaningful turbulent structures while maintaining training stability and approximation guarantees.
Future research directions include:
Stochastic extensions for uncertainty quantification in degeneracy parameters
Numerical analysis of discretization schemes and computational complexity
Experimental validation on canonical turbulent flow datasets
Extensions to non-separable weights and more general degenerate structures
Multi-scale implementations combining SDOs with traditional turbulence models
Connections with operator learning to establish universal approximation in stronger topologies
The mathematical framework developed herein provides a solid foundation for these investigations while ensuring that SDO-based architectures maintain physical interpretability, training stability, and rigorous mathematical guarantees across diverse turbulence modeling applications.
Acknowledgments
Santos gratefully acknowledges the support of the PPGMC Program for the Postdoctoral Scholarship PROBOL/UESC nr. 218/2025. Sales acknowledges CNPq grant 30881/2025-0.
Notation and Symbols
The following table summarizes the key mathematical notations and symbols used throughout this paper.
Mathematical Symbols and Operators
| Symbol | Description |
|---|---|
| $\Omega$ | Bounded Lipschitz domain in $\mathbb{R}^d$ |
| $d$ | Spatial dimension |
| $x = (x_1, \dots, x_d)$ | Spatial coordinates |
| $a = (a_1, \dots, a_d)$ | Degeneracy center (trainable parameter) |
| $\theta_i$ | Degeneracy exponents, $\theta_i \in [0, 1)$ |
| $\mathcal{L}_{a,\theta}$ | Spectral Degeneracy Operator (SDO) |
| $A(x)$ | Diagonal weight matrix: $A(x) = \operatorname{diag}\big(|x_i - a_i|^{\theta_i}\big)$ |
| $w(x)$ | Separable weight function: $w(x) = \prod_{i=1}^{d} |x_i - a_i|^{\theta_i}$ |
| $H^1_{a,\theta}(\Omega)$ | Weighted Sobolev space with degeneracy weights |
| $H^s_{a,\theta}(\Omega)$ | Weighted Bessel potential space |
| $B^s_{p,q;a,\theta}(\Omega)$ | Weighted Besov space |
| $L^2_w(\Omega)$ | Weighted space: $\|u\|_{L^2_w}^2 = \int_\Omega w\,|u|^2\,dx$ |
| $\lambda_k$, $\phi_k$ | Eigenvalues and eigenfunctions of $\mathcal{L}_{a,\theta}$ |
| $\mathfrak{a}(u, v)$ | Bilinear form associated with $\mathcal{L}_{a,\theta}$ |
| $\Phi_\ell$ | SDO neural network layer |
| $\mathcal{N}_L$ | Deep SDO network with $L$ layers |
| $E_N$, $E_\infty$ | Discrete and continuum energy functionals |
| Γ-convergence | Variational convergence in the sense of De Giorgi |
| $C_P$, $C_P(\theta)$ | Poincaré constants (weighted and unweighted) |
| $\varepsilon$ | Regularity exponent or perturbation parameter |
| $\sigma$ | Activation function in neural networks |
Greek Letters
| Symbol | Description | Symbol | Description |
|---|---|---|---|
| $\alpha_k$ | Spectral coefficients | $\beta_k$ | Intermediate coefficients |
| $C$ | Generic constant | $\varepsilon$, $\delta$ | Regularity/perturbation parameters |
| $\eta$ | Cutoff function | $\theta_i$ | Degeneracy exponents |
| $\mu$ | Fidelity parameter | $\lambda_k$ | Eigenvalues |
| $\sigma$ | Activation function/Besov parameter | $\phi_k$ | Eigenfunctions |
| $\Omega$ | Spatial domain | $\zeta$ | Resolvent parameter |
Key Constants and Parameters
| Symbol | Description |
|---|---|
| $C$, $c$ | Generic constants (may depend on domain, weights, etc.) |
| $C_P$ | Weighted Poincaré constant |
| $C_P(\theta)$ | Anisotropic weighted Poincaré constant |
| $L_\sigma$ | Lipschitz constant of activation function |
| $\gamma$ | Lower bound on eigenvalues (spectral gap) |
| $R$, $\ell_i$ | Domain radii and aspect ratios |
| $K$ | Spectral truncation index |
| $N$ | Discretization/resolution parameter |
| $L$ | Number of layers in deep network |
Function Spaces
| Symbol | Description |
|---|---|
| $L^p(\Omega)$ | Standard Lebesgue space |
| $H^k(\Omega)$ | Standard Sobolev space |
| $H^s(\Omega)$ | Fractional Sobolev space |
| $H^1_{a,\theta}(\Omega)$ | Weighted Sobolev space with degeneracy |
| $H^s_{a,\theta}(\Omega)$ | Weighted Bessel potential space |
| $B^s_{p,q;a,\theta}(\Omega)$ | Weighted Besov space |
| $C^\infty_c(\Omega)$ | Smooth functions with compact support |
References
- Chaves dos Santos, R. D., & de Oliveira Sales, J. H. (2025). Spectral Degeneracy Operators for Interpretable Turbulence Modeling. Preprints.
- Cannarsa, P., Doubova, A., & Yamamoto, M. (2024). Reconstruction of degenerate conductivity region for parabolic equations. Inverse Problems, 40(4), 045033. doi:10.1088/1361-6420/ad308a.
- Xiao, M. J., Yu, T. C., Zhang, Y. S., & Yong, H. (2023). Physics-informed neural networks for the Reynolds-Averaged Navier–Stokes modeling of Rayleigh–Taylor turbulent mixing. Computers & Fluids, 266, 106025.
- Hussein, M. S., Lesnic, D., Kamynin, V. L., & Kostin, A. B. (2020). Direct and inverse source problems for degenerate parabolic equations. Journal of Inverse and Ill-Posed Problems, 28(3), 425–448.
- Finzi, M., Stanton, S., Izmailov, P., & Wilson, A. G. (2020). Generalizing convolutional neural networks for equivariance to Lie groups on arbitrary continuous data. In International Conference on Machine Learning (pp. 3165–3176). PMLR.
- Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485.
- Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear PDEs. Journal of Computational Physics, 378, 686–707.
- Kamynin, V. L. (2018). On inverse problems for strongly degenerate parabolic equations under the integral observation condition. Computational Mathematics and Mathematical Physics, 58(12), 2002–2017.
- Beck, A. D., Flad, D. G., & Munz, C. D. (2018). Deep neural networks for data-driven turbulence models. arXiv preprint arXiv:1806.04482.
- Kato, T. (2013). Perturbation theory for linear operators (Vol. 132). Springer Science & Business Media.
- DiBenedetto, E. (2012). Degenerate parabolic equations. Springer Science & Business Media.
- Sagaut, P. (2006). Large eddy simulation for incompressible flows: An introduction. Springer Berlin Heidelberg.
- Triebel, H. (2006). Theory of function spaces III. Birkhäuser Basel.
- Oleinik, O. A., & Samokhin, V. N. (1999). Mathematical models in boundary layer theory (Vol. 15). CRC Press.
- Dal Maso, G. (1993). An Introduction to Γ-Convergence. Birkhäuser Boston.
- Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4), 303–314.
- Pope, S. B. (1988). The evolution of surfaces in turbulence. International Journal of Engineering Science, 26(5), 445–469.
- Watson, G. N. (1922). A treatise on the theory of Bessel functions. Cambridge: The University Press.