Preprint
Article

This version is not peer-reviewed.

A Unified Gegenbauer Framework for Arbitrary-Order Riemann-Liouville Fractional Integrals with Tunable Endpoint Regularity

Submitted: 20 January 2026
Posted: 22 January 2026


Abstract
We show that the GBFA method of Elgindy (2025), originally developed for 0 < α < 1, extends to all α > 0 without altering its algorithmic structure. Elgindy's transformation τ = t(1 − y^{1/α}) remains valid for all α > 0 and preserves the numerical framework, ensuring that interpolation, quadrature, and error analysis carry over unchanged. For α > 1, the mapping induces only Hölder regularity at y = 0, which affects quadrature accuracy. We quantify this effect and show that the interpolation error retains its original convergence properties. To restore higher-order endpoint smoothness, we introduce a ϕ(α)-generalized transformation that enforces C^r regularity for any prescribed r ≥ 0, accelerating quadrature convergence while preserving the GBFA structure. Numerical experiments confirm high accuracy and robustness across all α > 0, demonstrating that the unified GBFA formulation provides an efficient, non-adaptive, fixed-node approach for arbitrary-order RLFIs.

1. Introduction

Fractional calculus provides a powerful framework for modeling systems exhibiting memory, nonlocal interactions, and power-law dynamics [1,2]. Central to this theory is the RLFI of order α > 0 , defined by
I_t^α f = (1/Γ(α)) ∫_0^t (t − τ)^{α−1} f(τ) dτ.    (1)
While fractional orders in Ω 1 have been extensively studied due to their prevalence in physical applications [42,43,44], emerging problems in applied electromagnetics [45], structural dynamics [46], intelligent infrastructure, electrical engineering, nonlinear dynamics, biomechanics, control systems, thermal and fluid sciences, and biomedical signal analysis increasingly require integration orders beyond this interval [47,48].
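As a concrete instance of definition (1), monomials admit a closed form (a standard identity, obtained via the substitution τ = ts and the Beta integral):

```latex
I_t^{\alpha}\, t^{m}
  = \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \tau^{m}\, d\tau
  = \frac{t^{m+\alpha}}{\Gamma(\alpha)}\, B(m+1,\alpha)
  = \frac{\Gamma(m+1)}{\Gamma(m+\alpha+1)}\, t^{m+\alpha},
  \qquad m \in \mathbb{Z}_0^{+}.
```

For α = 1 this reduces to the ordinary primitive t^{m+1}/(m+1), a quick sanity check; such closed forms are the exception rather than the rule, as discussed next.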

1.1. Challenges in Arbitrary-Order RLFI Computation

Despite its closed-form definition (1), the RLFI rarely admits an analytic expression for general f and arbitrary α > 0. Even for smooth or elementary f, the nonlocal kernel (t − τ)^{α−1} couples all past values of f, defying symbolic integration except in special cases (e.g., monomials or exponentials with integer α). Moreover, for non-integer α > 1, the kernel belongs only to C^{⌊α⌋−1}(Ω_t), which precludes the use of classical higher-order Newton–Cotes or Gauss rules without transformation. In practical modeling–such as anomalous transport, viscoelasticity, and control–where f is often given numerically or as part of a coupled system, fast, stable, and repeated evaluation of I_t^α f becomes essential.

1.2. Overview of Existing Numerical Approaches

This need has spurred decades of research into numerical quadrature methods. Foundational work in polynomial-based and operational matrix methods for fractional calculus includes early contributions from orthogonal polynomial systems [52] as well as more recent extensions and refinements [50,51,54]. More recent developments include operational matrices using Charlier polynomials [24], fractional-order Chelyshkov functions [41,49], and SG-based GBFA schemes [4] that exhibit super-exponential convergence for 0 < α < 1—a result we later extend and refine in Section A.3, showing that full super-convergence of the classical GBFA formulation (including the quadrature component) occurs precisely when 1/α ∈ Z^+. Concurrently, spline interpolation schemes (linear to quintic) offered flexible local approximations with rigorous error estimates [25,26,53]. Composite and trapezoidal-type quadrature rules for fractional integrals, frequently combined with Romberg extrapolation, have been developed and analyzed in recent years [27,28]. In parallel, a variety of projection and operational matrix techniques–employing block-pulse functions, hybrid polynomial systems, and Galerkin formulations–have been developed over the past decade to improve computational efficiency [30,31,32,33,34,41]. Advanced quadrature techniques for weakly singular fractional integrals continue to be an active research area. These include variable transformations such as the sigmoidal Sidi–Laurie maps [56,57], Sinc-based collocation [37,38,39], and modern adaptations in Sobolev settings [35,36], all aimed at improving convergence by regularizing endpoint behavior. Spectral and wavelet bases, such as Mittag-Leffler wavelets, have further enhanced adaptivity and accuracy [40,55].

1.3. Limitations of Current Methods

While these existing alternatives provide valuable tools for specific regimes of α, they generally lack one or more of the following attributes essential for high-performance, repeated RLFI evaluation: (i) a fixed-node, non-adaptive structure compatible with global solvers; (ii) precomputable integration matrices enabling low online cost; (iii) rapid (often spectral) convergence for smooth f across all α > 0; and (iv) tunable endpoint regularity to control quadrature accuracy. The GBFA framework developed in this work–extending the earlier approach of Elgindy [4]–uniquely satisfies all four properties simultaneously, as demonstrated later in the paper.
The GBFA method [4] employs the transformation
τ = t(1 − y^{1/α}),  y ∈ Ω_1,    (2)
to obtain the exact identity
I_t^α f = (t^α/Γ(α+1)) ∫_0^1 f(t(1 − y^{1/α})) dy.    (3)
Although the kernel (t − τ)^{α−1} of (1) is nonsingular on Ω_t for α ≥ 1, the composed integrand f(t(1 − y^{1/α})) is only Hölder-continuous of order 1/α at y = 0, with all derivatives blowing up as y → 0^+ (as we demonstrate later via Faà di Bruno's formula). This loss of endpoint smoothness degrades Gauss-type quadrature convergence despite the original kernel being C^{⌊α⌋−1}. Adaptive quadrature can compensate but is computationally expensive, non-reproducible across evaluations, and incompatible with fixed-grid solvers (e.g., collocation, operational matrices, time-stepping schemes). Hence, the need for a unified, non-adaptive, highly accurate framework operating on fixed nodes, yielding precomputable FSGIMs, maintaining high algebraic, spectral, or even super-exponential convergence rates whenever the endpoint regularity permits, and integrating seamlessly into global approximation architectures remains compelling.
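The contrast between the smooth and Hölder-limited regimes can be checked numerically. The sketch below evaluates the transformed integral I_t^α f = (t^α/Γ(α+1)) ∫_0^1 f(t(1 − y^{1/α})) dy with a plain Gauss–Legendre rule (a simple stand-in for the (n_q, λ_q)-GBFA quadrature of [4]; the function names are ours) and compares against the closed form I_t^α t^m = Γ(m+1)/Γ(m+α+1) t^{m+α}:

```python
import math
import numpy as np

def rlfi_transformed(f, t, alpha, n=64):
    """Approximate I_t^alpha f via the transformed identity
    I_t^alpha f = t**alpha / Gamma(alpha+1) * integral_0^1 f(t*(1 - y**(1/alpha))) dy,
    using an n-point Gauss-Legendre rule mapped to [0, 1]."""
    y, w = np.polynomial.legendre.leggauss(n)
    y, w = 0.5 * (y + 1.0), 0.5 * w   # map nodes/weights from [-1, 1] to [0, 1]
    return t**alpha / math.gamma(alpha + 1) * np.dot(w, f(t * (1.0 - y**(1.0 / alpha))))

def rlfi_monomial_exact(m, t, alpha):
    """Closed form I_t^alpha t^m = Gamma(m+1)/Gamma(m+alpha+1) * t**(m+alpha)."""
    return math.gamma(m + 1) / math.gamma(m + alpha + 1) * t**(m + alpha)

t, m = 0.8, 3
f = lambda tau: tau**m

# 1/alpha in Z+ (alpha = 0.5): the transformed integrand is a polynomial in y,
# so the fixed Gauss rule is exact up to rounding.
err_smooth = abs(rlfi_transformed(f, t, 0.5) - rlfi_monomial_exact(m, t, 0.5))

# alpha = 2.5 > 1: the integrand is only Holder-continuous at y = 0, so the
# same rule converges only algebraically; the error is larger but still small.
err_holder = abs(rlfi_transformed(f, t, 2.5) - rlfi_monomial_exact(m, t, 2.5))
```

The gap between the two errors illustrates exactly the endpoint-regularity issue the ϕ(α)-generalized transformation is designed to remove.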

1.4. The GBFA Method and Its Original Scope

Within this landscape, Elgindy [4] introduced the GBFA method, which achieves super-exponential convergence for the RLFI when 1/α ∈ Z^+ via SG interpolation and precomputable FSGIMs, while exhibiting algebraic convergence for other fractional orders 0 < α < 1; cf. Section A.3. The method hinges on transformation (2), yielding identity (3), and exhibits: (i) super-exponential convergence for analytic f and 1/α ∈ Z^+; (ii) precomputable FSGIMs; and (iii) tunable SG parameters λ, λ_q for optimal stability and accuracy. Although the original presentation restricted α to Ω_1, transformation (2) is mathematically valid for all α > 0, as we show later in Section 3. This paper demonstrates that the entire GBFA framework–including SG interpolation, FSGIM construction, and error analysis–extends naturally to arbitrary α > 0 without algorithmic modification. Additionally, we introduce and analyze a family of generalized ϕ(α)-transformations that restore C^r endpoint regularity for α > 1 (for any r ∈ Z^+), thereby enabling high-order quadrature convergence across all α > 0. We further note that C^r endpoint regularity is likewise attainable for 0 < α < 1 whenever 1/α ∉ Z^+, for sufficiently large n_q, as demonstrated later. This insight significantly broadens GBFA's applicability to problems requiring α ≥ 1 or even integer-order repeated integration.
Existing alternatives for arbitrary-order RLFIs–such as adaptive quadrature [5], operational matrix techniques [6,7], and wavelet-based methods [8,9]–often trade accuracy for computational cost or implementation complexity. In contrast, the generalized GBFA preserves spectral accuracy, enables efficient FSGIM reuse, and, as shown herein, attains near machine-precision across α > 0 with minimal parameter tuning.

1.5. Contributions of This Work

The main contributions of this work include:
(i)
A proof that the GBFA method of [4], originally developed for 0 < α < 1 , requires no algorithmic modification to extend to arbitrary α > 0 ;
(ii)
Extension of the exact transformed identity (3)—originally derived in [4] for 0 < α < 1 —and all associated theoretical results (interpolation, quadrature, FSGIM construction, and error analysis) to any arbitrary α > 0 ;
(iii)
Numerical validation of excellent convergence rates and robust performance over a wide range of α > 0 ;
(iv)
Introduction of a novel ϕ(α)-generalized transformation for α > 1 that enforces tunable C^r endpoint regularity (for any prescribed r ≥ 0) in the transformed integrand, dramatically accelerating quadrature convergence and enabling near machine-precision accuracy with modest quadrature orders through appropriate choice of r and the spectral parameters λ, λ_q; moreover, we show that C^r endpoint regularity is likewise attainable for 0 < α < 1 whenever 1/α ∉ Z^+, for sufficiently large n_q, as demonstrated later.
(v)
Provision of a unified framework which delivers an exact algebraic substitution that precisely cancels the kernel (t − τ)^{α−1} for 0 < α < 1 (classical case) while restoring controllable endpoint smoothness for α ≥ 1 (generalized case), and likewise for 0 < α < 1 whenever 1/α ∉ Z^+; this is distinct from approximate sigmoidal regularization techniques such as Sidi–Laurie transformations, which are primarily designed for weakly singular kernels when 0 < α < 1.
(vi)
A refined error analysis that establishes a complete convergence hierarchy for the ITE across all smoothness classes of f (globally analytic, Gevrey-regular with index 1 < μ < 2, general C^∞, and finitely smooth C^k), clarifying the precise regularity conditions required for super-exponential, log-stretched-exponential, qualitative, or algebraic decay rates—extending and sharpening the bounds in [4], which focused primarily on analytic data.
(vii)
A sharp characterization of the endpoint regularity of the transformed integrand under Elgindy's classical map, establishing that full super-exponential convergence of the GBFA method (including exact quadrature for polynomial data) occurs iff 1/α ∈ Z^+; otherwise, the integrand belongs only to a Hölder space C^{K,θ}(Ω_1) with K = ⌊1/α⌋ and θ = 1/α − K, limiting quadrature to algebraic asymptotic rates. In particular, for 0 < α < 1 with 1/α ∉ Z^+, one may still enforce an error of order O(n^{−r}) for any r > 0, for sufficiently large n_q. Crucially, we clarify via Remark A9 that for real-analytic f, the pre-asymptotic quadrature error often decays dramatically faster than this worst-case rate due to exponentially decaying coefficients in the fractional power series expansion of the transformed integrand, reconciling observed near-exponential convergence in practical computations even when 1/α ∉ Z^+.

1.6. Organization of the Paper

The remainder of the paper is organized as follows: Section 2 reviews the GBFA framework of [4]. Section 3 proves this framework is valid for all α > 0, analyzes endpoint regularity, and presents convergence results for analytic, Gevrey, C^∞, and C^k data, along with quadrature error and λ-dependence. Section 4 introduces the ϕ(α)-generalized transformation and constructs weighted FSGIMs. Section 5 analyzes complexity. Section 6 derives the smoothness condition. Section 7 validates both transformations for α > 0 and compares GBFA against representative methods and solvers from the literature. Section 8 summarizes implications, guidelines, trade-offs, and future directions.

2. Background: GBFA Method

This section briefly summarizes the GBFA method as presented in [4], highlighting the components that naturally generalize to arbitrary α > 0 .

2.1. SG Polynomials, the RLFI Transformation, and FSGIMs

As detailed in [4, Section 2], the SG polynomials Ĝ_n^{(λ)}(t), defined for t ∈ Ω_1 and λ > −1/2, form an orthogonal basis on Ω_1 with respect to the weight function (t − t²)^{λ−1/2}. For f ∈ L²(Ω_1) and SGG nodes Ĝ_n^λ = {t̂_{n,0:n}^λ}, the GBFA interpolant is:
I_n f(t) = Σ_{j=0}^{n} f(t̂_{n,j}^λ) L_j^λ(t),    (4)
where L_{0:n}^λ(t) are the Lagrange basis polynomials in SG form. The core innovation in [4] is Elgindy's transformation (2), which, for 0 < α < 1, yields (3). The GBFA approximation then becomes:
I_t^α f ≈ I_t^α I_n f = (t^α/Γ(α+1)) Σ_{j=0}^{n} f(t̂_{n,j}^λ) ∫_0^1 L_j^λ(t(1 − y^{1/α})) dy.    (5)
The approximation formula (5) shows that the RLFI reduces, after the transformation (2), to evaluating integrals of the form
∫_0^1 L_j^λ(t(1 − y^{1/α})) dy,  j = 0, 1, …, n.    (6)
We refer to (6) as the SG–transformed integrals. They arise naturally from substituting the SG interpolant of f into the transformed RLFI and represent the contribution of each SG basis function to the fractional integral.
Because the mapping y ↦ t(1 − y^{1/α}) is a C^∞-diffeomorphism on the open interval Ω_1 for all α > 0, the integrands in (6) are smooth functions of y on Ω_1, even when the original RLFI kernel (t − τ)^{α−1} is weakly singular for 0 < α < 1. This interior regularity supports high-order SG quadrature convergence. However, for α > 1, the mapping is only Hölder-continuous of order 1/α at the endpoint y = 0, with all derivatives blowing up as y → 0^+ (as shown later in Section 3.6). This endpoint behavior reduces the global regularity of the transformed integrand on Ω_1, affecting the quadrature error component but not the interpolation error (which depends only on the smoothness of f in the original variable t).
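The endpoint derivative blow-up just described is easy to observe numerically. A minimal sketch, assuming the illustrative choices f = cos, t = 1, and α = 2.5 (none of which come from [4]):

```python
import numpy as np

# Illustrative parameters (our choice): alpha > 1 with f = cos, t = 1.
alpha, t = 2.5, 1.0
g = lambda y: np.cos(t * (1.0 - y**(1.0 / alpha)))   # transformed integrand of (6)-type

def dg(y):
    """Central finite-difference estimate of g'(y), with the step scaled to y."""
    h = y / 100.0
    return (g(y + h) - g(y - h)) / (2.0 * h)

# Analytically, g'(y) = sin(t*(1 - y**(1/alpha))) * (t/alpha) * y**(1/alpha - 1),
# which blows up like y**(-0.6) as y -> 0+ when alpha = 2.5.  Shrinking y by a
# factor of 100 should therefore magnify |g'| by roughly 100**0.6 ~ 15.85.
r1 = abs(dg(1e-4)) / abs(dg(1e-2))
r2 = abs(dg(1e-6)) / abs(dg(1e-4))
```

Both ratios land in the vicinity of 100^{1−1/α} ≈ 15.85 (the slowly varying sine factor perturbs them slightly), confirming the y^{1/α−1} derivative growth that limits Gauss-type quadrature accuracy at y = 0.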
In practice, the integrals (6) are approximated using high-order Gauss quadratures such as the ( n q , λ q ) -GBFA quadrature rule [4]. This introduces a quadrature truncation error that we will analyze later in Section 3.5 for arbitrary α > 0 . The explicit definition (6) is therefore essential for understanding both the construction of the FSGIMs and the total error structure of the GBFA method.
As developed in [4, Section 3], the αth-order FSGIM Q_n^{α_E} enables efficient computation:
I_{z_M}^α f ≈ Q_n^{α_E} f_{0:n},    (7)
for evaluation points z_M ∈ Ω_1^{M+1}. These matrices are precomputable and depend only on α, n, λ, and the quadrature parameters; see [4, Eqs. (18) and (20)].
Motivated by the structural properties reviewed above, we now turn to one of the central theoretical contributions of this work.

3. Generalization to Arbitrary Fractional Orders

With the essential machinery now in place, we demonstrate in this section how the GBFA framework naturally extends to arbitrary α > 0 without modification.

3.1. Mathematical Validity of Elgindy’s Transformation

The transformation (2) requires careful examination of the exponent 1/α. For α > 0, the mapping (2) is a bijection between Ω_1 and Ω_t with Jacobian (A46) (see Appendix A.2). Substituting into the RLFI definition yields:
I_t^α f = (1/Γ(α)) ∫_0^t (t − τ)^{α−1} f(τ) dτ = (t^α/(α Γ(α))) ∫_0^1 y^{(α−1)/α} y^{(1−α)/α} f(t(1 − y^{1/α})) dy = (t^α/Γ(α+1)) ∫_0^1 f(t(1 − y^{1/α})) dy,    (8)
which directly establishes (3). Notice here the precise cancellation of the exponents of y: the term y^{(α−1)/α} arising from the kernel (t − τ)^{α−1} is perfectly neutralized by the term y^{(1−α)/α} appearing in the Jacobian dτ/dy. This identity holds for all α > 0; thus, the core GBFA framework—including the approximation formula (5), the FSGIM construction from [4, Section 2], and the evaluation form (7)—extends directly to arbitrary positive orders without modification. Since the transformation (2) is exact for all α > 0, the algorithm for generating Q_n^{α_E} requires no α-dependent changes. However, for α > 1, the endpoint regularity of the transformed integrand is reduced (becoming only Hölder-continuous of order 1/α at y = 0), which affects the quadrature truncation error component but leaves the interpolation error convergence rate intact, as we prove later in Section 3.6.
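The Jacobian computation behind this cancellation, written out as a one-line check from (2):

```latex
\tau = t\bigl(1 - y^{1/\alpha}\bigr)
\;\Longrightarrow\;
\frac{d\tau}{dy} = -\frac{t}{\alpha}\, y^{\frac{1}{\alpha}-1},
\qquad
(t-\tau)^{\alpha-1} = t^{\alpha-1}\, y^{\frac{\alpha-1}{\alpha}},
\qquad
(t-\tau)^{\alpha-1} \Bigl|\frac{d\tau}{dy}\Bigr|
  = \frac{t^{\alpha}}{\alpha}\, y^{\frac{\alpha-1}{\alpha} + \frac{1-\alpha}{\alpha}}
  = \frac{t^{\alpha}}{\alpha}.
```

Dividing by Γ(α) produces exactly the t^α/Γ(α+1) prefactor of (3), with no y-dependence left in the weight.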
With the transformation rigorously justified, we can now analyze how the error structure behaves under arbitrary fractional orders.

3.2. Error Analysis

The error analysis developed in [4, Section 4] carries over to the present framework without any change in the proof structure or asymptotic rates; only the constants in the bounds acquire an implicit dependence on α through the composed integrand f(t(1 − y^{1/α})). The corresponding convergence results are given below in Theorems 1–5, covering analytic, Gevrey-regular, and general C^∞-smooth functions, as well as finitely smooth data with algebraic convergence rates. In particular, Theorems 1 and 2 quantify super-exponential and log-stretched-exponential decay for analytic and Gevrey-regular inputs, Theorems 3 and 4 ensure qualitative convergence for C^∞ data, and Theorem 5 establishes algebraic convergence of order O(n^{−k}) for functions in C^k(Ω_1).
Theorem 1 
(Super-Exponential Convergence for Globally Analytic f). Let α > 0 and t ∈ Ω_1. Suppose that f is real-analytic on Ω_1 and that there exist constants C_f, B_f > 0 such that
‖f^{(k)}‖_{L^∞(Ω_1)} ≤ C_f B_f^k,  ∀k ∈ Z_0^+.    (9)
Let I_t^α f be approximated via the GBFA method using SG interpolation of degree n. Then there exists ξ = ξ(t) ∈ Ω_1 such that the ITE admits the exact representation
I_t^α f − I_t^α I_n f = (f^{(n+1)}(ξ)/K̂_n^λ) (t^α/Γ(α+1)) ∫_0^1 Ĝ_{n+1}^λ(t(1 − y^{1/α})) dy,    (10)
where K̂_n^λ is given by [4, Eq. (24)]. Moreover, the GBFA approximation converges to I_t^α f at a super-exponential rate in n; more precisely, there exists a constant c > 0 independent of n such that
|I_t^α f − I_t^α I_n f| = O(e^{−c n ln n}),  n → ∞.    (11)
Proof. 
The exact error representation (10) follows directly from [4, Theorem 1], which remains valid for any α > 0 since the transformation (2) is a C^∞-diffeomorphism between Ω_1 and Ω_t for all α > 0; see Corollary A2. To obtain an explicit asymptotic bound, we invoke the general estimate derived in [4, Theorem 2], which yields
|I_t^α f − I_t^α I_n f| < A_{n+1} ϑ_{α,λ} (e/4)^n n^{(3/2)−n+λ} Υ_{σ_λ}(n),    (12)
where
A_{n+1} := ‖f^{(n+1)}‖_{L^∞(Ω_1)},
ϑ_{α,λ} = (1/4) √(π e^{α+1}) α^{1−2α} (1 + 2λ)^{(1/2)−2λ} (1 + 1/(1620(1 + λ)^5)) × (1 + λ) csch(1/(1 + 2λ)) (1 + 2λ)^{(1/2)+λ} (α sinh(1/α))^{α/2} ((1 + λ) sinh(1/(1 + λ)))^{(1+λ)/2},    (13)
and
Υ_{σ_λ}(n) = 1 for λ ≥ 0, and Υ_{σ_λ}(n) = σ_λ n^{−λ} for −1/2 < λ < 0,
with σ_λ > 1. By assumption (9), we have the growth bound
A_{n+1} ≤ C_f B_f^{n+1}.    (14)
Substituting (14) into (12) gives
|I_t^α f − I_t^α I_n f| < C_f B_f^{n+1} (e/4)^n n^{(3/2)−n+λ} Υ_{σ_λ}(n).    (15)
Collecting the n-dependent factors, we may rewrite (15) as
|I_t^α f − I_t^α I_n f| ≤ C_1 (e B_f/4)^n n^{(3/2)−n+λ} Υ_{σ_λ}(n),    (16)
for some constant C_1 > 0 independent of n. Since Υ_{σ_λ}(n) is at most algebraic in n, there exist p ∈ R and C_2 > 0 such that
Υ_{σ_λ}(n) ≤ C_2 n^p,  ∀n ≥ 1.
Thus,
|I_t^α f − I_t^α I_n f| ≤ C_3 (e B_f/4)^n n^{−n+q} = C_3 exp(n ln(e B_f/4) − n ln n + q ln n),
for some C_3 > 0 and q ∈ R independent of n. As n → ∞, the dominant term in the exponent is −n ln n. Hence, there exist c > 0 and N ∈ Z^+ such that
n ln(e B_f/4) − n ln n + q ln n ≤ −c n ln n,  ∀n ≥ N,
which implies
(e B_f/4)^n n^{−n+q} ≤ e^{−c n ln n},  ∀n ≥ N.
This proves that the GBFA error decays super-exponentially in n under the growth condition (9).    □
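The factorial-type decay in the proof above can be observed with an off-the-shelf polynomial interpolant. The sketch below uses Chebyshev interpolation on Ω_1 = [0, 1] as a readily available stand-in for the SG interpolant (an assumption made purely for illustration; the GBFA method itself uses SGG nodes and SG bases):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def interp_error(f, n, grid=2001):
    """Sup-norm error of degree-n Chebyshev interpolation of f on [0, 1]."""
    # chebinterpolate works on [-1, 1]; map x in [-1, 1] to tau = (x + 1)/2.
    coef = C.chebinterpolate(lambda x: f((x + 1.0) / 2.0), n)
    tau = np.linspace(0.0, 1.0, grid)
    return np.max(np.abs(f(tau) - C.chebval(2.0 * tau - 1.0, coef)))

# For the analytic function exp, the error decays at a factorial-type rate,
# consistent (up to constants) with the O(exp(-c n log n)) bound of Theorem 1;
# the RLFI error then inherits this rate up to the factor t**alpha/Gamma(alpha+1).
e4, e10 = interp_error(np.exp, 4), interp_error(np.exp, 10)
```

The drop from e4 to e10 spans many orders of magnitude, the hallmark of faster-than-geometric decay for analytic data.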
The following theorem is tailored to smooth but non-globally analytic functions f; see, for example, [13] for a detailed discussion of Gevrey regularity. By log-stretched-exponential convergence we mean stretched-exponential convergence with a logarithmic enhancement, i.e., faster than stretched-exponential but slower than exponential.
Theorem 2 
(Log-Stretched-Exponential Convergence for Gevrey-Regular f). Let α > 0 and t ∈ Ω_1. Suppose that f is C^∞-smooth on Ω_1 and that there exist constants C_f, B_f > 0 and 1 < μ < 2 such that
‖f^{(k)}‖_{L^∞(Ω_1)} ≤ C_f B_f^k (k!)^{μ−1},  ∀k ∈ Z_0^+.    (18)
Let I_t^α f be approximated via the GBFA method using SG interpolation of degree n. Then there exists ξ = ξ(t) ∈ Ω_1 such that the ITE admits the exact representation (10). Moreover, there exists a constant c > 0 independent of n such that
|I_t^α f − I_t^α I_n f| = O(e^{−c n^{1/μ} ln n}),  n → ∞.    (19)
In particular, the GBFA approximation converges faster than any algebraic rate in n for Gevrey-regular f of order μ.
Proof. 
As in Theorem 1, the exact error representation (10) follows from [4, Theorem 1] and the diffeomorphism property of the transformation (2). Consider again the general GBFA error estimate (12). Under the Gevrey growth condition (18), we have
A_{n+1} ≤ C_f B_f^{n+1} ((n + 1)!)^{μ−1}.    (20)
Using Stirling's approximation (n + 1)! ∼ √(2π(n + 1)) ((n + 1)/e)^{n+1}, we obtain
((n + 1)!)^{μ−1} ∼ (2π)^{(μ−1)/2} (n + 1)^{(μ−1)/2} ((n + 1)/e)^{(μ−1)(n+1)}.    (21)
Substituting (20) and (21) into (12), and absorbing (2π)^{(μ−1)/2} and the factor B_f into a generic constant, we obtain, for sufficiently large n:
|I_t^α f − I_t^α I_n f| ≤ C_1 (e B_f/4)^n n^{(μ−2)n} e^{−(μ−1)n} n^{λ+3μ/2} Υ_{σ_λ}(n),    (22)
for some C_1 > 0 independent of n. Since Υ_{σ_λ}(n) grows at most algebraically in n, there exist q ∈ R and C_2 > 0 such that
Υ_{σ_λ}(n) ≤ C_2 n^q,  ∀n ≥ 1.
Thus, from (22) we obtain
|I_t^α f − I_t^α I_n f| ≤ C_3 (e B_f/4)^n n^{(μ−2)n} e^{−(μ−1)n} n^q,    (23)
for some C_3 > 0 and q ∈ R independent of n. Writing the right-hand side of (23) in exponential form,
(e B_f/4)^n n^{(μ−2)n} e^{−(μ−1)n} n^q = exp((μ − 2) n ln n + n(ln(e B_f/4) − (μ − 1)) + q ln n).
For 1 < μ < 2, the coefficient μ − 2 is negative, hence the leading term is
(μ − 2) n ln n = −(2 − μ) n ln n,
which dominates the lower-order contributions O(n) and q ln n as n → ∞. Therefore, there exist constants c_1 > 0, C_4 > 0, and N ∈ Z^+ such that
(e B_f/4)^n n^{(μ−2)n} e^{−(μ−1)n} n^q ≤ C_4 exp(−c_1 n ln n),  ∀n ≥ N.
Since n ln n ≥ n^{1/μ} ln n for all n ≥ 2 and 1 < μ < 2, it follows that
exp(−c_1 n ln n) ≤ exp(−c n^{1/μ} ln n)  for sufficiently large n,
for some c > 0, which proves (19).    □
Theorem 3 
(Qualitative Convergence for Higher-Order Gevrey-Regular f). Let α > 0 and t ∈ Ω_1. Suppose that f is C^∞-smooth on Ω_1 and that there exist constants C_f, B_f > 0 and μ ≥ 2 such that
‖f^{(k)}‖_{L^∞(Ω_1)} ≤ C_f B_f^k (k!)^{μ−1},  ∀k ∈ Z_0^+.    (24)
Let I t α f be approximated via the GBFA method using SG interpolation of degree n. Then the GBFA approximation converges pointwise to the exact RLFI, i.e.,
lim_{n→∞} |I_t^α f − I_t^α I_n f| = 0.    (25)
Moreover, the convergence is uniform in t on any compact subset K ⊂ Ω_1.
Proof. 
The Gevrey condition (24) with μ ≥ 2 implies that f ∈ C^∞(Ω_1). For such smooth functions, spectral interpolation at Gauss-type nodes yields uniform convergence. Specifically, for SG interpolation at the SGG nodes Ĝ_n^λ, there exists a sequence {η_n}_{n=1}^∞ with η_n → 0 such that
‖f − I_n f‖_{L^∞(Ω_1)} ≤ η_n,  ∀n ∈ Z^+.    (26)
This follows from standard results in polynomial approximation theory; see, e.g., the convergence properties of interpolation in orthogonal polynomial bases at Gauss quadrature nodes [10,11,12,58].
Next, the RLFI I_t^α(·) defines a bounded linear operator on C(Ω_1). For any g ∈ C(Ω_1),
|I_t^α g| ≤ (1/Γ(α)) ∫_0^t (t − τ)^{α−1} |g(τ)| dτ ≤ (t^α/Γ(α+1)) ‖g‖_{L^∞(Ω_1)},  ∀t ∈ Ω_1.    (27)
Applying this bound with g = f − I_n f and using (26), we obtain for every t ∈ Ω_1:
|I_t^α f − I_t^α I_n f| = |I_t^α(f − I_n f)| ≤ (t^α/Γ(α+1)) ‖f − I_n f‖_{L^∞(Ω_1)} ≤ (t^α/Γ(α+1)) η_n.    (28)
Consequently, for any fixed t ∈ Ω_1, since η_n → 0, we have the pointwise convergence (25).
For the uniform convergence on compact subsets, let K ⊂ Ω_1 be compact. Then sup_{t∈K} t^α < ∞, and we obtain
sup_{t∈K} |I_t^α f − I_t^α I_n f| ≤ (1/Γ(α+1)) (sup_{t∈K} t^α) η_n → 0  as n → ∞,
which establishes uniform convergence on K.    □
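The two ingredients of this proof, the operator bound carrying the t^α/Γ(α+1) factor and the uniform interpolation convergence, can be checked numerically. The sketch below again uses a Chebyshev interpolant as a stand-in for I_n f and evaluates both RLFIs through the transformed integral with a fine Gauss–Legendre rule (all names and parameter choices are ours, for illustration only):

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

alpha, t, n = 1.7, 0.9, 8
f = np.sin

# Degree-n Chebyshev interpolant of f on [0, 1], standing in for the SG interpolant I_n f.
coef = C.chebinterpolate(lambda x: f((x + 1.0) / 2.0), n)
p = lambda tau: C.chebval(2.0 * tau - 1.0, coef)

# RLFI via the transformed identity I_t^alpha g = t**alpha/Gamma(alpha+1) *
# integral_0^1 g(t*(1 - y**(1/alpha))) dy, with a fine Gauss-Legendre rule on [0, 1].
y, w = np.polynomial.legendre.leggauss(400)
y, w = 0.5 * (y + 1.0), 0.5 * w
rlfi = lambda g: t**alpha / math.gamma(alpha + 1) * np.dot(w, g(t * (1.0 - y**(1.0 / alpha))))

tau = np.linspace(0.0, 1.0, 2001)
lhs = abs(rlfi(f) - rlfi(p))                                  # |I_t^alpha (f - I_n f)|
rhs = t**alpha / math.gamma(alpha + 1) * np.max(np.abs(f(tau) - p(tau)))   # bound (28)
```

As the proof predicts, the RLFI error (lhs) never exceeds the scaled interpolation error (rhs), and both are tiny already at n = 8 because sin is analytic.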
Theorem 4 
(Qualitative Convergence for Smooth Data). Let α > 0 and t ∈ Ω_1. Suppose that f ∈ C^∞(Ω_1) and that, for each n ∈ Z^+, I_n f denotes the SG interpolant of f at the SGG nodes Ĝ_n^λ ⊂ Ω_1 as in (4). Assume that the SG interpolation operator satisfies
lim_{n→∞} ‖f − I_n f‖_{L^∞(Ω_1)} = 0.    (29)
Let I_t^α f be approximated via the GBFA method using SG interpolation of degree n. Then (25) is satisfied, and the convergence is uniform in t on any compact subset K ⊂ Ω_1.
Proof. 
By assumption, f ∈ C^∞(Ω_1) and (29) holds, i.e., the SG interpolants I_n f at the SGG nodes converge uniformly to f on Ω_1. This is precisely the interpolation property used in the proof of Theorem 3, where the Gevrey condition (24) was invoked only to guarantee smoothness and the uniform convergence of SG interpolation. The remaining step in the proof of Theorem 3 relies on two ingredients: (i) the bounded linearity of the RLFI I_t^α(·) on C(Ω_1), and (ii) the uniform convergence of I_n f to f in L^∞(Ω_1). These ingredients are entirely independent of the Gevrey growth condition and require only (29). Consequently, the same argument shows that (25) holds for each fixed t ∈ Ω_1, and that the convergence is uniform in t on any compact subset K ⊂ Ω_1. This establishes the proof.    □
Remark 1 
(Minimal Versus Structured Regularity). Theorem 4 identifies minimal smooth-data assumptions for qualitative convergence of the GBFA method–namely: (i) f is C^∞-smooth on Ω_1, and (ii) the SG interpolants converge uniformly to f as in (29). In fact, the proof of convergence fundamentally relies only on the interpolation property (29) and the boundedness of the RLFI operator, and does not preclude convergence in weaker function spaces. By contrast, Theorem 3 concerns the more restrictive class of Gevrey-regular functions with index μ ≥ 2. Although this theorem also yields qualitative convergence, it plays a different conceptual role: it completes the Gevrey hierarchy established in Theorems 1–2 and shows that, as the Gevrey index crosses the threshold μ = 2, the GBFA method transitions from quantitative (stretched-exponential) convergence to purely qualitative convergence. Thus, Theorem 3 serves to characterize the behavior of the GBFA method across the full spectrum of Gevrey regularity, whereas Theorem 4 isolates the weakest smooth-data conditions under which convergence still holds in this framework.

3.3. Dependence of the GBFA ITE on the Parameter λ

The dependence of the GBFA ITE on the SG parameter λ arises through several components of the general estimate (12). The dominant λ-dependence is contained in the constant ϑ_{α,λ}, which, as shown in [4], is unimodal in λ, attaining its maximum near λ* ≈ 0.1351 and then decaying exponentially as λ increases. This exponential decay is the primary mechanism governing the behavior of the error with respect to λ. In contrast, the remaining λ-dependent factors in (12), namely the algebraic term n^{(3/2)−n+λ} and, for −1/2 < λ < 0, the auxiliary factor Υ_{σ_λ}(n) = σ_λ n^{−λ}, contribute only algebraic growth or decay in n. For fixed n, these algebraic contributions are dominated by the exponential decay of ϑ_{α,λ}, and therefore do not alter the qualitative trend that the GBFA ITE decreases as λ increases beyond λ*.
For moderate fractional orders α ∈ Ω_{1,2}, this unimodal structure of ϑ_{α,λ} explains why increasing λ improves the ITE. For larger α, the dominant factor α^{1−2α} in (13) decays super-exponentially, suppressing the λ-dependence of all terms in (12). This accounts for the numerical observation in Section 7 that, for α > 2, the GBFA error becomes nearly invariant across all tested values of λ. Although the analysis in [4] focused on 0 < α < 1, the structure of the λ-dependent terms in (12) remains unchanged for all α > 0, and the qualitative behavior of the ITE with respect to λ therefore extends naturally to arbitrary fractional orders.
Figure 1 extends the numerical study of [4, Fig. 3] from the classical regime 0 < α < 1 to fractional orders α > 1. The results exhibit the same qualitative structure observed previously: for moderate values of α (e.g., α ∈ Ω_{1,2}), the GBFA error retains a clear dependence on the SG parameter λ, with smaller errors obtained for increasing λ > λ* ≈ 0.1351, in agreement with the unimodal behavior of ϑ_{α,λ} described earlier in this section. As α increases further, the dominant factor α^{1−2α} in the constant ϑ_{α,λ} decays super-exponentially, effectively suppressing the λ-dependence of the entire bound (12). This explains the near-coincidence of all curves for increasing α values in the classical transformation, where the total GBFA error is dominated by the quadrature component and the method appears only weakly sensitive to λ. Thus, Figure 1 demonstrates that the parameter-sensitivity patterns reported in [4] for 0 < α < 1 persist for α > 1, but gradually diminish as α grows, in full agreement with the theoretical predictions.
Remark 2. 
Although the theoretical bound (12) indicates that the dependence on λ should diminish as α increases, numerical experiments using the ϕ(α)-generalized transformation introduced in Section 4 exhibit a more nuanced trend. For this generalized mapping, moderate values of λ (typically λ ∈ Ω_1) continue to deliver excellent accuracy for large α. In contrast, extreme values near the theoretical lower limit λ → −0.5^+ show noticeably degraded performance as α grows, effectively widening the practical λ-sensitivity range. These observations suggest that, in applications of the generalized transformation, λ should be selected from Ω_1 to avoid the blow-up region near −0.5^+, while λ_q should be fixed at 0.5 to fully exploit the C^r endpoint regularization, as demonstrated in Section 7.3.

3.4. Finite Smoothness Case

The previous results focused on globally analytic functions, Gevrey-regular data, and C^∞-smooth functions. In many practical applications, however, the input f may possess only finite smoothness, say f ∈ C^k(Ω_1) for some finite k ∈ Z^+. In this case, one expects algebraic—rather than (super-)exponential—convergence, in line with classical results from spectral and pseudospectral approximation theory; see, for example, [11]. The following theorem formalizes this behavior for the GBFA approximation of the RLFI.
Theorem 5 
(Algebraic Convergence for C^k Data). Let α > 0 and t ∈ Ω_1. Suppose that f ∈ C^k(Ω_1) for some integer k ≥ 1, and that its kth derivative satisfies
‖f^{(k)}‖_{L^∞(Ω_1)} ≤ C_k,    (30)
for some constant C_k > 0. For each n ∈ Z^+, let I_n f denote the SG interpolant of f at the SGG nodes Ĝ_n^λ ⊂ Ω_1 as in (4). Then there exist constants C_app, C_{α,k} > 0, independent of n, such that
‖f − I_n f‖_{L^∞(Ω_1)} ≤ C_app n^{−k},  ∀n ∈ Z^+,    (31)
and the GBFA approximation of the RLFI satisfies the algebraic error bound
|I_t^α f − I_t^α I_n f| ≤ C_{α,k} n^{−k},  ∀n ∈ Z^+.    (32)
In particular, the GBFA approximation of I_t^α f converges to the exact RLFI at an algebraic rate O(n^{−k}) as n → ∞ for every fixed t ∈ Ω_1.
Proof. 
By assumption, f ∈ C^k(Ω_1) with bounded kth derivative (30). For polynomial approximation on Ω_1 with respect to an orthogonal basis associated with a Jacobi-type weight (in particular, SG polynomials), classical Jackson-type estimates and interpolation results imply that there exists a constant C_app > 0, independent of n, such that the SG interpolation error at the SGG nodes satisfies the algebraic bound (31) for every fixed λ > −1/2 [14,15].
To transfer the interpolation error to the RLFI, we use the boundedness of the RLFI as a linear operator on C ( Ω 1 ) . Using (28) and (31), we obtain
|I_t^α f − I_t^α I_n f| ≤ (t^α/Γ(α+1)) C_app n^{−k}.
Setting
C_{α,k} := (C_app/Γ(α+1)) sup_{t∈Ω_1} t^α,
which is finite since t^α ≤ 1 on Ω_1, yields (32).    □
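Theorem 5's algebraic rate is visible numerically for a function of limited smoothness. The sketch below interpolates f(τ) = |τ − 1/2|³ ∈ C²(Ω_1) (an illustrative choice of ours, with f ∈ C^{2,1}) using Chebyshev interpolation as a stand-in for the SG interpolant; quadrupling the degree shrinks the sup error only algebraically (here close to O(n^{−3})), far from the spectral decay seen for analytic data:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda tau: np.abs(tau - 0.5)**3   # in C^2([0,1]); the third derivative jumps at 1/2

def interp_error(n, grid=4001):
    """Sup-norm error of degree-n Chebyshev interpolation of f on [0, 1]."""
    coef = C.chebinterpolate(lambda x: f((x + 1.0) / 2.0), n)
    tau = np.linspace(0.0, 1.0, grid)
    return np.max(np.abs(f(tau) - C.chebval(2.0 * tau - 1.0, coef)))

e16, e64 = interp_error(16), interp_error(64)
ratio = e64 / e16   # algebraic decay: roughly (16/64)**3 = 1/64, not machine precision
```

Multiplying such interpolation errors by the factor t^α/Γ(α+1) ≤ 1/Γ(α+1), as in the proof above, transfers the same O(n^{−k}) behavior to the RLFI approximation (32).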
Remark 3 
(Convergence Hierarchy for Smooth Functions). The GBFA method exhibits a classical smoothness-dependent convergence hierarchy in terms of its ITE:
  • Globally analytic f (μ = 1): super-exponential convergence O(e^{−c n ln n}), as shown by Theorem 1.
  • Gevrey-regular f with 1 < μ < 2: log-stretched-exponential convergence O(e^{−c n^{1/μ} ln n}), as shown by Theorem 2. This rate naturally interpolates between algebraic and super-exponential rates.
  • Gevrey-regular f with μ ≥ 2 (and general C^∞ data): qualitative convergence guaranteed, but no explicit rate from the current analysis (Theorems 3 and 4). The restriction 1 < μ < 2 in Theorem 2 is essential because for μ ≥ 2 the leading term (μ − 2) n ln n in the error exponent becomes non-negative, requiring different analysis techniques.
  • Finite smoothness f ∈ C^k(Ω_1): algebraic convergence O(n^{−k}), as shown by Theorem 5.
This hierarchy demonstrates that the GBFA method preserves the classical spectral property: convergence accelerates with increased smoothness of f, from algebraic for finitely differentiable functions to super-exponential for analytic functions. The RLFI operator does not alter this fundamental behavior but merely introduces a bounded scaling factor dependent on α and t.

3.5. Quadrature Truncation Error for GBFA-Based RLFI

In the GBFA framework for α ∈ Ω_1, the SG–transformed integrals (6) appearing in the RLFI approximation are evaluated numerically using the (n_q, λ_q)–GBFA quadrature; cf. the SGIRV construction in [16, Algorithms 6–7]. This numerical step introduces a quadrature truncation error that combines with the interpolation error to determine the total accuracy of the method. The quadrature truncation error for SG–based RLFI integrands is rigorously characterized in [4, Theorems 3–4]. These results provide a closed-form expression for the quadrature truncation error in terms of the (n_q + 1)th SG derivative of the composed integrand, and sharp asymptotic bounds describing its decay with respect to n_q, including the regimes j ≤ n_q and j ≥ n_q + 1. In particular, [4, Theorem 3] gives the exact quadrature error representation for each SG mode j, while [4, Theorem 4] establishes its asymptotic decay rate and the influence of the quadrature index λ_q. Finally, the quadrature error produced by the (n_q, λ_q)–GBFA rule combines with the SG interpolation error analyzed in Theorems 1–5 to yield the total truncation error of the GBFA method. A complete asymptotic bound for this combined error is already established in [4, Theorem 5]. For analytic functions (Theorem 1), the regularity assumptions required by [4, Theorem 5] are fully satisfied, so the total GBFA error inherits the super-exponential decay in n coming from the ITE together with the fast decay in n_q associated with the (n_q, λ_q)–GBFA quadrature. In particular, as shown in [4, Theorem 4], the quadrature error exhibits a two-regime structure: it vanishes identically whenever j < n_q + 1, and for j ≥ n_q + 1 its dominant terms decay exponentially in n_q (up to algebraic prefactors depending only on λ and λ_q) under the smallness conditions on the quantities 2 t η^{1/α−1}/α and t η^{1/α−1}/α specified in [4, Eqs. (41) and (43)], where η ∈ Ω_1.
Consequently, for analytic f, the total truncation error of the GBFA method is typically dominated by the interpolation component and therefore decays super-exponentially in n, while the quadrature component either vanishes or becomes negligible for sufficiently large n_q in the admissible parameter regime. Thus, the overall accuracy of the GBFA method is governed jointly by the smoothness class of f and the chosen interpolation and quadrature parameters: the interpolation component decays at the rate dictated by the regularity of f, whereas the quadrature component is either zero or decays at least exponentially in n_q when the transformed integrand is sufficiently smooth (e.g., when 1/α ∈ Z_+); otherwise, it exhibits algebraic convergence due to limited endpoint regularity.
Remark 4 
(On the Smoothness Assumptions in the GBFA Error Analysis [4]). The truncation-error analysis in the GBFA work [4] (Theorems 1–5 therein) is carried out under the single assumption that f ∈ C^{n+1}(Ω_1), since the Lagrange interpolation remainder formula requires the existence and boundedness of f^{(n+1)}. No distinction is made between analytic, Gevrey-regular, C^∞, or finitely smooth data; all bounds are expressed in terms of the quantity ‖f^{(n+1)}‖_{L^∞(Ω_1)} together with asymptotic properties of SG polynomials and Gamma-function estimates.
It is important to note that the super-exponential decay in [4] is generally valid for 1/α ∈ Z_+ when the derivatives of f satisfy an analytic-type growth condition of the form ‖f^{(k)}‖ ≤ C B^k, which ensures that ‖f^{(n+1)}‖_{L^∞(Ω_1)} does not dominate the polynomial-asymptotic factors. For general C^{n+1} functions, however, the derivatives may grow at rates comparable to k! (Gevrey), (k!)^μ (higher-order Gevrey), or even faster, in which case the factor ‖f^{(n+1)}‖_{L^∞(Ω_1)} may prevent super-exponential decay.
The present paper refines the convergence analysis by establishing a complete hierarchy of smoothness classes (globally analytic, Gevrey-regular with 1 < μ < 2, general C^∞, and finitely smooth C^k) and deriving the corresponding decay rates of the ITE in each case. This clarifies that super-exponential convergence of the total GBFA error is achieved only when two conditions hold simultaneously: (i) f is globally analytic (ensuring super-exponential ITE decay per Theorem 1), and (ii) 1/α ∈ Z_+, under which the classical transformation (2) renders the SG–transformed integrands (6) real-analytic (indeed, polynomial if f is polynomial). In this special regime, the (n_q, λ_q)–GBFA quadrature is exact for n_q ≥ n, eliminating the quadrature truncation error entirely and yielding full super-convergence of the total error, even for functions whose derivatives grow rapidly, as confirmed in Theorem A10.

3.6. Endpoint Regularity of the Transformed Integrand for α > 1

The GBFA method relies on the transformation (2), which eliminates the algebraic kernel factor (t − τ)^{α−1} in the RLFI. After applying (2), the RLFI integrand becomes
g(y) := f(t(1 − y^{1/α})), y ∈ Ω_1,      (33)
which appears inside the SG–transformed integrals (6). Even when f is assumed smooth on Ω_1, the mapping y ↦ y^{1/α} is not C^1 at y = 0 when α > 1. This subsection clarifies the precise regularity of g near y = 0 and its implications for the GBFA error structure.
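The endpoint behavior described above is easy to observe numerically. The following minimal sketch (not from the paper's MATLAB implementation; the choices of f, t, and α are illustrative) checks that g(y) = f(t(1 − y^{1/α})) is Hölder-continuous of order 1/α at y = 0 while its first derivative blows up like y^{1/α − 1}:

```python
# Illustrative sketch: endpoint regularity of g(y) = f(t(1 - y^(1/alpha)))
# for alpha > 1. Sample values below are assumptions for demonstration only.
alpha, t = 2.0, 0.5
f = lambda tau: 2 * tau**3 + 8 * tau   # smooth test function

def g(y):
    return f(t * (1.0 - y ** (1.0 / alpha)))

def dg(y, h=1e-10):
    """Central-difference estimate of g'(y)."""
    return (g(y + h) - g(y - h)) / (2 * h)

# Hoelder continuity of order 1/alpha holds at y = 0 ...
ratios = [abs(g(y) - g(0.0)) / y ** (1.0 / alpha) for y in (1e-4, 1e-6, 1e-8)]
# ... but g'(y) ~ y^(1/alpha - 1) blows up as y -> 0+.
derivs = [abs(dg(y)) for y in (1e-2, 1e-4, 1e-6)]
print(ratios)   # stays bounded (approaches |f'(t)| * t = 9.5 * 0.5 = 4.75 here)
print(derivs)   # grows roughly like y^(-1/2) for alpha = 2
```

The bounded Hölder ratio together with the diverging difference quotient is exactly the C^{0,1/α} ∖ C^1 behavior established in Lemma 1 below.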
Lemma 1 
(Regularity of y^{1/α} for α > 1). Let α > 1 and set β := 1/α ∈ (0, 1). Then:
(i)
y ↦ y^β is continuous on Ω_1 and analytic on Ω_1 ∖ {0}.
(ii)
For every integer m ≥ 1,
(d^m/dy^m) y^β = c_{β,m} y^{β−m}, with constants c_{β,m} ≠ 0.
(iii)
Since β − m < 0 for all m ≥ 1, all derivatives blow up at y = 0:
lim_{y→0+} |(d^m/dy^m) y^β| = +∞.
In particular, since |x^{1/α} − y^{1/α}| ≤ |x − y|^{1/α} for all x, y ∈ Ω_1, the map y ↦ y^β is Hölder-continuous of order β on Ω_1, but its derivatives blow up at y = 0; hence it belongs to C^{0,β}(Ω_1) but not to C^1(Ω_1) for any α > 1.
Theorem 6 
(Regularity of the Transformed Integrand g for α > 1). Let α > 1 and define g by (33). Assume that f ∈ C^∞(Ω_1). Then g ∈ C(Ω_1) ∖ C^1(Ω_1), g ∈ C^∞(Ω_1 ∖ {0}), and g is real-analytic on Ω_1 ∖ {0} whenever f is real-analytic.
Proof. 
Let α > 1 and set β := 1/α ∈ (0, 1). Define the inner map
ϕ(y) := t(1 − y^β), y ∈ Ω_1.
By Lemma 1, the function y ↦ y^β is continuous on Ω_1 and analytic on Ω_1 ∖ {0}, with all derivatives blowing up at y = 0. Since f ∈ C^∞(Ω_1), the composition g(y) = f(ϕ(y)) is C^∞ on Ω_1 ∖ {0}; when f is real-analytic, g is real-analytic there by the chain rule for analytic functions.
For the global regularity on Ω_1, we analyze the derivatives of g at y = 0. By Faà di Bruno's formula for the mth derivative of a composition g = f ∘ ϕ [17], we have
g^{(m)}(y) = Σ_{k=1}^{m} f^{(k)}(ϕ(y)) B_{m,k}(ϕ′(y), ϕ″(y), …, ϕ^{(m−k+1)}(y)),
where B_{m,k} are the (partial) Bell polynomials in the derivatives of ϕ. Each term in this sum is a finite linear combination of monomials of the form
f^{(k)}(ϕ(y)) ∏_{j=1}^{m} (ϕ^{(j)}(y))^{ν_j},
where the nonnegative integers ν_1, …, ν_m satisfy
Σ_{j=1}^{m} ν_j = k,  Σ_{j=1}^{m} j ν_j = m.
Using Lemma 1, each derivative ϕ^{(j)}(y) behaves like
ϕ^{(j)}(y) = −t c_{β,j} y^{β−j},
so the product scales as
∏_{j=1}^{m} (ϕ^{(j)}(y))^{ν_j} = C_ν y^{Σ_j (β−j) ν_j} = C_ν y^{βk−m},
for some constant C_ν depending only on t, β, and the multi-index ν = (ν_1, …, ν_m). Thus, every Faà di Bruno term in g^{(m)}(y) is asymptotically of the form
f^{(k)}(ϕ(y)) y^{βk−m} as y → 0+.
To identify the most singular contribution, we note that for fixed m the exponent βk − m is strictly increasing in k because β > 0. Hence the smallest exponent (i.e., the most negative, and therefore the most singular) corresponds to the smallest possible k. Since k ≥ 1, the dominant contribution arises from the terms with k = 1. For such terms we have
βk − m = β − m,
and the corresponding part of g^{(m)}(y) behaves like
g^{(m)}(y) ∼ f′(ϕ(y)) y^{β−m}, y → 0+.      (34)
In particular, if f′(t) ≠ 0, then f′(ϕ(y)) → f′(t) ≠ 0 as y → 0+, and the factor y^{β−m} controls the singular behavior of g^{(m)}(y) near y = 0. For y^{β−m} to remain bounded as y → 0+, we require β − m ≥ 0, i.e., m ≤ β = 1/α. Since β < 1 for every α > 1, no integer m ≥ 1 satisfies m ≤ β. Thus, unless f′(t) = 0, the dominant contribution (34) already becomes singular for m = 1, and higher derivatives exhibit even stronger divergence. If f′(t) = 0, the leading singular term vanishes, but similar divergences reappear in higher-order contributions unless f possesses sufficiently high-order zeros at t. This establishes the proof.    □
Remark 5 
(Effect on GBFA Error: Quadrature vs. Interpolation). The finite endpoint regularity described in Theorem 6 affects only the quadrature error associated with approximating the SG–transformed integrals (6) using the ( n q , λ q ) –GBFA quadrature rule. For α > 1 , the composed integrand g is merely Hölder-continuous of order 1 / α < 1 at y = 0 , which directly reduces the global regularity of g on Ω 1 and significantly slows the convergence of the quadrature truncation error component compared to the 0 < α < 1 case. This quadrature error dominates the total error for analytic f, limiting the overall convergence rate of the GBFA method and causing it to degrade as α increases (with larger α yielding slower quadrature convergence for fixed n q ). Importantly, the ITE in the original variable t remains unaffected and retains super-exponential decay in n. Thus, while the interpolation component preserves the full spectral rate dictated by the smoothness of f, the total GBFA accuracy for α > 1 is governed primarily by the typically algebraic quadrature decay, requiring substantially larger n q to maintain high accuracy as α grows, as confirmed numerically in Section 7.

3.7. Endpoint Regularity of the Transformed Integrand for 0 < α < 1, 1/α ∉ Z_+

We now analyze the SG–transformed integrand g defined in (33) for the regime 0 < α < 1. In contrast to the α > 1 case of Subsection 3.6, the inner map ϕ(y) = t(1 − y^{1/α}) exhibits higher endpoint smoothness when 1/α ∉ Z_+, and this improved regularity propagates to g.
Lemma 2 
(Regularity of y^{1/α} for 0 < α < 1, 1/α ∉ Z_+). Let 0 < α < 1 and β := 1/α > 1 with β ∉ Z_+. Then:
(i)
y ↦ y^β is continuous on Ω_1 and analytic on Ω_1 ∖ {0}.
(ii)
For every integer m ≥ 1, the derivative formula from Lemma 1(ii) applies with β > 1, and the exponent β − m determines the endpoint behavior.
(iii)
(d^m/dy^m) y^β remains finite at y = 0 iff m ≤ ⌊β⌋, and diverges for all m ≥ ⌊β⌋ + 1.
Hence y^β ∈ C^{⌊β⌋, β−⌊β⌋}(Ω_1) and y^β ∉ C^{⌊β⌋+1}(Ω_1).
Theorem 7 
(Regularity of the Transformed Integrand g for 0 < α < 1, 1/α ∉ Z_+). Let 0 < α < 1 and define g by (33). Assume f ∈ C^∞(Ω_1). Then:
g ∈ C^{⌊1/α⌋, 1/α−⌊1/α⌋}(Ω_1) ∖ C^{⌊1/α⌋+1}(Ω_1),
and g is analytic on Ω_1 ∖ {0} whenever f is analytic.
Proof. 
Let β = 1/α > 1. The derivatives of ϕ(y) = t(1 − y^β) are given by the same expressions used in the proof of Theorem 6, with the scaling ϕ^{(j)}(y) ∝ y^{β−j} inherited from Lemma 1. Applying Faà di Bruno's formula [17] exactly as in Subsection 3.6 yields
g^{(m)}(y) ∼ f′(ϕ(y)) y^{β−m}, y → 0+,
since the dominant contribution again corresponds to k = 1 in the Bell-polynomial expansion. Because β > 1, the quantity β − m is nonnegative precisely when m ≤ ⌊β⌋, and negative for m ≥ ⌊β⌋ + 1. Thus g^{(m)}(0) is finite for m ≤ ⌊β⌋ and diverges otherwise, unless f has sufficiently high-order zeros at t. This proves the stated regularity.    □
Remark 6 
(Effect on GBFA Error for 0 < α < 1, 1/α ∉ Z_+). The endpoint smoothness in Theorem 7 improves the SG–quadrature behavior relative to the α > 1 case. Since g now possesses C^{⌊1/α⌋} regularity at y = 0, the (n_q, λ_q)–GBFA quadrature applied to (6) achieves a higher-order algebraic decay rate than that described in Remark 5. The ITE in t remains unchanged and retains super-exponential decay in n. Consequently, for 0 < α < 1, the total GBFA error is governed by a faster quadrature convergence that improves as α decreases.

4. Restoring Endpoint Regularity via a Generalized Transformation

For α > 1, the transformation (2) remains mathematically valid; however, as shown in Section 3.6, the mapping y ↦ y^{1/α} is only Hölder-continuous of order 1/α at y = 0, and all of its derivatives blow up as y → 0+. This loss of endpoint regularity affects only the quadrature truncation error component of the GBFA method. For 0 < α < 1 with 1/α ∉ Z_+, the same mapping exhibits a complementary limitation: although y^{1/α} is differentiable at y = 0, its higher-order derivatives fail to exist due to the noninteger exponent. Consequently, for every α > 0 with 1/α ∉ Z_+, the transformation (2) imposes endpoint smoothness restrictions that may reduce the global regularity of the transformed integrand. To improve the global smoothness of the transformed integrand for such values of α, we introduce the generalized mapping
τ = t(1 − y^{ϕ(α)}), y ∈ Ω_1, ϕ(α) > 0.      (35)
The mapping (35) is a C^∞-diffeomorphism on Ω_1 ∖ {0} for every α > 0 (see A.4), with
dτ/dy = −t ϕ(α) y^{ϕ(α)−1}.
Substituting (35) into the RLFI definition (1) yields the exact identity
I_t^α f = (ϕ(α) t^α / Γ(α)) ∫_0^1 y^{αϕ(α)−1} f(t(1 − y^{ϕ(α)})) dy.      (36)
Thus, for α > 0, the RLFI reduces to evaluating weighted SG-transformed integrals of the form
∫_0^1 y^{p(α)} f(t(1 − y^{ϕ(α)})) dy, with weight exponent p(α) := αϕ(α) − 1,      (37)
in place of the unweighted integrals (6). The GBFA approximation corresponding to (36) becomes
I_t^α f ≈ (ϕ(α) t^α / Γ(α)) ∫_0^1 y^{p(α)} (I_n f)(t(1 − y^{ϕ(α)})) dy.      (38)
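The identity (36) can be verified directly. The sketch below is an illustration only: it uses Gauss–Legendre nodes as a stand-in for the paper's shifted Gegenbauer quadrature, and the values of α, t, and ϕ(α) are assumptions chosen so that ϕ(α) is an integer, making the transformed integrand a polynomial that the quadrature integrates exactly.

```python
import math
import numpy as np

# Verify I_t^alpha f = (phi * t^alpha / Gamma(alpha)) *
#   int_0^1 y^(alpha*phi - 1) f(t (1 - y^phi)) dy
# for f(tau) = 2 tau^3 + 8 tau, whose RLFI has the closed form
#   12 t^(alpha+3)/Gamma(alpha+4) + 8 t^(alpha+1)/Gamma(alpha+2).
alpha, t = 2.5, 0.5
phi = 2.0                      # = (r+1)/alpha with r = 4; integer, so the
                               # transformed integrand is a polynomial in y
p = alpha * phi - 1.0          # weight exponent p(alpha)

f = lambda tau: 2 * tau**3 + 8 * tau
exact = 12 * t**(alpha + 3) / math.gamma(alpha + 4) \
      + 8 * t**(alpha + 1) / math.gamma(alpha + 2)

x, w = np.polynomial.legendre.leggauss(12)   # nodes/weights on [-1, 1]
y = 0.5 * (x + 1.0)                          # map to [0, 1]
w = 0.5 * w
approx = phi * t**alpha / math.gamma(alpha) * np.sum(w * y**p * f(t * (1 - y**phi)))

rel_err = abs(approx - exact) / abs(exact)
print(rel_err)
```

Since the integrand here is a polynomial of degree 10, the 12-node rule is exact and the relative error is at roundoff level.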

4.1. Weighted FSGIM Construction

Following the same steps used in [4, Section 3], we define
Y_{n_q}^{(λ_q,α)} := (ŷ_{n_q}^{λ_q})^{p(α)} 1_{n+1}^⊤,  τ_{M,i,n_q}^{(λ_q)} := z_{M,i}(1_{n_q+1} − (ŷ_{n_q}^{λ_q})^{ϕ(α)}),
with the powers applied elementwise, and obtain the discrete GBFA approximation
I_{z_{M,i}}^α f ≈ (ϕ(α) z_{M,i}^α / Γ(α)) P̂ (Y_{n_q}^{(λ_q,α)} ⊙ L_{0:n}^λ(τ_{M,i,n_q}^{(λ_q)})) f_{0:n},      (39)
for all i ∈ J_M. This gives the weighted FSGIM Q_n^{α,E}, defined row-wise by
Q_{n,i,0:n}^{α,E} := (ϕ(α) z_{M,i}^α / Γ(α)) P̂ (Y_{n_q}^{(λ_q,α)} ⊙ L_{0:n}^λ(τ_{M,i,n_q}^{(λ_q)})),      (40)
which reduces to the original FSGIM in [4] when ϕ ( α ) = 1 / α .

5. Computational Complexity of the Weighted FSGIM

We analyze the computational complexity of the weighted FSGIM defined row-wise by (40). Let n, n_q, M ∈ Z_+ denote the SG interpolation degree, the GBFA quadrature order, and the number of evaluation points, respectively. As in [4], the computation of the SG polynomials Ĝ_{0:n}^{(λ)} using the three-term recurrence relation requires O(n) operations per degree. Since polynomial evaluations are required for all degrees up to n, this stage contributes O(n^2) operations per entry of z_M.
For a fixed evaluation point z_{M,i}, the construction of the corresponding FSGIM row requires evaluating the SG-based expressions at the n_q + 1 transformed quadrature nodes, which incurs an additional O(n n_q) flops. The vector (ŷ_{n_q}^{λ_q})^{p(α)} is computed in O(n_q) flops, and forming the outer product Y_{n_q}^{(λ_q,α)} = (ŷ_{n_q}^{λ_q})^{p(α)} 1_{n+1}^⊤ requires (n_q + 1)(n + 1) multiplications. The Hadamard product with L_{0:n}^λ(τ_{M,i,n_q}^{(λ_q)}) introduces another (n_q + 1)(n + 1) multiplications. Applying P̂ to collapse the n_q + 1 rows into one row requires n + 1 dot products of length n_q + 1, i.e., O(n n_q) flops, and the scalar factor ϕ(α) z_{M,i}^α / Γ(α) adds O(n) flops.
Collecting the dominant terms, the overall per-row cost of constructing the weighted FSGIM is
cost_row^weighted = O(n^2 + n n_q) = O(n(n + n_q)),
with constant factors at most a small multiple of the classical case. Therefore, constructing Q_n^{α,E} ∈ R^{(M+1)×(n+1)} for all entries of z_M requires O(M n(n + n_q)) flops. Once Q_n^{α,E} is available, applying the FSGIM to f_{0:n} is a dense matrix–vector multiplication costing O(M n) flops in both the classical and weighted cases.

6. Endpoint Regularity and the Choice of ϕ ( α )

The transformed integrand in (37) is
g(y) = y^{p(α)} f(t(1 − y^{ϕ(α)})), p(α) = αϕ(α) − 1.
For analytic f, g is analytic on Ω_1 ∖ {0}, but its smoothness at y = 0 depends on the competition between the vanishing weight y^{p(α)} and the singular derivatives of y^{ϕ(α)} as y → 0+. A Faà di Bruno-type exponent analysis shows that a sufficient condition for g ∈ C^r(Ω_1) is (A48); see Section A.5. This condition provides a practical design rule: choosing ϕ(α) according to (A48) guarantees that the transformed integrand is at least C^r at y = 0. Since Gauss-type quadrature for C^r functions converges algebraically at a rate O(n_q^{−r}), the choice of r directly controls the quadrature truncation error.
Remark 7 
(Trade-offs in the Choice of ϕ ( α ) ). The mapping (35) enables a tunable balance between endpoint smoothness and weight concentration:
  • Choosing ϕ(α) = 1/α yields perfect kernel cancellation and recovers the original GBFA identity (3), but the transformed integrand is only Hölder-continuous at y = 0 for α > 1.
  • Choosing ϕ(α) according to (A48) yields a C^r integrand at y = 0, improving the quadrature convergence rate to O(n_q^{−r}) by Theorem A12 and Corollary 1.
  • Larger r increases smoothness but also increases the weight exponent p(α) = αϕ(α) − 1, which may over-concentrate the integrand near y = 1; cf. Figure 2. This concentration can degrade quadrature accuracy unless both λ_q and n_q are appropriately chosen. Specifically, λ_q controls the node distribution, while n_q must be sufficiently large to resolve the sharp peak near y = 1 for large r.
Although the generalized ϕ(α)-framework and its associated smoothness condition (A48) were developed primarily for α > 1, the underlying results extend naturally to all α > 0. In particular, for 0 < α < 1 with 1/α ∉ Z_+, one may enforce C^r regularity via (A48), guaranteeing algebraic quadrature convergence O(n_q^{−r}). However, this approach is not recommended in practice. For 0 < α < 1, the classical transformation ϕ(α) = 1/α yields the exact, weight-free identity (3) and, for real-analytic f, induces a fractional power series expansion with exponentially decaying coefficients (cf. Remark A9). This structure leads to dramatically faster pre-asymptotic quadrature convergence, often appearing as high-order algebraic or near-exponential decay, without introducing artificial weighting or endpoint concentration near y = 1. Consequently, the classical map is superior for practical n_q (small and moderate) when 0 < α < 1, and the generalized ϕ(α)-framework should be reserved for large n_q or for α > 1.
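The trade-off just described can be made concrete with a small experiment. This sketch is an illustration under assumptions: Gauss–Legendre quadrature stands in for the paper's shifted Gegenbauer rule, and the values α = 4.5, t = 0.5, r = 12, and n_q = 32 are chosen for demonstration. It compares the classical map ϕ = 1/α (Hölder endpoint) against the generalized map ϕ = (r + 1)/α (C^r endpoint) on the cubic test function:

```python
import math
import numpy as np

# Compare classical vs. generalized transformation for alpha > 1.
alpha, t, r, nq = 4.5, 0.5, 12, 32
f = lambda tau: 2 * tau**3 + 8 * tau
exact = 12 * t**(alpha + 3) / math.gamma(alpha + 4) \
      + 8 * t**(alpha + 1) / math.gamma(alpha + 2)

x, w = np.polynomial.legendre.leggauss(nq)
y, w = 0.5 * (x + 1.0), 0.5 * w          # map nodes/weights to [0, 1]

def rlfi(phi):
    """Quadrature approximation of the phi-transformed RLFI integral."""
    p = alpha * phi - 1.0                # weight exponent p(alpha)
    return phi * t**alpha / math.gamma(alpha) * np.sum(w * y**p * f(t * (1 - y**phi)))

rel_cls = abs(rlfi(1.0 / alpha) - exact) / exact      # Hoelder endpoint
rel_gen = abs(rlfi((r + 1) / alpha) - exact) / exact  # C^r endpoint
print(rel_cls, rel_gen)   # generalized map is orders of magnitude more accurate
```

With the classical map the integrand is only Hölder-continuous at y = 0 and the error stalls at an algebraic rate, while the C^{12}-regular generalized integrand is resolved to near roundoff at the same n_q.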
Corollary 1 
(Effect of (A48) on the Total GBFA Error). Let f satisfy the hypotheses of Theorem 1, and let ϕ(α) satisfy the smoothness condition (A48) for some r ≥ 0. Then the total GBFA error decomposes as
|I_t^α f − Q_n^{α,E} f_{0:n}| ≤ |I_t^α f − I_t^α I_n f| (ITE) + C_{α,r} n_q^{−r} (quadrature error),
where the ITE term decays super-exponentially in n by Theorem 1, and the quadrature error decays algebraically in n_q with rate O(n_q^{−r}) due to the C^r endpoint regularity enforced by (A48). In particular, the choice of ϕ(α) determines the dominant error component for fixed n.
Remark 8 
(Spectral Convergence Under Tunable Endpoint Regularity). Although the quadrature error in Corollary 1 decays only algebraically as O(n_q^{−r}), the ITE retains super-exponential decay O(e^{−c n ln n}) for analytic f by Theorem 1. Consequently, for any fixed r ≥ 0, one may choose n_q sufficiently large so that the quadrature error is negligible. Thereafter, there exists N ∈ Z_+ such that for all n ≥ N, the total error is dominated by the ITE and thus decays faster than any algebraic rate O(n^{−k}), k ∈ Z_+. Hence, the method is spectrally convergent in n.
To complement the theoretical results, we now present numerical experiments illustrating the performance of the generalized GBFA method.

7. Numerical Experiments

We present comprehensive numerical experiments validating the GBFA method for arbitrary fractional orders α > 0 , using both the classical transformation ϕ ( α ) = 1 / α and the ϕ ( α ) -generalized transformation developed in Section 4. All experiments use the same implementation framework of [4] with the ϕ ( α ) parameterization added for α > 1 .

7.1. Experimental Setup

All simulations were performed in MATLAB R2023b on a 2.9 GHz AMD Ryzen 7 4800H processor with 16 GB RAM running Windows 11 (64-bit). Our primary test function is the cubic polynomial
f(t) = 2t^3 + 8t,
for which the exact RLFI admits the closed form
I_t^α f = 12 t^{α+3} / Γ(α+4) + 8 t^{α+1} / Γ(α+2).      (43)
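The closed form (43) can be cross-checked independently. The sketch below is illustrative, not part of the paper's MATLAB code: it picks α = 1/2 (so that 1/α = 2 ∈ Z_+, the exactness regime discussed in Remark 4, rather than the α > 1 range used in the experiments) and evaluates the weight-free classical identity I_t^α f = (t^α / Γ(α+1)) ∫_0^1 f(t(1 − y^{1/α})) dy, whose integrand is then a degree-6 polynomial:

```python
import math
import numpy as np

# Cross-check the closed-form RLFI of f(t) = 2 t^3 + 8 t at alpha = 1/2,
# where the classical map tau = t (1 - y^2) gives a polynomial integrand.
alpha, t = 0.5, 0.5
f = lambda tau: 2 * tau**3 + 8 * tau
closed_form = 12 * t**(alpha + 3) / math.gamma(alpha + 4) \
            + 8 * t**(alpha + 1) / math.gamma(alpha + 2)

x, w = np.polynomial.legendre.leggauss(4)   # exact up to degree 7 >= 6
y, w = 0.5 * (x + 1.0), 0.5 * w             # map to [0, 1]
quad = t**alpha / math.gamma(alpha + 1) * np.sum(w * f(t * (1 - y**2)))

rel_err = abs(quad - closed_form) / closed_form
print(closed_form, quad)
```

The two values agree to roundoff, confirming both the closed form and the weight-free identity in this regime.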
Unless otherwise stated, we use the following baseline parameters:
  • SG interpolation degree n = 8 ,
  • SG parameter λ = 1 ,
  • Quadrature parameter λ q = 0.5 ,
  • Evaluation point t = 0.5 ,
  • Quadrature order n q = 16 ,
  • Fractional orders α ∈ {1.5, 2.5, …, 9.5}.
  • For the ϕ ( α ) -generalized transformation, we take
    ϕ(α) = 1/α for 0 < α < 1, and ϕ(α) = (r + 1)/α for α ≥ 1,
    with smoothness parameter r = 12, which guarantees C^{12}(Ω_1) regularity of the transformed integrand at y = 0 for f ∈ C^{12}(Ω_1).
For each configuration, we compute the LARE, defined as
LARE = log_{10}( |I_t^α f − I_t^α I_n f| / |I_t^α f| ).

7.2. Numerical Validation of the Classical Transformation τ = t ( 1 y 1 / α )

To support the theoretical developments established in the previous sections, we now present numerical experiments that assess the practical behavior of the classical transformation (2) for a wide range of fractional orders. These tests verify that the GBFA interpolation error in t retains its spectral character for smooth data, whereas the total approximation error for α > 1 is governed primarily by the quadrature component associated with the SG-transformed integrals (6). The results further illustrate the stability of the transformed integrand, the consistency of the computed FSGIMs, and the robustness of the method under variations in α and the SG parameters, while also reflecting the reduced endpoint regularity of y ↦ t(1 − y^{1/α}) for α > 1.
Figure 3 presents a comprehensive six-panel analysis using the classical transformation ϕ ( α ) = 1 / α , systematically exploring the method’s behavior across parameter variations.
Figure 3(a) demonstrates good agreement between the GBFA approximations and the exact RLFI values across α ∈ [1.5, 9.5]. The GBFA values closely track the exact values over about nine orders of magnitude, from about 0.44 at α = 1.5 to nearly 4.65 × 10^{−10} at α = 9.5.
Figure 3(b) reveals a key phenomenon: the LARE increases monotonically from −4.6961 at α = 1.5 to −2.9134 at α = 9.5. This degradation in accuracy with increasing α is primarily due to the reduced endpoint regularity of the transformed integrand. As shown in Section 3.6, for α > 1, the mapping y ↦ y^{1/α} is only Hölder-continuous of order 1/α at y = 0, with all derivatives blowing up as y → 0+. This endpoint singularity becomes more severe as α increases (since 1/α decreases), slowing the convergence of the quadrature rule for fixed n_q.
Figure 3(c) investigates convergence with respect to the interpolation degree n for multiple α values. The data reveal a characteristic exactness plateau:
  • For n = 2: the LARE ranges from approximately −0.7 (α = 9.5) to −2.3 (α = 1.5).
  • For n = 4: the LARE improves dramatically to the range [−4.7, −2.9].
  • For n ≥ 4: no further improvement occurs for any α value.
This plateau arises because f is a cubic polynomial, and SG interpolation at the SGG nodes becomes exact for n ≥ 3. The observed behavior aligns with Theorem 1: for polynomial data, the interpolation error vanishes once n reaches the polynomial degree. The remaining error stems entirely from the quadrature approximation of the SG–transformed integrals (6). The different colored curves in Figure 3(c) show that this plateau behavior is consistent across all tested α values.
Figure 3(d) reveals a striking result: for α > 1, the GBFA error is essentially independent of λ across the tested range λ ∈ [−0.4, 2]. For each fixed α, all data points at different λ values collapse onto horizontal lines, indicating constant LARE as λ varies. This λ-insensitivity aligns with the theoretical analysis in Section 3.3. The bounding constant ϑ_{α,λ} in (13) contains the factor α^{1−2α}, which decays super-exponentially as α increases, effectively suppressing the λ-dependence. This simplifies practical implementation: for α > 1, any λ > −0.5 yields comparable accuracy, though one should avoid values too close to −0.5⁺ in practice, since SG polynomials exhibit blow-up behavior as their indices approach this theoretical lower bound [4].
Figure 3(e) demonstrates significant sensitivity to λ q , contrasting with the λ -insensitivity. The data reveal several important patterns:
  • The optimal λ_q is consistently λ_q = 0 (Chebyshev quadrature) across all tested α values, yielding the smallest LARE. Thus, λ_q = 0 provides robust performance with minimal sensitivity to variations in α.
  • Smaller λ_q values (λ_q = −0.4, −0.2) or larger λ_q values (λ_q = 1, 2) degrade accuracy substantially.
Figure 3(f) presents the most critical convergence study: varying n q while fixing n = 8 . The data reveal clear algebraic convergence patterns:
  • α = 1.5: the LARE improves from −2.8 to −5.7 as n_q increases from 4 to 32.
  • α = 4.5: the LARE improves from −1.9 to −3.9 over the same range.
  • α = 9.5: the LARE improves from −1.7 to −3.6.
The log-log plots in Figure 3(f) display generally near-linear trends, consistent with algebraic convergence O(n_q^{−p}). Average convergence exponents are p ≈ 3.2 for α = 1.5 and p ≈ 2.1 for α = 9.5, with convergence slowing as α increases. This algebraic convergence directly reflects the endpoint regularity limitations analyzed in Section 3.6. For α > 1, the transformed integrand g defined by (33) belongs to C^{0,β}(Ω_1) with β = 1/α < 1, exhibiting only Hölder continuity at y = 0. This reduced regularity fundamentally limits quadrature convergence to algebraic rates, consistent with Theorem 6 and Remark 5. The convergence becomes slower as α increases (smaller β), explaining why the α = 9.5 curve in Figure 3(f) has a shallower slope than the α = 1.5 curve.
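The algebraic n_q-convergence under the classical map can be reproduced in a few lines. This sketch is illustrative only (Gauss–Legendre nodes in place of the paper's Gegenbauer rule; α and t chosen for demonstration): it evaluates the weight-free classical identity at α = 1.5, where the Hölder term y^{2/3} in the transformed integrand limits the decay to an algebraic rate.

```python
import math
import numpy as np

# n_q-convergence of the classical-map quadrature at alpha = 1.5.
alpha, t = 1.5, 0.5
f = lambda tau: 2 * tau**3 + 8 * tau
exact = 12 * t**(alpha + 3) / math.gamma(alpha + 4) \
      + 8 * t**(alpha + 1) / math.gamma(alpha + 2)

def rel_err(nq):
    x, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (x + 1.0), 0.5 * w          # map to [0, 1]
    approx = t**alpha / math.gamma(alpha + 1) * np.sum(w * f(t * (1 - y**(1 / alpha))))
    return abs(approx - exact) / exact

errs = [rel_err(nq) for nq in (8, 16, 32, 64)]
# log2 ratio between successive doublings estimates the algebraic rate p
slopes = [math.log2(errs[i] / errs[i + 1]) for i in range(3)]
print(errs, slopes)
```

The fitted slopes stay roughly constant as n_q doubles, the signature of algebraic rather than exponential decay, in line with the O(n_q^{−p}) behavior reported above.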
The numerical results in Figure 3 align closely with the theoretical framework when considering the complete error composition for the classical transformation ϕ(α) = 1/α: (i) the interpolation component behaves as predicted by Theorem 1: for polynomial f, the interpolation error vanishes for n ≥ deg(f), producing the observed exactness plateau in Figure 3(c); (ii) the quadrature component dominates the total error for α > 1, exhibiting algebraic convergence O(n_q^{−p}) with p > 1 (Figure 3(f)). The convergence exponent p decreases with increasing α, from p ≈ 3.2 at α = 1.5 to p ≈ 2.1 at α = 9.5. This aligns with the endpoint regularity analysis in Section 3.6: the transformed integrand's Hölder exponent β = 1/α limits the attainable quadrature convergence rates.
The parameter sensitivity patterns match theoretical predictions:
  • λ-insensitivity for α > 1 (Figure 3(d)) follows from the structure of ϑ_{α,λ}.
  • The sensitivity to λ q (Figure 3(e)) indicates that quadrature accuracy can be tuned by proper parameter selection.
  • n q scaling requirements increase with α due to worsening endpoint regularity (Figure 3(f)).
With the baseline parameters (n = 8, n_q = 16), the method achieves ARE ≈ 10^{−5} to 10^{−3} across α ∈ [1.5, 9.5] (Figure 3(b)). While not reaching machine precision, this represents useful accuracy for many applications, achievable at modest computational cost.
The high-resolution results in Figure 4 (using the classical transformation ϕ(α) = 1/α) provide deeper insight into the GBFA method's behavior with extended quadrature resolution. Panel (a) demonstrates that with n_q = 1000, the LARE values for n ≥ 4 are substantially improved compared to those in Figure 3(c), reaching remarkably low values ranging from approximately −6.8 to −10.6 across different α. This confirms that with sufficiently large n_q, the GBFA method achieves high accuracy for polynomial data, with the exactness plateau reaching LARE values below −6.8 across all tested α values.
Panel (b) reaffirms the λ-insensitivity observed in Figure 3(d), with all curves for each α collapsing to constant LARE values across the tested λ range [−0.4, 2]. This consistency across vastly different quadrature resolutions (n_q = 16 versus n_q = 1000) strengthens the conclusion that for α > 1, the SG interpolation parameter λ has a negligible effect on the total GBFA error, in agreement with the theoretical analysis in Section 3.3.
Panel (c) demonstrates that Chebyshev quadrature (λ_q = 0) yields near-optimal accuracy across all α values, achieving LARE values ranging from −7.6 (for α = 9.5) to −10.8 (for α = 1.5). Larger values (λ_q = 1, 2) progressively degrade the accuracy, with the LARE increasing more noticeably for larger α. The sensitivity to λ_q is more prominent in this high-resolution setting (n_q = 1000) than in Figure 3(e) because the quadrature error remains the dominant component of the total error, and λ_q directly governs the convergence rate of this quadrature error for the transformed integrand with its specific endpoint regularity.
Panel (d) offers the most critical insight: with n_q extended up to 10^4, the GBFA method achieves LARE values as low as −13.6 (for α = 1.5) to −9.0 (for α = 9.5), corresponding to relative errors on the order of 10^{−14} to 10^{−9}. The logarithmic x-axis clearly reveals that the convergence with respect to n_q follows distinct patterns: rapid initial improvement for n_q < 100, followed by progressively slower gains. For α = 1.5, the LARE improves from −5.7 at n_q = 32 to −13.6 at n_q = 10^4, demonstrating that near machine-precision accuracy is attainable with sufficiently large n_q. However, the diminishing returns as n_q increases highlight the algebraic nature of the quadrature convergence for α > 1, consistent with the endpoint regularity constraints analyzed in Section 3.6.

7.3. Numerical Validation of the ϕ ( α ) -Generalized Transformation

This section presents numerical experiments using the ϕ(α)-generalized transformation (35) with smoothness parameter r = 12. All tests use the polynomial test function f(t) = 2t^3 + 8t and its exact RLFI (43) at t = 0.5.
Figure 5 presents a comprehensive six-panel analysis of the GBFA method with the ϕ(α)-generalized transformation. Panel (a) demonstrates excellent agreement between the GBFA approximations and the exact RLFI values across α ∈ [1.5, 9.5], with the data points being numerically identical to near machine precision. The RLFI magnitude decreases over about nine orders of magnitude as α increases from 1.5 to 9.5.
Panel (b) reveals LARE values ranging from −14.109 at α = 1.5 to −14.423 at α = 9.5, with a minimum of −15.104 at α = 2.5, corresponding to relative errors of approximately 10^{−14.1}, 10^{−14.4}, and 10^{−15.1}, respectively. This demonstrates that with r = 12, the ϕ(α)-generalized transformation achieves consistent near machine-precision accuracy across all tested α values, with all LARE values below −14.1.
Panel (c) shows convergence with respect to the interpolation degree n. For n = 2, the LARE ranges from −2.2935 at α = 1.5 to −0.69605 at α = 9.5. At n = 4, there is a dramatic improvement to values ranging from −14.102 at α = 1.5 to −15.954 at α = 9.5, after which the error plateaus. This exactness plateau occurs because f is cubic, making SG interpolation exact for n ≥ 3 and leaving only the quadrature error. With the ϕ(α)-generalized transformation and r = 12, this quadrature error is already at or near machine precision for the tested n_q = 16, resulting in the observed LARE values near −15.
Panel (d) shows that under the ϕ ( α ) -generalized transformation with r = 12 , the λ -sensitivity of the GBFA method exhibits a markedly different behavior from that observed under the classical mapping. By enforcing C 12 endpoint regularity, the quadrature error is reduced to near machine precision even with modest n q = 16 , thereby exposing the ITE, which depends explicitly on λ through ϑ α , λ and the conditioning of the SG basis. This stands in contrast to the classical-transformation plots in Figure 3(d) and Figure 4(b), where the quadrature error dominates and masks the underlying λ -dependence.
In the ITE-dominated regime of the ϕ(α)-generalized mapping, optimal λ values (typically near λ = 0 or λ = 0.5) yield improving accuracy as α increases, consistent with the decay of the factor α^{1−2α} in ϑ_{α,λ}. Suboptimal choices, especially those approaching the theoretical lower bound λ = −0.5⁺, suffer from ill-conditioning of the SG basis and produce significantly larger errors. Consequently, the observed λ-sensitivity band widens from roughly 0.11 at α = 1.5 to nearly 1.9 at α = 8.5. This widening is not a numerical artifact but a deterministic manifestation of the interplay between the decaying error constant and the λ-dependent stability of the spectral basis. Notably, this λ-sensitivity emerges only under the ϕ(α)-generalized transformation; under the classical mapping ϕ(α) = 1/α, the GBFA error remains practically λ-insensitive for α > 1 due to the dominance of the quadrature error and the super-exponential decay of ϑ_{α,λ}.
Panel (e) demonstrates that the quadrature parameter λ_q plays a decisive role in the accuracy of the ϕ(α)-generalized GBFA. The choice λ_q = 0.5 is consistently optimal across all tested fractional orders, yielding LARE values of −14.109 at α = 1.5, −15.104 at α = 2.5, and −14.423 at α = 9.5. In contrast, suboptimal values such as λ_q = −0.4 or λ_q = 2 degrade accuracy by 8–10 orders of magnitude. This striking disparity highlights the critical importance of selecting λ_q = 0.5 when using the ϕ(α)-generalized transformation, and we recommend this choice as a default setting for robust high-accuracy performance.
Panel (f) demonstrates rapid algebraic convergence with respect to the number of quadrature points n_q. For α = 1.5, LARE improves from −0.7406 at n_q = 4 to −3.8205 at n_q = 8 and −7.5804 at n_q = 12, reaching the near machine-precision level −14.109 at n_q = 16 and stabilizing at −14.941 at n_q = 20. The convergence accelerates for larger α: for α = 4.5, LARE reaches machine precision at −15.154 by n_q = 12, then bounces back slightly to −14.911 at n_q = 16. For α = 9.5, near-optimal accuracy of −14.463 is achieved at n_q = 12, stabilizing at −14.423 at n_q = 16. This demonstrates that with r = 12, modest n_q values of 12–16 suffice to achieve near machine-precision accuracy, with some cases even exhibiting superconvergence (accuracy temporarily exceeding the final plateau).
Figure 6 presents a three-dimensional sensitivity analysis of the ϕ(α)-generalized transformation with respect to the smoothness parameter r over the range r ∈ {0:2:12}.
Panel (a) shows LARE vs. r vs. λ_q for fixed α = 5.5. The surface reveals a strong interaction: at the optimal λ_q = 0.5, LARE improves from −3.12 at r = 0 to −14.64 at r = 12, a gain of over 11.5 orders of magnitude. Conversely, for the suboptimal λ_q = −0.4 and λ_q = 2, LARE moves only from −2.77 to −9.58 and from −2.54 to −7.08, respectively, as r increases from 0 to 12, demonstrating that an inappropriate λ_q combined with high r concentrates quadrature weight too aggressively near y = 1, harming accuracy. These results indicate that the effectiveness of increasing r is highly sensitive to the quadrature parameter λ_q. For appropriately selected λ_q (e.g., λ_q = 0.5), larger r markedly accelerates convergence, delivering near machine-precision accuracy. For suboptimal λ_q, however, increasing r initially improves accuracy up to a critical value, beyond which further increases degrade performance—demonstrating that excessive smoothness without compatible quadrature design can be counterproductive.
Panel (b) shows LARE vs. r vs. n_q. The surface demonstrates accelerated convergence with higher r: while r = 0 requires n_q = 32 to reach LARE ≈ −14.6, r = 12 achieves LARE < −15 with merely n_q = 12. This confirms Corollary 1, which predicts quadrature convergence O(n_q^{−r}) when the transformed integrand is C^r-smooth.
Panel (c) shows LARE vs. r vs. α. The surface confirms that increasing r compensates for accuracy degradation at larger α: for r = 0, LARE deteriorates from −4.70 at α = 1.5 to −2.91 at α = 9.5, whereas for r = 12, LARE remains consistently below −14.1 across all α ∈ Ω_{1.5,9.5}, with the worst case at α = 8.5 (−14.37) still representing near machine-precision accuracy. This demonstrates that the ϕ(α)-generalized transformation effectively mitigates endpoint regularity limitations.
Collectively, these results demonstrate that while the GBFA method extends seamlessly to α > 0 using the classical transformation ϕ(α) = 1/α, achieving high accuracy uniformly for all α > 1 with modest computational resources requires addressing the endpoint regularity limitation. The ϕ(α)-generalized transformation introduced in Section 4 overcomes this by enforcing C^r smoothness at y = 0, achieving accelerated quadrature convergence O(n_q^{−r}) and enabling near machine-precision accuracy with practical n_q values for all α ∈ Ω_{1.5,9.5}. The method exhibits strong sensitivity to the quadrature parameter λ_q: for r = 12, the LARE at the optimal λ_q = 0.5 is up to 10 orders of magnitude better than at highly suboptimal values. This dramatic difference underscores the critical importance of selecting λ_q = 0.5—corresponding to Gauss–Legendre quadrature—when using the ϕ(α)-generalized transformation. While the λ-sensitivity exhibits a complex pattern—with optimal λ values yielding excellent accuracy but performance degrading near λ = −0.5⁺—the rapid n_q convergence confirms the effectiveness of the C^12 regularization enforced by the ϕ(α) design condition (A48). The occasional superconvergence observed suggests interesting interactions between the quadrature rule and the regularized integrand structure.
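For ease of reference, the exact identities invoked throughout this section follow from a single substitution; the derivation below is a brief sketch consistent with the classical identity (3), the weighted form (36), and the weight exponent p(α) = αϕ(α) − 1 quoted in Section 8.2:

```latex
% Classical map: \tau = t(1 - y^{1/\alpha}), so t - \tau = t\,y^{1/\alpha} and
% d\tau = -(t/\alpha)\, y^{1/\alpha - 1}\, dy; the singular kernel cancels exactly:
I_t^{\alpha} f
  = \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} f(\tau)\,\mathrm{d}\tau
  = \frac{t^{\alpha}}{\Gamma(\alpha+1)} \int_0^1
    f\!\left(t\left(1-y^{1/\alpha}\right)\right)\mathrm{d}y .

% Generalized map: \tau = t(1 - y^{\phi(\alpha)}) leaves the algebraic weight
% y^{\alpha\phi(\alpha)-1}, which reduces to 1 when \phi(\alpha) = 1/\alpha:
I_t^{\alpha} f
  = \frac{\phi(\alpha)\, t^{\alpha}}{\Gamma(\alpha)} \int_0^1
    y^{\alpha\phi(\alpha)-1}\,
    f\!\left(t\left(1-y^{\phi(\alpha)}\right)\right)\mathrm{d}y .
```

Setting f ≡ 1 in either identity recovers I_t^α 1 = t^α/Γ(α+1), a quick consistency check.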

7.3.1. Benchmarking GBFA Against Existing Solvers and Methods

Comparison with MATLAB’s Adaptive Quadrature

To benchmark the GBFA method with the ϕ(α)-generalized transformation against a standard numerical approach, we compare its performance with MATLAB's adaptive quadrature routine integral. The MATLAB function integral employs global adaptive quadrature, here with a relative tolerance of 10^{−14}. For a fair comparison, we evaluate the RLFI I_t^α f for the cubic polynomial f(t) = 2t³ + 8t at t = 0.5 over the extended range α ∈ {1.5:2:29.5}, using r = 14 in the ϕ(α)-generalized transformation to enforce C^14(Ω_1) regularity at y = 0.
Figure 7 presents a comprehensive accuracy comparison between the GBFA method and MATLAB's integral. Panel (a) shows that both methods produce RLFI values that are visually indistinguishable from the exact analytic solution across more than 41 orders of magnitude (from 10^{−1} at α = 1.5 to 10^{−42} at α = 29.5), confirming high accuracy for smooth data. Panel (b) displays the LARE for both methods: the GBFA method achieves LARE values in [−15.72, −14.00], while MATLAB achieves LARE values in [−15.95, −14.06], except at α = 7.5, where it attains machine-precision exactness (LARE = −∞). Notably, both methods attain near machine-precision accuracy (LARE ≤ −14) throughout the entire α range, with neither consistently dominating the other in accuracy.
The decisive advantage of the GBFA method lies in computational efficiency, as shown in Figure 8. Panel (a) reports computation times in microseconds (μs) after FSGIM precomputation. The GBFA evaluation time ranges from 0.8 to 104.7 μs, with an average of 19.2 μs. In contrast, MATLAB's adaptive quadrature requires 192.9–2241.8 μs, averaging 680.6 μs. Panel (b) shows the speedup factor (MATLAB time / GBFA time), which ranges from 5.5× at α = 5.5 to 1761.1× at α = 17.5, with an average of 262.4×. The speedup remains substantial even for large α, demonstrating the GBFA method's scalability.
These results confirm that with r = 14 , the ϕ ( α ) -generalized GBFA transformation achieves accuracy comparable to MATLAB’s high-tolerance adaptive quadrature while delivering orders-of-magnitude faster evaluation. This efficiency makes the method especially suitable for applications involving repeated RLFI evaluations or time-stepping schemes for fractional differential equations with arbitrary α > 0 .
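The benchmark setup above can be reproduced in outline. The sketch below is a minimal illustration, not the paper's implementation: it assumes plain Gauss–Legendre and Gauss–Jacobi rules in place of the SGG quadratures, and takes the illustrative design ϕ(α) = (r + 1)/α (quoted later for the 0 < α < 1 case; the paper's actual condition (A48) may differ), which makes the weight y^{αϕ(α)−1} exactly y^r:

```python
import numpy as np
from scipy.special import gamma, roots_jacobi

# Benchmark problem from the text: f(t) = 2t^3 + 8t at t = 0.5, alpha = 1.5.
f = lambda s: 2.0 * s**3 + 8.0 * s
t, alpha, r = 0.5, 1.5, 14

# Exact RLFI of a monomial: I^alpha[s^k](t) = Gamma(k+1)/Gamma(k+1+alpha) * t^(k+alpha).
exact = 12.0 * t**(3.0 + alpha) / gamma(4.0 + alpha) \
      + 8.0 * t**(1.0 + alpha) / gamma(2.0 + alpha)

def rlfi_classical(nq):
    """Classical map tau = t(1 - y^(1/alpha)); for alpha > 1 the integrand
    is only Hoelder-regular at y = 0 (leading fractional power y^(2/3) here)."""
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w                  # shift the rule to [0, 1]
    return t**alpha / gamma(alpha + 1.0) * np.dot(w, f(t * (1.0 - y**(1.0 / alpha))))

def rlfi_generalized(nq):
    """Generalized map with illustrative phi = (r+1)/alpha, so the weight is y^r,
    absorbed exactly by a Gauss-Jacobi rule with parameters (0, r)."""
    phi = (r + 1.0) / alpha                          # = 10 here: integrand is polynomial in y
    x, w = roots_jacobi(nq, 0.0, float(r))           # weight (1+x)^r on [-1, 1]
    y = 0.5 * (x + 1.0)                              # (1+x)^r dx -> 2^(r+1) y^r dy
    g = f(t * (1.0 - y**phi))
    return phi * t**alpha / gamma(alpha) * 2.0**(-(r + 1.0)) * np.dot(w, g)

err_classical = abs(rlfi_classical(16) - exact)      # throttled by endpoint regularity
err_generalized = abs(rlfi_generalized(16) - exact)  # near machine precision
```

With n_q = 16 the generalized map is exact up to rounding here (the composed integrand is a degree-30 polynomial in y), while the classical map is limited by the y^{2/3} endpoint singularity, mirroring the accuracy gap reported above.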

Comparison With SPH-Based Fractional Approximation

We compare the GBFA method with the 1D SPH-based fractional approximation of Ghorbani and Semperlotti [18]. Their scheme approximates left-handed constant-order RLFIs and derivatives via kernel-based particle quadrature on [ 0 , 5 ] , and is tested on
$f_1(x) = \sin(\pi x), \qquad f_2(x) = \cos(\pi x), \qquad f_3(x) = e^{x}, \qquad f_4(x) = (x-1)^3,$
for the left-handed RLFI of order α = 0.75. Numerical results for I_x^{0.75} f_j on Ω_{0,5} are reported in [18, Fig. 5] for all j ∈ N_4.
Restricting attention to x ∈ Ω_1 to match the present GBFA configuration, visual inspection of [18, Fig. 5] shows that the SPH approximation exhibits relatively large absolute errors:
$\max_{x \in \Omega_1} |\mathrm{SPH}(x) - I_x^{0.75} f_1| \approx 2 \times 10^{-2}, \quad \max_{x \in \Omega_1} |\mathrm{SPH}(x) - I_x^{0.75} f_2| \approx 5 \times 10^{-4}, \quad \max_{x \in \Omega_1} |\mathrm{SPH}(x) - I_x^{0.75} f_3| \approx 5 \times 10^{-3}, \quad \max_{x \in \Omega_1} |\mathrm{SPH}(x) - I_x^{0.75} f_4| \approx 2 \times 10^{-2}.$
Even after auxiliary-particle refinement, [18, Fig. 6] indicates
$\max_{x \in \Omega_1} |\mathrm{SPH}_{\mathrm{ref}}(x) - I_x^{0.75} f_1| \approx 3 \times 10^{-4}.$
By contrast, the present GBFA method with n = n_q = 20 and λ = λ_q = 0.5, using the exact transformation (2), yields for the same four test functions and α = 0.75,
$\max_{x \in \Omega_1} |\mathrm{GBFA}(x) - I_x^{0.75} f_j| < 2 \times 10^{-7} \quad \text{for all } j \in N_4,$
with L²-errors of the same order. Table 1 summarizes the MAE on Ω_1. The SPH entries are conservative visual estimates from [18, Fig. 5], while the GBFA entries follow directly from (3) and (7).
Beyond the MAE, the GBFA method achieves near-perfect goodness-of-fit. The computed R 2 scores exceed 0.9999999999999 for all four functions, with specific values
$R^2(f_1) = 0.999999999999879, \quad R^2(f_2) = 0.999999999999932, \quad R^2(f_3) = 0.999999999999984, \quad R^2(f_4) = 0.999999999999926.$
These values confirm the visual indistinguishability between the GBFA and exact RLFI curves in Figure 9.
In contrast, the SPH method reports significantly lower R² scores: 0.977319 for f_1, 0.999987 for f_2, 1.000000 (rounded) for f_3, and 0.999994 for f_4 [18]. The modest R² for f_1 aligns with its large absolute error (≈ 2 × 10^{−2}), indicating a systematic deviation. Even for f_3, the apparently perfect SPH R² masks an absolute error above 5 × 10^{−3}, illustrating that R² alone may obscure significant discrepancies when the reference signal has low variance. GBFA, by contrast, achieves both near-unity R² and absolute errors below 2 × 10^{−7}.
Figure 9 and Figure 10 illustrate the GBFA performance. Figure 9 shows that the GBFA approximation of I_x^{0.75} f_j is visually indistinguishable from the exact RLFI on Ω_1. Figure 10 displays the corresponding error distributions and statistical indicators: MAE, mean error, median error, L² accuracy, and R² score. All error curves remain below 2 × 10^{−7}, with L² errors ≤ 1.8 × 10^{−7} and R² > 0.9999999999998 for every test case.
The several-orders-of-magnitude accuracy gap between SPH and GBFA follows directly from their underlying approximation mechanisms. SPH employs kernel-based particle quadrature and exhibits, in practice, at most algebraic convergence with respect to particle spacing. GBFA, by contrast, exploits SG interpolation and FSGIMs within the exact transformed identity (3). Theorem 1 guarantees super-exponential decay of the ITE for analytic data, while the quadrature truncation error behaves as described in Section 3.5: its asymptotic rate is algebraic, O(n_q^{−K−θ}), for 0 < α < 1 with 1/α ∉ Z⁺, where K = ⌊1/α⌋ and θ = 1/α − K. However, when f is real-analytic–as for the test functions f_j–the transformed integrand g(y) = f(t(1 − y^{1/α})) admits a fractional power series with exponentially decaying coefficients, yielding dramatically faster pre-asymptotic quadrature convergence than the worst-case algebraic bound. For α = 0.75 (1/α = 4/3 ∉ Z⁺), this results in observed errors below 2 × 10^{−7} with a modest n_q = 20, far surpassing SPH's performance. Consequently, the combined GBFA error decays markedly faster in (n, n_q) than the SPH error does in particle resolution, explaining the performance advantage observed in Table 1 and Figures 9 and 10.
Figure 11 displays the convergence behavior of the GBFA method for the RLFI I_x^{0.75} f_1 at x = 0.5, using the classical transformation ϕ(α) = 1/α. The observed absolute errors decay rapidly from 2.912 × 10^{−6} at n_q = 4 to 2.166 × 10^{−13} at n_q = 40, yielding a fitted algebraic rate of p = 7.1252 via least-squares regression in log–log space. This empirical rate significantly exceeds the theoretical asymptotic prediction O(n_q^{−K−θ}) = O(n_q^{−4/3}) given by Corollary A3(ii), where K = ⌊1/α⌋ = 1 and θ = 1/α − K = 1/3. As explained in Section 3.5 and Remark A9, this discrepancy arises because f_1 is real-analytic on a neighborhood of Ω_x, inducing a fractional power series expansion
$g(y) = f_1\bigl(x(1 - y^{1/\alpha})\bigr) = \sum_{m=0}^{\infty} a_m\, y^{m/\alpha}, \qquad |a_m| \le \tilde{C}\, \tilde{B}^{m},$
with exponentially decaying coefficients. Consequently, the tail of the series beyond mode m > n_q is extremely small, causing Gauss-type quadrature to exhibit dramatically faster pre-asymptotic convergence than the worst-case O(n_q^{−4/3}) rate–which governs only the limit n_q → ∞ under minimal C^{1,θ} regularity. Thus, the near-exponential decay observed here is fully consistent with the refined smoothness analysis in Section A.3.
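The rate fit described above is easy to reproduce in outline. The sketch below is an assumption-laden stand-in (plain Gauss–Legendre instead of the (n_q, λ_q)-SGG rule, and a self-computed high-order reference value) that estimates the observed algebraic rate by least-squares regression in log–log space:

```python
import numpy as np
from scipy.special import gamma

# RLFI I_x^0.75 sin(pi*x) at x = 0.5 via the classical map, as in Figure 11.
alpha, x = 0.75, 0.5
f1 = lambda s: np.sin(np.pi * s)

def rlfi(nq):
    # Exact identity: I_x^alpha f = x^alpha / Gamma(alpha+1) * int_0^1 f(x(1-y^(1/alpha))) dy.
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w
    return x**alpha / gamma(alpha + 1.0) * np.dot(w, f1(x * (1.0 - y**(1.0 / alpha))))

ref = rlfi(400)                                       # high-order reference value
nqs = np.array([4, 8, 12, 16, 20, 24, 28, 32, 36, 40])
errs = np.array([abs(rlfi(nq) - ref) for nq in nqs])

# Fitted slope in log-log space: the observed pre-asymptotic algebraic rate p.
p = -np.polyfit(np.log(nqs), np.log(errs), 1)[0]
```

The fitted p far exceeds the worst-case exponent K + θ = 4/3, consistent with the exponentially decaying fractional-series coefficients discussed above.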

Generalized Transformation for 0 < α < 1 with 1/α ∉ Z⁺

Although the generalized ϕ(α)-transformation was primarily designed for α > 1, it can also be applied to 0 < α < 1 when 1/α ∉ Z⁺ to enforce algebraic convergence O(n_q^{−r}) for any prescribed r > 0. This capability follows directly from Theorem A12 and Corollary 1, which guarantee that choosing ϕ(α) = (r + 1)/α yields a transformed integrand in C^r(Ω_1), thereby ensuring quadrature error decay O(n_q^{−r}). To illustrate, we revisit the RLFI I_x^{0.75} sin(πx) at x = 0.5 with r = 15. As shown in Figure 12, the absolute error decays slowly for small n_q due to endpoint weight concentration near y = 1, but transitions into the asymptotic regime O(n_q^{−r}) for n_q ≳ 30, achieving near machine precision by n_q = 40. The fitted convergence rate p ≈ 14.33 closely matches the theoretical expectation p = r = 15. However, this accelerated convergence emerges only beyond a threshold quadrature order (n_q ≈ 30); below this, the classical map ϕ(α) = 1/α yields superior accuracy due to its exact kernel cancellation and absence of artificial weighting. Moreover, for real-analytic f, the classical transformation induces a fractional power series with exponentially decaying coefficients, as demonstrated earlier, leading to dramatically faster pre-asymptotic convergence than the worst-case algebraic bound. Consequently, the generalized transformation is not recommended for 0 < α < 1 with 1/α ∉ Z⁺ unless (near) machine precision is required; in such cases, it may be employed to enforce a prescribed algebraic rate O(n_q^{−r}) for sufficiently large n_q. Otherwise, the classical GBFA framework remains optimal both theoretically and computationally.
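A minimal sketch of this experiment, under the same caveats as before (a Gauss–Jacobi rule with parameters (0, r) standing in for the (n_q, λ_q)-SGG quadrature, and ϕ(α) = (r + 1)/α taken from the statement above), with a high-order classical-map evaluation serving as the reference value:

```python
import numpy as np
from scipy.special import gamma, roots_jacobi

# Generalized transformation for alpha = 0.75 (1/alpha not a positive integer), r = 15.
alpha, x, r = 0.75, 0.5, 15
phi = (r + 1.0) / alpha                               # alpha*phi - 1 = r: weight is y^r
f1 = lambda s: np.sin(np.pi * s)

def rlfi_classical(nq):                               # reference via the classical map
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w
    return x**alpha / gamma(alpha + 1.0) * np.dot(w, f1(x * (1.0 - y**(1.0 / alpha))))

def rlfi_generalized(nq):
    xx, w = roots_jacobi(nq, 0.0, float(r))           # weight (1+x)^r on [-1, 1]
    y = 0.5 * (xx + 1.0)                              # (1+x)^r dx -> 2^(r+1) y^r dy
    g = f1(x * (1.0 - y**phi))
    return phi * x**alpha / gamma(alpha) * 2.0**(-(r + 1.0)) * np.dot(w, g)

ref = rlfi_classical(400)
errs = {nq: abs(rlfi_generalized(nq) - ref) for nq in (8, 16, 24, 32, 40)}
```

Consistent with Figure 12, the error shrinks markedly between n_q = 8 and n_q = 40, with near machine precision reached by n_q = 40.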

Comparison With Hybrid Function (HF) Operational Matrices

We now compare the GBFA method with the orthogonal HF approach of Damarla and Kundu [19], which constructs generalized one-shot operational matrices for the RLFI. Their method approximates the RLFI of f(t) = t on t ∈ Ω_1 using a step size h = 0.125 (i.e., m = 8 subintervals) and reports the ℓ^∞-norm error against the exact RLFI
$I_t^{\alpha} f = \frac{t^{\alpha+1}}{\Gamma(\alpha+2)}.$
Following the experimental protocol of [19, Table 1], we evaluate the GBFA of the same RLFI at the HF grid points t_HF = {0, h, 2h, …, 1} and compute the ℓ^∞-norm error over this discrete set. The GBFA parameters are fixed at n = 2, n_q = 12, λ = 1, λ_q = 0.5, and r = 15.
Table 2 summarizes the comparison between both methods. The results demonstrate that both methods achieve machine-precision accuracy for all α ∈ Ω_{0.5,5}. Crucially, the computational cost of the GBFA evaluation is up to nearly three orders of magnitude lower. As shown in Table 2, the GBFA CPU time is consistently under 1 millisecond (ms), with a maximum of 0.76 ms at α = 0.5. In contrast, the HF method requires 21.7 ms on average for the same task, as reported in [19, Table 1]. This represents a speedup factor of 28× to 2892×, computed as the ratio HF CPU / GBFA CPU.
This superior efficiency stems from the spectral nature of GBFA: it requires only n + 1 = 3 function evaluations to achieve machine precision, whereas the HF method relies on a piecewise representation over m = 8 intervals (16 coefficients). Thus, for repeated evaluations of the RLFI with a fixed α , the GBFA method offers a dramatically faster online phase. This advantage is critical for applications involving fractional differential equations, where the RLFI operator is evaluated repeatedly in a time-stepping loop.

Comparison With Bernstein Approximation Method

We compare the GBFA method with the Bernstein approximation method of Usta [20] applied to solve the following second-kind FVIE:
$u(x) = \sqrt{\pi}\,(1+x)^{-3/2} - \frac{0.02\, x^3}{1+x} + 0.01\, x^{5/2}\, I_x^{1/2} u, \qquad x \in \Omega_1,$
with exact solution
$u(x) = \sqrt{\pi}\,(1+x)^{-3/2},$
which is real-analytic on Ω_1, with its nearest singularity located at x = −1. Although analytic, its derivatives grow factorially, placing it in the Gevrey-1 class (see Section A.6). The GBFA solver uses SGG nodes with n = 12, λ = 0, n_q = 12, and λ_q = 0.5, yielding a MAE of 6.080 × 10^{−13}, a CPU time of 3 × 10^{−3} s, and a RmsE of 3.601 × 10^{−13}, defined as:
$\mathrm{RmsE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \bigl(u(x_i) - u_{\mathrm{exact}}(x_i)\bigr)^2},$
with x_{1:N} being the evaluation nodes and N their total number.
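This FVIE admits a quick independent check: for u(x) = √π (1+x)^{−3/2}, a change of variables gives the closed form I_x^{1/2} u = 2√x/(1+x), so the two correction terms in the equation cancel identically and u solves the FVIE exactly. The sketch below verifies this with SciPy's singular-weight quadrature (an illustrative check, not part of the GBFA solver):

```python
import numpy as np
from scipy.integrate import quad

# Exact FVIE solution and the closed form of its half-order RLFI.
u = lambda s: np.sqrt(np.pi) * (1.0 + s)**(-1.5)

def rlfi_half(x):
    # I_x^(1/2) u = (1/Gamma(1/2)) * int_0^x (x-s)^(-1/2) u(s) ds;
    # quad's 'alg' weight handles the (x-s)^(-1/2) endpoint singularity.
    val, _ = quad(u, 0.0, x, weight='alg', wvar=(0.0, -0.5),
                  epsabs=1e-12, epsrel=1e-12)
    return val / np.sqrt(np.pi)                       # Gamma(1/2) = sqrt(pi)

xs = [0.25, 0.5, 0.75, 1.0]
closed = [2.0 * np.sqrt(x) / (1.0 + x) for x in xs]   # claimed closed form
numeric = [rlfi_half(x) for x in xs]

# FVIE residual: the two corrections -0.02 x^3/(1+x) and +0.01 x^(5/2) I_x^(1/2) u
# should cancel when the exact I_x^(1/2) u is inserted.
resid = [0.02 * x**3 / (1.0 + x) - 0.01 * x**2.5 * rlfi_half(x) for x in xs]
```

Both checks agree to quadrature tolerance, consistent with the exact solution stated above.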
As shown in Table 3, the GBFA method achieves a MAE of 6.08 × 10^{−13} at n = 12, which is two orders of magnitude smaller than the Bernstein result (1.05 × 10^{−11}) at the same resolution, while requiring over 60 times less CPU time (3 ms vs. 186 ms). Although the Bernstein method is analyzed using a theoretical convergence bound of O(n^{−1}) [20, Eq. (1.7)], its empirical performance for smooth solutions is significantly better–reaching near machine precision at modest n. Nevertheless, the GBFA method maintains a clear advantage in both accuracy per degree of freedom and computational efficiency, underpinned by its provable super-exponential convergence (Theorem 1).
Although the exact solution u is analytic on Ω_1, its derivatives satisfy (A49), placing it in the Gevrey-1 class rather than the stricter analytic class assumed in Theorem 1; cf. Section A.6. This distinction does not affect the observed convergence because the radius of analyticity is large (distance 1 to the singularity at x = −1). Moreover, the Gevrey-μ convergence rate in Theorem 2 continuously recovers the analytic rate of Theorem 1 as μ → 1⁺: the log-stretched-exponential decay e^{−c n^{1/μ} ln n} tends to the super-exponential form e^{−c n ln n}. Hence, Gevrey-1 functions exhibit asymptotically the same GBFA convergence as analytic ones. Crucially, the fractional order satisfies 1/α = 2 ∈ Z⁺, so by Theorem A10, the SG-transformed integrands are real-analytic (polynomial if f is polynomial), and the (n_q, λ_q)-GBFA quadrature is exact for n_q ≥ n, eliminating quadrature truncation error. Consequently, the total error inherits the super-exponential ITE decay, yielding near machine-precision accuracy at modest resolutions–e.g., MAE ≈ 6.08 × 10^{−13} at n = 12.
Figure 13 visually confirms the near-indistinguishability of the GBFA solution from the exact one on the SGG grid.
Figure 14 displays the convergence rate of the GBFA method, where the MAE is plotted against the polynomial degree n on a semi-log scale. The data in Table 4 show a rapid decay of the MAE from 2.8 × 10^{−8} at n = 6 to 7.8 × 10^{−16} at n = 16, consistent with near machine-precision accuracy. A linear regression of ln(MAE) against n ln n yields a slope of approximately −0.54, confirming the super-exponential convergence rate O(e^{−c n ln n}) predicted by Theorems 1 and 2 in the limit μ → 1⁺. Although the exact solution u belongs to the Gevrey-1 class, its analyticity on Ω_1 and large radius of convergence ensure that its numerical behavior is indistinguishable from the analytic case, producing the observed super-exponential decay. This is fully consistent with the error hierarchy in Remark 3 and demonstrates the GBFA method's exceptional efficiency for smooth problems.

Comparison With Quadratic and Cubic Spline-Based Schemes

We now compare the GBFA method with the quadratic (S1) approximation scheme of Kumar et al. [23] for the RLFI. Their approach constructs piecewise polynomial interpolants over uniform grids with step size h, achieving theoretical convergence of order O(h³). The test function f(t) = sin(t) at t = 1 with α = 1.5 provides a direct benchmark against their reported results in [23, Table 3].
Using the ϕ(α)-generalized GBFA transformation with r = 10, n = 10, n_q = 14, λ = 0.1, and λ_q = 0.5, we compute I_1^{1.5} sin with absolute error 3.045 × 10^{−11} in 2.264 ms. In contrast, the S1 scheme requires h = 6.25 × 10^{−3} (n = 80, i.e., 160 subintervals) to reach an error of 3.05977 × 10^{−11}. Thus, the GBFA method attains slightly higher accuracy than the S1 scheme, despite using far fewer degrees of freedom (n + 1 = 11 vs. 161 function evaluations) and significantly lower computational effort.
This dramatic advantage stems from the spectral convergence of GBFA, as established in Theorem 1 for analytic f, versus the algebraic convergence inherent to spline-based methods. The classical GBFA transformation already ensures exact kernel cancellation via (3), while the generalized transformation (35) enforces C^10 endpoint regularity through (A48), thereby accelerating quadrature convergence per Corollary 1. Consequently, GBFA achieves near machine-precision accuracy where piecewise polynomial methods remain limited by their fixed convergence rates, even at fine mesh resolutions.
Having established both theoretical and numerical evidence, we conclude by summarizing the broader implications of the generalized GBFA framework.

8. Conclusions and Discussion

This work establishes that the GBFA method, originally formulated for α ∈ Ω_1, extends naturally to all α ∈ R⁺ through two complementary approaches: (i) the classical transformation (2) with ϕ(α) = 1/α, yielding the exact identity (3) but suffering from reduced endpoint regularity for α > 1; and (ii) the ϕ(α)-generalized transformation (35), producing the weighted exact form (36) with a tunable ϕ(α) that restores smoothness at y = 0. The total GBFA error decomposes per Corollary 1 into an ITE governed by Theorems 1–5 and a quadrature component scaling as O(n_q^{−r}) when ϕ(α) satisfies (A48). Numerical experiments confirm that for r = 12, the generalized transformation achieves near machine-precision accuracy (LARE < −14) across α ∈ Ω_{1.5,9.5} using only n_q = 16 (Figure 5f), whereas the classical transformation requires n_q ∼ 10⁴ to attain comparable accuracy (Figure 4d).
A refined smoothness analysis in Appendix A.3 reveals that full super-exponential convergence–including exact quadrature for polynomial data–is achieved by the classical GBFA transformation iff 1/α ∈ Z⁺. In this case, the transformed integrand is real-analytic on Ω_1, and Gauss-type quadrature becomes exact for sufficiently large n_q. Otherwise, the integrand belongs only to the Hölder space C^{K,θ}(Ω_1) with K = ⌊1/α⌋ and θ = 1/α − K, limiting the asymptotic quadrature rate to O(n_q^{−K−θ}). Crucially, as clarified in Remark A9, when f is real-analytic, the transformed integrand admits a fractional power series with exponentially decaying coefficients, causing the pre-asymptotic quadrature error to decay dramatically faster than the worst-case algebraic bound, thereby reconciling the near-exponential convergence often observed numerically even when 1/α ∉ Z⁺.
Importantly, while the generalized ϕ(α)-framework was developed primarily for α > 1, our numerical study in Section 7 confirms its applicability to the regime 0 < α < 1 with 1/α ∉ Z⁺. In such cases, the classical map already yields rapidly decaying pre-asymptotic errors due to exponential coefficient decay in the fractional power series; consequently, the generalized transformation is not recommended unless (near) machine precision is explicitly required, as it incurs degraded accuracy for moderate n_q and introduces sensitivity to λ_q. When such extreme accuracy is needed, however, the generalized approach can enforce a prescribed algebraic rate O(n_q^{−r}) for any r > 0, achieving the target precision beyond a threshold n_q ≈ 30 (see Figure 12).
The extension to arbitrary α > 0 enables direct application of GBFA to a broad class of fractional models, grounded in the exact identities (1), (3), and (36) and the error hierarchy of Theorems 1–5. In particular, the GBFA can be systematically applied to:
1.
High-order fractional viscoelastic models ( α > 1 ) [2], where the ϕ ( α ) -transformation maintains accuracy with modest n q .
2.
Super-diffusive transport ( α > 1 ) [3], efficiently evaluated via FSGIMs (40).
3.
Integer-order repeated integrals (α = m ∈ Z⁺), for which (36) provides a spectrally accurate alternative to classical quadrature.
4.
Variable-order operators α = α ( t ) > 0 , where precomputed FSGIMs Q n α E (classical) or (40) (generalized) can be tabulated and interpolated over α -nodes.
Key limitations and directions for further work include: (i) for f ∈ C^k(Ω_1) (Theorem 5), a rigorous analysis of the interplay among ϕ(α), the ITE, and the quadrature error–particularly in the presence of endpoint singularities–is still lacking; (ii) for variable-order α(t), pointwise-adaptive ϕ(α(t)) satisfying (A48) for all t ∈ Ω_1 may significantly improve time-integration efficiency.
The ϕ(α)-generalized transformation (35) thus elevates GBFA from a method restricted to 0 < α < 1 to a unified, high-accuracy framework for arbitrary α ∈ R⁺, with tunable smoothness and efficiency controlled by the ϕ(α)-design strategy via (A48).

8.1. Practical Implementation Guidelines

While the analytical structure of GBFA is unchanged for α > 0 , numerical behavior hinges critically on the choice between transformations. The following guidelines emerge from [4], Theorems 1–5, and the numerical experiments in Section 7.2 and Section 7.3.
  • For 0 < α < 1, the classical transformation (2) with ϕ(α) = 1/α yields the exact identity (3). The smoothness of the transformed integrand depends critically on whether 1/α ∈ Z⁺:
    -
    If 1/α ∈ Z⁺, the integrand is real-analytic on Ω_1, and GBFA achieves full super-exponential convergence (including exact quadrature for polynomial data). Parameter recommendations from [4] then apply directly.
    -
    If 1/α ∉ Z⁺, the integrand lies in C^{K,θ}(Ω_1) with K = ⌊1/α⌋ and θ = 1/α − K, limiting the asymptotic quadrature rate to O(n_q^{−K−θ}). However, for real-analytic f, the fractional power series of the integrand has exponentially decaying coefficients (Remark A9), yielding dramatically faster pre-asymptotic convergence—often appearing as high-order algebraic or near-exponential decay for practical n_q. In this regime, the generalized ϕ(α)-transformation is not recommended unless (near) machine precision is required (see Section 7).
    When 1/α ∉ Z⁺, parameter recommendations from [4] depend on the scale of n and n_q:
    -
    For relatively small n and n_q: any λ ∈ T_{c,r} is feasible, where
    $\mathcal{T}_{c,r} = \left\{ \gamma : -\tfrac{1}{2} + \varepsilon \le \gamma \le r \right\}, \qquad 0 < \varepsilon \ll 1, \quad r \in \Omega_{1,2}.$
    The set T_{c,r} is often the effective operational range for the SG parameters used in SG-based collocation regimes; the subscript `c' in T_{c,r} denotes "collocation." Choosing a smaller λ_q from this range generally improves quadrature accuracy, with the notable exception that λ_q = 0.5 often minimizes the quadrature error.
    -
    For relatively large n and n q :
    *
    Precision computations: select λ ∈ Ω_{−0.5+ε, 0} ∖ N_δ(λ*) with λ* ≈ −0.1351, and choose λ_q ∈ T_{c,r} such that
    $\lambda_q < \begin{cases} \lambda, & \text{if } n \sim n_q, \\ 3/2, & \text{if } n \gg n_q, \end{cases}$
    where N_δ(λ*) excludes a δ-neighborhood of λ* to avoid amplification of the interpolation error.
    *
    Standard computational scenarios: utilize λ = λ q = 0 , corresponding to shifted Chebyshev approximation, which offers a robust default balancing accuracy and efficiency for smooth functions.
  • For α > 1 , two strategies apply:
    (i)
    Classical transformation: use (2) with ϕ(α) = 1/α, but anticipate algebraic quadrature convergence O(n_q^{−p}) with p decreasing as α increases (Figure 3f). Set λ_q = 0 for optimal performance and use n_q ∼ 10³–10⁴ to achieve high accuracy (Figure 4(c)).
    (ii)
    Generalized transformation: use (35) with ϕ(α) chosen via (A48) to enforce C^r smoothness. As shown in Figure 5, with r = 12 and n_q = 16, this achieves LARE < −14 (ARE < 10^{−14}) across α ∈ Ω_{1.5,9.5}. Use λ_q = 0.5 for best results (Figure 5e).
  • Parameter sensitivity (Figure 3d,e, Figure 4b,c and Figure 5d,e):
    -
    λ: weakly influences the total error for α > 1 under the classical transformation (quadrature-dominated regime). For the ϕ(α)-generalized transformation with sufficiently large r, the total error becomes ITE-dominated, and the choice of λ is therefore more critical than in the classical, quadrature-dominated regime. Choosing λ ∈ Ω_1 ensures spectral stability.
    -
    λ q : has a dominant influence on quadrature accuracy. For α > 1 , the classical transformation performs best with λ q = 0 . For the ϕ ( α ) -generalized transformation, λ q = 0.5 is consistently optimal, and can improve LARE by 8–10 orders of magnitude compared with suboptimal choices.
    -
    n: once n exceeds the degree of a polynomial f (or achieves the target ITE per Theorem 1), further refinement yields diminishing returns; focus instead on n_q.
    -
    n q : governs quadrature error. For the generalized transformation with α > 1 , r = 12 , n q = 16 often suffices for near machine precision (Figure 5f).
  • For repeated evaluations with fixed α, precompute the appropriate FSGIM: Q_n^{αE} for the classical transformation or (40) for the generalized transformation. For the classical FSGIM of [4], the cost analysis following [4] shows that the construction of Q_n^{αE} requires O(n(n + n_q)) operations per single-point evaluation. After this offline stage, evaluating I_t^α f via (7) reduces to a single matrix–vector multiplication, giving an online cost of O(n). For M > 1 evaluation points, the classical FSGIM construction incurs O(M n(n + n_q)) operations. Once Q_n^{αE} is formed, the online evaluation of I_{z_M}^α f requires only O(M n) operations.
    For the weighted FSGIM defined row-wise by (40), Section 5 establishes that the per-row construction cost is O(n(n + n_q)), identical in asymptotic order to the classical case. Hence, for M + 1 evaluation points, the total offline cost to assemble the full weighted FSGIM is O(M n(n + n_q)). The subsequent online application to f_{0:n} remains a dense matrix–vector product, incurring O(M n) operations. Thus, both the classical and weighted FSGIMs share the same asymptotic offline/online computational structure, differing only in constant factors arising from the evaluation of the weight y^{p(α)} and the modified parameter ϕ(α).
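The offline/online split described above can be sketched generically. The toy below uses Chebyshev interpolation nodes and Gauss–Legendre quadrature standing in for the SG/SGG machinery, and plain Lagrange bases in place of the paper's FSGIM formulas; it assembles a matrix Q once, after which each new data vector costs only one matrix–vector product:

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.special import gamma

def build_matrix(tpts, nodes, alpha, nq):
    """Offline stage: Q @ f(nodes) approximates I_t^alpha f at every t in tpts,
    using the classical map tau = t(1 - y^(1/alpha)) and Lagrange interpolation."""
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w
    Q = np.empty((len(tpts), len(nodes)))
    for i, t in enumerate(tpts):
        tau = t * (1.0 - y**(1.0 / alpha))            # transformed quadrature abscissae
        for k in range(len(nodes)):
            e = np.zeros(len(nodes)); e[k] = 1.0      # k-th Lagrange basis via unit data
            Lk = P.polyval(tau, P.polyfit(nodes, e, len(nodes) - 1))
            Q[i, k] = t**alpha / gamma(alpha + 1.0) * np.dot(w, Lk)
    return Q

n, alpha = 5, 0.5                                     # 1/alpha = 2: integrand is polynomial in y
nodes = 0.5 * (1.0 - np.cos(np.pi * (2.0 * np.arange(n + 1) + 1.0) / (2.0 * n + 2.0)))
tpts = np.array([0.25, 0.5, 0.75, 1.0])
Q = build_matrix(tpts, nodes, alpha, nq=8)            # offline stage

f = lambda s: 2.0 * s**3 + 8.0 * s
approx = Q @ f(nodes)                                 # online stage: one mat-vec, O(M n)
exact = 12.0 * tpts**(3.0 + alpha) / gamma(4.0 + alpha) \
      + 8.0 * tpts**(1.0 + alpha) / gamma(2.0 + alpha)
```

Here M = 4 evaluation points; since 1/α = 2 ∈ Z⁺ and f is cubic, both the interpolation and the 8-point quadrature are exact, so the residual is pure rounding.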

8.2. Performance Comparison and Trade-Offs

  • Classical transformation ( ϕ ( α ) = 1 / α ):
    -
    Advantage: Exact kernel cancellation in (3); optimal for α < 1.
    -
    Limitation: Algebraic quadrature convergence for α > 1 (Figure 3f and Figure 4d).
    -
    Best for: α < 1, with either 1/α ∈ Z⁺ or moderate-accuracy regimes. In the latter case, although 1/α ∉ Z⁺ implies only algebraic asymptotic quadrature convergence O(n_q^{−K−θ}) with K = ⌊1/α⌋ and θ = 1/α − K, the transformed integrand admits a fractional power series with exponentially decaying coefficients for real-analytic f, yielding dramatically faster pre-asymptotic decay that often appears near-exponential in practice (see Remark A9).
  • ϕ ( α ) -generalized transformation:
    -
    Advantage: Enforces C r smoothness via (A48), yielding near machine precision with n q = O ( 10 ) (Figure 5f).
    -
    Limitation: Introduces the weight y^{p(α)} with p(α) = αϕ(α) − 1, necessitating a tuned λ_q; moreover, for 0 < α < 1 with 1/α ∉ Z⁺, achieving the asymptotic regime O(n_q^{−r}) requires n_q to be sufficiently large (cf. Figure 12).
    -
    Best for: α > 1, or 0 < α < 1 with 1/α ∉ Z⁺ under high-accuracy demands (ARE ≲ 10^{−15}).

Funding

This work was not supported by any funding agency, grant, or institutional resources.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Conflicts of Interest

The author declares that there are no competing interests.

List of Acronyms

Table 5. List of acronyms used in this paper
Acronym Meaning
ARE Absolute relative error
FVIE Fractional Volterra integral equation
FSGIM Fractional shifted Gegenbauer integration matrix
GBFA Gegenbauer-based fractional approximation
HF Hybrid function
ITE Interpolation-induced truncation error
LARE Logarithm of the absolute relative error
MAE Maximum absolute error
RLFI Riemann–Liouville fractional integral
RmsE Root mean square error
SG Shifted Gegenbauer
SGG Shifted Gegenbauer–Gauss
SPH Smoothed particle hydrodynamics

Notation

Table 6. Key notations used in this paper
Logical Operators and Quantifiers
∀ For any
∃ For some
∃_{s.l.} For sufficiently large
Integral Operators
I_t^{(τ)} h ∫_0^t h(τ) dτ
I_{a,b}^{(t)} h ∫_a^b h(t) dt
[symbol not rendered] ∫_0^b h(u(t)) dt
[symbol not rendered] ∫_a^b h(u(t)) dt
Fractional Calculus
I t α f Riemann–Liouville fractional integral, α > 0
Γ ( α ) Gamma function
Function Spaces and Sets
C Complex numbers
R Real numbers
R 0 + Non-negative reals
Z + Positive integers
Z 0 + Non-negative integers
N n Index set { 1 , , n }
J n Index set { 0 , , n }
L 2 ( Ω 1 ) Square-integrable functions on [ 0 , 1 ]
Vectors, Matrices, and Operators
1 n All ones column vector of size n
z M [ z M , 0 , , z M , M ]
z M N [ z M , 0 N , , z M , M N ]
[symbol not rendered] [h_0(t), …, h_N(t)]
h ( z M ) [ h 0 ( z M ) , , h N ( z M ) ]
≳ Greater than or approximately equal to
Lists and Sequences
i : j i , i + 1 , , j
i : k : j List of numbers from i to j with increment k
y 1 : n y 1 , y 2 , , y n
GBFA Framework
Ĝ_n^{(λ)}(t) nth-degree SG polynomial, λ > −1/2
Ω t Interval [ 0 , t ]
Ω_t° Interior (0, t)
Ω a , b Interval [ a , b ]
G ^ n λ SGG nodes on Ω 1
L k λ ( t ) SG Lagrange basis polynomials
I n f ( t ) GBFA interpolant of f
f 0 : n Vector [ f 0 , , f n ]
Q n α E α th-order FSGIM
λ SG interpolation parameter (λ > −1/2)
λ_q SG quadrature parameter (λ_q > −1/2)
n Interpolation degree
n q Quadrature order

Appendix A Mathematical Foundations

This appendix provides rigorous proofs of the key mathematical properties that underpin the GBFA method. Specifically, it establishes: (i) the orthogonality of SG polynomials, ensuring a stable basis for approximation with rapid convergence; (ii) the bijectivity of the RLFI transformation (2), which guarantees the mathematical validity and reversibility of the coordinate change; and (iii) the resulting analytic exactness of the transformed integral. Collectively, these properties provide a rigorous mathematical foundation for the GBFA framework and justify its extension to arbitrary α > 0 .

Appendix A.1. Orthogonality of SG Polynomials

The SG polynomials inherit the following well-known property.
Theorem A8 
(Orthogonality of SG Polynomials). The SG polynomials Ĝ_n^{(λ)}, defined for t ∈ Ω_1 and λ > −1/2, form an orthogonal basis on Ω_1 with respect to the weight function (t − t²)^{λ−1/2}. Specifically, they satisfy the orthogonality relation:
∫_0^1 Ĝ_m^{(λ)}(t) Ĝ_n^{(λ)}(t) (t − t²)^{λ−1/2} dt = δ_{mn} ĥ_n^{(λ)},
where δ_{mn} is the Kronecker delta and ĥ_n^{(λ)} is the normalization constant given by:
ĥ_n^{(λ)} = π 2^{1−4λ} Γ(n + 2λ) / [n! (n + λ) Γ²(λ)].
Proof. 
The SG polynomials are defined in terms of the standard Gegenbauer polynomials G_n^{(λ)} through the transformation:
Ĝ_n^{(λ)}(t) = G_n^{(λ)}(2t − 1), t ∈ Ω_1.
Standard Gegenbauer polynomials are orthogonal on [−1, 1] with respect to the weight function (1 − x²)^{λ−1/2}:
∫_{−1}^{1} G_m^{(λ)}(x) G_n^{(λ)}(x) (1 − x²)^{λ−1/2} dx = δ_{mn} h_n^{(λ)},
where
h_n^{(λ)} = π 2^{1−2λ} Γ(n + 2λ) / [n! (n + λ) Γ²(λ)].
Substituting x = 2t − 1 (so that dx = 2 dt and 1 − x² = 4(t − t²)) into the standard orthogonality relation:
∫_{−1}^{1} G_m^{(λ)}(x) G_n^{(λ)}(x) (1 − x²)^{λ−1/2} dx = 2 ∫_0^1 G_m^{(λ)}(2t − 1) G_n^{(λ)}(2t − 1) [4(t − t²)]^{λ−1/2} dt = 2 · 4^{λ−1/2} ∫_0^1 Ĝ_m^{(λ)}(t) Ĝ_n^{(λ)}(t) (t − t²)^{λ−1/2} dt.
Therefore:
∫_0^1 Ĝ_m^{(λ)}(t) Ĝ_n^{(λ)}(t) (t − t²)^{λ−1/2} dt = δ_{mn} h_n^{(λ)} / 2^{2λ} = δ_{mn} ĥ_n^{(λ)},
where ĥ_n^{(λ)} = h_n^{(λ)}/2^{2λ}. This completes the proof of orthogonality. The completeness of the polynomial basis follows from standard results in approximation theory. □
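Theorem A8 can be verified numerically. The sketch below picks λ = 3/2, for which the SG weight (t − t²)^{λ−1/2} = t − t² is itself a polynomial, so a Gauss–Legendre rule integrates every product Ĝ_m Ĝ_n w exactly; the Gegenbauer polynomials are built from their standard three-term recurrence.

```python
import math
import numpy as np

def gegenbauer(n, lam, x):
    """Standard Gegenbauer C_n^{(lam)}(x) via the three-term recurrence
    n*C_n = 2x(n+lam-1)*C_{n-1} - (n+2*lam-2)*C_{n-2}, C_0 = 1, C_1 = 2*lam*x."""
    c0, c1 = np.ones_like(x), 2 * lam * x
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2 * x * (k + lam - 1) * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

lam = 1.5
h_hat = lambda n: (math.pi * 2**(1 - 4*lam) * math.gamma(n + 2*lam)
                   / (math.factorial(n) * (n + lam) * math.gamma(lam)**2))

x, w = np.polynomial.legendre.leggauss(60)       # exact for these polynomials
t = (x + 1) / 2                                  # shifted nodes on [0, 1]
wt = w / 2 * (t - t**2)**(lam - 0.5)             # SG weight (polynomial for lam = 1.5)
for m in range(5):
    for n in range(5):
        Gm = gegenbauer(m, lam, 2*t - 1)         # shifted polynomials G^(lam)(2t-1)
        Gn = gegenbauer(n, lam, 2*t - 1)
        val = np.dot(wt, Gm * Gn)
        ref = h_hat(n) if m == n else 0.0
        assert abs(val - ref) < 1e-12
print("orthogonality verified for m, n < 5")
```

For instance, with λ = 3/2 the formula gives ĥ_0 = 1/6, matching ∫_0^1 (t − t²) dt directly.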

Appendix A.2. Bijectivity of the RLFI Transformation

Theorem A9 
(Bijectivity of the RLFI Transformation). For any fixed t > 0 and α > 0, the mapping τ : Ω_1 → Ω_t defined by τ(y) = t(1 − y^{1/α}) is bijective. That is, it is both injective and surjective, establishing a one-to-one correspondence between the intervals Ω_1 and Ω_t.
Proof. 
A function is bijective if and only if it is both injective and surjective.
1.
Injectivity (One-to-One): Assume τ(y_1) = τ(y_2) for y_1, y_2 ∈ Ω_1. Then:
t(1 − y_1^{1/α}) = t(1 − y_2^{1/α}).
Since t > 0, we can divide both sides by t:
1 − y_1^{1/α} = 1 − y_2^{1/α} ⟹ y_1^{1/α} = y_2^{1/α} ⟹ y_1 = y_2.
Thus, τ is injective.
2.
Surjectivity (Onto): For any τ_0 ∈ Ω_t, we need to find y_0 ∈ Ω_1 such that τ(y_0) = τ_0. Solving:
τ_0 = t(1 − y_0^{1/α}) ⟹ 1 − y_0^{1/α} = τ_0/t ⟹ y_0^{1/α} = 1 − τ_0/t.
Since τ_0 ∈ Ω_t, we have τ_0/t ∈ Ω_1, so 1 − τ_0/t ∈ Ω_1. Raising both sides to the power α:
y_0 = (1 − τ_0/t)^α ∈ Ω_1.
Thus, for every τ_0 ∈ Ω_t there exists y_0 = (1 − τ_0/t)^α ∈ Ω_1 with τ(y_0) = τ_0. Therefore, τ is surjective.
3.
Explicit Inverse: The inverse function y : Ω_t → Ω_1 is explicitly given by:
y(τ) = (1 − τ/t)^α.
This confirms the bijection since a function is bijective if and only if it has an inverse.
4.
Boundary Behavior:
  • By definition: As y → 0⁺, τ → t⁻; as y → 1⁻, τ → 0⁺.
  • By injectivity and the inverse function: As τ → 0⁺, y → 1⁻; as τ → t⁻, y → 0⁺.
Thus, the mapping preserves the open intervals: τ(Ω_1°) = Ω_t°. □
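As a quick numerical sanity check of Theorem A9, the forward map and its explicit inverse can be composed on a grid and verified to return the identity in both directions (the interior grid avoids the endpoint underflow of y^{1/α} in floating point for small α):

```python
import numpy as np

t = 0.75
for alpha in (0.3, 1.0, 2.5, 9.5):
    tau = lambda y: t * (1 - y**(1/alpha))         # forward map (2)
    inv = lambda s: (1 - s/t)**alpha               # explicit inverse
    y = np.linspace(0.01, 0.99, 99)
    assert np.allclose(inv(tau(y)), y, atol=1e-9)  # y(tau(y)) = y on (0, 1)
    s = np.linspace(0.01, 0.99, 99) * t
    assert np.allclose(tau(inv(s)), s, atol=1e-9)  # tau(y(s)) = s on (0, t)
print("round-trip identity verified")
```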
Corollary A2 
(Diffeomorphism Property). The transformation (2) constitutes a C^∞-diffeomorphism between Ω_1° and Ω_t°. Specifically, it has non-vanishing Jacobian
dτ/dy = −(t/α) y^{1/α−1},  (A46)
and both τ(y) and its inverse
y(τ) = (1 − τ/t)^α,  (A47)
are C^∞ on their respective domains.
Proof. 
For fixed t > 0 and α > 0, the map y ↦ y^{1/α} is well-defined and real-analytic on Ω_1°. Hence (2) is real-analytic, in particular τ ∈ C^∞(Ω_1°). Differentiating (2) with respect to y gives (A46). Since t > 0, α > 0, and y ∈ Ω_1°, we have y^{1/α−1} > 0, so
dτ/dy(y) ≠ 0, ∀ y ∈ Ω_1°.
By Theorem A9, the mapping τ : Ω_1 → Ω_t is bijective. Because τ is C^∞ with non-vanishing derivative on Ω_1°, the inverse function theorem implies that the inverse map (A47) is also C^∞ on Ω_t°. Thus τ is a C^∞-diffeomorphism between Ω_1° and Ω_t°. □

Appendix A.3. Smoothness Characterization of GBFA-Transformed Integrands and Its Implications for Convergence Rates

Theorem A10 
(Exact Smoothness of the Transformed Integrand under the Classical GBFA Map). Let 0 < α < 1, β = 1/α > 1, t ∈ Ω_1, and assume f is real-analytic on a neighborhood of Ω_t. Define the transformed integrand
g(y) = f(t(1 − y^β)), y ∈ Ω_1.
Set K = ⌊β⌋ and θ = β − K ∈ [0, 1). Then:
(i) 
If θ > 0 (i.e., β ∉ Z^+), then g ∈ C^{K,θ}(Ω_1) and, generically, g ∉ C^{K+1}(Ω_1).
(ii) 
If θ = 0 (i.e., β ∈ Z^+), then g is real-analytic on Ω_1.
Proof. 
Define ϕ(y) = t(1 − y^β). Since f is real-analytic on a neighborhood of Ω_t and ϕ(y) ∈ Ω_t for all y ∈ Ω_1, the composition g = f ∘ ϕ is real-analytic on Ω_1° by the chain rule.
To analyze regularity at y = 0, first note that for any m ∈ Z^+ and any β > 1,
d^m/dy^m y^β = c_{β,m} y^{β−m},
for some constant c_{β,m} (with c_{β,m} ≠ 0 whenever β ∉ Z^+ or m ≤ β). Thus
ϕ^{(m)}(y) = −t c_{β,m} y^{β−m}.
By Faà di Bruno’s formula [17],
g^{(m)}(y) = Σ_{k=1}^m f^{(k)}(ϕ(y)) B_{m,k}(ϕ′(y), …, ϕ^{(m−k+1)}(y)),
where each monomial in B_{m,k} is a product of powers of ϕ^{(j)}(y) and scales as y^{βk−m}. The most singular contribution arises when k = 1, yielding asymptotically
g^{(m)}(y) ∼ −t c_{β,m} f′(ϕ(y)) y^{β−m}, y → 0⁺.
Since f′(ϕ(y)) → f′(t) as y → 0⁺ and f′(t) ≠ 0 generically, g^{(m)} remains bounded near y = 0 if and only if β − m ≥ 0, i.e., m ≤ β.
We now distinguish two cases.
Case (i): β ∉ Z^+. Then K = ⌊β⌋ < β < K + 1, so g^{(m)} is continuous on Ω_1 for all m ≤ K, while for m = K + 1 we have
g^{(K+1)}(y) ∼ C y^{θ−1}, θ = β − K ∈ (0, 1),
which diverges as y → 0⁺. Moreover,
g^{(K)}(y) − g^{(K)}(0) = O(y^θ), y → 0⁺,
so g^{(K)} is Hölder-continuous of order θ at y = 0. Hence g ∈ C^{K,θ}(Ω_1) and, generically, g ∉ C^{K+1}(Ω_1).
Case (ii): β = m ∈ Z^+. In this case y^β = y^m is a polynomial, so ϕ(y) = t(1 − y^m) is also a polynomial and therefore real-analytic on Ω_1. Since f is real-analytic on a neighborhood of Ω_t, the composition g = f ∘ ϕ is real-analytic on Ω_1. In particular, g ∈ C^∞(Ω_1).
This completes the proof. □
The preceding analysis leads directly to the following corollary, formalizing the resulting regularity and quadrature behavior.
Corollary A3. 
Let f ∈ C^∞(Ω_1) and define g(y) = f(t(1 − y^{1/α})). Then the following hold:
(i)
If 1/α = m ∈ Z^+, then g is real-analytic on Ω_1, and the (n_q, λ_q)-GBFA quadrature applied to the SG-transformed integrals in (6) converges super-exponentially in n_q. If, in addition, f is a polynomial of degree d, then g is a polynomial of degree at most dm, and the quadrature is exact for all n_q ≥ dm. Consequently, the total GBFA error reduces to the ITE, which vanishes for all n ≥ d, yielding exact evaluation of I_t^α f for every n ≥ d and n_q ≥ d/α.
(ii)
If 1/α ∉ Z^+, then g ∈ C^{K,θ}(Ω_1) with K = ⌊1/α⌋ and θ = 1/α − K ∈ (0, 1), and generically g ∉ C^{K+1}(Ω_1); cf. Theorem A10(i). In this case, the quadrature error decays algebraically as O(n_q^{−K−θ}) under Gauss-type rules.
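The exactness claim in case (i) is easy to check numerically. With α = 1/2 (so m = 2) and the cubic f(s) = s³ (d = 3), the transformed integrand is a polynomial of degree dm = 6, so a Gauss rule with n_q = 4 nodes (exact to degree 2n_q − 1 = 7) reproduces the RLFI to rounding error; Gauss–Legendre on [0, 1] stands in here for the SGG rule.

```python
import math
import numpy as np

alpha, t = 0.5, 0.5
f = lambda s: s**3                                # degree d = 3, m = 1/alpha = 2
x, w = np.polynomial.legendre.leggauss(4)         # n_q = 4: exact to degree 7 >= d*m
y, w = (x + 1) / 2, w / 2
# classical map: g(y) = f(t(1 - y**2)) is a polynomial of degree d*m = 6
approx = t**alpha / math.gamma(alpha + 1) * np.dot(w, f(t * (1 - y**2)))
exact = math.gamma(4) / math.gamma(4 + alpha) * t**(3 + alpha)
print(abs(approx - exact))                        # zero up to rounding
```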
Remark A9. 
The algebraic rate O ( n q K θ ) in Corollary A3(ii) represents the worst-case convergence for arbitrary f C ( Ω 1 ) . However, if f is real-analytic on a neighborhood of Ω t , then the transformed integrand
g ( y ) = f t ( 1 y 1 / α ) , y Ω 1 ,
admits a convergent fractional power series expansion near y = 0 of the form
g(y) = Σ_{m=0}^∞ a_m y^{m/α}, a_m = (−t)^m c_m,
where f(z) = Σ_{m=0}^∞ c_m (z − t)^m with |c_m| ≤ C B^m for some C, B > 0. Consequently, the coefficients satisfy |a_m| ≤ C̃ B̃^m for C̃ = C and B̃ = |t| B, exhibiting exponential-type decay. This rapid decay implies that the tail of the series beyond indices m ≳ n_q is extremely small. As a result, Gauss-type quadrature rules, though asymptotically limited to an algebraic rate by the endpoint Hölder regularity, often exhibit dramatically faster pre-asymptotic convergence in practice. The observed convergence can thus appear nearly exponential or high-order algebraic over typical computational ranges (n_q ∼ 10²–10³), even though the strict asymptotic rate O(n_q^{−K−θ}) governs only the limit n_q → ∞.

Appendix A.4. Diffeomorphism Property of the ϕ(α)-Mapping

We now formalize the smoothness and invertibility of the generalized RLFI transformation (35) introduced in Section 4. The result is the exact analogue of Corollary A2 for the mapping (2).
Theorem A11 
(C^∞-diffeomorphism of the ϕ(α)-mapping). Let t ∈ R^+, α > 0, and let ϕ(α) ∈ R^+ be fixed. Consider the mapping
τ : Ω_1 → Ω_t, τ(y) = t(1 − y^{ϕ(α)}),
as defined in (35). Then τ is a C^∞-diffeomorphism between Ω_1° and Ω_t°; that is,
(i) 
τ is bijective,
(ii) 
τ ∈ C^∞(Ω_1°) with
dτ/dy(y) = −t ϕ(α) y^{ϕ(α)−1} ≠ 0, ∀ y ∈ Ω_1°,
(iii) 
its inverse y : Ω_t → Ω_1,
y(τ) = (1 − τ/t)^{1/ϕ(α)},
belongs to C^∞(Ω_t°).
Proof. 
To prove the smoothness and non-vanishing Jacobian, note that for any α > 0 and ϕ(α) > 0, the map y ↦ y^{ϕ(α)} is real-analytic on Ω_1°; thus the map (35) is real-analytic on Ω_1°, in particular τ ∈ C^∞(Ω_1°). Differentiating (35) with respect to y yields
dτ/dy(y) = −t ϕ(α) y^{ϕ(α)−1}, y ∈ Ω_1°.
Since t > 0, ϕ(α) > 0, and y^{ϕ(α)−1} > 0 for all y ∈ Ω_1°, it follows that
dτ/dy(y) ≠ 0, ∀ y ∈ Ω_1°.
The bijectivity argument parallels that of Theorem A9, but now for (35). In particular, to verify the map’s injectivity, assume τ(y_1) = τ(y_2) for some y_1, y_2 ∈ Ω_1. Then
t(1 − y_1^{ϕ(α)}) = t(1 − y_2^{ϕ(α)}).
Dividing by t > 0 gives
1 − y_1^{ϕ(α)} = 1 − y_2^{ϕ(α)} ⟹ y_1^{ϕ(α)} = y_2^{ϕ(α)} ⟹ y_1 = y_2,
since the map y ↦ y^{ϕ(α)} is strictly increasing on Ω_1. Hence τ is injective. To prove surjectivity, let τ_0 ∈ Ω_t be arbitrary. We seek y_0 ∈ Ω_1 such that τ(y_0) = τ_0. Solving
τ_0 = t(1 − y_0^{ϕ(α)})
for y_0 gives
y_0 = (1 − τ_0/t)^{1/ϕ(α)}.
Since τ_0 ∈ Ω_t, we have τ_0/t ∈ Ω_1, hence
1 − τ_0/t ∈ Ω_1 ⟹ y_0 ∈ Ω_1.
Thus, for every τ_0 ∈ Ω_t there exists y_0 ∈ Ω_1 with τ(y_0) = τ_0, and τ is surjective. Combining injectivity and surjectivity, τ is bijective with inverse
y(τ) = (1 − τ/t)^{1/ϕ(α)}, τ ∈ Ω_t.
The smoothness of the inverse follows by observing that the map
τ ↦ 1 − τ/t
is real-analytic on Ω_t, and its image lies in Ω_1. The map
y ↦ y^{1/ϕ(α)},
with ϕ(α) > 0 fixed, is real-analytic on Ω_1°. Therefore, their composition
y(τ) = (1 − τ/t)^{1/ϕ(α)}
is real-analytic on Ω_t°, in particular y ∈ C^∞(Ω_t°). In summary, we have shown that τ : Ω_1 → Ω_t is bijective, τ ∈ C^∞(Ω_1°) with dτ/dy ≠ 0 on Ω_1°, and its inverse y is C^∞ on Ω_t°. Hence, by definition, τ is a C^∞-diffeomorphism between Ω_1° and Ω_t°. □

Appendix A.5. Smoothness of g via Faà di Bruno Exponent Analysis

Theorem A12 
(Sufficient Smoothness Condition for g(y) = y^{αϕ(α)−1} f(t(1 − y^{ϕ(α)}))). Let α > 0, t ∈ Ω_1, and f ∈ C^∞(Ω_1). Define
g(y) = y^{p(α)} f(t(1 − y^{ϕ(α)})), p(α) = αϕ(α) − 1,
for y ∈ Ω_1. Then, for any r ∈ N, a sufficient condition for g ∈ C^r(Ω_1) is
ϕ(α) ≥ (r + 1)/α.  (A48)
Proof. 
Set ϕ = ϕ(α) and p = p(α) = αϕ − 1 for brevity, and define ψ(y) = t(1 − y^ϕ). Then g(y) = y^p f(ψ(y)). Since f ∈ C^∞(Ω_1) and ψ ∈ C^∞(Ω_1°), g is smooth on Ω_1°. We analyze the behavior of g^{(r)}(y) as y → 0⁺. By the Leibniz rule,
g^{(r)}(y) = Σ_{ℓ=0}^r (r choose ℓ) [d^ℓ/dy^ℓ y^p] · [d^{r−ℓ}/dy^{r−ℓ} f(ψ(y))].
Step 1: The ℓ = r term. Repeated differentiation of y^p shows that
d^r/dy^r y^p = c_{p,r} y^{p−r},
for some finite constant c_{p,r} (possibly zero only if p ∈ Z_0^+ and r > p). Since ψ(y) → t as y → 0⁺ and f is smooth on Ω_1, we have
f(ψ(y)) = f(t) + o(1), y → 0⁺.
Thus, unless c_{p,r} f(t) = 0, the ℓ = r term behaves asymptotically as
[d^r/dy^r y^p] · f(ψ(y)) ∼ C y^{p−r}, y → 0⁺,
for some constant C ≠ 0. In particular, its exponent is p − r = αϕ − 1 − r. If c_{p,r} f(t) = 0, then this contribution either vanishes identically or is replaced by a term with a strictly larger exponent, which can only improve regularity at y = 0.
Step 2: The terms with ℓ < r. Each such term contains at least one derivative of the composition f ∘ ψ. By Faà di Bruno’s formula, we can write
d^{r−ℓ}/dy^{r−ℓ} f(ψ(y)) = Σ_{k=1}^{r−ℓ} f^{(k)}(ψ(y)) B_{r−ℓ,k}(ψ′(y), …, ψ^{(r−ℓ−k+1)}(y)),
where each monomial in the Bell polynomial B_{r−ℓ,k} is of the form
f^{(k)}(ψ(y)) ∏_{j=1}^{r−ℓ} (ψ^{(j)}(y))^{u_j},
with nonnegative integers u_1, …, u_{r−ℓ} satisfying
Σ_j j u_j = r − ℓ, Σ_j u_j = k.
Since ψ^{(j)}(y) = −t ϕ(ϕ − 1) ⋯ (ϕ − j + 1) y^{ϕ−j} ∼ C_j y^{ϕ−j} as y → 0⁺, the product scales as
∏_{j=1}^{r−ℓ} (ψ^{(j)}(y))^{u_j} ∼ C̃ y^{Σ_j (ϕ−j) u_j} = C̃ y^{ϕk−(r−ℓ)}.
Multiplying by d^ℓ/dy^ℓ y^p ∼ c_{p,ℓ} y^{p−ℓ} gives a total factor of
y^{p−ℓ} · y^{ϕk−(r−ℓ)} = y^{p+ϕk−r}.
Since k ≥ 1 and ϕ > 0, we have
p + ϕk − r ≥ p + ϕ − r > p − r.
Thus every term with ℓ < r carries an exponent strictly larger than p − r, and therefore decays faster as y → 0⁺ than the generic ℓ = r contribution.
Step 3: Boundedness of g^{(r)} at y = 0. Combining the two steps, we see that, in the generic case where c_{p,r} f(t) ≠ 0, the smallest exponent among all contributions to g^{(r)}(y) is p − r. Hence the worst-case behaviour near y = 0 is governed by
g^{(r)}(y) = O(y^{p−r}) = O(y^{αϕ−1−r}), y → 0⁺.
Therefore g^{(r)} remains bounded at y = 0 whenever
αϕ − 1 − r ≥ 0,
which gives the smoothness condition (A48). If cancellations occur (e.g., c_{p,r} f(t) = 0), the first nonzero term in g^{(r)} has an exponent strictly larger than p − r, so the resulting regularity at y = 0 is even higher. Thus (A48) is a sufficient condition for g ∈ C^r(Ω_1). □
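The exponent analysis above can be probed numerically with finite differences. For α = 2.5 and f = exp, the choice ϕ = (r + 1)/α = 1.2 satisfies (A48) for r = 2, so g″ should tend to a finite limit at y = 0, whereas ϕ = 0.8 violates it and g″ should blow up like y^{αϕ−1−r} = y^{−0.2}; the step h = y/100 keeps the central-difference truncation error negligible at these points.

```python
import numpy as np

alpha, t = 2.5, 0.5

def g(y, phi):
    # weighted transformed integrand y**p * f(t(1 - y**phi)) with f = exp
    return y**(alpha*phi - 1) * np.exp(t * (1 - y**phi))

def second_diff(y, phi, h):
    # central second-difference approximation of g''(y)
    return (g(y + h, phi) - 2*g(y, phi) + g(y - h, phi)) / h**2

phi_ok, phi_bad = 1.2, 0.8          # (r+1)/alpha = 1.2 for r = 2; 0.8 violates (A48)
for y in (1e-2, 1e-3):
    print(y, second_diff(y, phi_ok, y/100), second_diff(y, phi_bad, y/100))
# phi = 1.2: g'' approaches a finite limit; phi = 0.8: |g''| grows like y**(-0.2)
```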

Appendix A.6. Gevrey Regularity of the Exact Solution (45)

We establish the precise smoothness class of the exact solution (45) of (44). Although u is real-analytic on Ω 1 , its derivatives grow factorially. We prove that u belongs to the Gevrey-1 class, which is equivalent to the class of real-analytic functions, but does not satisfy the derivative bound (9) required by Theorem 1.
Theorem A13 
(Gevrey-1 regularity of u(x) = √π (1 + x)^{−3/2}). The function u defined by (45) is real-analytic on Ω_1 and belongs to the Gevrey-1 class. Specifically, its derivatives satisfy
‖u^{(k)}‖_{L^∞(Ω_1)} ≤ C B^k k!, ∀ k ∈ Z_0^+,  (A49)
for some constants C, B > 0. Consequently, u does not satisfy the derivative growth condition (9).
Proof. 
The function u(x) = √π (1 + x)^{−3/2} is the restriction to Ω_1 of the complex function z ↦ √π (1 + z)^{−3/2}, which is analytic on C ∖ {z ∈ R : z ≤ −1}. Since the singularity at z = −1 is at a distance of 1 from Ω_1, u is real-analytic on Ω_1.
We now compute the kth derivative. For ν = 3/2, the kth derivative of (1 + x)^{−ν} is
d^k/dx^k (1 + x)^{−ν} = (−1)^k [Γ(k + ν)/Γ(ν)] (1 + x)^{−(k+ν)}.
Using Γ(3/2) = √π/2 and u(x) = √π (1 + x)^{−3/2}, we obtain
|u^{(k)}(x)| ≤ 2 Γ(k + 3/2), ∀ x ∈ Ω_1.
Applying the property of the Gamma function [22, Eq. (6.1.12)],
Γ(k + 3/2) = [(2k + 1)!!/2^{k+1}] √π,
and the bound for the double factorial (2k + 1)!! ≤ (2k + 1)!, we can use Stirling’s approximation to find that there exist constants C_1, B_1 > 0 such that
Γ(k + 3/2) ≤ C_1 B_1^k k!, ∀ k ∈ Z_0^+.
Thus, for C = 2C_1 and B = B_1, the bound (A49) holds. Consequently, u fails to meet the hypothesis of Theorem 1. □
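The bound (A49) can also be checked directly for one admissible (not claimed optimal) constant pair. Since |u^{(k)}| ≤ 2Γ(k + 3/2) and the ratio Γ(k + 3/2)/(2^k k!) is decreasing in k, the pair C = 2, B = 2 works for every k:

```python
import math

# Check |u^(k)| <= 2*Gamma(k + 3/2) <= C * B**k * k! with C = 2, B = 2,
# one admissible (not optimal) constant pair for the Gevrey-1 bound (A49).
for k in range(60):
    deriv_bound = 2 * math.gamma(k + 1.5)
    gevrey_bound = 2 * 2**k * math.factorial(k)
    assert deriv_bound <= gevrey_bound
# Note B = 1 would fail: Gamma(k + 3/2)/k! grows like sqrt(k), so B > 1 is needed.
print("Gevrey-1 bound verified for k < 60")
```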

References

  1. Podlubny, I. Fractional Differential Equations; Academic Press, 1999. [Google Scholar]
  2. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity; World Scientific, 2022. [Google Scholar]
  3. Metzler, R.; Klafter, J. The random walk’s guide to anomalous diffusion: A fractional dynamics approach. Physics Reports 2000, 339(1), 1–77. [Google Scholar]
  4. Elgindy, K. T. Super-exponential approximation of the Riemann–Liouville fractional integral via Gegenbauer-based fractional approximation methods. Algorithms 2025, 18, 395. [Google Scholar]
  5. Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Y. Algorithms for the fractional calculus: A selection of numerical methods. Computer Methods in Applied Mechanics and Engineering 2005, 194(6-8), 743–773. [Google Scholar] [CrossRef]
  6. Avcı, İ.; Mahmudov, N. I. Numerical solutions for multi-term fractional order differential equations with fractional Taylor operational matrix of fractional integration. Mathematics 2020, 8(1), 96. [Google Scholar] [CrossRef]
  7. Krishnasamy, V. S.; Mashayekhi, S.; Razzaghi, M. Numerical solutions of fractional differential equations by using fractional Taylor basis. IEEE/CAA Journal of Automatica Sinica 2017, 4(1), 98–106. [Google Scholar] [CrossRef]
  8. Zhang, J.; Zhou, F.; Mao, N. Numerical optimization algorithm for solving time-fractional telegraph equations. Physica Scripta 2025, 100(4), 045237. [Google Scholar] [CrossRef]
  9. Ghasempour, A.; Ordokhani, Y.; Sabermahani, S. Mittag-Leffler wavelets and their applications for solving fractional optimal control problems. Journal of Vibration and Control 2025, 31(5-6), 753–767. [Google Scholar]
  10. Shen, J.; Tang, T.; Wang, L.-L. Spectral Methods: Algorithms, Analysis and Applications; Springer, 2011. [Google Scholar]
  11. Canuto, C.; Hussaini, M. Y.; Quarteroni, A.; Zang, T. A. Spectral Methods: Fundamentals in Single Domains; Springer Science & Business Media, 2012. [Google Scholar]
  12. Elgindy, K. T. Gegenbauer Collocation Integration Methods: Advances in Computational Optimal Control Theory. Ph.D. thesis, Monash University, 2013. [Google Scholar]
  13. Zhang, T.-F.; Yin, Z. Gevrey regularity for solutions of the non-cutoff Boltzmann equation: The spatially inhomogeneous case. Nonlinear Analysis: Real World Applications 2014, 15, 246–261. [Google Scholar] [CrossRef]
  14. Jackson, D. The Theory of Approximation; American Mathematical Society Colloquium Publications, 1930; Volume 11. [Google Scholar]
  15. Xiang, S. On interpolation approximation: Convergence rates for polynomial interpolation for functions of limited regularity. SIAM Journal on Numerical Analysis 2016, 54(4), 2081–2113. [Google Scholar] [CrossRef]
  16. Elgindy, K. T. High-order, stable, and efficient pseudospectral method using barycentric Gegenbauer quadratures. Applied Numerical Mathematics 2017, 113, 1–25. [Google Scholar] [CrossRef]
  17. Johnson, W. P. The curious history of Faà di Bruno’s formula. The American Mathematical Monthly 2002, 109(3), 217–234. [Google Scholar]
  18. Ghorbani, K.; Semperlotti, F. On the one-dimensional SPH approximation of fractional-order operators. arXiv 2025, arXiv:2505.04350. [Google Scholar]
  19. Damarla, S. K.; Kundu, M. Novel hybrid function operational matrices of fractional integration: An application for solving multi-order fractional differential equations. AppliedMath 2025, 5(2), 55. [Google Scholar] [CrossRef]
  20. Usta, F. Numerical analysis of fractional Volterra integral equations via Bernstein approximation method. Journal of Computational and Applied Mathematics 2021, 384, 113198. [Google Scholar] [CrossRef]
  21. Micula, S. An iterative numerical method for fractional integral equations of the second kind. Journal of Computational and Applied Mathematics 2018, 339, 124–133. [Google Scholar] [CrossRef]
  22. Abramowitz, M.; Stegun, I. A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; US Government Printing Office, 1948; Vol. 55. [Google Scholar]
  23. Kumar, K.; Pandey, R. K.; Sharma, S. Approximations of fractional integrals and Caputo derivatives with application in solving Abel’s integral equations. Journal of King Saud University-Science 2019, 31(4), 692–700. [Google Scholar] [CrossRef]
  24. Rao, N.; Farid, M.; Jha, N. K. Approximation properties of a fractional integral-type Szász–Kantorovich–Stancu–Schurer operator via Charlier polynomials. Mathematics 2025, 13(18), 3039. [Google Scholar] [CrossRef]
  25. Ciesielski, M.; Grodzki, G. Numerical approximations of the Riemann–Liouville and Riesz fractional integrals. Informatica 2024, 35(1), 21–46. [Google Scholar] [CrossRef]
  26. Ciesielski, M. Numerical algorithms for approximation of fractional integrals and derivatives based on quintic spline interpolation. Symmetry 2024, 16(2), 252. [Google Scholar] [CrossRef]
  27. Batiha, I. M.; Saadeh, R.; Jebril, I. H.; Qazza, A.; Al-Nana, A. A.; Momani, S. Composite fractional trapezoidal rule with Romberg integration. Computer Modeling in Engineering & Sciences 2024, 140(3), 2729–2745. [Google Scholar] [CrossRef]
  28. Batiha, I. M.; Alshorm, S.; Al-Husban, A.; Saadeh, R.; Gharib, G.; Momani, S. The n-point composite fractional formula for approximating Riemann–Liouville integrator. Symmetry 2023, 15(4), 938. [Google Scholar] [CrossRef]
  29. Butt, S. I.; Khan, D.; Tipurić-Spužević, S.; Mohsin, B. B. Fractional perspective of HH f-divergence, midpoint and trapezoidal type estimates for superquadratic function. Kuwait Journal of Science 2025, 100452. [Google Scholar] [CrossRef]
  30. Bouafoura, M. K.; Braiek, N. B. Block pulse-based techniques for modelling and synthesis of non-integer systems. International Journal of Systems Science 2010, 41(5), 487–499. [Google Scholar] [CrossRef]
  31. Postavaru, O.; Toma, A. A numerical approach based on fractional-order hybrid functions of block-pulse and Bernoulli polynomials for numerical solutions of fractional optimal control problems. Mathematics and Computers in Simulation 2022, 194, 269–284. [Google Scholar] [CrossRef]
  32. Barary, Z.; Yazdani Cherati, A.; Nemati, S. Solving fractional optimal control-affine problems via fractional-order hybrid Jacobi functions. Control and Optimization in Applied Mathematics 2024, 9(1), 149–168. [Google Scholar]
  33. Sadek, L.; Ounamane, S.; Abouzaid, B.; Sadek, E. M. The Galerkin Bell method to solve the fractional optimal control problems with inequality constraints. Journal of Computational Science 2024, 77, 102244. [Google Scholar] [CrossRef]
  34. Mohammadi, H.; Saeedi, H.; Izadi, M. A spectral collocation method via Krawtchouk polynomials for two-dimensional Liouville–Caputo fractional optimal control problems. Results in Control and Optimization 2025, 100639. [Google Scholar] [CrossRef]
  35. Shadimetov, K.; Nuraliev, F.; Toshboev, O. Optimal methods for approximate calculation of the fractional Riemann-Liouville integral in Sobolev space. AIP Conference Proceedings 2024, 3147(1). [Google Scholar] [CrossRef]
  36. Zhou, Z.; Zhang, H.; Yang, X. The compact difference scheme for the fourth-order nonlocal evolution equation with a weakly singular kernel. Mathematical Methods in the Applied Sciences 2023, 46(5), 5422–5447. [Google Scholar] [CrossRef]
  37. Okayama, T.; Matsuo, T.; Sugihara, M. Sinc-collocation methods for weakly singular Fredholm integral equations of the second kind. Journal of Computational and Applied Mathematics 2010, 234(4), 1211–1227. [Google Scholar] [CrossRef]
  38. Galperin, E. A.; Kansa, E. J.; Makroglou, A.; Nelson, S. A. Variable transformations in the numerical solution of second kind Volterra integral equations with continuous and weakly singular kernels; extensions to Fredholm integral equations. Journal of Computational and Applied Mathematics 2000, 115(1-2), 193–211. [Google Scholar] [CrossRef]
  39. Baumann, G.; Stenger, F. Fractional calculus and Sinc methods. Fractional Calculus and Applied Analysis 2011, 14(4), 568–622. [Google Scholar] [CrossRef]
  40. Hussien, H. S.; Abdalla, M. Spectral optimal control solution for nonlinear weakly singular system of fractional integro-differential equations. Fractals 2025, 33(08), 1–11. [Google Scholar] [CrossRef]
  41. Taleshian, A. H.; Nemati, S.; Lima, P. M. Application of fractional-order hybrid Chelyshkov functions for solving a general class of fractional integro-differential equations with weakly singular kernels. Journal of Computational and Applied Mathematics 2025, 117110. [Google Scholar] [CrossRef]
  42. Al Jarbouh, A. Rheological behaviour modelling of viscoelastic materials by using fractional model. Energy Procedia 2012, 19, 143–157. [Google Scholar] [CrossRef]
  43. Anwar, M. S.; Alam, M. M.; Khan, M. A.; Abouzied, Amr S.; Hussain, Z.; Puneeth, V. Generalized viscoelastic flow with thermal radiations and chemical reactions. Geoenergy Science and Engineering 2024, 232, 212442. [Google Scholar] [CrossRef]
  44. Geetanjali, G.; Sharma, P. K. Fractional order strain and viscosity in the Moore Gibson Thompson thermoelastic diffusion model: A study of transient responses in one-dimensional half-space. Mechanics of Materials 2025, 105542. [Google Scholar] [CrossRef]
  45. Mescia, L.; Bia, P.; Caratelli, D. Fractional-calculus-based electromagnetic tool to study pulse propagation in arbitrary dispersive dielectrics. Physica Status Solidi (a) 2019, 216(3), 1800557. [Google Scholar] [CrossRef]
  46. Xu, K.; Chen, L.; Wang, M.; Lopes, A. M.; Tenreiro Machado, J. A.; Zhai, H. Improved decentralized fractional PD control of structure vibrations. Mathematics 2020, 8(3), 326. [Google Scholar] [CrossRef]
  47. Jiang, Y.; Zhang, B.; Shu, X.; Wei, Z. Fractional-order autonomous circuits with order larger than one. Journal of Advanced Research 2020, 25, 217–225. [Google Scholar] [CrossRef]
  48. Krejcar, O.; Namazi, H. Review of the applications of different fractional models in engineering. Applied Mathematical Modelling 2025, 116623. [Google Scholar] [CrossRef]
  49. Nemati, S.; Sedaghat, S.; Arefi, S. A new numerical approach for solving space–time fractional Schrödinger differential equations via fractional-order Chelyshkov functions. Results in Applied Mathematics 2025, 26, 100584. [Google Scholar] [CrossRef]
  50. Talib, I.; Özger, F. Orthogonal polynomials based operational matrices with applications to Bagley-Torvik fractional derivative differential equations. In Recent Research in Polynomials (Chap. 3); Özger, F., Ed.; IntechOpen., 2023. [Google Scholar]
  51. Mohammadi, F. A discrete orthogonal polynomials approach for fractional optimal control problems with time delay. Iranian Journal of Numerical Analysis and Optimization 2019, 9(2), 49–75. [Google Scholar]
  52. Wang, M. L.; Chang, R. Y.; Yang, S. Y. Generalization of generalized orthogonal polynomial operational matrices for fractional and operational calculus. International Journal of Systems Science 1987, 18(5), 931–943. [Google Scholar] [CrossRef]
  53. Grodzki, G. Numerical approximation of the Riemann-Liouville fractional integrals using the Akima spline interpolation. Journal of Applied Mathematics and Computational Mechanics 2023, 22(4), 30–43. [Google Scholar] [CrossRef]
  54. Bhrawy, A. H.; Alofi, A. S. The operational matrix of fractional integration for shifted Chebyshev polynomials. Applied Mathematics Letters 2013, 26(1), 25–31. [Google Scholar] [CrossRef]
  55. Ghasempour, A.; Ordokhani, Y.; Sabermahani, S. Mittag-Leffler wavelets and their applications for solving fractional optimal control problems. Journal of Vibration and Control 2025, 31(5-6), 753–767. [Google Scholar] [CrossRef]
  56. Sidi, A. A new variable transformation for numerical integration. In Numerical Integration IV: Proceedings of the Conference at the Mathematical Research Institute, Oberwolfach, November 8–14, 1992; Springer., 1993; pp. 359–373. [Google Scholar]
  57. Laurie, D. P. Periodizing transformations for numerical integration. Journal of Computational and Applied Mathematics 1996, 66(1-2), 337–344. [Google Scholar] [CrossRef]
  58. Elgindy, K. T. High-order numerical solution of second-order one-dimensional hyperbolic telegraph equation using a shifted Gegenbauer pseudospectral method. Numerical Methods for Partial Differential Equations 2016, 32(1), 307–349. [Google Scholar] [CrossRef]
Figure 1. Behavior of the bounding constant ϑ α , λ as α varies from 0.1 to 5, shown for six representative values of λ ( 0.4 , 0, 0.5 , 1, 2, and 3). Each curve corresponds to a distinct λ value: red ( λ = 0.4 ), dark green ( λ = 0 ), blue ( λ = 0.5 ), purple ( λ = 1 ), orange ( λ = 2 ), and black ( λ = 3 ).
Figure 2. Effect of the smoothness parameter r on the transformed integrand g ( y , r ) = y α ϕ ( α ) 1 f ( t ( 1 y ϕ ( α ) ) ) for t = 0.5 and two representative values of α (left: α = 1.5 , right: α = 19.5 ). Top row: f ( t ) = 2 t 3 + 8 t ; bottom row: f ( t ) = π ( 1 + t ) 3 / 2 . For both α values, increasing r improves the smoothness of g at y = 0 but simultaneously concentrates the integrand increasingly near y = 1 , as visible from the steeper decay toward y = 0 and the sharper peaks near y = 1 .
Figure 3. Comprehensive RLFI analysis for f ( t ) = 2 t 3 + 8 t at t = 0.5 : (a) GBFA approximation versus exact RLFI values across α Ω 1.5 , 9.5 ; (b) LARE versus α ; (c) LARE versus interpolation degree n for multiple α values; (d) LARE versus SG parameter λ for multiple α values; (e) LARE versus quadrature parameter λ q for multiple α values; (f) LARE versus quadrature points n q for multiple α values. In all panels, markers show computed data points, and lines connect points for visual clarity. Each color represents a different α value as indicated in the legends.
Figure 4. Comprehensive RLFI analysis for f(t) = 2t^3 + 8t at t = 0.5 with extended quadrature resolution: (a) LARE versus interpolation degree n for multiple α values using n_q = 1000; (b) LARE versus SG parameter λ for multiple α values using n_q = 1000; (c) LARE versus quadrature parameter λ_q for multiple α values using n_q = 1000; (d) LARE versus number of quadrature points n_q on a logarithmic scale for multiple α values, with n_q ranging from 4 to 10^4. In all panels, markers show computed data points and lines connect them for visual clarity; each color represents a different α value, as indicated in the legends.
Figure 5. Comprehensive RLFI analysis for f(t) = 2t^3 + 8t at t = 0.5 using the ϕ(α)-generalized transformation with r = 12: (a) GBFA approximation versus exact RLFI values across α ∈ Ω_{1.5,9.5}; (b) LARE versus α; (c) LARE versus interpolation degree n for multiple α values; (d) LARE versus SG parameter λ for multiple α values; (e) LARE versus quadrature parameter λ_q for multiple α values; (f) LARE versus number of quadrature points n_q for multiple α values. In all panels, markers show computed data points and lines connect them for visual clarity; each color represents a different α value, as indicated in the legends.
Figure 6. Three-dimensional sensitivity analysis of the ϕ(α)-generalized transformation with respect to the smoothness parameter r: (a) LARE vs. r vs. λ_q for fixed α = 5.5; (b) LARE vs. r vs. n_q for fixed α = 5.5; (c) LARE vs. r vs. α for fixed n_q = 16.
Figure 7. Accuracy comparison between GBFA (ϕ(α)-generalized, r = 14) and MATLAB's integral for f(t) = 2t^3 + 8t at t = 0.5: (a) RLFI values versus α; (b) LARE versus α. The GBFA method uses n = 4, n_q = 20, λ = λ_q = 0.5, and r = 14; MATLAB uses a relative tolerance of 10^−14. Both methods achieve near machine-precision accuracy over α ∈ [1.5, 29.5].
Figure 8. Computational efficiency comparison between GBFA (ϕ(α)-generalized, r = 14) and MATLAB's integral: (a) computation time in microseconds versus α; (b) speedup factor (MATLAB time / GBFA time) versus α. The GBFA method exhibits evaluation times under 10^3 μs after FSGIM precomputation, while MATLAB requires 193–1867 μs. Speedup factors range from 6.4× to 533.3× over α ∈ [1.5, 29.5].
Figure 9. GBFA approximation of I_x^{0.75} f for (a1) sin(πx), (a2) cos(πx), (a3) exp(x), and (a4) (x − 1)^3 on x ∈ Ω_1, with corresponding error distributions in (b1)–(b4). The left panels show GBFA and exact RLFI values; the right panels show the pointwise absolute error (blue), mean error (red dashed), and median error (magenta dash-dotted).
Figure 10. Error metrics for the GBFA approximation of I_x^{0.75} f on x ∈ Ω_1: (a) L^2 error; (b) R^2 score; (c) maximum vs. mean absolute error; (d) comparison of the MAE, mean, median, and L^2 errors. All errors are below 1.8 × 10^−7, and the R^2 scores exceed 0.9999999999998.
Figure 11. Convergence of the GBFA method for I_x^{0.75} sin(πx) at x = 0.5 using the classical transformation ϕ(α) = 1/α. Absolute error versus quadrature order n_q (blue circles), best-fit algebraic rate C n_q^{−7.1252} (red dashed), and theoretical asymptotic rate C n_q^{−4/3} (black dotted). The rapid pre-asymptotic decay reflects the exponential coefficient decay in the fractional power series of the transformed integrand, despite the endpoint Hölder regularity g ∈ C^{1,1/3}(Ω_1).
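Best-fit rates such as the C n_q^{−7.1252} quoted in the caption of Figure 11 come from a log-log least-squares fit of the error data. The sketch below shows the generic procedure on synthetic data with a known decay rate; the function name fit_algebraic_rate and the sample data are illustrative, not taken from the paper.

```python
import math

def fit_algebraic_rate(nq, err):
    """Least-squares slope of log(err) against log(nq).

    If err ~ C * nq**(-p), the fitted slope of the log-log line is -p,
    so the function returns the estimated algebraic rate p.
    """
    x = [math.log(v) for v in nq]
    y = [math.log(e) for e in err]
    m = len(x)
    xbar = sum(x) / m
    ybar = sum(y) / m
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return -num / den

# Synthetic check: data decaying exactly like C * nq^(-4/3) recovers p = 4/3.
nq = [8, 16, 32, 64, 128]
err = [3.0 * n ** (-4.0 / 3.0) for n in nq]
print(fit_algebraic_rate(nq, err))
```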
Figure 12. Convergence of the GBFA method for I_x^{0.75} sin(πx) at x = 0.5 using the ϕ(α)-generalized transformation with r = 15. Absolute error versus quadrature order n_q (blue circles), best-fit algebraic rate C n_q^{−14.33} (red dashed), and theoretical rate C n_q^{−15} (black dotted). The asymptotic O(n_q^{−r}) regime is attained only for n_q ≳ 30, beyond which the generalized map overtakes the classical one in convergence speed, but at the cost of degraded pre-asymptotic accuracy and increased sensitivity to λ_q.
Figure 13. GBFA solution of the second-kind FVIE (44) vs. the exact solution. Markers: SGG nodes (n = 12).
Figure 14. Convergence rate of GBFA for solving Eq. (44): MAE vs. polynomial degree n.
Table 1. Estimated MAE of SPH [18] and GBFA for I_x^{0.75} f on x ∈ Ω_1. The SPH values are inferred from [18, Fig. 5]; GBFA uses n = n_q = 20 and λ = λ_q = 0.5.
Function | SPH MAE (estimated) | GBFA MAE | GBFA L^2 Error
f_1 | 2.0 × 10^−2 | 1.78 × 10^−7 | 1.64 × 10^−7
f_2 | 5.0 × 10^−4 | 7.73 × 10^−8 | 1.78 × 10^−7
f_3 | 5.0 × 10^−3 | 1.54 × 10^−7 | 6.12 × 10^−8
f_4 | 2.0 × 10^−2 | 1.27 × 10^−8 | 3.44 × 10^−8
Table 2. Comparison of GBFA and HF [19] for I_t^α f with f(t) = t on the HF grid t_HF. The GBFA parameters are n = 2, n_q = 12, λ = 1, λ_q = 0.5.
α | GBFA ‖·‖_∞ | HF ‖·‖_∞ [19] | GBFA CPU (s) | HF CPU (s) [19] | Speedup
0.5 | 4.44 × 10^−16 | 1.11 × 10^−16 | 7.62 × 10^−4 | 2.1720 × 10^−2 | 28.5
1.0 | 3.33 × 10^−16 | 0 | 2.29 × 10^−4 | 2.0514 × 10^−2 | 89.6
1.5 | 4.44 × 10^−16 | 5.55 × 10^−17 | 6.48 × 10^−5 | 2.2866 × 10^−2 | 352.8
2.0 | 2.78 × 10^−17 | 2.78 × 10^−17 | 5.85 × 10^−5 | 2.1033 × 10^−2 | 359.5
2.5 | 2.78 × 10^−17 | 1.39 × 10^−17 | 1.96 × 10^−4 | 2.3839 × 10^−2 | 121.3
3.0 | 2.08 × 10^−17 | 6.94 × 10^−18 | 1.67 × 10^−5 | 2.1494 × 10^−2 | 1286.5
3.5 | 6.94 × 10^−18 | 8.67 × 10^−19 | 7.40 × 10^−6 | 2.1404 × 10^−2 | 2892.4
4.0 | 5.20 × 10^−18 | 0 | 7.70 × 10^−6 | 2.2049 × 10^−2 | 2863.5
4.5 | 1.73 × 10^−18 | 1.08 × 10^−19 | 7.90 × 10^−6 | 2.1662 × 10^−2 | 2742.0
5.0 | 4.34 × 10^−19 | 2.17 × 10^−19 | 1.84 × 10^−5 | 2.1855 × 10^−2 | 1187.8
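The closed form I_t^α t = t^{α+1}/Γ(α + 2) underlying Table 2 can be checked independently. The sketch below applies the classical substitution τ = t(1 − y^{1/α}), which reduces the RLFI to t^α/Γ(α + 1) · ∫_0^1 f(t(1 − y^{1/α})) dy, and evaluates the smooth integral with plain Gauss–Legendre quadrature as a stand-in for the shifted Gegenbauer rule of the GBFA method.

```python
import math
import numpy as np

def rlfi_via_transform(f, t, alpha, n_nodes=200):
    """Approximate the RLFI I_t^alpha f at time t.

    The substitution tau = t*(1 - y**(1/alpha)) absorbs the weakly
    singular kernel (t - tau)^(alpha - 1), leaving
        I_t^alpha f = t**alpha / Gamma(alpha + 1)
                      * int_0^1 f(t*(1 - y**(1/alpha))) dy,
    which is evaluated here with Gauss-Legendre nodes mapped to [0, 1]
    (a stand-in for the paper's shifted Gegenbauer quadrature).
    """
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    y = 0.5 * (x + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    vals = f(t * (1.0 - y ** (1.0 / alpha)))
    return t ** alpha / math.gamma(alpha + 1.0) * float(np.dot(w, vals))

# f(t) = t has the closed form I_t^alpha t = t^(alpha + 1) / Gamma(alpha + 2).
t = 0.5
for alpha in (0.5, 2.0):
    approx = rlfi_via_transform(lambda s: s, t, alpha)
    exact = t ** (alpha + 1) / math.gamma(alpha + 2.0)
    print(alpha, approx, exact)
```

Note that for α > 1 the mapped integrand contains y^{1/α}, which is only Hölder continuous at y = 0, so plain Gauss quadrature converges algebraically there; this is exactly the endpoint-regularity effect the ϕ(α)-generalized transformation is designed to remove.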
Table 3. Comparison of GBFA and Bernstein [20] for Solving (44).
Method | n | MAE | RMSE | Time (s)
GBFA | 12 | 6.080 × 10^−13 | 3.601 × 10^−13 | 0.003
Bernstein [20] | 12 | 1.047 × 10^−11 | 3.612 × 10^−12 | 0.186
Table 4. Convergence data for GBFA applied to solve Eq. (44).
n | MAE | ln(MAE)
6 | 2.79 × 10^−8 | −17.40
8 | 7.85 × 10^−10 | −20.97
10 | 2.17 × 10^−11 | −24.55
12 | 6.08 × 10^−13 | −28.13
14 | 1.82 × 10^−14 | −31.64
16 | 7.77 × 10^−16 | −34.79
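The roughly constant drop in ln(MAE) per step of n in Table 4 indicates exponential (spectral) convergence. A least-squares fit of ln(MAE) against n, sketched below from the tabulated values, estimates the decay rate.

```python
import math

# (n, MAE) pairs transcribed from Table 4.
data = [(6, 2.79e-8), (8, 7.85e-10), (10, 2.17e-11),
        (12, 6.08e-13), (14, 1.82e-14), (16, 7.77e-16)]

# If MAE ~ C * exp(-c * n), then ln(MAE) is affine in n, and a
# least-squares fit of ln(MAE) against n estimates the rate -c.
ns = [n for n, _ in data]
ys = [math.log(e) for _, e in data]
m = len(ns)
nbar = sum(ns) / m
ybar = sum(ys) / m
slope = sum((n - nbar) * (y - ybar) for n, y in zip(ns, ys)) / \
        sum((n - nbar) ** 2 for n in ns)
print(slope)  # roughly -1.75: MAE shrinks by a factor of about e^1.75 per unit n
```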
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.