A Bayesian Inverse Framework with Convolutional Regularization for Weakly Singular Volterra Integral Equations

Submitted: 14 June 2025; posted: 17 June 2025.

Abstract
This paper presents a new Bayesian inverse framework, enhanced by convolutional regularization, for the numerical approximation of weakly singular Volterra integral equations. The proposed methodology addresses the computational difficulties associated with singular kernels by introducing a convolution-based regularization that smooths the kernel near the singularity. A Gaussian process prior with a radial basis function kernel is combined with a graded mesh transformation that concentrates quadrature points near the singularity, significantly improving numerical accuracy without compromising efficiency. Numerical experiments confirm the robustness and efficiency of the method, demonstrating strong convergence and stability across a range of hyperparameter configurations. In comparative benchmarks, the approach outperforms collocation and Legendre wavelet methods.

1. Introduction

Volterra integral equations, named after Vito Volterra [1,2], constitute a fundamental class of equations in which the unknown function appears under an integral with a variable limit. These equations model systems with memory effects across physics, biology, economics, and control theory [3,4].
The mathematical foundations were established by the pioneering work of Volterra [3,4], Lalescu [5], Fredholm [6], and Hilbert [7], with later contributions from Wiener and Hopf [8,9], Taylor [10], and Kato [11].
Volterra equations are classified into first-kind and second-kind equations. While continuous kernels allow direct treatment, singular or weakly singular kernels pose considerable challenges due to their divergent behavior [12,13,14]. The theory was extensively developed by Tricomi [15], Muskhelishvili [16], and Erdélyi [17].
Existing solution methods include analytical approaches such as Laplace transforms [18,19], series solutions [20,21], and Picard iteration [22], alongside numerical methods including quadrature methods [23,24], collocation methods [13,25,26], and Galerkin methods [27,28]. Specialized techniques for singular kernels involve graded meshes [29,30] and product integration [31,32].
Traditional methods face significant limitations, especially regarding stability and uncertainty quantification near singularities. Conventional methods often fail near singularities, producing unreliable results due to inherent ill-conditioning and error propagation. Current techniques lack robust uncertainty management and adaptive strategies for handling singular behavior.
This study proposes an innovative hybrid approach combining Bayesian inference with Gaussian processes and adaptive convolutional regularization to solve Volterra integral equations with weak singularities. Our methodology addresses traditional limitations by exploiting Gaussian processes’ uncertainty quantification capabilities while utilizing regularization techniques to stabilize numerical behavior near singularities, enhanced by importance sampling [33] and logarithmic stabilization methods [34].
This paper is organized as follows: Section 2 provides a detailed description of the proposed method, Section 3 presents numerical examples, and Section 4 concludes.

2. Proposed Method

We consider the singular Volterra integral equation of the second kind defined by:
$$f(x) - \lambda \int_0^x \frac{k(x,t)\,F(f(t))}{(x-t)^{1-\alpha}}\,dt = g(x), \qquad 0 \le x \le 1$$
where $f(x)$ represents the unknown function, while $k(x,t)$, $g(x)$, and $F$ are given functions. The parameter $\lambda \in \mathbb{R}$ is a scalar multiplier, and $\alpha > 0$ determines the type and strength of the singularity.
The Volterra operator is formally defined as:
$$\mathcal{L}[f](x) = f(x) - \lambda \int_0^x K(x,t)\,dt$$
where $K(x,t) = \frac{k(x,t)\,F(f(t))}{(x-t)^{1-\alpha}}$ represents the singular kernel. For $0 < \alpha < 1$, the exponent $(1-\alpha) \in (0,1)$ creates a weak singularity, and the integral
$$\int_0^x \frac{1}{(x-t)^{1-\alpha}}\,dt = \frac{x^{\alpha}}{\alpha} < \infty$$
remains convergent, ensuring the well-posedness of the problem. Existence and uniqueness results for similar singular kernels are discussed in [35,36].

2.1. Bayesian Formulation

2.1.1. Likelihood Function

Assuming that the observed data $g_{\mathrm{obs}}(x) = g_{\mathrm{true}}(x)$ represent noise-free observations, we define the likelihood function [37,38] as:
$$p(g_{\mathrm{obs}} \mid f) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{\big[g_{\mathrm{obs}}(x_i) - \mathcal{L}[f](x_i)\big]^{2}}{2\sigma^{2}}\right)$$
Let $A(x_i) = g_{\mathrm{obs}}(x_i) - \mathcal{L}[f](x_i)$ denote the residual at point $x_i$. The likelihood becomes:
$$p(g_{\mathrm{obs}} \mid f) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{[A(x_i)]^{2}}{2\sigma^{2}}\right)$$
The log-likelihood is derived as follows:
$$\ln p(g_{\mathrm{obs}} \mid f) = \ln \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{[A(x_i)]^{2}}{2\sigma^{2}}\right) = \sum_{i=1}^{n} \ln\!\left[\frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{[A(x_i)]^{2}}{2\sigma^{2}}\right)\right] = -\frac{n}{2}\ln(2\pi\sigma^{2}) - \frac{1}{2\sigma^{2}} \sum_{i=1}^{n} [A(x_i)]^{2}$$
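For completeness, the log-likelihood above can be evaluated directly from the residuals. The following Python sketch is a minimal illustration (the function name and the NumPy dependency are our own choices, not part of the original formulation):

```python
import numpy as np

def gaussian_log_likelihood(residuals, sigma):
    """Log-likelihood: -n/2 * ln(2*pi*sigma^2) - sum(A_i^2) / (2*sigma^2)."""
    residuals = np.asarray(residuals, dtype=float)
    n = residuals.size
    return -0.5 * n * np.log(2.0 * np.pi * sigma**2) \
           - 0.5 * np.sum(residuals**2) / sigma**2
```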

2.1.2. Prior Distribution

We impose a Gaussian process (GP) prior [38,39,40] on the unknown function $f$:
$$f \sim \mathcal{GP}\big(\mu(x),\, k(x,x')\big)$$
where $\mu(x)$ is the mean function and $k(x,x')$ is the covariance kernel. We employ a Radial Basis Function (RBF) kernel:
$$k_{\mathrm{RBF}}(x,x') = \sigma_f^{2} \exp\!\left(-\frac{|x-x'|^{2}}{2\ell^{2}}\right)$$
where $\sigma_f^{2}$ controls the variance and $\ell$ is the characteristic length scale.
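As an illustration, prior function samples on a grid can be drawn by factorizing the RBF covariance matrix. The sketch below is a minimal NumPy version (the jitter value, default hyperparameters, and function names are our own assumptions):

```python
import numpy as np

def rbf_kernel(x, sigma_f, length_scale):
    """RBF covariance matrix: sigma_f^2 * exp(-|x_i - x_j|^2 / (2 * l^2))."""
    d = x[:, None] - x[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length_scale) ** 2)

def sample_gp_prior(x, n_samples, sigma_f=0.3, length_scale=0.2, mean=None, jitter=1e-10):
    """Draw n_samples functions from GP(mu, k_RBF) evaluated on the grid x."""
    mu = np.zeros_like(x) if mean is None else mean
    K = rbf_kernel(x, sigma_f, length_scale) + jitter * np.eye(x.size)
    L = np.linalg.cholesky(K)                      # K = L L^T
    z = np.random.default_rng(0).standard_normal((n_samples, x.size))
    return mu + z @ L.T                            # shape (n_samples, n_points)
```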

2.1.3. Posterior Distribution

By Bayes’ theorem, the posterior distribution [37,38] is given by:
$$p(f \mid g_{\mathrm{obs}}) = \frac{p(g_{\mathrm{obs}} \mid f)\, p(f)}{p(g_{\mathrm{obs}})}$$
Taking logarithms:
$$\ln p(f \mid g_{\mathrm{obs}}) = \ln p(g_{\mathrm{obs}} \mid f) + \ln p(f) - \ln p(g_{\mathrm{obs}})$$

2.2. Singularity Treatment

The primary computational challenge lies in handling the singularity [35,36] of $(x-t)^{-(1-\alpha)}$ as $t \to x$. We propose a convolutional regularization approach:
$$(x-t)^{-(1-\alpha)} \;\longrightarrow\; \big[(x-t)^{2} + \epsilon_h^{2}\big]^{-(1-\alpha)/2}$$
where $\epsilon_h$ is a small adaptive regularization parameter that prevents numerical overflow while preserving the singular behavior. The associated local regularization error is defined by:
$$E_{\mathrm{reg}}(x,t) = (x-t)^{-(1-\alpha)} - \big[(x-t)^{2} + \epsilon_h^{2}\big]^{-(1-\alpha)/2}$$
For $|x-t| \gg \epsilon_h$, this error admits the expansion:
$$E_{\mathrm{reg}}(x,t) = \frac{(1-\alpha)\,\epsilon_h^{2}}{2\,(x-t)^{3-\alpha}} - \frac{(1-\alpha)(3-\alpha)\,\epsilon_h^{4}}{8\,(x-t)^{5-\alpha}} + O\!\left(\frac{\epsilon_h^{6}}{(x-t)^{7-\alpha}}\right)$$
By binomial expansion with $u = \dfrac{\epsilon_h^{2}}{(x-t)^{2}} \ll 1$, we have:
$$\big[(x-t)^{2} + \epsilon_h^{2}\big]^{-(1-\alpha)/2} = |x-t|^{-(1-\alpha)} \left(1 + \frac{\epsilon_h^{2}}{(x-t)^{2}}\right)^{-(1-\alpha)/2}$$
Using the Taylor expansion:
$$(1+u)^{-(1-\alpha)/2} = 1 - \frac{1-\alpha}{2}\,u + \frac{(1-\alpha)(3-\alpha)}{8}\,u^{2} + O(u^{3})$$
Substituting $u$ and simplifying yields the desired expansion. We then define the relative regularization error as:
$$\frac{E_{\mathrm{reg}}(x,t)}{(x-t)^{-(1-\alpha)}} = \frac{(1-\alpha)\,\epsilon_h^{2}}{2\,(x-t)^{2}} + O\!\left(\frac{\epsilon_h^{4}}{(x-t)^{4}}\right)$$
Furthermore, if the error in the data is $\delta g$, then the propagated error in the solution satisfies the bound:
$$\|\delta f\|_{L^{2}} \le \epsilon_h^{-(1-\alpha)/2}\, \|\delta g\|_{L^{2}}$$
This relationship demonstrates that values of $\epsilon_h$ that are too small amplify data errors.
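The regularization is straightforward to implement. The sketch below is a minimal Python/NumPy illustration (the function name and the default $\epsilon_h$ are our own assumptions); it also checks the leading-order relative error derived above away from the singularity:

```python
import numpy as np

def regularized_singular_factor(x, t, alpha, eps_h=1e-6):
    """Regularized version of (x-t)^{-(1-alpha)}: [(x-t)^2 + eps_h^2]^{-(1-alpha)/2}."""
    return ((x - t) ** 2 + eps_h**2) ** (-(1.0 - alpha) / 2.0)

# Relative-error check for |x - t| >> eps_h; leading term is (1-alpha)*eps_h^2 / (2*(x-t)^2).
x, t, alpha, eps_h = 0.5, 0.4, 0.5, 1e-4
exact = (x - t) ** (-(1.0 - alpha))
approx = regularized_singular_factor(x, t, alpha, eps_h)
rel_err = (exact - approx) / exact
print(rel_err, (1 - alpha) * eps_h**2 / (2 * (x - t) ** 2))  # both ~2.5e-7
```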
To concentrate quadrature points near the singularity, we employ a graded mesh transformation:
$$t_j = x \left(\frac{j}{n}\right)^{\beta}, \qquad j = 0, 1, \dots, n$$
with β > 1 to increase point density near t = 0 . The adaptive weights [35,41] are defined as:
$$w_j(x, t_j) = 1 + C \exp\!\big(-\gamma\, |x - t_j|\big)$$
where C and γ are positive parameters that enhance accuracy near the singular point. The local density of quadrature points is characterized by the spacing between consecutive points:
$$\Delta t_j = t_{j+1} - t_j = x\left[\left(\frac{j+1}{n}\right)^{\beta} - \left(\frac{j}{n}\right)^{\beta}\right]$$
Using the mean value theorem, there exists $\xi_j \in (j, j+1)$ such that:
$$\Delta t_j = x\,\beta \left(\frac{\xi_j}{n}\right)^{\beta-1} \frac{1}{n} = \frac{x\,\beta}{n^{\beta}}\,\xi_j^{\beta-1}$$
We define the relative density of points near the singularity as:
$$\frac{\Delta t_1}{\Delta t_n} \approx \frac{1}{n^{\beta-1}}$$
Indeed, for $j = 1$, $\Delta t_1 \approx \frac{x\beta}{n^{\beta}}$, while for the last interval ($j = n-1$), $\Delta t_n \approx \frac{x\beta}{n}$; therefore $\frac{\Delta t_1}{\Delta t_n} \approx \frac{1}{n^{\beta-1}}$. This relationship shows that the larger $\beta$ becomes, the greater the concentration of points near $t = 0$. For the singular integral
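A minimal implementation of the graded mesh and the adaptive weights might look as follows (Python/NumPy; the function names and default parameter values are our own choices):

```python
import numpy as np

def graded_mesh(x, n, beta):
    """Graded quadrature nodes t_j = x * (j/n)^beta, j = 0..n (denser near t = 0 for beta > 1)."""
    j = np.arange(n + 1)
    return x * (j / n) ** beta

def adaptive_weights(x, t, C=10.0, gamma=20.0):
    """Multiplicative weights w_j = 1 + C * exp(-gamma * |x - t_j|), emphasizing nodes near t = x."""
    return 1.0 + C * np.exp(-gamma * np.abs(x - t))

t = graded_mesh(x=1.0, n=50, beta=3.0)
w = adaptive_weights(1.0, t)
print(np.diff(t)[0] / np.diff(t)[-1])   # first/last spacing; ratio scales like n^{-(beta-1)}
```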
$$I = \int_0^x \frac{k(x,t)\,F(f(t))}{(x-t)^{1-\alpha}}\,dt$$
the quadrature error with the graded mesh transformation is given by:
$$E_\beta(n) = I - \sum_{j=0}^{n-1} w_j\, \frac{k(x,t_j)\,F(f(t_j))}{(x-t_j)^{1-\alpha}}$$
This error satisfies the bound [13,30]
$$|E_\beta(n)| \le C_0\, n^{-\min\left(2,\ \beta(1-\alpha)+\alpha\right)}\, \big\|g^{(p)}\big\|_{\infty}$$
where $p = \min\big(2,\ \beta(1-\alpha)+\alpha\big)$, $C_0$ is a constant independent of $n$, and $g$ is the function defined by $g(t) = \frac{k(x,t)\,F(f(t))}{(x-t)^{1-\alpha}}$ with $0 < \alpha < 1$.
When we perform the transformation $s = (t/x)^{1/\beta}$, we obtain $t = x s^{\beta}$ and $dt = \beta x s^{\beta-1}\,ds$. The transformed integral becomes:
$$I = x^{\alpha} \int_0^1 \frac{k(x, x s^{\beta})\, F\!\big(f(x s^{\beta})\big)}{(1-s^{\beta})^{1-\alpha}}\, \beta s^{\beta-1}\, ds$$
The singularity at $s = 1$ is of order $(1-s^{\beta})^{-(1-\alpha)}$, and the error depends on the regularity of the transformed integrand. We apply a uniform composite quadrature formula in $s$ (the trapezoidal rule in this case) of order $p$ (typically $p = 2$). The error is proportional to:
$$E_n \;\propto\; \frac{1}{n^{p}}\, \max_{0 \le s \le 1} \big|h^{(p)}(s)\big|.$$
By repeated differentiation of $h(s) = \beta s^{\beta-1} (1-s^{\beta})^{-(1-\alpha)}\, k(\cdot)\, F(f(\cdot))$, each derivative $h^{(p)}(s)$ generates dominant terms of the type $C\,(1-s^{\beta})^{-(1-\alpha)-m}\, s^{\beta-1-p}$, $m \le p$. For these terms to remain bounded near $s = 1$, we require $-(1-\alpha) + \beta m \ge m$ for all $m \le p$, the most restrictive condition being $m = p$. Thus we need:
$$\beta \ge \frac{(1-\alpha) + p}{p}.$$
In the context $p = 2$, we see that the effective order of convergence is:
$$\min\big(p,\ \beta(1-\alpha)+\alpha\big) = \min\big(2,\ \beta(1-\alpha)+\alpha\big).$$
Hence the origin of the bound $|E_\beta(n)| \le C_0\, n^{-\min(2,\ \beta(1-\alpha)+\alpha)}\, \|g^{(p)}\|_{\infty}$. The optimal convergence order is achieved for $\beta_{\mathrm{opt}} = \frac{2-\alpha}{1-\alpha}$. This optimal value is obtained by maximizing the convergence exponent in the error bound above, namely:
$$\gamma(\beta) = \min\big(2,\ \beta(1-\alpha)+\alpha\big).$$
If $\beta(1-\alpha)+\alpha \le 2$, then $\gamma(\beta) = \beta(1-\alpha)+\alpha$, whereas if $\beta(1-\alpha)+\alpha \ge 2$, then $\gamma(\beta) = 2$, so the exponent saturates at the quadrature order. At the same time, the differentiability requirement restricts $\beta$ from below, $\beta \ge \frac{(1-\alpha)+p}{p}$. For $p = 2$, the tipping point at which $\beta(1-\alpha)+\alpha = 2$ and the full quadrature order is attained is indeed
$$\beta_{\mathrm{opt}} = \frac{2-\alpha}{1-\alpha}.$$
We then employ importance sampling by drawing samples from the prior $p(f)$ and weighting them by the likelihood. Since the proposal distribution is the prior itself, the importance weight for sample $i$ reduces to:
$$w_i = \frac{p(g_{\mathrm{obs}} \mid f_i)\, p(f_i)}{p(f_i)} = p(g_{\mathrm{obs}} \mid f_i)$$
To prevent numerical overflow in the exponential function, we apply log-space stabilization:
$$\ln w_i = \ln p(g_{\mathrm{obs}} \mid f_i) - \max_j \ln p(g_{\mathrm{obs}} \mid f_j)$$
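Numerically, this is the standard log-sum-exp trick. A minimal sketch follows (the function name is our own; it assumes the per-sample log-likelihoods have already been computed and returns the normalized weights used in Section 2.3):

```python
import numpy as np

def stabilized_importance_weights(log_likelihoods):
    """Normalized importance weights from log-likelihoods, computed in log space."""
    log_w = np.asarray(log_likelihoods, dtype=float)
    log_w = log_w - np.max(log_w)          # shift so the largest weight is exp(0) = 1
    w = np.exp(log_w)
    return w / np.sum(w)                   # normalized weights w_tilde
```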

2.3. Posterior Approximation and Reconstruction

After computing the importance weights $\{w_i\}_{i=1}^{N}$ for all $N$ prior samples $\{f_i\}_{i=1}^{N}$, we normalize them:
$$\tilde{w}_i = \frac{w_i}{\sum_{j=1}^{N} w_j}$$
The posterior distribution is then approximated by the weighted empirical distribution:
$$p(f \mid g_{\mathrm{obs}}) \approx \sum_{i=1}^{N} \tilde{w}_i\, \delta_{f_i}(f)$$
where $\delta_{f_i}$ denotes the Dirac delta function centered at $f_i$.
The posterior mean function is computed as:
$$\mathbb{E}[f(x) \mid g_{\mathrm{obs}}] = \sum_{i=1}^{N} \tilde{w}_i\, f_i(x)$$
The posterior variance is given by:
$$\mathrm{Var}[f(x) \mid g_{\mathrm{obs}}] = \sum_{i=1}^{N} \tilde{w}_i\, [f_i(x)]^{2} - \left(\sum_{i=1}^{N} \tilde{w}_i\, f_i(x)\right)^{2}$$
For uncertainty quantification, we construct pointwise credible intervals. At each point $x$, we sort the weighted samples $\{(f_i(x), \tilde{w}_i)\}_{i=1}^{N}$ and compute the $100(1-\alpha_0)\%$ credible interval as:
$$\mathrm{CI}_{1-\alpha_0}(x) = \big[\,Q_{\alpha_0/2}(x),\; Q_{1-\alpha_0/2}(x)\,\big]$$
where $Q_p(x)$ is the $p$-th weighted quantile of the posterior samples at point $x$ and $\alpha_0$ is the significance level.
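These weighted posterior summaries are easy to assemble once the samples and normalized weights are available. A minimal sketch follows (the helper names are our own, and the weighted-quantile routine is one common convention, not the only possible one):

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """q-th quantile of a weighted sample (weights assumed normalized to sum to 1)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return np.interp(q, cdf, v)

def posterior_summaries(samples, w_tilde, alpha0=0.05):
    """Pointwise mean, variance and (1-alpha0) credible band from weighted GP samples.

    samples : array of shape (N, n_points); w_tilde : normalized weights of shape (N,).
    """
    mean = w_tilde @ samples
    var = w_tilde @ samples**2 - mean**2
    lo = np.array([weighted_quantile(samples[:, j], w_tilde, alpha0 / 2)
                   for j in range(samples.shape[1])])
    hi = np.array([weighted_quantile(samples[:, j], w_tilde, 1 - alpha0 / 2)
                   for j in range(samples.shape[1])])
    return mean, var, (lo, hi)
```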

2.3.1. Convergence Assessment

The quality of the approximation can be assessed through several complementary metrics that provide insight into different aspects of the reconstruction performance. The Effective Sample Size (ESS) [37,38], defined as $\mathrm{ESS} = \left(\sum_{i=1}^{N} \tilde{w}_i^{2}\right)^{-1}$, serves as a crucial indicator of sampling efficiency. When the ESS is low, the importance weights are poorly distributed, with only a few samples carrying most of the weight, indicating poor sampling efficiency and potential numerical instability.
Additionally, residual analysis provides a direct measure of how well the reconstructed solution satisfies the original integral equation. The residual function $R(x) = g_{\mathrm{obs}}(x) - \mathcal{L}\big[\mathbb{E}[f \mid g_{\mathrm{obs}}]\big](x)$ quantifies the discrepancy between the observed data and the Volterra operator applied to the posterior mean. Small residuals across the domain indicate that the reconstructed solution provides a good fit to the integral equation, while large or systematic residuals may suggest inadequate prior specification or insufficient sampling.
Furthermore, the Monte Carlo Standard Error (MCSE) quantifies the uncertainty in the posterior estimates due to the finite-sample approximation. Given by $\mathrm{MCSE}(x) = \sqrt{\mathrm{Var}[f(x) \mid g_{\mathrm{obs}}]\,/\,\mathrm{ESS}}$, this metric measures the precision of the posterior mean estimates. A decreasing MCSE with increasing sample size indicates convergence of the Monte Carlo approximation, while persistently large MCSE values suggest the need for additional samples or improved sampling strategies.
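Both diagnostics follow directly from the normalized weights and the pointwise posterior variance. A minimal sketch (the function names are our own):

```python
import numpy as np

def effective_sample_size(w_tilde):
    """ESS = 1 / sum(w_tilde_i^2); equals N for uniform weights, approaches 1 under degeneracy."""
    return 1.0 / np.sum(np.asarray(w_tilde) ** 2)

def monte_carlo_standard_error(posterior_var, w_tilde):
    """Pointwise MCSE(x) = sqrt(Var[f(x) | g_obs] / ESS)."""
    return np.sqrt(np.asarray(posterior_var) / effective_sample_size(w_tilde))
```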

2.3.2. Proposed Algorithm

The complete Bayesian reconstruction algorithm [38,39] proceeds as follows:
  • Initialization: Set up the GP prior with chosen hyperparameters $(\sigma_f, \ell)$ and the discretization grid $\{x_i\}_{i=1}^{n}$.
  • Prior Sampling: Generate $N$ function samples $\{f_i\}_{i=1}^{N}$ from the GP prior.
  • Likelihood Evaluation: For each sample $f_i$ (a sketch of this step follows the list):
    • Compute $\mathcal{L}[f_i](x_j)$ using adaptive quadrature with the graded mesh transformation
    • Apply the singularity regularization $(x-t)^{-(1-\alpha)} \to \big[(x-t)^{2} + \epsilon_h^{2}\big]^{-(1-\alpha)/2}$
    • Evaluate $\ln p(g_{\mathrm{obs}} \mid f_i)$ using the likelihood formula
  • Weight Computation: Calculate the importance weights $w_i$ with log-space stabilization:
    $\ln w_i = \ln p(g_{\mathrm{obs}} \mid f_i) - \max_j \ln p(g_{\mathrm{obs}} \mid f_j)$
  • Posterior Approximation: Normalize the weights, $\tilde{w}_i = w_i / \sum_{j=1}^{N} w_j$, and compute the posterior statistics:
    • Posterior mean: $\mathbb{E}[f(x) \mid g_{\mathrm{obs}}] = \sum_{i=1}^{N} \tilde{w}_i\, f_i(x)$
    • Posterior variance: $\mathrm{Var}[f(x) \mid g_{\mathrm{obs}}] = \sum_{i=1}^{N} \tilde{w}_i\, [f_i(x)]^{2} - \big(\sum_{i=1}^{N} \tilde{w}_i\, f_i(x)\big)^{2}$
  • Reconstruction: Output the posterior mean as the solution estimate, together with the pointwise credible intervals $\mathrm{CI}_{1-\alpha_0}(x) = [Q_{\alpha_0/2}(x),\, Q_{1-\alpha_0/2}(x)]$ for uncertainty quantification.
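The central numerical step is the evaluation of the Volterra operator $\mathcal{L}[f_i](x_j)$ inside the likelihood. The sketch below combines the graded mesh, the adaptive weights, and the convolutional regularization into a single product-quadrature routine. It is a minimal illustration under our own naming and default choices (in particular the trapezoidal weighting, the renormalization of the adaptive weights, and the default $\epsilon_h$), not the authors' reference implementation:

```python
import numpy as np

def volterra_operator(f, x, kernel, F, lam, alpha,
                      n_quad=50, beta=None, eps_h=1e-6, C=10.0, gamma=20.0):
    """Approximate L[f](x) = f(x) - lam * int_0^x k(x,t) F(f(t)) / (x-t)^{1-alpha} dt.

    f, kernel, F are assumed to be vectorized callables (e.g. f an interpolant of a GP sample).
    """
    if x == 0.0:
        return f(0.0)
    if beta is None:
        beta = (2.0 - alpha) / (1.0 - alpha)            # graded-mesh exponent beta_opt
    t = x * (np.arange(n_quad + 1) / n_quad) ** beta    # graded nodes, dense near t = 0
    sing = ((x - t) ** 2 + eps_h**2) ** (-(1.0 - alpha) / 2.0)   # regularized kernel factor
    w_adapt = 1.0 + C * np.exp(-gamma * np.abs(x - t))           # adaptive emphasis near t = x
    g = kernel(x, t) * F(f(t)) * sing * w_adapt
    dt = np.diff(t)
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * dt)       # trapezoidal rule on the graded mesh
    norm = np.sum(0.5 * (w_adapt[1:] + w_adapt[:-1]) * dt) / x   # renormalize adaptive weights
    return f(x) - lam * integral / norm
```

For instance, with `kernel=lambda x, t: np.ones_like(t)`, `F=lambda v: v`, `lam=-1.0`, and `alpha=0.5`, this corresponds to a linear Abel-type operator with a square-root singularity.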

2.4. Convergence and Stability

Let $C[0,1]$ be the space of continuous functions on $[0,1]$ equipped with the uniform norm $\|f\|_{\infty} = \max_{x \in [0,1]} |f(x)|$. We define the weighted Sobolev space by:
$$H_{\alpha}^{s}[0,1] = \big\{ f \in L^{2}[0,1] : \|f\|_{H_{\alpha}^{s}} < \infty \big\}$$
where
$$\|f\|_{H_{\alpha}^{s}}^{2} = \int_0^1 |f(x)|^{2}\, x^{2\alpha}\, dx + \int_0^1 |f^{(s)}(x)|^{2}\, x^{2\alpha}\, dx$$
We assume kernel regularity conditions, namely that the kernel $k(x,t)$ satisfies $k \in C^{2}([0,1] \times [0,1])$, $|k(x,t)| \le M_k$ for some constant $M_k > 0$, and $|k_x(x,t)| + |k_t(x,t)| \le M_k$.
For $\epsilon_h > 0$ and the regularization $(x-t)^{-(1-\alpha)} \to \big[(x-t)^{2} + \epsilon_h^{2}\big]^{-(1-\alpha)/2}$, we have:
$$\Big| (x-t)^{-(1-\alpha)} - \big[(x-t)^{2} + \epsilon_h^{2}\big]^{-(1-\alpha)/2} \Big| \le C_{\alpha}\, \epsilon_h^{1-\alpha}\, |x-t|^{-(2-\alpha)}$$
for $|x-t| \ge \epsilon_h$, where $C_{\alpha}$ depends only on $\alpha$.
Using a Taylor expansion and the mean value theorem, for $u = (x-t)^{2}$ and $v = (x-t)^{2} + \epsilon_h^{2}$, we have:
$$u^{-(1-\alpha)/2} - v^{-(1-\alpha)/2} = \frac{1-\alpha}{2}\, \xi^{-(3-\alpha)/2}\, \epsilon_h^{2}$$
for some $\xi \in [u, v]$. The result follows by estimating $\xi$.

2.4.1. Monte Carlo Estimation and Stability Analysis

The Monte Carlo estimator of the posterior mean satisfies:
$$\mathbb{E}\left[\left( \sum_{i=1}^{N} \tilde{w}_i\, f_i(x) - \mathbb{E}[f(x) \mid g_{\mathrm{obs}}] \right)^{2}\right] \le \frac{C}{N}$$
where $C$ depends on $\sigma_f$, $\ell$, and the problem parameters, but is independent of $N$. Using the variance decomposition:
$$\mathrm{Var}\left[\sum_{i=1}^{N} \tilde{w}_i\, f_i(x)\right] = \mathrm{Var}\left[\frac{\sum_{i=1}^{N} w_i\, f_i(x)}{\sum_{i=1}^{N} w_i}\right]$$
By the Cauchy–Schwarz inequality and Gaussian process properties:
$$\mathbb{E}[w_i^{2}] \le \mathbb{E}[w_i]^{2}\, \exp(C\, \sigma_f^{2})$$
The strong law of large numbers applies since $\mathbb{E}[w_i\, f_i(x)] < \infty$ under our assumptions.

2.4.2. Stability with Respect to Data Perturbations

Let $g_1, g_2$ be two data functions with $\|g_1 - g_2\|_{\infty} \le \delta$. Then the corresponding posterior means $\mu_1, \mu_2$ satisfy:
$$\|\mu_1 - \mu_2\|_{\infty} \le C\, \delta$$
where $C$ depends on the hyperparameters but not on $\delta$. The log-likelihoods satisfy:
$$\big|\log p(g_1 \mid f) - \log p(g_2 \mid f)\big| \le \frac{\|g_1 - g_2\|_{\infty}}{2\sigma^{2}} \Big( \|g_1\|_{\infty} + \|g_2\|_{\infty} + 2\,\|\mathcal{L}[f]\|_{\infty} \Big)$$
By continuity of the operator $\mathcal{L}$ on $C[0,1]$ and compactness of the Gaussian process support, the result follows.

2.4.3. Minimax Risk Analysis

For the function class $\Theta = \{ f \in H_{\alpha}^{s}[0,1] : \|f\|_{H_{\alpha}^{s}} \le M \}$, the minimax risk satisfies:
$$\inf_{\hat{f}} \sup_{f \in \Theta} \mathbb{E}\big[\|\hat{f} - f\|^{2}\big] \ge c\, n^{-\frac{2s}{2s+1+2\alpha}}$$
where $c > 0$ is a constant. Under the optimal choice of the length-scale hyperparameter $\ell \asymp n^{-\frac{1}{2s+1+2\alpha}}$ and with $N = O(n^{2})$ samples, our Bayesian estimator $\hat{f}_n$ satisfies:
$$\sup_{f \in \Theta} \mathbb{E}\big[\|\hat{f}_n - f\|^{2}\big] \le C\, n^{-\frac{2s}{2s+1+2\alpha}}$$

3. Numerical Examples

To validate the effectiveness of the proposed Bayesian approach and demonstrate its capability in handling weakly singular Volterra integral equations, we present a series of numerical experiments. These simulations are designed to assess the accuracy, convergence properties, and robustness of our method under various conditions. For the quadrature points, we set $\beta$ to its optimal value $\beta_{\mathrm{opt}} = \frac{2-\alpha}{1-\alpha}$ derived in Section 2.2.

3.1. Example 1: Abel Integral Equation

We begin our numerical investigation with a classical Abel integral equation, which serves as a fundamental benchmark for methods dealing with weakly singular kernels. This example allows us to evaluate the performance of our Bayesian inverse approach against well-established analytical and numerical solutions.
We consider the following Abel integral equation [42] of the second kind:
$$f(x) = 1 - \int_0^x \frac{f(x)}{(x-t)^{0.5}}\,dt, \qquad 0 \le x \le 1$$
This equation represents a specific instance of our general formulation with the parameter values $\lambda = -1$, $\alpha = 0.5$, and the right-hand side function $g(x) = 1$. The kernel function is $k(x,t) = 1$ and the nonlinearity is $F(f(t)) = f(t)$, making this a linear Abel equation with a characteristic weak singularity of order $1/2$ at $t = x$.
The choice of $\alpha = 0.5$ corresponds to the classical Abel kernel, which exhibits the typical square-root singularity that appears in numerous physical applications, including particle physics, plasma dynamics, and image reconstruction problems. This test case serves as an ideal benchmark for validating our numerical approach due to its analytical tractability. The exact solution of this equation is $f(x) = \frac{1}{1+2\sqrt{x}}$.
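For reference, the following sketch encodes this test problem (the variable names are our own choices). It also checks the stated closed form numerically, using the fact that $\int_0^x (x-t)^{-1/2}\,dt = 2\sqrt{x}$, so that the solution must satisfy $f(x)\,(1 + 2\sqrt{x}) = 1$:

```python
import numpy as np

# Example 1: f(x) = 1 - integral_0^x f(x) / (x - t)^{0.5} dt,  0 <= x <= 1
alpha, lam = 0.5, -1.0
g = lambda x: np.ones_like(x)
f_exact = lambda x: 1.0 / (1.0 + 2.0 * np.sqrt(x))

# Since int_0^x (x - t)^{-1/2} dt = 2*sqrt(x), the equation reduces to f(x)*(1 + 2*sqrt(x)) = 1.
x = np.linspace(0.0, 1.0, 11)
residual = f_exact(x) * (1.0 + 2.0 * np.sqrt(x)) - g(x)
print(np.max(np.abs(residual)))   # ~1e-16: the closed form satisfies the equation
```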
Figure 1 and Figure 2 present the results of the numerical experiments for Example 1. Figure 1 shows close agreement between the approximate and exact solutions, validating the effectiveness of the method. This is confirmed quantitatively in Table 1, where the proposed approach attains an average absolute error on the order of $10^{-6}$.
To evaluate the influence of the hyperparameters $N_{\mathrm{points}}$, $C$, $\gamma$, and $\sigma_f$, we conducted a parametric study across diverse computational scenarios. Figure 2 shows the evolution of both the mean squared error and the residual norm as functions of $N_{\mathrm{points}}$. The results exhibit a consistent pattern: as the number of quadrature points increases, both solution accuracy and convergence behavior improve, highlighting the scalability of the approach.
The hyperparameter sensitivity analysis is summarized in Table 2, Table 3, and Table 4. Table 2 shows that increasing $\sigma_f$ leads to a modest reduction of the MSE, slightly improving convergence, and Table 3 indicates a similar trend for the $\gamma$ parameter. Table 4 reveals the expected trade-off: larger values of $C$ may offer a small accuracy benefit, but they increase the computational overhead, a relevant consideration for practical implementation.
Most importantly, the comparative analysis in Table 1 shows improved accuracy of the proposed method across all test points. The proposed technique consistently outperforms both the collocation method and the Legendre wavelet approach, achieving lower absolute errors in all test cases, which supports both the theoretical foundations of the approach and its practical value for high-precision computations.

3.2. Example 2: Nonlinear Volterra Equation with Polynomial Right-Hand Side

To further evaluate the robustness and versatility of our Bayesian approach, we consider a more challenging nonlinear Volterra integral equation that incorporates both the characteristic weak singularity and a complex polynomial forcing term. This test case allows us to assess the method’s performance in handling nonlinear kernels while maintaining computational efficiency.
We examine the following nonlinear Volterra integral equation [42]:
$$f(x) - \int_0^x \frac{x\,t\,f^{2}(t)}{(x-t)^{0.5}}\,dt = x^{3} - \frac{4096}{6435}\,x^{8.5}$$
The exact solution to this equation is known to be $f(x) = x^{3}$, which provides a clear benchmark for accuracy assessment. We implemented our Bayesian algorithm to solve this problem and conducted extensive numerical experiments to evaluate its performance across different parameter configurations.
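Because the forcing term follows from a Beta-function identity, the consistency of this test problem with the stated exact solution can be checked directly. A small sketch (assuming SciPy is available):

```python
import numpy as np
from scipy.special import beta

# With f(t) = t^3, the integral int_0^x x*t*f(t)^2 * (x - t)^{-1/2} dt equals
#   x * x^{7.5} * B(8, 1/2) = (4096/6435) * x^{8.5},
# so g(x) = x^3 - (4096/6435) * x^{8.5} is the matching right-hand side.
coeff = beta(8, 0.5)          # = Gamma(8) * Gamma(1/2) / Gamma(8.5)
print(coeff, 4096 / 6435)     # both ~0.63652
```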
The results for Example 2 are shown in Figure 3 and Figure 4, which present the comparison between the exact and approximate solutions (Figure 3) and the performance evolution as a function of the number of computational points (Figure 4). Visual inspection of Figure 3 reveals a virtually indistinguishable correspondence between the exact solution and the approximation.
This visual agreement is confirmed by the quantitative metrics in Table 5, which shows an average absolute error on the order of $10^{-7}$, a full order of magnitude improvement over Example 1.
Consistent with the findings from Example 1, Figure 4 confirms that convergence improves steadily as $N_{\mathrm{points}}$ increases, supporting the method's scalability across different problem types.
The hyperparameter sensitivity analysis for Example 2 highlights the robustness of the method. Table 6, Table 7, and Table 8 show that the MSE and the maximum absolute error remain virtually unaffected by variations in the hyperparameters $\sigma_f$, $C$, and $\gamma$; only marginal variations in computational time are observed across configurations, owing to intermediate calculations that affect the overall overhead.
Finally, the comparative analysis in Table 5 shows that the proposed approach achieves the lowest absolute errors among the compared methods across all evaluation points, further supporting its suitability for demanding high-precision applications.

3.3. Limitations

Despite the exceptional performance demonstrated by our approach, it is important to acknowledge certain limitations that define its current scope of application. First, the proposed method is specifically designed to handle Volterra integral equations with weak singularities ( 0 < α < 1 ) and has not been validated for strong singularity cases ( α 1 ), thereby restricting its applicability to this particular class of problems. Second, scalability considerations arise as increasing the number of samples N required for adequate convergence leads to substantial growth in computational cost, which becomes particularly challenging for high-dimensional problems. Finally, the importance sampling algorithm may experience weight degeneracy in ill-conditioned problems, where only a few samples carry the majority of the statistical weight, potentially compromising numerical stability and reliability of posterior estimates.

4. Conclusions

In this study, we have proposed a novel numerical approach for solving Volterra integral equations with weak singularities by approximating the singular kernel through convolutional regularization and employing RBF kernels within a Gaussian process prior framework. A graded mesh transformation was implemented to concentrate the quadrature points around the singular regions of the equation, thereby enhancing computational efficiency and accuracy.
The comprehensive numerical experiments conducted across multiple test cases demonstrate the robustness and effectiveness of the proposed methodology. The results consistently show strong convergence properties, achieving accuracy with absolute errors ranging from $10^{-6}$ to $10^{-7}$. These performance metrics validate the theoretical foundations of our approach and confirm its practical applicability for solving weakly singular Volterra integral equations.
Furthermore, our parametric studies revealed that the method exhibits stable behavior across various hyperparameter configurations, with the number of quadrature points $N_{\mathrm{points}}$ being the most influential factor for convergence enhancement. The computational efficiency of the algorithm, combined with its high accuracy, makes it a valuable tool for practitioners dealing with singular integral equations in various scientific and engineering applications.
Future work will focus on extending the method to strongly singular Volterra equations, as well as exploring adaptive mesh refinement and automated hyperparameter tuning using machine learning.

References

  1. Volterra, V. Leçons sur la théorie mathématique de la lutte pour la vie; Gauthier-Villars, 1931.
  2. Brunner, H. Volterra Integral Equations: An Introduction to Theory and Applications; Cambridge University Press, 2017.
  3. Volterra, V. Sulla teoria delle equazioni differenziali lineari. Memorie della Società Italiana delle Scienze 1896, 6, 3–68.
  4. Volterra, V. Leçons sur les équations intégrales et les équations intégro-différentielles; Gauthier-Villars, 1913.
  5. Lalescu, T. Sur les équations de Volterra. Annali di Matematica Pura ed Applicata 1911, 20, 1–133.
  6. Fredholm, E.I. Sur une classe d’équations fonctionnelles. Acta Mathematica 1903, 27, 365–390.
  7. Hilbert, D. Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen 1904, pp. 49–91.
  8. Wiener, N. Tauberian theorems; Annals of Mathematics, 1931.
  9. Hopf, E. Ergodentheorie; Springer, 1939.
  10. Taylor, A.E. Introduction to functional analysis; Wiley, 1958.
  11. Kato, T. Perturbation theory for linear operators; Springer, 1966.
  12. Abel, N.H. Solution de quelques problèmes à l’aide d’intégrales définies. Magazin for Naturvidenskaberne 1823, 1, 55–68.
  13. Brunner, H. Collocation Methods for Volterra Integral and Related Functional Differential Equations; Cambridge University Press, 2004.
  14. Gorenflo, R.; Vessella, S. Abel integral equations: analysis and applications; Springer, 1991.
  15. Tricomi, F.G. Integral equations; Interscience Publishers, 1957.
  16. Muskhelishvili, N.I. Singular integral equations; Noordhoff, 1953.
  17. Erdélyi, A. Tables of integral transforms; McGraw-Hill, 1954.
  18. Arfken, G.B.; Weber, H.J.; Harris, F.E. Mathematical Methods for Physicists; Academic Press, 2013.
  19. Debnath, L.; Bhatta, D. Integral transforms and their applications; CRC Press, 2014.
  20. Adomian, G. A review of the decomposition method in applied mathematics. Journal of Mathematical Analysis and Applications 1988, 135, 501–544.
  21. Adomian, G. Solving frontier problems of physics: the decomposition method; Kluwer Academic Publishers, 1994.
  22. Picard, É. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. Journal de Mathématiques Pures et Appliquées 1890, 6, 145–210.
  23. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes: The Art of Scientific Computing; Cambridge University Press, 1986.
  24. Stoer, J.; Bulirsch, R. Introduction to numerical analysis; Springer, 2002.
  25. de Boor, C. A practical guide to splines; Springer, 1978.
  26. Trefethen, L.N. Approximation theory and approximation practice; SIAM, 2013.
  27. Bownds, J.M.; Wood, B. On Numerically Solving Non-linear Volterra Equations with Fewer Computations. SIAM Journal on Numerical Analysis 1976, 13, 705–719.
  28. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral methods in fluid dynamics; Springer, 2006.
  29. Brunner, H.; van der Houwen, P.J. The Numerical Solution of Volterra Equations; North-Holland, 1986.
  30. Lubich, C. Runge–Kutta theory for Volterra and Abel integral equations of the second kind. Numerische Mathematik 1983, 41, 87–102.
  31. Young, A. The numerical solution of integral equations. Proceedings of the Royal Society of London 1954, 224, 561–573.
  32. Atkinson, K.E. A survey of numerical methods for the solution of Fredholm integral equations of the second kind; SIAM, 1976.
  33. Robert, C.P.; Casella, G. Monte Carlo statistical methods; Springer, 2004.
  34. Higham, N.J. Accuracy and stability of numerical algorithms; SIAM, 2002.
  35. Dehbozorgi, R.; Nedaiasl, K. Numerical Solution of Nonlinear Abel Integral Equations. arXiv 2021, arXiv:2001.06240.
  36. Ramazanova, M.I.; Gulmanova, N.K.; Omarova, M.T. On solving a singular Volterra integral equation. Filomat 2025, 39, 3647–3656.
  37. Dashti, M.; Stuart, A.M. The Bayesian Approach To Inverse Problems. arXiv 2013, arXiv:1302.6989.
  38. Randrianarisoa, T.; Szabo, B. Variational Gaussian Processes For Linear Inverse Problems. arXiv 2023, arXiv:2311.00663.
  39. Bai, T.; Teckentrup, A.L.; Zygalakis, K.C. Gaussian processes for Bayesian inverse problems associated with PDEs. arXiv 2023, arXiv:2307.08343.
  40. Titsias, M. Variational Learning of Inducing Variables in Sparse Gaussian Processes. Journal of Machine Learning Research 2009.
  41. Brunner, H. Volterra Integral Equations: An Introduction to Theory and Applications; Cambridge University Press, 2004.
  42. Khani, A.; Belalzadeh, N. Numerical solution of Volterra integral equations with weakly singular kernel using Legendre wavelet method. Mathematics and Computational Sciences 2025, 6, 160–169.
  43. Cardone, A.; Conte, D.; D’Ambrosio, R.; Beatrice, P. Collocation Methods for Volterra Integral and Integro-Differential Equations: A Review. Axioms 2018, 7, 45.
Figure 1. Comparison of exact and approximate solutions with $N_{\mathrm{points}} = 50$ (Example 1).
Figure 2. Evolution of the MSE and the residual norm as a function of the total number of points considered (Example 1).
Figure 3. Comparison of exact and approximate solutions with $N_{\mathrm{points}} = 50$ (Example 2).
Figure 4. Evolution of the MSE and the residual norm as a function of the total number of points considered (Example 2).
Table 1. Comparison between our proposed method (Approx), the collocation method, the Legendre wavelet method, and the exact solution for Example 1.
| x | Exact | Approx | Abs Err | Collocation [43] | Abs Err | Wavelet [42] | Abs Err |
|-----|----------|----------|--------------|----------|--------------|----------|--------------|
| 0.1 | 0.612930 | 0.612933 | 3.03 × 10^-6 | 0.608684 | 4.25 × 10^-3 | 0.614358 | 1.43 × 10^-3 |
| 0.2 | 0.528031 | 0.528032 | 6.86 × 10^-7 | 0.527864 | 1.67 × 10^-4 | 0.527519 | 5.12 × 10^-4 |
| 0.3 | 0.477326 | 0.477325 | 3.74 × 10^-7 | 0.477226 | 1.00 × 10^-4 | 0.477342 | 1.60 × 10^-5 |
| 0.4 | 0.441583 | 0.441582 | 9.18 × 10^-7 | 0.441518 | 6.50 × 10^-5 | 0.441461 | 1.22 × 10^-4 |
| 0.5 | 0.414257 | 0.414255 | 1.47 × 10^-6 | 0.414214 | 4.30 × 10^-5 | 0.414252 | 5.00 × 10^-6 |
| 0.6 | 0.392310 | 0.392309 | 1.22 × 10^-6 | 0.392281 | 2.90 × 10^-5 | 0.392246 | 6.40 × 10^-5 |
| 0.7 | 0.374085 | 0.374083 | 1.61 × 10^-6 | 0.374067 | 1.80 × 10^-5 | 0.374108 | 2.30 × 10^-5 |
| 0.8 | 0.358581 | 0.358583 | 2.56 × 10^-6 | 0.358570 | 1.10 × 10^-5 | 0.358509 | 7.20 × 10^-5 |
| 0.9 | 0.345146 | 0.345145 | 1.06 × 10^-6 | 0.345141 | 5.00 × 10^-6 | 0.345250 | 1.04 × 10^-4 |
| 1.0 | 0.333333 | 0.333333 | 5.00 × 10^-7 | 0.333322 | 2.21 × 10^-6 | 0.333251 | 8.20 × 10^-5 |
Table 2. Performance and convergence metrics of the approximated solution for different values of σ_f (Example 1).
| σ_f | MSE | Max Abs Err | Runtime (s) |
|------|---------------|--------------|-------|
| 0.01 | 4.83 × 10^-12 | 5.04 × 10^-6 | 1.425 |
| 0.05 | 4.09 × 10^-12 | 4.84 × 10^-6 | 1.205 |
| 0.10 | 3.99 × 10^-12 | 6.22 × 10^-6 | 1.218 |
| 0.20 | 4.06 × 10^-12 | 5.43 × 10^-6 | 1.186 |
| 0.30 | 3.77 × 10^-12 | 5.29 × 10^-6 | 0.983 |
| 0.50 | 3.49 × 10^-12 | 4.55 × 10^-6 | 1.299 |
Table 3. Performance and convergence metrics of the approximated solution for different values of γ (Example 1).
| γ | MSE | Max Abs Err | Runtime (s) |
|------|---------------|--------------|-------|
| 1.0  | 3.49 × 10^-12 | 5.29 × 10^-6 | 1.660 |
| 5.0  | 3.65 × 10^-12 | 4.57 × 10^-6 | 1.726 |
| 10.0 | 4.49 × 10^-12 | 5.47 × 10^-6 | 2.222 |
| 15.0 | 3.95 × 10^-12 | 4.19 × 10^-6 | 2.472 |
| 20.0 | 3.92 × 10^-12 | 4.46 × 10^-6 | 2.165 |
| 30.0 | 3.40 × 10^-12 | 4.06 × 10^-6 | 1.730 |
Table 4. Performance and convergence metrics of the approximated solution for different values of C (Example 1).
| C | MSE | Max Abs Err | Runtime (s) |
|------|---------------|--------------|-------|
| 1.0  | 4.11 × 10^-12 | 4.49 × 10^-6 | 1.084 |
| 5.0  | 4.83 × 10^-12 | 6.42 × 10^-6 | 1.006 |
| 10.0 | 3.16 × 10^-12 | 4.87 × 10^-6 | 1.913 |
| 15.0 | 4.90 × 10^-12 | 5.57 × 10^-6 | 1.296 |
| 20.0 | 3.82 × 10^-12 | 4.03 × 10^-6 | 1.735 |
| 30.0 | 4.50 × 10^-12 | 6.20 × 10^-6 | 2.030 |
Table 5. Comparison between our proposed method (Approx), the collocation method, the Legendre wavelet method, and the exact solution for Example 2.
| x | Exact | Approx | Abs Err | Collocation [43] | Abs Err | Wavelet [42] | Abs Err |
|-----|----------|----------|--------------|----------|--------------|----------|--------------|
| 0.1 | 0.001017 | 0.001072 | 5.50 × 10^-5 | 0.002087 | 1.07 × 10^-3 | 0.000956 | 6.10 × 10^-5 |
| 0.2 | 0.008061 | 0.008061 | 3.64 × 10^-7 | 0.012741 | 4.68 × 10^-3 | 0.007956 | 1.05 × 10^-4 |
| 0.3 | 0.027123 | 0.027121 | 1.64 × 10^-6 | 0.037091 | 9.97 × 10^-3 | 0.026956 | 1.67 × 10^-4 |
| 0.4 | 0.064189 | 0.064188 | 9.70 × 10^-7 | 0.079570 | 1.54 × 10^-2 | 0.063956 | 2.33 × 10^-4 |
| 0.5 | 0.125247 | 0.125247 | 4.13 × 10^-8 | 0.144281 | 1.90 × 10^-2 | 0.124956 | 2.91 × 10^-4 |
| 0.6 | 0.216285 | 0.216285 | 5.97 × 10^-7 | 0.235123 | 1.88 × 10^-2 | 0.215956 | 3.29 × 10^-4 |
| 0.7 | 0.343291 | 0.343291 | 5.04 × 10^-7 | 0.355847 | 1.26 × 10^-2 | 0.342956 | 3.35 × 10^-4 |
| 0.8 | 0.512254 | 0.512254 | 3.87 × 10^-7 | 0.510098 | 2.16 × 10^-3 | 0.511956 | 2.98 × 10^-4 |
| 0.9 | 0.729161 | 0.729163 | 2.43 × 10^-6 | 0.701431 | 2.77 × 10^-2 | 0.728956 | 2.05 × 10^-4 |
| 1.0 | 1.000000 | 0.999999 | 7.41 × 10^-7 | 0.933333 | 6.67 × 10^-2 | 0.999956 | 4.40 × 10^-5 |
Table 6. Performance and convergence metrics of the approximated solution for different values of σ_f (Example 2).
| σ_f | MSE | Max Abs Err | Runtime (s) |
|------|--------------|--------------|-------|
| 0.01 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.839 |
| 0.05 | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.768 |
| 0.10 | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.970 |
| 0.20 | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.860 |
| 0.30 | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.922 |
| 0.50 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.181 |
Table 7. Performance and convergence metrics of the approximated solution for different values of C (Example 2).
| C | MSE | Max Abs Err | Runtime (s) |
|------|--------------|--------------|-------|
| 1.0  | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.396 |
| 5.0  | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.952 |
| 10.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.608 |
| 15.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.351 |
| 20.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.809 |
| 30.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.915 |
Table 8. Performance and convergence metrics of the approximated solution for different values of γ (Example 2).
| γ | MSE | Max Abs Err | Runtime (s) |
|------|--------------|--------------|-------|
| 1.0  | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.563 |
| 5.0  | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.105 |
| 10.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.142 |
| 15.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 0.880 |
| 20.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 1.396 |
| 30.0 | 7.31 × 10^-8 | 1.00 × 10^-3 | 2.390 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.