Preprint
Article

This version is not peer-reviewed.

Spectral Approach to the Fractional Power of Operator and Its Matrix Approximation

Submitted:

06 January 2026

Posted:

08 January 2026


Abstract
This article is devoted to the construction of fractional powers of operators and their matrix approximations. A key feature of this study is the use of a spectral approach that remains applicable even when the base operator does not generate a semigroup. Our main results include a convergence rate for the matrix approximation, derived from resolvent estimates, and a practical algorithm for constructing matrix approximations. The theory is supported by examples.

1. Introduction

One of the most important problems in fractional calculus and its applications is to derive a formula that, for suitable operators $A$ and a range of parameters $\alpha$, generates a family of operators $\{A^\alpha\}$ possessing the properties expected of powers. In particular, $A^\alpha$ should coincide with the iterated power $A^n = \underbrace{A\cdot A\cdots A}_{n}$ when $\alpha$ is a positive integer $n$, and the index law $A^\alpha A^\beta = A^{\alpha+\beta}$ should hold whenever $A^\alpha$, $A^\beta$, and $A^{\alpha+\beta}$ are all defined. Several methods now exist for constructing such families [1]. Let us consider some methods of constructing powers of an operator $A$.
The first method involves finding the iterated operator $A^n$ and replacing the natural number $n$ with a real or complex number $\alpha$. This approach works when the functions of natural numbers appearing in the formula for the iterated operator can be generalized through their extensions to real (complex) arguments. For example, the fractional power of the ordinary integral arises in this way. Namely, starting from the formula for the iterated integral, we see that it is sufficient to replace the power function with a natural exponent by a power function with a real exponent, and the factorial by the Gamma function. This yields the well-known Riemann-Liouville fractional integral (see [2], p. 33). By finding the inverse operator, we obtain the Riemann-Liouville fractional derivative ([2], p. 30).
Another approach is to begin with the definition of the derivative as the limit of the ratio of the function’s increment to the argument’s increment as the argument’s increment tends to zero, derive a finite-difference formula for the derivative of a positive integer order, and generalize it to a real order. This results in the Grünwald–Letnikov fractional derivative, whose inverse is the corresponding fractional integral in a difference form (see [3], p. 43). However, if the operator is more general than an integral or a derivative, then using the iterative method becomes technically quite difficult.
A further method is based on solving the Cauchy problem for the iterated operator and then replacing the positive integer denoting the order of the operator with a real number (see [4]). Additionally, if the operator can be presented in a factorized form using integral transforms and a symbol, its fractional power can be expressed as a pseudo-differential operator with the symbol raised to a real power (see, for example, the fractional Laplace operator in [2], p. 483).
All the listed approaches are not constructive. For this reason, they are often not applicable from an applied point of view. Methods for constructing fractional powers of operators based on their properties, rather than on replacing a natural number with a real one, appeared in functional analysis. When the operator $A$ is the infinitesimal generator of a bounded semigroup, Phillips and Hille (see [5], 1952 and [6], first published in 1957) established that its fractional powers could be constructed using an operational calculus they developed. Balakrishnan [7] in 1959 provided a comprehensive treatment of this approach. Subsequently, Balakrishnan [8] in 1960 introduced a new definition based on resolvents that extended the theory to a broader class of operators. Further study of this approach has been given in [9,10,11,12]. Independently, Krasnosel'skii and Sobolevskii [13] in 1959 (see also [14]), using the splitting method, constructed fractional powers of self-adjoint operators.
The significant role of fractional calculus in applications [15,16,17] drives the need to construct approximations of fractional powers of operators and to quantify their convergence rates. Matrices are the most useful way to represent dynamical processes and to study linear mappings between finite-dimensional vector spaces. Matrix approximation of operators is also crucial for simplifying integrals, differential operators, and stochastic processes.
In this paper, we present the convergence estimate for fractional powers of operators. Specifically, we begin with a base operator and its approximation, both satisfying the same resolvent estimate. Using the resolvent approach, we then construct the corresponding fractional powers and derive an estimate for the norm of their difference. As a practical application, we also develop a matrix approximation method.

2. The Spectral Approach to the Fractional Powers of Operators

This section is devoted to the study of the spectral approach to the fractional powers of operators, including the rate of convergence and examples.

2.1. Application of the Resolvent to the Construction of a Fractional Power

In [18] the rate of convergence for powers of operators constructed from semigroup theory was obtained. In this article, we use a method for defining fractional powers that applies even when the base operator does not generate a semigroup or have a dense domain. This construction was first given by Balakrishnan in [7]. Namely, if the resolvent of the operator $A$ exists for $\lambda > 0$ and is $O(1/\lambda)$ for all $\lambda$, then fractional powers can be constructed using an abstract version of the Stieltjes transform.
Recall the definition of $A^\alpha$ from [7].
Definition 1.
If $A$ is a closed linear operator with domain $D(A)$ and range in a Banach space $X$, for which $\|\lambda R(\lambda, A)\| < M < \infty$ for $\lambda > 0$, where $R(\lambda, A) = (A + \lambda I)^{-1}$, then
$$A^\alpha x = \frac{\sin(\alpha\pi)}{\pi}\int_0^\infty \lambda^{\alpha-1}\, R(\lambda; A)\,A x\,d\lambda,\quad x\in D(A),\quad 0<\alpha<1. \qquad (1)$$
For $\alpha$ such that $n-1<\alpha\le n$, $n\in\mathbb{N}$,
$$A^\alpha x = A^{\alpha-n+1}A^{n-1}x,\quad x\in D(A^{n+1}).$$
Some important properties were obtained in [7].
1. The operators $A^\alpha$ can be extended to closed linear operators.
2. For $0<\alpha+\beta<1$, the semigroup property $A^{\alpha+\beta}x = A^\alpha A^\beta x$, $x\in D(A^2)$, is valid.
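As a numerical illustration of Definition 1 (this sketch is not part of the paper: it assumes a symmetric positive definite matrix, a truncated logarithmic quadrature grid, and illustrative tolerances), the integral in (1) can be evaluated by quadrature and compared with the spectral power $A^\alpha = V\,\mathrm{diag}(w^\alpha)\,V^T$:

```python
import numpy as np

# Minimal sketch: quadrature evaluation of the Balakrishnan-type integral
#   A^alpha = sin(alpha*pi)/pi * Int_0^inf lam^(alpha-1) (lam*I + A)^(-1) A dlam
# for a small SPD matrix, compared against the spectral power.
def frac_power_integral(A, alpha, grid=None):
    if grid is None:
        grid = np.logspace(-8, 8, 20001)      # truncated, log-spaced grid
    n = A.shape[0]
    I = np.eye(n)
    vals = np.array([lam**(alpha - 1) * np.linalg.solve(lam * I + A, A)
                     for lam in grid])        # integrand lam^(a-1) R(lam,A) A
    dlam = np.diff(grid)
    integral = ((vals[:-1] + vals[1:]) * dlam[:, None, None] / 2).sum(axis=0)
    return np.sin(alpha * np.pi) / np.pi * integral

A = np.array([[2.0, 1.0], [1.0, 3.0]])        # illustrative SPD matrix
alpha = 0.5
w, V = np.linalg.eigh(A)
spectral = V @ np.diag(w**alpha) @ V.T
numeric = frac_power_integral(A, alpha)
err = np.abs(numeric - spectral).max()
```

The truncation of the integral and the trapezoidal rule limit the accuracy here to roughly three digits, which is enough to see that (1) reproduces the spectral power.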

2.2. Rate of Convergence: Spectral Approach

The precise problem considered in this subsection is as follows. Let $B$ be a Banach space, and let $A, C$ be linear operators with domain and range in $B$. Suppose the resolvents of $A, C$ exist for $\lambda>0$ and are $O(1/\lambda)$ for all $\lambda$. Consider a subordinate Banach space $D\subset B$ such that $\|(A-C)f\|_B$ can be estimated in terms of $\|f\|_D$. We pursue two goals: first, to estimate the closeness of the resolvents $\|R(\lambda;A)-R(\lambda;C)\|$, and second, to estimate the closeness of the fractional powers $\|A^\alpha - C^\alpha\|$. Using methods from [19], we obtain the following theorem, which provides effective upper bounds for these quantities.
In what follows we consider $\alpha\in(0,1)$.
Theorem 2.1.
Assume $A, C$ are linear operators with domain and range in a Banach space $B$, with resolvents $R(\lambda,A)$, $R(\lambda,C)$ defined for $\lambda>0$ and such that
$$\|\lambda R(\lambda,A)\|_B < M < \infty,\quad \|\lambda R(\lambda,C)\|_B < M < \infty.$$
Let there exist a Banach space $D\subset B$ with norm $\|\cdot\|_D \ge \|\cdot\|_B$ (one can take $D=B$ for bounded $A, C$) such that $R(\lambda,C)$ is defined and has a similar bound in $D$ for $\lambda>0$:
$$\|\lambda R(\lambda,C)\|_D < M_1 < \infty.$$
Assume moreover that
$$\|(A-C)f\|_B \le \omega\,\|f\|_D,\quad \omega>0.$$
Then
$$\|(R(\lambda;A)-R(\lambda;C))f\|_B \le \frac{M M_1\,\omega}{\lambda^2}\,\|f\|_D \qquad (2)$$
and
$$\|(A^\alpha - C^\alpha)f\|_B \le \gamma\,\|f\|_D\,\omega^\alpha, \qquad (3)$$
where
$$\gamma = M\left(2 + \frac{M_1\,\alpha}{1-\alpha}\right).$$
Proof.
From the second resolvent identity
$$R(\lambda;A) - R(\lambda;C) = R(\lambda;A)(C-A)R(\lambda;C),$$
it follows that
$$\|R(\lambda,A)f - R(\lambda,C)f\|_B \le \frac{M\omega}{\lambda}\,\|R(\lambda,C)f\|_D \le \frac{\omega M M_1}{\lambda^2}\,\|f\|_D.$$
Next, since $R(\lambda,A)Af = f - \lambda R(\lambda,A)f$, and the same holds for $C$, applying (1) (and using $\sin(\alpha\pi)\le\alpha\pi$) we obtain
$$\|A^\alpha f - C^\alpha f\|_B \le \alpha\int_0^\infty \lambda^{\alpha}\,\|R(\lambda;A)f - R(\lambda;C)f\|_B\,d\lambda.$$
Decomposing the domain of integration into two parts, $[0,\omega]$ and $[\omega,\infty)$, and using (2) only for the second integral, we get the estimate
$$\|A^\alpha f - C^\alpha f\|_B \le 2M\alpha\int_0^\omega\lambda^{\alpha-1}\|f\|_B\,d\lambda + \alpha\int_\omega^\infty M M_1\omega\,\lambda^{\alpha-2}\|f\|_D\,d\lambda =$$
$$= 2M\omega^\alpha\|f\|_B + \frac{\alpha}{1-\alpha}\,MM_1\,\omega^\alpha\|f\|_D \le M\left(2+\frac{M_1\alpha}{1-\alpha}\right)\|f\|_D\,\omega^\alpha.$$
The proof is complete. □
Remark 1.
In the most important case, when $A, C$ generate Feller semigroups, the above estimates for the norms of their resolvents in $B$ are known to hold with $M=1$. Moreover, if $D$ is the domain of $C$ with $\|f\|_D = \|f\|_B + \|Cf\|_B$ (the canonical choice of the norm in the domain of an operator), then the above estimate for the norm of $R(\lambda,C)$ in $D$ holds as well with $M_1=1$, because then
$$\|CR(\lambda,C)f\|_B \le \frac1\lambda\|Cf\|_B \le \frac1\lambda\|f\|_D.$$
In this case, $\gamma$ in (3) simplifies to $\gamma = (2-\alpha)/(1-\alpha)$. In any case, it is seen that these estimates deteriorate as $\alpha$ approaches 1.
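Estimate (2) can be watched at work in finite dimensions (a sketch under assumptions not made in the paper: $B = D = \mathbb{R}^n$ with the spectral norm, $R(\lambda,X)=(\lambda I+X)^{-1}$, $\omega = \|A-C\|$, and illustrative random matrices):

```python
import numpy as np

# Finite-dimensional sketch of estimate (2). Here M and M1 are taken as the
# attained values of ||lam R(lam,A)|| and ||lam R(lam,C)||, so the inequality
# ||R_A - R_C|| <= M*M1*omega/lam^2 must hold for every lam > 0.
rng = np.random.default_rng(0)
n = 4
G = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (G @ G.T)              # positive definite
C = A + 0.05 * np.diag(rng.random(n))        # positive definite perturbation
I = np.eye(n)
omega = np.linalg.norm(A - C, 2)

lhs_list, rhs_list = [], []
for lam in [0.5, 1.0, 10.0, 100.0]:
    RA = np.linalg.inv(lam * I + A)
    RC = np.linalg.inv(lam * I + C)
    M = lam * np.linalg.norm(RA, 2)
    M1 = lam * np.linalg.norm(RC, 2)
    lhs_list.append(np.linalg.norm(RA - RC, 2))
    rhs_list.append(M * M1 * omega / lam**2)
```

The $\lambda^{-2}$ decay on the right-hand side is what makes the tail integral in the proof of Theorem 2.1 converge.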
Next, we analyze Definition 1 in detail and show how to construct $A^\alpha$ in practice, starting from the operator $A$ and its resolvent.

2.3. Riemann-Liouville Fractional Integral

In this subsection we show how this method works for the simplest integral:
$$(I_{a+}^1f)(x) = \int_a^x f(t)\,dt.$$
It is easy to see that [20]
$$g = R(\lambda,A)f = (A+\lambda I)^{-1}f = \frac1\lambda\sum_{k=0}^\infty(-1)^k\frac{A^k}{\lambda^k}f. \qquad (4)$$
It is known that the iterated operator $I_{a+}^1$ has the form
$$((I_{a+}^1)^kf)(x) = (I_{a+}^kf)(x) = \underbrace{\int_a^x dx\int_a^x dx\cdots\int_a^x}_{k}f(t)\,dt = \frac1{(k-1)!}\int_a^x(x-t)^{k-1}f(t)\,dt.$$
Then by (4) the resolvent of the operator $I_{a+}^1$ is
$$R(\lambda,I_{a+}^1)f = \frac1\lambda\sum_{k=0}^\infty(-1)^k\frac{(I_{a+}^1)^k}{\lambda^k}f = \frac1\lambda f + \frac1\lambda\sum_{k=1}^\infty\frac{(-1)^k}{(k-1)!}\frac1{\lambda^k}\int_a^x(x-t)^{k-1}f(t)\,dt =$$
$$= \frac1\lambda f + \frac1{\lambda^2}\int_a^x\sum_{k=1}^\infty\frac{(-1)^k}{(k-1)!}\left(\frac{x-t}{\lambda}\right)^{k-1}f(t)\,dt =$$
$$= \frac1\lambda f - \frac1{\lambda^2}\int_a^x\sum_{k=0}^\infty\frac{(-1)^k}{k!}\left(\frac{x-t}{\lambda}\right)^kf(t)\,dt = \frac1\lambda f - \frac1{\lambda^2}\int_a^x e^{-\frac{x-t}{\lambda}}f(t)\,dt.$$
So we can write
$$R(\lambda,I_{a+}^1)\,I_{a+}^1f(x) = \frac1\lambda\int_a^x e^{-\frac{x-t}{\lambda}}f(t)\,dt,$$
and then by (1) we get
$$((I_{a+}^1)^\alpha f)(x) = \frac{\sin(\alpha\pi)}{\pi}\int_0^\infty\lambda^{\alpha-2}\int_a^x e^{-\frac{x-t}{\lambda}}f(t)\,dt\,d\lambda = \frac{\sin(\alpha\pi)}{\pi}\int_a^x f(t)\int_0^\infty\lambda^{\alpha-2}e^{-\frac{x-t}{\lambda}}\,d\lambda\,dt = \{1/\lambda = z\} =$$
$$= \frac{\sin(\alpha\pi)}{\pi}\int_a^x f(t)\int_0^\infty z^{-\alpha}e^{-(x-t)z}\,dz\,dt = \frac{\sin(\alpha\pi)\Gamma(1-\alpha)}{\pi}\int_a^x(x-t)^{\alpha-1}f(t)\,dt = \frac1{\Gamma(\alpha)}\int_a^x(x-t)^{\alpha-1}f(t)\,dt = (I_{a+}^\alpha f)(x).$$
Here Euler's reflection formula
$$\Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin\pi z},\quad z\notin\mathbb{Z},$$
was used. So we obtain
$$(I_{a+}^\alpha f)(x) = \frac1{\Gamma(\alpha)}\int_a^x(x-t)^{\alpha-1}f(t)\,dt. \qquad (5)$$
The expression (5) coincides with the well-known Riemann-Liouville fractional integral $I_{a+}^\alpha$ (see [2]).
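Formula (5) can be checked in a case with a known closed form (a sketch: $f(t)=1$, $a=0$, and a plain midpoint rule, which is a simple but not optimal way to handle the weak endpoint singularity):

```python
import numpy as np
from math import gamma

# Sketch: for f(t) = 1 and a = 0, (I^alpha 1)(x) = x^alpha / Gamma(alpha + 1),
# so a direct quadrature of (5) should reproduce that value.
def riemann_liouville(f, x, alpha, n=200000):
    t = np.linspace(0.0, x, n, endpoint=False) + x / (2 * n)   # midpoints
    return (x / n) * np.sum((x - t)**(alpha - 1) * f(t)) / gamma(alpha)

alpha, x = 0.5, 2.0
approx = riemann_liouville(lambda t: np.ones_like(t), x, alpha)
exact = x**alpha / gamma(alpha + 1)
```

The singular factor $(x-t)^{\alpha-1}$ limits the midpoint rule to roughly $O(h^{\alpha})$ accuracy near $t=x$, which is still ample for this check.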

2.4. Erdélyi-Kober Fractional Integral

Considering the integral
$$(I_{a+;x}^1f)(x) = \int_a^x f(t)\,t\,dt,$$
we first easily obtain that the iterated operator $(I_{a+;x}^1)^k$ has the form
$$((I_{a+;x}^1)^kf)(x) = (I_{a+;x}^kf)(x) = \underbrace{\int_a^x x\,dx\int_a^x x\,dx\cdots\int_a^x}_{k}f(t)\,t\,dt = \frac1{2^{k-1}(k-1)!}\int_a^x(x^2-t^2)^{k-1}\,t\,f(t)\,dt.$$
Then by (4) the resolvent of the operator $I_{a+;x}^1$ is
$$R(\lambda,I_{a+;x}^1)f = \frac1\lambda\sum_{k=0}^\infty(-1)^k\frac{(I_{a+;x}^1)^k}{\lambda^k}f =$$
$$= \frac1\lambda f + \frac1\lambda\sum_{k=1}^\infty\frac{(-1)^k}{2^{k-1}(k-1)!}\frac1{\lambda^k}\int_a^x(x^2-t^2)^{k-1}f(t)\,t\,dt =$$
$$= \frac1\lambda f + \frac1{\lambda^2}\int_a^x\sum_{k=1}^\infty\frac{(-1)^k}{(k-1)!}\left(\frac{x^2-t^2}{2\lambda}\right)^{k-1}f(t)\,t\,dt =$$
$$= \frac1\lambda f - \frac1{\lambda^2}\int_a^x\sum_{k=0}^\infty\frac{(-1)^k}{k!}\left(\frac{x^2-t^2}{2\lambda}\right)^kf(t)\,t\,dt = \frac1\lambda f - \frac1{\lambda^2}\int_a^x e^{-\frac{x^2-t^2}{2\lambda}}f(t)\,t\,dt.$$
Since
$$R(\lambda,I_{a+;x}^1)\,I_{a+;x}^1f(x) = \frac1\lambda\int_a^x e^{-\frac{x^2-t^2}{2\lambda}}f(t)\,t\,dt,$$
by (1) we get
$$((I_{a+;x}^1)^\alpha f)(x) = \frac{\sin(\alpha\pi)}{\pi}\int_0^\infty\lambda^{\alpha-2}\int_a^x e^{-\frac{x^2-t^2}{2\lambda}}f(t)\,t\,dt\,d\lambda = \frac{\sin(\alpha\pi)}{\pi}\int_a^x f(t)\,t\int_0^\infty\lambda^{\alpha-2}e^{-\frac{x^2-t^2}{2\lambda}}\,d\lambda\,dt = \{1/\lambda=z\} =$$
$$= \frac{\sin(\alpha\pi)}{\pi}\int_a^x f(t)\,t\int_0^\infty z^{-\alpha}e^{-\frac12(x^2-t^2)z}\,dz\,dt = \frac{\sin(\alpha\pi)\Gamma(1-\alpha)}{\pi}\int_a^x\left(\frac{x^2-t^2}{2}\right)^{\alpha-1}f(t)\,t\,dt =$$
$$= \frac1{\Gamma(\alpha)}\int_a^x\left(\frac{x^2-t^2}{2}\right)^{\alpha-1}f(t)\,t\,dt = (I_{a+;x}^\alpha f)(x).$$
Since the Erdélyi-Kober integral is defined by the formula (see [2])
$$(I_{a+;2}^\alpha f)(x) = \frac{2}{\Gamma(\alpha)}\,x^{-2\alpha}\int_0^x(x^2-t^2)^{\alpha-1}\,t\,f(t)\,dt, \qquad (6)$$
the operator $I_{a+;x}^\alpha$ coincides with (6) up to the multiplier $Cx^{-2\alpha}$, and
$$(I_{a+;x}^\alpha f)(x) = \frac1{\Gamma(\alpha)}\int_a^x\left(\frac{x^2-t^2}{2}\right)^{\alpha-1}f(t)\,t\,dt.$$
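For this operator, too, there is an easy closed-form check (a sketch: $f(t)=1$, $a=0$; substituting $u=(x^2-t^2)/2$ turns the integral into an elementary one):

```python
import numpy as np
from math import gamma

# Sketch: for f(t) = 1 and a = 0, substituting u = (x^2 - t^2)/2 gives
# (I^alpha_{0+;x} 1)(x) = (x^2/2)^alpha / Gamma(alpha + 1).
def ek_frac(f, x, alpha, n=400000):
    t = np.linspace(0.0, x, n, endpoint=False) + x / (2 * n)   # midpoints
    return (x / n) * np.sum(((x**2 - t**2) / 2)**(alpha - 1) * f(t) * t) / gamma(alpha)

alpha, x = 0.6, 1.5
approx = ek_frac(lambda t: np.ones_like(t), x, alpha)
exact = (x**2 / 2)**alpha / gamma(alpha + 1)
```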

2.5. Hadamard-Type Fractional Integral

We will use the function space [1]
$$F_{p,\mu} = \left\{\varphi\in C^\infty(0,\infty):\ \left\|x^kD^k\left(x^{-\mu}\varphi\right)\right\|_p<\infty,\ k=0,1,2,\ldots\right\},$$
where $1\le p<\infty$, $\mu$ is any real number, $D=\frac{d}{dx}$, and $\|\cdot\|_p$ is the $L^p$-norm.
We consider the integral operator of the form
$$(I_m^{\eta,1}f)(x) = m\,x^{-m(\eta+1)}\int_0^x t^{m(\eta+1)-1}f(t)\,dt,\quad x>0, \qquad (7)$$
which is a homeomorphism from $F_{p,\mu}$ onto $F_{p,\mu}$ and has the inverse given by
$$((I_m^{\eta,1})^{-1}f)(x) = (\eta+1)f(x) + \frac1m\,x\frac{df}{dx} = \left(\eta+1+\frac{x}{m}\frac{d}{dx}\right)f(x).$$
Indeed,
$$((I_m^{\eta,1})^{-1}I_m^{\eta,1}f)(x) = \left(\eta+1+\frac{x}{m}\frac{d}{dx}\right)\left[m\,x^{-m(\eta+1)}\int_0^x t^{m(\eta+1)-1}f(t)\,dt\right] =$$
$$= m(\eta+1)\,x^{-m(\eta+1)}\int_0^x t^{m(\eta+1)-1}f(t)\,dt + \frac{x}{m}\frac{d}{dx}\left[m\,x^{-m(\eta+1)}\int_0^x t^{m(\eta+1)-1}f(t)\,dt\right] =$$
$$= m(\eta+1)\,x^{-m(\eta+1)}\int_0^x t^{m(\eta+1)-1}f(t)\,dt - m(\eta+1)\,x^{-m(\eta+1)}\int_0^x t^{m(\eta+1)-1}f(t)\,dt + f(x) = f(x).$$
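A quick way to see both the operator and its inverse at work is their action on power functions (a sketch; the closed form below follows from a direct computation with $f(t)=t^c$ and is consistent with the inverse just verified, the parameter values being illustrative):

```python
import numpy as np

# Sketch: a direct computation gives (I_m^{eta,1} t^c)(x) = x^c / (eta + 1 + c/m),
# i.e. power functions are eigenfunctions; the inverse (eta + 1 + (x/m) d/dx)
# multiplies x^c by exactly the reciprocal factor.
def I_m(f, x, m, eta, n=200000):
    t = np.linspace(0.0, x, n, endpoint=False) + x / (2 * n)   # midpoints
    return m * x**(-m * (eta + 1)) * (x / n) * np.sum(t**(m * (eta + 1) - 1) * f(t))

m, eta, c, x = 2.0, 0.5, 3.0, 1.7
value = I_m(lambda t: t**c, x, m, eta)
exact = x**c / (eta + 1 + c / m)
```

This diagonal action on powers is what makes the resolvent computation below tractable.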
If $f(0)=0$, then
$$\left(\frac{x}{m}\frac{d}{dx}\right)^{-1}f(x) = m\int_0^x\frac1t\,f(t)\,dt = (I_m^{-1,1}f)(x).$$
Since
$$(I_m^{-1,1}\,x^{m(\eta+1)}f)(x) = x^{m(\eta+1)}(I_m^{\eta,1}f)(x),$$
we can write
$$\lambda I + I_m^{\eta,1} = \frac1{x^{m(\eta+1)}}\left(\lambda I + I_m^{-1,1}\right)x^{m(\eta+1)}. \qquad (8)$$
This means that the spectrum of $I_m^{\eta,1}$ on $F_{p,\mu}$ is identical to the spectrum of $I_m^{-1,1}$ on $F_{p,\mu+m(\eta+1)}$. Bearing this in mind, in order to find the resolvent of $I_m^{\eta,1}$, we consider the equation
$$(\lambda I + I_m^{-1,1})\varphi = \psi,\quad \varphi,\psi\in F_{p,\mu+m(\eta+1)}. \qquad (9)$$
The solution of equation (9) gives the resolvent of the operator $I_m^{-1,1}$:
$$\varphi = (\lambda I + I_m^{-1,1})^{-1}\psi = R(\lambda, I_m^{-1,1})\psi.$$
We start by finding the resolvent $R(\lambda, I_m^{-1,1})$. Applying $\frac{x}{m}\frac{d}{dx}$ to each side of (9) and dividing by $\lambda$ produces
$$\left(\frac{x}{m}\frac{d}{dx}+\frac1\lambda\right)\varphi = \frac1\lambda\,\frac{x}{m}\frac{d}{dx}\psi.$$
This can be written as
$$\frac1{x^{\frac m\lambda}}\,\frac{x}{m}\frac{d}{dx}\left(x^{\frac m\lambda}\varphi\right) = \frac1\lambda\,\frac{x}{m}\frac{d}{dx}\psi \iff \frac{x}{m}\frac{d}{dx}\left(x^{\frac m\lambda}\varphi\right) = \frac{x^{\frac m\lambda}}{\lambda}\,\frac{x}{m}\frac{d}{dx}\psi,$$
and we get
$$\varphi = \frac1\lambda\,x^{-\frac m\lambda}\left(\frac{x}{m}\frac{d}{dx}\right)^{-1}\left(x^{\frac m\lambda}\,\frac{x}{m}\frac{d}{dx}\psi\right) = \frac1\lambda\,x^{-\frac m\lambda}\,I_m^{-1,1}\left(x^{\frac m\lambda}\,\frac{x}{m}\frac{d}{dx}\psi\right)(x).$$
Integrating by parts we obtain
$$I_m^{-1,1}\left(x^{\frac m\lambda}\,\frac{x}{m}\frac{d}{dx}\psi\right)(x) = \int_0^x t^{\frac m\lambda}\psi'(t)\,dt = x^{\frac m\lambda}\psi(x) - \frac1\lambda\left(I_m^{-1,1}x^{\frac m\lambda}\psi\right)(x)$$
and
$$\varphi = \frac1\lambda\psi - \frac1{\lambda^2}\,x^{-\frac m\lambda}\,I_m^{-1,1}\,x^{\frac m\lambda}\psi,$$
provided that $\operatorname{Re}(\mu+m(\eta+1))+\frac1\lambda \ne \frac1p$. When $\operatorname{Re}(\mu+m(\eta+1))+\frac1\lambda > \frac1p$, by (7) we obtain
$$\varphi = \frac1\lambda\psi - \frac1{\lambda^2}\,I_m^{\frac1\lambda-1,1}\psi$$
and
$$R(\lambda,I_m^{-1,1}) = (\lambda I + I_m^{-1,1})^{-1} = \frac1\lambda I - \frac1{\lambda^2}\,I_m^{\frac1\lambda-1,1}.$$
Consequently, from (8), for $\lambda>0$, $\operatorname{Re}(\mu+m(\eta+1))>\frac1p$ and $f\in F_{p,\mu}$,
$$(\lambda I + I_m^{\eta,1})g = \frac1{x^{m(\eta+1)}}(\lambda I + I_m^{-1,1})\,x^{m(\eta+1)}g = f \implies$$
$$g = \frac1{x^{m(\eta+1)}}(\lambda I + I_m^{-1,1})^{-1}x^{m(\eta+1)}f \implies g = \frac1\lambda f - \frac1{\lambda^2}\,\frac1{x^{m(\eta+1)}}\,I_m^{\frac1\lambda-1,1}\,x^{m(\eta+1)}f.$$
We have
$$\frac1{x^{m(\eta+1)}}\left(I_m^{\frac1\lambda-1,1}x^{m(\eta+1)}f\right)(x) = m\,x^{-m\left(\eta+\frac1\lambda+1\right)}\int_0^x t^{m\left(\eta+\frac1\lambda+1\right)-1}f(t)\,dt = (I_m^{\eta+\frac1\lambda,1}f)(x).$$
Finally,
$$R(\lambda,I_m^{\eta,1}) = \frac1\lambda I - \frac1{\lambda^2}\,I_m^{\eta+\frac1\lambda,1},$$
and we obtain
$$R(\lambda,I_m^{\eta,1})\,I_m^{\eta,1} = I - \lambda R(\lambda,I_m^{\eta,1}) = \frac1\lambda\,I_m^{\eta+\frac1\lambda,1}. \qquad (10)$$
By (1), taking into account (10), we get
$$((I_m^{\eta,1})^\alpha f)(x) = \frac{\sin(\alpha\pi)}{\pi}\int_0^\infty\lambda^{\alpha-1}\,R(\lambda,I_m^{\eta,1})\,I_m^{\eta,1}f(x)\,d\lambda = \frac{\sin(\alpha\pi)}{\pi}\int_0^\infty\lambda^{\alpha-2}\,I_m^{\eta+\frac1\lambda,1}f(x)\,d\lambda =$$
$$= \frac{m\sin(\alpha\pi)}{\pi}\int_0^\infty\lambda^{\alpha-2}\,x^{-m\left(\eta+\frac1\lambda+1\right)}\int_0^x t^{m\left(\eta+\frac1\lambda+1\right)-1}f(t)\,dt\,d\lambda =$$
$$= \frac{m\sin(\alpha\pi)}{\pi}\,x^{-m(\eta+1)}\int_0^x f(t)\,t^{m(\eta+1)-1}\,dt\int_0^\infty\lambda^{\alpha-2}\left(\frac{x}{t}\right)^{-\frac m\lambda}d\lambda =$$
$$= \frac{m\sin(\alpha\pi)}{\pi}\,x^{-m(\eta+1)}\int_0^x f(t)\,t^{m(\eta+1)-1}\,dt\int_0^\infty\lambda^{\alpha-2}e^{-\frac m\lambda\log\frac xt}\,d\lambda = \left\{u=\frac m\lambda\log\frac xt\right\} =$$
$$= \frac{m\sin(\alpha\pi)}{\pi}\,x^{-m(\eta+1)}\int_0^x f(t)\,t^{m(\eta+1)-1}\left(m\log\frac xt\right)^{\alpha-1}dt\int_0^\infty u^{-\alpha}e^{-u}\,du =$$
$$= \frac{m\sin(\alpha\pi)\Gamma(1-\alpha)}{\pi}\,x^{-m(\eta+1)}\int_0^x f(t)\,t^{m(\eta+1)-1}\left(m\log\frac xt\right)^{\alpha-1}dt = \frac{m}{\Gamma(\alpha)}\,x^{-m(\eta+1)}\int_0^x f(t)\,t^{m(\eta+1)-1}\left(m\log\frac xt\right)^{\alpha-1}dt.$$
So we get a generalisation of the Hadamard fractional integral:
$$((I_m^{\eta,1})^\alpha f)(x) = \frac{m}{\Gamma(\alpha)}\,x^{-m(\eta+1)}\int_0^x f(t)\,t^{m(\eta+1)-1}\left(m\log\frac xt\right)^{\alpha-1}dt. \qquad (11)$$
If we put $m=1$ in (11), we get the following operator:
$$((I_1^{\eta,1})^\alpha f)(x) = \frac1{\Gamma(\alpha)}\int_0^x f(t)\left(\log\frac xt\right)^{\alpha-1}\left(\frac tx\right)^{\eta+1}\frac{dt}{t} = (H^{\eta,\alpha}f)(x).$$
This example in the particular case $m=1$ was given in [1]. Such an integral was considered in [21,22]. If additionally we put $\eta=-1$ in (11), we obtain the classical Hadamard fractional integral (see [2]), which realises the negative power of the Cauchy-Euler operator $x\frac{d}{dx}$:
$$((I_1^{-1,1})^\alpha f)(x) = \frac1{\Gamma(\alpha)}\int_0^x f(t)\left(\log\frac xt\right)^{\alpha-1}\frac{dt}{t} = \left(x\frac{d}{dx}\right)^{-\alpha}f(x).$$
It is known that for $\alpha>0$ and $\eta>-1$, and for an infinitely differentiable function $f(x)$ defined for $x>0$ whose Taylor series converges at every point $x>0$, the following equality holds (see [21]):
$$((I_1^{\eta,1})^\alpha f)(x) = \frac1{\Gamma(\alpha)}\int_0^x\left(\frac tx\right)^{\eta+1}\left(\ln\frac xt\right)^{\alpha-1}f(t)\,\frac{dt}{t} = \sum_{k=0}^\infty\sigma_k^{\alpha}(\eta+1)\,x^kf^{(k)}(x),$$
where
$$\sigma_k^{\alpha}(c) = \frac1{k!}\sum_{j=0}^k(-1)^{k+j}\binom kj(c+j)^{-\alpha},\quad \alpha,c\in\mathbb{R},\ k\in\mathbb{N}\cup\{0\},$$
are the generalised Stirling numbers of the second kind.
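The series can be checked directly on power functions (a sketch: for a non-negative integer $c$, $f(x)=x^c$ gives $f^{(k)}(x)=c(c-1)\cdots(c-k+1)\,x^{c-k}$, the series terminates, and it collapses to a Newton forward-difference expansion of $g(j)=(\eta+1+j)^{-\alpha}$ evaluated at $j=c$, so its value must be $(\eta+1+c)^{-\alpha}$; parameter values are illustrative):

```python
from math import comb, factorial

# Sketch: generalised Stirling numbers sigma_k^alpha(c) and the terminating
# series on f(x) = x^c; the sum is a Newton series for (eta + 1 + c)^(-alpha).
def sigma(alpha, c, k):
    return sum((-1)**(k + j) * comb(k, j) * (c + j)**(-alpha)
               for j in range(k + 1)) / factorial(k)

def series_on_power(alpha, eta, c, K=40):
    total, falling = 0.0, 1.0          # falling = c(c-1)...(c-k+1)
    for k in range(K + 1):
        total += sigma(alpha, eta + 1, k) * falling
        falling *= (c - k)
    return total

alpha, eta, c = 0.7, 0.5, 3
value = series_on_power(alpha, eta, c)
exact = (eta + 1 + c)**(-alpha)
```

For integer $c$ the agreement is exact up to roundoff, which is a convenient sanity check on the sign and index conventions in the definition of $\sigma_k^{\alpha}(c)$.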

3. The Matrix Approach to the Fractional Powers of Operator

This section is devoted to the matrix approach for calculating fractional powers of operators. We start from an illustrative example giving the fractional power of the simplest integral operator; then we construct the power of a diagonalizable triangular matrix and its resolvent.

3.1. Illustrative Example

To illustrate the concept of using matrices for introducing new forms of fractional integro-differentiation, we start with the simplest case and consider the Riemann integral $(I_{a+}^1f)(x)=\int_a^x f(t)\,dt$, where $a<x\le b$, $f\in C[a,b]$.
Consider a uniform partition $P=(x_0,x_1,\ldots,x_n)$ of the interval $[a,x]$, defined by the points $x_p=a+ph$ for $p=0,1,2,\ldots,n$, where the step size is $h=\frac{x-a}{n}$. Then the integral $I_{a+}^1$ is defined as the limit of the corresponding integral sums as $n\to\infty$:
$$(I_{a+}^1f)(x) = \int_a^x f(t)\,dt = \lim_{n\to+\infty}\sum_{p=0}^{n-1}f(\xi_p)\,h = \lim_{h\to+0}\sum_{p=0}^{n-1}f(\xi_p)\,h,$$
where $\xi_p$ is an arbitrary point of $[x-(p+1)h,\,x-ph]$, $p=0,\ldots,n-1$.
If we take $\xi_p=x-ph$ and denote $f_p=f(x-ph)$, $p=0,\ldots,n-1$, $f_n=0$, we obtain
$$(I_{a+}^1f)(x) = \int_a^x f(t)\,dt = \lim_{n\to+\infty}h\sum_{p=0}^{n-1}f(x-ph) = \lim_{h\to+0}h\sum_{p=0}^{n-1}f_p,\quad h=\frac{x-a}{n}.$$
The approximating operator is
$$(I_{a+,n}^1f)(x) = h\sum_{p=0}^{n-1}f(x-ph),\quad h=\frac{x-a}{n}.$$
Let $f_n=(f_0,f_1,\ldots,f_{n-1})^T$. With the operator $I_{a+}^1$ we associate the $(n\times n)$ matrix
$$A_n = h\begin{pmatrix}1&1&\cdots&1&1\\0&1&\cdots&1&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\0&0&\cdots&1&1\\0&0&\cdots&0&1\end{pmatrix}$$
in the sense that
$$\lim_{h\to+0}A_n\cdot f_n = \lim_{h\to+0}h\begin{pmatrix}1&1&\cdots&1&1\\0&1&\cdots&1&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\0&0&\cdots&0&1\end{pmatrix}\cdot\begin{pmatrix}f_0\\f_1\\ \vdots\\f_{n-2}\\f_{n-1}\end{pmatrix} = \begin{pmatrix}\lim_{h\to+0}h\sum_{p=0}^{n-1}f_p\\ \lim_{h\to+0}h\sum_{p=1}^{n-1}f_p\\ \vdots\\ \lim_{h\to+0}h\sum_{p=n-2}^{n-1}f_p\\ \lim_{h\to+0}h\,f_{n-1}\end{pmatrix} = \begin{pmatrix}\int_a^xf(t)\,dt\\ \int_a^{x-h}f(t)\,dt\\ \vdots\\ \int_a^{x-(n-2)h}f(t)\,dt\\ \int_a^{x-(n-1)h}f(t)\,dt\end{pmatrix}.$$
We can see that $I_{a+}^1$ can be represented by
$$(I_{a+}^1f)(x) = \lim_{n\to+\infty}(A_n\cdot f_n)\cdot I_{1n},\quad h=\frac{x-a}{n},$$
where $I_{1n}=(1,0,\ldots,0)\in\mathbb{R}^n$. We have a so-called "matrix operator" $A_n$, and the limit as $h\to+0$ of the first component of $A_n\cdot f_n$ gives $(I_{a+}^1f)(x)$.
We explore the concept of associating integral and differential operators with their corresponding matrices. Our goal is to construct the fractional power $A_n^\alpha$, $\alpha\in\mathbb{R}$, of the matrix operator $A_n$. Subsequently, we will derive the fractional power of an operator as a limit by the formula
$$(I_{a+}^\alpha f)(x) = \lim_{n\to\infty}(A_n^\alpha\cdot f_n)\cdot I_{1n} = \lim_{h\to+0}(A_n^\alpha\cdot f_n)\cdot I_{1n}, \qquad (16)$$
so that only the first row of $A_n^\alpha$ is actually used, for all $\alpha\in\mathbb{R}$.
In order to construct the power $A_n^\alpha$, $\alpha\in\mathbb{R}$, we will use the formula $A_n^\alpha = e^{\alpha\ln A_n}$. Choosing for uniqueness the principal branch of the logarithm $\ln A_n$, we obtain
$$\ln A_n = (\ln h)\,I + \begin{pmatrix}0&1&\frac12&\cdots&\frac1{n-2}&\frac1{n-1}\\0&0&1&\cdots&\frac1{n-3}&\frac1{n-2}\\ \vdots&&&\ddots&&\vdots\\0&0&0&\cdots&0&1\\0&0&0&\cdots&0&0\end{pmatrix},$$
$$\alpha\ln A_n = (\alpha\ln h)\,I + \begin{pmatrix}0&\alpha&\frac\alpha2&\cdots&\frac\alpha{n-2}&\frac\alpha{n-1}\\0&0&\alpha&\cdots&\frac\alpha{n-3}&\frac\alpha{n-2}\\ \vdots&&&\ddots&&\vdots\\0&0&0&\cdots&0&\alpha\\0&0&0&\cdots&0&0\end{pmatrix},$$
$$A_n^\alpha = h^\alpha\begin{pmatrix}1&\alpha&\frac{\alpha(\alpha+1)}{2}&\cdots&\frac{\alpha(\alpha+1)\cdots(\alpha+n-3)}{(n-2)!}&\frac{\alpha(\alpha+1)\cdots(\alpha+n-2)}{(n-1)!}\\0&1&\alpha&\cdots&\frac{\alpha(\alpha+1)\cdots(\alpha+n-4)}{(n-3)!}&\frac{\alpha(\alpha+1)\cdots(\alpha+n-3)}{(n-2)!}\\ \vdots&&&\ddots&&\vdots\\0&0&0&\cdots&1&\alpha\\0&0&0&\cdots&0&1\end{pmatrix}. \qquad (17)$$
We thus get the general power (17) of the matrix operator $A_n$ for all $\alpha\in\mathbb{R}$, which automatically has the semigroup property.
It is easy to see that when $\alpha=-1$,
$$A_n^{-1} = \frac1h\begin{pmatrix}1&-1&0&\cdots&0&0\\0&1&-1&\cdots&0&0\\ \vdots&&\ddots&\ddots&&\vdots\\0&0&0&\cdots&1&-1\\0&0&0&\cdots&0&1\end{pmatrix},$$
we get the formula for the inversion of the matrix $A_n$, and
$$\lim_{h\to+0}A_n^{-1}\cdot f_n = \lim_{h\to+0}\frac1h\begin{pmatrix}f_0-f_1\\f_1-f_2\\ \vdots\\f_{n-2}-f_{n-1}\\f_{n-1}-f_n\end{pmatrix} = \begin{pmatrix}\lim_{h\to+0}\frac{f(x)-f(x-h)}{h}\\ \lim_{h\to+0}\frac{f(x-h)-f(x-2h)}{h}\\ \vdots\\ \lim_{h\to+0}\frac{f(x-(n-1)h)-f(x-nh)}{h}\end{pmatrix} = \begin{pmatrix}f'(x)\\f'(x-h)\\ \vdots\\f'(x-(n-1)h)\end{pmatrix}$$
and
$$(I_{a+}^{-1}f)(x) = \lim_{n\to+\infty}(A_n^{-1}\cdot f_n)\cdot I_{1n} = \lim_{h\to+0}\frac{f(x)-f(x-h)}{h} = f'(x)$$
is the left derivative of $f$ at $x$.
Next we notice that when $\alpha=-m$, $m\in\mathbb{N}$, $m\le n$, all elements of the first row with a column number greater than $m+1$ are equal to zero: $A_n^{-m}=h^{-m}\,T$, where $T$ is the upper-triangular Toeplitz matrix with first row
$$\left(1,\ -\binom m1,\ \binom m2,\ \ldots,\ (-1)^m\binom mm,\ 0,\ \ldots,\ 0\right),$$
so for $\alpha=-m$, $m\in\mathbb{N}$, $m\le n$, by (16)
$$(I_{a+}^{-m}f)(x) = \lim_{h\to+0}(A_n^{-m}\cdot f_n)\cdot I_{1n} = \lim_{h\to+0}\sum_{p=0}^m\frac{(-1)^p}{h^m}\binom mp f_p = \lim_{h\to+0}\sum_{p=0}^m\frac{(-1)^p}{h^m}\binom mp f(x-ph) = \lim_{h\to+0}\frac{\Delta_h^m[f](x)}{h^m} = f^{(m)}(x),$$
where $\binom mp = \frac{m!}{p!(m-p)!}$ are binomial coefficients, $\Delta_h^m[f](x) = \sum_{p=0}^m(-1)^p\binom mp f(x-ph)$ is a backward difference, and $f^{(m)}(x)$ is the left derivative of $f$ at $x$ of order $m\in\mathbb{N}$.
If $\alpha\in\mathbb{R}$, $\alpha\ne-m$, $m\in\mathbb{N}$, then, taking into account that
$$\alpha(\alpha+1)\cdots(\alpha+p-2) = \alpha^{(p-1)}$$
is the rising factorial, which can be written using the Gamma function by the formula
$$\alpha^{(p-1)} = \frac{\Gamma(\alpha+p-1)}{\Gamma(\alpha)},$$
we get
$$A_n^\alpha = h^\alpha\begin{pmatrix}1&\frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\Gamma(2)}&\frac{\Gamma(\alpha+2)}{\Gamma(\alpha)\Gamma(3)}&\cdots&\frac{\Gamma(\alpha+n-1)}{\Gamma(\alpha)\Gamma(n)}\\0&1&\frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\Gamma(2)}&\cdots&\frac{\Gamma(\alpha+n-2)}{\Gamma(\alpha)\Gamma(n-1)}\\ \vdots&&\ddots&&\vdots\\0&0&0&\cdots&1\end{pmatrix}.$$
Therefore, by (16) we can write
$$(I_{a+}^\alpha f)(x) = \lim_{h\to+0}(A_n^\alpha\cdot f_n)\cdot I_{1n} = \lim_{h\to+0}\frac{h^\alpha}{\Gamma(\alpha)}\sum_{p=0}^{n-1}\frac{\Gamma(\alpha+p)}{\Gamma(p+1)}\,f_p, \qquad (21)$$
where $f_p=f(x-ph)$, $h=\frac{x-a}{n}$. Formula (21) gives the left-sided Grünwald-Letnikov fractional integro-differentiation of order $\alpha\in\mathbb{R}$, $\alpha\ne-m$, $m\in\mathbb{N}$ (see [2]).
It is known (see [3]) that for $f(x)\in L_1(a,b)$ the limit (21) exists for almost every $x$, and the Grünwald-Letnikov constructions coincide with the Riemann-Liouville ones:
$$({}^{GL}I_{a+}^\alpha f)(x) = (I_{a+}^\alpha f)(x),\qquad ({}^{GL}D_{a+}^\alpha f)(x) = (D_{a+}^\alpha f)(x),$$
where
$$(I_{a+}^\alpha f)(x) = \frac1{\Gamma(\alpha)}\int_a^x\frac{f(t)\,dt}{(x-t)^{1-\alpha}}$$
is the left-sided Riemann-Liouville integral and
$$(D_{a+}^\alpha f)(x) = \left(\frac{d}{dx}\right)^p\frac1{\Gamma(p-\alpha)}\int_a^x\frac{f(t)\,dt}{(x-t)^{\alpha-p+1}},\quad p=[\alpha]+1,$$
is the left-sided Riemann-Liouville derivative.
The finite-difference operator
$$(I_{a+,n}^\alpha f)(x) = \frac{h^\alpha}{\Gamma(\alpha)}\sum_{p=0}^{n-1}\frac{\Gamma(\alpha+p)}{\Gamma(p+1)}\,f_p,\quad f_p=f(x-ph),\ h=\frac{x-a}{n},$$
which is a prelimiting form of (21), will be used for the approximation of the left-sided Riemann-Liouville integral. Let us note that in [3] this approximation was used for numerical calculations in concrete engineering problems.
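The prelimiting operator above is easy to implement and test against a closed form (a sketch: $f(t)=1$, $a=0$, where the Riemann-Liouville integral is $x^\alpha/\Gamma(\alpha+1)$; log-gamma is used so the coefficients $\Gamma(\alpha+p)/\Gamma(p+1)$ do not overflow for large $n$):

```python
import numpy as np
from math import gamma, lgamma

# Sketch of the Grunwald-Letnikov-type finite-difference operator I_{a+,n}^alpha.
def gl_operator(f, x, alpha, n):
    h = x / n
    p = np.arange(n)
    coef = np.exp([lgamma(alpha + k) - lgamma(k + 1) for k in p]) / gamma(alpha)
    return h**alpha * np.sum(coef * f(x - p * h))

alpha, x, n = 0.5, 1.0, 4000
approx = gl_operator(lambda t: np.ones_like(t), x, alpha, n)
exact = x**alpha / gamma(alpha + 1)
```

For $\alpha<0$ the same code realizes the Grünwald-Letnikov fractional derivative, in agreement with (21).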

3.2. Power of a Diagonalizable Triangle Matrix

In order to construct the fractional power of the operator $A$, we shall construct the power of the matrix $A^\alpha = e^{\alpha\ln A}$, where we choose the principal branch of the logarithm for uniqueness. In Section 3.1 we constructed $A^\alpha$ when all $a_k=1$, $k=1,\ldots,n$. In this section we construct $A^\alpha = e^{\alpha\ln A}$ when $a_1,\ldots,a_n$ are pairwise distinct; in this case the matrix $A$ is diagonalizable.
Theorem 3.1.
Let $\det A\ne0$ and let $a_1,\ldots,a_n$ be pairwise distinct. Then the matrix
$$A = \begin{pmatrix}a_1&a_2&\cdots&a_n\\0&a_2&\cdots&a_n\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n\end{pmatrix}$$
is diagonalizable, and the eigenvalue decomposition of the matrix $A$ is
$$A = PDP^{-1},$$
where $P=(b_{sm})_{s,m=1}^{n}$, $P^{-1}=(c_{sm})_{s,m=1}^{n}$, $D=(d_{sm})_{s,m=1}^{n}$,
$$b_{sm} = \begin{cases}1,&s=m,\\0,&m<s,\\ \dfrac{a_m^{m-s}}{\prod_{i=s}^{m-1}(a_m-a_i)},&s<m,\end{cases}\qquad c_{sm} = \begin{cases}1,&s=m,\\0,&m<s,\\ \dfrac{a_s^{m-s-1}a_m}{\prod_{i=s+1}^{m}(a_s-a_i)},&s<m,\end{cases}$$
$$d_{sm} = \begin{cases}a_m,&m=s,\\0,&s\ne m.\end{cases}$$
Proof. The eigenvalues of $A$ are $a_1,a_2,\ldots,a_n$. Since $a_1,\ldots,a_n$ are pairwise distinct, the matrix $A$ is diagonalizable and can be written in the form $A=PDP^{-1}$, where
$$D = \begin{pmatrix}a_1&0&\cdots&0\\0&a_2&\cdots&0\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n\end{pmatrix}$$
is a diagonal matrix and
$$P = \begin{pmatrix}b_{11}&b_{12}&\cdots&b_{1n}\\0&b_{22}&\cdots&b_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&b_{nn}\end{pmatrix}$$
is an invertible matrix. Now we can find $P$ and $P^{-1}$. Since
$$PD = \begin{pmatrix}b_{11}&b_{12}&\cdots&b_{1n}\\0&b_{22}&\cdots&b_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&b_{nn}\end{pmatrix}\cdot\begin{pmatrix}a_1&0&\cdots&0\\0&a_2&\cdots&0\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n\end{pmatrix} = \begin{pmatrix}a_1b_{11}&a_2b_{12}&\cdots&a_nb_{1n}\\0&a_2b_{22}&\cdots&a_nb_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_nb_{nn}\end{pmatrix},$$
the equality $AP=PD$ gives
$$\begin{pmatrix}a_1&a_2&\cdots&a_n\\0&a_2&\cdots&a_n\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n\end{pmatrix}\cdot\begin{pmatrix}b_{11}&b_{12}&\cdots&b_{1n}\\0&b_{22}&\cdots&b_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&b_{nn}\end{pmatrix} = \begin{pmatrix}a_1b_{11}&a_2b_{12}&\cdots&a_nb_{1n}\\0&a_2b_{22}&\cdots&a_nb_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_nb_{nn}\end{pmatrix}.$$
For the first row:
$$a_1b_{11}=a_1b_{11},\quad a_1b_{12}+a_2b_{22}=a_2b_{12},\quad\ldots,\quad a_1b_{1n}+a_2b_{2n}+\ldots+a_nb_{nn}=a_nb_{1n};$$
for the second row:
$$a_2b_{22}=a_2b_{22},\quad a_2b_{23}+a_3b_{33}=a_3b_{23},\quad\ldots,\quad a_2b_{2n}+a_3b_{3n}+\ldots+a_nb_{nn}=a_nb_{2n};$$
and for the $n$-th row:
$$a_nb_{nn}=a_nb_{nn}.$$
Letting $s$ be the row number, $s=1,\ldots,n$, and $m=1,\ldots,n$ with $s\le m$, we can write the system as
$$\sum_{i=s}^m a_ib_{im} = a_mb_{sm}.$$
For $s=m-1$,
$$a_{m-1}b_{m-1\,m}+a_mb_{mm} = a_mb_{m-1\,m},$$
and then
$$b_{m-1\,m} = \frac{a_m}{a_m-a_{m-1}}\,b_{mm}.$$
If $b_{11}=b_{22}=\ldots=b_{nn}=1$, then
$$b_{m-1\,m} = \frac{a_m}{a_m-a_{m-1}}.$$
For $s=m-2$,
$$a_{m-2}b_{m-2\,m}+a_{m-1}b_{m-1\,m}+a_mb_{mm} = a_mb_{m-2\,m},$$
and then
$$b_{m-2\,m} = \frac1{a_m-a_{m-2}}\left(a_{m-1}b_{m-1\,m}+a_mb_{mm}\right) = \frac1{a_m-a_{m-2}}\left(\frac{a_{m-1}a_m}{a_m-a_{m-1}}+a_m\right) = \frac{a_m^2}{(a_m-a_{m-1})(a_m-a_{m-2})}.$$
For $s=m-3$,
$$a_{m-3}b_{m-3\,m}+a_{m-2}b_{m-2\,m}+a_{m-1}b_{m-1\,m}+a_mb_{mm} = a_mb_{m-3\,m},$$
and then
$$b_{m-3\,m} = \frac1{a_m-a_{m-3}}\left(a_{m-2}b_{m-2\,m}+a_{m-1}b_{m-1\,m}+a_mb_{mm}\right) =$$
$$= \frac1{a_m-a_{m-3}}\left(\frac{a_{m-2}a_m^2}{(a_m-a_{m-1})(a_m-a_{m-2})}+\frac{a_{m-1}a_m}{a_m-a_{m-1}}+a_m\right) = \frac{a_m^3}{(a_m-a_{m-1})(a_m-a_{m-2})(a_m-a_{m-3})}.$$
Therefore, for $s<m$,
$$b_{m-s\,m} = \frac{a_m^{s}}{\prod_{i=1}^{s}(a_m-a_{m-i})},$$
or, re-indexing, for $s<m$,
$$b_{sm} = \frac{a_m^{m-s}}{\prod_{i=s}^{m-1}(a_m-a_i)},$$
and
$$P = \begin{pmatrix}1&\frac{a_2}{a_2-a_1}&\frac{a_3^2}{(a_3-a_1)(a_3-a_2)}&\cdots&\frac{a_n^{n-1}}{\prod_{i=1}^{n-1}(a_n-a_i)}\\0&1&\frac{a_3}{a_3-a_2}&\cdots&\frac{a_n^{n-2}}{\prod_{i=2}^{n-1}(a_n-a_i)}\\0&0&1&\cdots&\frac{a_n^{n-3}}{\prod_{i=3}^{n-1}(a_n-a_i)}\\ \vdots&&&\ddots&\vdots\\0&0&\cdots&0&1\end{pmatrix}.$$
Finally, $P=(b_{sm})_{s,m=1}^{n}$, where
$$b_{sm} = \begin{cases}1,&s=m,\\0,&m<s,\\ \dfrac{a_m^{m-s}}{\prod_{i=s}^{m-1}(a_m-a_i)},&s<m.\end{cases}$$
We now proceed to find the inverse matrix $P^{-1}$. The inverse of an upper triangular matrix $P$ is another upper triangular matrix
$$P^{-1} = \begin{pmatrix}c_{11}&c_{12}&\cdots&c_{1n}\\0&c_{22}&\cdots&c_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&c_{nn}\end{pmatrix}.$$
The elements $c_{sm}$ of $P^{-1}$ can be found from the system
$$PP^{-1} = \begin{pmatrix}b_{11}&b_{12}&\cdots&b_{1n}\\0&b_{22}&\cdots&b_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&b_{nn}\end{pmatrix}\cdot\begin{pmatrix}c_{11}&c_{12}&\cdots&c_{1n}\\0&c_{22}&\cdots&c_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&c_{nn}\end{pmatrix} = \begin{pmatrix}1&0&\cdots&0\\0&1&\cdots&0\\ \vdots&&\ddots&\vdots\\0&0&\cdots&1\end{pmatrix}.$$
It is clear that
$$b_{mm}c_{mm}=1,\quad m=1,\ldots,n,$$
so the diagonal elements of $P^{-1}$ are the reciprocals of those of $P$:
$$c_{mm} = \frac1{b_{mm}} = 1,\quad m=1,\ldots,n.$$
For the other elements we get
$$\sum_{i=s}^k b_{si}c_{ik} = 0,\quad s=1,\ldots,n,\ k=s+1,\ldots,n.$$
Putting $k=s+1$ we obtain
$$\sum_{i=s}^{s+1}b_{si}c_{i\,s+1} = b_{ss}c_{s\,s+1}+b_{s\,s+1}c_{s+1\,s+1} = 0,\quad s=1,\ldots,n-1,$$
or
$$c_{s\,s+1} = -\frac{b_{s\,s+1}}{b_{ss}}\,c_{s+1\,s+1} = -b_{s\,s+1} = -\frac{a_{s+1}}{a_{s+1}-a_s} = \frac{a_{s+1}}{a_s-a_{s+1}},\quad s=1,\ldots,n-1.$$
For $k=s+2$ we get
$$\sum_{i=s}^{s+2}b_{si}c_{i\,s+2} = b_{ss}c_{s\,s+2}+b_{s\,s+1}c_{s+1\,s+2}+b_{s\,s+2}c_{s+2\,s+2} = 0,\quad s=1,\ldots,n-2,$$
$$c_{s\,s+2} = -\frac1{b_{ss}}\left(b_{s\,s+1}c_{s+1\,s+2}+b_{s\,s+2}c_{s+2\,s+2}\right) = -\left(b_{s\,s+1}c_{s+1\,s+2}+b_{s\,s+2}\right) =$$
$$= \frac{a_{s+1}a_{s+2}}{(a_{s+1}-a_s)(a_{s+2}-a_{s+1})} - \frac{a_{s+2}^2}{(a_{s+2}-a_s)(a_{s+2}-a_{s+1})} =$$
$$= \frac{a_sa_{s+2}(a_{s+2}-a_{s+1})}{(a_{s+1}-a_s)(a_{s+2}-a_s)(a_{s+2}-a_{s+1})} = \frac{a_sa_{s+2}}{(a_{s+1}-a_s)(a_{s+2}-a_s)}.$$
For $k=s+3$ we get
$$\sum_{i=s}^{s+3}b_{si}c_{i\,s+3} = b_{ss}c_{s\,s+3}+b_{s\,s+1}c_{s+1\,s+3}+b_{s\,s+2}c_{s+2\,s+3}+b_{s\,s+3}c_{s+3\,s+3} = 0,\quad s=1,\ldots,n-3,$$
$$c_{s\,s+3} = -\frac1{b_{ss}}\left(b_{s\,s+1}c_{s+1\,s+3}+b_{s\,s+2}c_{s+2\,s+3}+b_{s\,s+3}c_{s+3\,s+3}\right) = -\left(b_{s\,s+1}c_{s+1\,s+3}+b_{s\,s+2}c_{s+2\,s+3}+b_{s\,s+3}\right) =$$
$$= -\frac{a_{s+1}^2a_{s+3}}{(a_{s+1}-a_s)(a_{s+2}-a_{s+1})(a_{s+3}-a_{s+1})} + \frac{a_{s+2}^2a_{s+3}}{(a_{s+2}-a_s)(a_{s+2}-a_{s+1})(a_{s+3}-a_{s+2})} - \frac{a_{s+3}^3}{(a_{s+3}-a_s)(a_{s+3}-a_{s+1})(a_{s+3}-a_{s+2})} =$$
$$= \frac{a_s^2a_{s+3}}{(a_s-a_{s+1})(a_s-a_{s+2})(a_s-a_{s+3})}.$$
Therefore, for $s=1,\ldots,n$ and $m\ge1$,
$$c_{s\,s+m} = \frac{a_s^{m-1}a_{s+m}}{\prod_{i=1}^{m}(a_s-a_{s+i})},$$
that is,
$$c_{sm} = \frac{a_s^{m-s-1}a_m}{\prod_{i=1}^{m-s}(a_s-a_{s+i})} = \frac{a_s^{m-s-1}a_m}{\prod_{i=s+1}^{m}(a_s-a_i)},$$
and
$$P^{-1} = \begin{pmatrix}1&\frac{a_2}{a_1-a_2}&\frac{a_1a_3}{(a_1-a_2)(a_1-a_3)}&\cdots&\frac{a_1^{n-2}a_n}{\prod_{i=2}^{n}(a_1-a_i)}\\0&1&\frac{a_3}{a_2-a_3}&\cdots&\frac{a_2^{n-3}a_n}{\prod_{i=3}^{n}(a_2-a_i)}\\0&0&1&\cdots&\frac{a_3^{n-4}a_n}{\prod_{i=4}^{n}(a_3-a_i)}\\ \vdots&&&\ddots&\vdots\\0&0&\cdots&0&1\end{pmatrix}.$$
Finally, $P^{-1}=(c_{sm})_{s,m=1}^{n}$, where
$$c_{sm} = \begin{cases}1,&s=m,\\0,&m<s,\\ \dfrac{a_s^{m-s-1}a_m}{\prod_{i=s+1}^{m}(a_s-a_i)},&s<m.\end{cases}\qquad\square$$
Theorem 3.2.
Let $\det A\ne0$ and let $a_1,\ldots,a_n$ be pairwise distinct. Then the power $\alpha\in\mathbb{R}$ of the matrix $A$ is
$$A^\alpha = (p_{sm})_{s,m=1}^{n}, \qquad (24)$$
where
$$p_{sm} = \begin{cases}0,&m<s,\\[2pt] \displaystyle\sum_{i=s}^{m}\frac{a_i^{\alpha+m-s-1}\,a_m}{\prod_{j=s,\,j\ne i}^{m}(a_i-a_j)},&s<m,\\[2pt] a_m^\alpha,&m=s.\end{cases} \qquad (25)$$
Proof. By Theorem 3.1 we get $A=PDP^{-1}$. Therefore, for any $\alpha\in\mathbb{R}$ we can write $A^\alpha = PD^\alpha P^{-1}$, or
$$A^\alpha = \begin{pmatrix}p_{11}&p_{12}&\cdots&p_{1n}\\0&p_{22}&\cdots&p_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&p_{nn}\end{pmatrix} = P\cdot\begin{pmatrix}a_1^\alpha&0&\cdots&0\\0&a_2^\alpha&\cdots&0\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n^\alpha\end{pmatrix}\cdot P^{-1} = P\cdot\begin{pmatrix}a_1^\alpha c_{11}&a_1^\alpha c_{12}&\cdots&a_1^\alpha c_{1n}\\0&a_2^\alpha c_{22}&\cdots&a_2^\alpha c_{2n}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n^\alpha c_{nn}\end{pmatrix} =$$
$$= \begin{pmatrix}a_1^\alpha b_{11}c_{11}&a_1^\alpha b_{11}c_{12}+a_2^\alpha b_{12}c_{22}&\cdots&a_1^\alpha b_{11}c_{1n}+a_2^\alpha b_{12}c_{2n}+\ldots+a_n^\alpha b_{1n}c_{nn}\\0&a_2^\alpha b_{22}c_{22}&\cdots&a_2^\alpha b_{22}c_{2n}+a_3^\alpha b_{23}c_{3n}+\ldots+a_n^\alpha b_{2n}c_{nn}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n^\alpha b_{nn}c_{nn}\end{pmatrix}.$$
Since $c_{mm}=\frac1{b_{mm}}=1$, $m=1,\ldots,n$, on the main diagonal we obtain $p_{mm}=a_m^\alpha$, $m=1,\ldots,n$. For $s<m$ we get $p_{sm}=\sum_{j=s}^m a_j^\alpha b_{sj}c_{jm}$. For $b_{sj}$ and $c_{jm}$ we have the representations from Theorem 3.1:
$$b_{sj} = \begin{cases}1,&s=j,\\0,&j<s,\\ \dfrac{a_j^{j-s}}{\prod_{i=s}^{j-1}(a_j-a_i)},&s<j,\end{cases}\qquad c_{jm} = \begin{cases}1,&j=m,\\0,&m<j,\\ \dfrac{a_j^{m-j-1}a_m}{\prod_{i=j+1}^{m}(a_j-a_i)},&j<m.\end{cases}$$
When $j=s<m$ we get
$$b_{ss}=1,\qquad c_{sm} = \frac{a_s^{m-s-1}a_m}{\prod_{i=s+1}^{m}(a_s-a_i)} = \frac{a_s^{m-s-1}a_m}{\prod_{i=s,\,i\ne s}^{m}(a_s-a_i)};$$
when $j=m>s$ we get
$$c_{mm}=1,\qquad b_{sm} = \frac{a_m^{m-s}}{\prod_{i=s}^{m-1}(a_m-a_i)} = \frac{a_m^{m-s-1}a_m}{\prod_{i=s,\,i\ne m}^{m}(a_m-a_i)}.$$
For $s<j<m$ we have
$$b_{sj}c_{jm} = \frac{a_j^{j-s}}{\prod_{i=s}^{j-1}(a_j-a_i)}\cdot\frac{a_j^{m-j-1}a_m}{\prod_{i=j+1}^{m}(a_j-a_i)} = \frac{a_j^{m-s-1}a_m}{\prod_{i=s,\,i\ne j}^{m}(a_j-a_i)},$$
therefore for $s<m$
$$p_{sm} = \sum_{i=s}^{m}a_i^\alpha\,\frac{a_i^{m-s-1}a_m}{\prod_{j=s,\,j\ne i}^{m}(a_i-a_j)}.$$
Taking into account the form of $A^\alpha$, we get that $p_{sm}$ can be written as (25). □
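The closed formulas of Theorems 3.1-3.2 can be checked numerically (a sketch on one illustrative set of distinct entries; indices are 0-based here, and the product convention $\prod_{j\ne i}(a_i-a_j)$ is the one these checks verify):

```python
import numpy as np

# Sketch: build A, the explicit P and P^(-1), and the entrywise power p_sm,
# then verify A = P D P^(-1), P P^(-1) = I, alpha = 1 recovers A, and the
# semigroup property A^(1/2) A^(1/2) = A.
a = np.array([1.0, 2.5, 4.0, 7.0])
n = len(a)
A = np.triu(np.tile(a, (n, 1)))     # row s: (0, ..., 0, a_s, a_{s+1}, ..., a_n)

def b(s, m):
    if s == m: return 1.0
    if m < s: return 0.0
    return a[m]**(m - s) / np.prod([a[m] - a[i] for i in range(s, m)])

def c(s, m):
    if s == m: return 1.0
    if m < s: return 0.0
    return a[s]**(m - s - 1) * a[m] / np.prod([a[s] - a[i] for i in range(s + 1, m + 1)])

P = np.array([[b(s, m) for m in range(n)] for s in range(n)])
Pinv = np.array([[c(s, m) for m in range(n)] for s in range(n)])

def power(alpha):
    p = np.diag(a**alpha)
    for s in range(n):
        for m in range(s + 1, n):
            p[s, m] = sum(a[i]**(alpha + m - s - 1) * a[m]
                          / np.prod([a[i] - a[j] for j in range(s, m + 1) if j != i])
                          for i in range(s, m + 1))
    return p

err_decomp = np.abs(P @ np.diag(a) @ Pinv - A).max()
err_identity = np.abs(P @ Pinv - np.eye(n)).max()
err_alpha1 = np.abs(power(1.0) - A).max()
half = power(0.5)
err_semigroup = np.abs(half @ half - A).max()
```

Since $A^\alpha = PD^\alpha P^{-1}$, the semigroup property here is automatic, and all four errors should be at roundoff level.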

3.3. Resolvent of a Diagonalizable Triangle Matrix

The resolvent serves as a foundational tool for our considerations. Here, we derive the resolvent of a diagonalizable triangular matrix A. In this case, the resolvent takes on a particularly simple and computationally efficient form.
Lemma 3.1.
Let $\det A\ne0$ and let $a_1,\ldots,a_n$ be pairwise distinct. Then the matrix
$$A = \begin{pmatrix}a_1&a_2&\cdots&a_n\\0&a_2&\cdots&a_n\\ \vdots&&\ddots&\vdots\\0&0&\cdots&a_n\end{pmatrix}$$
has the resolvent $R(\lambda,A)=(\lambda I+A)^{-1}$ in the form
$$R(\lambda,A) = (r_{sm})_{s,m=1}^{n}, \qquad (26)$$
where
$$r_{sm} = \begin{cases}0,&m<s,\\[2pt] \displaystyle\sum_{i=s}^{m}\frac{a_i^{m-s-1}\,a_m}{(\lambda+a_i)\prod_{j=s,\,j\ne i}^{m}(a_i-a_j)},&s<m,\\[2pt] \dfrac1{\lambda+a_m},&m=s.\end{cases}$$
Proof. By Theorem 3.1 we obtain
$$R(\lambda,A) = P\,R(\lambda,D)\,P^{-1} = P\cdot\begin{pmatrix}\frac1{\lambda+a_1}&0&\cdots&0\\0&\frac1{\lambda+a_2}&\cdots&0\\ \vdots&&\ddots&\vdots\\0&0&\cdots&\frac1{\lambda+a_n}\end{pmatrix}\cdot P^{-1} = P\cdot\begin{pmatrix}\frac{c_{11}}{\lambda+a_1}&\frac{c_{12}}{\lambda+a_1}&\cdots&\frac{c_{1n}}{\lambda+a_1}\\0&\frac{c_{22}}{\lambda+a_2}&\cdots&\frac{c_{2n}}{\lambda+a_2}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&\frac{c_{nn}}{\lambda+a_n}\end{pmatrix} =$$
$$= \begin{pmatrix}\frac{b_{11}c_{11}}{\lambda+a_1}&\sum_{i=1}^{2}\frac{b_{1i}c_{i2}}{\lambda+a_i}&\cdots&\sum_{i=1}^{n}\frac{b_{1i}c_{in}}{\lambda+a_i}\\0&\frac{b_{22}c_{22}}{\lambda+a_2}&\cdots&\sum_{i=2}^{n}\frac{b_{2i}c_{in}}{\lambda+a_i}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&\frac{b_{nn}c_{nn}}{\lambda+a_n}\end{pmatrix}.$$
Therefore
$$R(\lambda,A) = \begin{pmatrix}\frac1{\lambda+a_1}&\sum_{i=1}^{2}\frac{a_2}{(\lambda+a_i)\prod_{j=1,\,j\ne i}^{2}(a_i-a_j)}&\cdots&\sum_{i=1}^{n}\frac{a_i^{n-2}a_n}{(\lambda+a_i)\prod_{j=1,\,j\ne i}^{n}(a_i-a_j)}\\0&\frac1{\lambda+a_2}&\cdots&\sum_{i=2}^{n}\frac{a_i^{n-3}a_n}{(\lambda+a_i)\prod_{j=2,\,j\ne i}^{n}(a_i-a_j)}\\ \vdots&&\ddots&\vdots\\0&0&\cdots&\frac1{\lambda+a_n}\end{pmatrix},$$
which gives (26). □
For constructing $A^\alpha$, we assume now that $a_i>0$, $i=1,\ldots,n$. Then, since
$$\int_0^\infty\frac{\lambda^{\alpha-1}}{\lambda+a_i}\,d\lambda = \frac{\pi a_i^{\alpha-1}}{\sin(\alpha\pi)},$$
by (1) we obtain (24).
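Both ingredients of this passage can be verified numerically (a sketch with illustrative values: the entrywise resolvent formula of Lemma 3.1 is compared with a direct matrix inverse, and the scalar integral identity is checked by quadrature on a truncated logarithmic grid):

```python
import numpy as np

# Sketch: check Lemma 3.1 entrywise (0-based indices) and the scalar identity
# Int_0^inf lam^(alpha-1)/(lam + a) dlam = pi a^(alpha-1)/sin(alpha*pi).
a = np.array([0.5, 1.5, 3.0])
n = len(a)
A = np.triu(np.tile(a, (n, 1)))
lam = 2.0

def r(s, m):
    if s == m: return 1.0 / (lam + a[m])
    if m < s: return 0.0
    return sum(a[i]**(m - s - 1) * a[m]
               / ((lam + a[i]) * np.prod([a[i] - a[j] for j in range(s, m + 1) if j != i]))
               for i in range(s, m + 1))

R_formula = np.array([[r(s, m) for m in range(n)] for s in range(n)])
R_direct = np.linalg.inv(lam * np.eye(n) + A)
err_resolvent = np.abs(R_formula - R_direct).max()

alpha, aval = 0.4, 3.0
grid = np.logspace(-8, 8, 200001)
vals = grid**(alpha - 1) / (grid + aval)
integral = np.sum((vals[:-1] + vals[1:]) / 2 * np.diff(grid))
exact = np.pi * aval**(alpha - 1) / np.sin(alpha * np.pi)
```

Applying the identity to each term of the partial-fraction form of $r_{sm}$ is exactly what turns (26) into the entries (25) of $A^\alpha$.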

3.4. Definition of Fractional Power of the Riemann–Stieltjes Integral Using Matrix Approach

It is clear that the constructions of Section 3.1 are intended primarily to generalize the concept of a fractional integral of a function $f$ with respect to another function $g$.
Let us consider a function $g(t)$ of bounded variation on $[a,b]$. It is known that in this case, for $f\in C([a,b])$, there exists the Riemann–Stieltjes integral
$$\int_a^x f(t)\,dg(t),\qquad x\le b.$$
If $g(t)\in C^{1}([a,b])$ and $w(x)=g'(x)$, then
$$(I_{a+;w}^{1}f)(x)=\int_a^x f(t)\,dg(t)=\int_a^x f(t)\,g'(t)\,dt=\int_a^x f(t)\,w(t)\,dt,\qquad x\le b.$$
The iterated operator $I_{a+;w}^{k}$ is
$$(I_{a+;w}^{k}f)(x)=\frac{1}{(k-1)!}\int_a^x f(t)\,\big(g(x)-g(t)\big)^{k-1}\,w(t)\,dt.$$
Let $P=(x_0,x_1,\dots,x_n)$ be a partition of $[a,x]$, $a<x\le b$, that is, $a=x_0<x_1<\dots<x_n=x$ with $x_k-x_{k-1}=h=\frac{x-a}{n}>0$, so $x_0=a$, $x_1=x_0+h$, $x_2=x_0+2h$, ..., $x_{n-1}=x_0+(n-1)h$, $x_n=x$; we also assume $f(a)=0$, $g(a)=0$.
The total variation of a continuous real-valued (or, more generally, complex-valued) function $g$ is
$$V_a^b[g]=\sup_{P}\sum_{i=0}^{n-1}|g(x_{i+1})-g(x_i)|,$$
where the supremum is taken over all partitions of the interval considered.
For the integral $I_{a+;w}^{1}$ the estimate
$$|(I_{a+;w}^{1}f)(x)|=\left|\int_a^x f(t)\,dg(t)\right|\le\sup_{t\in(a,x]}|f(t)|\;V_a^x[g]$$
holds, where $V_a^x[g]$ is the total variation of $g$ on $[a,x]$.
Let $\xi_p\in[x-(p+1)h,\,x-ph]$, $w(\xi_p)=w_p$, $f(\xi_p)=f_p$, $p=0,\dots,n-1$. Then $I_{a+;w}^{1}$ can be written in the form
$$(I_{a+;w}^{1}f)(x)=\lim_{n\to\infty}\sum_{p=0}^{n-1}f_p\,w_p\,h=\lim_{n\to\infty}\sum_{p=0}^{n-1}f(\xi_p)\,w(\xi_p)\,h.$$
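A quick numerical illustration of this limit (ours, with the hypothetical choices $g(t)=t^{2}$, so $w(t)=2t$, and $f(t)=\cos t$ on $[0,1]$):

```python
import math

def stieltjes_sum(f, w, x, n):
    """One-sided approximation h * sum_p f(x - p h) w(x - p h) of the
    Riemann-Stieltjes integral \\int_0^x f(t) dg(t) with w = g'."""
    h = x / n
    return h * sum(f(x - p * h) * w(x - p * h) for p in range(n))

# g(t) = t^2, w(t) = 2t:  \int_0^1 cos(t) d(t^2) = 2 (cos 1 + sin 1 - 1)
exact = 2.0 * (math.cos(1.0) + math.sin(1.0) - 1.0)
approx = stieltjes_sum(math.cos, lambda t: 2.0 * t, 1.0, 20000)
assert abs(approx - exact) < 1e-3
```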
We associate with the operator $I_{a+;w}^{1}$ the $(n\times n)$ matrix
$$A_{w;n}=h\begin{pmatrix}w_0&w_1&\dots&w_{n-1}\\0&w_1&\dots&w_{n-1}\\ \vdots&\vdots&\ddots&\vdots\\0&0&\dots&w_{n-1}\end{pmatrix}$$
and
$$(I_{a+;w}^{1}f)(x)=\int_a^x f(t)\,w(t)\,dt=\lim_{n\to+\infty}\,(A_{w;n}\cdot f_n)\cdot I_1^{n},\qquad h=\frac{x-a}{n},$$
where $f_n=(f_0,f_1,\dots,f_{n-1})$ and $I_1^{n}=(1,0,\dots,0)\in\mathbb{R}^{n}$.
Taking $\xi_p=x-ph$, by the action of the approximating operator $I_{n,a+;w}^{1}$ on $f$ we now understand
$$(I_{n,a+;w}^{1}f)(x)=\sum_{p=0}^{n-1}f_p\,w_p\,h,\qquad f_p=f(x-ph),\quad w_p=w(x-ph).$$
The iterated operator $I_{n,a+;w}^{k}$ has the form
$$(I_{n,a+;w}^{k}f)(x)=\frac{h^{k}}{(k-1)!}\sum_{p=0}^{n-1}\frac{(p+k-1)!}{p!}\,f(x-ph)\,w(x-ph).$$
If $\det A_{w;n}\ne 0$, we can construct the power $A_{w;n}^{\alpha}=e^{\alpha\ln A_{w;n}}$, choosing for uniqueness the principal branch of the logarithm, and define $A_{w;n}^{\alpha}$ by Theorem 3.2. Therefore, we obtain the approximating operator $I_{n,a+;w}^{\alpha}$ in the form
$$(I_{n,a+;w}^{\alpha}f)(x)=h^{\alpha}\sum_{p=0}^{n-1}\sum_{i=0}^{p}\frac{w_i^{\,\alpha+p-1}\,w_p}{\prod_{j=0,\,j\ne i}^{p}(w_i-w_j)}\,f_p.$$
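The construction can be exercised end to end in a short Python sketch (ours; it assumes, as above, pairwise distinct weights). Take the weight $w(t)=t$ on $[0,1]$ with nodes $w_p=x-ph$: we build $A_{w;n}$, form its half power from the closed-form entries derived earlier, and check that the half power squares back to $A_{w;n}$, while the first row of $A_{w;n}$ itself reproduces the quadrature sum for $\int_0^x f(t)\,t\,dt$ with $f\equiv 1$:

```python
def frac_power(a, alpha):
    """A^alpha for the upper-triangular matrix A[s][m] = a[m] (m >= s)."""
    n = len(a)
    P = [[0.0] * n for _ in range(n)]
    for s in range(n):
        P[s][s] = a[s] ** alpha
        for m in range(s + 1, n):
            total = 0.0
            for i in range(s, m + 1):
                denom = 1.0
                for j in range(s, m + 1):
                    if j != i:
                        denom *= a[i] - a[j]
                total += a[i] ** (alpha + m - s - 1) * a[m] / denom
            P[s][m] = total
    return P

# Erdelyi-Kober-type weight w(t) = t on [0, x]: nodes w_p = x - p h
x, n = 1.0, 8
h = x / n
a = [h * (x - p * h) for p in range(n)]   # a_p = h * w_p, pairwise distinct

A = [[a[m] if m >= s else 0.0 for m in range(n)] for s in range(n)]
half = frac_power(a, 0.5)

# the half power must square back to A_{w;n}
sq = [[sum(half[i][k] * half[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
assert all(abs(sq[i][j] - A[i][j]) < 1e-8 for i in range(n) for j in range(n))

# first power, f = 1: h * sum_p w_p approximates \int_0^1 t dt = 1/2
approx = sum(A[0][m] for m in range(n))
assert abs(approx - 0.5) < 0.1
```

For larger $n$ the products in the denominators become ill conditioned in floating point, which is why a modest $n$ is used here; the formula itself is exact.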
Now we can apply Theorem 2.1 to the operators $A=I_{a+;w}^{1}$ and $C=I_{n,a+;w}^{1}$. Let $B=C_w([a,b])$ with the norm
$$\|f\|_{C_w([a,b])}=\max_{x\in[a,b]}|f(x)\,w(x)|$$
and $D=C_w^{1}([a,b])$ with the norm
$$\|f\|_{C_w^{1}([a,b])}=\max_{x\in[a,b]}\{|f(x)\,w(x)|,\ |(f(x)\,w(x))'|\},$$
$n\in\mathbb{N}$. It is known that for $f,w\in C([a,b])$ and $\lambda>0$
$$\|\lambda R(\lambda,I_{a+;w}^{1})\|_{C_w([a,b])}<M<\infty,\qquad \|\lambda R(\lambda,I_{n,a+;w}^{1})\|_{C_w([a,b])}<M<\infty,$$
and
$$\|\lambda R(\lambda,I_{n,a+;w}^{1})\|_{C_w^{1}([a,b])}<M<\infty.$$
Also, it is easy to see that
$$\|(I_{a+;w}^{1}-I_{n,a+;w}^{1})f\|_{C_w([a,b])}\le\frac{(b-a)\,h}{2}\,\|f\|_{C_w^{1}([a,b])},\qquad h=\frac{b-a}{n}.$$
Then
$$\|(R(\lambda;I_{a+;w}^{1})-R(\lambda;I_{n,a+;w}^{1}))f\|_{C_w([a,b])}\le\frac{M^{2}(b-a)\,h}{2\lambda^{2}}\,\|f\|_{C_w^{1}([a,b])},$$
and
$$\|((I_{a+;w}^{1})^{\alpha}-(I_{n,a+;w}^{1})^{\alpha})f\|_{C_w([a,b])}\le c\,\|f\|_{C_w^{1}([a,b])}\,h^{\alpha},\qquad h\to 0,\ n\to\infty,$$
where
$$c=\frac{M^{2}+M}{\alpha(1-\alpha)}\left(\frac{b-a}{2}\right)^{\alpha}.$$

3.5. Rate of Convergence for Concrete Fractional Integrals

1. Let us find the rate of convergence for the Riemann–Liouville integral. We consider $B=C([0,1])$ and $D=C^{1}([0,1])$ with the norms
$$\|f\|_{C([0,1])}=\max_{x\in[0,1]}|f(x)|\quad\text{and}\quad\|f\|_{C^{1}([0,1])}=\max_{x\in[0,1]}\{|f(x)|,\ |f'(x)|\},$$
respectively, $n\in\mathbb{N}$, $h=\frac{x}{n}$,
$$(Af)(x)=(I_{0+}^{1}f)(x)=\int_0^x f(t)\,dt,\qquad (Cf)(x)=(I_{n,0+}^{1}f)(x)=h\sum_{p=0}^{n-1}f(x-ph)$$
(see Subsection 2.3). Then
$$\|(A-C)f\|_{C([0,1])}=\|I_{0+}^{1}f-I_{n,0+}^{1}f\|_{C([0,1])}\le\frac{h}{2}\,\|f\|_{C^{1}([0,1])},\qquad h=\frac{1}{n},$$
$$R(\lambda,I_{0+}^{1})f=\frac{1}{\lambda}f-\frac{1}{\lambda^{2}}\int_0^x e^{-\frac{x-t}{\lambda}}f(t)\,dt,$$
$$R(\lambda,I_{n,0+}^{1})f=\frac{1}{\lambda}f+\frac{1}{\lambda}\sum_{k=1}^{\infty}\frac{(-1)^{k}}{\Gamma(k)}\left(\frac{h}{\lambda}\right)^{k}\sum_{p=0}^{n-1}\frac{\Gamma(k+p)}{\Gamma(p+1)}\,f(x-ph)=\frac{1}{\lambda}f+\frac{1}{\lambda}\sum_{p=0}^{n-1}\frac{f(x-ph)}{\Gamma(p+1)}\sum_{k=1}^{\infty}\frac{(-1)^{k}\,\Gamma(k+p)}{\Gamma(k)}\left(\frac{h}{\lambda}\right)^{k}=\frac{1}{\lambda}f-\frac{h}{\lambda}\sum_{p=0}^{n-1}\frac{\lambda^{p}}{(\lambda+h)^{p+1}}\,f(x-ph).$$
We get
$$\|\lambda R(\lambda,I_{0+}^{1})f\|_{C([0,1])}\le\max_{x\in[0,1]}\lambda\left(\frac{1}{\lambda}+\frac{1-e^{-\frac{x}{\lambda}}}{\lambda}\right)\|f\|_{C([0,1])}\le 2\,\|f\|_{C([0,1])},$$
$$\|\lambda R(\lambda,I_{n,0+}^{1})f\|_{C([0,1])}\le\max_{x\in[0,1]}\lambda\left(\frac{1}{\lambda}+\frac{1}{\lambda}\left(1-\frac{\lambda^{n}}{\left(\lambda+\frac{x}{n}\right)^{n}}\right)\right)\|f\|_{C([0,1])}\le 2\,\|f\|_{C([0,1])},$$
$$\|\lambda R(\lambda,I_{n,0+}^{1})f\|_{C^{1}([0,1])}\le\max_{x\in[0,1]}\max\left\{2-\frac{\lambda^{n}}{\left(\lambda+\frac{x}{n}\right)^{n}},\ \frac{\lambda^{n}}{\left(\lambda+\frac{x}{n}\right)^{n-1}}\right\}\|f\|_{C^{1}([0,1])}=M\,\|f\|_{C^{1}([0,1])},$$
and all conditions of Theorem 2.1 are satisfied; therefore
$$\|(R(\lambda;I_{0+}^{1})-R(\lambda;I_{n,0+}^{1}))f\|_{C([0,1])}\le\frac{M\,h}{\lambda^{2}}\,\|f\|_{C^{1}([0,1])},$$
and
$$\|I_{0+}^{\alpha}f-I_{n,0+}^{\alpha}f\|_{C([0,1])}\le\frac{\gamma}{2^{\alpha}}\,h^{\alpha}\,\|f\|_{C^{1}([0,1])},\qquad h\to 0,\ n\to\infty.$$
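For the Riemann–Liouville case all weights $w_p$ coincide, so the closed-form diagonalization above does not apply. Replacing $k$ by $\alpha$ in the iterated-sum formula for $I_{n,a+;w}^{k}$ with $w\equiv 1$ gives the classical Grünwald–Letnikov-type approximation of $I_{0+}^{\alpha}$; the Python sketch below (ours, purely illustrative) checks it against the exact value $I_{0+}^{\alpha}1(x)=x^{\alpha}/\Gamma(\alpha+1)$:

```python
import math

def rl_discrete(f, x, n, alpha):
    """Grunwald-Letnikov-type approximation of the Riemann-Liouville
    integral I^alpha f(x): h^alpha / Gamma(alpha) *
    sum_p Gamma(alpha + p) / Gamma(p + 1) * f(x - p h)."""
    h = x / n
    scale = h ** alpha / math.gamma(alpha)
    total = 0.0
    coef = math.gamma(alpha)           # Gamma(alpha + p) / Gamma(p + 1) at p = 0
    for p in range(n):
        total += coef * f(x - p * h)
        coef *= (alpha + p) / (p + 1)  # recurrence for the next coefficient
    return scale * total

# f = 1:  (I^{1/2} 1)(x) = x^{1/2} / Gamma(3/2); compare at x = 1
exact = 1.0 / math.gamma(1.5)
approx = rl_discrete(lambda t: 1.0, 1.0, 2000, 0.5)
assert abs(approx - exact) < 1e-3
```

The coefficient recurrence avoids evaluating large Gamma ratios directly, so the loop stays stable even for large $n$.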
2. Let us find the rate of convergence for the Erdélyi–Kober integral. We consider $B=C_x([0,1])$ and $D=C_x^{1}([0,1])$ with the norms
$$\|f\|_{C_x([0,1])}=\max_{x\in[0,1]}|x\,f(x)|\quad\text{and}\quad\|f\|_{C_x^{1}([0,1])}=\max_{x\in[0,1]}\{|x\,f(x)|,\ |(x\,f(x))'|\},$$
respectively, $n\in\mathbb{N}$, $h=\frac{x}{n}$,
$$(Af)(x)=(I_{0+;x}^{1}f)(x)=\int_0^x f(t)\,t\,dt,\qquad (Cf)(x)=(I_{n,0+;x}^{1}f)(x)=h\sum_{p=0}^{n-1}f(x-ph)\,(x-ph)$$
(see Subsection 2.4). Then
$$\|(A-C)f\|_{C_x([0,1])}=\|I_{0+;x}^{1}f-I_{n,0+;x}^{1}f\|_{C_x([0,1])}\le\frac{h}{2}\,\|f\|_{C_x^{1}([0,1])},\qquad h=\frac{1}{n},$$
$$\|\lambda R(\lambda,I_{0+;x}^{1})f\|_{C_x([0,1])}=\left\|f-\frac{1}{\lambda}\int_0^x e^{-\frac{x^{2}-t^{2}}{2\lambda}}f(t)\,t\,dt\right\|_{C_x([0,1])}\le 2\,\|f\|_{C_x([0,1])},$$
$$\|\lambda R(\lambda,I_{n,0+;x}^{1})f\|_{C_x([0,1])}=\left\|f-\sum_{p=0}^{n-1}\frac{\lambda^{p}\,f(x-ph)\,(x-ph)}{(\lambda+h)^{p+1}}\right\|_{C_x([0,1])}\le 2\,\|f\|_{C_x([0,1])},$$
$$\|\lambda R(\lambda,I_{n,0+;x}^{1})f\|_{C_x^{1}([0,1])}\le M\,\|f\|_{C_x^{1}([0,1])},$$
and all conditions of Theorem 2.1 are satisfied; therefore
$$\|(R(\lambda;I_{0+;x}^{1})-R(\lambda;I_{n,0+;x}^{1}))f\|_{C_x([0,1])}\le\frac{M\,h}{\lambda^{2}}\,\|f\|_{C_x^{1}([0,1])},$$
and
$$\|I_{0+;x}^{\alpha}f-I_{n,0+;x}^{\alpha}f\|_{C_x([0,1])}\le\frac{\gamma}{2^{\alpha}}\,h^{\alpha}\,\|f\|_{C_x^{1}([0,1])},\qquad h\to 0,\ n\to\infty.$$
The case of Hadamard-type fractional integral is considered similarly.

4. Discussion

The method presented in this article is universal and applies to a broad class of operators and their fractional powers. A natural future application is to integrals with respect to signed measures; the study of fractional powers of operators associated with signed measures was initiated in [23].

Author Contributions

Conceptualization, V. N. Kolokoltsov; methodology, V. N. Kolokoltsov and E. L. Shishkina; validation, V. N. Kolokoltsov and E. L. Shishkina; formal analysis, V. N. Kolokoltsov and E. L. Shishkina; investigation, V. N. Kolokoltsov; resources, V. N. Kolokoltsov and E. L. Shishkina; data curation, E. L. Shishkina; writing—original draft preparation, V. N. Kolokoltsov and E. L. Shishkina; writing—review and editing, V. N. Kolokoltsov and E. L. Shishkina; visualization, V. N. Kolokoltsov and E. L. Shishkina; supervision, V. N. Kolokoltsov; project administration, V. N. Kolokoltsov and E. L. Shishkina. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lamb, W.; McBride, A.C. On relating two approaches to fractional calculus. J. Math. Anal. Appl. 1988, 132, 590–610.
  2. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives; Gordon and Breach Science Publishers: Amsterdam, The Netherlands, 1993; 976 p.
  3. Podlubny, I. Fractional Differential Equations. An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Mathematics in Science and Engineering, Vol. 198; Academic Press: San Diego, CA, USA, 1999; 340 p.
  4. Sprinkhuizen-Kuyper, I.G. A fractional integral operator corresponding to negative powers of a certain second-order differential operator. J. Math. Anal. Appl. 1979, 72, 674–702.
  5. Phillips, R.S. On the generation of semi-groups of linear operators. Pacific J. Math. 1952, 2, 343–369.
  6. Hille, E.; Phillips, R.S. Functional Analysis and Semi-Groups, 2nd ed.; Amer. Math. Soc. Colloq. Publ.: Providence, RI, USA, 1996; 808 p.
  7. Balakrishnan, A.V. An operational calculus for infinitesimal generators of semigroups. Trans. Amer. Math. Soc. 1959, 91, 330–353.
  8. Balakrishnan, A.V. Fractional powers of closed operators and semigroups generated by them. Pacific J. Math. 1960, 10, 419–437.
  9. Kato, T. Note on fractional powers of linear operators. Proc. Japan Acad. 1960, 36, 94–96.
  10. Komatsu, H. Fractional powers of operators. Pacific J. Math. 1966, 19, 285–346.
  11. Hövel, H.W.; Westphal, U. Fractional powers of closed operators. Studia Math. 1972, 42, 177–194.
  12. Martinez, C.; Sanz, M.; Marco, L. Fractional powers of operators. J. Math. Soc. Japan 1988, 40, 331–347.
  13. Krasnosel'skii, M.A.; Sobolevskii, P.E. Fractional powers of operators acting in Banach spaces. Doklady Akad. Nauk SSSR 1959, 129, 499–502.
  14. Krasnosel'skii, M.A.; Zabreyko, P.P.; Pustylnik, E.I.; Sobolevski, P.E. Integral Operators in Spaces of Summable Functions; Springer, 2011; 536 p.
  15. Kiryakova, V. Generalized Fractional Calculus and Applications; Longman/J. Wiley: Harlow/New York; Chapman and Hall/CRC: London, UK, 1994.
  16. Tarasov, V.E. Fractional Dynamics; Springer: Berlin/Heidelberg, Germany, 2010.
  17. Kolokoltsov, V.N. The probabilistic point of view on the generalized fractional partial differential equations. Fract. Calc. Appl. Anal. 2019, 22, 543–600.
  18. Kolokoltsov, V.N.; Shishkina, E.L. Matrix Approach to the Fractional Calculus. arXiv 2025, arXiv:2512.10330. Available online: https://arxiv.org/abs/2512.10330 (accessed on 11 December 2025).
  19. Kolokoltsov, V.N. Markov Processes, Semigroups and Generators; De Gruyter: Berlin/New York, 2011.
  20. Kolmogorov, A.N.; Fomin, S.V. Elements of the Theory of Functions and Functional Analysis, Vol. 1: Metric and Normed Spaces; Graylock Press: Rochester, NY, USA, 1957; 130 p.
  21. Butzer, P.L.; Kilbas, A.A.; Trujillo, J.J. Stirling functions of the second kind in the setting of difference and fractional calculus. Numer. Funct. Anal. Optim. 2003, 24, 673–711.
  22. Butzer, P.L.; Kilbas, A.A.; Trujillo, J.J. Fractional calculus in the Mellin setting and Hadamard-type fractional integrals. J. Math. Anal. Appl. 2002, 269, 1–27.
  23. Kolokoltsov, V.N.; Shishkina, E.L. Fractional Calculus for Non-Discrete Signed Measures. Mathematics 2024, 12, 2804.