Preprint
Article

This version is not peer-reviewed.

An Elementary Theory of Indefinite Summation Using Integral Transforms

Submitted: 18 May 2025
Posted: 19 May 2025


Abstract
We develop a generalized framework for a novel approach to indefinite summation through the use of integral transforms. Central to our development is the continuous binomial transform, through which we derive key identities that validate the consistency and effectiveness of the method. The framework further extends to accommodate variable step sizes and addresses the limitations of general nonlinear transformations of the summation index. Our results demonstrate that integral transforms are a powerful and flexible tool for the analysis and computation of discrete indefinite sums.

1. Introduction

1.1. Motivations and Definitions

Indefinite summation, or antidifferencing, provides the discrete analogue of antidifferentiation in classical calculus. Given a sequence $g(x)$, any function $f(x)$ satisfying
$$\Delta f(x) := f(x+1) - f(x) = g(x)$$
is called an antidifference (or indefinite sum) of $g$. Summing from $a$ to $b-1$ then yields the discrete Fundamental Theorem of Calculus,
$$\sum_{x=a}^{b-1} g(x) = f(b) - f(a).$$
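These discrete identities are easy to sanity-check numerically. A minimal sketch (our own illustration, not part of the paper): take $g(x) = x$, whose antidifference is $f(x) = x(x-1)/2$.

```python
# Sanity check of the antidifference definition and the discrete
# Fundamental Theorem of Calculus, for g(x) = x with f(x) = x(x-1)/2.

def g(x):
    return x

def f(x):
    return x * (x - 1) // 2

def delta(fn, x):
    """Forward difference operator: Δf(x) = f(x+1) - f(x)."""
    return fn(x + 1) - fn(x)

# Δf = g pointwise
assert all(delta(f, x) == g(x) for x in range(50))

# Telescoping: sum_{x=a}^{b-1} g(x) = f(b) - f(a)
a, b = 3, 12
assert sum(g(x) for x in range(a, b)) == f(b) - f(a)
```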

1.2. Euler–Maclaurin Formula

A classical tool for constructing antidifferences is the Euler–Maclaurin formula. Let
$$f(x) = \sum_{n=0}^{\infty}\frac{\partial_x^{(n)} f\big|_a}{n!}\,(x-a)^n, \qquad f(x+1) = \sum_{n=0}^{\infty}\frac{\partial_x^{(n)} f\big|_a}{n!}\,(x+1-a)^n.$$
Setting $a = x$, we have:
$$f(x+1) = \sum_{n=0}^{\infty}\frac{\partial_x^{(n)} f\big|_x}{n!}\,(1)^n = e^{\partial_x} f(x).$$
Therefore:
$$\Delta f(x) = \big(e^{\partial_x}-1\big) f(x) = g(x) \implies f(x) = \frac{g(x)}{e^{\partial_x}-1} = \sum_{n=0}^{\infty}\frac{B_n}{n!}\,\partial_x^{\,n-1} g(x) = \partial_x^{-1} g(x) - \frac{1}{2}\,g(x) + \sum_{n=1}^{\infty}\frac{B_{2n}}{(2n)!}\,\partial_x^{\,2n-1} g(x) + R,$$
where B n are the Bernoulli numbers, and R is the constant term that becomes the relevant remainder term.
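For polynomial $g$ the Euler–Maclaurin series terminates, so the antidifference can be checked exactly; a small sketch (our own, using exact rationals) for $g(x) = x^2$:

```python
from fractions import Fraction

# Euler-Maclaurin antidifference of g(x) = x^2 (the series terminates
# for polynomials):  f = ∫g dx - g/2 + (B_2/2!) g', with B_2 = 1/6,
# giving f(x) = x^3/3 - x^2/2 + x/6.
B2 = Fraction(1, 6)

def f(x):
    x = Fraction(x)
    return x**3 / 3 - x**2 / 2 + B2 / 2 * (2 * x)

def g(x):
    return x * x

# Δf(x) = f(x+1) - f(x) should reproduce g exactly.
assert all(f(x + 1) - f(x) == g(x) for x in range(20))
```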

1.3. Poisson Summation

The Poisson summation formula connects discrete sums to continuous Fourier transforms. For a sufficiently nice function $h(x)$,
$$\sum_{n=-\infty}^{\infty} h(n) = \sum_{k=-\infty}^{\infty}\hat h(k),$$
where $\hat h(\omega) = \int_{-\infty}^{\infty} h(x)\, e^{-2\pi i\omega x}\, dx$. This arises immediately through a simple derivation:
$$\sum_{k=-\infty}^{\infty}\hat h(k) = \sum_{k=-\infty}^{\infty}\int_{-\infty}^{\infty} h(x)\, e^{-2\pi i k x}\, dx = \int_{-\infty}^{\infty} h(x)\sum_{k=-\infty}^{\infty} e^{-2\pi i k x}\, dx = \int_{-\infty}^{\infty} h(x)\sum_{n=-\infty}^{\infty}\delta(x-n)\, dx = \sum_{n=-\infty}^{\infty} h(n).$$
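The formula is easy to confirm numerically; a quick sketch (our choice of test function) with the Gaussian $h(x) = e^{-\pi a x^2}$, whose transform is $\hat h(\omega) = a^{-1/2} e^{-\pi\omega^2/a}$:

```python
import math

# Poisson summation for h(x) = exp(-pi*a*x^2):  both sides converge so
# quickly that truncating at |n| <= 20 is far below machine precision.
a = 0.5
lhs = sum(math.exp(-math.pi * a * n * n) for n in range(-20, 21))
rhs = sum(math.exp(-math.pi * k * k / a) / math.sqrt(a) for k in range(-20, 21))
assert abs(lhs - rhs) < 1e-12
```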
The Poisson summation formula's use of the Fourier transform provides sufficient intuition. We will develop and generalize the use of integral transforms in the indefinite summation setting, yielding new transform-based antidifference formulas.

2. The Method of Integral Transforms

We once again consider the functional equation:
$$\Delta^{-1} g(x) = \sum_x g(x) = f(x).$$
If we assume that $g(x)$ is (nicely) the integral transform of some function $G(t)$, we must have that
$$g(x) = \int_a^{\infty} K(x,t)\, G(t)\, dt.$$
This immediately leads to the formal result:
$$\sum_x g(x) = \sum_x\int_a^{\infty} K(x,t)\, G(t)\, dt = \int_a^{\infty}\Big(\sum_x K(x,t)\Big) G(t)\, dt.$$
This generalization readily solves for the antidifference of $g(x)$ and can be used easily in the evaluation of concrete summations. That said, we can also approach this problem in the immediately definite sense:
$$\sum_{a\le x<b} g(x) = \int_a^{\infty}\Big(\sum_{a\le x<b} K(x,t)\Big) G(t)\, dt.$$
Here, the interchange of operators is justified rather simply through the Fubini–Tonelli theorem: since the sum is finite, the iterated integral/sum is absolutely convergent for at least one ordering.
We may also take the bounds to the infinite sense:
$$\sum_{a\le x<\infty} g(x) = \int_a^{\infty}\Big(\sum_{a\le x<\infty} K(x,t)\Big) G(t)\, dt.$$
However, in this case, the interchange is only justified when
$$\sum_{x=a}^{\infty}\int_a^{\infty}\big|K(x,t)\,G(t)\big|\, dt < \infty \qquad\text{or}\qquad \int_a^{\infty}\sum_{x=a}^{\infty}\big|K(x,t)\,G(t)\big|\, dt < \infty.$$
We will primarily work with the solution provided in Equation (10). While the indefinite sum directly yields an antidifference that solves the definite sum in Equation (11), taking its limit does not always produce a valid solution to the infinite sum in Equation (12), due to the necessary convergence conditions.

2.1. Using the Laplace Transform

Arguably the most natural choice of kernel $K(x,t)$ is the exponential function $e^{-xt}$. This leads us to evaluate the indefinite sum:
$$\sum_x e^{-xt}.$$
This can be computed directly using the forward difference:
$$\Delta e^{-xt} = e^{-(x+1)t} - e^{-xt} = e^{-xt}\big(e^{-t}-1\big).$$
Therefore,
$$\Delta^{-1}\Delta\,\frac{e^{-xt}}{e^{-t}-1} = \Delta^{-1} e^{-xt} \implies \sum_x e^{-xt} = \frac{e^{-xt}}{e^{-t}-1} + C.$$
Now, assume that $g(x)$ is the Laplace transform of some function $G(t)$, i.e.,
$$g(x) = \int_0^{\infty} e^{-xt}\, G(t)\, dt.$$
Then,
$$\sum_x g(x) = \sum_x\int_0^{\infty} e^{-xt} G(t)\, dt = \int_0^{\infty}\Big(\sum_x e^{-xt}\Big) G(t)\, dt = \int_0^{\infty}\Big(\frac{e^{-xt}}{e^{-t}-1} + C^*\Big) G(t)\, dt.$$
Dropping the constant term (or absorbing it into a final constant), we obtain:
$$\sum_x g(x) = \int_0^{\infty}\frac{e^{-xt}}{e^{-t}-1}\, G(t)\, dt + C,$$
where $G(t)$ is the inverse Laplace transform of $g(x)$. The identities (17) and (19) are already sufficient to derive closed-form expressions for various nontrivial summations.
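A numerical sketch (our own example, assuming the $e^{-xt}$ kernel convention above): take $g(x) = 1/(x+1)$, whose inverse Laplace transform is $G(t) = e^{-t}$. The antidifference integral diverges by a constant, but the definite-sum difference $f(b) - f(a)$ converges and should reproduce a harmonic sum:

```python
import math

# Definite sum via the Laplace-kernel antidifference:
#   sum_{x=a}^{b-1} g(x) = f(b) - f(a)
#                        = ∫_0^∞ (e^{-bt} - e^{-at})/(e^{-t} - 1) · e^{-t} dt
# for g(x) = 1/(x+1), G(t) = e^{-t}.
def integrand(t, a, b):
    if t == 0.0:
        return float(b - a)        # limit of the integrand as t -> 0
    return (math.exp(-b * t) - math.exp(-a * t)) / math.expm1(-t) * math.exp(-t)

def definite_sum(a, b, tmax=40.0, n=200_000):
    h = tmax / n
    ys = [integrand(i * h, a, b) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))   # trapezoid rule

harmonic = sum(1.0 / (x + 1) for x in range(10))    # H_10 = sum_{x=0}^{9} 1/(x+1)
assert abs(definite_sum(0, 10) - harmonic) < 1e-6
```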

2.2. Using the Fourier Transform

In a similar fashion to the Laplace transform, we can use the Fourier transform to evaluate indefinite sums. The Fourier transform of a function $f(t)$ is given by:
$$\hat f(x) = \int_{-\infty}^{\infty} f(t)\, e^{-ixt}\, dt.$$
This arrives at a very similar result to (19):
$$\sum_x g(x) = \sum_x\int_{-\infty}^{\infty} e^{-ixt} G(t)\, dt = \int_{-\infty}^{\infty}\Big(\sum_x e^{-ixt}\Big) G(t)\, dt = \int_{-\infty}^{\infty}\frac{e^{-ixt}}{e^{-it}-1}\, G(t)\, dt,$$
where $G(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} g(x)\, e^{ixt}\, dx$.

2.3. Using the Mellin Transform

The Mellin transform is another useful tool for evaluating sums of the form $\sum_x f(x)$. The Mellin transform is defined as:
$$\mathcal{M}\{f(t)\}(x) = \int_0^{\infty} f(t)\, t^{x-1}\, dt.$$
By applying the same theory as in (10), we obtain:
$$\sum_x g(x) = \sum_x\int_0^{\infty} t^{x-1} G(t)\, dt = \int_0^{\infty}\frac{t^{x-1}}{t-1}\, G(t)\, dt.$$

3. Working with Arbitrary Transforms

While the Laplace, Fourier, and Mellin transforms lend themselves nicely to the analytic computation of antidifferences, or indefinite summations, there are ultimately two key considerations when working with (10):
$$\sum_x g(x) = \sum_x\int_a^{\infty} K(x,t)\, G(t)\, dt = \int_a^{\infty}\Big(\sum_x K(x,t)\Big) G(t)\, dt.$$
  • Simplicity of Kernel Summation: We would like the indefinite summation over the kernel, $\sum_x K(x,t)$, to be sufficiently simple.
  • Existence and Behavior of Inverse Transforms: We would like the inverse transform $G(t)$ of $g(x)$ to exist in a tractable form and to be well-behaved enough to allow analytic integration in the final expression.
This immediately suggests a slightly unnatural choice of kernel:
$$K(x,t) = \binom{x}{t}, \qquad \sum_x\binom{x}{t} = \binom{x}{t+1}.$$
Taking $a = -\infty$ immediately provides a remarkable identity. Let
$$I_x = \int_{-\infty}^{\infty}\binom{x}{t}\, dt = \int_{-\infty}^{\infty}\binom{x-1}{t-1} + \binom{x-1}{t}\, dt = 2\int_{-\infty}^{\infty}\binom{x-1}{t}\, dt.$$
We have
$$I_0 = 1, \qquad I_x = 2I_{x-1} \implies I_x = 2^x.$$
Thus, as $2^x = \int_{-\infty}^{\infty}\binom{x}{t}\, dt$, we must have that
$$\sum_x 2^x = \int_{-\infty}^{\infty}\binom{x}{t+1}\, dt = 2^x.$$
Looking at another remarkable example, utilizing the absorption identity $\binom{x}{t} = \frac{x}{t}\binom{x-1}{t-1}$:
$$I_x = \int_{-\infty}^{\infty} t\binom{x}{t}\, dt = \int_{-\infty}^{\infty} x\binom{x-1}{t-1}\, dt = x\int_{-\infty}^{\infty}\binom{x-1}{t}\, dt = x\,2^{x-1}.$$
Thus:
$$\sum_x x\,2^{x-1} = \int_{-\infty}^{\infty}\binom{x}{t+1}\, t\, dt = \int_{-\infty}^{\infty}\binom{x}{t}\,(t-1)\, dt = x\,2^{x-1} - 2^x.$$
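These continuous binomial identities can be probed numerically. A sketch (ours; the window and step size are ad hoc) of $\int_{-\infty}^{\infty}\binom{x}{t}\,dt = 2^x$ for $x = 6$, using the Gamma-function form of the coefficient:

```python
import math

# ∫_{-∞}^{∞} C(x,t) dt = 2^x with C(x,t) = Γ(x+1)/(Γ(t+1)Γ(x-t+1)).
# Outside [0, x] the integrand oscillates and decays like |t|^{-(x+1)},
# so a finite window suffices.  The midpoint grid avoids the integer
# points, where Γ has poles (the coefficient itself vanishes there).
def cbinom(x, t):
    try:
        return math.gamma(x + 1) / (math.gamma(t + 1) * math.gamma(x - t + 1))
    except ValueError:             # Γ pole: the coefficient is 0 there
        return 0.0

x = 6
lo, hi, h = -60.0, 66.0, 0.005
n = int((hi - lo) / h)
integral = h * sum(cbinom(x, lo + (k + 0.5) * h) for k in range(n))  # midpoint rule
assert abs(integral - 2 ** x) < 1e-3
```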
As a result, we will now additionally develop a brief theory of this continuous binomial transform.

3.1. The Continuous Binomial Transform

Consider the following integral transform:
$$\mathcal{B}\{f\}(x) = F(x) = \int_{-\infty}^{\infty}\binom{x}{t} f(t)\, dt, \qquad f(t) = \int_{-\infty}^{\infty} K(x,t)\,\mathcal{B}\{f\}(x)\, dx, \qquad \text{where } \binom{x}{t} = \frac{\Gamma(x+1)}{\Gamma(t+1)\Gamma(x-t+1)}.$$
We proceed in the usual manner:
$$f(t) = \int_{-\infty}^{\infty} K(x,t)\int_{-\infty}^{\infty}\binom{x}{\tau} f(\tau)\, d\tau\, dx \implies f(t) = \int_{-\infty}^{\infty} f(\tau)\int_{-\infty}^{\infty} K(x,t)\binom{x}{\tau}\, dx\, d\tau,$$
and we would like the inner integral to satisfy $\int_{-\infty}^{\infty} K(x,t)\binom{x}{\tau}\, dx = \delta(t-\tau)$. Motivated by the discrete binomial inversion formula given by
$$F(n) = \sum_m\binom{n}{m} f(m), \qquad f(n) = \sum_m(-1)^{n-m}\binom{n}{m} F(m)$$
(see Equation (5.48) in [3], p. 192), we proceed with our derivation.
Proof. We will show that the desired kernel $K(x,t)$ is given by
$$K(x,t) = (-1)^{t-x}\binom{t}{x}.$$
We have
$$f(t) = \int\!\!\int f(\tau)\, K(x,t)\binom{x}{\tau}\, dx\, d\tau = \int f(\tau)\int(-1)^{t-x}\binom{t}{x}\binom{x}{\tau}\, dx\, d\tau = \int f(\tau)\binom{t}{\tau}\int(-1)^{t-x}\binom{t-\tau}{x-\tau}\, dx\, d\tau,$$
where the last step uses the trinomial revision $\binom{t}{x}\binom{x}{\tau} = \binom{t}{\tau}\binom{t-\tau}{x-\tau}$. Examining the inner integral:
$$\int_{-\infty}^{\infty}(-1)^{t-x}\binom{t-\tau}{x-\tau}\, dx.$$
Setting $j = x - \tau$, we have $dj = dx$:
$$\int_{-\infty}^{\infty}(-1)^{t-(j+\tau)}\binom{t-\tau}{j}\, dj.$$
Rewriting the exponent:
$$(-1)^{t-\tau}\int_{-\infty}^{\infty}(-1)^{-j}\binom{t-\tau}{j}\, dj.$$
It suffices to show that
$$I = \int_{-\infty}^{\infty}(-1)^{-j}\binom{t-\tau}{j}\, dj = \delta(t-\tau).$$
This is relatively simple to show. Let $m = t - \tau$. Consider:
$$(1+x)^m = \sum_{k\ge 0}\binom{m}{k} x^k.$$
Formally, we utilize the Residue Theorem to obtain:
$$\binom{m}{j} = \frac{1}{2\pi i}\oint_C\frac{(1+z)^m}{z^{j+1}}\, dz = \frac{1}{2\pi i}\int_{-\pi}^{\pi}\frac{(1+e^{i\theta})^m}{e^{i\theta(j+1)}}\, i e^{i\theta}\, d\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^m e^{-ij\theta}\, d\theta.$$
We also note that this extends to all real values of $m$ and $j$:
$$(1+e^{i\theta})^m = \big(2\cos(\theta/2)\big)^m e^{im\theta/2}, \qquad \binom{m}{j} = \frac{2^m}{2\pi}\int_{-\pi}^{\pi}\cos^m(\theta/2)\, e^{i(m/2-j)\theta}\, d\theta = \frac{2^{m-1}}{\pi}\int_{-\pi}^{\pi}\cos^m(\theta/2)\cos\big((m/2-j)\theta\big)\, d\theta,$$
which can then be evaluated numerically, since the imaginary part is zero by symmetry. This is well-defined.
Then
$$I = \int(-1)^{-j}\binom{m}{j}\, dj = \int e^{i\pi j}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^m e^{-ij\theta}\, d\theta\, dj = \frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^m\int e^{i(\pi-\theta)j}\, dj\, d\theta.$$
We recognize the inner integral as a basic Fourier transform giving $2\pi\,\delta(\pi-\theta)$:
$$2\pi\cdot\frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^m\,\delta(\pi-\theta)\, d\theta = (1+e^{i\pi})^m = \delta_m = \delta(t-\tau).$$
Therefore,
$$f(t) = \int f(\tau)\binom{t}{\tau}\int(-1)^{t-x}\binom{t-\tau}{x-\tau}\, dx\, d\tau = \int f(\tau)\binom{t}{\tau}(-1)^{t-\tau}\int(-1)^{-j}\binom{t-\tau}{j}\, dj\, d\tau = \int f(\tau)\binom{t}{\tau}(-1)^{t-\tau}\,\delta(t-\tau)\, d\tau = f(t).$$
This completes the proof with $K(x,t) = (-1)^{t-x}\binom{t}{x}$. Thus:
$$\mathcal{B}\{f\}(x) = F(x) = \int_{-\infty}^{\infty}\binom{x}{t} f(t)\, dt,$$
$$f(t) = \int_{-\infty}^{\infty} e^{i\pi(t-x)}\binom{t}{x}\,\mathcal{B}\{f\}(x)\, dx.$$
To demonstrate the method's consistency, we will fully work out the case where $\mathcal{B}\{f\}(x) = 2^x$:
$$\begin{aligned}
f(t) &= \int e^{i\pi(t-x)}\binom{t}{x}\, 2^x\, dx = \int e^{i\pi(t-x)}\,\frac{1}{2\pi i}\oint_C\frac{(1+z)^t}{z^{x+1}}\, dz\; 2^x\, dx \\
&= \frac{1}{2\pi}\int e^{i\pi(t-x)}\int_{-\pi}^{\pi}(1+e^{i\theta})^t e^{-i\theta x}\, d\theta\; 2^x\, dx \\
&= e^{i\pi t}\int_{-\pi}^{\pi}(1+e^{i\theta})^t\,\frac{1}{2\pi}\int e^{-i\pi x} e^{-i\theta x}\, 2^x\, dx\, d\theta \\
&= e^{i\pi t}\int_{-\pi}^{\pi}(1+e^{i\theta})^t\,\frac{1}{2\pi}\int e^{\,x(\log(2) - i(\theta+\pi))}\, dx\, d\theta \\
&= e^{i\pi t}\int_{-\pi}^{\pi}(1+e^{i\theta})^t\,\delta\big(\log(2) - i\theta - i\pi\big)\, d\theta = e^{i\pi t}\big(1+e^{i(-i\log(2)-\pi)}\big)^t \\
&= e^{i\pi t}\big(1 + 2(-1)\big)^t = e^{2\pi i t} = 1.
\end{aligned}$$
Thus
$$\sum_x\mathcal{B}\{f\}(x) = \sum_x 2^x = \int_{-\infty}^{\infty}\binom{x}{t+1}\cdot(1)\, dt = 2^x.$$
We will also illustrate the case where $\mathcal{B}\{f\}(x) = \sin(x)$:
$$\begin{aligned}
f(t) &= \int e^{i\pi(t-x)}\binom{t}{x}\sin(x)\, dx = e^{i\pi t}\int_C(1+e^{i\theta})^t\,\frac{1}{2\pi}\int e^{-i\pi x} e^{-i\theta x}\sin(x)\, dx\, d\theta \\
&= e^{i\pi t}\int_C(1+e^{i\theta})^t\,\frac{1}{2\pi}\int e^{-i(\pi+\theta)x}\,\frac{e^{ix}-e^{-ix}}{2i}\, dx\, d\theta \\
&= e^{i\pi t}\int_C(1+e^{i\theta})^t\,\frac{1}{4\pi i}\Big(2\pi\,\delta(\pi+\theta-1) - 2\pi\,\delta(\pi+\theta+1)\Big)\, d\theta \qquad \text{(choosing $C$ as the unit circle with $\theta\in[-\pi-1,\,\pi-1]$)} \\
&= \frac{e^{i\pi t}}{2i}\Big[\big(1-e^{i}\big)^t - \big(1-e^{-i}\big)^t\Big] = \frac{1}{2i}\Big[(e^{i}-1)^t - (e^{-i}-1)^t\Big] = \frac{1}{2i}\big[z-\bar z\big] = \Im(z) = \Im\big((e^{i}-1)^t\big) \\
&= \Im\Big(\big(e^{i/2}(e^{i/2}-e^{-i/2})\big)^t\Big) = \Im\Big(\big(e^{i/2}(2i)\sin(1/2)\big)^t\Big) = \big(2\sin(1/2)\big)^t\sin\!\Big(\frac{\pi+1}{2}\,t\Big).
\end{aligned}$$
Now, evaluating the sum is relatively tricky. We will develop one more identity to help us, utilizing the same methods as before:
$$\int_{-\infty}^{\infty}\binom{x}{t}\, a^t\, dt = \int\frac{1}{2\pi i}\oint_C\frac{(1+z)^x}{z^{t+1}}\, dz\; a^t\, dt = \int_C(1+e^{i\theta})^x\,\frac{1}{2\pi}\int e^{-i\theta t + t\log(a)}\, dt\, d\theta = \int_{-\pi}^{\pi}(1+e^{i\theta})^x\,\delta\big(\theta + \log(a)\, i\big)\, d\theta = (1+a)^x.$$
And we have, relatively simply:
$$\begin{aligned}
\sum_x\sin(x) &= \sum_x\int\binom{x}{t} f(t)\, dt = \int\binom{x}{t+1}\,\big(2\sin(1/2)\big)^t\sin\!\Big(\frac{\pi+1}{2}t\Big)\, dt = \int\binom{x}{t+1}\,\big(2\sin(1/2)\big)^t\,\Im\big(e^{\frac{\pi+1}{2}ti}\big)\, dt \\
&= \Im\Big(\int\binom{x}{t+1}\,\big(A e^{Bi}\big)^t\, dt\Big), \qquad \text{where } A = 2\sin(1/2) \text{ and } B = \frac{\pi+1}{2} \\
&= \Im\Big(\big(A e^{Bi}\big)^{-1}\int\binom{x}{t}\,\big(A e^{Bi}\big)^t\, dt\Big) = \frac{1}{A}\,\Im\Big(\big(1 + A e^{Bi}\big)^x e^{-Bi}\Big) = \frac{1}{2\sin(1/2)}\,\Im\Big(\big(1 + 2\sin(1/2)\, e^{\frac{\pi+1}{2}i}\big)^x e^{-\frac{\pi+1}{2}i}\Big).
\end{aligned}$$
Notice that $1 + 2\sin(1/2)\, e^{\frac{\pi+1}{2}i} = 1 + 2i\sin(1/2)\, e^{i/2} = 1 - 2\sin^2(1/2) + 2i\sin(1/2)\cos(1/2) = \cos(1) + i\sin(1) = e^{i}$. Thus:
$$\sum_x\sin(x) = \frac{1}{2\sin(1/2)}\,\Im\big(e^{ix} e^{-\frac{\pi+1}{2}i}\big) = \frac{1}{2\sin(1/2)}\,\Im\Big(e^{i\left(x-\frac{\pi+1}{2}\right)}\Big) = \frac{\sin\!\big(x - \frac{\pi+1}{2}\big)}{2\sin(1/2)}.$$
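The resulting antidifference can be verified directly against the defining functional equation; a quick numerical sketch (ours):

```python
import math

# Check that f(x) = sin(x - (π+1)/2) / (2 sin(1/2)) satisfies
# Δf(x) = f(x+1) - f(x) = sin(x).
def f(x):
    return math.sin(x - (math.pi + 1) / 2) / (2 * math.sin(0.5))

for x in [0.0, 0.7, 1.5, 3.9, 10.2]:
    assert abs((f(x + 1) - f(x)) - math.sin(x)) < 1e-12
```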
This illustrates the consistency of the method. The usage of integral transforms provides a purely systematic method of analyzing the indefinite summations of various functions.

4. Change of Variables

In indefinite summation, it is often useful to focus solely on either the even or odd terms. However, adjusting the step size in the discrete setting can be challenging. To make this idea more concrete, consider the classical sum:
$$S = \sum_{a\le x<b} g(x).$$
Substitutions of the form $x = u + n$ are valid in the sense that the step size of the sum does not change:
$$S = \sum_{a-n\le u<b-n} g(u+n).$$
On the other hand, substitutions of the form $x = ku + n$ for $k \neq 1$ are not so simple, as terms must be sifted. For instance, consider the substitution $x = 2u$. Then:
$$S = \sum_{a\le x<b} g(x)\ \ [\delta x = 1] \;\neq\; \sum_{a/2\le u<b/2} g(2u)\ \ [\delta u = 1].$$
In order for this substitution to be valid, the step size must be halved, as $\delta x = 2\,\delta u$. That is:
$$S = \sum_{a/2\le u<b/2} g(2u)\ \ \Big[\delta u = \tfrac{1}{2}\Big].$$

4.1. The Problem with Scaling

To address this issue, we return to our functional definitions. Consider:
$$\Delta^{-1}\big(g(ax)\big) = f(x) \implies g(ax) = f(x+1) - f(x)$$
$$\implies g(x) = f\Big(\tfrac{1}{a}x + 1\Big) - f\Big(\tfrac{1}{a}x\Big).$$
We may define:
$$\Delta_a^{-1}\big(g(x)\big) = \sum_{ax} g(x) = f(x).$$
If we analyze this functional equation formally, we obtain a result akin to the Euler–Maclaurin formula:
$$f(ax) = f\big(e^{\log x + \log a}\big) = e^{\log(a)\,\partial_{\log x}} f\big(e^{\log x}\big) = e^{\log(a)\,x\partial_x} f(x),$$
so that
$$g(x) = f\Big(\tfrac{1}{a}x+1\Big) - f\Big(\tfrac{1}{a}x\Big) = \big(e^{\partial_x}-1\big)\, e^{\log(1/a)\,x\partial_x} f(x).$$
Thus:
$$f(x) = e^{\log(a)\,x\partial_x}\,\frac{1}{e^{\partial_x}-1}\, g(x).$$
The Laurent series expansion can now easily be obtained for a few terms, giving an analogous Euler–Maclaurin formula. For $a = 2$, we have:
$$\bigg(\sum_{n=0}^{\infty}\frac{(\log(2)\,x\partial_x)^n}{n!}\bigg)\bigg(\sum_{n=0}^{\infty}\frac{B_n}{n!}\,\partial_x^{\,n-1}\bigg) g(x) = \bigg(\partial_x^{-1} + \Big(\log(2)x - \frac{1}{2}\Big) + \frac{(\log(2)x)^2 - \log(2)x + \frac{1}{6}}{2}\,\partial_x + O(\partial_x^2)\bigg) g(x) + C.$$
That said, we wish to extend the methods in (10) by instead using the operators in (34). If we can find a specific function whose indefinite sum is known for a generalized step size, we will be able to apply it to larger classes of functions via our integral transforms.
The immediate choice becomes the standard geometric series (which the kernels in the Laplace, Fourier, and Mellin transforms all satisfy), where generalizing by step size is trivial:
$$\sum_{nx} e^{-xt} = \sum_x e^{-nxt} = \frac{e^{-nxt}}{e^{-nt}-1},$$
$$\sum_{nx} e^{-ixt} = \sum_x e^{-inxt} = \frac{e^{-inxt}}{e^{-int}-1},$$
$$\sum_{nx} t^{x-1} = \sum_x t^{nx-1} = \frac{t^{nx-1}}{t^n-1}.$$
These can all be verified using the functional definitions in (34). Thus, given the choice of the specific transform kernel $K(x,t)$, we have
$$\sum_x g(nx) = \sum_{nx} g(x) = \int_0^{\infty}\Big(\sum_{nx} K(x,t)\Big) G(t)\, dt.$$
A Concrete Example: Suppose we are looking to evaluate
$$\sum_{x\ \text{even},\; 0\le x\le N}\sin(x).$$
We utilize the inverse Laplace transform: since $\sin(x) = \frac{e^{ix}-e^{-ix}}{2i}$, we have
$$\mathcal{L}^{-1}\{\sin(x)\}(t) = \frac{\delta(t+i) - \delta(t-i)}{2i}.$$
Then
$$\int_0^{\infty}\frac{e^{-2xt}}{e^{-2t}-1}\cdot\frac{\delta(t+i)-\delta(t-i)}{2i}\, dt = \frac{1}{2i}\left[\frac{e^{2xi}}{e^{2i}-1} - \frac{e^{-2xi}}{e^{-2i}-1}\right] = -\frac{\cos(2x-1)}{2\sin(1)} = f(x),$$
so that
$$\sum_{x\ \text{even},\; 0\le x\le N}\sin(x) = \sum_{0\le x\le N/2}\sin(2x) = \frac{\cos(1) - \cos\!\big(2\big(\tfrac{N}{2}+1\big)-1\big)}{2\sin(1)},$$
which can be verified immediately through computational means.
Summing odd terms becomes very easy as well. Indeed, given any scaling factor $n$ in $f(nx)$, we can compute sums for all shifts $f(nx + m \bmod n)$. Here, for instance:
$$\sum_{x\ \text{odd},\; 0\le x\le N}\sin(x) = \sum_{0\le x\le \frac{N}{2}-1}\sin(2x+1).$$
We make the shift substitution $x = u - \frac{1}{2}$ (completely valid, as it does not change the step size):
$$\sum_{\frac{1}{2}\le u\le \frac{N}{2}-\frac{1}{2}}\sin(2u) = f\Big(\frac{N}{2}+\frac{1}{2}\Big) - f\Big(\frac{1}{2}\Big).$$
Thus, we have a general method of dealing with step size: first make the scaling substitution and apply (19); then shift the bounds based on the application.
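The even- and odd-term formulas above can be confirmed against direct summation; a short numerical sketch (ours), using the antidifference $f(x) = -\cos(2x-1)/(2\sin 1)$ of $\sin(2x)$ from the concrete example:

```python
import math

# Even terms:  sum_{x even, 0<=x<=N} sin(x) = f(N/2 + 1) - f(0)
# Odd terms:   sum_{x odd,  0<=x<=N} sin(x) = f(N/2 + 1/2) - f(1/2)
# for even N, where f(x) = -cos(2x - 1) / (2 sin 1).
def f(x):
    return -math.cos(2 * x - 1) / (2 * math.sin(1))

N = 20
even_direct = sum(math.sin(x) for x in range(0, N + 1, 2))
assert abs(even_direct - (f(N / 2 + 1) - f(0))) < 1e-12

odd_direct = sum(math.sin(x) for x in range(1, N + 1, 2))
assert abs(odd_direct - (f(N / 2 + 1 / 2) - f(1 / 2))) < 1e-12
```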

4.2. A General Change of Variables

We now consider a generalized substitution. Indeed, let
$$\Delta^{-1}\big(g(h(x))\big) = f(x),$$
$$g(h(x)) = f(x+1) - f(x),$$
$$g(x) = f\big(h^{-1}(x) + 1\big) - f\big(h^{-1}(x)\big),$$
and let $\Delta_{h(x)}^{-1} g(x) = \sum_{h(x)} g(x) = f(x)$.
Then we may say that
$$\sum_x g(h(x)) = \sum_{h(x)} g(x) = \int_0^{\infty}\Big(\sum_{h(x)} K(x,t)\Big) G(t)\, dt.$$
We see that this now becomes a matter of choosing an appropriate $K(x,t)$ such that $K(h(x),t)$ is easy to sum. Unfortunately, this is extremely limited for nonlinear $h(x)$.

5. Method Examples

1. An Arbitrary Example
Suppose we have
$$\sum_x\frac{\omega}{x^2+\omega^2}.$$
The inverse Laplace transform gives $\mathcal{L}^{-1}\Big\{\frac{\omega}{x^2+\omega^2}\Big\}(t) = H(t)\sin(\omega t)$, where $H(t) = 0$ for $t<0$ and $1$ for $t\ge 0$. Then
$$I = \sum_x\frac{\omega}{x^2+\omega^2} = \sum_x\int_0^{\infty} e^{-xt}\, H(t)\sin(\omega t)\, dt = \int_0^{\infty}\frac{e^{-xt}}{e^{-t}-1}\,\sin(\omega t)\, dt.$$
Using the series expansion $\frac{1}{1-e^{-t}} = \sum_{n=0}^{\infty} e^{-nt}$, valid since $|e^{-t}|\le 1$ for $t\ge 0$:
$$I = -\sum_{n=0}^{\infty}\int_0^{\infty} e^{-(x+n)t}\sin(\omega t)\, dt.$$
We reuse the Laplace transform of $\sin(\omega t)$:
$$\int_0^{\infty} e^{-kt}\sin(\omega t)\, dt = \frac{\omega}{k^2+\omega^2}.$$
Applying this with $k = x+n$:
$$I = -\omega\sum_{n=0}^{\infty}\frac{1}{(x+n)^2+\omega^2} = -\frac{1}{2i}\sum_{n=0}^{\infty}\bigg[\frac{1}{x+n-i\omega} - \frac{1}{x+n+i\omega}\bigg].$$
Here, we can work backwards by computing each
$$\sum_{n=0}^{\infty}\frac{1}{n+z} = \sum_{n\ge 0}\int_0^{\infty} e^{-nt} e^{-zt}\, dt = \int_0^{\infty}\frac{e^{-zt}}{1-e^{-t}}\, dt.$$
Using the polygamma identity
$$\psi^{(m)}(z) = (-1)^{m+1}\int_0^{\infty}\frac{t^m e^{-zt}}{1-e^{-t}}\, dt,$$
we identify, formally (up to the constant absorbed into the antidifference),
$$\sum_{n=0}^{\infty}\frac{1}{n+z} = -\psi(z).$$
Therefore,
$$I = \frac{1}{2i}\Big[\psi(x-i\omega) - \psi(x+i\omega)\Big] = -\Im\,\psi(x+i\omega).$$
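This closed form can be tested numerically. The sketch below (ours) uses a minimal hand-rolled complex digamma, built from the recurrence $\psi(z) = \psi(z+1) - 1/z$ and the asymptotic series, rather than any particular library routine:

```python
import cmath

# Σ_x ω/(x²+ω²) = -Im ψ(x + iω):  check Δf(x) = ω/(x²+ω²) numerically.
def digamma(z):
    """Complex digamma via recurrence + asymptotic series (error ~ 1e-12)."""
    s = 0j
    while abs(z) < 15:             # shift z until the asymptotics apply
        s -= 1 / z
        z += 1
    w = 1 / (z * z)
    return s + cmath.log(z) - 1 / (2 * z) - w * (1 / 12 - w * (1 / 120 - w / 252))

def f(x, w):
    return -digamma(complex(x, w)).imag

x, w = 2.5, 1.3
assert abs((f(x + 1, w) - f(x, w)) - w / (x * x + w * w)) < 1e-10
```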
2. The Riemann Zeta Function
Consider:
$$\sum_x\frac{1}{x^k}, \qquad \frac{1}{x^k} = \int_0^{\infty} G(t)\, t^{x-1}\, dt.$$
Computing the inverse Mellin transform gives
$$G(t) = \frac{(-\log(t))^{k-1}}{\Gamma(k)} \quad \text{for } 0 < t \le 1, \text{ and } 0 \text{ elsewhere}.$$
We find that:
$$\sum_x\frac{1}{x^k} = \int_0^1\frac{(-\log(t))^{k-1}}{\Gamma(k)}\cdot\frac{t^{x-1}}{t-1}\, dt.$$
Or similarly:
$$\zeta(k) = \sum_{x=1}^{\infty}\frac{1}{x^k} = \int_0^1\frac{(-\log(t))^{k-1}}{\Gamma(k)}\cdot\frac{1}{1-t}\, dt.$$
Now, using the beautiful identity of the polygamma function
$$\psi^{(n)}(z) = -\int_0^1\frac{t^{z-1}}{1-t}\,(\log(t))^n\, dt,$$
we obtain
$$\sum_x\frac{1}{x^k} = \int_0^1\frac{(-\log(t))^{k-1}}{\Gamma(k)}\cdot\frac{t^{x-1}}{t-1}\, dt = \frac{(-1)^{k-1}}{\Gamma(k)}\,\psi^{(k-1)}(x) + C,$$
where
$$\psi^{(n)}(z) = \frac{d^n}{dz^n}\psi(z), \qquad \psi(z) = \frac{\Gamma'(z)}{\Gamma(z)}.$$
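The zeta integral representation is easy to check numerically; a sketch (ours), after substituting $t = e^{-u}$ to tame the endpoints, for $k = 3$:

```python
import math

# ζ(3) = ∫_0^1 (-log t)^2/Γ(3) · 1/(1-t) dt  becomes, with t = e^{-u},
# ∫_0^∞ (u²/2) e^{-u}/(1 - e^{-u}) du  — smooth and exponentially decaying.
def integrand(u):
    if u == 0.0:
        return 0.0                  # the integrand -> 0 as u -> 0
    return u * u / 2 * math.exp(-u) / (-math.expm1(-u))

h, n = 0.001, 60_000                # trapezoid rule on [0, 60]
ys = [integrand(i * h) for i in range(n + 1)]
approx = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
assert abs(approx - 1.2020569031595943) < 1e-6   # Apéry's constant ζ(3)
```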
3. Polynomials
We will address the problem of polynomials here. Evidently, it is unclear what might happen when we attempt to compute summations of polynomial terms using this method. Consider the classic example:
$$\sum_x x^2 = \int_0^{\infty}\frac{e^{-xt}}{e^{-t}-1}\,\mathcal{L}^{-1}\{x^2\}\, dt.$$
Recall that
$$\mathcal{L}\{\delta^{(k)}(t)\} = x^k.$$
This can be shown as follows:
$$\mathcal{L}\{\delta^{(k)}(t)\}(x) = \int_0^{\infty}\delta^{(k)}(t)\, e^{-xt}\, dt.$$
Integrating by parts once, moving the $t$-derivative onto $e^{-xt}$:
$$= \Big[\delta^{(k-1)}(t)\, e^{-xt}\Big]_0^{\infty} + x\int_0^{\infty}\delta^{(k-1)}(t)\, e^{-xt}\, dt.$$
Since $\delta^{(m)}(t)$ vanishes for $t>0$ and the boundary terms at $t=0$ pair $\delta^{(m)}$ against a smooth function (hence vanish except at the final step), the boundary contributions drop out each time. Therefore, $\mathcal{L}\{\delta^{(k)}(t)\}(x) = x\,\mathcal{L}\{\delta^{(k-1)}(t)\}(x)$. Repeating $k$ times yields
$$\mathcal{L}\{\delta^{(k)}(t)\}(x) = x^k\int_0^{\infty}\delta(t)\, e^{-xt}\, dt = x^k.$$
Therefore,
$$\int_0^{\infty}\frac{e^{-xt}}{e^{-t}-1}\,\mathcal{L}^{-1}\{x^2\}\, dt = \int_0^{\infty}\frac{e^{-xt}}{e^{-t}-1}\,\delta''(t)\, dt = \frac{d^2}{dt^2}\!\left[\frac{e^{-xt}}{e^{-t}-1}\right]_{t=0},$$
where $\frac{f''(0)}{2!}$ is the $t^2$ coefficient in the series expansion. Thus:
$$\frac{e^{-xt}}{e^{-t}-1} = \Big({-\frac{1}{t}} - \frac{1}{2} - \frac{t}{12} + \dots\Big)\Big(1 - xt + \frac{x^2t^2}{2} - \frac{x^3t^3}{6} + \dots\Big) = -\frac{1}{t} + \Big(x - \frac{1}{2}\Big) + t\Big({-\frac{x^2}{2}} + \frac{x}{2} - \frac{1}{12}\Big) + t^2\Big(\frac{x^3}{6} - \frac{x^2}{4} + \frac{x}{12}\Big) + O(t^3),$$
and we see relatively clearly that
$$\sum_x x^2 = 2!\,[t^2]\,\frac{e^{-xt}}{e^{-t}-1} = \frac{x^3}{3} - \frac{x^2}{2} + \frac{x}{6}.$$
More generally:
$$\sum_x x^n = n!\,[t^n]\,\frac{e^{-xt}}{e^{-t}-1}.$$
4. Binomial Coefficients
By the binomial theorem, it is known that
$$\sum_{0\le x\le n}\binom{n}{x}\, 1^x\, 1^{n-x} = 2^n.$$
However, a surprisingly much more difficult problem is evaluating $\sum_{x\le a}\binom{n}{x}$ in general, or, more generally, what the antidifference $\sum_x\binom{n}{x}$ evaluates to.
While this has no known closed-form solution in elementary functions, we can use the methods of (31). Let
$$\mathcal{B}\{f\}(x) = \binom{n}{x} \implies f(t) = \int_{-\infty}^{\infty} e^{i\pi(t-x)}\binom{t}{x}\binom{n}{x}\, dx.$$
Using the contour integral representation
$$\binom{t}{x} = \frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^t\, e^{-i\theta x}\, d\theta,$$
$$f(t) = \frac{1}{4\pi^2}\int_{-\infty}^{\infty} e^{i\pi(t-x)}\int_{-\pi}^{\pi}(1+e^{i\theta})^t e^{-i\theta x}\, d\theta\int_{-\pi}^{\pi}(1+e^{i\phi})^n e^{-i\phi x}\, d\phi\, dx.$$
Interchanging the integrals under Fubini's theorem, we have:
$$f(t) = \frac{e^{i\pi t}}{(2\pi)^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^t(1+e^{i\phi})^n\int_{-\infty}^{\infty} e^{-i(\pi+\theta+\phi)x}\, dx\, d\theta\, d\phi.$$
The integral over $x$ yields a Dirac delta function:
$$\int_{-\infty}^{\infty} e^{-i(\pi+\theta+\phi)x}\, dx = 2\pi\,\delta(\pi+\theta+\phi).$$
Thus:
$$f(t) = \frac{e^{i\pi t}}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^t\big(1+e^{-i(\pi+\theta)}\big)^n\, d\theta, \qquad 1+e^{-i(\pi+\theta)} = 1-e^{-i\theta}.$$
Using the trigonometric identity
$$(1+e^{i\theta})^t\big(1-e^{-i\theta}\big)^n = \Big(e^{i\theta/2}\,2\cos\big(\tfrac{\theta}{2}\big)\Big)^t\Big(e^{-i\theta/2}\,2i\sin\big(\tfrac{\theta}{2}\big)\Big)^n = 2^{t+n}\, i^n\, e^{i\theta\frac{t-n}{2}}\cos^t\big(\tfrac{\theta}{2}\big)\sin^n\big(\tfrac{\theta}{2}\big),$$
we have:
$$f(t) = \frac{e^{i\pi t}\,2^{n+t}\,i^n}{2\pi}\int_{-\pi}^{\pi} e^{i\theta\frac{t-n}{2}}\cos^t(\theta/2)\sin^n(\theta/2)\, d\theta.$$
This is related to the Beta integral, which is defined as:
$$B(x,y) = \int_0^1 s^{x-1}(1-s)^{y-1}\, ds = 2\int_0^{\pi/2}\sin^{2x-1}(\theta)\cos^{2y-1}(\theta)\, d\theta = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.$$
It is not known whether $f(t)$ can be expressed in terms of the Beta function. However, it is true that, by formulas 3.634.1 and 3.634.2 in [8],
$$\int_0^{\pi/2}\sin^{u-1}(x)\cos^{v-1}(x)\, e^{i(u+v)x}\, dx = e^{iu\pi/2}\, B(u,v).$$
However, it is considerably easier to instead evaluate $f(t)$ by taking:
$$(1+e^{i\theta})^t = \sum_{j=0}^{\infty}\binom{t}{j} e^{ij\theta}, \qquad (1-e^{-i\theta})^n = \sum_{k=0}^{n}(-1)^k\binom{n}{k} e^{-ik\theta},$$
$$(1+e^{i\theta})^t(1-e^{-i\theta})^n = \sum_{j=0}^{\infty}\sum_{k=0}^{n}\binom{t}{j}(-1)^k\binom{n}{k}\, e^{i(j-k)\theta}.$$
Then:
$$f(t) = \frac{e^{i\pi t}}{2\pi}\int_{-\pi}^{\pi}\sum_{j,k}\binom{t}{j}(-1)^k\binom{n}{k} e^{i(j-k)\theta}\, d\theta = e^{i\pi t}\sum_{j,k}\binom{t}{j}(-1)^k\binom{n}{k}\,\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(j-k)\theta}\, d\theta.$$
By orthogonality (or by the Dirac delta):
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(j-k)\theta}\, d\theta = \begin{cases} 1, & j = k, \\ 0, & j \neq k, \end{cases}$$
so only the terms with $j = k$ survive:
$$f(t) = e^{i\pi t}\sum_{k=0}^{n}(-1)^k\binom{n}{k}\binom{t}{k}.$$
Using
$$\sum_{k=0}^{n}(-1)^k\binom{n}{k}\binom{t}{k} = \sum_{k=0}^{n}(-1)^k\,\frac{(-n)_k(-t)_k}{(1)_k\, k!} = {}_2F_1(-n,-t;\,1;\,-1),$$
where the generalized hypergeometric function ${}_pF_q(a_1,\dots,a_p;\, b_1,\dots,b_q;\, z)$ is defined as
$${}_pF_q(a_1,\dots,a_p;\, b_1,\dots,b_q;\, z) = \sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\cdot\frac{z^k}{k!},$$
with $(a)_k = a(a+1)\cdots(a+k-1)$ the Pochhammer symbol (rising factorial). This also has no known closed-form solution (except for special values of $t$ and $n$), but it is sufficient to yield:
$$f(t) = \sum_{k=0}^{n}(-1)^{t+k}\binom{n}{k}\binom{t}{k} = (-1)^t\,{}_2F_1(-n,-t;\,1;\,-1).$$
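For integer $t$, this $f(t)$ is exactly the discrete binomial-inversion preimage of $\binom{n}{x}$, which gives a quick exact consistency check (our sketch):

```python
from math import comb

# For integer t:  f(t) = Σ_k (-1)^{t+k} C(n,k) C(t,k).  Feeding f back
# through the forward transform F(x) = Σ_m C(x,m) f(m) must recover C(n,x).
n = 5

def f(t):
    return sum((-1) ** (t + k) * comb(n, k) * comb(t, k) for k in range(n + 1))

for x in range(10):
    assert sum(comb(x, m) * f(m) for m in range(x + 1)) == comb(n, x)
```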
And we can now evaluate:
$$\sum_x\binom{n}{x} = \int_{-\infty}^{\infty}\binom{x}{t+1} f(t)\, dt = \int_{-\infty}^{\infty}\binom{x}{t+1}\,(-1)^t\,{}_2F_1(-n,-t;\,1;\,-1)\, dt.$$
We may go a bit further in simplification:
$$= \int\binom{x}{t+1}\sum_{k=0}^{n}(-1)^{t+k}\binom{n}{k}\binom{t}{k}\, dt = \sum_{k=0}^{n}\binom{n}{k}\int e^{i(t+k)\pi}\binom{x}{t+1}\binom{t}{k}\, dt.$$
We proceed with the contour representations:
$$= \sum_{k=0}^{n}\binom{n}{k}(-1)^k\int e^{i\pi t}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^x e^{-i\theta(t+1)}\, d\theta\;\frac{1}{2\pi}\int_{-\pi}^{\pi}(1+e^{i\phi})^t e^{-i\phi k}\, d\phi\, dt$$
$$= \sum_{k=0}^{n}\binom{n}{k}(-1)^k\,\frac{1}{(2\pi)^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}(1+e^{i\theta})^x\, e^{-i\theta}\, e^{-i\phi k}\int e^{it(\pi-\theta-i\log(1+e^{i\phi}))}\, dt\, d\theta\, d\phi$$
$$= \sum_{k=0}^{n}\binom{n}{k}(-1)^k\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\big(1+e^{i\pi}(1+e^{i\phi})\big)^x\,\frac{e^{i\pi}}{1+e^{i\phi}}\, e^{-ik\phi}\, d\phi$$
$$= -\frac{e^{i\pi x}}{2\pi}\sum_{k=0}^{n}\binom{n}{k}(-1)^k\int_{-\pi}^{\pi}\frac{e^{i(x-k)\phi}}{1+e^{i\phi}}\, d\phi = -\frac{e^{i\pi x}}{2\pi}\int_{-\pi}^{\pi}\frac{e^{i(x-n)\phi}\,(e^{i\phi}-1)^n}{1+e^{i\phi}}\, d\phi.$$

6. Discussion/Conclusions

The key result of this paper is given by the formulas in (10), where the antidifference operator may be taken on a kernel $K(x,t)$ of our choosing. The method is consistent and robust, with widespread application in the analysis of discrete summations, particularly through the Laplace transform, for which a large table of inverse-transform identities exists. The continuous binomial transform developed in (31) has also shown consistent results and provides interesting integral representations of certain antidifferences. This paper ultimately seeks to provide a mechanical method of solving the functional equation given by the antidifference operator, providing a full mathematical framework for a new approach to the theory of summation.

References

  1. S. S. Cheng, Advances in Discrete Mathematics and Applications, Volume 3: Partial Difference Equations, 2019.
  2. L. Debnath and D. Bhatta, Integral Transforms and Their Applications, 3rd ed., Taylor & Francis, 2015.
  3. R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics: A Foundation for Computer Science, 2nd ed., Addison–Wesley, 1994.
  4. arXiv preprint, arXiv:1602.04080, 2016. https://arxiv.org/abs/1602.04080.
  5. E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed., Cambridge University Press, 1927.
  6. Wikipedia, “Polygamma function,” Wikipedia, The Free Encyclopedia. [Online]. Available: https://en.wikipedia.org/wiki/Polygamma_function.
  7. R. W. Gosper, “Decision procedure for indefinite hypergeometric summation,” Proceedings of the National Academy of Sciences of the United States of America, vol. 75, no. 1, pp. 40–42, 1978. https://www.pnas.org/doi/pdf/10.1073/pnas.75.1.40.
  8. I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed., Academic Press, 2007. http://fisica.ciens.ucv.ve/~svincenz/TISPISGIMR.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.