1. Introduction
1.1. Motivations and Definitions
Indefinite summation, or antidifferencing, provides the discrete analogue of the antiderivative in classical calculus. Given a sequence $g(n)$, any function $f(n)$ satisfying
$$f(n+1) - f(n) = g(n)$$
is called an antidifference (or indefinite sum) of $g$. Summing from $a$ to $b$ then yields the discrete Fundamental Theorem of Calculus,
$$\sum_{n=a}^{b} g(n) = f(b+1) - f(a).$$
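As a minimal illustration (the pair $g(n) = n$, $f(n) = n(n-1)/2$ is chosen here purely for demonstration and is not taken from the text), both defining relations can be checked in a few lines:

```python
# Check the antidifference relation f(n+1) - f(n) = g(n) and the discrete
# Fundamental Theorem of Calculus for g(n) = n with f(n) = n(n-1)/2.
def g(n):
    return n

def f(n):
    return n * (n - 1) // 2          # an antidifference of g

assert all(f(n + 1) - f(n) == g(n) for n in range(100))

a, b = 3, 20
assert sum(g(n) for n in range(a, b + 1)) == f(b + 1) - f(a)
print("discrete FTC verified for g(n) = n")
```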
1.2. Euler–Maclaurin Formula
A classical tool for constructing antidifferences is the Euler–Maclaurin formula. Let $f$ be an antidifference of $g$. Expanding the shift operator formally as $\Delta = e^{D} - 1$, with $D = d/dx$, and inverting gives
$$f(x) \sim \int g(x)\,dx - \frac{1}{2}\,g(x) + \sum_{k \geq 1} \frac{B_{2k}}{(2k)!}\, g^{(2k-1)}(x) + R,$$
where the $B_{2k}$ are the Bernoulli numbers, and $R$ is the constant term that becomes the relevant remainder term.
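As a numerical sanity check (a sketch with choices of ours: $g(x) = 1/x$, bounds $a = 10$, $b = 1000$, and truncation after three Bernoulli terms), the boundary form of the Euler–Maclaurin summation formula reproduces the corresponding partial harmonic sum to roughly ten digits:

```python
# Euler-Maclaurin check:  sum_{n=a}^{b} g(n) ≈ ∫_a^b g dx + (g(a)+g(b))/2
#                         + Σ_{k=1}^{m} B_{2k}/(2k)! * (g^{(2k-1)}(b) - g^{(2k-1)}(a))
import sympy as sp

x = sp.symbols('x')
g = 1 / x                      # test function (our choice)
a, b, m = 10, 1000, 3          # bounds and truncation order (our choice)

approx = sp.integrate(g, (x, a, b)) + (g.subs(x, a) + g.subs(x, b)) / 2
for k in range(1, m + 1):
    d = sp.diff(g, x, 2 * k - 1)
    approx += sp.bernoulli(2 * k) / sp.factorial(2 * k) * (d.subs(x, b) - d.subs(x, a))

exact = sum(1.0 / n for n in range(a, b + 1))
print(float(approx), exact)    # agree to about ten decimal places
```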
1.3. Poisson Summation
The Poisson summation formula connects discrete sums to continuous Fourier transforms. For a sufficiently nice function $f$,
$$\sum_{n=-\infty}^{\infty} f(n) = \sum_{k=-\infty}^{\infty} \hat{f}(k),$$
where $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$. This arises immediately from a simple derivation: expand the periodization $\sum_{n} f(x+n)$ in a Fourier series and evaluate at $x = 0$. The Poisson summation formula's use of the Fourier transform provides the guiding intuition. In what follows, we develop and generalize the use of integral transforms in the indefinite-summation setting, yielding new transform-based antidifference formulas.
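As a quick numerical check of the formula (a sketch; the Gaussian $f(x) = e^{-\pi t x^2}$ and the transform convention $\hat f(\xi) = \int f(x)\, e^{-2\pi i x \xi}\, dx$ are choices made here for illustration), Poisson summation reduces to the classical theta-function identity, which is easy to verify:

```python
# Poisson summation for f(x) = exp(-pi*t*x^2): its Fourier transform under the
# convention fhat(xi) = ∫ f(x) e^{-2πi x xi} dx is t^{-1/2} exp(-pi*xi^2/t),
# so  sum_n f(n)  should equal  sum_k fhat(k).
import math

def lhs(t, N=50):
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

def rhs(t, N=50):
    return sum(math.exp(-math.pi * k * k / t) for k in range(-N, N + 1)) / math.sqrt(t)

for t in (0.7, 1.0, 2.3):
    print(t, lhs(t), rhs(t))         # the two columns agree to machine precision
```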
2. The Method of Integral Transforms
We once again consider the functional equation
$$f(n+1) - f(n) = g(n).$$
If we assume that $g$ is the integral transform of some function $\tilde{g}$ with kernel $K$, that is,
$$g(n) = \int K(n,s)\, \tilde{g}(s)\, ds,$$
then any $F(n,s)$ satisfying $F(n+1,s) - F(n,s) = K(n,s)$ immediately leads to the formal result
$$f(n) = \int F(n,s)\, \tilde{g}(s)\, ds.$$
This becomes a generalization that readily solves for the antidifference of $g$, and it can be used easily in the evaluation of concrete summations. That said, we can also approach the problem in the definite sense:
$$\sum_{n=a}^{b} g(n) = \int \left( \sum_{n=a}^{b} K(n,s) \right) \tilde{g}(s)\, ds.$$
Here, the interchange of operators is justified rather simply through the Fubini–Tonelli theorem: since the sum is finite, the iterated integral/sum is absolutely convergent for at least one ordering.
We may also take the bounds to infinity:
$$\sum_{n=a}^{\infty} g(n) = \int \left( \sum_{n=a}^{\infty} K(n,s) \right) \tilde{g}(s)\, ds.$$
However, in this case the interchange is only justified when the resulting iterated integral/sum converges absolutely. We will primarily work with the solution provided in Equation (10). While the indefinite sum directly yields an antidifference that solves the definite sum in Equation (11), taking its limit does not always produce a valid solution to the infinite sum in Equation (12), due to the necessary convergence conditions.
2.1. Using the Laplace Transform
Arguably the most natural choice of kernel is the exponential function $e^{-sn}$. This leads us to evaluate the indefinite sum
$$\sum_{n} e^{-sn}.$$
This can be computed directly using the forward difference: since
$$\frac{e^{-s(n+1)}}{e^{-s}-1} - \frac{e^{-sn}}{e^{-s}-1} = e^{-sn},$$
the function $e^{-sn}/(e^{-s}-1)$ is an antidifference of the kernel. Now, assume that $g(n)$ is the Laplace transform of some function $\tilde{g}(s)$, i.e.,
$$g(n) = \int_0^{\infty} e^{-sn}\, \tilde{g}(s)\, ds.$$
Dropping the constant term (or absorbing it into a final constant of summation), we obtain
$$f(n) = \int_0^{\infty} \frac{e^{-sn}}{e^{-s}-1}\, \tilde{g}(s)\, ds,$$
where $\tilde{g}$ is the inverse Laplace transform of $g$. The identities (17) and (19) are already sufficient to derive closed-form expressions for various nontrivial summations.
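As a concrete numerical check of the definite-sum version (a sketch under the assumption $\tilde g(s) = 1$, i.e. $g(n) = 1/n$; the bounds $a = 3$, $b = 100$ are arbitrary), the summed kernel $(e^{-as} - e^{-(b+1)s})/(1 - e^{-s})$ integrates to the expected partial harmonic sum:

```python
# Laplace-kernel check:
#   sum_{n=a}^{b} 1/n = ∫_0^∞ (e^{-as} - e^{-(b+1)s}) / (1 - e^{-s}) ds,
# since 1/n = ∫_0^∞ e^{-sn} ds and the kernel sums to a finite geometric series.
from mpmath import mp, exp, expm1, quad, inf

mp.dps = 30
a, b = 3, 100

def summed_kernel(s):
    # numerically stable form of (e^{-as} - e^{-(b+1)s}) / (1 - e^{-s})
    return exp(-a * s) * expm1(-(b + 1 - a) * s) / expm1(-s)

lhs = quad(summed_kernel, [0, inf])
rhs = sum(mp.mpf(1) / n for n in range(a, b + 1))
print(lhs, rhs)                      # both ≈ 3.68737..., i.e. H_100 - H_2
```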
2.2. Using the Fourier Transform
In a similar fashion to the Laplace transform, we can use the Fourier transform to evaluate indefinite sums. The Fourier transform of a function $g$ is given by
$$\hat{g}(\omega) = \int_{-\infty}^{\infty} g(x)\, e^{-i\omega x}\, dx,
\qquad
g(n) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{g}(\omega)\, e^{i\omega n}\, d\omega.$$
Since an antidifference of the kernel $e^{i\omega n}$ is $e^{i\omega n}/(e^{i\omega}-1)$, this arrives at a very similar result to (19):
$$f(n) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{i\omega n}}{e^{i\omega}-1}\, \hat{g}(\omega)\, d\omega.$$
2.3. Using the Mellin Transform
The Mellin transform is another useful tool for evaluating sums of the form $\sum_n g(n)$. The Mellin transform is defined as
$$\tilde{g}(s) = \int_0^{\infty} g(x)\, x^{s-1}\, dx,
\qquad
g(n) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \tilde{g}(s)\, n^{-s}\, ds.$$
By applying the same theory as in (10), and noting that the Hurwitz zeta function satisfies $\zeta(s,n) - \zeta(s,n+1) = n^{-s}$, so that $-\zeta(s,n)$ is an antidifference of the kernel $n^{-s}$, we obtain:
$$f(n) = -\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \zeta(s,n)\, \tilde{g}(s)\, ds.$$
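As a consistency check on the Mellin side (illustrative only; the exponent $s = 3$ and the bounds are arbitrary choices), the summed kernel can be written through the Hurwitz zeta function, since $\zeta(s,a) - \zeta(s,b+1) = \sum_{n=a}^{b} n^{-s}$:

```python
# Hurwitz-zeta form of the summed Mellin kernel:
#   sum_{n=a}^{b} n^{-s} = zeta(s, a) - zeta(s, b+1).
from mpmath import mp, zeta

mp.dps = 25
s, a, b = 3, 2, 500
direct = sum(mp.mpf(n) ** (-s) for n in range(a, b + 1))
closed = zeta(s, a) - zeta(s, b + 1)
print(direct, closed)                # agree to the working precision
```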
3. Working with Arbitrary Transforms
While the Laplace, Fourier, and Mellin transforms lend themselves nicely to the analytic computation of antidifferences (indefinite summations), there are ultimately two key considerations when working with Equation (10):
Simplicity of Kernel Summation: We would like the indefinite summation over the kernel to be sufficiently simple.
Existence and Behavior of Inverse Transforms: We would like the inverse transform of $g$ to exist in a tractable form and to be well-behaved enough to allow analytic integration in the final expression.
This immediately suggests a slightly unnatural choice of kernel: the binomial coefficient. Its indefinite sum is particularly simple, since Pascal's rule gives $\binom{n+1}{k} - \binom{n}{k} = \binom{n}{k-1}$, so summing the kernel only shifts its lower index. Taking this kernel in Equation (10) immediately provides a remarkable identity, and, utilizing the absorption identity $\binom{r}{k} = \frac{r}{k}\binom{r-1}{k-1}$, a second remarkable example follows.
As a result, we will now additionally develop a brief theory of this continuous binomial transform.
3.1. The Continuous Binomial Transform
Consider the following integral transform:
We proceed in the usual manner:
And we would like the inner integral to collapse to a point evaluation. Motivated by the discrete binomial inversion formula
$$g(n) = \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} f(k)
\quad\Longleftrightarrow\quad
f(n) = \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} g(k)$$
(see Equation (5.48) in [3], p. 192), we proceed with our derivation.
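Before the proof, a quick numerical check of this inversion pair (a minimal sketch; the random integer sequence and its length are arbitrary choices) confirms that applying the signed binomial transform twice returns the original sequence:

```python
# Binomial inversion (cf. Eq. (5.48) of Concrete Mathematics) as an involution:
# applying  a_m -> sum_k (-1)^k C(m,k) a_k  twice recovers the original sequence.
from math import comb
import random

def signed_binomial_transform(seq):
    return [sum((-1) ** k * comb(m, k) * seq[k] for k in range(m + 1))
            for m in range(len(seq))]

random.seed(0)
f_seq = [random.randint(-10, 10) for _ in range(12)]
g_seq = signed_binomial_transform(f_seq)
assert signed_binomial_transform(g_seq) == f_seq
print("binomial inversion verified on a random sequence")
```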
Proof. We will show that the desired kernel
is given by
Examining the inner integral:
Setting
, we have
.
This is relatively simple to show. Let
. Consider:
Formally, we utilize the Residue Theorem to obtain:
We also note that this extends to all real values of m and j:
which can then be evaluated numerically since the imaginary part is zero due to symmetry. This is well-defined.
We recognize the inner integral as a basic Fourier Transform giving
This completes the proof with
. Thus:
To demonstrate the method's consistency, we will fully work out a case in which the transform behaves nicely, and we will also illustrate a second, less well-behaved case. Evaluating the sum there is relatively tricky, so we develop one more identity to help us, utilizing the same methods as before. With it in hand, we have, relatively simply:
This illustrates the consistency of the method. The use of integral transforms provides a purely systematic way of analyzing the indefinite summations of various functions.
4. Change of Variables
In indefinite summation, it is often useful to focus solely on either the even or the odd terms. However, adjusting the step size in the discrete setting can be challenging. To make this idea more concrete, consider a classical sum $\sum_{n=a}^{b} g(n)$. Substitutions of the form $n \mapsto n + c$ are valid in the sense that the step size of the sum does not change. On the other hand, substitutions of the form $n \mapsto an$ for integer $a > 1$ are not so simple, as terms must be sifted. For instance, consider the substitution $n = 2m$: only the even-indexed terms of the original sum are visited. In order for this substitution to be valid, the step size in the original sum must be halved, since $m = n/2$ advances in steps of $1/2$.
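The sifting issue can be made concrete numerically (a minimal sketch; the function $g(n) = 1/(n^2+1)$ and the cutoff $N$ are arbitrary choices): the substitution $n = 2m$ and the sifting factor $(1 + (-1)^n)/2$ pick out exactly the same even-indexed terms.

```python
# Even-term sums: substitution n = 2m versus the sifting factor (1 + (-1)^n)/2.
def g(n):
    return 1.0 / (n * n + 1)

N = 1000
even_subst  = sum(g(2 * m) for m in range(N // 2 + 1))               # n = 2m, half as many terms
even_sifted = sum(g(n) * (1 + (-1) ** n) / 2 for n in range(N + 1))  # sift the even terms
full        = sum(g(n) for n in range(N + 1))
print(even_subst, even_sifted, full)   # first two agree; both differ from the full sum
```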
4.1. The Problem with Scaling
To address this issue, we return to our functional definitions. Consider the step-$a$ functional equation
$$f(x+a) - f(x) = g(x).$$
If we analyze this functional equation formally, writing $f = (e^{aD} - 1)^{-1} g$ with $D = d/dx$, we obtain a result akin to the Euler–Maclaurin formula:
$$f(x) \sim \frac{1}{a}\int g(x)\,dx - \frac{1}{2}\,g(x) + \sum_{k \geq 1} \frac{B_{2k}\, a^{2k-1}}{(2k)!}\, g^{(2k-1)}(x) + C.$$
The Laurent series expansion can easily be obtained for a few terms, giving an analogous Euler–Maclaurin formula. For $a = 2$, we have:
$$f(x) \sim \frac{1}{2}\int g(x)\,dx - \frac{1}{2}\,g(x) + \frac{1}{6}\,g'(x) - \frac{1}{90}\,g'''(x) + \cdots.$$
That said, we wish to extend the methods in Equation (10) by instead using the operators in Equation (34). If we can find a specific function whose indefinite sum is known for a generalized step size, we will be able to apply it to larger classes of functions via our integral transforms. The immediate choice is the standard geometric series (which the kernels in the Laplace, Fourier, and Mellin transforms all satisfy), where generalizing by step size is trivial: for any fixed ratio $r$,
$$\frac{r^{x+a}}{r^{a}-1} - \frac{r^{x}}{r^{a}-1} = r^{x},$$
so $r^{x}/(r^{a}-1)$ is a step-$a$ antidifference of $r^{x}$. These can all be verified using the functional definitions in Equation (34). Thus, given the choice of a specific transform, for the Laplace kernel for instance, we have
$$f(x) = \int_0^{\infty} \frac{e^{-sx}}{e^{-as}-1}\, \tilde{g}(s)\, ds.$$
A Concrete Example: Suppose we are looking to evaluate a sum over only the even terms. Summing the odd terms becomes very easy as well; indeed, given any scaling factor of $n$, we can compute sums over all of its shifts. Thus, we have a general method of dealing with step size: first make the scaling substitution and apply (19); then shift the bounds based on the application.
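As a stand-in illustration of such a scaled-and-shifted sum (the particular series $\sum_{m \ge 0} (2m+1)^{-2}$ is our choice, not the example displayed above), the odd-term sum reduces to a Hurwitz zeta value with the expected closed form $\pi^2/8$:

```python
# Odd-term sum via scaling and shifting:
#   sum_{m>=0} 1/(2m+1)^2 = (1/4) * zeta(2, 1/2) = pi^2 / 8.
from mpmath import mp, zeta, pi

mp.dps = 25
direct = sum(1.0 / (2 * m + 1) ** 2 for m in range(100000))
print(direct, zeta(2, mp.mpf(1) / 2) / 4, pi ** 2 / 8)   # ~6-digit agreement for the slow direct sum
```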
4.2. A General Change of Variables
We now consider a generalized substitution. Indeed, let $n = \varphi(m)$ for some increasing function $\varphi$. We see that this now becomes a matter of choosing an appropriate $\varphi$ such that the transformed kernel is easy to sum. Unfortunately, this is extremely limited for nonlinear $\varphi$.
5. Method Examples
1. An Arbitrary Example
The inverse Laplace transform gives
, where
. Then
Using the series expansion
as
:
We again use the fact that the Laplace transform of
is:
Applying this with
:
Here, we can work backwards by computing each term individually.
Using the polygamma identity:
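The specific identity intended above is not reproduced here; as an example of the kind of polygamma identity involved, the digamma function is an antidifference of $1/x$, i.e. $\psi(x+1) - \psi(x) = 1/x$, which can be checked numerically:

```python
# Digamma as an antidifference of 1/x:  sum_{k=a}^{b} 1/k = psi(b+1) - psi(a).
from mpmath import mp, digamma

mp.dps = 25
a, b = 4, 300
print(sum(mp.mpf(1) / k for k in range(a, b + 1)), digamma(b + 1) - digamma(a))
```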
2. The Riemann Zeta Function
Computing the inverse Mellin Transform gives
Now, using the beautiful identity of the polygamma function:
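The displayed identity is not reproduced here; a standard identity of this kind expresses zeta-function tails as polygamma values, $\sum_{k \ge n} k^{-(m+1)} = \frac{(-1)^{m+1}}{m!}\,\psi^{(m)}(n)$, which can be checked numerically (the choices $m = 1$, $n = 7$ are arbitrary):

```python
# Zeta tails as polygamma values: for m = 1,  sum_{k>=n} 1/k^2 = psi'(n).
from mpmath import mp, polygamma, zeta

mp.dps = 25
n = 7
tail = zeta(2) - sum(mp.mpf(1) / k ** 2 for k in range(1, n))
print(tail, polygamma(1, n))         # the two values coincide
```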
3. Polynomials
We will address the problem of polynomials here. It is not immediately obvious what happens when we attempt to compute summations of polynomial terms using this method. Consider the classic example:
Integrate by parts once, moving the
t-derivative onto
:
Since
vanishes for
and boundary terms at
involve
against a smooth function (hence vanish except at the final step), the boundary contributions drop out each time. Therefore, we have
Repeating
k times yields
where the relevant coefficient is the corresponding term in the series expansion. Thus, we see relatively clearly that the method reproduces the expected polynomial antidifference.
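The polynomial case can be cross-checked against the classical Bernoulli-polynomial antidifference (a consistency sketch rather than the derivation above; the exponents $k = 1, \dots, 5$ and the bound $n = 10$ are arbitrary): an antidifference of $x^k$ is $B_{k+1}(x)/(k+1)$, so $\sum_{j=0}^{n-1} j^{k} = \bigl(B_{k+1}(n) - B_{k+1}(0)\bigr)/(k+1)$.

```python
# Faulhaber check: B_{k+1}(x)/(k+1) is an antidifference of x^k.
import sympy as sp

x = sp.symbols('x')
for k in range(1, 6):
    F = sp.bernoulli(k + 1, x) / (k + 1)
    # forward difference: F(x+1) - F(x) should equal x^k
    assert sp.expand(F.subs(x, x + 1) - F - x ** k) == 0
    # definite sum check at n = 10
    n = 10
    assert sum(j ** k for j in range(n)) == (sp.bernoulli(k + 1, n) - sp.bernoulli(k + 1, 0)) / (k + 1)
print("Bernoulli-polynomial antidifferences verified for k = 1..5")
```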
4. Binomial Coefficients
By the binomial theorem, it is known that
$$\sum_{k=0}^{n} \binom{n}{k} = 2^{n}.$$
However, a surprisingly more difficult problem is evaluating the partial sum $\sum_{k=0}^{t} \binom{n}{k}$ in general or, more generally, what the antidifference of the binomial coefficient evaluates to.
While this has no known closed-form solution in elementary functions, we can use the methods of Equation (31). Let us use the integral representation of the binomial coefficient over the contour:
Interchanging the integrals, under Fubini’s theorem, we have:
The integral over
x yields a Dirac delta function:
Using the appropriate trigonometric identity, we have:
This is related to the Beta integral, which is defined as
$$B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\, dt = 2\int_0^{\pi/2} \sin^{2a-1}\theta\, \cos^{2b-1}\theta\, d\theta.$$
It is not known whether this expression can be expressed in terms of the Beta function. However, by formulas 3.634.1 and 3.634.2 in [8], it is true that:
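The cited formulas are not reproduced here. A closely related standard identity, the binomial distribution function at $p = 1/2$, expresses the partial sum through the regularized incomplete beta function, $\sum_{k=0}^{t} \binom{n}{k} = 2^{n} I_{1/2}(n-t,\, t+1)$, and can be checked numerically (the values $n = 20$, $t = 7$ are arbitrary):

```python
# Partial binomial sums via the regularized incomplete beta function:
#   sum_{k=0}^{t} C(n,k) = 2^n * I_{1/2}(n - t, t + 1).
from math import comb
from mpmath import mp, betainc

mp.dps = 25
n, t = 20, 7
partial  = sum(comb(n, k) for k in range(t + 1))
via_beta = 2 ** n * betainc(n - t, t + 1, 0, mp.mpf(1) / 2, regularized=True)
print(partial, via_beta)             # 137980 and 137980.0000...
```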
However, it is considerably easier to instead evaluate
by taking:
By orthogonality (or by the Dirac delta):
so only the terms with
survive:
Here the generalized hypergeometric function is defined as
$${}_pF_q(a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, z) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}\, \frac{z^k}{k!},$$
where $(a)_k$ is the Pochhammer symbol (rising factorial). This also has no known closed-form solution (except for special values of $t$ and $n$), but it is sufficient to yield:
We may go a bit further in simplification to:
We proceed with the contour representations:
6. Discussion/Conclusions
The key result of this paper is given by the formulas in Equation (10), where the antidifference operator may be applied to a function of our choosing. The method is consistent and robust, with widespread application in analyzing discrete summations, particularly through the Laplace transform, for which a large table of inverse-transform identities exists. The continuous binomial transform developed in Equation (31) has also shown consistent results and provides interesting integral representations of certain antidifferences. This paper ultimately seeks to provide a mechanical method of solving the functional equation given by the antidifference operator, providing a full mathematical framework for a new approach to the theory of summation.
References
- S. S. Cheng, Advances in Discrete Mathematics and Applications, Volume 3: Partial Difference Equations, 2019.
- L. Debnath and D. Bhatta, Integral Transforms and Their Applications, 3rd ed., Taylor & Francis, 2015.
- R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics: A Foundation for Computer Science, 2nd ed., Addison–Wesley, 1994.
- arXiv preprint arXiv:1602.04080, 2016. https://arxiv.org/abs/1602.04080.
- E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed., Cambridge University Press, 1927.
- Wikipedia, “Polygamma function,” Wikipedia, The Free Encyclopedia. [Online]. Available: https://en.wikipedia.org/wiki/Polygamma_function.
- R. W. Gosper, “Decision procedure for indefinite hypergeometric summation,” Proceedings of the National Academy of Sciences of the United States of America, vol. 75, no. 1, pp. 40–42, 1978. https://www.pnas.org/doi/pdf/10.1073/pnas.75.1.40.
- I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed., Academic Press, 2007. http://fisica.ciens.ucv.ve/~svincenz/TISPISGIMR.