Preprint
Article

This version is not peer-reviewed.

Using Vector Representations of Characteristic Functions and Vector Logarithms When Proving Asymptotic Statements

Submitted: 20 December 2025
Posted: 23 December 2025


Abstract
In this methodological-technical note, in addition to the well-known concepts of logarithms of positive real numbers and of operators, we open a path for a mathematical treatment of the concept of the logarithm of a vector. We prove the most basic arithmetic operations for this new logarithm concept and demonstrate how it applies to characteristic functions and limit theorems of probability theory. As a side result, we revise a formula for $i^i$ that is known from the literature.

1. Introduction

Classical considerations of characteristic functions can be found in numerous sources, such as in [1,2,3,4,5]. Recently, a vector representation of the characteristic function of a univariate probability distribution was studied in [6]. This representation avoids the use of complex numbers, in particular the imaginary unit, and thus enables, among other things, a complete numerical evaluation.
It is well known that the logarithm of a characteristic function plays a central role in basic asymptotic techniques of probability theory. In the present methodological-technical note, in addition to the well-known concepts of logarithms of positive real numbers and operators, we open a path for a mathematical treatment of the new mathematical concept of the logarithm of a vector and demonstrate the application of this concept in the asymptotic treatment of characteristic functions. For this, in particular, values of the logarithm in a neighborhood of a certain single point or vector are needed.
A fundamental property of logarithms is that they can transform products into sums. Products of characteristic functions represent an essential tool for dealing with sums of independent random variables.
Following an introduction to general vector-valued vector products in Section 2, vector-valued logarithms of vectors are defined in Section 3 and then applied to characteristic functions for an important special case in Section 4. We prove the most basic arithmetic operations for the new logarithm concept and demonstrate that this concept is applicable to an asymptotic analysis of characteristic functions and to limit theorems of probability theory. Finally, we revise a formula for $i^i$ that is known from the literature.
The authors of the recently published article [7] state that their complex-valued, norm-uniform Fourier dependence measure between the joint and product-marginal characteristic functions does not, in fact, rely on any special algebraic role of the imaginary unit and operates on the two-dimensional real vector formed by the real and imaginary parts of their statistic.

2. Vector-Valued Vector Products

The great diversity of the theory of vectorial vector logarithms is based on the broad structure of vectorial vector products, which we briefly discuss here. Let $\mathbb{R}^2$ denote the two-dimensional vector space of columns of length two with real entries, and $\oplus : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^2$ the component-wise vector addition defined on it. The neutral element with respect to this vector operation is denoted by $0$. Let further $\|.\| : \mathbb{R}^2 \to [0, \infty)$ be a positively homogeneous functional such that the set $B = \{ z \in \mathbb{R}^2 : \|z\| \le 1 \}$ is star-shaped with respect to the inner point $0$. We call such a functional a phs-functional and call the map $\odot : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^2$, which is generated by the phs-functional $\|.\|$,
$$ z_1 \odot z_2 = \frac{\|z_1\|\,\|z_2\|}{\|z_1 \ast z_2\|}\; z_1 \ast z_2, \quad z_l \neq 0, \ l = 1, 2, $$
a vector-valued vector product, let $\cdot : \mathbb{R} \times \mathbb{R}^2 \to \mathbb{R}^2$ denote multiplication of a vector by a scalar, and call the quadruple $[\mathbb{R}^2, \oplus, \cdot, \odot]$ a vector space with vector-valued vector product. Here,
$$ z_1 \ast z_2 = \begin{pmatrix} x_1 x_2 - y_1 y_2 \\ x_1 y_2 + y_1 x_2 \end{pmatrix}, \quad z_l = \begin{pmatrix} x_l \\ y_l \end{pmatrix}, \ l = 1, 2, $$
is a vector-valued vector product which is closely related to usual complex-number multiplication and is composed of an orthogonal transformation and a scaling. A polar representation shows in a simple way that the multiplication of two vectors can equivalently be described by the addition of two angles and the multiplication of two radii. Note that if $\|.\|$ is the Euclidean norm then $\odot = \ast$. Here are two more possible product rules, $\odot_p$ and $\odot_{a,b}$, where
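As a quick illustration (ours, not part of the original text), the product generated by the Euclidean norm can be checked numerically against ordinary complex multiplication; the helper names `vstar`, `vprod` and `euclid` are ad hoc:

```python
import math

def vstar(z1, z2):
    # componentwise complex-type product (x1*x2 - y1*y2, x1*y2 + y1*x2)
    x1, y1 = z1
    x2, y2 = z2
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

def vprod(z1, z2, norm):
    # vector-valued vector product generated by a phs-functional `norm`
    s = vstar(z1, z2)
    scale = norm(z1) * norm(z2) / norm(s)
    return (scale * s[0], scale * s[1])

def euclid(z):
    return math.hypot(z[0], z[1])

z1, z2 = (1.0, 2.0), (3.0, -1.0)
w = vprod(z1, z2, euclid)                  # -> (5, 5)
c = complex(*z1) * complex(*z2)            # ordinary complex multiplication
assert abs(w[0] - c.real) < 1e-12 and abs(w[1] - c.imag) < 1e-12
```

For the Euclidean norm the scaling factor equals one, so the product collapses to $\ast$, i.e. to complex multiplication read in vector form.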
$$ z_1 \odot_p z_2 = \left( \frac{(|x_1|^p + |y_1|^p)\,(|x_2|^p + |y_2|^p)}{|x_1 x_2 - y_1 y_2|^p + |x_1 y_2 + y_1 x_2|^p} \right)^{\frac{1}{p}} \begin{pmatrix} x_1 x_2 - y_1 y_2 \\ x_1 y_2 + y_1 x_2 \end{pmatrix} $$
and, for a > 0 , b > 0 ,
$$ z_1 \odot_{a,b} z_2 = \left( \frac{\left( \frac{x_1^2}{a^2} + \frac{y_1^2}{b^2} \right) \left( \frac{x_2^2}{a^2} + \frac{y_2^2}{b^2} \right)}{\frac{(x_1 x_2 - y_1 y_2)^2}{a^2} + \frac{(x_1 y_2 + y_1 x_2)^2}{b^2}} \right)^{1/2} \begin{pmatrix} x_1 x_2 - y_1 y_2 \\ x_1 y_2 + x_2 y_1 \end{pmatrix}. $$
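A minimal numerical sketch (our own, with ad-hoc names `pnorm`, `pprod`) of the product rule $\odot_p$; by construction the product is norm-multiplicative, $\|z_1 \odot_p z_2\|_p = \|z_1\|_p\,\|z_2\|_p$, which the assertion checks for several $p$:

```python
def pnorm(z, p):
    # the p-th-power phs-functional on R^2
    return (abs(z[0]) ** p + abs(z[1]) ** p) ** (1.0 / p)

def pprod(z1, z2, p):
    # product rule generated by the p-functional
    x1, y1 = z1
    x2, y2 = z2
    s = (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)
    c = pnorm(z1, p) * pnorm(z2, p) / pnorm(s, p)
    return (c * s[0], c * s[1])

z1, z2 = (0.5, 1.5), (2.0, 0.25)
for p in (1.0, 2.0, 3.0):
    w = pprod(z1, z2, p)
    # norm-multiplicativity holds by construction
    assert abs(pnorm(w, p) - pnorm(z1, p) * pnorm(z2, p)) < 1e-12
```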
The $n$-th power of $z \in \mathbb{R}^2$ with respect to the phs-functional $\|.\|$ is denoted
$$ z^{\odot n} = z^{\odot (n-1)} \odot z, \quad n = 1, 2, \dots, \qquad z^{\odot 0} = \mathbf{1}, \ \text{where } \mathbf{1} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \in \mathbb{R}^2. $$
One can prove that
$$ z^{\odot n} = \|z\|^n \, \frac{z^{\ast n}}{\|z^{\ast n}\|}. $$
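This power identity can be verified numerically; the sketch below (ours) iterates the defining recursion for the $p = 3$ functional and compares with the closed form, computing $z^{\ast n}$ via the complex power purely as a computational convenience:

```python
def vstar(z1, z2):
    # componentwise complex-type product
    x1, y1 = z1
    x2, y2 = z2
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

def pnorm(z, p):
    return (abs(z[0]) ** p + abs(z[1]) ** p) ** (1.0 / p)

def pprod(z1, z2, p):
    s = vstar(z1, z2)
    c = pnorm(z1, p) * pnorm(z2, p) / pnorm(s, p)
    return (c * s[0], c * s[1])

p, n = 3.0, 6
z = (0.8, 0.4)
# left side: z^{⊙n} by iterating the defining recursion, starting from 1 = (1, 0)
w = (1.0, 0.0)
for _ in range(n):
    w = pprod(w, z, p)
# right side: ||z||^n · z^{*n} / ||z^{*n}||, with z^{*n} the complex-type power
cz = complex(*z) ** n
s = (cz.real, cz.imag)
scale = pnorm(z, p) ** n / pnorm(s, p)
rhs = (scale * s[0], scale * s[1])
assert abs(w[0] - rhs[0]) < 1e-12 and abs(w[1] - rhs[1]) < 1e-12
```

The identity follows by induction, since the direction of $z^{\odot n}$ is that of $z^{\ast n}$ while positive homogeneity of the phs-functional forces $\|z^{\odot n}\| = \|z\|^n$.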
The vector-valued vector division of two vectors from $\mathbb{R}^2$ is defined by
$$ 0 \oslash z = 0 \quad \text{and} \quad z_1 \oslash z_2 = \frac{\|z_1\|}{\|z_2\|}\, \frac{z_1 \div z_2}{\|z_1 \div z_2\|_2}, \quad z_l \neq 0, \ l = 1, 2, $$
where $\div$ denotes the Euclidean (complex-type) division generated by $\ast$,
and the exponential function with respect to the phs-functional $\|.\|$ by
$$ \exp_{\|.\|}(z) = \sum_{k=0}^{\infty} \frac{z^{\odot k}}{k!}. $$
For every real number $a > 0$ and any vector $z \in \mathbb{R}^2$, the $z$-power of $a$ with respect to the phs-functional $\|.\|$ is given by
$$ a_{\|.\|}^{z} := \exp_{\|.\|}\left( (\ln a) \cdot z \right). $$
The operation $(a, z) \mapsto (\ln a) \cdot z$ and the exponential function on the right-hand side of this equation are well defined within the vector space with product $[\mathbb{R}^2, \oplus, \cdot, \odot]$, so the symbol on the left-hand side of this equation is well defined, too.
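A sketch (ours, with ad-hoc names `vexp` etc.) of the series definition of $\exp_{\|.\|}$ and of the $z$-power of a positive real, in the Euclidean case, where the series must agree with the classical complex exponential:

```python
import cmath
import math

def vstar(z1, z2):
    x1, y1 = z1
    x2, y2 = z2
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

def euclid(z):
    return math.hypot(z[0], z[1])

def vprod(z1, z2, norm=euclid):
    s = vstar(z1, z2)
    c = norm(z1) * norm(z2) / norm(s)
    return (c * s[0], c * s[1])

def vexp(z, norm=euclid, terms=40):
    # partial sum of exp_{||.||}(z) = sum_k z^{⊙k} / k!
    total, power, fact = (0.0, 0.0), (1.0, 0.0), 1.0
    for k in range(terms):
        total = (total[0] + power[0] / fact, total[1] + power[1] / fact)
        power = vprod(power, z, norm)
        fact *= k + 1
    return total

z = (0.3, 1.2)
w = vexp(z)
c = cmath.exp(complex(*z))                       # classical complex exponential
assert abs(w[0] - c.real) < 1e-10 and abs(w[1] - c.imag) < 1e-10

# z-power of a positive real: 2^I = exp((ln 2)·I) = (cos ln 2, sin ln 2)
aI = vexp((0.0, math.log(2.0)))
assert abs(aI[0] - math.cos(math.log(2.0))) < 1e-10
assert abs(aI[1] - math.sin(math.log(2.0))) < 1e-10
```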
For more details on the material presented here, we refer to [8].

3. Vector-Valued Vector Logarithms

An exponential function is defined by summing weighted powers of a quantity $z$. If $z$ is a real number or an operator, the concept of a power has long been well defined. However, until the publication of [8] and the author's work since 2020 cited therein, this was not the case when $z$ is a vector. Some of the results for vectors were presented in the previous section and will now be applied. Section 4 will show that we only need values of logarithms in small neighborhoods that asymptotically contract to the vector $\mathbf{1} \in \mathbb{R}^2$. For this reason, we ignore the lack of uniqueness in the following definition and thereafter.
Definition 1.  
Let $z = \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^2$ and $\|.\|$ a phs-functional. A vector $w = \begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^2$ such that $\exp_{\|.\|}(w) = z$ is called a logarithm of $z$ with respect to the phs-functional $\|.\|$ and is denoted
$$ w = \log_{\|.\|}(z), \quad z \in \mathbb{R}^2. $$
Definition 2.  
For $z$ and $w$ from $\mathbb{R}^2$, we define the vectorial vector power
$$ z^{w} = \exp_{\|.\|}\left( w \odot \log_{\|.\|} z \right). $$
From now on, we restrict consideration to the case where the functional $\|.\|$ equals the Euclidean norm. We then write $\log_{\|.\|} = \log$ and $\exp_{\|.\|} = \exp$.
Theorem 1.  
For elements $z$, $z_1$ and $z_2$ from $\mathbb{R}^2$,
$$ \exp(\log z) = z, \tag{1} $$
$$ \log(z_1 \odot z_2) = \log z_1 + \log z_2 \tag{2} $$
and
$$ \log(z_1 \div z_2) = \log z_1 - \log z_2. \tag{3} $$
Proof. The first statement holds because $w$ is called $\log z$, that is, $w = \log z$, if $\exp(w) = z$.
Let now $\log(z_1 \odot z_2) = w$; then
$$ \exp(w) = z_1 \odot z_2 = \exp(\log z_1) \odot \exp(\log z_2) = \exp(\log z_1 + \log z_2). $$
This proves (2). The proof of the third statement follows analogously. □
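In the Euclidean case the logarithm can be evaluated through polar coordinates, and Theorem 1 can be checked numerically for vectors whose angles do not wrap past the principal branch; the following sketch (ours) uses the complex logarithm purely as a computational device:

```python
import cmath

def vlog(z):
    # Euclidean-case principal vector logarithm: log z = (ln ||z||_2, polar angle)
    c = cmath.log(complex(z[0], z[1]))
    return (c.real, c.imag)

z1, z2 = (2.0, 1.0), (1.0, 0.5)
prod = complex(*z1) * complex(*z2)      # z1 ⊙ z2 coincides with complex mult here
lhs = vlog((prod.real, prod.imag))
rhs = (vlog(z1)[0] + vlog(z2)[0], vlog(z1)[1] + vlog(z2)[1])
# log(z1 ⊙ z2) = log z1 + log z2
assert abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9
# exp(log z) = z
back = cmath.exp(complex(*vlog(z1)))
assert abs(back.real - z1[0]) < 1e-9 and abs(back.imag - z1[1]) < 1e-9
```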

4. Applications to Asymptotic Probability Theory

The vector representation of characteristic functions in [6] does without the use of the imaginary unit. Following this approach, one cannot directly apply the classical proof methods for limit theorems of probability theory, as presented in [1,2,3,4,5] among others, but one can adapt them appropriately, which is done here by reproving and slightly modifying the limit theorem of Poisson without the use of imaginary numbers.
Example 1  
(Limit theorem of Poisson). Let $X_{n,k}$, $k = 1, \dots, n$, be independent random variables following the probability distribution $P(X_{n,k} = 0) = 1 - p_n = 1 - P(X_{n,k} = 1)$ with $p_n \in (0, 1)$, $n = 1, 2, \dots$, and assume that $n p_n \to \lambda \in (0, \infty)$ as $n \to \infty$. Then the distribution of the sum $\xi_n = X_{n,1} + \dots + X_{n,n}$ tends to the Poisson distribution with parameter $\lambda$.
Proof. We denote the characteristic function of $\xi_n$ by $\varphi_n$. According to [6], $\varphi_n$ has the vector representation
$$ \varphi_n(t) = \left( (1 - p_n)\, \mathbf{1} + p_n \exp(t I) \right)^{\odot n}, \quad t \in \mathbb{R}, $$
where $I = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \in \mathbb{R}^2$.
From Theorem 1, it follows that
$$ \log \varphi_n(t) = n \log\left( (1 - p_n)\, \mathbf{1} + p_n \exp(t I) \right) = n \log\left( \mathbf{1} + p_n (\exp(t I) - \mathbf{1}) \right). $$
Let $\rho(n) \in \mathbb{R}^2$ be a function such that $p_n^{-1} \rho(n) \to 0$, $n \to \infty$, and
$$ \exp\left( p_n (\exp(t I) - \mathbf{1}) + \rho(n) \right) = \mathbf{1} + p_n (\exp(t I) - \mathbf{1}). $$
Then,
$$ \log \varphi_n(t) = n p_n (\exp(t I) - \mathbf{1}) + n \rho(n), $$
thus, for all $t \in \mathbb{R}$,
$$ \varphi_n(t) \to \exp\left( \lambda (\exp(t I) - \mathbf{1}) \right), \quad n \to \infty. $$
Now, the inversion formula in [9] applies, which essentially proves that the distribution of the sum $\xi_n$ tends to the Poisson distribution with parameter $\lambda$. □
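The convergence can be observed numerically without ever touching the imaginary unit: the sketch below (ours) evaluates the vector characteristic function of $\xi_n$ and the Poisson limit purely with real 2-vectors, using Euler's formula in vector form, $\exp(tI) = (\cos t, \sin t)^\top$:

```python
import math

def vstar(z1, z2):
    # componentwise complex-type product; equals ⊙ in the Euclidean case
    x1, y1 = z1
    x2, y2 = z2
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

lam, t, n = 2.0, 0.7, 10000
p = lam / n
e = (math.cos(t), math.sin(t))                 # exp(tI) in vector form
base = (1 - p + p * e[0], p * e[1])            # (1 - p)·1 + p·exp(tI)
phi = (1.0, 0.0)
for _ in range(n):                             # the n-th ⊙-power
    phi = vstar(phi, base)
# Poisson limit exp(λ(exp(tI) - 1)) evaluated in vector form
x, y = lam * (e[0] - 1.0), lam * e[1]
limit = (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))
assert abs(phi[0] - limit[0]) < 1e-3 and abs(phi[1] - limit[1]) < 1e-3
```

The deviation shrinks like $\lambda^2 / n$, in line with the remainder term $n \rho(n)$ in the proof.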
The liberation of a well-known proof from the use of an ultimately completely misunderstood imaginary unit can also be of interest in other areas of mathematics, because this imaginary unit has so far completely eluded comprehensible axioms.
Example 2  
(Central limit theorem of probability theory). Let $X_1, X_2, \dots$ be independent and identically distributed random variables, defined on a common probability space and having finite standard deviation $\sigma > 0$. Then, with $\mu$ being the expectation of $X_1$ and $\Phi$ denoting the standard Gaussian distribution function,
$$ P\left( \frac{X_1 + \dots + X_n - n \mu}{\sigma \sqrt{n}} < x \right) \to \Phi(x), \quad n \to \infty, \ x \in \mathbb{R}. $$
Proof. Let
$$ \varphi(t) = E \exp(t X_1 I) $$
and
$$ \varphi_n(t) = E \exp\left( t\, \frac{X_1 + \dots + X_n - n \mu}{\sigma \sqrt{n}}\, I \right), $$
where E means mathematical expectation. Then
$$ \varphi_n(t) = \exp\left\{ -\frac{t \sqrt{n}\, \mu}{\sigma}\, I \right\} \odot \varphi^{\odot n}\left( \frac{t}{\sigma \sqrt{n}} \right), $$
where
$$ \varphi\left( \frac{t}{\sigma \sqrt{n}} \right) = \mathbf{1} + \frac{t \mu}{\sigma \sqrt{n}}\, I - \frac{t^2}{2 \sigma^2 n}\, E X^2 \,(1 + o(1))\, \mathbf{1}, \quad n \to \infty. $$
Note that if $\|z\|^2 = x^2 + y^2 < 1$ then
$$ \log(\mathbf{1} + z) = z - \frac{1}{2} z^{\odot 2} + \Theta(z) \quad \text{with} \quad \Theta(z) = \sum_{k=3}^{\infty} (-1)^{k+1}\, \frac{z^{\odot k}}{k}. $$
With
$$ z = \frac{t \mu}{\sigma \sqrt{n}}\, I - \frac{t^2}{2 \sigma^2 n}\, E X^2 \,(1 + o(1))\, \mathbf{1}, $$
it follows that
$$ \varphi_n(t) = \exp\left\{ -\frac{t^2}{2 \sigma^2}\, E X^2 \,(1 + o(1))\, \mathbf{1} - \frac{n}{2} \left( \frac{t \mu}{\sigma \sqrt{n}} \right)^{2} I^{\odot 2} + o(1)(\mathbf{1} + I) + n \Theta(z) \right\}. $$
If $\|z\| \le \frac{1}{2}$ then
$$ \|\Theta(z)\| \le \frac{\|z\|^3}{3 (1 - \|z\|)} \le \frac{2 \|z\|^3}{3}. $$
Thus,
$$ \varphi_n(t) = \exp\left\{ -\frac{t^2}{2}\, \frac{E X^2 - (E X)^2}{\sigma^2}\, (1 + o(1))\, \mathbf{1} + o(1)(\mathbf{1} + I) \right\}, $$
from which it follows that
$$ \varphi_n(t) \to e^{-\frac{t^2}{2}}\, \mathbf{1}, \quad n \to \infty. $$
At this point, the inversion formula in [9] applies. □
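As an illustrative check (ours, not from the paper), take $X_k \sim$ Bernoulli$(1/2)$, so that $\mu = \sigma = 1/2$ and the vector characteristic function $E \exp(u X I) = ((1 + \cos u)/2, (\sin u)/2)^\top$ is explicit; the standardized $\varphi_n(t)$ then approaches $e^{-t^2/2}\, \mathbf{1}$:

```python
import math

def vstar(z1, z2):
    # componentwise complex-type product; equals ⊙ in the Euclidean case
    x1, y1 = z1
    x2, y2 = z2
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

t, n = 1.3, 4000
mu, sigma = 0.5, 0.5                              # Bernoulli(1/2)
u = t / (sigma * math.sqrt(n))
phi_u = ((1 + math.cos(u)) / 2, math.sin(u) / 2)  # E exp(uXI) for X ~ Bernoulli(1/2)
acc = (1.0, 0.0)
for _ in range(n):                                # φ^{⊙n}(t/(σ√n))
    acc = vstar(acc, phi_u)
theta = -t * math.sqrt(n) * mu / sigma            # rotation exp{-t√n μ/σ · I}
phi_n = vstar((math.cos(theta), math.sin(theta)), acc)
gauss = math.exp(-t * t / 2)                      # e^{-t²/2}
assert abs(phi_n[0] - gauss) < 0.02 and abs(phi_n[1]) < 0.02
```

By the symmetry of the centered Bernoulli distribution, the second component vanishes up to rounding, and the first component matches the Gaussian limit already for moderate $n$.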
Example 3  
(Revising an often-mentioned formula for $i^i$, where $i$ denotes the imaginary unit). We recall that
$$ \exp\left( \frac{\pi}{2}\, I \right) = \begin{pmatrix} \cos \frac{\pi}{2} \\ \sin \frac{\pi}{2} \end{pmatrix} = I, \quad I = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \in \mathbb{R}^2. $$
It follows that
$$ \frac{\pi}{2}\, I = \log I. $$
With the relations
$$ I^{\odot 2} = -\mathbf{1} $$
and
$$ \exp(I \odot \log I) = \exp\left( -\frac{\pi}{2}\, \mathbf{1} \right) = e^{-\frac{\pi}{2}}\, \mathbf{1}, $$
we arrive at
$$ I^{\odot I} = \begin{pmatrix} 0.20787\ldots \\ 0 \end{pmatrix}, $$
whereas the widely held claim in the literature, based on the imaginary unit $i$, that
$$ i^i = 0.20787\ldots, $$
has, to the best of the author's knowledge, never truly been proven from the perspective of the greatest possible mathematical rigor.
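The value of $I^{\odot I}$ in the Euclidean case can be reproduced numerically; since $\odot$ then coincides with complex multiplication, the complex arithmetic below (ours) serves purely as a computational device:

```python
import cmath
import math

I = complex(0.0, 1.0)                 # the vector (0, 1)^T read in complex guise
logI = cmath.log(I)                   # equals (0, π/2)^T
result = cmath.exp(I * logI)          # exp(I ⊙ log I)
assert abs(logI.real) < 1e-12 and abs(logI.imag - math.pi / 2) < 1e-12
assert abs(result.real - math.exp(-math.pi / 2)) < 1e-12
assert abs(result.imag) < 1e-12
```

The first component is $e^{-\pi/2} = 0.20787\ldots$, in agreement with Example 3.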

5. Discussion

Since imaginary and complex numbers proved practical for representing solutions to cubic and quadratic equations, respectively, they have also been used in numerous other fields. From the standpoint of mathematical rigor, however, the formal use of an uncomprehended imaginary quantity cannot be considered ultimately satisfactory. If $\mathbb{R}^2$ is equipped with a suitably defined vector product, then vectors from $\mathbb{R}^2$ can replace the complex numbers. In this way, the quantities of interest are axiomatically introduced, and a proof of mathematical existence can be provided by giving an example that fulfills the axioms. Fourier transforms of certain functions, and in particular characteristic functions of probability distributions, represent a frequently used mathematical tool which, however, traditionally makes use of imaginary numbers. In this paper, we have shown how the use of complex numbers for this tool can be replaced by the use of the mathematically well-established axiomatic concept of vectors. The essential technical step toward this goal consists of introducing the concept of a vector-valued logarithm of a vector, as done in Section 3. Furthermore, the vector approach to characteristic functions used here has the advantage over the traditional approach that the vector representations can be explicitly evaluated numerically, although this has not been exploited in detail here.

Author Contributions

Wolf-Dieter Richter is the only author of this work.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gnedenko, B.V.; Kolmogorov, A.N. Limit Distributions for Sums of Independent Random Variables; Addison-Wesley Mathematics Series; Addison-Wesley: Cambridge, MA, USA, 1954.
  2. Fisz, M. Wahrscheinlichkeitsrechnung und Mathematische Statistik; VEB Deutscher Verlag der Wissenschaften: Berlin, 1971.
  3. Renyi, A. Wahrscheinlichkeitsrechnung. Mit einem Anhang über Informationstheorie; VEB Deutscher Verlag der Wissenschaften: Berlin, 1977.
  4. Shiryayev, A.N. Probability; Springer: New York, 1984.
  5. Hesse, C. Angewandte Wahrscheinlichkeitstheorie; Friedr. Vieweg & Sohn: Braunschweig/Wiesbaden, 2003.
  6. Richter, W.-D. On the vector representation of characteristic functions. Stats 2023, 6, 1072–1081.
  7. Daniusis, P.; Juneja, S.; Kuzma, L.; Marcinkevicius, V. Measuring statistical dependence via characteristic function IPM. Entropy 2025, 27(12), 1254.
  8. Richter, W.-D. Vector representations of Euler's formula and Riemann's Zeta function. Symmetry 2025, 17, 1597.
  9. Richter, W.-D. Inversion of characteristic functions without imaginary unit. Submitted.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.