Preprint Article — This version is not peer-reviewed. A peer-reviewed article of this preprint also exists.

Density Formula in Malliavin Calculus by Using Stein's Method and Diffusions

Submitted: 09 December 2024. Posted: 10 December 2024.
Abstract
Let G be a random variable of functionals of an isonormal Gaussian process X defined on some probability space. An explicit formula for the density of G is obtained by Nourdin and Viens (2009) [Density formula and concentration inequalities with Malliavin calculus, Elec. J. of Probab., 14(78). 2287-2309]. In this paper, unlike previous studies, we will use Stein’s method for invariant measures of diffusions to obtain the density formula of G. Using this, we will show that the diffusion coefficient of a Itô diffusion with an invariant measure having a density can be expressed as in terms of operators in Malliavin calculus.

1. Introduction

Let X = {X(h), h ∈ H}, where H is a real separable Hilbert space, be an isonormal Gaussian process defined on a probability space (Ω, F, P), and let G be a random variable given as a functional of X.
The following formula for the density of a random variable G is a well-known fact of the Malliavin calculus: if DG/‖DG‖²_H belongs to the domain of the divergence operator δ, then the law of G has a continuous and bounded density p_G given by

p_G(x) = E[1_{{G > x}} δ(DG/‖DG‖²_H)] for all x ∈ ℝ.
Several examples are detailed in Section 2.1.1 of Nualart's book [1] (or [2]). Nourdin and Viens (2009) prove a new general formula for p_G which does not refer to the divergence operator δ. For a random variable G ∈ D^{1,2} with E[G] = 0, where D^{1,2} is the domain of the Malliavin derivative operator D with respect to X, so that the Malliavin derivative DG of G is a random element of H with E[‖DG‖²_H] < ∞, we define the function g_G by

g_G(x) = E[⟨DG, DL⁻¹G⟩_H | G = x].
The operator L appearing in (1) is related to the generator of the Ornstein–Uhlenbeck semigroup, and L⁻¹ is its pseudo-inverse. For details, see Section 2. It is well known that g_G is non-negative on the support of the law of G (see Proposition 3.9 in [3]).
Under some general conditions on a random variable G, Nourdin and Viens (2009) obtain a new formula for the density p_G of the law of G, provided it exists. A precise statement is given in the following theorem:
Theorem 1.
[Nourdin and Viens] The law of G admits a density (with respect to Lebesgue measure), say p_G, if and only if the random variable g_G(G) is strictly positive almost surely. In this case, the support of p_G, denoted by supp(p_G), is a closed interval of ℝ containing zero and, for almost all x ∈ supp(p_G),

p_G(x) = (E[|G|] / (2 g_G(x))) exp(−∫_0^x y/g_G(y) dy).
Let I = (l, u) (−∞ ≤ l < u ≤ ∞) be an interval, and assume that the density p satisfies the following conditions: it is continuous, bounded, and ∫_l^u x² p(x) dx < ∞. Then

p(x) > 0 if x ∈ I,   p(x) = 0 if x ∈ Iᶜ.
We define a continuous function b on I such that there exists e ∈ (l, u) satisfying

b(x) > 0 if x ∈ (l, e),   b(x) < 0 if x ∈ (e, u),

b·p is bounded on I, and

∫_l^u b(x) p(x) dx = 0.
Define

a(x) = (2 / p(x)) ∫_l^x b(y) p(y) dy.
Then the diffusion with the invariant density p is governed by the Stochastic Differential Equation (SDE)

dX_t = b(X_t) dt + √(a(X_t)) dW_t,
where W is a standard Brownian motion.
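As a numerical sanity check of the construction above (our addition, not part of the source), one can take the standard normal density p = φ and drift b(y) = −y; the construction should return a(x) ≡ 2, i.e., the Ornstein–Uhlenbeck diffusion dX_t = −X_t dt + √2 dW_t:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def a_from_b_p(x, b, p, lower=-12.0, n=20000):
    """a(x) = (2 / p(x)) * integral_{lower}^{x} b(y) p(y) dy, by the midpoint rule."""
    h = (x - lower) / n
    s = sum(b(lower + (k + 0.5) * h) * p(lower + (k + 0.5) * h) for k in range(n))
    return 2.0 * s * h / p(x)

# With b(y) = -y and p = phi, the integral equals phi(x) exactly, so a(x) = 2.
for x in (-1.0, 0.0, 0.7, 2.0):
    print(x, a_from_b_p(x, lambda y: -y, phi))  # each value is close to 2
```

Here the lower integration cutoff −12 stands in for l = −∞; the midpoint quadrature is only an illustrative substitute for the exact integral.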
In this paper, we derive a new density formula for a random variable G, satisfying appropriate conditions related to Malliavin calculus, from the following equation: for every z ∈ ℝ,

P(G ≤ z) − P(F ≤ z) = E[h̃_z′(G)(½ a(G) + ⟨DL⁻¹b(G), DG⟩_H)] + E[b(G)] E[h̃_z(G)],

where F is a random variable with the invariant density p and h̃_z is a solution to the Stein equation (for a detailed explanation of Stein's method, see [4,5,6]). We also show that the diffusion coefficient a of the SDE (4) can be written in an explicit form like (1) if the random variable G in (5), taking its values in I, has a density p and satisfies b(G) ∈ L²(Ω).
The rest of this paper is organized as follows. Section 2 reviews some basic notation and results of Malliavin calculus. In Section 3, we briefly discuss the construction of a diffusion process with an invariant density p and then describe our main results. Finally, as an application of our main results, we give some examples in Section 4.

2. Preliminaries

In this section, we present some basic facts about Malliavin operators defined on spaces of random elements that are functionals of possibly infinite-dimensional Gaussian fields. For a more detailed explanation, see [1,7]. Suppose that H is a real separable Hilbert space with scalar product denoted by ⟨·,·⟩_H. Let X = {X(h), h ∈ H} be an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that E[X(h)X(g)] = ⟨h, g⟩_H. For every n ≥ 1, let H_n be the nth Wiener chaos of X, that is, the closed linear subspace of L²(Ω) generated by {H_n(X(h)) : h ∈ H, ‖h‖_H = 1}, where H_n is the nth Hermite polynomial. We define a linear isometric mapping I_n : H^{⊙n} → H_n by I_n(h^{⊗n}) = n! H_n(X(h)), where H^{⊙n} denotes the nth symmetric tensor product of H. It is well known that any square-integrable random variable F ∈ L²(Ω, F, P) (F denotes the σ-field generated by X) can be expanded into a series of multiple stochastic integrals:

F = Σ_{q=0}^∞ I_q(f_q),

where f_0 = E[F], the series converges in L²(Ω), and the functions f_q ∈ H^{⊙q} are uniquely determined by F.
Let S be the class of smooth and cylindrical random variables F of the form

F = f(X(φ_1), …, X(φ_n)),

where n ≥ 1, f ∈ C_b^∞(ℝⁿ) and φ_i ∈ H, i = 1, …, n. The Malliavin derivative of F with respect to X is the element of L²(Ω, H) defined by

DF = Σ_{i=1}^n (∂f/∂x_i)(X(φ_1), …, X(φ_n)) φ_i.
We denote by D^{l,p} the closure of the class S of smooth and cylindrical random variables with respect to the norm

‖F‖_{l,p}^p = E[|F|^p] + Σ_{k=1}^l E[‖D^k F‖^p_{H^{⊗k}}].
We denote by δ the adjoint of the operator D, also called the divergence operator. The domain of δ, denoted by Dom(δ), is the set of elements u ∈ L²(Ω; H) such that

|E[⟨DF, u⟩_H]| ≤ C (E[|F|²])^{1/2} for all F ∈ D^{1,2},

where C is a constant depending only on u.
If u ∈ Dom(δ), then δ(u) is the element of L²(Ω) defined by the duality relationship

E[F δ(u)] = E[⟨DF, u⟩_H] for every F ∈ D^{1,2}.
Recall that F ∈ L²(Ω) can be expanded as F = E[F] + Σ_{q=1}^∞ P_q F, where P_q is the projection operator from L²(Ω) onto the qth Wiener chaos H_q. The operator L is defined through the projection operators P_q, q = 0, 1, 2, …, as L = Σ_{q=0}^∞ q P_q; with this sign convention L coincides with the number operator δD, and −L is the infinitesimal generator of the Ornstein–Uhlenbeck semigroup. The relationship between the operators D, δ, and L is given as follows: δDF = LF; that is, for F ∈ L²(Ω) the statement F ∈ Dom(L) is equivalent to F ∈ Dom(δD) (i.e., F ∈ D^{1,2} and DF ∈ Dom(δ)), and in this case δDF = LF. For any F ∈ L²(Ω), we define the operator L⁻¹, the pseudo-inverse of L, as L⁻¹F = Σ_{q=1}^∞ (1/q) P_q F. Note that L⁻¹ takes values in D^{2,2} and that LL⁻¹F = F − E[F] for all F ∈ L²(Ω).
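The orthogonality underlying the Wiener chaos decomposition can be checked numerically. The sketch below (our addition, not from the paper) verifies E[He_n(Z)He_m(Z)] = n!·1_{n=m} for the unnormalized probabilists' Hermite polynomials He_n (the paper's H_n differ from these by a normalizing constant), using Gauss–Hermite quadrature:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# hermegauss integrates against the weight exp(-x^2/2); dividing by
# sqrt(2*pi) turns the quadrature sum into an expectation under N(0,1).
nodes, weights = He.hermegauss(40)

def expect(values):
    return float((weights * values).sum() / np.sqrt(2.0 * np.pi))

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c)

# Orthogonality of the chaoses: E[He_n(Z) He_m(Z)] = n! when n = m, 0 otherwise.
print(expect(hermite(3, nodes) * hermite(3, nodes)))  # close to 6.0 = 3!
print(expect(hermite(3, nodes) * hermite(2, nodes)))  # close to 0.0
```

A 40-point rule integrates polynomials up to degree 79 exactly, so the checks above are exact up to floating-point error.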

3. Diffusion Process with Invariant Measures and Main Results

In this section, we will give the construction of a diffusion process with an invariant measure, and present our main results in this paper.

3.1. Diffusion Process with Invariant Measures

In this section, we briefly describe the construction of a diffusion process with an invariant measure μ having a density p with respect to the Lebesgue measure (for more details, see [8,9]).
Let F be a random variable with probability measure μ on I = (l, u) (−∞ ≤ l < u ≤ ∞) whose density p is continuous, bounded, and strictly positive on I, and suppose E[F²] < ∞. Let b be a continuous function on I such that there exists e ∈ (l, u) with b(x) > 0 for x ∈ (l, e) and b(x) < 0 for x ∈ (e, u). Moreover, assume that the function b·p is bounded on I and

E[b(F)] = 0.
For x ∈ I, define

a(x) = (2 / p(x)) ∫_l^x b(y) p(y) dy.
Then the diffusion coefficient a in (9) is strictly positive for all x ∈ (l, u) and satisfies E[a(F)] < ∞. The equation (9) implies that, for some c ∈ I,

p(x) a(x) = p(c) a(c) exp(∫_c^x 2b(y)/a(y) dy).
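As an illustration (our addition, not in the source), this identity can be verified numerically for the uniform target on (0, 1), where p = 1, b(y) = ½ − y, a(y) = y(1 − y), and c = ½:

```python
import math

p = lambda y: 1.0            # uniform density on (0, 1)
b = lambda y: 0.5 - y        # drift: positive on (0, 1/2), negative on (1/2, 1)
a = lambda y: y * (1.0 - y)  # squared diffusion coefficient from the construction

def integral(f, lo, hi, n=20000):
    """Midpoint rule; a signed step also handles lo > hi."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

c = 0.5
for x in (0.2, 0.5, 0.9):
    lhs = p(x) * a(x)
    rhs = p(c) * a(c) * math.exp(integral(lambda y: 2.0 * b(y) / a(y), c, x))
    print(x, lhs, rhs)  # the two columns agree
```

In this case the exponent integrates in closed form to log(x(1 − x)) − log(1/4), so both sides equal x(1 − x).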
Then the following SDE

dX_t = b(X_t) dt + √(a(X_t)) dB_t,
has a unique ergodic Markovian weak solution with the invariant density p.
Let C_0(I) = {f : I → ℝ | f is continuous on I and vanishes at the boundary of I}. For f ∈ C_0(I), define

h_f(x) = ∫_0^x h̃_f(y) dy,

where

h̃_f(x) = 2 ∫_l^x (f(y) − E[f(F)]) p(y) dy / (a(x) p_F(x)).
Then h_f satisfies the following Stein equation:

f(x) − E[f(F)] = b(x) h_f′(x) + ½ a(x) h_f″(x) = b(x) h̃_f(x) + ½ a(x) h̃_f′(x),
where F is a random variable with law μ.
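As a quick numerical illustration of the Stein equation (our addition), take the standard normal case a(x) = 2, b(x) = −x with the smooth test function f = sin (so E[f(F)] = 0 by symmetry), computing h̃_f by quadrature and its derivative by a central difference:

```python
import math

phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
f = math.sin  # E[f(F)] = 0 for F ~ N(0,1)

def h_tilde(x, lower=-10.0, n=40000):
    """h~_f(x) = 2 * int_l^x (f - E[f(F)]) p dy / (a(x) p(x)), with a = 2, p = phi."""
    h = (x - lower) / n
    s = sum(f(lower + (k + 0.5) * h) * phi(lower + (k + 0.5) * h)
            for k in range(n)) * h
    return s / phi(x)

# Stein equation: f(x) - E[f(F)] = b(x) h~(x) + (1/2) a(x) h~'(x)
x, eps = 0.8, 1e-3
lhs = f(x)
rhs = -x * h_tilde(x) + (h_tilde(x + eps) - h_tilde(x - eps)) / (2.0 * eps)
print(lhs, rhs)  # the two values agree to several decimals
```

The cutoff −10 and the finite-difference step are illustrative choices, not part of the theory.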

3.2. Main Results

Before describing our main result in this paper, we begin with the following simple result, given in Theorem 2.9.1 in [7].
Lemma 1.
Suppose that F, G ∈ D^{1,2}, and let g : ℝ → ℝ be continuously differentiable with bounded derivative (when g is only almost everywhere differentiable, one needs the law of G to be absolutely continuous). Then

E[F g(G)] = E[F] E[g(G)] + E[g′(G) ⟨DL⁻¹(F − E[F]), DG⟩_H].
Let us set

g_{b(G)}(x) = E[⟨−DL⁻¹(b(G) − E[b(G)]), DG⟩_H | G = x].
Similarly to the proof of Proposition 3.9 in [3], we will show that g_{b(G)}(x) is non-negative almost everywhere with respect to the law of G.
Proposition 1.
Let G ∈ D^{1,2}. Then g_{b(G)}(x) ≥ 0 almost everywhere with respect to the law of G, which we denote by H_G(x) = P(G ≤ x).
Proof: 
Let q be a smooth non-negative real function. Define

Q(x) = ∫_β^x q(y) dy if x ≥ β,   Q(x) = −∫_x^β q(y) dy if x < β,

where β ∈ ℝ is a constant that satisfies b(x) − E[b(G)] > 0 for x ∈ (l, β) and b(x) − E[b(G)] < 0 for x ∈ (β, u). Since Q(x) ≥ 0 for x ≥ β and Q(x) ≤ 0 for x < β, we have E[(b(G) − E[b(G)]) Q(G)] ≤ 0. An application of Lemma 1 yields that

E[(b(G) − E[b(G)]) Q(G)] = E[⟨DL⁻¹(b(G) − E[b(G)]), DG⟩_H Q′(G)] = −∫_ℝ g_{b(G)}(x) q(x) dH_G(x) ≤ 0.
By an approximation argument on the function q, we can show that for every Borel set B ∈ B(ℝ),

∫_B g_{b(G)}(x) dH_G(x) ≥ 0.

This implies that g_{b(G)}(x) ≥ 0 almost everywhere with respect to the law of G. □
Lemma 2.
If the random variable g_{b(G)}(G) is strictly positive almost surely, then the law of G has a density with respect to Lebesgue measure, say p_G.
Proof: 
By a similar argument to the proof of Theorem 3.1 in [10], we have that, for any Borel set B ∈ B(ℝ) and any n ≥ 1,

E[(b(G) − E[b(G)]) ∫_{−∞}^G 1_{B∩[−n,n]}(x) dx] = −E[1_{B∩[−n,n]}(G) g_{b(G)}(G)].

The same argument as for the case b(G) = G in the proof of Theorem 3.1 in [10] shows that the law of G has a density. □
An explicit formula for the density is given in the following statement:
Theorem 2.
Let F be a random variable having the law μ, and let G be a random variable in D^{1,2} with b(G) ∈ L²(Ω). Assume that the random variable g_{b(G)}(G) is strictly positive almost surely and that

‖b h̃_f‖_∞ ≤ C ‖f‖_∞, where ‖f‖_∞ = sup_{x∈I} |f(x)| < ∞.

In this case, the support of p_G, denoted by supp(p_G), is a closed interval of ℝ and, for almost all x ∈ supp(p_G),

p_G(x) = (p_G(β) g_{b(G)}(β) / g_{b(G)}(x)) exp(∫_β^x (b(y) − E[b(G)]) / g_{b(G)}(y) dy)

for some β ∈ supp(p_G).
Proof: 
Using (10), the function h̃_f can be written as

h̃_f(x) = (2 / (p_F(β) a(β))) exp(−∫_β^x 2b(y)/a(y) dy) × ∫_l^x (f(y) − E[f(F)]) p_F(y) dy.
Let us set H_F(x) = P(F ≤ x). If f(x) = 1_{(−∞,z]}(x) for z ∈ ℝ, we write h_f = h_z and h̃_f = h̃_z. Then the function h̃_z can be written as

h̃_z(x) = (2 / (p_F(β) a(β))) exp(−∫_β^x 2b(y)/a(y) dy) × { H_F(z)[1 − H_F(x)] if x ≥ z;  H_F(x)[1 − H_F(z)] if x < z }.
From (20), it follows that for x ≥ z,

h̃_z′(x) = (2 / (p_F(β) a(β))) exp(−∫_β^x 2b(y)/a(y) dy) × { −(2b(x)/a(x)) H_F(z)[1 − H_F(x)] − p_F(x) H_F(z) }.
For x < z,

h̃_z′(x) = (2 / (p_F(β) a(β))) exp(−∫_β^x 2b(y)/a(y) dy) × { −(2b(x)/a(x)) H_F(x)[1 − H_F(z)] + p_F(x)[1 − H_F(z)] }.
If f(x) = 1_{(−∞,z]}(x) for x ∈ I, we take f_n ∈ C_0(I) such that {f_n} is an increasing sequence with f_n(x) ↑ f(x) for all x ∈ I. By the dominated convergence theorem, we then have, as n → ∞,

h̃_{f_n}(x) → h̃_z(x) and h̃_{f_n}′(x) → h̃_z′(x) for all x ∈ I.
The bound (17) yields that, for all n ≥ 1,

‖b h̃_{f_n}‖_∞ ≤ C ‖f_n‖_∞ ≤ C.

Combining (10) with the bound in (17), we also get, for all n ≥ 1,

‖a h̃_{f_n}′‖_∞ ≤ C ‖f_n‖_∞ ≤ C.
From (12), it follows that, for f_n ∈ C_0(I),

E[f_n(G)] − E[f_n(F)] = E[b(G) h̃_{f_n}(G)] + E[½ a(G) h̃_{f_n}′(G)].

Due to the bounds (24) and (25), the dominated convergence theorem can be applied to (26), which gives the following limit:

P(G ≤ z) − P(F ≤ z) = E[(b(G) − E[b(G)]) h̃_z(G)] + E[b(G)] E[h̃_z(G)] + E[½ a(G) h̃_z′(G)].
Applying (13) in Lemma 1 to the first expectation in (27), we obtain that

P(G ≤ z) − P(F ≤ z) = E[⟨DL⁻¹(b(G) − E[b(G)]), DG⟩_H h̃_z′(G)] + E[½ a(G) h̃_z′(G)] + E[b(G)] E[h̃_z(G)]
= E[h̃_z′(G) E[⟨DL⁻¹(b(G) − E[b(G)]), DG⟩_H | G]] + E[½ a(G) h̃_z′(G)] + E[b(G)] E[h̃_z(G)]
= E[h̃_z′(G) (−g_{b(G)}(G) + ½ a(G))] + E[b(G)] E[h̃_z(G)].
Differentiating both sides of (28) with respect to z yields that

p_G(z) − p_F(z) = ∫_I (∂/∂z) h̃_z′(x) (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx + E[b(G)] ∫_I (∂/∂z) h̃_z(x) p_G(x) dx.
Next, we concentrate on the computation of the two integrals in (29). Using (21) and (22), the first integral is computed as J_1(z) + J_2(z), where

J_1(z) = (d/dz) ∫_l^z h̃_z′(x) (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx,
J_2(z) = (d/dz) ∫_z^u h̃_z′(x) (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx.
We write J_1(z) = J_{11}(z) + J_{12}(z), where

J_{11}(z) = h̃_z′(z−) (−g_{b(G)}(z) + ½ a(z)) p_G(z),
J_{12}(z) = ∫_l^z (∂/∂z) h̃_z′(x) (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx,

and h̃_z′(z−) denotes the limit of h̃_z′(x) as x ↑ z, given by (22).
For J_{12}, we first differentiate h̃_z′(x) with respect to z. For x < z,

(∂/∂z) h̃_z′(x) = (2 / (p_F(β) a(β))) exp(−∫_β^x 2b(y)/a(y) dy) × { (2b(x)/a(x)) H_F(x) p_F(z) − p_F(x) p_F(z) }.
By (22) and (30), we get

J_{11}(z) = (2 / (p_F(β) a(β))) exp(−∫_β^z 2b(y)/a(y) dy) × { −(2b(z)/a(z)) H_F(z)[1 − H_F(z)] + p_F(z)[1 − H_F(z)] } × (−g_{b(G)}(z) + ½ a(z)) p_G(z),

J_{12}(z) = (2 / (p_F(β) a(β))) ∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) × { (2b(x)/a(x)) H_F(x) p_F(z) − p_F(x) p_F(z) } × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx.
For x ≥ z,

(∂/∂z) h̃_z′(x) = (2 / (p_F(β) a(β))) exp(−∫_β^x 2b(y)/a(y) dy) × { −(2b(x)/a(x)) p_F(z)[1 − H_F(x)] − p_F(x) p_F(z) }.
On the other hand, we write J_2(z) = J_{21}(z) + J_{22}(z), where

J_{21}(z) = −h̃_z′(z+) (−g_{b(G)}(z) + ½ a(z)) p_G(z),
J_{22}(z) = ∫_z^u (∂/∂z) h̃_z′(x) (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx,

and h̃_z′(z+) denotes the limit of h̃_z′(x) as x ↓ z, given by (21).
From (32), we have that

J_{21}(z) = (2 / (p_F(β) a(β))) exp(−∫_β^z 2b(y)/a(y) dy) × { (2b(z)/a(z)) H_F(z)[1 − H_F(z)] + p_F(z) H_F(z) } × (−g_{b(G)}(z) + ½ a(z)) p_G(z),

J_{22}(z) = (2 / (p_F(β) a(β))) ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) × { −(2b(x)/a(x)) p_F(z)[1 − H_F(x)] − p_F(x) p_F(z) } × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx.
From (20), the derivative of the second integral in (29) can be computed as follows:

E[b(G)] ∫_I (∂/∂z) h̃_z(x) p_G(x) dx
= E[b(G)] (d/dz) ∫_l^z h̃_z(x) p_G(x) dx + E[b(G)] (d/dz) ∫_z^u h̃_z(x) p_G(x) dx
= (2 E[b(G)] / (p_F(β) a(β))) { −p_F(z) ∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) H_F(x) p_G(x) dx + [1 − H_F(z)] exp(−∫_β^z 2b(y)/a(y) dy) H_F(z) p_G(z) }
+ (2 E[b(G)] / (p_F(β) a(β))) { p_F(z) ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) [1 − H_F(x)] p_G(x) dx − H_F(z) exp(−∫_β^z 2b(y)/a(y) dy) [1 − H_F(z)] p_G(z) }
= (2 p_F(z) E[b(G)] / (p_F(β) a(β))) { −∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) H_F(x) p_G(x) dx + ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) [1 − H_F(x)] p_G(x) dx }.
Combining (31), (32), (34), (35) and (36) yields that, for z ∈ ℝ,

p_G(z) − p_F(z) = (2 p_F(z) / (p_F(β) a(β))) exp(−∫_β^z 2b(y)/a(y) dy) (−g_{b(G)}(z) + ½ a(z)) p_G(z)
+ (2 p_F(z) / (p_F(β) a(β))) [ ∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) (2b(x)/a(x)) H_F(x) × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx − ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) (2b(x)/a(x)) [1 − H_F(x)] × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx ]
− (2 p_F(z) / (p_F(β) a(β))) ∫_l^u exp(−∫_β^x 2b(y)/a(y) dy) p_F(x) × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx
+ (2 p_F(z) E[b(G)] / (p_F(β) a(β))) { −∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) H_F(x) p_G(x) dx + ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) [1 − H_F(x)] p_G(x) dx }.
Substituting the expression (10) for p_F on the right-hand side of equation (37), we get

p_G(z) − p_F(z) = (2 / a(z)) (−g_{b(G)}(z) + ½ a(z)) p_G(z)
+ (2 / a(z)) exp(∫_β^z 2b(y)/a(y) dy) [ ∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) (2b(x)/a(x)) H_F(x) × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx − ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) (2b(x)/a(x)) [1 − H_F(x)] × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx ]
− (2 / a(z)) exp(∫_β^z 2b(y)/a(y) dy) ∫_l^u exp(−∫_β^x 2b(y)/a(y) dy) p_F(x) × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx
+ (2 E[b(G)] / a(z)) exp(∫_β^z 2b(y)/a(y) dy) { −∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) H_F(x) p_G(x) dx + ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) [1 − H_F(x)] p_G(x) dx }.
From the formula for p_F in (10) and (38), we obtain that, for some β ∈ supp(p_G),

g_{b(G)}(z) p_G(z) = (p_F(β) a(β) / 2) exp(∫_β^z 2b(y)/a(y) dy)
+ exp(∫_β^z 2b(y)/a(y) dy) { ∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) (2b(x)/a(x)) H_F(x) × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx − ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) (2b(x)/a(x)) [1 − H_F(x)] × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx }
− exp(∫_β^z 2b(y)/a(y) dy) ∫_l^u exp(−∫_β^x 2b(y)/a(y) dy) p_F(x) × (−g_{b(G)}(x) + ½ a(x)) p_G(x) dx
+ E[b(G)] exp(∫_β^z 2b(y)/a(y) dy) { −∫_l^z exp(−∫_β^x 2b(y)/a(y) dy) H_F(x) p_G(x) dx + ∫_z^u exp(−∫_β^x 2b(y)/a(y) dy) [1 − H_F(x)] p_G(x) dx }.
Differentiating equation (39) with respect to z proves that

(d/dz)(g_{b(G)}(z) p_G(z)) = (2b(z)/a(z)) g_{b(G)}(z) p_G(z) + (2b(z)/a(z)) (−g_{b(G)}(z) + ½ a(z)) p_G(z) − E[b(G)] p_G(z) = (b(z) − E[b(G)]) p_G(z).
Equation (40) proves that, for almost all z ∈ supp(p_G),

g_{b(G)}(z) p_G(z) = ∫_l^z (b(x) − E[b(G)]) p_G(x) dx.
From (40) and (41), it follows that, for almost all z ∈ supp(p_G),

(d/dz)(g_{b(G)}(z) p_G(z)) / (g_{b(G)}(z) p_G(z)) = (b(z) − E[b(G)]) / g_{b(G)}(z).
Hence

(d/dz) log(g_{b(G)}(z) p_G(z)) = (b(z) − E[b(G)]) / g_{b(G)}(z).
By integrating both sides of the above equation from β ∈ supp(p_G) to z, we have

log(g_{b(G)}(z) p_G(z)) = log(g_{b(G)}(β) p_G(β)) + ∫_β^z (b(x) − E[b(G)]) / g_{b(G)}(x) dx.
The above equation proves that, for almost all z ∈ supp(p_G),

p_G(z) = (g_{b(G)}(β) p_G(β) / g_{b(G)}(z)) exp(∫_β^z (b(x) − E[b(G)]) / g_{b(G)}(x) dx). □
When the random variable G is general, it is not easy to compute g_{b(G)}(x) explicitly. In particular, when ⟨DL⁻¹(b(G) − E[b(G)]), DG⟩_H is not measurable with respect to the σ-field generated by G, there are cases where it is impossible to compute the conditional expectation. Using Theorem 2, we derive the explicit form of g_{b(G)}(x). The following theorem corresponds to Theorem 2 in [9].
Theorem 3.
A random variable G ∈ D^{1,2}, taking its values in I, has the distribution μ and satisfies E[b(G)²] < ∞ if and only if E[b(G)] = 0 and

g_{b(G)}(x) = ½ a(x) for all x ∈ I.
Proof: 
Suppose that E[b(G)] = 0 and that equation (46) holds. Let p_F be the density of the invariant measure μ corresponding to a solution of the SDE (11). Substituting ½ a(x) from (46) for g_{b(G)}(x) in the density formula of Theorem 2 gives that

p_G(x) = (p_G(β) g_{b(G)}(β) / g_{b(G)}(x)) exp(∫_β^x b(y)/g_{b(G)}(y) dy) = p_G(β) (a(β)/a(x)) exp(∫_β^x 2b(y)/a(y) dy).
Combining (10) and (47), we get

p_G(x) = (p_G(β) / p_F(β)) p_F(x).
Equation (48) shows that supp(p_G) = supp(p_F). Hence integrating both sides of (48) over I = (l, u) yields

p_G(β) / p_F(β) = 1,

which implies that p_G = p_F on I. Conversely, if p_G = p_F on I, then E[b(G)] = ∫_l^u b(x) p_G(x) dx = 0. From (9) and (41), it follows that

a(x) = (2 / p_F(x)) ∫_l^x b(y) p_F(y) dy = (2 / p_G(x)) ∫_l^x b(y) p_G(y) dy = 2 g_{b(G)}(x),
which gives that (46) holds. □

4. Examples

In this section, we give two examples, in which the invariant measure is the standard Gaussian distribution and the uniform distribution, respectively.

4.1. The standard Gaussian distribution

When μ is the standard Gaussian distribution, the coefficients in (12) are given by a(x) = 2 and b(x) = −x, with u = ∞ and l = −∞. Then we have, from (20), that

h̃_z(x) = e^{x²/2} ∫_{−∞}^x [1_{(−∞,z]}(y) − Φ(z)] e^{−y²/2} dy = { √(2π) e^{x²/2} Φ(x)(1 − Φ(z)) if x ≤ z;  √(2π) e^{x²/2} Φ(z)(1 − Φ(x)) if x > z },
where Φ(z) = P(Z ≤ z) for a standard normal random variable Z. We have, from (21), that for x > z, taking β = 0,

h̃_z′(x) = √(2π) x e^{x²/2} Φ(z)(1 − Φ(x)) − √(2π) e^{x²/2} p_F(x) Φ(z) = [√(2π) x e^{x²/2} (1 − Φ(x)) − 1] Φ(z),
and for x < z,

h̃_z′(x) = √(2π) x e^{x²/2} Φ(x)[1 − Φ(z)] + √(2π) e^{x²/2} p_F(x)[1 − Φ(z)] = [√(2π) x e^{x²/2} Φ(x) + 1][1 − Φ(z)].
If G ∈ D^{1,2} and the random variable g_G(G) is strictly positive almost surely (note that g_{b(G)} = g_G when b(x) = −x), then the density p_G of G can be obtained, with β = 0, by

p_G(z) = (g_G(0) p_G(0) / g_G(z)) exp(−∫_0^z x / g_G(x) dx).
Since E[G] = 0, we get, from (41), that

g_G(0) p_G(0) = −∫_{−∞}^0 x p_G(x) dx = ½ E[|G|].
Substituting (53) into (52), we have

p_G(z) = (E[|G|] / (2 g_G(z))) exp(−∫_0^z x / g_G(x) dx),

which is the density formula of Theorem 1. If g_G(z) = 1, then G is standard normal by Theorem 3, so E[|G|] = √(2/π) and
p_G(z) = (E[|G|] / 2) exp(−∫_0^z x dx) = (1/√(2π)) exp(−z²/2),
which implies that Theorem 3 holds.
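This closing claim can be confirmed numerically (our addition): with g_G ≡ 1 we have E[|G|] = √(2/π), and the displayed formula returns exactly the N(0,1) density, since √(2/π)/2 = 1/√(2π):

```python
import math

E_abs_G = math.sqrt(2.0 / math.pi)  # E|G| for a standard normal G

def p_G(z):
    # p_G(z) = E|G| / (2 g_G(z)) * exp(-int_0^z x / g_G(x) dx), with g_G = 1
    return E_abs_G / 2.0 * math.exp(-z * z / 2.0)

for z in (0.0, 1.0, -1.5):
    target = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    print(z, p_G(z), target)  # identical columns
```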

4.2. The Uniform Distribution

When μ is the uniform distribution, i.e., F ~ U([0,1]), the coefficients in (12) are given by

a(x) = x(1 − x) and b(x) = ½ − x for x ∈ (0, 1).
From (20), we have that

h̃_z(x) = (2 / (p_F(1/2) a(1/2))) exp(−∫_{1/2}^x (1 − 2y)/(y(1 − y)) dy) × { z(1 − x) if x ≥ z;  x(1 − z) if x < z }
= 8 × (1 / (4x(1 − x))) × { z(1 − x) if x ≥ z;  x(1 − z) if x < z }
= (2 / (x(1 − x))) × { z(1 − x) if x ≥ z;  x(1 − z) if x < z }.
Then the density of G is given by

p_G(x) = (p_G(β) g_{b(G)}(β) / g_{b(G)}(x)) exp(∫_β^x (b(y) − E[b(G)]) / g_{b(G)}(y) dy).

Taking β = E[G], and noting that b(y) − E[b(G)] = E[G] − y, we get

p_G(x) = (p_G(E[G]) g_{b(G)}(E[G]) / g_{b(G)}(x)) exp(−∫_{E[G]}^x (y − E[G]) / g_{b(G)}(y) dy).
The relation (41) gives that

p_G(E[G]) g_{b(G)}(E[G]) = ∫_l^{E[G]} (E[G] − y) p_G(y) dy = ½ E[|G − E[G]|].
Hence (56) can be written as

p_G(x) = (E[|G − E[G]|] / (2 g_{b(G)}(x))) exp(−∫_{E[G]}^x (y − E[G]) / g_{b(G)}(y) dy).
Putting E[G] = 0, we see from (57) that the density p_G is identical to the density in Theorem 1. If g_{b(G)}(x) = ½ x(1 − x) for x ∈ (0, 1), a direct computation (using E[|G − ½|] = ¼) yields that

p_G(x) = (1 / (4x(1 − x))) exp(−∫_{1/2}^x (y − ½) / (½ y(1 − y)) dy) = (1 / (4x(1 − x))) exp(log x + log(1 − x) + log 4) = 1_{[0,1]}(x),
which implies that Theorem 3 holds true.
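Again as a numerical check (our addition): plugging g_{b(G)}(x) = x(1 − x)/2 and E[|G − 1/2|] = 1/4 into the density formula and evaluating the exponent by quadrature reproduces the uniform density:

```python
import math

g = lambda y: 0.5 * y * (1.0 - y)  # candidate g_{b(G)} = a/2 for the uniform case
E_abs = 0.25                       # E|G - 1/2| when G ~ U(0, 1)

def integral(f, lo, hi, n=20000):
    """Midpoint rule; a signed step also handles lo > hi."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

def p_G(x):
    expo = -integral(lambda y: (y - 0.5) / g(y), 0.5, x)
    return E_abs / (2.0 * g(x)) * math.exp(expo)

for x in (0.1, 0.5, 0.8):
    print(x, p_G(x))  # each value is close to 1
```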

5. Conclusions and Future Works

When a random variable F follows the invariant measure μ with density p_F, and a random variable G ∈ D^{1,2} also admits a density p_G, this paper finds an explicit formula for the density p_G based on the coefficients of the diffusion associated with the density p_F. The significant feature of our work is that it shows that the density p_G can be obtained by connecting it with the diffusion having the invariant measure, and that if g_{b(G)} equals one half of the diffusion coefficient a, Theorem 2 in [9] can be easily recovered.
Future work will proceed in two directions: (1) Using the results of this paper, we plan to derive a density formula associated with an Edgeworth expansion with general terms as given in [11]. (2) When G is a random variable belonging to a fixed Wiener chaos, we will obtain a more explicit formula than those obtained in previous works.

Funding

This research was supported by Hallym University Research Fund (HRF-202309-009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nualart, D. (2006). Malliavin Calculus and Related Topics. Probability and its Applications, 2nd ed. Springer, Berlin.
  2. Nualart, D. (2008). Malliavin calculus and its applications. Regional Conference Series in Mathematics, Number 110.
  3. Nourdin, I. and Peccati, G. (2009). Stein's method on Wiener chaos. Probab. Theory Related Fields, 145, 75–118.
  4. Chen, L. H. Y., Goldstein, L. and Shao, Q.-M. (2011). Normal Approximation by Stein's Method. Probability and its Applications (New York), Springer, Heidelberg.
  5. Stein, C. (1972). A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Vol. II: Probability Theory, University of California Press, Berkeley, California, 583–602.
  6. Stein, C. (1986). Approximate Computation of Expectations. IMS, Hayward, CA.
  7. Nourdin, I. and Peccati, G. (2012). Normal Approximations with Malliavin Calculus: From Stein's Method to Universality. Cambridge Tracts in Mathematics, Vol. 192, Cambridge University Press, Cambridge.
  8. Bibby, B. M., Skovgaard, I. M. and Sorensen, M. (2003). Diffusion-type models with given marginals and auto-correlation function. Bernoulli, 11(2), 191–220.
  9. Kusuoka, S. and Tudor, C. A. (2012). Stein's method for invariant measures of diffusions via Malliavin calculus. Stochastic Process. Appl., 122, 1627–1651.
  10. Nourdin, I. and Viens, F. G. (2009). Density formula and concentration inequalities with Malliavin calculus. Electron. J. Probab., 14(78), 2287–2309.
  11. Kim, Y. T. and Park, H. S. (2018). An Edgeworth expansion for functionals of Gaussian fields and its applications. Stochastic Process. Appl., 44, 312–320.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.