1. Background
The Gaussian copula is widely used in quantitative finance modelling. The Gaussian distribution is closely related to an underlying Brownian motion: the standard multi-variate normal distribution is the terminal distribution of a multi-variate Brownian motion whose correlations are constant over time. However, constant correlation is a limitation of this model and might not fit the actual market. On the other hand, if the correlations are not constant, the resulting terminal distribution has no closed-form representation in general. Without a closed-form solution or analytic tractability, the model becomes less attractive for practical usage. There is research in alternative directions which bypasses this tractability issue; for example, in (Lucic 2012; Luján 2022), the respective authors construct different terminal distributions whose shapes admit the desired correlation skew effect. In this paper, we still focus on the terminal distribution arising from the Brownian motion itself. We study the PDE for the density function and show that, with some assumptions on the correlation function, we can derive PDEs which describe the moments of the marginal and conditional distributions. In particular, these PDEs are of lower dimension, so the calculations are fast and practical. With these moments, we can generate moment-matching approximations, with good analytic tractability, to the true terminal distribution. The resulting analytic distribution can be a useful variation of the standard multi-variate normal distribution, and it can be used for purposes like modelling the correlation skew effect in quantitative finance.
2. Methodology
2.1. Model Setup
We study the math problem below. It is stated in the 2-dimensional case; we show later that similar techniques can be applied in higher dimensions.
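To fix ideas, a minimal sketch of such a 2-dimensional dynamics is the following; the symbols W^1, W^2 (independent standard Brownian motions) and ρ (the instantaneous correlation function) are our notation, stated as an assumption rather than taken from a fixed equation:

```latex
dX_t = dW^1_t, \qquad
dY_t = \rho\, dW^1_t + \sqrt{1-\rho^2}\;\, dW^2_t, \qquad
\rho = \rho(X_t, Y_t, t),
```

so that \(d\langle X, Y\rangle_t = \rho\, dt\), and the Fokker–Planck equation for the joint density of \((X_t, Y_t)\) has two space dimensions plus time.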
This is a 2d PDE in the convention of the quant finance industry (2d refers to the two space variables; counting t it is in fact a 3-dimensional PDE, but given the common presence of t in this type of PDE, we count only the space dimensions), and general numerical methods for it are slow. However, we can reduce the complexity of the 2d PDE if we make a reasonable assumption on the correlation function, as below:
This means the correlation depends on x and y only through the total x + y, which can be interpreted as: the correlation depends on a market factor which is the average of the underlyers. With this extra assumption, we can simplify the problem as below:
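Under this reading, the assumption can be sketched as (the function name \(\tilde\rho\) is ours):

```latex
\rho(x, y, t) \;=\; \tilde\rho\!\left(\tfrac{x+y}{2},\; t\right).
```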
Let us make a change of variables under which the two diffusions decouple; the dynamics can then be rewritten in the new variables.
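One change of variables consistent with the surrounding discussion (our notation) is \(u = (x+y)/2\), \(v = (x-y)/2\). With \(\rho = \tilde\rho(u, t)\) and \(B^1, B^2\) independent Brownian motions, the dynamics become:

```latex
du_t = \sqrt{\frac{1+\rho}{2}}\; dB^1_t, \qquad
dv_t = \sqrt{\frac{1-\rho}{2}}\; dB^2_t,
```

since the instantaneous covariance of u and v is \(\tfrac14(\mathrm{Var}\, dX - \mathrm{Var}\, dY) = 0\).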
Note the first diffusion involves only u, so the Fokker–Planck equation for the probability density of u is a 1d PDE:
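If the diffusion coefficient of u is \(\sqrt{(1+\rho)/2}\) (our assumed normalization) and \(p(u,t)\) denotes the density of u, this 1d equation reads:

```latex
\frac{\partial p}{\partial t}
= \frac{1}{2}\,\frac{\partial^2}{\partial u^2}
\!\left[\frac{1+\rho(u,t)}{2}\; p\right].
```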
So we can solve for the density of u first, and then look at v. For any given path of u, the terminal value of v is simply a sum of infinitesimal normal variables with known variances, so the distribution of v conditioned on this path of u is a normal distribution with mean 0 and variance equal to the variance accumulated along the path.
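In symbols, with \(\rho = \rho(u_s, s)\) our notation for the correlation along the path and \((1-\rho)/2\) the assumed instantaneous variance of v:

```latex
v_t \,\Big|\, (u_s)_{0 \le s \le t}
\;\sim\;
\mathcal{N}\!\left(0,\; \int_0^t \frac{1-\rho(u_s, s)}{2}\, ds \right).
```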
2.1.1. Conditional Distribution and the First Two Moments
Conditioning on a path is not easy to use for calculation; it is more useful to condition on a value instead of the whole path. We therefore consider the conditional distribution of v given the terminal value of u; note this distribution is not a strict normal distribution in general. A more detailed discussion of this distribution is left for later; for now we focus on the first two moments, i.e., the mean and variance of this conditional distribution.
The mean is clearly zero by symmetry. For the variance, denoted f(u, t), we have
Theorem 1.
Let f(u, t) be the variance of the conditional distribution of v conditioned on the terminal value u of the path; then it satisfies
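Under the normalization assumed in this note (instantaneous variance \((1-\rho)/2\) for v), the statement takes the form:

```latex
f(u, t) \;=\;
\mathbb{E}\!\left[\, \int_0^t \frac{1-\rho(u_s, s)}{2}\, ds
\;\middle|\; u_t = u \right].
```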
Proof. The proof is straightforward; for completeness it is included below. By the stochastic-integral definition as a limit of sums, v_t is a sum of increments over a time grid, subject to the constraint that the path of u ends at u. Using the definition of variance, we get a double sum over grid points, which after taking expectation has non-zero terms only on the diagonal, still subject to the path constraint. Then take the limit of a vanishing mesh and the statement is proven. □
Note that f(u, t)p(u, t) is a path integral over all possible paths that reach u at time t. In the discretized representation, each factor is the transition probability from one grid state to the next.
Now, following the Fokker–Planck equation derivation technique, we get:
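In the same notation (p the density of u, f the conditional variance, and a source term from the variance accumulating along the path), the resulting forward equation for the product f p is of the form:

```latex
\frac{\partial (f p)}{\partial t}
= \frac{1}{2}\,\frac{\partial^2}{\partial u^2}
\!\left[\frac{1+\rho}{2}\; f p\right]
+ \frac{1-\rho}{2}\; p .
```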
Proof. Let φ be a smooth test function with compact support and pair it with both sides; the second term comes from the source that accumulates along the path. The inner integral against the transition density is a moment of the Brownian increment, so it can be expanded in powers of the time step. Collecting terms order by order, and integrating by parts in the last step, the claim follows because φ is an arbitrary smooth function. □
To recap, we have these 2 key equations:
We can solve for p first and then solve for f (it is also possible to bundle the PDE solves for p and f together in the discretization). Knowing p and f, we know the marginal distribution of u, and the mean (0) and variance of the conditional distribution of v. Note, as mentioned previously, the conditional distribution is not strictly normal in general. Sampling from it is basically a two-step process: first choose a path for u subject to the terminal condition; this yields a path-wise variance integral. Then draw a point from a normal distribution with variance set to that integral. Alternatively, one can first draw a sample from a standard normal distribution, then choose a path for u subject to the terminal condition, calculate the path-wise variance integral, and scale the normal variable by its square root.
One might think the second view preserves the normality of the overall sampling result, but it does not. A heuristic explanation: the normal sampling tends to be centralized, and the random scaling also has a centralizing tendency, so the combination is not the same as what a constant scaling would produce. This is heuristic, of course, but next we develop equations for the higher moments, from which one can see the result is not strictly normal: the relation between its 4th moment and its 2nd moment differs from that of a normal distribution.
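The two-step sampling above can be sketched as a short Monte Carlo experiment. The correlation function `rho` below is a hypothetical skewed choice of ours, and the scheme is a plain Euler discretization, not the paper's implementation:

```python
import numpy as np

# Hypothetical correlation-skew function (our illustrative choice):
# higher correlation when u is low, lower when u is high.
def rho(u):
    return np.clip(0.5 - 0.5 * u, -0.95, 0.95)

def sample_uv(n_paths, n_steps, t_end, seed=0):
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    u = np.zeros(n_paths)
    var_v = np.zeros(n_paths)   # path-wise accumulated variance of v
    for _ in range(n_steps):
        r = rho(u)
        # step 1: evolve the market factor u, variance (1+rho)/2 per unit time
        u += np.sqrt((1.0 + r) / 2.0 * dt) * rng.standard_normal(n_paths)
        # accumulate the conditional variance of v along the path
        var_v += (1.0 - r) / 2.0 * dt
    # step 2: draw v as normal with mean 0 and the path-wise variance
    v = np.sqrt(var_v) * rng.standard_normal(n_paths)
    return u, v

u, v = sample_uv(200_000, 100, 1.0)
```

Binning the samples by the terminal value of u gives a direct estimate of f(u, t), and the sum u + v recovers the original x variable, whose variance must equal t.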
2.1.2. Higher Order Moments
We can derive the equations for the higher-order moments following a similar technique. To demonstrate, we look at the 4-th order. Let m_4(u, t) be the 4-th order moment of the conditional distribution of v given the terminal value u.
Proof. The 4-th order moment is the limit of a sum over the time grid, subject to the path constraint. After taking expectation and removing zero terms, the surviving contributions are the paired-square cross terms (the diagonal 4-th-power terms are of higher order in the time step and approach 0 as the grid is refined, so we ignore them). Taking the limit, in integral representation it is a path integral.
Then following the Fokker Planck derivation, we have:
Each factor in the path integral is the transition probability from one grid state to the next. Then, following the previous derivation, we have:
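Writing \(m_4(u,t)\) for the 4-th conditional moment (our notation) and using the same normalization as before, the resulting equation has the form:

```latex
\frac{\partial (m_4\, p)}{\partial t}
= \frac{1}{2}\,\frac{\partial^2}{\partial u^2}
\!\left[\frac{1+\rho}{2}\; m_4\, p\right]
+ 3\,(1-\rho)\, f\, p ,
```

consistent with the path representation \(m_4 = 3\,\mathbb{E}\big[\big(\int_0^t \frac{1-\rho}{2}\, ds\big)^2 \,\big|\, u_t = u\big]\): the time derivative of the squared integral yields the source \(6 \cdot \frac{1-\rho}{2}\, f p = 3(1-\rho) f p\).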
□
Now we can prove the conditional distribution is not normal in general, because a normal distribution's 4th-order moment is 3 times the square of its variance.
Lemma 1. If the correlation is constant, then the 4-th moment satisfies m_4 = 3 f^2.
Proof. Plug m_4 = 3 f^2 into the equations; the verification is straightforward. □
If the correlation is constant, then it becomes the standard multi-variate normal distribution.
2.1.3. Normal Approximation to the Conditional
With the above in mind, we still prefer to use the normal distribution, with the variance matched, to approximate the true conditional distribution. First, it matches the moments to 2nd order, which is usually good enough for many practical usages. Secondly, the normal distribution has very good analytic tractability.
Note that the normal distribution is not a random choice either: denote the true joint density as p(u, t) q(v | u, t), where q is the conditional density of v conditioned on u. When we use a normal density form for q, it satisfies the Fokker–Planck equation in most of the terms.
This leads to a question: is there a good analytic form for q such that it can match moments to higher order (for example, 4-th order)? Obviously such a form would involve the higher moments it needs to match. To the author's knowledge, this remains an interesting open question.
3. Higher Dimensions and General Form
In higher dimensions, a similar technique can be applied if we assume the correlations are driven by one factor. Let us denote that factor as M; then, by Cholesky decomposition, we can re-write the dynamics as follows, where the driving Brownian motions are independent and all the correlation terms depend only on M and t. We have the following result:
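One way to write such one-factor-driven dynamics (the Cholesky factor L and the labels X^i, W^j are our notation):

```latex
dX^i_t = \sum_{j \le i} L_{ij}(M_t, t)\, dW^j_t,
\qquad i = 1, \dots, n,
```

where \(L(M,t)\, L(M,t)^{\top}\) is the instantaneous correlation matrix, the \(W^j\) are independent Brownian motions, and M can be taken as the average of the underlyers.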
Theorem 4.
Let p(M, t) be the probability density function of M at t; then p satisfies a 1-d Fokker–Planck PDE. Let the conditional mean of the i-th underlyer given the terminal value of M be the next quantity; then, weighted by p, it satisfies a 1-d forward PDE. Let the conditional covariance of the i-th and j-th underlyers given the terminal value of M be the last quantity; then, weighted by p, it likewise satisfies a 1-d forward PDE with a covariation source term.
Proof. Standard Fokker–Planck derivation using Taylor expansions and integration by parts, similar to the 2-d case above. □
These are 1-d PDEs; after solving them numerically, we use a normal distribution matching these moments to approximate the conditional distribution of the underlyers given M.
Note that the equations can be written in the following intrinsic form.
Note the above form holds when all the underlyers have 0 drift. The general form with non-zero drift can be derived as well, but it is more complicated.
4. Implementation Example
We show one example of the 2-d case: we discretize p and f together, solving first for p over a time step and then for f. We do not use the chain rule to break up the partial derivatives of the products; instead, we discretize the products directly. With standard finite difference methods, the calculation is fast and stable. We present an example of the distribution below:
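A minimal finite-difference sketch of this two-equation solve is below. The grid, explicit Euler time stepping, boundary handling, and the narrow-Gaussian starting condition are all our illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def solve_p_f(rho, t_end=0.5, n_u=201, u_max=5.0, n_t=2000, t0=0.01):
    """Solve the 1d Fokker-Planck PDE for p(u,t) and the forward
    equation for g = f*p, discretizing the products a*p and a*g directly."""
    u = np.linspace(-u_max, u_max, n_u)
    du = u[1] - u[0]
    dt = (t_end - t0) / n_t
    a = (1.0 + rho(u)) / 2.0          # instantaneous variance of u
    h = (1.0 - rho(u)) / 2.0          # instantaneous variance of v
    a0, h0 = a[n_u // 2], h[n_u // 2]
    # narrow Gaussian at small t0 approximates the initial delta at u = 0
    p = np.exp(-u**2 / (2 * a0 * t0)) / np.sqrt(2 * np.pi * a0 * t0)
    g = h0 * t0 * p                   # g = f*p with f ~ h*t0 initially

    def d2(x):                        # centered second difference, zero at ends
        y = np.zeros_like(x)
        y[1:-1] = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / du**2
        return y

    for _ in range(n_t):
        p_new = p + dt * 0.5 * d2(a * p)        # Fokker-Planck step for p
        g += dt * (0.5 * d2(a * g) + h * p)     # forward step for f*p
        p = p_new
    f = np.where(p > 1e-12, g / np.maximum(p, 1e-12), 0.0)
    return u, p, f

# example run with a skewed correlation (hypothetical choice)
u, p, f = solve_p_f(lambda x: np.clip(0.5 - 0.5 * x, -0.95, 0.95))
```

For a constant correlation the scheme can be checked against the closed form: p stays Gaussian with variance (1+ρ)t/2, and f = (1−ρ)t/2.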
Figure 1 shows the contour of a Gaussian distribution with correlation skew. The underlying correlation function is:
The graph axes are x and y; note that u and v lie along the two diagonal directions.
The shape of the contour is as expected. Since we put higher correlation where u is lower and lower correlation where u is higher, the probability is more concentrated when u is low and more dispersed when u is high. Note that with u fixed, the graph also shows symmetry in the direction of v.
The following graphs show more details of the above example.
In Figure 2 the distribution of u is very close to, but different from, a standard normal. To see the difference, we reflect the probability around the center; one can then see that the negative part has a fatter tail than the positive part.
This is expected: as we correlate more when u is more negative, u has more potential to go lower in the negative direction; and as we de-correlate more when u is more positive, the diversifying effect gives u less potential to go higher.
To demonstrate this point, we can increase the skew of the correlation further to see the fat-tail effect. Below is the distribution of u for a more skewed correlation function.
The contour in Figure 1 shows that v is concentrated when u is more negative and v is spread out when u is more positive. Figure 4 below shows the standard deviation of v conditioned on u, i.e., the square root of the f function.
5. Copula Application
Knowing p and f, we can integrate any function over this approximated terminal distribution. A given (u, v) maps back to (x, y), and the marginal distributions of x and y are then approximated normal distributions respectively (note that the true terminal distribution has exactly normal marginals; since we approximate the terminal distribution, we can think of the marginals as approximated as well), so they can readily be used to invert CDFs.
We found that moment matching to 2nd order produces good accuracy in our tests.
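As an illustration of using p and f to integrate over the approximated distribution, the sketch below computes a joint CDF as a 1-d integral over u; the region bounds follow from x = u + v, y = u − v. The grid and the constant-correlation check (where the exact bivariate-normal orthant probability P(X≤0, Y≤0) = 1/4 + arcsin(ρ)/(2π) is known in closed form) are our illustrative choices:

```python
import numpy as np
from math import erf, sqrt, pi

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def joint_cdf(x, y, u, p, f):
    """P(X <= x, Y <= y) under the normal-conditional approximation
    v | u ~ N(0, f(u)).  With x = u + v and y = u - v, the region
    {X <= x, Y <= y} becomes u - y <= v <= x - u."""
    sig = np.sqrt(np.maximum(f, 1e-300))
    upper = np.array([Phi(t) for t in (x - u) / sig])
    lower = np.array([Phi(t) for t in (u - y) / sig])
    integrand = p * np.clip(upper - lower, 0.0, None)
    return float(np.sum(integrand) * (u[1] - u[0]))

# constant-correlation check: rho = 0.5, t = 1, so p is N(0, 0.75), f = 0.25
u = np.linspace(-8.0, 8.0, 2001)
p = np.exp(-u**2 / (2 * 0.75)) / np.sqrt(2 * pi * 0.75)
f = np.full_like(u, 0.25)
c = joint_cdf(0.0, 0.0, u, p, f)  # exact value: 1/4 + arcsin(0.5)/(2*pi) = 1/3
```

In practice, p and f on the grid would come from the PDE solves of the previous section rather than from closed forms.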
References
- Fokker, A. D. (1914). Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld. Annalen der Physik, 348(5), 810–820.
- Kolmogorov, A. (1931). Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Mathematische Annalen, 104, 415–458.
- Lucic, V. (2012). Correlation skew via product copula. In Financial Engineering Workshop, Cass Business School.
- Luján, I. (2022). Pricing the correlation skew with normal mean–variance mixture copulas. Journal of Computational Finance, 26(2).
- Planck, M. (1917). Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie. Sitzungsberichte der Preussischen Akademie der Wissenschaften.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).