Preprint (not peer-reviewed)

Sigma Martingales: Foundations, Properties, and a New Proof of the Ansel-Stricker Lemma

A peer-reviewed article of this preprint also exists.

Submitted: 31 December 2024
Posted: 03 January 2025
Abstract
Sigma martingales generalize local martingales through localizing sequences of predictable sets. They are essential in stochastic analysis and financial mathematics, particularly for arbitrage-free markets and portfolio theory. In this work, we present a new approach to sigma martingales that avoids semimartingale characteristics. We develop all fundamental properties, provide illustrative examples, and establish the core structure of sigma martingales in a new, straightforward manner. This approach culminates in a new proof of the Ansel-Stricker lemma, which states that one-sided bounded sigma martingales are local martingales. This result, referenced in nearly every publication on mathematical finance, traditionally relies on the original French-language proof.

1. Introduction

Sigma martingales are a class of stochastic processes that extend the concept of local martingales by broadening the localization procedure. While local martingales are processes that can be localized to uniformly integrable martingales using stopping times, sigma martingales generalize this idea through sequences of predictable sets. They exhibit distinctive properties, making them particularly valuable for addressing challenges in theoretical probability and financial mathematics. Notably, unlike local martingales, sigma martingales are closed under stochastic integration – a property that underscores their natural emergence in financial mathematics and is often used as an alternative definition of sigma martingales, distinct from the one adopted in this publication.
In finance, sigma martingales play a central role in ensuring the absence of arbitrage opportunities and in modeling self-financing trading strategies. The First Fundamental Theorem of Asset Pricing guarantees that when “No Free Lunch with Vanishing Risk” (NFLVR) is satisfied, there exists an Equivalent Sigma-Martingale Measure (ESMM) under which discounted price processes are sigma martingales. Even when price processes are modelled as local martingales (e.g., due to positivity constraints or continuity), stochastic integrals like $H \cdot M$, arising from self-financing trading strategies, are typically only sigma martingales and not local martingales.
The concept of sigma martingales was introduced as processes “de la classe $(\Sigma_m)$” in the pioneering work of Chou [2] and was further explored by Émery [8]. Despite their fundamental role in financial mathematics, sigma martingales have received relatively limited attention in the broader literature. For example, the treatment in [20] is concise but superficial, while works such as [12,15] delve deeper but rely heavily on the machinery of semimartingale characteristics, which can be mathematically demanding.
This publication seeks to bridge the gap in the literature by developing a comprehensive and accessible framework for sigma martingales without relying on the technical apparatus of semimartingale characteristics. Instead, we demonstrate that all fundamental results about sigma martingales can be derived from first principles. One of this work’s central achievements is a new proof of the crucial Ansel-Stricker lemma, which states that one-sided bounded sigma martingales are local martingales. In the context of mathematical finance, this means that if a trading strategy is admissible (i.e., it satisfies specific boundedness conditions), the stochastic integral $H \cdot M$ is not merely a sigma martingale but also a local martingale, significantly simplifying the theoretical framework.

2. Definition and Properties

We start by repeating some basic definitions.
Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $\mathbb{F} = \{\mathcal{F}_t : t \ge 0\}$ be a filtration of sub-$\sigma$-algebras of $\mathcal{F}$ satisfying the usual conditions, that means:
1. $\mathcal{F}_0$ includes all $P$-null sets from $\mathcal{F}$;
2. $\mathbb{F}$ is right-continuous, i.e.,

$$\bigcap_{s > t} \mathcal{F}_s = \mathcal{F}_t \quad \text{for each } t \ge 0.$$
Given a measure $Q$ on $\mathcal{F}$, we say that $Q \ll P$ if $P(E) = 0$ implies $Q(E) = 0$ for all $E \in \mathcal{F}$; $P \ll Q$ is defined analogously. If both $P \ll Q$ and $Q \ll P$, we say that $P$ and $Q$ are equivalent, and we write $P \sim Q$.

A set $E \subseteq \Omega \times \mathbb{R}_+$ is called evanescent if $\{\omega \in \Omega : \exists\, t \ge 0 \text{ such that } (\omega, t) \in E\}$ is a $P$-null set. Here, we always consider stochastic processes modulo evanescent sets.

A càdlàg function is a mapping $\xi : \mathbb{R}_+ \to \mathbb{R}$ that is right-continuous and has left-hand limits at every point. From this point forward, we assume that all stochastic processes are càdlàg (at least up to evanescence).
A sequence of processes $(H^n)_{n \ge 1}$ converges to a process $H$ uniformly on compacts in probability (abbreviated ucp) if, for each $t > 0$,

$$\sup_{0 \le s \le t} |H^n_s - H_s| \longrightarrow 0 \quad \text{in probability}.$$

A càdlàg, adapted process $X = (X_t)_{t \ge 0}$ is called a semimartingale if it can be decomposed as $X_t = M_t + A_t$, where $M$ is a local martingale and $A$ is an adapted process of finite variation on every finite interval. The space of d-dimensional semimartingales is denoted by $\mathcal{S}^d$.
Beyond this decomposition, semimartingales can also be defined equivalently through their properties as good integrators. In this sense, a semimartingale is a process for which the integral operator is continuous with respect to certain metrics. Finally, semimartingales can also be described as topological semimartingales, whose definition relies on certain convergence properties in the semimartingale topology. These three characterizations – the classical decomposition, the good integrator perspective, and the topological semimartingale framework – are mathematically equivalent.
The predictable σ-algebra, denoted by $\mathcal{P}$, is the smallest σ-algebra on $\Omega \times \mathbb{R}_+$ with respect to which all left-continuous adapted processes, viewed as maps into $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, are measurable.

This definition is equivalent to several other characterizations of the predictable σ-algebra on $\Omega \times \mathbb{R}_+$. For example, the predictable σ-algebra can also be generated by the simple or elementary predictable processes, by the continuous adapted processes, or by the stochastic intervals associated with predictable stopping times.
By $\mathcal{M}^d$, we denote the space of d-dimensional martingales, by $b\mathcal{M}^d$ the space of bounded d-dimensional martingales, and by $\mathcal{M}^d_{loc}$ the space of d-dimensional local martingales. A subscript 0, as in $\mathcal{M}_{loc,0}$, further indicates that the process starts at 0, that is, $M_0 = 0$ almost surely for all $M \in \mathcal{M}_{loc,0}$.

In the following, we use $H \cdot X$ to denote the stochastic integral of a d-dimensional predictable process H with respect to a d-dimensional semimartingale X, as defined, for example, in [3].
For M a martingale and $p \in [1, \infty)$, write

$$\|M\|_{\mathcal{H}^p} := \|M^*\|_p = E\Big[\sup_t |M_t|^p\Big]^{\frac{1}{p}}.$$

Here $\|\cdot\|_p$ denotes the norm in $L^p$. Then $\mathcal{H}^p$ is the space of martingales such that $\|M\|_{\mathcal{H}^p} < \infty$.
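To make the $\mathcal{H}^p$-norm concrete, the following toy computation estimates $\|M\|_{\mathcal{H}^1} = E[\sup_t |M_t|]$ by Monte Carlo for a simple symmetric random walk over a finite horizon; the discrete-time setting and the function name estimate_h1_norm are our own illustration, not part of the text.

```python
import random

# Toy estimate of the H^1-norm E[sup_t |M_t|] for a finite-horizon
# martingale: a simple symmetric random walk (a discrete stand-in for
# the continuous-time processes considered in the text).

def estimate_h1_norm(n_steps, n_paths, rng):
    total = 0.0
    for _ in range(n_paths):
        m, running_sup = 0, 0
        for _ in range(n_steps):
            m += rng.choice((-1, 1))
            running_sup = max(running_sup, abs(m))
        total += running_sup
    return total / n_paths

rng = random.Random(1)
print(estimate_h1_norm(100, 2000, rng))  # finite, of order sqrt(n_steps)
```

For a bounded-horizon random walk this quantity is always finite; the interesting processes below are exactly those where such integrability fails.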
Our definition of a sigma martingale was initially proposed by Goll et al. [10] and subsequently adapted by some authors (for example, [12] Definition 6.33). However, in Theorem 3, we show that the definition from [6] and [20] is equivalent to ours.
Definition 1
(Sigma Martingale).

(a) A one-dimensional semimartingale S is called a sigma martingale if there exists a sequence of sets $D_n \in \mathcal{P}$ such that:
(i) $D_n \subseteq D_{n+1}$ for all n;
(ii) $\bigcup_{n=1}^{\infty} D_n = \Omega \times \mathbb{R}_+$;
(iii) for any $n \ge 1$, the process $1_{D_n} \cdot S$ is a uniformly integrable martingale.
Such a sequence $(D_n)_{n \in \mathbb{N}}$ is called a σ-localizing sequence.

(b) A d-dimensional semimartingale is called a sigma martingale if each of its components is a one-dimensional sigma martingale.

(c) By $\mathcal{M}^d_{\sigma}$, we denote the set of all d-dimensional sigma martingales.

(d) A sequence of sets $D_n \in \mathcal{P}$ is called a Σ-localizing sequence if:
(i) $D_n \subseteq D_{n+1}$ for all n;
(ii) $\bigcup_{n=1}^{\infty} D_n = \Omega \times \mathbb{R}_+$;
(iii) for any $n \ge 1$, the process $1_{D_n} \cdot S$ is a local martingale.
We begin with some examples. First, observe that by setting $D_n := [\![0, T_n]\!]$ for a localizing sequence $(T_n)_{n \in \mathbb{N}}$, every local martingale becomes a sigma martingale.
Theorem 1.
Every local martingale is a sigma martingale.
Proof. 
Let M be a local martingale and $(T_n)_{n \in \mathbb{N}}$ a localizing sequence such that each $M^{T_n}$ is a uniformly integrable martingale. Define $D_n := [\![0, T_n]\!]$. Since $1_{[\![0, T_n]\!]} \cdot M = M^{T_n}$, it follows that M is also a sigma martingale. □
It turns out that $\mathcal{M}^d_{loc} \subsetneq \mathcal{M}^d_{\sigma}$, as illustrated by the following examples.
Example 1.
Let $(Y_n)_{n \ge 2}$ be a sequence of independent random variables with

$$P(Y_n = n) = \frac{1}{2n^2}, \qquad P(Y_n = -n) = \frac{1}{2n^2}, \qquad P(Y_n = 0) = 1 - \frac{1}{n^2}$$

(we start at n = 2 so that $P(Y_n = 0) > 0$ for every n). We put

$$X_t := \sum_{n \ge 2 \,:\, 1 - \frac{1}{n} \le t} Y_n.$$

Then X is a well-defined sigma martingale but not a local martingale.

First, we have to show that X is well-defined. To this end, we define $A_n := \{\omega \in \Omega : Y_n(\omega) \ne 0\}$. Clearly, we have $P(A_n) = \frac{1}{n^2}$ and hence $\sum_{n \ge 2} P(A_n) < \infty$. By the Borel-Cantelli lemma, we conclude $P(Y_n \ne 0 \text{ for infinitely many } n) = 0$, and thus X is well-defined.
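The Borel-Cantelli argument above can also be checked numerically; the following sketch (with our own helper name sample_nonzero_count) illustrates that $\sum_n P(Y_n \ne 0) = \sum_n 1/n^2$ is finite and that a typical realization has only a handful of nonzero $Y_n$.

```python
import random

# Numerical sanity check (an illustration, not part of the proof):
# P(Y_n != 0) = 1/n^2 is summable, so by Borel-Cantelli only finitely
# many Y_n are nonzero in almost every realization.

def sample_nonzero_count(n_max, rng):
    """Count how many of Y_2, ..., Y_{n_max} are nonzero in one realization."""
    return sum(1 for n in range(2, n_max + 1) if rng.random() < 1.0 / n**2)

rng = random.Random(0)
tail_sum = sum(1.0 / n**2 for n in range(2, 10**6))
counts = [sample_nonzero_count(5000, rng) for _ in range(500)]

print("sum of P(Y_n != 0):", round(tail_sum, 4))  # finite (about 0.64)
print("most nonzero jumps seen in 500 runs:", max(counts))
```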
By setting

$$D_n := \Omega \times \Big( \big[0, 1 - \tfrac{1}{n+1}\big] \cup [1, \infty) \Big),$$

the process $1_{D_n} \cdot X$ carries only the finitely many jumps $Y_2, \dots, Y_{n+1}$. Since the sum is finite and all $Y_n$ are symmetric and integrable, it is easy to see that $(D_n)_{n \in \mathbb{N}}$ is a σ-localizing sequence and X a sigma martingale.

Furthermore, X is not a local martingale. To show this, we assume $X \in \mathcal{M}^1_{loc}$. Since X is a process with independent increments, we even have $X \in \mathcal{M}^1$ (see, for example, Medvegyev [17] Theorem 7.97). We put

$$A_n := \{Y_n = n\} \cap \bigcap_{k \ne n} \{Y_k = 0\}$$

and, by the independence of the random variables $Y_k$, we obtain

$$P(A_n) = P(Y_n = n) \prod_{k \ne n} P(Y_k = 0) \ge P(Y_n = n) \prod_{k \ge 2} P(Y_k = 0) = c \cdot \frac{1}{2n^2}$$

with $c := \prod_{k \ge 2} \big( 1 - \frac{1}{k^2} \big)$ (Note 1).

Since the sets $A_n$ are pairwise disjoint and $X_1 = Y_n = n$ on $A_n$, monotone convergence yields

$$E|X_1| \ge E\Big[ |X_1| \sum_{n \ge 2} 1_{A_n} \Big] = \sum_{n \ge 2} E\big[ |X_1| 1_{A_n} \big] = \sum_{n \ge 2} n \, P(A_n) \ge c \sum_{n \ge 2} \frac{n}{2n^2} = \infty.$$

This is a contradiction, and we conclude that X is not a local martingale.
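The divergence at the end of the example can be illustrated numerically: the partial sums of the lower bound $c \sum_{n \ge 2} \frac{n}{2n^2}$ grow like a multiple of $\log N$ (the constant $c = \prod_{k \ge 2}(1 - 1/k^2)$ telescopes to $\frac{1}{2}$). The code below is an illustration, not part of the proof; the function name lower_bound is ours.

```python
# The lower bound E|X_1| >= c * sum_{n>=2} n/(2 n^2) = (c/2) * sum_{n>=2} 1/n
# diverges, since the harmonic series does. The constant c telescopes:
# prod_{k=2}^{N} (1 - 1/k^2) = (N+1)/(2N) -> 1/2.

c = 1.0
for k in range(2, 10**6):
    c *= 1.0 - 1.0 / k**2

def lower_bound(N):
    """Partial sum of the divergent lower bound c * sum_{n=2}^N n/(2 n^2)."""
    return c * sum(1.0 / (2.0 * n) for n in range(2, N + 1))

for N in (10, 1000, 100000):
    print(N, lower_bound(N))  # grows like (c/2) * log N
```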
The following example is a variant of the most prominent example of a sigma martingale that is not a local martingale. It is due to Émery [8] and is mentioned in most publications about sigma martingales (see, for example, [13] Example 9.29, [20] the example preceding Theorem IV.34, or [21] Example 5.2).
Example 2.
Let τ, ξ be independent random variables with $P(\tau > t) = \exp(-2t)$ and $P(\xi = 1) = P(\xi = -1) = \frac{1}{2}$. We put $X_t := \frac{\xi}{\tau} 1_{\{t \ge \tau\}}$. Then X is a sigma martingale but not a local martingale.

By putting $D_n := \Omega \times \big( \{0\} \cup [\frac{1}{n}, \infty) \big)$, we obtain

$$\big( 1_{D_n} \cdot X \big)_t = \begin{cases} \frac{\xi}{\tau} & \text{for } \tau \ge \frac{1}{n} \text{ and } t \ge \tau, \\ 0 & \text{else}, \end{cases}$$

and it is easy to see that $(D_n)_{n \in \mathbb{N}}$ is a σ-localizing sequence.

But X is not a local martingale, as we encounter integrability problems. We assume $X \in \mathcal{M}^1_{loc}$; hence, there exists a stopping time $T > 0$ such that $X^T$ is a uniformly integrable martingale.

Since X is constant on $[\![0, \tau)\![$, we deduce that T is constant on $\{\omega \in \Omega : T < \tau\}$. Hence, there exists an $\varepsilon > 0$ such that $T \ge \tau$ on $\{\omega \in \Omega : \tau < \varepsilon\}$, and thus

$$E\big| X_{\varepsilon \wedge T} \big| \ge \int_0^{\varepsilon} \frac{1}{t} \, 2\exp(-2t) \, dt = \infty.$$

So $X^T$ is not a uniformly integrable martingale, and thus X is not a local martingale.
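The integrability failure in Émery's example can be made quantitative: with the density $f(t) = 2\exp(-2t)$ of τ, the integral $\int_0^{\varepsilon} \frac{1}{t} f(t)\, dt$ diverges logarithmically at 0. The sketch below truncates the integral at a lower cutoff δ and lets δ shrink; truncated_integral is our own helper name.

```python
import math

# ∫_delta^eps (1/t) * 2*exp(-2t) dt grows like 2*log(1/delta) as delta -> 0,
# so the full integral from 0 is infinite (an illustration, not a proof).

def truncated_integral(delta, eps=1.0, steps=100000):
    """Midpoint rule for ∫_delta^eps (2/t) * exp(-2t) dt."""
    h = (eps - delta) / steps
    total = 0.0
    for i in range(steps):
        t = delta + (i + 0.5) * h
        total += (2.0 / t) * math.exp(-2.0 * t) * h
    return total

for delta in (1e-1, 1e-2, 1e-3):
    print(delta, truncated_integral(delta))
```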
Remark 1.
It turns out that for both of the above examples, there exists an equivalent probability measure Q such that X is a Q -martingale.
For the first example, choose a probability measure Q under which the $Y_n$ are independent random variables with

$$Q(Y_n = n) = \frac{1}{2n^3}, \qquad Q(Y_n = -n) = \frac{1}{2n^3}, \qquad Q(Y_n = 0) = 1 - \frac{1}{n^3}.$$

Then we have

$$E_Q|X_t| \le E_Q\Big[ \sum_{n \ge 2} |Y_n| \Big] = \sum_{n \ge 2} n \cdot \frac{1}{n^3} = \sum_{n \ge 2} \frac{1}{n^2} < \infty.$$

Furthermore, it is easy to see that $E_Q[X_t \mid \mathcal{F}_s] = X_s$. Hence, X is a Q-martingale.

For the second example, choose a probability measure Q under which τ and ξ are independent, $Q(\xi \in A) = P(\xi \in A)$ for all Borel sets A, and $\int_0^t \frac{1}{s} \, dQ(\tau \le s) < \infty$ for all $t \ge 0$ (that is, the law of τ under Q puts sufficiently little mass near 0). Then we have $E_Q|X_t| < \infty$ and $E_Q[X_t \mid \mathcal{F}_s] = X_s$, and hence X is a Q-martingale.

However, in general, such a probability measure does not necessarily exist. We will illustrate this in Example 3.
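For the first example in the remark, the effect of the measure change can be seen by comparing the two bounding series: under Q the bound $\sum_{n \ge 2} n \cdot \frac{1}{n^3}$ converges, while the analogous series under P is harmonic and diverges. A minimal numerical comparison (function names are ours):

```python
# Under Q (jump probabilities 1/(2 n^3)) the bound on E_Q|X_1| converges;
# under P (jump probabilities 1/(2 n^2)) the analogous bound diverges.

def q_bound(N):
    """Partial sum of sum_{n=2}^N n * (1/n^3) = sum 1/n^2 (converges)."""
    return sum(n * (1.0 / n**3) for n in range(2, N + 1))

def p_bound(N):
    """Partial sum of sum_{n=2}^N n * (1/n^2) = sum 1/n (diverges)."""
    return sum(n * (1.0 / n**2) for n in range(2, N + 1))

for N in (100, 10**4, 10**6):
    print(N, q_bound(N), p_bound(N))
```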
The notion of the σ- (or Σ-) localizing sequence is essentially new but is inspired by the procedure of σ-localization, which was first described by Jacod and Shiryaev [12] and Kallsen [15]. It simplifies some proofs, since the following theorem holds.
Theorem 2.
Let S be a semimartingale. The following are equivalent:
(i) the process S is a sigma martingale;
(ii) for S, there exists a Σ-localizing sequence;
(iii) for S, there exists a sequence $D_n \in \mathcal{P}$ such that $D_n \subseteq D_{n+1}$, $\bigcup_{n=1}^{\infty} D_n = \Omega \times \mathbb{R}_+$, and for any $n \ge 1$ the process $1_{D_n} \cdot S$ is a sigma martingale.
In order to prove this theorem, we need the following lemma.
Lemma 1.
Let $S \in \mathcal{S}^1$ and let $D_n \in \mathcal{P}$ be sets which form a countable partition of $\Omega \times \mathbb{R}_+$ such that $1_{D_n} \cdot S$ is a uniformly integrable martingale for any $n \ge 1$. Then S is a sigma martingale.
Proof. 
We put $\tilde{D}_n := \bigcup_{i=1}^{n} D_i$; then it is easy to see that $(\tilde{D}_n)_{n \in \mathbb{N}}$ is a σ-localizing sequence. □
Proof of Theorem 2. 
It suffices to prove the theorem for $S \in \mathcal{S}^1$. (i)⇒(ii)⇒(iii) is clear, so we only have to show (iii)⇒(i).

Let S be a semimartingale for which there exists a sequence $(D_n)_{n \in \mathbb{N}}$ of predictable sets such that $D_n \subseteq D_{n+1}$, $\bigcup_{n=1}^{\infty} D_n = \Omega \times \mathbb{R}_+$, and such that $S^n := 1_{D_n} \cdot S$ is a sigma martingale for any $n \ge 1$.

By assumption, for every $n \ge 1$ there exists a sequence $(D_{n,m})_{m \in \mathbb{N}}$ such that

$$S_{n,m} := 1_{D_{n,m}} \cdot S^n$$

is a uniformly integrable martingale for all m. Defining $\tilde{D}_{n,m} := D_{n,m} \cap (D_n \setminus D_{n-1})$ with $D_0 := \emptyset$, we get

$$\tilde{S}_{n,m} := 1_{\tilde{D}_{n,m}} \cdot S = \big( 1_{D_{n,m}} 1_{D_n \setminus D_{n-1}} 1_{D_n} \big) \cdot S = 1_{D_n \setminus D_{n-1}} \cdot \big( 1_{D_{n,m}} \cdot S^n \big) = 1_{D_n \setminus D_{n-1}} \cdot S_{n,m},$$

which is a local martingale, because stochastic integrals of bounded integrands with respect to local martingale integrators are again local martingales. Thus, for every pair (n, m) there exists a fundamental sequence $(T_{n,m,k})_{k \in \mathbb{N}}$, and we set $T_{n,m,0} := 0$. We put

$$\bar{D}_{n,m,1} := \tilde{D}_{n,m} \cap [\![0, T_{n,m,1}]\!] \qquad \text{and} \qquad \bar{D}_{n,m,k} := \tilde{D}_{n,m} \cap \, ]\!]T_{n,m,k-1}, T_{n,m,k}]\!]$$

for $k = 2, 3, \dots$ and all $(n, m) \in \mathbb{N}^2$. Now

$$1_{\bar{D}_{n,m,1}} \cdot S = \big( 1_{\tilde{D}_{n,m}} \cdot S \big)^{T_{n,m,1}} = \tilde{S}_{n,m}^{\,T_{n,m,1}}$$

is a uniformly integrable martingale, and so is

$$1_{\bar{D}_{n,m,k}} \cdot S = \big( 1_{\tilde{D}_{n,m}} \cdot S \big)^{T_{n,m,k}} - \big( 1_{\tilde{D}_{n,m}} \cdot S \big)^{T_{n,m,k-1}} = \tilde{S}_{n,m}^{\,T_{n,m,k}} - \tilde{S}_{n,m}^{\,T_{n,m,k-1}}.$$

The sets $\bar{D}_{n,m,k}$ are predictable and form a countable partition of $\Omega \times \mathbb{R}_+$. Thus, by Lemma 1, S is a sigma martingale. □
The following corollary is immediate.
Corollary 1.
Every local sigma martingale X is a sigma martingale.
The following result shows that the set of sigma martingales is closed under stochastic integration, as opposed to the set of local martingales.
Corollary 2.
Let $S \in \mathcal{M}^d_{\sigma}$ and $H \in L(S)$. Then $H \cdot S$ is also a sigma martingale.
Proof. 
Let $\tilde{S}^1, \dots, \tilde{S}^d$ be the components of S. Consider a Σ-localizing sequence $(D_n)_{n \in \mathbb{N}}$ and define $\tilde{D}_n := \{(\omega, t) \in \Omega \times \mathbb{R}_+ : |H_t(\omega)| \le n\}$.

Since H is a predictable process, all of the $\tilde{D}_n$ lie in the predictable σ-algebra. Therefore, the sets $\bar{D}_n := D_n \cap \tilde{D}_n$ are predictable, and we have $\bar{D}_n \subseteq \bar{D}_{n+1}$ and $\bigcup_{n=1}^{\infty} \bar{D}_n = \Omega \times \mathbb{R}_+$. By putting $H^n := H 1_{\tilde{D}_n}$ and $S^n := (1_{D_n} \cdot \tilde{S}^1, \dots, 1_{D_n} \cdot \tilde{S}^d)$, the process $H^n$ is bounded and, by the associativity of the stochastic integral, we obtain

$$1_{\bar{D}_n} \cdot (H \cdot S) = \big( 1_{D_n} 1_{\tilde{D}_n} \big) \cdot (H \cdot S) = \big( 1_{\tilde{D}_n} H \big) \cdot \big( 1_{D_n} \cdot S \big) = H^n \cdot S^n.$$

Hence, since each component of $S^n$ is a local martingale, $1_{\bar{D}_n} \cdot (H \cdot S)$ is also a local martingale, and thus $(\bar{D}_n)_{n \in \mathbb{N}}$ is a Σ-localizing sequence. By Theorem 2, $H \cdot S$ is a sigma martingale. □
Corollary 3.
Let S be a sigma martingale with Σ-localizing sequence $(D_n)_{n \in \mathbb{N}}$, and let $(\tilde{D}_n)_{n \in \mathbb{N}}$ be a sequence of predictable sets which satisfies $\tilde{D}_n \subseteq \tilde{D}_{n+1}$ and $\bigcup_{n=1}^{\infty} \tilde{D}_n = \Omega \times \mathbb{R}_+$. Then $(\bar{D}_n)_{n \in \mathbb{N}} := (D_n \cap \tilde{D}_n)_{n \in \mathbb{N}}$ is also a Σ-localizing sequence.
Proof. 
We have

$$1_{\bar{D}_n} \cdot S = \big( 1_{D_n} 1_{\tilde{D}_n} \big) \cdot S = 1_{\tilde{D}_n} \cdot \big( 1_{D_n} \cdot S \big).$$

Since $(D_n)_{n \in \mathbb{N}}$ is a Σ-localizing sequence, $1_{D_n} \cdot S$ is, by definition, a local martingale, and since $1_{\tilde{D}_n}$ is bounded, $1_{\bar{D}_n} \cdot S$ is a local martingale for all n; hence $(\bar{D}_n)_{n \in \mathbb{N}}$ is a Σ-localizing sequence. □
Corollary 4.
The set of sigma martingales forms a vector space.
Proof. 
Without loss of generality, we assume d = 1. Consider $\alpha, \beta \in \mathbb{R}$ and $X, Y \in \mathcal{M}^1_{\sigma}$ with Σ-localizing sequences $(D_n)_{n \in \mathbb{N}}$ for X and $(\tilde{D}_n)_{n \in \mathbb{N}}$ for Y. By Corollary 3, $(D_n \cap \tilde{D}_n)_{n \in \mathbb{N}}$ is a Σ-localizing sequence for both X and Y, and we have

$$1_{D_n \cap \tilde{D}_n} \cdot (\alpha X + \beta Y) = \alpha \, 1_{D_n \cap \tilde{D}_n} \cdot X + \beta \, 1_{D_n \cap \tilde{D}_n} \cdot Y.$$

Since $\mathcal{M}^1_{loc}$ is a vector space, $1_{D_n \cap \tilde{D}_n} \cdot (\alpha X + \beta Y)$ is a local martingale, and thus $(D_n \cap \tilde{D}_n)_{n \in \mathbb{N}}$ is a Σ-localizing sequence for $\alpha X + \beta Y$. Hence, $\alpha X + \beta Y$ is a sigma martingale. □
We now come to one of the main statements about sigma martingales. As mentioned earlier, $H \cdot M$ for $M \in \mathcal{M}_{loc}$ is not necessarily a local martingale. Hence, we take a closer look at the class

$$\{ H \cdot M \,;\, H \in L(M), \ M \in \mathcal{M}_{loc} \},$$

and it turns out that this class corresponds exactly to the vector space of sigma martingales. Furthermore, by proving this theorem, we also show that our definition of a sigma martingale is equivalent to the definition used in [6] and [20].
The theorem is mentioned in almost every publication about sigma martingales (for example, in [20] Theorem IV.89 or [12] Theorem 6.4.1). Because of our different approach, the proof given here differs slightly from the ones given in the above-mentioned literature.
Theorem 3.
Let $X = (X^1, \dots, X^d)$ be a d-dimensional semimartingale. The following are equivalent:
(i) The process X is a sigma martingale.
(ii) There exist a strictly positive predictable process H and an $\mathcal{H}^1$-martingale $M = (M^1, \dots, M^d)$ with $X^i = H \cdot M^i$ for all $i \in \{1, \dots, d\}$.
(iii) There exist a strictly positive predictable process H and a martingale $M = (M^1, \dots, M^d)$ with $X^i = H \cdot M^i$ for all $i \in \{1, \dots, d\}$.
(iv) There exist a strictly positive predictable process H and a local martingale $M = (M^1, \dots, M^d)$ with $X^i = H \cdot M^i$ for all $i \in \{1, \dots, d\}$.
(v) There exist a local martingale $M = (M^1, \dots, M^d)$ and a predictable process $H = (H^1, \dots, H^d)$ with $H^i \in L(M^i)$ and $X^i = H^i \cdot M^i$ for all $i \in \{1, \dots, d\}$.
Proof. 
The implications (ii)⇒(iii)⇒(iv)⇒(v) are clear.

(i)⇒(ii): By assumption, there exists a σ-localizing sequence $(D_n)_{n \in \mathbb{N}}$. It is easy to see that every martingale is locally in $\mathcal{H}^1$ (see, for example, [20] Theorem IV.51). Hence, for each $n \in \mathbb{N}$ and each $i \in \{1, \dots, d\}$, there exists an increasing sequence of stopping times $(T_i^{n,m})_{m \in \mathbb{N}}$ tending to infinity such that $(1_{D_n} \cdot X^i)^{T_i^{n,m}} \in \mathcal{H}^1$. Therefore, we can construct a single sequence $(T_{n,m})_{m \in \mathbb{N}}$ of stopping times such that $(1_{D_n} \cdot X^i)^{T_{n,m}} \in \mathcal{H}^1$ for all $n, m \in \mathbb{N}$ and all $i \in \{1, \dots, d\}$.

We choose appropriate $\alpha_{n,m} > 0$ such that

$$\max_{i \in \{1, \dots, d\}} \big\| \alpha_{n,m} (1_{D_n} \cdot X^i)^{T_{n,m}} \big\|_{\mathcal{H}^1} \le 2^{-(m+n)} \quad \text{for all } m, n \in \mathbb{N},$$

and put $T_{n,0} := 0$ as well as

$$\tilde{K} := 1_{[\![0]\!]} + \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} \alpha_{n,m} 1_{]\!]T_{n,m-1}, T_{n,m}]\!] \cap D_n}, \qquad N^i := \tilde{K} \cdot X^i, \quad i = 1, \dots, d.$$

Because of

$$1_{]\!]T_{n,m-1}, T_{n,m}]\!] \cap D_n} \cdot X^i = (1_{D_n} \cdot X^i)^{T_{n,m}} - (1_{D_n} \cdot X^i)^{T_{n,m-1}} \in \mathcal{H}^1,$$

$N^i$ is the limit of a sequence of $\mathcal{H}^1$-martingales which converges in $\mathcal{H}^1$. Since $\mathcal{H}^1$ is a Banach space, $N^i$ is also an $\mathcal{H}^1$-martingale. Furthermore, we have $\tilde{K} > 0$, hence $K := \tilde{K}^{-1}$ exists, and we obtain $X^i = K \cdot N^i$ for all $i \in \{1, \dots, d\}$.

(v)⇒(i): As every local martingale is locally in $\mathcal{H}^1$, for each i there exists an increasing sequence of stopping times $(T_i^n)_{n \in \mathbb{N}}$ tending to infinity such that $(M^i)^{T_i^n} \in \mathcal{H}^1$. Hence, we can construct an increasing sequence of stopping times $(T_n)_{n \in \mathbb{N}}$ tending to infinity such that $(M^i)^{T_n} \in \mathcal{H}^1$ for all $n \in \mathbb{N}$ and all $i \in \{1, \dots, d\}$. We define

$$D_n := \bigcap_{i=1}^{d} \big\{ |H^i| \le n \big\} \cap [\![0, T_n]\!] \subseteq \Omega \times \mathbb{R}_+$$

and hence we have

$$1_{D_n} \cdot X^i = \big( 1_{D_n} H^i \big) \cdot M^i = \big( 1_{D_n} H^i \big) \cdot (M^i)^{T_n}.$$

Because of $|1_{D_n} H^i| \le n$ and because the stopped process $(M^i)^{T_n}$ is an $\mathcal{H}^1$-martingale, $1_{D_n} \cdot X^i$ is a local martingale for every n. Therefore, there exists a Σ-localizing sequence for X, and thus X is a sigma martingale. □
The following lemma is a simple statement about general stochastic integration. However, to the best of our knowledge, it has so far only been mentioned in the still unpublished [22]. Since it will be helpful for us, we prove it here again.
Lemma 2.
Let $(X^n)_{n \in \mathbb{N}}$ be a sequence of local martingales which converges to a process X in ucp. If $\big( \sup_n (X^n)^*_t \big)_{t \in \mathbb{R}_+}$ is locally integrable, then X is a local martingale.
Proof. 
Without loss of generality, we assume all processes to be one-dimensional. Because of the ucp convergence, we can conclude that X is also càdlàg and adapted. By passing to a suitable subsequence, we can also assume that the convergence holds almost surely uniformly on compact sets, and thus

$$M_t := \sup_n (X^n)^*_t$$

is also càdlàg and adapted. Furthermore, M is increasing, and we have

$$\Delta M_t \le 2 \sup_n (X^n)^*_t.$$

By assumption, the right-hand side is locally integrable; thus, M is also locally integrable.

Now, we can find a sequence of stopping times $(T_k)_{k \in \mathbb{N}}$ such that $(X^n)^{T_k}$ is a martingale for all n and k, and $M^{T_k}$ is integrable for all k. Furthermore, because of $|X^{T_k}| \le M^{T_k}$, we can apply the dominated convergence theorem and obtain, for every bounded stopping time τ,

$$E\big[ X_{\tau \wedge T_k} \big] = E\big[ \lim_n (X^n)_{\tau \wedge T_k} \big] = \lim_n E\big[ (X^n)_{\tau \wedge T_k} \big] = \lim_n E\big[ (X^n)_0 \big] = E[X_0].$$

Hence, $X^{T_k}$ is a martingale for all k, and we conclude that X is a local martingale. □
Sigma martingales are processes that behave “like” local martingales. It can even be shown that sigma martingales are semimartingales with vanishing drift ([15] Lemma 2.1). This raises the question of why they are not local martingales, or of which additional assumption must be made so that they are. With Lemma 2, we have a criterion available that allows us to prove the following simple answer to this question. Despite its simplicity, to the best of our knowledge, it is not explicitly mentioned in the sigma martingale literature. However, it will be enormously helpful for this new approach.
Theorem 4.
A sigma martingale X is a local martingale if and only if it is locally integrable.
Proof. 
Since every local martingale is a sigma martingale and locally integrable, it is enough to prove the converse.

Let X be, without loss of generality, a locally integrable one-dimensional sigma martingale. By Theorem 3, there exists a representation $X = H \cdot M$ with $M \in \mathcal{M}^1_{loc}$ and $H \in L(M)$. We define $H^n := H 1_{\{|H| \le n\}}$. Clearly, we have $|H^n| \le |H|$, and $H^n$ is a bounded predictable process. We obtain

$$\big| \Delta (H^n \cdot M) \big| = \big| H^n \Delta M \big| \le \big| H \Delta M \big| = \big| \Delta (H \cdot M) \big|.$$

Since each $H^n$ is bounded, we have $H^n \cdot M \in \mathcal{M}_{loc}$ for all $n \in \mathbb{N}$, and with the dominated convergence theorem for stochastic integrals we get $H^n \cdot M \xrightarrow{ucp} H \cdot M$. Choosing a subsequence, we can assume that $H^n \cdot M \to H \cdot M$ almost surely uniformly on compact sets.

We put $N_t := \sup_{n \in \mathbb{N}} |(H^n \cdot M)_t|$; then N is an adapted càdlàg process. Since $(H \cdot M)_-$ is left-continuous and hence locally bounded, it is locally integrable. Since $H \cdot M$ is locally integrable by assumption, $\Delta(H \cdot M) = H \cdot M - (H \cdot M)_-$ is also locally integrable.

Furthermore, we have

$$\Delta N = \Delta \sup_{n \in \mathbb{N}} |H^n \cdot M| \le \sup_{n \in \mathbb{N}} \big| \Delta (H^n \cdot M) \big| \le \big| \Delta (H \cdot M) \big|.$$

Hence, ΔN and therefore also $\Delta(N^*)$ are locally integrable. Since any càdlàg adapted process is locally integrable if its jump process is locally integrable, $N^*$ and thus $\sup_n (H^n \cdot M)^*$ are also locally integrable. Now the result follows from Lemma 2. □
As every continuous semimartingale is locally integrable, the following corollary is immediate.
Corollary 5.
Every continuous sigma martingale is a local martingale.
Remark 2.
As opposed to the criterion above, it is well known that any sigma martingale that is also a special semimartingale (Note 2) is a local martingale ([3] Corollary 12.3.20 or [20] Theorem IV.91), and it can be shown that a semimartingale is a special semimartingale if and only if its supremum process is locally integrable (see, for example, [3] Theorem 11.6.10 or [11] Theorem 8.6). Hence, we obtain that a sigma martingale is locally integrable if and only if its supremum process is locally integrable.
The following theorem is of principal importance in financial mathematics. It can be found in many publications on financial mathematics using the semimartingale terminology (not only is it mentioned in almost all of the publications frequently cited in this work, such as [5,6,7] and [21], but it also appears in many textbooks dealing with the different aspects of financial mathematics, such as [9,14,18,19]). However, to our knowledge, the only published proofs are the French-language original [1] Corollaire 3.5 and the more recent [4]. Theorem 4 enables us to give an alternative proof.
Theorem 5
(Ansel-Stricker). A one-sided bounded sigma martingale X is a local martingale. If X is bounded from below (resp. above), it is also a supermartingale (resp. submartingale).
Proof. 
Assume, without loss of generality, that $X \ge 0$ and that X is one-dimensional. By Theorem 3, there exists a representation $X = H \cdot M$ with $M \in \mathcal{M}^1_{loc}$ and $H \in L(M)$. Proceeding analogously to the proof of Theorem 4, we find a sequence $(H^n)_{n \in \mathbb{N}}$ of locally bounded predictable processes from $L(M)$ such that $H^n \cdot M \xrightarrow{ucp} H \cdot M$. Furthermore, we can assume that $H^n \cdot M \ge 0$ and $H^n \cdot M \to H \cdot M$ almost surely uniformly on compact sets (we can always find a modification of a subsequence for which these properties hold). Since the $H^n$ are locally bounded, $H^n \cdot M$ is a local martingale for all n. Hence, we can find a sequence of stopping times $(T_k)_{k \in \mathbb{N}}$ such that $(H^n \cdot M)^{T_k}$ is a martingale for all $n, k \in \mathbb{N}$.

By Fatou's lemma, we know that

$$E\big[ (H \cdot M)_{t \wedge T_k} \big] = E\big[ \liminf_n (H^n \cdot M)_{t \wedge T_k} \big] \le \liminf_n E\big[ (H^n \cdot M)_{t \wedge T_k} \big] = \liminf_n E\big[ H^n_0 M_0 \big] = E\big[ H_0 M_0 \big] < \infty.$$

Hence $X^{T_k}$ is integrable, and therefore X is locally integrable. From Theorem 4, $X \in \mathcal{M}_{loc}$ follows.

We still have to show the supermartingale property. Let $k \in \mathbb{N}$, $0 \le s \le t$, $A \in \mathcal{F}_s$, and let $(T_k)_{k \in \mathbb{N}}$ be as defined above. Since $X \ge 0$, we know that for all $\omega \in \Omega$ with $T_k \ge s$,

$$1_A \big( X_{t \wedge T_k} - X_{s \wedge T_k} \big) \ge -1_A X_{s \wedge T_k} \ge -X_{s \wedge T_k} = -X_s$$

holds. However, the above inequality is clearly also satisfied for $\omega \in \Omega$ with $T_k < s$, and therefore it also holds in general.

Hence, the sequence $Z_k := 1_A (X_{t \wedge T_k} - X_{s \wedge T_k}) + X_s$ is non-negative, and we can again apply Fatou's lemma. Since X is locally integrable and $0 \le E[X_s] < \infty$ holds, it follows that

$$E\big[ 1_A (X_t - X_s) \big] = E\Big[ \lim_k \big( 1_A (X_{t \wedge T_k} - X_{s \wedge T_k}) + X_s \big) \Big] - E[X_s] \le \liminf_k E\big[ 1_A (X_{t \wedge T_k} - X_{s \wedge T_k}) + X_s \big] - E[X_s] = \liminf_k E\big[ 1_A (X_{t \wedge T_k} - X_{s \wedge T_k}) \big] = 0.$$

Thus, X is a supermartingale. □
In finite, discrete time, any local martingale that is bounded from below is even a martingale and not just a supermartingale. The difference in continuous time is that $E[Y_t] < \infty$ for all t does not imply $E[\sup_t Y_t] < \infty$ (not even on compacts). Thus, there is no integrable pointwise majorant, which would be needed to prove the martingale property.
The following example illustrates that there are sigma martingales for which no equivalent probability measure exists under which they are local martingales.
Example 3.
Consider the two-dimensional sigma martingale $Z = (X, Y)$ with X being the process from Example 2 and

$$Y_t = \begin{cases} -2t & \text{for } t < \tau, \\ 1 - 2\tau & \text{else}. \end{cases}$$

Obviously, Y is a stopped compensated Poisson process. As the compensated Poisson process is a martingale (see, for example, [3] Theorem 5.5.18) and a stopped martingale is again a martingale by the optional stopping theorem, Y is also a martingale. Hence, Z is a sigma martingale that is not a local martingale, and we are going to show that no probability measure $Q \sim P$ exists such that Z is a local martingale under Q.
To that end, let Q be an equivalent probability measure such that Z is a sigma martingale under Q. For each $t > 0$, the stopped process $Y^t$ is bounded, and since, by Theorem 5, any bounded sigma martingale is a martingale, we conclude that $Y^t$ is a Q-martingale. Thus, we have $E_Q[Y_t] = 0$ for all $t \ge 0$. Since $Q \sim P$ and τ has a density under P, τ also has a density under Q; with f being this density and F the corresponding cumulative distribution function (that means $F(t) = Q(\tau \le t) = \int_0^t f(s) \, ds$), we get

$$0 = E_Q[Y_t] = -2t \, Q(\tau > t) + \int_0^t (1 - 2s) f(s) \, ds = -2t \, Q(\tau > t) + \int_0^t f(s) \, ds - 2 \int_0^t s f(s) \, ds$$
$$= -2t \, Q(\tau > t) + F(t) - 2 \big[ s F(s) \big]_0^t + 2 \int_0^t F(s) \, ds = -2t \, Q(\tau > t) - 2t \, Q(\tau \le t) + F(t) + 2 \int_0^t F(s) \, ds$$
$$= -2t + F(t) + 2 \int_0^t F(s) \, ds.$$
Differentiating with respect to t, we obtain $0 = -2 + f(t) + 2F(t)$. By putting $G(t) := 1 - F(t)$, we get $f(t) = -G'(t)$ and $F(t) = 1 - G(t)$, thus

$$G'(t) = -2 G(t) \qquad \text{and} \qquad G(0) = 1.$$

Hence, we obtain $G(t) = \exp(-2t)$ and thus

$$Q(\tau > t) = 1 - Q(\tau \le t) = 1 - F(t) = G(t) = \exp(-2t) = P(\tau > t).$$
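As a cross-check of the computation above (using, as derived, the density $f(s) = 2\exp(-2s)$), the following sketch solves $G' = -2G$, $G(0) = 1$ by forward Euler and verifies numerically that $E_Q[Y_t] = -2t\,Q(\tau > t) + \int_0^t (1 - 2s) f(s)\, ds$ vanishes; euler_G and expected_Y are our own helper names.

```python
import math

# Numerical cross-check of the derivation: G' = -2G, G(0) = 1 is solved by
# exp(-2t), and with f(s) = 2*exp(-2s) the expectation E_Q[Y_t] vanishes.

def euler_G(t, steps=100000):
    """Forward Euler for G' = -2G, G(0) = 1."""
    g, h = 1.0, t / steps
    for _ in range(steps):
        g += h * (-2.0 * g)
    return g

def expected_Y(t, steps=100000):
    """-2t*exp(-2t) + midpoint rule for ∫_0^t (1 - 2s) * 2*exp(-2s) ds."""
    h = t / steps
    integral = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        integral += (1.0 - 2.0 * s) * 2.0 * math.exp(-2.0 * s) * h
    return -2.0 * t * math.exp(-2.0 * t) + integral

print(euler_G(1.0), math.exp(-2.0))  # Euler approximation vs exact solution
print(expected_Y(1.0))               # approximately 0
```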
Now, we turn to X. Our first goal is to show that $E_Q[\xi \mid \tau] = 0$ or, equivalently, $\int_A \xi \, dQ = 0$ for all $A \in \sigma(\tau)$. We proceed in a sequence of steps.
1. We have $E_Q\big[ \frac{\xi}{\tau} 1_{\{s < \tau \le t\}} \big] = 0$ for all $0 < s < t$.

For any $s > 0$, we define $\tilde{X}^{(s)}_t := X_t - X^s_t$, where $X^s_t = X_{s \wedge t}$. Then $\tilde{X}^{(s)}$ is clearly a sigma martingale. Furthermore, we have $\tilde{X}^{(s)}_t = 0$ for $t \le s$ and $\tilde{X}^{(s)}_t = X_t - X_s$ for $t > s$; more explicitly,

$$\tilde{X}^{(s)}_t = \begin{cases} 0 & \text{for } t \le s, \\ 0 & \text{for } t > s \text{ and } t < \tau, \\ \frac{\xi}{\tau} & \text{for } t > s, \ t \ge \tau \text{ and } s < \tau, \\ 0 & \text{for } t > s \text{ and } s \ge \tau. \end{cases}$$

Since $\big| \frac{\xi}{\tau} \big| \le \frac{1}{s}$ on $\{s < \tau\}$ and $|\xi| = 1$, we conclude that $\tilde{X}^{(s)}$ is a bounded sigma martingale for every $s > 0$. Hence, by Theorem 5, it is a martingale, and we obtain

$$0 = E_Q\big[ \tilde{X}^{(s)}_t \big] = E_Q\big[ X_t 1_{\{s < \tau \le t\}} \big] = E_Q\Big[ \frac{\xi}{\tau} 1_{\{s < \tau \le t\}} \Big].$$
2. We have $\int_A \xi \, dQ = 0$ for all $A \in \big\{ \{\tau \in (s, t]\} : 0 < s < t \big\}$.

Let $A = \{\tau \in (s, t]\}$ with $0 < s < t$. Since $\tau > s$ on A and $|\xi| = 1$, we have $E_Q\big[ |\xi| 1_A \big] \le t \, E_Q\big[ \big| \frac{\xi}{\tau} \big| 1_A \big] < \infty$. Now partition $(s, t]$ into subintervals $(t_{i-1}, t_i]$, $i = 1, \dots, N$, of mesh at most δ. Writing $\xi = \tau \cdot \frac{\xi}{\tau}$ and applying Step 1 to each subinterval, we obtain

$$\Big| \int_A \xi \, dQ \Big| = \Big| \sum_{i=1}^{N} \int_{\{\tau \in (t_{i-1}, t_i]\}} (\tau - t_{i-1}) \, \frac{\xi}{\tau} \, dQ \Big| \le \frac{\delta}{s} \, Q(A).$$

Letting $\delta \to 0$, we conclude $\int_A \xi \, dQ = 0$.
3. We have $\int_A \xi \, dQ = 0$ for all $A \in \sigma(\tau)$.

By definition, we have $\int_A \xi \, dQ = \int_A \xi^+ \, dQ - \int_A \xi^- \, dQ$. Both summands can be seen as finite measures on $\sigma(\tau)$, and since the half-open intervals form an ∩-closed generator of the Borel σ-algebra (Note 3), these measures are uniquely determined by their values on $\big\{ \{\tau \in (s, t]\} : 0 < s < t \big\}$ (Note 4). By Step 2, they agree on these sets, so we conclude $\int_A \xi \, dQ = 0$ for all $A \in \sigma(\tau)$ and hence

$$E_Q[\xi \mid \tau] = 0. \tag{1}$$
Since Q and P are equivalent probability measures, we have

$$0 < Q(\xi = 1 \mid \tau), \ Q(\xi = -1 \mid \tau) < 1 \qquad \Longleftrightarrow \qquad 0 < P(\xi = 1 \mid \tau), \ P(\xi = -1 \mid \tau) < 1.$$

Thus, with (1), we obtain

$$Q(\xi = 1 \mid \tau) = Q(\xi = -1 \mid \tau) = \frac{1}{2}$$

almost surely, and hence ξ and τ are Q-independent with ξ symmetric. Together with $Q(\tau > t) = P(\tau > t)$ for all t, we conclude $Q = P$ on $\sigma(\tau, \xi)$. Since X is not a local martingale under P, there is no equivalent probability measure under which Z is a local martingale. □

References

  1. Ansel, J.P.; Stricker, C. Couverture des actifs contingents et prix maximum. Annales de l’institut Henri Poincaré (B) Probabilités et Statistiques 1994, 30, 303–315.
  2. Chou, C.S. Caracterisation d’une classe de semimartingales. In Séminaire de Probabilités XIII; Springer, 1979; pp. 250–252.
  3. Cohen, S.N.; Elliott, R.J. Stochastic Calculus and Applications; Probability and Its Applications, Springer New York, 2015.
  4. De Donno, M.; Pratelli, M. On a lemma by Ansel and Stricker. In Séminaire de Probabilités XL; Springer, 2007; pp. 411–414.
  5. Delbaen, F.; Schachermayer, W. A general version of the fundamental theorem of asset pricing. Mathematische Annalen 1994, 300, 463–520.
  6. Delbaen, F.; Schachermayer, W. The fundamental theorem of asset pricing for unbounded stochastic processes. Mathematische Annalen 1998, 312, 215–250.
  7. Delbaen, F.; Schachermayer, W. The Mathematics of Arbitrage; Number Bd. 13 in Springer Finance, Springer, 2006.
  8. Émery, M. Compensation de processus vf non localement intégrables. In Séminaire de Probabilités XIV 1978/79; Springer, 1980; pp. 152–160.
  9. Föllmer, H.; Schied, A. Stochastic Finance: An Introduction in Discrete Time; De Gruyter studies in mathematics, Walter de Gruyter, 2004.
  10. Goll, T.; Kallsen, J. A complete explicit solution to the log-optimal portfolio problem. The Annals of Applied Probability 2003, 13, 774–799.
  11. He, S.W.; Wang, J.G.; Yan, J.A. Semimartingale theory and stochastic calculus; Kexue Chubanshe (Science Press), Beijing; CRC Press, Boca Raton, FL, 1992; pp. xiv+546.
  12. Jacod, J.; Shiryaev, A.N. Limit theorems for stochastic processes, second ed.; Vol. 288, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer-Verlag, Berlin, 2003; pp. xx+661.
  13. Karandikar, R.L.; Rao, B.V. Introduction to stochastic calculus; Indian Statistical Institute Series, Springer, Singapore, 2018; pp. xiii+441.
  14. Karatzas, I.; Shreve, S. Methods of Mathematical Finance; Applications of mathematics, Springer, 1998.
  15. Kallsen, J. σ-localization and σ-martingales. Theory of Probability & Its Applications 2004, 48, 152–163.
  16. Klenke, A. Probability Theory: A Comprehensive Course; Universitext, Springer London, 2008.
  17. Medvegyev, P. Stochastic Integration Theory; Oxford Graduate Texts in Mathematics, OUP Oxford, 2007.
  18. Musiela, M.; Rutkowski, M. Martingale Methods in Financial Modelling (Stochastic Modelling and Applied Probability), 2nd ed.; Springer, 2011.
  19. Platen, E.; Bruti-Liberati, N. Numerical Solution of Stochastic Differential Equations with Jumps in Finance; Vol. 64, Springer Science & Business Media, 2010.
  20. Protter, P. Stochastic Integration and Differential Equations: Version 2.1 (Stochastic Modelling and Applied Probability); Springer, 2010.
  21. Shiryaev, A.N.; Cherny, A. Vector stochastic integrals and the fundamental theorems of asset pricing. Proceedings of the Steklov Institute of Mathematics 2002, 237, 6–49.
  22. Sohns, M. General Stochastic Vector Integration - Three Approaches. Probability Surveys 2024 (submitted).
Notes

1. We note that c is well-defined and strictly positive, since $\sum_{k \ge 2} \frac{1}{k^2}$ converges.
2. See, for example, [3] Definition 11.6.9 for the definition of a special semimartingale.
3. See, for example, [16] Theorem 1.23.
4. See, for example, [16] Lemma 1.42.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.