
General Stochastic Vector Integration - Three Approaches

Preprint, not peer-reviewed. Submitted: 04 January 2025. Posted: 08 January 2025.


Abstract
The development of stochastic integration theory throughout the 20th century has led to several definitions of, and approaches to, the stochastic integral, particularly for predictable integrands and semimartingale integrators. This survey provides an overview of the two prominent approaches to defining the stochastic integral: the classical approach attributed to Itô, Meyer and Jacod, and the more contemporary functional analytic approach mainly developed by Bichteler and Protter. It also delves into the historical milestones and achievements in this area and analyzes them from a modern perspective. Drawing inspiration from the similarities of the existing approaches, this survey introduces a new topology-based approach to the general vector-valued stochastic integral for predictable integrands and semimartingale integrators. This new approach provides a faster, simpler way to define the general integral and offers a self-contained derivation of its key properties without depending on semimartingale decomposition theorems.

1. Introduction

In the past few decades, the importance of Stochastic Integration has increased immensely in multiple disciplines. This is partially due to the growth of Mathematical Finance, which relies heavily on Stochastic Calculus for pricing and hedging financial derivatives in valuation, risk management and portfolio optimization. Additionally, Stochastic Integration is used in many other fields, such as Engineering, Physics, and Biology. For instance, it is used in filtering and control theory, to study how random perturbations affect physical phenomena, or to model the impact of stochastic variability on population dynamics. Although basic versions of the stochastic integral are usually sufficient for most of the mentioned applications, some cases need a more advanced version. In such scenarios, the stochastic integral should be defined for multidimensional càdlàg processes as integrators, the integral operator should be linear and continuous, and the set of integrands should include all càglàd processes and as many predictable processes as possible. Two exemplary applications of these more general requirements are the two celebrated Fundamental Theorems of Asset Pricing in continuous time for semimartingale models [33,34,69,70,184].
There are two alternative prominent approaches for defining the stochastic integral and deriving its properties.
The first, known as the classical method, traces the historical development of the stochastic integral. It began with its definition for Brownian Motion integrators [79,80,81,82,83,84]. Later, with the aid of the Doob-Meyer decomposition [146], it expanded to square-integrable martingale integrators [86,114,161] and a broader set of integrands [149,150,151,152]. This definition was extended to include locally bounded predictable integrands and general semimartingale integrators [26,89,157]. Jacod [88,89] introduced the general stochastic integral for unbounded predictable integrands that meet specific integrability conditions. J. Jacod's approach centered on the characterization of semimartingale jumps. An alternative method, which provided an equivalent definition of the stochastic integral, was proposed by Chou et al. [26]. One of the compelling attributes of the general semimartingale integral is that the integrands are closed under a specific topology, namely the semimartingale or Émery topology [37,52]. However, when the integral is extended to the multidimensional case via component-wise summation, the space of integrands loses this closedness property. Therefore, the vector integral must be further expanded to regain closedness, which was done by Dellacherie and Meyer [37], Jacod [90], Memin [138], Shiryaev and Cherny [184].
Over time, several simplifications and shortcuts have been introduced to the classical approach. For instance, employing the Fundamental Theorem of Local Martingales has simplified the extension from square-integrable martingales to local martingales and, consequently, to semimartingales [42,155]. However, defining the general stochastic vector integral and its properties is still challenging and entails numerous technical proofs. At some point in the theory, any semimartingale needs to be decomposed into a continuous and a purely discontinuous part. This usually requires more advanced tools, so most authors restrict themselves to continuous integrators. Even in textbooks that treat general semimartingales as integrators, the authors usually only present the case of locally bounded predictable integrands. This simplifies the theory significantly, not only because locally bounded predictable integrands allow for a straightforward application of the monotone class theorem, but also because local martingales are stable under stochastic integration for locally bounded predictable integrands, which avoids the more complicated theory of sigma-martingales.
In contrast to the classical approach, the functional analytic approach presents an alternative. This more recent approach is built on a result by Bichteler [13,14] and Dellacherie [35] (first published in Meyer [156] and inspired by results by Pellaumail [166] and Metivier and Pellaumail [141]), stating that semimartingales form the most general class of integrators for which one can define the stochastic integral for predictable integrands in a way that the integral operator is continuous with respect to specific metrics. This was then further developed by Lenglart [124] and Protter [170,171]. Protter [172] even wrote an entire comprehensive textbook based on the functional analytic approach, covering the results from all the above publications.
The functional analytic approach directly formulates the stochastic integral for one-dimensional semimartingale integrators and càglàd integrands by drawing upon techniques from functional analysis. However, for extending the stochastic integral to general predictable integrands, this approach again relies on the decomposition of semimartingales and utilizes similar, very technical, methods as the classical approach. Also, the functional analytic method does not address the vector-valued scenario. To extend the integral in this context, one must revert to classical methods again.
Owing to these complexities in both approaches, standard textbooks and university lectures rarely cover the general stochastic integral. As a result, topics like the Fundamental Theorems of Asset Pricing and their associated proofs are typically reserved for experts in stochastic analysis.
This survey explores and compares the two approaches, discussing their historical development and highlighting their similarities. The terminology has been slightly adjusted to align with modern terminology, thus avoiding some redundant definitions and notations in an effort to enhance clarity and emphasize common patterns.
Inspired by the commonalities of the two approaches, in particular the idea of defining the stochastic integral through a feasible extension, this paper introduces a new method for the General Stochastic Integral that does not rely on the results of either approach. It leverages the fact that the integral operator is a linear isometry between topological vector spaces, offering a faster alternative to the two established methods. The caveat of this approach is its high level of abstraction, which makes the demonstration of its properties challenging. Notably, similar approaches have previously been utilized by Chou et al. [26], Karandikar and Rao [105], and Assefa and Harms [4].
For the classical and functional analytic approaches, we present the definitions and derivations of the stochastic integral without proving their fundamental properties or results. In contrast, we develop the topological stochastic vector integral independently in a self-contained manner, without relying on previous methods. Furthermore, the survey includes proofs of the basic properties of the stochastic integral.
This survey is organized as follows.
First, we will review some definitions and well-known results from probability theory, topology, and functional analysis that are essential for any definition of the stochastic integral and stochastic analysis in general.
The next chapter will examine the classical approach to stochastic integration. We will, mostly in chronological order, trace the development of the stochastic integral, starting with the classical approach developed by Itô. We will present all definitions and important theorems but adjust the terminology to align with modern approaches to minimize complexity and better highlight the similarities and differences between different methods. Using the Doob decomposition of martingales, we will extend Itô's definition of the stochastic integral to allow for square-integrable martingales as integrators, as Doob, Kunita, Watanabe, and Meyer did. We will then follow the various generalizations of the stochastic integral developed by the Strasbourg school around Paul-André Meyer to arrive at the general vector-valued stochastic integral for semimartingale integrators and predictable integrands.
In the following chapter, we review and comment on the functional analytic approach. This approach can be divided into two parts. The definition of the stochastic integral for càglàd integrands fundamentally differs from the classical approach and offers a definition without relying on the decomposition of semimartingales. This will be covered in the first section of that chapter. The second section deals with the extension to predictable integrands, which is similar to classical methods and relies on the decomposition of semimartingales.
The last chapter introduces a third, completely new approach to the general vector-valued stochastic integral for predictable integrands and semimartingale integrators. To that end, we first provide essential preliminaries and define the required metrics and topologies. We then introduce a definition of semimartingales equivalent to the widely used ones and highlight that the set of semimartingales is a topological vector space with respect to the semimartingale topology. After this, we analyze the essential properties of semimartingales and illustrate them with examples.
Next, we examine the general stochastic integral using the new topological approach. Its definition is straightforward and does not require complex proofs. However, studying the basic properties of the stochastic integral relies heavily on the technically involved proof of the Dominated Convergence Theorem, which can be found in the appendix. We will demonstrate all the properties of the stochastic integral that are typically necessary for applications without relying on the decomposition of semimartingales.
Lastly, we explore the stability of local martingales under stochastic integration, as this is highly relevant in finance, and provide a quick overview of the definition of the quadratic variation consistent with this new approach.
In this survey, we focus on predictable integrands. Although several extensions are known in the literature, and the initial versions of the stochastic integral were not restricted to predictable integrands, we concentrate on them because only predictable integrands allow for very general stochastic integral operators that exhibit specific desirable properties, such as the stability of the sigma-martingale property.

2. Preliminaries

2.1. Definitions and Well-Known Results

We always assume a complete filtered probability space $(\Omega, \mathcal{F}, P)$, equipped with a filtration $\mathbb{F} = (\mathcal{F}_t)_{t \geq 0}$ satisfying the usual conditions. This means:
  • $\mathcal{F}_0$ includes all $P$-null sets from $\mathcal{F}$.
  • The filtration $\mathbb{F} = (\mathcal{F}_t)_{t \geq 0}$ is right-continuous, i.e., $\mathcal{F}_t = \bigcap_{u > t} \mathcal{F}_u$ for all $t \geq 0$.
We use the following definitions and notations:
  • For $p \in [1, \infty)$, define the seminorm
    $$\|X\|_{L^p} := \left( E\left[ |X|^p \right] \right)^{1/p}.$$
    The space $L^p$ is then defined as
    $$L^p := \left\{ X : \Omega \to \mathbb{R} \text{ measurable} ; \ \|X\|_{L^p} < \infty \right\}.$$
  • A stochastic process $X$ on $(\Omega, \mathcal{F}, P)$ is a collection of $\mathbb{R}$-valued or $\mathbb{R}^d$-valued random variables $(X_t)_{t \geq 0}$.
  • A process $X$ is said to be adapted to $\mathbb{F}$ if $X_t$ is $\mathcal{F}_t$-measurable for each $t$.
  • Two stochastic processes $X$ and $Y$ are modifications of each other if $X_t = Y_t$ a.s. for all $t$, and they are indistinguishable if for almost every $\omega \in \Omega$, $X_t(\omega) = Y_t(\omega)$ for all $t$.
  • A stopping time $T$ is a non-negative random variable such that $\{T \leq t\} \in \mathcal{F}_t$ for each $t \geq 0$.
  • For a stopping time $T$, the σ-algebra $\mathcal{F}_T$ of events occurring up to time $T$ is the σ-algebra consisting of those events $A \in \mathcal{F}$ with
    $$A \cap \{T \leq t\} \in \mathcal{F}_t \quad \text{for every } t \geq 0.$$
  • A process X is said to be càdlàg (respectively càglàd) if its paths are right (resp. left) continuous with left (resp. right) limits.
  • The essential supremum of a random variable $X$ is given by
    $$\operatorname*{ess\,sup}_{\omega \in \Omega} X(\omega) = \inf \{ c \in \mathbb{R} ; P(X > c) = 0 \}.$$
  • The function $\mathbf{1}_A$ denotes the indicator function of a subset $A$:
    $$\mathbf{1}_A(x) = \begin{cases} 1 & \text{if } x \in A, \\ 0 & \text{else}. \end{cases}$$
  • Let $S, T$ be stopping times. Then, the sets
    $$\begin{aligned}
    [\![ S, T ]\!] &:= \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ; \ S(\omega) \leq t \leq T(\omega) \}, \\
    [\![ S, T [\![ &:= \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ; \ S(\omega) \leq t < T(\omega) \}, \\
    ]\!] S, T ]\!] &:= \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ; \ S(\omega) < t \leq T(\omega) \}, \\
    ]\!] S, T [\![ &:= \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ; \ S(\omega) < t < T(\omega) \}, \\
    [\![ S ]\!] &:= [\![ S, S ]\!]
    \end{aligned}$$
    are called stochastic intervals.
  • For a stopping time $T$ and a process $X$, $X^T_t := X_{T \wedge t}$ denotes the stopped process.
  • A family of random variables $(U_\alpha)_{\alpha \in A}$ is uniformly integrable if
    $$\lim_{n \to \infty} \sup_{\alpha \in A} \int_{\{ |U_\alpha| \geq n \}} |U_\alpha| \, dP = 0.$$
  • A real-valued, adapted process $X = (X_t)_{0 \leq t < \infty}$ is called a martingale (resp. supermartingale, submartingale) with respect to the filtration $\mathbb{F}$ if:
    (i)
    $X_t \in L^1(P)$;
    (ii)
    if $s \leq t$, then $E[X_t \mid \mathcal{F}_s] = X_s$ a.s. (resp. $\leq X_s$, $\geq X_s$).
  • A process $X$ is a local martingale if there exists a sequence of stopping times $(T_n)_{n \in \mathbb{N}}$ increasing to $\infty$ a.s. such that $X^{T_n}$ is a martingale for each $n$.
  • For a càdlàg process $X$, the jump process $\Delta X$ is defined as $\Delta X_t := X_t - X_{t-}$ with $X_{t-} = \lim_{s \uparrow t} X_s$.
  • For a stochastic process $Y$, the process $Y_-$ is defined to be the process $Z$ that satisfies $Z_t = Y_{t-}$. Hence, one gets $Y_- = Y - \Delta Y$.
  • A property is said to hold locally (resp. prelocally) for a stochastic process $X$ if there exists a localizing sequence of stopping times $(T_n)_{n \in \mathbb{N}}$ such that the property holds for each $X^{T_n}$ (resp. $X^{T_n -}$) almost surely.
  • The predictable σ-algebra is the smallest σ-algebra making all left-continuous, adapted processes measurable. It is denoted by $\mathcal{P}$. A predictable process is a stochastic process $X = (X_t)_{t \geq 0}$ such that the mapping
    $$(t, \omega) \mapsto X_t(\omega)$$
    is measurable with respect to the predictable σ-algebra $\mathcal{P}$ on $\Omega \times [0, \infty)$ and the Borel σ-algebra on $\mathbb{R}$. The set of all predictable processes is also denoted by $\mathcal{P}$.
  • A càdlàg process $A$ is an FV-process if almost all of its paths have finite variation on each compact interval.
Theorem 2.1
([153], § 2). The predictable σ-algebra $\mathcal{P}$ on $\Omega \times [0, \infty)$ can be characterized equivalently in the following ways:
(i)
$\mathcal{P}$ is the smallest σ-algebra such that all left-continuous, adapted processes are measurable.
(ii)
$\mathcal{P}$ is the σ-algebra generated by all continuous, adapted processes.
(iii)
$\mathcal{P}$ is the σ-algebra generated by all right-continuous, adapted processes with left limits (càdlàg processes).
Definition 2.2.
An adapted process $B = (B_t)_{0 \leq t < \infty}$ taking values in $\mathbb{R}^n$ is called an n-dimensional Brownian motion if
(i)
for $0 \leq s < t < \infty$, $B_t - B_s$ is independent of $\mathcal{F}_s$ (increments are independent of the past);
(ii)
for $0 \leq s < t$, $B_t - B_s$ is a Gaussian random variable with mean zero and variance matrix $(t - s)C$, for a given, non-random matrix $C$.
The Brownian motion starts at $x$ if $P(B_0 = x) = 1$.
Theorem 2.3
([172], Theorem I.26). Let B be a Brownian motion. Then there exists a modification of B which has continuous paths almost surely.
We assume that all Brownian motions discussed have continuous paths and, unless stated otherwise, the covariance matrix C is the identity matrix, making it a standard Brownian motion.
Theorem 2.4
([172], Theorem I.12). Let $X$ be a right-continuous martingale which is uniformly integrable. Then $Y = \lim_{t \to \infty} X_t$ exists a.s., $E[|Y|] < \infty$, and one says $Y$ closes $X$ as a martingale.
Theorem 2.5
([172], Theorem I.13). Let $X$ be a (right-continuous) martingale. Then $(X_t)_{0 \leq t < \infty}$ is uniformly integrable if and only if $Y = \lim_{t \to \infty} X_t$ exists a.s., $E[|Y|] < \infty$, and $(X_t)_{0 \leq t \leq \infty}$ is a martingale, where $X_\infty = Y$.
The following lemma is rarely explicitly mentioned in the literature. It is, however, crucial for our approach.
Lemma 2.6
([71], Theorem 7.38). Let $M$ be a local martingale, $T$ a stopping time and $\xi$ a real $\mathcal{F}_T$-measurable random variable. Then
$$N = \xi \left( M - M^T \right)$$
is a local martingale.
The following theorem goes back to Burkholder [23]. Updated versions with notation aligned to stochastic integration and simplified proofs can be found in [153, Theorem 47], [164, Corollaire VIII-3-14] and [49]; see also [175] for similar results.
Theorem 2.7
([164], Corollaire VIII-3-14). Let $(M_t)_{t \in \mathbb{N}}$ be a time-discrete martingale and $(H_t)_{t \in \mathbb{N}}$ a predictable process with $\sup_t |H_t| < 1$ almost surely. For a sequence of time-discrete random times $(T_n)_{n \in \mathbb{N}}$ and $t \in \mathbb{N}$, define
$$(H \cdot M)_t := \sum_{k=1}^{t} H_{k-1} \left( M_k - M_{k-1} \right),$$
where the $H_k$ are $\mathcal{F}_k$-measurable random variables. Then, for any $T \in \mathbb{N}$, we have
$$P\left( \sup_{n \leq T} |(H \cdot M)_n| > \alpha \right) \leq \frac{18}{\alpha} \, E\left[ |M_T| \right]$$
for all $\alpha > 0$. If $M$ is positive, then 18 can be replaced by 9. If $M$ can be closed by a random variable $M_\infty$, we can replace $T$ by $\infty$.
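To make the discrete-time statement concrete, here is a minimal numerical sketch (Python with NumPy; the random-walk setup and the variable names are our own and not part of the cited results). It builds the martingale transform $(H \cdot M)_t = \sum_{k \leq t} H_{k-1}(M_k - M_{k-1})$ for a simple random walk $M$ and a predictable $H$ bounded by $0.9$, checks that the transform remains centered, and compares the empirical tail probability with the (typically very crude) bound $\frac{18}{\alpha} E[|M_T|]$.

```python
import numpy as np

rng = np.random.default_rng(0)

n_paths, T = 50_000, 50
steps = rng.choice([-1.0, 1.0], size=(n_paths, T))                  # martingale increments
M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

# Predictable integrand with |H_k| <= 0.9: H_k depends only on the path up to time k.
H = np.where(M >= 0, 0.9, -0.9)

# Martingale transform (H . M)_t = sum_{k=1}^{t} H_{k-1} (M_k - M_{k-1}).
increments = np.diff(M, axis=1)
HM = np.concatenate([np.zeros((n_paths, 1)),
                     np.cumsum(H[:, :-1] * increments, axis=1)], axis=1)

print("E[(H.M)_T] ~", HM[:, -1].mean())                             # ~ 0: martingale property preserved

alpha = 15.0
lhs = np.mean(np.max(np.abs(HM), axis=1) > alpha)                   # P(sup_{n<=T} |(H.M)_n| > alpha)
rhs = 18.0 / alpha * np.mean(np.abs(M[:, -1]))                      # (18/alpha) E|M_T|
print(f"empirical tail probability {lhs:.4f} <= bound {rhs:.4f}")
```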

2.2. Measurability of Stochastic Processes

For the study of stochastic analysis, stochastic processes and stochastic integration, it is essential to address some fundamental concepts related to the measurability of stochastic processes. Historically, issues with measurability were often treated as an afterthought. Many of the original papers and textbooks on the subject overlook these issues, leaving gaps in their proofs. Here, we give a short overview of the topic and provide the results that close the gaps in the original literature.
In continuous-time stochastic processes, measurability can be more challenging to handle than in discrete-time settings. This is because sets of measure zero, though individually insignificant, can have a positive probability when accumulated over uncountable sets. Before the foundational works of Dynkin [47] and Chung and Doob [27], who introduced the concept of progressive measurability, measurability issues were often neglected. However, the above-mentioned contributions laid the groundwork for handling measurability in more general stochastic processes. In particular, predictable processes, a key subclass of progressively measurable processes, play a crucial role in stochastic integration. The importance of predictable processes later became evident in the work of Courrège [30,31] and Meyer [145,146,149,150,151,152].
We start with the basic definitions.
Definition 2.8.
Let $(X_t)_{t \geq 0}$ be a stochastic process on the filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$.
(a)
It is called jointly measurable or product measurable if the mapping $X : \Omega \times \mathbb{R}_+ \to \mathbb{R}$ is measurable with respect to the product σ-algebra $\mathcal{F} \otimes \mathcal{B}(\mathbb{R}_+)$, where $\mathcal{B}(\mathbb{R}_+)$ denotes the Borel σ-algebra on $\mathbb{R}_+$.
(b)
It is called progressively measurable if the restriction of $X$ to $\Omega \times [0, t]$ is measurable with respect to the product σ-algebra $\mathcal{F}_t \otimes \mathcal{B}([0, t])$, where $\mathcal{F}_t$ is the filtration up to time $t$.
(c)
It is called predictable if it is measurable with respect to the predictable σ-algebra $\mathcal{P}$.
Product measurability ensures that the process can be integrated with respect to both the probability measure on Ω and the Lebesgue measure on R + . This is a crucial property for defining stochastic integrals, as it guarantees that the integrands are well-behaved in both the probabilistic and temporal dimensions. However, product measurability alone does not guarantee that the process is adapted to the filtration.
Progressive measurability is a stronger condition than product measurability because it requires that the measurability holds for every finite time interval, taking into account the information available up to that time. This condition ensures that the values of the process at each time depend only on the information available up to that time, thereby maintaining the non-anticipative nature of the process. Progressive measurability is essential for the stochastic integral over any time interval to be a well-defined random variable. Additionally, if a process ( X t ) t 0 is progressively measurable, then it is necessarily adapted. However, not every adapted process is progressively measurable.
Predictability is an even stronger condition that requires measurability with respect to the predictable σ -algebra, which is generated by left-continuous, adapted processes. Predictable processes are particularly important in stochastic integration because they form the largest class of integrands for which the stochastic integral with respect to semimartingales can be defined in such a way that the integral exhibits certain desirable properties. Every predictable process is progressively measurable, and by extension, product measurable. However, not all progressively measurable processes are predictable, as predictability specifically requires left-continuity or an equivalent structure.
Proofs of these statements can be found in [37, p120ff].
Example. 
Consider the probability space $(\Omega, \mathcal{F}) = ([0, 1], \mathcal{L})$, where $\mathcal{L}$ denotes the σ-algebra of Lebesgue measurable sets within the interval $[0, 1]$. By definition, $\mathcal{L}$ is the completion of the Borel σ-algebra $\mathcal{B}([0, 1])$ with respect to Lebesgue measure. Let $\mathcal{L}_0$ represent the σ-algebra on $[0, 1]$ comprising all sets in $\mathcal{L}$ that have Lebesgue measure 0 or 1. Define the filtration $\mathcal{F}_t := \mathcal{L}_0$ for all $t \geq 0$.
Now, define the set $A \subset [0, \infty) \times \Omega$ as
$$A = \{ (\alpha, \omega) \in [0, \infty) \times \Omega ; \ \omega \in [0, 1/2] \}.$$
Then $A$ belongs to the product σ-algebra $\mathcal{B}([0, \infty)) \otimes \mathcal{F}_t$ for each $t$. However, for any $t > 0$, the intersection
$$A \cap \left( (0, t] \times \Omega \right)$$
does not belong to $\mathcal{B}([0, t]) \otimes \mathcal{F}_t$. Consequently, the indicator function of $A$ is measurable and adapted to the filtration $\mathbb{F}$ but is not $\mathbb{F}$-progressive.
Verifying progressive measurability is inherently more challenging than verifying adaptedness, although there exist simple and useful sufficient criteria. For instance, limits of progressively measurable processes are again progressively measurable, and if $(X_t)_{t \geq 0}$ is an adapted process with right- or left-continuous paths, then it has a progressively measurable modification ([186, Proposition 1.13]). Another important property is that for a progressively measurable process $X$ and a finite stopping time $T$, the random variable $X_T$ is $\mathcal{F}_T$-measurable ([186, Proposition 2.18]).
These definitions are crucial for resolving the measurability issues encountered in the construction of the Itô integral. By ensuring that the process is progressively measurable, we obtain the adaptedness of the process throughout operations such as smoothing. This is because progressive measurability inherently incorporates the adaptedness to the filtration, which is generated by the Brownian motion in the context of the Itô integral.
Theorem 2.9
(Chung-Doob-Meyer [27,147]). Let $X$ be an $\mathbb{R}$-valued measurable stochastic process adapted to a filtration $\mathbb{F}$. Then, $X$ has a measurable $\mathbb{F}$-progressive modification.
The history of this theorem involves significant contributions from several researchers. Initially, [27] proved the existence of a progressively measurable modification under the assumption of separability. Meyer [147] extended this result by removing the separability condition, although his argument was considered complex, demanding and incomplete. The gap in Meyer's proof was later addressed by Dellacherie and Meyer [37], using advanced tools from functional analysis. The theorem plays a pivotal role in the construction of the stochastic integral as it is presented in [186].
Example. 
Theorem 2.9 asserts that for any measurable real-valued process $(X_t)$ adapted to the family of σ-fields $(\mathcal{F}_t)$, there exists a modification of the process that is progressively measurable with respect to the same family $(\mathcal{F}_t)$. If we revisit the example above and define
$$\tilde{A} = \{ (\alpha, \omega) ; \ \omega \in [0, 1/2] \} \subset [0, \infty) \times \Omega,$$
we see that $\tilde{A}$ is now progressively measurable with respect to $(\mathcal{F}_t)$ since for any $t > 0$, $\tilde{A} \cap ((0, t] \times \Omega)$ belongs to $\mathcal{B}([0, t]) \otimes \mathcal{F}_t$, and thus $\tilde{A}$ is adapted.
We summarize the results:
  • Every predictable process is progressively measurable and product measurable.
  • Every progressively measurable process is product measurable, but not necessarily predictable.
  • A product measurable process may not be progressively measurable or predictable. However, for any process that is product measurable and adapted, there exists a modification which is progressively measurable.
Thus, predictability implies progressive measurability, and progressive measurability implies product measurability. Each level of measurability provides increasing levels of structure and guarantees certain desirable properties necessary for defining stochastic integrals.
In this survey, we mostly avoid the aforementioned measurability issues by mainly focusing on predictable processes, which are also necessarily progressively measurable and product measurable. However, for the sake of completeness, these different notions of measurability are included here to provide a comprehensive foundation for the discussion of stochastic integration, especially considering that some of the referenced sources here include the above-mentioned gaps.

3. The Classical Approach

This section outlines the classical approach to stochastic integration and traces its historical development. The goal is to not only review key literature but also to identify the technical challenges encountered throughout its development. Given that similar obstacles are expected in the new topological approach introduced in this work, a discussion of the classical solutions will provide valuable insights.
For a comprehensive account of the historical progression of stochastic processes and integration theory, see [96,119,158,196].
It should be noted that while defining the stochastic integral is typically straightforward, proving more advanced properties — such as the Dominated Convergence Theorem and change of variable formulas — tends to be the more challenging task.

3.1. The Beginning: Wiener and Doeblin

It is generally assumed that Norbert Wiener (1894 – 1964), an American mathematician, was the first to define the stochastic integral
$$\int_0^t f(s) \, dB_s$$
for a Brownian motion $B$ and smooth deterministic functions $f$, defined on the positive half-axis, utilizing the concept of "integration by parts" [165,197,198]. Starting with the "natural" formula $d(fB) = f \, dB + B \, df$, one puts
$$\int_0^t f(s) \, dB_s = f(t) B_t - \int_0^t f'(s) B_s \, ds,$$
where the integral $\int_0^t f'(s) B_s \, ds$ is treated as the trajectory-wise (i.e., for each $\omega \in \Omega$) Riemann integral of the continuous function $s \mapsto f'(s) B_s(\omega)$ defined for $s \geq 0$.
However, since the requirement that $f$ be differentiable and deterministic is too stringent and of little practical use, the idea has received little attention.
The German-French mathematician Wolfgang Doeblin (1915 – 1940), later known as Vincent Doblin, was probably the first to establish an integration theory for non-differentiable integrands, developing a theory quite similar to the one Itô developed independently of Doeblin's findings a few years later. However, Doeblin died during the Second World War while fighting for France, shortly after developing his ideas, and his results remained unknown until the early twenty-first century [21].

3.2. Itô’s Stochastic Integral

From today’s point of view, Kiyosi Itô (1915 – 2008) is mostly considered as the “discoverer” of stochastic integration. He made groundbreaking contributions to the theory in the 1940s and 1950s, independently of earlier work by Wolfgang Doeblin. His work laid the foundation for modern stochastic calculus, culminating in several key publications [79,80,81,82,83,84], which are extensively summarized in [133]. In these papers, Itô rigorously defined the stochastic integral using Brownian motion as the integrator.
Before Itô’s contributions, applied scientists had already been informally working with integrals involving “white noise”, the non-existent derivative of Brownian motion. Itô’s key innovation was to redefine this product of white noise and time differential as the differential of Brownian motion, which he used as an integrator. This method combined aspects of both the Riemann–Stieltjes integral (regarding the integrator) and the Lebesgue integral (regarding the integrand), which formalizes the construction of the stochastic integral.
The motivation behind Itô’s calculus arose from the need to simplify the construction of Markov diffusion processes, which had traditionally been achieved through a more cumbersome multi-step process. This process involved the Hille–Yosida theory, the Riesz representation theorem, and the Kolmogorov extension theorem. Itô’s approach eliminated these steps, which allowed for the direct construction of diffusion processes as solutions to stochastic integral equations. The properties of these processes could then be derived through the stochastic integral equations and the now-famous Itô formula.
In this section, we revisit the original approach by Itô towards stochastic integration and only incorporate minor adjustments to align the method with more general and contemporary frameworks; for example, we employ left-continuous simple functions rather than right-continuous ones as was presented by Itô, which subsequently leads to the development of the theory of predictable integrands as opposed to optional integrands. This adjustment allows us to eliminate the assumption that the underlying filtration is quasi-left continuous, thereby harmonizing with the modern setup of stochastic integration. While this alteration has minimal impact on the stochastic integral as defined by Itô, it significantly enhances the framework for further extensions of the stochastic integral, which do not seamlessly integrate with Itô’s original setup. The reason is that we want to present a coherent theory in this survey, highlighting the similarities and key differences between the different approaches.
Additionally, we extend the domain of the stochastic integral from the interval [ 0 , 1 ] to [ 0 , ) . The original interval can always be recovered by appropriately adjusting the integrand. We avoid defining the stochastic integral over the entire axis ( , ) , as was done by Doob [44] because it has proven to be irrelevant for most practical applications.
In addition, we’ve standardized the notation and terminology to align with modern conventions and recent approaches, helping to avoid inconsistent definitions and notations. We’ve also tackled the issue of measurability right from the start since it has often been a point of confusion and the cause of incomplete proofs in the past (see Section 2.2).
Itô’s objective was to develop a definition for the stochastic integral
$$I_t(H) := (H \cdot B)_t := \int_0^t H(s, \omega) \, dB_s$$
that imposed minimal limitations on the process $H$. Since Brownian motion trajectories ($P$-a.s.) have unbounded variation, an application of the Banach-Steinhaus theorem demonstrates that the integral
$$I_t(H) = (H \cdot B)_t = \int_0^t H(s, \omega) \, dB_s$$
cannot be defined as the trajectory-wise Lebesgue-Stieltjes integral (see, for example, [172, Theorem I.29] and Section I.8 in the same book). Itô proposed defining it as the limit (in an appropriate probabilistic sense) of integrals $I_t(H^{(n)})$ of suitable functions $H^{(n)}$ for which a natural stochastic integral definition exists. This technique was later used for all extensions of the stochastic integral as well. Itô began by defining the stochastic integral for elementary predictable processes.
Definition 3.1.
An elementary predictable process $H : \Omega \times [0, \infty) \to \mathbb{R}$ is defined as
$$H(t, \omega) = H_0(\omega) \mathbf{1}_{\{0\}}(t) + \sum_{i=1}^{n} H_i(\omega) \mathbf{1}_{(t_{i-1}, t_i]}(t),$$
where each $H_i(\omega)$ is an $\mathcal{F}_{t_{i-1}}$-measurable, bounded random variable, and $0 \leq t_i < t_{i+1} < \infty$ for all $i$. The space of elementary predictable processes is denoted by $\mathcal{E}$.
Remark 1.
Using stopping times $T_i$ instead of deterministic times $t_i$ yields simple predictable processes. These are used, for example, in the functional analytic approach to define the stochastic integral. However, both approaches yield the same stochastic integral, as both elementary and simple predictable processes span the space of predictable processes (see Theorem 2.1 or, for more elaboration, [28,176]). Nevertheless, in the literature, the definition of simple predictable processes is often inconsistent. We stick to the most widely used definition.
Given a Brownian motion $B$ and an elementary predictable process $H$ adapted to the filtration generated by $B$, the integral $J^B(H)$ is defined as follows:
Definition 3.2.
Let $H \in \mathcal{E}$ be represented as
$$H(t, \omega) = H_0(\omega) \mathbf{1}_{\{0\}}(t) + \sum_{i=1}^{n} H_i(\omega) \mathbf{1}_{(t_{i-1}, t_i]}(t).$$
We define the stochastic integral $J^B(H)_t$ by
$$J^B_t(H) := (H \cdot B)_t := \int_0^t H_s \, dB_s = H_0 B_0 + \sum_{i=1}^{n} H_i \left( B_{t_i \wedge t} - B_{t_{i-1} \wedge t} \right)$$
and
$$J^B(H)_\infty = H_0 B_0 + \sum_{i=1}^{n} H_i \left( B_{t_i} - B_{t_{i-1}} \right).$$
The stochastic integral, as given above, offers a direct interpretation within financial theory. Consider a process $B$ that represents the price of a financial asset. If an investor rebalances their holdings at the deterministic times $t_i$, maintaining $H_i(\omega)$ shares during the interval $(t_{i-1}, t_i]$, then the value of $J^B_t(H)$ represents the investor's accumulated wealth at time $t$, which may be positive or negative depending on market movements.
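As a toy illustration of Definition 3.2 and of this trading interpretation, the following sketch (Python with NumPy; the grid, rebalancing dates and trading rule are our own choices) simulates one Brownian path and evaluates $J^B_t(H)$ on the whole grid for an elementary predictable integrand, i.e., it simply carries out the bookkeeping $\sum_i H_i (B_{t_i \wedge t} - B_{t_{i-1} \wedge t})$.

```python
import numpy as np

rng = np.random.default_rng(1)

# One Brownian path on a fine grid over [0, 1].
n = 10_000
t = np.linspace(0.0, 1.0, n + 1)
dt = t[1] - t[0]
B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

# Deterministic rebalancing times 0 = t_0 < t_1 < ... < t_4 = 1.
rebal = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def value_at(time):
    """B evaluated at an arbitrary time (linear interpolation on the simulated grid)."""
    return np.interp(time, t, B)

# (H . B)_t = sum_i H_i (B_{t_i ^ t} - B_{t_{i-1} ^ t}); since B_0 = 0, the H_0 B_0 term vanishes.
HB = np.zeros_like(t)
for i in range(1, len(rebal)):
    # H_i must be known at t_{i-1}; here: hold one share if B_{t_{i-1}} >= 0, else short one share.
    H_i = 1.0 if value_at(rebal[i - 1]) >= 0 else -1.0
    HB += H_i * (value_at(np.minimum(rebal[i], t)) - value_at(np.minimum(rebal[i - 1], t)))

print("terminal wealth (H . B)_1 =", HB[-1])
```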
Remark 2.
One of the key innovations in the definition of the stochastic integral, as given above, is that the integral is defined in a “non-anticipating” way. This construction is non-anticipating in two key aspects. Firstly, the integrand is evaluated at the start of each interval, ensuring that the integration process does not depend on future values. Secondly, the evaluated values H t depend solely on the past observations of the Brownian path up to time t, aligning with the term “adapted” that has become common in the literature. It should be re-emphasized that the Itô integral does not exist, in general, for non-adapted integrands. It does, however, exist for non-predictable or optional integrands (for discontinuous integrators, one usually needs the additional assumption of predictability for the stochastic integral to exist and exhibit the properties one would expect from a stochastic integral).
Remark 3.
As mentioned above, in his original work, Itô considers step functions that are right-continuous. However, in the context presented here, we use elementary predictable processes, which are by definition left-continuous. For integrators that are continuous processes, such as Brownian motion, the choice between right and left continuity of the step functions used in the approximations does not affect the outcome of the integral. Yet, when extending the theory to include integrators with discontinuities, right-continuous step functions do not behave as seamlessly. In particular, in order to ensure that stochastic integrals with respect to martingales do not exhibit any drift, the integrands must be approximable by left-continuous elementary predictable processes. Therefore, even though the distinction is inconsequential for integrals with respect to Brownian motion, we opt for left-continuous functions to maintain consistency with the broader framework. While the use of elementary predictable processes ultimately leads to the concept of the predictable σ-algebra, the use of right-continuous step functions would lead to optional processes. Even though a stochastic integration theory has also been developed for optional processes, it is considered less important. Itô also adjusted his approach in later publications and switched to left-continuous elementary predictable processes [85].
For the extension to a larger class of integrands, Itô's central idea was to define appropriate metrics on the set of elementary predictable processes and on a subspace of the space of random variables in such a way that the mapping sending an elementary predictable process $H$ to its stochastic integral $J^B(H)$ becomes an isometry, and then to extend this isometry to the closure of the set of elementary predictable processes via the well-known extension theorems. This technique has remained the same for all extensions of the stochastic integral and is therefore crucial.
Itô considered a specific metric on the space of adapted functions H that satisfy certain integrability conditions.
Definition 3.3.
(a) For any jointly measurable and adapted process $H$, we define the seminorm $\|\cdot\|_{\text{Itô}}$ by
$$\|H\|_{\text{Itô}}^2 := E\left[ \int_0^\infty H^2(s, \omega) \, ds \right].$$
(b) By $\mathcal{E}_{\text{Itô}}$, we denote the space of elementary predictable processes endowed with the seminorm $\|\cdot\|_{\text{Itô}}$.
The following result can be demonstrated by leveraging the fact that the integrands are adapted to the filtration of the Brownian motion. As a result, the independence of the Brownian motion’s increments allows us to establish the L 2 -isometry for elementary predictable processes.
Lemma 3.4
([79], Theorem 2.1). The mapping
$$J_{\text{Itô}} : \mathcal{E}_{\text{Itô}} \to L^2(\Omega, \mathcal{F}, P), \qquad H \mapsto (H \cdot B)_\infty$$
is an isometry. Specifically,
$$\|H\|_{\text{Itô}}^2 = E\left[ \int_0^\infty H(s, \omega)^2 \, ds \right] = E\left[ (H \cdot B)_\infty^2 \right] = \left\| J_{\text{Itô}}(H) \right\|_{L^2}^2.$$
In order to extend the stochastic integral from $\mathcal{E}$ to its closure $\overline{\mathcal{E}}$, we need to specify a space in which the subspace $\mathcal{E}$ is to be closed. We follow Itô's original approach and choose
$$\mathcal{E}^0 := \{ H ; \ H \text{ jointly measurable, adapted, and } \|H\|_{\text{Itô}} < \infty \}.$$
Let $\overline{\mathcal{E}}$ denote the closure of $\mathcal{E}$ in $\mathcal{E}^0$ with respect to $\|\cdot\|_{\text{Itô}}$.
Definition 3.5.
The continuous linear mapping $\overline{J_{\text{Itô}}}$ from $\overline{\mathcal{E}}$ to $L^2(\Omega, \mathcal{F}, P)$, obtained as the extension of $J_{\text{Itô}}$ from the elementary predictable processes to $\overline{\mathcal{E}}$ by continuity, is called the stochastic integral. That is, for $H \in \overline{\mathcal{E}}$, let $(H^n)_{n \in \mathbb{N}}$ be a sequence of elementary predictable processes in $\mathcal{E}$ that converges to $H$ in $\|\cdot\|_{\text{Itô}}$. Then the stochastic integral $\overline{J_{\text{Itô}}}(H)$, denoted by $H \cdot B$, is defined as the $L^2$ limit of the stochastic integrals of the approximating sequence, i.e.,
$$H \cdot B = \lim_{n \to \infty} H^n \cdot B,$$
where the convergence is in the $L^2(\Omega, \mathcal{F}, P)$ sense. For each $t \geq 0$, by setting
$$(H \cdot B)_t = \left( H \mathbf{1}_{[0, t]} \right) \cdot B,$$
we obtain a well-defined stochastic process, which we call the stochastic integral of $H$ with respect to $B$.
In his first works, Itô initially defined the stochastic integral for integrands within the topological closure of the elementary predictable processes, providing an abstract characterization of the possible integrands. With today's knowledge, it is evident that the canonical choice of integrands is the space of predictable processes with finite $L^2$-norm. The concept of predictable processes, however, had not yet been formulated at the time of Itô's early work. Subsequent literature typically identifies the integrands as the set of adapted processes $H$ satisfying $\|H\|_{\text{Itô}}^2 < \infty$, with additional considerations necessary to ensure measurability, as discussed in [28,96,101,133].
In the definition above, the set of potential integrands is given as the closure of the elementary predictable processes with respect to the Itô seminorm, which provides a somewhat abstract characterization. The following theorem offers concrete conditions for describing these integrands more explicitly.
Theorem 3.6
([85], Theorem 5.2.1). Let $L^2$ be the space of jointly measurable, adapted processes with
$$E\left[ \int_0^\infty H^2(s, \omega) \, ds \right] < \infty.$$
Then $\overline{\mathcal{E}} = L^2$. That is, $L^2$ is a complete metric space, and the space of elementary predictable processes $\mathcal{E}$ is dense in $L^2$.
For the proof of Theorem 3.6, which is often combined with the proof of the Itô isometry (Theorem 3.7, see below), formal measurability arguments are important but often neglected (see Section 2.2). Here, a possible issue is the potential loss of adaptedness of processes during certain operations, such as smoothing, which involve uncountably many values of the time parameter.
The proof of Theorem 3.6, and therefore ultimately also of the Itô isometry, relies on approximating the stochastic process X by a sequence of elementary predictable processes $X^n$ that are adapted to the filtration generated by the Brownian motion. These approximations are achieved by truncating X and smoothing the truncated versions. The Itô isometry is then demonstrated for these elementary predictable processes, and by taking limits, it is extended to the original process X. The smoothing operation, essential for making the process pathwise continuous, generally does not preserve the adaptedness of the process due to its reliance on an uncountable set of time values. This problem directly affects the applicability of the Itô isometry and, consequently, the construction of the Itô integral. In the literature, these issues have, for example, been neglected in [133] but also in the more recent [202]; see also the discussions in [28,96].
One proposed solution to ensure the measurability is to assume that the process X is separable [56], although this approach is not widely adopted in the literature. The more common approach is to employ the condition of progressive measurability, which is slightly stronger than adaptedness and preserves the condition of adaptedness throughout the smoothing operation, thus ensuring the applicability of the Itô isometry in the construction of the Itô integral [101]. With Theorem 2.9, one obtains the general result.
By Theorem 3.6, the Itô isometry can be naturally extended to $\overline{\mathcal{E}} = L^2$. For completeness, we state the result again.
Theorem 3.7
(Itô Isometry). Let $J^B$ be the integral operator as previously defined. When $L^2$ is equipped with the extension of the seminorm $\|\cdot\|_{\text{Itô}}$, denoted $\|\cdot\|_{L^2}$, then $J^B$ is an isometry from $L^2$ into $L^2(\Omega, \mathcal{F}, P)$. Specifically, for any $H \in L^2$, it holds that
$$\|H\|_{L^2}^2 = E\left[ \int_0^\infty H(s, \omega)^2 \, ds \right] = E\left[ \left( J^B(H) \right)^2 \right] = \left\| J^B(H) \right\|_{L^2}^2.$$
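The isometry lends itself to a direct Monte Carlo sanity check. The sketch below (Python with NumPy; the discretization is our own) approximates $H \cdot B$ for the adapted integrand $H_s = B_s$ on $[0, 1]$ by left-endpoint sums, mimicking the approximation by elementary predictable processes, and compares $E[(H \cdot B)^2]$ with $E\left[\int_0^1 B_s^2 \, ds\right] = 1/2$; small discretization and sampling errors are expected.

```python
import numpy as np

rng = np.random.default_rng(2)

n_paths, n_steps = 20_000, 500
dt = 1.0 / n_steps

# Brownian increments and paths B_{t_k}, k = 0, ..., n_steps, on [0, 1].
dB = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

# Integrand H_s = B_s, evaluated at the LEFT endpoint of each subinterval (predictable evaluation).
H_left = B[:, :-1]

stoch_int = np.sum(H_left * dB, axis=1)             # approximates (H . B) = int_0^1 B_s dB_s
lhs = np.mean(stoch_int ** 2)                       # E[(H . B)^2]
rhs = np.mean(np.sum(H_left ** 2 * dt, axis=1))     # E[int_0^1 B_s^2 ds], exact value 1/2

print(f"E[(H.B)^2] ~ {lhs:.4f},   E[int H^2 ds] ~ {rhs:.4f}   (both should be close to 0.5)")
```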
The set of potential integrands can be extended further via localization (see, for example, Klenke [109, p. 642 ff] or Kuo [118, Chapter 5]). However, at the time of Itô’s first articles, the concept of localization was unheard of. We will touch on this topic later.

3.3. The Decomposition of Martingales and Doob’S Work

Itô’s research was limited to Brownian motions as integrators. The next necessary generalization from today’s point of view is to extend the integral to square-integrable martingales as integrators and then via localization to locally square-integrable martingales (which include continuous local martingales). The key objective is to establish metrics analogous to those used for Brownian motion, facilitating an Itô isometry-like result, which would permit the extension of the integral from elementary predictable integrands to a broader class. At the time of Itô’s early research, the concept of local martingales was not yet known, and insights into martingales and their decomposition were still undeveloped. Furthermore, it was necessary to overcome various measurability challenges, particularly for potential classes of integrands. Here’s a brief outline of these developments.
In his book [44] and his treatment of Itô’s results, Doob highlights three essential properties of Brownian motion that were crucial in Itô’s construction of the stochastic integral. These properties are:
  • The increments of Brownian motion are orthogonal. This means that for a stochastic process $X$ and times $s_1 < s_2 \leq t_1 < t_2$, the increments satisfy
    $$E\left[ (X_{t_2} - X_{t_1})(X_{s_2} - X_{s_1}) \right] = 0.$$
    This orthogonality simplifies the construction of the stochastic integral for processes with such orthogonal increments.
  • For a Brownian motion $B$, both $B_t$ and $(B_t^2 - t)_{t \geq 0}$ are martingales. This observation led Doob to propose extending the stochastic integral to a broader class of martingales.
  • Brownian motion has independent increments, a fundamental property used to justify its stochastic process structure.
Using these properties, Doob was able to extend the construction of the stochastic integral to processes with orthogonal increments. By exploiting the orthogonality of the increments, Doob could show that integrals of functions against processes with such increments can be expressed as sums over disjoint time intervals, much like in the Riemann-Stieltjes integral, which greatly simplified the extension.
The second key observation (the martingale properties of B t 2 t and B t ) motivated Doob to develop a stochastic integral with respect to L 2 -martingales. However, Doob’s construction hinged on decomposing the square of an L 2 -martingale, which was not fully proven at the time.
Doob also examined martingales $M$ for which there exists a deterministic, increasing function $F$ such that $M_t^2 - F(t)$ is again a martingale. He generalized the conditions on the integrands from Equation (2) to
$$\int_0^\infty E\left[ H_t^2 \right] dF(t) < \infty,$$
which allowed him to define the stochastic integral for a broader class of integrands.
Doob extended Itô’s isometry Theorem 3.7 to the following relationship:
E 0 t H s d M s 2 = E 0 t H s 2 d F ( s ) ,
which sufficed to define the stochastic integral similarly to Itô’s construction. Moreover, he demonstrated that the martingale property is preserved under this form of stochastic integration [44].
However, a critical challenge was whether, for every martingale $M$, such a function $F$ exists. To address this, Doob established the now-famous Doob decomposition theorem for submartingales in discrete time: if $(X_n)$ is a discrete-time submartingale, it can be uniquely decomposed as $X_n = M_n + A_n$, where $M$ is a martingale and $A$ is a predictable process with $A_0 = 0$ and non-decreasing paths.
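In discrete time the decomposition is entirely constructive: $A_n = \sum_{k=1}^{n} E[X_k - X_{k-1} \mid \mathcal{F}_{k-1}]$ and $M_n = X_n - A_n$. The snippet below (Python with NumPy; the toy example is ours) carries this out for the submartingale $X_n = S_n^2$, where $S$ is a simple symmetric random walk; here the conditional increments equal $1$, so the compensator is simply $A_n = n$, and the martingale property of $M = X - A$ can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(3)

n_paths, N = 100_000, 20
steps = rng.choice([-1.0, 1.0], size=(n_paths, N))
S = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

X = S ** 2                                   # submartingale X_n = S_n^2

# E[X_k - X_{k-1} | F_{k-1}] = E[(S_{k-1} + e_k)^2 - S_{k-1}^2 | F_{k-1}] = E[e_k^2] = 1,
# so the predictable, non-decreasing compensator is A_n = n (deterministic in this example).
A = np.arange(N + 1, dtype=float)
M = X - A                                    # candidate martingale part of the Doob decomposition

# Sanity check of the martingale property: E[M_n] should stay at M_0 = 0 for all n.
print("E[M_n] for n = 0, ..., 5:", np.round(M.mean(axis=0)[:6], 4))
```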
Since $M^2$ is a submartingale for any martingale $M$, Doob sought to establish a continuous-time analogue of this decomposition to further generalize the stochastic integral to more martingales. Contributions to this topic were made by Gilbert Agnew Hunt in his papers on Markov processes [74,75,76]. Hunt's work provided key insights into continuous-time decompositions.
Then, in 1962, the French mathematician P. A. Meyer made significant advancements in the theory of stochastic integrals with two important papers [145,146]. These papers contributed in three key ways:
  • Establishing the Doob decomposition for certain subclasses of continuous-time submartingales, under a condition known as “Class (D)” in honor of Doob [145]. He later even proved the uniqueness of this decomposition in [146].
  • Introducing a localization procedure that paved the way for further developments and for defining and applying local martingales.
  • Investigating the structure of L 2 -martingales using Hilbert space methods, which laid the foundation for important future work [86,114,161].
In the closing remarks of his paper, Meyer also suggested further extensions but also challenges that may arise:
Define the norm of a well-adapted step process as this last integral. The preceding relation allows the extension of the stochastic integral by standard Hilbert-space methods to all well-adapted (jointly measurable) processes which have a finite norm and are equal to the limit of a convergent sequence (in the sense of the norm just defined) of well-adapted step processes. Among them are all right continuous well-adapted processes, but it seems hard to show (though it is certainly true) that the full class of well-adapted processes whose norm is finite has been attained by this procedure.
While Meyer’s suggestion mentioned the possible extension of the stochastic integral, a mathematically rigorous definition was not provided. Furthermore, subsequent research by Leonard Ornstein and examples presented by Guy Johnson and Lester Helms, especially those involving three-dimensional Brownian motion, showed that not all submartingales belong to Class (D). This observation suggested that Doob’s and Meyer’s approach might not encompass all martingales as integrators [97].
The formal definition of the “class (D)” is as follows.
Definition 3.8.
A càdlàg supermartingale $Z$ with $Z_0 = 0$ is of Class (D) if the collection $\{ Z_T : T \text{ a finite-valued stopping time} \}$ is uniformly integrable.
With modern terminology, the key result of Doob was the following theorem.
Theorem 3.9
(Doob-Meyer Decomposition [172], Theorem III.8). Let $Z$ be a càdlàg supermartingale of class (D). Then, there exists a unique, increasing, predictable process $A$ with $A_0 = 0$ and a uniformly integrable martingale $M$, such that
$$Z = M - A.$$
In the foundational works listed above, Meyer established a decomposition such as the one described, wherein $A$ is a natural process. The contemporary version of this formulation was shown by Doléans [39,40], who demonstrated that an increasing process is natural if and only if it is predictable. Alternative proofs of Theorem 3.9 were presented by Bass [6], Rao [173] and Jakubowski [93].
Rao employed the $\sigma(L^1, L^\infty)$-topology and utilized the Dunford-Pettis compactness criterion to achieve the sought-after continuous-time decomposition as a weak-$L^1$ limit from discrete approximations. To establish the predictability of $A$, one subsequently uses the theorem by Doléans-Dade.
Bass provided a more fundamental proof, which hinges on the dichotomy between predictable and entirely inaccessible stopping times.
Jakubowski, following a similar approach to Rao, recognized that the predictability of the process $A$ could also be deduced by applying Komlós' lemma [111].
A concise and modern proof can be found in the recent paper [10], which combines ideas from Jakubowski [93] and Beiglböck et al. [8] to establish the continuous-time decomposition using a suitable Komlós-type lemma.
Even before a general proof of the Doob decomposition for martingales that are not of class (D) was discovered, Philippe Courrège contributed two critical findings to the development of the stochastic integral [30,31]. He focused on processes whose paths are right-continuous with left limits a.s. and whose jumps are only allowed to occur at totally inaccessible stopping times, that is, jumps that happen "totally unannounced".
When decomposing a submartingale $M^2 = N + A$ into its Doob-Meyer decomposition, assuming the filtration is quasi-left continuous, the process $A$ is guaranteed to have continuous sample paths. This is precisely the process appearing in the $L^2$-isometry
$$E\left[ \left( \int_0^t H_s \, dM_s \right)^2 \right] = E\left[ \int_0^t H_s^2 \, dA_s \right].$$
This assumption, as made by Courrège, simplifies the theory significantly by allowing the use of continuous paths for the finite variation process $A$.
Also, Courrège assumed that the integrands have left-continuous paths and investigated processes that are measurable with respect to the σ-algebra they generate on $\mathbb{R}_+ \times \Omega$, referring to them as "very well adapted" processes. This laid the groundwork for what would later be formalized as the predictable σ-algebra, a concept that would be fully developed and utilized by P. A. Meyer to rigorously define the space of integrands for stochastic integration.
Courrège’s assumption of continuous paths in A turned out to be highly beneficial in simplifying the theory and clarifying the structure of martingales.

3.4. The Stochastic Integral for Square-Integrable Martingales

So far, the stochastic integral has been defined primarily for Brownian motion as the integrator. Courrège extended this concept by incorporating additional continuity assumptions on the underlying filtration and utilizing Doob’s generalization of the Itô integral, along with the Doob-Meyer decomposition, which provided a “recipe” for defining the stochastic integral with respect to martingales of class (D). The next advancement was to extend the stochastic integral to square-integrable martingales, or L 2 -martingales. This development took place in the 1960s, driven mainly by the works of Itô [86], Kunita and Watanabe [114], Motoo and Watanabe [161], who extended the class of possible integrators, and Meyer [149,150,151,152], who explored the potential class of integrands and relaxed some assumptions on the filtration.
The former three works ([86,114,161]) not only marked a significant milestone in the evolution of stochastic integration but also contributed notably to the theory of martingales and stochastic processes: Itô's paper on the transformation of Markov processes by multiplicative functionals laid the foundation for further studies in stochastic integrals and transformations. Motoo and Watanabe investigated the structure of square-integrable additive functionals of Markov processes and introduced a random inner product that facilitated the definition and analysis of stochastic integrals. Finally, Kunita and Watanabe extended the framework to more general settings involving càdlàg adapted processes and general filtrations. This provided a crucial foundation for the construction and characterization of stochastic integrals and greatly influenced modern martingale theory. Combined, their primary contributions to stochastic integration were:
  • Establishing the notion of local martingales, providing a localized version of the Doob-Meyer decomposition and a localized version of the stochastic integral.
  • Discovering the Hilbert space structure of the space of L 2 -martingales.
  • Defining a new approach to the stochastic integral, which treats it as an operator on L 2 -martingales.
While the notion of local martingales was first introduced by Itô [86], we will first discuss the novel approach before extending the stochastic integral through localization. But we start with the definition of square-integrable martingales and their immediate consequences.
Definition 3.10.
Let $M$ be a martingale. If $E[M_t^2] < \infty$ for all $t$, we call $M$ a square-integrable martingale. If $\sup_t E[M_t^2] = E[M_\infty^2] < \infty$, then we call $M$ an $L^2$-martingale. The space of $L^2$-martingales is denoted by $\mathcal{M}^2$.
Even though the term "square-integrable martingale" is sometimes used in the literature for what we define as an $L^2$-martingale, the condition for $L^2$-martingales on the infinite interval $[0, \infty)$ is stricter than for square-integrable martingales. For instance, an $L^2$-martingale is uniformly integrable and belongs to class (D). However, this distinction has little impact on the development of the stochastic integral, as every locally square-integrable martingale is also a local $L^2$-martingale. Nevertheless, $L^2$-martingales are stable under stochastic integration, a property not generally valid for square-integrable martingales.
A direct consequence of the Doob-Meyer decomposition theorem is the existence of the sharp-bracket process (also known as the conditional quadratic variation or predictable quadratic variation). For an $L^2$-martingale $X$, the process $X^2$ is a submartingale. By the Doob-Meyer theorem, there exist processes $M$ and $A$ such that $X^2 = M + A$, where $A$ is an increasing, predictable process and $M$ is a local martingale. The process $A$ is then defined as the sharp-bracket process.
Theorem 3.11
([149], Théorème 3). Let $X$ be an $L^2$-martingale. There exists a unique, increasing, predictable process $\langle X, X \rangle$ with $\langle X, X \rangle_0 = 0$, such that $X^2 - \langle X, X \rangle$ is a martingale. The process $\langle X, X \rangle$ is called the sharp-bracket process or predictable quadratic variation. If $Y$ is another $L^2$-martingale, the conditional quadratic covariation of $X$ and $Y$ is defined as
$$\langle X, Y \rangle = \frac{1}{2} \left( \langle X + Y, X + Y \rangle - \langle X, X \rangle - \langle Y, Y \rangle \right).$$
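Two standard examples may help fix ideas. For a standard Brownian motion $B$, the process $B_t^2 - t$ is a martingale, so $\langle B, B \rangle_t = t$. For a compensated Poisson process $M_t = N_t - \lambda t$ with intensity $\lambda$, the process $M_t^2 - \lambda t$ is a martingale, so $\langle M, M \rangle_t = \lambda t$; note that $\langle M, M \rangle$ is continuous even though $M$ itself jumps.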
Remark 4.
The predictable covariation process $\langle X, Y \rangle$ can be defined even if the predictable quadratic variation $\langle X, X \rangle$ does not exist. This is because $\langle X, Y \rangle$ can be defined as the compensator of the optional covariation process $[X, Y]$, which will be covered later. Specifically, $\langle X, Y \rangle$ is the unique predictable process such that $[X, Y] - \langle X, Y \rangle$ is a local martingale.
Remark 5.
From the definition, it is easy to see that for $X, Y \in \mathcal{M}^2$, we have
$$E[X_\infty Y_\infty] = E\left[ \langle X, Y \rangle_\infty \right].$$
The existence of the sharp-bracket process made it possible to define an inner product on the space of $L^2$-martingales, which was used for the new approach. The approach of Itô, Doob and Courrège is to define an $L^2$-isometry in order to extend the stochastic integral from a basic class of integrands to a wider class, so that the stochastic integral $L = H \cdot M$ is usually defined as a unique continuous extension, often expressed as a limit $L = H \cdot M = \lim_{n \to \infty} H^n \cdot M$ for an appropriate sequence $H^n$, with respect to suitable metrics. The new approach by Kunita and Watanabe [114], Motoo and Watanabe [161] and later Meyer [149,150,151,152] is to view the stochastic integral as an operator on $L^2$-martingales, exploiting the Hilbert space structure of the space of $L^2$-martingales, which is given by this newly defined inner product (see Theorem 3.19). They defined orthogonality and the stochastic integral operator such that orthogonality is preserved under stochastic integration. That is, the stochastic integral $L = H \cdot M$ is the unique square-integrable martingale $L$ that satisfies
$$E[L_\infty N_\infty] = E\left[ \int_0^\infty H_s \, d\langle M, N \rangle_s \right]$$
for every square-integrable martingale $N$ (see Theorem 3.19).
This approach was later extended to more general stochastic integrals [41] and even the vector-valued stochastic integral [29]. However, to maintain consistency, we present the definition of the stochastic integral for square-integrable martingales in a way that is aligned with Itô’s original approach. The equivalence of these approaches is straightforward to demonstrate.
Remark 6.
Similar to Itô’s initial approach, Motoo and Watanabe first defined the stochastic integral for bounded, adapted càdlàg processes as integrands, then extended this class of processes, ending up with optional processes as integrands. This can lead to contradictions when dealing with non-continuous processes with finite variation. Hence, optional processes for integrands became unpopular. We, therefore, consider only predictable processes as integrands, an adjustment later made by Meyer.
Definition 3.12.
Let $M \in \mathcal{M}^2$ and $H \in \mathcal{E}$ be represented as
$H_t = H_0 \mathbf{1}_{\{0\}}(t) + \sum_{i=1}^n H_i \mathbf{1}_{(t_{i-1}, t_i]}(t) .$
We define the stochastic integral $J^M(H)_t$ by
$J_t^M(H) := (H \cdot M)_t := \int_0^t H_s \, dM_s = H_0 M_0 + \sum_{i=1}^n H_i \left(M_{t_i \wedge t} - M_{t_{i-1} \wedge t}\right)$
and
$J^M(H) = H_0 M_0 + \sum_{i=1}^n H_i \left(M_{t_i} - M_{t_{i-1}}\right) .$
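To make the elementary integral concrete, the following short numerical sketch (our own illustration, not part of the cited constructions) evaluates $J_t^M(H)$ along a single simulated Brownian path; the grid, the deterministic times $t_i$ and the values $H_i$ are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated martingale path: standard Brownian motion on a fine grid (illustrative choice).
T, n_steps = 1.0, 1_000
grid = np.linspace(0.0, T, n_steps + 1)
M = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n_steps), n_steps))])

def elementary_integral(H_values, times, M, grid, t):
    """(H . M)_t = H_0 M_0 + sum_i H_i (M_{t_i ^ t} - M_{t_{i-1} ^ t})
    for H = H_0 1_{{0}} + sum_i H_i 1_{(t_{i-1}, t_i]}."""
    def M_at(s):                       # value of M at the last grid point <= min(s, t)
        return M[np.searchsorted(grid, min(s, t), side="right") - 1]
    total = H_values[0] * M[0]         # H_0 M_0 term
    for i in range(1, len(times)):
        total += H_values[i] * (M_at(times[i]) - M_at(times[i - 1]))
    return total

# Hypothetical elementary predictable integrand with deterministic step times t_0 < ... < t_n.
times    = [0.0, 0.25, 0.5, 0.75, 1.0]
H_values = [0.0, 1.0, -2.0, 0.5, 3.0]   # bounded values H_0, H_1, ..., H_n

print(elementary_integral(H_values, times, M, grid, t=0.8))
```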
Similar to the other approaches, the definition of the stochastic integral for square-integrable martingales is straightforward. The actual work lies in finding suitable metrics, demonstrating an Itô-like isometry and showing that the space of elementary predictable processes is dense in a suitable space of integrands.
Definition 3.13.
Let $M \in \mathcal{M}^2$.
(a)
For $H \in \mathcal{P}$, we define
$\|H\|_M^2 := \mathbb{E}\left[\int_0^\infty H_s^2 \, d\langle M, M\rangle_s\right] .$
(b)
By $\mathcal{E}_M$, we denote the space of elementary predictable processes endowed with the seminorm $\|\cdot\|_M$.
(c)
We define
$L^2(M) := \{ H \in \mathcal{P} \, ; \, \|H\|_M^2 < \infty \} .$
It is indeed easy to check that $\mathcal{E} \subseteq L^2(M)$.
Lemma 3.14
([29], Lemma 12.1.4). By endowing $\mathcal{M}^2$ with the seminorm defined by $\|M\|_{\mathcal{M}^2} := \mathbb{E}\left[M_\infty^2\right]^{1/2} = \mathbb{E}\left[\langle M, M\rangle_\infty\right]^{1/2}$, the mapping
$J^M : \mathcal{E}_M \to \mathcal{M}^2 , \quad H \mapsto H \cdot M$
becomes an isometry.
It is easy to show that such an integral with respect to an $L^2$-martingale is again an $L^2$-martingale, and thus the integral operator maps elementary predictable processes into the Hilbert space $\mathcal{M}^2$.
From the uniqueness of the sharp-bracket, one quickly deduces
$\langle H \cdot M, H \cdot M\rangle = H^2 \cdot \langle M, M\rangle$
and thus
$\|H \cdot M\|_{\mathcal{M}^2}^2 = \mathbb{E}\left[\langle H \cdot M, H \cdot M\rangle_\infty\right] = \mathbb{E}\left[\int_0^\infty H_s^2 \, d\langle M, M\rangle_s\right] = \|H\|_M^2 ,$
which is the required isometry property. The existence and uniqueness of the extension are then apparent.
Since $L^2(M)$ is the $L^2$-space of the measure on the predictable $\sigma$-field associated with $\mathbb{P}$ and the integrable increasing process $\langle M, M\rangle$, the associated Hausdorff (quotient) space is a Hilbert space. We want to extend the integral to the closure of $\mathcal{E}_M$ in $L^2(M)$. Therefore, we examine the closure $\bar{U}$ of $U := L^2(M) \cap \mathcal{E}$. According to the monotone class theorem, $\bar{U}$ contains all bounded predictable processes, and therefore $\bar{U} = L^2(M)$; hence $\mathcal{E}$ is dense in $L^2(M)$:
Lemma 3.15
([29], Lemma 12.1.8).
Let $\overline{\mathcal{E}_M}$ denote the closure of $\mathcal{E}_M$ in $L^2(M)$ with respect to $\|\cdot\|_M$. Then we have
$\overline{\mathcal{E}_M} = L^2(M) .$
Now, it is easy to define the stochastic integral similar to how it was done for the Itô integral.
Definition 3.16.
The continuous linear mapping $\overline{J^M}$ from $L^2(M)$ to $\mathcal{M}^2$, obtained as the extension of $J^M$ from the elementary predictable processes to $L^2(M)$ by continuity, is called the stochastic integral. For each $t \ge 0$, by setting
$(H \cdot M)_t = \left(H \mathbf{1}_{[0,t]}\right) \cdot M ,$
we obtain a well-defined stochastic process, which we call the stochastic integral of $H$ with respect to $M$.
The Itô isometry has now the following shape:
Theorem 3.17
([29], Lemma 12.1.4). Let $J^M$ be the integral operator as previously defined. When $L^2(M)$ is equipped with the extension of the norm $\|\cdot\|_M$ from $\mathcal{E}_M$, then $J^M$ constitutes an isometry from $L^2(M)$ into $\mathcal{M}^2$. Specifically, for any $H \in L^2(M)$, it holds that
$\|H\|_{L^2(M)}^2 = \mathbb{E}\left[\int_0^\infty H_s^2 \, d\langle M, M\rangle_s\right] = \mathbb{E}\left[J^M(H)^2\right] = \|J^M(H)\|_{\mathcal{M}^2}^2 .$
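As a purely illustrative Monte Carlo check of this isometry (our addition, not taken from [29]), consider the Brownian case $M = B$, where $d\langle M, M\rangle_s = ds$; the integrand, grid and sample size below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, paths = 1.0, 500, 20_000
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Deterministic (hence predictable) integrand H(s) = sin(2*pi*s), an arbitrary illustrative choice.
H = np.sin(2.0 * np.pi * t[:-1])

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))   # Brownian increments on the grid
stoch_int = (H * dB).sum(axis=1)                     # left-point (Ito) sums approximating (H . B)_T

lhs = np.mean(stoch_int ** 2)          # Monte Carlo estimate of E[(H . B)_T^2]
rhs = np.sum(H ** 2) * dt              # E[ int_0^T H_s^2 ds ] (deterministic here, ~0.5)
print(f"E[(H.B)_T^2] ~ {lhs:.4f}, int H^2 ds = {rhs:.4f}")
```

Both numbers should agree up to Monte Carlo and discretization error.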
Kunita and Watanabe [114], Motoo and Watanabe [161] treated the sharp-bracket process as a pseudo-inner product and, therefore, were able to utilize Hilbert-space methods on the space of L 2 -martingales. While the properties of metric spaces or Banach spaces would suffice to define the stochastic integral, some properties of the space of stochastic integrals, such as the martingale representation property, could not be demonstrated without the tools that are exclusive to Hilbert spaces.
By considering the sharp-bracket process as a pseudo-inner product, it was only logical to investigate a possible variant of the Cauchy-Schwarz inequality. Such a variant is given by the Kunita-Watanabe inequality, which is due to Courrège and Meyer [157] but based on a result by Kunita and Watanabe [114].
Theorem 3.18
(Kunita-Watanabe Inequality, [157] Théorème 21). If $M$ and $N$ are two square-integrable martingales and $H$ and $K$ are two measurable processes, then we have almost surely:
$\int_0^T |H_s| \, |K_s| \, |d\langle M, N\rangle_s| \le \left(\int_0^T H_s^2 \, d\langle M, M\rangle_s\right)^{1/2} \left(\int_0^T K_s^2 \, d\langle N, N\rangle_s\right)^{1/2} .$
The Kunita-Watanabe inequalities provide crucial insights into the properties of stochastic integrals and are essential in demonstrating the following theorem, which is essentially the definition of the stochastic integral as it was introduced by Kunita and Watanabe.
Theorem 3.19
(Kunita and Watanabe, [178] Theorem 28.1). Let $M \in \mathcal{M}^2$ and let $H$ be a predictable process with $H \in L^2(M)$. Then the stochastic integral $H \cdot M$ is the unique element of $\mathcal{M}^2$ with the property that for every $N \in \mathcal{M}^2$, we have
$\mathbb{E}\left[(H \cdot M)_\infty \, N_\infty\right] = \mathbb{E}\left[\int_0^\infty H_t \, d\langle M, N\rangle_t\right] .$
With the help of Theorem 3.18, it is straightforward to show that the novel approach by Kunita and Watanabe [114], Motoo and Watanabe [161] is equivalent to the one outlined above. The mapping
$H \mapsto \mathbb{E}\left[(H \cdot M)_\infty N_\infty\right] - \mathbb{E}\left[\int_0^\infty H_s \, d\langle M, N\rangle_s\right]$
defines a linear functional on $L^2(M)$. By applying the Kunita-Watanabe inequality, it is clear that this functional is continuous, and it vanishes on simple functions.
If the stochastic integral is defined as the unique continuous extension, it follows that the functional is zero for all elements of $L^2(M)$.
Furthermore, the right-hand side of the equation in Theorem 3.19 defines a continuous linear functional in $N$ (for given $H$ and $M$), while the left-hand side represents the inner product of $H \cdot M$ and $N$ in the Hilbert space $\mathcal{M}^2$. By the Riesz-Fréchet representation theorem in Hilbert spaces, for given $H$ and $M$, there exists a unique element $H \cdot M \in \mathcal{M}^2$ that corresponds to this linear functional.

3.5. Localization and Quadratic Variation

Besides the discovery of the sharp-bracket process as a pseudo-inner product, one of the key results of Itô and S. Watanabe was their establishment of local martingales in their studies of multiplicative functionals of Markov processes [86]. With this, it was possible to define the sharp-bracket process for local $L^2$-martingales (see [150, page 97 ff] for more details).
Let $M$ be a local $L^2$-martingale, and let $(T_n)$ be a sequence of stopping times such that $M^{T_n}$ is an $L^2$-martingale for all $n$. With this, the process
$\left(M^{T_{n+1}}\right)^2 - \langle M^{T_{n+1}}, M^{T_{n+1}}\rangle$
is by Theorem 3.11 a uniformly integrable martingale, and we see that the process
$M_{t \wedge T_n}^2 - \langle M^{T_{n+1}}, M^{T_{n+1}}\rangle_{t \wedge T_n}$
is also a martingale. It then follows from the uniqueness part of the Doob-Meyer decomposition that
$\langle M^{T_n}, M^{T_n}\rangle_t = \langle M^{T_{n+1}}, M^{T_{n+1}}\rangle_{T_n \wedge t} .$
As $T_n \to \infty$ for $n \to \infty$, we establish that there exists an increasing process $\langle M, M\rangle$ such that $M^2 - \langle M, M\rangle$ is a local martingale.
The statement can be formalized as follows:
Theorem 3.20
([71], Lemma 7.28). Let $M$ be a locally square-integrable martingale. Then there exists a unique predictable, increasing process $\langle M, M\rangle$ with $\langle M, M\rangle_0 = 0$ such that $M^2 - \langle M, M\rangle$ is a local martingale. This process is again called the sharp-bracket process or predictable quadratic variation.
This statement allows us to extend the definition of $\langle M, N\rangle$ to locally square-integrable martingales $M, N$ by setting
$\langle M, N\rangle = \frac{1}{2}\left(\langle M + N, M + N\rangle - \langle M, M\rangle - \langle N, N\rangle\right) .$
With similar deductions, it is also possible to prove a version of Doob’s decomposition theorem in continuous time without requiring the use of class (D).
Theorem 3.21
([38], Theorem VII.12). Let X be a càdlàg supermartingale. Then X has a unique decomposition:
$X = X_0 + M - A$
where M is a local martingale, A is a predictable, increasing process, and M 0 = A 0 = 0 .
Inspired by Kunita and Watanabe’s work, Paul-André Meyer contributed significantly by writing four seminal papers [149,150,151,152]. These works extended the stochastic integral to predictable integrands and locally square-integrable martingales.
In the original publication by Itô [86], the definition of the stochastic integral via Equation (4) also applied to local L 2 -martingales. Since the space of local L 2 -martingales is neither a Hilbert space nor a Banach space, the classical extension approach does not naturally carry over to local L 2 -martingales. Therefore, for the transition from L 2 -martingales to locally square-integrable martingales, Meyer [150] applied a similar approach as was done before for the sharp-bracket process via coalescing stopped processes into one process.
Let $M$ be a locally square-integrable martingale, and thus also a local $L^2$-martingale, and let $L^2(M)$ be defined as on Page 18. By assumption, there exists a sequence $(T_n)_{n \in \mathbb{N}}$ of stopping times such that $T_n \to \infty$ as $n \to \infty$ and $M^{T_n}$ is an $L^2$-martingale for all $n$. If $H \in L^2(M^{T_n})$ for all $n$, we have the relation
$\left(H \cdot M^{T_{n+1}}\right)^{T_n} = H \cdot M^{T_n} .$
Thus, the processes $H \cdot M^{T_n}$ can be coalesced into a single process, denoted $H \cdot M$.
This approach was followed by Dellacherie and Meyer [37] and later by Protter [172] in defining the general stochastic integral. Of course, for a formal derivation of the stochastic integral for local square-integrable martingales, it still needs to be shown that the stochastic integral is independent of the choice of stopping times.
These ideas are formalized as follows. Let $L^2_{\mathrm{loc}}(M)$ be the space of processes $H$ such that the increasing process
$\int_0^t H_s^2 \, d\langle M, M\rangle_s$
is locally integrable. It is easy to see that $H \in L^2_{\mathrm{loc}}(M)$ is equivalent to the existence of an increasing sequence of stopping times $T_n$ such that $H \in L^2(M^{T_n})$ for all $n$.
Theorem 3.22
(Definition of the Stochastic Integral, [150] p. 98). Let $M \in \mathcal{M}^2_{\mathrm{loc}}$ and $H \in L^2_{\mathrm{loc}}(M)$. Then there exists a uniquely determined process $L$ with
$L = H \cdot M^{T_n} \quad \text{on } [\![0, T_n]\!]$
for each sequence of stopping times $T_n$ such that $M^{T_n} \in \mathcal{M}^2$ and $H \in L^2(M^{T_n})$. We write $H \cdot M := L$ and say that $H \cdot M$ is the stochastic integral of $H$ with respect to $M$.
In the case where M M 2 , this definition coincides with that of the stochastic integrals defined in the last section and in the case where M is a Brownian motion, the stochastic integral coincides with the Itô integral.
The approach described here, which uses the sharp-bracket process to define a pseudo-scalar product and, subsequently, a metric, follows Itô’s method to extend the integral for square-integrable martingales from elementary predictable processes to general predictable processes. While this process and the localization procedure mentioned earlier are straightforward, the proofs for the properties of the stochastic integral defined by this method are challenging and dense.
An alternative is offered by the approach towards stochastic integration suggested by Motoo and Kunita, as discussed in [114,161] (see Theorem 3.23 below). This method allows the direct definition of the stochastic integral for localized processes, which simplifies the application of Hilbert space methods.
Theorem 3.23
([114], Theorem 2.1). For $M \in \mathcal{M}^2_{\mathrm{loc}}$ and $H \in L^2_{\mathrm{loc}}(M)$, the integral $H \cdot M$ is uniquely characterized by the relation
$\langle H \cdot M, N\rangle = H \cdot \langle M, N\rangle \quad \text{for all } N \in \mathcal{M}^2_{\mathrm{loc}} .$
Besides the definition of the stochastic integral for locally square-integrable martingales, one of the key results of Meyer’s work [149,150,151,152] was the establishment of the square-bracket process. Since the sharp-bracket process originally existed only for square-integrable martingales and was heavily dependent on the chosen probability measure, it was rather impractical and not applicable in many situations. Meyer extended the scalar product X , Y by introducing a pseudo-scalar product, denoted by [ X , Y ] , which exists for all martingales, including local martingales. Unlike X , Y , which depends on the probability measure, [ X , Y ] is invariant under changes of equivalent probability measures. This extension was crucial for the development of semimartingale theory and the further generalization of the stochastic integral.
Another major breakthrough was Meyer’s realization of the importance of the predictable σ -algebra. While Courrège had touched upon related ideas, Meyer demonstrated that for martingales with finite variation paths (including those with jumps), the stochastic integral should coincide with pathwise Lebesgue-Stieltjes integration. But Meyer showed that this is, in general, only true if the integrand is a predictable process. He further analyzed the jumps of the stochastic integral, noting that for predictable integrands, the stochastic integral has the same jump behavior as the Lebesgue-Stieltjes integral. This finding laid the groundwork for semimartingale theory, which was developed in subsequent years. For this reason, the stochastic integral for optional instead of predictable integrands, which leads to the compensated stochastic integral, received less attention compared to the stochastic integral with predictable integrands. Nevertheless, works such as [20] and the textbooks by Dellacherie and Meyer [37] and He et al. [71] provide a thorough discussion of this concept.
Furthermore, Meyer managed to derive a general change of variables formula without relying on Lévy systems, which was a significant generalization and was later extended to Markov processes in his subsequent papers.
The definition of the square-bracket process relies on the decomposition of martingales into a continuous and a discontinuous part:
Theorem 3.24
([150], Proposition 3). Let $M$ be a local martingale. Then $M$ can be uniquely decomposed into a continuous part $M^c$ and a purely discontinuous part $M^d$, each of which is itself a local martingale. This is expressed as:
$M_t = M_t^c + M_t^d ,$
where $M^c$ is the continuous part of $M$, $M^d$ is the purely discontinuous part, and $\Delta M_s = M_s - M_{s-}$ denotes the jump of $M$ at time $s$.
As any continuous local martingale is necessarily locally square-integrable, we conclude with Theorem 3.20 that M c , M c exists for any local martingale M.
Definition 3.25.
Let $M$ be a local martingale and let $M = M^c + M^d$ be the canonical decomposition, where $M^c$ and $M^d$ are its continuous and purely discontinuous local martingale components, respectively. The quadratic variation or optional quadratic variation or simply square-bracket process $[M, M]$ of $M$ is defined as the unique increasing process:
$[M, M]_t = \langle M^c, M^c\rangle_t + \sum_{0 \le s \le t} (\Delta M_s)^2 .$
For local martingales $M, N$, the quadratic covariation is defined by:
$[M, N]_t = \frac{1}{2}\left([M + N, M + N]_t - [M, M]_t - [N, N]_t\right) = \langle M^c, N^c\rangle_t + \sum_{0 \le s \le t} \Delta M_s \, \Delta N_s .$
Theorem 3.26
([71], Theorem 6.28). Let M , N M 2 . Then [ M , N ] is the unique finite variation process, such that M N [ M , N ] is a local martingale and
Δ [ M , N ] = Δ M Δ N .
In [114, Theorem 1.3], it is shown that for any continuous martingale, the sharp-bracket process can be approximated by appropriate sums that remind us of the variance of a stochastic process. This result can be generalized to the quadratic covariation process.
Theorem 3.27
([186], Theorem 5.8). Let $M$ and $N$ be càdlàg local martingales. For partitions $P = \{0 = t_0 < t_1 < \cdots < t_n = t\}$ of the interval $[0, t]$, we have:
$[M, N]_t = \lim_{\|P\| \to 0} \sum_{k=0}^{n-1} \left(M_{t_{k+1}} - M_{t_k}\right)\left(N_{t_{k+1}} - N_{t_k}\right) ,$
where $\|P\| = \max_{0 \le k < n}(t_{k+1} - t_k)$ is the mesh of the partition. The limit is taken in probability as the mesh of the partition tends to zero.
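The following small simulation (our own illustration) approximates $[M, N]_t$ by such partition sums for two correlated Brownian motions, for which $[M, N]_t = \rho t$; the correlation $\rho$ and the grid are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, rho = 1.0, 100_000, 0.6
dt = T / n

dW1 = rng.normal(0.0, np.sqrt(dt), n)
dW2 = rng.normal(0.0, np.sqrt(dt), n)
dM = dW1
dN = rho * dW1 + np.sqrt(1.0 - rho ** 2) * dW2   # Corr(dM, dN) = rho, hence [M, N]_T = rho * T

bracket = np.sum(dM * dN)                        # sum of products of increments over the partition
print(f"partition sum = {bracket:.4f}, rho*T = {rho * T:.4f}")
```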
The following theorem gives a partial integration formula for stochastic integrals. It allows for an alternative definition of the quadratic variation process [ M , N ] via stochastic integrals. The alternative definition was, for example, used in the ansatz by Protter [172].
Theorem 3.28
([150], Théorème 2). For local martingales $M, N$, the product $NM$ can be expressed by the following integration formula:
$NM = N_- \cdot M + M_- \cdot N + [M, N] ,$
where $N_- \cdot M$ and $M_- \cdot N$ denote the integrals of $N$ with respect to $M$ and of $M$ with respect to $N$ using the left-continuous versions $N_-$ and $M_-$, respectively.
Remark 7.
The assumption of square-integrability for the integrator significantly simplifies the definition of the stochastic integral. However, the additional assumption of continuity, as is often made in textbooks and university lectures, while simplifying some aspects of the theory, does not simplify the definition itself. What it does simplify are the proofs of various properties and prerequisites. For example, the existence of quadratic variation is much easier to establish for continuous martingales compared to general martingales. Additionally, the proof of Itô’s formula becomes a matter of straightforward calculation when dealing with continuous martingales. This is why many textbooks restrict the definition of the stochastic integral to continuous martingales, even though the theory extends more generally to locally square-integrable martingales.
Remark 8.
When M is a continuous martingale, the sharp-bracket process M , M coincides with the square bracket [ M , M ] , and the Itô isometry holds for right-continuous adapted integrands. Since simple right-continuous processes generate the optional σ-algebra, they are dense in the corresponding L 2 space. This allows us to define the stochastic integral for optional integrands in a manner analogous to its definition for predictable integrands. Moreover, simple processes are dense in L 2 ( Σ T , d M , M × d P ) , where Σ T is the progressive σ-algebra. Consequently, the stochastic integral with respect to a continuous martingale is well-defined for any square-integrable progressive integrand, making the restriction to predictable integrands unnecessary. For further details, refer to Shreve and Karatzas [186].
Remark 9.
Even though the predictable quadratic variation and the optional quadratic variation generally coincide only for continuous square-integrable martingales, the seminorms they naturally induce are equivalent (see [29, Theorem 12.2.1]). This equivalence allows the stochastic integral for square-integrable martingales to be defined using the optional quadratic variation, analogous to its construction with the predictable quadratic variation.

3.6. The Stochastic Integral for Locally Bounded Predictable Integrands

Up to 1970, stochastic integration was primarily associated with Markov processes, and the above-mentioned work by H. Kunita, S. Watanabe, and P. A. Meyer used the crucial assumption that the underlying filtration of σ -algebras was quasi-left continuous. In a paper by Doléans-Dade and Meyer [41], this assumption was eliminated, transforming the theory into a pure martingale theory and severing its connection to Markov processes. Here, it was shown that the assumption of the filtration being left-continuous can be dropped if the integrands are predictable, which leads to the same results. It was also in this publication that the term “semimartingale” was introduced.
Nowadays, there are different definitions for semimartingales. The definition below is the one that was used in the original papers. As we will examine different (equivalent) definitions of semimartingales, we will, for clarity, use the term classical semimartingale for the historical and original definition.
Definition 3.29.
A process X is a classical semimartingale if it has a decomposition of the form
X = M + A ,
where M is a local martingale, and A is a càdlàg adapted process of almost surely finite variation.
By definition, it is clear that local martingales and processes of finite variation are both classical semimartingales. Additionally, any right-continuous supermartingale or submartingale is also a classical semimartingale, as, by the Doob-Meyer decomposition, a right-continuous supermartingale $X$ can be decomposed as $X = M - A$, where $M$ is a local martingale and $A$ is an increasing process and therefore of finite variation.
In their paper, Doléans-Dade and Meyer [41] defined the stochastic integral with respect to continuous local martingales and extended this integral to general classical semimartingales for locally bounded, predictable integrands. The stochastic integral for continuous local martingales had already been defined earlier, as any continuous local martingale is a local L 2 -martingale. However, Doléans-Dade and Meyer used a similar but not completely identical approach to this stochastic integral again. They started by defining the stochastic integral for square-integrable martingales again and chose the same approach as Kunita and Watanabe did before. In particular, they used the definition as is implicitly given by Theorem 3.23 but used the quadratic variation as was done by Meyer [150].
The key idea of the approach by Doléans-Dade and Meyer [41] for the definition of the stochastic integral for local martingales in comparison to earlier approaches was to decompose the local martingale into appropriate processes for which the stochastic integral can be easily defined. This approach subsequently led to significant advancements in the theory of stochastic integration.
The main decomposition result of Doléans-Dade and Meyer [41] is the following:
Proposition 3.30
([41], Proposition 4). Let M be a local martingale. Then there exists a sequence of stopping times ( T n ) n N increasing to infinity and two processes N and V, such that:
  • M = N + V
  • $N^{T_n}$ is an $L^p$-martingale for all $n$ and all $p$ with $1 \le p < \infty$.
  • V is a process of integrable variation.
Now, the definition of the stochastic integral for locally bounded predictable processes H and classical semimartingales proceeds as follows.
Definition 3.31
(Stochastic Integral with respect to classical semimartingales for locally bounded integrands). Consider a classical semimartingale X with the decomposition
X = M + A ,
where M is a local martingale and A is a process of finite variation. Let H be a predictable and locally bounded process. We can choose a sequence of finite stopping times T n , increasing to infinity, such that the following holds:
  • For each $n$, $M^{T_n} = N^{T_n} + V^{T_n}$, where $N^{T_n}$ is an $L^p$-martingale and $V^{T_n}$ is a process of integrable variation.
  • For all $n$, the process $H^{T_n}$ is bounded.
Then, we define the stochastic integral as well as the two Stieltjes integrals $H \cdot V$ and $H \cdot A$ by setting
$H \cdot X = H \cdot N^{T_n} + H \cdot V^{T_n} + H \cdot A^{T_n} \quad \text{on } [\![0, T_n]\!] .$
It remains to show that $H \cdot X$ is well-defined and does not depend on the selection of the stopping times $(T_n)_{n \in \mathbb{N}}$ or on the decomposition. For this, it is essential that the stochastic integral with respect to a local martingale of finite variation coincides with the pathwise Lebesgue-Stieltjes integral. This only holds for predictable integrands $H$, and the existence of the Stieltjes integral is only guaranteed for bounded processes. Hence, this definition of the stochastic integral cannot easily be extended to unbounded integrands or optional integrands.
Doléans-Dade and Meyer [41] also demonstrated the following crucial result, which allows for the unique characterization of the stochastic integral with local martingale integrators.
Proposition 3.32
([41], Proposition 5). Let $M$ be a local martingale and $H$ a locally bounded, predictable process. Then $H \cdot M$ is the unique local martingale with $(H \cdot M)_0 = 0$ such that
$[H \cdot M, N] = H \cdot [M, N]$
holds for all local martingales $N$.
The proof of this characterization heavily relies on the fact that H · M is a local martingale for a local martingale M. Since this does not generally hold for unbounded H, a different strategy would be necessary to characterize the general stochastic integral.

3.7. The General Real-Valued Stochastic Integral

The stochastic integral was further generalized to accommodate unbounded predictable integrands by Jacod [88,89] and Chou [26], resulting in the comprehensive theory of the stochastic integral in one dimension.
Jacod’s approach is based on the characterization of semimartingale jumps, whereas Chou [26] proposed an alternative methodology, yielding an equivalent definition of the stochastic integral. The space of integrands used by Jacod represents the broadest possible space with a meaningful definition for semimartingale integrators; the stochastic integral cannot be extended to a more general class of one-dimensional integrands without sacrificing crucial properties, such as the continuity of the integral operator.
In general, for the definition of the stochastic integral, classical semimartingales have the disadvantage that they may have non-integrable jumps. Therefore, the first attempts to define the stochastic integral for semimartingale integrators were complicated and technical.
One key ingredient is the Fundamental Theorem of Local Martingales, which states that any local martingale can be represented as the sum of a local martingale of finite variation and a local martingale with bounded jumps. It was proven by Jia-an Yan (but appeared in [155]) and Doléans-Dade [42] independently.
Theorem 3.33
(Fundamental Theorem of Local Martingales [155]). Let M be a local martingale and α > 0 . There exist local martingales U , V such that U is an FV-process, the jumps of V are bounded by α, and M = U + V .
Therefore, when defining the stochastic integral for a classical semimartingale $X = M + A$, the local martingale $M$ can, without loss of generality, be assumed to have bounded jumps and, therefore, to be locally square-integrable. However, the integral with respect to a locally square-integrable martingale had already been defined. Hence, since the integration operator is linear, for the definition of the stochastic integral with respect to a classical semimartingale, only the stochastic integral with respect to the finite variation process $A$ remains to be defined, and this can easily be done by pathwise Lebesgue-Stieltjes integration. Nevertheless, it remains to be shown that the stochastic integral for a classical semimartingale $X = M + A$ does not depend on the decomposition $M + A$. This turns out to be more delicate than it seems at first glance. In particular, it is not even true for non-predictable integrands. This is one of the reasons why we restrict ourselves to predictable processes as potential integrands.
We start by giving a short overview of the construction of the stochastic integral, with only slight modifications in comparison to how it was done by Jacod [88,89].
So far, in two cases, the stochastic integral $H \cdot X$ has already been defined for predictable processes $H$ that are not locally bounded. Firstly, we have the case where $X$ has finite variation, provided that
$\int_0^t |H_s| \, d|X|_s < \infty \quad \text{for } t < \infty ,$
and secondly, the case where $X$ is a local martingale, under the assumption that
$\left(H^2 \cdot [X, X]\right)^{1/2} \text{ is locally integrable} .$
These two scenarios are referred to as the usual cases of stochastic integrals for predictable processes that are not locally bounded. Therefore, the canonical way to define the general stochastic integral would be to say that H is integrable with respect to X if there exists a decomposition X = M + A such that H · M and H · A exist in the usual sense, and by setting H · X = H · M + H · A . This method was adopted by Jacod. His further strategy was to characterize and control the jumps of the classical semimartingale X, which ultimately led to the concept of special semimartingales.
Definition 3.34.
A classical semimartingale X is called a special semimartingale if there exist a local martingale M and a predictable FV-process A such that
X = M + A .
Remark 10.
There are numerous equivalent definitions of special semimartingales in the literature. For example, a classical semimartingale is called special if there exists a decomposition into a local martingale and a process of locally integrable variation. An overview of equivalent criteria can, for example, be found in [29, Theorem 11.6.10].
Special semimartingales were initially introduced by Meyer [157] and Yoeurp [200]. It is easy to see that the decomposition of a special semimartingale is unique (up to the constant $X_0$), as any predictable local martingale of finite variation is constant.
The following theorem is crucial for the definition of the stochastic integral.
Theorem 3.35
([88], Théorème 6). Let $X$ be a classical semimartingale and $H \in \mathcal{P}$. Then
$Y_t = \sum_{0 \le s \le t} \Delta X_s \, \mathbf{1}_{\{|\Delta X_s| > 1\} \cup \{|H_s \Delta X_s| > 1\}}$
is of finite variation, and $Z = X - Y$ is a special semimartingale.
Now, the stochastic integral can be defined as follows:
Definition 3.36.
Let $X$ be a classical semimartingale, let $X = Y + Z$ be a decomposition as given by Theorem 3.35, and let the canonical decomposition of $Z$ be given as
$Z = M + A$
with a local martingale $M$ and a predictable FV-process $A$. If $\left(H^2 \cdot [M]\right)^{1/2}$ is a process of integrable variation, $\int_0^t |H_s| \, |dY_s| < \infty$, and $\int_0^t |H_s| \, |dA_s| < \infty$, then $H$ is said to be X-integrable, denoted $H \in L(X)$; $H \cdot Y$ and $H \cdot A$ exist in the Stieltjes sense, while $H \cdot M$ exists in the local martingale sense. We define the stochastic integral $H \cdot X$ as
$H \cdot X := H \cdot Y + H \cdot Z = H \cdot Y + H \cdot M + H \cdot A .$
The stochastic integral is well-defined.
Remark 11.
It is straightforward to see that $L(X)$ forms a vector space and that the corresponding stochastic integral is additive. Let $H$ and $K$ be two elements of $L(X)$; in Equation (9), we take $J = \{s : |\Delta X_s| \vee |H_s| |\Delta X_s| \vee |K_s| |\Delta X_s| > 1\}$; Theorem 3.35 then provides a decomposition $X = N + B$ of $X$ such that $H \cdot N$ and $K \cdot N$ exist in the local martingale sense and $H \cdot B$ and $K \cdot B$ in the Stieltjes sense, ensuring the additivity of the usual stochastic integral.
Remark 12.
With the definition given above, it is relatively easy to establish a dominated convergence theorem for stochastic integrals. Let $H^n$ be predictable processes that are bounded in absolute value by a predictable process $K$ and converge pointwise to a process $H$. If $K$ belongs to $L(X)$, so do the $H^n$ and $H$, and $H^n \cdot X$ converges to $H \cdot X$ in $\mathcal{S}$. To demonstrate this, we again apply Theorem 3.35 by taking $J = \{s : |\Delta X_s| \vee |K_s| |\Delta X_s| > 1\}$ in Equation (9), reducing the statement to a dominated convergence theorem for usual stochastic integrals. In the case of the Topological Stochastic Integral, which is not based on the decomposition of classical semimartingales, this is much harder to prove.
Jacod laid the foundation for general stochastic integration [88,89]. In 1980, Chou, Meyer, and Stricker introduced an alternative approach that leverages the topology of classical semimartingales along with an explicit decomposition result. Jacod later applied a similar method to define vector stochastic integrals [90], and Mémin examined the connections between these integrals and the topology of classical semimartingales [138]. Given the parallels with the topological stochastic integration discussed in Section 9, we briefly outline these approaches here.
The ansatz by Chou, Meyer, and Stricker is based on the work of Jacod [90] and Memin [138], as well as Meyer [155]. One disadvantage of the original work was that it does not simplify certain aspects: for example, it is not immediately clear that integrability and stochastic integrals are invariant under a change of measure, nor even that the integral is a linear operation in H or that the existence of the stochastic integral does not depend on the decomposition of the classical semimartingale X. Therefore, Chou et al. [26] introduced another definition, which the authors called the sophisticated definition.
Let $X_t^* := \sup_{s \le t} |X_s|$; then the map
$X \mapsto \|X\|_{\mathrm{ucp}} := \sum_{n=1}^\infty 2^{-n} \, \mathbb{E}\left[1 \wedge X_n^*\right]$
defines the topology of ucp convergence. We then define the Émery norm of a classical semimartingale $X$ as
$\|X\|_{\mathcal{S}} := \sup\left\{\|H \cdot X\|_{\mathrm{ucp}} \, : \, H \in \mathcal{E}, \ |H| \le 1\right\} .$
This topology is sometimes also called the semimartingale topology (or Émery topology). We write $X^n \to X$ in $\mathcal{S}$ when $\|X^n - X\|_{\mathcal{S}} \to 0$.
Here, the supremum is taken over all elementary predictable processes. In the original publication, it was taken over all simple predictable processes (see Theorem 4.1), and this is also how we are going to define it in the section about the Topological Stochastic Integral. However, it is easy to see that these definitions are equivalent. In fact, one could also choose all predictable processes that are bounded by 1.
Now, let $\mathcal{S}$ be the space of all classical semimartingales equipped with the Émery topology. Then $\mathcal{S}$ is a complete, metrizable, non-locally convex topological vector space. If $X^n$ converges to $X$ in $\mathcal{S}$, we have for any finite $t$
$\lim_{n \to \infty} (X^n - X)_t^* = 0 \quad \text{in probability} .$
Some immediate results are now the following: If we replace P with an equivalent probability measure Q , the topology of S remains the same (theorem of the closed graph). If Q is only absolutely continuous with respect to P , the identity mapping of S P to S Q is continuous.
Now, the stochastic integral can be defined as follows.
Definition 3.37
(Sophisticated Definition of the Stochastic Integral). Let $X$ be a classical semimartingale, $H$ be a predictable process and $H^n = H \mathbf{1}_{\{|H| \le n\}}$. We say that $H$ is X-integrable if $H^n \cdot X$ converges to a classical semimartingale $Y$ in the semimartingale topology. In this case, we say $Y$ is the stochastic integral of $H$ with respect to $X$ and write
Y = H · X .
This definition heavily relies on the definition of the stochastic integral for locally bounded predictable integrands. Therefore, the theory as developed by Doléans-Dade and Meyer [41] and presented in Section 3.6 is crucial.
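The truncation procedure behind this definition can be illustrated numerically as follows (our own sketch, with a hypothetical unbounded integrand and a Brownian integrator); the printed values of the truncated integrals stabilise as $n$ grows, although this pathwise check at the terminal time is of course much weaker than convergence in the semimartingale topology.

```python
import numpy as np

rng = np.random.default_rng(3)
T, steps = 1.0, 200_000
dt = T / steps
t = np.linspace(0.0, T, steps + 1)

dX = rng.normal(0.0, np.sqrt(dt), steps)     # increments of X (Brownian motion, illustrative choice)
H = 1.0 / np.sqrt(t[:-1] + 1e-4)             # unbounded but integrable predictable integrand (hypothetical)

for n in (1, 5, 25, 125, 625):
    Hn = np.where(np.abs(H) <= n, H, 0.0)    # truncation H^n = H * 1_{|H| <= n}
    print(n, float(np.sum(Hn * dX)))         # left-point sums approximating (H^n . X)_T
```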
Remark 13.
Since the semimartingale topology does not depend on the underlying probability measure, it is obvious that L ( X ) is independent of any change of equivalent probability measures.
Remark 14.
If H belongs to L ( X ) and L ( Y ) , it belongs to L ( X + Y ) and H · ( X + Y ) = H · X + H · Y . However, the linearity with respect to the integrands is less obvious than the one for the integrators.
Remark 15.
The sophisticated definition of the stochastic integral is equivalent to the definition given in Definition 3.36, and $H \in L(X)$ is equivalent to $H$ being X-integrable. For details, refer to [26], page 130 d).
Remark 16.
Although the approach referred to here as the sophisticated definition of the stochastic integral is rarely covered in university lectures or textbooks, certain aspects of it have proven to be extremely valuable. For instance, this approach allows for a simplified proof of the stochastic dominated convergence theorem (see, for example, [29]).
When Doléans-Dade and Meyer [41] developed the general stochastic integral for bounded integrands, they managed to extend the class of suitable integrands to locally bounded predictable processes via localization. This begs the question of whether the general stochastic integral can be even further generalized by examining integrands that are only locally X-integrable. However, it turns out that local X-integrability already implies general X-integrability and therefore, such an extension is not possible for the general integral. This was demonstrated by [26] via the following theorem:
Theorem 3.38
([26], Théorème 4). Suppose $H$ is predictable. Assume that there is an increasing sequence of stopping times $T_k$ tending to $\infty$ and processes $J^k \in L(X)$ such that $H = J^k$ on $[\![0, T_k]\!]$. Then $H \in L(X)$ and $H \cdot X = J^k \cdot X$ on $[\![0, T_k]\!]$.
The approach towards the general stochastic integral was later simplified further. However, all approaches (including the functional analytic approach) rely on the decomposition of classical semimartingales into a local martingale part and a Stieltjes part. The simplified approach, which is, for example, presented in [29,71], starts with a definition of the stochastic integral for general local martingales and then extends the space of potential integrands via the definition of the stochastic integral for classical semimartingales. As it turns out, by doing so, the stochastic integral for local martingales also gets generalized. The definition does not classify the jumps of the classical semimartingale but builds upon the decomposition of a local martingale into a continuous and a discontinuous part (Theorem 3.24) and on the definition of the stochastic integral for locally square-integrable martingales (and therefore, in particular, for continuous local martingales) as outlined in the previous section. We will give a short overview of this simplified procedure.
Definition 3.39.
Let $M$ be a local martingale and $H$ a predictable process. The stochastic integral $X = H \cdot M$ of $H$ with respect to $M$ is defined as the uniquely determined local martingale possessing the following characteristics:
  • The continuous part of $X$, denoted $X^c$, satisfies $X^c = H \cdot M^c$, where $M^c$ is the continuous martingale part of $M$.
  • The processes $\Delta(H \cdot M)$ and $H \Delta M$ are indistinguishable.
To prove the existence of the stochastic integral, one examines the class of predictable processes $H$ that locally satisfy
$\mathbb{E}\left[\left(H_0^2 M_0^2 + \int_0^\infty H_s^2 \, d[M, M]_s\right)^{1/2}\right] < \infty .$
By applying the Burkholder-Davis-Gundy inequality (Burkholder et al. [24], Garsia [58]), one can show the equivalence of the norms $M \mapsto \mathbb{E}\left[[M, M]_\infty^{1/2}\right]$ and $M \mapsto \mathbb{E}\left[\sup_t |M_t|\right]$ and therefore construct an approximating sequence of simple predictable processes. By dominated convergence and by delocalization (pasting together local processes), one can then construct a local martingale that satisfies the conditions from Definition 3.39.
Now, the integral can be extended in the following way:
Definition 3.40.
Let $X$ be a classical semimartingale and let $H$ be a predictable process such that there exists a decomposition $X = M + A$, where $M$ is a local martingale and $A$ is a finite variation (FV) process, satisfying the following conditions:
  • $H \in L^2_{\mathrm{loc}}(M)$,
  • For almost all $\omega$, the paths of $H(\omega)$ are Lebesgue-Stieltjes integrable with respect to $A(\omega)$.
Then we say $H$ is X-integrable, denoted $H \in L(X)$, and we define the stochastic integral $H \cdot X$ as
$H \cdot X = H \cdot M + H \cdot A ,$
where $H \cdot M$ is defined according to Definition 3.39 and $H \cdot A$ is a Lebesgue-Stieltjes integral.
The original publication by Jacod [88,89] introduced the condition that, analogously to Definition 3.39, the jumps satisfy $H \Delta X = \Delta(H \cdot X)$. Nevertheless, with the help of the Fundamental Theorem of Local Martingales, it is possible to show that the continuous part $X^c$ of a classical semimartingale is uniquely determined, and therefore, the condition on the jumps is a straightforward consequence of the requirements given above.
It is worth noting that for this definition, at least one decomposition of X is required to satisfy the above-mentioned criteria. However, it is possible that some decompositions of X fulfil these requirements while others do not. In fact, it is even the case that if X is a local martingale and H L ( X ) , then H · X is not necessarily a local martingale. Nevertheless, there exists at least one decomposition of X = M + A such that H · M is again a local martingale.
As the existence of the stochastic integral may vary across different decompositions, it is not possible to provide simple, necessary and sufficient integral conditions for the space L ( X ) .
The proof of the uniqueness and existence requires more technical details about the spaces of martingales and several properties of the stochastic integral for square-integrable martingales. Usually, one of the key ingredients for the proof of the properties of the stochastic integral (such as the stability of the local martingale property for locally bounded integrands) is the Burkholder-Davis inequality.

4. The Functional Analytic Approach

The approach to the general stochastic integral for classical semimartingales described in Section 3 has some disadvantages. It starts with the definition of the classical semimartingale as a process decomposable into a local martingale and a process of finite variation. Since the local martingale property depends strongly on the choice of the underlying probability measure, it can be assumed that the classical semimartingale property also depends on the selection of an equivalent probability measure. However, this is not the case, as was first shown by Meyer [157].
Furthermore, the meaning of the classical semimartingale for the stochastic integral is not evident from the definition. This means that the fact that classical semimartingales are the largest class of processes for which a stochastic integral can be defined in an “obvious” way seems completely unclear. Also, the derivation of the integral employing the classical method has turned out to be technically very complex and has been criticized many times (see, for example, [171]).
The functional analytic approach offers an alternative to the classical derivation that addresses and largely solves these problems for the stochastic integral with càglàd integrands. It allows one to define stochastic integrals of left-continuous, adapted integrands very quickly and to prove most properties of the stochastic integral, including a change of variables formula for semimartingales. This integral and these results suffice for a wide variety of applications. However, for the extension of the stochastic integral to general predictable integrands, one needs to resort again to more technical proofs and, in particular, to the decomposition theorems of martingales and semimartingales, including the Bichteler-Dellacherie-Mokobodzki theorem. One of the disadvantages of the functional analytic approach in comparison to the classical approach is that it is not directly apparent that the stochastic integral with respect to an $L^2$-martingale yields again an $L^2$-martingale, which is clearly a desirable property for any kind of stochastic integral.
The motivation for the functional analytic approach emerged from the realization that classical semimartingales represent the largest class of possible integrators for which a well-defined integral can be constructed. This was demonstrated by Bichteler [13,14], Dellacherie [35], Meyer [157]. Here, “well-defined” means that if a càdlàg process X is to satisfy a weak form of the bounded convergence theorem and if the stochastic integral is defined canonically for simple predictable processes, then X must be a semimartingale.
This discovery clarified the importance of semimartingales in the theory of stochastic integration. Meyer [156] and Dellacherie [35] were the first to propose and describe the functional analytic technique as it is understood today. Then, Lenglart [124] and Protter [170,171] built upon it. Protter [172] wrote an entire comprehensive textbook based on the functional analytic approach and covers the results from all publications mentioned above. Furthermore, it provides a set of simplified proofs for well-known results. Letta [125] came up with a concept similar to the functional analytic approach discussed here.

4.1. The Stochastic Integral for Càglàd Integrands

Similar to Itô’s approach, in the functional analytic ansatz, one starts by formulating the stochastic integral for a simple type of integrand. Itô chose right continuous (optional) step functions with deterministic step times. As the importance of predictable integrands became apparent, this approach was mostly changed to using elementary predictable processes (as we presented it in this survey). The functional analytic approach usually starts with defining the stochastic integral for simple predictable integrands, which is a slight generalization of elementary predictable processes by allowing the step times to be stopping times instead of deterministic times. However, an ansatz with elementary predictable processes would yield the same integral.
Definition 4.1.
A simple predictable process $H : \Omega \times [0, \infty) \to \mathbb{R}$ is defined as
$H(t, \omega) = H_0(\omega) \mathbf{1}_{\{0\}}(t) + \sum_{i=1}^n H_i(\omega) \mathbf{1}_{(T_i, T_{i+1}]}(t) ,$
with stopping times $0 \le T_1 \le \cdots \le T_n \le T_{n+1} < \infty$, where each $H_i(\omega)$ is an $\mathcal{F}_{T_i}$-measurable, bounded random variable. The space of simple predictable processes is denoted by $\Lambda$.
Similar to the elementary predictable processes, simple predictable processes are an optimal choice to start with as the space of simple predictable processes Λ generates the predictable σ -algebra, that is, P = σ ( Λ ) . And, for any measure μ on P and any q [ 1 , ] , it is easy to show by dominated convergence that Λ is a dense set of functions in L q ( P , μ ) , where L q ( P , μ ) consists of all predictable functions f such that the q-th power of the absolute value of f is integrable with respect to the measure μ :
L q ( Σ p , μ ) = f : Ω | f | q d μ < .
For simple predictable processes, there is a canonical way to define the stochastic integral with respect to a càdlàg process $X$:
For a stochastic process $X$ and a simple predictable process $H$ with representation as in Equation (10), let
$I_X(H) = H_0 X_0 + \sum_{i=1}^n H_i \left(X_{T_{i+1}} - X_{T_i}\right)$
be the stochastic integral of $H$ with respect to $X$.
It is easy to see that the stochastic integral does not depend on the choice of the representation of H in Λ .
Any sensible integral operator should be linear and satisfy some kind of continuity. Equation (11) is clearly linear. In order to define or check continuity, one needs to define some kind of topologies first.
Let Λ u be the space of simple predictable processes endowed with the topology of uniform convergence on R + × Ω and let L 0 denote the space of all real-valued random variables on ( Ω , F , P ) , endowed with the topology induced by convergence in probability.
To ensure continuity, in the functional analytic approach, one only allows integrators that satisfy this continuity condition:
Definition 4.2.
A càdlàg, adapted process $X$ is called a good integrator if, for each $t \ge 0$, the mapping $H \mapsto I_{X^t}(H)$ from $\Lambda_u$ to $L^0$ is continuous, where $X^t$ denotes the process $X$ stopped at $t$.
The continuity required for a suitable integrator is relatively weak, as uniform convergence is a strong condition, while convergence in probability is comparatively weak. Nevertheless, it can be shown that much stronger continuity results hold (see, for example, [106, Theorem 5.89]).
With these definitions, we can state the previously mentioned result that semimartingales represent the largest class of integrators for which a “reasonable” stochastic integral can be defined. This result is now commonly known as the Bichteler–Dellacherie–Mokobodzki Theorem and is typically presented in the following form.
Theorem 4.3
(Bichteler-Dellacherie-Mokobodzki Theorem, [14] Theorem 7.6). An adapted càdlàg process X is a good integrator if and only if it is a classical semimartingale.
The theorem was initially shown by Bichteler [13,14], Dellacherie [35], Meyer [157]. The original proofs were later simplified by Beiglböck and Siorpaes [7], which can also be found in the appendix of [29]. While the original proofs heavily relied on the entire machinery of stochastic calculus and stochastic integration, the latter approach requires much fewer prerequisites.
This definition makes it easy to show that good integrators (and hence semimartingales) are closed under a change of equivalent probability measures and closed under localization. The set of good integrators forms a vector space.
The process of defining the stochastic integral for càglàd integrands is similar to Itô’s approach:
Definition 4.4.
Let $X$ be a good integrator and $H \in \Lambda$ be represented as
$H(t, \omega) = H_0 \mathbf{1}_{\{0\}}(t) + \sum_{i=1}^n H_i \mathbf{1}_{(T_i, T_{i+1}]}(t) .$
We define the stochastic integral $J^X(H)_t$ by
$J_t^X(H) := (H \cdot X)_t := \int_0^t H_s \, dX_s = H_0 X_0 + \sum_{i=1}^n H_i \left(X_{T_{i+1} \wedge t} - X_{T_i \wedge t}\right)$
and
$J^X(H) = H_0 X_0 + \sum_{i=1}^n H_i \left(X_{T_{i+1}} - X_{T_i}\right) .$
Then Λ ⊆ Ł, and the stochastic integral process $J^X(H)$ is in D.
Before extending the definition of J X ( H ) to all H Ł , new topologies have to be defined on Λ , Ł and D due to the implicit localization of J X ( H ) ( t ) to the interval [ 0 , t ] .
Definition 4.5.
(a) A sequence of processes $(X^n)_{n \in \mathbb{N}}$ is said to converge to a process $X$ uniformly on compacts in probability (ucp) if for each $t \ge 0$,
$\sup_{0 \le s \le t} |X_s^n - X_s| \to 0 \quad \text{in probability as } n \to \infty .$
(b) By Λ ucp we refer to the space Λ of simple predictable processes endowed with the topology induced by ucp convergence. We define Ł ucp and D ucp accordingly.
Remark 17.
More details and a metric for the topology induced by the ucp convergence are given in Section 9.
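A minimal sketch (our addition) of one common metric inducing ucp convergence, in the spirit of the formula used for the ucp norm in Section 3.7, with the expectation replaced by a Monte Carlo average over simulated paths; the truncation of the series, the horizon and the path model are hypothetical choices.

```python
import numpy as np

def ucp_distance(X, Y, times, n_max=10):
    """Empirical version of d(X, Y) = sum_{n>=1} 2^{-n} E[ 1 ^ sup_{s<=n} |X_s - Y_s| ],
    truncated at n_max; X, Y have shape (paths, len(times))."""
    diff = np.abs(X - Y)
    total = 0.0
    for n in range(1, n_max + 1):
        sup_n = diff[:, times <= n].max(axis=1)
        total += 2.0 ** (-n) * np.mean(np.minimum(1.0, sup_n))
    return total

# Illustration: X^k = X + noise/k converges to X in ucp as k grows.
rng = np.random.default_rng(4)
paths, steps, T = 500, 1_000, 5.0
times = np.linspace(0.0, T, steps + 1)
X = np.cumsum(rng.normal(0.0, np.sqrt(T / steps), (paths, steps + 1)), axis=1)
for k in (1, 4, 16, 64):
    Xk = X + rng.normal(0.0, 1.0 / k, X.shape)
    print(k, round(ucp_distance(Xk, X, times), 5))
```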
The following lemmas show why these particular topologies have been chosen.
Lemma 4.6
([172], Theorem 10). The space Λ ucp is dense in Ł ucp .
Lemma 4.7
([106], Theorem 2.71). The space D ucp is complete.
The continuity extends from I X to J X .
Lemma 4.8
([172], Theorem II.11). Let X be a good integrator. Then the mapping J X : Λ ucp D ucp is continuous.
Now, Λ ucp is dense in Ł ucp , D ucp is a complete metric space, and the linear mapping J X : Λ ucp D ucp is continuous.
J X may thus be extended to a continuous linear mapping from Ł to D . For any t 0 , this determines the stochastic integral [ 0 , t ] H s d X s = J X ( H ) ( t ) of H Ł with respect to X on [ 0 , t ] .
Definition 4.9.
The continuous linear mapping $\overline{J^X}$ from Ł ucp to D ucp, obtained as the extension of $J^X$ from Λ ucp to D ucp by continuity, is called the stochastic integral. That is, for H ∈ Ł, let $\{H^n\}$ be a sequence of simple predictable processes that converges to $H$ in ucp. Then the stochastic integral $\overline{J^X}(H)$, denoted by $H \cdot X$, is defined as the ucp limit of the stochastic integrals of the approximating sequence, i.e.,
$H \cdot X = \lim_{n \to \infty} H^n \cdot X ,$
where the convergence is uniform on compacts in probability.
In the classical approach, the quadratic variation of semimartingales is required to define the stochastic integral. The quadratic variation is typically defined via existence theorems (see Definition 3.25) or through approximating sums (Theorem 3.27). In the functional analytic approach, however, the quadratic variation is not necessary for defining the stochastic integral. Consequently, we can first define the stochastic integral for càglàd integrands and then define the quadratic variation using the partial integration formula.
Definition 4.10.
Let $X$ and $Y$ be good integrators. The quadratic variation $[X] := [X, X] := ([X, X]_t)_{t \ge 0}$ is defined by
$[X, X] = X^2 - 2 X_- \cdot X .$
The quadratic covariation $[X, Y]$ of $X$ and $Y$ is defined by
$[X, Y] = XY - X_- \cdot Y - Y_- \cdot X .$
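As a numerical sanity check (our own addition), the defining identity $[X, X] = X^2 - 2 X_- \cdot X$ can be compared with the approximating sums of Theorem 3.27 on a simulated Brownian path with $X_0 = 0$; on a discrete grid the two expressions agree exactly by telescoping, and both are close to $[X, X]_t = t$.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 100_000
dt = T / n

dX = rng.normal(0.0, np.sqrt(dt), n)
X = np.concatenate([[0.0], np.cumsum(dX)])   # path with X_0 = 0

left_integral = np.cumsum(X[:-1] * dX)        # left-point sums approximating (X_- . X)_t
via_parts = X[1:] ** 2 - 2.0 * left_integral  # X_t^2 - 2 (X_- . X)_t
via_sums = np.cumsum(dX ** 2)                 # sums of squared increments

print(f"via parts: {via_parts[-1]:.4f}, via sums: {via_sums[-1]:.4f}, t = {T:.4f}")
```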
The stochastic integral, as developed so far, suffices for most applications, such as the Itô formula, the Doléans-Dade exponential, most applications in derivative pricing and risk management, Lévy's theorem, or Lévy's stochastic area formula. Nevertheless, for more advanced applications, a more general stochastic integral is required.

4.2. The Stochastic Integral for Predictable Integrands

We now discuss the extension from càglàd integrands to general predictable integrands. As with the classical approach, the extension is very technical and depends on carefully chosen metrics, the decomposition of classical semimartingales, and the theory of specific semimartingale spaces.
The complete development of this theory can be found in [172] and was first published in the initial edition of the cited textbook. However, similar techniques were already discussed in [170,171], Jacod [89], and Chou et al. [26].
To extend the stochastic integral from Ł integrands to P , it is again necessary to define appropriate topologies on P and D in such a way that Ł is dense in P (or, to be more precise, b Ł in b P , where b denotes the restriction to bounded processes) and such that the integral operator is continuous. While for the integral with càglàd integrands it was sufficient to define a topology that was independent of the integrator, here the metric has to be adapted to the corresponding integrator process.
First, the integral is only defined for a particular class of semimartingales, the $\mathcal{H}^2$-semimartingales, which form a subclass of the special semimartingales.
A classical semimartingale $X$ is a special semimartingale if there exists a decomposition of $X$ into a local martingale and a predictable process of finite variation (see Definition 3.34). Such a decomposition is always unique.
It is easy to show that a classical semimartingale is a special semimartingale if and only if its supremum process is locally integrable (Meyer [157], Yoeurp [200]).
We follow here closely the presentation in [172].
Definition 4.11.
Let $X$ be a special semimartingale with canonical decomposition $X = N + A$. The $\mathcal{H}^2$-norm of $X$ is defined as
$\|X\|_{\mathcal{H}^2} := \left\| [N, N]_\infty^{1/2} \right\|_{L^2} + \left\| \int_0^\infty |dA_s| \right\|_{L^2} .$
The space of semimartingales $\mathcal{H}^2$ consists of all special semimartingales with finite $\mathcal{H}^2$-norm.
Remark 18.
In the literature, the term H 2 -space typically refers specifically to martingales that satisfy similar integrability conditions. Then, for clarity, the corresponding space for semimartingales is often denoted by H ¯ 2 . Since we do not address the martingale space H 2 in this context, we omit this distinction here.
The first goal is to extend the class of stochastic integrands from b Ł to b P , with X H 2 . First, one easily sees that if H b Ł and X H 2 , then the stochastic integral H · X is an H 2 -semimartingale. Additionally, if X = N + A is the canonical decomposition of X, then H · N + H · A is the canonical decomposition of H · X . Moreover,
$\|H \cdot X\|_{\mathcal{H}^2} = \left\| \left( \int_0^\infty H_s^2 \, d[N, N]_s \right)^{1/2} \right\|_{L^2} + \left\| \int_0^\infty |H_s| \, |dA_s| \right\|_{L^2} .$
The key idea in extending our integral is to recognize that [ N , N ] and A are finite variation processes. Therefore, the integrals 0 t H s d [ N ] s and 0 t | H s | d | A | s are well-defined for any H b P and not just for H Ł .
Definition 4.12.
For a special semimartingale $X \in \mathcal{H}^2$ with canonical decomposition $X = N + A$ and $H, J \in \mathcal{P}$, the metric $d_X$ is defined as
$d_X(H, J) := \left\| \left( \int_0^\infty (H_s - J_s)^2 \, d[N, N]_s \right)^{1/2} \right\|_{L^2} + \left\| \int_0^\infty |H_s - J_s| \, |dA_s| \right\|_{L^2} ,$
and a predictable process $H$ is called $(\mathcal{H}^2, X)$-integrable if $d_X(H, 0) < \infty$ holds.
By identifying indistinguishable processes, we obtain the following important result:
Theorem 4.13
([172], Theorem IV.1). The space $\mathcal{H}^2$ is a Banach space, and the quotient space $\mathcal{H}_X / \mathcal{N}$, where $\mathcal{H}_X$ is the set of $(\mathcal{H}^2, X)$-integrable processes and $\mathcal{N}$ is the subspace of processes $H$ with $d_X(H, 0) = 0$, is a normed vector space with norm
$\|H\|_{d_X} := d_X(H, 0) .$
In particular, since the triangle inequality holds, it follows for two ( H 2 , X ) -integrable processes H , J
d X ( H , J ) d X ( H , 0 ) + d X ( J , 0 ) < .
Lemma 4.14
([172], Theorem IV.2). The space of bounded càglàd processes b Ł is dense in the space of bounded predictable processes b P under d X ( · , · ) .
The proof relies on the monotone class theorem, which breaks down for unbounded processes. This is the reason why we restrict ourselves to bounded processes first. This approach is similar to the classical approach, where Doléans-Dade and Meyer [41] defined the stochastic integral first for bounded predictable integrands as the monotone class theorem could be utilized for that (also see [157] for a nice presentation).
Furthermore, one obtains the following important isometry for X H 2 , H , J b Ł :
H · X J · X H 2 = d X ( H , J ) .
Theorem 4.15
([172], Theorem IV.3). If ( H n ) n N is a Cauchy sequence in b Ł with respect to d X , then ( H n · X ) n N is also a Cauchy sequence in H 2 .
With these prerequisites, it is now possible to define the stochastic integral for bounded predictable integrands and H 2 -semimartingales.
Definition 4.16.
Let $X \in \mathcal{H}^2$, $H \in b\mathcal{P}$, and $(H^n)_{n \in \mathbb{N}}$ be a sequence in $b$Ł which converges to $H$ with respect to $d_X$. The stochastic integral $H \cdot X$ is the unique process $Y \in \mathcal{H}^2$ with
Y = lim n H n · X ,
where the convergence is in H 2 .
In particular, it is easy to show that the stochastic integral does not depend on the choice of the approximating sequence.
As the next step, the stochastic integral is defined for unbounded predictable integrands. For X H 2 and an ( H 2 , X ) -integrable process H P , the sequence H n is defined by H n : = H 1 { | H | n } . Then the stochastic integral as defined above exists for all H n and, furthermore, the following theorem holds:
Theorem 4.17
([172], Theorem IV.14). Let X H 2 and let H P be ( H 2 , X ) -integrable. For H n = H 1 { | H | n } the sequence ( H n · X ) n N is a Cauchy sequence in H 2 .
With this result, the stochastic integral H · X can be defined for X H 2 as
H · X : = lim n H n · X = lim n H 1 | H | n · X ,
where the limit is seen in the H 2 -sense.
To pass from $\mathcal{H}^2$-semimartingales to general classical semimartingales, some kind of localization procedure (similar to the one in Section 3.5) should be applied. However, even locally, a classical semimartingale is, in general, not a special semimartingale (one can even show that every locally special semimartingale is already a special semimartingale, cf. [92, Proposition 4.25]). In order to be able to pass from classical semimartingales to special semimartingales, the localization procedure needs to be relaxed further, and we examine prelocal behavior instead, which means, roughly speaking, that the process $X$ is not stopped at the random time $T$ but immediately before it, at $T-$. This way, we circumvent the problem of unbounded jumps at the stopping times and obtain additional regularity properties.
The reason for studying prelocal behaviour is the following theorem.
Theorem 4.18
([172], Theorem IV.13). Let $X$ be a classical semimartingale. Then $X$ is prelocally in $\mathcal{H}^2$.
This allows us to define the general (component-wise) stochastic integral.
Definition 4.19.
Let $X$ be a classical semimartingale and $H \in \mathcal{P}$. The stochastic integral $H\cdot X$ exists if there is a sequence of stopping times $(T_n)_{n\in\mathbb{N}}$ increasing to infinity such that $X^{T_n-} \in \mathcal{H}^2$ and $H$ is $(\mathcal{H}^2, X^{T_n-})$-integrable for all $n$. In this case, we say $H$ is integrable with respect to $X$ and write $H \in L(X)$. We define the stochastic integral by
$$H\cdot X = H\cdot\big(X^{T_n-}\big) \quad \text{on } [0, T_n),$$
for all $n$.
It is easy to show here that the integral does not depend on the sequence of stopping times.
It should be noted once again that the definition of the stochastic integral is usually the easy part. The proof for the most important properties of the stochastic integral is, in most cases, much harder.
There is no dedicated approach for extending the component-wise stochastic integral to the integral for vector-valued semimartingales. Hence, the extension uses the same methods as the classical approach.

5. The Stochastic Vector Integral

So far, we have only examined the stochastic integral for one-dimensional integrands and integrators. The obvious way to generalize stochastic integration to allow for multidimensional semimartingales $X = (X^1, \dots, X^d)$ and multidimensional integrands $H = (H^1, \dots, H^d)$ would be to calculate each component's stochastic integral and sum over all components. Thus, the set of suitable integrands would be defined as the class of processes $H = (H^1, \dots, H^d)$ such that $H^i \in L(X^i)$ for every $i = 1, \dots, d$. In this case, the space $\{H\cdot X;\ H \in L(X)\}$ is closed for one-dimensional processes $X$ and $H$.
However, contrary to what L. Galtchouk had previously thought, it turns out that the space of stochastic integrals with respect to a given multidimensional semimartingale, when defined as componentwise stochastic integrals, is, in general, not closed in the semimartingale topology for $d > 1$. To obtain closedness of the space of stochastic integrals, the concept of the componentwise stochastic integral has to be further generalized. Jacod [90] presented this generalization when he developed the stochastic vector integral with respect to a multidimensional semimartingale. Jean Mémin showed in [138] that this construction results in a closed space of stochastic integrals. This concept of the stochastic vector integral is also briefly treated in C. Dellacherie and Paul-André Meyer's book [37].
In [90], the stochastic vector integral is built in implicit form, as opposed to prior publications in which the stochastic integral was first formed for "simple" integrands and then extended to general integrands via an extension theorem or a limit construction. The explicit approach for stochastic vector integration was first suggested by Shiryaev [185] and then presented in detail in [184]. In [105] and [106], another stochastic integral was developed, which likewise yields closedness.
A quick summary of the construction of the vector stochastic integral is given in the following, which mainly follows the presentation in [29].
The objective is to enable the components to be netted out before constructing the integral. As in the traditional method, the integral is first defined for local martingales and for finite variation processes. Here, it is assumed that all vectors are column vectors; hence $x^\top y$ is the inner product of $x$ and $y$.
In the one-dimensional case, the construction of the stochastic integral relies on the quadratic variation process [ M , M ] , which represents the cumulative variance of the martingale M. This process provides a foundation for defining integrals and controlling their behavior.
In the multidimensional setting, each pair of components $(M^i, M^j)$ of a vector martingale $M = (M^1, \dots, M^d)$ has its own pairwise covariation $[M^i, M^j]$. A direct extension of the one-dimensional approach would require a process that can consistently represent these covariations. Here, we introduce a new finite variation process $C$ to serve as a unifying framework that links all pairwise covariations through an optional process $\theta^{ij}$, so that
$$[M^i, M^j] = \theta^{ij}\cdot C.$$
This construction is essential because it provides a shared “clock” or “driver” for all covariations, ensuring consistency across the components.
In comparison to the one-dimensional case, where [ M , M ] alone suffices, C generalizes the quadratic variation process to handle the vector case. Instead of relying on multiple individual quadratic variations, we define all covariations in terms of a single process C, modulated by θ i j . This setup is particularly advantageous when ensuring that the space of stochastic integrals is closed, as it captures both self- and cross-dependencies in a coherent way.
The first step is to demonstrate that, given a $d$-dimensional vector local martingale with components $M^i$, there exists an adapted process of finite variation $C$ and an optional process $\theta$ such that $x^\top\theta x \ge 0$ for any $x \in \mathbb{R}^d$, $\theta = \theta^\top$, and
$$[M^i, M^j] = \theta^{ij}\cdot C$$
for all $i, j \in \{1, 2, \dots, d\}$.
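To make the roles of $C$ and $\theta$ concrete, the following sketch works with a time-discretized two-dimensional martingale. It uses $C = [M^1,M^1] + [M^2,M^2]$ — one convenient (and here assumed) choice of common driver, not the only one — and recovers the densities $\theta^{ij}$ as increment ratios; the symmetry and positive semidefiniteness required above can then be checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Increments of a time-discretized 2-dimensional martingale with correlated
# Gaussian increments (purely illustrative parameters)
corr = 0.6
cov = np.array([[1.0, corr], [corr, 1.0]]) / n
dM = rng.multivariate_normal(np.zeros(2), cov, size=n)          # shape (n, 2)

# Discrete pairwise covariations: d[M^i, M^j]_k = dM^i_k dM^j_k
d_cov = np.einsum("ki,kj->kij", dM, dM)                         # shape (n, 2, 2)

# One convenient common driver: C = [M^1, M^1] + [M^2, M^2] (assumed choice)
dC = d_cov[:, 0, 0] + d_cov[:, 1, 1]

# Densities theta^{ij} = d[M^i, M^j] / dC, a symmetric psd matrix process
theta = d_cov / dC[:, None, None]

x = rng.normal(size=2)
print("theta symmetric:", np.allclose(theta, theta.transpose(0, 2, 1)))
print("x' theta x >= 0:", bool((np.einsum("i,kij,j->k", x, theta, x) >= -1e-12).all()))

# Reconstruct [M^1, M^2] as theta^{12} . C and compare with the direct covariation
direct = d_cov[:, 0, 1].cumsum()
via_C = (theta[:, 0, 1] * dC).cumsum()
print("max reconstruction error:", np.abs(direct - via_C).max())
```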
Similar to the procedure for the componentwise integral, the next step is to establish suitable metrics. Here, we choose, for a $d$-dimensional semimartingale $M$ with $M_0 = 0$, a $d$-dimensional predictable process $H$, and $p \in [1, \infty)$,
$$H \mapsto \mathbb{E}\Big[\big((H^\top\theta H)\cdot C_\infty\big)^{p/2}\Big]^{1/p} =: \|H\|_{L^p(M)}.$$
If $\|H\|_{L^p(M)} < \infty$, we write $H \in L^p(M)$, and $H \in L^p_{\mathrm{loc}}(M)$ if $H$ is locally in $L^p(M)$.
It is straightforward to demonstrate that $\|H\|_{L^p(M)}$ is a norm since it is composed of a Hilbert space norm and an $L^p$-norm. A dominated convergence argument similarly demonstrates that the simple integrands are dense in the space $L^p(M)$ for each $p$.
Considering the definitions above, one can see that the set $L^p(M)$ equals the set of processes $H$ for which $\big((H^\top\theta H)\cdot C\big)^{p/2}$ is a process of integrable variation, and as in the scalar case, the construction for $p = 2$ may be modified by replacing $[M^i, M^j]$ with $\langle M^i, M^j\rangle$. The space $L^2_{\mathrm{loc}}(M)$ corresponds exactly to the space of predictable processes $H$ for which $(H^\top\theta H)\cdot C$ is locally integrable.
The following is the definition of the stochastic integral with regards to a local d-dimensional martingale:
Definition 5.1.
For a local martingale $M$ with $M_0 = 0$ and a predictable process $H \in L^1_{\mathrm{loc}}(M)$, the stochastic integral is defined as the local martingale $X = H\cdot M$ such that, for any local martingale $N$,
$$[X, N] = (H^\top K)\cdot C,$$
where $K$ is an optional vector process such that $[M^i, N] = K^i\cdot C$.
It is easy to show that the integral is uniquely defined.
If each component H i is M i -integrable, then the relationship H · M = i ( H i · M i ) holds by the uniqueness and linearity of the integral. This componentwise summation approach ensures consistency in the construction of the vector integral under these conditions.
In cases where the semimartingale M has uncorrelated components, i.e., M i , M j = 0 for each i j , the vector integral and the componentwise sum are equivalent. Here, the optional symmetric matrix process θ is effectively diagonal, simplifying the isometry so that each component H i is necessarily M i -integrable. This correspondence is crucial, as it reaffirms that the vector stochastic integral not only extends the classical componentwise approach but does so in a way that inherently respects the independence of uncorrelated components.
To complete the construction, we extend the stochastic integral to vector processes with finite variation. For such processes, the vector stochastic integral is defined as follows:
Definition 5.2.
Consider $A$ as a càdlàg process in $\mathbb{R}^d$ with components of finite variation. Define $V_t = \sum_{i=1}^d \int_{[0,t]} |dA^i_s|$. Then $V$ is càdlàg and there exists a vector process $v = (v^1, \dots, v^d)$ such that $A^i = v^i\cdot V$ for each component $i$; moreover, $V$ and $v$ are predictable whenever $A$ is predictable. We denote by $L_{\mathrm{var}}(A)$ the space of predictable processes $H$ for which the following process is of finite variation:
$$H\cdot A := (H^\top v)\cdot V.$$
It is easy to see that $L_{\mathrm{var}}(A) = (L_{\mathrm{var}}(A))_{\mathrm{loc}}$.
The notation $H\cdot X$ is not ambiguous: if $X$ is a vector local martingale of finite variation and both integrals are defined for $H$, then the two integrals agree.
The norm $\|\cdot\|_{L^p(M)}$ introduced above serves as a natural extension of the one-dimensional $L^p$-norm to the vector case. In the one-dimensional setting, the norm relies on the quadratic variation $[M, M]$; here, $\|H\|_{L^p(M)}$ incorporates the matrix $\theta$ and the process $C$ to handle the multidimensional context.
The extension to semimartingales, in general, is now evident.
Definition 5.3.
A vector process $H$ is integrable with respect to a vector semimartingale $X$ (denoted by $H \in L(X)$) if there exists a decomposition $X = M + A$ such that $H\cdot M$ and $H\cdot A$ are well defined as a vector local martingale integral and a vector Stieltjes integral, respectively. In this case, $H\cdot X := H\cdot M + H\cdot A$.
The integral does not depend on the chosen decomposition, and the set $L(X)$ of $X$-integrable processes is a vector space.
Remark 19.
If $H = (H^1, \dots, H^d)$ is a predictable, locally bounded process, then for any decomposition $X = A + M$, where $A$ is a $d$-dimensional process of finite variation and $M$ is a $d$-dimensional local martingale, we have $H \in L_{\mathrm{var}}(A) \cap L^1_{\mathrm{loc}}(M)$. Thus, $L(X)$ includes all locally bounded predictable processes.
Remark 20.
If $H \in L(X)$, then $H$ need not belong to $L_{\mathrm{var}}(A) \cap L^1_{\mathrm{loc}}(M)$ for all decompositions $X = A + M$ with $A$ a $d$-dimensional process of finite variation and $M$ a $d$-dimensional local martingale; see [184, Example 5.2].
The following holds with this formulation of the stochastic integral:
Theorem 5.4
([138], Théorème II.7). In the semimartingale topology, the space $\{H\cdot X\}_{H\in L(X)}$ is complete for every semimartingale $X$.

The Infinite Dimensional Case

When $X$ is an infinite-dimensional process, the theory developed in this survey does not suffice and further generalizations are required.
Grigelionis and Mikulevičius [66], Korezlioglu and Martaias [112], Kunita [115], Métivier and Pellaumail [142], Métivier and Pistone [143], Meyer [154], and many others, introduced and analyzed several important classes of stochastic integrals in depth. Not unexpectedly, the techniques for infinite-dimensional stochastic integration presented in these papers exhibit parallels and differences.
The cylindrical stochastic integration theory proposed by Mikulevicius and Rozovskii [159,160] is one method for approaching this topic. In De Donno and Pratelli [43] and De Donno et al. [32], a particular application of this idea was refined and applied to finance. Alternatively, the theory of Random Measures, which is presented in detail in [15], can be extended to cover infinite-dimensional stochastic integration. If X takes values in a Hilbert space, then the construction for infinite-dimensional stochastic integration, as described in [140], is identical to that for finite-dimensional spaces.

6. The Integral with Respect to Random Measures

There is another, conceptually equivalent, method developed by Métivier and Pellaumail [141], Pellaumail [166], Ruiz de Chávez [181] and, for special cases, by Skorohod [187]. This method involves extending the "stochastic measure" associated with a semimartingale, starting from the integral of elementary predictable processes, similar to the measure extension method of Kussmaul [122]. One of the major advantages of this method is its ability to extend effortlessly to vector-valued martingales, even in infinite dimensions.
The motivation to study the stochastic integral as an integral with respect to random measures stems, firstly, from the fact that most properties of increasing processes are in fact properties of the associated random measure $dA_t(\omega)$ on $\mathbb{R}_+$: hence the idea of generalizing these properties, in an appropriate way, to random measures on sets larger than $\mathbb{R}_+$. Secondly, a certain number of difficulties in the classical treatment of martingales and semimartingales are due to the fact that the jumps of these processes can be "too large". However, for any semimartingale, the jumps greater than $\varepsilon$ can be represented as a random measure, for which integration is straightforward, thus simplifying the theory.
Besides the advantage of the straightforward extension to the vector-valued martingales, this method makes the connection to several important related topics more clear. The most prominent examples are probably the point processes in general and Poisson point measures in particular.
As the key elements of this approach are similar to the classical method and rely on the decomposition of martingales and the study of the jump behavior of semimartingales, we are not going into detail here.
The introduction of random measures as we understand them here (in particular their role in time), and especially integer-valued random measures, can be traced to the works of Itô [86] and Kunita and Watanabe [117] for Poisson measures, Kunita and Watanabe [117] for the measure associated with a Markov process, and Skorokhod [188]. These authors immediately emphasized the connections with martingale theory, implicitly using the notion of dual predictable projection.
Comprehensive treatments of the theory can be found in Chung and Williams [28], Jacod [89], Métivier [139]. While Chung and Williams [28], and also [182], follow a didactically appealing and easy-to-grasp approach but restrict themselves to (local) martingales, Métivier [139] defines the integral for semimartingales in such a way that the extension to the multidimensional (and even infinite-dimensional) stochastic integral is straightforward.
The definitions of dual predictable projection and Doob-Meyer measure are elaborated, for example, in Jacod [91]. The notion of a Lévy system, which is closely related to random measures, is due to Kunita and Watanabe [117] when X is a Markov process, although the characterization of the Lévy system as a dual predictable projection came later: see Benveniste and Jacod [11]. The important and related topic of conditional expectation with respect to a positive Doléans measure is discussed in Jacod [87].
The fact that if N is the counting measure of a Poisson point distribution, then N ( A ) follows a Poisson law, is classic: see for example Kingman [107]. The important characterization theorem (for example Jacod [89, Théorème 3.34]) establishing a connection between Poisson point measures and the dual predictable projection is due to Kunita and Watanabe [117]; generalizations can be found in Meyer [148] and Grigelionis [63].
It was Bremaud [18] who first showed the importance of martingale theory in the study of point processes; see Grigelionis [64], and the review article by Bremaud and Jacod [19] for a more comprehensive bibliography on the subject. Also see Jacod [87], Kabanov et al. [99], Shiryaev [183] for further treatment.
The notion of local characteristics of a semimartingale, which has become extremely important within the last 30 years, is based on the study of random measures. It was introduced by Grigelionis [62] and then systematized by Jacod [89]. Contributions in the form of characterization theorems were also made by Boy [17], Skorokhod [189], Meyer [144], Jacod [89], and finally Grigelionis [65].

7. Other Types of Stochastic Integrals

Notably, there are several formulations of the stochastic integral, and not all of them produce the same processes under all conditions. There are also the Fisk–Stratonovich integral (Fisk [55] and Stratonovich [190]), the McShane integral (McShane [134,135], Protter [169]; for a comprehensive treatment, see Friz and Hairer [57] or [174]), and cylindrical stochastic integration (Mikulevicius and Rozovskii [159,160]). In addition, there is a theory for non-predictable integrands (see, for instance, Shreve and Karatzas [186], Gawarecki and Mandrekar [60], or Revuz and Yor [176], which treat the case of a continuous integrator, or Liptser and Shiryaev [129] and Rüdiger [180] for more general cases).
Another alternative to the Itô integral goes back to Lyons [131] and is presented, for example, by Dudley and Norvaiša [45], Lyons and Qian [130], Lyons et al. [132]. Their approach allows one to view the stochastic integral as a pathwise Young-type integral for processes of infinite variation (by means of the $p$-variation norm and without using martingales) and is referred to as the theory of rough paths.
Ayed and Kuo [5], Hibino et al. [73], Hwang et al. [77], Jornet [98], Kuo [119], Kuo et al. [120] developed in their publications a stochastic integral that permits non-adapted integrands.
Toh and Chew [192] present a different approach to stochastic integration that yields the same integral.
The integral which is today known as Itô-Kurzweil-Henstock integral uses a Riemann approach based on a non-uniform mesh. It was introduced independently by J. Kurzweil and R. Henstock in the 1950s to study the classical deterministic integral [72,121].
In the papers [22,94,113,177], the authors explore various aspects of stochastic integration with respect to cylindrical Lévy processes. These papers collectively contribute to the development of stochastic integration theory, providing significant insights and methodologies for dealing with cylindrical Lévy processes in infinite-dimensional spaces.
Brzeźniak et al. [22] provide a comprehensive framework for stochastic integration with respect to cylindrical Lévy processes in Banach spaces. They establish integrability conditions and apply their findings to stochastic partial differential equations (SPDEs).
Jakubowski and Riedle [94], Riedle [177] introduce a new approach to stochastic integration with respect to cylindrical Lévy processes in Hilbert spaces. They develop a stochastic integral for random integrands valued in the space of Hilbert-Schmidt operators, without requiring moment or boundedness conditions on the integrands or the integrator. This work lays the foundation by establishing an adapted semi-martingale with càdlàg trajectories in a Hilbert space. Kosmala and Riedle [113] extend the theory by introducing a stochastic integral with respect to cylindrical Lévy processes with finite p-th weak moments for p [ 1 , 2 ] . They focus on integrands consisting of p-summing operators between Banach spaces of martingale type p. Their work applies the integration theory to stochastic evolution equations driven by cylindrical Lévy processes, ensuring the existence and uniqueness of solutions under certain conditions.
Another stochastic calculus framework is presented by Aleš Černý and Johannes Ruf in their paper [193]. This framework aims to simplify and unify the treatment of predictable transformations of semimartingales, including changes of variables, stochastic integrals, and their compositions. By introducing the concept of representing a semimartingale Y through another semimartingale X using a predictable function ξ , the authors establish a measure-invariant representation where the increments of Y can be expressed in terms of the increments of X. The framework accommodates both real-valued and complex-valued semimartingales.

8. Stochastic Integration in Textbooks

Itô’s approach to stochastic integration primarily focuses on Brownian motions as integrators and càglàd integrands. This method remains the most common approach in most university courses and textbooks, such as those by Ash et al. [3], Bingham and Kiesel [16], Choe et al. [25], Ebenfeld [48], Elliott and Kopp [50], Klenke [109], Musiela and Rutkowski [163], and Øksendal [202]. This stochastic integral provides a foundation for most applications in biology, physics, and finance. Notably, Øksendal [202] and Musiela and Rutkowski [163] introduce a multidimensional variant of the classical Itô integral for Brownian motions.
Expanding beyond Brownian motion, some authors incorporate Poisson processes and Lévy processes into the framework. Tankov [191] and Applebaum [2] define the stochastic integral not only for Brownian motions but also for Poisson random measures, which facilitates the integration of Lévy processes, which are crucial for modelling jumps in financial markets.
Several works extend the stochastic integral to continuous local martingales. Chung and Williams [28] present the stochastic integral for continuous local martingales using integration with respect to random measures, and furthermore extend the set of potential integrands beyond predictable ones. Additionally, Durrett [46], Émery [54], Irle [78], Kallianpur [103], Kuo [118], Meintrup and S. [137], Revuz and Yor [176], Shreve and Karatzas [186], and Le Gall et al. [123] use the Itô approach to define the integral for (locally) square-integrable martingales or slightly weaker cases like continuous martingales. However, they do not fully extend these results to define the stochastic integral for semimartingales and tend to omit the fundamental theorem of local martingales, which simplifies the extension from locally square-integrable martingales to semimartingales.
Although Émery [54] provides an appendix discussing the general semimartingale integral, the proofs are omitted. Similarly, Dellacherie and Meyer [38] offers a rigorous treatment of semimartingale theory and stochastic integration but focuses more on the probabilistic potential side.
In some texts, like Gıhman and Skorohod [61], Kallenberg [102], the stochastic integral is first defined for continuous local martingales and semimartingales before being extended to discontinuous cases, using the classical approach for one-dimensional integrals. Similarly, Yan [199], Klebaner [108], and Irle [78] follow the classical approach, starting with the Itô integral for Brownian motions and extending it to local martingales and semimartingales. However, they often only sketch or omit proofs.
Some works, such as Shiryaev [185], first define the integral as the Itô integral and later extend it to semimartingales using the classical method. However, they restrict the set of integrands to locally bounded predictable integrands. While the general vector-valued integral is mentioned, no proofs are provided.
Similar to the historical development, Rogers and Williams [178] define the general stochastic integral by starting with Itô processes and extending to continuous local martingales, martingales, and finally semimartingales. Unlike earlier works, they use the fundamental theorem of local martingales for semimartingale integration. This book also touches on proof strategies related to the functional analytic approach, though the primary focus remains classical.
Cohen and Elliott [29] provides a detailed discussion of general predictable vector-valued integrands for general semimartingale integrators, an area only a few sources cover. This book extends beyond the traditional scope to present more advanced applications of stochastic integration with semimartingales.
Also, for a general perspective, He et al. [71] define the stochastic integral for predictable processes with respect to semimartingales directly, without introducing the Itô isometry first. However, they do not cover the vector-valued integral. Kannan and Lakshmikantham [104] follow a similar approach. A detailed treatment of general semimartingale theory is given in Jacod and Shiryaev [92], which extensively discusses limit theorems and the full semimartingale framework.
An alternative approach to the classical treatment is provided by Medvegyev [136], who defines the stochastic integral for general semimartingales by decomposing a local martingale into its continuous and discontinuous components, treating the integral separately for the discontinuous part. For a functional analytic perspective, Bichteler [14] develops a comprehensive L p theory for semimartingales, covering similar topics in stochastic integration.
Infinite-dimensional stochastic integration is treated by Métivier [140] and Métivier and Pellaumail [142], who introduce a “control process” (unrelated to stochastic control). This approach provides an equivalent definition of the stochastic integral in finite dimensions and extends it to a broader class of integrals in infinite dimensions, a topic also explored by Gyöngy and Krylov [68], Kunita [116], Walsh [194]. The theory of stochastic integration in infinite dimensions, particularly in the context of stochastic partial differential equations, is further discussed in Gawarecki and Mandrekar [59], which extends the classical theory to infinite-dimensional processes.
While most textbooks on stochastic integration follow the classical approach, a few exceptions explore the functional analytic method. Protter [172] and Bichteler [15] take a different path. Though the functional analytic approach is mentioned in several textbooks, such as Rogers and Williams [178], where it is used to justify certain results, detailed development is often limited to appendices, as seen in Émery [54]. Bichteler [14] and Protter [172] provide more in-depth treatments of this approach, particularly with respect to general semimartingales.
Additionally, textbooks such as Grigoriu [67], Koller [110], Mürmann [162], and Prigent [168] closely follow Protter’s approach, although they often omit most proofs and detailed explanations.
In Karandikar and Rao [106], the classical integral for Brownian motions is first defined, and then the integral is extended via the functional analytic approach as introduced by Protter [172].
Rüschendorf [182] employs the random measures approach to define the integral for general semimartingales in one dimension. This approach is straightforward and easy to understand, although the book contains numerous typos.
Rao [174] provides a comprehensive introduction of the general theory of stochastic processes, with a focus on aspects that underpin the study of stochastic integration. This text delves into the theoretical framework of stochastic processes, including semimartingales and martingales.

9. The Topological Stochastic Vector Integral

In this section, we introduce a novel approach to defining the stochastic vector integral through the topology of semimartingales. Traditional approaches to stochastic integration, such as the classical one and the extension from càglàd to predictable integrands in the functional analytic approach, rely heavily on the decomposition of semimartingales. However, by utilizing the semimartingale topology, we can bypass this decomposition, allowing for a direct and streamlined definition of the integral. This definition of the stochastic integral is essentially new but builds on ideas and results from the works of Chou et al. [26,90,138], Bichteler [14], Bichteler [15], Shiryaev and Cherny [184], Assefa and Harms [4] and Karandikar and Rao [105].

9.1. Semimartingales

Definition 9.1.
(a)
For a $d$-dimensional stochastic process
$$X = (X^1, \dots, X^d),$$
we put
$$X^*_t = \max_{i\in\{1,\dots,d\}}\ \sup_{s\in[0,t]} |X^i_s| \quad\text{and}\quad X^* = \sup_{t\ge 0} X^*_t.$$
(b)
A $d$-dimensional stochastic process $H : \Omega\times\mathbb{R}_+ \to \mathbb{R}^d$ is called simple predictable if it can be represented as
$$H_t(\omega) = H_0(\omega)\,\mathbf{1}_{\{0\}}(t) + \sum_{i=1}^n H_i(\omega)\,\mathbf{1}_{(T_i, T_{i+1}]}(\omega, t),$$
where $n\in\mathbb{N}$, $(T_i)_{i=1,\dots,n+1}$ are stopping times with $0 = T_1 \le T_2 \le \dots \le T_{n+1} < \infty$ and $H_i = (h_i^1, \dots, h_i^d)$ are $\mathcal{F}_{T_i}$-measurable $\mathbb{R}^d$-valued random variables that are almost surely bounded in all components for all $i = 1, \dots, n$.
The set of $\mathbb{R}^d$-valued simple predictable processes will be denoted by $\Lambda^d$.
(c)
For $H \in \Lambda^d$, we define
$$\|H\|_u := \operatorname*{ess\,sup}_{\omega\in\Omega}\ \max_{j\in\{1,\dots,d\}}\ \max_{i\in\{1,\dots,n\}} |h_i^j| = \operatorname*{ess\,sup}_{\omega\in\Omega} H^*(\omega),$$
and $\Lambda^d_u$ denotes the set $\Lambda^d$ equipped with the topology of uniform convergence in $(t,\omega)$; that means $H^n \to H$ in $\Lambda^d_u$ if $H, H^n \in \Lambda^d$ for all $n\in\mathbb{N}$ and $\|H^n - H\|_u \to 0$ as $n\to\infty$.
Remark 21.
Even though it is not always explicitly mentioned in the literature (as, for example, in [172]), it is important that the $H_i$ are bounded. Otherwise, $\|H\|_u$ would not be well-defined.
Remark 22.
A stochastic process is a d-dimensional simple predictable process if and only if each component is a one-dimensional simple predictable process.
Remark 23.
For each predictable process H, there exists a sequence of simple predictable processes H n that converges pointwise to a process that is indistinguishable from H. This is a simple consequence of the monotone class theorem (see, for example, [29, Corollary 7.4.3]).
The definition of the stochastic integral for simple predictable integrands is quite intuitive.
Definition 9.2.
Let $X$ be an $\mathbb{R}^d$-valued stochastic process with càdlàg paths and $H \in \Lambda^d$ with a representation as in (13). We define the stochastic integral $J_X : \Lambda^d \to \mathbb{D}$ by
$$J_X(H) := H\cdot X := H_0^\top X_0 + \sum_{i=1}^n H_i^\top\big(X^{T_{i+1}} - X^{T_i}\big).$$
The processes $H$ and $X$ are called integrand and integrator, respectively.
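The definition can be implemented literally on a simulated path. The sketch below — with an assumed one-dimensional integrator built from Brownian and Poisson increments, and deterministic jump times as a special case of stopping times — evaluates $H\cdot X$ via the stopped processes $X^{T_i}$ and cross-checks the terminal value against a grid Riemann–Stieltjes sum.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps = 1000
t = np.arange(n_steps + 1) / n_steps                  # time grid on [0, 1]

# An assumed one-dimensional cadlag integrator: Brownian part plus Poisson jumps
dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), n_steps)
dN = rng.poisson(2.0 / n_steps, n_steps)
X = np.r_[0.0, np.cumsum(dW + 0.5 * dN)]

# Simple predictable integrand H = H_0 1_{{0}} + sum_i H_i 1_{(T_i, T_{i+1}]}
# with deterministic jump times (a special case of stopping times)
T_i = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
H_i = np.array([1.0, -0.5, 2.0, 0.3])
H_0 = 0.7

def stopped(X, t, s):
    """Path of the stopped process X^s evaluated on the grid t."""
    idx = np.minimum(np.arange(len(t)), np.searchsorted(t, s, side="right") - 1)
    return X[idx]

# (H . X) = H_0 X_0 + sum_i H_i (X^{T_{i+1}} - X^{T_i}), evaluated on the grid
HX = H_0 * X[0] + np.zeros_like(X)
for h, a, b in zip(H_i, T_i[:-1], T_i[1:]):
    HX += h * (stopped(X, t, b) - stopped(X, t, a))

# Cross-check the terminal value against a grid Riemann-Stieltjes sum
H_grid = np.zeros(n_steps)
for h, a, b in zip(H_i, T_i[:-1], T_i[1:]):
    H_grid[(t[:-1] >= a) & (t[:-1] < b)] = h
print(HX[-1], H_0 * X[0] + np.sum(H_grid * np.diff(X)))   # the two values agree
```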
For simplicity, we will treat processes and their equivalence classes interchangeably. Consequently, the subsequent metric spaces should be viewed as quotient spaces.
Definition 9.3.
(a)
For any left- or right-continuous process $X$, we put
$$\|X\|_{ucp} := \sum_{n=1}^\infty \frac{1}{2^n}\,\mathbb{E}\big[X^*_n \wedge 1\big].$$
Then the topology induced by the metric $d_{ucp}(X,Y) := \|X - Y\|_{ucp}$ is the topology of ucp convergence.
(b)
The Émery norm of a càdlàg process $X$ is defined as
$$\|X\|_S := \sup_{\substack{K\in\Lambda^d \\ \|K\|_u \le 1}} \|K\cdot X\|_{ucp}.$$
The topology induced by the metric $d_S(X,Y) := \|X - Y\|_S$ is called the semimartingale topology or Émery topology.
(c)
For sequences $(H^n)_{n\in\mathbb{N}}$, we write $H^n \xrightarrow{ucp} H$ if it converges to a process $H$ in the topology of ucp convergence. We use $X^n \xrightarrow{S} X$ analogously.
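The ucp metric is straightforward to estimate by Monte Carlo once its series is truncated: the tail after $n$ terms is bounded by $2^{-n}$, since each summand is at most $2^{-n}$. The sketch below (truncation horizon and sample sizes are purely illustrative assumptions) estimates $d_{ucp}(W + B/m, W)$ for two independent Brownian motions $W, B$ and shows it decreasing in $m$. The Émery norm itself involves a supremum over all simple integrands and is not directly computable this way.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, horizon, steps_per_unit = 2000, 8, 200     # truncate the series at n = 8
n_steps = horizon * steps_per_unit
dt = 1.0 / steps_per_unit
t = np.arange(n_steps + 1) * dt

def ucp_metric(D, t, horizon):
    """Monte Carlo estimate of sum_{n=1}^{horizon} 2^{-n} E[D^*_n ^ 1],
    i.e. d_ucp(X, Y) applied to difference paths D = X - Y (rows = paths).
    The neglected tail of the series is at most 2^{-horizon}."""
    total = 0.0
    for n in range(1, horizon + 1):
        running_sup = np.abs(D[:, t <= n]).max(axis=1)      # D^*_n per path
        total += 2.0 ** (-n) * np.mean(np.minimum(running_sup, 1.0))
    return total

# The difference between the perturbed Brownian motion W + B/m and W is B/m,
# where B is an independent Brownian motion.
B = np.c_[np.zeros(n_paths),
          np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)]

for m in (1, 4, 16, 64):
    print(f"m = {m:3d}   d_ucp(W + B/m, W) ~ {ucp_metric(B / m, t, horizon):.4f}")
```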
Remark 24.
Even though the name suggests otherwise, the Émery norm is not a norm but only a metric (see, for example, [29, page 278] for elaborations).
Remark 25.
A sequence of processes $(X^n)_{n\in\mathbb{N}}$ converges in the ucp topology to a process $X$ if and only if we have
$$\mathbb{P}\big[(X^n - X)^*_t > K\big] \to 0 \quad\text{as } n\to\infty \text{ for each } t, K > 0.$$
This explains the name ucp (uniformly on compacts in probability) convergence.
Remark 26.
In the definition of $X^*_t$, the supremum is taken over the uncountable set $[0, t]$. This means that it might not be measurable for arbitrary processes. However, by limiting ourselves to either left-continuous or right-continuous processes, the supremum can be taken over the countable set of rational times, ensuring measurability. Additionally, it has been demonstrated that the supremum is measurable for jointly measurable processes, provided the probability space is complete. For further details and an example of a process that "behaves badly", refer to Pollard et al. [167, page 214].
In most publications, a semimartingale is defined to be a process which can be written as the sum of a local martingale and an FV-process. In some publications, a semimartingale is defined as a process under which the integral operator is continuous. We give a different definition and will see later that all of these definitions are equivalent. For clarity, we will use the term topological semimartingale to highlight the different definition.
Definition 9.4.
An $\mathbb{R}^d$-valued stochastic process $X = (X_t)_{t\ge 0}$ is called a topological semimartingale if $X$ is càdlàg, adapted, satisfies $\|X\|_S < \infty$, and for each sequence of real numbers $(\lambda_n)$ with $\lambda_n \to 0$ we have $\|\lambda_n X\|_S \to 0$. The set of all $d$-dimensional topological semimartingales will be denoted by $\mathcal{S}^d$, and we write $\mathcal{S}^d_e$ for the space $\mathcal{S}^d$ endowed with the semimartingale topology.
Remark 27.
By the definition, it is obvious that S d is a topological vector space.
In the functional analytic approach, as presented by Protter [172], a semimartingale is defined as a “good integrator”, which means a process for which the integral operator is continuous with respect to certain topologies. The next theorem shows that this definition is equivalent to ours.
Theorem 9.5.
Let $X$ be an adapted càdlàg process. Then $X$ is a topological semimartingale if and only if for each sequence of simple predictable processes $(H^n)_{n\in\mathbb{N}}$ with $\|H^n\|_u \to 0$, the random variables $(H^n\cdot X)_t$ converge to $0$ in probability for each $t\ge 0$.
Proof. 
The key distinction between the two criteria lies in their convergence requirements: the semimartingale definition demands uniform convergence, whereas the latter criterion requires only pointwise convergence in probability. Therefore, it is sufficient to demonstrate that
$$\sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \mathbb{E}\big[|(K\cdot X)_t|\big] = \sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \mathbb{E}\big[(K\cdot X)^*_t\big].$$
The remaining argument then follows from the linearity of both the expectation and the integral operator.
For $t > 0$, $\lambda > 0$ and $G \in \Lambda^d$ with $\|G\|_u < 1$, define the stopping time $T$ and the indicator process $H$ as
$$T = \inf\{s > 0;\ |(G\cdot X)_s| > \lambda\}\wedge t, \qquad H = \mathbf{1}_{[0,T]}.$$
Then, we have
$$\mathbb{P}\big[(G\cdot X)^*_t > \lambda\big] \le \mathbb{P}\big[|(G\cdot X)_T| \ge \lambda\big] = \mathbb{P}\big[|((GH)\cdot X)_t| \ge \lambda\big].$$
By basic probability theory, we have $\mathbb{E}\big[(K\cdot X)^*_t\big] = \int_0^\infty \mathbb{P}\big[(K\cdot X)^*_t > \lambda\big]\,d\lambda$, and since $GH \in \Lambda^d$ with $\|GH\|_u \le 1$, we get
$$\sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \mathbb{E}\big[(K\cdot X)^*_t\big] = \sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \int_0^\infty \mathbb{P}\big[(K\cdot X)^*_t > \lambda\big]\,d\lambda \le \sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \int_0^\infty \mathbb{P}\big[|(K\cdot X)_t| \ge \lambda\big]\,d\lambda = \sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \mathbb{E}\big[|(K\cdot X)_t|\big].$$
The converse inequality is trivial. □
Remark 28.
Theorem 9.5 states that semimartingales are the càdlàg processes for which the integral operator is continuous. It can be shown that different types of convergence could be chosen in Theorem 9.5 that would yield an equivalent definition of semimartingales (see [106, Theorem 5.89]).

9.2. Properties of Semimartingales

Theorem 9.6.
(a)
An R d -valued càdlàg process X is a d-dimensional topological semimartingale if and only if all components are one-dimensional topological semimartingales.
(b)
Local topological semimartingales and processes that are prelocally topological semimartingales are topological semimartingales.
(c)
Let $\mathbb{Q}$ be a probability measure that is absolutely continuous with respect to $\mathbb{P}$. Then every topological $\mathbb{P}$-semimartingale is also a topological $\mathbb{Q}$-semimartingale.
Proof. 
(a)
If all components are topological semimartingales, the result follows from the triangle inequality. For the converse, we note that for $K^i \in \Lambda^1$ with $\|K^i\|_u \le 1$ we have $\tilde K^i := (0, \dots, 0, K^i, 0, \dots, 0) \in \Lambda^d$ with $\|\tilde K^i\|_u \le 1$. Therefore, we get for $\lambda_n > 0$
$$\sup_{\substack{K\in\Lambda^1\\ \|K\|_u\le 1}} \|K\cdot(\lambda_n X^i)\|_{ucp} \le \sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \|K\cdot(\lambda_n X)\|_{ucp}.$$
(b)
We use Theorem 9.5. Let $(H^n)_{n\in\mathbb{N}}$ be a sequence in $\Lambda^d$ with $\|H^n\|_u \to 0$. We have to show that $(H^n\cdot X)_t \to 0$ in probability for each $t$. Let $X$ be a local topological semimartingale with localizing sequence $(T_k)_{k\in\mathbb{N}}$. For fixed $t$ and $k$, we define
$$R_k = \begin{cases} T_k & \text{if } T_k \le t,\\ \infty & \text{if } T_k > t.\end{cases}$$
Now we have, for every $k$,
$$\mathbb{P}\big[|(H^n\cdot X)_t| \ge \varepsilon\big] \le \underbrace{\mathbb{P}\big[|(H^n\cdot X^{T_k})_t| \ge \varepsilon\big]}_{\to\,0 \text{ as } n\to\infty \text{ by assumption and Theorem 9.5}} + \underbrace{\mathbb{P}\big[R_k \le t\big]}_{=\,\mathbb{P}[T_k \le t]\ \to\ 0 \text{ as } k\to\infty \text{ since } T_k\to\infty}.$$
Therefore, X is a topological semimartingale. If X is prelocally a topological semimartingale, then the proof proceeds analogously.
(c)
Since convergence in probability in P implies convergence in probability in Q , this is obvious.
Historically, semimartingales have been defined as the set of processes that can be written as the sum of a local martingale and an FV-process. In the functional analytical approach, semimartingales have been defined as the processes for which the integral operator is continuous (see Theorem 9.5). In fact, these definitions are equivalent, which was proved in the late 1970s independently of each other by Klaus Bichteler [13,14] and Claude Dellacherie with contributions by Paul-André Meyer and Gabriel Mokobodzki [36]. In addition to the literature mentioned above, the theorem is also discussed and proven in modern textbooks such as in Cohen and Elliott [29, Theorem 12.3.26], in Karandikar and Rao [106, Theorem 5.59], and in Protter [172, Theorem III.47]. While the latter two employ classical methods, the former adopts an approach by Beiglböck, who has presented alternative proofs in Beiglböck and Siorpaes [7], Beiglböck et al. [8,9].
As the topological stochastic integral does not rely on the decomposition of semimartingales, the theory does not require the Bichteler-Dellacherie-Mokobodzki Theorem. However, for the sake of completeness and because it provides many examples of semimartingales, we present it here.
Theorem 9.7
(Bichteler-Dellacherie-Mokobodzki Theorem). Let X be a càdlàg process. The following are equivalent:
(i)
X is a topological semimartingale;
(ii)
X is a classical semimartingale in each component;
(iii)
X is a good integrator in each component.
Before we proceed with the proof of Theorem 9.7, we will demonstrate two inequalities that are useful when dealing with the topological stochastic integral. The first one, a variant of the Itô inequality, shows that square-integrable martingales are semimartingales. The second, based on the Burkholder inequality, demonstrates the same for “ordinary” martingales. While the latter is sufficient for our purposes here, we will present both for the sake of completeness.
Theorem 9.8.
Let M be a d-dimensional càdlàg process and H Λ d .
(a)
If M is a square-integrable martingale, then we have
$$\mathbb{E}\big[\big((H\cdot M)_t\big)^2\big] \le d\,\|H\|_u^2\,\mathbb{E}\big[\|M_t\|_{\mathbb{R}^d}^2\big].$$
(b)
If M is a martingale, then the following inequality holds:
$$\alpha\,\mathbb{P}\big[(H\cdot M)^*_t \ge \alpha\big] \le 18\,\|H\|_u\, d \sum_{j=1}^d \mathbb{E}\big[|M^j_t|\big].$$
Proof. 
(a)
By Theorem 2.6, an application of the tower rule and the Cauchy-Schwarz inequality, we get
$$\begin{aligned}
\mathbb{E}\big[\big((H\cdot M)_t\big)^2\big] &= \mathbb{E}\bigg[\Big(H_0^\top M_0 + \sum_{i=1}^n H_i^\top\big(M_{t\wedge T_{i+1}} - M_{t\wedge T_i}\big)\Big)^2\bigg] = \mathbb{E}\bigg[\big(H_0^\top M_0\big)^2 + \sum_{i=1}^n \Big(H_i^\top\big(M_{t\wedge T_{i+1}} - M_{t\wedge T_i}\big)\Big)^2\bigg]\\
&\le \mathbb{E}\bigg[\|H_0\|_{\mathbb{R}^d}^2\,\|M_0\|_{\mathbb{R}^d}^2 + \sum_{i=1}^n \|H_i\|_{\mathbb{R}^d}^2\,\big\|M_{t\wedge T_{i+1}} - M_{t\wedge T_i}\big\|_{\mathbb{R}^d}^2\bigg]\\
&\le d\,\|H\|_u^2\,\mathbb{E}\bigg[\|M_0\|_{\mathbb{R}^d}^2 + \sum_{i=1}^n \big\|M_{t\wedge T_{i+1}} - M_{t\wedge T_i}\big\|_{\mathbb{R}^d}^2\bigg] \le d\,\|H\|_u^2\,\mathbb{E}\big[\|M_t\|_{\mathbb{R}^d}^2\big].
\end{aligned}$$
(b)
The idea is to transform the time-continuous statement into a time-discrete one and then apply Theorem 2.7. We first assume $d = 1$ and $\sup_t |H_t| \le 1$. Since $H \in \Lambda^1$, there is a representation
$$H_t = H_0\,\mathbf{1}_{\{0\}}(t) + \sum_{i=1}^n H_i\,\mathbf{1}_{(T_i, T_{i+1}]}(t),$$
with stopping times $T_i$. By the tower rule and Doob's optional stopping theorem, one easily sees that $(M_{T_i\wedge T})_{i\in\mathbb{N}}$ is a time-discrete martingale for any stopping time $T$.
By the definition of the stochastic integral for simple predictable integrands, we obtain $(H\cdot M)_{T_i} = (H * M)_i$, where $*$ denotes the time-discrete stochastic integral as in Theorem 2.7. An application of Theorem 2.7 yields the result for $d = 1$. For general $d$ and $H$, we obtain
$$\alpha\,\mathbb{P}\big[(H\cdot M)^*_t \ge \alpha\big] = \alpha\,\mathbb{P}\Big[\Big(\tfrac{H}{\|H\|_u}\cdot M\Big)^*_t \ge \tfrac{\alpha}{\|H\|_u}\Big] \le \alpha \sum_{j=1}^d \mathbb{P}\Big[\Big(\tfrac{H^j}{\|H\|_u}\cdot M^j\Big)^*_t \ge \tfrac{\alpha}{d\,\|H\|_u}\Big] \le 18\,\|H\|_u\, d \sum_{j=1}^d \mathbb{E}\big[|M^j_t|\big].$$
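The first inequality of Theorem 9.8 is easy to probe numerically. The following Monte Carlo sketch uses an assumed two-dimensional Brownian martingale and a deterministic step integrand (all parameters are illustrative) and compares both sides.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, d = 100_000, 200, 2
dt = 1.0 / n_steps
s = np.arange(n_steps) * dt

# Increments of a d-dimensional Brownian martingale, shape (paths, steps, d)
dM = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, d))
M_1 = dM.sum(axis=1)                                   # terminal values M_1

# A deterministic step integrand (a special case of a simple predictable process)
H = np.stack([np.where(s < 0.5, 1.0, -0.8),
              np.where(s < 0.3, 0.4, 1.0)], axis=1)    # shape (steps, d)
H_u = np.abs(H).max()                                  # ||H||_u

HdotM = np.einsum("psd,sd->p", dM, H)                  # (H . M)_1 per path

lhs = np.mean(HdotM ** 2)                              # E[((H . M)_1)^2]
rhs = d * H_u ** 2 * np.mean((M_1 ** 2).sum(axis=1))   # d ||H||_u^2 E[||M_1||^2]
print(f"E[((H.M)_1)^2] = {lhs:.3f} <= {rhs:.3f} = d ||H||_u^2 E[||M_1||^2]")
```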
We now outline the proof of the Bichteler-Dellacherie-Mokobodzki Theorem.
Proof of Theorem 9.7
We first assume $X = M + A$ with a local martingale $M$ and an FV-process $A$. For the FV-process $A$ with $A_0 = 0$, we have
$$\operatorname{Var}(H\cdot A) = \operatorname{Var}\Big(\sum_{i=1}^n H_i^\top\big(A^{T_{i+1}} - A^{T_i}\big)\Big) \le \sum_{i=1}^n\sum_{j=1}^d |h_i^j|\,\operatorname{Var}\big(A^{j,T_{i+1}} - A^{j,T_i}\big) \le \|H\|_u \sum_{j=1}^d \operatorname{Var}\big(A^j\big).$$
For a sequence $(H^n)_{n\in\mathbb{N}}$ with $\|H^n\|_u \to 0$, we see that $(H^n\cdot A)_t$ converges in probability to $0$. Thus, $A$ is a semimartingale by Theorem 9.5. For any martingale $M$, we can proceed analogously by applying Theorem 9.8 and see that $M$ is a semimartingale. Via localization, we get that any local martingale is locally a semimartingale; thus, by Theorem 9.6, any local martingale is a semimartingale. By Theorem 9.6, $M + A$ is a semimartingale as well.
The converse is elegantly proven in [7]. The proof can be readily adapted to our context with minimal changes and does not require the definition or properties of the stochastic integral. Therefore, we will not repeat it here; instead, we will refer to the original article. □
Remark 29.
By employing the Fundamental Theorem of Local Martingales, we can assume without loss of generality that any semimartingale can be decomposed into X = M + A , where A is a finite variation process, and M is a locally bounded local martingale. Therefore, the first inequality in Theorem 9.8 would suffice to demonstrate one direction of the Bichteler-Dellacherie-Mokobodzki Theorem.
Example. 
  • Since a Brownian motion is a martingale, it is also a semimartingale.
  • A counting process (and hence a Poisson process) is an FV-process and hence a semimartingale. The compensated Poisson process is a martingale and thus also a semimartingale.
  • Any Lévy process can be written as a sum of a local martingale and an FV-process. Hence, each Lévy process is a semimartingale.
Example. 
Probably the most prominent example of a stochastic process that is not a semimartingale is the fractional Brownian motion $B^H$ with Hurst parameter $H \in (0,1)\setminus\{\tfrac12\}$ (see, for example, [12] for a comprehensive treatment). The fact that a fractional Brownian motion cannot be a semimartingale for $H \ne \tfrac12$ has been proved by several authors (see, for example, Liptser and Shiryaev [128, Example 2 of Section 4.9.13], [127, Corollary 2.2] or Rogers [179, Section 2]). However, these authors use the classical definition of semimartingales. With our definition, it is possible to present an alternative and short proof for $H < \tfrac12$.
Let X be a fractional Brownian Motion on the interval [ 0 , T ] with X 0 = 0 . Furthermore, let π be the set of partitions of the interval [ 0 , T ] .
For a partition $\pi_0 = \{t_0 = 0, \dots, t_n = T\} \in \pi$ and $N \in \mathbb{N}$, we define
$$H^{\pi_0,N} = -\sum_{j=0}^{n-1} \mathbf{1}_{\{|X_{t_j}|\le N\}}\,\frac{X_{t_j}}{N}\,\mathbf{1}_{(t_j, t_{j+1}]}.$$
Then $H^{\pi_0,N} \in \Lambda^1$ with $\|H^{\pi_0,N}\|_u \le 1$ and, for $\lambda > 0$, on the set
$$A_{\pi_0,N,\lambda} := \Big\{ X^*_T \le N \Big\} \cap \Big\{ \sum_{j=0}^{n-1} \big(X_{t_{j+1}} - X_{t_j}\big)^2 > \tfrac{2N}{\lambda} + N^2 \Big\}$$
we have (note that on $A_{\pi_0,N,\lambda}$ all indicators appearing in $H^{\pi_0,N}$ equal $1$)
$$\big(H^{\pi_0,N}\cdot(\lambda X)\big)_T = -\frac{\lambda}{N}\sum_{j=0}^{n-1} X_{t_j}\big(X_{t_{j+1}} - X_{t_j}\big) = \frac{\lambda}{N}\sum_{j=0}^{n-1}\big(X_{t_j}^2 - X_{t_j}X_{t_{j+1}}\big) = \frac{\lambda}{2N}\Big(\sum_{j=0}^{n-1}\big(X_{t_{j+1}} - X_{t_j}\big)^2 - X_T^2\Big) > \frac{\lambda}{2N}\Big(\frac{2N}{\lambda} + N^2 - N^2\Big) = 1.$$
Therefore, $\mathbb{P}\big[\big(H^{\pi_0,N}\cdot(\lambda X)\big)_T > 1\big] \ge \mathbb{P}\big[A_{\pi_0,N,\lambda}\big]$, and it remains to show that $\mathbb{P}\big[A_{\pi_0,N,\lambda}\big]$ is greater than some positive constant independent of $\lambda$ for feasible $\pi_0$ and $N$.
It is well known that the quadratic variation of the fractional Brownian motion with $H < \tfrac12$ is unbounded in probability (see, for example, [51, Lemma 4.1.2]). That means that
$$c := \lim_{\lambda\to 0}\ \sup_{\pi_0\in\pi}\ \mathbb{P}\Big[\sum_{j=0}^{n-1}\big(X_{t_{j+1}} - X_{t_j}\big)^2 > \tfrac{1}{\lambda}\Big]$$
exists and is greater than $0$.
In particular, that means for each $N \in \mathbb{N}$ and $\lambda > 0$ we can select a partition $\pi_0 = (t_0, \dots, t_n) \in \pi$ such that
$$\mathbb{P}\Big[\sum_{j=0}^{n-1}\big(X_{t_{j+1}} - X_{t_j}\big)^2 > \tfrac{2N}{\lambda} + N^2\Big] > \frac{c}{2}.$$
Since $X$ is continuous (and hence $X^*_T < \infty$ almost surely), we can choose $N \in \mathbb{N}$ such that
$$\mathbb{P}\big[X^*_T > N\big] < \frac{c}{4}.$$
With Equations (17) and (16), we conclude
$$\mathbb{P}\big[A_{\pi_0,N,\lambda}\big] > \frac{c}{4} > 0.$$
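The key ingredient of this proof — that the quadratic variation of fractional Brownian motion with $H < \tfrac12$ blows up along refining partitions — can also be observed numerically. The sketch below simulates fBm by a Cholesky factorization of its covariance (an assumed, brute-force sampling method that is adequate for moderate grid sizes) and prints the average sum of squared increments, which grows like $n^{1-2H}$.

```python
import numpy as np

rng = np.random.default_rng(5)
hurst, T = 0.3, 1.0          # Hurst parameter H < 1/2 (values are illustrative)

def fbm_path(n, hurst, T, rng):
    """Sample fractional Brownian motion on n equidistant steps of [0, T] via a
    Cholesky factorisation of its covariance (adequate for moderate n)."""
    s = np.linspace(T / n, T, n)
    S, U = np.meshgrid(s, s)
    cov = 0.5 * (S ** (2 * hurst) + U ** (2 * hurst) - np.abs(S - U) ** (2 * hurst))
    return np.r_[0.0, np.linalg.cholesky(cov) @ rng.normal(size=n)]

for n in (50, 200, 800):
    qv = np.mean([np.sum(np.diff(fbm_path(n, hurst, T, rng)) ** 2)
                  for _ in range(20)])
    print(f"n = {n:4d}   mean sum of squared increments ~ {qv:.2f}"
          f"   (theory: n^(1-2H) = {n ** (1 - 2 * hurst):.2f})")
```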
Example. 
Following a similar strategy as above, one can show that for a Brownian motion $B$, the process $|B|^\alpha$ with $0 < \alpha < 1$ is not a semimartingale either. A proof of this fact in the classical semimartingale framework can be found in Wang [195] or Yor [201].

9.3. The General Stochastic Integral

To extend the integral, we are interested in the topological properties of the above-defined spaces. These were initially studied by Émery [52,53] and Memin [138]. We note that the Émery norm was initially defined to take the supremum over all bounded predictable processes and not only simple predictable processes. But it was shown in Memin [138] that these definitions are equivalent.
Definition 9.9.
For $X \in \mathcal{S}^d$ and $H \in \Lambda^d$, we put $\|H\|_X := \|H\cdot X\|_S$. Then $d_X(H,K) := \|H - K\|_X$ is a metric on $\Lambda^d$. The corresponding topological space is denoted by $\Lambda^d_X$, and for a sequence $(H^n)_{n\in\mathbb{N}}$ we will write $H^n \xrightarrow{X} H$ if $H^n$ converges to a process $H$ with respect to $\|\cdot\|_X$.
Remark 30.
We re-emphasize that we work with equivalence classes of processes. Specifically, the metric $d_X(H,K) := \|H - K\|_X$ is defined on the quotient space $\Lambda^d / \mathcal{N}_X$, where $\mathcal{N}_X = \{K \in \Lambda^d;\ \|K\|_X = 0\}$.
Theorem 9.10.
Let H n Λ d and X n S d for all n.
(a)
If $H^n$ converges to $H$ in the ucp topology, then it also converges to $H$ with respect to the topology induced by $\|\cdot\|_X$. This implies that the ucp topology is finer than the topology induced by $\|\cdot\|_X$.
(b)
If X n converges to X in the semimartingale topology, then it also converges to X in the ucp topology. This means that the semimartingale topology is finer than the ucp topology.
Proof. 
(a)
Let $t \in \mathbb{R}_+$ be fixed and $\varepsilon > 0$. Since $X$ is a semimartingale, we can always find a $c > 0$ such that $\|cX\|_S \le \varepsilon$. Without loss of generality, we assume $H^n \xrightarrow{ucp} 0$ and hence we can always find an $N \in \mathbb{N}$ such that $\mathbb{P}\big[(H^n)^*_t > c\big] \le \varepsilon$ for all $n \ge N$. For $K \in \Lambda^1$, it is easy to check that we have
$$\big(K\cdot(H^n\cdot X)\big)_t = \big((K H^n)\cdot X\big)_t.$$
We obtain for all $n \ge N$ and $K \in \Lambda^1$ with $\|K\|_u \le 1$
$$\begin{aligned}
\mathbb{P}\big[\big(K\cdot(H^n\cdot X)\big)^*_t > \varepsilon\big] &= \mathbb{P}\big[\big((K H^n)\cdot X\big)^*_t > \varepsilon\big]\\
&= \mathbb{P}\big[\big((K H^n)\cdot X\big)^*_t > \varepsilon,\ (H^n)^*_t > c\big] + \mathbb{P}\big[\big((K H^n)\cdot X\big)^*_t > \varepsilon,\ (H^n)^*_t \le c\big]\\
&\le \mathbb{P}\big[(H^n)^*_t > c\big] + \mathbb{P}\big[\big(c(K\cdot X)\big)^*_t > \varepsilon\big]\\
&\le \varepsilon + \varepsilon.
\end{aligned}$$
Hence, $K\cdot(H^n\cdot X)$ converges to $0$ uniformly on compacts in probability, we obtain $\|H^n\|_X \le 2\varepsilon$ for all $n \ge N$, and thus $\|H^n\|_X \to 0$.
(b)
This follows immediately from
$$\|X\|_S = \sup_{\substack{K\in\Lambda^d\\ \|K\|_u\le 1}} \|K\cdot X\|_{ucp} \ge \|X\|_{ucp}.$$
Remark 31.
The converse of these statements does not hold (see the following example). However, for martingales $M^n, M$, it can be shown that $M^n \to M$ in $\mathcal{H}^1$ implies $M^n \xrightarrow{S} M$ (see, for example, [26]).
Example. 
Let $X_t(\omega) = t$ and
$$H^n_t(\omega) = \mathbf{1}_{\{0\}}(t) + \sum_{k=1}^{n^2} \Big(\frac{k-1}{n^2}\Big)^n\,\mathbf{1}_{\left(\frac{k-1}{n^2},\,\frac{k}{n^2}\right]}(t).$$
Since $X$ is increasing and $H^n$ is positive and zero for all $t > 1$, we have
$$(H^n\cdot X)_t \le (H^n\cdot X)_1 = \sum_{k=1}^{n^2} \Big(\frac{k-1}{n^2}\Big)^n\,\frac{1}{n^2} < \int_0^1 x^n\,dx = \frac{1}{n+1} \to 0.$$
Hence, we have $\|H^n\cdot X\|_{ucp} \to 0$ for $n\to\infty$, and one easily sees (again because of the monotonicity of $X$ and since $H^n \ge 0$) that we also have $\|H^n\cdot X\|_S \to 0$ and thus $\|H^n\|_X \to 0$.
But we also have
$$(H^n)^*_1 = \Big(\frac{n^2 - 1}{n^2}\Big)^n \to 1.$$
Hence, for no $\varepsilon < 1$ does there exist an $N \in \mathbb{N}$ such that $(H^n)^*_1 < \varepsilon$ for all $n \ge N$. Thus, we conclude that $H^n$ does not converge to $0$ in ucp.
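The two claims of this example are elementary to verify numerically: $(H^n\cdot X)_1$ is a left Riemann sum of $\int_0^1 x^n\,dx$ and hence tends to $0$, while $(H^n)^*_1 = (1 - 1/n^2)^n$ tends to $1$. A minimal sketch:

```python
import numpy as np

# (H^n . X)_1 = sum_{k=1}^{n^2} ((k-1)/n^2)^n / n^2   and   (H^n)^*_1 = (1 - 1/n^2)^n
for n in (2, 5, 10, 50, 100):
    k = np.arange(1, n ** 2 + 1)
    integral = np.sum(((k - 1) / n ** 2) ** n) / n ** 2
    sup_norm = (1.0 - 1.0 / n ** 2) ** n
    print(f"n = {n:3d}   (H^n . X)_1 = {integral:.5f} < 1/(n+1) = {1 / (n + 1):.5f}"
          f"   (H^n)^*_1 = {sup_norm:.5f}")
```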
The following results were initially shown by Memin [138]. In our setting, we can provide a simplified proof.
Theorem 9.11.
The space of semimartingales is complete under the semimartingale topology.
Proof. 
Let $(X^n)_{n\in\mathbb{N}}$ be a Cauchy sequence with respect to $\|\cdot\|_S$. By Theorem 9.10, it is also Cauchy in ucp and hence converges in ucp to a càdlàg process $X$. For $H, H^m \in \Lambda^d$ with $\|H^m - H\|_u \to 0$, we get
$$\|H^m\cdot X - H\cdot X\|_{ucp} \le \|H^m\cdot X - H^m\cdot X^n\|_{ucp} + \|H^m\cdot X^n - H\cdot X^n\|_{ucp} + \|H\cdot X^n - H\cdot X\|_{ucp}.$$
For fixed $n$, the middle term converges to $0$ as $m\to\infty$ because $X^n$ is a semimartingale. Since $X^n \to X$ in the semimartingale topology and the processes $H^m$ are uniformly bounded, the two outer terms converge to $0$ as $n\to\infty$, uniformly in $m$. Therefore, $H^m\cdot X \to H\cdot X$ in ucp, and Theorem 9.5 shows that $X$ is a semimartingale. □
Semimartingales, as integrators, form the largest class of processes for which the integral operator has the desirable properties discussed above. The integral, however, has so far only been defined for simple predictable integrands. Now we are going to enlarge the class of possible integrands.
Let $X \in \mathcal{S}^d$. We examine the operator $J_X$ from $\Lambda^d_X$ to $\mathcal{S}^d_e$:
$$J_X : \Lambda^d_X \to \mathcal{S}^d_e, \qquad H \mapsto H\cdot X.$$
The function is well defined as we work with equivalence classes of functions on a quotient space.
By the definition of the corresponding metrics, it is clear that J X is a linear isometry between topological vector spaces and hence is uniformly continuous. Therefore, by the completion of maps theorem (see, for example, Jänich [95, page 55]), we can extend the mapping continuously. From classical theory, we know that the predictability of the integrand is an essential property for ensuring desirable features, such as the stability of the local martingale property under integration with respect to locally bounded integrands. Consequently, we choose to close Λ X d as a subspace of the set of predictable processes.
This justifies the following definition.
Definition 9.12.
Let $X$ be a $d$-dimensional semimartingale, $L(X) := \overline{\Lambda^d_X}$ the closure of $\Lambda^d_X$ in $\mathcal{P}$, and $\bar J_X$ the unique continuous extension of $J_X$. Then we define
$$\bar J_X : L(X) \to \mathcal{S}^d_e, \qquad H \mapsto H\cdot X := \bar J_X(H).$$
We will call $H\cdot X$ the stochastic integral of $H$ with respect to $X$.
Because of the uniqueness of the extension, we conclude that the stochastic integral coincides with the stochastic integral defined via the classical or functional analytic approach.
One might wonder whether we could obtain a broader range of possible integrands by defining the integral for all processes that are only (pre)locally in L ( X ) , similar to how it was done by Meyer (see Section 3.5). However, this is not the case, as the following theorem demonstrates.
Theorem 9.13.
Let $H \in \mathcal{P}$ be locally in $L(X)$. Then $H \in L(X)$. In other words,
$$L(X)_{\mathrm{loc}} = L(X).$$
We utilize the following lemma for the proof.
Lemma 9.14.
Let $(X^n)_{n\in\mathbb{N}}$ be a sequence in $\mathcal{S}^d$, $X \in \mathcal{S}^d$, and $(T_m)_{m\in\mathbb{N}}$ be an increasing sequence of stopping times with $T_m \to \infty$ almost surely. Assume that $(X^n)^{T_m} \to X^{T_m}$ in $\mathcal{S}^d$ for all $m$. Then $X^n \to X$ in $\mathcal{S}^d$.
Proof. 
Assume that $X^n \nrightarrow X$ in $\mathcal{S}^d$. Then there is a $\delta > 0$ with $\|X^n - X\|_{\mathcal{S}^d} > \delta$ for infinitely many $n$; passing to this subsequence and unwinding the definitions of $\|\cdot\|_{\mathcal{S}^d}$ and $\|\cdot\|_{ucp}$, we find an $\varepsilon > 0$, a $t > 0$ and, for every $n$, a process $K \in \Lambda^d$ with $\|K\|_u \le 1$ such that
$$\mathbb{P}\big[\big(K\cdot(X^n - X)\big)^*_t > \varepsilon\big] \ge \varepsilon.$$
However, for every $m$ we have
$$\big\|\big(K\cdot(X^n - X)\big)^{T_m}\big\|_{ucp} \le \sup_{\substack{H\in\Lambda^d\\ \|H\|_u\le 1}} \big\|\big(H\cdot(X^n - X)\big)^{T_m}\big\|_{ucp} = \big\|(X^n - X)^{T_m}\big\|_{\mathcal{S}^d}.$$
Since $T_m \to \infty$ almost surely, there exists an $\bar m$ such that $\mathbb{P}[T_{\bar m} < t] < \frac{\varepsilon}{2}$. By assumption, the right-hand side of the last display with $m = \bar m$ tends to $0$ as $n\to\infty$, uniformly over all such $K$, which implies that there exists an $\tilde n \in \mathbb{N}$ such that
$$\mathbb{P}\big[\big(\big(K\cdot(X^n - X)\big)^{T_{\bar m}}\big)^*_t > \varepsilon\big] < \frac{\varepsilon}{2}$$
for all $n \ge \tilde n$. Thus, for all $n \ge \tilde n$, we obtain
$$\mathbb{P}\big[\big(K\cdot(X^n - X)\big)^*_t > \varepsilon\big] \le \mathbb{P}\big[\big(\big(K\cdot(X^n - X)\big)^{T_{\bar m}}\big)^*_t > \varepsilon\big] + \mathbb{P}\big[T_{\bar m} < t\big] < \varepsilon,$$
which contradicts Equation (18). Therefore, we conclude that $X^n \to X$ in $\mathcal{S}^d$. □
Proof of Theorem 9.13
Since H L ( X ) loc , there exists a sequence of stopping times T n tending to infinity such that H T n L ( X ) .
As $H^{T_n} \in L(X)$, there exists a sequence $(H^{n,m})_{m\in\mathbb{N}}$ in $\Lambda^d$ with $H^{n,m} \xrightarrow{X} H^{T_n}$ as $m\to\infty$, which is equivalent to $H^{n,m}\cdot X \xrightarrow{S} H^{T_n}\cdot X$.
By a diagonal argument, we can therefore choose a sequence $(H^{\tilde n})_{\tilde n\in\mathbb{N}}$ in $\Lambda^d$ such that for all $m$
$$(H^{\tilde n}\cdot X)^{T_m} \xrightarrow{S} \big(H^{T_m}\cdot X\big)^{T_m}.$$
By Lemma 9.14, $H^{\tilde n}\cdot X$ converges in $\mathcal{S}^d$ to the process that coincides with $H^{T_m}\cdot X$ on $[0, T_m]$ for every $m$, and thus $H \in L(X)$. □
Remark 32.
For any semimartingale X, the space { H · X } with H L ( X ) is complete with respect to the semimartingale topology. This result was first established by Memin [138]. An alternative proof is presented in [29, Theorem A.6.17]. For the topological stochastic integral, this follows directly from the definition of the stochastic integral.
So far, we have defined the stochastic integral in great generality, but we hardly have any information about the set L ( X ) . The following theorem provides more information.
Theorem 9.15.
Let $X \in \mathcal{S}^d$. Then
$$\mathbb{L} \cup \mathrm{b}\mathcal{P} \subseteq (\mathrm{b}\mathcal{P})_{\mathrm{loc}} \subseteq L(X) = L(X)_{\mathrm{loc}} \subseteq \mathcal{P}.$$
Proof. 
By definition, $L(X)$ consists of predictable processes. A simple application of the monotone class theorem yields $\mathrm{b}\mathcal{P} \subseteq L(X)$ (a fact that is also shown in the proof of Theorem A.6). With Theorem 9.13, we get $(\mathrm{b}\mathcal{P})_{\mathrm{loc}} \subseteq L(X)$. □
Remark 33.
Usually, we even have $\mathbb{L} \subseteq L(X) \subsetneq \mathcal{P}$. In Section 9.4, we provide examples of predictable processes that are not in $L(X)$ and demonstrate that there are processes in some $L(X)$ that are not càglàd processes.
Remark 34.
Corollary 9.17 and Theorem A.11 will provide a more detailed understanding of $L(X)$.
Remark 35.
Even though, for a given semimartingale X, there are usually predictable processes that are not in L ( X ) , each predictable process can be pointwise almost surely approximated by simple predictable processes and, therefore, processes from L ( X ) (Remark 23).
The following theorem will be instrumental in the proof of the properties of the stochastic integral. As, at this stage, we do not have the basic properties at hand, we need to resort to a different proof than the one that is used in the standard literature. Since it is a long and technical proof, we have included it in the appendix.
Theorem 9.16
(Dominated Convergence Theorem). Let $X \in \mathcal{S}^d$ be a semimartingale, and let $H^m = (\tilde H^{1,m}, \dots, \tilde H^{d,m}) \in \mathcal{P}$ be processes converging a.s. to a limit $H$. If there exists a process $G = (G^1, \dots, G^d)$ with $G^i \in L(X^i)$ for all $i \in \{1, \dots, d\}$ such that $|\tilde H^{i,m}| \le G^i$ for all $i \in \{1, \dots, d\}$ and all $m \in \mathbb{N}$, then $H^m, H \in L(X)$ and $H^m\cdot X$ converges to $H\cdot X$ in the semimartingale topology (and hence also in ucp).
The proof can be found in Appendix A.
Remark 36.
The condition requiring $G$ to be componentwise integrable is crucial, as will be illustrated by the example in Section 9.6: one cannot simply reduce the values of a component without potentially compromising integrability, since higher values in one component might be necessary to offset high values in other components.
The dominated convergence theorem has a useful corollary that can help comprehend the space of integrands more thoroughly.
Corollary 9.17.
Let $X$ be a semimartingale and $(H^n)_{n\in\mathbb{N}}$ be a sequence of uniformly locally bounded predictable processes; that is, for a sequence of stopping times $(T_n)_{n\in\mathbb{N}}$ increasing to infinity we have $\sup_m |(H^m)^{T_n}| \le K_n$ for some constants $K_n$. Suppose that $H^n_t \to H_t$ almost surely for every $t$. Then $H \in L(X)$ and $H^n \xrightarrow{X} H$.
As there is no version of the dominated convergence theorem that guarantees the integrability of a process which is not componentwise integrable, the following result is very useful. The proof can be found in the appendix.
Theorem 9.18.
Let $X \in \mathcal{S}^d$ and let $H$ be a $d$-dimensional predictable process. Then $H \in L(X)$ if and only if $H^n\cdot X$ with $H^n = H\,\mathbf{1}_{\{\max_i |H^i| \le n\}}$ converges in the semimartingale topology.

9.4. Properties of the Stochastic Integral

In this section, we are going to examine the most important properties of the stochastic integral.
The following theorem lists most of the important properties of the stochastic integral. All these properties are well known. However, because of our different approach, our proofs differ from the ones in the mentioned literature.
Theorem 9.19.
Let X be a d-dimensional semimartingale and H L ( X ) .
(a)
For $\alpha, \beta \in \mathbb{R}$ and $J \in L(X)$, we have $\alpha H + \beta J \in L(X)$ and
$$(\alpha H + \beta J)\cdot X = \alpha\,(H\cdot X) + \beta\,(J\cdot X).$$
Thus, L ( X ) is a vector space.
(b)
For $Y \in \mathcal{S}^d$ and $H \in L(X)\cap L(Y)$, we have $H \in L(X + Y)$ and
$$H\cdot(X + Y) = H\cdot X + H\cdot Y.$$
(c)
The process $\big(\Delta(H\cdot X)_s\big)_{s\ge 0}$ is indistinguishable from $\big(H_s^\top(\Delta X_s)\big)_{s\ge 0}$.
(d)
We have
$$(H\cdot X)^T = \big(H\,\mathbf{1}_{[0,T]}\big)\cdot X = H\cdot\big(X^T\big).$$
(e)
Assume $X = (X^1, \dots, X^d) \in \mathcal{S}^d$ and $H = (H^1, \dots, H^d)$ with $H^i \in L(X^i)$ for all $i$. Furthermore, let $K = (K^1, \dots, K^d)$ be a predictable $d$-dimensional process, and set $Y^i := H^i\cdot X^i$, $Y := (Y^1, \dots, Y^d)$, $G^i := K^i H^i$ and $G := (G^1, \dots, G^d)$. Then $K^i \in L(Y^i)$ for all $i$ if and only if $G^i \in L(X^i)$ for all $i$, and $K \in L(Y)$ if and only if $G \in L(X)$. In both cases, we have
$$K\cdot Y = G\cdot X.$$
(f)
If X is an FV-process, then H · X is indistinguishable from the Lebesgue-Stieltjes integral, computed path-by-path.
(g)
For a probability measure $\mathbb{Q}$ with $\mathbb{Q} \ll \mathbb{P}$, $H \in L(X)$ holds under $\mathbb{Q}$ as well, and $H\cdot_{\mathbb{Q}} X = H\cdot_{\mathbb{P}} X$, $\mathbb{Q}$-almost surely.
Proof. 
(a)
Let $\mathcal{A}$ be the set of processes for which (a) holds. It is clear that $\Lambda^d \subseteq \mathcal{A}$. With the help of the bounded convergence theorem A.6, we can show $\mathrm{b}\mathcal{P} \subseteq \mathcal{A}$, and an application of Theorem 9.18 yields the result.
(b)
Analogous to (a).
(c)
Let $\mathcal{A}$ be the set of processes for which (c) holds. First, we show that $\Lambda^d \subseteq \mathcal{A}$. We assume $H \in \Lambda^d$, so that the stochastic integral with respect to $X$ is given by
$$H\cdot X = H_0^\top X_0 + \sum_{i=1}^n H_i^\top\big(X^{T_{i+1}} - X^{T_i}\big),$$
with $H_i$ as in Definition 9.1.
Hence we have
$$\Delta(H\cdot X)_t = (H\cdot X)_t - (H\cdot X)_{t-} = \sum_{i=1}^n H_i^\top\Big(\big(X^{T_{i+1}}_t - X^{T_{i+1}}_{t-}\big) - \big(X^{T_i}_t - X^{T_i}_{t-}\big)\Big) = \sum_{i=1}^n H_i^\top\big(\Delta X^{T_{i+1}}_t - \Delta X^{T_i}_t\big),$$
which will yield $\Delta(H\cdot X)_s = H_s^\top(\Delta X_s)$ once the jumps of the stopped processes are computed.
Furthermore, for the stopped process, we have
$$\Delta X^{T_i}_t = X^{T_i}_t - X^{T_i}_{t-} = \begin{cases} 0 & \text{for } t > T_i,\\ \Delta X_t & \text{for } t \le T_i.\end{cases}$$
That implies
$$\Delta X^{T_{i+1}}_t - \Delta X^{T_i}_t = \mathbf{1}_{(T_i, T_{i+1}]}(t)\,\Delta X_t,$$
and thus
$$\Delta(H\cdot X)_t = \sum_{i=1}^n \mathbf{1}_{(T_i, T_{i+1}]}(t)\,H_i^\top\,\Delta X_t = H_t^\top\,\Delta X_t.$$
We conclude Λ d A .
Now let ( H n ) n N be a uniformly bounded, increasing sequence converging pointwise to a process H. By the bounded convergence theorem A.6, we have
$$\Delta(H\cdot X)_t = \lim_{n\to\infty} \Delta(H^n\cdot X)_t = \lim_{n\to\infty} (H^n_t)^\top(\Delta X_t) = H_t^\top(\Delta X_t).$$
Since $\mathbb{Q}\cap[0,\infty)$ is countable, (20) holds almost surely for all rational $t \ge 0$ simultaneously. Since $H\cdot X$ is càdlàg, (20) then holds almost surely for all $t \ge 0$. Thus, we have $H \in \mathcal{A}$ and can apply the monotone class theorem, which yields $\mathrm{b}\mathcal{P} \subseteq \mathcal{A}$.
For $H \in L(X)$, we define $H^n := H\,\mathbf{1}_{\{\max_i|H^i|\le n\}}$ and the result follows with Theorem 9.18.
(d)
The result is obvious for $H \in \Lambda^d$ and a bounded stopping time $T$ taking only finitely many values.
For a bounded stopping time $T$, we can approximate $T$ from above by stopping times taking only finitely many values. For a potentially unbounded stopping time, we can define $T_n = T\wedge n$, and the result still holds.
For general integrands, we proceed as before with the monotone class theorem and then with Theorem 9.18.
(e)
See Theorem A.13.
(f)
The proof is again analogous to the ones above, with the only difference that the dominated convergence theorem for the Lebesgue-Stieltjes integral needs to be applied.
(g)
For a probability measure $\mathbb{Q}$ that is absolutely continuous with respect to $\mathbb{P}$, convergence in probability under $\mathbb{P}$ implies convergence in probability under $\mathbb{Q}$. Hence, the result is immediate. □
Example. 
Let $H_t := \frac{1}{t}$ for $t > 0$ and $X_t := t$. Clearly, $X$ is a semimartingale, and since $H$ is a deterministic process, it is also predictable.
Assume $H \in L(X)$. Then $H\cdot X$ exists and coincides with the Lebesgue–Stieltjes integral $\int_{(0,t]} \frac{1}{s}\,ds$. Since this integral does not exist, we conclude $H \notin L(X)$.
Example. 
Let $H_t := \mathbf{1}_{[1,\infty)}(t)$ and $X_t := t$. Then it is easy to see that we have $H \in L(X)$, but $H \notin \mathbb{L}$, since $H$ is not left-continuous.
In many publications on stochastic integration, the multidimensional integral is defined as the sum of the integrals of the components. However, Section 9.6 demonstrates that an integrand being integrable does not necessarily imply its componentwise integrability. Nevertheless, when an integrand is componentwise integrable, the subsequent theorem confirms that the stochastic integral is indeed the sum of the integrals of its components.
Theorem 9.20.
Let $X$ be a $d$-dimensional semimartingale and
$$H = (H^1, \dots, H^d)$$
be a stochastic process with $H^i \in L(X^i)$ for all $i$. Then we have $H \in L(X)$ and
$$H \cdot X = \sum_{i=1}^{d} H^i \cdot X^i.$$
Proof. 
This follows directly from the linearity. □
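The componentwise formula is easy to mirror numerically. The following sketch is a purely illustrative discretisation (the integrand, grid and random seed are arbitrary assumptions): it forms an Itô-type left-endpoint sum against two simulated Brownian components and checks that the vector sum agrees with the sum of the one-dimensional sums.

```python
import numpy as np

rng = np.random.default_rng(1)

T, n, d = 1.0, 100_000, 2
dt = T / n
dW = rng.normal(scale=np.sqrt(dt), size=(n, d))        # increments of (W^1, W^2)

t = np.linspace(0.0, T, n + 1)
H = np.stack([np.sin(t), np.cos(t)], axis=1)           # bounded integrand H = (sin t, cos t)

# Vector integral as a single left-endpoint sum <H_{t_i}, X_{t_{i+1}} - X_{t_i}> ...
vector_sum = np.sum(H[:-1] * dW)
# ... and as the sum of the componentwise integrals H^i . X^i (Theorem 9.20).
componentwise = sum(np.sum(H[:-1, i] * dW[:, i]) for i in range(d))

print(vector_sum, componentwise)                        # agree up to rounding
```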
We provide an example illustrating that $L(X)$ encompasses more than just the componentwise integrable processes with respect to $X$. Additionally, this example demonstrates that the space of integrals $H \cdot X$, where $H$ ranges over the componentwise integrable processes, is not closed under the semimartingale topology. Further details and a comprehensive proof are available in [29, Example 12.5.1] or [184, Example 6.4].
Example. 
Let $W^1, W^2$ be two independent Brownian motions. We put $H_t = t$ and define the two-dimensional process $X$ by
$$X = \bigl( W^1, \ (1 - H) \cdot W^1 + H \cdot W^2 \bigr).$$
Then the space
$$L^C(X) = \left\{ \sum_{i=1}^{2} K^i \cdot X^i \ ; \ K^i \in L(X^i) \right\}$$
is not closed in the semimartingale topology.

9.5. Stability of Local Martingales under Stochastic Integration

To the best of our knowledge, the subsequent criterion for the convergence of a sequence of local martingales to a local martingale has not been documented in existing literature. Nevertheless, this criterion streamlines the proofs of several established theorems. For instance, it offers a more straightforward approach to the general stochastic integral as presented in [172] and facilitates our proof in Theorem 9.22.
Lemma 9.21.
Let $(X^n)_{n \in \mathbb{N}}$ be a sequence of local martingales which converges to a process $X$ in ucp. If $(\sup_n (X^n)^*_t)_{t \in \mathbb{R}_+}$ is locally integrable, then $X$ is a local martingale.
Proof. 
Without loss of generality, we assume all processes to be one-dimensional. Because of the ucp-convergence, we can conclude that $X$ is also càdlàg and adapted. By passing to a suitable subsequence, we can also assume that the convergence is almost surely uniform on compact sets, and thus
$$M_t := \sup_n (X^n)^*_t$$
is also càdlàg and adapted. Furthermore, $M$ is increasing, and we have
$$|\Delta M_t| \le 2 \sup_n (X^n)^*_t.$$
By assumption, the right-hand side is locally integrable and thus $M$ is also locally integrable.
Now, we can find a sequence $(T_k)_{k \in \mathbb{N}}$ of stopping times such that $(X^n)^{T_k}$ is a martingale for all $n$ and $k$ and $M_{T_k}$ is integrable for all $k$. Furthermore, because of $|X^{T_k}| \le M^{T_k}$, we can apply the dominated convergence theorem and obtain for every bounded stopping time $\tau$
$$\mathbb{E}\bigl[ X_{\tau \wedge T_k} \bigr] = \mathbb{E}\Bigl[ \lim_{n \to \infty} X^n_{\tau \wedge T_k} \Bigr] = \lim_{n \to \infty} \mathbb{E}\bigl[ X^n_{\tau \wedge T_k} \bigr] = \lim_{n \to \infty} \mathbb{E}\bigl[ (X^n)^{T_k}_0 \bigr] = \mathbb{E}\bigl[ X^{T_k}_0 \bigr].$$
Hence, $X^{T_k}$ is a martingale for all $k$, and thus we conclude that $X$ is a local martingale. □
Remark 37.
We have found a criterion for the limit of a sequence of local martingales to be a local martingale again. In fact, there are numerous examples of criteria for that kind of problem in the literature. A detailed description with examples and counterexamples is given in the chapter `Some Particular Problems of Martingale Theory’ in Kabanov et al. [100].
Theorem 9.22.
Let $X$ be a local martingale (locally square-integrable martingale / $L^2$-martingale) and let $H \in \mathcal{P}$ be locally bounded. Then the stochastic integral $H \cdot X$ is also a local martingale (locally square-integrable martingale / $L^2$-martingale).
Proof. 
Let $X$ be a local martingale. Without loss of generality, we assume there exists a constant $K$ with $\|H_u\| \le K$ for all $u$. Now let $(H^n)_{n \in \mathbb{N}}$ be a sequence with $H^n \in \Lambda_d$ and $H^n \xrightarrow{X} H$.
By putting $\tilde{H}^n := H^n \wedge K$ (where the minimum is taken in each component), we obtain a uniformly bounded sequence $\tilde{H}^n$ with $\tilde{H}^n \xrightarrow{X} H$.
By Theorem 2.6, $\tilde{H}^n \cdot X$ is a local martingale for all $n$. Furthermore, by Lemma 9.21 and since a process is locally uniformly bounded if and only if its jump process is locally uniformly bounded, it suffices to show that $M_t := \sup_n \bigl( \Delta(\tilde{H}^n \cdot X) \bigr)^*_t$ is locally integrable.
By Theorem 9.19 (c), we obtain
$$\sup_n \bigl| \Delta(\tilde{H}^n \cdot X) \bigr| = \sup_n \bigl| \tilde{H}^n (\Delta X) \bigr| \le K |\Delta X|,$$
which is, by assumption, locally integrable.
For $X$ a locally square-integrable martingale or an $L^2$-martingale, we again consider a sequence of simple predictable processes converging to $H$, apply Theorem 9.8 and proceed analogously. □
Remark 38.
Since each $H \in$ Ł is locally bounded, $H \cdot M$ is always a local martingale for a local martingale $M$ and $H \in$ Ł.
Remark 39.
For a semimartingale $X = M + A$ with $M \in \mathcal{M}_{loc}$, an FV-process $A$ and $H \in L(M)$, one cannot conclude that $H \cdot M \in \mathcal{M}_{loc}$. However, there is always a decomposition $X = \bar{M} + \bar{A}$ with $\bar{M} \in \mathcal{M}_{loc}$ and an FV-process $\bar{A}$ such that $H \cdot \bar{M} \in \mathcal{M}_{loc}$. This is a direct consequence of the Fundamental Theorem of Local Martingales and Theorem 9.22. For details, we refer to Shiryaev and Cherny [184, remarks after Definition 3.9].

9.6. The Quadratic Variation of a Semimartingale

The quadratic variation plays a central role in the development of the general stochastic integral in both the classical approach and the extension from left-continuous integrands to general predictable integrands in the functional analytic approach.
As the quadratic variation of semimartingales can be defined without the more complicated concepts of integration with respect to general predictable integrands (as opposed to stochastic integration with respect to càglàd integrands) and integration of non-componentwise integrable multidimensional stochastic processes, the concept of the quadratic variation is usually not extended to multidimensional processes and is often omitted in the literature on multidimensional stochastic integration (see [90,184]). We also defined the stochastic integral without resorting to the quadratic variation. However, for the sake of completeness, we will define a multidimensional version of the quadratic variation that generalizes the concept of one-dimensional quadratic variation and mention some interesting facts and examples for further treatment and development of the theory. Our approach is consistent with the so-called tensor quadratic variation as it was presented by Métivier [139].
Definition 9.23.
Let $X, Y \in \mathcal{S}^d$ and
$$XY = \begin{pmatrix} X^1 Y^1 & \cdots & X^1 Y^d \\ \vdots & & \vdots \\ X^d Y^1 & \cdots & X^d Y^d \end{pmatrix}.$$
Then we define the quadratic covariation of $X$ and $Y$ as
$$[X, Y] := \begin{pmatrix} X^1 Y^1 - X^1_- \cdot Y^1 - Y^1_- \cdot X^1 & \cdots & X^1 Y^d - X^1_- \cdot Y^d - Y^d_- \cdot X^1 \\ \vdots & & \vdots \\ X^d Y^1 - X^d_- \cdot Y^1 - Y^1_- \cdot X^d & \cdots & X^d Y^d - X^d_- \cdot Y^d - Y^d_- \cdot X^d \end{pmatrix}.$$
The quadratic variation of $X$ is defined as $[X, X]$ and denoted by $[X]$ or $[X, X]$.
In the literature, the quadratic variation is often defined to be the limit of particular processes (see, for example, [108, Section 8.5]), which is an equivalent definition. There are even more ways to define the quadratic variation, especially for martingales (see, for example, [29, Chapter 11], [172, Chapter II Corollary 2 of Theorem 27] or [1, Definition 1.15]).
One easily sees that the map $(X, Y) \mapsto [X, Y]$ is bilinear and symmetric. Therefore, we get the polarization identity
$$[X, Y] = \frac{1}{2} \bigl( [X + Y, X + Y] - [X, X] - [Y, Y] \bigr).$$
In the following, we are going to present the main properties of the quadratic (co-)variation. In order to justify the name quadratic variation by the approximating property, we need the following definition.
Definition 9.24.
A finite set $\sigma$ of stopping times satisfying $0 = T_1 \le T_2 \le \dots \le T_k < \infty$ is called a random partition. If $(\sigma_n)_{n \in \mathbb{N}}$ is a sequence of random partitions $\sigma_n := (T^n_1, \dots, T^n_{k_n})$, $k_n \in \mathbb{N}$, then we say that $(\sigma_n)_{n \in \mathbb{N}}$ converges to the identity if $\lim_{n \to \infty} \sup_{i = 1, \dots, k_n} T^n_i = \infty$ almost surely and
$$\|\sigma_n\| := \sup_{i = 1, \dots, k_n - 1} \bigl| T^n_{i+1} - T^n_i \bigr| \xrightarrow{\; n \to \infty \;} 0 \quad \mathbb{P}\text{-a.s.}$$
holds.
Theorem 9.25.
Let $X, Y \in \mathcal{S}^d$. The process $[X, Y]$ is, in each component, an FV-process and a semimartingale, and it has the following properties.
(a)
$[X, Y]_0 = X_0 Y_0$ and $\Delta [X, Y]^{i,j} = \Delta X^i \, \Delta Y^j$.
(b)
For a sequence $(\sigma_n)_{n \in \mathbb{N}}$ of random partitions that tends to the identity, we have
$$X_0 Y_0 + \sum_{i=1}^{k_n - 1} \bigl( X^{T^n_{i+1}} - X^{T^n_i} \bigr) \bigl( Y^{T^n_{i+1}} - Y^{T^n_i} \bigr) \xrightarrow{\; n \to \infty \;} [X, Y] \quad \text{in ucp},$$
where $\sigma_n$ is given by $0 = T^n_1 \le T^n_2 \le \dots \le T^n_{k_n}$.
(c)
Let $T$ be a stopping time. Then we have
$$[X^T, Y] = [X, Y^T] = [X^T, Y^T] = [X, Y]^T.$$
(d)
The quadratic variation $[X, X]$ is a positive, increasing process.
(e)
If $X$ is an FV-process, we have
$$[X, Y]_t = X_0 Y_0 + \sum_{0 < s \le t} \Delta X_s \, \Delta Y_s.$$
Proof. 
As the set of semimartingales forms an algebra, it is easy to see that each component of $[X, Y]$ is a semimartingale, and by (d) together with the polarization identity, each component is an FV-process.
For all properties, we can, without loss of generality, assume $d = 1$.
(a)
By definition, $[X, Y]_0 = X_0 Y_0 - (X_- \cdot Y)_0 - (Y_- \cdot X)_0 = X_0 Y_0$, and the first equation is clear. For the second equation, using Theorem 9.19,
$$\begin{aligned} \Delta X \, \Delta Y &= (X - X_-)(Y - Y_-) = XY - X Y_- - X_- Y + X_- Y_- \\ &= (XY - X_- Y_-) - X Y_- - X_- Y + 2 X_- Y_- \\ &= \Delta(XY) - X_- (Y - Y_-) - Y_- (X - X_-) \\ &= \Delta(XY) - X_- \Delta Y - Y_- \Delta X \\ &= \Delta(XY) - \Delta(X_- \cdot Y) - \Delta(Y_- \cdot X), \end{aligned}$$
where the equality holds almost surely. This proves the result.
(b)
By the polarization identity, it suffices to consider the case $X = Y$, and without loss of generality we can assume $X_0 = 0$.
Let $(\sigma_n)_{n \in \mathbb{N}}$ be a sequence of partitions that converges to the identity, with $\sigma_n = \{T^n_1, T^n_2, \dots, T^n_{k_n}\}$. We define $X^{\sigma_n}$ as follows:
$$X^{\sigma_n}_t = X_{T^n_i} \quad \text{for } t \in (T^n_i, T^n_{i+1}] \text{ for each } i.$$
For a fixed $N \in \mathbb{N}$, we set $R = \inf\{t \ge 0 : |X_t| \ge N\}$. Since $X$ is càdlàg, we have $|X^{\sigma_n} \mathbf{1}_{[0,R]}| \le N$ for all $n$, and $X^{\sigma_n}_t \mathbf{1}_{[0,R]} \to X_{t-} \mathbf{1}_{[0,R]}$ almost surely as $n \to \infty$.
Now, by expressing $(X^R_t)^2$ as a telescoping sum, we obtain:
$$(X^R_t)^2 = (X^R_0)^2 + 2 \sum_i X_{T_i \wedge R} \bigl( X_{T_{i+1} \wedge t \wedge R} - X_{T_i \wedge t \wedge R} \bigr) + \sum_i \bigl( X_{T_{i+1} \wedge t \wedge R} - X_{T_i \wedge t \wedge R} \bigr)^2 = (X^R_0)^2 + 2 \bigl( (X^{\sigma_n} \mathbf{1}_{[0,R]}) \cdot X \bigr)_t + \sum_i \bigl( X_{T_{i+1} \wedge t \wedge R} - X_{T_i \wedge t \wedge R} \bigr)^2. \qquad (23)$$
By the assumptions on $\sigma_n$, only finitely many terms of this sum are nonzero. Define $Z(\sigma_n, t) := \sum_i \bigl( X_{T^n_{i+1} \wedge t} - X_{T^n_i \wedge t} \bigr)^2$. According to Proposition 4.32 from Protter's book, we have
$$(X^{\sigma_n} \mathbf{1}_{[0,R]}) \cdot X \longrightarrow X_- \cdot X^R$$
as $n \to \infty$ in the semimartingale topology. Furthermore,
$$\{Z(\sigma_n, R \wedge t)\}_{t \ge 0} \longrightarrow \{Z(R)_t\}_{t \ge 0}$$
in ucp for some càdlàg process $Z(R)$.
Since $N$ was arbitrary, we now have a family of processes $\{Z(R_N)\}_{N \in \mathbb{N}}$, where $R_N = \inf\{t \ge 0 : |X_t| \ge N\}$. It is easy to verify that if $M > N$ (and thus $R_M \ge R_N$), then $Z(R_N) = Z(R_M)$ on the interval $[0, R_N]$. Hence, by pasting these processes together, we can define a single process $Z$ such that $Z = Z(R_N)$ on $[0, R_N]$ for all $N$, and $Z(\sigma_n) \to Z$ in ucp as $n \to \infty$.
Thus, on each interval $[0, R]$, equation (23) implies $Z = [X, X]$, and therefore $Z = [X, X]$ in general.
(c)
We assume $d = 1$. By Theorem 9.19, we get
$$[X, Y]^T = (XY)^T - (X_- \cdot Y)^T - (Y_- \cdot X)^T = X^T Y^T - X^T_- \cdot Y^T - Y^T_- \cdot X^T = [X^T, Y^T].$$
Furthermore,
$$\begin{aligned} [X^T, Y] &= X^T Y - (X^T)_- \cdot Y - Y_- \cdot (X^T) \\ &= X^T \bigl( Y^T + Y - Y^T \bigr) - \bigl( \mathbf{1}_{\{t > T\}} X_T + \mathbf{1}_{[0,T]} X_- \bigr) \cdot Y - Y_- \cdot (X^T) \\ &= (XY)^T + \underbrace{X_T (Y - Y^T) - \bigl( \mathbf{1}_{\{t > T\}} X_T \bigr) \cdot Y}_{= 0} - (X_- \cdot Y)^T - (Y_- \cdot X)^T \\ &= [X, Y]^T. \end{aligned}$$
Thus,
$$[X^T, Y] = [X, Y]^T = [Y, X]^T = [Y^T, X] = [X, Y^T],$$
and everything is shown.
(d)
This follows, with $X = Y$, directly from the summation representation in (b).
(e)
$X$ is an FV-process, and a stochastic integral with respect to $X$ coincides pathwise with the Lebesgue-Stieltjes integral. The integration by parts formula for the Lebesgue-Stieltjes integral yields
$$X_t^2 = \int_0^t X_{s-} \, dX_s + \int_0^t X_s \, dX_s.$$
The integration by parts formula for semimartingales, in turn, yields
$$X_t^2 = 2 \int_0^t X_{s-} \, dX_s + [X, X]_t.$$
Equating the two expressions for $X^2$, we obtain
$$\int_0^t X_s \, dX_s = \int_0^t X_{s-} \, dX_s + [X, X]_t,$$
thus
$$[X, X]_t = \int_0^t (X_s - X_{s-}) \, dX_s = \int_0^t \Delta X_s \, dX_s = \sum_{s \le t} (\Delta X_s)^2. \qquad \Box$$
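The approximation property (b) can be watched numerically. The sketch below is only an illustration under simplifying assumptions (deterministic dyadic partitions instead of general random partitions, a single simulated path): for a Brownian motion $W$, the sums of squared increments should approach $[W, W]_T = T$ as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n = 1.0, 2**20
dt = T / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(dt), size=n))])

# Sums of squared increments over dyadic partitions with 2**k intervals;
# by Theorem 9.25 (b) they should approach [W, W]_T = T = 1 as the mesh shrinks.
for k in (4, 8, 12, 16, 20):
    step = n // 2**k
    incr = np.diff(W[::step])
    print(f"2^{k:2d} intervals: sum of squared increments = {np.sum(incr**2):.4f}")
```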
Theorem 9.26.
For $X, Y \in \mathcal{S}^1$, $H \in L(X)$ and $K \in L(Y)$, we have
$$[H \cdot X, K \cdot Y]_t = \int_0^t H_s K_s \, d[X, Y]_s \qquad (t \ge 0).$$
Proof. 
Without loss of generality, assume $X_0 = Y_0 = 0$. We first consider the case where $H$ is of the form $H = \mathbf{1}_{[0,T]}$ for a stopping time $T$. By Theorem 9.25, we have
$$[H \cdot X, Y] = [\mathbf{1}_{[0,T]} \cdot X, Y] = [X^T, Y] = [X, Y]^T = \mathbf{1}_{[0,T]} \cdot [X, Y] = H \cdot [X, Y].$$
If $H$ is of the form $H = U \mathbf{1}_{(S,T]}$ with stopping times $S \le T$ and an $\mathcal{F}_S$-measurable random variable $U$, we get
$$[H \cdot X, Y] = [U (\mathbf{1}_{(S,T]} \cdot X), Y] = [U (X^T - X^S), Y] = U \bigl( [X^T, Y] - [X^S, Y] \bigr) = U \bigl( [X, Y]^T - [X, Y]^S \bigr) = U \bigl( \mathbf{1}_{(S,T]} \cdot [X, Y] \bigr) = H \cdot [X, Y].$$
With the usual monotone-class argument, this can be extended to any locally bounded $H$. Finally, Theorem 9.18 provides the result for $H \in L(X)$. □
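As a numerical illustration of Theorem 9.26 (again a sketch under ad-hoc assumptions: one simulated Brownian path, $X = Y = W$, $H = K$ with $H_s = s$, and a fixed grid), the quadratic variation of the discretised integral $H \cdot W$ should be close to $\int_0^1 H_s^2 \, ds = \tfrac{1}{3}$.

```python
import numpy as np

rng = np.random.default_rng(3)

T, n = 1.0, 2**20
dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(scale=np.sqrt(dt), size=n)

H = t[:-1]                            # H_s = s, evaluated at the left endpoints
dY = H * dW                           # increments of Y = H . W
print(np.sum(dY**2))                  # approximates [H.W, H.W]_1
print(np.sum(H**2) * dt)              # approximates int_0^1 H_s^2 d[W,W]_s = 1/3
```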
Remark 40.
Theorem 9.26 has a multi-dimensional version as well ([184, Theorem 4.19]). However, the multidimensional variant requires some more technical definitions, and we, therefore, stick to the much simpler version in one dimension.
Remark 41.
It would have been possible to define the quadratic covariation of two local martingales $M$ and $N$ as the unique FV-process $[M, N]$ such that $MN - [M, N]$ is a local martingale in each component and $\Delta [M, N]^{i,j} = \Delta M^i \, \Delta N^j$.
By the integration by parts formula and Theorem 9.22, $MN - [M, N]$ is indeed a local martingale in each component. Furthermore, according to Theorem 9.25, the relation $\Delta [M, N]^{i,j} = \Delta M^i \, \Delta N^j$ is clear. The uniqueness of $[M, N]$ is derived from the fact that any local martingale of finite variation must be a pure jump process; this condition on the jumps then secures the uniqueness.
Now, we have the tools at hand to give some more examples.
Example. 
Let $W$ be a standard Brownian motion. We define
$$H_t = \left( \frac{1}{\sqrt{t}}, \ \frac{1}{\sqrt{t}} \right) \quad \text{and} \quad X_t = \Bigl( W_{t \wedge 1}, \ -W_{t \wedge 1} + \bigl( \mathbf{1}_{[1,\infty)} \cdot W \bigr)_t \Bigr).$$
Assume $H$ is componentwise integrable with respect to $X$. Then, by Theorem 9.25, $[H^1 \cdot X^1, H^1 \cdot X^1]$ is a semimartingale. But since
$$[H^1 \cdot X^1, H^1 \cdot X^1]_t = \int_0^{t \wedge 1} \frac{1}{s} \, ds = \infty,$$
we obtain a contradiction, and hence $H$ is not componentwise integrable with respect to $X$.
However, we have $H \in L(X)$. To show this, we define
$$H^n_t = \left( \frac{1}{\sqrt{t + \frac{1}{n}}}, \ \frac{1}{\sqrt{t + \frac{1}{n}}} \right).$$
Then one easily sees that $H^n$ is Cauchy in the $X$-topology and converges to $H$.
Remark 42.
There are several criteria for determining when an integrand is componentwise integrable. One such criterion is discussed in Remark 43. Moreover, all locally bounded predictable integrands are necessarily integrable (see Theorem 9.15). Additionally, if $H \in L(X)$ and $[X^i, X^j] = 0$ for all $i, j$ with $i \ne j$, then $H^i \in L(X^i)$ for all $i$.
The next example shows that the condition of componentwise integrability in the dominated convergence theorem cannot be dropped.
Example. 
Let $H, X$ be the processes from the previous example and $K = (0, H^2)$ with $H^2$ being the second component of $H$. Clearly, we have $|K^i| \le H^i$ for $i = 1, 2$, and by the previous example we also have $H \in L(X)$, but $K \notin L(X)$. This demonstrates that the condition of componentwise integrability cannot be dropped in the dominated convergence theorem.
Remark 43.
One might criticize that the topological approach does not directly yield a description of the class of integrands via integrability conditions. However, such criteria are straightforward to derive, particularly for componentwise integrability.
Let $X \in \mathcal{S}^1$ and $H \in \mathcal{P}$, and consider a decomposition $X = N + A$ with $N \in \mathcal{M}_{loc}$ and an FV-process $A$. Assume that, locally,
$$\mathbb{E}\left[ \int_0^\infty H_s^2 \, d[N, N]_s \right] + \mathbb{E}\left[ \left( \int_0^\infty |H_s| \, |dA_s| \right)^2 \right] < \infty$$
holds. Then $H \in L(X)$.
This can be demonstrated by setting $H^n = \mathbf{1}_{\{|H| \le n\}} H$. Then $H^n \in L(X)$, and it remains to show that $(H^n)_{n \in \mathbb{N}}$ is a Cauchy sequence with respect to $d_X$. To this end, we first establish that $(H^n \cdot X)_{n \in \mathbb{N}}$ is Cauchy in $\mathcal{H}^2$ (see, for example, [172, Theorem IV.14]). Next, we demonstrate that $H^n \cdot X$ converges in the semimartingale topology ([170, Theorem VI.4.19]). The rest follows from the relation between the $\mathcal{H}^2$ norm and the metric $d_X$ ([172, Corollary of Theorem IV.24 and Theorem V.14]).
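To show how the criterion can be applied in practice, here is a small sketch for one concrete and purely hypothetical choice (not taken from the survey): $X = N + A$ with $N = W$ a Brownian motion and $A_t = t$, and the unbounded integrand $H_s = s^{-1/4}$. Since $[N, N]_s = s$ and $A$ is deterministic, both expectations reduce to deterministic integrals, which the script evaluates numerically.

```python
import numpy as np

T = 1.0
s = np.geomspace(1e-12, T, 1_000_000)     # log-spaced grid towards 0
H = s ** -0.25                            # the (unbounded) integrand H_s = s^(-1/4)

term1 = np.trapz(H**2, s)                 # int_0^T H_s^2 d[N,N]_s  (here d[N,N]_s = ds)
term2 = np.trapz(np.abs(H), s) ** 2       # ( int_0^T |H_s| |dA_s| )^2  (here |dA_s| = ds)

print(term1, 2 * np.sqrt(T))              # ~ 2
print(term2, (4 / 3 * T ** 0.75) ** 2)    # ~ 16/9
# Both terms are finite, so the criterion of Remark 43 yields H in L(X).
```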

Appendix A. The Technical Proofs

In this section, we prove the Dominated Convergence Theorem together with some limit theorems providing information about $L(X)$, an important convergence theorem for multidimensional stochastic integration and, as it is required for the proofs of the former theorems, the associativity of the integral. The strategy for the proof of the dominated convergence theorem is similar to the popular proof of Carathéodory's extension theorem and is as follows: First, we define two metrics which are less abstract than the ones we used to define the stochastic integral. These metrics enable us to show that the Dominated Convergence Theorem holds for bounded predictable processes, a statement which is also known as the Bounded Convergence Theorem. Then, we define the class of integrands for which the dominated convergence theorem holds and show that this class coincides with $L(X)$ (in one dimension).
As the dominated convergence theorem requires componentwise integrability, we can, without loss of generality, assume for its proof that $d = 1$.
Definition A.1.
Let $X$ be a one-dimensional semimartingale.
(a)
For $H \in \Lambda_1$, we define
$$\|H\|_{\Lambda, X} = \sup\bigl\{ \|G \cdot X\|_{ucp} \ ; \ G \in \Lambda_1, \ |G| \le |H| \bigr\},$$
where $|G| \le |H|$ means $|G_t| \le |H_t|$ for all $t \ge 0$.
(b)
For $H \in b\mathcal{P}$, we define
$$\|H\|_{b\mathcal{P}, X} := \inf\left\{ \sum_{n=1}^\infty \|G^n\|_{\Lambda, X} \ ; \ G^n \in \Lambda_1, \ |H| \le \sum_{n=1}^\infty |G^n| \right\}.$$
(c)
Furthermore, we define $\mathcal{A} \subseteq b\mathcal{P}$ to be the closure of $\Lambda_1$ with respect to $\|\cdot\|_{b\mathcal{P}, X}$.
Remark A1.
It is easy to check that, by identifying indistinguishable processes, $\|\cdot\|_{\Lambda, X}$ becomes a norm and $\|\cdot\|_{b\mathcal{P}, X}$ a metric.
Remark A2.
We will see later, in the proof of the bounded convergence theorem, that $\mathcal{A}$ coincides with $b\mathcal{P}$.
Lemma A.2.
Let $(H^n)_{n \in \mathbb{N}}$ be a sequence of simple predictable processes such that $|H^n| \le |H^{n+1}| \le K$ for all $n$, where $K$ is a constant. Then for each $\varepsilon > 0$ there exists an $n_0$ such that
$$\|H^m - H^n\|_{\Lambda, X} \le \varepsilon \quad \text{for all } m, n \ge n_0. \qquad (A1)$$
Proof. 
Let $\varepsilon > 0$ and assume that (A1) does not hold. Then we can extract subsequences $(H^{n_k})_{k \in \mathbb{N}}$, $(H^{m_k})_{k \in \mathbb{N}}$ such that $(G^k)_{k \in \mathbb{N}} := (H^{n_k} - H^{m_k})_{k \in \mathbb{N}}$ is a bounded, monotone sequence with $\|H^{n_k} - H^{m_k}\|_{\Lambda, X} > \varepsilon$ for all $k \in \mathbb{N}$. Since $(H^k)_{k \in \mathbb{N}}$ is monotone and bounded, $(G^k)_{k \in \mathbb{N}}$ converges pointwise almost surely to $0$. By proceeding analogously to the proof of Theorem 9.5, we obtain $\|H^{n_k} - H^{m_k}\|_{\Lambda, X} \to 0$, which is a contradiction. □
Lemma A.3.
Assume positive processes $H, (H^n)_{n \in \mathbb{N}} \in b\mathcal{P}$ such that $H = \sum_{j=1}^\infty H^j$. Then we have
$$\left\| \sum_{j=1}^\infty H^j \right\|_{b\mathcal{P}, X} \le \sum_{j=1}^\infty \|H^j\|_{b\mathcal{P}, X}.$$
Proof. 
We have
$$\left\| \sum_{j=1}^n H^j \right\|_{b\mathcal{P}, X} \le \sum_{j=1}^n \|H^j\|_{b\mathcal{P}, X} \le \sum_{j=1}^\infty \|H^j\|_{b\mathcal{P}, X}.$$
The result follows from the continuity of the metric. □
Lemma A.4.
Suppose $H \in b\mathcal{P}$ and let $(H^n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{A}$ such that
$$\|H^n - H\|_{b\mathcal{P}, X} \to 0.$$
Then we have $H \in \mathcal{A}$.
Proof. 
For each $H^n$, there exists a sequence $(H^{n,m})_{m \in \mathbb{N}} \subseteq \Lambda_1$ such that
$$\|H^{n,m} - H^n\|_{b\mathcal{P}, X} \to 0 \quad \text{for } m \to \infty.$$
Choosing a diagonal sequence yields a sequence $(H^{n_k, m_k})_{k \in \mathbb{N}}$ with
$$\|H - H^{n_k, m_k}\|_{b\mathcal{P}, X} \to 0 \quad \text{for } k \to \infty.$$
Hence, $H \in \mathcal{A}$. □
Lemma A.5.
Let $H \in b\mathcal{P}$ and $(H^n)_{n \in \mathbb{N}} \subseteq \mathcal{A}$ with $|H^n| \le |H^{n+1}| \le |H|$ for all $n \in \mathbb{N}$ and such that $H^n \to H$ pointwise almost surely. Then we have $H \in \mathcal{A}$ and
$$\|H - H^n\|_{b\mathcal{P}, X} \to 0.$$
Proof. 
By the definition of $\mathcal{A}$, for each $H^n \in \mathcal{A}$ there exists a sequence
$$(G^{n,m})_{m \in \mathbb{N}} \subseteq \Lambda_1$$
such that $\|G^{n,m} - H^n\|_{b\mathcal{P}, X} \to 0$ for $m \to \infty$. Without loss of generality, we assume all sequences $(G^{n,m})_{m \in \mathbb{N}}$ to be increasing. By assumption, for each $\varepsilon > 0$ and each $n$, there exists an $m_n(\varepsilon)$ such that $\|G^{n,m} - H^n\|_{b\mathcal{P}, X} < \varepsilon$ for all $m \ge m_n(\varepsilon)$.
We put
$$\bar{m}_n(\varepsilon) := \max\left\{ \bar{m}_{n-1}\!\left(\tfrac{\varepsilon}{3}\right) + 1, \ m_n\!\left(\tfrac{\varepsilon}{3}\right) \right\} \quad \text{and} \quad \tilde{G}^n := G^{n, \bar{m}_n(\varepsilon)}.$$
As $\tilde{G}^m - \tilde{G}^n \in \Lambda_1$, we have $\|\tilde{G}^m - \tilde{G}^n\|_{b\mathcal{P}, X} = \|\tilde{G}^m - \tilde{G}^n\|_{\Lambda, X}$, and by Lemma A.2 there exists an $\tilde{n}(\varepsilon) \in \mathbb{N}$ such that $\|\tilde{G}^m - \tilde{G}^n\|_{\Lambda, X} < \tfrac{\varepsilon}{3}$ for all $n, m \ge \tilde{n}(\varepsilon)$.
We obtain for $m, n \ge \max\{\tilde{n}(\varepsilon), \bar{m}_n(\varepsilon)\}$
$$\|H^m - H^n\|_{b\mathcal{P}, X} = \bigl\| (H^m - \tilde{G}^m) + (\tilde{G}^n - H^n) + (\tilde{G}^m - \tilde{G}^n) \bigr\|_{b\mathcal{P}, X} \le \underbrace{\|H^m - \tilde{G}^m\|_{b\mathcal{P}, X}}_{\le \varepsilon/3} + \underbrace{\|\tilde{G}^n - H^n\|_{b\mathcal{P}, X}}_{\le \varepsilon/3} + \underbrace{\|\tilde{G}^m - \tilde{G}^n\|_{b\mathcal{P}, X}}_{\le \varepsilon/3}.$$
Hence, for each $\varepsilon$ there exists an $n(\varepsilon) \in \mathbb{N}$ such that $\|H^m - H^n\|_{b\mathcal{P}, X} \le \varepsilon$ for all $m, n \ge n(\varepsilon)$.
Now we put $n_k = n(2^{-k})$ and obtain a sequence $(H^{n_k})_{k \in \mathbb{N}}$ such that
$$\|H^{n_{k+1}} - H^{n_k}\|_{b\mathcal{P}, X} \le 2^{-k}.$$
Because of
$$H - H^{n_k} = \lim_{n \to \infty} (H^n - H^{n_k}) = \sum_{m=k}^\infty \bigl( H^{n_{m+1}} - H^{n_m} \bigr),$$
we obtain with Lemma A.3
$$\|H - H^{n_k}\|_{b\mathcal{P}, X} \le \sum_{m=k}^\infty \|H^{n_{m+1}} - H^{n_m}\|_{b\mathcal{P}, X} \le 2^{-k+1}.$$
Hence, $\|H - H^{n_k}\|_{b\mathcal{P}, X} \to 0$ and thus every subsequence of $(H^n)_{n \in \mathbb{N}}$ has a further subsequence that converges to $H$ under $\|\cdot\|_{b\mathcal{P}, X}$. By Lemma A.4, this proves the result and implies $H \in \mathcal{A}$. □
Theorem A.6
(Bounded Convergence Theorem). Let $H \in b\mathcal{P}$ and let $(H^n)_{n \in \mathbb{N}}$ be a sequence in $b\mathcal{P}$ such that $|H^n| \le K$ for a constant $K$ and $H^n \to H$ pointwise almost surely. Then we have
$$H^n \cdot X \xrightarrow{\;S\;} H \cdot X.$$
Proof. 
Let $(G^n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{A}$ converging uniformly to a process $G \in b\mathcal{P}$. That means there exists a sequence $(a_n)_{n \in \mathbb{N}}$ in $\mathbb{R}$ such that $(G^n - G)^* \le a_n$ and $a_n \to 0$. Now we have
$$\|G^n - G\|_{b\mathcal{P}, X} \le \|(G^n - G)^*\|_{b\mathcal{P}, X} \le \|a_n\|_{b\mathcal{P}, X} = \bigl\| |a_n| X \bigr\|_{ucp} \to 0.$$
By Lemma A.4, we have $G \in \mathcal{A}$, and hence $\mathcal{A}$ is closed under uniform convergence. Clearly, $\mathcal{A}$ also contains all constant functions. Together with Lemma A.5, we can apply the monotone class theorem and obtain $\mathcal{A} = b\mathcal{P}$. Thus, we have $H^n \in \mathcal{A}$ for all $n$. We put $\tilde{G}^n := \sup_{m \ge n} |H^m - H|$ and $\hat{G}^n := \tilde{G}^1 - \tilde{G}^n$ and have $|\hat{G}^n| \le |\tilde{G}^1|$ and $\hat{G}^n \to \tilde{G}^1$ pointwise almost surely. By Lemma A.5, we obtain
$$\|H^n - H\|_{b\mathcal{P}, X} \le \|\tilde{G}^n\|_{b\mathcal{P}, X} = \|\hat{G}^n - \tilde{G}^1\|_{b\mathcal{P}, X} \to 0.$$
Hence, we deduce $\|H^n - H\|_{b\mathcal{P}, X} \to 0$.
Furthermore, it is easy to check that we have
$$\|H^n - H\|_X \le \|H^n - H\|_{b\mathcal{P}, X}.$$
Thus, $\|H^n - H\|_X \to 0$, which implies $H^n \xrightarrow{X} H$. □
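The following sketch illustrates the bounded convergence theorem along a single simulated path; it is an illustration only, and the particular integrands, grid and seed are arbitrary assumptions. Smooth, uniformly bounded approximations $H^n$ of a discontinuous bounded integrand $H$ yield integrals whose maximal deviation from $H \cdot W$ on $[0, T]$ shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(4)

T, m = 1.0, 2**18
dt = T / m
t = np.linspace(0.0, T, m + 1)[:-1]
dW = rng.normal(scale=np.sqrt(dt), size=m)

H = np.sign(np.sin(8 * np.pi * t))                 # bounded, discontinuous limit integrand
I_H = np.cumsum(H * dW)

# H^n -> H pointwise with |H^n| <= 1; by Theorem A.6 the integrals converge,
# and along this path the uniform distance on [0, T] should shrink with n.
for n in (1, 4, 16, 64, 256):
    Hn = np.tanh(n * np.sin(8 * np.pi * t))
    I_n = np.cumsum(Hn * dW)
    print(n, np.max(np.abs(I_n - I_H)))
```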
Definition A.7.
Let $X \in \mathcal{S}^1$. We define $L_1(X) \subseteq L(X)$ to be the set of predictable processes $G$ such that, for each sequence of processes $H^n \in b\mathcal{P}$ with $|H^n| \le |G|$ converging pointwise almost surely to $0$, we have $\|H^n\|_X \to 0$ for $n \to \infty$.
For the proof of the dominated convergence theorem for $L_1(X)$, we need a simple property of the stochastic integral. We will examine this property in greater generality in Theorem A.13 and Theorem 9.19.
Theorem A.8.
Let $X$ be a one-dimensional semimartingale, $H \in \Lambda_1$ and $K \in L(X)$. Then we have
$$H \cdot (K \cdot X) = (H K) \cdot X.$$
Proof. 
This is a simple calculation for $K \in \Lambda_1$ (see Theorem 9.19). Taking limits yields the result:
$$H \cdot (K \cdot X) = \lim_{n \to \infty} H \cdot (K^n \cdot X) = \lim_{n \to \infty} (H K^n) \cdot X = (H K) \cdot X,$$
with $K^n \in \Lambda_1$ such that $K^n \xrightarrow{X} K$. □
Theorem A.9.
(Dominated Convergence Theorem for $L_1(X)$). Let $X \in \mathcal{S}^1$ and let $(H^n)_{n \in \mathbb{N}}$ be a sequence of predictable processes converging pointwise almost surely to a process $H$ such that $|H^n| \le |G|$ with $G \in L_1(X)$. Then $\|H^n - H\|_X \to 0$.
Proof. 
We put $Y^n := (H^n - H) \cdot X$. Now let $(K^n)_{n \in \mathbb{N}}$ be a sequence in $\Lambda_1$ with $|K^n| \le 1$. Then $|K^n (H^n - H)| \le 2|G|$, and by the definition of $L_1(X)$ we obtain
$$K^n \cdot Y^n = \bigl( K^n (H^n - H) \bigr) \cdot X \xrightarrow{\; ucp \;} 0.$$
Hence, $Y^n$ converges to $0$ in the semimartingale topology and thus $\|H^n - H\|_X \to 0$. □
Lemma A.10.
Let $X \in \mathcal{S}^1$ and let $(G^n)_{n \in \mathbb{N}}$ be a sequence in $b\mathcal{P}$ with
$$\sum_{n=1}^\infty \|G^n\|_X < \infty \quad \text{and} \quad G := \sum_{n=1}^\infty |G^n|.$$
Furthermore, let $(H^n)_{n \in \mathbb{N}}$ be a sequence in $b\mathcal{P}$ with $|H^n| \le |G|$.
(a)
Let $(\lambda_n)_{n \in \mathbb{N}}$ be a sequence of positive numbers such that $|\lambda_n H^n| \le |G|$. Then
$$\sup_{n \in \mathbb{N}} \|\lambda_n H^n\|_X < \infty.$$
(b)
If $H^n \to 0$ pointwise almost surely, then $\|H^n\|_X \to 0$ and thus $G \in L_1(X)$ holds.
(c)
The set $A = \{G = \infty\}$ is predictable and we have
$$\|\mathbf{1}_A H\|_X = 0$$
for all $H \in b\mathcal{P}$.
Proof. 
We put $K^n := \sum_{k=1}^n |G^k|$. For any $\lambda > 0$ and $H \in b\mathcal{P}$ with $\lambda |H| \le |G|$, the truncation $(\lambda H \wedge K^n) \vee (-K^n)$ converges pointwise almost surely to $\lambda H$, and we have $|(\lambda H \wedge K^n) \vee (-K^n)| \le |\lambda H|$.
Hence, we can apply bounded convergence and obtain
$$\|\lambda H\|_X = \lim_{n \to \infty} \bigl\| (\lambda H \wedge K^n) \vee (-K^n) \bigr\|_X.$$
Furthermore, we have
$$\bigl| (\lambda H \wedge K^n) \vee (-K^n) \bigr| \le \sum_{k=1}^n |\lambda H| \wedge |G^k|,$$
and thus
$$\|\lambda H\|_X = \lim_{n \to \infty} \bigl\| (\lambda H \wedge K^n) \vee (-K^n) \bigr\|_X \le \lim_{n \to \infty} \sum_{k=1}^n \bigl\| |\lambda H| \wedge |G^k| \bigr\|_X = \sum_{n=1}^\infty \bigl\| |\lambda H| \wedge |G^n| \bigr\|_X,$$
where we used the triangle inequality of the metric $\|\cdot\|_X$.
Hence, we obtain
$$\sup_{m \in \mathbb{N}} \|\lambda_m H^m\|_X \le \sup_{m \in \mathbb{N}} \sum_{n=1}^\infty \bigl\| |\lambda_m H^m| \wedge |G^n| \bigr\|_X \le \sum_{n=1}^\infty \|G^n\|_X < \infty.$$
That proves (a). For (b), we proceed analogously and obtain, by putting $\lambda_m = 1$,
$$\lim_{m \to \infty} \|H^m\|_X \le \lim_{m \to \infty} \sum_{n=1}^\infty \bigl\| |H^m| \wedge |G^n| \bigr\|_X,$$
and as each summand on the right-hand side is bounded by $\|G^n\|_X$ and, furthermore, $\sum_{n=1}^\infty \|G^n\|_X < \infty$, we can interchange the sum and the limit and obtain
$$\lim_{m \to \infty} \|H^m\|_X \le \sum_{n=1}^\infty \lim_{m \to \infty} \bigl\| |H^m| \wedge |G^n| \bigr\|_X.$$
Since $|H^m| \wedge |G^n| \le |G^n|$ for all $m$ and $H^m$ converges pointwise almost surely to $0$, we can apply bounded convergence and obtain
$$\bigl\| |H^m| \wedge |G^n| \bigr\|_X \to 0 \quad \text{for } m \to \infty.$$
Hence,
$$\lim_{m \to \infty} \|H^m\|_X \le \sum_{n=1}^\infty \lim_{m \to \infty} \bigl\| |H^m| \wedge |G^n| \bigr\|_X = 0.$$
For (c), we use (a) with a sequence $(\lambda_n)_{n \in \mathbb{N}}$ such that $\lambda_n \to \infty$. As $|\lambda_n \mathbf{1}_A H| \le |G|$, we obtain $\sup_{n \in \mathbb{N}} \|\lambda_n \mathbf{1}_A H\|_X < \infty$ and thus $\|\mathbf{1}_A H\|_X = 0$. □
Theorem A.11.
We have $L_1(X) = L(X)$. That is, the stochastic dominated convergence theorem holds for all $H \in L(X)$.
Proof. 
Let $H \in L(X)$. By definition, there exists a sequence $(H^n)_{n \in \mathbb{N}} \subseteq \Lambda_1$ with $H^n \xrightarrow{X} H$. That means, for each $\varepsilon > 0$, there exists an $n_0 \in \mathbb{N}$ such that $\|H^m - H^n\|_X \le \varepsilon$ for all $m, n \ge n_0$. We put $H^1 := 0$, and by passing to subsequences we can assume that
$$\|H^{n+1} - H^n\|_X \le 2^{-n} \quad \text{for all } n \in \mathbb{N}.$$
We put
$$G^n := H^{n+1} - H^n \quad \text{and} \quad G := \sum_{n=1}^\infty |G^n|.$$
We have $\sum_{n=1}^\infty \|G^n\|_X \le \sum_{n=1}^\infty 2^{-n} = 1$ and we put $A := \{G = \infty\}$. By Lemma A.10, $\mathbf{1}_{A^c} G \in L_1(X)$. Since $\sum_{n=1}^\infty G^n$ is absolutely convergent outside of $A$, we can define
$$\tilde{H} := \lim_{n \to \infty} \mathbf{1}_{A^c} H^n = \sum_{n=1}^\infty \mathbf{1}_{A^c} G^n.$$
Since $|\mathbf{1}_{A^c} H^n| \le |\mathbf{1}_{A^c} G|$, we have $\tilde{H} \in L_1(X)$, and by dominated convergence we obtain $\|\mathbf{1}_{A^c} H^n - \tilde{H}\|_X \to 0$. By Lemma A.10, we have $\|\mathbf{1}_A H^n\|_X = 0$, hence
$$\lim_{n \to \infty} H^n \cdot X = \lim_{n \to \infty} (\mathbf{1}_{A^c} H^n) \cdot X = \tilde{H} \cdot X$$
and thus $\|H - \tilde{H}\|_X = 0$. We conclude $H = \tilde{H}$ and hence $H \in L_1(X)$. □
Lemma A.12.
Let $X \in \mathcal{S}^d$ and $H \in L(X)$. If $H^n = H \mathbf{1}_{\{\|H\| \le n\}}$, then $H^n \cdot X \to H \cdot X$ in the semimartingale topology.
Proof. 
The proof is analogous to that of Theorem 9.5. □
For the proof of the important Theorem A.14, we need to show the associativity of the stochastic integral first.
Theorem A.13.
Let $X = (X^1, \dots, X^d) \in \mathcal{S}^d$ and $H = (H^1, \dots, H^d)$ with $H^i \in L(X^i)$ for all $i$. Furthermore, let $K = (K^1, \dots, K^d)$ be a predictable $d$-dimensional process and set $Y^i := H^i \cdot X^i$, $Y := (Y^1, \dots, Y^d)$, $G^i := K^i H^i$ and $G := (G^1, \dots, G^d)$. Then $K^i \in L(Y^i)$ for all $i$ if and only if $G^i \in L(X^i)$ for all $i$, and $K \in L(Y)$ if and only if $G \in L(X)$. In both cases, we have
$$K \cdot Y = G \cdot X.$$
Proof. 
For $H, K \in \Lambda_d$, this is a basic calculation.
Now, suppose $G^i = K^i H^i \in L(X^i)$ for $i \in \{1, \dots, d\}$. By Theorem 23, there exists a sequence $\tilde{K}^n = (\tilde{K}^{1,n}, \dots, \tilde{K}^{d,n}) \in \Lambda_d$ such that $|\tilde{K}^{i,n}| \le |K^i|$ and $\tilde{K}^{i,n}_t \to K^i_t$ for all $t$ almost surely. Now, by the first part and by the dominated convergence theorem 9.16, we have
$$\tilde{K}^{i,n} \cdot Y^i = \bigl( \tilde{K}^{i,n} H^i \bigr) \cdot X^i \longrightarrow \bigl( K^i H^i \bigr) \cdot X^i = G^i \cdot X^i.$$
Hence, $(\tilde{K}^{i,n} \cdot Y^i)_{n \in \mathbb{N}}$ is Cauchy and we have $K^i \in L(Y^i)$. Since this holds for all $i \in \{1, \dots, d\}$, we obtain $K \cdot Y = G \cdot X$.
Conversely, assume $K^i \in L(Y^i)$ and let $(\tilde{G}^{i,n})_{n \in \mathbb{N}} \subseteq \Lambda_1$ be a sequence such that $\tilde{G}^{i,n} \to G^i = K^i H^i$ pointwise almost surely. We can write
$$\tilde{G}^{i,n} = \frac{\tilde{G}^{i,n}}{H^i} \mathbf{1}_{\{H^i \ne 0\}} H^i,$$
and by proceeding analogously, we obtain $\frac{\tilde{G}^{i,n}}{H^i} \mathbf{1}_{\{H^i \ne 0\}} \in L(Y^i)$ and
$$\left( \frac{\tilde{G}^{i,n}}{H^i} \mathbf{1}_{\{H^i \ne 0\}} \right) \cdot Y^i = \tilde{G}^{i,n} \cdot X^i.$$
By dominated convergence, this tends to $K^i \cdot Y^i$, and we have $K^i H^i = G^i \in L(X^i)$.
For $G \in L(X)$, we define $\tilde{K}^n = K \mathbf{1}_{\{\|K\| \le n\}}$ and $\tilde{G}^n$ accordingly. As all $\tilde{K}^n$ and $\tilde{G}^n$ are bounded and hence componentwise integrable, we get
$$\|K \cdot Y - G \cdot X\|_S \le \|K \cdot Y - \tilde{K}^n \cdot Y\|_S + \|\tilde{K}^n \cdot Y - \tilde{G}^n \cdot X\|_S + \|\tilde{G}^n \cdot X - G \cdot X\|_S.$$
For the middle term, we can apply bounded convergence and see that it vanishes. The first and the last term tend to $0$ by Lemma A.12. □
Theorem A.14.
Let $X \in \mathcal{S}^d$ and let $H$ be a $d$-dimensional predictable process. Then $H \in L(X)$ if and only if $H^n \cdot X$ with $H^n = H \mathbf{1}_{\{\|H\| \le n\}}$ converges in the semimartingale topology.
Proof. 
One direction has already been shown in Lemma A.12. For the other direction, let $Z$ denote the limit of $H^n \cdot X$ in the semimartingale topology. Following [184], Theorem 32 ensures the existence of a $K \in L(X)$ with $Z = K \cdot X$. Let
$$G = \sum_{n=1}^\infty \frac{1}{n} \mathbf{1}_{\{n-1 \le \|H\| < n\}},$$
so that $G$ is predictable with $0 < G \le 1$. By Theorem A.13 and the definition of the semimartingale topology, it is easy to see that
$$G \cdot (H^n \cdot X) \xrightarrow{\;S\;} G \cdot Z = (G K) \cdot X.$$
Since $\|G H\| \le 1$, we can apply dominated convergence together with Theorem A.13 and obtain
$$G \cdot (H^n \cdot X) = (G H^n) \cdot X \xrightarrow{\;S\;} (G H) \cdot X.$$
Hence, $(G H) \cdot X$ coincides with $(G K) \cdot X$.
Since $K = G^{-1} G K$ is an element of $L(X)$, Theorem A.13 states that $G^{-1}$ also belongs to $L((G K) \cdot X)$, which is identical to $L((G H) \cdot X)$. A subsequent application of Theorem A.13 shows that $H$ (represented as $G^{-1} G H$) is an element of $L(X)$, leading to
$$H \cdot X = G^{-1} \cdot \bigl( (G H) \cdot X \bigr) = G^{-1} \cdot \bigl( (G K) \cdot X \bigr) = K \cdot X = Z,$$
which concludes the proof. □

References

  1. A. Aksamit and M. Jeanblanc. Enlargement of Filtration with Finance in View. SpringerBriefs in Quantitative Finance. Springer International Publishing, 2017. ISBN 9783319412559. [CrossRef]
  2. D. Applebaum. Lévy Processes and Stochastic Calculus (Cambridge Studies in Advanced Mathematics). Cambridge University Press, 2nd edition, 2009. ISBN 9780521738651.
  3. Robert B. Ash and Catherine A. Doléans-Dade. Probability and measure theory. Academic Press, 2000.
  4. J. Assefa and P. Harms. Cylindrical stochastic integration and applications to financial term structure modeling, 2023.
  5. Wided Ayed and Hui-Hsiung Kuo. An extension of the itô integral. Communications on Stochastic Analysis, 2(3):5, 2008.
  6. R. F. Bass. The doob-meyer decomposition revisited. Canad. Math. Bull., 39(2):150, 1996.
  7. M. Beiglböck and P. Siorpaes. Riemann-integration and a new proof of the Bichteler–Dellacherie theorem. Stoch. Process. Appl., 124(3):1226–1235, 2014. ISSN 0304-4149. [CrossRef]
  8. M. Beiglböck, W. Schachermayer, B. Veliyev, et al. A direct proof of the Bichteler–Dellacherie theorem and connections to arbitrage. The Annals of Probability, 39(6):2424–2440, 2011. ISSN 0091–1798.
  9. M. Beiglböck, W. Schachermayer, and B. Veliyev. A short proof of the Doob-Meyer theorem. Stochastic Processes and their Applications, 122(4):1204–1209, 2012. ISSN 0304-4149. [CrossRef]
  10. Mathias Beiglboeck, Walter Schachermayer, and Bezirgen Veliyev. A short proof of the doob–meyer theorem. Stochastic Processes and their applications, 122(4):1204–1209, 2012.
  11. A. Benveniste and J. Jacod. Systèmes de lévy des processus de markov. Invent. Math., 21(3):183–198, 1973.
  12. F. Biagini, Y. Hu, B. Øksendal, and T. Zhang. Stochastic Calculus for Fractional Brownian Motion and Applications. Probability and Its Applications. Springer London, 2008. ISBN 9781846287978.
  13. K. Bichteler. Stochastic integrators. Bulletin of the American Mathematical Society, 1(5):761–765, 1979. [CrossRef]
  14. K. Bichteler. Stochastic integration and Lp-theory of semimartingales. The Annals of Probability, pages 49–89, 1981. ISSN 0091–1798.
  15. K. Bichteler. Stochastic Integration with Jumps. Number Bd. 89 in Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2002. ISBN 9780521811293. [CrossRef]
  16. N.H. Bingham and R. Kiesel. Risk-Neutral Valuation: Pricing and Hedging of Financial Derivatives. Springer Finance. Springer London, 2013. ISBN 9781447138563.
  17. L. Boy. Stochastic integration and martingales in banach spaces. Sém. Prob. Rennes, 9:49–72, 1977.
  18. P. Bremaud. Poisson processes in systems with dependent components. IEEE Trans. Inf. Theory, 18:676–686, 1972.
  19. P. Bremaud and J. Jacod. Processus ponctuels et martingales: résultats récents sur la modélisation et le filtrage. Adv. Appl. Probab., 2:362–416, 1977.
  20. James K Brooks and David Neal. The optional stochastic integral. In Seminar on Stochastic Processes, 1988, pages 45–54. Springer, 1989. [CrossRef]
  21. B. Bru and M. Yor. Comments on the life and mathematical legacy of wolfgang doeblin. Finance and Stochastics, 6(1):3–47, 2002. ISSN 0949–2984. [CrossRef]
  22. Zdzisław Brzeźniak, Erika Hausenblas, and Jianliang Zhu. 2d stochastic navier-stokes equations driven by jump noise. Stochastic Processes and their Applications, 118:2066–2092, 2008. [CrossRef]
  23. D. L. Burkholder. Martingale Transforms. The Annals of Mathematical Statistics, 37(6):1494 – 1504, 1966. [CrossRef]
  24. Donald L Burkholder, Burgess J Davis, and Richard F Gundy. Integral inequalities for convex functions of operators on martingales. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, volume 2, pages 223–240. Univ. California Press Berkeley, Calif., 1972.
  25. Geon Ho Choe et al. Stochastic analysis for finance with simulations. Springer, 2016. [CrossRef]
  26. C. S. Chou, P.-A. Meyer, and C. Stricker. Sur les integrales stochastiques de processus previsibles non bornes. In Séminaire de Probabilités XIV 1978/79, pages 128–139. Springer, 1980. [CrossRef]
  27. K. L. Chung and J. L. Doob. Fields, optionality and measurability. Amer. J. Math., 87:397–424, 1965. ISSN 0002-9327. [CrossRef]
  28. K. L. Chung and R. J. Williams. Introduction to stochastic integration. Probability and its Applications. Birkhäuser Boston, Inc., Boston, MA, second edition, 1990. ISBN 0-8176-3386-3. [CrossRef]
  29. S. N. Cohen and R. J. Elliott. Stochastic Calculus and Applications. Probability and Its Applications. Springer New York, 2015. ISBN 9781493928675. [CrossRef]
  30. Philippe Courrege. Intégrales stochastiques et martingales de carré intégrable. Séminaire Brelot-Choquet-Deny. Théorie du potentiel, 7:1–20, 1963.
  31. Philippe Courrège. Décomposition des martingales de carré intégrable. pages No. 6, 14, 1964.
  32. Marzia De Donno, Paolo Guasoni, and Maurizio Pratelli. Super-replication and utility maximization in large financial markets. Stochastic processes and their applications, 115(12):2006–2022, 2005. [CrossRef]
  33. F. Delbaen and W. Schachermayer. A general version of the fundamental theorem of asset pricing. Mathematische Annalen, 300(1):463–520, 1994. ISSN 0025-5831. [CrossRef]
  34. F. Delbaen and W. Schachermayer. The fundamental theorem of asset pricing for unbounded stochastic processes. Mathematische Annalen, 312(2):215–250, 1998. ISSN 0025-5831. [CrossRef]
  35. C. Dellacherie. Un survol de la theorie de l’integrale stochastique. Stochastic Processes and their Applications, 10(2):115–144, 1980. ISSN 0304-4149. [CrossRef]
  36. C. Dellacherie. Mesurabilite des debuts et theoreme de section: Le lot a la portee de toutes les boures. In Séminaire de Probabilités XV 1979/80, pages 351–370. Springer, 1981.
  37. C. Dellacherie and P.-A. Meyer. Probabilities and Potential. North-Holland Mathematics Studies. Hermann, Amsterdam, 1978. ISBN 9780720407013.
  38. C. Dellacherie and P.-A. Meyer. Probabilities and Potential, B: Theory of Martingales. North-Holland Mathematics Studies. Elsevier Science, 1982. ISBN 9780444865267.
  39. Catherine Doléans. Processus croissants naturels et processus croissants très-bien-mesurables. C. R. Acad. Sci. Paris Sér. A-B, 264:A874–A876, 1967. ISSN 0151-0509.
  40. Catherine Doléans. Existence du processus croissant natural associé à un potentiel de la classe (D). Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 9:309–314, 1968. [CrossRef]
  41. C. Doléans-Dade and P.-A. Meyer. Intégrales stochastiques par rapport aux martingales locales. In Séminaire de Probabilités IV Université de Strasbourg, pages 77–107. Springer, 1970.
  42. Catherine Doléans-Dade. On the existence and unicity of solutions of stochastic integral equations. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 36(2):93–101, 1976. [CrossRef]
  43. Marzia De Donno and Maurizio Pratelli. Stochastic integration with respect to a sequence of semimartingales. In In Memoriam Paul-André Meyer, pages 119–135. Springer, 2006. [CrossRef]
  44. Joseph L Doob. Stochastic processes, volume 7. Wiley New York, 1953.
  45. R. M. Dudley and R. Norvaiša. Concrete functional calculus. Springer Monographs in Mathematics. Springer, New York, 2011. ISBN 978-1-4419-6949-1. [CrossRef]
  46. Richard Durrett. Stochastic calculus. Probability and Stochastics Series. CRC Press, Boca Raton, FL, 1996. ISBN 0-8493-8071-5. A practical introduction.
  47. E. B. Dynkin. Markov processes. Vols. I, II, volume 122 of Die Grundlehren der mathematischen Wissenschaften, Band 121. Springer-Verlag, Berlin-Göttingen-Heidelberg; Academic Press, Inc., Publishers, New York, 1965. Translated with the authorization and assistance of the author by J. Fabius, V. Greenberg, A. Maitra, G. Majone.
  48. S. Ebenfeld. Grundlagen der Finanzmathematik: mathematische Methoden, Modellierung von Finanzmärkten und Finanzprodukten. Schäffer-Poeschel, 2007. ISBN 9783791026343.
  49. D. A. Edwards. A note on stochastic integrators. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 107, pages 395–400. Cambridge University Press, 1990. [CrossRef]
  50. Robert J. Elliott and P. Ekkehard Kopp. Mathematics of financial markets. Springer Finance. Springer-Verlag, New York, second edition, 2005. ISBN 0-387-21292-2.
  51. Paul Embrechts and Makoto Maejima. Selfsimilar processes. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2002. ISBN 0-691-09627-9.
  52. M. Émery. Une topologie sur l’espace des semimartingales. In Séminaire de Probabilités XIII, pages 260–280. Springer, 1979.
  53. M. Émery. Métrisabilité de quelques espaces de processus aléatoires. Séminaire de probabilités de Strasbourg, 14:140–147, 1980.
  54. Michel Émery. Stochastic calculus in manifolds. Universitext. Springer-Verlag, Berlin, 1989. ISBN 3-540-51664-6. With an appendix by P.-A. Meyer. [CrossRef]
  55. Donald L. Fisk. Quasi-martingales. Trans. Amer. Math. Soc., 120:369–389, 1965. ISSN 0002-9947. [CrossRef]
  56. Avner Friedman. Stochastic differential equations and applications. Vol. 1. Probability and Mathematical Statistics, Vol. 28. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975.
  57. Peter Friz and Martin Hairer. A course on rough paths. Preprint, 2014. [CrossRef]
  58. Adriano M. Garsia. Martingale inequalities: Seminar notes on recent progress. Mathematics Lecture Note Series. W. A. Benjamin, Inc., Reading, Mass.-London-Amsterdam, 1973.
  59. L. Gawarecki and V. Mandrekar. Stochastic Differential Equations in Infinite Dimensions: with Applications to Stochastic Partial Differential Equations. Probability and Its Applications. Springer Berlin Heidelberg, 2010. ISBN 9783642161940. URL https://books.google.de/books?id=yqlcgl3KumQC.
  60. Leszek Gawarecki and Vidyadhar Mandrekar. Partial differential equations as equations in infinite dimensions. In Stochastic Differential Equations in Infinite Dimensions, pages 3–16. Springer, 2011.
  61. Ĭ. Ī. Gīhman and A. V. Skorohod. Stochastic differential equations. Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas], Band 72. Springer-Verlag, New York-Heidelberg, 1972. Translated from the Russian by Kenneth Wickwire.
  62. B. Grigelionis. On the representation of integer-valued random measures by means of stochastic integrals. Lit. Math. J., 9:93–108, 1969.
  63. B. Grigelionis. On the representation of integer-valued random measures by means of stochastic integrals with respect to the poisson measure. Lit. Math. J., 11:93–108, 1971.
  64. B. Grigelionis. Stochastic point processes and martingales. Lit. Math. J., 15:101–114, 1975.
  65. B. Grigelionis. A generalization of the representation theorem for martingales. Lit. Math. J., 17:75–85, 1977.
  66. B Grigelionis and R Mikulevičius. Stochastic evolution equations and densities of the conditional distributions. In Theory and Application of Random Fields, pages 49–88. Springer, 1983.
  67. Mircea Grigoriu. Stochastic calculus. Birkhäuser Boston, Inc., Boston, MA, 2002. ISBN 0-8176-4242-0. Applications in science and engineering. [CrossRef]
  68. István Gyöngy and Nicolai V Krylov. On stochastics equations with respect to semimartingales ii. itô formula in banach spaces. Stochastics, 6(3-4):153–173, 1982.
  69. J. M. Harrison and S. R. Pliska. Martingales and stochastic integrals in the theory of continuous trading. Stochastic processes and their applications, 11(3):215–260, 1981. ISSN 0304–4149. [CrossRef]
  70. J. M. Harrison and S. R. Pliska. A stochastic calculus model of continuous trading: Complete markets. Stochastic Processes and their Applications, 15(3):313–316, 1983. ISSN 0304–4149. [CrossRef]
  71. Sheng Wu He, Jia Gang Wang, and Jia An Yan. Semimartingale theory and stochastic calculus. Kexue Chubanshe (Science Press), Beijing; CRC Press, Boca Raton, FL, 1992. ISBN 7-03-003066-4.
  72. R. Henstock. The efficiency of convergence factors for functions of a continuous real variable. J. London Math. Soc., 30:273–286, 1955. ISSN 0024-6107. [CrossRef]
  73. Shinya Hibino, Hui Hsiung Kuo, and Kimiaki Saitô. A stochastic integral by a near-martingale. Communications on Stochastic Analysis, 12(2):197, 2018. [CrossRef]
  74. GA Hunt. Markoff processes and potentials i. Illinois Journal of Mathematics, 1(1):44–93, 1957a.
  75. GA Hunt. Markoff processes and potentials ii. Illinois Journal of Mathematics, 1(3):316–369, 1957b. [CrossRef]
  76. GA Hunt. Markoff processes and potentials iii. Illinois Journal of Mathematics, 2(2):151–213, 1958. [CrossRef]
  77. Chii-Ruey Hwang, Hui-Hsiung Kuo, Kimiaki Saitô, and Jiayu Zhai. A general itô formula for adapted and instantly independent stochastic processes. Communications on Stochastic Analysis, 10(3):5, 2016.
  78. Albrecht Irle. Finanzmathematik. Studienbücher Wirtschaftsmathematik. Springer Spektrum, Wiesbaden, third edition, 2012. ISBN 978-3-8348-1574-3; 978-3-8348-8314-8. Die Bewertung von Derivaten. [The evaluation of derivatives]. [CrossRef]
  79. K. Itô. Stochastic integral. Proc. Imp. Acad., 20(8):519–524, 1944. [CrossRef]
  80. K. Itô. On a stochastic integral equation. Proc. Japan Acad., 22(2):32–35, 1946. [CrossRef]
  81. K. Itô. Stochastic differential equations in a differentiable manifold. Nagoya Math. J., 1:35–47, 1950.
  82. K. Itô. On a formula concerning stochastic differentials. Nagoya Math. J., 3:55–65, 1951a.
  83. K. Itô. Multiple Wiener integral. Journal of the Mathematical Society of Japan, 3(1):157–169, 1951b.
  84. K. Itô. On Stochastic Differential Equations, volume 4. American Mathematical Soc., 1951c.
  85. K. Itô. Lectures on Stochastic Processes, volume 24. Tata Institute of Fundamental Research, Bombay, 1961.
  86. K. Itô. Transformation of markov processes by multiplicative functionals. Ann. Inst. Fourier, 15(1):15–30, 1965.
  87. J. Jacod. Calculus of semimartingales and applications to the theory of continuous martingales. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 25:37–53, 1972.
  88. J. Jacod. Sur la construction des intégrales stochastiques et les sous-espaces stables de martingales. In Séminaire de Probabilités XI, pages 390–410. Springer, 1977.
  89. J. Jacod. Calcul stochastique et problèmes de martingales. Number Nr. 714 in Lecture Notes in Mathematics. Springer Verlag, Berlin-Heidelberg-New York, 1979. ISBN 9780387092539.
  90. J. Jacod. Intégrales stochastiques par rapport à une semimartingale vectorielle et changements de filtration. In Séminaire de Probabilités XIV 1978/79, pages 161–172. Springer, 1980.
  91. Jean Jacod. Multivariate point processes: predictable projection, radon-nikodym derivatives, representation of martingales. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 31(3):235–253, 1975.
  92. Jean Jacod and Albert N. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, second edition, 2003. ISBN 3-540-43932-3. [CrossRef]
  93. Adam Jakubowski. An almost sure approximation for the predictable process in the Doob-Meyer decomposition theorem. In Séminaire de Probabilités XXXVIII, volume 1857 of Lecture Notes in Math., pages 158–164. Springer, Berlin, 2005. [CrossRef]
  94. Adam Jakubowski and Markus Riedle. Stochastic integration with respect to cylindrical Lévy processes. Ann. Probab., 45(6B):4273–4306, 2017. ISSN 0091-1798. [CrossRef]
  95. K. Jänich. Topology. Undergraduate Texts in Mathematics. Springer-Verlag New York Inc., 1984. ISBN 9781461270188.
  96. Robert Jarrow and Philip Protter. A short history of stochastic integration and mathematical finance: the early years, 1880–1970. 45:75–91, 2004. [CrossRef]
  97. Guy Johnson and LL Helms. Class d supermartingales. Bulletin of the American Mathematical Society, 69(1):59–62, 1963.
  98. Marc Jornet. On the ayed-kuo stochastic integration for anticipating integrands. Stochastic Analysis and Applications, pages 1–25, 2022. [CrossRef]
  99. Y. Kabanov, R. Liptzer, and A. N. Shiryaev. Absolute continuity and singularity of probability measures in terms of stochastic integrals. Math. Sb., 104:227–247, 1977.
  100. Y. M. Kabanov, A. N. Shiryaev, J. M. Stojanov, and R. S. Liptser. From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift. Springer, 2006. ISBN 9783540307822. [CrossRef]
  101. Svenja Kaden and Jürgen Potthoff. Progressive stochastic processes and an application to the Itô integral. Stochastic Anal. Appl., 22(4):843–865, 2004. ISSN 0736-2994. [CrossRef]
  102. O. Kallenberg. Foundations of Modern Probability. Springer, 2. edition, 2002. ISBN 9780387953137. [CrossRef]
  103. Gopinath Kallianpur. Stochastic filtering theory, volume 13 of Applications of Mathematics. Springer-Verlag, New York-Berlin, 1980. ISBN 0-387-90445-X.
  104. Dhandapani Kannan and Vangipuram Lakshmikantham. Handbook of stochastic analysis and applications. CRC Press, 2001.
  105. Rajeeva L. Karandikar and B. V. Rao. On the second fundamental theorem of asset pricing, 2016. [CrossRef]
  106. Rajeeva L. Karandikar and B. V. Rao. Introduction to stochastic calculus. Indian Statistical Institute Series. Springer, Singapore, 2018. ISBN 978-981-10-8317-4; 978-981-10-8318-1. [CrossRef]
  107. J. F. C. Kingman. Completely random measures, volume 21. 1967.
  108. F. C. Klebaner. Introduction to Stochastic Calculus With Applications (2nd Edition). ICP, 2. edition, 2005.
  109. A. Klenke. Probability Theory: A Comprehensive Course. Universitext. Springer London, 2008. ISBN 9781848000483.
  110. M. Koller. Life Insurance Risk Management Essentials: Life Insurance Risk Management Essentials. EAA Series. Springer, 2011. ISBN 9783642207211. [CrossRef]
  111. J. Komlós. A generalization of a problem of Steinhaus. Acta Math. Acad. Sci. Hungar., 18:217–229, 1967. ISSN 0001-5954. [CrossRef]
  112. Hayri Korezlioglu and Claude Martaias. Stochastic integration for operator valued processes on hilbert spaces and on nuclear spaces. Stochastics: An International Journal of Probability and Stochastic Processes, 24(3):171–219, 1988. [CrossRef]
  113. Tomasz Kosmala and Markus Riedle. Stochastic integration with respect to cylindrical Lévy processes by p-summing operators. J. Theoret. Probab., 34(1):477–497, 2021. ISSN 0894-9840. [CrossRef]
  114. H. Kunita and S. Watanabe. On square integrable martingales. Nagoya Mathematical Journal, 30:209–245, 1967. [CrossRef]
  115. Hiroshi Kunita. Stochastic integrals based on martingales taking values in hilbert space. Nagoya Mathematical Journal, 38:41–52, 1970. [CrossRef]
  116. Hiroshi Kunita. Stochastic flows and stochastic differential equations, volume 24. Cambridge university press, 1997.
  117. Hiroshi Kunita and Takesi Watanabe. Markov processes and Martin boundaries. I. Illinois J. Math., 9:485–526, 1965. ISSN 0019-2082. URL http://projecteuclid.org/euclid.ijm/1256068151.
  118. H.-H. Kuo. Introduction to Stochastic Integration (Universitext). Springer, 1. edition, 2005. ISBN 9780387287201.
  119. Hui-Hsiung Kuo. The itô calculus and white noise theory: a brief survey toward general stochastic integration. Communications on Stochastic Analysis, 8(1):8, 2014.
  120. Hui-Hsiung Kuo, Anuwat Sae-Tang, and Benedykt Szozda. The itô formula for a new stochastic integral. Communications on Stochastic Analysis, 6(4):7, 2012.
  121. Jaroslav Kurzweil. Generalized ordinary differential equations and continuous dependence on a parameter. Czechoslovak Mathematical Journal, 7(3):418–449, 1957. [CrossRef]
  122. A. U. Kussmaul. Stochastic integration and generalized martingales. Research Notes in Mathematics, No. 11. Pitman Publishing, London-San Francisco, Calif.-Melbourne, 1977.
  123. Jean-François Le Gall et al. Brownian motion, martingales, and stochastic calculus, volume 274. Springer, 2016.
  124. E. Lenglart. Semi-martingales et intégrales stochastiques en temps continu. Revue du CETHEDEC, 20(75):91–160, 1983.
  125. Giorgio Letta. Martingales et intégration stochastique. Springer, 1984.
  126. P. Lévy. Théorie de l’addition des variables aléatoires. Paris : Gauthier-Villars, 1937.
  127. S. J. Lin. Stochastic analysis of fractional Brownian motions. Stochastics: An International Journal of Probability and Stochastic Processes, 55(1-2):121–140, 1995.
  128. R. Liptser and A. Shiryaev. Theory of Martingales, volume 49 of Mathematics and its Applications. Kluwer Academic Publishers, Dordrecht, 1989. ISBN 978-0-7923-0395-4.
  129. Robert S Liptser and Albert N Shiryaev. Statistics of random processes: I. General theory, volume 5. Springer Science & Business Media, 2013.
  130. Terry Lyons and Zhongmin Qian. System control and rough paths. Oxford Mathematical Monographs. Oxford University Press, Oxford, 2002. ISBN 0-19-850648-1. Oxford Science Publications. [CrossRef]
  131. Terry J. Lyons. Differential equations driven by rough signals. Rev. Mat. Iberoamericana, 14(2):215–310, 1998. ISSN 0213-2230. [CrossRef]
  132. Terry J. Lyons, Michael Caruana, and Thierry Lévy. Differential equations driven by rough paths, volume 1908 of Lecture Notes in Mathematics. Springer, Berlin, 2007. ISBN 978-3-540-71284-8; 3-540-71284-4. Lectures from the 34th Summer School on Probability Theory held in Saint-Flour, July 6–24, 2004, With an introduction concerning the Summer School by Jean Picard.
  133. H. P. McKean. Stochastic Integrals. Academic Press, New York, 1969.
  134. Edward James McShane. Stochastic Calculus and Stochastic Models. Academic Press, New York, 1974.
  135. EJ McShane. Stochastic differential equations. Journal of Multivariate Analysis, 5(2):121–177, 1975.
  136. P. Medvegyev. Stochastic Integration Theory. Oxford Graduate Texts in Mathematics. OUP Oxford, 2007. ISBN 9780199215256. [CrossRef]
  137. D. Meintrup and Schäffler S. Stochastik: Theorie und Anwendungen (Statistik und ihre Anwendungen) (German Edition). Springer, 1. edition, 2005. ISBN 9783540216766. [CrossRef]
  138. J. Memin. Espaces de semi martingales et changement de probabilité. Probability Theory and Related Fields, 52(1):9–39, 01 1980. ISSN 1432-2064. [CrossRef]
  139. M. Métivier. Semimartingales: A Course on Stochastic Processes, volume 2 of De Gruyter studies in mathematics. XI, Berlin, New York, 1982. ISBN 9783110086744.
  140. M. Métivier. Semimartingales. In Semimartingales. de Gruyter, 2011.
  141. M. Metivier and J. Pellaumail. Mesures stochastiques à valeurs dans des espaces L0. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 40:101–114, 1977.
  142. M. Métivier and J. Pellaumail. Stochastic integration. Academic Press, 2014.
  143. M. Métivier and G. Pistone. Une formule d’isométrie pour l’intégrale stochastique hilbertienne et équations d’évolution linéaires stochastiques. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 33(1):1–18, 1975.
  144. P. A. Meyer. Intégrales stochastiques iv–v. Sém. de Prob., X–XI.
  145. P.-A. Meyer. A decomposition theorem for supermartingales. Illinois Journal of Mathematics, 6(2):193–205, 1962. [CrossRef]
  146. P.-A. Meyer. Decomposition of supermartingales: The uniqueness theorem. Illinois Journal of Mathematics, 7(1):1–17, 1963. [CrossRef]
  147. P.-A. Meyer. Probabilités et Potentiel. Hermann, Paris, 1966.
  148. P.-A. Meyer. On the multiplicative decomposition of positive supermartingales. pages 103–116, 1967a.
  149. P.-A. Meyer. Intégrales stochastiques I. Séminaire de Probabilités de Strasbourg, 39:72–94, 1967b.
  150. P.-A. Meyer. Intégrales stochastiques II. Séminaire de probabilités de Strasbourg, 1:95–117, 1967c.
  151. P.-A. Meyer. Intégrales stochastiques III. Séminaire de probabilités de Strasbourg, 1:118–141, 1967d.
  152. P.-A. Meyer. Intégrales stochastiques IV. Séminaire de probabilités de Strasbourg, 1:142–162, 1967e.
  153. P.-A. Meyer. Martingales and stochastic integrals I., volume 284 of Lecture notes in mathematics. Springer, 1972. [CrossRef]
  154. P. A. Meyer. Notes sur les intégrales stochastiques. IV. Caractérisation de BMO par un opérateur maximal. pages 446–481, 1977a. [CrossRef]
  155. P.-A. Meyer. Le théorème fondamental sur les martingales locales. Séminaire de Probabilités XI, 581:463–464, 1977b.
  156. P.-A. Meyer. Caracterisation des semimartingales, d’apres dellacherie. In C. Dellacherie, P.-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XIII, volume 721 of Lecture Notes in Mathematics, pages 620–623. Springer Berlin Heidelberg, 1979. ISBN 978-3-540-09505-7. [CrossRef]
  157. P.-A. Meyer. Un cours sur les intégrales stochastiques. In M. Émery and M. Yor, editors, Séminaire de probabilités 1967 - 1980, volume 1771 of Lecture Notes in Mathematics, pages 174–329. Springer Berlin Heidelberg, 2002. ISBN 978-3-540-42813-8. [CrossRef]
  158. Paul-André Meyer. Les processus stochastiques de 1950 à nos jours, pages 813–848. Birkhäuser, Basel, 2000.
  159. R Mikulevicius and BL Rozovskii. Martingale problems for stochastic pdes. Stochastic partial differential equations: six perspectives, 64:243–326, 1999.
  160. Remigijus Mikulevicius and Boris L Rozovskii. Normalized stochastic integrals in topological vector spaces. In Séminaire de probabilités XXXII, pages 137–165. Springer, 1998. [CrossRef]
  161. M. Motoo and S. Watanabe. On a class of additive functionals of Markov processes. Journal of Mathematics of Kyoto University, 4(3):429–469, 1965.
  162. Michael Mürmann. Wahrscheinlichkeitstheorie und stochastische Prozesse. Springer, 2014.
  163. M. Musiela and M. Rutkowski. Martingale Methods in Financial Modelling (Stochastic Modelling and Applied Probability). Springer, 2nd edition, 2011. ISBN 9783540209669.
  164. Jacques Neveu. Martingales à temps discret. Masson et Cie, Éditeurs, Paris, 1972.
  165. Raymond Edward Alan Christopher Paley, Norbert Wiener, and Antoni Zygmund. Notes on random functions. Mathematische Zeitschrift, 37(1):647–668, 1933. [CrossRef]
  166. J. Pellaumail. Sur l’intégrale stochastique et la décomposition de Doob-Meyer. Société mathématique de France, 1973.
  167. D. Pollard, R. Gill, and B. D. Ripley. A User’s Guide to Measure Theoretic Probability. Cambridge Series in Statistica. Cambridge University Press, 2002. ISBN 9780521002899.
  168. Jean-Luc Prigent. Weak convergence of financial markets. Springer Finance. Springer-Verlag, Berlin, 2003. ISBN 3-540-42333-8. [CrossRef]
  169. P. Protter. A comparison of stochastic integrals. The Annals of Probability, 7(2):276–289, 1979. ISSN 0091–1798. [CrossRef]
  170. P. Protter. Semimartingales and stochastic differential equations. Lecture Notes 3rd Chilean Winter School in Probab. and Statist., Technical Report, pages 85–25, 1985.
  171. P. Protter. Stochastic integration without tears. Stochastics: An International Journal of Probability and Stochastic Processes, 16(3-4):295–325, 1986. [CrossRef]
  172. P. Protter. Stochastic Integration and Differential Equations: Version 2.1 (Stochastic Modelling and Applied Probability). Springer, 2010. ISBN 9783642055607.
  173. K. Murali Rao. On decomposition theorems of Meyer. Math. Scand., 24:66–78, 1969. ISSN 0025-5521. [CrossRef]
  174. Malempati M Rao. Stochastic processes: general theory, volume 342. Springer Science & Business Media, 2013.
  175. Murali Rao. Doob’s decomposition and Burkholder’s inequalities. In Séminaire de Probabilités, VI (Univ. Strasbourg, année universitaire 1970–1971; Journées Probabilistes de Strasbourg, 1971), Lecture Notes in Math., Vol. 258, pages 198–201. Springer, Berlin-New York, 1972.
  176. D. Revuz and M. Yor. Continuous Martingales and Brownian Motion, volume 293 of Grundlehren der mathematischen Wissenschaften A series of comprehensive studies in mathematics. Springer, Berlin, Heidelberg, 3rd edition, 1999. ISBN 9783540643258.
  177. Markus Riedle. Stochastic integration with respect to cylindrical Lévy processes in Hilbert spaces: an L2 approach. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 17(1):1450008, 19, 2014. ISSN 0219-0257. [CrossRef]
  178. L. Rogers and D. Williams. Diffusions, Markov Processes and Martingales: Volume 2, Itô Calculus. Cambridge Mathematical Library. Cambridge University Press, 2000. ISBN 9780521775939.
  179. L. C. G. Rogers. Arbitrage with fractional Brownian motion. Mathematical Finance, 7(1):95–105, 1997. ISSN 0960–1627.
  180. Barbara Rüdiger. Stochastic integration with respect to compensated Poisson random measures on separable Banach spaces. Stochastics and Stochastic Reports, 76(3):213–242, 2004.
  181. J. Ruiz de Chávez. Le théorème de Paul Lévy pour des mesures signées. In Seminar on probability, XVIII, volume 1059 of Lecture Notes in Math., pages 245–255. Springer, Berlin, 1984. [CrossRef]
  182. Ludger Rüschendorf. Stochastic processes and financial mathematics. Springer, 2023.
  183. A. N. Shiryaev. Absolute continuity and singularity of probability measures in functional spaces. pages 209–225, 1980.
  184. A. N. Shiryaev and A. Cherny. Vector stochastic integrals and the fundamental theorems of asset pricing. Proceedings of the Steklov Institute of Mathematics-Interperiodica Translation, 237:6–49, 2002.
  185. Albert N. Shiryaev. Essentials of stochastic finance, volume 3 of Advanced Series on Statistical Science & Applied Probability. World Scientific Publishing Co., Inc., River Edge, NJ, 1999. ISBN 981-02-3605-0. Facts, models, theory, Translated from the Russian manuscript by N. Kruzhilin. [CrossRef]
  186. S. E. Shreve and I. Karatzas. Brownian Motion and Stochastic Calculus (Graduate Texts in Mathematics). Springer, 2nd edition, corrected 8th printing, 2008. ISBN 9780387976556.
  187. A. V. Skorohod. A remark on square integrable martingales. Teor. Verojatnost. i Primenen., 20:199–202, 1975. ISSN 0040-361X.
  188. A. V. Skorokhod. Studies in the theory of random processes. pages viii+199, 1965. Translated from the Russian by Scripta Technica, Inc.
  189. A. V. Skorokhod. Integral representation of martingales. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 11:367–388, 1969.
  190. R. L. Stratonovich. A new representation for stochastic integrals and equations. SIAM Journal on Control, 4(2):362–371, 1966. [CrossRef]
  191. R. Cont and P. Tankov. Financial Modelling with Jump Processes, Second Edition. Chapman & Hall/CRC Financial Mathematics Series. Taylor & Francis, 2003. ISBN 9780203485217. [CrossRef]
  192. Tin-Lam Toh and Tuan-Seng Chew. The Riemann approach to stochastic integration using non-uniform meshes. Journal of Mathematical Analysis and Applications, 280(1):133–147, 2003. ISSN 0022-247X. [CrossRef]
  193. Aleš Černý and Johannes Ruf. Simplified stochastic calculus via semimartingale representations. Electron. J. Probab., 27:Paper No. 3, 32, 2022. [CrossRef]
  194. John B Walsh. An introduction to stochastic partial differential equations. In École d’Été de Probabilités de Saint Flour XIV-1984, pages 265–439. Springer, 1986.
  195. A. T. Wang. Quadratic variation of functionals of Brownian motion. The Annals of Probability, pages 756–769, 1977. ISSN 0091-1798.
  196. Shinzo Watanabe. The Japanese contributions to martingales. J. Électron. Hist. Probab. Stat., 5(1):13, 2009. ISSN 1773-0074.
  197. Norbert Wiener. Differential-space. Journal of Mathematics and Physics, 2(1-4):131–174, 1923.
  198. Norbert Wiener. The Dirichlet problem. Journal of Mathematics and Physics, 3(3):127–146, 1924. [CrossRef]
  199. Jia-An Yan. Introduction to Stochastic Finance. Universitext. Springer Singapore, 1st edition, 2018. ISBN 978-981-13-1657-9. [CrossRef]
  200. Ch Yoeurp. Décompositions des martingales locales et formules exponentielles. In Séminaire de Probabilités X Université de Strasbourg, pages 432–480. Springer, 1976.
  201. M. Yor. Un exemple de processus qui n’est pas une semi-martingale. Temps Locaux, 52:219–222, 1978.
  202. B. Øksendal. Stochastic Differential Equations: An Introduction with Applications (Universitext). Springer, 6th edition, 2010. ISBN 9783540047582.
1. In practice, the modelling of asset prices typically involves processes that exhibit positivity and may incorporate a drift component, unlike the idealized model of a Brownian motion.
2. See Cohen and Elliott [29], Definition 5.5.16.
3. See Cohen and Elliott [29], Theorem 5.5.18.
4. See Cohen and Elliott [29], Definition 13.5.1.
5. This was initially shown in Lévy [126]; see also Itô [85] for a proof in English, Kunita and Watanabe [114] for a proof in the classical semimartingale setting, or Applebaum [2], Proposition 2.7.1, for a modern textbook treatment.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.