Minimal Entropy and Entropic Risk Measures: A Unified Framework via Relative Entropy

Abstract
We introduce a new coherent risk measure called the minimal entropy risk measure. This measure is based on the minimal entropy σ-martingale measure, which itself is inspired by the minimal entropy martingale measure well-known in option pricing. While the minimal entropy martingale measure is commonly used for pricing and hedging, the minimal entropy σ-martingale measure has not previously been studied, nor has it been analyzed as a traditional risk measure. We address this gap by clearly defining this new risk measure and examining its fundamental properties. In addition, we revisit the entropic risk measure, typically expressed through an exponential formula. We provide an alternative definition using a supremum over Kullback–Leibler divergences, making its connection to entropy clearer. We verify important properties of both risk measures, such as convexity and coherence, and extend these concepts to dynamic situations. We also illustrate their behavior in scenarios involving optimal risk transfer. Our results link entropic concepts with incomplete-market pricing and demonstrate how both risk measures share a unified entropy-based foundation once market constraints are considered.

1. Introduction

Risk measures play an essential role in both academic research and financial practice, as they provide a systematic way to assess the potential losses of a financial position. Their importance has been highlighted ever since the seminal work of [2], which introduced an axiomatic framework for coherent risk measures. Subsequent studies, such as ([23], Chapter 4) or [12], have refined and extended these ideas.
In the monetary risk-measure framework, one models a financial position by a real-valued random variable $X$ on a probability space $(\Omega, \mathcal F, P)$. The number $X(\omega)$ is the discounted net worth of the position if scenario $\omega$ occurs. A monetary risk measure $\rho$ assigns to $X$ a real number $\rho(X)$, which represents the minimal amount of capital needed to make $X$ acceptable according to certain risk criteria. Desirable properties include monotonicity (increasing payoffs lowers the risk) and translation invariance (adding a sure amount of cash decreases the risk by the same amount); when $\rho$ is further assumed to be convex or coherent, it also reflects the benefits of diversification. We refer to [2,12,23] for standard references on these axioms.
A preferred way to price financial claims in financial mathematics (and hence to measure risk, when viewed from a market perspective) is to determine the price as the expectation of the discounted payoff under an equivalent (local) martingale measure. However, in general markets and for certain price processes, a (local) martingale measure need not exist. Instead, from the more general perspective of no-arbitrage (or, more precisely, no free lunch with vanishing risk), one can only ensure the existence of an equivalent σ-martingale measure (EσMM). See, for instance, [15,33,48] for details on σ-martingale arguments. In addition, in incomplete markets there may be multiple such measures, and a natural question is how to select one “best” or “preferred” measure among the many.
One popular selection criterion in incomplete markets is to pick the measure that is “closest” to the real-world probability measure $P$ in terms of relative entropy (also known as the Kullback–Leibler divergence); see [13,24,27,40]. Minimizing relative entropy leads to the well-known minimal entropy martingale measure, a construction which has proven valuable in option pricing and hedging and which is closely connected to maximizing the investor's utility (see Section 4). However, most of the literature focuses on the local martingale setting. The corresponding minimal entropy σ-martingale measure has not been studied, even though, in some markets, one must work with EσMMs. We close this gap by introducing and studying the minimal entropy σ-martingale measure:
$$Q^E = \operatorname*{arg\,min}_{Q \in \mathcal M_\sigma} H(Q\,\|\,P),$$
where $\mathcal M_\sigma$ denotes the set of EσMMs and $H(\cdot\,\|\,P)$ is the relative entropy with respect to $P$. The associated minimal entropy risk measure is then
$$\rho^E(X) = E_{Q^E}[-X].$$
This measure is an extension of the classical minimal entropy martingale measure (since an equivalent local martingale measure is a special case of an EσMM) but is strictly more general whenever no local martingale measure exists. Notably, while the minimal entropy martingale measure has been studied for pricing and hedging, it has not been viewed or analyzed as a traditional risk measure in the classical sense, and the σ-martingale version has not been examined at all. In this paper, we fill this gap by proving that the minimal entropy σ-martingale measure leads to a coherent risk measure with desirable properties.
One of the most popular risk measures is the entropic risk measure. At first glance, one might suspect connections between entropic risk measures and minimal entropy methods because of the shared “entropy” label. Nevertheless, the entropic risk measure is typically introduced via the exponential formula
$$e_\gamma(X) = \frac{1}{\gamma}\,\log E\big[e^{-\gamma X}\big],$$
which does not reveal any connection to entropy. Its name stems from its robust representation
$$e_\gamma(X) = \sup_{Q \ll P}\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big), \tag{1}$$
showing that it penalizes deviations from $P$ in proportion to their relative entropy. Although this measure is well understood as a convex, time-consistent risk measure, the label “entropic” is not fully transparent when one begins from the exponential definition. Here, we show that one can equivalently start with the relative-entropy-based formulation (1), which makes the “entropic” nature manifest; one then arrives at the same results and conclusions quite easily.
The entropic risk measure is strictly convex (rather than linear) in the payoff $X$. In contrast, the minimal entropy risk measure picks out a single measure, namely the arbitrage-free measure of least relative entropy, and as a result it turns out to be coherent. Despite these structural differences, both measures share a fundamental entropy-based underpinning.
In summary, the main contributions of this paper are:
  • We introduce the minimal entropy σ-martingale measure for general semimartingale models (with dividends), which is new to the literature.
  • We prove that the induced minimal entropy risk measure is coherent and extends the classical results on the minimal entropy martingale measure.
  • We define the entropic risk measure via its robust representation and thereby provide an alternative approach which highlights its connection to entropy.
  • We demonstrate key properties, including convexity, coherence, dynamic consistency, and optimal risk transfer, for both measures, thereby showing that minimal-entropy techniques are not only pricing tools but also valid risk measures in their own right.
  • We provide estimates comparing both risk measures with their real-world expectations.
The paper is organized as follows. Section 2 contains the precise definitions of both the minimal entropy σ-martingale measure (obtained by minimizing relative entropy under the no-arbitrage condition) and the entropic risk measure (via a relative-entropy supremum), along with existence criteria; we establish their existence and compare their properties, highlighting the new σ-martingale generalization. Section 3 recalls the definitions of monetary risk measures, convexity, and coherence, and proves that the minimal entropy risk measure is coherent while the entropic measure is convex. In Section 4, we explore duality, highlight the deeper relationship between entropy-based valuations and risk measures, and, in particular, show that our definition of the entropic risk measure is equivalent to the more common definition in the literature. Section 5 provides dynamic versions and establishes time consistency. Finally, Section 6 discusses optimal risk transfer and how each of these risk measures behaves in that context.

2. Definition and Existence

Let $(\Omega, \mathcal F, P)$ be a probability space, $L^\infty$ the space of bounded random variables, and $\mathcal P$ the set of probability measures on $(\Omega, \mathcal F)$ that are absolutely continuous with respect to $P$. Furthermore, let $\mathbb F = (\mathcal F_t)_{0 \le t \le T}$ be a filtration with $\mathcal F_T = \mathcal F$, and let $S$ be a (potentially multi-dimensional) stochastic process adapted to $\mathbb F$. We assume $\widehat S = (\widehat S_t)_{0 \le t \le T}$ is the discounted price process (for details on discounting, see [47]).
In most models, there exists at least one probability measure under which the discounted process $\widehat S$ is a local martingale. However, [15] showed that in an arbitrage-free market (more precisely, a market that satisfies no free lunch with vanishing risk), one can in general only guarantee the existence of a probability measure under which $\widehat S$ is a σ-martingale. Therefore, it makes sense to study σ-martingales, equivalent σ-martingale measures, and the risk measures derived from them.
First, let us recall:
Definition 1.
A one-dimensional semimartingale $S$ is called a σ-martingale if there exists a sequence of predictable sets $D_n$ such that:
(i) $D_n \subseteq D_{n+1}$ for all $n$;
(ii) $\bigcup_{n=1}^{\infty} D_n = \Omega \times \mathbb R_+$;
(iii) for each $n \ge 1$, the process $\mathbf 1_{D_n} \cdot S$ is a uniformly integrable martingale.
Such a sequence $(D_n)_{n \in \mathbb N}$ is called a σ-localizing sequence. A d-dimensional semimartingale is called a σ-martingale if each of its components is a one-dimensional σ-martingale.
These notions, introduced by Chou [7], further explored by Émery [51], and subsequently discussed by, for example, Jacod and Shiryaev [31] and Kallsen [33], allow certain processes to fail to be true local martingales yet still satisfy a limiting or piecewise martingale property.
Definition 2.
An equivalent σ-martingale measure (EσMM) is a probability measure $Q$ with $Q \sim P$ such that $\widehat S$ is a σ-martingale under $Q$. The set of all EσMMs is denoted by $\mathcal M_\sigma$.
We also define the broader set of absolutely continuous measures ($Q \ll P$) under which $\widehat S$ is a σ-martingale:
$$\mathcal M_\sigma^{ac} := \big\{Q;\ Q \ll P,\ \widehat S \text{ is a σ-martingale under } Q\big\}.$$
For more details on σ-martingales, see, for example, [28,33,48].
Definition 3.
The relative entropy $H(Q\,\|\,P)$ of a probability measure $Q$ with respect to $P$ is defined as
$$H(Q\,\|\,P) = \begin{cases} \displaystyle\int_\Omega \log\frac{dQ}{dP}\, dQ = E_Q\!\left[\log\frac{dQ}{dP}\right] & \text{for } Q \ll P, \\[4pt] +\infty & \text{otherwise}. \end{cases}$$
The relative entropy provides a notion of distance between a probability measure Q and a reference probability measure P . Even though it can be interpreted as a distance, it is not a metric since neither the symmetry property nor the triangle inequality holds.
Beyond financial mathematics, relative entropy is a crucial concept in statistical physics and information theory, where it is referred to as the Kullback–Leibler divergence [46]. It also plays a fundamental role in large deviations theory, where it underpins results such as Sanov’s theorem [8]. A comprehensive summary of its applications can be found in [6].
Further illustrations of this definition are available in [11], including a demonstration of how relative entropy can be understood as a distance measure. In statistics, this interpretation is reinforced by results such as Stein’s Lemma, which can be found in [30, Satz 10.4].
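To make the definition concrete, here is a minimal numerical sketch (Python with NumPy; the two discrete distributions are arbitrary illustrations) that evaluates $H(Q\,\|\,P)$ on a finite sample space and exhibits the asymmetry noted above:

```python
import numpy as np

def relative_entropy(q, p):
    """H(Q || P) for discrete distributions given as probability vectors.

    Returns +inf if Q is not absolutely continuous w.r.t. P
    (mass of Q where P vanishes); 0*log(0) is treated as 0.
    """
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    if np.any((p == 0) & (q > 0)):
        return np.inf
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(relative_entropy(q, p))  # ~0.0258
print(relative_entropy(p, q))  # ~0.0253 -- asymmetric, so H is not a metric
```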
Definition 4.
(a)
An equivalent σ-martingale measure $Q^E \in \mathcal M_\sigma$ is called the minimal entropy σ-martingale measure if it minimizes the relative entropy (or Kullback–Leibler divergence) $H(\cdot\,\|\,P)$ among all $Q \in \mathcal M_\sigma$, i.e.,
$$H(Q^E\,\|\,P) = \inf_{Q \in \mathcal M_\sigma} H(Q\,\|\,P).$$
The corresponding minimal entropy risk measure $\rho^E$ is defined by
$$\rho^E(X) := E_{Q^E}[-X].$$
(b)
For $X \in L^\infty$ and $\gamma > 0$, the entropic risk $e_\gamma$ is defined as
$$e_\gamma(X) := \sup_{Q \in \mathcal P}\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big).$$
Remark 1.
In the above definition of $e_\gamma$, the parameter $\gamma > 0$ can be viewed as a risk-aversion level or scaling factor, much like in exponential-utility frameworks where larger $\gamma$ corresponds to greater risk aversion. Thus, $e_\gamma$ can be seen as an “entropic” or “exponential-penalized” valuation: the smaller the $\gamma$, the stronger the penalty $\frac{1}{\gamma}H(\cdot\,\|\,P)$ on deviating from the reference measure $P$; the larger the $\gamma$, the more weight the supremum places on unfavorable alternative measures.
Note that the existence of these risk measures is not immediately clear. In particular, the existence of a minimal entropy σ-martingale measure can still depend on the properties of S. For instance, if the market is not arbitrage-free (e.g., the No Free Lunch With Vanishing Risk condition fails) and S represents the price process of a tradable asset, then no equivalent σ-martingale measure can exist. Even if the market does satisfy NFLVR, one may still require additional conditions on the price processes (e.g., local boundedness or boundedness from below) to ensure that there is some equivalent σ-martingale measure carrying finite relative entropy. For further details, see [14,15], and [16].
As opposed to the minimal entropy σ-martingale measure, the entropic risk always exists.
Theorem 1.
The entropic risk always exists.
Proof.
It suffices to show that $e_\gamma(X)$ is a finite number.
Let $X \in L^\infty(\mathcal F)$. It is straightforward to verify
$$\lim_{\gamma \to 0} |e_\gamma(X)| = |E_P[-X]| < \infty$$
and
$$\lim_{\gamma \to \infty} |e_\gamma(X)| = |\operatorname{ess\,sup}(-X)| < \infty,$$
because $X$ is almost surely bounded.
Since $e_\gamma(X)$ is increasing in $\gamma$, we conclude for any fixed $\tilde\gamma > 0$:
$$-\infty < \lim_{\gamma \to 0} e_\gamma(X) \le e_{\tilde\gamma}(X) \le \lim_{\gamma \to \infty} e_\gamma(X) < \infty. \qquad \square$$
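The behavior established in this proof can be observed numerically. A small sketch (Python/NumPy; the uniform payoff is an arbitrary illustration) evaluates $e_\gamma$ through its exponential form, which Theorem 6 in Section 4 shows to be equivalent to Definition 4:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=100_000)   # bounded position, ess sup(-X) = 1

def entropic_risk(X, gamma):
    # e_gamma(X) = (1/gamma) log E[exp(-gamma X)], evaluated with a
    # log-sum-exp shift for numerical stability
    a = -gamma * X
    m = a.max()
    return (m + np.log(np.mean(np.exp(a - m)))) / gamma

print("E[-X] =", np.mean(-X))              # ~ -0.5
for g in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(g, entropic_risk(X, g))
# The values increase in gamma, tending to E[-X] as gamma -> 0
# and to ess sup(-X) = 1 as gamma -> infinity.
```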
While the minimal entropy σ-martingale measure is new, the notion of the minimal entropy martingale measure has been studied extensively in the literature. The definition of the latter is analogous to ours, but instead of equivalent σ-martingale measures, one minimizes the entropy among all equivalent martingale measures (see, for example, [29,38,39,43,45]). There are numerous criteria in the literature for the existence of the minimal entropy martingale measure. Most of these criteria impose strong conditions on the stochastic process $\widehat S$. A prominent example is when $\widehat S$ is a Lévy process; for this case, Theorem 3.1 in [26] provides a proof of existence, albeit with a highly technical approach. Another simple result is the existence of the minimal entropy martingale measure when $\widehat S$ is bounded, as shown in [24, Theorem 2.1]. Other results depend on the specific form of the Radon–Nikodym derivative, e.g., [29], or are applicable only in discrete-time settings, e.g., [23, Corollary 3.27].
For the minimal entropy σ-martingale measure, things are easier. We now state and prove a theorem showing that, under a mild condition, namely the existence of at least one σ-martingale measure $Q$ with finite $H(Q\,\|\,P)$, one obtains the existence (and uniqueness) of a minimal entropy σ-martingale measure. This holds even if $\widehat S$ is not locally bounded and even if no equivalent local martingale measure exists.
For the proof, we need some lemmas.
Lemma 1.
Let $\mathscr P$ be the set of all probability measures on $(\Omega, \mathcal F)$. We have:
(a)
$H(Q\,\|\,P) \ge 0$. Furthermore, $H(Q\,\|\,P) = 0$ if and only if $Q = P$.
(b)
The mapping
$$\mathscr P \to [0, \infty], \qquad Q \mapsto H(Q\,\|\,P)$$
is convex, and strictly convex on the set of probability measures which are absolutely continuous with respect to $P$.
Proof.
We have:
$$H(Q\,\|\,P) = \begin{cases} E_P\!\left[\dfrac{dQ}{dP}\log\dfrac{dQ}{dP}\right] & \text{for } Q \ll P, \\[4pt] +\infty & \text{else}, \end{cases}$$
hence it is reasonable to take a closer look at the function $\phi$, which we define as the continuous extension of $x \mapsto x \log x$ to $[0, \infty)$ (with $\phi(0) = 0$). Because $\phi''(x) = \frac{1}{x} > 0$ for $x > 0$, Theorem A1 yields the strict convexity of $\phi$.
It is easy to see that the function $x \mapsto x \log x - x$ takes its minimum value $-1$ at $x = 1$, and only there. Hence $x \log x \ge x - 1$, and thus
$$\phi(x) \ge x - 1 \quad \text{for all } x \ge 0, \tag{2}$$
with equality if and only if $x = 1$.
(a)
According to Equation (2), we have for $Q \ll P$:
$$H(Q\,\|\,P) = E_P\!\left[\frac{dQ}{dP}\log\frac{dQ}{dP}\right] \ge E_P\!\left[\frac{dQ}{dP} - 1\right] = E_Q[1] - 1 = 0.$$
Now let $H(Q\,\|\,P) = 0$; we must show that $Q = P$ follows. We have:
$$0 = H(Q\,\|\,P) = E_P\!\left[\frac{dQ}{dP}\log\frac{dQ}{dP}\right] - \underbrace{E_P\!\left[\frac{dQ}{dP} - 1\right]}_{=0} = E_P\Big[\underbrace{\frac{dQ}{dP}\log\frac{dQ}{dP} - \frac{dQ}{dP} + 1}_{\ge 0 \text{ by Equation } (2)}\Big].$$
Hence $P$-almost everywhere we have $\frac{dQ}{dP}\log\frac{dQ}{dP} = \frac{dQ}{dP} - 1$, and since equality in (2) holds only at $x = 1$, it follows that $\frac{dQ}{dP} = 1$ $P$-almost surely, i.e., $Q = P$. If $Q$ is not absolutely continuous with respect to $P$, the statement is obvious.
(b)
For probability measures that are not absolutely continuous with respect to $P$, the statement is obvious, so we focus on the case $Q_1, Q_2 \ll P$. For $\lambda_1, \lambda_2 \ge 0$ with $\lambda_1 + \lambda_2 = 1$, we have:
$$H(\lambda_1 Q_1 + \lambda_2 Q_2\,\|\,P) = E_P\Big[\phi\Big(\lambda_1\frac{dQ_1}{dP} + \lambda_2\frac{dQ_2}{dP}\Big)\Big] \le E_P\Big[\lambda_1\,\phi\Big(\frac{dQ_1}{dP}\Big) + \lambda_2\,\phi\Big(\frac{dQ_2}{dP}\Big)\Big] = \lambda_1 H(Q_1\,\|\,P) + \lambda_2 H(Q_2\,\|\,P).$$
This proves convexity. Since $\phi$ is strictly convex, the inequality is strict whenever $\frac{dQ_1}{dP} \ne \frac{dQ_2}{dP}$ on a set of positive $P$-measure; thus, strict convexity follows. $\square$
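Part (b) can be checked numerically on a finite sample space; a sketch (Python/NumPy, with arbitrary illustrative distributions):

```python
import numpy as np

def kl(q, p):
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

p  = np.array([0.25, 0.25, 0.25, 0.25])   # reference measure P
q1 = np.array([0.70, 0.10, 0.10, 0.10])
q2 = np.array([0.10, 0.40, 0.40, 0.10])

for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    mix = lam * q1 + (1 - lam) * q2
    print(f"lam={lam:.2f}  H(mix||P)={kl(mix, p):.4f}  "
          f"convex bound={lam*kl(q1, p) + (1-lam)*kl(q2, p):.4f}")
# For 0 < lam < 1 the entropy of the mixture lies strictly below the
# convex bound, illustrating the strict convexity of H(. || P).
```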
Lemma 2.
Suppose there exists a measure $Q_0 \in \mathcal M_\sigma^{ac}$ with
$$H(Q_0\,\|\,P) < \infty \quad \text{and} \quad H(Q_0\,\|\,P) \le H(Q\,\|\,P) \ \text{for all } Q \in \mathcal M_\sigma^{ac},$$
and suppose that $\mathcal M_\sigma$ contains at least one measure of finite relative entropy. Then $Q_0$ is the unique minimal entropy σ-martingale measure.
Proof.
Define the set
$$\Gamma := \{Q \in \mathcal M_\sigma^{ac};\ H(Q\,\|\,P) < \infty\}.$$
By assumption, $\Gamma$ is nonempty because $Q_0 \in \Gamma$. Thus, we have
$$I(\Gamma, P) := \inf_{Q \in \Gamma} H(Q\,\|\,P) < \infty.$$
Next, we show that $\mathcal M_\sigma^{ac}$ is convex and closed in total variation.
For any $Q_1, Q_2 \in \mathcal M_\sigma^{ac}$, their convex combination $\alpha Q_1 + (1 - \alpha)Q_2$, $\alpha \in [0, 1]$, is also in $\mathcal M_\sigma^{ac}$, since σ-martingales form a vector space; by the convexity of relative entropy (Lemma 1), convex combinations of elements of $\Gamma$ stay in $\Gamma$ as well.
The set $\mathcal M_\sigma^{ac}$ can be expressed through linear constraints as
$$\mathcal M_\sigma^{ac} = \{Q \ll P : E_Q[f_i] = 0 \text{ for a suitable family of bounded measurable functions } f_i\},$$
and sets defined via linear constraints of this form are closed in the total variation topology. Moreover, measures of infinite entropy do not affect the infimum, so $I(\mathcal M_\sigma^{ac}, P) = I(\Gamma, P) < \infty$.
By applying Csiszár's Theorem A6 to the convex, variation-closed set $\mathcal M_\sigma^{ac}$, we conclude that $P$ has a unique I-projection $\tilde Q$ on $\mathcal M_\sigma^{ac}$, which satisfies:
$$H(\tilde Q\,\|\,P) = \inf_{Q \in \mathcal M_\sigma^{ac}} H(Q\,\|\,P) = \inf_{Q \in \Gamma} H(Q\,\|\,P).$$
By hypothesis, $Q_0$ already attains this minimum, so it must coincide with $\tilde Q$; uniqueness follows directly from the strict convexity of relative entropy.
Finally, we verify that $Q_0$ is equivalent to $P$. By hypothesis, there exists a measure $Q_1 \in \mathcal M_\sigma \cap \Gamma$, which is equivalent to $P$. If $Q_0$ were not equivalent to $P$, its support would be strictly smaller than that of $Q_1$, which contradicts minimality via the Csiszár–Gibbs inequality (Theorem A7). Hence, $Q_0 \sim P$.
Thus, $Q_0 \in \mathcal M_\sigma^{ac}$ is equivalent to $P$ and minimizes the entropy. Therefore, it is the unique minimal entropy σ-martingale measure. $\square$
Lemma 3.
Let $(\Omega, \mathcal F, P)$ be a probability space, and let $\tilde Q, Q_1, Q_2, \dots$ be probability measures on $(\Omega, \mathcal F)$ with $\tilde Q, Q_n \ll P$ for each $n$ and
$$\frac{dQ_n}{dP} \xrightarrow[n \to \infty]{\ L^1(P)\ } \frac{d\tilde Q}{dP}.$$
Suppose $X$ is a semimartingale that is a σ-martingale under each measure $Q_n$. Then $X$ is also a σ-martingale under $\tilde Q$.
Proof.
Since $X$ is a σ-martingale under each measure $Q_n$, there exists, for every $n$, a suitable family of predictable sets making each localized piece of $X$ a uniformly integrable martingale under $Q_n$. By reindexing and combining these families appropriately, we can select a single sequence $(D_k)_{k \in \mathbb N}$ of predictable sets for which $\mathbf 1_{D_k} \cdot X$ is a uniformly integrable martingale simultaneously under every measure $Q_n$.
To show that $\mathbf 1_{D_k} \cdot X$ remains a uniformly integrable martingale under $\tilde Q$, fix arbitrary times $s < t$ and an arbitrary set $A \in \mathcal F_s$. Define the bounded random variable
$$Z = \mathbf 1_A\big((\mathbf 1_{D_k} \cdot X)_t - (\mathbf 1_{D_k} \cdot X)_s\big).$$
Since $\mathbf 1_{D_k} \cdot X$ is a martingale under each $Q_n$, we immediately have $E_{Q_n}[Z] = 0$ for every $n$. Using the convergence of the Radon–Nikodym derivatives in $L^1(P)$ and the boundedness of $Z$, we get:
$$E_{\tilde Q}[Z] = E_P\Big[Z\,\frac{d\tilde Q}{dP}\Big] = E_P\Big[\lim_{n \to \infty} Z\,\frac{dQ_n}{dP}\Big] = \lim_{n \to \infty} E_P\Big[Z\,\frac{dQ_n}{dP}\Big] = \lim_{n \to \infty} E_{Q_n}[Z] = 0.$$
Since $A \in \mathcal F_s$ was arbitrary, this implies that the conditional expectation of $(\mathbf 1_{D_k} \cdot X)_t$ given $\mathcal F_s$ under $\tilde Q$ equals $(\mathbf 1_{D_k} \cdot X)_s$, establishing the martingale property of each localized piece under $\tilde Q$. Since this argument applies to each predictable set $D_k$, it follows that $X$ is a σ-martingale under $\tilde Q$. $\square$
Theorem 2.
Suppose there exists at least one measure $Q_1 \in \mathcal M_\sigma$ with finite entropy $H(Q_1\,\|\,P) < \infty$. Then there exists a unique minimal entropy σ-martingale measure $Q^E \in \mathcal M_\sigma$, characterized by
$$H(Q^E\,\|\,P) = \inf_{Q \in \mathcal M_\sigma} H(Q\,\|\,P) < \infty.$$
Proof.
We define
$$\mathcal M_0 = \{Q \in \mathcal M_\sigma^{ac} : H(Q\,\|\,P) < \infty\}.$$
By assumption, $\mathcal M_0 \neq \emptyset$, so there exists a sequence $\{Q_n\}_{n \ge 1} \subseteq \mathcal M_0$ such that $\big(H(Q_n\,\|\,P)\big)_{n \in \mathbb N}$ is decreasing and satisfies
$$H(Q_n\,\|\,P) \longrightarrow \inf_{Q \in \mathcal M_\sigma^{ac}} H(Q\,\|\,P).$$
Because the sequence of relative entropies is monotone, we obtain for the convex function $\phi(x) = x \log x$
$$\sup_{n \ge 1} E_P\Big[\phi\Big(\frac{dQ_n}{dP}\Big)\Big] \le H(Q_1\,\|\,P) < \infty,$$
and therefore the sequence $\big(\frac{dQ_n}{dP}\big)_{n \ge 1}$ is uniformly integrable according to Theorem A3. By Theorem A4, there exists a subsequence $\big(\frac{dQ_{n_k}}{dP}\big)_{k \ge 1}$ which converges weakly in $L^1$. By Mazur's Lemma (Theorem A5), there exists a sequence $(\widetilde Z_n)_{n \ge 1}$ of convex combinations
$$\widetilde Z_n = \sum_{k=n}^{N_n} \lambda^{(n)}_k\, \frac{dQ_{n_k}}{dP}, \qquad \lambda^{(n)}_k \ge 0, \quad \sum_{k=n}^{N_n} \lambda^{(n)}_k = 1, \tag{3}$$
which converges in $L^1(P)$ to some $\frac{dQ_0}{dP}$.
Interpreting $\widetilde Z_n$ as a Radon–Nikodym derivative uniquely defines a measure $\tilde Q_n$ with $\frac{d\tilde Q_n}{dP} = \widetilde Z_n$, and one sees directly that $\tilde Q_n \ll P$. By assumption, $\widehat S$ is a σ-martingale with respect to all $Q_n$ (and hence also with respect to all $Q_{n_k}$). We now show that it is also a σ-martingale under $\tilde Q_n$. Let $(D_i)_i$ be a sequence of predictable sets such that $\mathbf 1_{D_i} \cdot \widehat S$ is a uniformly integrable martingale under each $Q_{n_k}$, $k = n, \dots, N_n$ (a common sequence exists since only finitely many measures are involved). For each $A \in \mathcal F_s$ and $s < t$,
$$\int_A \big((\mathbf 1_{D_i} \cdot \widehat S)_t - (\mathbf 1_{D_i} \cdot \widehat S)_s\big)\, d\tilde Q_n = \int_A \big((\mathbf 1_{D_i} \cdot \widehat S)_t - (\mathbf 1_{D_i} \cdot \widehat S)_s\big)\, \widetilde Z_n\, dP = \sum_{k=n}^{N_n} \lambda^{(n)}_k \int_A \big((\mathbf 1_{D_i} \cdot \widehat S)_t - (\mathbf 1_{D_i} \cdot \widehat S)_s\big)\, dQ_{n_k} = 0,$$
because $Q_{n_k} \in \mathcal M_\sigma^{ac}$. Thus $\mathbf 1_{D_i} \cdot \widehat S$ is also a martingale under $\tilde Q_n$; hence $\widehat S$ is a σ-martingale under $\tilde Q_n$.
Moreover, $\frac{d\tilde Q_n}{dP} = \widetilde Z_n$ converges in $L^1$ to $\frac{dQ_0}{dP}$, so by Lemma 3 we have $Q_0 \in \mathcal M_\sigma^{ac}$.
Because of the convexity of the relative entropy (Lemma 1), Equation (3), and since $\big(H(Q_n\,\|\,P)\big)_{n \in \mathbb N}$ is decreasing (so that $H(Q_{n_k}\,\|\,P) \le H(Q_n\,\|\,P)$ for $n_k \ge n$), we have
$$H(\tilde Q_n\,\|\,P) = H\Big(\sum_{k=n}^{N_n} \lambda^{(n)}_k\, Q_{n_k}\,\Big\|\,P\Big) \le \sum_{k=n}^{N_n} \lambda^{(n)}_k\, H(Q_{n_k}\,\|\,P) \le \max_{k \in \{n, \dots, N_n\}} H(Q_{n_k}\,\|\,P) \le H(Q_n\,\|\,P).$$
Thus, by applying Fatou's Lemma (along a subsequence converging $P$-almost surely), we obtain
$$H(Q_0\,\|\,P) = E_P\Big[\lim_{n \to \infty} \phi\big(\widetilde Z_n\big)\Big] \le \liminf_{n \to \infty} E_P\big[\phi\big(\widetilde Z_n\big)\big] = \liminf_{n \to \infty} H(\tilde Q_n\,\|\,P) \le \liminf_{n \to \infty} H(Q_n\,\|\,P).$$
Since $H(Q_n\,\|\,P)$ decreases to the infimum, it follows that $H(Q_0\,\|\,P)$ is already minimal over $\mathcal M_\sigma^{ac}$. Now all conditions of Lemma 2 are satisfied, so we conclude that $Q_0$ is equivalent to $P$, proving the existence of the minimal entropy σ-martingale measure.
Uniqueness follows from Lemma 1 and the strict convexity of $H(\cdot\,\|\,P)$. $\square$
Remark 2.
Theorem 2 shows that the “entropy-minimizing” approach extends smoothly to σ-martingale models: as soon as there is one finite-entropy σ-martingale measure, there must be a unique one of minimal entropy, even if $\widehat S$ is unbounded or not locally bounded. This measure can be used for entropy-based valuation, risk measurement, or other applications. The local martingale approach, in contrast, might be impossible if $\mathcal M_{\mathrm{loc}} = \emptyset$.

3. Convex and Coherent Risk Measures

In this section, we define the notions of monetary, convex, and coherent risk measures. We also highlight the relationships among translation invariance, monotonicity, convexity, positive homogeneity, and subadditivity, emphasizing how these conditions characterize different levels of risk-measure structure.
Definition 5.
Let $\mathcal X$ be a linear space of bounded, real-valued random variables.
(a)
A mapping $\rho : \mathcal X \to \mathbb R \cup \{+\infty\}$ is called a monetary risk measure if it satisfies, for all $X, Y \in \mathcal X$:
  • Monotonicity: If $X \le Y$ almost surely, then $\rho(X) \ge \rho(Y)$.
  • Translation Invariance: For all $m \in \mathbb R$: $\rho(X + m) = \rho(X) - m$.
(b)
The monetary risk measure $\rho$ is called convex if it also satisfies:
  • Convexity: For all $X, Y \in \mathcal X$ and $\lambda \in [0, 1]$,
    $$\rho\big(\lambda X + (1 - \lambda)Y\big) \le \lambda\,\rho(X) + (1 - \lambda)\,\rho(Y).$$
(c)
A convex risk measure $\rho$ is called coherent if, in addition to monotonicity and translation invariance, it satisfies:
  • Positive Homogeneity: For all $\alpha > 0$,
    $$\rho(\alpha X) = \alpha\,\rho(X).$$
  • Subadditivity: For all $X, Y \in \mathcal X$,
    $$\rho(X + Y) \le \rho(X) + \rho(Y).$$
Theorem 3.
Let $\rho$ be a monetary risk measure on a linear space of random variables $\mathcal X$.
(a)
We say that $\rho$ is normalized if $\rho(0) = 0$. In particular, if $\rho$ is positively homogeneous, then $\rho(0) = 0$ holds automatically.
(b)
Suppose $\rho$ is normalized and satisfies translation invariance and monotonicity (as in Definition 5). Then any two of the following three properties imply the remaining third:
  • Convexity,
  • Positive Homogeneity,
  • Subadditivity.
For a detailed discussion and proof, see [23, Chapter 4].
Theorem 4.
The minimal entropy risk measure and the entropic risk are both convex risk measures. The minimal entropy risk measure is, in addition, a coherent risk measure.
Proof.
We start with the minimal entropy risk measure. It suffices to prove coherence: convexity then follows from the fact that positive homogeneity and subadditivity imply convexity (Theorem 3).
  • Monotonicity. Let $X, Y \in L^\infty(\mathcal F)$ with $P[X \le Y] = 1$. We have
    $$\rho^E(X) = E_{Q^E}[-X] \ge E_{Q^E}[-Y] = \rho^E(Y).$$
  • Cash Translatability. This simply follows from
    $$\rho^E(X + c) = E_{Q^E}\big[{-(X + c)}\big] = E_{Q^E}[-X] - c = \rho^E(X) - c.$$
  • Positive Homogeneity and Subadditivity. These follow directly from the linearity of the expectation.
We now address the entropic risk.
  • Monotonicity. From $X \le Y$ almost surely, $E_Q[-Y] \le E_Q[-X]$ follows for all $Q \in \mathcal P$, and hence
    $$E_Q[-Y] - \frac{1}{\gamma}\,H(Q\,\|\,P) \le E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)$$
    for all $Q$. Taking the supremum yields $e_\gamma(Y) \le e_\gamma(X)$.
  • Cash Translatability. With
    $$E_Q\big[{-(X + c)}\big] - \frac{1}{\gamma}\,H(Q\,\|\,P) = E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P) - c,$$
    we obtain $e_\gamma(X + c) = e_\gamma(X) - c$.
  • Convexity. We have
    $$E_Q\big[{-(\lambda X + (1 - \lambda)Y)}\big] - \frac{1}{\gamma}\,H(Q\,\|\,P) = \lambda\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big) + (1 - \lambda)\Big(E_Q[-Y] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big).$$
    And since for general functions $f, g$ and constants $c_1, c_2 \ge 0$ we have
    $$\sup_x\big(c_1 f(x) + c_2 g(x)\big) \le c_1 \sup_x f(x) + c_2 \sup_x g(x),$$
    the result follows. $\square$
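The axioms just verified can also be checked empirically. A sketch (Python/NumPy; the normal samples simply define a finite empirical measure, on which the exponential form of $e_\gamma$ from Section 4 is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, 200_000)
Y = rng.normal(0.5, 2.0, 200_000)

def e(Z, gamma=2.0):
    # entropic risk via the exponential formula, with a stability shift
    a = -gamma * Z
    m = a.max()
    return (m + np.log(np.mean(np.exp(a - m)))) / gamma

lam, c = 0.3, 1.7
# translation invariance: e(X + c) = e(X) - c (exact)
print(e(X + c), e(X) - c)
# convexity: e(lam X + (1-lam) Y) <= lam e(X) + (1-lam) e(Y)
print(e(lam*X + (1-lam)*Y), lam*e(X) + (1-lam)*e(Y))
```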
In general, the entropic risk is not a coherent risk measure, even though it is additive for independent positions. By considering Lemma 1, one can easily see from the definition that, in order for the entropic risk to be coherent, it would have to coincide with the risk measure $\rho$ defined by $\rho(X) = E_P[-X]$.
There is also a coherent version of the entropic risk measure described in [21], which is quite similar to the two risk measures we are examining here.
Finally, to better understand these risk measures, we explore their relationship with the real-world probability measure. The following lemma will be helpful.
Lemma 4.
Let $Q, R \in \mathcal M_\sigma^{ac}$ be absolutely continuous σ-martingale measures, i.e., each of $\frac{dQ}{dP}, \frac{dR}{dP}$ is in $L^1(P)$, and assume $H(Q\,\|\,P)$ and $H(R\,\|\,P)$ are both finite. Then, for any bounded random variable $X \in L^\infty(\mathcal F)$, we have
$$\big|E_Q[X] - E_R[X]\big| \le \|X\|_\infty\Big(\sqrt{2H(Q\,\|\,P)} + \sqrt{2H(R\,\|\,P)}\Big). \tag{4}$$
Proof.
First note that for any probability measures $Q, R$ (absolutely continuous w.r.t. $P$),
$$\big|E_Q[X] - E_R[X]\big| = \Big|E_P\Big[X\,\frac{dQ}{dP}\Big] - E_P\Big[X\,\frac{dR}{dP}\Big]\Big| = \Big|E_P\Big[X\Big(\frac{dQ}{dP} - \frac{dR}{dP}\Big)\Big]\Big| \le \|X\|_\infty\, E_P\Big[\Big|\frac{dQ}{dP} - \frac{dR}{dP}\Big|\Big] = \|X\|_\infty\, \|Q - R\|_{TV},$$
where $\|Q - R\|_{TV}$ denotes the total variation distance:
$$\|Q - R\|_{TV} = E_P\Big[\Big|\frac{dQ}{dP} - \frac{dR}{dP}\Big|\Big].$$
Next, by the triangle inequality in total variation,
$$\|Q - R\|_{TV} \le \|Q - P\|_{TV} + \|P - R\|_{TV}.$$
Finally, we invoke the Pinsker inequality (Theorem A8) and obtain
$$\|Q - P\|_{TV}^2 \le 2H(Q\,\|\,P), \qquad \|R - P\|_{TV}^2 \le 2H(R\,\|\,P),$$
whenever $H(Q\,\|\,P)$ (resp. $H(R\,\|\,P)$) is finite. Hence
$$\|Q - P\|_{TV} \le \sqrt{2H(Q\,\|\,P)}, \qquad \|R - P\|_{TV} \le \sqrt{2H(R\,\|\,P)}.$$
Putting these pieces together yields
$$\|Q - R\|_{TV} \le \|Q - P\|_{TV} + \|P - R\|_{TV} \le \sqrt{2H(Q\,\|\,P)} + \sqrt{2H(R\,\|\,P)}.$$
Therefore,
$$\big|E_Q[X] - E_R[X]\big| \le \|X\|_\infty\, \|Q - R\|_{TV} \le \|X\|_\infty\Big(\sqrt{2H(Q\,\|\,P)} + \sqrt{2H(R\,\|\,P)}\Big).$$
That completes the proof. Note that the σ-martingale property of $Q$ and $R$ was never used; the estimate holds for arbitrary $Q, R \ll P$ with finite entropy. $\square$
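A numerical sketch of the estimate (4) on a finite space (Python/NumPy; the measures and the payoff are arbitrary illustrations, and the martingale property plays no role in the inequality itself):

```python
import numpy as np

def kl(q, p):
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

p = np.array([0.40, 0.30, 0.20, 0.10])   # reference measure P
q = np.array([0.30, 0.30, 0.30, 0.10])
r = np.array([0.45, 0.25, 0.15, 0.15])
x = np.array([1.0, -2.0, 0.5, 3.0])      # bounded payoff, ||x||_inf = 3

lhs = abs(np.dot(q - r, x))
tv_bound = np.max(np.abs(x)) * np.sum(np.abs(q - r))
rhs = np.max(np.abs(x)) * (np.sqrt(2*kl(q, p)) + np.sqrt(2*kl(r, p)))
print(lhs, "<=", tv_bound, "<=", rhs)    # the chain of bounds from the proof
```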
Remark 3.
Inequality (4) shows that two absolutely continuous σ-martingale measures $Q, R$ which both have small entropy $H(\cdot\,\|\,P)$ must produce close expectations of any bounded payoff $X$. In particular, one recovers the fact that a measure with finite entropy is “close” to $P$ in the sense of total variation whenever the entropy is small. Since such measures are entirely determined by their Radon–Nikodym derivatives in $L^1(P)$, a bound in terms of $H(\cdot\,\|\,P)$ is typically the best general estimate of how far their integrals can differ.
Theorem 5.
Let $X \in L^\infty(\mathcal F)$ be bounded and $\gamma > 0$. Then, the entropic risk
$$e_\gamma(X) = \sup_{Q \in \mathcal P}\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big) \tag{6}$$
satisfies the following two-sided bound:
$$E_P[-X] \le e_\gamma(X) \le E_P[-X] + \frac{\gamma\,\|X\|_\infty^2}{2}. \tag{5}$$
Proof.
By taking the specific choice $Q = P$ in the definition of $e_\gamma(X)$, we see
$$e_\gamma(X) \ge E_P[-X] - \frac{1}{\gamma}\,H(P\,\|\,P) = E_P[-X],$$
since $H(P\,\|\,P) = 0$. This proves the left inequality in (5).
For any probability measure $Q \ll P$ with finite entropy $H(Q\,\|\,P)$, the total variation estimate of Lemma 4 (with $R = P$; recall that its proof does not use the martingale property) gives
$$\big|E_Q[-X] - E_P[-X]\big| \le \|X\|_\infty \sqrt{2H(Q\,\|\,P)}.$$
Hence,
$$E_Q[-X] \le E_P[-X] + \|X\|_\infty \sqrt{2H(Q\,\|\,P)}.$$
Therefore, for any such $Q$,
$$E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P) \le E_P[-X] + \|X\|_\infty \sqrt{2H(Q\,\|\,P)} - \frac{1}{\gamma}\,H(Q\,\|\,P).$$
Define $h := H(Q\,\|\,P) \ge 0$. Then, we want to maximize $\|X\|_\infty\sqrt{2h} - h/\gamma$ over $h \ge 0$. A short calculus argument (the derivative $\|X\|_\infty/\sqrt{2h} - 1/\gamma$ vanishes at the maximizer) shows that the maximum occurs at
$$h^* = \frac{\gamma^2\,\|X\|_\infty^2}{2},$$
and that the maximum value is $\frac{\gamma\,\|X\|_\infty^2}{2}$. Consequently,
$$\sup_{h \ge 0}\Big(\|X\|_\infty\sqrt{2h} - \frac{h}{\gamma}\Big) = \frac{\gamma\,\|X\|_\infty^2}{2}.$$
Hence, for all $Q \in \mathcal P$ with $H(Q\,\|\,P) < \infty$,
$$E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P) \le E_P[-X] + \frac{\gamma\,\|X\|_\infty^2}{2}.$$
Taking the supremum over all $Q \in \mathcal P$ leads to
$$e_\gamma(X) = \sup_{Q \in \mathcal P}\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big) \le E_P[-X] + \frac{\gamma\,\|X\|_\infty^2}{2}.$$
This completes the proof of (5). $\square$
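The two-sided bound (5) is easy to observe numerically (Python/NumPy; a uniform payoff with $\|X\|_\infty = 1$ serves as an arbitrary illustration, again using the exponential form from Section 4):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, 100_000)      # bounded payoff, ||X||_inf = 1

def e(Z, gamma):
    a = -gamma * Z
    m = a.max()
    return (m + np.log(np.mean(np.exp(a - m)))) / gamma

for gamma in [0.1, 0.5, 1.0, 2.0]:
    lower = np.mean(-X)
    upper = lower + gamma * 1.0 ** 2 / 2
    print(f"{gamma}: {lower:.4f} <= {e(X, gamma):.4f} <= {upper:.4f}")
```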
Corollary 1.
Let $Q^E$ be the minimal entropy σ-martingale measure for a (bounded) price process $\widehat S$. Then,
$$e_\gamma(X) \ge E_{Q^E}[-X] - \frac{1}{\gamma}\,H(Q^E\,\|\,P) \quad \text{for all bounded } X. \tag{7}$$
Furthermore, combining this estimate with the two-sided bound of Theorem 5, we have:
$$\max\Big(E_P[-X],\ \rho^E(X) - \frac{1}{\gamma}\,H(Q^E\,\|\,P)\Big) \le e_\gamma(X) \le E_P[-X] + \frac{\gamma\,\|X\|_\infty^2}{2}. \tag{8}$$
Proof.
Since $e_\gamma(X)$ is defined as the supremum $\sup_{Q \in \mathcal P}\big(E_Q[-X] - \frac{1}{\gamma}H(Q\,\|\,P)\big)$, it clearly holds for any $Q$ in that admissible set that
$$e_\gamma(X) \ge E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P).$$
If $Q^E$ (the minimal entropy σ-martingale measure) is in $\mathcal P$ with $H(Q^E\,\|\,P) < \infty$, then simply take $Q = Q^E$ in the supremum; this yields the lower bound (7), with $E_{Q^E}[-X] = \rho^E(X)$.
Finally, to deduce (8), observe that $e_\gamma(X) \ge E_P[-X]$ also holds by choosing $Q = P$. Then Theorem 5 provides the upper bound, and taking the maximum of these lower estimates proves (8). $\square$
Remark 4.
The bound (7) requires that the minimal entropy measure $Q^E$ belong to $\mathcal P$ (i.e., $Q^E \ll P$) and have finite relative entropy. If, for instance, the market admits no arbitrage of the first kind (NA1) and there exists a finite-entropy EσMM, these conditions are satisfied. In that case, (7) provides a non-trivial lower estimate on the entropic risk in terms of the minimal entropy measure.
Remark 5.
When the risk-aversion parameter $\gamma$ tends to zero, the penalty factor $\frac{1}{\gamma}$ in the entropic risk measure
$$e_\gamma(X) = \sup_{Q \in \mathcal P}\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big)$$
becomes very large. As a result, any measure $Q \in \mathcal P$ with $H(Q\,\|\,P) > 0$ gets heavily penalized in the objective, so that the supremum is forced increasingly toward $Q = P$.
Formally, it can be shown that
$$\lim_{\gamma \to 0} e_\gamma(X) = E_P[-X].$$
Economically, this reflects a situation of ultra-strong aversion to deviating from the reference measure $P$. In the limit $\gamma \to 0$, the only measure not incurring an unbounded penalty is $Q = P$. Thus, the entropic risk measure collapses to $E_P[-X]$, the plain expectation of the loss under the real-world probability $P$.

4. Duality

Any convex or coherent risk measure admits a so-called robust representation. This has been discussed in several works, including [2,23], and [25].
The usual approach to defining the entropic risk measure is via the exponential formula
$$e_\gamma(X) = \frac{1}{\gamma}\,\log E\big[e^{-\gamma X}\big], \tag{9}$$
and its robust representation, (1), is then typically derived using dual representation theorems.
In this paper, we defined the entropic risk using its robust representation, and we now demonstrate that this definition is equivalent to the traditional one found in the literature. Since the convexity of (9) has not been explicitly established here, we present a direct, self-contained proof.
Theorem 6.
For $\gamma > 0$ and $X \in L^\infty(\mathcal F)$, we have:
$$\sup_{Q \in \mathcal P}\Big(E_Q[-X] - \frac{1}{\gamma}\,H(Q\,\|\,P)\Big) = \frac{1}{\gamma}\,\log E_P\big[e^{-\gamma X}\big].$$
Proof.
Define a new probability measure $P_X$ by:
$$\frac{dP_X}{dP} = \frac{e^{-\gamma X}}{E_P\big[e^{-\gamma X}\big]}.$$
Then:
$$H(Q\,\|\,P) = E_Q\Big[\log\frac{dQ}{dP}\Big] = E_Q\Big[\log\Big(\frac{dQ}{dP_X}\,\frac{dP_X}{dP}\Big)\Big] = E_Q\Big[\log\frac{dQ}{dP_X}\Big] + E_Q[-\gamma X] - \log E_P\big[e^{-\gamma X}\big] = H(Q\,\|\,P_X) + E_Q[-\gamma X] - \log E_P\big[e^{-\gamma X}\big].$$
Substituting this into the optimization problem:
$$\exp\Big(\sup_{Q \in \mathcal P}\big(E_Q[-\gamma X] - H(Q\,\|\,P)\big)\Big) = \exp\Big(\sup_{Q \in \mathcal P}\big(E_Q[-\gamma X] - H(Q\,\|\,P_X) - E_Q[-\gamma X] + \log E_P\big[e^{-\gamma X}\big]\big)\Big) = E_P\big[e^{-\gamma X}\big]\,\exp\Big(-\inf_{Q \in \mathcal P} H(Q\,\|\,P_X)\Big) = E_P\big[e^{-\gamma X}\big],$$
where the last equality follows from $P_X \in \mathcal P$ and Lemma 1 (the infimum is zero and is attained at $Q = P_X$).
Taking the logarithm and dividing by $\gamma$ yields the result. $\square$
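On a finite probability space, the proof is fully constructive: the supremum is attained at the tilted measure $P_X$. A sketch verifying this (Python/NumPy; the reference measure and payoff are randomly generated illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)
p = rng.dirichlet(np.ones(6))       # reference measure P on six states
x = rng.normal(size=6)              # payoff X
gamma = 1.5

# right-hand side: exponential formula (1/gamma) log E_P[exp(-gamma X)]
rhs = np.log(np.dot(p, np.exp(-gamma * x))) / gamma

# maximizer from the proof: dP_X/dP = exp(-gamma X) / E_P[exp(-gamma X)]
q = p * np.exp(-gamma * x)
q /= q.sum()
kl = np.sum(q * np.log(q / p))
lhs = np.dot(q, -x) - kl / gamma    # penalized functional evaluated at P_X

print(lhs, rhs)                     # coincide up to floating-point error
# any other Q, e.g. Q = P, gives a smaller value:
print(np.dot(p, -x) <= rhs)         # True
```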
A detailed analysis of the dual problem for the minimal entropy martingale measure is more complex but provides valuable insights into its economic interpretation as an equivalent local martingale measure. The dual problem, in essence, connects minimizing relative entropy to maximizing a specific utility function. Below, we elaborate briefly on this connection.
Let $U$ be a utility function of the form:
$$U(x) = -\exp(-x).$$
We aim to maximize the expected utility of the terminal value of a wealth process by selecting an optimal strategy $\phi$ from a set of admissible strategies $\Phi$. If the terminal value is represented as $\phi \cdot \widehat S_T := \phi_0 \widehat S_0 + \int_0^T \phi_t\, d\widehat S_t$, then the objective becomes:
$$\sup_{\phi \in \Phi} E\big[U(\phi \cdot \widehat S_T)\big] = \sup_{\phi \in \Phi} E\big[-\exp\big({-\phi \cdot \widehat S_T}\big)\big]. \tag{10}$$
Under weak conditions, [13] show:
$$\sup_{\phi \in \Phi} E\big[-\exp\big({-\phi \cdot \widehat S_T}\big)\big] = -\exp\Big(-\inf_{Q \in \mathcal M_{\mathrm{loc}}} H(Q\,\|\,P)\Big)$$
as well as:
$$\frac{dQ^E}{dP} = \exp\Big(H(Q^E\,\|\,P) - \phi^* \cdot \widehat S_T\Big),$$
where $\phi^*$ is the strategy that maximizes the utility in Equation (10).
Further details on this duality can be found in [5,13], and [32].
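In a one-period market, this duality can be made completely explicit and checked numerically. The sketch below (Python with NumPy and SciPy; the three-state market data are arbitrary illustrations) maximizes exponential utility over a scalar strategy and recovers a martingale measure of the Gibbs form above, whose entropy matches the optimal utility via $-e^{-H}$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# One-period, one-asset, three-state (incomplete) market:
p  = np.array([0.3, 0.4, 0.3])      # real-world measure P
dS = np.array([2.0, 0.0, -1.0])     # discounted price increment

# maximize E[-exp(-phi * dS)], i.e. minimize E[exp(-phi * dS)]
res = minimize_scalar(lambda phi: np.dot(p, np.exp(-phi * dS)))
phi_star = res.x

# candidate density of the minimal entropy measure from the duality
z = p * np.exp(-phi_star * dS)
q = z / z.sum()

print("E_Q[dS] =", np.dot(q, dS))   # ~0: Q is a martingale measure
H = np.sum(q * np.log(q / p))
print("-exp(-H)    =", -np.exp(-H))
print("sup utility =", -np.dot(p, np.exp(-phi_star * dS)))   # equal
```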

5. Dynamic Consistency

In this section, we present dynamic versions of the entropic risk measure and the risk measure associated with the minimal entropy martingale measure, proving that they are time-consistent. Dynamic consistency is particularly important when dealing with practical financial applications. For instance, dynamic risk measures based on entropy have been successfully applied to energy markets and stochastic volatility models, as discussed in [49].
We start by introducing the relevant definitions for this section.
Let $\mathbb F = (\mathcal F_t)_{0 \le t \le T}$ be a filtration, $L^\infty_t$ the set of bounded $\mathcal F_t$-measurable functions, $\mathcal Q_t$ the set of probability measures which are equivalent to $P$ on $\mathcal F_t$, and let $S$ be an adapted stochastic process.
Definition 6.
(a)
A map $\rho_t : L^\infty \to L^\infty_t$ is called a dynamic risk measure.
(b)
A dynamic risk measure is called time consistent if for $s \le t$,
$$P\big[\rho_t(X) \le \rho_t(Y)\big] = 1 \implies P\big[\rho_s(X) \le \rho_s(Y)\big] = 1.$$
(c)
The dynamic entropic risk measure $e_{\gamma,t}$ is defined as
$$e_{\gamma,t}(X) := \frac{1}{\gamma}\,\log E\big[\exp(-\gamma X)\,\big|\,\mathcal F_t\big].$$
(d)
The dynamic minimal entropy risk measure $\rho^E_t$ is defined as
$$\rho^E_t(X) := E_{Q^E}\big[-X\,\big|\,\mathcal F_t\big],$$
where $Q^E$ is the minimal entropy σ-martingale measure.
It is possible to allow γ to be an adapted process instead of a constant, resulting in a slightly more general definition of the dynamic entropic risk measure. However, in such cases, time consistency may not be guaranteed.
If for $Q \in \mathcal Q_t$ we define the conditional relative entropy $H_t(Q\,\|\,P)$ as
$$H_t(Q\,\|\,P) = E_Q\Big[\log\frac{dQ}{dP}\,\Big|\,\mathcal F_t\Big],$$
the dynamic entropic risk measure can also be expressed in a robust representation involving the relative entropy:
$$e_{\gamma,t}(X) = \sup_{Q \in \mathcal Q_t}\Big(E_Q[-X\,|\,\mathcal F_t] - \frac{1}{\gamma}\,H_t(Q\,\|\,P)\Big).$$
Further details on this duality and on dynamic risk measures in general (in both continuous and discrete time) can be found in [1,18,22], and [41].
It is straightforward to verify that the two risk measures defined above satisfy the properties of dynamic risk measures.
Theorem 7.
The dynamic entropic risk measure and the dynamic minimal entropy risk measure are time-consistent.
Proof.
We start with the entropic risk measure. We aim to show the recursion $e_{\gamma,s}(X) = e_{\gamma,s}\big({-e_{\gamma,t}(X)}\big)$ for $0 \le s \le t$. Time consistency then follows from the intertemporal monotonicity theorem [22, Proposition 4.2].
Using the tower property of conditional expectation, we have:
$$e_{\gamma,s}(X) = \frac{1}{\gamma}\log E\big[e^{-\gamma X}\,\big|\,\mathcal F_s\big] = \frac{1}{\gamma}\log E\Big[E\big[e^{-\gamma X}\,\big|\,\mathcal F_t\big]\,\Big|\,\mathcal F_s\Big] = \frac{1}{\gamma}\log E\Big[\exp\Big(\gamma\,\underbrace{\tfrac{1}{\gamma}\log E\big[e^{-\gamma X}\,\big|\,\mathcal F_t\big]}_{=\,e_{\gamma,t}(X)}\Big)\,\Big|\,\mathcal F_s\Big] = e_{\gamma,s}\big({-e_{\gamma,t}(X)}\big).$$
For the dynamic minimal entropy risk measure, we proceed similarly:
$$\rho^E_s(X) = E_{Q^E}[-X\,|\,\mathcal F_s] = E_{Q^E}\Big[E_{Q^E}[-X\,|\,\mathcal F_t]\,\Big|\,\mathcal F_s\Big] = \rho^E_s\big({-\rho^E_t(X)}\big).$$
This proves time consistency for both measures. $\square$
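The recursion in this proof can be traced step by step on a two-period binomial tree. A sketch (Python/NumPy; probabilities and payoffs are arbitrary illustrations):

```python
import numpy as np

p, gamma = 0.6, 2.0                     # up-probability and risk aversion
prob = {'u': p, 'd': 1 - p}
X = {('u','u'): 3.0, ('u','d'): 1.0,    # payoff after two coin flips
     ('d','u'): 0.5, ('d','d'): -2.0}

def e_step(vals):
    # one-step conditional entropic risk over a single coin flip
    z = sum(prob[w] * np.exp(-gamma * v) for w, v in vals.items())
    return np.log(z) / gamma

# e_{gamma,0}(X) computed in one shot ...
joint = {(a, b): prob[a] * prob[b] for a in 'ud' for b in 'ud'}
direct = np.log(sum(joint[w] * np.exp(-gamma * X[w]) for w in joint)) / gamma

# ... and via the recursion e_{gamma,0}(X) = e_{gamma,0}(-e_{gamma,1}(X))
e1 = {a: e_step({b: X[(a, b)] for b in 'ud'}) for a in 'ud'}  # F_1-measurable
recursive = e_step({a: -e1[a] for a in 'ud'})

print(direct, recursive)                # equal
```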

6. Optimal Risk Transfer

Optimal risk transfer is a classical topic in the study of risk measures and has been studied extensively in, e.g., [3]. In this section, we present a somewhat more general and extended view, highlighting in particular the property of γ-dilated families of risk measures, the associated inf-convolution approach, and how these results can be restricted to the risk-neutral setting. We then provide a brief discussion of why no non-trivial γ-dilated family can be constructed from the minimal entropy coherent risk measure, whereas the standard entropic risk measure naturally fits into this framework. Finally, we show how restricting to (local) risk-neutral measures forces the minimal entropy measure to emerge as the unique solution to the risk-transfer problem.
Let $\rho$ be any (convex or coherent) risk measure on $L^\infty(\mathcal F)$. The fundamental question in optimal risk sharing between two entities with risk measures $\rho_1$ and $\rho_2$ is to split a position $X$ into two parts $X - F$ and $F$. The goal is to minimize the combined risk $\rho_1(X - F) + \rho_2(F)$. This optimization problem is well expressed via the inf-convolution operator:
$$\rho_1 \,\square\, \rho_2(X) := \inf_F\big(\rho_1(X - F) + \rho_2(F)\big). \tag{11}$$
The optimal risk transfer is solved when the infimum in (11) is achieved by some F * . Well-known results, see [3], show that certain risk measures, particularly those arising from γ-dilated families, enjoy a simple solution: the optimal allocation is often a linear split of X.
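Before turning to the structural results, the inf-convolution (11) can be computed directly on a finite sample space. The sketch below (Python with NumPy and SciPy; the market data are arbitrary illustrations) minimizes over all allocations $F$ for two entropic agents and confirms that the optimum is attained by a linear split of $X$, as Theorem 8 below asserts; note that the minimizer is unique only up to a cash transfer $F \mapsto F + c$:

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([0.2, 0.5, 0.3])           # three-state reference measure
X = np.array([4.0, 1.0, -3.0])          # position to be shared
g1, g2 = 1.0, 3.0                       # the two risk aversions

def e(Z, g):
    # entropic risk of Z under risk aversion g
    return np.log(np.dot(p, np.exp(-g * Z))) / g

total = lambda F: e(X - F, g1) + e(F, g2)

# unrestricted minimization over all allocations F (a vector here)
res = minimize(total, x0=np.zeros_like(X))

# best *proportional* split F = lam * X
lams = np.linspace(0.0, 1.0, 10_001)
lam_star = lams[int(np.argmin([total(l * X) for l in lams]))]

print("values:", res.fun, total(lam_star * X))   # same minimum
# compare allocations after removing the irrelevant cash constant
print(res.x - res.x[-1])
print(lam_star * X - lam_star * X[-1])
```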
Definition 7 (γ-Dilated Family).
A family $\{\rho_\gamma\}_{\gamma \in I}$ of risk measures on $L^\infty(\mathcal F)$ is called γ-dilated if there exists a base risk measure $\rho$ such that
$$\rho_\gamma(X) = \frac{1}{\gamma}\,\rho(\gamma X) \quad \text{for each } \gamma \in I, \tag{12}$$
where $I \subseteq (0, \infty)$ is some index set.
An immediate (but important) example is the entropic risk measure, which is γ-dilated by construction:
$$e_\gamma(X) = \frac{1}{\gamma}\,\log E\big[e^{-\gamma X}\big] = \frac{1}{\gamma}\,\rho(\gamma X)$$
if one sets $\rho(Z) := \log E\big[e^{-Z}\big]$. By contrast, a coherent risk measure, being positively homogeneous, cannot form a non-trivial γ-dilated family: the dilation collapses back to the base functional itself (more details below).
We restate here a known result, cf. [3], paraphrased to fit our notation. Note that in [3] the family is indexed by the risk tolerance $1/\gamma$, so the arithmetic sum of tolerances there becomes a harmonic sum of risk-aversion parameters here.
Theorem 8.
Let $\{\rho_\gamma\}_{\gamma > 0}$ be a γ-dilated family of convex risk measures on $L^\infty(\mathcal F)$, and for $\gamma_1, \gamma_2 > 0$ set $\bar\gamma := \frac{\gamma_1 \gamma_2}{\gamma_1 + \gamma_2}$, i.e., $\frac{1}{\bar\gamma} = \frac{1}{\gamma_1} + \frac{1}{\gamma_2}$. Then the following statements hold:
(a)
We have the inf-convolution identity:
$$\rho_{\bar\gamma}(X) = \rho_{\gamma_1} \,\square\, \rho_{\gamma_2}(X).$$
(b)
The optimal allocation $F^*$ solving $\inf_F\{\rho_{\gamma_1}(X - F) + \rho_{\gamma_2}(F)\}$ is linear in $X$, specifically
$$F^* = \frac{\gamma_1}{\gamma_1 + \gamma_2}\, X. \tag{13}$$
(c)
Consequently, we also have
$$\rho_{\bar\gamma}(X) = \rho_{\gamma_1}(X - F^*) + \rho_{\gamma_2}(F^*).$$
(d)
Moreover, if $\rho_1$ and $\rho_2$ are two convex risk measures that can each be embedded in a single γ-dilated family (i.e., they are both instances of $\rho_\gamma$ for some base $\rho$), then for any $\gamma_1, \gamma_2 > 0$,
$$\rho_{\gamma_1} \,\square\, \rho_{\gamma_2} = \rho_{\bar\gamma}$$
under a suitable identification. Hence, the γ-dilation structure is preserved under inf-convolution.
Sketch of Proof.
The key insight is that, for a γ-dilated family, scaling $X$ and rescaling the risk measure accordingly yields a consistent re-parametrization of the same base functional $\rho$. With $F^*$ as in (13), both arguments become $\bar\gamma X$, so that $\rho_{\gamma_1}(X - F^*) + \rho_{\gamma_2}(F^*) = \big(\tfrac{1}{\gamma_1} + \tfrac{1}{\gamma_2}\big)\rho(\bar\gamma X) = \rho_{\bar\gamma}(X)$; standard inf-convolution arguments for the (exponentially, or more generally ρ-) tilted functional show that no other allocation can do better, which gives the linear sharing rule (13). For full details, see [3] or the references therein. $\square$
We now explain why a coherent risk measure, in particular the minimal entropy risk measure $\rho^E$, cannot generally be part of a non-trivial γ-dilated family. Recall that a coherent risk measure $\rho$ is positively homogeneous, i.e.,
$$\rho(\lambda X) = \lambda\,\rho(X) \quad \text{for all } \lambda \ge 0.$$
Combining this with the γ-dilation requirement $\rho_\gamma(X) = \frac{1}{\gamma}\,\rho(\gamma X)$ quickly forces $\rho_\gamma(X) = \rho(X)$ for all $\gamma$. Indeed, for coherent $\rho$, we have
$$\rho(\gamma X) = \gamma\,\rho(X), \quad \text{so} \quad \frac{1}{\gamma}\,\rho(\gamma X) = \rho(X).$$
Hence, $\rho_\gamma$ cannot genuinely “depend” on $\gamma$. In short, non-trivial γ-dilated families (like the entropic class) are never coherent. This explains why the minimal entropy risk measure, $\rho^E(X) = E_{Q^E}[-X]$, cannot appear in a standard γ-dilated framework.

Appendix A. Some well-known theorems

The following theorem is from [37, page 146].
Theorem A1.
Let $f$ be a function which is continuous on $[a, b]$ and twice differentiable on $(a, b)$. Then we have:
(a)
$f$ is convex on $[a, b]$ if and only if $f'' \ge 0$;
(b)
$f$ is strictly convex if $f'' > 0$.
A thorough presentation of this and closely related inequalities can be found, for instance, in [44, Section 4] or in standard real analysis texts.
Theorem A2 (Classical Young–Gibbs Inequality).
Let $x > 0$. Then
$$x \ln(x) \ge x - 1, \quad \text{with equality if and only if } x = 1.$$
The following theorem is a special case of the de la Vallée Poussin criterion and is proved, for example, in ([17], T22).
Theorem A3.
Let $(X_\lambda)_{\lambda \in \Lambda}$ be a family of integrable random variables and $\phi : \mathbb R \to \mathbb R_+$ a measurable function with $\lim_{t \to \infty} \frac{\phi(t)}{t} = \infty$. If we have
$$K := \sup_{\lambda \in \Lambda} E\big[\phi(|X_\lambda|)\big] < \infty,$$
then $(X_\lambda)_{\lambda \in \Lambda}$ is uniformly integrable.
A proof of the following theorem can be found in [19, page 294].
Theorem A4 (Dunford–Pettis criterion).
Let $\mathcal K \subseteq L^1$. Then the following are equivalent:
(i)
$\mathcal K$ is uniformly integrable.
(ii)
$\mathcal K$ is relatively compact in $L^1$ with respect to the weak topology of $L^1$.
(iii)
Each sequence in $\mathcal K$ has a subsequence which converges weakly in $L^1$.
A proof of the next statement can be found in [20, page 6] or [50, Theorem III.2.5].
Theorem A5 (Mazur's lemma).
Let $(X, \|\cdot\|)$ be a Banach space and let $(u_n)_{n \in \mathbb N}$ be a sequence in $X$ that converges weakly to some $u_0$ in $X$:
$$u_n \rightharpoonup u_0 \quad \text{as } n \to \infty.$$
That is, for every continuous linear functional $f \in X^*$, the continuous dual space of $X$,
$$f(u_n) \to f(u_0) \quad \text{as } n \to \infty.$$
Then there exist a function $N : \mathbb N \to \mathbb N$ and sets of real numbers
$$\{\alpha^{(n)}_k \mid k = n, \dots, N(n)\}$$
with $\alpha^{(n)}_k \ge 0$ and
$$\sum_{k=n}^{N(n)} \alpha^{(n)}_k = 1,$$
such that the sequence $(v_n)_{n \in \mathbb N}$ defined by the convex combination
$$v_n = \sum_{k=n}^{N(n)} \alpha^{(n)}_k\, u_k$$
converges strongly in $X$ to $u_0$, i.e.,
$$\|v_n - u_0\| \to 0 \quad \text{as } n \to \infty.$$
We denote by $\mathscr L$ the set of probability measures defined on $(\Omega, \mathcal F)$, and we write $I(Q, P) := H(Q\,\|\,P)$ for the relative entropy (I-divergence).
For $\Gamma \subseteq \mathscr L$ we set:
$$I(\Gamma, P) = \inf\{I(Q, P) : Q \in \Gamma\}.$$
A probability measure $Q \in \Gamma$ satisfying
$$I(Q, P) = I(\Gamma, P)$$
is called the I-projection of $P$ on $\Gamma$.
Theorem A6 (Th. 2.1 [9]).
Let $\Gamma \subseteq \mathscr L$ be a convex set such that $I(\Gamma, P) < \infty$. If $\Gamma$ is closed in variation, then $P$ has a (unique) I-projection on $\Gamma$.
Theorem A7 (Th. 2.2 [9]).
Let $\Gamma \subseteq \mathscr L$ be a convex set. A $Q \in \Gamma$ such that $I(Q, P) < \infty$ is the I-projection of $P$ on $\Gamma$ if and only if:
$$I(R, P) \ge I(R, Q) + I(Q, P) \quad \text{for all } R \in \Gamma.$$
The following theorem is known as the Pinsker inequality (also sometimes called Kullback–Csiszár–Kemperman inequality), linking total variation norm and relative entropy (Kullback–Leibler divergence) between two probability measures.
Theorem A8 (Pinsker inequality).
Let $P$ and $Q$ be two probability measures on the same measurable space. Then
$$\|Q - P\|_{TV}^2 \le 2\, H(Q\,\|\,P),$$
where $H(Q\,\|\,P)$ is the relative entropy or Kullback–Leibler divergence defined by
$$H(Q\,\|\,P) = \int \log\frac{dQ}{dP}\, dQ,$$
provided $Q \ll P$, and otherwise $H(Q\,\|\,P) = +\infty$.
This inequality can be found explicitly, for example, in [4, Section 2]. An earlier result with weaker constants appears in [42]. The final and more general form was established independently by [10,35], and [34].

References

  1. Acciaio, Beatrice and Irina Penner. 2011. Dynamic risk measures. In Advanced Mathematical Methods for Finance, pp. 1–34. Springer. [CrossRef]
  2. Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath. 1999. Coherent measures of risk. Mathematical finance 9(3), 203–228. [CrossRef]
  3. Barrieu, Pauline and Nicole El Karoui. 2005. Inf-convolution of risk measures and optimal risk transfer. Finance and stochastics 9(2), 269–298. [CrossRef]
  4. Barron, Andrew R. 1986. Entropy and the central limit theorem. The Annals of Probability 14(1), 336–342.
  5. Bellini, Fabio and Marco Frittelli. 2002. On the existence of minimax martingale measures. Mathematical Finance 12(1), 1–21. [CrossRef]
  6. Cherny, Alexander S and Victor P Maslov. 2004. On minimization and maximization of entropy in various disciplines. Theory of Probability & Its Applications 48(3), 447–464. [CrossRef]
  7. Chou, Ching-Sung. 1979. Caractérisation d’une classe de semimartingales. In Séminaire de Probabilités XIII, pp. 250–252. Springer.
  8. Cover, Thomas M and Joy A Thomas. 2012. Elements of information theory. John Wiley & Sons. [CrossRef]
  9. Csiszár, Imre. 1975. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability 3(1), 146–158. [CrossRef]
  10. Csiszár, Imre. 1967. Information-type measures of difference of probability distributions and indirect observations. Studia Scientiarum Mathematicarum Hungarica 2, 299–318.
  11. Dacunha-Castelle, Didier and Marie Duflo. 1986. Probability and Statistics (1 ed.). Springer.
  12. Delbaen, Freddy. 2000. Coherent risk measures. Blätter der DGVFM 24(4), 733–739. [CrossRef]
  13. Delbaen, Freddy, Peter Grandits, Thorsten Rheinländer, Davide Samperi, Martin Schweizer, and Christophe Stricker. 2002. Exponential hedging and entropic penalties. Mathematical Finance 12(2), 99–123. [CrossRef]
  14. Delbaen, Freddy and Walter Schachermayer. 1994. A general version of the fundamental theorem of asset pricing. Mathematische Annalen 300(1), 463–520. [CrossRef]
  15. Delbaen, Freddy and Walter Schachermayer. 1998. The fundamental theorem of asset pricing for unbounded stochastic processes. Mathematische Annalen 312(2), 215–250. [CrossRef]
  16. Delbaen, Freddy and Walter Schachermayer. 2006. The Mathematics of Arbitrage. Springer Finance. Springer. [CrossRef]
  17. Dellacherie, Claude and Paul-André Meyer. 1982. Probabilities and Potential, B: Theory of Martingales. North-Holland Mathematics Studies. Elsevier Science.
  18. Detlefsen, Kai and Giacomo Scandolo. 2005. Conditional and dynamic convex risk measures. Finance and stochastics 9(4), 539–561. [CrossRef]
  19. Dunford, Nelson and Jacob Theodore Schwartz. 1958. Linear Operators. Part 1. General Theory. Pure and Applied Mathematics. Interscience Publishers.
  20. Ekeland, Ivar and Roger Temam. 1976. Convex Analysis and Variational Problems. Studies in Mathematics and its Applications. Elsevier Science.
  21. Föllmer, Hans and Thomas Knispel. 2011. Entropic risk measures: Coherence vs. convexity, model ambiguity and robust large deviations. Stochastics and Dynamics 11(02n03), 333–351. [CrossRef]
  22. Föllmer, Hans and Irina Penner. 2006. Convex risk measures and the dynamics of their penalty functions. Statistics & Decisions 24(1/2006), 61–96. [CrossRef]
  23. Föllmer, Hans and Alexander Schied. 2011. Stochastic Finance: An Introduction in Discrete Time (2nd ed.), Volume 27 of De Gruyter Studies in Mathematics. Berlin: Walter de Gruyter. Updated and expanded edition, introducing discrete-time methods for stochastic finance. [CrossRef]
  24. Frittelli, Marco. 2000. The minimal entropy martingale measure and the valuation problem in incomplete markets. Mathematical Finance 10(1), 39–52. [CrossRef]
  25. Frittelli, Marco and Emanuela Rosazza Gianin. 2002. Putting order in risk measures. Journal of Banking & Finance 26(7), 1473–1486. [CrossRef]
  26. Fujiwara, Tsukasa. 2004. From the minimal entropy martingale measures to the optimal strategies for the exponential utility maximization: the case of geometric Lévy processes. Asia-Pacific Financial Markets 11(4), 367–391. [CrossRef]
  27. Fujiwara, Tsukasa and Yoshio Miyahara. 2003. The minimal entropy martingale measures for geometric Lévy processes. Finance and Stochastics 7(4), 509–531. [CrossRef]
  28. Goll, Thomas, Jan Kallsen, et al. 2003. A complete explicit solution to the log-optimal portfolio problem. The Annals of Applied Probability 13(2), 774–799. [CrossRef]
  29. Grandits, Peter and Thorsten Rheinländer. 2002. On the minimal entropy martingale measure. The Annals of Probability 30(3), 1003–1038. [CrossRef]
  30. Hesse, Christian. 2003. Angewandte Wahrscheinlichkeitstheorie. Vieweg. [CrossRef]
  31. Jacod, Jean and Albert N. Shiryaev. 2003. Limit theorems for stochastic processes (Second ed.), Volume 288 of Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin. [CrossRef]
  32. Kabanov, Yuri M. and Christophe Stricker. 2002. On the optimal portfolio for the exponential utility maximization: remarks to the six-author paper. Mathematical Finance 12(2), 125–134. [CrossRef]
  33. Kallsen, Jan. 2004. σ-localization and σ-martingales. Theory of Probability & Its Applications 48(1), 152–163. [CrossRef]
  34. Kemperman, Johannes H. B. 1969. On the optimum rate of transmitting information. Annals of Mathematical Statistics 40, 2156–2177. [CrossRef]
  35. Kullback, Solomon. 1967. A lower bound for discrimination information in terms of variation. IEEE Transactions on Information Theory 13(1), 126–127. [CrossRef]
  36. Kupper, Michael and Walter Schachermayer. 2009. Representation results for law invariant time consistent functions. Mathematics and Financial Economics 2(3), 189–210. [CrossRef]
  37. Königsberger, Konrad. 2003. Analysis 1. Springer-Lehrbuch. Springer Berlin Heidelberg. [CrossRef]
  38. Lee, Young and Thorsten Rheinländer. 2013. The minimal entropy martingale measure for exponential Markov chains. Journal of Applied Probability 50(2), 344–358. [CrossRef]
  39. Miyahara, Yoshio. 1999. Minimal entropy martingale measures of jump type price processes in incomplete assets markets. Asia-Pacific Financial Markets 6(2), 97–113. [CrossRef]
  40. Miyahara, Yoshio. 2004. A note on Esscher transformed martingale measures for geometric Lévy processes. Discussion Papers in Economics, Nagoya City University 379, 1–14.
  41. Penner, Irina. 2007. Dynamic convex risk measures: time consistency, prudence, and sustainability. Ph. D. thesis, Humboldt-Universität zu Berlin.
  42. Pinsker, Mark S. 1964. Information and Information Stability of Random Variables and Processes. Holden-Day.
  43. Rheinländer, Thorsten and Gallus Steiger. 2006. The minimal entropy martingale measure for general Barndorff-Nielsen/Shephard models. The Annals of Applied Probability 16(3), 1319–1351. [CrossRef]
  44. Rockafellar, R. Tyrrell. 1970. Convex Analysis. Princeton, NJ: Princeton University Press.
  45. Schweizer, Martin. 2010. Minimal entropy martingale measure. In Encyclopedia of Quantitative Finance. Wiley. [CrossRef]
  46. Ihara, Shunsuke. 1993. Information theory for continuous systems, Volume 2. World Scientific.
  47. Sohns, Moritz. 2023. The general semimartingale model with dividends. In O. Roubai (Ed.), Proceedings of the 10th Rocník Konference Doktorandů Na Vysoké škole Finanční A Správní. [CrossRef]
  48. Sohns, Moritz. 2025. σ-martingales: Foundations, properties, and a new proof of the Ansel–Stricker lemma. Mathematics 13(4), 682. [CrossRef]
  49. Swishchuk, Anatoliy. 2007. Change of time method in mathematical finance. Canad. Appl. Math. Quart 15(3), 299–336.
  50. Werner, Dirk. 2008. Funktionalanalysis (Springer-Lehrbuch) (German Edition) (6., korr. ed.). Springer. [CrossRef]
  51. Émery, Michel. 1980. Compensation de processus vf non localement intégrables. In Séminaire de Probabilités XIV 1978/79, pp. 152–160. Springer. [CrossRef]