1. Introduction
Risk measures play an essential role in both academic research and financial practice, as they provide a systematic way to assess the potential losses of a financial position. Their importance has been highlighted ever since the seminal work of [2], which introduced an axiomatic framework for coherent risk measures. Subsequent studies, such as ([23], Chapter 4) or [12], have refined and extended these ideas.
In the monetary risk-measure framework, one models a financial position by a real-valued random variable $X$ on a probability space $(\Omega, \mathcal{F}, P)$. The number $X(\omega)$ is the discounted net worth of the position if scenario $\omega \in \Omega$ occurs. A monetary risk measure $\rho$ assigns to $X$ a real number $\rho(X)$, which represents the minimal amount of capital needed to make $X$ acceptable according to certain risk criteria. Desirable properties include monotonicity (increasing the payoff lowers the risk) and translation invariance (adding a sure amount of cash decreases the risk by the same amount); when $\rho$ is further assumed to be convex or coherent, it also reflects the benefits of diversification. We refer to [2,12,23] for standard references on these axioms.
A preferred way to price financial claims in financial mathematics (and hence in risk measurement, when viewed from a market perspective) is to determine the price as the expectation of the discounted payoff under an equivalent (local) martingale measure. However, in general markets and for certain price processes, a (local) martingale measure need not exist. Instead, from the more general perspective of no-arbitrage (or, more precisely, no free lunch with vanishing risk), one can only ensure the existence of an equivalent σ-martingale measure (EσMM). See, for instance, [15,33,48] for details on σ-martingale arguments. In addition, in incomplete markets, there may be multiple such measures, and a natural question is how to select one "best" or "preferred" measure among the many.
One popular selection criterion in incomplete markets is to pick the measure that is "closest" to the real-world probability measure $P$ in terms of the relative entropy (also known as the Kullback–Leibler divergence); see [13,24,27,40]. Minimizing relative entropy leads to the well-known minimal entropy martingale measure, a construction which has proven valuable in option pricing and hedging and which is closely connected to maximizing the investor's utility (see Section 4). However, most of the literature focuses on the local martingale setting. The corresponding minimal entropy σ-martingale measure has not been studied, even though, in some markets, one must work with EσMMs. We close this gap by introducing and studying the minimal entropy σ-martingale measure

$$Q^* := \operatorname*{arg\,min}_{Q \in \mathcal{M}^e_\sigma} H(Q \mid P),$$

where $\mathcal{M}^e_\sigma$ denotes the set of EσMMs and $H(\cdot \mid P)$ is the relative entropy with respect to $P$. The associated minimal entropy risk measure is then

$$\rho_{\min}(X) := \mathbb{E}_{Q^*}[-X].$$
This measure is an extension of the classical minimal entropy martingale measure (since an ELMM is a special case of an EσMM) but is strictly more general whenever no local martingale measure exists. Notably, while the minimal entropy martingale measure has been studied for pricing and hedging, it has not been viewed or analyzed as a risk measure in the classical sense, and the σ-martingale version has not been examined at all. In this paper, we fill this gap by proving that the minimal entropy σ-martingale measure leads to a coherent risk measure with desirable properties.
One of the most popular risk measures is the entropic risk measure. At first glance, one might suspect connections between entropic risk measures and minimal entropy methods because of the shared "entropy" label. Nevertheless, the entropic risk measure is typically introduced via the exponential formula

$$\rho_\gamma(X) = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right],$$

which does not reveal any connection to entropy. Its name stems from its robust representation

$$\rho_\gamma(X) = \sup_{Q \ll P} \left( \mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \right), \tag{1}$$

showing that it penalizes deviations from $P$ in proportion to the relative entropy. Although this measure is well understood as a convex, time-consistent risk measure, the label "entropic" is not fully transparent when one begins from the exponential definition. Here, we instead start directly from the relative-entropy-based formulation, making the "entropic" nature manifest, and show that one arrives at the same results and conclusions quite easily.
The entropic risk measure is strictly convex (rather than linear) in the payoff $X$. In contrast, the minimal entropy risk measure picks out a single measure, namely the one that eliminates arbitrage and has the least relative entropy; as a result, it turns out to be coherent. Despite these structural differences, both measures share a fundamental entropy-based underpinning.
In summary, the main contributions of this paper are:
We introduce the minimal entropy σ-martingale measure for general semimartingale models (with dividends), which is new to the literature.
We prove that the induced minimal entropy risk measure is coherent and extends the classical minimal entropy martingale measure results.
We define the entropic risk measure via its robust representation and, therefore, provide an alternative approach which highlights its connection to entropy.
We demonstrate key properties including convexity, coherence, dynamic consistency, and optimal risk transfer for both measures, thereby revealing that minimal-entropy techniques are not only pricing tools but also valid risk measures in their own right.
We provide some estimates, comparing the two risk measures to their real-world expectations.
The paper is organized as follows. Section 2 contains the precise definitions of both the minimal entropy σ-martingale measure (obtained by minimizing relative entropy under the no-arbitrage condition) and the entropic risk measure (via a relative-entropy supremum), along with existence criteria. We establish their existence and compare their properties, highlighting the new σ-martingale generalization. Section 3 focuses on the definition of monetary risk measures, convexity, and coherence, proving that the minimal entropy risk measure is coherent while the entropic measure is convex. In Section 4, we explore duality and highlight the deeper relationship between entropy-based valuations and risk measures and, in particular, show that our definition of the entropic risk measure is equivalent to the more common definition in the literature. Section 5 provides dynamic versions, establishing time consistency. Finally, Section 6 discusses optimal risk transfer and how each of these risk measures behaves in that context.
2. Definition and Existence
Let $(\Omega, \mathcal{F}, P)$ be a probability space, $L^\infty := L^\infty(\Omega, \mathcal{F}, P)$ the space of bounded random variables, and $\mathcal{M}_1$ the set of probability measures on $(\Omega, \mathcal{F})$. Furthermore, let $(\mathcal{F}_t)_{t \in [0,T]}$ be a filtration with $\mathcal{F}_T = \mathcal{F}$, and let $S$ be a (potentially multi-dimensional) stochastic process adapted to $(\mathcal{F}_t)_{t \in [0,T]}$. We assume $S$ is the discounted price process (for details on discounting, see [47]).
In most models, there exists at least one probability measure under which the discounted process $S$ is a local martingale. However, [15] showed that in an arbitrage-free market (more precisely, a market that satisfies no free lunch with vanishing risk), one can only guarantee that there exists a probability measure under which $S$ is a σ-martingale. Therefore, it makes sense to study σ-martingales, equivalent σ-martingale measures, and the derived risk measures.
First, let us recall:
Definition 1.
A one-dimensional semimartingale $S$ is called a σ-martingale if there exists a sequence $(D_n)_{n \in \mathbb{N}}$ of predictable sets such that:
- (i) $D_n \subseteq D_{n+1}$ for all $n$;
- (ii) $\bigcup_{n \in \mathbb{N}} D_n = \Omega \times \mathbb{R}_+$;
- (iii) for each $n$, the process $\mathbf{1}_{D_n} \cdot S$ is a uniformly integrable martingale.
Such a sequence is called a σ-localizing sequence. A d-dimensional semimartingale is called a σ-martingale if each of its components is a one-dimensional σ-martingale.
These notions, introduced by Chou [7], further explored by Émery [51], and subsequently discussed by, for example, Jacod and Shiryaev [31] and Kallsen [33], allow certain processes to fail to be true local martingales yet still satisfy a limiting or piecewise martingale property.
Definition 2.
An equivalent σ-martingale measure (EσMM) is a probability measure $Q$ with $Q \sim P$ such that $S$ is a σ-martingale under $Q$. The set of all EσMMs is denoted by $\mathcal{M}^e_\sigma$.
We also define the broader set of absolutely continuous measures under which $S$ is a σ-martingale:

$$\mathcal{M}^a_\sigma := \left\{ Q \ll P : S \text{ is a } \sigma\text{-martingale under } Q \right\}.$$
For more details on σ-martingales, see, for example, [
28,
33,
48].
Definition 3.
The relative entropy of a probability measure $Q$ with respect to $P$ is defined as

$$H(Q \mid P) := \begin{cases} \mathbb{E}_P\!\left[ \dfrac{dQ}{dP} \log \dfrac{dQ}{dP} \right], & \text{if } Q \ll P, \\ +\infty, & \text{otherwise.} \end{cases}$$

The relative entropy provides a notion of distance between a probability measure $Q$ and a reference probability measure $P$. Even though it can be interpreted as a distance, it is not a metric, since neither the symmetry property nor the triangle inequality holds.
Beyond financial mathematics, relative entropy is a crucial concept in statistical physics and information theory, where it is referred to as the Kullback–Leibler divergence [46]. It also plays a fundamental role in large deviations theory, where it underpins results such as Sanov's theorem [8]. A comprehensive summary of its applications can be found in [6].
Further illustrations of this definition are available in [11], including a demonstration of how relative entropy can be understood as a distance measure. In statistics, this interpretation is reinforced by results such as Stein's Lemma, which can be found in [30] (Theorem 10.4).
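As a quick numerical illustration of Definition 3, the following sketch computes the relative entropy on a finite sample space and shows that it is nonnegative, vanishes exactly at $Q = P$, and is not symmetric. The two distributions are arbitrary illustrative choices, not taken from the paper:

```python
import math

def relative_entropy(q, p):
    """H(Q | P) = sum_i q_i * log(q_i / p_i) on a finite sample space.

    Returns +inf when Q is not absolutely continuous w.r.t. P
    (i.e., some q_i > 0 while p_i = 0), matching Definition 3."""
    h = 0.0
    for qi, pi in zip(q, p):
        if qi > 0.0:
            if pi == 0.0:
                return math.inf
            h += qi * math.log(qi / pi)
    return h

# Hypothetical reference measure P and an alternative measure Q.
p = [0.6, 0.3, 0.1]
q = [0.2, 0.3, 0.5]

h_qp = relative_entropy(q, p)  # H(Q | P)
h_pq = relative_entropy(p, q)  # H(P | Q)

# Nonnegative, zero only for Q = P, and asymmetric, hence not a metric.
print(h_qp, h_pq, relative_entropy(p, p))
```

The asymmetry visible in the output is exactly why the text stresses that relative entropy is a "distance" only in an informal sense.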
Definition 4.
- (a) An equivalent σ-martingale measure $Q^*$ is called the minimal entropy σ-martingale measure if it minimizes the relative entropy (or Kullback–Leibler divergence) among all $Q \in \mathcal{M}^e_\sigma$, i.e.,

$$H(Q^* \mid P) = \min_{Q \in \mathcal{M}^e_\sigma} H(Q \mid P).$$

The corresponding minimal entropy risk measure is defined by

$$\rho_{\min}(X) := \mathbb{E}_{Q^*}[-X], \qquad X \in L^\infty.$$

- (b) For $X \in L^\infty$ and $\gamma > 0$, the entropic risk is defined as

$$\rho_\gamma(X) := \sup_{Q \ll P, \, H(Q \mid P) < \infty} \left( \mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \right).$$
Remark 1.
In the above definition of $\rho_\gamma$, the parameter $\gamma$ can be viewed as a risk-aversion level or scaling factor, much like in exponential-utility frameworks where larger γ corresponds to greater risk aversion. Thus, $\rho_\gamma$ can be seen as an "entropic" or "exponential-penalized" valuation: the higher the γ, the weaker the penalty $\frac{1}{\gamma} H(Q \mid P)$ on deviating from the reference measure $P$, and hence the more conservative the resulting valuation.
Note that the existence of these risk measures is not immediately clear. In particular, the existence of a minimal entropy σ-martingale measure can still depend on the properties of $S$. For instance, if the market is not arbitrage-free (e.g., the No Free Lunch With Vanishing Risk condition fails) and $S$ represents the price process of a tradable asset, then no equivalent σ-martingale measure can exist. Even if the market does satisfy NFLVR, one may still require additional conditions on the price processes (e.g., local boundedness or boundedness from below) to ensure that there is some equivalent σ-martingale measure carrying finite relative entropy. For further details, see [14,15], and [16].
As opposed to the minimal entropy σ-martingale measure, the entropic risk always exists.
Theorem 1. The entropic risk always exists.
Proof. It suffices to show that $\rho_\gamma(X)$ is indeed a finite number.
Let $M := \|X\|_\infty$. It is straightforward to verify:

$$\rho_\gamma(X) \ge \mathbb{E}_P[-X] - \frac{1}{\gamma} H(P \mid P) = \mathbb{E}_P[-X] \ge -M$$

and

$$\mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \le \mathbb{E}_Q[-X] \le M \quad \text{for every admissible } Q,$$

because $X$ is almost surely bounded and $H(Q \mid P) \ge 0$. Since $\gamma \mapsto \rho_\gamma(X)$ is increasing in γ, we conclude for a fixed $\gamma > 0$:

$$-M \le \rho_\gamma(X) \le M < \infty.$$

□
While the minimal entropy σ-martingale measure is new, the notion of the minimal entropy martingale measure has been studied extensively in the literature. The definition of the latter is analogous to ours, but instead of equivalent σ-martingale measures, one minimizes the entropy among all equivalent martingale measures (see, for example, [29,38,39,43,45]). There are numerous criteria in the literature for the existence of the minimal entropy martingale measure. Most of these criteria impose strong conditions on the stochastic process $S$. A prominent example is when $S$ is a Lévy process; for this case, Theorem 3.1 in [26] provides a proof of existence, albeit with a highly technical approach. Another simple result is the existence of the minimal entropy martingale measure when $S$ is bounded, as shown in [24] (Theorem 2.1). Other results depend on the specific form of the Radon–Nikodym derivative, e.g., [29], or are applicable only in discrete-time settings, e.g., [23] (Corollary 3.27).
For the minimal entropy σ-martingale measure, things are easier. We now state and prove a theorem showing that, under a mild condition (namely, the existence of at least one σ-martingale measure with finite relative entropy), one obtains the existence and uniqueness of a minimal entropy σ-martingale measure. This holds even if $S$ is not locally bounded and even if no equivalent local martingale measure exists.
For the proof, we need some lemmas.
Lemma 1.
Let $\mathcal{M}_1$ be the set of probability measures on $(\Omega, \mathcal{F})$. We have:
- (a) $H(Q \mid P) \ge 0$ for all $Q \in \mathcal{M}_1$. Furthermore, $H(Q \mid P) = 0$ if and only if $Q = P$.
- (b) $Q \mapsto H(Q \mid P)$ is convex on $\mathcal{M}_1$ and strictly convex on the set of probability measures which are absolutely continuous with respect to $P$.
Proof. We have:

$$H(Q \mid P) = \mathbb{E}_P\!\left[ \frac{dQ}{dP} \log \frac{dQ}{dP} \right] \quad \text{for } Q \ll P; \tag{2}$$

hence, it is reasonable to take a closer look at the function ϕ, which we define as the continuous extension of the function

$$x \mapsto x \log x$$

to $[0, \infty)$. Because

$$\phi''(x) = \frac{1}{x} > 0$$

and Theorem A1, we have strict convexity of ϕ for $x > 0$.

It is easy to see that the function $x \mapsto \phi(x) - (x - 1)$ takes its minimum at $x = 1$. Hence, we have $\phi(x) - (x - 1) \ge \phi(1) - 0 = 0$ and thus

$$\phi(x) \ge x - 1 \quad \text{for all } x \ge 0.$$

- (a) According to Equation (2), we have for $Q \ll P$:

$$H(Q \mid P) = \mathbb{E}_P\!\left[ \phi\!\left( \frac{dQ}{dP} \right) \right] \ge \mathbb{E}_P\!\left[ \frac{dQ}{dP} - 1 \right] = 0.$$

Now let $H(Q \mid P) = 0$. We must show that the equality $Q = P$ follows. We have:

$$0 = H(Q \mid P) = \mathbb{E}_P\!\left[ \phi\!\left( \frac{dQ}{dP} \right) - \left( \frac{dQ}{dP} - 1 \right) \right],$$

where the integrand is nonnegative. Hence, $P$-almost everywhere we have $\phi(dQ/dP) = dQ/dP - 1$, which holds only at $dQ/dP = 1$. Now it follows that $Q = P$ $P$-almost surely. If $Q$ is not absolutely continuous with respect to $P$, the statement is obvious.

- (b) For probability measures that are not absolutely continuous with respect to $P$, the statement is obvious, since the relative entropy is infinite there. So we focus on the case where $Q_1, Q_2 \ll P$. For $Q_1 \ne Q_2$ with $\lambda \in (0, 1)$, we have:

$$H(\lambda Q_1 + (1 - \lambda) Q_2 \mid P) = \mathbb{E}_P\!\left[ \phi\!\left( \lambda \frac{dQ_1}{dP} + (1 - \lambda) \frac{dQ_2}{dP} \right) \right] < \lambda H(Q_1 \mid P) + (1 - \lambda) H(Q_2 \mid P).$$

Thus, strict convexity follows. □
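The convexity asserted in Lemma 1(b) is easy to observe numerically on a finite sample space. This is an illustrative sketch with arbitrarily chosen measures, not part of the proof:

```python
import math

def rel_entropy(q, p):
    """H(Q | P) for discrete measures with Q << P."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p  = [0.4, 0.4, 0.2]   # reference measure P
q1 = [0.1, 0.6, 0.3]   # two distinct measures absolutely continuous w.r.t. P
q2 = [0.7, 0.1, 0.2]

lam = 0.3
mix = [lam * a + (1 - lam) * b for a, b in zip(q1, q2)]

lhs = rel_entropy(mix, p)                                       # H(lam*Q1 + (1-lam)*Q2 | P)
rhs = lam * rel_entropy(q1, p) + (1 - lam) * rel_entropy(q2, p) # convex combination of entropies
print(lhs, rhs)  # lhs is strictly smaller, since Q1 != Q2
```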
Lemma 2.
Suppose there exists a measure $Q^* \in \mathcal{M}^a_\sigma$ with

$$H(Q^* \mid P) = \min_{Q \in \mathcal{M}^a_\sigma} H(Q \mid P) < \infty,$$

and suppose that $\mathcal{M}^e_\sigma$ contains at least one measure of finite relative entropy. Then $Q^*$ is the unique minimal entropy σ-martingale measure.
Proof. Define the set

$$\Gamma := \left\{ Q \in \mathcal{M}^a_\sigma : H(Q \mid P) < \infty \right\}.$$

By assumption, Γ is nonempty because $Q^* \in \Gamma$. Thus, we have

$$\inf_{Q \in \Gamma} H(Q \mid P) < \infty.$$

Next, we show that Γ is convex and closed in total variation:
For any $Q_1, Q_2 \in \Gamma$, their convex combination $\lambda Q_1 + (1 - \lambda) Q_2$, $\lambda \in [0, 1]$, is also in Γ. Indeed, convex combinations preserve both the finite-entropy condition (due to the convexity of relative entropy) and the σ-martingale property, as σ-martingales form a vector space.
The set $\mathcal{M}^a_\sigma$ can be expressed through linear constraints on the measures, namely through the martingale property of the localized processes. Sets defined via linear constraints of this form are closed in the total variation topology. Intersecting with the closed set of measures having finite entropy preserves this closedness.
By applying Csiszár's Theorem A6, we conclude that the set Γ has a unique I-projection $Q_\Gamma$ of $P$, which satisfies:

$$H(Q_\Gamma \mid P) = \min_{Q \in \Gamma} H(Q \mid P).$$

By hypothesis, $Q^*$ already satisfies this minimality, so it must coincide with $Q_\Gamma$. Thus, uniqueness follows directly from the strict convexity of relative entropy.
Finally, we verify that $Q^*$ is equivalent to $P$. Suppose this is not the case. Then there exists a measure in Γ which is equivalent to $P$, as given by hypothesis. If $Q^*$ is not equivalent to $P$, its support is strictly smaller than that of $P$, which contradicts minimality via the Csiszár–Gibbs inequality (Theorem A7). Hence, $Q^* \sim P$.
Thus, $Q^* \in \Gamma$, is equivalent to $P$, and minimizes entropy. Therefore, it is the unique minimal entropy σ-martingale measure. □
Lemma 3.
Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $(Q_n)_{n \in \mathbb{N}}, Q$ be probability measures on $(\Omega, \mathcal{F})$ with $Q_n \ll P$ for each n and

$$\frac{dQ_n}{dP} \xrightarrow{L^1(P)} \frac{dQ}{dP}.$$

Suppose X is a semimartingale that is a σ-martingale under each measure $Q_n$. Then X is also a σ-martingale under $Q$.
Proof. Since $X$ is a σ-martingale under each measure $Q_n$, there exists, for every $n$, a suitable family of predictable sets making each localized piece of $X$ a uniformly integrable martingale under $Q_n$. By reindexing or combining these families appropriately, we can select a single sequence $(D_k)_{k \in \mathbb{N}}$ of predictable sets for which $\mathbf{1}_{D_k} \cdot X$ is a uniformly integrable martingale simultaneously under every measure $Q_n$.
To show that $\mathbf{1}_{D_k} \cdot X$ remains a uniformly integrable martingale under $Q$, fix arbitrary times $s < t$ and an arbitrary set $A \in \mathcal{F}_s$. Define the bounded random variable

$$Z := \mathbf{1}_A \left( (\mathbf{1}_{D_k} \cdot X)_t - (\mathbf{1}_{D_k} \cdot X)_s \right).$$

Since $\mathbf{1}_{D_k} \cdot X$ is a martingale under each $Q_n$, we immediately have $\mathbb{E}_{Q_n}[Z] = 0$ for every $n$. Using the convergence of the Radon–Nikodým derivatives in $L^1(P)$ and the boundedness of $Z$, we get:

$$\mathbb{E}_Q[Z] = \lim_{n \to \infty} \mathbb{E}_{Q_n}[Z] = 0.$$

This equality implies that the conditional expectation of $(\mathbf{1}_{D_k} \cdot X)_t$ given $\mathcal{F}_s$ under $Q$ equals $(\mathbf{1}_{D_k} \cdot X)_s$, establishing the martingale property for each localized piece. Since this argument applies to each predictable set $D_k$, it follows directly that $X$ is a σ-martingale under $Q$. □
Theorem 2.
Suppose there exists at least one measure $Q \in \mathcal{M}^e_\sigma$ with finite entropy $H(Q \mid P) < \infty$. Then there exists a unique minimal entropy σ-martingale measure $Q^*$, characterized by

$$H(Q^* \mid P) = \min_{Q \in \mathcal{M}^e_\sigma} H(Q \mid P).$$
Proof. We define

$$c := \inf_{Q \in \mathcal{M}^a_\sigma} H(Q \mid P) < \infty.$$

By assumption, there exists a sequence $(Q_n)_{n \in \mathbb{N}} \subseteq \mathcal{M}^a_\sigma$ such that $(H(Q_n \mid P))_{n \in \mathbb{N}}$ is decreasing and satisfies

$$\lim_{n \to \infty} H(Q_n \mid P) = c.$$

Because the sequence of the relative entropies is monotone, we obtain for the convex function $\phi(x) = x \log x$

$$\sup_{n \in \mathbb{N}} \mathbb{E}_P\!\left[ \phi\!\left( \frac{dQ_n}{dP} \right) \right] < \infty,$$

and therefore the sequence $(dQ_n/dP)_{n \in \mathbb{N}}$ is uniformly integrable according to Theorem A3. By Theorem A4, there exists a subsequence $(dQ_{n_k}/dP)_{k \in \mathbb{N}}$ which converges weakly in $L^1(P)$. However, by Mazur's Lemma (Theorem A5), there exists a sequence $(Z_m)_{m \in \mathbb{N}}$ consisting of convex combinations

$$Z_m \in \operatorname{conv}\left\{ \frac{dQ_{n_k}}{dP} : k \ge m \right\}$$

which converges in $L^1(P)$ to some $Z \in L^1(P)$.
The measure $Q^*$ with $dQ^*/dP := Z$ is uniquely defined if we interpret $Z$ as a Radon–Nikodym derivative. One can see directly that $Q^* \ll P$. By assumption, $S$ is a σ-martingale with respect to all $Q_{n_k}$, and we claim it is also a σ-martingale with respect to each convex combination $R_m$ (the measure with density $Z_m$). To prove this, let $(D_j)_{j \in \mathbb{N}}$ be a sequence of predictable sets such that $\mathbf{1}_{D_j} \cdot S$ is a martingale under each of the finitely many measures entering $R_m$; such a common sequence exists by combining the individual σ-localizing sequences. For each $j$, the process $\mathbf{1}_{D_j} \cdot S$ is then also a martingale under $R_m$, because expectations under $R_m$ are convex combinations of the corresponding expectations under the $Q_{n_k}$. Hence $S$ is a σ-martingale under each $R_m$.
Moreover, $Z_m = dR_m/dP$ converges in $L^1(P)$ to $Z = dQ^*/dP$, and so, by Lemma 3, $S$ is a σ-martingale under $Q^*$.
Because of the convexity of the relative entropy (Lemma 1), Equation (3), and since $(H(Q_n \mid P))_n$ is a decreasing sequence, we have

$$H(R_m \mid P) \le \max_{k \ge m} H(Q_{n_k} \mid P) \le H(Q_{n_m} \mid P).$$

Thus, by applying Fatou's Lemma, we obtain

$$H(Q^* \mid P) = \mathbb{E}_P[\phi(Z)] \le \liminf_{m \to \infty} \mathbb{E}_P[\phi(Z_m)] = \liminf_{m \to \infty} H(R_m \mid P) \le c.$$

Since $(H(Q_n \mid P))_n$ is decreasing with limit $c$, it follows that $H(Q^* \mid P) = c$ is already minimal. Now, all conditions of Lemma 2 are satisfied, so we conclude that $Q^*$ is equivalent to $P$, proving the existence of the minimal entropy σ-martingale measure.
Uniqueness follows from Lemma 1 and the strict convexity of $H(\cdot \mid P)$. □
Remark 2.
Theorem 2 shows that the "entropy-minimizing" approach extends smoothly to σ-martingale models: as soon as there is one finite-entropy σ-martingale measure, there must be a unique one of minimal entropy, even if $S$ is unbounded or not locally bounded. This measure can be used for entropy-based valuation, risk measurement, or other applications. The local martingale approach, in contrast, might be impossible if no equivalent local martingale measure exists.
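To make the entropy-minimizing construction concrete, consider a one-period market on a finite sample space, where martingale, local martingale, and σ-martingale measures all coincide. There, minimizing relative entropy under the martingale constraint yields an exponential tilt of $P$ (an Esscher-type transform, by a standard Lagrange-multiplier argument), and the tilt parameter can be found by bisection. The following sketch uses hypothetical numbers and also evaluates the induced risk measure $\mathbb{E}_{Q^*}[-X]$:

```python
import math

p  = [0.3, 0.4, 0.3]          # real-world probabilities P (illustrative)
ds = [-1.0, 0.0, 2.0]         # one-period increments of the discounted asset S

def tilt(lam):
    """Exponential tilt q_i proportional to p_i * exp(lam * ds_i), normalized."""
    w = [pi * math.exp(lam * d) for pi, d in zip(p, ds)]
    z = sum(w)
    return [wi / z for wi in w]

def drift(lam):
    """E_Q[delta S] under the tilted measure; strictly increasing in lam."""
    return sum(qi * d for qi, d in zip(tilt(lam), ds))

# Bisection for drift(lam) = 0; a root exists because delta S takes both
# signs with positive probability under P.
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if drift(mid) < 0.0:
        lo = mid
    else:
        hi = mid
q_star = tilt(0.5 * (lo + hi))   # the minimal entropy martingale measure

def rel_entropy(q):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Induced minimal entropy risk measure for a call-style payoff X = max(delta S, 0).
x = [max(d, 0.0) for d in ds]
rho_min = -sum(qi * xi for qi, xi in zip(q_star, x))
print(q_star, rel_entropy(q_star), rho_min)
```

Any other measure satisfying the martingale constraint for these increments, e.g. q = [0.2, 0.7, 0.1], has strictly larger relative entropy, which is exactly the minimality the theorem guarantees.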
3. Convex and Coherent Risk Measures
In this section, we define the notions of monetary, convex, and coherent risk measures. We also highlight the relationships among translation invariance, monotonicity, convexity, positive homogeneity, and subadditivity, emphasizing how these conditions characterize different levels of risk-measure structure.
Definition 5.
Let $L^\infty$ be the space of bounded, real-valued random variables, and let $X, Y \in L^\infty$.
- (a) A mapping $\rho: L^\infty \to \mathbb{R}$ is called a monetary risk measure if it satisfies, for all $X, Y \in L^\infty$:
Monotonicity: If $X \le Y$ almost surely, then $\rho(X) \ge \rho(Y)$.
Translation Invariance: For all $m \in \mathbb{R}$, $\rho(X + m) = \rho(X) - m$.
- (b) The monetary risk measure ρ is called convex if it also satisfies
Convexity: For all $X, Y \in L^\infty$ and $\lambda \in [0, 1]$,

$$\rho(\lambda X + (1 - \lambda) Y) \le \lambda \rho(X) + (1 - \lambda) \rho(Y).$$

- (c) A convex risk measure ρ is called coherent if, in addition to monotonicity and translation invariance, it satisfies
Positive Homogeneity: For all $\lambda \ge 0$, $\rho(\lambda X) = \lambda \rho(X)$.
Subadditivity: For all $X, Y \in L^\infty$, $\rho(X + Y) \le \rho(X) + \rho(Y)$.
Theorem 3.
Let ρ be a monetary risk measure on a linear space of random variables.
- (a) We say that ρ is normalized if $\rho(0) = 0$. In particular, if ρ is positively homogeneous, then $\rho(0) = 0$ follows automatically.
- (b) Suppose ρ satisfies translation invariance and monotonicity (as in Definition 5). Then, any two of the following three properties imply the remaining one:
Convexity,
Positive Homogeneity,
Subadditivity.
For a detailed discussion and proof, see [23] (Chapter 4).
Theorem 4. The minimal entropy risk measure and the entropic risk are both convex risk measures. The minimal entropy risk measure is also a coherent risk measure.
Proof. We start with the minimal entropy risk measure. It suffices to prove the coherence; the convexity then follows from the fact that positive homogeneity and subadditivity imply convexity (Theorem 3). Since $\rho_{\min}(X) = \mathbb{E}_{Q^*}[-X]$ is linear in $X$ and $Q^*$ is a probability measure, monotonicity, translation invariance, positive homogeneity, and subadditivity all follow directly.
We now address the entropic risk. As a supremum of functionals that are affine in $X$, the entropic risk is convex. In general, however, it is not a coherent risk measure, even though it is additive for independent positions. By considering Lemma 1, one can easily see from the definition that, in order to be a coherent risk measure, the entropic risk would have to coincide with the risk measure ρ defined as $\rho(X) := \mathbb{E}_P[-X]$. □
There is also a coherent version of the entropic risk measure, described in [21], which is quite similar to the two risk measures we are examining here.
Finally, to better understand these risk measures, we explore their relationship with the real-world probability measure. The following lemma will be helpful.
Lemma 4.
Let $Q_1, Q_2$ be absolutely continuous σ-martingale measures, i.e., each of $Q_1, Q_2$ is in $\mathcal{M}^a_\sigma$, and assume $H(Q_1 \mid P)$ and $H(Q_2 \mid P)$ are both finite. Then, for any bounded random variable $X$, we have

$$\left| \mathbb{E}_{Q_1}[X] - \mathbb{E}_{Q_2}[X] \right| \le \|X\|_\infty \left( \sqrt{2 H(Q_1 \mid P)} + \sqrt{2 H(Q_2 \mid P)} \right). \tag{4}$$
Proof. First note that for any probability measures $Q, Q'$ (absolutely continuous w.r.t. $P$),

$$\left| \mathbb{E}_Q[X] - \mathbb{E}_{Q'}[X] \right| \le 2 \|X\|_\infty \, d_{TV}(Q, Q'),$$

where $d_{TV}$ denotes the total variation distance:

$$d_{TV}(Q, Q') := \sup_{A \in \mathcal{F}} \left| Q(A) - Q'(A) \right|.$$

Next, by the triangle inequality in total variation,

$$d_{TV}(Q_1, Q_2) \le d_{TV}(Q_1, P) + d_{TV}(P, Q_2).$$

Finally, we invoke the Pinsker inequality (Theorem A8) and obtain

$$d_{TV}(Q_i, P) \le \sqrt{\tfrac{1}{2} H(Q_i \mid P)}, \qquad i = 1, 2,$$

whenever $H(Q_1 \mid P)$ (resp. $H(Q_2 \mid P)$) is finite. Hence

$$d_{TV}(Q_1, Q_2) \le \sqrt{\tfrac{1}{2} H(Q_1 \mid P)} + \sqrt{\tfrac{1}{2} H(Q_2 \mid P)}.$$

Putting these pieces together yields

$$\left| \mathbb{E}_{Q_1}[X] - \mathbb{E}_{Q_2}[X] \right| \le 2 \|X\|_\infty \left( \sqrt{\tfrac{1}{2} H(Q_1 \mid P)} + \sqrt{\tfrac{1}{2} H(Q_2 \mid P)} \right) = \|X\|_\infty \left( \sqrt{2 H(Q_1 \mid P)} + \sqrt{2 H(Q_2 \mid P)} \right).$$

That completes the proof. □
Remark 3.
Inequality (4) shows that two absolutely continuous σ-martingale measures which both have small entropy must produce close expectations of any bounded payoff $X$. In particular, one recovers the fact that a measure with finite entropy is "close" to $P$ in the sense of total variation if the entropy is small. Hence, if two measures are entirely determined by their Radon–Nikodým derivatives in $L^1(P)$, a bound in terms of the square roots of their entropies is typically the best general estimate on how far their integrals can differ.
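The chain of estimates behind Lemma 4 (expectation gap via total variation, then total variation via Pinsker's inequality) can be checked numerically. The measures and the payoff below are arbitrary illustrations on a four-point sample space:

```python
import math

p  = [0.25, 0.25, 0.25, 0.25]          # reference measure P
q1 = [0.40, 0.30, 0.20, 0.10]          # two finite-entropy measures << P
q2 = [0.10, 0.20, 0.30, 0.40]
x  = [1.0, -0.5, 0.8, -1.0]            # bounded payoff X with sup-norm 1

def rel_entropy(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def tv(q, r):
    """d_TV(Q, R) = sup_A |Q(A) - R(A)| = (1/2) * sum_i |q_i - r_i|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(q, r))

def ev(q, x):
    return sum(qi * xi for qi, xi in zip(q, x))

x_sup   = max(abs(v) for v in x)
gap     = abs(ev(q1, x) - ev(q2, x))                       # |E_Q1[X] - E_Q2[X]|
tv_step = 2 * x_sup * tv(q1, q2)                           # first estimate in the proof
pinsker = math.sqrt(rel_entropy(q1, p) / 2)                # Pinsker bound on d_TV(Q1, P)
bound   = x_sup * (math.sqrt(2 * rel_entropy(q1, p))
                   + math.sqrt(2 * rel_entropy(q2, p)))    # the final bound of Lemma 4
print(gap, tv_step, bound)
```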
Theorem 5.
Let $X \in L^\infty$ be bounded and $\gamma > 0$. Then, the entropic risk $\rho_\gamma(X)$ satisfies the following two-sided bound:

$$\mathbb{E}_P[-X] \le \rho_\gamma(X) \le \mathbb{E}_P[-X] + \frac{\gamma \, \|X\|_\infty^2}{2}. \tag{5}$$
Proof. By taking the specific choice $Q = P$ in the definition of $\rho_\gamma(X)$, we see

$$\rho_\gamma(X) \ge \mathbb{E}_P[-X] - \frac{1}{\gamma} H(P \mid P) = \mathbb{E}_P[-X],$$

since $H(P \mid P) = 0$. This proves the left inequality in (5).
For any probability measure $Q$ with finite entropy $H(Q \mid P)$, we use the total variation bound from Lemma 4 to obtain

$$\left| \mathbb{E}_Q[-X] - \mathbb{E}_P[-X] \right| \le 2 \|X\|_\infty \, d_{TV}(Q, P) \le 2 \|X\|_\infty \sqrt{\tfrac{1}{2} H(Q \mid P)}.$$

Hence,

$$\mathbb{E}_Q[-X] \le \mathbb{E}_P[-X] + \sqrt{2} \, \|X\|_\infty \sqrt{H(Q \mid P)}.$$

Therefore, for any such $Q$,

$$\mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \le \mathbb{E}_P[-X] + \sqrt{2} \, \|X\|_\infty \sqrt{H(Q \mid P)} - \frac{1}{\gamma} H(Q \mid P).$$

Define $h := H(Q \mid P) \ge 0$. Then, we want to maximize

$$g(h) := \sqrt{2} \, \|X\|_\infty \sqrt{h} - \frac{h}{\gamma}$$

over $h \ge 0$. A short calculus argument shows that the maximum occurs at $h = \frac{\gamma^2 \|X\|_\infty^2}{2}$ and that the maximum value is $\frac{\gamma \|X\|_\infty^2}{2}$. Consequently,

$$\mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \le \mathbb{E}_P[-X] + \frac{\gamma \|X\|_\infty^2}{2}.$$

Hence, for all $Q$ with $H(Q \mid P) < \infty$, the objective is bounded by the right-hand side of (5). Taking the supremum over all such $Q$ leads to

$$\rho_\gamma(X) \le \mathbb{E}_P[-X] + \frac{\gamma \|X\|_\infty^2}{2}.$$

This completes the proof of (5). □
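A two-sided bound of this type is easy to check numerically via the equivalent exponential form of the entropic risk (see Section 4). The model below is an illustrative sketch, and the explicit right-hand side $\gamma \|X\|_\infty^2 / 2$ is the constant derived in the proof above:

```python
import math

p = [0.2, 0.5, 0.3]        # illustrative reference measure P
x = [-1.5, 0.4, 2.0]       # bounded payoff X with sup-norm 2.0
gamma = 0.7

def entropic_risk(x, p, gamma):
    """rho_gamma(X) = (1/gamma) * log E_P[exp(-gamma * X)] (exponential form)."""
    return math.log(sum(pi * math.exp(-gamma * xi) for pi, xi in zip(p, x))) / gamma

mean_loss = -sum(pi * xi for pi, xi in zip(p, x))   # E_P[-X]
x_sup = max(abs(v) for v in x)

rho = entropic_risk(x, p, gamma)
lower = mean_loss                                   # left side of the bound
upper = mean_loss + gamma * x_sup ** 2 / 2          # right side of the bound
print(lower, rho, upper)
```

The lower bound reflects Jensen's inequality (choosing Q = P), and the quadratic upper bound is of the same Hoeffding-type form one gets directly from the exponential formula.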
Corollary 1.
Let $Q^*$ be the minimal entropy σ-martingale measure for a (bounded) price process $S$. Then,

$$\rho_\gamma(X) \ge \mathbb{E}_{Q^*}[-X] - \frac{1}{\gamma} H(Q^* \mid P). \tag{7}$$

Furthermore, combining this estimate with the two-sided bound of Theorem 5, we have:

$$\max\left\{ \mathbb{E}_P[-X], \; \mathbb{E}_{Q^*}[-X] - \frac{1}{\gamma} H(Q^* \mid P) \right\} \le \rho_\gamma(X) \le \mathbb{E}_P[-X] + \frac{\gamma \|X\|_\infty^2}{2}. \tag{8}$$

Proof. Since $\rho_\gamma(X)$ is defined as the supremum

$$\rho_\gamma(X) = \sup_{Q \ll P, \, H(Q \mid P) < \infty} \left( \mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \right),$$

it clearly holds that for any $Q$ in that admissible set,

$$\rho_\gamma(X) \ge \mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P).$$

If $Q^*$ (the minimal entropy σ-martingale measure) is in the admissible set (i.e., $Q^* \ll P$ and $H(Q^* \mid P) < \infty$), then simply take $Q = Q^*$ in the supremum. This yields the lower bound (7).
Finally, to deduce (8), observe that $\rho_\gamma(X) \ge \mathbb{E}_P[-X]$ also holds by choosing $Q = P$. Then Theorem 5 provides the upper bound, and so taking the maximum of these lower estimates proves (8). □
Remark 4.
The assumption here requires that the minimal entropy measure be absolutely continuous w.r.t. $P$ with finite relative entropy. If, for instance, the market admits no arbitrage of the first kind (NA1) and there exists a finite-entropy EσMM, one can show that this holds. In that case, (7) provides a non-trivial lower estimate on the entropic risk in terms of the minimal entropy measure.
Remark 5.
When the risk-aversion parameter γ tends to zero, the penalty factor $\frac{1}{\gamma}$ in the entropic risk measure becomes very large. As a result, any measure $Q$ with $H(Q \mid P) > 0$ gets heavily penalized in the objective, so that the supremum is forced increasingly toward $Q = P$.
Formally, it can be shown that

$$\lim_{\gamma \downarrow 0} \rho_\gamma(X) = \mathbb{E}_P[-X].$$

Economically, this reflects a situation of ultra-strong aversion to deviating from the reference measure $P$. In the limit $\gamma \downarrow 0$, the only measure not incurring an infinite penalty is $P$ itself. Thus, the entropic risk measure collapses to $\mathbb{E}_P[-X]$, the plain (negative) expectation under the real-world probability $P$.
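This limit is easy to observe numerically via the exponential form (equivalent by the duality of Section 4); the model below is an arbitrary sketch:

```python
import math

p = [0.3, 0.4, 0.3]    # illustrative reference measure P
x = [-1.0, 0.5, 2.0]   # bounded payoff X

def entropic_risk(gamma):
    return math.log(sum(pi * math.exp(-gamma * xi) for pi, xi in zip(p, x))) / gamma

target = -sum(pi * xi for pi, xi in zip(p, x))   # E_P[-X], the claimed limit

# Distance to the limit shrinks (roughly linearly in gamma) as gamma -> 0.
errors = [abs(entropic_risk(g) - target) for g in (1.0, 0.1, 0.01, 0.001)]
print(errors)
```

The first-order Taylor expansion of the exponential formula shows the error is approximately $\gamma \, \mathrm{Var}_P(X) / 2$, consistent with the shrinking values printed above.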
4. Duality
Any convex or coherent risk measure admits a so-called robust representation. This has been discussed in several works, including [2,23], and [25].
The usual approach to defining the entropic risk measure is:

$$\rho_\gamma(X) := \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right], \tag{9}$$

and its robust representation, (1), is typically derived using dual representation theorems.
In this paper, we defined the entropic risk using its robust representation and now demonstrate that this definition is equivalent to the traditional one found in the literature. Since the convexity of (9) has not been explicitly shown here, we present an alternative proof.
Theorem 6.
For $X \in L^\infty$ and $\gamma > 0$, we have:

$$\sup_{Q \ll P, \, H(Q \mid P) < \infty} \left( \mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \right) = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right].$$

Proof. Define a new probability measure $\hat{Q}$ by:

$$\frac{d\hat{Q}}{dP} := \frac{e^{-\gamma X}}{\mathbb{E}_P\!\left[ e^{-\gamma X} \right]}.$$

Then:

$$H(Q \mid \hat{Q}) = \mathbb{E}_Q\!\left[ \log \frac{dQ}{d\hat{Q}} \right] = H(Q \mid P) + \gamma \, \mathbb{E}_Q[X] + \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right].$$

Substituting this into the optimization problem:

$$\sup_{Q} \left( \mathbb{E}_Q[-X] - \frac{1}{\gamma} H(Q \mid P) \right) = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right] - \frac{1}{\gamma} \inf_{Q} H(Q \mid \hat{Q}) = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right],$$

where the last equality follows from $\inf_Q H(Q \mid \hat{Q}) = 0$, attained at $Q = \hat{Q}$, and Lemma 1. □
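On a finite sample space, this duality can be verified directly: the supremum in the robust representation is attained at the Gibbs measure $d\hat{Q}/dP = e^{-\gamma X} / \mathbb{E}_P[e^{-\gamma X}]$, and its value matches the exponential formula. The setup below is an illustrative sketch:

```python
import math

p = [0.25, 0.35, 0.40]   # illustrative reference measure P
x = [1.2, -0.3, 0.7]     # bounded payoff X
gamma = 2.0

def rel_entropy(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def objective(q):
    """E_Q[-X] - (1/gamma) * H(Q | P), the robust-representation objective."""
    return -sum(qi * xi for qi, xi in zip(q, x)) - rel_entropy(q, p) / gamma

# Exponential (closed) form of the entropic risk.
z = sum(pi * math.exp(-gamma * xi) for pi, xi in zip(p, x))
rho_exp = math.log(z) / gamma

# Gibbs measure: the maximizer constructed in the proof of Theorem 6.
q_gibbs = [pi * math.exp(-gamma * xi) / z for pi, xi in zip(p, x)]
rho_sup = objective(q_gibbs)

# Other feasible measures give strictly smaller objective values.
others = [[1 / 3, 1 / 3, 1 / 3], [0.5, 0.25, 0.25], [0.1, 0.8, 0.1]]
print(rho_exp, rho_sup, [objective(q) for q in others])
```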
A detailed analysis of the dual problem for the minimal entropy martingale measure is more complex but provides valuable insights into its economic interpretation as an equivalent local martingale measure. The dual problem, in essence, connects minimizing relative entropy to maximizing a specific utility function. Below, we elaborate briefly on this connection.
Let $U$ be a utility function of the form:

$$U(x) = -e^{-\gamma x}, \qquad \gamma > 0.$$

We aim to maximize the expected utility of the terminal value of a wealth process by selecting an optimal strategy ϕ from a set of admissible strategies Φ. If the terminal value is represented as $x + (\phi \cdot S)_T$ for initial capital $x$, then the objective becomes:

$$\sup_{\phi \in \Phi} \mathbb{E}_P\!\left[ U\!\left( x + (\phi \cdot S)_T \right) \right]. \tag{10}$$

Under weak conditions, [13] show:

$$\sup_{\phi \in \Phi} \mathbb{E}_P\!\left[ U\!\left( x + (\phi \cdot S)_T \right) \right] = -e^{-\gamma x} \, e^{-H(Q^* \mid P)},$$

as well as:

$$\frac{dQ^*}{dP} = \frac{e^{-\gamma \, (\phi^* \cdot S)_T}}{\mathbb{E}_P\!\left[ e^{-\gamma \, (\phi^* \cdot S)_T} \right]},$$

where $\phi^*$ is the strategy that maximizes the utility in Equation (10).
Further details on this duality can be found in [5,13], and [32].
5. Dynamic Consistency
In this section, we present dynamic versions of the entropic risk measure and the risk measure associated with the minimal entropy martingale measure, proving that they are time-consistent. Dynamic consistency is particularly important when dealing with practical financial applications. For instance, dynamic risk measures based on entropy have been successfully applied to energy markets and stochastic volatility models, as discussed in [49].
We start by introducing the relevant definitions for this section.
Let $(\mathcal{F}_t)_{t \in [0,T]}$ be a filtration, $L^\infty(\mathcal{F}_t)$ the set of bounded $\mathcal{F}_t$-measurable functions, $\mathcal{M}_1(\mathcal{F}_t)$ the set of probability measures on $(\Omega, \mathcal{F})$ which are equivalent to $P$ on $\mathcal{F}_t$, and let $S$ be an adapted stochastic process.
Definition 6.
- (a) A map $\rho_t: L^\infty(\mathcal{F}) \to L^\infty(\mathcal{F}_t)$ is called a dynamic risk measure.
- (b) A dynamic risk measure is called time consistent if for $s \le t$,

$$\rho_s(X) = \rho_s(-\rho_t(X)).$$

- (c) The dynamic entropic risk measure is defined as

$$\rho_{\gamma, t}(X) := \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \mid \mathcal{F}_t \right].$$

- (d) The dynamic minimal entropy risk measure is defined as

$$\rho_{\min, t}(X) := \mathbb{E}_{Q^*}\!\left[ -X \mid \mathcal{F}_t \right],$$

where $Q^*$ is the minimal entropy martingale measure.
It is possible to allow γ to be an adapted process instead of a constant, resulting in a slightly more general definition of the dynamic entropic risk measure. However, in such cases, time consistency may not be guaranteed.
If for $Q \ll P$ with density process $Z_t := \mathbb{E}_P\!\left[ \frac{dQ}{dP} \mid \mathcal{F}_t \right]$ we define the conditional relative entropy $H_t(Q \mid P)$ as

$$H_t(Q \mid P) := \mathbb{E}_Q\!\left[ \log \frac{Z_T}{Z_t} \,\Big|\, \mathcal{F}_t \right],$$

the dynamic entropic risk measure can also be expressed in a robust representation involving the relative entropy:

$$\rho_{\gamma, t}(X) = \operatorname*{ess\,sup}_{Q} \left( \mathbb{E}_Q\!\left[ -X \mid \mathcal{F}_t \right] - \frac{1}{\gamma} H_t(Q \mid P) \right).$$
Further details on this duality and on dynamic risk measures in general (in both continuous and discrete time) can be found in [1,18,22], and [41].
It is straightforward to verify that the two risk measures defined above satisfy the properties of dynamic risk measures.
Theorem 7. The dynamic entropic risk measure and the dynamic minimal entropy risk measure are time-consistent.
Proof. We start with the entropic risk measure. We aim to show that

$$\rho_{\gamma, s}(-\rho_{\gamma, t}(X)) = \rho_{\gamma, s}(X)$$

for $s \le t$. Time consistency then follows from the intertemporal monotonicity theorem [22] (Proposition 4.2).
Using the tower property of conditional expectation, we have:

$$\rho_{\gamma, s}(-\rho_{\gamma, t}(X)) = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{\gamma \rho_{\gamma, t}(X)} \mid \mathcal{F}_s \right] = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ \mathbb{E}_P\!\left[ e^{-\gamma X} \mid \mathcal{F}_t \right] \mid \mathcal{F}_s \right] = \frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \mid \mathcal{F}_s \right] = \rho_{\gamma, s}(X).$$

For the minimal entropy risk measure, we proceed similarly:

$$\rho_{\min, s}(-\rho_{\min, t}(X)) = \mathbb{E}_{Q^*}\!\left[ \mathbb{E}_{Q^*}\!\left[ -X \mid \mathcal{F}_t \right] \mid \mathcal{F}_s \right] = \mathbb{E}_{Q^*}\!\left[ -X \mid \mathcal{F}_s \right] = \rho_{\min, s}(X).$$

This proves time consistency for both measures. □
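The tower-property argument for the dynamic entropic risk measure can be replayed on a two-period binomial tree; the probabilities and payoffs below are a hypothetical sketch:

```python
import math

gamma = 1.5
p_up = 0.6   # probability of an up-move in each period (independent moves)

# Terminal payoffs X on the four paths uu, ud, du, dd.
payoff = {"uu": 2.0, "ud": 0.5, "du": 0.4, "dd": -1.0}
prob   = {"uu": p_up * p_up, "ud": p_up * (1 - p_up),
          "du": (1 - p_up) * p_up, "dd": (1 - p_up) * (1 - p_up)}

def rho0(x):
    """Entropic risk at time 0: (1/gamma) * log E[exp(-gamma * X)]."""
    return math.log(sum(prob[w] * math.exp(-gamma * x[w]) for w in x)) / gamma

def rho1(x):
    """Time-1 entropic risk: one value per first-period node (u or d)."""
    out = {}
    for first in ("u", "d"):
        e = (p_up * math.exp(-gamma * x[first + "u"])
             + (1 - p_up) * math.exp(-gamma * x[first + "d"]))
        out[first] = math.log(e) / gamma
    return out

# Time consistency: rho_0(-rho_1(X)) should equal rho_0(X).
r1 = rho1(payoff)
# The position -rho_1(X) is F_1-measurable: constant on each node's continuations.
nested = {"uu": -r1["u"], "ud": -r1["u"], "du": -r1["d"], "dd": -r1["d"]}
lhs = rho0(nested)
rhs = rho0(payoff)
print(lhs, rhs)
```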
6. Optimal Risk Transfer
Optimal risk transfer is a classical topic in the study of risk measures and has been studied extensively in, e.g., [3]. In this section, we present a somewhat more general and extended view, highlighting in particular the property of γ-dilated families of risk measures, the associated inf-convolution approach, and how these results can be restricted to the risk-neutral setting. We then provide a brief discussion of why no non-trivial γ-dilation family can be constructed from the minimal entropy coherent risk measure, whereas the standard entropic risk measure naturally fits into this framework. Finally, we show how restricting to (local) risk-neutral measures forces the minimal entropy measure to emerge as the unique solution to the risk-transfer problem.
Let ρ be any (convex or coherent) risk measure on $L^\infty$. The fundamental question in optimal risk sharing between two entities with risk measures $\rho_1$ and $\rho_2$ is to split a position $X$ into two parts $X - F$ and $F$. The goal is to minimize the combined risk $\rho_1(X - F) + \rho_2(F)$. This optimization problem is well expressed via the inf-convolution operator:

$$(\rho_1 \,\square\, \rho_2)(X) := \inf_{F \in L^\infty} \left( \rho_1(X - F) + \rho_2(F) \right). \tag{11}$$

The optimal risk transfer is solved when the infimum in (11) is achieved by some $F^*$. Well-known results, see [3], show that certain risk measures, particularly those arising from γ-dilated families, enjoy a simple solution: the optimal allocation is often a linear split of $X$.
Definition 7 (γ-Dilated Family).
A family $(\rho_\gamma)_{\gamma \in I}$ of risk measures on $L^\infty$ is called γ-dilated if there exists a base risk measure ρ such that

$$\rho_\gamma(X) = \gamma \, \rho\!\left( \frac{X}{\gamma} \right) \quad \text{for all } \gamma \in I,$$

where $I \subseteq (0, \infty)$ is some index set.
An immediate (but important) example is the entropic risk measure, which is γ-dilated by construction:

$$\frac{1}{\gamma} \log \mathbb{E}_P\!\left[ e^{-\gamma X} \right] = \tilde{\gamma} \, \rho\!\left( \frac{X}{\tilde{\gamma}} \right), \qquad \tilde{\gamma} := \frac{1}{\gamma},$$

if one sets $\rho(X) := \log \mathbb{E}_P\!\left[ e^{-X} \right]$. By contrast, a coherent risk measure with positive homogeneity cannot usually form a non-trivial γ-dilated family unless it is purely linear and thus collapses to an expectation-type functional (more details below).
We restate here a known result, cf. [3], slightly paraphrased to fit our notation.
Theorem 8.
Let $(\rho_\gamma)_{\gamma \in I}$ be a γ-dilated family of convex risk measures on $L^\infty$. Then the following statements hold:
- (a) For any $\gamma_1, \gamma_2 \in I$, we have the inf-convolution identity:

$$\rho_{\gamma_1} \,\square\, \rho_{\gamma_2} = \rho_{\gamma_1 + \gamma_2}. \tag{12}$$

- (b) The optimal allocation solving $(\rho_{\gamma_1} \,\square\, \rho_{\gamma_2})(X)$ is linear in $X$, specifically

$$F^* = \frac{\gamma_2}{\gamma_1 + \gamma_2} \, X. \tag{13}$$

- (c) Consequently, we also have

$$(\rho_{\gamma_1} \,\square\, \rho_{\gamma_2})(X) = \rho_{\gamma_1}\!\left( \frac{\gamma_1}{\gamma_1 + \gamma_2} X \right) + \rho_{\gamma_2}\!\left( \frac{\gamma_2}{\gamma_1 + \gamma_2} X \right).$$

- (d) Moreover, if $\rho_{\gamma_1}$ and $\rho_{\gamma_2}$ are two convex risk measures that can each be embedded in a single γ-dilated family (i.e., they are both instances of $\rho_\gamma$ for some base ρ), then for any $\gamma_1, \gamma_2 \in I$,

$$\rho_{\gamma_1} \,\square\, \rho_{\gamma_2} = \rho_{\gamma_1 + \gamma_2}$$

under a suitable identification. Hence, the γ-dilation structure is preserved under inf-convolution.
Sketch of Proof. The key insight is that for a γ-dilated family, scaling $X$ by γ and adjusting the risk measure by the factor γ yields a consistent re-parametrization of the same base functional ρ. Once this is established, standard inf-convolution arguments for the exponentially (or, more generally, ρ-) tilted functional imply the linear sharing rule (13). For full details, see [3] or the references therein. □
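For the entropic family, the dilation rule can be checked directly. In the parametrization $\rho_\gamma(X) = \frac{1}{\gamma} \log \mathbb{E}_P[e^{-\gamma X}]$, the dilation parameter is $1/\gamma$, so inf-convolution adds reciprocals: $\rho_{\gamma_1} \square \rho_{\gamma_2} = \rho_{\bar\gamma}$ with $1/\bar\gamma = 1/\gamma_1 + 1/\gamma_2$, attained at a proportional split of $X$. A numerical sketch on an illustrative model:

```python
import math

p = [0.3, 0.4, 0.3]    # illustrative reference measure P
x = [-1.0, 0.2, 1.5]   # position X to be shared
g1, g2 = 2.0, 3.0
g_bar = 1.0 / (1.0 / g1 + 1.0 / g2)   # harmonic combination, here 1.2

def rho(x_vals, gamma):
    return math.log(sum(pi * math.exp(-gamma * xi) for pi, xi in zip(p, x_vals))) / gamma

def split_cost(alpha):
    """Cost of the proportional split: agent 1 takes alpha*X, agent 2 the rest."""
    y1 = [alpha * v for v in x]
    y2 = [(1 - alpha) * v for v in x]
    return rho(y1, g1) + rho(y2, g2)

alpha_star = g_bar / g1               # = g2 / (g1 + g2), the optimal share
target = rho(x, g_bar)                # value of the inf-convolution

grid = [k / 100 for k in range(101)]  # scan all proportional splits
best = min(split_cost(a) for a in grid)
print(split_cost(alpha_star), target, best)
```

At the optimal share both agents face the same effective risk aversion $\bar\gamma$, which is why the combined cost collapses to a single entropic risk measure.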
We now remark why a coherent risk measure, in particular the minimal entropy risk measure $\rho_{\min}$, cannot generally be part of a non-trivial γ-dilated family. Recall that a coherent risk measure ρ is positively homogeneous, i.e.,

$$\rho(\lambda X) = \lambda \rho(X) \quad \text{for all } \lambda \ge 0.$$

Combining this with the γ-dilation requirement

$$\rho_\gamma(X) = \gamma \, \rho\!\left( \frac{X}{\gamma} \right)$$

quickly forces $\rho_\gamma = \rho$ for all γ, unless $\rho \equiv 0$ or ρ is some fixed linear functional. Indeed, for coherent ρ, we would have

$$\rho_\gamma(X) = \gamma \, \rho\!\left( \frac{X}{\gamma} \right) = \gamma \cdot \frac{1}{\gamma} \, \rho(X) = \rho(X).$$

Hence, $\rho_\gamma$ cannot genuinely "depend" on γ. In short, non-trivial γ-dilated families (like the entropic class) are not coherent. This explains why the minimal entropy risk measure $\rho_{\min}$ cannot appear in a standard γ-dilated framework.