Intrinsically Optimal Equivariant Estimator for the Univariate Linear Normal Models

Submitted: 09 October 2025. Posted: 13 October 2025.
Abstract
In this paper we prove the existence and uniqueness of the minimum risk equivariant estimator (MIRE) for the univariate linear normal model under the action of the subgroup of the affine group that leaves the column space of the design matrix invariant. We work within the framework of the intrinsic analysis of statistical estimation, which uses the square of the Rao distance as the loss function; the expectation of this loss behaves very differently from that of the squared error loss when samples are small. We also supply an intrinsic bias measure for any equivariant estimator. The MIRE is studied and compared with the standard maximum likelihood estimator (MLE) in terms of intrinsic risk and bias, showing explicitly their differences, which are apparent for small samples and, fortunately, small for large sample sizes. Moreover, a very simple approximate estimator is also suggested, which removes most of the undesirable small-sample behaviour of the MLE in terms of intrinsic risk and bias.

1. Introduction

The Fisher information matrix of a regular parametric family of probability distributions induces a natural Riemannian structure on the parameter space [1,2,3]. This structure provides the foundation for the intrinsic analysis of statistical estimation, see [4,5] and, beyond statistics, it has also been shown to reveal connections with physical laws [6].
In this intrinsic framework, estimators are assessed through tools that do not depend on any particular parametrization of the model. Consequently, non-intrinsic criteria such as the squared error loss should be replaced by intrinsic measures. A natural choice is the squared Riemannian (Rao) distance, which serves as an intrinsic loss function. The associated risk can behave very differently from that based on squared error loss, especially in small-sample regimes. Similarly, the classical definition of bias must be reformulated in terms of a vector field determined by the geometry of the model, whose squared norm provides a natural intrinsic bias measure.
The estimator itself should also be intrinsic, i.e., independent of parametrization. Consider, for instance, the exponential distribution under two parametrizations,
$$f(x,\theta) = \theta\, e^{-\theta x}\, \mathbf{1}_{\mathbb{R}^{+}}(x) \qquad \text{and} \qquad f(x,\lambda) = \frac{1}{\lambda}\, e^{-x/\lambda}\, \mathbf{1}_{\mathbb{R}^{+}}(x)$$
where $\mathbf{1}_{\mathbb{R}^{+}}(x)$ is the indicator of the positive real numbers. The UMVU estimators computed under each parametrization, $\hat{\theta}$ and $\hat{\lambda}$, are not related by $\hat{\theta} = 1/\hat{\lambda}$. This apparent inconsistency arises because UMVU estimators rely on non-intrinsic notions such as unbiasedness and variance. As a result, the UMVU estimator is parametrization-dependent and cannot be regarded as intrinsically defined.
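To make the inconsistency explicit (a standard computation, included here only as an illustration), for an i.i.d. sample $x_1,\dots,x_n$ with $n \ge 2$ the two UMVU estimators are
$$\hat{\lambda}_{\mathrm{UMVU}} = \bar{x}_n, \qquad \hat{\theta}_{\mathrm{UMVU}} = \frac{n-1}{\sum_{i=1}^{n} x_i} = \frac{n-1}{n\,\bar{x}_n} \;\neq\; \frac{1}{\bar{x}_n} = \frac{1}{\hat{\lambda}_{\mathrm{UMVU}}}.$$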
When the statistical model is invariant under the action of a transformation group on the sample space, it is natural to restrict attention to equivariant estimators, i.e., estimators U satisfying
$$U(g(x)) = \bar{g}\left(U(x)\right) \qquad \text{for all } g \in G_n$$
where $G_n$ is the transformation group acting on the space of all samples of size $n$ and $\bar{g} \in \bar{G}$ is the induced transformation on the parameter space $\Theta$. This restriction ensures logical consistency: only equivariant estimators remain coherent under data transformations. It is therefore a requirement that should be imposed prior to any attempt to minimize an intrinsic risk function.
As an illustration, consider the estimation of the unconstrained mean $\mu$ of a $p$–variate normal distribution. The MLE of $\mu$ (which indeed defines an intrinsic estimator) based on a sample of size $n$ is just the sample mean vector $\bar{X}_n$. However, for $p \ge 3$, the James–Stein estimator is known to have smaller quadratic risk [7]. In our view, this does not undermine the MLE. Although the James–Stein estimator enjoys a lower risk under the non-intrinsic quadratic loss, it lacks the essential property of equivariance, which is crucial from the intrinsic perspective.
In this work we focus on classical statistical estimation, as is standard in the absence of prior knowledge. Nonetheless, intrinsic Bayesian approaches based on non-informative priors are also possible [5,8], and represent an interesting avenue for future research. Here, we adopt the squared Rao distance as a natural loss function, since it is the conceptually simplest intrinsic analogue of the quadratic loss, despite potential computational challenges. For a broader background on statistical estimation and its intrinsic developments, see [9,10,11,12].
Specifically, we consider the univariate linear normal model with a fixed design matrix. First, we explicitly characterize the class of equivariant estimators under the action of a suitable subgroup of the affine group, namely the affine transformations of the data that leave the column space of the design matrix unchanged. This subgroup is the largest one that preserves the model's structure. Next, we prove the existence and uniqueness of the estimator within this class that minimizes the intrinsic risk, i.e., the equivariant estimator that minimizes the mean squared Rao distance. We also derive an explicit expression for the intrinsic bias of any equivariant estimator.
Furthermore, we compare the intrinsic bias and risk of the proposed estimator with those of the MLE, highlighting their differences in small samples and showing how these differences diminish as the sample size grows. Finally, we propose a computable approximation of the optimal estimator, which mitigates most of the intrinsic bias and risk issues that affect the MLE in finite samples.

2. Equivariant estimators for linear models

Let us consider the univariate linear normal model,
$$y = X\beta + e$$
where $y$ is an $n \times 1$ random vector, $X$ is an $n \times m$ matrix of known constants with $0 < \operatorname{rank}(X) = m \le n$, $\beta$ is an $m \times 1$ vector of unknown parameters to be estimated and $e$ is the fluctuation or error of $y$ about $X\beta$. We assume that the errors are unbiased, independent, with the same variance $\sigma^2$ and jointly following an $n$–variate normal distribution, that is $e \sim N_n(0, \sigma^2 I_n)$, where $I_n$ is the $n \times n$ identity matrix. Therefore $y$ is distributed according to an element of the parametric family of probability distributions $\mathcal{P} = \{\, N_n(X\beta, \sigma^2 I_n) \mid (\beta,\sigma) \in \Theta \,\}$ with parameter space $\Theta = \mathbb{R}^m \times \mathbb{R}^{+}$, an $(m+1)$–dimensional simply connected real manifold. Hereafter, we shall identify the elements of $\mathbb{R}^m$ with $m \times 1$ column vectors, when necessary.
Denote by
$$O_E(n) = \{\, H \in O(n) \mid y \in E \Rightarrow Hy \in E \,\}$$
where $E$ is a subspace of $\mathbb{R}^n$ and $O(n)$ is the group of $n \times n$ orthogonal matrices with real entries. Define $F$ as the subspace of $\mathbb{R}^n$ spanned by the columns of $X$, that is $F = \operatorname{Col}(X)$. Observe that $O_F(n) = O_{F^{\perp}}(n)$, $I_n \in O_F(n)$, $\operatorname{Col}(X) = \operatorname{Col}(HX)$ for all $H \in O_F(n)$ and, if $H \in O_F(n)$, then $H^t \in O_F(n)$. Moreover, every $H \in O_F(n)$ induces, on $F$ and on $F^{\perp}$, two isomorphisms preserving the Euclidean norm of each subspace.
$\mathcal{P}$ is invariant under the action of the subgroup of the affine group in $\mathbb{R}^n$ given by the family of transformations
$$g_{(a,H,c)}(y) = a\,Hy + c, \qquad y \in \mathbb{R}^n$$
where $a > 0$, $H \in O_F(n)$, $c \in F$.
Observe that $G = \{\, g_{(a,H,c)} \mid a > 0,\; H \in O_F(n),\; c \in F \,\}$ induces an action on the parameter space $\Theta$ given by
$$\overline{g_{(a,H,c)}}(\beta,\sigma) = \left( a\,(X^tX)^{-1}X^t H X\beta + (X^tX)^{-1}X^t c,\;\; a\sigma \right)$$
where this result is obtained taking into account that $X(X^tX)^{-1}X^t$ is the projection matrix onto $F$ and thus, if $w \in F$, there exists a unique $\eta$ such that $w = X\eta$ and $X(X^tX)^{-1}X^t w = w$.
Since the family $\mathcal{P}$ is invariant under the action of $G$, it is natural to restrict our attention to the class of equivariant estimators $U = (U_1, U_2)$ of $(\beta,\sigma)$, i.e. estimators satisfying $U\!\left(g_{(a,H,c)}(y)\right) = \overline{g_{(a,H,c)}}\!\left(U(y)\right)$ for all $g_{(a,H,c)} \in G$.
Proposition 1.  
Let U be an equivariant estimator of $(\beta,\sigma)$. Then U belongs to the family $\{\, U_\lambda,\; \lambda \in \mathbb{R}^{+} \,\}$ where
$$U_\lambda(y) = \left( (X^tX)^{-1}X^t y,\;\; \lambda\left( y^t\left( I_n - X(X^tX)^{-1}X^t \right) y \right)^{1/2} \right)$$
Proof. Let $U = (U_1, U_2)$. The equivariance condition for $U$ involves
$$U_1(aHy + c) = a\,(X^tX)^{-1}X^t H X\, U_1(y) + (X^tX)^{-1}X^t c, \qquad U_2(aHy + c) = a\, U_2(y)$$
for any $a > 0$, $H \in O_F(n)$, $c \in F$.
Any $y \in \mathbb{R}^n$ can be written in a unique way as $y = y_F + y_{F^{\perp}}$ where $y_F \in F$ and $y_{F^{\perp}} \in F^{\perp}$. Specifically,
$$y_F = X(X^tX)^{-1}X^t y \qquad \text{and} \qquad y_{F^{\perp}} = \left( I_n - X(X^tX)^{-1}X^t \right) y$$
If we choose $c = -a\,Hy_F$ in the previous expressions, we obtain
$$U_1(a\,Hy_{F^{\perp}}) = a\,(X^tX)^{-1}X^t H X\, U_1(y) - a\,(X^tX)^{-1}X^t H y_F$$
$$U_2(a\,Hy_{F^{\perp}}) = a\, U_2(y)$$
First we focus on $U_1$. If we let $H^{*} = I_n - 2\,X(X^tX)^{-1}X^t$, we have $H^{*} \in O_F(n)$ with
$$H^{*}v = v \quad \forall v \in F^{\perp} \qquad \text{and} \qquad H^{*}v = -v \quad \forall v \in F$$
Now observe that (5) is satisfied for any $a > 0$ and $H \in O_F(n)$, in particular for $I_n$ and $H^{*}$. Therefore
$$U_1(a\,y_{F^{\perp}}) = a\, U_1(y) - a\,(X^tX)^{-1}X^t y_F$$
$$U_1(a\,H^{*}y_{F^{\perp}}) = -a\, U_1(y) + a\,(X^tX)^{-1}X^t y_F$$
But $a\,H^{*}y_{F^{\perp}} = a\,y_{F^{\perp}}$, which leads to
$$0 = U_1(y) - (X^tX)^{-1}X^t y_F$$
From (4),
$$U_1(y) = (X^tX)^{-1}X^t y, \qquad \forall y \in \mathbb{R}^n$$
Next, we consider $U_2$. Taking $a = 1$ and $H = I_n$ in ( ), it follows that
$$U_2(y_{F^{\perp}}) = U_2(y)$$
Accordingly, it is enough to determine $U_2$ on $F^{\perp}$. Let us take a unit vector $z \in F^{\perp}$ such that $U_2(z) = \lambda > 0$. Then, from ( ),
$$U_2(Hz) = U_2(z)$$
for any $H \in O_F(n)$. Observe that any arbitrary unit vector in $F^{\perp}$ can be written as $Hz$ for a proper $H \in O_F(n)$. Therefore, for any $y_{F^{\perp}} \in F^{\perp}$, $y_{F^{\perp}} \neq 0$, we have
$$U_2(y_{F^{\perp}}) = \|y_{F^{\perp}}\|\; U_2\!\left( \frac{y_{F^{\perp}}}{\|y_{F^{\perp}}\|} \right) = \|y_{F^{\perp}}\|\; U_2(Hz) = \|y_{F^{\perp}}\|\; U_2(z) = \lambda\, \|y_{F^{\perp}}\|$$
Observe also that if $y_{F^{\perp}} = 0$ then, choosing $c = 0$ and $H = I_n$, we have that $U_2(0) = a\,U_2(0)$ for all $a \in \mathbb{R}^{+}$. This implies $U_2(0) = 0$.
Finally, from (7) and (4) we have
$$U_2(y) = \lambda\left( y^t\left( I_n - X(X^tX)^{-1}X^t \right) y \right)^{1/2} \qquad \square$$
Observe that the standard maximum likelihood estimator (MLE) for the present model is an equivariant estimator, with $\lambda = 1/\sqrt{n}$.
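As a quick numerical illustration (a minimal sketch of ours, not the authors' code; the design matrix, seed and sample sizes are arbitrary), the following Python code builds $U_\lambda$ from Proposition 1 and verifies the equivariance property for a randomly generated transformation $g_{(a,H,c)}$:

```python
# Minimal sketch: build U_lambda and check U(g_(a,H,c)(y)) = g_bar_(a,H,c)(U(y))
# for a random H in O_F(n) and c in F.  All numerical choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 3
X = rng.normal(size=(n, m))                       # design matrix of rank m
XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)        # (X^t X)^{-1} X^t
P = X @ XtX_inv_Xt                                # projection onto F = Col(X)

def U(y, lam):
    """Equivariant estimator U_lambda(y) = (beta_hat, lam * ||y_Fperp||)."""
    return XtX_inv_Xt @ y, lam * np.sqrt(y @ (y - P @ y))

# an H in O_F(n): orthogonal blocks acting separately on F and on F_perp
Q, _ = np.linalg.qr(X, mode='complete')
QF, Qp = Q[:, :m], Q[:, m:]
RF, _ = np.linalg.qr(rng.normal(size=(m, m)))
Rp, _ = np.linalg.qr(rng.normal(size=(n - m, n - m)))
H = QF @ RF @ QF.T + Qp @ Rp @ Qp.T

a, c, y = 2.5, X @ rng.normal(size=m), rng.normal(size=n)
lam = 1 / np.sqrt(n)                              # the MLE member of the family

b_left, s_left = U(a * H @ y + c, lam)            # U(g_(a,H,c)(y))
b0, s0 = U(y, lam)
b_right = a * XtX_inv_Xt @ (H @ (X @ b0)) + XtX_inv_Xt @ c   # induced action on beta
s_right = a * s0                                             # induced action on sigma
print(np.allclose(b_left, b_right), np.isclose(s_left, s_right))   # True True
```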

3. Minimum Riemannian risk estimators

In the framework of intrinsic analysis, the loss function is the square of the Rao distance, i.e., of the Riemannian distance induced by the information metric on the parameter space $\Theta$. Once the class of equivariant estimators has been determined, a natural question arises: which equivariant estimator minimizes the risk?
First of all, we summarize the basic geometric results corresponding to the model (1) which are going to be used hereafter. We are going to use a standardized version of the information metric, given by the usual information metric corresponding to this linear model divided by a constant factor n, i.e. the number of rows of matrix X . This metric is given by
$$ds^2 = \frac{1}{n\sigma^2}\left( d\beta^t\, X^tX\, d\beta + 2n\,(d\sigma)^2 \right),$$
which is, up to a linear coordinate change, the Poincaré hyperbolic metric of the upper half space $\mathbb{R}^m \times \mathbb{R}^{+}$, see [13]. The Riemannian curvature $\kappa = -\tfrac{1}{2}$ is constant and negative, and the unique geodesic, parameterized by the arc–length, which connects two points $\theta_1 = (\beta_1,\sigma_1)$ and $\theta_2 = (\beta_2,\sigma_2) \in \Theta$ with $\beta_1 \neq \beta_2$, is given by:
$$\beta(s) = \frac{2n}{K^2}\,\tanh\!\left(\frac{s}{\sqrt{2}} + \epsilon\right)(X^tX)^{-1/2}\,C + D, \qquad \sigma(s) = K^{-1}\operatorname{sech}\!\left(\frac{s}{\sqrt{2}} + \epsilon\right)$$
where $s$ is the arc–length, $C$ and $D$ are $m \times 1$ vectors whose components, together with $\epsilon$, are convenient real integration constants such that $\beta(0) = \beta_1$, $\beta(\rho_{12}) = \beta_2$, $\sigma(0) = \sigma_1$, $\sigma(\rho_{12}) = \sigma_2$, $\rho_{12}$ being the Riemannian distance between $\theta_1$ and $\theta_2$. Finally, $K$ is given by $K^2 = 2n\, C^tC$. When $\beta_1 = \beta_2$, the geodesic is given by
$$\beta(s) = D, \qquad \sigma(s) = B\, e^{\pm s/\sqrt{2}}$$
where B is a positive integration constant.
The Rao distance ρ between the points θ 1 and θ 2 is
$$\rho_{12} \equiv \rho(\theta_1,\theta_2) = \sqrt{2}\,\ln\frac{1+\delta(\theta_1,\theta_2)}{1-\delta(\theta_1,\theta_2)} = 2\sqrt{2}\,\operatorname{arctanh}\,\delta(\theta_1,\theta_2)$$
where
$$\delta(\theta_1,\theta_2) = \left( \frac{\, d_M^2(\beta_1,\beta_2) + 2\,\dfrac{(\sigma_1-\sigma_2)^2}{\sigma_1\sigma_2} \,}{\, d_M^2(\beta_1,\beta_2) + 2\,\dfrac{(\sigma_1+\sigma_2)^2}{\sigma_1\sigma_2} \,} \right)^{1/2}$$
and
$$d_M^2(\theta_1,\theta_2) = \frac{1}{n\,\sigma_1\sigma_2}\,(\beta_1 - \beta_2)^t\, X^tX\, (\beta_1 - \beta_2)$$
or, equivalently,
$$\rho(\theta_1,\theta_2) = \sqrt{2}\,\operatorname{arccosh}\!\left( \frac{1}{4}\, d_M^2(\theta_1,\theta_2) + \frac{\sigma_1^2 + \sigma_2^2}{2\,\sigma_1\sigma_2} \right)$$
Let $\exp_{\theta_1}^{-1}(\theta_2)$ be the inverse of the exponential map corresponding to the Levi-Civita connection and $W^1, \dots, W^m, W^{m+1}$ its components with respect to the basis field $(\partial/\partial\beta^1)_{\theta_1}, \dots, (\partial/\partial\beta^m)_{\theta_1}, (\partial/\partial\sigma)_{\theta_1}$. Then, we have
$$W^i = \frac{\rho_{12}/\sqrt{2}}{\sinh(\rho_{12}/\sqrt{2})}\;\frac{(\beta_2^i - \beta_1^i)\,\sigma_1}{\sigma_2}, \quad i = 1,\dots,m; \qquad W^{m+1} = \frac{\rho_{12}/\sqrt{2}}{\sinh(\rho_{12}/\sqrt{2})}\left( \cosh(\rho_{12}/\sqrt{2}) - \frac{\sigma_1}{\sigma_2} \right)\sigma_1$$
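For concreteness, here is a minimal numerical sketch (ours, not the authors' Mathematica code) of the Rao distance in the arccosh form above; the design matrix and the two points are purely illustrative:

```python
# Rao distance between theta1 = (beta1, sigma1) and theta2 = (beta2, sigma2)
# for the model N_n(X beta, sigma^2 I_n), using the arccosh expression above.
import numpy as np

def rao_distance(X, beta1, sigma1, beta2, sigma2):
    n = X.shape[0]
    d = np.asarray(beta1) - np.asarray(beta2)
    dM2 = d @ (X.T @ X) @ d / (n * sigma1 * sigma2)        # d_M^2(theta1, theta2)
    arg = dM2 / 4.0 + (sigma1**2 + sigma2**2) / (2.0 * sigma1 * sigma2)
    return np.sqrt(2.0) * np.arccosh(arg)

X = np.random.default_rng(1).normal(size=(10, 2))
b = np.array([0.5, -1.0])
print(rao_distance(X, b, 1.0, b, 1.0))          # 0.0 on the diagonal
print(rao_distance(X, b, 1.0, b + 1.0, 2.0))    # positive, symmetric in its arguments
```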
It is well known that the Riemannian distance induced by the information metric is invariant under the transformations induced on the parameter space. We shall supply a direct and alternative proof for the linear model setting.
Proposition 2.  
The Rao distance ρ given by (11) is invariant under the action of the group $\bar{G}$ induced by $G$ on the parameter space. In other words,
$$\rho\!\left( \overline{g_{(a,H,c)}}\,\theta_1,\; \overline{g_{(a,H,c)}}\,\theta_2 \right) = \rho(\theta_1,\theta_2)$$
Proof: Observe that
$$HX(\beta_1 - \beta_2) \in F, \qquad \forall H \in O_F(n)$$
and, taking into account that $X(X^tX)^{-1}X^t$ is the projection matrix onto $F$, we have
$$X(X^tX)^{-1}X^t\, HX(\beta_1 - \beta_2) = HX(\beta_1 - \beta_2)$$
Therefore
$$(\beta_1 - \beta_2)^t X^t H^t\, X(X^tX)^{-1}X^t\, HX(\beta_1 - \beta_2) = (\beta_1 - \beta_2)^t X^tX\, (\beta_1 - \beta_2)$$
and the invariance of δ and ρ trivially follows. □
Proposition 3.  
$\bar{G}$ acts transitively on Θ.
Proof: The transitivity follows observing that $a$ is an arbitrary positive real number and $X(X^tX)^{-1}X^t$ is the projection matrix onto $F$ with $\operatorname{rank}(X) = m$: given $(\beta_1,\sigma_1)$ and $(\beta_2,\sigma_2)$ in $\Theta$, the transformation with $a = \sigma_2/\sigma_1$, $H = I_n$ and $c = X(\beta_2 - a\beta_1) \in F$ maps the first point onto the second. □
Since $\rho$, and thus $\rho^2$, is invariant under the action of $\bar{G}$ and $\bar{G}$ acts transitively on $\Theta$, the distribution of $\rho^2\!\left(U_\lambda(y),\theta\right)$ does not depend on $\theta$; therefore, the risk of any equivariant estimator remains constant and independent of the target parameter, provided that this risk is finite. More precisely, observe that if we let
$$z = \frac{1}{\sigma}\,( y_F - X\beta ), \qquad V = \|z\| \qquad \text{and} \qquad W = \frac{1}{\sigma}\,\| y_{F^{\perp}} \|$$
then, from (1) and (4), we clearly have that $z \sim N_n\!\left(0,\; X(X^tX)^{-1}X^t\right)$, with a rank $m$ idempotent covariance matrix. Moreover, $V^2$ and $W^2$ are independent random variables following chi-square distributions with $m$ and $(n-m)$ degrees of freedom, equal to the dimensions of $F$ and $F^{\perp}$, since $V^2$ and $W^2$ are quadratic forms based on the projection matrices onto these subspaces of $\mathbb{R}^n$ and $y_F$ (or $z$) and $y_{F^{\perp}}$ are independent random vectors. Therefore, since $X U_\lambda^1(y) = y_F$ and $U_\lambda^2(y) = \lambda\,\|y_{F^{\perp}}\|$, we have that
$$d_M^2\!\left(U_\lambda(y),\theta\right) = \frac{1}{n\, U_\lambda^2(y)\,\sigma}\,\left(U_\lambda^1(y) - \beta\right)^t X^tX \left(U_\lambda^1(y) - \beta\right) = \frac{V^2}{n\lambda W}$$
$$\delta\!\left(U_\lambda(y),\theta\right) = \left( \frac{\, \dfrac{V^2}{n\lambda W} + 2\,\dfrac{(\lambda W - 1)^2}{\lambda W} \,}{\, \dfrac{V^2}{n\lambda W} + 2\,\dfrac{(\lambda W + 1)^2}{\lambda W} \,} \right)^{1/2}$$
and
$$\rho\!\left(U_\lambda(y),\theta\right) = 2\sqrt{2}\,\operatorname{arctanh}\!\left( \frac{V^2 + 2n(\lambda W - 1)^2}{V^2 + 2n(\lambda W + 1)^2} \right)^{1/2}$$
or
$$\rho\!\left(U_\lambda(y),\theta\right) = \sqrt{2}\,\operatorname{arccosh}\!\left( \frac{1}{4}\,\frac{V^2}{n\lambda W} + \frac{1}{2}\left( \lambda W + \frac{1}{\lambda W} \right) \right)$$
which have distributions that depend only on $V^2$ and $W^2$, independent random variables with fixed distributions, whatever the value of $\theta$.
Since the risk of any equivariant estimator remains constant on the parameter space, it is enough to examine it at one point, for instance at the point $(0,1) \in \Theta$. Let us denote by $E_{(\beta,\sigma)}$ the expectation with respect to the $n$–variate linear normal model $N_n(X\beta, \sigma^2 I_n)$, and write $E$ for $E_{(0,1)}$. We can prove the following propositions.
Proposition 4.  
If $n \ge m+1$ we have
$$E_{(\beta,\sigma)}\!\left[ \rho^2\!\left(U_\lambda(y),(\beta,\sigma)\right) \right] = E\!\left[ \rho^2\!\left(U_\lambda(y),(0,1)\right) \right] < \infty, \qquad \forall \lambda > 0$$
for any $(\beta,\sigma) \in \Theta$.
Proof: From (14) and (13), since
$$1 + \frac{1}{2!}\,\frac{\rho^2}{2} + \frac{1}{4!}\,\frac{\rho^4}{4} \le \cosh\!\left(\frac{\rho}{\sqrt{2}}\right) = \frac{1}{4}\, d_M^2(\theta_1,\theta_2) + \frac{\sigma_1^2 + \sigma_2^2}{2\sigma_1\sigma_2}$$
we have
$$0 \le \rho^2(\theta_1,\theta_2) \le 12\left[ \left( 1 + \frac{d_M^2(\theta_1,\theta_2)}{6} + \frac{1}{3}\,\frac{(\sigma_1-\sigma_2)^2}{\sigma_1\sigma_2} \right)^{1/2} - 1 \right]$$
Developing the square of the difference and taking into account that the Euclidean norm of a vector is less than or equal to the sum of the absolute values of its components, we obtain
$$0 \le \rho^2(\theta_1,\theta_2) \le 2\sqrt{6}\; d_M(\theta_1,\theta_2) + 4\sqrt{3}\left( \sqrt{\frac{\sigma_1}{\sigma_2}} + \sqrt{\frac{\sigma_2}{\sigma_1}} \right) + 4\sqrt{3} - 12$$
Notice that both bounds (22) and (23) are invariant under the action of the induced group on the parameter space.
As we mentioned before, from [14], it is enough to prove that the risk is finite at $(0,1) \in \Theta$. Taking into account (16) it follows, from (23), that
$$0 \le \rho^2\!\left(U_\lambda(y),(0,1)\right) \le 2\sqrt{6}\;\frac{V}{\sqrt{n\lambda W}} + 4\sqrt{3}\left( \sqrt{\lambda W} + \frac{1}{\sqrt{\lambda W}} \right) + 4\sqrt{3} - 12$$
Observe that if $Q$ has a chi-square distribution with $k$ degrees of freedom,
$$E(Q^{\alpha}) = 2^{\alpha}\,\frac{\Gamma\!\left(\frac{k}{2}+\alpha\right)}{\Gamma\!\left(\frac{k}{2}\right)} \qquad \text{provided that } \frac{k}{2} + \alpha > 0$$
Therefore, since $V^2$ and $W^2$ are independent random variables following central chi-square distributions with $m$ and $(n-m)$ degrees of freedom, we have
$$E(V) = \sqrt{2}\,\frac{\Gamma\!\left(\frac{m+1}{2}\right)}{\Gamma\!\left(\frac{m}{2}\right)}, \qquad E\!\left(\sqrt{W}\right) = 2^{1/4}\,\frac{\Gamma\!\left(\frac{2(n-m)+1}{4}\right)}{\Gamma\!\left(\frac{n-m}{2}\right)}$$
and
$$E\!\left(\frac{1}{\sqrt{W}}\right) = 2^{-1/4}\,\frac{\Gamma\!\left(\frac{2(n-m)-1}{4}\right)}{\Gamma\!\left(\frac{n-m}{2}\right)} \qquad \text{provided that } n - m > \frac{1}{2}$$
Then, taking expectations, it follows that
$$E\!\left[ \rho^2\!\left(U_\lambda(y),(0,1)\right) \right] \le \frac{2\sqrt{6}}{\sqrt{n\lambda}}\, E(V)\, E\!\left(\frac{1}{\sqrt{W}}\right) + 4\sqrt{3}\left( \sqrt{\lambda}\, E\!\left(\sqrt{W}\right) + \frac{1}{\sqrt{\lambda}}\, E\!\left(\frac{1}{\sqrt{W}}\right) \right) + 4\sqrt{3} - 12 < +\infty, \qquad \forall \lambda > 0 \text{ and } n - m > \frac{1}{2}$$
Since $n$ and $m$ are positive integers with $n > m$, we conclude that the risk is finite whenever $n \ge m+1$. □
Proposition 4 gives a sufficient condition for the existence of the Riemannian risk of the equivariant estimator $U_\lambda$; thus
$$\Phi(\lambda) = E\!\left[ \rho^2\!\left(U_\lambda(y),(0,1)\right) \right], \qquad \lambda > 0,$$
is well defined for $n \ge m+1$, which we shall assume hereafter.
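As a sanity check, $\Phi(\lambda)$ can be approximated by Monte Carlo using the representation of $\rho\!\left(U_\lambda(y),(0,1)\right)$ in terms of the independent chi-square variables $V^2$ and $W^2$ given above (a minimal sketch of ours; the number of draws, the seed and the sample sizes are arbitrary choices):

```python
# Monte Carlo estimate of Phi(lambda) = E[rho^2(U_lambda(y),(0,1))], with
# V^2 ~ chi^2_m and W^2 ~ chi^2_{n-m} independent.
import numpy as np

def intrinsic_risk(lam, n, m, n_draws=200_000, seed=0):
    rng = np.random.default_rng(seed)
    V = np.sqrt(rng.chisquare(m, n_draws))
    W = np.sqrt(rng.chisquare(n - m, n_draws))
    lw = lam * W
    arg = V**2 / (4.0 * n * lw) + 0.5 * (lw + 1.0 / lw)   # cosh(rho / sqrt(2))
    rho = np.sqrt(2.0) * np.arccosh(arg)
    return np.mean(rho**2)

# risk of the MLE member (lambda = 1/sqrt(n)) for an illustrative pair (n, m)
print(intrinsic_risk(1 / np.sqrt(10), n=10, m=3))
```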
Proposition 5.  
There exists a unique minimizer $\lambda_{nm} > 0$ of the Riemannian risk given by $\Phi$.
Proof:
Let us consider the Riemannian risk at $\lambda = e^{s/\sqrt{2}}$ as a function of $s$, that is
$$F(s) = \Phi\!\left( e^{s/\sqrt{2}} \right), \qquad s \in \mathbb{R}$$
The particular selection of $\lambda$, from which $F$ follows, relies on the Riemannian structure of $\Theta$ induced by the information metric. The Riemannian curvature is constant and equal to $-\tfrac{1}{2}$ and, taking into account (10), we have that
$$\gamma(s) = \left( (X^tX)^{-1}X^t y,\;\; e^{s/\sqrt{2}}\left( y^t\left( I_n - X(X^tX)^{-1}X^t \right) y \right)^{1/2} \right)$$
is a geodesic in $\Theta$; precisely, a geodesic parameterized by the arc–length, see [13] for further details.
Then, following [15], the real valued function
$$s \longmapsto \rho^2\!\left( \gamma(s), (0,1) \right)$$
is strictly convex. Since almost-sure convexity of a stochastic process carries over to its mean, the map $F$ is strictly convex as well.
On the other hand, from Fatou's Lemma, $F(s) \to +\infty$ as $s \to +\infty$ or $s \to -\infty$. This, together with the strict convexity of $F$, yields the existence of a unique minimizer $s^{*}$ of the function $F$, which depends on $n$ and $m$.
Finally, since the map $s \mapsto e^{s/\sqrt{2}}$ is strictly monotone, there must exist a unique $\lambda_{nm}$, namely $\lambda_{nm} = e^{s^{*}/\sqrt{2}}$, such that $\Phi(\lambda_{nm}) = \min_{\lambda>0} \Phi(\lambda)$. □
In fact, this result guarantees the uniqueness of the MIRE, although a numerical analysis is required to obtain it explicitly (see the next section). It is therefore useful to develop a simple approximate estimator, referred to hereafter as a-MIRE, obtained by minimizing a convenient upper bound of $\Phi(\lambda)$. Since
$$1 + \frac{1}{2!}\,\frac{\rho^2}{2} \le \cosh\!\left(\frac{\rho}{\sqrt{2}}\right) = \frac{1}{4}\, d_M^2(\theta_1,\theta_2) + \frac{\sigma_1^2 + \sigma_2^2}{2\sigma_1\sigma_2}$$
we shall have
$$0 \le \rho^2\!\left(U_\lambda(y),(0,1)\right) \le \frac{1}{n\lambda}\,\frac{V^2}{W} + 2\left( \sqrt{\lambda W} - \frac{1}{\sqrt{\lambda W}} \right)^2$$
and therefore
$$0 \le \Phi(\lambda) \le H(\lambda) = \frac{1}{\lambda}\,\frac{\Gamma\!\left(\frac{n-m}{2} - \frac{1}{2}\right)}{\sqrt{2}\,\Gamma\!\left(\frac{n-m}{2}\right)}\left( \frac{m}{n} + 2 \right) + 2\sqrt{2}\,\lambda\,\frac{\Gamma\!\left(\frac{n-m}{2} + \frac{1}{2}\right)}{\Gamma\!\left(\frac{n-m}{2}\right)} - 4$$
The upper bound $H(\lambda)$ is clearly a convex function with an absolute minimum attained at
$$\tilde{\lambda}_{nm} = \left( \frac{1 + \frac{m}{2n}}{\,n - m - 1\,} \right)^{1/2}$$
Furthermore, given an arbitrary $m$, we have
$$\lim_{n \to \infty} n\, \tilde{\lambda}_{nm}^{2} = 1$$
and, therefore, the a-MIRE is very close to the MLE for large values of $n$. Observe also that the a-MIRE can be computed for $n > m + 1$, a condition which is slightly stronger than the one required for the existence of the MIRE in Proposition 4.
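The closed form above is straightforward to evaluate; the short sketch below (ours, reusing the intrinsic_risk() Monte Carlo estimator from the previous sketch; the pair $(n,m)$ and the search bounds are illustrative) compares $\tilde{\lambda}_{nm}$ with a numerically minimized value of $\lambda$:

```python
# Closed-form a-MIRE value of (29) versus a numerical minimizer of the
# Monte Carlo estimate of Phi(lambda).
import numpy as np
from scipy.optimize import minimize_scalar

def lambda_tilde(n, m):
    # requires n > m + 1
    return np.sqrt((1.0 + m / (2.0 * n)) / (n - m - 1.0))

n, m = 10, 3
res = minimize_scalar(lambda lam: intrinsic_risk(lam, n, m),
                      bounds=(1e-3, 2.0), method='bounded')
print(lambda_tilde(n, m), res.x)        # a-MIRE value versus numerical MIRE value
print(n * lambda_tilde(n, m)**2)        # the corresponding factor xi_tilde_{nm}
```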
A further aspect is the intrinsic bias of the equivariant estimators. In fact, connections between minimum risk, bias and invariance have been established, see [14]. Since the action of the group $G$ is not commutative, we cannot guarantee the unbiasedness of the MIRE, and an additional analysis must be performed. First of all we are going to compute the bias vector, see [4], a quantitative measure of the bias which is compatible with Lehmann's results.
Let $A_\theta(\lambda) = \exp_\theta^{-1}\!\left(U_\lambda(y)\right)$ and let $A_\theta^{1}(\lambda), \dots, A_\theta^{m}(\lambda), A_\theta^{m+1}(\lambda)$ be the components of $A_\theta(\lambda)$ with respect to the basis field $(\partial/\partial\beta^1)_{\theta}, \dots, (\partial/\partial\beta^m)_{\theta}, (\partial/\partial\sigma)_{\theta}$. In matrix notation, $\boldsymbol{A}_\theta(\lambda) = \left(A_\theta^{1}(\lambda), \dots, A_\theta^{m}(\lambda)\right)^t$. Furthermore, let us define $h(x) \equiv x/\sinh x$ for $x \neq 0$ and $h(0) \equiv 0$; taking into account (16), (19) and from (11) and (15) we have
$$X\boldsymbol{A}_\theta(\lambda) = f_\lambda(V,W)\,\frac{z\,\sigma}{\lambda W}, \qquad A_\theta^{m+1}(\lambda) = f_\lambda(V,W)\, g_\lambda(V,W)\,\sigma$$
where
$$f_\lambda(V,W) \equiv h\!\left( \rho\!\left(U_\lambda(y),\theta\right)/\sqrt{2} \right)$$
and
$$g_\lambda(V,W) \equiv \cosh\!\left( \rho\!\left(U_\lambda(y),\theta\right)/\sqrt{2} \right) - \frac{1}{\lambda W}$$
Let $B_\theta(\lambda) = E_\theta\!\left( \exp_\theta^{-1}\!\left(U_\lambda(y)\right) \right)$ be the intrinsic bias vector corresponding to an equivariant estimator $U_\lambda(y) = \left(U_\lambda^1(y), U_\lambda^2(y)\right)$ evaluated at the point $\theta = (\beta,\sigma)$, and let $B_\theta^{1}(\lambda), \dots, B_\theta^{m}(\lambda), B_\theta^{m+1}(\lambda)$ be its components. In matrix notation, $\boldsymbol{B}_\theta(\lambda) = \left(B_\theta^{1}(\lambda), \dots, B_\theta^{m}(\lambda)\right)^t$. We have
Proposition 6.  
If $n \ge m+1$, the bias vector is finite and
$$\boldsymbol{B}_\theta(\lambda) = 0, \qquad B_\theta^{m+1}(\lambda) = E\!\left[ f_\lambda(V,W)\, g_\lambda(V,W) \right]\sigma$$
where $V^2$ and $W^2$ are independent random variables following chi-square distributions with $m$ and $n-m$ degrees of freedom, respectively.
Moreover, the square of the norm of the bias vector is constant and given by
$$\left\| B_\theta(\lambda) \right\|_\theta^{2} = 2\left( E\!\left[ f_\lambda(V,W)\, g_\lambda(V,W) \right] \right)^{2}$$
Proof: Observe that if $n \ge m+1$ we have
$$\left\| B_\theta(\lambda) \right\|_\theta^{2} = \left\| E_\theta\!\left( \exp_\theta^{-1}\!\left(U_\lambda(y)\right) \right) \right\|_\theta^{2} \le E_\theta\!\left( \left\| \exp_\theta^{-1}\!\left(U_\lambda(y)\right) \right\|_\theta^{2} \right) = E_\theta\!\left( \rho^2\!\left(U_\lambda(y),\theta\right) \right) < \infty$$
where $\|\cdot\|_\theta$ denotes the Riemannian norm in the tangent space at $\theta$.
On the other hand, taking into account (31) and defining $z$ as in (16), observe that $V = \|z\| = \|-z\|$, that $z$ is independent of $W$, and that $z$ and $-z$ have the same distribution. Then we have
$$X\boldsymbol{B}_\theta(\lambda) = E_\theta\!\left[ f_\lambda(V,W)\,\frac{z\,\sigma}{\lambda W} \right] = E_\theta\!\left[ f_\lambda(V,W)\,\frac{(-z)\,\sigma}{\lambda W} \right] = -\,E_\theta\!\left[ f_\lambda(V,W)\,\frac{z\,\sigma}{\lambda W} \right] = 0$$
and, since $\operatorname{rank}(X) = m$, it follows that $\boldsymbol{B}_\theta(\lambda) = 0$.
$B_\theta^{m+1}(\lambda)$ is obtained directly from (31). The distributions of $z$ and $W^2$ follow from basic properties of the multivariate normal distribution. Finally, the norm of the bias vector field follows from (32) and (8). □
We may remark, finally, that the norm of the bias vector field of any equivariant estimator, $\left\| B_\theta(\lambda) \right\|_\theta$, is invariant under the action of the induced group $\bar{G}$ on the parameter space and, since this group acts transitively on $\Theta$, this quantity must be constant, which is clear from (33).
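The nonzero bias component can also be approximated numerically from the same chi-square representation (a minimal Monte Carlo sketch of ours; the number of draws, the seed and the pair $(n,m)$ are illustrative):

```python
# Monte Carlo estimate of B_theta^{m+1}(lambda)/sigma = E[f_lambda(V,W) g_lambda(V,W)].
import numpy as np

def bias_component(lam, n, m, n_draws=200_000, seed=0):
    rng = np.random.default_rng(seed)
    V = np.sqrt(rng.chisquare(m, n_draws))
    W = np.sqrt(rng.chisquare(n - m, n_draws))
    lw = lam * W
    arg = V**2 / (4.0 * n * lw) + 0.5 * (lw + 1.0 / lw)   # cosh(rho / sqrt(2))
    x = np.arccosh(arg)                                   # rho / sqrt(2), > 0 almost surely
    f = np.where(x > 0, x / np.sinh(x), 0.0)              # h(x), with h(0) := 0 as in the text
    g = np.cosh(x) - 1.0 / lw
    return np.mean(f * g)

# component for the MLE member (lambda = 1/sqrt(n)); cf. the sign pattern in Figure 5
print(bias_component(1 / np.sqrt(10), n=10, m=3))
```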

4. Numerical Evaluation

In this section we are going to compare, numerically, the MIRE estimator with the standard MLE. Observe that both estimators only differ in the estimation of the parameter $\sigma$ (or $\sigma^2$). Precisely, the MIRE and the MLE of $\sigma^2$ are, respectively,
$$\hat{\sigma}^2 = \lambda_{nm}^{2}\; y^t\!\left( I_n - X(X^tX)^{-1}X^t \right) y, \qquad \sigma^{*2} = \frac{1}{n}\; y^t\!\left( I_n - X(X^tX)^{-1}X^t \right) y$$
namely, they only differ by the factor $\xi_{nm} = \lambda_{nm}^{2}\, n$, since $\hat{\sigma}^2 = \xi_{nm}\,\sigma^{*2}$.
In order to compare the MLE with the MIRE we have computed the factor $\xi_{nm}$, the intrinsic risk and the square of the norm of the bias vector for each estimator. All the computations have been performed using Mathematica 10.2.
Moreover, we have suggested a rather simple approximation of $\xi_{nm}$, or $\lambda_{nm}$, which allows us to approximate the MIRE through (29), i.e.
$$\tilde{\xi}_{nm} = \left( 1 + \frac{m}{2n} \right) \frac{n}{n - m - 1}$$
The corresponding estimator, referred to hereafter as a-MIRE, has also been compared with the MLE and the MIRE in terms of intrinsic risk.
The results are summarized in the following figures, which condense the tables given in the Appendix; see [16] for a convincing argument for the use of plots over tables. In Figure 1 the numerical results for $\xi_{nm}$ (left) and $\Phi(\lambda_{nm}) = \Phi\!\left(\sqrt{\xi_{nm}/n}\right)$ (right) are displayed graphically. Observe that, for $m$ fixed, as $n$ increases $\Phi(\lambda_{nm})$ goes to zero and $\xi_{nm}$ goes to one. The exact numerical values are given in the Appendix, Table 1.
Figure 1. Numerical results for $\xi_{nm}$ (left) and $\Phi(\lambda_{nm})$ (right), with $\xi_{nm} = \lambda_{nm}^{2}\, n$.
Figure 2. Numerical results for $\xi_{nm}$ (left) and $\Phi(\lambda_{nm})$ (right), with $\xi_{nm} = \lambda_{nm}^{2}\, n$.
Figure 3 shows graphically the percentage of intrinsic risk increment for MLE (left) and a-MIRE (right), that is
$$100\,\frac{\Phi(1/\sqrt{n}) - \Phi(\lambda_{nm})}{\Phi(\lambda_{nm})} \qquad \text{and} \qquad 100\,\frac{\Phi(\tilde{\lambda}_{nm}) - \Phi(\lambda_{nm})}{\Phi(\lambda_{nm})}$$
respectively, for some values of n and m. The exact numerical results are given in the Appendix, Table 2.
Observe that if we approximate the MIRE by the MLE for a given value of $m$, the relative difference of risk decreases as $n$ increases. Notice that these relative differences of risk are rather moderate or small (about 10–15%) only if $n > 10m$. Let us also remark the inappropriate behaviour of the MLE for small values of $n - m$: when we use the MLE instead of the MIRE, the intrinsic risk increment oscillates between 80.5% and 251.1% when $n - m = 1$, with $m$ ranging from 1 to 10. On the other hand, the behaviour of the a-MIRE is reasonably good, with its intrinsic risk very similar to the risk of the MIRE: the percentage of risk increment is less than 1% in all the studied cases, and this percentage decreases as $n$ increases. In fact, this percentage is lower than 1‰ when $n - m \ge 4$, which indicates the extraordinary degree of approximation, the a-MIRE being therefore a reasonable and useful approximation of the MIRE. In particular, as an example, for a two-way analysis of variance with $a$ and $b$ levels for factors A and B respectively, and a single replicate per treatment (so that $n = ab$ and $m = a + b - 1$), we shall have
$$\tilde{\lambda} = \left( \left( 1 + \frac{a+b-1}{2ab} \right) \frac{1}{ab - a - b} \right)^{1/2}$$
with $a > 1$ and $b > 1$, while the corresponding quantity for the MLE is $\lambda^{*} = 1/\sqrt{ab}$.
In particular, if $a = 3$ and $b = 5$ we shall have
$$\tilde{\lambda} = \left( \left( 1 + \frac{7}{30} \right) \frac{1}{7} \right)^{1/2} = 0.419750$$
and $\lambda^{*} = 1/\sqrt{15} = 0.258199$, which is noticeably different. At this point, it may be useful to recall that the quadratic loss and the square of the Riemannian distance behave very differently, see [5].
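A quick numeric check of this example (reusing the lambda_tilde() sketch of Section 3; the two printed values reproduce the figures quoted above):

```python
# Two-way ANOVA example: a = 3, b = 5, hence n = ab = 15 and m = a + b - 1 = 7.
import numpy as np
print(lambda_tilde(15, 7))     # ~0.419750 (a-MIRE)
print(1 / np.sqrt(15))         # ~0.258199 (MLE)
```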
Figure 3. Percentage of intrinsic risk increment for MLE (left) and a-MIRE (right).
Figure 4. Percentage of intrinsic risk increment for MLE (left) and a-MIRE (right).
Figure 5 displays graphically the numerical results for the $(m+1)$-th component of the bias vector divided by $\sigma$, i.e. the unique non-zero physical component of this vector field, for the MIRE (left) and the MLE (right), that is $\frac{1}{\sigma}B_\theta^{m+1}(\lambda_{nm})$ and $\frac{1}{\sigma}B_\theta^{m+1}(1/\sqrt{n})$ respectively, for some values of $n$ and $m$. Observe the sign differences: the MIRE overestimates $\sigma$ on average, while the MLE underestimates this quantity, on average as well. This follows from the equation of the geodesics of the present model (9). The exact numerical values are given in the Appendix, Table 3.
Figure 5. Numerical results for the unique non–null physical component of the bias vector field $\frac{1}{\sigma}B_\theta^{m+1}(\lambda)$, for MIRE (left, with $\lambda = \lambda_{nm}$) and MLE (right, with $\lambda = 1/\sqrt{n}$).
Figure 6. Numerical results for the unique non–null physical component of the bias vector field $\frac{1}{\sigma}B_\theta^{m+1}(\lambda)$, for MIRE (left, with $\lambda = \lambda_{nm}$, positive values) and MLE (right, with $\lambda = 1/\sqrt{n}$, negative values).
Figure 7 shows graphically the numerical results for the percentage of intrinsic risk due to bias for the MIRE and the MLE, that is
$$100\,\frac{\left\| B_\theta(\lambda_{nm}) \right\|_\theta^{2}}{\Phi(\lambda_{nm})} \qquad \text{and} \qquad 100\,\frac{\left\| B_\theta(1/\sqrt{n}) \right\|_\theta^{2}}{\Phi(\lambda_{nm})}$$
respectively, for some values of $n$ and $m$. Observe that the bias is moderate, with respect to the intrinsic risk, for both estimators. The bias of the MIRE is smaller than the bias of the MLE for small values of $m$, and the opposite holds for large values. The exact numerical results are given in the Appendix, Table 4.
Figure 7. Percentage of intrinsic risk due to bias for MIRE (left) and MLE (right), referred to the risk of MIRE.
Figure 8. Percentage of intrinsic risk due to bias for MIRE (left) and MLE (right), referred to the risk of MIRE.
Acknowledgements: We thank the referees and the editor for their comments and suggestions, which improved this paper.

Appendix

[Tables 1–4: exact numerical values of $\xi_{nm}$, $\Phi(\lambda_{nm})$, the percentage risk increments, and the bias components referenced in Section 4.]

References

  1. Rao, C. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91. [Google Scholar]
  2. Burbea, J.; Rao, C. Entropy differential metric, distance and divergence measures in probability spaces: a unified approach. J. Multivar. Anal. 1982, 12, 575–596. [Google Scholar] [CrossRef]
  3. Burbea, J. Informative geometry of probability spaces. Expo. Math. 1986, 4, 347–378. [Google Scholar]
  4. Oller, J.; Corcuera, J. Intrinsic Analysis of the Statistical Estimation. Ann. Stat. 1995, 23, 1562–1581. [Google Scholar] [CrossRef]
  5. García, G.; Oller, J. What does intrinsic mean in statistical estimation? Sort 2006, 2, 125–146. [Google Scholar]
  6. Bernal-Casas, D.; Oller, J. Variational Information Principles to Unveil Physical Laws. Mathematics 2024, 12, 3941. [Google Scholar] [CrossRef]
  7. Muirhead, R. Aspects of Multivariate Statistical Theory; John Wiley & Sons Inc.: New York, NY, USA, 1982. [Google Scholar]
  8. Bernardo, J.; Juárez, M. Intrinsic estimation. In Bayesian Statistics 7; Bernardo, J., Bayarri, M., Berger, J., Dawid, A., Heckerman, D., Smith, A., West, M., Eds.; Oxford University Press: Oxford, UK, 2003; pp. 465–476. [Google Scholar]
  9. Lehmann, E.; Casella, G. Theory of Point Estimation; Springer Science & Business Media: New York, NY, USA, 2006. [Google Scholar]
  10. Amari, S.-i. Information Geometry and Its Applications; Springer: Tokyo, 2016. [Google Scholar]
  11. Ay, N.; Jost, J.; Lê, H.V.; Schwachhöfer, L. Information Geometry; Springer: Cham, 2017. [Google Scholar]
  12. Nielsen, F. An Elementary Introduction to Information Geometry. Entropy 2020, 22, 1100. [Google Scholar] [CrossRef] [PubMed]
  13. Burbea, J.; Oller, J. The information metric for univariate linear elliptic models. Stat. Decis. 1988, 6, 209–221. [Google Scholar] [CrossRef]
  14. Lehmann, E. A general concept of unbiasedness. Ann. Math. Stat. 1951, 22, 587–592. [Google Scholar] [CrossRef]
  15. Karcher, H. Riemannian Center of Mass and Mollifier Smoothing. Commun. Pure Appl. Math. 1977, 30, 509–541. [Google Scholar] [CrossRef]
  16. Gelman, A.; Pasarica, C.; Dodhia, R. Let’s practice what we preach: turning tables into graphs. Am. Stat. 2002, 56, 121–130. [Google Scholar] [CrossRef]