Preprint
Article

This version is not peer-reviewed.

On Least Squares Approximations for Shapley Values and Applications to Interpretable Machine Learning

Submitted:

29 January 2026

Posted:

29 January 2026


Abstract
The Shapley value stands as the predominant point-valued solution concept in cooperative game theory and has recently become a foundational method in interpretable machine learning. In that domain, a prevailing strategy to circumvent the computational intractability of exact Shapley values is to approximate them by reframing their computation as a weighted least squares optimization problem. We investigate an algorithmic framework by Benati et al. (2019), discuss its feasibility for feature attribution and explore a set of methodological and theoretical refinements, including an approach for sample reuse across strata and a relation to Unbiased KernelSHAP. We conclude with an empirical evaluation of the presented algorithms, assessing their performance on several cooperative games including practical problems from interpretable machine learning.

1. Introduction

The original KernelSHAP paper by Lundberg and Lee [1] from 2017 was a landmark success for interpretable machine learning. Its primary achievement was making the Shapley value [2] — a theoretically optimal but computationally intractable [3,4,5] concept from cooperative game theory [6] — practical for explaining machine learning models. Lundberg and Lee [1] discovered that the exact Shapley values for a prediction could be recovered as the solution to a specially weighted least squares regression problem. By sampling a manageable number of coalitional inputs and solving this approximate regression, KernelSHAP provided a single, consistent metric for feature importance. The accompanying open-source library shap [1] fueled its widespread use, making it a predominant standard for model interpretation, particularly in the model-agnostic setting.
The follow-up work on Unbiased KernelSHAP by Covert and Lee [7] represents a success in methodological rigor. It addresses the problem that the original KernelSHAP algorithm has not been proven to be unbiased and does not provide uncertainty estimates. While not as publicly visible as the initial breakthrough, this refinement solidified SHAP’s foundation as a robust, statistically sound tool.
According to Lundberg and Lee [1], KernelSHAP was developed without knowledge of prior articles on cooperative game theory like Charnes et al. [8] and Ruiz et al. [9], who decades earlier had already derived the precise weighting scheme that makes a least-squares regression yield the Shapley value when complete data is available. This outlines a fascinating case of parallel innovation across disciplines. Cooperative game theory, with the Shapley value [2] as its most important solution concept, had long been a mature and applied field providing rigorous solutions to real-world problems of fair division and coalitional analysis, from allocating airport landing fees among airlines based on runway use [10] to measuring the voting power of political blocs in a legislature [11,12,13]. Yet, the field of machine learning, increasingly reliant on inscrutable black-box models, had not seen its own predictions as precisely such a system — where input features act as cooperating agents whose joint effort produces a model’s output. Lundberg and Lee’s pivotal contribution [1] was to bridge this conceptual gap. They recognized that the abstract "game" defined by a model’s prediction function was a natural domain for Shapley’s theory. By doing so, they inadvertently reinvented a specific computational tool — the weighted least-squares approximation — from the game theory literature, but with a transformative new purpose: not to analyze economic coalitions, but to explain the inner logic of artificial intelligence.
Whereas KernelSHAP [1] and its unbiased variant [7] are very widely used, the methods from Benati et al. [14] — which are equally based on the least squares formulation of the Shapley value — have received comparatively little attention. This article represents, to our knowledge, their first application in explainable artificial intelligence. We adapt the ideas from Benati et al. [14] as general TU game approximation algorithms, terming their weighted sampling strategy LSS (Least Squares Sampling). Regarding their proposal for stratification, we differentiate S-LSS, an algorithm without sample reuse across strata, from SRS-LSS, an algorithm with sample reuse across strata suggested in Benati et al. [14].
The objective of this paper is neither to introduce a novel Shapley value estimator nor to perform a broad comparison of approximation algorithms. Instead, the contribution of this work is a detailed structural, theoretical, and algorithmic analysis of the methods LSS, S-LSS and SRS-LSS from Benati et al. [14]. We compare the variances of the LSS and S-LSS estimators in detail. We point out how sample reuse across strata may introduce non-zero covariance terms between strata for SRS-LSS. We prove that LSS and UKS approximate the same underlying problem, thereby only differing in their respective sampling strategies. Finally, we test how the unbiased Shapley estimators LSS, S-LSS and SRS-LSS perform in comparison to KernelSHAP and UKS for both classical cooperative games and real-world applications from interpretable machine learning.
The remainder of this article is structured as follows. In Section 2, we summarize basic ideas from cooperative game theory, including linear solution concepts and the Shapley value, and introduce the BShap (Baseline Shapley) model for interpretable machine learning. Section 3 briefly reviews Monte Carlo methods, along with stratified sampling and importance sampling for variance reduction. A summary of results on approximating linear solution concepts by means of importance sampling on the coalition space from our previous work [15] is presented in Section 4. The central developments of this paper are detailed in Section 5. We formally introduce the LSS, S-LSS and SRS-LSS estimators, compare variances of LSS and S-LSS, investigate the emergence of non-zero covariance terms between strata for SRS-LSS and clarify the similarities and differences between LSS and UKS. The empirical performance of the algorithms introduced in Section 5 is analyzed in Section 6 using two types of cooperative games and three real-world explainability scenarios, thereby numerically substantiating our previous analytical claims. The paper concludes in Section 7 with a summary and recommendations.

2. Preliminaries on Cooperative Game Theory, the Shapley Value and the BShap Model for Interpretable Machine Learning

Following the expositions in Chakravarty et al. [6], Peters [16], and Benati et al. [14], this section offers a concise introduction to transferable utility (TU) cooperative games and point-valued solution concepts such as the Shapley value. Additionally, we briefly describe two representative TU games — the airport game and the weighted voting game — and the model from interpretable machine learning which we utilize in our analysis and our numerical experiments in Section 6.

2.1. Transferable Utility Games and Their Characteristic Functions

We investigate transferable utility games (TU games), cooperative games where a coalition’s earnings are expressed as a scalar [6,16]. This implies that the utility can be distributed without restriction among the players.
Following [6,16], a TU game is formally defined as a pair $(N, v)$. Here, $N = \{1, \dots, n\}$ is the player set and $v : 2^N \to \mathbb{R}$ is the characteristic function, which assigns a real value to each coalition $S \subseteq N$ representing its total utility from cooperation. We adopt the standard normalization $v(\emptyset) = 0$.
It is frequently convenient to represent a coalition S by an indicator vector $z \in \{0,1\}^n$. We define $z(S) = [\mathbb{1}_{i \in S}]_{i=1}^{n}$ and its inverse $S(z) = \{i \in N \mid z_i = 1\}$. This mapping allows functions and operators defined on $2^N$ or $\{0,1\}^n$ to be used interchangeably; for instance, $v(S)$ and $v(z)$ denote the same value, and $|z| = |S(z)|$.

2.2. Linear Solution Concepts for TU Games

Let $\mathcal{G}(N)$ denote the set of all TU games on player set N. A point-valued solution concept is a function $\alpha : \mathcal{G}(N) \to \mathbb{R}^n$ that returns a vector $\alpha(N,v) \in \mathbb{R}^n$ for each game $(N,v) \in \mathcal{G}(N)$. Each entry $\alpha_i$ quantifies the value assigned to player i. These concepts are employed to evaluate individual influence or to distribute a coalition's payoff $v(S)$ among its members $S \subseteq N$.
A key property of a point-valued solution concept α is linearity [14]. This requires that α can be written as
$$\alpha(N,v) = \sum_{S \subseteq N} w(S) \odot \nu(S,v),$$
with $\odot$ denoting the Hadamard product of vectors, $w(S) = [w_1(S), \dots, w_n(S)]$ standing for weights depending only on S and the player, and $\nu(S,v) = [\nu_1(S,v), \dots, \nu_n(S,v)]$ denoting values depending upon S, v, and the player.

2.3. The Shapley Value

The Shapley value [2] represents the most prominent solution concept in cooperative game theory. In recent years, it has been extensively adopted in machine learning and explainable artificial intelligence [1,17,18,19]. Formally, for any player $i \in N$, the Shapley value is the expected marginal contribution $\Delta_i(S,v) = v(S \cup \{i\}) - v(S)$, taken over all sets S of players that precede i in a uniformly random permutation $\pi$ of N, i.e.,
$$\phi_i(N,v) = \mathbb{E}_{\pi \sim U(\Pi(N))}\!\left[ \Delta_i\big(\mathrm{Pre}^i(\pi), v\big) \right] = \frac{1}{n!} \sum_{\pi \in \Pi(N)} \Delta_i\big(\mathrm{Pre}^i(\pi), v\big) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\, \Delta_i(S,v),$$
where U stands for a uniform distribution, $\Pi(N)$ denotes the set of all permutations of N, and $\mathrm{Pre}^i(\pi)$ gives the set of players preceding i in a permutation $\pi \in \Pi(N)$.
Let $\phi = [\phi_1, \dots, \phi_n]$ be the vector of Shapley values, and let $\hat\phi$ represent any approximation, with $\hat\phi_i$ its i-th entry. Where no ambiguity arises, we use $\phi_i$ to mean $\phi_i(N,v)$.
It follows directly from the coalition-based representation above that the Shapley value belongs to the class of linear solution concepts introduced in SubSection 2.2: the values are given by $\nu_i(S,v) = \Delta_i(S,v)$ and the weights by
$$w_i(S) = \begin{cases} \dfrac{|S|!\,(n-|S|-1)!}{n!} & \text{if } i \notin S \\[1mm] 0 & \text{if } i \in S. \end{cases}$$
Note that the Shapley value exhibits symmetry, i.e., two players that contribute equally to each coalition have the same Shapley value, meaning there holds
$$\text{if } v(S \cup \{i\}) = v(S \cup \{j\}) \text{ for all } S \subseteq N \setminus \{i,j\}, \text{ then } \phi_i(N,v) = \phi_j(N,v),$$
for any game $(N,v)$ and any players $i, j \in N$ with $i \neq j$. It also exhibits efficiency, i.e., for all games $(N,v)$ there holds $\sum_{i \in N} \phi_i(N,v) = v(N)$.
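For small player numbers, the coalition-based formula above can be evaluated directly. The following minimal Python sketch (the function name `exact_shapley` is ours, not from the cited works) enumerates all coalitions and illustrates the efficiency property numerically:

```python
from itertools import combinations
from math import factorial

def exact_shapley(n, v):
    """Exact Shapley values via the coalition-based formula.

    v maps a frozenset S of players in {0, ..., n-1} to a real number
    with v(empty set) = 0. Requires O(2^n) evaluations of v, so this
    is only feasible for small n.
    """
    players = range(n)
    phi = [0.0] * n
    for i in players:
        others = [j for j in players if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi
```

For the symmetric 3-player majority game $v(S) = 1$ iff $|S| \geq 2$, all players receive $\phi_i = 1/3$, and the values sum to $v(N) = 1$, in line with symmetry and efficiency.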

2.4. Airport Games

Airport games, proposed by Littlechild and Thompson [10], address the problem of distributing runway construction costs among players whose aircraft require different runway lengths. Costs are encoded in a vector $c = [c_1, \dots, c_n]$, where $c_i$ corresponds to player i. The characteristic function is
$$v(S) = \max_{i \in S} c_i.$$
Classified as line-graph maintenance problems, these games have closed-form Shapley values [20], making them ideal benchmark instances for the numerical studies in Section 6.
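As a concrete illustration, the following sketch (function names ours) implements the airport game's characteristic function together with its well-known closed-form Shapley values, in which each cost increment is split equally among all players whose aircraft use it:

```python
def airport_v(costs):
    """Characteristic function v(S) = max_{i in S} c_i of an airport game."""
    def v(S):
        return max((costs[i] for i in S), default=0.0)
    return v

def airport_shapley(costs):
    """Closed-form Shapley values of the airport game: sort costs
    ascending; each cost increment is shared equally by all players
    that still need it (the remaining players in the sorted order)."""
    n = len(costs)
    order = sorted(range(n), key=lambda i: costs[i])
    phi = [0.0] * n
    acc, prev = 0.0, 0.0
    for rank, i in enumerate(order):
        acc += (costs[i] - prev) / (n - rank)  # increment shared by n - rank players
        prev = costs[i]
        phi[i] = acc
    return phi
```

For $c = [1, 2]$ this yields $\phi = [0.5, 1.5]$: the shared first unit of runway is split equally, while the second unit is paid entirely by the player who needs it, and the allocations sum to $v(N) = 2$.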

2.5. Weighted Voting Games

Weighted voting games [6,16,21] represent voting bodies in which players possess different voting weights (e.g., parliamentary seats), collected in $c = [c_1, \dots, c_n]$. A coalition S succeeds when $\sum_{i \in S} c_i \geq C$ for a predetermined quota C. This yields a characteristic function defined by
$$v(S) = \begin{cases} 0 & \text{if } \sum_{i \in S} c_i < C \\ 1 & \text{if } \sum_{i \in S} c_i \geq C. \end{cases}$$
In our theoretical and numerical investigations in Section 5 and Section 6, we frequently use a family of weighted voting games with n = 7 players and
$$c = [1, 2, 3, 1, 1, 1, 1], \qquad C \in \{1, \dots, 10\}.$$
There exist fast methods for computing point-valued solutions of weighted voting games for large player numbers, for instance dynamic programming algorithms [22,23]. Hence these games serve as excellent test games for our experiments in Section 6.
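To make the setup concrete, the following sketch (names ours) defines the characteristic function of a weighted voting game and computes exact Shapley values for the n = 7 family above by direct enumeration, which is feasible here since $2^7 = 128$:

```python
from itertools import combinations
from math import factorial

def voting_v(weights, quota):
    """Characteristic function of a weighted voting game:
    v(S) = 1 iff the coalition's total weight reaches the quota."""
    return lambda S: 1.0 if sum(weights[i] for i in S) >= quota else 0.0

def shapley_by_enumeration(n, v):
    """Exact Shapley values by summing weighted marginal contributions
    over all coalitions (O(2^n), fine for the n = 7 family used here)."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            for S in combinations(others, r):
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# The family above: weights c and quotas C in {1, ..., 10}.
c = [1, 2, 3, 1, 1, 1, 1]
for C in range(1, 11):
    phi = shapley_by_enumeration(7, voting_v(c, C))
    assert abs(sum(phi) - 1.0) < 1e-9  # efficiency: v(N) = 1 for all these quotas
```

For C = 1 every player is pivotal only as the first mover, so all Shapley values equal 1/7; for intermediate quotas the weight-1 players remain symmetric, as required by the symmetry property.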

2.6. Baseline Shapley (BShap) for Interpretable Machine Learning

Our numerical experiments in Section 6 apply the Shapley value framework to attribute a model’s prediction to its input features. Here, the prediction task is interpreted as a cooperative game with features as players. Given a model o and an input x , the characteristic function v ( S ) specifies the value of a feature subset S.
We adopt Baseline Shapley (BShap), a method originally suggested in [1] and formalized by Sundararajan and Najmi [24]. The value $v(S)$ is defined deterministically relative to a single baseline vector z, where features not in S are set to their baseline counterparts:
$$v_{\mathrm{BShap}}(S) = o\big([x_S, z_{\bar S}]\big) - o(z)$$
with $\bar S$ denoting the set of features not in S. The subtrahend $o(z)$ in (8) ensures the normalization $v_{\mathrm{BShap}}(\emptyset) = 0$ as introduced in SubSection 2.1. Note that this normalization would not be needed for the algorithms based on marginal contributions investigated in our article [15], but becomes crucial for the algorithms based on least squares formulations of the Shapley value discussed in Section 5.
Our specific baseline: In the applications in Section 6, the baseline z is taken as the expected feature vector over the training distribution:
$$z = \bar z = \mathbb{E}_{z \sim D}[z],$$
with D representing the training data distribution. This anchors Shapley values to the prediction o ( z ¯ ) of an average sample, providing a natural reference point. The resulting attributions thus explain the deviation of o ( x ) from this baseline.
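A minimal sketch of the resulting characteristic function is shown below; the linear model and data are purely illustrative, and the helper name `bshap_value_fn` is ours:

```python
import numpy as np

def bshap_value_fn(model, x, baseline):
    """BShap characteristic function: features in S take their values
    from the explained input x, all others stay at the baseline;
    subtracting model(baseline) normalizes v(empty set) to 0."""
    def v(S):
        z = baseline.copy()
        idx = list(S)
        z[idx] = x[idx]
        return float(model(z) - model(baseline))
    return v

# Toy example with a hypothetical linear model o(z) = w . z:
w = np.array([1.0, -2.0, 0.5])
model = lambda z: float(w @ z)
X_train = np.array([[0.0, 1.0, 2.0], [2.0, 3.0, 0.0]])
baseline = X_train.mean(axis=0)        # mean baseline, as in the text
x = np.array([1.0, 0.0, 4.0])
v = bshap_value_fn(model, x, baseline)
```

For a linear model this v is additive, $v(S) = \sum_{i \in S} w_i (x_i - \bar z_i)$, so the Shapley value of feature i is exactly $w_i (x_i - \bar z_i)$, which makes linear models convenient sanity checks.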

3. Preliminaries on Monte Carlo Methods for Estimation

In this section, we first introduce the use of Monte Carlo methods for estimating expectations together with two techniques for variance reduction. We draw mainly from the short overviews provided by Rubinstein and Kroese [25] and Botev and Ridder [26] as well as from the textbook by Rubinstein and Kroese [27].
Let X be a d-dimensional discrete (or continuous) random variable with a sample space $\mathcal{X} \subseteq \mathbb{R}^d$ and a probability mass function $q_X$ (or probability density function, respectively), which may be known or unknown. Denote a specific realization of X by $x \in \mathcal{X}$. For a function $H : \mathcal{X} \to \mathbb{R}$, we want to estimate the expectation
$$\mu = \mathbb{E}[H(X)].$$
This expectation is frequently intractable in closed form, either because q X is unknown (and only i.i.d. samples are given) or because the defining sum or integral is computationally infeasible. Monte Carlo methods circumvent this analytical hurdle through simulation.
We begin with Crude Monte Carlo as the foundational method, then present importance sampling and stratified sampling for variance reduction which we examine in this paper. A finite variance of H ( X ) is assumed throughout this manuscript.

3.1. Crude Monte Carlo Method

Crude Monte Carlo proceeds by drawing an i.i.d. sample $X_1, \dots, X_\tau \overset{\text{iid}}{\sim} q_X$ and averaging the $H(X_k)$ values. This yields the standard estimator [26]:
$$\hat\mu = \frac{1}{\tau} \sum_{k=1}^{\tau} H(X_k).$$
According to [26], the estimator is unbiased, i.e., $\mathbb{E}[\hat\mu] = \mu$, and its variance is
$$\mathrm{Var}[\hat\mu] = \frac{1}{\tau}\, \mathrm{Var}[H(X)].$$

3.2. Stratified Sampling for Variance Reduction

Stratified sampling is a well-established Monte Carlo variance reduction technique [26]. It separates the sample space $\mathcal{X}$ into $\ell$ disjoint strata $\{\mathcal{X}_1, \dots, \mathcal{X}_\ell\}$ such that $\bigcup_{l=1}^{\ell} \mathcal{X}_l = \mathcal{X}$ with $\mathcal{X}_l \cap \mathcal{X}_{l'} = \emptyset$ for $l \neq l'$. Suppose L is a discrete random variable taking values in $\{1, \dots, \ell\}$ with known probabilities $p_L(l) = P(L = l) = P(X \in \mathcal{X}_l)$. We can rewrite $\mu$ as
$$\mu = \mathbb{E}_{p_L}\big[ \mathbb{E}_{q_X}[H(X) \mid X \in \mathcal{X}_L] \big] = \sum_{l=1}^{\ell} p_L(l)\, \mathbb{E}[H(X) \mid X \in \mathcal{X}_l] = \sum_{l=1}^{\ell} p_L(l)\, \mu_l,$$
with $\mu_l = \mathbb{E}[H(X) \mid X \in \mathcal{X}_l]$ standing for the expectation over the conditional probability distribution of X given that $X \in \mathcal{X}_l$. Our stratified estimator of $\mu$ becomes
$$\hat\mu_{st} = \sum_{l=1}^{\ell} p_L(l)\, \hat\mu_l,$$
where $\hat\mu_l$ denotes the estimate of $\mu_l$ in stratum $\mathcal{X}_l$, i.e.,
$$\hat\mu_l = \frac{1}{\tau_l} \sum_{k=1}^{\tau_l} H(X_{l,k}),$$
with the sample $X_{l,1}, \dots, X_{l,\tau_l}$ of size $\tau_l$ being drawn i.i.d. from the conditional probability distribution of X given that $X \in \mathcal{X}_l$. To ensure fair comparisons to other Monte Carlo techniques, we always assume $\sum_{l=1}^{\ell} \tau_l = \tau$ (up to rounding errors), where $\tau$ stands for the overall sample budget. The estimator is unbiased [26], i.e.,
$$\mathbb{E}[\hat\mu_{st}] = \mu.$$
Assuming that the sample sizes per stratum are proportionally assigned, i.e., $\tau_l = p_L(l)\, \tau$, the variance of the crude Monte Carlo estimator (11) has been established as an upper bound [26], i.e.,
$$\mathrm{Var}[\hat\mu_{st}] \leq \mathrm{Var}[\hat\mu],$$
meaning that the stratified estimator μ ^ s t never performs worse than its crude counterpart μ ^ .
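A small numerical sketch (function names ours) illustrates the construction; choosing degenerate strata, in which H is constant, makes the stratified estimator exact, which is the extreme case of the variance reduction discussed above:

```python
import random

def crude_mc(values, probs, tau, rng):
    """Crude Monte Carlo estimate of E[H(X)] for a discrete X,
    with H already folded into the listed values."""
    draws = rng.choices(values, weights=probs, k=tau)
    return sum(draws) / tau

def stratified_mc(strata, tau, rng):
    """Stratified estimate: strata is a list of (p_l, values_l, probs_l),
    one entry per stratum; sample sizes are allocated proportionally."""
    est = 0.0
    for p_l, vals, probs in strata:
        tau_l = max(1, round(p_l * tau))
        draws = rng.choices(vals, weights=probs, k=tau_l)
        est += p_l * sum(draws) / tau_l
    return est

# Two equally likely strata with degenerate conditional distributions:
# the stratified estimator then returns mu = 0.5*2 + 0.5*5 = 3.5 exactly,
# for any sample size, while the crude estimator would fluctuate.
strata = [(0.5, [2.0], [1.0]), (0.5, [5.0], [1.0])]
rng = random.Random(0)
est = stratified_mc(strata, 10, rng)
```

Here all within-stratum variance is zero, so the entire variance of the crude estimator is removed by stratification.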

3.3. Importance Sampling for Variance Reduction

Importance sampling introduces another probability mass (or density) function $p_X$, such that $p_X(x) = 0 \Rightarrow H(x)\, q_X(x) = 0$. This allows (9) to be rewritten as
$$\mu = \mathbb{E}_{p_X}\!\left[ \frac{q_X(X)}{p_X(X)}\, H(X) \right].$$
A Monte Carlo approximation based on (13) is
$$\hat\mu = \frac{1}{\tau} \sum_{k=1}^{\tau} \frac{q_X(X_k)}{p_X(X_k)}\, H(X_k),$$
where $X_1, \dots, X_\tau \overset{\text{iid}}{\sim} p_X$. As shown in [25], this estimator is unbiased, i.e., $\mathbb{E}_{p_X}[\hat\mu] = \mu$, and its variance is analogous to the crude Monte Carlo case:
$$\mathrm{Var}_{p_X}[\hat\mu] = \frac{1}{\tau}\, \mathrm{Var}_{p_X}\!\left[ \frac{q_X(X)}{p_X(X)}\, H(X) \right].$$
With a properly selected $p_X$, the variance of the importance sampling estimator is less than or equal to that of the crude Monte Carlo estimator. This follows from comparing (11) and (15), yielding
$$\frac{1}{\tau}\, \mathrm{Var}_{q_X}[H(X)] \geq \min_{p_X} \frac{1}{\tau}\, \mathrm{Var}_{p_X}\!\left[ \frac{q_X(X)}{p_X(X)}\, H(X) \right],$$
which holds because p X = q X can always be chosen as the trivial case. We apply importance sampling in the context of TU games in Section 4.
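The gain from a good proposal can be made tangible: for nonnegative H, the proposal $p_X \propto H\, q_X$ yields zero variance, since every reweighted draw equals $\mu$ exactly. The following sketch (names ours) demonstrates this on a four-point support:

```python
import random

def importance_sampling(H, q, p, support, tau, rng):
    """Importance sampling estimate of E_q[H(X)] on a discrete support:
    draw X_k ~ p and average the reweighted terms (q/p) * H."""
    xs = rng.choices(support, weights=[p(x) for x in support], k=tau)
    return sum(q(x) / p(x) * H(x) for x in xs) / tau

H = lambda x: float(x)
q = lambda x: 0.25                    # X uniform on {0, 1, 2, 3}
mu = 1.5                              # exact value of E_q[H(X)]
p_opt = lambda x: H(x) * q(x) / mu    # proposal proportional to H * q
rng = random.Random(0)
est = importance_sampling(H, q, p_opt, [0, 1, 2, 3], 50, rng)
```

With this proposal, each drawn x contributes $q(x)H(x)/p_{\mathrm{opt}}(x) = \mu$, so the estimate equals 1.5 exactly regardless of the sample; in practice the optimal proposal is unknown (it requires $\mu$), but proposals that roughly track $|H|\, q_X$ already reduce variance.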

4. Approximating Linear Solution Concepts via Importance Sampling

The exact calculation of a linear solution concept, i.e., (1), for a TU game normally requires summing up terms whose number grows exponentially in the number of players n. Therefore, approximation algorithms are needed to estimate these values in real-world situations, especially in the context of applications in interpretable machine learning [1,17,18,19] where one can typically not exploit any special structure of the underlying game for computing Shapley values exactly. In this section, we briefly summarize some ideas from [14] and the general framework for importance sampling on the coalition space for linear solution concepts which we recently proposed in [15] and will apply in Section 5.
Benati et al. [14] consider a uniform sampling strategy on the coalition space for approximating linear solution concepts, i.e., they simply adopt the uniform distribution $q = U(2^N)$. We regard this as the crude Monte Carlo method on the coalition space. Although sampling subsets uniformly is both straightforward and unbiased, it is not an optimal, or even recommendable, strategy for all linear solution concepts or problem settings. For example, in the coalition-based formula for the Shapley value, small and large coalitions obtain larger weights and thus exert a strong influence on the estimator. Under uniform sampling of the coalition space, however, only few such coalitions may be drawn, resulting in a less accurate estimator.
To enable more efficient sampling, we apply the importance sampling technique from SubSection 3.3. By appropriately reweighting the estimator, importance sampling allows us to draw samples from a non-uniform, user-defined distribution while maintaining an unbiased estimate of the underlying linear solution concept.
Theorem 1.
[15] For all $i \in N$, let
$$p_i : 2^N \to [0,1] \quad \text{with} \quad p_i(S) = 0 \Rightarrow w_i(S)\, \nu_i(S) = 0$$
be a probability distribution and $\mathcal{S}_i = [S_{i,1}, \dots, S_{i,\tau}]$ be a sample of size $\tau$ generated by sampling with replacement according to $p_i$. Then,
$$\hat\alpha(N,v) = [\hat\alpha_i(N,v)]_{i \in N} = \left[ \frac{1}{\tau} \sum_{S \in \mathcal{S}_i} \frac{w_i(S)}{p_i(S)}\, \nu_i(S,v) \right]_{i \in N}$$
is an importance sampling estimator of the linear solution concept α ( N , v ) .
The following proposition connects our importance sampling estimator from Theorem 1 established in Pollmann and Staudacher [15] to a finding from Benati et al. [14], p. 95.
Proposition 1.
The importance sampling estimator from Theorem 1 has the following properties:
(a)
It is unbiased, i.e.,
$$\mathbb{E}[\hat\alpha_i(N,v)] = \alpha_i(N,v) \quad (i \in N).$$
(b)
Its variance is given by
$$\mathrm{Var}[\hat\alpha_i(N,v)] = \frac{1}{\tau} \left( \sum_{S \subseteq N} \frac{w_i(S)^2}{p_i(S)}\, \nu_i(S,v)^2 - \alpha_i(N,v)^2 \right) \quad (i \in N).$$
(c)
It is consistent in probability, i.e.,
$$\hat\alpha_i(N,v) \xrightarrow[\tau \to \infty]{P} \alpha_i(N,v) \quad (i \in N).$$
We note that Theorem 1 and Proposition 1 subsume both the crude Monte Carlo method (setting $p_i(S) = q(S) = 2^{-n}$) and stratified sampling. The latter is covered because any stratum estimator for a linear solution concept can be written in the form of (16), allowing direct application of these results.

5. Least Squares Approaches for Approximating Shapley Values

This section covers the idea of approaching the approximation of Shapley values as a least squares problem. First, we give an introduction to the family of least square values in SubSection 5.1, followed by an overview of Shapley value approximation algorithms in this setting in SubSection 5.2. In SubSection 5.3, SubSection 5.4, and SubSection 5.5, we compare these algorithms and put them into the overall context.

5.1. The Family of Least Square Values

The family of least square values was proposed by Ruiz et al. [9] and comprises a generalization of the formulation of the Shapley value as a least squares problem, which was already proposed earlier by Charnes et al. [8].
Essentially, this family consists of all solution concepts that can be obtained as minimizers of a weighted least squares problem, where the weights determine the relative importance of different coalition sizes. These weights are given by a function $m : \{0, \dots, n\} \to \mathbb{R}_{\geq 0}$ that assigns a weight to each coalition size $s \in \{0, \dots, n\}$, with $m(s) > 0$ for at least one s. By definition, m is symmetric in the sense that it assigns the same weight to all coalitions of the same cardinality, which is necessary to retrieve symmetric solutions like the Shapley value. We denote by $y \in \mathbb{R}^n$ a payoff vector for a given game, with $y_i$ being the payoff for player i. In the following, we use the shorthand notation $y(S) = \sum_{i \in S} y_i$. Recall that a payoff vector is called efficient if $y(N) = v(N)$.
With these definitions established, any solution concept belonging to the family of least square values can be expressed as the optimizer of the following weighted least squares problem:
$$\min_{y \in \mathbb{R}^n} \sum_{S \subseteq N} m(|S|) \big( v(S) - y(S) \big)^2 \quad \text{s.t.} \quad y(N) = v(N),$$
with m being the weight function that is specific to the particular solution concept.
The closed-form solution — again, for a fixed m — to problem (20) is
$$y_i = \frac{v(N)}{n} + \frac{1}{\gamma n} \left( n\, a_i - \sum_{j \in N} a_j \right) \quad (i \in N),$$
with
$$a_j = \sum_{\substack{S \subseteq N \\ j \in S}} m(|S|)\, v(S) \quad \text{and} \quad \gamma = \sum_{s=1}^{n-1} m(s) \binom{n-2}{s-1}.$$
These expressions illustrate how the resulting allocation depends on the weight function m.

Least square values as linear solution concepts

Let us briefly point out why all members of the family of least square values are linear solution concepts in the sense of the definition from SubSection 2.2. Benati et al. [14] showed that (21) can be rewritten as
$$y = \mathbf{1}\, \frac{v(N)}{n} + \sum_{\substack{S \subseteq N \\ 0 < |S| < n}} w(S)\, v(S),$$
where 1 R n is the all-ones vector and the individual elements of w ( S ) are given by
$$w_i(S) = \begin{cases} \dfrac{(n-|S|)\, m(|S|)}{n\, \gamma} & \text{if } i \in S \\[2mm] -\dfrac{|S|\, m(|S|)}{n\, \gamma} & \text{if } i \notin S. \end{cases}$$
From (23), it follows directly that this representation satisfies the definition of linear solution concepts provided in SubSection 2.2.

The Shapley value in the context of least square values

The family of least square values was introduced by Ruiz et al. [9] after Charnes et al. [8] had demonstrated that the Shapley value can be expressed as a least squares optimization problem. In detail, the vector of Shapley values ϕ is the solution to problem (20) when m is defined as
$$m(s) = \frac{1}{n-1} \binom{n-2}{s-1}^{-1},$$
and therefore, via (22), one obtains
$$\gamma = \sum_{s=1}^{n-1} m(s) \binom{n-2}{s-1} = \sum_{s=1}^{n-1} \frac{1}{n-1} \binom{n-2}{s-1}^{-1} \binom{n-2}{s-1} = 1.$$
Thus, by inserting (26) into (21) and reformulating the result, one obtains the closed-form solution for the Shapley value as
$$\phi_i = \frac{v(N)}{n} + \frac{n-1}{n}\, a_i - \frac{1}{n} \sum_{\substack{j \in N \\ j \neq i}} a_j \quad (i \in N),$$
with a j given by (22) for all j N . We illustrate the computation of the Shapley value as a least squares optimization problem in Figure 1.
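As a sanity check, the closed form above can be verified numerically against the permutation definition of the Shapley value. The following sketch (function names ours) does so for a small game:

```python
from itertools import combinations, permutations
from math import comb

def shapley_via_least_squares(n, v):
    """Closed form: phi_i = v(N)/n + (n-1)/n * a_i - (1/n) * sum_{j != i} a_j,
    with a_j = sum_{S: j in S} m(|S|) v(S) and m(s) = 1/((n-1) C(n-2, s-1)).
    The coalition S = N is omitted: its coefficient cancels in the formula."""
    m = lambda s: 1.0 / ((n - 1) * comb(n - 2, s - 1))
    a = [0.0] * n
    for s in range(1, n):                   # proper nonempty coalitions
        for S in combinations(range(n), s):
            for j in S:
                a[j] += m(s) * v(set(S))
    vN = v(set(range(n)))
    return [vN / n + (n - 1) / n * a[i]
            - sum(a[j] for j in range(n) if j != i) / n
            for i in range(n)]

def shapley_via_permutations(n, v):
    """Reference: average marginal contributions over all n! orderings."""
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for pi in perms:
        S = set()
        for i in pi:
            phi[i] += v(S | {i}) - v(S)
            S.add(i)
    return [p / len(perms) for p in phi]
```

For an additive game $v(S) = \sum_{i \in S} b_i$ both routines return $\phi_i = b_i$, confirming that the least squares closed form and the permutation definition coincide.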

5.2. Shapley Value Approximation Algorithms

In this subsection, we provide an introduction to Shapley value approximation algorithms that are based on the formulation of the Shapley value as a least squares problem. The algorithms discussed below are not meant to be an exhaustive list, but rather represent those algorithms we will analyse and test in the rest of the article. Specifically, the first three algorithms have been proposed by Benati et al. [14] in the context of stochastic approximations of cooperative games and are based on the family of least square values defined in SubSection 5.1, while the latter two have their origins in the machine learning world.
Additionally, we would like to draw attention to the absence of the projection method proposed by Benati et al. [14], which ensures efficiency and symmetry of the obtained estimators. Although this approach potentially improves the accuracy of the estimators (or at least never worsens it), it is not agnostic to the underlying characteristic function: it requires prior knowledge about symmetric players, information we assume to be unavailable, particularly in the context of interpretable machine learning. Nevertheless, one could still achieve estimator efficiency through a single evaluation of $v(N)$, similar to Castro et al. [28], who proposed filling the "efficiency gap" post hoc. However, since such additional considerations lie outside the scope of this work, we exclude them equally from all algorithms.

Least Squares Sampling (LSS)

This paragraph deals with an algorithm proposed by Benati et al. [14] that approximates (23) by using importance sampling as defined in Theorem 1. Specifically, we refer to the estimator in Equation (22) on p. 97 in the paper [14] combined with their weighted sampling strategy. In the following, we name this algorithm LSS.
The idea of LSS is to sample coalitions with replacement according to
$$p(S) = \begin{cases} \dfrac{1}{(n-1)\binom{n}{|S|}} & \text{if } 0 < |S| < n \\[1mm] 0 & \text{else,} \end{cases}$$
such that, by combining (24), (25), (26), and (28), one obtains
$$\frac{w_i(S)}{p(S)} = \begin{cases} \dfrac{n-1}{|S|} & \text{if } i \in S \\[2mm] -\dfrac{n-1}{n-|S|} & \text{if } i \notin S. \end{cases}$$
Then, based on Theorem 1 as well as (23) and (29), an approximation of the Shapley value of player i is given by
$$\hat\phi_i = \frac{v(N)}{n} + \frac{1}{\tau} \sum_{S \in \mathcal{S}} \frac{w_i(S)}{p(S)}\, v(S) = \frac{v(N)}{n} + \frac{1}{\tau} \left( \sum_{\substack{S \in \mathcal{S} \\ i \in S}} \frac{n-1}{|S|}\, v(S) - \sum_{\substack{S \in \mathcal{S} \\ i \notin S}} \frac{n-1}{n-|S|}\, v(S) \right),$$
where $\mathcal{S}$ is obtained by sampling $\tau$ times with replacement according to p.
Equation (31) serves as the basis for Algorithm 1. Note that one evaluation of v can be used to update all players, reducing the number of evaluations of v and increasing convergence speed.
Algorithm 1 LSS
1: $\hat\phi \leftarrow \mathbf{1}\, \frac{v(N)}{n}$
2: for $\tau$ times do
3:     Draw $S \sim p$
4:     $v_S \leftarrow v(S)$
5:     $w_{\text{in}} \leftarrow \frac{n-1}{|S|}$
6:     $w_{\text{out}} \leftarrow \frac{n-1}{n-|S|}$
7:     for all $i \in N$ do
8:         if $i \in S$ then
9:             $\hat\phi_i \leftarrow \hat\phi_i + \frac{1}{\tau}\, w_{\text{in}}\, v_S$
10:        else
11:            $\hat\phi_i \leftarrow \hat\phi_i - \frac{1}{\tau}\, w_{\text{out}}\, v_S$
12:        end if
13:    end for
14: end for
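A runnable Python sketch of Algorithm 1 (names and interface ours) may look as follows; it draws the coalition size uniformly from {1, ..., n-1} and then a uniform coalition of that size, which realizes the distribution p from (28):

```python
import random

def lss(n, v, tau, rng):
    """Least Squares Sampling (Algorithm 1), a sketch: sample coalitions S
    with p(S) proportional to 1 / C(n, |S|) over 0 < |S| < n, then update
    every player's estimate from a single evaluation v(S)."""
    phi = [v(set(range(n))) / n] * n
    sizes = list(range(1, n))               # |S| uniform on {1, ..., n-1}
    for _ in range(tau):
        s = rng.choice(sizes)
        S = set(rng.sample(range(n), s))    # uniform coalition of size s
        v_S = v(S)
        w_in, w_out = (n - 1) / s, (n - 1) / (n - s)
        for i in range(n):
            phi[i] += (w_in if i in S else -w_out) * v_S / tau
    return phi
```

Note that, as in the pseudocode, each evaluation of v updates all n estimates, so the sample budget is counted in evaluations of v rather than per player.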
Since (31) is an importance sampling estimator of a linear solution concept in the sense of Theorem 1, which can easily be obtained via (23) and (30), LSS inherits the properties stated in Proposition 1. Thus, the estimator is unbiased, and we can use (18) to calculate its variance:
Proposition 2.
The variance of the LSS estimator is given by
$$\mathrm{Var}[\hat\phi_i] = \frac{1}{\tau} \left( \sum_{\substack{S \subseteq N,\ 0 < |S| < n \\ i \in S}} \frac{n-|S|}{n\,|S|} \binom{n-2}{|S|-1}^{-1} v(S)^2 + \sum_{\substack{S \subseteq N,\ 0 < |S| < n \\ i \notin S}} \frac{|S|}{n\,(n-|S|)} \binom{n-2}{|S|-1}^{-1} v(S)^2 - \alpha_i^2 \right)$$
with
$$\alpha_i = \phi_i - \frac{v(N)}{n}$$
for all $i \in N$.
Proof. 
Using (18) and setting $\nu_i = v$ results in
$$\mathrm{Var}[\hat\phi_i] = \frac{1}{\tau} \left( \sum_{\substack{S \subseteq N \\ 0 < |S| < n}} \frac{w_i(S)^2}{p(S)}\, v(S)^2 - \alpha_i^2 \right),$$
where $\alpha_i = \phi_i - \frac{v(N)}{n}$. Here, the term $\frac{v(N)}{n}$ is excluded from the variance calculation: it is added once at the beginning of the LSS algorithm, is not part of the sampling procedure, carries no randomness, and thus does not influence the variance of $\hat\phi_i$. As a result, we work with a shifted solution concept $\alpha_i$ that does not contain the constant term $\frac{v(N)}{n}$.
With that given, one obtains (32) by calculating
$$\frac{w_i(S)^2}{p(S)} = \begin{cases} \dfrac{n-|S|}{n\,|S|} \dbinom{n-2}{|S|-1}^{-1} & \text{if } i \in S \\[2mm] \dfrac{|S|}{n\,(n-|S|)} \dbinom{n-2}{|S|-1}^{-1} & \text{if } i \notin S, \end{cases}$$
and inserting the result into (33).    □

Stratified Least Squares Sampling (S-LSS)

This paragraph discusses another algorithm proposed by Benati et al. [14], which approximates the Shapley value through what the authors introduce as their stratification strategy. Specifically, we refer to equations (23) and (24) on p. 97 in the paper [14] without any reuse of samples across strata. In the following, we refer to this method as S-LSS.
The stratification according to [14] comes into play when approximating the individual addends a j in (21), where a reformulation of (22) results in
$$a_j = \sum_{s=1}^{n-1} \underbrace{\sum_{\substack{S \subseteq N \\ s = |S|,\ j \in S}} m(s)\, v(S)}_{=\, a_{j,s}}.$$
Based on that formulation, approximations of a j , s , denoted by a ^ j , s , are obtained as
$$\hat a_{j,s} = \frac{1}{\tau_{j,s}} \sum_{S \in \mathcal{S}_{j,s}} \frac{m(s)}{p_{j,s}(S)}\, v(S) = \frac{1}{\tau_{j,s}\,(n-s)} \sum_{S \in \mathcal{S}_{j,s}} v(S),$$
with $\mathcal{S}_{j,s}$ being an i.i.d. sample of size $\tau_{j,s}$ from $\{S \subseteq N \mid (s = |S|) \wedge (j \in S)\}$ according to the uniform probability distribution $p_{j,s}(S) = \binom{n-1}{s-1}^{-1}$.
Then, based on (27), one obtains an estimator of ϕ i as
$$\hat\phi_i = \frac{v(N)}{n} + \frac{n-1}{n} \sum_{s=1}^{n-1} \hat a_{i,s} - \frac{1}{n} \sum_{\substack{j \in N \\ j \neq i}} \sum_{s=1}^{n-1} \hat a_{j,s},$$
forming the basis for Algorithm 2.
Algorithm 2 S-LSS
1: for $i \in N$ do
2:     for $s = 1$ to $n-1$ do
3:         $\hat a_{i,s} \leftarrow 0$
4:         for $\tau_{i,s}$ times do
5:             Draw $S \sim p_{i,s}$
6:             $\hat a_{i,s} \leftarrow \hat a_{i,s} + \frac{1}{\tau_{i,s}(n-s)}\, v(S)$
7:         end for
8:     end for
9: end for
10: $\hat\phi_i \leftarrow \frac{v(N)}{n} + \frac{n-1}{n} \sum_{s=1}^{n-1} \hat a_{i,s} - \frac{1}{n} \sum_{j \in N : j \neq i} \sum_{s=1}^{n-1} \hat a_{j,s} \quad (i \in N)$
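A runnable Python sketch of Algorithm 2 (names and interface ours) follows; for simplicity it uses the same sample size for every stratum:

```python
import random

def s_lss(n, v, tau_per_stratum, rng):
    """Stratified Least Squares Sampling (Algorithm 2), a sketch: for every
    player i and coalition size s, estimate a_{i,s} from uniform coalitions
    of size s containing i, then combine the strata via the closed form."""
    a_hat = [[0.0] * n for _ in range(n)]      # a_hat[s][i], rows s = 1..n-1 used
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for s in range(1, n):
            total = 0.0
            for _ in range(tau_per_stratum):
                S = set(rng.sample(others, s - 1)) | {i}  # uniform, |S| = s, i in S
                total += v(S)
            a_hat[s][i] = total / (tau_per_stratum * (n - s))
    vN = v(set(range(n)))
    a_sum = [sum(a_hat[s][i] for s in range(1, n)) for i in range(n)]
    return [vN / n + (n - 1) / n * a_sum[i]
            - sum(a_sum[j] for j in range(n) if j != i) / n
            for i in range(n)]
```

Unlike LSS, each evaluation of v here serves exactly one stratum (i, s), which is the structural difference that sample reuse (SRS-LSS) later removes.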
Clearly, by using the formulation given in (35), the estimators $\hat a_{j,s}$ can be seen as estimators of individual solution concepts $a_{j,s}$ in the sense of Theorem 1 by setting
$$w_j(S) = \begin{cases} m(s) & \text{if } (s = |S|) \wedge (j \in S) \\ 0 & \text{else,} \end{cases}$$
as well as
$$p_j(S) = \begin{cases} p_{j,s}(S) & \text{if } (s = |S|) \wedge (j \in S) \\ 0 & \text{else,} \end{cases}$$
and $\nu_j = v$.
Therefore, the individual stratum estimators are unbiased, and (18) can be used to calculate their variances:
Proposition 3.
In the context of the S-LSS algorithm, the variance of a ^ j , s is given by
$$\mathrm{Var}[\hat a_{j,s}] = \frac{1}{\tau_{j,s}} \left( \frac{1}{(n-s)(n-1)} \binom{n-2}{s-1}^{-1} \sum_{\substack{S \subseteq N \\ s = |S|,\ j \in S}} v(S)^2 - a_{j,s}^2 \right)$$
for all j N and s { 1 , , n 1 } .
Proof. 
By using the fact that
$$\frac{\binom{n-1}{s-1}}{\binom{n-2}{s-1}} = \frac{(n-1)!}{(s-1)!\,(n-s)!} \cdot \frac{(s-1)!\,(n-s-1)!}{(n-2)!} = \frac{n-1}{n-s},$$
we can use (18) to obtain
$$\sum_{\substack{S \subseteq N \\ s = |S|,\ j \in S}} \frac{m(s)^2}{p_{j,s}(S)}\, v(S)^2 = \frac{\binom{n-1}{s-1}}{(n-1)^2 \binom{n-2}{s-1}^2} \sum_{\substack{S \subseteq N \\ s = |S|,\ j \in S}} v(S)^2 = \frac{n-1}{(n-s)(n-1)^2 \binom{n-2}{s-1}} \sum_{\substack{S \subseteq N \\ s = |S|,\ j \in S}} v(S)^2 = \frac{1}{(n-s)(n-1)\binom{n-2}{s-1}} \sum_{\substack{S \subseteq N \\ s = |S|,\ j \in S}} v(S)^2.$$
Inserting this result back into (18) results in (36).    □
Based on the individual stratum estimator variances given by Proposition 3, Benati et al. [14] stated the overall variance of the estimated Shapley value of player i as
$$\mathrm{Var}[\hat\phi_i] = \mathrm{Var}\!\left[ \frac{v(N)}{n} + \frac{n-1}{n} \sum_{s=1}^{n-1} \hat a_{i,s} - \frac{1}{n} \sum_{\substack{j \in N \\ j \neq i}} \sum_{s=1}^{n-1} \hat a_{j,s} \right] = \left( \frac{n-1}{n} \right)^2 \sum_{s=1}^{n-1} \mathrm{Var}[\hat a_{i,s}] + \frac{1}{n^2} \sum_{\substack{j \in N \\ j \neq i}} \sum_{s=1}^{n-1} \mathrm{Var}[\hat a_{j,s}].$$
Perhaps surprisingly and contrary to what might be expected from the definition of stratified sampling in SubSection 3.2, S-LSS does not guarantee a smaller variance compared to LSS, as we discuss in SubSection 5.3 and show empirically in Section 6.

Sample Reuse Stratified Least Squares Sampling (SRS-LSS)

Benati et al. [14] proposed a third algorithm, which builds on the S-LSS algorithm and reuses samples across strata; specifically, it is sketched in the final three sentences of Subsection 4.2 on p. 97 of [14]. In the following, we refer to this method as SRS-LSS.
The central idea is to reuse one evaluation v ( S ) in (35) to update all stratum estimators a ^ j , s with s = | S | and j ∈ S, hence the name. To achieve this, samples S s of size τ s are drawn from { S ⊆ N ∣ s = | S | } with probabilities p s ( S ) = \binom{n}{s}^{-1}. Based on each element S ∈ S s , all stratum estimators a ^ j , s with j ∈ S are updated using the single evaluation v ( S ). As a side effect, the number of evaluations of v used to update player j's stratum, i.e., τ j , s , depends on the generated samples, which is why, in Algorithm 3, we need to keep track of these values as well. This essentially means that all τ j , s become estimators themselves, which is why we denote them by τ ^ j , s . Additionally, we use a ˜ j , s to denote the unnormalized estimators of a j , s before dividing by τ ^ j , s ( n − s ).
Algorithm 3 SRS-LSS
1: for s = 1 to n − 1 do
2:     τ ^ i , s ← 0 ( i ∈ N )
3:     a ˜ i , s ← 0 ( i ∈ N )
4:     for τ s times do
5:         Draw S ∼ p s
6:         v S ← v ( S )
7:         for all i ∈ S do
8:             a ˜ i , s ← a ˜ i , s + v S
9:             τ ^ i , s ← τ ^ i , s + 1
10:        end for
11:    end for
12:    a ^ i , s ← a ˜ i , s / ( τ ^ i , s ( n − s ) ) ( i ∈ N )
13: end for
14: ϕ ^ i ← v ( N ) / n + ( ( n − 1 ) / n ) ∑_{s=1}^{n−1} a ^ i , s − ( 1 / n ) ∑_{j ∈ N : j ≠ i} ∑_{s=1}^{n−1} a ^ j , s ( i ∈ N )
One needs to guarantee that τ ^ j , s > 0 for all j N and s { 1 , , n 1 } in order for SRS-LSS to function properly and avoid divisions by zero. One straightforward approach is to repeatedly generate samples until this condition is satisfied, or alternatively, to employ a conditional probability distribution that accounts for which elements have not yet been sampled. In the remainder of this work, we rely on the results from the following proposition, and therefore, we do not incorporate additional safeguards into Algorithm 3.
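Algorithm 3 translates almost line by line into code. The following sketch assumes a small weighted voting game (weights [1, 2, 3], quota 5, a hypothetical choice) and enumerates coalitions only for illustration; for larger n one would sample coalitions of a given size directly. It also exploits the fact that the estimator in line 14 is exactly efficient, i.e., the estimates always sum to v(N):

```python
import itertools
import random

def srs_lss(v, n, tau_per_stratum, rng):
    """Sketch of Algorithm 3 (SRS-LSS): one evaluation v(S) updates all strata (j, |S|), j in S."""
    a_hat = [[0.0] * n for _ in range(n)]      # a_hat[i][s] for s = 1, ..., n-1
    for s in range(1, n):
        coalitions = list(itertools.combinations(range(n), s))  # enumeration for small n only
        a_tilde = [0.0] * n
        tau_hat = [0] * n
        for _ in range(tau_per_stratum):
            S = rng.choice(coalitions)          # uniform over coalitions of size s
            v_S = v(S)
            for i in S:                         # sample reuse: update every member's stratum
                a_tilde[i] += v_S
                tau_hat[i] += 1
        for i in range(n):                      # assumes tau_hat[i] > 0 (cf. Proposition 4)
            a_hat[i][s] = a_tilde[i] / (tau_hat[i] * (n - s))
    grand = v(tuple(range(n)))
    totals = [sum(a_hat[i][1:]) for i in range(n)]
    return [grand / n + (n - 1) / n * totals[i]
            - sum(totals[j] for j in range(n) if j != i) / n
            for i in range(n)]

# Hypothetical example: weighted voting game; exact Shapley values are (0, 1/2, 1/2).
weights = [1, 2, 3]
v = lambda S: 1.0 if sum(weights[i] for i in S) >= 5 else 0.0
phi = srs_lss(v, 3, 2000, random.Random(0))
```

With 2000 draws per coalition size, the estimates are close to (0, 1/2, 1/2) while their sum equals v(N) up to floating-point error.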
Proposition 4.
Let τ s → ∞ as τ → ∞ for all s ∈ { 1 , … , n − 1 }. Then, with probability approaching one asymptotically, each stratum estimator receives at least one sample, i.e.,
\[ \lim_{\tau \to \infty} P\big(\exists\, j \in N,\ s \in \{1, \dots, n-1\} : \hat{\tau}_{j,s} = 0\big) = 0. \]
Proof. 
The probability of a player j ∈ N belonging to a random coalition of size s ∈ { 1 , … , n − 1 } is given by s / n > 0. Thus, we have
\[ P(\hat{\tau}_{j,s} = 0) = \left( 1 - \frac{s}{n} \right)^{\!\tau_s} \xrightarrow{\ \tau_s \to \infty\ } 0. \tag{39} \]
Since
\[ P\big(\exists\, j \in N,\ s \in \{1, \dots, n-1\}: \hat{\tau}_{j,s} = 0\big) = P\left( \bigcup_{j \in N} \bigcup_{s=1}^{n-1} \{ \hat{\tau}_{j,s} = 0 \} \right), \]
we can use the union bound [29] and (39) to derive
\[ P\left( \bigcup_{j \in N} \bigcup_{s=1}^{n-1} \{ \hat{\tau}_{j,s} = 0 \} \right) \le \sum_{j \in N} \sum_{s=1}^{n-1} P(\hat{\tau}_{j,s} = 0) \xrightarrow{\ \tau \to \infty\ } 0, \]
which holds as long as τ s → ∞ with τ → ∞. □
Since Benati et al. [14] did not include a variance analysis of the SRS-LSS estimator in their work, we refer to SubSection 5.4 for a detailed discussion of the variance of the obtained estimator.

KernelSHAP (KS)

Let us now introduce the KernelSHAP algorithm, one of the predominant Shapley approximation algorithms in the machine learning world. It was developed by Lundberg and Lee [1], without knowledge of prior works like [8] and [9]. The optimization objective is
\[ \phi = \operatorname*{arg\,min}_{y \in \mathbb{R}^n} \sum_{\substack{z \in \{0,1\}^n \\ 0 < |z| < n}} p(z)\, \big( y(z) - v(z) \big)^2 \quad \text{s.t.} \quad y(N) = v(N) \tag{40} \]
to calculate the Shapley values for the cooperative game ( N , v ) [7]. Here, p denotes the probability mass function induced by the Shapley kernel ω, satisfying p ( 0 ) = p ( 1 ) = 0, such that
\[ p(z) = \begin{cases} Q^{-1}\, \omega(z) & \text{if } 0 < |z| < n \\ 0 & \text{else}, \end{cases} \tag{41} \]
where
\[ \omega(z) = \frac{n-1}{\binom{n}{|z|}\, |z|\, (n - |z|)} \]
is the Shapley kernel and
\[ Q = (n-1) \sum_{s=1}^{n-1} \frac{1}{s(n-s)} \tag{43} \]
is a normalization constant [7]. As noted in SubSection 2.1, all functions defined on { 0 , 1 } n are implicitly also defined on N.
Since solving (40) requires optimizing over the elements of { 0 , 1 }^n , a set that grows exponentially in n, KernelSHAP approximates Shapley values by generating a sample S ∼ iid p of size τ and solving the approximate optimization problem defined over the sampled subsets only, i.e.,
\[ \hat{\phi} = \operatorname*{arg\,min}_{y \in \mathbb{R}^n} \frac{1}{\tau} \sum_{S \in \mathcal{S}} \big( y(S) - v(S) \big)^2 \quad \text{s.t.} \quad y(N) = v(N). \tag{44} \]
Following [7], the Lagrangian of the approximate problem given by (44) with multiplier λ ∈ ℝ is
\[ \hat{L}(y, \lambda) = y^\top \underbrace{\left( \frac{1}{\tau} \sum_{S \in \mathcal{S}} z(S)\, z(S)^\top \right)}_{\hat{A}} y - 2\, y^\top \underbrace{\left( \frac{1}{\tau} \sum_{S \in \mathcal{S}} z(S)\, v(S) \right)}_{\hat{b}} + \frac{1}{\tau} \sum_{S \in \mathcal{S}} v(S)^2 + 2 \lambda \big( y(N) - v(N) \big). \tag{45} \]
Based on the Lagrangian, Covert and Lee [7] derived the closed-form solution
\[ \hat{\phi} = \hat{A}^{-1} \left( \hat{b} - \mathbf{1}\, \frac{\mathbf{1}^\top \hat{A}^{-1} \hat{b} - v(N)}{\mathbf{1}^\top \hat{A}^{-1} \mathbf{1}} \right). \]
The resulting estimator is consistent in probability, but assessing its unbiasedness or variance is challenging due to terms like \hat{A}^{-1} \hat{b}. While the unbiasedness or variance of \hat{A} or \hat{b} can be analyzed individually [7], their interaction complicates the analysis of the final Shapley value estimator.
The KernelSHAP algorithm will not be analyzed further theoretically in this work. Due to its setup, it is unclear how the obtained estimator fits the framework defined by Theorem 1, or the Monte Carlo estimators introduced in Section 3 in general. To the best of our knowledge, no analytic solution for its variance or proof of its unbiasedness has been derived to date, which is why the Unbiased KernelSHAP algorithm was introduced, guaranteeing unbiasedness and facilitating the variance analysis of the obtained Shapley value estimator.
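For concreteness, the sampling-plus-closed-form procedure can be sketched as follows. This is a minimal illustration rather than the reference shap implementation: the weighted voting game, the sample size, and the naive linear solver are hypothetical choices suitable only for small n.

```python
import random

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (fine for small n)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_shap(v, n, tau, rng):
    """Sample coalitions from the Shapley-kernel distribution p, build A_hat and b_hat,
    and apply the closed-form solution of the constrained least squares problem."""
    sizes = list(range(1, n))
    size_weights = [(n - 1) / (s * (n - s)) for s in sizes]  # P(|z| = s), unnormalized
    A_hat = [[0.0] * n for _ in range(n)]
    b_hat = [0.0] * n
    for _ in range(tau):
        s = rng.choices(sizes, weights=size_weights)[0]
        S = rng.sample(range(n), s)                          # uniform coalition of size s
        v_S = v(S)
        for i in S:
            b_hat[i] += v_S / tau
            for j in S:
                A_hat[i][j] += 1.0 / tau
    Ainv_b = solve(A_hat, b_hat)
    Ainv_1 = solve(A_hat, [1.0] * n)
    lam = (sum(Ainv_b) - v(range(n))) / sum(Ainv_1)          # enforces y(N) = v(N)
    return [Ainv_b[i] - lam * Ainv_1[i] for i in range(n)]

# Hypothetical example: weighted voting game; exact Shapley values are (0, 1/2, 1/2).
weights = [1, 2, 3]
v = lambda S: 1.0 if sum(weights[i] for i in S) >= 5 else 0.0
phi = kernel_shap(v, 3, 2000, random.Random(0))
```

By construction the constraint is satisfied exactly, so the estimates sum to v(N) regardless of the sample.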

Unbiased KernelSHAP (UKS)

The UKS algorithm [7] is a further development of the KernelSHAP algorithm from the previous paragraph, where we pointed out that it is unclear whether the KernelSHAP estimator is unbiased due to the interactions between \hat{A} and \hat{b}.
Considering the original optimization problem defined in (40), its Lagrangian with multiplier λ ∈ ℝ is given by
\[ L(y, \lambda) = y^\top \underbrace{\mathbb{E}[Z Z^\top]}_{A}\, y - 2\, y^\top \underbrace{\mathbb{E}[Z\, v(Z)]}_{b} + \mathbb{E}[v(Z)^2] + 2 \lambda \big( \mathbf{1}^\top y - v(N) \big), \]
whereby Z is a random variable taking values in { 0 , 1 }^n and distributed according to p defined by (41). With that formulation given, the closed-form solution of the original problem can be derived as
\[ \phi = A^{-1} \left( b - \mathbf{1}\, \frac{\mathbf{1}^\top A^{-1} b - v(N)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right). \]
Clearly, b cannot be computed exactly for large n, since it depends on v and thus requires 2^n − 2 evaluations of v to be exact. Nevertheless, since A does not depend on v and the probability distribution p is known, A can be pre-computed exactly, which is the main improvement introduced by UKS over the original KernelSHAP algorithm.
Thus, UKS estimates all players' Shapley values via
\[ \hat{\phi} = A^{-1} \left( \hat{b} - \mathbf{1}\, \frac{\mathbf{1}^\top A^{-1} \hat{b} - v(N)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right), \]
where \hat{b} is defined in (45) as for the KernelSHAP algorithm, i.e., \hat{b} = \frac{1}{\tau} \sum_{S \in \mathcal{S}} z(S)\, v(S), and A ∈ ℝ^{n×n} is given by
\[ A = \begin{pmatrix} \frac{1}{2} & \rho & \cdots & \rho \\ \rho & \frac{1}{2} & \cdots & \rho \\ \vdots & \vdots & \ddots & \vdots \\ \rho & \rho & \cdots & \frac{1}{2} \end{pmatrix} \tag{47} \]
with
\[ \rho = \frac{1}{n(n-1)} \cdot \frac{\sum_{s=2}^{n-1} \frac{s-1}{n-s}}{\sum_{s=1}^{n-1} \frac{1}{s(n-s)}}. \]
Following the results provided in [7], the resulting Shapley value estimator is unbiased and consistent in probability. Furthermore, its covariance is given by
\[ \Sigma_{\hat{\phi}} = E\, \Sigma_{\hat{b}}\, E \]
with
\[ \Sigma_{\hat{b}} = \operatorname{Cov}[Z\, v(Z)] \]
and
\[ E = A^{-1} - \frac{A^{-1} \mathbf{1} \mathbf{1}^\top A^{-1}}{\mathbf{1}^\top A^{-1} \mathbf{1}}. \]
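Since A is game-independent, its entries 1/2 and ρ can be validated directly by enumerating E[Z Zᵀ] under p. The following exact-arithmetic sketch assumes only the definitions of p, ω, and Q given above:

```python
import itertools
from fractions import Fraction
from math import comb

def rho_formula(n):
    """Off-diagonal entry of A as given in the text."""
    num = sum(Fraction(s - 1, n - s) for s in range(2, n))
    den = sum(Fraction(1, s * (n - s)) for s in range(1, n))
    return Fraction(1, n * (n - 1)) * num / den

def A_exact(n):
    """A = E[Z Z^T] under the Shapley-kernel distribution p, by exact enumeration."""
    Q = (n - 1) * sum(Fraction(1, s * (n - s)) for s in range(1, n))
    A = [[Fraction(0)] * n for _ in range(n)]
    for s in range(1, n):
        for S in itertools.combinations(range(n), s):
            p = Fraction(n - 1, comb(n, s) * s * (n - s)) / Q   # p(z) = omega(z)/Q
            for i in S:
                for j in S:
                    A[i][j] += p
    return A

n = 6
A = A_exact(n)
assert all(A[i][i] == Fraction(1, 2) for i in range(n))
assert all(A[i][j] == rho_formula(n) for i in range(n) for j in range(n) if i != j)
```

The assertions hold in exact rational arithmetic, i.e., the diagonal is exactly 1/2 and all off-diagonal entries equal ρ.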

5.3. Comparison of LSS and S-LSS

In this subsection, we first clarify the relationship between the two algorithms LSS and S-LSS which we introduced at the beginning of SubSection 5.2, followed by a detailed comparison of variances.

Algorithm Comparison

For comparing the algorithms LSS and S-LSS, we start with the following proposition:
Proposition 5.
In the sense of the definition of stratified sampling provided in Section 3.2, S-LSS is not a valid stratification scheme of LSS.
Proof. 
As stated in Section 3.2, stratification divides the sample space X into strata { X_1 , … , X_ℓ } such that X_l ∩ X_{l′} = ∅ for l , l′ ∈ { 1 , … , ℓ } with l ≠ l′.
By observing the strata definitions given in (34), one obtains that their sample spaces have the intersection
\[ \big\{ S \subseteq N \mid (s = |S|) \wedge (j \in S) \wedge (j' \in S) \big\} \qquad (j \neq j' \in N). \]
Clearly, for any coalition size s ∈ { 2 , … , n − 1 }, this set is nonempty, since it contains at least one element of the form { j , j′ , … }, whereby the dots indicate the presence of s − 2 distinct players other than j and j′. Therefore, S-LSS does not form a valid partition of the sample space as required by our definition in SubSection 3.2. □
We note in passing that the structure of the overlap between strata is such that it does not introduce a bias in the Shapley estimator defined by S-LSS.
In order to generate a deeper understanding regarding the relationship between LSS and S-LSS, we explain how Benati et al. [14] derived the LSS algorithm from the initial formulation given by (21). Via (21) and (22), one directly obtains
y i = v ( N ) n + 1 γ n n S N i S m ( | S | ) v ( S ) a i j N S N j S m ( | S | ) v ( S ) a j = v ( N ) n + S N n 1 i S γ n m ( | S | ) v ( S ) j N S N j S 1 γ n m ( | S | ) v ( S ) S N | S | γ n m ( | S | ) v ( S )
= v ( N ) n + S N n 1 i S | S | γ n m ( | S | ) w i ( S ) v ( S ) ,
which matches the formulation proposed in (23) with weights given by (24). Therefore, one evaluation of v for a given S in (51) corresponds to updating all a j where j S in (50). This implicit update of all a j where j S is encapsulated by the weight function w i .
As a result, stratifying by s and j, as done in the case of S-LSS, cannot be derived from LSS in the same way as defined in Section 3.2, since LSS is an algorithm that simultaneously updates multiple strata of S-LSS with a single sampled element, i.e., with one evaluation of v.
We conclude that the statement from SubSection 3.2, which points out that stratified sampling always yields a variance less than or equal to that of the crude Monte Carlo method, does not apply here, since S-LSS is not a valid stratification scheme of LSS. Therefore, in order to understand how their variances compare, we calculate both algorithms’ variances on two selected examples.

Variance Comparison in the Absence of Variance Within Strata

First, let us compare the variances of the LSS and S-LSS estimators in the context of an example game characterized by large variance between strata and no variance within strata. In particular, we consider a cardinality game with n ≥ 2 players and the characteristic function
\[ v(S) = |S|. \tag{52} \]
Lemma 1.
On the game defined by (52) with n ≥ 2, the variance of the LSS estimator is greater than zero for all players, i.e.,
\[ \operatorname{Var}[\hat{\phi}_i^{\mathrm{LSS}}] > 0 \quad (i \in N). \]
Proof. 
First, we obtain that the Shapley values of the game are given by v ( N ) / n for all i ∈ N, since all players are symmetric. Without loss of generality, we fix one player i ∈ N in order to simplify notation. Then, by using Proposition 2, we derive
\[ \alpha_i^2 = \left( \phi_i - \frac{v(N)}{n} \right)^{\!2} = 0. \]
Inserting α i 2 into (32), multiplying by τ , and using the identities defined by (37) as well as
\[ \frac{\binom{n-1}{s}}{\binom{n-2}{s-1}} = \frac{(n-1)!}{s!\,(n-s-1)!} \cdot \frac{(s-1)!\,(n-s-1)!}{(n-2)!} = \frac{n-1}{s} \]
results in
\[ \tau \operatorname{Var}[\hat{\phi}_i^{\mathrm{LSS}}] = \sum_{\substack{S \subseteq N \\ 0 < |S| < n \\ i \in S}} \frac{n - |S|}{n\, |S| \binom{n-2}{|S|-1}}\, v(S)^2 + \sum_{\substack{S \subseteq N \\ 0 < |S| < n \\ i \notin S}} \frac{|S|}{n\, (n - |S|) \binom{n-2}{|S|-1}}\, v(S)^2 = \sum_{s=1}^{n-1} \left( \binom{n-1}{s-1} \frac{n-s}{n\, s \binom{n-2}{s-1}} + \binom{n-1}{s} \frac{s}{n\, (n-s) \binom{n-2}{s-1}} \right) s^2 = \sum_{s=1}^{n-1} \Bigg( \underbrace{\frac{(n-1)\, s}{n}}_{> 0} + \underbrace{\frac{(n-1)\, s^2}{n\, (n-s)}}_{> 0} \Bigg) = \sum_{s=1}^{n-1} \frac{(n-1)\, s}{n-s} > 0. \]
Dividing by τ, we obtain
\[ \operatorname{Var}[\hat{\phi}_i^{\mathrm{LSS}}] > 0 \quad (i \in N). \qquad \Box \]
Lemma 2.
On the game defined by (52) with n ≥ 2, the variance of the S-LSS estimator is zero for all players, i.e.,
\[ \operatorname{Var}[\hat{\phi}_i^{\mathrm{S\text{-}LSS}}] = 0 \quad (i \in N), \]
as long as τ j , s > 0 for all j ∈ N and s ∈ { 1 , … , n − 1 }.
Proof. 
Without loss of generality, we fix j N and s { 1 , , n 1 } in order to simplify notations.
To calculate the variances of the individual stratum estimators via (36), the exact values of the stratum estimators a j , s are required. By using the identity (37), these are given by
\[ a_{j,s} = \sum_{\substack{S \subseteq N \\ s = |S| \\ j \in S}} m(s)\, v(S) = \frac{1}{(n-1)\binom{n-2}{s-1}} \sum_{\substack{S \subseteq N \\ s = |S| \\ j \in S}} v(S) = \frac{\binom{n-1}{s-1}\, s}{(n-1)\binom{n-2}{s-1}} = \frac{(n-1)\, s}{(n-1)(n-s)} = \frac{s}{n-s}. \tag{53} \]
Furthermore, by using (37), the sum of squared evaluations of v in (36) is given by
\[ \frac{1}{(n-s)(n-1)\binom{n-2}{s-1}} \sum_{\substack{S \subseteq N \\ s = |S| \\ j \in S}} v(S)^2 = \frac{\binom{n-1}{s-1}\, s^2}{(n-s)(n-1)\binom{n-2}{s-1}} = \frac{(n-1)\, s^2}{(n-1)(n-s)^2} = \frac{s^2}{(n-s)^2}. \tag{54} \]
Inserting (53) and (54) into (36) results in
\[ \operatorname{Var}[\hat{a}_{j,s}] = \frac{1}{\tau_{j,s}} \left( \frac{s^2}{(n-s)^2} - a_{j,s}^2 \right) = \frac{1}{\tau_{j,s}} \left( \frac{s^2}{(n-s)^2} - \frac{s^2}{(n-s)^2} \right) = 0, \]
as long as τ j , s > 0 . Then, one can easily verify via (38) that
\[ \operatorname{Var}[\hat{\phi}_i^{\mathrm{S\text{-}LSS}}] = 0 \quad (i \in N). \qquad \Box \]
Corollary 1.
There exist games for which the variances of all players' S-LSS estimators are smaller than those of the corresponding LSS estimators, i.e.,
\[ \operatorname{Var}[\hat{\phi}_i^{\mathrm{S\text{-}LSS}}] < \operatorname{Var}[\hat{\phi}_i^{\mathrm{LSS}}] \quad (i \in N), \]
as long as the sample allocation scheme of S-LSS satisfies τ j , s > 0 for all j N and s { 1 , , n 1 } .
Proof. 
The proof is straightforward and follows directly from Lemmas 1 and 2. □

Variance comparison in the presence of variance within strata

Let us now consider a weighted voting game as defined in (6) with
\[ c = [1, 2, 3] \quad \text{and} \quad C = 5 \tag{55} \]
as a second example. Here, we use concrete numbers in order to simplify calculations. In detail, we are interested in the variances of the Shapley value estimators of player 2 with Shapley value ϕ 2 = 1 2 .
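For reference, the stated value φ_2 = 1/2 (together with φ = (0, 1/2, 1/2)) can be confirmed by brute force over all 3! player orderings; a short exact-arithmetic sketch:

```python
import itertools
from fractions import Fraction

weights = [1, 2, 3]   # the weighted voting game from (55): c = [1, 2, 3], quota C = 5
v = lambda S: 1 if sum(weights[i] for i in S) >= 5 else 0

n = 3
phi = [Fraction(0)] * n
for order in itertools.permutations(range(n)):
    coalition = []
    for i in order:
        gain = v(coalition + [i]) - v(coalition)  # marginal contribution of player i
        phi[i] += Fraction(gain, 6)               # average over 3! = 6 orderings
        coalition.append(i)
```

Player 1 is never pivotal, while players 2 and 3 are pivotal in exactly three orderings each, yielding φ = (0, 1/2, 1/2).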
To simplify our later reasoning, we first calculate the population and estimator variances for all strata of the S-LSS estimator in Table 1.
Lemma 3.
On the game defined by (55), the variance of player 2’s LSS estimator is given by
\[ \operatorname{Var}[\hat{\phi}_2^{\mathrm{LSS}}] = \frac{5}{36\,\tau}. \]
Proof. 
The proof is straightforward by inserting (55) into (32). □
Lemma 4.
On the game defined by (55), the variance of player 2’s S-LSS estimator with equal sample allocation is given by
\[ \operatorname{Var}[\hat{\phi}_2^{\mathrm{S\text{-}LSS}}] = \frac{5}{6\,\tau}. \]
Proof. 
The proof is straightforward and is done by calculating (36), first, to obtain values for all Var [ a ^ j , s ] . Note that these Var [ a ^ j , s ] are given in the “estimator variances” column from Table 1. Then, one obtains the final estimator’s variance by inserting all Var [ a ^ j , s ] into (38). □
Corollary 2.
There exist games for which the variance of a player’s S-LSS estimator with equal sample allocation is larger than that of the LSS estimator, i.e., Var [ ϕ ^ i S - LSS ] > Var [ ϕ ^ i LSS ] for some i N .
Proof. 
The proof is straightforward and directly follows from Lemmas 3 and 4. □
Although an equal sample allocation scheme might be a reasonable choice, especially when calculating all players’ Shapley values, one might still ask whether alternative allocation procedures could further reduce the variance of the S-LSS estimator. While our goal is to treat characteristic functions — and, in the context of explainable machine learning, the machine learning models behind them — as black boxes without relying on internal knowledge, let us examine a sample allocation strategy that makes use of a priori information about the variances within strata. Interestingly, we will see that even this variance-informed allocation performs worse than the LSS method on the weighted voting game defined by (55).
Lemma 5.
On the game defined by (55), the variance of player 2’s S-LSS estimator with an optimum sample allocation, i.e., a sample allocation that is tailored to player 2 in order to minimize the variance of their estimator, is given by
\[ \operatorname{Var}[\hat{\phi}_2^{\mathrm{S\text{-}LSS}}] = \frac{1}{4\,\tau}. \]
Proof. 
By inserting the estimator variances from Table 1 into (38), we obtain
\[ \operatorname{Var}[\hat{\phi}_2^{\mathrm{S\text{-}LSS}}] = \frac{4}{9} \cdot \frac{1}{4\,\tau_{2,2}} + \frac{1}{9} \cdot \frac{1}{4\,\tau_{3,2}} = \frac{1}{9} \left( \frac{1}{\tau_{2,2}} + \frac{1}{4\,\tau_{3,2}} \right) = \frac{1}{9} \left( \frac{1}{x\,\tau} + \frac{1}{4\,(1-x)\,\tau} \right) = \frac{1}{9\,\tau} \left( \frac{1}{x} + \frac{1}{4\,(1-x)} \right), \tag{56} \]
whereby x ∈ ( 0 , 1 ) is the fraction of τ that is used for estimating the stratum with j = 2 , s = 2 and ( 1 − x ) is the fraction of τ that is used for estimating the stratum with j = 3 , s = 2 .
Therefore, we get the proxy minimization problem
\[ \min_{x \in (0,1)} \frac{1}{x} + \frac{1}{4(1-x)}. \]
Let us now set
\[ f(x) = \frac{1}{x} + \frac{1}{4(1-x)} \]
with its derivative given by
\[ f'(x) = -\frac{1}{x^2} + \frac{1}{4(1-x)^2}. \]
Setting f′ ( x ) = 0, we get
\[ x = \frac{4 \pm 2}{3}, \]
with x = 2 / 3 being the only solution in ( 0 , 1 ). Furthermore, since
\[ f''(x) = \frac{2}{x^3} + \frac{1}{2(1-x)^3} > 0 \quad \text{on } (0,1), \]
we obtain that x = 2 / 3 indeed is a minimum with value f ( 2 / 3 ) = 9 / 4. Thus, via (56), the minimum variance of the S-LSS estimator is given by
\[ \operatorname{Var}[\hat{\phi}_2^{\mathrm{S\text{-}LSS}}] = \frac{1}{9\,\tau} \cdot \frac{9}{4} = \frac{1}{4\,\tau}. \qquad \Box \]
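The allocation optimum is easy to cross-check numerically; a quick grid search over x (the grid resolution is an arbitrary illustrative choice):

```python
# f(x) = 1/x + 1/(4(1 - x)) on (0, 1) is minimized at x = 2/3 with value 9/4,
# so the optimal allocation gives Var = (1/(9*tau)) * (9/4) = 1/(4*tau).
f = lambda x: 1 / x + 1 / (4 * (1 - x))
xs = [i / 1000 for i in range(1, 1000)]
x_best = min(xs, key=f)

assert abs(x_best - 2 / 3) < 1e-3
assert abs(f(2 / 3) - 9 / 4) < 1e-9
assert all(f(2 / 3) <= f(x) + 1e-12 for x in xs)
```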
Corollary 3.
There exist games for which the variance of a player’s S-LSS estimator with optimum sample allocation, i.e., a sample allocation that is tailored to that specific player in order to minimize the variance of their estimator, is larger than that of the LSS estimator, i.e., Var [ ϕ ^ i S - LSS ] > Var [ ϕ ^ i LSS ] for some i N .
Proof. 
The proof is straightforward and directly follows from Lemmas 3 and 5. □

Concluding Variance Comparison of LSS and S-LSS

Recapitulating the results from the previous two paragraphs, we obtain:
Theorem 2.
Depending on the specific cooperative game, the variance of a player’s Shapley value estimator may be smaller under either LSS or S-LSS, even when the S-LSS algorithm uses an optimal, player-specific sample allocation.
Proof. 
The proof is straightforward and directly follows from Corollaries 1 and 3. □
To summarize the results in this subsection so far, Proposition 5 demonstrates that S-LSS is not the stratified variant of LSS in the sense of the definition provided in Section 3.2, and Theorem 2 states that neither algorithm is preferable to the other across all cooperative games. Since the proof of Theorem 2 builds on the calculation of both algorithms' theoretical variances on two distinct games, it does not provide an intuitive explanation of why S-LSS may perform worse than LSS in some settings. Therefore, we provide such an explanation in the following.
Since Benati et al. [14] did not provide a sample allocation scheme for the S-LSS algorithm, we first derive a proportional sample allocation similar to SubSection 3.2, which is motivated by the fact that all coalition sizes are equally likely in the context of LSS, i.e.,
\[ P(s = |S|) = \frac{1}{n-1} \quad (s \in \{1, \dots, n-1\}), \tag{57} \]
with S p LSS , whereby p LSS is the probability mass function from (28).
Furthermore, once a random coalition S p LSS of some size s { 1 , , n 1 } is sampled, the probability of any player being in it is the same for all players, i.e.,
\[ P(j \in S \mid s = |S|) = \frac{s}{n} \quad (j \in N). \]
Then, the overall probability that a random coalition S p LSS belongs to a specific stratum of the S-LSS algorithm — which is the case when S meets a specific size s and, additionally, some specific player j is in it — is given by
\[ P(s = |S|) \cdot P(j \in S \mid s = |S|) = \frac{s}{n(n-1)} \quad (j \in N,\ s \in \{1, \dots, n-1\}). \]
As we established at the beginning of this subsection, the statement on variance reduction for a stratified estimator with stratum sample sizes proportional to the probabilities of sampled elements belonging to a stratum from SubSection 3.2 does not apply in this context since S-LSS is not a valid stratified variant of LSS. However, the proportional sample allocation will come out as a useful starting point for our subsequent explanations. Moreover, as established in Theorem 2, the exact choice of the applied sample allocation scheme does not alter the fact that S-LSS can perform worse than LSS, since the increased variance of S-LSS was also observed under an optimal allocation scheme. Additionally, as we will show later, sample allocation strategies other than the proportional sample allocation do not change the outcome and conclusions of our following reasoning.
Hence we set all τ j , s S-LSS ∝ P ( s = | S | ) · P ( j ∈ S ∣ s = | S | ). Then, we obtain
\[ \tau_{j,s}^{\mathrm{S\text{-}LSS}} = \tau\, \frac{P(s = |S|)\, P(j \in S \mid s = |S|)}{\sum_{s'=1}^{n-1} \sum_{j' \in N} P(s' = |S|)\, P(j' \in S \mid s' = |S|)} = \frac{\frac{s\,\tau}{n(n-1)}}{\sum_{s'=1}^{n-1} \sum_{j' \in N} \frac{s'}{n(n-1)}} = \frac{s\,\tau}{\sum_{s'=1}^{n-1} \sum_{j' \in N} s'} = \frac{s\,\tau}{n \sum_{s'=1}^{n-1} s'} = \frac{2\, s\,\tau}{n^2 (n-1)} \quad (j \in N,\ s \in \{1, \dots, n-1\}). \tag{58} \]
On the other hand, let us now find out how often a sample is expected to belong to a stratum a j , s of S-LSS when actually running LSS, which is denoted by E [ τ j , s LSS ] . Via (57), we obtain
\[ \mathbb{E}[\tau_{j,s}^{\mathrm{LSS}}] = \tau\, P(s = |S|)\, P(j \in S \mid s = |S|) = \frac{s\,\tau}{n(n-1)} \quad (j \in N,\ s \in \{1, \dots, n-1\}). \tag{59} \]
Finally, by comparing (58) and (59), we observe
\[ \tau_{j,s}^{\mathrm{S\text{-}LSS}} = \frac{2\, s\,\tau}{n^2 (n-1)} \;\overset{n > 2}{<}\; \frac{s\,\tau}{n(n-1)} = \mathbb{E}[\tau_{j,s}^{\mathrm{LSS}}] \tag{60} \]
for all j ∈ N and s ∈ { 1 , … , n − 1 }. Clearly, (60) shows that the expected sample size for stratum a j , s is larger by a factor of n / 2 when executing LSS than the corresponding sample size of S-LSS with a proportional sample allocation. This is due to the fact that LSS actually updates multiple strata with one evaluation of v; see the beginning of this subsection. We conclude that S-LSS might eliminate the variance between the estimators of different strata (similar to the definition provided in SubSection 3.2), but, taking into account the results from Proposition 3, the stratum estimators' variances depend on τ j , s S-LSS , which results in larger variances of the individual stratum estimators due to smaller τ j , s S-LSS .
This underscores our observations from the previous paragraphs. Whenever the variances within strata are small, e.g., in the context of the game defined by (52), the small sample sizes for each stratum suffice for S-LSS to provide accurate stratum estimators, and the variance between the strata is eliminated by algorithm design. Conversely, if the variances within strata are larger, e.g., in the context of the game defined by (55), the reduced number of samples available for individual strata of the S-LSS algorithm might increase the stratum estimators' variances more than the stratified algorithm design reduces the overall variance.
This result does not only apply to our chosen sample allocation scheme, i.e., the proportional sample allocation. Clearly, from (60), one obtains that too few samples are available for the individual stratum estimators in comparison to LSS, and any other sample allocation scheme would share the drawback that there exists at least one stratum for which fewer samples are available than under LSS. Additionally, we highlight that even an optimized sample allocation cannot compensate for the reduced number of available samples; see Lemma 5 and Corollary 3.
In summary, we recommend using LSS whenever one expects large variances within the strata of S-LSS and a small variance between the stratum estimators. On the other hand, whenever one expects a large variance between the stratum estimators and small variances within strata, we recommend choosing S-LSS. In the latter case, if one has information about the exact variances of individual strata or a good approximation of them, one could distribute the sample sizes across strata similar to the procedure from Lemma 5. Additionally, we highlight that several improvements can be made in order to reduce the mutual shortcomings of both algorithms. One approach is the reuse of samples across strata in order to eliminate the sample size inequality from (60), which was proposed by Benati et al. [14] and introduced as SRS-LSS in SubSection 5.2. However, as we will point out in SubSection 5.4, this approach adds covariance terms to the overall variances of the Shapley value estimators. Another improvement could be some kind of two-stage algorithm, where the first stage is used to decide whether the LSS or S-LSS algorithm should be executed in the second stage.
Finally, we visualize the behavior of LSS and S-LSS when applied to different cooperative games. Figure 2 illustrates the previously obtained results by plotting the variances of LSS and S-LSS against weighted voting games with different quotas. As expected, the variances show a clear dependence on the underlying game, which determines whether LSS or S-LSS performs better.

5.4. Analysis of the Variance of SRS-LSS

Since Benati et al. [14] did not address the variance of the SRS-LSS algorithm, we derive:
Proposition 6.
For all i N , the variance of the SRS-LSS estimator is given by
\[ \operatorname{Var}[\hat{\phi}_i] = \left( \frac{n-1}{n} \right)^{\!2} \sum_{s=1}^{n-1} \operatorname{Var}[\hat{a}_{i,s}] + \frac{1}{n^2} \left( \sum_{\substack{j \in N \\ j \neq i}} \sum_{s=1}^{n-1} \operatorname{Var}[\hat{a}_{j,s}] + 2 \sum_{\substack{j,k \in N \\ j < k \\ j,k \neq i}} \sum_{s=1}^{n-1} \operatorname{Cov}[\hat{a}_{j,s}, \hat{a}_{k,s}] \right) - \frac{2(n-1)}{n^2} \sum_{\substack{j \in N \\ j \neq i}} \sum_{s=1}^{n-1} \operatorname{Cov}[\hat{a}_{i,s}, \hat{a}_{j,s}], \tag{61} \]
with Var [ a ^ j , s ] given in (36),
\[ \operatorname{Cov}[\hat{a}_{j,s}, \hat{a}_{k,s}] = \frac{1}{(n-s)^2} \operatorname{Cov}\!\left[ \frac{1}{\hat{\tau}_{j,s}} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{j \in S}\, v(S),\ \frac{1}{\hat{\tau}_{k,s}} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{k \in S}\, v(S) \right], \tag{62} \]
\[ \hat{\tau}_{j,s} = \sum_{S \in \mathcal{S}_s} \mathbb{1}_{j \in S} \quad \text{and} \quad \hat{\tau}_{k,s} = \sum_{S \in \mathcal{S}_s} \mathbb{1}_{k \in S}, \]
and S s iid U { S N s = | S | } .
Proof. 
Recall the closed-form formulation of the Shapley value as a member of the family of least square values in (27). By using the stratification scheme introduced in (34), one obtains the estimator
\[ \hat{\phi}_i = \frac{v(N)}{n} + \frac{n-1}{n}\, \hat{a}_i - \frac{1}{n} \sum_{\substack{j \in N \\ j \neq i}} \hat{a}_j \]
with
\[ \hat{a}_j = \sum_{s=1}^{n-1} \hat{a}_{j,s}. \tag{63} \]
With these definitions in place, the variance of the Shapley value estimator is given by
\[ \operatorname{Var}[\hat{\phi}_i] = \operatorname{Var}\!\left[ \frac{v(N)}{n} + \frac{n-1}{n}\, \hat{a}_i - \frac{1}{n} \sum_{\substack{j \in N \\ j \neq i}} \hat{a}_j \right] = \left( \frac{n-1}{n} \right)^{\!2} \operatorname{Var}[\hat{a}_i] + \frac{1}{n^2} \operatorname{Var}\!\left[ \sum_{\substack{j \in N \\ j \neq i}} \hat{a}_j \right] - \frac{2(n-1)}{n^2} \operatorname{Cov}\!\left[ \hat{a}_i, \sum_{\substack{j \in N \\ j \neq i}} \hat{a}_j \right], \tag{64} \]
where
\[ \operatorname{Var}\!\left[ \sum_{\substack{j \in N \\ j \neq i}} \hat{a}_j \right] = \sum_{\substack{j \in N \\ j \neq i}} \operatorname{Var}[\hat{a}_j] + 2 \sum_{\substack{j,k \in N \\ j < k \\ j,k \neq i}} \operatorname{Cov}[\hat{a}_j, \hat{a}_k], \]
and
\[ \operatorname{Cov}\!\left[ \hat{a}_i, \sum_{\substack{j \in N \\ j \neq i}} \hat{a}_j \right] = \sum_{\substack{j \in N \\ j \neq i}} \operatorname{Cov}[\hat{a}_i, \hat{a}_j]. \]
Then, via (63), we rewrite
\[ \operatorname{Cov}[\hat{a}_j, \hat{a}_k] = \operatorname{Cov}\!\left[ \sum_{s=1}^{n-1} \hat{a}_{j,s},\ \sum_{s'=1}^{n-1} \hat{a}_{k,s'} \right] = \sum_{s=1}^{n-1} \sum_{s'=1}^{n-1} \operatorname{Cov}[\hat{a}_{j,s}, \hat{a}_{k,s'}] \]
for all j ≠ k ∈ N, but since samples are independent for s ≠ s′, the equation can be simplified to
\[ \operatorname{Cov}[\hat{a}_j, \hat{a}_k] = \sum_{s=1}^{n-1} \operatorname{Cov}[\hat{a}_{j,s}, \hat{a}_{k,s}]. \]
Next, from line 12 in Algorithm 3, we derive
\[ \operatorname{Cov}[\hat{a}_{j,s}, \hat{a}_{k,s}] = \operatorname{Cov}\!\left[ \frac{1}{\hat{\tau}_{j,s}(n-s)} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{j \in S}\, v(S),\ \frac{1}{\hat{\tau}_{k,s}(n-s)} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{k \in S}\, v(S) \right] = \frac{1}{(n-s)^2} \operatorname{Cov}\!\left[ \frac{1}{\hat{\tau}_{j,s}} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{j \in S}\, v(S),\ \frac{1}{\hat{\tau}_{k,s}} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{k \in S}\, v(S) \right] \]
with τ ^ j , s = S S s 1 j S , τ ^ k , s = S S s 1 k S , and S s iid U { S N s = | S | } .
Finally, setting
\[ \operatorname{Var}[\hat{a}_j] = \sum_{s=1}^{n-1} \operatorname{Var}[\hat{a}_{j,s}], \]
since Cov [ a ^ j , s , a ^ j , s′ ] = 0 for s ≠ s′, and inserting everything back into (64) results in (61). □
Corollary 4.
For S-LSS, (61) is equal to (38).
Proof. 
In the case of S-LSS, sampling within each stratum is independent of sampling in all other strata. Therefore, the covariance terms defined by (62) are zero. Consequently, setting these covariance terms to zero in (61) results in (38). □
Theorem 3.
For SRS-LSS, the covariance terms in (61) do not vanish in general.
Proof. 
To simplify notation, we first observe from (62) that the constant, non-zero factor ( n − s )^{−2} > 0 can be ignored when demonstrating that Cov [ a ^ j , s , a ^ k , s ] ≠ 0 might occur. Therefore, we define
\[ \operatorname{Cov}[\bar{a}_{j,s}, \bar{a}_{k,s}] = \operatorname{Cov}\!\left[ \frac{1}{\hat{\tau}_{j,s}} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{j \in S}\, v(S),\ \frac{1}{\hat{\tau}_{k,s}} \sum_{S \in \mathcal{S}_s} \mathbb{1}_{k \in S}\, v(S) \right]. \]
Now, we demonstrate by example that Cov [ a ¯ j , s , a ¯ k , s ] may be non-zero. For this purpose, we define a weighted voting game with c = [ 1 , 0 , 1 ] and quota C = 2 , such that the only winning coalition of size 2 is { 1 , 3 } . Let us now fix s = 2 , such that the corresponding sample is obtained as S 2 iid U { { 1 , 2 } , { 1 , 3 } , { 2 , 3 } } . Without loss of generality, we set the sample size to τ 2 = 3 .
In Proposition 4, we established that the probability of at least one value τ ^ j , s being zero converges to zero as τ s . For τ 2 = 3 , this probability is given by
\[ P(S_1 = S_2 = S_3) = 3 \left( \frac{1}{3} \right)^{\!3} = \frac{1}{9}, \]
where S 2 = [ S 1 , S 2 , S 3 ] . We denote this event by B and its complement by B ¯ , such that P ( B ) = P ( S 1 = S 2 = S 3 ) .
Although in this example the probability of failing to sample at least one element from each stratum is relatively large, we proceed under the standard assumption that this probability is negligible, which is justified for large τ 2 . Therefore, we calculate the conditional expectations in Table 2 based on the assumption that B does not occur.
Based on Table 2, the conditional covariance between a ¯ 1 , 2 and a ¯ 3 , 2 can be calculated as
\[ \operatorname{Cov}[\bar{a}_{1,2}, \bar{a}_{3,2} \mid \bar{B}] = \mathbb{E}[\bar{a}_{1,2}\, \bar{a}_{3,2} \mid \bar{B}] - \mathbb{E}[\bar{a}_{1,2} \mid \bar{B}]\, \mathbb{E}[\bar{a}_{3,2} \mid \bar{B}] = \frac{5}{16} - \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{16} > 0. \]
Although conditioning on B ¯ is necessary for computing the covariance, it may slightly affect its numerical value. The true (unconditional) covariance could differ marginally. Accounting for the undefined cases would require specifying a particular strategy to handle such edge cases, which was not proposed for the SRS-LSS algorithm and lies beyond the scope of this investigation. Nevertheless, the key point remains that the covariance does not vanish. As can be observed from the a ¯ 1 , 2 and a ¯ 3 , 2 columns in Table 2, these quantities have a clear positive correlation. Since players 1 and 3 together form the only winning coalition of size 2, large realizations of a ¯ 1 , 2 necessarily coincide with large realizations of a ¯ 3 , 2 . This dependence confirms that Cov [ a ¯ 1 , 2 , a ¯ 3 , 2 ] > 0 , and as a result, we have Cov [ a ^ 1 , 2 , a ^ 3 , 2 ] 0 . □
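The conditional expectations and the resulting covariance can be reproduced by exhaustively enumerating all 27 sample triples and discarding the three outcomes constituting B; a short exact-arithmetic sketch (players labeled 1–3 as in the proof):

```python
import itertools
from fractions import Fraction

# The game from the proof: c = [1, 0, 1], quota C = 2; only {1, 3} wins among size-2 coalitions.
coalitions = [(1, 2), (1, 3), (2, 3)]
v = {(1, 2): 0, (1, 3): 1, (2, 3): 0}

count = 0
e1 = e3 = e13 = Fraction(0)
for draw in itertools.product(coalitions, repeat=3):       # tau_2 = 3 iid uniform draws
    if draw[0] == draw[1] == draw[2]:
        continue                                           # condition on the complement of B
    with1 = [S for S in draw if 1 in S]                    # draws containing player 1
    with3 = [S for S in draw if 3 in S]                    # draws containing player 3
    a1 = Fraction(sum(v[S] for S in with1), len(with1))    # realization of bar a_{1,2}
    a3 = Fraction(sum(v[S] for S in with3), len(with3))    # realization of bar a_{3,2}
    e1 += a1
    e3 += a3
    e13 += a1 * a3
    count += 1

e1, e3, e13 = e1 / count, e3 / count, e13 / count
cov = e13 - e1 * e3
```

The enumeration yields 24 admissible outcomes with E[ā₁,₂ | B̄] = E[ā₃,₂ | B̄] = 1/2, E[ā₁,₂ ā₃,₂ | B̄] = 5/16, and hence a conditional covariance of exactly 1/16.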
From Theorem 3, it follows that reusing evaluations of the characteristic function introduces non-zero covariance terms. These terms are hard to characterize, which complicates the exact computation of the SRS-LSS estimator’s theoretical variance and its comparison to LSS and S-LSS. A detailed analysis of (62) together with the resulting influence on (61) might be a question for further research. In the meantime, we rely on empirical observations of the SRS-LSS estimator’s variance in the numerical experiments in Section 6, which indicate a significantly reduced variance of SRS-LSS in comparison to LSS and S-LSS.

5.5. Comparison of LSS and UKS

In the following, we compare LSS (Algorithm 1) with UKS (see the end of SubSection 5.2). First, we show how UKS fits into the general importance sampling framework defined by Theorem 1 and discuss the relationship between both algorithms within this context. Subsequently, we compare their variances.
Before going into detail, we note that the formulation of UKS derived in the following was already proposed by Fumagalli et al. [30]. The authors identify UKS as a special case of their SHAPley Interaction Quantification framework, which enables a reformulation of the algorithm similar to ours. Nevertheless, we present our derivations for several reasons. First, our derivation of UKS is, from our perspective, more intuitive. Second, while Fumagalli et al. [30] suggest that their framework enables a particular formulation of UKS that arises specifically from their approach, this form of UKS can also be directly obtained by expanding the matrices and vectors in (46) without using the indirection via their SHAPley Interaction Quantification framework. Third, UKS has not yet been formally integrated into the importance sampling framework defined by Theorem 1, which allows us to directly apply our results from Proposition 1. Finally, and most importantly, to the best of our knowledge, we are the first to demonstrate that UKS approximates the same underlying problem as LSS, establishing a novel link between the two algorithms.
We refer to Appendix A, which shows that our following results are consistent with those of Fumagalli et al. [30].

Algorithm Comparison

First, recall the definition of LSS given in (30) with w i being defined in (24), m in (25), γ = 1 in (26), and p in (28). Since we have only derived w i ( S ) / p ( S ) so far and not w i ( S ) directly, inserting m into w i results in
\[ \frac{n - |S|}{n(n-1)\binom{n-2}{|S|-1}} = \frac{(n-|S|)\,(|S|-1)!\,(n-|S|-1)!}{n(n-1)(n-2)!} = \frac{(|S|-1)!\,(n-|S|)!}{n!} = \frac{1}{|S|} \cdot \frac{|S|!\,(n-|S|)!}{n!} = \frac{1}{|S| \binom{n}{|S|}} \]
for the case i S , and
\[ \frac{|S|}{n(n-1)\binom{n-2}{|S|-1}} = \frac{|S|\,(|S|-1)!\,(n-|S|-1)!}{n(n-1)(n-2)!} = \frac{|S|!\,(n-|S|-1)!}{n!} = \frac{1}{n-|S|} \cdot \frac{|S|!\,(n-|S|)!}{n!} = \frac{1}{(n-|S|) \binom{n}{|S|}} \]
for the case i S . Thus, w i can be rewritten as
\[ w_i^{\mathrm{LSS}}(S) = \begin{cases} \dfrac{1}{|S| \binom{n}{|S|}} & \text{if } i \in S \\[2mm] -\dfrac{1}{(n-|S|) \binom{n}{|S|}} & \text{if } i \notin S. \end{cases} \]
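Consistent with the unbiasedness of the LSS estimator in (30), these weights reproduce the Shapley value exactly via φ_i = v(N)/n + Σ_{0<|S|<n} w_i^LSS(S) v(S), provided v(∅) = 0. A brute-force check against the permutation definition of the Shapley value, on a randomly generated game (the game itself is a hypothetical example):

```python
import itertools
import math
import random
from math import comb

n = 4
rng = random.Random(42)
players = tuple(range(n))

# Hypothetical random game with v(emptyset) = 0.
vals = {S: rng.random() for s in range(1, n + 1) for S in itertools.combinations(players, s)}
vals[()] = 0.0
v = lambda S: vals[tuple(sorted(S))]

def shapley(i):
    """Exact Shapley value of player i, averaged over all player orderings."""
    total = 0.0
    for order in itertools.permutations(players):
        pos = order.index(i)
        total += v(order[:pos + 1]) - v(order[:pos])   # marginal contribution of i
    return total / math.factorial(n)

def w_lss(i, S):
    s = len(S)
    return 1 / (s * comb(n, s)) if i in S else -1 / ((n - s) * comb(n, s))

for i in players:
    via_weights = v(players) / n + sum(
        w_lss(i, S) * v(S)
        for s in range(1, n)
        for S in itertools.combinations(players, s))
    assert abs(shapley(i) - via_weights) < 1e-9
```

Both computations agree for every player, up to floating-point error.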
Now, let us turn our attention to the UKS algorithm:
Proposition 7.
UKS can be brought into the form
\[ \hat{\phi}_i = t + \frac{1}{\tau} \sum_{S \in \mathcal{S}^{\mathrm{UKS}}} \frac{w_i^{\mathrm{UKS}}(S)}{p^{\mathrm{UKS}}(S)}\, v(S) \]
with S^{UKS} ∼ iid p^{UKS} being a sample according to (41),
\[ t = \left[ \frac{A^{-1} \mathbf{1}\, v(N)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right]_i \]
being a constant, and
\[ \frac{w_i^{\mathrm{UKS}}(S)}{p^{\mathrm{UKS}}(S)} = [A^{-1} z(S)]_i - [A^{-1} \mathbf{1}]_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1} \mathbf{1}}. \]
Proof. 
Starting from (46) and substituting b ^ as defined in (45) results in
\[ \begin{aligned} \hat{\phi}_i &= \left[ A^{-1} \left( \hat{b} - \mathbf{1}\, \frac{\mathbf{1}^\top A^{-1} \hat{b} - v(N)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right) \right]_i = \left[ A^{-1} \hat{b} - \frac{A^{-1} \mathbf{1}\, \mathbf{1}^\top A^{-1} \hat{b}}{\mathbf{1}^\top A^{-1} \mathbf{1}} + \frac{A^{-1} \mathbf{1}\, v(N)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right]_i \\ &= \underbrace{\left[ \frac{A^{-1} \mathbf{1}\, v(N)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right]_i}_{t} + \left[ \left( A^{-1} - \frac{A^{-1} \mathbf{1}\, \mathbf{1}^\top A^{-1}}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right) \hat{b} \right]_i = t + \left[ \left( A^{-1} - \frac{A^{-1} \mathbf{1}\, \mathbf{1}^\top A^{-1}}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right) \frac{1}{\tau} \sum_{S \in \mathcal{S}^{\mathrm{UKS}}} z(S)\, v(S) \right]_i \\ &= t + \frac{1}{\tau} \sum_{S \in \mathcal{S}^{\mathrm{UKS}}} \underbrace{\left( [A^{-1} z(S)]_i - [A^{-1} \mathbf{1}]_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1} \mathbf{1}} \right)}_{w_i^{\mathrm{UKS}}(S)/p^{\mathrm{UKS}}(S)} v(S). \end{aligned} \qquad \Box \]
We now aim to show that both algorithms share the same weight function w and the same initial term, i.e., w i LSS ( S ) = w i UKS ( S ) and t = v ( N ) n . To prepare for this, we first carry out some reformulations in order to simplify the upcoming calculations. Readers only interested in the final result may skip ahead to Theorem 4.
Lemma 6.
The expression for Q given in (43) can be rewritten as
\[
Q = \frac{2(n-1)H_{n-1}}{n}, \tag{71}
\]
where $H_{n-1}$ denotes the $(n-1)$-th harmonic number.
Proof.
The proof is straightforward by inserting the identity $\sum_{s=1}^{n-1} \frac{1}{s(n-s)} = \frac{2H_{n-1}}{n}$, with $H_{n-1}$ being the $(n-1)$-th harmonic number [30], into (43), which results in (71). $\square$
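The closed form (71) is also easy to confirm by brute force. The Python sketch below (standard library only; variable names are ours) sums the Shapley kernel weights over all coalitions with $0 < |S| < n$ and compares against the closed form:

```python
from itertools import combinations
from math import comb

def harmonic(k):
    return sum(1.0 / j for j in range(1, k + 1))

n = 8
# Shapley kernel weight of a coalition of size s, 0 < s < n
omega = lambda s: (n - 1) / (comb(n, s) * s * (n - s))
Q_brute = sum(omega(len(S)) for k in range(1, n)
              for S in combinations(range(n), k))
Q_closed = 2 * (n - 1) * harmonic(n - 1) / n
assert abs(Q_brute - Q_closed) < 1e-10
```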
Lemma 7.
The inverse of A as defined in (47) is given by
\[
A^{-1} = \begin{pmatrix} 2H_{n-1} + \bar{\rho} & \bar{\rho} & \cdots & \bar{\rho} \\ \bar{\rho} & 2H_{n-1} + \bar{\rho} & \cdots & \bar{\rho} \\ \vdots & \vdots & \ddots & \vdots \\ \bar{\rho} & \bar{\rho} & \cdots & 2H_{n-1} + \bar{\rho} \end{pmatrix}
\]
with
\[
\bar{\rho} = -\frac{2H_{n-1}(H_{n-1}-1)}{1 + n(H_{n-1}-1)}.
\]
Proof.
The article by Fumagalli et al. [30] shows that the off-diagonal entries of the inverse of A as defined in (47) are given by
\[
\bar{\rho} = -2H_{n-1}\, \frac{\dfrac{H_{n-1}-1}{2H_{n-1}}}{\dfrac{1}{2} + (n-1)\dfrac{H_{n-1}-1}{2H_{n-1}}}.
\]
Its denominator can be reformulated as
\[
\frac{1}{2} + (n-1)\frac{H_{n-1}-1}{2H_{n-1}} = \frac{H_{n-1} + (n-1)(H_{n-1}-1)}{2H_{n-1}} = \frac{1 + (H_{n-1}-1) + (n-1)(H_{n-1}-1)}{2H_{n-1}} = \frac{1 + n(H_{n-1}-1)}{2H_{n-1}}, \tag{72}
\]
and thus,
\[
\bar{\rho} = -2H_{n-1}\, \frac{H_{n-1}-1}{2H_{n-1}} \cdot \frac{2H_{n-1}}{1 + n(H_{n-1}-1)} = -\frac{2H_{n-1}(H_{n-1}-1)}{1 + n(H_{n-1}-1)}.
\]
For the diagonal entries, Fumagalli et al. [30] provide
\[
2H_{n-1}\, \frac{\dfrac{1}{2} + (n-2)\dfrac{H_{n-1}-1}{2H_{n-1}}}{\dfrac{1}{2} + (n-1)\dfrac{H_{n-1}-1}{2H_{n-1}}},
\]
and similarly to (72), its numerator can be derived as
\[
\frac{1}{2} + (n-2)\frac{H_{n-1}-1}{2H_{n-1}} = \frac{1 + (n-1)(H_{n-1}-1)}{2H_{n-1}}. \tag{73}
\]
Putting (72) and (73) together, we retrieve
\[
\begin{aligned}
2H_{n-1}\, \frac{\dfrac{1}{2} + (n-2)\dfrac{H_{n-1}-1}{2H_{n-1}}}{\dfrac{1}{2} + (n-1)\dfrac{H_{n-1}-1}{2H_{n-1}}}
&= 2H_{n-1}\, \frac{\dfrac{1 + (n-1)(H_{n-1}-1)}{2H_{n-1}}}{\dfrac{1 + n(H_{n-1}-1)}{2H_{n-1}}}
= 2H_{n-1}\, \frac{1 + (n-1)(H_{n-1}-1)}{1 + n(H_{n-1}-1)} \\[1ex]
&= 2H_{n-1} - \frac{2H_{n-1}\bigl(1 + n(H_{n-1}-1) - 1 - (n-1)(H_{n-1}-1)\bigr)}{1 + n(H_{n-1}-1)} \\[1ex]
&= 2H_{n-1} - \frac{2H_{n-1}(H_{n-1}-1)}{1 + n(H_{n-1}-1)} = 2H_{n-1} + \bar{\rho}. \qquad \square
\end{aligned}
\]
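Lemma 7 can be checked numerically. The Python sketch below (NumPy; building A as the second moment matrix of the normalized Shapley-kernel distribution is our reading of (47), and the variable names are ours) constructs A by brute force, inverts it, and compares against the claimed closed form:

```python
import numpy as np
from itertools import combinations
from math import comb

n = 6
H = sum(1.0 / j for j in range(1, n))          # H_{n-1}
omega = lambda s: (n - 1) / (comb(n, s) * s * (n - s))
Q = 2 * (n - 1) * H / n                        # normalizing constant from Lemma 6

# A = E[z(S) z(S)^T] with S drawn from the normalized Shapley-kernel distribution
A = np.zeros((n, n))
for k in range(1, n):
    for S in combinations(range(n), k):
        z = np.zeros(n)
        z[list(S)] = 1.0
        A += (omega(k) / Q) * np.outer(z, z)

rho_bar = -2 * H * (H - 1) / (1 + n * (H - 1))
# claimed inverse: diagonal 2 H_{n-1} + rho_bar, off-diagonal rho_bar
A_inv_closed = rho_bar * np.ones((n, n)) + 2 * H * np.eye(n)
assert np.allclose(np.linalg.inv(A), A_inv_closed)
```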
Proposition 8.
The constant terms of both algorithms are equal, i.e.,
\[
t = \frac{v(N)}{n}.
\]
Proof.
From (47) one directly obtains that $\mathbf{1}$ is an eigenvector of A with eigenvalue $\lambda = \frac{1}{2} + (n-1)\rho$. Thus, we have $A^{-1}\mathbf{1} = \lambda^{-1}\mathbf{1}$. With that given, rewriting $t$ from Proposition 7 results in
\[
t = \left(\frac{A^{-1}\mathbf{1}\, v(N)}{\mathbf{1}^\top A^{-1}\mathbf{1}}\right)_i = \frac{\bigl(A^{-1}\mathbf{1}\bigr)_i}{\mathbf{1}^\top A^{-1}\mathbf{1}}\, v(N) = \frac{\bigl(\lambda^{-1}\mathbf{1}\bigr)_i}{\mathbf{1}^\top \lambda^{-1}\mathbf{1}}\, v(N) = \frac{\lambda^{-1}}{n\lambda^{-1}}\, v(N) = \frac{v(N)}{n}. \qquad \square
\]
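The eigenvector argument is likewise easy to verify numerically. In the Python sketch below (NumPy; A is built by brute force as before, and the value chosen for $v(N)$ is an arbitrary number of ours), $\mathbf{1}$ is confirmed to be an eigenvector of A, and t reduces to $v(N)/n$ in every coordinate:

```python
import numpy as np
from itertools import combinations
from math import comb

n = 6
omega = lambda s: (n - 1) / (comb(n, s) * s * (n - s))
Q = sum(omega(k) * comb(n, k) for k in range(1, n))
A = np.zeros((n, n))
for k in range(1, n):
    for S in combinations(range(n), k):
        z = np.zeros(n)
        z[list(S)] = 1.0
        A += (omega(k) / Q) * np.outer(z, z)

ones = np.ones(n)
lam = A[0, 0] + (n - 1) * A[0, 1]          # lambda = 1/2 + (n-1) * rho
assert np.allclose(A @ ones, lam * ones)   # 1 is an eigenvector of A

v_N = 3.7                                  # arbitrary stand-in for v(N)
A_inv = np.linalg.inv(A)
t = A_inv @ ones * v_N / (ones @ A_inv @ ones)
assert np.allclose(t, v_N / n)             # t = v(N)/n for every i
```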
Lemma 8.
For any $S \subseteq N$ and $i \in N$, there holds
\[
\bigl(A^{-1}\mathbf{1}\bigr)_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}} = \frac{2|S| H_{n-1}}{n} + |S|\bar{\rho}.
\]
Proof.
Using Lemma 7, we derive
\[
\mathbf{1}^\top A^{-1} = \bigl(2H_{n-1} + n\bar{\rho},\ \ldots,\ 2H_{n-1} + n\bar{\rho}\bigr) \in \mathbb{R}^{1 \times n},
\]
and thus,
\[
\frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}} = \frac{|S|\,(2H_{n-1} + n\bar{\rho})}{n\,(2H_{n-1} + n\bar{\rho})} = \frac{|S|}{n}.
\]
Furthermore, also based on Lemma 7,
\[
\bigl(A^{-1}\mathbf{1}\bigr)_i = 2H_{n-1} + n\bar{\rho},
\]
and therefore,
\[
\bigl(A^{-1}\mathbf{1}\bigr)_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}} = \frac{2|S| H_{n-1}}{n} + \frac{|S|\, n \bar{\rho}}{n} = \frac{2|S| H_{n-1}}{n} + |S|\bar{\rho}. \qquad \square
\]
Proposition 9.
For all $S \subseteq N$ with $0 < |S| < n$ and $i \in S$,
\[
w_i^{\mathrm{UKS}}(S) = \frac{1}{|S|\binom{n}{|S|}}.
\]
Proof.
From Proposition 7, we obtain
\[
\frac{w_i^{\mathrm{UKS}}(S)}{p_{\mathrm{UKS}}(S)} = \bigl(A^{-1} z(S)\bigr)_i - \bigl(A^{-1}\mathbf{1}\bigr)_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}}. \tag{74}
\]
Using Lemma 7, we derive
\[
\bigl(A^{-1} z(S)\bigr)_i = 2H_{n-1} + |S|\bar{\rho}. \tag{75}
\]
Rewriting the Shapley kernel from (42) to follow our general notation for cooperative games results in
\[
\omega(S) = \frac{n-1}{\binom{n}{|S|}\, |S|\, (n-|S|)}. \tag{76}
\]
Combining the results from Lemmas 6 and 8 as well as (41), (74), (75), and (76) results in
\[
\begin{aligned}
w_i^{\mathrm{UKS}}(S) &= p_{\mathrm{UKS}}(S)\, \frac{w_i^{\mathrm{UKS}}(S)}{p_{\mathrm{UKS}}(S)} = Q^{-1}\omega(S)\left(\bigl(A^{-1} z(S)\bigr)_i - \bigl(A^{-1}\mathbf{1}\bigr)_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}}\right) \\[1ex]
&= \frac{n}{2(n-1)H_{n-1}} \cdot \frac{n-1}{\binom{n}{|S|}\, |S|\, (n-|S|)} \left(2H_{n-1} + |S|\bar{\rho} - \frac{2|S| H_{n-1}}{n} - |S|\bar{\rho}\right) \\[1ex]
&= \frac{n}{2H_{n-1}} \cdot \frac{1}{\binom{n}{|S|}\, |S|\, (n-|S|)} \left(2H_{n-1} - \frac{2|S| H_{n-1}}{n}\right) \\[1ex]
&= \frac{n}{2H_{n-1}} \cdot \frac{1}{\binom{n}{|S|}\, |S|\, (n-|S|)} \cdot \frac{2(n-|S|) H_{n-1}}{n} = \frac{1}{|S|\binom{n}{|S|}}. \qquad \square
\end{aligned}
\]
Proposition 10.
For all $S \subseteq N$ with $0 < |S| < n$ and $i \notin S$,
\[
w_i^{\mathrm{UKS}}(S) = -\frac{1}{(n-|S|)\binom{n}{|S|}}.
\]
Proof.
From Proposition 7, we have
\[
\frac{w_i^{\mathrm{UKS}}(S)}{p_{\mathrm{UKS}}(S)} = \bigl(A^{-1} z(S)\bigr)_i - \bigl(A^{-1}\mathbf{1}\bigr)_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}}. \tag{77}
\]
Using Lemma 7, we derive
\[
\bigl(A^{-1} z(S)\bigr)_i = |S|\bar{\rho}. \tag{78}
\]
Combining the results from Lemmas 6 and 8 as well as (41), (76), (77), and (78) results in
\[
\begin{aligned}
w_i^{\mathrm{UKS}}(S) &= p_{\mathrm{UKS}}(S)\, \frac{w_i^{\mathrm{UKS}}(S)}{p_{\mathrm{UKS}}(S)} = Q^{-1}\omega(S)\left(\bigl(A^{-1} z(S)\bigr)_i - \bigl(A^{-1}\mathbf{1}\bigr)_i\, \frac{\mathbf{1}^\top A^{-1} z(S)}{\mathbf{1}^\top A^{-1}\mathbf{1}}\right) \\[1ex]
&= \frac{n}{2(n-1)H_{n-1}} \cdot \frac{n-1}{\binom{n}{|S|}\, |S|\, (n-|S|)} \left(|S|\bar{\rho} - \frac{2|S| H_{n-1}}{n} - |S|\bar{\rho}\right) \\[1ex]
&= \frac{n}{2H_{n-1}} \cdot \frac{1}{\binom{n}{|S|}\, |S|\, (n-|S|)} \cdot \left(-\frac{2|S| H_{n-1}}{n}\right) = -\frac{1}{(n-|S|)\binom{n}{|S|}}. \qquad \square
\end{aligned}
\]
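Propositions 9 and 10 can be confirmed together by evaluating the matrix expression from Proposition 7 for every coalition and player. The Python sketch below (NumPy; A built by brute force as before, variable names ours) checks both closed forms:

```python
import numpy as np
from itertools import combinations
from math import comb

n = 6
omega = lambda s: (n - 1) / (comb(n, s) * s * (n - s))
Q = sum(omega(k) * comb(n, k) for k in range(1, n))
coalitions = [S for k in range(1, n) for S in combinations(range(n), k)]

A = np.zeros((n, n))
for S in coalitions:
    z = np.zeros(n)
    z[list(S)] = 1.0
    A += (omega(len(S)) / Q) * np.outer(z, z)
A_inv = np.linalg.inv(A)
ones = np.ones(n)

for S in coalitions:
    k = len(S)
    z = np.zeros(n)
    z[list(S)] = 1.0
    p_S = omega(k) / Q
    # matrix expression for w_i^UKS(S) / p_UKS(S) from Proposition 7
    ratio = A_inv @ z - (A_inv @ ones) * (ones @ A_inv @ z) / (ones @ A_inv @ ones)
    for i in range(n):
        w = p_S * ratio[i]                  # w_i^UKS(S)
        expected = (1 / (k * comb(n, k)) if i in S
                    else -1 / ((n - k) * comb(n, k)))
        assert abs(w - expected) < 1e-10
```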
Theorem 4.
The estimators obtained via LSS and UKS are both importance sampling estimators for the same linear solution concept in the sense of Theorem 1, differing only in their respective sampling strategies.
Proof.
Per the definition given in (30), LSS has the form
\[
\hat{\phi}_i = \frac{v(N)}{n} + \frac{1}{\tau} \sum_{S \in \mathcal{S}} \frac{w_i(S)}{p(S)}\, v(S),
\]
with weights given by (68).
Proposition 7 shows that UKS can also be written in this form, sharing the same constant term $t = \frac{v(N)}{n}$, see Proposition 8. Moreover, by combining Propositions 9 and 10, one obtains
\[
w_i^{\mathrm{UKS}}(S) = \begin{cases} \dfrac{1}{|S|\binom{n}{|S|}} & \text{if } i \in S, \\[2ex] -\dfrac{1}{(n-|S|)\binom{n}{|S|}} & \text{if } i \notin S \end{cases} \tag{79}
\]
for $0 < |S| < n$, which is identical to the weight function of LSS defined in (68).
Therefore, we conclude that both algorithms are importance sampling estimators of the same problem
\[
\phi_i = \frac{v(N)}{n} + \sum_{\substack{S \subseteq N \\ 0 < |S| < n}} w_i(S)\, v(S)
\]
with $w_i(S) = w_i^{\mathrm{LSS}}(S) = w_i^{\mathrm{UKS}}(S)$.
Finally, we highlight that both algorithms differ only in their respective sampling distributions, since (28) and the distribution proportional to (76) are clearly different sampling distributions. $\square$
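The exact representation underlying Theorem 4 can be validated on a small random game. The Python sketch below (standard library only; the game values are random numbers of our own choosing) compares the right-hand side of the representation against the classical marginal-contribution formula for the Shapley value:

```python
import random
from itertools import combinations
from math import comb, factorial

n = 6
random.seed(0)
# random characteristic function on 2^N, with v(empty set) = 0
v = {S: random.random() for k in range(n + 1) for S in combinations(range(n), k)}
v[()] = 0.0
grand = tuple(range(n))

def shapley(i):
    """Classical marginal-contribution formula for the Shapley value."""
    total = 0.0
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            S_with_i = tuple(sorted(S + (i,)))
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v[S_with_i] - v[S])
    return total

for i in range(n):
    # v(N)/n plus the weighted sum over all proper nonempty coalitions
    acc = v[grand] / n
    for k in range(1, n):
        for S in combinations(range(n), k):
            w = 1 / (k * comb(n, k)) if i in S else -1 / ((n - k) * comb(n, k))
            acc += w * v[S]
    assert abs(acc - shapley(i)) < 1e-10
```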
We illustrate the result from Theorem 4 in Figure 3, which visualizes the different sampling strategies of LSS and UKS for $n = 9$.

Variance Comparison

In the previous section, we showed that UKS fits the importance sampling framework introduced by Theorem 1, see Proposition 7 and Theorem 4. Therefore, although Covert and Lee [7] already provided (49) as the variance for the UKS estimator, we can now use a different formulation:
Proposition 11.
The variance of the UKS estimator is given by
\[
\mathrm{Var}\bigl[\hat{\phi}_i\bigr] = \frac{1}{\tau} \left( \sum_{\substack{S \subseteq N \\ 0 < |S| < n}} \frac{w_i(S)^2}{p(S)}\, v(S)^2 - \left(\phi_i - \frac{v(N)}{n}\right)^2 \right) \qquad (i \in N)
\]
with
\[
\frac{w_i(S)^2}{p(S)} = \begin{cases} \dfrac{2(n-|S|)\, H_{n-1}}{n\, |S| \binom{n}{|S|}} & \text{if } i \in S, \\[2ex] \dfrac{2|S|\, H_{n-1}}{n\, (n-|S|) \binom{n}{|S|}} & \text{if } i \notin S. \end{cases} \tag{80}
\]
Proof.
First, let us analyze the case $i \in S$. Using Lemma 8 as well as (69), (75), and (79), we derive
\[
\frac{w_i(S)^2}{p(S)} = w_i(S)\, \frac{w_i(S)}{p(S)} = \frac{1}{|S|\binom{n}{|S|}} \left(2H_{n-1} + |S|\bar{\rho} - \frac{2|S| H_{n-1}}{n} - |S|\bar{\rho}\right) = \frac{1}{|S|\binom{n}{|S|}} \left(2H_{n-1} - \frac{2|S| H_{n-1}}{n}\right) = \frac{2(n-|S|)\, H_{n-1}}{n\, |S| \binom{n}{|S|}}.
\]
Similarly, for the case $i \notin S$, by using Lemma 8 as well as (69), (78), and (79), we obtain
\[
\frac{w_i(S)^2}{p(S)} = w_i(S)\, \frac{w_i(S)}{p(S)} = -\frac{1}{(n-|S|)\binom{n}{|S|}} \left(|S|\bar{\rho} - \frac{2|S| H_{n-1}}{n} - |S|\bar{\rho}\right) = \frac{2|S|\, H_{n-1}}{n\, (n-|S|) \binom{n}{|S|}}.
\]
Combining both cases results in (80), and since UKS fits the framework introduced by Theorem 1, we can use (18) to compute its variance. Note that we construct a temporary solution concept $\phi_i - \frac{v(N)}{n}$ that excludes $\frac{v(N)}{n}$, similar to the variance of LSS given by Proposition 2. $\square$
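The closed form of $w_i(S)^2/p(S)$ follows from multiplying the weights of (79) by the ratios derived in the proofs of Propositions 9 and 10; the Python sketch below (standard library only, variable names ours) double-checks it for all coalition sizes:

```python
from math import comb

n = 9
H = sum(1.0 / j for j in range(1, n))            # H_{n-1}
Q = 2 * (n - 1) * H / n
for s in range(1, n):
    p = (n - 1) / (comb(n, s) * s * (n - s)) / Q  # p_UKS(S) for |S| = s
    w_in = 1 / (s * comb(n, s))                   # w_i(S), i in S
    w_out = -1 / ((n - s) * comb(n, s))           # w_i(S), i not in S
    assert abs(w_in ** 2 / p - 2 * (n - s) * H / (n * s * comb(n, s))) < 1e-12
    assert abs(w_out ** 2 / p - 2 * s * H / (n * (n - s) * comb(n, s))) < 1e-12
```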
In Figure 4, we use Proposition 11 to compute the variances of UKS and Proposition 2 for those of LSS. As expected, neither LSS nor UKS outperforms the other, since both approximate the same underlying problem and differ only in their respective sampling strategies. The effectiveness of any given sampling strategy depends on the characteristics of the underlying problem, such as the parameters of the weighted voting game. Naturally, there exist problems for which sampling strategies other than those employed in LSS or UKS achieve superior performance. More broadly, this illustrates that no single sampling strategy is universally optimal, and selecting the most effective one requires tailoring it to the specific problem.

6. Empirical Results

We validate our results by applying the algorithms introduced in SubSection 5.2 to approximate Shapley values for airport games, weighted voting games, and three real-world interpretable machine learning problems.
The algorithms introduced in SubSection 5.2 were implemented in Python. Both the implementations and the test problems can be freely accessed via the first author's GitHub repository.
We consistently assume that all players’ Shapley values must be estimated. Although not universal, this is common in explainable machine learning, where one seeks to explain a prediction using all features.
Before proceeding, we note that each algorithm uses different parameters to determine the number of sampled coalitions and thus the number of evaluations of the characteristic function v. For fair comparisons, we introduce a unified sample budget T, ensuring that each algorithm performs T evaluations of v, up to negligible rounding errors.
We now express the parameters of each algorithm from SubSection 5.2 in terms of T.
For LSS (Algorithm 1) and UKS (see the end of SubSection 5.2), this is a one-to-one mapping, i.e., $\tau = T$. Note that we ignore the single evaluation of v needed for calculating the initial term $v(N)/n$, since it is negligible in comparison to the rounding errors of the other algorithms, especially for large T.
S-LSS (Algorithm 2) divides the sample space based on $n-1$ distinct values of s, for each $i \in N$. We use the heuristically motivated proportional sample allocation from (58) to obtain
\[
\tau_{i,s} = \frac{2sT}{n^2(n-1)} \qquad (i \in N,\ s \in \{1, \ldots, n-1\}).
\]
Finally, SRS-LSS (Algorithm 3) divides the sample space into $n-1$ distinct sets. Thus, for simplicity, noting that alternative sample allocation schemes across strata may provide better results, we set
\[
\tau_s = \frac{T}{n-1} \qquad (s \in \{1, \ldots, n-1\}).
\]
We conclude that, after rounding the allocations to integers, the deviation from the true total sample budget T is 1 for LSS and UKS, while the upper bounds on the deviations are $n^2 - n - 1$ for S-LSS and $n - 2$ for SRS-LSS. We consider these deviations negligible in our subsequent analysis, in particular for large T.
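To illustrate these rounding bounds, the Python sketch below (standard library only; we round each allocation down, one plausible choice since the allocation formulas need not yield integers) totals the per-stratum allocations and checks the stated deviation bounds:

```python
from math import floor

n, T = 20, 100_000
# S-LSS: tau_{i,s} = 2 s T / (n^2 (n-1)) for each player i and size s
tau_slss = sum(floor(2 * s * T / (n ** 2 * (n - 1)))
               for i in range(n) for s in range(1, n))
assert 0 <= T - tau_slss <= n ** 2 - n - 1   # at most n^2 - n - 1 off budget
# SRS-LSS: tau_s = T / (n-1) for each of the n-1 strata
tau_srs = (n - 1) * floor(T / (n - 1))
assert 0 <= T - tau_srs <= n - 2             # at most n - 2 off budget
```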
Note that for SRS-LSS (Algorithm 3), it is not guaranteed that the algorithm runs successfully in the sense that every stratum receives at least one sample, compare Proposition 4. In our mean squared error comparisons, we mandate for any τ that at least half of all runs must be successful for the results to be displayed in the final figure.
With these definitions established, we proceeded with our experiments.
Figure 5 confirms the theoretical variances of LSS and S-LSS presented in Propositions 2 and 3, respectively. Moreover, it underscores our results from Theorem 2, where we demonstrated that either algorithm might achieve a smaller variance, depending on the underlying cooperative game. Furthermore, we validate that LSS and UKS perform as expected with respect to the theoretical variances presented in Propositions 2 and 11: Figure 5 clearly illustrates that the derived theoretical variances are accurate and that, depending on the specific problem, either LSS or UKS may slightly outperform the other, as one sampling strategy might be better suited to the given task than the other, see Theorem 4.
As for mean squared error comparisons, we first look at an airport game with 100 players specified in Castro et al. [31] and compare the approximation methods from SubSection 5.2 in Figure 6. Note that the graph for SRS-LSS begins only at a sample size of 50,000, as Proposition 4 fails too frequently with smaller sample budgets. With n = 100 players, guaranteeing at least one sample per stratum becomes more challenging.
Figure 7 compares mean squared errors for our Monte Carlo estimators from SubSection 5.2 for a weighted voting game with 50 players and normally distributed weights, which was previously employed in the software EPIC [22,23]. It can be found on the GitHub page of the second author via https://github.com/jhstaudacher/EPIC/blob/master/test_cases/normal_sqrd/normal_sqrd.n50.q249646.csv.
Figure 8 compares mean squared errors for our Monte Carlo estimators from SubSection 5.2 for a weighted voting game with 150 players and uniformly distributed weights, which was previously employed in the software EPIC [22,23]. It can be found on the GitHub page of the second author via https://github.com/jhstaudacher/EPIC/blob/master/test_cases/uniform/uniform.n150.q35951.csv. Note that the graph for SRS-LSS begins only at a sample size of 150,000, as Proposition 4 fails too frequently with smaller sample budgets. With n = 150 players, guaranteeing at least one sample per stratum becomes even more challenging than in the airport game from Figure 6.
Figure 9 uses the standard diabetes dataset (442 patients, 10 baseline features) where the target is disease progression after one year. We train a Gradient Boosting Regressor and evaluate the approximation methods from SubSection 5.2 by their mean squared error against exact reference Shapley values.
In Figure 10, the California housing dataset (20,640 entries, 8 features) predicts median house value in hundreds of thousands of dollars. Using an MLP Regressor, we compare approximation methods by their mean squared error on Shapley value estimation against exact references.
Figure 11 uses the classic wine dataset (178 instances, 13 features, 3 wine classes). We train a Random Forest Classifier and evaluate the approximation methods from SubSection 5.2 via mean squared error in estimating Shapley values for predicting the probability of class 0 only.
Let us succinctly summarize the mean squared error comparisons of all Shapley value approximation algorithms from SubSection 5.2 across Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. As expected, LSS and UKS perform more or less equally on all six test problems. SRS-LSS outperforms the other three provably unbiased algorithms LSS, S-LSS and UKS on all test problems. This observation is consistent with the results shown in Figure 1 of Benati et al. [14], where only LSS and SRS-LSS were compared. Therefore, we conclude that the covariance terms established in Theorem 3 do not significantly increase the overall variances of individual players' Shapley value estimators. S-LSS performs worst on five test problems, with the notable exception of the airport game with 100 players in Figure 6, where S-LSS even converges faster than KernelSHAP (KS). Not surprisingly, KS is consistently faster than UKS and LSS. For the three large cooperative games in Figure 6, Figure 7 and Figure 8, SRS-LSS converges faster than KS, whereas that comparison reverses for the machine learning tasks in Figure 9, Figure 10 and Figure 11.

7. Summary, Conclusions and Outlook

The celebrated KernelSHAP approach by Lundberg and Lee [1] as well as its later refinement Unbiased KernelSHAP (UKS) proposed by Covert and Lee [7] compute the Shapley value as the solution of a weighted least squares optimization problem. While these two algorithms are extremely well established in the machine learning community, the methods from the paper by Benati et al. [14], which are also based on the least squares formulation of the Shapley value, have received fairly little attention. To our knowledge, we are the first to apply them in the context of explainable artificial intelligence. We formulate the ideas from Benati et al. [14] as approximation algorithms for general TU games. We refer to their weighted sampling strategy as LSS (Least Square Sampling). As for their approach towards stratification, we distinguish S-LSS with no reuse of samples across strata from SRS-LSS with sample reuse across strata as proposed in Benati et al. [14].
As the result of our thorough and detailed analysis of LSS, S-LSS and SRS-LSS, we presented three key findings.
First, in Proposition 5, we showed that S-LSS as proposed in [14] is not a valid stratified variant of LSS in the sense of the definition provided in SubSection 3.2, i.e., for S-LSS the strata overlap. As a consequence, S-LSS might reduce the variance of the obtained estimator, but it can also increase it in comparison to LSS, see Theorem 2.
Second, in Theorem 3, we showed that the SRS-LSS approach proposed by Benati et al. [14] introduces covariance terms between stratum estimators, making its theoretical variance difficult to analyze. Although empirical results suggest that the variance is significantly reduced in comparison to LSS and S-LSS, a theoretical analysis remains an open research question.
Third, in Theorem 4, we established that LSS and UKS are importance sampling estimators in the sense of Theorem 1, addressing the same underlying problem but differing in their respective sampling strategies. Therefore, neither LSS nor UKS is superior to the other; which algorithm's variance is smaller depends on the underlying problem and on how well the respective sampling strategy suits it.
As noted in the introduction, this paper's aim is neither to propose a new Shapley value estimator nor to compare numerous approximation algorithms. Rather, the focus is on providing structural, theoretical, and algorithmic insight into the methods from Benati et al. [14]. While antithetic sampling, as discussed, for instance, in [32], could be integrated into the SRS-LSS algorithm, this would have vastly exceeded the scope of this study. In the future, it could be worthwhile to investigate whether algorithmic approaches comparable to sophisticated stratification strategies [28,33,34] could be successfully incorporated to enhance the performance of SRS-LSS. Indeed, our theoretical and numerical results suggest that the unbiased SRS-LSS estimator merits more attention. Another, perhaps even more pressing, question is why SRS-LSS outperforms KernelSHAP for weighted voting games and airport games while KernelSHAP exhibits superior performance for our test problems from interpretable machine learning. We have yet to identify the properties of the characteristic function responsible for this phenomenon.

Author Contributions

Conceptualization, T.P. and J.S.; Methodology, T.P. and J.S.; Software, T.P.; Validation, J.S.; Formal Analysis, T.P. and J.S.; Writing—original draft, T.P. and J.S.; Writing—review & editing, T.P. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

$\mathcal{U}$: uniform distribution
$\circ$: Hadamard product of vectors
$\hat{\cdot}$: estimator or estimated value
$\mathbf{1}$: all-ones vector with dimension being clear from context
$\mathbf{0}$: all-zeros vector with dimension being clear from context
$\mathbb{1}_{\text{condition}}$: indicator function, 1 if the condition is satisfied, 0 otherwise
$n$: number of players
$N$: player set, $N = \{1, \ldots, n\}$
$v$: characteristic function, $v\colon 2^N \to \mathbb{R}$
$S$: coalition, $S \subseteq N$
$z(S)$: indicator vector corresponding to coalition S, $z(S) = \bigl(\mathbb{1}_{i \in S}\bigr)_{i=1}^n$
$S(z)$: subset corresponding to indicator vector $z$, $S(z) = \{i \in N \mid z_i = 1\}$
$w$: weight vector, $w(S) = \bigl(w_1(S), \ldots, w_n(S)\bigr)$
$\nu$: value vector, $\nu(S, v) = \bigl(\nu_1(S, v), \ldots, \nu_n(S, v)\bigr)$
$\alpha$: linear solution concept, $\alpha(N, v) = \sum_{S \subseteq N} w(S) \circ \nu(S, v)$
$\phi$: Shapley value
$p$: probability mass function used for sample generation, $p\colon 2^N \to [0, 1]$
$\tau$: sample size
$S$: sampled coalition, $S \subseteq N$
$\mathcal{S}$: sample consisting of $\tau$ coalitions, $\mathcal{S} \overset{\mathrm{iid}}{\sim} p$, $\mathcal{S} = [S_1, \ldots, S_\tau]$

Appendix A. Comparison of Our Results Against the SHAPley Interaction Quantification (SHAP-IQ) Formulation of UKS

Fumagalli et al. [30] state that choosing the probability mass function
\[
p'(S) = \begin{cases} \dfrac{1}{2H_{n-1}} \left((n-1)\dbinom{n-2}{|S|-1}\right)^{-1} & \text{if } 0 < |S| < n, \\[1ex] 0 & \text{else,} \end{cases}
\]
with $H_{n-1}$ being the $(n-1)$-th harmonic number, ensures that UKS is equal to SHAP-IQ for the Shapley value.
In detail, they state that the estimator obtained via UKS can be rewritten as
\[
\hat{\phi}_i = \frac{v(N)}{n} + \frac{2H_{n-1}}{\tau} \sum_{S \in \mathcal{S}'} \left(\mathbb{1}_{i \in S} - \frac{|S|}{n}\right) v(S), \tag{A1}
\]
where $\mathcal{S}' \overset{\mathrm{iid}}{\sim} p'$.
Let us now rewrite (A1) similarly to (70), such that
\[
\hat{\phi}_i = \frac{v(N)}{n} + \frac{1}{\tau} \sum_{S \in \mathcal{S}'} \underbrace{2H_{n-1}\left(\mathbb{1}_{i \in S} - \frac{|S|}{n}\right)}_{w'_i(S)/p'(S)} v(S).
\]
By comparing this formulation to (70) and Proposition 8, one obtains that it shares the same initial term $\frac{v(N)}{n}$ with our formulation of UKS.
Next, we compare the functions $w_i$ and $w'_i$. For $0 < |S| < n$, we find
\[
w'_i(S) = p'(S)\, \frac{w'_i(S)}{p'(S)} = \frac{1}{2H_{n-1}} \left((n-1)\binom{n-2}{|S|-1}\right)^{-1} 2H_{n-1}\left(\mathbb{1}_{i \in S} - \frac{|S|}{n}\right) = \frac{1}{n-1} \cdot \frac{(|S|-1)!\,(n-|S|-1)!}{(n-2)!} \left(\mathbb{1}_{i \in S} - \frac{|S|}{n}\right).
\]
If $i \in S$, we obtain
\[
w'_i(S) = \frac{1}{n-1} \cdot \frac{(|S|-1)!\,(n-|S|-1)!}{(n-2)!} \cdot \frac{n-|S|}{n} = \frac{(|S|-1)!\,(n-|S|-1)!\,(n-|S|)}{n(n-1)(n-2)!} = \frac{(|S|-1)!\,(n-|S|)!}{n!} = \frac{|S|!\,(n-|S|)!}{|S|\, n!} = \frac{1}{|S|\binom{n}{|S|}},
\]
which matches the case $i \in S$ in (79).
On the other hand, if $i \notin S$, we have
\[
w'_i(S) = \frac{1}{n-1} \cdot \frac{(|S|-1)!\,(n-|S|-1)!}{(n-2)!} \cdot \left(-\frac{|S|}{n}\right) = -\frac{|S|!\,(n-|S|-1)!}{n!} = -\frac{|S|!\,(n-|S|)!}{n!\,(n-|S|)} = -\frac{1}{(n-|S|)\binom{n}{|S|}},
\]
which matches the case $i \notin S$ in (79).
Finally, we compare the sampling distributions $p$ and $p'$. While Fumagalli et al. [30] define their sampling distribution via
\[
p'(S) \propto \omega'(S) = \frac{1}{(n-1)\binom{n-2}{|S|-1}}, \tag{A2}
\]
we define ours as $p(S) \propto \omega(S)$ with $\omega$ being specified in (76). Reformulating $\omega$ results in
\[
\omega(S) = \frac{n-1}{\binom{n}{|S|}\, |S|\, (n-|S|)} = \frac{|S|!\,(n-|S|)!\,(n-1)}{n!\, |S|\,(n-|S|)} = \frac{|S|\,(|S|-1)!\,(n-|S|)\,(n-|S|-1)!\,(n-1)}{n(n-1)(n-2)!\, |S|\,(n-|S|)} = \frac{(|S|-1)!\,(n-|S|-1)!}{n\,(n-2)!} = \frac{1}{n\binom{n-2}{|S|-1}}. \tag{A3}
\]
By comparing (A2) and (A3), one directly obtains that these expressions differ only by a constant factor, i.e.,
\[
\omega(S) = \frac{n-1}{n}\, \omega'(S),
\]
which means that the corresponding sampling distributions $p$ and $p'$ are equivalent.
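This proportionality is again easy to confirm numerically; the Python sketch below (standard library only) checks $\omega(S) = \frac{n-1}{n}\,\omega'(S)$ for all coalition sizes:

```python
from math import comb

n = 10
for s in range(1, n):  # coalition size |S|
    omega = (n - 1) / (comb(n, s) * s * (n - s))       # our kernel (76)
    omega_prime = 1 / ((n - 1) * comb(n - 2, s - 1))   # SHAP-IQ kernel (A2)
    assert abs(omega - (n - 1) / n * omega_prime) < 1e-12
```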

References

  1. Lundberg, S.M.; Lee, S. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4768–4777. [Google Scholar] [CrossRef]
  2. Shapley, L. S. A value for n-person games. Contributions to the Theory of Games 1953, 28, 307–317. [Google Scholar]
  3. Deng, X.; Papadimitriou, C. H. On the complexity of cooperative solution concepts. Math. Oper. Res. 1994, 19, 257–266. [Google Scholar] [CrossRef]
  4. Faigle, U.; Kern, W. The Shapley value for cooperative games under precedence constraints. Int. J. Game Theory 1992, 21, 249–266. [Google Scholar] [CrossRef]
  5. Fernández, J.R.; Algaba, E.; Bilbao, J.M.; Jiménez, A.; Jiménez, N.; López, J.J. Generating functions for computing the Myerson value. Ann. Oper. Res. 2002, 9, 143–158. [Google Scholar] [CrossRef]
  6. Chakravarty, S.R.; Mitra, M.; Sarkar, P. A Course on Cooperative Game Theory; Cambridge University Press: Cambridge, UK, 2015. [Google Scholar]
  7. Covert, I.; Lee, S. Improving KernelSHAP: Practical Shapley Value Estimation Using Linear Regression. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics; Banerjee, A., Fukumizu, K., Eds.; PLMR, 2021; Volume 130, pp. 3457–3465. Available online: http://proceedings.mlr.press/v130/covert21a/covert21a.pdf (accessed on 27 January 2026).
  8. Charnes, A.; Golany, B.; Keane, M.; Rousseau, J. Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations. In Econometrics of Planning and Efficiency; Springer Netherlands: Dordrecht, 1988; Volume 11, pp. 123–133. [Google Scholar] [CrossRef]
  9. Ruiz, L.M.; Valenciano, F.; Zarzuelo, J.M. The Family of Least Square Values for Transferable Utility Games. Games Econ. Behav. 1998, 24, 109–130. [Google Scholar] [CrossRef]
  10. Littlechild, S.C.; Thompson, G.F. Aircraft landing fees: a game theory approach. Bell J. Econ. 1977, 8, 186–204. [Google Scholar] [CrossRef]
  11. Algaba, E.; Bilbao, J.M.; Fernández-García, J.R. The distribution of power in the European Constitution. Eur. J. Oper. Res. 2007, 176, 1752–1766. [Google Scholar] [CrossRef]
  12. Kóczy, L.A. Beyond Lisbon. Demographic trends and voting power in the European Union Council of Ministers. Math. Soc. Sci. 2012, 63, 152–158. [Google Scholar] [CrossRef]
  13. Kóczy, L.A. Brexit and Power in the Council of the European Union. Games 2021, 12, 51. [Google Scholar] [CrossRef]
  14. Benati, S.; López-Blázquez, F.; Puerto, J. A stochastic approach to approximate values in cooperative games. Eur. J. Oper. Res. 2019, 279, 93–106. [Google Scholar] [CrossRef]
  15. Pollmann, T.; Staudacher, J. On Importance Sampling and Multilinear Extensions for Approximating Shapley Values with Applications to Explainable Artificial Intelligence. Preprints 2026. [Google Scholar] [CrossRef]
  16. Peters, H. Game theory: A Multi-leveled approach, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar] [CrossRef]
  17. Rozemberczki, B.; Watson, L.; Bayer, P.; Yang, H.; Kiss, O.; Nilsson, S.; Sarkar, R. The Shapley value in machine learning. In The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence, IJCAI-ECAI; de Raedt, L., Ed.; International Joint Conferences on Artificial Intelligence Organization: Vienna, Austria, 2022; pp. 5572–5579. [Google Scholar] [CrossRef]
  18. Chen, H.; Covert, I.C.; Lundberg, S.M.; Lee, S. Algorithms to estimate Shapley value feature attributions. Nat. Mach. Intell. 2023, 5, 590–601. [Google Scholar] [CrossRef]
  19. Molnar, C. Interpreting Machine Learning Models With SHAP; LeanPub, 2023. Available online: https://leanpub.com/shap.
  20. Borm, P.; Hamers, H.; Hendrickx, R. Operations research games: A survey. Top 2001, 9, 139–199. [Google Scholar] [CrossRef]
  21. Fatima, S.S.; Wooldridge, M.; Jennings, N.R. A linear approximation method for the Shapley value. Artif. Intell. 2008, 172, 1673–1699. [Google Scholar] [CrossRef]
  22. Staudacher, J.; Kóczy, L.Á.; Stach, I.; Filipp, J.; Kramer, M.; Noffke, T.; Olsson, L.; Pichler, J.; Singer, T. Computing power indices for weighted voting games via dynamic programming. Oper. Res. 2021, 31, 123–145. [Google Scholar] [CrossRef]
  23. Staudacher, J.; Wagner, F.; Filipp, J. Dynamic Programming for Computing Power Indices for Weighted Voting Games with Precoalitions. Games 2021, 13, 6. [Google Scholar] [CrossRef]
  24. Sundararajan, M.; Najmi, A. The Many Shapley Values for Model Explanation. In Proceedings of the 37th International Conference on Machine Learning; Daumé III, H., Singh, A., Eds.; PMLR: London, UK, 2020; Volume 119, pp. 9269–9278. Available online: http://proceedings.mlr.press/v119/sundararajan20b/sundararajan20b.pdf (accessed on 27 January 2026).
  25. Rubinstein, R.; Kroese, D. Monte Carlo methods. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 48–58. [Google Scholar] [CrossRef]
  26. Botev, Z.; Ridder, A. Variance reduction. In Wiley statsRef: Statistics Reference Online; Wiley: Hoboken, NJ, USA, 2017; pp. 1–6. [Google Scholar] [CrossRef]
  27. Rubinstein, R.Y.; Kroese, D.P. Simulation and the Monte Carlo method; John Wiley & Sons: Hoboken, New Jersey, USA, 2007. [Google Scholar] [CrossRef]
  28. Castro, J.; Gómez, D.; Molina, E.; Tejada, J. Improving polynomial estimation of the Shapley value by stratified random sampling with optimum allocation. Comput. Oper. Res. 2017, 82, 180–188. [Google Scholar] [CrossRef]
  29. Hunter, D. An Upper Bound for the Probability of a Union. J. Appl. Probab. 1976, 13, 597–603. [Google Scholar] [CrossRef]
  30. Fumagalli, F.; Muschalik, M.; Kolpaczki, P.; Hüllermeier, E.; Hammer, B. SHAP-IQ: Unified Approximation of any-order Shapley Interactions. In Advances in Neural Information Processing Systems; Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S., Eds.; Curran Associates, Inc.: USA, 2023; Volume 36, pp. 11515–11551. [Google Scholar]
  31. Castro, J.; Gómez, D.; Tejada, J. Polynomial calculation of the Shapley value based on sampling. Comput. Oper. Res. 2009, 36, 1726–1730. [Google Scholar] [CrossRef]
  32. Staudacher, J.; Pollmann, T. Assessing Antithetic Sampling for Approximating Shapley, Banzhaf, and Owen Values. AppliedMath 2023, 3, 957–988. [Google Scholar] [CrossRef]
  33. Neyman, J. On the Two Different Aspects of the Representative Method: The Method of Stratified Sampling and the Method of Purposive Selection. J. R. Stat. Soc. 1934, 97, 558–625. [Google Scholar] [CrossRef]
  34. Burgess, M.A.; Chapman, A.C. Approximating the Shapley Value Using Stratified Empirical Bernstein Sampling. IJCAI 2021, 73–81. [Google Scholar]
Figure 1. Let $(N, v)$ be a cooperative game with $N = \{1, 2\}$ and $v(\emptyset) = 0$, $v(\{1\}) = 0$, $v(\{2\}) = 2$, and $v(\{1, 2\}) = 10$. The player 1 and player 2 axes should be interpreted in a discrete way such that 0 denotes the exclusion and 1 the inclusion of a player. The blue plane is the solution to problem (20) with $m$ being defined as in (25) and $\gamma = 1$ being obtained from (26). As a result, the coefficients defining this plane are the Shapley values of the game $(N, v)$, i.e., $\phi_1 = 4$ and $\phi_2 = 6$.
Figure 2. Theoretical variance comparison of LSS and S-LSS evaluated on the weighted voting game defined by (7). The sample budget is τ = 10000 , and the sample sizes of S-LSS are distributed according to (58) with ceiling operations applied whenever the result is not an integer. The crosses denote the mean variance across all players’ Shapley values, while the dots specify the variance of ϕ ^ 1 .
Figure 3. The different sampling distributions of LSS and UKS for n = 9 .
Figure 4. Theoretical variance comparison of LSS and UKS evaluated on the weighted voting games defined by (7). The number of evaluations of v is $\tau = 10000$. The crosses represent the mean variance across all players' Shapley values, while the dots denote the variance for player $i = 1$.
Figure 5. Empirical variance validation of ϕ ^ 1 obtained via LSS, S-LSS and UKS evaluated on the weighted voting games defined by (7). The overall sample budget is T = 10000 . The crosses represent the theoretical variances, while the dots denote the empirical variances. The empirical variances were obtained over 5000 runs.
Figure 6. Empirical mean squared error comparison of ϕ ^ obtained via LSS, S-LSS, SRS-LSS, KS and UKS evaluated on an airport game with 100 players. The mean squared errors were averaged over 250 runs.
Figure 7. Empirical mean squared error comparison of ϕ ^ obtained via LSS, S-LSS, SRS-LSS, KS and UKS evaluated on a weighted voting game with 50 players. The mean squared errors were averaged over 250 runs.
Figure 8. Empirical mean squared error comparison of ϕ ^ obtained via LSS, S-LSS, SRS-LSS, KS and UKS evaluated on a weighted voting game with 150 players. The mean squared errors were averaged over 250 runs.
Figure 9. Empirical mean squared error comparison of ϕ ^ obtained via LSS, S-LSS, SRS-LSS, KS and UKS evaluated on the diabetes example. The mean squared errors were averaged over 250 runs.
Figure 10. Empirical mean squared error comparison of ϕ ^ obtained via LSS, S-LSS, SRS-LSS, KS and UKS evaluated on the California housing example. The mean squared errors were averaged over 250 runs.
Figure 11. Empirical mean squared error comparison of ϕ ^ obtained via LSS, S-LSS, SRS-LSS, KS and UKS evaluated on the wine example. The mean squared errors were averaged over 250 runs.
Table 1. Population and estimator variances for all strata of the S-LSS estimator.

| s | j | elements | population var. | estimator var. |
|---|---|----------|-----------------|----------------|
| 1 | 1 | v({1}) = 0 | 0 | 0 |
| 1 | 2 | v({2}) = 0 | 0 | 0 |
| 1 | 3 | v({3}) = 0 | 0 | 0 |
| 2 | 1 | v({1,2}) = 0, v({1,3}) = 0 | 0 | 0 |
| 2 | 2 | v({1,2}) = 0, v({2,3}) = 1 | 1/4 | 1/(4τ_{2,2}) |
| 2 | 3 | v({1,3}) = 0, v({2,3}) = 1 | 1/4 | 1/(4τ_{3,2}) |
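The estimator variances in the table follow the elementary identity that the variance of a within-stratum sample mean over τ i.i.d. uniform draws is the stratum's population variance divided by τ. A minimal sketch in exact rational arithmetic (the stratum values {0, 1} are taken from the table; the sample size τ = 10 is an arbitrary illustrative budget, not a value from the text):

```python
from fractions import Fraction

def population_variance(values):
    """Population variance of a list of equally likely values, exactly."""
    vals = [Fraction(v) for v in values]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Stratum (j = 2, s = 2) contains v({1,2}) = 0 and v({2,3}) = 1,
# so its population variance is 1/4, matching the table.
pop_var = population_variance([0, 1])
print(pop_var)  # 1/4

# Variance of the stratum mean after tau i.i.d. draws: pop_var / tau.
tau = 10
print(pop_var / tau)  # 1/40
```

The degenerate strata (all values equal) have population variance 0, so their stratum means contribute no variance regardless of the sample allocation.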
Table 2. All valid realizations of the sample S_2 (order ignored) when τ_2 = 3, together with their respective probabilities as well as the corresponding resulting values of ā_{1,2}, ā_{3,2}, and ā_{1,2}·ā_{3,2}. The last row shows the expected values over all valid realizations, i.e., conditioned on B̄.

| #permutations | P(S_2) | P(S_2 ∣ B̄) | elements of S_2 | ā_{1,2} | ā_{3,2} | ā_{1,2}·ā_{3,2} |
|---|---|---|---|---|---|---|
| 6 | 6/27 = 2/9 | 2/8 | {1,2}, {1,3}, {2,3} | 1/2 | 1/2 | 1/4 |
| 3 | 3/27 = 1/9 | 1/8 | {1,2}, {1,2}, {1,3} | 1/3 | 1 | 1/3 |
| 3 | 3/27 = 1/9 | 1/8 | {1,2}, {1,2}, {2,3} | 0 | 0 | 0 |
| 3 | 3/27 = 1/9 | 1/8 | {1,2}, {1,3}, {1,3} | 2/3 | 1 | 2/3 |
| 3 | 3/27 = 1/9 | 1/8 | {1,2}, {2,3}, {2,3} | 0 | 0 | 0 |
| 3 | 3/27 = 1/9 | 1/8 | {1,3}, {1,3}, {2,3} | 1 | 2/3 | 2/3 |
| 3 | 3/27 = 1/9 | 1/8 | {1,3}, {2,3}, {2,3} | 1 | 1/3 | 1/3 |
|  |  |  | E[· ∣ B̄] | 1/2 | 1/2 | 5/16 |
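The probabilities and the conditional expectation E[ā_{1,2}·ā_{3,2} ∣ B̄] = 5/16 from Table 2 can be verified by direct enumeration of all 3³ = 27 equally likely ordered draws. In the sketch below, the ā-values are transcribed from the table rather than recomputed from their definition, and the event B̄ is read as "every player appears in at least one sampled coalition" (an assumption consistent with which realizations the table lists as valid):

```python
from fractions import Fraction
from itertools import product
from collections import Counter

C12, C13, C23 = (1, 2), (1, 3), (2, 3)

# (a-bar_{1,2}, a-bar_{3,2}) per multiset realization, transcribed from Table 2
abar = {
    (C12, C13, C23): (Fraction(1, 2), Fraction(1, 2)),
    (C12, C12, C13): (Fraction(1, 3), Fraction(1)),
    (C12, C12, C23): (Fraction(0), Fraction(0)),
    (C12, C13, C13): (Fraction(2, 3), Fraction(1)),
    (C12, C23, C23): (Fraction(0), Fraction(0)),
    (C13, C13, C23): (Fraction(1), Fraction(2, 3)),
    (C13, C23, C23): (Fraction(1), Fraction(1, 3)),
}

# Enumerate all 27 equally likely ordered samples of size tau_2 = 3
# and aggregate their probabilities by multiset (order ignored).
prob = Counter()
for draw in product([C12, C13, C23], repeat=3):
    prob[tuple(sorted(draw))] += Fraction(1, 27)

# B-bar: realizations in which every player occurs in some coalition.
valid = {m: p for m, p in prob.items() if len(set().union(*m)) == 3}
total = sum(valid.values())                 # P(B-bar) = 24/27 = 8/9
cond = {m: p / total for m, p in valid.items()}

e_prod = sum(cond[m] * abar[m][0] * abar[m][1] for m in abar)
print(e_prod)  # 5/16
```

The three excluded realizations are those where all draws hit the same coalition, each leaving one player unobserved; conditioning on B̄ rescales the remaining probabilities to 2/8 and 1/8, exactly as in the table.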
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.