Preprint Article (this version is not peer-reviewed; a peer-reviewed article of this preprint also exists)

Optimal and efficient approximations of gradients of functions with non-independent variables

Submitted: 16 June 2024. Posted: 17 June 2024.

Abstract
Gradients of smooth functions with non-independent variables are relevant for exploring complex models and for the optimization of functions subjected to constraints. In this paper, we investigate new and simple approximations and computations of such gradients by making use of independent, central and symmetric variables. Such approximations are well-suited for applications in which the computations of the gradients are too expensive or impossible. The derived upper-bounds of the biases of our approximations do not suffer from the curse of dimensionality for any $2$-smooth function, and theoretically improve the known results. Also, our estimators of such gradients reach the optimal (mean squared error) rate of convergence (i.e., $\mathcal{O}(N^{-1})$) for the same class of functions. Numerical comparisons based on a test case and a high-dimensional PDE model show the efficiency of our approach.

1. Introduction

Non-independent variables arise when at least two variables do not vary independently, and such variables are often characterized by their covariance matrices, distribution functions, copulas, or weighted distributions (see e.g., [1,2,3,4,5,6,7]). Recently, dependency models have provided explicit functions that link these variables together by means of additional independent variables ([8,9,10,11,12]). Models with non-independent input variables, including functions subjected to constraints, are widely encountered in different scientific fields, such as data analysis, quantitative risk analysis, and uncertainty quantification (see e.g., [13,14,15]).
Analyzing such functions requires being able to calculate or compute their dependent gradients, that is, the gradients that account for the dependencies among the inputs. Recall that gradients are involved in i) inverse problems and optimization (see e.g., [16,17,18,19,20]); ii) exploring complex mathematical models or simulators (see [21,22,23,24,25,26,27,28] for independent inputs and [9,15] for non-independent variables); iii) Poincaré inequalities and equalities ([9,28,29,30]); and recently in iv) derivative-based ANOVA (i.e., exact expansions) of functions ([28]). While the first-order derivatives of functions with non-independent variables have been derived in [9] for screening dependent inputs of high-dimensional models, the theoretical expressions of the gradients of such functions (dependent gradients) have been introduced in [15], highlighting the difference between the gradients and the first-order partial derivatives when the input variables are dependent or correlated.
In high-dimensional settings and for time-demanding models, having an efficient approach for computing the dependent gradients provided in [15] using a few model evaluations is worth investigating. So far, the adjoint methods can provide the exact classical gradients for some classes of PDE/ODE-based models ([31,32,33,34,35,36]). Additionally, Richardson's extrapolation and its generalization considered in [37] provide accurate estimates of the classical gradients using a number of model runs that strongly depends on the dimensionality. In contrast, the Monte-Carlo approach allows for computing the classical gradients using a number of model runs that can be much smaller than the dimensionality (i.e., $d \gg N$) ([17,38,39]). The Monte-Carlo approach is a consequence of the Stokes theorem, which claims that the expectation of a function evaluated at a random point about $\mathbf{x} \in \mathbb{R}^d$ is the gradient of a certain function. Such a property leads to randomized approximations of the classical gradients in derivative-free optimization or zero-order stochastic optimization (see [16,18,19,20] and references therein). Such approximations are also relevant for applications in which the computations of the gradients are impossible ([20]).
Most of the randomized approximations of the classical gradients, including the Monte-Carlo approach, rely on randomized kernels and/or random vectors that are uniformly distributed on the unit ball. The qualities of such approximations are often assessed by the upper-bounds of the biases and the rates of convergence. The upper-bounds provided in [19,20,40] depend on the dimensionality in general.
In this paper, we propose new surrogates of the gradients of smooth functions with non-independent inputs and the associated estimators that
  • are simple and applicable to a wide class of functions by making use of model evaluations at randomized points, which are only based on independent, central and symmetric variables;
  • lead to a dimension-free upper-bound of the bias, and improve the best known upper-bounds of the bias for the classical gradients;
  • lead to the optimal and parametric (mean squared error) rates of convergence;
  • increase the computational efficiency and accuracy of the gradient estimates by means of a set of constraints.
Surrogates of dependent gradients are derived in Section 3 by combining the properties of i) the generalized Richardson extrapolation approach thanks to a set of constraints, and ii) the Monte-Carlo approach based only on independent random variables that are symmetrically distributed about zero. Such expressions are followed by their orders of approximation, biases and a comparison with known results for the classical gradients. We also provide the estimators of such surrogates and their associated mean squared errors, including the rates of convergence for a wide class of functions (see Section 3.3). A number of numerical comparisons are considered so as to assess the efficiency of our approach. While Section 4 presents comparisons of our approach to other methods, simulations based on a high-dimensional PDE (spatio-temporal) model with given auto-correlations among the initial conditions are considered in Section 5 to compare our approach to the adjoint-based methods. We conclude this work in Section 6.

2. Preliminaries

For an integer $d > 0$, let $\mathbf{X} := (X_1, \ldots, X_d)$ be a random vector of continuous and non-independent variables having $F$ as the joint cumulative distribution function (CDF) (i.e., $\mathbf{X} \sim F$). For any $j \in \{1, \ldots, d\}$, we use $F_{x_j}$ or $F_j$ for the marginal CDF of $X_j$ and $F_j^{-1}$ for its inverse. Also, we use $(\sim j) := (1, \ldots, j-1, j+1, \ldots, d)$ and $\mathbf{X}_{\sim j} := (X_1, \ldots, X_{j-1}, X_{j+1}, \ldots, X_d)$. The equality (in distribution) $\mathbf{X} \stackrel{d}{=} \mathbf{Z}$ means that $\mathbf{X}$ and $\mathbf{Z}$ have the same CDF.
As the sample values of $\mathbf{X}$ are dependent, here we use $\frac{\partial f}{\partial x_k}$ for the formal partial derivative of $f$ w.r.t. $x_k$, that is, the partial derivative obtained by considering the other inputs as constant or independent of $x_k$. Thus, $\nabla f := \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_d}\right)^T$ stands for the formal or classical gradient of $f$.
Given an open set $\Omega \subseteq \mathbb{R}^d$, consider a weakly partially differentiable function $f : \Omega \to \mathbb{R}$ ([41,42]). Given $\boldsymbol{\imath} := (i_1, \ldots, i_d) \in \mathbb{N}^d$, denote $D^{(\boldsymbol{\imath})} f := \frac{\partial^{i_1 + \cdots + i_d} f}{\partial x_1^{i_1} \cdots \partial x_d^{i_d}}$; $(\mathbf{x})^{\boldsymbol{\imath}} = \mathbf{x}^{\boldsymbol{\imath}} := \prod_{k=1}^{d} x_k^{i_k}$, $\boldsymbol{\imath}! = i_1! \cdots i_d!$, and consider the Hölder space of $\alpha$-smooth functions given by, for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$,
$$\mathcal{H}_\alpha := \left\{ f : \mathbb{R}^d \to \mathbb{R}^n \; : \; \left\| f(\mathbf{x}) - \sum_{0 \le i_1 + \cdots + i_d \le \alpha - 1} \frac{D^{(\boldsymbol{\imath})} f(\mathbf{y})}{\boldsymbol{\imath}!} \left(\mathbf{x} - \mathbf{y}\right)^{\boldsymbol{\imath}} \right\| \le M_\alpha \left\| \mathbf{x} - \mathbf{y} \right\|_2^{\alpha} \right\},$$
with $\alpha \ge 1$ and $M_\alpha > 0$. We use $\|\cdot\|_2$ for the Euclidean norm, $\|\cdot\|_1$ for the $L_1$-norm, $\mathbb{E}(\cdot)$ for the expectation and $\mathbb{V}(\cdot)$ for the variance.
For the stochastic evaluations of functions, consider $L, q \in \mathbb{N} \setminus \{0\}$, $\beta_\ell \in \mathbb{R}$ with $\ell = 1, \ldots, L$, $\mathbf{h} := (h_1, \ldots, h_d) \in \mathbb{R}_+^d$, and denote with $\mathbf{V} := (V_1, \ldots, V_d)$ a $d$-dimensional random vector of independent variables satisfying: $\forall\, j \in \{1, \ldots, d\}$,
$$\mathbb{E}\left[V_j\right] = 0; \qquad \mathbb{E}\left[V_j^2\right] = \sigma^2; \qquad \mathbb{E}\left[V_j^{2q+1}\right] = 0; \qquad \mathbb{E}\left[V_j^{2q}\right] < +\infty.$$
Random vectors of independent variables that are symmetrically distributed about zero are instances of V , including the standard Gaussian random vector and symmetric uniform distributions about zero.
Also, denote $\mathbf{h}\mathbf{V} := (h_1 V_1, \ldots, h_d V_d)$; $\mathbf{h}^{-1}\mathbf{V} := (V_1/h_1, \ldots, V_d/h_d)$ and $\beta_\ell \mathbf{h}\mathbf{V} := (\beta_\ell h_1 V_1, \ldots, \beta_\ell h_d V_d)$. The reals $\beta_\ell$'s are used for controlling the order of approximations and the order of derivatives (i.e., $\|\boldsymbol{\imath}\|_1 = 1, 2$) we are interested in. Finally, the $h_j$'s are used to define a neighborhood of a sample point $\mathbf{x}$ of $\mathbf{X}$. Thus, using $\beta_{max} := \max\left(|\beta_1|, \ldots, |\beta_L|\right)$ and keeping in mind the variance of $\beta_\ell h_j V_j$, we assume that $\forall\, j \in \{1, \ldots, d\}$,
Assumption (A1): $\beta_{max}\, h_j\, \sigma \le 1/2$, or equivalently $0 < \beta_{max}\, h_j\, |V_j| \le 1$ for bounded $V_j$'s.
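As a concrete illustration (a minimal Python sketch of our own, with an arbitrary value $\xi = 0.5$ chosen only for the example), the symmetric uniform distribution $\mathcal{U}(-\xi, \xi)$ satisfies the above moment conditions with $\sigma^2 = \xi^2/3$, a fact used in the numerical sections and in Appendix C:

```python
import numpy as np

# Empirical check that V_j ~ U(-xi, xi) is centered, symmetric and has
# finite even moments, with sigma^2 = xi^2 / 3 (illustration only).
rng = np.random.default_rng(0)
xi = 0.5
V = rng.uniform(-xi, xi, size=1_000_000)

print(V.mean())              # ~ 0          (E[V_j] = 0)
print(V.var(), xi**2 / 3.0)  # ~ xi^2 / 3   (E[V_j^2] = sigma^2)
print((V**3).mean())         # ~ 0          (E[V_j^{2q+1}] = 0)
```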

3. Main Results

This section aims at providing new expressions of the gradient of a function with non-independent variables, and the associated orders of approximation. We are also going to derive the estimators of such a gradient, including the optimal and parametric rates of convergence. Recall that the input variables are said to be non-independent whenever there exist at least two variables $X_j$, $X_k$ such that the joint CDF $F_{j,k}(x_j, x_k) \ne F_j(x_j)\, F_k(x_k)$.

3.1. Stochastic Expressions of the Gradients of Functions with Dependent Variables

Using the fact that $\mathbf{X} \sim F$ with $F(\mathbf{x}) \ne \prod_{j=1}^{d} F_j(x_j)$, we are able to model $\mathbf{X}$ as follows ([8,9,10,11,12,14,43]):
$$\mathbf{X}_{\sim j} \stackrel{d}{=} r_j\left(X_j, \mathbf{Z}_j\right) = \left(r_{1,j}\left(X_j, \mathbf{Z}_j\right), \ldots, r_{j-1,j}\left(X_j, \mathbf{Z}_j\right), r_{j+1,j}\left(X_j, \mathbf{Z}_j\right), \ldots, r_{d,j}\left(X_j, \mathbf{Z}_j\right)\right)^T,$$
where $r_j : \mathbb{R}^{d} \to \mathbb{R}^{d-1}$; $X_j$ and $\mathbf{Z}_j := \left(Z_1, \ldots, Z_{j-1}, Z_{j+1}, \ldots, Z_d\right)$ are independent. Moreover, we have $\left(X_j, \mathbf{X}_{\sim j}\right) \stackrel{d}{=} \left(X_j, r_j\left(X_j, \mathbf{Z}_j\right)\right)$, and it is worth noting that the function $r_j$ is invertible w.r.t. $\mathbf{Z}_j$ for continuous variables, that is,
$$\mathbf{Z}_j = r_j^{-1}\left(\mathbf{X}_{\sim j} \,|\, X_j\right).$$
Note that the formal Jacobian matrix of $g : \mathbb{R}^d \to \mathbb{R}^d$, $\mathbf{x} \mapsto \mathbf{x}$ is the identity matrix. As $\mathbf{x}$ is a sample value of $\mathbf{X}$, the dependent Jacobian of $g$ based on the above dependency function is clearly not the identity matrix, due to the fact that such a matrix accounts for the dependencies among the elements of $\mathbf{x}$. The dependent partial derivatives of $\mathbf{x}$ w.r.t. $x_j$ are then given by ([9,15])
$$\mathcal{J}^{(j)}(\mathbf{x}) := \frac{\partial \mathbf{x}}{\partial x_j} = \left(\frac{\partial r_{1,j}}{\partial x_j}, \,\ldots,\, \underbrace{1}_{j\text{-th position}}, \,\ldots,\, \frac{\partial r_{d,j}}{\partial x_j}\right)^T \left(x_j,\, r_j^{-1}\left(\mathbf{x}_{\sim j} \,|\, x_j\right)\right),$$
and the dependent Jacobian matrix becomes (see [15] for more details)
$$\mathcal{J}_d(\mathbf{x}) := \left[\mathcal{J}^{(1)}(\mathbf{x}), \ldots, \mathcal{J}^{(d)}(\mathbf{x})\right].$$
Moreover, the gradient of $f$ with non-independent variables is given by ([15])
$$\mathrm{grad}(f)(\mathbf{x}) := \left[\mathcal{J}_d(\mathbf{x})^T\, \mathcal{J}_d(\mathbf{x})\right]^{-1} \nabla f(\mathbf{x}) = G^{-1}(\mathbf{x})\, \nabla f(\mathbf{x}),$$
with $G(\mathbf{x}) := \mathcal{J}_d(\mathbf{x})^T \mathcal{J}_d(\mathbf{x})$ the tensor metric and $G^{-1}(\mathbf{x})$ its generalized inverse. Based on the above framework, Theorem 1 provides the stochastic expression of $\mathrm{grad}(f)(\mathbf{x})$. In what follows, denote $\mathbf{1} := (1, \ldots, 1)^T \in \mathbb{R}^d$.
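As a simple illustration (ours, not taken from the paper), consider $d = 2$ standard Gaussian inputs with correlation $\rho$. The dependency model $X_2 \stackrel{d}{=} \rho\, x_1 + \sqrt{1-\rho^2}\, Z_2$ gives $\frac{\partial r_{2,1}}{\partial x_1} = \rho$, so that
$$\mathcal{J}^{(1)}(\mathbf{x}) = \begin{pmatrix} 1 \\ \rho \end{pmatrix}, \quad \mathcal{J}^{(2)}(\mathbf{x}) = \begin{pmatrix} \rho \\ 1 \end{pmatrix}, \quad G(\mathbf{x}) = \begin{pmatrix} 1+\rho^2 & 2\rho \\ 2\rho & 1+\rho^2 \end{pmatrix}, \quad \mathrm{grad}(f)(\mathbf{x}) = G^{-1}(\mathbf{x})\, \nabla f(\mathbf{x}),$$
which reduces to the formal gradient $\nabla f(\mathbf{x})$ when $\rho = 0$.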
Theorem 1.
Assume $f \in \mathcal{H}_\alpha$ with $\alpha \ge 2L$, (A1) holds and the $\beta_\ell$'s are distinct. Then, there exist $\alpha_1 \in \{1, \ldots, L\}$ and real coefficients $C_1, \ldots, C_L$ such that
$$\mathrm{grad}(f)(\mathbf{x}) = G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] + O\left(\left\|\mathbf{h}\right\|_2^{2\alpha_1}\right) \mathbf{1}.$$
Proof. 
See Appendix A for the detailed proof. □
Using the Kronecker symbol $\delta_{1,r}$, the setting $L = 1$, $\beta_1 = 1$, $C_1 = 1$ or the constraints $\sum_{\ell=1}^{L=2} C_\ell \beta_\ell^r = \delta_{1,r}$; $r = 0, 1$ lead to the order of approximation $O\left(\|\mathbf{h}\|_2^2\right)$, while the constraints $\sum_{\ell=1}^{L} C_\ell \beta_\ell^r = \delta_{1,r}$; $r = 1, 3, 5, \ldots, 2L-1$ allow for increasing that order up to $O\left(\|\mathbf{h}\|_2^{2L}\right)$. For distinct $\beta_\ell$'s, the above constraints lead to the existence of the constants $C_1, \ldots, C_L$. Indeed, such constraints rely on the Vandermonde matrix of the form
$$A_L := \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \beta_1 & \beta_2 & \cdots & \beta_L \\ \beta_1^2 & \beta_2^2 & \cdots & \beta_L^2 \\ \vdots & \vdots & & \vdots \end{pmatrix},$$
which is invertible for distinct values of the $\beta_\ell$'s (i.e., $\beta_{\ell_1} \ne \beta_{\ell_2}$) because the determinant $\det\left(A_L\right) = \prod_{1 \le \ell_1 < \ell_2 \le L} \left(\beta_{\ell_1} - \beta_{\ell_2}\right) \ne 0$.
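As an illustration (a minimal numpy sketch of our own; the function name is ours), the coefficients $C_1, \ldots, C_L$ can be obtained numerically by solving the linear system defined by the chosen constraints $\sum_{\ell} C_\ell \beta_\ell^r = \delta_{1,r}$:

```python
import numpy as np

def constraint_coefficients(betas, orders):
    """Solve sum_l C_l * beta_l**r = delta_{1,r} for the exponents r in `orders`."""
    betas = np.asarray(betas, dtype=float)
    A = np.vstack([betas**r for r in orders])             # one row per constraint
    b = np.array([1.0 if r == 1 else 0.0 for r in orders])
    return np.linalg.solve(A, b)

# L = 2 with the constraints r = 0, 1 recovers the weights used in Corollary 1:
print(constraint_coefficients([1.0, -1.0], orders=[0, 1]))   # [ 0.5 -0.5]
```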
Remark 1.
For an even integer $L$, the following nodes may be considered: $\left\{\beta_1, \ldots, \beta_L\right\} = \left\{\pm 2^{k},\; k = 0, \ldots, \frac{L-2}{2}\right\}$. When $L$ is odd, one may add $0$ to the above set. Of course, there are other possibilities provided that $\sum_{\ell=1}^{L} C_\ell \beta_\ell = 1$.
Beyond the strong assumption made on functions in Theorem 1, and knowing that increasing $L$ will require more evaluations of $f$ at random points, we are going to derive the upper-bounds of the biases of our approximations under different structural assumptions on the deterministic function $f$ and on $\mathbf{V}$, such as $f \in \mathcal{H}_\alpha$ with $\alpha > 1$. To that end, denote with $\mathbf{R} := (R_1, \ldots, R_d)$ a $d$-dimensional random vector of independent variables that are centered about zero and standardized (i.e., $\mathbb{E}[R_k^2] = 1$, $k = 1, \ldots, d$), and $\mathcal{R}_c$ the set of such random vectors. Define
$$K_1 := \inf_{\mathbf{R} \in \mathcal{R}_c} \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\mathbf{R}^2 \left\|\mathbf{R}\right\|_2\right] \right\|_1; \qquad K_2 := \inf_{\mathbf{R} \in \mathcal{R}_c} \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\mathbf{R}^2 \left\|\mathbf{R}\right\|_2\right] \right\|_2;$$
with $\left|G^{-1}\right|$ the matrix obtained by taking the absolute value of the entries of $G^{-1}$.
When $1 < \alpha \le 2$, only $L = 1$ or $L = 2$ can be considered for any function that belongs to $\mathcal{H}_\alpha$. To be able to derive the parametric rates of convergence, Corollary 1 starts by providing the upper-bounds of the bias when $L = 2$.
Corollary 1.
Consider $\beta_1 = 1$, $\beta_2 = -1$; $C_1 = 1/2$; $C_2 = -1/2$. If $f \in \mathcal{H}_2$ and (A1) holds, then there exists $M_2 > 0$ such that
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_1 \le \sigma\, M_2\, K_1\, \left\|\mathbf{h}\right\|_2;$$
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_2 \le \sigma\, M_2\, K_2\, \left\|\mathbf{h}\right\|_2.$$
Proof. 
Detailed proofs are provided in Appendix B. □
For a particular choice of V , we obtain the results below.
Corollary 2.
Consider $\beta_1 = 1$, $\beta_2 = -1$; $C_1 = 1/2$; $C_2 = -1/2$. If $V_k \sim \mathcal{U}(-\xi, \xi)$ with $\xi > 0$, $k = 1, \ldots, d$; $f \in \mathcal{H}_2$ and (A1) holds, then
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_1 \le M_2\, \xi\, \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_1 \left\|\mathbf{h}\right\|_1;$$
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_2 \le M_2\, \xi\, \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_2 \left\|\mathbf{h}\right\|_1.$$
Proof. 
Since $|V_k| \le \xi$, we have $\left\|\mathbf{h}\mathbf{V}\right\|_1 \le \xi \left\|\mathbf{h}\right\|_1$, and the results hold using the upper-bounds $M_2 \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\frac{\mathbf{V}^2}{\sigma^2} \left\|\mathbf{h}\mathbf{V}\right\|_1\right] \right\|_1$ and $M_2 \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\frac{\mathbf{V}^2}{\sigma^2} \left\|\mathbf{h}\mathbf{V}\right\|_1\right] \right\|_2$ obtained in Appendix B. □
It is worth noting that choosing $h_k = h$ and $\xi = 1/d^2$ leads to the dimension-free upper-bound of the bias, that is,
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_1 \le \frac{M_2\, h}{d} \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_1,$$
because $\left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_1$ is a function of $d$ in general.
For the sake of generality, Corollary 3 provides the bias of our approximations for highly smooth functions. To that end, define
$$K_{2,L} := \inf_{\mathbf{R} \in \mathcal{R}_c} \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\mathbf{R}^2 \left\|\mathbf{R}\right\|_2^{L}\right] \right\|_2; \qquad K_3 := \sum_{\ell=1}^{L+1} \left|C_\ell\right| \left|\beta_\ell\right|^{1+L}.$$
Corollary 3.
For an odd integer $L > 2$, consider $\sum_{\ell=1}^{L+1} C_\ell \beta_\ell^r = \delta_{1,r}$; $r = 0, 1, \ldots, L$. If $f \in \mathcal{H}_{1+L}$ and (A1) holds, then there exists $M_{1+L} > 0$ such that
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L+1} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_2 \le \sigma^{L}\, M_{1+L}\, K_{2,L}\, K_3\, \left\|\mathbf{h}\right\|_2^{L}.$$
Moreover, if $V_k \sim \mathcal{U}(-\xi, \xi)$ with $\xi > 0$ and $k = 1, \ldots, d$, then
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L+1} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_2 \le \xi^{L}\, M_{1+L}\, \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_2 \left\|\mathbf{h}\right\|_1^{L}\, K_3.$$
Proof. 
The proofs are similar to those of Corollary 1 (see Appendix B). □
In view of the results provided in Corollary 3, finding $\beta_\ell$'s and $C_\ell$'s that minimize the quantity $K_3 = \sum_{\ell=1}^{L+1} \left|C_\ell\right| \left|\beta_\ell\right|^{1+L}$ might be helpful for improving the above upper-bounds.

3.2. Links to Other Works for Independent Input Variables

Recall that for independent input variables, the matrix $G^{-1}(\mathbf{x})$ comes down to the identity matrix, and $\mathrm{grad}(f) = \nabla f$. Thus, Equation (7) becomes
$$\left\| \nabla f(\mathbf{x}) - \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_2 \le M_2\, h,$$
when $\xi = \sqrt{d}/d^2$. Taking $\xi = \sqrt{d}/d$ leads to the upper-bound $M_2\, h\, d$.
Other results about the upper-bounds of the bias of the (formal) gradient approximations have been provided in [19,20] (and the references therein) under the same assumptions made on f and evaluations of f. Such results rely on a random vector S that is uniformly distributed on the unit ball and a kernel K. Under such a framework, the upper-bound derived in [19,20] is
$$\left\| \nabla f(\mathbf{x}) - \frac{d}{h}\, \mathbb{E}\left[f\left(\mathbf{x} + U h \mathbf{S}\right) \mathbf{S}\, K(U)\right] \right\|_2 \le 2\sqrt{2}\, \alpha\, d\, M_\alpha\, h^{\alpha-1},$$
where $U \sim \mathcal{U}(-1, 1)$ is independent of $\mathbf{S}$. Therefore, our results improve the upper-bound obtained in [19,20] when $\alpha = 2$ for instance.

3.3. Computation of the Gradients of Functions with Dependent Variables

Consider a sample of $\mathbf{V}$ given by $\left\{\mathbf{V}_i := \left(V_{i,1}, \ldots, V_{i,d}\right)\right\}_{i=1}^{N}$. Using Equation (3), the estimator of $\mathrm{grad}(f)(\mathbf{x})$ is derived as follows:
$$\widehat{\mathrm{grad}(f)}(\mathbf{x}) := G^{-1}(\mathbf{x})\, \frac{1}{N} \sum_{i=1}^{N} \sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}_i\right) \frac{\mathbf{h}^{-1}\mathbf{V}_i}{\sigma^2}.$$
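A minimal Python sketch of this estimator is given below (an illustration under our own naming, not the authors' code). It assumes the generalized inverse of the tensor metric $G^{-1}(\mathbf{x})$ is available, and it uses i.i.d. Gaussian draws of $\mathbf{V}$ in place of the Sobol' sequence employed in the numerical sections:

```python
import numpy as np

def grad_estimator(f, x, G_inv, betas, C, h, N=100, sigma=1.0, rng=None):
    """Estimate grad(f)(x) = G^{-1}(x) * (1/N) sum_i sum_l C_l f(x + beta_l h V_i) (V_i / h) / sigma^2.

    f        : callable R^d -> R
    x        : point (d,) at which the gradient is estimated
    G_inv    : (d, d) generalized inverse of the tensor metric G(x)
    betas, C : nodes beta_l and coefficients C_l (length L)
    h        : bandwidths, scalar or array of shape (d,)
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(x)
    h = np.broadcast_to(np.asarray(h, dtype=float), (d,))
    acc = np.zeros(d)
    for _ in range(N):
        V = rng.normal(0.0, sigma, size=d)          # independent, centered, symmetric
        weight = V / (h * sigma**2)                 # h^{-1} V / sigma^2
        acc += sum(c * f(x + b * h * V) for c, b in zip(C, betas)) * weight
    return G_inv @ (acc / N)
```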
To assess the quality of such an estimator, it is common to use the mean squared error (MSE), including the rates of convergence. The MSEs are often used in statistics for determining the optimal value of h as well. Theorem 2 and Corollary 4 provide such quantities of interest. To that end, define
$$K_4 := \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \mathbf{h}^{-1}\mathbf{R} \right\|_2^2 \left\|\mathbf{R}\right\|_2^2\right].$$
Theorem 2.
Consider $\beta_1 = 1$, $\beta_2 = -1$; $C_1 = 1/2$; $C_2 = -1/2$. If $f \in \mathcal{H}_2$ and (A1) holds, then
$$\mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2\right] \le \sigma^2 M_2^2 K_2^2 \left\|\mathbf{h}\right\|_2^2 + \frac{M_1^2\, K_4\, \left\|\mathbf{h}\right\|_2^2}{N}.$$
Moreover, if $V_k \sim \mathcal{U}(-\xi, \xi)$ with $\xi > 0$ and $k = 1, \ldots, d$, then
$$\mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2\right] \le M_2^2\, \xi^2 \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_2^2 \left\|\mathbf{h}\right\|_1^2 + \frac{3\, d\, M_1^2\, \left\|\mathbf{h}\right\|_2^2}{N} \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \mathbf{h}^{-1}\mathbf{R} \right\|_2^2\right].$$
Proof. 
See Appendix C. □
Using a uniform bandwidth, that is, h k = h with k = 1 , , d , the upper-bounds of MSEs provided in Theorem 2 have simple expressions. Indeed, the upper-bounds in Equations (8)-(9) become, respectively,
$$\sigma^2 M_2^2 K_2^2\, d\, h^2 + \frac{M_1^2\, d}{N} \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \mathbf{R} \right\|_2^2 \left\|\mathbf{R}\right\|_2^2\right];$$
$$M_2^2\, \xi^2 \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_2^2 d^2 h^2 + \frac{3\, d\, M_1^2}{N} \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \mathbf{R} \right\|_2^2\right].$$
It comes out that the second terms of the above upper-bounds do not depend on the bandwidth $h$. This key observation leads to the derivation of the optimal and parametric rates of convergence of the proposed estimator.
Corollary 4.
Under the assumptions made in Theorem 2, if $\xi = d^{-3/2}$ and $h_k = h \propto N^{-\gamma/2}$ with $\gamma \in\, ]1, 2[$, then we have
$$\mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2\right] = O\left(N^{-1} d^2\right).$$
Proof. 
The proof is straightforward since $h^2 \propto N^{-\gamma}$ with $\gamma > 1$, so that the squared bias is $o\left(N^{-1}\right)$, and $N h \to \infty$ when $N \to \infty$. □
It is worth noting that the upper-bound of the squared bias obtained in Corollary 4 does not depend on the dimensionality, thanks to the choice $\xi = d^{-3/2}$. But the derived rate of convergence depends on $d^2$, meaning that our estimator suffers from the curse of dimensionality. In higher dimensions, an attempt to improve our results consists in controlling the upper-bound of the second-order moment of the estimator through $\sum_{\ell=1}^{L} \left|C_\ell \beta_\ell\right|$. For instance, requiring $\sum_{\ell=1}^{L} \left|C_\ell \beta_\ell\right| = 1/d^2$ with $L = 2$ admits a solution in $\mathbb{C}$ and not in $\mathbb{R}$.
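To make the balance behind Corollary 4 explicit (a restatement of the uniform-bandwidth bound above in our own words): with $h \propto N^{-\gamma/2}$, the two terms of the second upper-bound behave as
$$\underbrace{M_2^2\, \xi^2 \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_2^2 d^2 h^2}_{\text{squared bias} \;\propto\; h^2 \;\propto\; N^{-\gamma}} \;+\; \underbrace{\frac{3\, d\, M_1^2}{N} \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \mathbf{R} \right\|_2^2\right]}_{\text{variance} \;\propto\; N^{-1}},$$
so that for $\gamma > 1$ the squared bias is $o\left(N^{-1}\right)$ and the MSE is driven by the variance term, while the choice $\xi = d^{-3/2}$ prevents the squared-bias factor $\xi^2 d^2$ from growing with the dimension.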
Remark 2.
For highly smooth functions (i.e., $f \in \mathcal{H}_{1+L}$ with $L > 3$) and under the assumptions made in Corollary 3, we can see that
$$\mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2\right] \le \xi^{2L} M_{1+L}^2 \left\| \left|G^{-1}(\mathbf{x})\right| \mathbf{1} \right\|_2^2 \left\|\mathbf{h}\right\|_1^{2L} K_3^2 + \frac{3\, d\, M_1^2\, \left\|\mathbf{h}\right\|_2^2}{N} \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \mathbf{h}^{-1}\mathbf{R} \right\|_2^2\right].$$

4. Computations of the Formal Gradient of Rosenbrock’s Function

For comparing our approach to i) the finite differences method (FDM) using the R-package numDeriv ([44]) with $h = 10^{-4}$, and ii) the Monte Carlo (MC) approach provided in [17] with $h = 10^{-4}$, let us consider the Rosenbrock function given as follows: $\forall\, \mathbf{x} \in \mathbb{R}^d$,
$$r(\mathbf{x}) := \sum_{k=1}^{d-1} \left[\left(1 - x_k\right)^2 + 100 \left(x_{k+1} - x_k^2\right)^2\right].$$
The gradient of that function at $\mathbf{0}$ is $\nabla r(\mathbf{0}) = \left(-2, \ldots, -2, 0\right)^T \in \mathbb{R}^{100}$ (see [17]). To assess the numerical accuracy of each approach, the following measure is considered:
$$Err := \frac{\left\| \nabla r(\mathbf{0}) - \widehat{\nabla r}(\mathbf{0}) \right\|_1}{\left\| \nabla r(\mathbf{0}) \right\|_1},$$
where $\widehat{\nabla r}(\mathbf{0})$ is the estimated value of the gradient. Table 1 reports the values of $Err$ for the three approaches. To obtain the results using our approach, we have used $h = 1/N$ with $N$ the sample size and $\xi = 1/d^2 = 10^{-4}$ with $d = 100$. Also, the Sobol' sequence is used for generating the values of the $V_j$'s, and the Gram-Schmidt algorithm is applied to obtain (perfectly) orthogonal vectors for a given $N$.
Based on Table 1, our approach provides efficient results compared to the other methods. Since the FDM is not possible when $N < 2d = 200$, it comes out that our approach is much more flexible thanks to $L$ and the fact that the gradient can be computed for every value of $N$. Increasing $N$ improves our results, as expected.
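A minimal, self-contained Python sketch of this experiment is given below (our own illustration; it uses plain i.i.d. uniform draws instead of the Sobol' sequence and Gram-Schmidt orthogonalization described above, so the resulting error is larger than the values reported in Table 1):

```python
import numpy as np

def rosenbrock(x):
    return np.sum((1.0 - x[:-1])**2 + 100.0 * (x[1:] - x[:-1]**2)**2)

d, N = 100, 1000
xi = 1.0 / d**2                       # as in the paper's setting
h = 1.0 / N
sigma2 = xi**2 / 3.0                  # variance of U(-xi, xi)
betas, C = (1.0, -1.0), (0.5, -0.5)   # L = 2, weights of Corollary 1

rng = np.random.default_rng(1)
x0 = np.zeros(d)
est = np.zeros(d)
for _ in range(N):
    V = rng.uniform(-xi, xi, size=d)
    est += sum(c * rosenbrock(x0 + b * h * V) for c, b in zip(C, betas)) * V / (h * sigma2)
est /= N

exact = np.concatenate([np.full(d - 1, -2.0), [0.0]])     # grad r(0)
print(np.abs(exact - est).sum() / np.abs(exact).sum())    # relative L1 error (Err)
```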

5. Application to a Heat PDE Model with Stochastic Initial Conditions

5.1. Heat Diffusion Model and Its Formal Gradient

Consider a time-dependent model f ( x , t ) defined by the one-dimensional (1-D) diffusion PDE with stochastic initial conditions, that is,
$$\begin{cases} \dfrac{\partial f}{\partial t} - D \dfrac{\partial^2 f}{\partial x^2} = 0, & x \in\, ]0, 1[,\; t \in [0, T] \\ f(x, t = 0) = Z(x), & x \in [0, 1] \\ f(x = 0, t) = 0, \quad f(x = 1, t) = 1, & t \in [0, T], \end{cases}$$
where $D \in \mathbb{R}_+$ represents the diffusion coefficient. It is common to consider $J(Z(x)) := \frac{1}{2} \int_0^T \int_0^1 f(x, t)^2\, dx\, dt$ as the quantity of interest (QoI). The spatial discretisation consists in subdividing the spatial domain $[0, 1]$ into $d$ equally-sized cells, which leads to $d$ initial conditions or inputs given by $Z(x_j)$ with $j = 1, \ldots, d$. Given zero-mean random variables $(R_j,\, j = 1, \ldots, d)$, assume that $X_j := Z(x_j) = \sin(2\pi x_j) + s_j R_j$, $j = 1, \ldots, d$, where $s_j \in \mathbb{R}_+$ represents the inverse precision of our knowledge of the initial conditions. For the dynamic aspect, a time step of $0.025$ is considered, starting from $0$ up to $T = 5$.
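For readers who want to reproduce the forward model, a minimal method-of-lines sketch is given below (our own Python illustration with hypothetical function names; the paper's experiments rely on the R-package deSolve). It subdivides $[0,1]$ into $d$ cells, applies explicit Euler time stepping with the step $0.025$, and accumulates the QoI $J(Z)$ by a rectangle rule:

```python
import numpy as np

def heat_qoi(Z, D=0.0011, T=5.0, dt=0.025):
    """Solve f_t = D f_xx on ]0,1[ with f(0,t)=0, f(1,t)=1, f(x,0)=Z and
    return J(Z) = 0.5 * int_0^T int_0^1 f(x,t)^2 dx dt (explicit Euler sketch)."""
    d = len(Z)
    dx = 1.0 / d
    f = np.array(Z, dtype=float)             # cell values at the current time
    J = 0.0
    for _ in range(int(round(T / dt))):
        # Dirichlet boundary values f(0,t)=0 and f(1,t)=1
        left = np.concatenate(([0.0], f[:-1]))
        right = np.concatenate((f[1:], [1.0]))
        f = f + D * dt / dx**2 * (left - 2.0 * f + right)
        J += 0.5 * np.sum(f**2) * dx * dt    # rectangle rule in x and t
    return J

# Mean initial condition evaluated at the cell centers
d = 50
centers = (np.arange(d) + 0.5) / d
print(heat_qoi(np.sin(2.0 * np.pi * centers)))
```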
Given a direction $z(x)$ and the Gâteaux derivative $\check{f}(x, t) := \frac{\partial f}{\partial z(x)}$, the tangent linear model is derived as follows:
$$\begin{cases} \dfrac{\partial \check{f}}{\partial t} - D \dfrac{\partial^2 \check{f}}{\partial x^2} = 0, & x \in\, ]0, 1[,\; t \in [0, T] \\ \check{f}(x, t = 0) = z(x), & x \in [0, 1] \\ \check{f}(x = 0, t) = \check{f}(x = 1, t) = 0, & t \in [0, T], \end{cases}$$
and we can check that the adjoint model (AM) (i.e., $f^a$) is given by
$$\begin{cases} -\dfrac{\partial f^a}{\partial t} - D \dfrac{\partial^2 f^a}{\partial x^2} = f, & x \in\, ]0, 1[,\; t \in [0, T] \\ f^a(x = 0, t) = f^a(x = 1, t) = 0, & t \in [0, T] \\ f^a(x, T) = 0, & x \in [0, 1]. \end{cases}$$
The formal gradient of $J(Z(x))$ w.r.t. the inputs $Z(x)$ is $\nabla_Z J(Z(x)) = f^a(x, 0)$. Remark that the above gradient relies on $f^a(x, 0)$, and only one evaluation of such a function is needed.

5.2. Spatial Auto-Correlations of Initial Conditions and the Tensor Metric

Recall that the above gradient is based on the assumption of independent input variables, suggesting that the initial conditions within different cells are uncorrelated. To account for the spatial auto-correlations between different cells, assume that the $d$ input variables follow a Gaussian process with the following auto-correlation function:
$$\rho\left(X_{j_1}, X_{j_2}\right) = \frac{1}{2^{|j_1 - j_2|}}\, \mathbb{1}_{[0,3]}\left(|j_1 - j_2|\right); \qquad j_1, j_2 \in \{1, \ldots, d\},$$
where $\mathbb{1}_{[0,3]}(|j_1 - j_2|) = 1$ if $|j_1 - j_2| \in [0, 3]$ and zero otherwise. Such spatial auto-correlations lead to the correlation matrix of the form
$$R := \begin{pmatrix} 1 & 0.5 & 0.25 & 0.125 & 0 & \cdots & 0 \\ 0.5 & 1 & 0.5 & 0.25 & 0.125 & \ddots & \vdots \\ 0.25 & 0.5 & 1 & 0.5 & 0.25 & \ddots & 0 \\ 0.125 & 0.25 & 0.5 & 1 & 0.5 & \ddots & 0.125 \\ 0 & 0.125 & 0.25 & 0.5 & 1 & \ddots & 0.25 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & 0.5 \\ 0 & \cdots & 0 & 0.125 & 0.25 & 0.5 & 1 \end{pmatrix}.$$
Using the same standard deviation $s_j = s$ leads to the covariance matrix $\Sigma = s^2 R$, and $\mathbf{X} = (X_1, \ldots, X_d) \sim \mathcal{N}_d\left(\boldsymbol{\mu}, \Sigma\right)$ with $\boldsymbol{\mu} := \left(\sin(2\pi c_1), \ldots, \sin(2\pi c_d)\right)$ and $c_1, \ldots, c_d$ the centers of the cells. The associated dependency model is given below.
Consider the diagonal matrix $D_j = \mathrm{diag}\left(\Sigma_{1,1}, \ldots, \Sigma_{j-1,j-1}, \Sigma_{j+1,j+1}, \ldots, \Sigma_{d,d}\right)$, and the Gaussian random vector $\mathbf{W} \sim \mathcal{N}_{d-1}\left(\boldsymbol{\mu}_{\sim j}, D_j\right)$. Denote with $\Sigma^{(j)}$ the matrix obtained by moving the $j$-th row and column of $\Sigma$ to the first row and column; $L^{(j)}$ the Cholesky factor of $\Sigma^{(j)}$, and $\boldsymbol{\mu}^{(j)} := \left(\mu_j, \mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \ldots, \mu_d\right)$. We can see that $\left(X_j, \mathbf{X}_{\sim j}\right) \sim \mathcal{N}_d\left(\boldsymbol{\mu}^{(j)}, \Sigma^{(j)}\right)$, and the dependency model is given by ([10])
$$\left(X_j, \mathbf{X}_{\sim j}\right) = L^{(j)} \begin{pmatrix} \dfrac{X_j - \mathbb{E}[X_j]}{\sqrt{\Sigma_{j,j}}} \\[1ex] D_j^{-1/2}\left(\mathbf{W} - \boldsymbol{\mu}_{\sim j}\right) \end{pmatrix} + \boldsymbol{\mu}^{(j)}; \qquad j = 1, \ldots, d.$$
Based on Equation (10), we have $\frac{\partial X_j}{\partial x_j} = \frac{L^{(j)}_{1,1}}{\sqrt{\Sigma_{j,j}}} = \frac{\sqrt{\Sigma^{(j)}_{1,1}}}{\sqrt{\Sigma_{j,j}}} = \frac{\sqrt{\Sigma_{j,j}}}{\sqrt{\Sigma_{j,j}}}$. Thus, we can deduce that $\mathcal{J}^{(j)} = \frac{\Sigma_{\cdot,j}}{\Sigma_{j,j}}$ with $\Sigma_{\cdot,j}$ the $j$-th column of $\Sigma$, and the dependent Jacobian becomes $\mathcal{J}_d = \left[\mathcal{J}^{(1)}, \ldots, \mathcal{J}^{(d)}\right] = \left[\frac{\Sigma_{\cdot,1}}{\Sigma_{1,1}}, \ldots, \frac{\Sigma_{\cdot,d}}{\Sigma_{d,d}}\right] = \frac{\Sigma}{s^2} = R$, as $\Sigma_{j,j} = s_j^2 = s^2$ and $\Sigma = s^2 R$. The tensor metric is given by $G = R^T R$.
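A small Python sketch (our own illustration; function names are ours) of the resulting objects: build the banded correlation matrix $R$ from the auto-correlation function, and obtain the tensor metric $G = R^T R$ and the dependent gradient $G^{-1} \nabla f$ from a formal gradient $\nabla f$ (e.g., the adjoint output $f^a(x, 0)$):

```python
import numpy as np

def correlation_matrix(d):
    """Banded Toeplitz matrix with rho(j1, j2) = 2**(-|j1 - j2|) for |j1 - j2| <= 3."""
    lags = np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    return np.where(lags <= 3, 0.5**lags, 0.0)

d = 50
R = correlation_matrix(d)
G = R.T @ R                                              # tensor metric (J_d = R here)

formal_grad = np.random.default_rng(2).normal(size=d)    # placeholder for nabla f
dependent_grad = np.linalg.solve(G, formal_grad)         # G^{-1} * nabla f
```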

5.3. Comparisons between Exact Gradient and Estimated Gradients

For running the above PDE-based model using the R-package deSolve ([45]), we are given $D = 0.0011$ and $s = 1.96$. The exact and formal gradient associated with the mean values of the initial conditions is obtained by running the corresponding adjoint model. For estimating the gradient using the proposed estimators, we consider $L = 2, 3$ and $N = 50, 100, 150, 200$. We also use $h = 1/N$ and $V_j \sim \mathcal{N}(0, 1)$, $j = 1, \ldots, d = 50$. The Sobol' sequence is used for generating the random values of the $V_j$'s, and the Gram-Schmidt algorithm is applied to obtain perfectly orthogonal vectors for a given $N$.
Figure 1 shows the comparisons between the estimated and the exact values of the formal gradient $\nabla f$ (i.e., $\rho(X_{j_1}, X_{j_2}) = 0$) for $L = 1, 2$. Likewise, Figure 2 and Figure 3 depict the dependent gradient $\mathrm{grad}(f) = \left(R^T R\right)^{-1} \nabla f$ and its estimation. The estimates of both gradients are in line with the exact values using only $NL = 50$ (resp. $NL = 100$) model evaluations when $L = 1$ and $N = 50$ (resp. $L = 1$ and $N = 100$, or $L = 2$ and $N = 50$). Increasing the values of $L$ and $N$ gives the same quasi-perfect results for both the formal and dependent gradients (see Figure 3).

6. Conclusion

In this paper, we have proposed new, simple and generic approximations of the gradients of functions with non-independent input variables by means of independent, central and symmetric variables and a set of constraints. It comes out that the biases of our approximations for a wide class of functions, such as 2-smooth functions, do not suffer from the curse of dimensionality by properly choosing the set of independent, central and symmetric variables. For functions including only independent input variables, a theoretical comparison has shown that the upper-bounds of the bias of the formal gradient derived in this paper outperform the best known results.
For computing the dependent gradient of the function of interest, we have provided estimators of such a gradient by making use of evaluations of that function at $LN$ randomized points. Such estimators reach the optimal (mean squared error) rates of convergence (i.e., $O\left(N^{-1} d^2\right)$) for a wide class of functions. Numerical comparisons using a test case and simulations based on a PDE model with given auto-correlations among the initial conditions have shown the efficiency of our approach, even when $L = 1, 2$ constraints are used. Our approach is then flexible thanks to $L$ and the fact that the gradient can be computed for every value of the sample size $N$ in general.
While the proposed estimators reach the parametric rate of convergence, note that the second-order moments of such estimators depend on $d^2$. An attempt to reach a dimension-free rate of convergence requires working in $\mathbb{C}$ rather than $\mathbb{R}$ when $L = 2$. In the near future, it is worth investigating the derivation of optimal rates of convergence that are dimension-free or (at least) linear with respect to $d$ by considering $L > 3$ constraints. Also, combining such a promising approach with a transformation of the original space might be helpful for reducing the number of model evaluations in higher dimensions.

Acknowledgments

We would like to thank the reviewers for their comments, which have helped improve our manuscript.

Appendix A. Proof of Theorem 1

As $\boldsymbol{\imath} = (i_1, \ldots, i_d)$, let $\mathbf{e}_k = (0, \ldots, 0, \underbrace{1}_{k\text{-th position}}, 0, \ldots, 0) \in \mathbb{R}^d$ and $\mathbf{q} = (q_1, \ldots, q_d) \in \mathbb{N}^d$. Multiplying the Taylor expansion of $f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right)$ about $\mathbf{x}$, that is,
$$f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) = \sum_{p=0}^{m} \sum_{\|\boldsymbol{\imath}\|_1 = p} \frac{D^{(\boldsymbol{\imath})} f(\mathbf{x})}{\boldsymbol{\imath}!}\, \beta_\ell^{p} \left(\mathbf{h}\mathbf{V}\right)^{\boldsymbol{\imath}} + O\left(\left\|\beta_\ell \mathbf{h}\mathbf{V}\right\|_2^{m+1}\right),$$
by $\frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \in \mathbb{R}^d$ and the constant $C_\ell$, and taking the sum over $\ell = 1, \ldots, L$, we can see that the expectation $E := \sum_{\ell=1}^{L} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right]$ becomes
$$E = \sum_{p \ge 0} \sum_{\|\boldsymbol{\imath}\|_1 = p} \frac{D^{(\boldsymbol{\imath})} f(\mathbf{x})}{\boldsymbol{\imath}!} \sum_{\ell=1}^{L} C_\ell \beta_\ell^{p}\, \mathbb{E}\left[\mathbf{V}^{\boldsymbol{\imath}} \mathbf{h}^{\boldsymbol{\imath}}\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right].$$
Firstly, for a given $k \in \{1, \ldots, d\}$ and by independence, we can see that
$$\mathbb{E}\left[\mathbf{V}^{\boldsymbol{\imath}} \mathbf{h}^{\boldsymbol{\imath}}\, V_k h_k^{-1}\right] = \mathbb{E}\left[\mathbf{V}^{\boldsymbol{\imath} + \mathbf{e}_k}\right] \mathbf{h}^{\boldsymbol{\imath} - \mathbf{e}_k} \ne 0$$
iff $i_k = 2q_k + 1$ and $i_j = 2q_j$ for any $j \in \{1, \ldots, d\} \setminus \{k\}$ with $q_k, q_j \in \mathbb{N}$, which implies that $\boldsymbol{\imath} = \mathbf{e}_k + 2\mathbf{q}$. Thus, one obtains $\frac{\partial f}{\partial x_k}$ when $\|\boldsymbol{\imath}\|_1 = \|\mathbf{e}_k + 2\mathbf{q}\|_1 = 1$, using the facts that $\mathbb{E}\left[V_k^2\right] = \sigma^2$; $\mathbb{E}\left[V_j\right] = 0$ and $\sum_{\ell} C_\ell \beta_\ell = 1$. At this point, taking $k = 1, \ldots, d$ and setting $L = 1$, $\beta_\ell = 1$ and $C_\ell = 1$ results in the approximation of $\nabla f(\mathbf{x})$ of order $O\left(\|\mathbf{h}\|_2^2\right)$ because when $\|\boldsymbol{\imath}\|_1 = 2$, $\mathbb{E}\left[\mathbf{V}^{\boldsymbol{\imath}} \mathbf{h}^{\boldsymbol{\imath}}\, V_k h_k^{-1}\right] = 0$.
Secondly, for $L > 1$ the constraints $\sum_{\ell=1}^{L} C_\ell \beta_\ell^{r+1} = \delta_{0,r}$, $r = 0, 2, \ldots, 2(L-1)$, allow eliminating some higher-order terms so as to reach the order $O\left(\|\mathbf{h}\|_2^{2L}\right)$. Using other constraints completes the proof, bearing in mind Equation (2).

Appendix B. Proof of Corollary 1

For $\mathbf{q} = (q_1, \ldots, q_d) \in \mathbb{N}^d$; $k \in \{1, \ldots, d\}$ and $\mathbf{e}_k = (0, \ldots, 0, \underbrace{1}_{k\text{-th position}}, 0, \ldots, 0) \in \mathbb{R}^d$, consider $s_k := \left\{\mathbf{q} + \mathbf{e}_k : \|\mathbf{q}\|_1 = 1\right\}$. As $f \in \mathcal{H}_2$, we can write
$$f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) = \sum_{\|\boldsymbol{\imath}\|_1 = 0}^{1} D^{(\boldsymbol{\imath})} f(\mathbf{x})\, \beta_\ell^{\|\boldsymbol{\imath}\|_1}\, \frac{\left(\mathbf{h}\mathbf{V}\right)^{\boldsymbol{\imath}}}{\boldsymbol{\imath}!} + \sum_{\substack{\|\boldsymbol{\imath}\|_1 = 2 \\ \boldsymbol{\imath} \notin s_k}} D^{(\boldsymbol{\imath})} f(\mathbf{x})\, \beta_\ell^{2}\, \frac{\left(\mathbf{h}\mathbf{V}\right)^{\boldsymbol{\imath}}}{\boldsymbol{\imath}!} + R_k\left(\mathbf{h}, \beta_\ell, \mathbf{V}\right),$$
with the remainder term
$$R_k\left(\mathbf{h}, \beta_\ell, \mathbf{V}\right) = \sum_{\substack{\|\boldsymbol{\imath}\|_1 = 2 \\ \boldsymbol{\imath} \in s_k}} D^{(\boldsymbol{\imath})} f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \beta_\ell^{2}\, \frac{\left(\mathbf{h}\mathbf{V}\right)^{\boldsymbol{\imath}}}{\boldsymbol{\imath}!} = \sum_{\|\mathbf{q}\|_1 = 1} D^{(\mathbf{e}_k + \mathbf{q})} f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \beta_\ell^{2}\, \frac{\left(\mathbf{h}\mathbf{V}\right)^{\mathbf{q} + \mathbf{e}_k}}{\left(\mathbf{e}_k + \mathbf{q}\right)!} = h_k V_k\, \beta_\ell^{2} \sum_{\|\mathbf{q}\|_1 = 1} D^{(\mathbf{e}_k + \mathbf{q})} f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\left(\mathbf{h}\mathbf{V}\right)^{\mathbf{q}}}{\left(\mathbf{e}_k + \mathbf{q}\right)!}.$$
Denote $R_k^0 := \sum_{\|\mathbf{q}\|_1 = 1} D^{(\mathbf{e}_k + \mathbf{q})} f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\left(\mathbf{h}\mathbf{V}\right)^{\mathbf{q}}}{\left(\mathbf{e}_k + \mathbf{q}\right)!}$, and remark that $\left|R_k^0\right| \le M_2 \left\|\mathbf{h}\mathbf{V}\right\|_1$. Using Theorem 1, we can see that the absolute value of the bias, that is,
$$B := \left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L} C_\ell\, \mathbb{E}\left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_1$$
is given by
$$B = \left\| G^{-1}(\mathbf{x})\, \mathbb{E}\left[\left(f(\mathbf{x}) - \sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right)\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_1 = \left\| G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L} \frac{C_\ell \beta_\ell^2}{\sigma^2}\, \mathbb{E}\left[\left(V_1^2 R_1^0, \ldots, V_d^2 R_d^0\right)^T\right] \right\|_1 \le \sum_{\ell=1}^{L} \frac{\left|C_\ell\right| \beta_\ell^2}{\sigma^2} \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\left(V_1^2 \left|R_1^0\right|, \ldots, V_d^2 \left|R_d^0\right|\right)^T\right] \right\|_1 \le \sum_{\ell=1}^{L} \left|C_\ell\right| \beta_\ell^2\, M_2 \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\frac{\mathbf{V}^2}{\sigma^2} \left\|\mathbf{h}\mathbf{V}\right\|_1\right] \right\|_1,$$
using the expansion of the product between matrices.
Using the same reasoning and taking the Euclidean norm, we obtain
$$B_2 = \left\| G^{-1}(\mathbf{x})\, \mathbb{E}\left[\left(f(\mathbf{x}) - \sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right)\right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2}\right] \right\|_2 \le \sum_{\ell=1}^{L} \left|C_\ell\right| \beta_\ell^2\, M_2 \left\| \left|G^{-1}(\mathbf{x})\right|\, \mathbb{E}\left[\frac{\mathbf{V}^2}{\sigma^2} \left\|\mathbf{h}\mathbf{V}\right\|_1\right] \right\|_2.$$
The results hold using $\mathbf{R} := \mathbf{V}/\sigma$.

Appendix C. Proof of Theorem 2

Firstly, remark that $MSE := \mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2\right]$ is given by
$$MSE = \mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathbb{E}\left[\widehat{\mathrm{grad}(f)}(\mathbf{x})\right] \right\|_2^2\right] + \left\| \mathbb{E}\left[\widehat{\mathrm{grad}(f)}(\mathbf{x})\right] - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2.$$
Since the squared bias $\left\| \mathbb{E}\left[\widehat{\mathrm{grad}(f)}(\mathbf{x})\right] - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2$ has been derived in the previous corollaries, we are going to treat the centered second-order moment.
Secondly, as $f \in \mathcal{H}_2$ implies that $f \in \mathcal{H}_1$, we have $\left|f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) - f(\mathbf{x})\right| \le M_1 \left\|\beta_\ell \mathbf{h}\mathbf{V}\right\|_2$. Also, as $\sum_{\ell=1}^{L} C_\ell = 0$, we then have
$$\sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) = \sum_{\ell=1}^{L} C_\ell \left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) - f(\mathbf{x})\right],$$
which leads to $\left|\sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right)\right| \le \sum_{\ell=1}^{L} \left|C_\ell \beta_\ell\right| M_1 \left\|\mathbf{h}\mathbf{V}\right\|_2$ and
$$Q(\mathbf{x}) := G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) = G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell \left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) - f(\mathbf{x})\right].$$
Thirdly, using (3), we can see that $\mathbb{E}\left[Q(\mathbf{x})\right] = \mathbb{E}\left[\widehat{\mathrm{grad}(f)}(\mathbf{x})\right]$. Bearing in mind the definition of the Euclidean norm and the variance, the centered second-order moment, that is, $V_{grad} := \mathbb{E}\left[\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathbb{E}\left[\widehat{\mathrm{grad}(f)}(\mathbf{x})\right] \right\|_2^2\right]$, satisfies
$$V_{grad} \le \frac{1}{N}\, \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) - \mathbb{E}\left[\widehat{\mathrm{grad}(f)}(\mathbf{x})\right] \right\|_2^2\right] \le \frac{1}{N}\, \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell\, f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) \right\|_2^2\right] \stackrel{(*)}{=} \frac{1}{N}\, \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell \left[f\left(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}\right) - f(\mathbf{x})\right] \right\|_2^2\right] \le \frac{1}{N}\, \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right\|_2^2 \left\|\mathbf{h}\mathbf{V}\right\|_2^2\right] M_1^2 \left(\sum_{\ell=1}^{L} \left|C_\ell \beta_\ell\right|\right)^2 \le \frac{1}{N}\, \mathbb{E}\left[\left\| G^{-1}(\mathbf{x})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right\|_2^2 \left\|\mathbf{V}\right\|_2^2\right] \left\|\mathbf{h}\right\|_2^2 M_1^2 \left(\sum_{\ell=1}^{L} \left|C_\ell \beta_\ell\right|\right)^2,$$
bearing in mind the Hölder inequality. The results hold using $\mathbf{R} := \mathbf{V}/\sigma$, and the fact that when $V_k \sim \mathcal{U}(-\xi, \xi)$, $\left\|\mathbf{V}\right\|_2^2 \le d \xi^2$ and $\sigma^2 = \xi^2/3$.

References

1. Rosenblatt, M. Remarks on a Multivariate Transformation. Ann. Math. Statist. 1952, 23, 470–472.
2. Nataf, A. Détermination des distributions dont les marges sont données. Comptes Rendus de l'Académie des Sciences 1962, 225, 42–43.
3. Joe, H. Dependence Modeling with Copulas; Chapman & Hall/CRC: London, 2014.
4. McNeil, A.J.; Frey, R.; Embrechts, P. Quantitative Risk Management; Princeton University Press: Princeton and Oxford, 2015.
5. Navarro, J.; Ruiz, J.M.; Aguila, Y.D. Multivariate weighted distributions: a review and some extensions. Statistics 2006, 40, 51–64.
6. Sklar, A. Fonctions de Répartition à n Dimensions et Leurs Marges. Publications de l'Institut de Statistique de l'Université de Paris 1959, 8, 229–231.
7. Durante, F.; Ignazzi, C.; Jaworski, P. On the class of truncation invariant bivariate copulas under constraints. Journal of Mathematical Analysis and Applications 2022, 509, 125898.
8. Skorohod, A.V. On a representation of random variables. Theory Probab. Appl. 1976, 21, 645–648.
9. Lamboni, M.; Kucherenko, S. Multivariate sensitivity analysis and derivative-based global sensitivity measures with dependent variables. Reliability Engineering & System Safety 2021, 212, 107519.
10. Lamboni, M. Efficient dependency models: Simulating dependent random variables. Mathematics and Computers in Simulation 2022.
11. Lamboni, M. On exact distribution for multivariate weighted distributions and classification. Methodology and Computing in Applied Probability 2023, 25, 1–41.
12. Lamboni, M. Measuring inputs-outputs association for time-dependent hazard models under safety objectives using kernels. International Journal for Uncertainty Quantification 2024, -, 1–17.
13. Kucherenko, S.; Klymenko, O.; Shah, N. Sobol' indices for problems defined in non-rectangular domains. Reliability Engineering & System Safety 2017, 167, 218–231.
14. Lamboni, M. On dependency models and dependent generalized sensitivity indices. arXiv preprint arXiv:2104.12938, 2021.
15. Lamboni, M. Derivative Formulas and Gradient of Functions with Non-Independent Variables. Axioms 2023, 12.
16. Nemirovsky, A.; Yudin, D. Problem Complexity and Method Efficiency in Optimization; Wiley & Sons: New York, 1983; p. 404.
17. Patelli, E.; Pradlwarter, H. Monte Carlo gradient estimation in high dimensions. International Journal for Numerical Methods in Engineering 2010, 81, 172–188.
18. Agarwal, A.; Dekel, O.; Xiao, L. Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback. In COLT; Citeseer, 2010; pp. 28–40.
19. Bach, F.; Perchet, V. Highly-Smooth Zero-th Order Online Optimization. In 29th Annual Conference on Learning Theory; Feldman, V., Rakhlin, A., Shamir, O., Eds.; 2016; Vol. 49, pp. 257–283.
20. Akhavan, A.; Pontil, M.; Tsybakov, A.B. Exploiting higher order smoothness in derivative-free optimization and continuous bandits. In NIPS '20; Curran Associates Inc.: Red Hook, NY, USA, 2020.
21. Sobol, I.M.; Kucherenko, S. Derivative based global sensitivity measures and the link with global sensitivity indices. Mathematics and Computers in Simulation 2009, 79, 3009–3017.
22. Kucherenko, S.; Rodriguez-Fernandez, M.; Pantelides, C.; Shah, N. Monte Carlo evaluation of derivative-based global sensitivity measures. Reliability Engineering and System Safety 2009, 94, 1135–1148.
23. Lamboni, M.; Iooss, B.; Popelin, A.L.; Gamboa, F. Derivative-based global sensitivity measures: General links with Sobol' indices and numerical tests. Mathematics and Computers in Simulation 2013, 87, 45–54.
24. Roustant, O.; Fruth, J.; Iooss, B.; Kuhnt, S. Crossed-derivative based sensitivity measures for interaction screening. Mathematics and Computers in Simulation 2014, 105, 105–118.
25. Fruth, J.; Roustant, O.; Kuhnt, S. Total interaction index: A variance-based sensitivity index for second-order interaction screening. Journal of Statistical Planning and Inference 2014, 147, 212–223.
26. Lamboni, M. Derivative-based generalized sensitivity indices and Sobol' indices. Mathematics and Computers in Simulation 2020, 170, 236–256.
27. Lamboni, M. Derivative-based integral equalities and inequality: A proxy-measure for sensitivity analysis. Mathematics and Computers in Simulation 2021, 179, 137–161.
28. Lamboni, M. Weak derivative-based expansion of functions: ANOVA and some inequalities. Mathematics and Computers in Simulation 2022, 194, 691–718.
29. Bobkov, S. Isoperimetric and Analytic Inequalities for Log-Concave Probability Measures. The Annals of Probability 1999, 27, 1903–1921.
30. Roustant, O.; Barthe, F.; Iooss, B. Poincaré inequalities on intervals - application to sensitivity analysis. Electron. J. Statist. 2017, 11, 3081–3119.
31. Le Dimet, F.X.; Talagrand, O. Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus A: Dynamic Meteorology and Oceanography 1986, 38, 97–110.
32. Le Dimet, F.X.; Ngodock, H.E.; Luong, B.; Verron, J. Sensitivity analysis in variational data assimilation. Journal of the Meteorological Society of Japan 1997, 75, 245–255.
33. Cacuci, D.G. Sensitivity and Uncertainty Analysis - Theory; Chapman & Hall/CRC, 2005.
34. Gunzburger, M.D. Perspectives in Flow Control and Optimization; SIAM: Philadelphia, 2003.
35. Borzi, A.; Schulz, V. Computational Optimization of Systems Governed by Partial Differential Equations; SIAM: Philadelphia, 2012.
36. Ghanem, R.; Higdon, D.; Owhadi, H. Handbook of Uncertainty Quantification; Springer International Publishing, 2017.
37. Guidotti, E. calculus: High-Dimensional Numerical and Symbolic Calculus in R. Journal of Statistical Software 2022, 104, 1–37.
38. Ancell, B.; Hakim, G.J. Comparing Adjoint- and Ensemble-Sensitivity Analysis with Applications to Observation Targeting. Monthly Weather Review 2007, 135, 4117–4134.
39. Pradlwarter, H. Relative importance of uncertain structural parameters. Part I: algorithm. Computational Mechanics 2007, 40, 627–635.
40. Polyak, B.; Tsybakov, A. Optimal accuracy orders of stochastic approximation algorithms. Probl. Peredachi Inf. 1990, 45–53.
41. Zemanian, A. Distribution Theory and Transform Analysis: An Introduction to Generalized Functions, with Applications; Dover Books on Advanced Mathematics; Dover Publications, 1987.
42. Strichartz, R. A Guide to Distribution Theory and Fourier Transforms; Studies in Advanced Mathematics; CRC Press: Boca Raton, 1994.
43. Lamboni, M. Kernel-based Measures of Association Between Inputs and Outputs Using ANOVA. Sankhya A 2024.
44. Gilbert, P.; Varadhan, R. R-package numDeriv: Accurate Numerical Derivatives; CRAN Repository, 2019.
45. Soetaert, K.; et al. R-package deSolve: Solvers for Initial Value Problems of Differential Equations; CRAN Repository, 2022.
Figure 1. Exact gradient versus estimated gradients using L = 1 (∘) and L = 2 (+) of the QoI by considering the inputs as independent (formal gradients).
Figure 2. Exact gradient versus estimated gradients using L = 1 (∘) and L = 2 (+) of the QoI by considering the auto-correlations among the inputs (dependent gradients).
Figure 3. Exact gradient versus estimated gradients using L = 2 (∘) and L = 3 (+) of the QoI by considering the auto-correlations among the inputs (dependent gradients).
Table 1. Values of $Err$ for three different approximations of the formal gradients.

Number of total model evaluations (i.e., LN):   100      150      200      200      1000     1000
FDM ([44])                                        -        -        -       0.005     -        -
MC ([17])                                        0.042     -        -        -        -        -
This paper                                       0.035    0.014    0.009    0.009    0.0020   0.00199
                                                (L = 1)  (L = 1)  (L = 1)  (L = 2)  (L = 1)  (L = 2)