2.1. Statistical simulation and optimization framework
Statistical simulation for critical nodes is based on random matrix theory, which uses the probability distribution of eigenvalues such as the Tracy-Widom (TW) distribution [29].
The critical node detection problem can be formulated as a special case of clustering, in which the critical nodes are assigned to a particular cluster while the remaining nodes form disconnected clusters. The number of singleton clusters increases as the number of critical nodes increases.
The largest eigenvalue of the adjacency matrix associated with the critical node cluster follows the TW distribution. Instead of using a parametric bootstrap to estimate the TW distribution, it is computationally more efficient to run a few simulations to compute the mean and the variance of the distribution.
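As an illustration of this simulation step, the sketch below (not the authors' code) estimates the mean and variance of the centred and scaled largest eigenvalue of a symmetric Gaussian (GOE-type) random matrix, whose limiting law is the TW distribution; the matrix size, number of replications, and the standard GOE scaling constants are assumptions made for the example.

```python
import numpy as np

def largest_eigenvalue_stats(n=200, n_sims=500, seed=0):
    """Monte Carlo estimate of the mean and variance of the centred and
    scaled largest eigenvalue of a GOE-type random matrix (Tracy-Widom limit)."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_sims)
    for s in range(n_sims):
        a = rng.normal(size=(n, n))
        goe = (a + a.T) / np.sqrt(2.0)            # symmetric Gaussian matrix
        lam_max = np.linalg.eigvalsh(goe)[-1]     # largest eigenvalue
        # standard centring and scaling under which the limit is Tracy-Widom
        samples[s] = (lam_max - 2.0 * np.sqrt(n)) * n ** (1.0 / 6.0)
    return samples.mean(), samples.var()

if __name__ == "__main__":
    mean, var = largest_eigenvalue_stats()
    print(f"simulated TW mean ~ {mean:.3f}, variance ~ {var:.3f}")
```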
Figure 1 illustrates the statistical simulation-optimization framework used in this study.
To compute the eigenvalues of the network graph, we use the spectral clustering method. First, the Laplacian matrix is computed and its eigenvectors are generated. The eigenvectors form an n-by-n matrix in which each row represents a node and each column corresponds to an eigenvector associated with one eigenvalue. The eigenvalues are sorted in increasing order, and those close to zero are removed.
Once the eigenvalues are sorted, the eigenvectors corresponding to the k largest eigenvalues are chosen and stored in a new n-by-k matrix, which is subsequently normalized. The TW distribution can then be assessed on this new matrix of eigenvalues. If the eigenvalues follow the TW distribution, the clustering method described below is applied to the normalized matrix to obtain the label and score of each bank.
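The following sketch outlines the embedding steps just described (Laplacian, eigendecomposition, removal of near-zero eigenvalues, selection of the k largest, row normalization); the function name, the unnormalized Laplacian, and the tolerance are assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_embedding(adjacency, k, tol=1e-10):
    """Build the graph Laplacian, keep eigenvectors of the k largest
    non-trivial eigenvalues, and row-normalize them for clustering."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency        # unnormalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(laplacian)    # ascending eigenvalues
    keep = eigvals > tol                            # drop eigenvalues close to zero
    eigvals, eigvecs = eigvals[keep], eigvecs[:, keep]
    idx = np.argsort(eigvals)[-k:]                  # k largest remaining eigenvalues
    embedding = eigvecs[:, idx]                     # n-by-k matrix
    norms = np.linalg.norm(embedding, axis=1, keepdims=True)
    return embedding / np.where(norms == 0.0, 1.0, norms), eigvals[idx]
```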
The simulation yields a normalized matrix of eigenvalues that can be used to compute the value of the signed weight on each edge in a cluster; the critical nodes of that cluster can then be computed with the following Integer Programming (IP) optimization model.
s.t.
In this model, a binary variable is equal to 1 if node i is assigned to cluster k, and c_max denotes the maximum number of clusters formed. After the clusters are formed, the critical nodes are identified using a connected node pairs (CNP) reduction model that minimizes the total connectivity of the network. The resulting clusters are indexed by l = 1, . . ., L, each with its own number of nodes. Once the critical nodes are identified, resources can be allocated to improve the resilience of the network.
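For intuition, one widely used connected-pair formulation from the critical node literature is sketched below; the notation (the node set V, edge set E, pair variables u_ij, removal variables v_i, and removal budget K) comes from that literature and is not necessarily the exact CNP reduction model applied in this study.

```latex
% Sketch of a standard connected-node-pairs IP (literature formulation, not
% necessarily the exact CNP reduction model used here):
% v_i = 1 if node i is removed (critical); u_ij = 1 if nodes i and j remain connected.
\begin{align*}
\min \quad & \sum_{i<j} u_{ij} \\
\text{s.t.} \quad & u_{ij} + v_i + v_j \ge 1 && \forall (i,j)\in E, \\
& u_{ij} + u_{jk} - u_{ik} \le 1 && \forall \text{ distinct } i,j,k \text{ (and permutations)}, \\
& \sum_{i\in V} v_i \le K, \\
& u_{ij},\, v_i \in \{0,1\}.
\end{align*}
```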
2.2. Optimization model for redundancy allocation
Before we present a general IP model for the GRAP, the notation used in the optimization model is given as follows:
Parameters and Variables:
D: number of potential disasters plus 1 (the last one corresponding to no disaster occurring),
the probability of disaster d occurring,
M: number of components the Smart Grid needs to perform,
the importance weight of component (node) m in the Smart Grid, and the number of solutions (assets) available for component (node) m to select from,
a binary decision variable equal to 1 if solution i is selected for component (node) m, and 0 otherwise,
the cost of selecting solution i for component (node) m,
the survivability of solution i for component (node) m against disaster d,
the failure probability of solution i for component (node) m against disaster d (i.e., one minus its survivability),
a real-valued function of the selection decisions for component (node) m against disaster d, giving the total contribution of applying all or some of the available solutions, and a utility function defined on that total contribution.
Here, we give a general redundancy allocation problem (GRAP). As mentioned earlier, the GRAP has a variety of applications in different settings (see [32,33] for explanations, examples, and several heuristic algorithms).
The objective function in this case aims to maximize the overall survivability of all components (nodes) against all potential disasters. Note that a component (node) m fails against disaster d only when all of its selected solutions fail at the same time. For a component (node) m and a disaster d, the total contribution of applying all or some of the available solutions is given by the real-valued function introduced above, and the utility of such a solution application is obtained by evaluating the utility function at this total contribution. It is worth mentioning that the GRAP is written in a generic format. When estimating the functions and parameters, one possibility is to use game theory; see, for example, [40,41] for a survey.
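As a small numerical illustration of the failure logic described above (made-up probabilities and a hypothetical variable name, not data from the study), the snippet below computes the survivability of a single component against a single disaster when the component fails only if every selected solution fails.

```python
import math

# hypothetical failure probabilities of the solutions selected for one
# component (node) against one disaster
selected_failure_probs = [0.20, 0.10, 0.30]

component_failure = math.prod(selected_failure_probs)    # 0.2 * 0.1 * 0.3 = 0.006
component_survivability = 1.0 - component_failure        # 0.994

print(component_failure, component_survivability)
```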
Moreover, it is important to note that the GRAP is a non-linear integer program. However, if the utility function is linear and the total contribution of each component is a weighted sum of its selection variables with constant weights, then the objective function is separable and linear, which is a special case of the generalized assignment problem (GAP) [37]. Although the GAP is known to be strongly NP-hard, it is easier to solve than a non-linear optimization problem with a non-separable objective function such as the GRAP. A variety of exact and heuristic algorithms are available for the GAP (see, for example, [28,29,32,33] for recent surveys).
In what follows, we demonstrate how the GRAP can be transformed into an optimization problem with a separable objective function and linear constraints, converting it into a GAP. Taking the logarithm of both sides of equality (8) converts the product of the failure probabilities of the selected solutions into a sum of logarithmic terms. With this substitution, the GRAP transforms into the following problem, in which the objective function is separable but still nonlinear. Note that the last term in the objective function is constant and can therefore be ignored.
Since the disaster occurrence probabilities, d = 1, …, D, are an array of constants in the objective function and are not part of any constraint, in order to maximize S* we need, for a given d, to optimize only the corresponding inner term. Now, since the component importance weights, m = 1, …, M, are also an array of constants and the only decision variables are the solution-selection variables, for each m the component's failure term must be made as small as possible under the constraints, and thus the corresponding sum of logarithmic terms must be made as large as possible. This proves that the RAP is equivalent to the following linear integer program.
s.t. (5-7) and (9-10)
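To make the equivalence argument above concrete, the sketch below (illustrative only: made-up failure probabilities and a simple cardinality rule standing in for constraints (5-7) and (9-10)) enumerates the feasible selections for one component and one disaster and checks that maximizing the sum of negative log failure probabilities picks the same selection as maximizing survivability directly.

```python
import itertools
import math

# hypothetical failure probabilities of four candidate solutions for one
# component against one disaster, with a toy rule: select at most two
q = [0.30, 0.25, 0.15, 0.40]
max_select = 2

def survivability(sel):
    # the component fails only if every selected solution fails
    return 1.0 - math.prod(q[i] for i in sel)

def log_score(sel):
    # linearized objective: sum of -log(failure probability) over selections
    return sum(-math.log(q[i]) for i in sel)

feasible = [s for r in range(1, max_select + 1)
            for s in itertools.combinations(range(len(q)), r)]

best_direct = max(feasible, key=survivability)
best_linear = max(feasible, key=log_score)
print(best_direct, best_linear)   # both criteria select the same solutions
```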
Accordingly, we can restate the RAP as the following generalized assignment problem.
s.t.
Constraints (19) impose the budget (capacity) limit. Following our recently published method (the r-flip paper [43] and recent papers [35,44]), we implemented an r-flip local search heuristic to improve the assignment. In this r-flip heuristic, we choose r = 2, 3, and 4 for the assignment of both components (nodes) and the assets that the components (nodes) choose from, in order to improve survivability. The improvement process based on the r-flip heuristic is implemented within a Tabu Search algorithm with an embedded strategic oscillation, as detailed in the cited r-flip papers.
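The sketch below is not the authors' implementation; it illustrates the r-flip idea for r = 2 on a 0-1 selection vector under a single budget constraint, with made-up coefficients and without the Tabu Search and strategic oscillation layers.

```python
import itertools

def two_flip_improve(x, value, cost, budget, max_passes=50):
    """Minimal r-flip local search (r = 2): repeatedly flip pairs of 0-1
    selection variables whenever the flip stays within budget and strictly
    improves the objective value."""
    def total(vec, coeff):
        return sum(c for c, xi in zip(coeff, vec) if xi)

    x = list(x)
    for _ in range(max_passes):
        improved = False
        for i, j in itertools.combinations(range(len(x)), 2):
            cand = list(x)
            cand[i], cand[j] = 1 - cand[i], 1 - cand[j]   # flip two variables
            if total(cand, cost) <= budget and total(cand, value) > total(x, value):
                x, improved = cand, True
        if not improved:
            break
    return x

# toy usage with made-up objective coefficients (e.g., -log failure probabilities)
values = [1.2, 0.9, 1.9, 0.5, 1.4]
costs  = [3.0, 2.0, 4.0, 1.0, 2.5]
print(two_flip_improve([1, 0, 0, 1, 0], values, costs, budget=7.0))
```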