1. Introduction
The minimum dominating tree problem for weighted undirected graphs is to find a tree in a weighted undirected graph such that every vertex of the graph is either in the tree or adjacent to it, and the sum of the edge weights of the tree is minimized [1]. Here, adjacent means that there is an edge between the vertex and at least one vertex of the tree. The minimum dominating tree is a concept in graph theory and one of the important classes of tree structures in graph theory.
A closely related problem, the Minimum Connected Dominating Set (MCDS), has been extensively studied for building routing backbones in wireless sensor networks (WSNs) [2,3]. One goal of introducing the MCDS in WSNs is to minimize energy consumption: if two devices are too far apart, they may consume too much power to communicate [4,5]. Using a routing backbone to transmit messages greatly reduces energy consumption, which increases dramatically as the transmission distance grows [6]. However, some directly connected vertices in an MCDS may still be far apart, because the MCDS does not account for distance [7]. Considering the weight of each edge in the routing backbone is therefore better aligned with the goal of saving energy [8]. The Minimum Dominating Tree (MDT) problem was first proposed by Zhang et al. [9] for generating a routing backbone that is well adapted to broadcast protocols.
Shin et al. [1] proved that MDT is NP-hard and introduced an approximation framework for solving it. They also provided heuristic algorithms and mixed integer programming (MIP) formulations for the MDT problem. Adasme et al. [10] introduced two other MIP formulations, one based on a tree formulation in the bidirectional counterpart of the input graph, and the other obtained from a generalized spanning tree polyhedron. Adasme et al. [11] proposed a primal dyadic model for the minimum cost dominating tree problem and an effective inequality to improve the linear relaxation. Álvarez-Miranda et al. [12] proposed an exact solution framework that combines a primal-dual heuristic with a branch-and-cut approach, transforming the problem into a Steiner tree problem with additional constraints. Their framework solves most instances in the literature within three hours and proves the optimality of the solutions.
In recent years, efficient heuristic algorithms for the MDT problem have flourished. Sundar and Singh [13] proposed two meta-heuristic algorithms for the MDT problem, an Artificial Bee Colony (ABC-DT) algorithm and an Ant Colony Optimization (ACO-DT) algorithm. These two algorithms are the first meta-heuristics for the MDT problem and perform better than previous algorithms. They also provided 54 randomly generated instances in their work, which are considered challenging instances of the MDT problem and are widely used to evaluate the performance of MDT algorithms. Building on this work, Chaurasia and Singh [14] proposed an evolutionary algorithm with guided mutation (EA/G-MP) for the MDT problem. Dražic et al. [15] proposed a variable neighborhood search algorithm for the MDT problem. Singh and Sundar [16] proposed another artificial bee colony (ABC-DTP) algorithm for the MDT problem; ABC-DTP differs from ABC-DT in the way it generates initial solutions and in the strategy for determining neighboring solutions. Their experiments show that, for the MDT problem, ABC-DTP outperforms all existing problem-specific heuristics and meta-heuristics available in the literature. Hu et al. [17] proposed a hybrid algorithm combining a genetic algorithm with iterated local search (GAITLS) to solve the dominating tree problem; experimental results on classical instances show that the method outperforms existing algorithms. Xiong et al. [18] presented a two-level meta-heuristic (TLMH) algorithm for the MDT problem, with a solution sampling phase and two local search procedures nested in a hierarchical structure; the results demonstrate the efficiency of the proposed algorithm in terms of solution quality compared with existing meta-heuristics.
Metaheuristics have been shown to be very effective in solving many challenging real-world problems [19]. However, for some problems, due to the complexity of the problem structure and the large search space, the classical metaheuristic framework fails to produce the desired results [20]. Many researchers have therefore relied on composite neighborhood structures, which, when properly designed, have mostly proven successful [21]. These methods include Variable Depth Search (VDS), which explores a large search space through a series of successive simple neighborhood search operations. Although the basic concepts of VDS algorithms date back to the 1970s [22], researchers have maintained sustained enthusiasm for this technique [23,24]. For a more detailed survey of VDS, we refer to Ahuja et al. [25,26,27]. Another idea for dealing with problems with complex structure is to use a hierarchical metaheuristic approach, in which several heuristics are combined in a nested structure. Wu et al. [28] successfully implemented a two-level iterated local search for a network design problem with traffic grooming. According to their analysis, hierarchical metaheuristics must be carefully designed to balance the complexity of the algorithm and its performance; in particular, keeping the outer framework as simple as possible makes the algorithm converge faster. Pop et al. [29] proposed a two-level solution approach for the generalized minimum spanning tree problem. Carrabs et al. [30] introduced a meta-heuristic algorithm implementing a two-level structure to solve the all-colors shortest path problem. Contreras Bolton and Parada [31] proposed an iterated local search method with a two-level solution representation for the generalized minimum spanning tree problem.
In this paper, we design a dual neighborhood search (DNS) meta-heuristic algorithm for the MDT problem that uses two neighborhood moves to perform the search and combines them with a tabu search to escape local optima. The DNS algorithm is described in detail in Section 2, the experimental results of the DNS algorithm and comparisons with other algorithms are given in Section 3, and comparative experiments within the DNS algorithm are reported in Section 4.
2. Dual Neighborhood Search
2.1. Main Framework
The basic idea of our proposed DNS algorithm is to tackle the MDT problem by optimizing the weight of the candidate dominating tree using a neighborhood-search-based meta-heuristic with two neighborhood move operators. The search space of DNS consists of all the minimum spanning trees of all the possible dominating sets of the instance graph. The proposed DNS algorithm optimizes the following objective function:

f(T) = α × D(T) + W(T),

where T stands for the current configuration, i.e., the candidate dominating tree; notations X and E denote the vertex and edge sets of T, respectively. Function D(T) calculates the number of vertices not dominated by T. Function W(T) calculates the weight of the minimum spanning tree of T. And α is a constant parameter that balances the importance of D(T) and W(T). T is a feasible solution to the minimum dominating tree problem if and only if D(T) = 0.
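This objective can be sketched in Java (the paper's implementation language). The balance constant ALPHA and all identifiers below are illustrative assumptions, not the authors' code, and the MST weight is assumed to be computed elsewhere:

```java
import java.util.*;

public class Objective {
    static final int ALPHA = 1000; // illustrative balance constant (assumption)

    // adj: adjacency sets of the whole graph; X: vertex set of the candidate tree.
    // Counts vertices that are neither in X nor adjacent to a vertex of X.
    static int undominated(List<Set<Integer>> adj, Set<Integer> X) {
        int count = 0;
        for (int v = 0; v < adj.size(); v++) {
            if (X.contains(v)) continue;
            boolean dominated = false;
            for (int u : adj.get(v)) if (X.contains(u)) { dominated = true; break; }
            if (!dominated) count++;
        }
        return count;
    }

    // f(T) = ALPHA * D(T) + W(T); the MST weight is passed in precomputed.
    static int f(List<Set<Integer>> adj, Set<Integer> X, int mstWeight) {
        return ALPHA * undominated(adj, X) + mstWeight;
    }
}
```

With a large enough ALPHA, any configuration that dominates all vertices scores better than any configuration that misses one, so minimizing f drives the search toward feasibility first and tree weight second.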
The algorithm comprises several key steps. First, an initial solution is generated, followed by a neighborhood evaluation. Then the best neighborhood move is selected and executed iteratively. During the iterations, the best configuration found so far is recorded. The framework of the algorithm is given in pseudo-code as follows:
In Algorithm 1, T0 represents the initial configuration, T* the recorded ever-best solution, and T the current configuration. In each iteration, the sub-procedure Do_NeighborEvaluate evaluates all the neighborhood moves of the current configuration. The following two sub-procedures select and execute the best move. The termination condition can be a time or iteration limit.
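The outer framework can be sketched as follows; the configuration is abstracted to an integer score so the skeleton stays self-contained, and all names are assumptions rather than the paper's actual code:

```java
import java.util.function.IntUnaryOperator;

public class DnsFramework {
    // step: one evaluate -> select -> execute iteration, returning the new score.
    // Returns the best (smallest) score observed, i.e., the ever-best record.
    static int run(int initial, IntUnaryOperator step, int maxIterations) {
        int current = initial, best = initial;
        for (int it = 0; it < maxIterations; it++) { // termination condition
            current = step.applyAsInt(current);      // Do_NeighborEvaluate +
                                                     // Select/Execute_BestMove
            if (current < best) best = current;      // record the ever-best
        }
        return best;
    }
}
```

The key point the skeleton shows is that the current configuration may worsen from one iteration to the next (e.g., under tabu moves or perturbations), so the ever-best solution must be tracked separately.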
2.2. Initial Solution Generation
The proposed DNS algorithm uses a feasible dominating tree as the initial configuration. The sub-procedure Generate_InitialSolution generates this initial dominating tree. It first finds the minimum spanning tree of the whole graph, and then trims the tree by removing leaves iteratively until removing one more leaf would break the domination property of the tree. The pseudo-code of this procedure is given in Algorithm 2.
The procedure starts from the minimum spanning tree generated by Kruskal's algorithm. It then repeatedly deletes the removable leaf with the largest incident edge weight, terminating when no more leaves can be deleted. The algorithm returns a feasible dominating tree as the initial configuration. In the following sections, we focus on the meta-heuristic part of the proposed DNS algorithm, i.e., the neighborhood structure and its evaluation.
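A minimal sketch of this construction, assuming a simple edge-list graph representation (all identifiers are illustrative, not the authors' code):

```java
import java.util.*;

public class InitialSolution {
    static int find(int[] p, int v) { return p[v] == v ? v : (p[v] = find(p, p[v])); }

    // Kruskal's algorithm; edges are {u, v, w}. Returns the MST edge list.
    static List<int[]> kruskal(int n, List<int[]> edges) {
        int[] p = new int[n];
        for (int i = 0; i < n; i++) p[i] = i;
        List<int[]> sorted = new ArrayList<>(edges);
        sorted.sort(Comparator.comparingInt(e -> e[2]));
        List<int[]> mst = new ArrayList<>();
        for (int[] e : sorted) {
            int a = find(p, e[0]), b = find(p, e[1]);
            if (a != b) { p[a] = b; mst.add(e); }
        }
        return mst;
    }

    static boolean dominates(List<Set<Integer>> adj, Set<Integer> X) {
        for (int v = 0; v < adj.size(); v++) {
            if (X.contains(v)) continue;
            boolean ok = false;
            for (int u : adj.get(v)) if (X.contains(u)) { ok = true; break; }
            if (!ok) return false;
        }
        return true;
    }

    static int degree(List<int[]> tree, int v) {
        int d = 0;
        for (int[] e : tree) if (e[0] == v || e[1] == v) d++;
        return d;
    }

    // MST first, then repeatedly delete the heaviest removable leaf.
    static Set<Integer> build(int n, List<int[]> edges, List<Set<Integer>> adj) {
        List<int[]> tree = kruskal(n, edges);
        Set<Integer> X = new HashSet<>();
        for (int i = 0; i < n; i++) X.add(i);
        while (true) {
            int[] bestEdge = null; int bestLeaf = -1;
            for (int[] e : tree) {
                for (int end = 0; end < 2; end++) {
                    int leaf = e[end];
                    if (degree(tree, leaf) != 1) continue;  // not a leaf
                    Set<Integer> trial = new HashSet<>(X);
                    trial.remove(leaf);                      // tentatively drop it
                    if (dominates(adj, trial) && (bestEdge == null || e[2] > bestEdge[2])) {
                        bestEdge = e; bestLeaf = leaf;
                    }
                }
            }
            if (bestEdge == null) break;                     // nothing removable
            tree.remove(bestEdge);
            X.remove(bestLeaf);
        }
        return X;
    }
}
```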
2.3. Definitions
For a better description, we first define some important concepts and notations used in the proposed DNS algorithm.
- X: the set of vertices in the current dominating tree.
- X̄: the set of vertices dominated by X but not in X.
- ND: an array holding the numbers of un-dominated vertices; its length is the number of graph vertices. ND[i] denotes the number of vertices not dominated by the new X if vertex i is moved from X to X̄ (or from X̄ to X).
- MW: an array of minimum spanning tree weights for X; its length is the number of graph vertices. MW[i] denotes the weight of the new minimum spanning tree of X if vertex i is moved from X to X̄ (or from X̄ to X).
The following example illustrates how ND and MW are calculated.
As shown in Figure 1, the current dominating tree T contains two vertices, B and D; therefore X = {B, D}. The vertices dominated by X are A, C, and E; thus X̄ = {A, C, E}. We map the vertices A, B, C, D, and E to the array subscripts 0, 1, 2, 3, and 4, respectively. To evaluate a neighborhood move, the algorithm takes a vertex out of its current set and puts it into the set on the other side. For vertex A, the number of vertices that are not dominated by the new X after this move is 0, so ND[0] is assigned 0. The weight of the new minimum spanning tree of X is 13, so MW[0] is assigned 13. After evaluating all the neighborhood moves, the resulting arrays ND and MW hold these values for every vertex and are used to evaluate the neighborhood moves.
2.4. Neighborhood Move and Evaluation
There are two kinds of neighborhood moves in the DNS algorithm: one takes a vertex out of X and puts it into X̄, and the other takes a vertex out of X̄ and puts it into X. In each iteration, the best move among the two kinds of neighborhood moves is selected and performed. There are two criteria for evaluating the quality of a move: one is the domination, and the other is the weight of the dominating tree. The pseudo-code for the neighborhood evaluation is described in Algorithm 3.
The evaluation is done by tentatively moving each vertex to the other set and calculating the corresponding ND and MW values. Based on these two arrays, the best move is selected as described in Algorithm 4.
Procedure Select_BestMove picks the move with the smallest ND and MW values, giving higher priority to ND. The selected best move is then performed by Algorithm 5.
Procedure Execute_BestMove moves the selected vertex to X̄ if it is in X, and vice versa. After the move, the minimum spanning tree of the new X is calculated using Kruskal's algorithm and assigned to the current configuration. The following example illustrates how the best move is evaluated and performed.
As shown in Figure 2, the current dominating tree T, the tree vertex set X, and the dominated set X̄ are given. To evaluate vertex A, we first move it from X̄ to X. The number of vertices that are not dominated by the new X at this point is 0, thus ND[0] = 0. The minimum spanning tree weight of the new X is 13, thus MW[0] = 13. We then move A back to its original set, which completes the evaluation of A. Vertices B, C, D, and E are evaluated sequentially by the same process, filling the arrays ND and MW.
Then we pick the best neighborhood move by finding the minimum value in ND, using MW only to break ties. There are three minimum values in ND, corresponding to A, C, and E. Comparing the MW values of these three vertices, the minimum value is 3, corresponding to vertex E. Therefore, the best vertex is E, and the best neighborhood move is to move E. After the move, we calculate the minimum spanning tree of the new X; the new minimum spanning tree has a weight of 3.
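The selection rule above, minimize ND first and break ties on MW, can be sketched as follows (the array values in the usage below are illustrative, not taken from the figures):

```java
public class BestMove {
    // Lexicographic choice on (ND, MW): domination violations dominate weight.
    static int select(int[] nd, int[] mw) {
        int best = 0;
        for (int i = 1; i < nd.length; i++)
            if (nd[i] < nd[best] || (nd[i] == nd[best] && mw[i] < mw[best]))
                best = i;
        return best;
    }
}
```

For example, with nd = {0, 9, 0, 9, 0} and mw = {13, 99, 8, 99, 3}, three vertices tie on ND and the smallest MW value selects index 4, mirroring the tie-breaking in the example above.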
2.5. Fast Neighborhood Evaluation
To improve the efficiency of the algorithm, this paper proposes a method to dynamically update the neighborhood evaluation arrays ND and MW.
2.5.1. Fast Evaluation of ND
The number of un-dominated vertices may increase or remain unchanged when a vertex is moved from X to X̄. Any newly un-dominated vertex must originally be in X̄ and connected to the moved vertex. Since the number of un-dominated vertices of the current configuration is kept at zero throughout the search, we can count the newly introduced un-dominated vertices by counting the vertices in X̄ for which the moved vertex is their only connection to X.
When we move a vertex from X̄ to X, the number of un-dominated vertices may decrease or remain the same. Because the current configuration dominates all vertices throughout the algorithm, the number of un-dominated vertices after this kind of move is still 0. The above observations can be utilized to compute ND dynamically without traversing the entire graph. The formula is as follows:

ND[i] = |{ v ∈ X̄ : N(v) ∩ X = {i} }| for i ∈ X, and ND[i] = 0 for i ∈ X̄,

where N(v) denotes the set of neighbors of v in the graph.
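The counting rule can be sketched as follows, assuming the solver incrementally maintains a counter domCnt[v] = |N(v) ∩ X| for every vertex whenever a vertex enters or leaves X (this counter and all names are assumptions made for the sketch):

```java
import java.util.*;

public class FastND {
    // ND value for removing vertex i from X: count the dominated outside
    // vertices whose only neighbor in X is i (the paper's counting rule).
    static int ndRemove(List<Set<Integer>> adj, int[] domCnt, Set<Integer> X, int i) {
        int nd = 0;
        for (int v : adj.get(i))
            if (!X.contains(v) && domCnt[v] == 1) nd++;  // i was v's only dominator
        return nd;
    }
}
```

Only the neighbors of the moved vertex are touched, so each evaluation costs O(deg(i)) instead of a full graph traversal.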
2.5.2. Fast Evaluation of MW
For MW, we use a dynamic version of Kruskal's algorithm. The algorithm dynamically maintains a set E_X, the set of edges of the subgraph induced by X, i.e., the set of edges whose two endpoints are both in X. The set E_X is sorted by edge weight from smallest to largest. When an X-to-X̄ move is performed, the edges connecting the moved vertex to X are deleted from the set. Similarly, when an X̄-to-X move is performed, the edges connecting the moved vertex to X are inserted into the set. Note that edges must be inserted at the appropriate positions in E_X to guarantee that it stays sorted. The dynamic Kruskal's algorithm then exploits the fact that the edges before the first deletion or insertion position are sure to be in the new minimum spanning tree, and starts the normal procedure from that position. The pseudo-code for the dynamic Kruskal's algorithm is described in Algorithms 6 and 7.
In Algorithm 6, the notation (a, b) represents the edge connecting vertices a and b, and T is the minimum spanning tree of the current solution. The main job of this procedure is to update the set E_X. Algorithm 7 then computes the spanning tree dynamically according to E_X.
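A simplified sketch of this bookkeeping: it keeps the weight-sorted list of edges induced by X (called E_X here) up to date under moves and reruns Kruskal's algorithm over it. The paper's dynamic variant additionally resumes from the first changed position, reusing the unchanged prefix; that refinement is omitted here, and all names are illustrative:

```java
import java.util.*;

public class DynamicEdges {
    List<int[]> ex = new ArrayList<>();    // E_X: {u, v, w}, kept sorted by w

    void insert(int[] e) {                 // X̄ -> X move adds edges
        int lo = 0, hi = ex.size();
        while (lo < hi) {                  // binary search for the position
            int mid = (lo + hi) / 2;
            if (ex.get(mid)[2] < e[2]) lo = mid + 1; else hi = mid;
        }
        ex.add(lo, e);                     // keeps the list sorted
    }

    void removeIncident(int v) {           // X -> X̄ move drops v's edges
        ex.removeIf(e -> e[0] == v || e[1] == v);
    }

    int mstWeight(int n) {                 // Kruskal over the sorted list
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int w = 0;
        for (int[] e : ex) {
            int a = find(parent, e[0]), b = find(parent, e[1]);
            if (a != b) { parent[a] = b; w += e[2]; }
        }
        return w;
    }
    static int find(int[] p, int v) { return p[v] == v ? v : (p[v] = find(p, p[v])); }
}
```

Because E_X is already sorted, no re-sorting is needed per move; the dominant cost is the union-find scan, which the paper further shortens by restarting mid-list.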
The following example illustrates the above procedures:
As shown in Figure 3, the original tree is T, with the current sets X and X̄ and the sorted induced edge set E_X with its edge weights. Consider evaluating the move of vertex E from X̄ to X. Since E was originally in X̄, ND[4] = 0. The move adds the new edges connecting E to X; we insert these edges into the appropriate positions of E_X according to their weights, from smallest to largest. We then only need to restart Kruskal's algorithm from the position of the first inserted edge to determine the new minimum spanning tree, because the edges before that position must be in the new minimum spanning tree. The evaluated minimum spanning tree has weight 12, thus MW[4] is assigned 12.
2.6. Tabu Strategy and Aspiration Mechanism
The proposed DNS algorithm implements a tabu strategy: once a vertex is moved, it is prohibited from being moved again within a tabu tenure. The tabu strategy applies to both kinds of moves. Since X and X̄ do not intersect, only one tabu table is needed. The tabu tenure of moves from X̄ to X and of moves from X to X̄ are set to the number of vertices in X and in X̄, respectively, thus implementing dynamic tabu tenures. This tabu strategy improves accuracy and efficiency and makes it easier for the algorithm to escape local optima.
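The tabu bookkeeping, together with the aspiration override described next, can be sketched as follows (names assumed):

```java
public class Tabu {
    // tabuUntil[v]: first iteration at which v may move again.
    static void makeTabu(int[] tabuUntil, int v, int iteration, int tenure) {
        tabuUntil[v] = iteration + tenure;  // tenure = |X| or |X̄| per move kind
    }

    // A move of v is forbidden while its tenure lasts, unless it would
    // improve the ever-best solution (the aspiration mechanism).
    static boolean isForbidden(int[] tabuUntil, int v, int iteration, boolean improvesBest) {
        return tabuUntil[v] > iteration && !improvesBest;
    }
}
```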
To avoid missing good solutions, an aspiration strategy is introduced. If a tabu move would improve the ever-best solution, the search breaks its tabu status and selects it as a candidate best move.
Note that we do not describe the details of how the tabu and aspiration strategies are implemented in the previous pseudo-codes, in order to give a clearer layout for better understanding. For more details on tabu search, we refer readers to the literature [32].
2.7. Perturbation Strategy
To further improve the quality of the solution, the proposed DNS algorithm implements a perturbation strategy. The perturbation moves some vertices from X̄ to X randomly. The algorithm sets a parameter as the perturbation period: when the number of iterations reaches the perturbation period, a perturbation is triggered, and the iteration counter is reset to zero whenever the ever-best solution is updated within this period. There are another two parameters, the perturbation amplitude and the perturbation tabu tenure. The perturbation amplitude is the number of vertices taken out of X̄ during a perturbation. The perturbation tabu tenure is the tabu tenure used during the perturbation period. In addition, after a certain number of small perturbations, a larger perturbation is triggered to give the search process a larger spatial span; it is implemented by randomly moving one-third of the vertices from X̄ to X.
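A minimal sketch of the small perturbation (identifiers are assumptions; the larger perturbation would pass Xbar.size() / 3 as the amplitude):

```java
import java.util.*;

public class Perturb {
    // Move `amplitude` random vertices from Xbar into X.
    static void perturb(Set<Integer> X, Set<Integer> Xbar, int amplitude, Random rnd) {
        List<Integer> pool = new ArrayList<>(Xbar);
        Collections.shuffle(pool, rnd);    // random selection without repeats
        for (int i = 0; i < Math.min(amplitude, pool.size()); i++) {
            Xbar.remove(pool.get(i));
            X.add(pool.get(i));
        }
    }
}
```

Moving vertices into X only enlarges the tree side, so the perturbed configuration stays dominating and the subsequent trimming moves can descend from a different region of the search space.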
3. Algorithm Experimentation
3.1. Datasets and Experimental Protocols
The experiments are carried out on the following two data sets:
The DTP dataset was proposed by Dražic et al. [15], with the number of vertices ranging from 150 to 1000.
The Range dataset was proposed by Sundar and Singh [13], with the number of vertices ranging from 50 to 500 and a transmission range of 100 to 150 meters.
Both datasets are randomly generated and can be downloaded online or obtained from the authors. The DNS algorithm is implemented in Java (JDK17) and tested on a desktop computer equipped with an Intel® Xeon® W-2235 CPU @3.80GHz, with 16.0GB of RAM.
3.2. Calibration
In this section, we conduct experiments to fix the values of the key parameters of the DNS algorithm:
Parameter P, the first perturbation period. Values from 14 to 17 are tested.
Parameter A, the perturbation amplitude. Values from 7 to 12 are tested.
Parameter pt_in, the tabu tenure of the neighborhood move that takes a vertex from X̄ and puts it into X during perturbation. Values from 1 to 2 are tested.
Parameter pt_out, the tabu tenure of the neighborhood move that takes a vertex from X and puts it into X̄ during perturbation. Values from 3 to 8 are tested.
We select 13 representative instances for the calibration experiments: instances 200-400-1, 200-600-1, 300-600-1, and 300-1000-1 in DTP; instances 300-1, 400-1, and 500-1 in each of Range100, Range125, and Range150. The experiment proceeds as follows. First, we roughly test parameter combinations to select the more promising ones. Then, for each set of parameters, we run these 13 instances in sequence; each instance is run five times with different random seeds for 300 seconds each time. We compare the gap rates for each parameter setting. The gap is calculated as:

gap = (avg − best) / best × 100%,

where avg is the average result obtained and best is the known best objective value.
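A worked instance of the gap formula (values illustrative): an average result of 103 against a best known objective of 100 gives a gap of 3%.

```java
public class Gap {
    // Percentage gap between an average result and the best known objective.
    static double gap(double avg, double best) {
        return (avg - best) / best * 100.0;
    }
}
```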
Table 1 shows the result for the calibration experiment.
According to the experimental data, the minimum total gap rate is 0.075, corresponding to P = 15, A = 8, pt_in = 1, and pt_out = 4. In the following experiments, we set the parameters of the algorithm to this setting. Note that this experiment does not guarantee optimal parameter values, and the optimal scheme may vary from one benchmark to another. It can also be seen that the gap rate is small for all tested parameter combinations, indicating the robustness of the algorithm.
3.3. Comparison on DTP Data Set
In this section, we compare the proposed DNS algorithm with other methods from the literature on the DTP data set. There are two DTP datasets: dtp_large and dtp_small. Since all algorithms obtain the best results for dtp_small with little difference in speed, only the experimental results for dtp_large are shown here. The compared algorithms are the TLMH, VNS, and GAITLS algorithms. For each instance, 10 runs with different random seeds are made, each lasting 1000 seconds. The best and average objective values and the average running time are recorded for each instance. The experimental results and comparisons are shown in Table 2. Bolded numbers indicate that the current best value has been obtained and the result is not worse than those of the other algorithms. The star marks indicate instances for which the DNS algorithm improves the best objective reported in the literature.
From Table 2, it can be seen that our algorithm reaches the best value on most instances, and on the instances where it does not, the results are very close to the best. The overall best values are slightly worse than those of the TLMH algorithm but better than those of the VNS and GAITLS algorithms. The overall averages of our algorithm outperform those of the other algorithms, demonstrating its stability and higher speed. It also improves the best known solution for two instances.
3.4. Range Dataset Experiments
In this section, the widely used Range dataset with 54 instances is tested, and the DNS algorithm is compared with the TLMH, ACO-DT, EA/G-MP, and ABC-DTP algorithms. The results of the compared algorithms are the best results obtained with the best parameter settings reported in the original literature. Our algorithm is run 10 times on each instance with the previously calibrated best parameters and different random seeds. Each run lasts 1000 seconds, and the best and average objective values and the average running time are recorded. The results and comparisons are shown in Table 3, Table 4 and Table 5:
Our algorithm obtains the best solutions for most instances in the Range dataset, and the solutions that are not optimal are close to the optimal ones. It improves the best known solution for two instances and outperforms the TLMH algorithm in speed.