Preprint
Article

This version is not peer-reviewed.

Multiobjective Particle Swarm Optimization: A Survey of the State-of-the-Art

Submitted: 22 February 2025
Posted: 25 February 2025


Abstract
In the last decade, multiobjective particle swarm optimization (MOPSO) has emerged as one of the most powerful algorithms for solving multiobjective optimization problems (MOPs). It is becoming increasingly clear that MOPSO can handle complex MOPs through its competitive-cooperative framework. The goal of this paper is to provide a comprehensive review of MOPSO, from its basic principles to hybrid evolutionary strategies. To give readers insight into the prominent developments of MOPSO, we analyze how its key parameters influence the convergence and diversity of the particles' search. We then discuss the main advanced MOPSO methods, together with a theoretical analysis of multiobjective optimization performance metrics. Even though some hybrid MOPSO methods show promising multiobjective optimization performance, much room remains for further improvement, particularly in engineering applications, so further in-depth studies are required. This paper should motivate evolutionary computation researchers to pay more attention to this practical yet challenging area.

I. Introduction

Most practical engineering problems, such as those in electrical, electronic, and civil engineering, are multiobjective optimization problems (MOPs) [1]. MOPs not only contain multiple conflicting objectives simultaneously; the objectives are usually also time-varying and coupled with each other. The presence of multiple conflicting objectives gives rise to a set of trade-off solutions, known as the Pareto Front in the objective space and the Pareto Set in the decision space [2]. Since it is practically difficult to obtain the entire Pareto Front, an approximate set of non-dominated solutions is sought instead. The original multiobjective optimization approaches usually transform the MOP into several single-objective problems via weighting methods and then obtain a set of non-dominated optimal solutions. These traditional optimization methods have many drawbacks, such as high computational complexity and long runtimes, so they cannot meet requirements on speed, convergence, and diversity [3].
Generally speaking, to obtain an accurate solution set when solving complex MOPs, researchers have drawn lessons from nature and biology to design a class of multiobjective evolutionary algorithms (MOEAs) that improve convergence accuracy. As an important field of artificial intelligence, MOEAs have made many breakthroughs in algorithm theory and performance because of their intelligence and parallelism, and they have played an important role in scientific research and production practice. During the last two decades, MOEAs have become a major research direction because they have good global search ability and do not rely on a specific mathematical model or on special characteristics of the problem being solved [4,5]. The performance of MOEAs is mainly evaluated through the set of non-dominated solutions they obtain, using performance metrics that include convergence and diversity metrics. Typical MOEAs include the multiobjective genetic algorithm (MOGA) [6], the multiobjective differential evolution (MODE) algorithm [7], and others [8,9,10]. Deepening the research on computational intelligence algorithms in this way can promote the development of intelligent technology and drive innovation in many fields. Distinctively, the multiobjective particle swarm optimization (MOPSO) algorithm, modeled on bird flocks, exhibits complex intelligent behavior through the cooperation of simple individual particles and uses social information sharing within the flock to drive the evolutionary process. MOPSO thus realizes swarm intelligence that can exceed the intelligence of any outstanding individual particle [11]. Meanwhile, owing to its few key parameters, high convergence speed, and ease of implementation, MOPSO can handle many kinds of objective functions and constraints.
In the MOPSO community, the key parameters affect the exploitation and exploration abilities of the particles during the search [12]. Meanwhile, different updating methods guide the particles to search different areas and thus affect the performance of the whole population. As the number of iterations increases, more and more non-dominated solutions are generated [13], and as the computational load grows, the archive cannot accommodate all of them [14]. The convergence and diversity of the non-dominated solutions in the archive therefore become important. To achieve good convergence and diversity of the archived non-dominated solutions, three main optimization stages are of concern: the update of the archive, the selection of the global best, and the adjustment of the key flight parameters. Like all MOEAs, MOPSO needs an explicit diversity mechanism to preserve the non-dominated solutions in the archive. In [15], a selection procedure was proposed to prune the non-dominated solutions. To maintain good diversity in the archive, a novel reproduction operator based on differential evolution was presented, which can create potential solutions and accelerate the convergence toward the Pareto Set [16]. MOPSO emerged as a competitive and cooperative form of evolutionary computation in the last decade; one of its most distinctive features is the updating of the global best (gBest) and personal best (pBest) [17,18]. Aiming at the selection of gBest and pBest, a novel parallel cell coordinate system (PCCS) was proposed to accelerate the convergence of MOPSO by assessing the evolutionary environment [19]. Another important feature is parameter adjustment; for example, the inertia weight balances exploration and exploitation, and adjusting the acceleration coefficients also influences the movement of the particles.
In [20], a time-varying flight parameter mechanism was proposed for MOPSO algorithms. At present, many performance metrics exist in multiobjective optimization to assess the convergence and diversity of MOPSO. Moreover, different application environments for MOPSO call for different performance metrics and influence future development trends.
At present, especially in complex scientific fields, MOPSO has effectively solved problems that are difficult to describe in many complex systems. It breaks through the limitations of traditional multiobjective optimization algorithms and has been applied in many academic and industrial fields [61]. MOPSO has made encouraging progress in applications including automatic control systems [62], communication theory [63,66], medical engineering [64], electrical and electronic engineering [65], fuel and energy [67], and so on.
According to the analysis and studies on MOPSO, it has become a popular algorithm to solve the complex MOPs. The typical characteristics are summarized as follows.
1) Unlike the crossover and mutation operations of other optimization algorithms, MOPSO is much simpler and more straightforward to implement. As an MOEA modeled on bird flocks, MOPSO exhibits complex intelligent behavior through the cooperation of simple individual particles and uses social sharing within the group to drive the evolutionary process, thereby realizing swarm intelligence that can exceed the intelligence of any excellent individual particle.
2) As indicated by current studies, MOPSO has few key parameters. From the updating formulas of the MOPSO algorithm, it can be seen that the position and velocity of a particle are strongly influenced by these key parameters. The convergence of the algorithm is the fundamental guarantee of its applicability. The effect of the key parameters on convergence has been analyzed in detail, with the flight direction of the particles computed via a state-transition matrix, yielding the constraint conditions [97] that the parameters must satisfy for the particle trajectories to converge.
3) Compared with other MOEAs, fast convergence is a typical characteristic of MOPSO. Since the flight direction in MOPSO is obtained from the gBest and pBest of the population, the whole particle swarm can easily gather or disperse, so the convergence rate of MOPSO is relatively faster than that of other MOEAs.
To summarize the work of the last two decades, we discuss the achievements and directions of development in this review article, which attempts to provide a comprehensive survey of MOPSO. The overall scheme of this paper is shown in Figure 1. Section II briefly describes the basic concepts and key parameters of MOPSO. Section III presents the improved MOPSO approaches organized by performance metric, together with theoretical analysis of MOPSO. Section IV then discusses potential future research challenges, and Section V concludes the paper.

II. Characteristics of Multiobjective Particle Swarm Optimization

Faced with complex MOPs in practical applications, traditional optimization methods suffer from high computational complexity and long runtimes, and cannot meet requirements on computing speed, convergence, diversity, and so on. To solve complex MOPs better, scientists have drawn lessons from nature and biology to design computational intelligence algorithms. As an important field of artificial intelligence, computational intelligence has made many breakthroughs in algorithm theory and performance because of its intelligence and parallelism. The MOPSO algorithm is a typical computational intelligence algorithm with strong optimization ability; it can solve multiobjective optimization problems for which accurate models are difficult to establish in many complex systems.

A. Basic Concept of MOPSO

MOPSO is a population-based optimization technique in which the population is referred to as a swarm. Each particle has a position represented by a vector:
x_i(t) = [x_{i,1}(t), x_{i,2}(t), \ldots, x_{i,D}(t)],
where D is the dimension of the search space and i = 1, 2, \ldots, S, with S the size of the swarm. Each particle also has a velocity, recorded as:
v_i(t) = [v_{i,1}(t), v_{i,2}(t), \ldots, v_{i,D}(t)].
In the evolutionary process, p_i(t) is the best previous position of particle i at the tth iteration, recorded as p_i(t) = [p_{i,1}(t), p_{i,2}(t), \ldots, p_{i,D}(t)], and gBest(t) is the best position found by the whole swarm, recorded as gBest(t) = [gBest_1(t), gBest_2(t), \ldots, gBest_D(t)]. In each iteration, the velocity is updated by:
v_{i,d}(t+1) = \omega v_{i,d}(t) + c_1 r_1 (p_{i,d}(t) - x_{i,d}(t)) + c_2 r_2 (gBest_d(t) - x_{i,d}(t)),
where i = 1, 2, \ldots, S; t denotes the tth iteration of the evolutionary process; d = 1, 2, \ldots, D denotes the dth dimension of the search space; \omega is the inertia weight, which controls the effect of the previous velocity on the current velocity; c_1 and c_2 are the acceleration constants; and r_1 and r_2 are random values uniformly distributed in [0, 1]. The new position is then updated as:
x_{i,d}(t+1) = x_{i,d}(t) + v_{i,d}(t+1).
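As an illustration, the velocity and position updates above can be sketched in Python. The parameter values (ω = 0.7, c1 = c2 = 1.5) and the function name are illustrative choices for this sketch, not values prescribed by the survey:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.Random(0)):
    """One velocity-and-position update for a single particle.

    x, v, pbest are length-D lists for this particle; gbest is the
    swarm-level best position. Implements the two update equations above.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = rng.random(), rng.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Note that when the velocity is zero and the position coincides with both pBest and gBest, the update leaves the particle stationary, which is the expected fixed point of the equations.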
The pseudocode of the basic MOPSO algorithm is presented in Table 1. MOPSO is a typical population-based MOEA with the following characteristics:
Remark 1: MOPSO is a population-based evolutionary algorithm inspired by the social behavior of flocking birds, and it has been steadily gaining attention from the research community because of its high convergence speed. The aggregate motion of all the particles forms the search movement of the MOPSO algorithm. Like other evolutionary algorithms, MOPSO suffers from a notable bias: it tends to perform best when the optimum is located at or near the center of the initialization region, which is often the origin.
In MOPSO, particles move through the search space using information exchanged between particles, and each particle is attracted toward the personal and global best solutions acting as its potential leaders. Particles can be connected in various neighborhood topologies, including the ring topology, the fully connected topology, the star topology, and the tree topology. For instance, in the fully connected topology all particles are connected to each other, so each particle receives the information of the best solution from the whole swarm at the same time; a swarm using the fully connected topology therefore tends to converge more rapidly than one using a local-best topology [43].
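The contrast between local-best and global-best information flow can be sketched minimally for the ring topology: each particle sees only itself and its two index neighbours. This is an illustrative sketch assuming a minimization problem; the function name and fitness representation are not from the survey:

```python
def ring_local_best(fitness, i):
    """Index of the best (lowest-fitness) particle among particle i and
    its two ring neighbours, wrapping around at the ends of the index
    range. Sketches how a ring topology restricts information flow."""
    n = len(fitness)
    neighbours = [(i - 1) % n, i, (i + 1) % n]
    return min(neighbours, key=lambda j: fitness[j])
```

Under this topology, good solutions propagate through the swarm one neighbourhood per iteration, which slows convergence but helps preserve diversity relative to the fully connected topology.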
Remark 2: In the search process of a MOPSO algorithm, considering convergence alone may lead to entrapment in local optima, while considering diversity alone leaves convergence speed and solution quality unresolved. Many optimization patterns influence the results, including leader selection, archive maintenance, flight parameter adjustment, population size, and perturbation; these aspects are therefore the key means of improving optimization performance. In particular, leader selection affects both the convergence capability and the distribution of the non-dominated solutions along the Pareto Front.

B. Key Parameters of MOPSO

The relationship between the key parameters is depicted in Figure 2.
(a) Average and maximum velocity
The MOPSO algorithm makes full use of a shared learning factor to modify the velocity updating formula, aiming to improve global search ability [41]. The optimal value of the maximum velocity is problem specific; without a suitable limit on the maximum velocity, a particle's trajectory may fail to converge. From the velocity updating formula, it can be seen that a particle's velocity is governed by the key parameters ω, c1, and c2, which determine the contribution of the particle's previous velocity to its velocity at the current time step [42]. It is therefore necessary to limit the maximum velocity: if the velocity is very large, particles may fly out of the search space and degrade the search quality of the algorithm, whereas if the velocity is very small, particles may become trapped in local optima. To prevent particles from moving too quickly, their velocities are bounded to specified values [51].
(b) Inertia weight and acceleration factors
From the flight equations, it is clear that the new position of each particle is affected by the inertia weight ω and the two acceleration coefficients c1 and c2. The cognitive coefficient c1 governs the attraction of the particle toward its own pBest, and the social coefficient c2 governs the attraction toward the gBest. The inertia weight ω helps the particles converge to the personal and global bests rather than oscillating around them, by controlling the influence of the previous velocity on the new velocity [42].
Too high a value of the cognitive acceleration coefficient weakens exploration, while too high a value of the social acceleration coefficient weakens exploitation [51]; suitable acceleration coefficients are therefore very important for the optimization process of a MOPSO algorithm. Most prior research indicates that the inertia weight ω, which controls the impact of the previous velocity on the current velocity, is employed to trade off the global and local exploration abilities of the particles [41]. The purpose of designing an adaptive inertia weight is precisely to balance these abilities: a larger ω facilitates global exploration, while a smaller ω facilitates local exploration around the current search area. Suitable selection of ω balances global and local exploration and thus requires fewer iterations on average to find the optimum. Accordingly, previous work has designed various inertia weight mechanisms that adjust ω dynamically during the optimization process.
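One common dynamic-inertia scheme of the kind described above is a linear decrease over the run, sketched here with the frequently used bounds 0.9 and 0.4 (illustrative defaults, not values mandated by this survey):

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: a large w early in the run
    favours global exploration, and a small w late in the run favours
    local exploitation around the current search area."""
    return w_max - (w_max - w_min) * t / t_max
```

At iteration 0 this returns w_max, at iteration t_max it returns w_min, and it interpolates linearly in between.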
(c) Global best (gBest) and personal best (pBest)
In MOPSO, each particle moves toward the most promising directions guided jointly by the gBest and the pBest, and the whole population follows the trajectory of the gBest [23]. The gBest and pBest thus guide the evolutionary direction of the whole population, and the updating formulas show that their values play an important role in the velocity and position updates. In the search process, selecting appropriate gBest and pBest solutions is a feasible way to control convergence and promote diversity. In recent years, a popular issue in gBest selection has been keeping the balance between convergence and diversity; some researchers have proposed adaptive mechanisms that select a gBest with good convergence and diversity characteristics according to the dynamic evolutionary environment.
(d) Population size
The population size of a MOPSO indirectly contributes to the effectiveness and efficiency of the algorithm [36]. One direct effect of population size in population-based evolutionary algorithms is on computational cost.
In the search process, an algorithm with an overly large population size enjoys a better chance of exploring the search space and discovering good solutions, but inevitably suffers an undesirably high computational cost. On the contrary, a MOPSO with an insufficient population size may fall into premature convergence or produce a solution archive with a loss of diversity.
The external archive stores a bounded number of non-dominated solutions, which determine the convergence and diversity performance of MOPSO. Although many external archive strategies have been proposed, only a few achieve a good balance between diversity and convergence, so research on archive strategies remains necessary for improving the overall performance of MOPSO. On the one hand, diversity is one of the most important characteristics of the external archive, reflecting how well the MOP being solved is covered; on the other hand, convergence is the criterion by which the archive approaches the true Pareto Front.
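The core of any archive strategy is the Pareto-dominance test used to admit or reject candidates. A minimal sketch, assuming minimization of all objectives and ignoring the size bound and pruning rules that a full strategy would add:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate into the external archive: reject it if any
    archived solution dominates it, otherwise add it and drop any
    archived solutions it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

A complete archive maintenance scheme would additionally enforce the capacity limit, e.g. by the crowding-distance or adaptive-grid pruning discussed in Section III.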
Remark 3: Convergence and diversity are the two principal criteria for evaluating the performance of MOPSO. Adjusting the key parameters changes the flight directions of the particles and thus the optimization performance, and performance metrics have been designed from different standpoints to evaluate it. Three typical criteria are considered in multiobjective optimization: 1) the number of non-dominated solutions; 2) the convergence of the non-dominated solutions to the Pareto Front; and 3) the diversity of the non-dominated solutions in the objective space. In particular, a set of optimal non-dominated solutions that approaches the true Pareto Front closely and is scattered evenly along it is generally desirable.

III. Different Performance Metrics of MOPSO

Among multiobjective optimization performance metrics, two major criteria, convergence and diversity, have typically been taken into consideration. Based on the convergence and diversity performance of MOPSO, the existing improved approaches can be categorized into three groups.
1) Diversity metrics: These cover two aspects: a) distribution, which measures how evenly the optimal non-dominated solutions are scattered, and b) spread, which indicates whether the optimal non-dominated solutions reach the extremes of the Pareto Front.
2) Convergence metrics: These measure the degree of approximation between the non-dominated solutions obtained by the proposed MOPSO and the true Pareto Front.
3) Convergence-diversity metrics: These measure both the convergence and the diversity of the optimal non-dominated solutions.
According to the above analysis, the major performance metrics of MOPSO are shown in Figure 2.
Figure 2. The major performance metrics of MOPSO.
Figure 3. The major relation of the key parameters, the performance metrics and the theoretical analysis of MOPSO.

A. Diversity Metrics

Diversity metrics demonstrate the distribution and spread of the solutions in the archive.

(a) Distribution in Diversity Metrics

The distribution quality of the non-dominated solutions in the archive is an important aspect to reflect the diversity performance of MOPSO algorithm.
The spacing (SP) metric: the distribution is computed from the non-dominated solutions in the archive, and is defined as:
SP(S) = \sqrt{ \frac{1}{|S| - 1} \sum_{i=1}^{|S|} \left( d_i - \bar{d} \right)^2 },
where d_i = \min_{s_j \in S,\, s_i \neq s_j} \| F(s_i) - F(s_j) \| is the minimum Euclidean distance between the solution s_i \in S and the other solutions s_j \in S, and \bar{d} is the average of all the d_i. A larger SP value represents a poorer distribution; conversely, a smaller SP indicates that the MOPSO algorithm has good distribution performance.
SP measures the spread of the non-dominated vectors found so far. Since the "beginning" and "end" of the currently found Pareto front are known, the metric judges how well the solutions along that front are distributed: a value of zero indicates that all members of the currently available Pareto front are equidistantly spaced.
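The SP computation above can be sketched directly (a naive O(|S|^2) implementation for illustration; the function name is ours):

```python
import math

def spacing(front):
    """Spacing (SP) metric over a list of objective vectors: the standard
    deviation of each solution's Euclidean distance to its nearest
    neighbour. Smaller is more even; 0 means equidistant neighbours."""
    def nearest(i):
        return min(math.sqrt(sum((a - b) ** 2 for a, b in zip(front[i], front[j])))
                   for j in range(len(front)) if j != i)
    dists = [nearest(i) for i in range(len(front))]
    mean = sum(dists) / len(dists)
    return math.sqrt(sum((x - mean) ** 2 for x in dists) / (len(dists) - 1))
```

For a perfectly equidistant front such as {(0, 2), (1, 1), (2, 0)}, every nearest-neighbour distance equals √2 and SP is zero.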
The M_2^*(S) metric in [56] is equipped with a niche radius \sigma and takes the form
M_2^*(S) = \frac{1}{|S| - 1} \sum_{s_1 \in S} \left| \{ s_2 \in S : \| s_1 - s_2 \| > \sigma \} \right|,
where, for each s_1 \in S, the metric counts how many solutions s_2 \in S lie outside its niche neighborhood of radius \sigma. The higher the value of M_2^*(S), the better the distribution of the non-dominated solutions obtained by the MOPSO algorithm.
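A direct sketch of this computation (counting, for each solution, the others lying beyond the niche radius, as in the formula above; the function name is ours):

```python
import math

def m2_star(front, sigma):
    """M2* distribution metric: for each solution, count the solutions
    farther away than the niche radius sigma, then normalise the total
    by |S| - 1. Higher values indicate a better spread."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    total = sum(sum(1 for s2 in front if dist(s1, s2) > sigma) for s1 in front)
    return total / (len(front) - 1)
```

For three collinear points spaced one unit apart with σ = 0.5, every solution has two neighbours beyond the radius, giving M2* = 6 / 2 = 3.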
a) Increasing the diversity of the non-dominated solutions in the archive
Raquel et al. presented an extended approach that incorporates crowding distance computation into a MOPSO algorithm for handling MOPs, using it both for global best selection and for the deletion method of the external archive. The results show that the proposed algorithm generates a set of uniformly distributed non-dominated solutions close to the Pareto Front [38].
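The crowding-distance computation that this family of methods relies on can be sketched as follows. This is a minimal, generic version of the standard crowding-distance calculation, not a reproduction of Raquel et al.'s full algorithm:

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in `front` (a list of
    equal-length tuples). Boundary solutions in each objective receive
    infinity; interior solutions accumulate the normalised gap between
    their two neighbours along each objective."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for idx in range(1, n - 1):
            i = order[idx]
            if dist[i] != float("inf"):
                dist[i] += (front[order[idx + 1]][k] - front[order[idx - 1]][k]) / span
    return dist
```

Solutions with large crowding distance sit in sparse regions and are preferred as leaders, while solutions with small crowding distance are the first candidates for deletion when the archive is full.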
Coello et al. proposed an external repository strategy to guide the flight direction of the particles, consisting of an archive controller and an adaptive grid [20]. The archive controller manages the storage of non-dominated solutions in the external archive, while the adaptive grid distributes them uniformly over the largest possible number of hypercubes. The strategy also incorporates a special mutation operator that improves the exploratory capability of the particles and enriches the diversity of the MOPSO algorithm. Moreover, Moubayed et al. developed a MOPSO incorporating dominance with decomposition (D2MOPSO), which employs a new archiving technique that attains better diversity and coverage in both the objective and solution spaces [25].
Agrawal et al. proposed a fuzzy clustering-based particle swarm optimization (FCPSO) algorithm to solve highly constrained, conflicting multiobjective problems. The results indicate that it generates a uniformly distributed Pareto front whose optimality exceeds that of the ε-constraint method [35].
A multiobjective particle swarm optimization proposed in [48] uses a fitness function derived from the maximin strategy to determine Pareto domination. The results show that the algorithm produces an almost perfect convergence and spread of solutions toward and along the Pareto Front.
b) Adjusting the inertia weight to improve global exploration ability
Daneshyari et al. introduced a cultural framework to design a flight parameter mechanism for updating the personalized flight parameters of mutated particles [28]. The results show that this mechanism performs efficiently in exploring solutions close to the true Pareto front. In addition, a parameter control mechanism was developed to adapt the parameters and improve the robustness of MOPSO.
c) 
Selecting the proper gBest and pBest with better diversity
Ali et al. introduced an attributed MOPSO algorithm, which updates the velocity in each dimension by selecting gBest solutions from the population [29]. The experiments indicate that the attributed MOPSO improves the search speed of the evolutionary process. In [30], a multiobjective particle swarm optimization with preference-based sorting (MOPSO-PS) was developed, in which the user's preference is incorporated into the evolutionary process to determine the relative merits of non-dominated solutions and thus to choose suitable gBest and pBest solutions. After each optimization step, the particle with the highest global evaluation value is chosen as the gBest.
Zheng et al. introduced a new MOPSO algorithm that maintains the diversity of the swarm and, by using a comprehensive learning strategy, improves the performance of the evolving particles significantly over some state-of-the-art MOPSO algorithms [24]. Torabi et al. introduced an efficient MOPSO with a new fuzzy multiobjective programming model to solve an unrelated parallel machine scheduling problem; it exploits a new selection regime for preserving global best solutions and obtains a set of non-dominated solutions with good diversity [55].
d) Dividing the particle population into multiple groups
Zhang et al. introduced a MOPSO enhanced by problem-specific local search techniques (MO-PSO-L) to seek high-quality non-dominated solutions. The local search technique is specifically designed to find further potential non-dominated solutions in vacant regions of the space. Computational experiments verify that MO-PSO-L can deal with complex MOPs [59].
An effective archive strategy can generate a group of uniformly distributed non-dominated solutions that accurately approach the Pareto Front, while also guiding the direction of particle flight. The external repository strategy also contains a special mutation operator that improves the searching ability of the particles. To balance the global exploration and local exploitation abilities of the particles, a time-varying flight parameter mechanism can update the flight parameters at each iteration and adjust the value of the inertia weight, strengthening the global search ability of the algorithm and yielding more optimal solutions. For multiply constrained, conflicting multiobjective problems, clustering can be used to divide the non-dominated solutions in the archive into multiple subgroups, enhancing the local search performance of each subgroup. The domination relationship is determined by calculating the fitness values of the objective functions, so that the non-dominated solutions in the archive can be distributed evenly.

(b) Spread in Diversity Metrics

The spread of diversity is another typical aspect to reflect the diversity performance of MOPSO algorithms.
The maximum spread (MS) metric: the spread quality of the non-dominated solutions in the archive can also represent the diversity, and MS is defined as:
MS = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{ \min( f_i^{\max}, F_i^{\max} ) - \max( f_i^{\min}, F_i^{\min} ) }{ F_i^{\max} - F_i^{\min} } \right]^2 },
where F_i^{\max} and F_i^{\min} are the maximum and minimum values of the ith objective on the true Pareto Front, and f_i^{\max} and f_i^{\min} are the maximum and minimum values of the ith objective in the solution archive. Most previous work shows that the larger the MS, the better the spread of diversity obtained by the evolutionary algorithm in the archive.
The maximum spread reveals how well the obtained optimal solutions cover the true Pareto front: the larger the MS value, the better the coverage, and the limiting value MS = 1 means the obtained solutions cover the true Pareto front completely.
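The MS formula above can be sketched directly in Python (the function name and argument layout are ours; `true_front` supplies the per-objective extremes F_i^{max} and F_i^{min}):

```python
import math

def maximum_spread(front, true_front):
    """Maximum spread (MS) of an obtained front against the true Pareto
    front, per the formula above. MS = 1 means the obtained front spans
    the full extent of the true front in every objective."""
    n = len(front[0])
    total = 0.0
    for i in range(n):
        f = [p[i] for p in front]        # ith objective, obtained front
        F = [p[i] for p in true_front]   # ith objective, true front
        num = min(max(f), max(F)) - max(min(f), min(F))
        total += (num / (max(F) - min(F))) ** 2
    return math.sqrt(total / n)
```

A front that reaches only the middle half of the true front's range in each objective, for example, yields MS = 0.5.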
a) Selecting the proper gBest and pBest with better diversity
Shim et al. proposed an estimation of distribution algorithm to model the global distribution of the population, balancing the convergence and diversity of MOPSO algorithms [27]. The results indicate that this method improves convergence speed.
Multimodal multiobjective problems are usually posed as several single-objective problems, sometimes including more than one local optimum or several global optima. To handle such problems effectively, Yue et al. proposed a multiobjective particle swarm optimizer using an index-based ring topology to maintain a set of non-dominated solutions with good distribution in both the decision and objective spaces [22]. The experimental results show that the proposed algorithm obtains a larger MS value and makes great progress on distribution in the decision space.
b) Increasing the diversity of the non-dominated solutions in the archive
Huang et al. proposed a multiobjective comprehensive learning particle swarm optimizer (MOCLPSO) that integrates an external archive technique to handle MOPs. Simulation results show that MOCLPSO is able to find a much better spread of solutions and converge faster to the true Pareto Front [39].

(c) Distribution and Spread in Diversity Metrics

The metric Δ, introduced in [6], reflects the distribution and spread of the non-dominated solutions in the archive simultaneously. It is derived as follows:
\Delta(S, P) = \frac{ d_f + d_l + \sum_{i=1}^{|S| - 1} \left| d_i - \bar{d} \right| }{ d_f + d_l + ( |S| - 1 ) \bar{d} },
where d_i is the Euclidean distance between consecutive solutions, \bar{d} is the average of all the d_i, d_f and d_l are the Euclidean distances between the extreme solutions of the true Pareto Front and the corresponding boundary solutions of the non-dominated set in the archive, and |S| is the capacity of the archive.
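The Δ computation can be sketched for the bi-objective case, assuming the front can be ordered along the first objective and that the two extreme solutions of the true Pareto Front are known (the function name and argument names are ours):

```python
import math

def delta_metric(front, extreme_lo, extreme_hi):
    """Deb's Delta diversity metric for a bi-objective front.

    front is sorted along the first objective; extreme_lo/extreme_hi are
    the extreme solutions of the true Pareto Front, so d_f and d_l are
    the gaps between them and the boundary solutions of the front."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    front = sorted(front)
    d = [dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_bar = sum(d) / len(d)
    d_f = dist(extreme_lo, front[0])
    d_l = dist(extreme_hi, front[-1])
    num = d_f + d_l + sum(abs(x - d_bar) for x in d)
    den = d_f + d_l + (len(front) - 1) * d_bar
    return num / den
```

An ideally distributed front that reaches both extremes gives Δ = 0, since d_f = d_l = 0 and every consecutive gap equals the mean gap.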
In order to increase diversity when dealing with MOPs, Tsai et al. proposed an improved multiobjective particle swarm optimizer based on proportional distribution and a jump improved operation (PDJI-MOPSO). PDJI-MOPSO maintains the diversity of newly found non-dominated solutions via proportional distribution and extends the exploitation of the archive with the jump improved operation to enhance the solution-searching ability of particles [47].
a) Increasing the diversity of the non-dominated solutions in the archive
Cheng et al. proposed a hybrid MOPSO with a local search strategy (LOPMOPSO), which consists of a quadratic approximation algorithm and the exterior penalty function method. A dynamic archive maintenance strategy is applied to improve the diversity of solutions, and the experimental results show that LOPMOPSO is highly competitive in convergence speed and generates a set of non-dominated solutions with good diversity [53]. Ali et al. proposed an attributed multiobjective comprehensive learning particle swarm optimizer (A-MOCLPSO), which optimizes the total security cost and the residual damage. The experimental results show that A-MOCLPSO provides diverse solutions for the problem and outperforms other comparable algorithms [54].
In order to deal effectively with multimodal and complex MOPs, a group of non-dominated solutions with good distribution can be obtained by changing the information-sharing mode between particles in the decision and objective spaces, and notable progress has been made on the distribution in the decision space. For multimodal problems and MOPs in noisy environments, it is necessary to consider extensions of the MOPSO algorithm and to analyze the MS metric.

B. Convergence Metrics

Convergence metrics measure the proximity of the non-dominated solutions to the true Pareto Front.
The generational distance (GD) metric: the essence of GD is to calculate the distance between the non-dominated solutions in the archive and the true Pareto Front, which is defined as:
$$\mathrm{GD}(S,P)=\frac{\left(\sum_{i=1}^{|S|}d_i^{\,q}\right)^{1/q}}{|S|},$$
where $P$ is a finite set of non-dominated solutions that approximates the true Pareto Front and $S$ is the archive of optimal non-dominated solutions obtained by the evolutionary algorithm. $|S|$ is the number of non-dominated solutions in the archive, and $d_i=\min_{p\in P}\lVert F(s_i)-F(p)\rVert$ with $s_i\in S$ and $q=2$; that is, $d_i$ is the minimum Euclidean distance between the solution $s_i\in S$ and the solutions in $P$. In essence, GD reflects the convergence performance of a MOPSO algorithm.
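A minimal sketch of the GD computation (our illustration; the function name and the (rows = solutions, columns = objectives) layout are assumptions):

```python
import numpy as np

def generational_distance(S, P, q=2):
    """GD: distance from the archive S to a sampled true front P.

    S: (|S|, m) objective vectors of the archive; P: (|P|, m) front samples.
    Lower is better; GD = 0 means every archive point lies on the sampled front.
    """
    # d_i = min_{p in P} ||F(s_i) - F(p)||, the nearest-front distance of each s_i.
    d = np.min(np.linalg.norm(S[:, None, :] - P[None, :, :], axis=2), axis=1)
    return (np.sum(d ** q) ** (1.0 / q)) / len(S)
```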
GD illustrates the convergence ability of the algorithm by measuring the closeness between the Pareto optimal front and the evolved Pareto front. Thus, a lower value of GD shows that the evolved Pareto front is closer to the Pareto optimal front. A value of GD = 0 indicates that all the generated elements are in the Pareto optimal set; any other value indicates how "far" the obtained front is from the global Pareto front of the problem. This metric addresses the first issue from the list previously provided.

(a) The Inertia Weight Adjustment Mechanisms Improved the Local Exploitation Ability

Tang et al. introduced a self-adaptive PSO (SAPSO) based on a parameter selection principle to guarantee convergence when handling MOPs. To gain a well-distributed Pareto front, an external repository was designed to keep the non-dominated solutions with good convergence. The statistical results of GD illustrate that SAPSO can obtain a set of non-dominated solutions close to the Pareto Front [58].

(b) Speeding up the Convergence by the External Archive

Zhu et al. introduced a novel external archive-guided MOPSO (AgMOPSO) algorithm, where the leaders for velocity and position updating are selected from the external archive [21]. In AgMOPSO, MOPs are transformed into a set of sub-problems and each particle is allocated to optimize one sub-problem. Meanwhile, an immune-based evolutionary strategy on the external archive improves convergence to the Pareto Front and accelerates the search. Different from existing algorithms, AgMOPSO is devoted to fully exploiting the useful information in the external archive to enhance convergence performance. In [23], a novel parallel cell coordinate system (PCCS) is proposed to accelerate the convergence of MOPSO by assessing the evolutionary environment. PCCS transforms the multiobjective functions into a two-dimensional space, which makes it possible to accurately grasp the distribution of the non-dominated solutions in a high-dimensional space. An additional experiment on density estimation in MOPSO illustrates that PCCS is superior to the adaptive grid and crowding distance in terms of convergence and diversity.
Wang et al. developed a multiobjective optimization algorithm with preference-order ranking of the non-dominated solutions in the archive [44]. The experimental results indicated that the proposed algorithm improves the exploratory ability of MOPSO and converges to the Pareto Front effectively.

(c) Selecting Proper gBest and pBest

Alvarez et al. developed a MOPSO algorithm relying exclusively on dominance for selecting guides from the solution archive, in order to find more of the feasible region and explore regions close to its boundaries. The results demonstrate that the proposed algorithm can shrink the velocity of the particles so that they fly toward the boundary of the true Pareto Front, yielding a good GD value [37].
Wang et al. developed a new ranking scheme based on an equilibrium strategy for MOPSO to select the gBest in the archive, and preference ordering is used to decrease the selection pressure, especially when the number of objectives is very large. The experimental results indicate that the proposed MOPSO algorithm produces better convergence performance [50].

(d) Adjusting the Population Size

Yen et al. proposed a dynamic multiple-swarm MOPSO (DSMOPSO) algorithm, in which the number of swarms varies dynamically during the search; it manages the communication within a swarm and among swarms and applies objective-space compression and expansion strategies to progressively exploit the objective space [34]. DSMOPSO occasionally exhibits slower search progression, which may incur a larger computational cost than other selected MOPSOs.
In order to solve application problems of increasing complexity and dimensionality, Goh et al. developed a MOPSO algorithm with a competitive and cooperative co-evolutionary approach, which divides the particle swarm into several sub-swarms [46]. Simulation results demonstrated that the proposed algorithm retains a fast convergence speed to the Pareto Front with a good GD value.

(e) Hybrid MOPSO Algorithms

In order to increase the convergence accuracy and speed of MOPSO, it can be combined with other intelligent algorithms. In [49], an efficient MOPSO algorithm based on the strength Pareto approach from evolutionary algorithms was developed. The experimental results show that the proposed algorithm converges to the Pareto Front with a shorter convergence time than SPEA2 and a competitive MOPSO algorithm. MOPSO can also be combined with other global optimization algorithms.
The evaluation of the GD metric is relatively simple: it mainly considers the distance between the non-dominated solutions in the archive and the Pareto front, but it cannot provide diversity information. Moreover, calculating GD requires the true Pareto front, which practical problems often lack, so its use can be limited. For MOPs with a known front, however, GD is well suited to problems that demand strict convergence.

C. Convergence-Diversity Metrics

The aim of optimizing MOPs is to obtain a set of uniformly distributed non-dominated solutions that is close to the true Pareto Front. In order to evaluate the optimal solutions in the archive, two performance metrics that reflect both convergence and diversity are applied to measure MOPSO algorithms.
The inverted generational distance (IGD) metric: IGD is used to measure the disparity between the non-dominated solutions obtained by the optimization algorithm and the true Pareto Front, and is defined as:
$$\mathrm{IGD}(P,S)=\frac{\left(\sum_{i=1}^{|P|}d_i^{\,q}\right)^{1/q}}{|P|},$$
where $P$ is a finite set of non-dominated solutions that approximates the true Pareto Front and $S$ is the archive of optimal non-dominated solutions obtained by the evolutionary algorithm. $|P|$ is the number of non-dominated solutions in the Pareto Front, and $d_i=\min_{s\in S}\lVert F(p_i)-F(s)\rVert$ with $p_i\in P$ and $q=2$; that is, $d_i$ is the minimum Euclidean distance between the solution $p_i\in P$ and the solutions in $S$. In particular, a smaller IGD value means that the non-dominated solutions in the archive are closer to the true Pareto Front.
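Mirroring the GD formula, IGD swaps the roles of the two sets; a minimal sketch (our illustration, with assumed names and array layout) is:

```python
import numpy as np

def inverted_generational_distance(P, S, q=2):
    """IGD: distance from each point of the sampled true front P to the archive S.

    P: (|P|, m) front samples; S: (|S|, m) archive objective vectors.
    Lower is better; IGD also penalises gaps in coverage of the front.
    """
    # d_i = min_{s in S} ||F(p_i) - F(s)|| for each front sample p_i.
    d = np.min(np.linalg.norm(P[:, None, :] - S[None, :, :], axis=2), axis=1)
    return (np.sum(d ** q) ** (1.0 / q)) / len(P)
```

Because every front sample contributes a term, an archive that clusters in one region leaves distant front samples with large $d_i$, which is how IGD captures diversity as well as convergence.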
IGD performs nearly the same calculation as GD. The difference is that GD measures the distance of each obtained solution to the Pareto Front, while IGD measures the distance of each Pareto Front point to the obtained solutions. This indicator takes both convergence and diversity into consideration; a lower value of IGD implies that the algorithm has better performance.
The hypervolume (HV) metric: HV is another popular convergence-diversity metric, which evaluates the volume covered by the non-dominated solutions in the archive with respect to a reference set.
$$\mathrm{HV}(S,R)=\mathrm{volume}\left(\bigcup_{i=1}^{|S|}v_i\right),$$
where $R$ is the reference set and, for each $s_i\in S$, a hypercube $v_i$ is formed with the reference point and the solution $s_i$ as diagonal corners. When the non-dominated solutions in the archive are closer to the Pareto Front and more uniformly distributed in the objective space, the HV value is larger.
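For two objectives, the union of hypercubes reduces to a sum of rectangular slices after sorting the front; a minimal sketch for minimization (our illustration, with an assumed single reference point argument) is:

```python
import numpy as np

def hypervolume_2d(S, ref):
    """Hypervolume of a 2-objective minimisation front w.r.t. a reference point.

    S:   (n, 2) array of mutually non-dominated points, each dominating `ref`.
    ref: reference point, worse than every point of S in both objectives.
    Larger HV means the front is closer to the Pareto front and better spread.
    """
    # Sort by the first objective; sweep left-to-right and sum rectangular slices.
    S = S[np.argsort(S[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in S:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For more than two objectives the union volume no longer decomposes this simply, which is why exact HV computation becomes expensive as the number of objectives grows.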
In order to assess the performance of different compared algorithms, two performance measures, i.e., IGD and HV, were adopted here. It is believed that these two indicators account not only for convergence but also for the distribution of the final solutions.
b) Adjusting the population size
Leong et al. presented a dynamic-population multiple-swarm MOPSO algorithm to improve the diversity within each swarm, which integrates a dynamic population strategy and adaptive local archives. The experimental results indicated that the proposed algorithm shows competitive results with improved diversity and convergence while demanding less computational cost [36].
In actual industrial problems, there are various many-objective problems (MaOPs), and optimization algorithms aim at searching for a set of uniformly distributed solutions closely approximating the Pareto Front. In [32], Carvalho et al. proposed a many-objective technique named control of dominance area of solutions (CDAS), which is used in three different many-objective particle swarm optimization algorithms. While most previous studies only deal with rank-based algorithms, CDAS is applied to MOPSO algorithms that are based on the cooperation of particles instead of a competitive method. Wang et al. proposed a hybrid evolutionary algorithm combining an MOEA and MOPSO to balance the exploitation and exploration of the particles, where the whole population is divided into several sub-populations to solve the scalar sub-problems from a MOP [52]. Comprehensive experiments on the IGD and HV metrics indicate that the proposed method performs better than other comparable MOEAs.
c) Increasing the diversity of the non-dominated solutions in the archive
In general, MOPSO algorithms scale poorly when the number of objectives exceeds three. To solve this problem, Britto et al. proposed a novel archiving MOPSO algorithm that explores more regions of the Pareto Front by applying a reference point to update the archive [33]. The empirical analysis verified the distribution of the solutions, and the experimental results showed that the solutions generated by this algorithm can be very close to the reference point.
d) The inertia weight adjustment mechanisms improved the global exploration ability
Sorkhabi et al. presented an efficient approach to constraint handling in MOPSO, in which the whole population is divided into two non-overlapping populations containing the infeasible and the feasible particles, and the leader is selected from the feasible population. The experimental results demonstrated that the proposed algorithm is highly competitive in solving MOPs. Meza et al. proposed a multiobjective vortex particle swarm optimization (MOVPSO) based on emulating the motion of particles. The qualitative results show that MOVPSO performs better than traditional MOPSO algorithms [56]. Zhang et al. proposed a competitive MOPSO, where the particles are updated on the basis of pairwise competitions performed in the current swarm at each generation. Experimental results demonstrate the promising performance of the proposed algorithm in terms of both optimization quality and convergence speed [57]. Lin et al. proposed a MOPSO with multiple search strategies (MMOPSO) to tackle complex MOPs, where a decomposition approach is exploited for transforming the MOPs and two search strategies are used to update the velocity and position of each particle [60].
The factors that affect the IGD and HV metrics in MOPSO are:
1) changing the storage form of the non-dominated solutions in the archive; 2) adjusting the flight parameters of the particles; 3) increasing the mutation of the particles; and 4) adjusting the population size adaptively. In order to improve the diversity and convergence of MOPSO, a MOPSO with a dynamic population size can be used to improve the diversity of each swarm, combining a dynamic population strategy with an adaptive local archive.

IV. Theoretical Analysis of MOPSO

A. Convergence Analysis of MOPSO

Theoretical and empirical analyses of the properties of evolutionary algorithms are very important to understand their search behaviors and to develop more efficient algorithms. Fang et al. proposed the quantum-behaved particle swarm optimization (QPSO) algorithm and discussed the convergence of QPSO within the framework of the global convergence theorem for random algorithms [90]. In [89], Tian et al. presented a convergence analysis based on the constriction coefficient, limits, differential equations, the Z-transform and matrix analysis. If the condition in Eq. (14) is met, the position of a single particle will tend to $(\varphi_1 p_i+\varphi_2 p_g)/(\varphi_1+\varphi_2)$:
$$1>\omega\geq 0,\qquad 2\omega+2>\varphi_1+\varphi_2.$$
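As a hedged numerical sketch (our own, not from [89]), the deterministic single-particle recurrence can be simulated to check that parameters satisfying this condition drive the position to the weighted attractor; the specific parameter values below are illustrative assumptions:

```python
# Deterministic single-particle PSO recurrence with fixed pBest/gBest,
# checking convergence to (phi1*p_i + phi2*p_g) / (phi1 + phi2).
omega, phi1, phi2 = 0.5, 1.2, 1.2   # satisfies 1 > omega >= 0 and 2*omega + 2 > phi1 + phi2
p_i, p_g = 1.0, 3.0                 # personal and global best, held fixed for the analysis
x, v = 0.0, 0.0                     # initial position and velocity
for _ in range(2000):
    v = omega * v + phi1 * (p_i - x) + phi2 * (p_g - x)
    x = x + v
attractor = (phi1 * p_i + phi2 * p_g) / (phi1 + phi2)
print(x, attractor)                 # the position settles on the attractor
```

With these values the attractor is 2.0, and the iterates spiral into it because the recurrence's characteristic roots lie inside the unit circle; violating the condition (e.g. $\varphi_1+\varphi_2 > 2\omega+2$) makes the trajectory diverge or oscillate.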
The convergence analysis of MOPSO mainly relies on probability theory and the Lyapunov stability theorem.
In [91], Sun et al. investigated in detail the convergence of the MOPSO algorithm on a probabilistic metric space and proved that the MOPSO algorithm is a form of contraction mapping that converges to the global optimum. This was the first time that the theory of probabilistic metric spaces was employed to analyze a stochastic optimization algorithm.
Kadirkamanathan et al. [92] then proved a more generalized stability analysis of the particle dynamics using the Lyapunov stability theorem, and Van et al. proved that particles can converge to a stable point [93]. In [94], the swarm state sequence is defined and its Markov properties are examined according to the theory of MOPSO; two closed sets, the optimal particle state set and the optimal swarm state set, are then obtained. In previous work, several MOPSO variants have been proposed to handle MOPs based on the concept of Pareto optimality, yet only a fairly small number of scholars have analyzed and proved the convergence of their improved MOPSO algorithms. In [31], Chakraborty et al. presented a first, simple analysis of the general Pareto-based MOPSO and found conditions on its most important control parameters (the inertia factor and acceleration coefficients) that govern the convergence of the algorithm to the optimal Pareto front in the objective function space.
In [45], Li et al. presented a novel MOPSO algorithm based on a global margin ranking (GMR) strategy, which uses the position information of individuals in the objective space to obtain the margin of dominance throughout the population. To ensure the convergence of the proposed algorithm, a convergence analysis and a ranking efficiency analysis are given to verify its effectiveness.
MOPSO has been widely accepted as a powerful global optimization algorithm, but there is still great room for research on the algorithm itself [95]. So far, rigorous mathematical treatments of convergence, convergence speed, parameter selection and robustness of MOPSO remain incomplete. Hence, studying and analyzing MOPSO through the ideas of limits, probability, evolution and topology, in order to reveal the mechanism of how MOPSO works, is a highly desirable subject that deserves much attention from MOPSO researchers.

B. Timing Complexity of MOPSO

The time complexity of MOPSO is a significant issue when applying it to different fields. The commonly used methods for computing time complexity can be summed up as the summation and recursion methods. In [25], Al Moubayed et al. analyzed the time complexity of the proposed D2MOPSO in terms of the global set size N and the population size K. Since K is equal to N, the analysis concludes that D2MOPSO has a computational complexity similar to that of the other comparable algorithms. D2MOPSO uses an archive (of size L ≤ N) that is updated at each iteration. To select the global leader for each particle, all solutions in the archive are checked for the best aggregation value; the complexity is then O(2LN) ∼ O(N). When an external archive (of size K > N) is used, the complexity becomes O(KN + 2LN) ∼ O(KN).
In [20], Coello et al. showed that the computational cost of the adaptive grid is lower than that of niching [i.e., O(N²)]. In [38], Raquel et al. investigated the time complexity of MOPSO with the original crowding distance: if the number of objectives is M and the population size is N, the overall complexity of MOPSO-CD is O(MN²).
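To make the cost of the crowding-distance step concrete, a minimal sketch (our illustration, not the exact MOPSO-CD implementation) is shown below; each objective requires one O(N log N) sort, while the overall O(MN²) bound quoted above also covers the repeated dominance checks and archive maintenance:

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each solution in an (N, M) objective matrix F.

    Boundary solutions per objective get infinite distance (always retained);
    interior solutions accumulate the normalised gap between their neighbours.
    """
    N, M = F.shape
    dist = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])                  # O(N log N) per objective
        dist[order[0]] = dist[order[-1]] = np.inf    # boundary solutions are kept
        span = F[order[-1], m] - F[order[0], m]
        if span == 0:
            continue                                 # degenerate objective: no interior spread
        # Interior solutions: normalised gap between their two sorted neighbours.
        dist[order[1:-1]] += (F[order[2:], m] - F[order[:-2], m]) / span
    return dist
```

Solutions with larger crowding distance lie in sparser regions and are preferred when pruning the archive, which is how the metric promotes diversity.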
In addition, the convergence time of particle swarm optimization has been analyzed from the perspective of particle interaction [96], where the theoretical analysis is conducted on the social-only model of MOPSO instead of the common models used in practice. The theoretical results reveal the relationship between the convergence time and the level of convergence, as well as between the convergence time and the swarm size.
Overall, the main relationships among the key parameters, the performance metrics and the theoretical analysis of MOPSO are shown in Figure 4.

V. Potential Future Research Directions Of MOPSO

Although there has been much research on theoretical analysis, additional issues often need to be considered in practical applications. On the basis of the operational principle of MOPSO, several potential future research directions in this area are listed as follows.

A. The Trade-off Between Rapidity and Diversity

Rapidity is a central pursuit of algorithms for solving MOPs. In practical applications, it is usually necessary to consider both rapidity and diversity at the same time. Many researchers have proposed approaches to decrease the time complexity and improve the diversity of MOPSO, but it is hard to reach both desirable objectives simultaneously. In [59], a local-search-enhanced MOPSO is used for scheduling textile production processes, in which time complexity and diversity are considered comprehensively. As more and more practical cases of MOPs arise, rapidity and diversity need to be studied together in future work.

B. Dynamic Multiobjective Optimization Problems

One of the major distinguishing features of dynamic multiobjective optimization problems (DMOPs) is that the objectives are time-varying. At present, existing MOPSO algorithms cannot obtain satisfactory optimization results when handling DMOPs, so it is urgent to develop MOPSO algorithms that can solve dynamic multiobjective problems. In [68], Jiang et al. presented a transfer learning mechanism and incorporated it into the development of a MOPSO algorithm to solve complex DMOPs; the experimental results confirm the effectiveness of the proposed design. Although many MOPSO algorithms have been presented, few are tailored to the time-varying characteristics of the optimization process.
Most scholars validate MOPSO algorithms on static multiobjective optimization problems, and only a few have studied the application of MOPSO to complex dynamic processes.

C. The Many-objective Large-Scale Optimization

Traditional research on MOPSO focuses on MOPs with small numbers of variables and fewer than four objectives. However, with the complexity of the big data era, more and more multiobjective optimization problems exceed three objectives. For the many-objective large-scale optimization problems (MaOLSOPs) discussed by Cao et al., it is necessary to thoroughly explore the parallel attributes of the particle swarm and design novel PSO algorithms according to the characteristics of distributed parallel computation [69]. When the number of objectives is large and the number of variables is huge, the optimization process becomes extremely time-consuming [70,71,72,73,74]. Therefore, research on efficient algorithms for large-scale many-objective problems is very necessary.

D. More Theoretical Guarantee

Although some scholars have carried out theoretical research on MOPSO, and the algorithm has been shown to be effective for practical multiobjective optimization problems, a strict mathematical proof of its convergence has not been given. In [31], Chakraborty et al. presented the stability and convergence properties of MOPSO, including an analysis of the general Pareto-based MOPSO and conditions on its most important control parameters (the inertia factor and acceleration coefficients) [75]. However, these methods require too many hypothetical prerequisites. Theoretical proofs for MOPSO are therefore still in short supply, and further research needs to be strengthened [76,77,78,79].

E. Stagnation of particles in the last stage

Although research on and with MOPSO has advanced considerably during the last ten years, there are still many open problems, and new application areas continually emerge [80,81,82,83]. One important problem is that particles tend to stagnate in the last stage of the search; in view of the related problem of parameter adjustment, how to judge the current evolutionary environment of the population through a comprehensive evaluation index and adjust the parameters of the MOPSO algorithm in a unified way remains open [84].

F. Self-organization MOPSO

In the process of optimizing with MOPSO, how to organize the search of the whole particle swarm and how to find the solution set closest to the real Pareto front are two crucial problems [85,86,87]. In particular, the population structure is the foundation of a swarm, and different structures may drive the swarm to behave differently. Through the analysis of particle behavior in the search process, dynamic population size strategies can be used. In [88], an adaptive MOPSO based on clustering is proposed, which considers the population topology and individual behavior control together to balance local and global search during optimization; it dynamically separates the swarm into subpopulation clusters during the search and uses a ring neighborhood topology to share information among these clusters. Although many approaches have been proposed, an important open question is whether there are other effective methods to evaluate the effectiveness of individual particles. Therefore, self-organizing MOPSO needs to be studied in future work.

VI. Conclusion

In conclusion, the MOPSO algorithm has emerged as a potent tool for handling complex multi-objective optimization problems within a competitive-cooperative framework. This review paper has provided a comprehensive survey of MOPSO, encompassing its basic principles, key parameters, advanced methods, theoretical analyses, and performance metrics. The analysis of parameters influencing convergence and diversity performance has offered insights into the searching behavior of particles in MOPSO. The discussion on advanced MOPSO methods has highlighted various strategies to enhance the algorithm's efficiency, such as selecting proper gBest and pBest solutions, employing hybrid approaches with other intelligent algorithms, and adjusting population sizes dynamically. The theoretical analysis section has delved into convergence and timing complexity of MOPSO, shedding light on its mathematical foundations and practical implications.
Despite the significant progress in MOPSO research over the last two decades, several potential future research directions have been identified. These include exploring the application of MOPSO in complex dynamic processes, addressing the many-objective large-scale optimization challenges, providing more theoretical guarantees, and overcoming stagnation issues in particle evolution. In particular, there is a need for unified parameter adjustment methods, self-organization capabilities, and real-world applications of MOPSO to further solidify its position as a leading algorithm in multi-objective optimization.
Overall, this review paper has aimed to provide a comprehensive understanding of MOPSO's developments, achievements, and future directions, fostering further research and advancements in this promising field.

References

  1. Li, H.; Landa-Silva, D. An Adaptive Evolutionary Multi-Objective Approach Based on Simulated Annealing. Evol. Comput. 2011, 19, 561–595. [Google Scholar] [CrossRef]
  2. Brockhoff, D.; Zitzler, E. Objective Reduction in Evolutionary Multiobjective Optimization: Theory and Applications. Evol. Comput. 2009, 17, 135–166. [Google Scholar] [CrossRef]
  3. Hu, W.; Tan, Y. Prototype Generation Using Multiobjective Particle Swarm Optimization for Nearest Neighbor Classification. IEEE Trans. Cybern. 2015, 46, 2719–2731. [Google Scholar] [CrossRef]
  4. Zhang, Y.; Gong, D.-W.; Cheng, J. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification. IEEE/ACM Trans. Comput. Biol. Bioinform. 2015, 14, 64–75. [Google Scholar] [CrossRef]
  5. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. An Efficient Approach to Nondominated Sorting for Evolutionary Multiobjective Optimization. IEEE Trans. Evol. Comput. 2014, 19, 201–213. [Google Scholar] [CrossRef]
  6. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  7. Mathijssen, G.; Lefeber, D.; Vanderborght, B. Variable Recruitment of Parallel Elastic Elements: Series–Parallel Elastic Actuators (SPEA) With Dephased Mutilated Gears. IEEE/ASME Trans. Mechatronics 2014, 20, 594–602. [Google Scholar] [CrossRef]
  8. Helwig, S.; Branke, J.; Mostaghim, S. Experimental Analysis of Bound Handling Techniques in Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2012, 17, 259–271. [Google Scholar] [CrossRef]
  9. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; Computer Engineering and Networks Laboratory (TIK): Zurich, Switzerland, 2001.
  10. Ali, H.; Khan, F.A. Attributed multi-objective comprehensive learning particle swarm optimization for optimal security of networks. Appl. Soft Comput. 2013, 13, 3903–3921. [Google Scholar]
  11. Mathijssen, G.; Lefeber, D.; Vanderborght, B. Variable Recruitment of Parallel Elastic Elements: Series–Parallel Elastic Actuators (SPEA) With Dephased Mutilated Gears. IEEE/ASME Trans. Mechatronics 2014, 20, 594–602. [Google Scholar] [CrossRef]
  12. Helwig, S.; Branke, J.; Mostaghim, S. Experimental Analysis of Bound Handling Techniques in Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2012, 17, 259–271. [Google Scholar] [CrossRef]
  13. Pehlivanoglu, Y.V. A New Particle Swarm Optimization Method Enhanced With a Periodic Mutation Strategy and Neural Networks. IEEE Trans. Evol. Comput. 2012, 17, 436–452. [Google Scholar] [CrossRef]
  14. He, X.; Zhou, Y.; Chen, Z. An Evolution Path-Based Reproduction Operator for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2017, 23, 29–43. [Google Scholar] [CrossRef]
  15. Han, H.; Lu, W.; Zhang, L.; Qiao, J. Adaptive Gradient Multiobjective Particle Swarm Optimization. IEEE Trans. Cybern. 2017, 48, 3067–3079. [Google Scholar] [CrossRef]
  16. Feng, L.; Mao, Z.; Yuan, P.; Zhang, B. Multi-objective particle swarm optimization with preference information and its application in electric arc furnace steelmaking process. Struct. Multidiscip. Optim. 2015, 52, 1013–1022. [Google Scholar] [CrossRef]
  17. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm Intelligence: From Natural to Artificial Systems; Oxford University Press: New York, NY, USA, 1999.
  18. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition. IEEE Trans. Evol. Comput. 2014, 19, 694–716. [Google Scholar] [CrossRef]
  19. Mukhopadhyay, A.; Maulik, U.; Bandyopadhyay, S.; Coello, C.A.C. A survey of multiobjective evolutionary algorithms for data mining: Part I. IEEE Trans. Evol. Comput. 2014, 18, 4–19.
  20. Coello, C.A.C.; Toscano-Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279. [Google Scholar] [CrossRef]
  21. Zhu, Q.; Lin, Q.; Chen, W.; Wong, K.-C.; Coello, C.A.C.; Li, J.; Chen, J.; Zhang, J. An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm. IEEE Trans. Cybern. 2017, 47, 2794–2808. [Google Scholar] [CrossRef]
  22. Yue, C.; Qu, B.; Liang, J. A Multi-objective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multi-objective Problems. IEEE Trans. Evol. Comput. 2017.
  23. Hu, W.; Yen, G.G. Adaptive Multiobjective Particle Swarm Optimization Based on Parallel Cell Coordinate System. IEEE Trans. Evol. Comput. 2013, 19, 1–18. [Google Scholar] [CrossRef]
  24. Zheng, Y.-J.; Ling, H.-F.; Xue, J.-Y.; Chen, S.-Y. Population Classification in Fire Evacuation: A Multiobjective Particle Swarm Optimization Approach. IEEE Trans. Evol. Comput. 2013, 18, 70–81. [Google Scholar] [CrossRef]
  25. Al Moubayed, N.; Petrovski, A.; McCall, J. D2MOPSO: MOPSO Based on Decomposition and Dominance with Archiving Using Crowding Distance in Objective and Solution Spaces. Evol. Comput. 2014, 22, 47–77. [Google Scholar] [CrossRef]
  26. Tripathi, P.K.; Bandyopadhyay, S.; Pal, S.K. Multi-Objective Particle Swarm Optimization with time variant inertia and acceleration coefficients. Inf. Sci. 2007, 177, 5033–5049. [Google Scholar] [CrossRef]
  27. Shim, V.A.; Tan, K.C.; Chia, J.Y.; Al Mamun, A. Multi-Objective Optimization with Estimation of Distribution Algorithm in a Noisy Environment. Evol. Comput. 2013, 21, 149–177. [Google Scholar] [CrossRef]
  28. Daneshyari, M.; Yen, G.G. Cultural-Based Multiobjective Particle Swarm Optimization. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2010, 41, 553–567. [Google Scholar] [CrossRef]
  29. Ali, F. A. Khan, “Attributed multi-objective comprehensive learning particle swarm optimization for optimal security of networks,” Applied Soft Computing, vol. 13, no. 9, pp. 3903–3921, May 2013.
  30. Lee, K.-B.; Kim, J.-H. Multiobjective Particle Swarm Optimization With Preference-Based Sort and Its Application to Path Following Footstep Optimization for Humanoid Robots. IEEE Trans. Evol. Comput. 2013, 17, 755–766. [Google Scholar] [CrossRef]
  31. Chakraborty, P.; Das, S.; Roy, G.G.; Abraham, A. On convergence of the multi-objective particle swarm optimizers. Inf. Sci. 2010, 181, 1411–1425. [Google Scholar] [CrossRef]
  32. De Carvalho, A.B.; Pozo, A. Measuring the convergence and diversity of CDAS multi-objective particle swarm optimization algorithms: a study of many-objective problems. Neurocomputing 2012, 75, 43–51. [Google Scholar] [CrossRef]
  33. Britto, A.; Pozo, A. Using reference points to update the archive of MOPSO algorithms in Many-Objective Optimization. Neurocomputing 2014, 127, 78–87. [Google Scholar] [CrossRef]
  34. Yen, G.G.; Leong, W.F. Dynamic Multiple Swarms in Multiobjective Particle Swarm Optimization. IEEE Trans. Syst. Man, Cybern. - Part A: Syst. Humans 2009, 39, 890–911. [Google Scholar] [CrossRef]
  35. Agrawal, S.; Panigrahi, B.K.; Tiwari, M.K. Multiobjective Particle Swarm Algorithm With Fuzzy Clustering for Electrical Power Dispatch. IEEE Trans. Evol. Comput. 2008, 12, 529–541. [Google Scholar] [CrossRef]
  36. Leong, W.-F.; Yen, G.G. PSO-Based Multiobjective Optimization With Dynamic Population Size and Adaptive Local Archives. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2008, 38, 1270–1293. [Google Scholar] [CrossRef]
  37. J. E. Alvarez-Ben´ıtez, R. M. Everson, and J. E. Fieldsend, “A MOPSO algorithm based exclusively on Pareto dominance concepts,” in Proc. Evol. Multi-Criterion Optimiz., 2005, pp. 459–473.
  38. C. R. Raquel and P. C. Nava, “An effective use of crowding distance in multiobjective particle swarm optimization,” in Proc. Genetic Evol. Comput., 2005, pp. 257–264.
  39. Huang, V.; Suganthan, P.; Liang, J. Comprehensive learning particle swarm optimizer for solving multiobjective optimization problems. Int. J. Intell. Syst. 2005, 21, 209–226. [Google Scholar] [CrossRef]
  40. Salazar-Lechuga, M.; Rowe, J. Particle swarm optimization and fitness sharing to solve multi-objective optimization problems. in Proc. IEEE Evol. Comput., Sep. 2005, pp. 1204–1211.
  41. Peng, G.; Fang, Y.W.; Peng, W.S.; et al. Multi-objective particle optimization algorithm based on sharing–learning and dynamic crowding distance. Optik-International Journal for Light and Electron Optics 2016, 127, 5013–5020. [Google Scholar] [CrossRef]
  42. Ayachitra, A.; Vinodha, R. Comparative study and implementation of multi-objective PSO algorithm using different inertia weight techniques for optimal control of a CSTR process. ARPN Journal of Engineering and Applied Sciences 2015, 10, 10395–10404. [Google Scholar]
  43. Coello, C.A.C.; Reyes-Sierra, M. Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art. Int. J. Comput. Intell. Res. 2006, 2. [Google Scholar] [CrossRef]
  44. Wang, Y.; Yang, Y. Particle swarm optimization with preference order ranking for multi-objective optimization. Inf. Sci. 2009, 179, 1944–1959. [Google Scholar] [CrossRef]
  45. Li, L.; Wang, W.; Xu, X. Multi-objective particle swarm optimization based on global margin ranking. Inf. Sci. 2017, 375, 30–47. [Google Scholar] [CrossRef]
  46. Goh, C.; Tan, K.; Liu, D.; Chiam, S. A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design. Eur. J. Oper. Res. 2010, 202, 42–54. [Google Scholar] [CrossRef]
  47. Tsai, S.-J.; Sun, T.-Y.; Liu, C.-C.; Hsieh, S.-T.; Wu, W.-C.; Chiu, S.-Y. An improved multi-objective particle swarm optimizer for multi-objective problems. Expert Syst. Appl. 2010, 37, 5872–5886. [Google Scholar] [CrossRef]
  48. Andervazh, M.; Olamaei, J.; Haghifam, M. Adaptive multi-objective distribution network reconfiguration using multi-objective discrete particles swarm optimisation algorithm and graph theory. IET Gener. Transm. Distrib. 2013, 7, 1367–1382. [Google Scholar] [CrossRef]
  49. Cheng, S.; Zhao, L.-L.; Jiang, X.-Y. An Effective Application of Bacteria Quorum Sensing and Circular Elimination in MOPSO. IEEE/ACM Trans. Comput. Biol. Bioinform. 2015, 14, 56–63. [Google Scholar] [CrossRef]
  50. Wang, Y.; Yang, Y. Particle swarm with equilibrium strategy of selection for multi-objective optimization. Eur. J. Oper. Res. 2010, 200, 187–197. [Google Scholar] [CrossRef]
  51. Jordehi, A.R. Particle swarm optimisation (PSO) for allocation of FACTS devices in electric transmission systems: A review. Renew. Sustain. Energy Rev. 2015, 52, 1260–1267. [Google Scholar] [CrossRef]
  52. Wang, H.; Fu, Y.; Huang, M.; Huang, G.; Wang, J. A hybrid evolutionary algorithm with adaptive multi-population strategy for multi-objective optimization problems. Soft Comput. 2016, 21, 5975–5987. [Google Scholar] [CrossRef]
  53. Cheng, S.; Zhan, H.; Shu, Z. An innovative hybrid multi-objective particle swarm optimization with or without constraints handling. Appl. Soft Comput. 2016, 47, 370–388. [Google Scholar] [CrossRef]
  54. Ali, H.; Khan, F.A. Attributed multi-objective comprehensive learning particle swarm optimization for optimal security of networks. Appl. Soft Comput. 2013, 13, 3903–3921. [Google Scholar] [CrossRef]
  55. Torabi, S.; Sahebjamnia, N.; Mansouri, S.; Bajestani, M.A. A particle swarm optimization for a fuzzy multi-objective unrelated parallel machines scheduling problem. Appl. Soft Comput. 2013, 13, 4750–4762. [Google Scholar] [CrossRef]
  56. Meza, J.; Espitia, H.; Montenegro, C.; Giménez, E.; González-Crespo, R. MOVPSO: Vortex Multi-Objective Particle Swarm Optimization. Appl. Soft Comput. 2017, 52, 1042–1057. [Google Scholar] [CrossRef]
  57. Zhang, X.; Zheng, X.; Cheng, R.; Qiu, J.; Jin, Y. A competitive mechanism based multi-objective particle swarm optimizer with fast convergence. Inf. Sci. 2018, 427, 63–76. [Google Scholar] [CrossRef]
  58. Tang, B.; Zhu, Z.; Shin, H.-S.; Tsourdos, A.; Luo, J. A framework for multi-objective optimisation based on a new self-adaptive particle swarm optimisation algorithm. Inf. Sci. 2017, 420, 364–385. [Google Scholar] [CrossRef]
  59. Zhang, R.; Chang, P.-C.; Song, S.; Wu, C. Local search enhanced multi-objective PSO algorithm for scheduling textile production processes with environmental considerations. Appl. Soft Comput. 2017, 61, 447–467. [Google Scholar] [CrossRef]
  60. Lin, Q.; Li, J.; Du, Z.; Chen, J.; Ming, Z. A novel multi-objective particle swarm optimization with multiple search strategies. Eur. J. Oper. Res. 2015, 247, 732–744. [Google Scholar] [CrossRef]
  61. Ganguly, S.; Sahoo, N.C.; Das, D. Multi-objective particle swarm optimization based on fuzzy-Pareto-dominance for possibilistic planning of electrical distribution systems incorporating distributed generation. Fuzzy Sets and Systems 2013, 213, 47–73. [Google Scholar] [CrossRef]
  62. Chang, W.-D.; Chen, C.-Y. PID Controller Design for MIMO Processes Using Improved Particle Swarm Optimization. Circuits, Syst. Signal Process. 2013, 33, 1473–1490. [Google Scholar] [CrossRef]
  63. Mahmoodabadi, M.; Taherkhorsandi, M.; Bagheri, A. Optimal robust sliding mode tracking control of a biped robot based on ingenious multi-objective PSO. Neurocomputing 2013, 124, 194–209. [Google Scholar] [CrossRef]
  64. Chen, G.; Liu, L.; Song, P.; Du, Y. Chaotic improved PSO-based multi-objective optimization for minimization of power losses and L index in power systems. Energy Convers. Manag. 2014, 86, 548–560. [Google Scholar] [CrossRef]
  65. Liu, J.; Luo, X.G.; Zhang, X.M.; Zhang, F. Job Scheduling Algorithm for Cloud Computing Based on Particle Swarm Optimization. Adv. Mater. Res. 2013, 662, 957–960. [Google Scholar] [CrossRef]
  66. Chou, C.-J.; Lee, C.-Y.; Chen, C.-C. Survey of reservoir grounding system defects considering the performance of lightning protection and improved design based on soil drilling data and the particle swarm optimization technique. IEEJ Trans. Electr. Electron. Eng. 2014, 9, 605–613. [Google Scholar] [CrossRef]
  67. Xu, Y.; You, T. Minimizing thermal residual stresses in ceramic matrix composites by using Iterative MapReduce guided particle swarm optimization algorithm. Compos. Struct. 2013, 99, 388–396. [Google Scholar] [CrossRef]
  68. Jiang, M.; Huang, Z.; Qiu, L.; Huang, W.; Yen, G.G. Transfer Learning-Based Dynamic Multiobjective Optimization Algorithms. IEEE Trans. Evol. Comput. 2018, 22, 501–514. [Google Scholar] [CrossRef]
  69. Cao, B.; Zhao, J.; Lv, Z.; Liu, X.; Yang, S.; Kang, X.; Kang, K. Distributed Parallel Particle Swarm Optimization for Multi-Objective and Many-Objective Large-Scale Optimization. IEEE Access 2017, 5, 8214–8221. [Google Scholar] [CrossRef]
  70. Yang, Y.; Zhang, T.; Yi, W.; Kong, L.; Li, X.; Wang, B.; Yang, X. Deployment of multistatic radar system using multi-objective particle swarm optimisation. IET Radar, Sonar Navig. 2018, 12, 485–493. [Google Scholar] [CrossRef]
  71. Fernandez-Rodriguez, A.; Fernandez-Cardador, A.; Cucala, A.P.; Dominguez, M.; Gonsalves, T. Design of Robust and Energy-Efficient ATO Speed Profiles of Metropolitan Lines Considering Train Load Variations and Delays. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2061–2071. [Google Scholar] [CrossRef]
  72. Wen, S.; Lan, H.; Fu, Q.; Yu, D.C.; Zhang, L. Economic Allocation for Energy Storage System Considering Wind Power Distribution. IEEE Trans. Power Syst. 2014, 30, 644–652. [Google Scholar] [CrossRef]
  73. Shahsavari, A.; Mazhari, S.M.; Fereidunian, A.; Lesani, H. Fault Indicator Deployment in Distribution Systems Considering Available Control and Protection Devices: A Multi-Objective Formulation Approach. IEEE Trans. Power Syst. 2014, 29, 2359–2369. [Google Scholar] [CrossRef]
  74. Srivastava, L.; Singh, H. Hybrid multi-swarm particle swarm optimisation based multi-objective reactive power dispatch. IET Gener. Transm. Distrib. 2015, 9, 727–739. [Google Scholar] [CrossRef]
  75. Niknam, T.; Narimani, M.R.; Aghaei, J. Improved particle swarm optimisation for multi-objective optimal power flow considering the cost, loss, emission and voltage stability index. Iet Generation Transmission & Distribution 2012, 6, 515–527. [Google Scholar]
  76. Chamaani, S.; Mirtaheri, S.A.; Abrishamian, M.S. Improvement of Time and Frequency Domain Performance of Antipodal Vivaldi Antenna Using Multi-Objective Particle Swarm Optimization. IEEE Trans. Antennas Propag. 2011, 59, 1738–1742. [Google Scholar] [CrossRef]
  77. Karimi, E.; Ebrahimi, A. Inclusion of Blackouts Risk in Probabilistic Transmission Expansion Planning by a Multi-Objective Framework. IEEE Trans. Power Syst. 2014, 30, 2810–2817. [Google Scholar] [CrossRef]
  78. Ho, S.L.; Yang, J.; Yang, S.; Bai, Y. Integration of Directed Searches in Particle Swarm Optimization for Multi-Objective Optimization. IEEE Trans. Magn. 2015, 51, 1–4. [Google Scholar] [CrossRef]
  79. Pham, M.-T.; Zhang, D.; Koh, C.S. Multi-Guider and Cross-Searching Approach in Multi-Objective Particle Swarm Optimization for Electromagnetic Problems. IEEE Trans. Magn. 2012, 48, 539–542. [Google Scholar] [CrossRef]
  80. X. Ye, H. Chen, H. Liang, “Multi-Objective Optimization Design for Electromagnetic Devices With Permanent Magnet Based on Approximation Model and Distributed Cooperative Particle Swarm Optimization Algorithm,” IEEE Transactions on Magnetics, PP(99):1-5.
  81. Ganguly, S. Multi-Objective Planning for Reactive Power Compensation of Radial Distribution Networks With Unified Power Quality Conditioner Allocation Using Particle Swarm Optimization. IEEE Trans. Power Syst. 2014, 29, 1801–1810. [Google Scholar] [CrossRef]
  82. Shukla, A.; Singh, S.N. Multi-objective unit commitment using search space-based crazy particle swarm optimisation and normal boundary intersection technique. IET Gener. Transm. Distrib. 2016, 10, 1222–1231. [Google Scholar] [CrossRef]
  83. Goudos, S.K.; Zaharis, Z.D.; Kampitaki, D.G.; Rekanos, I.T.; Hilas, C.S. Pareto Optimal Design of Dual-Band Base Station Antenna Arrays Using Multi-Objective Particle Swarm Optimization With Fitness Sharing. IEEE Trans. Magn. 2009, 45, 1522–1525. [Google Scholar] [CrossRef]
  84. Xue, B.; Zhang, M.; Browne, W.N. Particle Swarm Optimization for Feature Selection in Classification: A Multi-Objective Approach. IEEE Trans. Cybern. 2012, 43, 1656–1671. [Google Scholar] [CrossRef]
  85. Eladany, M.M.; Eldesouky, A.A.; Sallam, A.A. Power System Transient Stability: An Algorithm for Assessment and Enhancement Based on Catastrophe Theory and FACTS Devices. IEEE Access 2018, 6, 26424–26437. [Google Scholar] [CrossRef]
  86. Cao, Y.; Zhang, Y.; Zhang, H. Probabilistic Optimal PV Capacity Planning for Wind Farm Expansion Based on NASA Data. IEEE Transactions on Sustainable Energy 2017, PP(99):1-1.
  87. Ahmadi, K.; Salari, E. Small dim object tracking using a multi objective particle swarm optimisation technique. IET Image Process. 2015, 9, 820–826. [Google Scholar] [CrossRef]
  88. Liang, X.; Li, W.; Zhang, Y.; Zhou, M. An adaptive particle swarm optimization method based on clustering. Soft Comput. 2014, 19, 431–448. [Google Scholar] [CrossRef]
  89. Tian, D.P. A Review of Convergence Analysis of Particle Swarm Optimization. Int. J. Grid Distrib. Comput. 2013, 6, 117–128. [Google Scholar] [CrossRef]
  90. Fang, W.; Sun, J.; Xie, Z.; et al. Convergence analysis of quantum-behaved particle swarm optimization algorithm and study on its control parameter. Acta Physica Sinica 2010, 59, 3686–3694. [Google Scholar] [CrossRef]
  91. Sun, J.; Wu, X.; Palade, V.; Fang, W.; Lai, C.-H.; Xu, W. Convergence analysis and improvements of quantum-behaved particle swarm optimization. Inf. Sci. 2012, 193, 81–103. [Google Scholar] [CrossRef]
  92. Kadirkamanathan, V.; Selvarajah, K.; Fleming, P. Stability analysis of the particle dynamics in particle swarm optimizer. IEEE Trans. Evol. Comput. 2006, 10, 245–255. [Google Scholar] [CrossRef]
  93. Bergh, F.V.D.; Engelbrecht, A.P. A study of particle swarm optimization particle trajectories. Information Sciences 2006, 176, 937–971. [Google Scholar]
  94. Xu, G.; Yu, G. Reprint of: On convergence analysis of particle swarm optimization algorithm. J. Comput. Appl. Math. 2018, 340, 709–717. [Google Scholar] [CrossRef]
  95. Han, H.; Lu, W.; Zhang, L.; Qiao, J. Adaptive Gradient Multiobjective Particle Swarm Optimization. IEEE Trans. Cybern. 2017, 48, 3067–3079. [Google Scholar] [CrossRef]
  96. Chen, C.-H.; Chen, Y.-P. Convergence Time Analysis of Particle Swarm Optimization Based on Particle Interaction. Adv. Artif. Intell. 2011, 2011, 1–7. [Google Scholar] [CrossRef]
  97. Clerc, M.; Kennedy, J. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
Figure 1. The major scheme of this paper.
Figure 2. The relationship between the key parameters.
Table 1. The basic MOPSO algorithm.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.