Experimental analysis of time complexity and solution quality of swarm intelligence algorithms

Nature-inspired algorithms are popular tools for solving optimization problems. However, there is no guarantee that an optimal solution can be obtained using a randomly selected algorithm, so in practice the problem is often addressed by trial and error with different optimization algorithms. The study in this paper therefore analyzes the time complexity and efficacy of three nature-inspired algorithms: Artificial Bee Colony (ABC), the Bat Algorithm (BA) and Particle Swarm Optimization (PSO). For each algorithm, experiments were conducted several times over a fixed number of iterations and a comparative analysis was made. The results show that Artificial Bee Colony outperformed the other algorithms in terms of solution quality, Particle Swarm Optimization was the most time efficient, and Artificial Bee Colony yielded the worst running time.

algorithms because of their ability to effectively handle highly nonlinear and complex problems, especially in science and engineering [6]. The study in this paper therefore presents a comparative analysis of three randomly selected algorithms, Artificial Bee Colony, Bat Algorithm and Particle Swarm Optimization, with respect to both time complexity and quality of solution.
The paper is structured as follows: Section 2 presents the basic concepts of the algorithms under study (ABC, BA and PSO); Section 3 discusses the method used to collect the literature and conduct the experiments; Section 4 presents the results and discussion; and finally the conclusion is drawn in Section 5.

Basic concept of the algorithms
This section discusses the basic theoretical background of the ABC, BA and PSO algorithms. The algorithm, it is said, is the soul of the computer: a sequence of steps or a procedure designed to solve a problem or help answer a question. Yet there is no universally accepted definition of the term.
Mathematically speaking, an algorithm is a procedure that generates outputs for given inputs. From the optimization point of view, an optimization algorithm generates a new solution x_{t+1} to a given problem from a known solution x_t at iteration or time t [10].
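As a minimal illustration of this iterative view (a hypothetical random-search optimizer, not one of the algorithms studied in this paper):

```python
import random

def random_search_step(x_t, objective, step=0.1):
    """One iteration x_{t+1} = A(x_t): propose a perturbed candidate
    and greedily keep whichever solution has the lower objective value."""
    candidate = [xi + random.uniform(-step, step) for xi in x_t]
    return candidate if objective(candidate) < objective(x_t) else x_t

# Minimize the sphere function f(x) = sum(x_i^2) for a few iterations.
sphere = lambda x: sum(xi * xi for xi in x)
x = [1.0, -2.0]
for t in range(100):
    x = random_search_step(x, sphere)
```

Because the acceptance is greedy, the objective value is non-increasing from iteration to iteration, which is the sense in which a known solution x_t produces a new solution x_{t+1}.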

Artificial Bee Colony
Artificial Bee Colony (ABC) simulates the foraging process of natural honey bees. The bee colony in ABC consists of three kinds of members: employed, onlooker and scout bees. Scout bees initiate the search for food sources randomly; once a potential food source is identified by a scout, it becomes an employed bee. The food sources are then exploited by the employed bees, which also share information about the quality and quantity of the food sources with the onlookers (bees resting at the hive and waiting for information from the employed bees). A specific waggle dance is performed to share this food information.

Figure 1. Bee colony
The ABC algorithm is presented below:
• Initialization of random food sources
The random food sources (FS) are generated in the search space using Eq. (1):

x_ij = x_min,j + rand(0, 1) (x_max,j − x_min,j)    (1)

where x_ij represents the i-th FS and j denotes the j-th dimension; x_max,j and x_min,j denote the upper and lower bounds of the j-th dimension.
• Employed bee process
In this phase a global search is performed: a new candidate food source v_i = (v_i1, v_i2, ..., v_iD) is generated from x_i = (x_i1, x_i2, ..., x_iD) using Eq. (2):

v_ij = x_ij + φ_ij (x_ij − x_kj)    (2)

where k is selected randomly and is distinct from i. A greedy selection mechanism is performed to decide which solution to store. If v_ij violates the boundary constraints, it is handled using Eq. (3):

v_ij = x_min,j if v_ij < x_min,j;  v_ij = x_max,j if v_ij > x_max,j    (3)

After the new solution is generated, greedy selection retains the solution with the better fitness, Eq. (4):

x_i = v_i if fit(v_i) > fit(x_i), otherwise x_i is kept    (4)

where fit(·) represents the fitness value, which is defined in Eq. (5) (for the minimization case):

fit_i = 1 / (1 + f_i) if f_i ≥ 0;  fit_i = 1 + |f_i| otherwise    (5)

where f_i represents the objective function value.

• Onlooker bee process
Onlooker bees carry out a local search in the region of the food sources shared by the employed bees.
Equation (6) gives the probability P_i with which an onlooker bee chooses food source (solution) i from the set of FS solutions:

P_i = fit_i / Σ_n fit_n    (6)

An onlooker bee chooses a food source with better probability, then Eq. (2) is used to exploit it and a new food source is generated, after which the greedy process of Eq. (4) is followed.

• Scout bee process
If a food source does not improve within a fixed number of trials (limit, a control parameter), its employed bee turns into a scout bee and forages randomly for a new food source. Initially, ABC was designed to handle unconstrained optimization problems. Later, ABC was modified to handle constrained optimization problems (COPs) [22]: first, one more parameter, the modification rate (MR), was added to the employed and onlooker phases; second, constraints were handled using Deb's rule (Deb 2000); and third, another control parameter named SPP was added alongside limit in the scout bee phase to control abandoned food sources. If the trial counter of a food source exceeds limit, a scout production process is carried out, and SPP ensures that a new food source randomly generated by the scout bee replaces the abandoned one. Deb's rule states that a feasible solution is preferred over an infeasible one; between two feasible solutions, the one with the better objective value is preferred; and between two infeasible solutions, the one with the smaller constraint violation is preferred. Eq. (7) is then used by the employed and onlooker bees to generate a new food source:

v_ij = x_ij + φ_ij (x_ij − x_kj) if R_j < MR, otherwise v_ij = x_ij    (7)

where φ_ij is a random number in the range [−1, 1], MR controls the modification of x_ij, and R_j ∈ [0, 1] is a uniformly distributed random number.
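The ABC phases described above can be sketched as follows. This is a minimal, unconstrained version of the algorithm (the MR/SPP constrained-optimization extensions are omitted); the function and variable names are illustrative, not those of the paper's implementation.

```python
import random

def abc_minimize(objective, dim, lo, hi, n_food=10, limit=20, iters=50):
    """Minimal Artificial Bee Colony sketch for unconstrained minimization."""
    rand = random.uniform
    # Eq. (1): random initialization of food sources in [lo, hi]^dim.
    foods = [[rand(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [objective(f) for f in foods]
    trials = [0] * n_food

    def mutate(i):
        # Eq. (2): v_ij = x_ij + phi * (x_ij - x_kj), k != i, one random dim j.
        k = random.choice([m for m in range(n_food) if m != i])
        j = random.randrange(dim)
        v = foods[i][:]
        v[j] += rand(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)  # Eq. (3): boundary handling
        return v

    def greedy(i, v):
        # Eq. (4): keep whichever of the old/new food source is better
        # (direct objective comparison, equivalent to the fitness of Eq. (5)).
        fv = objective(v)
        if fv < fits[i]:
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                  # employed bee phase
            greedy(i, mutate(i))
        # Eq. (6): onlookers pick sources proportionally to fitness 1/(1+f).
        probs = [1.0 / (1.0 + f) for f in fits]
        total = sum(probs)
        for _ in range(n_food):                  # onlooker bee phase
            r, acc = rand(0, total), 0.0
            for i, p in enumerate(probs):
                acc += p
                if r <= acc:
                    greedy(i, mutate(i))
                    break
        for i in range(n_food):                  # scout bee phase
            if trials[i] > limit:
                foods[i] = [rand(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = objective(foods[i]), 0

    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

best_x, best_f = abc_minimize(lambda x: sum(xi * xi for xi in x), 2, -5.0, 5.0)
```

Note that, as in the original formulation, the selection probabilities are recomputed once per cycle rather than after every onlooker update.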

Bat Algorithm
The Bat Algorithm (BA) was developed based on the echolocation features of microbats [3]. BA uses a frequency-tuning technique to increase the diversity of the solutions in the population while, at the same time, using automatic zooming to balance exploration and exploitation during the search process by mimicking the variations in pulse emission rate and loudness of bats searching for prey. BA can deal with both continuous and discrete optimization problems [7]. The algorithm has the advantages of simplicity and flexibility: it is easy to implement, and such a simple algorithm can flexibly solve a wide range of problems [9].
The bat algorithm was developed in [9] with the following three idealized rules: 1. All bats use echolocation to sense distance, and they also 'know' the difference between food/prey and background barriers in some magical way; 2. Bats fly randomly with velocity v_i at position x_i with a frequency f (or wavelength) and loudness A_0 to search for prey. They can automatically adjust the wavelength (or frequency) of their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their target; 3. Although the loudness can vary in many ways, it is assumed that the loudness varies from a large (positive) A_0 to a minimum constant value A_min.
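The text does not reproduce the BA update equations, so the sketch below follows the standard formulation from [9] (frequency tuning, velocity/position updates, local search around the best solution, and the loudness A_i and pulse rate r_i updates); all names and parameter values are illustrative.

```python
import math
import random

def bat_minimize(objective, dim, lo, hi, n_bats=10, iters=50,
                 f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    """Minimal Bat Algorithm sketch (Yang's standard formulation)."""
    rand = random.uniform
    x = [[rand(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    fit = [objective(b) for b in x]
    loud = [1.0] * n_bats                       # loudness A_i
    rate = [0.5] * n_bats                       # pulse emission rate r_i
    best = min(range(n_bats), key=lambda i: fit[i])
    x_best, f_best = x[best][:], fit[best]

    for t in range(1, iters + 1):
        for i in range(n_bats):
            # Frequency tuning: f_i = f_min + (f_max - f_min) * beta.
            f_i = f_min + (f_max - f_min) * rand(0, 1)
            cand = x[i][:]
            for j in range(dim):
                v[i][j] += (x[i][j] - x_best[j]) * f_i
                cand[j] = min(max(x[i][j] + v[i][j], lo), hi)
            # Local search: random walk around the best solution, scaled
            # by the average loudness, with probability 1 - r_i.
            if rand(0, 1) > rate[i]:
                avg_loud = sum(loud) / n_bats
                cand = [min(max(x_best[j] + rand(-1, 1) * avg_loud, lo), hi)
                        for j in range(dim)]
            f_cand = objective(cand)
            # Accept improvements probabilistically (louder bats accept more),
            # then decrease loudness and increase pulse rate.
            if f_cand < fit[i] and rand(0, 1) < loud[i]:
                x[i], fit[i] = cand, f_cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if f_cand < f_best:
                x_best, f_best = cand[:], f_cand
    return x_best, f_best

bat_x, bat_f = bat_minimize(lambda x: sum(xi * xi for xi in x), 2, -5.0, 5.0)
```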

Particle Swarm Optimization
Particle swarm optimization (PSO) originated from the analysis of the food-seeking behavior of social animals such as birds, fish and ants [13]. The researchers concluded that, within the behavioral rules of social animals, there is an invisible information-sharing platform among these seemingly unstructured and dispersed biological groups. Inspired by this, scholars repeatedly simulated the behavior of bird flocks and proposed the optimization concept [13][14].
Particle swarm optimization has become a well-developed optimization method in recent years. It searches for the optimal solution through continuous iteration, evaluating the quality of a solution by the value of the objective function to be optimized (also known as the fitness function in the particle swarm) [15].
To ease analysis, the birds are treated in the algorithm as particles without mass or volume.
The algorithm initializes the position of each particle as a candidate solution of the problem to be optimized.
As the particle swarm moves, information is conveyed between individuals and influences the others: a particle's moving state is influenced by the speed and direction of its neighbors and of the whole swarm. Each particle therefore adjusts its own speed and direction according to the historical optimal positions of itself and its neighbors, and keeps flying in search of the optimal position, i.e. the optimal solution. While flying, particles update their position and direction according to their own and external information, which shows that a particle has a memory function, and particles with good positions and directions tend to approach the optimal solution. Optimization is thus achieved through competition and cooperation between particles [13][14][15].
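A minimal sketch of the update just described, using the standard PSO formulation with an inertia weight w and cognitive/social coefficients c1 and c2 (the parameter values here are illustrative defaults, not the paper's settings):

```python
import random

def pso_minimize(objective, dim, lo, hi, n_particles=10, iters=50,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal Particle Swarm Optimization sketch."""
    rand = random.uniform
    x = [[rand(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                       # personal best positions
    pbest_f = [objective(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best

    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                # Velocity: inertia + cognitive (own memory) + social (swarm) pulls.
                v[i][j] = (w * v[i][j]
                           + c1 * rand(0, 1) * (pbest[i][j] - x[i][j])
                           + c2 * rand(0, 1) * (gbest[j] - x[i][j]))
                x[i][j] = min(max(x[i][j] + v[i][j], lo), hi)
            f = objective(x[i])
            if f < pbest_f[i]:                      # update particle memory
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:                     # update swarm-wide best
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

pso_x, pso_f = pso_minimize(lambda x: sum(xi * xi for xi in x), 2, -5.0, 5.0)
```

The pbest terms implement the particle's "memory function" and the gbest term implements the shared-information platform mentioned above.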

Materials and Methods
The program designed for each algorithm is made up of the same number of agents, decision variables, iterations, and upper and lower bounds composing the search space; a mathematical function to be optimized; and an optimizer, the metaheuristic technique used to perform the optimization process. These are bundled into an Opytimizer class, which holds all the vital information about the optimization task. Finally the task is started and, when it finishes, it returns a history object encoding valuable data about the optimization procedure, including the iteration count, fitness, position and time taken to complete the experiment. The experiment is repeated twenty times with twenty iterations each. The input variables and the number of iterations remain the same for all the algorithms in order to compare them easily and fairly.
For each iteration count in an experiment, the fitness and position are recorded in a table, which allows easy inspection and analysis. The time taken for every experiment to complete is recorded for all the algorithms and used to determine the most time-efficient of the three. The results obtained from this procedure are presented in a later section.
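The procedure above can be sketched as a small timing harness. This is illustrative only: the paper used the Opytimizer framework, whose exact API is not reproduced here, so a generic optimizer callable and a trivial stand-in optimizer take its place.

```python
import random
import statistics
import time

def run_experiments(optimizer, objective, n_experiments=20):
    """Repeat an optimization task, recording the best fitness and the
    wall-clock time of each run, mirroring the paper's best/average/worst
    and running-time analysis."""
    fitnesses, times = [], []
    for _ in range(n_experiments):
        start = time.perf_counter()
        _, best_f = optimizer(objective)
        times.append(time.perf_counter() - start)
        fitnesses.append(best_f)
    return {
        "best": min(fitnesses),
        "average": statistics.mean(fitnesses),
        "worst": max(fitnesses),
        "total_time": sum(times),
    }

def random_optimizer(f, dim=2, lo=-5.0, hi=5.0, samples=100):
    """Trivial stand-in optimizer: best of `samples` random points."""
    pts = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(samples)]
    best = min(pts, key=f)
    return best, f(best)

sphere = lambda x: sum(xi * xi for xi in x)
stats = run_experiments(random_optimizer, sphere)
```

Running the same harness with each of the three metaheuristics (and identical search-space settings) yields directly comparable fitness and timing tables.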

System configuration
The experiments were coded and implemented with the Python micro-framework Opytimizer. Visual Studio Code was used to write and run the experiments on a computer with the following configuration: Intel(R) Pentium(R) CPU N3540 @ 2.16 GHz, 4.00 GB RAM, 64-bit operating system, x64-based processor. All experiments were performed on the same machine.

Results
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 24 July 2020 | doi:10.20944/preprints202007.0593.v1

The results obtained from the experiments are as follows. For each experiment, the best, average and worst cases were recorded; they are shown in Tables 2, 3 and 4. Table 1 compares the running times of the experiments for each algorithm, while Tables 2, 3 and 4 compare the best, average and worst solutions of the algorithms for each experiment.
In terms of time complexity, particle swarm optimization has a lower running time, i.e. it converges to the global optimum faster than both of the other algorithms, followed closely by the Bat Algorithm and then Artificial Bee Colony. Figure 3 presents a visual representation of the time analysis.
The algorithms' results were compared with one another for easy analysis. Table 2 compares the best solutions of the algorithms, and we found that ABC outperformed the others: it has the best solution among the best solutions, as can be seen in Fig. 4, followed closely by PSO and lastly BA. Fig. 5 compares the average cases of the algorithms; again ABC proved more efficient than the other two. Worst-case solutions are compared in Fig. 6, with BA performing the worst compared with both PSO and ABC. Charts are displayed for each of the tables. Figure 2 shows the time complexity of the algorithms; as shown, PSO has a smaller running time than both BA and ABC.
The results revealed that particle swarm optimization converges to the global optimum faster than the other two (BA and ABC), while in terms of solution quality ABC outperformed the rest of the algorithms.
The results come out this way because the algorithms take different approaches even though they are designed to achieve the same task. This study could be used by researchers to help choose a suitable algorithm for a given problem or question.

Conclusions
In conclusion, the objective of this research was to conduct experiments using Opytimizer to measure the performance (solution quality and time complexity) of three nature-inspired algorithms, particle swarm optimization, the bat algorithm and artificial bee colony, and to determine which of them converges faster. The Opytimizer Python micro-framework was used on several benchmark functions. The experiment was run several times, and the mean, best and worst cases were recorded. It revealed that PSO converges to the global optimum faster than the other two (BA and ABC), while in terms of solution quality ABC outperformed the rest. We used the basic versions of these algorithms, without fine-tuning their parameters, to compare the results.