Modified Particle Swarm Optimization with Novel Modulated Inertia for Velocity Update

— Particle swarm optimization (PSO) is a population-based stochastic search algorithm for locating optimal regions of a multidimensional space, inspired by the social behaviour of some animal species. However, it has limitations, such as becoming trapped in a local optimum and converging slowly. In this paper, a new method combining a developed accelerated PSO with a new modulated inertia coefficient for the velocity update is proposed. A random term based on the particle neighbourhood, inspired by the Artificial Bee Colony (ABC) algorithm, is added to the position update formula. To verify the proposed modified PSO, experiments were conducted on several benchmark optimization problems. The results show that the proposed algorithm is superior to the standard PSO and accelerated PSO algorithms.


I. INTRODUCTION
Particle swarm optimization (PSO) is a population-based stochastic search algorithm for locating optimal regions of a multidimensional space. It is an optimization method inspired by the social behaviour of bird flocking and fish schooling, and was defined by Kennedy and Eberhart in 1995 [1]. PSO draws on general artificial life concepts and the random search methods applied in evolutionary algorithms [2]. When travelling in a group, individual birds and fish are able to move without colliding with each other. This is achieved by having each member follow its group and adjust its position and velocity using the group's information, thereby reducing the individual's effort in searching for the target (food, shelter). Particle swarm optimization is quite similar to the genetic algorithm because both are population-based and are equally effective [2]. The advantage of PSO lies in its lower complexity at comparable performance: there are only a few parameters to adjust, it has better computational efficiency, it requires less memory, and it is less dependent on CPU speed. Another advantage of PSO over derivative-based local search methods is that, when solving a complicated optimization problem, no gradient information is needed to perform the iterative search.
In PSO, a member of the swarm is called a particle, and it represents a potential solution to a problem. A population of particles moves through a search space, following the current optimum particles and changing positions in order to find the optimum. The position of a particle corresponds to a possible solution of the function to be optimized. Each particle has a velocity and a fitness value, the latter determined by evaluating the objective function at the particle's position. The experiences of the swarm are used as a learning tool in the search for the global optimum [3]. While PSO has been successfully used to solve many optimization benchmarks and real-life optimization problems, it often suffers from premature convergence and becoming trapped in a local optimum region.
In order to achieve better performance, the original PSO algorithm has been modified by many researchers for various types of applications. Shi and Eberhart proposed an extended PSO based on an inertia weight: a large inertia weight facilitates a global search while a small inertia weight facilitates a local search, so by changing the inertia weight dynamically, the search capability is dynamically adjusted [4]. Nickabadi et al. proposed a new dynamic inertia weight PSO; while earlier schemes use the fitness or the iteration number as the basis for updating the inertia weight, their method uses the success rate of the swarm to determine it [5]. To improve the flexibility of mutation in PSO, Hui Wang presented an adaptive mutation strategy in which three different mutation operators, Gaussian, Cauchy, and Lévy, are utilized [6]. Xin-She Yang simplified the standard PSO into accelerated PSO (APSO) by neglecting the individual particle best; the diversity it provides can instead be simulated by introducing some randomness. Jian Hu proposed a general fitness evaluation strategy (GFES), in which a particle is evaluated in multiple subspaces and different contexts in order to take diverse paces towards the destination position [3]. In [8], woven fabrics with the desired quality and low manufacturing cost were designed by using PSO to find the appropriate combination of weave parameters. In [9], the author added a differential evolution (DE) mutation operator to the APSO algorithm to speed up convergence on numerical optimization problems; the mutation operator tunes the newly generated solution for each particle, rather than the random walks used in APSO. PSO has also been used to identify inelastic material parameters [10].
In that application, each individual particle is associated with hyper-coordinates in the search space, corresponding to a set of material parameters, upon which velocity operators with random components are applied. The effectiveness of PSO was also demonstrated by applying it to the Maximum Covering Location Problem (MCLP) for ambulances in Malaysia, where the best ambulance locations were determined to ensure the efficiency of emergency medical service delivery [11].
II. PARTICLE SWARM OPTIMIZATION
PSO has generated an exciting, ever-expanding research subject called swarm intelligence. PSO has been applied to almost every area of optimization, computational intelligence, and design [12, 13]. Many PSO variants have been developed by researchers, and the trend of combining PSO with other existing algorithms is increasingly popular and has generated much interest.
The movement of a swarming particle consists of two major components: a position component and a velocity component. Each particle is attracted toward the position of the current global best g* and its own historical best location x_i*. x_i* refers to the position that provided the best fitness for particle i, and g* refers to the best position obtained by any particle in the swarm up to the current iteration. PSO remembers both the best position found by the whole swarm and the best position found by each particle during the search process.
The pseudocode of the PSO algorithm is as follows:
1. Initialization.
2. Evaluate the fitness of each particle.
3. Repeat:
4. Compare the fitness of each particle with its personal best x_i*; if the current value is better, replace the previous value.
5. Compare the fitness of each particle with the global best g*; if the current value is better, replace the previous value.
6. Update the velocity and position of each particle.
7. If the stopping criterion is met, end the algorithm; else, repeat.
Let x_i and v_i be the position vector and velocity of particle i, respectively. The new velocity is determined by the following formula:

v_i^{t+1} = v_i^t + c_1 r_1 (x_i^* - x_i^t) + c_2 r_2 (g^* - x_i^t)   (1)

where r_1 and r_2 are random vectors whose entries take values between 0 and 1. Shi and Eberhart [4] improved the standard PSO algorithm by adding an inertia weight \omega:

v_i^{t+1} = \omega v_i^t + c_1 r_1 (x_i^* - x_i^t) + c_2 r_2 (g^* - x_i^t)   (2)

where the inertia \omega is a value between 0.9 and 0.4, progressively decreasing throughout the process. They claimed that a large inertia weight facilitates a global search while a small inertia weight facilitates a local search; this stabilizes the motion of the particles, and as a result the algorithm is expected to converge more quickly. The position update is always

x_i^{t+1} = x_i^t + v_i^{t+1}   (3)
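As a concrete illustration, the inertia-weighted velocity and position updates described above can be sketched in Python (the paper's experiments used MATLAB; here the Sphere objective, swarm size 36, dimension 2, and the 0.9 to 0.4 inertia schedule follow the settings reported later in the paper, while the acceleration coefficients c1 = c2 = 2.0 and the random seed are illustrative assumptions):

```python
import numpy as np

def pso_sphere(n_particles=36, dim=2, iters=100, w_max=0.9, w_min=0.4,
               c1=2.0, c2=2.0, bounds=(-100.0, 100.0), seed=0):
    """Minimal inertia-weighted PSO minimizing the Sphere function."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    f = (x ** 2).sum(axis=1)                      # Sphere fitness
    pbest, pbest_f = x.copy(), f.copy()           # personal bests
    g = pbest[pbest_f.argmin()].copy()            # global best

    for t in range(iters):
        # linearly decreasing inertia weight (0.9 -> 0.4)
        w = w_max - (w_max - w_min) * t / iters
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = (x ** 2).sum(axis=1)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

Note that the random vectors r_1 and r_2 are redrawn per particle and per dimension at every iteration, which is the usual convention.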

III. MODIFIED PARTICLE SWARM OPTIMIZATION
The standard particle swarm optimization uses both the current global best g* and the individual best x_i*. The reason for using the individual-best information is to increase the diversity of quality solutions; however, this diversity can also be generated by introducing some randomness. A simplified version that can accelerate the convergence of the algorithm is to use g* only, as demonstrated by Xin-She Yang in accelerated PSO. This paper proposes a new modulated inertia, p, added to the g* term of the velocity update formula, neglecting the x_i* information without replacing it with a random term.
Here p is a modulated inertia defined in terms of a constant in (0.5, 1.0) and d_i, the current Euclidean distance of particle i from the global best.
d_i = ( \sum_{k=1}^{D} (g^*_k - x_{i,k})^2 )^{1/2}   (6)

where g* is the current global best. The maximum distance, d_max, of a particle from the global best is calculated using Equation 7.
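A minimal sketch of the Euclidean distance computation described above; since the paper's Equation 7 for the maximum distance is not reproduced here, normalizing by the largest current distance in the swarm is an assumption for illustration:

```python
import numpy as np

def distances_to_gbest(x, g):
    """Euclidean distance of each particle's position from the global best."""
    return np.sqrt(((x - g) ** 2).sum(axis=1))

# Small worked example (values chosen for easy hand-checking).
x = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])  # particle positions
g = np.array([0.0, 0.0])                            # global best
d = distances_to_gbest(x, g)
# Hypothetical normalization: d_max taken as the largest current distance
# (an assumption, since the paper's Eq. 7 is not reproduced in this text).
d_norm = d / d.max()
```

For the example positions, d works out to [0, 5, 10], so d_norm is [0, 0.5, 1].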
Inspired by the Artificial Bee Colony (ABC) algorithm established by Dervis Karaboga [14], the position update is supplemented by a random term from the ABC position update formula, \phi (x_i - x_k). A particle (bee) in ABC selects a food source (solution) and compares it with other food sources within its neighbourhood [15]. The choice is based on the neighbourhood of the previously selected food source, which increases the feasibility of the solution [16]. Hence, Equation 3 was modified by adding this random term.
x_i^{t+1} = x_i^t + v_i^{t+1} + \phi (x_i^t - x_k^t)

Here x_k denotes another solution selected randomly from the population, and the new position is modified from its previous value based on a comparison with the randomly selected neighbouring solution x_k.
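The ABC-inspired neighbourhood term can be sketched as follows. Drawing \phi uniformly from [-1, 1] follows Karaboga's ABC; the exact coupling with the MPSO velocity update is not fully specified in this text, so the sketch is an illustrative assumption:

```python
import numpy as np

def abc_style_position_update(x, v, rng):
    """Position update supplemented by an ABC-inspired neighbourhood term.

    For each particle i, a random partner k (k != i) is drawn and a term
    phi * (x_i - x_k), with phi ~ U(-1, 1), is added to the usual x + v
    update, mirroring ABC's neighbourhood search. This coupling is an
    illustrative assumption, not the paper's exact formula.
    """
    n = x.shape[0]
    k = np.array([rng.choice([j for j in range(n) if j != i]) for i in range(n)])
    phi = rng.uniform(-1.0, 1.0, size=x.shape)
    return x + v + phi * (x - x[k])

# Sanity check: with identical particles and zero velocity, the
# neighbourhood term vanishes and positions are unchanged.
x0 = np.ones((5, 2))
v0 = np.zeros((5, 2))
x1 = abc_style_position_update(x0, v0, np.random.default_rng(0))
```

Because x_i - x_k shrinks as the swarm converges, the extra term injects large perturbations early on and fades automatically near convergence.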

IV. BENCHMARK FUNCTION
In the simulation studies, MPSO was applied to find the global minimum of three well-known benchmark functions. A function of variables is separable if it can be rewritten as a sum of functions of just one variable [17]. A function is multimodal if it has two or more local optima, which makes finding the global optimum more difficult; the most complex case occurs when the local optima are randomly distributed in the search space [17]. Another important factor that affects the complexity of a problem is the dimension of the search space, where D denotes the number of dimensions. The first function is the Rosenbrock function, whose value is 0 at its global minimum (1, 1, ..., 1). The formula is

f_1(x) = \sum_{i=1}^{D-1} [ 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
The second function is Ackley's function. It has a value of 0 at its global minimum (0, 0, ..., 0), and the initialization range for the function is [-5, 5]. The formula is

f_2(x) = -20 \exp( -0.2 \sqrt{ (1/D) \sum_{i=1}^{D} x_i^2 } ) - \exp( (1/D) \sum_{i=1}^{D} \cos(2\pi x_i) ) + 20 + e

The third function is the Sphere function, whose value is 0 at its global minimum (0, 0, ..., 0). The initialization range for the function is [-100, 100]. It is continuous, convex, and unimodal. The formula is

f_3(x) = \sum_{i=1}^{D} x_i^2

These functions were used to compare the performance of the improved optimization algorithm against the existing PSO and APSO algorithms.
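For reference, the three benchmark functions can be implemented directly from their standard definitions (function and variable names here are illustrative):

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function: global minimum 0 at (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def ackley(x):
    """Ackley function: global minimum 0 at (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x))) + 20.0 + np.e)

def sphere(x):
    """Sphere function: global minimum 0 at (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)
```

Each function accepts a vector of any dimension D, matching the definitions above.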

V. EXPERIMENTAL RESULTS
To make a fair comparison among the three algorithms, all experiments were run for 100 function evaluations. The number of particles in the swarm was set to 36, and the dimension of the search space was 2. A run was terminated after 100 function evaluations or when the function error dropped below 10^-30 (values less than 10^-30 were reported as 0), whichever came first. Each experiment was repeated 30 times independently, and the reported results are the mean, best, and worst function values and the standard deviations of the experimental data. The simulations were implemented in MATLAB R2010a on a Windows 8 computer with an Intel i5 CPU clocked at 2.10 GHz and 8 GB of memory.
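The reporting protocol above (30 independent runs; mean, best, worst, and standard deviation; values below 10^-30 reported as 0) can be sketched as follows, with `run_fn` standing in as a hypothetical placeholder for one optimizer run:

```python
import numpy as np

def summarize_runs(run_fn, n_runs=30, zero_tol=1e-30):
    """Repeat an optimizer and report statistics as in the paper's tables.

    run_fn() is any callable returning the final (non-negative) function
    error of one independent run; it is a placeholder for PSO, APSO, or
    MPSO. Errors below zero_tol are reported as 0, and the success rate
    counts runs that reached 0.
    """
    vals = np.array([run_fn() for _ in range(n_runs)], dtype=float)
    vals[vals < zero_tol] = 0.0
    return {"mean": vals.mean(), "best": vals.min(), "worst": vals.max(),
            "std": vals.std(), "success_rate": np.mean(vals == 0.0)}
```

With this convention, a success rate of 19/30 corresponds to the 63.33% figure reported for MPSO on the Rosenbrock function.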
The results are presented in Tables I, II, and III. The mean function value for each algorithm was calculated over the 30 experiments. "Best" represents the function value nearest to 0 among the 30 attempts, and "Worst" represents the function value farthest from 0. The standard deviation was calculated to measure how far the function values spread out from the average, from which the precision of an algorithm can be judged: a standard deviation close to 0 indicates that the data points tend to be very close to the mean (precise), while a high standard deviation indicates that the data points are spread over a wider range of values (less precise).
Table I shows the results for the Rosenbrock function experiment. The MPSO mean value was the nearest to 0. With the lowest standard deviation, the 30 function values of MPSO lay within 0 to 1.25e-25 of the mean, so MPSO can be considered precise. MPSO was the only algorithm that achieved a function value of 0 in the 30 repeated experiments, with a 63.33% success rate, while the other two algorithms failed to do so.
Table II shows the results for the Sphere function experiment. The MPSO mean value was again the nearest to 0. With the lowest standard deviation, the 30 function values of MPSO lay within 0 to 1.01e-28 of the mean, so MPSO can again be considered precise. MPSO was the most accurate of the algorithms, achieving a function value of 0 in 50% of the 30 experiments, while APSO achieved 0 with 26.67% probability and PSO failed to reach 0 at all.
VI. CONCLUSION
This paper proposed the addition of a term in the position update formula, inspired by the ABC algorithm, and a new modulated inertia, p, in the velocity update formula to improve the established PSO algorithm.
The benchmark functions used for comparison were selected to analyse the performance of the proposed MPSO against the established PSO and APSO algorithms under different characteristics such as multimodality, local minima, and difficulty of convergence. The results show that the proposed algorithm outperforms the other two on the three selected test functions. Hence, MPSO can be proposed as an optimization algorithm for functions with similar characteristics. The experiments were conducted on 2-dimensional problems, so the results may vary when the dimensionality is increased.