LADPSO Published

15
Appl Intell DOI 10.1007/s10489-011-0328-6 LADPSO: using fuzzy logic to conduct PSO algorithm Mohammad Sadegh Norouzzadeh · Mohammad Reza Ahmadzadeh · Maziar Palhang © Springer Science+Business Media, LLC 2011 Abstract Optimization plays a critical role in human mod- ern life. Nowadays, optimization is used in many aspects of human modern life including engineering, medicine, agri- culture and economy. Due to the growing number of opti- mization problems and their growing complexity, we need to improve and develop theoretical and practical optimiza- tion methods. Stochastic population based optimization al- gorithms like genetic algorithms and particle swarm opti- mization are good candidates for solving complex prob- lems efficiently. Particle swarm optimization (PSO) is an optimization algorithm that has received much attention in recent years. PSO is a simple and computationally inex- pensive algorithm inspired by the social behavior of bird flocks and fish schools. However, PSO suffers from pre- mature convergence, especially in high dimensional multi- modal functions. In this paper, a new method for improv- ing PSO has been introduced. The Proposed method which has been named Light Adaptive Particle Swarm Optimiza- tion is a novel method that uses a fuzzy control system to conduct the standard algorithm. The suggested method uses two adjunct operators along with the fuzzy system in or- der to improve the base algorithm on global optimization problems. Our approach is validated using a number of com- mon complex uni-modal/multi-modal benchmark functions and results have been compared with the results of Stan- dard PSO (SPSO2011) and some other methods. The sim- ulation results demonstrate that results of the proposed ap- proach is promising for improving the standard PSO algo- M.S. Norouzzadeh ( ) · M.R. Ahmadzadeh · M. Palhang Electrical & Computer Engineering Department, Isfahan University of Technology, Isfahan, Iran e-mail: [email protected] M.R. Ahmadzadeh ( ) e-mail: [email protected] rithm on global optimization problems and also improving performance of the algorithm. Keywords Particle swarm optimization · Fuzzy control · Random search · Numerical function optimization · Premature convergence 1 Introduction There exists systems in nature which are composed of sim- ple agents. Each agent in these systems looks to be simple and unintelligent but, the behavior of the entire system sur- prisingly seems intelligent. In such systems, no central con- trol or other types of coordinators exist but the collective behavior of the system is purposeful and smart. Scientists named this behavior as swarm intelligence. In other words, in a swarm intelligent system each agent does a simple task and interacts locally with the environment and other agents, but the collective behavior of the entire system that results from these simple tasks is intelligent. Many swarm intel- ligent systems exist in nature, for example ants that allo- cate their tasks dynamically without any coordinator. As an- other example birds in a flock and fishes in a school organize themselves in optimal geometrical patterns. Computer scientists have been inspired by swarm intelli- gent systems in nature and have tried to imitate the behav- ior of such systems by inventing computational swarm in- telligent models. The aim of such computational models is making powerful methods for solving problems which com- posed of simple and computationally inexpensive agents in- stead of devising complex concentrated systems. Working with many simple and understandable agents is easier than working with a very complex system. Particle Swarm Op- timization (PSO) algorithm is an example of such compu-

Transcript of LADPSO Published

Page 1: LADPSO Published

Appl IntellDOI 10.1007/s10489-011-0328-6

LADPSO: using fuzzy logic to conduct PSO algorithm

Mohammad Sadegh Norouzzadeh ·Mohammad Reza Ahmadzadeh · Maziar Palhang

© Springer Science+Business Media, LLC 2011

Abstract Optimization plays a critical role in human mod-ern life. Nowadays, optimization is used in many aspects ofhuman modern life including engineering, medicine, agri-culture and economy. Due to the growing number of opti-mization problems and their growing complexity, we needto improve and develop theoretical and practical optimiza-tion methods. Stochastic population based optimization al-gorithms like genetic algorithms and particle swarm opti-mization are good candidates for solving complex prob-lems efficiently. Particle swarm optimization (PSO) is anoptimization algorithm that has received much attention inrecent years. PSO is a simple and computationally inex-pensive algorithm inspired by the social behavior of birdflocks and fish schools. However, PSO suffers from pre-mature convergence, especially in high dimensional multi-modal functions. In this paper, a new method for improv-ing PSO has been introduced. The Proposed method whichhas been named Light Adaptive Particle Swarm Optimiza-tion is a novel method that uses a fuzzy control system toconduct the standard algorithm. The suggested method usestwo adjunct operators along with the fuzzy system in or-der to improve the base algorithm on global optimizationproblems. Our approach is validated using a number of com-mon complex uni-modal/multi-modal benchmark functionsand results have been compared with the results of Stan-dard PSO (SPSO2011) and some other methods. The sim-ulation results demonstrate that results of the proposed ap-proach is promising for improving the standard PSO algo-

M.S. Norouzzadeh (�) · M.R. Ahmadzadeh · M. PalhangElectrical & Computer Engineering Department, IsfahanUniversity of Technology, Isfahan, Irane-mail: [email protected]

M.R. Ahmadzadeh (�)e-mail: [email protected]

rithm on global optimization problems and also improvingperformance of the algorithm.

Keywords Particle swarm optimization · Fuzzy control ·Random search · Numerical function optimization ·Premature convergence

1 Introduction

There exists systems in nature which are composed of sim-ple agents. Each agent in these systems looks to be simpleand unintelligent but, the behavior of the entire system sur-prisingly seems intelligent. In such systems, no central con-trol or other types of coordinators exist but the collectivebehavior of the system is purposeful and smart. Scientistsnamed this behavior as swarm intelligence. In other words,in a swarm intelligent system each agent does a simple taskand interacts locally with the environment and other agents,but the collective behavior of the entire system that resultsfrom these simple tasks is intelligent. Many swarm intel-ligent systems exist in nature, for example ants that allo-cate their tasks dynamically without any coordinator. As an-other example birds in a flock and fishes in a school organizethemselves in optimal geometrical patterns.

Computer scientists have been inspired by swarm intelli-gent systems in nature and have tried to imitate the behav-ior of such systems by inventing computational swarm in-telligent models. The aim of such computational models ismaking powerful methods for solving problems which com-posed of simple and computationally inexpensive agents in-stead of devising complex concentrated systems. Workingwith many simple and understandable agents is easier thanworking with a very complex system. Particle Swarm Op-timization (PSO) algorithm is an example of such compu-

Page 2: LADPSO Published

M.S. Norouzzadeh et al.

tational models which tries to simulate behavior of the birdswarms.

PSO is a population based numerical optimization al-gorithm inspired by the social behavior of bird swarms.Kennedy and Eberhart first introduced PSO algorithm in1995 [1]. Like evolutionary algorithms, PSO maintains agroup of candidate solutions. In PSO each candidate’s so-lutions is called particle and the entire population is calledswarm. In order to simulating the behavior of natural birdswarms, each particle within PSO swarm does two simpletasks. First, it constantly desires to come back to its ownprevious best position and second, it follows its successfulneighbors by flying to their successful positions. The overallbehavior of the swarm results from these two simple tasks isswarm’s rapid focusing on promising areas of search space.

Although PSO is a speedy and robust optimization al-gorithm, it suffers from premature convergence especiallyin problems with many variables. Premature convergence isdefined as converging of the algorithm to a suboptimal so-lution rather than locating the global optimum solution. Todate, many researchers have examined various approaches tosolve this problem. We will discuss this in some importantprevious works on this paper.

To avoid converging to local optima in PSO, an idea ofconducting the base algorithm by a fuzzy control system andalso utilizing exploration ability of random search and diver-sification of a type of mutation operator is addressed in thispaper.

This paper is organized as follows. In Sect. 2, we havetalked about the basic PSO algorithm. In Sect. 3, we havereviewed previous attempts for improving PSO algorithmon global optimization problems. In Sect. 4, we have intro-duced our method and have described it in detail. In Sect. 5,we have explained benchmark functions and their settingsthen we have compared the simulation results with those ofstandard PSO, and some other methods. Finally, Sect. 6 isthe conclusion and suggestion for some future works.

2 Particle swarm optimization

Particle swarm optimization is a stochastic optimization toolthat stores a population of possible solutions to solve prob-lems. The current solution of each particle is called posi-tion of the particle. The population initialized by randomlygenerated particles. The historical PSO algorithm uses uni-form random numbers for initialization. During each iter-ation, particles within the swarm update their position insearch space based on two types of knowledge. First theirown personal experiences and second the experiences oftheir neighbors. It means that in each iteration of the algo-rithm, each particle flies to the direction of the best positionof itself and the best position of its neighbors. The neighbors

of each particle are usually determined before the start of thealgorithm, basic PSO algorithm could be classified as localbest PSO or global best PSO base on social structure of theswarm. In PSO algorithm, positions of particles are updatedby their velocity vectors. It means that the position of eachparticle in the next iteration is calculated by adding its veloc-ity vector to its current position (1). Consider that we havean optimization problem in a d-dimensional search space.Let Xi = (xi1, xi2, . . . , xid) and Vi = (vi1, vi2, . . . , vid) arethe ith particle’s position vector and velocity vector re-spectively. Also suppose that Pi = (pi1,pi2, . . . , pid) repre-sents best previously visited position of the ith particle andPgi = (pg1,pg2, . . . , pgd) represents the global best posi-tion of the swarm. In PSO algorithm, optimization is done bythe velocity equation. The velocity of each particle is com-puted according to (2).

Xt+1i = Xt

i + V ti (1)

V t+1i = wV t

i + c1r1(Pti − Xt

i ) + c2r2(Ptg − Xt

i ) (2)

where d ∈ {1,2, . . . ,D}, i ∈ {1,2, . . . ,N} and N is theswarm size and the superscript t is the iteration number.In (2) w is the inertia weight which prevents particles tosuddenly change their directions, r1 and r2 are two randomvectors with values in the range [0,1] used to keep somediversity and c1 and c2 are the cognitive and social scalingparameters which are positive constants. The proportion ofc1 and c2 determine how much each particle relies on itsexperience and how much it relies on others experiences.Usually c1 and c2 are equal and set to about 2.

Empirical simulations showed that in basic PSO, if wedo not limit the amount of velocity to some predeter-mined value, then the magnitude of velocity of particlesincrease rapidly. In such conditions particles locations in-crease rapidly too and therefore the swarms is unable to dooptimization [2]. Constriction coefficient model [3] is an-other popular model of PSO algorithm which does not needvelocity clamping. Moreover if certain conditions are met,the constriction coefficient model could guarantee conver-gence of the swarm. In constriction coefficient model thevelocity equation change to (3). Where χ called the con-striction factor.

V t+1i = χ[V t

i + ϕ1(Pti − Xt

i ) + ϕ2(Ptg − Xt

i )] (3)

where

χ = 2k

|2 − ϕ − √ϕ(ϕ − 4)| ,

ϕ = ϕ1 + ϕ2, ϕ1 = c1r1, ϕ2 = c2r2 and k ∈ [0,1]The parameter k, in (3) controls the exploration and ex-ploitation abilities of the swarm. For k ≈ 0, fast convergenceis obtained with local exploitation. The swarm exhibits analmost hill-climbing behavior. On the other hand, k ≈ 1 re-sults in slow convergence with a high degree of exploration.

Page 3: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

Usually, k is set to a constant value. However it can be de-creased during the execution of algorithm.

In this paper, we have used the current standard versionof the PSO (SPSO-2011 [12]) as the base algorithm for ourmethod. SPSO is a standard version of PSO approved byresearchers of the field, as a base method for comparison ofvariations of the algorithm.

3 Related works

In optimization theory we are looking for an optimizationalgorithm that can locate the global optimum point on ev-ery problem regardless of its difficulty and algorithm’s start-ing point. In 1997 Wolpert and Macready [4] proved that“averaged over all possible problems or cost functions, theperformance of all search algorithms is exactly the same”.This means that no algorithm is better on average than otheralgorithms. Although reaching to the most excellent algo-rithm for all problems is impossible, we are not coping withall possible problems or cost functions, therefore we are stilllooking for an algorithm with high accuracy on common anduseful problems.

Any optimization algorithm must do two tasks, explo-ration and exploitation. Exploration means that searching re-gions of search space to find promising regions and exploita-tion means that concentrating on a promising area to refinethe solution. Some problems needs more exploration andless exploitation while other problems needs more exploita-tion and less exploration. Every good optimization methodmust be able to balance these two contradictive tasks. Usu-ally algorithms are trying to explore search space in earlysteps and then they try to exploit hopeful areas.

Up to now many attempts have been made to deal withPSO algorithm’s premature convergence. Some of these at-tempts were successful to improve PSO algorithm on globaloptimization problems. Previous works have tried differentapproaches for improving basic PSO algorithm. In this sec-tion we introduce studies about important previous attemptsand classify them according to the changes made on the ba-sic algorithm. In this section we have examined seven typesof approaches for dealing with premature convergence.

3.1 Changing the initialization method of the algorithm

Some methods have tried to improve the initializationmethod of the PSO in order to have a better explorationin early iterations of the algorithms. Pant et al. [5] haveused different probability distributions for initializing theswarm. In another work Pant et al. utilized low discrep-ancy sequences (discrepancy is deviation of the random se-quence from the true uniform distribution, low discrepancy

sequences are less random but more uniform than pseudo-random numbers) for better initializing particles [6]. Plow-ing PSO is another approach for initializing PSO effectively,which uses a kind of random search for improving initializa-tion [7].

3.2 Changing the neighborhood pattern of the swarm

Some other methods have changed the neighborhood struc-ture of the swarm in order to get to a compromise be-tween exploration and exploitation. Whatever neighborhoodof particles in swarm is more coupled, the flow of experi-ences between particles become faster and therefore prema-ture convergence is more probable. Various methods havebeen suggested for changing the neighborhood structure. Forexample in fully informed PSO [8] each particle not onlyimpressed by best of its neighbor but also impressed by allof its neighbors. Suganthan have used Euclidian distance forselecting neighbors [9]. Kennedy and Mendes in [10] haveexamined the impact of various social topology graphs onPSO performance. In basic PSO algorithm each particle con-verges to median of its own and its neighbor’s best position,Kennedy in barebones PSO [11] has used randomly gen-erated positions around the middle of these two positionsinstead of particle’s best position and best swarm positionin velocity equation. Standard PSO (SPSO-07, SPSO-2011)[12] is an improved version and current standard of PSOwhich uses a random topology for choosing the local bestparticles adaptively.

3.3 Adjustment of basic algorithm parameters

Some researchers have tried to achieve a compromise be-tween exploration and exploitation by adjusting historicalPSO algorithm parameters [13–16]. For example whateverthe value of inertia weight is greater, the exploration abilityof the swarm is more and vice versa. So some methods ad-just inertia weight so that exploration in early steps is highand decrease it during running of algorithm. Constriction co-efficient model [3] itself is another method that tries to adjusthistorical PSO parameters for improving it.

3.4 Hybrid methods

There exists various studies that have mixed PSO algorithmby other optimization methods [17–19] or other method’sheuristics and concepts. For example Chen et al. have mixedPSO with Extremal optimization [17]. Angeline [20] hashybridized PSO with the selection operator of genetic al-gorithm and some other methods have combined PSO withcrossover and mutation operators of the genetic algorithm[21–25]. The proposed method in this paper could be classi-fied as hybrid method because it uses the mutation conceptfrom genetic algorithms to improve the PSO algorithm.

Page 4: LADPSO Published

M.S. Norouzzadeh et al.

3.5 Multi-swarm methods

Some techniques have tried the idea of utilizing more thanone swarm for doing optimization. These swarms may co-operate or compete for doing their tasks. For example co-operative split PSO [26] has a number of sub-swarms thateach one is responsible for optimization of a subset of thesolution. In predator-prey PSO [27], Silva has inspired bypredator-prey relationship in nature and introduced predatorswarm to PSO which is useful for keeping diversity withinthe swarm. The predator swarm follows global best positionof the (prey) swarm so the preys fear and do not get closeto global best position very much so the swarm keeps itsdiversity.

3.6 Multi-start methods

One of the main reasons of premature convergence is thelack of diversity within the swarm. In PSO when a parti-cle reaches to an optima, it rapidly attracts its neighborsand therefore, the swarm concentrates on the founded op-timum. Many techniques [28–30] have been suggested forkeeping diversity within swarm by injecting diversity to theswarm by reinitializing particles or their velocities. Thesetechniques are differ based on when they inject diversityto the swarm and how they do diversification. For examplesome methods randomize velocity vectors while the othersrandomize the position of the particles. We utilized randomrestart technique in our proposed algorithm for injecting di-versity to the swarm.

3.7 Other methods

Several methods exist that cannot simply be classified asabove types. For example Blackwell et al. [31] have intro-duced charged PSO in which repulsion force has been in-troduced to PSO. In charged PSO each particle has an elec-trostatic charge that has impact on the behavior of the parti-cle. Particles with the same electrostatic charge repulse eachother and so we can ensure of the existence of diversity inthe swarm.

4 The proposed method

Our proposed algorithm is based on the Standard PSO al-gorithm. It means that our suggested algorithm does all ofStandard PSO computations in every iterations, moreoverwe have utilized two operators and a simple fuzzy con-trol system in order to improve Standard PSO algorithm onglobal optimization problems. The first operator or plow op-erator is used for efficient initializing of the algorithm. Thesecond operator or mutation operator is used for escaping

Fig. 1 Pseudo code of plow operator

from local optima and finally the fuzzy control system isused for conducting the search and effectively using the mu-tation operator, with the aim of facilitating exploration andexploitation. We will explain details of these operators andcontrol system in addition to description of our algorithmwhich we named Light Adaptive PSO (LADPSO) in the fol-lowing sub-sections. Note that all pseudo codes are assum-ing minimization problems.

4.1 Plow operator

The plow operator has utilized the exploration ability ofblind random search for making initialization of the PSOmore effective [7]. Plow operator performs on one particleof the swarm. Plow operator tries to reach a better solutionby iteratively changing the variable of one dimension in acandidate solution by uniform random numbers while otherdimensions remain fixed. If the new generated position isbetter than the previous one, then it keeps the new positionand drops it otherwise. This procedure gets done for each di-mension k times. Since this process is like plowing in agri-culture and prepares the algorithm to reach to a better solu-tion, this process has been named as plow. The pseudo codeof plow operator is represented in Fig. 1.

As we mentioned earlier, plow operator changes the valueof each dimension k times. The original paper [7] uses theplow operator only for global best location of the swarmwith k = 50, but we use the plow operator for all particlesonce at the beginning of the algorithm with k = 20. How-ever plow operator may be used several times during execu-tion of the algorithm for various reasons. For example plowoperator may be used to facilitate exploration, helping thealgorithm to escape from local optima or even injecting di-versity to the swarm. Usefulness of the plow operator hasbeen reported on [7].

4.2 Mutation operator

Up to now mutation has been combined with PSO in manyprevious works. We have referenced to some of them in pre-

Page 5: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

Fig. 2 Pseudo code of the adaptive mutation operator

vious section. Several methods including uniform mutation,Gaussian mutation, Levy mutation and adaptive mutationwhich choose the type of mutation based on problems condi-tions have been suggested for performing mutation in PSO.In our work, we have added a Gaussian mutation to the par-ticles. The amplitude of the Gaussian mutation is controlledby the fuzzy system. The control strategy designed so thatthe amplitude of the Gaussian noise is high in the early stepsto help the algorithm to explore more effectively and it willbe decreased to facilitate exploitation in the last phases.

Like the plow operator, mutation operator changes thevalue of one dimension k times while other variables remainconstant. This change is performed by replacing the value ofthe variable by a random number with normal distributionwith average of variable’s old value and a determined vari-ance. The value of variance and k is set automatically by thefuzzy control system. Mutation operator acts on global bestlocation of the entire swarm. Like multi restart techniques,in our algorithm after each application of mutation operator,we have reinitialized some of population randomly (exceptthe global best particle) to inject diversity to the swarm; thenumber of particles to reinitialize is determined by fuzzycontrol system too.

The pseudo code of the mutation operator has been de-picted in Fig. 2 where “randn” indicates a normal randomnumber with zero mean and unity standard deviation.

4.3 Proposed algorithm (LADPSO)

We have utilized two above operators in order to improv-ing standard PSO algorithm. As we mentioned earlier theplowing operator has been used for effective initializationand the mutation operator has been used for escaping fromlocal optima. In LADPSO algorithm plowing is used onlyonce at beginning of the algorithm, on the other hand themutation operator acts when a fuzzy control system triggersit. We have used control system with the aim of helping thealgorithm to converge to global optimum and optimal useof computational resources. Usually on similar works ad-junct operators have been used regularly for example onetime on every few iterations. This approach does not seemefficient because on some conditions the algorithm uses op-erators unduly or the algorithm could not use them when it

Fig. 3 Pseudo code of the LADPSO algorithm

needs to use them. In this paper, we have used a compu-tationally inexpensive fuzzy control system for determiningthe exact timing of operator’s usage and configuration of theoperator. This control system specifies whether we must useor not use the mutation operator based on swarm’s condi-tions. Also the fuzzy system specifies the type of mutationoperator. The details of our control system will be explainedin next section.

In LADPSO, in each iteration, if the control system de-cides that we need to perform the mutation operator, thenit will be done. When the swarm starts to converge we usethe mutation operator to help the swarm to escape from lo-cal optima. The proposed algorithm first tries to reach to agood start point by performing the plow operator on the en-tire swarm. In consequent iterations, if the swarm starts toconverge then the fuzzy control system decides to do muta-tion, during the mutation the algorithm tries to escape fromlocal optimum and to inject some diversity to the swarm. Ifafter some mutations the swarm was unable to get better re-sult, the amplitude of Gaussian mutation is decreased to helpthe algorithm to do more exploitation. In fact the aim of thefuzzy system is to help both the exploration and exploitationwhen the algorithm needs to them.

Pseudo code and the flowchart of the LADPSO are shownin Figs. 3 and 6.

4.4 Fuzzy control system

We have used a fuzzy approach to cope with premature con-vergence in this paper. Our fuzzy control system tries to de-tect premature convergence by examining swarm conditionsand use necessary reaction (mutation operator timing andsettings) to deal with it. In this section we will explain thedetails of our fuzzy control. Our control system has beendesigned based on fuzzy approach; we have defined two in-put variables for our system. The first one is swarm radius;

Page 6: LADPSO Published

M.S. Norouzzadeh et al.

Fig. 4 Membership function of swarm radius

Fig. 5 Membership of stagnation

swarm radius is mean of the standard deviations of the di-mensions of particles within swarm. The swarm radius hasbeen computed based on following equations (4) and (5).The second input variable is the number of iterations whichthe swarm does not success to improve the solution or thenumber of stagnant iterations.

radius(D) = STD(D) (4)

Swarm_Radius =∑dim

D=1 STD(D)

dim(5)

To deal with normal variables for all problems we have de-fined the membership functions of input variables as showedon Figs. 4 and 5.

We have designed a table-based inference engine for ourproposed fuzzy system. The inference table which definesour system strategy is depicted on Table 1. We have plannedtwo setting for the mutation operator. These setting are de-signed so that it could help exploration in early phases andalso assist exploitation on the last phases. The details ofthese settings are shown on Table 2, where k is the k param-eter on the mutation operator, sigma is multiplied to searchspace boundaries (upper limit-lower limit) and produces thevariance of the Gaussian mutation. The reinitialize propor-tion is the proportion of the swarm that must be randomlyreinitialized. Our suggested algorithm does not have any ex-tra parameters for setting by user. The values of decisiontable and membership functions of fuzzy variables are de-termined based on our experimental simulations. We havetried to tune these parameters accurately, but by using more

Table 1 Decision table of fuzzy controller

Stagnation Swarm radius

Very low Low Mid High

Low M1 M1 NOP NOP

Mid M2 M1 M1 NOP

High M2 M2 M1 M1

Table 2 Parameters of mutation

k Sigma Reinitialize proportion

M1 10 0.05 0.05

M2 10 0.01 0.00

computational resources it is possible to do a Comprehen-sive experimental simulation to set these parameters moreaccurately. Note that in Table 1, NOP means that do not anyoperation; just leave the standard PSO to go further.

5 Results and discussion

We have organized our simulations in two sections. Onthe first section we have evaluated the algorithm on firstten functions designed for the special session on real opti-mization of CEC 2005 [32]. These benchmark functions arelargely used and result of many various algorithm are avail-able for them. Also we have compared the results with theresults of standard PSO 2011. On the second section we haveapplied our algorithm on a real life optimization problemand compared it with results of some other methods. Notethat all simulations of the proposed algorithm were run onIntel Pentium Dual core with 1.7 GHz CPU and 2 GB ofmemory.

5.1 CEC 2005 functions

For this section, we have set the number of dimensions to(D =)10 and (D =)30, the swarm size is set to 40, andmaximum function evaluation number is set to 10000 ∗ D,the other parameters of algorithm is like standard PSOalgorithm[12]. We have reported the results of standard PSOalgorithm on Tables 4 and 5. Also we have shown the resultsof our method on Tables 6 and 7. We have reported the resultbased on settings suggested by [32]. Our results are compet-itive with some other algorithms which they tested on thesebenchmark functions for example [33–35] we have quotedresults of [33] as an example for comparison on Tables 8and 9. Also we have reported Algorithm Complexity basedon [32] in Table 8, where T0 is the Computing time for thecode on Fig. 7, T1 is computing time of the function 3 for200000 evaluations and T̂2 is the average of the complete

Page 7: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

Fig. 6 Flowchart of LADPSOalgorithm

Fig. 7 The code for evaluating T0

computing time for the algorithm with 200000 evaluationsfor 5 times.

From the results above, it could be said that, our methodis promising for improving the standard algorithm, and itwas able to locate the global minimum on most of problems,with acceptable accuracy. Also from the results of Table 3 itis clear that our algorithm is more computationally efficientthan its standard version

5.2 Real life problem

In this section we have reported the results of applyingour algorithm to the Lennard-Jones potential problem. TheLennard-Jones potential (L-J potential or 6-12 potential) is amathematically simple model that describes the interactionbetween a pair of neutral atoms or molecules. Minimizingthe Lennard-Jones potential is an unconstrained global opti-

mization problem with many local optimum points. Becauseof its many local optimums the results of algorithms on min-imizing this function is very discriminant for comparison ofalgorithms. In minimizing this function we are seeking to ageometrical arrangement of n atoms with the lowest poten-tial energy. We have reported the average results of our al-gorithm for 5 to 15 atoms on Table 10 along with the resultsof some other methods like [36] and its global minimum forcomparison. The swarm size is 40 and the function evalua-tion count is set to 5000 + 3000 ∗ n ∗ (n − 1). We have re-ported the average of results for running the test for 20 times.

Also in Table 11 we have compared the results of ouralgorithm to one of the most comprehensive studies aboutLennard-Jones function optimization [37]. In Table 11 thebest value achieved over 10 runs, for various number ofatoms has been reported. As we could see, our method has agood performance on Lennard-Jones potential problem.

5.3 Analyzing the results

We have used Mann–Whitney–Wilcoxon (MWW) method[38, 39] to compare the results of SPSO and LADPSO al-gorithms. MWW is a non-parametric statistical analysis forexamining whether one of two samples of independent ob-servations have a tendency to have smaller values than the

Page 8: LADPSO Published

M.S. Norouzzadeh et al.

Tabl

e4

Err

orva

lues

ofSP

SO20

11fo

rpr

oble

ms

1-10

(10D

)

FEs/

func

tion

12

34

56

78

910

103

min

1.15

39E

+02

1.15

09E

+03

3.11

60E

+06

5.32

83E

+02

1.87

04E

+03

3.93

46E

+05

1.39

00E

+01

2.04

23E

+01

4.21

29E

+01

3.25

57E

+01

73.

4795

E+

021.

8323

E+

036.

7645

E+

061.

9885

E+

033.

4304

E+

037.

7151

E+

054.

7139

E+

012.

0689

E+

014.

9606

E+

015.

3311

E+

01

134.

1273

E+

022.

3802

E+

031.

0634

E+

073.

0851

E+

033.

7821

E+

031.

4911

E+

066.

1102

E+

012.

0792

E+

015.

4903

E+

016.

1824

E+

01

194.

5542

E+

022.

8692

E+

031.

6214

E+

074.

3814

E+

034.

3999

E+

032.

7670

E+

066.

9930

E+

012.

0842

E+

015.

6695

E+

016.

7036

E+

01

max

6.21

31E

+02

3.23

08E

+03

2.46

06E

+07

6.99

24E

+03

5.32

74E

+03

6.42

35E

+06

1.30

45E

+02

2.09

30E

+01

6.73

48E

+01

7.21

16E

+01

mea

n3.

9501

E+

022.

3480

E+

031.

1482

E+

073.

2882

E+

033.

8881

E+

032.

0175

E+

066.

1812

E+

012.

0748

E+

015.

3631

E+

015.

9921

E+

01

std

1.22

93E

+02

6.18

74E

+02

6.53

27E

+06

1.72

12E

+03

8.17

26E

+02

1.62

13E

+06

2.62

67E

+01

1.36

32E

-01

5.55

98E

+00

9.88

30E

+00

104

min

4.83

39E

-07

1.63

06E

-05

4.18

56E

+04

1.26

36E

-03

4.29

29E

-03

4.40

28E

+00

5.60

55E

-02

2.01

50E

+01

5.53

83E

+00

5.78

30E

+00

77.

3028

E-0

74.

7178

E-0

41.

3289

E+

058.

9430

E-0

31.

4527

E-0

28.

5542

E+

001.

6503

E-0

12.

0351

E+

019.

8777

E+

001.

1343

E+

01

138.

1489

E-0

79.

7080

E-0

41.

8476

E+

051.

9373

E-0

21.

9517

E-0

21.

0229

E+

022.

4216

E-0

12.

0375

E+

011.

3254

E+

011.

4659

E+

01

198.

6275

E-0

72.

2922

E-0

33.

7931

E+

053.

7432

E-0

24.

0156

E-0

28.

2742

E+

022.

9198

E-0

12.

0523

E+

011.

8500

E+

011.

7670

E+

01

max

9.83

29E

-07

1.34

13E

-02

9.77

53E

+05

1.15

59E

+00

8.50

52E

-02

6.73

80E

+03

4.69

71E

-01

2.06

11E

+01

2.53

48E

+01

2.38

27E

+01

mea

n7.

8889

E-0

72.

2670

E-0

32.

7539

E+

057.

7618

E-0

22.

8589

E-0

21.

1759

E+

032.

3943

E-0

12.

0390

E+

011.

4226

E+

011.

4858

E+

01

std

1.25

51E

-07

3.30

99E

-03

2.05

03E

+05

2.28

65E

-01

2.11

55E

-02

2.05

25E

+03

1.07

00E

-01

1.34

79E

-01

5.25

04E

+00

5.07

32E

+00

105

min

5.39

71E

-07

4.48

58E

-07

4.33

65E

+03

6.41

67E

-07

6.96

08E

-07

1.68

95E

-02

9.62

10E

-03

2.01

12E

+01

1.98

99E

+00

2.06

68E

+00

77.

3741

E-0

77.

8312

E-0

72.

1273

E+

048.

3495

E-0

79.

1386

E-0

73.

0600

E-0

22.

9562

E-0

22.

0201

E+

013.

9798

E+

003.

7657

E+

00

137.

9384

E-0

78.

7540

E-0

73.

5677

E+

049.

3487

E-0

79.

4535

E-0

74.

3523

E-0

24.

6796

E-0

22.

0236

E+

014.

7817

E+

005.

2163

E+

00

199.

0362

E-0

79.

7090

E-0

75.

2422

E+

049.

7272

E-0

79.

6729

E-0

71.

0220

E+

025.

6552

E-0

22.

0302

E+

015.

9698

E+

006.

2968

E+

00

max

9.78

47E

-07

9.98

12E

-07

9.02

86E

+04

9.99

80E

-07

9.99

42E

-07

2.78

00E

+02

5.22

46E

-01

2.03

68E

+01

1.59

40E

+01

9.83

10E

+00

mea

n8.

0304

E-0

78.

5995

E-0

73.

7918

E+

048.

9410

E-0

79.

2681

E-0

75.

0578

E+

016.

7519

E-0

22.

0241

E+

015.

6251

E+

005.

1169

E+

00

std

1.28

02E

-07

1.37

76E

-07

2.24

58E

+04

1.03

23E

-07

6.89

07E

-08

8.68

87E

+01

9.82

47E

-02

7.11

37E

-02

3.16

87E

+00

2.06

06E

+00

Page 9: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

Tabl

e5

Err

orva

lues

ofSP

SO20

11fo

rpr

oble

ms

1-10

(30D

)

FEs/

func

tion

12

34

56

78

910

103

min

3.12

32E

+02

1.20

07E

+04

2.47

22E

+07

1.81

95E

+04

6.14

91E

+03

1.94

61E

+06

1.45

21E

+02

2.09

98E

+01

1.72

00E

+02

1.80

29E

+02

77.

0195

E+

021.

7960

E+

043.

6941

E+

072.

2750

E+

047.

7983

E+

036.

2680

E+

062.

4781

E+

022.

1101

E+

012.

1750

E+

022.

2878

E+

02

138.

5861

E+

021.

8943

E+

044.

2360

E+

072.

5047

E+

048.

6130

E+

031.

2253

E+

073.

1822

E+

022.

1156

E+

012.

2487

E+

022.

4412

E+

02

191.

1197

E+

032.

0358

E+

046.

2464

E+

072.

8751

E+

049.

5131

E+

032.

7609

E+

073.

9816

E+

022.

1189

E+

012.

3701

E+

022.

6437

E+

02

max

1.72

06E

+03

2.77

07E

+04

8.77

08E

+07

3.86

13E

+04

1.16

67E

+04

5.96

57E

+07

8.29

58E

+02

2.12

37E

+01

2.59

97E

+02

2.91

69E

+02

mea

n9.

1242

E+

021.

9402

E+

044.

7832

E+

072.

5673

E+

048.

6904

E+

031.

8587

E+

073.

2889

E+

022.

1144

E+

012.

2512

E+

022.

4327

E+

02

std

3.18

47E

+02

4.10

23E

+03

1.66

73E

+07

4.90

48E

+03

1.52

65E

+03

1.63

49E

+07

1.42

70E

+02

6.25

36E

-02

1.78

61E

+01

2.85

78E

+01

104

min

8.01

26E

-07

1.76

29E

+01

1.34

34E

+06

4.12

80E

+03

2.96

19E

+03

2.48

56E

+01

4.62

81E

-02

2.08

07E

+01

3.58

18E

+01

3.58

18E

+01

78.

7817

E-0

73.

2873

E+

012.

0793

E+

065.

3854

E+

034.

3196

E+

032.

0192

E+

021.

3710

E-0

12.

0920

E+

014.

6825

E+

015.

9632

E+

01

139.

5457

E-0

76.

4617

E+

012.

5578

E+

066.

3613

E+

034.

9577

E+

034.

3300

E+

022.

1258

E-0

12.

0958

E+

016.

1966

E+

019.

1185

E+

01

199.

7326

E-0

78.

7396

E+

012.

9987

E+

068.

2697

E+

035.

3113

E+

031.

1023

E+

033.

0652

E-0

12.

1007

E+

019.

5180

E+

011.

2468

E+

02

max

9.99

03E

-07

1.60

48E

+02

4.45

25E

+06

1.17

01E

+04

6.17

53E

+03

1.03

10E

+04

9.08

20E

-01

2.10

76E

+01

1.56

28E

+02

1.82

10E

+02

mea

n9.

2528

E-0

76.

6370

E+

012.

5926

E+

066.

8501

E+

034.

9010

E+

031.

7795

E+

032.

5857

E-0

12.

0957

E+

017.

3631

E+

019.

5081

E+

01

std

6.13

35E

-08

3.80

12E

+01

7.52

00E

+05

2.06

14E

+03

8.16

84E

+02

2.87

54E

+03

2.01

78E

-01

7.39

35E

-02

3.39

02E

+01

4.18

87E

+01

105

min

7.89

82E

-07

9.32

96E

-07

7.92

07E

+04

7.39

03E

+00

3.38

13E

+03

1.71

40E

+01

9.94

53E

-03

2.07

22E

+01

2.48

74E

+01

2.78

59E

+01

79.

0581

E-0

79.

8250

E-0

71.

8243

E+

052.

3050

E+

014.

2821

E+

032.

0745

E+

019.

9990

E-0

32.

0820

E+

014.

3778

E+

014.

4773

E+

01

139.

4793

E-0

79.

8978

E-0

72.

2671

E+

053.

6253

E+

014.

6013

E+

031.

4877

E+

022.

4573

E-0

22.

0850

E+

015.

6635

E+

015.

1738

E+

01

199.

8749

E-0

79.

9408

E-0

73.

0744

E+

056.

9441

E+

015.

1330

E+

037.

2051

E+

023.

6894

E-0

22.

0891

E+

016.

5667

E+

015.

6713

E+

01

max

9.99

24E

-07

9.98

47E

-07

5.07

08E

+05

2.31

62E

+02

6.31

36E

+03

4.40

44E

+03

8.10

87E

-02

2.10

05E

+01

1.80

47E

+02

7.66

12E

+01

mea

n9.

3626

E-0

79.

8438

E-0

72.

4853

E+

055.

4290

E+

014.

6923

E+

034.

6342

E+

022.

5243

E-0

22.

0856

E+

016.

4551

E+

015.

3327

E+

01

std

5.46

211E

-08

1.59

4E-0

810

9334

.06

48.4

1229

771

2.72

878

892.

1820

10.

0179

063

0.06

7497

534

.209

586

11.9

0755

5

Page 10: LADPSO Published

M.S. Norouzzadeh et al.

Tabl

e6

Err

orva

lues

ofL

AD

PSO

for

prob

lem

s1-

10(1

0D)

FEs/

func

tion

12

34

56

78

910

103

min

3.14

74E

+00

5.97

16E

+03

6.80

73E

+06

5.69

97E

+03

2.53

19E

+03

3.90

80E

+04

7.45

02E

+02

2.02

53E

+01

4.71

94E

+00

4.74

08E

+01

78.

3217

E+

001.

0305

E+

041.

0182

E+

071.

2180

E+

044.

6451

E+

032.

7180

E+

051.

1178

E+

032.

0452

E+

017.

1692

E+

006.

3064

E+

01

131.

6061

E+

011.

3250

E+

041.

3308

E+

071.

6530

E+

045.

6153

E+

033.

8131

E+

051.

3390

E+

032.

0494

E+

018.

7066

E+

006.

8110

E+

01

194.

2378

E+

011.

5201

E+

041.

7371

E+

072.

0073

E+

046.

5441

E+

034.

6673

E+

051.

3571

E+

032.

0551

E+

019.

3167

E+

007.

1814

E+

01

max

8.38

00E

+01

1.79

89E

+04

2.99

86E

+07

2.15

86E

+04

7.42

98E

+03

6.58

48E

+05

1.36

99E

+03

2.06

19E

+01

1.03

83E

+01

7.45

07E

+01

mea

n2.

8667

E+

011.

2582

E+

041.

4375

E+

071.

5536

E+

045.

5389

E+

033.

5928

E+

051.

2207

E+

032.

0493

E+

018.

4098

E+

006.

5700

E+

01

std

2.78

69E

+01

3.49

82E

+03

5.97

45E

+06

4.91

48E

+03

1.30

67E

+03

1.70

65E

+05

1.98

38E

+02

9.71

79E

-02

1.41

91E

+00

7.85

44E

+00

104

min

3.86

99E

-01

7.40

20E

+01

7.14

49E

+05

1.71

05E

+02

6.70

14E

+01

1.97

21E

+02

1.06

79E

+00

2.02

60E

+01

3.79

01E

+00

1.75

71E

+01

71.

1261

E+

001.

3100

E+

021.

6784

E+

063.

4252

E+

021.

3931

E+

021.

1367

E+

031.

2441

E+

002.

0482

E+

015.

4865

E+

003.

2576

E+

01

131.

8927

E+

002.

0350

E+

022.

5234

E+

064.

9128

E+

022.

7293

E+

021.

7516

E+

031.

3780

E+

002.

0548

E+

016.

2237

E+

003.

9626

E+

01

194.

0834

E+

003.

1862

E+

023.

9012

E+

066.

1154

E+

023.

8794

E+

023.

0385

E+

031.

5380

E+

002.

0571

E+

016.

7404

E+

004.

5093

E+

01

max

5.14

83E

+00

5.13

25E

+02

5.00

82E

+06

8.22

93E

+02

6.01

62E

+02

6.55

63E

+03

2.53

11E

+00

2.06

23E

+01

8.44

83E

+00

5.05

36E

+01

mea

n2.

4352

E+

002.

3931

E+

022.

7851

E+

064.

6938

E+

022.

9016

E+

022.

2883

E+

031.

4854

E+

002.

0512

E+

016.

1684

E+

003.

7599

E+

01

std

1.59

64E

+00

1.36

29E

+02

1.22

17E

+06

1.96

34E

+02

1.59

89E

+02

1.83

26E

+03

3.76

02E

-01

9.27

11E

-02

1.16

28E

+00

8.97

23E

+00

105

min

2.56

05E

-07

4.04

98E

-07

5.58

84E

+03

7.86

60E

-07

7.20

06E

-07

4.31

30E

-01

9.99

67E

-03

2.00

48E

+01

3.91

19E

-03

1.98

99E

+00

77.

7901

E-0

79.

1245

E-0

72.

6539

E+

049.

4924

E-0

78.

1613

E-0

71.

6660

E+

002.

4590

E-0

22.

0304

E+

017.

5708

E-0

12.

9854

E+

00

138.

6851

E-0

79.

5076

E-0

73.

8194

E+

049.

7118

E-0

79.

3372

E-0

73.

5934

E+

004.

4283

E-0

22.

0334

E+

012.

4443

E+

003.

9798

E+

00

199.

2751

E-0

79.

8094

E-0

77.

1675

E+

049.

8974

E-0

79.

7750

E-0

72.

3480

E+

016.

8866

E-0

22.

0365

E+

013.

3726

E+

005.

9697

E+

00

max

9.51

70E

-07

9.86

20E

-07

8.48

03E

+04

9.99

55E

-07

9.96

02E

-07

5.60

94E

+01

7.87

75E

-02

2.03

81E

+01

3.88

97E

+00

6.96

51E

+00

mea

n8.

1585

E-0

79.

0734

E-0

74.

7083

E+

049.

4849

E-0

78.

9888

E-0

71.

5314

E+

014.

4214

E-0

22.

0316

E+

012.

0849

E+

004.

2733

E+

00

std

1.65

48E

-07

1.34

57E

-07

2.59

74E

+04

6.10

95E

-08

8.99

36E

-08

1.79

95E

+01

2.25

14E

-02

7.19

50E

-02

1.42

13E

+00

1.58

01E

+00

Page 11: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

Tabl

e7

Err

orva

lues

ofL

AD

PSO

for

prob

lem

s1-

10(3

0D)

FEs/

func

tion

12

34

56

78

910

103

min

2.07

79E

+01

5.66

83E

+03

8.79

51E

+06

7.49

08E

+03

1.59

29E

+04

3.49

28E

+06

4.67

81E

+03

2.07

73E

+01

3.46

84E

+01

3.23

97E

+02

73.

5562

E+

019.

2440

E+

031.

6708

E+

071.

0049

E+

041.

9252

E+

044.

3747

E+

065.

1484

E+

032.

0972

E+

013.

8951

E+

013.

5364

E+

02

134.

8565

E+

011.

1642

E+

042.

0291

E+

071.

1824

E+

042.

1243

E+

046.

1217

E+

065.

1837

E+

032.

1014

E+

014.

0360

E+

013.

7186

E+

02

195.

3055

E+

011.

2758

E+

042.

2463

E+

071.

3230

E+

042.

2523

E+

047.

2519

E+

065.

2024

E+

032.

1047

E+

014.

2480

E+

013.

9072

E+

02

max

1.01

23E

+02

1.69

71E

+04

2.79

74E

+07

1.56

12E

+04

2.42

08E

+04

8.34

99E

+06

5.23

95E

+03

2.10

73E

+01

4.38

98E

+01

4.07

74E

+02

mea

n4.

7321

E+

011.

1217

E+

041.

9784

E+

071.

1639

E+

042.

0927

E+

045.

9427

E+

065.

1536

E+

032.

1002

E+

014.

0496

E+

013.

7342

E+

02

std

1.72

08E

+01

2.79

11E

+03

4.74

31E

+06

2.13

48E

+03

2.35

93E

+03

1.57

31E

+06

1.13

60E

+02

6.38

00E

-02

2.21

45E

+00

2.28

32E

+01

104

min

4.36

90E

-01

3.44

90E

+01

7.02

54E

+05

9.46

90E

+02

4.57

95E

+03

2.91

43E

+02

1.30

42E

+00

2.08

47E

+01

9.00

18E

+00

3.67

60E

+01

77.

3626

E-0

15.

4928

E+

011.

3057

E+

061.

3878

E+

035.

3579

E+

034.

8023

E+

021.

9922

E+

002.

0985

E+

011.

9135

E+

015.

6085

E+

01

139.

9280

E-0

17.

0801

E+

011.

7703

E+

061.

7559

E+

035.

5637

E+

037.

1231

E+

023.

0081

E+

002.

1016

E+

012.

3402

E+

018.

1358

E+

01

191.

1016

E+

007.

6666

E+

012.

2772

E+

061.

9259

E+

036.

5134

E+

031.

4329

E+

033.

4617

E+

002.

1044

E+

012.

7773

E+

018.

9431

E+

01

max

1.94

13E

+00

9.64

10E

+01

2.90

27E

+06

2.28

41E

+03

7.16

84E

+03

6.52

88E

+03

4.83

18E

+00

2.10

88E

+01

3.68

74E

+01

9.71

08E

+01

mea

n1.

0498

E+

006.

6754

E+

011.

7700

E+

061.

6702

E+

035.

9192

E+

031.

6087

E+

032.

7810

E+

002.

1011

E+

012.

3740

E+

017.

2203

E+

01

std

4.11

62E

-01

1.65

25E

+01

6.20

00E

+05

4.00

73E

+02

7.69

00E

+02

1.91

46E

+03

9.31

92E

-01

5.50

64E

-02

6.56

01E

+00

1.89

60E

+01

105

min

7.18

95E

-07

8.26

68E

-07

4.02

45E

+04

3.53

02E

+00

2.53

40E

+03

4.17

11E

+00

9.93

42E

-03

2.07

50E

+01

8.86

36E

-03

2.48

74E

+01

79.

3850

E-0

71.

5025

E-0

66.

2722

E+

045.

4150

E+

003.

8358

E+

038.

0561

E+

019.

9972

E-0

32.

0870

E+

011.

2714

E+

013.

6813

E+

01

139.

7278

E-0

71.

9957

E-0

67.

1059

E+

047.

4276

E+

004.

1526

E+

039.

4653

E+

019.

9998

E-0

32.

0897

E+

011.

5968

E+

014.

2783

E+

01

199.

8254

E-0

72.

6610

E-0

68.

3794

E+

049.

1478

E+

004.

4287

E+

031.

3062

E+

022.

4573

E-0

22.

0939

E+

011.

8953

E+

015.

1738

E+

01

max

9.91

93E

-07

4.14

96E

-06

1.08

88E

+05

1.23

13E

+01

5.08

47E

+03

1.63

40E

+02

2.95

52E

-02

2.09

66E

+01

2.36

95E

+01

6.56

68E

+01

mea

n9.

5249

E-0

72.

1342

E-0

67.

3452

E+

047.

4768

E+

004.

0612

E+

039.

4867

E+

011.

6313

E-0

22.

0896

E+

011.

4859

E+

014.

5014

E+

01

std

5.60

76E

-08

9.30

99E

-07

1.71

99E

+04

2.47

64E

+00

6.12

98E

+02

4.88

63E

+01

7.76

82E

-03

5.35

80E

-02

6.23

13E

+00

1.12

63E

+01

Page 12: LADPSO Published

M.S. Norouzzadeh et al.

Table 8 Error values of [32] for problems 1-10 (10D)

FES Prob. 1 2 3 4 5 6 7 8 9 10

103 min 1.88E-3 6.22E+00 1.07E+06 8.86E+01 6.71E+00 1.12E+01 6.51E-1 2.05E+01 5.02E+00 6.11E+00

7th 7.56E-3 2.40E+01 5.73E+06 5.81E+02 1.25E+01 1.27E+02 7.28E-1 2.07E+01 1.43E+01 3.63E+01

med. 1.34E-2 4.35E+01 1.14E+07 1.58E+03 2.16E+01 1.91E+03 9.06E-1 2.08E+01 2.75E+01 4.56E+01

19th 2.46E-2 8.36E+01 2.39E+07 4.28E+03 3.76E+01 3.15E+03 9.74E-1 2.08E+01 4.70E+01 5.14E+01

max 4.68E-2 1.70E+02 5.67E+07 1.75E+04 6.71E+01 9.89E+05 1.11E+00 2.09E+01 5.91E+01 6.37E+01

mean 1.70E-2 5.83E+01 1.68E+07 3.00E+03 2.81E+01 4.30E+04 8.69E-1 2.08E+01 3.07E+01 4.17E+01

std 1.20E-2 4.74E+01 1.59E+07 3.83E+03 1.85E+01 1.97E+05 1.38E-1 9.93E-2 1.72E+01 1.57E+01

104 min 1.84E-9 2.21E-9 2.21E-9 1.71E-9 2.46E-9 3.29E-9 9.31E-10 2.03E+01 1.99E+00 2.98E+00

7th 3.75E-9 3.27E-9 4.61E-9 3.85E-9 5.02E-9 4.49E-9 2.81E-9 2.05E+01 4.97E+00 4.97E+00

med. 5.65E-9 4.53E-9 5.51E-9 4.78E-9 6.33E-9 7.37E-9 5.46E-9 2.06E+01 5.97E+00 6.96E+00

19th 6.42E-9 5.71E-9 6.58E-9 6.46E-9 8.60E-9 3.99E+00 7.77E-9 2.06E+01 7.96E+00 7.96E+00

max 9.34E-9 7.67E-9 9.66E-9 7.80E-9 9.84E-9 2.63E+02 1.48E-2 2.07E+01 1.09E+01 1.59E+01

mean 5.20E-9 4.70E-9 5.60E-9 5.02E-9 6.58E-9 1.17E+01 2.27E-3 2.05E+01 6.21E+00 7.16E+00

std 1.94E-9 1.56E-9 1.93E-9 1.71E-9 2.17E-9 5.24E+01 4.32E-3 8.62E-2 2.10E+00 3.12E+00

105 min 1.84E-9 2.21E-9 2.21E-9 1.71E-9 2.46E-9 1.44E-9 6.22E-10 2.00E+01 1.52E-10 1.50E-10

7th 3.75E-9 3.27E-9 4.61E-9 3.85E-9 5.02E-9 3.81E-9 1.65E-9 2.00E+01 3.46E-10 3.34E-10

med. 5.65E-9 4.53E-9 5.51E-9 4.78E-9 6.33E-9 4.69E-9 2.84E-9 2.00E+01 6.14E-10 5.64E-10

19th 6.42E-9 5.71E-9 6.58E-9 6.46E-9 8.60E-9 5.67E-9 5.46E-9 2.00E+01 3.50E-9 1.08E-9

max 9.34E-9 7.67E-9 9.66E-9 7.80E-9 9.84E-9 8.13E-9 7.77E-9 2.00E+01 9.95E-1 9.95E-1

mean 5.20E-9 4.70E-9 5.60E-9 5.02E-9 6.58E-9 4.87E-9 3.31E-9 2.00E+01 2.39E-1 7.96E-2

std 1.94E-9 1.56E-9 1.93E-9 1.71E-9 2.17E-9 1.66E-9 2.02E-9 3.89E-3 4.34E-1 2.75E-1

Table 9 Error values of [32] for problems 1-10 (30D)

FES Prob. 1 2 3 4 5 6 7 8 9 10

3 × 103 min 4.49 E+2 1.12 E+5 3.84 E+8 6.13 E+5 6.21 E+3 3.26 E+6 4.10 E+1 2.12 E+1 2.19 E+2 2.43 E+2

7th 5.48 E+2 1.73 E+5 8.00 E+8 1.12 E+6 9.57 E+3 7.31 E+6 9.39 E+1 2.12 E+1 2.45 E+2 2.65 E+2

med. 7.40 E+2 2.35 E+5 1.00 E+9 1.44 E+6 1.09 E+4 1.23 E+7 1.20 E+2 2.12 E+1 2.50 E+2 2.74 E+2

19th 1.04 E+3 2.94 E+5 1.38 E+9 1.87 E+6 1.24 E+4 1.99 E+7 1.59 E+2 2.13 E+1 2.66 E+2 2.88 E+2

max 1.61 E+3 3.83 E+5 2.07 E+9 3.29 E+6 1.42 E+4 6.81 E+7 3.26 E+2 2.13 E+1 2.87 E+2 3.08 E+2

mean 8.16 E+2 2.39 E+5 1.07 E+9 1.55 E+6 1.07 E+4 1.77 E+7 1.39 E+2 2.12 E+1 2.53 E+2 2.77 E+2

std 3.01 E+2 7.80 E+4 4.43 E+8 6.15 E+5 2.13 E+3 1.62 E+7 7.17 E+1 4.35 E-2 1.65 E+1 1.90 E+1

3 × 104 min 3.98 E-9 2.29 E-3 1.24 E+6 4.88 E+2 5.00 E-2 1.77 E+1 3.93 E-9 2.10 E+1 2.39 E+1 3.08 E+1

7th 4.70 E-9 1.60 E-2 3.41 E+6 1.46 E+3 1.00 E+3 2.28 E+1 4.85 E-9 2.11 E+1 4.28 E+1 4.38 E+1

med. 5.20 E-9 2.57 E-2 4.90 E+6 3.51 E+3 1.32 E+3 2.58 E+1 5.69 E-9 2.11 E+1 4.88 E+1 5.27 E+1

19th 6.10 E-9 3.99 E-2 8.21 E+6 5.18 E+4 2.04 E+3 2.22 E+2 6.95 E-9 2.11 E+1 5.47 E+1 5.87 E+1

max 7.51 E-9 7.49 E-2 1.42 E+7 2.88 E+5 3.20 E+3 2.66 E+3 2.46 E-2 2.12 E+1 7.96 E+1 8.26 E+1

mean 5.42 E-9 2.73 E-2 6.11 E+6 4.26 E+4 1.51 E+3 4.60 E+2 1.77 E-3 2.11 E+1 4.78 E+1 5.14 E+1

std 9.80 E−10 1.79 E-2 3.79 E+6 7.43 E+4 8.82 E+2 8.29 E+2 5.52 E-3 4.04 E-2 1.15 E+1 1.25 E+1

3 × 105 min 3.98 E-9 4.48 E-9 4.07 E-9 6.06 E-9 7.15 E-9 4.05 E-9 1.76 E-9 2.00 E+1 2.98 E+0 9.95 E-1

7th 4.70 E-9 5.59 E-9 4.78 E-9 8.75 E-9 8.06 E-9 5.31 E-9 4.59 E-9 2.02 E+1 4.97 E+0 5.97 E+0

med. 5.20 E-9 6.13 E-9 5.44 E-9 1.93 E+1 8.61 E-9 6.32 E-9 5.41 E-9 2.09 E+1 6.96 E+0 6.96 E+0

19th 6.10 E-9 6.85 E-9 6.16 E-9 2.72 E+3 9.34 E-9 7.52 E-9 6.17 E-9 2.10 E+1 8.95 E+0 8.95 E+0

max 7.51 E-9 8.41 E-9 8.66 E-9 1.57 E+5 2.51 E-6 3.99 E+0 7.81 E-9 2.11 E+1 1.19 E+1 1.09 E+1

mean 5.42 E-9 6.22 E-9 5.55 E-9 1.27 E+4 1.08 E-7 4.78 E-1 5.31 E-9 2.07 E+1 6.89 E+0 6.96 E+0

std 9.80 E-10 8.95 E−10 1.09 E-9 3.59 E+4 4.99 E-7 1.32 E+0 1.41 E-9 4.28 E-1 2.22 E+0 2.45 E+0

Page 13: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

Table 3 Computational complexity of LADPSO

Dim T0 T1 T̂2 (T̂2 − T1)/T0

Method – – SPSO LADPSO SPSO LADPSO

10 1.559 40.200 79.056 65.886 24.912 16.468

30 1.559 44.841 146.546 82.844 65.209 24.366

50 1.559 51.661 222.774 92.938 109.711 26.465

Table 10 Results of LADPSO for the Lennard-Jones potential prob-lem

N/Method LADPSO SPSO Results of [34] Minimum

5 −9.09772 −9.01407 −9.10385 −9.10385

6 −12.5239 −11.811 −12.7121 −12.7121

7 −16.2533 −14.4687 −16.5054 −16.5054

8 −19.4855 −17.1723 −19.2084 −19.8215

9 −23.2127 −19.5568 −22.9690 −24.1134

10 −27.3535 −22.1338 −26.8424 −28.4225

11 −31.6974 −23.8806 −31.5629 −32.7656

12 −36.4019 −24.399 −36.3692 −37.9676

13 −40.9243 −27.2597 −41.1379 −44.3268

14 −45.3636 −29.5903 −44.3883 −47.8452

15 −49.7374 −32.774 −49.7777 −52.3226

other. The test gives a p-value which indicates whether wecould reject the null hypothesis (two samples have equal me-dians) or not. We have performed MWW for all benchmarkproblems separately on significance level of 0.05. The re-sults of our statistical analysis are shown on Table 12. Wehave reported p-value of the test along with rejection or notrejection of null hypothesis. In Table 12, h = 0 indicates fail-ure of the rejection and h = 1 indicates rejection of hypoth-esis on significance level of 0.05.

Based on Table 12, we could see that on most cases thedifference between results is significant and it gets moretangible when the complexity of problem increased. Thisobservation suggests that the LADPSO seems promising toimprove the base algorithm especially on high dimensionalproblems.

6 Conclusion and future works

In this paper, we proposed a novel extension to PSO opti-mization method, called LADPSO algorithm, through com-bining two operators to the standard PSO. The suggestedapproach utilizes the fuzzy logic to conduct the standardPSO on global optimization problems. We also comparedaccuracy and robustness of our approach with standard PSO,and some other methods by using some well-known bench-mark functions. It has been depicted that our algorithm Ta

ble

11R

esul

tsof

LA

DPS

Ofo

rth

eL

enna

rd-J

ones

pote

ntia

lpro

blem

NA

ppII

App

III.

Stee

pB

FGS

DFP

LG

OB

BL

GO

Rd

Ref

.ESP

SO20

11L

AD

PSO

6−1

.230

3E+

01−1

.271

2E+

01−1

.230

3E+

01−1

.230

3E+

01−1

.230

3E+

01−1

.271

2E+

01−1

.230

3E+

01−1

.271

2E+

01−1

.271

1E+

01−1

.271

2E+

01

7−1

.553

3E+

01−1

.650

5E+

01−1

.549

9E+

01−1

.550

0E+

01−1

.550

0E+

01−1

.553

3E+

01−1

.553

3E+

01−1

.650

5E+

01−1

.650

5E+

01−1

.650

5E+

01

8−1

.982

2E+

01−1

.982

2E+

01−1

.882

9E+

01−1

.882

9E+

01−1

.882

9E+

01−1

.877

8E+

01−1

.982

2E+

01−1

.982

2E+

01−1

.982

1E+

01−1

.982

1E+

01

9−2

.411

3E+

01−2

.411

3E+

01−2

.218

1E+

01−2

.218

1E+

01−2

.218

1E+

01−2

.304

4E+

01−2

.215

6E+

01−2

.411

3E+

01−2

.216

2E+

01−2

.411

3E+

01

10−2

.754

5E+

01−2

.842

3E+

01−2

.694

3E+

01−2

.695

5E+

01−2

.693

9E+

01−2

.727

4E+

01−2

.662

3E+

01−2

.842

3E+

01−2

.509

5E+

01−2

.755

6E+

01

11−3

.276

6E+

01−3

.276

6E+

01−3

.276

6E+

01−3

.276

6E+

01−3

.276

6E+

01−2

.999

7E+

01−3

.095

4E+

01−3

.276

6E+

01−2

.885

6E+

01−3

.191

4E+

01

12−3

.617

8E+

01−3

.796

8E+

01−3

.796

8E+

01−3

.796

8E+

01−3

.796

8E+

01−3

.517

1E+

01−3

.546

2E+

01−3

.796

8E+

01−2

.942

3E+

01−3

.796

8E+

01

13−4

.139

4E+

01−4

.432

7E+

01−4

.432

7E+

01−4

.432

7E+

01−4

.432

7E+

01−3

.723

8E+

01−4

.135

6E+

01−4

.432

7E+

01−3

.147

3E+

01−4

.432

7E+

01

14−4

.784

5E+

01−4

.784

5E+

01−4

.784

5E+

01−4

.784

5E+

01−4

.784

5E+

01−4

.487

4E+

01−4

.693

5E+

01−4

.784

5E+

01−3

.271

3E+

01−4

.784

5E+

01

15−5

.232

3E+

01−5

.232

3E+

01−5

.232

3E+

01−5

.232

3E+

01−5

.232

3E+

01−4

.922

2E+

01−4

.758

3E+

01−5

.232

3E+

01−3

.819

5E+

01−5

.232

3E+

01

Page 14: LADPSO Published

M.S. Norouzzadeh et al.

Tabl

e12

Res

ults

ofM

ann–

Whi

tney

–Wilc

oxon

test

(with

sign

ifica

nce

leve

lof

0.05

)on

benc

hmar

kpr

oble

ms

CE

C20

05(d

im=

10)

CE

C20

05(d

im=

30)

Len

nard

-Jon

es

Prob

lem

ph

Prob

lem

ph

Num

ber

ofat

oms

ph

14.

4922

E-0

10

12.

6874

E-0

10

52.

6162

E-0

10

22.

9475

E-0

10

23.

3440

E-0

71

61.

4359

E-0

21

32.

3656

E-0

10

33.

6690

E-0

91

77.

4064

E-0

51

45.

0032

E-0

20

46.

5743

E-0

91

82.

0616

E-0

61

55.

0945

E-0

10

52.

3172

E-0

31

97.

8980

E-0

81

66.

5289

E-0

20

61.

2532

E-0

10

106.

7956

E-0

81

76.

9080

E-0

10

77.

4230

E-0

20

116.

7956

E-0

81

81.

5460

E-0

41

81.

1029

E-0

21

126.

7956

E-0

81

92.

4478

E-0

71

91.

4157

E-0

91

136.

7956

E-0

81

101.

6231

E-0

10

101.

7005

E-0

21

146.

7956

E-0

81

––

––

––

156.

7956

E-0

81 (LADPSO) has better performance in accuracy, stability and

robustness. The future work includes the research on us-ing similar approaches for solving multi-objective and dy-namic optimization problems. Also researching about de-signing more intelligent systems for controlling the usage ofadditional operators within standard algorithm seems to befruitful. In this paper we have used a fuzzy control system toimprove standard PSO algorithm with 2 operators, it is pos-sible to examine this approach for other operators and opti-mization algorithms. Furthermore it seems that some goodresearches could be done about improving the suggested al-gorithm to solve constrained optimization problems.

Acknowledgement The authors are very grateful to the referees fortheir excellent and many helpful comments, suggestions and correc-tions.

References

1. Kennedy J, Eberhart RC (1995) Particle swarm optimization. In:Proceedings of the 1995 IEEE international conference on neuralnetworks, Piscataway, NJ, pp 1942–1948

2. Engelbrecht AP (2007) Computational intelligence, 2nd edn. Wi-ley, New York

3. Clerc M, Kennedy J (2002) The particle swarm-explosion, stabil-ity, and convergence in a multidimensional complex space. IEEETrans Evol Comput 6(1):58–73

4. Wolpert DH, Macready WG (1997) No free lunch theorems foroptimization. IEEE Trans Evol Comput

5. Pant M, Radha T, Singh VP (2007) Particle swarm optimization:experimenting the distributions of random numbers. In: 3rd Indianint conf on artificial intelligence (IICAI 2007), pp 412–420

6. Pant M, Thangaraj R, Abraham A (2008) Improved particle swarmoptimization with low-discrepancy sequences. In: IEEE cong onevolutionary computation (CEC 2008), Hong Kong

7. Norouzzadeh MS, Ahmadzadeh MR, Palhang M (2010) PlowingPSO: anovel approach to effectively initializing particle swarm op-timization. In: Proceeding of 3rd IEEE international conference oncomputer science and information technology, Chengdu, China,vol 1, pp 705–709

8. Mendes R, Kennedy J, Neves J (2005) The fully informed particleswarm: simpler, maybe better. IEEE Trans Evol Comput, 1(1)

9. Suganthan PN (1999) Particle swarm optimiser with neighbor-hood operator. In: Proceedings of the IEEE congress on evolu-tionary computation, pp 1958–1962

10. Kennedy J, Mendes R (2002) Population structure and particleperformance. In: Proceedings of the IEEE congress on evolution-ary computation, pp 1671–1676

11. Kennedy J (2003) Bare bones particle swarms. In: Proceedings ofthe IEEE swarm intelligence symposium, pp 80–87

12. Standard PSO 2007 and 2011, http://particleswarm.info13. Shi Y, Eberhart RC (2001) Fuzzy adaptive particle swarm opti-

mization. In: Proceedings of the IEEE congress on evolutionarycomputation, pp 101–106

14. Venter G, Sobieszczanski-Sobieski J (2003) Particle swarm opti-mization. J Am Inst Aeronaut Astronaut 41(8):1583–1589

15. Clerc M (2001) Think locally, act locally: the way of life of cheap-PSO, an adaptive PSO. http://clerc.maurice.free.fr/pso/. Technicalreport

16. Zheng Y, Ma L, Zhang L, Qian J (2003) On the convergence anal-ysis and parameter selection in particle swarm optimization. In:Proceedings of the international conference on machine learningand cybernetics, pp 1802–1807

Page 15: LADPSO Published

LADPSO: using fuzzy logic to conduct PSO algorithm

17. Chen M-R, Lu Y-Z, Luo Q (2010) A novel hybrid algorithm withmarriage of particle swarm optimization and extremal optimiza-tion. Appl Soft Comput J 10(2):367–373

18. Lim A, Lin J, Xiao F (2007) Particle swarm optimization and hillclimbing for the bandwidth minimization problem. Int J Appl In-tell 26:175–182

19. Shuang B, Chen J, Li Z (2011) Study on hybrid PS-ACO algo-rithm. Int J Appl Intell 34:64–73

20. Angeline PJ (1998) Using selection to improve particle swarm op-timization. In: Proceedings of the IEEE congress on evolutionarycomputation, pp 84–89

21. Pant M, Thangaraj R, Abraham A (2007) A new PSO algorithmwith crossover operator for global optimization problems. In: 2ndinternational workshop on hybrid artificial intelligence systems

22. Higashi H, Iba H (2003) Particle swarm optimization with gaus-sian mutation. In: Proceedings of the IEEE swarm intelligencesymposium, pp 72–79

23. Pant M, Radha T, Singh VP A new diversity based particle swarmoptimization using Gaussian mutation. Int J Math Model, SimulAppl

24. Li C, Yang S, Korejo I An adaptive mutation operator for particleswarm optimization

25. Li C, Liu Y, Zhou A, Kang L, Wang H A fast particle swarm op-timization algorithm with Cauchy mutation and natural selectionstrategy

26. van den Bergh F, Engelbrecht AP (2000) Cooperative learning inneural networks using particle swarm optimizers. S Afr Comput J26:84–90

27. Silva A, Neves A, Costa E (2002) An empirical comparison of par-ticle swarm and predator prey optimisation. In: Proceedings of thethirteenth Irish conference on artificial intelligence and cognitivescience, pp 103–110

28. Xie X, Zhang W, Yang Z (2002) Adaptive particle swarm opti-mization on individual level. In: Proceedings of the sixth interna-tional conference on signal processing, pp 1215–1218

29. Xie X, Zhang W, Yang Z (2002) A dissipative particle swarm op-timization. In: Proceedings of the IEEE congress on evolutionarycomputation, pp 1456–1461

30. van den Bergh F (2002) An analysis of particle swarm optimiz-ers. In PhD thesis, Department of Computer Science, Universityof Pretoria, Pretoria, South Africa

31. Blackwell TM, Bentley PJ (2002) Dynamic search with chargedswarms. In: Proceedings of the genetic and evolutionary computa-tion conference, pp 19–26

32. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, AugerA, Tiwari S (2005) Problem definitions and evaluation criteriafor the CEC 2005 special session on real-parameter optimiza-tion. Nanyang Technological University, Singapore and KanpurGenetic Algorithms Laboratory, IIT Kanpur, Technical Report

33. Auger A, Hansen N (2005) A restart CMA evolution strategywith increasing population size. In: IEEE congress on evolution-ary computation (CEC2005), vol 2, pp 1785–1791

34. Qin AK, Suganthan PN (2005) Self-adaptive differential evolutionalgorithm for numerical optimization. In: IEEE congress on evo-lutionary computation (CEC2005), vol 2, pp 1785–1791

35. Qin AK, Suganthan PN (2005) Dynamic multi-swarm particleswarm optimizer with local search. In: IEEE congress on evolu-tionary computation (CEC2005), vol 1, pp 522–528

36. Gockenbach MS, Kearsley AJ, Symes WW (1997) An infeasiblepoint method for minimizing the Lennard-Jones potential. ComputOptim Appl 8(3):273–286

37. Fan E (2002) Global optimization of Lennard-Jones atomic clus-ters. MSc thesis, McMaster University

38. Gibbons JD (1985) Nonparametric statistical inference. MarcelDekker, New York

39. Hollander M, Wolfe DA (1999) Nonparametric statistical meth-ods. Wiley, Hoboken

Mohammad Sadegh Norouzzadehreceived his B.S. degree in Soft-ware Engineering from the TarbiatMoallem University, Iran, in 2007and the Master degree in ArtificialIntelligence from Isfahan Univer-sity of Technology, in 2009. Cur-rently he is Ph.D. student of Artifi-cial Intelligence in Amirkabir Uni-versity of Technology. His currentresearch interests include computa-tional intelligence, machine learn-ing and computer vision.

Mohammad Reza Ahmadzadehreceived his B.Sc. degree in Elec-tronic Engineering from the Fer-dowsi University of Mashhad, Iranin 1989 and the M.Sc. degree inElectronic Engineering from theUniversity of Tarbiat Modarres,Tehran in 1992. He received hisPh.D. from University of Surrey,UK in 2001. He was a lecturer atShiraz University from 1993–1997,and an Assistant Professor from2001–2004. He is an Assistant Pro-fessor of Electrical Engineering atIsfahan University of Technology,

Iran from 2004. His research interests include reasoning with uncer-tainty, pattern recognition, image processing, expert systems, informa-tion fusion and neural networks.

Maziar Palhang received his B.Sc.in Computer Hardware from SharifUniversity of Technology, Tehran,Iran. He received his M.Comp.Sc.and Ph.D. from the University ofNew South Wales, Sydney, Aus-tralia. He was a Postdoctoral fellowin UNSW as well. He later joinedElectrical and Computer Engineer-ing Department of Isfahan Univer-sity of Technology, Isfahan, Iran.He has been the Chairman of Hu-manoid league of IranOpen Com-petitions. His research interests areMachine Learning, Computer Vi-sion, and Robotics.