
Applied Soft Computing 10 (2010) 629–640

Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization

Hui Liu, Zixing Cai, Yong Wang *

School of Information Science and Engineering, Central South University, Changsha 410083, People’s Republic of China

ARTICLE INFO

Article history:

Received 11 April 2008

Received in revised form 11 June 2009

Accepted 23 August 2009

Available online 29 August 2009

Keywords:

Particle swarm optimization

Differential evolution

Constrained optimization

PSO-DE

ABSTRACT

We propose a novel hybrid algorithm named PSO-DE, which integrates particle swarm optimization (PSO) with differential evolution (DE) to solve constrained numerical and engineering optimization problems. Traditional PSO easily falls into stagnation when no particle discovers a position better than its previous best position for several generations. DE, with its strong search ability, is incorporated to update the previous best positions of the particles and force PSO to jump out of stagnation. The hybrid algorithm speeds up convergence and improves performance. We test the presented method on 11 well-known benchmark test functions and five engineering optimization functions. Comparisons show that PSO-DE outperforms or performs similarly to seven state-of-the-art approaches in terms of the quality of the resulting solutions.

© 2009 Elsevier B.V. All rights reserved.


1. Introduction

Constrained optimization problems (COPs) are inevitable in many science and engineering disciplines. Without loss of generality, the nonlinear programming (NLP) problem can be formulated as

$$\min f(\vec{x}), \qquad \vec{x} = (x_1, x_2, \ldots, x_n)$$

where $\vec{x} \in V \subseteq S$, and $S$ is an $n$-dimensional rectangular space in $\mathbb{R}^n$ defined by the parametric constraints

$$l(i) \le x_i \le u(i), \qquad 1 \le i \le n.$$

Here, the feasible region $V \subseteq S$ is defined by a set of $m$ additional linear or nonlinear constraints ($m \ge 0$):

$$g_j(\vec{x}) \le 0,\ j = 1, \ldots, q \qquad \text{and} \qquad h_j(\vec{x}) = 0,\ j = q+1, \ldots, m,$$

where $q$ is the number of inequality constraints and $(m - q)$ is the number of equality constraints. Feasible individuals satisfy all constraints, while infeasible individuals violate at least one constraint.

* Corresponding author. Tel.: +86 731 8830583.
E-mail addresses: [email protected] (H. Liu), [email protected] (Z. Cai), [email protected] (Y. Wang).

1568-4946/$ – see front matter © 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.asoc.2009.08.031

Evolutionary algorithms (EAs) possess a number of distinct advantages: generality, reliable and robust performance, little information required about the problem to be solved, easy implementation, etc. Owing to these advantages, EAs have recently been applied successfully and broadly to solve COPs [1–3]. Consequently, a variety of EA-based constraint-handling techniques have been proposed for real-parameter optimization problems; they can be grouped as [1]: (1) penalty functions; (2) special representations and operators; (3) repair algorithms; (4) separation of objectives and constraints; and (5) hybrid methods.

Penalty function methods are by far the most common and simplest approach to handling constraints. By adding (or subtracting) a penalty term to (or from) the objective function, a constrained optimization problem is transformed into an unconstrained one. In common practice, a penalty term $G(\vec{x}) = \sum_{j=1}^{m} G_j(\vec{x})$ is based on the degree of constraint violation of an individual $\vec{x}$, where $G_j(\vec{x})$ is defined as

$$G_j(\vec{x}) = \begin{cases} \max\{0,\ g_j(\vec{x})\}, & 1 \le j \le q\\ \max\{0,\ |h_j(\vec{x})| - \epsilon\}, & q+1 \le j \le m \end{cases} \qquad (1)$$

where $\epsilon$ is a positive tolerance value for equality constraints. $G(\vec{x})$ represents the distance of the individual $\vec{x}$ from the boundaries of the feasible set. A remarkable limitation of penalty function methods is that most of them require careful fine-tuning of parameters to obtain competitive results. A penalty parameter that is too small results in underpenalization; consequently, the population will have difficulty landing within the feasible


region and may converge to an infeasible solution. Conversely, a too-large penalty coefficient results in the loss of valuable information provided by infeasible individuals. In [4], a self-adaptive penalty function based genetic algorithm (SAPF) is proposed; both the distance value and the penalty are based on the normalized fitness value and the normalized degree of constraint violation, and the final fitness of each individual is calculated by adding the penalty value to the corresponding distance value. In [5], a DE based on a co-evolution mechanism, named CDE, is proposed to solve COPs. Owing to the co-evolution, not only the decision solutions but also the penalty factors are adjusted by differential evolution.
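For concreteness, the following C++ sketch (our illustration, not code from the paper) evaluates the violation degree $G(\vec{x})$ of Eq. (1); the sample problem, with one inequality and one equality constraint, is a hypothetical placeholder:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Sum of max{0, g_j(x)} over the inequality constraints and
// max{0, |h_j(x)| - eps} over the equality constraints, per Eq. (1).
double G(const std::vector<double>& ineq, const std::vector<double>& eq,
         double eps) {
    double total = 0.0;
    for (double gj : ineq) total += std::max(0.0, gj);
    for (double hj : eq)   total += std::max(0.0, std::fabs(hj) - eps);
    return total;  // G(x) = 0 iff x is feasible (within tolerance eps)
}

int main() {
    std::vector<double> x = {1.0, 2.0};
    std::vector<double> ineq = {x[0] + x[1] - 2.5};  // g_1(x) = x1 + x2 - 2.5 <= 0
    std::vector<double> eq   = {x[0] * x[1] - 2.0};  // h_1(x) = x1*x2 - 2 = 0
    std::printf("G(x) = %g\n", G(ineq, eq, 1e-4));
    return 0;
}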

Apart from penalty function methods, several novel techniques have been utilized to handle constraints. For special representations and operators, how to determine an appropriate generic representation scheme remains an open issue: such representations and operators are undoubtedly useful for the intended application for which they were designed, but their generalization to other (even similar) problems is by no means obvious. When an infeasible solution can be easily (or at least at a low computational cost) transformed into a feasible one, repair algorithms are a good choice. However, this is not always possible, and in some cases repair operators may introduce a strong bias into the search, harming the evolutionary process itself; furthermore, this approach is problem-dependent, since a specific repair algorithm has to be designed for each particular problem. One problem with the separation of constraints and objectives is that when the ratio between the feasible region and the whole search space is too small (for example, when some constraints are very difficult to satisfy), the technique will fail unless a feasible point is introduced into the initial population.

Deb [6] proposed a feasibility-based rule in which pairwise solutions are compared using the following criteria: (1) any feasible solution is preferred to any infeasible solution; (2) between two feasible solutions, the one with the better objective function value is preferred; (3) between two infeasible solutions, the one with the smaller degree of constraint violation is preferred. However, this technique has difficulty maintaining diversity in the population. Runarsson and Yao [7] introduced a stochastic ranking method to balance the objective and penalty functions. Given pairwise adjacent individuals: (1) if both individuals are feasible, their rank is determined by the objective function value; otherwise (2) with probability $P_f$ they are ranked according to the objective function value, and with probability $(1 - P_f)$ according to the constraint violation value. Its drawback is the need to choose the most appropriate value of $P_f$. Amirjanov [8] investigated an approach named the changing range-based genetic algorithm (CRGA), which adaptively shifts and shrinks the size of the search space towards the feasible region by employing feasible and infeasible solutions in the population to reach the global optimum. In [9], a general variable neighborhood search (VNS) heuristic was developed to solve COPs. VNS defines a set of neighborhood structures to conduct a search through the solution space; it systematically exploits the idea of neighborhood change, both in the descent to local minima and in the escape from the valleys that contain them. Mezura-Montes and Coello [10] proposed a simple multi-membered evolution strategy (SMES), which uses a simple diversity mechanism to allow the individual with the lowest amount of constraint violation and the best objective function value to be selected for the next population. By emulating societal behavior, Ray and Liew [11] made use of intra- and intersociety interactions within a formal society and civilization model to solve engineering optimization problems. A society corresponds to a cluster of points, while a civilization is the set of all such societies at any given time; every society has its set of better-performing individuals that help the others in the society improve through intrasociety information exchange. In [12], a direct extension of ant colony optimization (ACO) was proposed for continuous optimization. Taking advantage of multi-objective optimization techniques, Cai and Wang [13] presented a non-dominated individual replacement scheme, which selects one non-dominated individual from the offspring population and uses it to replace one dominated individual in the parent population.

The remainder of this paper is organized as follows. In Sections 2 and 3, PSO and DE are briefly introduced. In Section 4, the hybridization of particle swarm optimization with differential evolution, named PSO-DE, is proposed and explained in detail. Simulation results and comparisons are presented in Section 5, and a discussion is provided in Section 6. Finally, we conclude the paper in Section 7.

2. Basics of PSO

Particle swarm optimization is a stochastic global optimization method inspired by the choreography of a bird flock. PSO relies on the exchange of information between the individuals (called particles) of the population (called the swarm). In PSO, each particle adjusts its trajectory stochastically towards the position of its own previous best performance (pbest) and the best previous performance of its neighbors (nbest) or of the whole swarm (gbest). At the $t$-th iteration, for the $i$-th particle, the position vector and the velocity vector are $X_i^t = (x_{i,1}^t, \ldots, x_{i,n}^t)$ and $V_i^t = (v_{i,1}^t, \ldots, v_{i,n}^t)$. The velocity and position updating rules are given by

$$v_{i,j}^{t+1} = w\,v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (gbest_j^t - x_{i,j}^t), \qquad (2)$$

$$x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1}, \qquad (3)$$

where $j \in \{1, \ldots, n\}$, $w \in [0.0, 1.0]$ is the inertia factor, $c_1$ and $c_2$ are positive constants, and $r_1$ and $r_2$ are two uniformly distributed random numbers in the range [0, 1]. In this version, the velocity $V_i^t$ is limited to the range $\pm V_{max}$. When a particle discovers a position that is better than any it has found previously, it stores the new position in the corresponding pbest. Clerc and Kennedy [14] introduced the velocity adjustment

$$v_{i,j}^{t+1} = \chi \left( v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (gbest_j^t - x_{i,j}^t) \right) \qquad (4)$$

where $\chi = 2\kappa / |2 - \varphi - \sqrt{\varphi^2 - 4\varphi}|$ with $\varphi = c_1 + c_2 > 4$. Owing to the constriction coefficient $\chi$, the algorithm requires no explicit velocity limit $V_{max}$. Krohling and dos Santos Coelho [15] analyzed Eq. (4) and concluded that the interval [0.72, 0.86] is a good choice for $\chi$. Instead of $\chi$, they introduced the absolute value of the Gaussian probability distribution with zero mean and unit variance, $|N(0,1)|$, into the velocity equation:

$$v_{i,j}^{t+1} = R_1 (pbest_{i,j}^t - x_{i,j}^t) + R_2 (gbest_j^t - x_{i,j}^t) \qquad (5)$$

where $R_1$ and $R_2$ are generated using $|N(0,1)|$. From standard statistical results, the mean of $|N(0,1)|$ is 0.798 and its variance is 0.36. In order to solve COPs, they introduced Lagrange multipliers to transform a COP into a dual (min–max) problem. Two independent PSO populations are evolved in the co-evolutionary particle swarm using Gaussian distribution (CPSO-GD) [15]: the first PSO evolves the individuals while the vector of Lagrangian multipliers is kept frozen, and the second PSO evolves the Lagrangian multiplier vector while the individuals are kept frozen. The two PSOs interact with each other through a common fitness evaluation, and the first PSO finally provides the optimum individual of the COP.
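As a concrete illustration of the velocity rule of Eq. (5) together with the position rule of Eq. (3), the following minimal C++ sketch (our illustration, not code from the paper; all numeric values are arbitrary placeholders) updates a single particle. Note that no inertia term and no $V_{max}$ clamp are needed in this variant:

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    std::vector<double> x     = {0.3, -1.2};  // current position x_i^t
    std::vector<double> v     = {0.0,  0.0};  // current velocity v_i^t
    std::vector<double> pbest = {0.5, -1.0};  // particle's previous best
    std::vector<double> gbest = {1.0,  0.0};  // swarm's best position

    for (std::size_t j = 0; j < x.size(); ++j) {
        double R1 = std::fabs(gauss(rng));   // R1 ~ |N(0,1)|
        double R2 = std::fabs(gauss(rng));   // R2 ~ |N(0,1)|
        v[j] = R1 * (pbest[j] - x[j]) + R2 * (gbest[j] - x[j]);  // Eq. (5)
        x[j] += v[j];                                            // Eq. (3)
    }
    std::printf("new position: (%g, %g)\n", x[0], x[1]);
    return 0;
}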

A novel multi-strategy ensemble PSO algorithm (MEPSO) [16] introduces two new strategies, Gaussian local search and differential mutation, applied to one part of its population (part I) and to the other part (part II), respectively. In every iteration, each particle of part I performs, with probability $P_{ls}$, the Gaussian local search defined


in Eqs. (6) and (3), and performs, with probability $(1 - P_{ls})$, the conventional search defined in Eqs. (2) and (3). gbest is the best solution found by all particles, in both part I and part II. For each particle of part II, the differential mutation operator defined in Eq. (7) is performed to change the direction of its velocity:

$$v_{i,j}^{t+1} = c_3 R_3 \qquad (6)$$

$$v_{i,j}^{t+1} = \mathrm{sgn}(r_1 - 0.5)\left( w\,v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (p_a^t - x_{i,j}^t) \right), \qquad (7)$$

where $R_3$ is generated using $N(0,1)$, $c_3$ is a positive constant, and $p_a$ is the best solution found by particle $a$, which is chosen randomly from part I. The moving peaks benchmark (MPB) and a dynamic Rastrigin function are used to test the performance of MEPSO. dos Santos Coelho and Lee [17] proposed that the random numbers ($c_1$, $c_2$, $r_1$ and $r_2$) in the velocity updating equation of PSO be generated using the Gaussian probability distribution and/or chaotic sequences in the interval [-1, 1], and then mapped to the interval [0, 1]. Maitra and Chatterjee [18] proposed a hybrid cooperative-comprehensive learning PSO algorithm for multilevel thresholding in histogram-based image segmentation. In [19], Gaussian functions and PSO are used to select and adjust radial basis function neural networks.

3. Basics of DE

Differential evolution is a stochastic, simple yet powerful evolutionary algorithm that not only has the advantage of few control parameters but also performs well in convergence. DE was introduced by Storn and Price [20] for global optimization. DE creates new candidate solutions by perturbing a parent individual with the weighted difference of several other randomly chosen individuals of the same population. A candidate replaces its parent only if it is better. Thereafter, DE guides the population towards the vicinity of the global optimum through repeated cycles of mutation, crossover and selection. The main procedure of DE is explained in detail as follows.

Mutation: For each individual $X_i^t = \{x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t\}$ ($i \in \{1, 2, \ldots, NP\}$) at generation $t$, an associated mutant individual $Y_i^t = \{y_{i,1}^t, y_{i,2}^t, \ldots, y_{i,n}^t\}$ can be created by using one of the mutation strategies. The most used strategies are:

rand/1: $y_{i,j}^t = x_{r[1],j}^t + F(x_{r[2],j}^t - x_{r[3],j}^t)$

best/1: $y_{i,j}^t = x_{best,j}^t + F(x_{r[1],j}^t - x_{r[2],j}^t)$

current-to-best/1: $y_{i,j}^t = x_{i,j}^t + F(x_{best,j}^t - x_{i,j}^t) + F(x_{r[1],j}^t - x_{r[2],j}^t)$

best/2: $y_{i,j}^t = x_{best,j}^t + F(x_{r[1],j}^t - x_{r[2],j}^t) + F(x_{r[3],j}^t - x_{r[4],j}^t)$

rand/2: $y_{i,j}^t = x_{r[1],j}^t + F(x_{r[2],j}^t - x_{r[3],j}^t) + F(x_{r[4],j}^t - x_{r[5],j}^t)$

where $r[k]$ ($k \in \{1, 2, \ldots, 5\}$) is a uniformly distributed random index in the range $[1, NP]$, $j \in \{1, \ldots, n\}$, $x_{best,j}^t$ is the best individual of the population at generation $t$, and $F$ ($F \in [0, 2]$) is an amplification factor. Salman et al. [21] introduced a self-adapting parameter $F$ as

$$F_i = F_{i_1} + N(0, 0.5)(F_{i_2} - F_{i_3}) \qquad (8)$$

where $i_1$, $i_2$ and $i_3$ are uniformly distributed random indices in the range $[0, NP]$ and $i_1 \ne i_2 \ne i_3$.

Crossover: DE applies a crossover operator on $X_i^t$ and $Y_i^t$ to generate the offspring individual $Z_i^t = \{z_{i,1}^t, z_{i,2}^t, \ldots, z_{i,n}^t\}$. The genes of $Z_i^t$ are inherited from $X_i^t$ or $Y_i^t$, as determined by a parameter called the crossover probability ($CR \in [0, 1]$), as follows:

$$z_{i,j}^t = \begin{cases} y_{i,j}^t, & \text{if } rand \le CR \text{ or } j = j_{rand}\\ x_{i,j}^t, & \text{otherwise} \end{cases} \qquad (9)$$

where $rand$ is a uniformly distributed random number in the range [0, 1] and $j_{rand}$ is a uniformly distributed random index in the range $[1, n]$.

Selection: The offspring individual $Z_i^t$ competes against the parent individual $X_i^t$ using the greedy criterion, and the survivor enters generation $t + 1$:

$$X_i^{t+1} = \begin{cases} Z_i^t, & \text{if } f(Z_i^t) \le f(X_i^t)\\ X_i^t, & \text{otherwise} \end{cases} \qquad (10)$$

Different techniques have been integrated into DE to solve COPs. Constraint adaptation by differential evolution (CADE) [22] combines the ideas of constraint adaptation and DE into a versatile design method. CADE utilizes a so-called region of acceptability (ROA) as a selection operator. If $Z_i^t$ lies within the ROA, then $X_i^{t+1} = Z_i^t$; otherwise, the DE procedure is repeated up to several times. If the generated offspring still lies outside the ROA, $X_i^{t+1}$ is set to $X_i^t$. In [23], a cultural algorithm with a DE population (CULDE) is proposed, in which the variation operator of differential evolution is influenced by the belief space to generate the offspring population.

4. Proposed method

In this section, PSO-DE is introduced in detail. In order to handle the constraints, we minimize the original objective function $f(\vec{x})$ as well as the degree of constraint violation $G(\vec{x})$. Two populations of the same size $NP$ are used. In the initial step of the algorithm, a population (denoted pop) is created randomly, and a replica of pop is denoted pBest; pBest is utilized to store each particle's pbest. At each generation, pop is sorted according to the degree of constraint violation in descending order. In order to keep the one-to-one mapping between each particle and its pbest, the order of pBest changes when pop is sorted. Only the first half of pop is evolved using Krohling and dos Santos Coelho's PSO [15]. If a variable value $x_{i,j}^{t+1}$ of $X_i^{t+1}$ generated by this PSO violates its boundary constraint, the violating value is reflected back from the violated boundary using the following rule [24]:

$$x_{i,j}^{t+1} = \begin{cases} 0.5\,(l(j) + x_{i,j}^t), & \text{if } x_{i,j}^{t+1} < l(j)\\ 0.5\,(u(j) + x_{i,j}^t), & \text{if } x_{i,j}^{t+1} > u(j)\\ x_{i,j}^{t+1}, & \text{otherwise} \end{cases} \qquad (11)$$

Algorithm 1: PSO-DE

Input: population size NP; objective function f; degree of constraint violation G; upper bounds of the variables U = {u(1), ..., u(n)}; lower bounds of the variables L = {l(1), ..., l(n)}
Output: the best objective function value fbest

Initialize a population pop of NP particles with random positions, each particle clamped within [L, U];
Set the velocity of each particle to zero;
Evaluate f and G for all particles;
pBest = pop;  % pBest stores each particle's previous best position (pbest) %
gbest = the optimum of pBest according to Deb's feasibility-based rule [6];
foreach generation do
    Sort pop in descending order according to G, and change the order of pBest when the order of pop changes, to keep the one-to-one map between each particle and its pbest;
    p1 = first half of pop;
    foreach individual a of p1 do
        Update a's velocity and position by Eqs. (5) and (3);
        if a violates a boundary then
            Modify its variables by Eq. (11);
        end
        Calculate f and G for a;
        Compare a against the corresponding pbest according to Deb's feasibility-based rule [6]; if a wins, it replaces that pbest;
    end
    first half of pop = p1;
    gbest = the optimal individual of pBest according to Deb's feasibility-based rule [6];
    foreach pbest of pBest do
        Generate three offspring with DE's three mutation strategies (rand/1, current-to-best/1 and rand/2; see Section 3 for details) and store them in a set B;
        foreach individual b of B do
            if b violates a boundary then
                Modify its variables by Eq. (12);
            end
            Calculate f and G for b;
            Compare b with pbest by Eq. (13); if b wins, it replaces pbest;
        end
    end
    gbest = the optimal individual of pBest according to Deb's feasibility-based rule [6];
    fbest = the objective function value of gbest;
end
return fbest

The updated particle is compared with its corresponding pbest in pBest by Deb's selection criterion [6]. If the updated particle wins, it replaces the corresponding pbest and survives into pBest; if not, the corresponding pbest remains. After the PSO evolution, we employ DE to update pBest. Each pbest in pBest produces three offspring using DE's three mutation strategies: the rand/1 strategy, the current-to-best/1 strategy and the rand/2 strategy. If a variable value $z_{i,j}^t$ of an offspring $Z_i^t$ violates its boundary constraint, the violating value is reflected back from the violated boundary using the following rule [25]:

$$z_{i,j}^t = \begin{cases} 2\,l(j) - z_{i,j}^t, & \text{if } z_{i,j}^t < l(j)\\ 2\,u(j) - z_{i,j}^t, & \text{if } z_{i,j}^t > u(j)\\ z_{i,j}^t, & \text{otherwise} \end{cases} \qquad (12)$$
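The two repair rules differ in how they reuse information: Eq. (11) pulls the particle halfway back towards its previous value, while Eq. (12) mirrors the offspring about the violated bound. A minimal C++ sketch of both rules (our illustration):

#include <cstdio>

// Eq. (11): xNew is x_{i,j}^{t+1}, xOld is x_{i,j}^t, [l, u] is the box.
double repairPSO(double xNew, double xOld, double l, double u) {
    if (xNew < l) return 0.5 * (l + xOld);
    if (xNew > u) return 0.5 * (u + xOld);
    return xNew;
}

// Eq. (12): z is the offspring variable z_{i,j}^t.
double repairDE(double z, double l, double u) {
    if (z < l) return 2.0 * l - z;
    if (z > u) return 2.0 * u - z;
    return z;
}

int main() {
    std::printf("Eq.(11): %g\n", repairPSO(-0.4, 0.2, 0.0, 1.0)); // 0.5*(0+0.2) = 0.1
    std::printf("Eq.(12): %g\n", repairDE(1.3, 0.0, 1.0));        // 2*1-1.3 = 0.7
    return 0;
}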

We use a selection criterion to compare each pbest against its offspring: the considered individual is replaced only if its offspring has a better objective value and no higher degree of constraint violation. The criterion for replacement is

$$pbest_i^{t+1} = \begin{cases} Z_i^t, & \text{if } f(Z_i^t) < f(pbest_i^t) \wedge G(Z_i^t) \le G(pbest_i^t)\\ pbest_i^t, & \text{otherwise} \end{cases} \qquad (13)$$
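Both comparators used by PSO-DE can be written compactly. The following C++ sketch (our illustration, with f and G supplied as plain numbers) contrasts Deb's feasibility-based rule [6], used in the PSO half, with the stricter replacement test of Eq. (13), used when a DE offspring challenges a pbest:

#include <cstdio>

struct Individual { double f; double G; };  // objective value, violation degree

// Deb's rule [6]: returns true if a beats b.
bool debWins(const Individual& a, const Individual& b) {
    if (a.G == 0.0 && b.G == 0.0) return a.f < b.f;  // both feasible: better f
    if (a.G != b.G)               return a.G < b.G;  // smaller violation wins
    return a.f < b.f;                                // tie in G: better f
}

// Eq. (13): the offspring replaces pbest only if it improves f
// without increasing G.
bool eq13Replaces(const Individual& offspring, const Individual& pbest) {
    return offspring.f < pbest.f && offspring.G <= pbest.G;
}

int main() {
    Individual pbest = {5.0, 0.1}, child = {4.0, 0.05};
    std::printf("Deb: %d, Eq.(13): %d\n",
                debWins(child, pbest), eq13Replaces(child, pbest));
    return 0;
}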

This process is repeated generation after generation until a specified stopping criterion is met. PSO-DE's main procedure is summarized in Algorithm 1.

4.1. Why only 50% of the particles are involved in PSO

Krohling and dos Santos Coelho's PSO [15] stagnates easily, owing to its velocity equation (Eq. (5)). Eq. (5) consists of two parts: the first is the randomly weighted difference between the particle and its pbest, and the second is the randomly weighted difference between the particle and gbest. The first part represents the personal experience of each particle, which makes the particle move towards its own best position; the second part represents the collaborative behavior of the particles in finding the global optimal solution, which pulls each particle towards the best position found by the swarm. Eq. (3) provides the new position of the particle by adding its new velocity to its current position. As mentioned above, if a particle sits at the position of gbest, its velocity tends to zero and its position remains unchanged. If both $(pbest_i^t - x_i^t)$ and $(gbest^t - x_i^t)$ are small, the particle almost freezes in its track for some generations; if pbest and gbest are too close, some particles become inactive during the evolution process. When the position of a particle equals its pbest, the velocity is influenced only by gbest. Eq. (5) thus indicates that pbest and gbest play a primordial role in PSO's evolution process.

Under Deb's feasibility-based rule [6], the lower a particle's degree of constraint violation, the higher the probability that it clusters around gbest. Particles with lower degrees of constraint violation therefore find it very difficult to jump out of gbest's neighborhood. This may cause gbest to stay at the same position for a long time and the population to lose diversity; in other words, premature convergence may occur in the early evolution stage. Moreover, if pop converges too quickly to one position, which may be a local optimum, the particles give up exploration and stagnate for the rest of the evolution process. On the other hand, for a particle with a higher degree of constraint violation, there is a relatively significant difference between its pbest and gbest. Its performance can be improved by extracting meaningful information from its own pbest and from the gbest of the same population, so that it is dragged towards a better-performing point. The updated particle may be better than gbest, in which case gbest jumps to a new position clearly different from its current one. This replacement may spur PSO to adjust its evolutionary direction and guide the particles to fly through a new, previously unsearched region. For the purpose of improving the performance of PSO, only the first half of the individuals are extracted from the population pop after ranking the individuals by their constraint violations in descending order. A temporary population p1 of size NP/2 is thus obtained, and p1 is then involved in PSO's evolution, as sketched below. To some extent, this mechanism maintains the diversity of pop and slows down the convergence speed to avoid stagnation.
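A minimal C++ sketch of this ranking step (our illustration; the violation values are placeholders) sorts an index permutation so that pop and pBest can be reordered in lockstep, preserving the one-to-one map:

#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> G = {0.0, 3.5, 0.7, 9.1};  // violation degree per particle
    const std::size_t NP = G.size();

    // Sort indices rather than pop itself, so that pop and pBest can both
    // be permuted identically afterwards.
    std::vector<std::size_t> idx(NP);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) { return G[a] > G[b]; });

    std::printf("particles evolved by PSO this generation:");
    for (std::size_t k = 0; k < NP / 2; ++k)  // first half: largest violations
        std::printf(" #%zu (G=%g)", idx[k], G[idx[k]]);
    std::printf("\n");
    return 0;
}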

4.2. DE-based search for pBest

In order to compensate for the convergence speed and to supply more valuable information for adjusting the particles' trajectories, DE is applied to update pBest, which ensures highly preferable positions in pBest and increases the probability of finding a better solution. Only three representatives of the five DE mutation strategies are used, because if the best/1 and best/2 strategies were also integrated into PSO-DE, the information carried by gbest would be reutilized in producing new individuals, and under this condition pBest might easily be trapped in a local optimum. By applying the three strategies rand/1, current-to-best/1 and rand/2 to a pbest, its performance might be improved, which in turn leads to a better-performing pBest over time. The DE-based search process motivates the particles to search new regions, including some less-explored ones, and enhances the particles' capability to explore the vast search space. In addition, this approach can increase the diversity of pBest and the probability of finding better pbests, so as to enhance the chances of finding the global optimum if it has not yet been found. As we know, better pbests and gbest guide particles towards the optimum effectively and speed up the convergence.

The two populations work separately, but the individuals in these two parts are also interrelated. pBest stores the personal best positions of the particles in pop. The best solution found by pBest can be the global attractor of pop (if it is also the best of the entire swarm), which will guide pop towards the new best (possibly the changed optimum). PSO can gradually search the neighborhood of the best solution found so far, and DE can avoid convergence to a local optimum. In a word, by hybridizing DE and PSO, PSO-DE achieves a good trade-off between accuracy and efficiency: it can increase the probability of hitting the global optimum and reduce the number of fitness function evaluations (FFEs) required to obtain competitive solutions.


Table 1
Summary of 11 benchmark problems.

Function   n    Type of f   r           LI   NE   NI    a
g01        13   Quadratic    0.0003 %    9    0    0    6
g02        20   Nonlinear   99.9965 %    1    0    1    1
g03        10   Nonlinear    0.0000 %    0    1    0    1
g04         5   Quadratic   26.9356 %    0    0    6    2
g06         2   Nonlinear    0.0064 %    0    0    2    2
g07        10   Quadratic    0.0003 %    3    0    5    6
g08         2   Nonlinear    0.8640 %    0    0    2    0
g09         7   Nonlinear    0.5256 %    0    0    4    2
g10         8   Linear       0.0005 %    3    0    3    3
g11         2   Quadratic    0.0000 %    0    1    0    1
g12         3   Quadratic    4.779 %     0    0    9^3  0



5. Experimental study

Eleven benchmark test functions and five engineering optimization functions are used to validate the proposed PSO-DE. These test cases include various types of objective function (linear, nonlinear and quadratic) with different numbers of decision variables, and a range of types (linear inequalities, nonlinear equalities and nonlinear inequalities) and numbers of constraints. These 16 problems pose a challenge for constraint-handling methods and are a good measure of their ability. The benchmark functions are listed in Appendix A and the engineering design problems in Appendix B.

5.1. Benchmark test functions

The main characteristics of the 11 benchmark functions are reported in Table 1, where a is the number of constraints active at the optimal solution and r is the ratio between the size of the feasible search space and that of the entire search space, i.e., $r = |V|/|S|$, where $|S|$ is the number of solutions randomly generated from $S$ and $|V|$ is the number of feasible solutions among these $|S|$ solutions. In the experimental setup, $|S| = 1{,}000{,}000$.

Table 2
Experimental results obtained by PSO-DE with 100 independent runs on 11 benchmark functions.

Function   Best          Median        Mean          SD        Worst         FFEs
g01        -15.000000    -15.000000    -15.000000    2.1E-08   -15.000000    140,100
g02        -0.8036145    -0.7620745    -0.7566775    3.3E-02   -0.63679947   140,100
g03        -1.0050100    -1.0050100    -1.0050100    3.8E-12   -1.0050100    140,100
g04        -30665.5387   -30665.5387   -30665.5387   8.3E-10   -30665.5387   70,100
g06        -6961.81388   -6961.81388   -6961.81388   2.3E-09   -6961.81388   140,100
g07        24.3062091    24.3062096    24.3062100    1.3E-06   24.3062172    140,100
g08        -0.09582594   -0.09582594   -0.09582594   1.3E-12   -0.09582594   10,600
g09        680.6300574   680.6300574   680.6300574   4.6E-13   680.6300574   140,100
g10        7049.248021   7049.248028   7049.248038   3.0E-05   7049.248233   140,100
g11        0.749999      0.749999      0.749999      2.5E-07   0.750001      70,100
g12        -1.000000     -1.000000     -1.000000     0.0E+00   -1.000000     17,600
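The ratio r reported in Table 1 can be reproduced by straightforward Monte Carlo sampling. The following C++ sketch (our illustration, not the authors' code) estimates r for g06 (Appendix A.5) with $|S| = 1{,}000{,}000$ samples:

#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u1(13.0, 100.0);  // bounds of x1
    std::uniform_real_distribution<double> u2(0.0, 100.0);   // bounds of x2

    const int S = 1000000;
    int V = 0;
    for (int k = 0; k < S; ++k) {
        double x1 = u1(rng), x2 = u2(rng);
        double g1 = -(x1 - 5) * (x1 - 5) - (x2 - 5) * (x2 - 5) + 100.0;
        double g2 =  (x1 - 6) * (x1 - 6) + (x2 - 5) * (x2 - 5) - 82.81;
        if (g1 <= 0.0 && g2 <= 0.0) ++V;  // count feasible samples
    }
    // Prints a value close to the 0.0064 % reported in Table 1 for g06.
    std::printf("r = %.4f %%\n", 100.0 * V / S);
    return 0;
}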

For each test case, 100 independent runs are performed in VC++ 6.0 (the source code may be obtained from the authors upon request). The parameters used by PSO-DE are the following: NP = 100, and F and CR are randomly generated within [0.9, 1.0] and [0.95, 1.0], respectively. For g03 the tolerance value $\epsilon$ equals 0.001, while for g11 it equals 0.000001. The number of FFEs for each test function is given in Table 2. In each run, the number of iterations is 800 for seven test cases (g01, g02, g03, g06, g07, g09 and g10), 400 for g04 and g11, 60 for g08, and 100 for g12. Table 2 summarizes the experimental results using the above parameters, showing the best, median, mean and worst objective function values and the standard deviations for each test problem. As described in Table 2, the global optima are consistently found by PSO-DE over 100 independent runs in seven test cases (g01, g04, g06, g07, g09, g10 and g12). For the remaining test cases, the resulting solutions are very close to the global optima. Note that the standard deviations over 100 runs are relatively small for all problems.

PSO-DE is compared against six aforementioned state-of-the-art approaches: CRGA [8], SAPF [4], CDE [5], CULDE [23], CPSO-GD [15] and SMES [10]. As shown in Tables 3–5, the proposed method outperforms CRGA, SAPF, CDE, CPSO-GD and SMES, and performs similarly to CULDE, in terms of the selected performance metrics, i.e., the best, mean and worst objective function values. With respect to CULDE, the proposed approach finds better best results in two problems (g10 and g11) and similar best results in the other nine problems (g01, g02, g03, g04, g06, g07, g08, g09 and g12). It also reaches better mean and worst results in four problems (g02, g03, g10 and g11), with similar mean and worst results in seven problems (g01, g04, g06, g07, g08, g09 and g12). As far as the computational cost (the number of FFEs) is concerned, PSO-DE requires from 10,600 to 140,100 FFEs to obtain the reported results, compared with 500,000 FFEs used by SAPF, 248,000 FFEs by CDE, 100,100 FFEs by CULDE and 240,000 FFEs by SMES. We can therefore conclude that the computational cost of PSO-DE is lower than that of the aforementioned approaches except CULDE [23].

5.2. Engineering optimization

In order to study its performance on real-world engineering design problems, the proposed method is applied to five well-known engineering design problems. We perform 100 independent runs with the same parameter settings as before: NP = 100, and F and CR randomly generated within [0.9, 1.0] and [0.95, 1.0], respectively. The number of FFEs for each problem is given in Table 6. We measure the quality of the results (the best and mean solutions found) and the robustness of PSO-DE (the standard deviation values). These statistical results are summarized in Table 6.



Table 3
Comparison of the best results of PSO-DE with six other state-of-the-art algorithms. "NA" means not available.

Function   PSO-DE        CRGA [8]      SAPF [4]      CDE [5]       CULDE [23]    CPSO-GD [15]   SMES [10]
g01        -15.000000    -14.9977      -15.000       -15.0000      -15.000000    -15.0          -15.000
g02        -0.8036145    -0.802959     -0.803202     -0.794669     -0.803619     NA             -0.803601
g03        -1.0050100    -0.9997       -1.000        NA            -0.995413     NA             -1.000
g04        -30665.539    -30665.520    -30665.401    -30665.539    -30665.539    NA             -30665.539
g06        -6961.8139    -6956.251     -6961.046     -6961.814     -6961.8139    NA             -6961.814
g07        24.306209     24.882        24.838        NA            24.306209     24.711         24.327
g08        -0.095826     -0.095825     -0.095825     NA            -0.095825     NA             -0.095825
g09        680.63006     680.726       680.773       680.771       680.63006     680.678        680.632
g10        7049.2480     7114.743      7069.981      NA            7049.2481     7055.6         7051.903
g11        0.749999      0.750         0.749         NA            0.749900      NA             0.75
g12        -1.000000     -1.000000     -1.000000     -1.000000     -1.000000     NA             -1.000

Table 4
Comparison of the mean results of PSO-DE with six other state-of-the-art algorithms. "NA" means not available.

Function   PSO-DE        CRGA [8]      SAPF [4]      CDE [5]       CULDE [23]    CPSO-GD [15]   SMES [10]
g01        -15.000000    -14.9850      -14.552       -15.0000      -14.999996    -14.997        -15.000
g02        -0.756678     -0.764494     -0.755798     -0.785480     -0.724886     NA             -0.785238
g03        -1.0050100    -0.9972       -0.964        NA            -0.788635     NA             -1.000
g04        -30665.539    -30664.398    -30665.922    -30665.536    -30665.539    NA             -30665.539
g06        -6961.8139    -6740.288     -6953.061     -6960.603     -6961.8139    NA             -6961.284
g07        24.306210     25.746        27.328        NA            24.306210     25.709         24.475
g08        -0.0958259    -0.095819     -0.095635     NA            -0.095825     NA             -0.095825
g09        680.63006     681.347       681.246       681.503       680.63006     680.7810       680.643
g10        7049.2480     8785.149      7238.964      NA            7049.2483     8464.2         7253.047
g11        0.749999      0.752         0.751         NA            0.757995      NA             0.75
g12        -1.000000     -1.000000     -0.99994      -1.000000     -1.000000     NA             -1.000

Table 5
Comparison of the worst results of PSO-DE with six other state-of-the-art algorithms. "NA" means not available.

Function   PSO-DE        CRGA [8]      SAPF [4]      CDE [5]       CULDE [23]    CPSO-GD [15]   SMES [10]
g01        -15.000000    -14.9467      -13.097       -15.0000      -14.999993    -14.994        -15.000
g02        -0.6367995    -0.722109     -0.745712     -0.779837     -0.590908     NA             -0.751322
g03        -1.0050100    -0.9931       -0.887        NA            -0.639920     NA             -1.000
g04        -30665.539    -30660.313    -30656.471    -30665.509    -30665.539    NA             -30665.539
g06        -6961.8139    -6077.123     -6943.304     -6901.285     -6961.8139    NA             -6952.482
g07        24.3062       27.381        33.095        NA            24.3062       27.166         24.843
g08        -0.0958259    -0.095808     -0.092697     NA            -0.095825     NA             -0.095825
g09        680.6301      682.965       682.081       685.144       680.6301      681.371        680.719
g10        7049.2482     10826.09      7489.406      NA            7049.2485     11458          7638.366
g11        0.750001      0.757         0.757         NA            0.796455      NA             0.75
g12        -1.000000     -1.000000     -0.999548     -1.000000     -1.000000     NA             -1.000



5.2.1. Welded beam design problem

The best feasible solution found by PSO-DE is f(0.205729640, 3.470488666, 9.036623910, 0.205729640) = 1.724852309. The problem has previously been solved by a number of researchers, e.g., Huang et al. [5] and Ray and Liew [11]. A comparison of results is presented in Table 7. As can be seen, our method outperforms the two compared approaches in terms of quality and robustness. It is also interesting to note that our result using 33,000 FFEs is better than the result of CDE [5], which was reported using 240,000 FFEs.

5.2.2. Tension/compression spring design problem

This design optimization problem involves three continuous variables and four nonlinear inequality constraints. The best feasible solution found by PSO-DE is f(0.0516888101, 0.3567117001, 11.289319935) = 0.012665232900, which is the best-known result for this problem. The problem has been studied by CDE [5] and by Ray and Liew [11]. A comparison of results is presented in Table 8. As can be seen, our method outperforms the two compared approaches in terms of quality and robustness. It is also interesting to note that our result using 24,950 FFEs is better than the result of CDE [5], which was reported using 240,000 FFEs.

5.2.3. Pressure vessel design problem

The best feasible solution obtained by PSO-DE is f(0.8125, 0.4375, 42.098445596, 176.636595842) = 6059.714335048. The statistical simulation results obtained by CDE [5] and PSO-DE are listed in Table 9. As shown in Table 9, the search quality of our method is superior to that of CDE [5]; even the worst solution found by PSO-DE is better than the best solution reported


Table 7
Comparison of the welded beam design problem results of PSO-DE with two other state-of-the-art algorithms.

Method             Best        Mean        Worst       SD        FFEs
PSO-DE             1.7248531   1.7248579   1.7248811   4.1E-06   33,000
CDE [5]            1.733461    1.768158    1.824105    2.2E-02   240,000
Ray and Liew [11]  2.3854347   3.0025883   6.3996785   9.6E-01   33,095

Table 10
Comparison of the speed reducer design problem results of PSO-DE with another state-of-the-art algorithm.

Method             Best         Mean         Worst        SD        FFEs
PSO-DE             2996.348167  2996.348174  2996.348204  6.4E-06   54,350
Ray and Liew [11]  2994.744241  3001.758264  3009.964736  4.0E+00   54,456

Table 11
Comparison of the three-bar truss design problem results of PSO-DE with another state-of-the-art algorithm.

Method             Best          Mean          Worst         SD        FFEs
PSO-DE             263.89584338  263.89584338  263.89584338  4.5E-10   17,600
Ray and Liew [11]  263.89584654  263.90335672  263.96975638  1.3E-02   17,610

Table 6
Experimental results obtained by PSO-DE with 100 independent runs on five engineering design problems.

Design problem              Best          Mean          SD        Worst         FFEs
Welded beam                 1.724852309   1.724852309   6.7E-16   1.724852309   66,600
Pressure vessel             6059.714335   6059.714335   1.0E-10   6059.714335   42,100
Speed reducer               2996.348165   2996.348165   1.0E-07   2996.348166   70,100
Three-bar truss             263.89584338  263.89584338  1.2E-10   263.89584338  17,600
Tension/compression spring  0.012665233   0.012665233   4.9E-12   0.012665233   42,100


in [5]. Moreover, the standard deviation of the results of PSO-DE over 100 independent runs for this problem is much smaller than that of CDE [5] over 50 independent runs. The total number of evaluations is 42,100 in PSO-DE, while in CDE [5] it is 240,000.

5.2.4. Speed reducer design problem

f(3.5000000, 0.7000000, 17.000000, 7.300000000013, 7.800000000005, 3.350214666097, 5.286683229758) = 2996.348164969 is the best feasible solution found by PSO-DE. Ray and Liew [11] reported a better best objective function value of 2994.744241. However, PSO-DE provides better mean and worst objective function values and a smaller standard deviation than Ray and Liew's technique [11]. Table 10 indicates that PSO-DE is more robust than the method of Ray and Liew [11].

5.2.5. Three-bar truss design problem

The best feasible solution found by PSO-DE is f(0.788675134746, 0.408248290037) = 263.895843376468, which is the reported best-known result for this problem. The comparison of results presented in Table 11 shows that PSO-DE outperforms Ray and Liew [11] in terms of quality and robustness. These overall results validate that PSO-DE has a substantial capability for handling various COPs and that its solution quality is quite stable under a low computational effort. It can therefore be concluded that PSO-DE is a good alternative for constrained optimization.

Table 8
Comparison of the tension/compression spring design problem results of PSO-DE with two other state-of-the-art algorithms.

Method             Best          Mean          Worst         SD        FFEs
PSO-DE             0.012665233   0.012665244   0.012665304   1.2E-08   24,950
CDE [5]            0.0126702     0.012703      0.012790      2.7E-05   240,000
Ray and Liew [11]  0.012669249   0.012922669   0.016717272   5.9E-04   25,167

Table 9
Comparison of the pressure vessel design problem results of PSO-DE with another state-of-the-art algorithm.

Method    Best         Mean         Worst        SD        FFEs
PSO-DE    6059.714335  6059.714335  6059.714335  1.0E-10   42,100
CDE [5]   6059.7340    6085.2303    6371.0455    4.3E+01   240,000

6. Discussion

In this section, we discuss the effectiveness of PSO-DE, of evolving only 50% of the particles by PSO, and of updating pBest by DE. We use the 11 well-known benchmark functions as examples, with the same parameters as in Section 5. A comparison between PSO-DE and HPSO [26] (a hybrid PSO with Deb's feasibility-based rule [6]) shows the effectiveness of the mechanism adopted by PSO-DE.

6.1. Searching efficiency of PSO-DE

Figs. 1 and 2 illustrate typical evolution processes of the objective value of gbest when solving g01 and g02, respectively. As shown in Figs. 1 and 2, the performance of PSO-DE is better than that of PSO and DE on the test suite in terms of optimization results. PSO converges to local optima quickly, the particles give up attempts at exploration, and PSO then stagnates for the rest of the evolution. Owing to DE, PSO-DE escapes from local optima and converges to the global optimum very quickly. This demonstrates that PSO-DE has an effective and efficient global search ability.

6.2. The effectiveness of evolving 50% of the particles by PSO

For the sake of studying the effectiveness of evolving only 50% of the particles by PSO, we modified PSO-DE to allow all particles to be involved in PSO. Under this condition, the algorithm finds worse mean and worst results in six functions (i.e., g01, g02, g07, g08, g10 and g11). The details are shown in Table 12. Comparing Table 12 against Table 2, we can conclude that particles with a lower degree of constraint violation might cause the population to be trapped in a local optimum.

6.3. The effectiveness of DE-based search

In order to identify the effectiveness of updating pBest by DE, we designed an experiment that runs only the PSO described in Section 4. Table 13 shows the performance of PSO without DE in detail.


Fig. 1. The objective function value curves of PSO-DE, PSO and DE for test function g01.

Fig. 2. The objective function value curves of PSO-DE, PSO and DE for test function g02.


The global optima are found in only three functions (i.e., g06, g11 and g12); worse, the standard deviations over 100 runs increase significantly for all problems except g02 and g12. This experiment indicates that the use of DE is quite beneficial for improving the performance of PSO. Comparing Table 13 against Table 2, we can conclude that DE adjusts PSO's exploration and exploitation abilities to satisfy the requirements of different optimization tasks.

6.4. PSO-DE vs. HPSO

HPSO [26] updates the velocities and positions using Eqs. (2) and (3) and uses Deb's feasibility-based rule [6] to determine whether or not the updated particles replace their corresponding pbests. In contrast to our method, HPSO employs the mechanism of simulated annealing (SA) fused with the feasibility-based rule as a local search around gbest, to help the search escape from local optima and strike a good balance between exploration and exploitation. As shown in Table 14, the average search quality of PSO-DE is superior to that of HPSO. With respect to HPSO, the standard deviations of PSO-DE decrease significantly, and PSO-DE obtains better mean and worst solutions for the welded beam design problem, the pressure vessel design problem and the tension/compression spring design problem. Besides, it should be mentioned that PSO-DE uses at most 70,100 FFEs and as few as 10,600 FFEs, whereas HPSO performs 81,000 FFEs. Table 14 thus indicates the superiority of the mechanism that evolves 50% of the particles by PSO and updates pbest by DE, in terms of stability as well as lower computational cost.

Table 13
Experimental results obtained by PSO alone with 100 independent runs on 11 benchmark functions.

Function   Best          Median        Mean          SD        Worst         FFEs
g01        -14.826583    -12.704538    -12.810001    1.4E+00   -9.036253     140,100
g02        -0.6628897    -0.4771309    -0.4840580    7.6E-02   -0.3493741    140,100
g03        -1.0049865    -1.0048991    -1.0048795    1.0E+00   -1.0042690    140,100
g04        -30663.8563   -30601.0847   -30570.9286   8.1E+01   -30252.3258   70,100
g06        -6961.81388   -6961.81388   -6961.81387   6.5E-06   -6961.81381   140,100
g07        24.3338653    26.0181836    27.1373743    3.0E+00   38.4299014    140,100
g08        -0.09582594   -0.09582594   -0.09449230   9.4E-03   -0.02914408   10,600
g09        680.6345517   680.8197237   680.9710606   5.1E-01   684.5289146   140,100
g10        7051.220659   7736.101904   8209.829782   3.0E-05   18527.51823   140,100
g11        0.750000      0.857219      0.860530      8.4E-02   0.998823      70,100
g12        -1.000000     -1.000000     -1.000000     0.0E+00   -1.000000     17,600

Table 12
Experimental results obtained by a variant of PSO-DE that evolves all particles by PSO, with 100 independent runs on 11 benchmark functions.

Function   Best          Median        Mean          SD        Worst         FFEs
g01        -15.000000    -15.000000    -14.862187    5.0E-01   -12.453125    140,100
g02        -0.8036176    -0.759234     -0.7294312    7.7E-02   -0.4451479    140,100
g03        -1.0050100    -1.0050100    -1.0050100    1.1E-15   -1.0050100    140,100
g04        -30665.5387   -30665.5387   -30665.5387   8.7E-09   -30665.5387   70,100
g06        -6961.81388   -6961.81388   -6961.81388   1.8E-12   -6961.81388   140,100
g07        24.3062091    24.3062103    24.3062117    4.8E-06   24.3062488    140,100
g08        -0.09582594   -0.09582594   -0.09449230   9.4E-03   -0.02914408   10,800
g09        680.6300574   680.6300574   680.6300574   4.6E-13   680.6300574   140,100
g10        7049.248021   7049.248060   7049.248131   2.7E-04   7049.250303   140,100
g11        0.749999      0.749999      0.757095      4.0E-02   0.998719      70,100
g12        -1.000000     -1.000000     -1.000000     0.0E+00   -1.000000     17,700





Table 14
Comparing PSO-DE with respect to HPSO [26].

Function                            Method      Best         Mean         Worst        SD        FFEs
g04                                 PSO-DE      -30665.539   -30665.539   -30665.539   8.4E-10   70,100
                                    HPSO [26]   -30665.539   -30665.539   -30665.539   1.7E-06   81,000
g08                                 PSO-DE      -0.095826    -0.095826    -0.095826    1.3E-12   10,600
                                    HPSO [26]   -0.095825    -0.095825    -0.095825    1.2E-10   81,000
g12                                 PSO-DE      -1.000000    -1.000000    -1.000000    0.0E+00   17,600
                                    HPSO [26]   -1.000000    -1.000000    -1.000000    1.6E-15   81,000
Tension/compression spring design   PSO-DE      0.0126652    0.0126652    0.0126652    4.9E-12   42,100
                                    HPSO [26]   0.0126652    0.0127072    0.0127191    1.58E-05  81,000
Pressure vessel design              PSO-DE      6059.7143    6059.7143    6059.7143    1.0E-10   42,100
                                    HPSO [26]   6059.7143    6099.9323    6288.6770    8.6E+01   81,000
Welded beam design                  PSO-DE      1.724852     1.724852     1.724852     6.7E-16   66,600
                                    HPSO [26]   1.724852     1.749040     1.814295     4.0E-02   81,000



7. Conclusions

A new method named PSO-DE has been introduced in this paper, which improves the performance of particle swarm optimization by incorporating differential evolution. PSO-DE allows only half of the particles to be evolved by PSO. Those particles with higher degrees of constraint violation fly through the search space, according to the information delivered by their pbests and gbest, in search of better positions. Deb's feasibility-based rule [6] is used to compare each updated particle against its corresponding pbest, and the winner survives into pBest. Through the use of DE, each pbest communicates and collaborates with its neighbors in pBest in order to improve its performance. The approach obtains competitive results on 11 well-known benchmark functions adopted for constrained optimization and on five engineering optimization problems, at a relatively low computational cost (measured by the number of FFEs). The comparative study shows that PSO-DE has the potential to handle various COPs and that its performance is much better than that of eight other state-of-the-art constrained-optimization EAs in terms of the selected performance metrics. That is to say, this mechanism does improve the robustness of PSO. Future work will focus on two directions: (i) the application of PSO-DE to real COPs from industry; and (ii) the extension of the method to multi-objective problems.

Acknowledgments

The authors sincerely thank the anonymous reviewers for theirvaluable and constructive comments and suggestions.

This research was supported in part by the National Natural Science Foundation of China under Grants 60805027 and 90820302, and in part by the Research Fund for the Doctoral Program of Higher Education under Grant 200805330005.

Appendix A. Benchmark functions

A.1. g01

Minimize

$$f(\vec{x}) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0\\
g_2(\vec{x}) &= 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0\\
g_3(\vec{x}) &= 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0\\
g_4(\vec{x}) &= -8x_1 + x_{10} \le 0\\
g_5(\vec{x}) &= -8x_2 + x_{11} \le 0\\
g_6(\vec{x}) &= -8x_3 + x_{12} \le 0\\
g_7(\vec{x}) &= -2x_4 - x_5 + x_{10} \le 0\\
g_8(\vec{x}) &= -2x_6 - x_7 + x_{11} \le 0\\
g_9(\vec{x}) &= -2x_8 - x_9 + x_{12} \le 0
\end{aligned}$$

where the bounds are $0 \le x_i \le 1$ ($i = 1, \ldots, 9$), $0 \le x_i \le 100$ ($i = 10, 11, 12$) and $0 \le x_{13} \le 1$. The global minimum is at $\vec{x}^* = (1,1,1,1,1,1,1,1,1,3,3,3,1)$, where $f(\vec{x}^*) = -15$.
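As an illustration of how such a benchmark is evaluated in practice, the following C++ sketch (ours, not part of the original paper) encodes g01 and checks the optimum listed above:

#include <cstdio>
#include <vector>

// Returns f(x) for g01 and fills g with the raw values of the nine
// linear inequality constraints g_j(x) <= 0.
double g01(const std::vector<double>& x, std::vector<double>& g) {
    double f = 0.0;
    for (int i = 0; i < 4; ++i)  f += 5.0 * x[i] - 5.0 * x[i] * x[i];
    for (int i = 4; i < 13; ++i) f -= x[i];
    g = { 2*x[0] + 2*x[1] + x[9]  + x[10] - 10,
          2*x[0] + 2*x[2] + x[9]  + x[11] - 10,
          2*x[1] + 2*x[2] + x[10] + x[11] - 10,
         -8*x[0] + x[9], -8*x[1] + x[10], -8*x[2] + x[11],
         -2*x[3] - x[4] + x[9], -2*x[5] - x[6] + x[10],
         -2*x[7] - x[8] + x[11] };
    return f;
}

int main() {
    std::vector<double> xstar = {1,1,1,1,1,1,1,1,1,3,3,3,1}, g;
    std::printf("f(x*) = %g\n", g01(xstar, g));  // expected: -15
    return 0;
}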

A.2. g02

Maximize

$$f(\vec{x}) = \left| \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i\,x_i^2}} \right|$$

subject to

$$g_1(\vec{x}) = 0.75 - \prod_{i=1}^{n} x_i \le 0, \qquad g_2(\vec{x}) = \sum_{i=1}^{n} x_i - 7.5n \le 0$$

where $n = 20$ and $0 \le x_i \le 10$ ($i = 1, \ldots, n$). The global maximum is unknown; the best reported solution is $f(\vec{x}^*) = 0.803619$. (The maximization problems g02, g03, g08 and g12 are solved by minimizing $-f$, hence the negative values reported in Tables 2–5.)

A.3. g03

Maximize

$$f(\vec{x}) = (\sqrt{n})^n \prod_{i=1}^{n} x_i$$

subject to

$$h(\vec{x}) = \sum_{i=1}^{n} x_i^2 - 1 = 0$$

where $n = 10$ and $0 \le x_i \le 10$ ($i = 1, \ldots, n$). The global maximum is at $x_i^* = 1/\sqrt{n}$ ($i = 1, \ldots, n$), where $f(\vec{x}^*) = 1$.

A.4. g04

Minimize

$$f(\vec{x}) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 - 0.0022053x_3x_5 - 92 \le 0\\
g_2(\vec{x}) &= -85.334407 - 0.0056858x_2x_5 - 0.0006262x_1x_4 + 0.0022053x_3x_5 \le 0\\
g_3(\vec{x}) &= 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3^2 - 110 \le 0\\
g_4(\vec{x}) &= -80.51249 - 0.0071317x_2x_5 - 0.0029955x_1x_2 - 0.0021813x_3^2 + 90 \le 0\\
g_5(\vec{x}) &= 9.300961 + 0.0047026x_3x_5 + 0.0012547x_1x_3 + 0.0019085x_3x_4 - 25 \le 0\\
g_6(\vec{x}) &= -9.300961 - 0.0047026x_3x_5 - 0.0012547x_1x_3 - 0.0019085x_3x_4 + 20 \le 0
\end{aligned}$$

where $78 \le x_1 \le 102$, $33 \le x_2 \le 45$ and $27 \le x_i \le 45$ ($i = 3, 4, 5$). The optimum solution is $\vec{x}^* = (78, 33, 29.995256025682, 45, 36.775812905788)$, where $f(\vec{x}^*) = -30665.539$.

A.5. g06

Minimize

$$f(\vec{x}) = (x_1 - 10)^3 + (x_2 - 20)^3$$

subject to

$$g_1(\vec{x}) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0$$
$$g_2(\vec{x}) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$$

where $13 \le x_1 \le 100$ and $0 \le x_2 \le 100$. The optimum solution is $\vec{x}^* = (14.095, 0.84296)$, where $f(\vec{x}^*) = -6961.81388$.

A.6. g07

Minimize

$$\begin{aligned}
f(\vec{x}) ={}& x_1^2 + x_2^2 + x_1x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2\\
&+ (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2\\
&+ (x_{10} - 7)^2 + 45
\end{aligned}$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0\\
g_2(\vec{x}) &= 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0\\
g_3(\vec{x}) &= -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0\\
g_4(\vec{x}) &= 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0\\
g_5(\vec{x}) &= 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0\\
g_6(\vec{x}) &= x_1^2 + 2(x_2 - 2)^2 - 2x_1x_2 + 14x_5 - 6x_6 \le 0\\
g_7(\vec{x}) &= 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0\\
g_8(\vec{x}) &= -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0
\end{aligned}$$

where $-10 \le x_i \le 10$ ($i = 1, 2, \ldots, 10$). The optimum solution is $\vec{x}^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927)$, where $f(\vec{x}^*) = 24.3062091$.

A.7. g08

Maximize

$$f(\vec{x}) = \frac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3(x_1 + x_2)}$$

subject to

$$g_1(\vec{x}) = x_1^2 - x_2 + 1 \le 0$$
$$g_2(\vec{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0$$

where $0 \le x_1 \le 10$ and $0 \le x_2 \le 10$. The optimum solution is located at $\vec{x}^* = (1.2279713, 4.2453733)$, where $f(\vec{x}^*) = 0.095825$.

A.8. g09

Minimize

$$\begin{aligned}
f(\vec{x}) ={}& (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2\\
&+ x_7^4 - 4x_6x_7 - 10x_6 - 8x_7
\end{aligned}$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0\\
g_2(\vec{x}) &= -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0\\
g_3(\vec{x}) &= -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0\\
g_4(\vec{x}) &= 4x_1^2 + x_2^2 - 3x_1x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0
\end{aligned}$$

where $-10 \le x_i \le 10$ ($i = 1, 2, \ldots, 7$). The optimum solution is $\vec{x}^* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.1038131, 1.594227)$, where $f(\vec{x}^*) = 680.6300573$.

A.9. g10

Minimize

$$f(\vec{x}) = x_1 + x_2 + x_3$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= -1 + 0.0025(x_4 + x_6) \le 0\\
g_2(\vec{x}) &= -1 + 0.0025(x_5 + x_7 - x_4) \le 0\\
g_3(\vec{x}) &= -1 + 0.01(x_8 - x_5) \le 0\\
g_4(\vec{x}) &= -x_1x_6 + 833.33252x_4 + 100x_1 - 83333.333 \le 0\\
g_5(\vec{x}) &= -x_2x_7 + 1250x_5 + x_2x_4 - 1250x_4 \le 0\\
g_6(\vec{x}) &= -x_3x_8 + 1250000 + x_3x_5 - 2500x_5 \le 0
\end{aligned}$$

where $100 \le x_1 \le 10000$, $1000 \le x_i \le 10000$ ($i = 2, 3$) and $10 \le x_i \le 1000$ ($i = 4, \ldots, 8$). The optimum solution is $\vec{x}^* = (579.3066, 1359.9707, 5109.9707, 182.0177, 295.601, 217.982, 286.165, 395.6012)$, where $f(\vec{x}^*) = 7049.248021$.


A.10. g11

Minimize

$$f(\vec{x}) = x_1^2 + (x_2 - 1)^2$$

subject to

$$h(\vec{x}) = x_2 - x_1^2 = 0$$

where $-1 \le x_1 \le 1$ and $-1 \le x_2 \le 1$. The optimum solution is $\vec{x}^* = (\pm 1/\sqrt{2},\ 1/2)$, where $f(\vec{x}^*) = 0.75$.

A.11. g12

Maximize

$$f(\vec{x}) = \frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}$$

subject to

$$g(\vec{x}) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0$$

where $0 \le x_i \le 10$ ($i = 1, 2, 3$) and $p, q, r = 1, 2, \ldots, 9$. The feasible region of the search space consists of $9^3$ disjoint spheres; a point $(x_1, x_2, x_3)$ is feasible if and only if there exist $p, q, r$ such that the above inequality holds. The optimum solution is located at $\vec{x}^* = (5, 5, 5)$, where $f(\vec{x}^*) = 1$.

Appendix B. Engineering design examples

B.1. Welded beam design

A welded beam is designed for minimum cost subject to constraints on the shear stress ($\tau$), the bending stress in the beam ($\sigma$), the buckling load on the bar ($P_c$) and the end deflection of the beam ($\delta$). There are four design variables: $h\ (x_1)$, $l\ (x_2)$, $t\ (x_3)$ and $b\ (x_4)$.

Minimize

$$f(\vec{x}) = 1.10471x_1^2x_2 + 0.04811x_3x_4(14.0 + x_2)$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= \tau(\vec{x}) - \tau_{max} \le 0\\
g_2(\vec{x}) &= \sigma(\vec{x}) - \sigma_{max} \le 0\\
g_3(\vec{x}) &= x_1 - x_4 \le 0\\
g_4(\vec{x}) &= 0.10471x_1^2 + 0.04811x_3x_4(14.0 + x_2) - 5.0 \le 0\\
g_5(\vec{x}) &= 0.125 - x_1 \le 0\\
g_6(\vec{x}) &= \delta(\vec{x}) - \delta_{max} \le 0\\
g_7(\vec{x}) &= P - P_c(\vec{x}) \le 0
\end{aligned}$$

The other parameters are defined as follows:

$$\tau(\vec{x}) = \sqrt{(\tau')^2 + (\tau'')^2 + \frac{2\tau'\tau'' x_2}{2R}}, \qquad \tau' = \frac{P}{\sqrt{2}\,x_1x_2}$$

$$\tau'' = \frac{MR}{J}, \qquad M = P\left(L + \frac{x_2}{2}\right), \qquad R = \sqrt{\left(\frac{x_1 + x_3}{2}\right)^2 + \frac{x_2^2}{4}}$$

$$J = 2\left\{\sqrt{2}\,x_1x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}, \qquad \sigma(\vec{x}) = \frac{6PL}{x_4x_3^2}$$

$$\delta(\vec{x}) = \frac{4PL^3}{Ex_4x_3^3}, \qquad P_c(\vec{x}) = \frac{4.013\sqrt{EG\,x_3^2x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$$

where $P = 6000$ lb, $L = 14$ in., $\delta_{max} = 0.25$ in., $E = 30 \times 10^6$ psi, $G = 12 \times 10^6$ psi, $\tau_{max} = 13{,}600$ psi, $\sigma_{max} = 30{,}000$ psi and $0.1 \le x_i \le 10.0$ ($i = 1, 2, 3, 4$).
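As a quick consistency check (our illustration, not part of the original paper), the objective can be re-evaluated at the best solution reported in Section 5.2.1:

#include <cstdio>

int main() {
    const double x1 = 0.205729640, x2 = 3.470488666,
                 x3 = 9.036623910, x4 = 0.205729640;
    // f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2)
    double f = 1.10471 * x1 * x1 * x2 + 0.04811 * x3 * x4 * (14.0 + x2);
    std::printf("f = %.9f\n", f);  // approximately 1.724852309
    return 0;
}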

B.2. Pressure vessel design

In this problem, the objective is to minimize the total cost $f(\vec{x})$, including the costs of the material, forming and welding. There are four design variables: $T_s$ ($x_1$, thickness of the shell), $T_h$ ($x_2$, thickness of the head), $R$ ($x_3$, inner radius) and $L$ ($x_4$, length of the cylindrical section of the vessel, not including the head). Among the four design variables, $T_s$ and $T_h$ are integer multiples of 0.0625 in., the available thicknesses of rolled steel plates, while $R$ and $L$ are continuous variables.

Minimize

$$f(\vec{x}) = 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= -x_1 + 0.0193x_3 \le 0\\
g_2(\vec{x}) &= -x_2 + 0.00954x_3 \le 0\\
g_3(\vec{x}) &= -\pi x_3^2x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0\\
g_4(\vec{x}) &= x_4 - 240 \le 0
\end{aligned}$$

where $1 \le x_1 \le 99$, $1 \le x_2 \le 99$, $10 \le x_3 \le 200$ and $10 \le x_4 \le 200$.

B.3. Speed reducer design

Minimize

$$\begin{aligned}
f(\vec{x}) ={}& 0.7854x_1x_2^2(3.3333x_3^2 + 14.9334x_3 - 43.0934) - 1.508x_1(x_6^2 + x_7^2)\\
&+ 7.4777(x_6^3 + x_7^3) + 0.7854(x_4x_6^2 + x_5x_7^2)
\end{aligned}$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= \frac{27}{x_1x_2^2x_3} - 1 \le 0\\
g_2(\vec{x}) &= \frac{397.5}{x_1x_2^2x_3^2} - 1 \le 0\\
g_3(\vec{x}) &= \frac{1.93x_4^3}{x_2x_6^4x_3} - 1 \le 0\\
g_4(\vec{x}) &= \frac{1.93x_5^3}{x_2x_7^4x_3} - 1 \le 0\\
g_5(\vec{x}) &= \frac{[(745x_4/(x_2x_3))^2 + 16.9 \times 10^6]^{1/2}}{110.0x_6^3} - 1 \le 0\\
g_6(\vec{x}) &= \frac{[(745x_5/(x_2x_3))^2 + 157.5 \times 10^6]^{1/2}}{85.0x_7^3} - 1 \le 0\\
g_7(\vec{x}) &= \frac{x_2x_3}{40} - 1 \le 0\\
g_8(\vec{x}) &= \frac{5x_2}{x_1} - 1 \le 0\\
g_9(\vec{x}) &= \frac{x_1}{12x_2} - 1 \le 0\\
g_{10}(\vec{x}) &= \frac{1.5x_6 + 1.9}{x_4} - 1 \le 0\\
g_{11}(\vec{x}) &= \frac{1.1x_7 + 1.9}{x_5} - 1 \le 0
\end{aligned}$$

where $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, $7.3 \le x_4 \le 8.3$, $7.3 \le x_5 \le 8.3$, $2.9 \le x_6 \le 3.9$ and $5.0 \le x_7 \le 5.5$.


B.4. Three-bar truss design

Minimize

$$f(\vec{x}) = (2\sqrt{2}\,x_1 + x_2)\,l$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0\\
g_2(\vec{x}) &= \frac{x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0\\
g_3(\vec{x}) &= \frac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0
\end{aligned}$$

where $0 \le x_1 \le 1$ and $0 \le x_2 \le 1$; $l = 100$ cm, $P = 2$ kN/cm$^2$ and $\sigma = 2$ kN/cm$^2$.

B.5. A tension/compression spring design

This problem requires minimizing the weight $f(\vec{x})$ of a tension/compression spring subject to constraints on the minimum deflection, the shear stress, the surge frequency, and limits on the outside diameter and on the design variables. The design variables are the mean coil diameter $D$ ($x_2$), the wire diameter $d$ ($x_1$) and the number of active coils $P$ ($x_3$).

Minimize

$$f(\vec{x}) = (x_3 + 2)x_2x_1^2$$

subject to

$$\begin{aligned}
g_1(\vec{x}) &= 1 - \frac{x_2^3x_3}{71785x_1^4} \le 0\\
g_2(\vec{x}) &= \frac{4x_2^2 - x_1x_2}{12566(x_2x_1^3 - x_1^4)} + \frac{1}{5108x_1^2} - 1 \le 0\\
g_3(\vec{x}) &= 1 - \frac{140.45x_1}{x_2^2x_3} \le 0\\
g_4(\vec{x}) &= \frac{x_1 + x_2}{1.5} - 1 \le 0
\end{aligned}$$

where $0.05 \le x_1 \le 2$, $0.25 \le x_2 \le 1.3$ and $2 \le x_3 \le 15$.
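Similarly, as a quick consistency check (our illustration), the spring objective can be re-evaluated at the best solution reported in Section 5.2.2:

#include <cstdio>

int main() {
    const double x1 = 0.0516888101, x2 = 0.3567117001, x3 = 11.289319935;
    double f = (x3 + 2.0) * x2 * x1 * x1;  // f(x) = (x3 + 2) x2 x1^2
    std::printf("f = %.12f\n", f);  // approximately 0.012665232900
    return 0;
}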

References

[1] C.A.C. Coello, Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art, Computer Methods in Applied Mechanics and Engineering 191 (2002) 1245–1287.
[2] Z. Michalewicz, K. Deb, M. Schmidt, T. Stidsen, Test-case generator for nonlinear continuous parameter optimization techniques, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 187–215.
[3] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.
[4] B. Tessema, G. Yen, A self adaptive penalty function based algorithm for constrained optimization, in: Proceedings of the 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 246–253.
[5] F.Z. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics and Computation 186 (1) (2007) 340–356.
[6] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2000) 311–338.
[7] T.P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
[8] A. Amirjanov, The development of a changing range genetic algorithm, Computer Methods in Applied Mechanics and Engineering 195 (2006) 2495–2508.
[9] N. Mladenovic, M. Drazic, V. Kovacevic-Vujcic, M. Cangalovic, General variable neighborhood search for the continuous optimization, European Journal of Operational Research 191 (3) (2008) 753–770.
[10] E. Mezura-Montes, C.A.C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Transactions on Evolutionary Computation 9 (1) (2005) 1–17.
[11] T. Ray, K.M. Liew, Society and civilization: an optimization algorithm based on the simulation of social behavior, IEEE Transactions on Evolutionary Computation 7 (4) (2003) 386–396.
[12] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, European Journal of Operational Research 185 (3) (2008) 1155–1173.
[13] Z. Cai, Y. Wang, A multiobjective optimization-based evolutionary algorithm for constrained optimization, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 658–675.
[14] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58–73.
[15] R.A. Krohling, L. dos Santos Coelho, Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 36 (6) (2006) 1407–1416.
[16] W. Du, B. Li, Multi-strategy ensemble particle swarm optimization for dynamic optimization, Information Sciences 178 (15) (2008) 3096–3109.
[17] L. dos Santos Coelho, C.-S. Lee, Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches, International Journal of Electrical Power & Energy Systems 30 (5) (2008) 297–307.
[18] M. Maitra, A. Chatterjee, A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding, Expert Systems with Applications 34 (2) (2008) 1341–1350.
[19] F.A. Guerra, L. dos S. Coelho, Multi-step ahead nonlinear identification of Lorenz's chaotic system using radial basis neural network with learning by clustering and particle swarm optimization, Chaos, Solitons & Fractals 35 (5) (2008) 967–979.
[20] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341–359.
[21] A. Salman, A.P. Engelbrecht, M.G.H. Omran, Empirical analysis of self-adaptive differential evolution, European Journal of Operational Research 183 (2) (2007) 785–804.
[22] R. Storn, System design by constraint adaptation and differential evolution, IEEE Transactions on Evolutionary Computation 3 (1) (1999) 22–34.
[23] R.L. Becerra, C.A.C. Coello, Cultured differential evolution for constrained optimization, Computer Methods in Applied Mechanics and Engineering 195 (2006) 4303–4322.
[24] K. Zielinski, R. Laur, Constrained single-objective optimization using differential evolution, in: Proceedings of the 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 223–230.
[25] S. Kukkonen, J. Lampinen, Constrained real-parameter optimization with generalized differential evolution, in: Proceedings of the 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 207–214.
[26] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation 186 (2) (2007) 1407–1422.