[IEEE 2013 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE) - Morelos, Mexico, 2013.11.19-2013.11.22]
978-1-4799-2252-9/13 $31.00 © 2013 IEEE  DOI 10.1109/ICMEAE.2013.14

Proposal for Parameter Selection of the Vortex Particle Swarm Optimization During the Dispersion Stage

Helbert Eduardo Espitia
Department of Systems Engineering
Universidad Distrital Francisco José de Caldas
Bogotá, Colombia
Email: [email protected]

Jorge Ivan Sofrony
Department of Mechanical and Mechatronics Engineering
Universidad Nacional de Colombia
Bogotá, Colombia
Email: [email protected]

Abstract—This paper presents the parameter selection for the optimization algorithm Vortex Particle Swarm Optimization (VPSO). The optimization algorithm switches between translational (convergence) and vortex (dispersion) behavior of the swarm to achieve a good exploration of the search space and avoid getting trapped in local minima. This proposal is based on living-organism strategies such as foraging and predator avoidance. The selection of parameters is proposed based on an approximate analysis of the swarm behavior.

Index Terms—Bio-inspired optimization, PSO, parameter selection.

I. INTRODUCTION

Some developments in bio-inspired optimization techniques are based on colony behavior ([1], [2]) and evolution [3], as well as on the behavior of unicellular organisms such as bacteria [4], swarms of bees [5], bats [6], [7], fireflies [8] and primates [9]. A general particle swarm algorithm is called Particle Swarm Optimization (PSO), which was initially introduced by James Kennedy and Russell Eberhart in [10].

A. PSO algorithms and local minima evasion

A problem identified in PSO algorithms is convergence to local minima [11], [12], despite many modifications that seek to avoid local minima by inducing an “explosion” of the swarm (see for example [13], [14], [15]).

A general strategy to avoid early convergence in PSO algorithms is to force the dispersion of the swarm once it has reached a (local) minimum. Examples of this approach, which uses the concept of dispersion to escape local minima, can be found in [14] with the Supernova Optimization algorithm and in [15] with an optimization algorithm based on bacterial foraging.

The Vortex Particle Swarm Optimization (VPSO) algorithm proposed in [16] uses a self-propelled particle model to escape from local minima and to induce dispersion to achieve a better exploration of the search space. In this paper we present the parameter selection of VPSO based on an approximate analysis of the particle swarm, which allows ranges for the parameters to be established. The aspects considered are the turning radius of the swarm, the maximum speed of the particles for a given search space, and the time spent by the particles to escape from a local minimum.

B. Notation and definitions

Notation is standard throughout the paper. Vectors are denoted as \vec{(\cdot)}; the Euclidean norm of a vector \vec{x} \in \mathbb{R}^n is given by \|\vec{x}\| = \sqrt{\vec{x}^T \vec{x}}. A time function f(t) has a time derivative denoted by \dot{f}(t) = df(t)/dt, and the operator \vec{\nabla}(\cdot) is the vector of partial derivatives of a given vector function.

The search space \Omega \subset \mathbb{R}^D is defined in [17] as:

    \Omega = [x_1^L, x_1^U] \times [x_2^L, x_2^U] \times \cdots \times [x_D^L, x_D^U] \subset \mathbb{R}^D    (1)

where x_d^L and x_d^U are the lower and upper bounds of the search space for dimension d = 1, 2, \ldots, D. The range of the search space \Omega for dimension d is:

    \mathrm{range}_d(\Omega) = x_d^U - x_d^L    (2)

II. DYNAMIC MODEL OF THE SWARM

The model considered is based on the locomotion of the zooplankton Daphnia, which has translational movements and vortex-like behavior [18]. This is a model of self-propelled particles [19], [20], capable of describing the motion of particles with vorticity; such circular motion around a point may be a good strategy to avoid local minima [21]. The selected model is:

    d\vec{r}_i/dt = \vec{v}_i    (3)

    m_i \, d\vec{v}_i/dt = \vec{F}_{pro,i} + \vec{F}_{int,i} + \vec{F}_{obj,i}    (4)

where \vec{r}_i is the position vector of the i-th particle, \vec{v}_i is its velocity and i = \{1, 2, \ldots, N\}, where N is the total number of particles in the swarm. The mass of the particles is given by m_i, and \vec{F}_{pro,i}, \vec{F}_{int,i}, \vec{F}_{obj,i} correspond to the self-propulsion force, the interaction force between particles and the objective function information, respectively.

The self-propulsion term is nonlinear and is given by:

    \vec{F}_{pro,i} = (\alpha - \beta \|\vec{v}_i\|^2)\vec{v}_i    (5)

The interaction between particles is modeled as a quadratic-type potential, i.e. U_{int}(\vec{r}) = -\frac{c}{2N} \sum_{j=1}^{N} \|\vec{r} - \vec{r}_j\|^2; thus the interaction force associated with this potential is:

    \vec{F}_{int,i} = \vec{\nabla} U_{int}(\vec{r}_i) = -a(\vec{r}_i - \vec{R})    (6)

where \vec{R} corresponds to the center of mass of the swarm, defined as:

    \vec{R} = \frac{1}{N} \sum_{j=1}^{N} \vec{r}_j    (7)

Finally, the objective function information is incorporated via the potential U_{obj}(\vec{r}), and a positive constant k is used to weight the influence of this function on the dynamics of the particles. The force associated with the potential of the objective function is:

    \vec{F}_{obj,i} = -k \vec{\nabla} U_{obj}(\vec{r}_i)    (8)

Parameters m_i, \alpha, \beta, a and k are all real, positive, user-defined values.
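To make the model concrete, the three force terms of equations (5), (6) and (8) can be sketched with NumPy; this is a minimal illustration, and the function and argument names are ours, not from the paper:

```python
import numpy as np

def forces(r, v, grad_Uobj, alpha, beta, a, k):
    """Total force of equation (4) for all N particles at once
    (r and v are N x D arrays of positions and velocities)."""
    R = r.mean(axis=0)                           # center of mass, eq. (7)
    speed2 = np.sum(v**2, axis=1, keepdims=True)
    F_pro = (alpha - beta * speed2) * v          # self-propulsion, eq. (5)
    F_int = -a * (r - R)                         # particle interaction, eq. (6)
    F_obj = -k * grad_Uobj(r)                    # objective information, eq. (8)
    return F_pro + F_int + F_obj
```

For a quadratic bowl U_obj(\vec{r}) = \frac{1}{2}\|\vec{r}\|^2 the gradient is simply \vec{r}, so `grad_Uobj` can be passed as `lambda r: r`.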

III. ANALYSIS OF THE PARTICLE SWARM

The idea behind the proposed algorithm is to force the swarm to converge to some minimum (equilibrium state) and, once it has found a minimum, force it to exhibit vorticity behavior with growing dispersion. This switched behavior gives the swarm the ability to escape from local minima. The self-propulsion parameter \alpha will be used to switch the swarm's behavior and will be considered a function of time such that 0 \le \alpha(t) \le \alpha_{max}; in the following analysis, two extreme cases are considered: \alpha(t) = 0 and \alpha(t) = \alpha_{max}.

A. Energy analysis

Consider K_i and U_i as the kinetic and potential energy of the i-th particle, respectively. These quantities are given by:

    K_i = \frac{1}{2} m_i \|\vec{v}_i\|^2, \qquad U_i = U_{int}(\vec{r}_i) + k U_{obj}(\vec{r}_i)    (9)

The total energy of the i-th particle is defined as E_i = K_i + U_i, and the total energy of the swarm E_T corresponds to the sum of the total energy of all the particles. Taking the time derivative of the total energy of each particle and using equation (4), we have:

    \dot{E}_i = (\alpha(t) - \beta \|\vec{v}_i\|^2)\|\vec{v}_i\|^2 - a(\vec{r}_i - \vec{R}) \cdot \dot{\vec{R}}    (10)

Adding the contributions of each particle and using the equality \sum_{i=1}^{N}(\vec{r}_i - \vec{R}) = 0:

    \dot{E}_T = \sum_{i=1}^{N} \dot{E}_i = \sum_{i=1}^{N} (\alpha(t) - \beta \|\vec{v}_i\|^2)\|\vec{v}_i\|^2    (11)

From equation (11) it is possible to conclude that a state of minimum energy (i.e. \dot{E}_T = 0) is reached when \|\vec{v}_i\| = 0 or \|\vec{v}_i\|^2 = \alpha(t)/\beta. We now consider two cases where \alpha(t) takes a fixed value: (i) \alpha(t) = 0 and (ii) \alpha(t) = \alpha_{max}, where \alpha_{max} is considered a large but bounded positive value. The first case forces the swarm to converge to some equilibrium point, while the second forces the swarm to exhibit vorticity and maximum dispersion.

For case (i), i.e. \alpha(t) = 0, equation (11) reduces to:

    \dot{E}_T = -\sum_{i=1}^{N} (\beta \|\vec{v}_i\|^2)\|\vec{v}_i\|^2    (12)

Since the energy of the system has a negative definite time derivative for all \|\vec{v}_i\| \neq 0, the system tends to a minimum energy state with \|\vec{v}_i\| = 0.

For case (ii), the parameter \alpha(t) is fixed at a large but bounded value \alpha_{max}, so that:

    \dot{E}_T = \sum_{i=1}^{N} (\alpha_{max} - \beta \|\vec{v}_i\|^2)\|\vec{v}_i\|^2    (13)

In this particular case, the system is bounded [22] with velocity:

    \|\vec{v}_i\| = \sqrt{\alpha_{max}/\beta}    (14)

Finally, notice that if \dot{E}_T = 0, then the system must remain in a state of constant energy; hence it must be bounded both in position and velocity.
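The limit speed of equation (14) can be checked numerically with a simple explicit Euler integration of one particle driven only by the propulsion force of equation (5); a sketch (the parameter values are only illustrative):

```python
import numpy as np

# m dv/dt = (alpha - beta*|v|^2) v, integrated with explicit Euler.
# The speed should settle at sqrt(alpha/beta) regardless of direction.
m, alpha, beta, dt = 1.0, 16.0, 1.0, 0.001
v = np.array([0.5, 0.2])              # arbitrary nonzero initial velocity
for _ in range(20000):
    v = v + ((alpha - beta * (v @ v)) * v) * dt / m
print(np.linalg.norm(v))              # ≈ 4.0 = sqrt(16/1)
```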


IV. ESTIMATION OF THE SWARM RADIUS

For uniform circular movement of the i-th particle, we have the tangential force F_T and the normal force F_N, so that:

    F_N = m_i a_N = m_i v^2 / R    (15)

    F_T = m_i a_T = m_i \dot{w} R    (16)

where a_N and a_T are the normal and tangential accelerations, R is the radius, v the linear velocity, w the angular velocity and \dot{w} the angular acceleration; for uniform circular movement, \dot{w} = 0.

Taking into account a normal force F_{U_N} (towards the center of the swarm) given by the external potential U_{obj}, a normal interaction force F_N = aR, and the swarm in a circular motion with radius R, we have:

    m_i v^2 / R = aR + F_{U_N}    (17)

    aR^2 + F_{U_N} R - m_i v^2 = 0    (18)

therefore:

    R = \frac{-F_{U_N} \pm \sqrt{F_{U_N}^2 + 4 a m_i v^2}}{2a}    (19)

To determine the root sign, the case v = 0, where the radius must tend to zero, is considered; therefore it is concluded that the sign of the radical is positive. Finally, to estimate the radius, from the energy analysis we have v = \sqrt{\alpha_{max}/\beta}.
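The positive root of equation (19) is straightforward to evaluate; as a quick sanity check (our code, using the dispersion-stage values later reported in Section VIII), the radius comes out close to R_max = 5:

```python
import numpy as np

def swarm_radius(F_UN, a, m, v):
    """Positive root of a R^2 + F_UN R - m v^2 = 0, equation (19)."""
    return (-F_UN + np.sqrt(F_UN**2 + 4.0 * a * m * v**2)) / (2.0 * a)

# Section VIII values: a = 0.43, F_UN = k_U = 1.07, m_i = 1, v_max = 4
print(swarm_radius(1.07, 0.43, 1.0, 4.0))   # ≈ 4.98, close to R_max = 5
```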

V. VORTEX PARTICLE SWARM OPTIMIZATION

The algorithm Vortex Particle Swarm Optimization employs the dispersion process of the particles to achieve their escape from local minima.

The underlying idea is that while the swarm presents translational movements (\alpha(t) = 0), the swarm is mildly dispersed and travels towards a minimum. Once the objective function evaluated at the center of mass does not decrease any further, \alpha(t) increases and a dispersion process is induced. It is assumed that the dispersion will force the swarm to find a new lower value, at which point \alpha is set back to zero and the swarm travels towards a new minimum value. By repeating this sequence, the swarm travels from minimum to minimum until it cannot find a new minimum value, at which point the dispersion of the swarm will increase.

A. Escaping local minima strategy

The proposed strategy to escape from local minima is to increase the particle dispersion by increasing the propulsion energy via the \alpha parameter once the swarm has found a possible local optimum. For this, the variable U_{min}, which is updated with the minimum value of the objective function evaluated at the center of mass U_{obj}(\vec{R}), is considered:

    U_{min} = \begin{cases} U_{obj}(\vec{R}), & \text{if } U_{min} \ge U_{obj}(\vec{R}); \\ U_{min}, & \text{if } U_{min} < U_{obj}(\vec{R}). \end{cases}    (20)

B. Proposal for increasing energy

The addition of energy is performed through the \alpha factor, which is considered as:

    \begin{cases} \alpha = 0, & \text{if } U_{min} \ge U_{obj}(\vec{R}); \\ d\alpha/dt = g, & \text{if } U_{min} < U_{obj}(\vec{R}). \end{cases}    (21)

Function g employs a time T_\alpha for increasing energy and a time T_R in which the particles are dispersed in a circular motion over the search space. The total number of iterations taken by the cycle is K_\alpha + K_R, with K_\alpha = T_\alpha/\Delta t and K_R = T_R/\Delta t. To count the iterations, the variable cont is used. The expression for g is:

    g = \begin{cases} \tau_c, & \text{if } 0 \le cont < K_\alpha; \\ 0, & \text{if } K_\alpha \le cont < K_\alpha + K_R. \end{cases}    (22)

After the swarm finds a local minimum, the term \alpha increases and the swarm starts to disperse and exhibit vortex behavior. This continues until the particles escape the local minimum and find a new (smaller) update of U_{min}, at which point the added “energy” is released in its entirety (\alpha(t) = 0).
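The switching rule of equations (21) and (22) can be sketched as a small state-update function; this is our sketch, and the wrap-around of the counter at the end of the K_\alpha + K_R cycle is our assumption about how the cycle restarts:

```python
def alpha_schedule(alpha, improving, cont, tau_c, K_alpha, K_R, dt):
    """One discrete update of alpha following eqs. (21) and (22).
    improving: True when Umin >= Uobj(R), i.e. the swarm still finds
    better values and stays in the convergence stage."""
    if improving:
        return 0.0, 0                          # release all added energy
    g = tau_c if cont < K_alpha else 0.0       # ramp K_alpha steps, then hold
    return alpha + g * dt, (cont + 1) % (K_alpha + K_R)
```

A caller would apply this once per iteration, feeding back the returned (alpha, cont) pair.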

C. Issues regarding the objective function

For the force produced by the objective function, two aspects are considered. The first is the minimum particle speed, which is directly related to the minimum step when a particle approaches a local minimum. The second is to limit the maximum speed produced by the objective potential in order to prevent uncontrolled velocity.

In the first case, when the swarm converges to a local minimum, it is intended that the gradient is approximately zero; additionally, the energy analysis shows that with \alpha = 0 the swarm converges to a minimum energy state. However, this convergence depends on the magnitude of the velocity: at low speeds the algorithm tends to take longer. Considering this, a minimum value for the magnitude of the particle velocity is established, so that if the magnitude of the velocity is less than the minimum speed, the speed magnitude is set equal to this value.

With this strategy, it is expected that the particles move at the minimum speed until they no longer find a better value of U_{obj}, at which point the swarm begins the dispersion process.

In the second case, to control the force produced by the objective function, the adjustment of k is proposed considering the maximum gradient value found.

D. Algorithm stopping criteria

As mentioned before, the algorithm uses dispersion to escape local minima and to explore the search space efficiently; therefore, the dispersion level may be a suitable stopping criterion for the VPSO algorithm. This proposal considers the number of particles remaining in the search space to stop the algorithm: if the number of particles in the search space is less than a specified value, the algorithm stops.
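This stopping criterion can be implemented by counting the particles whose coordinates all remain inside \Omega of equation (1); a minimal sketch (function name ours):

```python
import numpy as np

def particles_in_omega(r, x_low, x_up):
    """Count particles (rows of r) with every coordinate inside
    the search space Omega of equation (1)."""
    inside = np.all((r >= x_low) & (r <= x_up), axis=1)
    return int(np.sum(inside))
```

The algorithm would stop once, e.g., `particles_in_omega(r, -5, 5)` drops below a chosen threshold.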

E. Algorithm implementation

The dynamic equations of the swarm are implemented as discrete-time updates with time increment \Delta t. The equations in (3) and (4) are implemented as:

    \vec{r}_i[n] = \vec{r}_i[n-1] + \vec{v}_i \Delta t    (23)

    \vec{\nu}_i[n] = \vec{\nu}_i[n-1] + (\vec{F}_{pro,i} + \vec{F}_{int,i} + \vec{F}_{obj,i}) \Delta t / m_i    (24)

where \vec{\nu}_i[n] is the velocity of each particle given by the forces. Considering the value of the minimum speed v_{min}, the speed of each particle is determined as:

    \vec{v}_i[n] = \begin{cases} \vec{\nu}_i[n], & \text{if } v_{min} \le \|\vec{\nu}_i[n]\|; \\ v_{min} \dfrac{\vec{\nu}_i[n]}{\|\vec{\nu}_i[n]\|}, & \text{if } v_{min} > \|\vec{\nu}_i[n]\|. \end{cases}    (25)
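Equation (25) amounts to a lower clamp on the speed that preserves direction; a vectorized sketch (ours, with an extra guard for particles exactly at rest, a case the equation leaves undefined):

```python
import numpy as np

def clamp_speed(nu, v_min):
    """Eq. (25): rescale any velocity whose magnitude falls below
    v_min so that it keeps its direction but moves at speed v_min."""
    n = np.linalg.norm(nu, axis=1, keepdims=True)
    slow = n < v_min
    safe = np.where(n > 0.0, n, 1.0)   # avoid division by zero at rest
    return np.where(slow, v_min * nu / safe, nu)
```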

Likewise, \alpha(t) may be expressed in discrete time as:

    \alpha[n] = \begin{cases} 0, & \text{if } U_{min} \ge U_{obj}(\vec{R}); \\ \alpha[n-1] + g\Delta t, & \text{if } U_{min} < U_{obj}(\vec{R}). \end{cases}    (26)

Finally, the pseudo-code for the VPSO algorithm is presented in Algorithm 1.

VI. PARAMETER SELECTION

In this proposal, we consider the parameter selection for the case when the swarm exhibits dispersion in the search space, i.e. U_{min} < U_{obj}(\vec{R}).

Algorithm 1: VPSO algorithm pseudo-code.
1  Initialize the swarm in the solution space with random positions and zero velocity;
2  begin
3    while the number of particles in the search space is larger than a specified value and the maximum number of iterations has not been reached do
4      Compute U_{min}, equation (20);
5      Compute \alpha, equation (26);
6      for i = 1 to N do
7        Compute new \vec{v}_i, equation (25);
8        Compute new \vec{r}_i, equation (23);
9      end
10   end
11   Establish the optimum point found.
12 end

To avoid uncontrolled dispersion, we consider that the maximum velocity of each particle is a percentage of the search space, v_{max} = \lambda_{max}\,\mathrm{range}(\Omega); in this case, a scale factor is considered for each dimension, so that \mathrm{range}(\Omega) = \mathrm{range}_d(\Omega) = \mathrm{range}_j(\Omega) for d = 1, 2, \ldots, D and j = 1, 2, \ldots, D. Usually the value of \lambda_{max} is within 0.1 \le \lambda_{max} \le 0.5 [17]. On the other hand, the minimum speed can also be set as v_{min} = \lambda_{min}\,\mathrm{range}(\Omega).

A. Propulsion factor

The maximum value of the velocity, v_{max}, implies a maximum value for the propulsion factor, \alpha_{max}, as shown in equation (27):

    \alpha_{max} = \beta v_{max}^2    (27)

B. Scale factor for objective function

In the dispersion stage, normalization of the objective function is considered in order to control the dispersion of the particles; in this case, the objective function is normalized by the maximum gradient value found, so that the force given by the objective function is bounded and, consequently, there is a maximum turning radius.

First, the maximum value of the gradient is determined as:

    F_{Umax} = \max_{i=1,\ldots,N} \|\vec{\nabla} U_{obj}(\vec{r}_i)\|    (28)

With F_{Umax}, the force for each particle at each iteration is normalized as:

    \vec{F}_{obj,i} = k \vec{\nabla} U_{obj} = \frac{k_U}{F_{Umax}} \vec{\nabla} U_{obj}    (29)

In the case of R_{max}, it is considered that the normal force provided by the objective function must be less than the normal force produced by the interaction of the particles, therefore F_{U_N} \le a R_{max}; then we take:

    k_U = F_{U_N} = \gamma a R_{max}    (30)

with \gamma \in [0, 1].

C. Interaction factor

From equation (18), the interaction factor can be estimated as:

    a = \frac{m_i v_{max}^2 - F_{U_N} R_{max}}{R_{max}^2}    (31)

In this case, R_{max} corresponds to the maximum radius expected to be obtained and can be taken as R_{max} = \mathrm{range}(\Omega)/2. On the other hand, with equation (30) we have:

    a = \frac{m_i v_{max}^2}{(1 + \gamma) R_{max}^2}    (32)

D. Energy increment factor

When the swarm increases its radius, we have d\alpha/dt = \tau_c; considering a time T to reach the maximum value of \alpha, then:

    \alpha_{max} = \int_0^T \tau_c \, dt    (33)

Therefore:

    T = \frac{\alpha_{max}}{\tau_c}    (34)

A bounding case can be considered with the particles escaping from the local minimum at maximum speed and with a linear displacement equal to the maximum radius of the swarm, R_{max}. For a linear displacement equal to R_{max}, we have:

    T_{min} = \frac{R_{max}}{v_{max}}    (35)

In this case, we have a T_{min} such that T > T_{min}:

    \frac{\alpha_{max}}{\tau_c} > \frac{R_{max}}{v_{max}}    (36)

Then:

    \tau_c < \frac{\alpha_{max} v_{max}}{R_{max}}    (37)

1) Estimate of T_R: A strategy to achieve a good dispersion of the particles consists of progressively increasing the energy, with timeouts during which the particles perform a certain number of turns N_R. Considering that the particles have not yet acquired the maximum speed when the timeout starts, this time must be greater than the estimated time. To estimate this time, we take the relationship between linear and angular velocity:

    w = \frac{d\theta}{dt} = \frac{v}{R}    (38)

Performing the calculation for one turn, we have:

    \int_0^{2\pi} d\theta = \int_0^{T_R} \frac{v}{R} \, dt    (39)

Then:

    T_R = \frac{2\pi R}{v}    (40)

Therefore, considering N_R turns, the expression to establish the number of iterations is:

    K_R = N_R \frac{2\pi R}{v \Delta t}    (41)

This value depends on v and R at each iteration; therefore, the calculation should be performed with the values of the corresponding iteration. However, if F_{U_N} = 0 is considered, we can estimate a radius R_e > R from equation (19), so that:

    K_{R_e} = \frac{2\pi N_R}{\Delta t} \sqrt{\frac{m_i}{a}}    (42)
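As a consistency check, the estimate above can be recovered by combining equations (19) and (41); a short reconstruction of the step:

```latex
% With F_{U_N} = 0, the positive root of equation (19) reduces to
R_e = \frac{\sqrt{4 a m_i v^2}}{2a} = v \sqrt{\frac{m_i}{a}}
% Substituting R = R_e into equation (41), the speed v cancels:
K_{R_e} = N_R \frac{2\pi R_e}{v \,\Delta t} = \frac{2\pi N_R}{\Delta t} \sqrt{\frac{m_i}{a}}
```

With the Section VIII values (N_R = 1, \Delta t = 0.1, m_i = 1, a = 0.43), this evaluates to approximately 96, matching the reported K_{R_e}.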

2) Estimate of T_\alpha: Considering that the total time for increasing the energy is T = T_\alpha, it is possible to determine the value K_\alpha considering a value N_\alpha that corresponds to the number of times it is necessary to wait while the particles move in a circular motion:

    K_\alpha = \frac{T}{N_\alpha \Delta t} = \frac{\alpha_{max}}{N_\alpha \Delta t \, \tau_c}    (43)
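The selection rules of this section can be checked end to end against the values reported in Section VIII; a short script (ours) reproducing them from the free parameters:

```python
import numpy as np

# Free parameters of Section VIII for the search space [-5, 5]^2.
m_i, beta, dt = 1.0, 1.0, 0.1
lam_max, gamma, N_R = 0.4, 0.5, 1
range_omega = 10.0
R_max = range_omega / 2.0

v_max = lam_max * range_omega                     # maximum speed: 4.0
alpha_max = beta * v_max**2                       # eq. (27): 16.0
a = m_i * v_max**2 / ((1 + gamma) * R_max**2)     # eq. (32): ~0.43
k_U = gamma * a * R_max                           # eq. (30): ~1.07
tau_c_bound = alpha_max * v_max / R_max           # eq. (37): tau_c < 12.8
K_Re = (2 * np.pi * N_R / dt) * np.sqrt(m_i / a)  # eq. (42): ~96
```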

VII. TEST FUNCTION

To observe the behavior of the algorithm, the objective function U_{obj} is the Peaks function proposed in [23]. This function has two local minima and a global minimum at (0.2282, -1.6199), where its value is -6.4169.

    U_{obj} = 3(1-x)^2 e^{-(x^2+(y+1)^2)} - 10\left(\frac{x}{5} - x^3 - y^5\right) e^{-(x^2+y^2)} - \frac{1}{3} e^{-((x+1)^2+y^2)}    (44)

The test function has several minima and maxima, which can be seen in Fig. 1.
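For reference, equation (44) translates directly to code; a sketch (ours) of the Peaks function:

```python
import numpy as np

def peaks(x, y):
    """Peaks test function of equation (44); works on scalars or arrays."""
    return (3.0 * (1.0 - x)**2 * np.exp(-(x**2 + (y + 1.0)**2))
            - 10.0 * (x / 5.0 - x**3 - y**5) * np.exp(-(x**2 + y**2))
            - (1.0 / 3.0) * np.exp(-((x + 1.0)**2 + y**2)))
```

Evaluating it on a grid built with `np.meshgrid` reproduces the surface shown in Fig. 1.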


Fig. 1. Test function.

VIII. NUMERICAL RESULTS

The range of the search space considered is -5 \le x \le 5 and -5 \le y \le 5, so \mathrm{range}(\Omega) = 10 and R_{max} = 5. The free parameters are set as: m_i = 1, \beta = 1, \lambda_{max} = 0.4, \lambda_{min} = 0.01, \gamma = 0.5, \Delta t = 0.1, N_R = 1, and N_\alpha taking the values 1, 2, 5 and 10. With the previous values, the parameters of the dispersion stage are: v_{max} = 4, \alpha_{max} = 16, a = 0.43, \tau_c < 12.8 (we take \tau_c = 2), k_U = 1.07, K_{R_e} = 96, and K_\alpha equal to 80, 40, 16 and 8, respectively. On the other hand, the parameters of the convergence stage are set to v_{min} = 0.05 and k = 1.

The initial conditions of the particles are random positions and zero velocities. Finally, 30 runs were performed for each case, and Table I shows the maximum and minimum values found during the optimization process (the worst and best results, respectively), the mean value, and the standard deviation of the results. We also considered the number of iterations needed before the algorithm stops.

TABLE I
RESULTS FOR 30 RUNS.

                Nα = 1                    Nα = 2
  Uobj      Fitness   Iterations    Fitness   Iterations
  Max       -2.8376        298      -2.9266        506
  Min       -6.4162         50      -6.4167         50
  Average   -5.8440      102.5      -5.8924      137.1
  STD        1.1395      69.59       1.0159     124.59

                Nα = 5                    Nα = 10
  Uobj      Fitness   Iterations    Fitness   Iterations
  Max       -6.2435        952      -6.2249       1231
  Min       -6.4168        163      -6.4169        170
  Average   -6.3913      493.2      -6.3984      682.2
  STD        0.0391     193.43       0.0391     327.42

The results show that the algorithm is able to find a good approximation of the global minimum. They also show that, by increasing N_\alpha, the algorithm performs better because the swarm achieves a good dispersion in the search space; in this case, an increase in the number of iterations is also observed.

IX. CONCLUSIONS

The VPSO algorithm uses the emergent behavior of the particles, namely their circular motion, to escape local minima. The results show the effect of the selected parameters.

A proposal for the selection of the algorithm parameters, considering the behavior of the particle swarm, was presented. Future work will address the analysis and selection of parameters in the convergence stage, the use of several test functions to observe the algorithm's performance in different scenarios, and a more accurate analysis of the swarm in order to make a better selection of parameters.

REFERENCES

[1] M. Dorigo, M. Birattari, T. Stützle, "Ant colony optimization: artificial ants as a computational intelligence technique", IEEE Computational Intelligence Magazine, November 2006.
[2] M. Birattari, P. Pellegrini, M. Dorigo, "On the invariance of ant colony optimization", IEEE Transactions on Evolutionary Computation, Vol. 11, No. 6, December 2007.
[3] T. Weise, "Global optimization algorithms - theory and application", self-published, 2009.
[4] K. Passino, "Biomimicry of bacterial foraging for distributed optimization and control", IEEE Control Systems Magazine, June 2002.
[5] S. Thakoor, J. M. Morookian, J. Chahl, B. Hine, S. Zornetzer, "BEES: Exploring Mars with bioinspired technologies", IEEE Computer Society, 2004.
[6] D. Sedighizadeh, E. Masehian, "Particle swarm optimization methods, taxonomy and applications", International Journal of Computer Theory and Engineering, Vol. 1, No. 5, December 2009.
[7] X. S. Yang, "A new metaheuristic bat-inspired algorithm", Nature Inspired Cooperative Strategies for Optimization (NICSO), 2010.
[8] X. S. Yang, "Firefly algorithm, Lévy flights and global optimization", Research and Development in Intelligent Systems XXVI, Springer London, 2009.
[9] A. Mucherino, O. Seref, "Monkey search: a novel metaheuristic search for global optimization", AIP Conference Proceedings, Data Mining, Systems Analysis and Optimization in Biomedicine, 2007.
[10] J. Kennedy, R. Eberhart, "Particle swarm optimization", IEEE Proceedings Neural Networks, 1995.
[11] G. I. Evers, "An automatic regrouping mechanism to deal with stagnation in particle swarm optimization", Master's thesis, University of Texas-Pan American, 2009.
[12] J. F. Schutte, "Particle swarms in sizing and global optimization", Master's dissertation, University of Pretoria, 2002.
[13] L. Yin, X. Liu, "A PSO algorithm based on biologic population multiplication (PMPSO)", Proceedings of the Second International Symposium on Computer Science and Computational Technology (ISCSCT '09), 2009.
[14] D. E. Mesa, "Supernova: un algoritmo novedoso de optimización global" (Supernova: a novel global optimization algorithm), Master's thesis, Universidad Nacional de Colombia, Sede Medellín, 2010.
[15] K. Passino, "Biomimicry for optimization, control, and automation", Springer-Verlag, London, UK, 2005.
[16] H. E. Espitia, J. I. Sofrony, "Vortex Particle Swarm Optimization", IEEE Congress on Evolutionary Computation (CEC), June 2013.
[17] G. I. Evers, M. Ben Ghalia, "Regrouping particle swarm optimization: A new global optimization algorithm with improved performance consistency across benchmarks", IEEE International Conference on Systems, Man and Cybernetics, 2009.
[18] A. Ordemann, G. Balazsi, F. Moss, "Pattern formation and stochastic motion of the zooplankton Daphnia in a light field", Physica A 325, Elsevier Science B.V., 2003.
[19] M. R. D'Orsogna, Y. L. Chuang, A. L. Bertozzi, L. S. Chayes, "Self-propelled particles with soft-core interactions: patterns, stability, and collapse", Physical Review Letters, PRL 96, 2006.
[20] H. Levine, W. J. Rappel, I. Cohen, "Self-organization in systems of self-propelled particles", Physical Review E 63, 2000.
[21] M. H. M. Abdel, C. R. McInnes, "Wall following to escape local minima for swarms of agents using internal states and emergent behavior", International Conference on Computational Intelligence and Intelligent Systems (ICCIIS), 2008.
[22] H. Khalil, "Nonlinear systems", Prentice Hall, third edition, 2002.
[23] K. N. Krishnanand, D. Ghose, "Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions", Swarm Intelligence, Springer Science, 2009.
