
International Journal on Artificial Intelligence Tools, Vol. 21, No. 3 (2012) 1240011 (20 pages)
© World Scientific Publishing Company

DOI: 10.1142/S0218213012400118

STOCHASTIC STABILITY AND NUMERICAL ANALYSIS

OF TWO NOVEL ALGORITHMS OF THE PSO FAMILY:

PP-GPSO AND RR-GPSO

JUAN LUIS FERNÁNDEZ-MARTÍNEZ∗ and ESPERANZA GARCÍA-GONZALO†

Mathematics Department, Oviedo University,
Facultad de Ciencias, 33007, Oviedo, Spain
∗[email protected]  †[email protected]

The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system: the so-called PSO continuous model. Furthermore, PSO corresponds to a particular discretization of the PSO continuous model. Based on this mechanical analogy we derived in the past a family of PSO-like versions, where the acceleration is discretized using a centered scheme and the velocity of the particles can be regressive (GPSO), progressive (CP-GPSO) or centered (CC-GPSO). Although the first and second order trajectories of these algorithms are isomorphic, CC-GPSO and CP-GPSO are very different from GPSO. In this paper we present two other PSO-like methods: PP-GPSO and RR-GPSO. These algorithms correspond respectively to progressive and regressive discretizations in acceleration and velocity. PP-PSO has the same velocity update as GPSO, but the velocities used to update the trajectories are delayed one iteration; thus, PP-PSO acts as a Jacobi system, updating positions and velocities at the same time. RR-GPSO is similar to a GPSO with a stochastic constriction factor. Both versions behave very differently from GPSO and from the other family members introduced in the past, CC-PSO and CP-PSO. RR-PSO seems to have the greatest convergence rate, and its good parameter sets can be calculated analytically since they lie along a straight line located in the first order stability region. Conversely, PP-PSO seems to be a more explorative version, although the behavior of these algorithms can be partly problem dependent. Both exhibit a very peculiar behavior, very different from the other family members, and thus they can be called distant PSO relatives. RR-PSO has the greatest convergence rate of all family members for a wide range of benchmark functions with different numerical complexity in 10, 30 and 50 dimensions. These algorithms have been successfully applied to protein secondary structure prediction and to oil and gas reservoir optimization.

Keywords: Particle swarm optimization; GPSO; stochastic analysis; convergence.

1. The Generalized PSO (GPSO)

The particle swarm algorithm applied to optimization problems is very simple: in-

dividuals, or particles, are represented by vectors whose length is the number of

degrees of freedom of the optimization problem. To start, a population of particles is initialized with random positions ($x_i^{0}$) and velocities ($v_i^{0}$). The same objective


function is used to compute the objective value of each particle. As time advances,

the position and velocity of each particle is updated as a function of its objective

function value and of the objective function values of its neighbors. At time-step

$k+1$, the algorithm updates the positions ($x_i^{k+1}$) and velocities ($v_i^{k+1}$) of the individuals

as follows:

\[
\begin{aligned}
v_i^{k+1} &= \omega\, v_i^{k} + \phi_1\,(g^{k} - x_i^{k}) + \phi_2\,(l_i^{k} - x_i^{k}),\\
x_i^{k+1} &= x_i^{k} + v_i^{k+1},\\
\phi_1 &= r_1 a_g, \quad \phi_2 = r_2 a_l, \quad r_1, r_2 \in U(0,1), \quad \omega, a_l, a_g \in \mathbb{R},
\end{aligned} \qquad (1)
\]

where $l_i^{k}$ is the $i$th particle's best position, $g^{k}$ the global best position of the whole swarm, φ1, φ2 are the random global and local accelerations, and ω is a real constant called the inertia weight. Finally, r1 and r2 are random numbers uniformly distributed in (0, 1), used to weight the global and local acceleration constants, ag and al.
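As a concrete illustration of update (1), the sketch below gives one PSO iteration in Python/NumPy. It is only a minimal sketch: the function name, the array layout and the default parameters (Clerc and Kennedy's point ω = 0.729 with ag = al = 1.494) are our own choices, not part of the original formulation.

```python
import numpy as np

def pso_step(x, v, l, g, w=0.729, ag=1.494, al=1.494, rng=None):
    """One PSO iteration, Eq. (1). x, v, l are (n_particles, n_dims) arrays;
    g is the (n_dims,) global best position."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    phi1 = rng.uniform(0.0, 1.0, (n, d)) * ag   # random global acceleration
    phi2 = rng.uniform(0.0, 1.0, (n, d)) * al   # random local acceleration
    v_new = w * v + phi1 * (g - x) + phi2 * (l - x)
    x_new = x + v_new                           # positions use the new velocity
    return x_new, v_new
```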

First order stability of this algorithm has been studied by several authors.1,2

Convergence properties of this algorithm and parameter tuning are related to its

exploration capabilities and the stability of second order trajectories.3,4

PSO is the particular case for t = k and ∆t = 1 of the GPSO algorithm:5

\[
\begin{aligned}
v(t+\Delta t) &= (1 - (1-\omega)\Delta t)\,v(t) + \phi_1 \Delta t\,(g(t) - x(t)) + \phi_2 \Delta t\,(l(t) - x(t)),\\
x(t+\Delta t) &= x(t) + v(t+\Delta t)\,\Delta t.
\end{aligned} \qquad (2)
\]

This model was derived using a mechanical analogy: a damped mass-spring system

with unit mass, damping factor, 1 − ω, and total stiffness constant, φ = φ1 + φ2,

the so-called PSO continuous model:

\[
\begin{aligned}
x''(t) + (1-\omega)\,x'(t) + \phi\,x(t) &= \phi_1\, g(t-t_0) + \phi_2\, l(t-t_0),\\
x(0) &= x_0,\\
x'(0) &= v_0.
\end{aligned} \qquad (3)
\]

In this case x(t) stands for the coordinate trajectory of any particle in the swarm.

In this model particles interact through the local and global attractors, l(t), g(t).

In this model mean particle trajectories oscillate around the point:

\[
o(t) = \frac{a_g\, g(t-t_0) + a_l\, l(t-t_0)}{a_g + a_l}. \qquad (4)
\]

In this model the attractors might be delayed a time t0 with respect to the particle

trajectories.6

The first and second order stability regions of GPSO are:
\[
S^{1}_{gpso} = \left\{ (\omega, \phi) : \frac{\Delta t - 2}{\Delta t} < \omega < 1,\;\; 0 < \phi < \frac{2\omega\Delta t - 2\Delta t + 4}{\Delta t^{2}} \right\}, \qquad (5)
\]
\[
S^{2}_{gpso} = \left\{ (\omega, \phi) : 1 - \frac{2}{\Delta t} < \omega < 1,\;\; 0 < \phi < \phi_{gpso}(\omega, \alpha, \Delta t) \right\}, \qquad (6)
\]


where φ = (ag + al)/2 is the total mean acceleration and φgpso, which depends on ω, α and ∆t, is the analytic expression for the limit hyperbola of second order stability:
\[
\phi_{gpso} = \frac{12}{\Delta t}\;\frac{(1-\omega)\,(2+(\omega-1)\Delta t)}{4 - 4(\omega-1)\Delta t + (\alpha^{2} - 2\alpha)\,(2+(\omega-1)\Delta t)}. \qquad (7)
\]
Here α = ag/φ = 2ag/(ag + al) is the ratio between the global acceleration and the total mean acceleration, and varies in the interval [0, 2]. Low values of α imply, for the same value of φ, that the local acceleration is larger than the global one, and thus the algorithm is more explorative. For ∆t = 1 these stability regions coincide with those shown in previous analyses for the PSO case.1,5,7
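For parameter tuning it can be handy to evaluate conditions (5) and (7) numerically. The following sketch (Python; the helper names are ours) checks first order stability and returns the second order limit hyperbola, under the assumption that φ denotes the total mean acceleration (ag + al)/2:

```python
def gpso_first_order_stable(w, phi, dt=1.0):
    """First order stability condition of Eq. (5)."""
    return (dt - 2) / dt < w < 1 and 0 < phi < (2 * w * dt - 2 * dt + 4) / dt**2

def gpso_second_order_limit(w, alpha, dt=1.0):
    """Limit hyperbola of second order stability, Eq. (7)."""
    num = (1 - w) * (2 + (w - 1) * dt)
    den = 4 - 4 * (w - 1) * dt + (alpha**2 - 2 * alpha) * (2 + (w - 1) * dt)
    return 12.0 / dt * num / den

# Clerc and Kennedy's point (0.729, 1.494): first order stable, and 1.494 lies below
# the second order limit (about 1.68 for alpha = 1), in agreement with Figure 1.
print(gpso_first_order_stable(0.729, 1.494), gpso_second_order_limit(0.729, 1.0))
```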

Figure 1 shows, for the PSO case, the first and second order stability regions and their corresponding spectral radii. The spectral radii are related to the attenuation of the first and second order trajectories. In the PSO case, the first order spectral radius is zero at (ω, φ) = (0, 1). The first order stability zone (S1gpso) only depends on (ω, φ), while the second order stability region (S2gpso) depends on (ω, ag, al). Also, the second order stability region is embedded in the first order stability region, and depends symmetrically on α, reaching its maximum size when α = 1 (al = ag). Good parameter sets are close to the upper limit of second order stability,5 as can be observed in Figure 2, which shows for the Griewank, Rosenbrock, Rastrigin and De Jong-f4 functions the median logarithmic error after 50 simulations for a lattice of (ω, φ) points located on the GPSO first order stability region. Fernández-Martínez and García-Gonzalo6 also derived CP-GPSO and CC-GPSO, which correspond to two different discretizations of the PSO continuous model when the acceleration is centered and the velocity is either progressive or centered. These two versions are very different from PSO: CP-GPSO seems to have a more explorative character, while CC-GPSO seems to locate the global minimum faster, using two consecutive centers of attraction.


Fig. 1. PSO: First and second order stability region and corresponding spectral radii.



Fig. 2. PSO: Mean error contour plot (in log10 scale) for the Griewank, Rosenbrock, Rastrigin and De Jong f4 functions in 50 dimensions.

Full stochastic analysis of the PSO continuous and discrete (GPSO) models has been performed by Fernández-Martínez and García-Gonzalo.3 This analysis served to analyze the GPSO second order trajectories, to show the convergence of GPSO to the continuous PSO model as the discretization time step goes to zero, and to analyze the role of the oscillation center in the first and second order continuous and discrete dynamical systems. It also shed light on PSO convergence for a wide class of benchmark functions when the PSO parameters are selected close to the upper border of the second order stability region.

In this paper we present two other algorithms belonging to the extended PSO family: PP-GPSO and RR-GPSO. These algorithms correspond respectively to progressive and regressive discretizations in acceleration and velocity. PP-GPSO has the same velocity update as GPSO, but the velocities used to update the trajectories are delayed one iteration. RR-GPSO can be interpreted as a GPSO with a stochastic constriction factor that depends on the iterations. In both cases we derive their first and second order stability regions.


Finally we perform a numerical comparison between the different members of the family for a set of benchmark functions in 50 dimensions. RR-PSO shows (at least in these cases) convergence rates similar to or even greater than PSO, while PP-PSO is a more explorative version. This is coherent with the fact that PP-PSO acts as a Jacobi system, updating positions and velocities at the same time. These results are very promising, although they might differ for other kinds of benchmark functions exhibiting other numerical difficulties. Nevertheless these results are very important for the application of PSO to inverse problems, because, due to the non-convex character of this kind of problem, it is important not only to achieve very low misfits but also to explore the space of possible solutions that are compatible with the observed data and the prior information at our disposal.8,9 RR-PSO has been applied to reservoir history matching inverse problems in oil and gas and has given very impressive results.10 These algorithms have also been successfully applied to protein secondary structure prediction.11,12

2. The PP-GPSO Algorithm

Let us use progressive discretizations in acceleration and in velocity to approximate

the PSO continuous model (3):

\[
x'(t) \simeq \frac{x(t+\Delta t) - x(t)}{\Delta t}, \qquad
x''(t) \simeq \frac{x(t+2\Delta t) - 2x(t+\Delta t) + x(t)}{\Delta t^{2}}. \qquad (8)
\]
The acceleration in this case corresponds to a progressive discretization in velocity:
\[
x''(t) \simeq \frac{x'(t+\Delta t) - x'(t)}{\Delta t}. \qquad (9)
\]
The following relationships apply:
\[
\begin{aligned}
x(t+\Delta t) &= x(t) + v(t)\,\Delta t,\\
\frac{v(t+\Delta t) - v(t)}{\Delta t} + (1-\omega)\,v(t) &= \phi_1\,(g(t-t_0) - x(t)) + \phi_2\,(l(t-t_0) - x(t)).
\end{aligned} \qquad (10)
\]
Adopting t0 = 0 we arrive at:
\[
\begin{aligned}
v(t+\Delta t) &= (1 - (1-\omega)\Delta t)\,v(t) + \phi_1 \Delta t\,(g(t) - x(t)) + \phi_2 \Delta t\,(l(t) - x(t)),\\
x(t+\Delta t) &= x(t) + v(t)\,\Delta t,
\end{aligned} \qquad (11)
\]

which has the same expression for the velocity as GPSO. The main difference is that the velocity used to update the trajectory is v(t) instead of the v(t + ∆t) used in GPSO. This causes PP-GPSO to be more explorative than GPSO and to have a lower convergence rate. This fact has already been pointed out for the CP-PSO case.6 In the next section we analyze the similarities and differences between both algorithms regarding their first and second order stability regions.
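In code, the only change with respect to the GPSO example of Section 1 is which velocity enters the position update. A minimal sketch (Python, ∆t = 1; the default point (ω, φ) = (0.88, 0.1) with ag = al = φ is the PP-PSO point used later in Section 5):

```python
import numpy as np

def pp_pso_step(x, v, l, g, w=0.88, ag=0.1, al=0.1, rng=None):
    """One PP-PSO iteration, Eq. (11) with dt = 1 (Jacobi-style update)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    phi1 = rng.uniform(0.0, 1.0, (n, d)) * ag
    phi2 = rng.uniform(0.0, 1.0, (n, d)) * al
    x_new = x + v                                    # uses v(t), not v(t + dt)
    v_new = w * v + phi1 * (g - x) + phi2 * (l - x)  # same velocity rule as GPSO
    return x_new, v_new
```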


2.1. The first and second order stability regions

The following stochastic second order difference equation is obtained for the PP-GPSO algorithm:
\[
\begin{aligned}
x(t+\Delta t) - A_{pp}\,x(t) - B_{pp}\,x(t-\Delta t) &= C_{pp}(t-\Delta t),\\
A_{pp} &= 2 - (1-\omega)\Delta t,\\
B_{pp} &= (1-\omega)\Delta t - 1 - \phi\,\Delta t^{2},\\
C_{pp}(t-\Delta t) &= \left(\phi_1\, g(t-\Delta t) + \phi_2\, l(t-\Delta t)\right)\Delta t^{2}.
\end{aligned} \qquad (12)
\]
The first order moments satisfy the affine dynamical system:
\[
\mu(t+\Delta t) = A^{pp}_{\mu}\,\mu(t) + b^{pp}_{\mu}(t), \qquad (13)
\]
where
\[
A^{pp}_{\mu} = \begin{pmatrix} A_{pp} & E(B_{pp}) \\ 1 & 0 \end{pmatrix}, \qquad
b^{pp}_{\mu}(t) = \begin{pmatrix} \phi\,\Delta t^{2}\,E(o(t-\Delta t)) \\ 0 \end{pmatrix}, \qquad (14)
\]
and
\[
E(o(t)) = \frac{a_g\,E(g(t)) + a_l\,E(l(t))}{a_g + a_l}. \qquad (15)
\]

The first order stability region of the PP-GPSO is:

\[
S^{1}_{pp} = \left\{ (\omega,\phi) : \frac{\Delta t - 4}{\Delta t} < \omega < 1,\;\; 0 < 2\,\frac{(1-\omega)\Delta t - 2}{\Delta t^{2}} < \phi < \frac{1-\omega}{\Delta t} \right\}. \qquad (16)
\]

The curve separating real and complex eigenvalues of $A^{pp}_{\mu}$ in the first stability region is independent of ∆t:
\[
\phi = \frac{1}{4}\,(1 - 2\omega + \omega^{2}). \qquad (17)
\]
The spectral radius is zero at the point
\[
(\omega, \phi) = \left( 1 - \frac{2}{\Delta t},\; \frac{1}{\Delta t^{2}} \right). \qquad (18)
\]
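The first order stability region (16) and the zero-spectral-radius point (18) are easy to check numerically by building the iteration matrix of (14) over a grid of (ω, φ) values. A brief sketch (Python; helper name is ours), assuming as before that φ stands for the total mean acceleration:

```python
import numpy as np

def pp_first_order_spectral_radius(w, phi, dt=1.0):
    """Spectral radius of the PP-GPSO first order iteration matrix, Eq. (14)."""
    A_pp = 2 - (1 - w) * dt
    EB_pp = (1 - w) * dt - 1 - phi * dt**2
    M = np.array([[A_pp, EB_pp], [1.0, 0.0]])
    return max(abs(np.linalg.eigvals(M)))

# The point (w, phi) = (1 - 2/dt, 1/dt^2) of Eq. (18) gives a zero spectral radius:
print(pp_first_order_spectral_radius(-1.0, 1.0))   # 0.0 for dt = 1
```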

The non-centered second order moments satisfy the following second order affine dynamical system:
\[
r_2(t+\Delta t) = A^{pp}_{\sigma}\, r_2(t) + b^{pp}_{r}(t), \qquad (19)
\]
where
\[
A^{pp}_{\sigma} = \begin{pmatrix}
A_{pp}^{2} & 2A_{pp}E(B_{pp}) & E(B_{pp}^{2}) \\
A_{pp} & E(B_{pp}) & 0 \\
1 & 0 & 0
\end{pmatrix}, \qquad (20)
\]


and
\[
\begin{aligned}
b^{pp}_{r}(t)_1 &= E(C_{pp}^{2}(t-\Delta t)) + 2A_{pp}\,E(C_{pp}(t-\Delta t)\,x(t)) + 2E(B_{pp})\,E(C_{pp}(t-\Delta t)\,x(t-\Delta t)),\\
b^{pp}_{r}(t)_2 &= E(C_{pp}(t-\Delta t)\,x(t)),\\
b^{pp}_{r}(t)_3 &= 0.
\end{aligned} \qquad (21)
\]

The second order stability region S2pp is made up of the pairs (ω, φ):
\[
S^{2}_{pp} = \left[ \omega_v < \omega < 1 - \frac{2}{\Delta t},\;\; \phi^{-}_{pp}(\omega,\alpha,\Delta t) < \phi < \phi^{+}_{pp}(\omega,\alpha,\Delta t) \right]
\;\cup\;
\left[ 1 - \frac{2}{\Delta t} < \omega < 1,\;\; 0 < \phi < \phi^{+}_{pp}(\omega,\alpha,\Delta t) \right], \qquad (22)
\]
where φ−pp(ω, α, ∆t) and φ+pp(ω, α, ∆t) are the limit hyperbolas of second order stability and ωv > 1 − 4/∆t. These curves have complicated analytical expressions. For example, for α = 1 and ∆t = 1 they are given by:
\[
\begin{aligned}
\omega_v &= -1.73,\\
\phi^{+}_{pp}(\omega, 1, 1) &= \tfrac{1}{14}\left( -7 - 19\omega + \sqrt{385 + 266\omega + 25\omega^{2}} \right),\\
\phi^{-}_{pp}(\omega, 1, 1) &= \tfrac{1}{14}\left( -7 - 19\omega - \sqrt{385 + 266\omega + 25\omega^{2}} \right).
\end{aligned} \qquad (23)
\]
As in the GPSO case, the region S2pp is embedded in S1pp.

PP-PSO is the particular case where the time step is ∆t = 1. Figure 3 shows the first and second order stability regions of the PP-PSO case (∆t = 1) with the associated spectral radii. For the second order region the parameter α has been set to 1. As can be observed, both stability regions are bounded. Generally speaking, both stability regions are tilted versions of the PSO regions.

Figure 4 shows the correspondence between the homogeneous first order trajectories as defined in Ref. 2. The fact that these regions are linearly isomorphic does not mean that both algorithms are the same, because of their stochastic character and the way they update the force term (the oscillation center o(t) in PP-PSO is delayed, as shown in the $C_{pp}(t-\Delta t)$ term). Finally, Figure 5 shows for PP-PSO the logarithmic error for the Griewank, Rosenbrock, Rastrigin and De Jong-f4 functions in 50 dimensions. Compared to Figure 2, it can be observed that PP-PSO provides greater misfits than PSO for all the functions. This has to do with the fact that PP-PSO updates the velocities and positions of the particles at the same time.



Fig. 3. PP-PSO: First and second order stability regions and corresponding spectral radii.


Fig. 4. Correspondence between the homogeneous first order trajectories of PSO and PP-PSO.

It can also be observed that the algorithm does not converge for ω < 0, and that the good parameter sets are in the complex region (which can be seen in Figure 4), close to the limit of second order stability and close to φ = 0. These results can be partially altered by clamping the velocities or by varying the time step ∆t. In conclusion, a more explorative behavior (and thus lower convergence rates) is expected for PP-PSO than for PSO. This analysis also shows that the way the PSO algorithm was proposed is a kind of coincidence: if v(t) had been used in the trajectory update instead of v(t + 1), the results of the algorithm would not be as impressive as in the PSO case, and maybe today we would not be writing this paper. This illustrates the importance of the velocity update in PSO convergence.



Fig. 5. PP-PSO: Mean error contour plot (in log10 scale) for the Griewank, Rosenbrock, Rastrigin and De Jong f4 functions in 50 dimensions.

3. The RR-GPSO Algorithm

Let us adopt a regressive discretization in acceleration and in velocity in time t ∈ R,

in order to discretize model (3) :

\[
\begin{aligned}
x'(t) &\simeq \frac{x(t) - x(t-\Delta t)}{\Delta t},\\
x''(t) &\simeq \frac{x(t) - 2x(t-\Delta t) + x(t-2\Delta t)}{\Delta t^{2}} = \frac{x'(t) - x'(t-\Delta t)}{\Delta t}.
\end{aligned} \qquad (24)
\]
The following relationships apply:
\[
\begin{aligned}
x(t) &= x(t-\Delta t) + v(t)\,\Delta t,\\
\frac{v(t) - v(t-\Delta t)}{\Delta t} + (1-\omega)\,v(t) + \phi\left(x(t-\Delta t) + v(t)\,\Delta t\right) &= \phi_1\, g(t-t_0) + \phi_2\, l(t-t_0),
\end{aligned} \qquad (25)
\]


Solving for v(t):
\[
v(t) = \frac{v(t-\Delta t) + \phi_1 \Delta t\,(g(t-t_0) - x(t-\Delta t)) + \phi_2 \Delta t\,(l(t-t_0) - x(t-\Delta t))}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}}. \qquad (26)
\]
The RR-GPSO algorithm can be written as:
\[
\begin{aligned}
x(t+\Delta t) &= x(t) + v(t+\Delta t)\,\Delta t,\\
v(t+\Delta t) &= \frac{v(t) + \phi_1 \Delta t\,(g(t+\Delta t-t_0) - x(t)) + \phi_2 \Delta t\,(l(t+\Delta t-t_0) - x(t))}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}}.
\end{aligned} \qquad (27)
\]
The natural choice for t0 is ∆t. Thus, the RR-GPSO algorithm with delay one becomes:
\[
\begin{aligned}
v(t+\Delta t) &= \frac{v(t) + \phi_1 \Delta t\,(g(t) - x(t)) + \phi_2 \Delta t\,(l(t) - x(t))}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}},\\
x(t+\Delta t) &= x(t) + v(t+\Delta t)\,\Delta t, \qquad t, \Delta t \in \mathbb{R},\\
x(0) &= x_0, \qquad v(0) = v_0.
\end{aligned} \qquad (28)
\]
RR-PSO with delay one is the particular case of (28) for a unit time step, ∆t = 1. RR-PSO is a PSO-like algorithm where the parameter
\[
A(\omega, \phi, \Delta t) = \frac{1}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}} \qquad (29)
\]
could be interpreted as a constriction factor similar to the one introduced by Clerc and Kennedy.1 The fact that there is a delay of one iteration on the parameter t0 means that there is only a correspondence between the homogeneous trajectories (without taking into account the force term) of RR-PSO and PSO. This fact has also been outlined for other PSO family members.6
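In code, RR-PSO with delay one again differs from the GPSO example only in one place: the whole velocity increment is divided by the stochastic factor of (29). A minimal sketch (Python, ∆t = 1; the default point (ω, φ) = (1.8, 0.34) with ag = al = φ is the point P2 used in Section 5):

```python
import numpy as np

def rr_pso_step(x, v, l, g, w=1.8, ag=0.34, al=0.34, rng=None):
    """One RR-PSO iteration with delay one, Eq. (28) with dt = 1."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    phi1 = rng.uniform(0.0, 1.0, (n, d)) * ag
    phi2 = rng.uniform(0.0, 1.0, (n, d)) * al
    phi = phi1 + phi2                          # total stochastic acceleration
    denom = 1 + (1 - w) + phi                  # 1 + (1 - w)dt + phi dt^2, dt = 1
    v_new = (v + phi1 * (g - x) + phi2 * (l - x)) / denom
    x_new = x + v_new
    return x_new, v_new
```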

3.1. The first and second order stability regions

The following stochastic second order difference equation is obtained for the RR-GPSO algorithm:
\[
x(t+\Delta t) - A_{rr}\,x(t) - B_{rr}\,x(t-\Delta t) = C_{rr}(t+\Delta t), \qquad (30)
\]
where
\[
A_{rr} = \frac{2 + (1-\omega)\Delta t}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}}, \qquad
B_{rr} = \frac{-1}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}} \in \mathbb{R}, \qquad (31)
\]


and
\[
C_{rr}(t+\Delta t) = \frac{\phi_1\, g(t+\Delta t-t_0) + \phi_2\, l(t+\Delta t-t_0)}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}}\,\Delta t^{2}. \qquad (32)
\]

The first order affine system describing the RR-GPSO mean trajectories is:
\[
\mu(t+\Delta t) = A^{RR}_{\mu}\,\mu(t) + b^{RR}_{\mu}(t), \qquad (33)
\]
where
\[
A^{RR}_{\mu} = \begin{pmatrix} E(A_{rr}) & E(B_{rr}) \\ 1 & 0 \end{pmatrix}, \qquad
b^{RR}_{\mu}(t) = \begin{pmatrix} \dfrac{\phi\,E(o(t+\Delta t-t_0))}{1 + (1-\omega)\Delta t + \phi\,\Delta t^{2}}\,\Delta t^{2} \\ 0 \end{pmatrix}. \qquad (34)
\]
The first order stability region of RR-GPSO is composed of two different disjoint zones D1 and D2, $S^{1}_{RR\text{-}gpso} = \{(\omega, \phi) : D_1 \cup D_2\}$, where:
\[
D_1 = \left[ \omega < 1,\; \phi > 0 \right]
\cup \left[ 1 < \omega < 1 + \frac{4}{\Delta t},\; \phi > \frac{\omega - 1}{\Delta t} \right]
\cup \left[ \omega > 1 + \frac{4}{\Delta t},\; \phi > \frac{2}{\Delta t^{2}}\left((\omega-1)\Delta t - 2\right) \right], \qquad (35)
\]
and
\[
D_2 = \left[ \omega < 1 + \frac{2}{\Delta t},\; \phi < \frac{2}{\Delta t^{2}}\left((\omega-1)\Delta t - 2\right) \right]
\cup \left[ \omega > 1 + \frac{2}{\Delta t},\; \phi < 0 \right]. \qquad (36)
\]
The parabola separating the real and complex eigenvalues of $A^{RR}_{\mu}$ is the same as in the PP-PSO case and does not depend on ∆t:
\[
\phi = \frac{1}{4}\,(1 - 2\omega + \omega^{2}). \qquad (37)
\]
For ∆t = 1, this first order stability region becomes:
\[
\begin{aligned}
S^{1}_{RR} &= \{(\omega, \phi) : D_1 \cup D_2\},\\
D_1 &= [\omega < 1,\; \phi > 0] \cup [1 < \omega < 5,\; \phi > \omega - 1] \cup [\omega > 5,\; \phi > 2(\omega-3)],\\
D_2 &= [\omega < 3,\; \phi < 2(\omega-3)] \cup [\omega > 3,\; \phi < 0].
\end{aligned} \qquad (38)
\]
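Region (38) can be tested directly. The sketch below (Python, ∆t = 1; the function name is ours) implements the two zones D1 and D2:

```python
def rr_pso_first_order_stable(w, phi):
    """Membership test for the RR-PSO (dt = 1) first order stability region, Eq. (38)."""
    d1 = ((w < 1 and phi > 0) or
          (1 < w < 5 and phi > w - 1) or
          (w > 5 and phi > 2 * (w - 3)))
    d2 = ((w < 3 and phi < 2 * (w - 3)) or
          (w > 3 and phi < 0))
    return d1 or d2

print(rr_pso_first_order_stable(3.9, 6.97))   # True: point P1 used in Section 5
print(rr_pso_first_order_stable(2.5, 3.0))    # True: a point on the line phi = 3(w - 3/2)
```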

The non-centered second order moments fulfill the following second order affine dynamical system:
\[
r_2(t+\Delta t) = A^{RR}_{\sigma}\, r_2(t) + b^{RR}_{r}(t), \qquad (39)
\]



Fig. 6. RR-PSO: First and second order stability regions and corresponding spectral radii.

where
\[
A^{RR}_{\sigma} = \begin{pmatrix}
E(A_{rr}^{2}) & 2E(A_{rr}B_{rr}) & E(B_{rr}^{2}) \\
E(A_{rr}) & E(B_{rr}) & 0 \\
1 & 0 & 0
\end{pmatrix}, \qquad (40)
\]
and
\[
\begin{aligned}
b^{rr}_{r}(t)_1 &= E(C_{rr}^{2}(t+\Delta t)) + 2E(A_{rr}\,C_{rr}(t+\Delta t)\,x(t)) + 2E(B_{rr}\,C_{rr}(t+\Delta t)\,x(t-\Delta t)),\\
b^{rr}_{r}(t)_2 &= E(C_{rr}(t+\Delta t)\,x(t)),\\
b^{rr}_{r}(t)_3 &= 0.
\end{aligned} \qquad (41)
\]
The eigenvalue analysis of the second order iteration matrix $A^{RR}_{\sigma}$ allows one to determine the RR-PSO second order stochastic stability region.

Figure 6 shows, for ∆t = 1 (the RR-PSO case) and α = 1 (ag = al), the first and second order stability regions with the corresponding spectral radii. Both regions of stability are unbounded. Also, in both cases the first and second order spectral radii are zero at infinity: (ω, φ) = (−∞, +∞) and (ω, φ) = (+∞, −∞). Figure 7 shows the correspondence between the homogeneous trajectories of RR-PSO and PSO. Figure 8 shows for RR-PSO the logarithmic error for the Griewank, Rosenbrock, Rastrigin and de Jong-f4 functions in 50 dimensions. Compared to Figure 2, it can be observed that RR-PSO provides lower misfits than PSO for all the benchmark functions. The difference in some cases is very significant (Rastrigin and de Jong-f4). These results have also been found for the same functions in 10 and 30 dimensions.

The good parameter sets for RR-PSO seem to be concentrated around the line φ = 3(ω − 3/2), mainly for inertia values greater than 2. This line is the same for these functions and seems to be invariant when the number of parameters of the optimization function increases.



Fig. 7. Correspondence between the homogeneous first order trajectories of PSO and RR-PSO.


Fig. 8. RR-PSO: Mean error contour plot (in log10 scale) for the Griewank, Rosenbrock, Rastrigin and De Jong f4 functions in 50 dimensions.



Fig. 9. RR-PSO: Second order spectral radius and frequency of the second order moments.

This result is very different from those obtained for the other versions, since the good parameters are not related to the upper border of second order stability. Figure 9 shows the position of this line with respect to the RR-PSO second order spectral radius and the frequency of the second order moment trajectories, as shown in Ref. 6 for PSO. This line is located in a zone of medium attenuation and very high trajectory frequency. This last property might be the cause of its good behavior, since it allows for a very efficient and explorative search around the oscillation center of each particle in the swarm.
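As a practical recipe, candidate RR-PSO parameter sets can simply be sampled along this empirical line; the sketch below is a one-line Python illustration, and the inertia values used are only indicative, following the observation that ω greater than 2 works best:

```python
# Candidate (w, phi) pairs on the empirical line phi = 3*(w - 3/2), for w >= 2
candidates = [(w, 3 * (w - 1.5)) for w in (2.0, 2.5, 3.0, 3.5, 4.0)]
print(candidates)   # [(2.0, 1.5), (2.5, 3.0), (3.0, 4.5), (3.5, 6.0), (4.0, 7.5)]
```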

4. Correspondence Rules

Since all the PSO algorithms presented in this paper come from different finite difference schemes of the same continuous model, there should be a correspondence between the discrete trajectories of the different PSO versions, which is deduced by identifying the coefficients in their corresponding difference equations, as we have shown in Ref. 6. The correspondence rules that make these algorithms equivalent in their homogeneous parts, such as ωPSO = ωPP + ∆t φPP, are:
\[
\begin{aligned}
\omega_{PSO} &= \frac{\omega_{RR} - \Delta t\,\phi_{RR} + (1-\omega_{RR})\Delta t + \Delta t^{2}\,\phi_{RR}}{1 + (1-\omega_{RR})\Delta t + \Delta t^{2}\,\phi_{RR}},\\
\phi_{PSO} &= \phi_{PP} = \frac{\phi_{RR}}{1 + (1-\omega_{RR})\Delta t + \Delta t^{2}\,\phi_{RR}}.
\end{aligned} \qquad (42)
\]

These relationships have been used in Figures 4 and 7 to establish the correspondence of the first order trajectories between PP-GPSO, RR-GPSO and GPSO.
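The rules (42) are straightforward to evaluate; the sketch below (Python, helper names ours) maps PP and RR points to their linearly equivalent GPSO points:

```python
def pp_to_pso(w_pp, phi_pp, dt=1.0):
    """PP -> PSO correspondence: w_pso = w_pp + dt*phi_pp, phi unchanged (Eq. (42))."""
    return w_pp + dt * phi_pp, phi_pp

def rr_to_pso(w_rr, phi_rr, dt=1.0):
    """RR -> PSO correspondence of Eq. (42)."""
    denom = 1 + (1 - w_rr) * dt + phi_rr * dt**2
    w_pso = (w_rr - dt * phi_rr + (1 - w_rr) * dt + dt**2 * phi_rr) / denom
    return w_pso, phi_rr / denom

# The RR-PSO point P1 = (3.9, 6.97) maps to roughly (0.20, 1.37), which lies inside
# the PSO first order stability region of Eq. (5).
print(rr_to_pso(3.9, 6.97))
```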


It has been shown in Ref. 6 that these correspondences make the algorithms linearly isomorphic, but when applied to real optimization problems they show different performances, for two main reasons:

(1) The introduction of a delay parameter increases the exploration and makes the

algorithms different in the way they update the force terms. This is the case for

RR-GPSO.

(2) The different way these algorithms update the positions and velocities of the

particles. This is the case for PP-GPSO.

To show that these algorithms are not equivalent, we perform the following numerical experiment. We considered Clerc and Kennedy's point1 for PSO and calculated, for the other members of the family, the corresponding points using the formulae (42) and the correspondence rules stated in Ref. 6. We compute the median misfit of 100 different simulations after 200 iterations. For these simulations we have used a swarm of 20 particles for 10 dimensions, 40 particles for 30 dimensions and 100 particles for 50 dimensions, as recommended in the benchmark tests.13

The search spaces for these functions are:

Ackley: [−32, 32]^D
Alpine: [−32, 32]^D
deJong-f4: [−32, 32]^D
Griewank: [−600, 600]^D
Sphere: [−100, 100]^D
Rastrigin: [−5.12, 5.12]^D
Rosenbrock: [−30, 30]^D
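For reference, a few of these benchmark functions in their usual textbook form (Python/NumPy sketch; the paper itself does not list the formulas, so these standard definitions are an assumption):

```python
import numpy as np

def sphere(x):
    """Sphere: sum of squares, minimum 0 at the origin."""
    return np.sum(x**2)

def rastrigin(x):
    """Rastrigin: 10*d + sum(x_i^2 - 10*cos(2*pi*x_i)), minimum 0 at the origin."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def griewank(x):
    """Griewank: 1 + sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))), minimum 0 at the origin."""
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))
```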

Table 1 shows the results of these simulations. As expected, the convergence rate is very different for the different family members. We have already shown this point in Ref. 6 for GPSO, CC-GPSO and CP-GPSO.

5. Numerical Experiments

Although we have shown some numerical results in Figures 2, 5 and 8, in this section we show some additional comparisons between all the members of the family. Table 2 shows the median errors after 100 different simulations and 200 iterations for different benchmark functions. For PSO we have considered Clerc and Kennedy's point (ω, φ) = (0.729, 1.494), which has a very good performance. For CC-PSO, CP-PSO and PP-PSO we have considered the following points, selected from the points located in the low misfit region of the corresponding algorithm:

• CC: (ω, φ) = (0.74, 1.95),

• PP: (ω, φ) = (0.88, 0.1),

• CP: (ω, φ) = (0.96, 0.4).


Table 1. Median errors for different benchmark functions and algorithms using Clerc and Kennedy's point and the corresponding (ω, φ) points.

Function     dim   PSO       RR        PP        CC        CP
Ackley       10    1.1e-03   6.9e+00   1.9e+01   1.6e+00   1.3e+00
             30    2.0e+00   1.4e+01   1.9e+01   8.6e+00   2.0e+01
             50    3.3e+00   1.6e+01   1.9e+01   1.1e+01   2.0e+01
Alpine       10    5.3e-03   6.4e+00   4.7e+01   9.6e-02   3.4e+00
             30    4.9e+00   6.4e+01   2.1e+02   1.2e+01   1.8e+02
             50    1.8e+01   1.2e+02   3.6e+02   3.2e+01   3.4e+02
deJong-f4    10    6.5e-12   8.3e+02   2.9e+06   3.1e-28   1.4e+01
             30    1.6e+00   9.7e+05   5.4e+07   1.7e+02   5.1e+07
             50    3.7e+02   6.2e+06   1.4e+08   9.4e+03   1.4e+08
Griewank     10    1.0e-01   5.2e+00   1.3e+02   1.2e-01   8.1e-01
             30    9.3e-01   9.2e+01   6.5e+02   1.8e+00   6.4e+02
             50    1.2e+00   2.2e+02   1.1e+03   7.5e+00   1.1e+03
Sphere       10    3.5e-06   2.3e+02   1.5e+04   7.6e-14   1.1e+00
             30    1.6e+00   7.5e+03   7.1e+04   7.7e+01   6.9e+04
             50    2.9e+01   1.9e+04   1.2e+05   6.9e+02   1.1e+05
Rastrigin    10    7.3e+00   2.9e+01   1.0e+02   1.9e+01   4.7e+01
             30    5.1e+01   2.1e+02   4.3e+02   1.0e+02   4.2e+02
             50    1.0e+02   4.0e+02   7.4e+02   1.9e+02   7.4e+02
Rosenbrock   10    6.8e+00   9.2e+03   3.8e+07   8.9e+00   5.3e+02
             30    2.9e+02   5.2e+06   2.9e+08   2.8e+03   2.7e+08
             50    3.2e+03   2.2e+07   4.8e+08   3.4e+04   4.6e+08

For RR-PSO we have used two points, named P1: (ω, φ) = (3.9, 6.97) and P2: (ω, φ) = (1.8, 0.34). Point P2 does not seem to perform as well (see Figure 8) for functions of higher numerical complexity, such as Rosenbrock and Griewank, for which point P1 seems to work better; P2 works better for the benchmark functions with lower numerical complexity. It is possible to observe that RR-PSO is the algorithm that performs the best for all the benchmark functions. Point P1 works better than P2 for Griewank and Rosenbrock. For the other functions, with lower complexity, the results obtained by RR-PSO using P2 are very impressive: the RR-PSO algorithm is able to find the global minimum within 50 iterations. Figure 10 shows the median convergence curves over 1000 runs for different benchmark functions in 50 dimensions. In this case RR-PSO has been run with point P2. It is possible to observe that for most of the functions the RR-PSO curve stops suddenly, meaning that the algorithm has found the global minimum. Table 2 (first two columns) shows the results that we have obtained using the Standard-PSO-2007 algorithm14 and the program developed by Birge.15 The Standard-PSO-2007 algorithm provides very poor results compared to the rest of the algorithms (Table 2) when only 200 iterations are used, but it improves dramatically when the number of iterations is increased to several thousands.


Table 2. Median errors for different benchmark functions and algorithms using some performing (ω, φ) points.

Function     dim   PSO-Std   PSO-Birge   RR(P1)    RR(P2)     PP        CC        CP
Ackley       10    1.2e+01   1.3e-03     2.3e-07   -8.9e-16   5.3e+00   1.6e-05   2.8e+00
             30    1.8e+01   2.1e+00     7.7e-02   -8.9e-16   7.6e+00   6.1e+00   4.9e+00
             50    1.8e+01   3.2e+00     5.6e-01   -8.9e-16   7.5e+00   8.1e+00   5.1e+00
Alpine       10    2.0e+01   4.5e-03     3.8e-07   0.0e+00    7.4e+00   6.2e-03   6.3e+00
             30    1.1e+02   5.0e+00     3.4e-02   0.0e+00    5.5e+01   5.7e+00   7.4e+01
             50    2.1e+02   1.7e+01     3.9e-01   0.0e+00    9.8e+01   1.6e+01   1.3e+02
deJong-f4    10    2.7e+04   6.2e-12     1.6e-23   0.0e+00    1.9e+02   5.4e-21   2.9e+00
             30    4.1e+06   1.9e+00     4.8e-05   0.0e+00    1.5e+04   4.9e-02   9.7e+02
             50    1.7e+07   3.2e+02     1.5e-01   0.0e+00    4.8e+04   3.7e+01   2.4e+03
Griewank     10    1.6e+01   1.1e-01     1.2e-01   2.6e+01    1.7e+00   1.0e-01   1.1e+00
             30    1.9e+02   9.7e-01     2.1e-02   7.6e+01    5.9e+00   7.9e-01   1.8e+00
             50    3.7e+02   1.3e+00     2.6e-01   1.3e+02    7.9e+00   1.7e+00   1.7e+00
Sphere       10    1.5e+03   2.6e-06     1.4e-14   0.0e+00    1.1e+02   6.9e-11   1.0e+01
             30    1.8e+04   1.6e+00     2.3e-03   0.0e+00    6.9e+02   1.8e+00   8.3e+01
             50    3.6e+04   2.8e+01     1.7e-01   0.0e+00    9.4e+02   7.0e+01   8.9e+01
Rastrigin    10    7.1e+01   8.0e+00     9.0e+00   0.0e+00    3.1e+01   1.6e+01   2.8e+01
             30    3.0e+02   5.1e+01     3.5e+01   0.0e+00    1.8e+02   9.5e+01   1.9e+02
             50    5.3e+02   1.1e+02     6.5e+01   0.0e+00    3.1e+02   1.8e+02   3.3e+02
Rosenbrock   10    2.8e+05   6.8e+00     7.1e+00   9.0e+00    4.0e+03   6.0e+00   4.4e+02
             30    2.1e+07   2.9e+02     1.0e+02   2.9e+01    6.3e+04   2.5e+02   1.1e+04
             50    5.2e+07   3.4e+03     2.5e+02   4.9e+01    9.9e+04   1.5e+03   1.4e+04



Fig. 10. Median convergence curves for different benchmark functions in 50 dimensions (RR-PSO run with point P2).

Nevertheless, increasing the number of function evaluations is not always possible in real problems, due to the very high computational cost.

This analysis, done for point P2, has also been generalized to a grid of (ω, φ) points within the first order stability region (Figure 8).


6. Conclusions

In this paper we present two additional members of the PSO family: PP-PSO and RR-PSO. Both versions are deduced from the PSO continuous model, adopting respectively a progressive and a regressive discretization in velocities and accelerations. Although they are PSO-like versions (PP-PSO has the same velocity update as GPSO, and RR-PSO has the form of a PSO with constriction factor), their behavior is very different from the PSO case. The best RR-PSO parameters are concentrated along a straight line located in the complex zone of the first order stability region, but are not directly related to the upper limit of the second order stability zone. Its behavior is very different from that of the other family members, including PP-PSO. Finally, with respect to the convergence rate, RR-PSO seems to provide similar or even better results than PSO, while PP-PSO seems to be the most explorative version (higher median misfits). Knowledge of these two new versions can be very important in the solution of very different optimization and inverse problems in science and technology, as shown in applications to oil and gas and to protein secondary structure prediction.

Acknowledgments

This work benefited from a one-year sabbatical grant (2008-2009) at the University of California Berkeley (Department of Civil and Environmental Engineering) given by the University of Oviedo (Spain) and by the "Secretaría de Estado de Universidades y de Investigación" of the Spanish Ministry of Science and Innovation. We also acknowledge the financial support for 2009-2010 coming from the University of California Berkeley, the Lawrence Berkeley National Laboratory (Earth Science Division) and the Energy Resources Engineering Department of Stanford University (Stanford Center for Reservoir Forecasting and Smart Field Consortia).

References

1. M. Clerc and J. Kennedy, The particle swarm - explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (2002) 58-73, doi:10.1109/4235.985692.

2. J. L. Fernández-Martínez and E. García-Gonzalo, Theoretical analysis of particle swarm trajectories through a mechanical analogy, International Journal of Computational Intelligence Research 4 (2008) 93-104.

3. J. L. Fernández-Martínez and E. García-Gonzalo, Stochastic stability analysis of the linear continuous and discrete PSO models, IEEE Transactions on Evolutionary Computation 15 (2011) 405-423, doi:10.1109/TEVC.2010.2053935.

4. R. Poli, Dynamics and stability of the sampling distribution of particle swarm optimisers via moment analysis, Journal of Artificial Evolution and Applications 2008 (2008) 1-10, doi:10.1155/2008/761459.

5. J. L. Fernández-Martínez and E. García-Gonzalo, The generalized PSO: A new door for PSO evolution, Journal of Artificial Evolution and Applications 2008 (2008) 1-15, doi:10.1007/s11721-009-0034-8.

6. J. L. Fernández-Martínez and E. García-Gonzalo, The PSO family: Deduction, stochastic analysis and comparison, Swarm Intelligence 3 (2009) 245-273, doi:10.1007/s11721-009-0034-8.

7. Y.-L. Zheng, L.-H. Ma, L.-Y. Zhang and J.-X. Qian, On the convergence analysis and parameter selection in particle swarm optimisation, Proc. of the 2nd International Conference on Machine Learning and Cybernetics (ICMLC'03), Xi'an, China, 3 (2003) 1802-1807, doi:10.1109/ICMLC.2003.1259789.

8. J. L. Fernández-Martínez, E. García-Gonzalo, J. P. F. Alvarez, H. A. Kuzma and C. O. Menéndez-Pérez, PSO: A powerful algorithm to solve geophysical inverse problems. Application to a 1D-DC resistivity case, Journal of Applied Geophysics 71 (2010) 13-25, doi:10.1016/j.jappgeo.2010.02.001.

9. J. L. Fernández-Martínez and E. García-Gonzalo, Particle swarm optimization applied to the solving and appraisal of the streaming potential inverse problem, Geophysics 75 (2010) WA3-WA15, doi:10.1190/1.3460842.

10. A. Suman, J. L. Fernández-Martínez and T. Mukerji, Joint inversion of time-lapse seismic and production data for Norne field, SEG Technical Program Expanded Abstracts 30 (2011) 4102-4108, doi:10.1190/1.3628063.

11. J. L. Fernández-Martínez, E. García-Gonzalo, S. Saraswathi, R. Jernigan and A. Kloczkowski, Particle swarm optimization: A powerful family of stochastic optimizers. Analysis, design and application to inverse modelling, in Y. Tan, Y. Shi, Y. Chai and G. Wang, eds., Advances in Swarm Intelligence (Springer, Berlin, Heidelberg, 2011), pp. 1-8, doi:10.1007/978-3-642-21515-5_1.

12. S. Saraswathi, J. L. Fernández-Martínez, A. Kolinski, R. Jernigan and A. Kloczkowski, Fast learning optimized prediction methodology (FLOPRED) for protein secondary structure prediction, Journal of Molecular Modeling, accepted for publication.

13. C. Trelea, The particle swarm optimization algorithm: Convergence analysis and parameter selection, Information Processing Letters 85 (2003) 317-325, doi:10.1016/S0020-0190(02)00447-7.

14. PSO (2007), Standard PSO, http://www.particleswarm.info/standard_pso_2007.zip

15. B. Birge, PSOt, a particle swarm optimization toolbox for use with Matlab, Proc. of the IEEE Swarm Intelligence Symposium (SIS 2003) (2003) 182-186, doi:10.1109/SIS.2003.1202265.
