Opposition-based learning in the shuffled differential evolution algorithm


ORIGINAL PAPER

Opposition-based learning in the shuffled differential evolutionalgorithm

Morteza Alinia Ahandani • Hosein Alavi-Rad

Published online: 9 February 2012

© Springer-Verlag 2012

M. A. Ahandani (corresponding author) · H. Alavi-Rad
Department of Electrical Engineering, Langaroud Branch,
Islamic Azad University, Langaroud, Iran
e-mail: [email protected]

H. Alavi-Rad
e-mail: [email protected]

Soft Comput (2012) 16:1303–1337
DOI 10.1007/s00500-012-0813-9

Abstract This paper proposes using the opposition-based

learning (OBL) strategy in the shuffled differential evolu-

tion (SDE). In the SDE, the population is divided into several

memeplexes and each memeplex is improved by the dif-

ferential evolution (DE) algorithm. The OBL accelerates the search process by comparing the fitness of an individual to that of its opposite and retaining the fitter one in the population. The

objective of this paper is to introduce new versions of the

DE which, on one hand, use the partitioning and shuffling

concepts of SDE to compensate for the limited amount of

search moves of the original DE and, on the other hand,

employ the OBL to accelerate the DE without causing premature convergence. Four versions of the DE algorithm are

proposed based on the OBL and SDE strategies. All

algorithms similarly use the opposition-based population

initialization to achieve fitter initial individuals and their

difference is in applying opposition-based generation

jumping. Experiments on 25 benchmark functions designed

for the special session on real-parameter optimization of

CEC2005 and non-parametric analysis of obtained results

demonstrate that the performances of the proposed algo-

rithms are better than the SDE. The fourth version of the proposed algorithm shows a significant difference from the SDE in terms of all considered aspects. A

highlight of the comparison results is that some successful runs are obtained, for the first time, on unsolved functions for which no successful runs have been reported so far. In a later part of the comparative

experiments, performance comparisons of the proposed

algorithm with some modern DE algorithms reported in the

literature confirm a significantly better performance of

our proposed algorithm, especially on high-dimensional

functions.

Keywords Opposition-based learning · Shuffled differential evolution · Memeplex · Premature convergence

1 Introduction

The evolutionary algorithms (EAs) can be applied to search

the complex, discrete or continuous, linear or non-linear,

and convex or non-convex spaces. They are therefore recognized as all-purpose, direct search optimization methods. The only necessary requirement for applying the EAs to a problem, and a sufficiently mild one, is the existence of a criterion to evaluate a candidate solution. Differential evolution (DE) (Storn and Price 1997) is a kind of EA, which was proposed as a

modified version of genetic algorithms (GAs) (Holland

1975). Although the DE is not biologically inspired, its

name was derived from natural biological evolutionary

processes.

There are various ideas behind EAs for generation of

new trial points and candidate solutions. In the GAs,

inspired from the natural biological evolution and heredity

process, by employing three operators of biological sys-

tems, i.e., selection, crossover and mutation, poor chro-

mosomes and poor genes disappear from the population and are replaced by newly generated members. The premise of this algorithm is that better chromosomes, containing better genes and/or better combinations of genes, tend to produce better offspring. The particle swarm optimization

(PSO) (Kennedy and Eberhart 1995) simulates the movement of organisms and social behavior in a flock of

birds or a school of fish. This algorithm updates its members using each particle's previous best position and the previous best position of its neighbors or of the whole swarm. The shuffled frog leaping (SFL) (Eusuff and

Lansey 2003), inspired from grouping search of frogs for

food resources in a swamp, combines the benefits of the

memetic algorithm (MA) (Moscato 1989), the social

behavior-based PSO algorithm and sharing information of

parallel local search algorithms. On one hand, the DE, like the GAs, employs three biological operators to generate new offspring, and, as in the GAs, accepting or rejecting a new offspring depends directly on its fitness. On the other hand, the DE, like the PSO and unlike the GAs, does not eliminate poor members from the population; each member preserves its own position in the population throughout the search process and is simply propelled to different areas of the search space without a completely new member being generated. Liu et al. (2007) mentioned

some other attractive characteristics of DE compared to the

GAs and PSO. The DE uses a simple differential operator to create new candidate solutions and a one-to-one competition scheme to greedily select the new candidate; it works with real numbers in a natural manner and avoids the complicated genetic search operators of the GA. It has a

memory; so, knowledge of good solutions is retained in the

current population, whereas in the GA previous knowledge

of the problem is destroyed once the population changes,

and in the PSO a secondary archive is needed. It also has

constructive cooperation between individuals; individuals

in the population share information between them.

The remaining sections of this paper are organized as

follows. In the next section, a literature review of related

works is presented. The original DE algorithm is briefly

described in Sect. 3. In Sect. 4, the OBL is presented.

In Sect. 5, the proposed modified versions of DE are

explained. The simulation results are presented and ana-

lyzed in Sect. 6. Section 7 concludes the paper and some

concepts for future works are presented.

2 Related works

This section reviews the benefits and drawbacks of DE, as

well as some reported approaches to compensate for its

drawbacks. After that, we concentrate on research that deals with the opposition-based strategy.

2.1 The DE: benefits and drawbacks

The DE has only three control parameters to be tuned, i.e.,

amplification factor of the difference vector, crossover rate

and population size. However, proper setting of control

parameters in the DE is not an easy task. In addition to

these attractive characteristics, simplicity and easy imple-

mentation are two main preferences of DE than other

EAs and a basic reason for widespread application of

DE on different optimization problems in recent years

(see Plagianakos et al. 2008). Feoktistov (2006), from an

algorithmic viewpoint, mentioned the reasons for the

success of DE: the success of DE is due to an implicit

self-adaptation contained within the algorithmic structure.

Besides the aforementioned preferences, some drawbacks

of the standard DE are as follows:

(I) Stagnation or premature convergence, caused by too low or too fast a convergence speed, respectively. Neri and Tirronen (2010) describe how stagnation occurs in the DE: a DE

scheme is highly explorative at the beginning of the evo-

lution and subsequently becomes more exploitative during

the optimization. Although this mechanism seems, at first

glance, to be very efficient, it hides a limitation. If for some

reason the algorithm does not succeed in generating off-

spring solutions which outperform the corresponding par-

ent, the search is repeated again with similar step size

values and will likely fail by falling into an undesired

stagnation condition. Liu and Lampinen (2005) mentioned

three reasons for this drawback of standard DE: (1) control

parameters not being well chosen initially for a given task;

(2) parameters being kept fixed through the whole process

and having no response to the population’s information

even though the environment in which the DE operated

may be variable; (3) lack of knowledge from the search

space.

Some of the DE variants as a solution for this problem

employed self-adaptive settings for automatically and

dynamically adjusting evolutionary parameters. Liu and

Lampinen (2005) proposed a fuzzy logic control (FLC) for

controlling parameters of DE. The FLC was employed to

choose the initial control parameters freely and the control

parameters adjusted online to dynamically adapt to

changing situations. It was found that DE with a fuzzy

search parameter control could perform better than DE

using all fixed parameters. Brest et al. (2006) proposed a

new version of DE for obtaining self-adaptive setting of

two control parameters, i.e., amplification factor of the

difference vector and crossover rate. The results on

numerical benchmark problems showed that their proposed

DE with self-adaptive control parameter settings was better

than, or at least comparable to, the standard DE algorithm.

Teng et al. (2009), to eliminate the manual setting of the population size by the user, developed two new systems for self-adaptive population size, testing two different methodologies: absolute encoding and relative encoding.

The empirical testing results showed that DE with self-

adaptive population size using relative encoding performed

well in terms of the average performance as well as


stability compared to absolute encoding version, as well as

the original DE. Also, a self-adaptive DE (SaDE) algorithm

was proposed by Qin et al. (2009) in which both trial vector

generation strategies and their associated control parameter

values were gradually self-adapted by learning from their

previous experiences in generating promising solutions.

Consequently, a more suitable generation strategy along

with its parameter settings could be determined adaptively

to match different phases of the search process/evolution.

They compared the performance of SaDE with the con-

ventional DE and three adaptive DE variants over a suite of

26 bound constrained numerical optimization problems and

concluded that the SaDE was more effective in obtaining

better quality solutions, which were more stable with the

relatively smaller standard deviation and had higher suc-

cess rates. An opposition-based DE (ODE) was proposed

by Rahnamayan et al. (2008) to accelerate the DE. The

ODE used opposite numbers during population initializa-

tion and also for generating new populations during the

evolutionary process. Experimental results confirmed that

the ODE outperformed the original DE and fuzzy adaptive

DE in terms of convergence speed and solution accuracy.

(II) Having a problem in accurately zooming to optimal

solution. The DE can efficiently find the neighborhood of

global optimal solution, but for many cases, it is not able to

zoom exactly to individual optimal point. Hybridizing the

DE with a local search method has been proposed as a

solution for this drawback. In the hybrid algorithms, the

DE has an exploration duty of whole search space to find

some promising areas, and then the local search is entered

to localize the found areas and for exploitation of optimum

point as accurately as possible.

Caponio et al. (2009) proposed a superfit memetic DE

(SFMDE) by employing a DE framework hybridized with

three meta-heuristics: the PSO as assister of the DE to

generate a super-fit individual, the Nelder–Mead algorithm

and the Rosenbrock algorithm as local searchers, which are

adaptively coordinated by means of an index measuring

quality of the super-fit individual with respect to the rest of

the population. Numerical results of the SFMDE on two

engineering problems demonstrated that the SFMDE had a

high performance standard in terms of both final solutions

detected and convergence speed.

Also, a scale factor local search DE based on MA was

proposed by Neri and Tirronen (2009). It employed, within

a self-adaptive scheme, two local search algorithms, i.e.,

golden section search and uni-dimensional hill climb. The

local search algorithms assisted in the global search and

generated offspring with high performance, which were

subsequently supposed to promote the generation of

enhanced solutions within the evolutionary framework.

Numerical results demonstrated that the efficiency of the

proposed algorithm seemed to be very high, especially for

large-scale problems and complex fitness landscapes. Perez-

Bellido et al. (2008) presented a memetic DE to solve the

spread spectrum radar poly phase code design problem. The

DE and other utilized global search algorithms hybridized

with a gradient-based local search procedure, which included

a dynamic step adaptation procedure to perform accurate and

efficient local search for better solutions. Simulations in

several numerical examples showed that their proposed

approaches improved the performance of previous approa-

ches existing in this problem.

(III) The limited number and diversity of search moves in the original DE. The original DE carries out a single effort per iteration to improve each member. Limiting the number

and variety of search moves prevents exploration of whole

search space. Ahandani et al. (2010) proposed three mod-

ified versions of the DE to repair its defect in accurately converging to the individual optimal point and to compensate for the limited amount of search moves of the original DE. These

algorithms carried out several efforts one by one to obtain a

fitter offspring so that each search move was different from

previously utilized moves. Their proposed DE algorithms

employed the bidirectional optimization and parallel search

concepts. A comparison of their proposed methods with

some modern DE algorithms and the other EAs reported in

the literature confirmed a better or at least comparable

performance of their proposed algorithms.

(IV) Utilizing greedy criterion in accepting or rejecting

a new generated offspring. In the greedy acceptance cri-

terion, only generating an offspring with a better fitness

compared to its corresponding parent is considered as an

admissive move and those of candidate solutions with a

worse fitness will be rejected. Although the greedy criterion ensures fast convergence, it increases the probability of getting stuck in local minima. The selection operator of

DE follows a greedy criterion inspired by the hill-climbing

process. Other EAs, by employing criteria that accept poor candidate solutions, do not confine their exploration. The PSO accepts all new positions of particles, and the SFL, after two unsuccessful improvement stages, accepts a random frog without evaluating its fitness. In the simulated annealing (SA) (Kirkpatrick et al.

1983), a member with a worse fitness is accepted with a

probability, and the tabu search (TS) accepts all moves that are not forbidden by the tabu restrictions. But to the best of our

knowledge, all proposed DE algorithms employ the greedy criterion and no substitute acceptance operator has been proposed.

(V) Poor performance of DE in noisy environment. A

noisy optimization problem can be seen as a problem

where the fitness landscape varies with time and, due to a

wide search space and the existence of many local optima, obtaining an optimal point is very hard. Thus, for a

noisy problem, a deterministic choice of the scale factor

can be inadequate and a standard DE can easily fail at

handling a noisy fitness function (Neri and Tirronen 2010).


Looking at the problem from a different perspective, DE

employs too much deterministic search logic for a noisy

environment and therefore tends to stagnate.

(VI) The need for multiple runs to tune parameters and the problem dependence of the best control parameter settings. The adaptive or self-adaptive control of parameters has been proposed as a solution to overcome these

disadvantages [for example, see Qin and Suganthan (2005);

Liu and Lampinen (2005); Brest et al. (2007); Teng et al.

(2009)].

This study proposes four modified versions of DE

algorithms. These proposed algorithms use a combination

of shuffled DE (SDE) proposed by Ahandani et al. (2010),

and ODE proposed by Rahnamayan et al. (2008). On one

hand, The SDE diversifies exploratory moves of DE and

increases the number of efforts carried out to independently

improve each member. On the other hand, the ODE

accelerates the DE and prevents its stagnation. So moti-

vated by the successful implementation of the mentioned

techniques to remove some drawbacks of the original DE,

this study integrates these features together to develop

some modified algorithms, which combine benefits of both

aforementioned algorithms.

2.2 Opposition-based strategy

Opposition-based strategy in the optimization algorithms

uses the concept of opposition-based learning (OBL),

which was introduced by Tizhoosh (2005). The OBL accelerates the EAs by comparing the fitness of an individual to that of its opposite and retaining the fitter one in the population. The OBL was initially applied to accelerate rein-

forcement learning (Tizhoosh 2006; Shokri et al. 2006) and

back-propagation learning in neural networks (Ventresca

and Tizhoosh 2006). A mathematical proof was proposed

by Rahnamayan et al. (2006) to show that, in general,

opposite numbers are more likely to be closer to the opti-

mal solution than a purely random one.

The OBL was recently applied to different EAs.

Rahnamayan et al. (2008) for the first time utilized oppo-

site numbers to speed up the convergence rate of an opti-

mization algorithm. Their proposed opposition-based DE

(ODE) employed the OBL for population initialization and

also for generation jumping. Also, the performance of ODE

on large-scale problems was examined by Rahnamayan

and Wang (2008). Results confirmed that ODE performed

much better than DE when the dimensionality of the

problems was increased from 500D to 1000D.

Subudhi and Jena (2009) presented a new DE approach

based on OBL applied for training neural network used for

non-linear system identification. The obtained results for

identification of two non-linear system benchmark prob-

lems demonstrated that the opposition-based DE-neural

network method of non-linear system identification pro-

vided excellent identification performance in comparison

to both the DE and neuro-fuzzy approaches. Balamurugan

and Subramanian (2009) presented an opposition-based

self-adaptive DE for emission-constrained dynamic eco-

nomic dispatch problem with non-smooth fuel cost and

emission level functions. A multi-objective function was

formulated by assigning the relative weight to each of the

objective and then optimized by opposition-based self-

adaptive DE. The convergence rate of DE was improved by

employing an OBL scheme and a self-adaptive procedure

for control parameter settings. The simulation results on a

test system with five thermal generating units showed that

the proposed approach provided a higher quality solution

with better performance. Bošković et al. (2011) presented a

DE-based approach to chess evaluation function tuning.

The DE with opposition-based optimization was employed

and upgraded with a history mechanism to improve the

evaluation of individuals and the tuning process. The

general idea was based on individual evaluations according

to played games through several generations and different

environments. They introduced a new history mechanism,

which used an auxiliary population containing good indi-

viduals. This new mechanism ensured that good individu-

als remained within the evolutionary process, even though

they died several generations back and later could be

brought back into the evolutionary process. In such a

manner, the evaluation of individuals was improved and

consequently the whole tuning process.

The PSO is another EA for which the OBL has been

recently used in its structure. Wang et al. (2007) applied an

OBL scheme to the PSO (OPSO), along with a Cauchy

mutation to keep the globally best particle moving and

avoid premature convergence of it. The main objective of

OPSO with Cauchy mutation was to help avoid premature

convergence on multi-modal functions. Using OBL, two

different positions, the particle’s own position and the

position opposite the center of the swarm, were evaluated

for each randomly selected particle. Experimental results

on benchmark optimization problems showed that the

OPSO could successfully deal with those difficult multi-

modal functions while maintaining fast search speed on

those simple unimodal functions in the function optimiza-

tion. Han and He (2007) used OBL to improve the per-

formance of PSO. They employed the OBL in the

initialization phase and also during each iteration. How-

ever, a constriction factor was used to enhance the con-

vergence speed. In both of the above-mentioned works, which used the OBL in the PSO, several parameters that were difficult to tune were added to the PSO. Omran (2009) used the

OBL to improve the performance of PSO and barebones

DE (BBDE) without adding any extra parameter. Two

opposition-based variants were proposed (namely, iPSO


and iBBDE). The iPSO and iBBDE algorithms replaced the

least-fit particle with its anti-particle. The results showed

that, in general, iPSO and iBBDE outperformed PSO and

BBDE, respectively. In addition, the results showed that

using the OBL enhanced the performance of PSO and

BBDE without requiring additional parameters. Also,

Rashid and Baig (2010) presented an improved opposition-

based PSO and applied this algorithm to feed-forward

neural network training. The improved opposition-based

PSO utilized opposition-based initialization, opposition-

based generation jumping and opposition-based velocity

calculation.

Also some researchers have applied the OBL in bioge-

ography-based optimization (BBO), which is an EA

developed for global optimization. Ergezer et al. (2009)

proposed a novel variation to the BBO. The new algorithm

employed the OBL alongside BBO’s migration rates to

create oppositional BBO (OBBO). Additionally, a new

opposition method named quasi-reflection was introduced.

Empirical results demonstrated that with the assistance of

quasi-reflection, the OBBO significantly outperformed the

BBO in terms of success rate and the number of fitness

function evaluations required for finding an optimal solu-

tion. Bhattacharya and Chattopadhyay (2010) presented a

quasi-reflection oppositional biogeography-based optimi-

zation to accelerate the convergence of BBO and to

improve solution quality for solving complex economic

load dispatch problems of thermal power plants. The pro-

posed method employed the OBL along with a BBO

algorithm. Instead of opposite numbers, they used quasi-

reflected numbers for population initialization and also for

generation jumping. The effectiveness of the proposed

algorithm was verified on four different test systems.

Compared with the other existing techniques, the proposed

algorithm was found to perform better in a number of

cases. Considering the quality of the solution and conver-

gence speed obtained, this method seemed to be a prom-

ising alternative approach for solving the economic load

dispatch problems.

Besides the aforementioned algorithms, there are some

other cases of application of this strategy to the EAs.

Malisia and Tizhoosh (2007) employed the opposite-

based concept to improve the quality of solutions and

convergence rate of an ant colony system (ACS) (Dorigo

and Gambardella 1997). The modifications focused on the

solution construction phase of the ACS. Results on the

application of these algorithms on traveling salesman

problem instances demonstrated that the concept of

opposition was not easily applied to the ACS. Only one of

the pheromone-based methods showed performance

improvements that were statistically significant. Also, an

improved vanilla version of the SA using opposite

neighbors called opposition-based simulated annealing

(OSA) was proposed by Ventresca and Tizhoosh (2007).

They provided a theoretical basis for the algorithm as well

as its practical implementation. Simulation results on

six common real optimization problems confirmed the

theoretical predictions as well as showed a significant

improvement in accuracy and convergence rate over tra-

ditional SA. They also provided experimental evidence

for the use of opposite neighbors over purely random

ones.

Also, this strategy has been employed for several opti-

mization problems such as training neural network

(Subudhi and Jena 2009; Rashid and Baig 2010) and eco-

nomic dispatch problem (Balamurugan and Subramanian

2009; Bhattacharya and Chattopadhyay 2010). Because

random generation does not necessarily provide a good

initial population, Kofjac and Kljajic (2008) used mirroring of the initial population, inspired by the opposition-based population initialization, to solve a job shop scheduling problem.

3 The DE algorithm

The DE, like other EAs, starts with an initial sampling of

individuals within the search space. Generally speaking,

the initialization is performed randomly with a uniform

distribution function. After initialization, three main stages

of DE, i.e., mutation, crossover and selection are carried

out.

In the mutation stage, at each generation and for each individual, a donor member is generated using an operator.

There are several operators for the mutation. In these

operators, a donor member is generated by adding a

weighted difference of two, four or six members to one

other member. Some of the mutation schemes proposed in

the literature are as follows:

DE/rand/1: $v = x_{r_1} + F(x_{r_3} - x_{r_2})$   (1)

DE/current/1: $v = x_i + F(x_{r_2} - x_{r_1})$   (2)

DE/best/1: $v = x_{best} + F(x_{r_2} - x_{r_1})$   (3)

DE/rand/2: $v = x_{r_1} + F(x_{r_3} - x_{r_2}) + k(x_{r_5} - x_{r_4})$   (4)

DE/current/2: $v = x_i + F(x_{r_2} - x_{r_1}) + k(x_{r_4} - x_{r_3})$   (5)

DE/best/2: $v = x_{best} + F(x_{r_2} - x_{r_1}) + k(x_{r_4} - x_{r_3})$   (6)

DE/current-to-rand/1: $v = x_i + k(x_{r_1} - x_i) + F(x_{r_3} - x_{r_2})$   (7)

DE/current-to-best/1: $v = x_i + k(x_{best} - x_i) + F(x_{r_2} - x_{r_1})$   (8)

DE/rand-to-best/2: $v = x_{r_1} + k(x_{best} - x_i) + F(x_{r_2} - x_{r_1}) + K(x_{r_4} - x_{r_3})$   (9)


where $x_i$ is the current point, $x_{best}$ is the best point found so far and $x_{r_1}$, $x_{r_2}$, $x_{r_3}$, $x_{r_4}$ and $x_{r_5}$ are random points selected from the population, with $r_1 \neq r_2 \neq r_3 \neq r_4 \neq r_5$, and $F$, $K$ and $k$ are three random numbers within a definite range.
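As an illustration of these schemes, the following is a minimal Python sketch (not taken from the paper) of the DE/rand/1 operator of Eq. (1) and the DE/current-to-best/1 operator of Eq. (8), which the proposed algorithms later use. The function names, the sphere cost used to pick the best member and the values of F and k are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_rand_1(pop, i, F):
    """DE/rand/1 (Eq. 1): v = x_r1 + F * (x_r3 - x_r2)."""
    # Pick three mutually distinct indices, all different from i.
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r3] - pop[r2])

def mutate_current_to_best_1(pop, i, best, F, k):
    """DE/current-to-best/1 (Eq. 8): v = x_i + k*(x_best - x_i) + F*(x_r2 - x_r1)."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2 = rng.choice(candidates, size=2, replace=False)
    return pop[i] + k * (best - pop[i]) + F * (pop[r2] - pop[r1])

# Example: a population of 10 members in a 5-dimensional space.
pop = rng.uniform(-10, 10, size=(10, 5))
best = pop[np.argmin(np.sum(pop ** 2, axis=1))]   # best member under a sphere cost (assumed)
v = mutate_current_to_best_1(pop, 0, best, F=0.6, k=0.8)
```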

After generation of a donor member, in the crossover

stage each member of the population is allowed to carry out

a crossover by mating with a donor member. There are a

few crossover strategies. One of the well-known crossover

strategies is named binary crossover. The binary crossover

operator is as follows:

$$u_j = \begin{cases} v_j & \text{if } rand(0,1) < CR \ \text{or}\ j = k \\ x_j & \text{otherwise} \end{cases} \qquad (10)$$

where uj, vj and xj are the jth gene of the offspring, donor

and current individuals, respectively. rand(0, 1) is a random

number in the range of [0,1] and CR is the user-supplied

crossover rate constant. $k \in \{1, 2, \ldots, N\}$ is a randomly chosen index, chosen once for each member, to make sure that at least one parameter of $u$ is always selected from $v$.

Also, N is the dimension of problem. In the binary cross-

over, a comparison of random variable, rand(0, 1) and

crossover rate, CR, determines which gene should be copied

from the current member or donor member.
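A minimal sketch of the binary crossover of Eq. (10) is given below, assuming NumPy and the test rand(0,1) < CR; the variable names and the CR value are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def binary_crossover(x, v, CR):
    """Binary crossover (Eq. 10): take v_j when rand(0,1) < CR or j == k, else x_j."""
    N = len(x)
    k = rng.integers(N)            # index whose gene is always copied from the donor
    mask = rng.random(N) < CR
    mask[k] = True                 # guarantee at least one gene comes from v
    return np.where(mask, v, x)

x = np.array([1.0, 2.0, 3.0, 4.0])   # current member (placeholder values)
v = np.array([1.5, 1.8, 2.9, 4.2])   # donor member (placeholder values)
u = binary_crossover(x, v, CR=0.9)
```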

There is another crossover operator called exponential

crossover, in which crossover rate regulates how many

consecutive genes of the donor individual v on average

should be copied to the offspring u. In this crossover, the

genes are taken from the donor member until, for the first time, the random variable is not smaller than CR. Then

the remaining genes are copied from the current member.

The exponential crossover is as follows:

$$u_j = \begin{cases} v_j & j = \langle n \rangle_N, \langle n+1 \rangle_N, \ldots, \langle n+l-1 \rangle_N \\ x_j & \text{otherwise} \end{cases} \qquad (11)$$

where n and l are two integer numbers in the set of

{1, 2, …, N} that denote the starting point and number of

component, respectively. Also, angular brackets ‹ ›N denote

a modulo function with modulus N. According to Eq. (11),

to generate $u$, the genes with indices from $\langle n \rangle_N$ to $\langle n+l-1 \rangle_N$ must be copied from $v$ and the other genes are copied from $x$. The integer $l$ is selected from $\{1, 2, \ldots, N\}$ according to

the following pseudo-code:

l = 0;
While (rand(0, 1) < CR) ∧ (l < N) {
    l = l + 1;
}
End While
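The sketch below implements the exponential crossover of Eq. (11) together with the above rule for selecting l. Note that, under this rule as written, l may remain 0, in which case no gene is taken from the donor; the function name and the CR value are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def exponential_crossover(x, v, CR):
    """Exponential crossover (Eq. 11): copy l consecutive genes (modulo N),
    starting at index n, from the donor v into a copy of x."""
    N = len(x)
    n = rng.integers(N)            # starting index
    l = 0
    while rng.random() < CR and l < N:   # l grows while rand(0,1) < CR
        l += 1
    u = x.copy()
    for offset in range(l):
        u[(n + offset) % N] = v[(n + offset) % N]
    return u

x = np.zeros(5)
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
u = exponential_crossover(x, v, CR=0.7)
```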

After generation of an offspring, in the selection stage

and according to a one-to-one spawning strategy, the value

of the cost function at the point $u$ is evaluated; if $f(u) \le f(x)$, $x$ is replaced with $u$, and otherwise no replacement occurs:

$$x = \begin{cases} u & f(u) \le f(x) \\ x & f(u) > f(x) \end{cases} \qquad (12)$$

This evolutionary process consisting of the mutation,

crossover and selection stages is repeated over several

generations until one of the stopping criteria is met.
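Putting the three stages together, a hedged sketch of one DE generation (DE/rand/1 mutation, binary crossover and the greedy one-to-one selection of Eq. (12)) might look as follows; the sphere cost, the parameter values and the 50-generation stopping rule are placeholders, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    """A simple cost function, used only for illustration."""
    return float(np.sum(x ** 2))

def de_generation(pop, costs, F, CR):
    """One generation of DE/rand/1 with binary crossover and the greedy
    one-to-one selection of Eq. (12)."""
    Npop, N = pop.shape
    for i in range(Npop):
        # mutation (Eq. 1)
        candidates = [j for j in range(Npop) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        v = pop[r1] + F * (pop[r3] - pop[r2])
        # binary crossover (Eq. 10)
        k = rng.integers(N)
        mask = rng.random(N) < CR
        mask[k] = True
        u = np.where(mask, v, pop[i])
        # greedy selection (Eq. 12): keep u only if it is not worse
        cu = sphere(u)
        if cu <= costs[i]:
            pop[i], costs[i] = u, cu
    return pop, costs

pop = rng.uniform(-10, 10, size=(20, 5))
costs = np.array([sphere(x) for x in pop])
for _ in range(50):              # repeated until a stopping criterion is met
    pop, costs = de_generation(pop, costs, F=0.6, CR=0.9)
```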

4 The OBL

4.1 The concept behind OBL

In general, the EAs start with some initial solutions (initial

population) and try to improve them toward some optimal

solution(s). In the absence of a priori information about the

solution, starting with random guesses, generally with a

uniform distribution in whole search space, is a common

initialization. The computation time, among others, is

related to the distance of these initial guesses from the

optimal solution. We can improve our chance of starting

with a closer (fitter) solution by simultaneously checking

the opposite solution. By doing this, the fitter one (guess or

opposite guess) can be chosen as an initial solution. In fact,

according to probability theory, 50% of the time a guess is

further from the solution than its opposite guess. Therefore,

starting with the closer of the two guesses (as judged by its

fitness) has the potential to accelerate convergence. The

same approach can be applied not only to initial solutions,

but also continuously to each solution in the current pop-

ulation. However, before concentrating on OBL, we need

to define the concept of opposite numbers (Rahnamayan

et al. 2008).

An opposition-based number can be defined as follows.

Let $x \in [a, b]$ be a real number. The opposite number $\hat{x}$ is defined by

$$\hat{x} = a + b - x \qquad (13)$$

Similarly, the opposite number in an N-dimensional

space can be defined as follows:

Let $X = (x_1, x_2, \ldots, x_N)$ be a point in an $N$-dimensional space, where $x_1, x_2, \ldots, x_N \in \mathbb{R}$ and $x_i \in [a_i, b_i]\ \forall i \in \{1, 2, \ldots, N\}$. The opposite point $\hat{X} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N)$ is completely defined by its coordinates

$$\hat{x}_i = a_i + b_i - x_i \qquad (14)$$

Now, by employing the definition of opposite point, the

opposition-based optimization can be defined as follows.

Let X = (x1, x2, …, xN) be a point in an N-dimensional

space (i.e., a candidate solution). Assume $f(\cdot)$ is a fitness function which is used to measure the candidate's fitness.


According to the definition of the opposite point, $\hat{X} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N)$ is the opposite of $X = (x_1, x_2, \ldots, x_N)$. Now, if $f(X) \le f(\hat{X})$, i.e., $\hat{X}$ has a better fitness than $X$, then point $X$ can be replaced with $\hat{X}$; otherwise, we continue with $X$.

Hence, the point and its opposite point are evaluated

simultaneously to continue with the fitter one.
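A small sketch of this opposition-based comparison is given below, under the assumption of a fitness function to be maximized; the bounds, the fitness function and the sample point are illustrative only.

```python
import numpy as np

def opposite_point(x, a, b):
    """Opposite point (Eqs. 13-14): x_hat_i = a_i + b_i - x_i."""
    return a + b - x

def keep_fitter(x, a, b, fitness):
    """Evaluate a point and its opposite simultaneously and keep the fitter one."""
    x_hat = opposite_point(x, a, b)
    return x if fitness(x) >= fitness(x_hat) else x_hat

# Illustration on [-10, 10]^2 with a fitness to be maximized (assumed optimum at (1, 2)).
a = np.array([-10.0, -10.0])
b = np.array([10.0, 10.0])
fitness = lambda p: -np.sum((p - np.array([1.0, 2.0])) ** 2)
x = np.array([7.0, -4.0])
chosen = keep_fitter(x, a, b, fitness)   # the opposite point (-7, 4) is closer here, so it is kept
```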

4.2 The OBL in DE

The OBL can be used in two stages of DE: firstly in the

initialization stage to achieve fitter starting candidate

solutions, while a priori knowledge about the initial

members does not exist; secondly, carrying out the DE to

force the current population to jump into some new can-

didate solutions, which ideally are fitter than the current

ones. Rahnamayan et al. (2008) called these two stages as

opposition-based population initialization and opposition-

based generation jumping, respectively.

4.2.1 Opposition-based population initialization

The following steps present the utilizing of the OBL for

population initialization (Rahnamayan et al. 2008).

1. Initialize the population pop with a size of Npop

randomly.

2. Calculate the opposite population by

$$Opop_{i,j} = a_j + b_j - pop_{i,j}, \quad i = 1, 2, \ldots, N_{pop};\ j = 1, 2, \ldots, N \qquad (15)$$

where popi,j and Opopi,j denote the jth variable of the

ith member of the population and the opposite popu-

lation, respectively.

3. Select the $N_{pop}$ fittest individuals from $\{pop \cup Opop\}$ as the initial population.
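A compact sketch of these three steps, assuming a cost function to be minimized; the sphere cost, the bounds and the population size are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

def opposition_based_init(Npop, a, b, cost):
    """Opposition-based population initialization (Eq. 15): generate a random
    population, build its opposite, and keep the Npop fittest of the union."""
    N = len(a)
    pop = rng.uniform(a, b, size=(Npop, N))     # step 1: random population
    opop = a + b - pop                          # step 2: opposite population
    union = np.vstack([pop, opop])              # step 3: keep the Npop best of both
    costs = np.array([cost(x) for x in union])
    best = np.argsort(costs)[:Npop]
    return union[best], costs[best]

a = np.array([-10.0] * 5)
b = np.array([10.0] * 5)
pop, costs = opposition_based_init(40, a, b, cost=lambda x: float(np.sum(x ** 2)))
```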

To demonstrate the efficiency of opposition-based strat-

egy to obtain members which are closer to optimum point

than simply random population, we use an example from a

minimization problem. Equation (16) and Fig. 1 show a

two-dimensional function and its surface plot, respectively.

$$f(x, y) = (x^2 + y^2)^{0.25}\left\{\sin^2\!\left[30\left((x + 0.5)^2 + y^2\right)^{0.1}\right] + |x| + |y|\right\},\quad f(0, 0) = 0,\quad -10 \le x, y \le 10 \qquad (16)$$

Figure 2a shows a random population with Npop equal to

40 on surface plot of Eq. (16) and its corresponding

opposition members. Figure 2b shows the remaining

members after applying an opposition-based population

initialization technique. As can be seen, those members that are far from the minimum point are rejected and only the members closer to the minimum point are preserved.

4.2.2 Opposition-based generation jumping

In this stage, based on a jumping rate Jr (i.e., jumping

probability), after generating new population by DE oper-

ators, the opposite population is calculated and the Npop

fittest individuals are selected from the union of the current

population and the opposite population. Unlike opposition-

based initialization, generation jumping calculates the

opposite population dynamically. Instead of using vari-

ables’ predefined interval boundaries ([aj, bj]), generation

jumping calculates the opposite of each variable based on

minimum ($\mathrm{MIN}_j^p$) and maximum ($\mathrm{MAX}_j^p$) values of that variable in the current population:

$$Opop_{i,j} = \mathrm{MIN}_j^p + \mathrm{MAX}_j^p - pop_{i,j}, \quad i = 1, 2, \ldots, N_{pop};\ j = 1, 2, \ldots, N \qquad (17)$$

If we used the variables' static interval boundaries,

we would jump outside of the already shrunken search

space and the knowledge of the current reduced space

(converged population) would be lost. Hence, we calculate

the opposite points by using variables’ current interval in

the population ($[\mathrm{MIN}_j^p, \mathrm{MAX}_j^p]$), which is, as the search

progresses, increasingly smaller than the corresponding

initial range [aj, bj] (Rahnamayan et al. 2008).
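The following sketch applies Eq. (17) with a jumping rate Jr, using the current per-variable minima and maxima rather than the static bounds; the cost function, the Jr value and the population size are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def generation_jumping(pop, costs, Jr, cost):
    """Opposition-based generation jumping (Eq. 17): with probability Jr,
    reflect each variable inside its current population range
    [MIN_j^p, MAX_j^p] and keep the Npop fittest of the union."""
    if rng.random() >= Jr:
        return pop, costs                      # no jump this generation
    mn = pop.min(axis=0)                       # MIN_j^p
    mx = pop.max(axis=0)                       # MAX_j^p
    opop = mn + mx - pop
    union = np.vstack([pop, opop])
    ucosts = np.concatenate([costs, [cost(x) for x in opop]])
    best = np.argsort(ucosts)[: len(pop)]
    return union[best], ucosts[best]

cost = lambda x: float(np.sum(x ** 2))         # sphere cost, for illustration only
pop = rng.uniform(-10, 10, size=(20, 5))
costs = np.array([cost(x) for x in pop])
pop, costs = generation_jumping(pop, costs, Jr=0.3, cost=cost)
```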

5 The modified versions of DE

In this section, we propose four modified versions of DE

which utilize the OBL techniques, i.e., opposition-based

population initialization as well as opposition-based gen-

eration jumping, in the SDE. Ahandani et al. (2010), inspired by the partitioning and shuffling processes employed in the SFL algorithm, gave parallel search ability to the DE algorithm and called the resulting algorithm the SDE.

Fig. 1 Surface plot of Eq. (16)

The SDE, like the SFL, divides the population into several subsets referred to as memeplexes and each memeplex is

improved by the DE.

The SFL includes three exclusive stages: partitioning,

local search and shuffling. In this algorithm, after genera-

tion of the initial population, members of the population

are evaluated in the cost function. Then, all frogs are par-

titioned to several parallel subsets. In order to partition

frogs into memeplexes on the assumption of partitioning

m memeplexes, each containing n frogs, after sorting the

population in a decreasing order in terms of their function

evaluation value, frog ranking 1 goes to memeplex 1, frog

ranking 2 goes to memeplex 2,…, frog ranking m goes to

memeplex m; then the second member of each subset is

assigned as: frog ranking (m ? 1) goes to memeplex 1,

frog ranking (m ? 2) goes to memeplex 2,…, frog ranking

(m ? m) goes to memeplex m. This process continues to

assign all frogs into memeplexes. After partitioning, the

different subsets perform a local search independently

using an evolutionary process to evolve their quality. This

evolutionary process is iterated for a defined maximum

number of iterations. Then all subsets are shuffled together and the stopping criteria are checked; if they are not met, the algorithm continues.
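A minimal sketch of this rank-based partitioning and of shuffling the memeplexes back together, assuming costs are to be minimized (so rank 1 corresponds to the lowest cost); the helper names and the 0-based ranking are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def partition_into_memeplexes(costs, m):
    """Rank-based partitioning: rank 1 -> memeplex 1, rank 2 -> memeplex 2, ...,
    rank m+1 -> memeplex 1, and so on (ranks taken by ascending cost)."""
    order = np.argsort(costs)          # index 0 of order is the fittest member
    return [order[k::m] for k in range(m)]

def shuffle_memeplexes(pop, costs, memeplexes):
    """Shuffling simply merges all memeplexes back into a single population."""
    idx = np.concatenate(memeplexes)
    return pop[idx], costs[idx]

m, n = 5, 20                           # m memeplexes of n members each (placeholder sizes)
pop = rng.uniform(-10, 10, size=(m * n, 3))
costs = np.sum(pop ** 2, axis=1)       # sphere cost, for illustration only
memeplexes = partition_into_memeplexes(costs, m)
# ... each memeplex would now be evolved independently (here by the DE) ...
pop, costs = shuffle_memeplexes(pop, costs, memeplexes)
```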

Our proposed algorithms, similar to the SFL, have all three of these stages but, like the SDE, employ the DE as the evolutionary process to evolve the quality of each member of the memeplexes. They also employ the opposition-based

population initialization and opposition-based generation

jumping. All four proposed algorithms similarly use the

opposition-based population initialization to achieve fitter

initial individuals and their difference is in applying oppo-

sition-based generation jumping. Also these algorithms use

Eq. (8) (DE/current-to-best/1) and Eq. (11) (exponential

crossover) as mutation and crossover strategies, respectively.

In the exponential crossover, we randomly select n and

l from the set of {1, 2, …, N} and do not use the proposed

pseudo-code given in Sect. 3, in order to reduce the number of control parameters of DE by one.

The first modified version of SDE is called shuffled

opposition-based DE (SOBDE). The SOBDE employs the

opposition-based generation jumping after each iteration of

evolutionary process for each memeplex. Steps of the

SOBDE algorithm with m memeplexes, n members of each

memeplex and kmax defined iteration number of evolu-

tionary process are shown in Fig. 3. In this algorithm,

$\mathrm{MINmem}_j^p$ and $\mathrm{MAXmem}_j^p$ denote the minimum and maximum

values of jth variable in the current memeplex, respec-

tively. mem and Omem are members and corresponding

opposite members of the current memeplex, respectively.

Thus, the SOBDE considers each memeplex as an independent set in which the full DE stages are carried out and opposition-based generation jumping is applied to its members with a probability of $J_r$.

In other words, the SOBDE uses the ODE algorithm as

an evolutionary process to evolve members of each

memeplex.

The second modified version of SDE is called shuffled

extended opposition-based DE (SEOBDE). The SEOBDE

applies the opposition-based generation jumping as an

extra stage to improve the members of each memeplex.

Steps of the SEOBDE algorithm are shown in Fig. 4. In

this algorithm, to prevent fast convergence, Step 3.3.5 is

carried out based on Eq. (18).

$$v = x_i + \mathrm{rem}\!\left(k(x_{g_2} - x_i) + F(x_{r_2} - x_{r_1}),\ \mathrm{MaxSize}\right) \qquad (18)$$

where rem denotes the remainder of the division of $k(x_{g_2} - x_i) + F(x_{r_2} - x_{r_1})$ by MaxSize. Also, MaxSize is determined as follows:

Fig. 2 An example for the opposition-based population strategy. a A set of 40 random members and their corresponding opposition members. b Remaining members after applying the opposition-based population technique


$$\mathrm{MaxSize}_j = \sqrt{\frac{|a_j| \times |b_j|}{3 \times (|a_j| + |b_j|)}} \quad \text{for } j = 1, 2, \ldots, N \qquad (19)$$

where $a_j$ and $b_j$ are the minimum and maximum interval boundaries of the variables, respectively. Thus, based on the above equation, the maximum allowable step size of the donor member toward the best member of the whole population, i.e., $x_{g_2}$, is restricted. MaxSize adjusts itself according to the variables' minimum and maximum interval boundaries in every problem.
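A sketch of Eqs. (18)-(19) is shown below. The exact remainder convention is not specified in the text, so the sign-preserving np.fmod used here (which caps the step magnitude below MaxSize while keeping its direction) is only one plausible reading, and all numerical values are placeholders.

```python
import numpy as np

def max_size(a, b):
    """Eq. (19): per-dimension cap on the donor step toward the global best."""
    return np.sqrt((np.abs(a) * np.abs(b)) / (3.0 * (np.abs(a) + np.abs(b))))

def bounded_donor(x_i, x_g2, x_r1, x_r2, F, k, a, b):
    """Eq. (18): v = x_i + rem(k*(x_g2 - x_i) + F*(x_r2 - x_r1), MaxSize)."""
    step = k * (x_g2 - x_i) + F * (x_r2 - x_r1)
    # np.fmod keeps the sign of the step; this remainder choice is an assumption.
    return x_i + np.fmod(step, max_size(a, b))

a = np.array([-10.0, -10.0])
b = np.array([10.0, 10.0])
x_i, x_g2 = np.array([2.0, -3.0]), np.array([0.1, 0.2])
x_r1, x_r2 = np.array([1.0, 4.0]), np.array([-2.0, 5.0])
v = bounded_donor(x_i, x_g2, x_r1, x_r2, F=0.6, k=0.8, a=a, b=b)
```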

The steps of the third modified version of SDE called

opposition-based SDE (OBSDE) are shown in Fig. 5. The

OBSDE, unlike two aforementioned algorithms which used

the opposition-based generation jumping inside evolution-

ary process to evolve members of a memeplex, employs it

after a complete iteration of SDE on the current population.

In this algorithm, $\mathrm{MIN}_j^p$ and $\mathrm{MAX}_j^p$ denote the minimum and

maximum values of jth variable in the current population,

respectively.

The final modified version of SDE is called opposition-

based shuffled extended opposition-based DE (OB-SEO-

BDE). The OB-SEOBDE on one hand, such as the SEOBDE,

after generation of an offspring for current member of

memeplex, applies the opposition-based generation jumping

as an extra stage to improve the current offspring. On the

other hand, the OB-SEOBDE, such as OBSDE, employs

another opposition-based generation jumping stage after a

complete iteration of algorithm on all members of the current

population. Figure 6 shows the steps of the OB-SEOBDE

algorithm.

6 Computational results

In this section, different experiments were carried out to

assess the performance of the proposed algorithms. The

considered values to set parameters of different algorithms

are shown in Table 1. The values of parameters in this

table are results of our experiments carried out by each

algorithm on different functions. For example, we found

that the values 5 and 20 for m and n, respectively, led to a

better performance in the SOBDE, but values of 20 and 5 were suitable for the three other algorithms.

Firstly, the focus of the experiments is to compare the

performance of the proposed algorithms with the original

SDE algorithm proposed by Ahandani et al. (2010). Then, a

Algorithm 1 (the SOBDE algorithm)

Begin SOBDE
Step 1: generate and evaluate an initial population of size N_pop based on the "opposition-based population initialization" strategy.
Step 2: generate m memeplexes with n members each, where N_pop = m × n.
Step 3: apply the OBDE (Steps 3.0 to 3.5) to improve each memeplex for k_max iterations:
  Step 3.0: set counter k = 1.
  Step 3.1: While k ≤ k_max, improve each member of the memeplex (x_i) as follows:
    Step 3.2: set counter i = 1.
    Step 3.3: While i ≤ n
      Step 3.3.1: determine the best member of the memeplex, x_g1, and the best member of the population, x_g2.
      Step 3.3.2: apply the mutation and crossover operators to x_i according to Eq. (8) and Eq. (11) with x_best = x_g1 and generate a new member u.
      Step 3.3.3: evaluate the cost function at the point u; if f(u) ≤ f(x_i), replace x_i with u and set i = i + 1, else go to the next step.
      Step 3.3.4: repeat Steps 3.3.2 and 3.3.3 with x_best = x_g2.
    End While
    Step 3.4: apply the "opposition-based generation jumping" strategy based on a jumping rate J_r:
      If rand(0, 1) < J_r
        For i = 1 : n
          For j = 1 : D
            Omem_{i,j} = MINmem_j^p + MAXmem_j^p − mem_{i,j};
          End For
        End For
        Select the n fittest members from the set {mem ∪ Omem} as the current memeplex.
    Step 3.5: set k = k + 1.
  End While
Step 4: shuffle the population.
Step 5: check the stopping criteria; if they are not met, go to Step 2.
End SOBDE

Fig. 3 The steps of SOBDE


comparison is carried out among the proposed algorithm

that obtained a better performance and some modern DE

algorithms proposed in the literature. The performance

comparison is made on 25 benchmark functions designed

for the special session on real-parameter optimization of

CEC2005 (Suganthan et al. 2005). Hansen (2005) split

these benchmark functions into three subsets:

• unimodal functions,

• solved multimodal functions (at least one algorithm

conducted at least one successful run),

• unsolved multimodal functions (no single run was

successful for any algorithm).

Unimodal functions are F1–F6, solved multimodal

functions are F7, F9–F12 and F15, and unsolved multimodal

functions include F8, F13 and F16–F25.

Twenty-five runs were performed for each benchmark

function. The results of 1st (min), 7th, 13th (median), 19th

and 25th (max) run are reported. All runs are averaged

(Avg) and standard deviation (SD) is also given. Also, a

non-parametric analysis over the obtained results using the Wilcoxon signed-ranks test with $\alpha = 0.05$ (see Garcia

et al. 2009) in terms of the best run (1st run), average of all

runs (Avg) and success rate (SR) is provided. These studies

by means of pairwise comparisons compare the modified

versions of DE with the SDE and other DE algorithms

proposed in the literature. Experiments are carried out for

10, 30 and 50 variable numbers. The obtained results are

reported in Tables 2, 3, 4, 5, 6, 7, 8 and 9.
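As a sketch of how such a pairwise, non-parametric comparison can be run, the snippet below applies SciPy's Wilcoxon signed-ranks test to two synthetic vectors of per-function average errors; the data are made up for illustration and are not the results reported in the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-function average errors for two algorithms over 25 functions.
rng = np.random.default_rng(8)
avg_error_a = rng.lognormal(mean=0.0, sigma=1.0, size=25)
avg_error_b = avg_error_a * rng.uniform(0.5, 1.1, size=25)

# Paired, non-parametric comparison at alpha = 0.05.
stat, p_value = wilcoxon(avg_error_a, avg_error_b)
if p_value < 0.05:
    print(f"significant difference (p = {p_value:.4f})")
else:
    print(f"no significant difference (p = {p_value:.4f})")
```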

Table 2 shows a comparison among the SDE2 algorithm

proposed by Ahandani et al. (2010) and four versions of the

DE algorithm proposed in this research. These experiments

are carried out with ten variable numbers. In this table, we

intend to show how well the proposed algorithms perform

when compared with the conventional SDE.

On F1 to F4, all algorithms have a similar performance.

They found optimal solutions on the F1, F2 and F4 functions with a success rate of 100%. On F5 to F7, the success rate obtained by the proposed algorithms is considerably

better than that obtained by the SDE2, and the SOBDE

obtains the best average results. It can be seen that, unlike

the SDE which has only a success rate of 44% on F7, our

proposed algorithms have at least a success rate of 84%.

On F8, the SEOBDE and OB-SEOBDE obtain a better

average and minimum results, respectively. On F9, all

algorithms have a successful performance. On F10, only the

SOBDE had a success rate of 4% and all other algorithms

did not converge to a minimum point at all. Also, this

algorithm has the best minimum and average results on

F11. On F12 and F13, the OB-SEOBDE obtains the best

Algorithm 2 (the SEOBDE algorithm)

Begin SEOBDE
Step 1: generate and evaluate an initial population of size N_pop based on the "opposition-based population initialization" strategy.
Step 2: generate m memeplexes with n members each, where N_pop = m × n.
Step 3: apply the extended OBDE (Steps 3.0 to 3.5) to improve each memeplex for k_max iterations:
  Step 3.0: set counter k = 1.
  Step 3.1: While k ≤ k_max, improve each member of the memeplex (x_i) as follows:
    Step 3.2: set counter i = 1.
    Step 3.3: While i ≤ n
      Step 3.3.1: determine the best member of the memeplex, x_g1, and the best member of the population, x_g2.
      Step 3.3.2: apply the mutation and crossover operators to x_i according to Eq. (8) and Eq. (11) with x_best = x_g1 and generate a new member u.
      Step 3.3.3: evaluate the cost function at the point u; if f(u) ≤ f(x_i), replace x_i with u.
      Step 3.3.4: apply the "opposition-based generation jumping" strategy based on a jumping rate J_r on u and generate Ou:
        If rand(0, 1) < J_r
          For j = 1 : D
            Ou_j = MINmem_j^p + MAXmem_j^p − u_j;
          End For
        If f(Ou) ≤ f(x_i), replace x_i with Ou and set i = i + 1, else go to the next step.
      Step 3.3.5: repeat Steps 3.3.2 to 3.3.4 with x_best = x_g2.
    End While
    Step 3.4: set k = k + 1.
  End While
Step 4: shuffle the population.
Step 5: check the stopping criteria; if they are not met, go to Step 2.
End SEOBDE

Fig. 4 The steps of SEOBDE


success rate and average results. F13 is categorized in the

unsolved multimodal functions, but our proposed algo-

rithms solve this problem. A success rate of 36% is

obtained by the OB-SEOBDE on this function. To the best

of our knowledge, after Becker et al. (2005) obtained a

success rate of 4% with a minimum value of 9.8771e-3 on

F13, this is the first time that an algorithm has a success-

ful performance on this function. On F14, the SDE2

outperforms all other algorithms. On F15, the OBSDE and

OB-SEOBDE have a better success rate, but the OBSDE

has the best average results.

F16 to F25 are unsolved multimodal functions. Our

proposed algorithms for the first time obtain some prom-

ising results on these functions. On F16, the OB-SEOBDE

has a better success rate and average results. To the best of

our knowledge, this is for the first time that an algorithm

solves this function. On F17, also for the first time the

SEOBDE and OBSDE have a success rate of 4%, but the

OB-SEOBDE obtains a better average results with respect

to the other algorithms under investigation. On F18 and F19,

the OB-SEOBDE has the best performance. F18 is solved

for the first time by the OB-SEOBDE algorithm. On F20,

all algorithms have a similar performance in terms of

minimum obtained result, but the OBSDE has a better

average result. On F21 and F22, the SEOBDE and

OB-SEOBDE obtain the best results, respectively. On F23,

the SOBDE and OB-SEOBDE obtain a better minimum

and average results, respectively. On F24 and F25, the

SEOBDE and SOBDE obtain the best average results,

respectively.

Table 3 shows the pairwise comparisons of the obtained

results of Table 2. These analyses demonstrate that the

performance of OB-SEOBDE has a significant difference

with respect to the SDE2. The OB-SEOBDE considerably

outperforms the SDE2 in terms of all three considered

aspects. Also, the SEOBDE has a slightly better perfor-

mance in terms of the average obtained results and success

rate with respect to SDE2. The OBSDE has a better per-

formance in terms of the average obtained results com-

pared to the SDE2. Overall, all results of the Wilcoxon test

show a better performance of modified algorithms, but only

performance of the aforementioned algorithms is signifi-

cantly better compared to the SDE2. On the other side, a

pairwise comparison only among our algorithms proposed

in this research shows that no algorithm has a significant difference from the others.

Algorithm 3 (the OBSDE algorithm)

Begin OBSDE
Step 1: generate and evaluate an initial population of size N_pop based on the "opposition-based population initialization" strategy.
Step 2: generate m memeplexes with n members each, where N_pop = m × n.
Step 3: apply the DE (Steps 3.0 to 3.4) to improve each memeplex for k_max iterations:
  Step 3.0: set counter k = 1.
  Step 3.1: While k ≤ k_max, improve each member of the memeplex (x_i) as follows:
    Step 3.2: set counter i = 1.
    Step 3.3: While i ≤ n
      Step 3.3.1: determine the best member of the memeplex, x_g1, and the best member of the population, x_g2.
      Step 3.3.2: apply the mutation and crossover operators to x_i according to Eq. (8) and Eq. (11) with x_best = x_g1 and generate a new member u.
      Step 3.3.3: evaluate the cost function at the point u; if f(u) ≤ f(x_i), replace x_i with u and set i = i + 1, else go to the next step.
      Step 3.3.4: repeat Steps 3.3.2 and 3.3.3 with x_best = x_g2.
    End While
    Step 3.4: set k = k + 1.
  End While
Step 4: shuffle the population.
Step 5: apply the "opposition-based generation jumping" strategy based on a jumping rate J_r:
  If rand(0, 1) < J_r
    For i = 1 : N_pop
      For j = 1 : D
        Opop_{i,j} = MIN_j^p + MAX_j^p − pop_{i,j};
      End For
    End For
    Select the N_pop fittest members from the set {pop ∪ Opop} as the current population.
Step 6: check the stopping criteria; if they are not met, go to Step 2.
End OBSDE

Fig. 5 The steps of OBSDE


Based on the results of Table 2 and clear statistical

evidence of Table 3, it can be concluded that combining

the OBL strategy in the SDE gives a promising and proper

approach to evolve the DE algorithm. This is clearly

reflected also by the Wilcoxon test. Our proposed algo-

rithms for the first time solve some of the functions that did

not have any successful reported results on them.

To evaluate the efficiency and quality of the proposed

DE algorithms, a comparison was performed between the proposed algorithm that obtained the best performance, i.e., the OB-SEOBDE, and some modern DE

algorithms. The following algorithms have been taken into

account for this goal: the Guided-DE and SaDE algorithms

were taken from (Bui et al. 2005) and (Qin and Suganthan

2005), respectively, and jDE and jDE-2 were taken from

(Brest et al. 2007). Table 4 shows the obtained results of

this comparison. Also Table 5 illustrates the results of

pairwise comparison for Table 4.

From Table 4, it is clearly observed that the OB-SEOBDE algorithm not only solves some previously unsolved functions for the first time, but also obtains promising performances comparable with those of the other algorithms. It has the best success rate on F7, and on F8 it obtains the best minimum result after the SaDE. On F12, only the OB-SEOBDE and SaDE algorithms obtain a success rate of 100%. On F16 to F18, F22, F23 and F25, the OB-SEOBDE obtains the best minimum result. Also on F16, F17, F22,

Algorithm 4 (the OB-SEOBDE algorithm)

Begin OB-SEOBDE
Step 1: generate and evaluate an initial population of size Npop based on the "opposition-based population initialization" strategy.
Step 2: generate m memeplexes with n members each, where Npop = m × n.
Step 3: apply the DE (Step 3.0 to Step 3.5) to improve each memeplex for kmax iterations:
  Step 3.0: set counter k = 1.
  Step 3.1: While k ≤ kmax, improve each member x_i of the memeplex as follows:
    Step 3.2: set counter i = 1.
    Step 3.3: While i ≤ n:
      Step 3.3.1: determine the best member of each memeplex, x_g1, and the best member of the population, x_g2.
      Step 3.3.2: apply the mutation and crossover operators to x_i according to Eq. (8) and Eq. (11) with x_best = x_g1 and generate a new member u.
      Step 3.3.3: evaluate the cost function at the point u; if f(u) ≤ f(x_i), replace x_i with u.
      Step 3.3.4: apply the "opposition-based generation jumping" strategy with a jumping rate Jr to u and generate Ou:

        If rand(0,1) ≤ Jr
          For j = 1 : D
            Ou_j = MINmem_j + MAXmem_j − u_j;
          End For
        End If

        If f(Ou) ≤ f(x_i), replace x_i with Ou and set i = i + 1; else go to the next step.
      Step 3.3.5: repeat Steps 3.3.2 to 3.3.4 with x_best = x_g2.
    End While
    Step 3.5: set k = k + 1.
  End While
Step 4: shuffle the population.
Step 5: apply the "opposition-based generation jumping" strategy based on a jumping rate Jr:

  If rand(0,1) ≤ Jr
    For i = 1 : Npop
      For j = 1 : D
        Opop_{i,j} = MIN_j + MAX_j − pop_{i,j};
      End For
    End For
  End If

  Select the Npop fittest members from the set {pop ∪ Opop} as the current population.
Step 6: check the stopping criteria; if they are not met, go to Step 2.
End OB-SEOBDE

Fig. 6 The steps of OB-SEOBDE
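Step 3.3.4 of Fig. 6 applies the same opposition idea to a single trial vector, but with the opposite computed from the bounds of the current memeplex. A possible reading of that step is sketched below; the names mem (the memeplex array) and u (the trial vector) are assumptions, and the surrounding selection logic of Steps 3.3.3 to 3.3.5 is omitted.

```python
# Sketch of the element-wise opposition jump of Step 3.3.4 in the OB-SEOBDE.
import numpy as np

def opposite_trial(u, mem, jr, rng):
    """u: (D,) trial vector, mem: (n, D) memeplex, jr: jumping rate Jr.

    Returns Ou_j = MINmem_j + MAXmem_j - u_j with probability Jr,
    otherwise None (no jump is performed)."""
    if rng.random() > jr:
        return None
    min_mem = mem.min(axis=0)            # MINmem_j of the current memeplex
    max_mem = mem.max(axis=0)            # MAXmem_j of the current memeplex
    return min_mem + max_mem - u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mem = rng.uniform(-5.0, 5.0, size=(5, 3))   # toy memeplex of 5 members
    u = rng.uniform(-5.0, 5.0, size=3)          # toy trial vector
    ou = opposite_trial(u, mem, jr=0.3, rng=rng)
    # If ou is not None, the algorithm would replace x_i with ou when f(ou) <= f(x_i).
```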

Table 1 Parameter setting for algorithms

Algorithms Npop F k m n kmax Jr

SOBDE 100 [0,1] [0,1.5] 5 20 20 0.3

SEOBDE 100 [0,1] [0,1.5] 20 5 20 0.3

OBSDE 100 [0,1] [0,1.5] 20 5 20 0.3

OB-SEOBDE 100 [0,1] [0,1.5] 20 5 20 0.3


Table 2 A comparison of different modified versions of the SDE algorithm with the original SDE

F SDE2 SOBDE SEOBDE OBSDE OB-SEOBDE

1

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0 0 0 0 0

19th 0 0 0 0 0

25th 0 0 0 0 0

Avg 0 0 0 0 0

SD 0 0 0 0 0

SR 100% 100% 100% 100% 100%

2

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0 0 0 0 0

19th 0 0 0 0 0

25th 0 0 0 0 1.0097e-028

Avg 0 0 0 0 4.039e-030

SD 0 0 0 0 2.0195e-029

SR 100% 100% 100% 100% 100%

3

1st 1,482.1 8,798 26,665 81,040 48,558

7th 1.9042e+05 88,741 1.0264e+005 3.4653e+005 2.6209e+005

13th 6.8461e+05 242,730 3.127e+005 6.965e+005 5.2998e+005

19th 1.9124e+06 604,330 7.7417e+005 1.1847e+006 1.3261e+006

25th 6.8311e+06 991,490 4.1338e+006 5.2102e+006 4.7468e+006

Avg 1.4238e+06 3.6271e+005 6.0006e+005 1.1279e+006 1.0695e+006

SD 1.8216e+06 3.201e+005 8.5133e+005 1.3115e+006 1.2453e+006

SR 0% 0% 0% 0% 0%

4

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0 0 0 0 0

19th 0 0 0 0 0

25th 0 0 0 0 0

Avg 0 0 0 0 0

SD 0 0 0 0 0

SR 100% 100% 100% 100% 100%

5

1st 1.3529e-09 0 0 4.5475e-012 0

7th 2.2685e-08 0 1.819e-012 6.4574e-011 3.638e-012

13th 7.6028e-08 0 5.457e-012 1.7917e-010 1.819e-011

19th 1.9793e-07 1.819e-012 5.0932e-011 9.5679e-010 9.8225e-011

25th 0.00058102 2.7285e-012 2.2737e-009 3.5782e-007 7.2396e-010

Avg 2.3374e-05 7.8217e-013 1.4799e-010 1.4917e-008 1.1216e-010

SD 0.00011618 1.0128e-012 4.5788e-010 7.1443e-008 1.9874e-010

SR 96% 100% 100% 100% 100%

6

1st 0 0 0 0 0

7th 4.4366e-027 0 0 0 0


Table 2 continued

F SDE2 SOBDE SEOBDE OBSDE OB-SEOBDE

13th 5.3977e-026 0 0 0 0

19th 1.3765e-023 0 0 1.0747e-026 0

25th 3.9866 3.9866 3.9866 3.9866 3.9866

Avg 0.47839 0.15946 0.15946 0.31893 0.31893

SD 1.3222 0.79732 0.79732 1.1038 1.1038

SR 88% 96% 96% 92% 92%

7

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0.31471 0 0 0 0

19th 0.43267 0 0 0 0

25th 0.56817 0.2522 0.51323 0.52774 0.5001

Avg 0.23472 0.02132 0.055695 0.053074 0.071392

SD 0.22308 0.071751 0.15462 0.14981 0.16767

SR 44% 92% 84% 88% 84%

8

1st 20.185 20.182 20.175 20.207 20.161

7th 20.262 20.268 20.286 20.317 20.262

13th 20.314 20.327 20.333 20.345 20.330

19th 20.395 20.382 20.382 20.382 20.411

25th 20.436 20.459 20.449 20.423 20.457

Avg 20.352 20.357 20.322 20.338 20.337

SD 0.07194 0.070692 0.079901 0.055888 0.077742

SR 0% 0% 0% 0% 0%

9

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0 0 0 0 0

19th 0 0 0 0 0

25th 0 0 0 0 0

Avg 0 0 0 0 0

SD 0 0 0 0 0

SR 100% 100% 100% 100% 100%

10

1st 0.3246 0 2.5163 1.9930 3.9798

7th 3.9798 1.9899 5.6396 4.9748 4.9748

13th 6.1726 2.9849 6.9647 5.9737 5.9698

19th 9.2496 3.9798 8.2866 8.7600 7.8613

25th 11.941 4.9748 10.945 11.9740 11.939

Avg 6.115 2.9849 6.7233 6.5850 6.2845

SD 3.0802 1.3472 2.0618 2.4629 2.0102

SR 0% 4% 0% 0% 0%

11

1st 4.0696 0.20195 2.2851 0.2841 1.5927

7th 5.8809 0.34713 3.8167 4.0300 3.7245

13th 6.5069 1.1816 4.7275 4.3902 4.5041

19th 6.8359 3.977 5.3568 5.0413 5.1917

25th 7.1235 5.9717 6.4281 6.2281 5.9237


Table 2 continued

F SDE2 SOBDE SEOBDE OBSDE OB-SEOBDE

Avg 6.1784 2.4006 4.5245 4.0058 4.3741

SD 0.63647 1.918 1.0891 1.6514 1.0961

SR 0% 0% 0% 0% 0%

12

1st 8.3251e-021 0 0 2.6253e-027 0

7th 2.9908e-019 0 4.5438e-028 3.8562e-018 0

13th 5.0646e-016 1.1107e-027 1.7184e-024 9.1946e-010 2.2883e-026

19th 1.1069e-012 7.622e-020 3.2081e-022 1.086e-006 1.2591e-017

25th 0.33006 3.1991 10.26 30.909 0.00032304

Avg 0.01321 0.13665 0.41039 1.2716 1.2922e-005

SD 0.06601 0.63948 2.052 6.1767 6.4608e-005

SR 96% 92% 96% 88% 100%

13

1st 0 0 0 0 0

7th 0.53028 0.33998 0.0098704 0.1148 0

13th 0.61752 0.39505 0.25542 0.3631 0.23436

19th 0.65918 0.54487 0.45909 0.4519 0.40209

25th 0.75641 0.70635 0.66559 0.6088 0.58901

Avg 0.54499 0.40265 0.2658 0.2912 0.22196

SD 0.19019 0.19736 0.22287 0.1991 0.21824

SR 4% 8% 28% 20% 36%

14

1st 1.8054 2.1365 2.1794 2.4224 2.773

7th 2.4539 2.6075 2.9611 3.0985 3.1501

13th 2.9027 2.7998 3.1416 3.2188 3.2706

19th 3.1588 3.1274 3.2562 3.3144 3.3128

25th 3.6929 3.527 3.6604 3.5209 3.5885

Avg 2.8039 2.8313 3.0661 3.1697 3.2259

SD 0.42955 0.3566 0.3195 0.2435 0.20208

SR 0% 0% 0% 0% 0%

15

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 200.0000 200.0000 100.0000 0.0338 35.457

19th 400.0000 400.0000 200.0000 150.2511 200.0000

25th 404.5218 407.942 237.2780 300.0000 300.0000

Avg 171.4286 229.76 101.4132 74.0761 97.924

SD 179.9471 153.98 95.2440 94.3074 104.2

SR 36% 32% 36% 44% 44%

16

1st 15.0245 14.372 1.3797e-013 9.4760 0

7th 21.6688 24.841 18.6846 31.2192 18.057

13th 28.6613 30.908 27.6082 43.0641 37.004

19th 33.6794 43.463 46.2769 91.5970 46.277

25th 115.2318 107.414 134.8407 121.6007 96.174

Avg 43.5085 44.148 40.2680 57.4225 37.171

SD 56.7509 30.171 35.2638 36.0052 25.682

SR 0% 0% 4% 0% 8%


Table 2 continued

F SDE2 SOBDE SEOBDE OBSDE OB-SEOBDE

17

1st 30.5549 10.175 0 0 11.5331

7th 39.5409 26.252 19.8993 23.1548 20.3217

13th 56.9181 36.603 35.1286 36.4231 28.4788

19th 89.6497 55.552 43.4500 47.0098 41.5130

25th 150.1124 89.736 122.5260 150.5640 57.0901

Avg 68.7787 42.533 45.2879 43.9726 29.4416

SD 52.7338 22.427 34.7103 33.3869 12.3062

SR 0% 0% 4% 4% 0%

18

1st 300.0000 300.0000 300.0000 300.0000 3.2816e-013

7th 400.0000 300.0000 400.0000 400.0000 300.0000

13th 808.7119 408.5410 500.0000 500.0000 400.0000

19th 824.1082 814.5702 800.0000 808.7699 500.0000

25th 902.5931 910.6617 909.5770 866.3345 830.8944

Avg 747.2209 554.0830 587.4405 592.3354 462.8617

SD 204.0083 245.7911 231.8998 227.2923 218.2436

SR 0% 0% 0% 0% 4%

19

1st 300.0000 300.000 300.0000 300.0000 300.0000

7th 400.0000 400.000 400.0000 335.9206 336.21472

13th 811.0385 804.2170 407.1330 400.0000 400.0000

19th 822.2928 829.4831 802.8916 800.0000 500.0000

25th 885.8953 930.0147 971.3249 882.6131 849.1442

Avg 691.0391 646.4917 580.2510 529.0426 465.7913

SD 198.8886 230.5831 238.4771 225.8041 181.8147

SR 0% 0% 0% 0% 0%

20

1st 300.000 300.000 300.0000 300.0000 300.0000

7th 300.000 400.000 300.0000 400.0000 374.7275

13th 400.000 800.000 500.0000 400.0000 400.0000

19th 800.000 815.9904 800.0000 800.0000 800.0000

25th 848.9451 860.7317 903.3796 825.0095 852.1974

Avg 545.4350 615.1227 563.0376 528.6478 543.4026

SD 247.2186 219.95 233.6682 214.1895 224.0738

SR 0% 0% 0% 0% 0%

21

1st 500.0000 500.0000 410.4922 410.4945 500.0000

7th 531.9214 524.6717 500.0000 500.0000 500.0000

13th 614.3927 586.4908 557.6002 567.2293 584.2689

19th 673.65 687.9217 648.2883 616.4363 678.5980

25th 900.0000 828.7513 747.2052 780.8986 900.0000

Avg 640.0708 619.5562 572.3121 573.8298 605.6206

SD 42.6968 106.4992 98.4800 84.7315 104.2032

SR 0% 0% 0% 0% 0%

22

1st 500.0000 500.0000 200.0000 500.0000 200.0000

7th 500.0058 500.0024 500.0002 500.0012 500.0001


F23 and F25, the OB-SEOBDE obtains the best average results, and on F13, F16 and F18 only the OB-SEOBDE achieves successful runs. Furthermore, from Table 4 we can observe that the OB-SEOBDE and the Guided-DE are the algorithms with the worst performance on F3. The non-parametric analysis of Table 5 also clearly shows that the OB-SEOBDE outperforms the Guided-DE: it has a considerably better performance than the Guided-DE in terms of all three considered aspects. The OB-SEOBDE also performs considerably better than the jDE in terms of the best run, while its pairwise comparisons with the other algorithms do not show a significant difference. As an overall conclusion, the results of Table 5 show that although the performance of the OB-SEOBDE does not differ significantly from that of some algorithms, in all cases the OB-SEOBDE is the fitter algorithm.

To examine the performance of the OB-SEOBDE

on high-dimensional functions, a comparison among the

OB-SEOBDE, the Guided-DE proposed in Bui et al. (2005)

and three modified versions of DE proposed in Ahandani

et al. (2010), i.e., the BDE2, SDE2 and SBDE2, is carried

out in Tables 6, 7, 8 and 9 for 30 and 50 variables.

Applying the OB-SEOBDE to high-dimensional functions highlights its efficiency. Table 6 shows the comparison results for 30 variables. The results in this table and their pairwise comparisons in Table 7 clearly demonstrate that the OB-SEOBDE has a

Table 2 continued

F SDE2 SOBDE SEOBDE OBSDE OB-SEOBDE

13th 500.2138 500.1100 500.0019 500.0121 500.0094

19th 518.6937 503.2314 500.0492 500.3966 500.0353

25th 889.2026 508.7924 802.3267 510.6857 576.6202

Avg 594.5276 502.2866 500.2884 500.6647 487.2779

SD 160.2612 2.5360 86.9411 2.1514 64.9543

SR 0% 0% 0% 0% 0%

23

1st 510.1981 508.0902 523.1287 507.0437 529.3801

7th 566.9675 548.0127 543.5150 526.9657 529.3801

13th 588.5153 613.0882 567.7894 554.4714 565.9457

19th 614.5676 809.5482 603.1247 635.4232 611.2536

25th 701.0727 949.4413 928.7413 827.3615 766.0608

Avg 594.8642 660.5117 589.5522 597.6233 575.1675

SD 76.0495 156.7053 84.3700 94.7825 77.5191

SR 0% 0% 0% 0% 0%

24

1st 200.0000 200.0000 200.0000 200.0000 200.0000

7th 200.0000 202.8504 202.5138 207.0925 200.0000

13th 203.4436 216.1271 207.3989 213.7113 205.9986

19th 216.5592 251.1467 220.4807 223.2402 248.8015

25th 500.0000 421.0412 400.0000 500.0000 500.0000

Avg 243.4304 254.1792 226.4891 237.7216 246.2228

SD 92.5408 78.5770 53.6241 79.8734 84.4419

SR 0% 0% 0% 0% 0%

25

1st 200.0001 200.0000 200.0000 200.0000 200.0000

7th 200.0528 205.1204 200.1249 200.9169 200.0000

13th 201.1032 229.5440 207.9837 207.1592 400.0000

19th 221.9227 272.5841 259.5602 313.7887 500.0000

25th 597.0077 816.2591 1122.2786 900.0000 500.0000

Avg 284.0225 290.1214 311.2877 302.0553 354.0000

SD 115.2128 142.7110 241.8858 181.2217 147.7973

SR 0% 0% 0% 0% 0%


significantly better performance than the other algorithms in terms of the 1st run and Avg. It also outperforms the Guided-DE in terms of success rate, but there is no significant difference among the success rates of the OB-SEOBDE and the other algorithms.

Table 8 shows the comparison results for 50 variables. The Guided-DE was not applied to 50 variables; thus, this table only compares the OB-SEOBDE with the three DE algorithms proposed in Ahandani et al. (2010). Table 8 and the pairwise comparisons of its results in Table 9 clearly confirm the findings of Tables 6 and 7: the OB-SEOBDE considerably outperforms the three other algorithms in terms of the 1st run and Avg, while the success rates do not differ significantly.

Table 3 Wilcoxon test applied over all possible comparisons among the algorithms of Table 2 in terms of the best run (1st run), average of all runs (Avg) and success rate (SR)

Comparison 1st run Avg SR

R+ R- p value R+ R- p value R+ R- p value

SDE2–SOBDE 227.5 97.5 0.155 193 132 0.394 127.5 197.5 0.161

SDE2–SEOBDE 205 120 0.382 253 72 0.017 95 230 0.026

SDE2–OBSDE 200.5 144.5 0.695 239 86 0.042 108 217 0.105

SDE2–OB-SEOBDE 203 122 0.019 261 64 0.009 76.5 248.5 0.011

SOBDE–SEOBDE 164 161 0.878 180.5 144.5 0.502 130.5 194.5 0.380

SOBDE–OBSDE 141.5 183.5 0.859 161 164 0.876 169.5 155.5 0.725

SOBDE–OB-SEOBDE 144 181 0.169 217 108 0.115 136.5 188.5 0.203

SEOBDE–OBSDE 116.5 208.5 0.203 137 188 0.590 185 140 0.453

SEOBDE–OB-SEOBDE 136 189 0.508 196 129 0.372 127.5 197.5 0.160

OBSDE–OB-SEOBDE 172 153 0.221 192 133 0.414 137 188 0.168

Table 4 A comparison of our proposed OB-SEOBDE with some modern DE algorithms from the literature

F OB-SEOBDE Guided-DE SaDE jDE-2 jDE

1

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0 0 0 0 0

19th 0 0 0 0 0

25th 0 0 0 0 0

Avg 0 0 0 0 0

SD 0 0 0 0 0

SR 100% 100% 100% 100% 100%

2

1st 0 0 0 0 0

7th 0 0 0 0 0

13th 0 0 0 0 0

19th 0 0 0 0 2.7755e-17

25th 1.0097e-028 0.958 2.5580e-12 0 4.4408e-16

Avg 4.039e-030 0.057 1.0459e-13 0 2.8865e-17

SD 2.0195e-029 0.193 5.1124e-13 0 9.0109e-17

SR 100% 80% 100% 100% 100%

3

1st 48,558 1.34e+04 0 0 1.0963e-14

7th 2.6209e+005 4.63e+04 0 0 6.1627e-12

13th 5.2998e+005 1.47e+05 0 0 1.3901e-10

19th 1.3261e+006 2.83e+05 9.9142e-06 2.7758e-17 9.5245e-09


Table 4 continued

F OB-SEOBDE Guided-DE SaDE jDE-2 jDE

25th 4.7468e+006 9.41e+05 1.0309e-04 1.3569e-11 6.2006e-06

Avg 1.0695e+006 2.09e+05 1.6720e-05 5.4316e-13 4.3401e-07

SD 1.2453e+006 2.04e+05 3.1196e-05 2.7138e-12 1.3459e-06

SR 0% 0% 64% 100% 88%

4

1st 0 0 0 0 0

7th 0 0 0 0 5.5511e-17

13th 0 0.065 0 0 1.9428e-16

19th 0 1.090 0 0 4.4408e-16

25th 0 2.942 3.5456e-04 0 4.0245e-15

Avg 0 0.619 1.4182e-05 0 4.4186e-16

SD 0 0.921 7.0912e-05 0 8.1713e-16

SR 100% 40% 96% 100% 100%

5

1st 0 37.703 1.1133e-06 0 1.7763e-15

7th 3.638e-012 66.064 0.0028 0 5.3290e-15

13th 1.819e-011 108.370 0.0073 0 1.0658e-14

19th 9.8225e-011 168.983 0.0168 0 1.5987e-14

25th 7.2396e-010 250.417 0.0626 0 5.5067e-14

Avg 1.1216e-010 121.796 0.0123 0 1.5312e-14

SD 1.9874e-010 63.770 0.0146 0 1.3994e-14

SR 100% 0% 0% 100% 100%

6

1st 0 0.082 0 0 7.8986e-13

7th 0 0.839 4.3190e-09 0 5.9842e-03

13th 0 1.722 5.1631e-09 0 0.0203

19th 0 4.096 9.1734e-09 2.7755e-17 0.0886

25th 3.9866 18.128 8.0479e-08 1.1379e-15 0.6044

Avg 0.31893 3.538 1.1987e-08 1.2767e-16 0.0760

SD 1.1038 4.068 1.9282e-08 3.0520e-16 0.1303

SR 92% 0% 100% 100% 32%

7

1st 0 64.788 4.6700e-10 0 1.8444e-13

7th 0 132.678 0.0148 7.3960e-03 0.0322

13th 0 153.585 0.0197 0.0123 0.0555

19th 0 186.077 0.0271 0.0270 0.0767

25th 0.5001 231.308 0.0369 0.0467 0.1124

Avg 0.071392 157.751 0.0199 0.0167 0.0531

SD 0.16767 39.583 0.0107 0.0140 0.0310

SR 84% 0% 24% 40% 12%

8

1st 20.161 20.267 20.0000 20.1999 20.1727

7th 20.262 20.390 20.0000 20.3017 20.2538

13th 20.330 20.449 20.0000 20.3444 20.3384

19th 20.411 20.497 20.0000 20.4060 20.3951

25th 20.457 20.572 20.0000 20.5016 20.4314

Avg 20.337 20.444 20.0000 20.3541 20.3206

SD 0.077742 0.077 5.3901e-08 0.0711 0.0750


Table 4 continued

F OB-SEOBDE Guided-DE SaDE jDE-2 jDE

SR 0% 0% 0% 0% 0%

9

1st 0 0 0 0 0

7th 0 1.990 0 0 0

13th 0 3.980 0 0 0

19th 0 4.975 0 0 0

25th 0 10.945 0 0 0

Avg 0 3.967 0 0 0

SD 0 2.911 0 0 0

SR 100% 12% 100% 100% 100%

10

1st 3.9798 3.980 1.9899 2.2576 4.4341

7th 4.9748 8.955 3.9798 5.1310 7.6225

13th 5.9698 12.935 4.9748 5.9697 9.6381

19th 7.8613 16.914 5.9698 7.4081 10.3258

25th 11.939 29.849 9.9496 9.9866 13.2220

Avg 6.2845 13.651 4.9685 6.1680 9.2627

SD 2.0102 6.414 1.6918 1.9945 1.9270

SR 0% 0% 0% 0% 0%

11

1st 1.5927 0.541 3.2352 8.9258e-10 0.0859

7th 3.7245 0.956 4.5129 2.0095e-05 4.7395

13th 4.5041 2.202 4.7649 3.2703e-03 5.1636

19th 5.1917 2.807 5.3823 0.1453 5.9770

25th 5.9237 4.759 5.9546 6.4392 6.8224

Avg 4.3741 2.196 4.8909 0.8058 5.0802

SD 1.0961 1.140 0.6619 1.9162 1.4210

SR 0% 0% 0% 56% 0%

12

1st 0 0 1.4120e-10 0 2.8310e-15

7th 0 1.668 1.7250e-08 0 1.0361e-13

13th 2.2883e-026 10.731 8.1600e-08 0 4.6544e-09

19th 1.2591e-017 21.505 3.8878e-07 10.0030 1.2524e-04

25th 0.00032304 1,556.660 3.3794e-06 712.2541 35.4601

Avg 1.2922e-005 262.015 4.5011e-07 67.8339 4.1255

SD 6.4608e-005 484.243 8.5062e-07 198.4822 8.7597

SR 100% 0% 100% 68% 76%

13

1st 0 0.272 0.1201 0.3701 0.3090

7th 0 0.444 0.1957 0.4394 0.3988

13th 0.23436 0.568 0.2170 0.5053 0.4573

19th 0.40209 0.702 0.2508 0.5700 0.5006

25th 0.58901 1.347 0.3117 0.6486 0.5458

Avg 0.22196 0.598 0.2202 0.5052 0.4504

SD 0.21824 0.238 0.0411 0.0813 0.0631

SR 36% 0% 0% 0% 0%

14

1st 2.773 2.558 2.5765 1.4669 2.8531


Table 4 continued

F OB-SEOBDE Guided-DE SaDE jDE-2 jDE

7th 3.1501 3.000 2.7576 2.6163 3.2235

13th 3.2706 3.233 2.8923 2.8582 3.2685

19th 3.3128 3.395 3.0258 2.9935 3.3636

25th 3.5885 3.742 3.3373 3.2399 3.5279

Avg 3.2259 3.208 2.9153 2.7747 3.2613

SD 0.20208 0.274 0.2063 0.3540 0.1668

SR 0% 0% 0% 0% 0%

15

1st 0 0 0 0 0

7th 0 77.738 0 0 0

13th 35.457 119.676 0 400.0000 67.8032

19th 200.0000 406.258 2.9559e-12 400.0000 100.5804

25th 300.0000 441.765 400.0000 400.0000 400.0000

Avg 97.924 183.862 32.0000 224.0000 95.1523

SD 104.2 153.572 110.7550 202.6491 122.2963

SR 44% 8% 92% 44% 28%

16

1st 0 101.171 86.3059 61.4505 95.3606

7th 18.057 115.255 98.5482 91.6652 107.4456

13th 37.004 120.640 101.4533 98.3280 112.8221

19th 46.277 126.891 104.9396 102.0676 115.7979

25th 96.174 159.818 111.9003 124.0614 121.4367

Avg 37.171 121.892 101.2093 96.7627 111.2191

SD 25.682 12.841 6.1686 12.0952 6.1818

SR 8% 0% 0% 0% 0%

17

1st 11.5331 87.447 99.0400 96.3383 108.1094

7th 20.3217 111.870 106.7286 101.0323 124.9650

13th 28.4788 116.812 113.6242 112.0854 129.6757

19th 41.5130 132.588 119.2813 116.7995 136.1688

25th 57.0901 148.346 135.5105 121.5371 153.8196

Avg 29.4416 120.604 114.0600 109.9146 130.4137

SD 12.3062 14.572 9.9679 8.5175 10.7680

SR 0% 0% 0% 0% 0%

18

1st 3.2816e-013 300.000 300.0000 300.0000 300.0000

7th 300.0000 300.0000 800.0000 800.0000 300.0000

13th 400.0000 724.071 800.0000 800.0000 300.0000

19th 500.0000 800.136 800.0000 800.0000 800.0000

25th 830.8944 842.140 900.8377 800.0000 800.0000

Avg 462.8617 586.495 719.3861 700.0000 440.0000

SD 218.2436 237.556 208.5161 204.1241 229.1288

SR 4% 0% 0% 0% 0%

19

1st 300.0000 300.000 300.0000 300.0000 300.0000

7th 336.21472 724.974 653.5664 300.0000 300.0000

13th 400.0000 835.172 800.0000 800.0000 300.0000

19th 500.0000 840.791 800.0000 800.0000 300.0000


Table 4 continued

F OB-SEOBDE Guided-DE SaDE jDE-2 jDE

25th 849.1442 972.051 930.7288 852.8697 800.0000

Avg 465.7913 725.042 704.9373 662.1148 400.0000

SD 181.8147 213.177 190.3959 230.7133 204.1241

SR 0% 0% 0% 0% 0%

20

1st 300.0000 300.000 300.0000 300.0000 300.0000

7th 374.7275 728.782 800.0000 300.0000 300.0000

13th 400.0000 832.504 800.0000 800.0000 300.0000

19th 800.0000 839.811 800.0000 800.0000 300.0000

25th 852.1974 841.154 907.0822 852.9271 800.0000

Avg 543.4026 723.011 713.0240 662.1171 400.0000

SD 224.0738 184.818 201.3396 230.7153 204.1241

SR 0% 0% 0% 0% 0%

21

1st 500.0000 862.119 300.0000 300.0000 500.0000

7th 500.0000 1,064.940 300.0000 500.0000 500.0000

13th 584.2689 1,080.770 500.0000 500.0000 500.0000

19th 678.5980 1,085.790 500.0000 500.0000 500.0000

25th 900.0000 1,092.470 800.0000 862.5199 800.0000

Avg 605.6206 1,068.658 464.0000 514.5008 512.0000

SD 104.2032 43.467 157.7973 173.9449 60.0000

SR 0% 0% 0% 0% 0%

22

1st 200.0000 749.223 300.0000 754.6395 749.4194

7th 500.0001 770.919 750.6537 757.3936 761.3741

13th 500.0094 781.991 752.4286 763.5070 763.6705

19th 500.0353 804.480 756.9808 766.7042 766.6464

25th 576.6202 884.563 800.0000 825.7207 768.1616

Avg 487.2779 796.623 734.9044 770.4906 763.2931

SD 64.9543 38.076 91.5229 22.1955 4.5850

SR 0% 0% 0% 0% 0%

23

1st 529.3801 559.468 559.4683 559.4683 559.4683

7th 529.3801 1,091.480 559.4683 559.4683 559.4683

13th 565.9457 1,102.440 559.4683 559.4683 559.4683

19th 611.2536 1,107.440 721.2327 721.2160 559.4683

25th 766.0608 1,125.880 970.5031 970.5031 970.5031

Avg 575.1675 1,032.814 664.0557 670.5235 605.2911

SD 77.5191 177.038 152.6608 151.4658 118.6263

SR 0% 0% 0% 0% 0%

24

1st 200.0000 399.931 200.0000 200.0000 200.0000

7th 200.0000 406.682 200.0000 200.0000 200.0000

13th 205.9986 408.827 200.0000 200.0000 200.0000

19th 248.8015 410.455 200.0000 200.0000 200.0000

25th 500.0000 412.857 200.0000 500.0000 200.0000

Avg 246.2228 408.229 200.0000 248.0000 200.0000

SD 84.4419 3.061 0 112.2497 0


Table 4 continued

F OB-SEOBDE Guided-DE SaDE jDE-2 jDE

SR 0% 0% 0% 0% 0%

25

1st 200.0000 200.002 370.9112 402.0977 401.7665

7th 200.0000 406.489 373.0349 402.3489 402.3340

13th 400.0000 408.979 375.4904 402.5183 402.5493

19th 500.0000 411.463 378.1761 402.6305 402.6452

25th 500.0000 415.074 381.5455 114.9815 402.9896

Avg 354.0000 368.090 375.8646 452.1622 402.4783

SD 147.7973 84.078 3.1453 166.3749 0.2858

SR 0% 0% 0% 0% 0%

Table 5 Wilcoxon test applied over the obtained results of Table 4 in terms of the best run (1st run), average of all runs (Avg) and success rate

(SR)

Comparison 1st run Avg SR

R+ R- p value R+ R- p value R+ R- p value

Guided-DE–OB-SEOBDE 254 71 0.003 290.5 34.5 0.001 52.5 272.5 0.003

SaDE–OB-SEOBDE 208.5 116.5 0.278 185.5 139.5 0.503 133.5 191.5 0.553

jDE2–OB-SEOBDE 192 133 0.311 222 103 0.082 145 180 0.944

jDE–OB-SEOBDE 259.5 65.5 0.017 163.5 161.5 1.000 101.5 223.5 0.161

Table 6 A comparison of the OB-SEOBDE with the Guided-DE algorithm and the results reported in Ahandani et al. (2010) for 30 variables

F OB-SEOBDE BDE2 SDE2 SBDE2 Guided-DE

1

1st 0 0 0 0 0.000

7th 0 0 0 0 0.000

13th 0 0 0 0 0.000

19th 0 0 0 0 0.137

25th 2.0195e-028 6.1781e-028 2.0195e-028 2.5244e-028 4.800

Avg 1.2117e-029 6.0182e-029 2.1457e-029 2.6506e-029 0.454

SD 4.4398e-029 9.9110e-029 6.3542e-029 7.9483e-029 1.078

SR 100% 100% 100% 100% –

2

1st 1.5146e-028 7.993e-029 0 5.1097e-029 0.119

7th 5.4905e-028 5.8376e-028 0 1.1967e-028 7.350

13th 7.2891e-028 8.141e-028 2.5244e-029 3.2585e-028 24.610

19th 9.9554e-028 1.0419e-027 4.4965e-029 6.9341e-028 64.701

25th 5.3011e-027 1.4414e-027 1.5146e-028 1.3373e-027 310.798

Avg 9.9009e-028 8.0404e-028 4.0469e-029 5.4863e-028 51.146

SD 1.0001e-027 3.6084e-028 4.9427e-029 3.2185e-028 67.704

SR 100% 100% 100% 100% 0%

3

1st 2.3512e+005 3.1476e+06 1.4467e+06 5.7461e+05 1.80e+06

7th 7.4967e+006 5.1924e+007 2.8219e+007 1.052e+007 2.57e+06

13th 2.4559e+007 2.4527e+008 6.5273e+007 2.3014e+007 3.38e+06


Table 6 continued

F OB-SEOBDE BDE2 SDE2 SBDE2 Guided-DE

19th 8.110e+007 5.2713e+008 5.0448e+008 4.8214e+008 4.30e+06

25th 5.2278e+008 7.6017e+008 7.1908e+008 6.6334e+008 9.70e+06

Avg 6.0923e+007 5.1234e+008 3.1254e+008 1.5127e+008 3.81e+06

SD 8.0526e+007 3.7215e+008 5.7215e+008 6.2145e+008 1.94e+06

SR 0% 0% 0% 0% 0%

4

1st 5.418e-020 1.4361e-014 3.3937e-020 4.2468e-018 62.353

7th 3.7285e-016 1.4222e-010 1.3132e-018 6.6684e-017 113.070

13th 1.7778e-015 2.1623e-009 2.0363e-017 2.2164e-015 228.604

19th 9.5514e-015 7.7203e-009 8.1365e-016 4.5062e-014 662.856

25th 4.2682e-013 0.00010071 1.0174e-014 1.7014e-012 6,969.270

Avg 4.4738e-014 8.0174e-006 1.3049e-015 2.1602e-013 659.219

SD 1.0205e-013 3.1846e-005 3.1778e-015 5.3413e-013 1,347.355

SR 100% 96% 100% 100% 0%

5

1st 0.00020259 484.14 8,209.6 2.0049 942.059

7th 0.0051244 2,604.6 12,044 22.916 1,745.360

13th 0.01463 4,505.9 14,224 31.476 2,231.190

19th 0.17468 10,264 15,627 60.674 2,852.270

25th 13.175 13,722 17,497 259.48 4,235.990

Avg 0.67721 4,393.4 13,487 58.685 2,348.966

SD 2.6183 6,541.4 3,191.4 70.732 860.644

SR 0% 0% 0% 0% 0%

6

1st 9.1503e-026 1.3138e-026 0 0 0.840

7th 1.3898e-025 1.4176e-025 0 1.8923e-026 21.403

13th 2.9727e-025 2.2986e-025 0 2.1719e-025 31.882

19th 4.5041e-025 2.8887e-025 3.9781e-026 3.9866 73.881

25th 3.9866 3.9866 1.6749e-025 3.9866 163.846

Avg 0.79732 1.196 5.4807e-026 1.5946 56.180

SD 1.6275 1.9257 6.3257e-026 2.0587 45.049

SR 80% 76% 100% 68% 0%

7

1st 0 0.4988 12.51 1.9863e-008 1,346.130

7th 0 0.92976 21.587 1.5875e-006 1,518.540

13th 0 1.1029 41.93 1.6134e-005 1,626.350

19th 0 1.3849 54.576 0.0030908 1,703.580

25th 1.1797e-010 6.3554 175.42 0.88537 1,921.080

Avg 4.587e-010 1.6952 52.55 0.090865 1,629.623

SD 2.3127e-009 1.6891 56.487 0.090865 147.479

SR 100% 0% 0% 60% 0%

8

1st 20.6641 20.7102 20.6839 20.7236 20.791

7th 20.832 20.8330 20.827 20.894 20.980

13th 20.9007 20.9531 20.9138 20.9322 21.013

19th 20.937 20.9939 20.9698 20.9590 21.036

25th 20.967 21.1037 21.036 21.0214 21.130

Avg 20.8141 20.9360 20.829 20.9107 21.008


Table 6 continued

F OB-SEOBDE BDE2 SDE2 SBDE2 Guided-DE

SD 0.067754 0.029065 0.10685 0.053022 0.063

SR 0% 0% 0% 0% 0%

9

1st 5.9698 27.849 6.1647 12.924 23.882

7th 9.9496 40.793 8.9546 21.889 44.795

13th 11.94 41.788 12.934 25.854 58.815

19th 16.914 49.748 16.914 29.824 64.692

25th 32.834 74.622 40.793 36.813 103.480

Avg 14.168 48.454 15.621 26.963 57.852

SD 6.6016 13.497 15.621 6.4897 17.005

SR 0% 0% 0% 0% 0%

10

1st 29.829 38.703 26.763 31.788 31.955

7th 50.743 83.526 46.108 61.687 62.688

13th 57.687 94.49 55.718 72.627 74.622

19th 64.657 102.45 65.167 86.536 83.577

25th 83.576 113.42 73.627 119.39 118.517

Avg 57.574 94.931 58.708 74.753 73.577

SD 14.257 18.823 11.835 22.981 21.372

SR 0% 0% 0% 0% 0%

11

1st 18.121 14.1608 20.756 11.692 14.364

7th 24.056 18.083 25.108 16.201 15.946

13th 26.121 20.325 27.547 19.591 21.013

19th 27.765 22.119 30.804 21.546 24.967

25th 31.050 26.725 32.211 23.813 35.773

Avg 25.903 20.927 27.852 18.986 21.126

SD 2.9576 6.49125 2.2754 3.0596 5.048

SR 0% 0% 0% 0% 0%

12

1st 6.4445e-016 14,845 5.7541e-019 1.3586e-007 52.735

7th 6.7361e-010 27,744 1.2288e-015 0.0042246 989.109

13th 2.0288e-006 35,429 5.4283e-015 0.1492 1,580.930

19th 0.0071141 87,639 1.4367e-014 29.868 2,348.470

25th 25.509 6.3404e+005 2.3969e-014 50,666 1.12e+04

Avg 1.3981 1.473e+005 9.1191e-015 5,695.6 2441.885

SD 8.9530 2.2534e+005 8.1169e-015 15,921 2,605.095

SR 80% 0% 100% 32% 0%

13

1st 0.71835 1.1346 1.2820 0.80427 2.098

7th 1.2231 1.9449 2.2275 1.035 3.622

13th 1.3054 2.5194 2.8052 1.5879 3.966

19th 1.5151 2.764 3.1576 1.8976 4.356

25th 2.7829 3.509 3.6905 2.6498 8.442

Avg 1.4279 2.6047 2.8583 1.5135 4.179

SD 0.42793 0.65802 0.61543 0.57842 1.342

SR 0% 0% 0% 0% 0%


Table 6 continued

F OB-SEOBDE BDE2 SDE2 SBDE2 Guided-DE

14

1st 12.535 11.874 10.788 11.487 11.799

7th 12.769 12.174 11.926 12.126 12.895

13th 12.896 12.231 12.243 12.411 13.098

19th 13.061 12.317 12.581 12.686 13.390

25th 13.123 12.494 13.038 12.883 13.720

Avg 12.928 12.267 12.155 12.403 13.080

SD 0.1935 0.13036 0.543 0.29501 0.393

SR 0% 0% 0% 0% 0%

15

1st 202.68 202.03 205.03 200.00 204.393

7th 218.73 274.13 231.77 210.63 212.871

13th 230.000 307.63 306.74 218.5 230.674

19th 400.80 400 354.85 225.28 252.062

25th 500.000 401.17 400.00 401.62 401.536

Avg 315.31 321.93 320.11 227.14 258.444

SD 76.81 74.315 76.21 71.471 67.054

SR 0% 0% 0% 0% 0%

16

1st 31.3026 51.5240 35.4910 84.1800 73.644

7th 55.6228 77.7260 60.8648 131.108 103.422

13th 84.6663 92.4620 87.0665 151.7871 131.223

19th 142.2772 400.0000 168.5320 168.5100 215.766

25th 400.0000 400.7220 400.2981 186.4911 401.614

Avg 114.54 236.2554 122.6239 145.9877 191.250

SD 90.033 158.0505 117.8998 31.7808 121.847

SR 0% 0% 0% 0% 0%

17

1st 42.4480 65.573 58.9740 60.4920 79.762

7th 58.0234 91.21 88.0527 77.1940 102.751

13th 88.2147 117.092 114.5402 90.3100 127.307

19th 95.2247 152.173 128.2126 102.9162 159.434

25th 143.142 489.93 165.7200 193.0700 437.790

Avg 80.9177 169.58 109.9214 93.1725 144.814

SD 30.9687 117.19 35.4737 45.598 70.201

SR 0% 0% 0% 0% 0%

18

1st 300.0000 767.98 620.47 659.78 842.076

7th 571.0487 813.54 655.72 743.4 859.492

13th 721.6254 823.64 708.63 809.49 863.223

19th 815.4869 851.01 814.05 821.25 865.722

25th 869.6934 910.16 836.76 911.07 871.441

Avg 701.1381 845.91 747.58 809.61 862.275

SD 150.4512 48.489 82.136 94.38 6.058

SR 0% 0% 0% 0% 0%

19

1st 604.1339 735.1523 716.0900 758.7012 852.813

7th 772.4365 803.5175 804.5631 807.8211 858.968


Table 6 continued

F OB-SEOBDE BDE2 SDE2 SBDE2 Guided-DE

13th 806.2962 815.6446 810.5315 811.6306 863.000

19th 811.9980 820.0809 814.4906 813.3052 864.898

25th 912.0979 875.1917 831.1900 817.0719 867.107

Avg 780.5621 809.6267 799.1730 810.6067 862.108

SD 65.5099 40.0791 36.0907 12.2955 4.009

SR 0% 0% 0% 0% 0%

20

1st 300.0000 756.3110 651.1036 742.6102 856.228

7th 574.1389 808.6283 781.7015 808.8116 860.962

13th 663.9974 814.1081 809.8928 811.5144 863.679

19th 814.1187 818.1978 813.5106 813.5534 865.482

25th 838.8724 827.7574 818.1264 819.1095 870.040

Avg 649.8701 812.970 773.4084 810.7644 863.248

SD 131.7891 24.3667 81.5913 31.8369 3.601

SR 0% 0% 0% 0% 0%

21

1st 559.38032 605.2532 589.1410 565.5196 863.243

7th 587.3724 642.5066 620.6930 602.9371 865.649

13th 628.3850 685.1409 635.1909 619.5048 867.489

19th 654.3850 708.7728 733.2443 642.1721 871.163

25th 702.4777 720.5542 773.4389 686.4944 883.513

Avg 619.5612 673.7939 672.7416 621.1317 868.876

SD 42.3437 41.1572 71.0482 40.8959 4.482

SR 0% 0% 0% 0% 0%

22

1st 500.0000 500.0000 500.0000 500.0000 550.801

7th 500.0000 500.0812 500.0000 500.0000 557.383

13th 500.0081 500.1611 500.0072 500.0094 560.569

19th 500.0912 501.2285 500.0728 500.0866 564.529

25th 500.2210 502.5235 500.8079 500.1660 568.612

Avg 500.08443 501.1073 500.2055 500.0658 560.973

SD 0.1108 1.0678 0.4016 0.1050 4.617

SR 0% 0% 0% 0% 0%

23

1st 553.5043 608.7612 539.3434 567.5573 867.525

7th 587.1616 621.5201 557.5559 618.8724 872.371

13th 613.3389 639.0388 589.5459 635.5267 875.118

19th 700.2539 689.5573 626.0184 665.7802 877.092

25th 742.4490 773.0118 886.8944 691.5946 883.600

Avg 630.9312 672.8516 621.5295 643.3877 874.996

SD 56.4911 61.8820 109.8517 44.1934 3.552

SR 0% 0% 0% 0% 0%

24

1st 200.1208 202.9346 200.0057 201.6099 242.567

7th 203.1023 212.5192 201.2007 208.0116 248.551

13th 207.8814 218.7320 202.2416 221.4537 250.417

19th 213.4412 224.6322 204.1864 234.6506 252.526

25th 400.0000 466.7289 206.8495 339.2744 256.223


Table 6 continued

F OB-SEOBDE BDE2 SDE2 SBDE2 Guided-DE

Avg 229.8914 279.4537 202.5502 247.3866 250.163

SD 65.0845 124.9821 2.2234 68.5517 3.222

SR 0% 0% 0% 0% 0%

25

1st 201.0048 205.0939 202.1478 201.0441 250.691

7th 205.3717 213.2079 208.296 205.2231 251.846

13th 212.0711 219.9711 217.1156 212.4859 252.521

19th 225.3719 250.8567 236.4612 230.9312 253.214

25th 327.9073 350.1024 365.7446 325.1075 256.565

Avg 222.1447 249.4860 239.55304 227.84348 252.770

SD 52.8059 57.8298 67.5495 54.1050 1.361

SR 0% 0% 0% 0% 0%

Table 7 Wilcoxon test applied over the obtained results of Table 6 in terms of the best run (1st run), average of all runs (Avg) and success rate

(SR) for 30 variables

Comparison 1st run Avg SR

R+ R- p value R+ R- p value R+ R- p value

Guided-DE–OB-SEOBDE 311.5 13.5 0.000 283 42 0.001 95 230 0.026

BDE2–OB-SEOBDE 285.5 39.5 0.000 307 18 0.000 115.5 209.5 0.066

SDE2–OB-SEOBDE 252.5 72.5 0.008 265 60 0.004 173.5 151.5 1.000

SBDE2–OB-SEOBDE 173.5 51.5 0.003 278 47 0.002 126.5 198.5 0.109

Table 8 A comparison of the OB-SEOBDE with the results reported in Ahandani et al. (2010) for 50 variables

F OB-SEOBDE BDE2 SDE2 SBDE2

1

1st 0 2.5244e–028 0 1.2622e–029

7th 2.051e–028 4.6701e–028 0 5.0487e–029

13th 4.039e–028 7.6993e–028 5.0487e–029 2.5244e–028

19th 6.5633e–028 1.1612e–027 3.2622e–029 4.039e–028

25th 1.4641e–027 2.7894e–027 2.5244e–028 1.1107e–027

Avg 4.5868e–028 8.7582e–028 8.709e–029 3.1819e–028

Std 3.5398e–028 5.4428e–028 1.339e–028 3.1159e–028

SR 100% 100% 100% 100%

2

1st 3.3271e–027 4.6014e–027 3.1381e–028 3.1125e–027

7th 6.3109e–027 7.2189e–027 1.3251e–027 4.1232e–027

13th 9.8306e–027 1.0502e–026 1.7262e–027 5.8467e–027

19th 1.5614e–026 1.4453e–026 2.1481e–027 7.2694e–027

25th 2.0994e–025 3.7084e–026 4.4176e–027 1.1879e–026

Avg 2.1933e–026 1.3046e–026 1.8950e–027 6.0025e–027

Std 4.1779e–026 9.0369e–027 9.6816e–028 2.7723e–027

SR 100% 100% 100% 100%

3

1st 1.2498e+006 2.1424e+007 5.2104e+06 2.6318e+06

7th 3.3175e+007 7.3142e+007 3.05841e+007 2.1748e+007


Table 8 continued

F OB-SEOBDE BDE2 SDE2 SBDE2

13th 5.7749e+008 1.8254e+009 4.9214e+008 3.5228e+009

19th 8.4187e+008 4.9215e+009 4.1827e+009 5.6117e+009

25th 3.1038e+009 8.6217e+009 8.3207e+009 8.319e+009

Avg 6.9917e+008 4.0253e+009 4.2825e+009 3.6128e+009

Std 5.5550e+008 4.8719e+009 4.9930e+009 5.934e+009

SR 0% 0% 0% 0%

4

1st 0.007192 57.175 1.146e-007 0.17815

7th 0.074442 97.81 0.00013438 2.3683

13th 0.40182 351.12 0.0010104 35.03

19th 1.813 558.49 0.0046718 366.78

25th 6.2445 1163.7 0.024412 508.72

Avg 1.151 435.08 0.0044463 3458.2

Std 1.6067 377.84 0.0073684 1080.6

SR 0% 0% 12% 0%

5

1st 154.4184 18100 16429 8795

7th 958.2955 21499 20194 11076

13th 1919.3028 23308 23010 13521

19th 4951.9688 25031 24783 14341

25th 15967.7129 32206 28329 19809

Avg 3492.3308 23469 23549 14653

Std 3845.7337 4683.1 3296.7 3062.1

SR 0% 0% 0% 0%

6

1st 2.2658e–025 1.9942e–025 1.639e–025 3.8133e–025

7th 1.2295e–024 6.4291e–025 3.1729e–025 6.52e–025

13th 2.873e–024 2.3965e–024 4.9086e–025 1.4118e–024

19th 8.4827e–024 3.9592e–023 7.9845e–025 3.9866

25th 3.9866 3.9866 3.9866 3.9866

Avg 0.79732 1.196 0.79732 2.1011

Std 1.6275 1.9257 1.6809 1.9933

SR 80% 76% 84% 60%

7

1st 3.7157e–005 91.461 269.417 0.018

7th 0.0321 121.017 365.661 4.960

13th 0.0817 176.926 5,655.723 6.523

19th 0.3455 182.782 629.715 9.229

25th 0.4806 227.048 939.808 16.173

Avg 0.1967 169.120 600.152 9.017

Std 0.1854 70.531 218.26 6.037

SR 20% 0% 0% 0%

8

1st 20.7014 21.009 20.829 20.912

7th 20.9103 21.106 21.014 21.088

13th 21.1160 21.121 21.109 21.119

19th 21.1254 21.172 21.123 21.168

25th 21.2014 21.221 21.189 21.207


Table 8 continued

F OB-SEOBDE BDE2 SDE2 SBDE2

Avg 21.0677 21.145 21.120 21.164

Std 0.1388 0.06077 0.0215 0.0314

SR 0% 0% 0% 0%

9

1st 34.8235 92.31 31.788 66.561

7th 52.7328 112.41 63.677 94.521

13th 58.7025 132.33 84.820 108.48

19th 69.6470 170.11 103.48 119.05

25th 95.5159 194.02 132.34 135.36

Avg 60.4934 152.22 79.099 107.06

Std 13.9501 31.811 31.936 17.748

SR 0% 0% 0% 0%

10

1st 82.5563 174.02 91.41 142.18

7th 111.4301 219.85 114.47 166.16

13th 134.3191 232.82 158.2 200.98

19th 152.2081 253.71 175.06 227.82

25th 201.9759 293.61 197 252.18

Avg 135.6279 236.2 154.81 198.79

Std 24.9865 39.934 42.352 39.128

SR 0% 0% 0% 0%

11

1st 34.9124 31.1350 47.6459 36.5600

7th 43.0279 41.6300 53.2561 42.6851

13th 50.0627 46.1078 58.8258 47.1627

19th 56.9028 49.6871 60.5207 52.7034

25th 60.2904 53.0820 64.2020 56.6207

Avg 50.6618 45.1217 59.5579 45.3162

Std 6.5476 9.7844 7.4571 9.4852

SR 0% 0% 0% 0%

12

1st 0.031176 1.5570e+005 1.1250e-006 40,171

7th 12.0872 4.0184e+005 0.0058 1.0525e+005

13th 328.1047 6.5205e+005 0.0375 2.1104e+005

19th 10,960 7.1822e+005 0.4749 2.7349e+005

25th 81,642 8.7402e+005 4.1647 3.8604e+005

Avg 20,311 5.8021e+005 0.49284 1.992e+005

Std 62,072 1.3941e?005 1.14172 99,800

SR 0% 0% 40% 0%

13

1st 2.1013 7.722 1.5037 4.3988

7th 3.4125 11.241 3.1887 5.6706

13th 4.3003 15.256 4.0646 6.5115

19th 4.7357 18.825 4.9627 7.2551

25th 6.7217 20.586 7.7219 8.129

Avg 4.1417 15.3472 4.7683 6.573

Std 1.0645 4.011 1.8919 1.2249

SR 0% 0% 0% 0%


Table 8 continued

F OB-SEOBDE BDE2 SDE2 SBDE2

14

1st 21.745 21.499 21.402 21.814

7th 22.267 22.157 21.637 22.139

13th 22.409 22.324 21.742 22.307

19th 22.611 22.521 21.824 22.546

25th 22.758 22.702 22.108 22.887

Avg 22.428 22.332 21.700 22.471

Std 0.2429 0.921 0.125 0.501

SR 0% 0% 0% 0%

15

1st 216.661 248.091 191.532 203.535

7th 265.227 314.992 236.091 251.199

13th 400.000 375.012 312.752 319.897

19th 400.000 404.494 375.416 371.191

25th 500.000 471.417 500.000 410.525

Avg 355.451 359.757 314.12 313.262

Std 73.702 93.039 97.888 74.860

SR 0% 0% 0% 0%

16

1st 47.6572 89.2525 85.7265 79.1045

7th 79.2944 120.1371 141.0857 109.5117

13th 123.5570 176.3244 213.1325 142.9316

19th 400.000 205.2484 264.5471 185.1913

25th 400.000 501.0105 500.0000 247.1839

Avg 168.2884 210.3252 275.0531 175.5309

Std 154.9750 183.2781 190.3011 160.4336

SR 0% 0% 0% 0%

17

1st 64.2291 112.2511 99.9632 85.0741

7th 79.8824 202.3107 181.1128 128.7871

13th 109.6147 235.8372 218.4971 155.4413

19th 236.6845 264.4430 251.4301 211.5576

25th 412.0214 501.5862 524.3217 420.3908

Avg 174.0152 253.2856 227.0086 186.8582

Std 134.7250 146.6668 161.8550 132.6859

SR 0% 0% 0% 0%

18

1st 775.3187 828.8015 809.5542 816.8407

7th 820.3135 840.0714 818.3673 826.1269

13th 830.0716 857.6108 828.1547 845.5234

19th 839.6455 862.8715 835.1807 852.1402

25th 885.0004 982.1534 848.1819 871.8631

Avg 825.9200 872.9411 830.0931 842.3385

Std 15.6037 75.1962 17.1256 27.5154

SR 0% 0% 0% 0%

19

1st 759.7364 824.8937 807.0158 815.5812

7th 815.849 839.1949 819.7194 824.5682


Table 8 continued

F OB-SEOBDE BDE2 SDE2 SBDE2

13th 827.3298 846.8178 826.4296 831.5799

19th 837.4564 858.7525 838.1690 838.01475

25th 923.4938 876.2723 849.5272 851.2634

Avg 827.8923 852.1862 829.7722 835.5764

Std 37.1305 19.9362 15.4446 13.6032

SR 0% 0% 0% 0%

20

1st 691.2085 822.5528 813.2102 814.1063

7th 818.4284 833.4069 824.5445 829.2518

13th 827.4300 842.01824 836.8216 842.1520

19th 833.2078 851.5638 849.5234 849.1725

25th 857.1437 866.6390 896.0999 1,055.0000

Avg 823.8865 842.8662 846.2849 859.4527

Std 30.9193 17.0701 26.4349 46.0739

SR 0% 0% 0% 0%

21

1st 534.7437 559.2351 572.9342 568.1000

7th 560.2876 568.2886 591.2531 589.1209

13th 582.3520 577.0805 608.3084 605.0933

19th 610.3835 594.1862 615.2314 624.6041

25th 651.8365 614.0174 622.4916 645.6796

Avg 596.0950 583.1976 600.2414 607.3692

Std 28.5720 23.3076 19.0997 34.7668

SR 0% 0% 0% 0%

22

1st 500.0000 500.0000 500.0000 500.0000

7th 500.0000 500.1107 500.0000 500.0000

13th 500.0140 500.4395 500.0372 500.0082

19th 500.0771 500.8737 500.1827 500.0511

25th 500.7239 501.2271 500.6119 500.1423

Avg 500.0878 500.5810 500.1418 500.0312

Std 0.2903 0.8370 0.3241 0.0585

SR 0% 0% 0% 0%

23

1st 554.1794 571.9187 565.0694 558.5649

7th 571.9907 604.2135 608.0819 582.3128

13th 593.5107 619.9710 624.2178 595.6842

19th 626.3521 625.5682 642.4327 603.3455

25th 657.0389 645.7019 662.6954 611.4208

Avg 601.5780 615.0344 623.0161 578.7269

Std 31.0642 42.2450 36.4146 19.3516

SR 0% 0% 0% 0%

24

1st 205.3700 211.5406 207.8617 210.1275

7th 215.0700 234.1469 223.1672 229.8107

13th 236.6101 248.4800 237.1524 256.49

19th 249.1028 310.4365 265.1287 292.8296

25th 268.2130 1,245.0724 1,158.1048 929.67542


7 Conclusions and future works

The DE is a simple and effective EA for global optimization. It has only three control parameters to be tuned, employs a simple differential operator to create new candidate solutions and a greedy one-to-one competition scheme to select new candidates, and its compact structure makes it easy to implement. Besides these benefits, the DE has some challenging drawbacks: stagnation or premature convergence, an inability to accurately zoom in on the optimal solution, a limited number and diversity of search moves, a greedy acceptance criterion, poor performance in noisy environments, the need for multiple runs to tune parameters, and the problem dependence of the best control parameters. In an attempt, on one hand, to accelerate the classic DE and, on the other hand, to compensate for its limited number and diversity of search moves, in this work we combined the OBL strategy with the SDE.
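To make the baseline concrete, the following is a minimal sketch of the classic DE/rand/1/bin scheme with its three control parameters (population size, scale factor F and crossover rate CR) and the greedy one-to-one selection. It illustrates the generic DE only, not the specific mutation and crossover equations (Eqs. (8) and (11)) or the shuffling scheme used in this paper; all names and parameter values below are illustrative assumptions.

```python
# Minimal sketch of the classic DE/rand/1/bin scheme (illustrative baseline only).
import numpy as np

def de_rand_1_bin(cost_fn, bounds, n_pop=50, f=0.5, cr=0.9, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = low.size
    pop = rng.uniform(low, high, size=(n_pop, dim))
    costs = np.array([cost_fn(x) for x in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            # Differential mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 != i.
            r1, r2, r3 = rng.choice([k for k in range(n_pop) if k != i], 3, replace=False)
            v = pop[r1] + f * (pop[r2] - pop[r3])
            # Binomial crossover with at least one component taken from v.
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True
            u = np.clip(np.where(mask, v, pop[i]), low, high)
            # Greedy one-to-one selection.
            cu = cost_fn(u)
            if cu <= costs[i]:
                pop[i], costs[i] = u, cu
    best = int(np.argmin(costs))
    return pop[best], costs[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = de_rand_1_bin(sphere, bounds=([-5.0] * 10, [5.0] * 10))
    print(x_best, f_best)
```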

In the SDE, the population is divided into several memeplexes and each memeplex is improved by the DE. The OBL, by comparing the fitness of an individual to its opposite and retaining the fitter one in the population, accelerates the search process. The emphasis of this paper was to demonstrate how the OBL strategy can improve the performance of the SDE. Four versions of the DE algorithm were proposed. All of them used the opposition-based population initialization to obtain fitter initial individuals; they differed in how the opposition-based generation jumping was applied.
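As a concrete illustration of the shared initialization step, the sketch below generates a random population in [a, b], forms the opposite population a + b − x, evaluates both, and retains the Npop fittest individuals. The bounds, cost function and function name are hypothetical placeholders, not the paper's implementation.

```python
# Sketch of opposition-based population initialization: keep the fittest Npop
# members of the union {random population} U {its opposite population}.
import numpy as np

def obl_init(cost_fn, low, high, n_pop, seed=0):
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    pop = rng.uniform(low, high, size=(n_pop, low.size))
    opop = low + high - pop                      # opposite points: a + b - x
    union = np.vstack([pop, opop])
    costs = np.array([cost_fn(x) for x in union])
    best = np.argsort(costs)[:n_pop]             # keep the Npop fittest
    return union[best], costs[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    pop, costs = obl_init(sphere, low=[-5.0] * 10, high=[5.0] * 10, n_pop=100)
```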

Experiments were performed on the 25 benchmark functions designed for the CEC2005 special session on real-parameter optimization. A non-parametric analysis of the obtained results using the Wilcoxon signed-ranks test showed that the proposed algorithms, despite their simplicity, performed remarkably well over a wide and varied set of test problems. The fourth version of the proposed DE differed significantly from the SDE in terms of all three considered aspects. Also, the proposed algorithms obtained, for the first time, some successful runs on a set of functions previously reported as unsolved. In the later part of the comparative experiments, performance comparisons of the proposed algorithm with some modern DE algorithms reported in the literature confirmed a significantly better performance of our proposed algorithm, especially on high-dimensional functions.

Although our proposed algorithms obtained a better or at least comparable performance compared with other proposed approaches, they have many parameters to set,

Table 8 continued

F OB-SEOBDE BDE2 SDE2 SBDE2

Avg 235.7661 564.9503 454.38 490.0069

Std 18.8645 482.5603 499.2357 440.3765

SR 0% 0% 0% 0%

25

1st 204.9172 219.1167 210.2215 209.4638

7th 217.4204 237.0632 221.0362 235.1028

13th 234.6802 247.2539 244.6829 251.3718

19th 254.3583 344.1182 282.3716 275.1924

25th 1,304.4088 1,463.2813 1,342.1932 1,241.6122

Avg 343.9790 616.7214 515.2516 508.3351

Std 338.1631 562.3119 527.1362 501.4438

SR 0% 0% 0% 0%

Table 9 Wilcoxon test applied over the obtained results of Table 8 in terms of the best run (1st run), average of all runs (Avg) and success rate

(SR) for 50 variables

Comparison 1st run Avg SR

R+ R- p value R+ R- p value R+ R- p value

BDE2–OB-SEOBDE 308.5 16.5 0.000 301 24 0.000 138 187 0.180

SDE2–OB-SEOBDE 261.5 63.5 0.005 263.5 61.5 0.007 185.5 139.5 0.465

SBDE2–OB-SEOBDE 307.5 17.5 0.000 278 47 0.002 138 187 0.157


which requires several pre-runs to find the best combination. These parameters stem from the original DE, the partitioning concept and the opposition-based strategy. For future work, it might be interesting to employ adaptive or self-adaptive parameter tuning following the ideas presented in Liu and Lampinen (2005), Brest et al. (2006) and Qin et al. (2009). We considered a fixed value for the MaxSize parameter; it might also be interesting to limit the value of MaxSize as the algorithm moves toward its final iterations. In addition, the performance of the opposition-based SDE variants may be further improved by employing other mutation and crossover strategies and by experimenting with different population sizes.

References

Ahandani MA, Shirjoposht NP, Banimahd R (2010) Three modified

versions of differential evolution algorithm for continuous

optimization. Soft Comput 15:803–830

Rashid M, Baig AR (2010) Improved opposition-based PSO for feedforward neural network training. In: International Conference on Information Science and Applications (ICISA 2010), Seoul, pp 1–6

Balamurugan R, Subramanian S (2009) Emission-constrained

dynamic economic dispatch using opposition-based self-adaptive

differential evolution algorithm. Int Energy J 10:267–277

Becker W, Yu X, Tu J (2005) EvLib: a parameterless self-adaptive

real-valued optimisation library. In: The 2005 IEEE congress on

evolutionary computation CEC2005

Bhattacharya A, Chattopadhyay P (2010) Solution of economic power

dispatch problems using oppositional biogeography-based opti-

mization. Electr Power Compon Syst 38:1139–1160

Boskovic B, Brest J, Zamuda A, Greiner S, Zumer V (2011) History

mechanism supported differential evolution for chess evaluation

function tuning. Soft Comput 15:667–682

Brest J, Greiner S, Boskovic B, Mernik M, Zumer V (2006) Self-

adapting control parameters in differential evolution: a compar-

ative study on numerical benchmark problems. IEEE Trans Evol

Comput 10:646–657

Brest J, Boskovic B, Greiner S, Zumer V, Maucec MS (2007)

Performance comparison of self-adaptive and adaptive differen-

tial evolution algorithms. Soft Comput 11:617–629

Bui LT, Shan Y, Qi F, Abbass HA (2005) Comparing two versions of

differential evolution in real parameter optimization. In: The

2005 IEEE congress on evolutionary computation CEC2005

Caponio A, Neri F, Tirronen V (2009) Super-fit control adaptation in

memetic differential evolution frameworks. Soft Comput 13:811–

831

Dorigo M, Gambardella LM (1997) Ant colony system: a cooper-

ative learning approach to the traveling salesman problem. IEEE

Trans Evol Comput 1(1):53–66

Ergezer M, Simon D, Du D (2009) Oppositional biogeography-based

optimization. In: IEEE Conference on Systems, Man, and

Cybernetics. San Antonio, Texas, pp 1035–1040

Eusuff MM, Lansey KE (2003) Optimization of water distribution

network design using the shuffled frog leaping algorithm.

J Water Resour Plan Manag 129:210–225

Feoktistov V (2006) Differential evolution: in search of solutions. In:

Optimization and its applications, vol 5. Springer, New York

Garcia S, Molina D, Lozano M, Herrera F (2009) A study on the use

of non-parametric tests for analyzing the evolutionary algo-

rithms’ behaviour: a case study on the CEC’2005 special session

on real parameter optimization. J Heuristics 15:617–644

Han L, He X (2007) A novel opposition-based particle swarm

optimization for noisy problems. In: Proceedings of the third

international conference on natural computation, vol 3. IEEE

press, pp 624–629

Hansen N (2005) Compilation of results on the CEC benchmark

function set. http://www.ntu.edu.sg/home/epnsugan/index_files/

CEC05/compareresults.pdf

Holland J (1975) Adaptation in natural and artificial systems. The

University of Michigan Press, Ann Arbor

Kennedy J, Eberhart RC (1995) Particle swarm optimization. In:

Proceedings of the IEEE International Conference on Neural

Networks, pp 1942–1948

Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by

simulated annealing. Science 220:671–680

Kofjac D, Kljajic M (2008) Application of genetic algorithms and

visual simulation in a real-case production optimization.

WSEAS Trans Syst Control 3:992–1001

Liu J, Lampinen J (2005) A fuzzy adaptive differential evolution

algorithm. Soft Comput 9:448–469

Liu B, Wang L, Jin Y-H, Huang D-X, Tang F (2007) Control and

synchronization of chaotic systems by differential evolution

algorithm. Chaos Soliton Fract 34:412–419

Malisia AR, Tizhoosh HR (2007) Applying opposition-based ideas to

the ant colony system. In: Proceedings of the IEEE symposium

on foundations of computational intelligence (SIS 2007). Hono-

lulu, Hawaii, pp 182–189

Moscato P (1989) On evolution, search, optimization, genetic

algorithms and martial arts: toward memetic algorithm. Techni-

cal Report Caltech Concurrent Computation Program: Report 26,

California Institute of Technology

Neri F, Tirronen V (2009) Scale factor local search in differential

evolution. Memet Comp 1:153–171

Neri F, Tirronen V (2010) Recent advances in differential evolution: a

review and experimental analysis. Artif Intell Rev 33:61–106

Omran MGH (2009) Using opposition-based learning with particle

swarm optimization and barebones differential evolution. In:

Lazinica A (ed) Particle swarm optimization. In Tech, pp 373–

384

Perez-Bellido AM, Salcedo-Sanz S, Ortiz-Garcia EG, Portilla-

Figueras JA, Lopez-Ferreras F (2008) A comparison of memetic

algorithms for the spread spectrum radar polyphase codes design

problem. Eng Appl Artif Intel 21:1233–1238

Plagianakos VP, Tasoulis DK, Vrahatis MN (2008) A review of major

application areas of differential evolution. In: Chakraborty UK

(ed) Advances in differential evolution of studies in computa-

tional intelligence, vol 143. Springer, Berlin, pp 197–238

Qin AK, Suganthan PN (2005) Self-adaptive differential evolution

algorithm for numerical optimization. In: The 2005 IEEE

congress on evolutionary computation CEC2005, vol 13,

pp 1785–1791

Qin AK, Huang VL, Suganthan PN (2009) Differential evolution

algorithm with strategy adaptation for global numerical optimi-

zation. IEEE Trans Evol Comput 13:398–417

Rahnamayan S, Wang GG (2008) Investigating in scalability of

opposition-based differential evolution. WSEAS Trans Comput

7:1792–1804

Rahnamayan S, Tizhoosh HR, Salama MMA (2006) Opposition

versus randomness in soft computing techniques. Appl Soft Comput 8:906–918

Rahnamayan S, Tizhoosh HR, Salama MMA (2008) Opposition-

based differential evolution. IEEE Trans Evol Comput 12:64–79


Shokri M, Tizhoosh HR, Kamel M (2006) Opposition-based Q(λ)

algorithm. In: Proceedings of IEEE World Congress on Com-

putational Intelligence. Vancouver, BC, Canada, pp 646–653

Storn R, Price K (1997) Differential evolution—a simple and efficient

heuristic for global optimization over continuous spaces.

J Global Optim 11:341–359

Subudhi B, Jena D (2009) Nonlinear system identification using

opposition-based learning differential evolution and neural

network techniques. IEEE J Intell Cybern Syst 5:1–13

Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A,

Tiwari S (2005) Problem definitions and evaluation criteria for

the CEC 2005 special session on real-parameter optimization.

Technical Report Report #2005005, Nanyang Technological

University, Singapore and IIT Kanpur, India. http://www.ntu.

edu.sg/home/EPNSugan/

Teng NS, Teo J, Hijazi MHA (2009) Self-adaptive population sizing

for a tune-free differential evolution. Soft Comput 13:709–724

Tizhoosh HR (2005) Opposition-based learning: a new scheme for

machine intelligence. In: Proc Int Conf Comput Intell Modeling

Control and Autom, vol 1. Vienna, Austria, pp 695–701

Tizhoosh HR (2006) Opposition-based reinforcement learning. J Adv

Comput Intell Intell Inf 10:578–585

Ventresca M, Tizhoosh HR (2006) Improving the convergence of

backpropagation by opposite transfer functions. In: Proceedings

of IEEE World Congress on Computational Intelligence.

Vancouver, BC, Canada, pp 9527–9534

Ventresca M, Tizhoosh HR (2007) Simulated annealing with opposite

neighbors. In: Proceedings of the IEEE Symposium on Founda-

tions of Computational Intelligence (SIS 2007). Honolulu,

Hawaii, pp 186–192

Wang H, Liu Y, Zeng S, Li C (2007) Opposition-based particle

swarm algorithm with Cauchy mutation. In: Proceedings of the

IEEE Congress on Evolutionary Computation, pp 4750–4756
