
FOCUS

K. Deb

A population-based algorithm-generator for real-parameter optimization

Published online: 7 June 2004
© Springer-Verlag 2004

Abstract In this paper, we propose a population-based, four-step, real-parameter optimization algorithm-generator. The approach divides the task of reaching near the optimum solution into four independent plans of (i) selecting good solutions from a solution base, (ii) generating new solutions using the selected solutions, (iii) choosing inferior or spurious solutions for replacement, and (iv) updating the solution base with good new or old solutions. Interestingly, many classical and evolutionary optimization algorithms are found to be representable by this algorithm-generator. The paper also recommends an efficient optimization algorithm with the possibility of using a number of different recombination plans and parameter values. With a systematic parametric study, the paper finally recommends a real-parameter optimization algorithm which outperforms a number of existing classical and evolutionary algorithms. To extend this study, the proposed algorithm-generator can be utilized to develop new and more efficient population-based optimization algorithms. The identical treatment of population-based classical and evolutionary optimization algorithms through the proposed algorithm-generator is the main hallmark of this paper and should enable researchers from both classical and evolutionary fields to understand each other's methods better and interact in a more coherent manner.

Keywords Real-parameter optimization · Evolutionary optimization · Multi-point search · Algorithm-generator

1 Introduction

Over the years, many real-parameter optimization algorithms have been developed using point-by-point (Rao 1984; Reklaitis et al. 1983) as well as multi-point approaches (Goldberg 1989; Holland 1975; Fogel 1995; Schwefel 1995). While a point-by-point approach begins with one guess solution and updates the solution iteratively in the hope of reaching near the optimum solution, a multi-point approach deals with a number of solutions in each iteration. Starting with a number of guess solutions, the multi-point algorithm updates one or more solutions in a synergistic manner in the hope of steering the population towards the optimum. In this paper, we focus our attention on multi-point optimization algorithms, which we shall refer to here as population-based optimization algorithms. Although there exist a few classical population-based optimization algorithms, almost all evolutionary optimization algorithms are population-based algorithms.

In the remainder of this paper, we suggest a generic population-based algorithm-generator for real-parameter optimization. The main advantage of the proposed four-step algorithm-generator is the functional decomposition of four important tasks needed in real-parameter optimization. The tasks (we refer to them as plans) are independent of each other, thereby enabling a user to study the effect of each plan on the complete algorithm; this should make it easier to develop efficient population-based optimization algorithms. We then demonstrate how the proposed algorithm-generator can be used to represent a number of classical and evolutionary algorithms in a functionally decomposed manner.

One of the important tasks in an optimization algorithm is the generation of new solutions from existing solutions. In this paper, we review a number of commonly-used evolutionary generational plans and suggest an efficient operator for this purpose. With a recently suggested efficient optimization algorithm (the generalized generation gap (G3) model (Deb et al. 2002)) and an efficient parent-centric recombination operator used as a generational plan, we then perform a detailed study to identify optimal parameter values for the G3 model. Finally, we compare the performance of the resulting well-parameterized optimization algorithm (G3 model) with a

Soft Comput (2004)
DOI 10.1007/s00500-004-0377-4

K. Deb
Kanpur Genetic Algorithms Laboratory (KanGAL),
Indian Institute of Technology Kanpur,
Kanpur 208 016, India
E-mail: [email protected], http://www.iitk.ac.in/kangal/pub.htm


number of existing classical and evolutionary algorithms. In almost all cases, the G3 model with a couple of versions of the parent-centric recombination operator emerged as the winner. Besides the suggestion of a successful algorithm, the other main contribution of this paper is the proposed algorithm-generator, which is generic and can be used to systematically develop other, more efficient population-based optimization algorithms. The paper also demonstrates a case study of parameterizing and evaluating such an algorithm.

2 Population-based optimization algorithms

Optimization algorithms can be classified into two main categories: (i) gradient-based methods, in which first-order (and sometimes second-order) derivative vectors are used to drive the search process, and (ii) direct search methods, in which only objective function values are used to drive the search. Although gradients can be computed numerically or automatically, each approach has its own merits and demerits (Rao 1984; Reklaitis et al. 1983). Since a numerical gradient computation involves more than one function evaluation in the vicinity of a solution, most gradient-based techniques use a point-by-point approach, in which one solution (or point) is updated to a new and hopefully better solution. Without the use of any gradient information, most direct search methods explicitly employ more than one solution in each iteration. The presence of multiple solutions is then exploited to find better search regions. It is these multi-point approaches which are of interest in this paper. We call these algorithms population-based approaches, as they require a population of solutions in one iteration, instead of just one solution. Some of the population-based classical optimization algorithms are the Hooke-Jeeves pattern search method (Reklaitis et al. 1983), Nelder and Mead's simplex search method (Nelder and Mead 1965), Box's complex search method (Box 1965), and a plethora of adaptive random search methods discussed in texts (Deb 1995; Reklaitis et al. 1983; Rao 1984). Among the non-traditional optimization methods, evolutionary search algorithms (including genetic algorithms (GAs), evolution strategies (ESs), evolutionary programming (EP), and differential evolution (DE)) are direct population-based approaches (Holland 1975; Goldberg 1989; Rechenberg 1965; Storn and Price 1997; Fogel et al. 1995).

In the following, we outline some of the advantages of using a population-based optimization algorithm:

1. In principle, gradient-like information can be implicitly obtained by explicitly processing multiple solutions. This would be beneficial in problems in which gradient computations may be difficult or erroneous with a numerical scheme.

2. A population of solutions provides information about a good search region, instead of only a good point, in the search space.

3. In addition to representing a potentially good search region, the variance among the population members provides good information about the extent of the potential search region. If new solutions are created in proportion to the variance of the existing population of points, a self-adaptive search procedure can be developed. At the start of such an algorithm, with a randomly-picked initial population of solutions, the variance among the population members is expected to be large, thereby ensuring a thorough search of the entire space. During later iterations, when the population of points has converged near the optimum, the variance among the population members is expected to be small, thereby ensuring a focused search near the optimum. Without any external guidance, such a population-based algorithm can widen or narrow down its search adaptively.

4. Each iteration of such a population-based algorithm can use a different form of the underlying objective function. Since a comparison scheme would usually be followed to establish a hierarchy of population members, a rough estimate of the quality of the solutions is enough to drive the search. For highly computationally intensive objective function evaluation procedures, and for problems where a clear mathematical or procedural objective function evaluation is not possible (such as when the aesthetics of a design is an objective to be maximized), such a comparative evaluation procedure is certainly beneficial. It is not clear how a similar task can be achieved with a point-by-point optimization approach, in which there exists only one solution in each iteration.

5. There is always safety in numbers. The presence of multiple solutions in a search process allows diversity to be maintained. Although this can cause a computational overhead in simpler problem-solving tasks, it can be useful in avoiding the premature local convergence behavior common to many point-by-point optimization algorithms. Even in harder problems, the processing of multiple solutions in one iteration may constitute a parallel search, by sharing the discovered good sub-solutions among different population members.

6. The presence of multiple solutions can be beneficial in handling constrained optimization problems. Although some point-by-point approaches can only be started with a feasible solution, most population-based approaches can work with some feasible and some infeasible solutions, and some approaches can work with infeasible solutions alone. A careful hierarchy of infeasible and feasible solutions can be maintained in a population to steer the search towards potentially feasible regions. One such population-based constraint-handling approach, developed elsewhere (Deb 2001), does not require any penalty parameter while establishing the hierarchy among feasible and infeasible solutions.


7. Finally, there exist a number of other optimization tasks, such as multi-modal and multi-objective optimization, in which the task is to find multiple optimal solutions. For handling these problems, a point-by-point approach has no option but to apply the algorithm again and again to find multiple different solutions. On the other hand, a population-based approach can be adapted to find and capture multiple optimal solutions simultaneously in a single simulation run (Deb 2001; Goldberg and Richardson 1987). It is intuitive that the simultaneous discovery of multiple optimal solutions is achievable through an implicit parallel approach within a population-based algorithm, whereas in multiple applications of one algorithm, any property common to the multiple optimal solutions must be rediscovered in every application independently, a matter well addressed in a recent study (Deb 2003).
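The variance-driven self-adaptation described in item 3 above can be illustrated with a minimal sketch. The per-coordinate Gaussian sampling, the sphere objective, and the replace-worst update used here are illustrative assumptions, not choices prescribed by the paper:

```python
import random

def sphere(x):
    # Illustrative objective: minimize the sum of squares.
    return sum(v * v for v in x)

def mean_and_std(pop, dim):
    # Per-coordinate mean and standard deviation of the population.
    means = [sum(p[i] for p in pop) / len(pop) for i in range(dim)]
    stds = [(sum((p[i] - means[i]) ** 2 for p in pop) / len(pop)) ** 0.5
            for i in range(dim)]
    return means, stds

random.seed(0)
dim, n = 2, 20
pop = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]

for _ in range(200):
    means, stds = mean_and_std(pop, dim)
    # New solutions are sampled in proportion to the current variance:
    # a spread-out population searches widely, a converged one locally.
    child = [random.gauss(means[i], stds[i] + 1e-12) for i in range(dim)]
    worst = max(range(n), key=lambda i: sphere(pop[i]))
    if sphere(child) < sphere(pop[worst]):
        pop[worst] = child

best = min(pop, key=sphere)
```

As the population contracts around the optimum, the sampling standard deviation shrinks with it, giving the self-adaptive widening and narrowing described above without any external schedule.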

In the next section, we suggest a population-based algorithm-generator for real-parameter optimization.

3 A population-based algorithm-generator for optimization

As the name suggests, a population-based optimization algorithm begins its search with a population of guess solutions. Thereafter, in each iteration the population is updated by using a population-update algorithm. We assume that the algorithm at iteration t has a set of points B^(t) (with N = |B^(t)|). At the end of each iteration, this set is updated to a new set B^(t+1) by using four user-defined plans. For brevity, here we drop the superscript denoting the iteration counter from the sets.

Population-update-algorithm (SP, GP, RP, UP)

Step 1 Choose μ solutions (the set P) from B using a selection plan (SP).

Step 2 Create λ solutions (the set C) from P using a generation plan (GP).

Step 3 Choose r solutions (the set R) from B for replacement using a replacement plan (RP).

Step 4 Update these r members by r solutions chosen from a comparison set of R, P, and C using an update plan (UP).
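The four-step procedure above can be sketched as a single reusable driver. The concrete plans used here (truncation selection, Gaussian sampling around a parent, worst-r replacement, best-of-pool update) and the sphere objective are illustrative stand-ins, not the specific plans the paper later recommends:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def population_update(B, f, mu=3, lam=4, r=2):
    # Step 1 (SP): choose the mu best solutions of B as the parent set P.
    P = sorted(B, key=f)[:mu]
    # Step 2 (GP): create lam children around randomly chosen parents.
    C = []
    for _ in range(lam):
        parent = random.choice(P)
        C.append([v + random.gauss(0.0, 0.1) for v in parent])
    # Step 3 (RP): pick the r worst members of B for replacement.
    B_sorted = sorted(B, key=f)
    R, keep = B_sorted[-r:], B_sorted[:-r]
    # Step 4 (UP): refill the r slots with the best of R, P, and C.
    pool = sorted(R + P + C, key=f)[:r]
    return keep + pool

random.seed(1)
B = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(300):
    B = population_update(B, sphere)
best = min(B, key=sphere)
```

Swapping any one of the four plans for another yields a different algorithm, which is precisely what makes the procedure an algorithm-generator rather than a single algorithm.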

Since by choosing a plan for each step one develops a new optimization algorithm, we call the above procedure an algorithm-generator. Figure 1 shows a sketch of the proposed generic procedure. The first step chooses μ solutions from the solution bank B (all solutions in the top-left plot) for their use in creating new solutions in the second step. Thus, the selection plan (SP) for choosing these μ solutions must emphasize the better solutions of B. The chosen solutions (μ = 3 in the figure) are shown as dark-outlined points in the top-left plot and are also joined by lines. Clearly, there are two ways to choose a good SP. A set of μ solutions can be chosen either by directly emphasizing the better solutions in B or by de-emphasizing the worse solutions of B. The algorithm can be made somewhat 'greedy' by always including a copy of the 'best' solution of B in P.

In the second step, λ new solutions are created from the chosen set P by using a generation plan (GP). One convenient procedure would be to choose a probability distribution function (PDF) using the set P and create λ solutions from it. Figure 1 shows a typical probability distribution in which solutions around the members of P are emphasized. Members of the set C (λ = 4 in the figure) are shown as solid circles in the figure. This is the most crucial step of the proposed algorithm-generator. We review some existing generational plans and suggest a new one in Sect. 5. Although for generating each new solution a different PDF can be used, thereby requiring

Fig. 1 A sketch of one iteration of the proposed population-based optimization procedure. For some realizations, the SP and GP steps can be applied multiple times before proceeding to the RP and UP plans


the need to use SP-GP steps many times before proceeding further, for simplicity one PDF can be chosen to create all λ new solutions.

In the third step, r solutions are chosen from the solution bank B for replacement by the newly created solutions. Here, different replacement plans are possible. The RP can simply choose r solutions at random, or include some or all members of P to ensure diversity preservation. The RP can also pick a set of bad solutions from B to speed up the search. In the figure, the r (= 2 in the figure) worst solutions of B are chosen as members of R.

In the fourth step, the r chosen members are updated by r members chosen from R, P, and C by using an update plan (UP). It is intuitive that the UP must emphasize the better solutions of R ∪ P ∪ C in this step. However, the r slots can also be updated directly from members of C alone, or from a combined population R ∪ C or P ∪ C. In the former case, the population-update algorithm does not have the elite-preserving property. To truly ensure elite-preservation, the RP should choose the best solutions of B, and a combined P and C set needs to be considered in Step 4. In the figure, the best r solutions (joined by a line) from R ∪ P ∪ C are chosen, and the r solutions originally chosen in Step 3 are replaced by them.

Although the above decomposition principle is not the only possible way to describe an optimization algorithm-generator, it certainly provides a functional decomposition of the salient tasks needed in a good search and optimization algorithm. The selection plan emphasizes better solutions from a population, the generational plan uses these solutions to explore the search region around them and create new solutions, the replacement plan picks potentially bad solutions from the original population to be replaced, and finally the update plan completes the replacement task with the newly created solutions. Each plan controls a crucial property of the resulting algorithm. By properly choosing a strategy for each plan, the resulting optimization algorithm can be made greedy to mainly solve unimodal problems, can be given a global perspective for finding the global optimum, or can be made monotonically non-deteriorating so as to have sustained improvement in solution quality.

The procedure for using the above algorithm-generator to develop a new optimization algorithm is as follows:

1. Choose a suitable plan for each step (SP, GP, RP, and UP).

2. Perform a parametric study to discover optimal algorithm parameters, such as N, μ, λ, and r.

Of course, in order to develop an efficient optimization algorithm, the first task above must be carried out with some care. A few potential candidate plans for each of the four steps can first be chosen and then compared by applying each of the resulting algorithms to a set of carefully chosen test problems. Each such algorithm must be studied with its optimal parameter setting. We shall demonstrate one such study later in this paper. But first, we review a number of population-based classical algorithms and discuss how the above algorithm-generator can be used to describe each of them in a unified manner. Thereafter, we shall discuss the same for a number of real-parameter evolutionary algorithms.
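The two-step development procedure can be sketched as follows. The tiny four-plan optimizer being studied, the (μ, λ) grid, the iteration budget, and the averaging over seeds are all illustrative assumptions, not the parametric study the paper reports later:

```python
import itertools
import random

def sphere(x):
    return sum(v * v for v in x)

def run(mu, lam, iters=150, N=10, seed=0):
    # A tiny four-plan optimizer used only as the subject of the study.
    rng = random.Random(seed)
    B = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(N)]
    for _ in range(iters):
        P = sorted(B, key=sphere)[:mu]                       # SP
        C = [[v + rng.gauss(0.0, 0.2) for v in rng.choice(P)]
             for _ in range(lam)]                            # GP
        B = sorted(B, key=sphere)[:-lam] + \
            sorted(C + P, key=sphere)[:lam]                  # RP + UP
    return sphere(min(B, key=sphere))

# Step 2: a parametric study over (mu, lam), averaged over a few seeds.
results = {}
for mu, lam in itertools.product((1, 2, 3), (2, 4)):
    scores = [run(mu, lam, seed=s) for s in range(5)]
    results[(mu, lam)] = sum(scores) / len(scores)
best_params = min(results, key=results.get)
```

In practice, such a sweep would be repeated across several test problems before recommending a parameter setting, as the study later in the paper does for the G3 model.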

4 Classical methods

Although most classical optimization methods use a point-by-point approach, in which only one solution is used and updated in each iteration, there exist a number of population-based techniques. Without loss of generality, we discuss here a few algorithms used for minimizing objective functions.

4.1 Evolutionary optimization

Box's (1957) evolutionary optimization technique (not to be confused with the evolutionary algorithms described in the next section) used a structured set of solutions in each iteration. For n decision variables, the set B contains N = 2^n + 1 solutions: the current best and one at each corner of an orthogonal hyper-box formed using the coordinate directions. Figure 2 shows a set of (2^2 + 1) or 5 points for an n = 2-variable optimization problem. Based on the objective function values at all these corner points, a new hyper-box (hence a new set of solutions) is formed around the best solution (having the smallest objective function value). In the context of the proposed algorithm-generator, this method uses a selection plan (SP) in which only the best solution is chosen from B, thereby making μ = |P| = 1. The generation plan (GP) then creates λ = 2^n more solutions (the set C) systematically around this best solution. Figure 2 illustrates this procedure. In the third step, all members of B are chosen as R, and in the fourth step they are replaced by P ∪ C. Here are the four plans which constitute Box's evolutionary optimization algorithm in terms of our proposed algorithm-generator:

Fig. 2 Box's evolutionary optimization procedure. Solutions 1 to 5 are five members of the initial population

SP: Choose the best of the N = 2^n + 1 solutions and call it P. Thus, μ = |P| = 1.

GP: Create λ = 2^n solutions (the set C) systematically around the sole member of P (Box 1957; Deb 1995). Thus, |C| = 2^n.

RP: Set R = B.

UP: Set B := P ∪ C, ensuring |B| = N.
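A minimal sketch of these four plans follows. The sphere objective and the geometric box-shrinking schedule are illustrative assumptions; Box's method specifies the corner construction, not a particular objective or shrinking rule:

```python
import itertools
import random

def sphere(x):
    return sum(v * v for v in x)

def evop_iteration(B, f, delta):
    # SP: choose the single best solution of B (mu = 1).
    best = min(B, key=f)
    n = len(best)
    # GP: create lambda = 2^n corner points of a hyper-box around it.
    C = [[best[i] + s * delta for i, s in enumerate(signs)]
         for signs in itertools.product((-1.0, 1.0), repeat=n)]
    # RP: all of B is marked for replacement; UP: B := P union C.
    return [best] + C

random.seed(2)
n = 2
B = [[random.uniform(-5, 5) for _ in range(n)] for _ in range(2 ** n + 1)]
delta = 1.0
for _ in range(60):
    B = evop_iteration(B, sphere, delta)
    delta *= 0.9  # shrink the hyper-box (illustrative schedule)
best = min(B, key=sphere)
```

Because the best member is always carried over in P, the procedure is monotonically non-deteriorating in the sense discussed above.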

4.2 Simplex search method

Nelder and Mead's simplex search method (Nelder and Mead 1965) uses N = n + 1 solutions in the population B. Figure 3 shows a set of three points in the population (simplex) for an n = 2-variable problem. The worst simplex member is reflected about the centroid of the remaining n solutions. Thereafter, depending on the function-value comparison of this reflected solution with the best, next-best, and worst solutions in the population, a new solution is created. The new solution then replaces the worst solution. A few iterations of this simplex search method are illustrated in Fig. 3. The shaded region represents the first simplex. In terms of our algorithm-generator, the four plans for this algorithm are as follows:

SP: Set P = B (that is, all N = n + 1 population members of B are chosen as members of P).

GP: Create a reflected solution using the members of P. Based on objective function value comparisons of the members of P, a new solution C is created by reflection, expansion, or contraction, as the case may be (see Deb 1995; Nelder and Mead 1965 for details). Thus, |C| = 1.

RP: Pick the worst solution of B and include it in R. Thus, |R| = 1.

UP: Set B := (B ∪ C) \ R.

The complex search method (Box 1965; Deb 1995) has a similar search principle to the above and can also be written systematically in this way.
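As a sketch under strong simplifying assumptions, the following implements only the reflection move of the method in four-plan form; the expansion and contraction rules (and hence the method's full robustness) are deliberately omitted, and the reflected point is accepted only when it improves on the worst vertex:

```python
def sphere(x):
    return sum(v * v for v in x)

def reflect_worst(B, f):
    # SP: P = B; all n + 1 simplex vertices participate.
    B = sorted(B, key=f)
    worst = B[-1]
    n = len(worst)
    # Centroid of the n best vertices (everything except the worst).
    centroid = [sum(p[i] for p in B[:-1]) / n for i in range(n)]
    # GP: reflect the worst vertex through the centroid (|C| = 1).
    reflected = [2 * centroid[i] - worst[i] for i in range(n)]
    # RP picks the worst; UP: B := (B union {reflected}) \ {worst},
    # applied here only when the reflection improves on the worst.
    if f(reflected) < f(worst):
        return B[:-1] + [reflected]
    return B

simplex = [[5.0, 5.0], [6.0, 5.0], [5.0, 6.0]]
for _ in range(50):
    simplex = reflect_worst(simplex, sphere)
best = min(simplex, key=sphere)
```

Repeated reflections walk the simplex across the search space; without contraction, however, the simplex cannot shrink, which is why the full method needs all three moves.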

4.3 Adaptive random search methods

A number of early studies (Brooks 1958; Luus and Jaakola 1973; Price 1983) introduced and encouraged the use of random search methods in practice, particularly to give a search algorithm a global perspective. In a particular adaptive search method (Luus and Jaakola 1973), a set of N randomly chosen solutions is initialized and the best solution is chosen. Thereafter, a set of N new solutions is created in a hyper-box which is smaller in size than the previous hyper-box. Figure 4 shows N = 6 solutions in two consecutive iterations. The solutions marked with circles are created at random in the shaded region in the first iteration. The best solution in each iteration is marked with a solid symbol. The figure shows how the next-iteration hyper-box is reduced in size and placed centered around the best solution. In terms of our algorithm-generator, the four plans are as follows:

SP: Set P to the best solution of B. Thus, μ = |P| = 1.

GP: Create λ = N new random solutions C from a hyper-box centered around the sole member of P (see Deb 1995; Luus and Jaakola 1973 for details).

RP: Set R = B.

UP: Set B = C.

In the last step, all members of B are replaced withthe newly created solutions (C).
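These four plans can be sketched as follows. The contraction factor of 0.9, the iteration budget, and the sphere objective are illustrative assumptions; the method itself only specifies that each hyper-box is smaller than the last and centered on the current best:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

random.seed(3)
n, N = 2, 6
lo, hi = -5.0, 5.0
width = hi - lo
# Initial solution bank: N random points in the starting hyper-box.
B = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(N)]

for _ in range(50):
    # SP: keep only the best solution of B (mu = 1).
    best = min(B, key=sphere)
    # GP: sample N fresh solutions in a shrunken box around the best;
    # RP/UP: the whole bank is replaced (R = B, then B := C).
    width *= 0.9  # contraction factor is an illustrative assumption
    B = [[best[i] + random.uniform(-width / 2, width / 2) for i in range(n)]
         for _ in range(N)]

best = min(B, key=sphere)
```

Note that because B = C replaces the entire bank, the best solution itself is not guaranteed to survive an iteration; elite preservation would require including P in the update, as discussed in Sect. 3.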

There exist other, more complicated adaptive random search methods which use more sophisticated update rules. In a recent approach (the so-called shuffled complex evolution (SCE)) (Duan et al. 1993), a population-based combined simplex and random search method is suggested. The study reported mixed results on a number of test problems. By carefully understanding the tasks needed in each step of the proposed algorithm-generator, and by using the salient properties of classical and non-classical optimization methods, each step can be designed better. Such an endeavor should allow the systematic development of more interesting and efficient optimization algorithms.

Fig. 3 Nelder and Mead's simplex search method is illustrated. Solutions 1 to 3 are three members of the initial population

Fig. 4 Adaptive random search method is illustrated. Circles represent B(0) and squares represent B(1)


5 Evolutionary methods

Evolutionary algorithms (EAs) are stochastic search algorithms which use randomized search operators (Goldberg 1989; Holland 1975). For real-parameter optimization, different types of evolutionary algorithms have been suggested. Of them, real-parameter genetic algorithms, evolution strategies, differential evolution, and evolutionary programming are most popularly used. Before we discuss how some of these algorithms can be represented by the proposed algorithm-generator, we discuss an important matter related to EAs and the algorithm-generator.

To those familiar with EAs, the four steps of the proposed algorithm-generator may seem quite motivated by the working principles of an evolutionary algorithm. This is somewhat true, since the majority of population-based optimization algorithms in use today follow evolutionary principles. However, the explicit use of the last two steps (RP and UP) is purposeful. For an evolutionary algorithm, and indeed for any population-based optimization algorithm, there lies an important issue: the balance between exploitation and exploration (Goldberg 1989). The first term is related to the extent of importance given to already discovered solutions in the subsequent iterations, and the second term is related to the extent of generalization used to create a new solution from the already available solutions. Studies in the past (Goldberg et al. 1993; Thierens and Goldberg 1993) have shown a way to quantify these two terms in the context of a GA and have clearly shown the need to strike a balance between them. We emphasize here that the exploitation aspect of an EA depends not only on the selection plan, but also on the other two plans (RP and UP), a matter which is often ignored in EA studies. The emphasis of the current-best solutions in a finite-sized population not only depends on which solutions are selected, and how often, in the selection plan for creating the mating pool, but also indirectly depends on (i) how many and which solutions are chosen to be deleted from the population and (ii) whether or not the existing population members are compared with the newly created solutions for their further inclusion in the population (this is often used to introduce elite-preservation). The above two tasks are explicitly included in the algorithm-generator as the replacement plan (RP) and the update plan (UP), respectively, to make the algorithm development task flexible. The exploration issue, however, derives solely from the operations of the GP.

We are now ready to present a number of EAs in the light of our proposed algorithm-generator.

5.1 Real-parameter genetic algorithms

In a real-parameter genetic algorithm (rGA), a solution is directly represented as a vector of real-parameter decision variables. Starting with a population of such solutions (usually randomly created), a set of genetic operations (such as selection, recombination, mutation, and elite-preservation) is performed to create a new population in an iterative manner. Clearly, the selection plan (SP) must implement the selection operator of rGAs, the generational plan (GP) must implement all variation operators (such as recombination and mutation operators), and the update plan (UP) must implement any elite-preservation operator. Although most rGAs differ from each other mainly in terms of their recombination and mutation operators (some commonly-used recombination operators are discussed in the Appendix and can also be found in the literature), they mostly follow one of a few algorithmic models. We shall discuss some of these models a little later, but before that we shall discuss some commonly-used EA operators which can be explained easily with the help of the plans outlined in the proposed algorithm-generator.

5.1.1 Elite preservation operator

Elite preservation is an important task in an EA (Rudolph 1994). It is enforced by allowing the best of the parent and offspring populations to be propagated in two consecutive iterations. The update plan of the proposed algorithm-generator achieves elite preservation in a simple way. As long as the previous population B or the chosen parent population P is included in the update plan for choosing better solutions, elite preservation is guaranteed. Some EAs achieve this by choosing the best μ solutions from a combined population of P and C.
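The elitist update plan described above can be sketched in a few lines. The helper name and the sample fitness values below are hypothetical; the point is only that taking the best μ of P ∪ C can never lose the best solution found so far.

```python
def elitist_update(P, C, f, mu):
    """Update plan (UP) sketch: keep the best mu solutions of the
    combined parent and offspring sets, which guarantees that the
    best solution found so far is never lost (elite preservation)."""
    return sorted(P + C, key=f)[:mu]

f = lambda x: x * x                      # minimization: smaller is better
P = [3.0, -2.0, 0.5]                     # parents
C = [4.0, 0.1, -6.0]                     # offspring
B = elitist_update(P, C, f, mu=3)
# The best parent (0.5) survives even though some offspring are worse.
```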

5.1.2 Commonly-used EA selection operators

Among the EA selection operators, the proportionate, ranking, and tournament selection operators are often used. With the proposed algorithm-generator, all these operators must be represented in the selection plan. For example, the proportionate selection operator can be implemented by choosing the i-th solution for its inclusion in P with a probability f_i / \sum_j f_j. This probability distribution over B is used to choose μ members of P. The ranking selection operator can be implemented with the sorted ranks rnk_i and using a probability rnk_i / \sum_j rnk_j. The tournament selection operator with a size s can be implemented by using an SP in which s solutions are chosen and the best is placed in P. The above procedure needs to be repeated μ times to create the parent population P.
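As an illustration, the proportionate and tournament selection plans above can be sketched as follows. This is a minimal Python sketch; the function names and the assumption of positive fitness values (required by the proportionate rule) are our own.

```python
import random

def proportionate_sp(B, f, mu, rng):
    """SP sketch: pick mu members of P with probability f_i / sum_j f_j
    (assumes a maximization problem with positive fitness values)."""
    total = sum(f(x) for x in B)
    return rng.choices(B, weights=[f(x) / total for x in B], k=mu)

def tournament_sp(B, f, mu, s, rng):
    """SP sketch: repeat mu times -- draw s random solutions from B
    and place the best of them in P (here: larger f is better)."""
    return [max(rng.sample(B, s), key=f) for _ in range(mu)]

rng = random.Random(0)
B = [1.0, 2.0, 3.0, 4.0]
fit = lambda x: x                       # hypothetical positive fitness
P1 = proportionate_sp(B, fit, mu=2, rng=rng)
P2 = tournament_sp(B, fit, mu=2, s=2, rng=rng)
```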

5.1.3 Niche-preservation and mating restriction operators

A niche-preservation operator is often used to maintain a diverse set of solutions in the population.


Among all four steps of the proposed algorithm-generator, only the selection plan is affected by the implementation of a niche-preservation operator. While choosing the parent population P, care should be given to lowering the selection probability of the population-best solutions. Solutions with a wider diversity in their decision variables must be given priority. The standard sharing function approach (Goldberg and Richardson 1987), clearing approaches (Petrowski 1996), and others can be designed using an appropriate selection plan.

A mating restriction operator, on the other hand, is used to reduce the chance of creating lethal solutions arising from the mating of two dissimilar yet good solutions. This requires a selection plan SP in which the procedures for choosing each of the μ parents become dependent on each other. Once the first parent is chosen, the procedure for choosing the other parents must consider a similarity measure with respect to the first parent.

We are now ready to discuss the commonly-used models for real-parameter evolutionary optimization.

5.1.4 Generational versus steady-state EAs

Evolutionary algorithms, particularly genetic algorithms, are often used with a generational or with a steady-state concept. In the case of the former, a complete population of λ solutions is first created before making any further decision. The proposed algorithm-generator can be used to develop a generational EA by repeatedly using the SP-GP plans to create λ new offspring solutions. Thereafter, the replacement plan simply chooses the whole parent population to be replaced, or R = B (and r = μ). With an elite-preservation operator, the update plan chooses the best μ solutions from the combined population B ∪ C. In an EA without elitism, the update plan simply chooses the complete offspring population C. We describe a generic real-parameter generational GA in detail later.

At the other extreme, a steady-state EA can be designed by using a complete SP-GP-RP-UP cycle for creating and replacing only one solution (λ = r = 1) in each iteration. It is interesting to note that the SP can use a multi-parent (μ > 1) population, but the GP creates only one offspring solution from it. We shall return to two specific steady-state evolutionary algorithms later. Generation-gap EAs can be designed with a non-singleton C and R (that is, with λ > 1 and r > 1).

The generational model is a direct extension of the canonical binary GAs to real-parameter optimization. In each iteration of this model, a complete set of N new offspring solutions is created. For preserving elite solutions, both the parent and offspring populations are compared and the best N solutions are retained. In most such generational models, tournament selection (SP) is used to choose two parent solutions, and a recombination and a mutation operator are applied to the parent solutions to create two offspring solutions. In terms of our algorithm-generator, the four plans are described below. However, the first two steps are performed iteratively till N offspring solutions are formed.

SP: This is usually a tournament selection, in which two random solutions from B are compared and the better one is selected. By two consecutive tournament selection operations, two parent solutions (P) are chosen.
GP: A real-parameter recombination operator and a real-parameter mutation operator are used to create two offspring solutions (λ = 2) from P. Continue the above two steps till N offspring solutions (the set C) are formed.
RP: Set R = B.
UP: Set B with the best N solutions of B ∪ C.

The BLX, SBX, and fuzzy recombination operators are often used with this generational model. The random mutation operator (Michalewicz 1992), the uniform mutation operator (Schwefel 1987), and the polynomial mutation operator (Deb and Goyal 1996) are used along with a recombination operator.
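A single iteration of this generational model can be sketched as below. This is only an illustration, not the paper's implementation: BLX-α with a small Gaussian mutation stands in for the recombination and mutation operators, the problem is assumed to be minimization of a sphere function, and all parameter values are arbitrary.

```python
import random

def generational_step(B, f, rng, alpha=0.5, pm=0.1):
    """One iteration of the generational model sketched above:
    SP (binary tournament) + GP (BLX-alpha recombination and a
    small random mutation) repeated until N offspring exist, then
    RP: R = B and UP: keep the best N of B | C (elitist)."""
    N, n = len(B), len(B[0])
    C = []
    while len(C) < N:
        # SP: two binary tournaments pick the parent pair P.
        p1 = min(rng.sample(B, 2), key=f)
        p2 = min(rng.sample(B, 2), key=f)
        # GP: BLX-alpha creates two offspring (lambda = 2) ...
        for _ in range(2):
            child = []
            for a, b in zip(p1, p2):
                lo, hi = min(a, b), max(a, b)
                ext = alpha * (hi - lo)
                x = rng.uniform(lo - ext, hi + ext)
                # ... followed by an occasional small mutation.
                if rng.random() < pm:
                    x += rng.gauss(0.0, 0.1)
                child.append(x)
            C.append(child)
    # RP: R = B;  UP: B = best N of B | C.
    return sorted(B + C, key=f)[:N]

rng = random.Random(3)
f = lambda x: sum(v * v for v in x)
B = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(30):
    B = generational_step(B, f, rng)
best = min(B, key=f)
```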

Another commonly-used real-parameter genetic algorithm is the CHC (Eshelman 1991), in which both the parent and offspring populations (of the same size N) are combined and the best N members are chosen. Such an algorithm can be realized by using an update plan which chooses the best N members from a combined B ∪ C population. The CHC algorithm also uses a mating restriction scheme, a matter which can also be implemented using the proposed algorithm-generator as described in Sect. 5.1.3.

Besides the classical and evolutionary algorithms, there exist a number of hybrid search and optimization methods in which each population member of a GA undergoes a local search operation (mostly using one of the classical principles). In the so-called Lamarckian approach, the resulting solution vector replaces the starting GA population member, whereas in the Baldwinian approach the solution is unchanged but the modified objective function value is used in the subsequent search operations. Recent studies (Joines and Houck 1994; Whitley et al. 1994) showed that instead of applying a complete Baldwinian or a complete Lamarckian approach to all population members, applying the Lamarckian approach to about 20-40% of the population members and the Baldwinian approach to the remaining solutions is a better strategy. In any case, the use of a local search strategy to update a solution can be considered as a special-purpose mutation operator associated with the generational plan of the proposed algorithm-generator. Whether the mutated solution is accepted in the population (the Lamarckian approach), or only the objective function value of the mutated solution is used (the Baldwinian approach), or the two are combined (the partial Lamarckian approach) is an implementational matter.
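The partial Lamarckian idea can be sketched as follows. The local search used here is a hypothetical coordinate-wise greedy step, and the 30% Lamarckian fraction is one value from the 20-40% range mentioned above; both are illustrative assumptions.

```python
import random

def local_search(x, f, step=0.1):
    """Hypothetical local search: coordinate-wise greedy step."""
    y = list(x)
    for i in range(len(y)):
        for d in (-step, step):
            trial = list(y)
            trial[i] += d
            if f(trial) < f(y):
                y = trial
    return y

def partial_lamarckian(pop, f, rng, p_lamarck=0.3):
    """Apply local search to every member; with probability p_lamarck
    the improved vector replaces the member (Lamarckian), otherwise
    only its improved objective value is used (Baldwinian)."""
    new_pop, values = [], []
    for x in pop:
        y = local_search(x, f)
        if rng.random() < p_lamarck:
            new_pop.append(y)          # Lamarckian: genotype updated
        else:
            new_pop.append(x)          # Baldwinian: genotype kept ...
        values.append(f(y))            # ... but improved value is used
    return new_pop, values

rng = random.Random(7)
f = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(5)]
pop2, vals = partial_lamarckian(pop, f, rng)
```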


5.1.5 Minimal generation gap (MGG) model

This model is completely different from the generational model. It is a steady-state GA in which, in every iteration, only two new solutions are updated in the GA population (Higuchi et al. 2000):

1. From the solution bank B, select μ parents randomly.
2. Generate λ offspring from the μ parents using a recombination scheme.
3. Choose two parents at random from the population B.
4. Of these two parents, one is replaced with the best of the λ offspring and the other is replaced with a solution chosen by a roulette-wheel selection procedure from a combined population of the λ offspring and the two chosen parents.

This GA was originally proposed by Satoh, Yamamura and Kobayashi (1996) and later used in a number of studies (Higuchi et al. 2000; Kita et al. 1999; Tsutsui et al. 1999). In the context of our algorithm-generator, the above model can be described as follows:

SP: This uses a uniform probability for choosing any solution from B. In total, μ solutions are picked.
GP: A real-parameter recombination operator (such as SPX or UNDX described in the Appendix) is applied to the set P as many as λ times to create the offspring set C.
RP: R is set by choosing two solutions from B at random. Thus, r = |R| = 2.
UP: Here, one solution in R is replaced by the best of C and the other by using a roulette-wheel selection on the set C ∪ R.
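A single MGG iteration under these four plans might be sketched as below. This is a simplification, not the original algorithm: a weighted blend of the μ parents stands in for SPX/UNDX, and since the test function is minimized, a hypothetical 1/(1+f) fitness is used for the roulette wheel.

```python
import random

def mgg_step(B, f, rng, mu=3, lam=8):
    """One MGG iteration as sketched above (minimization)."""
    n = len(B[0])
    # SP: mu parents picked uniformly at random from B.
    P = rng.sample(B, mu)
    # GP: lam offspring, each a perturbed blend of the mu parents.
    C = []
    for _ in range(lam):
        w = [rng.random() for _ in range(mu)]
        s = sum(w)
        child = [sum(wk * p[i] for wk, p in zip(w, P)) / s +
                 rng.gauss(0.0, 0.05) for i in range(n)]
        C.append(child)
    # RP: two solutions chosen at random for replacement (r = 2).
    i, j = rng.sample(range(len(B)), 2)
    # UP: one slot gets the best of C, the other a roulette-wheel
    # pick from C plus the two removed solutions.
    pool = C + [B[i], B[j]]
    weights = [1.0 / (1.0 + f(x)) for x in pool]
    B[i] = min(C, key=f)
    B[j] = rng.choices(pool, weights=weights, k=1)[0]
    return B

rng = random.Random(11)
f = lambda x: sum(v * v for v in x)
B = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(20)]
for _ in range(200):
    B = mgg_step(B, f, rng)
```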

The original study considered two different recombination plans (SPX and UNDX). Typical parameter settings used with this algorithm were as follows: μ = n + 1 and λ = 200 with the SPX recombination operator (Higuchi et al. 2000), and μ = 3 and λ = 200 with the UNDX recombination operator (Kita 1998). Although the developers of this model did not perform a detailed parametric study, we perform a parametric study by varying the offspring size λ on the MGG model with the UNDX and SPX operators. The following three commonly-used test problems are used:

F_elp = \sum_{i=1}^{n} i x_i^2   (Ellipsoidal function),   (1)

F_sch = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2   (Schwefel's function),   (2)

F_ros = \sum_{i=1}^{n-1} \left[ 100 (x_i^2 - x_{i+1})^2 + (x_i - 1)^2 \right]   (Generalized Rosenbrock's function).   (3)

First, we fix the population size N = 300 and vary λ from 2 to 300. All other parameters are kept as they were used in the original MGG study (Higuchi et al. 2000), except that in UNDX μ = 6 is used here, as this value is found to produce better results. In all experiments, we have run the MGG model till a pre-defined number of function evaluations F^T have elapsed. We have used the following values of F^T for the different functions: F^T_elp = 0.5(10^6), F^T_sch = 1(10^6), and F^T_ros = 1(10^6). In all experiments, 50 runs with different initial populations are taken and the smallest, median, and highest best function values are recorded. Figure 5 shows the best function values obtained by the SPX and the UNDX operators on F_elp with different values of λ. The figure shows that λ = 50 produced the best reliable performance for the SPX operator. Importantly, the MGG model with λ = 200 (which was suggested and used in the original MGG study) did not perform as well. Similarly, for the UNDX operator, the best performance is observed at λ = 4, which is much smaller than the suggested value of 200. The optimality of the best function value with respect to λ is clearly evident from the figure. Figure 6 shows that on F_sch the best performances are observed with identical values of λ for SPX and UNDX. Figure 7 shows the population-best objective function value for the MGG model with SPX and UNDX operators applied to the F_ros

function. Here, the best performance is observed at λ = 100 to 300 for the SPX operator and λ = 6 for the UNDX operator.

Fig. 5 Best objective function value in B for different λ on F_elp using the MGG model with SPX and UNDX operators

Fig. 6 Best objective function value in B for F_sch using the MGG model with SPX and UNDX

Thus, it is clear from the above experiments that the suggested value of λ = 200 (which was recommended and used in earlier studies (Kita et al. 1999; Satoh et al. 1996)) is not optimal for either recombination operator (UNDX or SPX). Instead, a smaller value of λ has exhibited better performance. It is also clear from the figures that the SPX operator works better with a large offspring pool size, whereas UNDX works well with a small offspring pool size. Since a uniform probability distribution over a large simplex is used in the SPX operator, a large pool size requirement is intuitive for a proper sampling of the region covered by the simplex. On the other hand, with a probability distribution biased towards the centroid of the parent solutions, a few samples are enough with the UNDX operator.

However, in order to properly evaluate the MGG model, more such parametric studies must be performed. In this direction, the effect of the replacement set size (r) and the parent set size (μ) on the performance of MGG must also be carefully studied. In addition, the MGG model can be modified with other strategies (such as a different selection plan (SP), a different replacement plan (RP), a different update plan (UP), or other combinations). The representation of the MGG model in the above-mentioned structured manner allows a user to study individual plans separately and understand their interactions.

5.1.6 Generalized generation gap (G3) model

In another study (Deb et al. 2002), the MGG model is modified to make it computationally faster by replacing the roulette-wheel selection with a block selection of the best two solutions. This model also preserves elite solutions from the previous iteration:

1. From the population B of size N, select the best parent and μ − 1 other parents randomly.
2. Generate λ offspring from the chosen μ parents using a recombination scheme.
3. Choose r parents at random from the population B.
4. From a combined subpopulation of the r chosen parents and the λ created offspring solutions, choose the best r solutions and replace the chosen r solutions (in Step 3) with them.

Although it is already clear, for the sake of completeness we rewrite the above model in terms of our algorithm-generator:

SP: The best parent and (μ − 1) other random solutions are picked from B.
GP: Create λ offspring solutions from P using any recombination scheme (PCX, UNDX, SPX, or any other recombination operator described in the Appendix).
RP: Set R with r random solutions from B.
UP: From the combined set C ∪ R, choose the r best solutions and put them in R′. Update B as B := (B \ R) ∪ R′.
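A single G3 iteration under these plans can be sketched as follows. A parent-centric Gaussian perturbation is used here only as a stand-in for PCX, and the parameter values are illustrative.

```python
import random

def g3_step(B, f, rng, mu=3, lam=2):
    """One G3 iteration (minimization), following SP-GP-RP-UP."""
    n = len(B[0])
    # SP: the best parent plus (mu - 1) random others.
    best = min(range(len(B)), key=lambda i: f(B[i]))
    others = rng.sample([i for i in range(len(B)) if i != best], mu - 1)
    P = [B[best]] + [B[i] for i in others]
    # GP: lam offspring near a randomly chosen parent (parent-centric).
    C = []
    for _ in range(lam):
        p = rng.choice(P)
        C.append([p[i] + rng.gauss(0.0, 0.1) for i in range(n)])
    # RP: r random members of B are marked for replacement (r = lam).
    R_idx = rng.sample(range(len(B)), lam)
    # UP: the best r of C plus the marked members go back into B.
    pool = C + [B[i] for i in R_idx]
    pool.sort(key=f)
    for slot, sol in zip(R_idx, pool[:lam]):
        B[slot] = sol
    return B

rng = random.Random(5)
f = lambda x: sum(v * v for v in x)
B = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(20)]
start_best = min(f(x) for x in B)
for _ in range(300):
    B = g3_step(B, f, rng)
end_best = min(f(x) for x in B)
```

Note that this update is elitist: the population-best solution can only leave B if something better replaces it.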

To compare the performance of the MGG and G3 models, we consider the UNDX recombination operator as the GP and apply both the MGG and G3 methods to the three test problems described earlier. In all cases, we use n = 20 variables and continue till a function value of 10^{-20} is achieved. Recall that in all three objective functions the minimum function value is zero. But before we describe the simulation results, let us discuss an important matter which we would like to highlight for future studies on population-based optimization algorithms. All population-based algorithms require the user to supply an initial guess population. In most past studies, this population was initialized symmetrically around the known optimum of the chosen test problem. Along with such an initial population, if a recombination operator which biases solutions towards the center of the population is employed, it is not surprising that such an algorithm will work very well. To obtain a real evaluation of an algorithm on a test problem, one way to remedy this difficulty is to use an initial population far away from the known optimum solution. Here, for each test problem, the population B(0) is initialized randomly in x_i ∈ [−10, −5], which is away from the global optimum solution (x_i = 0 for F_elp and F_sch, and x_i = 1 for F_ros).

Table 1 shows the minimum number of evaluations needed in 50 different replications of each GA. In each case, results are obtained with their best parameter settings. In all cases, a population size N ≈ 100, μ = 3, and λ in the range 2 to 6 produced the best performance. These results indicate that the G3 model is an order of magnitude faster than the MGG model. By keeping the best population member in P and by using a more direct update strategy (instead of the roulette-wheel selection plan used in MGG), the G3 model makes the search much faster than the MGG model.

Fig. 7 Best objective function value in B on F_ros using the MGG model with SPX and UNDX

Table 1 Comparison of the MGG and G3 optimization models with the UNDX recombination operator (function evaluations needed)

Function    MGG        G3
F_elp       297,546    17,826
F_sch       503,838    30,568
F_ros       938,544    71,756

5.2 Self-adaptive evolution strategies

Besides the real-parameter genetic algorithms, self-adaptive evolution strategies (ESs) are often used for solving real-parameter optimization problems. In their standard models, a (μ/ρ + λ) or (μ/ρ, λ) self-adaptive ES starts with μ randomly chosen solutions B. Thereafter, by using a recombination scheme (intermediate or discrete) on ρ of the μ parents and a self-adaptive mutation scheme (Beyer 2001), λ (usually greater than μ) offspring solutions (the set C) are created. In the plus strategy, both parent and offspring populations are combined together and the best μ solutions are retained. In the comma strategy, the best μ of the λ offspring solutions are retained. In terms of the proposed algorithm-generator, we express these two self-adaptive ESs as follows:

SP: Set P = B.
GP: Here, an ES recombination and mutation plan are chosen (see Bäck 1996; Beyer 2001; Schwefel 1981 for standard operators).
RP: Set R = B.
UP: For the plus ES, B is set with the best μ solutions of R ∪ C, and for the comma ES, it is set with the best μ solutions of C alone.
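The plus and comma update plans can be contrasted in a few lines of code (a sketch; the fitness function and sample values below are arbitrary):

```python
def es_update(parents, offspring, f, mu, plus=True):
    """UP sketch for self-adaptive ESs: the (mu + lambda) strategy
    keeps the best mu of parents and offspring together, while the
    (mu, lambda) strategy keeps the best mu of the offspring alone."""
    pool = parents + offspring if plus else offspring
    return sorted(pool, key=f)[:mu]

f = lambda x: abs(x)                    # minimization of |x|
parents = [0.2, 1.5]
offspring = [0.9, -0.4, 2.0, 0.3]
plus_B = es_update(parents, offspring, f, mu=2, plus=True)
comma_B = es_update(parents, offspring, f, mu=2, plus=False)
# The comma strategy may discard a parent better than any offspring.
```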

Elsewhere (Fogel et al. 1995), the evolutionary programming (EP) technique is used in a similar fashion for real-parameter optimization. Those EP techniques can also be represented by our algorithm-generator in a very similar manner as described above.

5.3 Differential evolution

Differential evolution (DE) (Storn and Price 1997) is a steady-state EA in which, for every offspring, a set of three parent solutions and an index parent are chosen. Thereafter, a new solution is created either by a variable-wise linear combination of the three parent solutions or by simply choosing a variable from the index parent with a probability. The resulting new solution is compared with the index parent and the better of them is declared as the offspring. In the context of our algorithm-generator, DE can be written as follows:

SP: Here, three solutions and an index solution are chosen at random from the solution bank B. Thus, μ = |P| = 4.
GP: Create an offspring solution (the singleton C) from P with a specified recombination plan (see Storn and Price 1997 for details and different variations).
RP: Set R with the index parent chosen in SP. Thus, r = 1.
UP: Update B by replacing R with the better of R and C.
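One DE offspring cycle under these plans can be sketched as follows (a simplified DE/rand/1/bin-style variant; the F and CR values are illustrative, and the problem is assumed to be minimization):

```python
import random

def de_step(B, f, rng, F=0.8, CR=0.9):
    """One DE offspring cycle as described above.
    SP: three distinct parents a, b, c plus an index parent;
    GP: variable-wise y = a + F*(b - c) taken with probability CR,
        else the index parent's variable;
    RP/UP: the better of the trial and the index parent survives."""
    idx = rng.randrange(len(B))
    a, b, c = rng.sample([B[i] for i in range(len(B)) if i != idx], 3)
    target = B[idx]
    trial = [a[j] + F * (b[j] - c[j]) if rng.random() < CR else target[j]
             for j in range(len(target))]
    if f(trial) <= f(target):
        B[idx] = trial
    return B

rng = random.Random(2)
f = lambda x: sum(v * v for v in x)
B = [[rng.uniform(-2, 2) for _ in range(2)] for _ in range(15)]
start = min(f(x) for x in B)
for _ in range(400):
    B = de_step(B, f, rng)
end = min(f(x) for x in B)
```

Because UP replaces the index parent only when the trial is no worse, the population-best value never deteriorates.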

6 Suggested properties of a generational (recombination) plan

Although the performance of an optimization algorithm depends on the interaction of all four plans outlined in the algorithm-generator, the generational plan of creating new solutions from a set of old solutions is probably the most crucial one to design. In Sect. A, we reviewed a number of generational plans commonly used in the literature. Here, we discuss some salient properties which may be kept in mind while designing a new generational plan (GP).

The task of a GP is to use the members of the set P and create a new pool of solutions (the set C) by exploiting the location and spread of the solutions in P. To make the plans independent of each other, it is advisable not to use any objective function information explicitly in the GP. This is because the emphasis of good solutions based on objective function information is already the task of the selection plan (SP). An objective-function-based emphasis in both SP and GP may make the resulting algorithm overly greedy, causing premature convergence in certain problems. Based on this understanding, Beyer and Deb (2001) argued that a recombination operator must have the following two properties:

1. The population mean decision variable vector should remain the same before and after the generational plan is applied.
2. The spread in the members of C must be more than that in P.

Since the generational plan does not usually use any objective function information explicitly, the first argument makes sense for global optimization. However, we emphasize that this may not be a strict requirement, particularly in designing an algorithm to find a local optimal solution or to solve a unimodal optimization problem.

The second argument comes from the realization that the selection plan (SP) has a general tendency to reduce the population variance by preferring a few solutions from the solution bank. Thus, the population variance must be increased by the generational plan (GP) to preserve adequate diversity in the offspring population. Figure 8 illustrates this matter. If the GP causes a reduction in the variance, then both SP and GP have continual tendencies to reduce the variance of the population, a matter which will lead the search process towards premature convergence.

Following the above argument for global optimization, we realize that the population mean of P and C can be preserved by several means. One method would be to have individual recombination events preserving the mean between the participating parents and the resulting offspring. We call this approach the mean-centric recombination. The other approach would be to have an individual recombination event biasing offspring to be created near the parents, but assigning each parent an equal probability of creating offspring in its neighborhood. This will also ensure that the population mean of the entire offspring population is identical to that of the parent population. We call this latter approach the parent-centric recombination. In the following subsection, we discuss the merits of each of these recombination operators.
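A small numerical check illustrates why the parent-centric approach preserves the population mean: if every parent is equally likely to be the center of an offspring and the perturbation has zero mean, the expected offspring mean equals the parent mean. The parent values and perturbation width below are arbitrary.

```python
import random

rng = random.Random(4)
parents = [-2.0, 0.5, 4.0]
parent_mean = sum(parents) / len(parents)

# Parent-centric sketch: each offspring is created near one parent,
# every parent being equally likely, with a zero-mean perturbation.
offspring = [rng.choice(parents) + rng.gauss(0.0, 0.1)
             for _ in range(20000)]
offspring_mean = sum(offspring) / len(offspring)
# With equal parent probabilities, the offspring mean converges to
# the parent mean, satisfying property 1 of the generational plan.
```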

6.1 Advantages of parent-centric operators

A couple of examples of mean-centric recombination are the UNDX and SPX operators, whereas some examples of parent-centric recombination operators are the PCX, the fuzzy recombination, and the SBX operator. It is argued here that parent-centric recombinations are, in general, better than the mean-centric ones. Since each parent is carefully picked by the selection plan (meaning that the parents are preferred solutions from the solution bank B), for most real-parameter optimization problems it can be assumed that solutions close to these parents are also likely to be potentially good candidates. On the contrary, it may be quite demanding to assume that solutions close to the centroid of the participating parents are also good, especially in cases where the parents are well dispersed in the search space. This situation happens in the early iterations of an algorithm, when most population members are randomly placed over the entire search space. Although individual parents may be good, the resulting centroid of the parents need not be good as well. Thus, emphasizing solutions close to the centroid of the parents (as enforced in UNDX directly and in SPX indirectly) may not be a good idea. However, creating solutions close to the parents, as emphasized by the PCX operator, should make for a more steady and reliable search. An earlier study (Beyer and Deb 2001) has demonstrated that a variable-wise parent-centric recombination operator (SBX) produced faster convergence towards the true optimum on a number of test problems compared to a uniformly emphasized recombination operator (BLX).

7 A parametric study of the G3 model

In this section, we suggest a modified version of the PCX operator originally suggested by the author and his students (Deb et al. 2002) and then compare its performance with respect to a representative mean-centric recombination operator (UNDX) on the G3 model. Finally, the performance of the G3 model with the modified PCX and the original PCX operators is compared with other population-based optimization algorithms, such as a classical quasi-Newton method, three self-adaptive ESs, and the differential evolution strategy.

7.1 A modified PCX operator

Fig. 8 When the selection plan reduces the diversity of P, the recombination operator must increase the diversity

Fig. 9 The original and the modified PCX operators are illustrated in (a) and (b), respectively

In the original PCX operator, there exists a finite (albeit small) probability of creating an offspring solution on the far side of the centroid. Figure 9(a) shows a typical probability distribution of PCX on solution 1 for creating offspring solutions. When all parent solutions are to be used one after another with the original PCX operator, this causes an artificial increase in the probability of creating solutions near the centroid. In the modified PCX operator, we restrict the offspring to be created strictly in the region in which the parent lies. Figure 9(b) illustrates this modification (mPCX). For this purpose, we simply suggest the use of a log-normal probability distribution for the \vec{d}^{(p)} component, as given below:

\vec{y} = \vec{x}^{(p)} + \left( \exp(w_\zeta) - 1 \right) \vec{d}^{(p)} + \sum_{i=1,\, i \neq p}^{\mu} w_\eta \bar{D} \vec{e}^{(i)},   (4)

The parameter w_\zeta is a zero-mean normally distributed variable with standard deviation \sqrt{2 \log \sigma_\zeta}. This modified PCX operator ensures a zero probability of creating a solution on the other side of the centroid.
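This last claim can be checked numerically: since exp(w_ζ) > 0 for every normal draw, the coefficient exp(w_ζ) − 1 multiplying the \vec{d}^{(p)} direction in Eq. (4) is always greater than −1, so an offspring can never be placed past the centroid. A small sketch using the σ_ζ = 1.01 value adopted later:

```python
import math
import random

rng = random.Random(9)
sigma_zeta = 1.01
sd = math.sqrt(2.0 * math.log(sigma_zeta))
# Coefficient of the d^(p) direction in Eq. (4): exp(w) - 1 > -1 for
# every normal draw w.  The parent sits at coefficient 0 and the
# centroid at coefficient -1, so the offspring stays on the parent's
# side of the centroid.
coeffs = [math.exp(rng.gauss(0.0, sd)) - 1.0 for _ in range(10000)]
```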

7.2 Comparison with the original PCX operator

Figure 10 shows the evaluations needed to achieve a function value of 10^{-20} using the G3 model with the modified PCX, the original PCX, and the UNDX recombination operators. In all cases, a population size N = 100 and a parent size μ = 3 are used. For the mPCX operator, we have used σ_η = 0.1 and σ_ζ = 1.01. However, for the PCX operator, we use σ_η = σ_ζ = 0.1. For the UNDX operator, we use the suggested values (Kita and Yamamura 1999). The figure clearly shows that the G3 model with the PCX operators performs much better than that with the UNDX operator. For most λ values, the modified PCX operator produces slightly better results than the original PCX operator. The smallest number of evaluations needed by the mPCX operator is 5,194 (with λ = 2 and N = 100), whereas the smallest number of evaluations needed by the original PCX was 5,818 with identical λ and N values. It is also interesting to note from this parametric study that the performance of G3 gets better with reducing λ, irrespective of the recombination operator used. Since a fixed number of evaluations is allowed in each run, the G3 model works better with more iterations and fewer new solutions created in each iteration.

In order to investigate the effect of population size N on the performance of the three recombination operators, we fix the offspring set size to λ = 2 and vary N. Although a more detailed statistical test is necessary, a visual comparison shown in Fig. 11 reveals that the mPCX operator performs better than the original PCX for smaller N, and the best performance occurs with a population size of 100. This indicates the importance of such a parametric study in developing an efficient algorithm. The study reveals that there exists an optimal range of population sizes for which the G3 model works the best. For all three recombination operators, a population size around 100 performs well.

Figure 12 shows the performance comparison forthe 20-variable Schwefel’s function. It is also clear from

Fig. 10 Function evaluations needed to find a solution with a function value equal to 10^-20 on Felp using the G3 model with mPCX, PCX and UNDX. N = 100 is used

Fig. 11 Function evaluations versus population sizes on Felp using the G3 model with mPCX, PCX and UNDX. λ = 2 is used

Fig. 12 Function evaluations needed to find a solution with a function value equal to 10^-20 on Fsch using the G3 model with mPCX, PCX and UNDX. N = 100 is used


the figure that the modified PCX operator performs slightly better than the original PCX operator in this function as well. For a population size of N = 100, the best performance for mPCX is recorded with λ = 2, requiring 13,126 evaluations compared to 15,358 required with the original PCX. For this function also, the performance of the G3 model with any of the three recombination operators is better with a smaller offspring size (λ). We have also observed an optimal population size (N) for this function. For brevity, we do not show the plot here.

Figure 13 shows the performance comparison for the 20-variable Rosenbrock's function. A slightly better performance of the modified PCX operator is observed again. In this function, the best performance (requiring 19,692 evaluations) with N = 100 is observed for λ = 6, compared to 20,640 evaluations needed with the original PCX operator having λ = 4. It is interesting to note that for this function the G3 model with all three recombination operators performs the best with λ around 4 to 6. For λ smaller than these values, the performance deteriorates. As the problem gets more complicated, an adequate number of offspring samples is needed to provide useful information about the search space. Once again, the performance of the G3 model is found to be optimal for a specific range of N in this function. We do not present the detailed results here for brevity.

7.3 Comparison with other evolutionary algorithms

Next, we compare the G3 model and both PCX operators with other evolutionary optimization methods. Table 2 shows the total function evaluations needed by various algorithms to find a solution with a function value equal to or smaller than 10^-20 for all three test problems. In all cases, 50 different runs are made, and the best, median and worst total evaluations of the 50 runs are recorded. It is clear that for both the elliptical and Schwefel's functions, G3+mPCX works the best. Moreover, in all three test problems, the G3 model with either PCX operator performs much better than the other evolutionary methods. In all cases except the CMA-ES (Hansen and Ostermeier 1996), an evolution strategy which works by deriving salient search directions from a collection of previously-found good solutions, the performance is at least an order of magnitude better. Although the CMA-ES approach comes closer to the G3 and PCX operators, the latter approaches are better in all three test problems.

7.4 Comparison with a classical optimization method

Finally, we compare the G3+PCX operator with a classical point-based optimization algorithm, the quasi-Newton method. In Table 3, we present the best, median, and worst function values obtained from a set of 10 independent runs started from random solutions in x_i ∈ [-10, -5]. The maximum number of function evaluations allowed in each test problem is determined from the best and worst function evaluations needed by the G3 model with the PCX operator to achieve an accuracy of 10^-20. These limits on function evaluations are also tabulated. The tolerances in the variables and in the function values are set to 10^-30. The table shows that the quasi-Newton method has outperformed the G3 model with PCX for the ellipsoidal function by achieving a better function value. Since the ellipsoidal function has no linkage among variables, the performance of the quasi-Newton search method (a point-by-point approach) is difficult to match. However, it is clear from the table that the quasi-Newton method is not able to find the optimum with an accuracy of 10^-20 (obtained by G3+PCX) within the allowed number of function evaluations in the more epistatic problems (Schwefel's and Rosenbrock's functions).
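Such a quasi-Newton baseline can be reproduced with SciPy's BFGS implementation; the sketch below is illustrative only, and the form of the ellipsoidal function (a weighted sphere, sum of i·x_i², minimum 0 at the origin) is an assumption, since the test functions are defined elsewhere in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def f_elp(x):
    # Assumed ellipsoidal test function: sum_i i * x_i^2, minimum 0 at the origin.
    return sum((i + 1) * xi**2 for i, xi in enumerate(x))

rng = np.random.default_rng(0)
x0 = rng.uniform(-10.0, -5.0, size=20)  # random start in [-10, -5]^20, as in the study

# Tight gtol mirrors the 10^-30 tolerance used in the study.
res = minimize(f_elp, x0, method="BFGS", options={"gtol": 1e-30, "maxiter": 10000})
print(res.fun)  # best function value found
```

On this separable quadratic the quasi-Newton method converges very quickly, which matches the observation above that a point-by-point approach is hard to beat when there is no linkage among variables.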

Fig. 13 Function evaluations needed to find a solution with a function value equal to 10^-20 on Fros using the G3 model with mPCX, PCX and UNDX. N = 100 is used

Table 2 Comparison of G3 model and the modified PCX with a number of other optimization algorithms on Felp, Fsch and Fros

EA            Felp                        Fsch                          Fros
              Best    Median   Worst      Best     Median   Worst      Best     Median   Worst
G3+mPCX       5,194   6,576    7,240      13,126   14,820   16,708     19,776   23,296   25,852
G3+PCX        5,826   6,800    7,728      13,988   15,602   17,188     20,640   23,688   25,720
CMA-ES        8,064   8,472    8,868      15,096   15,672   16,464     29,208   33,048   41,076
(1,10)-ES     28,030  40,850   87,070     72,330   105,630  212,870    591,400  803,800  997,500
(15,100)-ES   83,200  108,400  135,900    173,100  217,200  269,500    663,760  837,840  936,930
DE            9,660   12,033   20,881     102,000  119,170  185,590    243,800  587,920  942,040


8 Conclusions

Real-parameter optimization using classical gradient or direct search algorithms is not new. Although there exists a plethora of point-based algorithms, requiring only one solution in each iteration, there also exist a few population-based algorithms. In contrast, real-parameter optimization using population-based evolutionary algorithms is comparatively new. In this paper, we have suggested a population-based real-parameter optimization algorithm-generator which can represent most classical and evolutionary algorithms. For a number of such existing optimization algorithms, we have demonstrated how the proposed algorithm-generator can be used to express them.

Besides being able to represent an existing algorithm, the structure of the algorithm-generator also provides an ideal platform to investigate different new plans. In such an effort, we have also suggested an efficient algorithm (which we called the generalized generation gap, or G3, model) and performed a detailed parametric study. To evaluate the proposed G3 algorithm, we have compared its performance with an efficient classical method and a number of other evolutionary optimization algorithms.

The following conclusions can be drawn from this study:

• For real-parameter optimization using a population-based algorithm, an algorithm-generator is proposed. Most population-based optimization algorithms found in the classical and evolutionary optimization literature can be easily described using the proposed algorithm-generator.

• The algorithm-generator contains four functional plans, each of which can be independently controlled by the user. This functional decomposition of an algorithm allows a user to investigate the effect of each functional plan on the algorithm's performance and should help the user develop efficient optimization algorithms.

• Based on extensive simulation results on three test problems, it has been observed that the G3 model with a parent-centric recombination operator is better than all other population-based real-parameter optimization algorithms considered in this study.

• A parametric study with the population size N has shown that in all problems the G3 model performs the best only when an adequate population size is considered. This amply emphasizes the need for using population-based approaches (instead of a point-based approach) in solving search and optimization problems efficiently.

• Parent-centric recombination is found to be much better than the mean-centric recombination operators with the G3 model.

• Contrary to many real-parameter optimization studies, it has been emphasized here and in other studies of the author (Deb et al. 2002) that for a proper evaluation of an algorithm the initial population must not be created centered around the known optimum of a test problem. Doing so may introduce an inherent bias for evolutionary algorithms with certain generational plans and may not result in a fair comparison with classical and other evolutionary algorithms.

• One other important aspect of this study is that both classical and evolutionary algorithms are treated as outcomes of an identical algorithm-generator. Although evolutionary algorithmists tend to use terminologies based on natural genetics and natural evolution, in terms of the proposed algorithm-generator their methods can be treated differently, and probably more meaningfully, along the lines of other optimization algorithms.

The algorithm-generator suggested here provides a platform for developing new and hopefully better optimization algorithms. After developing such algorithms, they can be evaluated by performing a parametric study similar to the one performed here with the G3 model. Although this study has considered only three test problems, more test problems and test problems with further complexities must also be tried. Because of the demonstrated common principles between classical and evolutionary optimization algorithms through the proposed algorithm-generator, this study should encourage researchers from both fields to understand the merits and demerits of their approaches and help develop more efficient hybrid algorithms.

Appendix A: Real-parameter recombination operators

The early studies on real-parameter recombination operators concentrated on developing variable-wise operators, in which a recombination is performed on each decision variable, from two or more parent vectors at a time, with a probability.

One of the earliest implementations was reported by Wright (1991), where a linear crossover operator created three specific solutions from two parent solutions $x_i^{(1,t)}$ and $x_i^{(2,t)}$ at iteration $t$:

$$0.5\left(x_i^{(1,t)}+x_i^{(2,t)}\right), \quad \left(1.5x_i^{(1,t)}-0.5x_i^{(2,t)}\right), \quad \left(-0.5x_i^{(1,t)}+1.5x_i^{(2,t)}\right).$$

Of these three solutions, the best two solutions were chosen to replace the parents. It is clear that the creation

Table 3 Solution accuracy obtained using the quasi-Newton method. FE denotes the maximum allowed function evaluations.

Func.   FE       Best           Median         Worst
Felp    6,000    8.819(10^-24)  9.718(10^-24)  2.226(10^-23)
Fsch    15,000   4.118(10^-12)  1.021(10^-10)  7.422(10^-9)
Fros    15,000   6.077(10^-17)  4.046(10^-10)  3.987

Felp    8,000    5.994(10^-24)  1.038(10^-23)  2.226(10^-23)
Fsch    18,000   4.118(10^-12)  4.132(10^-11)  7.422(10^-9)
Fros    26,000   6.077(10^-17)  4.046(10^-10)  3.987


of only three pre-destined offspring from two parent solutions does not make this recombination operator particularly efficient for general use.
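The operator above can be sketched as follows (function names and the one-dimensional test function are illustrative, not from the paper):

```python
import numpy as np

def linear_crossover(x1, x2, f):
    """Wright's (1991) linear crossover sketch: form the three fixed
    combinations of two parents and keep the best two (minimization)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    candidates = [0.5 * (x1 + x2),
                  1.5 * x1 - 0.5 * x2,
                  -0.5 * x1 + 1.5 * x2]
    candidates.sort(key=f)          # best (lowest f) first
    return candidates[0], candidates[1]

# Example on a 1-D sphere function with parents 2.0 and 5.0:
best, second = linear_crossover([2.0], [5.0], lambda x: float(np.sum(x**2)))
```

Since only these three pre-destined points can ever be produced, the operator's limitation discussed above is visible directly in the code.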

Based on interval schema processing (which is incidentally similar in concept to Goldberg's (1991) virtual alphabets), Eshelman and Schaffer (1993) suggested a blend crossover, or BLX-α, operator for real-parameter GAs. For two parent solutions $x_i^{(1,t)}$ and $x_i^{(2,t)}$ (assuming $x_i^{(1,t)} < x_i^{(2,t)}$), the BLX-α randomly picks a solution in the range $[x_i^{(1,t)} - \alpha(x_i^{(2,t)} - x_i^{(1,t)}),\; x_i^{(2,t)} + \alpha(x_i^{(2,t)} - x_i^{(1,t)})]$. This crossover operator is illustrated in Fig. 14. Thus, if $u_i$ is a random number between 0 and 1, the following is an offspring:

$$x_i^{(1,t+1)} = (1-\gamma_i)\,x_i^{(1,t)} + \gamma_i\,x_i^{(2,t)}, \qquad (5)$$

where $\gamma_i = (1+2\alpha)u_i - \alpha$. It is important to note that the factor $\gamma_i$ is uniformly distributed for a fixed value of α; the figure also depicts this fact. The range of this probability distribution depends on the parameter α. If α is chosen to be zero, this crossover creates a random solution in the range $(x_i^{(1,t)}, x_i^{(2,t)})$. In a number of test problems, the investigators have reported that BLX-0.5 (with α = 0.5) performs better than BLX operators with any other α value. It is important to note that there exist a number of other crossover operators which work on the same principle. The arithmetic crossover (Michalewicz and Janikow 1991) uses Eq. (5) with a fixed value of γ for all decision variables. However, γ is chosen by carefully calculating its maximum allowed value over all decision variables so that the resulting offspring does not exceed the lower or upper limits. The extended crossover operator (Voigt et al. 1995) is also similar to BLX-α.
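In code, Eq. (5) for a single variable reads as follows (a sketch; the function name is illustrative):

```python
import numpy as np

def blx_alpha(x1_i, x2_i, alpha=0.5, rng=None):
    """BLX-alpha for one variable (Eq. 5): offspring = (1 - gamma)*x1 + gamma*x2,
    with gamma = (1 + 2*alpha)*u - alpha and u uniform in [0, 1)."""
    rng = rng or np.random.default_rng()
    u = rng.random()
    gamma = (1.0 + 2.0 * alpha) * u - alpha
    return (1.0 - gamma) * x1_i + gamma * x2_i

# With alpha = 0 the offspring always lies between the two parents:
rng = np.random.default_rng(1)
ys = [blx_alpha(2.0, 5.0, alpha=0.0, rng=rng) for _ in range(1000)]
```

With a positive α, the sampling range extends symmetrically beyond both parents by α times the parent distance, exactly as in the bracketed interval above.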

The simulated binary crossover (SBX) operator assigns a higher probability for an offspring to remain close to the parents than away from the parents (Deb and Agrawal 1995). The procedure of computing the offspring $x_i^{(1,t+1)}$ and $x_i^{(2,t+1)}$ from the parent solutions $x_i^{(1,t)}$ and $x_i^{(2,t)}$ is as follows. A spread factor $\beta_i$ is defined as the ratio of the absolute difference in offspring values to that of the parents:

$$\beta_i = \left|\frac{x_i^{(2,t+1)} - x_i^{(1,t+1)}}{x_i^{(2,t)} - x_i^{(1,t)}}\right|. \qquad (6)$$

First, a random number $u_i$ between 0 and 1 is created for the $i$-th variable. Thereafter, from a specified probability distribution function, the ordinate $\beta_{qi}$ is found so that the area under the probability curve from 0 to $\beta_{qi}$ is equal to the chosen random number $u_i$. The probability distribution is chosen to have a search power similar to that of a single-point crossover in binary-coded GAs, and is given as follows (Deb and Agrawal 1995):

$$P(\beta_i) = \begin{cases} 0.5(\eta_c+1)\,\beta_i^{\eta_c}, & \text{if } \beta_i \le 1; \\ 0.5(\eta_c+1)\,\dfrac{1}{\beta_i^{\eta_c+2}}, & \text{otherwise.} \end{cases} \qquad (7)$$

Figure 15 shows this probability distribution with $\eta_c = 2$ and 5 for creating offspring from two parent solutions ($x_i^{(1,t)} = 2.0$ and $x_i^{(2,t)} = 5.0$) in the real space. In the above expressions, the distribution index $\eta_c$ is any non-negative real number. A large value of $\eta_c$ gives a higher probability of creating 'near-parent' solutions, whereas a small value of $\eta_c$ allows distant solutions to be selected as offspring. Typically, $\eta_c$ in the range of 2 to 5 is used in most single-objective optimization studies (Deb and Goyal 1998). After obtaining $\beta_{qi}$ from the above probability distribution, two offspring solutions are calculated as follows:

$$x_i^{(1,t+1)} = 0.5\left[(1+\beta_{qi})\,x_i^{(1,t)} + (1-\beta_{qi})\,x_i^{(2,t)}\right], \qquad (8)$$

$$x_i^{(2,t+1)} = 0.5\left[(1-\beta_{qi})\,x_i^{(1,t)} + (1+\beta_{qi})\,x_i^{(2,t)}\right]. \qquad (9)$$

Note that the two offspring are symmetrically placed around the parent solutions. This is deliberately enforced to avoid a bias towards any particular parent solution in a single recombination operation. Another interesting aspect of this recombination operator is that for a fixed $\eta_c$ the offspring have a spread proportional to that of the parent solutions. From Eqs. (8) and (9), we have

$$\left(x_i^{(2,t+1)} - x_i^{(1,t+1)}\right) = \beta_{qi}\left(x_i^{(2,t)} - x_i^{(1,t)}\right). \qquad (10)$$

This has an important implication. If the difference in decision variable values of the parents (the term inside the bracket on the right side) is smaller (larger), the offspring are also closer to (more distant from) the parents. Thus, during early iterations, with parents far away from each other, offspring are created over the entire search space, thereby providing a good initial search of the entire space. When solutions converge near a good region, the parents are closer to each other and this

Fig. 14 The BLX-α operator. Parents are marked by filled circles

Fig. 15 The probability density function for creating offspring under an SBX-η_c operator. Parents are marked with an 'o'


operator helps provide a more focused search. This property makes the resulting GA self-adaptive (Deb and Beyer 2001).
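Sampling $\beta_{qi}$ from the density in Eq. (7) has a simple closed form, obtained by inverting its cumulative distribution (a standard manipulation not spelled out in the text): $\beta_q = (2u)^{1/(\eta_c+1)}$ for $u \le 0.5$, and $\beta_q = (1/(2(1-u)))^{1/(\eta_c+1)}$ otherwise. A sketch:

```python
import numpy as np

def sbx(x1_i, x2_i, eta_c=2.0, rng=None):
    """SBX for one variable: sample beta_q by inverting the cumulative
    distribution of Eq. (7), then apply Eqs. (8) and (9)."""
    rng = rng or np.random.default_rng()
    u = rng.random()
    if u <= 0.5:
        beta_q = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta_q = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    y1 = 0.5 * ((1.0 + beta_q) * x1_i + (1.0 - beta_q) * x2_i)
    y2 = 0.5 * ((1.0 - beta_q) * x1_i + (1.0 + beta_q) * x2_i)
    return y1, y2
```

By construction the two offspring average to the parent mean, and their spread is $\beta_q$ times the parent spread, matching Eq. (10).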

Voigt et al. (1995) suggested a fuzzy recombination (FR) operator, which is similar to the SBX operator. In some sense, the FR operator can be thought of as a special case of the SBX operator with $\eta_c = 1$.

The fuzzy connectives based recombination operator (Herrera et al. 1997) is also applied variable by variable. For each variable $x_i$, three regions are marked: region I $(x_i^{(L)}, x_i^{(1,t)})$, region II $(x_i^{(1,t)}, x_i^{(2,t)})$ and region III $(x_i^{(2,t)}, x_i^{(U)})$ (here, we assume that $x_i^{(1,t)} < x_i^{(2,t)}$). A fourth region $(y_i^{(1)}, y_i^{(2)})$ is constructed so as to overlap all of the above three regions, with $y_i^{(1)} \le x_i^{(1,t)}$ and $y_i^{(2)} \ge x_i^{(2,t)}$. Once these regions are identified, one solution is chosen from each of them by using two user-defined fuzzy connective functions, which are defined over the two normalized parents.

Besides these, there exist a number of other recombination operators, such as the unfair average crossover (Nomura and Miyoshi 1996), and extensions to the above operators. Although these operators are well-tested on different problems, because of their strict variable-wise application they may not be adequate for handling problems which require linkage preservation among variables (Harik and Goldberg 1996). To be able to solve such problems, researchers have also suggested a number of recombination operators which work directly on the decision variable vectors. In the following paragraphs, we describe a few such operators.

In the unimodal normally distributed crossover (UNDX) operator (Ono and Kobayashi 1997), $(\mu-1)$ parents are randomly chosen and their mean $\vec{g}$ is computed. From this mean, the $(\mu-1)$ direction vectors $\vec{d}^{(i)} = \vec{x}^{(i)} - \vec{g}$ are formed. Let the direction cosines be $\vec{e}^{(i)} = \vec{d}^{(i)}/|\vec{d}^{(i)}|$. Thereafter, from another randomly chosen parent $\vec{x}^{(\mu)}$, the length $D$ of the component of the vector $(\vec{x}^{(\mu)} - \vec{g})$ orthogonal to all $\vec{e}^{(i)}$ is computed. Let $\vec{e}^{(j)}$ (for $j = \mu, \ldots, n$, where $n$ is the size of the variable vector $\vec{x}$) be the orthonormal basis of the subspace orthogonal to the subspace spanned by all $\vec{e}^{(i)}$ for $i = 1, \ldots, (\mu-1)$. Then, the offspring is created as follows:

$$\vec{y} = \vec{g} + \sum_{i=1}^{\mu-1} w_i\,|\vec{d}^{(i)}|\,\vec{e}^{(i)} + \sum_{i=\mu}^{n} v_i\,D\,\vec{e}^{(i)}, \qquad (11)$$

where $w_i$ and $v_i$ are zero-mean normally distributed variables with variances $\sigma_f^2$ and $\sigma_g^2$, respectively. Kita and Yamamura (1999) suggested $\sigma_f = 1/\sqrt{\mu-2}$ and $\sigma_g = 0.35/\sqrt{n-\mu-2}$, and observed that $\mu = 3$ to 7 performed well. It is interesting to note that each offspring is created around the mean vector $\vec{g}$: the probability of creating an offspring reduces with distance from the mean vector, and the maximum probability is assigned at the mean vector itself. Figure 16 shows three parents and a few offspring created by the UNDX operator. The complexity of the above procedure for creating one offspring is $O(\mu^2)$, governed by the Gram-Schmidt orthonormalization needed in the process.

The simplex crossover (SPX) operator (Tsutsui et al. 1999) also creates offspring around the mean, but restricts them to a predefined region: a simplex similar to, but $\sqrt{\mu+1}$ times bigger than, the parent simplex. A distinguishing aspect of SPX compared to the UNDX operator is that SPX assigns a uniform probability distribution for creating any solution in the restricted region. Figure 17 shows a number of offspring solutions created using three parents. The computational complexity for creating one offspring here is $O(\mu)$.

The parent-centric recombination (PCX) operator (Deb et al. 2002) is an extension of the SBX operator (described earlier) to any number of parents. For $\mu$ parents, first the mean vector $\vec{g}$ is computed. Thereafter, for each offspring, one parent $\vec{x}^{(p)}$ is chosen with equal probability. The direction vector $\vec{d}^{(p)} = \vec{x}^{(p)} - \vec{g}$ is calculated. Thereafter, from each of the other $(\mu-1)$ parents, the perpendicular distance $D_i$ to the line $\vec{d}^{(p)}$ is computed and the average $\bar{D}$ is found. The offspring is created as follows:

$$\vec{y} = \vec{x}^{(p)} + w_f\,\vec{d}^{(p)} + \sum_{i=1,\,i \ne p}^{\mu} w_g\,\bar{D}\,\vec{e}^{(i)}, \qquad (12)$$

Fig. 16 UNDX

Fig. 17 SPX


where $\vec{e}^{(i)}$ are the $(\mu-1)$ orthonormal bases that span the subspace perpendicular to $\vec{d}^{(p)}$. Thus, the complexity of the PCX operator in creating one offspring is $O(\mu)$, instead of the $O(\mu^2)$ required for the UNDX operator. The parameters $w_f$ and $w_g$ are zero-mean normally distributed variables with variances $\sigma_f^2$ and $\sigma_g^2$, respectively. The important distinction from the UNDX operator is that under the PCX operator offspring solutions are centered around each parent, with a higher probability of creating an offspring close to the chosen parent. Figure 18 shows a distribution of offspring solutions with three parents.
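A sketch of Eq. (12) follows. For simplicity it perturbs along the full $(n-1)$-dimensional complement of $\vec{d}^{(p)}$ rather than only $(\mu-1)$ directions, and the helper names are illustrative rather than taken from the original implementation.

```python
import numpy as np

def pcx(parents, sigma_f=0.1, sigma_g=0.1, rng=None):
    """PCX sketch (Eq. 12): offspring centred on a randomly chosen parent,
    perturbed along its direction from the mean and, scaled by the average
    perpendicular distance D-bar, in the orthogonal subspace."""
    rng = rng or np.random.default_rng()
    X = np.asarray(parents, float)            # mu x n
    mu, n = X.shape
    g = X.mean(axis=0)
    p = rng.integers(mu)                      # parent chosen with equal probability
    d_p = X[p] - g
    e_p = d_p / np.linalg.norm(d_p)
    # average perpendicular distance of the other parents to the line d^(p)
    others = np.delete(X, p, axis=0) - X[p]
    perp = others - np.outer(others @ e_p, e_p)
    D_bar = np.mean(np.linalg.norm(perp, axis=1))
    # orthonormal basis of the subspace perpendicular to d^(p)
    basis = [e_p]
    for v in np.eye(n):
        w = v - sum(np.dot(v, b) * b for b in basis)
        if np.linalg.norm(w) > 1e-10:
            basis.append(w / np.linalg.norm(w))
    y = X[p] + rng.normal(0.0, sigma_f) * d_p
    for e in basis[1:]:
        y = y + rng.normal(0.0, sigma_g) * D_bar * e
    return y
```

With both variances set to zero, the offspring collapses onto the chosen parent, which makes the parent-centric character of the operator easy to verify.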

Acknowledgement The author wishes to thank his students Dhiraj Joshi, Ashish Anand, Abhishek Porwal, and Amit Gautam for their programming expertise.

References

Bäck T (1996) Evolutionary algorithms in theory and practice. Oxford University Press, New York

Beyer H-G (2001) The theory of evolution strategies. Springer, Berlin

Beyer H-G, Deb K (2001) On self-adaptive features in real-parameter evolutionary algorithms. IEEE Trans Evol Comput 5(3): 250–270

Box GEP (1957) Evolutionary operation: a method for increasing industrial productivity. Appl Stat 6(2): 81–101

Box MJ (1965) A new method of constrained optimization and a comparison with other methods. Comput J 8(1): 42–52

Brooks SH (1958) A discussion of random methods for seeking maxima. Operations Res 6: 244–253

Deb K (1995) Optimization for engineering design: algorithms and examples. Prentice-Hall, New Delhi

Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 186(2–4): 311–338

Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, Chichester, UK

Deb K (2003) Unveiling innovative design principles by means of multiple conflicting objectives. Eng Optimization 35(5): 445–470

Deb K, Agrawal RB (1995) Simulated binary crossover for continuous search space. Complex Syst 9(2): 115–148

Deb K, Anand A, Joshi D (2002) A computationally efficient evolutionary algorithm for real-parameter optimization. Evol Comput J 10(4): 371–395

Deb K, Beyer H-G (2001) Self-adaptive genetic algorithms with simulated binary crossover. Evol Comput J 9(2): 197–221

Deb K, Goyal M (1996) A combined genetic adaptive search (GeneAS) for engineering design. Comput Sci Inform 26(4): 30–45

Deb K, Goyal M (1998) A robust optimization procedure for mechanical component design based on genetic adaptive search. Trans ASME: J Mech Des 120(2): 162–164

Duan QY, Gupta VK, Sorooshian S (1993) Shuffled complex evolution approach for effective and efficient global minimization. J Optimization: Theory Appl 76(3): 501–521

Eshelman LJ (1991) The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination. In: Foundations of genetic algorithms 1 (FOGA-1), pp 265–283

Eshelman LJ, Schaffer JD (1993) Real-coded genetic algorithms and interval-schemata. In: Foundations of genetic algorithms 2 (FOGA-2), pp 187–202

Fogel DB (1995) Evolutionary computation. IEEE Press, Piscataway, NJ

Fogel LJ, Angeline PJ, Fogel DB (1995) An evolutionary programming approach to self-adaptation on finite state machines. In: Proceedings of the fourth international conference on evolutionary programming, pp 355–365

Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading, MA

Goldberg DE (1991) Real-coded genetic algorithms, virtual alphabets, and blocking. Complex Syst 5(2): 139–168

Goldberg DE, Deb K, Thierens D (1993) Toward a better understanding of mixing in genetic algorithms. J Society Instrum Control Eng (SICE) 32(1): 10–16

Goldberg DE, Richardson J (1987) Genetic algorithms with sharing for multimodal function optimization. In: Proceedings of the first international conference on genetic algorithms and their applications, pp 41–49

Hansen N, Ostermeier A (1996) Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation. In: Proceedings of the IEEE international conference on evolutionary computation, pp 312–317

Harik G, Goldberg DE (1996) Learning linkages. In: Foundations of genetic algorithms 4 (FOGA-4), pp 247–262

Herrera F, Lozano M, Verdegay JL (1997) Fuzzy connectives based crossover operators to model genetic algorithms population diversity. Fuzzy Sets Syst 92(1): 21–30

Higuchi T, Tsutsui S, Yamamura M (2000) Theoretical analysis of simplex crossover for real-coded genetic algorithms. In: Parallel problem solving from nature (PPSN-VI), pp 365–374

Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor, MI

Joines JA, Houck CR (1994) On the use of nonstationary penalty functions to solve nonlinear constrained optimization problems with GAs. In: Proceedings of the international conference on evolutionary computation, pp 579–584

Kita H (1998) A comparison study on self-adaptation in evolution strategies and real-coded genetic algorithms. Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Japan

Kita H, Ono I, Kobayashi S (1999) Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In: Proceedings of the 1999 congress on evolutionary computation, pp 1581–1587

Kita H, Yamamura M (1999) A functional specialization hypothesis for designing genetic algorithms. In: Proceedings of the 1999 IEEE international conference on systems, man, and cybernetics, pp 579–584

Luus R, Jaakola THI (1973) Optimization by direct search and systematic reduction of the size of search region. AIChE J 19: 760–766

Michalewicz Z (1992) Genetic algorithms + data structures = evolution programs. Springer, Berlin

Michalewicz Z, Janikow CZ (1991) Handling constraints in genetic algorithms. In: Proceedings of the fourth international conference on genetic algorithms, pp 151–157

Nelder JA, Mead R (1965) A simplex method for function minimization. Comput J 7: 308–313

Nomura T, Miyoshi T (1996) Numerical coding and unfair average crossover in GA for fuzzy rule extraction in dynamic environments. In: Uchikawa Y, Furuhashi T (eds) Fuzzy logic, neural networks, and evolutionary computation (Lecture Notes in Computer Science 1152). Springer, Berlin, pp 55–72

Fig. 18 PCX

Ono I, Kobayashi S (1997) A real-coded genetic algorithm for function optimization using unimodal normal distribution crossover. In: Proceedings of the seventh international conference on genetic algorithms (ICGA-7), pp 246–253

Petrowski A (1996) A clearing procedure as a niching method for genetic algorithms. In: Proceedings of the third IEEE international conference on evolutionary computation (ICEC'96). IEEE Press, Piscataway, NJ, pp 798–803

Price WL (1983) Global optimization by controlled random search. J Optimization: Theory Appl 40: 333–348

Rao SS (1984) Optimization: theory and applications. Wiley, New York

Rechenberg I (1965) Cybernetic solution path of an experimental problem. Royal Aircraft Establishment, Library Translation Number 1122, Farnborough

Reklaitis GV, Ravindran A, Ragsdell KM (1983) Engineering optimization methods and applications. Wiley, New York

Rudolph G (1994) Convergence analysis of canonical genetic algorithms. IEEE Trans Neural Netw 5(1): 96–101

Satoh H, Yamamura M, Kobayashi S (1996) Minimal generation gap model for GAs considering both exploration and exploitation. In: Proceedings of IIZUKA: methodologies for the conception, design, and application of intelligent systems, pp 494–497

Schwefel H-P (1981) Numerical optimization of computer models. Wiley, Chichester, UK

Schwefel H-P (1987) Collective phenomena in evolutionary systems. In: Checkland P, Kiss I (eds) Problems of constancy and change – the complementarity of systems approaches to complexity. International Society for General System Research, Budapest, pp 1025–1033

Schwefel H-P (1995) Evolution and optimum seeking. Wiley, New York

Storn R, Price K (1997) Differential evolution – a fast and efficient heuristic for global optimization over continuous spaces. J Global Optimization 11: 341–359

Thierens D, Goldberg DE (1993) Mixing in genetic algorithms. In: Proceedings of the fifth international conference on genetic algorithms, pp 38–45

Tsutsui S, Yamamura M, Higuchi T (1999) Multi-parent recombination with simplex crossover in real-coded genetic algorithms. In: Proceedings of the genetic and evolutionary computation conference (GECCO-99), pp 657–664

Voigt H-M, Mühlenbein H, Cvetkovic D (1995) Fuzzy recombination for the breeder genetic algorithm. In: Proceedings of the sixth international conference on genetic algorithms, pp 104–111

Whitley D, Gordon S, Mathias K (1994) Lamarckian evolution, the Baldwin effect and function optimization. In: Parallel problem solving from nature (PPSN-III), pp 6–15

Wright A (1991) Genetic algorithms for real parameter optimization. In: Foundations of genetic algorithms 1 (FOGA-1), pp 205–218