
Multi Objective Optimization of Process Parameters for Friction Welding using Genetic Algorithm

A THESIS SUBMITTED TO THE FACULTY OF MECHANICAL ENGINEERING OF NATIONAL INSTITUTE OF TECHNOLOGY WARANGAL IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE AWARD OF THE DEGREE

MASTER OF TECHNOLOGY

in

MANUFACTURING ENGINEERING

by

K. LAXMAN SARAT

(ROLL NO. ME093107)

Under the Esteemed Guidance of

Dr. N. VENKAIAH

Assistant Professor

DEPARTMENT OF MECHANICAL ENGINEERING

NATIONAL INSTITUTE OF TECHNOLOGY, WARANGAL

WARANGAL-506004

2009-2011


TABLE OF CONTENTS

ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
LIST OF GRAPHS
ABBREVIATIONS AND NOTATIONS

CHAPTER-1: INTRODUCTION
1.0 Friction Welding
1.1 Principle of Friction Welding
1.2 Types of Friction Welding
1.2.1 Rotary Friction Welding
1.2.2 Linear Friction Welding
1.3 Advantages of Friction Welding
1.4 Applications
1.6 Optimization
1.6.1 Two Approaches to Multi-Objective Optimization
1.7 Evolutionary Algorithms
1.7.1 Basic Principles
1.7.2 Multi-Objective Optimization
1.8 Evolutionary Computation
1.9 Algorithm Design Issues
1.9.1 Fitness Assignment
1.10 Diversity Preservation
1.11 Elitism
1.12 Performance of Multi-objective Evolutionary Algorithms
1.12.1 Limit Behaviour
1.12.2 Concept of Pareto Set Approximation

CHAPTER-2: LITERATURE REVIEW

CHAPTER-3: GENETIC ALGORITHM
3.1 Search Space
3.2 Genetic Algorithms World
3.3 A Simple Genetic Algorithm
3.4 Terminologies and Operators of GA
3.4.1 Introduction
3.4.2 Key Elements
3.4.3 Individuals
3.5 Genes
3.6 Fitness
3.7 Populations
3.8 Data Structures
3.9 Search Strategies
3.10 Encoding
3.10.1 Binary Encoding
3.10.2 Octal Encoding
3.10.3 Hexadecimal Encoding
3.10.4 Permutation Encoding (Real Number Coding)
3.10.5 Value Encoding
3.10.6 Tree Encoding
3.11 Breeding
3.11.1 Selection
3.11.1.1 Roulette Wheel Selection
3.11.1.2 Random Selection
3.11.1.3 Rank Selection
3.11.1.4 Tournament Selection
3.11.1.5 Boltzmann Selection
3.11.1.6 Stochastic Universal Sampling
3.11.2 Crossover (Recombination)
3.11.2.1 Single Point Crossover
3.11.2.2 Two Point Crossover
3.11.2.3 Crossover Probability
3.11.3 Mutation
3.11.3.1 Mutation Probability

CHAPTER-4: OPTIMIZATION OF PROCESS PARAMETERS FOR FRICTION WELDING USING GENETIC ALGORITHM
4.1 GA Run
4.1.1 Input Data
4.1.2 Output
4.2 Results and Discussion

CHAPTER-5: CONCLUSIONS
5.1 Conclusions

REFERENCES


CHAPTER-1

INTRODUCTION

1.0 Friction Welding

Friction welding is a solid-state joining process which can be used to join a number of different metals. It achieves 100 per cent metal-to-metal joints, giving parent-metal properties, and is the only joining process to do so. No additional material or filler is required, and there are no emissions from the process.

The process involves making welds in which one component is moved relative to, and in pressure contact with, the mating component to produce heat at the faying surfaces. Softened material begins to extrude in response to the applied pressure, creating an annular upset. Heat is conducted away from the interfacial area for forging to take place. The weld is completed by the application of a forge force during or after the cessation of relative motion. The joint undergoes hot working to form a homogeneous, full-surface, high-integrity weld.

1.1 Principle of Friction Welding

Metals are made up of positive ions ‘floating’ in a ‘sea’ of electrons.

Fig 1.1. Positive ions floating in a sea of electrons


In principle, when two pieces of metal are brought together, they form one piece. In practice, however, metal pieces do not spontaneously bond to each other, because even polished metal surfaces carry a layer of oxide and surface contamination. They are also not smooth enough for the atoms to be brought close enough to bond.

Fig 1.2. Two polished metal surfaces brought into contact

In friction welding, the surfaces are rubbed together to burn off the oxide and surface-contamination layers and bring the atoms into close enough proximity to bond.

1.2 Types of Friction Welding

1.2.1 Rotary Friction Welding

Rotary friction welding is the most common form of friction welding and has become the industry standard for a number of processes, including welding API drill pipes and drill rods, joining of axle cases and spindles, and welding of piston rods.

Rotary friction welding involves holding one component still while spinning the other component and bringing the two together.


In order for a joint to be successful, the following steps must take place:

(a) Pre-friction: one part is held stationary in a fixed clamp; the other part is held in a rotating chuck.
(b) First friction: the chuck is accelerated to speed and the parts are brought into contact.
(c) Second friction: the force is increased and plastic material starts to extrude from the weld interface.
(d) Second friction (continued): this phase continues until sufficient material has been extruded.
(e) Forge: rotation is stopped, the force is increased, and the parts are forged together.
(f) Weld complete: the weld is complete, a full-area, homogeneous bond.

Fig 1.3. Steps in friction welding process

The rotary friction welding process is inherently flexible, robust and tolerant to different qualities of materials. The parameters involved are the rotational speed, time and force applied. However, as the process is inherently robust and flexible, deviations in these parameters can still give a good weld.


1.2.2 Linear Friction Welding

In linear friction welding the same principles apply as in rotary friction welding: one component is held still while the other is moved at speed and the two are brought together. The difference is that the moving component does not rotate; it is made to oscillate laterally. The weld times for parts are similar no matter how big the part.

Due to the geometry, rotary friction welding does not generate any friction at the centre of the rotating part, so this portion of the weld must heat up by conduction rather than by friction. This is not the case for a linear friction weld, where friction occurs throughout the weld surface. This means that weld times are very quick and do not vary hugely from part size to part size.

Because of the lateral motion of the weld, linear friction welding has the advantage of not needing a symmetrical part for welding.

1.3 Advantages of Friction Welding

Friction welding has become the industry standard in a number of applications. Some of the advantages of the process are detailed below:

- Weld monitoring can ensure 100% weld quality
- Friction welding produces a 100% cross-sectional weld area
- Far superior weld integrity compared to MIG welding
- Limited operator training is required; full automation is also possible
- The weld cycle is fully controlled by the machine
- Repeatable results
- Friction welding is a solid-state process and does not suffer from inclusions and gas porosity
- Friction welding requires no consumables and therefore becomes more cost-effective over time
- Friction welding will typically complete a full cross-sectional weld in 15% of the time it takes MIG welding to produce an 85% cross-sectional weld
- Friction welding requires no special weld-interface preparation
- No post-machining is needed for friction-welded components in many cases
- Dissimilar materials can be joined with no alloying of the material


1.4 Applications

Due to its advantages, friction welding has now become the industry standard in a number of applications:

1. Trailer axles – welding the spindle to the case. Thompson Friction Welding are the only company to make a double-ended machine which can weld two spindles to the same housing simultaneously. Advantages of this include:
- fast production time
- extremely accurate welds
- a reduced machinery footprint

2. Piston rods – welding the eye or yoke to the shaft. Thompson Friction Welding has supplied many machines that can weld pre-chromed bars without any damage to the delicate chrome surface.

3. API drill pipes and drill rods – welding of connectors to pipes and rods. Thompson Friction Welding are at the forefront of technology advances in this area, with new developments including internal flash removal over undulating surfaces.

In addition, friction welding can replace different forms of joining in other applications, speeding up process times and giving superior weld integrity:

- Air bag canisters
- Track rollers
- Drive shafts
- Anode hangers
- Engine valves
- Steering racks
- Turbochargers

Linear friction welding has a number of key applications in the aerospace industry. Welding of bladed discs offers great advantages and cost savings over traditional manufacturing methods.

1.6 Optimization


Optimization refers to finding one or more feasible solutions which correspond to extreme values of one or more objectives. The need for finding such optimal solutions arises mostly from the practical purpose of designing a solution for the minimum possible cost of fabrication, the maximum possible reliability, or some other extreme.

When an optimization problem modelling a physical system involves only one objective function, the task of finding the optimal solution is called single-objective optimization. Most such algorithms work by using gradient-based and heuristic-based search techniques. Besides the deterministic search principles involved in an algorithm, there also exist stochastic search principles, which allow optimization algorithms to find globally optimal solutions more reliably.

When an optimization problem involves more than one objective function, the task of finding one or more optimum solutions is known as multi-objective optimization. In the parlance of management, such search and optimization problems are known as multiple criteria decision-making (MCDM). Since multi-objective optimization involves multiple objectives, it is intuitive to realize that single-objective optimization is a degenerate case of multi-objective optimization.

Most real-world search and optimization problems naturally involve multiple objectives. The extremist principle mentioned above cannot be applied to only one objective when the rest of the objectives are also important. Different solutions may produce trade-offs (conflicting scenarios) among the different objectives: a solution that is extreme (in a better sense) with respect to one objective requires a compromise in the other objectives. This prohibits one from choosing a solution which is optimal with respect to only one objective.

1.6.1 Two Approaches to Multi-Objective Optimization

Although the fundamental difference between these two kinds of optimization lies in the cardinality of the optimal set, from a practical standpoint a user needs only one solution, no matter whether the associated optimization problem is single-objective or multi-objective. In multi-objective optimization, ideally the effort must be made in finding the set of trade-off optimal solutions by considering all objectives to be important. After a set of such trade-off solutions is found, a user can then use higher-level qualitative considerations to make a choice. In view of these discussions, we therefore suggest the following principle for an ideal multi-objective optimization procedure:


Step 1. Find multiple trade-off optimal solutions with a wide range of values for objectives.

Step 2. Choose one of the obtained solutions using higher-level information.

Ideal multi-objective optimization procedure

In Step 1, multiple trade-off solutions are found. Thereafter, in Step 2, higher-level information is used to choose one of the trade-off solutions. With this procedure in mind, it is easy to realize that single-objective optimization is a degenerate case of multi-objective optimization. In the case of single-objective optimization with only one global optimal solution, Step 1 will find only one solution, thereby not requiring us to proceed to Step 2. In the case of single-objective optimization with multiple global optima, both steps are necessary, first to find all or many of the global optima, and then to choose one from them by using the higher-level information about the problem.

Preference-based multi-objective optimization procedure

If such a relative preference factor among the objectives is known for a specific problem, there is no need to follow the above principle for solving a multi-objective optimization problem. A simple method would be to form a composite objective function as the weighted sum of the objectives, where the weight for an objective is proportional to the preference factor assigned to that particular objective. This method of scalarizing an objective vector into a single composite objective function converts the multi-objective optimization problem into a single-objective optimization problem. When such a composite objective function is optimized, in most cases it is possible to obtain one particular trade-off solution. This procedure of handling multi-objective optimization problems is much simpler, though more subjective than the above ideal procedure. We call this procedure preference-based multi-objective optimization.
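As a minimal sketch of the weighted-sum scalarization just described (the function name and the example weights are illustrative, not taken from this thesis):

```python
def weighted_sum(objectives, weights):
    """Scalarize an objective vector into one composite value.

    Each weight is proportional to the preference factor assigned to
    the corresponding objective (here assumed to sum to 1).
    """
    assert len(objectives) == len(weights)
    return sum(w * f for w, f in zip(weights, objectives))

# Example: two objective values with a 70/30 preference split.
composite = weighted_sum([0.8, 0.4], [0.7, 0.3])  # -> 0.68
```

Optimizing this single composite value then yields one particular trade-off solution, as the text notes.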

The ideal multi-objective optimization procedure suggested earlier is less subjective. In Step 1, a user does not need any relative preference vector information; the task there is to find as many different trade-off solutions as possible. Once a set of trade-off solutions is found, Step 2 then requires certain problem information in order to choose one solution. It is important to mention that in Step 2 the problem information is used to evaluate and compare each of the obtained trade-off solutions. In the ideal approach, the problem information is not used to search for a new solution; instead, it is used to choose one solution from a set of already obtained trade-off solutions. Thus, there is a fundamental difference in how the problem information is used in the two approaches. In the preference-based approach, a relative preference vector needs to be supplied without any knowledge of the possible consequences. In the ideal approach, by contrast, the problem information is used to choose one solution from the obtained set of trade-off solutions. We argue that the ideal approach is more methodical, more practical, and less subjective. At the same time, we highlight the fact that if a reliable relative preference vector is available for a problem, there is no reason to find other trade-off solutions. In such a case, a preference-based approach would be adequate.

1.7 Evolutionary Algorithms

The classical way to solve multi-objective optimization problems is to follow the preference-based approach, where a relative preference vector is used to scalarize multiple objectives. Since classical search and optimization methods use a point-by-point approach, where one solution in each iteration is modified to a different solution, the outcome of using a classical optimization method is a single optimized solution.

However, the field of search and optimization has changed over the last few years with the introduction of a number of non-classical, unorthodox and stochastic search and optimization algorithms. Among these, the evolutionary algorithm (EA) mimics nature's evolutionary principles to drive its search towards an optimal solution. One of the most striking differences from classical search and optimization algorithms is that EAs use a population of solutions in each iteration, instead of a single solution. Since a population of solutions is processed in each iteration, the outcome of an EA is also a population of solutions. If an optimization problem has a single optimum, all EA population members can be expected to converge to that optimum solution. However, if an optimization problem has multiple optimal solutions, an EA can be used to capture multiple optimal solutions in its final population.

This ability of an EA to find multiple optimal solutions in one single simulation run makes EAs unique in solving multi-objective optimization problems. Since Step 1 of the ideal strategy for multi-objective optimization requires multiple trade-off solutions to be found, an EA's population approach can be suitably utilized to find a number of solutions in a single simulation run.


The term evolutionary algorithm (EA) stands for a class of stochastic optimization methods that simulate the process of natural evolution. The origins of EAs can be traced back to the late 1950s, and since the 1970s several evolutionary methodologies have been proposed, mainly genetic algorithms, evolutionary programming, and evolution strategies [1]. All of these approaches operate on a set of candidate solutions. Using strong simplifications, this set is subsequently modified by two basic principles: selection and variation. While selection mimics the competition for reproduction and resources among living beings, the other principle, variation, imitates the natural capability of creating "new" living beings by means of recombination and mutation. Although the underlying mechanisms are simple, these algorithms have proven themselves to be a general, robust and powerful search mechanism [1]. In particular, they possess several characteristics that are desirable for problems involving i) multiple conflicting objectives, and ii) intractably large and highly complex search spaces. As a result, numerous algorithmic variants have been proposed and applied to various problem domains since the mid-1980s.

1.7.1 Basic Principles

1.7.2 Multi-Objective Optimization

The scenario involves an arbitrary optimization problem with k objectives which are, without loss of generality, all to be maximized and all equally important, i.e., no additional knowledge about the problem is available. We assume that a solution to this problem can be described in terms of a decision vector (x1, x2, . . . , xn) in the decision space X. A function f : X → Y evaluates the quality of a specific solution by assigning it an objective vector (y1, y2, . . . , yk) in the objective space Y (Fig 1.4). Now, let us suppose that the objective space is a subset of the real numbers, i.e., Y ⊆ ℝ, and that the goal of the optimization is to maximize the single objective. In such a single-objective optimization problem, a solution x1 ∈ X is better than another solution x2 ∈ X if y1 > y2, where y1 = f(x1) and y2 = f(x2). Although several optimal solutions may exist in decision space, they are all mapped to the same objective vector, i.e., there exists only a single optimum in objective space.

In the case of a vector-valued evaluation function f with Y ⊆ ℝ^k and k > 1, the situation of comparing two solutions x1 and x2 is more complex. Following the well-known concept of Pareto dominance, an objective vector y1 is said to dominate another objective vector y2 (y1 ≻ y2) if no component of y1 is smaller than the corresponding component of y2 and at least one component is greater. Accordingly, we can say that a solution x1 is better than another solution x2, i.e., x1 dominates x2 (x1 ≻ x2), if f(x1) dominates f(x2). Here, optimal solutions, i.e., solutions not dominated by any other solution, may be mapped to different objective vectors. In other words, there may exist several optimal objective vectors representing different trade-offs between the objectives. The set of optimal solutions in the decision space X is in general denoted as the Pareto set X* ⊆ X, and we will denote its image in objective space as the Pareto front Y* = f(X*) ⊆ Y. With many multi-objective optimization problems, knowledge about this set helps the decision maker in choosing the best compromise solution. For instance, when designing computer systems, engineers often perform a so-called design space exploration to learn more about the Pareto set. Thereby, the design space is reduced to the set of optimal trade-offs: a first step in selecting an appropriate implementation.

Fig 1.4. Illustration of a general multi-objective optimization problem

Although there are different ways to approach a multi-objective optimization problem, e.g., by aggregation of the objectives into a single one, most work in the area of evolutionary multi-objective optimization has concentrated on the approximation of the Pareto set. Therefore, we will assume in the following that the goal of the optimization is to find or approximate the Pareto set. Accordingly, the outcome of an MOEA is considered to be a set of mutually nondominated solutions, or a Pareto set approximation for short.
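The dominance relation defined above translates directly into code. A minimal sketch, assuming maximization as in the text (helper names are ours):

```python
def dominates(y1, y2):
    """True if objective vector y1 Pareto-dominates y2 (maximization):
    no component of y1 is smaller, and at least one is greater."""
    return (all(a >= b for a, b in zip(y1, y2))
            and any(a > b for a, b in zip(y1, y2)))

def nondominated(vectors):
    """Return the mutually nondominated subset: a Pareto set approximation.
    Note dominates(y, y) is False, so each vector is safely compared
    against the whole list, itself included."""
    return [y for y in vectors
            if not any(dominates(other, y) for other in vectors)]

front = nondominated([(1, 3), (2, 2), (3, 1), (1, 1)])
# -> [(1, 3), (2, 2), (3, 1)]; (1, 1) is dominated by (2, 2)
```

For minimization, the comparisons would simply be flipped.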

1.8 Evolutionary Computation

Generating the Pareto set can be computationally expensive and is often infeasible, because the complexity of the underlying application prevents exact methods from being applicable. For this reason, a number of stochastic search strategies such as evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization have been developed: they usually do not guarantee to identify optimal trade-offs but try to find a good approximation, i.e., a set of solutions whose objective vectors are (hopefully) not too far away from the optimal objective vectors.

Roughly speaking, a general stochastic search algorithm consists of three parts: i) a working memory that contains the currently considered solution candidates, ii) a selection module, and iii) a variation module, as depicted in Fig 1.5.

Fig 1.5. Components of a general stochastic search algorithm

As to selection, one can distinguish between mating and environmental selection. Mating selection aims at picking promising solutions for variation and is usually performed in a randomized fashion. In contrast, environmental selection determines which of the previously stored solutions and the newly created ones are kept in the internal memory. The variation module takes a set of solutions and systematically or randomly modifies these solutions in order to generate potentially better solutions. In summary, one iteration of a stochastic optimizer includes the consecutive steps of mating selection, variation, and environmental selection; this cycle may be repeated until a certain stopping criterion is fulfilled.


Many stochastic search strategies were originally designed for single-objective optimization and therefore consider only one solution at a time, i.e., the working memory contains just a single solution. As a consequence, no mating selection is necessary, and variation is performed by modifying the current solution candidate.
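Such a single-solution scheme can be sketched as follows (an illustrative hill-climbing-style optimizer on bit strings; the one-max objective and all parameter values are our assumptions, not from the thesis):

```python
import random

def stochastic_search(f, n_bits, iterations, seed=0):
    """Single-solution stochastic search: the working memory holds one
    candidate, variation flips a random bit, and environmental selection
    keeps the better of the old and new candidates (maximization)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iterations):
        y = list(x)
        y[rng.randrange(n_bits)] ^= 1   # variation: flip one random bit
        if f(y) >= f(x):                # environmental selection
            x = y
    return x

# One-max example: maximize the number of ones in the string.
best = stochastic_search(sum, n_bits=8, iterations=200)
```

Because a single bit flip changes the one-max value by exactly one, the accept-if-not-worse rule here only ever accepts improvements, so the search typically reaches the all-ones string on this small instance.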

In contrast, an evolutionary algorithm is characterized by three features:

1. A set of solution candidates is maintained,
2. A mating selection process is performed on this set, and
3. Several solutions may be combined in terms of recombination to generate new solutions.

By analogy to natural evolution, the solution candidates are called individuals and the set of solution candidates is called the population. Each individual represents a possible solution, i.e., a decision vector, to the problem at hand; however, an individual is not a decision vector but rather encodes it based on an appropriate representation.

The mating selection process usually consists of two stages: fitness assignment and sampling. In the first stage, the individuals in the current population are evaluated in the objective space and then assigned a scalar value, the fitness, reflecting their quality. Afterwards, a so-called mating pool is created by random sampling from the population according to the fitness values. For instance, a commonly used sampling method is binary tournament selection. Here, two individuals are randomly chosen from the population, and the one with the better fitness value is copied to the mating pool. This procedure is repeated until the mating pool is filled. Then, the variation operators are applied to the mating pool. With EAs, there are usually two of them, namely the recombination and the mutation operator. The recombination operator takes a certain number of parents and creates a predefined number of children by combining parts of the parents. To mimic the stochastic nature of evolution, a crossover probability is associated with this operator. By contrast, the mutation operator modifies individuals by changing small parts of the associated vectors according to a given mutation rate. Note that, due to random effects, some individuals in the mating pool may not be affected by variation and therefore simply represent a copy of a previously generated solution.


Finally, environmental selection determines which individuals of the population and of the modified mating pool form the new population. The simplest way is to use the latter set as the new population. An alternative is to combine both sets and deterministically choose the best individuals for survival.

Based on the above concepts, natural evolution is simulated by an iterative computation process, as shown in Fig 1.6. First, an initial population is created at random (or according to a predefined scheme), which is the starting point of the evolution process. Then a loop consisting of the steps evaluation (fitness assignment), selection, recombination, and/or mutation is executed a certain number of times. Each loop iteration is called a generation, and often a predefined maximum number of generations serves as the termination criterion of the loop. However, other conditions, e.g., stagnation in the population or the existence of an individual of sufficient quality, may also be used to stop the simulation. At the end, the best individuals in the final population represent the outcome of the EA.

Fig 1.6. Outline of a general evolutionary algorithm for a problem with four binary decision variables
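The generational cycle just outlined (fitness assignment, binary tournament mating selection, crossover, mutation, and environmental selection) can be sketched as a minimal binary GA; all parameter values and names below are illustrative assumptions, not the settings used later in this thesis:

```python
import random

def genetic_algorithm(fitness, n_bits=4, pop_size=8, generations=30,
                      pc=0.9, pm=0.1, seed=1):
    """Minimal generational GA: binary tournament mating selection,
    one-point crossover with probability pc, bit-flip mutation at rate
    pm, and full generational replacement (no elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Mating selection: binary tournaments fill the mating pool.
        pool = []
        while len(pool) < pop_size:
            a, b = rng.sample(pop, 2)
            pool.append(list(max(a, b, key=fitness)))
        # Variation: one-point crossover with probability pc ...
        children = []
        for i in range(0, pop_size, 2):
            p1, p2 = pool[i], pool[i + 1]
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            children += [p1, p2]
        # ... then bit-flip mutation at rate pm per bit.
        for child in children:
            for j in range(n_bits):
                if rng.random() < pm:
                    child[j] ^= 1
        pop = children  # environmental selection: replace the population
    return max(pop, key=fitness)

# One-max on four binary decision variables, as in the figure.
best = genetic_algorithm(sum)
```

Because replacement here is purely generational, the best individual can be lost between generations; the elitism discussed in Section 1.11 addresses exactly this.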

1.9 Algorithm Design Issues

The goal of approximating the Pareto set is itself multi-objective. For instance, we would like to minimize the distance of the generated solutions to the Pareto set and to maximize the diversity of the achieved Pareto set approximation. This is certainly a fuzzy statement, as it is impossible to describe exactly what a good approximation is in terms of a number of criteria such as closeness to the Pareto set, diversity, etc. Nevertheless, it well illustrates the two fundamental goals in MOEA design: guiding the search towards the Pareto set and keeping a diverse set of non-dominated solutions.

The first goal is mainly related to mating selection, in particular to the problem of assigning scalar fitness values in the presence of multiple optimization criteria. The second goal concerns selection in general, because we want to avoid a population that contains mostly identical solutions (with respect to the objective space and the decision space). Finally, a third issue which addresses both of the above goals is elitism, i.e., the question of how to prevent non-dominated solutions from being lost.

In the following, each of these aspects will be discussed: fitness assignment, diversity preservation, and elitism. Remarkably, they are well reflected by the development of the field of evolutionary multi-objective optimization. While the first studies on multi-objective evolutionary algorithms were mainly concerned with the problem of guiding the search towards the Pareto set, all approaches of the second generation incorporated, in addition, a niching concept in order to address the diversity issue. The importance of elitism was recognized and supported experimentally in the late nineties, and most of the third-generation MOEAs implement this concept in different ways.

1.9.1 Fitness Assignment

In contrast to single-objective optimization, where objective function and fitness function are

often identical, both fitness assignment and selection must allow for several objectives with

multi-criteria optimization problems. In general, one can distinguish aggregation-based,

criterion-based, and Pareto-based fitness assignment strategies (Fig. 1.7).


Fig. 1.7. Different fitness assignment strategies.

One approach, built on the traditional techniques for generating trade-off surfaces, is to

aggregate the objectives into a single parameterized objective function. The parameters of this

function are systematically varied during the optimization run in order to find a set of non-

dominated solutions instead of a single trade-off solution. For instance, some MOEAs use

weighted-sum aggregation, where the weights represent the parameters which are changed

during the evolution process.
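As a minimal illustration (my own sketch, not taken from the thesis), weighted-sum aggregation with systematically varied weights might look like this for two maximized objectives:

```python
# Sketch of weighted-sum aggregation: sweeping the weight w from 0 to 1
# yields a set of trade-off solutions instead of a single one.

def weighted_sum(objs, w):
    """Scalarize a 2-objective vector with weights (w, 1 - w)."""
    return w * objs[0] + (1.0 - w) * objs[1]

def sweep_weights(candidates, steps):
    """Best candidate under each weight setting of the sweep."""
    best = []
    for s in range(steps + 1):
        w = s / steps
        best.append(max(candidates, key=lambda c: weighted_sum(c, w)))
    return best

candidates = [(4, 1), (3, 3), (1, 4)]
print(sweep_weights(candidates, 2))   # -> [(1, 4), (3, 3), (4, 1)]
```

Note how each weight setting favours a different non-dominated candidate, which is exactly the mechanism the aggregation-based MOEAs exploit.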

Criterion-based methods switch between the objectives during the selection phase. Each time an

individual is chosen for reproduction, potentially a different objective will decide which member

of the population will be copied into the mating pool. For example, Schaffer proposed filling

equal portions of the mating pool according to the distinct objectives, while Kursawe suggested

assigning a probability to each objective which determines whether the objective will be the

sorting criterion in the next selection step—the probabilities can be user-defined or chosen

randomly over time.

The idea of calculating an individual’s fitness on the basis of Pareto dominance goes back to

Goldberg, and different ways of exploiting the partial order on the population have been

proposed. Some approaches use the dominance rank, i.e., the number of individuals by which an

individual is dominated, to determine the fitness values. Others make use of the dominance

depth; here, the population is divided into several fronts and the depth reflects to which front an

individual belongs. Alternatively, the dominance count, i.e., the number of individuals dominated by a given individual, can be taken into account. For instance, SPEA and SPEA2

assign fitness values on the basis of both dominance rank and count. Independent of the

technique used, the fitness is related to the whole population in contrast to aggregation-based

methods which calculate an individual’s raw fitness value independently of other individuals.
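The dominance-based measures just described can be sketched as follows (a minimal illustration of my own, assuming all objectives are maximized):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def dominance_rank(pop):
    """Per individual: how many others dominate it (lower is better)."""
    return [sum(dominates(q, p) for q in pop) for p in pop]

def dominance_count(pop):
    """Per individual: how many others it dominates (higher is better)."""
    return [sum(dominates(p, q) for q in pop) for p in pop]

pop = [(4, 1), (3, 3), (1, 4), (2, 2)]   # (2, 2) is dominated by (3, 3)
print(dominance_rank(pop))    # -> [0, 0, 0, 1]
print(dominance_count(pop))   # -> [0, 1, 0, 0]
```

SPEA2-style fitness would then combine both quantities; the exact combination used there is not reproduced here.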

1.10 Diversity Preservation

Most MOEAs try to maintain diversity within the current Pareto set approximation by

incorporating density information into the selection process: an individual’s chance of being

selected is decreased the greater the density of individuals in its neighbourhood. This issue is

closely related to the estimation of probability density functions in statistics, and the methods

used in MOEAs can be classified according to the categories for techniques in statistical density

estimation.

Kernel methods define the neighbourhood of a point in terms of a so-called Kernel function K

which takes the distance to another point as an argument. In practice, for each individual the

distances di to all other individuals i are calculated and after applying K the resulting values

K (di) are summed up. The sum of the K function values represents the density estimate for the

individual. Fitness sharing is the most popular technique of this type within the field of

evolutionary computation, which is used, e.g., in MOGA, NSGA, and NPGA.

Nearest neighbour techniques take the distance of a given point to its kth nearest neighbour into

account in order to estimate the density in its neighbourhood. Usually, the estimator is a function

of the inverse of this distance. SPEA2, for instance, calculates for each individual the distance to

the kth nearest individual and adds the reciprocal value to the raw fitness value (fitness is to be

minimized).
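A SPEA2-style nearest-neighbour density estimate can be sketched as follows (my simplified rendering; the +2 in the denominator follows the SPEA2 convention of keeping the estimate below 1):

```python
import math

def kth_nn_density(pop, k):
    """Density estimate per individual: 1 / (d_k + 2), where d_k is the
    Euclidean distance in objective space to the k-th nearest neighbour."""
    dens = []
    for i, p in enumerate(pop):
        dists = sorted(math.dist(p, q) for j, q in enumerate(pop) if j != i)
        d_k = dists[min(k - 1, len(dists) - 1)]
        dens.append(1.0 / (d_k + 2.0))
    return dens

pop = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
dens = kth_nn_density(pop, 1)
# the two clustered points receive a larger density value than the isolated one
```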

Histograms define a third category of density estimators that use a hyper grid to define

neighbourhoods within the space. The density around an individual is simply estimated by the

number of individuals in the same box of the grid. The hyper grid can be fixed, though usually it

is adapted with regard to the current population as, e.g., in PAES.


Fig. 1.8. Illustration of diversity preservation techniques.

Each of the three approaches is visualized in Fig.1.8. However, due to space limitations, a

discussion of strengths and weaknesses of the various methods cannot be provided here.

Furthermore, note that all of the above methods require a distance measure which can be defined

on the genotype, on the phenotype with respect to the decision space, or on the phenotype with

respect to the objective space. Most approaches consider the distance between two individuals as

the distance between the corresponding objective vectors.

1.11 Elitism

Elitism in the context of a single-objective GA means that the best solution found so far during the search always survives to the next generation. In this respect, all non-dominated solutions discovered by a multi-objective GA are considered elite solutions. However, implementing elitism in multi-objective optimization is not as straightforward as in single-objective optimization, mainly due to the large number of possible elitist solutions. Early multi-objective GAs did not use elitism; however, most recent multi-objective GAs and their variants do, and multi-objective GAs using elitist strategies tend to outperform their non-elitist counterparts. Multi-objective GAs use two strategies to implement elitism: (i) maintaining elitist solutions in the population, and (ii) storing elitist solutions in an external secondary list and reintroducing them to the population.

Strategies to maintain elitist solutions in the population

Random selection does not ensure that a non-dominated solution will survive to the next

generation. A straightforward implementation of elitism in a multi-objective GA is to copy all


non-dominated solutions in population Pt to population Pt+1, then fill the rest of Pt+1 by selecting

from the remaining dominated solutions in Pt. This approach will not work when the total

number of non-dominated parent and offspring solutions is larger than NP.
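The copy-all-non-dominated strategy can be sketched as follows (my own minimal rendering, including the failure case where the non-dominated set exceeds the population size NP):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def elitist_next_population(pop, NP):
    """Copy all non-dominated members of P_t into P_{t+1}, then fill the
    remaining slots from the dominated members."""
    nondom = [p for p in pop if not any(dominates(q, p) for q in pop)]
    if len(nondom) > NP:
        raise ValueError("more non-dominated solutions than NP; "
                         "a truncation scheme is needed")
    dominated = [p for p in pop if p not in nondom]
    return nondom + dominated[:NP - len(nondom)]
```

The raised error corresponds exactly to the situation described above, where a truncation (e.g., density-based) scheme becomes necessary.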

Diversity, fitness assignment, fitness sharing, and niching

Maintaining a diverse population is an important consideration in multi-objective GA to obtain

solutions uniformly distributed over the Pareto front. Without taking preventive measures, the

population tends to form relatively few clusters in multi-objective GA. This phenomenon is

called genetic drift, and several approaches have been devised to prevent it, as follows.

Fitness sharing

Fitness sharing encourages the search in unexplored sections of a Pareto front by artificially

reducing fitness of solutions in densely populated areas. To achieve this goal, densely populated

areas are identified and a penalty method is used to penalize the solutions located in such areas.

Crowding distance

Crowding distance approaches aim to obtain a uniform spread of solutions along the best-known

Pareto front without using a fitness sharing parameter.
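The thesis does not give the formula; as a hedged sketch, the standard NSGA-II crowding-distance computation measures, per solution, the size of the surrounding cuboid in objective space, with boundary solutions assigned infinite distance:

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for a front of objective vectors."""
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundary solutions
        span = front[order[-1]][obj] - front[order[0]][obj]
        if span == 0:
            continue
        for a in range(1, n - 1):
            i = order[a]
            dist[i] += (front[order[a + 1]][obj] - front[order[a - 1]][obj]) / span
    return dist
```

Solutions with a larger crowding distance sit in sparser regions and are preferred during selection, which spreads the population along the front without any sharing parameter.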


Constraint handling

Most real-world optimization problems include constraints that must be satisfied. A single-

objective GA may use one of four different constraint handling strategies:

(i) discarding infeasible solutions (the ‘‘death penalty’’);

(ii) reducing the fitness of infeasible solutions by using a penalty function;

(iii) crafting genetic operators to always produce feasible solutions; and

(iv) transforming infeasible solutions to be feasible (“repair”).
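As an illustration of strategy (ii), a quadratic penalty on the total constraint violation might look like this (the penalty form and coefficient are my own illustrative choices, not prescribed by the thesis):

```python
def penalized_fitness(raw_fitness, violations, r=10.0):
    """Penalty approach for a maximized fitness.

    violations: list of g_i(x) values, where g_i(x) <= 0 means 'satisfied'.
    Infeasible amounts are summed and penalized quadratically.
    """
    total = sum(max(0.0, g) for g in violations)
    return raw_fitness - r * total ** 2

print(penalized_fitness(5.0, [-1.0, -2.0]))   # feasible   -> 5.0
print(penalized_fitness(5.0, [0.5, -1.0]))    # infeasible -> 2.5
```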


Handling of constraints has not been adequately researched for multi-objective GAs. For instance, all major multi-objective GAs assume problems without constraints. While constraint handling strategies (i), (iii), and (iv) are directly applicable to the multi-objective case, implementation of penalty function strategies, the most frequently used constraint handling approach in single-objective GAs, is not straightforward in multi-objective GAs, due to the fact that fitness assignment is based on the non-dominance rank of a solution, not on its objective function values.

Jimenez proposed a niched selection strategy to address infeasibility in multi-objective problems

as follows:

Step 1: Randomly choose two solutions x and y from the population.

Step 2: If one of the solutions is feasible and the other one is infeasible, the winner is the feasible

solution, and stop. Otherwise, if both solutions are infeasible go to Step 3, else go to Step 4.

Step 3: In this case, solutions x and y are both infeasible. Then, select a random reference set C

among infeasible solutions in the population. Compare solutions x and y to the solutions in

reference set C with respect to their degree of infeasibility. In order to achieve this, calculate a

measure of infeasibility (e.g., the number of constraints violated or total constraint violation) for

solutions x, y, and those in set C. If one of solutions x and y is better and the other one is worse

than the best solution in C, with respect to the calculated infeasibility measure, then the winner is

the least infeasible solution. However, if there is a tie, that is, both solutions x and y are either

better or worse than the best solution in C, then their niche count in decision variable space is

used for selection. In this case, the solution with the lower niche count is the winner.

Step 4: In the case that solutions x and y are both feasible, select a random reference set C among

feasible solutions in the population. Compare solutions x and y to the solutions in set C. If one of

them is non-dominated in set C, and the other is dominated by at least one solution, the winner is

the former. Otherwise, there is a tie between solutions x and y, and the niche count of the

solutions is calculated in decision variable space. The solution with the smaller niche count is the

winner of the tournament selection.
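A heavily simplified sketch of Steps 1 and 2 of this tournament is given below; the reference-set comparison and niche-count tie-breaking of Steps 3 and 4 are omitted, and between two infeasible solutions the one with the smaller infeasibility measure simply wins (all names and the measure are my own choices):

```python
def tournament(x, y, feasible, violation):
    """Simplified constrained binary tournament.

    feasible:  predicate returning True for feasible solutions
    violation: infeasibility measure (e.g., total constraint violation)
    """
    if feasible(x) and not feasible(y):
        return x                               # Step 2: feasible beats infeasible
    if feasible(y) and not feasible(x):
        return y
    if not feasible(x) and not feasible(y):    # Step 3, flattened: less
        return x if violation(x) <= violation(y) else y   # infeasible wins
    return x   # both feasible: a dominance/niche comparison would go here (Step 4)
```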

1.12 Performance of Multi-objective Evolutionary Algorithms

Basically, there are two ways to assess the performance of MOEAs: i) theoretically by analysis

or ii) empirically by simulation. In the following, we will present some recent results with


respect to both approaches. On the one hand, we will discuss the limit behaviour of MOEAs and

provide a run-time analysis of two simple MOEAs. On the other hand, we will address the

question of how to assess the quality of the outcome of an MOEA from a theoretical perspective.

1.12.1 Limit Behaviour

The limit behaviour of MOEAs is of interest when we want to investigate their global

convergence properties. It refers to the question of what the algorithm is able to achieve in the limit,

i.e., when unlimited time resources are available.

Global Convergence. Roughly speaking, an MOEA is called globally convergent if the sequence of Pareto front approximations A(t) it produces converges to the true Pareto front Y* as the number of generations t goes to infinity. It is intuitively clear that this property can only be fulfilled with unlimited memory resources, as the cardinality of the Pareto front can, in general, be arbitrarily large. Practical implementations, however, always have to deal with limited memory resources. In this case one is restricted to finding a subset of the Pareto front, and a globally convergent algorithm should guarantee A(t) → Y′ ⊆ Y*.

In the single-objective case, two conditions are sufficient to guarantee global convergence:

1. A strictly covering mutation distribution, which ensures that any solution x′ ∈ X can be

produced from every x ∈ X by mutation with a positive probability, and

2. An elitist (environmental) selection rule, which ensures that an optimal solution is not lost and

no deterioration can occur.

While the mutation condition transfers easily to the multi-objective case, the elitist selection rule

does not. This is due to the fact that a total order of the solutions is not given anymore and

solutions can become incomparable to each other. If more non-dominated solutions arise than can be stored in the population, some have to be discarded. This environmental selection

strategy essentially determines whether an algorithm is globally convergent or not.

Rudolph, Hanne and Rudolph and Agapie proposed different selection schemes that preclude

deterioration and guarantee convergence. The basic idea is that solutions are only discarded if

they are replaced by a dominating alternative. This ensures sufficient monotonicity in the sequence of accepted solutions. However, no statements could be made with respect to the final

distribution of solutions.
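The basic discard-only-when-dominated idea can be sketched as follows (my own simplified, unbounded-memory rendering, not the authors' exact schemes):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_archive(archive, new):
    """Deterioration-free archive update: a new solution is rejected if any
    archived member dominates it, and otherwise evicts exactly the members
    it dominates."""
    if any(dominates(a, new) for a in archive):
        return archive                       # reject: accepting could deteriorate
    return [a for a in archive if not dominates(new, a)] + [new]
```

Because solutions only ever leave the archive in favour of a dominating alternative, the sequence of archives cannot deteriorate in the sense described above.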


Most state-of-the-art MOEAs, though, take density information into account in addition to the dominance criterion. Nevertheless, for all of these algorithms it can be proven that a succession of such selection steps can lead to deterioration, as depicted in Fig. 1.9. Hence, convergence can no longer be guaranteed for any of these algorithms.

Fig. 1.9. A possible deterioration of a hypothetical population of size 3.

In generation t, a fourth non-dominated solution is found and a truncation operation is invoked,

e.g., based on density information, to reduce the population to its maximum size. In generation t

+ 1, another solution is found that is dominated by the former, now discarded solution. The new

solution, however, is not dominated by the current population members. Now, the truncation

procedure again has to decide which solution to discard and might take a decision to keep this

new solution (e.g., as it has a lower density around it). In comparing the new situation after

generation t + 1 with the situation before generation t, one immediately notices that the

population became worse: the outer solutions remained the same, while the inner solution

‘deteriorated’.

In order to design a selection rule that enables global convergence with limited memory resources, together with a well-distributed subset of solutions, we have to define what we understand by a well-distributed Pareto set approximation, and then define a selection algorithm which respects this and fulfills the monotonicity condition to preclude deterioration.


1.12.2 Concept of Pareto Set Approximation

Since finding the Pareto front of an arbitrary objective space Y is usually not practical because of its size, one needs to be less ambitious in general. Therefore, the ε-approximate Pareto set was proposed as a practical solution concept, as it not only represents all vectors of Y* but also consists of a smaller number of elements. The ε-approximate Pareto front is based on the following generalization of the dominance relation (see also Fig. 2.0).

Fig. 2.0. The concept of ε-dominance and ε-Pareto fronts.

Definition 1 (ε-Dominance). Let a, b ∈ Y. Then a is said to ε-dominate b for some ε ≥ 1, denoted as a ≻ε b, if

ε · ai ≥ bi ∀ i ∈ {1, . . . , k} (1)

Definition 2 (ε-approximate Pareto front). Let Y ⊆ R+k be a set of vectors and ε ≥ 1. Then a set Yε is called an ε-approximate Pareto front of Y if any vector b ∈ Y is ε-dominated by at least one vector a ∈ Yε, i.e.,

∀ b ∈ Y ∃ a ∈ Yε: a ≻ε b (2)

The set of all ε-approximate Pareto fronts of Y is denoted as Pε(Y).

Of course, the set Yε is not unique. Many different concepts of ε-efficiency and the corresponding Pareto front approximations exist in the operations research literature. As most of these concepts deal with infinite sets, they are not practical for our purpose of producing and maintaining a representative subset. Nevertheless, they are of theoretical interest and have nice properties which can, for instance, be used in convergence proofs and in applications within MOEAs.


Note that this concept of approximation can also be used with slightly different definitions of ε-dominance, e.g., the following additive approximation

εi + ai ≥ bi ∀ i ∈ {1, 2, . . . , k} (3)

where the εi are constants, separately defined for each coordinate. A further refinement of the

concept of ε-approximate Pareto sets leads to the following definition.

Definition 3 (ε-Pareto front). Let Y ⊆ R+k be a set of vectors and ε ≥ 1. Then a set Y*ε ⊆ Y is called an ε-Pareto front of Y if

1. Y*ε is an ε-approximate Pareto front of Y, i.e., Y*ε ∈ Pε(Y), and

2. Y*ε contains Pareto points of Y only, i.e., Y*ε ⊆ Y*.

The set of all ε-Pareto fronts of Y is denoted as P*ε(Y).
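Definitions 1 and 2 can be sketched in code as follows (my own illustration, assuming maximization, non-negative objective values, and the multiplicative ε-dominance of Equation (1) with ε ≥ 1; the greedy construction is a simple way to build one of the many valid Yε sets):

```python
def eps_dominates(a, b, eps):
    """Multiplicative epsilon-dominance (maximization, non-negative values)."""
    return all(eps * ai >= bi for ai, bi in zip(a, b))

def eps_approximate_front(Y, eps):
    """Greedy construction of an eps-approximate Pareto front: a vector is
    kept only if no already-kept vector eps-dominates it."""
    front = []
    for b in Y:
        if not any(eps_dominates(a, b, eps) for a in front):
            front.append(b)
    return front

Y = [(1.0, 1.0), (1.05, 1.0), (2.0, 2.0)]
print(eps_approximate_front(Y, 1.1))   # -> [(1.0, 1.0), (2.0, 2.0)]
```

Every vector in Y is ε-dominated by some kept vector (each rejected vector by the one that rejected it, each kept vector by itself since ε ≥ 1), so the result satisfies Definition 2.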


CHAPTER-2

LITERATURE REVIEW

Siva et al. (2008) optimized the weld bead geometry in plasma transferred arc hard-faced austenitic stainless steel plates using a genetic algorithm. The quality of hard-faced components

depends on the weld bead geometry and dilution, which have to be properly controlled and

optimized to ensure better economy and desirable mechanical characteristics of the weld. The

experiments were conducted based on a five factor, five level central composite rotatable design

matrix. A genetic algorithm (GA) was developed to optimize the process parameters for

achieving the desired bead geometry variables.

Di Lorenzo et al. (2009) discussed a Pareto-optimal design approach for simultaneous control of thinning and springback in stamping processes. One of the most

relevant research issues in the automotive field is the reduction of stamped part weight while also increasing strength. Accordingly, a strong research effort has been devoted to high-strength steels, which are widely utilized but require proper springback control. Springback reduction in sheet metal forming is a typical goal to be pursued, which conflicts, for instance, with thinning reduction. Thus, such problems can be considered multi-objective ones

characterized by conflicting objectives.

Katayama et al. (2004) focused on forming defects (fracture and wrinkle) in two-stage deep drawing. To solve these problems, it is necessary to reduce both fracture and wrinkle simultaneously. For this reason, the authors proposed a transfer forming technique, that is, a new intermediate-process die shape. They investigated the influence of the intermediate-process die shape on the forming defects, and searched for an optimum die shape that improves both defects, using a multi-objective function and the sweeping simplex optimization method.

Kilickap et al. (2010) focused on the influence of machining parameters on the surface roughness obtained in drilling of AISI 1045. The matrix of test conditions consisted

of cutting speed, feed rate, and cutting environment. A mathematical prediction model of the


surface roughness was developed using response surface methodology (RSM). The effects of

drilling parameters on the surface roughness were evaluated and optimum machining conditions

for minimizing the surface roughness were determined using RSM and genetic algorithm. As a

result, the predicted and measured values were quite close, which indicates that the developed

model can be effectively used to predict the surface roughness.

Sathiya et al. (2004) studied friction welding of austenitic stainless steel and optimization of weld quality. In friction welding, the joints are formed in the solid state by utilizing the heat generated

by friction. The objectives of this study were to obtain friction weldments of austenitic stainless steel (AISI 304) and to optimize the friction welding parameters in order to establish the weld quality. Similar austenitic stainless steel specimens were joined using a laboratory-model

friction welding machine. The processed joints were tested for their microstructure and strength-related aspects. Acoustic emission emitted by the joints during tensile testing was acquired to assess joint quality. A method to decide near-optimal settings of the process parameters using a genetic algorithm was also proposed.

In the literature, optimization has mainly been carried out on different machining processes. The concepts of response surface methodology and Pareto-optimal solutions were used, and multi-objective optimization was also applied, together with the genetic algorithm, to solve such problems. In the present work, a genetic algorithm is used to solve a multi-objective problem in which all responses are treated as equal criteria. With three input parameters at two levels each, four objective functions, i.e., plain tensile strength, notch tensile strength, impact toughness, and microhardness, are maximized.


CHAPTER-3

GENETIC ALGORITHM

Evolutionary computing was introduced in the 1960s by I. Rechenberg in the work “Evolution strategies”. This idea was then developed by other researchers. Genetic Algorithms (GAs) were invented by John Holland, who developed the idea in his book “Adaptation in Natural and Artificial Systems” (1975). Holland proposed the GA as a heuristic method based on “survival of the fittest”. GAs have proved to be a useful tool for search and optimization problems.

3.1 Search Space

Most often one is looking for the best solution in a specific set of solutions. The space of all feasible solutions (the set of solutions among which the desired solution resides) is called the search space (also state space). Each point in the search space represents one possible solution, and each possible solution can therefore be “marked” by its fitness value, depending on the problem definition. With a genetic algorithm, one looks for the best solution among a number of possible solutions, each represented by one point in the search space; i.e., GAs search the search space for the best solution, e.g., a minimum. The difficulties in this case are the local minima and the starting point of the search.

3.2 Genetic Algorithms World

Genetic algorithms have a couple of important features. First, a GA is a stochastic algorithm; randomness plays an essential role in genetic algorithms, as both selection and reproduction need random procedures. A second very important point is that genetic algorithms always consider a population of solutions. Keeping more than a single solution in memory at each iteration offers many advantages: the algorithm can recombine different solutions to get better ones, and so benefit from their variety. A population-based algorithm is also very amenable to parallelization. The robustness of the algorithm should also be mentioned as essential for its success. Robustness refers to the ability to perform consistently well on a broad range of problem types. There is no particular requirement on the problem before using a GA, so it can be applied to resolve almost any problem. All these features make the GA a really powerful optimization tool.

3.3 A Simple Genetic Algorithm

An algorithm is a series of steps for solving a problem. A genetic algorithm is a problem-solving method that uses genetics as its model of problem solving. It is a search technique to find approximate solutions to optimization and search problems.

Basically, an optimization problem looks really simple. One knows the form of all possible

solutions corresponding to a specific question. The set of all the solutions that meet this form

constitute the search space. The problem consists in finding out the solution that fits the best, i.e.

the one with the most payoffs, from all the possible solutions. If it’s possible to quickly

enumerate all the solutions, the problem does not raise much difficulty. But, when the search

space becomes large, enumeration is soon no longer feasible simply because it would take far too

much time. In this case a specific technique is needed to find the optimal solution. Genetic Algorithms provide one such method. Practically, they all work in a similar way, adapting

the simple genetics to algorithmic mechanisms.

GA handles a population of possible solutions. Each solution is represented through a

chromosome, which is just an abstract representation. Coding all the possible solutions into a

chromosome is the first part, but certainly not the most straightforward one of a Genetic

Algorithm. A set of reproduction operators has to be determined, too. Reproduction operators are

applied directly on the chromosomes, and are used to perform mutations and recombinations

over solutions of the problem. Appropriate representation and reproduction operators are really

something determinant, as the behaviour of the GA is extremely dependant on it. Frequently, it

can be extremely difficult to find a representation, which respects the structure of the search

space and reproduction operators, which are coherent and relevant according to the properties of

the problems.

Selection is supposed to be able to compare each individual in the population. Selection is done

by using a fitness function. Each chromosome has an associated value corresponding to the

fitness of the solution it represents. The fitness should correspond to an evaluation of how good

the candidate solution is. The optimal solution is the one, which maximizes the fitness function.


Genetic Algorithms deal with the problems that maximize the fitness function. But, if the

problem consists in minimizing a cost function, the adaptation is quite easy. Either the cost

function can be transformed into a fitness function, for example by inverting it; or the selection

can be adapted in such a way that it considers individuals with lower evaluation values as better.

Once the reproduction and the fitness function have been properly defined, a Genetic Algorithm

evolves according to the same basic structure. It starts by generating an initial population of

chromosomes. This first population must offer a wide diversity of genetic materials. The gene

pool should be as large as possible so that any solution of the search space can be engendered.

Generally, the initial population is generated randomly.

Then, the genetic algorithm loops over an iteration process to make the population evolve. Each

iteration consists of the following steps:

SELECTION: The first step consists in selecting individuals for reproduction. This

selection is done randomly with a probability depending on the relative fitness of the

individuals, so that the best ones are chosen for reproduction more often than the poor ones.

REPRODUCTION: In the second step, offspring are bred by the selected individuals. For

generating new chromosomes, the algorithm can use both recombination and mutation.

EVALUATION: Then the fitness of the new chromosomes is evaluated.

REPLACEMENT: During the last step, individuals from the old population are killed and

replaced by the new ones.

The algorithm is stopped when the population converges toward the optimal solution. The basic

genetic algorithm is as follows:

[Start] Generate a random population of n chromosomes (suitable solutions for the problem)

[Fitness] Evaluate the fitness f(x) of each chromosome x in the population

[New population] Create a new population by repeating following steps until the New

population is complete

[Selection] select two parent chromosomes from a population according to their fitness

(the better the fitness, the bigger the chance to be selected).

[Crossover] With a crossover probability, cross over the parents to form new offspring

(children). If no crossover is performed, the offspring are exact copies of the parents.


[Mutation] With a mutation probability, mutate new offspring at each locus (position in

chromosome)

[Accepting] Place new offspring in the new population.

[Replace] Use the newly generated population for a further run of the algorithm.

[Test] If the end condition is satisfied, stop, and return the best solution in current population.

[Loop] Go to step 2 for fitness evaluation.
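The loop above can be sketched as a minimal, self-contained program (maximizing a simple count-of-ones fitness over bit strings; the operator choices of binary tournament selection, one-point crossover, and bit-flip mutation are common defaults, not taken from the thesis):

```python
import random

random.seed(0)                           # reproducible run

def fitness(chrom):
    """Example objective: number of ones in the bit string."""
    return sum(chrom)

def tournament_select(pop):
    """[Selection] Binary tournament: the fitter of two random individuals wins."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2, pc=0.9):
    """[Crossover] One-point crossover with probability pc; else copy parent 1."""
    if random.random() < pc:
        cut = random.randint(1, len(p1) - 1)
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(chrom, pm=0.05):
    """[Mutation] Flip each bit independently with probability pm."""
    return [1 - g if random.random() < pm else g for g in chrom]

# [Start] random population of 30 chromosomes of 20 bits each
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

# [New population] / [Replace] / [Loop]: fixed number of generations
for generation in range(50):
    pop = [mutate(crossover(tournament_select(pop), tournament_select(pop)))
           for _ in range(len(pop))]

best = max(pop, key=fitness)             # [Test] best solution found
```

Here the end condition is simply a fixed generation count; in practice one would also stop when the population converges, as described above.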

Reproduction is the process by which the genetic material in two or more parents is combined to

obtain one or more offspring. In fitness evaluation step, the individual’s quality is assessed.

Mutation is performed to one individual to produce a new version of it where some of the

original genetic material has been randomly changed. Selection process helps to decide which

individuals are to be used for reproduction and mutation in order to produce new search points.

Before implementing GAs, it is important to understand a few guidelines for designing a general search algorithm, i.e., a global optimization algorithm, based on the properties of the fitness landscape and the most common optimization method types:

1. Determinism: A purely deterministic search may have an extremely high variance in solution quality because it may soon get stuck in worst-case situations from which it is incapable of escaping because of its determinism. This can be avoided, but it is a well-known fact that detecting the worst-case situation is not guaranteed to be possible in general.

2. Non-determinism: A stochastic search method usually does not suffer from the potential worst-case "wolf trap" phenomenon above. A search method should therefore likely be stochastic, though it may well contain a substantial portion of determinism. In principle it is enough to have just as much non-determinism as is needed to escape the worst-case wolf traps.

3. Local determinism: A purely stochastic method is usually quite slow. It is therefore reasonable to make efficient deterministic predictions, wherever possible, of the most promising directions of (local) progress. This is called local hill climbing or greedy search, according to the strategy used.


Based on the foregoing discussion, the important criteria for GA approach can be formulated as

given below:

Completeness: Any solution should have its encoding.

Non redundancy: Codes and solutions should correspond one to one.

Soundness: Any code (produced by genetic operators) should have its corresponding solution.

Characteristic perseverance: Offspring should inherit useful characteristics from parents.

In short, the four basic steps used in a simple Genetic Algorithm to solve a problem are:

1. The representation of the problem.

2. The fitness calculation.

3. The variables and parameters involved in controlling the algorithm.

4. The representation of the result and the way of terminating the algorithm.

Comparison of Genetic Algorithms with Other Optimization Techniques:

The principle of GAs is simple: imitate genetics and natural selection with a computer program. The parameters of the problem are most naturally coded as a DNA-like linear data structure, a vector or a string. Sometimes, when the problem is naturally two- or three-dimensional, corresponding array structures are used.

A set of these problem-dependent parameter-value vectors, called a population, is processed by the GA. At the start there is usually a totally random population, with the values of the different parameters generated by a random number generator. A typical population size ranges from a few dozen to thousands. To do optimization we need a cost function, or fitness function as it is usually called when genetic algorithms are used. With a fitness function we can select the best solution candidates from the population and delete the not-so-good specimens.

The nice thing when comparing GAs to other optimization methods is that the fitness function can be nearly anything that can be evaluated by a computer, or even something that cannot! In the latter case it might be a human judgement that cannot be stated as a crisp program, as when an eyewitness selects among the alternatives generated by the GA.

So there are no definite mathematical restrictions on the properties of the fitness function. It may be discrete, multimodal, etc. The main criteria used to classify optimization algorithms are as follows: continuous / discrete, constrained / unconstrained and sequential / parallel. There is a clear difference between discrete and continuous problems, so it is instructive to note that continuous methods are sometimes used to solve inherently discrete problems and vice versa. Parallel algorithms are usually used to speed up processing. There are, however, some cases in which it is more efficient to run several processors in parallel rather than sequentially. These include, among others, cases in which each individual search run has a high probability of getting stuck in a local extremum.

Irrespective of the above classification, optimization methods can be further classified into deterministic and non-deterministic methods. In addition, optimization algorithms can be classified as local or global. In terms of energy and entropy, local search corresponds to entropy, while global optimization depends essentially on the fitness, i.e., the energy landscape.

A genetic algorithm differs from conventional optimization techniques in the following ways:

1. GAs operate with coded versions of the problem parameters rather than the parameters themselves, i.e., the GA works with a coding of the solution set and not with the solutions themselves.

2. Almost all conventional optimization techniques search from a single point, but GAs always operate on a whole population of points (strings), i.e., a GA uses a population of solutions rather than a single solution for searching. This contributes greatly to the robustness of genetic algorithms: it improves the chance of reaching the global optimum and helps in avoiding local stationary points.

3. GAs use a fitness function for evaluation rather than derivatives. As a result, they can be applied to any kind of continuous or discrete optimization problem. The key task here is to identify and specify a meaningful decoding function.

4. GAs use probabilistic transition operators, while conventional methods for continuous optimization apply deterministic transition operators, i.e., GAs do not use deterministic rules.

These are the major differences that exist between Genetic Algorithm and conventional

optimization techniques.


3.4 Terminologies and Operators of GA

3.4.1 Introduction

Genetic Algorithm uses a metaphor where an optimization problem takes the place of an

environment and feasible solutions are considered as individuals living in that environment. In

genetic algorithms, individuals are binary digits or of some other set of symbols drawn from a

finite set. As computer memory is made up of array of bits, anything can be stored in a computer

and can also be encoded by a bit string of sufficient length. Each of the encoded individual in the

population can be viewed as a representation, according to an appropriate encoding of a

particular solution to the problem. For Genetic Algorithms to find a best optimum solution, it is

necessary to perform certain operations over these individuals. This chapter discusses the basic

terminologies and operators used in Genetic Algorithms to achieve a good enough solution for

possible terminating conditions.

3.4.2 Key elements

The two distinct elements in the GA are individuals and populations. An individual is a single solution, while the population is the set of individuals currently involved in the search process.

3.4.3 Individuals

An individual is a single solution. An individual groups together two forms of a solution, as given below:

1. The chromosome, which is the raw 'genetic' information (genotype) that the GA deals with.

2. The phenotype, which is the expression of the chromosome in terms of the model.

A chromosome is subdivided into genes. A gene is the GA's representation of a single factor value for a control factor. Each factor in the solution set corresponds to a gene in the chromosome. Figure 3.1 shows the representation of a genotype.

Solution set Phenotype


Factor 1 Factor 2 Factor 3 …………………. Factor N

Gene 1 Gene 2 Gene 3 …………………. Gene n

Chromosome Genotype

Fig3.1. Representation of Genotype

A chromosome should in some way contain information about the solution that it represents. The morphogenesis function associates each genotype with its phenotype. This simply means that each chromosome must define one unique solution, but it does not mean that each solution is encoded by exactly one chromosome. Indeed, the morphogenesis function is not necessarily bijective, and bijectivity is sometimes even impossible (especially with binary representation). Nevertheless, the morphogenesis function should at least be surjective: all the candidate solutions of the problem must correspond to at least one possible chromosome, to be sure that the whole search space can be explored. When the morphogenesis function that associates each chromosome with one solution is not injective, i.e., when different chromosomes can encode the same solution, the representation is said to be degenerate.

A slight degeneracy is not so worrying, even though the space in which the algorithm looks for the optimal solution is inevitably enlarged. But too much degeneracy can be a more serious problem. It can badly affect the behaviour of the GA, mostly because if several chromosomes can represent the same phenotype, the meaning of each gene will obviously not correspond to a specific characteristic of the solution. This may add some confusion to the search.

Chromosomes encoded as bit strings are shown below in Fig. 3.2.

101010111010110

Fig3.2. Representation of chromosome

3.5 Genes


Genes are the basic "instructions" for building Genetic Algorithms. A chromosome is a sequence of genes. Genes may describe a possible solution to a problem without actually being the solution. A gene is a bit string of arbitrary length; the bit string is a binary representation of a number of intervals from a lower bound. A gene is the GA's representation of a single factor value for a control factor, where each control factor must have an upper bound and a lower bound. This range can be divided into the number of intervals that can be expressed by the gene's bit string. A bit string of length n can represent (2^n - 1) intervals, so the size of one interval is (range)/(2^n - 1).
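Under the mapping just described, an n-bit gene can be decoded into a real value in its control factor's range. A minimal sketch (the function name and example range are illustrative, not from the thesis):

```python
def decode_gene(bits, lower, upper):
    """Map a bit string to a real value in [lower, upper].

    An n-bit gene addresses 2**n - 1 equal intervals of the range,
    so the interval size is (upper - lower) / (2**n - 1).
    """
    n = len(bits)
    index = int(bits, 2)                   # integer value of the bit string
    step = (upper - lower) / (2 ** n - 1)  # size of one interval
    return lower + index * step

# a 4-bit gene mapped onto the range [0, 15]: the step is exactly 1
value = decode_gene("1000", 0.0, 15.0)
```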

The structure of each gene is defined in a record of phenotyping parameters. The phenotyping parameters are instructions for mapping between genotype and phenotype; in other words, they specify how a solution set is encoded into a chromosome and how a chromosome is decoded into a solution set. The mapping between genotype and phenotype is necessary to convert solution sets from the model into a form that the GA can work with, and to convert new individuals from the GA into a form that the model can evaluate. In a chromosome, the genes are represented as in Fig. 3.3.

Fig3.3. Representation of gene

3.6 Fitness

The fitness of an individual in a genetic algorithm is the value of an objective function for its phenotype. To calculate fitness, the chromosome has to be decoded first and the objective function evaluated. The fitness not only indicates how good the solution is, but also corresponds to how close the chromosome is to the optimal one. In the case of multicriterion optimization, the fitness function is definitely more difficult to determine. In multicriterion optimization problems there is often a dilemma as to how to determine whether one solution is better than another. What should be done if a solution is better for one criterion but worse for another? Here the trouble comes more from the definition of a 'better' solution than from how to implement a GA to resolve it. Although a fitness function obtained by a simple combination of the different criteria can sometimes give good results, this presupposes that the criteria can be combined in a consistent way. For more advanced problems, it may be useful to consider something like Pareto optimality or other ideas from multicriteria optimization theory.

(Contents of Fig. 3.3, referenced above: the bit string 1010 1110 1111 0101, divided into Gene 1, Gene 2, Gene 3 and Gene 4.)


3.7 Populations

A population is a collection of individuals. A population consists of a number of individuals

being tested, the phenotype parameters defining the individuals and some information about

search space. The two important aspects of population used in Genetic Algorithms are:

1. The initial population generation.

2. The population size.

The population size depends on the complexity of the problem. Often, a random initialization of the population is carried out. In the case of a binary-coded chromosome this means that each bit is initialized to a random zero or one. But there may be instances where the initialization of the population is carried out with some known good solutions.

Ideally, the first population should have a gene pool as large as possible in order to be able to explore the whole search space. All the different possible alleles of each gene should be present in the population. To achieve this, the initial population is, in most cases, chosen randomly.

Nevertheless, sometimes a heuristic can be used to seed the initial population. The mean fitness of the population is then already high, which may help the genetic algorithm find good solutions faster. But in doing so one should be sure that the gene pool is still large enough. Otherwise, if the population badly lacks diversity, the algorithm will just explore a small part of the search space and never find globally optimal solutions. The size of the population raises a few issues too. The larger the population, the easier it is to explore the search space. But it has been established that the time required by a GA to converge is O(n log n) function evaluations, where n is the population size. We say that the population has converged when all the individuals are very much alike and further improvement is possible only through mutation. Goldberg has also shown that the efficiency of a GA in reaching the global optimum, rather than local ones, is largely determined by the population size. To sum up, a large population is quite useful, but it demands much more computational cost, memory and time. Practically, a population size of around 100 individuals is quite common, but this size can be adjusted according to the time and memory available on the machine, weighed against the quality of result required.

Population being combination of various chromosomes is represented as in Fig.3.4.

Population

Chromosome 1 11100010

Chromosome 2 01111011

Chromosome 3 10101010


Chromosome 4 11001100

Fig3.4. Population

Thus the above population consists of four chromosomes.

3.8 Data Structures

The main data structures in a GA are chromosomes, phenotypes, objective function values and

fitness values. This is particularly easy to implement when using the MATLAB package as a numerical tool. An entire chromosome population can be stored in a single array, given the number of individuals and the length of their genotype representation. Similarly, the design variables, or phenotypes, obtained by applying some mapping from the chromosome representation into the design space, can be stored in a single array. The actual mapping depends upon the decoding scheme used. The objective function values can be scalar or vectorial, and they are not necessarily the same as the fitness values. Fitness values are derived from the objective function using a scaling or ranking function and can be stored as vectors.
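The section above mentions MATLAB; as a language-neutral sketch, the same data structures can be held as plain Python lists. Everything below is assumed for illustration: the decoding (bit string read as an unsigned integer), the toy objective (distance from 128), and the ranking used to derive fitness.

```python
# chromosome population: one row per individual, one column per bit
chromosomes = [
    [1, 1, 1, 0, 0, 0, 1, 0],
    [0, 1, 1, 1, 1, 0, 1, 1],
]

# phenotypes: decode each chromosome into a design variable (here, an
# assumed mapping that reads the bit string as an unsigned integer)
phenotypes = [int("".join(map(str, c)), 2) for c in chromosomes]

# objective values (assumed toy objective: distance from 128) and
# fitness values derived from them by ranking -- the two differ
objective = [abs(p - 128) for p in phenotypes]
order = sorted(range(len(objective)), key=lambda i: objective[i], reverse=True)
fitness = [0] * len(objective)
for rank, i in enumerate(order, start=1):
    fitness[i] = rank  # worst objective -> fitness 1, best -> N
```

All four structures are parallel arrays indexed by individual, which is what makes whole-population operations convenient in an array language such as MATLAB.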

3.9 Search Strategies

The search process consists of initializing the population and then breeding new individuals until the termination condition is met. There can be several goals for the search process, one of which

is to find the global optimum. This can never be assured with the types of models that GAs work with; there is always a possibility that the next iteration in the search would produce a better solution. In some cases, the search process could run for years and not produce any better solution than it did in the first few iterations. Another goal is faster convergence. When the objective function is expensive to run, faster convergence is desirable; however, the chance of converging on local and possibly quite substandard optima is increased. Yet another goal is to produce a range of diverse but still good solutions. When the solution space contains several distinct optima that are similar in fitness, it is useful to be able to select between them, since some combinations of factor values in the model may be more feasible than others. Also, some solutions may be more robust than others.


3.10 Encoding

Encoding is the process of representing individual genes. The process can be performed using bits, numbers, trees, arrays, lists or any other objects. The encoding depends mainly on the problem being solved. For example, one can directly encode real or integer numbers.

3.10.1 Binary encoding

The most common way of encoding is a binary string, represented as in Fig. 3.5. Each chromosome encodes a binary (bit) string. Each bit in the string can represent some characteristic of the solution. Every bit string is therefore a solution, but not necessarily the best solution. Another possibility is that the whole string represents a number. The way bit strings encode information differs from problem to problem.

Chromosome 1 110100011010

Chromosome 2 011111111100

Fig3.5. Binary coding

Binary encoding gives many possible chromosomes with a small number of alleles. On the other hand, this encoding is not natural for many problems, and sometimes corrections must be made after a genetic operation is completed. Binary-coded strings of 1s and 0s are the most widely used. The length of the string depends on the accuracy required. With this encoding:

1. Integers are represented exactly.

2. A finite number of real numbers can be represented.

3. The number of real numbers represented increases with the string length.

3.10.2 Octal encoding

This encoding uses strings made up of octal numbers (0-7). See Fig. 3.6.

Chromosome 1 03467216

Chromosome 2 15723314


Fig3.6. Octal encoding

3.10.3 Hexadecimal encoding

This encoding uses strings made up of hexadecimal numbers (0-9, A-F). See Fig. 3.7.

Chromosome 1 9CE7

Chromosome 2 3DBA

Fig3.7. Hexadecimal encoding

3.10.4 Permutation encoding (real number coding)

In permutation encoding, every chromosome is a string of integer/real values representing positions in a sequence. Permutation encoding is only useful for ordering problems. Even for these problems, corrections must be made after some types of crossover and mutation to keep the chromosome consistent (i.e., to ensure it still contains a valid sequence). See Fig. 3.8.

Chromosome A 153264798

Chromosome B 856723149

Fig3.8. Permutation Encoding

3.10.5 Value encoding

In value encoding, every chromosome is a string of values, and the values can be anything connected to the problem: integers, real numbers, characters, or more complicated objects. Direct value encoding can be used in problems where complicated values, such as real numbers, are needed; using binary encoding for such problems would be very difficult. Value encoding produces very good results for some special problems. On the other hand, with this encoding it is often necessary to develop new crossover and mutation operators specific to the problem. See Fig. 3.9.


Chromosome A 1.2324 5.3243 0.4556 2.3293 2.4545

Chromosome B ABDJEIFJDHDIERJFDLDFLFEGT

Chromosome C (back), (back), (right), (forward), (left)

Fig3.9. Value encoding

3.10.6 Tree encoding

This encoding is mainly used for evolving program expressions in genetic programming. Every chromosome is a tree of objects, such as functions and commands of a programming language.

3.11 Breeding

The breeding process is the heart of the genetic algorithm. It is in this process that the search creates new and, hopefully, fitter individuals.

The breeding cycle consists of three steps:

a. Selecting parents.

b. Crossing the parents to create new individuals (offspring or children).

c. Replacing old individuals in the population with the new ones.

3.11.1 Selection

Selection is the process of choosing two parents from the population for crossing. After deciding on an encoding, the next step is to decide how to perform selection, i.e., how to choose the individuals in the population that will create offspring for the next generation, and how many offspring each will create. The purpose of selection is to emphasize fitter individuals in the population in the hope that their offspring have higher fitness. Chromosomes are selected from the initial population to be parents for reproduction. The problem is how to select these chromosomes. According to Darwin's theory of evolution, the best ones survive to create new offspring.

Fig. 3.10 shows the basic selection process. Selection is a method that randomly picks chromosomes out of the population according to their evaluation function. The higher the fitness, the more chance an individual has of being selected. The selection pressure is defined as the degree to which the better individuals are favoured: the higher the selection pressure, the more the better individuals are favoured. This selection pressure drives the GA to improve the population fitness over successive generations. The convergence rate of a GA is largely determined by the magnitude of the selection pressure, with higher selection pressures resulting in higher convergence rates.

Fig. 3.10. Basic selection process: the better individuals pass from the mating pool into the new population.

Genetic Algorithms should be able to identify optimal or nearly optimal solutions under a wide range of selection pressures. However, if the selection pressure is too low, the convergence rate will be slow, and the GA will take unnecessarily long to find the optimal solution. If the selection pressure is too high, there is an increased chance of the GA prematurely converging to an incorrect (sub-optimal) solution. In addition to providing selection pressure, selection schemes should also preserve population diversity, as this helps to avoid premature convergence. Typically we can distinguish two types of selection scheme: proportionate selection and ordinal-based selection. Proportionate-based selection picks out individuals based upon their fitness values relative to the fitness of the other individuals in the population. Ordinal-based selection schemes select individuals not upon their raw fitness, but upon their rank within the population. This makes the selection pressure independent of the fitness distribution of the population; it is based solely on the relative ordering (ranking) of the population. It is also possible to use a scaling function to redistribute the fitness range of the population in order to adapt the selection pressure. For example, if all the solutions have their fitness in the range [999, 1000], the probability of selecting a better individual over any other using a proportionate-based method will be insignificant. If the fitness of every individual is mapped equitably to the range [0, 1], the probability of selecting a good individual over a bad one becomes significant. Selection has to be balanced with variation from crossover and mutation. Too strong a selection means sub-optimal, highly fit individuals will take over the population, reducing the diversity needed for change and progress; too weak a selection will result in too slow an evolution. The various selection methods are discussed below.

3.11.1.1 Roulette wheel selection

Roulette selection is one of the traditional GA selection techniques. The commonly used reproduction operator is the proportionate reproduction operator, where a string is selected from the mating pool with a probability proportional to its fitness. The principle of roulette selection is a linear search through a roulette wheel whose slots are weighted in proportion to the individuals' fitness values. A target value is set, which is a random proportion of the sum of the fitnesses in the population. The population is stepped through until the target value is reached. This is only a moderately strong selection technique, since fit individuals are not guaranteed to be selected, but merely have a greater chance. A fit individual will contribute more to the target value, but if it does not exceed it, the next chromosome in line has a chance, and it may be weak. It is essential that the population not be sorted by fitness, since this would dramatically bias the selection. The roulette process described above can also be explained as follows: the expected value of an individual is its fitness divided by the average fitness of the population. Each individual is assigned a slice of the roulette wheel, the size of the slice being proportional to the individual's fitness. The wheel is spun N times, where N is the number of individuals in the population. On each spin, the individual under the wheel's marker is selected into the pool of parents for the next generation.

This method is implemented as follows:

1. Sum the total expected value of the individuals in the population; call it T.

2. Repeat N times:

i. Choose a random number r between 0 and T.

ii. Loop through the individuals in the population, summing the expected values, until the sum is greater than or equal to r. The individual whose expected value puts the sum over this limit is the one selected.

Roulette wheel selection is easy to implement but is noisy. The rate of evolution depends on the variance of fitnesses in the population.
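The two implementation steps above can be sketched directly. This is an illustrative toy (the pool of named individuals and their fitness values are invented); it spins the wheel once per call using raw fitnesses as the weights.

```python
import random

def roulette_select(population, fitnesses):
    """Spin the wheel once and return the chosen individual.

    Step 1: T is the total fitness. Step 2: pick a random target r in
    [0, T] and accumulate fitness until the running sum reaches r.
    """
    T = sum(fitnesses)           # step 1: total of the wheel
    r = random.uniform(0, T)     # step 2i: random target
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f             # step 2ii: accumulate until >= r
        if running >= r:
            return individual
    return population[-1]        # guard against floating-point rounding

pool = ["weak", "average", "strong"]
fits = [1.0, 3.0, 6.0]
parent = roulette_select(pool, fits)
```

Over many spins, "strong" (60% of the wheel) is selected most often, but "weak" still wins occasionally, which is the noise the text refers to.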


3.11.1.2 Random selection

This technique randomly selects a parent from the population. In terms of disruption of genetic

codes, random selection is a little more disruptive, on average, than roulette wheel selection.

3.11.1.3 Rank selection

The roulette wheel has a problem when the fitness values differ very much. If the best chromosome's fitness accounts for 90% of the total, its slice occupies 90% of the roulette wheel and the other chromosomes then have too little chance of being selected. Rank selection ranks the population, and every chromosome receives its fitness from the ranking: the worst has fitness 1 and the best has fitness N. This results in slower convergence but prevents too-quick convergence. It also keeps up the selection pressure when the fitness variance is low, and it preserves diversity and hence leads to a successful search. In effect, potential parents are selected and a tournament is held to decide which of the individuals will be the parent. There are many ways this can be achieved; two suggestions are:

1. Select a pair of individuals at random. Generate a random number R between 0 and 1. If R < r, use the first individual as a parent; if R >= r, use the second. This is repeated to select the second parent. The value of r is a parameter of this method.

2. Select two individuals at random. The individual with the higher evaluation becomes the parent. Repeat to find a second parent.
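The rank-weighting idea (worst gets fitness 1, best gets fitness N) can be sketched as follows. The pool and raw fitness values are invented for illustration; note how wildly different raw fitnesses collapse into the gentle weights 1..N.

```python
import random

def rank_select(population, fitnesses):
    """Pick one parent by rank: the worst individual gets weight 1,
    the best gets weight N, regardless of the raw fitness spread."""
    n = len(population)
    order = sorted(range(n), key=lambda i: fitnesses[i])  # worst -> best
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank                   # worst -> 1, best -> n
    return random.choices(population, weights=ranks, k=1)[0]

pool = ["A", "B", "C"]
fits = [1.0, 90.0, 9.0]   # raw fitnesses differ by orders of magnitude
parent = rank_select(pool, fits)
```

With raw roulette weighting, "B" would be chosen 90% of the time; under rank weighting its weight is only 3 against 2 and 1, so the selection pressure is much milder.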

3.11.1.4 Tournament selection

An ideal selection strategy should be able to adjust its selective pressure and population diversity so as to fine-tune GA search performance. Unlike roulette wheel selection, the tournament selection strategy provides selective pressure by holding a tournament among Nu individuals. The best individual from the tournament, the one with the highest fitness, is the winner, and the winner is then inserted into the mating pool. The tournament is repeated until the mating pool for generating new offspring is filled. The mating pool, comprising tournament winners, has a higher average fitness than the population. This fitness difference provides the selection pressure, which drives the GA to improve the fitness of succeeding generations. This method is efficient and leads to an optimal solution.
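A tournament of size Nu can be sketched in a few lines. The example pool and fitness values are invented; the parameter `nu` corresponds to the Nu of the text, and raising it increases the selection pressure.

```python
import random

def tournament_select(population, fitnesses, nu=2):
    """Hold a tournament among nu randomly chosen individuals;
    the one with the highest fitness wins and becomes a parent."""
    contestants = random.sample(range(len(population)), nu)
    winner = max(contestants, key=lambda i: fitnesses[i])
    return population[winner]

pool = [[0, 1], [1, 1], [0, 0]]
fits = [1, 2, 0]
# nu = 2 gives mild pressure; nu equal to the population size
# makes the fittest individual win every tournament
parent = tournament_select(pool, fits, nu=2)
```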

3.11.1.5 Boltzmann selection

Simulated annealing is a method of function minimization or maximization that simulates the process of slowly cooling molten metal to achieve the minimum function value in a minimization problem. The cooling phenomenon is simulated by controlling a temperature-like parameter introduced with the concept of the Boltzmann probability distribution. In Boltzmann selection, a continuously varying temperature controls the rate of selection according to a preset schedule. The temperature starts out high, which means the selection pressure is low. The temperature is gradually lowered, which gradually increases the selection pressure, thereby allowing the GA to narrow in more closely on the best part of the search space while maintaining an appropriate degree of diversity. A logarithmically decreasing temperature has been found useful for convergence without getting stuck in a local minimum, but cooling the system down to the equilibrium state takes time.

Let fmax be the fitness of the currently available best string. If the next string has fitness f(Xi) such that f(Xi) > fmax, then the new string is selected. Otherwise it is selected with Boltzmann probability

P = exp[-(fmax - f(Xi))/T]    (3.1)

where T = To(1 - α)^k and k = 1 + 100g/G; g is the current generation number and G the maximum value of g. The value of α can be chosen from the range [0, 1] and To from the range [5, 100]. The final state is reached as T approaches zero, i.e., the global solution is achieved at this point.
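Equation (3.1) and its temperature schedule can be sketched as a probability function. The parameter defaults (To = 50, α = 0.1) are sample values from the ranges given above, not prescriptions from the thesis.

```python
import math

def boltzmann_prob(f_max, f_new, g, G, T0=50.0, alpha=0.1):
    """Selection probability for a string of fitness f_new, per Eq. (3.1).

    A string better than the current best (f_new > f_max) is always
    selected. Otherwise T = T0*(1 - alpha)**k with k = 1 + 100*g/G,
    and the string is selected with probability exp(-(f_max - f_new)/T).
    """
    if f_new > f_max:
        return 1.0
    k = 1 + 100.0 * g / G
    T = T0 * (1 - alpha) ** k
    return math.exp(-(f_max - f_new) / T)

# early in the run (g small, T high) a weaker string keeps a fair chance;
# late in the run (g near G, T near zero) it is almost never selected
early = boltzmann_prob(10.0, 9.0, g=0, G=100)
late = boltzmann_prob(10.0, 9.0, g=100, G=100)
```

This is the cooling behaviour described above: the same fitness deficit that is easily forgiven at high temperature becomes prohibitive as T approaches zero.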

The probability that the best string is selected and introduced into the mating pool is very high. However, elitism can be used to eliminate the chance of any undesired loss of information during the mutation stage. Moreover, the execution time is short.
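As a concrete illustration, the selection rule above can be sketched in Python (this is illustrative code, not the thesis's MATLAB implementation; the default T0, alpha, and G values are assumptions within the ranges the text gives):

```python
import math
import random

def boltzmann_select(population, fitnesses, T0=50.0, alpha=0.5, g=0, G=350):
    """Pick one individual using the Boltzmann rule of Eq. (3.1).

    T0 (assumed, from [5, 100]) and alpha (assumed, from [0, 1]) set the
    cooling schedule; g and G are the current and maximum generation numbers.
    """
    k = 1 + 100.0 * g / G
    T = T0 * (1 - alpha) ** k            # temperature schedule T = T0(1 - alpha)^k
    f_max = max(fitnesses)               # fitness of the currently best string
    while True:
        i = random.randrange(len(population))
        if fitnesses[i] >= f_max:        # a string at least as fit is always taken
            return population[i]
        # otherwise accept with probability exp(-(f_max - f(Xi)) / T)
        if random.random() < math.exp(-(f_max - fitnesses[i]) / T):
            return population[i]
```

As T shrinks over the generations, the acceptance probability for below-best strings collapses toward zero, reproducing the increasing selection pressure described above.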

3.11.1.6 Stochastic universal sampling

Stochastic universal sampling provides zero bias and minimum spread. The individuals are

mapped to contiguous segments of a line, such that each individual’s segment is equal in size to

its fitness exactly as in roulette-wheel selection. Here equally spaced pointers are placed over the

line, as many as there are individuals to be selected. If NPointer is the number of individuals to be selected, the distance between the pointers is 1/NPointer, and the position of the first pointer is given by a randomly generated number in the range [0, 1/NPointer].

For example, if 6 individuals are to be selected, the distance between the pointers is 1/6 ≈ 0.167. Suppose the random number drawn from the range [0, 0.167] is 0.1; after selection, the mating population then consists of individuals 1, 2, 3, 4, 6, and 8. Stochastic universal sampling thus ensures a selection of offspring that is closer to what is deserved than roulette-wheel selection.
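A minimal sketch of this procedure in Python (illustrative code, not from the thesis; the fitness values in the usage are hypothetical):

```python
import random

def sus_select(fitnesses, n_pointers):
    """Stochastic universal sampling: return the indices of the selected
    individuals. The pointers are spaced 1/n_pointers apart on the
    fitness-proportional line, starting at a random offset."""
    total = float(sum(fitnesses))
    step = 1.0 / n_pointers
    start = random.uniform(0.0, step)    # first pointer in [0, 1/n_pointers]
    selected, cumulative, i = [], 0.0, 0
    for j in range(n_pointers):
        p = start + j * step
        # walk along the line until the segment containing pointer p is found
        while i < len(fitnesses) - 1 and cumulative + fitnesses[i] / total < p:
            cumulative += fitnesses[i] / total
            i += 1
        selected.append(i)
    return selected
```

Because all pointers are laid down in a single spin, no individual can be picked far more or far less often than its fitness share warrants, which is exactly the zero-bias, minimum-spread property.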

3.11.2 Crossover (Recombination)

Crossover is the process of taking two parent solutions and producing from them a child. After

the selection (reproduction) process, the population is enriched with better individuals.

Reproduction makes clones of good strings but does not create new ones. The crossover operator is applied to the mating pool in the hope that it creates better offspring.

Crossover is a recombination operator that proceeds in three steps:

i. The reproduction operator selects at random a pair of individual strings for mating.

ii. A cross site is selected at random along the string length.

iii. Finally, the position values are swapped between the two strings following the cross site.

The simplest way to do this is to choose a crossover point at random, copy everything before this point from the first parent, and copy everything after the crossover point from the other parent. The various crossover techniques are discussed as follows:

3.11.2.1 Single point crossover

The traditional genetic algorithm uses single point crossover, where the two mating

chromosomes are cut once at corresponding points and the sections after the cuts exchanged.

Here, a cross-site or crossover point is selected randomly along the length of the mated strings

and the bits after the cross-site are exchanged. If an appropriate site is chosen, better children can be obtained by combining good parents; otherwise, string quality is severely hampered.

Fig. 3.11 illustrates single point crossover; it can be observed that the bits after the crossover point are exchanged to produce the children. The crossover point can be chosen randomly.


Parent 1    10110 | 010
Parent 2    10101 | 111
Child 1     10110 | 111
Child 2     10101 | 010

Fig. 3.11. Single point crossover
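The exchange shown in Fig. 3.11 can be sketched in Python (illustrative code, not from the thesis):

```python
import random

def single_point_crossover(parent1, parent2, point=None):
    """Cut both bit-strings at one site and swap the tails."""
    if point is None:                        # choose the cross-site at random
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# the example of Fig. 3.11, with the cut after the fifth bit
print(single_point_crossover("10110010", "10101111", point=5))
# → ('10110111', '10101010')
```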

3.11.2.2 Two point crossover

Apart from single point crossover, many different crossover algorithms have been devised, often

involving more than one cut point. It should be noted that adding further crossover points

reduces the performance of the GA. The problem with adding additional crossover points is that

building blocks are more likely to be disrupted.

However, an advantage of having more crossover points is that the problem space may be

searched more thoroughly.

In two-point crossover, two crossover points are chosen and the contents between these points

are exchanged between two mated parents.

Parent 1    110 | 110 | 10
Parent 2    011 | 011 | 00
Child 1     110 | 011 | 10
Child 2     011 | 110 | 00

Fig. 3.12. Two point crossover

In Fig. 3.12 above, the lines indicate the crossover points. The contents between these

points are exchanged between the parents to produce new children for mating in the next

generation.


Originally, GAs used one-point crossover, which cuts two chromosomes at one point and splices the halves to create new ones. With one-point crossover, however, the head and the tail of one chromosome cannot be passed together to the offspring. If both the head and the tail of a chromosome contain good genetic information, none of the offspring obtained directly through one-point crossover will share the two good features. Using two-point crossover avoids this drawback and is therefore generally considered better than one-point crossover. In fact, this problem can be generalized to each gene position in a chromosome. Genes that are close together on a chromosome have a greater chance of being passed together to the offspring obtained through an N-point crossover. This leads to an unwanted correlation between adjacent genes. Consequently, the efficiency of an N-point crossover depends on the positions of the genes within the chromosome. In a genetic representation, genes that encode dependent characteristics of the solution should therefore be placed close together.
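The two-point exchange of Fig. 3.12 follows the same pattern (illustrative Python, not the thesis code):

```python
import random

def two_point_crossover(parent1, parent2, cut1=None, cut2=None):
    """Swap the segment lying between two cut points."""
    if cut1 is None or cut2 is None:         # pick two distinct random sites
        cut1, cut2 = sorted(random.sample(range(1, len(parent1)), 2))
    child1 = parent1[:cut1] + parent2[cut1:cut2] + parent1[cut2:]
    child2 = parent2[:cut1] + parent1[cut1:cut2] + parent2[cut2:]
    return child1, child2

# the example of Fig. 3.12, with cuts after bits 3 and 6
print(two_point_crossover("11011010", "01101100", cut1=3, cut2=6))
# → ('11001110', '01111000')
```

Note that both the head and tail of each parent stay together in its child, which is exactly the property single-point crossover lacks.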

3.11.2.3 Crossover probability

The basic parameter in the crossover technique is the crossover probability (Pc), which describes how often crossover is performed. If there is no crossover, offspring are exact copies of their parents. If crossover is applied, offspring are made from parts of both parents' chromosomes. If the crossover probability is 100%, all offspring are made by crossover; if it is 0%, the whole new generation is made from exact copies of chromosomes from the old population (but this does not mean that the new generation is the same!). Crossover is performed in the hope that the new chromosomes will contain good parts of the old chromosomes and therefore be better. However, it is good to let some part of the old population survive into the next generation.

3.11.3 Mutation

After crossover, the strings are subjected to mutation. Mutation prevents the algorithm from being trapped in a local minimum. It plays the role of recovering lost genetic material as well as randomly disturbing genetic information; it is an insurance policy against the irreversible loss of genetic material. Mutation has traditionally been considered a simple search operator. If crossover is supposed to exploit the current solutions to find better ones, mutation is supposed to help the exploration of the whole search space. Mutation is viewed as a background operator that maintains genetic diversity in the population. It introduces new genetic structures into the population by randomly modifying some of its building blocks, helps escape the trap of local minima, and keeps the gene pool well stocked, thus ensuring ergodicity. A search space is said to be ergodic if there is a non-zero probability of generating any solution from any population state.

There are many different forms of mutation for the different kinds of representation. For a binary representation, a simple mutation can consist of inverting the value of each gene with a small probability, usually taken as about 1/L, where L is the length of the chromosome. It is also possible to implement a kind of hill-climbing mutation operator that performs mutation only if it improves the quality of the solution. Such an operator can accelerate the search, but care should be taken, because it might also reduce diversity in the population and make the algorithm converge toward some local optimum. Mutation of a bit involves flipping it: changing 0 to 1 and vice versa.
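The bit-flip rule with the 1/L default probability can be sketched as follows (illustrative Python, not the thesis code):

```python
import random

def mutate(chromosome, pm=None):
    """Flip each bit of a binary string with probability pm.

    The default pm = 1/L follows the rule of thumb in the text,
    where L is the chromosome length.
    """
    if pm is None:
        pm = 1.0 / len(chromosome)
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < pm else bit
        for bit in chromosome
    )
```

With pm = 0 the string is returned unchanged; with pm = 1 every bit is inverted, which corresponds to the extreme cases discussed in the next subsection.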

3.11.3.1 Mutation Probability

The important parameter in the mutation technique is the mutation probability (Pm), which decides how often parts of a chromosome are mutated. If there is no mutation, offspring are generated immediately after crossover (or copied directly) without any change. If mutation is performed, one or more parts of a chromosome are changed. If the mutation probability is 100%, the whole chromosome is changed; if it is 0%, nothing is changed. Mutation generally prevents the GA from falling into local extremes, but it should not occur very often, because the GA would then in fact turn into a random search.


CHAPTER-4

OPTIMIZATION OF PROCESS PARAMETERS FOR FRICTION

WELDING USING GENETIC ALGORITHM

Two dissimilar materials, AISI 4140 and AISI 304, have been joined using friction welding. The input parameters considered are friction force, forging force, and burn-off. Table 4.1 gives the range of each input parameter.

Table 4.1. Process parameters with their ranges

Parameter                  Range
Friction force (X1), kN    15-30
Forging force (X2), kN     40-75
Burn-off (X3), mm          4-10

With these three input parameters at two levels, an L8 orthogonal array is generated as given in Table 4.2. The experimental values of plain tensile strength, notch tensile strength, impact toughness, and micro hardness are then listed.

Table 4.2. L8 orthogonal array

Friction force   Forging force   Burn-off
1                1               1
1                1               2
1                2               1
1                2               2
2                1               1
2                1               2
2                2               1
2                2               2
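Since the array uses three factors at two levels each, its eight rows are simply the full 2^3 factorial, which can be generated as a quick check (illustrative Python):

```python
from itertools import product

# all level combinations of the three two-level factors
# (friction force, forging force, burn-off)
l8 = list(product((1, 2), repeat=3))
for row in l8:
    print(*row)
```

Each level is then mapped to its physical value from Table 4.1, e.g. level 1 of friction force to 15 kN and level 2 to 30 kN.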

Table 4.3. Experimental results for plain tensile strength

Friction force (kN)-Forging force (kN)-Burn-off (mm)   Plain tensile strength (MPa)   Average (MPa)
                                                       Trial 1   Trial 2   Trial 3
15-40-4                                                623       631       633        629
15-40-10                                               599       612       607        606
15-75-4                                                643       651       644        646
15-75-10                                               601       608       601        603.33
30-40-4                                                650       656       647        651
30-40-10                                               622       632       627        627
30-75-4                                                644       652       647        647.67
30-75-10                                               638       644       645        642.33

The regression equations are generated for these plain tensile strength values with the help of Minitab software.

Objective function of plain tensile strength:

Yp = 645.018 + 0.1092*X1 + 0.2547*X2 - 8.363*X3 - 0.0022*X1*X2 + 0.2018*X1*X3 - 0.00238*X2*X3;

Table 4.4. Experimental results for notch tensile strength

Friction force (kN)-Forging force (kN)-Burn-off (mm)   Notch tensile strength (MPa)   Average (MPa)
                                                       Trial 1   Trial 2   Trial 3
15-40-4                                                900       913       907        906.67
15-40-10                                               790       813       797        800
15-75-4                                                814       857       832        834.33
15-75-10                                               816       840       831        829
30-40-4                                                770       783       778        777
30-40-10                                               761       775       768        768
30-75-4                                                725       688       703        705.33
30-75-10                                               769       783       763        771.67

Objective function of notch tensile strength:

Yn = 1278.25 - 11.034*X1 - 3.21*X2 - 47.63*X3 - 0.0235*X1*X2 + 0.9407*X1*X3 + 0.4206*X2*X3;

Table 4.5. Experimental results for impact toughness

Friction force (kN)-Forging force (kN)-Burn-off (mm)   Impact toughness (J)           Average (J)
                                                       Trial 1   Trial 2   Trial 3
15-40-4                                                41        43        43         42.33
15-40-10                                               24        27        26         25.67
15-75-4                                                38        36        37         37
15-75-10                                               25        21        24         23.33
30-40-4                                                36        35        36         35.67
30-40-10                                               12        14        12         12.67
30-75-4                                                32        28        30         30
30-75-10                                               13        11        13         12.33

Objective function of impact toughness:

Yi = 68.8194 - 0.3171*X1 - 0.273*X2 - 2.80*X3 + 0.0015*X1*X2 - 0.057*X1*X3 + 0.0198*X2*X3;

Table 4.6. Experimental results for micro hardness

Friction force (kN)-Forging force (kN)-Burn-off (mm)   Micro hardness (HV) at weld center   Average (HV)
                                                       Trial 1   Trial 2   Trial 3
15-40-4                                                357       360       348          355
15-40-10                                               325       320       331          325.33
15-75-4                                                358       355       362          358.33
15-75-10                                               316       310       321          315.67
30-40-4                                                405       410       397          404
30-40-10                                               371       375       367          371
30-75-4                                                412       415       399          408.67
30-75-10                                               370       377       365          370.67

Objective function of micro hardness:

Ym = 328.270 + 2.697*X1 + 0.057*X2 - 3.67*X3 + 0.01015*X1*X2 + 0.0074*X1*X3 - 0.042*X2*X3;

The combined objective function written in MATLAB for the genetic algorithm is

Y = Yp + Yn + Yi + Ym;

The optimal solution for this objective function, involving plain tensile strength, notch tensile strength, impact toughness, and micro hardness, has been obtained with the help of a genetic algorithm written in MATLAB.
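The four regression models and their unweighted sum can be checked numerically. The following Python sketch (a transcription of the MATLAB objective, not the thesis code itself) evaluates the combined objective at the optimum reported by the GA run in Section 4.1.2 and reproduces the reported value:

```python
def yp(x1, x2, x3):
    # plain tensile strength (MPa)
    return (645.018 + 0.1092*x1 + 0.2547*x2 - 8.363*x3
            - 0.0022*x1*x2 + 0.2018*x1*x3 - 0.00238*x2*x3)

def yn(x1, x2, x3):
    # notch tensile strength (MPa)
    return (1278.25 - 11.034*x1 - 3.21*x2 - 47.63*x3
            - 0.0235*x1*x2 + 0.9407*x1*x3 + 0.4206*x2*x3)

def yi(x1, x2, x3):
    # impact toughness (J)
    return (68.8194 - 0.3171*x1 - 0.273*x2 - 2.80*x3
            + 0.0015*x1*x2 - 0.057*x1*x3 + 0.0198*x2*x3)

def ym(x1, x2, x3):
    # micro hardness (HV)
    return (328.270 + 2.697*x1 + 0.057*x2 - 3.67*x3
            + 0.01015*x1*x2 + 0.0074*x1*x3 - 0.042*x2*x3)

def combined(x1, x2, x3):
    # unweighted combined objective Y = Yp + Yn + Yi + Ym
    return yp(x1, x2, x3) + yn(x1, x2, x3) + yi(x1, x2, x3) + ym(x1, x2, x3)

# evaluate at the GA optimum (X1 = 15 kN, X2 = 40.136719 kN, X3 = 4.023438 mm)
print(round(combined(15.0, 40.136719, 4.023438), 2))
# → 1935.03
```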


4.1 GA Run

4.1.1 Input data

Population size = 100
Chromosome length = 30
Number of generations = 350
Probability of crossover = 0.9
Probability of mutation = 0.01
Probability of elitism = 0.01
Min value of friction force = 15 kN
Max value of friction force = 30 kN
Min value of forging force = 40 kN
Max value of forging force = 75 kN
Min value of burn-off = 4 mm
Max value of burn-off = 10 mm

4.1.2 Output

The problem converged; the solution was generated at iteration 328.

Objective function value (Y) = 1935.034068

Input parameters:
Friction force (kN) = 15.000000
Forging force (kN) = 40.136719
Burn-off (mm) = 4.023438

Output parameters:
Plain tensile strength, Yp (MPa) = 633.700908
Notch tensile strength, Yn (MPa) = 902.811002
Impact toughness, Yi (J) = 42.500442
Micro hardness, Ym (HV) = 356.021716

4.2 Results and Discussion

Table 4.7. % variation between experimental and program values of process parameters

Process parameter             Experimental value   Program value   % variation
Plain tensile strength, MPa   629                  633.70          0.74
Notch tensile strength, MPa   906                  902.81          0.35
Impact toughness, J           42.33                42.51           0.42
Micro hardness, HV            355                  356.03          0.29

As Table 4.7 shows, the theoretical values of plain tensile strength, notch tensile strength, impact toughness, and micro hardness obtained after the genetic algorithm run are very close to the corresponding experimental results.


Previously, the objective functions were assigned weights, so that the solution corresponding to those criteria depended on the intended application. In this work, all objective functions are combined into one single objective without assigning any weights among them.

Graphs of objective function value versus number of iterations for plain tensile strength, notch tensile strength, impact toughness, and micro hardness are given below.

Graph 4.1: Plain tensile strength vs. number of iterations


Graph 4.2: Notch tensile strength vs. number of iterations


Graph 4.3: Impact toughness vs. number of iterations


Graph 4.4: Micro hardness vs. number of iterations


CHAPTER-5

CONCLUSIONS

5.1 Conclusions

1. The GA is employed in the present work to optimize the process parameters, namely friction force, forging force, and burn-off, with the objective of maximizing the plain tensile strength, notch tensile strength, impact toughness, and micro hardness.

2. It is observed that the optimum values obtained using GA are very close to the

experimental values. Hence the GA program developed in this work can be used to

optimize the parameters accurately.

