Rapport Du Projet RO


Republic of Tunisia
Ministry of Higher Education and Scientific Research

University of Carthage

Tunisia Polytechnic School

Operations Research Project

Solving the Knapsack Problem Using the Genetic Algorithm

May-June 2013

Elaborated by:
Lotfi Slim
Akram Bedoui
Second year engineering students
Option: Economics and Scientific Management

Supervised by:
Mme Safa Bhar
Operations Research Professor at TPS

Academic Year 2012-2013


Abstract

The knapsack problem is a problem in combinatorial optimization where, given a set of items, each with a weight and a value, we aim at determining the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It has been studied for more than a century, and many approaches have been developed to solve it. One of them is the genetic algorithm, an evolutionary method inspired by the process of natural selection. Computer science comes to the aid of mathematics to obtain fast near-optimal solutions to the KP.

Keywords: Knapsack problem, optimization, metaheuristic, evolutionary algorithms, genetic algorithm, natural selection, Java application.


Contents

1 Introduction

2 Knapsack Problem Setting
   2.1 Origins
   2.2 Applications
   2.3 Problem formulation
   2.4 Problem variations
      2.4.1 Bounded vs unbounded knapsack problems
      2.4.2 Multiple-choice knapsack problem
      2.4.3 Subset sum problem
      2.4.4 Multiple knapsack problem
      2.4.5 Quadratic knapsack problem

3 Genetic Algorithm Formulation
   3.1 Overview
      3.1.1 Metaheuristic
      3.1.2 Evolutionary algorithms
   3.2 Genetic algorithms
      3.2.1 Brief history of GA
      3.2.2 GA concepts
      3.2.3 Flowchart explanation
   3.3 GA operators and terminologies
      3.3.1 Encoding
      3.3.2 Fitness function
      3.3.3 Selection
      3.3.4 Crossover
      3.3.5 Mutation
      3.3.6 Penalizing
   3.4 Advantages Of GAs
      3.4.1 Global Search Methods
      3.4.2 Blind Search Methods
      3.4.3 Probabilistic rules
      3.4.4 Parallel machines applicability

4 Implementation
   4.1 Programming languages used
   4.2 Graphical Interface
   4.3 C code listing
   4.4 Java code listing
   4.5 Tests and results

Conclusion


List of Figures

2.1 Knapsack problem illustration (from Wikipedia)
2.2 Knapsack example of solution

3.1 Structure of a single population evolutionary algorithm
3.2 Flowchart of genetic algorithm
3.3 Binary encoding example
3.4 Permutation encoding example
3.5 Value encoding example
3.6 Single Point Crossover example
3.7 Two Point Crossover example
3.8 Uniform Crossover example
3.9 Flipping mutation example
3.10 Interchanging mutation example
3.11 Reversing mutation example

4.1 Graphical interface: setting the parameters
4.2 Graphical interface: showing results


Chapter 1

Introduction

In the field of engineering, Operations Research (OR) is of great use, as it is one of the most powerful tools of decision making. By using techniques such as mathematical modeling to analyze complex situations, operations research gives executives the power to make more effective decisions and build more productive systems, based on more complete data, consideration of all available options, careful predictions of outcomes and estimates of risk, and the latest decision tools and techniques such as simulation, optimization, probability and statistics.

Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex problems. We say "optimal or near-optimal solutions" because the outcome really depends on the method used to solve the given problem. In fact, in this domain we distinguish between exact and heuristic methods. While the former give the exact solution, the latter converge towards it, within an error margin from the optimal solution. The reason why we use the latter is that they converge "very quickly" to a solution: mathematically, their running time grows polynomially, in contrast to the exponential running time of exact methods.

A vast variety of problems have been formulated since the second half of the last century, and many solutions have been proposed for them. One of the most popular and typical examples is the Knapsack Problem, which is treated in the present work due to its simplicity. In search of a solution, we will opt for the genetic algorithm, an example of the metaheuristic evolutionary algorithms.

This report summarizes the work that we did during the last weeks and is divided into three main parts. While the first two parts are dedicated to the theoretical presentation of the Knapsack Problem and the Genetic Algorithm respectively, we present in the last part the computational and programming work we elaborated.


Chapter 2

Knapsack Problem Setting

2.1 Origins

Suppose we are planning a hiking trip, and we are therefore interested in filling a knapsack with items that are considered necessary for the trip. There are N different item types that are deemed desirable; these could include a bottle of water, an apple, an orange, a sandwich, and so forth. Each item type has a given set of two attributes, namely a weight (or volume, or in other cases a cost) and a value (or a profit) that quantifies the level of importance associated with each unit of that type of item. Since the knapsack has a limited weight (or volume) capacity, the problem of interest is to figure out how to load the knapsack with a combination of units of the specified types of items that yields the greatest total value. What we have just described is called the knapsack, or the rucksack, problem.

Figure 2.1: Knapsack problem illustration (from Wikipedia)

Figure 2.2: Knapsack example of solution


2.2 Applications

A large variety of resource allocation problems can be cast in the framework of a knapsack problem. The general idea is to think of the capacity of the knapsack as the available amount of a resource, and of the item types as activities to which this resource can be allocated. Two quick examples are the allocation of an advertising budget to the promotions of individual products and the allocation of your effort to the preparation of final exams in different subjects.

In finance, the selection of capital investments and financial portfolios and the selection of assets for asset-backed securitization are examples that show how solving this problem can allow companies to be more competitive and profitable.

2.3 Problem formulation

The knapsack problem has been studied for more than a century, with early works dating as far back as 1897. The great mathematician Tobias Dantzig is thought to be the first to have set the problem mathematically, as follows:

Let there be n items, x1 to xn, where xi has a value vi and a weight wi. The maximum weight that we can carry in the bag is W. It is common to assume that all values and weights are nonnegative. To simplify the representation, we also assume that the items are listed in increasing order of weight.

Thus, the optimization problem is written as:

maximize    ∑_{i=1}^{n} vi xi

subject to  ∑_{i=1}^{n} wi xi ≤ W

            xi ∈ {0, 1}
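As a concrete point of comparison with the heuristic developed later, the 0-1 formulation above can be solved exactly by dynamic programming when the capacity is small. The following Java sketch is illustrative only: the item data is hypothetical and the code is not part of the project's listings.

```java
// Exact 0-1 knapsack by dynamic programming over capacities.
// Hypothetical data for illustration; not part of the project's code.
public class Knapsack01 {
    static int solve(int[] w, int[] v, int W) {
        int[] best = new int[W + 1];            // best[c] = max value within capacity c
        for (int i = 0; i < w.length; i++)
            for (int c = W; c >= w[i]; c--)     // descending: each item used at most once
                best[c] = Math.max(best[c], best[c - w[i]] + v[i]);
        return best[W];
    }
    public static void main(String[] args) {
        int[] w = {2, 3, 4, 5}, v = {3, 4, 5, 6};
        System.out.println(solve(w, v, 5));     // prints 7: take the items of weight 2 and 3
    }
}
```

This exact method runs in O(nW) time, which is why heuristics become attractive as instances grow.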

2.4 Problem variations

The last formulation is known as the 0-1 knapsack problem, as it restricts the number xi of copies of each kind of item to zero or one. But there is a huge number of variations of the knapsack problem in the scientific literature, among them:

2.4.1 Bounded vs unbounded knapsack problems

The bounded knapsack problem (BKP) removes the restriction that there is only one of each item, but restricts the number xi of copies of each kind of item to an integer value ci. Mathematically, the bounded knapsack problem can be formulated as:

maximize    ∑_{i=1}^{n} vi xi

subject to  ∑_{i=1}^{n} wi xi ≤ W

            xi ∈ {0, 1, ..., ci}

The unbounded knapsack problem (UKP, sometimes called the integer knapsack problem) places no upper bound on the number of copies of each kind of item. It can be formulated as above, except that the only restriction on xi is that it be a non-negative integer. If the example with the colored boxes of Figure 2.1 is viewed as an unbounded knapsack problem, then the solution is to take three yellow boxes and three grey boxes. Mathematically, the UKP can be formulated as:

maximize    ∑_{i=1}^{n} vi xi

subject to  ∑_{i=1}^{n} wi xi ≤ W

            xi ≥ 0, xi integer
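In code, the only difference from an exact 0-1 solver is the direction of the capacity loop: iterating capacities in ascending order lets each item be reused. A sketch with hypothetical data, again outside the project's own listings:

```java
// Unbounded knapsack by dynamic programming: the ascending capacity loop
// allows unlimited copies of each item. Hypothetical data for illustration.
public class KnapsackUnbounded {
    static int solve(int[] w, int[] v, int W) {
        int[] best = new int[W + 1];
        for (int i = 0; i < w.length; i++)
            for (int c = w[i]; c <= W; c++)     // ascending: copies may accumulate
                best[c] = Math.max(best[c], best[c - w[i]] + v[i]);
        return best[W];
    }
    public static void main(String[] args) {
        System.out.println(solve(new int[]{3, 4}, new int[]{5, 6}, 10)); // prints 16
    }
}
```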


2.4.2 Multiple-choice knapsack problem

If the items are subdivided into k classes denoted Ni, and exactly one item must be taken from each class, we get the multiple-choice knapsack problem:

maximize    ∑_{i=1}^{k} ∑_{j∈Ni} vij xij

subject to  ∑_{i=1}^{k} ∑_{j∈Ni} wij xij ≤ W

            ∑_{j∈Ni} xij = 1    ∀ 1 ≤ i ≤ k

            xij ∈ {0, 1}        ∀ 1 ≤ i ≤ k and ∀ j ∈ Ni

2.4.3 Subset sum problem

The subset sum problem (often the corresponding decision problem is given instead) is a special case of the decision and 0-1 problems where, for each kind of item, the weight equals the value (vi = wi):

maximize    ∑_{i=1}^{n} vi xi

subject to  ∑_{i=1}^{n} vi xi ≤ W

            xi ∈ {0, 1}

It frequently arises in the field of cryptography.

2.4.4 Multiple knapsack problem

If we have n items and m knapsacks with capacities Wi, we get:


maximize    ∑_{i=1}^{m} ∑_{j=1}^{n} vj xij

subject to  ∑_{j=1}^{n} wj xij ≤ Wi    ∀ 1 ≤ i ≤ m

            ∑_{i=1}^{m} xij ≤ 1        ∀ 1 ≤ j ≤ n

            xij ∈ {0, 1}               ∀ 1 ≤ i ≤ m and ∀ 1 ≤ j ≤ n

As a special case of the multiple knapsack problem, when the profits are equal to the weights and all bins have the same capacity, we get the multiple subset sum problem. This variation is used in many loading and scheduling problems in Operations Research and has a polynomial-time approximation scheme (PTAS).

2.4.5 Quadratic knapsack problem

The quadratic knapsack problem (QKP) maximizes a quadratic objective function subject to binary variables and a linear capacity constraint. The problem was first treated in the late 70's and is described by:

maximize    ∑_{j=1}^{n} vj xj + ∑_{i=1}^{n-1} ∑_{j=i+1}^{n} vij xi xj

subject to  ∑_{j=1}^{n} wj xj ≤ W

            xj ∈ {0, 1}    ∀ 1 ≤ j ≤ n


Chapter 3

Genetic Algorithm Formulation

3.1 Overview

To solve the knapsack problem, many approaches have been developed over the past decades. In this framework we will focus on one particular method: the genetic algorithm. A GA is a metaheuristic that mimics the process of natural evolution to generate useful solutions to optimization and search problems.

3.1.1 Metaheuristic

As defined by Osman and Laporte, a metaheuristic is "an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find efficiently near-optimal solutions."

Metaheuristics are strategies that guide the search process. The goal is to efficiently explore the search space in order to find near-optimal solutions. The techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes. Metaheuristics are approximate and usually stochastic (non-deterministic).

They may incorporate mechanisms to avoid getting trapped in confined areas of the search space. The basic concepts of metaheuristics permit a description at an abstract level: metaheuristics are not problem-specific, but they may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper-level strategy. Today's more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.


3.1.2 Evolutionary algorithms

EA principles

Evolutionary algorithms are stochastic search methods that mimic natural biological evolution. They operate on a population of potential solutions, applying the principle of survival of the fittest to produce better and better approximations to a solution. At each generation, a new set of approximations is created by selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics. This process leads to the evolution of populations of individuals that are better suited to their environment than the individuals they were created from, just as in natural adaptation.

Evolutionary algorithms model natural processes such as selection, recombination, mutation, migration, locality and neighborhood. The following figure shows the structure of a simple evolutionary algorithm. Evolutionary algorithms work on populations of individuals instead of single solutions; in this way the search is performed in a parallel manner.

Figure 3.1: Structure of a single population evolutionary algorithm

EA techniques

Similar techniques differ in the implementation details and in the nature of the particular problem to which they are applied:

- Genetic algorithm: This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems.

- Genetic programming: Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem.

- Evolutionary programming: Similar to genetic programming, but the structure of the program is fixed and its numerical parameters are allowed to evolve.

- Gene expression programming: Like genetic programming, GEP also evolves computer programs, but it explores a genotype-phenotype system, where computer programs of different sizes are encoded in linear chromosomes of fixed length.

- Evolution strategy: Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates.

- Memetic algorithm: The hybrid form of population-based methods. Inspired by both the Darwinian principles of natural evolution and Dawkins' notion of a meme, it is viewed as a form of population-based algorithm coupled with individual learning procedures capable of performing local refinements. The focus of research is thus to balance exploration and exploitation in the search.

- Differential evolution: Based on vector differences and therefore primarily suited for numerical optimization problems.

- Neuroevolution: Similar to genetic programming, but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect.

- Learning classifier system: LCS is a machine learning system with close links to reinforcement learning and genetic algorithms.


Biological terminology

At this point it is useful to formally introduce some of the biological terminology that will be used throughout this work. In the context of genetic algorithms, these biological terms are used in the spirit of analogy with real biology, though the entities they refer to are much simpler than the real biological ones.

All living organisms consist of cells, and each cell contains the same set of one or more chromosomes (strings of DNA) that serve as a "blueprint" for the organism. A chromosome can be conceptually divided into genes, each of which encodes a particular protein. Very roughly, one can think of a gene as encoding a trait, such as eye color. The different possible "settings" for a trait (e.g., blue, brown, hazel) are called alleles. Each gene is located at a particular locus (position) on the chromosome.

Many organisms have multiple chromosomes in each cell. The complete collection of genetic material (all chromosomes taken together) is called the organism's genome. The term genotype refers to the particular set of genes contained in a genome. Two individuals that have identical genomes are said to have the same genotype. The genotype gives rise, under fetal and later development, to the organism's phenotype, its physical and mental characteristics, such as eye color, height, brain size, and intelligence.

Organisms whose chromosomes are arrayed in pairs are called diploid; organisms whose chromosomes are unpaired are called haploid. In nature, most sexually reproducing species are diploid, including human beings, who each have 23 pairs of chromosomes in each somatic (non-germ) cell in the body. During sexual reproduction, recombination (or crossover) occurs: in each parent, genes are exchanged between each pair of chromosomes to form a gamete (a single chromosome), and then gametes from the two parents pair up to create a full set of diploid chromosomes. In haploid sexual reproduction, genes are exchanged between the two parents' single-strand chromosomes. Offspring are subject to mutation, in which single nucleotides (elementary bits of DNA) are changed from parent to offspring, the changes often resulting from copying errors. The fitness of an organism is typically defined as the probability that the organism will live to reproduce (viability) or as a function of the number of offspring the organism has (fertility).

In genetic algorithms, the term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The "genes" are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution (e.g., in the context of multiparameter function optimization, the bits encoding a particular parameter might be considered to be a gene). An allele in a bit string is either 0 or 1; for larger alphabets more alleles are possible at each locus. Crossover typically consists of exchanging genetic material between two single-chromosome haploid parents. Mutation consists of flipping the bit at a randomly chosen locus (or, for larger alphabets, replacing the symbol at a randomly chosen locus with a randomly chosen new symbol).

Most applications of genetic algorithms employ haploid individuals, particularly single-chromosome individuals. The genotype of an individual in a GA using bit strings is simply the configuration of bits in that individual's chromosome. Often there is no notion of "phenotype" in the context of GAs, although more recently many workers have experimented with GAs in which there is both a genotypic level and a phenotypic level (e.g., the bit-string encoding of a neural network and the neural network itself).

Natural Selection

In nature, the individual that has better survival traits will survive for a longer period of time. This in turn provides it a better chance to produce offspring with its genetic material. Therefore, after a long period of time, the entire population will consist of lots of genes from the superior individuals and fewer from the inferior individuals. In a sense, the fittest survived and the unfit died out. This force of nature is called natural selection.

The existence of competition among individuals of a species was certainly recognized before Darwin. The mistake made by the older theorists (like Lamarck) was to believe that the environment has a direct hereditary effect on an individual; that is, that the environment forces an individual to adapt to it. The molecular explanation of evolution shows that this is biologically impossible. The species does not adapt to the environment; rather, only the fittest survive.

3.2 Genetic algorithms

3.2.1 Brief history of GA

Although simulating evolution computationally has been possible since the 50's, it took John Holland (from the University of Michigan) more than a decade of work on adaptive systems to first introduce the concepts of GA. The main results of his work were published in his book Adaptation in Natural and Artificial Systems in 1975. But research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania. Nowadays genetic algorithms are applied to a broad range of subjects.

3.2.2 GA concepts

The most common type of genetic algorithm works like this: a population of individuals is created at random. The individuals in the population are then evaluated. The evaluation function is provided by the programmer and gives the individuals a score based on how well they perform at the given task. Two individuals are then selected based on their fitness: the higher the fitness, the higher the chance of being selected. These individuals then "reproduce" to create one or more offspring, after which the offspring are mutated randomly. This continues until a suitable solution has been found or a certain number of generations have passed, depending on the needs of the programmer.

Figure 3.2: Flowchart of genetic algorithm


3.2.3 Flowchart explanation

Step 1: Create a random initial population

An initial population is created from a random selection of solutions. These solutions are represented by chromosomes, as in living organisms. A chromosome is a packet of genetic information, organized in a standard way, that completely defines an individual solution. The genetic structure (the way in which that information is packed and defined) enables the solutions to be manipulated; the genetic operators (the way in which that information can be manipulated) enable the solutions to reproduce and evolve.

Step 2: Evaluate Fitness

A fitness value is assigned to each solution (chromosome) depending on how close it actually is to solving the problem. Therefore we need to define the problem, model it, simulate it, or have a data set as sample answers. Each possible solution has to be tested in the problem, and the answer evaluated (or marked) on how good it is. The overall mark of each solution relative to all the marks of all solutions produces a fitness ranking.

Step 3: Produce Next Generation

Those chromosomes with a higher fitness value are more likely to reproduce offspring. The population for the next generation is produced using the genetic operators: reproduction by copy, crossover and mutation are applied to the chromosomes according to the selection rule. This rule states that the fitter an individual is, the higher the probability it has to reproduce.

Step 4: Next Generation or Termination

If the population in the last generation contains a solution that produces an output that is close enough or equal to the desired answer, then the problem has been solved. This is the ideal termination criterion of the evolution. If this is not the case, then the new generation will go through the same process as their parents did, and the evolution will continue. This will iterate until a solution is reached or another termination criterion is satisfied. A termination criterion that must always be included is a time-out (either as computing time or as number of generations evaluated), since one drawback of evolutionary methods is that it is very difficult (impossible most of the time) to know whether, or when, the ideal termination criterion is going to be satisfied.
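The four steps above can be sketched end-to-end for the knapsack problem. Everything below (seed, population size, mutation rate, binary tournament selection) is an illustrative choice of ours, not the parameter set used in the project's application:

```java
import java.util.Random;

// Minimal GA loop for the 0-1 knapsack, following the four steps above.
// All parameter values here are illustrative, not the project's.
public class MiniGA {
    static final Random RNG = new Random(42);

    // Death-penalty fitness: overweight chromosomes score 0.
    static int fitness(boolean[] g, int[] w, int[] v, int W) {
        int wt = 0, val = 0;
        for (int i = 0; i < g.length; i++) if (g[i]) { wt += w[i]; val += v[i]; }
        return wt <= W ? val : 0;
    }

    // Binary tournament: pick two random individuals, keep the fitter.
    static boolean[] tournament(boolean[][] pop, int[] w, int[] v, int W) {
        boolean[] a = pop[RNG.nextInt(pop.length)];
        boolean[] b = pop[RNG.nextInt(pop.length)];
        return fitness(a, w, v, W) >= fitness(b, w, v, W) ? a : b;
    }

    static int run(int[] w, int[] v, int W, int popSize, int generations) {
        int n = w.length;
        boolean[][] pop = new boolean[popSize][n];
        for (boolean[] g : pop)                       // Step 1: random initial population
            for (int i = 0; i < n; i++) g[i] = RNG.nextBoolean();
        int best = 0;
        for (int gen = 0; gen < generations; gen++) { // Step 4: time-out termination
            boolean[][] next = new boolean[popSize][];
            for (int k = 0; k < popSize; k++) {       // Step 3: produce next generation
                boolean[] p1 = tournament(pop, w, v, W), p2 = tournament(pop, w, v, W);
                boolean[] child = new boolean[n];
                int cut = 1 + RNG.nextInt(n - 1);     // single-point crossover
                for (int i = 0; i < n; i++) child[i] = i < cut ? p1[i] : p2[i];
                for (int i = 0; i < n; i++)           // flipping mutation, rate 0.05
                    if (RNG.nextDouble() < 0.05) child[i] = !child[i];
                next[k] = child;
            }
            pop = next;
            for (boolean[] g : pop)                   // Step 2: evaluate fitness
                best = Math.max(best, fitness(g, w, v, W));
        }
        return best;
    }
}
```

On the small instance w = (2, 3, 4, 5), v = (3, 4, 5, 6), W = 5, the optimum is 7 (the items of weight 2 and 3); a run of this sketch typically finds it within a few generations.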

3.3 GA operators and terminologies

3.3.1 Encoding

Encoding is the process of representing a solution in the form of a string that conveys the necessary information. Just as each gene in a chromosome controls a particular characteristic of the individual, each bit in the string represents a characteristic of the solution.

Some encoding methods are:

Binary encoding

The most common method of encoding is binary coding. Chromosomes are strings of 1s and 0s, and each position in the chromosome represents a particular characteristic of the problem.

Figure 3.3: Binary encoding example
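For the knapsack problem, binary encoding is natural: bit i of the chromosome is set when item i is packed. A minimal sketch (the class and method names below are ours, not the report's):

```java
import java.util.Random;

// Binary encoding for the knapsack: bit i is 1 when item i is packed.
// Illustrative sketch; names are ours, not the project's.
public class BinaryChromosome {
    final boolean[] genes;

    BinaryChromosome(int items, Random rng) {
        genes = new boolean[items];
        for (int i = 0; i < items; i++) genes[i] = rng.nextBoolean();
    }

    @Override public String toString() {
        StringBuilder s = new StringBuilder();
        for (boolean g : genes) s.append(g ? '1' : '0');
        return s.toString();
    }
}
```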

Permutation encoding

Useful in ordering problems such as the TSP, where every chromosome is a string of numbers, each of which represents a city to be visited.

Figure 3.4: Permutation encoding example


Value encoding

Used in problems where complicated values, such as real numbers, are needed and where binary encoding would not suffice. Good for some problems, but it is often necessary to develop specific crossover and mutation techniques for these chromosomes.

Figure 3.5: Value encoding example

3.3.2 Fitness function

A fitness function value quantifies the optimality of a solution. The value is used to rank a particular solution against all the other solutions. A fitness value is assigned to each solution depending on how close it actually is to the optimal solution of the problem. An ideal fitness function correlates closely with the goal and is quickly computable.
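For the knapsack problem, a simple fitness function sums the values of the packed items and returns 0 for overweight (infeasible) chromosomes; penalizing, discussed later, is an alternative. A sketch with hypothetical data:

```java
// Knapsack fitness: total value if the weight limit holds, 0 otherwise.
// Illustrative sketch; infeasible solutions are handled later by penalties.
public class Fitness {
    static int evaluate(boolean[] genes, int[] w, int[] v, int W) {
        int weight = 0, value = 0;
        for (int i = 0; i < genes.length; i++)
            if (genes[i]) { weight += w[i]; value += v[i]; }
        return weight <= W ? value : 0;
    }
    public static void main(String[] args) {
        boolean[] g = {true, false, true};       // pack items 0 and 2
        System.out.println(evaluate(g, new int[]{2, 3, 4}, new int[]{3, 4, 5}, 6)); // prints 8
    }
}
```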

3.3.3 Selection

Selection is the process that determines which solutions are to be preserved and allowed to reproduce and which ones deserve to die out. The primary objective of the selection operator is to emphasize the good solutions and eliminate the bad solutions in a population while keeping the population size constant. The rule is: select the best, discard the rest.

Practically, we identify the good solutions in a population, make multiple copies of them, and eliminate bad solutions from the population, so that multiple copies of good solutions can be placed in the population.

There are different techniques to implement selection in Genetic Algorithms:

Tournament selection

In tournament selection, several tournaments are played among a few individuals chosen at random from the population. The winner of each tournament is selected for the next generation. Selection pressure can be adjusted easily by changing the tournament size: weak individuals have a smaller chance of being selected if the tournament size is large.
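A tournament of size k can be implemented by drawing k random competitors and keeping the fittest; this sketch (the names and signature are ours) illustrates the idea:

```java
import java.util.Random;

// Tournament selection sketch: draw k random competitors, keep the fittest.
// Returns the index of the winner; k controls the selection pressure.
public class Tournament {
    static int select(double[] fitness, int k, Random rng) {
        int best = rng.nextInt(fitness.length);
        for (int t = 1; t < k; t++) {
            int challenger = rng.nextInt(fitness.length);
            if (fitness[challenger] > fitness[best]) best = challenger;
        }
        return best;
    }
}
```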


Roulette Wheel selection

Each string in the current population has a slot assigned to it whose size is proportional to its fitness. We spin the weighted roulette wheel thus defined n times (where n is the total number of solutions). Each time the roulette wheel stops, the string corresponding to that slot is selected. Strings that are fitter are assigned a larger slot and hence have a better chance of appearing in the new population.
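The spin itself can be implemented by drawing a uniform landing point on the cumulative fitness scale. A sketch assuming strictly positive fitness values:

```java
import java.util.Random;

// Roulette-wheel selection: an individual's slot is proportional to its
// fitness. A sketch assuming all fitness values are strictly positive.
public class Roulette {
    static int spin(double[] fitness, Random rng) {
        double total = 0;
        for (double f : fitness) total += f;
        double r = rng.nextDouble() * total;   // landing point on the wheel
        double acc = 0;
        for (int i = 0; i < fitness.length; i++) {
            acc += fitness[i];
            if (r < acc) return i;
        }
        return fitness.length - 1;             // guard against rounding error
    }
}
```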

Rank selection

The roulette wheel selection will have problems when the fitness values differ very much: chromosomes with low fitness values will have very little chance of being selected. This problem can be avoided using rank selection, in which every chromosome receives its fitness from the ranking: the worst has fitness 1 and the best has fitness N. Rank selection preserves diversity but results in slow convergence.

Elitism

Elitism is a method which copies the best chromosomes into the new offspring population before crossover and mutation, because when creating a new population by crossover or mutation, the best chromosome might be lost. It forces GAs to retain some number of the best individuals at each generation, and it has been found that elitism significantly improves performance.

3.3.4 Crossover

Crossover is the process in which two chromosomes (strings) combine their genetic material (bits) to produce a new offspring which possesses characteristics of both. Two strings are picked from the mating pool at random to cross over. The method chosen depends on the encoding method.

Crossover between two good solutions MAY NOT ALWAYS yield a better or equally good solution; but since the parents are good, the probability of the child being good is high. If the offspring is not good (a poor solution), it will be removed in the next iteration during selection.

The best-known methods are:


Single Point Crossover

A random point is chosen on the individual chromosomes (strings) and the genetic material is exchanged at this point.

Figure 3.6: Single Point Crossover example
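On bit strings, single point crossover amounts to copying each parent up to a random cut and swapping the tails. An illustrative sketch (names are ours):

```java
import java.util.Random;

// Single-point crossover on bit-string chromosomes: genes before the cut
// come from one parent, genes after it from the other. Returns both children.
public class Crossover {
    static boolean[][] singlePoint(boolean[] a, boolean[] b, Random rng) {
        int cut = 1 + rng.nextInt(a.length - 1);   // cut strictly inside the string
        boolean[] c1 = a.clone(), c2 = b.clone();
        for (int i = cut; i < a.length; i++) { c1[i] = b[i]; c2[i] = a[i]; }
        return new boolean[][]{c1, c2};
    }
}
```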

Two Point Crossover

Two random points are chosen on the individual chromosomes (strings) and the genetic material is exchanged at these points.

Figure 3.7: Two Point Crossover example

Uniform Crossover

Each gene (bit) is selected randomly from one of the corresponding genes of the

parent chromosomes. Uniform Crossover yields ONLY 1 offspring.

Figure 3.8: Uniform Crossover example


3.3.5 Mutation

Mutation is the process by which a string is deliberately changed so as to maintain diversity in the population set. The mutation probability determines how often parts of a chromosome will be mutated. For chromosomes using binary encoding, randomly selected bits are inverted. There are three types of mutation: flipping, interchanging and reversing.

Flipping

Figure 3.9: Flipping mutation example

Interchanging

Figure 3.10: Interchanging mutation example

Reversing

The number of bits to be inverted depends on the Mutation Probability.

Figure 3.11: Reversing mutation example

3.3.6 Penalizing

This "death penalty" heuristic is a popular option in many evolutionary techniques (e.g., evolution strategies). Note that rejecting infeasible individuals offers a few simplifications of the algorithm.


The method of eliminating infeasible solutions from a population may work reasonably well when the feasible search space is convex and constitutes a reasonable part of the whole search space (e.g., evolution strategies do not allow equality constraints, since with such constraints the ratio between the sizes of the feasible and infeasible search spaces is zero). Otherwise, such an approach has serious limitations. For example, for many search problems where the initial population consists of infeasible individuals only, it might be essential to improve them (as opposed to rejecting them). Moreover, quite often the system can reach the optimum solution more easily if it is possible to "cross" an infeasible region (especially in non-convex feasible search spaces).

Penalizing infeasible individuals is the most common approach in the genetic algorithms community. Several researchers have studied heuristics for the design of penalty functions. Some hypotheses were formulated by Richardson, Siedlecki and Sklanski in 1989:

– penalties which are functions of the distance from feasibility perform better than those which are merely functions of the number of violated constraints;

– for a problem having few constraints, and few full solutions, penalties which are solely functions of the number of violated constraints are not likely to find solutions;

– good penalty functions can be constructed from two quantities: the maximum completion cost and the expected completion cost;

– penalties should be close to the expected completion cost, but should not frequently fall below it. The more accurate the penalty, the better the solutions found will be. When the penalty often underestimates the completion cost, the search may not find a solution;

– a genetic algorithm with a variable penalty coefficient outperforms a fixed penalty factor algorithm, where the variability of the penalty coefficient is determined by a heuristic rule.

3.4 Advantages Of GAs

3.4.1 Global Search Methods

GAs search for the function optimum starting from a population of points of the function domain, not from a single point. This characteristic suggests that GAs are global search methods: they can, in fact, climb many peaks in parallel, reducing the probability of getting trapped in a local optimum, which is one of the drawbacks of traditional optimization methods.


3.4.2 Blind Search Methods

GAs only use information about the objective function. They do not require knowledge of the first derivative or any other auxiliary information, allowing a number of problems to be solved without the need to formulate restrictive assumptions. For this reason, GAs are often called blind search methods.

3.4.3 Probabilistic rules

GAs use probabilistic transition rules during iterations, unlike traditional methods that use fixed transition rules. This makes them more robust and applicable to a larger range of problems.

3.4.4 Parallel machines applicability

GAs can easily be used on parallel machines. Since in real-world design optimization problems most computational time is spent evaluating solutions, with multiple processors all solutions in a population can be evaluated in a distributed manner. This reduces the overall computational time substantially.


Chapter 4

Implementation

4.1 Programming languages used

In order to build an application that solves KPs, we used two programming languages:

– C

The computational part (the details of the genetic algorithm) is developed in C. The C files are then included in the Java project as a library.

– Java

To have an interactive application, we developed a graphical interface where the user can set the KP parameters. The main classes are:

* KnapSacProject.java: the first class to be run.

* Interface.java: contains the graphical interface described above.

* algorithm.java: loads the C library and solves the problem.

* StatusColumnCellRenderer.java: manages the colouring of the cells in the results table.

4.2 Graphical Interface

After resolution, if an item is chosen as part of the optimal solution the corresponding cell is colored in green and the user should take it; otherwise it is colored in red and the user should leave it.


Figure 4.1: Graphical interface: setting the parameters

Figure 4.2: Graphical interface: showing results

4.3 C code listing

The C files are converted into headers so that Java can load them. The most important functions are as follows:

gaParameters.h

/* GA parameters: generations, population size and how many children per GEN */
#define POP 70
#define CHILDREN 40

/* the rate for penalizing the unsatisfied constraint */
#define RATE 10

/* How we represent each individual, with its chromosome, its fitness
   and its probability */
typedef struct crom {
    char cromo[150];
    int fitness;
    int prob;
} crom;

int object(crom cromo, int ProfitT[], int size);
int penalty(crom cromo, int CostT[], int Cmax, int size);
void evaluate(crom *cromo, int ProfitT[], int CostT[], int Cmax, int size);
void crossover(crom par1, crom par2, crom *son1, crom *son2, int size);
void mutation(crom *cromo, int size);
void probability(crom *pop, int size_pop);
void select(crom *pop, int size_pop, int n_selections, crom *result, int size);
int random(int start, int end);
int iround(double x);

galib.h

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <time.h>
#include "gaParameters.h"

void ga_solve(int *result, int *CostT, int *ProfitT, int Cmax,
              int GENS, int TURNS, int size) {

    int i, j, k, gen, t;
    time_t start, stop;
    time(&start);

    /* declaration of the population, their children, an auxiliary
       variable and the best individual ever */
    crom pop[POP], children[CHILDREN], temp[POP + CHILDREN], best, best_turn;

    /* in the beginning the best individual is the zero one, so... no
       need to set any more parameters since we'll just use the fitness
       for comparison */
    best.fitness = 0;

    /* Let's run this program several times to get a report of its
       performance, so why not put this in a loop instead of running it
       several times and taking notes? */
    for (t = 0; t < TURNS; ++t) {

        best_turn.fitness = 0;

        /* Let's generate the initial population at random
           (or pseudo-random as you wish) */
        for (i = 0; i < POP; ++i) {
            for (j = 0; j < size; ++j) {
                pop[i].cromo[j] = random(0, 2);
            }
        }

        gen = 0;

        /* repeat the reproduction steps until the max number of
           generations is reached */
        while (gen++ < GENS) {

            /* First, let's see how good they are... */
            for (i = 0; i < POP; ++i)
                evaluate(&pop[i], ProfitT, CostT, Cmax, size);

            /* ... and what is the chance of each one */
            probability(pop, POP);

            /* And two by two, my human zoo, shall reproduce until the
               desired number of children is reached */
            for (i = 0; i < CHILDREN; i += 2) {
                select(pop, POP, 2, temp, size);
                crossover(temp[0], temp[1], &children[i], &children[i + 1], size);
            }

            /* Are our children good enough? */
            for (i = 0; i < CHILDREN; ++i)
                evaluate(&children[i], ProfitT, CostT, Cmax, size);

            /* Let's do some mutation experiments on our population */
            i = random(0, POP);
            mutation(&pop[i], size);

            /* Let's see how good our mutant is */
            evaluate(&pop[i], ProfitT, CostT, Cmax, size);

            /* Let's gather together the oldies and the newbies */

            /* first the oldies */
            for (i = 0; i < POP; ++i) {
                temp[i].fitness = pop[i].fitness;
                for (j = 0; j < size; ++j)
                    temp[i].cromo[j] = pop[i].cromo[j];
            }

            /* and now our children */
            for (k = 0; i < POP + CHILDREN; ++i, ++k) {
                temp[i].fitness = children[k].fitness;
                for (j = 0; j < size; ++j)
                    temp[i].cromo[j] = children[k].cromo[j];
            }

            /* Let's elect the best of this generation */
            for (i = 1, k = 0; i < POP + CHILDREN; ++i)
                if (temp[i].fitness > temp[k].fitness) {
                    /* We are looking for someone who respects the constraints */
                    if (!penalty(temp[i], CostT, Cmax, size))
                        k = i;
                }

            /* Let's store it for later */
            if (temp[k].fitness > best_turn.fitness) {
                for (i = 0; i < size; ++i)
                    best_turn.cromo[i] = temp[k].cromo[i];
                best_turn.fitness = temp[k].fitness;
            }

            /* Now the party can begin: who will live and who shall die? */
            probability(temp, POP + CHILDREN);
            select(temp, POP + CHILDREN, POP, pop, size);

        } /* End of this Generation */

        if (best_turn.fitness > best.fitness) {
            for (i = 0; i < size; ++i)
                best.cromo[i] = best_turn.cromo[i];
            best.fitness = best_turn.fitness;
        }

    } /* End of this Turn */

    /* And the best individual ever was... */
    for (i = 0; i < size; ++i) {
        result[i] = best.cromo[i];
    }

    /* The last two slots are reserved for the objective function and
       the penalty */
    result[size] = object(best, ProfitT, size);
    result[size + 1] = penalty(best, CostT, Cmax, size);

    for (i = 0; i < size; ++i) {
        printf("The %d-th gene is equal to %d\n", i + 1, best.cromo[i]);
    }

    printf("The best fitness was: %d\n", object(best, ProfitT, size));
    printf("Penalty: %d\n", penalty(best, CostT, Cmax, size));

    time(&stop);
    double dif = difftime(stop, start);
    printf("Elapsed time is %.4lf seconds.\n", dif);
}

/* this function will calculate the profit (objective) function */
int object(crom cromo, int ProfitT[], int size) {
    int objt = 0, i;

    for (i = 0; i < size; ++i) {
        objt += cromo.cromo[i] * ProfitT[i];
    }

    return objt;
}

/* this function will calculate the penalty function */
int penalty(crom cromo, int CostT[], int Cmax, int size) {

    int P = 0, Pn, i;
    for (i = 0; i < size; ++i) {
        P += cromo.cromo[i] * CostT[i];
    }

    /* if (P - Cmax) > 0 (the amount by which the constraint is exceeded),
       the value returned will be: penalty rate * this difference;
       else it will be 0, for the constraints were respected */
    Pn = ((P - Cmax) > 0) ? (RATE * (P - Cmax)) : 0;

    return Pn;
}


/* this function will evaluate each individual through the objective
   and the penalty function */
void evaluate(crom *cromo, int ProfitT[], int CostT[], int Cmax, int size) {

    int objt, P;

    /* Now let's apply the objective function to it */
    objt = object((*cromo), ProfitT, size);

    /* its penalty for not respecting our constraints */
    P = penalty((*cromo), CostT, Cmax, size);

    /* Let's guarantee that the fitness will always be positive, because
       we don't want the probability function to give us negative results.
       So if its penalty is too high, let's zero the result, taking it out
       of further selection (it will have a probability of 0% to be chosen) */
    cromo->fitness = (objt - P > 0) ? (objt - P) : 0;
}

/* this function will perform the breeding dance, where two individuals
   exchange part of their chromosomes */
void crossover(crom par1, crom par2, crom *son1, crom *son2, int size) {

    int point, i;

    /* Let's get a point between 0 and size */
    point = random(0, size);

    /* In the first part we copy the genes from parent 1 to child 1
       and from parent 2 to child 2 */
    for (i = 0; i < point; ++i) {
        son1->cromo[i] = par1.cromo[i];
        son2->cromo[i] = par2.cromo[i];
    }

    /* Here we'll "cross" their chromosomes, now copying from parent 2
       to child 1 and from parent 1 to child 2 */
    for (; i < size; ++i) {
        son1->cromo[i] = par2.cromo[i];
        son2->cromo[i] = par1.cromo[i];
    }
}

/* this function will perform the mutation, where one individual may
   have his life changed... or just die for it */
void mutation(crom *cromo, int size) {

    int point;

    /* Let's get a point between 0 and size */
    point = random(0, size);

    /* just invert the bit at the chosen point (bless the binary system) */
    cromo->cromo[point] = !cromo->cromo[point];
}

/* here we have a better rand() function, where we can have a
   well-defined range of values.
   NB: RAND_MAX is the maximum value returned by the rand function. */
int random(int start, int end) {
    return (int)((((double)rand() / ((double)RAND_MAX + 1)) * end) + start);
}

/* for getting a total probability near 100% we must round the numbers
   the right way, using the closest-to-even rule */
int iround(double x) {

    int r;
    double v;

    /* get the first decimal digit */
    v = x - (int)x;

    /* and turn it into an integer */
    v = v * 10.0;
    r = (int)v;

    /* if the first digit was greater than 5 we ceil the number */
    if (r > 5)
        v = ceil(x);
    /* less than 5, we floor it */
    else if (r < 5)
        v = floor(x);
    /* else we round to the closest even number */
    else if ((int)x % 2)
        v = ceil(x);
    else
        v = floor(x);

    r = (int)v;
    return r;
}


/* this function calculates the probability of each individual to be
   chosen, based on its fitness */
void probability(crom *pop, int size_pop) {

    int i;
    double prob;
    int sum = 0, prob_sum = 0;

    /* Let's calculate the sum of all fitness */
    for (i = 0; i < size_pop; ++i) {
        sum += pop[i].fitness;
    }

    /* now for each one we divide its fitness by the sum of all and
       multiply by 100, resulting in its percentage */
    for (i = 0; i < size_pop; ++i) {
        prob = (double)(100 * pop[i].fitness) / sum;
        pop[i].prob = iround(prob);

        /* just in case we want to see if we really have 100%
           (and rarely we do) */
        prob_sum += iround(prob);
    }
}

void select(crom *pop, int size_pop, int n_selections, crom *result, int size) {

    int i, j, k, choice, theone, tries = 0;
    char *h_pop;

    /* Let's use this dynamic array to avoid choosing the same
       individual twice */
    h_pop = (char *)malloc(sizeof(char) * size_pop);

    /* initialize it to 0, where 0 is not chosen and 1 is already chosen */
    memset((void *)h_pop, '\0', size_pop);

    for (i = 0; i < n_selections; ++i) {

        tries = 0;
        do {
            j = 0;
            theone = 0;

            /* 0 to 100 percent */
            choice = random(1, 100);

            /* sum the probabilities until we reach the percentage
               randomly chosen */
            while (theone < choice && j < size_pop)
                theone += pop[j++].prob;
            /* get back to the chosen one */
            --j;

            /* after the loop, j will store the index of the chosen one,
               but in case we have passed the limit... */
            j = j % size_pop;
            if (j < 0) j = 0;

            /* loop until we choose someone not chosen before, or we
               have tried more than 20 times */
        } while (h_pop[j] && tries++ < 20);

        /* this one is now chosen */
        h_pop[j] = 1;

        /* do the copy dance */
        for (k = 0; k < size; ++k)
            result[i].cromo[k] = pop[j].cromo[k];

        /* only the fitness will be copied, for the probability will be
           different */
        result[i].fitness = pop[j].fitness;
    }

    /* let's not waste memory */
    free(h_pop);
}

4.4 Java code listing

KnapSacProject.java

package knapsacproject;

public class KnapSacProject {
    public static void main(String[] args) {
        Interface in = new Interface();
        in.setVisible(true);
    }
}

Interface.java

package knapsacproject;

import com.algorithm.jni.algorithm;
import javax.swing.table.DefaultTableModel;

public class Interface extends javax.swing.JFrame {

    public native double[] geneticAlgorithm(int[] cost, int[] profit);

    public Interface() {
        initComponents();
    }

    private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
        DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
        Object[] objects = new Object[4];
        objects[0] = model.getRowCount() + 1;
        objects[1] = null;
        objects[2] = null;
        objects[3] = null;
        model.addRow(objects);
    }

    private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) {
        DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
        int[] rows = jTable1.getSelectedRows();
        for (int i = 0; i < rows.length; i++) {
            model.removeRow(rows[i] - i);
        }
        for (int i = 0; i < jTable1.getRowCount(); i++) {
            model.setValueAt(i + 1, i, 0);
        }
    }

    private void jButton3ActionPerformed(java.awt.event.ActionEvent evt) {
        jTextField17.setText("100");
        jTextField18.setText("1");
    }

    public int[] getTableData(DefaultTableModel model, Integer colIndex) {
        int nRow = model.getRowCount();
        int[] tableData = new int[nRow];
        for (int i = 0; i < nRow; i++) {
            tableData[i] = (int) model.getValueAt(i, colIndex);
        }
        return tableData;
    }

    private void jButton4ActionPerformed(java.awt.event.ActionEvent evt) {
        DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
        int nRow = model.getRowCount();
        try {
            algorithm algo = new algorithm(getTableData(model, 1),
                    getTableData(model, 2),
                    Integer.parseInt(jTextField17.getText()),
                    Integer.parseInt(jTextField18.getText()),
                    Integer.parseInt(jTextField1.getText()));
            int[] result = algo.getResult();
            for (int i = 0; i < nRow; i++) {
                if (result[i] == 1)
                    model.setValueAt("take", i, 3);
                else
                    model.setValueAt("leave", i, 3);
            }
            jTable1.getColumnModel().getColumn(3)
                   .setCellRenderer(new StatusColumnCellRenderer());
            jTextField19.setText(Integer.toString(result[nRow]));
            jTextField20.setText(Integer.toString(result[nRow + 1]));
        } catch (Exception e) {
            System.out.println(e.getClass());
        }
    }

    public static void main(String args[]) {
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {
                new Interface().setVisible(true);
            }
        });
    }
}

algorithm.java

package com.algorithm.jni;

import com.Clibrary.jni.NativeUtils;
import java.io.IOException;

public class algorithm {

    public native int[] geneticAlgorithm(int[] cost, int[] profit,
            int cmax, int gens, int turns);

    static {
        try {
            NativeUtils.loadLibraryFromJar("/dist/libMyCLibrary.so");
        } catch (IOException e) {
            e.printStackTrace(); // This is probably not the best way to handle the exception :-)
        }
    }

    protected int[] cost, profit, result;
    protected int gens, turns, cmax;

    public algorithm(int[] cost, int[] profit, int gens, int turns, int cmax) {
        this.cost = cost;
        this.profit = profit;
        this.gens = gens;
        this.turns = turns;
        this.cmax = cmax;
    }

    public int[] getResult() {
        return geneticAlgorithm(cost, profit, cmax, gens, turns);
    }

    public static void main(String[] args) {
    }
}

StatusColumnCellRenderer.java

package knapsacproject;

import java.awt.Color;
import java.awt.Component;
import javax.swing.JLabel;
import javax.swing.JTable;
import javax.swing.table.DefaultTableCellRenderer;
import javax.swing.table.DefaultTableModel;

public class StatusColumnCellRenderer extends DefaultTableCellRenderer {

    @Override
    public Component getTableCellRendererComponent(JTable table, Object value,
            boolean isSelected, boolean hasFocus, int row, int col) {

        // Cells are by default rendered as a JLabel.
        JLabel l = (JLabel) super.getTableCellRendererComponent(table, value,
                isSelected, hasFocus, row, col);

        // Get the status for the current row.
        DefaultTableModel model = (DefaultTableModel) table.getModel();
        if (model.getValueAt(row, col).equals("leave")) {
            l.setBackground(Color.RED);
        } else {
            l.setBackground(Color.GREEN);
        }
        // Return the JLabel which renders the cell.
        return l;
    }
}

4.5 Tests and results

As required, we apply our application to the following 10 examples. In each case we note the optimal result found and the time needed to compute it. Note that for time measurement we use Windows's command interpreter (cmd.exe).

Test n°1: Cmax = 27

Item no profit cost leave VS take

1 5 2 take

2 8 1 take

3 6 1 take

4 3 8 leave

5 7 9 take

6 5 3 take

7 2 4 take

8 9 3 take

9 2 6 leave

10 7 3 take


Objective Time (ms) Penalty

49 0.0 0

Test n°2: Cmax = 20

Item no profit cost leave VS take

1 10 5 take

2 7 6 leave

3 9 7 leave

4 8 8 leave

5 1 6 leave

6 1 4 leave

7 7 9 leave

8 7 7 take

9 7 8 take

10 3 4 leave

Objective Time (ms) Penalty

26 0.0 0


Test n°3: Cmax = 58

Item no profit cost leave VS take

1 2 2 leave

2 7 1 leave

3 6 2 leave

4 3 6 take

5 1 10 take

6 9 4 take

7 8 3 take

8 8 10 leave

9 8 2 leave

10 3 9 take

11 5 8 take

12 2 2 take

13 6 6 take

14 3 6 take

15 5 5 take

16 4 9 take

17 9 8 leave

18 4 10 take

19 9 5 leave

20 3 8 take

Objective Time (ms) Penalty

86 0.0 0


Test n°4: Cmax = 38

Item no cost profit leave VS take

1 8 1 leave

2 8 9 leave

3 6 6 take

4 10 4 leave

5 10 5 leave

6 3 9 take

7 1 7 take

8 7 7 leave

9 3 10 take

10 6 1 take

11 1 4 take

12 1 7 take

13 6 8 leave

14 7 1 leave

15 1 8 take

16 8 10 take

17 3 3 leave

18 10 4 leave

19 6 2 leave

20 4 10 take

Objective Time (ms) Penalty

72 0.0 0


Test n°5: Cmax = 86

Item no profit cost leave VS take

1 4 9 leave

2 8 9 take

3 3 10 leave

4 9 6 take

5 8 4 take

6 5 7 take

7 3 2 take

8 7 8 take

9 7 1 leave

10 8 7 leave

11 6 1 take

12 7 3 take

13 7 2 take

14 5 9 leave

15 10 2 take

16 2 4 take

17 3 3 leave

18 5 8 take

19 8 3 take

20 5 4 leave

21 2 10 leave

22 7 2 take

23 1 2 take

24 7 3 take

25 5 6 take

26 2 5 leave

27 6 5 leave

28 10 1 take

29 9 10 leave

30 3 3 take

Objective Time (ms) Penalty

118 0.0 0


Test n°6: Cmax = 57

Item no profit cost leave VS take

1 4 3 take

2 3 4 leave

3 6 7 leave

4 5 8 take

5 9 7 take

6 2 6 leave

7 7 9 leave

8 7 9 take

9 8 1 take

10 4 8 leave

11 7 10 leave

12 4 10 leave

13 2 9 leave

14 10 1 take

15 4 4 take

16 6 4 take

17 4 8 leave

18 10 6 take

19 7 6 leave

20 8 6 leave

21 9 9 leave

22 2 7 leave

23 4 5 leave

24 6 5 take

25 1 7 leave

26 8 10 leave

27 9 6 leave

28 4 4 leave

29 10 6 take

30 2 10 leave

Objective Time (ms) Penalty

79 0.0 0


Test n°7: Cmax = 109

Item no profit cost leave VS take

1 3 6 take

2 2 8 leave

3 4 7 leave

4 5 5 leave

5 9 1 leave

6 7 4 take

7 8 10 leave

8 5 10 leave

9 8 3 take

10 3 8 leave

11 6 10 take

12 3 7 take

13 8 10 take

14 9 8 take

15 6 8 leave

16 9 2 take

17 3 4 leave

18 5 5 leave

19 2 9 take

20 8 1 take

21 5 3 take

22 10 3 take

23 5 8 leave

24 8 4 take

25 8 10 take

26 1 6 leave

27 6 7 leave

28 1 4 take

29 3 9 leave

30 7 7 leave


31 7 2 take

32 5 1 take

33 6 3 take

34 6 3 leave

35 3 2 take

36 6 10 take

37 2 7 leave

38 8 3 take

39 5 4 take

40 4 1 leave

Objective Time (ms) Penalty

135 0.0 0


Test n°8: Cmax = 61

Item no   profit   cost   leave VS take
1         5        1      leave
2         3        10     leave
3         3        3      leave
4         9        3      take
5         9        1      take
6         4        10     leave
7         6        2      leave
8         4        3      leave
9         10       5      leave
10        6        10     take
11        3        8      leave
12        5        4      leave
13        10       7      leave
14        5        1      leave
15        3        8      leave
16        5        3      leave
17        5        7      leave
18        4        9      leave
19        4        7      take
20        8        8      take
21        9        3      take
22        4        5      take
23        4        1      take
24        1        7      take
25        5        5      leave
26        1        7      leave
27        8        1      take
28        5        2      leave
29        2        2      take
30        3        1      leave


Item no   profit   cost   leave VS take
31        9        3      leave
32        4        4      take
33        3        8      leave
34        3        10     leave
35        2        7      leave
36        2        8      leave
37        4        7      take
38        2        7      leave
39        2        4      leave
40        1        2      leave

Objective   Time (ms)   Penalty
72          0.0         0


Test n°9: Cmax = 146

Item no   profit   cost   leave VS take
1         5        7      take
2         1        10     take
3         8        4      take
4         6        2      take
5         10       6      leave
6         9        6      leave
7         3        1      take
8         7        7      take
9         6        9      leave
10        5        8      leave
11        6        6      leave
12        10       7      take
13        9        5      leave
14        1        1      leave
15        5        10     leave
16        6        1      take
17        2        7      take
18        9        10     take
19        8        2      take
20        6        3      take
21        1        6      take
22        1        2      take
23        8        1      take
24        9        5      take
25        7        8      take
26        6        8      leave
27        9        2      take
28        6        10     leave
29        8        1      take
30        1        7      take


Item no   profit   cost   leave VS take
31        2        4      take
32        4        3      take
33        1        1      take
34        6        7      leave
35        8        6      take
36        10       9      leave
37        9        9      take
38        9        10     leave
39        8        6      take
40        4        2      take
41        3        6      leave
42        7        1      take
43        9        6      take
44        5        1      take
45        3        2      leave
46        3        5      take
47        5        2      take
48        8        5      take
49        3        9      leave
50        7        3      leave

Objective   Time (ms)   Penalty
188         0.0         0


Test n°10: Cmax = 90

Item no   profit   cost   leave VS take
1         2        7      leave
2         9        7      take
3         8        2      take
4         1        8      leave
5         3        9      leave
6         2        6      leave
7         9        5      take
8         4        5      take
9         5        1      take
10        7        2      take
11        10       6      take
12        9        10     leave
13        3        3      leave
14        3        4      take
15        7        3      leave
16        9        9      take
17        10       8      leave
18        4        10     leave
19        6        1      take
20        10       2      take
21        3        10     leave
22        9        9      leave
23        6        2      take
24        8        2      take
25        7        8      leave
26        5        4      take
27        1        4      leave
28        7        6      leave
29        5        10     leave
30        10       9      take


Item no   profit   cost   leave VS take
31        3        2      leave
32        2        8      leave
33        2        7      take
34        2        1      leave
35        10       6      leave
36        7        6      take
37        6        10     leave
38        7        1      take
39        2        6      leave
40        1        6      leave
41        8        5      leave
42        4        5      leave
43        5        4      leave
44        5        5      take
45        1        1      leave
46        2        8      leave
47        5        2      leave
48        6        3      leave
49        10       5      take
50        2        9      leave

Objective   Time (ms)   Penalty
140         0.0         0


Conclusion

Throughout this project, we set out to create a graphical interface that makes it easy to configure and solve the 0-1 KP. Its design took into account the mathematical background of GAs.

As a further step, we could investigate how to pass the computation time measured by the C solver to the Java application so that it can be displayed on the interface. Moreover, we could broaden our command of optimization techniques by implementing other approaches such as dynamic programming, ant colony optimization, or the Lagrangian relaxation method. This would allow us to build a benchmark and compare these different methods in terms of computation speed.
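As an illustration of the dynamic-programming approach mentioned above, here is a minimal sketch (not part of our application; the class name, method name, and example data are ours) of the classical recurrence for the 0-1 KP, where dp[c] holds the best profit achievable with capacity c:

```java
public class KnapsackDP {
    /** Classical 0-1 knapsack DP, O(n * cmax) time and O(cmax) space. */
    public static int knapsack(int[] profit, int[] cost, int cmax) {
        int[] dp = new int[cmax + 1]; // dp[c] = best profit with capacity c
        for (int i = 0; i < profit.length; i++) {
            // Iterate capacities downwards so each item is taken at most once.
            for (int c = cmax; c >= cost[i]; c--) {
                dp[c] = Math.max(dp[c], dp[c - cost[i]] + profit[i]);
            }
        }
        return dp[cmax];
    }

    public static void main(String[] args) {
        // Example data (ours): with Cmax = 20 the best choice is items
        // 2, 4 and 5 (profit 8 + 9 + 8 = 25, cost 9 + 6 + 4 = 19).
        int[] profit = {4, 8, 3, 9, 8};
        int[] cost   = {9, 9, 10, 6, 4};
        System.out.println(knapsack(profit, cost, 20)); // prints 25
    }
}
```

Unlike the GA, this exact method guarantees optimality, but its pseudo-polynomial running time grows with Cmax, which is precisely what a benchmark against the metaheuristics would quantify.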