
GWALIOR ENGINEERING COLLEGE

AIRPORT ROAD MAHARAJ PURA GWALIOR 474015

PRACTICAL FILE

ON

Soft Computing (CS-801)

Department of CSE & IT

Submitted To:                         Submitted By:
Mr. Rakesh Singh                      Monika Sharma
Dept of CS                            0916CS051039


HOD Principal

GWALIOR ENGINEERING COLLEGE

AIRPORT ROAD MAHARAJ PURA GWALIOR 474015

Department of CSE & IT

Lab Manual

(In accordance with RGTU syllabus)

SUBJECT : Soft Computing Lab

SUBJECT CODE : CS 801

SEMESTER : 8th Sem


STREAM : CSE & IT

FACULTY :

Head

CSE & IT

GWALIOR ENGINEERING COLLEGE

AIRPORT ROAD MAHARAJ PURA GWALIOR 474015

Lab Manual of Soft Computing (CS 801)

Suggestions from Principal:


Enhancement if any:

Comments:

Principal

INDEX

S. No. CONTENT

1 Introduction About Lab


2 Syllabus

3 Guidelines to Students

4 List of Experiments

5 Solutions for Programs

6 Practical Quiz

Signature of the faculty

INTRODUCTION ABOUT LAB

There are 60 systems installed in this Lab. Their configurations are as follows:

Processor : Dual Core 1.7 GHz

RAM : 512MB

Hard Disk : 80 GB


Monitor : 15” LCD Color Monitor

Mouse : Optical Mouse

Key Board : 105 MMX Key Board

Network Topology : Star Topology

Network Interface card : Present

Software

1. All systems are configured with Windows XP as per the lab requirements.

2. Each student has a separate login for database access. The Oracle 9i client version is installed on all systems, and an account for each student has been created on the server. This is very useful because students can save their work (scenarios, PL/SQL programs, data related to projects, etc.) in their own accounts, so each student's work is safe and secure from other students.

3. Latest technologies like DOT NET and J2EE are installed on some systems, so students can start doing mini projects from the 2nd year onwards, before submitting their final project.

4. Software installed: C, C++, JDK 1.5, MASM, OFFICE-XP, J2EE, DOT NET, Rational Rose.

CS801 Soft Computing


Branch: B.E. Computer Science & Engineering, VIII semester

Course: CS801 Soft Computing

Unit – I

Soft Computing: Introduction to soft computing, soft computing vs. hard computing, various types of soft computing techniques, applications of soft computing. Artificial Intelligence: Introduction, various types of production systems, characteristics of production systems, breadth first search and depth first search techniques, other search techniques like hill climbing, best first search, A* algorithm, AO* algorithm, and various types of control strategies. Knowledge representation issues, propositional and predicate logic, monotonic and non-monotonic reasoning, forward reasoning, backward reasoning, weak & strong slot & filler structures, NLP.

Unit – II

Neural Network: Structure and function of a single neuron: biological neuron, artificial neuron, definition of ANN, taxonomy of neural nets, difference between ANN and the human brain, characteristics and applications of ANN, single layer network, Perceptron training algorithm, linear separability, Widrow & Hebb's learning rule/Delta rule, ADALINE, MADALINE, AI v/s ANN. Introduction to MLP, different activation functions, error back propagation algorithm, derivation of EBPA, momentum, limitations, characteristics and applications of EBPA.

Unit – III

Counter propagation network: architecture, functioning & characteristics of the counter propagation network. Hopfield/recurrent network: configuration, stability constraints, associative memory, characteristics, limitations and applications. Hopfield v/s Boltzmann machine. Adaptive Resonance Theory: architecture, classifications, implementation and training. Associative Memory.

Unit – IV

Fuzzy Logic: Fuzzy set theory, Fuzzy set versus crisp set, Crisp relation & fuzzy relations, Fuzzy systems: crisp logic, fuzzy logic, introduction & features of membership functions, Fuzzy rule base system : fuzzy propositions, formation, decomposition & aggregation of fuzzy rules, fuzzy reasoning, fuzzy inference systems, fuzzy decision making & Applications of fuzzy logic.

Unit – V


Genetic algorithm : Fundamentals, basic concepts, working principle, encoding, fitness function, reproduction, Genetic modeling: Inheritance operator, cross over, inversion & deletion, mutation operator, Bitwise operator, Generational Cycle, Convergence of GA, Applications & advances in GA, Differences & similarities between GA & other traditional methods.

IT802 Soft Computing

Branch: B.E. Information Technology, VIII Semester

Course: IT802 Soft Computing

UNIT–I

Introduction to Neural Network: Concept, biological neural network, evolution of artificial neural networks, McCulloch-Pitts neuron models, learning (supervised & unsupervised) and activation functions, models of ANN: feed-forward network and feedback network, learning rules: Hebbian, Delta, Perceptron learning and Widrow-Hoff, winner-take-all.

UNIT–II

Supervised Learning: Perceptron learning: single layer/multilayer, linear separability, Adaline, Madaline, back propagation network, RBFN. Application of neural networks in forecasting, data compression and image compression.

UNIT–III

Unsupervised Learning: Kohonen SOM (theory, architecture, flow chart, training algorithm), Counter Propagation (theory, full counter propagation net and forward-only counter propagation net), ART (theory, ART1, ART2). Application of neural networks in pattern and face recognition, intrusion detection, robotic vision.

UNIT-IV


Fuzzy Set: Basic definitions and terminology, set-theoretic operations, membership functions, formulation and parameterization, fuzzy rules and fuzzy reasoning, extension principle and fuzzy relations, fuzzy if-then rules, fuzzy inference systems. Hybrid systems including the neuro-fuzzy hybrid, neuro-genetic hybrid and fuzzy-genetic hybrid, fuzzy logic controlled GA. Application of fuzzy logic in solving engineering problems.

UNIT-V

Genetic Algorithm: Introduction to GA, simple genetic algorithm, terminology and operators of GA (individual, gene, fitness, population, data structure, encoding, selection, crossover, mutation, convergence criteria). Reasons for the working of GA and the schema theorem, GA optimization problems including JSSP (job shop scheduling problem), TSP (travelling salesman problem), network design routing, the timetabling problem. GA implementation using MATLAB.

Guidelines to Students

1. Equipment in the lab is for the use of the student community. Students need to maintain proper decorum in the computer lab and must use the equipment with care. Any damage caused is punishable.

2. Students are required to carry their observation / programs book with completed exercises while entering the lab.

3. Students are supposed to occupy the machines allotted to them and are not supposed to talk or make noise in the lab. The allocation is put up on the lab notice board.

4. The lab can be used in free time / lunch hours; students who need to use the systems should take prior permission from the lab in-charge.

5. Lab records need to be submitted on or before the date of submission.

6. Students are not supposed to use USB devices.


List of Experiments

S. No. Aim of Experiment

1 Study of Biological Neural Network

2 Study of Artificial Neural Network

3 Write a program of Perceptron Training Algorithm

4 Write a program to implement Hebb's rule

5 Write a program to implement delta rule

6 Write a program for Back Propagation Algorithm

7 Write a program for Back Propagation Algorithm by second method

8 Write a program to implement logic gates

9 Study of genetic algorithm

10 Study of Genetic Programming (Content Beyond the Syllabus)

Experiment No. 1

Aim: Study of Biological Neural Network (BNN).

Neural networks are inspired by our brains. A biological neural network describes a population of physically interconnected neurons, or a group of disparate neurons whose inputs or signaling targets define a recognizable circuit. Communication between neurons often involves an electrochemical process. The interface through which they interact with surrounding neurons usually consists of several dendrites (input connections), which are connected via synapses to other neurons, and one axon (output connection). If the sum of the input signals surpasses a certain threshold, the neuron generates an action potential (AP) at the axon hillock and transmits this electrical signal along the axon.

The control unit, or brain, can be divided into different anatomic and functional sub-units, each having certain tasks like vision, hearing, motor and sensor control. The brain is connected by nerves to the sensors and actuators in the rest of the body.

The brain consists of a very large number of neurons, about 10^11 on average. These can be seen as the basic building blocks of the central nervous system (CNS). The neurons are interconnected at points called synapses. The complexity of the brain is due to the massive number of highly interconnected simple units working in parallel, with an individual neuron receiving input from up to 10,000 others.

The neuron contains all the structures of an animal cell. The complexity of the structure and of the processes in a simple cell is enormous. Even the most sophisticated neuron models in artificial neural networks seem comparatively toy-like.

Structurally, the neuron can be divided into three major parts: the cell body (soma), the dendrites, and the axon.

The cell body contains the organelles of the neuron, and the dendrites also originate there. These are thin and widely branching fibers, reaching out in different directions to make connections to a large number of cells within the cluster.

Input connections are made from the axons of other cells to the dendrites or directly to the body of the cell. These are known as axodendritic and axosomatic synapses.


Fig: Biological Neurons

There is only one axon per neuron. It is a single long fiber which transports the output signal of the cell as electrical impulses (action potentials) along its length. The end of the axon may divide into many branches, which are then connected to other cells. These branches fan out the signal to many other inputs.

There are many different types of neuron cells found in the nervous system. The differences are due to their location and function.

The neuron basically performs the following function: all the inputs to the cell, which may vary by the strength of the connection or the frequency of the incoming signal, are summed up. The input sum is processed by a threshold function and produces an output signal.
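To make this summation-and-threshold behaviour concrete, here is a minimal sketch (not part of the original manual) of an artificial counterpart of such a unit, written in standard C++; the weights and threshold are illustrative values only:

#include <iostream>
#include <vector>

// A McCulloch-Pitts style unit: weighted inputs are summed and the sum is
// passed through a threshold function to produce a binary output signal.
int thresholdNeuron(const std::vector<double>& inputs,
                    const std::vector<double>& weights,
                    double threshold)
{
    double sum = 0.0;
    for (size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];      // strength of each connection
    return (sum >= threshold) ? 1 : 0;      // fire only above the threshold
}

int main()
{
    // Illustrative values: two equally weighted inputs and a threshold
    // of 1.5 make the unit behave like a logical AND.
    std::vector<double> x = {1.0, 1.0};
    std::vector<double> w = {1.0, 1.0};
    std::cout << thresholdNeuron(x, w, 1.5) << std::endl;   // prints 1
    return 0;
}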

The brain works in both a parallel and a serial way. The parallel and serial nature of the brain is readily apparent from the physical anatomy of the nervous system. That there is serial and parallel processing involved can be easily seen from the time needed to perform tasks. For example, a human can recognize the picture of another person in about 100 ms. Given a processing time of about 1 ms for an individual neuron, this implies that fewer than 100 neurons are involved in series.

Biological neural systems usually have a very high fault tolerance. Experiments with people with brain injuries have shown that damage to neurons up to a certain level does not necessarily influence the performance of the system, though tasks such as writing or speaking may have to be learned again. This can be regarded as re-training the network.

Viva Questions


1. What do you mean by artificial neural networks?

2. What is neural network architecture?

3. What is meant by training of artificial neural networks?

4. What is meant by weights?

5. Write the logistic sigmoid function?


Experiment No.2

Aim: Study of ANN:-

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program made for a common microprocessor is unable to perform; a software implementation of a neural network can be made for such tasks, with its own advantages and disadvantages.

Advantages of ANN

• A neural network can perform tasks that a linear program cannot.

• When an element of the neural network fails, the network can continue without any problem owing to its parallel nature.

• A neural network learns and does not need to be reprogrammed.

• It can be implemented in any application.

• It can be implemented without any problem.

Disadvantages of ANN

• The neural network needs training to operate.

• The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.

• It requires high processing time for large neural networks.

Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple.

Artificial neural networks (ANN) are among the newest signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view to the engineering perspective. In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters.

An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with lots of flexibility to achieve practically any desired input/output map, i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.

An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is computed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts the system parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable. It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if the data are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data but the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice. This operating procedure should be contrasted with traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system automatically adjusts the parameters. So it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to incrementally refine the solution. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker said 10 years ago that "artificial neural networks are the second best way to implement a solution", motivated by the simplicity of their design and by their universality, only shadowed by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.
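As a minimal illustration of this supervised loop (present an input, compare the output with the desired response, feed the error back, adjust the parameters, repeat), here is a sketch for a single linear unit in standard C++; the training data and learning rate are illustrative assumptions, not part of the manual:

#include <iostream>

int main()
{
    // Supervised training of one linear unit y = w*x + b on samples of y = 2x + 1.
    double x[4] = {0, 1, 2, 3};
    double t[4] = {1, 3, 5, 7};           // desired (target) responses
    double w = 0.0, b = 0.0;              // system parameters
    double rate = 0.05;                   // illustrative learning rate
    for (int epoch = 0; epoch < 2000; epoch++) {
        for (int i = 0; i < 4; i++) {
            double y = w * x[i] + b;      // system output
            double e = t[i] - y;          // error = desired - actual
            w += rate * e * x[i];         // feed the error back into the parameters
            b += rate * e;
        }
    }
    std::cout << "w = " << w << "  b = " << b << std::endl;  // approaches 2 and 1
    return 0;
}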


Neural Network Topologies

In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:

• Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers.

• Recurrent neural networks, which do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network evolves to a stable state in which these activations do not change anymore. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).

Training of Artificial neural networks

A neural network has to be configured such that the application of a set of inputs produces (either 'direct' or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.

We can categorize the learning situations into two distinct sorts. These are:

• Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).


• Unsupervised learning or Self-organization in which an (output) unit is trained to respond to clusters of pattern within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather the system must develop its own representation of the input stimuli.

• Reinforcement Learning This type of learning may be considered as an intermediate form of the above two types of learning. Here the learning machine does some action on the environment and gets a feedback response from the environment. The learning system grades its action good (rewarding) or bad (punishable) based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. The self organizing neural learning may be categorized under this type of learning.

Viva Question

• What is unsupervised training?

• Which net is related to “winner take all”?

• What is meant by winner take all?

• What is a feedforward network?

• What is the equation for updating the activation?


Experiment No.3

Aim: Study and implementation of the Perceptron Training Algorithm.

Algorithm

Start with a randomly chosen weight vector w0;

Let k = 1;

While there exist input vectors that are misclassified by w(k-1), do:

Let i(j) be a misclassified input vector;

Let x(k) = class(i(j)) * i(j), implying that w(k-1) . x(k) < 0;

Update the weight vector to w(k) = w(k-1) + n * x(k);   (n is the learning rate)

Increment k;

End while;

Program

#include<iostream.h>
#include<conio.h>

void main()
{
    clrscr();
    int in[3], d, w[3], a = 0;
    for(int i = 0; i < 3; i++)
    {
        cout << "\n initialize the weight vector w" << i << " ";
        cin >> w[i];
    }
    for(i = 0; i < 3; i++)
    {
        cout << "\n enter the input vector i" << i << " ";
        cin >> in[i];
    }
    cout << "\n enter the desired output ";
    cin >> d;
    int ans = 1;
    while(ans == 1)
    {
        // compute the actual output a = w . in
        for(a = 0, i = 0; i < 3; i++)
        {
            a = a + w[i] * in[i];
        }
        clrscr();
        cout << "\n desired output is " << d;
        cout << "\n actual output is " << a;
        int e;
        e = d - a;                        // error = desired - actual
        cout << "\n error is " << e;
        cout << "\n press 1 to adjust weight else 0 ";
        cin >> ans;
        if(e < 0)                         // output too high: decrease weights
        {
            for(i = 0; i < 3; i++)
            {
                w[i] = w[i] - 1;
            }
        }
        else if(e > 0)                    // output too low: increase weights
        {
            for(i = 0; i < 3; i++)
            {
                w[i] = w[i] + 1;
            }
        }
    }
    getch();
}


OUTPUT:

Viva Question:

• In which situations is fuzzy logic most suitable?

• What is called the principle of incompatibility?

• Write an example for linguistic variable and values.

• Define the concentration of linguistic values.

• Define the dilation of linguistic values.


Experiment No.4

Aim:- Write a program to implement Hebb’s rule.

#include<iostream.h>
#include<conio.h>

void main()
{
    clrscr();
    float x, w, net, dw, a, alpha;
    cout << "consider a single neuron perceptron with a single i/p\n";
    cout << "enter the input: ";
    cin >> x;
    cout << "enter the initial weight: ";
    cin >> w;
    cout << "enter the learning coefficient: ";
    cin >> alpha;
    for(int i = 0; i < 10; i++)
    {
        net = x * w;                     // net input to the neuron
        if(net < 0)                      // hard-limit activation
            a = 0;
        else
            a = 1;
        dw = alpha * x * a;              // Hebb's rule: dw = alpha * input * output
        w = w + dw;                      // weight update
        cout << "\n iteration " << (i + 1) << ": output = " << a
             << ", change in weight = " << dw << ", adjusted weight = " << w;
    }
    getch();
}


OUTPUT:

Viva Question:

• What is direct evaluation?

• Define perceptron.

• What is single layer network?

• What is linguistic information?

• What is numerical information?


Experiment No.5

Aim: Write a program to implement of delta rule.

#include<iostream.h>
#include<conio.h>

void main()
{
    clrscr();
    float input[3], weight[3], d, a, delta;
    for(int i = 0; i < 3; i++)
    {
        cout << "\n initialize weight vector " << i << "\t";
        cin >> weight[i];
    }
    for(i = 0; i < 3; i++)
    {
        cout << "\n enter the input vector " << i << "\t";
        cin >> input[i];
    }
    cout << "\n enter the desired output\t";
    cin >> d;
    do
    {
        // actual output a = weight . input
        a = 0;
        for(i = 0; i < 3; i++)
            a = a + weight[i] * input[i];
        delta = d - a;                                    // delta rule error term
        for(i = 0; i < 3; i++)
            weight[i] = weight[i] + delta * input[i];     // w = w + delta * x
        cout << "\n value of delta is " << delta;
        cout << "\n weights have been adjusted";
    } while(delta != 0);
    cout << "\n output is correct";
    getch();
}

OUTPUT

Viva Question

• What is hamming net?

• Define hamming distance between two vectors.

• Draw the architecture of hamming net.

• What is the other name of Kohonen Self-organising maps?

• What is the concept of Hebbian learning?


Experiment No.6

Aim: Write a program for Back propagation Algorithm.

//Backpropagation, 25x25x8 units, binary sigmoid function network
#include <iostream.h>
#include <fstream.h>
#include <conio.h>
#include <stdlib.h>
#include <math.h>
#include <ctype.h>
#include <stdio.h>
#include <float.h>

double **input, *hidden, **output, **target, *bias, **weight_i_h, **weight_h_o, *errorsignal_hidden, *errorsignal_output;
int input_array_size, hidden_array_size, output_array_size, max_patterns, bias_array_size, gaset = -2500, number_of_input_patterns, pattern, file_loaded = 0, ytemp = 0, ztemp = 0;
double learning_rate, max_error_tollerance = 0.1;
char filename[128];

#define IA 16807
#define IM 2147483647
#define AM (1.0 / IM)
#define IQ 127773
#define IR 2836
#define NTAB 32
#define NDIV (1+(IM-1) / NTAB)
#define EPS 1.2e-7
#define RNMX (1.0 - EPS)

int compare_output_to_target();
void load_data(char *arg);
void save_data(char *argres);
void forward_pass(int pattern);
void backward_pass(int pattern);
void custom();
void compute_output_pattern();
void get_file_name();
float bedlam(long *idum);
void learn();
void make();
void test();
void print_data();
void print_data_to_screen();
void print_data_to_file();
void output_to_screen();
int getnumber();
void change_learning_rate();
void initialize_net();
void clear_memory();

main()

{

cout << "backpropagation network " << endl;

for(;;) {

char choice;

cout << endl << "1. load data" << endl;

cout << "2. learn from data" << endl;

cout << "3. compute output pattern" << endl;

cout << "4. make new data file" << endl;

cout << "5. save data" << endl;

cout << "6. print data" << endl;

cout << "7. change learning rate" << endl;

cout << "8. exit" << endl << endl;

cout << "Enter your choice (1-8)";

do { choice = getch(); } while (choice != '1' && choice != '2' && choice != '3' && choice != '4' && choice != '5' && choice != '6' && choice != '7' && choice != '8');

switch(choice) {

case '1':

{

if (file_loaded == 1) clear_memory();

get_file_name();

file_loaded = 1;

load_data(filename);

}


break;

case '2': learn();

break;

case '3': compute_output_pattern();

break;

case '4': make();

break;

case '5':

{

if (file_loaded == 0)

{

cout << endl << "there is no data loaded into memory" << endl;

break;

}

cout << endl << "enter a filename to save data to: ";

cin >> filename;

save_data(filename);

}

break;

case '6': print_data();

break;

case '7': change_learning_rate();

break;

case '8': return 0;

};

}

}


void initialize_net()

{

int x;

input = new double * [number_of_input_patterns];

if(!input) { cout << endl << "memory problem!"; exit(1); }

for(x=0; x<number_of_input_patterns; x++)

{

input[x] = new double [input_array_size];

if(!input[x]) { cout << endl << "memory problem!"; exit(1); }

}

hidden = new double [hidden_array_size];

if(!hidden) { cout << endl << "memory problem!"; exit(1); }

output = new double * [number_of_input_patterns];

if(!output) { cout << endl << "memory problem!"; exit(1); }

for(x=0; x<number_of_input_patterns; x++)

{

output[x] = new double [output_array_size];

if(!output[x]) { cout << endl << "memory problem!"; exit(1); }

}

target = new double * [number_of_input_patterns];

if(!target) { cout << endl << "memory problem!"; exit(1); }

for(x=0; x<number_of_input_patterns; x++)

{

target[x] = new double [output_array_size];

if(!target[x]) { cout << endl << "memory problem!"; exit(1); }

}

bias = new double [bias_array_size];

if(!bias) { cout << endl << "memory problem!"; exit(1); }


weight_i_h = new double * [input_array_size];

if(!weight_i_h) { cout << endl << "memory problem!"; exit(1); }

for(x=0; x<input_array_size; x++)

{

weight_i_h[x] = new double [hidden_array_size];

if(!weight_i_h[x]) { cout << endl << "memory problem!"; exit(1); }

}

weight_h_o = new double * [hidden_array_size];

if(!weight_h_o) { cout << endl << "memory problem!"; exit(1); }

for(x=0; x<hidden_array_size; x++)

{

weight_h_o[x] = new double [output_array_size];

if(!weight_h_o[x]) { cout << endl << "memory problem!"; exit(1); }

}

errorsignal_hidden = new double [hidden_array_size];

if(!errorsignal_hidden) { cout << endl << "memory problem!"; exit(1); }

errorsignal_output = new double [output_array_size];

if(!errorsignal_output) { cout << endl << "memory problem!"; exit(1); }

return;

}

void learn()

{

if (file_loaded == 0)

{

cout << endl << "there is no data loaded into memory" << endl;

return;

}

cout << endl << "learning..." << endl << "press a key to return to menu" << endl;


register int y;

while(!kbhit()) {

for(y=0; y<number_of_input_patterns; y++) {

forward_pass(y);

backward_pass(y);

}

if(compare_output_to_target()) {

cout << endl << "learning successful" << endl;

return;

}

}

cout << endl << "learning not successful yet" << endl;

return;

}

void load_data(char *arg) {

int x, y;

ifstream in(arg);

if(!in) { cout << endl << "failed to load data file" << endl; file_loaded = 0; return; }

in >> input_array_size;

in >> hidden_array_size;

in >> output_array_size;

in >> learning_rate;

in >> number_of_input_patterns;

bias_array_size = hidden_array_size + output_array_size;

initialize_net();

for(x=0; x<bias_array_size; x++) in >> bias[x];


for(x=0; x<input_array_size; x++) {

for(y=0; y<hidden_array_size; y++) in >> weight_i_h[x][y];

}

for(x=0; x<hidden_array_size; x++) {

for(y=0; y<output_array_size; y++) in >> weight_h_o[x][y];

}

for(x=0; x<number_of_input_patterns; x++) {

for(y=0; y<input_array_size; y++) in >> input[x][y];

}

for(x=0; x<number_of_input_patterns; x++) {

for(y=0; y<output_array_size; y++) in >> target[x][y];

}

in.close();

cout << endl << "data loaded" << endl;

return;

}

void forward_pass(int pattern)

{

_control87 (MCW_EM, MCW_EM);

register double temp=0;

register int x,y;

// INPUT -> HIDDEN

for(y=0; y<hidden_array_size; y++) {

for(x=0; x<input_array_size; x++) {

temp += (input[pattern][x] * weight_i_h[x][y]);

}


hidden[y] = (1.0 / (1.0 + exp(-1.0 * (temp + bias[y]))));

temp = 0;

}

// HIDDEN -> OUTPUT

for(y=0; y<output_array_size; y++) {

for(x=0; x<hidden_array_size; x++) {

temp += (hidden[x] * weight_h_o[x][y]);

}

output[pattern][y] = (1.0 / (1.0 + exp(-1.0 * (temp + bias[y + hidden_array_size]))));

temp = 0;

}

return;

}

void backward_pass(int pattern)

{

register int x, y;

register double temp = 0;

// COMPUTE ERRORSIGNAL FOR OUTPUT UNITS

for(x=0; x<output_array_size; x++) {

errorsignal_output[x] = (target[pattern][x] - output[pattern][x]);

}

// COMPUTE ERRORSIGNAL FOR HIDDEN UNITS

for(x=0; x<hidden_array_size; x++) {


for(y=0; y<output_array_size; y++) {

temp += (errorsignal_output[y] * weight_h_o[x][y]);

}

errorsignal_hidden[x] = hidden[x] * (1-hidden[x]) * temp;

temp = 0.0;

}

// ADJUST WEIGHTS OF CONNECTIONS FROM HIDDEN TO OUTPUT UNITS

double length = 0.0;

for (x=0; x<hidden_array_size; x++) {

length += hidden[x]*hidden[x];

}

if (length<=0.1) length = 0.1;

for(x=0; x<hidden_array_size; x++) {

for(y=0; y<output_array_size; y++) {

weight_h_o[x][y] += (learning_rate * errorsignal_output[y] *

hidden[x]/length);

}

}

// ADJUST BIASES OF OUTPUT UNITS (bias[hidden_array_size..bias_array_size) belongs to the output layer)

for(x=hidden_array_size; x<bias_array_size; x++) {

bias[x] += (learning_rate * errorsignal_output[x - hidden_array_size] / length);

}

// ADJUST WEIGHTS OF CONNECTIONS FROM INPUT TO HIDDEN UNITS

length = 0.0;


for (x=0; x<input_array_size; x++) {

length += input[pattern][x]*input[pattern][x];

}

if (length<=0.1) length = 0.1;

for(x=0; x<input_array_size; x++) {

for(y=0; y<hidden_array_size; y++) {

weight_i_h[x][y] += (learning_rate * errorsignal_hidden[y] *

input[pattern][x]/length);

}

}

// ADJUST BIASES OF HIDDEN UNITS

for(x=0; x<hidden_array_size; x++) {

bias[x] += (learning_rate * errorsignal_hidden[x] / length);

}

return;

}

int compare_output_to_target()

{

register int y,z;

register double temp, error = 0.0;

temp = target[ytemp][ztemp] - output[ytemp][ztemp];

if (temp < 0) error -= temp;

else error += temp;

if(error > max_error_tollerance) return 0;

error = 0.0;


for(y=0; y < number_of_input_patterns; y++) {

for(z=0; z < output_array_size; z++) {

temp = target[y][z] - output[y][z];

if (temp < 0) error -= temp;

else error += temp;

if(error > max_error_tollerance) {

ytemp = y;

ztemp = z;

return 0;

}

error = 0.0;

}

}

return 1;

}

void save_data(char *argres) {

int x, y;

ofstream out;

out.open(argres);

if(!out) { cout << endl << "failed to save file" << endl; return; }

out << input_array_size << endl;

out << hidden_array_size << endl;

out << output_array_size << endl;

out << learning_rate << endl;

out << number_of_input_patterns << endl << endl;

for(x=0; x<bias_array_size; x++) out << bias[x] << ' ';


out << endl << endl;

for(x=0; x<input_array_size; x++) {

for(y=0; y<hidden_array_size; y++) out << weight_i_h[x][y] << ' ';

}

out << endl << endl;

for(x=0; x<hidden_array_size; x++) {

for(y=0; y<output_array_size; y++) out << weight_h_o[x][y] << ' ';

}

out << endl << endl;

for(x=0; x<number_of_input_patterns; x++) {

for(y=0; y<input_array_size; y++) out << input[x][y] << ' ';

out << endl;

}

out << endl;

for(x=0; x<number_of_input_patterns; x++) {

for(y=0; y<output_array_size; y++) out << target[x][y] << ' ';

out << endl;

}

out.close();

cout << endl << "data saved" << endl;

return;

}

void make()

{

int x, y, z;

double inpx, bias_array_size, input_array_size, hidden_array_size, output_array_size;


char makefilename[128];

cout << endl << "enter name of new data file: ";

cin >> makefilename;

ofstream out;

out.open(makefilename);

if(!out) { cout << endl << "failed to open file" << endl; return;}

cout << "how many input units? ";

cin >> input_array_size;

out << input_array_size << endl;

cout << "how many hidden units? ";

cin >> hidden_array_size;

out << hidden_array_size << endl;

cout << "how many output units? ";

cin >> output_array_size;

out << output_array_size << endl;

bias_array_size = hidden_array_size + output_array_size;

cout << endl << "Learning rate: ";

cin >> inpx;

out << inpx << endl;

cout << endl << "Number of input patterns: ";

cin >> z;

out << z << endl << endl;

for(x=0; x<bias_array_size; x++) out << (1.0 - (2.0 * bedlam((long*)(gaset)))) << ' ';

out << endl << endl;

for(x=0; x<input_array_size; x++) {

for(y=0; y<hidden_array_size; y++) out << (1.0 - (2.0 * bedlam((long*)(gaset)))) << ' ';


}

out << endl << endl;

for(x=0; x<hidden_array_size; x++) {

for(y=0; y<output_array_size; y++) out << (1.0 - (2.0 * bedlam((long*)(gaset)))) << ' ';

}

out << endl << endl;

for(x=0; x < z; x++) {

cout << endl << "input pattern " << (x + 1) << endl;

for(y=0; y<input_array_size; y++) {

cout << (y+1) << ": ";

cin >> inpx;

out << inpx << ' ';

}

out << endl;

}

out << endl;

for(x=0; x < z; x++) {

cout << endl << "target output pattern " << (x+1) << endl;

for(y=0; y<output_array_size; y++) {

cout << (y+1) << ": ";

cin >> inpx;

out << inpx << ' ';

}

out << endl;

}

out.close();


cout << endl << "data saved, to work with this new data file you first have to load it" << endl;

return;

}

float bedlam(long *idum)

{

int xj;

long xk;

static long iy=0;

static long iv[NTAB];

float temp;

if(*idum <= 0 || !iy)

{

if(-(*idum) < 1)

{

*idum = 1 + *idum;

}

else

{

*idum = -(*idum);

}

for(xj = NTAB+7; xj >= 0; xj--)

{

xk = (*idum) / IQ;

*idum = IA * (*idum - xk * IQ) - IR * xk;

if(*idum < 0)


{

*idum += IM;

}

if(xj < NTAB)

{

iv[xj] = *idum;

}

}

iy = iv[0];

}

xk = (*idum) / IQ;

*idum = IA * (*idum - xk * IQ) - IR * xk;

if(*idum < 0)

{

*idum += IM;

}

xj = iy / NDIV;

iy = iv[xj];

iv[xj] = *idum;

if((temp=AM*iy) > RNMX)

{

return(RNMX);

}

else

{


return(temp);

}

}

void test()

{

pattern = 0;

while(pattern == 0) {

cout << endl << endl << "There are " << number_of_input_patterns << " input patterns in the file," << endl << "enter a number within this range: ";

pattern = getnumber();

}

pattern--;

forward_pass(pattern);

output_to_screen();

return;

}

void output_to_screen()

{

int x;

cout << endl << "Output pattern:" << endl;

for(x=0; x<output_array_size; x++) {

cout << endl << (x+1) << ": " << output[pattern][x] << " binary: ";

if(output[pattern][x] >= 0.9) cout << "1";

else if(output[pattern][x]<=0.1) cout << "0";

else cout << "intermediate value";

}


cout << endl;

return;

}

int getnumber()

{

int a, b = 0;

char c, d[5];

while(b<4) {

do { c = getch(); } while (c != '1' && c != '2' && c != '3' && c != '4' && c != '5' && c != '6' && c != '7' && c != '8' && c != '9' && c != '0' && toascii(c) != 13);

if(toascii(c)==13) break;

if(toascii(c)==27) return 0;

d[b] = c;

cout << c;

b++;

}

d[b] = '\0';

a = atoi(d);

if(a < 0 || a > number_of_input_patterns) a = 0;

return a;

}

void get_file_name()

{

cout << endl << "enter name of file to load: ";

cin >> filename;

return;


}

void print_data()

{

char choice;

if (file_loaded == 0)

{

cout << endl

<< "there is no data loaded into memory"

<< endl;

return;

}

cout << endl << "1. print data to screen" << endl;

cout << "2. print data to file" << endl;

cout << "3. return to main menu" << endl << endl;

cout << "Enter your choice (1-3)" << endl;

do { choice = getch(); } while (choice != '1' && choice != '2' && choice != '3');

switch(choice) {

case '1': print_data_to_screen();

break;

case '2': print_data_to_file();

break;

case '3': return;

};

return;

}


void print_data_to_screen() {

register int x, y;

cout << endl << endl << "DATA FILE: " << filename << endl;

cout << "learning rate: " << learning_rate << endl;

cout << "input units: " << input_array_size << endl;

cout << "hidden units: " << hidden_array_size << endl;

cout << "output units: " << output_array_size << endl;

cout << "number of input and target output patterns: " << number_of_input_patterns << endl << endl;

cout << "INPUT AND TARGET OUTPUT PATTERNS:";

for(x=0; x<number_of_input_patterns; x++) {

cout << endl << "input pattern: " << (x+1) << endl;

for(y=0; y<input_array_size; y++) cout << input[x][y] << " ";

cout << endl << "target output pattern: " << (x+1) << endl;

for(y=0; y<output_array_size; y++) cout << target[x][y] << " ";

}

cout << endl << endl << "BIASES:" << endl;

for(x=0; x<hidden_array_size; x++) {

cout << "bias of hidden unit " << (x+1) << ": " << bias[x];

if(x<output_array_size) cout << " bias of output unit " << (x+1) << ": " << bias[x+hidden_array_size];

cout << endl;

}

cout << endl << "WEIGHTS:" << endl;

for(x=0; x<input_array_size; x++) {

for(y=0; y<hidden_array_size; y++) cout << "i_h[" << x << "][" << y << "]: " << weight_i_h[x][y] << endl;

}


for(x=0; x<hidden_array_size; x++) {

for(y=0; y<output_array_size; y++) cout << "h_o[" << x << "][" << y << "]: " << weight_h_o[x][y] << endl;

}

return;

}

void print_data_to_file()

{

char printfile[128];

cout << endl << "enter name of file to print data to: ";

cin >> printfile;

ofstream out;

out.open(printfile);

if(!out) { cout << endl << "failed to open file"; return; }

register int x, y;

out << endl << endl << "DATA FILE: " << filename << endl;

out << "input units: " << input_array_size << endl;

out << "hidden units: " << hidden_array_size << endl;

out << "output units: " << output_array_size << endl;

out << "learning rate: " << learning_rate << endl;

out << "number of input and target output patterns: " << number_of_input_patterns << endl << endl;

out << "INPUT AND TARGET OUTPUT PATTERNS:";

for(x=0; x<number_of_input_patterns; x++) {

out << endl << "input pattern: " << (x+1) << endl;

for(y=0; y<input_array_size; y++) out << input[x][y] << " ";

out << endl << "target output pattern: " << (x+1) << endl;


for(y=0; y<output_array_size; y++) out << target[x][y] << " ";

}

out << endl << endl << "BIASES:" << endl;

for(x=0; x<hidden_array_size; x++) {

out << "bias of hidden unit " << (x+1) << ": " << bias[x];

if(x<output_array_size) out << " bias of output unit " << (x+1) << ": " << bias[x+hidden_array_size];

out << endl;

}

out << endl << "WEIGHTS:" << endl;

for(x=0; x<input_array_size; x++) {

for(y=0; y<hidden_array_size; y++) out << "i_h[" << x << "][" << y << "]: " << weight_i_h[x][y] << endl;

}

for(x=0; x<hidden_array_size; x++) {

for(y=0; y<output_array_size; y++) out << "h_o[" << x << "][" << y << "]: " << weight_h_o[x][y] << endl;

}

out.close();

cout << endl << "data has been printed to " << printfile << endl;

return;

}

void change_learning_rate()

{

if (file_loaded == 0)

{

cout << endl


<< "there is no data loaded into memory"

<< endl;

return;

}

cout << endl << "actual learning rate: " << learning_rate << " new value: ";

cin >> learning_rate;

return;

}

void compute_output_pattern()

{

if (file_loaded == 0)

{

cout << endl

<< "there is no data loaded into memory"

<< endl;

return;

}

char choice;

cout << endl << endl << "1. load trained input pattern into network" << endl;

cout << "2. load custom input pattern into network" << endl;

cout << "3. go back to main menu" << endl << endl;

cout << "Enter your choice (1-3)" << endl;

do { choice = getch(); } while (choice != '1' && choice != '2' && choice != '3');

switch(choice) {

case '1': test();

break;


case '2': custom();

break;

case '3': return;

};

}

void custom()

{

_control87 (MCW_EM, MCW_EM);

char filename[128];

register double temp=0;

register int x,y;

double *custom_input = new double [input_array_size];

if(!custom_input)

{

cout << endl << "memory problem!";

return;

}

double *custom_output = new double [output_array_size];

if(!custom_output)

{

delete [] custom_input;

cout << endl << "memory problem!";

return;

}

cout << endl << endl << "enter file that contains test input pattern: ";

cin >> filename;


ifstream in(filename);

if(!in) { cout << endl << "failed to load data file" << endl; return; }

for(x = 0; x < input_array_size; x++) {

in >> custom_input[x];

}

for(y=0; y<hidden_array_size; y++) {

for(x=0; x<input_array_size; x++) {

temp += (custom_input[x] * weight_i_h[x][y]);

}

hidden[y] = (1.0 / (1.0 + exp(-1.0 * (temp + bias[y]))));

temp = 0;

}

for(y=0; y<output_array_size; y++) {

for(x=0; x<hidden_array_size; x++) {

temp += (hidden[x] * weight_h_o[x][y]);

}

custom_output[y] = (1.0 / (1.0 + exp(-1.0 * (temp + bias[y + hidden_array_size]))));

temp = 0;

}

cout << endl << "Input pattern:" << endl;

for(x = 0; x < input_array_size; x++) {

cout << "[" << (x + 1) << ": " << custom_input[x] << "] ";

}

cout << endl << endl << "Output pattern:";

for(x=0; x<output_array_size; x++) {

cout << endl << (x+1) << ": " << custom_output[x] << " binary: ";

if(custom_output[x] >= 0.9) cout << "1";


else if(custom_output[x]<=0.1) cout << "0";

else cout << "intermediate value";

}

cout << endl;

delete [] custom_input;

delete [] custom_output;

return;

}

void clear_memory()

{

int x;

for(x=0; x<number_of_input_patterns; x++)

{

delete [] input[x];

}

delete [] input;

delete [] hidden;

for(x=0; x<number_of_input_patterns; x++)

{

delete [] output[x];

}

delete [] output;

for(x=0; x<number_of_input_patterns; x++)

{

delete [] target[x];

}


delete [] target;

delete [] bias;

for(x=0; x<input_array_size; x++)

{

delete [] weight_i_h[x];

}

delete [] weight_i_h;

for(x=0; x<hidden_array_size; x++)

{

delete [] weight_h_o[x];

}

delete [] weight_h_o;

delete [] errorsignal_hidden;

delete [] errorsignal_output;

file_loaded = 0;

return;

}
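For reference, the data file that load_data() reads has a fixed layout, which follows directly from the order of the input statements in load_data() and save_data(): the three layer sizes, the learning rate and the pattern count, then the biases, the input-to-hidden weights, the hidden-to-output weights, the input patterns, and finally the target patterns. A hypothetical file for a 2-2-1 network with four training patterns (XOR) could look as follows; the bias and weight values are illustrative starting values only:

2
2
1
0.5
4

0.1 -0.2 0.05

0.3 -0.4 0.25 0.15

-0.35 0.2

0 0
0 1
1 0
1 1

0
1
1
0

Whitespace is not significant; load_data() simply reads whitespace-separated values in this order.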


OUTPUT

Viva Questions

• What is segregation?

• What is cross over and inversion?

• What is mutation rate?

• What is the concept of simulated annealing?

• Explain the ADALINE network with an example.


Experiment No. 7

Aim: Write a program for Back Propagation Algorithm by second method.

#include <iostream.h>
#include <conio.h>
#include <math.h>

void main()
{
    clrscr();
    int i;
    float delta, corr, coeff = 0.1;
    struct input
    {
        float val, aop, out, w0, w1;
        int top;
    } s[3];
    cout << "\n Enter the i/p value and target o/p" << "\t";
    for(i = 0; i < 3; i++)
        cin >> s[i].val >> s[i].top;
    for(i = 0; i < 3; i++)
    {
        float w0, w1;
        if(i == 0)
        {
            w0 = -1.0;                   // initial bias weight
            w1 = -0.3;                   // initial input weight
        }
        else
        {
            w0 = s[i - 1].w0;            // carry the adjusted weights forward
            w1 = s[i - 1].w1;
        }
        s[i].aop = w0 + (w1 * s[i].val);                           // activation
        s[i].out = 1.0 / (1.0 + exp(-s[i].aop));                   // sigmoid output
        delta = (s[i].top - s[i].out) * s[i].out * (1 - s[i].out); // error gradient
        corr = coeff * delta;                                      // correction term
        s[i].w0 = w0 + corr;                                       // bias weight update
        s[i].w1 = w1 + corr * s[i].val;                            // input weight update
    }
    cout << "\n VALUE\tTARGET\tACTUAL\tw0\tw1\n";
    for(i = 0; i < 3; i++)
    {
        cout << s[i].val << "\t" << s[i].top << "\t" << s[i].out
             << "\t" << s[i].w0 << "\t" << s[i].w1 << "\n";
    }
    getch();
}


OUTPUT

Viva Questions

• Explain the MADALINE network with an example: its architecture and algorithm.

• What is meant by weights?

• Name some application of artificial neural networks.

• What is the classification of training?

• What is supervised training?


Experiment No.8

Aim: Write a program to implement logic gates.

#include <iostream>
using namespace std;

int main()
{
    char menu;                 // menu control variable
    int result;                // final output variable
    int dataValue1;
    int dataValue2;
    cout << "enter your Boolean operator code: (A,O,N,X): ";
    cin >> menu;
    switch (menu)
    {
    case 'A':                  // AND gate
        cout << "Enter first Boolean value: ";
        cin >> dataValue1;
        cout << "Enter second Boolean value: ";
        cin >> dataValue2;
        if (dataValue1 == 1 && dataValue2 == 1)
            result = 1;
        else
            result = 0;
        cout << "show result: " << result;
        break;
    case 'O':                  // OR gate
        cout << "Enter first Boolean value: ";
        cin >> dataValue1;
        cout << "Enter second Boolean value: ";
        cin >> dataValue2;
        if (dataValue1 == 1 || dataValue2 == 1)
            result = 1;
        else
            result = 0;
        cout << "show result: " << result;
        break;
    case 'N':                  // NOT gate
        cout << "Enter first Boolean value: ";
        cin >> dataValue1;
        result = !dataValue1;
        cout << "show result: " << result;
        break;
    case 'X':                  // XOR gate: true when the inputs differ
        cout << "Enter first Boolean value: ";
        cin >> dataValue1;
        cout << "Enter second Boolean value: ";
        cin >> dataValue2;
        if (dataValue1 != dataValue2)
            result = 1;
        else
            result = 0;
        cout << "show result: " << result;
        break;
    default:
        result = 0;
        break;
    } // end switch
    cin.ignore(2);
    return 0;
} // end main


OUTPUT

Viva Questions

• What is associative memory?

• What is hetero-associative memory?

• What is unsupervised training?

• What are the three layers of perceptron?

• Define delta rule.


Experiment No. 9

Aim: Study of genetic algorithm.

Genetic Algorithms were invented to mimic some of the processes observed in natural evolution. The father of the original Genetic Algorithm was John Holland, who invented it in the early 1970s. Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. As such, they represent an intelligent exploitation of a random search used to solve optimization problems. Although randomized, GAs are by no means random; instead they exploit historical information to direct the search into the region of better performance within the search space. The basic techniques of GAs are designed to simulate processes in natural systems necessary for evolution, especially those that follow the principle first laid down by Charles Darwin of "survival of the fittest".

Why Genetic Algorithms?

GAs are better than conventional AI in that they are more robust. Unlike older AI systems, they do not break easily when the inputs change slightly or in the presence of reasonable noise. Also, in searching a large state-space, a multi-modal state-space, or an n-dimensional surface, a genetic algorithm may offer significant benefits over more typical search and optimization techniques (linear programming, heuristic, depth-first, breadth-first, and praxis).

GA Algorithm:

1. randomly initialize population(t)
2. determine fitness of population(t)
3. repeat
4.    select parents from population(t)
5.    perform crossover on parents creating population(t+1)
6.    perform mutation of population(t+1)
7.    determine fitness of population(t+1)
8. until best individual is good enough


Genetic Algorithm Flowchart:

Search Space

A population of individuals is maintained within the search space for a GA, each representing a possible solution to a given problem. Each individual is coded as a finite-length vector of components, or variables, in terms of some alphabet, usually the binary alphabet {0, 1}. To continue the genetic analogy, these individuals are likened to chromosomes and the variables are analogous to genes. Thus a chromosome (solution) is composed of several genes (variables). A fitness score is assigned to each solution representing the ability of an individual to 'compete'. The individual with the optimal (or generally near-optimal) fitness score is sought. The GA aims to use selective 'breeding' of the solutions to produce 'offspring' better than the parents by combining information from the chromosomes.

The GA maintains a population of n chromosomes (solutions) with associated fitness values. Parents are selected to mate on the basis of their fitness, producing offspring via a reproductive plan. Consequently, highly fit solutions are given more opportunities to reproduce, so that offspring inherit characteristics from each parent. As parents mate and produce offspring, room must be made for the new arrivals since the population is kept at a static size. Individuals in the population die and are replaced by the new solutions, eventually creating a new generation once all mating opportunities in the old population have been exhausted. In this way it is hoped that over successive generations better solutions will thrive while the least fit solutions die out.

New generations of solutions are produced containing, on average, better genes than a typical solution in a previous generation. Each successive generation will contain more good `partial solutions' than previous generations. Eventually, once the population has converged and is not producing offspring noticeably different from those in previous generations, the algorithm itself is said to have converged to a set of solutions to the problem at hand.

Implementation Details Based on Natural Selection

After an initial population is randomly generated, the algorithm evolves through three operators:

• selection, which equates to survival of the fittest;
• crossover, which represents mating between individuals;
• mutation, which introduces random modifications.

1. Selection Operator
Gives preference to better individuals, allowing them to pass on their genes to the next generation. The goodness of each individual depends on its fitness. Fitness may be determined by an objective function or by a subjective judgment.

2. Crossover Operator
Crossover is the prime factor distinguishing GA from other optimization techniques. Two individuals are chosen from the population using the selection operator. A crossover site along the bit strings is randomly chosen, and the values of the two strings are exchanged up to this point. If S1 = 000000 and S2 = 111111 and the crossover point is 2, then S1' = 110000 and S2' = 001111. The two new offspring created from this mating are put into the next generation of the population. By recombining portions of good individuals, this process is likely to create even better individuals (see the sketch after the mutation operator below).

3. Mutation Operator
With some low probability, a portion of the new individuals will have some of their bits flipped. Its purpose is to maintain diversity within the population and inhibit premature convergence.


Mutation alone induces a random walk through the search space. Mutation and selection (without crossover) create a parallel, noise-tolerant, hill-climbing algorithm.
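The crossover and mutation operators above are easy to see in code. Below is a minimal, illustrative sketch in standard C++ (not part of the original manual) of one-point crossover and bitwise mutation on binary-string chromosomes; the strings and crossover behaviour follow the S1/S2 example above, and the mutation rate is an assumption for illustration:

#include <iostream>
#include <string>
#include <cstdlib>
#include <ctime>
#include <utility>

// One-point crossover: exchange the bits of two parents up to a random site,
// as in the S1/S2 example above.
void crossover(std::string &s1, std::string &s2)
{
    int point = rand() % s1.length();       // randomly chosen crossover site
    for (int i = 0; i < point; i++)
        std::swap(s1[i], s2[i]);            // exchange values up to this point
}

// Mutation: flip each bit with a small probability to maintain diversity.
void mutate(std::string &s, double rate)
{
    for (size_t i = 0; i < s.length(); i++)
        if ((double)rand() / RAND_MAX < rate)
            s[i] = (s[i] == '0') ? '1' : '0';
}

int main()
{
    srand(time(0));
    std::string s1 = "000000", s2 = "111111";
    crossover(s1, s2);                      // with point 2: 110000 and 001111
    mutate(s1, 0.01);
    mutate(s2, 0.01);
    std::cout << s1 << " " << s2 << std::endl;
    return 0;
}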

Effects of Genetic Operators

• Using selection alone will tend to fill the population with copies of the best individual from the population.
• Using selection and crossover operators will tend to cause the algorithm to converge on a good but sub-optimal solution.
• Using mutation alone induces a random walk through the search space.
• Using selection and mutation creates a parallel, noise-tolerant, hill-climbing algorithm.

Applications of Genetic Algorithms

Scheduling: Facility, production, job, and transportation scheduling
Design: Circuit board layout, communication network design, keyboard layout, parametric design in aircraft
Control: Missile evasion, gas pipeline control, pole balancing
Machine Learning: Designing neural networks, classifier systems, learning rules
Robotics: Trajectory planning, path planning
Combinatorial Optimization: TSP, bin packing, set covering, graph bisection, routing
Signal Processing: Filter design
Image Processing: Pattern recognition
Business: Economic forecasting, evaluating credit risks, detecting stolen credit cards before the customer reports the theft
Medical: Studying health risks for a population exposed to toxins

Viva Questions

• Name some of the existing search methods.

• What are the operators involved in a simple genetic algorithm?

• What is reproduction?

• What is the use of SCHEMATA?

• What is schema?


Experiment No. 10

Aim: Study of Genetic Programming.

Genetic Programming is a branch of evolutionary computation inspired by biological evolution. It was introduced by Koza and his group. It is popular for its ability to learn relationships hidden in data and express them automatically in a mathematical manner. It is a machine learning technique used to optimize a population of computer programs according to a fitness landscape. This fitness is determined by a program's ability to perform a given computational task. In this direction, a variety of classifier programs have been considered. These classifiers have used different representation techniques including decision trees, expression trees and classification rule sets. Genetic programming is an extension of the genetic algorithm in which the genetic population contains computer programs.

Genetic Programming, one of a number of evolutionary algorithms, follows Darwin’s theory of evolution (“survival of the fittest”). There is a population of computer programs (individuals) that reproduce with each other. The best individuals will survive and eventually evolve to do well in the given environment.

Why Genetic Programming?

Genetic programming (GP) is a technique to automatically discover computer programs using the principle of Darwinian evolution. GP is a means of getting computers to solve problems without being explicitly programmed. Not being explicitly programmed means that the programmer does not specify the size, shape, or structural complexity of the solution in advance; rather, all these factors are determined automatically. Automatic programming has been the goal of computer scientists for a number of decades. Genetic programming appears to be the most promising way to automatically write computer programs, so there is considerable hope for GP's role in future computing.


Steps of GP:

The Genetic Programming process consists of the following steps:

• Generate an initial population of random compositions of the functions and terminals of the problem (computer programs).

• Iteratively perform the following sub-steps until the termination criterion has been satisfied:

  • Execute each program in the population and assign it a fitness value.

  • Create a new population of computer programs by applying the following two primary operations. The operations are applied to computer programs in the population selected with a probability based on fitness:

    • Reproduce an existing program by copying it into the new population.

    • Create two new computer programs from existing programs by genetically recombining randomly chosen parts of two existing programs, using the crossover operation applied at a randomly chosen crossover point within each program.

• Designate the program that is identified by the method of result designation as the result of the run of GP. This result may represent a solution to the problem. (A minimal sketch of how such programs can be represented follows this list.)
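Since GP individuals are programs rather than bit strings, a concrete representation helps. The following is a minimal, illustrative sketch in standard C++ (not from the original manual) of an expression tree of the kind GP evolves for symbolic regression, together with a fitness measure based on error over sample data; the node layout, the function set {+, *} and the target data are assumptions for illustration:

#include <iostream>

// A GP individual: an expression tree over +, *, the variable x and constants.
struct Node {
    char op;             // '+', '*', 'x' (variable), or 'c' (constant)
    double value;        // used when op == 'c'
    Node *left, *right;  // children for '+' and '*'
};

// Evaluate the program tree for a given input x.
double eval(const Node* n, double x)
{
    switch (n->op) {
        case 'x': return x;
        case 'c': return n->value;
        case '+': return eval(n->left, x) + eval(n->right, x);
        case '*': return eval(n->left, x) * eval(n->right, x);
    }
    return 0.0;
}

// Fitness: sum of squared errors against sample data; lower is better.
double fitness(const Node* n, const double* xs, const double* ys, int m)
{
    double err = 0.0;
    for (int i = 0; i < m; i++) {
        double d = eval(n, xs[i]) - ys[i];
        err += d * d;
    }
    return err;
}

int main()
{
    // Hand-built individual representing the program x * x + 1.
    Node cx  = {'x', 0, 0, 0};
    Node cx2 = {'x', 0, 0, 0};
    Node one = {'c', 1.0, 0, 0};
    Node mul = {'*', 0, &cx, &cx2};
    Node add = {'+', 0, &mul, &one};
    double xs[] = {0, 1, 2, 3};
    double ys[] = {1, 2, 5, 10};   // samples of x*x + 1
    std::cout << "fitness (error) = " << fitness(&add, xs, ys, 4) << std::endl; // 0
    return 0;
}

In a full GP run, crossover would swap randomly chosen subtrees between two such trees, and the fitness value would drive selection.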

Advantages of GP

GP is used successfully for multicategory classification problems because it has the following advantages:

• No analytical knowledge is needed, and we can still get accurate results.

• Every component of the resulting GP rule base is relevant in some way to the solution of the problem, so no null operations are encoded that would expend computational resources at runtime.

• With GP, no restrictions are imposed on how the solutions should be structured. We also do not bound the complexity or the number of rules of the computed solution.

• GP provides a mathematical representation of the classifier. As the classifier uses only a few selected features, the mathematical representation of the classifier can be easily analyzed to know more about the underlying system.

• A further advantage of Genetic Programming is the absence, or relatively minor role, of preprocessing of inputs and postprocessing of outputs. The inputs, intermediate results, and outputs are typically expressed directly in terms of the natural terminology of the problem domain. The programs produced by genetic programming consist of functions that are natural for the problem domain. The postprocessing of the output of a program, if any, is done by a wrapper (output interface).

Applications of GP

Genetic Programming has been applied successfully to symbolic regression (system identification, empirical discovery, modeling, forecasting, data mining), classification, control, optimization, equation solving, game playing, induction, image compression, cellular automata programming, decision tree induction and many others. The problems in Genetic Programming include the fields of machine learning, artificial intelligence, and neural networks.


Viva Questions

• What is Genetic Programming?

• Write the advantage of GP.

• Difference between GA & GP.

• List the real world applications of GP.