One Week Faculty Development Program on
Artificial Intelligence and Soft Computing Techniques
Under TEQIP Phase II
(28th April to 02nd May 2015)
LAB MANUAL
Organized by
Department of Computer Science & Engineering
University Institute of Technology
Rajiv Gandhi Proudyogiki Vishwavidhyalaya
(State Technological University of Madhya Pradesh)
Airport Road, Bhopal 462033
Website: www.uitrgpv.ac.in
INDEX

S.NO.  LIST OF EXPERIMENTS
1.  WAP to implement Artificial Neural Network
2.  WAP to implement Activation Functions
3.  WAP to implement Adaptive Prediction in ADALINE NN
4.  WAP to implement LMS and Perceptron Learning Rule
5.  WAP to implement ART NN
6.  WAP to implement BAM Network
7.  WAP to implement Full CPN with input pair
8.  WAP to implement Discrete Hopfield Network
9.  WAP to implement Hebb Network
10. WAP to implement Heteroassociative neural net for mapping input vectors to output vectors
11. WAP to implement Delta Learning Rule
12. WAP to implement XOR function in MADALINE NN
13. WAP to implement AND function in Perceptron NN
14. WAP to implement Perceptron Network
15. WAP to implement Feed Forward Network
16. WAP to implement Instar Learning Rule
17. WAP to implement Weight Vector Matrix
Experiment No. 1
AIM: WAP to implement Artificial Neural Network in MATLAB
CODE:
%Autoassociative net to store the vector
clc;
clear;
x=[1 1 -1 -1];
w=zeros(4,4);
w=x'*x;          % Hebbian outer-product storage
yin=x*w;         % recall: net input for the stored vector
for i=1:4
    if yin(i)>0
        y(i)=1;
    else
        y(i)=-1;
    end
end
disp('Weight matrix');
disp(w);
if x==y
    disp('The vector is a known vector');
else
    disp('The vector is an unknown vector');
end
OUTPUT:
Weight matrix
1 1 -1 -1
1 1 -1 -1
-1 -1 1 1
-1 -1 1 1
The vector is a known vector
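To exercise the "unknown vector" branch, the stored weight matrix can be probed with a pattern the net never saw. A minimal check, assuming the workspace from the code above and a hypothetical test vector:

% Hypothetical probe, not part of the original experiment:
% a vector orthogonal to the stored pattern fails the recall test.
xt=[1 -1 1 -1];       % assumed test vector (never stored)
yt=xt*w;              % net input; comes out all zeros here
for i=1:4
    if yt(i)>0
        yr(i)=1;
    else
        yr(i)=-1;
    end
end
if xt==yr
    disp('The vector is a known vector');
else
    disp('The vector is an unknown vector');   % this branch fires
end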
Experiment No. 2
AIM: WAP to implement Activation Function in MATLAB
CODE:
% Illustration of various activation functions used in NN's
x=-10:0.1:10;
tmp=exp(-x);
y1=1./(1+tmp);
y2=(1-tmp)./(1+tmp);
y3=x;
subplot(231); plot(x,y1); grid on;
axis([min(x) max(x) -2 2]);
title('Logistic Function');
xlabel('(a)');
axis('square')
subplot(232);plot(x,y2); grid on;
axis([min(x) max(x) -2 2]);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(233);plot(x,y3); grid on;
axis([min(x) max(x) min(x) max(x)]);
title('Identity Function');
xlabel('(c)');
axis('square');
OUTPUT:
[Plot: three panels over x = -10..10 showing (a) Logistic Function, (b) Hyperbolic Tangent Function, (c) Identity Function]
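A fourth activation often discussed alongside these is the binary step (hard-limit) function. A minimal sketch, not part of the original listing, that reuses x from the code above to add a fourth panel:

% Binary step (hard-limit) activation; an assumed extension
y4=double(x>=0);               % 1 for x>=0, 0 otherwise
subplot(234); plot(x,y4); grid on;
axis([min(x) max(x) -2 2]);
title('Binary Step Function');
xlabel('(d)');
axis('square');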
Experiment No. 3
AIM: WAP to implement Adaptive Prediction in ADALINE Network
CODE:
% Adaptive Prediction with Adaline
clear;
clc;
% Input signal x(t)
f1=2;                 % kHz
ts=1/(40*f1);         % 12.5 usec -- sampling time
N=100;
t1=(0:N)*4*ts;
t2=(0:2*N)*ts+4*(N+1)*ts;
t=[t1 t2];            % 0 to 7.5 sec
N=size(t,2);          % N = 302
xt=[sin(2*pi*f1*t1) sin(2*pi*2*f1*t2)];
plot(t, xt), grid, title('Signal to be predicted')
p=4;                  % number of synapses
% formation of the input matrix X of size p by N
% use the convolution matrix. Try convmtx(1:8, 5)
X=convmtx(xt, p); X=X(:,1:N);
d=xt;                 % the target signal is equal to the input signal
y=zeros(size(d));     % memory allocation for y
eps=zeros(size(d));   % memory allocation for eps
eta=0.4;              % learning rate/gain
w=rand(1, p);         % initialisation of weight vector
for n=1:N             % learning loop
    y(n)=w*X(:,n);    % predicted output signal
    eps(n)=d(n)-y(n); % error signal
    w=w+eta*eps(n)*X(:,n)';
end
figure(1)
plot(t, d, 'b', t, y, '-r'), grid, ...
    title('target and predicted signals'), xlabel('time[sec]')
figure(2)
plot(t, eps), grid, title('prediction error'), xlabel('time[sec]')
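The learning loop is the Widrow-Hoff (LMS) rule: at each sample n the prediction error eps(n) = d(n) - w*X(:,n) is formed and the weight vector moves along the input, w = w + eta*eps(n)*X(:,n)'. Note that the input doubles its frequency at about t = 5 sec, so the Adaline must re-adapt there, which shows up as a transient in the prediction error.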
OUTPUT:
[Plot 1: target and predicted signals vs. time (sec); Plot 2: prediction error vs. time (sec)]
Experiment No. 4
AIM: WAP to implement LMS and Perceptron Learning rule
CODE:
%For the following 2-class problem determine the decision boundaries
%obtained by LMS and perceptron learning laws.
% Class C1 : [-2 2]', [-2 3]', [-1 1]', [-1 4]', [0 0]', [0 1]', [0 2]',
%            [0 3]' and [1 1]'
% Class C2 : [1 0]', [2 1]', [3 -1]', [3 1]', [3 2]', [4 -2]', [4 1]',
%            [5 -1]' and [5 0]'
clear;
inp=[-2 -2 -1 -1 0 0 0 0 1 1 2 3 3 3 4 4 5 5;...
     2 3 1 4 0 1 2 3 1 0 1 -1 1 2 -2 1 -1 0];
out=[1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0];
choice=input('1: Perceptron Learning Law\n2: LMS Learning Law\n Enter your choice :');
switch choice
    case 1
        network=newp([-2 5;-2 4],1);     % perceptron over the given input ranges
        network=init(network);
        y=sim(network,inp);
        figure,plot(inp,out,inp,y,'o'),title('Before Training');
        axis([-10 20 -2.0 2.0]);
        network.trainParam.epochs=20;
        network=train(network,inp,out);
        y=sim(network,inp);
        figure,plot(inp,out,inp,y,'o'),title('After Training');
        axis([-10 20 -2.0 2.0]);
        disp('Final weight vector and bias values :');
        Weights=network.iw{1};
        Bias=network.b{1};
        Weights
        Bias
        Actual_Desired=[y' out'];
        Actual_Desired
    case 2
        network=newlin([-2 5;-2 4],1);   % linear (LMS/Widrow-Hoff) network
        network=init(network);
        y=sim(network,inp);
        network=adapt(network,inp,out);
        y=sim(network,inp);
        disp('Final weight vector and bias values :');
        Weights=network.iw{1};
        Bias=network.b{1};
        Weights
        Bias
        Actual_Desired=[y' out'];
        Actual_Desired
    otherwise
        error('Wrong Choice');
end
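The two laws differ only in which error drives the weight update. The perceptron law (newp) corrects with the thresholded output, w = w + alpha*(t - y)*x, and therefore stops changing once every pattern is classified correctly; the LMS law behind newlin/adapt corrects with the linear output, w = w + alpha*(t - yin)*x, and keeps reducing the mean squared error even after the classes are separated, which generally yields a different decision boundary.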
OUTPUT:
1: Perceptron Learning Law
2: LMS Learning Law
Enter your choice :1
Final weight vector and bias values :
Weights =
-1 1
Bias =
0
Actual_Desired =
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
0 0
0 0
0 0
0 0
0 0
0 0
0 0
0 0
0 0
[Plot 1: Before Training; Plot 2: After Training (class labels and perceptron outputs over the input range)]
[Plot: training performance, mean absolute error (mae) vs. epochs; best training performance is 0.5 at epoch 0]
Experiment No. 5
AIM: WAP to implement ART Neural Network
CODE:
%ART Neural Net
clc;
clear;
b=[0.57 0.0 0.3;0.0 0.0 0.3;0.0 0.57 0.3;0.0 0.47 0.3];
t=[1 1 0 0;1 0 0 1;1 1 1 1];
vp=0.4;            % vigilance parameter
L=2;
x=[1 0 1 1];
s=x;
ns=sum(s);
y=x*b;
con=1;
while con
    for i=1:3
        if y(i)==max(y)
            J=i;
        end
    end
    x=s.*t(J,:);
    nx=sum(x);
    if nx/ns >= vp
        b(:,J)=L*x(:)/(L-1+nx);
        t(J,:)=x(1,:);
        con=0;
    else
        y(J)=-1;
        con=1;
    end
    if y+1==0
        con=0;
    end
end
disp('Top Down Weights');
disp(t);
disp('Bottom up Weights');
disp(b);
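The test nx/ns >= vp is the vigilance check at the heart of ART: ns = sum(s) is the norm of the input and nx = sum(x) the norm of its overlap with the winning unit's top-down template, so the ratio measures how well cluster J matches the input. Here the match passes with vp = 0.4, so unit J's bottom-up weights are rescaled to L*x/(L-1+nx) = 2*x/(1+nx) and its top-down template is overwritten with x; had it failed, y(J) = -1 would remove the unit from the competition and force a search over the remaining clusters.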
OUTPUT:
Top Down Weights
1 1 0 0
1 0 0 1
1 1 1 1
Bottom up Weights
0.5700 0.6667 0.3000
0 0 0.3000
0 0 0.3000
0 0.6667 0.3000
Experiment No. 6
AIM: WAP to implement BAM Network
CODE:
%Bidirectional Associative Memory neural net
clc;
clear;
s=[1 1 0;1 0 1];
t=[1 0;0 1];
x=2*s-1
y=2*t-1
w=zeros(3,2);
for i=1:2
    w=w+x(i,:)'*y(i,:);
end
disp('the calculated weight matrix');
disp(w);
OUTPUT:
x =
1 1 -1
1 -1 1
y =
1 -1
-1 1
the calculated weight matrix
0 0
2 -2
-2 2
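A quick way to confirm the stored associations is to recall through w; a minimal check, assuming the workspace from the code above (a full BAM would iterate x to y and back until stable, with a tie keeping the previous state):

% Forward recall, an assumed check not in the original listing:
% each stored x pattern should retrieve its paired y pattern.
for i=1:2
    disp(sign(x(i,:)*w));    % expected rows: [1 -1] and [-1 1]
end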
Experiment No. 7
AIM: WAP to implement Full Counter Propagation Network with input pair
CODE:
%Full Counter Propagation Network for given input pair
clc;
clear;
%set initial weights
v=[0.6 0.2;0.6 0.2;0.2 0.6;0.2 0.6];
w=[0.4 0.3;0.4 0.3];
x=[0 1 1 0];
y=[1 0];
alpha=0.3;
for j=1:2
    D(j)=0;
    for i=1:4
        D(j)=D(j)+(x(i)-v(i,j))^2;
    end
    for k=1:2
        D(j)=D(j)+(y(k)-w(k,j))^2;
    end
end
for j=1:2
    if D(j)==min(D)
        J=j;
    end
end
disp('After one step the weight matrices are');
v(:,J)=v(:,J)+alpha*(x'-v(:,J))
w(:,J)=w(:,J)+alpha*(y'-w(:,J))
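The winner J minimises the joint squared distance D(j) = sum_i (x(i)-v(i,j))^2 + sum_k (y(k)-w(k,j))^2 over both halves of the training pair, and only its weights move, by the Kohonen-style steps v(:,J) = v(:,J) + alpha*(x'-v(:,J)) and w(:,J) = w(:,J) + alpha*(y'-w(:,J)). With alpha = 0.3 each winning weight moves 30% of the way toward the presented pair, which is exactly the change visible in the output below.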
OUTPUT:
After one step the weight matrices are
v =
0.4200 0.2000
0.7200 0.2000
0.4400 0.6000
0.1400 0.6000
w =
0.5800 0.3000
0.2800 0.3000
Experiment No. 8
AIM: WAP to implement Discrete Hopfield Network
CODE:
% Discrete Hopfield net
clc;
clear;
x=[1 1 1 0];
tx=[0 0 1 0];
w=(2*x'-1)*(2*x-1);
for i=1:4
    w(i,i)=0;
end
con=1;
y=[0 0 1 0]
while con
    up=[4 2 1 3];    % asynchronous update order
    for i=1:4
        yin(up(i))=tx(up(i))+y*w(1:4,up(i));
        if yin(up(i))>0
            y(up(i))=1;
        end
    end
    if y==x
        disp('convergence has been obtained');
        disp('the convergence output');
        disp(y);
        con=0;
    end
end
OUTPUT:
y =
0 0 1 0
convergence has been obtained
the convergence output
1 1 1 0
Experiment No. 9
AIM: WAP to implement Hebb Network
CODE:
%Hebb Net to classify two-dimensional input patterns
clear;
clc;
%Input Patterns
E=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 1 1 1];
F=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 -1 -1 -1];
x(1,1:20)=E;
x(2,1:20)=F;
w(1:20)=0;
t=[1 -1];
b=0;
for i=1:2
    w=w+x(i,1:20)*t(i);
    b=b+t(i);
end
disp('Weight matrix');
disp(w);
disp('Bias');
disp(b);
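Each presentation applies the Hebb rule w = w + x(i,:)*t(i) and b = b + t(i). Since E and F agree in their first 17 components and the targets are t = [1 -1], those contributions cancel, leaving weight 2 only in the three positions where the patterns differ; the bias likewise cancels to 0, as the output below confirms.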
OUTPUT:
Weight matrix
Columns 1 through 12
0 0 0 0 0 0 0 0 0 0 0 0
Columns 13 through 20
0 0 0 0 0 2 2 2
Bias
0
Experiment No.10
AIM: WAP to implement Heteroassociative neural net for mapping input vectors to output vectors
CODE:
%Heteroassociative neural net for mapping input vectors to output vectors
clc;
clear;
x=[1 1 0 0;1 0 1 0;1 1 1 0;0 1 1 0];
t=[1 0;1 0;0 1;0 1];
w=zeros(4,2);
for i=1:4
    w=w+x(i,1:4)'*t(i,1:2);
end
disp('weight matrix');
disp(w);
OUTPUT:
weight matrix
2 1
1 2
1 2
0 0
Experiment No. 11
AIM: WAP to implement Delta Learning Rule
CODE:
% Determine the weights of a network with 4 input and 2 output units using
% Delta Learning Law with f(x)=1/(1+exp(-x)) for the following input-output
% pairs:
%
% Input:  [1 1 0 0]' [1 0 0 1]' [0 0 1 1]' [0 1 1 0]'
% Output: [1 1]'     [1 0]'     [0 1]'     [0 0]'
% Discuss your results for different choices of the learning rate parameters.
% Use suitable values for the initial weights.
in=[1 1 0 0 -1;1 0 0 1 -1;0 0 1 1 -1;0 1 1 0 -1];
out=[1 1;1 0;0 1;0 0];
eta=input('Enter the learning rate value = ');
it=input('Enter the number of iterations required = ');
wgt=input('Enter the weights,2 by 5 matrix(including weight for bias):\n');
for x=1:it
    for i=1:4
        s1=0;
        s2=0;
        for j=1:5
            s1=s1+in(i,j)*wgt(1,j);
            s2=s2+in(i,j)*wgt(2,j);
        end
        wi=eta*(out(i,1)-logsig(s1))*dlogsig(s1,logsig(s1))*in(i,:);
        wgt(1,:)=wgt(1,:)+wi;
        wi=eta*(out(i,2)-logsig(s2))*dlogsig(s2,logsig(s2))*in(i,:);
        wgt(2,:)=wgt(2,:)+wi;
    end
end
wgt
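Each update is the delta rule for a differentiable activation: for output unit k with net input s_k, the change is eta*(out_k - f(s_k))*f'(s_k)*in, where f is the logistic function logsig, f(s) = 1/(1+exp(-s)), and dlogsig supplies its derivative f'(s) = f(s)*(1-f(s)). The constant fifth input of -1 makes the last column of wgt behave as a trainable bias.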
OUTPUT:
Enter the learning rate value = 0.6
Enter the number of iterations required = 1
Enter the weights,2 by 5 matrix(including weight for bias):
[1 2 1 3 1;1 0 1 0 2]
wgt =
1.0088 1.9508 0.9177 2.9757 1.0736
1.0476 0.0418 1.0420 0.0478 1.9104
Experiment No. 12
AIM: WAP to implement XOR function in MADALINE NN
CODE:
%Madaline for XOR function
clc;
clear;
%Input and Target
x=[1 1 -1 -1;1 -1 1 -1];
t=[-1 1 1 -1];
%Assume initial weight matrix and bias
w=[0.05 0.1;0.2 0.2];
b1=[0.3 0.15];
v=[0.5 0.5];
b2=0.5;
con=1;
alpha=0.5;
epoch=0;
while con
    con=0;
    for i=1:4
        for j=1:2
            zin(j)=b1(j)+x(1,i)+x(1,i)*w(1,j)+x(2,i)*w(2,j);
            if zin(j)>=0
                z(j)=1;
            else
                z(j)=-1;
            end
        end
        yin=b2+z(1)*v(1)+z(2)*v(2);
        if yin>=0
            y=1;
        else
            y=-1;
        end
        if y~=t(i)
            con=1;
            if t(i)==1
                if abs(zin(1))>abs(zin(2))
                    k=2;
                else
                    k=1;
                end
                b1(k)=b1(k)+alpha*(1-zin(k));
                w(1:2,k)=w(1:2,k)+alpha*(1-zin(k))*x(1:2,i);
            else
                for k=1:2
                    if zin(k)>0
                        b1(k)=b1(k)+alpha*(-1-zin(k));
                        w(1:2,k)=w(1:2,k)+alpha*(-1-zin(k))*x(1:2,i);
                    end
                end
            end
        end
    end
    epoch=epoch+1;
end
disp('weight matrix of hidden layer');
disp(w);
disp('Bias of hidden layer');
disp(b1);
disp('Total Epoch');
disp(epoch);
OUTPUT:
weight matrix of hidden layer
0.2812 -2.1031
-0.6937 0.9719
Bias of hidden layer
-1.3562 -1.6406
Total Epoch
3
Experiment No. 13
AIM: WAP to implement AND function in Perceptron NN
CODE:
%Perceptron for AND function
clear;
clc;
x=[1 1 -1 -1;1 -1 1 -1];
t=[1 -1 -1 -1];
w=[0 0];
b=0;
alpha=input('Enter Learning rate=');
theta=input('Enter Threshold value');
con=1;
epoch=0;
while con
    con=0;
    for i=1:4
        yin=b+x(1,i)*w(1)+x(2,i)*w(2);
        % bipolar activation with dead zone [-theta, theta]
        if yin>theta, y=1; end
        if yin<=theta && yin>=-theta, y=0; end
        if yin<-theta, y=-1; end
        if y~=t(i)              % update only on misclassification
            con=1;
            w=w+alpha*t(i)*x(:,i)';
            b=b+alpha*t(i);
        end
    end
    epoch=epoch+1;
end
disp('Perceptron for AND function');
disp('Final weight matrix');
disp(w);
disp('Final Bias');
disp(b);
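Whenever the thresholded output disagrees with the target, the perceptron rule fires: w = w + alpha*t(i)*x(:,i)' and b = b + alpha*t(i). With alpha = 0.6 and theta = 0.8 the loop converges in three epochs to w = [1.2 1.2] and b = -1.2, matching the output below.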
OUTPUT:
Enter Learning rate=0.6
Enter Threshold value0.8
Perceptron for AND function
Final weight matrix
1.2000 1.2000
Final Bias
-1.2000
Experiment No. 14
AIM: WAP to implement Perceptron Network
CODE:
clear;
clc;
p1=[1 1]'; p2=[1 2]';
p3=[-2 -1]'; p4=[2 -2]';
p5=[-1 2]'; p6=[-2 -1]'; p7=[-1 -1]'; p8=[-2 -2]';
% define the input matrix, which is also a target matrix for
% autoassociation
P=[p1 p2 p3 p4 p5 p6 p7 p8];
% we will initialize the network to zero initial weights
net=newlin([min(min(P)) max(max(P)); min(min(P)) max(max(P))],2);
weights=net.iw{1,1}
% set training goal (zero error)
net.trainParam.goal=0.0;
% number of epochs
net.trainParam.epochs=400;
[net, tr]=train(net,P,P);     % target matrix T=P
% default training function for newlin is Widrow-Hoff learning
% weights and bias after the training
W=net.iw{1,1}
B=net.b{1}
Y=sim(net,P);
% Hamming-like distance criterion
criterion=sum(sum(abs(P-Y)')')
% calculate and plot the errors
rs=Y-P;
legend(['criterion=' num2str(criterion)])
figure
plot(rs(1,:),rs(2,:),'k*')
% let's add some noise in the input and test the network again
test=P+rand(size(P))/10;
Ytest=sim(net,test);
criteriontest=sum(sum(abs(P-Ytest)')')
figure
output=Ytest-P
% plot errors in the output
plot(output(1,:),output(2,:),'k*')
OUTPUT:
weights =
0 0
0 0
W =
1.0000 -0.0000
-0.0000 1.0000
B =
1.0e-12 *
-0.1682
-0.0100
criterion =
1.2085e-12
Warning: Plot empty.
> In legend at 287
criteriontest =
0.9751
output =
Columns 1 through 7
0.0815 0.0127 0.0632 0.0278 0.0958 0.0158 0.0957
0.0906 0.0913 0.0098 0.0547 0.0965 0.0971 0.0485
Column 8
0.0800
0.0142
[Plot 1: empty error plot (legend shows the criterion value); Plot 2: recall errors rs(1,:) vs rs(2,:), on the order of 1e-13 to 1e-15]
[Plot 1: output errors for the noisy test inputs, scattered in the range 0 to 0.1; Plot 2: training performance, mean squared error (mse) vs. epochs; best training performance is 1.3086e-26 at epoch 400]
[Plot: training state; Validation Checks = 0, at epoch 400]
[Plot: regression of outputs against targets; Training: R=0.97897, Validation: R=0.92226, Test: R=0.90828, All: R=0.96404]
Experiment No.15
AIM: WAP to implement feed forward network
CODE:
% a) Design and train a feedforward network for the following problem:
% Parity: consider a 4-input and 1-output problem, where the output should
% be 'one' if there is an odd number of 1s in the input pattern and 'zero'
% otherwise.
clear
inp=[0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1;0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1;...
     0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1;0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1];
out=[0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0];
network=newff([0 1;0 1;0 1;0 1],[6 1],{'logsig','logsig'});
network=init(network);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('Before Training');
axis([-5 5 -2.0 2.0]);
network.trainParam.epochs=500;
network=train(network,inp,out);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('After Training');
axis([-5 5 -2.0 2.0]);
Layer1_Weights=network.iw{1};
Layer1_Bias=network.b{1};
Layer2_Weights=network.lw{2};
Layer2_Bias=network.b{2};
Layer1_Weights
Layer1_Bias
Layer2_Weights
Layer2_Bias
Actual_Desired=[y' out'];
Actual_Desired
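Here newff builds a 4-6-1 network with logsig activations on both layers. Parity is the classic non-linearly-separable problem, so at least one hidden layer is necessary; six hidden units is simply a workable choice. Training (trainlm, the Levenberg-Marquardt default for newff, which is why the training-state plot below reports a mu value) drives the outputs essentially onto the 0/1 targets, as the Actual_Desired table shows.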
OUTPUT:
Layer1_Weights =
1.0765 2.1119 2.6920 2.3388
-10.4592 -10.9392 10.0824 10.9071
6.0739 9.4600 -5.3666 -5.9492
-6.0494 -18.5892 -5.9393 5.6923
-2.5863 -1.7445 -11.6903 3.7168
10.7251 -10.5659 9.8250 -10.4745
Layer1_Bias =
-16.0634
5.4848
9.5144
9.6231
7.4340
5.7091
Layer2_Weights =
-2.5967 -23.3294 15.7618 23.4261 -22.5208 -23.3569
Layer2_Bias =
18.4268
Actual_Desired =
0.0000 0
1.0000 1.0000
0.9999 1.0000
0.0000 0
1.0000 1.0000
0.0000 0
0.0000 0
0.9998 1.0000
1.0000 1.0000
0.0000 0
0.0000 0
0.9999 1.0000
0.0000 0
1.0000 1.0000
1.0000 1.0000
0.0000 0
[Plot 1: Before Training; Plot 2: After Training (network outputs against the parity targets)]
[Plot: training performance, mean squared error (mse) vs. epochs; best training performance is 3.3016e-09 at epoch 26]
[Plots: training state (Gradient = 1.9837e-08, Mu = 1e-13, Validation Checks = 0, all at epoch 26) and regression of outputs against targets (Training: R=1)]
Experiment No. 16
AIM: WAP to implement Instar Learning Rule
CODE:
% Using the Instar learning law, group all the sixteen possible binary
% vectors of length 4 into four different groups. Use suitable values for
% the initial weights and for the learning rate parameter. Use a 4-unit
% input and 4-unit output network. Select random initial weights in the
% range [0,1].
in=[0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1;0 1 0 0;0 1 0 1;0 1 1 0;0 1 1 1;...
    1 0 0 0;1 0 0 1;1 0 1 0;1 0 1 1;1 1 0 0;1 1 0 1;1 1 1 0;1 1 1 1];
wgt=[0.4 0.1 0.2 0.7;0.9 0.7 0.4 0.7;0.1 0.2 0.9 0.8;0.5 0.6 0.7 0.6];
eta=0.5;
it=3000;
for t=1:it
    for i=1:16
        for j=1:4
            w(j)=in(i,:)*wgt(j,:)';
        end
        [v c]=max(w);
        wgt(c,:)=wgt(c,:)+eta*(in(i,:)-wgt(c,:));
        k=power(wgt(c,:),2);
        f=sqrt(sum(k));
        wgt(c,:)=wgt(c,:)/f;    % normalise the winner's weight vector
    end
end
for i=1:16
    for j=1:4
        w(j)=in(i,:)*wgt(j,:)';
    end
    [v c]=max(w);
    if(v==0)
        c=4;
    end
    s=['Input= ' int2str(in(i,:)) ' Group= ' int2str(c)];
    display(s);
end
wgt
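Each win applies the instar update wgt(c,:) = wgt(c,:) + eta*(in(i,:) - wgt(c,:)), pulling the winning row halfway (eta = 0.5) toward the input, after which the row is renormalised to unit length. Competition on the inner product w(j) = in(i,:)*wgt(j,:)' together with this normalisation is what partitions the sixteen vectors into four groups; the all-zero vector excites no unit (v = 0), so the code assigns it to group 4 by convention.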
OUTPUT:
s =
Input= 0 0 0 0 Group= 4
s =
Input= 0 0 0 1 Group= 1
s =
Input= 0 0 1 0 Group= 3
s =
Input= 0 0 1 1 Group= 3
s =
Input= 0 1 0 0 Group= 2
s =
Input= 0 1 0 1 Group= 4
s =
Input= 0 1 1 0 Group= 2
s =
Input= 0 1 1 1 Group= 4
s =
Input= 1 0 0 0 Group= 1
s =
Input= 1 0 0 1 Group= 1
s =
Input= 1 0 1 0 Group= 3
s =
Input= 1 0 1 1 Group= 3
s =
Input= 1 1 0 0 Group= 2
s =
Input= 1 1 0 1 Group= 1
s =
Input= 1 1 1 0 Group= 2
s =
Input= 1 1 1 1 Group= 4
wgt =
0.6548 0.4318 0.0000 0.6203
0.5646 0.6819 0.4651 0.0000
0.5646 0.0000 0.6819 0.4651
0.3877 0.5322 0.5322 0.5322
Experiment No.17
AIM: WAP to implement Weight Vector Matrix
CODE:
clc;
clear;
x=[-1 -1 -1 -1;-1 -1 1 1];
t=[1 1 1 1];
w=zeros(4,4);
for i=1:2
    w=w+x(i,1:4)'*x(i,1:4);
end
yin=t*w;
for i=1:4
    if yin(i)>0
        y(i)=1;
    else
        y(i)=-1;
    end
end
disp('The Calculated Weight Matrix');
disp(w);
if x(1,1:4)==y(1:4) | x(2,1:4)==y(1:4)
    disp('the vector is a known vector');
else
    disp('the vector is an unknown vector');
end
OUTPUT:
The Calculated Weight Matrix
2 2 0 0
2 2 0 0
0 0 2 2
0 0 2 2
the vector is an unknown vector