Artificial Neural Network Supervised Learning

Dr. Mohsen Kahani, http://www.um.ac.ir/~kahani/

Description

Artificial Neural Network Supervised Learning. Dr. Mohsen Kahani, http://www.um.ac.ir/~kahani/. Back-propagation neural network: learning in a multilayer network proceeds the same way as for a perceptron; a training set of input patterns is presented to the network.

Transcript of Artificial Neural Network Supervised Learning

Page 1: Artificial Neural Network  Supervised Learning

Artificial Neural Network

Supervised Learning

Dr. Mohsen Kahani, http://www.um.ac.ir/~kahani/

Page 2: Artificial Neural Network  Supervised Learning


Back-propagation neural network

Learning in a multilayer network proceeds the same way as for a perceptron: a training set of input patterns is presented to the network.

The network computes its output pattern, and if there is an error, that is, a difference between the actual and desired output patterns, the weights are adjusted to reduce this error.

Page 3: Artificial Neural Network  Supervised Learning


BP Training Phases

In a back-propagation neural network, the learning algorithm has two phases.

1. A training input pattern is presented to the network input layer. The network propagates the input pattern from layer to layer until the output pattern is generated by the output layer.

2. If this pattern is different from the desired output, an error is calculated and then propagated backwards through the network from the output layer to the input layer. The weights are modified as the error is propagated.

Page 4: Artificial Neural Network  Supervised Learning


Three-layer BP network

Page 5: Artificial Neural Network  Supervised Learning


The back-propagation training algorithm

Step 1: Initialisation

Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range:

$$\left( -\frac{2.4}{F_i}, \; +\frac{2.4}{F_i} \right)$$

where $F_i$ is the total number of inputs of neuron $i$ in the network. The weight initialisation is done on a neuron-by-neuron basis.
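As a concrete illustration, here is a minimal NumPy sketch of this initialisation for the 2-2-1 network used in the XOR example later in these slides; the layer sizes, helper names and seed are assumptions for illustration, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_inputs, n_neurons):
    # Uniform weights and thresholds in (-2.4/Fi, +2.4/Fi), where Fi = n_inputs
    bound = 2.4 / n_inputs
    w = rng.uniform(-bound, bound, size=(n_inputs, n_neurons))
    theta = rng.uniform(-bound, bound, size=n_neurons)
    return w, theta

w_hidden, theta_hidden = init_layer(2, 2)   # input layer -> 2 hidden neurons
w_output, theta_output = init_layer(2, 1)   # 2 hidden neurons -> 1 output neuron
```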

Page 6: Artificial Neural Network  Supervised Learning


Step 2: Activation

Activate the BP network by applying:
inputs $x_1(p), x_2(p), \ldots, x_n(p)$
desired outputs $y_{d,1}(p), y_{d,2}(p), \ldots, y_{d,n}(p)$.

(a) Calculate the actual outputs of the neurons in the hidden layer:

$$y_j(p) = \text{sigmoid}\left[ \sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j \right]$$

where $n$ is the number of inputs of neuron $j$ in the hidden layer, and sigmoid is the sigmoid activation function.

Page 7: Artificial Neural Network  Supervised Learning


Step 2: Activation (cont.)

(b) Calculate the actual outputs of the neurons in the output layer:

$$y_k(p) = \text{sigmoid}\left[ \sum_{j=1}^{m} x_{jk}(p)\, w_{jk}(p) - \theta_k \right]$$

where $m$ is the number of inputs of neuron $k$ in the output layer.
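Continuing the sketch above, steps 2(a) and 2(b) amount to one matrix-vector product and a sigmoid per layer; the function and variable names carry over from the previous sketch:

```python
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, theta_hidden, w_output, theta_output):
    # Step 2(a): hidden-layer outputs y_j(p)
    y_hidden = sigmoid(x @ w_hidden - theta_hidden)
    # Step 2(b): output-layer outputs y_k(p)
    y_output = sigmoid(y_hidden @ w_output - theta_output)
    return y_hidden, y_output
```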

Page 8: Artificial Neural Network  Supervised Learning


Step 3: Weight training

Update the weights in the BP network, propagating backward the errors associated with the output neurons.

(a) Calculate the error gradient for the neurons in the output layer:

$$\delta_k(p) = y_k(p)\,[1 - y_k(p)]\,e_k(p)$$

where $e_k(p) = y_{d,k}(p) - y_k(p)$.

Calculate the weight corrections:

$$\Delta w_{jk}(p) = \alpha\,y_j(p)\,\delta_k(p)$$

Update the weights at the output neurons:

$$w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$$
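A sketch of step 3(a); the learning rate of 0.1 is the value the worked example uses below, and the helper names are my own:

```python
def output_gradient(y_output, y_desired):
    e = y_desired - y_output                 # e_k(p) = y_d,k(p) - y_k(p)
    return y_output * (1.0 - y_output) * e   # delta_k(p)

def apply_output_update(w_output, theta_output, y_hidden, delta_out, lr=0.1):
    w_output += lr * np.outer(y_hidden, delta_out)   # Delta w_jk = alpha y_j delta_k
    theta_output += lr * (-1.0) * delta_out          # threshold as weight on fixed input -1
```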

Page 9: Artificial Neural Network  Supervised Learning


Step 3: Weight training (cont.)

(b) Calculate the error gradient for the neurons in the hidden layer:

$$\delta_j(p) = y_j(p)\,[1 - y_j(p)]\,\sum_{k=1}^{l}\delta_k(p)\,w_{jk}(p)$$

Calculate the weight corrections:

$$\Delta w_{ij}(p) = \alpha\,x_i(p)\,\delta_j(p)$$

Update the weights at the hidden neurons:

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$
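And step 3(b), where the output-layer gradients are propagated back through the output weights; again a sketch with assumed names:

```python
def hidden_gradient(y_hidden, delta_out, w_output):
    # delta_j(p) = y_j(p) [1 - y_j(p)] * sum_k delta_k(p) w_jk(p)
    return y_hidden * (1.0 - y_hidden) * (w_output @ delta_out)

def apply_hidden_update(w_hidden, theta_hidden, x, delta_hid, lr=0.1):
    w_hidden += lr * np.outer(x, delta_hid)   # Delta w_ij = alpha x_i delta_j
    theta_hidden += lr * (-1.0) * delta_hid
```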

Page 10: Artificial Neural Network  Supervised Learning


Step 4: Iteration

Increase iteration p by one, go back to Step 2 and repeat the process until the selected error criterion is satisfied.
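Tying steps 1 to 4 together on the XOR training set used in the example below; the 0.001 sum-squared-error criterion comes from the slides, everything else is an illustrative sketch (note that both gradients are computed before any weights change, matching the worked example; plain back-propagation on XOR can take a few hundred epochs with these settings):

```python
X = np.array([[1., 1.], [0., 1.], [1., 0.], [0., 0.]])
Y_D = np.array([[0.], [1.], [1.], [0.]])

sse, epoch = 1.0, 0
while sse > 0.001:                       # Step 4: repeat until the criterion is met
    sse = 0.0
    for x, y_d in zip(X, Y_D):
        y_hidden, y_output = forward(x, w_hidden, theta_hidden,
                                     w_output, theta_output)
        sse += float(np.sum((y_d - y_output) ** 2))
        delta_out = output_gradient(y_output, y_d)
        delta_hid = hidden_gradient(y_hidden, delta_out, w_output)
        apply_output_update(w_output, theta_output, y_hidden, delta_out)
        apply_hidden_update(w_hidden, theta_hidden, x, delta_hid)
    epoch += 1
```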

Page 11: Artificial Neural Network  Supervised Learning


Example

Three-layer network for solving the Exclusive-OR operation.

[Figure: inputs x1 and x2 feed hidden neurons 3 and 4 through weights w13, w23, w14, w24; neurons 3 and 4 feed output neuron 5 through weights w35 and w45; neurons 3, 4 and 5 each also receive a fixed threshold input.]

Page 12: Artificial Neural Network  Supervised Learning


Example (cont.)

The effect of the threshold applied to a neuron in the hidden or output layer is represented by its weight, θ, connected to a fixed input equal to -1.

The initial weights and threshold levels are set randomly as follows:
w13 = 0.5, w14 = 0.9, w23 = 0.4, w24 = 1.0, w35 = -1.2, w45 = 1.1, θ3 = 0.8, θ4 = -0.1 and θ5 = 0.3.

Page 13: Artificial Neural Network  Supervised Learning


Example (cont.)

We consider a training set where inputs x1 and x2 are equal to 1 and the desired output y_d,5 is 0. The actual outputs of neurons 3 and 4 in the hidden layer are calculated as:

$$y_3 = \text{sigmoid}(x_1 w_{13} + x_2 w_{23} - \theta_3) = 1/\left[1 + e^{-(1 \cdot 0.5 + 1 \cdot 0.4 - 1 \cdot 0.8)}\right] = 0.5250$$

$$y_4 = \text{sigmoid}(x_1 w_{14} + x_2 w_{24} - \theta_4) = 1/\left[1 + e^{-(1 \cdot 0.9 + 1 \cdot 1.0 + 1 \cdot 0.1)}\right] = 0.8808$$

Now the actual output of neuron 5 in the output layer is determined as:

$$y_5 = \text{sigmoid}(y_3 w_{35} + y_4 w_{45} - \theta_5) = 1/\left[1 + e^{-(-1.2 \cdot 0.5250 + 1.1 \cdot 0.8808 - 1 \cdot 0.3)}\right] = 0.5097$$

Thus, the following error is obtained:

$$e = y_{d,5} - y_5 = 0 - 0.5097 = -0.5097$$
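These numbers can be checked in a few lines, reusing the sigmoid defined in the earlier sketch:

```python
w13, w14, w23, w24, w35, w45 = 0.5, 0.9, 0.4, 1.0, -1.2, 1.1
t3, t4, t5 = 0.8, -0.1, 0.3
x1 = x2 = 1.0

y3 = sigmoid(x1 * w13 + x2 * w23 - t3)  # 0.5250
y4 = sigmoid(x1 * w14 + x2 * w24 - t4)  # 0.8808
y5 = sigmoid(y3 * w35 + y4 * w45 - t5)  # 0.5097
e = 0.0 - y5                            # -0.5097
```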

Page 14: Artificial Neural Network  Supervised Learning


Example (cont.)

The next step is weight training. To update the weights and threshold levels in our network, we propagate the error, e, from the output layer backward to the input layer.

First, we calculate the error gradient for neuron 5 in the output layer:

$$\delta_5 = y_5 (1 - y_5)\, e = 0.5097 \cdot (1 - 0.5097) \cdot (-0.5097) = -0.1274$$

Then we determine the weight corrections, assuming that the learning rate parameter, α, is equal to 0.1:

$$\Delta w_{35} = \alpha\, y_3\, \delta_5 = 0.1 \cdot 0.5250 \cdot (-0.1274) = -0.0067$$
$$\Delta w_{45} = \alpha\, y_4\, \delta_5 = 0.1 \cdot 0.8808 \cdot (-0.1274) = -0.0112$$
$$\Delta \theta_5 = \alpha \cdot (-1) \cdot \delta_5 = 0.1 \cdot (-1) \cdot (-0.1274) = 0.0127$$

Page 15: Artificial Neural Network  Supervised Learning


Example (cont.)

Next we calculate the error gradients for neurons 3 and 4 in the hidden layer:

$$\delta_3 = y_3 (1 - y_3)\, \delta_5\, w_{35} = 0.5250 \cdot (1 - 0.5250) \cdot (-0.1274) \cdot (-1.2) = 0.0381$$
$$\delta_4 = y_4 (1 - y_4)\, \delta_5\, w_{45} = 0.8808 \cdot (1 - 0.8808) \cdot (-0.1274) \cdot 1.1 = -0.0147$$

We then determine the weight corrections:

$$\Delta w_{13} = \alpha\, x_1\, \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$$
$$\Delta w_{23} = \alpha\, x_2\, \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$$
$$\Delta \theta_3 = \alpha \cdot (-1) \cdot \delta_3 = 0.1 \cdot (-1) \cdot 0.0381 = -0.0038$$
$$\Delta w_{14} = \alpha\, x_1\, \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$$
$$\Delta w_{24} = \alpha\, x_2\, \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$$
$$\Delta \theta_4 = \alpha \cdot (-1) \cdot \delta_4 = 0.1 \cdot (-1) \cdot (-0.0147) = 0.0015$$

Page 16: Artificial Neural Network  Supervised Learning


Example (cont.)

At last, we update all weights and thresholds:

w13 = w13 + Δw13 = 0.5 + 0.0038 = 0.5038
w14 = w14 + Δw14 = 0.9 - 0.0015 = 0.8985
w23 = w23 + Δw23 = 0.4 + 0.0038 = 0.4038
w24 = w24 + Δw24 = 1.0 - 0.0015 = 0.9985
w35 = w35 + Δw35 = -1.2 - 0.0067 = -1.2067
w45 = w45 + Δw45 = 1.1 - 0.0112 = 1.0888
θ3 = θ3 + Δθ3 = 0.8 - 0.0038 = 0.7962
θ4 = θ4 + Δθ4 = -0.1 + 0.0015 = -0.0985
θ5 = θ5 + Δθ5 = 0.3 + 0.0127 = 0.3127

The training process is repeated until the sum of squared errors is less than 0.001.
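The backward pass of the worked example can be checked the same way, continuing from the previous snippet:

```python
lr = 0.1
d5 = y5 * (1 - y5) * e          # -0.1274
d3 = y3 * (1 - y3) * d5 * w35   #  0.0381
d4 = y4 * (1 - y4) * d5 * w45   # -0.0147

w13 += lr * x1 * d3             #  0.5038
w23 += lr * x2 * d3             #  0.4038
w14 += lr * x1 * d4             #  0.8985
w24 += lr * x2 * d4             #  0.9985
w35 += lr * y3 * d5             # -1.2067
w45 += lr * y4 * d5             #  1.0888
t3 += lr * -1 * d3              #  0.7962
t4 += lr * -1 * d4              # -0.0985
t5 += lr * -1 * d5              #  0.3127
```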

Page 17: Artificial Neural Network  Supervised Learning


Learning curve for Exclusive-OR

[Figure: sum-squared network error versus epoch on a log scale (10^-4 to 10^1); the curve shows the sum-squared network error for 224 epochs.]

Page 18: Artificial Neural Network  Supervised Learning


Final results of learning

Inputs (x1, x2)    Desired output yd    Actual output y5    Error e     Sum of squared errors
(1, 1)             0                    0.0155              -0.0155     0.0010
(0, 1)             1                    0.9849               0.0151
(1, 0)             1                    0.9849               0.0151
(0, 0)             0                    0.0175              -0.0175

Page 19: Artificial Neural Network  Supervised Learning


McCulloch-Pitts model for solving the Exclusive-OR

Page 20: Artificial Neural Network  Supervised Learning


Decision boundaries

(a) Decision boundary constructed by hidden neuron 3; (b) decision boundary constructed by hidden neuron 4; (c) decision boundaries constructed by the complete three-layer network.

Page 21: Artificial Neural Network  Supervised Learning


Accelerated learning in multilayer neural networks

A multilayer network learns much faster when the sigmoidal activation function is represented by a hyperbolic tangent:

$$Y^{\tanh} = \frac{2a}{1 + e^{-bX}} - a$$

where a and b are constants. Suitable values are a = 1.716 and b = 0.667.
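A one-function sketch of this activation; its outputs range over (-a, a), here roughly ±1.716:

```python
def tanh_act(x, a=1.716, b=0.667):
    # Y_tanh = 2a / (1 + exp(-b x)) - a
    return 2.0 * a / (1.0 + np.exp(-b * x)) - a
```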

Page 22: Artificial Neural Network  Supervised Learning


Accelerated learning (cont.)

We can also accelerate training by including a momentum term in the delta rule:

$$\Delta w_{jk}(p) = \beta\,\Delta w_{jk}(p-1) + \alpha\,y_j(p)\,\delta_k(p)$$

where β is a positive number (0 ≤ β < 1) called the momentum constant. Typically, the momentum constant is set to 0.95.

This equation is called the generalised delta rule.
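A sketch of the generalised delta rule; the previous correction must be kept between iterations (the helper name is assumed, and dw_prev starts as an array of zeros before the first iteration):

```python
def momentum_correction(dw_prev, y_hidden, delta_out, lr=0.1, beta=0.95):
    # Delta w_jk(p) = beta * Delta w_jk(p-1) + alpha * y_j(p) * delta_k(p)
    return beta * dw_prev + lr * np.outer(y_hidden, delta_out)
```

The returned correction is added to the weights and becomes dw_prev on the next iteration.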

Page 23: Artificial Neural Network  Supervised Learning


Learning with momentum for the Exclusive-OR operation

Page 24: Artificial Neural Network  Supervised Learning


Learning with adaptive learning rate

To accelerate the convergence and yet avoid the danger of instability, we can apply two heuristics:

Heuristic 1: If the change of the sum of squared errors has the same algebraic sign for several consecutive epochs, then the learning rate parameter, α, should be increased.

Heuristic 2: If the algebraic sign of the change of the sum of squared errors alternates for several consecutive epochs, then the learning rate parameter, α, should be decreased.

Page 25: Artificial Neural Network  Supervised Learning


Learning with adaptive learning rate (cont.)

Adapting the learning rate requires some changes in the back-propagation algorithm.

If the sum of squared errors at the current epoch exceeds the previous value by more than a predefined ratio (typically 1.04), the learning rate parameter is decreased (typically by multiplying by 0.7) and new weights and thresholds are calculated.

If the error is less than the previous one, the learning rate is increased (typically by multiplying by 1.05).
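The rule just described, as a small function; the ratio and factors are the typical values quoted above, and recomputing weights after a rejected epoch is left out of this sketch:

```python
def adapt_learning_rate(lr, sse, sse_prev, ratio=1.04, down=0.7, up=1.05):
    if sse > sse_prev * ratio:
        return lr * down   # error grew too much: decrease the learning rate
    if sse < sse_prev:
        return lr * up     # error fell: increase the learning rate
    return lr
```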

Page 26: Artificial Neural Network  Supervised Learning


Learning with adaptive learning rate (cont.)

[Figure: training for 103 epochs; sum-squared error on a log scale (10^-4 to 10^2) and learning rate (0 to 1) plotted against epoch.]

Page 27: Artificial Neural Network  Supervised Learning


Learning with momentum and adaptive learning rate

[Figure: training for 85 epochs; sum-squared error on a log scale (10^-4 to 10^2) and learning rate (0 to 2.5) plotted against epoch.]

Page 28: Artificial Neural Network  Supervised Learning


The Hopfield Network

The brain's memory works by association: for example, we can recognise a familiar face even in an unfamiliar environment within 100-200 ms.

A back-propagation network is good for pattern recognition problems, but to emulate the associative characteristics of memory we need a recurrent neural network. A recurrent neural network has feedback loops from its outputs to its inputs.

The stability of recurrent networks was resolved only in 1982, when John Hopfield formulated the physical principle of storing information in a dynamically stable network.

Page 29: Artificial Neural Network  Supervised Learning


Single-layer n-neuron Hopfield network

[Figure: neurons 1, 2, ..., i, ..., n; input signals x1, x2, ..., xi, ..., xn; output signals y1, y2, ..., yi, ..., yn, fed back to the inputs.]

Page 30: Artificial Neural Network  Supervised Learning


Multilayer Hopfield Network

[Figure: input, hidden and output layers, with z^-1 unit-delay elements feeding the outputs back to the inputs.]

Page 31: Artificial Neural Network  Supervised Learning


Hopfield Network Attributes

The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element:

$$Y^{\text{sign}} = \begin{cases} +1, & \text{if } X > 0 \\ -1, & \text{if } X < 0 \\ Y, & \text{if } X = 0 \end{cases}$$

If the neuron's weighted input is exactly zero, its output remains unchanged: the neuron stays in its previous state.

Page 32: Artificial Neural Network  Supervised Learning


Hopfield Network Attributes

The current state of the Hopfield network is determined by the current outputs of all neurons, y1, y2, ..., yn. Thus, for a single-layer n-neuron network, the state can be defined by the state vector:

$$\mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$

Page 33: Artificial Neural Network  Supervised Learning


Hopfield Network Attributes

In the Hopfield network, synaptic weights between neurons are usually represented in matrix form as follows:

$$\mathbf{W} = \sum_{m=1}^{M} \mathbf{Y}_m \mathbf{Y}_m^T - M\,\mathbf{I}$$

where M is the number of states to be memorised by the network, Y_m is the n-dimensional binary vector, I is the n x n identity matrix, and superscript T denotes matrix transposition.
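A direct NumPy transcription of this weight matrix, assuming bipolar (±1) pattern vectors:

```python
import numpy as np

def hopfield_weights(patterns):
    # W = sum_m Y_m Y_m^T - M * I
    n = len(patterns[0])
    return sum(np.outer(y, y) for y in patterns) - len(patterns) * np.eye(n)
```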

Page 34: Artificial Neural Network  Supervised Learning


Possible states for the three-neuron Hopfield network

[Figure: the eight possible states (±1, ±1, ±1) form the corners of a cube in the (y1, y2, y3) space.]

Page 35: Artificial Neural Network  Supervised Learning


Hopfield Network Calculation

The stable state-vertex is determined by the weight matrix W, the current input vector X, and the threshold matrix θ.

If the input vector is partially incorrect or incomplete, the initial state will converge into the stable state-vertex after a few iterations.

Page 36: Artificial Neural Network  Supervised Learning


Hopfield Network Calculations (Cont.)

Suppose, for instance, that our network is required to memorise two opposite states, (1, 1, 1) and (-1, -1, -1). Thus,

$$\mathbf{Y}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad \mathbf{Y}_2 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$$

or, in row form,

$$\mathbf{Y}_1^T = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}, \qquad \mathbf{Y}_2^T = \begin{bmatrix} -1 & -1 & -1 \end{bmatrix}$$

where Y1 and Y2 are three-dimensional vectors. The 3 x 3 identity matrix I is

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Page 37: Artificial Neural Network  Supervised Learning


Hopfield Network Calculations (Cont.)

Thus, we can now determine the weight matrix as follows:

$$\mathbf{W} = \mathbf{Y}_1 \mathbf{Y}_1^T + \mathbf{Y}_2 \mathbf{Y}_2^T - 2\,\mathbf{I} = \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix}$$

Next, the network is tested by the sequence of input vectors, X1 and X2, which are equal to the output (or target) vectors Y1 and Y2, respectively.

Page 38: Artificial Neural Network  Supervised Learning


Hopfield Network Calculations (Cont.)

First, we activate the Hopfield network by applying the input vector X. Then, we calculate the actual output vector Y, and finally, we compare the result with the initial input vector X.

$$\mathbf{Y}_1 = \text{sign}\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

$$\mathbf{Y}_2 = \text{sign}\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix} \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$$
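Using the hopfield_weights sketch from earlier, the stability of both memorised states can be verified numerically:

```python
W = hopfield_weights([np.array([1, 1, 1]), np.array([-1, -1, -1])])
theta = np.zeros(3)

for x in (np.array([1, 1, 1]), np.array([-1, -1, -1])):
    y = np.where(W @ x - theta >= 0, 1, -1)   # sign activation (inputs are nonzero here)
    print(x, "->", y)                         # each state maps onto itself
```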

Page 39: Artificial Neural Network  Supervised Learning


Hopfield Network Calculations (Cont.)

The remaining six states are all unstable. However, stable states (also called fundamental memories) are capable of attracting states that are close to them.

The fundamental memory (1, 1, 1) attracts unstable states (-1, 1, 1), (1, -1, 1) and (1, 1, -1). Each of these unstable states represents a single error, compared to the fundamental memory (1, 1, 1).

The fundamental memory (-1, -1, -1) attracts unstable states (-1, -1, 1), (-1, 1, -1) and (1, -1, -1).

Thus, the Hopfield network can act as an error correction network.

Page 40: Artificial Neural Network  Supervised Learning


Storage capacity of the Hopfield network

Storage capacity is the largest number of fundamental memories that can be stored and retrieved correctly. The maximum number of fundamental memories M_max that can be stored in the n-neuron recurrent network is limited by:

$$M_{max} = 0.15\,n$$

Page 41: Artificial Neural Network  Supervised Learning


Bidirectional associative memory (BAM)

The Hopfield network is an autoassociative memory: it can retrieve a corrupted or incomplete memory, but it cannot associate one memory with another, different memory.

Human memory is essentially associative. Associative memory needs a recurrent neural network capable of accepting an input pattern on one set of neurons and producing a related, but different, output pattern on another set of neurons.

Page 42: Artificial Neural Network  Supervised Learning


Bidirectional associative memory (BAM)

Bidirectional associative memory (BAM), first proposed by Bart Kosko, is a heteroassociative network.

It associates patterns from one set, set A, to patterns from another set, set B, and vice versa.

Like a Hopfield network, the BAM can generalize and also produce correct outputs despite corrupted or incomplete inputs.

Page 43: Artificial Neural Network  Supervised Learning


BAM operation

[Figure: (a) forward direction, where input-layer neurons 1..n with signals x1(p), ..., xn(p) drive output-layer neurons 1..m to produce y1(p), ..., ym(p); (b) backward direction, where y1(p), ..., ym(p) are fed back through the input layer to produce x1(p+1), ..., xn(p+1).]

Page 44: Artificial Neural Network  Supervised Learning


BAM Concept

The basic idea behind the BAM is to store pattern pairs so that when the n-dimensional vector X from set A is presented as input, the BAM recalls the m-dimensional vector Y from set B, but when Y is presented as input, the BAM recalls X.

To develop the BAM, we need to create a correlation matrix for each pattern pair we want to store. The BAM weight matrix is:

$$\mathbf{W} = \sum_{m=1}^{M} \mathbf{X}_m \mathbf{Y}_m^T$$

where M is the number of pattern pairs to be stored in the BAM.
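A minimal sketch of BAM storage and recall under this formula, assuming bipolar pattern pairs and a sign activation (helper names are my own):

```python
def bam_weights(pairs):
    # W = sum_m X_m Y_m^T for pattern pairs (X_m, Y_m)
    return sum(np.outer(x, y) for x, y in pairs)

def bam_forward(W, x):
    return np.where(W.T @ x >= 0, 1, -1)   # recall Y from X

def bam_backward(W, y):
    return np.where(W @ y >= 0, 1, -1)     # recall X from Y
```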

Page 45: Artificial Neural Network  Supervised Learning


Stability and storage capacity of the BAM

The BAM is unconditionally stable. This means that any set of associations can be learned without risk of instability.

The maximum number of associations cannot exceed the number of neurons in the smaller layer.

The more serious problem with the BAM is incorrect convergence: the BAM may not always produce the closest association. In fact, a stable association may be only slightly related to the initial input vector.