
Advanced AI

Neural Networks

Introduction

An Artificial Neural Network (ANN) is inspired by biological nervous systems such as the brain.

It is composed of interconnected computing units called neurons.

Like humans, ANNs learn by example.


Science: model how biological neural systems, such as the human brain, work.

How do we see? How is information stored in and retrieved from memory? How do you learn not to touch fire? How do your eyes adapt to the amount of light in the environment?

Related fields: Neuroscience, Computational Neuroscience, Psychology, Psychophysiology, Cognitive Science, Medicine, Mathematics, Physics.

History

• Roots of work on NN are in:

• Neurobiological studies (more than one century ago):

• How do nerves behave when stimulated by different magnitudes of electric current? Is there a minimal threshold needed for nerves to be activated? Given that no single nerve cell is long enough, how do different nerve cells communicate with each other?

• Psychological studies:

• How do animals learn, forget, recognize and perform other types of tasks?

• Psycho-physical experiments helped to understand how individual neurons and groups of neurons work.

• McCulloch and Pitts introduced the first mathematical model of a single neuron, widely applied in subsequent work.

History

• Widrow and Hoff (1960): Adaline

• Minsky and Papert (1969): limitations of single-layer perceptrons (and they erroneously claimed that the limitations hold for multi-layer perceptrons)

Stagnation in the 70's:

• Individual researchers continue laying foundations

• von der Malsburg (1973): competitive learning and self-organization

Big neural-nets boom in the 80's

• Grossberg: adaptive resonance theory (ART)

• Hopfield: Hopfield network

• Kohonen: self-organising map (SOM)


Brief History

Old Ages:
• Association (William James; 1890)
• McCulloch-Pitts Neuron (1943, 1947)
• Perceptrons (Rosenblatt; 1958, 1962)
• Adaline/LMS (Widrow and Hoff; 1960)
• Perceptrons book (Minsky and Papert; 1969)

Dark Ages:
• Self-organization in visual cortex (von der Malsburg; 1973)
• Backpropagation (Werbos; 1974)
• Foundations of Adaptive Resonance Theory (Grossberg; 1976)
• Neural Theory of Association (Amari; 1977)


History

Modern Ages:
• Adaptive Resonance Theory (Grossberg; 1980)
• Hopfield model (Hopfield; 1982, 1984)
• Self-organizing maps (Kohonen; 1982)
• Reinforcement learning (Sutton and Barto; 1983)
• Simulated Annealing (Kirkpatrick et al.; 1983)
• Boltzmann machines (Ackley, Hinton, Sejnowski; 1985)
• Backpropagation (Rumelhart, Hinton, Williams; 1986)
• ART networks (Carpenter, Grossberg; 1992)
• Support Vector Machines


Hebb’s Learning Law

In 1949, Donald Hebb formulated William James’ principle of association into a mathematical form.

• If the activations of two neurons, y1 and y2, are both on (+1), then the weight between the two neurons grows. (Off: 0)

• Otherwise the weight between them remains the same.

• However, when a bipolar activation scheme {-1, +1} is used, the weights can also decrease when the activations of the two neurons do not match.
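A minimal sketch of this update rule; the function name and learning rate are illustrative, not part of the original formulation:

```python
# Minimal sketch of Hebb's learning law for a single pair of neurons.
# With binary activations {0, 1} the weight only grows when both are on;
# with bipolar activations {-1, +1} it also decreases when they disagree.

def hebb_update(w, y1, y2, lr=1.0):
    """Return the updated weight between two neurons with activations y1, y2."""
    return w + lr * y1 * y2

# Binary case: the weight grows only when both neurons are on.
w = 0.0
for y1, y2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    w = hebb_update(w, y1, y2)
print(w)  # 1.0 -- only the (1, 1) pair changed the weight

# Bipolar case: mismatched activations decrease the weight.
w = 0.0
for y1, y2 in [(1, 1), (1, -1)]:
    w = hebb_update(w, y1, y2)
print(w)  # 0.0 -- the mismatch cancelled the earlier increase
```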

Applications

Classification:
– Image recognition
– Speech recognition
– Diagnostics
– Fraud detection
– …

Regression:
– Forecasting (prediction based on past history)
– …

Pattern association:
– Retrieve an image from a corrupted one
– …

Clustering:
– Client profiles
– Disease subtypes
– …


Biological Neurons

The human brain contains tens of billions of neurons. Each neuron is connected to thousands of other neurons. A neuron is made of:

– The soma: the body of the neuron
– Dendrites: filaments that provide input to the neuron
– The axon: sends an output signal
– Synapses: connections with other neurons; they release certain quantities of chemicals called neurotransmitters to other neurons


The biological neuron

The pulses generated by the neuron travel along the axon as an electrical wave.

Once these pulses reach the synapses at the end of the axon, chemical vesicles open up, releasing neurotransmitters that excite the next neuron.

Neural Network

Simple Neuron

(Figure: a single neuron with inputs X1, X2, …, Xn and one output.)

Neural Network Application

• Pattern recognition can be implemented using a NN.

• For example, the input figure can be a T or an H character; the network should identify the class (T or H) of each input.


Navigation of a car

Done by Pomerleau. The network takes inputs from a 34×36 video image and a 7×36 range finder. Output units represent “drive straight”, “turn left” or “turn right”. After about 40 training passes over 1200 road images, the car drove around the CMU campus at 5 km/h (using a small workstation on the car). This was almost twice the speed of any other non-NN algorithm at the time.


Automated driving at 70 mph on a public highway

Camera image: 30×32 pixels as inputs

30 outputs for steering

30×32 weights into each one of four hidden units

4 hidden units

NN

● An Artificial Neural Network is specified by:
− a neuron model: the information processing unit of the NN,
− an architecture: a set of neurons and links connecting neurons. Each link has a weight,
− a learning algorithm: used for training the NN by modifying the weights in order to model a particular learning task correctly on the training examples.

● The aim is to obtain a NN that is trained and generalizes well.

● It should behave correctly on new instances of the learning task.

Neuron Model

A neuron has more than one input: x1, x2, …, xm

Each input is associated with a weight: w1, w2, …, wm

The neuron has a bias b. The net input of the neuron is

n = w1 x1 + w2 x2 + … + wm xm + b = Σi wi xi + b

The Neuron Diagram

(Figure: input values x1, x2, …, xm are multiplied by weights w1, w2, …, wm and summed together with the bias b; the summing function produces the induced field v, which is passed through the activation function φ(·) to give the output y.)

Bias of a Neuron

● The bias b has the effect of applying a transformation to the weighted sum u:

v = u + b

● The bias is an external parameter of the neuron. It can be modeled by adding an extra input x0 = 1 with weight w0 = b.

● v is called the induced field of the neuron:

v = Σj=0..m wj xj, with w0 = b and x0 = 1

Neuron output

The neuron output is

y = f(n)

where f is called the transfer function.

Transfer Function

There are three common transfer functions:

– Hard limit transfer function

– Linear transfer function

– Sigmoid transfer function

Hard Limit Transfer Function

It returns 0 if the input is less than 0, and 1 if the input is greater than or equal to 0.

Linear Function

The linear transfer function gives an output equal to the input:

y = n
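The three transfer functions can be sketched in Python as below; the logistic form of the sigmoid is an assumption consistent with the derivative f(1 − f) quoted later in the backpropagation slides:

```python
import math

# The three common transfer functions named above. The sigmoid here is the
# logistic function 1 / (1 + e^(-n)) (an assumption consistent with the
# derivative f' = f(1 - f) used later in the backpropagation slides).

def hardlim(n):
    return 1 if n >= 0 else 0

def linear(n):
    return n

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def neuron_output(inputs, weights, bias, transfer):
    """Net input n = sum_i w_i x_i + b, output y = f(n)."""
    n = sum(w * x for w, x in zip(weights, inputs)) + bias
    return transfer(n)

# Example: a two-input neuron with illustrative weights and bias.
print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.1, hardlim))  # 1
print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.1, linear))   # 0.1
```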

Simple Neuron

(Figure: inputs X1, X2, …, Xn with weights W1, W2, …, Wn feeding a transfer function f that produces the output.)

• The Gaussian function is the probability density function of the normal distribution. It is sometimes also called the frequency curve.

Exercises

The input to a single-input neuron is 2.0, its weight is 2.3 and the bias is –3.

What is the output of the neuron if it has the transfer function:

– Hard limit

– Linear

– Sigmoid
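A quick numeric check of this exercise (a minimal sketch; the sigmoid is assumed to be the logistic function):

```python
import math

# Worked check: n = w * x + b = 2.3 * 2.0 + (-3.0) = 1.6
n = 2.3 * 2.0 + (-3.0)
print(1 if n >= 0 else 0)                 # hard limit -> 1
print(n)                                  # linear     -> 1.6
print(round(1 / (1 + math.exp(-n)), 4))   # sigmoid (logistic, assumed) -> ~0.832
```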

Exercises

The input to a single-input neuron is 2.0, its weight is 1.3 and the bias is 3.0. What possible transfer function gives the output:

– 1.6

– 1.0

– 0.9963

Exercises

Consider a single-input neuron with a bias. We want the output to be –1 for inputs less than 3 and +1 for inputs greater than or equal to 3. What kind of transfer function and what bias are required? Draw the network and state the weights.

Neural Network

(Figure: input layer, hidden layer 1, hidden layer 2, output layer.)

Network Layers

The common type of ANN consists of three layers of neurons: a layer of input neurons connected to a layer of hidden neurons, which is connected to a layer of output neurons.

Network Layers

The activity of the input units represents the information that is fed into the network.

The activity of each hidden unit is determined by the activities of the input units and the weights between the input and hidden units.

The activity of each output unit is determined by the activities of the hidden units and the weights between the hidden and output units.

Architecture of ANN

Feed-Forward networks

Allow the signals to travel one way only, from input to output.

Feed-Back Networks

The signals can travel in loops; the output of the network is connected back to its input.

Network Architectures

● Three different classes of network architectures

− single-layer feed-forward

− multi-layer feed-forward

− recurrent

● The architecture of a neural network is linked with the learning algorithm used to train it.

Learning Rule

The learning rule modifies the weights of the connections.

The learning process is divided into supervised and unsupervised learning.

Supervised Network

This means there exists an external teacher. The target is to minimize the error between the desired and computed outputs.

Unsupervised Network

Uses no external teacher and is based only upon local information.

Perceptron

It is a network of one neuron with a hard limit transfer function.

(Figure: inputs X1, X2, …, Xn with weights W1, W2, …, Wn feeding the hard-limit function f that produces the output.)

Perceptron: Neuron Model (a special form of single-layer feed-forward network)

− The perceptron, first proposed by Rosenblatt (1958), is a simple neuron that is used to classify its input into one of two categories.

− A perceptron uses a step function that returns +1 if the weighted sum of its inputs is ≥ 0 and −1 otherwise.

(Figure: inputs x1, x2, …, xn with weights w1, w2, …, wn and bias b feeding the step function φ(v), which produces the output y.)

φ(v) = +1 if v ≥ 0, and −1 if v < 0

Perceptron

The perceptron is first given a random weight vector.

The perceptron is then given chosen data pairs (input and desired output).

The perceptron learning rule changes the weights according to the error in the output.

Perceptron Learning Rule

W_new = W_old + (t − a) X

where W_new is the new weight, W_old is the old value of the weight, X is the input value, t is the desired output, and a is the actual output.

Learning Process for Perceptron

● Initially assign random weights to the inputs, between −0.5 and +0.5.

● Training data is presented to the perceptron and its output is observed.

● If the output is incorrect, the weights are adjusted using the following formula:

wi ← wi + (a · xi · e), where ‘e’ is the error produced and ‘a’ (−1 ≤ a ≤ 1) is the learning rate.

− ‘e’ is 0 if the output is correct; it is positive if the output is too low, and negative if the output is too high.

− Once the modification to the weights has taken place, the next piece of training data is used in the same way.

− Once all the training data have been applied, the process starts again, until all the weights are correct and all errors are zero.

− Each iteration of this process is known as an epoch.

Example: Perceptron to learn OR function

● Initially consider w1 = −0.2 and w2 = 0.4.
● For training data x1 = 0 and x2 = 0, the target output is 0.
● Compute y = Step(w1·x1 + w2·x2) = Step(0) = 0. The output is correct, so the weights are not changed.
● For training data x1 = 0 and x2 = 1, the target output is 1.
● Compute y = Step(w1·x1 + w2·x2) = Step(0.4) = 1. The output is correct, so the weights are not changed.
● For the next training data, x1 = 1 and x2 = 0, the target output is 1.
● Compute y = Step(w1·x1 + w2·x2) = Step(−0.2) = 0. The output is incorrect, hence the weights are to be changed.
● Assume a = 0.2 and error e = 1.
● wi = wi + (a · xi · e) gives w1 = 0 and w2 = 0.4.
● With these weights, test the remaining training data.
● Repeat the process until we get a stable result. (A minimal sketch of this loop follows below.)
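A minimal sketch of the training loop, assuming a bias-free perceptron and a step function that outputs 1 only for a strictly positive net input (so that Step(0) = 0, matching the worked steps above):

```python
# Perceptron learning on OR with the same starting weights (w1 = -0.2, w2 = 0.4)
# and learning rate a = 0.2 as in the example above.

def step(n):
    return 1 if n > 0 else 0   # assumption: Step(0) = 0, as in the worked example

training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR
w = [-0.2, 0.4]
a = 0.2

for epoch in range(20):                      # repeat until an error-free epoch
    errors = 0
    for (x1, x2), t in training_data:
        y = step(w[0] * x1 + w[1] * x2)
        e = t - y                            # error: 0, +1 (too low) or -1 (too high)
        if e != 0:
            w[0] += a * x1 * e               # w_i <- w_i + a * x_i * e
            w[1] += a * x2 * e
            errors += 1
    if errors == 0:
        break

print(w)   # [0.2, 0.4] -- weights that realize OR for this step function
```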

Example

Let
– X1 = [0 0] and t1 = 0
– X2 = [0 1] and t2 = 0
– X3 = [1 0] and t3 = 0
– X4 = [1 1] and t4 = 1

W = [2 2] and b = −3

AND Network

This example constructs a network for the AND operation. The network draws a line to separate the two classes; this is called classification. (A quick check of these weights follows below.)
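A small sketch checking that the weights W = [2, 2] and bias b = −3 above realize the AND targets with a hard-limit output:

```python
# Quick check of the example weights W = [2, 2] and bias b = -3 with a
# hard-limit (step) output unit.
def hardlim(n):
    return 1 if n >= 0 else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    n = 2 * x1 + 2 * x2 - 3
    print((x1, x2), hardlim(n))   # only (1, 1) gives output 1, matching t1..t4
```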

Perceptron Geometric View

The equation below describes a (hyper-)plane in the input space consisting of real-valued m-dimensional vectors. The plane splits the input space into two regions, each of them describing one class.

Σi=1..m wi xi + w0 = 0

(Figure: input space with classes C1 and C2 separated by the decision boundary w1 x1 + w2 x2 + w0 = 0; the decision region for C1 is w1 x1 + w2 x2 + w0 ≥ 0.)

Problems

Four one-dimensional data points belonging to two classes:

X = [1 −0.5 3 −2]

T = [1 −1 1 −1]

W = [−2.5 1.75]

The First Neural Networks

AND Function

(Figure: inputs X1 and X2, each with weight 1, feeding output Y with Threshold(Y) = 2.)

AND truth table:

X1 X2 Y
1  1  1
1  0  0
0  1  0
0  0  0

Simple Networks

(Figure: a simple network with threshold t = 0.0, weights W = 1.5 and W = 1, a constant input of −1, input x and output y.)

Exercises

Design a neural network to recognize the problem of

X1 = [2 2], t1 = 0; X2 = [1 −2], t2 = 1; X3 = [−2 2], t3 = 0; X4 = [−1 1], t4 = 1

Start with initial weights w=[0 0] and bias =0

Example

(Figure: an 8×8 grid of +1/−1 pixel values representing an example input pattern.)

Example

(Figure: a second 8×8 grid of +1/−1 pixel values, differing from the first pattern in a few positions.)

Perceptron: Limitations

The perceptron can only model linearly separable classes, like (those described by) the following Boolean functions:

AND, OR, COMPLEMENT

It cannot model XOR.

You can experiment with these functions in the Matlab practical lessons.

These two classes (true and false) cannot be separated using a line. Hence XOR is not linearly separable.

XOR truth table:

X1 X2 X1 XOR X2
0  0  0
0  1  1
1  0  1
1  1  0

(Figure: the four XOR points plotted in the (X1, X2) plane; the “true” and “false” points cannot be separated by a single line.)

Types of decision regions

A single node defines a half-plane decision region:

w0 + w1 x1 + w2 x2 ≥ 0 on one side, and w0 + w1 x1 + w2 x2 < 0 on the other.

(Figure: a network with a single node taking inputs 1, x1, x2 with weights w0, w1, w2; and a one-hidden-layer network whose hidden units define the lines L1, L2, L3, L4 and whose output unit, with bias −3.5, realizes the convex region bounded by them.)

Multi-layer Network

Let the network have 3 layers:
– Input layer
– Hidden layer
– Output layer

Each layer can have a different number of neurons. The famous example that needs a multi-layer network is the XOR function.

FFNN for XOR

● The ANN for XOR has two hidden nodes that realize this non-linear separation, using the sign (step) activation function.

● The arrows from the input nodes to the two hidden nodes indicate the directions of the weight vectors (1, −1) and (−1, 1).

● The output node is used to combine the outputs of the two hidden nodes.

(Figure: input nodes X1, X2; hidden layer H1, H2 with weights ±1 and thresholds −0.5; output node Y combining H1 and H2.)
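A small sketch of this XOR network; the hidden weight vectors (1, −1) and (−1, 1) are taken from the slide, while the −0.5 thresholds and the unit output weights are assumptions read off the figure:

```python
# Two-hidden-node XOR network with step activations. The hidden weight vectors
# (1, -1) and (-1, 1) come from the slide; the -0.5 thresholds (biases) and the
# output weights (1, 1) are assumptions consistent with the figure.

def step(n):
    return 1 if n >= 0 else 0

def xor_net(x1, x2):
    h1 = step(1 * x1 - 1 * x2 - 0.5)    # fires only for (1, 0)
    h2 = step(-1 * x1 + 1 * x2 - 0.5)   # fires only for (0, 1)
    return step(1 * h1 + 1 * h2 - 0.5)  # OR of the two hidden nodes

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), xor_net(x1, x2))    # 0, 1, 1, 0
```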

Learning rule

The perceptron learning rule cannot be applied to a multi-layer network.

We use the backpropagation algorithm for the learning process.


Feed-forward + Backpropagation

Feed-forward:
– input from the features is fed forward in the network, from the input layer towards the output layer.

Backpropagation:
– a method to assign the blame for errors to the weights;
– the error flows backwards from the output layer to the input layer (to adjust the weights in order to minimize the output error).

Backprop

The back-propagation training algorithm, illustrated:

Backprop adjusts the weights of the NN in order to minimize the network’s total mean squared error.

Forward step: network activation and error computation.
Backward step: error propagation.


Backpropagation

In a multilayer network…

Computing the error in the output layer is clear. Computing the error in the hidden layer is not, because we don’t know what its output should be.

Intuitively: a hidden node h is “responsible” for some fraction of the error in each of the output nodes to which it connects.

So the error values (δ) are divided according to the weights of the connections between the hidden node and the output nodes, and are propagated back to provide the error values (δ) for the hidden layer.


The delta rule

For a single layer of weights, gradient descent on the squared error E = ½ (t − y)² gives the delta rule for each weight wij:

Δwij = η (tj − yj) xi


The generalized delta rule

For multilayer networks, each weight is updated with the same form of rule:

Δwij = η δpj opi

where δpj = (tpj − opj) f′(netpj) for an output unit, and δpj = f′(netpj) Σk δpk wjk for a hidden unit.


Cont’d

In short: the weight changes Δwij and Δwjk have the same form at every layer; only the definition of δ differs between output and hidden units, as given above.

Bp Algorithm

The weight change rule is

wij(new) = wij(old) + η · error · f′ · inputi

where η is the learning factor (< 1), error is the error between the actual and the target value, and f′ is the derivative of the sigmoid function, f′ = f(1 − f).
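A minimal sketch of one backpropagation update for a tiny 2-2-1 network with logistic-sigmoid units (so f′ = f(1 − f), as stated above); the layer sizes, initial weights and learning rate are illustrative only:

```python
import math

# One backpropagation step for a 2-input, 2-hidden, 1-output network with
# logistic-sigmoid units. Initial weights, input, target and eta are illustrative.

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

eta = 0.5
x = [1.0, 0.0]
t = 1.0
w_hidden = [[0.1, -0.2], [0.4, 0.2]]   # w_hidden[j][i]: input i -> hidden j
b_hidden = [0.0, 0.0]
w_out = [0.3, -0.1]                    # hidden j -> output
b_out = 0.0

# Forward pass
h = [sigmoid(sum(w * xi for w, xi in zip(w_hidden[j], x)) + b_hidden[j]) for j in range(2)]
y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)) + b_out)

# Backward pass (generalized delta rule with f' = f(1 - f))
delta_out = (t - y) * y * (1 - y)                       # output-layer delta
delta_hid = [h[j] * (1 - h[j]) * delta_out * w_out[j]   # hidden-layer deltas
             for j in range(2)]

# Weight updates: delta_w = eta * delta * (input to that weight)
for j in range(2):
    w_out[j] += eta * delta_out * h[j]
    for i in range(2):
        w_hidden[j][i] += eta * delta_hid[j] * x[i]
b_out += eta * delta_out
b_hidden = [b + eta * d for b, d in zip(b_hidden, delta_hid)]

print(round(y, 4), [round(w, 4) for w in w_out])
```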

Delta Rule

Each observation contributes a variable amount to the output.

The scale of the contribution depends on the input.

Output errors can be blamed on the weights.

A least mean square (LMS) error function can be defined (ideally it should be zero):

E = ½ (t – y)²
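A small sketch of gradient descent on this error function for a single linear neuron (the LMS/delta rule); the data and learning rate are illustrative:

```python
# LMS / delta rule for a single linear neuron: with E = 1/2 * (t - y)^2 and
# y = w * x + b, gradient descent gives w <- w + eta * (t - y) * x and
# b <- b + eta * (t - y). Data and learning rate below are illustrative.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # pairs (x, t) sampled from t = 2x
w, b, eta = 0.0, 0.0, 0.1

for _ in range(200):
    for x, t in data:
        y = w * x + b
        e = t - y
        w += eta * e * x
        b += eta * e

print(round(w, 3), round(b, 3))   # approaches w ~ 2, b ~ 0
```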

Stopping criteria

● Total mean squared error change:
− Backprop is considered to have converged when the absolute rate of change in the average squared error per epoch is sufficiently small (typically in the range [0.01, 0.1]).

● Generalization-based criterion:
− After each epoch, the NN is tested for generalization.
− If the generalization performance is adequate, then stop.
− If this stopping criterion is used, then the part of the training set used for testing the network’s generalization is not used for updating the weights.

Example

For a network with one neuron in the input layer and one neuron in the hidden layer, the following values are given:

X = 1, w1 = 1, b1 = −2, w2 = 1, b2 = 1, η = 1 and t = 1

where X is the input value, w1 is the weight connecting input to hidden, w2 is the weight connecting hidden to output, b1 and b2 are the biases, and t is the training (target) value.

Exercises

Design a neural network to recognize the problem of

X1 = [2 2], t1 = 0; X2 = [1 −2], t2 = 1; X3 = [−2 2], t3 = 0; X4 = [−1 1], t4 = 1

Start with initial weights w=[0 0] and bias =0

Exercises

Perform one iteration of backpropagation on a network of two layers. The first layer has one neuron with weight 1 and bias −2; its transfer function is f(n) = n².

The second layer has one neuron with weight 1 and bias 1; its transfer function is f(n) = 1/n.

The input to the network is x = 1 and t = 1.

Neural Network

Construct a neural network to solve the following problem:

X1 X2 Output

1.0 1.0 1

9.4 6.4 -1

2.5 2.1 1

8.0 7.7 -1

0.5 2.2 1

7.9 8.4 -1

7.0 7.0 -1

2.8 0.8 1

1.2 3.0 1

7.8 6.1 -1

Initialize the weights 0.75 , 0.5, and –0.6

Neural Network

Construct a neural network to solve the XOR problem:

X1 X2 Output

1 1 0

0 0 0

1 0 1

0 1 1

Initialize the weights –7.0 , -7.0, -5.0 and –4.0

(Figure: a small two-layer network diagram annotated with example weights and biases.)

The transfer function is a linear function.

Consider a transfer function f(n) = n². Perform one iteration of backpropagation with a = 0.9 for a neural network with two neurons in the input layer and one neuron in the output layer. The input values are X = [1 −1] and t = 8. The weight values between the input and hidden layer are w11 = 1, w12 = −2, w21 = 0.2, and w22 = 0.1. The weights between the hidden and output layers are w1 = 2 and w2 = −2. The biases are b1 = −1 and b2 = 3.

(Figure: network diagram with inputs X1, X2, weights W11, W12, W21, W22 into the hidden neurons, and weights W1, W2 into the output neuron.)

Building Neural Networks

Define the problem in terms of neurons
– think in terms of layers

Represent information as neurons
– operationalize neurons
– select their data type
– locate data for testing and training

Define the network. Train the network. Test the network.

Data Representation

● Data representation depends on the problem.
● In general, ANNs work on continuous (real-valued) attributes. Therefore symbolic attributes are encoded into continuous ones.
● Attributes of different types may have different ranges of values, which affects the training process.
● Normalization may be used, like the following one, which scales each attribute to assume values between 0 and 1:

xi' = (xi − mini) / (maxi − mini)

for each value xi of the i-th attribute, where mini and maxi are the minimum and maximum values of that attribute over the training set.
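A minimal sketch of this min-max normalization; the attribute values are illustrative:

```python
# Min-max normalization: scale each attribute to [0, 1] using its minimum and
# maximum over the training set, as described above.

def min_max_normalize(column):
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]  # assumes hi > lo

ages = [18, 25, 40, 60]                 # illustrative attribute values
print(min_max_normalize(ages))          # [0.0, 0.1667, 0.5238, 1.0] (approx.)
```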


The local minima problem

Unlike LMS, the error function is not smooth with a single minimum. Local minima can occur, in which case a true gradient descent is not desirable. Momentum, incremental updates, and a large learning rate produce a “jiggly” path that can avoid local minima.


Some variations

True gradient descent assumes an infinitesimally small learning rate (η). If η is too small, then learning is very slow. If it is too large, then the system's learning may never converge.

Some of the possible solutions to this problem are:
– Add a momentum term to allow a large learning rate.
– Use a different activation function.
– Use a different error function.
– Use an adaptive learning rate.
– Use a good weight initialization procedure.
– Use a different minimization procedure.


Momentum

The most widely used trick is to remember the direction of earlier steps. The weight update becomes:

Δwij(n+1) = η δpj opi + α Δwij(n)

The momentum parameter α is chosen between 0 and 1, typically 0.9. This allows one to use higher learning rates. The momentum term filters out high-frequency oscillations on the error surface.

What would the learning rate be in a deep valley?
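A small sketch of the momentum update; the gradient values stand in for δ·o and are purely illustrative:

```python
# Momentum-augmented weight update: dw(n+1) = eta * g + alpha * dw(n),
# where g stands in for delta_j * o_i and alpha ~ 0.9 as noted above.

eta, alpha = 0.1, 0.9
w, prev_dw = 0.0, 0.0
gradients = [1.0, 0.8, -0.3, 0.5]   # illustrative placeholder gradients

for g in gradients:
    dw = eta * g + alpha * prev_dw  # momentum remembers the previous step
    w += dw
    prev_dw = dw

print(round(w, 4))
```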


Network Parameters

● How are the weights initialized?
● How is the learning rate chosen?
● How many hidden layers and how many neurons?
● How many examples in the training set?

(Genetic algorithms are one approach that has been used to search over such parameters.)

Initialization of weights

● In general, initial weights are randomly chosen, with typical values between -1.0 and 1.0 or -0.5 and 0.5.

● If some inputs are much larger than others, random initialization may bias the network to give much more importance to larger inputs.

● In such a case, weights can be initialized as follows:

wij ∝ 1 / (2 N |xi|), i = 1, …, N — for weights from the input to the first layer

wjk ∝ 1 / (2 N Σi wij xi), i = 1, …, N — for weights from the first to the second layer

Choice of learning rate

● The right value of η depends on the application.
● Values between 0.1 and 0.9 have been used in many applications.
● Other heuristics adapt η during the training, as described in the previous slides.

Recurrent Network

● An FFNN is acyclic: data passes from the input to the output nodes and not vice versa.
− Once the FFNN is trained, its state is fixed and does not alter as new data is presented to it. It does not have memory.

● A recurrent network can have connections that go backward from output to input nodes, and can model dynamic systems.
− In this way, a recurrent network's internal state can be altered as sets of input data are presented. It can be said to have memory.
− It is useful in solving problems where the solution depends not just on the current inputs but on all previous inputs.

● Applications:
− predicting stock market prices,
− weather forecasting.

● Recurrent network with hidden neurons: a unit-delay operator d is used to model a dynamic system.

(Figure: recurrent network architecture with input, hidden and output layers; the d blocks denote unit delays on the feedback connections.)

Learning and Training

● During the learning phase,
− a recurrent network feeds its inputs through the network, including feeding data back from outputs to inputs;
− the process is repeated until the values of the outputs do not change.

● This state is called equilibrium or stability.

● Recurrent networks can be trained using the backpropagation algorithm.

● In this method, at each step, the activation of the output is compared with the desired activation and errors are propagated backward through the network.

● Once this training process is completed, the network becomes capable of performing a sequence of actions.

Hopfield Network

● A Hopfield network is a kind of recurrent network, as output values are fed back to the inputs in an undirected way.
− It consists of a set of N connected neurons with weights that are symmetric; no unit is connected to itself.
− There are no special input and output neurons.
− The activation of a neuron is a binary value decided by the sign of the weighted sum of the connections to it.
− A threshold value for each neuron determines if it is a firing neuron.
− A firing neuron is one that activates all neurons that are connected to it with a positive weight.
− The input is simultaneously applied to all neurons, which then output to each other.
− This process continues until a stable state is reached.

Activation Algorithm

An active unit is represented by 1 and an inactive one by 0.

● Repeat
− Choose any unit randomly. The chosen unit may be active or inactive.
− For the chosen unit, compute the sum of the weights on the connections to its active neighbours only, if any.
− If the sum > 0 (the threshold is assumed to be 0), then the chosen unit becomes active; otherwise it becomes inactive.
− If the chosen unit has no active neighbours, then ignore it; its status remains the same.
● Until the network reaches a stable state.

(A minimal sketch of this update loop follows below.)
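A minimal sketch of the update loop; the 3-unit weight matrix is illustrative, not the network from the following example:

```python
import random

# Asynchronous Hopfield update: repeatedly pick a random unit and set it active
# iff the summed weights to its currently active neighbours exceed 0.
# The weight matrix below is illustrative (symmetric, zero diagonal).

W = [[0, -2, 3],
     [-2, 0, 1],
     [3, 1, 0]]
state = [1, 0, 1]        # 1 = active, 0 = inactive

random.seed(0)
for _ in range(30):                                      # run until (practically) stable
    i = random.randrange(len(state))
    active = [j for j in range(len(state)) if j != i and state[j] == 1]
    if active:                                           # units with no active neighbours are ignored
        s = sum(W[i][j] for j in active)
        state[i] = 1 if s > 0 else 0                     # threshold assumed to be 0

print(state)
```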

(Figure: a table tracing the current state, the selected unit, and the corresponding new state. A selected unit whose active neighbours contribute a weight sum of 3 − 2 = 1 > 0 becomes activated; a selected unit whose active neighbours contribute a sum of −2 < 0 becomes deactivated.)

Stable Networks

For the example network, the states X = [0 1 1], X = [1 1 0] and X = [0 0 0] are stable.

Weight Computation Method

● Weights are determined using training examples.

● Here
− W is the weight matrix,
− Xi is an input example represented by a vector of N values from the set {−1, 1}; N is the number of units in the network, and 1 and −1 represent active and inactive units respectively,
− (Xi)T is the transpose of the input Xi,
− M denotes the number of training input vectors,
− I is an N × N identity matrix.

W = Σi Xi · (Xi)T − M·I, for 1 ≤ i ≤ M

Example

● Let us now consider a Hopfield network with four units and three training input vectors that are to be learned by the network.

● Consider three input examples, namely, X1, X2, and X3 defined as follows:

X1 = [1 −1 −1 1]T, X2 = [1 1 −1 1]T, X3 = [−1 1 1 −1]T

W = X1·(X1)T + X2·(X2)T + X3·(X3)T − 3·I

    |  3 −1 −3  3 |   | 3 0 0 0 |   |  0 −1 −3  3 |
W = | −1  3  1 −1 | − | 0 3 0 0 | = | −1  0  1 −1 |
    | −3  1  3 −3 |   | 0 0 3 0 |   | −3  1  0 −3 |
    |  3 −1 −3  3 |   | 0 0 0 3 |   |  3 −1 −3  0 |

(Figure: verification that X1 = [1 −1 −1 1] and X3 = [−1 1 1 −1] are stable positions of the network.)

Contd..

● The networks generated using these weights and input vectors are stable, except for X2.

● X2 stabilizes to X1 (which is at Hamming distance 1).

● Finally, with the obtained weights and the stable states (X1 and X3), we can stabilize any new (partial) pattern to one of them. (A small sketch verifying this follows below.)
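A small sketch that reproduces the weight computation of this example and checks which training vectors are stable under a synchronous sign update (treating a zero weighted sum as +1 is an assumption of this sketch):

```python
import numpy as np

# Hebbian weight computation W = sum_i Xi Xi^T - M*I for the three training
# vectors of the example, plus a stability check using the sign of the
# weighted sums.

X = np.array([[1, -1, -1, 1],
              [1,  1, -1, 1],
              [-1, 1,  1, -1]])            # X1, X2, X3 as rows

M, N = X.shape
W = sum(np.outer(x, x) for x in X) - M * np.eye(N, dtype=int)
print(W)
# [[ 0 -1 -3  3]
#  [-1  0  1 -1]
#  [-3  1  0 -3]
#  [ 3 -1 -3  0]]

def update(state):
    return np.where(W @ state >= 0, 1, -1)  # sign of weighted sum (ties -> +1, an assumption)

for x in X:
    print(x, np.array_equal(update(x), x))  # X1 and X3 are stable; X2 is not (it maps to X1)
```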

Radial-Basis Function Networks

● A function is said to be a radial basis function (RBF) if its output depends on the distance of the input from a given stored vector.
− The RBF neural network has an input layer, a hidden layer and an output layer.
− In such RBF networks, the hidden layer uses neurons with RBFs as activation functions.
− The outputs of all these hidden neurons are combined linearly at the output node.

● These networks have a wide variety of applications, such as
− function approximation,
− time series prediction,
− control and regression,
− pattern classification tasks (performing complex non-linear mappings).

RBF Architecture

● One hidden layer with RBF activation functions φ1, …, φm.

● Output layer with a linear activation function:

y = w11 φ1(||x − t1||) + … + wm1 φm(||x − tm||)

where ||x − t|| is the distance of x = (x1, …, xm) from the center t.

(Figure: inputs x1, …, xm feed the RBF hidden units φ1, …, φm, whose outputs are combined with weights w11, …, wm1 into the output y.)

Cont...

● Here we require weights wi from the hidden layer to the output layer only.

● The weights wi can be determined with the help of any of the standard iterative methods described earlier for neural networks.

● However, since the approximating function given below is linear with respect to wi, it can be calculated directly using the matrix methods of linear least squares, without having to determine wi iteratively.

● It should be noted that the approximating function f(X) is differentiable with respect to wi.

Y = f(X) = Σi=1..N wi φ(||X − ti||)
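A minimal sketch of an RBF fit in this spirit: Gaussian basis functions at fixed centres, with the output weights obtained by linear least squares; the centres, width and data are illustrative:

```python
import numpy as np

# RBF network fit: Gaussian hidden units phi(||x - t_i||) at fixed centres,
# output weights w_i obtained by linear least squares (no iterative training).
# Centres, width and the target function are illustrative.

def gaussian_rbf(r, width=1.0):
    return np.exp(-(r / width) ** 2)

X = np.linspace(-3, 3, 25).reshape(-1, 1)    # 1-D inputs
y = np.sin(X).ravel()                        # target function to approximate
centres = np.linspace(-3, 3, 7).reshape(-1, 1)

# Hidden-layer design matrix: Phi[n, i] = phi(||x_n - t_i||)
Phi = gaussian_rbf(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

# Output weights by linear least squares
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
print(round(float(np.max(np.abs(y_hat - y))), 3))   # small approximation error
```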

Comparison: RBF NN vs. FF NN

• Both are non-linear layered feed-forward networks.

• RBF NN: the hidden layer is non-linear and the output layer is linear. FF NN: hidden and output layers are usually non-linear.

• RBF NN: one single hidden layer. FF NN: may have more hidden layers.

• RBF NN: the neuron model of the hidden neurons is different from that of the output nodes. FF NN: hidden and output neurons share a common neuron model.

• RBF NN: the activation function of each hidden neuron computes the Euclidean distance between the input vector and the center of that unit. FF NN: the activation function of each hidden neuron computes the inner product of the input vector and the synaptic weight vector of that neuron.