Artificial Neural Networks


Transcript of Artificial Neural Networks

Page 1: Artificial Neural Networks

Artificial Neural Networks

Yuki Osada and Andrew Cannon

Page 2: Artificial Neural Networks

Metaphor

Humans are an intelligent species.
◦ One feature is the ability to learn.
The ability to learn comes down to the brain.
The brain learns from experience.
◦ Research shows that the brain stores information as patterns.
◦ This information is stored in neurons.

Page 3: Artificial Neural Networks

Metaphor

Neurons do not regenerate, suggesting that these cells are what provide us with the abilities to:
◦ remember,
◦ think, and
◦ apply previous experiences.
Humans generally have between 80 and 120 billion neurons.
◦ Each neuron typically connects with 1,000 to 10,000 other neurons.
◦ The human brain is a huge network of neurons - a neural network.

Page 4: Artificial Neural Networks

Metaphor

The power of the human mind comes from the sheer number of these neurons and their connections.
◦ The individual neurons act as a function of their incoming signals.
◦ Although neurons themselves are complicated, they don't exhibit complex behaviour on their own.
This is the key feature that makes neural networks a viable computational intelligence approach.

Page 5: Artificial Neural Networks

Metaphor

Artificial neural networks are a computational model inspired by the neural structure of the human brain, a biological neural network.
◦ They attempt to replicate only the basic elements of this complicated, versatile, and powerful organ.
◦ An artificial neural network consists of an interconnected group of artificial neurons.
◦ It learns by changing its structure based on information that flows through the network.
They are used to model complex relationships between inputs and outputs, or to find patterns in data.

Page 6: Artificial Neural Networks

Artificial Neurons

Neurons are the fundamental processing elements of a neural network.

[Figure: a biological neuron. Jarosz, Q 2009, "Neuron Hand-tuned.svg", Wikipedia, Neuron. Retrieved 10 September 2012, from https://en.wikipedia.org/wiki/Neuron.]

Page 7: Artificial Neural Networks

Artificial Neurons

A biological neuron basically:
1. receives inputs from other sources (dendrites),
2. merges them in some way (soma),
3. performs an operation on the result (axon), then
4. outputs the result - possibly to other neurons (axon terminals).
Artificial neurons follow this basic approach.

Page 8: Artificial Neural Networks

Artificial Neurons

The basic structure of an artificial neuron consists of:
1. input connections (dendrites) with weights,
2. a summation function or input function (soma),
3. a transfer function or activation function (axon), and
4. output connections (axon terminals).
It has no learning process as such.

Page 9: Artificial Neural Networks

Artificial Neurons

The function:
1. Input values enter the neuron via the connections.
2. The inputs are multiplied by the weighting factor of their respective connection. There is often a separate bias connection, which can act as a threshold for the neuron to produce some useful output.
3. The modified inputs are fed into a summation function, which usually just sums the products.
4. The result of the summation function is sent to a transfer function - usually a step function or a sigmoid function.
5. The neuron outputs the result of the transfer function to other neurons, or to an outside connection.
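As a concrete illustration of these five steps, here is a minimal artificial neuron in Python. The function names, the example weights, and the default choice of a step transfer function are ours, not from the slides:

```python
import math

def step(s):
    # Step transfer function: +1 at or above the threshold, -1 below.
    return 1 if s >= 0 else -1

def sigmoid(s):
    # Sigmoid transfer function: a smooth alternative to the step.
    return 1.0 / (1.0 + math.exp(-s))

def neuron_output(inputs, weights, bias, transfer=step):
    # Steps 1-3: weight each incoming value, then sum the products
    # together with the bias term.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Steps 4-5: pass the sum through the transfer function and output it.
    return transfer(s)

# Example: a two-input neuron with hand-picked weights.
print(neuron_output([0.5, -1.0], weights=[0.8, 0.2], bias=0.1))  # +1
```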

Page 10: Artificial Neural Networks

Neural Networks

How can the neurons be clustered together?
◦ The structure used in these networks is a layering approach.
◦ These layers are connected to each other in a linear fashion.
It's possible that a neuron may have an output connection to itself.
How the layers may be connected is generally problem-dependent.

Page 11: Artificial Neural Networks

Neural Networks

Single-layer networks are the simplest neural networks.
◦ Multiple input sources are fed into a set of neurons, which produce the outputs of the neural network.
◦ These are called perceptrons.
◦ These perceptrons can only represent linearly-separable functions.
We can make the system represent more complex functions by adding more layers.

Page 12: Artificial Neural Networks

Neural Networks

Multi-layered neural networks are more powerful than single-layered neural networks.
◦ The cost is that these hidden layers increase the complexity and training time of these networks.
Networks with a single hidden layer can approximate any continuous function with arbitrary accuracy.
Networks with two hidden layers can represent discontinuous functions.

Page 13: Artificial Neural Networks

Neural Networks

[Figure: a multi-layered artificial neural network. JokerXtreme 2011, "Artificial_neural_network.svg", Wikipedia, Artificial neural network. Retrieved 10 September 2012, from https://en.wikipedia.org/wiki/Artificial_neural_networks.]

Page 14: Artificial Neural Networks

Neural Networks

There are two main types of multi-layered neural networks:
1. Feedforward.
◦ A simple acyclic structure: information always moves in one direction; it never goes backwards.
◦ Stateless encoding; no information is accumulated.

Page 15: Artificial Neural Networks

Neural Networks

There are two main types of multi-layered neural networks:
2. Recurrent.
◦ A structure with cyclic feedback loops: information may be sent to any layer; it can process arbitrary sequences of input, and produce more complex results.
◦ Stateful encoding; introduces short-term memory into the system, and allows dynamic temporal behaviour.

Page 16: Artificial Neural Networks

Training

Artificial neural networks are used to model complex systems that are not understood by the programmer.
◦ We usually don't know how to construct a perfect neural network for a problem.
◦ We must train them to produce better results.
◦ We can only train aspects of a neural network.

Page 17: Artificial Neural Networks

Training

Training is the adjusting of parameters with the aim of minimising a measure of error, the cost function.
What parameters in the artificial neural network do we want to adjust?
◦ The weighting factors.
The link weights influence the function represented by the neural network.
When we have no idea what the link weights should be, they might be randomly generated at initialisation.
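The slides don't name a specific cost function; the mean squared error between expected and actual outputs is a common choice, sketched here purely for illustration:

```python
def mean_squared_error(expected, actual):
    # Cost: the average squared difference between expected and actual outputs.
    return sum((e - a) ** 2 for e, a in zip(expected, actual)) / len(expected)

# Example: two of three outputs are wrong by 2 each.
print(mean_squared_error([1, -1, 1], [1, 1, -1]))  # (0 + 4 + 4) / 3 ≈ 2.67
```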

Page 18: Artificial Neural Networks

Training

There are two main approaches to training:
◦ Supervised: the user provides sample input and output data. The network adjusts its weights to match the expected results.
◦ Unsupervised: only input data is supplied, and the neural network must find patterns on its own.

Page 19: Artificial Neural Networks

Sources

McNeil, G & Anderson, D 1992, 'Artificial Neural Networks Technology', The Data & Analysis Center for Software Technical Report.

Smith, LS 2008, An Introduction to Neural Networks, Centre for Cognitive and Computational Neuroscience, Department of Computing and Mathematics, University of Stirling. Retrieved 10 September 2012, from http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html.

Jarosz, Q 2009, "Neuron Hand-tuned.svg", Wikipedia, Neuron. Retrieved 10 September 2012, from https://en.wikipedia.org/wiki/Neuron.

JokerXtreme 2011, "Artificial_neural_network.svg", Wikipedia, Artificial neural network. Retrieved 10 September 2012, from https://en.wikipedia.org/wiki/Artificial_neural_networks.

Page 20: Artificial Neural Networks

Applications

Language processing
Character recognition
Pattern recognition
Signal processing
Prediction

Page 21: Artificial Neural Networks

Learning

Supervised learning
◦ Perceptron
◦ Feedforward, back-propagation
Unsupervised learning
◦ Self-organising maps

Page 22: Artificial Neural Networks

Perceptron

The simplest type of neural network.
Introduced by Rosenblatt (1958).

[Figure: a perceptron - inputs 1 through n, each weighted, feeding a single output. Adapted from Haykin, SS 2009 (p. 48).]


Page 25: Artificial Neural Networks

Perceptron

Input is a real vector i = (i1, i2, …, in).
Calculate a weighted scalar s from the inputs: s = Σj wj ij + b.
Calculate the output: r = sgn(s).
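In code, the whole computation is a few lines (a sketch; following the worked example later in the deck, sgn(0) is taken as +1):

```python
def perceptron_output(inputs, weights, bias):
    # Weighted scalar: s = sum_j w_j * i_j + b.
    s = sum(w * i for w, i in zip(weights, inputs)) + bias
    # Output: r = sgn(s), with sgn(0) taken as +1 here.
    return 1 if s >= 0 else -1

print(perceptron_output([-1, -1], weights=[0.5, 0.5], bias=-0.5))  # -1
```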

Page 26: Artificial Neural Networks

Perceptron

Categorises input vectors as being in one of two categories.
A single perceptron can be trained to separate inputs into two linearly-separable categories.

[Figure: two clusters of points, Category 1 and Category 2, separated by a line.]

Page 27: Artificial Neural Networks

Training a Perceptron

Need a training set of input/output pairs.
Initialise the weights and bias (randomly or to zero).
Calculate the output.
Adjust the weights and bias in proportion to the difference between actual and expected values.

Page 28: Artificial Neural Networks

Training a Perceptron

Repeat until the termination criterion is reached.
Rosenblatt (1962) showed that the weights and bias will converge to fixed values after a finite number of iterations (if the categories are linearly separable).
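A sketch of this procedure in Python. The stopping rule (a fixed epoch cap, exiting early once every sample is classified correctly) and the convention of learning the bias as a leading weight are our choices; the learning rate matches the example on the following slides:

```python
def sgn(s):
    # Following the worked example, sgn(0) is taken as +1.
    return 1 if s >= 0 else -1

def train_perceptron(samples, rate=0.25, max_epochs=100):
    # Each sample is (input vector, expected output); the input vector
    # carries a leading 1 so that the bias is learned as weights[0].
    weights = [0.0] * len(samples[0][0])      # initialise to zero
    for _ in range(max_epochs):               # termination criterion
        converged = True
        for x, expected in samples:
            actual = sgn(sum(w * xi for w, xi in zip(weights, x)))
            error = expected - actual          # difference drives the update
            if error != 0:
                converged = False
                weights = [w + rate * error * xi
                           for w, xi in zip(weights, x)]
        if converged:                          # every sample already correct
            return weights
    return weights
```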

Page 29: Artificial Neural Networks

Perceptron Example

We want to classify points in R2 into those for which y ≥ x + 1 and those for which y < x + 1.

[Figure: the x-y plane divided by the line y = x + 1.]

Page 30: Artificial Neural Networks

Perceptron Example

Initialise the bias/weight vector to (0,0,0).
Input is the point (-1,-1) (below the line) – expressed as (1,-1,-1).
s = 0·1 + 0·(-1) + 0·(-1) = 0
Actual output is sgn(0) = +1.
Expected output is -1 (below the line).


Page 35: Artificial Neural Networks

Perceptron Example

Error (expected - actual) is -2.
Constant learning rate of 0.25.
So the new weight vector is
(0,0,0) + 0.25·(-2)·(1,-1,-1) = (-0.5,0.5,0.5)

Page 36: Artificial Neural Networks

Perceptron Example

New bias/weight vector is (-0.5,0.5,0.5).
Input is the point (0,2) (above the line) – expressed as (1,0,2).
s = -0.5·1 + 0.5·0 + 0.5·2 = 0.5
Actual output is sgn(0.5) = +1.
Expected output is +1 (above the line) – no change to the weights.
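Feeding these two inputs through the training step sketched earlier reproduces the hand computation (assuming, as on the earlier slide, that sgn(0) = +1):

```python
w = [0.0, 0.0, 0.0]                      # (bias, weight_x, weight_y)
for x, expected in [([1, -1, -1], -1),   # (-1,-1): below the line
                    ([1, 0, 2], +1)]:    # (0,2): above the line
    s = sum(wj * xj for wj, xj in zip(w, x))
    actual = 1 if s >= 0 else -1         # sgn(0) taken as +1
    w = [wj + 0.25 * (expected - actual) * xj for wj, xj in zip(w, x)]
    print(w)
# [-0.5, 0.5, 0.5]   (misclassified: weights adjusted)
# [-0.5, 0.5, 0.5]   (classified correctly: no change)
```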

Page 37: Artificial Neural Networks

Perceptron Example

Eventually, this will converge to the correct answer of (-a,-a,a) for some a > 0.
Generally, we won't know the correct answer!

Page 38: Artificial Neural Networks

Feedforward, Back-Propagation

A feedforward network has no connections looping backwards.
The back-propagation algorithm allows for learning.

Page 39: Artificial Neural Networks

Back-Propagation

Operates similarly to perceptron learning.

[Figure: a multi-layer network, with inputs on the left feeding through to an output on the right.]

Page 40: Artificial Neural Networks

Back-Propagation

Inputs are fed forward through the network.

[Figure: signals advancing forward through the network, layer by layer.]


Page 44: Artificial Neural Networks

Back-Propagation

Inputs are fed forward through the network, and the final output is compared to the expected value.

[Figure: the network's output being compared to the expected output.]

Page 45: Artificial Neural Networks

Back-Propagation

Errors are propagated back.

[Figure: errors flowing backwards through the network, layer by layer.]


Page 49: Artificial Neural Networks

Back-Propagation

Adjust weights based on errors.

[Figure: weights throughout the network being adjusted.]

Page 50: Artificial Neural Networks

Back-Propagation

Weights might be updated after each pass, or after multiple passes.
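As a concrete illustration of the forward pass, the error propagation, and the weight adjustment, here is a minimal back-propagation update in Python, with weights updated after each pass. The single hidden layer, sigmoid activations, single output neuron, squared-error cost, and omission of bias terms are all our simplifying assumptions; the slides don't fix them:

```python
import math

def sigmoid(x):
    # Sigmoid transfer function.
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, target, w_hidden, w_output, rate=0.5):
    # Feed forward: inputs -> hidden layer -> single output.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    out = sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

    # Compare to expected, then propagate errors back through the layers.
    delta_out = (target - out) * out * (1 - out)
    delta_hidden = [w_output[j] * delta_out * hidden[j] * (1 - hidden[j])
                    for j in range(len(hidden))]

    # Adjust weights in proportion to the errors.
    w_output = [w + rate * delta_out * hidden[j]
                for j, w in enumerate(w_output)]
    w_hidden = [[w + rate * delta_hidden[j] * xi for w, xi in zip(row, x)]
                for j, row in enumerate(w_hidden)]
    return w_hidden, w_output, out

# Example: 2 inputs, 2 hidden neurons, training towards a target of 1.0.
w_h, w_o = [[0.1, -0.2], [0.4, 0.3]], [0.2, -0.1]
for _ in range(1000):
    w_h, w_o, out = backprop_step([0.5, 1.0], 1.0, w_h, w_o)
print(out)  # approaches the target
```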

Page 51: Artificial Neural Networks

Supervised Learning Issues

Need a comprehensive training set.
The network cannot be too large for the training set.
No guarantees the network will learn.
Network design and learning strategies impact the speed and effectiveness of learning.

Page 52: Artificial Neural Networks

Unsupervised Learning

More powerful (if you can make it work).
No external notion of correct/incorrect output – the network uses internal rules to adjust its output in response to inputs.

Page 53: Artificial Neural Networks

Self Organising Maps

One or more inputs connected to a set of outputs.

Page 54: Artificial Neural Networks

Self Organising Maps

Output neurons form a lattice in (usually) two-dimensional space.

Page 55: Artificial Neural Networks

Self Organising Maps

There is a measurable distance between output neurons.

[Figure: two output neurons on the lattice, a distance d apart.]

Page 56: Artificial Neural Networks

Self Organising Maps

Based on the network weights, for each input, each output neuron is excited to a different degree.

Page 57: Artificial Neural Networks

Self Organising Maps

Select the best matching unit (BMU).

Page 58: Artificial Neural Networks

Self Organising Maps

Identify a neighbourhood around the BMU.

Page 59: Artificial Neural Networks

Self Organising Maps

Based on their levels of excitation, adjust the weights of each output neuron in this neighbourhood to more closely match the input, as sketched below.
The hope is that the output neurons diverge into stable (and distinct) categories, allowing the input data to be classified.
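A sketch of one SOM training step. The Euclidean measures of excitation and lattice distance, and the Gaussian fall-off of influence within the neighbourhood, are common choices (as in the AI-Junkie tutorial cited below), not something the slides specify:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two vectors.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def som_step(x, weights, positions, rate=0.1, radius=1.0):
    # Excitation: the best matching unit (BMU) is the output neuron
    # whose weight vector most closely matches the input.
    bmu = min(range(len(weights)), key=lambda k: euclidean(weights[k], x))

    # Adjust every neuron in the BMU's neighbourhood towards the input,
    # with the adjustment fading with lattice distance from the BMU.
    for k in range(len(weights)):
        d = euclidean(positions[k], positions[bmu])
        if d <= radius:
            influence = math.exp(-(d * d) / (2 * radius * radius))
            weights[k] = [w + rate * influence * (xi - w)
                          for w, xi in zip(weights[k], x)]
    return bmu

# Example: four output neurons on a 2x2 lattice, three-dimensional inputs.
positions = [(0, 0), (0, 1), (1, 0), (1, 1)]
weights = [[0.2, 0.8, 0.1], [0.9, 0.1, 0.3],
           [0.5, 0.5, 0.5], [0.1, 0.2, 0.9]]
som_step([1.0, 0.0, 0.0], weights, positions)
```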

Page 60: Artificial Neural Networks

Self Organising Maps

[Figure: a trained self-organising map. Adapted from AI-Junkie n.d., Kohonen's Self Organizing Feature Maps, <http://www.ai-junkie.com/ann/som/som1.html>, retrieved 11 September 2012.]

Page 61: Artificial Neural Networks

Sources

AI-Junkie n.d., Kohonen's Self Organizing Feature Maps. Available from: <http://www.ai-junkie.com/ann/som/som1.html>. [11 September 2012].

Bose, NK & Liang, P 1996, Neural network fundamentals with graphs, algorithms, and applications, McGraw-Hill, New York. (Chapters 4, 5 and 9)

Fausett, LV 1994, Fundamentals of neural networks: architectures, algorithms, and applications, Prentice-Hall, Englewood Cliffs, NJ. (Chapter 6)

Haykin, SS 2009, Neural networks and learning machines, 3rd edn, Prentice Hall, New York. (Chapters 1, 4 and 9)

Kartalopoulos, SV 1996, Understanding neural networks and fuzzy logic: basic concepts and applications, IEEE Press, New York. (Sections 3.2, 3.5 and 3.14)

McNeil, G & Anderson, D 1992, 'Artificial Neural Networks Technology', The Data & Analysis Center for Software Technical Report.

Page 62: Artificial Neural Networks


Questions?