Introduction to Cognitive Science, Lecture 21: Self-Organizing Maps (November 24, 2009)


Page 1: Self-Organizing Maps (Kohonen Maps)

In the BPN, we used supervised learning.

This is not biologically plausible: in a biological system, there is no external “teacher” who manipulates the network’s weights from outside the network.

Biologically more adequate: unsupervised learning.

We will study Self-Organizing Maps (SOMs) as examples of unsupervised learning (Kohonen, 1980).

Page 2: Self-Organizing Maps (Kohonen Maps)

In the human cortex, multi-dimensional sensory input spaces (e.g., visual input, tactile input) are represented by two-dimensional maps.

The projection from sensory inputs onto such maps is topology conserving.

This means that neighboring areas in these maps represent neighboring areas in the sensory input space.

For example, neighboring areas in the sensory cortex are responsible for the arm and hand regions.

Page 3: Self-Organizing Maps (Kohonen Maps)

Such a topology-conserving mapping can be achieved by SOMs (a code sketch of this setup follows the list):

• Two layers: input layer and output (map) layer
• Input and output layers are completely connected.
• Output neurons are interconnected within a defined neighborhood.
• A topology (neighborhood relation) is defined on the output layer.
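As a minimal sketch in NumPy, assuming illustrative sizes and variable names that are not from the lecture:

```python
import numpy as np

# Illustrative sizes (assumptions, not from the lecture):
n = 3                 # dimension of the input layer
rows, cols = 10, 10   # two-dimensional output (map) layer
m = rows * cols       # number of output neurons

# "Completely connected": every output neuron i carries its own
# n-dimensional weight vector w_i (one row per neuron).
weights = np.random.rand(m, n)

# The topology (neighborhood relation) lives in the grid positions:
# neuron i sits at coordinates positions[i] in the map layer.
positions = np.array([(r, c) for r in range(rows) for c in range(cols)],
                     dtype=float)
```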

Page 4: Self-Organizing Maps (Kohonen Maps)

BPN structure:

[Figure: input layer I1, I2, ..., In receiving input vector x; output layer O1, O2, O3, ..., Om producing output vector o]

Page 5: Self-Organizing Maps (Kohonen Maps)

Common output-layer structures:

• One-dimensional (completely interconnected for determining the “winner” unit)
• Two-dimensional (connections omitted; only neighborhood relations shown [green])

[Figure: one- and two-dimensional output layers, with the neighborhood of neuron i highlighted in each]

Page 6: Self-Organizing Maps (Kohonen Maps)

A neighborhood function φ(i, k) indicates how closely neurons i and k in the output layer are connected to each other.

Usually, a Gaussian function of the distance between the two neurons in the layer is used:

φ(i, k) = exp( –||p_i – p_k||² / (2σ²) ),

where p_i and p_k are the positions of neurons i and k in the map layer, and σ controls the width of the neighborhood.
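A minimal sketch of this function, assuming NumPy and grid coordinates like the positions array above:

```python
import numpy as np

def phi(pos_i, pos_k, sigma):
    """Gaussian neighborhood: close to 1 for neurons that are near each
    other in the map layer, near 0 for distant ones; sigma sets the width."""
    d2 = np.sum((pos_i - pos_k) ** 2)   # squared distance ||p_i - p_k||^2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Example: neurons two grid cells apart, with sigma = 1.5
print(phi(np.array([0.0, 0.0]), np.array([0.0, 2.0]), 1.5))  # ~0.41
```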

Page 7: Unsupervised Learning in SOMs

For an n-dimensional input space and m output neurons (a runnable sketch follows these steps):

(1) Choose a random weight vector wi for each neuron i, i = 1, ..., m.

(2) Choose a random input x.

(3) Determine the winner neuron k: ||wk – x|| = min_i ||wi – x|| (Euclidean distance).

(4) Update the weight vectors of all neurons i in the neighborhood of neuron k: wi := wi + η·φ(i, k)·(x – wi), so that wi is shifted towards x.

(5) If the convergence criterion is met, STOP. Otherwise, narrow the neighborhood function (reduce σ), reduce the learning rate η, and go to (2).
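A runnable sketch of steps (1)–(5), assuming NumPy; the chain length, the learning-rate and neighborhood schedules, and the triangular input distribution (chosen to match Example I below) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Setup (illustrative): 2-D inputs mapped onto a 1-D chain of m neurons.
n, m = 2, 50
weights = rng.random((m, n))            # step (1): random weight vectors
positions = np.arange(m, dtype=float)   # neuron positions along the chain

eta = 0.5                               # learning rate (assumed schedule)
sigma = m / 2.0                         # neighborhood width (assumed schedule)

for t in range(25000):
    # step (2): random input from a triangular region of the unit square
    x = rng.random(n)
    while x[1] > x[0]:
        x = rng.random(n)

    # step (3): winner neuron k minimizes the Euclidean distance ||w_i - x||
    k = np.argmin(np.linalg.norm(weights - x, axis=1))

    # step (4): w_i := w_i + eta * phi(i, k) * (x - w_i)
    phi = np.exp(-(positions - positions[k]) ** 2 / (2.0 * sigma ** 2))
    weights += eta * phi[:, None] * (x - weights)

    # step (5): a fixed iteration budget stands in for a convergence test;
    # meanwhile the neighborhood and the learning rate shrink gradually.
    eta *= 0.9999
    sigma = max(0.5, sigma * 0.9997)
```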

Page 8: Unsupervised Learning in SOMs

Example I: Learning a one-dimensional representation of a two-dimensional (triangular) input space:

[Figure: network state after 0, 20, 100, 1000, 10000, and 25000 iterations]

Page 9: Unsupervised Learning in SOMs

Example II: Learning a two-dimensional representation of a two-dimensional (square) input space:

Page 10: Unsupervised Learning in SOMs

Example III: Learning a two-dimensional mapping of texture images.

Page 11: The Hopfield Network

The Hopfield model is a single-layered recurrent network.

It is usually initialized with appropriate weights instead of being trained.

The network structure looks as follows:

[Figure: single recurrent layer of units X1, X2, ..., XN]

Page 12: The Hopfield Network

We will focus on the discrete Hopfield model, because its mathematical description is more straightforward.

In the discrete model, the output of each neuron is either 1 or –1.

In its simplest form, the output function is the sign function, which yields 1 for arguments ≥ 0 and –1 otherwise.
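In code, the output function and one way to run the network might look like this minimal sketch (assuming NumPy; the synchronous update x := sgn(Wx) is a standard choice, not something these slides spell out):

```python
import numpy as np

def sgn(a):
    """Sign function from the slide: 1 for arguments >= 0, -1 otherwise.
    (np.sign alone would map 0 to 0, so the boundary case is handled.)"""
    return np.where(a >= 0, 1, -1)

def run(state, W, steps=2):
    """Iterate the (assumed) synchronous update x := sgn(W x)."""
    for _ in range(steps):
        state = sgn(W @ state)
    return state
```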

Page 13: The Hopfield Network

We can set the weights in such a way that the network learns a set of different inputs, for example, images.

The network associates input patterns with themselves, which means that in each iteration, the activation pattern will be drawn towards one of those patterns.

After converging, the network will most likely present one of the patterns that it was initialized with.

Therefore, Hopfield networks can be used to restore incomplete or noisy input patterns.
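The slides say the weights are set rather than trained; the standard way to do this for Hopfield networks is the Hebbian outer-product rule, shown here as an assumed concrete choice:

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product rule: W = (1/N) * sum_p x_p x_p^T with a
    zero diagonal (no self-connections). This is the usual recipe; the
    slides only say the weights are set appropriately."""
    N = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0.0)
    return W

# Toy example: store two 8-unit patterns, then recall from a corrupted probe.
pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                 [1, 1, 1, 1, -1, -1, -1, -1]])
W = store(pats)
probe = pats[0].copy()
probe[0] = -probe[0]                     # corrupt one unit
print(np.where(W @ probe >= 0, 1, -1))   # recovers pats[0] in one step
```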

Page 14: The Hopfield Network

Example: Image reconstruction (Ritter, Schulten & Martinetz, 1990)

A 20×20 discrete Hopfield network was trained with 20 input patterns, including the one shown in the left figure and 19 random patterns like the one on the right.

Page 15: The Hopfield Network

After providing only one fourth of the “face” image as initial input, the network is able to perfectly reconstruct that image within only two iterations.

Page 16: The Hopfield Network

Adding noise by flipping each pixel with probability p = 0.3 does not impair the network’s performance.

After two steps, the image is perfectly reconstructed.
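This kind of corruption is easy to reproduce; a sketch using the 20×20 size from the example and illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(pattern, p):
    """Flip each +-1 pixel independently with probability p."""
    flips = rng.random(pattern.shape) < p
    return np.where(flips, -pattern, pattern)

image = np.where(rng.random(400) < 0.5, 1, -1)   # stand-in for a 20x20 pattern
noisy = add_noise(image, p=0.3)                  # ~30% of pixels flipped
```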

Page 17: The Hopfield Network

However, for noise created with p = 0.4, the network is unable to restore the original image.

Instead, it converges to one of the 19 random patterns.

Page 18: The Hopfield Network

The Hopfield model constitutes an interesting neural approach to identifying partially occluded objects and objects in noisy images.

These are among the toughest problems in computer vision.

Notice, however, that Hopfield networks require the input patterns to always be in exactly the same position; otherwise, they will fail to recognize them.