
Artificial Neural Networks

Part 12

Self Organizing Maps: Introduction

Teuvo Kohonen has introduced several new concepts to neural computing. The most popular one is the self-organizing feature map (SOM), which can be used for visualization and clustering of high-dimensional data. He is a Professor of the Academy of Finland.

The Kohonen ANN is a ‘self-organizing’ system capable of solving unsupervised rather than supervised problems.

Self Organizing Maps: Introduction

Grandmother cell theory: A "grandmother cell" is a hypothetical neuron that responds only to a specific and meaningful stimulus, such as the image of one's grandmother. The term was coined by Jerry Lettvin in 1967.

o The neurons are organized according to a physical network of connections in the brain (planar topology).
o Some neurons are tuned by evolution and training to fire electrical signals for particular events.
o Neurons that are neighbors tend to fire for similar input data.


Self Organizing Maps: Structure

As a rule, the Kohonen type of net is based on a single layer of neurons arranged in a two-dimensional plane with a well-defined topology. A defined topology means that each neuron has a defined number of neurons as nearest neighbors, second-nearest neighbors, etc.

[Figure: a two-dimensional lattice of neurons, illustrated for a three-dimensional input and a four-by-four output layer.]


Self Organizing Maps: Topology

The neighborhood of a neuron is usually arranged either in squares or in hexagons. In the Kohonen conception of neural networks, signal similarity is related to the spatial (topological) relation among the neurons in the network.
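
Since the topology fixes how many first-, second-, ... nearest neighbors each neuron has, the neighbor rings of a square grid can be enumerated directly. A minimal Python sketch (function and parameter names are illustrative, not from the slides):

```python
# Neurons at topological distance k from a given neuron on a square grid,
# using Chebyshev distance: an interior neuron has 8 first neighbors,
# 16 second neighbors, and so on.
def ring(center, k, rows, cols):
    r0, c0 = center
    return [(r, c) for r in range(rows) for c in range(cols)
            if max(abs(r - r0), abs(c - c0)) == k]
```

For example, ring((2, 2), 1, 5, 5) returns the 8 nearest neighbors of the central neuron of a 5×5 square map; a hexagonal arrangement would give 6 neurons in the first ring instead.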

Self Organizing Maps: Learning scheme

Similarity is the basis for selecting the winner neuron. In other words, there is a competition between neurons for winning (competitive learning). The Kohonen learning concept tries to map the input so that similar signals excite neurons that are very close together.

[Figure: an input vector x_s is compared with the weight vector w of each neuron; the resulting similarity map determines the output.]

Self Organizing Maps: Training

1st step: An m-dimensional object x_s enters the network, and only one neuron in the output layer is selected: after the input occurs, the network selects the winner c (the central neuron) according to some criterion, here the minimum squared Euclidean distance:

$$c \leftarrow \min_{j}\left[\sum_{i=1}^{m}\left(x_{si} - w_{ji}\right)^{2}\right]$$

To begin, we assign random numbers to each of the weights, as in other neural-network computations.
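
A minimal sketch of this selection step in Python (array names are illustrative):

```python
import numpy as np

def find_winner(weights, x_s):
    """Return the index c of the neuron whose weight vector is most
    similar to the input x_s (minimum squared Euclidean distance).
    weights: (n_neurons, m) array, x_s: (m,) array."""
    d2 = np.sum((weights - x_s) ** 2, axis=1)
    return int(np.argmin(d2))
```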

2nd step: After finding the neuron c, its weight vector is corrected to make its response closer to the input:

$$w_{ji}^{new} = w_{ji}^{old} + \eta(t)\, d(c,j)\, \left(x_{si} - w_{ji}^{old}\right)$$

where d(c,j) is a neighborhood scaling function and the learning rate η(t) decreases linearly from a_max in the first epoch to a_min in the last one:

$$\eta(t) = \left(a_{max} - a_{min}\right) \cdot \frac{t_{max} - t}{t_{max} - 1} + a_{min}$$
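
A sketch of the decay and of the winner's own correction, assuming the linear η(t) reconstructed above (for the winner itself, d(c,c) = 1):

```python
def learning_rate(t, t_max, a_max=0.9, a_min=0.1):
    # Linear decay: eta(1) = a_max, eta(t_max) = a_min (assumes t_max > 1)
    return (a_max - a_min) * (t_max - t) / (t_max - 1) + a_min

def update_winner(w_c, x_s, eta):
    # w_c, x_s: NumPy arrays; move the winner's weights toward the input
    return w_c + eta * (x_s - w_c)
```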

Self Organizing Maps: Training

[Figure: neighborhood scaling functions plotted against the distance d from the winner c, with maximum a_max at the center: a triangular profile and a Mexican-hat profile.]

3rd step: The weights of the neighboring neurons must be corrected as well. These corrections are usually scaled down, depending on the distance from c:

$$w_{ji}^{new} = w_{ji}^{old} + \eta(t)\, d(c,j)\, \left(x_{si} - w_{ji}^{old}\right)$$
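
The two scaling profiles from the figure can be written down directly; a sketch (sigma and d_c are assumed names for the neighborhood width):

```python
import numpy as np

def triangular(d, d_c):
    # Linearly decreasing scaling, zero beyond the neighborhood radius d_c
    return np.maximum(0.0, 1.0 - d / d_c)

def mexican_hat(d, sigma):
    # Ricker profile: excitatory near the winner, mildly inhibitory farther out
    r2 = (d / sigma) ** 2
    return (1.0 - r2) * np.exp(-r2 / 2.0)
```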

4th step: The next object x_s is input and the process is repeated. After all objects have been input once, one epoch is completed.

Self Organizing Maps: Training

Worked example: a 4×4 map with three weights per neuron. The weight vectors (w_1, w_2, w_3), listed by map position:

(0.2, 0.4, 0.1)  (0.4, 0.5, 0.5)  (0.1, 0.3, 0.6)  (0.6, 0.8, 0.0)
(0.7, 0.2, 0.9)  (0.2, 0.4, 0.3)  (0.3, 0.1, 0.8)  (0.9, 0.2, 0.4)
(0.5, 0.1, 0.5)  (0.0, 0.6, 0.3)  (0.7, 0.0, 0.1)  (0.2, 0.9, 0.1)
(1.0, 0.0, 0.1)  (0.1, 0.2, 0.3)  (0.8, 0.7, 0.4)  (0.7, 0.2, 0.7)

Input vector: x_s = (1.0, 0.2, 0.6)

In this example the winner is selected by the maximum net output:

$$out_{j} = \sum_{i=1}^{m} w_{ji}\, x_{si}, \qquad c \leftarrow \max_{j}\left(out_{j}\right)$$

Output:

0.34  0.80  0.52  0.76
1.28  0.46  0.80  1.18
0.82  0.30  0.76  0.44
1.06  0.32  1.18  1.16

Winner: the neuron in row 2, column 1, with out = 1.28.
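
The example can be checked in a few lines of NumPy; the output matrix and the winner match the values above:

```python
import numpy as np

# 4x4 map, 3 weights per neuron, listed row by row as in the table above
W = np.array([
    [0.2, 0.4, 0.1], [0.4, 0.5, 0.5], [0.1, 0.3, 0.6], [0.6, 0.8, 0.0],
    [0.7, 0.2, 0.9], [0.2, 0.4, 0.3], [0.3, 0.1, 0.8], [0.9, 0.2, 0.4],
    [0.5, 0.1, 0.5], [0.0, 0.6, 0.3], [0.7, 0.0, 0.1], [0.2, 0.9, 0.1],
    [1.0, 0.0, 0.1], [0.1, 0.2, 0.3], [0.8, 0.7, 0.4], [0.7, 0.2, 0.7],
])
x_s = np.array([1.0, 0.2, 0.6])

out = (W @ x_s).reshape(4, 4)   # the same 4x4 output matrix as above
print(out)                      # maximum 1.28 at row 2, column 1: the winner
```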

Self Organizing Maps: Training

Correction step for the same 4×4 map and the same input vector x_s = (1.0, 0.2, 0.6). For each neuron, the difference (x_si − w_ji^old) is:

(0.8, −0.2, 0.5)  (0.6, −0.3, 0.1)  (0.9, −0.1, 0.0)  (0.4, −0.6, 0.6)
(0.3, 0.0, −0.3)  (0.8, −0.2, 0.3)  (0.7, 0.1, −0.2)  (0.1, 0.0, 0.2)
(0.5, 0.1, 0.1)  (1.0, −0.4, 0.3)  (0.3, 0.2, 0.5)  (0.8, −0.7, 0.5)
(0.0, 0.2, 0.5)  (0.9, 0.0, 0.3)  (0.2, −0.5, 0.2)  (0.3, 0.0, −0.1)

Taking the winner found above (row 2, column 1), each difference is scaled by the learning rate times a linear neighborhood factor that shrinks with the distance from the winner: 1×0.9 for the winner itself, 0.8×0.9 for the first ring of neighbors, 0.6×0.9 for the second ring, and 0.4×0.9 for the third:

$$w_{ji}^{new} = w_{ji}^{old} + \eta(t)\, d(c,j)\, \left(x_{si} - w_{ji}^{old}\right), \qquad \eta(t) = \left(a_{max} - a_{min}\right) \cdot \frac{t_{max} - t}{t_{max} - 1} + a_{min}$$

With a_max = 0.9 and a_min = 0.1, η = a_max = 0.9 at t = 1 (the first epoch). Neighbor function: linear.

Self Organizing Maps: Training

Training flow:

Initialize Network → Get Input → Find Winner → Update Winner → Update Neighborhood
(repeat for all input objects; repeat for n epochs)
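
A compact, self-contained sketch of this whole loop (all names, the Chebyshev rings, and the shrinking-radius schedule are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def train_som(X, rows, cols, epochs, a_max=0.9, a_min=0.1, seed=None):
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    W = rng.random((rows, cols, m))                 # Initialize Network
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(1, epochs + 1):                  # Repeat for n epochs
        # Linear learning-rate decay, as on the earlier slides
        eta = (a_max - a_min) * (epochs - t) / max(epochs - 1, 1) + a_min
        radius = max(rows, cols) * (epochs - t + 1) / epochs
        for x in X:                                 # Repeat for all objects
            d2 = np.sum((W - x) ** 2, axis=2)       # Get Input
            c = np.unravel_index(np.argmin(d2), d2.shape)   # Find Winner
            dist = np.max(np.abs(grid - np.array(c)), axis=2)  # rings around c
            h = np.maximum(0.0, 1.0 - dist / radius)           # linear neighborhood
            W += eta * h[..., None] * (x - W)       # Update Winner + Neighborhood
    return W
```

For the 2-D example data later in these slides, one would call train_som(X, rows=3, cols=3, epochs=20) with X of shape (10, 2).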

Self Organizing Maps: Training

After the training process is accomplished, the complete set of training vectors is run through the KANN once more. In this last run, the neurons excited by each input vector are labeled; the resulting table is called the top map.
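
A sketch of this labeling run, given a trained weight array W of shape (rows, cols, m) and one label per training vector (all names illustrative):

```python
import numpy as np

def top_map(W, X, labels):
    rows, cols, _ = W.shape
    tmap = [["" for _ in range(cols)] for _ in range(rows)]
    for x, lab in zip(X, labels):
        d2 = np.sum((W - x) ** 2, axis=2)
        r, c = np.unravel_index(np.argmin(d2), d2.shape)
        tmap[r][c] = lab        # the neuron excited by x receives x's label
    return tmap
```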

[Figure: input vectors a–e presented to the trained SOM; the positions of their winning neurons, labeled a–e, form the top map.]

Self Organizing Maps: Top Map

The number of weights in each neuron is equal to the dimension m of the input vector. Hence, each level of weights handles the data of only one specific variable.
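
In array terms: if the trained weights are stored as a (rows, cols, m) array, each weight level is one 2-D slice; a small sketch with illustrative values:

```python
import numpy as np

W = np.random.random((5, 5, 3))   # illustrative trained 5x5 map, m = 3 variables
level_0 = W[:, :, 0]              # the weight map of the first variable only
```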

[Figure: a trained KANN queried with the input vectors X_s. Two 5×5 tables of per-neuron counts survive from the figure:

0 0 0 0 0      1 3 0 1 2
1 0 0 0 0      3 2 2 1 3
1 1 0 0 0      2 1 1 2 3
4 3 1 1 0      1 2 1 0 1
5 6 2 1 1      3 2 1 1 2

In the corresponding top map, the excited neurons are labeled L (low) or H (high).]

Self Organizing Maps: Weight Map

The U-matrix is simply a collection of pairwise distances between the model vectors of neighboring SOM units. With it, the clusters can be visualized as gray shades on top of the SOM display: long distances correspond to dark shades and short distances to light shades.

[Figure: U-matrix of a 25×25 map.]
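
A simple version of this computation, averaging each unit's distance to its 4-connected grid neighbors (one of several common U-matrix variants):

```python
import numpy as np

def u_matrix(W):
    # W: (rows, cols, m) trained weight array
    rows, cols, _ = W.shape
    U = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dists.append(np.linalg.norm(W[r, c] - W[rr, cc]))
            U[r, c] = np.mean(dists)
    return U   # large values (dark shades) mark cluster borders
```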

Self Organizing Maps: U-matrix

Self Organizing Maps: Bounding

[Figure: a Kohonen map wrapped into a toroid, so that the 3rd layer of neighbor neurons of a unit W continues across the map edges.]

Bounding the map as a toroid makes rows and columns wrap around, so that even neurons at the map edges keep complete rings of neighbors.
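
With toroidal bounding, topological distances wrap around the edges; a sketch (Chebyshev rings assumed, as in the earlier example):

```python
def toroidal_distance(p, q, rows, cols):
    # p, q: (row, col) grid positions; distances wrap like on a torus
    dr = abs(p[0] - q[0])
    dc = abs(p[1] - q[1])
    return max(min(dr, rows - dr), min(dc, cols - dc))
```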

Self Organizing Maps: Inside feature space

Suppose we use a 3×3 SOM network for training our data set. Our example data set has 10 points (vectors) in two dimensions, so the size of the data matrix is 2×10:

x1 x2 x3 …
y1 y2 y3 …

[Figure sequence: as each input sample is presented, the 3×3 grid of neuron weight vectors unfolds in the two-dimensional feature space, the winner and its neighbors moving toward the current sample.]


Animations of SOM training:

http://www.peltarion.com/doc/images/Animated_SOM_operation.gif
http://red.csie.ntu.edu.tw/demo/art/CSM/img/SOM_2D.jpg