Self-organizing Maps (Kohonen Networks) and related models


Transcript of Self-organizing Maps (Kohonen Networks) and related models

Page 1: Self-organizing Maps (Kohonen Networks) and related models

Jochen Triesch, UC San Diego, http://cogsci.ucsd.edu/~triesch

Self-organizing Maps (Kohonen Networks) and related models

Outline:
• Cortical Maps
• Competitive Learning
• Self-organizing Maps (Kohonen Network)
• Applications to V1
• Contextual Maps
• Some Extensions: Conscience, Topology Learning

Page 2: Self-organizing Maps (Kohonen Networks) and related models


Vesalius, 1543

Page 3: Self-organizing Maps (Kohonen Networks) and related models


Brodmann, 1909

Page 4: Self-organizing Maps (Kohonen Networks) and related models


van Essen, 1990s

Page 5: Self-organizing Maps (Kohonen Networks) and related models


Optical imaging of primary visual cortex: orientation selectivity, retinal position, and ocular dominance are all mapped onto a 2-dim. map.

Somatosensory system: strong magnification of certain body parts ("homunculus").

Note: this looks at the upper layers of cortex.

Page 6: Self-organizing Maps (Kohonen Networks) and related models


General Themes

Potentially high-dimensional inputs are mapped onto the two-dimensional cortical surface: dimensionality reduction.

Within one column (from pia to white matter): similar properties of neurons.

Cortical neighbors: smooth changes of properties between neighboring cortical sites.

In the visual system, retinotopy dominates: neighboring cortical sites process information from neighboring parts of the retina (the lower in the visual system, the stronger the effect).

Page 7: Self-organizing Maps (Kohonen Networks) and related models


Two ideas for learning mappings:

• Willshaw & von der Malsburg:
  – discrete input and output space
  – all-to-all connectivity

• Kohonen:
  – more abstract formulation
  – continuous input space
  – "weight vector" for each output unit

Page 8: Self-organizing Maps (Kohonen Networks) and related models


Applications of Hebbian Learning: Retinotopic Connections

[Figure: retina layer projecting to tectum layer]

Retinotopic: neighboring retina cells project to neighboring tectum cells, i.e. the topology is preserved, hence "retinotopic".

Question: How do retina units know where to project to?
Answer: Retina axons find the tectum through chemical markers, but the fine structure of the mapping is activity dependent.

Page 9: Self-organizing Maps (Kohonen Networks) and related models


Principal assumptions:
- local excitation of neighbors in retina and tectum
- global inhibition in both layers
- Hebbian learning of feed-forward weights, with a constraint on the sum of pre-synaptic weights
- spontaneous activity (blobs) in the retina layer

Why should we see retinotopy emerging?

Willshaw & von der Malsburg Model

[Figure: retina and tectum layers. Annotations: cooperation in intersection area; competition for the ability to innervate a tectum area; instability, a tectum blob can only form at X or Y.]

Page 10: Self-organizing Maps (Kohonen Networks) and related models


Model Equations

$H^*_j$ = activity in post-synaptic cell $j$
$A^*_i$ = activity of pre-synaptic cell $i$, either 0 or 1
$s_{ij}$ = connection weight from cell $i$ to cell $j$
$e_{kj}$ = excitatory connection from post-synaptic cell $k$ to post-synaptic cell $j$
$i_{kj}$ = inhibitory connection from post-synaptic cell $k$ to post-synaptic cell $j$
$M$ = # pre cells, $N$ = # post cells

Weight update: Hebbian. Weight normalization: the sum of each pre-synaptic cell's weights is held constant (a sketch follows).
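The update and normalization equations on this slide were figures and did not survive extraction. A plausible reconstruction, consistent with the assumptions listed above (Hebbian growth of the feed-forward weights plus a fixed sum of weights per pre-synaptic cell), is:

$$\Delta s_{ij} = \alpha\, A^*_i\, H^*_j, \qquad s_{ij} \leftarrow S\,\frac{s_{ij} + \Delta s_{ij}}{\sum_k \left(s_{ik} + \Delta s_{ik}\right)}$$

with $\alpha$ a learning rate and $S$ the fixed total outgoing weight per pre-synaptic cell; the exact form used by Willshaw & von der Malsburg may differ in detail.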

Page 11: Self-organizing Maps (Kohonen Networks) and related models


Results: Systems Matching

Formation of retinotopic mappings: models of this type account for a range of experiments: "half retina", "half tectum", "graft rotation", …

[Figure: half retina experiment, retina → tectum]

Page 12: Self-organizing Maps (Kohonen Networks) and related models


Simple Competitive Learning

Simple linear units; the input is $\xi$:
$$h_i = \sum_j w_{ij}\,\xi_j$$

Define the winner: $i^* = \arg\max_i h_i$

Output of unit $i$:
$$O_i = \begin{cases} 1 & \text{if } i = i^* \\ 0 & \text{otherwise} \end{cases}$$

Assume the weights are > 0 and possibly normalized. If the weights are normalized, $h_i$ is maximal if $w_i$ has the smallest Euclidean distance to $\xi$.

Winner-Take-All Mechanism: may be implemented through a lateral inhibition network (see the part on Neurodynamics).

“ultra-sparse code”

Figure taken from Hertz et al.

Page 13: Self-organizing Maps (Kohonen Networks) and related models


$$\Delta w_{ij} = \eta\, O_i\,\left(\xi_j - w_{ij}\right)$$

The weight of the winning unit is moved towards the current input $\xi$. This is similar to Oja's and Sanger's rules. Geometric interpretation: the units (their weight vectors) move in input space; the winning unit moves towards the current input.

The learning rule is Hebb-like plus a decay term.

[Figure: input vector and weight vectors in input space; normalized weights]

Figure taken from Hertz et al.
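A minimal numpy sketch of this update rule (the data and parameter values are illustrative, not from the slides):

```python
import numpy as np

def competitive_learning(data, n_units=3, eta=0.1, epochs=20, seed=0):
    """Move only the winning unit's weight vector towards each input."""
    rng = np.random.default_rng(seed)
    # initialize the weight vectors from random data points
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            # winner = closest weight vector (equivalent to max h_i for normalized weights)
            i_star = np.argmin(np.sum((w - x) ** 2, axis=1))
            w[i_star] += eta * (x - w[i_star])  # Delta w = eta * (xi - w)
    return w

# usage: three clusters of 2-d points; the units settle on the cluster centers
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(c, 0.1, (100, 2)) for c in [(0, 0), (1, 1), (0, 1)]])
print(competitive_learning(data))
```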

Page 14: Self-organizing Maps (Kohonen Networks) and related models


Cost Function for Competitive Learning

$$\Delta w_{ij} = \eta\, O_i\,\left(\xi_j - w_{ij}\right)$$

Claim: this learning rule is related to the cost function
$$E\{w_{ij}\} = \frac{1}{2}\sum_{i,j,\mu} M_{i\mu}\left(\xi^{\mu}_j - w_{ij}\right)^2, \qquad M_{i\mu} = \begin{cases} 1 & \text{if unit } i \text{ wins for pattern } \mu \\ 0 & \text{otherwise} \end{cases}$$

To see this, treat $M$ as constant:
$$\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}} = \eta\, M_{i\mu}\left(\xi^{\mu}_j - w_{ij}\right)$$

so the rule performs online minimization of the sum of squared errors!

Note 1: this may be seen as an online version of k-means clustering.
Note 2: the learning rule can be modified to make all units fire equally often.

Page 15: Self-organizing Maps (Kohonen Networks) and related models


Vector Quantization

Idea: represent the input by the weight vector of the winner (this can be used for data compression: just store/transmit a label identifying the winner).

Question: what is the set of inputs that "belong" to a unit $i$, i.e. for which unit $i$ is the winner?

Answer: the Voronoi tessellation (MATLAB's voronoi command computes this).

Figure taken from Hertz et al.
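A small sketch of vector quantization with an already-trained codebook (all names are illustrative):

```python
import numpy as np

def encode(data, codebook):
    """Compress: map each input to the index of its nearest codebook vector
    (i.e. to the Voronoi cell it falls into)."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)        # one small integer label per input

def decode(labels, codebook):
    """Decompress: replace each label by the winner's weight vector."""
    return codebook[labels]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
data = np.array([[0.1, -0.2], [0.9, 1.1]])
labels = encode(data, codebook)     # array([0, 1])
print(decode(labels, codebook))     # reconstruction from the labels alone
```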

Page 16: Self-organizing Maps (Kohonen Networks) and related models


Self-Organizing Maps (Kohonen)

Idea: the output units have an a priori topology (1-dim. or 2-dim.), e.g. they are arranged on a chain or a regular grid. Not only the winner gets to learn, but also its neighbors.

SOMs transform incoming signal patterns of arbitrary dimension onto a 1-dim. or 2-dim. map. Neighbors in the map respond to similar input patterns.

Competition: again with a winner-take-all mechanism.
Cooperation: through neighborhood relations that are exploited during learning.

Page 17: Self-organizing Maps (Kohonen Networks) and related models


Self-Organizing Maps (Kohonen)

Neighboring output units learn together! Update the winner's weights, but also those of its neighbors.

[Figure: input vector and weight vectors; example of a 1-dim. topology of output nodes]

Page 18: Self-organizing Maps (Kohonen Networks) and related models


SOM Algorithm

Again, simple linear units; the input is $\xi$:
$$h_i = \sum_j w_{ij}\,\xi_j$$

Define the winner: $i^* = \arg\max_i h_i$

Output of unit $j$ when the winner is $i^*$:
$$O_j = \exp\!\left(-\frac{d^2_{j,i^*}}{2\sigma^2}\right), \qquad d^2_{j,i^*} = \left\|\mathbf{r}_j - \mathbf{r}_{i^*}\right\|^2$$
where $\mathbf{r}_j$ is the position of unit $j$ in the map.

Usually, the neighborhood shrinks with time (for $\sigma \to 0$: competitive learning):
$$\sigma(t) = \sigma_0 \exp(-t/\tau_1)$$

Same learning rule as earlier:
$$\Delta w_j = \eta\, O_j\,\left(\xi - w_j\right)$$

Usually, the learning rate decays, too:
$$\eta(t) = \eta_0 \exp(-t/\tau_2)$$
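Putting the pieces together, a compact numpy sketch of the whole algorithm (the decay constants follow the recommendations on the next slide; all names and the demo data are illustrative):

```python
import numpy as np

def train_som(data, grid=(10, 10), T=5000, eta0=0.1, tau2=1000, seed=0):
    """Train a SOM with a 2-dim. grid of output units."""
    rng = np.random.default_rng(seed)
    # positions r_j of the output units in the map
    r = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    w = rng.random((grid[0] * grid[1], data.shape[1]))   # weight vectors
    sigma0 = max(grid) / 2.0            # initial neighborhood ~ radius of the layer
    tau1 = 1000.0 / np.log(sigma0)
    for t in range(T):
        x = data[rng.integers(len(data))]
        sigma = sigma0 * np.exp(-t / tau1)    # shrinking neighborhood
        eta = eta0 * np.exp(-t / tau2)        # decaying learning rate
        i_star = np.argmin(((w - x) ** 2).sum(1))   # winner
        d2 = ((r - r[i_star]) ** 2).sum(1)          # squared map distances
        O = np.exp(-d2 / (2 * sigma ** 2))          # neighborhood function O_j
        w += eta * O[:, None] * (x - w)             # Delta w_j = eta O_j (xi - w_j)
    return w

# usage: map uniformly distributed 2-d inputs onto a 10x10 grid
w = train_som(np.random.rand(1000, 2))
```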

Page 19: Self-organizing Maps (Kohonen Networks) and related models


2 Learning Phases

1. Self-organizing or ordering phase
- topological ordering of the weight vectors
- use: $\sigma_0$ = radius of the layer

2. Convergence phase
- fine tuning of the weights
- use: a very small neighborhood, learning rate around 0.01

Time dependence of the parameters $\sigma$, $\eta$:
$$\sigma(t) = \sigma_0 \exp(-t/\tau_1), \qquad \tau_1 = \frac{1000}{\log \sigma_0}$$
$$\eta(t) = \eta_0 \exp(-t/\tau_2), \qquad \eta_0 = 0.1,\ \tau_2 = 1000$$

Page 20: Self-organizing Maps (Kohonen Networks) and related models


Examples

Figure taken from Haykin

Page 21: Self-organizing Maps (Kohonen Networks) and related models

Figure taken from Haykin

Page 22: Self-organizing Maps (Kohonen Networks) and related models

Figure taken from Hertz et al.

Page 23: Self-organizing Maps (Kohonen Networks) and related models


Feature Map Properties

Property 1: the feature map approximates the input space.

Property 2: it is topologically ordered, i.e. neighboring units correspond to similar input patterns.

Property 3: density matching: the density of output units corresponds only qualitatively to the input probability density function; the SOM tends to under-sample high-density areas and over-sample low-density areas (for a 1-dim. map, the weight density scales like $p(x)^{2/3}$ rather than $p(x)$).

Property 4: the SOM can be thought of (loosely) as a non-linear generalization of PCA.

Page 24: Self-organizing Maps (Kohonen Networks) and related models


Ocular Dominance and Orientation Tuning in V1

1. Singularities (pinwheels and saddle points) tend to align with the centers of ocular dominance bands.
2. Iso-orientation contours intersect the borders of ocular dominance bands at approximately a 90 deg. angle ("local orthogonality").
3. Global disorder (Mexican-hat-like autocorrelation functions).

Page 25: Self-organizing Maps (Kohonen Networks) and related models


SOM-based Model

Obermayer et al.: an SOM model and a close relative (the elastic net) account for the observed structure of pinwheels and their locations.

Each map site $\mathbf{r}$ carries a feature vector
$$\mathbf{v}(\mathbf{r}) = \left(x,\ y,\ q\sin(2\phi),\ q\cos(2\phi),\ z\right)$$
(retinal position, orientation preference $\phi$ with selectivity $q$, and ocular dominance $z$). Update rule:
$$\mathbf{v}_{t+1}(\mathbf{r}) = \mathbf{v}_t(\mathbf{r}) + \epsilon\, h_{SOM}(\mathbf{r},\mathbf{r}')\left[\mathbf{v} - \mathbf{v}_t(\mathbf{r})\right], \qquad 0 < \epsilon \le 1$$
$$h_{SOM}(\mathbf{r},\mathbf{r}') = \exp\!\left(-\left\|\mathbf{r}-\mathbf{r}'\right\|^2 / 2\sigma^2\right)$$
where $\mathbf{r}'$ is the best-matching site for the stimulus $\mathbf{v}$:
$$d\!\left(\mathbf{v}, \mathbf{v}(\mathbf{r}')\right) = \min_{\mathbf{r}}\, d\!\left(\mathbf{v}, \mathbf{v}(\mathbf{r})\right)$$
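A hedged numpy sketch of this kind of simulation (the stimulus distribution and all parameter values are illustrative choices, not Obermayer et al.'s):

```python
import numpy as np

def train_v1_map(N=32, T=30000, eps=0.05, sigma=2.0, q=0.3, seed=0):
    """SOM on 5-dim. feature vectors v = (x, y, q sin 2phi, q cos 2phi, z)."""
    rng = np.random.default_rng(seed)
    gx, gy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    r = np.stack([gx, gy], -1).reshape(-1, 2).astype(float)    # map positions
    # start near perfect retinotopy, with small random orientation/OD components
    v = np.concatenate([r / N, 0.01 * rng.standard_normal((N * N, 3))], axis=1)
    for t in range(T):
        # random stimulus: position, orientation phi, and an eye preference z
        phi = np.pi * rng.random()
        s = np.array([rng.random(), rng.random(),
                      q * np.sin(2 * phi), q * np.cos(2 * phi),
                      rng.choice([-0.1, 0.1])])
        win = np.argmin(((v - s) ** 2).sum(1))                 # best-matching site r'
        h = np.exp(-((r - r[win]) ** 2).sum(1) / (2 * sigma ** 2))
        v += eps * h[:, None] * (s - v)                        # SOM update
    phi_map = 0.5 * np.arctan2(v[:, 2], v[:, 3]).reshape(N, N)   # orientation map
    od_map = v[:, 4].reshape(N, N)                               # ocular dominance map
    return phi_map, od_map
```

Pinwheels appear as the singularities of the recovered orientation map $\phi(\mathbf{r})$.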

Page 26: Self-organizing Maps (Kohonen Networks) and related models


A very similar idea (Durbin and Mitchison):

Elastic Net Model

$$\mathbf{v}_{t+1}(\mathbf{r}) = \mathbf{v}_t(\mathbf{r}) + \epsilon_1\, h_{EN}(\mathbf{v},\mathbf{r})\left[\mathbf{v} - \mathbf{v}_t(\mathbf{r})\right] + \epsilon_2 \sum_{\mathbf{r}':\,\left\|\mathbf{r}'-\mathbf{r}\right\|=1} \left[\mathbf{v}_t(\mathbf{r}') - \mathbf{v}_t(\mathbf{r})\right]$$

with the same feature vectors $\mathbf{v} = \left(x,\ y,\ q\sin(2\phi),\ q\cos(2\phi),\ z\right)$ and a normalized Gaussian weighting
$$h_{EN}(\mathbf{v},\mathbf{r}) = \frac{\exp\!\left(-d\!\left(\mathbf{v},\mathbf{v}(\mathbf{r})\right)^2 / 2\sigma^2\right)}{\sum_{\mathbf{r}'} \exp\!\left(-d\!\left(\mathbf{v},\mathbf{v}(\mathbf{r}')\right)^2 / 2\sigma^2\right)}$$

Note: the second (elasticity) term explicitly forces a unit's weight vector to be similar to those of its neighbors.

Page 27: Self-organizing Maps (Kohonen Networks) and related models


RF-LISSOM Model: Receptive Field Laterally Interconnected Synergetically Self-Organizing Map

Idea: learn the RF properties and the map simultaneously.

Figures taken from http://www.cs.texas.edu/users/nn/web-pubs/htmlbook96/sirosh/

Page 28: Self-organizing Maps (Kohonen Networks) and related models


LISSOM Equations

The input activities are elongated Gaussian blobs.

The initial map activity is a nonlinear function of the input activity and the afferent weights $\mu$.

The activity then evolves in time ($\mu$, $E$, $I$ are all weights: afferent, lateral excitatory, and lateral inhibitory).

Learning is Hebbian-style with weight normalization: all weights learn, but with different parameters (a sketch of the general form follows).
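The equations on this slide were images; the general form used in Sirosh & Miikkulainen's papers is roughly the following (a reconstruction sketch with simplified indexing, not the slide's exact notation):

$$\eta_{ij}(0) = \sigma\Bigl(\sum_{ab} \mu_{ij,ab}\,\xi_{ab}\Bigr)$$
$$\eta_{ij}(t) = \sigma\Bigl(\sum_{ab} \mu_{ij,ab}\,\xi_{ab} \;+\; \gamma_E \sum_{kl} E_{ij,kl}\,\eta_{kl}(t-1) \;-\; \gamma_I \sum_{kl} I_{ij,kl}\,\eta_{kl}(t-1)\Bigr)$$
$$w_{ij,kl} \leftarrow \frac{w_{ij,kl} + \alpha\,\eta_{ij}\,X_{kl}}{\sum_{kl}\left(w_{ij,kl} + \alpha\,\eta_{ij}\,X_{kl}\right)}$$

Here $\sigma$ is a piecewise-linear sigmoid, $\xi$ is the input blob activity, $X$ stands for the relevant pre-synaptic activity, and the learning rate $\alpha$ differs for the afferent ($\mu$), excitatory ($E$), and inhibitory ($I$) weights.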

Page 29: Self-organizing Maps (Kohonen Networks) and related models


Demo (needs supercomputer power)

Page 30: Self-organizing Maps (Kohonen Networks) and related models


Contextual Maps

A different way of displaying an SOM: label the output nodes with a class label describing what each output node represents. This can be used to display data from high-dimensional input spaces in 2-d.

The input vector $\mathbf{x}$ is the concatenation of an attribute vector $\mathbf{x}_a$ and a symbol code $\mathbf{x}_s$ (the symbol code is "small" and free of correlations):
$$\mathbf{x} = \left[\mathbf{x}_a^T,\ \mathbf{x}_s^T\right]^T$$

Figure taken from Haykin

Page 31: Self-organizing Maps (Kohonen Networks) and related models


Contextual Maps cont'd.

The contextual map is trained just like a standard SOM. Which unit is the winner when only the symbol part of the input is presented? (See the sketch below.)

Figure taken from Haykin
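A small sketch of this query (assuming a map already trained on the concatenated vectors; the dimensions and the stand-in weight matrix are illustrative):

```python
import numpy as np

def winner(x, w):
    """Index of the best-matching unit for input x; w has shape (units, dims)."""
    return np.argmin(((w - x) ** 2).sum(axis=1))

n_a, n_s = 13, 16                      # attribute and symbol-code dimensions
x_s = np.zeros(n_s); x_s[3] = 1.0      # symbol code of item 3
x_test = np.concatenate([np.zeros(n_a), x_s])   # symbol input only: x_a = 0

w = np.random.rand(100, n_a + n_s)     # stand-in for the trained map weights
print(winner(x_test, w))               # the unit to label with item 3's name
```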

Page 32: Self-organizing Maps (Kohonen Networks) and related models


Contextual Maps cont'd.

For which symbol code does each unit fire the most?

Figure taken from Haykin

Page 33: Self-organizing Maps (Kohonen Networks) and related models


Extension with "Conscience"

Problem: the standard SOM does not faithfully represent the input density.
Magnification factor: $m(\mathbf{x})$ is the number of units in a small volume $d\mathbf{x}$ of input space. Ideally:
$$m(\mathbf{x}) \propto p(\mathbf{x})$$
But this is not what the SOM does!

Idea (DeSieno): if a neuron wins too often / not often enough, it decreases/increases its chance of winning (a form of intrinsic plasticity).

Page 34: Self-organizing Maps (Kohonen Networks) and related models


Algorithm: SOM with conscience
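The algorithm itself was a figure; below is a sketch of DeSieno's conscience mechanism as it is usually described (the constants B and C are typical textbook values and are assumptions here, as is the plain competitive rather than full SOM update):

```python
import numpy as np

def som_conscience(data, n_units=20, eta=0.05, B=1e-4, C=10.0, T=20000, seed=0):
    """Competitive learning in which frequent winners handicap themselves."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    p = np.full(n_units, 1.0 / n_units)      # running estimates of win frequencies
    for _ in range(T):
        x = data[rng.integers(len(data))]
        d2 = ((w - x) ** 2).sum(1)
        bias = C * (1.0 / n_units - p)       # conscience: frequent winners get a penalty
        i = np.argmin(d2 - bias)             # biased winner selection
        p += B * ((np.arange(n_units) == i) - p)   # update win-frequency estimates
        w[i] += eta * (x - w[i])             # standard competitive update
    return w
```

With the bias active, every unit ends up winning about equally often, so the unit density tracks $p(\mathbf{x})$ much more closely than in the plain SOM.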

Page 35: Self-organizing Maps (Kohonen Networks) and related models


Topology Learning: Neural Gas
Idea: no fixed underlying topology; instead, learn it on-line.

Figure taken from Ballard
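A minimal sketch of the neural-gas idea, combined with competitive Hebbian topology learning (an edge between the two best-matching units per input, as in Martinetz & Schulten); edge aging and other refinements of the full algorithm are omitted, and all parameters are illustrative:

```python
import numpy as np

def neural_gas(data, n_units=30, T=20000, lam=5.0, eta=0.05, seed=0):
    """Rank-based weight updates plus on-line learning of the topology."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    edges = set()                                   # the learned topology
    for _ in range(T):
        x = data[rng.integers(len(data))]
        order = np.argsort(((w - x) ** 2).sum(1))   # units sorted by distance to x
        ranks = np.empty(n_units)
        ranks[order] = np.arange(n_units)           # rank 0 = winner
        w += eta * np.exp(-ranks / lam)[:, None] * (x - w)   # rank-based update
        edges.add(tuple(sorted((order[0], order[1]))))       # connect the two winners
    return w, edges
```

Because edges are only created where the data actually lie, the resulting graph reflects the (local) topology of the input distribution rather than a fixed grid.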

Page 36: Self-organizing Maps (Kohonen Networks) and related models


Figure taken from Ballard

Page 37: Self-organizing Maps (Kohonen Networks) and related models


SOM Summary

• "Simple" algorithms with complex behavior.

• Can be used to understand organization of response preferences for neurons in cortex, i.e. formation of cortical maps.

• Used for visualization of high dimensional data.

• Can be extended for supervised learning.

• Some notable extensions, e.g. neural gas and growing neural gas, generalize away from the fixed 2-dim. topology: a proper (local) topology is discovered during the learning process.

Page 38: Self-organizing Maps (Kohonen Networks) and related models


Learning Vector Quantization

Idea: a supervised add-on for the case that class labels are available for the input vectors.
How it works: first do unsupervised learning, then label the output nodes.

Learning rule: let $C_x$ be the desired class label for input $\mathbf{x}$ and $C_w$ the class label of the winning unit with weight vector $\mathbf{w}$.
If $C_w = C_x$: $\Delta\mathbf{w} = \alpha(t)\left[\mathbf{x} - \mathbf{w}\right]$ (move the weight towards the input)
If $C_w \ne C_x$: $\Delta\mathbf{w} = -\alpha(t)\left[\mathbf{x} - \mathbf{w}\right]$ (move it away from the input)
$\alpha(t)$: decaying learning rate.
Figure taken from Haykin
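A minimal sketch of this rule, i.e. LVQ1 (the prototype initialization and the demo data are illustrative):

```python
import numpy as np

def lvq1(X, y, w, wc, T=10000, a0=0.05, seed=0):
    """X, y: labeled data; w, wc: prototype vectors and their class labels."""
    rng = np.random.default_rng(seed)
    for t in range(T):
        k = rng.integers(len(X))
        x, cx = X[k], y[k]
        i = np.argmin(((w - x) ** 2).sum(1))   # winning prototype
        a = a0 * (1 - t / T)                   # decaying learning rate a(t)
        sign = 1.0 if wc[i] == cx else -1.0    # attract if labels match, else repel
        w[i] += sign * a * (x - w[i])
    return w

# usage: two Gaussian classes, one prototype each
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(2, 1, (100, 2)), rng.normal(-2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
w = lvq1(X, y, np.array([[1.0, 1.0], [-1.0, -1.0]]), np.array([0, 1]))
```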

Page 39: Self-organizing Maps (Kohonen Networks) and related models


LVQ data:

Page 40: Self-organizing Maps (Kohonen Networks) and related models


LVQ result:

Figure taken from Haykin