Self Organising Neural Networks


Page 1: Self Organising neural networks

Self Organising Neural Networks

Kohonen Networks.

A Problem with Neural Networks.

ART.

Beale, R. and Jackson, T. (1990). Neural Computing: An Introduction. Chapters 5 & 7. Adam Hilger, NY.

Hertz, J., Krogh, A. and Palmer, R. (1991). Introduction to the Theory of Neural Computation. Chapter 9. Addison–Wesley, NY.

Grossberg, S. (1987). Competitive Learning: from interactive activation to adaptive resonance. Cognitive Science, 11: 23–63.


Page 2: Self Organising neural networks

Kohonen Self Organising Networks

Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43: 59–69.

An abstraction from earlier models (e.g. Malsburg, 1973).

The formation of feature maps (introducing a geometric layout).

Popular and useful.

Can be traced to biologically inspired origins.

Why have topographic mappings?

– Minimal wiring

– Help subsequent processing layers.

Example: Xenopus retinotectal mapping (Price & Willshaw 2000, p121).


Page 3: Self Organising neural networks

Basic Kohonen Network

Geometric arrangement of units.

Units respond to “part” of the environment.

Neighbouring units should respond to similar parts of the environment.

Winning unit selected by:

‖x − w_c‖ = min_j ‖x − w_j‖

where w_c is the weight vector of the winning unit c, and x is the input pattern.

and Neighbourhoods...
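The winner selection rule can be sketched in a few lines of Python; this is a minimal illustration (function and variable names are my own, not from the slides):

```python
import math

def find_winner(weights, x):
    """Index c of the unit whose weight vector w_c is closest
    (smallest Euclidean distance) to the input pattern x."""
    def dist(wj):
        return math.sqrt(sum((xi - wi) ** 2 for xi, wi in zip(x, wj)))
    return min(range(len(weights)), key=lambda j: dist(weights[j]))

weights = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]]
print(find_winner(weights, [0.9, 0.8]))   # → 1
```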


Page 4: Self Organising neural networks

Neighbourhoods in the Kohonen Network

Example in 2D.

Neighbourhood of winning unit c is called N_c.
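The slides do not fix a particular lattice distance; as one illustrative choice, N_c can be computed with the Chebyshev distance on a 2-d grid:

```python
def neighbourhood(c, radius, side):
    """All units within `radius` of winner c (Chebyshev distance)
    on a side x side 2-d lattice: the neighbourhood N_c."""
    r0, c0 = c
    return [(r, k) for r in range(side) for k in range(side)
            if max(abs(r - r0), abs(k - c0)) <= radius]

print(len(neighbourhood((4, 4), 1, 8)))   # → 9: a central unit's 3x3 block
print(len(neighbourhood((0, 0), 1, 8)))   # → 4: clipped at the corner
```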


Page 5: Self Organising neural networks

Learning in the Kohonen Network

All units in N_c are updated.

dw_ij(t)/dt = η(t) [ξ_i(t) − w_ij(t)]   for j ∈ N_c
dw_ij(t)/dt = 0                         otherwise

where

dw_ij/dt = change in weight over time,
η(t) = time-dependent learning parameter,
ξ_i(t) = input component i at time t,
w_ij(t) = weight from input i to unit j at time t.

Geometrical effect: move the weight vector closer to the input vector.

η is strongest for the winner and can decrease with distance. It also decreases over time, for stability.
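The continuous-time rule is normally applied in discrete steps. A minimal sketch of one such update (names and interface are my own):

```python
def kohonen_update(weights, x, neighbourhood, eta):
    """One discrete-time Kohonen step: each unit j in the winner's
    neighbourhood N_c moves its weight vector a fraction eta towards
    the input x; all other units are left unchanged."""
    for j in neighbourhood:
        weights[j] = [w + eta * (xi - w) for w, xi in zip(weights[j], x)]
    return weights

w = [[0.0, 0.0], [1.0, 1.0]]
kohonen_update(w, [1.0, 0.0], neighbourhood=[0], eta=0.5)
print(w[0])   # → [0.5, 0.0]: unit 0 moved halfway towards the input
```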


Page 6: Self Organising neural networks

Biological origins of the Neighbourhoods

Lateral interaction of the units.

Mexican Hat form:

[Figure: Mexican hat lateral interaction function, plotted in 1-d and as a 2-d surface]

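The slides do not give a formula for the curve; a difference of Gaussians (short-range excitation minus broader, weaker inhibition) is one standard way to produce a Mexican-hat profile. All parameters below are illustrative only:

```python
import math

def mexican_hat(d, a_e=1.6, sigma_e=10.0, a_i=0.6, sigma_i=30.0):
    """Difference of Gaussians: a narrow excitatory Gaussian minus a
    wider, weaker inhibitory one (illustrative parameter values)."""
    return (a_e * math.exp(-d ** 2 / (2 * sigma_e ** 2))
            - a_i * math.exp(-d ** 2 / (2 * sigma_i ** 2)))

# Positive (excitatory) near the winner, negative (inhibitory) further out:
print(mexican_hat(0) > 0, mexican_hat(40) < 0)   # → True True
```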

Page 7: Self Organising neural networks

Biological origins of the Neighbourhoods: Malsburg

[Figure: a layer of excitatory units and a layer of inhibitory units, linked by excitatory and inhibitory connections]

Implements winner-take-all processing.


Page 8: Self Organising neural networks

1-d example

[Figure: 1-d example. Five units' weight positions at successive stages of training, initially disordered, gradually unfolding into topological order along the line.]


Page 9: Self Organising neural networks

2-d example: uniform density

8x8 units in 2D lattice

2 input lines.

Inputs between −1 and +1.

Input space:

[Figure: square input space spanning −1 to +1 on both dimensions]
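The uniform-density experiment can be sketched end-to-end in plain Python. All parameter choices here (learning-rate and radius schedules, step count) are illustrative, not taken from the slides:

```python
import random

random.seed(0)

SIDE = 8                                     # 8x8 lattice of units
units = [(r, c) for r in range(SIDE) for c in range(SIDE)]
# One 2-d weight vector per unit, started near the centre of the space.
w = {u: [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)] for u in units}

def winner(x):
    """Unit whose weight vector is closest to input x."""
    return min(units,
               key=lambda u: (x[0] - w[u][0]) ** 2 + (x[1] - w[u][1]) ** 2)

def train(steps=5000, eta0=0.5, radius0=SIDE / 2):
    for t in range(steps):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]  # uniform input
        c = winner(x)
        frac = 1 - t / steps            # learning rate and radius shrink
        eta = eta0 * frac
        radius = max(1.0, radius0 * frac)
        for u in units:                 # update every unit in N_c
            if max(abs(u[0] - c[0]), abs(u[1] - c[1])) <= radius:
                w[u][0] += eta * (x[0] - w[u][0])
                w[u][1] += eta * (x[1] - w[u][1])

train()
# After training, the 64 weight vectors spread out to cover the square.
```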


Page 10: Self Organising neural networks

2-d example: uniform density


Page 11: Self Organising neural networks

2-d example: non-uniform density

Same 8x8 units in 2D lattice.

Same input space.

Different input distribution.

[Figure: the same square input space, now with a non-uniform input distribution]


Page 12: Self Organising neural networks

2-d example: non-uniform density


Page 13: Self Organising neural networks

2-d → 1-d example: dimension reduction

2-d input uniform density; 1-d output arrangement.

“Space-filling” (Peano) curves; can solve the Travelling Salesman Problem.

[Figure: weights at initialization, epoch 10, epoch 500 and epoch 700]


Page 14: Self Organising neural networks

Example Application of Kohonen’s Network

The Phonetic Typewriter

[Figure: processing pipeline: microphone, filter, A/D conversion, FFT, Kohonen network, rule-based post-processing]

Problem: Classification of phonemes in real time.

Pre- and post-processing.

Network trained on time-sliced speech waveforms.

Rules needed to handle co-articulation effects.


Page 15: Self Organising neural networks

A Problem with Neural Networks

Consider 3 network examples:

Kohonen Network.

Associative Network.

Feed Forward Back-propagation.

Under the situation:

Network learns environment (or I/O relations).

Network is stable in the environment.

Network is placed in a new environment.

What happens:

Kohonen Network won’t learn.

Associative Network OK.

Feed Forward Back-propagation Forgets.

This is called the Stability/Plasticity Dilemma.


Page 16: Self Organising neural networks

Adaptive Resonance Theory

Grossberg, S. (1976). Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, illusions. Biological Cybernetics, 23: 187–202.

a “neural network that self-organize[s] stable pattern recognition codes in real time, in response to arbitrary sequences of input patterns”.

ART1 (1976). Localist representation, binary patterns.

ART2 (1987). Localist representation, analog patterns.

ART3 (1990). Distributed representation, analog patterns.

Desirable properties:

plastic + stable

biological mechanisms

analytical math foundation


Page 17: Self Organising neural networks

ART1

[Figure: ART1 architecture. An attentional subsystem contains the F1 and F2 unit layers with gain control G; an orienting subsystem sends a reset signal r to F2; the input I feeds F1.]

F1 → F2 fully connected, excitatory (bottom-up weights b_ij).

F2 → F1 fully connected, excitatory (top-down weights t_ji).

Pattern of activation on F1 and F2 called Short Term Memory.

Weight representations called Long Term Memory.

Localist representations of binary input patterns.


Page 18: Self Organising neural networks

Summary of ART 1

(Lippmann, 1987). N = number of F1 units.

Step 1: Initialization

t_ij(0) = 1
b_ij(0) = 1 / (1 + N)

Set vigilance parameter 0 ≤ ρ ≤ 1.

Step 2: Apply new input (binary x_i).

Step 3: Compute F2 activations

μ_j = Σ_i b_ij(t) x_i

Step 4: Find the best matching node j*, where μ_j* = max_j μ_j.

Step 5: Vigilance test

‖x‖ = Σ_i x_i
‖T x‖ = Σ_i t_ij*(t) x_i

Is ‖T x‖ / ‖x‖ > ρ?

If no, go to Step 6. If yes, go to Step 7.

Step 6: Mismatch/reset: set μ_j* = 0 (disabling unit j*) and go to Step 4.

Step 7: Resonance: adapt the best match

t_ij*(t+1) = t_ij*(t) x_i
b_ij*(t+1) = t_ij*(t) x_i / (0.5 + Σ_i t_ij*(t) x_i)

Step 8: Re-enable all F2 units and go to Step 2.
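The eight steps can be collected into a short Python sketch (Lippmann-style notation; the function and its interface are my own, and ρ is assumed strictly below 1 so the vigilance test can pass):

```python
def art1(patterns, n_units, rho):
    """ART1 sketch following the step summary. patterns: binary lists of
    equal length N; rho: vigilance, 0 <= rho < 1. Returns the F2 unit
    that each pattern finally resonates with."""
    N = len(patterns[0])
    t = [[1] * N for _ in range(n_units)]            # top-down, t_ij(0) = 1
    b = [[1 / (1 + N)] * N for _ in range(n_units)]  # bottom-up, b_ij(0) = 1/(1+N)
    assignments = []
    for x in patterns:                               # Step 2: apply input
        disabled = set()
        while True:
            # Steps 3-4: best matching F2 node among those still enabled
            j = max((k for k in range(n_units) if k not in disabled),
                    key=lambda k: sum(bi * xi for bi, xi in zip(b[k], x)))
            # Step 5: vigilance test
            match = sum(ti * xi for ti, xi in zip(t[j], x))
            if match / sum(x) > rho:
                break                                # Step 7: resonance
            disabled.add(j)                          # Step 6: reset, retry
        # Step 7: adapt the resonating node's weights
        t[j] = [ti * xi for ti, xi in zip(t[j], x)]
        norm = 0.5 + sum(t[j])
        b[j] = [tij / norm for tij in t[j]]
        assignments.append(j)                        # Step 8: next input
    return assignments

print(art1([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]], n_units=4, rho=0.5))
# → [0, 0, 1]: the two identical patterns share a unit, the third gets a new one
```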


Page 19: Self Organising neural networks

ART1: Example

[Figure: worked example. A sequence of binary inputs is presented; for each input the diagram shows which F2 units are tried (1st choice, 2nd choice, ...), which are reset by the vigilance test, and which finally resonates, together with the pattern each F2 unit comes to represent.]


Page 20: Self Organising neural networks

Summary

Simple?

Interesting biological parallels.

Diverse applications.

Extensions.
