Neural Networks: Self-Organizing Maps (SOM)

Transcript
Page 1: Neural Networks:  Self-Organizing Maps (SOM)

CHAPTER 9

UNSUPERVISED LEARNING: SELF-ORGANIZING MAPS (SOM)

CSC445: Neural Networks

Prof. Dr. Mostafa Gadal-Haqq M. Mostafa

Computer Science Department

Faculty of Computer & Information Sciences

AIN SHAMS UNIVERSITY

(Most of the figures in this presentation are copyrighted by Pearson Education, Inc.)

Page 2: Neural Networks:  Self-Organizing Maps (SOM)

Outline

Unsupervised Learning

Principles of Self-Organization

Self-Organizing Maps (SOM)

Willshaw-von der Malsburg model

Kohonen Feature Maps

Computer Examples


Page 3: Neural Networks:  Self-Organizing Maps (SOM)


Unsupervised Learning

Self-organized learning (neurobiological learning):

The learning algorithm is supplied with a set of rules of local behavior that it uses to compute an input–output mapping with desirable properties.

The term "local" means that the weight adjustments are confined to the immediate neighborhood of the neuron.

Page 4: Neural Networks:  Self-Organizing Maps (SOM)


Principles of Self-Organization

Principle 1: Self-amplification (self-reinforcement)

"Modifications in the synaptic weights of a neuron tend to self-amplify in accordance with Hebb's postulate of learning, which is made possible by synaptic plasticity (adjustability)."

Hebb's postulate (1949): When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic changes take place in one or both cells such that A's efficiency as one of the cells firing B is increased.

Hebb's postulate is modified into this two-part rule (1976):

1) If two neurons on either side of a synapse (connection) are activated simultaneously (i.e., synchronously), then the strength of that synapse is selectively increased.

2) If two neurons on either side of a synapse are activated asynchronously, then that synapse is selectively weakened or eliminated.

Page 5: Neural Networks:  Self-Organizing Maps (SOM)


Principles of Self-Organization

A Hebbian synapse could be defined as:

"A synapse that uses a time-dependent, highly local, and strongly interactive mechanism to increase synaptic efficiency as a function of the correlation between the presynaptic and postsynaptic activities."

Hebbian learning in mathematical terms:

Consider a neuron $k$ with synaptic weight $w_{kj}$, with presynaptic and postsynaptic signals denoted by $x_j$ and $y_k$, respectively. Then, at time step $n$, the weight is updated as

$$\Delta w_{kj}(n) = f\big(y_k(n),\, x_j(n)\big)$$

which could take many forms. Its simplest form is

$$\Delta w_{kj}(n) = \eta\, y_k(n)\, x_j(n)$$
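To make the rule concrete, here is a minimal sketch of the simple Hebbian update in Python (NumPy is assumed; the array names and default learning rate are illustrative, not from the slides):

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    """Simple Hebbian rule: delta_w[k, j] = eta * y[k] * x[j].

    w   : (l, m) weight matrix, one row per output neuron
    x   : (m,) presynaptic signal vector
    y   : (l,) postsynaptic signal vector
    eta : learning-rate parameter (illustrative default)
    """
    # The outer product computes eta * y_k * x_j for every synapse (k, j).
    return w + eta * np.outer(y, x)
```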

Page 6: Neural Networks:  Self-Organizing Maps (SOM)


Principles of Self-Organization

Principle 2: Competition

"The limitation of available resources (e.g., energy), in one form or another, leads to competition among the synapses of a single neuron or an assembly of neurons, with the result that the most vigorously growing (i.e., fittest) synapses or neurons, respectively, are selected at the expense of the others."

This principle is made possible by synaptic plasticity (i.e., adjustability of a synaptic weight). Accordingly, only the “successful” synapses can grow in strength, while the less successful synapses tend to weaken and may eventually disappear altogether.

Competitive learning: the winner-takes-all neuron.

The winning neurons assume the role of feature detectors.

Page 7: Neural Networks:  Self-Organizing Maps (SOM)


Principles of Self-Organization

Principle 3: Cooperation

"Modifications in synaptic weights at the neural level and in neurons at the network level tend to cooperate with each other."

A single synapse on its own cannot efficiently produce favorable events. Rather, there has to be cooperation among the neuron's synapses, making it possible to carry coincident signals strong enough to activate that neuron.

At the network level, cooperation may take place through lateral interaction among a group of excited neurons. That cooperative system evolves over time, through a sequence of small changes from one configuration to another, until an equilibrium condition is established.

In a self-organizing system competition always precedes cooperation.

Page 8: Neural Networks:  Self-Organizing Maps (SOM)


Principles of Self-Organization

Principle 4: Structural Information

"The underlying order and structure that exist in an input signal represent redundant information, which is acquired by a self-organizing system in the form of knowledge."

Structural information contained in the input data is therefore a prerequisite to self-organized learning.

Note that whereas self-amplification, competition, and cooperation are processes that are carried out within a neuron or a neural network, structural information, or redundancy, is an inherent characteristic of the input signal.

Page 9: Neural Networks:  Self-Organizing Maps (SOM)


Principles of Self-Organization

Summarizing Remarks

The neurobiologically motivated rules of self-organization hold for the unsupervised training of neural networks, but not necessarily for more general learning machines that are required to perform unsupervised-learning tasks.

In any event, the goal of unsupervised learning is to fit a model to a set of unlabeled input data in such a way that the underlying structure of the data is well represented.

Self-amplification, competition, and cooperation are the main processes in self-organization.

It is essential, for the model to be realizable, that the data be structured.

Page 10: Neural Networks:  Self-Organizing Maps (SOM)

Self-Organizing Maps (SOM)


Page 11: Neural Networks:  Self-Organizing Maps (SOM)


Self-Organizing Maps

Self-Organizing Maps (SOM) are special classes of artificial neural networks, which are based on competitive learning.

In competitive learning the output neurons of the network compete among themselves to be activated or fired, with the result that only one output neuron, or one neuron per group, is on at any one time.

The neuron that wins the competition is called the winner-takes-all neuron, or simply the winning neuron.


Page 12: Neural Networks:  Self-Organizing Maps (SOM)


Self-Organizing Maps

One way of including a winner-takes-all competition among the output neurons is to use lateral inhibitory connections (i.e., negative feedback paths) between them (Rosenblatt, 1958).


Page 13: Neural Networks:  Self-Organizing Maps (SOM)


Self-Organizing Maps

In a SOM, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional. Higher-dimensional maps are also possible but not as common.

The neurons become selectively tuned to various input patterns (stimuli) or classes of input patterns. That is, the neurons become ordered such that a meaningful coordinate system for different input features is created over the lattice.


Page 14: Neural Networks:  Self-Organizing Maps (SOM)


Self-Organizing Maps

A self-organizing map is therefore characterized by the formation of a topographic map of the input patterns, in which the spatial locations (i.e., coordinates) of the neurons in the lattice are indicative of intrinsic statistical features contained in the input patterns—hence, the name “self-organizing map.”

The self-organizing map is inherently nonlinear.


Page 15: Neural Networks:  Self-Organizing Maps (SOM)


The Human Brain

Figure 4: Cytoarchitectural map of the cerebral cortex. The different areas are identified by the thickness of their layers and the types of cells within them. Some of the key sensory areas are as follows: Motor cortex: motor strip, area 4; premotor area, area 6; frontal eye fields, area 8. Somatosensory cortex: areas 3, 1, and 2. Visual cortex: areas 17, 18, and 19. Auditory cortex: areas 41 and 42. (A. Brodal, Neurological Anatomy in Relation to Clinical Medicine, 3rd ed., Oxford University Press, 1981.)


Page 16: Neural Networks:  Self-Organizing Maps (SOM)


Computational Maps of the Brain

What is equally impressive is the way in which different sensory inputs (motor, somatosensory, visual, auditory, etc.) are mapped onto corresponding areas of the cerebral cortex in an orderly fashion:

1. In each map, neurons act in parallel and process pieces of information that are similar in nature, but originate from different regions in the sensory input space.

2. At each stage of representation, each incoming piece of information is kept in its proper context.

3. Neurons dealing with closely related pieces of information are close together so that they can interact via short synaptic connections.

4. Contextual maps can be understood in terms of decision-reducing mappings from higher-dimensional parameter spaces onto the cortical surface.


Page 17: Neural Networks:  Self-Organizing Maps (SOM)


Self-Organizing Maps

Goal: to build artificial topographic maps that learn through self-organization in a neurobiologically inspired manner.

Principle of topographic map formation (Kohonen, 1990):

"The spatial location of an output neuron in a topographic map corresponds to a particular domain or feature of data drawn from the input space."

This principle has provided the neurobiological motivation for two different feature-mapping models: the Willshaw–von der Malsburg model and the Kohonen model.


Page 18: Neural Networks:  Self-Organizing Maps (SOM)


Willshaw-von der Malsburg Model

The Willshaw–von der Malsburg model: two separate 2-D lattices of neurons connected together, with the input (presynaptic) neurons projecting onto the output (postsynaptic) neurons.

The postsynaptic lattice uses a short-range excitatory mechanism as well as a long-range inhibitory mechanism.

The layers use Hebbian learning; a threshold is used to ensure that only a few neurons fire at any one time.

Each neuron is limited by an upper boundary condition to prevent a steady buildup (instability). Thus, for each neuron, some synaptic weights increase while others decrease.


Page 19: Neural Networks:  Self-Organizing Maps (SOM)


Willshaw-von der Malsburg Model

The basic idea of the Willshaw–von der Malsburg model:

The model codes the geometric proximity of presynaptic neurons in the form of correlations in their electrical activity.

The postsynaptic lattice uses these correlations so as to connect neighboring presynaptic neurons to neighboring postsynaptic neurons.

A topologically ordered mapping is thereby produced through a process of self-organization.

Note that in the Willshaw–von der Malsburg model, the input dimension is the same as the output dimension.


Page 20: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen Model

Characteristics of the Kohonen model:

It captures the essential features of computational (cortical) maps, yet remains computationally tractable.

It is more general than the Willshaw–von der Malsburg model; it is capable of performing data compression (dimensionality reduction), so it belongs to the class of vector-coding algorithms.

It provides a topological mapping that optimally places a fixed number of vectors (i.e., code words) into a higher-dimensional input space, thereby facilitating data compression.

Can be derived in two ways:

Self-organization (neurobiological)

Vector quantization (communication theory)

Page 21: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen Model

An example of mapping an input vector onto a two-dimensional lattice.

Page 22: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

The Kohonen learning algorithm starts by initializing the synaptic weights with small random values, so that no prior order is imposed on the feature map. It then performs the following three processes:

Competition: For each input pattern, the neurons compute a discriminant function. The particular neuron with the largest value is declared the winner.

Cooperation: The winning neuron determines the spatial location of a topological neighborhood of excited neurons, thereby providing the basis for cooperation among such neighboring neurons.

Synaptic adaptation: Adjustments are applied to the synaptic weights of the excited neurons such that the response of the winning neuron is enhanced for similar input patterns.


Page 23: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

Competition Process: Suppose $m$ is the dimension of the input space:

$$\mathbf{x} = [x_1, x_2, \ldots, x_m]^T$$

If $l$ is the number of neurons in the output layer, then the synaptic-weight vector of each neuron is

$$\mathbf{w}_j = [w_{j1}, w_{j2}, \ldots, w_{jm}]^T, \quad j \in [1, l]$$

We now compute the inner product $\mathbf{w}_j^T \mathbf{x}$ for each neuron and select the largest. This is equivalent to computing

$$i(\mathbf{x}) = \arg\min_j \|\mathbf{x} - \mathbf{w}_j\|, \quad j \in [1, l]$$

where $i(\mathbf{x})$ is the index of the particular neuron that satisfies this condition; it is called the best-matching, or winning, neuron for the vector $\mathbf{x}$.
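As a minimal sketch of this competition step (NumPy assumed; the function name is illustrative), the winning neuron can be found as follows:

```python
import numpy as np

def best_matching_neuron(W, x):
    """Return i(x), the index of the winning neuron for input x.

    W : (l, m) matrix whose rows are the weight vectors w_j
    x : (m,) input vector
    """
    # i(x) = arg min_j ||x - w_j||  (Euclidean distance to each row of W)
    distances = np.linalg.norm(W - x, axis=1)
    return int(np.argmin(distances))
```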

Page 24: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

Cooperative Process:

The winning neuron locates the center of a topological neighborhood of cooperating neurons, where $d_{j,i}$ denotes the lateral distance between the winning neuron $i$ and the excited neuron $j$.

We may assume that the topological neighborhood function $h_{j,i}$ satisfies:

The neighborhood $h_{j,i}$ is symmetric around the winning neuron.

The amplitude of $h_{j,i}$ decreases monotonically with increasing $d_{j,i}$.

Page 25: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

Cooperative Process (Cont.):

A good choice is the Gaussian function

$$h_{j,i(\mathbf{x})} = \exp\!\left(-\frac{d_{j,i}^2}{2\sigma^2}\right), \quad j \in [1, l]$$

For stability, the size of the topological neighborhood should shrink with time $n$. A popular choice is exponential decay:

$$\sigma(n) = \sigma_0 \exp\!\left(-\frac{n}{\tau_1}\right), \quad n = 0, 1, 2, \ldots$$

Then the time-varying neighborhood function is

$$h_{j,i(\mathbf{x})}(n) = \exp\!\left(-\frac{d_{j,i}^2}{2\sigma^2(n)}\right)$$
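A small sketch of these two formulas in Python (variable names are illustrative assumptions):

```python
import numpy as np

def neighborhood(d, n, sigma0, tau1):
    """Time-varying Gaussian neighborhood h_{j,i(x)}(n).

    d      : lateral distance(s) d_{j,i} from the winning neuron
    n      : current time step
    sigma0 : initial width sigma_0 of the neighborhood
    tau1   : time constant tau_1 of the exponential decay
    """
    sigma = sigma0 * np.exp(-n / tau1)          # sigma(n) = sigma_0 exp(-n / tau_1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))
```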

Page 26: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

Adaptive Process:

First, we modify the Hebbian learning rule to overcome a steady buildup by including a forgetting term, $g(y_j)\mathbf{w}_j$; that is,

$$\Delta \mathbf{w}_j = \eta\, y_j \mathbf{x} - g(y_j)\mathbf{w}_j$$

where $\eta$ is the learning rate. We choose a linear function for the forgetting term, $g(y_j) = \eta y_j$, and take $y_j = h_{j,i(\mathbf{x})}$; then

$$\Delta \mathbf{w}_j = \eta\, h_{j,i(\mathbf{x})} (\mathbf{x} - \mathbf{w}_j), \quad i: \text{winning neuron}, \quad j: \text{excited (activated) neuron}$$

Then, given the weight vector $\mathbf{w}_j(n)$ of neuron $j$ at time $n$, we update it according to

$$\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta(n)\, h_{j,i(\mathbf{x})}(n)\, \big(\mathbf{x}(n) - \mathbf{w}_j(n)\big)$$

where $\eta(n)$ is the time-varying learning-rate parameter and $h_{j,i(\mathbf{x})}(n)$ is the time-varying neighborhood function.
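As a sketch of one adaptation step (reusing the best_matching_neuron and neighborhood helpers above; the default parameter values are assumptions consistent with the next slide, and a simple 1-D lattice distance is used for illustration):

```python
import numpy as np

def som_update(W, x, n, eta0=0.1, tau2=1000.0, sigma0=3.0, tau1=1000.0):
    """One step: w_j <- w_j + eta(n) * h_{j,i(x)}(n) * (x - w_j)."""
    i = best_matching_neuron(W, x)           # competition
    eta = eta0 * np.exp(-n / tau2)           # eta(n) = eta_0 exp(-n / tau_2)
    d = np.abs(np.arange(len(W)) - i)        # d_{j,i} on a 1-D lattice (index distance)
    h = neighborhood(d, n, sigma0, tau1)     # cooperation
    return W + eta * h[:, None] * (x - W)    # adaptation
```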

Page 27: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

Adaptive Process (Cont.):

Two-phase adaptive process:

Self-organizing or ordering phase:

Takes many iterations, possibly 1000 or more. We carefully choose the parameters so that $\eta(n)$ starts at 0.1 and stays above 0.01. The following are good choices:

$$\eta_0 = 0.1, \quad \tau_2 = 1000, \quad \tau_1 = \frac{1000}{\log \sigma_0}$$

Convergence phase:

The learning rate and the Gaussian spread are held at small fixed values for the remainder of the SOM's execution.
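A small sketch of this two-phase schedule (the convergence-phase constants shown are illustrative assumptions):

```python
import numpy as np

def learning_parameters(n, sigma0, eta0=0.1, tau2=1000.0, n_ordering=1000):
    """eta(n) and sigma(n) for the ordering and convergence phases."""
    if n < n_ordering:                       # ordering phase: exponential decay
        tau1 = 1000.0 / np.log(sigma0)       # tau_1 = 1000 / log(sigma_0)
        eta = eta0 * np.exp(-n / tau2)       # starts at 0.1, stays above 0.01
        sigma = sigma0 * np.exp(-n / tau1)
    else:                                    # convergence phase: small fixed values
        eta, sigma = 0.01, 1.0               # assumed constants, not from the slides
    return eta, sigma
```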

Page 28: Neural Networks:  Self-Organizing Maps (SOM)


Kohonen SOM Learning Algorithm

1. Initialization: Initialize the weights randomly (all weights should be different).

2. Sampling: Draw a sample $\mathbf{x}$ randomly from the input space according to a certain probability.

3. Similarity matching: Find the best-matching (winning) neuron:

$$i(\mathbf{x}) = \arg\min_j \|\mathbf{x} - \mathbf{w}_j\|, \quad j \in [1, l]$$

4. Weight updating: Adjust the synaptic weights of all excited neurons:

$$\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta(n)\, h_{j,i(\mathbf{x})}(n)\, \big(\mathbf{x}(n) - \mathbf{w}_j(n)\big)$$

5. Continuation: Go back to step 2 and continue until no noticeable changes in the feature map are observed.
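Putting the five steps together, here is a minimal end-to-end sketch in Python (the 1-D lattice, the uniform input data, and all parameter values are illustrative assumptions):

```python
import numpy as np

def train_som(data, n_neurons=100, n_iter=5000, eta0=0.1, tau2=1000.0, sigma0=5.0):
    """Train a 1-D Kohonen SOM on `data` of shape (N, m)."""
    rng = np.random.default_rng(0)
    tau1 = 1000.0 / np.log(sigma0)                    # tau_1 = 1000 / log(sigma_0)
    W = rng.random((n_neurons, data.shape[1]))        # 1. Initialization
    for n in range(n_iter):                           # 5. Continuation (fixed count)
        x = data[rng.integers(len(data))]             # 2. Sampling
        i = np.argmin(np.linalg.norm(W - x, axis=1))  # 3. Similarity matching
        eta = eta0 * np.exp(-n / tau2)                # learning rate eta(n)
        sigma = sigma0 * np.exp(-n / tau1)            # neighborhood width sigma(n)
        d = np.abs(np.arange(n_neurons) - i)          # lattice distances d_{j,i}
        h = np.exp(-d**2 / (2 * sigma**2))            # neighborhood h_{j,i(x)}(n)
        W += eta * h[:, None] * (x - W)               # 4. Weight updating
    return W

# Example: one-dimensional lattice driven by a two-dimensional stimulus (cf. Figure 9.9).
W = train_som(np.random.default_rng(1).random((1000, 2)))
```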

Page 29: Neural Networks:  Self-Organizing Maps (SOM)


Computer Experiments

1. Two-dimensional lattice driven by a two-dimensional stimulus:

Figure 9.8: (a) Distribution of the input data. (b) Initial condition of the two-dimensional lattice. (c) Condition of the lattice at the end of the ordering phase. (d) Condition of the lattice at the end of the convergence phase. The times indicated under maps (b), (c), and (d) represent the numbers of iterations.

Page 30: Neural Networks:  Self-Organizing Maps (SOM)


Computer Experiments

2. One-dimensional lattice driven by a two-dimensional stimulus:

Figure 9.9: (a) Distribution of the two-dimensional input data. (b) Initial condition of the one-dimensional lattice. (c) Condition of the one-dimensional lattice at the end of the ordering phase. (d) Condition of the lattice at the end of the convergence phase. The times included under maps (b), (c), and (d) represent the numbers of iterations.

Page 31: Neural Networks:  Self-Organizing Maps (SOM)


Computer Experiments: Contextual Maps

Page 32: Neural Networks:  Self-Organizing Maps (SOM)


Computer Experiments: Contextual Maps

Figure 9.10: Feature map containing labeled neurons with strongest responses to their respective inputs.

Page 33: Neural Networks:  Self-Organizing Maps (SOM)


Computer Experiments: Contextual Maps

Figure 9.11: A semantic map, or contextual map, obtained through the use of simulated electrode penetration mapping. The map is divided into three regions, representing birds (white), peaceful species (grey), and hunters (blue).

Page 34: Neural Networks:  Self-Organizing Maps (SOM)

END OF THE COURSE

HOPE YOU ENJOYED IT