Neural Networks: Self-Organizing Maps (SOM)


  • CHAPTER 9

    UNSUPERVISED LEARNING: SELF-ORGANIZING MAPS (SOM)

    CSC445: Neural Networks

    Prof. Dr. Mostafa Gadal-Haqq M. Mostafa

    Computer Science Department

    Faculty of Computer & Information Sciences

    AIN SHAMS UNIVERSITY

    (Most of the figures in this presentation are copyrighted by Pearson Education, Inc.)

  • ASU-CSC445: Neural Networks

    Prof. Dr. Mostafa Gadal-Haqq

    Unsupervised Learning

    Principles of Self-Organization

    Self-Organizing Maps (SOM)

    Willshaw-von der Malsburg model

    Kohonen Feature Maps

    Computer Examples


    Outline

  • Unsupervised Learning

    Self-organized learning (neurobiological learning):

    The learning algorithm is supplied with a set of rules of local behavior, which it uses to compute an input-output mapping with desirable properties.

    The term local means that the adjustments of the weights are confined to the immediate neighborhood of the neuron.

  • Principles of Self-Organization

    Principle 1: Self-amplification (self-reinforcement). Modifications in the synaptic weights of a neuron tend to self-amplify, in accordance with Hebb's postulate of learning; this is made possible by synaptic plasticity (adjustability).

    Hebb's postulate (1949): "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

    Hebb's postulate was modified into this two-part rule (1976):

    1) If two neurons on either side of a synapse (connection) are activated simultaneously (i.e., synchronously), then the strength of that synapse is selectively increased.

    2) If two neurons on either side of a synapse are activated asynchronously, then that synapse is selectively weakened or eliminated.

  • Principles of Self-Organization

    A Hebbian synapse can be defined as: a synapse that uses a time-dependent, highly local, and strongly interactive mechanism to increase synaptic efficiency as a function of the correlation between the presynaptic and postsynaptic activities.

    Hebbian learning in mathematical terms:

    Consider a neuron k with synaptic weight w_kj, and presynaptic and postsynaptic signals denoted by x_j and y_k, respectively. At time step n, the weight adjustment is:

    Δw_kj(n) = F(x_j(n), y_k(n))

    which can take many forms. Its simplest form, the activity product rule, is:

    Δw_kj(n) = η y_k(n) x_j(n)

    where η is a positive learning-rate parameter.
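The activity-product rule can be sketched in a few lines of Python; the learning rate and the signal values below are illustrative choices for this example, not from the lecture.

```python
# Minimal sketch of the Hebbian activity-product rule.
def hebb_update(w, x, y, eta=0.1):
    """One Hebbian step: w(n+1) = w(n) + eta * y(n) * x(n)."""
    return w + eta * y * x

w = 0.5
w = hebb_update(w, x=1.0, y=1.0)  # synchronous activity: the weight grows
w = hebb_update(w, x=1.0, y=0.0)  # no postsynaptic activity: the weight is unchanged
```

Note that with this rule alone the weight can only grow under repeated synchronous activity, which is exactly the self-amplification that the later principles (competition, normalization) must keep in check.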

  • Principles of Self-Organization

    Principle 2: Competition. The limitation of available resources (e.g., energy), in one form or another, leads to competition among the synapses of a single neuron or an assembly of neurons, with the result that the most vigorously growing (i.e., fittest) synapses or neurons, respectively, are selected at the expense of the others.

    This principle is made possible by synaptic plasticity (i.e., adjustability of a synaptic weight). Accordingly, only the successful synapses can grow in strength, while the less successful synapses tend to weaken and may eventually disappear altogether.

    Competitive learning: the winner-takes-all neuron.

    The winning neurons assume the role of feature detectors.
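A winner-takes-all step can be implemented directly by selecting the output neuron whose weight vector is closest to the input; the Euclidean metric and the weight values below are illustrative assumptions for this sketch.

```python
# Winner-takes-all: the neuron whose weight vector is nearest the input wins.
def winner(weights, x):
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(range(len(weights)), key=lambda k: dist2(weights[k]))

weights = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(winner(weights, [1.0, 0.0]))  # neuron 0 wins: it detects this feature
```

Only the winner (and, later, its neighbors) gets its weights updated, which is how the winning neurons come to act as feature detectors.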

  • Principles of Self-Organization

    Principle 3: Cooperation. Modifications in synaptic weights at the neural level, and in neurons at the network level, tend to cooperate with each other.

    A single synapse on its own cannot efficiently produce favorable events. Rather, there has to be cooperation among a neuron's synapses, making it possible to carry coincident signals strong enough to activate that neuron.

    At the network level, cooperation may take place through lateral interaction among a group of excited neurons. That cooperative system evolves over time, through a sequence of small changes from one configuration to another, until an equilibrium condition is established.

    In a self-organizing system competition always precedes cooperation.

  • Principles of Self-Organization

    Principle 4: Structural Information. The underlying order and structure that exist in an input signal represent redundant information, which is acquired by a self-organizing system in the form of knowledge.

    Structural information contained in the input data is therefore a prerequisite to self-organized learning.

    Note that whereas self-amplification, competition, and cooperation are processes that are carried out within a neuron or a neural network, structural information, or redundancy, is an inherent characteristic of the input signal.

  • Principles of Self-Organization

    Summarizing Remarks

    The neurobiologically motivated rules of self-organization hold for the unsupervised training of neural networks, but not necessarily for more general learning machines that are required to perform unsupervised-learning tasks.

    In any event, the goal of unsupervised learning is to fit a model to a set of unlabeled input data in such a way that the underlying structure of the data is well represented.

    Self-amplification, competition, and cooperation are the main processes in self-organization.

    For the model to be realizable, it is essential that the data be structured.

  • Self-Organizing Maps (SOM)

    Neural Networks


  • Self-Organizing Maps

    Self-Organizing Maps (SOM) are special classes of artificial neural networks, which are based on competitive learning.

    In competitive learning the output neurons of the network compete among themselves to be activated or fired, with the result that only one output neuron, or one neuron per group, is on at any one time.

    The neuron that wins the competition is called the winner-takes-all neuron, or simply the winning neuron.


  • Self-Organizing Maps

    One way of inducing a winner-takes-all competition among the output neurons is to use lateral inhibitory connections (i.e., negative feedback paths) between them (Rosenblatt, 1958).
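The effect of such lateral inhibition can be sketched as a simple MAXNET-style iteration; the inhibition strength eps and the activity values are hypothetical choices for this example, not taken from the lecture.

```python
# MAXNET-style lateral inhibition: each neuron is suppressed in
# proportion to the total activity of all the other neurons, repeated
# until only one neuron remains active.
def maxnet(a, eps=0.2, max_iter=100):
    a = list(a)
    for _ in range(max_iter):
        total = sum(a)
        new = [max(0.0, ai - eps * (total - ai)) for ai in a]
        if sum(1 for v in new if v > 0) <= 1:
            return new
        a = new
    return a

print(maxnet([0.5, 0.9, 0.3]))  # only neuron 1, the most active, survives
```

The iteration implements the negative feedback paths directly: each neuron's activity feeds back as inhibition to all the others, so the initially strongest neuron drives its competitors to zero.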


  • Self-Organizing Maps

    In a SOM, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional. Higher-dimensional maps are also possible, but not as common.

    The neurons become selectively tuned to various input patterns (stimuli) or classes of input patterns. That is, the neurons become ordered such that a meaningful coordinate system for different input features is created over the lattice.


  • Self-Organizing Maps

    A self-organizing map is therefore characterized by the formation of a topographic map of the input patterns, in which the spatial locations (i.e., coordinates) of the neurons in the lattice are indicative of intrinsic statistical features contained in the input patterns; hence the name self-organizing map.

    The self-organizing map is inherently nonlinear.
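Competition, cooperation, and adaptation together give the Kohonen update on the lattice. A minimal one-dimensional sketch follows; the learning rate eta, neighborhood width sigma, lattice size, and the two-cluster input data are all illustrative assumptions for this example.

```python
import math
import random

def som_step(weights, x, eta=0.5, sigma=1.0):
    # Competition: the neuron whose weight vector is closest to x wins.
    i_star = min(range(len(weights)),
                 key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
    # Cooperation + adaptation: a Gaussian neighborhood around the winner
    # pulls each neuron's weights toward x, scaled by lattice distance.
    for i in range(len(weights)):
        h = math.exp(-((i - i_star) ** 2) / (2.0 * sigma ** 2))
        weights[i] = [w + eta * h * (v - w) for w, v in zip(weights[i], x)]
    return i_star

random.seed(0)
weights = [[random.random(), random.random()] for _ in range(5)]
for _ in range(200):
    som_step(weights, random.choice([[0.0, 0.0], [1.0, 1.0]]))
# After training, different parts of the lattice respond to the two clusters,
# i.e., the lattice forms a topographic map of the input distribution.
```

In practice both eta and sigma are decayed over training (large early for ordering, small later for convergence); they are held fixed here only to keep the sketch short.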


  • The Human Brain

    Figure 4. Cytoarchitectural map of the cerebral cortex. The different areas are identified by the thickness of their layers and the types of cells within them. Some of the key sensory areas are as follows: motor cortex: motor strip, area 4; premotor area, area 6; frontal eye fields, area 8. Somatosensory cortex: areas 3, 1, and 2. Visual cortex: areas 17, 18, and 19. Auditory cortex: areas 41 and 42. (A. Brodal, Neurological Anatomy in Relation to Clinical Medicine, 3rd ed., Oxford University Press, 1981.)


  • Computational Maps of the Brain

    What is equally impressive is the way in which different sensory inputs (motor, somatosensory, visual, auditory, etc.) are mapped onto corresponding areas of the cerebral cortex in an orderly fashion:

    1. In each map, neurons act in parallel and process pieces of information that are similar in nature, but originate from different regions in the sensory input space.

    2. At each stage of representation, each incoming piece of information is kept in its proper context.

    3. Neurons dealing with closely related pieces of information are close together, so that they can interact via short synaptic connections.

    4. Contextual maps can be understood in terms of decision-reducing mappings from higher-dimensional parameter spaces onto the cortical surface.


  • Self-Organizing Maps

    Goal: to build artificial topographic maps that learn through self-organization in a neurobiologically inspired manner.

    Principle of topographic map formation (Kohonen, 1990):

    The sp