Download - Unsupervised Learning: Part II Self-Organizing Maps

Transcript
  • 8/3/2019 Unsupervised Learning: Part II Self-Organizing Maps

    1/36

    Klinkhachorn:CpE520

    Unsupervised Learning: Part II Self-Organizing Maps


    Classification of ANN Paradigms

    Unsupervised


    Self-Organizing Maps

    Presented by Kohonen in 1988

    Two-layer network: an input layer and a competitive layer

    Develops a topological map of the relationships between the input patterns: an orderly progression of inputs produces an orderly progression of outputs


    Teuvo Kohonen, Dr. Eng., Emeritus Professor of the Academy of Finland; Academician

    His research areas are the theory of self-organization, associative memories, neural networks, and pattern recognition, in which he has published over 300 research papers and four monographs. His fifth book is on digital computers. His more recent work is expounded in the third, extended edition (2001) of his book Self-Organizing Maps.

    Since the 1960s, Professor Kohonen has introduced several new concepts to neural computing: fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, the self-organizing feature maps (SOMs), the learning vector quantization (LVQ), novel algorithms for symbol processing such as redundant hash addressing, dynamically expanding context and a special SOM for symbolic data, and a SOM called the Adaptive-Subspace SOM (ASSOM) in which invariant-feature filters emerge. A new SOM architecture, WEBSOM, has been developed in his laboratory for exploratory textual data mining. In the largest WEBSOM implemented so far, about seven million documents have been organized in a one-million-neuron network; for smaller WEBSOMs, see the demo at http://websom.hut.fi/websom/.

    http://www.cis.hut.fi/research/som-research/teuvo.html


    Self-Organizing Maps


    SOM - Training algorithm

    Determine the winner of the competition by either of two methods (both find the weight vector that best matches the input vector):

    1. Calculate the difference between the weight and input vectors for each neuron:

       Dj = Σi (xi - wji)²

       Winner = Dc = min over j of Dj

    2. Normalize the inputs (as presented to the network) and the weight vectors (as updated), then calculate the dot product:

       Dj = Σi xi · wji

       Winner = Dc = max over j of Dj (for normalized vectors, the largest dot product is the best match)
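A minimal sketch of the two winner-selection methods above (NumPy; all names are illustrative — note that with normalized vectors the best match maximizes the dot product):

```python
import numpy as np

def winner_euclidean(x, W):
    # Method 1: squared Euclidean distance Dj = sum_i (xi - wji)^2;
    # the winner minimizes Dj. W holds one weight row per neuron.
    D = np.sum((x - W) ** 2, axis=1)
    return int(np.argmin(D))

def winner_dot(x, W):
    # Method 2: normalize input and weight vectors, then take the
    # dot product Dj = sum_i xi * wji; the winner maximizes Dj.
    xn = x / np.linalg.norm(x)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return int(np.argmax(Wn @ xn))
```

With normalized vectors both methods pick the same winner, since |x - w|² = 2 - 2·(x · w).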


    Vector Normalization


    Example - Preserve Direction

    Divide each element of the vector by the magnitude of the vector.

    If inputs = 6 and 8:
    Magnitude = √(6² + 8²) = 10
    Normalized inputs = 0.6 and 0.8

    If inputs = 3 and 4:
    Magnitude = √(3² + 4²) = 5
    Normalized inputs = 0.6 and 0.8

    Both vectors normalize to the same point, so direction is preserved but relative magnitude is lost.
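The divide-by-magnitude step can be checked directly (NumPy sketch):

```python
import numpy as np

def normalize(x):
    # divide each element by the vector's magnitude (Euclidean norm)
    return x / np.linalg.norm(x)
```

Both (3, 4) and (6, 8) map to (0.6, 0.8), confirming that direction survives while magnitude information is lost.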


    Example - Preserve Relationship Between Vectors

    Add another element, d, which places the vector on the surface of a sphere of radius N, and then divide by the magnitude N.

    If inputs = 6 and 8, N = 15:
    d = √(15² - 10²) = 11.18
    Normalized inputs = 0.4, 0.53, 0.75

    If inputs = 3 and 4:
    d = √(15² - 5²) = 14.14
    Normalized inputs = 0.2, 0.27, 0.94
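A sketch of this extra-element normalization in plain Python (N is the sphere radius from the example above):

```python
import math

def sphere_normalize(x, N):
    # append an element d that places (x1, ..., xn, d) on a sphere of
    # radius N, then divide everything by N; unlike plain normalization,
    # vectors of different magnitude stay distinguishable
    d = math.sqrt(N * N - sum(v * v for v in x))  # requires |x| <= N
    return [v / N for v in x] + [d / N]
```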


    SOM - Training algorithm

    Update the weights of the winner (Dc) and the weights of all other neurons in the neighborhood of the winner:

    Wji(t+1) = Wji(t) + a(xi - Wji(t))

    for all neurons in the neighborhood of the winner.

    The choice of neighborhood (size and dimensionality) affects network performance and function.

    The neighborhood and the gain should be decreased as time progresses.
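The update rule Wji(t+1) = Wji(t) + a(xi - Wji(t)) as a NumPy sketch (here `neighborhood` is simply a list of unit indices; names are illustrative):

```python
import numpy as np

def update_weights(W, x, neighborhood, a):
    # move each neighborhood unit's weight vector a fraction `a`
    # of the way toward the input x
    for j in neighborhood:
        W[j] += a * (x - W[j])
    return W
```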


    SOM - Neighborhoods


    SOM Parameter Adjustments

    The gain parameter, a, should decrease as time progresses:

    a = A*(1 - t/T)    where A = initial gain, T = final iteration time

    The neighborhood size, d, should also decrease as time progresses:

    d = D*(1 - t/T)    where D = initial neighborhood, T = final iteration time
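The two linear schedules can be written directly:

```python
def gain(t, T, A):
    # a = A*(1 - t/T): decays linearly from A at t = 0 to 0 at t = T
    return A * (1 - t / T)

def neighborhood_size(t, T, D):
    # d = D*(1 - t/T): the same linear shrink for the neighborhood
    return D * (1 - t / T)
```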


    SOM Training Summary

    Locate the winner of the competitive layer

    Adapt the weights of all units in the neighborhood to increase the similarity between the input and weight vectors

    Decrease the gain and neighborhood as time progresses
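The three steps above combine into one loop. A hedged sketch for a 1-D chain of units (all parameter values and names are illustrative):

```python
import numpy as np

def train_som(data, n_units, T, A=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, data.shape[1]))   # random initial weights
    D = n_units // 2                           # initial neighborhood size
    for t in range(T):
        x = data[rng.integers(len(data))]      # random training pattern
        c = int(np.argmin(np.sum((x - W) ** 2, axis=1)))  # locate winner
        a = A * (1 - t / T)                    # decaying gain
        d = int(D * (1 - t / T))               # decaying neighborhood
        lo, hi = max(0, c - d), min(n_units, c + d + 1)
        W[lo:hi] += a * (x - W[lo:hi])         # adapt neighborhood weights
    return W
```

After enough iterations the weight rows spread out to cover the region of input space the training patterns occupy.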


    SOM Example 1

    The weight vector distribution will

    approximate the probability density

    distribution of the input space


    SOM Weight Maps

    Iteration: 100 Iteration: 1200


    SOM Example 2

    Input Distribution Weight Map after 1700 iterations


    SOM 1-D Neighborhood


    SOM Example 3

    The weight vector distribution will

    approximate the probability density

    distribution of the input space


    SOM Weight Map

    Iteration: 100 Iteration: 3400


    SOM Characteristics

    Dimension mapping (the dimensions of the input space are mapped to the dimensions of the output space)

    Variable discrimination among input patterns due to frequency of occurrence

    The weight distribution tends to approximate the probability density of the input vectors


    SOM Example 4

    Input distribution


    SOM Example 4

    Initial Weight Map

    Weight maps at successive iterations of training

    SOM Example 5

    Traveling Salesman Problem
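The slides apply a SOM to the traveling salesman problem; a common formulation is a 1-D ring of units whose weight vectors are 2-D city coordinates, so the trained ring traces a tour. A hedged sketch under that assumption (parameters illustrative):

```python
import numpy as np

def som_tsp(cities, T=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = 2 * len(cities)                       # more ring units than cities
    W = cities[rng.integers(len(cities), size=n)] + rng.normal(0, 0.01, (n, 2))
    for t in range(T):
        x = cities[rng.integers(len(cities))]
        c = int(np.argmin(np.sum((x - W) ** 2, axis=1)))   # winner
        a = 0.8 * (1 - t / T)                              # decaying gain
        d = max(1, int((n // 4) * (1 - t / T)))            # decaying radius
        for off in range(-d, d + 1):          # neighborhood wraps around ring
            j = (c + off) % n
            W[j] += a * (x - W[j])
    # read the tour off the ring: order cities by their winning unit
    ring_pos = [int(np.argmin(np.sum((city - W) ** 2, axis=1))) for city in cities]
    return np.argsort(ring_pos)
```

Because neighboring ring units end up near each other in the plane, visiting the cities in ring order yields a short (though not necessarily optimal) tour.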



    SOM Example 5

    Color Reduction Map

    24 bits RGB


    256 color Indexed



    Typical color reduction map


    Statistical clustering method



    Kohonen Neural Network
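The color-reduction example can be sketched as vector quantization with a 1-D SOM: train 256 units on sampled pixel colors, use the learned weight vectors as the palette, and index each pixel by its winning unit. A sketch under those assumptions (parameters illustrative):

```python
import numpy as np

def som_palette(pixels, n_colors=256, T=5000, seed=0):
    rng = np.random.default_rng(seed)
    palette = rng.random((n_colors, 3))        # RGB weight vectors in [0, 1]
    for t in range(T):
        x = pixels[rng.integers(len(pixels))]
        c = int(np.argmin(np.sum((x - palette) ** 2, axis=1)))  # winner
        a = 0.5 * (1 - t / T)                  # decaying gain
        d = int(8 * (1 - t / T))               # decaying neighborhood
        lo, hi = max(0, c - d), min(n_colors, c + d + 1)
        palette[lo:hi] += a * (x - palette[lo:hi])
    # index each pixel by its nearest palette entry
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    return palette, np.argmin(dists, axis=1)
```

Because the weight distribution tracks the input density, frequent colors get more palette entries than rare ones, which is what makes the SOM competitive with statistical clustering methods for 24-bit RGB to 256-color reduction.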