
  • Artificial Neural Networks

    Part 16

  • Self Organizing Maps

    Design a SOM network

    selforgmap(dim,cover,initNeighb,topol,distFcn)

    dim          Row vector of dimension sizes (default = [8 8])
    cover        Number of training steps (default = 100)
    initNeighb   Initial neighborhood size (default = 3)
    topol        Layer topology function (default = 'hextop')
    distFcn      Neuron distance function (default = 'linkdist')

    % Example: cluster the built-in simple clusters dataset with an 8-by-8 SOM
    x = simplecluster_dataset;   % sample 2-D cluster data
    net = selforgmap([8 8]);     % create the SOM
    net = train(net,x);          % train on the data
    view(net)
    y = net(x);                  % competitive layer outputs
    classes = vec2ind(y);        % winning neuron (cluster) index per sample

  • Self Organizing Maps

    plotsompos(net,x)

  • Self Organizing Maps

    plotsomhits(net,x)

  • Self Organizing Maps

    plotsomplanes(net)

  • Self Organizing Maps

    plotsomnd(net)
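
    The four plots above can be generated together once the SOM has been trained; a minimal sketch, reusing net and x from the selforgmap example:

    plotsompos(net,x)     % codebook (weight) positions overlaid on the data
    plotsomhits(net,x)    % number of input vectors assigned to each neuron
    plotsomplanes(net)    % one weight plane per input variable
    plotsomnd(net)        % distances between neighboring neurons (U-matrix-like view)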

  • Self Organizing Maps

    http://michem.disat.unimib.it/chm/

    http://michem.disat.unimib.it/chm/download/kohoneninfo.htm

  • Self Organizing Maps

    % Kohonen maps with the Kohonen and CPANN toolbox (Milano Chemometrics)
    settings = som_settings('Kohonen');
    settings.nsize = 8;
    settings.epochs = 100;
    settings.bound = 'normal';
    settings.training = 'sequential';
    settings.topol = 'hexagonal';

    model = model_kohonen(X,settings);
    pred_koh = pred_kohonen(Xnew,model);

  • Self Organizing Maps

    % CPANN (counter-propagation ANN) model: fit, visualize, predict
    settings = som_settings('cpann');
    settings.nsize = 8;
    settings.epochs = 100;
    model_cp = model_cpann(X,class,settings);
    visualize_model(model_cp);
    pred_cp = pred_cpann(Xnew,model_cp);

    % cross-validation and classification parameters
    cv = cv_cpann(X,class,settings,1,5);
    cv.class_param.accuracy
    cv.class_param.ner
    cv.class_param.conf_mat

    Example conf_mat (rows: true class, columns: predicted class):

        50    0    0
         0   45    5
         0    4   46
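
    For this matrix, for instance, accuracy = (50 + 45 + 46)/150 = 0.94; the per-class sensitivities are 1.00, 0.90 and 0.92, so if ner is read as the non-error rate (the mean of the class sensitivities), it is also 0.94 here.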

  • Self Organizing Maps

    [Figure: a set of patterns arranged as a three-way array (i patterns × j feature vectors × k attributes); each pattern is described by a j × k feature matrix.]

    • Feature-based sensor fusion
    • Brain-inspired computing
    • It is possible to obtain a feature matrix (several feature vectors) per pattern.

  • Self Organizing Maps MOLMAP

  • Classification of multiway analytical data was carried out by means of a method for the calculation of molecular descriptors, called:

    MOLMAP (MOLecular Map of Atom-level Properties)

    The input of the MOLMAP approach is a three-way data array: molecules on the first mode, molecule bonds on the second mode and bond properties on the last mode. The data array is unfolded and used to train a Kohonen map.

    [Figure: the MOLMAP input as a three-way array (i molecules × j molecule bonds × k bond properties).]

    Self Organizing Maps MOLMAP

  • The MOLMAP approach requires two major steps:

    (a) Generation of MOLMAP scores by means of Kohonen maps and

    (b) development of predictive classification models which use MOLMAP scores as independent variables.

    Data pretreatment

    Data need to be presented to the Kohonen network in a form comparable with the Kohonen weights; therefore, when MOLMAP classification models are applied, the multiway data are always range scaled between 0 and 1, in the following way:

    x'_ijk = (x_ijk − min(X)) / (max(X) − min(X))

    where x'_ijk is the range-scaled value of the ijk-th element of X, and min(X) and max(X) are the minimum and maximum values of X.
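
    A minimal MATLAB sketch of this pretreatment, assuming the three-way array is stored in a variable X (hypothetical name) and that, as in the formula above, the minimum and maximum are taken over the whole array:

    % range scale the whole I-by-J-by-K array X to [0, 1]
    Xmin = min(X(:));
    Xmax = max(X(:));
    Xs = (X - Xmin) ./ (Xmax - Xmin);   % scaled ijk-th element = (x_ijk - min) / (max - min)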

    Self Organizing Maps MOLMAP

  • After scaling, the data are arranged into a two-dimensional matrix Xarr with I*J rows (input vectors) and K columns.

    [Figure: unfolding of the i × j × k array into an (i*j) × k matrix.]

    The MOLMAP approach requires the Kohonen map to be trained with this two-dimensional matrix, i.e. I*J input vectors (each composed of K values) are presented to the map.

    [Figure: the (i*j) × k unfolded matrix, with the molecular bonds of Molecule 1, Molecule 2, Molecule 3, ... occupying consecutive blocks of rows.]
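
    A minimal sketch of the unfolding step, assuming the scaled array Xs from the previous sketch; the chosen ordering keeps the J bond vectors of each molecule in consecutive rows:

    % unfold the I-by-J-by-K array into an (I*J)-by-K matrix of input vectors
    [I, J, K] = size(Xs);
    Xarr = reshape(permute(Xs, [2 1 3]), I*J, K);   % rows 1..J -> molecule 1, J+1..2J -> molecule 2, ...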

    Self Organizing Maps MOLMAP

  • A trained Kohonen map shows similarities between input vectors in the sense that similar vectors are mapped into the same or closely adjacent neurons.

    The pattern of activated neurons can be seen as a fingerprint of the object and constitutes its MOLMAP score.

    [Figure: the unfolded data (i*j input vectors of length K) are used to train an N × N Kohonen map; each input vector activates (wins) one neuron of the map.]

    A representation of the objects of the original multiway dataset can be obtained by projecting them onto the trained map, one at a time, and mapping the J input vectors of each multiway object.

    Self Organizing Maps MOLMAP

  • MOLMAP scores

    [Figure: the bond vectors V1–V5 of molecule i are projected onto the trained Kohonen map; the resulting pattern of activated neurons, read as an N × N grid of values, is the MOLMAP score for molecule i:]

        1.3  0.6  0.3  0.3  1.0
        0.6  1.3  0.3  0.3  0.3
        0.3  0.3  0.3  0.0  0.0
        0.3  0.3  0.0  0.3  0.3
        1.0  0.3  0.0  0.3  1.0

    For each molecule, a score vector can be obtained using this procedure.
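
    A count-based sketch of this scoring step, assuming Xarr, I and J from the unfolding sketch above and a SOM trained on it, e.g. net = train(selforgmap([8 8]), Xarr'); the small increments given to neighboring neurons in the figure are omitted here for brevity:

    % MOLMAP score matrix M: one row of N*N activation counts per molecule
    nNeurons = net.layers{1}.size;           % total number of map neurons (N*N)
    M = zeros(I, nNeurons);
    for m = 1:I
        Xmol = Xarr((m-1)*J+1 : m*J, :);     % the J bond vectors of molecule m
        y = net(Xmol');                      % competitive output: winning neuron per bond
        M(m, :) = full(sum(y, 2))';          % activation counts = MOLMAP score of molecule m
    end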

    Self Organizing Maps MOLMAP

  • Self Organizing Maps MOLMAP

    [Figure: the blocks of the (i*j) × k unfolded matrix corresponding to Molecule 1, Molecule 2, Molecule 3, ... are each projected onto the trained N × N Kohonen map.]

    The MOLMAP score matrix M is made up of the i object fingerprints calculated by means of the Kohonen map.

    [Figure: the i × (N × N) MOLMAP score matrix is used as the X block for classification or regression against a response Y.]

  • Self Organizing Maps MOLMAP


    One of the major advantages of this approach is that MOLMAP descriptors represent the bond properties of a molecular structure by a fixed-length code, allowing the comparison of molecules that have different numbers of bonds.

  • Learning Vector Quantization (LVQ) is a supervised method which was introduced by Kohonen as a simple, universal and efficient learning classifier.

    LVQ represents a family of algorithms that are widely used in the classification of potentially high-dimensional data.

    Their popularity and success in numerous applications is closely related to their easy implementation and their intuitively clear approach.

    [Figure: data points belonging to two classes, A and B.]

    Learning Vector Quantization

  • LVQ1

    [Figure: data points of Class A and Class B.]

    A complex boundary should be used for discrimination between the classes.

    Learning Vector Quantization

  • Learning Vector Quantization

    LVQ1

    [Figure: codebook vectors (Codebook A, Codebook B) placed among the Class A and Class B data points.]

  • Learning Vector Quantization

    LVQ1

    If the closest reference vector (codebook) wi belongs to a class other than that of the point x(t), it is moved away from x(t) in proportion to the distance between the two vectors:

    wi(t+1) = wi(t) - α(t) (x(t) – wi(t)), where α(t) is a monotonically decreasing function of time.

    [Figure: the nearest codebook w is pushed away from the point x of the other class.]

  • Learning Vector Quantization

    LVQ1

    If the closest reference vector (codebook) wi belongs to the same class as the training point x(t), it is moved closer to the point, in proportion to the distance between the two vectors:

    wi(t+1) = wi(t) + α(t) (x(t) – wi(t)), where α(t) is a monotonically decreasing function of time.

    [Figure: the nearest codebook w is pulled towards the point x of the same class.]
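
    A minimal sketch of one LVQ1 update in plain MATLAB (not a toolbox call); W holds one codebook per row, wClass their class labels, x is a training point (row vector) with label xClass, and alpha is the current learning rate (all names are hypothetical):

    % one LVQ1 step: attract or repel the codebook nearest to x
    [~, i] = min(sum((W - x).^2, 2));             % closest codebook (squared Euclidean distance, implicit expansion)
    if wClass(i) == xClass
        W(i,:) = W(i,:) + alpha * (x - W(i,:));   % same class: move towards x
    else
        W(i,:) = W(i,:) - alpha * (x - W(i,:));   % different class: move away from x
    end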

  • Learning Vector Quantization

    LVQ1

    When all points have been used in the training process once, one epoch is completed. Training continues until the maximum number of epochs has been reached or the codebooks have stabilized.

  • Learning Vector Quantization

    LVQ1

    [Figure: the trained codebook vectors partition the input space into Voronoi regions, one per codebook; each point is assigned the class of the codebook in whose Voronoi region it falls.]

  • Learning Vector Quantization

    LVQ Issues

     Initialization of the prototypes
     Dead neurons – codebooks that are too far away from the training examples and never move in the proper direction.
     Number of codebooks for each class
       Depends on the class structure in the data space and on the number of input vectors.
       Classification performance can be checked for different numbers of codebooks.
     Number of epochs
       Depends on the complexity of the data and on the learning rate.
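
    For completeness, the same MATLAB toolbox used in the SOM examples also provides an LVQ network; a minimal usage sketch (the iris_dataset sample data and the 10 competitive neurons are only illustrative choices):

    [x,t] = iris_dataset;      % 4-by-150 inputs, 3-by-150 one-hot class targets
    net = lvqnet(10);          % LVQ network with 10 competitive (codebook) neurons
    net = train(net,x,t);
    y = net(x);
    classes = vec2ind(y);      % predicted class index per sample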