
Page 1: Title

Study of Topographic and Equiprobable Mapping with Clustering for Fault Classification
Ashish Babbar, EE645 Final Project

Page 2: Introduction

A typical control system consists of four basic elements:

Dynamic plant
Controllers
Actuators
Sensors

Any malfunction in these components can result in an unacceptable anomaly in overall system performance. Such malfunctions are referred to as faults in a control system. The objective of fault detection and identification is to detect, isolate, and identify these faults so that system performance can be recovered.

Condition Based Maintenance (CBM) is the process of executing repairs when objective evidence indicates the need for such actions, in other words, when anomalies or faults are detected in a control system.

Page 3: Motivation

Model-based CBM can be applied when we have a mathematical model of the system to be monitored.

When CBM must be performed based only on the data available from sensors, data-driven methodologies are used instead.

The Self-Organizing Map (SOM) is widely used in data mining as a tool for exploration and analysis of large amounts of data.

It can be used for data reduction or vector quantization, so that we can analyze the system data for anomalies using only the data clusters formed from the trained map instead of the large initial data sets.

Page 4: Competitive Learning

[Figure: a two-layer competitive network; the input layer receives v, the output layer contains neurons i, i′.]

Assume a sequence of input samples v(t) in a d-dimensional input space and a lattice of N neurons, labeled i = 1, 2, ..., N, with corresponding weight vectors w_i(t) = [w_ij(t)].

If v(t) can be simultaneously compared with each weight vector of the lattice, then the best-matching weight, say w_i*, can be determined and updated so that it matches the current input even better.

As a result of this competitive learning, different weights become tuned to different regions of the input space.

Page 5: Self-Organizing Maps

The SOM is an unsupervised neural network technique which finds application in:

Density estimation, e.g. for clustering or classification purposes
Blind source separation
Visualization of data sets

It projects the input space onto prototypes on a low-dimensional regular grid, which can be effectively used to visualize and explore properties of the data.

The SOM consists of a regular, two-dimensional grid of map units (neurons).

Each unit i is represented by a prototype vector w_i(t) = [w_i1(t), ..., w_id(t)], where d is the input vector dimension.

Page 6: Self-Organizing Maps (Algorithm)

Given a data set, the number of map units (neurons) is first chosen. The number of map units can be selected to be approximately √N to 5√N, where N is the number of data samples in the given data set.

The SOM is trained iteratively. At each training step, a sample vector v is randomly chosen from the input data set.

Distances between v and all prototype vectors are computed. The Best Matching Unit (BMU), or the winner, denoted here by b, is the map unit whose prototype is closest to v:

||v − w_b|| = min_i {||v − w_i||}
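As a minimal illustration (my own sketch, not from the slides), the BMU search is a one-line nearest-neighbor query; `W` is assumed to hold the M prototype vectors as rows:

```python
import numpy as np

def best_matching_unit(v, W):
    """Return the index b of the prototype in W (M x d) closest to sample v (d,)."""
    distances = np.linalg.norm(W - v, axis=1)  # ||v - w_i|| for every unit i
    return np.argmin(distances)                # b = arg min_i ||v - w_i||
```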

Page 7: Self-Organizing Maps

The BMU (winner) and its topological neighbors are moved closer to the input vector in the input space:

w_i(t+1) = w_i(t) + α(t) h_bi(t) [v − w_i(t)]

where t is time, α(t) is the adaptation coefficient, and h_bi(t) is the neighborhood kernel centered on the winner unit:

h_bi(t) = exp(−||r_b − r_i||² / (2σ²(t)))

where r_b and r_i are the positions of neurons b and i on the SOM grid. Both α(t) and σ(t) decrease monotonically with time.
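A minimal sketch of one such training iteration, assuming a rectangular map whose unit positions are stored in `grid` and exponentially decaying α(t) and σ(t) (the decay schedule and parameter values are illustrative, not from the slides):

```python
import numpy as np

def som_step(v, W, grid, t, alpha0=0.5, sigma0=3.0, tau=1000.0):
    """One SOM update: move the BMU and its grid neighbors toward sample v.

    W    : (M, d) prototype vectors
    grid : (M, 2) positions r_i of the M units on the 2-D map lattice
    """
    alpha = alpha0 * np.exp(-t / tau)               # adaptation coefficient, decays with t
    sigma = sigma0 * np.exp(-t / tau)               # neighborhood radius, decays with t
    b = np.argmin(np.linalg.norm(W - v, axis=1))    # BMU index b
    h = np.exp(-np.sum((grid - grid[b])**2, axis=1) / (2 * sigma**2))  # h_bi(t)
    W += alpha * h[:, None] * (v - W)               # w_i(t+1) = w_i(t) + alpha h_bi [v - w_i(t)]
    return W
```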

Page 8: Clustering: The Two-Level Approach

Group the input data into clusters, where data points are placed in the same cluster if they are similar to one another.

A widely adopted definition of clustering is a partitioning that minimizes the distances within clusters and maximizes the distances between clusters.

Once the neurons are trained, the next step is clustering of the SOM. For this, the two-level approach is followed:

First, a set of neurons much larger than the expected number of clusters is formed using the SOM.
Next, these neurons are combined into the actual clusters using the k-means clustering technique.

The number of clusters is K and the number of neurons is M, with K << M << N.
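As a sketch of the two-level pipeline under stated assumptions (the `prototypes` array stands in for a trained map, and scikit-learn's KMeans is used for level 2; neither choice is specified in the slides):

```python
import numpy as np
from sklearn.cluster import KMeans

# Level 1 is assumed done: `prototypes` holds the M trained SOM weight
# vectors (M x d), obtained e.g. by repeated calls to som_step above.
prototypes = np.random.rand(100, 3)  # placeholder for a trained map, M=100, d=3

# Level 2: combine the M prototypes into K clusters with k-means (K << M).
K = 5
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(prototypes)
centers = kmeans.cluster_centers_    # the K cluster centers
labels = kmeans.labels_              # which cluster each prototype joined
```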

Page 9: Two-Level Approach

[Diagram: Level 1 reduces the N data samples to the M prototypes of the SOM; Level 2 groups the M prototypes into K clusters, with K < M < N.]

Page 10: Advantage of Using the Two-Level Approach

The primary benefit is the reduction of computational cost. Even with a relatively small number of data samples, many clustering algorithms become intractably heavy.

For example, with the two-level approach the computational load is reduced by a factor of about √N/15, or about six-fold for N = 10,000, compared with direct k-means clustering of the data.

Another benefit is noise reduction. The prototypes are local averages of the data and are therefore less sensitive to random variations than the original data.

Page 11: K-means: Deciding the Number of Clusters

The k-means algorithm was used at level 2 for clustering of the trained SOM neurons.

The k-means algorithm clusters the given data into k clusters, where we define k.

One method to decide the value of k is to run the algorithm from k = 2 to k = √N, where N is the number of data samples.

The k-means algorithm minimizes the error function:

E = Σ_{k=1..C} Σ_{x ∈ Q_k} ||x − c_k||²

where C is the number of clusters, Q_k is the set of points assigned to cluster k, and c_k is the center of cluster k.

The approach followed in this project was to pick the number of clusters as the value of k which makes the error E equal to 0.10 E′ to 0.15 E′ (10 to 15% of E′), where E′ is the error when k = 2.
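A sketch of this selection rule, using scikit-learn's KMeans, whose `inertia_` attribute is exactly the error E above; the helper name `choose_k` and the exact scan range are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_k(X, frac=0.15):
    """Pick k as the smallest value whose k-means error E drops to `frac` of
    E' (the error at k = 2), scanning k = 2 .. sqrt(N) as described above."""
    k_max = max(2, int(np.sqrt(len(X))))
    e_ref = None
    for k in range(2, k_max + 1):
        e = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_  # E for this k
        if k == 2:
            e_ref = e            # E' is the error at k = 2
        if e <= frac * e_ref:
            return k
    return k_max
```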

Page 12: Selection of the Number of Clusters

[Figure.]

Page 13: Reference Distance Calculation

The clusters formed from the training (nominal) data sets are used to calculate the reference distance (dRef).

Knowing the cluster centers calculated by the k-means algorithm and the prototypes (neurons) which formed a particular cluster, we calculate a reference distance for each cluster.

The reference distance of a particular cluster is the distance between the cluster center and the prototype belonging to that cluster which lies at the maximum distance from the cluster center.

The reference distance is calculated in this way for each of the clusters formed from the nominal data set, and serves as a baseline for fault detection.

To classify a given data cluster as nominal or faulty, this underlying structure of the initially known nominal data set is used.
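A minimal sketch of this computation, assuming `prototypes`, `labels`, and `centers` come from the level-2 k-means step above; the helper name is hypothetical:

```python
import numpy as np

def reference_distances(prototypes, labels, centers):
    """dRef per cluster: distance from the cluster center to the member
    prototype farthest from it (labels/centers from the level-2 k-means)."""
    d_ref = np.zeros(len(centers))
    for k in range(len(centers)):
        members = prototypes[labels == k]          # prototypes assigned to cluster k
        if len(members):                           # guard against an empty cluster
            d_ref[k] = np.linalg.norm(members - centers[k], axis=1).max()
    return d_ref
```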

Page 14: Fault Identification

The assumption made here is that the available nominal data sets, which are used to form the underlying cluster structure, span the space of all information that is not faulty.

The same procedure is then repeated for the unknown data sets (not known to be nominal or faulty): first the N data points are reduced to a mapping of M neurons, which are then clustered using the k-means algorithm.

Now, taking the training data clusters as centers and knowing the reference distance for each cluster, we check whether the clusters from the unknown data set lie within the region spanned by the radius equal to the specific reference distance for that training cluster.

Any unknown data set cluster which is not part of the region spanned by some training data cluster center and its reference distance radius is classified as faulty.
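A sketch of this test, under the assumption that each unknown cluster is represented by its center and is compared against every nominal region (the slides do not pin down this detail):

```python
import numpy as np

def classify_clusters(unknown_centers, train_centers, d_ref):
    """Mark an unknown cluster center faulty unless it falls inside at least
    one nominal region (ball around a training center with radius dRef)."""
    faulty = []
    for c in unknown_centers:
        dists = np.linalg.norm(train_centers - c, axis=1)  # distance to each nominal center
        faulty.append(not np.any(dists <= d_ref))          # inside no nominal ball -> fault
    return np.array(faulty)
```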

Page 15: Block Diagram

[Diagram: Training data (N samples) → Mapping Algorithm (M neurons) → Clustering (K clusters) → Reference Distance.
Unknown data (N samples) → Mapping Algorithm (M neurons) → Clustering (K clusters) → Distance Deviation → Fault Identification, using the reference distances. K << M << N.]

Page 16: SOM Using Nominal Data

[Figure.]

Page 17: Clustering of Nominal Data

[Figure.]

Page 18: SOM Using Unknown Data

[Figure.]

Page 19: Clustering of Unknown Data

[Figure.]

Page 20: Fault Identification

[Figure.]

Page 21: SOM Performance

[Figure.]

Page 22: Equiprobable Maps

For self-organizing feature maps, the weight density at convergence is not a linear function of the input density p(v), and hence the neurons of the map will not be active with equal probabilities (i.e., the map is not equiprobabilistic).

For a discrete lattice of neurons, the weight density will be proportional to:

p(w_i) ∝ p(v)^(1 / (1 + 2/d))

Regardless of the type of neighborhood function used, the SOM tends to undersample the high-probability regions and oversample the low-probability regions.
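To illustrate, using the exponent reconstructed above: for d = 2 the exponent is 1/(1 + 2/2) = 1/2, so a region whose input density is four times higher receives only twice the weight density; this is the undersampling of high-probability regions just described.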

Page 23: Avoiding Dead Units

The SOM algorithm can converge to a mapping which yields neurons that are never active ("dead units").

These units do not contribute to the minimization of the overall (MSE) distortion of the map.

To produce maps in which the neurons have an equal probability of being active (equiprobabilistic maps), the idea of adding a conscience to the winning neuron was introduced.

The techniques for generating equiprobabilistic maps discussed here are:

Conscience Learning
Frequency Sensitive Competitive Learning (FSCL)

Page 24: Conscience Learning

When a neural network is trained with unsupervised competitive learning on a set of input vectors that are clustered into K groups, a given input vector v will activate the neuron i* that has been sensitized to the cluster containing that input vector.

However, if some region of the input space is sampled more frequently than the others, a single unit begins to win all competitions for this region.

To counter this defect, one records for each neuron i the frequency c_i with which it has won the competition in the past, and adds this quantity to the Euclidean distance between the weight vector w_i and the current input v.

Page 25: Conscience Learning

In conscience learning, two stages are distinguished.

First, the winning neuron is determined out of the N units:

i* = arg min_{i=1,...,N} ||w_i − v||²

Second, the winning neuron i* is not necessarily the one that will have its weight vector updated.

Which neuron is updated depends on an additional term for each unit, related to the number of times the unit has won the competition in the recent past.

The update rule is that, for each neuron, the number of times it has won the competition is recorded, and a scaled version of this quantity is added to the distance metric used in the minimum-Euclidean-distance rule.

Page 26: Conscience Learning

Update rule:

i* = arg min_i {||w_i − v|| + C c_i}

with c_i the number of times neuron i has won the competition, and C the scaling factor ("conscience factor").

After determining the winning neuron i*, its conscience is incremented:

c_i* ← c_i* + 1

The weight of the winning neuron is updated using:

w_i* ← w_i* + η (v − w_i*)

where η is the learning rate, a small positive constant.
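A minimal sketch of one conscience-learning step implementing the three formulas above; `wins` holds the counts c_i, and the parameter values are illustrative:

```python
import numpy as np

def conscience_step(v, W, wins, C=0.1, eta=0.05):
    """One conscience-learning update: penalize frequent winners by adding
    C * win-count to the distance, then move only the winner toward v."""
    scores = np.linalg.norm(W - v, axis=1) + C * wins  # ||w_i - v|| + C c_i
    i_star = np.argmin(scores)                         # winner with conscience
    wins[i_star] += 1                                  # c_i* <- c_i* + 1
    W[i_star] += eta * (v - W[i_star])                 # w_i* <- w_i* + eta (v - w_i*)
    return i_star
```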

Page 27: Conscience Learning Using Nominal Data

[Figure.]

Page 28: Clusters Shown with Neurons

[Figure.]

Page 29: Clustering of Nominal Data

[Figure.]

Page 30: Conscience Learning on New Data Set

[Figure.]

Page 31: Clustering of Unknown Data Set

[Figure.]

Page 32: Clusters Represented on Data Set

[Figure.]

Page 33: Fault Identification

[Figure.]

Page 34: Conscience Learning Performance

[Figure.]

Page 35: Frequency Sensitive Competitive Learning

Another competitive learning scheme in use is Frequency Sensitive Competitive Learning (FSCL).

This learning scheme keeps a record of the total number of times each neuron has won the competition during training.

The distance metric in the Euclidean-distance rule is then scaled as follows:

i* = arg min_i {c_i × ||w_i − v||}

After selection of the winning neuron, its conscience is incremented and its weight vector is updated using the UCL rule:

w_i* ← w_i* + η (v − w_i*)
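A corresponding FSCL sketch; note that `wins` should be initialized to ones so that the scaled distances are meaningful from the start (an assumption, since the slides do not cover initialization):

```python
import numpy as np

def fscl_step(v, W, wins, eta=0.05):
    """One FSCL update: scale each distance by the unit's win count, so
    frequent winners look farther away; then apply the UCL update.
    `wins` is assumed initialized to np.ones(len(W))."""
    scores = wins * np.linalg.norm(W - v, axis=1)  # c_i * ||w_i - v||
    i_star = np.argmin(scores)                     # frequency-sensitive winner
    wins[i_star] += 1                              # increment the win count
    W[i_star] += eta * (v - W[i_star])             # UCL rule: w_i* += eta (v - w_i*)
    return i_star
```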

Page 36: FSCL Using Nominal Data

[Figure.]

Page 37: Clustering of Nominal Data

[Figure.]

Page 38: Clusters Shown with Neurons

[Figure.]

Page 39: FSCL Using the Unknown Data Set

[Figure.]

Page 40: Clustering Using Unknown Data

[Figure.]

Page 41: Clusters of Unknown Data

[Figure.]

Page 42: Fault Identification

[Figure.]

Page 43: FSCL Performance

[Figure.]

Page 44: Conclusions

As shown in the results, the performance of the SOM algorithm was not as good as that of the CLT and FSCL approaches.

Because the SOM produces dead units even when the neighborhood function converges slowly, it was not able to train the neurons well on the available data sets.

Due to its undersampling of high-probability regions, the SOM detected only two of the four faulty clusters, so its performance was poor.

Using the CLT and FSCL approaches, all four faulty clusters were detected using the reference distance as the distance measure.

Thus the equiprobable maps perform much better than the SOM, by avoiding dead units and by training the neurons with a conscience assigned to the winning neuron.

Page 45: References

Marc M. Van Hulle, Faithful Representations and Topographic Maps: From Distortion- to Information-Based Self-Organization, John Wiley & Sons, 2000.

T. Kohonen, Self-Organizing Maps, Springer, 1997.

Anil K. Jain and Richard C. Dubes, Algorithms for Clustering Data, Prentice Hall, 1988.

Juha Vesanto and Esa Alhoniemi, "Clustering of the Self-Organizing Map", IEEE Transactions on Neural Networks, Vol. 11, No. 3, May 2000.

Page 46: Questions/Comments?