
Concurrent Self-Organizing Maps for Pattern Classification

Victor-Emil NEAGOE and Armand-Dragos ROPOT
Department of Applied Electronics and Information Engineering,
POLITEHNICA University of Bucharest, Bucharest, 77206, Romania
Email: [email protected]

Abstract

We present a new neural classification model called Concurrent Self-Organizing Maps (CSOM), representing a winner-takes-all collection of small SOM networks. Each SOM of the system is trained individually to provide the best results for one class only. We have considered two significant applications: face recognition and multispectral satellite image classification. For the first application, we have used the ORL database of 400 faces (40 classes). With CSOM (40 small linear SOMs), we have obtained a recognition score of 91%, while a single big SOM obtains a score of only 83.5%.

For the second application, we have classified the multispectral pixels of a LANDSAT TM image with 7 bands into seven thematic categories. The experimental results lead to a recognition rate of 95.29% using CSOM (7 circular SOMs), while a single big SOM obtains a 94.31% recognition rate. At the same time, CSOM leads to a significant reduction of the training time in comparison with SOM.

1. Introduction

The Self-Organizing Map (SOM) (also called the Kohonen network) is characterized by the fact that neighbouring neurons in the network develop adaptively into specific detectors of different vector patterns. The neurons become specifically tuned to various classes of patterns through competitive, unsupervised (self-organizing) learning. Only one cell (neuron) or group of cells at a time gives an active response to the current input. The spatial location of a cell in the network (given by its co-ordinates) corresponds to a particular input vector pattern. One important characteristic of the SOM is that it simultaneously extracts the statistics of the input vectors and performs the classification as well.

Starting from the idea of treating each SOM as a cell characterizing one specific class only, we present a new neural recognition model called Concurrent Self-Organizing Maps (CSOM) (proposed by Neagoe in [9]), representing a collection of small SOMs which use a global winner-takes-all strategy. Each SOM is used to correctly classify the patterns of one class only, and the number of networks equals the number of classes. We have tested the proposed CSOM model for two significant applications: (1) face recognition; (2) multispectral satellite image classification.

2. Concurrent Self-Organizing Maps for Pattern Classification

Concurrent Self-Organizing Maps (CSOM) are a collection of small SOMs which use a global winner-takes-all strategy.

Each network is used to correctly classify the patterns of one class only, and the number of networks equals the number of classes. The CSOM training technique is a supervised one, but each individual net uses the standard SOM training algorithm. We built n training pattern sets and used the SOM training algorithm independently for each of the n SOMs. The CSOM model for training is shown in Fig. 1.

Figure 1. The CSOM model (training phase).

For recognition, the test pattern is applied in parallel to every previously trained SOM. The map providing the least quantization error is declared the winner, and its index is the index of the class to which the pattern belongs (see Fig. 2).

[Figure 1 diagram: the database is partitioned into pattern sets "1" … "n", each feeding its own map, SOM "1" … SOM "n".]
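To make the training and decision rules above concrete, here is a minimal Python/NumPy sketch of the CSOM idea. The class names, the learning-rate schedule, and the omission of a neighbourhood function are our simplifications for illustration, not the exact settings used in the paper.

```python
# Minimal sketch of the CSOM idea (illustrative only).  The update rule below
# adapts just the winning neuron; the neighbourhood function and the exact
# learning-rate schedule of a full SOM are omitted for brevity.
import numpy as np

class SmallSOM:
    def __init__(self, n_neurons, dim, seed=0):
        self.w = np.random.default_rng(seed).normal(size=(n_neurons, dim))

    def train(self, X, epochs=50, lr0=0.5):
        for e in range(epochs):
            lr = lr0 * (1.0 - e / epochs)                          # decaying learning rate
            for x in X:
                k = np.argmin(np.linalg.norm(self.w - x, axis=1)) # winning neuron
                self.w[k] += lr * (x - self.w[k])                  # move winner towards x
        return self

    def quantization_error(self, x):
        # distance from x to its best-matching neuron in this map
        return np.min(np.linalg.norm(self.w - x, axis=1))

class CSOM:
    """One small SOM per class; the map with the least quantization error
    for a test pattern wins and gives the class index."""
    def __init__(self, n_classes, n_neurons, dim):
        self.maps = [SmallSOM(n_neurons, dim, seed=c) for c in range(n_classes)]

    def fit(self, X, y):
        for c, som in enumerate(self.maps):
            som.train(X[y == c])                                   # each map sees one class only
        return self

    def predict(self, X):
        errs = np.array([[m.quantization_error(x) for m in self.maps] for x in X])
        return errs.argmin(axis=1)                                 # index of the winning map
```

For the face experiment of Section 3, n_classes would be 40 and each map would be, for example, a linear array of 4 neurons (cf. Table 2, row 1).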


Figure 2. The CSOM model (classification phase).

3. CSOM for Face Recognition

Face recognition is a specific topic of computer vision that has been studied for 25 years and has recently become a hot topic. However, face recognition remains a difficult task because of the variation of factors such as lighting conditions, viewpoint, body movement and facial expressions. Face recognition algorithms have numerous potential applications in areas such as visual surveillance, criminal identification, multimedia and visually mediated interaction.

3.1. Face Database

For experimenting with the proposed CSOM model, we have used "The ORL Database of Faces" provided by AT&T Laboratories Cambridge, with 400 images corresponding to 40 subjects (namely, 10 images per class). We have divided the whole gallery into a training lot (200 pictures) and a test lot (200 pictures). Each image has a size of 92 x 112 pixels with 256 grey levels. For the same subject (class), the images have been taken at different hours, under different lighting conditions, and with different facial expressions, with or without glasses. For each class, five images are chosen for training and five for testing (see Figs. 3-4).
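The following sketch illustrates the 5/5 per-class split described above. The directory layout orl/s&lt;subject&gt;/&lt;1..10&gt;.pgm, the use of Pillow for reading, and the helper name are our assumptions for the example, not part of the paper.

```python
# Hedged sketch: load the ORL images, resize them to 46 x 56 (as in Section 3.2),
# and split each class into 5 training and 5 test images.
from PIL import Image
import numpy as np

def load_orl_split(root="orl", n_subjects=40, train_per_class=5, size=(46, 56)):
    X_train, y_train, X_test, y_test = [], [], [], []
    for s in range(1, n_subjects + 1):
        for i in range(1, 11):
            img = Image.open(f"{root}/s{s}/{i}.pgm").resize(size)   # 92x112 -> 46x56
            vec = np.asarray(img, dtype=float).ravel()               # 2576-dimensional vector
            (X_train if i <= train_per_class else X_test).append(vec)
            (y_train if i <= train_per_class else y_test).append(s - 1)
    return (np.array(X_train), np.array(y_train),
            np.array(X_test), np.array(y_test))
```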

Figure 3. Example of training images (5 classes).

Figure 4. Test images corresponding to the training ones given in Figure 3 (5 classes).

[Figure 2 diagram: the input pattern is applied in parallel to SOM "1", SOM "2", …, SOM "n"; minimization of the quantization error over the maps selects the CLASS.]


3.2. Experimental Results of Face Recognition

For the task of face recognition, we have used a processing cascade with two stages:
(a) feature extraction using Principal Component Analysis (PCA);
(b) pattern classification using CSOM.
We have implemented the proposed technique in software and have experimented with the model using the previously mentioned face database of 400 images.

● Feature Extraction with PCA
The original pictures of 92 x 112 pixels have been resized to 46 x 56, so that the input space has dimension 2576. The PCA stage is equivalent to the computation of the Karhunen-Loeve Transform [4], [12]; for example, we can reduce the space dimension from 2576 to 158 while preserving 99% of the signal energy. We have computed the covariance matrix of the whole training set of 200 vectors X ∈ R^2576, together with its eigenvalues and eigenvectors. We have ordered the eigenvalues λ_1 ≥ λ_2 ≥ λ_3 ≥ … ≥ λ_2575 ≥ λ_2576 and have computed the energy preservation factor E obtained by retaining only the first n eigenvalues:

E = \frac{\sum_{i=1}^{n} \lambda_i}{\sum_{i=1}^{2576} \lambda_i} \cdot 100 .

In Table 1, the energy preservation factor is given for various n. We have considered the cases n = 158 (E = 99.02%) and n = 10 (E = 65.28%).

Table 1. Energy preservation factor E [%] for various numbers of retained features n.

n      | 199 | 158   | 135   | 117   | 100   | 92    | 56    | 50    | 10
E [%]  | 100 | 99.02 | 98.05 | 97.03 | 95.79 | 95.09 | 90.14 | 88.84 | 65.28
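As a small illustration of the formula above, the sketch below computes E(n) from the eigenvalues of the training covariance matrix; the function name and the in-memory 200 x 2576 data matrix X are assumptions made for the example.

```python
# Sketch of the energy preservation factor E(n), in percent, computed from the
# eigenvalues of the covariance matrix of the training vectors (rows of X).
import numpy as np

def energy_preservation(X, n):
    Xc = X - X.mean(axis=0)                  # centre the data
    cov = np.cov(Xc, rowvar=False)           # e.g. 2576 x 2576 for the ORL vectors
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # lambda_1 >= lambda_2 >= ...
    return 100.0 * eigvals[:n].sum() / eigvals.sum()

# e.g. energy_preservation(X, 158) should be close to 99% for the ORL training lot
```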

Table 2. Experimental results of face classification with CSOM versus SOM.

Nr | Retained principal components n | Type of classifier               | Total neurons | Networks | Recognition, training lot (%) | Recognition, test lot (%) | Training time (s)
 1 | 158 | Linear CSOMs (40 x 4)            |  160 | 40 | 100 | 91        |   15
 2 | 158 | Linear SOM                       |  160 |  1 |  98 | 71 / 83.5 |  225
 3 | 158 | Rectangular CSOMs [40 x (8 x 5)] | 1600 | 40 | 100 | 88        |  250
 4 | 158 | Rectangular SOM (40 x 40)        | 1600 |  1 | 100 | 3 / 81    | 3750
 5 |  10 | Linear CSOMs (40 x 3)            |  120 | 40 | 100 | 85        |    2
 6 |  10 | Linear SOM                       |  120 |  1 |  97 | 68 / 77.5 |   30
 7 |  10 | Rectangular CSOMs [40 x (8 x 5)] | 1600 | 40 | 100 | 85.5      |   25
 8 |  10 | Rectangular SOM (40 x 40)        | 1600 |  1 | 100 | 16 / 83   |  500


● CSOM versus SOM for Face Classification
For the second processing stage of face recognition, we have performed a neural classification using the following techniques:
a. the new CSOM model;
b. the SOM classifier with classical calibration;
c. the SOM classifier with k-NN calibration.
The simulation results are given in Table 2 and Figs. 5-8. For the recognition score on the test lot using SOM, both calibration variants are shown ("b / c"); the two calibration variants are sketched just below.

Figure 5. Recognition rate on the test lot as a function of the total number of neurons (n = 158 features).

Figure 6. Recognition rate on the training lot as a function of the total number of neurons (n = 158 features).

Figure 7. Recognition rate on the test lot as a function of the total number of neurons (n = 10 features).

Figure 8. Recognition rate on the training lot as a function of the total number of neurons (n = 10 features).

[Figures 5-8: plots of the recognition rate [%] versus the total number of neurons (40 to 1600); curves shown: CSOM, SOM with classical calibration, SOM with k-NN calibration.]


4. CSOM for Classification of Multispectral Satellite Imagery

Processing of satellite imagery has wide applications for the generation of various kinds of maps: vegetation maps, maps of the Earth's mineral resources, land-use maps (civil or military buildings, agricultural fields, woods, rivers, lakes, and highways), and so on. The standard approach to satellite image classification uses statistical methods. A relatively new and promising category of techniques for satellite image classification is based on neural models. The concluding remarks resulting from research on applying neural networks to the classification of satellite imagery are the following:
• neural classifiers do not require initial hypotheses on the data distribution and are able to learn nonlinear and discontinuous input data;
• neural networks can adapt easily to input data containing texture information;
• neural classifiers are generally more accurate than statistical ones;
• the architecture of neural networks is very flexible, so it can easily be adapted to improve the performance of a particular application.

4.1. Satellite Image Database

For training and testing the software of the proposed CSOM classification model, as well as the classical SOM (for comparison), we have used a LANDSAT TM image with 7 bands (Figs. 9.a-g) containing 368,125 pixels (7-dimensional), out of which 6,331 pixels were classified by an expert into seven thematic categories: A - urban area; B - barren fields; C - bushes; D - agricultural fields; E - meadows; F - woods; G - water (Fig. 10).

Fig. 9.a-g. Spectral bands 1-7.

Fig. 10. Calibration image.

Figure 11. Classified multispectral pixels (7 categories) using a circular CSOM architecture with 7 x 112 neurons (classification rate 95.29%).

Figure 12. Classified multispectral pixels (7 categories) using a circular SOM architecture with 784 neurons (classification rate 94.31%).


Figure 13. Histogram of the classified multispectral LANDSAT TM image given in Fig. 11 (using CSOM).

Figure 14. Histogram of the classified multispectral LANDSAT TM image given in Fig. 12 (using SOM).

Table 3. Experimental results of multispectral satellite image classification with CSOM, SOM and Bayes classifiers (the input vector space has dimension 7).

Nr | Type of classifier               | Total neurons | Networks | Recognition, training lot (%) | Recognition, test lot (%) | Training time (s)
 1 | Circular CSOMs (7 x 112)         | 784 | 7 | 98.71 | 95.29 |  100
 2 | Circular SOM                     | 784 | 1 | 96.49 | 94.31 | 3800
 3 | Linear CSOMs (7 x 112)           | 784 | 7 | 98.64 | 95.10 |   50
 4 | Linear SOM                       | 784 | 1 | 97.06 | 94.12 | 3700
 5 | Rectangular CSOMs [7 x (14 x 8)] | 784 | 7 | 97.98 | 95.07 |   92
 6 | Rectangular SOM (28 x 28)        | 784 | 1 | 96.53 | 92.80 | 3500
 7 | Bayes classifier                 |  -  | - | 95.83 | 94.22 |   -

4.2. Experimental Results of CSOM Satellite Image Classification

Each multispectral pixel (7 bands) is characterized by a corresponding 7-dimensional vector containing the pixel's projections in each band. These vectors are applied to the input of the neural classifier. For classification, we have experimented with the following techniques:
▪ the new CSOM model;
▪ the classical SOM classifier;
▪ the Bayes classifier (assuming the seven classes have normal distributions).
The simulation results are given in Tables 3-8. Two classified multispectral images are given in Figs. 11 and 12, and the corresponding histograms are shown in Figs. 13 and 14. The recognition rates for the training lot and for the test lot are shown in Figs. 15-16. The Bayes variant under the normality assumption is sketched below.
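For reference, a compact sketch of a Bayes classifier under the normality assumption mentioned above. The function names are ours, and estimating the class priors from the training lot is an assumption; the paper does not specify this detail.

```python
# Sketch of a Gaussian (normal) Bayes classifier for 7-dimensional pixel vectors.
import numpy as np

def fit_gaussian_bayes(X, y, n_classes):
    params = []
    for c in range(n_classes):
        Xc = X[y == c]
        # per-class mean, covariance and (assumed) prior estimated from the training lot
        params.append((Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X)))
    return params

def predict_gaussian_bayes(params, x):
    scores = []
    for mean, cov, prior in params:
        d = x - mean
        # log of the class-conditional Gaussian density (up to a constant) plus log prior
        score = (-0.5 * d @ np.linalg.solve(cov, d)
                 - 0.5 * np.log(np.linalg.det(cov)) + np.log(prior))
        scores.append(score)
    return int(np.argmax(scores))   # class with the highest posterior score
```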

Histogram of the CSOM-classified image (Fig. 13): A - 7.21% urban area; B - 32.76% barren fields; C - 20.17% bushes; D - 10.36% agricultural fields; E - 21.64% meadows; F - 4.75% woods; G - 3.11% water; N - 0.00% unclassified.

Histogram of the SOM-classified image (Fig. 14): A - 5.12% urban area; B - 35.53% barren fields; C - 19.73% bushes; D - 13.13% agricultural fields; E - 18.50% meadows; F - 4.58% woods; G - 2.61% water; the remaining 0.79% are unclassified (see remark 14 below).


Table 4. Comparison of the best pixel classification scores [%] obtained by SOM and CSOM for the training lot as a function of the number of neurons.

Number of neurons | 49    | 98    | 196   | 392   | 784
SOM               | 91.60 | 93.53 | 95.10 | 95.86 | 97.06
CSOM              | 93.27 | 95.01 | 96.62 | 97.76 | 98.71
Bayes classifier  | 95.83 (independent of the number of neurons)

Table 5. Comparison of the best pixel classification scores [%] obtained by SOM and CSOM for the test lot as a function of the number of neurons.

Number of neurons | 49    | 98    | 196   | 392   | 784
SOM               | 92.04 | 93.87 | 93.62 | 94.34 | 94.31
CSOM              | 92.86 | 94.79 | 93.71 | 94.85 | 95.29
Bayes classifier  | 95.17 (independent of the number of neurons)

Figure 15. Recognition rate on the training lot as a function of the total number of neurons.

Figure 16. Recognition rate on the test lot as a function of the total number of neurons.

[Figures 15-16: plots of the recognition rate [%] versus the total number of neurons (49 to 784); curves shown: CSOM, SOM, Bayes.]


Table 6. Confusion matrix [%] for the circular SOM with 784 neurons (test lot). Rows: assigned class; columns: real class.

Assigned class |   A   |   B   |   C   |   D   |   E   |   F   |   G   | Total [%]
A'             | 80.00 |  0.08 |  1.97 |  0.00 |  0.00 |  0.21 |  0.62 |   1.96
B'             |  8.57 | 99.41 |  0.66 |  0.00 |  0.00 |  0.00 |  0.00 |  37.54
C'             |  5.71 |  0.17 | 73.68 |  0.33 |  0.48 |  4.95 |  3.73 |   4.80
D'             |  0.00 |  0.00 |  1.97 | 96.45 |  0.00 |  9.28 |  0.00 |  29.00
E'             |  0.00 |  0.00 |  0.66 |  0.00 | 98.55 |  0.00 |  0.00 |   6.48
F'             |  0.00 |  0.00 | 14.47 |  2.77 |  0.00 | 84.74 |  1.86 |  14.57
G'             |  5.71 |  0.08 |  4.61 |  0.00 |  0.00 |  0.41 | 93.79 |   5.21
Unclassified   |  0.00 |  0.25 |  1.97 |  0.44 |  0.97 |  0.41 |  0.00 |   0.44
Total [%]      |  2.21 | 37.54 |  4.80 | 28.50 |  6.54 | 15.32 |  5.09 | 100.00

Table 7. Confusion matrix [%] for the circular CSOMs with (7 x 112) neurons (test lot). Rows: assigned class; columns: real class.

Assigned class |   A   |   B   |   C   |   D   |   E   |   F   |   G   | Total [%]
A'             | 90.00 |  0.25 |  0.00 |  0.22 |  0.00 |  0.21 |  0.00 |   2.18
B'             |  2.86 | 99.58 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  37.44
C'             |  4.29 |  0.17 | 84.87 |  0.44 |  1.45 |  6.80 |  2.48 |   5.62
D'             |  0.00 |  0.00 |  1.32 | 95.79 |  0.00 |  6.39 |  0.00 |  28.34
E'             |  0.00 |  0.00 |  0.00 |  0.00 | 98.55 |  0.00 |  0.00 |   6.45
F'             |  0.00 |  0.00 |  5.26 |  3.55 |  0.00 | 85.77 |  0.00 |  14.41
G'             |  2.86 |  0.00 |  8.55 |  0.00 |  0.00 |  0.82 | 97.52 |   5.56
Unclassified   |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |   0.00
Total [%]      |  2.21 | 37.54 |  4.80 | 28.50 |  6.54 | 15.32 |  5.09 | 100.00

Table 8. Training time [s] required by the previous SOM and CSOM as a function of the number of neurons.

Number of neurons | 49  | 98  | 196  | 392  | 784
SOM               | 276 | 545 | 1140 | 2040 | 4872
CSOM              |  56 |  93 |  171 |  423 | 1020

5. Concluding Remarks

1. The proposed CSOM model uses a collection of small SOMs, each network having the task of correctly classifying the patterns of one class only. The decision is based on a global winner-takes-all strategy.

2. From the experimental results of the considered applications, we can evaluate the advantage of CSOM over SOM both from the point of view of the recognition rate and regarding the training time.

A. Face Recognition

▪ n = 158 principal components

3. By retaining only 158 components in the transformed space (instead of the 2576 components of the vectors in the original space), we preserve 99.02% of the signal energy.

4. Using a CSOM built from a set of 40 small linear SOMs, each with 4 neurons, one obtains a recognition score of 91% (for the test lot), while using a single linear SOM with the same total number of neurons (160), one obtains a recognition score of only 71% for classical calibration and 83.5% for the improved (k-NN) calibration.

5. Using a CSOM consisting of a collection of 40 rectangular SOMs, each with 8 x 5 = 40 neurons, we have obtained a recognition score of 88%, while with a big rectangular SOM having the same total number of neurons (40 x 40 = 1600), we have obtained only a 3% recognition rate with classical calibration and 81% for the k-NN calibrated SOM.


6. For the CSOM model, the recognition rate over the test lot increases with the number of neurons until it reaches an optimum (for example, 91% for 160 neurons), and then the recognition rate decreases (see Fig. 5).

▪ n = 10 principal components

7. By retaining only 10 components in the transformed space, one preserves 65.28% of the signal energy contained in the original space of dimensionality 2576.

8. Using a set of 40 small linear SOMs, each with 3 neurons, one obtains a recognition score of 85%, while using a corresponding linear SOM with the same total number of neurons (120), one obtains a recognition score of only 68% for classical calibration and 77.5% for k-NN calibration.

9. For a rectangular architecture, the CSOM model also leads to better results than SOM (see Table 2).

10. From the point of view of training time, the advantage of CSOM over SOM is obvious. Theoretically, for 40 classes, the training time of CSOM should be about 40 times less than that of the corresponding SOM with the same total number of neurons. During the training of CSOM, each input vector is applied only to the specific small SOM corresponding to the vector's class; hence, one has to compute only 1/40 of the number of distances computed for SOM. Moreover, the neighbourhood radii are smaller for the CSOM components than for a single big SOM. The simulation results are given in Table 2.
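In symbols, a back-of-the-envelope count (assuming P training vectors, N total neurons, and M classes of roughly equal size, with the N neurons split evenly among the M maps):

\[
\underbrace{P \cdot N}_{\text{distances per epoch, single SOM}}
\quad \text{versus} \quad
\sum_{c=1}^{M} \frac{P}{M}\cdot\frac{N}{M} \;=\; \frac{P\,N}{M}
\quad \text{for CSOM},
\]

so the theoretical speed-up factor is about M (here M = 40).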

B. Classification of Multispectral Satellite Imagery

11. We can note the very good classification scores of multispectral pixel classification for all the experimented classifiers, but the CSOM model leads to slightly better results than SOM and Bayes for all the presented variants.

12. The best result (a classification rate of 95.29%) is obtained using a CSOM model containing 7 circular SOMs with 112 neurons each. Taking into account the architecture variants for the components of CSOM, the best variant for this application is the circular one, followed by the linear and then the rectangular one.

13. Moreover, the CSOM model requires significantly less training time than a single big SOM (Tables 3 and 8).

14. From the histogram of the CSOM-classified image given in Fig. 13, one deduces that there are no unclassified pixels, while for the corresponding SOM there are 0.79% unclassified pixels (Fig. 14).

15. The CSOM model does not require a calibration phase, while SOM does.

16. The classification score increases with the number of neurons (Tables 4 and 5, Figs. 15 and 16).

17. The confusion matrices (Tables 6 and 7) show that there are specific differences regarding the recognition of the seven thematic categories (for example, pixels belonging to barren fields are identified better than those representing woods).

6. References

[1] T. Kohonen, "The Self-Organizing Map", Proceedings of the IEEE, Vol. 78, No. 9, Sept. 1990, pp. 1464-1479.
[2] T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, 1995.
[3] P. W. Hallinan, G. C. Gordon, A. L. Yuille, P. Giblin, and D. Mumford, Two- and Three-Dimensional Patterns of the Face, A K Peters, Natick, Massachusetts, 1999.
[4] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision, Imperial College Press, London, 2000.
[5] R. Chellappa, C. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey", Proceedings of the IEEE, Vol. 83, pp. 705-740, 1995.
[6] G. A. Carpenter, M. N. Gjaja, S. Gopal, and C. E. Woodcock, "ART Neural Networks for Remote Sensing: Vegetation Classification from LANDSAT TM and Terrain Data", IEEE Transactions on Geoscience and Remote Sensing, Vol. 35, No. 2, 1997, pp. 308-325.
[7] G. A. Carpenter, S. Grossberg, N. Markuzon, J. Reynolds, and D. B. Rosen, "Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps", IEEE Transactions on Neural Networks, Vol. 3, No. 5, 1992, pp. 698-713.
[8] N. Kopco, P. Sincak, and H. Veregin, "Extended Methods for Classification of Remotely Sensed Images Based on ARTMAP Neural Networks", Computational Intelligence - Theory and Applications (B. Reusch, Ed.), Springer, Berlin-New York, 1999, pp. 206-219.
[9] V. Neagoe, "Concurrent Self-Organizing Maps for Automatic Face Recognition", Proceedings of the 29th International Conference of the Romanian Technical Military Academy, Technical Military Academy, Bucharest, Romania, November 15-16, 2001, Section 9 (Communications), ISBN 973-8290-27-9, pp. 35-40.
[10] V. Neagoe and I. Fratila, "A Neural Segmentation of Multispectral Satellite Images", Computational Intelligence - Theory and Applications (B. Reusch, Ed.), Springer, Berlin-New York, 1999, pp. 334-341.
[11] V. Neagoe, "A Circular Kohonen Network for Image Vector Quantization", Parallel Computing: State-of-the-Art and Perspectives (E. H. D'Hollander, G. R. Joubert, F. J. Peters, and D. Trystram, Eds.), Vol. 11, Elsevier, Amsterdam-New York, 1996, pp. 677-680.
[12] V. Neagoe and O. Stanasila, Recunoasterea formelor si retele neurale - algoritmi fundamentali (Pattern Recognition and Neural Networks - Fundamental Algorithms), Ed. Matrix Rom, Bucharest, 1999.
