
2010 Fourth Pacific-Rim Symposium on Image and Video Technology (PSIVT), Singapore

This work is supported by the National Natural Science Foundation of China (Grants 60933008 and 61020106001).

3D Face Recognition Using Multi-level Multi-Feature Fusion

Cuicui Zhang, Department of Computer Science and Technology, Shandong University, Jinan City, Shandong Province, China. E-mail: [email protected]

Caiming Zhang, Department of Computer Science and Technology, Shandong University, Jinan City, Shandong Province, China. E-mail: [email protected]

Keiichi Uchimura, Department of Computer Science and Electrical Engineering, Kumamoto University, Kumamoto, Japan. E-mail: [email protected]

Gou Koutaki, Department of Computer Science and Electrical Engineering, Kumamoto University, Kumamoto, Japan. E-mail: [email protected]

Abstract—This paper proposes a novel 3D face recognition algorithm using multi-level multi-feature fusion. A new face representation named the average edge image is proposed in addition to traditional ones such as the maximal principal curvature image and the range image. In the matching stage, a new weight calculation algorithm based on the sum rule is presented for feature fusion and match score fusion in order to improve matching precision. Exploiting the complementary characteristics of feature fusion and match score fusion, a combination of the two, named two-level fusion, is proposed. Experiments are conducted on our own 3D database of nearly 400 samples, with mesh simplification used for data reduction. Recognition results show that the new weight calculation method improves recognition accuracy and that the two-level fusion algorithm performs better than either feature fusion or match score fusion alone.

Keywords-3D face recognition; multi-level multi-feature fusion; feature fusion; match score fusion; two-level fusion

I. INTRODUCTION

Face recognition has been studied for over 40 years. It has drawn considerable attention for its potential applications in the fields of identification and verification. In past years, numerous algorithms based on two-dimensional face images were developed, such as Eigenface (PCA) [1], Fisherface (LDA) [2], and the Bayesian face recognition method [3]. However, there are still challenges, among which pose and illumination variations are recognized as the two major problems for 2D face recognition algorithms. Recently, 3D scanning techniques have developed rapidly, and 3D face recognition algorithms were proposed to address these problems.

The task of face recognition involves two procedures: the development of a face representation for feature extraction and the subsequent matching process using those features. According to the face representation, algorithms for 3D face recognition can be classified as follows. First, some algorithms perform recognition using features obtained from face surface analysis. Gordon [4] proposed a curvature feature-driven algorithm. Samir [5] utilized face surface contours and defined a contour distance as the metric for face recognition. Nagamine [6] introduced a curve and profile-based face description. Feng [7] extracted affine integral invariants as features. Second, some algorithms perform recognition based on geometric features. Beumier [8] developed a volumetric approximate face representation method. Sun [9] explored two kinds of face features: face surface attribute features (e.g., face surface area, face surface volume, surface normal vectors, and so on) and the correlation of key feature points measured by Euclidean distances. Third, some algorithms perform recognition based on the range image. Hesher [10] performed recognition using principal component analysis (PCA) and independent component analysis (ICA) on range images. However, each face representation has its own advantages and limitations. It is therefore more reasonable and feasible to perform face recognition using multiple features extracted from different kinds of face representations. In [9], face recognition was implemented based on multiple geometric features, and a feature-fusion method based on linear discriminant analysis was introduced. In [11], an algorithm named multi-parts and multi-feature fusion was explored.

Information from multiple sources can be consolidated at distinct levels, including feature fusion, match score fusion, and decision fusion. At the feature level, the feature sets extracted from multiple data sources are combined to create a new feature set representing the individual. At the match score level, different feature matchers output a set of match scores which are fused to generate a single scalar score for classification [12]. The fusion strategy is a crucial problem to be solved first. Two popular fusion strategies are the feature series joint strategy and the weighted joint strategy. The former must handle high-dimensional feature vectors; the latter is a linear weighted parallel fusion strategy based on the recognition results of the various features [9]. For large-scale feature vectors, the former strategy may incur high computational costs. On the other hand, despite the simplicity of the latter strategy, a system adopting such a fusion scheme may perform neither better nor more robustly in uncontrolled environments than single features. That is because, sometimes, one or two features may dominate the entire system's performance, and different


kinds of features extracted from the same datum might be highly correlated. Therefore, the simple weighted strategy is not sufficient to make the fused feature effective for classification [13]. In this paper, a new fusion strategy is proposed: a linear weighted strategy based on the sum rule.

The contributions of this paper are as follows:

• Three kinds of facial feature representations are developed from 3D raw data: the maximal principal curvature image, the average edge image, and the range image.

• Three kinds of fusion frameworks based on the proposed fusion strategy are presented: feature fusion, match score fusion, and two-level fusion.

• A mesh simplification algorithm (proposed in [14]) is adopted, and comparisons of its effect on face recognition are included.

This paper is organized as follows: the development of the three facial feature representations is presented in Section 2; Section 3 explains the theory of two of the most effective recognition approaches, PCA and LDA; Section 4 presents the three fusion frameworks and the proposed weight calculation method; extensive experimental results are reported in Section 5; finally, Section 6 concludes the paper.

II. DEVELOPMENT OF FACIAL FEATURE IMAGES

A. Data Set

The dataset used here belongs to our lab and was created with a VIVID910 scanner, which determines distance information by triangulation. The laser beam is scanned using a high-precision galvanometric mirror, and 640×480 individual points can be measured per scan. In addition to distance data, color images can also be acquired. Our dataset consists of 38 face classes. For each person there are 10 samples, 9 of which are captured under various poses and 1 under different lighting conditions. For recognition, the dataset is divided into two subsets: the training subset and the test subset. For each person, 6 of the 10 samples are selected randomly and placed in the training subset, while the remaining 4 samples are placed in the test subset.

B. Data preprocessing

This process minimizes the impact of the input data on subsequent processing. It consists of, first, affine transformation: after transformation, all faces of one person are rotated to a frontal view; second, hole filling: to fill holes produced by the laser scan; and finally, median cut and data normalization: applied to segment face regions, remove spikes, and confine coordinates to a certain range.

C. Mesh simplification

This step reduces the data of a mesh while preserving its original shape and features. There are three kinds of mesh simplification algorithms: removing vertices, removing edges (edge collapse), and removing triangles (triangle collapse). The third type is more efficient than the others. Given an initial mesh, it reduces the number of triangles through a series of triangle collapse operations (see Fig. 1). First, a weight is assigned to each triangle, which is used as the criterion to select triangles to be collapsed [15]. The triangle collapse method used here is based on the square volume measure proposed in [14]. Meshes are simplified by minimizing an error objective function which assigns a weight to each triangle. This objective function is defined by a combination of the square volume error, a shape factor, and a normal constraint factor of the triangles. In order to preserve strong feature triangles, a Gaussian curvature factor is computed for each triangle. This feature-preserving measure makes the method well suited to our application.

An example of mesh simplification on a face model is shown in Fig. 2. The initial mesh (Fig. 2(a)) consists of 7772 vertices and 15226 triangles. Figs. 2(b)-(d) display meshes of 50%, 25%, and 12.5% of the original data, respectively.

D. Facial Feature Images

The development of the three kinds of 2D face representations is as follows; the results are stored as PGM files.

1) Maximal principal curvature image (MPCI): This representation captures many features necessary to accurately describe the face, including the shape of the forehead, jaw line, cheeks, and so on. The calculation of the maximal principal curvature is introduced in [4]. There are two types of principal curvatures: the maximum curvature $k_{\max}$ and the minimum curvature $k_{\min}$. Let $K$ and $H$ denote the Gaussian curvature and the mean curvature at a point; then $k_{\max}$ and $k_{\min}$ can be calculated as follows:

$$k_{\max} = H + \sqrt{H^2 - K}, \qquad k_{\min} = H - \sqrt{H^2 - K} \qquad (1)$$

For each vertex, $k_{\max}$ is selected as the feature and mapped into the gray code of a pixel to generate the MPCI.
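For illustration, the following minimal Python sketch evaluates Eq. (1) for per-vertex curvature values and maps the results to 8-bit gray codes. The per-vertex estimation of $H$ and $K$ and the linear gray mapping are assumptions, since the paper does not specify them.

```python
import numpy as np

def principal_curvatures(H, K):
    # Eq. (1): k_max/min = H +/- sqrt(H^2 - K)
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))  # clamp small negative noise
    return H + disc, H - disc

def to_gray(values):
    # Linearly map per-vertex values into 8-bit pixel intensities
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)
    return (255 * v).astype(np.uint8)

# Hypothetical per-vertex mean (H) and Gaussian (K) curvatures
H = np.array([0.20, 0.50, -0.10])
K = np.array([0.03, 0.20, -0.05])
k_max, _ = principal_curvatures(H, K)
mpci_pixels = to_gray(k_max)  # intensities that form the MPCI
```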

2) Average edge image (AEI): This representation captures strong features in areas where vertices are dense, such as the eyes and eyebrows. Referring to Fig. 1(b), around the central vertex $P_0$ there are seven directly adjacent vertices. The average distance between $P_0$ and its adjacent vertices is the mean length of those edges, called the average edge. An average edge can be calculated in the same way for each vertex of the mesh. The AEI is then obtained by mapping the average edge of each vertex to the gray code of a pixel.
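A minimal sketch of the average-edge computation, assuming the mesh is given as a vertex array and an edge list (hypothetical input format; the paper works directly on the scanned triangle mesh):

```python
import numpy as np

def average_edge_lengths(vertices, edges):
    # Mean length of the edges incident to each vertex (the "average edge")
    vertices = np.asarray(vertices, dtype=float)
    total = np.zeros(len(vertices))
    count = np.zeros(len(vertices))
    for i, j in edges:
        length = np.linalg.norm(vertices[i] - vertices[j])
        total[i] += length; count[i] += 1
        total[j] += length; count[j] += 1
    return total / np.maximum(count, 1)  # gray-coded afterwards to form the AEI

# Toy mesh: vertex 0 with three unit-length neighbors
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
edges = [(0, 1), (0, 2), (0, 3)]
print(average_edge_lengths(verts, edges))  # vertex 0 -> 1.0
```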

3) Range image (RI): This is a very simple representation but of sufficient resolution for recognition. In previous work [4], data was created by a rotating laser scanner system and depth was stored in a cylindrical coordinate system $f(\theta, y)$ at each point. In this paper, data is created by triangulation and coordinates are represented as $(x, y, z)$. The range image here is defined as a one-channel geometric image encoded by the $z$ component of each vertex.
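The range-image construction can be sketched as below, assuming vertices are orthographically binned into a fixed grid by their $(x, y)$ coordinates; the paper does not detail the rasterization, so this is only one plausible realization:

```python
import numpy as np

def range_image(vertices, size=120):
    v = np.asarray(vertices, dtype=float)
    img = np.zeros((size, size), dtype=np.uint8)
    # Normalize (x, y) into pixel indices and z into 8-bit intensities
    xy = (v[:, :2] - v[:, :2].min(axis=0)) / (np.ptp(v[:, :2], axis=0) + 1e-12)
    cols, rows = (xy * (size - 1)).astype(int).T
    z = (v[:, 2] - v[:, 2].min()) / (np.ptp(v[:, 2]) + 1e-12)
    gray = (255 * z).astype(np.uint8)
    np.maximum.at(img, (rows, cols), gray)  # keep the nearest surface at duplicates
    return img
```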


Figure 1: An example of triangle collapse: (a) mesh before triangle collapse, (b) mesh after triangle collapse.

Figure 2: Mesh simplification: (a) 100% data model, (b) 50% data model, (c) 25% data model, (d) 12.5% data model.

E. Image Processing

This step normalizes the obtained 2D feature images using image cropping and image filling. Since the nose tip is an important feature point with the top $z$ value, we treat it as the origin and crop the images down to 120×120. A median filter is used to fill the black pixels of the MPCI and RI. The final 2D feature images are shown in Fig. 3.

III. RECOGNITION APPROACHES

Figure 3: (a) Maximal principal curvature image, (b) average edge image, (c) range image.

In this paper, PCA and LDA, which are adjudged to be among the top three recognition algorithms, have been selected to perform recognition. Given the Eigenfaces (for PCA) or Fisherfaces (for LDA), every face in the database can be represented as a vector of weights obtained by projecting the image onto the eigenface components with a simple inner product operation [16]. The matching process is carried out by calculating the scores between the query's projected feature vector and the training projected feature vectors using the Euclidean distance. For the decision, the nearest distance (the smallest score) is considered the best likeness. The goal of PCA [1] is to find a transformation after which feature clusters can be easily separated; it is also a standard technique for reducing the dimension of features. The LDA [2] approach is known as one of the most successful face recognition methods and performs better than PCA [16]. Through a linear transformation, the original face representation is projected into a new LDA subspace where the ratio of the between-class scatter $S_b$ to the within-class scatter $S_w$ is maximized. The projection matrix $W$ can then be obtained by solving a generalized eigenvector problem:

$$S_b W = \lambda S_w W \qquad (2)$$

The size of each face image is 120×120, which implies that the dimension of each feature vector is 14,400. If recognition were carried out on these vectors directly, high computational costs would be incurred. To solve this problem, PCA is used for dimension reduction: after performing PCA, the dimension of each feature vector is reduced to 38. The lower-dimensional feature vectors are further processed by LDA to generate the final feature vectors.
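The PCA+LDA cascade can be sketched with scikit-learn as below; this is not the authors' implementation, and the function and variable names are our own:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pca_lda_features(train_imgs, labels, test_imgs, n_pca=38):
    # Flatten 120x120 images into 14400-dim vectors
    Xtr = train_imgs.reshape(len(train_imgs), -1)
    Xte = test_imgs.reshape(len(test_imgs), -1)
    pca = PCA(n_components=n_pca).fit(Xtr)           # dimension reduction to 38
    lda = LinearDiscriminantAnalysis().fit(pca.transform(Xtr), labels)
    # Final feature vectors in the LDA subspace
    return lda.transform(pca.transform(Xtr)), lda.transform(pca.transform(Xte))
```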

Corresponding to these three kinds of face representations, three kinds of feature vectors can be obtained for each sample. Let $X = \{x_1, x_2, \ldots, x_n\}$, $Y = \{y_1, y_2, \ldots, y_n\}$, and $Z = \{z_1, z_2, \ldots, z_n\}$ denote them respectively. Since $X$, $Y$, and $Z$ may exhibit significant variations in range, feature normalization is used to remove these variations. The simple max normalization technique is used here. Let $x$ and $x'$ denote an element of a feature vector before and after normalization; the max technique computes $x'$ by $x' = x / \max(\mathrm{abs}(X))$. The feature vectors after normalization are $X' = \{x'_1, x'_2, \ldots, x'_n\}$, $Y' = \{y'_1, y'_2, \ldots, y'_n\}$, and $Z' = \{z'_1, z'_2, \ldots, z'_n\}$.
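A one-line realization of the max normalization (a sketch; `max_normalize` is our own name):

```python
import numpy as np

def max_normalize(x):
    # x' = x / max(|X|), confining every component to [-1, 1]
    x = np.asarray(x, dtype=float)
    return x / (np.abs(x).max() + 1e-12)
```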

IV. MULTI-LEVEL MULTI-FEATURE FUSION

In this section, three kinds of fusion mechanisms are presented based on the three kinds of features, which provide different discriminative information and play different roles in face recognition.

A. Feature fusion mechanism

In this case, the feature vectors obtained from the MPCI, AEI, and RI using PCA+LDA are combined into a new feature vector. This is implemented with a new linear weighted joint strategy as defined in (3), where $\alpha$, $\beta$, and $\gamma$ are the weighted coefficients denoting the contributive efficiency of each kind of feature. The subsequent matching process is performed on the combined feature vector $V$. This process is shown in Fig. 4.

$$V = \alpha X' + \beta Y' + \gamma Z' \qquad (3)$$
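Feature fusion per Eq. (3) then reduces to a weighted sum of the normalized vectors (a minimal sketch with our own function name):

```python
import numpy as np

def feature_fusion(Xp, Yp, Zp, alpha, beta, gamma):
    # Eq. (3): V = alpha*X' + beta*Y' + gamma*Z'
    return alpha * np.asarray(Xp) + beta * np.asarray(Yp) + gamma * np.asarray(Zp)
```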

Figure 4: Feature fusion

23

Figure 5: Match score fusion

Figure 6: Two-level fusion

In the literature, weighted coefficients were defined by the recognition rates [9]. Here, however, the range image can be regarded as the dominant feature due to its best performance, and, as discussed before, this simple weighted strategy may perform no better, or sometimes even worse. In order to balance the contributions of the different kinds of features while giving prominence to the dominant feature, a new linear weight calculation method is proposed. The weighted coefficient $w_i'$ of each feature can be calculated as follows:

$$w_i' = \left( \lfloor 10\,w_i \rfloor - \min\left( \lfloor 10\,w_1 \rfloor, \lfloor 10\,w_2 \rfloor, \ldots, \lfloor 10\,w_m \rfloor \right) \right)^2 \qquad (4)$$

where $w_i$ is the original recognition rate of the $i$-th feature ($i = 1, 2, \ldots, m$; here $m = 3$ and $\alpha = w_1'$, $\beta = w_2'$, $\gamma = w_3'$). This definition means that the larger the original recognition rate, the more significant the new discriminative contribution measured by $w_i'$; the role of the dominant feature thus becomes more important than before.
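Under our reading of the reconstructed Eq. (4), the weights can be computed as follows (a sketch; note that with this formula the weakest feature receives weight zero, so the reconstruction should be taken with caution):

```python
import math

def fusion_weights(rates):
    # Eq. (4): w'_i = (floor(10*w_i) - min_j floor(10*w_j))^2
    scaled = [math.floor(10 * w) for w in rates]
    lowest = min(scaled)
    return [(s - lowest) ** 2 for s in scaled]

# Hypothetical recognition rates for MPCI, AEI, RI
alpha, beta, gamma = fusion_weights([0.62, 0.78, 0.95])  # -> [0, 1, 9]
```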

B. Match score fusion mechanism

This procedure is shown in Fig. 5. Features are first extracted using PCA+LDA. At the match score level, the feature-matching results between a test feature vector and a training feature vector are combined into a final similarity score. Fusion is performed with the weighted sum rule defined in (5), where $S_{X'}$, $S_{Y'}$, and $S_{Z'}$ stand for the three kinds of feature-matching results and $S_{match}$ is the final similarity score:

$$S_{match} = \alpha S_{X'} + \beta S_{Y'} + \gamma S_{Z'} \qquad (5)$$

where the weighted coefficients are also calculated using (4).
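Match score fusion per Eq. (5) can be sketched with Euclidean distances as the per-representation match scores (function and variable names are ours):

```python
import numpy as np

def fused_match_score(test_feats, train_feats, weights):
    # Eq. (5): weighted sum of the three per-representation match scores;
    # the smallest fused score over the training set indicates the identity
    return sum(
        w * np.linalg.norm(np.asarray(t) - np.asarray(r))
        for t, r, w in zip(test_feats, train_feats, weights)
    )
```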

Table 2 summarizes the recognition results of the above two fusion mechanisms. We can see that, for all cases except one, recognition using match score fusion outperforms that using feature fusion. In order to combine the advantages of the two mechanisms, a two-level fusion mechanism is proposed.

C. A two-level fusion mechanism

Fig. 6 illustrates the framework of the two-level fusion. At the feature level, the three kinds of features are pairwise combined into new feature vectors based on the linear weighted sum rule defined in (6). After this operation, three new feature vectors, $X''$, $Y''$, and $Z''$, are obtained. At the match score level, the feature-matching results based on the combined feature vectors are fused into a final similarity score by the linear weighted sum rule defined in (7).

$$X'' = \frac{\alpha}{\alpha + \beta + \gamma} X' + \frac{\beta}{\alpha + \beta + \gamma} Y'$$
$$Y'' = \frac{\beta}{\alpha + \beta + \gamma} Y' + \frac{\gamma}{\alpha + \beta + \gamma} Z' \qquad (6)$$
$$Z'' = \frac{\gamma}{\alpha + \beta + \gamma} Z' + \frac{\alpha}{\alpha + \beta + \gamma} X'$$

$$S'_{match} = \frac{\alpha + \beta}{\alpha + \beta + \gamma} S_{X''} + \frac{\beta + \gamma}{\alpha + \beta + \gamma} S_{Y''} + \frac{\gamma + \alpha}{\alpha + \beta + \gamma} S_{Z''} \qquad (7)$$

where $\alpha$, $\beta$, and $\gamma$ are the weighted coefficients, calculated by (4).
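A sketch of the two-level mechanism under the reconstructed Eqs. (6)-(7) (our own function names, operating on normalized feature vectors):

```python
import numpy as np

def pairwise_fused_features(Xp, Yp, Zp, a, b, g):
    # Eq. (6): pairwise feature-level combinations, normalized by (a + b + g)
    s = a + b + g
    X2 = (a * Xp + b * Yp) / s
    Y2 = (b * Yp + g * Zp) / s
    Z2 = (g * Zp + a * Xp) / s
    return X2, Y2, Z2

def two_level_score(S_X2, S_Y2, S_Z2, a, b, g):
    # Eq. (7): fuse the three pairwise match scores into the final score
    s = a + b + g
    return ((a + b) * S_X2 + (b + g) * S_Y2 + (g + a) * S_Z2) / s
```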

V. EXPERIMENTAL RESULTS AND ANALYSIS

In order to evaluate the performance of our system, several experiments have been conducted; they are reported in this section. As mentioned previously, the scanning device can also produce color images. The color images of one person are shown in Fig. 7, and Fig. 8 displays the corresponding range images. Experiments are carried out using two recognition approaches, PCA and PCA+LDA. Mesh simplification is implemented iteratively: mesh $M_{i+1}$ is obtained from mesh $M_i$. The data ratio is defined as the percentage of the current data relative to the original data. According to the data ratio, meshes are classified into six levels: the complete data mesh, and the 75%, 50%, 25%, 12.5%, and 6.25% data meshes.

Figure 7: Color facial images of one person

Figure 8: Range images of one person


A. The recognition performance evaluation of the three facial feature representations

The performances of the three face representations are summarized in Table 1. From the results we can conclude that all of them achieve encouraging recognition rates. The range image yields better performance than the others and can be considered the dominant feature. In addition, PCA+LDA outperforms PCA significantly; thus, the following experiments are carried out based on PCA+LDA.

B. The performance evaluation of our weight calculation method

In order to test the performance of the proposed weight calculation method, experiments on feature fusion and match score fusion were carried out. The previous fusion strategy defined the weighted coefficients by the recognition rates of the different kinds of features [12], while we define the weighted coefficients as in Eq. (4). Comparisons of the performances of the two weighting methods are shown in Table 2. The results show that our weighting method clearly outperforms the previous one, both for feature fusion and for match score fusion.

C. Selection of different fusion mechanisms

Comparisons of the performances of the three fusion mechanisms based on our weight calculation method are presented in Table 3. Compared to the recognition results based on the range image (which performs better than the other two representations), it is evident that match score fusion improves recognition performance, whereas in some cases feature fusion degrades it. The combined two-level fusion unites the advantages of the two and performs better than match score fusion. As the scale of our database is limited, the advantage of the two-level fusion is not remarkable.

D. Effect of mesh simplification on recognition

Comparisons of the effect of mesh simplification on face recognition are shown in Tables 1-3. The experimental results show that recognition rates can be improved when the data size is within a certain range. In addition, the range image maintains excellent performance even when the data is reduced to 6.25% of the original. For all kinds of meshes, the computation time needed to generate the facial feature images of a sample from 3D raw data is shown in Table 4; time was measured on an Intel Pentium 2.4 GHz CPU with 1 GB RAM. We find that mesh simplification not only improves the recognition rate but also reduces memory requirements and computational costs.

VI. CONCLUSION

A novel 3D face recognition method based on multi-feature and multi-level fusion was proposed in this paper. Experimental results show that our proposed weight calculation method for fusion outperforms the previous method. Recognition using match score fusion generally outperforms recognition using feature fusion or single features, although in some cases feature fusion may provide better performance. Owing to the complementary characteristics of these two fusion mechanisms, the proposed two-level fusion performs better than both. Moreover, the adopted mesh simplification method not only improves the recognition rate but also reduces memory requirements and computational costs. The proposed system achieves an excellent recognition rate of over 99% using just a small fraction of the original data.

Table 1: Recognition rates using the three kinds of feature images generated from six ranks of 3D data, based on the two recognition methods PCA and PCA+LDA respectively.

Table 2: Recognition rates using feature fusion and match score fusion, based on the old and the new weight calculation methods respectively.

Table 3: Recognition rates using feature fusion, match score fusion, two-level fusion, and no fusion respectively.

Table 4: Time spent generating the three kinds of feature images from six ranks of 3D data of a sample.

REFERENCES

[1] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3(1), 1991, pp. 71-86.

[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997, pp. 711-720.

[3] B. Moghaddam, W. Wahid, and A. Pentland, "Beyond Eigenfaces: probabilistic matching for face recognition," IEEE International Conference on Automatic Face and Gesture Recognition (FG), 1998, pp. 30-35.

[4] G. G. Gordon, "Face recognition based on depth and curvature features," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1992, pp. 108-110.

[5] C. Samir, A. Srivastava, and M. Daoudi, "Three-dimensional face recognition using shapes of facial curves," IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(11), 2006, pp. 1858-1863.

[6] T. Nagamine, T. Uemura, and I. Masuda, "3D facial image analysis for human identification," International Conference on Pattern Recognition, 1992, pp. 324-327.

[7] S. Feng, H. Krim, I. Gu, and M. Viberg, "3D face recognition using affine integral invariants," IEEE International Conference on Acoustics, Speech and Signal Processing, 2006, pp. 14-19.

[8] C. Beumier, "3D face recognition," Computational Intelligence for Homeland Security and Safety, 2004, pp. 93-96.

[9] Y. F. Sun, H. L. Tang, and B. C. Yin, "The 3D face recognition algorithm fusing multi-geometry features," Acta Automatica Sinica, 34(12), 2008, pp. 1483-1489.

[10] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," Proceedings of the 7th International Symposium on Signal Processing and Its Applications, 2003, pp. 201-204.

[11] Y. Xiang and G. Su, "Multi-parts and multi-feature fusion in face recognition," IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW'08), 2008, pp. 1-6.

[12] A. Ross and R. Govindarajan, "Feature level fusion using hand and face biometrics," Proceedings of the SPIE Conference on Biometric Technology for Human Identification, 2004, pp. 196-204.

[13] Y. Fu, L. Cao, G. Guo, and T. S. Huang, "Multiple feature fusion by subspace learning," ACM International Conference on Image and Video Retrieval (CIVR), 2008, pp. 127-134.

[14] Y. F. Zhou, C. M. Zhang, and P. He, "Feature preserving mesh simplification algorithm based on square volume measure," Chinese Journal of Computers, 2009, pp. 203-212.

[15] T. S. Gieng, B. Hamann, K. I. Joy, G. L. Schussman, and I. J. Trotts, "Smooth hierarchical surface triangulations," Proceedings of IEEE Visualization, 1997, pp. 379-386.

[16] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: a literature survey," ACM Computing Surveys, 35(4), 2003, pp. 399-458.
