
The 19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2013)

Multi-Pose Face Recognition Using Double-Stage Classification: SMLDA and Fusion of Scale-Invariant Features

I Gede Pasek Suta Wijaya
Dept. of Informatics Engineering and Electrical Engineering, Faculty of Engineering, Mataram University, Mataram, Indonesia.
[email protected]

Keiichi Uchimura and Gou Koutaki
Dept. of Computer Science and Electrical Engineering, GSST, Kumamoto University, Kumamoto, Japan
{uchimura, koutaki}@cs.kumamoto-u.ac.jp

Abstract- This paper presents an alternative technique for multi-pose face recognition using double-stage classification: shifting mean LDA (SMLDA) and a fusion of scale-invariant features (FSIF) based face descriptor. The first stage is employed to find the best class candidates that are similar to the query image; the second stage (i.e., FSIF) is employed to find the best matched class corresponding to the query image. The aims of this method are to handle the large face variability due to pose variations, to decrease the computational time of FSIF-based face recognition, and to avoid using a 3D scanner for estimating pose variations of a face image, without decreasing the recognition performance. The experimental results show that the proposed method can overcome large face variability due to face pose variations, needs short computational time, and gives a better recognition rate than the previous method. In addition, the proposed method also provides a better recognition rate than 3D-based methods without requiring a 3D scanner.

Keywords- multi-pose, face descriptor, LDA, SIFT, and face recognition.

I. INTRODUCTION

The availability of current technology has encouraged researchers to carry out many studies on face recognition. Several methods [1] on this topic have been reported, such as holistic-based methods, feature-based (structural) matching, and hybrid methods. The methods belonging to the first category are principal component analysis (PCA), linear discriminant analysis (LDA), probabilistic decision-based neural networks, and their variations or combinations. The methods included in the second category are dynamic link architecture, hidden Markov models (HMM), and convolutional neural networks (CNN). The methods included in the third category are modular eigenfaces, hybrid local feature methods, flexible appearance models, and face regions and components. However, face recognition still has several challenges in terms of pose and illumination variations, which are known as the most difficult problems in 2D face recognition. 3D scanning techniques have been proposed to overcome these problems in face recognition. However, the 3D scanner is an expensive tool for data acquisition.



In this paper, we propose an alternative technique for multi-pose face recognition using double-stage classification: shifting mean LDA (SMLDA) and a fusion of scale-invariant features (FSIF) based face descriptor. The aims of this method are to handle the large face variability due to pose variations, to decrease the computational time of FSIF-based face recognition, and to avoid using a 3D scanner for estimating pose variations of a face image, without decreasing the recognition rate. In essence, an abstraction of the 3D face image is created by fusing the scale-invariant features of several pose variations of the face image. In this case, the scale-invariant features are extracted by the scale-invariant feature transform (SIFT) algorithm.

This paper is organized as follows: Section 2 describes the previous works that are most closely related to our proposed method; Section 3 explains our proposed method, including the shifting mean algorithm for the face classifier and the FSIF descriptor extraction, as well as their advantages; Section 4 presents the experimental results and discussion; and the final section concludes the paper.

II. PREVIOUS WORKS

The works related to our approach are face recognition methods based on holistic or global matching, as described in Refs. [1]-[12]. Ref. [1] presented a comprehensive state of the art of face recognition and current two-dimensional face recognition algorithms. However, face recognition still has several challenges in terms of pose and illumination variations, which are known as the most difficult problems in 2D face recognition. 3D scanning techniques have been proposed to overcome these problems in face recognition.

Recent techniques for 3D face recognition for handling the multi-pose problem have been reviewed in Ref. [2], and a PCA-based algorithm for 3D face recognition has been developed in Ref. [3]. As presented in Refs. [4,5], some statistical approaches, such as Cook's Gaussian mixture-based Iterative Closest Point algorithm and Lee's Extended Gaussian Image model, have been proposed for solving the multi-pose problem. In addition, algorithms based on geometric features, curvature feature-driven approaches, profile-based face description, and a hierarchical system using range and texture information for feature extraction have become available.

Furthermore, 3D face recognition based on multi-features and multi-feature fusion of face images for the multi-pose problem was reported in Refs. [4,5,12]. In this case, the multi-features are extracted from the 3D face image using three approaches, namely the maximal curvature image, the average edge image, and the range image, respectively. The fused features are built using a weighted linear combination. However, all of the proposed methods except the FSIF-based method [12] require a 3D scanner/camera, which is an expensive tool for data registration, while the FSIF-based method [12] requires a long processing time.

In addition, suppose we have face images with large variability due to pose variations, as shown in Fig. 1. The previous methods [8]-[11], which perform face recognition using PCA, LDA, and predictive linear discriminant analysis (PDLDA) with holistic features as a dimensional reduction of the input face image, show good enough performance for the face pose variations labeled P4, P5, and P6 in Fig. 1. In particular, the PDLDA-based approach gave a 99.52% recognition rate in the off-line test mode and provided a recognition rate, false rejection rate, and false acceptance rate of about >98%, 2%, and 4%, respectively, with a short processing time in the real-time test mode [11]. These results were achieved by considering the chrominance components of the face image, which were extracted using the YCbCr color space transformation. However, several problems remain in terms of the large variability of the face due to lighting conditions and the large variability due to face pose variations. In other words, it does not work at all for the face pose variations labeled P1-P3 and P7-P9 of the data in Fig. 1. Therefore, we have to find another solution that can solve this problem, especially for the multi-pose variations shown in Fig. 1.

III. PROPOSED METHODS

The proposed face recognition algorithm is illustrated briefly in Fig. 2. There are two main processes: SMLDA processing as the first-stage face feature classifier, and the fusion of scale-invariant features (FSIF) based face descriptor, shortly called the fusion face descriptor (FFD), processing as the second-stage classifier, which classifies only the first five class candidates determined by SMLDA.

The SMLDA process performs the first classification in order to define class candidates for the next process. In this case, the training-set IDs with the five smallest scores are kept as class candidates. Thus, from the first-stage classifier, we obtain the five ID candidates that are most similar to the query face image.

Fig. 1. Large variability of face due to pose variations


The second-stage classifier then tries to find the best ID corresponding to the query face.

The second-stage verification is done using the following steps:

1. Load all of the face training images that correspond to the candidate IDs.

2. Extract the FSIF of each face image from the training set using the algorithm presented in sub-section 3.B and represent them as [D_1, ..., D_5].

3. Extract the FSIF of the query face image and represent it as D_q.

4. Match D_q with [D_1, ..., D_5] to find the number of matching key points. The number of matching key points of two descriptors (D_q and D_j) can be determined by the following procedure, where D1 is an n x c1 matrix and D2 is an n x c2 matrix of key-point descriptors (one descriptor per column):

import numpy as np

def no_of_match(D1, D2, thresh):
    # Count the columns of D1 whose nearest neighbour in D2 passes the ratio
    # test: the closest distance (good) must be at least `thresh` times
    # smaller than the second-closest distance (best).
    n_match = 0
    for i in range(D1.shape[1]):
        good, best, idx = np.inf, np.inf, -1
        for j in range(D2.shape[1]):
            d = np.linalg.norm(D1[:, i] - D2[:, j])
            if d < good:
                best, good, idx = good, d, j
            elif d < best:
                best = d
        if thresh * good <= best and idx >= 0:
            n_match += 1
    return n_match

Finally, the verification criterion is defined based on the number of matching key points: the face descriptor that has the largest number of matching points is concluded to be the best match.
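As a rough illustration of how the four steps above fit together, the sketch below assumes the five candidate FSIF matrices chosen by the first stage are held in a dictionary keyed by class ID and reuses no_of_match from the listing above; the default threshold value is an assumption, not a value taken from the paper.

def second_stage_verify(query_fsif, candidate_fsifs, thresh=1.25):
    # candidate_fsifs: {class_id: FSIF matrix} for the five SMLDA candidates.
    # Step 4: count matching key points against every candidate.
    scores = {cid: no_of_match(query_fsif, D, thresh)
              for cid, D in candidate_fsifs.items()}
    # Verification criterion: the candidate with the most matches wins.
    best_id = max(scores, key=scores.get)
    return best_id, scores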

A. Shifting Mean LDA Based Classifier

Suppose we have a three-dimensional data cluster of two classes, normalized in the range [0-1], as shown in Fig. 4(a). By extending this illustration to n-dimensional data with L classes, where each class (the k-th) has N_k samples, the optimum projection matrix (W), which has to satisfy the Fisher criterion (Eq. 1), can be determined by eigen-analysis of S_w^{-1} S_b, followed by selecting the m orthonormal eigenvectors corresponding to the largest eigenvalues (i.e., m < n).

J_{LDA}(W) = \arg\max_W \frac{|W^T S_b W|}{|W^T S_w W|}   (1)

where both S_w and S_b are defined as follows:

S_b = \frac{1}{L} \sum_{k=1}^{L} P(x_k)(\mu_k - \mu_a)(\mu_k - \mu_a)^T   (2)

S_w = \frac{1}{N} \sum_{k=1}^{L} \sum_{i=1}^{N_k} (x_i^k - \mu_k)(x_i^k - \mu_k)^T   (3)
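As a minimal sketch of how Eqs. (1)-(3) and the eigen-analysis could be evaluated, assuming the training samples are stored as column vectors with integer class labels; the small ridge added to S_w is an assumption to keep the generalized eigenproblem well conditioned and is not part of the paper's formulation.

import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, m):
    # X: d x N matrix, one sample per column; y: length-N array of class labels.
    # Returns the d x m projection matrix W maximizing the Fisher criterion (Eq. 1).
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    d, N = X.shape
    mu_a = X.mean(axis=1, keepdims=True)                   # global mean
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    classes = np.unique(y)
    for k in classes:
        Xk = X[:, y == k]
        Nk = Xk.shape[1]
        mu_k = Xk.mean(axis=1, keepdims=True)
        Sb += (Nk / N) * (mu_k - mu_a) @ (mu_k - mu_a).T   # Eq. (2) with P(x_k) = N_k / N
        Sw += (Xk - mu_k) @ (Xk - mu_k).T                  # inner sum of Eq. (3)
    Sb /= len(classes)
    Sw /= N
    # Eq. (1): solve the generalized eigenproblem S_b w = lambda S_w w and keep
    # the m eigenvectors with the largest eigenvalues (the ridge is an assumption).
    evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return evecs[:, np.argsort(evals)[::-1][:m]]

The projected features of the training and query sets would then be obtained by applying W to the holistic feature vectors, as described below for the original LDA.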


Fig. 2. Block diagram of the face recognition system.


This LDA algorithm has been implemented successfully for face recognition, providing good and stable performance on both small and large sample-size data, as explained in Refs. [9]-[11]. However, all data samples have to be retrained to obtain the optimum projection matrix when new data samples enter the system. The retraining has to be done because S_b depends on the global mean, which has to be recalculated when a new data sample comes into the system. In order to avoid this problem and to decrease its computational load, we simplify S_b using the shifting mean algorithm as follows.

S_b = \frac{1}{L} \sum_{k=1}^{L} N_k (\mu_k - \mu_a)(\mu_k - \mu_a)^T
    = \frac{1}{L}\left[\sum_{k=1}^{L} N_k \mu_k \mu_k^T + a\,\mu_a \mu_a^T - \tau \mu_a^T - \mu_a \tau^T\right]
    = \frac{\theta}{L} - \mu_a \mu_a^T   (4)

where \theta = \sum_{k=1}^{L} N_k \mu_k \mu_k^T, a = N = \sum_{k=1}^{L} N_k, \tau = \sum_{k=1}^{L} N_k \mu_k, \mu_a = \frac{1}{N}\sum_{k=1}^{L} N_k \mu_k = \frac{\tau}{a}, and P(x_k) = \frac{N_k}{N}. If new data comes into the system, S_b can be updated as follows:

S_b^u = \frac{1}{L + N_n}\left(\theta + N_n \mu_n \mu_n^T\right) - \mu_a^u (\mu_a^u)^T
      = \frac{1}{L + N_n}\left(\theta_{n-1} + \theta_n\right) - \mu_a^u (\mu_a^u)^T   (5)

where

\mu_a^u = \frac{1}{L + N_n}\left(L \mu_a + N_n \mu_n\right)   (6)

By using this simplification, the updated S_b has exactly the same scatter as the original one (Eq. (2)) while requiring less computational complexity. In detail, to update S_b using Eq. (5), we only need to calculate θ_n, μ_a^u, and μ_a^u(μ_a^u)^T, which require 2n^2 multiplications and (n^2 + n) additions, whereas Eq. (2) requires (L+1)n^2 multiplications and (L+1)n^2 additions.
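The update can be kept in code as two running quantities, θ and the global mean; the sketch below is only illustrative, follows Eqs. (4)-(6) as written above, and assumes the class means and per-class sample counts of the current training set are available.

import numpy as np

class ShiftingMeanSb:
    # Maintain theta = sum_k N_k mu_k mu_k^T and the global mean mu_a so that
    # S_b can be refreshed via Eqs. (4)-(6) without retraining the old classes.
    def __init__(self, class_means, class_counts):
        mus = np.asarray(class_means, dtype=float)   # L x d array of class means
        Ns = np.asarray(class_counts, dtype=float)   # samples per class
        self.L = len(Ns)
        self.theta = (mus.T * Ns) @ mus              # sum_k N_k mu_k mu_k^T
        self.mu_a = (Ns @ mus) / Ns.sum()            # global mean tau / N

    def add_class(self, mu_n, N_n):
        mu_n = np.asarray(mu_n, dtype=float)
        self.theta += N_n * np.outer(mu_n, mu_n)                        # theta_{n-1} + theta_n
        self.mu_a = (self.L * self.mu_a + N_n * mu_n) / (self.L + N_n)  # Eq. (6)
        self.L += N_n                                                   # denominator of Eq. (5)
        return self.theta / self.L - np.outer(self.mu_a, self.mu_a)     # updated S_b, Eq. (5)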

In addition, the within-class scatter S_w (Eq. 3), which does not depend on the global mean, can be redefined as follows,

S_w = \frac{1}{N} \sum_{k=1}^{L} S_w^k   (7)


where S_w^k = \sum_{j=1}^{N_k} (x_j^k - \mu_k)(x_j^k - \mu_k)^T.

Then, it can be simplified as follows,

S_w = \frac{1}{N}\left\{\sum_{k=1}^{L-1} S_w^k + S_w^L\right\}
    = \frac{1}{N_{new}}\left\{S_w^{old} + S_w^{new}\right\}   (8)

with N_{new} = N_{old} + N_L.
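A small sketch of this within-class update, under the assumption that the unnormalized running sum of per-class scatters is stored alongside the old sample count (function and variable names are illustrative):

import numpy as np

def update_within_scatter(Sw_sum_old, N_old, X_new):
    # X_new: d x N_L matrix holding the samples of the newly added class.
    X_new = np.asarray(X_new, dtype=float)
    mu_L = X_new.mean(axis=1, keepdims=True)
    Sw_sum_new = Sw_sum_old + (X_new - mu_L) @ (X_new - mu_L).T   # add S_w^L
    N_new = N_old + X_new.shape[1]                                # Eq. (8)
    return Sw_sum_new / N_new, Sw_sum_new, N_new                  # S_w, new sum, N_new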

The optimum projection matrix is obtained by substituting S_b with S_b^u in the LDA eigen-analysis and then selecting the eigenvectors that correspond to the largest eigenvalues. This optimum projection matrix is called the alternative LDA projection matrix (W_ALDA). The projection of the features of both the training and query data sets can be performed using W_ALDA, as done with the original LDA.

For the matching process, the Euclidean distance with the nearest-neighbor rule is used for face classification, and negative samples (non-training faces and non-face images) are used to define the threshold for face verification. If the minimum score is less than the defined threshold, the input data is verified as a known face with a positive ID; otherwise, it is concluded to be a negative or unknown face.
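A minimal sketch of this first-stage matching and verification, assuming the gallery and query features have already been projected with W_ALDA and the threshold has been tuned on negative samples; the function name and the way duplicate IDs are skipped are assumptions, not the paper's implementation.

import numpy as np

def first_stage_classify(q, gallery_feats, gallery_ids, threshold, n_candidates=5):
    # q: projected query feature (m,); gallery_feats: N x m projected training features.
    d = np.linalg.norm(gallery_feats - q, axis=1)      # Euclidean distances
    order = np.argsort(d)
    if d[order[0]] >= threshold:                       # verification: unknown face
        return "unknown", []
    candidates = []                                    # IDs with the five smallest scores
    for i in order:
        if gallery_ids[i] not in candidates:
            candidates.append(gallery_ids[i])
        if len(candidates) == n_candidates:
            break
    return gallery_ids[order[0]], candidates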

B. Fusion Face Descriptors

The fusion face descriptor is a fusion of scale-invariant features from selected pose-variation features, which represents the 3D image information. In this case, the features are extracted from 2D face images. Therefore, to realize this idea, we require a set of 2D face images that represent a sub-sampling of the 3D face image, as shown in Fig. 1.

From these images, we extract the FSIF, which starts by extracting invariant features from several face images using the SIFT algorithm [6,7], then removes the redundant features using intersection (∩) and subtraction operations, and finally fuses all of the non-redundant features into one descriptor using the union (∪) operation. The FSIF is thus an abstract representation of the 3D face image. A detailed explanation of the FSIF extraction algorithm can be found in Refs. [12,13].
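To make this construction concrete, here is a minimal sketch assuming OpenCV 4.4 or later (which ships SIFT in the main module); the distance-based redundancy test and its threshold are only stand-ins for the intersection/subtraction criterion detailed in Refs. [12,13].

import cv2
import numpy as np

def build_fsif(pose_images, redund_thresh=200.0):
    # Fuse SIFT descriptors extracted from several pose images of one person
    # into a single FSIF matrix (one 128-D descriptor per column).
    # redund_thresh is an illustrative cut-off for treating two descriptors as
    # the same (redundant) feature seen in two poses.
    sift = cv2.SIFT_create()
    fused = None
    for img in pose_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        _, desc = sift.detectAndCompute(gray, None)
        if desc is None:
            continue
        if fused is None:
            fused = desc
            continue
        # subtraction step: keep only descriptors that are not near-duplicates
        dists = np.linalg.norm(desc[:, None, :] - fused[None, :, :], axis=2)
        fused = np.vstack([fused, desc[dists.min(axis=1) > redund_thresh]])
    return None if fused is None else fused.T          # union of non-redundant features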

In this case, the FSIF is two-dimensional data represented as a matrix. The benefit of this representation is that it is simple and requires less memory space compared to the real 3D data. Furthermore, the more 2D face images are included for building the descriptor, the richer the face descriptor becomes. Consequently, if a richer face descriptor is used for recognition, a higher recognition rate will be achieved.

IV. EXPERIMENTS AND RESULTS

A. Experiment Setup

In order to evaluate the performance of the proposed method, several experiments using data from challenging databases, the ITS-Lab. [11,12], ORL [14], CVL [15], and GTAV [16] databases, were performed on a PC with a Core Duo 1.7 GHz processor and 2 GB of RAM. Examples of the face pose variations from the mentioned databases are shown in Fig. 3. A detailed explanation of each database can be found in Refs. [11,12].

In addition, all experiments were performed under the following conditions:

Fig. 3. Examples of the face pose variations of a single person.


• The size of both the training and query face images was set to 128 × 128 pixels.

• Image stretching was applied before holistic feature extraction, and the holistic feature size was set to 53 elements per image for the first-stage classifier (see the preprocessing sketch below).
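A short sketch of this preprocessing, assuming OpenCV for resizing and min-max stretching; the paper does not spell out how the 53-element holistic feature is formed, so the zig-zag DCT selection below is purely an illustrative assumption (cf. the DCT-domain features of Ref. [10]).

import cv2
import numpy as np

def preprocess_and_holistic(img, feat_len=53):
    # Resize to 128 x 128, stretch the intensity range, and take a short
    # holistic feature vector for the first-stage classifier.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    face = cv2.resize(gray, (128, 128))
    face = cv2.normalize(face, None, 0, 255, cv2.NORM_MINMAX)   # image stretching
    coeffs = cv2.dct(np.float32(face) / 255.0)
    # Collect low-frequency coefficients in zig-zag order (assumed feature choice).
    feat, k = [], 0
    while len(feat) < feat_len:
        for i in range(k + 1):
            j = k - i
            feat.append(coeffs[i, j] if k % 2 == 0 else coeffs[j, i])
            if len(feat) == feat_len:
                break
        k += 1
    return np.array(feat)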

B. Results and Discussion

First, two experiments were performed to evaluate the performance of the multi-stage classifier for handling the large face variability due to pose variations. The first test was done on the ITS-Lab and ORL databases, which contain small pose variations (see Fig. 3(a,b)) but large data samples (each class has at least 10 pose variations). From these data sets, the first three face images (P1-P3) were selected for training and the remaining images were used as testing data. The experimental results show that the proposed method provides a sufficiently high recognition rate for the tested pose variations, as shown in Table 1. From Table 1, the multi-stage classifier significantly improves on the SM-LDA method and requires almost the same retraining and querying time. Although the FSIF-based classifier has been reported to overcome the multi-pose problem, it requires much longer training and querying time than SM-LDA+FSIF. This means the multi-stage classifier (i.e., SM-LDA and FSIF) is an alternative solution for the multi-pose problem in face recognition.

Next, the second test was done on the CVL and GTAV databases, which represent large face pose variations. From the CVL database, the first three face images (P1-P3) were selected for training and the remaining images were used as testing data, while from the GTAV database, the first five face images (P1-P5) were selected for training and the remaining images were used as testing data. The experimental results are in line with those of Table 1 (see Table 2), which means the multi-stage classifier significantly improves on the SM-LDA method and requires almost the same retraining and querying time. In addition, even though the FSIF-based classifier could solve the multi-pose problem, it requires much longer training and querying time than SM-LDA+FSIF. Comparing Table 1 and Table 2, the former shows higher recognition rates for any pose variation than the latter; this occurs because the CVL and GTAV databases contain large face variability due to face pose variations. In other words, the second, FSIF-based classifier can overcome the multi-pose difficulty because the FSIF contains an estimation of the pose-varied face features.

The next experiment was done to evaluate the performance of the proposed method compared to recent 3D methods that work based on combinations of multi-features (MF) and multi-feature fusion (MFF) of the 3D face image with PCA and PCALDA (see Refs. [4,5]) for multi-pose face recognition. In this test, we compare the three best variants of those methods, called MF+PCA, MF+PCALDA, and MFF+PCALDA. The experiment was done on the ITS-Lab face database version 1, which is a 3D face database containing 40 classes, where each class consists of 10 face pose variations. The face images were acquired by a Konica Minolta 3D camera, series VIVID 900. The testing parameters were set up in the same way as in Refs. [4,5]: five images of each class were chosen as the training set and the remaining images were used for testing.

TABLE I. THE PERFORMANCE OF THE PROPOSED METHOD COMPARED TO SINGLE-STAGE CLASSIFIERS ON DATABASES CONTAINING SMALL POSE VARIATIONS.

TABLE II. THE PERFORMANCE OF THE PROPOSED METHOD COMPARED TO SINGLE-STAGE CLASSIFIERS ON DATABASES CONTAINING LARGE POSE VARIATIONS.

In addition, we also compared the performance of our proposed method with the baseline method in the experiment using three face images as the training set.

The experimental results show that our proposed method achieves almost the same recognition rate as the MFF+PCALDA method (99.71% and 99.98%, respectively), even though SMLDA-FSIF considers only three face images for training. When the number of training faces is increased to five per class, our proposed method recognizes all of the testing data (see Table 3). This means SMLDA-FSIF is an alternative solution for building multi-pose face recognition with reasonable achievements compared to 3D-based face recognition. Even though the recognition rate of FSIF+LDA is not much different from that of MFF+PCALDA, our proposed method does not require multi-features consisting of three feature vectors and does not need a 3D camera sensor for building the features at all. In other words, our proposed method is the cheapest 2D-based face recognition approach that can be implemented for real-time multi-pose face recognition with a 2D web camera for image capture.

In addition, the more images are considered (trained) for building the face descriptor, the higher the recognition rate that is achieved, as shown in Table 1. The proposed method can recognize all testing images, which means each training class contains a richer face descriptor when more face images are trained.


TABLE III. THE RECOGNITION RATE OF THE PROPOSED METHOD COMPARED TO MULTI-FEATURE AND PCALDA BASED 3D FACE RECOGNITION [4,5].

No.  Methods        Recognition Rate (%) by Number of Training Images
                    3          5
1    MF+PCA         NA         94.08
2    MF+PCALDA      NA         99.34
3    MFF+PCALDA     NA         99.98
4    SMLDA          97.67      98.48
5    SMLDA+FSIF     99.71      100.00

V. CONCLUSION AND FUTURE WORKS

The double-stage (SMLDA-FSIF) face recognition approach can handle large face variability due to pose variations. The proposed method gives a sufficient recognition rate against face pose variations and provides a better recognition rate than recent 3D face recognition methods. The main achievement is that the SMLDA-FSIF approach needs much shorter computational time than FSIF-based face recognition. This means our proposed method can potentially be implemented for real-time face recognition with a 2D web camera as the face image sensor.

This research will be applied to real-time face recognition and developed into a security system. In addition, we will try to visualize the FSIF as 3D images and consider developing compact and powerful features to decrease the computational time and to increase the recognition rate.

ACKNOWLEDGEMENTS

I would like to express my appreciation and my great thanks to the Innovation Project of Kumamoto University, which gave much support to this research, and my great thanks to the owners of the ORL, CVL, and GTAV databases.

REFERENCES

[1]. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: a literature survey", ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, December 2003.

[2]. K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics", IEEE Transactions on PAMI, vol. 27, pp. 619-624, April 2005.

[3]. K. I. Chang, K. W. Bowyer, and P. J. Flynn, "Face recognition using 2D and 3D facial data", Multimodal User Authentication Workshop, pp. 25-32, 2003.

[4]. Z. Cuicui, IGPS. Wijaya, G. Koutaki, and K. Uchimura, "3D face recognition based on multi-features of face image", Proceedings of the IEE-Japan Conference, Kumamoto, Japan, October 2010, (CDROM).

[5]. Z. Cuicui, K. Uchimura, C. Zhang, and G. Koutaki, "3D face recognition using multi-level multi-feature fusion", Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT 2010), Singapore, pp. 21-26, November 2010.

[6]. D. G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.


[7]. D. G. Lowe, "Object recognition from local scale-invariant features", Proceedings of the International Conference on Computer Vision, Corfu, September 1999.

[8]. M. Turk and A. Pentland, "Eigenfaces for recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

[9]. H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition", Pattern Recognition, vol. 34, pp. 2067-2070, 2001.

[10]. W. Chen, J-E. Meng, and S. Wu, "PCA and LDA in DCT domain", Pattern Recognition Letters, vol. 26, pp. 2474-2482, 2005.

[11]. IGPS. Wijaya, K. Uchimura, and Z. Hu, "Improving the PDLDA based face recognition using lighting compensation", Proceedings of the Workshop of Image Electronics and Visual Computing 2010, Nice, France, March 2010, (CDROM).

[12]. IGPS. Wijaya, K. Uchimura, and G. Koutaki, "Multi-pose face recognition using fusion of scale invariant features", International Congress on Computer Applications and Computational Science (CACS), Bali, Indonesia, November 2011.

[13]. IGPS. Wijaya, K. Uchimura, and G. Koutaki, "Face recognition in crowded environmental", Proceedings of the 2013 International Workshop on Advanced Image Technology, Nagoya, pp. 1022-1027, January 2013.

[14]. F. Samaria and A. Harter, "Parameterization of a stochastic model for human face identification", Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, Florida, pp. 138-142, 1994.

[15]. http://lrv.fri.uni-lj.si/facedb.html

[16]. http://gps-tsc.upc.es/GTAV/ResearchAreas/UPCFaceDatabase/GTAVFaceDatabase.htm