    Dynamic Local Feature Analysis for Face Recognition

    Johnny NG and Humphrey CHEUNG

Titanium Technology Research Centre, 10/F, Tianjin Building, 167 Connaught Road West, Hong Kong, PR China

    {Johnny.ng, Humphrey.cheung}@titanium-tech.com

Abstract. This paper introduces an innovative method, Dynamic Local Feature Analysis (DLFA), for human face recognition. In the proposed method, the face shape and the facial texture information are combined using the Local Feature Analysis (LFA) technique. The shape information is obtained with our proposed adaptive edge detection method, which reduces the effect of different lighting conditions, while the texture information provides the details of the normalized facial features in the image. Finally, the shape and texture information are combined by means of LFA for dimension reduction. As a result, a high recognition rate is achieved even when faces are enrolled under different or poor lighting conditions.

    1 Introduction

Face recognition has become one of the most important biometric authentication technologies in recent years. At least two reasons explain why it has received such extensive attention. First, face recognition has many applications, such as biometric systems, content-based video processing, and law enforcement; the widespread use of photo ID for personal identification and security creates an obvious need for a robust automatic face recognition system. Second, although extremely reliable methods of biometric personal identification exist, such as fingerprint and iris scans, face recognition remains effective because it requires no cooperation or special knowledge from the participant. Moreover, building an automatic face recognition system is normally cheaper than building a personal identification system based on fingerprint or iris scans.

Because the human face is a highly variable object, it is difficult to develop a fast and robust face recognition system: the recognition rate may be affected by the presence of glasses, facial hair, facial expressions, lighting conditions, and so on. To reduce these problems, we combine the texture and the shape information using the Local Feature Analysis (LFA) technique to develop a robust face recognition algorithm. The shape information is obtained with our proposed adaptive edge detection method, which reduces the effect of different lighting conditions, while the texture information provides the details of the normalized facial features in the image.

The organization of this paper is as follows. In Section 2, a literature review summarizes recent work by other researchers. In Section 3, we present our proposed Dynamic Local Feature Analysis method, which is based on the LFA formalism combined with the texture space and the edge information. In Section 4, experimental results are presented to support the significance of this research. Finally, a conclusion is given in Section 5.

2 Literature Review

In 1995, Chellappa et al. [1] summarized the existing techniques for human face recognition. Later, Zhao, Chellappa, and Rosenfeld provided more detailed information about human face recognition in a technical report [2]. From both reports, we can see that most current face recognition algorithms fall into two classes: image template based or geometry feature based. Template-based methods [3] compute the correlation between a face and one or more model templates to estimate the face identity. Statistical tools such as Support Vector Machines (SVM) [5], Linear Discriminant Analysis (LDA) [4], Principal Component Analysis (PCA) [6], kernel methods [7], and neural networks [10] have been used to construct a suitable set of face templates. Wiskott et al. developed an elastic bunch graph matching algorithm for face recognition [8]. However, it may not achieve a high recognition rate if the enrollment and the verification are not performed under the same lighting conditions. Penev and Atick [9] therefore proposed Local Feature Analysis (LFA), a modification of PCA, to address these problems. LFA is a derivative of the eigenface method: instead of an entire-face representation, it utilizes specific features for identification, selecting particular areas of the face, such as the eyes or mouth, as the defining features for recognition.

3 Dynamic Local Feature Analysis

In this section, the novel face recognition technique Dynamic Local Feature Analysis (DLFA) is described. Our approach is divided into two main steps. The first step is preprocessing: its goal is to remove high-intensity noise, transform the input face image into a binary edge image by adaptive edge detection, and extract the texture of the face. The second step employs Local Feature Analysis to combine both the edge (face shape) and the texture information.

3.1 Preprocessing

In general, a face contains four main organs, i.e. the eyebrows, eyes, nose, and mouth, and these are very important in a face recognition system. To reduce noise and unwanted dark features around these four organs, an opening operation is applied to the input face image. This operation prevents the facial features from breaking into pieces and removes sharp, bright noise such as reflections on the eyes.
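The paper does not specify the structuring element used for this opening, so the following is only a minimal sketch of one plausible realization (a gray-scale opening with an assumed 3 x 3 element, using SciPy):

```python
import numpy as np
from scipy import ndimage

def open_face(gray_face, size=3):
    """Gray-scale morphological opening (erosion followed by dilation).
    Removes small bright spots such as reflections on the eyes while
    keeping the darker facial features (eyebrows, eyes, nose, mouth).
    The 3x3 structuring-element size is an assumption."""
    gray_face = np.asarray(gray_face, dtype=np.uint8)
    return ndimage.grey_opening(gray_face, size=(size, size))
```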

To properly generate a binary edge image from an input face image, an adaptive edge analysis is proposed in our approach. The advantage of using edges as an image feature is that they provide robustness to illumination changes and simplicity of representation. The input face image is first processed with morphological edge detection, and the resulting gray-scale edge image is then converted into binary form. A fixed threshold does not work well for this conversion because the contrast in the edge image may vary significantly. In our approach, we therefore use a dynamic threshold in each sub-block of the edge image to obtain the corresponding binary image. The threshold T is calculated dynamically in each sub-block of the face image by considering the 15% highest gray-level intensities. Assume that the histogram of a sub-block of the edge image is his(i), where i = 0, 1, ..., 255. Then T is determined as the largest value such that

    \sum_{i=T}^{255} his(i) \ge 0.15\, h w,    (1)

where h and w are the height and width of each sub-block of the edge image, respectively. Fig. 1 shows some results of the adaptive edge analysis.

Fig. 1. Adaptive edge analysis results: (a) and (c) original facial images; (b) and (d) the corresponding binary edge images.
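A minimal sketch of this adaptive binarization follows. The morphological gradient used as the edge detector and the 16 x 16 sub-block size are assumptions, since the paper does not fix either choice:

```python
import numpy as np
from scipy import ndimage

def morphological_edges(gray, size=3):
    """Morphological gradient (dilation minus erosion) as one common form of
    morphological edge detection; the 3x3 element is an assumption."""
    gray = np.asarray(gray, dtype=np.int16)
    edges = (ndimage.grey_dilation(gray, size=(size, size))
             - ndimage.grey_erosion(gray, size=(size, size)))
    return np.clip(edges, 0, 255).astype(np.uint8)

def block_threshold(block, fraction=0.15):
    """Largest T such that sum_{i=T}^{255} his(i) >= fraction * h * w (Eq. 1)."""
    h, w = block.shape
    his = np.bincount(block.ravel(), minlength=256)   # his(i), i = 0..255
    tail = np.cumsum(his[::-1])[::-1]                 # pixels with intensity >= i
    ok = np.nonzero(tail >= fraction * h * w)[0]
    return int(ok.max()) if ok.size else 0

def adaptive_binary_edges(gray, block=16, fraction=0.15):
    """Binarize a gray-scale face image sub-block by sub-block with
    dynamically chosen thresholds; the 16x16 sub-block size is an assumption."""
    edges = morphological_edges(gray)
    out = np.zeros_like(edges, dtype=np.uint8)
    H, W = edges.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            sub = edges[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (sub >= block_threshold(sub, fraction))
    return out
```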

3.2 Local Feature Analysis

Local Feature Analysis (LFA) defines a set of topographic, local kernels that are optimally matched to the second-order statistics of the input ensemble [9]. The kernels are derived from the principal component axes, and consist of sphering the PCA coefficients to equalize their variance, followed by a rotation back to pixel space. We begin with the zero-mean data matrix of original images, X, which contains the texture information of the training images followed by the edge information. We then calculate the principal component eigenvectors P from the covariance matrix S = P D P^T. Penev and Atick [9] defined a set of kernels K as

    K = P V P^T,    (2)


where

    V = D^{-1/2} = \mathrm{diag}\bigl(1/\sqrt{\lambda_i}\bigr), \quad i = 1, \ldots, p,

and \lambda_i are the eigenvalues of S. The rows of K contain the kernels. The kernels have spatially local properties and are topographic in the sense that they are indexed by spatial location. The kernel matrix K transforms X to the LFA output O = K X^T. Note that the matrix V is the inverse square root of the covariance matrix of the principal component coefficients: this transform spheres the principal component coefficients (normalizes their output variance to unity) and minimizes the correlations in the LFA output. Another way to interpret the LFA output O is that it is the image reconstruction using sphered PCA coefficients, O = P (V P^T X^T).
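A numerical sketch of Eq. (2) and the LFA output is given below. The arrangement of X (rows = training samples, columns = pixel values) and the choice of p leading components are our assumptions, as the paper does not fix them:

```python
import numpy as np

def lfa_kernels(X, p):
    """Compute the LFA kernel matrix K = P V P^T (Eq. 2) from a zero-mean
    data matrix X; assumes the leading p eigenvalues of S are positive."""
    S = (X.T @ X) / X.shape[0]             # covariance matrix, S = P D P^T
    eigvals, eigvecs = np.linalg.eigh(S)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:p]  # keep the p leading components
    D = eigvals[order]
    P = eigvecs[:, order]                  # n_pixels x p
    V = np.diag(1.0 / np.sqrt(D))          # V = D^(-1/2)
    K = P @ V @ P.T                        # Eq. (2)
    return K, P, V

def lfa_output(K, X):
    """LFA output of the training set (Sec. 3.2): O = K X^T."""
    return K @ X.T
```

Each column of O = K X^T is then an n-dimensional LFA output (one value per pixel) for one training sample.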

3.3 Sparsification of LFA

LFA produces an n-dimensional representation, where n is the number of pixels in the images. Since we have n outputs described by only p < n linearly independent variables, there are residual correlations in the output. The representation is therefore sparsified by iteratively selecting a subset M of output points: at each step, the point whose output is predicted most poorly by linear regression on the points already in M is added to M. Denoting by O_x the vector of outputs at point x over the training ensemble, and by O_M the matrix whose columns are O_m for m in M, the reconstruction is

    O^{rec}_x = O_M \bigl( O_M^T O_M \bigr)^{-1} O_M^T O_x.    (5)

Equation (5) can also be expressed in terms of the correlation matrix of the outputs, C = O^T O:

    O^{rec}_x = O_M\, C(M, M)^{-1}\, C(M, x),    (6)

since C(M, M) = O_M^T O_M and C(M, x) = O_M^T O_x. The termination condition is |M| = N.
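A greedy selection consistent with Eqs. (5) and (6) can be sketched as follows. Here O is arranged with one row per training sample and one column per output point; the seed point and the use of a pseudo-inverse are our own implementation choices rather than details given in the paper:

```python
import numpy as np

def sparsify(O, n_points):
    """Greedy sparsification sketch in the spirit of Eqs. (5)-(6): repeatedly
    add the output point that is currently reconstructed worst by linear
    regression on the already selected points M, until |M| = n_points."""
    C = O.T @ O                                  # correlation matrix of the outputs
    M = [int(np.argmax(np.diag(C)))]             # seed with the most energetic point (assumption)
    while len(M) < n_points:
        A = np.linalg.pinv(C[np.ix_(M, M)]) @ C[M, :]   # C(M,M)^-1 C(M, x) for every point x
        O_rec = O[:, M] @ A                             # Eq. (6): O_M C(M,M)^-1 C(M, x)
        err = np.sum((O - O_rec) ** 2, axis=0)          # reconstruction error per point
        err[M] = -np.inf                                # never re-select chosen points
        M.append(int(np.argmax(err)))
    return M
```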

4 Experimental Results

To demonstrate the efficiency and accuracy of our face recognition method, we used a number of face databases containing images taken under roughly controlled imaging conditions as test images: the ORL face database (from the Olivetti Research Laboratory in Cambridge, UK), the Yale face database (from Yale University), the MIT face database, the FERET database, and our Ti-Lab database.

The MIT database has 144 face images of 16 distinct subjects, while the Yale database has 150 face images of 15 distinct subjects. The Ti-Lab database has 58,960 face images of 5,896 subjects. The ORL database contains 400 face images of 40 distinct persons, but only six images for each of the 40 subjects were included in the testing set. The FERET database contains over 6,000 face images of 699 distinct persons, but only 289 subjects were included in the testing set.

The experimental setup used an upright frontal view of each subject with a suitable scale and a normal facial expression to form the database. In our system, the positions of the two eyes in a face image are first located manually. Based on the eye positions, all facial images in the database and the query input are then normalized to a size of 80 x 80 pixels with 256 gray levels. Since the images come from different face databases, the pose, the lighting conditions, and the facial expressions may vary (see Fig. 2). The number of subjects and the number of testing images for each database are tabulated in Table 1. The performance of the Dynamic Local Feature Analysis technique was evaluated on each of these databases.

Table 1. The number of faces in the ORL, Yale, MIT, FERET, and Ti-Lab databases.

                 ORL   Yale   MIT   FERET   Ti-Lab    Total
    Subjects      40     15    16     289    5,896    6,256
    Testing set  240    150   144   1,445   58,960   60,939


Fig. 2. Sample face images from the Ti-Lab face database used in our experiments.

In the experiments, we implemented and evaluated the relative performance of the Dynamic Local Feature Analysis (DLFA) and Local Feature Analysis (LFA) techniques. A query image is compared with all of the face images in the database, and the face images are then ranked in ascending order of their Euclidean distances to the query, as sketched below. Table 2 tabulates the recognition rates of DLFA and LFA for each of the five databases. The experimental results show that DLFA outperforms the LFA method.

The recognition rates of DLFA on the ORL, Yale, MIT, FERET, and Ti-Lab databases are 91%, 89%, 92%, 90%, and 87%, respectively. Fig. 3 illustrates the cumulative recognition rates of both techniques on the five face databases. The experiments were conducted on a Pentium IV 2.4 GHz computer system, and the average runtime of DLFA is about 345 ms.
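A minimal sketch of this matching step, assuming each face has already been reduced to a feature vector by the preceding DLFA steps:

```python
import numpy as np

def rank_gallery(query_feature, gallery_features):
    """Rank gallery faces by Euclidean distance to the query feature vector,
    smallest distance (best match) first. Both inputs are assumed to be the
    feature vectors produced by the preceding DLFA steps."""
    dists = np.linalg.norm(gallery_features - query_feature, axis=1)
    order = np.argsort(dists)
    return order, dists[order]
```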

Table 2. Recognition rates of the LFA and DLFA face recognition techniques on the different databases.

    Database                    LFA (%)   DLFA (%)
    ORL database                   63        91
    Yale database                  62        89
    MIT database                   58        92
    FERET database                 61        90
    Ti-Lab database                62        87
    Average recognition rate       61.2      89.8

Fig. 3. Comparison of the overall cumulative recognition rates of LFA and DLFA (recognition rate versus number of best matches, rank 1 to 10).


5 Conclusions

In this paper, a robust face recognition method, Dynamic Local Feature Analysis (DLFA), which integrates the face shape information into the LFA framework, is proposed. DLFA focuses on individual features such as the eyes, the nose, the mouth, and areas of definite bone-curvature difference, such as the cheeks. The DLFA approach offers several advantages over similar facial recognition technologies: it is not as sensitive to face deformations and lighting variations as the eigenface method, and it can recognize faces that are not presented to the camera as a complete frontal view; the system can recognize an individual facing up to 25 degrees away from the camera. Experimental results on a combination of the MIT, Yale, FERET, Ti-Lab, and ORL databases show that DLFA achieves recognition rates of 89.8%, 94.7%, and 96.6% for the first one, the first three, and the first five most likely matched faces, respectively. The technique presented in this paper is computationally simple and provides a reasonable level of performance. In practice, our approach can be used as a robust human face recognition system that selects the faces most similar to an input face from a large face database.

    References

1. R. Chellappa, C.L. Wilson, and S. Sirohey. Human and machine recognition of faces: a survey. Proceedings of the IEEE, Vol. 83, pp. 705-740, 1995.

2. W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips. Face recognition: a literature survey. UMD CFAR Technical Report CAR-TR-948, 2000.

3. Robert J. Baron. Mechanisms of human facial recognition. International Journal of Man-Machine Studies, 15(2):137-178, 1981.

4. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman. Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, July 1997.

6. Matthew Turk and Alex Paul Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.

7. B. Schoelkopf, A. Smola, and K.-R. Muller. Kernel principal component analysis. In Artificial Neural Networks - ICANN'97, 1997.

8. Laurenz Wiskott, Jean-Marc Fellous, Norbert Kruger, and Christoph von der Malsburg. Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):775-779, July 1997.

9. P. Penev and J. Atick. Local feature analysis: a general statistical theory for object representation, 1996.

10. A. Jonathan Howell and Hilary Buxton. Invariance in radial basis function neural networks in human face classification. Neural Processing Letters, 2(3):26-30, 1995.