Face Detection Using Laplacian Face Approach




An image is an artifact, usually two-dimensional, that has a similar appearance to some subject, usually a physical object or a person. Images may be two-dimensional, such as a photograph or a screen display, or three-dimensional, such as a statue. They may be captured by optical devices such as cameras, mirrors, lenses, telescopes, and microscopes, or by natural objects and phenomena, such as the human eye or water surfaces. The elements of a digital image are called pixels (picture elements).

IMAGE PROCESSING

Image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video. The output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.


One of the most important building blocks of smart environments is a person identification system, and face recognition devices are ideal for such systems. A facial recognition system is a computer application that automatically detects and identifies or verifies a person from a digital image or a video frame. These systems depend on a recognition algorithm: one common approach is to compare selected facial features from the image against a facial database. Facial recognition is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems.

Usual Face Recognition

Why face recognition?

  • Verification of credit cards, personal IDs, passports
  • Bank or store security
  • Crowd surveillance
  • Access control
  • Human-computer interaction


The program takes a facial image, measures characteristics such as the distance between the eyes, the length of the nose, and the angle of the jaw, and creates a unique file called a "template." Using templates, the software then compares that image with another image and produces a score that measures how similar the two images are. Typical sources of images for facial recognition include video camera signals and pre-existing photos such as those in driver's license databases. The first step for a facial recognition system is to detect a human face and extract it from the rest of the scene. Next, the system measures nodal points on the face, such as the distance between the eyes, the shape of the cheekbones, and other distinguishable features. These nodal points are then compared to the nodal points computed from a database of pictures in order to find a match.
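The template-matching step above boils down to a distance between two feature vectors turned into a similarity score. A minimal numpy sketch (the feature names and units here are hypothetical, chosen only for illustration):

```python
import numpy as np

def similarity_score(template_a, template_b):
    """Return a similarity score in (0, 1]; 1.0 means identical templates."""
    dist = np.linalg.norm(np.asarray(template_a, dtype=float)
                          - np.asarray(template_b, dtype=float))
    return 1.0 / (1.0 + dist)

# Hypothetical templates: [eye distance, nose length, jaw angle], arbitrary units
probe   = np.array([62.0, 48.5, 118.0])
gallery = np.array([61.5, 49.0, 117.0])
print(similarity_score(probe, gallery))
```

A real system would normalize each feature before comparing, but the score-from-distance idea is the same.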

Techniques Involved

Some facial recognition algorithms identify faces by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, saving only the data in the image that is useful for face detection. A probe image is then compared with the face data. One of the earliest successful systems is based on template-matching techniques applied to a set of salient facial features, providing a sort of compressed face representation. Popular recognition algorithms include eigenface, fisherface, the Hidden Markov model, and neural network-based approaches.


The Laplacianface approach is based on three main concepts: Locality Preserving Projections (LPP), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). LPP is an algorithm for learning a locality preserving subspace and a general method for manifold learning; that is, it helps relate pixel values between subspaces that cannot be related using PCA. PCA only captures relationships between pixels that are near each other in a subspace; it does not help with pixels that are far apart. PCA and LDA aim to discover the global structure of the manifold, whereas LPP aims to discover its local structure. In this way, unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced.
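As a sketch of the idea (this follows the standard LPP formulation; the symbols below are not defined in this report), LPP seeks a projection vector $a$ that keeps neighboring face images close after projection:

```latex
\min_{a} \sum_{ij} \left( a^{T}x_{i} - a^{T}x_{j} \right)^{2} W_{ij}
\quad\Longrightarrow\quad
X L X^{T} a = \lambda X D X^{T} a
```

where $W_{ij}$ weights the edge between neighboring images $x_i$ and $x_j$, $D_{ii} = \sum_j W_{ij}$, and $L = D - W$ is the graph Laplacian. Minimizing this objective penalizes projections that map neighbors far apart, which is why LPP preserves local structure.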


The existing system used the Principal Component Analysis and Linear Discriminant Analysis concepts. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. PCA can be used for prediction, redundancy removal, feature extraction, data compression, and so on. LDA searches for the projection axes on which data points of different classes are far from each other while data points of the same class are close to each other. However, most of these algorithms consider only somewhat global data patterns during the recognition process, which does not yield an accurate recognition system.


  • Less accurate
  • Does not deal with manifold structure
  • Does not deal with biometric characteristics


The proposed system uses the Principal Component Analysis method along with Linear Discriminant Analysis and Locality Preserving Projections. It includes three concepts:

  • PCA
  • LDA
  • LPP


PCA is a simple, non-parametric method of extracting relevant information from confusing data sets. With minimal additional effort, PCA provides a roadmap for reducing a complex data set to a lower dimension to reveal the sometimes hidden, simplified dynamics that often underlie it. Principal component analysis involves a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. An important reason for using PCA as a preprocessing step is noise reduction.
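The PCA transform described above can be sketched in a few lines of numpy (a minimal implementation via SVD of the centered data, not the report's actual code):

```python
import numpy as np

def pca(X, n_components):
    """Project the rows of X onto the top n_components principal directions."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data: principal directions are the rows of Vt
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]            # (n_components, n_features)
    return X_centered @ components.T, components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                # 100 samples, 10 features
Z, W = pca(X, n_components=3)
print(Z.shape)                                # (100, 3)
```

The returned components are uncorrelated by construction, which is exactly the "smaller number of uncorrelated variables" the text refers to.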


Linear Discriminant Analysis aims to preserve the global structure. LDA explicitly attempts to model the difference between the classes of data; PCA, on the other hand, does not take class differences into account. LDA is a supervised learning algorithm. It searches for the projection axes on which data points of different classes are far from each other while data points of the same class are close to each other. Locality Preserving Projection (LPP) is an algorithm for learning a locality preserving subspace and a general method for manifold learning. It aims to preserve the local structure.
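The LDA criterion above (between-class scatter large, within-class scatter small) can be sketched with numpy as a generalized eigenproblem; this is a textbook Fisher LDA, not the report's implementation:

```python
import numpy as np

def lda_directions(X, y, n_components):
    """Fisher LDA: directions maximizing between-class over within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))   # within-class scatter
    Sb = np.zeros((n_features, n_features))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve Sw^{-1} Sb w = lambda w; keep eigenvectors with largest eigenvalues
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
W = lda_directions(X, y, n_components=1)
print(W.shape)                                # (4, 1)
```

Because the class labels drive the scatter matrices, this is supervised, unlike the PCA sketch above.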



  • High recognition rate is achieved
  • Error rate is low compared to the existing system

MODULES

Input Preprocessing

PCA Projection

Constructing Nearest Neighbor Graph

Choosing the weights of Neighboring Pixel

Recognize the Image



MODULE DESCRIPTION: Preprocessing

In preprocessing, we take a single gray image in 10 different directions, eliminate the background, and measure points in 28 dimensions for each gray image, producing the original face image and the cropped image.


We project the image set into the PCA subspace by throwing away the smallest principal components. In our experiments, we kept 98 percent of the information in the sense of reconstruction error. For the sake of simplicity, we still use x to denote the images in the PCA subspace in the following steps. We denote by WPCA the transformation matrix of PCA.
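Keeping 98 percent of the information corresponds to retaining the smallest number of principal components whose cumulative variance reaches that fraction. A minimal numpy sketch of this step (illustrative, not the report's code):

```python
import numpy as np

def pca_projection(X, keep=0.98):
    """Return the projected data and W_PCA keeping `keep` fraction of the variance."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, keep)) + 1   # smallest k reaching `keep`
    W_pca = Vt[:k].T                            # transformation matrix W_PCA
    return Xc @ W_pca, W_pca

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))                   # 60 images, 20-dim features
Z, W_pca = pca_projection(X, keep=0.98)
print(Z.shape)
```

The discarded components carry the least variance, which is why this step doubles as noise reduction before the graph construction below.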


Let G denote a graph with n nodes. The ith node corresponds to the face image xi. We put an edge between nodes i and j if xi and xj are close, i.e., xj is among k nearest neighbors of xi, or xi is among k nearest neighbors of xj. The constructed nearest neighbor graph is an approximation of the local manifold structure.
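The symmetric "or" rule above (an edge exists if either point is among the other's k nearest neighbors) can be sketched directly in numpy; this brute-force version is for illustration only and would be replaced by a spatial index for large n:

```python
import numpy as np

def knn_graph(X, k):
    """Adjacency of G: edge (i, j) if i is among j's k nearest neighbors or vice versa."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # a node is not its own neighbor
    A = np.zeros_like(d, dtype=bool)
    for i in range(len(X)):
        A[i, np.argsort(d[i])[:k]] = True       # directed k-nearest-neighbor edges
    return A | A.T                              # symmetrize: the "or" rule

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
A = knn_graph(X, k=1)
print(A.astype(int))
```

With k=1 the two tight pairs each form an edge, and the resulting adjacency matrix is symmetric, matching the undirected graph G in the text.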

MODULE DESCRIPTION: Choosing the Weights of Neighboring Pixels

Here we compare the images that have the nearest neighboring pixel values. The images in the test folder are compared with the images in the train folder. We find the locations of the eyes, nose, and mouth and extract the pixel points, using the width of the head, the distances between eye corners, the angles between eye corners, and so on, and then calculate the pixel values.
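The report does not spell out how the edge weights are computed; one common choice in the Laplacianface literature is the heat kernel, which weights connected pairs by how close they are. A hedged numpy sketch under that assumption:

```python
import numpy as np

def heat_kernel_weights(X, A, t=1.0):
    """W_ij = exp(-||x_i - x_j||^2 / t) where nodes i and j are connected, else 0."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / t)
    return np.where(A, W, 0.0)                  # zero out non-neighboring pairs

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
A = np.array([[False, True, False],
              [True, False, False],
              [False, False, False]])           # only nodes 0 and 1 are connected
W = heat_kernel_weights(X, A)
print(W[0, 1])                                  # exp(-1)
```

Closer neighbors get weights nearer to 1, so the graph Laplacian built from W penalizes separating them most strongly.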

MODULE DESCRIPTION: Recognize the Image

The measured values of the test gray image are then compared against the stored gray images. If the test image matches any gray image, the system recognizes it and shows the image; otherwise, it is not recognized.
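The match-or-reject decision described above is a nearest-neighbor search with a rejection threshold. A minimal sketch (the threshold value and labels are hypothetical):

```python
import numpy as np

def recognize(probe, gallery, labels, threshold=10.0):
    """Return the label of the closest gallery image, or None if nothing is close enough."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    best = int(np.argmin(dists))
    return labels[best] if dists[best] <= threshold else None

gallery = np.array([[1.0, 2.0], [8.0, 9.0]])    # embedded gallery images
labels = ["alice", "bob"]
print(recognize(np.array([1.1, 2.1]), gallery, labels))    # matches "alice"
print(recognize(np.array([50.0, 50.0]), gallery, labels))  # too far: None
```

In the full pipeline the vectors here would be the Laplacianface embeddings of the test and train images, not raw pixels.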

Comparison with databases

Here we compare the three methods (Eigenface, Fisherface, and Laplacianface) on the following three data sets: the YALE face database, constructed at the Yale Center for Computational Vision and Control; the PIE (Pose, Illumination, and Expression) database; and the MSRA database.

Yale Database

PIE Database

MSRA Database


Three experiments on three databases have been systematically performed. These experiments reveal a number of interesting points: in all three experiments, Laplacianfaces consistently performs better than Eigenfaces and Fisherfaces. In particular, it significantly outperforms Fisherfaces and Eigenfaces on the Yale and MSRA databases.