
Robust face recognition using wavelets and neural networks

Ph.D. Rubén Machucho Cadena

Istanbul, Turkey, September 2013


Contents

1. Introduction: Motivation; Objectives

2. Methodology: Stage 1: State of the art; Stage 2: Proposed Solution; Stage 3: Implementation and Results

3. Conclusions


Introduction
Automatic face recognition system

In recent years, face recognition has become a popular area of research.

More accurate identification/verification than traditional systems.
Increased computing capabilities.
A large number of application areas:

Government: Law Enforcement, Security, Immigration.

Commercial: Missing Children/Runaways, Internet, E-commerce, Gaming Industry.


Introduction
Biometric Systems

Biometric systems are automated, mostly computerized systems using distinctive physio-biological or behavioural measurements of the human body that serve as a (supposedly) unique indicator of the presence of a particular individual.

Face images are easy to get.
Contactless authentication.
Low hardware cost.


Motivation

Despite the progress made in recent years, the face recognition problem has not been completely solved.

The need for systems with a higher level of accuracy and robustness remains an open research topic.


Objectives

1. Propose a feature extraction technique which uses the discrete wavelet transform.

2. Determine the most suitable wavelet base and decomposition levels for use in face recognition systems.

3. Design a neural network to classify faces.

4. Determine the best parameter configuration for the proposed NN.

5. Compare the proposed net with a backpropagation net.


Methodology

Stage 1: State of the art. Review of face recognition algorithms that use neural networks and wavelets.

Stage 2: Proposed Solution. Design of the face recognition system.

Stage 3: Implementation and Results. Implementation of the proposed system; system experimentation and validation; conclusions.


Stage 1: State of the art
Wavelet theory

The wavelet transform can be successfully applied to the analysis and processing of non-stationary signals, e.g., speech and image processing, data compression, communications, etc.

The wavelet transform is able to construct a high-resolution time-frequency representation of a signal.

A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor.


Stage 1: State of the art
Discrete Wavelet Transform (DWT)

The filtering steps double the amount of data relative to the original signal, which makes it necessary to downsample; see the sketch below.

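Not code from the thesis: a minimal PyWavelets sketch of one decomposition step, assuming the pywt package and a synthetic toy signal, showing how each filter pair is followed by dyadic downsampling so the data volume stays in check.

    import numpy as np
    import pywt

    # A toy non-stationary signal of 128 samples.
    signal = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * np.random.randn(128)

    # One DWT level: low-pass filter -> downsample gives the approximation cA;
    # high-pass filter -> downsample gives the detail cD.
    cA, cD = pywt.dwt(signal, 'db4')

    # Without the downsampling step the two filter outputs together would
    # double the data; with it, each band is about half the input length.
    print(len(signal), len(cA), len(cD))  # 128 67 67 (padding adds a few samples)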

Stage 1: State of the art
Bidimensional DWT

Apply a low-pass filter (L) and a high-pass filter (H) to the rows and columns of the image:

LL: Approximations.
LH: Horizontal details.
HL: Vertical details.
HH: Diagonal details.

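Again a sketch rather than the author's code: one bidimensional DWT level with PyWavelets; the bior1.3 base is assumed here only because it is one of the bases tested later in the thesis.

    import numpy as np
    import pywt

    # Stand-in for a grayscale face image.
    image = np.random.rand(80, 80)

    # One 2-D DWT level: the L and H filters are applied along the rows
    # and columns, giving four subbands of roughly half the size each way.
    LL, (LH, HL, HH) = pywt.dwt2(image, 'bior1.3')

    # LL: approximations, LH: horizontal details,
    # HL: vertical details, HH: diagonal details.
    print(LL.shape, LH.shape, HL.shape, HH.shape)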

Stage 1: State of the art
Neural Networks

Artificial neural networks are models inspired by animal central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition.

Name          Input/Output relation
Hard limit    a = 0 if n < 0;  a = 1 if n >= 0
Linear        a = n
Log-sigmoid   a = 1 / (1 + e^(-n))

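For concreteness, a small NumPy sketch (not from the thesis) of the three transfer functions in the table:

    import numpy as np

    def hard_limit(n):
        # a = 0 for n < 0, a = 1 for n >= 0
        return np.where(n < 0, 0.0, 1.0)

    def linear(n):
        # a = n
        return n

    def log_sigmoid(n):
        # a = 1 / (1 + e^-n)
        return 1.0 / (1.0 + np.exp(-n))

    n = np.array([-2.0, 0.0, 2.0])
    print(hard_limit(n), linear(n), log_sigmoid(n))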

Stage 1: State of the art
Neural Network Architecture

A neural network's architecture refers to the organization and arrangement of its neurons into layers or groups of neurons.


Stage 1: State of the art
Training an Artificial Neural Network

Once a network has been structured for a particular application, that network is ready to be trained. To start this process the initial weights are chosen randomly. Then the training, or learning, begins.

Supervised Training
In supervised training, both the inputs and the outputs are provided.
The network then processes the inputs and compares its resulting outputs against the desired outputs.
Errors are then propagated back through the system, causing the system to adjust the weights which control the network; a minimal sketch follows.

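A schematic NumPy illustration of this supervised loop, not the thesis network: the layer sizes, learning rate and data here are arbitrary placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((60, 400))            # hypothetical input feature vectors
    T = rng.random((60, 1))              # desired (target) outputs

    # Initial weights are chosen randomly.
    W1 = 0.1 * rng.standard_normal((400, 3))
    W2 = 0.1 * rng.standard_normal((3, 1))

    def sigmoid(n):
        return 1.0 / (1.0 + np.exp(-n))

    for epoch in range(1000):
        # Forward pass: the network processes the inputs.
        H = sigmoid(X @ W1)              # hidden layer
        Y = H @ W2                       # linear output layer
        # Compare the resulting outputs against the desired outputs.
        E = Y - T
        # Propagate the errors back and adjust the weights.
        dW2 = H.T @ E / len(X)
        dW1 = X.T @ ((E @ W2.T) * H * (1.0 - H)) / len(X)
        W2 -= 0.1 * dW2
        W1 -= 0.1 * dW1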

Related work

E. Gumus, N. Kilic, A. Sertbas and O. N. Ucan, “Evaluation of face recognition techniques using PCA, wavelets and SVM”, 2010

This work uses the wavelet transform and the PCA technique for the feature extraction stage; a distance classifier and Support Vector Machines (SVMs) are used for the classification step. The authors reported a recognition rate above 95%.

S. Kakarwal and R. Deshmukh, “Wavelet Transform based Feature Extraction for Face Recognition”, 2010

The authors propose the use of the wavelet transform to get a set of principal characteristics of each face, and the correlation method for the classification stage. They report good performance on frontal and side-view images.

M. Mazloom and S. Kasaei, “Face Recognition using Wavelet, PCA, and Neural Networks”, 2005

The authors propose a face recognition method that combines wavelets, PCA and a backpropagation neural network. They reported a recognition rate of 90.35%.


Stage 2: Proposed solution
Proposed System Architecture


Stage 2: Proposed solution
Image Preprocessing: Histogram equalization

Histogram equalization is a method in image processing of contrast adjustment using the image’s histogram.

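A minimal OpenCV sketch, assuming the opencv-python package and a hypothetical input file name:

    import cv2

    # Histogram equalization spreads the grey levels over the full range,
    # improving contrast before feature extraction.
    gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)  # 'face.jpg' is hypothetical
    equalized = cv2.equalizeHist(gray)
    cv2.imwrite('face_eq.jpg', equalized)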

Stage 2: Proposed solution
Image Preprocessing: Face detection and segmentation

The Viola-Jones object detection framework, proposed in 2001 by Paul Viola and Michael Jones, was the first object detection framework to provide competitive object detection rates in real time.

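OpenCV ships trained Viola-Jones (Haar cascade) detectors; a minimal detection-and-segmentation sketch, with a hypothetical file name:

    import cv2

    # Load one of OpenCV's bundled frontal-face cascades.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)  # 'face.jpg' is hypothetical
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Segment (crop) the first detected face.
    if len(faces) > 0:
        x, y, w, h = faces[0]
        face = gray[y:y + h, x:x + w]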

Stage 2: Proposed solution
Image Preprocessing: Face size normalization

Image interpolation works in two directions, and tries to achieve the best approximation of a pixel’s color and intensity based on the values of the surrounding pixels:

Nearest neighbor
Bilinear interpolation
Bicubic interpolation

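A sketch of size normalization with the three kernels above, using OpenCV's resize; the 80 x 80 target size is an assumption borrowed from the feature extraction examples later on.

    import cv2

    face = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical detected face

    # Normalize the face to a fixed size with each interpolation kernel.
    nearest  = cv2.resize(face, (80, 80), interpolation=cv2.INTER_NEAREST)
    bilinear = cv2.resize(face, (80, 80), interpolation=cv2.INTER_LINEAR)
    bicubic  = cv2.resize(face, (80, 80), interpolation=cv2.INTER_CUBIC)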

Stage 2: Proposed solution
Feature extraction: (Optional) Log-polar conversion

Useful for dealing with rotation and scale issues.

Log-polar images are based on a polar plane represented by rings and sectors:

ξ = √(x² + y²),  η = arctan(y/x)

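A rough nearest-neighbour NumPy sketch of the ring/sector resampling, not the thesis implementation; the ring and sector counts are arbitrary. In this representation, rotation and scaling of the face become shifts along the sector and ring axes.

    import numpy as np

    def log_polar(image, rings=64, sectors=64):
        """Resample an image onto a (ring, sector) grid: the radius follows
        xi = sqrt(x**2 + y**2) (sampled on a log scale, hence 'log-polar')
        and the angle follows eta = arctan(y / x)."""
        h, w = image.shape
        cy, cx = h / 2.0, w / 2.0
        # Exponentially spaced ring radii, from 1 pixel to the image border.
        r = np.exp(np.linspace(0.0, np.log(min(cx, cy)), rings))
        theta = np.linspace(-np.pi, np.pi, sectors, endpoint=False)
        xs = np.clip((cx + r[:, None] * np.cos(theta)).astype(int), 0, w - 1)
        ys = np.clip((cy + r[:, None] * np.sin(theta)).astype(int), 0, h - 1)
        return image[ys, xs]  # nearest-neighbour sampling

    face = np.random.rand(80, 80)       # stand-in for a detected face
    print(log_polar(face).shape)        # (64, 64)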

Stage 2: Proposed solution
Feature extraction: DWT

1. Let J be the number of decomposition levels.
2. Let F be the wavelet filter used for the decomposition.
3. Apply the discrete wavelet transform to the detected face, using the low-pass and high-pass filters obtained from F, as many times as directed by J.
4. Take the approximation coefficients, discarding the detail coefficients (sketched below).

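A sketch of these four steps with PyWavelets, not the author's code; mode='periodization' is assumed here so that each level halves the image exactly.

    import numpy as np
    import pywt

    J = 2          # number of decomposition levels
    F = 'db4'      # wavelet filter (Daubechies 4)

    face = np.random.rand(80, 80)       # stand-in for the detected face

    # Apply the DWT J times; 'periodization' halves each dimension per level.
    coeffs = pywt.wavedec2(face, F, level=J, mode='periodization')

    # Keep the approximation coefficients, discard the details.
    approx = coeffs[0]                  # coeffs[1:] are the detail subbands
    feature_vector = approx.ravel()
    print(approx.shape, feature_vector.size)   # (20, 20) 400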

Stage 2: Proposed solution
Feature extraction: (Optional) Apply entropy

H(X) = −k ∑_{i=1}^{n} p(x_i) log p(x_i)

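A histogram-based NumPy estimate of this entropy over a coefficient array, as a sketch; the bin count and the constant k are arbitrary choices.

    import numpy as np

    def entropy(coeffs, bins=256, k=1.0):
        """H(X) = -k * sum_i p(x_i) log p(x_i), estimated from a histogram."""
        hist, _ = np.histogram(coeffs, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # 0 * log 0 is taken as 0
        return -k * np.sum(p * np.log(p))

    values = np.random.randn(40, 40)      # e.g. wavelet approximation coefficients
    print(entropy(values))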

Stage 2: Proposed solution
Feature extraction: (Optional) Apply autocorrelation

It is the correlation of a signal with itself.
It provides information about the structure of an image.

G(a, b) = ∑_{x=1}^{M} ∑_{y=1}^{N} i(x, y) · i(x − a, y − b)

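A direct NumPy sketch of this sum for non-negative lags, not the thesis code:

    import numpy as np

    def autocorrelation(image, a, b):
        """G(a, b) = sum over x, y of i(x, y) * i(x - a, y - b),
        restricted to the pixels where both factors are defined (a, b >= 0)."""
        M, N = image.shape
        return np.sum(image[a:, b:] * image[:M - a, :N - b])

    img = np.random.rand(40, 40)        # e.g. wavelet approximation coefficients
    # G(0, 0) is simply the energy of the image; other lags describe texture.
    print(autocorrelation(img, 0, 0), autocorrelation(img, 2, 3))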

Stage 2: Proposed solution
Feature extraction: (Optional) Apply sampling

Reduces the dimensionality of the characteristic vector that will be sent to the neural network.

Supposing that the size of the detected face is 80 x 80 pixels and we are using a second decomposition level, the approximation band is 20 x 20 (80 → 40 → 20), i.e. a 400-element vector.

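A small NumPy illustration of the resulting sizes; the step-of-2 subsampling is an assumption for illustration only.

    import numpy as np

    # An 80 x 80 face after two DWT levels leaves a 20 x 20 approximation
    # band, i.e. a 400-element characteristic vector.
    approx = np.random.rand(20, 20)
    features = approx.ravel()

    # Optional sampling: keep every 2nd coefficient to shrink the vector
    # sent to the neural network (the step of 2 is an arbitrary choice).
    sampled = features[::2]
    print(features.size, sampled.size)  # 400 200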

Stage 2: Proposed solution
Classification: Proposed neural network


Stage 3: Implementation and Results
Faces database

Public face database Faces94 (http://cswww.essex.ac.uk/mv/allfaces/faces94.html):

Images of 153 persons, with 20 snapshots of each one of them.
Image resolution: 180 by 200 pixels (portrait format).
Minor variation in image lighting, head pose and head scale.


Stage 3: Implementation and Results
Experimental design

Validation and results of the feature extraction phase
Experiments at this stage will allow us to find the best method combination (log-polar, autocorrelation, entropy), wavelet base and decomposition level for use in a face recognition system.

Validation and results of the classification phase
Experiments will be directed at finding the best configuration parameters for the proposed neural network.

Validation and results of the preprocessing phase
This test will allow us to assess the benefit of implementing a preprocessing stage in the proposed system.


Stage 3: Implementation and Results
Validation and results of the feature extraction phase

Method combination:
Log-polar (optional).
DWT.
Entropy or autocorrelation (optional).

Wavelet bases: Bior 1.3, Daubechies 4 and Coif 5.

For classification we use the proposed neural net, with the following configuration parameters:

Number of neurons:      Layers 1, 2, 3 and 4: 3;  Layer 5: 1
Minimum error:          0.01
Activation function:    Layers 1, 2, 3 and 4: Sigmoid;  Layer 5: Linear


Stage 3: Implementation and Results
Validation and results of the feature extraction phase

Recognition rate using the available method combinations.

ND  BW        Train patterns   W       W_A     LP_W    LP_W_A
2   Daub 4    100%             85%     86.6%   65%     55%
2   Bior 1.3  100%             77%     79%     66.7%   71.7%
2   Coif 5    100%             72%     72%     58.3%   50%
3   Daub 4    100%             80%     85%     68.3%   18.3%
3   Bior 1.3  100%             84%     83%     45%     56.6%
3   Coif 5    100%             78%     82%     36.6%   26.6%

ND: number of decomposition levels; BW: wavelet base. The W, W_A, LP_W and LP_W_A columns are recognition rates on test patterns.
W: Wavelet, A: Autocorrelation, LP: Log-polar


Stage 3: Implementation and Results
Validation and results of the classification phase

Recognition rate obtained by varying the number of neurons and the network minimum error.

Neurons: 2
Error     Training time   Rec. rate (training)   Rec. rate (test)
.3        >1 s            100%                   78.3%
.2        >1 s            100%                   76.6%
.1        1 s             100%                   88.3%
.01       2 s             100%                   91.6%
.001      3 s             100%                   70%
.0001     5 s             100%                   66.6%
.00001    5 s             100%                   75%
.000001   7 s             100%                   65%


Neurons: 4
Error     Training time   Rec. rate (training)   Rec. rate (test)
.3        >1 s            100%                   78.3%
.2        >1 s            100%                   78.3%
.1        1 s             100%                   88%
.01       2 s             100%                   95.33%
.001      2 s             100%                   81.6%
.0001     4 s             100%                   85%
.00001    6 s             100%                   81%
.000001   7 s             100%                   81.6%


Neurons: 6
Error     Training time   Rec. rate (training)   Rec. rate (test)
.3        1 s             100%                   76.6%
.2        2 s             100%                   76.6%
.1        2 s             100%                   83.3%
.01       3 s             100%                   85%
.001      6 s             100%                   83.3%
.0001     7 s             100%                   76.6%
.00001    7 s             100%                   78.33%
.000001   8 s             100%                   80%


Neurons: 8
Error     Training time   Rec. rate (training)   Rec. rate (test)
.3        2 s             100%                   73.3%
.2        2 s             100%                   83.3%
.1        1 s             100%                   81.66%
.01       6 s             100%                   90%
.001      7 s             100%                   88.33%
.0001     11 s            100%                   85%
.00001    14 s            100%                   85%
.000001   16 s            100%                   76.6%


Stage 3: Implementation and Results
Validation and results of the preprocessing phase

Comparison of recognition rates obtained by applying a preprocessing stage, in contrast to omitting it.


Stage 3: Implementation and Results
Validation and results of the preprocessing phase

Comparison with a backpropagation neural net.


Conclusions

We presented a new framework for face recognition, using the discrete wavelet transform and neural networks.

The following relevant results were obtained:

Preprocessing
We detected an increase of approximately 5% in the recognition rates obtained, which shows that applying techniques that improve the visual quality of the image has a positive influence on overall system performance.


Feature extraction
The use of the Daubechies 4 wavelet, the second decomposition level and the autocorrelation method gives a recognition rate of 95.33%; this allows us to ascertain that the wavelet transform is an excellent image decomposition and texture description tool.

Classification
It was shown that the proposed neural network is a feasible and efficient option for face recognition tasks, since it improved the recognition rates and decreased the training time in comparison with a backpropagation network.


Thank you

Questions


References

[1] R.C. Gonzalez and R.E. Woods. Digital Image Processing. Springer US, 2008.

[2] Ergun Gumus, Niyazi Kilic, Ahmet Sertbas, and Osman N. Ucan. Evaluation of face recognition techniques using PCA, wavelets and SVM. Expert Systems with Applications, 37(9):6404–6408, 2010.

[3] R. Jafri and H.R. Arabnia. A survey of face recognition techniques. Journal of Information Processing Systems, 5(2):41–68, June 2009.

[4] S.N. Kakarwal and R.R. Deshmukh. Wavelet transform based feature extraction for face recognition. IJCSA, Issue I, June 2010.

[5] F. Khalid and L. N. A. 3D face recognition using multiple features for local depth information. IJCSNS International Journal of Computer Science and Network Security, 9(1):27–32, 2009.

[6] Masoud Mazloom and Shohreh Kasaei. Face recognition using wavelet, PCA, and neural networks. 2005.

[7] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips. Face recognition: A literature survey. ACM Computing Surveys, pages 399–458, 2003.