Emad Jaha Thesis Draft v12: Color Face Recognition Using Quaternion Principal Component Analysis (Q-PCA)


DEDICATION

I would like to dedicate my thesis to

My Parents

My Wife

and my two children


ACKNOWLEDGMENT

All my thanks are due to Almighty Allah (Subhanahu Wa Taalaa) who gave me courage

and patience to carry out this work.

I would like to thank Dr. Lahouari Ghouti for his precious time and continuous advice,

support and encouragement throughout this research. Also, my thesis committee

members, Dr. Nasir Al-Darwish and Dr. Adel Fadhl Ahmed, provided very helpful

advice, comments and feedback to improve the content of this research.

Furthermore, I would like to thank all faculty members, colleagues and friends who

shared with me the path of learning and knowledge throughout my graduate studies.

I wish to express my gratitude to my family and my wife's family members for their encouragement and patience with me.

Special thanks are due to King Abdul-Aziz University (KAU) for its generous support and scholarship to complete my Master's degree, and to King Fahd University of Petroleum and Minerals (KFUPM) for its support of this research.


TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGMENT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
THESIS ABSTRACT
THESIS ABSTRACT (ARABIC)

CHAPTER 1
1. INTRODUCTION
1.1. Biometrics
1.2. Face Recognition
1.2.1. Face Recognition Processing
1.3. Problem Statement
1.4. Motivation
1.5. Thesis Contributions
1.6. Thesis Outline

CHAPTER 2
2. LITERATURE REVIEW
2.1. Grayscale-based Face Recognition
2.1.1. Grayscale PCA-based Approach
2.1.2. Grayscale Gabor Filters Approach
2.1.3. Grayscale Wavelet-based Approach
2.2. Color-based Face Recognition
2.2.1. Color PCA-based Approach
2.2.2. Color Quaternion-based Approach

CHAPTER 3
3. QUATERNION (HYPERCOMPLEX) REPRESENTATION
3.1. Quaternion Numbers
3.2. Properties of Quaternion Numbers
3.3. Quaternion Representation of Color Face Images
3.4. Quaternion Fourier Transform (QFT)
3.5. Quaternion Gabor Filters

CHAPTER 4
4. PRINCIPAL COMPONENT ANALYSIS (PCA)
4.1. Eigenfaces
4.2. Solving PCA Using SVD
4.3. Quaternion Principal Component Analysis (Q-PCA)
4.3.1. Quaternion Singular Value Decomposition (Q-SVD)
4.3.2. QSVD-Based Color Image Compression
4.3.3. Proposed Q-PCA Algorithm

CHAPTER 5
5. METHODOLOGY AND EXPERIMENT
5.1. Research Methodology
5.2. Color Face Image Database
5.2.1. Georgia Tech Face Database (GTDB)
5.2.2. Color FERET Database
5.3. Performance Evaluation
5.3.1. Standard Testing Subsets
5.3.2. Test Design
5.3.3. Standard Performance Measures
5.3.4. Identification Performance against Probe Categories
5.3.5. Performance Comparison Using CMC in Different Resolutions
5.3.6. Variation in Identification Performance
5.3.7. GD versus ID
5.3.8. FAR versus FRR errors
5.3.9. ROC Curves

CHAPTER 6
6. CONCLUSION AND FUTURE WORK
6.1. Summary and Findings
6.2. Future Work

REFERENCES
Vita


LIST OF TABLES

Table 5.1: Summary of FERET color face database
Table 5.2: Summary of PCA-based algorithms used in this thesis
Table 5.3: Gallery and four probe sets
Table 5.4: Resolution of gallery and probe sets
Table 5.5: Subsets used to investigate the variation in performance
Table 5.6: Summary of used gallery and probe categories
Table 5.7: Match scores of the top rank for each PCA-based algorithm
Table 5.8: Average score of the total match scores of rank 50 for each PCA-based algorithm
Table 5.9: Average score of the full rank for each PCA-based algorithm in two resolutions
Table 5.10: Variation in identification performance using five different galleries from the FB probes (top match)
Table 5.11: Variation in identification performance using five different galleries from the FB probes (average of rank scores)
Table 5.12: Variation in identification performance using five different galleries from the duplicate I probes (top match)
Table 5.13: Variation in identification performance using five different galleries from the duplicate I probes (average of rank scores)
Table 5.14: Decidability index (d) for each algorithm
Table 5.15: The EER of each algorithm


LIST OF FIGURES

Figure 1.1: Main processing steps in pattern recognition systems
Figure 1.2: Biometric categories
Figure 1.3: Categorization of Face Recognition (FR) methods
Figure 1.4: Face recognition processing flow
Figure 1.5: Face detection and alignment processes
Figure 1.6: Four areas of related work and the current research location
Figure 3.1: Quaternion representation of a color face image
Figure 4.1: The PCA concept (Li et al. [14])
Figure 4.2: In the top gray-eigenfaces and in the bottom color-eigenfaces
Figure 4.3: Q-SVD-based image compression; (a), (b), (c) and (d) are the reconstructed images with k = 8, 16, 25, 144 respectively, where (d) is the perfect reconstruction of the original image
Figure 5.1: One subject face images (gallery and probes)
Figure 5.2: Design scheme of the testing procedure
Figure 5.3: GD versus ID and FAR versus FRR errors
Figure 5.4: FAR versus FRR curves
Figure 5.5: Identification performance using the FB with 192×128 pixels resolution
Figure 5.6: Identification performance using the FB with 384×256 pixels resolution
Figure 5.7: Identification performance using the duplicate I with 192×128 pixels resolution
Figure 5.8: Identification performance using the duplicate I with 384×256 pixels resolution
Figure 5.9: Identification performance using the fc with 192×128 pixels resolution
Figure 5.10: Identification performance using the fc with 384×256 pixels resolution
Figure 5.11: Identification performance using the duplicate II with 192×128 pixels resolution
Figure 5.12: Identification performance using the duplicate II with 384×256 pixels resolution
Figure 5.13: The CMC curves for PCA using the FB probes
Figure 5.14: The CMC curves for PCA using the duplicate I probes
Figure 5.15: The CMC curves for PCA using the fc probes
Figure 5.16: The CMC curves for PCA using the duplicate II probes
Figure 5.17: The CMC curves for QPCA using the FB probes
Figure 5.18: The CMC curves for QPCA using the duplicate I probes
Figure 5.19: The CMC curves for QPCA using the fc probes
Figure 5.20: The CMC curves for QPCA using the duplicate II probes
Figure 5.21: The CMC curves for Red-PCA using the FB probes
Figure 5.22: The CMC curves for Red-PCA using the duplicate I probes
Figure 5.23: The CMC curves for Red-PCA using the fc probes
Figure 5.24: The CMC curves for Red-PCA using the duplicate II probes
Figure 5.25: The CMC curves for Green-PCA using the FB probes
Figure 5.26: The CMC curves for Green-PCA using the duplicate I probes
Figure 5.27: The CMC curves for Green-PCA using the fc probes
Figure 5.28: The CMC curves for Green-PCA using the duplicate II probes
Figure 5.29: The CMC curves for Blue-PCA using the FB probes
Figure 5.30: The CMC curves for Blue-PCA using the duplicate I probes
Figure 5.31: The CMC curves for Blue-PCA using the fc probes
Figure 5.32: The CMC curves for Blue-PCA using the duplicate II probes
Figure 5.33: The CMC curves for Avg RGB-PCA using the FB probes
Figure 5.34: The CMC curves for Avg RGB-PCA using the duplicate I probes
Figure 5.35: The CMC curves for Avg RGB-PCA using the fc probes
Figure 5.36: The CMC curves for Avg RGB-PCA using the duplicate II probes
Figure 5.37: The CMC curves for SCFT-CPCA using the FB probes
Figure 5.38: The CMC curves for SCFT-CPCA using the duplicate I probes
Figure 5.39: The CMC curves for SCFT-CPCA using the fc probes
Figure 5.40: The CMC curves for SCFT-CPCA using the duplicate II probes
Figure 5.41: Identification performance using the FB probes of (Set 1) and rank of 50
Figure 5.42: Identification performance using the FB probes of (Set 2) and rank of 50
Figure 5.43: Identification performance using the FB probes of (Set 3) and rank of 50
Figure 5.44: Identification performance using the FB probes of (Set 4) and rank of 50
Figure 5.45: Identification performance using the FB probes of (Set 5) and rank of 50
Figure 5.46: Identification performance using the duplicate I probes of (Set 1) and rank of 50
Figure 5.47: Identification performance using the duplicate I probes of (Set 2) and rank of 50
Figure 5.48: Identification performance using the duplicate I probes of (Set 3) and rank of 50
Figure 5.49: Identification performance using the duplicate I probes of (Set 4) and rank of 50
Figure 5.50: Identification performance using the duplicate I probes of (Set 5) and rank of 50
Figure 5.51: GD versus ID for PCA algorithm
Figure 5.52: GD versus ID for QPCA algorithm
Figure 5.53: GD versus ID for Red PCA algorithm
Figure 5.54: GD versus ID for Green PCA algorithm
Figure 5.55: GD versus ID for Blue PCA algorithm
Figure 5.56: GD versus ID for Avg RGB PCA algorithm
Figure 5.57: GD versus ID for SCFT CPCA algorithm
Figure 5.58: FAR versus FRR curve for PCA algorithm
Figure 5.59: FAR versus FRR curves for QPCA algorithm
Figure 5.60: FAR versus FRR curves for Red PCA algorithm
Figure 5.61: FAR versus FRR curves for Green PCA algorithm
Figure 5.62: FAR versus FRR curves for Blue PCA algorithm
Figure 5.63: FAR versus FRR curves for Avg RGB PCA algorithm
Figure 5.64: FAR versus FRR curves for SCFT CPCA algorithm
Figure 5.65: PCA and QPCA algorithms ROC curves
Figure 5.66: ROC curve for Red PCA algorithm
Figure 5.67: ROC curve for Green PCA algorithm
Figure 5.68: ROC curve for Blue PCA algorithm
Figure 5.69: ROC curve for Avg RGB PCA algorithm
Figure 5.70: ROC curve for SCFT CPCA algorithm


THESIS ABSTRACT

Name: Emad Sami Jaha

Title: Color Face Recognition Using Quaternion Principal Component Analysis (Q-PCA)

Major Field: Information and Computer Science

Date of Degree: November 2010

Recently, biometric systems have attracted the attention of both academic and industrial

communities. Advances in hardware and software technologies have paved the way to

such growing interest. Nowadays, efficient and cost-effective biometric solutions are

continuously emerging. Fingerprint-based biometric systems have pioneered the

commercial applications. Face and iris traits have been proven to be reliable candidates.

Until recently, face recognition research literally followed the research undertaken in the

field of fingerprint recognition which is inherently gray-scale. In this research, efforts are

restricted to the investigation of subspace representations in the color domain. The

concept of principal component analysis (PCA) implemented via singular value decomposition (SVD) is carried over into the hypercomplex (i.e., quaternionic) domain to define quaternionic PCA (Q-PCA), where color faces are compactly represented. Unlike the

existing approaches for handling the color information, the proposed research implicitly

accounts for the correlation that exists between the color components (i.e., red, green and

blue components).


THESIS ABSTRACT (ARABIC)

Name: Emad bin Sami bin Mohammed Jaha

Thesis Title: Recognition of Color Face Images Using Quaternion (Four-Dimensional Hypercomplex) Principal Component Analysis

Major: Computer Science

Date of Graduation: Dhul-Qi'dah 1431 AH

Recently, the attention of both the academic and industrial communities has turned to biometric identification systems. Advances in hardware and software technologies have paved the way for such growing interest, and many efficient and cost-effective biometric systems are now continuously emerging. Fingerprint-based biometric systems pioneered the commercial applications, and the face and iris traits have proven to be reliable candidates for building such biometric systems. Until recently, research on face recognition literally followed the path of the research conducted on fingerprint recognition, which by its nature relies on images represented in gray scale. In this research, efforts are restricted to subspace representations of images in the color domain. The concept of principal component analysis, computed via singular value decomposition, is used to represent face images compactly in the quaternion (four-dimensional hypercomplex) form. Unlike the existing approaches for handling color information, the proposed research takes into account the correlation between the color components of the image (i.e., red, green and blue).


CHAPTER 1 

1. INTRODUCTION

The revolution in modern technologies has explicitly influenced people's daily lives. Technology has become widely used in many forms and everywhere. In fact, it has facilitated several needs such as communication, automated processes, and enforced security. Many existing security-based systems, such as banking systems, use different forms of technology. In such sensitive systems, a facility for user authentication and identification is considered an urgent need. Commonly used solutions include the username/password and electronic and smart cards. Although these solutions are effective and have many advantages, they suffer from serious, hard-to-control disadvantages. Several violations can easily occur, such as the use of a stolen username/password or electronic/smart card by an unauthorized person. Another drawback is losing or forgetting the identity means (i.e., the username/password or smart card), which can also prevent the legitimate person from accessing the system. Various other violations and problems can occur just as effortlessly.

Due to such disadvantages, biometric solutions have emerged and started playing a major role in developing efficient authentication and identification systems that are free from the above-mentioned drawbacks. Biometric-based systems make use of various unique human traits. The majority of earlier research efforts focused on fingerprint-based biometric systems; after that, face and iris traits became the focus of very active research. Face recognition (FR) has many advantages over other biometric traits because it is natural, nonintrusive, and easy to use.


Face recognition research has literally followed fingerprint recognition research in the methodologies and techniques used; therefore, most published research has used inherently gray-scale images. However, promising results have been reported by research that uses color images for FR purposes. The FR problem can be cast into the broader area of pattern recognition.

In simple terms, an automated pattern recognition system uses a computer and its integrated components (hardware and software), instead of human observation, to recognize or identify different samples using prior knowledge of similar models.

Any pattern recognition system comprises three main steps: preprocessing, feature extraction, and classification, as shown in Figure 1.1. Preprocessing is a set of preparatory operations applied to the input data, such as resizing, filtering, localization, and normalization. Features are the individual measurable heuristic properties of the phenomena being observed. These properties are used in comparisons between a new, unknown sample and already-known models to determine the class to which the sample belongs. Classifiers are mechanisms that assign to each point representing a pattern in the feature space a class label or membership scores over the defined classes.

Figure 1.1: Main processing steps in pattern recognition systems.


1.1. Biometrics

A biometric is defined as “a measurable, physical characteristic or personal behavioral

trait used to recognize the identity, or verify the claimed identity, of an enrollee” [1].

There exist several different biometrics, comprising methods for uniquely recognizing humans based on one or more intrinsic physical or behavioral characteristics. Biometric characteristics can be classified into the two main categories listed below and depicted in Figure 1.2:

Physiological biometrics are related to the shape of the body, e.g., fingerprint, face and iris recognition, hand and palm geometry, and DNA.

Behavioral biometrics are related to the behavior of a person, e.g., typing rhythm (keystroke dynamics), gait, signature, and voice.

Figure 1.2: Biometric categories.

(Diagram: Physiological biometrics: Face, Fingerprint, Hand, Iris, DNA. Behavioral biometrics: Keystroke, Gait, Signature, Voice.)


A biometric system can operate in the following two modes [4]:

Verification (or authentication) is a one-to-one matching or comparison of a captured biometric with a stored template, verifying that the individual is who he or she claims to be by comparing a query face image against a template face image.

Identification (or recognition) is a one-to-many matching or comparison of the captured biometric against all the template images in a biometric database, in an attempt to identify an unknown individual; when a query face is matched only against a list of suspects, this becomes a one-to-few match.
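To make the two modes concrete, the following minimal Python sketch (our own illustration with hypothetical feature vectors and a simple Euclidean-distance matcher, not the matching scheme used later in this thesis) contrasts one-to-one verification with one-to-many identification.

```python
import numpy as np

def verify(query, template, threshold=0.5):
    """One-to-one matching: accept the claim if the feature distance is small enough."""
    return np.linalg.norm(query - template) <= threshold

def identify(query, gallery):
    """One-to-many matching: return enrolled IDs sorted from best to worst match."""
    distances = {pid: np.linalg.norm(query - feat) for pid, feat in gallery.items()}
    return sorted(distances, key=distances.get)

# Toy gallery of enrolled feature vectors (hypothetical data)
gallery = {pid: np.random.rand(64) for pid in ["alice", "bob", "carol"]}
probe = gallery["bob"] + 0.01 * np.random.randn(64)
print(verify(probe, gallery["bob"], threshold=1.0))   # True: the identity claim is accepted
print(identify(probe, gallery)[0])                    # "bob" is ranked first
```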

1.2. Face Recognition

A face recognition (FR) system is simply an attempt to emulate the face recognition task that a human performs routinely, effortlessly, and frequently throughout life. Thus, face recognition is defined as the ability of a computer to receive and interpret face image input. Such a system is expected to identify faces presented in images and videos automatically, and it can operate in either a single or a dual mode (verification, identification, or both). Like other biometric systems, an FR system is capable of [1]:

1) Capturing a face sample from an end user.

2) Extracting biometric data from that sample.

3) Comparing the biometric data with that contained in one or more reference

templates.

4) Deciding how well they match.


5) Indicating whether or not an identification or verification of identity has been

achieved.

Face recognition has undergone approximately fifteen years of intensive development in both the academic and commercial arenas [1]. Although several different approaches exist in the academic literature, they can be divided into four major classes of algorithms:

1) Eigenface systems, based on eigenfaces and eigenvectors.

2) Local feature analysis systems, based on the analysis of local features.

3) Neural network methods, based on machine learning techniques.

4) Gabor filter methods, based on Gabor filters and wavelet analysis.

Each class has its own advantages and disadvantages [1]. The diagram shown in Figure 1.3 demonstrates this classification of FR methods.

Although FR is a very popular and promising research topic, the task also tends to be difficult due to unconstrained conditions such as viewpoint, illumination, expression, occlusion, and accessories.


Figure 1.3: Categorization of Face Recognition (FR) methods.

 1.2.1. Face Recognition Processing

The face recognition processing flow, depicted in Figure 1.4, consists of four modules: detection, alignment, feature extraction, and matching. Generally speaking, the system's modules or steps can be divided into two main stages: a preprocessing stage and a recognition stage. The former includes face detection and alignment (localization and normalization); facial feature extraction and matching constitute the latter.

(Figure 1.3 diagram: FR methods are divided by image type, gray-scale versus color, and by approach: appearance-based, space-domain methods (PCA, Q-PCA, ICA, LDA) and feature-based, frequency-domain methods (Gabor filters, wavelets).)

Figure 1.4: Face recognition processing flow.

(Figure 1.4 diagram: image or video → face detection and tracking (face location, size and pose) → face alignment (aligned face) → feature extraction (feature vector) → feature matching against the database of enrolled users → face ID; detection and alignment form the preprocessing stage, while feature extraction and matching form the recognition stage.)


Face detection segments the face areas from the background, which is known as the "non-face" region. Face alignment aims at more accurate localization and at normalizing the faces; it refines the coarse estimates of the location and scale of each face provided by the face detection module. Figure 1.5 illustrates the face detection and alignment processes. The two processes yield the outlines of the located and normalized facial components.

Feature extraction is performed to provide effective information that is useful for distinguishing between the faces of different persons and that is stable with respect to geometric and photometric variations. Face matching matches the extracted feature vector of the input face against those of the enrolled faces in the database. It is worth noting that the face recognition results depend highly on the features extracted to represent the face pattern and on the classification methods used to distinguish between faces. Also, the extraction of distinct and effective features depends mainly on the face localization and normalization processes.

Figure 1.5: Face detection and alignment processes.


1.3. Problem Statement

The human face trait has been proven to be a reliable candidate for person identification. As a result, face recognition has become a very active research field, and considerable research effort has been devoted to it. However, the great majority of these efforts have used gray-scale representations of the face information. Although gray-scale representation is considered one of the more powerful approaches to face recognition, it has the major disadvantage of implicitly and explicitly discarding the valuable color information. As a remedy to this drawback, recent approaches have used the color information; however, each color component (i.e., red, green and blue) is used individually. In this case, color independence is assumed, which violates a basic principle of computer vision. Indeed, such approaches could perform better than their gray-scale counterparts if they made constructive use of the color information.

1.4. Motivation

In this thesis, a solution based on a hypercomplex (i.e., quaternionic) image representation is proposed, where the color information is used in a "holistic" manner. Hence, unlike most existing approaches, which handle the color information separately, the proposed face recognition approach implicitly accounts for the correlation that exists between the color components. Figure 1.6 illustrates the proposed approach as an intersection of four different areas.


1.5. Thesis Contributions

The major contributions of this research work are:

A novel scheme for efficient color face representation and processing for a face-based biometric recognition system.

The concepts of principal component analysis (PCA) and singular value decomposition (SVD) are extended and carried over into the hypercomplex (quaternionic) domain to compactly represent color face images.

A detailed analysis of the incorporation of color information in face recognition is provided.

Figure 1.6: Four areas of related work and the current research location.


Performance improvement by using color information and the Q-PCA method.

An extended performance evaluation is performed, and comparisons from different perspectives are provided.

1.6. Thesis Outline

This thesis is organized as follows. Chapter 1 provides an introduction to biometric traits and systems, and lays out face recognition concepts and techniques. In Chapter 2, a set of previous related works is presented and categorized into grayscale-based and color-based face recognition approaches.

An overview of quaternion (hypercomplex) concepts is provided in Chapter 3, where basic definitions, properties, and representations are discussed in detail. Chapter 4 defines the eigenface method for face recognition and gives a brief background on principal component analysis (PCA). Furthermore, a detailed description of the proposed quaternionic principal component analysis (Q-PCA) method is presented, along with the related mathematical formulation of singular value decomposition (SVD) and its quaternionic counterpart (Q-SVD).

In Chapter 5, the methodology and experimental results are reported and discussed along with a detailed performance analysis, and a performance comparison with current state-of-the-art methods is carried out. Finally, Chapter 6 gives a summary of the thesis work along with the contributions made in this thesis in light of the research and the testing results. The chapter concludes with an outline of some proposed research directions in which the work described in this thesis can be further investigated.


CHAPTER 2 

2. LITERATURE REVIEW

Biometrics is a very active area that attracts a great deal of research effort. Biometric systems provide novel solutions to different access control and security applications. Face recognition techniques have evolved from simple and limited to advanced, sophisticated, and far less constrained.

2.1. Grayscale-based Face Recognition

Zhao et al. [24] provide a comprehensive and critical survey of the face recognition literature. Furthermore, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are discussed.

2.1.1. Grayscale PCA-based Approach

Eigenfaces were first considered for face recognition by Turk and Pentland [22]. 2D gray-face images are projected onto a feature space that captures the significant variation among known face images. These significant features are called eigenfaces because they are the eigenvectors, or principal components, of the set of face images. The projection operation characterizes an individual face by a weighted sum of the eigenface features; hence, in order to recognize a particular face it is necessary only to compare these weights to those of known individuals.


A method for gray-face recognition using principal component analysis (PCA) and radial basis function (RBF) neural networks is presented by Thakur et al. [25]. PCA is employed to reduce the dimensionality of the image while keeping some of the variations in the image data, and adapted RBF neural networks are used under the imposed conditions. The reported experimental results show that the proposed method enhances the recognition performance.

2.1.2. Grayscale Gabor Filters Approach

A novel algorithm for face recognition using Gabor features to train neural networks is introduced by Rahman and Bhuiyan [26]. The algorithm is applied to gray morphed face images within a constructed system using different scales and orientations. A Multi-Layer Perceptron (MLP) neural network with the back-propagation algorithm is employed for face recognition and incorporates the Gabor filter convolution responses.

Wang et al. [27] propose an effective face recognition algorithm using gray-face Gabor images and a Support Vector Machine (SVM). The proposed approach derives the face Gabor image by down-sampling and concatenating the Gabor wavelet representations. These Gabor wavelet representations are convolutions of the face image with a family of Gabor kernels; the 2D-PCA method is then used to extract the feature space, and the extracted features are fed to the SVM classifier.


2.1.3. Grayscale Wavelet-based Approach

In [28], Liu and Dai propose a facial representation based on the dual-tree complex wavelet transform (DT-CWT). The proposed method effectively represents the structures in gray facial images with low redundancy.

By exploiting the DT-CWT and independent component analysis (ICA), a novel face recognition method is proposed by Chai et al. [29]. It provides a better representation for feature extraction. Using PCA, the dimension of the DT-CWT feature vectors is further reduced. Then, ICA is exploited to reduce the feature redundancies and derive the independent feature vectors.

2.2. Color-based Face Recognition

Several approaches make use of color information [6-8, 16, 20].

Choi et al. [6, 7] propose a metric called “variation ratio gain” (VRG) to theoretically

prove the significance of color effect on low-resolution faces. In addition, extensive

performance comparison studies are conducted.

Yang and Liu [8] introduce a General Discriminant Model (GDM) for color face recognition. It deals with two sets of variables: a set of color component combination coefficients for color image representation and a set of projection basis vectors for image discrimination. To find the optimal solution of the model, an iterative whitening-maximization (IWM) algorithm is designed and used.


An evolutionary two-fold framework is proposed by Shih and Liu [16]. First, two new

color spaces are defined as linear transformations of the input RGB (i.e., red, green and

blue) color space. The first color space is defined by one luminance (L) channel and two

chrominance channels (C1, C2). The second color space incorporates one luminance

channel (L) and three chrominance channels (C1, C2, C3). A Genetic Algorithm (GA)

searches for the optimal transformations from the RGB color space to the (LC1C2) and

(LC1C2C3) color spaces, respectively.

Since color inputs have a large dimensionality, which increases the computational cost in FR systems, it is important to determine the scenarios in which the use of color information helps the FR system. Ganapathi [20] provides an empirical study for this purpose and reports the following observations: the inclusion of chromatic information in FR systems is particularly advantageous in poor illumination conditions, and a color input of optimal dimensionality would improve FR performance.

Block diagonal nonnegative matrix factorization (BDNMF) is proposed by Wang et al. [3] for color face representation and recognition. The approach employs a block diagonal matrix to encode the color information of the different channels, and an adapted nearest-neighbor classifier is used to identify color face samples. The experimental results obtained on the CVL and CMU PIE color face databases verify the effectiveness of the proposed approach.

Thomas et al. [21] investigate the use of the characteristics of a 3D color space to generate a color Linear Discriminant Analysis (LDA) subspace, which can be used to recognize a new test image. In order to assess the potential improvement in accuracy, recognition rates across two color face databases are computed. The observed results indicate that the LDA color subspace significantly improves recognition accuracy over the standard grayscale approach.

2.2.1. Color PCA-based Approach

The PCA technique is used repeatedly, in various adaptations, for color face recognition. Indeed, algorithms based on PCA form the basis of numerous studies in the psychological and algorithmic face recognition literature, as indicated in [12].

Wang et al. [2] propose a color face recognition approach based on 2D-PCA. The proposed approach comprises a matrix-representation model that encodes the color information directly to describe the color face image. In this way, color face images are represented efficiently in matrix format. Color eigenfaces are then computed for feature extraction using 2D-PCA, and nearest-neighbor classification is adapted to identify the color face samples.

Moon and Phillips [12] introduce a generic modular PCA algorithm in order to investigate the computational and performance aspects of PCA-based face recognition algorithms.

2.2.2. Color Quaternion-based Approach

The first attempt at using the quaternion concept for color images is attributed to Pei and Cheng [13]. They propose a quaternion model called quaternion-moment block truncation coding (QMBTC) for compressing color-pixel blocks. It is mainly based on the color block truncation coding (BTC) algorithm and is derived using quaternion arithmetic and the moment-preserving principle.

2.2.2.1. Color Quaternion PCA (Q-PCA) Approach

A technique for quaternion matrix algebra which can be used to perform the eigen-analysis of a color image is proposed by Le Bihan and Sangwine [10]. This technique introduces extensions of two classical techniques to their quaternionic cases: Singular Value Decomposition (SVD) and the Karhunen-Loeve Transform (KLT). It also introduces the problem of Eigenvalue Decomposition (EVD) of a quaternion matrix. The properties of these quaternion tools are given and their behavior on natural color images is presented. Furthermore, a method to compute the decomposition using complex matrix algebra is provided. In addition, in subsequent work by Sangwine and Le Bihan [31], a Jacobi algorithm for computing the quaternion SVD (Q-SVD) is presented.

Based on the Q-SVD, Shi [30] implements Q-PCA and applies it to several applications such as color image segmentation. Trilateral filtering is also proposed, locally adapting the color and changing the shape of the filter to achieve edge-preserving color smoothing.

Ding and Feng [11] propose another method combining the quaternion Karhunen-Loeve Transform (Q-KLT) and biomimetic pattern recognition (BPR) for color face recognition. The Q-KLT model is used to extract the eigenfaces of the training samples, and algebraic features are then extracted for each sample by the BPR method.


Some researchers have proposed a method called Extended Two-Dimensional PCA (E2D-PCA), which is an extension of the original 2D-PCA for gray face recognition. E2D-PCA is promoted by a new covariance matrix that preserves more local geometric structure information than previous 2D-PCA methods, and it avoids the small-sample-size problem of the PCA method. Although E2D-PCA improves on the previous 2D-PCA, it still has the limitation of considering only the global information of face images, and it is applied only to gray face recognition, so some local and color information may be ignored. Thus, Chen et al. [15] provide a solution to such problems by proposing SpE2D-PCA, a hybrid approach based on a sub-pattern technique and E2D-PCA.

2.2.2.2. Color Quaternion Gabor Filters Approach

Several adapted biometric approaches combine the quaternion concept with the well-known Gabor-based methods; they are defined in [4] for color iris recognition and in [1, 9, 23] for color face recognition, each with its own distinct specifications.

A novel method for the automatic recognition and matching of color iris images is

proposed by Al-Qunaieer [4]. Particularly, the well-known IrisCode algorithm, developed

by Daugman for gray-scale iris images, is extended to the color domain by compact

quaternionic Gabor wavelets representation.

The Gabor filter, which is typically defined as a complex function, is extended to the

hypercomplex (quaternion) domain by Jones and Abbott [1, 9]. Several extensions to the

hypercomplex domain are discussed and a preferred formulation is selected.


Wei et al. [23] propose a novel color face recognition method based on Local Binary Patterns (LBP) of Quaternionic Gabor Features (QGF). The approach is mainly based on quaternion Gabor analysis within the color image representation, and it is characterized by making full use of the interrelationship among the different color channels to enhance the performance of the face recognition system. QGFs are used to encode the positions and attributes of facial elements. Then, LBP, a non-parametric method, is applied to these QGFs to obtain robustness against variations of phase, illumination, and facial expression.


CHAPTER 3  

3. QUATERNION (HYPERCOMPLEX) REPRESENTATION

3.1. Quaternion Numbers

Quaternions were first proposed in 1843 by W.R. Hamilton [36]. They are associative but

non-commutative and they belong to a specific class of hypercomplex numbers. Such

numbers are used in several applications such as mechanics in 3D space and 3D rotations

[4]. Quaternions are also used in image processing to represent and process color images in a holistic manner rather than each color separately. A complex number is represented as a combination of a real part and an imaginary part:

z = a + bi    (3.1)

where i is the imaginary unit, and a and bi represent the real and imaginary parts, respectively.

Quaternions can be considered as a generalization of the complex numbers, having one real part and three imaginary parts [30]. A quaternion number can be written as a linear combination defined as follows:

q = a + bi + cj + dk, \quad q \in \mathbb{H}    (3.2)

where the quaternion q is made of one real part (a) and three imaginary parts (b, c, and d), with i, j and k being the imaginary units. \mathbb{H} is the quaternionic set, \mathbb{C} is the complex set and \mathbb{R} is the real set, satisfying:

\mathbb{R} \subset \mathbb{C} \subset \mathbb{H}    (3.3)


The imaginary parts i, j, and k satisfy the following properties:

i^2 = j^2 = k^2 = ijk = -1    (3.4)

ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j    (3.5)

3.2. Properties of Quaternion Numbers

Quaternions satisfy the multiplication rules which are sometimes known as Hamilton's

rules.

A quaternion with a zero real part (i.e., a = 0) is called a pure quaternion:

q = bi + cj + dk    (3.6)

The conjugate of a quaternion, q, is defined by negating its imaginary parts:

\bar{q} = a - bi - cj - dk    (3.7)

The modulus or magnitude of a quaternion, q, is defined by:

|q| = \sqrt{a^2 + b^2 + c^2 + d^2}    (3.8)

The addition or sum of two quaternions, q_1 and q_2, is given by:

q_1 + q_2 = (a_1 + a_2) + (b_1 + b_2)i + (c_1 + c_2)j + (d_1 + d_2)k    (3.9)


The multiplication or product of two quaternions, q_1 and q_2, is given by:

q_1 q_2 = (a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2) + (a_1 b_2 + b_1 a_2 + c_1 d_2 - d_1 c_2)i + (a_1 c_2 - b_1 d_2 + c_1 a_2 + d_1 b_2)j + (a_1 d_2 + b_1 c_2 - c_1 b_2 + d_1 a_2)k    (3.10)

The quaternion norm is defined by:

\|q\| = q\bar{q} = a^2 + b^2 + c^2 + d^2    (3.11)

The norm is multiplicative, such that:

\|q_1 q_2\| = \|q_1\| \cdot \|q_2\|    (3.12)

Division is uniquely defined except for division by zero; therefore, quaternions form a division algebra. The inverse of a quaternion is defined by:

q^{-1} = \bar{q} / \|q\| = \bar{q} / (a^2 + b^2 + c^2 + d^2)    (3.13)

given that the norm of q is non-zero.

A quaternion can also be converted to complex form: every quaternion has a uniquely defined equivalent complex 2×2 matrix such that:

q = a + bi + cj + dk \;\leftrightarrow\; \begin{pmatrix} z & w \\ -\bar{w} & \bar{z} \end{pmatrix} = \begin{pmatrix} a + bi & c + di \\ -c + di & a - bi \end{pmatrix}    (3.14)

where z = a + bi and w = c + di are complex numbers, a, b, c and d are real numbers, and \bar{z} and \bar{w} are the complex conjugates of z and w, respectively.
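The properties above map directly onto code. The sketch below (an illustrative Python/NumPy implementation under our own conventions; the function names are not from the thesis) stores a quaternion as a length-4 array (a, b, c, d) and implements the conjugate, Hamilton product, norm and inverse of Equations (3.7), (3.10), (3.11) and (3.13).

```python
import numpy as np

def qmult(p, q):
    """Hamilton product of two quaternions p = (a1,b1,c1,d1), q = (a2,b2,c2,d2); Eq. (3.10)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def qconj(q):
    """Conjugate: negate the imaginary parts; Eq. (3.7)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qnorm(q):
    """Norm q * conj(q) = a^2 + b^2 + c^2 + d^2; Eq. (3.11)."""
    return float(np.dot(q, q))

def qinv(q):
    """Inverse q^{-1} = conj(q) / norm(q); Eq. (3.13). Requires a non-zero norm."""
    n = qnorm(q)
    if n == 0:
        raise ZeroDivisionError("the zero quaternion has no inverse")
    return qconj(q) / n

# Quick checks: i*j = k and q * q^{-1} = 1
i, j, k = np.array([0, 1, 0, 0.]), np.array([0, 0, 1, 0.]), np.array([0, 0, 0, 1.])
assert np.allclose(qmult(i, j), k)
q = np.array([1., 2., 3., 4.])
assert np.allclose(qmult(q, qinv(q)), [1, 0, 0, 0])
```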

 


3.3. Quaternion Representation of Color Face Images

Each pixel in a color image carries the values of the three color channels or components. These three color values simultaneously represent a pixel and are defined as the three imaginary parts of a quaternion. Thus, any pixel or point, (x, y), of a color face image can be represented by a pure quaternion as follows:

q(x, y) = 0 + R(x, y)\,i + G(x, y)\,j + B(x, y)\,k    (3.15)

where x and y are the pixel coordinates, and R, G and B are the red, green and blue channel values. Figure 3.1 illustrates the approach adopted to represent RGB color face images using a quaternionic form.

 

In this method, quaternions are used to represent the RGB color space. Therefore, the three color channels are processed equally in arithmetic and geometric operations such as multiplication.

Figure 3.1: Quaternion representation of a color face image.

(Diagram: the RGB image is split into its red, green and blue channels, which are encoded as q = 0 + R·i + G·j + B·k.)
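As a concrete illustration of Equation (3.15) and Figure 3.1, the sketch below (array layout and function names are our own assumptions, not the thesis implementation) stores an H×W×3 RGB face image as an H×W×4 array of pure quaternions, with the real part set to zero and the red, green and blue channels placed in the i, j and k components.

```python
import numpy as np

def rgb_to_quaternion(img_rgb):
    """Map an H x W x 3 RGB image to an H x W x 4 array of pure quaternions,
    q(x, y) = 0 + R(x, y) i + G(x, y) j + B(x, y) k  (Eq. 3.15)."""
    h, w, _ = img_rgb.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = img_rgb.astype(np.float64)  # imaginary parts <- R, G, B
    return q

def quaternion_to_rgb(q_img):
    """Inverse mapping: drop the (zero) real part and return the RGB channels."""
    return q_img[..., 1:]

# Example with a random "face" image at the 192x128 resolution used later in this thesis
face = np.random.randint(0, 256, size=(192, 128, 3))
q_face = rgb_to_quaternion(face)
assert np.allclose(quaternion_to_rgb(q_face), face)
```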


3.4. Quaternion Fourier Transform (QFT)

The 2D Fourier transform, commonly computed by the Fast Fourier Transform (FFT), is a useful image processing method used to decompose a grayscale image into its sine and cosine components. This method transforms a given image from the spatial domain to the Fourier or frequency domain, where each point represents a specific frequency. It is employed in several applications such as image analysis, filtering and compression. The 2D Fourier transform of an M×N image is given by [4]:

I(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} I(x, y)\, e^{-i 2\pi (ux/M)}\, e^{-i 2\pi (vy/N)}    (3.16)

where I(u, v) is the frequency-domain representation of the image, u and v are the frequency coordinates, I(x, y) is the spatial-domain representation of the image, and x and y are the spatial coordinates.

While the Fourier transform above applies to real and complex numbers, it can be extended to the QFT for handling quaternionic images, defined as [30]:

F_q(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} e^{-\mu_1 2\pi (ux/M)}\, f(x, y)\, e^{-\mu_2 2\pi (vy/N)}    (3.17)

Equation (3.17) can be generalized to obtain three different types of the QFT: the left-side QFT, the right-side QFT and the two-side QFT. Moreover, the Inverse Quaternion Fourier Transform (IQFT) can be defined for all three types. The two-side QFT is defined as [38]:

H(w, v) = \iint e^{-\mu_1 w x}\, h(x, y)\, e^{-\mu_2 v y}\, dx\, dy    (3.18)


where h(x, y) is the input quaternion image, μ1 and μ2 are two pure unit quaternions orthogonal to each other, and w and v are the spatial frequencies in the x and y directions, respectively.

The two-side IQFT is defined as:

h(x, y) = \frac{1}{4\pi^2} \iint e^{\mu_1 w x}\, H_T(w, v)\, e^{\mu_2 v y}\, dw\, dv    (3.19)

The left-side QFT is defined as:

H_L(w, v) = \iint e^{-\mu_1 (w x + v y)}\, h(x, y)\, dx\, dy    (3.20)

The left-side IQFT is defined as:

h(x, y) = \frac{1}{4\pi^2} \iint e^{\mu_1 (w x + v y)}\, H_L(w, v)\, dw\, dv    (3.21)

The right-side QFT is defined as:

H_R(w, v) = \iint h(x, y)\, e^{-\mu_1 (w x + v y)}\, dx\, dy    (3.22)

The right-side IQFT is defined as:

h(x, y) = \frac{1}{4\pi^2} \iint H_R(w, v)\, e^{\mu_1 (w x + v y)}\, dw\, dv    (3.23)
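As an illustration of the definitions above, the following sketch evaluates a discrete counterpart of the left-side QFT of Equation (3.20) directly from its definition, using exp(-mu*theta) = cos(theta) - mu*sin(theta) for a pure unit quaternion mu. It is a deliberately slow O(M^2 N^2) toy implementation under these assumptions, not the code used in the thesis:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays."""
    a1, b1, c1, d1 = np.moveaxis(p, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(q, -1, 0)
    return np.stack([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ], axis=-1)

def left_qft(h, mu):
    """Naive discrete left-side QFT:
    H(w, v) = sum_x sum_y exp(-mu * 2*pi*(w*x/M + v*y/N)) h(x, y)."""
    M, N, _ = h.shape
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)                  # pure unit axis (3 components)
    H = np.zeros((M, N, 4))
    x = np.arange(M)[:, None]
    y = np.arange(N)[None, :]
    for w in range(M):
        for v in range(N):
            theta = 2 * np.pi * (w * x / M + v * y / N)   # (M, N)
            e = np.zeros((M, N, 4))
            e[..., 0] = np.cos(theta)
            e[..., 1:] = -np.sin(theta)[..., None] * mu   # exp(-mu*theta)
            H[w, v] = qmul(e, h).sum(axis=(0, 1))
    return H

# Example: transform a tiny pure-quaternion (RGB) image along mu = (i + j + k)/sqrt(3).
img = np.zeros((4, 4, 4)); img[..., 1:] = np.random.rand(4, 4, 3)
F = left_qft(img, mu=[1.0, 1.0, 1.0])
```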


3.5. Quaternion Gabor Filters

Gabor filters are a well-known technique used for image characterization, texture analysis and feature extraction [9]. Complex Gabor filters are widely used in the literature, whereas very few research works make use of Gabor filters extended to the quaternion domain. A 2D Gabor filter can be defined by [1]:

g(x, y) = \frac{1}{2\pi \sigma_x \sigma_y} \exp\!\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right] \exp\!\left(i 2\pi (u_0 x + v_0 y)\right)    (3.24)

where σx and σy are the space constants of the Gaussian envelope along the x and y axes; they also characterize the bandwidth of the filter. The center frequency of the filter is given by:

f_0 = \sqrt{u_0^2 + v_0^2}    (3.25)

Quaternion Gabor filters can be implemented via the QFT. Therefore, a 2D quaternion Gabor filter can be defined as an extension of Equation (3.24) [1]:

g_q(x, y) = \frac{1}{2\pi \sigma_x \sigma_y} \exp\!\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right] \exp(i 2\pi u_0 x)\, \exp(j 2\pi v_0 y)    (3.26)

Another extension of Equation (3.24) defines a pure quaternion Gabor filter, obtained by multiplying the real part of the filter by a pure quaternion unit μ [1]:

g_\mu(x, y) = \mu\, \frac{1}{2\pi \sigma_x \sigma_y} \exp\!\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right] \cos\!\left(2\pi (u_0 x + v_0 y)\right)    (3.27)
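For illustration, the complex Gabor kernel of Equation (3.24) and the quaternionic Gabor kernel of Equation (3.26) can be sampled on a grid as in the following sketch (written under the u_0, v_0 notation assumed in the reconstruction above; it is not the thesis code):

```python
import numpy as np

def gabor_kernels(size, sigma_x, sigma_y, u0, v0):
    """Sample a complex Gabor kernel (3.24) and a quaternionic Gabor kernel (3.26)
    on a (2*size+1) x (2*size+1) grid; quaternions are stored as 4-vectors."""
    y, x = np.mgrid[-size:size + 1, -size:size + 1].astype(float)
    envelope = (1.0 / (2 * np.pi * sigma_x * sigma_y)) * \
               np.exp(-0.5 * (x**2 / sigma_x**2 + y**2 / sigma_y**2))

    # Complex Gabor: Gaussian envelope modulated by exp(i 2*pi (u0*x + v0*y)).
    g_complex = envelope * np.exp(1j * 2 * np.pi * (u0 * x + v0 * y))

    # Quaternionic Gabor: envelope * exp(i 2*pi u0 x) * exp(j 2*pi v0 y),
    # i.e. (cos + i sin)(cos + j sin) expanded into 1, i, j, k components.
    cx, sx = np.cos(2 * np.pi * u0 * x), np.sin(2 * np.pi * u0 * x)
    cy, sy = np.cos(2 * np.pi * v0 * y), np.sin(2 * np.pi * v0 * y)
    g_quat = envelope[..., None] * np.stack([cx * cy,   # real part
                                             sx * cy,   # i part
                                             cx * sy,   # j part
                                             sx * sy],  # k part (i*j = k)
                                            axis=-1)
    return g_complex, g_quat

gc, gq = gabor_kernels(size=8, sigma_x=2.0, sigma_y=2.0, u0=0.1, v0=0.1)
```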

   


CHAPTER 4 

4. PRINCIPAL COMPONENT ANALYSIS (PCA)

The PCA or KLT is a dimensionality reduction technique based on extracting the desired

number of principal components of the multidimensional data [14]. Another definition of

PCA is a general method of finding orthogonal axes that contain the most information

about a set of sample vectors in a multidimensional space [1].

PCA is the simplest of the true eigenvector-based multivariate analysis methods. It is considered a standard tool in modern data analysis in several fields, including computer graphics. The concept of PCA can be represented diagrammatically as shown in Figure 4.1. PCA is usually related to the mathematical technique of SVD. A typical PCA procedure can be summarized in the following steps [17]:

1) Organize the data as an m×n matrix, where m is the number of measurement types and n is the number of samples.

2) Subtract the mean from each row (or column) of the m×n matrix.

3) Apply the SVD to calculate the eigenvectors of the covariance matrix.

Figure 4.1: The PCA concept (Li et al. [14]).


Suppose T = {t1, t2, …, tn} is a training set of (facial) images consisting of n samples. Each tk is an (i × j) image matrix representing the kth sample in the training set T. In PCA, each tk is first reshaped into a high-dimensional column vector, h_k, of size (i·j)×1 by concatenating its columns or its rows. Therefore, an (m × n) matrix, H, of all reshaped training samples is obtained, where m = i·j is the number of measurement types and n is the number of samples. Then, the mean of the whole training set, µ, is computed as follows [9]:

 

\mu = \frac{1}{n} \sum_{k=1}^{n} h_k    (4.1)

 

The covariance matrix, C, is computed as follows [17]:

 

C = \frac{1}{n} \sum_{k=1}^{n} (h_k - \mu)(h_k - \mu)^T    (4.2)
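A minimal NumPy sketch of these steps (reshaping, mean subtraction and eigenvector extraction through the SVD, the route detailed in Section 4.2) is given below; it is illustrative only and not the thesis implementation:

```python
import numpy as np

def pca_eigenfaces(images, n_components):
    """images: list of (i x j) grayscale arrays; returns the mean face, the
    leading eigenvectors (eigenfaces) and the covariance eigenvalues."""
    # Organize the data as an (m x n) matrix: one reshaped image per column.
    H = np.column_stack([img.reshape(-1) for img in images]).astype(float)
    mu = H.mean(axis=1, keepdims=True)           # Eq. (4.1)
    Hc = H - mu                                  # subtract the mean
    # The SVD of the centered data gives the eigenvectors of the covariance
    # (Eq. 4.2) without forming the m x m covariance matrix explicitly.
    U, s, _ = np.linalg.svd(Hc, full_matrices=False)
    return mu.ravel(), U[:, :n_components], (s ** 2) / H.shape[1]

# Usage: project a new face onto the eigenface space.
faces = [np.random.rand(16, 16) for _ in range(10)]
mean_face, eigenfaces, eigvals = pca_eigenfaces(faces, n_components=5)
probe = np.random.rand(16, 16).reshape(-1)
weights = eigenfaces.T @ (probe - mean_face)
```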

 

4.1. Eigenfaces

Eigenfaces are a set of eigenvectors used in PCA-based human face recognition. The eigenface representation was first proposed by Sirovich and Kirby in 1987 [22]. Informally, eigenfaces can be considered as a set of standardized face components derived from the statistical analysis of many face images. The generated eigenfaces can be represented visually as shown in Figure 4.2. In this case, a face image can be considered as a combination of standard faces (eigenfaces).

Figure 4.2: Gray-level eigenfaces (top) and color eigenfaces (bottom).


 

  4.2. Solving PCA Using SVD

The PCA calculation can be implemented via the SVD technique. The SVD of an (m × n)

matrix A (m ≥ n) is given by:

A = U \Sigma V^T    (4.3)

where the m × n matrix U and the n × n matrix V have orthonormal columns. The n × n matrix Σ contains the singular values of A, σ_1 ≥ σ_2 ≥ … ≥ σ_n ≥ 0, along its main diagonal and zeros elsewhere. It should be noted that a singular value of a matrix A is the square root of an eigenvalue of the matrix AA^T. Note that the SVD allows for an efficient and robust computation of the PCA without the need to estimate the data covariance matrix.
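This relationship between singular values and eigenvalues can be verified numerically with a small check (illustrative only, not part of the thesis):

```python
import numpy as np

A = np.random.rand(6, 4)                                 # m >= n
singular_values = np.linalg.svd(A, compute_uv=False)     # sorted in descending order
eigvals_AAT = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1][:4]
# Each singular value of A is the square root of an eigenvalue of A A^T.
assert np.allclose(singular_values, np.sqrt(np.clip(eigvals_AAT, 0, None)))
```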

4.3. Quaternion Principal Component Analysis (Q-PCA)

In the real domain, the eigenvalues and eigenvectors of a real matrix can be easily computed. However, computing those of a quaternion matrix is a difficult task [15]. In order to calculate the PCA projection matrix based on a quaternion representation, it is


necessary to obtain the orthogonal eigenvector set of the covariance matrix. The orthogonal

eigenvectors of the PCA covariance matrix can be obtained using three PCA-based

methods: Quaternion Singular Value Decomposition (Q-SVD), Quaternion Eigen-Value

Decomposition (Q-EVD), and Quaternion Karhunen-Loeve Transform (Q-KLT).

The Q-SVD can be considered as a generalization of the real or complex SVD. It inherits

similar properties [30]. The decomposition of a quaternion matrix by means of classical

complex algorithms is based on the SVD computation of the complex adjoint matrix

(CAM). Therefore, in the Q-SVD solution, a quaternion matrix can be decomposed into

its singular value form by converting it into complex matrix representation. Each

quaternion matrix of size M×N has an equivalent complex matrix of size 2M×2N

[30, 31]. Hence, the existing complex SVD algorithm can be applied to this equivalent complex matrix to obtain the eigenvectors and eigenvalues (i.e., the singular values) of the corresponding quaternion matrix.

4.3.1. Quaternion Singular Value Decomposition (Q-SVD)

The Q-PCA calculation can be implemented via the Q-SVD technique, based on a theorem given in [34] that establishes the existence of the SVD of a quaternion matrix. Let A^q be an n×n quaternion-valued matrix with rank k. Then there exist two unitary quaternion matrices U^q and V^q such that:

U^{qH} \cdot A^q \cdot V^q = \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix}    (4.4)


where (·)^H represents the Hermitian transpose (or conjugate-transpose) operator. Note that the superscripts q, c and r denote the data types (quaternion, complex and real), respectively. The unitary quaternion matrices U^q and V^q have the property that:

U^{qH} \cdot U^q = V^{qH} \cdot V^q = I^r    (4.5)

Hence, the multiplication of these quaternion matrices actually yields the real identity matrix I^r. The notation Σ_r denotes a real (k × k) diagonal matrix, where k (the number of non-null singular values) is the rank of A^q. The values on the diagonal of Σ_r, namely σ_1, …, σ_k with σ_i > 0 (1 ≤ i ≤ k), are the real positive singular values of the quaternion matrix A^q, arranged in decreasing order of magnitude along the diagonal. Thus, Equation (4.4) can be rewritten as:

A^q = U^q \cdot \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} \cdot V^{qH}    (4.6)

According to the definition of the equivalent complex matrix of a quaternion matrix [34], each quaternion is defined by:

q = w + x \cdot i + y \cdot j + z \cdot k    (4.7)

This quaternion q can be decomposed using:

q = a + b \cdot j, \quad a = w + x \cdot i, \quad b = y + z \cdot i    (4.8)

where a and b are two complex numbers. For instance, an n×n quaternion-valued matrix can be defined as:

A^q = A + B \cdot j    (4.9)


where A and B are n×n complex matrices. Then, from Equation (3.14) in Chapter 3 and Equation (4.9), the equivalent 2n×2n complex matrix of A^q is:

\chi_{A^q} = \begin{pmatrix} A & B \\ -\bar{B} & \bar{A} \end{pmatrix}    (4.10)

Thus, the classical complex SVD algorithm can be applied to \chi_{A^q} to generate the eigen-basis (i.e., the eigenvectors) and the corresponding singular values ranked in descending order.
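The construction can be sketched as follows (a minimal illustration, not the thesis code): the complex adjoint matrix of Equation (4.10) is built from A and B, its complex SVD is computed, and the singular values of the quaternion matrix are read off from the duplicated pairs:

```python
import numpy as np

def complex_adjoint(A, B):
    """Complex adjoint matrix (Eq. 4.10) of the quaternion matrix A_q = A + B*j."""
    return np.block([[A, B],
                     [-np.conj(B), np.conj(A)]])

n = 5
A = np.random.rand(n, n) + 1j * np.random.rand(n, n)
B = np.random.rand(n, n) + 1j * np.random.rand(n, n)
chi = complex_adjoint(A, B)                 # 2n x 2n complex matrix

s = np.linalg.svd(chi, compute_uv=False)    # sorted in descending order
# The singular values of chi occur in identical pairs; the quaternion singular
# values are obtained by keeping one value from each pair.
assert np.allclose(s[0::2], s[1::2])
quaternion_singular_values = s[0::2]
```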

Let the SVD of the quaternion matrix A^q be:

A^q = U^q \cdot \Sigma \cdot V^{qH}    (4.11)

and let the SVD of its equivalent complex matrix be:

\chi_{A^q} = U^c \cdot \Sigma^c \cdot V^{cH}    (4.12)

Then,

\Sigma = (\Sigma^c)_{odd\ rows,\ odd\ cols}    (4.13)

If

U_1 = (U^c)_{odd\ rows,\ odd\ cols}, \qquad U_2 = -\overline{(U^c)_{even\ rows,\ odd\ cols}}    (4.14)

then

U^q = U_1 + U_2 \cdot j    (4.15)

and V^q is recovered from V^c in the same manner,


where M_{odd rows}, M_{even rows} and M_{odd cols} denote the sub-matrices formed by the odd rows, the even rows and the odd columns of a matrix M, respectively. In addition, some of the most significant properties of the Q-SVD, when applied to color images, are listed below by Le Bihan et al. [10]:

1) Invariance to spatial rotation (also true in the case of greyscale images with SVD).

2) Invariance to spatial shift (vectors in U and V are shifted by the same amount).

3) Invariance to color space rotation.

4.3.2. QSVD-Based Color Image Compression

Several useful SVD-based methods can be extended to color images using the Q-SVD. Such extensions are achieved without separating the color image into three channels and processing each color channel independently. One very useful SVD-based method is image compression, which can readily be extended to color images. The strength of this compression method lies in storing large images in a smaller, more manageable form.

Q-SVD-based compression is accomplished by reproducing the original image from its succeeding non-zero singular values (the diagonal of Σ). Furthermore, by using fewer singular values (k' < k), more compression can be achieved at the cost of an approximate image [30]. Using Equation (4.12), the image can be decomposed as:

Q = U \cdot \Sigma \cdot V^H    (4.16)

Then, to construct the eigen-images of the original image, the SVD of a color image, Q, can be decomposed into:


Q = \sum_{i=1}^{R} u_i \lambda_i v_i^H    (4.17)

where u_i and v_i are the column vectors of the matrices U and V, respectively, the λ_i's are the diagonal elements of the real matrix Σ, and R is the rank of Q. Here, each product u_i λ_i v_i^H generates an eigen-image. Therefore, the reconstructed color image can be considered as a linear combination of R color eigen-images. Note that, as with complex matrices, the leading eigen-images represent the low-frequency components of the original image and the later ones represent the high-frequency components [30].

Therefore, an approximation of a color image can be obtained by summing the first k eigen-images such that:

Q_k = \sum_{i=1}^{k} u_i \lambda_i v_i^H    (4.18)

where the real part of Q_k is small and decreases to zero as k increases to R. In practice, a good approximation of the original color image can be obtained with a small k. Due to this compression, the storage requirement for an N×N color image is reduced from 3×N×N values to k(2×4N + 1) values.
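For intuition, the rank-k reconstruction of Equation (4.18) can be illustrated with the ordinary real SVD applied to a single (grayscale) channel; the Q-SVD case follows the same pattern with quaternion factors. This is an analogy only, not the thesis implementation:

```python
import numpy as np

def rank_k_approximation(img, k):
    """Keep the first k eigen-images u_i * s_i * v_i^T (cf. Eq. 4.18)."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

img = np.random.rand(213, 144)          # stand-in for one channel of a face image
for k in (8, 16, 25, 144):
    approx = rank_k_approximation(img, k)
    err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    print(f"k = {k:3d}: relative reconstruction error = {err:.4f}")
```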

Figure 4.3 illustrates four approximate reconstructions based on the Q-SVD applied to a facial image of size 213×144. Note that when an image has more high-frequency components, a larger k is necessary to obtain a good approximation. Conversely, for images containing mostly low-frequency content, a small k is sufficient for the reconstruction. As can be seen by comparing the approximation in (c) with the perfect reconstruction in (d), a satisfactory approximate representation can be found with far fewer singular values. This


simple example illustrates the power and efficiency of the Q-SVD method. The reconstructed image in (b) is produced from the first 16 singular values, requiring only 16×213×4 = 13,632 stored entries. Another major advantage is the reduced storage and computational cost when such compressed representations are used in a recognition system.


Figure 4.3: Q-SVD-based image compression. (a), (b), (c) and (d) are the reconstructed images with k = 8, 16, 25 and 144, respectively, where (d) is the perfect reconstruction of the original image.

4.3.3. Proposed Q-PCA Algorithm

The most significant concepts (i.e., quaternions, PCA and SVD) associated with the proposed scheme have been defined previously. The proposed Q-PCA (Quaternion Principal Component Analysis) algorithm can be described and summarized by the following steps:

1) For each facial color image in the training data, convert it into a pure quaternion representation:

q(x, y) = 0 + R(x, y) \cdot i + G(x, y) \cdot j + B(x, y) \cdot k    (4.19)


2) Organize the data as an (m×n) design matrix, H, where m is the number of measurement types (rows) and n is the number of samples (columns). Each entry of the matrix is a quaternion; hence, each column represents a single training image as a vector of quaternion pixels.

3) Subtract the mean, µ, from each measurement type and divide by its variance, σ, to obtain the n normalized column vectors of the design matrix H.

4) Calculate the Q-SVD of the normalized design matrix, which yields the quaternion eigenvectors, U^q, of the covariance matrix:

H = U^q \cdot \Sigma \cdot V^{qH}    (4.20)

5) To recognize or match a query color face image, f_q, the quaternion form of f_q is normalized and projected onto the eigenface space by:

p_q = U^{qH} \cdot \frac{f_q - \mu}{\sigma}    (4.21)

where p_q is the projection of the image f_q, (·)^H represents the Hermitian transpose (conjugate-transpose) of the eigenvector matrix U^q, and µ and σ are the mean and variance vectors of the design matrix, H, respectively.


CHAPTER 5 

5. METHODOLOGY AND EXPERIMENT

5.1. Research Methodology

This research proposes a novel scheme for color face recognition. Particularly, the

proposed scheme is based on the well-known PCA method. In this research, the PCA is

naturally extended to the hypercomplex (quaternionic) domain to define the Q-PCA.

Besides the benefit of combining the existing color information, Q-PCA exploits additional useful information (such as the correlation and interaction between color channels) that would not be obtained by a mere parallel (and independent) processing

of the color components. The added-value brought in by the proposed scheme is made

possible by the holistic representation of the color information in the hypercomplex

domain. Moreover, a comparative analysis is carried out to highlight the improvement

attributed to the holistic processing of the color components. The experimental work is

conducted using standard color face databases.

5.2. Color Face Image Database

Within the experimental work of this thesis, two standard databases are used: a minor database and a major one. The minor database is the “Georgia Tech Face Database (GTDB)”, which is used in the earlier stages of the development of the PCA technique for gray-level and color face images. The major database is “The Color FERET Database”, which is used in all experiments reported herein.


5.2.1. Georgia Tech Face Database (GTDB)

The GTDB was collected and prepared at the Center for Signal and Image Processing at

Georgia Institute of Technology. It contains images of 50 people taken in two or three

sessions between 06/01/1999 and 11/15/1999 to take into account the variations in

illumination conditions, facial expression and appearance. Furthermore, the faces were

captured at different scales and orientations. Each subject (a person) in the database is

represented by 15 color JPEG images with a cluttered background taken at a resolution of 640×480 pixels. Hence, the database contains 750 face images, and the average face size in these images is 150×150 pixels. The GTDB also provides a preprocessed set (GTDB_crop) of cropped and relabeled images in which each image was cropped and its background removed. An additional preprocessing step is performed in this thesis: due to the various sizes of the cropped images, each image is resized to 213×144 pixels.

5.2.2. Color FERET Database

The FERET database is a well known standard facial database in both color and grayscale

representations provided by the US National Institute of Standards and Technology

(NIST). It has been designed to advance the state of the art in face recognition. The

database collection was a collaborative effort between Dr. Wechsler and Dr. Phillips. The

database was collected at various angles in 15 sessions between August 1993 and July

1996. It contains 1564 sets of images for a total of 14,126 images that includes 1199

individuals and 365 duplicate sets of images. A duplicate set is a second set of images of a

person already in the database and was usually taken on a different day. For some


individuals, two years have elapsed between their first and last sittings, with some

subjects being photographed multiple times. This period of time was important to enable

researchers to study the effects of changes in a subject's appearance that occur over years.

Thus, the FERET database introduces variability by the inclusion of images taken at

different dates and locations. This results in changes in lighting, scale and background.

The whole database is divided into a development set, provided to researchers, and a set

of sequestered images for testing. The images in the development set are used as a

representative of the sequestered images. However, the available database for researchers

is a subset of the database consisting of 11,338 images from the total of 14,126. These

images represent 994 individuals from the total of 1199. The remaining individuals and images were not provided to researchers. The database, provided in color PPM format, offers images at three different resolutions:

1) Full size images 768×512 pixels.

2) Half size images 384×256 pixels.

3) Quarter size images 192×128 pixels.

There are 13 different poses in the collected database representing the person’s face at

various angles. Table 5.1 gives a summary of the database content and the imagery pose

types.


Two-letter code | Pose angle (degrees) | Description | Number of images in database | Number of subjects (persons)
fa | 0 | Regular facial expression | 1364 | 994
fb | 0 | Alternative facial expression | 1358 | 993
ql | -22.5 | Quarter left - head turned about 22.5 degrees left | 761 | 501
qr | +22.5 | Quarter right - head turned about 22.5 degrees right | 761 | 501
hl | -67.5 | Half left - head turned about 67.5 degrees left | 1267 | 917
hr | +67.5 | Half right - head turned about 67.5 degrees right | 1320 | 953
pl | -90 | Profile left - head turned about 90 degrees left | 1312 | 960
pr | +90 | Profile right - head turned about 90 degrees right | 1363 | 994
ra | -45 | Random image - head turned about 45 degrees | 321 | 261
rb | -15 | Random image - head turned about 15 degrees | 321 | 261
rc | +15 | Random image - head turned about 15 degrees | 610 | 423
rd | +45 | Random image - head turned about 45 degrees | 290 | 236
re | +75 | Random image - head turned about 75 degrees | 290 | 236

Table 5.1: Summary of FERET color face database.

5.3. Performance Evaluation

A biometric evaluation protocol determines how to test a system, select the data, and

measure the performance. In order to apply validation and verification to the proposed

approach, the (FERET) evaluation protocol is used, which is a standard method for

evaluating face-recognition algorithms [32]. This biometric evaluation protocol

investigates the performance of Q-PCA-based versus the typical PCA-based algorithms.

The PCA-based algorithms considered in this thesis are: gray-PCA, Red-component-PCA,

Green-component-PCA, Blue-component-PCA, Avg-RGB-PCA of the three components,

and SCFT-CPCA (complex PCA of Spatio-chromatic Fourier transform) [37].

A number of stress-test experiments are performed for these different implementations to evaluate and compare their performance. All these algorithms are examined by applying a number of proper performance evaluation methods and using the same data sets. Table 5.2 shows the PCA-based algorithms used in this work.


Algorithm | Description
PCA | Calculates PCA for grayscale face images
QPCA | Calculates quaternion PCA for color face images
Red PCA | Calculates PCA for the red component of color face images
Green PCA | Calculates PCA for the green component of color face images
Blue PCA | Calculates PCA for the blue component of color face images
Avg RGB PCA | Calculates PCA for the average of the three color channels (i.e., red, green and blue) of color face images
SCFT CPCA | Calculates complex PCA for the spatio-chromatic Fourier transform of color face images

Table 5.2: Summary of PCA-based algorithms used in this thesis.

5.3.1. Standard Testing Subsets

In this experimental work, FERET standard testing subsets are used to carry out the

performance and statistical evaluation. Particularly, the gallery and probe sets used in

FERET tests (September 1996) are selected for this purpose. Therefore, the identification scores are computed for four categories of probes, and all the tests for these probes use a single gallery containing 993 images, as indicated in Table 5.3.

Evaluation task | Recognized names | Gallery (993) | Probe set
Facial expression | FB | gallery.names | probe_fafb_*.names (993)
Low aging of subjects | duplicate I | gallery.names | probe_dup_1_*.names
Illumination | fc | gallery.names | probe_fafc_*.names (98)
High aging of subjects | duplicate II | gallery.names | probe_dup_2_*.names

Table 5.3: Gallery and four probe sets.

The identification scores are derived twice for the same gallery and probes, using the two different versions (resolutions) summarized in Table 5.4. This redundancy aims at assessing the effect of image resolution on the system performance.

Dataset | No. of images / resolution (version 1) | No. of images / resolution (version 2)
Gallery | (993) / 192×128 pix | (993) / 384×256 pix
FB (probe) | (993) / 192×128 pix | (993) / 384×256 pix
duplicate I (probe) | (736) / 192×128 pix | (736) / 384×256 pix
fc (probe) | (98) / 192×128 pix | (98) / 384×256 pix
duplicate II (probe) | (228) / 192×128 pix | (228) / 384×256 pix

Table 5.4: Resolution of gallery and probe sets. 


The first probe category is the FB probe set. In each set of images, there are two frontal images, denoted by the fa and fb pose types. One of the images is randomly placed in the gallery, and the other image is placed in the FB probe set. This category aims to evaluate the algorithm performance against variation in facial expression. The second probe category is the duplicate I probe set, which contains all duplicate frontal images in the FERET database for the gallery images. This category aims to evaluate the algorithm performance against a low degree of subject aging. The third probe category is the fc probe set, whose images are taken on the same day with a different camera and lighting. This category aims to evaluate the algorithm performance against variation in illumination. The fourth probe category is the duplicate II probe set, which consists of duplicates where at least one year separates the acquisition of the probe image and the corresponding image in the gallery. This category aims to evaluate the algorithm performance against a high degree of subject aging. Figure 5.1 shows sample face images of one subject from the gallery and all four probe categories.

Figure 5.1: One subject's face images (gallery and probes): fa, fb, duplicate I, fc, duplicate II.

Another subpart of the color FERET database consists of five subsets. Each subset has its own gallery and probes (one gallery and two probes). The gallery of each subset contains approximately 200 individuals. For each subset, one of the two probes represents the FB


category and the other represents the duplicate I category. Table 5.5 lists the five subsets

with their details.

Probe name | Gallery size | FB probe set size | duplicate I probe set size
Set 1 | 200 | 200 | 145
Set 2 | 200 | 200 | 70
Set 3 | 200 | 200 | 204
Set 4 | 200 | 200 | 283
Set 5 | 193 | 193 | 34

Table 5.5: Subsets used to investigate the variation in performance.

5.3.2. Test Design

The adopted biometric evaluation protocol investigates several performance aspects of FR algorithms. In this testing protocol, each evaluated algorithm can use a different similarity measure, and similarity measures from different algorithms are not compared. However, in the current experimental work the similarity measure is the same for all compared algorithms, since they are all PCA-based. A significant advantage of this protocol is that, for any two images qi and tk, the similarity measure si(k) is known, which allows for greater flexibility and a more comprehensive evaluation methodology. This flexibility is achieved by computing scores for virtual galleries and probe sets. The algorithm performance is characterized by different categories of images:

1) Rotated images.

2) Duplicates taken within a week of the gallery image.

3) Duplicates where the time between the images is at least one year.

4) Galleries containing one image per person.

5) Galleries containing duplicate images of the same person.


A gallery G is a virtual gallery if G is a subset of the target set T. Similarly, P is a virtual probe set if P is a subset of the query set Q. For a given gallery G and probe set P, the performance scores are computed by examining the similarity measures si(k) such that qi ∈ P and tk ∈ G.

In all the conducted experiments, the number of eigenvectors is set to 200 for all algorithms. This decision was reached after performing a number of trials using 25, 50 and 100 eigenvectors. Selecting this number of eigenvectors for projecting a tested facial image onto the eigenface space improves the accuracy by about 3-9% in general.

5.3.3. Standard Performance Measures

For the sake of comparison with existing algorithms, a set of standard biometric

evaluation methods are provided and described in this section. Such evaluation methods

or measures are commonly used to allow a fair comparison and a precise statistical

evaluation.

Cumulative Match Characteristic (CMC): It is usually computed and visually represented as a curve of ranked cumulative match scores from rank 1 to rank k ≤ the number of enrolled persons.

In order to define the decision theory of the CMC measure for evaluating the performance of an algorithm, there exist two basic models: the closed and the open universe. In the closed universe, every probe is in the gallery, whereas in an open universe, some probes


are not in the gallery. Since this research is concerned with identification, the appropriate model is the closed-universe model, which reports different performance statistics and reflects important aspects of FR algorithms. Such aspects allow evaluators to ask how good an algorithm is at identifying a probe image. Note that the question is not always “is the top match correct?” but can be “is the correct answer in the top n matches?”. The answer to such a question tells evaluators how many images have to be examined to reach a desired level of performance. The reported performance statistics are based on the CMC curve, wherein the rank is plotted along the horizontal axis and the percentage of correct matches along the vertical axis. The cumulative match score can be calculated for any algorithm against any subset of the probe set. The computation of an identification score is as follows:

Let |P| be the size of the probe set P. The probe set P is scored against the gallery G, where G = {g1, …, gm} and P = {p1, …, pn}, by comparing the similarity scores si(*) such that pi ∈ P and gk ∈ G, where * indicates that the similarities to all gk images in the gallery set G are computed for each pi ∈ P. For each probe image pi ∈ P, si(*) is scored against all gallery images gk ∈ G. A smaller similarity score implies a closer match. If gk and pi are the same image, then si(k) = 0. The function id(i) gives the index of the gallery image of the person in probe pi, i.e., pi is an image of the person in gid(i). A probe pi is correctly identified if si(id(i)) is the smallest score over all gk ∈ G. A probe pi is in the top k if si(id(i)) is one of the k smallest scores si(*) for gallery G. Let Rk denote the number of probes in the top k. Thus, the fraction of probes in the top k, Rk / |P|, is reported. For example, let k = 5, R5 = 80 and |P| = 100. Based on this formula, the performance score


for R5 is 80/100 = 0.8. The design scheme of the testing procedure used in this research is illustrated in Figure 5.2.
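A possible NumPy sketch of this closed-universe CMC computation is shown below. It assumes one gallery image per person and distance-like scores (smaller means a closer match), and it is illustrative rather than the scoring code used in the thesis:

```python
import numpy as np

def cmc_scores(similarity, probe_ids, gallery_ids, max_rank=50):
    """similarity[i, k] = s_i(k), the score between probe i and gallery image k
    (smaller = closer match).  Returns R_k / |P| for k = 1..max_rank,
    assuming a closed universe (every probe identity is in the gallery)."""
    # For each probe, rank the gallery entries by increasing score and find
    # the position of the correct identity g_id(i).
    order = np.argsort(similarity, axis=1)
    ranked_ids = np.asarray(gallery_ids)[order]               # (|P|, |G|)
    correct = ranked_ids == np.asarray(probe_ids)[:, None]
    rank_of_match = correct.argmax(axis=1) + 1                # 1-based rank
    return np.array([(rank_of_match <= k).mean() for k in range(1, max_rank + 1)])

# Toy example: 4 probes scored against a gallery of 5 identities.
rng = np.random.default_rng(0)
sim = rng.random((4, 5))
cmc = cmc_scores(sim, probe_ids=[0, 1, 2, 3], gallery_ids=[0, 1, 2, 3, 4], max_rank=5)
```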

In all reported performance results, the size of the gallery is the number of different faces

(people) contained in the images that are in the gallery. Hence, there is one image per

person in the gallery. Therefore, the size of the gallery is also the number of images in this

gallery. The number of probes scored is also the size of the probe set (i.e., | P | ).

However, the probe set may contain more than one image of a person and the probe set

may not contain an image of everyone in the gallery. Every image in the probe set has a

corresponding image in the gallery (closed-universe model).

Genuine distribution (GD): A distribution which is estimated by comparing different

face image samples that belong to the same person against each other.

It should be noted that there is no need to repeat the comparison in the reverse direction between any two samples i and j, since the comparison is symmetric.

Figure 5.2: Design scheme of the testing procedure.

   



Imposter distribution (ID): A distribution which is estimated by comparing different

face images that belong to different persons.

Specifically, the ith sample of each person is compared to the ith sample of all the remaining persons in the dataset. In this case, the comparison is also symmetric.

For illustration, a sample of GD and ID distributions is given in Figure 5.3. The more separated the GD and ID curves are, the better the matching process. Furthermore, the area where the two curves intersect represents errors in the matching process.

Within this error area, two subareas exist:

False Accept Rate (FAR): A subarea which is the region of ID that is considered to

belong to the GD.

False Reject Rate (FRR): A subarea which is the portion of GD that is considered as part

of the ID.

Both error areas are illustrated in Figure 5.3. Consequently, the selection of a specific

threshold represents a trade-off between the FAR and FRR errors. The selection of a

proper threshold depends significantly on the application of the biometric system at hand.

For instance, the FAR error is selected to be large in forensic applications for criminal

identification to reduce the possibility of missing a wanted criminal. On the other hand, the FRR error is allowed to be large in highly secure access control systems to guarantee that unauthorized users cannot gain access.


Generally speaking, better results are achieved if the two distributions are well separated.

Decidability index (d): The separation, or distance, between the means of the GD and the ID, considered as an indicator of the system performance.

The decidability index, d, can be calculated as follows [4]:

d = |\mu_{GD} - \mu_{ID}|    (5.1)

where \mu_{GD} and \mu_{ID} are the means of the GD and ID, respectively. Large values of the decidability index do not necessarily mean that the performance is better. In some cases, although the decidability index is large, the errors can also be large and should be taken into account together with the decidability index.
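As a small illustration (not the thesis evaluation code), the GD, the ID and the decidability index of Equation (5.1) can be estimated from a symmetric matching-score matrix; for simplicity this sketch uses all cross-person pairs for the imposter distribution, which is a slight simplification of the sampling described above:

```python
import numpy as np

def genuine_imposter_scores(scores, labels):
    """Split a symmetric matching-score matrix into genuine (same person)
    and imposter (different persons) comparisons, each pair counted once."""
    labels = np.asarray(labels)
    iu = np.triu_indices(len(labels), k=1)     # comparisons are symmetric
    same = labels[iu[0]] == labels[iu[1]]
    return scores[iu][same], scores[iu][~same]

scores = np.random.rand(6, 6); scores = (scores + scores.T) / 2
labels = [0, 0, 1, 1, 2, 2]                    # two samples per person
gd, idist = genuine_imposter_scores(scores, labels)
d = abs(gd.mean() - idist.mean())              # decidability index, Eq. (5.1)
```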

The FAR and FRR error curves can be calculated from the GD and ID by using a threshold t that ranges from 0 to 1. Thus, FAR(t) and FRR(t) are functions of t, where FAR(t) is the percentage of imposter scores greater than or equal to the threshold and FRR(t) is the percentage of genuine scores less than the threshold [35]. In addition, the following quantities can be estimated from the FAR and FRR curves, as illustrated in Figure 5.4:

Figure 5.3: GD versus ID and FAR versus FRR errors.



Equal-Error Rate (EER): The error rate at which the accept and reject errors are equal (FAR(t) = FRR(t)); lower EER values indicate fewer errors.

ZeroFRR: The lowest FAR where no FRR occurs.

ZeroFAR: The lowest FRR where no FAR occurs.
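These quantities can be computed from the genuine and imposter score sets as in the following sketch, which uses synthetic matching scores in [0, 1] (higher means more similar) and is illustrative only:

```python
import numpy as np

def far_frr_curves(genuine, imposter, n_thresholds=1000):
    """FAR(t) = fraction of imposter scores >= t,
    FRR(t) = fraction of genuine scores < t, for thresholds t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n_thresholds)
    far = np.array([(imposter >= th).mean() for th in t])
    frr = np.array([(genuine < th).mean() for th in t])
    return t, far, frr

genuine = np.clip(np.random.normal(0.7, 0.1, 2000), 0, 1)    # synthetic genuine scores
imposter = np.clip(np.random.normal(0.5, 0.1, 20000), 0, 1)  # synthetic imposter scores
t, far, frr = far_frr_curves(genuine, imposter)

eer_index = np.argmin(np.abs(far - frr))
eer = (far[eer_index] + frr[eer_index]) / 2                   # equal-error rate
zero_frr = far[frr == 0].min() if np.any(frr == 0) else None  # lowest FAR with no FRR
zero_far = frr[far == 0].min() if np.any(far == 0) else None  # lowest FRR with no FAR
```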

Receiver Operating Characteristic (ROC): A curve used to measure and summarize the performance of a biometric identification or verification system.

An ROC curve is based on sweeping the decision threshold; therefore, it shows the system performance at all thresholds. Because it is not tied to a single threshold, it allows the performance of different systems to be compared under similar conditions. It plots the percentage of impostor attempts accepted, i.e., the FAR, on the x-axis against the percentage of genuine attempts accepted, known as the correct acceptance rate (CAR = 1 - FRR), on the y-axis [35]. In ROC curves, the relationship between FAR and FRR can be observed: as the FAR error increases, the FRR decreases and, accordingly,


Figure 5.4: FAR versus FRR curves.


the CAR (1 - FRR) increases, and vice versa. The best possible curve is a straight horizontal line at a CAR of 1, i.e., zero FRR at every FAR, which means that the EER is equal to zero. Consequently, a lower curve means more errors.

5.3.4. Identification Performance against Probe Categories

The identification performance is investigated and reported for each of the compared algorithms against four different probe categories from the first standard dataset. This identification performance evaluation is based on CMC scores for ranks 1 to 50, for the sake of comparison. Moreover, the identification performance of each algorithm is investigated and reported for the same dataset at two different resolutions. Table 5.6 gives a summary of the properties of the test gallery and probes.

Figure No. | Probe category | Gallery size | Probe set size | Resolution
5.5 and 5.6 | FB | 993 | 993 | 192×128 pix / 384×256 pix
5.7 and 5.8 | duplicate I | 993 | 736 | 192×128 pix / 384×256 pix
5.9 and 5.10 | fc | 993 | 98 | 192×128 pix / 384×256 pix
5.11 and 5.12 | duplicate II | 993 | 228 | 192×128 pix / 384×256 pix

Table 5.6: Summary of used gallery and probe categories.

Table 5.7 gives the fraction of probes whose gallery match is top ranked (i.e., the rank equals 1). Hence, each number in this table represents the correct total match score when considering only the single nearest neighbor of each tested sample. This is effectively the performance on forced identification (i.e., the best guess). As shown in the table, each probe category has two scores corresponding to the two resolutions.


Gallery size / scored probes: FB = 993/993, duplicate I = 993/736, fc = 993/98, duplicate II = 993/228.

Algorithm | FB 192×128 | FB 384×256 | duplicate I 192×128 | duplicate I 384×256 | fc 192×128 | fc 384×256 | duplicate II 192×128 | duplicate II 384×256
PCA | 0.624 | 0.626 | 0.135 | 0.132 | 0.010 | 0.010 | 0.004 | 0.004
QPCA | 0.646 | 0.648 | 0.145 | 0.147 | 0.010 | 0.010 | 0.004 | 0.004
Red PCA | 0.635 | 0.638 | 0.141 | 0.143 | 0.010 | 0.010 | 0.004 | 0.004
Green PCA | 0.621 | 0.622 | 0.130 | 0.132 | 0.010 | 0.010 | 0.004 | 0.004
Blue PCA | 0.599 | 0.600 | 0.128 | 0.129 | 0.010 | 0.010 | 0.004 | 0.004
Avg RGB PCA | 0.619 | 0.620 | 0.133 | 0.135 | 0.010 | 0.010 | 0.004 | 0.004
SCFT CPCA | 0.004 | 0.002 | 0.001 | 0.004 | 0.000 | 0.000 | 0.000 | 0.000

Table 5.7: Match scores of the top-rank for each PCA-based algorithm.

Table 5.8 gives the estimated average of the total match scores for each probe set (i.e., the average of the scores located along ranks 1 to 50), which represents the average of the first 50 identification estimates. Hence, each number in this table represents the average of the correct total match scores over ranks 1 to 50 (i.e., from the 1st to the 50th nearest neighbor) for each test sample. This is effectively the total performance on forced identification (i.e., the best 1, 2, …, 50 guesses). As shown in the table, each probe category has two scores corresponding to the two resolutions.

Gallery size / scored probes: FB = 993/993, duplicate I = 993/736, fc = 993/98, duplicate II = 993/228.

Algorithm | FB 192×128 | FB 384×256 | duplicate I 192×128 | duplicate I 384×256 | fc 192×128 | fc 384×256 | duplicate II 192×128 | duplicate II 384×256
PCA | 0.867 | 0.868 | 0.277 | 0.277 | 0.037 | 0.037 | 0.047 | 0.047
QPCA | 0.880 | 0.881 | 0.284 | 0.285 | 0.042 | 0.041 | 0.052 | 0.052
Red PCA | 0.887 | 0.887 | 0.290 | 0.290 | 0.030 | 0.029 | 0.035 | 0.035
Green PCA | 0.862 | 0.862 | 0.273 | 0.274 | 0.045 | 0.045 | 0.054 | 0.054
Blue PCA | 0.846 | 0.846 | 0.269 | 0.269 | 0.056 | 0.057 | 0.060 | 0.061
Avg RGB PCA | 0.865 | 0.865 | 0.277 | 0.278 | 0.044 | 0.044 | 0.050 | 0.050
SCFT CPCA | 0.059 | 0.047 | 0.025 | 0.035 | 0.017 | 0.035 | 0.020 | 0.030

Table 5.8: Average score of the total match scores of rank 50 for each PCA-based algorithm.

In order to evaluate and compare the performance of the different FR algorithms, two aspects are taken into account. The first is the evaluation of an algorithm based on its achieved top-rank match score, as reported in Table 5.7. The second is the total-average match score, obtained by averaging the scores achieved over ranks 1 to 50, as listed in Table 5.8. So,


according to the scoring results summarized in Tables 5.7 and 5.8, a number of observations can be made.

By testing the PCA-based algorithms against the FB probe data, as illustrated in Figures 5.5 and 5.6, the Q-PCA algorithm yielded the highest top-rank match score and achieved the best performance in the identification task. While the Red PCA achieved the second-best top-rank score, it achieved a total-average score better than that of the Q-PCA algorithm. It should be noted that the total-average score of each algorithm was not affected by the probe resolution, except for the SCFT CPCA algorithm, whose top-rank score decreased slightly at the 384×256 pixels resolution.

The performance of all algorithms against the duplicate I probe data at 192×128 pixels resolution is illustrated in Figure 5.7, where the best algorithm has the highest top-rank score and the worst algorithm has the lowest. It can be seen that the Q-PCA algorithm achieved the best top-rank score and the SCFT CPCA algorithm the lowest. On the other hand, the performance differed with respect to the total-average: the best total-average was achieved by the Red PCA algorithm, with the Q-PCA algorithm second. However, against the duplicate I probe data at 384×256 pixels resolution, as illustrated in Figure 5.8, the performance trend changed slightly. According to the reported top-rank scores, the Q-PCA algorithm still has the best performance. From the total-average perspective, the performance improved for the Q-PCA, Green PCA and Avg RGB PCA algorithms; the best total-average was achieved by the Red PCA algorithm, followed by the Q-PCA algorithm.


Figures 5.9 and 5.10 illustrate the performance of all algorithms against the fc probe data at resolutions of 192×128 and 384×256 pixels, respectively. The top-rank score was the same for all algorithms in both resolutions, as listed in Table 5.7, except for the SCFT CPCA algorithm. However, the total-average varies and can be used to compare and rank the competing algorithms. Unlike for the FB and duplicate I datasets, the best total-average is achieved by the Blue PCA algorithm. The Green PCA and Avg RGB PCA algorithms achieve the next highest performance, followed by the Q-PCA algorithm.

Figures 5.11 and 5.12 illustrate the performance of all algorithms against the duplicate II probe data at resolutions of 192×128 and 384×256 pixels, respectively. The top-rank score was the same for all algorithms in both resolutions, as listed in Table 5.7, except for SCFT CPCA. However, the total-average varies for each algorithm. At both resolutions, the best total-average was achieved by the Blue PCA algorithm, followed by the Green PCA algorithm.


Figure 5.5: Identification performance using the FB with 192×128 pixels resolution.

Figure 5.6: Identification performance using the FB with 384×256 pixels resolution.


Figure 5.7: Identification performance using the duplicate I with 192×128 pixels resolution.

Figure 5.8: Identification performance using the duplicate I with 384×256 pixels resolution.


Figure 5.9: Identification performance using the fc with 192×128 pixels resolution.

Figure 5.10: Identification performance using the fc with 384×256 pixels resolution.


Figure 5.11: Identification performance using the duplicate II with 192×128 pixels resolution.

Figure 5.12: Identification performance using the duplicate II with 384×256 pixels resolution.


5.3.5. Performance Comparison Using CMC in Different Resolutions

The effect of image resolution on performance is investigated and reported in this

experiment by comparing the CMC curves of each algorithm using the same probe

categories in different resolution. Table 5.9 shows, for each PCA-based algorithm, the

average score of the total match scores of the full rank applied to the four probe categories

using two resolutions. It should be noted that the bolded numbers in the table represent the

affected performance, either positively or negatively, by increasing the resolution.

Gallery size / scored probes: FB = 993/993, duplicate I = 993/736, fc = 993/98, duplicate II = 993/228.

Algorithm | FB 192×128 | FB 384×256 | duplicate I 192×128 | duplicate I 384×256 | fc 192×128 | fc 384×256 | duplicate II 192×128 | duplicate II 384×256
PCA | 0.987 | 0.987 | 0.612 | 0.613 | 0.086 | 0.085 | 0.193 | 0.194
QPCA | 0.988 | 0.988 | 0.609 | 0.610 | 0.089 | 0.089 | 0.191 | 0.191
Red PCA | 0.989 | 0.989 | 0.624 | 0.624 | 0.074 | 0.074 | 0.187 | 0.187
Green PCA | 0.986 | 0.986 | 0.606 | 0.606 | 0.090 | 0.090 | 0.189 | 0.189
Blue PCA | 0.981 | 0.981 | 0.588 | 0.589 | 0.112 | 0.112 | 0.186 | 0.187
Avg RGB PCA | 0.985 | 0.985 | 0.606 | 0.606 | 0.092 | 0.092 | 0.187 | 0.188
SCFT CPCA | 0.550 | 0.522 | 0.350 | 0.377 | 0.034 | 0.052 | 0.093 | 0.121

Table 5.9: Average score of the full rank for each PCA-based algorithm in two resolutions. 

The comparison graphs for each algorithm illustrate the identification performance curves for each probe category at the two resolutions. The graphs show the full CMC, with ranks from 1 to the full rank size, which equals the full size of the probe set.


5.3.5.1. CMC for the Gray-PCA algorithm

Figure 5.13: The CMC curves for PCA using the FB probes.

Figure 5.14: The CMC curve for PCA using the duplicate I probes.


Figure 5.15: The CMC curves for PCA using the fc probes.

Figure 5.16: The CMC curves for PCA using the duplicate II probes.


5.3.5.2. CMC for the QPCA algorithm

Figure 5.17: The CMC curves for QPCA using the FB probes.

Figure 5.18: The CMC curves for QPCA using the duplicate I probes.


Figure 5.19: The CMC curves for QPCA using the fc probes.

Figure 5.20: The CMC curves for QPCA using the duplicate II probes.


5.3.5.3. CMC for the Red-PCA algorithm

Figure 5.21: The CMC curves for Red-PCA using the FB probes.

Figure 5.22: The CMC curves for Red-PCA using the duplicate I probes.


Figure 5.23: The CMC curves for Red-PCA using the fc probes.

Figure 5.24: The CMC curves for Red-PCA using the duplicate II probes.


5.3.5.4. CMC for the Green-PCA algorithm

Figure 5.25: The CMC curves for Green-PCA using the FB probes.

Figure 5.26: The CMC curves for Green-PCA using the duplicate I probes.


Figure 5.27: The CMC curves for Green-PCA using the fc probes.

Figure 5.28: The CMC curves for Green-PCA using the duplicate II probes.


5.3.5.5. CMC for the Blue-PCA algorithm

Figure 5.29: The CMC curves for Blue-PCA using the FB probes.

Figure 5.30: The CMC curves for Blue-PCA using the duplicate I probes.


Figure 5.31: The CMC curves for Blue-PCA using the fc probes.

Figure 5.32: The CMC curves for Blue-PCA using the duplicate II probes.


5.3.5.6. CMC for the Avg RGB-PCA algorithm

Figure 5.33: The CMC curves for Avg RGB-PCA using the FB probes.

Figure 5.34: The CMC curves for Avg RGB-PCA using the duplicate I probes.


Figure 5.35: The CMC curves for Avg RGB-PCA using the fc probes.

Figure 5.36: The CMC curves for Avg RGB-PCA using the duplicate II probes.


5.3.5.7. CMC for the SCFT-CPCA algorithm

Figure 5.37: The CMC curves for SCFT-CPCA using the FB probes.

Figure 5.38: The CMC curves for SCFT-CPCA using the duplicate I probes.


Figure 5.39: The CMC curves for SCFT-CPCA using the fc probes.

Figure 5.40: The CMC curves for SCFT-CPCA using the duplicate II probes.


5.3.6. Variation in Identification Performance

Another performance evaluation approach aims to investigate the identification

performance using different galleries that contain different face images from the other

galleries. While a face-recognition algorithm estimates the identity of a face, a possible

question “how does the algorithm performance change using a different gallery and probe

set?” Tables Table 5.10 - Table 5.12 show the change in algorithm performance if the

galleries content is changed. In this experiment, a second standard dataset is constructed

and used. This dataset consists of five galleries of approximately 200 individuals, where

an individual is only in one gallery. Tables Table 5.10 - 5.13, summarize the results where

the algorithms are ordered by top rank score and average of rank scores, respectively. For

example, as indicated in Table 5.10, the QPCA algorithm scored highest on gallery of (Set

2). While Table 5.10 -Table 5.11 report results for the FB probes whereas Table 5.12 -

Table 5.13 report results for the duplicate I probes. The last column in these tables

represents the overall rank or order of each algorithm which indicates the general

performance indicator.

Algorithm ranking by top match. Gallery size / scored probes: Set 1 = 200/200, Set 2 = 200/200, Set 3 = 200/200, Set 4 = 200/200, Set 5 = 193/193.

Algorithm | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Overall rank
PCA | 1 | 4 | 3 | 3 | 3 | 2
QPCA | 1 | 1 | 2 | 1 | 1 | 1
Red PCA | 6 | 3 | 1 | 2 | 2 | 2
Green PCA | 5 | 2 | 5 | 5 | 2 | 3
Blue PCA | 3 | 4 | 6 | 6 | 5 | 4
Avg RGB PCA | 4 | 3 | 4 | 4 | 4 | 3
SCFT CPCA | 2 | 5 | 7 | 7 | 6 | 5
Avg score | 0.3707 | 0.5843 | 0.4988 | 0.5745 | 0.4996

Table 5.10: Variation in identification performance using five different galleries from the FB probes (top match).


Algorithm ranking by average of rank-50 scores. Gallery size / scored probes: Set 1 = 200/200, Set 2 = 200/200, Set 3 = 200/200, Set 4 = 200/200, Set 5 = 193/193.

Algorithm | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Overall rank
PCA | 2 | 4 | 4 | 3 | 3 | 3
QPCA | 1 | 3 | 3 | 2 | 2 | 1
Red PCA | 7 | 2 | 1 | 1 | 1 | 2
Green PCA | 5 | 6 | 6 | 5 | 5 | 6
Blue PCA | 4 | 7 | 7 | 6 | 6 | 7
Avg RGB PCA | 6 | 5 | 5 | 4 | 4 | 5
SCFT CPCA | 3 | 1 | 2 | 7 | 7 | 4
Avg score | 0.6553 | 0.8434 | 0.8739 | 0.8751 | 0.8841

Table 5.11: Variation in identification performance using five different galleries from the FB probes (average of rank scores).

 

Figure 5.41: Identification performance using the FB probes of (Set 1) and rank of 50


 

Figure 5.42: Identification performance using the FB probes of (Set 2) and rank of 50 

 

Figure 5.43: Identification performance using the FB probes of (Set 3) and rank of 50 


 

Figure 5.44: Identification performance using the FB probes of (Set 4) and rank of 50 

 

Figure 5.45: Identification performance using the FB probes of (Set 5) and rank of 50 


Algorithm ranking by top match. Gallery size / scored probes: Set 1 = 200/200, Set 2 = 200/200, Set 3 = 200/200, Set 4 = 200/200, Set 5 = 193/193.

Algorithm | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Overall rank
PCA | 2 | 3 | 2 | 3 | 3 | 2
QPCA | 4 | 3 | 2 | 1 | 3 | 2
Red PCA | 5 | 1 | 1 | 4 | 1 | 1
Green PCA | 1 | 4 | 4 | 1 | 3 | 2
Blue PCA | 6 | 4 | 4 | 3 | 3 | 3
Avg RGB PCA | 3 | 2 | 3 | 2 | 2 | 1
SCFT CPCA | 7 | 5 | 5 | 5 | 4 | 4
Avg score | 0.0269 | 0.2320 | 0.2829 | 0.0535 | 0.2283

Table 5.12: Variation in identification performance using five different galleries from the duplicate I probes (top match).

Algorithm ranking by average of rank-50 scores. Gallery size / scored probes: Set 1 = 200/200, Set 2 = 200/200, Set 3 = 200/200, Set 4 = 200/200, Set 5 = 193/193.

Algorithm | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Overall rank
PCA | 1 | 5 | 2 | 2 | 2 | 2
QPCA | 4 | 6 | 1 | 1 | 3 | 3
Red PCA | 2 | 1 | 3 | 1 | 1 | 1
Green PCA | 3 | 7 | 5 | 3 | 4 | 5
Blue PCA | 6 | 4 | 6 | 5 | 6 | 6
Avg RGB PCA | 5 | 3 | 4 | 4 | 5 | 4
SCFT CPCA | 7 | 2 | 7 | 6 | 7 | 7
Avg score | 0.1782 | 0.5068 | 0.5907 | 0.2620 | 0.7791

Table 5.13: Variation in identification performance using five different galleries from the duplicate I probes (average of rank scores).


 

Figure 5.46: Identification performance using the duplicate I probes of (Set 1) and rank of 50 

 

 

Figure 5.47: Identification performance using the duplicate I probes of (Set 2) and rank of 50 


 

Figure 5.48: Identification performance using the duplicate I probes of (Set 3) and rank of 50 

 

Figure 5.49: Identification performance using the duplicate I probes of (Set 4) and rank of 50 

Page 90: Emad Jaha Thesis Draft v12 - COnnecting REpositories · Title: Microsoft Word - Emad Jaha_Thesis Draft_v12 Author: eeguest Created Date: 12/5/2010 10:07:32 AM

79 

 

Figure 5.50: Identification performance using the duplicate I probes of (Set 5) and rank of 50

5.3.7. GD versus ID

The decidability indices for the PCA-based algorithms used are calculated and listed in Table 5.14.

Algorithm       Decidability Index (d)
PCA             0.1422
QPCA            0.1470
Red PCA         0.1694
Green PCA       0.1390
Blue PCA        0.1486
Avg RGB PCA     0.1475
SCFT CPCA       0.0251

Table 5.14: Decidability index (d) for each algorithm.
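As a minimal sketch of how a decidability index such as those in Table 5.14 is commonly computed, the function below applies the usual formula d = |mu_G - mu_I| / sqrt((sigma_G^2 + sigma_I^2) / 2) to arrays of genuine and imposter matching scores. The score arrays and the exact normalization are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def decidability_index(genuine, imposter):
    """Decidability index d between genuine and imposter score distributions.

    genuine  : 1-D array of scores for same-identity comparisons
    imposter : 1-D array of scores for different-identity comparisons
    """
    mu_g, mu_i = np.mean(genuine), np.mean(imposter)
    var_g, var_i = np.var(genuine), np.var(imposter)
    # A larger d means the two distributions are better separated.
    return abs(mu_g - mu_i) / np.sqrt((var_g + var_i) / 2.0)
```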

Figure 5.51 shows the genuine and imposter distributions obtained using PCA. The blue curve and the red dashed curve represent the genuine and imposter distributions, respectively; the vertical lines mark the means of the two distributions, and the region where the two curves overlap indicates the errors resulting from their intersection. The developed QPCA algorithm yields the genuine versus imposter distributions illustrated in Figure 5.52. It can be noted that QPCA achieves better separation between the genuine and imposter distributions, with fewer errors. Figure 5.53 shows the distributions obtained using Red PCA. The separation between genuine and imposter scores is better than that achieved by QPCA and is, in fact, the largest overall (i.e., the largest decidability index). As shown in Figure 5.54, Green PCA yields less separation than PCA, with more errors. The distributions obtained using Blue PCA and Avg RGB PCA are shown in Figure 5.55 and Figure 5.56, respectively. They are very similar, although the separation of Blue PCA is slightly better than that of Avg RGB PCA. In the case of SCFT CPCA, the separation is the worst among all the algorithms, as shown by its distributions in Figure 5.57; it is far smaller even than that of the second-lowest algorithm. This indicates the highest error rate, which rises rapidly because of the drastic overlap between the genuine and imposter distributions.

Figure 5.51: GD versus ID for PCA algorithm
Figure 5.52: GD versus ID for QPCA algorithm
Figure 5.53: GD versus ID for Red PCA algorithm
Figure 5.54: GD versus ID for Green PCA algorithm
Figure 5.55: GD versus ID for Blue PCA algorithm
Figure 5.56: GD versus ID for Avg RGB PCA algorithm
Figure 5.57: GD versus ID for SCFT CPCA algorithm

5.3.8. FAR versus FRR errors

Table 5.15 gives the EER value for each PCA-based algorithm.

Algorithm       EER
PCA             0.151
QPCA            0.152
Red PCA         0.130
Green PCA       0.168
Blue PCA        0.177
Avg RGB PCA     0.145
SCFT CPCA       0.457

Table 5.15: The EER of each algorithm.

Figure 5.58 shows the FAR and FRR curves obtained using the PCA algorithm. The blue curve represents the FAR, the red dashed curve represents the FRR, and the crossing point of the two curves denotes the equal error rate (EER), at which FAR and FRR are equal. It can be noted that the EER is small, and consequently the errors are low. Figure 5.59 illustrates the curves for the developed QPCA algorithm. Its EER is very similar to the one achieved with PCA, so QPCA most likely produces a similar number of errors; this observation confirms the results obtained previously from the genuine and imposter distributions. Figure 5.60 shows the curves obtained using the Red PCA algorithm, whose EER is the smallest among all the algorithms. The curves obtained using Green PCA and Blue PCA are illustrated in Figure 5.61 and Figure 5.62, respectively. The EERs of both are large compared with the previously mentioned algorithms (PCA, QPCA and Red PCA), so both produce more errors, although Green PCA has a slightly lower error than Blue PCA. The curves obtained using Avg RGB PCA are shown in Figure 5.63; its EER lies between those of Red PCA and Green PCA, being greater than the former and smaller than the latter. Finally, the curves of SCFT CPCA are shown in Figure 5.64. It has by far the largest EER, indicating that this algorithm produces the highest error rate.
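As an illustration of how FAR/FRR curves like those in Figures 5.58-5.64 and the EER values in Table 5.15 can be obtained, the sketch below sweeps a decision threshold over hypothetical genuine and imposter score arrays (higher score = better match) and estimates the EER as the point where the two error curves are closest; it is a generic outline, not the evaluation code used in this thesis.

```python
import numpy as np

def far_frr_eer(genuine, imposter, n_thresholds=1000):
    """FAR and FRR as functions of the acceptance threshold, plus the estimated EER.

    A comparison is accepted when its score is >= the threshold.
    """
    lo = min(genuine.min(), imposter.min())
    hi = max(genuine.max(), imposter.max())
    thresholds = np.linspace(lo, hi, n_thresholds)

    far = np.array([(imposter >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects

    # EER: the crossing point of the two curves (closest point in practice).
    i = np.argmin(np.abs(far - frr))
    eer = (far[i] + frr[i]) / 2.0
    return thresholds, far, frr, eer
```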

Figure 5.58: FAR versus FRR curves for PCA algorithm
Figure 5.59: FAR versus FRR curves for QPCA algorithm
Figure 5.60: FAR versus FRR curves for Red PCA algorithm
Figure 5.61: FAR versus FRR curves for Green PCA algorithm
Figure 5.62: FAR versus FRR curves for Blue PCA algorithm
Figure 5.63: FAR versus FRR curves for Avg RGB PCA algorithm
Figure 5.64: FAR versus FRR curves for SCFT CPCA algorithm

5.3.9. ROC Curves

Figure 5.65 shows the ROC curves obtained using the PCA and QPCA algorithms. The relationship between FAR and FRR can be observed clearly: as FAR increases, FRR decreases and accordingly (1 - FRR) increases, and vice versa. The small black circle denotes the estimated EER point, at which FAR and FRR are equal. Compared with the PCA curve, the QPCA curve indicates that QPCA achieves higher accuracy than PCA while accepting the same number of errors. Figure 5.66 shows the curve obtained using the Red PCA algorithm. It is the highest curve of all, and thus it yields the fewest errors; this confirms the findings from the genuine and imposter distributions and from the FAR and FRR curves. Figure 5.67 and Figure 5.68 show the results obtained using the Green PCA and Blue PCA algorithms, respectively. Both curves are lower than the PCA and QPCA curves, which means more errors, although Green PCA is better than Blue PCA, with a lower error. The curve obtained using Avg RGB PCA is shown in Figure 5.69. It can be inferred from this curve that Avg RGB PCA achieves fewer errors than Green PCA and Blue PCA, but its curve is still lower than that of Red PCA. Finally, the lowest curve overall, which yields the highest number of errors, is that of SCFT CPCA, as illustrated in Figure 5.70.
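A ROC curve of the kind shown in Figures 5.65-5.70 can then be traced by plotting the verification rate (1 - FRR) against FAR and marking the estimated EER point. The short sketch below assumes matplotlib and reuses the outputs of the hypothetical far_frr_eer helper sketched in the previous subsection.

```python
import matplotlib.pyplot as plt

def plot_roc(far, frr, eer):
    """Plot (1 - FRR) versus FAR and mark the estimated EER point."""
    plt.plot(far, 1.0 - frr, label="ROC curve")
    plt.plot([eer], [1.0 - eer], "ko", label="estimated EER point")
    plt.xlabel("False Accept Rate (FAR)")
    plt.ylabel("1 - False Reject Rate (verification rate)")
    plt.legend()
    plt.show()
```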

Figure 5.65: PCA and QPCA algorithms ROC curves
Figure 5.66: ROC curve for Red PCA algorithm
Figure 5.67: ROC curve for Green PCA algorithm
Figure 5.68: ROC curve for Blue PCA algorithm
Figure 5.69: ROC curve for Avg RGB PCA algorithm
Figure 5.70: ROC curve for SCFT CPCA algorithm


CHAPTER 6 

6. CONCLUSION AND FUTURE WORK

This chapter presents the conclusions of the aspects studied in this thesis, along with a summary of the research and findings in light of the experimental work and testing results. Furthermore, proposed directions for extending this work are offered as possible future work.

6.1. Summary and Findings

In this thesis, an overall background on biometric traits and systems and on face recognition concepts, techniques and processing has been provided. A holistic quaternionic representation for color face images has been investigated. In addition, an overview of quaternion (hypercomplex) concepts, including basic definitions, properties and representations, has been given. The PCA and SVD concepts have then been explained and extended to QPCA and QSVD, which have been adopted and employed to design a biometric system for color face recognition and matching. After that, the design and implementation used to evaluate and compare the proposed technique with other PCA-based techniques, as well as the testing methodology, including data and experiments, have been described and discussed. Within the performance evaluation, the following experimental investigation avenues have been conducted:

1. Identification performance against probe categories.


2. Performance comparison using CMC in different resolutions.

3. Variation in identification performance.

4. GD versus ID.

5. FAR versus FRR errors.

6. ROC curves.

The following are the major contributions of this research work.

A novel scheme for efficient color face representation and processing for a face-based biometric recognition system.

The concepts of PCA and SVD are extended to the quaternion (hypercomplex) domain to compactly represent color face images.

A detailed analysis of the incorporation of color information in face recognition is provided.

Performance improvement by using color information and the Q-PCA method.

An extended performance evaluation is performed and comparisons from different perspectives are provided.

The following work was performed in support of these contributions:

Development of a new algorithm for face recognition and matching based on the quaternion principal component analysis (QPCA) technique, and extension of the face recognition software to color images.

Implementation of other standard PCA-based algorithms for the sake of comparison against the Q-PCA algorithm; these algorithms are PCA, Red PCA, Green PCA, Blue PCA, Avg RGB PCA and SCFT CPCA.

Organization of facial data galleries and probes under different categories to be used in the conducted experiments.

Performance testing and analysis through a number of stress-test experiments on the compared implementations to evaluate each against the others.

Generation of proper evaluation graphs and tables demonstrating and representing the experimental outcomes.

The following are the major findings derived through this research work:

Use of color information can increase accuracy.

Regarding color face images, the red channel (component) is the most effective of the three color channels, as concluded from its high performance, the best separation among all algorithms and the lowest EER. Hence, red is the most correlated component and the most suitable one for building PCA features.

While making use of the most effective single color channel enhances the performance of face recognition, making use of all the color channels holistically and cooperatively improves the performance even more.

Extending the standard grayscale PCA features to the Q-PCA using the quaternion color representation can increase accuracy.

The Q-PCA achieves good separation between the genuine and imposter distributions and fewer errors than the PCA, even though its EER is very similar to that of the PCA.

The experimental outcomes indicate that, in most cases, the proposed Q-PCA achieves better performance than the other tested PCA-based algorithms.

From the robustness and reliability points of view, each algorithm has been tested on special sets of the facial database consisting of samples that represent difficult cases such as pose and illumination variations. Among all the tested algorithms, the Q-PCA provides better robustness against such unconstrained cases.

Using face image data with higher resolution can increase the precision of the derived PCA features and consequently can increase the identification and matching accuracy.

6.2. Future Work

There is a lack of research using quaternion numbers to represent and process color images; because it is a complicated task, it has not received enough attention in the research literature. The following are some suggested future studies to enhance the present research:

The performance and accuracy can be improved by performing additional preprocessing operations to prepare better-normalized image data, which can reduce the variability of the face images (such variability is very pronounced in the currently used database). For example, all images can be translated, rotated and scaled so that the centers of the eyes are placed on specific pixels, and faces can be masked to remove the background and hair; a sketch of such eye-based geometric normalization is given after this item. Accuracy is thereby expected to increase.
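A minimal sketch of such eye-based geometric normalization is given below, assuming OpenCV and NumPy are available and that the eye-center coordinates are supplied externally (for example, from the FERET ground truth). The target eye positions and output size are illustrative choices, not values used in this thesis.

```python
import cv2
import numpy as np

def align_face(img, left_eye, right_eye,
               target_left=(30, 45), target_right=(98, 45), out_size=(128, 160)):
    """Rotate, scale and translate img so the eye centers land on fixed pixels."""
    (xl, yl), (xr, yr) = left_eye, right_eye
    (txl, tyl), (txr, tyr) = target_left, target_right

    # Similarity transform (rotation + uniform scale + translation) derived
    # from the two eye correspondences.
    rot = np.arctan2(tyr - tyl, txr - txl) - np.arctan2(yr - yl, xr - xl)
    scale = np.hypot(txr - txl, tyr - tyl) / np.hypot(xr - xl, yr - yl)
    cos_a, sin_a = scale * np.cos(rot), scale * np.sin(rot)

    # 2x3 affine matrix that maps the left eye onto its target position.
    M = np.float32([[cos_a, -sin_a, txl - (cos_a * xl - sin_a * yl)],
                    [sin_a,  cos_a, tyl - (sin_a * xl + cos_a * yl)]])
    return cv2.warpAffine(img, M, out_size)
```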

The currently developed Q-PCA algorithm for color face recognition can be adapted, modified and applied to another biometric trait, such as the color iris.

The proposed face-based biometric system can be applied using the quaternionic representation of color images in other color spaces, such as the XYZ, YUV or YCbCr color spaces. For example, a face image in the YCbCr color space can be represented as (a brief code sketch of this representation follows):

$$q(x, y) = 0 + Y(x, y)\,i + C_b(x, y)\,j + C_r(x, y)\,k \qquad (6.1)$$
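As a minimal sketch of this idea (assuming a NumPy/OpenCV workflow; the channel ordering with the real part first is an illustrative layout), a color face image can be converted to YCbCr and packed into a pure-quaternion array with a zero real part, in the spirit of Equation 6.1, before being fed to a quaternion PCA routine.

```python
import cv2
import numpy as np

def ycbcr_quaternion_image(bgr_img):
    """Encode a BGR image as an H x W x 4 array of pure quaternions
    (real part 0 and imaginary parts Y, Cb, Cr), following Equation 6.1."""
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb).astype(np.float64) / 255.0
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]  # OpenCV stores Y, Cr, Cb
    zero = np.zeros_like(y)
    return np.stack([zero, y, cb, cr], axis=-1)  # components (0, Y, Cb, Cr)
```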

The proposed Q-PCA algorithm is fully automatic. It is therefore suggested to investigate the development of a partially automatic QPCA-based algorithm for face recognition. Such a partially automatic algorithm is supplied with some guiding information, such as the coordinates of the centers of the eyes (this guide is already available in the FERET database). These guides are used to increase the precision, and consequently the performance and accuracy are improved. This new implementation approach can then be compared with the current fully automatic approach.



Vita

Personal Information

Date of Birth: 1982
Place of Birth: Jeddah
Nationality: Saudi
Emails: 1. [email protected]  2. [email protected]
Mobile Number: 0504696694
Present Address: 1st Street, Al-Badee District, Dammam, Saudi Arabia
Permanent Address: P.O. Box 1237, Jeddah 21433, Saudi Arabia
Languages: 1. Arabic (native language)  2. English

Education

2000-2004: King Abdul-Aziz University, Jeddah, Bachelor of Science in Computer Science

2007-2010: King Fahd University of Petroleum & Minerals, Dhahran, Master of Science in Computer Science

Work

2004-2005: Developer in IT Unit in Deanship of Community Services & Continuing Education at King Abdul-Aziz University

2005-Present: Graduate Assistant in Computer Science Department in College of Education at King Abdul-Aziz University

Interests

Image processing and pattern recognition
Biometric systems
Web development