July 18, 2012 17:35 WSPC/INSTRUCTION FILE ws-ijprai
International Journal of Pattern Recognition and Artificial Intelligence
© World Scientific Publishing Company
IRIS RECOGNITION OF DEFOCUSED IMAGES
FOR MOBILE PHONES
BO LIU†‡, SIEW-KEI LAM‡, THAMBIPILLAI SRIKANTHAN‡ and WEIQI YUAN†
†Computer Vision Group, Shenyang University of Technology
Shenyang, China, 110870
‡Centre for High Performance Embedded Systems
Nanyang Technological University
Singapore, 637553
[email protected]@[email protected]
In this paper, we introduce a novel iris recognition approach for mobile phones, which takes into account imaging noise arising from image capture outside the Depth of Field (DOF) of cameras. Unlike existing approaches that rely on special hardware to extend the DOF or computationally expensive algorithms to restore the defocused images prior to recognition, the proposed method performs recognition on the defocused images based on the stable bits in the iris code representation that are robust to imaging noise. To the best of our knowledge, our work is the first to investigate the characteristics of iris features for varying degrees of image defocus when the images are captured outside the DOF of cameras. Based on our findings, we present a method to determine the stable bits of an enrolled image. When compared to iris recognition of defocused images that relies on the entire code representation, the proposed recognition method increases the inter-class variability while reducing the intra-class variability of the samples considered. This leads to smaller intersections between the intra-class and inter-class distance distributions, which results in higher recognition performance. Experimental results based on over 15,000 images show that the proposed method achieves an average recognition performance gain of about 2 times. It is envisioned that the proposed method can be incorporated as part of a multi-biometric system for mobile phones due to its lightweight computational requirements, which is well suited for power-sensitive solutions.
Keywords: Iris Recognition; Defocused Iris Image; Depth of Field.
1. Introduction
The prevalence of internet-enabled mobile phones has given rise to many applications such as mobile banking, contactless payment, and mobile marketing, which require tight security protection of user information. As such, traditional security features on mobile phones, such as personal identification numbers, are expected to make way for more sophisticated biometric systems. One of the most promising biometric
authentication systems for mobile phones is iris recognition, which has been shown
to be an effective method for recognizing individuals. In addition, iris recognition
can be readily adopted in mobile phones due to the availability of built-in cameras
in mobile phones today.7,24,32
Most iris recognition methods adopt a similar strategy to the well-known Daugman's approach, which employs binary coding of phase information to represent the iris features.8−11 The phase information is obtained by applying mathematical transformations (texture filtering) to the iris images.1,4,8−11,25,27,42 For example, Daugman extracts the iris texture's local phase features using Gabor filtering.8−11 The work in Ref. 42 extracts features by applying four levels of processing to the iris images using the Laplace pyramid decomposition algorithm. In Ref. 4, Boles and his group extract the 'zero-crossings' information using a 1-D cubic spline wavelet. Ma makes use of the dyadic wavelet method to extract the binary features.27 In Ref. 25, Lim and his group extract the iris images' high-frequency information as features by adopting the two-dimensional Haar wavelet transformation. Azizi makes use of the contourlet wavelet transformation to obtain the iris images' intrinsic texture structure and then chooses the optimal feature sequences.1
While all of the above methods can achieve good recognition performance, they are
unable to work with images that are affected by imaging noise introduced when the
images are captured in unconstrained environments.
One of the key problems with iris image acquisition in mobile phones is the
limited DOF of the camera system. Iris camera systems must be capable of capturing the iris's texture details with a high camera shutter speed so that the eye is not exposed to long periods of lighting. To achieve this, the camera system
requires a long focus lens to magnify the iris, and a high numerical aperture to
enable sufficient lighting.3,8−11,28,36 Such camera systems have short DOF, which
increases the difficulty in the usage of the iris recognition system by untrained and
non-cooperative users as images captured outside the DOF of cameras lead to low
recognition performance.30 This problem arises as iris patterns of images captured
outside the DOF are transformed by blurring caused by optical defocusing.21 Using an auto-focusing camera to overcome this problem will lead to an increase in the cost, size, and complexity of the system.15,21 In addition, techniques based on the use of audio/visual cues to guide the user to an appropriate distance from the camera during image acquisition are not practical for mobile phone users, as they can be time-consuming and will therefore hinder the acceptance of the process in daily use.16,40
Previous work on extending the DOF of iris recognition system can be catego-
rized into two main approaches: 1) using special hardware such as aspheric optics
and 2) applying image restoration on the defocused images prior to recognition. The
former approach employs specially designed aspheric optics to increase the focus
invariance along the axis of the lens, while the latter uses mathematical models that
are constructed based on certain image capture conditions to restore the defocused
images. These two approaches have a common aim to enhance the quality of the
captured images in order to improve the recognition performance for robust iris
recognition systems.
In the first approach, computational imaging systems that commonly em-
ploy wave-front coded techniques (originally proposed by Dowski and Cathey) are
used.6,12 These systems introduce a cubic phase mask in the standard optical sys-
tem to produce large focus invariance and employ signal processing algorithms on
the captured images. Various wave-front coded lenses have been proposed for iris
recognition, which take into account the non-linear steps required for iris feature
extraction and identification.29,30,33 The work in Refs. 36 and 40 examined the uti-
lization of wave-front coded imaging system for increasing the operational range of
iris recognition by considering the absence and presence of post-processing meth-
ods to restore the optical defocus in images that are introduced by the aspheric
optic. They have demonstrated that the use of such optical elements is capable of
relaxing the DOF requirements without the need to restore the defocused images.
The work in Refs. 2 and 3 proposed a pattern matching strategy for iris recogni-
tion of defocused images based on correlation filtering. However, this method can
only be used for images captured with wave-front coded systems. The methods dis-
cussed above require special optical element insertion, large computations and in
some cases generate images with lower Signal-to-Noise Ratio (SNR), which must
undergo post-processing prior to recognition.36
The second approach aims to construct a mathematical model for restoring the
defocused images prior to iris recognition. A real-time iris image restoration method
based on inverse filtering was introduced in Refs. 22 and 23. However, this method
did not consider noise factors. In addition, the point spread function (PSF) was
heuristically determined without taking into consideration the camera optics, and
this resulted in recognition performance degradation. A non-blind de-convolution
algorithm to restore iris images and extend the DOF was proposed in Ref. 21. The
authors estimated the PSF using the focus score that was measured from the high
frequency components of the iris regions after irrelevant objects such as eyelashes
and eyelids were removed. However, the Gaussian smoothness term used in the algorithm tends to over-smooth the restored image. An iris restoration algorithm
using more accurate information pertaining to the image capture conditions that
are computed from the iris images was proposed in Ref. 20, which achieves better performance on both synthetic and real data sets when compared with state-of-the-art iris restoration algorithms. While some of the image restoration methods have shown promising results, the biggest challenge in these approaches lies in obtaining suitable information pertaining to the image capture conditions that is required to construct the mathematical model for restoring the defocused images.20 In addition,
similar to the first approach, techniques relying on image restoration algorithms
have high computational complexity and do not lend themselves well towards the
power sensitive requirements of mobile phones.
The method proposed in this paper does not require special hardware or image
restoration algorithms for iris recognition of defocused images that are captured
outside the camera DOF. The proposed method is based on our investigation which
reveals that certain iris features are not sensitive to optical defocusing and these
features can be represented as stable bits in the iris code representation. In addition,
we will also present our analysis of the iris feature characteristics under varying degrees of image defocus. Based on these findings, we will propose a method to identify the
stable bits of a given enrolled image, and show that good recognition performance
can be achieved by limiting the pattern matching to these stable bits instead of
using the entire iris code representation. As only a single enrolled image is required
to obtain the stable bits, the proposed approach is highly practical. In this paper, we
assume that the enrolled image used in the proposed method is a clear image that
is captured within the camera DOF. Finally, the proposed method is well suited to
be incorporated as part of a multi-biometric system for mobile phones as it does
not require any power hungry computations as compared to existing approaches for
extending the DOF of iris recognition system.
While previous works in Refs. 5, 13, 14, 17-19, 35, 38 and 41 have reported on the prospects of using stable iris features for recognition, to the best of our knowledge, this work is the first to investigate the characteristics of iris features under varying degrees of image defocus when the images are captured outside the DOF of cameras.
Our findings have led to the development of the proposed method that is capable
of recognizing defocused iris images without the need for special hardware or time
consuming image restoration algorithms.
The paper is organized as follows. In the following section, we will discuss our
study, which reveals that certain iris features are not sensitive to optical defocusing
and these features can be represented as stable bits in the iris code representation.
These stable bits can then be exploited in the recognition process. In Sec. 3, we will
present an efficient method to identify stable bits from a single enrolled image based
on our analysis of the iris feature characteristics under varying degrees of image defocus.
Next, we present experimental results to show that the proposed method can lead
to significantly higher recognition performance when compared to iris recognition
of defocused images that relies on the entire code representation. Sec. 5 concludes
the paper with a discussion on future work.
2. Exploiting Stable Features in Defocus Images for Iris
Recognition
In this section, we will discuss our experimental setup and analysis which reveals
that certain iris features in images that are captured outside the camera DOF are
invariant to optical defocusing. We will present a simple method to identify these
features and show that they can be used for iris recognition.
2.1. Image acquisition
In order to obtain an iris image dataset for our study, we employed an iris image
sampling system that is controlled by a stepping motor. The system is capable of
capturing a sequence of 61 iris images with a sampling interval of 2mm for a single
eye. The sequence of 61 images contains a single clear image (captured in sharp focus within the camera DOF), while the remaining images exhibit different degrees of defocus as they are captured outside the camera DOF at varying distances from the camera. In order to identify the clear image from a sequence, we employ the clarity evaluation function presented in Refs. 26, 31, 34 and 45. We then selected the 15 images (which are subjected to optical defocus) before and after the clear image to be included in the dataset. Fig. 1 shows an example of an iris image sequence, where the 16th image is in sharp focus as it is captured within the camera DOF. The 30 images captured before and after the clear image are subjected to varying degrees of defocus. The resolution of each image is 640×480 pixels.
Fig. 2 shows the clarity evaluation function curve of a particular sequence of iris
images. The x-axis denotes the image number in the sequence, which corresponds
to the sampling distance. The first half of the curve shows the gradual decrease in
defocus from the 1st image to the 16th image, while the second half of the curve
shows the gradual increase in defocus of the remaining images. As the sampling
process of a sequence is performed with a fixed interval (i.e. 2mm apart), we can
represent the degree of defocus using the image number in the sequence.
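The exact clarity evaluation function of Refs. 26, 31, 34 and 45 is not reproduced here; as an illustrative stand-in (the function names below are hypothetical), a generic variance-of-Laplacian focus measure can rank the images of a sequence by sharpness:

```python
import numpy as np

def laplacian_focus_score(img):
    """Generic focus measure: variance of a 4-neighbour Laplacian
    response. Higher values indicate a sharper (less defocused) image."""
    img = img.astype(np.float64)
    # 4-neighbour Laplacian via shifted copies (no SciPy needed)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap[1:-1, 1:-1].var()  # ignore the wrap-around border

def clearest_index(sequence):
    """Return the index of the sharpest image in a sequence."""
    return int(np.argmax([laplacian_focus_score(im) for im in sequence]))
```

Applied to a 31-image sequence, `clearest_index` would pick out the in-focus frame (the 16th image in Fig. 1) under this assumed focus measure.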
For this study, we selected 50 people, and each of their left and right eyes was sampled 5 times using the iris image sampling system. Sampling sequences of the same eye may differ due to lighting conditions, the position of the iris during
Fig. 1. A sequence of iris images.
[Plot: normalized value of the clarity evaluation function (y-axis, 0-1) against image number (x-axis).]
Fig. 2. Clarity evaluation function curve of a sequence of iris images.
image capture, etc. Incorporating such variations during image acquisition of a single eye enables us to obtain a dataset that provides more realistic evaluations.
Hence, our dataset consists of 100 classes, where each class has 5 sampling sequences
and there are 31 images in each sequence. The total number of images in the dataset
is 15,500 (100×5×31).
2.2. Image pre-processing
Each image in the dataset undergoes a pre-processing step consisting of localization
and normalization to eliminate any non-related information (e.g. sclera, pupil, etc.)
and to obtain an iris area with uniform size and shape. We have adopted the
pre-processing method presented in Ref. 43 to obtain a normalized image with
a resolution of 512×64. Fig. 3 shows an example of normalized iris images in a
particular sequence.
Performing iris feature extraction on the whole normalized image will lead to the
inclusion of pseudo features caused by irrelevant information that may still exist,
such as eyelids and eyelashes. In order to ensure that the image used for recognition
contains minimal irrelevant information, we can extract a smaller region of the
normalized image for feature extraction. However, if the chosen region is too small,
the features may not be sufficient for iris recognition. Based on the work in Ref. 44, we chose the mid-top area, i.e. about 40% of the whole normalized image, as it has been shown to exclude features caused by irrelevant objects most of the time. The
region of normalized images chosen for feature extraction is shown in the rectangles
in Fig. 3. The resolution of the reduced normalized image is 260×32 pixels.
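As a sketch of this step, the mid-top region can be cropped from the 512×64 normalized image. The exact offsets used in Ref. 44 are not specified here, so a horizontally centred crop of the top rows is assumed:

```python
import numpy as np

def extract_midtop_region(normalized, out_h=32, out_w=260):
    """Crop the mid-top region of a normalized iris image (e.g. 512x64).
    The exact offsets of Ref. 44 are not given in this paper; this
    sketch assumes the top rows with a horizontally centred window."""
    h, w = normalized.shape
    x0 = (w - out_w) // 2          # centre horizontally (assumption)
    return normalized[:out_h, x0:x0 + out_w]
```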
Fig. 3. Normalized iris image.
2.3. Feature extraction
Research in iris feature extraction for over a decade reveals that the Gabor filter used by Daugman is widely considered the most efficient band-pass filter for discriminating the iris texture when compared to other wavelet extraction methods.8,39 A Gabor filter under rectangular coordinates is given by Eq. (1).
Gabor(x, y) = exp{−[(x − x0)²/α² + (y − y0)²/β²]} exp{2πi[u0(x − x0) + v0(y − y0)]}.  (1)
In Eq. (1), (x0, y0) denote the centre position of the filter, (α, β) denote the
wavelet size in length and width, and the frequency and direction of the filter
depend on (u0, v0).
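Eq. (1) can be realized directly on a discrete grid. The following sketch builds the complex Gabor kernel; the grid size and parameter values are illustrative, not those used in the experiments:

```python
import numpy as np

def gabor_kernel(size, x0, y0, alpha, beta, u0, v0):
    """2-D complex Gabor filter of Eq. (1): a Gaussian envelope of
    widths (alpha, beta) centred at (x0, y0), modulated by a complex
    sinusoid with spatial frequency and direction set by (u0, v0)."""
    y, x = np.mgrid[0:size[0], 0:size[1]].astype(np.float64)
    envelope = np.exp(-(((x - x0) ** 2) / alpha ** 2
                        + ((y - y0) ** 2) / beta ** 2))
    carrier = np.exp(2j * np.pi * (u0 * (x - x0) + v0 * (y - y0)))
    return envelope * carrier
```

Convolving the normalized iris image with such a kernel yields the complex responses whose phase is quantized in the next step.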
Binary quantification based on the complex phase vector is also widely adopted
for representing the iris features.37 As such, we have applied the 2D Gabor filter
on the normalized iris texture images, extracted the complex phase information,
and mapped them to binary codes based on their location on the complex plane as
shown in Fig. 4. For example, if the phase vector (consisting of real and imaginary
parts) lies in the top-right quadrant of the complex plane, it will be mapped to “11”.
As a result, a feature code matrix of 260×32×2 bits is generated for each iris image.
The real part and imaginary part, each a 260×32×1 bit matrix, are stored separately
to facilitate our analysis on the changes in the individual bit characteristics due to
the imaging noise.
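The quadrant coding step can be sketched as follows, with the sign convention (non-negative component maps to bit 1, so a top-right-quadrant vector maps to “11”) assumed from Fig. 4:

```python
import numpy as np

def phase_code(response):
    """Quadrant coding of complex filter responses (Fig. 4 scheme):
    the real-part bit is 1 when Re >= 0 and the imaginary-part bit is
    1 when Im >= 0. The two bit matrices are returned separately, as
    in the paper, to allow per-bit analysis."""
    real_bits = (response.real >= 0).astype(np.uint8)
    imag_bits = (response.imag >= 0).astype(np.uint8)
    return real_bits, imag_bits
```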
In a non-invasive iris recognition system, different eye images acquired from a single person are commonly subject to rotational excursions perpendicular to the main optical axis of the camera. This problem needs to be rectified so that we can analyze the corresponding bit characteristics at various degrees of defocus within a particular iris image sequence. This is achieved by using the clear image
(16th image in the 1st sequence of a particular class) as the reference for shifting
the bits in other images from the same class to the left/right so that the Hamming
distance between them is minimized.
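This alignment step can be sketched as a search over circular shifts along the angular axis. The maximum shift of ±8 bit positions and the (rows, columns) = (radial, angular) layout are assumptions for illustration, not values from the paper:

```python
import numpy as np

def align_code(reference, code, max_shift=8):
    """Compensate eye rotation: circularly shift `code` along the
    angular axis (assumed to be axis 1) and keep the shift that
    minimises the fractional Hamming distance to the clear reference
    code. max_shift is an assumed search range."""
    best, best_hd = code, 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code, s, axis=1)
        hd = np.mean(shifted != reference)   # fractional Hamming distance
        if hd < best_hd:
            best, best_hd = shifted, hd
    return best, best_hd
```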
Fig. 4. Phase coding.
2.4. Proposed method 1: identifying stable bits for iris recognition
The traditional pattern matching process for iris recognition attempts to cluster features from the same individual and to classify different individuals based on focused iris images. In this study, we aim to determine iris features that are robust to imaging noise. These features are prevalent in all the iris images of
a particular sequence. As discussed above, each iris image can be converted into
two feature code matrices consisting of bit patterns. Features that are invariant
to imaging noise can therefore be identified by a constant bit value in the same
position of the feature code matrices of the entire image sequence. Hence, our goal
is to identify feature clusters from iris image sequences of an individual and perform
classification of sequences from different individuals. We first make the following
definitions:
Definition 1. Exactly Consistent Bit (ECB): A bit position in the feature code
matrix is denoted as an ECB if its value is constant (always 1 or always 0) across two or more feature code matrices.
Definition 2. Exactly Consistent Bit Rate (ECBR): The percentage of ECBs (in
both real and imaginary feature code matrices) with respect to the size of the real
and imaginary feature code matrices (i.e. 260×32×2).
As ECBs represent bit positions in the feature code matrix that are invariant to imaging noise, they form the ideal bits for pattern matching of defocused images. ECBR gives an indication of the similarity of iris features in two or more feature code matrices. Fig. 5 compares the average ECBR of 100 classes between 1) two images with the same degree of defocus (as denoted by the x-axis) from different sequences of iris images of the same class (asterisk line), and 2) two images from the same
class, one the defocus image and the other, the clear image (rhombic line). For the
rhombic line, it can be observed that the average ECBR of a defocused image (with
[Plot: average ECBR (y-axis, 0-1) against image number (x-axis), with one curve for image pairs with the same degree of defocus and one for pairs with different degrees of defocus.]
Fig. 5. Average ECBR of iris images from 100 classes.
respect to the 16th image, which is the clear image) decreases with increasing degree of defocus. This implies that the number of ECBs in a defocused image is inversely proportional to the amount of defocus in the iris image. Hence, if the ECBs are used for recognition, we can expect the recognition performance to degrade as the amount of defocus increases. On the other hand, the asterisk line shows that the ECBR between two defocused images of different sequences (but of the same class and same degree of defocus) remains fairly consistent. ECBs correspond to features that are invariant to imaging noise. However, such features are rare in images obtained from camera systems with a small DOF. Our experimental results show that, on average, the ECBR is only 9.86%, which is not sufficient for iris recognition. Hence, there is a need to relax the constraints for identifying stable bits in the feature codes for iris recognition.
Definition 3. Sensitivity: Each bit position in a feature code matrix can be de-
fined as an intrinsic feature if the value (1 or 0) is found to be dominant across the
entire sequence of images. It is defined as a sensitive feature otherwise. Sensitivity
is defined as the probability of a particular bit position in the feature code matrix
being a sensitive feature (in terms of %) in a sequence.
Definition 4. Stable Bit and Sensitive Bit: A bit position in the feature code
matrix is defined as a stable bit when its sensitivity is less than a predefined sen-
sitive threshold. On the other hand, if the sensitivity is higher than the predefined
Fig. 6. Iris feature codes from two different classes (1 and 9).
sensitive threshold, we denote the corresponding bit position as a sensitive bit.
Based on Def. 3 and Def. 4, Eq. (2) is used to determine whether the bit position
of the feature code matrix at coordinate (x, y) is a stable bit. Fk(i, j) is the feature
code matrix of the kth image in the sequence, n is the number of images in the
sequence, and t is the predefined sensitive threshold.
(x, y) ∈ {(i, j) | G(i, j) ≤ Ta or G(i, j) ≥ Tb},
where
G(i, j) = Σ_{k=1}^{n} Fk(i, j),  Ta = ⌊n × t⌋,  Tb = n − ⌊n × t⌋ + 1.  (2)
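Eq. (2) can be implemented as follows (a sketch; `codes` holds the n binary feature code matrices of one sequence):

```python
import numpy as np

def stable_bit_mask(codes, t=0.30):
    """Stable bits per Eq. (2): with G(i,j) the bit-sum over the n
    codes in a sequence, a position is stable when G <= floor(n*t)
    or G >= n - floor(n*t) + 1 (sensitive threshold t, e.g. 30%)."""
    stack = np.stack(codes)          # (n, rows, cols) binary matrices
    n = stack.shape[0]
    G = stack.sum(axis=0)
    Ta = int(np.floor(n * t))
    Tb = n - Ta + 1
    return (G <= Ta) | (G >= Tb)     # boolean mask of stable positions
```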
Based on Eq. (2), Fig. 6 shows the iris feature code consisting of stable and
sensitive bits for images in two classes (i.e. 1-1 and 1-2 are images from different
sequences in the same class, and 9-1 and 9-2 are images from different sequences of
another class). The sensitive threshold is set at 30%. The black regions correspond
to the stable bits while the white regions correspond to the sensitive bits. We can observe from Fig. 6 that the distributions of stable bits from different sequences in the same class are similar to each other, while the distributions of stable bits from different classes vary significantly. These observations demonstrate the feasibility of using stable bits in pattern matching to discriminate defocused iris images of different individuals.
3. Proposed Method 2: Identifying Stable Bits from a Single
Enrolled Image
In the previous section, we described a simple method to identify stable bits that correspond to features which are invariant to optical noise from a sequence of iris images. The method, however, requires a sequence of iris images to be captured with varying degrees of defocus during the enrollment phase. This is not a practical
approach especially for mobile phone users. In this section, we propose a method to
identify stable bits from a single enrolled image. The proposed method is based on
our analysis of the displacement of the feature phase vector in the complex plane
for varying degree of image defocus. We will show in the following section that this
method can lead to good recognition performance.
The proposed method for identifying the stable bits is based on the likelihood
that the corresponding phase vectors will move across the axis in the complex
plane when the images are subjected to varying degrees of defocus. Fig. 7 shows an example of the displacement of a phase vector in a sequence of 31 iris images with different degrees of defocus. It can be observed that the phase vector lies in the bottom-left quadrant of the complex plane for most of the defocused images and in other quadrants (such as the top left) for the remaining defocused images. Based on the phase coding process described in Sec. 2.3, the displacement of the phase vector from one quadrant to another will cause a change in the corresponding bit value of the feature code. In other words, we can deduce the stability of a feature bit in the iris code by analyzing the distribution of the corresponding phase vector under different degrees of image defocus. For example, in Fig. 7, we can observe that the real part of the phase vector has higher stability than the imaginary part. In particular, the sensitivity of the real part of the phase vector is 7%, while that of the imaginary part is 33%. A crude method for determining the stable bit in this example would be to choose the real part of the feature code as the stable bit. However, this method still requires the availability of a sequence of images with different degrees of defocus.
Based on the same example, Fig. 8 shows how the real and imaginary parts of the phase vector evolve with the degree of defocus. It can be observed that the
displacement of the phase vector of defocus images from the clear image (the 16th
[Scatter plot in the complex plane (Re and Im axes) showing the positions of a single phase vector across the images of a sequence.]
Fig. 7. Displacement of a complex phase vector in a sequence of iris images.
[Plot: real part of the phase vector (y-axis) against image number (x-axis).]
(a) Relationship between the displacement of the real part of the phase vector
with degree of defocus.
[Plot: imaginary part of the phase vector (y-axis) against image number (x-axis).]
(b) Relationship between the displacement of the imaginary part of the phase vector
with degree of defocus.
Fig. 8. Gradual displacement of the phase vectors with respect to the degree of defocus.
image) increases gradually with the degree of defocus. In particular, the distance
of displacement from the clear image for the real/imaginary parts increases until it
crosses the imaginary/real axis of the complex plane. At this point, the correspond-
ing bit value in the iris code will toggle. Therefore, based on a clear image, we can
deduce that the real/imaginary part of a phase vector for a particular position in iris
phase feature template is less likely to cross the axes, if its absolute distance from
the imaginary/real axis is large enough. In other words, we can identify the stable
bits of a single enrolled image based on the absolute distance of the real/imaginary
part of the corresponding phase vector to the imaginary/real axis.
Based on this, we devised the following method for identifying stable bits from
an enrolled image. We first compute the absolute distance of the real/imaginary
parts of the phase vectors of the enrolled image from the imaginary/real axis of the
complex plane, and sort the absolute distances in descending order. Based on a predefined stable bit extraction rate, we identify as stable the bits whose real/imaginary phase vector components have the largest absolute distances.
Eq. (3) describes the method to identify the stable bit at coordinate (x, y) that corresponds to the real/imaginary phase vector. P is the real/imaginary part of the phase vectors (of size 260×32), Fabs computes the absolute distance, Fvecsort is the function that sorts the absolute distances in descending order and rearranges the matrix into a vector, N is the number of bits, and T is the stable bit extraction rate.

(x, y) ∈ {(i, j) | Fabs(P(i, j)) ≥ A(Nstable)},
A = Fvecsort(Fabs(P)),
Nstable = ⌈N × T⌉.  (3)
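A sketch of this selection, assuming the descending-sort reading of Fvecsort (so that the ⌈N×T⌉ phase-vector components farthest from the quadrant boundaries are kept as stable):

```python
import numpy as np

def stable_bits_from_enrolled(P, T=0.70):
    """Stable-bit selection of Eq. (3) from a single enrolled image:
    P holds the real (or imaginary) parts of the phase vectors; the
    ceil(N*T) positions whose absolute distance from the quadrant
    boundary is largest are taken as stable, since their sign is the
    least likely to flip under defocus."""
    dist = np.abs(P)                        # F_abs
    N = dist.size
    n_stable = int(np.ceil(N * T))          # N_stable = ceil(N*T)
    threshold = np.sort(dist, axis=None)[::-1][n_stable - 1]
    return dist >= threshold                # boolean stable-bit mask
```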
4. Experimental Results
In this section, we will present experimental results based on the 15,500 images
in the iris image dataset (as discussed in Sec. 2.1) to demonstrate the feasibility
of the proposed approach for iris recognition which relies on the stable bits that
are identified using the methods discussed in this paper. The experiments were undertaken using Matlab on a 2.53-GHz PC with 3 GB RAM. We first evaluate the
recognition performance of the proposed method that is based on identifying the
stable bits from a sequence of images (Proposed Method 1 as discussed in Sec. 2.4)
and the proposed method that is based on identifying the stable bits from a single
enrolled image (Proposed Method 2 as discussed in Sec. 3). Next, we evaluate the
effects of changing the stable bit extraction rate on the recognition performance of
the proposed method. Finally, we will evaluate the recognition performance of the
proposed method when the images that are subjected to recognition are captured
within certain predefined range from the camera.
4.1. Recognition performance
In order to evaluate the recognition performance of the proposed methods, we have
implemented a conventional method based on Daugman’s approach (as described
in Sections 2.2 and 2.3) for iris recognition of defocused images that relies on the entire code representation (as opposed to using only the stable bits in the proposed methods)
for the purpose of comparison. In Proposed Method 1, we used a sensitive threshold of 30% to obtain the stable bits from one sequence of each class (the 1st sequence in each class). Hence, a total of 3100 images (i.e. 100×31) were used to obtain the stable bits. In Proposed Method 2, we used a stable bit extraction rate of 70% to obtain the stable bits from the clear image of each class (the 16th image in the first sequence of each class). Hence, 100 images were used to obtain the stable bits in Proposed Method 2. The experiments to evaluate the recognition performance of the proposed methods are based on the 12,400 images (i.e. 100×4×31) in the dataset that were not used for identifying the stable bits.
Fig. 9 shows the inter-class and intra-class Hamming distance of the images in
the dataset. Note that for the conventional method, the entire feature code matrix
is used to calculate the Hamming distance, while only the stable bits are used in the
proposed methods for calculating the Hamming distance. It can be observed from
[Histogram: percent (y-axis) against Hamming distance (x-axis) for intra-class and inter-class comparisons.]
(a) Conventional method.
[Histogram: percent (y-axis) against Hamming distance (x-axis) for intra-class and inter-class comparisons.]
(b) Proposed method 1.
[Histogram: percent (y-axis) against Hamming distance (x-axis) for intra-class and inter-class comparisons.]
(c) Proposed method 2.
Fig. 9. Distribution of inter- and intra-class Hamming distances for the conventional and proposed methods.
July 18, 2012 17:35 WSPC/INSTRUCTION FILE ws-ijprai
Iris Recognition of Defocused Images for Mobile Phones 15
Fig. 9, the intersection between the intra-class and inter-class distance distributions
of the proposed methods is smaller than that of the conventional method. It can
be seen in Table 1 that although both the average inter-class and intra-class
distances have been reduced in the proposed methods, the intra-class reduction is
significantly larger (e.g. an intra-class reduction of 0.0523 as compared to an
inter-class reduction of 0.0046 in Proposed Method 2). This results in a sharp
increase in the average difference between the inter-class and intra-class distances
in the proposed methods (e.g. 0.0915 for the conventional method as compared to
0.1392 for Proposed Method 2). As shown in Table 1, this leads to higher
recognition performance in the proposed methods. In particular, at a False
Acceptance Rate (FAR) of 0.5%, the False Rejection Rate (FRR) of the
conventional method and Proposed Method 2 is 65.86% and 39.52% respectively.
In addition, Proposed Method 2 has a recognition performance gain of about 2
times over the conventional method, as the Equal Error Rate (EER) of the
conventional and proposed method is 23.55% and 12.71% respectively. These
results demonstrate that the proposed methods, which rely on stable bits that are
tolerant to optical defocus, are capable of discriminating the iris features of
individuals from different classes.
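The stable-bit matching described above can be sketched as follows. The function name and array layout are illustrative assumptions rather than the authors' implementation, but the computation mirrors the fractional Hamming distance used in both the conventional method (full code) and the proposed methods (stable bits only):

```python
import numpy as np

def hamming_distance(code_a, code_b, stable_mask=None):
    """Fractional Hamming distance between two binary iris codes.

    code_a, code_b : 1-D boolean arrays of iris code bits.
    stable_mask    : optional boolean array; True marks the stable bit
                     positions identified at enrollment.  When given,
                     only those positions are compared (the proposed
                     methods); otherwise the entire code is used (the
                     conventional method).
    """
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    if stable_mask is not None:
        m = np.asarray(stable_mask, dtype=bool)
        a, b = a[m], b[m]
    # fraction of compared bit positions that disagree
    return np.count_nonzero(a ^ b) / a.size
```

Restricting the comparison to the stable positions is also what shortens the matching time reported in Section 4.2, since fewer bits are XORed and counted per comparison.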
It can be observed from Table 1 that the average difference between the inter-class
and intra-class distances in Proposed Method 1 is larger than in Proposed Method 2.
The recognition performance of Proposed Method 1 is correspondingly higher than
that of Proposed Method 2.
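The FRR-at-fixed-FAR and EER figures reported above can, in principle, be derived from the two distance distributions with a simple threshold sweep, as in the following sketch (not the authors' evaluation code; a comparison is taken as accepted when its distance falls at or below the threshold):

```python
import numpy as np

def frr_at_far(intra, inter, target_far=0.005):
    """FRR at the largest threshold whose FAR stays within target_far.

    intra : intra-class (genuine) Hamming distances
    inter : inter-class (impostor) Hamming distances
    """
    intra = np.asarray(intra, dtype=float)
    inter = np.asarray(inter, dtype=float)
    ts = np.unique(np.concatenate([intra, inter]))
    # keep thresholds at which the impostor acceptance rate is acceptable
    ok = [t for t in ts if np.mean(inter <= t) <= target_far]
    t = max(ok) if ok else 0.0
    return float(np.mean(intra > t))

def eer(intra, inter):
    """Equal Error Rate: operating point where FAR is closest to FRR."""
    intra = np.asarray(intra, dtype=float)
    inter = np.asarray(inter, dtype=float)
    ts = np.unique(np.concatenate([intra, inter]))
    fars = np.array([np.mean(inter <= t) for t in ts])
    frrs = np.array([np.mean(intra > t) for t in ts])
    i = np.argmin(np.abs(fars - frrs))
    return float((fars[i] + frrs[i]) / 2.0)
```

A smaller overlap between the two distributions, as in Fig. 9(b) and (c), directly yields a lower FRR at any fixed FAR and hence a lower EER.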
4.2. Execution time
We have also compared the registration and matching time of the proposed methods
with the conventional method. Registration time is the time taken to generate iris
code representation during the enrollment process. Matching time refers to the time
taken for recognition. Note that the time for image acquisition and pre-processing
is not considered as these are the same for all the methods considered. Table 1
Table 1. Comparison of recognition performance between conventional and proposed methods.

Evaluation Standard        | Conventional Method       | Proposed Method 1         | Proposed Method 2
                           | Intra-class   Inter-class | Intra-class   Inter-class | Intra-class   Inter-class
Average Hamming Distance   | 0.2448        0.3363      | 0.1792        0.3263      | 0.1925        0.3317
FRR (%) (FAR = 0.5%)       | 65.86                     | 29.78                     | 39.52
EER (%)                    | 23.55                     | 9.015                     | 12.71
Registration Time (ms)     | 282.65                    | 10095.17                  | 359.22
Recognition Time (ms)      | 13.90                     | 11.23                     | 10.06
shows that the registration time for the conventional method is the lowest. The
registration time for Proposed Method 1 is the highest as it requires a sequence
of images to obtain the stable bits. However, it is noteworthy that registration is
a one-time process and even then, the time taken by Proposed Method 1 is only
about 10 seconds.
The recognition times for the proposed methods are lower than that of the
conventional method as they do not require the entire iris code representation for
matching. In particular, Proposed Method 1 and Proposed Method 2 achieve
reductions in recognition time of 19.2% and 27.6% respectively when compared to
the conventional method.
From Table 1, it is evident that even though the recognition performance of
Proposed Method 1 is marginally better than Proposed Method 2 (less than 4%
improvement), Proposed Method 2 is a more feasible solution. In contrast to Pro-
posed Method 1, which requires a longer time to identify the stable bits, Proposed
Method 2 only requires a single enrolled image for identifying the stable bits, hence
resulting in significantly lower registration time. In addition, the recognition time
of Proposed Method 2 is lower than the recognition time of Proposed Method 1.
4.3. Effects of stable bit extraction rate on performance
The stable bit extraction rate T , which is used to determine whether a bit position
in the feature code matrix is a stable bit or not, is an important parameter as
it affects the recognition performance. Table 2 shows how the average recognition
performance (in terms of EER) changes when different stable bit extraction rates
are used in Proposed Method 2. In particular, we have used stable bit extraction
rates of 55%, 60%, 65%, 70% and 75% to obtain the stable bits and evaluated the
iris recognition performance for all the images in the dataset.
It can be observed that the recognition performance increases as the stable bit
extraction rate is increased from 55% to 65%. Thereafter, the recognition
performance decreases with further increases in the stable bit extraction rate. The
effect of the stable bit extraction rate on the recognition performance is expected,
as choosing a very small extraction rate will restrict the number of stable bits
available for recognition, which could lead to low inter-class variability. On the
other hand, a very large extraction rate will admit a higher number of bits that
are sensitive to imaging noise, resulting in large intra-class variability and hence
lower recognition performance. Table 2
Table 2. Recognition performance with varying stable bit extraction rate.

Performance Standard    | Stable Bit Extraction Rate (%)
                        |   55      60      65      70      75
EER (%)                 | 14.65   13.14   12.02   12.71   13.80
Recognition Time (ms)   |  7.69    8.51    9.46   10.06   10.88
also shows that the recognition time increases with the stable bit extraction rate.
This is due to the increase in the number of stable bits that are used for matching
during recognition when the stable bit extraction rate increases.
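As a rough illustration of how an extraction rate T might select stable bits, the sketch below ranks the bit positions of an enrolled image by the displacement of their complex filter responses from the real and imaginary axes (the criterion summarized in the conclusion) and keeps the top T%. The function name and the exact ranking heuristic are our assumptions, not the authors' implementation:

```python
import numpy as np

def select_stable_bits(responses, rate=0.70):
    """Select the fraction `rate` of bit positions whose complex filter
    responses lie farthest from the real/imaginary axes.

    responses : 1-D complex array of filter outputs for the enrolled
                clear image, one value per quantized bit pair.
    Returns a boolean mask over bit positions (True = stable).
    """
    r = np.asarray(responses)
    # A phase vector close to an axis sits near a quantization boundary,
    # so a small defocus-induced perturbation can flip its bit; the
    # displacement from the nearest axis measures robustness.
    displacement = np.minimum(np.abs(r.real), np.abs(r.imag))
    k = int(np.ceil(rate * r.size))
    keep = np.argsort(displacement)[::-1][:k]   # most displaced first
    mask = np.zeros(r.size, dtype=bool)
    mask[keep] = True
    return mask
```

Under this reading, a larger T keeps more positions (hence the longer matching time in Table 2), but the extra positions are precisely those closer to the axes and therefore more sensitive to imaging noise.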
In practice, we can also choose the stable bit extraction rate to obtain a good
trade-off between the recognition performance and the computational complexity
of the iris recognition process. For example, in a multi-biometric system for
mobile phones, where the recognition performance of a single biometric modality
is less crucial, we may choose a lower stable bit extraction rate (e.g. 60%), which
does not yield the best EER, in order to reduce the number of computations for
iris recognition (due to the smaller number of stable bits used for pattern
matching). This may also lead to notable power savings for the mobile phone.
4.4. Effects of sampling range on recognition performance
In order to increase the recognition performance in a non-intrusive iris recognition
system, the user can be guided to position himself within a certain image capture
distance from the camera. It is noteworthy that such a system should not impose
strict requirements on the user to position himself such that the image is captured
within the DOF of the camera. Instead, the system should provide a distance
range for image capture such that the entire process is comfortable for the user
and can be completed quickly. In this section, we evaluate the effects of varying
the distance range of image capture on the recognition performance. We have
chosen images under different sampling ranges in the iris image dataset to create
the test sets. The sixteen test sets are shown in Table 3, where each set contains
images from a different distance range of image capture.
We evaluated the recognition performance of each test set using the conventional
and proposed methods. It can be observed from Fig. 10 that the EER of all three
methods increases as the distance range of image capture increases. However, the
EER of the proposed methods is lower than that of the conventional method in all
cases. In addition, the EER of the conventional method increases more drastically
than that of the proposed methods. These results further justify the effectiveness
of using stable features to extend the DOF of an iris recognition system.
Table 3. Selection of the test sets.

Test Set No.                 |  1     2      3      4      5      6      7      8
Image No. in Sequence        | 16    15-17  14-18  13-19  12-20  11-21  10-22   9-23
Range of Image Capture (mm)  |  0     4      8     12     16     20     24     28

Test Set No.                 |  9    10     11     12     13     14     15     16
Image No. in Sequence        | 8-24  7-25   6-26   5-27   4-28   3-29   2-30   1-31
Range of Image Capture (mm)  | 32    36     40     44     48     52     56     60
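The construction in Table 3 can be expressed as follows, assuming a capture-position spacing of about 2 mm between consecutive images in a sequence (consistent with the 0 to 60 mm ranges over 31 images); the helper name is illustrative:

```python
def make_test_sets(n_sets=16, focus_index=16, step_mm=2.0):
    """Build the Table 3 test sets: set k keeps the images within
    (k-1) positions on either side of the in-focus image (the 16th of
    31), giving a capture range of 4*(k-1) mm at ~2 mm per position.
    """
    sets = []
    for k in range(1, n_sets + 1):
        lo = focus_index - (k - 1)
        hi = focus_index + (k - 1)
        sets.append({
            "set": k,
            "images": list(range(lo, hi + 1)),
            "range_mm": step_mm * (hi - lo),
        })
    return sets
```

Each wider set thus adds one more defocused image on either side of the in-focus image, which is why the EER in Fig. 10 degrades gracefully as the acquisition range grows.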
Fig. 10. Equal Error Rate (EER) versus Acquisition Range (AR = 0:4:60 mm) for
the conventional method, Proposed Method 1 and Proposed Method 2.
5. Conclusion
In this paper, we proposed a method to identify stable features in iris images that are
tolerant to imaging noise, which is introduced when the images are captured outside
the camera DOF. Our analysis reveals that the stable bits corresponding to the
stable features in the iris image can be identified based on the displacement of the
feature phase vector from the axes in the complex plane. This enabled us to devise
a method for identifying stable bits in a single enrolled image for iris recognition
of defocused images. Unlike existing methods, the proposed method lends itself
well to robust, cost-effective and power-efficient iris recognition systems for mobile
phones, as it requires neither special hardware nor time-consuming image
restoration algorithms. Experimental results demonstrate the superiority of the
proposed method over a conventional method that uses the entire feature code
representation for iris recognition. Future work includes exploring the benefits of
incorporating more accurate localization, feature extraction and matching in the
proposed method.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of
China under Grant No. 60672078 and the China Scholarship Council (CSC).
Bo Liu received her Bachelor's degree of Engineering from Shenyang University of
Technology (SUT), Shenyang, China, in 2006. Currently, she is a PhD candidate
in the School of Information Science and Engineering, SUT. She has also been a
joint PhD student in the Centre for High Performance Embedded Systems of
Nanyang Technological University (NTU), Singapore, from Sep. 2010 to Sep. 2011.
Her research interests include biometric identification, computer vision, image
processing and pattern recognition, as well as intelligent embedded systems and
their applications.
Siew-Kei Lam received the BASc (Hons.), MEng and PhD degrees in computer
engineering from Nanyang Technological University (NTU), Singapore. Since 1994,
he has been with NTU, where he is currently a Research Fellow with the Centre
for High Performance Embedded Systems and has worked on a number of
challenging projects that involved the porting of complex algorithms to VLSI. He
is also familiar with rapid prototyping and application-specific integrated-circuit
design flow methodologies. His research interests include embedded system design
algorithms and methodologies, algorithm-to-architecture translations, and
high-speed arithmetic units. He is a member of the IEEE.
Thambipillai Srikanthan joined Nanyang Technological University (NTU),
Singapore, in 1991. At present, he holds a full professorship and a joint
appointment as the director of the 100-strong Centre for High Performance
Embedded Systems (CHiPES). He founded CHiPES in 1998 and elevated it to a
university-level research centre in February 2000. He has published more than 250
technical papers. His research interests include design methodologies for complex
embedded systems, architectural translations of compute-intensive algorithms,
computer arithmetic, and high-speed techniques for image processing and dynamic
routing. He is a senior member of the IEEE.
Weiqi Yuan received his PhD degree from Northeast University (NEU), China, in
1997. He is currently a professor at Shenyang University of Technology (SUT) and
the dean of the School of Information Science and Engineering. He is the director
of the Computer Vision Group of SUT and also the director of the SUT-Texas
Instruments DSP Joint Laboratory. He has published more than 200 technical
papers. His main research interests include biometric identification, computer
vision, image processing and pattern recognition, and DSP-based image capture
and processing systems. He is a standing director of the China Instrument and
Control Society.