Comparative study of iris recognition system using WPNN and Gabor wavelet
Chapter-3
IRIS SEGMENTATION
This chapter summarizes the most common iris segmentation methods and discusses
their robustness when dealing with noisy images. Based on this, a more robust iris
segmentation method for noisy images is proposed. The chapter also discusses the
pre-processing, segmentation and normalization steps. The integro-differential
operator, Hough transform, geodesic active contour method and Canny edge
detection algorithm are explained, and the results obtained after implementation,
comparison and analysis of these segmentation methods using performance
parameters are discussed.
3.1 Introduction
Iris segmentation plays a key role in the performance of an iris recognition system.
This is because improper segmentation can lead to incorrect feature extraction from
less discriminative regions (e.g., sclera, eyelids, eyelashes, pupil, etc.), thereby
reducing the recognition performance.
The work in this thesis addresses the issue of iris image segmentation for
improved texture extraction, as it is reported that most match failures in an iris
recognition system result from inaccurate iris segmentation. At the same time, iris
segmentation is the most time-consuming step in the iris recognition system and
therefore becomes the bottleneck in real-time environments. Iris segmentation is a
difficult task that faces challenges such as specular reflection, poor contrast,
blurred images and occlusion.
In this chapter, for performing iris segmentation under challenging conditions, the
integro-differential operator, Hough transform, active contours and Canny edge
detector techniques are considered. The first two are classical curve-fitting
approaches that are computationally inexpensive. The two newly proposed techniques
combine contour fitting with curve evolution for performing iris segmentation on a
challenging database.
3.2 Challenges in Iris Segmentation
Iris segmentation refers to the process of automatically detecting the pupillary (inner)
and limbic (outer) boundaries of an iris in a given image. This process helps in
extracting features from the discriminative texture of the iris, while excluding the
surrounding regions. Segmentation of an iris image is a classical image processing
problem. Processing non-ideal iris images is a challenging task for the following
reasons:
• The iris is often partially occluded by eyelids, eyelashes, and shadows.
• The iris is sometimes occluded by specular reflection.
• The iris and the pupil are noncircular, and the shape varies depending on how the
image is captured.
• Other challenges in iris segmentation include defocus, motion blur and poor
contrast. These are taken care of by measuring the quality of the input iris image
and then proceeding with segmentation only for good-quality images.
• The noise in images is considered to be of the following types: iris obstruction
by eyelids or eyelashes, specular or lighting reflections, poorly focused images,
partial or out-of-frame iris images, and motion blur.
Two main challenges of iris segmentation of realistic eye images are addressed:
segmentation accuracy and processing speed.
The first proposed method improves iris segmentation accuracy by using
Geodesic Active Contours, at the expense of higher computational complexity.
The second proposed method is based on the Canny edge detector and aims primarily
at faster iris segmentation of noisier databases such as the UBIRIS database, with
sufficient segmentation accuracy compared with earlier reported work.
3.3 Existing Iris Segmentation Methods
A significant number of iris segmentation techniques have been proposed in the
literature. Two most popular techniques are based on using an integro-differential
operator and the Hough transform, respectively. The performance of an iris
segmentation technique is greatly dependent on its ability to precisely isolate the iris
from the other parts of the eye. Both of the above techniques rely on fitting
curves to the edges in the image. Such an approach works well with good-quality,
sharply focused iris images. However, under challenging conditions (e.g., non-
uniform illumination, motion blur, off-angle, etc.), the edge information may not be
reliable. Table 3.1 summarizes the different algorithms for iris segmentation proposed
by various researchers.
Table 3.1: Overview of prominent existing methods of iris segmentation
Author Iris segmentation techniques used
Daugman et al. [14] Integro-differential operator
Wildes et al. [23] Image intensity gradient and Hough transform
Ma et al. [34] Hough transform
Ma et al. [35] Gray level information, canny edge detection, and Hough transform
Masek [98] Canny edge detection, and Hough transform
Abhyankar et al. [55] Active shape model
Basit & Javed et al.[99] Maximization of the difference of intensities of radial direction
M. Vatsa et al.[30] Modified Mumford–Shah functional
K. Nguyen et al.[103] Shrinking and expanding active contour methods
The best known and most thoroughly examined iris segmentation method is that of
Daugman et al. [17], which uses integro-differential operators. These operators, a
variant of the Hough transform, act as circular edge detectors and have been used
to determine the inner and the outer boundaries of the iris, as well as the
elliptical boundaries of the lower and the upper eyelids.
The Hough transform is a standard computer vision algorithm that can be used
to determine the parameters of simple geometric objects, such as lines and circles,
present in an image. The circular Hough transform can be employed to deduce the
radius and centre coordinates of the pupil and iris regions. Firstly, an edge map is
generated by calculating the first derivatives of intensity values in an eye image and
then thresholding the result. From the edge map, votes are cast in Hough space for the
parameters of circles passing through each edge point.
The Hough transform and its variants were employed for iris segmentation by
B. Chouhan et al. [93], H. Proenca et al. [78], Liu et al. [58][86],
Ma et al. [34][35], Masek [98] and Wildes [23].
3.4 Typical Iris Recognition System
Figure 3.1 shows the block diagram of the typical iris recognition system used for
performance evaluation. The main focus is on the iris segmentation and feature extraction
method. For the comparison of the different proposed segmentation algorithms, all
other modules in the recognition system are kept the same and the recognition
performance is analyzed by changing only the segmentation method. Many open-access
databases are available for research use. The databases used here for evaluating
the performance of the iris recognition system are the CASIA Interval version 3
database and the UBIRIS database.
3.4.1 Pre-processing :
Pre-processing of the acquired iris image involves detection of specular
reflections; this stage must remove these noise elements, which can affect the
feature extraction process. 2-D median filtering and inpainting are used here if
specular reflections are present in the image.
3.4.2 Segmentation :
Two methods are proposed in this work: Geodesic Active Contours without edges and
a Canny edge detector method. Segmentation performance is also compared with the
earlier reported work using the Integro-Differential Operator and the Hough
transform.
3.4.3 Normalization :
Iris segmentation is followed by normalization to generate fixed-dimension feature
vectors to be used for recognition by comparison. The rubber sheet model proposed
by Daugman maps each point in the (x, y) domain to a pair of polar coordinates
(r, θ), resulting in a fixed-size unwrapped rectangular iris image.
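The rubber sheet mapping can be sketched in a few lines of NumPy. This is a minimal illustration rather than the exact implementation used in this work: it assumes the pupillary and limbic boundaries are circles given as (x, y, radius) triples, uses nearest-neighbour sampling, and the function name and grid resolutions are illustrative choices.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, radial_res=64, angular_res=256):
    """Daugman rubber-sheet model: map the annular iris region to a fixed-size
    rectangular (r, theta) grid by linearly interpolating between the
    pupillary and limbic boundaries at each angle."""
    xp, yp, rp = pupil   # pupil centre and radius
    xi, yi, ri = iris    # iris centre and radius
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)
    # boundary points at each angle
    x_in = xp + rp * np.cos(theta)
    y_in = yp + rp * np.sin(theta)
    x_out = xi + ri * np.cos(theta)
    y_out = yi + ri * np.sin(theta)
    # interpolate along each radius: r=0 on the pupil, r=1 on the limbus
    x = np.outer(1 - r, x_in) + np.outer(r, x_out)
    y = np.outer(1 - r, y_in) + np.outer(r, y_out)
    # nearest-neighbour sampling, clipped to the image bounds
    xs = np.clip(np.round(x).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(y).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]
```

Because the radial coordinate is normalized between the two boundaries, the output size is independent of pupil dilation, which is the point of the model.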
Fig.(3.1) : Block diagram of Iris Recognition system
3.4.4 Feature Encoding and Matching :
For accurate recognition results, the most discriminating information present in an
iris pattern must be extracted. Only the significant features of the iris must be
encoded so that comparisons between irises can be made. In order to study the
effect of iris segmentation on recognition accuracy, Log-Gabor filters are used
to extract the textural information (encoding) from the unwrapped iris. Further, the
feature vectors are compared using a similarity measure in the matching stage.
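A Masek-style encoding step can be sketched as follows: each row of the unwrapped iris is filtered with a 1-D Log-Gabor filter and the phase of the complex response is quantized to two bits, after which codes are compared by fractional Hamming distance. The wavelength and bandwidth values below are illustrative assumptions, and this sketch omits the noise mask used in practice.

```python
import numpy as np

def log_gabor_encode(unwrapped, wavelength=18.0, sigma_on_f=0.5):
    """Filter each row of the unwrapped iris with a 1-D Log-Gabor filter
    (built in the frequency domain, zero response at DC) and quantize the
    phase of the response into two bits via the signs of the real and
    imaginary parts."""
    rows, cols = unwrapped.shape
    f = np.fft.fftfreq(cols)              # normalized frequencies
    f0 = 1.0 / wavelength                 # filter centre frequency
    lg = np.zeros(cols)
    pos = f > 0
    lg[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) /
                     (2 * np.log(sigma_on_f) ** 2))
    resp = np.fft.ifft(np.fft.fft(unwrapped, axis=1) * lg, axis=1)
    return np.concatenate([resp.real > 0, resp.imag > 0], axis=1)

def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two binary iris codes."""
    return np.mean(code_a != code_b)
```

Identical codes give a distance of 0, while codes from different eyes cluster near 0.5, which is what makes the fractional Hamming distance a usable match score.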
3.5 Iris Databases
Different iris databases from different universities and institutes have been used
for testing the proposed methods. Two databases from the Chinese Academy of
Sciences, Institute of Automation (CASIA Iris Database Versions 1.0 and 3.0) [9],
one from the University of Bath (BATH), UK [10], the University of Beira Interior
(UBIRIS) database [75] and one from Multimedia University (MMU), Malaysia [11]
have been used for evaluation. Table 3.2 shows some of their attributes, such as
file format, number of classes and image dimensions.
Table 3.2: Some attributes of the Iris Databases

Sr. No. | Name of Database  | File format | Number of images | Number of classes | Images per class | Dimension (pixels)
1.      | CASIA Version 1.0 | BMP         | 756              | 108               | 7                | 280 × 320
2.      | CASIA Version 3.0 | JPG         | 2655             | 396               | 1-26             | 280 × 320
3.      | Bath              | BMP         | 1000             | 50                | 20               | 960 × 1280
4.      | MMU Version 1.0   | BMP         | 450              | 90                | 5                | 240 × 320 × 3
5.      | UBIRIS            | TIFF        | 1877             | 241               | 1-9              | 800 × 600
As neither the acquisition device nor the environment is the same for all
databases, different types of pupil images are present in them. Figure 3.2 shows a
set of sample images taken from the UBIRIS database, containing a combination of
poorly focused, good, occluded, motion-blurred and light-reflected images.
Analysis of the noise factors and image heterogeneity has been treated as the
most important consideration. The noise factors considered in iris images are:
iris obstruction by eyelids and eyelashes, lighting reflections, specular
reflections, poorly focused images, off-angle irises and motion-blurred images.
Each database considered here contains such images in its dataset.
Fig. (3.2): Sample images from the UBIRIS database (good quality, poor focus,
eyelids and eyelashes, extreme eyelid occlusion, light reflection, specular
reflection, out-of-iris, motion blur)
As can be seen from Table 3.3, UBIRIS and CASIA are the two databases with the
highest amount of noise. So, for further study, these databases are used to
evaluate the performance of the proposed segmentation algorithms.
Table 3.3: Average quantity of noise pixels within the iris
Iris Database Image Size Noise, %
BATH 1280 × 960 6.29
CASIA 320 × 280 12.72
MMU 320 × 240 7.83
UBIRIS 800 × 600 27.62
After identifying the types of noise that each available database contains, an
objective measure was adopted: the average quantity of noisy pixels per iris image
in each database [79]. This analysis allows us to conclude that the noisiest
database is the UBIRIS database. All the remaining databases contain fewer noisy
images, and their images have more homogeneous characteristics.
3.6 Pre-processing of Iris Image
Processing of the acquired iris image involves detection of specular reflections;
this stage must remove noise and elements that can affect the feature extraction
process, so that the extracted features are not corrupted. The operations used for
preprocessing are image binarization (thresholding), histogram equalization,
inpainting, morphological operators and 2-D median filtering.
Iris images of CASIA-Iris-Interval were captured with a self-developed close-up
camera using a circular NIR LED array. Because of this, there are six specular
reflection points in the iris image. These specular reflections affect pupil
segmentation and result in under-segmentation. Hence, if a specular reflection
(a bright spot in the image) is detected in the area surrounding the pupil, it is
inpainted using the surrounding information. Figure 3.3 shows the images before
and after specular reflection removal and inpainting. Inpainting is a process of
filling in the missing portions of an image (in our case, specular reflections) to
improve its integrity.
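The reflection-removal step can be sketched as follows, assuming specular highlights are simply the brightest pixels. Each detected bright pixel is replaced by the median of its non-bright neighbours, a crude stand-in for true inpainting; the threshold and window size are illustrative, not the values used in this work.

```python
import numpy as np

def remove_specular(img, bright_thresh=240, win=5):
    """Detect specular highlights by intensity thresholding and fill each
    bright pixel with the median of its (win x win) neighbourhood, ignoring
    other bright pixels in that neighbourhood."""
    out = img.astype(float)
    mask = img >= bright_thresh
    h, w = img.shape
    k = win // 2
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - k), min(h, y + k + 1)
        x0, x1 = max(0, x - k), min(w, x + k + 1)
        patch = img[y0:y1, x0:x1]
        valid = patch[patch < bright_thresh]   # exclude other highlights
        if valid.size:
            out[y, x] = np.median(valid)
    return out.astype(img.dtype)
```

For the small, isolated LED reflections described above, a neighbourhood median is usually enough; larger holes would need a proper inpainting scheme.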
Fig. (3.3): (a) Original image. (b) Image after preprocessing (S1011L07 from the CASIA Interval version 3 database)
3.7 Iris Segmentation Using Different Methods
Every iris recognition algorithm begins with iris segmentation. The following
important steps are involved in iris segmentation methods:
1. Finding the pupillary boundary of the iris
2. Finding the limbic boundary
3. Specular reflection removal, if any
4. Detecting and removing any superimposed occlusions of eyelashes, shadows or
reflections.
It is reported that most failures to match in an iris recognition system result
from inaccurate iris segmentation: even an effective feature extraction method
cannot obtain useful information from an iris image that is not segmented
accurately. A correct segmentation method therefore plays a vital role in the
performance of the iris recognition system.
We have also implemented the iris segmentation methods using the Integro-
Differential Operator and the Hough transform, starting from freely available
MATLAB code with appropriate modifications for images of the CASIA-Interval-V3
and UBIRIS databases.
3.7.1 Integro-Differential Operator :
John Daugman developed the fundamental algorithm for iris recognition. His method
[13] using integro-differential operators is the best known and most thoroughly
examined iris segmentation method. These operators, a variant of the Hough
transform, act as circular edge detectors and have been used to determine the
inner and the outer boundaries of the iris, as well as the elliptical boundaries
of the lower and the upper eyelids. An integro-differential operator can be
defined as:
max_(r, x₀, y₀) | G_σ(r) ∗ (∂/∂r) ∮_(r, x₀, y₀) I(x, y) / (2πr) ds |    (3.1)
where I(x, y) is the image; the operator searches over the image domain for the
maximum, in the blurred derivative with respect to increasing radius r, of the
normalized contour integral of I(x, y) along a circular arc ds of radius r and
centre (x₀, y₀). The symbol ∗ denotes convolution and G_σ(r) is a Gaussian filter
used as a smoothing function. The results are the inner and outer boundaries of
the iris. First, the inner boundary is localized, owing to the significant
contrast between the iris and pupil regions. Then the outer boundary is detected
using the same operator with a different radius range and parameters. The eyelids
can be detected in a similar fashion by
performing the integration over an elliptical boundary rather than a circular one [31].
The output of segmentation using integro-differential operator is shown in the
figure 3.4.
• Algorithm used for implementation of the integro-differential operator:
Step 1: Initialize rmin=95, rmax=105 & scaling=0.25.
Step 2: Coarse search for the iris center.
Step 3: Perform line integration.
Step 4: Carry out the differentiation.
Step 5: Obtain the blurred image using a Gaussian filter.
Step 6: Determine the maximum value of the blurred image for the coarse center coordinates and radius.
Step 7: Implement equation (3.1).
Step 8: Fine search around this roughly located center.
Step 9: Initialize rmin and rmax for the pupil.
Step 10: Repeat step 3 to step 6.
Step 11: Draw the circle using the iris center and radius.
Step 12: Draw the circle using the pupil center and radius.
Fig. (3.4) : Segmentation output using Integrodifferential operator of image from CASIA interval version 3 database
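The core of equation (3.1) can be sketched directly: sample the contour integral at increasing radii, differentiate with respect to r, smooth with a Gaussian, and take the maximum. This illustrative sketch assumes a known centre and searches over the radius only; the full algorithm above also performs the coarse and fine searches over (x₀, y₀).

```python
import numpy as np

def circular_integral(img, x0, y0, r, n=120):
    """Mean intensity along a circle of radius r centred at (x0, y0)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, x0, y0, rmin, rmax, sigma=2.0):
    """Daugman-style operator for a fixed centre: find the radius where the
    Gaussian-smoothed radial derivative of the contour integral is maximal."""
    radii = np.arange(rmin, rmax)
    means = np.array([circular_integral(img, x0, y0, r) for r in radii])
    deriv = np.abs(np.diff(means))                      # |d/dr| of the integral
    g = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)  # small Gaussian kernel
    g /= g.sum()
    blurred = np.convolve(deriv, g, mode='same')        # G_sigma(r) * d/dr
    return radii[np.argmax(blurred) + 1]                # best-fit radius
```

The maximum of the blurred derivative lands where the mean intensity jumps, i.e. at the circular boundary between pupil and iris or iris and sclera.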
3.7.2 Hough Transform :
The Hough transform is a standard computer vision algorithm that can be used to
determine the parameters of simple geometric objects, such as lines and circles,
present in an image. The circular Hough transform can be employed to deduce the
radius and centre coordinates of the pupil and iris regions. Firstly, an edge map is
generated by calculating the first derivatives of intensity values in an eye image and
then thresholding the result. From the edge map, votes are cast in Hough space for the
parameters of circles passing through each edge point. These parameters are the
centre coordinates x_c and y_c, and the radius r, which are able to define any
circle according to equation (3.2),

x_c² + y_c² − r² = 0    (3.2)
A maximum point in the Hough space will correspond to the radius and centre
coordinates of the circle best defined by the edge points. This involves first
employing Canny edge detection to generate an edge map. Gradients were biased in
the vertical direction for the outer iris/sclera boundary, as suggested by Wildes
et al. [23]. Vertical and horizontal gradients were weighted equally for the inner
iris/pupil boundary. A modified version of Canny edge detection (MATLAB function)
is implemented, which allows weighting of the gradients. The range of radius
values to search is set manually, based on the database used. For the CASIA V1
database, the iris radius ranges from 90 to 150 pixels, while the pupil radius
ranges from 28 to 75 pixels. For a more efficient and accurate circle detection
process, the Hough transform for the iris/sclera boundary is performed first, then the
Hough transform for the iris/pupil boundary was performed within the iris region,
instead of the whole eye region, as the pupil is always inside the iris region.
After the completion of this procedure, six parameters are saved: the radius and
the x and y centre coordinates of both circles. For eyelid detection, Canny edge
detection is used to create an edge map in which only horizontal gradient
information is used. The segmented iris image using the Hough transform is shown
in figure 3.5.
• Algorithm for Hough transform (Masek method) Segmentation:
Step 1: Initialize pupil radius =28 and iris radius =75 for CASIA database.
Step 2: Scale the image.
Step 3: Gaussian filtering.
Step 4: Edge map creation using Canny edge detection.
Step 5: Circular Hough transform for limbic boundary detection
Step 6: Circular Hough transform for pupillary boundary detection inside located iris.
Step 7: Linear Hough transform for eyelid detection.
Step 8: Display the segmented image.
Fig. (3.5) : Segmentation output using the Hough transform on image S1011L07 from the CASIA Interval version 3 database
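The voting procedure described above can be sketched as a brute-force circular Hough transform. This is an illustrative implementation with a full 3-D accumulator (real systems subsample angles or restrict the search region), assuming edge points are given as (row, column) pairs.

```python
import numpy as np

def circular_hough(edge_points, shape, rmin, rmax):
    """Vote in (xc, yc, r) space for circles passing through each edge point
    and return the accumulator cell with the most votes."""
    h, w = shape
    acc = np.zeros((h, w, rmax - rmin), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(range(rmin, rmax)):
            # candidate centres lie on a circle of radius r around the edge point
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            # np.add.at accumulates repeated (yc, xc) votes correctly
            np.add.at(acc, (yc[ok], xc[ok], ri), 1)
    yc_best, xc_best, ri_best = np.unravel_index(acc.argmax(), acc.shape)
    return xc_best, yc_best, rmin + ri_best
```

Restricting the pupil search to the already-located iris region, as the text describes, simply means passing a cropped edge map and a narrower radius range.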
3.8 Proposed Method 1:
Accurate iris segmentation using Geodesic Active Contour (GAC)
The active contour model, or snake, is defined as an energy-minimizing spline; the
snake's energy depends upon its shape and location within the image. Local minima
of this energy then correspond to desired image properties. Active contour models
may be used in image segmentation and understanding.
There are two main groups of deformable contour/surface models. Active contours
(snakes) belong to the parametric model family, as borders are represented in
parametric form. The second family, geometric deformable models, overcomes the
problems with snakes by representing evolving surfaces through partial
differential equations. The main feature separating geometric deformable models
from parametric ones is that the curves are evolved using only geometric
computations, independent of any parameterization: the process is implicit [32].
This approach is based on the relation between active contours and the
computation of geodesics (minimal length curves). The technique is to evolve the
contour from inside the iris under the influence of geometric measures of the iris
image. Some features of the GAC are:
• The GAC scheme is used for the detection of object boundaries.
• It is extensively used in medical image segmentation.
• Contour evolution is based on image content and curve regularity.
• Evolution terminates when the contour encounters the “boundary” of the object.
• It is able to detect a boundary even if gaps exist in the boundary.
• GACs can split and merge, which helps them cope with local minima.
The iris segmentation procedure using GAC can be divided mainly into two
steps: I. Pupil segmentation, and II Iris segmentation.
3.8.1 Pupil Segmentation
For the detection of the pupillary boundary, the eye image is first smoothed using
a 2-D median filter. The minimum pixel value (Min) of that image is then
determined, and the image is binarized using a threshold value of Min + 30. As
expected, dark regions of the eye other than the pupil (e.g., eyelashes) also fall
below this threshold. A 2-D median filter is then applied to the binary image to
remove the comparatively smaller regions associated with the eyelashes.
Based on the median-filtered binary image, the exterior boundaries of all the
remaining objects are traced. Generally, the largest boundary among the residual
regions of the eye corresponds to the pupil. However, when the pupil is occluded,
it is very likely that the boundary of a detected region corresponding to the
eyelashes is larger than that of the pupil, so regions with diameters more than
half the image size are not considered. Since the pupil cannot be assumed to be
circular, an ellipse-fitting procedure is carried out on all detected regions;
when the major and minor axes of the ellipse are equal, the detected region
reduces to a circle. Finally, if a specular reflection (bright spot) is noticed in
the neighborhood of the pupil, it is inpainted using the surrounding information,
as explained in the preprocessing. The final result of pupil segmentation is shown
in figure 3.6.
Fig. (3.6) : Pupil Segmentation of image S1011L07 from CASIA interval version 3 database
• Algorithm for Pupil Segmentation
Step 1: Input eye image.
Step 2: Median filtering of eye image
Step 3: Use morphological operators.
Step 4: Perform thresholding using threshold (30+minimum) value
Step 5: Use median filtering.
Step 6: Find out centroid, major axis & minor axis values for pupil center and radius.
Step 7: Display segmented image.
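The pupil-detection steps above can be sketched as follows: threshold at Min + 30, find the connected dark regions, and keep the largest. For brevity this sketch returns the centroid and an equivalent-circle radius rather than a full ellipse fit; the function name, 4-connectivity, and the assumption that at least one dark pixel exists are illustrative choices.

```python
import numpy as np
from collections import deque

def largest_dark_region(img, offset=30):
    """Binarize at min(img) + offset and return the centroid (cx, cy) and the
    equivalent-circle radius of the largest 4-connected dark region."""
    mask = img <= int(img.min()) + offset
    h, w = img.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])   # BFS flood fill of one component
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(comp) > len(best):
            best = comp
    ys, xs = zip(*best)
    cx, cy = float(np.mean(xs)), float(np.mean(ys))
    radius = float(np.sqrt(len(best) / np.pi))   # area -> equivalent radius
    return cx, cy, radius
```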
3.8.2 Iris Segmentation :
For detection of the limbic boundary of the iris, a novel method based on a
level-set representation of the Geodesic Active Contour (GAC) model is used. Let
γ(t) be the curve that has to gravitate toward the boundary of the object at a
particular time t, as shown in Figure 3.7; the time t corresponds to the iteration
number. Let ψ be a function defined as a signed distance function from the curve
γ(t). Thus, ψ(x, y) is the distance of the point (x, y) to the curve γ(t):

ψ(x, y) = 0  if (x, y) is on the curve;
ψ(x, y) < 0  if (x, y) is inside the curve;
ψ(x, y) > 0  if (x, y) is outside the curve.    (3.3)
Fig. (3.7): Curveγ evolving towards the boundary of the object.
ψ has the same dimensions as the image I(x, y) that is to be segmented. The curve
γ(t) is a level set of the function ψ. Level sets are the sets of all points in ψ
where ψ equals some constant; thus ψ = 0 is the zeroth level set, ψ = 1 is the
first level set, and so on. ψ is the implicit representation of the curve γ(t) and
is called the embedding function, since it embeds the evolution of γ(t), as
illustrated in figure 3.8.
The embedding function evolves under the influence of image gradients and region
characteristics so that the curve γ(t) approaches the boundary of the object.
Thus, instead of evolving the parametric curve γ(t) directly (the Lagrangian
approach used in snakes), the embedding function itself is evolved. In this
implementation, the initial curve γ(t) is assumed to be a circle of radius r just
beyond the pupillary boundary. Let the curve γ(t) be the zeroth level set of the
embedding function.
Fig. (3.8): Implicit representation of ψ for the curve γ(t): x² + y² = 1.
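A circular initial contour can be embedded as a signed distance function (the cone of figure 3.10) in one line of NumPy, negative inside the curve and positive outside, matching equation (3.3). The function name is illustrative.

```python
import numpy as np

def signed_distance_circle(shape, cx, cy, r):
    """Initial embedding function psi: signed distance from each pixel to a
    circle of radius r centred at (cx, cy). Negative inside, positive
    outside, zero on the curve (the zeroth level set)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2) - r
```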
This implies that dψ/dt = 0. By the chain rule,

dψ/dt = (∂ψ/∂x)(dx/dt) + (∂ψ/∂y)(dy/dt) + ∂ψ/∂t    (3.4)

i.e.,

∂ψ/∂t = −∇ψ · γ′(t)    (3.5)
Splitting γ′(t) into its normal (N(t)) and tangential (T(t)) components,

∂ψ/∂t = −∇ψ · (v_N N(t) + v_T T(t))    (3.6)
Now, since ∇ψ is perpendicular to the tangent to γ(t),

∂ψ/∂t = −∇ψ · (v_N N(t))    (3.7)
The normal component is given by

N = ∇ψ / |∇ψ|    (3.8)

Substituting this in equation (3.7),

∂ψ/∂t = −v_N |∇ψ|    (3.9)
Let v_N be a function of the curvature κ of the curve, a stopping function K (to
stop the evolution of the curve) and an inflation force c (to evolve the curve in
the outward direction), such that

∂ψ/∂t = −K ( div(∇ψ/|∇ψ|) + c ) |∇ψ|    (3.10)

Thus, the evolution equation for ψ_t such that γ(t) remains the zeroth level set
is given by

ψ_t = −K(c + εκ)|∇ψ| + ∇ψ · ∇K    (3.11)
where K, the stopping term for the evolution, is an image-dependent force used to
decelerate the evolution near the boundaries; c is the velocity of the evolution;
ε indicates the degree of smoothness of the level sets; and κ is the curvature of
the level sets, computed as
κ = ( ψ_xx ψ_y² − 2 ψ_x ψ_y ψ_xy + ψ_yy ψ_x² ) / ( ψ_x² + ψ_y² )^(3/2)    (3.12)
where ψ_x is the first-order gradient of ψ in the x direction; ψ_y is the gradient
in the y direction; ψ_xx is the second-order gradient in the x direction; ψ_yy is
the second-order gradient in the y direction; and ψ_xy is the second-order
gradient first in the x direction and then in the y direction. Equation (3.13) is
the level-set representation of the geodesic active contour model; it means that
the level set C of ψ evolves according to

C_t = K(c + εκ)N − (∇K · N)N    (3.13)
where N is the normal to the curve. The first term, κN, provides the smoothing
constraint on the level sets by reducing their total curvature. The second term,
cN, acts like a balloon force, pushing the curve outward towards the object
boundary. The goal of the stopping function is to slow down the evolution when it
reaches the boundaries.
However, the evolution of the curve would terminate only when K = 0, i.e., near an
ideal edge. In most images the gradient values differ along the edge, thus
necessitating different K values. To circumvent this issue, the third, geodesic
term, (∇K · N), is necessary, so that the curve is attracted towards the
boundaries (∇K points toward the middle of the boundary). This term makes it
possible to terminate the evolution process even if (a) the stopping function has
different values along the edges, and (b) gaps are present in the stopping
function. The stopping term used for the evolution of level sets is given by
K(x, y) = 1 / ( 1 + ( |∇(G(x, y) ∗ I(x, y))| / κ )^α )    (3.14)
where I(x, y) is the iris image and κ and α are constants. As the equation shows,
K(x, y) is not a function of t. The stopping function is illustrated in
figure 3.9.
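The stopping term of equation (3.14) can be sketched as follows. The Gaussian is applied separably along rows and columns, and the constants κ, α and σ below are illustrative choices, not the values used in this work.

```python
import numpy as np

def stopping_function(img, kappa=10.0, alpha=2.0, sigma=1.5):
    """Stopping term K(x, y) = 1 / (1 + (|grad(G * I)| / kappa)^alpha):
    close to 1 in flat regions, close to 0 near strong edges."""
    # build a small discrete Gaussian kernel
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    g /= g.sum()
    # separable smoothing: convolve columns, then rows
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'),
                             0, img.astype(float))
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, sm)
    gy, gx = np.gradient(sm)
    return 1.0 / (1.0 + (np.hypot(gx, gy) / kappa) ** alpha)
```

Because K multiplies the advection and curvature terms in the evolution equation, values near 0 along strong edges are what halt the contour at the limbic boundary.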
Fig. (3.9): Stopping function K. (a) Original image. (b) Stopping function K of
image S1011L07 from the CASIA Interval version 3 database.
Pupil segmentation is done prior to segmenting the iris; this ensures that the
evolving level set is not terminated by the edges of the pupillary boundary. A
contour is first initialized near the pupil, and the embedding function ψ over the
image domain is initialized to a signed distance function from γ(t = 0), which
looks like a cone, as shown in figure 3.10.
Fig. (3.10): Initial contour (zeroth level set). (a) Iris image with the initial
contour (zeroth level set). (b) Mesh plot of the signed distance function ψ (the
X and Y axes correspond to the size of the iris image; the Z axis represents the
different level sets).
Discretizing equation (3.11) leads to the following expression:

( ψ_{i,j}^{t+1} − ψ_{i,j}^t ) / Δt = −c K′_{i,j} |∇ψ_{i,j}^t| − K′_{i,j} ( ε κ_{i,j}^t ) |∇ψ_{i,j}^t| + ∇ψ_{i,j}^t · ∇K_{i,j}^t    (3.15)
where Δt is the time step (Δt = 0.05). The first term on the right-hand side,
c K′_{i,j} |∇ψ_{i,j}^t|, is the velocity (advection) term and, in the case of
limbic boundary segmentation, acts as an inflation force. This term can lead to
singularities, so upwind finite differences are used. The upwind scheme
approximates |∇ψ| by A, where

A = [ max(D^{−x} ψ_{i,j}, 0)² + min(D^{+x} ψ_{i,j}, 0)² + max(D^{−y} ψ_{i,j}, 0)² + min(D^{+y} ψ_{i,j}, 0)² ]^{1/2}    (3.16)
where D^{−x} ψ is the first-order backward difference of ψ in the x direction;
D^{+x} ψ is the first-order forward difference of ψ in the x direction; D^{−y} ψ
is the first-order backward difference of ψ in the y direction; and D^{+y} ψ is
the first-order forward difference of ψ in the y direction. The second term,
K′_{i,j} ( ε κ_{i,j}^t ) |∇ψ_{i,j}^t|, is the curvature-based smoothing term and
can be discretized using central differences (c = 0.65 and ε = 1 for all iris
images). The third, geodesic term, ∇ψ_{i,j}^t · ∇K_{i,j}^t, is also discretized
using central differences.
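The upwind approximation of equation (3.16) can be written directly with shifted arrays. This sketch uses wrap-around shifts, so the boundary rows and columns are not meaningful, and it assumes the outward-evolution (c > 0) case.

```python
import numpy as np

def upwind_gradient(psi):
    """Upwind approximation A of |grad psi| (eq. 3.16): backward differences
    enter through max(., 0) and forward differences through min(., 0)."""
    dxm = psi - np.roll(psi, 1, axis=1)    # D^{-x}: backward difference in x
    dxp = np.roll(psi, -1, axis=1) - psi   # D^{+x}: forward difference in x
    dym = psi - np.roll(psi, 1, axis=0)    # D^{-y}: backward difference in y
    dyp = np.roll(psi, -1, axis=0) - psi   # D^{+y}: forward difference in y
    return np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                   + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
```

Choosing the difference direction by sign is what prevents the advection term from amplifying the singularities mentioned above.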
After evolving the embedding function according to (3.15), the curve grows until
it satisfies the stopping criterion defined by the stopping function K. At times,
however, the contour continues to evolve in a local region of the iris where the
stopping criterion is not strong. This can cause over-evolution of the contour; to
avoid it, we minimize the thin-plate-spline energy of the contour.
Thus, if all points on the contour lie on a circle, then the thin plate spline
energy will be zero; however, if the contour starts evolving non-uniformly, the thin
plate spline energy to fit the contour through these points will start increasing. By
computing the difference in energy between two successive contours, the evolution
scheme can be regulated. If the difference between the contours is less than a
threshold (indicating that the contour evolution has stopped at most places), then the
contour evolution process is stopped. Figure 3.11 shows the evolution of GAC during
iris segmentation process.
Fig.(3.11): Evolution of the GAC during iris segmentation (Image S1011L07 from CASIA Interval version 3 database): (a) Iris image with initial contour (zeroth level set), (b) Contour after 150 iterations, (c) Contour after 500 iterations, (d) Final contour after 1000 iterations.
One important feature of GACs is their ability to handle “splitting and
merging” boundaries. This is especially important in the case of iris segmentation
since the radial fibers may be thick in some portions of the iris, or the crypts present
in the ciliary region may be unusually dark, giving rise to prominent edges in the
stopping function. If the segmentation technique is based on parametric curves, then
the evolution of the curve might terminate at these local minima. However, the
stopping criterion for the evolution of GACs is image independent and does not take
into account the amount of edge details present in an image. Thus, if the iris edge
details are weak, the contour evolution may not stop at the desired iris boundary
leading to an over segmentation of the iris. Over segmentation can be avoided by
developing an adaptive stopping criterion for the evolution of the GACs.
Estimation of radius :
The extracted contour is used to create the binary mask that is used for the matching
of the iris codes. To normalize the iris and convert it to a rectangular entity, its radius
and the corresponding center coordinates have to be estimated. If the occlusion due to
the upper or lower eyelids is large, then a circle that fits all the points on the extracted
contour will lie within the actual boundary of the iris. Thus, only those points on the
contour lying on the boundary of the iris and sclera (as opposed to the iris and the
eyelids) should be used to estimate the radius and center of the iris.
To ensure this, six points at angles of [−30°, 0°, 30°, 150°, 180°, 210°] with
respect to the horizontal axis are chosen from the extracted contour and their mean
distance from the center of the pupil is calculated. This value is used as the
approximate radius (R) of the iris. A circle is next fitted through all the points on the
contour that are within a distance of R±20 pixels from the center of the pupil. The
center and radius of such a circle is the center (x, y) and the radius R of the iris.
Figure 3.12 illustrates the estimated radius by this approach.
Fig. (3.12) : Accurate estimation of radius of image S1011L07 from CASIA
Interval version 3 database.
When the iris is detected in the corner of the eye, all six points chosen to
estimate the approximate iris radius may not lie on the iris boundary; instead, some
may lie on the eyelid. Since the eyelid boundary is closer to the center of the pupil,
the approximate iris radius is then smaller than the actual radius.
• Algorithm for GAC segmentation:
• Stopping function:
Step 1: Input σ from user [range for σ = 2–4]
Step 2: Input k from user [range for k = 0.5–2]
Step 3: Input α from user [α = 8]
Step 4: Filter the image with a Gaussian filter G(x, y) of scale σ.
Step 5: Find the gradient of the smoothed image.
Step 6: Implement the equation for the stopping function:

Κ(x, y) = 1 / ( 1 + ( |∇(G(x, y) ∗ I(x, y))| / k )^α )
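The stopping-function steps above can be sketched with NumPy alone as follows; the separable Gaussian kernel, the kernel radius of 3σ, and the function name are our own implementation choices, and the default parameters follow the ranges quoted in the steps.

```python
import numpy as np

def stopping_function(image, sigma=2.0, k=1.0, alpha=8):
    """Κ(x, y) = 1 / (1 + (|grad(G * I)| / k)^alpha) -- a sketch."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    img = image.astype(float)
    # Separable Gaussian convolution: smooth rows, then columns
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, img)
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, smoothed)
    gy, gx = np.gradient(smoothed)          # central-difference gradient
    grad_mag = np.hypot(gx, gy)
    return 1.0 / (1.0 + (grad_mag / k) ** alpha)
```

On a step edge the function drops towards 0 (stopping the contour), while in flat regions it stays close to 1, which is the behaviour the evolution relies on.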
• Generating ψ , zeroth level set:
Step 1: Input segmented pupil image.
Step 2: Create pupil mask having radius greater than pupil radius.
Step 3: Generate ψ according to

ψ(x, y) = 0 if (x, y) is on the curve,
ψ(x, y) < 0 if (x, y) is inside the curve,
ψ(x, y) > 0 if (x, y) is outside the curve.
Step 4: Display it on input eye image.
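A minimal sketch of generating this zeroth level set as a signed distance function to a circular initial contour (the function name and circle parametrization are illustrative):

```python
import numpy as np

def init_level_set(shape, cx, cy, r):
    """Signed distance function psi for a circle of radius r centred at (cx, cy):
    negative inside the curve, zero on it, positive outside."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return np.sqrt((x - cx)**2 + (y - cy)**2) - r
```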
• Perform segmentation:
Step 1: Maximum iterations = Input from user.[range=1000-2000]
Step 2: ε = Input from user =1
Step 3: Propagation= Input from user =1(constant)
Step 4: Initialize ψ
Step 5: Evolve ψ according to discrete implementation equation,
ψ_{i,j}^{t+1} = ψ_{i,j}^t − Δt [ c Κ′_{i,j} |∇ψ_{i,j}^t| + ε Κ′_{i,j} κ_{i,j}^t |∇ψ_{i,j}^t| − ∇ψ_{i,j}^t · ∇Κ′_{i,j} ]
Step 6: Increment t by Δt, chosen according to the Courant–Friedrichs–Lewy (CFL) condition.
Step 7: Check the number of iterations and convergence.
Step 8: If the number of iterations < maximum iterations and convergence is not reached,
go back to Step 5.
Step 9: Else Exit
Step10: Display final contour.
• Algorithm for estimation of radius:
Step 1: Create mask by binarization of final contour.
Step 2: Calculate the angle of each point on the final extracted contour with respect to the pupil center.
Step 3: If the calculated angle is greater than 179° and less than 182°, set angle = 180°.
Step 4: Else, if the angle is greater than 208° and less than or equal to 212°, set angle = 210°.
Step 5: Else, if the angle is greater than 150° and less than or equal to 152°, set angle = 150°.
Step 6: Else, if the angle is greater than 30° and less than or equal to 32°, set angle = 30°.
Step 7: Else, if the angle is greater than −1° and less than or equal to 1°, set angle = 0°.
Step 8: Else, if the angle is greater than −31° and less than or equal to −29°, set angle = −30°.
Step 9: Calculate the Euclidean distance from the pupil center for each of these angles.
Step 10: Take the average of these six distances as the approximate radius R.
Step 11: Fit a circle through the contour points lying within R ± 20 pixels of the pupil center.
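The radius-estimation procedure above can be sketched roughly as follows; the ±2° tolerance around each reference angle stands in for the angle checks above, and all names are illustrative:

```python
import numpy as np

def estimate_iris_radius(contour, pupil_center, tol_deg=2.0):
    """Approximate iris radius from a GAC contour (sketch).
    contour      : (N, 2) array of (x, y) contour points
    pupil_center : (cx, cy) pupil centre
    Distances of points near the six reference angles are averaged."""
    cx, cy = pupil_center
    dx = contour[:, 0] - cx
    dy = contour[:, 1] - cy
    angles = np.degrees(np.arctan2(dy, dx))      # in (-180, 180]
    dists = np.hypot(dx, dy)
    radii = []
    for ref in (-30, 0, 30, 150, 180, -150):     # 210 deg == -150 deg here
        diff = (angles - ref + 180) % 360 - 180  # wrap-aware angular difference
        sel = np.abs(diff) <= tol_deg
        if sel.any():
            radii.append(dists[sel].mean())
    return float(np.mean(radii))
```

The returned value corresponds to the approximate radius R; the circle fit through contour points within R ± 20 pixels would then refine the centre and radius.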
The output of the segmentation process is a binary mask that indicates the iris
and non-iris pixels in the image. After segmentation by any one method described in
this chapter, the obtained mask is normalized.
3.8.3 Normalization of Iris :
Once the iris region is successfully segmented from an eye image, the next step is to
transform the iris region so that it has fixed dimensions for doing the comparisons of
templates. The dimensional variations between eye images are mostly due to the
stretching of the iris caused by pupil dilation from varying levels of illumination.
Other causes of variation include varying imaging distance, rotation of the camera,
head tilt, and rotation of the eye within the eye socket. The normalization process
produces iris regions, which have the same constant dimensions, so that two
photographs of the same iris under different conditions will have characteristic
features at the same spatial location. Another point of note is that the pupil region is
not always concentric within the iris region, and is usually slightly nasal. This must be
taken into account if trying to normalize the ‘doughnut’ shaped iris region to have
constant radius [27].
Daugman’s Rubber Sheet Model :
The homogenous rubber sheet model devised by Daugman remaps each point within
the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1]
and θ is the angle in [0, 2π]. The remapping of the iris region from (x, y) Cartesian
coordinates to the normalized non-concentric polar representation, as shown in
figure 3.13, can be given as:

I( x(r, θ), y(r, θ) ) → I(r, θ)    (3.17)
x(r, θ) = (1 − r) x_p(θ) + r x_l(θ)    (3.18)
y(r, θ) = (1 − r) y_p(θ) + r y_l(θ)    (3.19)
Fig. (3.13) : Daugman’s rubber sheet model.
where, I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r,
θ) are the corresponding normalized polar coordinates, and x_p, y_p and x_l, y_l are the
coordinates of the pupil and iris boundaries along the θ direction. The centre of the
pupil was considered as the reference point, and radial vectors pass through the iris
region, as shown in Figure 3.14.
The number of radial lines going around the iris region is defined as the
angular resolution. Since the pupil can be non-concentric to the iris, a remapping
formula is needed to rescale points depending on the angle around the circle. This is
given by,
r′ = √α β ± √( αβ² − α + r_I² )    (3.20)

with α = o_x² + o_y² ,

β = cos( π − arctan(o_y / o_x) − θ )    (3.21)
where the displacement of the centre of the pupil relative to the centre of the iris
is given by o_x, o_y; r′ is the distance between the edge of the pupil and the edge of the
iris at an angle θ around the region; and r_I is the radius of the iris. The remapping
formula first gives the radius of the iris region ‘doughnut’ as a function of the angle θ.
Fig.(3.14) : (a) Pupil displacement relative to the iris centre, (b) Sketch of the normalization process with radial resolution of 10 pixels and angular resolution of 40 pixels.
A constant number of points are chosen along each radial line, so that a
constant number of radial data points are taken, irrespective of how narrow or wide
the radius is at a particular angle. Another 2D array was created for marking
reflections, eyelashes, and eyelids detected in the segmentation stage. In order to
prevent non-iris region data from corrupting the normalized representation, data
points which occur along the pupil border or the iris border are discarded [27].
Normalized iris image is shown in the figure 3.15. Figure 3.16 shows the sample
outputs of the iris segmentation and normalization for UBIRIS database images.
• Algorithm For Normalization
Step 1: Initialize radial resolution=20 and angular resolution=240.
Step 2: Calculate the displacement of the pupil center from the iris center and, for each θ, the iris boundary distance

r′ = √α β ± √( αβ² − α + r_I² )

with α = o_x² + o_y² and β = cos( π − arctan(o_y / o_x) − θ ).
Step 3: Exclude values at the pupil–iris border and the iris–sclera border.
Step 4: Calculate Cartesian location of each data point around the circular iris by x = r cosθ and y = r sinθ
Step 5: Extract intensity values into the normalized polar representation through interpolation.
Step 6: Store it in polar array. Create noise array with location of NaN value in polar array
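A compact sketch of the rubber-sheet remapping of equations (3.17)–(3.19) is shown below. Nearest-neighbour sampling is used here for brevity, whereas the algorithm above uses interpolation; the circle parametrization and function name are illustrative.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, radial_res=20, angular_res=240):
    """Daugman rubber-sheet normalization (sketch).
    pupil, iris : (cx, cy, r) circles for the two boundaries.
    Returns a radial_res x angular_res polar iris image."""
    xp, yp, rp = pupil
    xl, yl, rl = iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)[:, None]
    # boundary points along each angle
    bxp, byp = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
    bxl, byl = xl + rl * np.cos(theta), yl + rl * np.sin(theta)
    # x(r, theta) = (1 - r) x_p(theta) + r x_l(theta), likewise for y
    x = (1 - r) * bxp + r * bxl
    y = (1 - r) * byp + r * byl
    xi = np.clip(np.rint(x).astype(int), 0, image.shape[1] - 1)
    yi = np.clip(np.rint(y).astype(int), 0, image.shape[0] - 1)
    return image[yi, xi]
```

Each column of the output corresponds to one radial line; each row corresponds to one value of r between the pupil and iris boundaries.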
Fig. (3.15) : Normalization output: (a) Unwrapped iris, (b) with the zones of analysis on the image S1011L07 from CASIA interval version 3 database.
Fig. (3.16) : Sample outputs of iris segmentation followed by normalization for UBIRIS database images.
3.8.4 Isolation of Eyelids and eyelashes :
Eyelids and eyelashes are isolated from the detected iris image and treated as
noise because they degrade the performance of the system. Detected eyelashes are
treated as belonging to two types: separable eyelashes, which are isolated in the
image, and multiple eyelashes, which are bunched together and overlap in the
eye image. Eyelids were isolated by first fitting a line to the upper and lower eyelid
using the linear Hough transform. A second horizontal line is drawn, which intersects
with the first line at the iris edge that is closest to the pupil. This is illustrated in
figure 3.17 and is done for both the top and bottom eyelids. The second horizontal
line allows maximum isolation of the eyelid regions.
Fig. (3.17) : Detection of eyelid regions in Segmentation process
A linear Hough transform has an advantage over its parabolic version in that there
are fewer parameters to deduce, making the process less computationally demanding.
For isolating eyelashes, in the CASIA database a simple thresholding technique was
used, since analysis reveals that eyelashes are quite dark when compared with the rest
of the eye image.
3.8.5 Generating polar Iris image and its mask:
After detecting the eyelids and eyelashes, a mask based on the eyelids and eyelashes
is used to cover the noisy area and extract the iris without noise as shown in
figure 3.18.
Fig. (3.18) : Iris in Polar Coordinates
A mask for this polar iris image is generated using the masked iris image as
shown in figure 3.19 and the process is similar to polar iris image generation. Further
the template is generated by encoding the textural features. We propose the use of
a 1D Log-Gabor wavelet for generating the iris texture template. This is discussed in the next section.
Figure (3.19) : Mask Corresponding to figure 3.18
3.8.6 Feature Extraction :
For accurate recognition result, the most discriminating information present in an iris
pattern must be extracted. Only the significant features of the iris must be encoded so
that comparisons between irises can be made. Several techniques have been proposed
for feature extraction, some of them are Gabor filters, wavelet transform, Laplacian of
Gaussian filter, key local variations, Hilbert transform, discrete cosine transform, etc.
Log Gabor filters : An alternative to the Gabor function is the log-Gabor function
proposed by Field in 1987. Field suggests that natural images are better coded by
filters that have Gaussian transfer functions when viewed on the logarithmic
frequency scale. The frequency response of a Log-Gabor filter is given as:

G(f) = exp{ −[log(f/f0)]² / (2 [log(σ/f0)]²) }    (3.22)

Here, f0 represents the centre frequency, and σ gives the bandwidth of the filter [19].
Feature encoding is implemented by convolving the normalized iris pattern
with 1D Log-Gabor wavelets. The 2D normalized pattern is broken up into a number
of 1D signals, and these 1D signals are then convolved with the 1D Log-Gabor wavelet. The
rows of the 2D normalized pattern are taken as the 1D signals; each row corresponds to
a circular ring on the iris region. The intensity values at known noise areas in the
normalized pattern are set to the average intensity of surrounding pixels to prevent
influence of noise in the output of the filtering.
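The row-wise 1D Log-Gabor encoding described above might be sketched as follows. Building the filter in the frequency domain over positive frequencies only (so the inverse FFT yields a complex response, i.e. one phasor per sample) is our implementation choice; the minimum wavelength of 18 and σ/f0 = 0.5 follow the values quoted in the encoding algorithm of this chapter.

```python
import numpy as np

def encode_rows(polar, min_wavelength=18, sigma_on_f=0.5):
    """1D Log-Gabor encoding of each row of the normalized iris, followed by
    Daugman-style two-bit phase quantization (sketch)."""
    n = polar.shape[1]
    f = np.fft.fftfreq(n)                  # signed frequency axis
    f0 = 1.0 / min_wavelength              # filter centre frequency
    G = np.zeros(n)
    pos = f > 0                            # one-sided filter -> analytic signal
    G[pos] = np.exp(-(np.log(f[pos] / f0))**2 / (2 * np.log(sigma_on_f)**2))
    response = np.fft.ifft(np.fft.fft(polar, axis=1) * G, axis=1)
    # Two bits per phasor: sign of the real part, sign of the imaginary part
    bits = np.empty((polar.shape[0], 2 * n), dtype=np.uint8)
    bits[:, 0::2] = (response.real > 0)
    bits[:, 1::2] = (response.imag > 0)
    return bits
```

The output length per row is twice the angular resolution per filter, consistent with the template-length rule used later in the encoding algorithm.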
3.8.7 Phase quantization :
Daugman demodulates the output of the Gabor filters in order to compress the data.
Taking only the phase will allow encoding of discriminating information in the iris,
while discarding redundant information such as illumination, which is represented by
the amplitude component.
The output of the filtering is then phase quantized to four levels using the
Daugman method, with one level for each quadrant of the complex plane, so each
filter produces two bits of data for each phasor. These four levels are represented
using two bits of data, so each pixel in the normalized iris pattern corresponds to
two bits of data in the iris template. A total of 2,048 bits are calculated for the
template, and an equal number of masking bits are generated in order to mask out
corrupted regions within the iris. This creates a compact 256-byte template, which
allows for efficient storage and comparison of irises. The feature encoding process
is illustrated in Figure 3.20.
Fig.(3.20) : Phase quantization
The encoding process produces a bitwise template containing a number of bits
of information, and a corresponding noise mask which corresponds to corrupt areas
within the iris pattern, and marks bits in the template as corrupt. Figure 3.21 shows
the template after encoding of Image S1011L07 from CASIA interval version3
database using GAC technique.
Fig. (3.21) : Template after encoding of Image using GAC technique
S1011L07 from CASIA interval version3 database.
• Algorithm for Feature Encoding:
Step 1: Initialize number of scales (filters) =1
Step 2: Initialize minimum wavelength=18
Step 3: Initialize multiplier =1
Step 4: Sigma =0.5
Step 5: Take polar array as input
Step 6: Encode using the Log-Gabor filter

G(f) = exp{ −[log(f/f0)]² / (2 [log(σ/f0)]²) }
Step 7: Calculate length = angular resolution*2*number of filters used.
Step 8: Perform 2 bit quantization.
Step 9: Store created template of above length.
Figure 3.22 shows the iris code generation process. Iris template is a bitwise template
containing a number of bits of information, and a noise mask which corresponds to
noisy areas within the iris pattern. The feature vectors obtained through this iris
template are further compared by using Hamming distance calculation method.
Fig. (3.22) : Iris code generation process a) Eye image, b) Inner and outer Boundaries,
c) Iris portion in the eye image, d) Mask for the noise present in the iris image e) Iris Template
3.8.8 Matching Algorithms :
The feature vectors are compared using a similarity measure for which matching
algorithms are used. Let I1 and I2 be the two iris codes to be compared, and M1 and M2
be their respective masks. The Hamming distance (HD) is calculated as:

HD = ‖ (I1 ⊗ I2) ∩ M1 ∩ M2 ‖ / ‖ M1 ∩ M2 ‖    (3.23)
where the XOR operator, ⊗, detects disagreement between the corresponding bits
in the iris codes, the AND operator, ∩, ensures that the Hamming distance is calculated
using only the bits generated from the true iris region, and the operator ‖·‖ computes
the norm of the bit vector. Ideally, the Hamming distance between two images of the
same iris will be 0 (a genuine score), while that between two images of different
irises will be about 0.5 (an impostor score).
For matching, the Hamming distance was chosen as a metric for recognition,
since bit-wise comparisons were necessary. The Hamming distance algorithm used
also incorporates noise masking, so that only significant bits are used in calculating
the Hamming distance between two iris templates. Now, when taking the Hamming
distance, only those bits in the iris patterns that correspond to ‘0’ bits in the noise masks
of both iris patterns will be used in the calculation. The Hamming distance will be
calculated using only the bits generated from the true iris region, and this modified
Hamming distance formula is given as,
HD = ( Σ_{j=1}^{N} X_j (XOR) Y_j (AND) Xn′_j (AND) Yn′_j ) / ( N − Σ_{k=1}^{N} Xn_k (OR) Yn_k )    (3.24)
where X_j and Y_j are the two bit-wise templates to compare; Xn_j and Yn_j are
the corresponding noise masks for X_j and Y_j (Xn′_j and Yn′_j denote their
complements, so a ‘1’ marks a valid bit); and N is the number of bits represented
by each template. Although, in theory, two iris templates generated from the same iris
will have a Hamming distance of 0.0, in practice this will not occur. Normalization is
not perfect, and also there will be some noise that goes undetected, so some variation
will be present when comparing two intra-class iris templates.
In order to account for rotational inconsistencies, when the Hamming distance
of two templates is calculated, one template is shifted left and right bit-wise and a
number of Hamming distance values are calculated from successive shifts. This bit-
wise shifting in the horizontal direction corresponds to rotation of the original iris
region by an angle given by the angular resolution used. From the calculated
Hamming distance values, only the lowest is taken, since this corresponds to the best
match between two templates.
The number of bits moved during each shift is given by two times the number
of filters used, since each filter will generate two bits of information from one pixel of
the normalized region. The actual number of shifts required to normalize rotational
inconsistencies will be determined by the maximum angle difference between two
images of the same eye, and one shift is defined as one shift to the left, followed by
one shift to the right [27].
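A sketch of the masked Hamming distance with bit-wise shifting follows. Here a ‘1’ in a noise mask marks a corrupt bit (so valid positions are the complement, as in the modified formula above); the shift range and helper names are our own assumptions.

```python
import numpy as np

def hamming_distance(X, Y, Xn, Yn):
    """Masked Hamming distance (sketch of Eq. 3.24): noise-mask bits set to 1
    mark corrupt positions and are excluded from the comparison."""
    valid = ~(Xn | Yn)
    n = valid.sum()
    return np.count_nonzero((X ^ Y) & valid) / n if n else 1.0

def best_match(X, Y, Xn, Yn, max_shift=8, bits_per_pixel=2):
    """Shift one template left and right in whole pixels (groups of
    bits_per_pixel bits) and keep the lowest distance, to absorb rotation."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        k = s * bits_per_pixel
        best = min(best, hamming_distance(X, np.roll(Y, k), Xn, np.roll(Yn, k)))
    return best
```

A rotated copy of a template scores 0 after the compensating shift, while an unrelated template stays near 0.5, reflecting the genuine/impostor separation described above.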
• Algorithm for Matching:
Step 1: Take all the enrolled templates in one array.
Step 2: Initialize I = total number of enrolled eyes.
Step 3: Decrement I.
Step 4: Take template from the array.
Step 5: Create template1 and mask1 from the stored template.
Step 6: Create template2 and mask2 from individual which claimed from matching
stage.
Step 7: Calculate Hamming distance by implementing equation,
HD = ( Σ_{j=1}^{N} X_j (XOR) Y_j (AND) Xn′_j (AND) Yn′_j ) / ( N − Σ_{k=1}^{N} Xn_k (OR) Yn_k )
Step 8: Check if I become zero.
Step 9: If No then Repeat step 4 to step 8.
Step 10: If yes then find minimum hamming distance, HD from all obtained values.
Step 11: Check if HD is less than or equal to set threshold.
Step 12: If yes then Display result, “Positive Identity Match Obtained.”
Step 13: If No then Display result, “The individual is not present in the database”
3.8.9 Experimental results of proposed method-1 :
Two experiments are conducted to evaluate the performance of the proposed iris
segmentation algorithm using geodesic active contours. Parameters chosen for the
performance evaluation are segmentation accuracy, processing time required for
segmentation and iris recognition performance when normalized iris image is used as
input for feature extraction and matching. The experimental results are evaluated
using MATLAB 7.10 (R2010a) on Intel(R) core(TM) i4 CPU at 3GHz with 4GB
RAM. For all segmentation methods, the other blocks, such as normalization, feature
encoding and matching, are kept the same.
Experiment on evaluation of segmentation performance:
Implementation of two prominent and widely used segmentation methods, viz. the
integro-differential operator (Daugman's model) and the Hough transform (Wildes'
model), is carried out using freely available MATLAB code with appropriate
modifications. Performance evaluation of the iris segmentation algorithms is
carried out using the CASIA-Interval-3 database to evaluate the segmentation accuracy
and processing time required by the proposed method. We have used all 2655 images
of the CASIA-interval-3 database and the results of iris segmentation are tabulated in
Table 3.4.
Table 3.4 Results of Iris Segmentation on the CASIA-Interval-3 Database

Segmentation Method               Accuracy (%)   Processing time (seconds)
                                                 Minimum   Maximum   Average
Integro-differential operator         82.76       1.238     1.974     1.536
Hough transform (Masek method)        93.77       1.683     2.382     1.985
Geodesic active contours              96.89       5.73      6.89      6.31
Our tests on more challenging iris image database set, CASIA-IrisV3-Interval,
confirmed that our proposed segmentation method using geodesic active contour
method is robust in dealing with low quality images. More importantly, the proposed
method offers better performance with 96.89% segmentation accuracy. This
performance measure is indeed encouraging due to the low quality of the images.
Table 3.4 shows that the average processing time needed for iris segmentation is about 6.31
seconds, which is considerably longer than for the other approaches. The large discrepancy in
time is due to the large search space and the active contour iterations for fixing the
boundary. However, the segmentation accuracy achieved using active contour method
is far better than other methods.
Experiments on effect of segmentation on Iris recognition performance :
The feature extraction and matching stages of Daugman’s system are implemented by
using freely available open MATLAB codes. With the help of this implementation,
iris codes of gallery set images are generated and compared with iris codes of images
of the probe set to obtain recognition errors. Segmented and normalised irises
obtained from our segmentation method and from the two most-referenced methods are used
as inputs to the feature extraction and pattern matching module. Table 3.5 shows the details of the
gallery set and probe set used for experiments from the CASIA-IrisV3-Interval
database.
Table 3.5 Details of the gallery set and probe set used for experiments on the CASIA-IrisV3-Interval database

Total classes and images in the database   2655 images; 396 classes; 1–26 images per class
Gallery size (150 classes)                 900
Probe set size (genuine class)             750
Probe set size (impostor class)            935

Table 3.6 Accuracy of recognition using each segmentation method for both databases

Segmentation method                UBIRIS Accuracy (%)   CASIA-Interval-3 Accuracy (%)
Integro-differential operator            68.46                 82.76
Hough transform (Masek method)           89.23                 93.77
Geodesic active contours                 92.57                 96.88
Table 3.6 shows the accuracy of recognition achieved by the three segmentation
techniques for both the UBIRIS and CASIA-V3 databases.
Figure 3.23 shows receiver operating characteristics for three segmentation methods.
Fig. (3.23) : Receiver operating characteristics (ROC) curves for the three methods
Experiments on database independence:
• The minimum and maximum values for the iris radius must be set manually for
the integro-differential operator, depending upon the database used.
• It must be noted that the GAC technique used for segmenting the iris is suitable
for operating on the iris images of different databases. The stopping criterion for
the evolution of GACs is image independent and does not take into account the
amount of edge detail present in an image. Figures 3.24 and 3.25 show
accurate segmentation of off-angle irises by the GAC technique.
• The range of radius values to search for is set manually for Hough transform,
depending on the database used. For the CASIA database, values of the iris
radius range from 90 to 150 pixels, while the pupil radius ranges from 28 to 75
pixels.
Fig. (3.24) : Segmentation of off-angle iris using GAC of Image 1_1
Fig. (3.25) : Segmentation of off-angle iris using GAC of Image 1_2
Experiments on the estimation of the radius :
The estimation of the radius by the GAC technique gives better results on images for
which the Masek and Daugman methods fail. The estimated radii of an image from the
CASIA Interval version 3 database using each segmentation method are shown in
figure 3.26.
Thus the results are evaluated on the iris images for analyzing the effect of the
segmentation on the recognition as well as to study the different ways to achieve the
extraction of iris from eye image.
Fig.(3.26) : Radius obtained from three segmentation techniques: (a) Integro
differential operator, (b) Hough transform (Masek method),(c) Geodesic active contour on image S1144L02 from CASIA interval V3 database
3.8.10 Discussion on results :
From the results of the above experiments, the following inferences are drawn.
1. The comparative analysis of experimental results on the CASIA-Interval-version 3
dataset indicates that the segmentation accuracy of the GAC is better than that of
Daugman's integro-differential operator and Masek's Hough transform. The GAC
algorithm also helps in accurately estimating the radius of the iris and its center.
The improvement in the performance is due to the following reasons:
a) The GAC scheme is an evolution process that attempts to extract the limbic
boundary of the iris as well as the contour of the eyelid in order to isolate the
iris texture from its surroundings.
b) Since active contours can assume any shape and can segment multiple objects
simultaneously, the GAC approach lessens some of the concerns associated with
the traditional models. However, the stopping criterion for the evolution of GACs is
image independent and does not take into account the amount of edge details
present in an image. Thus, if the iris edge details are weak, the contour
evolution may not stop at the desired iris boundary leading to an inaccurate
segmentation.
2. Though the GAC scheme gives higher accuracy, it takes more time for segmentation
than the integro-differential operator and the Hough transform.
3.9 Proposed Method 2: Fast Iris Recognition Using Canny Edge Detector
Daugman [15] uses the integro-differential operator for locating the circular iris and
pupil regions, and then the arcs of the upper and lower eyelids. However, the algorithm
fails when there is noise in the iris image, such as reflections, since it works well only
on a local scale. Wildes' method [20] is based on a contour fitting approach that uses
the circular Hough transform to find, in a brute-force manner, the parameters of circles
passing through each edge point. This approach is used by many researchers due to its
better accuracy.
Despite its good segmentation accuracy, this method has two major difficulties.
In realistic images, such as those in the UBIRIS database, the intensity variation is
not the same across the whole image, so it is difficult to choose a single threshold
that produces an edge map of both the outer and inner boundaries at the same time.
As a result, a minimum of two iterations must be carried out, first for the outer
boundary and then for the inner boundary, which makes the computational complexity
of the implementation very high.
In the proposed approach, to overcome these problems, the Canny edge detector
[89], which unlike other edge detectors uses two thresholds, is applied together with
the Hough transform for detection of the outer boundary only. The inner boundary is
detected by thresholding the image and analyzing the connected regions. Figure 3.27
shows a flowchart of the proposed iris recognition algorithm with its major processing steps.
Fig. (3.27) : Flowchart of proposed Iris segmentation system
3.9.1 Inner Boundary (Pupil) Location :
The pupil detection module has been designed to localize the pupillary boundary and
is responsible for the four processing steps described below. In the present work, the
pupil is assumed to have a circular shape; thus, pupil detection consists of localizing
the centre of the pupil circle and estimating the radius of the circular pupillary
boundary. The sub-regions in the pupil area where specular reflections occur need to
be isolated in advance; then, the specular reflections can be
eliminated. Figure 3.28 shows the flowchart of the proposed pupillary detection
algorithm.
Fig. (3.28) : Flowchart of pupillary detection algorithm
Gaussian filtering :
The input eye image is initially filtered by a Gaussian filter to smooth the image and
to enhance its contrast, mainly at the edges.
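As an illustration, this Gaussian smoothing step can be sketched as a separable NumPy convolution. This is a hypothetical sketch, not the exact filter used in the experiments; the kernel radius of 3σ and the default σ = 1.5 (the value used later for the Canny detector) are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel; radius defaults to 3*sigma."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(image, sigma=1.5):
    """Separable Gaussian smoothing: filter every row, then every column."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, image)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, tmp)
```

Because the 2-D Gaussian is separable, two 1-D passes are equivalent to one 2-D convolution but much cheaper.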
Image binarisation :
The input iris image is first binarized using a chosen threshold value. Since the
intensity values of the pixels in the pupil area are small and the pupil area is not
large, the 5 - 8% of pixels with the lowest intensity values (considering the proportion
of the pupil region to the entire iris image for the CASIA database [9]) are chosen and
set to 1. These pixels are assumed to include the pupil. For iris images from version 2
of the database, there is usually glare in the pupil area, which sometimes makes the
detection incorrect. The inner boundary is localized by thresholding the image and
applying Freeman's chain code. First, the gray-scale histogram of the image is
computed. By setting a threshold value T, the original image is converted into a binary
image B(x, y), as shown in figure 3.29: if the gray value of a pixel is greater than T,
it is set to 0; otherwise, it is set to 1.
Fig. (3.29) : a) Binary image b) Binary image with excluded eyelashes
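The darkest-pixel selection described above can be sketched as follows. The function name is hypothetical, and the 6.5% fraction is an assumed midpoint of the 5 - 8% range quoted for the CASIA database.

```python
import numpy as np

def binarize_pupil(image, low_fraction=0.065):
    """Mark the darkest `low_fraction` of pixels as 1 (pupil candidates).

    The intensity at the given percentile acts as the threshold T:
    pixels at or below T become 1, all brighter pixels become 0.
    """
    t = np.percentile(image, 100.0 * low_fraction)
    return (image <= t).astype(np.uint8)
```

For example, a dark pupil disc on a bright background is isolated because its pixels fall entirely below the low-percentile threshold.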
Detection of the most probable region :
The binarisation process enhances most of the pupil area; however, it preserves many
irregular, non-connected stains of varying sizes, both inside and outside the true pupil
region. Pupil region detection is therefore carried out through an analysis of the
connected regions. The steps involved in this process are as follows:
1. Exclude the eyelash part of the binary image, then use the Freeman chain
code to search the eight-connected regions and label the pixels of each region.
Since the area of the eyelashes is much smaller than that of the pupil, a
region-size threshold is applied to the binary image.
2. If the number of pixels in a region R is smaller than M (the value of which
needs to be decided experimentally), set them to 0. The resulting binary image
with eyelashes excluded (B1(x,y)) is shown in figure 3.29(b).
3. Analyse the connected regions: the coordinates of the pupil centre and the
radius of the pupil are calculated from the largest connected region C by,
(x_p, y_p) = (1/N) Σ_{(x_c, y_c) ∈ C} (x_c, y_c)        (3.25)

r = round( √(N/π) )

where N is the number of pixels in the largest connected region C, (x_c, y_c) are the
coordinates of the pixels belonging to C, (x_p, y_p) is the centroid of C taken as the
pupil centre, and r is the radius of the pupil circle.
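Equation (3.25) and the surrounding connected-region analysis can be sketched as follows, with SciPy's connected-component labelling standing in for the Freeman chain-code search. This is an illustrative sketch, not the implementation used in the experiments.

```python
import numpy as np
from scipy import ndimage

def pupil_from_binary(binary):
    """Estimate pupil centre and radius from the largest connected region.

    Implements Eq. (3.25): the centre is the centroid of the largest
    8-connected region C, and r = round(sqrt(N / pi)) with N = |C|.
    """
    # label 8-connected regions (counterpart of the chain-code search)
    labels, count = ndimage.label(binary, structure=np.ones((3, 3)))
    if count == 0:
        return None
    sizes = ndimage.sum(binary, labels, index=range(1, count + 1))
    largest = int(np.argmax(sizes)) + 1          # label of region C
    ys, xs = np.nonzero(labels == largest)
    n = ys.size                                  # N = number of pixels in C
    xc, yc = xs.mean(), ys.mean()                # centroid = pupil centre
    r = int(round(np.sqrt(n / np.pi)))           # radius from region area
    return (xc, yc), r
```

Small spurious regions (eyelash remnants) are ignored automatically because only the largest component is kept.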
3.9.2 Outer boundary detection using canny edge detector :
For outer boundary localization, the Canny edge detector is conventionally combined
with the Hough transform. However, the Hough transform requires a three-dimensional
search and hence has high computational complexity. We use the Canny edge detector
because it is an optimal detector in which edges are marked at the maxima of the
gradient magnitude of the Gaussian-smoothed image.
The Canny edge detector :
The Canny edge detector detects true edges in noisy images and suppresses the
false edges caused by noise. Thus, the Canny edge detector is suitable for outer
boundary detection in iris segmentation. The steps involved in the Canny edge detector are:
1. Use a Gaussian filter of standard deviation 1.5 to smooth the image, minimizing
the high-frequency noise caused by eyelids and eyelashes in iris images.
2. Calculate the gradient magnitude and direction of each pixel using the Sobel
operator to mark edges with large gradient magnitudes.
3. Apply non-maximum suppression to the gradient values to convert the blurred
edges in the gradient-magnitude image into sharp edges, so that only the local
maxima in the gradient image are preserved.
4. Threshold value processing: the edge pixels remaining after the non-maximum
suppression step are marked with their strength, pixel by pixel. Some of these are
true edges in the image, but some are false edges caused by noise. To
discriminate between them, two thresholds are used. Edge pixels stronger than the
high threshold are marked as strong, edge pixels weaker than the low threshold
are removed, and edge pixels between the two thresholds are marked as weak.
Thus, double thresholding removes the weak (false) edges that are not connected
to strong edges, while all other edges are preserved.
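The double-thresholding step (step 4) can be sketched as follows, with connected-component labelling implementing the hysteresis that keeps a weak edge only when it touches a strong edge. This is an illustrative sketch, not the code used in the experiments.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(grad_mag, low, high):
    """Canny step 4: keep strong edges plus weak edges connected to them."""
    strong = grad_mag >= high
    weak = (grad_mag >= low) & ~strong
    # label all candidate pixels; keep components containing a strong pixel
    labels, count = ndimage.label(strong | weak, structure=np.ones((3, 3)))
    keep = np.zeros(count + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # components with a strong pixel
    keep[0] = False                          # background label
    return keep[labels]
```

An isolated weak response (pure noise) is discarded, while a weak continuation of a strong edge survives.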
3.9.3 Outer boundary detection algorithm :
Based on the above discussion, the steps involved in the outer boundary detection are
as follows:
1. Read the iris image from the UBIRIS/CASIA databases and convert it to a
gray-scale image, if required, to restrict the computational requirement.
2. Resize the image I(x, y) to 256 X 256 for a further reduction in computational
overhead and processing time.
3. Calculate a threshold from the histogram of the image and obtain a binary edge
map of I(x, y) using the Canny edge detector with this threshold, giving the
image IRef(x, y).
4. Obtain another binary edge map, ICanny(x, y), using a preset threshold. The overall
time required to obtain ICanny(x, y) is less than that for IRef(x, y), and the resulting
edge map is better, with minimal noise edges.
Step 4 generates a binary image of the same size as the input image I(x, y) using the
Canny edge detector with a preset threshold value. Figure 3.30 shows sample
gray-scale images and their binary edge maps, with 1's where the function finds an
edge in I(x, y) and 0's elsewhere.
5. Obtain the two rows and two columns in the matrix of the binary edge map
ICanny(x, y) that contain a single '1', and draw the four tangents through these four
boundary points. The steps involved in this process are:
Fig. (3.30) : Edge map of an eye image using Canny edge detector.
1) A section of the edge map image is shown in figure 3.31. For detecting the outer
iris boundary, this matrix is read row-wise and column-wise within the range 65 to
195 to find the rows and columns having a single '1' with the rest of the pixels '0'.
Four such pixels (points), two on two rows and two on two columns, are determined.
a) By joining the points of contact of the two pairs of opposite tangents, two
diameters are obtained, DH and DV. The diameter of the iris is computed as the
average of these two diameters:
D = (DH + DV) / 2
Thus, the radius of the outer boundary is R = D/2.
b) The point of intersection of these diameters is taken as the centre of the iris, as
shown in figure 3.32(b).
c) The iris centre and radius thus obtained are rough estimates of the outer circle of
the iris. Figure 3.33 shows the accurately detected inner and outer boundaries of the iris.
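The tangent-based estimation above can be sketched as follows. The function and its generic search range are hypothetical simplifications of the described procedure (the thesis scans rows and columns 65 to 195 of a 256 X 256 edge map).

```python
import numpy as np

def iris_circle_from_tangents(edge_map, lo=0, hi=None):
    """Find rows/columns of a binary edge map containing exactly one '1'.

    The first and last such rows give the vertical diameter D_V; the
    first and last such columns give the horizontal diameter D_H.
    The circle radius is R = D/2 with D = (D_H + D_V) / 2, and the
    centre is the intersection of the two diameters.
    """
    if hi is None:
        hi = edge_map.shape[0]
    rows = [r for r in range(lo, hi) if edge_map[r, :].sum() == 1]
    cols = [c for c in range(lo, min(hi, edge_map.shape[1]))
            if edge_map[:, c].sum() == 1]
    if len(rows) < 2 or len(cols) < 2:
        return None
    d_v = rows[-1] - rows[0]                 # vertical diameter
    d_h = cols[-1] - cols[0]                 # horizontal diameter
    radius = (d_h + d_v) / 4.0               # R = ((D_H + D_V) / 2) / 2
    center = ((cols[0] + cols[-1]) / 2.0,    # x: midpoint of tangent columns
              (rows[0] + rows[-1]) / 2.0)    # y: midpoint of tangent rows
    return center, radius
```

This search is linear in the number of rows and columns, which is why it is far cheaper than a Hough accumulation.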
Fig. (3.31) : A part of the matrix of the edge map image
2) Then, four tangents are drawn through these points, as shown in figure 3.32:
a) four tangents, b) iris centre and outer radius detection.
Fig. (3.32) : Estimation of the centre and radius of the outer boundary of the iris.
3.9.4 Experimental results of proposed method 2 :
Two experiments are conducted to evaluate the performance of the proposed iris
segmentation algorithm using Canny edge detector. Parameters chosen for the
performance evaluation are segmentation accuracy, processing time required for
segmentation, and the iris recognition performance when the normalized iris image is
used as input for feature extraction and matching. The experiments are carried out
using MATLAB 7.10 on an Intel(R) Core(TM) i4 CPU at 3 GHz with 4 GB RAM. For
each segmentation method, the normalization, feature encoding and matching methods
are kept the same.
Fig. (3.33) : Inner and outer boundary detection of iris
Experiments on evaluation of segmentation performance :
Two prominent and widely used segmentation methods, viz. the integro-differential
operator (Daugman's model) and the Hough transform (Wildes' model), are
implemented using freely available MATLAB code with suitable modifications.
Performance evaluation of the iris segmentation algorithms is carried out on the
UBIRIS database to evaluate the segmentation accuracy and the processing time
required by the proposed method. We have used all 1877 images of the UBIRIS
database, and the results of iris segmentation are tabulated in Table 3.7.
Table 3.7 : Results of Iris Segmentation on the UBIRIS Database

Segmentation Method                        Segmentation   Processing time of iris segmentation
                                           Accuracy (%)   Min (s)   Max (s)   Avg (s)
Daugman's integro-differential operator    76.06          1.26      2.09      1.67
Hough transform (Masek method)             90.32          1.62      2.51      2.06
Canny edge detector                        89.76          0.44      0.68      0.56
Our tests on the more challenging UBIRIS iris image database confirmed that the
proposed segmentation method using the Canny edge detector is robust in dealing with
low-quality images. More importantly, the proposed method offers comparable
performance, with 89.76% segmentation accuracy. This performance is indeed
encouraging given the low quality of the images. Table 3.7 shows that the average
processing time needed for iris segmentation is about 0.56 seconds, which is
considerably smaller than that of the other approaches.
Experiments on effect of segmentation on Iris recognition performance :
We have also implemented the feature extraction and matching stages of Daugman's
system using freely available open MATLAB code, in order to evaluate the effect of
the partial loss of iris data in the segmented irises of our method on the recognition
rate. With this implementation, iris codes of the gallery-set images are generated and
compared with the iris codes of the probe-set images to obtain the recognition errors,
using the segmented and normalised irises obtained from our segmentation method
and from the two most-cited methods as inputs to the feature extraction and pattern
matching module.
Table 3.8 shows the details of the gallery set and probe set used for the
experiments on the UBIRIS database. Table 3.9 gives the EER and accuracy of the
implemented iris recognition system at different thresholds.
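The matching stage compares iris codes by their fractional Hamming distance, as in Daugman's system. The following is a minimal sketch, assuming binary codes with optional noise masks; the mask handling is a common convention and the function name is hypothetical.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid in both noise masks."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, dtype=bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, dtype=bool)
    n = valid.sum()
    if n == 0:
        return 1.0  # no comparable bits: treat as a complete mismatch
    return np.count_nonzero((code_a ^ code_b) & valid) / n
```

A genuine pair yields a small distance, an imposter pair a distance near 0.5; the decision threshold sweeps this range to produce the FAR curves below.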
Table 3.8 : Details of the gallery set and probe set used for experiments on the
UBIRIS database
The EER and recognition accuracy for the UBIRIS database using different
segmentation methods are given in Table 3.9.
Table 3.9 : EER and accuracy of recognition using each segmentation method for the
UBIRIS database.

Segmentation method               EER (%)   Accuracy (%)
Integro-differential operator     31.54     68.46
Hough transform (Masek method)    10.77     89.23
Geodesic active contours           7.43     92.57
Canny edge detector               11.21     88.79
The FAR is plotted versus different threshold values, as shown in figures 3.34 and
3.35 for the CASIA and UBIRIS databases respectively.
Fig. (3.34) : Plot of % FAR vs. threshold for the CASIA database (Hough transform,
geodesic active contour and proposed methods).
Total classes and images in the database : 1877 images, 241 classes, 1-9 images per class
Gallery size (100 classes)               : 600
Probe set size (genuine class)           : 450
Probe set size (imposter class)          : 750
Fig. (3.35) : Plot of % FAR vs. threshold for the UBIRIS database (Hough transform,
active contour and proposed methods).
3.9.5 Discussions on results :
From the results of the above experiments, the following inferences are drawn. The
time required for iris segmentation and normalization by the proposed method (0.56
seconds) is considerably smaller than that of Daugman's method (1.67 seconds) and
the Hough transform method (2.06 seconds). The improvement in processing time as
compared with the other two methods is due to the following reasons:
• In the Hough transform method, two steps are involved in outer boundary
detection. The first step converts the gray-scale eye image into a binary edge
map using a suitable (Canny) edge detection method; then the circular Hough
transform is applied. In the circular Hough transform, each edge pixel with
coordinates (x, y) in the image space is mapped to the parameter space of circle
parameters, i.e. the centre and radius that satisfy the circle equation. As a
result, each point (x, y, r) in the parameter space represents a possible iris
circle in the image; all such possible circles are accumulated and then the
correct iris circle is drawn.
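The 'brute-force' parameter-space voting described above can be sketched as follows: every edge pixel votes for all candidate centres (a, b) at each radius r, and the accumulator maximum gives the circle. This is an illustrative sketch (far slower than practical implementations), and its sampling granularity is an assumption.

```python
import numpy as np

def circular_hough(edge_map, r_min, r_max):
    """Brute-force circular Hough transform over a 3-D accumulator (r, b, a).

    Each edge pixel (x, y) votes for the centres (a, b) of all circles of
    radius r passing through it; the accumulator peak is the best circle.
    """
    h, w = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for y, x in zip(ys, xs):
        for i, r in enumerate(radii):
            # candidate centres lie on a circle of radius r around (x, y)
            a = np.rint(x - r * np.cos(thetas)).astype(int)
            b = np.rint(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            np.add.at(acc[i], (b[ok], a[ok]), 1)
    i, b, a = np.unravel_index(np.argmax(acc), acc.shape)
    return int(a), int(b), int(radii[i])
```

The triple loop over edge pixels, radii and angles is exactly the three-dimensional search that makes this approach computationally heavy.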
• As the centre and radius of the pupil circle are different, these two steps are
repeated for the pupil. This makes the Hough transform method computationally
the heaviest among the three
methods. However, its segmentation accuracy is the best among the three
methods.
• The integro-differential operator is used in Daugman's method for the detection
of both boundaries. This operator iteratively searches for the circular path along
which there is the greatest change in pixel values as the radius and centre
coordinates of the candidate iris circle vary. The same process needs to be
repeated for pupil detection. Due to this repeated set of calculations for the
pupil and the iris, the method becomes computationally heavy.
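The iterative search described above can be sketched in simplified form: for each candidate centre, the radius with the greatest change in the mean intensity along the circle (the discrete derivative of the contour integral) is selected. The Gaussian blurring of the radial derivative used in Daugman's full operator is omitted here, and the candidate grids are assumptions.

```python
import numpy as np

def integrodifferential(image, centers, radii, n_theta=64):
    """Simplified Daugman operator: pick the circle (x0, y0, r) where the
    mean intensity along the circle changes most between adjacent radii."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    h, w = image.shape
    best, best_circle = -np.inf, None
    for (x0, y0) in centers:
        means = []
        for r in radii:
            # sample the image along the circle of radius r about (x0, y0)
            xs = np.clip(np.rint(x0 + r * np.cos(thetas)).astype(int), 0, w - 1)
            ys = np.clip(np.rint(y0 + r * np.sin(thetas)).astype(int), 0, h - 1)
            means.append(image[ys, xs].mean())
        d = np.abs(np.diff(means))           # |d/dr| of the contour integral
        k = int(np.argmax(d))
        if d[k] > best:
            best, best_circle = d[k], (x0, y0, radii[k + 1])
    return best_circle
```

The cost grows with the product of candidate centres and radii, which is the repeated set of calculations that makes the method heavy when run for both the iris and the pupil.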
• In the proposed method, the outer boundary is detected using the Canny edge
detector, after which the four tangent points, found by a very fast search of the
edge-map matrix, give the centre and radius of the iris circle. This method is
neither as computationally heavy as the Hough transform method nor iterative in
nature. The pupil is detected separately using the gradient vector approach.
Moreover, the centre of the iris is mapped to the centre of the pupil, which
further reduces the normalization time required. As processing time is saved in
all the steps (pupil detection, outer boundary detection and normalization), this
method is computationally very efficient compared to the other two methods.
• The iris segmentation accuracy achieved by the proposed method on the UBIRIS
database is slightly less (89.76%) than that of the Hough transform method
(90.32%), but it is much better than that of Daugman's method (76.06%). The
Hough transform method maps the pupil circle and the iris circle separately,
whereas in the proposed method a little of the useful part of the iris is lost due
to the assumption of the same centre for the pupil and the iris. This is also a
reason for the slightly lower segmentation accuracy.
• Recognition accuracy using the Wildes method is slightly better than that of the
proposed method. The conclusions drawn from this are as follows.
• The improvement in recognition performance is due to the best segmentation
accuracy being achieved at the segmentation stage.
• Recognition performance of the proposed method is slightly lower than that of
the Hough transform method because of the loss of information in the pupil and iris
segmentation process. The recognition accuracy could be further improved by
selecting a better feature extraction method. Chapter 6 discusses the quality
measures for the selection of features used in the feature extraction algorithm.
• The lowest recognition rate, that of Daugman's method, is due to the low success
rate of its iris segmentation and the presence of noise in the segmented iris image.
3.10 Concluding Remarks
Iris segmentation is probably one of the most crucial operations involved in iris
recognition. Accurate iris segmentation is fundamental to the success and precision
of the subsequent feature extraction and recognition, and consequently allows the
iris recognition system to achieve the desired high performance.
Employing better image acquisition systems can significantly improve the
quality of the input images and, thereby, the recognition performance. However, for
practical purposes, it is necessary to develop algorithms that can handle poor-quality
data. In this regard, the present chapter discussed the problem of iris segmentation in
challenging images. A set of four iris segmentation techniques was evaluated on a
noisy database containing various non-ideal factors such as occlusions, blur, and
drastic illumination variations.
The pre-processing schemes appear to play a significant role in the
segmentation performance of all the techniques. Similarly, an eye-centre detector
helps in localizing a region for the iris boundary search process.
The considered set of iris segmentation techniques ensures a balance between
the classical approaches and the relatively newer approaches for handling challenging
iris images. Both the integro-differential operator and the Hough transform require
relatively few computations compared to the GAC technique. However, their
performance was observed to be low due to the poor quality of the input data. On the
other hand, the geodesic active contours and the Canny edge detector method provide
better performance, at the expense of higher computational complexity in the case of
the active contour method.
Geodesic active contours can be effectively used to evolve a contour that fits a
non-circular iris boundary (typically caused by eyelid or eyelash occlusions).
However, edge information is required to control the evolution and stopping of the
contour. Based on the discussion of the experiments and the results obtained, the
following conclusions are drawn.
1. Improvement in iris recognition accuracy depends on the segmentation accuracy,
as it determines the loss of iris data and the noise in the image.
2. Segmentation using the Canny edge detector contributes to the best recognition
rate because of the accurate segmentation of lossless, noise-free irises of their
actual shape.
3. The processing time required for the first method is higher, while that of the
second method using the Canny edge detector is small compared with the other
methods discussed. However, the presented timings cannot be trusted completely,
because the processing time of an algorithm depends not only on the power of the
algorithm but also on various other factors, such as the manner of implementation,
loops, optimization, and the overhead of other processes running on the PC. The
strength of the algorithms in terms of processing time can be better estimated from
the complexity of the mathematical equations involved and the basic logic used in
the algorithms.
4. The recognition performance of the proposed method is slightly lower than that of
the Hough transform method because of the loss of information in the pupil and iris
segmentation process. These results were obtained by extracting the features using
the 1D Log-Gabor filter and using the Hamming distance for matching. The
recognition accuracy could be further improved by selecting a better feature
extraction method. This has motivated us to devise new techniques for feature
extraction and for the selection of the best-quality features to be used in the feature
extraction algorithm. These issues are discussed in the subsequent chapters of this
thesis.
------ ------