IRIS SEGMENTATION PROCESS
A PROJECT SUBMITTED TO ASSAM UNIVERSITY, SILCHAR IN PARTIAL
FULFILLMENT OF THE REQUIREMENT FOR THE DEGREE OF THE
MASTERS OF COMPUTER SCIENCE
SUBMITTED BY
SAHENA BEGAM BARBHUIYA
M.Sc. 4th Semester
Roll No.: 22220384
Regn. No: 01-110016666 of 2011-2012
UNDER THE ABLE GUIDANCE OF
DR. KATTAMANCHI HEMACHANDRAN
PROFESSOR
DEPARTMENT OF COMPUTER SCIENCE
ALBERT EINSTEIN SCHOOL OF PHYSICAL SCIENCE
ASSAM UNIVERSITY, SILCHAR
YEAR OF SUBMISSION: 2016
CERTIFICATE
This is to certify that the project work entitled “IRIS SEGMENTATION
PROCESS” submitted to Assam University is a bonafide record of the
project carried out by Sahena Begam Barbhuiya in the Department of
Computer Science, Assam University, Silchar under my guidance. No part
of the project has been submitted for any other Degree or Diploma. The
work included in this project is original and the candidate's own work.
Place: Silchar Prof. Kattamanchi Hemachandran
Date: (Supervisor)
Department of Computer Science
Assam University, Silchar
DEPARTMENT OF COMPUTER SCIENCE
SCHOOL OF PHYSICAL SCIENCES
ASSAM UNIVERSITY, SILCHAR
A CENTRAL UNIVERSITY CONSTITUTED UNDER ACT XIII OF 1989
ASSAM, INDIA, PIN - 788011
CERTIFICATE
This is to certify that the project work entitled “IRIS SEGMENTATION
PROCESS” submitted to Assam University is a bonafide record of the
project carried out by Sahena Begam Barbhuiya in the Department of
Computer Science, Assam University, Silchar under the able guidance of Dr.
Kattamanchi Hemachandran, Professor, Department of Computer
Science. No part of the project has been submitted for any other Degree or
Diploma. The work included in this project is original and the candidate's
own work.
Place: Silchar (Dr. Bipul Syam Purkayastha)
Date: HOD
Department of Computer Science
Assam University, Silchar
DEPARTMENT OF COMPUTER SCIENCE
SCHOOL OF PHYSICAL SCIENCES
ASSAM UNIVERSITY, SILCHAR
A CENTRAL UNIVERSITY CONSTITUTED UNDER ACT XIII OF 1989
ASSAM, INDIA, PIN - 788011
Department of Computer Science
School of Physical Sciences
Assam University, Silchar
(A Central University constituted under Act XIII of 1989)
Silchar – 788011, Assam, India
DECLARATION
I, Sahena Begam Barbhuiya, do hereby declare that the project work entitled “IRIS
SEGMENTATION PROCESS” has been carried out by me under the able guidance of Dr.
Kattamanchi Hemachandran, Professor, Department of Computer Science, Assam
University, Silchar. This project has not been submitted, in part or in full, for the award of
any degree in any university or institute.
Place: Silchar (Sahena Begam Barbhuiya)
Date:
ACKNOWLEDGEMENT
At the very outset, I would like to convey my sincere and heartfelt thanks and
gratitude to Dr. Kattamanchi Hemachandran, Professor, Department of
Computer Science, Assam University, Silchar, for his excellent and able guidance,
valuable suggestions and kind co-operation, which resulted in successful
completion of the project work.
I would like to express my gratitude to Dr. Bipul Syam Purkayastha, Head of the
Department of Computer Science, Assam University, Silchar, for his kind
co-operation and help.
I am also thankful to all the respected teachers of the Department of Computer
Science, Assam University, Silchar for their valuable suggestions.
I am pleased to thank the research scholars Sunita Ningthoujam and Arif Iqbal
Mozumder for their great help and co-operation during my project work.
I also wish to express my heartfelt gratitude to the office staff and all other
non-teaching staff of the Department of Computer Science, Assam University,
Silchar for their help and support during my project work.
Lastly, I would like to express my deepest regards to my parents, friends and
all those who helped me directly or indirectly toward the successful
completion of this work.
Place: Silchar Sahena Begam Barbhuiya
Date:
CONTENTS
CHAPTER 1: INTRODUCTION 1-8
1.1 Biometric system 2-3
1.2 Iris Recognition System(IRS) 4
1.3 Stages of Iris Recognition System 4-5
1.4 Advantage of Iris Recognition System 6
1.5 Disadvantage of Iris Recognition System 6
1.6 Application of Iris Recognition System 6
1.7 Motivation 7
1.8 Objectives 8
CHAPTER 2: REVIEW OF LITERATURE 9-14
CHAPTER 3: DESIGN AND IMPLEMENTATION 15-22
3.1 Introduction 15
3.2 Pupil detection 15-16
3.3 Iris detection 16-18
3.4 Eyelid detection 18-21
3.5 Noise reduction 21-22
CHAPTER 4: EXPERIMENTAL RESULTS AND DISCUSSION 23-26
CHAPTER 5: USER MANUAL 27-31
CHAPTER 6: CONCLUSION AND FUTURE WORK 32
CHAPTER 7: REFERENCES 33-37
Chapter 1: Introduction
Page: 1
Iris recognition system is one of the most reliable biometric systems for personal
identification due to unique properties of iris and high degree of randomness [1, 2].
Typically, iris recognition systems consist of four modules viz. Iris Segmentation,
Normalization, Feature extraction and Matching [3]. To achieve a high-performance
iris recognition system, proper segmentation of the iris is required. For real-time
applications, the computational time of the system should also be considered, so the
segmentation step is vital to the overall performance of the system. Segmentation
refers to the isolation of the iris region from an eye image by properly detecting the
iris inner and outer boundaries. The iris region is the annular part between the
pupil and the sclera, as shown in Fig. 1.
Fig. 1: An Iris Image (SI213LOljpg) from the CASIA Iris V3 Interval Database
Artifacts such as eyelids, eyelashes and specular reflections make the
segmentation more difficult, which in turn decreases the system performance. Daugman
[4] proposed an Integro-Differential operator for proper segmentation of an iris.
Wildes [5], Matveev et al. [6] proposed a segmentation approach based on Hough
transformation. Masek [7] performed the iris segmentation based on Hough
transformation. Circular Hough transform has been applied to detect inner and outer
circle of an iris. In order to improve the speed of the iris segmentation Ma et al. [8],
Zubi et al. [9] roughly determine the iris region in advance before applying Hough
Transform. Shah et al. [10] extracted the iris region from an eye image by using
Geodesic Active Contours (GACs). Al-Daoud [11] proposed a method based on
competitive chords to detect pupil-iris and iris-sclera boundaries. Roy et al. [12]
applied parallel game-theoretic decision making procedure to elicit iris boundaries.
Shin et al. [13] applied circular edge detector algorithm to detect the iris boundaries.
Abdullah et al. [14] applied active contour method for complete segmentation of an
iris. Daugman's approach works on a local scale and fails to detect circle boundaries
where there is noise, such as reflections, in the image. Masek's method, based on the
Hough transformation, is computationally expensive in detecting the iris coordinates [15].
1.1 Biometric System
The term biometrics is derived from two Greek words: bio, meaning “life”, and
metric, meaning “to measure”. It refers to the automatic identification of individuals
based on their physical or behavioral characteristics [16]. The physiological
characteristics of a person, such as face pattern, iris pattern, fingerprint, palm print
and hand geometry, carry unique information that distinguishes individuals and can
be used in authentication applications [17]. A biometric recognition system
involves two phases, viz. an Enrollment phase and an Identification or Verification
phase. During the enrollment phase, a feature vector extracted from the individual is
stored in a database. In the identification or verification phase, the user provides a
sample vector to the system, where it is compared to the stored vector, and a decision
is made based on a pre-determined threshold value [18]. Fig. 2 demonstrates the
stages of a biometric system.
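The threshold-based verification decision described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it assumes binary feature vectors compared with a normalized Hamming distance (the matching metric used by several systems surveyed later), and the threshold of 0.35 is chosen purely for illustration.

```python
import numpy as np

def verify(sample, stored, threshold=0.35):
    """Verification-phase decision: accept the claimed identity if the
    normalized Hamming distance between the sample and stored feature
    vectors is within a pre-determined threshold."""
    distance = np.mean(sample != stored)  # fraction of disagreeing bits
    return distance <= threshold

# enrolled template and a fresh sample differing in one bit out of eight
stored = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
sample = np.array([1, 0, 1, 0, 0, 0, 1, 0], dtype=np.uint8)
decision = verify(sample, stored)  # distance 1/8 = 0.125, so accepted
```

In a real system the feature vectors are far longer (e.g. a 2048-bit iris code) and the threshold is tuned to balance false accepts against false rejects.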
To serve any human physiological or behavioral traits as a biometric characteristic
it should satisfy the following requirements [19, 20]:
• Universality: Every individual should have it.
• Distinctiveness: No two individuals should be the same.
• Permanence: It should be invariant over a period of time.
• Collectability: The feature must be easy to collect.
Fig 2: Stages in biometric Recognition system [3]
However, for a practical biometric system, several other issues [21], such as
performance, acceptability and circumvention, are also considered. Different
biometric technologies, such as fingerprint, hand geometry, voice recognition, retinal
recognition, handwritten patterns, iris recognition and dynamic signature recognition,
are used in various applications. Each biometric has its own advantages and
disadvantages, and no single biometric system can be considered optimal [19].
Due to the advancement of science and technology, application areas of biometrics
are increasing day by day where identification or verification is required for
individuals. The applications of biometrics can be divided into three main
categories, viz. commercial, government and forensic. Commercial applications
include computer network login, data security, ATMs, credit cards, distance learning
and many more. Government applications include passport control, national ID and
driver's license. Criminal investigation, parenthood determination and corpse
identification belong to the forensic applications of biometric systems. Unlike
traditional methods based on passwords and PINs, the use of biometrics provides
more comfort to the user while increasing security. For example, the use of
biometrics in banking services is much safer and faster compared with existing
methods based on credit and debit cards [22].
1.2 Iris Recognition System (IRS)
Among the various biometric techniques, such as face recognition, fingerprint
recognition, gait, hand and finger geometry and ear recognition, iris recognition has
been accepted as one of the best and most accurate biometric techniques because of
the stability, uniqueness and non-invasiveness of the iris pattern. The iris region
(shown in Fig. 3), the part between the pupil and the white sclera, provides many
minute visible characteristics, such as freckles, coronas, stripes, furrows and crypts,
which are unique for each individual. Even the two eyes of the same person have
different characteristics. Furthermore, the chance of finding two people with the same
characteristics is almost zero, which makes the system efficient and reliable where
security is concerned [23].
Fig. 3: Eye Image (CASIA Iris Database)
Typically, the iris recognition system consists of four modules viz. Image
acquisition, segmentation, feature extraction and matching as shown in fig. 4. After
acquiring eye images, the iris part is localized by demarcating its inner and outer
boundaries, and then the circular iris region is transformed into a rectangle of fixed size.
This is done in segmentation and normalization module. Next is the feature extraction
module where the unique iris feature is extracted using appropriate technique from the
segmented iris. Finally, the extracted features are matched with the stored pattern to
validate the identification process [24-25].
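The circular-to-rectangular transformation mentioned above (Daugman's rubber sheet model) can be sketched as below. This is a simplified illustration assuming concentric pupil and iris boundaries and nearest-neighbour sampling; the function name, grid size, and dummy image are all illustrative, not the project's implementation.

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, n_radial=8, n_angular=32):
    """Sample the annular iris region between the pupil boundary (r_pupil)
    and the iris boundary (r_iris) onto a fixed-size rectangular grid."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, n_radial)
    # polar sampling grid, rounded to the nearest pixel
    x = np.rint(cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
    y = np.rint(cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
    return img[y, x]  # shape: (n_radial, n_angular)

eye = np.arange(100 * 100, dtype=np.uint8).reshape(100, 100)  # dummy image
rect = rubber_sheet(eye, cx=50, cy=50, r_pupil=10, r_iris=30)
```

Because the output grid size is fixed, irises segmented from eyes at different distances and with different pupil dilations become directly comparable.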
1.3 Stages of Iris Recognition System
Iris recognition is a stepwise procedure, as shown in Fig. 4. The first step is image
capture. The images are then brought into an appropriate form through some
preprocessing steps. The iris is then localized and segmented for further processing.
The texture of the iris is then extracted using appropriate techniques. Finally, texture
matching is done to validate the identification process. The major steps can be
summarized as:
• Image Acquisition: The image is captured under proper illumination; distance
and other factors affecting image quality are taken into consideration. This
step is crucial because image quality plays an important role in iris
localization.
• Image Segmentation: In this step, the iris region is isolated from the given
image. The iris segmentation is a vital step for overall performance of the
system.
• Feature extraction: The portion of interest i.e. iris patterns are extracted
from the localized iris using techniques like Haar wavelets.
• Matching: The extracted patterns are matched against the patterns already
extracted and stored in the database. The degree of similarity decides whether
the identification is established or not.
Fig 4: Stages of Iris Recognition System
1.4 Advantage of Iris Recognition System
• The smallest outlier population of all biometrics.
• Iris pattern and structure exhibit long term stability.
• Ideal for handling large databases.
• Comparatively fast matching technique.
• Convenient intuitive user interface.
• Very high accuracy.
• Verification time is generally less than 5 seconds.
1.5 Disadvantage of Iris Recognition System
• Intrusive.
• Requires a lot of memory for the data to be stored.
• Very expensive.
1.6 Application of Iris Recognition System
• One of the most promising applications for iris recognition is that it
increases security for the transportation industry. The current
requirements for security airports could increase the use of biometric
devices in this area.
• Another promising application is for bank ATMs. Someday, ATM users
will be identified by their irises rather than their PINs. A person's iris
code can be stored either in a database or on a smart card.
• The ability to store the Iris Code on a card or token is important because
it eliminates privacy concerns associated with retaining identities in a
centralized database.
• A biometric technology such as Iris recognition can easily eliminate or
complement the standard log-in password for individual authentication to
a computer.
1.7 Motivation
• Something we hold for security can be lost; something we know, like a password
or PIN, can be guessed or forgotten. Biometrics provides an alternative to these
methods, or they can be used in combination (multimodal). Fingerprints, which
are widely used, can be forged (gummy fingers). The face changes over a period
of time; even with the best algorithms, face recognition (for faces taken one year
apart) has error rates of about 43 to 50%. Hand geometry is not distinctive enough
to be used in large-scale applications, and hand-written signatures can be forged.
• Iris recognition systems are among the most reliable biometric systems, since iris
patterns are unique to each individual and do not change with time. A variety of
methods were developed to handle eye data in biometric systems after J. Daugman
developed the first commercial system.
• To obtain a good-quality image, the user's cooperation is required: the user must
look straight into the camera, remain still, and there should be proper illumination.
This causes inconvenience to the user and is also time consuming.
• Segmentation is a vital step among all the steps involved in an iris recognition
system, and proper segmentation of the iris is needed for the overall performance
of the system. Various authors have proposed different techniques for iris
segmentation, but existing techniques, viz. Daugman's and Masek's approaches,
are computationally expensive and time consuming.
• So, in this project work, a new segmentation approach is proposed to
overcome the disadvantages of the existing methods.
1.8 Objectives
• To study the various iris recognition systems.
• To study the various iris segmentation techniques.
• To implement an iris segmentation technique for an iris recognition system.
• To compare the proposed segmentation technique with existing techniques in order
to evaluate the efficiency of the proposed technique.
Chapter 2: Review of literature
Page: 9
Much advancement has been made in iris segmentation techniques by different
authors since 1993, when J. G. Daugman proposed an approach for iris segmentation.
In the segmentation stage, he introduced an Integro-differential operator to find both
the iris inner and outer borders. His methodology is the most popular among all iris
recognition techniques. In this work, Daugman assumed the iris and pupil to be
circular and introduced an operator for edge detection. The operator searches over the
image domain (x, y) for the maximum in the blurred derivative, with respect to
increasing radius r, of the normalized contour integral of I(x, y) along a circular arc of
radius r and center (x0, y0) [26].
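In symbols, the operator described above is commonly written as

```latex
\max_{(r,\,x_0,\,y_0)} \left| \, G_\sigma(r) \ast \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r} \, ds \, \right|
```

where G_σ(r) is a Gaussian smoothing kernel of scale σ and ∗ denotes convolution. The operator thus acts as a circular edge detector, searching for the circle parameters (r, x0, y0) that maximize the radial change in average intensity along the contour.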
Wildes [27] proposed an approach for iris segmentation based on intensity
values of the image which is converted into a binary edge map. The edge map is
constructed through the Canny Edge detector. In order to incorporate directional
tuning, the image intensity derivatives are weighted to favour ranges of orientation.
Then the well-known Circular Hough Transform is used to obtain the boundaries. The
accuracy of this methodology is dependent on the edge detection algorithm.
Over the past decade, Daugman [28] has constantly modified and improved the
recognition algorithms. In [28], Daugman presented alternative segmentation methods
based on active contours to transform an off-angle iris image into a more frontal view.
Here, to deal with non-circular inner and outer boundaries, the discrete Fourier
transform is applied to detect the iris inner and outer boundaries.
Boles et al. [29] developed an iris recognition system using 1-D dyadic wavelet
transform with various resolution levels on an iris image to characterize the texture of
the iris, and then used zero-crossings for feature representation. It made use of two
dissimilarity functions to compare the new pattern with the reference patterns. Boles'
approach has the advantage of processing 1-D iris signals rather than a 2-D image.
Here, ‘‘1D dyadic’’ means a pair of 1 dimensional wavelet filters, such as low pass
and high pass filters.
Li Ma et al. [30] defined new spatial filters to extract iris features, which were
Gaussians modulated by circularly symmetric sinusoidal function. Experiments show
that the method was suitable for iris feature extraction. Daouk et al. [31] used the
fusion of the canny edge detection scheme and circular Hough transform for iris
segmentation. Based on the Gabor filters and the characteristics of the iris pattern,
Chen et al. [32] introduced Gradient Direction Coding (GDC) with grey code method
and Delta Modulation coding (DMC) method to extract and encode the iris features.
The GDC method is a 2-D method that encodes the gradient direction of each small
2-D iris image block in the wavelet transform domain. The DMC method uses the
delta modulation concept to efficiently encode the feature information; it is a 1-D
method in which the 2-D feature information is converted into 1-D feature signals
before encoding.
Barzegar et al. [33] proposed an iris segmentation method based on a point-wise
level set approach. Yahya et al. [34] proposed an efficient technique for iris
localization: first, direct least-squares fitting of an ellipse was used to detect the iris
inner boundary, and then an Integro-differential operator was applied to detect the
outer boundary, which increases the speed and accuracy of iris segmentation
compared to other approaches. Abra et al. [35] proposed an algorithm based on an
optical composite correlation filter. The algorithm eliminates redundancy by using a
new design of composite filter called the Indexed Composite Filter (ICF). The values
of the inner and the outer boundary are determined through two ICFs.
Murty et al. [36] presented a new and modified algorithm for iris recognition based
on the Canny edge detection scheme and the Hough transform to segment the iris part.
The segmented iris part was normalized using the rubber sheet model, and then
Principal Component Analysis (PCA) was used for pattern matching. Yahya et al. [37]
introduced a Chan-Vese active contour method to extract the iris. Reflections were
identified by using an inpainting technique on the loaded image. The adaptive
boosting (AdaBoost)-Cascade Detector was adopted to detect the iris region. Finally,
the Chan-Vese active contour method was applied to find the iris boundaries. It
performed better, with an error rejection rate (ERR) of 5.5068, compared to 16.8635
for Daugman and 33.8226 for Wildes, when the UBIRIS iris database is considered.
Most commercially available iris recognition systems are based on the pioneering
algorithms of Daugman [28] and Wildes [27]. However, they perform well in ideal
conditions but may fail for non-ideal data. The non-ideal eye images may contain
multiple issues such as specular reflections, low contrast, blurring, focus, non-uniform
illumination, glasses and contact lens, off-axis and the off-angle eyes; occlusions such
as eyelashes, eyelids and hair [27].
Jan et al. [38] proposed a robust iris localization algorithm for non-ideal eye image
based on the Hough transform, gray level statistics, adaptive thresholding and a
geometrical transform. The algorithm involves two phases. In the first phase, iris
circle in a sub-image centered at the pupil circle was localized after localizing the
pupil region. However, on failure of first phase the coarse iris region was localized in
the second phase. Finally, the iris circular boundaries were regularized by using radial
gradients and the active contours. The proposed technique was tolerant to off-axis eye
images, specular reflections, non-uniform illumination, glasses, and contact lens, hair,
eyelashes, and eyelids occlusions.
Li et al. [39] introduced a robust iris segmentation algorithm based on the
Random Sample Consensus (RANSAC). The algorithm localized the iris boundaries
more accurately than the methods based on the Hough transform. Li et al. [40]
presented a weighted co-occurrence phase histogram (WCPH) for representing the
local characteristics of texture pattern which accounts for inconsistencies brought by
the disturbing factors, such as noise and illumination changes. Raffei et al. [41]
proposed an algorithm based on a multi-scale sparse representation of the local Radon
transform (msLRT) to extract iris features when eye images were captured in a
non-cooperative environment and under visible-wavelength illumination. The method
was able to reduce the computational complexity and to generate a compact iris
feature vector.
Jan et al. [42], introduced reliable iris localization techniques using Hough
transform, histogram bisection and eccentricity for non-ideal iris image. It includes
localizing a coarse iris location in the eye image using the Hough transform and image
statistics; localizing the pupillary boundary using a bi-valued adaptive threshold and
the two-dimensional (2D) shape properties; localizing the limbic boundary by reusing
the Hough accumulator and image statistics; and finally, regularizing these boundaries
using a technique based on the Fourier series and radial gradients. The experimental
results show that the proposed method is tolerant to non-ideal issues, such as the
off-axis eye images, specular reflections, and hair, glasses, cosmetic lenses, eyelid and
eyelash occlusions. The approach of Sun et al. [43] was based on the scale-invariant
feature transform (SIFT) and bag-of-features. After detecting the iris inner boundary
with a region-based active contour, the SIFT method is applied to detect key points in
the iris image. Points located in the pupil region were removed. A histogram
representation for each iris image was generated from the constructed feature
vocabulary, and the histogram distance was adopted for the matching test.
Chowhan et al. [44] described Modified Fuzzy Hyperline Segment Neural
Network (MFHSNN)-based iris recognition. The Gabor filter technique was used for
iris feature extraction, after segmentation and normalization using the
Integro-differential operator and a Cartesian-to-polar coordinate transform,
respectively. The MFHSNN with its learning algorithm was used for classification of
iris patterns.
Bindra et al. [45] introduced an iris recognition system that divides the iris image
into three and two parts to reduce the computational complexity, unlike a traditional
system where the complete iris image is extracted. The Sobel operator and wavelet
transformation were used for feature extraction. A combination of the Euclidean
distance with Particle Swarm Optimization (PSO) was presented for classification.
The algorithm was tested on the IITK iris database. Logannathan et al. [46] presented
an approach based on a wavelet probabilistic neural network (WPNN) model. The
WPNN model combines wavelet neural networks and probabilistic neural networks
(PNN) and is able to improve the recognition accuracy and system performance. The
PSO technique, an evolutionary computation technique that can automatically search
for the optimum solution in the vector space, was used to train the WPNN.
Poornima et al. [47] presented a new approach to iris segmentation based on a
neural network. The neural network was trained with the best iris localization method
among Daugman's algorithm, the Hough transform, the Canny edge detection
algorithm and the Integro-differential operator, selected based on the output of each
algorithm. The Integro-differential operator was found to perform better than the other
algorithms and was used to train the network. Other neural network techniques, such
as the Intersecting Cortical Model (ICM) neural network to generate the iris code and
the Rotation Spreading Neural Network (R-SAN net), have also been proposed for iris
recognition. The ICM neural network, a simplified model of the pulse-coupled neural
network (PCNN), has excellent performance for image segmentation, whereas the
R-SAN net is suitable for recognizing the orientation of an object regardless of its
shape [48]. The survey on iris recognition systems is summarized in Table 1.
Table 1: Iris recognition techniques

Reference | Segmentation and Normalization | Feature Extraction & Matching
Daugman 1994 [26] | Integro-differential operator and rubber sheet model | 2D Gabor wavelet and Hamming distance
Boles et al. 1998 [29] | Edge detection schemes | Zero-crossings of the dyadic wavelet transform and dissimilarity functions
Daouk et al. 2002 [31] | Canny edge detection scheme, circular Hough transform and bilinear transformation | Haar wavelet transform for feature extraction and Hamming distance for pattern matching
Chen et al. 2005 [32] | Edge detection algorithm | Wavelet transform & Hamming distance
Daugman 2007 [28] | Active contour methods | Gabor wavelets & Hamming distance
Barzegar et al. 2008 [33] | Point-wise level set approach | Not described
Yahya et al. 2008 [34] | Direct least-squares fitting of ellipse and Integro-differential operator | Not described
Abra et al. 2009 [35] | Indexed composite correlation filters | Indexed composite phase-only filter (ICPOF)
Murty et al. 2009 [36] | Canny edge detector, Hough transform & rubber sheet model | Gabor filter and Principal Component Analysis (PCA)
Yahya et al. 2010 [37] | Chan-Vese active contour method | 1-D log-polar Gabor transform & Hamming distance
Jan et al. 2012 [42] | Hough transform, bi-valued adaptive threshold, & the Fourier series and radial gradients | Not described
Li et al. 2012 [39] | Random Sample Consensus (RANSAC) | Gabor filter
Li et al. 2012 [40] | Canny edge detection algorithm, circular Hough transform & rubber sheet model | Weighted co-occurrence phase histogram (WCPH) & Bhattacharyya distance
Jan et al. 2013 [38] | Hough transform, gray-level statistics, adaptive thresholding, and a geometrical transform | Not described
Sun et al. 2013 [43] | Scale-Invariant Feature Transform & active contour model | Histogram distance
Chowhan et al. 2011 [44] | Integro-differential operator & rubber sheet model | 2D spatial Gabor wavelet filters & Modified Fuzzy Hyperline Segment Neural Network
Bindra et al. 2012 [45] | 2-D wavelet filtering & rubber sheet model | Sobel operator and 1-D wavelet transform & Euclidean distance
Logannathan et al. 2012 [46] | Hough transform & Sobel transform | Wavelet Probabilistic Neural Network (WPNN) with Particle Swarm Optimization as training algorithm
Poornima et al. 2010 [47] | Artificial neural network with Integro-differential operator | Not described
Chapter 3: Design and Implementation
Page: 15
3.1 Introduction
Personal identification based on the iris biometric is one of the most suitable
and reliable methods with respect to performance and accuracy. However, the
reliability and accuracy of the method depend on the proper segmentation of the iris
from an eye image. Typically, iris recognition systems consist of four modules, viz.
iris segmentation, normalization, feature extraction and matching [49]. To achieve a
high-performance iris recognition system, proper segmentation of the iris is required.
For real-time applications, the computational time of the system should also be
considered, so the segmentation step is vital to the overall performance of the system.
Segmentation refers to the isolation of the iris region from an eye image by properly
detecting the iris inner and outer boundaries. In this project work, a new iris
segmentation approach is proposed; Fig. 5 shows the proposed segmentation
approach.
3.2 Pupil Detection
The first step of the proposed segmentation approach is to localize the pupil. With
the help of the pupil location, the image is cropped to a rectangle that holds the whole
eye region. Pupil localization is done by thresholding. Algorithm 1 describes the
process of pupil localization and boundary detection from an eye image.
Algorithm 1:
i. Create binary image with threshold value ≤ 30
Fig. 5: Proposed Segmentation Approach (flowchart: Input Image → Pupil Localization → Pupil Detection → Crop Image → Iris Detection → Eyelid Detection → Noise Detection → Segmented Iris)
ii. Perform the morphological operations 'dilate' and 'erode', followed by
flood fill.
iii. Apply a median filter in order to remove smaller regions.
iv. Determine the centroid of the detected pupil region (Cx, Cy).
v. Determine the horizontal radius of the pupil region.
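A minimal sketch of this thresholding-based pupil localization is given below. It is illustrative only: the morphological operations and median filtering of steps ii-iii are omitted, a synthetic test image stands in for a real CASIA eye image, and the function name is hypothetical.

```python
import numpy as np

def detect_pupil(gray, threshold=30):
    """Rough pupil localization by thresholding (steps i, iv and v of
    Algorithm 1; morphology and median filtering are omitted here)."""
    mask = gray <= threshold                   # step i: binary image
    ys, xs = np.nonzero(mask)                  # coordinates of dark pixels
    cx, cy = xs.mean(), ys.mean()              # step iv: centroid (Cx, Cy)
    # step v: horizontal radius = half the pupil width along the centroid row
    radius = mask[int(round(cy))].sum() / 2.0
    return cx, cy, radius

# synthetic eye image: light background with a dark pupil disk
img = np.full((100, 120), 150, dtype=np.uint8)
yy, xx = np.ogrid[:100, :120]
img[(xx - 60) ** 2 + (yy - 40) ** 2 <= 15 ** 2] = 10  # pupil at (60, 40), r = 15
cx, cy, r = detect_pupil(img)
```

The threshold of 30 follows step i of Algorithm 1; on real images the morphological clean-up is what makes the centroid and radius estimates reliable.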
The result of the pupil detection algorithm on the input image is shown in
Fig. 6.
3.3 Iris Detection
In this project work, the Circular Hough Transform (CHT) described in Algorithm
2 has been applied to the cropped image to detect the iris outer boundary.
Fig. 6: (a) Original image (b) Binary image (c) Image after flood fill (d) Image after morphological operations (e) Image after median filtering (f) Detected pupil region

The Circular Hough Transform is used to detect the presence of circular shapes in an
image, for example, detecting the number of circular discs in an image. Another
well-known application of the Circular Hough Transform is the detection of the
number of coconuts in an image [50]. The Circular Hough Transform uses the
parameterized equation of a circle for this purpose.
The equation of circle can be written as
(x − a)² + (y − b)² = r²
where (x, y) is a point on the circumference of the circle, (a, b) is the centre of the circle,
and r is the radius of the circle.
In parametric form, the equation of the circle can be written as
x = a + r · cos(θ)
y = b + r · sin(θ)
The Circular Hough Transform uses these equations to detect the presence of circular
objects in an image. The CHT algorithm used in this project work is defined as:
Algorithm 2:
i. Read an image file.
ii. Find edges in the image.
iii. Define a radius range to be used.
iv. For each edge point, draw a circle with that edge point as the centre and radius r,
and increment the number of votes by 1 for all coordinates in the accumulator space
that coincide with the circumference of the drawn circle; repeat this for every
radius in the defined range.
v. Find the maximum number of votes in the accumulator space.
vi. Plot the circle with parameters (r, a, b) corresponding to the maximum votes in
the accumulator space.
vii. The circle obtained is the desired circle, with r as the radius and (a, b) as the
centre.
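The voting procedure above can be sketched compactly (in Python with NumPy rather than MATLAB; the edge map below is synthetic and the 100-sample discretization of θ is an illustrative choice):

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Vote in an (b, a, r) accumulator for each edge point (Algorithm 2)
    and return the circle parameters with the maximum number of votes."""
    h, w = shape
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for x, y in edge_points:
        for k, r in enumerate(radii):
            # candidate centres lie on a circle of radius r around the edge point
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            np.add.at(acc, (b[ok], a[ok], k), 1)
    b, a, k = np.unravel_index(acc.argmax(), acc.shape)
    return a, b, radii[k]

# Synthetic edge map: 60 points on a circle centred at (40, 30) with radius 12
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = zip(np.round(40 + 12 * np.cos(t)).astype(int),
          np.round(30 + 12 * np.sin(t)).astype(int))
a, b, r = circular_hough(list(pts), (64, 80), radii=[10, 11, 12, 13, 14])
```

The maximum-vote bin recovers the centre and radius of the synthetic circle to within a pixel.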
The result of the iris detection step on different images of the considered dataset is
shown in figure 7.
3.4 Eyelid Detection
In most of the cases, top and bottom eyelid overlaps the iris region. For this proper
detection and isolation of eyelids is necessary for accurate iris segmentation. In
this project work, canny edge detection method and circle geometry has been
applied in order to isolate the eyelids.
The top and bottom eyelids are isolated by using Algorithm 3 and Algorithm 4
respectively
Algorithm 3: Top Eyelid Detection
i. Select three small rectangles from the top side of the given image, two on the
left and right of the detected iris and one on top of the detected pupil, as shown
in fig. 8.
Fig. 7: Detected Iris Boundaries
Fig. 8: Image with Selected rectangles
ii. Apply adaptive histogram equalization and Median Filtering to enhance
the selected portion.
iii. Detect the horizontal line by applying the Canny edge detection technique to each
of the rectangles and calculate the middle point of the detected line, as shown in
figure 9. The detected points will lie on the top eyelid edge.
iv. The curve passing through these detected points is drawn using circle geometry.
Circle geometry states that there is one and only one circle passing through
any three non-collinear points.
The result of Algorithm 3 is shown in fig. 10.
Fig. 9: Image with detected points
Fig. 10: Image with detected top eyelid
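The circle-geometry step can be sketched as a standard closed-form circumcircle computation (Python sketch; the three sample points below are illustrative):

```python
def circle_through(p1, p2, p3):
    """Centre and radius of the unique circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Closed-form solution of the two perpendicular-bisector equations
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), r

# Three points on the circle of radius 5 centred at the origin
centre, radius = circle_through((5, 0), (0, 5), (-5, 0))
```

The eyelid curve is then the arc of this circle drawn between the outermost detected points.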
Algorithm 4: Bottom Eyelid Detection
i. Select three small rectangles from the bottom side of the given image, two
on the left and right of the detected iris and one on the bottom of the detected pupil, as
shown in fig. 11.
ii. Apply adaptive histogram equalization and Median Filtering to enhance
the selected portion.
iii. Detect the horizontal line by applying the Canny edge detection technique to each
of the rectangles and calculate the middle point of the detected line, as shown in
figure 12. The detected points will lie on the bottom eyelid edge.
Fig. 11: Image with selected rectangles
Fig. 12: Image with detected points
iv. The curve passing through these detected points is drawn using circle geometry.
The result of Algorithm 4 is shown in fig. 13.
3.5 Noise Reduction
Eyelashes and reflections are removed from the segmented iris by a linear
thresholding method. The pseudo-code of the thresholding method is as follows:
[row, col] = size(img);
for i = 1:row
    for j = 1:col
        if ((j-xt)^2 + (i-yt)^2) > rt^2
            img(i,j) = 255;
        end
        if ((j-xb)^2 + (i-yb)^2) > rb^2
            img(i,j) = 255;
        end
        if ((j-xp)^2 + (i-yp)^2) < rp^2
            img(i,j) = 255;
        end
        if ((j-xi)^2 + (i-yi)^2) > ri^2
            img(i,j) = 255;
        end
    end
end
for i = 1:row
    for j = 1:col
        if ((j-xp)^2 + (i-yp)^2) > rp^2
            if ((j-xi)^2 + (i-yi)^2) < ri^2
                if img(i,j) > 240
                    img(i,j) = 255;
                end
                if img(i,j) <= 30
                    img(i,j) = 255;
                end
            end
        end
    end
end
Fig. 13: Image with detected bottom eyelid
where (xt, yt), (xb, yb), (xp, yp) and (xi, yi) are the centre coordinates of the top eyelid,
bottom eyelid, pupil and iris respectively, and rt, rb, rp and ri are the radii of the top
eyelid, bottom eyelid, pupil and iris respectively. The threshold values 30 and 240 were
selected by trial and error. The result of the whole iris segmentation approach on
various images is shown in fig. 14 and fig. 15.
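The same thresholding scheme can be written with boolean masks instead of nested loops; a sketch in Python with NumPy rather than MATLAB (the image and region parameters below are hypothetical):

```python
import numpy as np

def remove_noise(img, iris, pupil, top, bottom, lo=30, hi=240):
    """Blank everything outside the iris annulus, then blank pixels darker
    than `lo` (eyelashes) or brighter than `hi` (reflections) inside it.
    Each region is given as ((centre_x, centre_y), radius)."""
    rows, cols = img.shape
    i, j = np.mgrid[0:rows, 0:cols]           # i = row (y), j = column (x)
    (xt, yt), rt = top
    (xb, yb), rb = bottom
    (xp, yp), rp = pupil
    (xc, yc), ri = iris
    out = img.copy()
    out[(j - xt) ** 2 + (i - yt) ** 2 > rt ** 2] = 255   # above top eyelid
    out[(j - xb) ** 2 + (i - yb) ** 2 > rb ** 2] = 255   # below bottom eyelid
    out[(j - xp) ** 2 + (i - yp) ** 2 < rp ** 2] = 255   # inside pupil
    out[(j - xc) ** 2 + (i - yc) ** 2 > ri ** 2] = 255   # outside iris
    annulus = ((j - xp) ** 2 + (i - yp) ** 2 > rp ** 2) & \
              ((j - xc) ** 2 + (i - yc) ** 2 < ri ** 2)
    out[annulus & ((out > hi) | (out <= lo))] = 255      # reflections & lashes
    return out

# Hypothetical example: uniform 10x10 image with one dark "eyelash" pixel
img = np.full((10, 10), 100, dtype=np.uint8)
img[5, 8] = 10
out = remove_noise(img, iris=((5, 5), 4), pupil=((5, 5), 1),
                   top=((5, 5), 100), bottom=((5, 5), 100))
```

The vectorized form computes the same masks as the loop version but lets the array library do the per-pixel work.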
Fig 14: (a) Original Image (b) Localized Iris (c) Detected top and Bottom eyelid (d) Segmented Iris
Chapter 4: Experimental result and Discussion
Page: 23
The experiments were implemented in MATLAB 7.12.0 and executed on an Intel Core i3
at 2.4 GHz with 3 GB RAM. To evaluate the performance of the approach, the CASIA Iris
Lamp database is used [51]. This database consists of 16440 images from 411 persons.
Each image is an 8-bit grayscale image with a resolution of 480×640. A total of 250
left-eye images from 50 persons were taken randomly for the experimentation. The
results of the proposed iris segmentation approach on different eye images are shown
in fig. 15.
The performance of the proposed approach has been evaluated in terms of the running
time of the system and the segmentation accuracy. To validate the results obtained by the
proposed approach, they are compared with existing methods, viz. Daugman’s method
[26] and Masek’s method [52].
Fig 15: (a) Original Image (b) Localized Iris (c) Detected top and Bottom eyelid (d) Segmented Iris
The average running times of the system with the proposed approach and with the existing
methods are shown in Table 1. From this table, it is observed that the proposed
segmentation method is efficient compared to the other methods, viz. Daugman and Masek,
with respect to the running time for iris segmentation.
To calculate the segmentation accuracy, the radius and centroid of the iris and
pupil is calculated manually with the help of MATLAB Image tool and circle
Table 1: Running time for iris segmentation

Method: Masek [52]
Technique: Modified Canny and circular Hough Transform is used to detect the inner
and outer circle of the iris; isolation of top and bottom eyelids; linear thresholding
techniques to isolate reflections and eyelashes.
Min. time: 7.28 sec.   Max. time: 37.94 sec.   Avg. time for 200 images: 19.47 sec.

Method: Daugman [26]
Technique: Integro-differential operation is used to detect the iris inner and outer
boundary.
Min. time: 10.45 sec.   Max. time: 61.10 sec.   Avg. time for 200 images: 27.70 sec.

Method: Proposed Approach
Technique: Pupil localization by simple morphological operations with the help of
thresholding and median filtering; iris localization by Circular Hough Transform;
eyelashes and reflections removed by linear thresholding; top and bottom eyelids
isolated using Canny and circle geometry.
Min. time: 1.9 sec.   Max. time: 4.35 sec.   Avg. time for 200 images: 3.11 sec.
geometry. Fig. 16 shows the difference between the manually computed and the system
computed iris boundaries. The segmentation accuracy is calculated as:

Accuracy = [1 − (|R_im − R_is| / R_im + |R_pm − R_ps| / R_pm) / 2] × 100

where R_im and R_pm represent the radii of the iris outer boundary and the pupil
computed manually, and R_is and R_ps represent the radii of the iris outer boundary
and the pupil computed by the system. Table 2 shows the segmentation accuracy
of the proposed method on different images from the iris image database.
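Assuming the accuracy measure is read as one minus the mean relative radius error, it can be computed as follows (Python sketch; the radii below are hypothetical):

```python
def segmentation_accuracy(r_im, r_pm, r_is, r_ps):
    """Accuracy (%) = 1 minus the average relative error of the iris and
    pupil radii, where *_m are manual and *_s are system-computed radii."""
    err_iris = abs(r_im - r_is) / r_im
    err_pupil = abs(r_pm - r_ps) / r_pm
    return (1.0 - (err_iris + err_pupil) / 2.0) * 100.0

# Hypothetical radii in pixels: manual (110, 40), system (108, 41)
acc = segmentation_accuracy(110.0, 40.0, 108.0, 41.0)
```

A perfect match yields exactly 100%, and accuracy falls as the system radii drift from the manual ones.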
Fig 16: The difference between manually computed iris boundaries and those computed by
the proposed method. Red circles represent manually computed iris boundaries and blue
circles represent iris boundaries computed by the proposed approach.
Table 2: Segmentation Accuracy

Image              Segmentation Accuracy
‘S2001L01.JPEG’    98.63%
‘S2002L01.JPEG’    97.43%
‘S2003L06.JPEG’    96.95%
‘S2003L12.JPEG’    99.33%
‘S2003L13.JPEG’    98.91%
The average accuracy of the proposed method is 98.20%. From these results, it
is observed that the proposed segmentation approach achieves a high degree of
segmentation accuracy in a reasonable amount of time and is suitable for
incorporation into a real-life iris recognition system.
Chapter 5: User Manual
Page: 27
• The following screen is the starting screen which has five button options: load
image, localize iris, localize eyelids, remove noise and clear.
• To load an image, click on the “Load image” button and a dialogue box containing
a folder of various eye images will be displayed. Select and open the folder to
display the eye images as shown below:
• Select the image file which you want to segment and click on “Open” button.
• Selected eye image file will be displayed in the “Original Image” field.
• To localize the pupil and iris outer boundary, click on “Localize Iris” button
and image will be displayed in the “Localized Iris” field as shown below:
• To detect the top and bottom eyelids, click on “Localize Eyelids” button. The
detected top and bottom eyelids will be displayed in the “Detected Eyelids”
field.
• To remove the noises from the eye image, click on the “Remove Noises” button;
the display output will be the “Segmented Iris” of the selected eye image
along with its “Segmentation Time” as shown below:
Chapter 6: Conclusion and Future work
Page: 32
In this project work, a new iris segmentation approach has been proposed. Simple
morphological operations and two-dimensional median filtering techniques are used to
detect the pupil. The iris outer boundary is detected by using the Circular Hough
Transform (CHT). The Canny edge detection method with circle geometry is
applied to isolate the upper and lower eyelids. Noise such as eyelashes and reflections
is removed through linear thresholding. From the experimental results, it is
observed that the proposed method is more efficient compared to the existing methods, viz.
Daugman and Masek, for the considered dataset. It is also observed that the proposed
method takes a reasonable amount of time to perform iris segmentation.
Future work would be to test the accuracy of the proposed method over a larger
dataset, and to develop an iris recognition system with this segmentation approach
in order to validate the proposed method in terms of recognition accuracy.
Chapter 7: References
Page: 33
[1] S. A. Sahmoud, et al., “Efficient iris segmentation method in unconstrained
environments”, Pattern Recognition, Elsevier, pp. 3174-3185, 2013.
[2] T. Lefevre, et al., “Effective Elliptic Fitting for Iris Normalization”, Computer Vision
and Image Understanding, Elsevier, 2013.
[3] S. Sun, et al., “Non-cooperative Bovine Iris Recognition via SIFT”,
Neurocomputing, Elsevier, pp. 310-317, 2013.
[4] J. G. Daugman, “High Confidence Visual Recognition of Persons by a Test of
Statistical Independence”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, pp. 1148-
1161, 1993.
[5] R. P. Wildes, “Iris Recognition: An Emerging Biometric Technology”, Proc. IEEE,
Vol. 85(9), pp. 1348-1363, 1997.
[6] I. Matveev, “Iris segmentation system based on approximate feature detection with
subsequent refinements”, Computer Society, IEEE, pp. 1704-1709, 2014.
[7] Libor Masek, Peter Kovesi. MATLAB Source Code for a Biometric Identification
System Based on Iris Patterns. The School of Computer Science and Software
Engineering, The University of Western Australia. 2003.
[8] L. Ma, Y. Wang, D. Zhang, “Efficient iris recognition by characterizing key local
variations”, IEEE Trans. Image Process. Vol. 13(6), pp. 739-750, 2004.
[9] R. T. A. Zubi, D. I. A. Nadi, “Automated personal identification system based on
human iris analysis”, Pattern Anal. Applic., pp. 147-164, 2007.
[10] E. A. Daoud, “A New Iris Localization method based on the competitive chords”,
SlVip, 6, pp. 547-555, 2012.
[11] S. A. Sahmoud, I. S. Abuhaiba, “Efficient iris segmentation method in unconstrained
environments”, Pattern Recognition, Elsevier, 2013.
[12] K. Y. Shin, et al., “New Iris Recognition Method for Noisy Iris Images”, Pattern
Recognition Letters, 33, pp. 991-999, 2012.
[13] X. He, P. Shi, “A new segmentation approach for iris recognition based on hand-held
capture device”, Pattern Recognition, Vol. 40, Elsevier, pp. 1326-1333, 2007.
[14] M. P. Stephen, et al., “Adaptive histogram equalization and its variations”, Computer
Visions, Graphics and Image Processing, Vol. 39, pp. 355-368, 1987.
[15] A. Walid, K Lotfi, M. Nouri, “A Fast and Accurate Eyelids and Eyelashes Detection
Approach for Iris Segmentation”, Journal of Multimedia Processing and
Technologies, Vol. 3, No. 4, pp. 166-173, 2012.
[16] Nabti M. et al.: An effective and fast iris recognition system based on a combined
multi-scale feature extraction technique. Pattern Recognition, Elsevier, 41, 2008, pp.
868-879.
[17] Alice I: Biometric Recognition: Security and Privacy concern, IEEE Security and
Privacy, 2003.
[18] Ramkumar R. P.: Novel Iris Recognition Algorithm. ICCCNT, IEEE, 2012, pp. 1-6.
[19] Jain A. K., et al.: An Introduction to Biometric Recognition. Trans. IEEE, Circuits
and Systems for Video Technology, 14, 2004, pp. 4-20.
[20] Huang J.: A New Iris Segmentation Method for Recognition. In: Proc. of the 17th
International Conference on Pattern Recognition, IEEE, 2004.
[21] Jain A. K.: An Introduction to Biometric Recognition. Trans. on Circuits and Systems
for Video Technology, IEEE, 14, 2004, pp. 4-20.
[22] Abiyev R. H., Altunkaya K.: Personal Iris Recognition Using Neural Network.
International Journal of Security and its Applications, 2, 2008, pp. 41-50.
[23] Jhamb M. et al.: IRIS Based Human Recognition System. Int. Journal of Biometrics
and Bioinformatics (IJBB), 5, 2011, pp. 1-13.
[24] Daugman J. G.: High Confidence Visual Recognition of person by a test of Statistical
Independence. Trans. on Pattern Analysis and Machine Intelligence, IEEE, 15, 1993,
pp. 1148-1161.
[25] Devi, Ningthoujam Sunita, and K. Hemachandran. "Automatic Face Recognition
System using Pattern Recognition Techniques: A Survey."
[26] Daugman J. G.: Biometric Personal Identification System Based on Iris Analysis,
United States Patent, no. 5291560, 1994.
[27] Wildes R. P.: Iris Recognition: An Emerging Biometric Technology. In: Proc. of the
IEEE, 85, 1997, pp. 1348-1363.
[28] Daugman J. G.: High Confidence Visual Recognition of person by a test of Statistical
Independence. Trans. on Pattern Analysis and Machine Intelligence, IEEE, 15, 1993,
pp. 1148-1161.
[29] Boles W., Boashash B.: A Human Identification Technique Using Images of the Iris
and Wavelet Transform. IEEE Trans. Signal Processing, 46, 1998, pp. 1185-1188.
[30] Ma L. et al.: Personal Identification Based on Iris Texture Analysis. Trans. on Pattern
Analysis And Machine Intelligence, IEEE, 25, 2003, pp.1519-1533.
[31] Daouk C. H. et al.: Iris Recognition, In: Proc. of the 2nd IEEE Int. Symposium on
Signal Processing and Information Technology, 2002, pp. 558-562.
[32] Chen W. S. et al.: Personal Identification Technique based on Human Iris
Recognition with Wavelet Transform. Int. Conf. on Acoustics, Speech and Signal
Processing, IEEE, 2, 2005, pp. ii - 949.
[33] Barzegar N. et al.: A New Approach for Iris Localization in Iris Recognition Systems.
Int. Conf. on Computer Systems and Applications, IEEE, 2008, pp. 516-523.
[34] Yahya A. E. et al.: A New Technique for Iris Localization in Iris Recognition
Systems. Information Technology Journals, 7, 2008, pp. 924-929
[35] Abra O. E. K. et al.: Optical Iris Localization Approach. In: proc. of the IEEE Int.
Conf. on Computer Systems and Applications, 2009, pp. 563-566.
[36] Murty P. S. R. C. et al.: Iris Recognition System using Principal Component of
Texture Characteristics. International Journal of Computing Science and
Communication Technologies, 2, 2009, pp. 343-348.
[37] Yahya A. E. et al.: Accurate Iris Segmentation Method for Non-Cooperative Iris
Recognition System. Journal of Computer Science, 6, 2010, pp. 492-497.
[38] Jan F. et al.: Iris localization in frontal eye images for less constrained iris recognition
systems, Digital Signal Processing, Elsevier, 22, 2012, pp. 971-986.
[39] Li P. et al.: Iris Recognition in non-ideal imaging conditions, Pattern Recognition
Letters, Elsevier, 33, 2012, pp. 1012-1018.
[40] Li. P. et al.: Weighted Co-occurrence Phase Histogram for Iris Recognition, Pattern
Recognition Letters, Elsevier, 33, 2012, pp. 1000-1005.
[41] Raffei A. F. et al.: Feature Extraction for Different Distances of Visible Reflection
Iris Using Multiscale Sparse Representation of Local Radon Transform, Pattern
Recognition, Elsevier, 46, 2013, pp. 2622-2633.
[42] Jan F. et al.: Reliable Iris Localization Using Hough Transform, Histogram-bisection,
and Eccentricity. Signal Processing, Elsevier, 93, 2013, pp. 230-241
[43] Sun S. et al.: Non-cooperative bovine iris recognition via SIFT. Neurocomputing,
Elsevier, 120, 2013, pp. 310-317.
[44] Chowhan S. S. et al.: Iris Recognition Using Modified Fuzzy Hyperline Segment
Neural Network. Journal of Computing, 3, 2011, pp. 72-77.
[45] Bindra G. S. et al.: Feature Based Iris Recognition System functioning on Extraction
of 2D Features. International Conference on System Engineering and Technology,
IEEE, 2012, 1, pp. 17-19.
[46] Logannathan B. et al.: Iris Authentication Using PSO, International Journal of
Computer& Organization Trends, 2, 2012, pp. 10-15.
[47] Poornima S.: Comparison and a Neural Network Approach for Iris Localization.
Procedia Computer Science, Elsevier, 2, 2010, pp. 127–132.
[48] Proenca, H.: Iris segmentation methodology for non cooperative recognition. In: Proc.
of Vis. Image Signal Process, IEE, 153, 2006, pp. 199-205.
[49] Kumar V. et al.: Importance of Statistical Measures in Digital Image Processing.
International Journal of Emerging Technology and Advanced Engineering, 2, 2012,
pp. 56-62.
[50] Daugman J.: New Methods in Iris Recognition. Trans. on Systems, Man, and
Cybernetics - Part B: Cybernetics, IEEE, 37, 2007, pp. 1167-1175.
[51] Huang Y. et al.: An Efficient Iris Recognition System. In: Proc. of the First Int. Conf.
on Machine Learning and Cybernetics, IEEE, 1, 2002, pp. 450-454.
[52] Zhou Z. et al.: A New Iris Recognition Method Based on Gabor Wavelet Neural
Network. Int. Conf. on Intelligent Information Hiding and Multimedia Signal
Processing, Computer Society, IEEE, 2008, pp. 1101-1104.