Faculty of Computers and Information
Computer Science Department

Artificial Intelligence Techniques for Ocular Pattern Classification

Amr Ahmed Sabry Abdel Rahman Ghoneim

Supervised by:

Professor Atef Zaki Ghalwash
Professor of Computer Science, The Faculty of Computers & Information, Helwan University, Cairo, Egypt.

Associate Professor Aliaa Abdel Haleim Abdel Razik Youssif
Associate Professor of Computer Science, The Faculty of Computers & Information, Helwan University, Cairo, Egypt.

Assistant Professor Hosam El Dean Mahmoud Bakeir
Assistant Professor of Ophthalmology, The Faculty of Medicine, Cairo University, Cairo, Egypt.

A thesis submitted to Helwan University in accordance with the requirements for the degree of Master of Science in Computer Science at the Faculty of Computers & Information, Department of Computer Science.

May 2007
In the name of Allah, Most Gracious, Most Merciful

"The Messenger believeth in what hath been revealed to him from his Lord, as do the men of faith. Each one (of them) believeth in Allah, His angels, His Books, and His Messengers. We make no distinction (they say) between one and another of His Messengers. And they say: We hear, and we obey; (we seek) Thy forgiveness, our Lord, and to Thee is the end of all journeys. (285) On no soul doth Allah place a burden greater than it can bear. It gets every good that it earns, and it suffers every ill that it earns. (Pray:) Our Lord! Condemn us not if we forget or fall into error; our Lord! Lay not on us a burden like that which Thou didst lay on those before us; our Lord! Lay not on us a burden greater than we have strength to bear. Blot out our sins, and grant us forgiveness. Have mercy on us. Thou art our Protector; help us against those who stand against Faith. (286)"

The Holy Quran: Chapter 2, Al-Baqarah, 285-286

Abu Huraira (Allah be pleased with him) reported Allah's Messenger (may peace and blessings be upon him) as saying: "When a man dies, his acts come to an end, but three: recurring charity, or knowledge (by which people) benefit, or a pious son, who prays for him (for the deceased)."

Sahih Muslim
To the memory of my colleagues
Ibraheim Arafat,
Mustapha Gamal,
& Khaled Abdel Moneim

And to my beloved city, Cairo, a city that never fails to make an impression; an everlasting, unique impression.
The research described in this thesis was carried out at the Faculty of Computers & Information, Helwan University, Cairo, The Arab Republic of Egypt.

Copyright © 2007 by Amr S. Ghoneim. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the author. Any trademarks in this publication are property of their respective owners.
Contents
Abstract & Keywords. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Acknowledgment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Declaration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
List of Publications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
List of Figures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
List of Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
Awards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
Medical Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxx
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Thesis Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Eye Anatomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Fundus Photography and Eye Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6.1 Diabetic Retinopathies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6.2 Glaucoma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.6.3 Detecting Retina Landmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.4 Fundus Photography Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Preprocessing 13
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Fundamentals of Retinal Digital Image Representation . . . . . . . . . . . . . . 13
2.3 Mask Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Illumination Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Contrast Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.1 Green Band Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.2 Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5.3 Local Contrast Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.4 Adaptive Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . 23
2.5.5 Background Subtraction of Retinal Blood Vessels (BSRBV) . . . . . . . . . 24
2.5.6 Estimation of Background Luminosity and Contrast Variability (EBLCV) . . . . . . . . . 24
2.6 Color Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.6.1 Gray-World Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6.2 Comprehensive Normalization . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.6.3 Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.6.4 Histogram Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7 Image Quality Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.7.1 Convolution of Global Intensity Histograms . . . . . . . . . . . . . . 32
2.7.2 Edge Magnitude and Local Pixel Intensity Distributions . . . . . 33
2.7.3 Asymmetry of Histograms Derived from Edge Maps . . . . . . . 34
2.7.4 Chromaticity Values Distribution . . . . . . . . . . . . . . . . . . . . . . 37
2.7.5 Clarity and Field Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3 Automatic Localization of the Optic Disc 43
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Properties of the Optic Disc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Automatic Optic Disc Localization: A Literature Review . . . . . . . . . . . . . 46
3.3.1 Optic Disc Localization versus Disc Boundary Detection . . . . 46
3.3.2 Existing Automatic OD-Localization Algorithms Review . . . . 46
3.3.3 Alternative OD-Localization Algorithms . . . . . . . . . . . . . . . . . 58
3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4 Automatic Segmentation of the Retinal Vasculature 61
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.2 Properties of the Retinal Vasculature . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3 Automatic Retinal Vasculature Segmentation: A Literature Review . . . . 62
4.3.1 Detection of Blood Vessels in Retinal Images using Two-Dimensional Matched Filters . . . . . . . . . 63
4.3.2 Existing Automatic Retinal Vasculature Segmentation Algorithms Review . . . . . . . . . 66
4.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5 Automatic Detection of Hard Exudates 75
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.2 Properties of the Diabetic Retinopathy Lesions . . . . . . . . . . . . . . . . . . . . 76
5.3 Automatic Diabetic Retinopathy Lesions Detection: A Literature Review . . . . . . . . . 78
5.3.1 Existing Automatic Bright Lesions Detection Algorithms Review . . . . . . . . . 79
5.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6 Comparative Studies 85
6.1 A Comparative Study of Mask Generation Methods . . . . . . . . . . . . . . . . . 85
6.1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.1.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.1.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.1.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.2 A Comparative Study of Illumination Equalization Methods . . . . . . . . . . 88
6.2.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.2.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.3 A Comparative Study of Contrast Enhancement Methods . . . . . . . . . . . . 90
6.3.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.3.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.3.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.4 A Comparative Study of Color Normalization Methods . . . . . . . . . . . . . . 95
6.4.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.4.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.4.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 96
7 The Developed Automatic DR Screening System Components 100
7.1 Optic Disc Localization by Means of a Vessels Direction Matched Filter . . . . . . . . . 100
7.1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.1.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7.1.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.1.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.2 Retinal Vasculature Segmentation using a Large-Scale Support Vector Machine . . . . . . . . . 109
7.2.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.2.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.3 Hard Exudates Detection using a Large-Scale Support Vector Machine . . . . . . . . . 118
7.3.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.3.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.3.4 Observations and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 123
8 Conclusions & Future Work 125
8.1 Preprocessing of Digital Retinal Fundus Images . . . . . . . . . . . . . . . . . . . . 125
8.2 Retinal Landmarks Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
8.3 Diabetic Retinopathies Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Appendix A Eye-Related Images 129
A.1 Fundus Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
A.2 Fluorescein Angiograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
A.3 Indocyanine Green Dye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
A.4 Hartmann-Shack Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
A.5 Iris Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Appendix B Diabetes, Diabetic Retinopathy and their Prevalence in Egypt 135
B.1 Diabetes Mellitus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
B.2 Diabetes in Egypt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
B.3 Diabetic Retinopathy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
B.4 Diabetic Retinopathy in Egypt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Appendix C Fundus Photography Datasets 142
C.1 DRIVE Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
C.2 STARE Project Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Appendix D Color Models 147
D.1 HSI Color Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
D.1.1 Converting Colors from RGB to HIS . . . . . . . . . . . . . . . . . . . . 148
D.1.2 Converting Colors from HSI to RGB . . . . . . . . . . . . . . . . . . . . 148
D.2 Chromaticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Appendix E A Threshold Selection Method from Gray-Level Histograms 151
References 154
(Abstract & Keywords in Arabic) (Cover page in Arabic)
Abstract
Diabetes is a disease that affects about 5.5% of the global population. In Egypt, nearly 9 million people (over 13% of the population aged 20 years or older) will have diabetes by the year 2025, and recent surveys from Oman and Pakistan suggest that this may be a regional phenomenon. About 10% of all diabetic patients have diabetic retinopathy (DR), one of the most prevalent complications of diabetes and the primary cause of blindness in the Western world; this is likely also true in Hong Kong and Egypt. Moreover, the diabetic population is expected to have a 25 times greater risk of going blind than the non-diabetic population. Given the growing number of patients, and with insufficient ophthalmologists to screen them all, automatic screening can reduce the threat of blindness by 50%, provide considerable cost savings, and decrease the pressure on available infrastructure and resources.
Retinal photography is significantly more effective than direct ophthalmoscopy in detecting DR. Digital fundus images do not require injecting fluorescein or indocyanine green dye into the body, and thus do not require trained personnel. Digital fundus images are routinely analyzed by screening systems; owing to the acquisition process, however, these images are very often of a poor quality that hinders further analysis. State-of-the-art studies still struggle with the issue of preprocessing retinal images, mainly due to the lack of literature reviews and comparative studies. Furthermore, available preprocessing methods are not being evaluated on large, publicly available benchmark datasets.
The first part of this dissertation discusses four major preprocessing methodologies described in the literature (mask generation, illumination equalization, contrast enhancement, and color normalization) and their effect on detecting the retinal anatomy. For each methodology, a comparative performance evaluation based on proposed appropriate metrics is carried out among the available methods, using two publicly available fundus datasets. In addition, we propose the comprehensive normalization and a local contrast enhancement preceded by illumination equalization, which
recorded acceptable results when applied for color normalization and contrast enhancement, respectively.
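As a concrete illustration of one of these preprocessing methodologies, a common form of illumination equalization subtracts a smoothed local-mean background from the green band and restores the global mean. The sketch below is a minimal numpy version written under that assumption; the odd window size and the box-filter (integral-image) implementation are illustrative choices, not necessarily the exact form of Eq. 2.1 in the thesis.

```python
import numpy as np

def illumination_equalize(green, window=41):
    """Illumination equalization: subtract a local-mean background
    estimate and add back the global mean. `green` is a 2-D float
    array (the green band); `window` must be odd."""
    H, W = green.shape
    pad = window // 2
    padded = np.pad(green, pad, mode="edge")
    # Box-filter local mean via a summed-area table (integral image).
    sat = np.zeros((H + window, W + window))
    sat[1:, 1:] = padded.cumsum(0).cumsum(1)
    local_mean = (sat[window:, window:] - sat[:-window, window:]
                  - sat[window:, :-window] + sat[:-window, :-window]) / window ** 2
    return green - local_mean + green.mean()
```

On a uniformly lit image the correction is a no-op; on an unevenly lit one it flattens the slow background variation while preserving local vessel contrast.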
Detecting the retinal landmarks (the vasculature, optic disc, and macula) gives a framework from which automated analysis and human interpretation of the retina proceed, and will therefore greatly aid the future detection, and hence quantification, of diseases in these regions. In addition, recognizing the main components provides criteria for discarding images whose quality is too poor for the assessment of retinopathy.
The second part of the dissertation deals with Optic Disc (OD) detection as a main step in developing automated screening systems for diabetic retinopathy. We present a method to automatically detect the position of the OD in digital retinal fundus images based on matching the expected directional pattern of the retinal blood vessels; a simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity. The proposed method was evaluated using a subset of the STARE project's dataset containing 81 fundus images of both normal and diseased retinas, initially used by OD-detection methods in the literature. The OD center was detected correctly in 80 out of the 81 images (98.77%). In addition, the OD center was detected correctly in all 40 images (100%) of the publicly available DRIVE dataset.
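The vessels-direction matching idea can be sketched as a template search: estimate a per-pixel vessel-direction map, then slide a small template of the directions expected near the OD and keep the position with the smallest accumulated angular difference. This is a simplified toy version, not the thesis algorithm itself; the template contents, window size, and exhaustive search are illustrative assumptions.

```python
import numpy as np

def locate_od(direction_map, template):
    """Slide `template` (expected vessel angles near the OD, in degrees)
    over `direction_map` and return the centre of the best-matching
    window, scored by summed angular difference."""
    th, tw = template.shape
    H, W = direction_map.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            patch = direction_map[y:y + th, x:x + tw]
            diff = np.abs(patch - template)
            diff = np.minimum(diff, 180.0 - diff)  # angles are modulo 180 deg
            score = diff.sum()
            if score < best_score:
                best_score = score
                best_pos = (y + th // 2, x + tw // 2)
    return best_pos
```

With a synthetic direction field radiating from a known point, the search recovers that point; in the real method the direction map would come from the segmented vasculature rather than be given directly.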
The third part of the dissertation deals with Retinal Vasculature (RV) segmentation as another basic foundation of retinal screening systems, since the RV acts as the main landmark for further analysis. Recently, supervised classification has proved more efficient and accurate for the segmentation process, and novel features showing high separability between the vessel and non-vessel classes have been used in the literature. This work utilizes a large-scale support vector machine for automatic segmentation of the RV, using as pixel features a mixture of the 2-D Gabor wavelet, top-hat, and Hessian-based enhancements. The presented method noticeably reduces the number of training pixels, since only 2,000 pixels, instead of the 1 million presented in recent literature studies, are needed for training. As a result, the average training time drops to 3.75 seconds instead of the 9 hours previously recorded in the literature, and classifying an image takes only 30 seconds. Small training sets and
efficient training times are critical for systems that constantly need readjustment and tuning on various datasets. The publicly available benchmark DRIVE dataset was used to evaluate the performance of the presented method. Experiments reveal that the area under the receiver operating characteristic curve (AUC) reached 0.9537, which is highly comparable to previously reported AUCs ranging from 0.7878 to 0.9614.
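To illustrate why a small training sample keeps training fast, the sketch below trains a plain linear SVM with the Pegasos sub-gradient method on a few hundred synthetic two-feature "pixels". The thesis method uses a large-scale SVM over 2-D Gabor, top-hat, and Hessian features; the linear kernel, the synthetic feature values, and the hyper-parameters here are stand-in assumptions, purely for demonstration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.
    X: (n, d) pixel feature vectors; y: labels in {-1, +1}.
    With only a few thousand samples this runs in well under a second,
    which is the point the abstract makes about small training sets."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)             # regularisation shrink
            if margin < 1:                     # hinge-loss violation
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def predict(w, b, X):
    """Classify rows of X as vessel (+1) or non-vessel (-1)."""
    return np.where(X @ w + b >= 0.0, 1, -1)
```

On two well-separated synthetic clusters (standing in for vessel and background pixel features), the classifier reaches near-perfect training accuracy in a fraction of a second.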
Finally, the fourth part of the presented work deals with the automated detection of hard exudates as a main manifestation of diabetic retinopathy. Methods dealing with the detection of retinal bright lesions in general were reviewed. Then, a method based on the response of the large-scale support vector machine used for RV segmentation was proposed. The method uses a closed, inverted version of the large-scale support vector machine response to determine potential exudate regions according to the properties of each region. The response image was segmented into regions using the watershed algorithm. The proposed method achieved an accuracy of 100% for detecting hard exudates, and an accuracy of 90% for indicating images not containing any form of bright lesions.
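The region-based selection step can be approximated as: threshold the (inverted) classifier response, label connected regions, and keep only regions whose properties pass a test. The thesis segments regions with the watershed algorithm; the sketch below substitutes a simple 4-connected flood-fill labelling and an area test, since a full watershed is beyond a short example, and the threshold and minimum area are assumed values.

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a boolean mask via BFS flood fill.
    Returns (label image, number of regions); labels start at 1."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        labels[y, x] = current
        q = deque([(y, x)])
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

def bright_region_candidates(image, thresh, min_area):
    """Threshold the response image, label regions, and keep those whose
    area passes a simple per-region property test."""
    labels, n = label_regions(image > thresh)
    keep = [i for i in range(1, n + 1) if (labels == i).sum() >= min_area]
    return labels, keep
```

In the actual method the per-region test would consider the properties described in the thesis, not just area, and the input would be the closed inverted SVM response.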
Keywords: Medical Image Processing, Artificial Intelligence, Preprocessing, Retinal/Fundus Image Analysis, Optic Disc Localization, Vessels Segmentation, Exudates Detection, Diabetic Retinopathies Detection.
Acknowledgment
For their tireless support, I thank Mom, Dad, my sister Ghada, and my brother Ayman. I am sure the success of this work would make them delighted.

For many years of priceless advice and wisdom which shaped my development as a researcher, I thank my supervisors Prof. Atef Z. Ghalwash and Prof. Aliaa A. Youssif. I owe them much for their assistance in all aspects of my work, providing me great help on various topics, exchanging ideas on both academic and non-academic matters, and for a wonderful friendship. Thank you very much!

Special thanks are given to the consultant ophthalmologist Dr. Hosam El Dean Bakeir for his kind supervision and medical expertise.

For taking the time to believe in me and investing their minds and hearts into teaching me how to be a better person, I thank my teachers, the Lecturers & Assistants at the Faculty of Computers & Information (FCI), Helwan University, who have shaped my mind and made a thoughtful impact on my development over the years. Of them, I wish to express my gratitude to Dr. Mohamed A. Belal (currently at the Faculty of Science and Information Technology, Al Zaytoonah University, Amman, Jordan) for supporting me in Computational Intelligence, and also to Dr. Waleed A. Yousef, who supported me in Linear Algebra.

I would like to thank many of my colleagues, the Teaching Assistants at both departments in my faculty, for their support and understanding.
I would like to deeply thank my true friends from all grades, especially my class (the 2002 graduates), and both the 2005 and 2006 graduates, for always giving me support and encouragement, and helping me keep my spirits up. And I shall never forget my students (hopefully the 2007 and 2008 graduates) for being so patient, understanding, and loving. I acknowledge that without all my FCI Helwan friends, my life is empty. You kept me going, guys!!
I am also indebted to many researchers who supported me with various resources during all stages of writing, and so special thanks are due to:

Ayman S. Ghoneim; a Teaching Assistant at the Operations Research Dept., the Faculty of Computers & Information, Cairo Univ. (now an M.Sc. student at the School of Information Technology & Electrical Engineering (ITEE), Australian Defence Force Academy (ADFA), the University of New South Wales (UNSW), Canberra, Australia).

Amany Abdel Haleim & Bassam Morsy; both Teaching Assistants at the Information Systems and Computer Science Departments respectively, the Faculty of Computers & Information, Helwan Univ. (currently students at the Department of Electrical and Computer Engineering, University of Victoria, Victoria, British Columbia, Canada).

Ghada Khoriba & Ayman Ezzat; both Teaching Assistants at the Computer Science Department, the Faculty of Computers & Information, Helwan Univ. (currently Ph.D. students at the Graduate School of Systems and Information Engineering, Tsukuba University, Tsukuba, Japan).

Mohamed Ali; a Teaching Assistant at the Bioengineering Dept., the Faculty of Engineering, Helwan University.
The work of many researchers in this field made this work possible, but my direct contacts with some of them, who kindly answered some technical questions pertaining to their work, have influenced my thinking a great deal, and so I wish to express my sincere thanks to:

Chanjira Sinthanayothin; the National Electronics and Computer Technology Centre (NECTEC), Thailand.

Subhasis Chaudhuri; Professor & Head, Dept. of Electrical Engineering, Indian Institute of Technology (IIT), Powai, Bombay, India.

Sarah A. Barman; Senior Lecturer, Digital Imaging Research Centre, Kingston University, London, UK.

Norman Katz; CEO of IP Consulting (a custom enterprise software company), San Diego, California, USA.

Langis Gagnon; Associate Professor, Department of Computer and Electrical Engineering, Université Laval, Quebec, Canada.

Meindert Niemeijer; Ph.D. Student, Image Sciences Institute (ISI), University Medical Center Utrecht, Utrecht, the Netherlands.

And finally, Stephen R. Aylward; an Assistant Professor in the Department of Radiology and an adjunct Assistant Professor in the Department of Computer Science, College of Arts & Sciences at the University of North Carolina at Chapel Hill, USA. He was the Associate Editor responsible for coordinating the review of my first IEEE Transactions on Medical Imaging paper.
Declaration

I declare that the work in this dissertation was carried out in accordance with the Regulations of Helwan University. The work is original except where indicated by special reference in the text, and no part of the dissertation has been submitted for any other degree. The dissertation has not been presented to any other university for examination either in the Arab Republic of Egypt or abroad.

Amr S. Ghoneim
List of Publications

In peer-reviewed journals:

[1] Aliaa A. A. Youssif, Atef Z. Ghalwash, and Amr S. Ghoneim, "Optic Disc Detection from Normalized Digital Fundus Images by Means of a Vessels Direction Matched Filter," IEEE Transactions on Medical Imaging, accepted for publication (in press).

[2] Aliaa A. A. Youssif, Atef Z. Ghalwash, and Amr S. Ghoneim, "Automatic Segmentation of the Retinal Vasculature using a Large-Scale Support Vector Machine," IEEE Transactions on Medical Imaging, submitted for publication.
In international conference proceedings:

[3] Aliaa A. A. Youssif, Atef Z. Ghalwash, and Amr S. Ghoneim, "Comparative Study of Contrast Enhancement and Illumination Equalization Methods for Retinal Vasculature Segmentation," in: Proceedings of the Third Cairo International Biomedical Engineering Conf. (CIBEC'06), Cairo, Egypt, December 21-24, 2006.

[4] Aliaa A. A. Youssif, Atef Z. Ghalwash, and Amr S. Ghoneim, "A Comparative Evaluation of Preprocessing Methods for Automatic Detection of Retinal Anatomy," in: Proceedings of the Fifth International Conference on Informatics and Systems (INFOS2007), Cairo, Egypt, pp. 24-31, March 24-26, 2007.
List of Figures
Figure 1.1  A typical generic automatic eye screening system; the figure highlights the modules that will be included in our research (the light grey blocks point out modules that are out of the scope of this work). . . . . . . 5
Figure 1.2  Simplified diagram of a horizontal cross section of the human eye. [4] . . . . . . 7
Figure 1.3  (a) A typical retinal image from the right eye. (b) Diagram of the retina. [1] . . . . . . 8
Figure 1.4  (a) A normal optic disc. (b) Glaucomatous optic disc. [15] . . . . . . 11
Figure 2.1  (a) The area of the retina captured in the photograph with respect to the different FOVs. (b) The computation of scale according to the FOV geometry. [19] . . . . . . 15
Figure 2.2  A typical mask for a fundus image. [24] . . . . . . 15
Figure 2.3  (a) Typical retinal image. [24] (b) The green image (green band) of 'a'. (c) The smoothed local average intensity image of 'b' using a 40 × 40 window. (d) Illumination-equalized version of 'b' using Eq. 2.1. . . . . . . 17
Figure 2.4  (a) A typical RGB colored fundus image. (b) Red component image. (c) Green component. (d) Blue component. [27] . . . . . . 19
Figure 2.5  (a) Typical retinal image. [24] (b) Contrast-enhanced version of 'a' by applying histogram equalization to each R, G, B band separately. (c) Histogram equalization applied to the Intensity component of 'a'. (d) Color local contrast enhancement of each R, G, B band of 'a' separately. (e) Color local contrast enhancement of the Intensity component of 'a'. . . . . . . 22
Figure 2.6  (a) The inverted green channel of the retinal image shown in '2.3(a)'. (b) Illumination-equalized version of 'a'. (c) Adaptive histogram equalization applied to image 'b' using Eq. 2.9. . . . . . . 23
Figure 2.7  (a) The background subtraction of retinal blood vessels and (b) the estimation of background luminosity and contrast variability enhancements; both methods are applied to the retinal image shown in '2.3(a)'. . . . . . . 25
Figure 2.8
The gray-world (a, c)and comprehensive (b, d)normalizations of
2.5(a) and 2.3(a) respectively. The normalization process was
repeated for 5 iterations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
28
Figure 2.9
The summarized procedure (shown in blue) for applying
histogram specification to a given input image using the histogram
of a reference image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
30
Figure 2.10
(a) Reference image. [24] (b) Typical retinal image. (c) Color normalized version of 'b' using the histogram of 'a' by applying
histogram specification separately using each R, G, B band of both
images. (d) Histogram specification applied using the Intensity component of both images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
31
Figure 2.11 Scatter plot showing the separability of the three classes "Good
image", "Fair image" and "Bad image". [37] . . . . . . . . . . . . . . . . . . .35
Figure 2.12
The Rayleigh distribution, which is a continuous probability
distribution that usually arises when a two-dimensional vector (e.g.
wind velocity) has its two orthogonal components normally and
independently distributed. The absolute value (e.g. wind speed) will
then have a Rayleigh distribution. [40] . . . . . . . . . . . . . . . . . . . . . . . .
36
Figure 2.13 Chromaticity space (r and g values) plots. (a) No normalization. (b)
Histogram specification. [22] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
Figure 2.14
(a) The method of macular vessel length detection. (b) The field
definition metrics [41]. Constraints are expressed in multiples of
disc diameters DD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
40
Figure 3.1
(a) Swollen nerve, showing a distorted size and shape. (b) Nerve that
is completely obscured by hemorrhaging. (c) Bright circular lesion
that looks similar to an optic nerve. (d) Retina containing lesions of
the same brightness as the nerve. [18, 27] . . . . . . . . . . . . . . . . . . . . . .
44
Figure 3.2 A fundus diagnosed as having high-severity retinal/sub-retinal
exudates. [27] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Figure 3.3 A typical healthy fundus image showing the properties of a normal
optic disc. [24] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Figure 3.4
(a) Adaptive local contrast enhancement applied to the intensity image of the retinal fundus in Figure 3.3. (b) The variance image of
'a'. (c) The average variances of 'b'. (d) The OD location (white
cross) determined as the area of highest average variation in
intensity values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
47
Figure 3.5
An example of the training set of images: ten intensity images,
manually cropped around the OD from the DRIVE dataset [24], that
can be used to create the OD model. . . . . . . . . . . . . . . . . . . . . . . . . .
49
Figure 3.6 The OD template image used by Alireza Osareh [1, 10]. . . . . . . . . . . 50
Figure 3.7 Five-level pyramidal decomposition applied to the green band of the fundus image in Figure 3.3. (a)–(e) Image at the first, second, third,
fourth and fifth level correspondingly. . . . . . . . . . . . . . . . . . . . . . . . .
51
Figure 3.8 Closing applied to the green band of the fundus image in Figure 3.3
in order to suppress the blood vessels. . . . . . . . . . . . . . . . . . . . . . . . .52
Figure 3.9 The fuzzy convergence image of a retinal blood vasculature. [18] . . . 54
Figure 3.10 A schematic drawing of the vessel orientations. [19] . . . . . . . . . . . . . 55
Figure 3.11
Complete model of vessels' direction. For the sake of clarity, directions
(gray segments) are shown only on an arbitrary grid of points. [53] .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
56
Figure 3.12 The OD localization filter (template) used by [43]. . . . . . . . . . . . . . .
Figure 4.1 One of the 12 different kernels that have been used to detect vessel
segments along the vertical direction. [30] . . . . . . . . . . . . . . . . . . . . .65
Figure 4.2
(a) The maximum responses after applying the 12 kernels proposed
by Chaudhuri et al. [30] to the retinal image shown in 2.6(c). (b)
The corresponding binarized image using the threshold selection
method proposed by Otsu [63]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
66
Figure 4.3 The eight impulse response arrays of Kirsch's method. [12] . . . . . . .
Figure 4.4
(a) A typical fundus image from the STARE dataset. (b) The first
manual RV segmentation by Adam Hoover. (c) The second manual RV segmentation by Valentina Kouznetsova. (d) The results of
applying the piecewise threshold probing of a MFR. [27] . . . . . . . . .
68
Figure 4.5
The first (a) and second (b) manual segmentations for the retinal
image in 2.5(a), and the corresponding results of the RV segmentation methods by Chaudhuri et al. [30] (c), Zana and Klein
[65] (d), Niemeijer et al. [50] (e), and Staal et al. [7] (f). . . . . . . . . . .
70
Figure 4.6
(a)–(d) The maximum 2D-Gabor wavelet response for scales a = 2,
3, 4, and 5 pixels respectively, applied to the retinal image in 2.5(a). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
72
Figure 4.7 (a) Top-hat and (b) Top-hat Hessian-based enhancements, both
applied to the retinal image in 2.5(a). . . . . . . . . . . . . . . . . . . . . . . . .73
Figure 5.1
(a)–(d) STARE images showing different symptoms of DR.
(e) Drusen, a macular degeneration disease usually confused with
bright DR lesions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
77
Figure 5.2 Manual image segmentation. (a) An abnormal image. (b) Manually
segmented exudates (in green). (c) Close-up view of exudates. [1] . .80
Figure 5.3 Tissue layers within the ocular fundus. [85] . . . . . . . . . . . . . . . . . . . . 83
Figure 6.1
Results of comparing Mask Generation methods using the STARE.
(1st row) Three typical images from the STARE; (2nd, 3rd, and 4th
rows) the results of applying the mask generation methods of
Gagnon et al. [23], Goatman et al. [22], and Frank ter Haar [19] respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
87
Figure 6.2
Results of comparing Illumination Equalization methods using the DRIVE. Top-row images are typical gray-scale images, while the
bottom images represent the highest 2% intensity pixels per image. (a) Green band of a typical DRIVE image. (b) and (c) are
illumination equalized by [19] and [25] respectively. . . . . . . . . . . . . .
89
Figure 6.3
Reviewed normalizations of a typical fundus image. (a) Intensity
image. (b) Green-band image. (c) Histogram equalization. (d) Adaptive local contrast enhancement. (e) Adaptive histogram
equalization. (f) Desired average intensity. (g) Division by an over-smoothed version. (h) Background subtraction of retinal blood vessels. (i) Estimation of background luminosity and contrast
variability. (j) Adaptive local contrast enhancement applied to 'g'
instead of 'a'. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
92
Figure 6.4
(a)–(j) The results of applying a RV segmentation method to the images in Fig. 6.3 (a)–(j) respectively. (k) A manual segmentation of
Fig. 6.3(a) (used as a gold standard). . . . . . . . . . . . . . . . . . . . . . . . . .
93
Figure 6.5 ROC curves of the compared contrast enhancements methods. . . . . . 94
Figure 6.6
Color normalization chromaticity plots. (a) Before applying any
normalization methods. (b) Gray-world normalization.
(c) Comprehensive normalization. (d) Histogram equalization. (e) Histogram specification (matching). Red ellipse and element
plots represent the non-vessels cluster, while the blue represent the
vessels cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
97
Figure 7.1 The proposed vessels' direction at the OD vicinity matched filter. .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
Figure 7.2
The proposed method applied to the fundus image in 6.8(i). (a) ROI
mask generated. (b) Green-band image. (c) Illumination equalized
image. (d) Adaptive histogram equalization. (e) Binary vessel/non-vessel image. (f) Thinned version of the preceding binary image. (g) The intensity mask. (h) Final OD-center candidates. (i) OD
detected successfully using the proposed method (white cross, right-hand side). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
104
Figure 7.3
Results of the proposed method using the STARE dataset (white
cross represents the estimated OD center). (a) The only case where
the OD detection method failed. (b)–(h) The results of the proposed
method on the images shown in [53]. . . . . . . . . . . . . . . . . . . . . . . . . .
105
Figure 7.4 Results of the proposed method using the DRIVE dataset (white
cross represents the estimated OD center). . . . . . . . . . . . . . . . . . . . . .106
Figure 7.5
The pixel features. (a) A typical digital fundus image from the
DRIVE. (b) The inverted green channel of 'a' padded using [66].
(c)–(f) The maximum 2D-Gabor wavelet response for scales a = 2,
3, 4, and 5 pixels respectively. (g) Top-hat enhancement. (h) Top-hat
Hessian-based enhancement. (i) Green-band Hessian-based enhancement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
113
Figure 7.6
(1st row) Using three typical retinal images from the DRIVE, results
of the LS-SVM classifier trained using 2000 pixels with the final set
of features. (2nd row) The corresponding ground-truth manual segmentation. (3rd row) Another manual segmentation available for
the DRIVE images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
115
Figure 7.7
ROC curve for classification on the DRIVE dataset using the LS-
SVM classifier trained using 2000 pixels with the final set of
features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
116
Figure 7.8
The first steps while detecting Exudates. (1st row) Typical STARE
images containing exudates. (2nd row) The LS-SVM RV
segmentation soft responses for the 1st-row images. (3rd row) The
binarization hard responses of the RV segmentation outputs in the 2nd row. (4th row) The effect of the morphological closing when
applied to inverted versions of the images in the 2nd row. . . . . . . . .
120
Figure 7.9
The final steps while detecting Exudates. (1st row) The watersheds
segmentation results of the images in Figure 6.14 (4th row). (2nd row) A binary mask showing the regions finally selected as exudates
according to their properties. (3rd row) The identified exudates
shown in blue superimposed on the original images of Figure 6.14 (1st row). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
121
Figure 7.10
STARE images diagnosed manually as not containing any form of
bright-lesions, and indicated as free of any bright-lesions by our
proposed approach (a message indicating so is superimposed on the
upper-left corner). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
123
Figure 7.11
STARE images manually diagnosed as having forms of bright
lesions other than hard exudates, and indicated as having bright lesions by our proposed approach. The identified bright lesions are
shown in blue (follow the white arrows), superimposed on the
original images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
123
Figure A.1 Front view of a healthy retina. [97] . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Figure A.2
(a) An example fundus image shows "Puckering the macula", a
macular disease where an opaque membrane obscures the visibility of the macula and drags the para-macular vessels. (b) The dragged
vessels are shown better in fluorescein angiogram. [98] . . . . . . . . . . .
130
Figure A.3
(a) Typical Hartmann-Shack spot pattern (inverted) from a human
eye measurement. (b) Hartmann-Shack wavefront sensor with
micro-lens array and image sensor in the focal plane. [101] . . . . . . . .
132
Figure A.4 A typical iris image. [105] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Figure B.1 Diabetic retinopathy's effect on vision. (a) Without retinopathy. (b)
With retinopathy. [1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .138
Figure B.2 Diabetic retinopathy syndromes as categorized by [1] (the light-grey
blocks point out DR forms that are not detected by this work). . . . .140
Figure D.1
Chromaticity Diagram, by the CIE (Commission Internationale de
l'Eclairage, the International Commission on Illumination). [112] .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
150
List of Tables
Table 2.1 Image-Clarity grading scheme. [41] . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Table 2.2 Field definition grading scheme. [41] . . . . . . . . . . . . . . . . . . . . . . . . . 41
Table 2.3 The sensitivity and specificity of inadequate detection. [41] . . . . . . . 41
Table 6.1 Results of comparing Mask Generation methods using the DRIVE. . 86
Table 6.2 Area under curve (AUC) measured per contrast enhancement method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
94
Table 7.1 OD detection results for the proposed and literature-reviewed
methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Table 7.2
Results for the performance evaluation experiments made for our
presented method, compared to different literature segmentation methods (for the DRIVE dataset). . . . . . . . . . . . . . . . . . . . . . . . . . . .
117
Table 7.3 Results of detecting Hard Exudates using the STARE. . . . . . . . . . .
Table B.1 Projected counts and prevalence of diabetes in Egypt (population ≥ 20 years), 1995 to 2025. [8] . . . . . . . . . . . . . . . . . . . . .
137
Table B.2 Distribution of the 300 diabetic patients by their seeking of medical
care. [110] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
Table C.1
The 82 publicly available STARE images used by [52]
and/or [18], together with the ground-truth diagnoses of each image.
[27] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
146
Awards
November 2007: The 1st Place Winning Postgraduate in the MIA Made-In-the-Arab-world Competition, organized by the Arab League and the Arab Academy for Science and Technology (ASTF), Cairo, Egypt.
July 2007: The 1st Place Winning Postgraduate in the MIE Made-In-Egypt Competition, organized by the IEEE Egypt Gold Section, Cairo, Egypt.
Medical Glossary
Age-related macular degeneration (AMD, ARMD); (MAK-yu-lur). Group of conditions that include deterioration of the macula, resulting in loss of sharp central vision. Two general types: "dry", which is more common, and "wet", in which abnormal new blood vessels grow under the retina and leak fluid and blood (neovascularization), further disturbing macular function. Most common cause of decreased vision after age 60.

Anterior chamber; Portion of the eye between the cornea and the iris.

Blind spot; Sightless area within the visual field of a normal eye. Caused by the absence of light-sensitive photoreceptors where the optic nerve enters the eye.

Choroid; The vascular middle coat of the eye located behind the retina and in front of the sclera.

Ciliary body; The portion of the uveal tract between the iris and the choroid; composed of ciliary muscle and processes.

Cone; Light-sensitive retinal receptor cell that provides sharp visual acuity and color discrimination.

Cornea; Transparent portion of the outer coat of the eyeball forming the anterior wall of the anterior chamber.

Cotton-wool spots; Infarction of the optic nerve fiber layer of the retina, as in hypertension.

Diabetic retinopathy; (ret-in-AH-puh-thee). Spectrum of retinal changes accompanying long-standing diabetes mellitus. Early stage is background retinopathy. May advance to proliferative retinopathy, which includes the growth of abnormal new blood vessels (neovascularization) and fibrous tissue.

Dilated pupil; Enlarged pupil, resulting from contraction of the dilator muscle or relaxation of the iris sphincter. Occurs normally in dim illumination, or may be produced by certain drugs (mydriatics, cycloplegics) or result from blunt trauma.

Drusen; (DRU-zin). Tiny, white hyaline deposits on Bruch's membrane (of the retinal pigment epithelium). Common after age 60; sometimes an early sign of macular degeneration.

Fluorescein angiography; (FLOR-uh-seen an-jee-AH-gruh-fee). Technique used for visualizing and recording the location and size of blood vessels and any eye problems affecting them; fluorescein dye is injected into an arm vein, then rapid, sequential photographs are taken of the eye as the dye circulates.

Fovea; The thinned centre of the macula, responsible for fine acuity.

Fundus; Interior posterior surface of the eyeball; includes the retina, optic disc, macula, and posterior pole. Can be seen with an ophthalmoscope.

Glaucoma; Progressive optic neuropathy with characteristic nerve and visual field changes.

Intraocular pressure (IOP); Pressure within the globe (normal range = 8-21 mmHg).

Iris; Pigmented tissue lying behind the cornea that gives color to the eye (e.g., blue eyes) and controls the amount of light entering the eye by varying the size of the pupillary opening.

Macula; The small avascular area of the retina surrounding the fovea.

Mydriasis; Dilation of the pupil.

Neovascularization; (nee-oh-VAS-kyu-lur-ih-ZAY-shun). Abnormal formation of new blood vessels, usually in or under the retina or on the iris surface. May develop in diabetic retinopathy, blockage of the central retinal vein, or macular degeneration.

Ophthalmologist; (ahf-thal-MAH-loh-jist). Physician (MD) specializing in the diagnosis and treatment of refractive, medical, and surgical problems related to eye diseases and disorders.

Ophthalmoscope; (ahf-THAL-muh-skohp). Illuminated instrument for visualizing the interior of the eye (especially the fundus).

Optic disc, Optic nerve head; Ocular end of the optic nerve. Denotes the exit of retinal nerve fibers from the eye and the entrance of blood vessels to the eye.

Optic nerve; Largest sensory nerve of the eye; carries impulses for sight from the retina to the brain.

Peripheral vision; Side vision; vision elicited by stimuli falling on retinal areas distant from the macula.

Pupil; Variable-sized black circular opening in the center of the iris that regulates the amount of light that enters the eye.

Retina; The innermost layer of the eye, comprised of ten layers.
Chapter 1
Introduction

1.1 Motivation
The classification of various ocular/ophthalmic (eye-related) patterns is
an essential step in many fields. Whether in medical diagnosis, medical
image processing, or security, analyzing and classifying ocular images can
aid significantly in automating and improving real-time systems.
Consequently, this leads to a practical and sensible reduction in time and
costs, and, in the case of medical diagnosis, prevents people from suffering
due to various forms of pathologies, with blindness at the forefront.
Medical image processing has recently become one of the most
attractive research areas, owing to considerable achievements that have
significantly improved the medical care available to patients,
although it is a multidisciplinary field that requires comprehensive knowledge of
many areas such as medicine, pattern recognition, machine learning, and
image processing. Medical image analysis can greatly assist physicians in
diagnosing, treating, and monitoring changes of various diseases; hence, a
physician can obtain decision support [1].
The severe progression of diabetes is one of the greatest immediate challenges
to the current worldwide health system [1]. Diabetic retinopathy (DR),
among other complications of diabetes, is a leading cause of
blindness in Egypt, the Middle East, and in the working-age population of
Western countries. Glaucoma is also a leading cause of blindness worldwide.
In general, threats to vision and blinding complications of DR and glaucoma
give little or no warning, but they can be moderated if detected early enough for
treatment. Thus, annual screening that employs direct examination with an
ophthalmoscope, especially for diabetic patients, is highly recommended.
Automatic screening has been shown to be cost-effective compared to the
high cost of conventional examination. A shortage of ophthalmologists,
especially in rural areas, also hinders patients from obtaining regular
examinations. Thus, an automatic system for analyzing retinal fundus
images would be more practical, efficient, and cost-effective.
Artificial Intelligence (AI) may be defined as the branch of computer
science that is concerned with the automation of intelligent behavior. It is
still a young discipline, and its structure, concerns, and methods are less
clearly defined than those of a more mature science such as physics [2],
although it has always been more concerned with expanding the capabilities
of computer science than with defining its limits. AI can be broadly
classified into two major directions:
logic-based traditional AI, which includes symbolic processing;
and Computational Intelligence (CI), which is relatively new and
encompasses approaches primarily based on bio-inspired artificial
neural networks and evolutionary algorithms, besides fuzzy logic rules,
support vector machines, and hybrid approaches.
CI approaches can be trained to learn patterns, a property that must be a part
of any system that would claim to possess general intelligence [2]; hence
the so-called Machine Learning is one major branch of AI. CI techniques
are increasingly being used in biomedical areas because of the complexity of
biological systems as well as the limitations of the existing quantitative
modeling techniques [3]. Nevertheless, research conducted to date
on the analysis and classification of ocular images is mainly based
on other approaches; for example, statistical methods, geometrical models,
and convolutional kernels.
Finally, many of the methods and research efforts conducted, especially for
medical diagnosis, lack evaluation on large benchmark datasets. As a
result, carrying out comparative studies is difficult and inflexible.
1.2 Objectives
The main objective of this research is to aid in developing automatic
screening systems for retinopathies (especially DR). Such systems will
significantly help ophthalmologists while diagnosing and treating patients.
Automated screening systems promise more efficient and lower-cost
medical services, in addition to delivering health-care services to rural areas.
Moreover, automated screening systems will help to hold back
the personal and social costs of DR, one of the most prevalent
complications of diabetes and one of the leading causes of blindness.
To develop any automatic ocular screening system, we first have to
analyze the anatomical structure of the retinal fundus image, which consists
mainly of the Retinal Blood Vessels (RBV), the Optic Disc (OD), and the Macula.
Detecting these structures will further help in detecting
and quantifying several retinopathies. Then, to detect the presence of DR, we
have to detect a DR manifestation such as exudates or microaneurysms.
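The analysis order just described can be sketched as a small pipeline. The sketch below is purely illustrative: every function is a naive placeholder stub with invented names and thresholds, not the thesis's actual methods (those are developed in the later chapters); it only demonstrates the sequencing of vessel segmentation, OD localization, and bright-lesion detection.

```python
import numpy as np

# All functions below are naive illustrative stubs, NOT the methods
# developed in this thesis; they only show the order of the steps.

def segment_vessels(green):
    # Stub: vessels appear dark, so take the lowest-intensity decile.
    return green < np.percentile(green, 10)

def locate_optic_disc(green):
    # Stub: the OD is among the brightest regions; take the brightest pixel.
    return np.unravel_index(int(np.argmax(green)), green.shape)

def detect_bright_lesions(green, od_rc, od_radius=8, threshold=150.0):
    # Stub: bright pixels outside the optic-disc neighbourhood.
    rr, cc = np.indices(green.shape)
    od_area = (rr - od_rc[0]) ** 2 + (cc - od_rc[1]) ** 2 <= od_radius ** 2
    return (green > threshold) & ~od_area

def screen(green):
    vessels = segment_vessels(green)   # retinal blood vessels (RBV)
    od_rc = locate_optic_disc(green)   # optic disc (OD) center
    lesions = detect_bright_lesions(green, od_rc)
    return vessels, od_rc, lesions

# Tiny synthetic "green channel": dull background, a bright OD-like blob,
# and a small exudate-like spot away from the disc.
rng = np.random.default_rng(0)
green = rng.integers(60, 100, size=(64, 64)).astype(float)
green[30:34, 40:44] = 250.0   # optic-disc-like blob
green[10:12, 10:12] = 200.0   # exudate-like spot
vessels, od_rc, lesions = screen(green)
print(od_rc, int(lesions.sum()))
```

On the synthetic image the stub correctly places the OD at the bright blob and flags only the off-disc bright spot as a lesion, which is exactly the ordering constraint the real system must respect: the OD must be located before bright lesions can be disambiguated from it.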
The use of AI approaches is investigated throughout this research,
since these approaches have proven highly effective in pattern
classification and image processing. Employing CI approaches while dealing
with retinal fundus images may greatly improve the results achieved. Finally,
all the methods that will be selected for a comparative study or for being
employed in the final proposed system should be compared against the
appropriate benchmark dataset(s) to realize a practical evaluation.
1.3 Thesis Overview
This thesis mainly presents a literature review and a comparative study of some of the basic tasks employed while developing an automated ocular
screening system for detecting DR. These main tasks and the overall scope
of this work are shown in Figure 1.1.
The thesis starts by exploring various phases used for preprocessing a
retinal fundus image, which include mask generation, color normalization,
contrast enhancement, and illumination equalization. Preprocessing a fundus
image is a vital step that prepares the image to be effectively analyzed.
Selected preprocessing methods are compared and evaluated using a
standard dataset. The thesis then moves on to explore various approaches
used for detecting the retinal fundus image landmarks (retinal blood vessels,
optic disc), which are considered the most important anatomical structures in
fundus images. New methods are then proposed, compared, and evaluated on
large, publicly available benchmark datasets.
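As a flavor of the preprocessing stage, one common illumination-equalization scheme estimates the slowly varying background with a large local-mean window and subtracts it from the green band. The sketch below is illustrative only and is in the spirit of, though not necessarily identical to, the methods reviewed later; the function name, window size, and mid-gray offset are arbitrary choices made for the example.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def equalize_illumination(green, window=41, offset=128.0):
    # Estimate the slowly varying illumination as the local mean over a
    # large window, subtract it, and re-centre the result around mid-gray.
    g = green.astype(np.float64)
    pad = window // 2
    padded = np.pad(g, pad, mode="reflect")
    # Local mean over a window x window neighbourhood of every pixel.
    background = sliding_window_view(padded, (window, window)).mean(axis=(2, 3))
    equalized = g - background + offset
    return np.clip(np.rint(equalized), 0, 255).astype(np.uint8)

# A synthetic 64x64 "green band" with a strong left-to-right intensity
# gradient standing in for uneven fundus lighting.
rows, cols = 64, 64
gradient = np.tile(np.linspace(40.0, 200.0, cols), (rows, 1))
flat = equalize_illumination(gradient)
# Away from the borders the linear drift cancels, leaving near-uniform
# mid-gray, so the intensity spread collapses.
print(gradient.std(), flat.std())
```

The design point is that any image feature slower than the window size is treated as illumination and removed, while vessels and lesions, being much smaller than the window, survive the subtraction.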
The thesis continues by surveying research that aims to detect, and
aid the quantification of, the bright manifestations of DR,
especially hard exudates. The latter approaches depend mainly on
the previously segmented retinal structures. Finally, the thesis
summarizes the contributions achieved and brings together the different
modules of the automated screening system, which at this point is able
to identify diabetic patients who need further examination.
1.4 Thesis Outline
Chapter 1 continues by furnishing the basics required to understand the
presented work, and introduces the medical background. Chapter 2
describes different methodologies used to preprocess fundus images and
prepare them for further analysis.
Figure 1.1 A typical generic automatic eye screening system; the figure highlights
the modules that will be included in our research (the light-grey blocks point out
modules that are out of the scope of this work).
[Figure 1.1 block labels: Data Acquisition (a typical digitized retinal fundus image acquired for medical diagnoses through a non-mydriatic fundus camera); Pre-Processing (Mask Generation, Color Normalization, Contrast Enhancement, Illumination Equalization); Analyzing Retinal Landmarks (Optic Disc Localization, Retinal Blood Vessels Segmentation, Fovea Localization); Detecting Retinopathies (Diabetic Retinopathy: Bright Lesions (mainly Hard Exudates) and Dark (Red) Lesions; Glaucoma); further modules shown: Distinguishing the Central Retinal Artery and Vein, Differentiating Exudates from Cotton-wool Spots, Boundary Detection, Type-of-Eye (left/right) Detection, Differentiating Haemorrhages from Microaneurysms.]
Chapters 3 and 4provide an overview of previous work, and describe in
some details the implementation for automatically segmenting the optic disc,
and the retinal blood vessels respectively. Chapter 5 describes the
approaches used to automatically extract Exudates, as main bright DR
manifestation of non-proliferative background diabetic retinopathy.
Chapters 6 and 7 present our prototyped system for preprocessing
fundal images, automatically segmenting retinal landmarks, and
automatically detecting exudates. Both chapters also include the comparative
studies carried out to evaluate the system, and provide the results of
experiments that test the capabilities/limitations of the presented methods.
Chapter 8ends this thesis with a summary of the main achievements of the
research, a discussion of possible improvements, proposing possible areas
for future research, as well as concluding remarks.
1.5 Eye Anatomy
Developing a basic understanding of the human eye anatomy (Figure 1.2)
is an appropriate step before going on with this thesis. The eye is nearly a
sphere, with an average diameter of approximately 20 mm [4], enclosed by
three membranes: the cornea and sclera, which compose the outer cover; the
choroid; and the retina. The cornea is a tough, transparent tissue that covers
the frontal surface; continuous with it, the sclera is an opaque membrane
that encloses the remainder of the optic globe.
The choroid lies directly below the sclera and contains a network of blood vessels that serve as the major source of nutrition. Even superficial
injury to the choroid, often not deemed serious, can lead to severe eye
damage as a result of inflammation that restricts blood flow [4]. The choroid
coat is heavily pigmented and so helps to reduce the amount of extraneous
light entering the eye and the backscatter within the optical globe. At its
anterior extreme, the choroid is divided into the ciliary body and the iris
diaphragm. The central opening of the iris (the pupil) varies in diameter
from approximately 2 to 8 mm. The front of the iris contains the visible
pigment of the eye, whereas the back contains a black pigment.
The retina is the innermost membrane, which lines the inside of the wall's
entire posterior portion. When the eye is properly focused, light from an
object outside the eye is imaged on the retina. Pattern vision is afforded by
the distribution of discrete light receptors (photoreceptors) over the
surface of the retina. These receptors are responsible for receiving light
beams, converting them into electrical impulses, and then transmitting this
information to the brain, where it is turned into images [1].
Figure 1.2 Simplified diagram of a horizontal cross section of the human eye. [4]
There are two classes of receptors: cones and rods. The cones in each eye
number between 6 and 7 million, are highly sensitive to color, and are
located primarily in the central portion of the retina, called the fovea, a
circular indentation of 1.5 mm in diameter. Rods are distributed over the
retinal surface and number from 75 to 150 million. The absence of receptors
in the region of emergence of the optic nerve fibers and the blood vessels
from the eye results in the so-called blind spot or optic disc [4]. Detailed
description of the optic disc, retinal blood vessels, and the fovea are found in
sections 3.2, 4.2 and D.2 respectively, while figure 1.3 shows an example of
a right fundus image including the main anatomical structures.
Figure 1.3 (a) A typical retinal image from the right eye. (b) Diagram of the
retina. [1]
The main retinal components numbered in Figure 1.3 are as follows:
1- Superior temporal blood vessels
2- Superior nasal blood vessels
3- Fovea / Macula
4- Optic nerve head / Optic disc
5- Inferior temporal blood vessels
6- Inferior nasal blood vessels
1.6 Fundus Photography and Eye Diseases
Currently, the majority of screenings are carried out by fundal examination performed by medical staff, which is expensive and has been shown to be inaccurate [5]. Using digital fundus photography provides us with digitized data that can be exploited for the computerized detection of diseases. Fully automated approaches involving fundus image analysis by a computer could provide an immediate classification of retinopathy without the need for specialist opinions. Thus, it is more cost-effective, ideal for those who are unable or unwilling to travel to hospital clinics (especially those who live in rural areas), and greatly facilitates the management of certain diseases.
Recent automated retinopathy screening systems depend on fundus images taken by a non-mydriatic fundus camera, which generally does not require pupillary dilatation, and whose operator does not need to be skilled at ophthalmoscopy [5]. In addition, fundus photography surpasses fluorescein angiography and infrared fluorescence imaging, since no dye needs to be injected into the bloodstream. See Appendix A for more details about the various forms of eye-related images.
The objective of automatic screening of fundus images is to improve the image appearance, support the interpretation of the main retinal components, and analyze the image in order to detect and quantify retinopathies such as microaneurysms, haemorrhages and exudates [5]. In this thesis, we are mainly concerned with diabetic retinopathy and, to a lesser extent, glaucoma, due to their impact on society; both should be included among the avoidable major causes of blindness, as some forms of treatment are available. Among patients presenting to the Alexandria Specialized Medical Committee for Eye Diseases (Alexandria, Egypt), glaucoma was responsible for 19.7% of blindness, and diabetic retinopathy for 9% [6].
they do not require the injection of fluorescein or indocyanine green dye into the body. In general, a screening method that does not require trained personnel would be of great benefit to screening services by decreasing their costs, and also by decreasing the pressure on available infrastructures and resources [14]. This pressure is due to the growing number of diabetic patients, with insufficient ophthalmologists to screen them all [5], especially as the World Health Organization (WHO) advises yearly ocular screening of patients [7].
1.6.2 Glaucoma
Glaucoma is also one of the major causes of preventable blindness. It induces damage to the optic nerve head (Figure 1.3) via increased pressure in the ocular fluid (Figure 1.4). In most cases the damage occurs asymptomatically, i.e. before the patient notices any changes to his or her vision, and this damage is irreversible; treatment can only reduce or prevent further damage [15]. Age is the most constant risk factor for glaucoma, and a family history of glaucoma is also a risk factor [6].
Figure 1.4: (a) A normal optic disc. (b) Glaucomatous optic disc. [15]
Glaucoma is presently detected either by regular inspection of the retina, by measurement of the intra-ocular pressure (IOP), or by a loss of vision. It has been observed that nerve head damage precedes the latter two events, and that direct observation of the nerve head could therefore be a better method of detecting glaucoma [16]. In [15], fundus images were used to detect and characterize the abnormal optic disc in glaucoma.
1.6.3 Detecting Retina Landmarks
Detecting retinal landmarks (Figure 1.3) gives a framework from which automated analysis and human interpretation of the retina proceed [17]. Identifying these landmarks in the retinal image will greatly aid the subsequent detection, and hence quantification, of diseases in the mentioned regions. For example, in diabetic retinopathy, after detecting the main retinal components an image can be analyzed for sight-threatening complications such as disc neovascularisation, vascular changes or foveal exudation. Besides, recognizing the main components can be used to define criteria for discarding images whose quality is too poor for the assessment of retinopathy [5]. Methods for detecting the optic disc and the retinal blood vessels are described in Chapters 3 and 4 respectively.
1.6.4 Fundus Photography Datasets
In this work we used a number of publicly available datasets of retinal images as a benchmark to evaluate our work and to compare the performance of some selected methods. These datasets include a fairly large number of images of healthy retinas and others with various diseases; they may also include the field-of-view (FOV) mask of each image, the gold standard (manual segmentation) used by some algorithms, and the results of applying specific algorithms to each image (for details see Appendix C).
Chapter 2
Preprocessing

2.1 Introduction
A significant percentage of fundus images are of poor quality that hinders analysis, due to factors such as patient movement, poor focus, bad positioning, reflections, disease opacity, or uneven/inadequate illumination. The sphericity of the eye is a significant determinant of the intensity of reflections from the retinal tissues; in addition, the interface between the anterior and posterior ocular chambers may cause compound artifacts such as circular and crescent-shaped low-frequency contrast and intensity changes [17]. The improper focusing of light may radially decrease the brightness of the image outward from the center, leading to a sort of uneven illumination known as vignetting [18], which consequently results in the optic disc appearing darker than other areas of the image [19].

These artifacts are significant enough to impede human grading in about 10% of retinal images [17], and this can reach 15% in some retinal image sets [20]. A similar proportion is assumed to be of inadequate quality for automated analysis. Preprocessing of the fundus images can weaken or even remove the mentioned interferences. This chapter is a literature review that starts by describing automatic methods for mask generation, proceeds by discussing various methods for the preprocessing of a fundus image, and ends by describing methods for automatically assessing the quality of retinal images.
2.2 Fundamentals of Retinal Digital Image Representation
Digital images in general may be defined as a two-dimensional light intensity function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude (value) of f at any point of coordinates (x, y) is proportional to the intensity (brightness, or gray level) at that point [4]. Digital images are images whose spatial coordinates and brightness values are selected in discrete increments, not in a continuous range. Thus a digital image is composed of a finite number of picture elements (pixels), each of which has a particular location and a value: a single value in the case of a monochrome 'grayscale' image, or three values, usually red, green, and blue, in the case of a colored image. In a typical retinal true-color image, the value of each pixel is usually represented by 24 bits of memory, giving 256 (2^8) different shades for each of the three color bands, and thus a total of approximately 16.7 million possible colors.
Generally, the retinal images of the publicly available databases used throughout this work were about 685 rows by 640 columns, for a total of about 440,000 pixels. Although digital retinal images need a size of about 2-3000 pixels across (2-3 megapixels) to match ordinary film resolution [21], digital retinal images are a practical alternative to ordinary filming.
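The storage arithmetic above is easy to verify; the short sketch below reproduces the figures quoted in this section (the 685 x 640 size is the approximate dimension of the database images used here, not a property of every dataset):

```python
# 24-bit "true color" representation: 8 bits per band for R, G and B.
bits_per_band = 8
shades_per_band = 2 ** bits_per_band      # 256 shades per color band
total_colors = shades_per_band ** 3       # ~16.7 million possible colors

# Approximate size of the database images described above.
rows, cols = 685, 640
total_pixels = rows * cols                # ~440,000 pixels
bytes_per_image = total_pixels * 3        # 24 bits = 3 bytes per pixel
```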
The extent (scope) of the captured scene of the retina is called the field of view (FOV), and is measured in degrees of arc (Figure 2.1(a)). A typical retina has a FOV of somewhat more than 180 degrees of arc, but not all of it is captured. The images that were used in this work have a 35- or 45-degree FOV depending on the type and settings of the retinal fundus cameras used. Since cameras using a 35-degree FOV show a smaller area of the retina compared to 45-degree FOV cameras, 35-degree FOV images are comparatively magnified and need to be subsampled before processing [19]. Figure 2.1(b) shows the subsampling factor calculated according to the FOV geometry.
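The exact subsampling factor of [19] follows the FOV geometry of Figure 2.1(b), which is not reproduced in the text; the sketch below only illustrates one plausible reading, assuming a simple pinhole model in which the linear scale of the image is proportional to the tangent of the half field angle (an assumption, not the figure's exact construction):

```python
import math

def relative_magnification(fov_narrow_deg=35.0, fov_wide_deg=45.0):
    """Approximate linear magnification of a narrow-FOV fundus image relative
    to a wide-FOV image with the same pixel dimensions, under a pinhole-camera
    assumption (image scale ~ tan(FOV / 2))."""
    return (math.tan(math.radians(fov_wide_deg) / 2.0)
            / math.tan(math.radians(fov_narrow_deg) / 2.0))

# A factor > 1 means the 35-degree image is magnified and should be
# subsampled by roughly this factor before processing.
factor = relative_magnification()
```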
(4-sigma threshold with a free parameter) value was automatically calculated using pixel value statistics (mean and standard deviation) outside the ROI for each color band. Then logical operators (AND/OR) together with a region connectivity test are used to combine the binary results of all bands in order to identify the largest common connected mask (due to the different color response of the camera, the ROI size is not always the same for each band) [23].
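A sketch of this per-band thresholding and combination follows. It assumes the background statistics can be estimated from the dark image corners (how [23] samples the outside-ROI pixels is not detailed here), merges the bands with OR, and keeps the largest connected region as the connectivity test:

```python
import numpy as np
from scipy import ndimage

def band_mask(band, n_sigma=4.0):
    """Threshold one color band: pixels brighter than mean + n_sigma * std of
    the dark background are taken as ROI. The background statistics are
    estimated from the image corners (an assumption standing in for the
    outside-ROI pixels used in [23])."""
    h, w = band.shape
    corners = np.concatenate([band[:h // 10, :w // 10].ravel(),
                              band[:h // 10, -(w // 10):].ravel(),
                              band[-(h // 10):, :w // 10].ravel(),
                              band[-(h // 10):, -(w // 10):].ravel()]).astype(float)
    t = corners.mean() + n_sigma * corners.std()
    return band > t

def combined_mask(rgb):
    """Combine the per-band binary masks (here with OR) and keep the largest
    connected component, mimicking the region-connectivity test of [23]."""
    m = band_mask(rgb[..., 0]) | band_mask(rgb[..., 1]) | band_mask(rgb[..., 2])
    labels, n = ndimage.label(m)
    if n == 0:
        return m
    sizes = ndimage.sum(m, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```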
In [19], the ROI was detected by applying a threshold t to the red color band (empirically, t = 35), and then the morphological operators opening, closing, and erosion were applied respectively (each to the result of the preceding step) using a 3x3 square kernel to give the final ROI mask. For more details on the mentioned logical and morphological operators (i.e. AND, OR, opening, closing, and erosion), see [4].
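This thresholding-plus-morphology pipeline can be written directly with standard tools; a minimal sketch (scipy's binary morphology stands in for whatever implementation [19] used):

```python
import numpy as np
from scipy import ndimage

def roi_mask(red_band, t=35):
    """ROI mask following [19]: threshold the red band at t (empirically 35),
    then apply opening, closing, and erosion, each to the result of the
    preceding step, with a 3x3 square kernel."""
    kernel = np.ones((3, 3), dtype=bool)
    mask = red_band > t
    mask = ndimage.binary_opening(mask, structure=kernel)
    mask = ndimage.binary_closing(mask, structure=kernel)
    mask = ndimage.binary_erosion(mask, structure=kernel)
    return mask
```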
2.4 Illumination Equalization
The illumination in a retinal image is non-uniform (uneven) due to the variation of the retina's response or the non-uniformity of the imaging system (e.g. vignetting, and variation of the eye position relative to the camera). Vignetting and other forms of uneven illumination make the typical analysis of retinal images impractical and useless. For instance, the optic disc (OD) is characterized as the brightest anatomical structure in a retinal image, hence applying a simple threshold or grouping the high-intensity pixels should localize the OD successfully. Yet, due to uneven illumination (vignetting in particular) the OD may appear darker than other retinal regions, especially since retinal images are often captured with the fovea appearing in the middle of the image and the OD to one side [18]. Once the OD loses its distinct appearance due to the non-uniform illumination or to pathologies, localizing the OD is no longer straightforward, especially for methods based on intensity variation or on intensity values alone [19].
Figure 2.3: (a) Typical retinal image. [24] (b) The green band of 'a'. (c) The smoothed local average intensity image of 'b' using a 40x40 window. (d) Illumination-equalized version of 'b' using Eq. 2.1.
In order to overcome the non-uniform illumination, illumination equalization is applied to the image, where each pixel I(r, c) is adjusted using the following equation [18, 19]:

I_eq(r, c) = I(r, c) + m - I_W(r, c)    (Eq. 2.1)

where m is the desired average intensity (128 in an 8-bit grayscale image) and I_W(r, c) is the mean intensity value (i.e. the local average intensity). The
mean intensity value is computed independently for each pixel as the average intensity of the pixels within a window W of size N x N. The local average intensities are smoothed using the same windowing (Fig. 2.3). The window size N applied in [18] is variable, in order to use the same number of pixels (between 30 and 50) every time while computing the average in the center or near the border of the image.
In [19], a running window of a single size (40x40), and only the pixels inside the ROI, were used to calculate the mean intensity value; therefore the number of pixels used when calculating the local average intensity in the center is larger than the number used near the border, where the running window overlaps background pixels. Although the resulting images look very similar to those obtained with the variable running window, the ROI of the retinal images is shrunk by five pixels to discard the pixels near the border, where the chances of erroneous values are higher [19].
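Eq. 2.1 with the fixed 40x40 running window of [19] can be sketched as follows. The masked box filter is an assumption about how "pixels inside the ROI" are handled, and the boundary treatment here differs from shrinking the ROI by five pixels as [19] does:

```python
import numpy as np
from scipy import ndimage

def illumination_equalize(green, mask=None, m=128.0, n=40):
    """Apply Eq. 2.1, I_eq(r, c) = I(r, c) + m - I_W(r, c), where I_W is the
    local average over an n x n running window (n = 40 as in [19]). When a
    ROI mask is supplied, only ROI pixels contribute to the local average
    (a masked box filter)."""
    g = green.astype(float)
    if mask is None:
        local_mean = ndimage.uniform_filter(g, size=n)
    else:
        w = mask.astype(float)
        num = ndimage.uniform_filter(g * w, size=n)
        den = ndimage.uniform_filter(w, size=n)
        local_mean = np.where(den > 0, num / np.maximum(den, 1e-12), m)
    return np.clip(np.round(g + m - local_mean), 0, 255).astype(np.uint8)
```

On a uniformly lit region the local mean equals the pixel value, so the output settles at the desired average m = 128, which is exactly the behaviour Eq. 2.1 describes.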
In [25], correcting the non-uniform illumination in retinal images is achieved by dividing the image by an over-smoothed version of it obtained using a spatially large median filter. Usually, the illumination equalization process is applied to the green band (green image) of the retina [18, 19, 25].
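The division-based correction of [25] can be sketched in a few lines; the kernel size below is an assumed example of a "spatially large" median filter (the exact size is not given here), and the +1 offset only avoids division by zero in dark regions:

```python
import numpy as np
from scipy import ndimage

def divide_by_background(green, size=61):
    """Shade correction in the spirit of [25]: divide the green band by an
    over-smoothed version of itself obtained with a large median filter.
    The result is a ratio image close to 1 in evenly lit areas."""
    g = green.astype(float) + 1.0               # avoid division by zero
    background = ndimage.median_filter(g, size=size)
    return g / background
```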
2.5 Contrast Enhancement
Enhancement is the processing of an image so that the result is more appropriate than the original for a specific application. Contrast enhancement refers to any process that expands the range of the significant intensities. The various possible processes differ in how the significant range is identified and how the expansion is performed [5].
2.5.1 Green Band Processing
In order to simply enhance the contrast of the retinal fundus images,
some information is commonly discarded before processing, such as the red
component (yielding the so-called red-free images) and the blue component of the image. Consequently, only the green band (green image) is extensively used in the processing (Figure 2.4), as it displays the best vessel/background contrast [26] and the greatest contrast between the optic disc and the retinal tissue [15]. In addition, micro-aneurysms (an early symptom of diabetic retinopathy) are more distinguishable from the background in the green band, although they normally appear as small reddish spots on the retina [25].

Conversely, the red band tends to be highly saturated, so it is hardly used by any automated application that relies on intensity information alone. Besides, the blue band tends to be empty, and is therefore discarded. Hence, many vessel detection and optic disc localization methods are based on the green component/channel of the color fundus image [15, 25, 26, 28-30].
Figure 2.4: (a) A typical RGB colored fundus image. (b) Red component image. (c) Green component. (d) Blue component. [27]
2.5.2 Histogram Equalization
A typical well-known technique for contrast enhancement is histogram (gray-level) equalization [4], which spans the histogram of an image over a fuller range of the gray scale. The histogram of a digital image with L total possible intensity levels in the range [0, L-1] is defined as the discrete function:

h(r_k) = n_k,    k = 0, 1, 2, ..., L-1    (Eq. 2.2)

where r_k is the kth intensity level in the given interval and n_k is the number of pixels in the image whose intensity level is r_k [31]. The probability of occurrence of gray level r_k in an image (i.e. the probability density function 'PDF') is approximated by:

p_r(r_k) = n_k / n,    k = 0, 1, 2, ..., L-1    (Eq. 2.3)

where n is the total number of pixels in the image. Thus, a histogram-equalized image is obtained by mapping each pixel with level r_k in the input image to a corresponding pixel with level s_k in the output image using the following equation (based on the cumulative distribution function 'CDF'):

s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / n,    k = 0, 1, 2, ..., L-1    (Eq. 2.4)
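Eqs. 2.2-2.4 translate directly into a lookup-table implementation; a minimal sketch for 8-bit images (the scaling by L - 1 maps the CDF values back to displayable gray levels):

```python
import numpy as np

def histogram_equalize(gray, L=256):
    """Histogram equalization per Eqs. 2.2-2.4: h(r_k) = n_k, p_r = n_k / n,
    and s_k = T(r_k) = cumulative sum of p_r, scaled to [0, L-1]."""
    hist = np.bincount(gray.ravel(), minlength=L)   # h(r_k) = n_k   (Eq. 2.2)
    pdf = hist / gray.size                          # p_r(r_k)       (Eq. 2.3)
    cdf = np.cumsum(pdf)                            # T(r_k)         (Eq. 2.4)
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[gray]
```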
Although histogram equalization is a standard technique, it has some drawbacks, since it depends on the global statistics of the image (i.e. pixels are modified by a transformation function based on the gray-level content of the entire image). For example, a washed-out appearance can be seen in some parts of the image due to over-enhancement, while other parts, such as the peripheral region, need more enhancement (Figure 2.5) [4, 5].
2.5.3 Local Contrast Enhancement
As a result of the histogram equalization drawbacks, a local contrast enhancement technique was introduced by Sinthanayothin et al. [5, 32] that does not depend on the global statistics of an image, and so is not applied to the entire image. Instead, it is applied to local areas depending on the mean and variance in each area. Considering a small running window (sub-image) W containing M pixels and centered on the pixel (i, j), the mean of the intensity within W can be defined as:

<f>_W(i, j) = (1/M) Σ_{(k,l) ∈ W(i,j)} f(k, l)    (Eq. 2.5)

while the standard deviation of the intensity within W is:

σ_W(i, j) = [ (1/M) Σ_{(k,l) ∈ W(i,j)} ( f(k, l) - <f>_W(i, j) )^2 ]^(1/2)    (Eq. 2.6)
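The local statistics just defined can be computed efficiently with box filters. The sketch below is a simplified local contrast stretch built from the local mean and standard deviation, not the exact transform of Sinthanayothin et al. [5, 32]; the window size and gain are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def local_contrast_enhance(gray, m=11, gain=1.0, eps=1e-6):
    """Simplified local contrast enhancement: each pixel is re-expressed as
    its deviation from the local mean <f>_W, normalized by the local spread
    sigma_W, both computed over an m x m box window."""
    f = gray.astype(float)
    mean = ndimage.uniform_filter(f, size=m)
    sq_mean = ndimage.uniform_filter(f * f, size=m)
    sigma = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    out = 128.0 + gain * 128.0 * (f - mean) / (sigma + eps)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

Because the statistics are local, a faint vessel in a dark peripheral region is stretched by its own neighbourhood's spread rather than by the global histogram, which is precisely the weakness of global equalization noted above.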
The local contrast enhancement of a colored image is applied