
Blood Vessels Extraction and Classification into Arteries and Veins in Retinal Images

Jihene Malek
Electronics and Microelectronics Laboratory
University of Monastir
Email: [email protected]

Rached Tourki
Electronics and Microelectronics Laboratory
University of Monastir

Abstract—Many retinal diseases are characterized by changes in retinal vessels. The retinal vascular structure consists of two kinds of vessels: arteries and veins. An important symptom of Diabetic Retinopathy (DR) is irregularly wide veins, leading to an unusually low ratio of the average diameter of arteries to veins (AVR). In this paper, we present an approach to separate arteries and veins based on a segmentation and neural classification method. Blood vessels are segmented using two-dimensional matched filters derived from Gaussian functions. We use feature vectors based on a vessel profile extracted for each segment. The obtained features are introduced as the input vector of a Multi-Layer Perceptron (MLP) to classify the vessels as arteries or veins. Our approach achieves 95.32% correctly classified vessel pixels.

Keywords — Retinal vessel segmentation, Vessel Classification, Arteries and Veins, Pattern Recognition, Neural Networks, Diabetic Retinopathy.

I. INTRODUCTION

Ocular fundus images tell us about retinal, ophthalmic, and even systemic diseases. Retinal images are widely used by ophthalmologists and play an important role in the detection and diagnosis of many eye diseases [1]. Several diseases such as glaucoma [2], [3], diabetic retinopathy, and macular degeneration are very serious and are among the most common causes of blindness if they are not detected in time [4]. Symptoms of many retinopathies are related to morphological features of the retinal vascular tree [5]. The clinical diagnostic procedure for retinopathy is based on an attentive evaluation of the main features of the retinal vessel structure, obtained from fundus camera images. Traditionally, the vascular tree is traced by hand in a time-consuming process that demands both training and skill. Furthermore, manual detection of blood vessels is very difficult since the blood vessels in a retinal image are complex in structure and have low contrast. Therefore, an automatic analysis of fundus images would be of great help to the ophthalmologist due to the large number of patients. As a first step, vessel assessment requires segmentation of the vascular network from the background for further processing. Knowledge of blood vessel location can be useful for evaluating retinopathy of prematurity [6], arteriolar narrowing [7], [8], vessel tortuosity to characterize hypertensive retinopathy [9], vessel diameter measurement to diagnose hypertension and cardiovascular diseases [10], [11], and computer-assisted laser surgery [12], [13]. Also, the vascular tree can be used as valuable information to locate other fundus features such as the fovea and the optic disc [14]-[18], [19]. Furthermore, it may serve as a means for the registration of multimodal images [20], [21].

In the literature, retinal vessel segmentation methods have been divided into two groups: pixel processing-based methods and tracking methods. In [22], the concept of matched filter detection was proposed. Improved matched filtering approaches using global [23] or local thresholding strategies [24] have been reported for retinal vessel segmentation. In [25], [26], the retinal vessels are segmented progressively in a two-stage region growing procedure. The combination of matched filtering, edge detection and region growing was proposed in [27]. In [28], the authors used a 2-D wavelet transform for the segmentation of the vessels in fundus images. A K-nearest neighbor (KNN) classifier was used to determine the probability of each pixel being a vessel [29]. In [30], retinal vessels are identified using a neural network for each pixel. Tracking-based methods rely on a vessel profile model. Several methods reported in the literature use Gaussian functions to characterize the vessel profile; they start from some initial points and then trace a path that best matches the profile model. In [31], the authors described an algorithm that is initiated by the definition of the starting and ending points, after which a matched filter automatically tracks the vessel midline. In [32], fuzzy C-means clustering has been used for vessel classification based on intensity information. In [33], the tracking process begins from the circumference of the optic disc, and the next search location is estimated with a Kalman filter.

Fundus images contain a large amount of physiological information. Arteries and veins have many observable features, including color, diameter, opacity (reflectivity) and curvature, which can serve as diagnostic indicators. An important symptom of Diabetic Retinopathy (DR) is irregularly wide veins, leading to an unusually low AVR. To determine the AVR, a classification of vessels as arteries or veins is essential. In this paper, we implement the Gaussian matched filter [22]. The resulting image is thresholded to produce a binary segmentation of the vasculature; then, based on a neural classifier, each vessel segment is classified as either an artery or a vein.


Figure 1. Illustration of the preprocessing: (a) original image, (b) green channel of the original image, (c) enhanced image.

II. METHODS

A. Segmentation of Retinal Blood Vessels

1) Preprocessing: The green channel presents a higher contrast between vessels and retinal background (Fig. 1b). It has been considered the natural basis for vessel segmentation in several works [34], [24], [35], [29]. However, the intensity of some background pixels is comparable to that of brighter vessel pixels. In order to reduce these imperfections, the details of the image were enhanced by homomorphic filtering. A homomorphic filter is used to remove multiplicative noise. For fundus images, the observed image intensity at a pixel x can be modeled as the product of two components [36]:

$$ I(x) = I_l(x) \cdot I_\sigma(x) \qquad (1) $$

where $I_\sigma(x)$ is the reflectance component, which contains information about the objects in the image, and $I_l(x)$ is the illumination component from the light sources in the image. A homomorphic filter was used to linearly separate the two components of the intensity signal, that is, by taking the logarithm of both sides of (1) to obtain:

$$ \ln I(x) = \ln I_l(x) + \ln I_\sigma(x) \qquad (2) $$

Consequently, illumination variations can be reduced by filtering in the log domain. The reflectance component can thus be estimated as:

$$ I_\sigma(x) = \exp\{F(\ln I(x))\} \qquad (3) $$

where $F(\cdot)$ is a high-pass filter. Fig. 1c shows that this method can effectively achieve non-uniform illumination correction, contrast normalization and image contrast enhancement, which meets the requirements of retinal image preprocessing.
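As an illustration, the following Python sketch applies a homomorphic enhancement of this kind to the green channel. The choice of a Gaussian low-pass to model the slowly varying illumination, and its width sigma, are illustrative assumptions; the paper does not specify the exact high-pass filter F(·) used.

```python
import numpy as np
from scipy import ndimage

def homomorphic_enhance(green_channel, sigma=30.0):
    """Estimate the reflectance component of a fundus image (eqs. (1)-(3)).

    green_channel: 2-D array scaled to (0, 1]; sigma (assumed value) controls
    the Gaussian low-pass used to model the slowly varying illumination.
    """
    img = green_channel.astype(np.float64) + 1e-6            # avoid log(0)
    log_img = np.log(img)                                    # eq. (2): work in the log domain
    illumination = ndimage.gaussian_filter(log_img, sigma)   # low-frequency illumination estimate
    high_pass = log_img - illumination                       # F(.) realized as a high-pass filter
    reflectance = np.exp(high_pass)                          # eq. (3): back to the intensity domain
    reflectance -= reflectance.min()                         # rescale to [0, 1] for further processing
    return reflectance / reflectance.max()
```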

2) Detection of Blood Vessels Using a Matched Filter: The matched filter kernel can be expressed as [22]:

$$ k(x, y) = -\exp\!\left(-\frac{x^2}{2\sigma^2}\right), \quad \forall\, |y| \le \frac{L}{2} \qquad (4) $$

where L is the length of the vessel segment and σ characterizes the spread of the intensity profile. The kernel must be rotated to all possible vessel orientations and the maximum response is registered. Several papers found that rotating in steps of 15° is adequate to detect vessels, which results in a filter bank with 12 kernels. In [22], the authors used the parameter values L = 9 and σ = 2. A Gaussian curve has infinitely long double-sided tails; the tails are truncated at u = ±3σ. A neighborhood N is defined such that:

$$ N = \{(u, v) : |u| \le 3\sigma,\ |v| \le \tfrac{L}{2}\} \qquad (5) $$

The rotation matrix is expressed by:

$$ R_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i \\ \sin\theta_i & \cos\theta_i \end{bmatrix} \qquad (6) $$

Let $p_i$ be a point belonging to the neighborhood N, given by:

$$ p_i = [u \ \ v] = [x \ \ y]\, R_i \qquad (7) $$

The corresponding weights in kernel i (i = 1, ..., 12) are given by:

$$ k_i(x, y) = -\exp\!\left(-\frac{u^2}{2\sigma^2}\right), \quad \forall\, p_i \in N \qquad (8) $$

The convolution mask used in this algorithm is given by:

$$ k'_i(x, y) = k_i(x, y) - m_i \qquad (9) $$

where $m_i = \frac{1}{a}\sum_{p_i \in N} k_i(x, y)$ and a denotes the number of points in N. The matched filter response (MFR) has been applied to the enhanced image; Fig. 2b shows the filter output. The MFR image has been thresholded to obtain the binary image containing blood vessels as connected components (Fig. 2c). A length filter then cleans the thresholded image by removing isolated objects. An example of the final segmented vessel image after this post-processing step is shown in Fig. 2d.
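A minimal Python sketch of the filter bank defined by eqs. (4)-(9), assuming the parameter values L = 9 and σ = 2 quoted from [22]. The threshold and length-filter parameters are not reproduced here since the paper does not state them.

```python
import numpy as np
from scipy import ndimage

def matched_filter_bank(L=9, sigma=2.0, n_orient=12):
    """Zero-mean Gaussian matched filter kernels for eqs. (4)-(9)."""
    kernels = []
    half = int(np.ceil(max(3 * sigma, L / 2.0)))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    for i in range(n_orient):
        theta = i * np.pi / n_orient                      # 15 degree steps
        u = xs * np.cos(theta) + ys * np.sin(theta)       # rotated coordinates, eq. (7)
        v = -xs * np.sin(theta) + ys * np.cos(theta)
        in_N = (np.abs(u) <= 3 * sigma) & (np.abs(v) <= L / 2.0)   # neighborhood N, eq. (5)
        k = np.zeros_like(u)
        k[in_N] = -np.exp(-u[in_N] ** 2 / (2 * sigma ** 2))        # weights k_i, eq. (8)
        k[in_N] -= k[in_N].mean()                                  # subtract m_i, eq. (9)
        kernels.append(k)
    return kernels

def matched_filter_response(image, kernels):
    """Maximum response over all orientations (the MFR image)."""
    image = image.astype(np.float64)
    responses = [ndimage.convolve(image, k, mode='nearest') for k in kernels]
    return np.max(responses, axis=0)
```

The MFR image returned by this sketch would then be thresholded and cleaned with a length filter to obtain the binary vessel map, as described above.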


Figure 2. (a) Enhanced image, (b) matched filter result, (c) thresholded image, (d) post-processed image, (e) hand-labeled result [40].

B. Blood Vessel Classification

There are different features that can be used to differentiate arteries from veins:

Arteries are brighter than veins; arteries are thinner than neighboring veins; the central reflex is smaller in veins and wider in arteries; and arteries and veins generally alternate near the optic disc before branching out. The reported features often provide enough information to successfully classify a vessel as an artery or a vein. However, in many cases they do not suffice; for more detail see [37]. Therefore, in this paper we extract a feature vector for each vessel segment and then classify it with a neural network as a venous or arterial vessel. As preprocessing steps, the skeleton of the segmented vessel tree is extracted and the vessel tree is partitioned into sections between crossings. An example can be seen in Fig. 3.
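A possible sketch of this preprocessing step, assuming scikit-image skeletonization and a simple branch-point rule (more than two skeleton neighbours) to split the tree into sections; this is a simplified stand-in for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_sections(binary_vessels):
    """Skeletonize a binary vessel map and split it at branch/crossing points.

    Returns (labels, n): a label image in which every vessel section between
    crossings carries its own integer label, and the number of sections.
    """
    skel = skeletonize(binary_vessels.astype(bool))
    # number of skeleton pixels in each 3x3 neighbourhood (centre included)
    counts = ndimage.convolve(skel.astype(np.uint8), np.ones((3, 3), np.uint8),
                              mode='constant', cval=0)
    # a skeleton pixel with more than two skeleton neighbours is treated
    # as a branching or crossing point and removed
    branch_points = skel & (counts > 3)
    sections = skel & ~branch_points
    labels, n = ndimage.label(sections, structure=np.ones((3, 3), dtype=int))
    return labels, n
```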

1) Vessel Profile Based Feature Vectors: In this paper we use the vessel profile as the feature vector of each extracted vessel segment. For each skeleton pixel, we determine the direction orthogonal to the general direction of the vessel. Then, we draw a line across the vessel and read the vessel profile along this line into a vector, as shown in Fig. 4.

The profile consists of the color information of all three color channels. The vessels have different widths, leading to different lengths of the profile feature vectors; hence, we use spline interpolation to obtain a uniform length for each feature vector. The profile feature vectors of all three color channels are shown in Fig. 5.
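A hedged sketch of how such a profile could be sampled and length-normalized in Python; the profile half-width, the number of resampled points, and the bilinear sampling are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage
from scipy.interpolate import splev, splrep

def cross_profile(rgb_image, center, direction, half_width=15, n_samples=30):
    """Sample an RGB cross-sectional profile orthogonal to the vessel direction.

    center: (row, col) skeleton pixel; direction: unit vector of the local
    vessel direction. Each colour channel is resampled to n_samples points
    with a cubic spline; half_width and n_samples are assumed values.
    """
    normal = np.array([-direction[1], direction[0]], dtype=np.float64)  # orthogonal direction
    t = np.linspace(-half_width, half_width, 2 * half_width + 1)
    rows = center[0] + t * normal[0]
    cols = center[1] + t * normal[1]

    profile = []
    for c in range(3):                                    # R, G, B channels
        raw = ndimage.map_coordinates(rgb_image[..., c].astype(np.float64),
                                      [rows, cols], order=1)   # bilinear sampling
        tck = splrep(np.arange(raw.size), raw, s=0)       # cubic spline through the samples
        profile.append(splev(np.linspace(0, raw.size - 1, n_samples), tck))
    return np.concatenate(profile)                        # feature vector of length 3 * n_samples
```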

Principal component analysis (PCA) is widely used as a good preprocessing method to reduce the size of the network. Hence, we apply combined multiclass PCA [38] to the feature vector. We define arteries as one class and veins as the other class to define the feature classes for the combined multiclass PCA.

Figure 3. (a) Original image, (b) MFR output, (c) vessel tree skeleton, (d) vessel tree partitioned into sections between crossings.

Figure 4. (a) Example of a cropped vein from a fundus image, (b) binary image, (c) red line indicating the direction orthogonal to the main vessel direction at the current pixel.


Figure 5. Interpolated vessel profile for (a) the green channel, (b) the red channel, and (c) the blue channel.

The resulting set of principal components then consists of three sections: the components derived from PCA carried out on (1) all training data samples, (2) artery data samples only, and (3) vein data samples only. The resulting first principal components of all three classes can be seen in Fig. 6.
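A minimal sketch of the combined multiclass PCA step, assuming scikit-learn's PCA and eight components per section (so that the three stacked sections give the 24-dimensional input used below); the exact per-section component counts are not stated in the text.

```python
import numpy as np
from sklearn.decomposition import PCA

def combined_multiclass_pca(X, y, n_components=8):
    """Combined multiclass PCA on the profile feature vectors [38].

    X: (n_samples, n_features) profiles; y: 0 for artery, 1 for vein.
    Components are learned on (1) all samples, (2) arteries only and
    (3) veins only, then stacked; with n_components = 8 per section the
    projected vectors have the 24 dimensions used as MLP input (assumed split).
    """
    pca_all = PCA(n_components=n_components).fit(X)
    pca_artery = PCA(n_components=n_components).fit(X[y == 0])
    pca_vein = PCA(n_components=n_components).fit(X[y == 1])

    def project(samples):
        # concatenate the projections onto the three component sets
        return np.hstack([pca_all.transform(samples),
                          pca_artery.transform(samples),
                          pca_vein.transform(samples)])
    return project
```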

Figure 6. Example of the first important eigenprofiles based on the profile feature vector composition method.

2) Classification: Finally, the obtained features are introduced as the input vector of a Multi-Layer Perceptron (MLP) to classify blood vessels as arterial or venous. The input layer size depends on the feature vector dimension obtained in the previous section; we obtained the best result with 24 PCA components. We compared one and two hidden layers with various hidden layer sizes between five and forty neurons. The best result was obtained with twenty-eight hidden neurons in one layer, that is, the size combination (24-28-2) for the input, hidden and output layers.
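A sketch of the (24-28-2) topology using scikit-learn's MLPClassifier, with one hidden layer of 28 neurons and the two output classes handled by the classifier; the activation, iteration count and train/test split are assumed values, not the paper's settings.

```python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_vessel_classifier(features, labels):
    """Train the (24-28-2) MLP on PCA-projected profile features.

    features: (n_samples, 24) array; labels: 0 for artery, 1 for vein.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(28,), activation='logistic',
                        max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)                    # 24 inputs -> 28 hidden -> 2 classes
    accuracy = accuracy_score(y_test, mlp.predict(X_test))
    return mlp, accuracy
```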

III. RESULTS AND DISCUSSION

The vessel segmentation methodology described in the previous section was evaluated using two publicly available databases, the DRIVE [39] and STARE [40] databases. These databases provide manual segmentations for performance evaluation and have been widely used by researchers to test their vessel segmentation methodologies. The efficiency analysis can be performed by measuring the receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate. A ROC curve has been obtained for each test image in the DRIVE and STARE databases. The closer the curve approaches the top left corner, the better the performance of the system. The performance measure most commonly extracted from the ROC curve is the area under the curve (AUC), which is 1 for an ideal system. An example of the ROC curve is shown in Fig. 7; the corresponding AUC value is 0.94514.
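A sketch of how such a ROC curve and its AUC could be computed by comparing the matched filter response against a manual segmentation from DRIVE or STARE. Restricting the evaluation to the field of view is a commonly used convention assumed here, not stated in the paper.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

def segmentation_roc(mfr_image, manual_mask, fov_mask=None):
    """ROC curve and AUC of a matched filter response against a manual mask.

    mfr_image: filter response; manual_mask: binary ground truth from DRIVE
    or STARE; fov_mask optionally restricts the evaluation to the field of
    view (an assumed convention).
    """
    if fov_mask is None:
        fov_mask = np.ones(manual_mask.shape, dtype=bool)
    scores = mfr_image[fov_mask].ravel()
    truth = manual_mask[fov_mask].ravel().astype(int)
    fpr, tpr, _ = roc_curve(truth, scores)       # sweep the threshold on the MFR
    return fpr, tpr, auc(fpr, tpr)
```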

Figure 7. ROC curve for classification on the STARE Database (im0081)

The mean AUC values over the STARE and DRIVE databases are 0.9346 and 0.924, respectively. In the next step, the profile-based feature vectors have been introduced as input vectors to the MLP network. For the classifier, we conducted tests on four 605 x 700 retinal images. Our approach achieves 95.32% correctly classified vessel pixels. An example of the classification of both arterial and venous vessels is shown in Fig. 8.

IV. CONCLUSION

In this work, the retinal images are first enhanced using a homomorphic filter. Blood vessels are then detected using the matched filter; once this step is achieved and a blood vessel is cropped, a vessel profile based feature vector is extracted. This vector is then used as input to the neural classifier. A good classification rate of blood vessels into arterial and venous vessels has been obtained on the database at the end of this process.


Figure 8. (a) Cropped arterial vessel, (b) successful arterial classification, (c) cropped vein vessel, (d) successful vein classification.

REFERENCES

[1] M. A. Rawi, M. Qutaishat and M. Arrar, An improved matched filter for blood vessel detection of digital retinal images, Computers in Biology and Medicine, vol. 37(2), pp. 262-267, 2007.

[2] K. Stapor, A. Switonski, R. Chrastek and G. Michelson, Segmentation of fundus eye images using methods of mathematical morphology for glaucoma diagnosis, Lecture Notes in Computer Science, pp. 41-48, 2004.

[3] K. Stapor and A. Switonski, Automatic analysis of fundus eye images using mathematical morphology and neural networks for supporting glaucoma diagnosis, Machine Graphics Vision, pp. 65-78, 2004.

[4] E. F. Riveron and N. G. Guimeras, Extraction of blood vessels in ophthalmic color images of human retinas, Lecture Notes in Computer Science, pp. 118-126, 2006.

[5] A. V. Stanton, B. Wasan, A. Cerutti, S. Ford, R. Marsh, P. P. Sever, S. A. Thom, and A. D. Hughes, Vascular network changes in the retina with age and hypertension, Journal of Hypertension, vol. 13, pp. 1724-1728, 1995.

[6] C. Heneghan, J. Flynn, M. O'Keefe, and M. Cahill, Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis, Med. Image Anal., vol. 6, pp. 407-429, 2002.

[7] E. Grisan and A. Ruggeri, A divide and impera strategy for the automatic classification of retinal vessels into arteries and veins, in Proc. 25th Int. Conf. IEEE Eng. Med. Biol. Soc., pp. 890-893, 2003.

[8] Y. Hatanaka, H. Fujita, M. Aoyama, H. Uchida, and T. Yamamoto, Automated analysis of the distributions and geometries of blood vessels on retinal fundus images, Proc. SPIE Med. Imag. 2004: Image Process., vol. 5370, pp. 1621-1628, 2004.

[9] M. Foracchia, E. Grisan, and A. Ruggeri, Extraction and quantitative description of vessel features in hypertensive retinopathy fundus images, in Book Abstracts 2nd Int. Workshop Comput. Asst. Fundus Image Anal., 2001.

[10] X. Gao, A. Bharath, A. Stanton, A. Hughes, N. Chapman, and S. Thom, A method of vessel tracking for vessel diameter measurement on retinal images, in Proc. ICIP, pp. 881-884, 2001.

[11] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, and R. L. Kennedy, Measurement of retinal vessel widths from fundus images based on 2-D modeling, IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1196-1204, 2004.

[12] D. E. Becker, A. Can, J. N. Turner, H. L. Tanenbaum, and B. Roysam, Image processing algorithms for retinal montage synthesis, mapping and real-time location determination, IEEE Trans. Biomed. Eng., vol. 45, no. 1, pp. 115-118, 1998.

[13] H. Shen, B. Roysam, C. V. Stewart, J. N. Turner, and H. L. Tanenbaum, Optimal scheduling of tracing computations for real-time vascular landmark extraction from retinal fundus images, IEEE Trans. Inf. Technol. Biomed., vol. 5, pp. 77-91, 2001.

[14] A. Hoover and M. Goldbaum, Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels, IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951-958, Aug. 2003.

[15] A. R. Youssif, A. Z. Ghalwash, and A. R. Ghoneim, Optic disc detection from normalized digital fundus images by means of a vessels direction matched filter, IEEE Trans. Med. Imag., vol. 27, no. 1, pp. 11-18, 2008.

[16] A. R. Youssif, A. Z. Ghalwash, and A. R. Ghoneim, Optic disc detection from normalized digital fundus images by means of a vessels direction matched filter, IEEE Trans. Med. Imag., vol. 27, pp. 11-18, 2008.

[17] P. C. Siddalingaswamy and K. G. Prabhu, Automatic localization and boundary detection of optic disc using implicit active contours, International Journal of Computer Applications, vol. 1, pp. 1-5, 2010.

[18] A. Aquino, M. E. Gegúndez-Arias, and D. Marín, Automated optic disc detection in retinal images of patients with diabetic retinopathy and risk of macular edema, International Journal of Biological and Life Sciences, vol. 8, no. 2, pp. 87-92, 2012.

[19] H. Li and O. Chutatape, Automated feature extraction in color retinal images by a model based approach, IEEE Trans. Biomed. Eng., vol. 51, no. 2, pp. 246-254, Feb. 2004.

[20] F. Zana and J. C. Klein, A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform, IEEE Trans. Med. Imag., vol. 18, no. 5, pp. 419-428, May 1999.

[21] G. K. Matsopoulos, P. A. Asvestas, N. A. Mouravliansky, and K. K. Delibasis, Multimodal registration of retinal images using self organizing maps, IEEE Trans. Med. Imag., vol. 23, no. 12, pp. 1557-1563, Dec. 2004.

[22] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Trans. Med. Imag., vol. 8, no. 3, pp. 263-269, Sep. 1989.

[23] T. Chanwimaluang and G. Fan, An efficient algorithm for extraction of anatomical structures in retinal images, in Proc. ICIP, pp. 1193-1196, 2003.

[24] A. Hoover, V. Kouznetsova, and M. Goldbaum, Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response, IEEE Trans. Med. Imag., vol. 19, no. 3, pp. 203-211, Mar. 2000.

[25] M. E. Martinez-Perez, A. D. Hughes, A. V. Stanton, S. A. Thom, A. A. Bharath, and K. H. Parker, Segmentation of retinal blood vessels based on the second directional derivative and region growing, in Proc. ICIP, pp. 173-176, 1999.

[26] M. E. Martinez-Perez, A. D. Hughes, A. V. Stanton, S. A. Thom, A. A. Bharath, and K. H. Parker, Scale-space analysis for the characterization of retinal blood vessels, in Medical Image Computing and Computer-Assisted Intervention (MICCAI'99), C. Taylor and A. Colchester, Eds. New York: Springer, Lecture Notes Comput. Sci., vol. 1679, pp. 90-97, 1999.

[27] Y. Wang and S. C. Lee, A fast method for automated detection of blood vessels in retinal images, in IEEE Comput. Soc. Proc. Asilomar Conf., pp. 1700-1704, 1998.

[28] R. M. Cesar, Jr. and H. F. Jelinek, Segmentation of retinal fundus vasculature in nonmydriatic camera images using wavelets, in Angiography and Plaque Imaging: Advanced Segmentation Techniques, J. S. Suri and S. Laxminarayan, Eds. Boca Raton, FL: CRC, pp. 193-224, 2003.

[29] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, Comparative study of retinal vessel segmentation methods on a new publicly available database, in Proc. SPIE Med. Imag., M. Fitzpatrick and M. Sonka, Eds., vol. 5370, pp. 648-656, 2004.

[30] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, Automated localization of the optic disc, fovea, and retinal blood vessels from digital colour fundus images, Br. J. Ophthalmol., vol. 83, pp. 902-911, 1999.

[31] L. Zhou, M. S. Rzeszotarski, L. J. Singerman, and J. M. Chokreff, The detection and quantification of retinopathy using digital angiograms, IEEE Trans. Med. Imag., vol. 13, no. 4, pp. 619-626, Dec. 1994.

[32] Y. Tolias and S. Panas, A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering, IEEE Trans. Med. Imag., vol. 17, pp. 263-273, 1998.

[33] O. Chutatape, L. Zheng, and S. M. Krishnan, Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters, in Proc. 20th Annu. Int. Conf. IEEE Engineering in Medicine and Biology, pp. 3144-3149, 1998.

[34] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Trans. Med. Imag., vol. 23, no. 4, pp. 501-509, Apr. 2004.

[35] X. Jiang and D. Mojon, Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 1, pp. 131-137, Jan. 2003.

[36] B. Phong, Illumination for computer generated pictures, Commun. ACM, vol. 18, pp. 311-317, 1975.

[37] C. Kondermann, D. Kondermann, and M. Yan, Blood vessel classification into arteries and veins in retinal images, Proc. SPIE, vol. 6512, 2007.

[38] C. Nieuwenhuis and M. Yan, Knowledge based image enhancement using neural networks, in Proc. 18th International Conference on Pattern Recognition (ICPR), pp. 814-817, 2006.

[39] Research Section, Digital Retinal Images for Vessel Extraction (DRIVE) Database. Utrecht, The Netherlands, Univ. Med. Center Utrecht, Image Sci. Inst. [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE

[40] STARE Project Website. Clemson, SC, Clemson Univ. [Online]. Available: http://www.ces.clemson.edu/
