
2012 5th International Conference on BioMedical Engineering and Informatics (BMEI 2012)

Blur Detection in Fundus Images

Alexandra A. Chernomorets and Andrey S. Krylov

Laboratory of Mathematical Methods of Image Processing
Faculty of Computational Mathematics and Cybernetics

Lomonosov Moscow State University

Abstract—The paper addresses the problem of retinal image quality assessment; in particular, the problem of blur detection in retinal images is considered. The blur value is obtained from an analysis of blood vessel edge widths, since the vasculature is the structure least prone to structural distortion even when blurred. The paper presents a model of a general edge based on Gaussian filtering and an algorithm for edge width estimation. The values of the algorithm parameters are chosen according to experiments on synthetic data.

I. INTRODUCTION

Automatic quality analysis of medical images has become an important research direction. However, automatic blur detection in fundus images has not received enough attention. Fundus images are acquired at different sites, using different cameras operated by people with varying levels of experience. This results in images of varying quality, and in some of them pathologies cannot be clearly detected or are artificially introduced. Such low quality images should be examined by an ophthalmologist and reacquired if needed.

Current approaches to retinal image quality determination are based on global image intensity histogram analysis [1] or on the analysis of the global edge histogram in combination with localized image intensity histograms [2]. In both approaches, a small set of excellent quality images was used to construct a mean histogram; the difference between the mean histogram and the histogram of a given image then indicated the image quality. Nevertheless, these methods cannot be used as general fundus image quality classifiers, because it is easy to produce images of poor quality that nonetheless match the methods' characteristics of acceptable quality.

In [3] a correlation between image blurring and visibility of the vessels was pointed out. By running a vessel segmentation algorithm and measuring the area of detected vessels over the entire image, the authors estimate whether the image is good enough to be used for screening. The main drawback is that the classification between good and poor quality requires a threshold.

In [4] a classification of images into quality classes is proposed. The vasculature is analyzed in a circular area around the macula, and the presence of small vessels in this area is used as an indicator of image quality. The presented results are good, but the method requires a segmentation of the vasculature and of other major anatomical structures to find the region of interest around the fovea.

This paper presents a novel approach to blur detection in fundus images based on estimating the blur level of the automatically segmented vasculature. The method does not require segmentation of the optic disk or the macula, and only a rough segmentation of the blood vessels is necessary. This allows us to perform vasculature segmentation on downsampled images and significantly decrease the computational time.

II. EDGE WIDTH ESTIMATION

A. Edge model

We consider the following edge model: the general edge is the result of convolving the ideal step edge of unit height with a Gaussian kernel with some standard deviation σ. This assumption gives a unique correspondence between the edge and a numeric value, namely the parameter σ of the Gaussian kernel, which we take as the value of the edge width.

We define the ideal step edge function of unit height as

H(x) = { 1, x ≥ 0; 0, x < 0 }.   (1)

The edge E_σ(x) (see fig. 1) is defined as

E_σ(x) = [H ∗ G_σ](x).   (2)

Fig. 1. Edge model.

Note that the function E_σ(x) satisfies

E_σ(x) = E_{σ'}((σ'/σ)·x).   (3)
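As a minimal illustration of this edge model (our own sketch, not the authors' code), the following Python fragment builds a discretized E_σ by Gaussian filtering of a sampled unit step; scipy's gaussian_filter1d stands in for convolution with G_σ.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Sampled ideal step edge H(x) of unit height, eq. (1).
    x = np.arange(-50, 51, dtype=float)
    step = (x >= 0).astype(float)

    def edge(sigma):
        """Discrete approximation of E_sigma(x) = [H * G_sigma](x), eq. (2)."""
        return gaussian_filter1d(step, sigma, mode='nearest')

    # By property (3), changing sigma only rescales the x axis:
    # edge(sigma) sampled at x matches edge(sigma_prime) sampled at (sigma_prime / sigma) * x.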

B. Edge width

For the edge width estimation we use the unsharp masking approach. Let U_{σ,α}[E_{σ_0}](x) be the result of unsharp masking applied to the edge E_{σ_0}(x):

U_{σ,α}[E_{σ_0}](x) = (1 + α)·E_{σ_0}(x) − α·[E_{σ_0} ∗ G_σ](x) = (1 + α)·E_{σ_0}(x) − α·E_{√(σ_0² + σ²)}(x).   (4)


Using (3) and taking σ = σ_0 = σ_1 in (4), we obtain

U_{σ_1,α}[E_{σ_1}](x) = (1 + α)·E_{σ_1}(x) − α·E_{√2·σ_1}(x)
                      = (1 + α)·E_{σ_2}((σ_2/σ_1)·x) − α·E_{√2·σ_2}((σ_2/σ_1)·x)
                      = U_{σ_2,α}[E_{σ_2}]((σ_2/σ_1)·x).   (5)

Due to (5), the unsharp masking (4) has the property that the intensity values of the corresponding extrema of U_{σ,α}[E_σ](x), attained at x^{max}_σ and x^{min}_σ, are the same for all σ > 0 with fixed α. Thus, taking into account the monotonicity of U_{σ,α}[E_{σ_0}](x^{max}_σ) as a function of σ, which follows from (5) and the properties of Gaussian functions, and fixing the value of α and U_E = U_{σ_E,α}[E_{σ_E}](x^{max}_{σ_E}) for some σ_E, we get

U_{σ,α}[E_{σ_0}](x^{max}_σ) ≤ U_E  for σ < σ_0,
U_{σ,α}[E_{σ_0}](x^{max}_σ) ≥ U_E  for σ > σ_0.   (6)
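A small numerical check of this property can be sketched as follows (the synthetic edge with σ_0 = 3, the tested σ values, and the use of scipy's gaussian_filter1d are our assumptions for illustration only): the printed maxima stay approximately below U_E for σ < σ_0 and exceed it for σ > σ_0.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    x = np.arange(-100, 101, dtype=float)
    sigma0, alpha, u_e = 3.0, 4.0, 1.24

    # Synthetic edge E_{sigma0}(x): a unit step blurred with a Gaussian of width sigma0.
    edge = gaussian_filter1d((x >= 0).astype(float), sigma0, mode='nearest')

    for sigma in (1.0, 2.0, 3.0, 4.0, 5.0):
        # Unsharp masking of eq. (4): U_{sigma,alpha}[E_{sigma0}](x).
        u = (1.0 + alpha) * edge - alpha * gaussian_filter1d(edge, sigma, mode='nearest')
        print(f"sigma = {sigma:.1f}, max U = {u.max():.3f}  (U_E = {u_e})")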

C. The edge width estimation algorithm

The final edge width estimation algorithm is as follows (a code sketch is given after the list):

1. Given values: α, U_E, and a one-dimensional edge profile E_{σ_0}(x).
2. For σ = σ_min to σ_max with step σ_step:
   compute U_{σ,α}[E_{σ_0}](x);
   find the local maximum x^{max}_σ of U_{σ,α}[E_{σ_0}](x);
   if U_{σ,α}[E_{σ_0}](x^{max}_σ) ≥ U_E, set result = σ and stop the loop.
3. Output: result.
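A compact Python sketch of this loop is given below; the function name and minor details are our assumptions, the parameter defaults are taken from Section II-D, and scipy's gaussian_filter1d is used for the Gaussian blur.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def estimate_edge_width(profile, alpha=4.0, u_e=1.24,
                            sigma_min=0.5, sigma_step=0.1, sigma_max=None):
        # The profile is assumed to be normalized to the range [0, 1] (unit edge height).
        profile = np.asarray(profile, dtype=float)
        if sigma_max is None:
            sigma_max = len(profile) / 8.0      # sigma_max = l / 8, see Section II-D
        sigma = sigma_min
        while sigma <= sigma_max:
            # Unsharp masking of eq. (4) applied to the measured profile.
            blurred = gaussian_filter1d(profile, sigma, mode='nearest')
            u = (1.0 + alpha) * profile - alpha * blurred
            # Step 2 of the algorithm: stop once the maximum reaches U_E.
            if u.max() >= u_e:
                return sigma
            sigma += sigma_step
        return sigma_max

For a profile normalized to unit height, the returned value approximates the σ of the Gaussian kernel that produced the edge.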

D. Automatic selection of the parameters of the edge width estimation algorithm

The algorithm requires the values of α, σ_min, σ_max, σ_step, and U_E, which depends on α. The research in [5] has shown that the best value of α is 4 (see Table I). The value of U_E for α = 4 is 1.24.

The value of σ_min is fixed at 0.5; this is the smallest possible value of the edge blur due to the digitization of the image. The value of σ_step is also fixed, at 0.1, as this is an acceptable accuracy for the task.

The value of σ_max depends on the size l of the edge profile sample. In order to reach the best approximation in the edge width estimation, the size of the Gaussian filter with standard deviation σ is taken as 8σ. Thus the value of σ_max is set to l/8.

III. APPLICATION OF THE EDGE WIDTH ESTIMATION ALGORITHM TO BLUR DETECTION IN FUNDUS IMAGES

For the problem of edge width estimation in fundus images we use the following parameters: α = 4, σ_min = 0.3, σ_max = 10, σ_step = 0.1. For this α the value of U_E equals 1.24. The choice of α does not affect the result of the algorithm; it is only important that the value of U_E is computed according to the chosen value of α.

TABLE I
THE RESULTS OF THE EDGE WIDTH ESTIMATION ALGORITHM FOR DIFFERENT VALUES OF PARAMETER α

σ \ α    2      2.5    3      3.5    4      4.5    5
0.5      0.6    0.6    0.6    0.6    0.6    0.6    0.6
1.5      1.5    1.5    1.5    1.5    1.5    1.6    1.6
2.5      2.5    2.6    2.6    2.6    2.5    2.5    2.5
3.5      3.6    3.5    3.5    3.5    3.5    3.5    3.6
4.5      4.5    4.5    4.5    4.6    4.5    4.5    4.5
5.5      5.5    5.6    5.5    5.5    5.5    5.5    5.6
6.5      6.5    6.5    6.5    6.6    6.5    6.5    6.5
7.5      7.6    7.5    7.5    7.5    7.5    7.5    7.5
8.5      8.5    8.5    8.5    8.5    8.5    8.5    8.5
9.5      9.5    9.5    9.5    9.5    9.5    9.5    9.5
10.5     10.5   10.5   10.5   10.5   10.5   10.5   10.5
11.5     11.5   11.5   11.5   11.5   11.5   11.5   11.5
12.5     12.5   12.5   12.5   12.5   12.5   12.5   12.5
13.5     13.5   13.5   13.5   13.5   13.5   13.5   13.5
14.5     14.5   14.5   14.5   14.5   14.5   14.5   14.5
15.5     15.5   15.5   15.5   15.5   15.5   15.5   15.5
16.5     16.5   16.5   16.5   16.5   16.5   16.5   16.5
17.5     17.5   17.5   17.5   17.5   17.5   17.5   17.5
18.5     18.5   18.5   18.5   18.5   18.5   18.5   18.5
19.5     19.5   19.5   19.5   19.5   19.5   19.5   19.5

In order to compute the blur value of the image we use the following algorithm (a high-level sketch follows the list):

a) extract the vessels,
b) extract edge profiles from the vessels,
c) compute the blur value of the image, taking into account the edge heights.
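A high-level sketch of steps (a)–(c) might look as follows; all helper names are hypothetical and correspond to the sketches given in Sections III-A to III-C below (segment_vessels stands for steps 2–5 of Section III-A, which are not fully sketched).

    def image_blur_value(image, max_vessel_width):
        """Hypothetical orchestration of the blur estimation pipeline (names are ours)."""
        green = image[:, :, 1]                                        # green channel, Section III-A
        corrected = luminosity_correction(green)                      # preprocessing, step 1
        vessel_mask = segment_vessels(corrected, max_vessel_width)    # steps 2-5 of Section III-A
        profiles = extract_edge_profiles(corrected, vessel_mask, max_vessel_width)  # Section III-B
        return blur_value(profiles, estimate_edge_width)              # Section III-C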

A. Vessel detection

The algorithm for vessel segmentation was previously described in [6]. It includes the following steps (a code sketch of steps 1–3 is given after the list):

1. Preprocessing of retinal images [7]: this step consists of luminosity correction of the green channel. The algorithm implies the computation of a background image using the Mahalanobis distance

D(x, y) = | (I(x, y) − µ(x, y)) / (σ(x, y) + σ_ε) |,

where µ(x, y) is the mean value of the image I(x, y), σ(x, y) is its standard deviation, and σ_ε > 0 is a small coefficient. A pixel is classified as a background pixel if D(x, y) < D_T. We use D_T = 0.7. The corrected image is computed as

I_C(x, y) = | (I(x, y) − µ_B(x, y)) / (σ_B(x, y) + σ_ε) |,

where µ_B(x, y) and σ_B(x, y) are the mean value and the standard deviation of the background pixels in the neighborhood of (x, y).

2. Alternating Sequential Filtering (ASF) [8]: we take the negative part of the difference between the corrected image and the result of ASF. Essentially, ASF is a sequence of morphological openings and closings with a structuring element (SE) of increasing size. We use circular SEs with sizes from 1 to double the maximum vessel width. At present we use fixed values of the maximum vessel width for images of different sizes.

3. Computation of the maximum of Gabor filter responses over 4 scales and 6 directions.


4. Rough segmentation of vessel centers using edge detection.

5. Segmentation using morphological amoebas [9], [10], i.e. a dilation of the set of starting points acquired in the previous step by a structuring element of adjustable shape for each pixel.
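A possible Python sketch of steps 1–3 using scipy and scikit-image is given below. The neighborhood size, σ_ε, and the Gabor frequencies are not specified in the paper, so the values used here are assumptions for illustration only; local statistics are computed over a square window.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from skimage.morphology import disk, opening, closing
    from skimage.filters import gabor

    def local_mean_std(img, size):
        """Local mean and standard deviation over a size x size window."""
        mean = uniform_filter(img, size)
        sq_mean = uniform_filter(img * img, size)
        return mean, np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))

    def luminosity_correction(green, size=51, d_t=0.7, sigma_eps=1e-2):
        """Step 1: background detection via the distance D and corrected image I_C."""
        mu, sigma = local_mean_std(green, size)
        d = np.abs((green - mu) / (sigma + sigma_eps))
        background = d < d_t                                  # background pixels: D(x, y) < D_T
        # Mean/std of background pixels in the neighborhood of every pixel.
        cnt = uniform_filter(background.astype(float), size) + 1e-12
        mu_b = uniform_filter(np.where(background, green, 0.0), size) / cnt
        sq_b = uniform_filter(np.where(background, green * green, 0.0), size) / cnt
        sigma_b = np.sqrt(np.maximum(sq_b - mu_b ** 2, 0.0))
        return np.abs((green - mu_b) / (sigma_b + sigma_eps))

    def asf_vessel_map(corrected, max_vessel_width):
        """Step 2: negative part of (corrected image - ASF), circular SEs of growing size."""
        filtered = corrected.copy()
        for r in range(1, 2 * max_vessel_width + 1):
            se = disk(r)
            filtered = closing(opening(filtered, se), se)
        return np.minimum(corrected - filtered, 0.0)

    def gabor_max_response(img, frequencies=(0.05, 0.1, 0.2, 0.4), n_dirs=6):
        """Step 3: maximum of Gabor filter responses over 4 scales and 6 directions."""
        resp = np.full_like(img, -np.inf, dtype=float)
        for f in frequencies:
            for k in range(n_dirs):
                real, _ = gabor(img, frequency=f, theta=k * np.pi / n_dirs)
                resp = np.maximum(resp, real)
        return resp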

An example of vessel segmentation is shown in fig. 2.

B. Extraction of the vessel profiles

We analyze the profiles of the boundaries of the widest vessels. In order to obtain these vessel segments, we use the following algorithm (a sketch is given after the list):

1. Skeletonize the vessel mask.
2. Keep only segments whose length is greater than double the maximum vessel width.
3. Sort the list of segments in descending order of segment width.
4. Take the 50 widest segments.
5. For every segment, find its center and direction; the edge profile is taken as the cross-section of the segment.

The algorithm can be sped up if only the area near the optic disk is analyzed.
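A possible implementation sketch of this procedure is given below. Connected skeleton components are used as segments, the segment width is estimated from a distance transform, and the cross-section direction is taken from the principal axis of each component; these simplifications and all names are our assumptions, not the authors' code.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, label, map_coordinates
    from skimage.morphology import skeletonize

    def extract_edge_profiles(image, vessel_mask, max_vessel_width, n_segments=50):
        skel = skeletonize(vessel_mask)                       # step 1: skeletonize the mask
        half_width = distance_transform_edt(vessel_mask)      # distance to the vessel border
        labels, n = label(skel)
        segments = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if len(ys) <= 2 * max_vessel_width:               # step 2: keep long segments only
                continue
            width = 2.0 * half_width[ys, xs].mean()           # mean vessel width along the segment
            segments.append((width, ys, xs))
        segments.sort(key=lambda s: s[0], reverse=True)       # step 3: widest first
        profiles = []
        for width, ys, xs in segments[:n_segments]:           # step 4: the 50 widest segments
            pts = np.column_stack([ys, xs]).astype(float)
            center = pts.mean(axis=0)                          # step 5: segment center ...
            _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
            normal = vt[1]                                     # ... and cross-section direction
            t = np.linspace(-2 * max_vessel_width, 2 * max_vessel_width,
                            4 * max_vessel_width + 1)
            coords = center[:, None] + normal[:, None] * t[None, :]
            profiles.append(map_coordinates(image, coords, order=1))
        return profiles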

C. Blur value estimation

In order to obtain an adequate result when comparing two fundus images, we should take into account not only the average edge widths but also the amplitudes of the edges. The median of the weighted edge widths is taken as the blur value of the image, with the weights taken as the inverse amplitude of the edge, 10/A.

The algorithm for computing the blur value of the image is as follows (a sketch is given after the list):

1. Compute the edge amplitude A_i and normalize every edge profile so that its values belong to the interval from 0 to 1.
2. For every normalized profile, compute the edge width W_i.
3. Scale the obtained values by the inverse original edge amplitudes, obtaining the value 10·W_i/A_i that characterizes the edge.
4. Compute the median of the weighted edge widths.
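A short sketch of this weighting scheme follows; the amplitude is taken as the value range of the raw profile, which is our assumption, and edge_width can be the estimate_edge_width sketch of Section II-C.

    import numpy as np

    def blur_value(profiles, edge_width):
        """edge_width: a callable such as the estimate_edge_width sketch of Section II-C."""
        weighted_widths = []
        for p in profiles:
            p = np.asarray(p, dtype=float)
            amplitude = p.max() - p.min()                  # edge amplitude A_i
            if amplitude <= 0:
                continue
            normalized = (p - p.min()) / amplitude         # step 1: rescale the profile to [0, 1]
            w = edge_width(normalized)                     # step 2: edge width W_i
            weighted_widths.append(10.0 * w / amplitude)   # step 3: weighted value 10 * W_i / A_i
        return float(np.median(weighted_widths))           # step 4: median of the weighted widths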

IV. RESULTS

Most of the publicly available databases contain only images of good quality. As an example, the proposed algorithm was tested on retinal images from the DRIVE database [11]. The average blur value for the images from this database was found to be 0.29.

The method was also tested on real images of different quality from ophthalmological practice.

The results for images of different quality are shown in figs. 3 and 4.

The results show that images with a blur value of less than 1 are good enough for pathology analysis.

Fig. 2. Vessel segmentation (panels: original image and segmentation result).


Fig. 3. Blur value estimation for DRIVE image 01 test (estimated blur value 0.28).

Fig. 4. Blur value estimation for real fundus images (estimated blur values 0.63, 0.82, and 1.09).

V. CONCLUSION

The paper presents a solution to the problem of detecting blurred retinal images.

The application of the proposed algorithm to the classification of real retinal images has shown good results. Possible applications of the algorithm include its use during the acquisition of fundus images and for the preliminary control of input data for retinal image CAD systems.

ACKNOWLEDGMENT

The work was supported by the Federal Targeted Programme “R&D in Priority Fields of the S&T Complex of Russia 2007–2013”.

REFERENCES

[1] S. Lee and Y. Wang, “Automatic retinal image quality assessment and enhancement,” in Proceedings of SPIE Image Processing, 1999, pp. 1581–1590.

[2] M. Lalonde, L. Gagnon, and M. C. Boucher, “Automatic visual quality assessment in optical fundus images,” in Proceedings of Vision Interface, 2001, pp. 259–264.

[3] D. B. Usher, M. Himaga, and M. J. Dumskyj, “Automated assessment of digital fundus image quality using detected vessel area,” in Proceedings of Medical Image Understanding and Analysis, 2003, pp. 81–84.

[4] A. D. Fleming, S. Philip, K. A. Goatman, J. A. Olson, and P. F. Sharp, “Automated assessment of diabetic retinal image quality based on clarity and field definition,” Invest Ophthalmol Vis Sci, vol. 47(3).

[5] A. A. Chernomorets and A. V. Nasonov, “Deblurring in fundus images,” in 22nd International Conference on Computer Graphics GraphiCon'2012, 2012, pp. 76–79.

[6] A. A. Chernomorets, A. S. Krylov, A. V. Nasonov, A. S. Semashko, V. V. Sergeev, V. S. Akopyan, A. S. Rodin, and N. S. Semenova, “Automated processing of retinal images,” in 21st International Conference on Computer Graphics GraphiCon'2011, September 2011, pp. 78–81.

[7] G. D. Joshi and J. Sivaswamy, “Colour retinal image enhancement based on domain knowledge,” in 6th Indian Conf. on Computer Vision, Graphics and Image Processing, 2008, pp. 591–598.

[8] J. Serra, Image Analysis and Mathematical Morphology. Vol. 2: Theoretical Advances, Academic Press, London, 1988.

[9] M. Welk, M. Breuß, and O. Vogel, “Differential equations for morphological amoebas,” Lecture Notes in Computer Science, vol. 5720/2009, pp. 104–114, 2009.

[10] R. Lerallut, E. Decencière, and F. Meyer, “Image filtering using morphological amoebas,” Image and Vision Computing, vol. 25, pp. 395–404, 2007.

[11] A. Can, H. Shen, J. N. Turner, H. L. Tanenbaum, and B. Roysam, “Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms,” IEEE Transactions on Information Technology in Biomedicine, vol. 3, pp. 125–138, 1999.