
Classifying image analysis techniques from their output

C. Guada 1∗, D. Gómez 2, J.T. Rodríguez 1, J. Yáñez 1, J. Montero 1,3

1 Facultad de Ciencias Matemáticas, Complutense University, Plaza de las Ciencias 3, Madrid, 28040, Spain
E-mail: [email protected], [email protected], [email protected], [email protected]

2 Facultad de Estudios Estadísticos, Complutense University, Av. Puerta de Hierro s/n, Madrid, 28040, Spain
E-mail: [email protected]

3 Instituto de Geociencias IGEO (CSIC-UCM), Plaza de las Ciencias 3, Madrid, 28040, Spain

Abstract

In this paper we discuss some of the main image processing techniques in order to propose a classification based upon the output these methods provide. Although a particular image analysis technique can be supervised or unsupervised, and may or may not allow fuzzy information at some stage, each technique has usually been designed to focus on a specific objective, and their outputs differ according to each objective. Thus, they are in fact different methods. But due to the essential relationship between them, they are quite often confused. In particular, this paper pursues a clarification of the differences between image segmentation and edge detection, among other image processing techniques.

Keywords: Image segmentation, image classification, edge detection, fuzzy sets, machine learning, graphs.

1. Introduction

Image analysis or image processing has become a scientific hot topic during the last decades, particularly because of the increasing amount of relevant information being stored in this format. Image analysis has a wide range of applications in different areas such as remote sensing, image and data storage for transmission in business applications, medical imaging, acoustic imaging and security, among many other fields. Digital image processing techniques are increasingly demanded in all areas of science and industry.

Many different techniques are considered “image processing” or “image analysis” techniques. Usually, each technique is appropriate for a small range of tasks or for a specific problem.

For example, machine learning algorithms used in image analysis help in finding solutions to many problems in speech recognition [1], robotics [2] and vision [3]. Computer vision uses image processing algorithms to extract significant information automatically from images. A number of algorithms are available to analyze images and recognize objects in them, and depending on the delivered output and the manner in which the images were encoded, specific learning methodologies are needed [4].

Another well-known example is image segmentation. Image segmentation methods try to simplify an image into something easier to analyze. Usually the image is transformed into segments or regions, which are supposed to be connected and to represent sets of homogeneous pixels. Nevertheless, it is not clear when certain techniques should be considered image segmentation methods, and the same situation applies to other image processing techniques. It is quite common to classify two methods in the same group of techniques even if the output and the information they provide are very different. This is the case, for example, of edge detection methods [5], image clustering [6] and image thresholding algorithms [7], all of them classified as “image segmentation procedures” despite the obvious differences in their outputs.

There is a significant difference between image classification methods, edge detection methods, image segmentation methods and hierarchical image segmentation methods. Despite the strong relationship between them, all these methods address different problems and produce different outputs. However, these methods are not always clearly differentiated in the literature, and are sometimes even confused. This paper poses a critical view on some of these techniques in order to show the conceptual differences between them. Depending on how a particular algorithm detects objects in an image and shows the output, the method might be understood as performing classification, detection, segmentation or hierarchical segmentation.

Hence, the main objective of this paper is to present a classification of a set of widely used image processing algorithms, attending to the problems they face, the learning scheme they are based on, and the representational framework they utilize. Some new image analysis concepts and methods are also provided.

In order to understand the remainder of this paper, the following notation is adopted. The image under consideration is modeled as a two-dimensional function, where x and y are the coordinates in a plane and f(x,y) represents each pixel by a fixed number of measurable attributes [8]. Formally, an image can be defined as I = {f(x,y); x = 1,...,n and y = 1,...,m}, which can be represented computationally as:

• Binary ≡ f(x,y) ∈ {0,1}.
• Gray ≡ f(x,y) ∈ {0,...,255}.
• RGB ≡ f(x,y) ∈ {0,...,255}³.

A binary image is represented as a matrix allowing two possible values for each cell, i.e., two-tone colors (usually 0 refers to black and 1 to white). Similarly, a gray scale image may be defined as a two-dimensional function f(x,y) where the amplitude of f at any pair of coordinates (x,y) refers to the intensity (gray level) of the image at that point. A color image, instead, is defined by a combination of individual 2D images. For instance, the RGB color system represents a color image by three components (red, green and blue); that is, it is an array of three matrices of equal size, where the color intensity of a pixel is represented by the three components [8].
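For illustration only (this sketch is not part of the original paper), the three representations map directly onto numpy array shapes and value ranges:

```python
import numpy as np

# Hypothetical 2x2 examples of each representation.
binary = np.array([[0, 1],
                   [1, 0]], dtype=np.uint8)     # f(x,y) in {0,1}
gray = np.array([[0, 128],
                 [200, 255]], dtype=np.uint8)   # f(x,y) in {0,...,255}
rgb = np.zeros((2, 2, 3), dtype=np.uint8)       # f(x,y) in {0,...,255}^3
rgb[0, 0] = (255, 0, 0)                         # one pure-red pixel
```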

Figures 1, 2 and 3 show a binary image, a grayscale image and an RGB image, respectively.

The remainder of this paper is organized as follows: a review on image classification is presented in Section 2. Section 3 is devoted to fuzzy image classification. Some edge detection techniques are shown in Section 4, which is extended to fuzzy edge detection in Section 5. Image segmentation and some methods addressing this problem are analyzed in Section 6. Section 7 pays attention to hierarchical image segmentation. In Section 8 fuzzy image segmentation is addressed. Finally, some conclusions are drawn.


Fig. 1. Binary image.

Fig. 2. Gray level image.

Fig. 3. RGB image [9].

2. Image classification

Classification techniques have been widely used in image processing to extract information from images by assigning each pixel to a class. Therefore, two types of outputs can be obtained at the end of an image classification procedure. The first kind of output is a thematic map, where pixels are accompanied by a label for identification with a class. The second kind of output is a table summarizing the number of image pixels belonging to each class. Both supervised and unsupervised learning schemes are applied for this image classification task [10]. Next we provide an overview of these methods.

2.1. Supervised classification

Supervised classification is one of the most frequently used techniques for image analysis, in which the supervisor provides information to the system about the categories present in each pattern of the training set.

Neural networks [11], genetic algorithms [12], support vector machines [13], Bayesian networks [14], maximum likelihood classification [15] and minimum distance classification [16] are some techniques for supervised classification. Depending on certain factors such as data source, spatial resolution, available classification software and desired output type, it is more appropriate to use one of them to process a given image.

Supervised classification procedures are essential analytical tools for extracting quantitative information from images. According to Richards and Jia [4], the basic steps to implement these techniques are the following:

• Decide the classes to be identified in the image.
• Choose representative pixels of each class, which will form the training data.
• Estimate the parameters of the classifier algorithm using the training data.
• Categorize all the remaining pixels in the image with the classifier into each of the desired regions.
• Summarize the classification results in tables or display the segmented image.
• Evaluate the accuracy of the final model using a test data set.

A variety of classification techniques such as those mentioned above have been used in image analysis. In addition, some of those methods have been used jointly (e.g., neural networks [17,18], genetic algorithms with multilevel thresholds [19] or some variants of support vector machines [20]).

In general, supervised classification provides good results if representative pixels of each class are chosen for the training data [4]. Next, we pay attention to methods for unsupervised classification.

2.2. Unsupervised classification, clustering

Unsupervised classification techniques are intended to identify groups of individuals having common characteristics from the observation of several variables for each individual. In this sense, the main goal is to find regularities in the input data, because unlike supervised classification techniques, there is no supervisor to provide existing classes. The procedure tries to find groups of patterns, focusing on those that occur more frequently in the input data [21].


According to Nilsson [22], unsupervised classification consists of two stages to find patterns:

• Form a partition R of the set Ξ of unlabeled training patterns. The partition separates Ξ into mutually exclusive and exhaustive subsets R known as clusters.
• Design a classifier based on the labels assigned to the training patterns by the partition.

Clustering [23] is perhaps the most widespread method of unsupervised classification in image processing, which obtains groups or clusters in the input image.

Particularly, cluster analysis looks for patterns in images, gathering pixels within natural groups that make sense in the context studied [21]. The aim is that the proposed clustering has to be somehow optimal, in the sense that observations within a cluster have to be similar, and in turn dissimilar to those of other clusters. In this way, we try to maximize the homogeneity of objects within the clusters while maximizing the heterogeneity between aggregates. In principle, the number of groups to form and the groups themselves are unknown [24].

A standard cluster analysis creates groups that are as homogeneous as possible, while the difference between the various groups is as large as possible. The analysis consists of the following main steps:

• First, we need to measure the similarity and dissimilarity between two separate objects (similarity measures the closeness of objects, so closer values suggest more homogeneity).
• Next, similarity and dissimilarity between clusters are measured so that the difference between groups becomes larger and group observations become as close as possible.

In this manner, the analysis can be divided into these two basic steps, and in both steps we can use correlation measures or any distance [25] as similarity measure. Some similarity and dissimilarity measures for quantitative features are shown in [26].

Correlation measurements refer to the measurement of similarity between two objects in a symmetric matrix. If the objects tend to be more similar, the correlations are high. Conversely, if the objects tend to be more dissimilar, the correlations are low. However, because these measures indicate similarity by pattern matching between the features and do not consider the magnitudes of the observations, they are rarely used in cluster analysis.

Distance measurements represent the similarity and proximity between observations. Unlike correlation measurements, these similarity measures of distance or proximity are widely used in cluster analysis. Among the distance measures available, the most common are the Euclidean distance, the standardized distance metric and the Mahalanobis distance [26].

Let X (n×p) be a data matrix containing n observations of p variables. The similarity between the observations can be described by the symmetric matrix D (n×n):

$$D = \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{pmatrix}$$

where d_ij represents the distance between observations i and j. If d_ij is a distance, then d′_ij = max_{i,j}{d_ij} − d_ij represents a measure of proximity [25].
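As a minimal illustration (not from the paper), both D and the proximity transform can be computed with scipy; the 5×3 data matrix here is hypothetical:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

X = np.random.rand(5, 3)                      # n = 5 observations of p = 3 variables

D = squareform(pdist(X, metric='euclidean'))  # symmetric n x n distance matrix
D_prox = D.max() - D                          # proximity: d'_ij = max{d_ij} - d_ij
```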

Once a measure of similarity is calculated, we are able to proceed with the formation of clusters. A popular algorithm is k-means clustering [27]. The algorithm was first proposed by Lloyd in 1957 [28] and published much later [29]. The k-means clustering algorithm classifies the pixels of the input image into multiple classes according to their distance from each other. The points are clustered around centroids µ_i, i = 1,...,k, obtained by minimizing the Euclidean distance. Let x_1 and x_2 be two pixels; the Euclidean distance between them is:

$$d(x_1,x_2) = \sqrt{\sum_{i=1}^{N} (x_{1i} - x_{2i})^2}. \qquad (1)$$

Thus, the sum of distances to the k centers is minimized, and the objective is to minimize the maximum distance from every point to its nearest center [30].

Then, having selected the centroids, each observation is assigned to the most similar cluster based on the distance between the observation and the mean of the cluster. The average of the cluster is then recalculated, beginning an iterative process of centroid location depending on how observations are assigned to clusters, until the criterion function converges [31].

Clustering techniques have been used to perform unsupervised classification in image processing since 1969, when an algorithm for finding boundaries in remote sensing data was proposed [32]. Clustering implies grouping the pixels of an image in the multispectral space [4]. Therefore, clustering seeks to identify a finite set of categories and clusters to classify the pixels, having previously defined a criterion of similarity between them. A number of algorithms are available [33].

The resulting image of the k-means clustering algorithm with k = 3 over Figure 3 is shown in Figure 4.

Fig. 4. Cluster’s result.

Thus, to achieve its main goal, any clustering algorithm must address three basic issues: i) how to measure similarity or dissimilarity; ii) how the clusters are formed; and iii) how many groups are formed. These issues have to be addressed so that the principle of maximizing the similarity between individuals within each cluster and the dissimilarity between clusters can be met [33]. A detailed review on clustering analysis is presented in [6].

Thus, summarizing, in the framework of supervised classification pixels are assigned to a known number of predefined groups. However, in cluster analysis the number of cluster groups and the groups themselves are not known in advance [24]. In any case, image classification is a complex process for pattern recognition based on the contextual information of the analyzed images [34]. A more detailed discussion of image classification methods is presented in [35].

3. Fuzzy image classification

In this section, we present an overview of fuzzy image classification. As in Section 2, these methods are divided into supervised techniques and unsupervised techniques.

3.1. Supervised fuzzy classification

Fuzzy classification techniques can be considered extensions of classical classification techniques: we simply need to take advantage of the fuzzy logic proposed by Zadeh, in such a way that, when restricted to a crisp framework, we recover the techniques presented in Section 2.1. Similarly to traditional classification techniques, fuzzy classification includes both supervised and unsupervised methods.

Fuzzy logic [36] was introduced in the mid-sixties of the last century as an extension of classical binary logic. Some objects have a state of ambiguity regarding their membership to a particular class. In this way, a person may be both “tall” and “not tall” to some extent. The difference lies in the degree of membership assigned to the fuzzy set “tall” and its complement “not tall”. Therefore, the intersection of a fuzzy set and its complement is not always an empty set. Also, conflicting views can simultaneously appear within a fuzzy set [37]. Such vagueness is at the roots of fuzzy logic, and brings specific problems that are difficult to treat with classical logic. Still, these problems are real and should be addressed [38].

A fuzzy set C refers to a class of objects with membership degrees. This set is usually characterized by a continuous membership function µ_C(x) that assigns to each object a grade of membership in [0,1], indicating the degree of membership of x in C. The closer µ_C(x) is to 1, the greater the degree of membership of x in C [36]. Classical overviews of fuzzy sets can be found in [36,39,40,41,42,43].

Hence, classes are defined according to certain attributes, and objects that possess these attributes belong to the respective class. Thus, in many applications of fuzzy classification, we consider a set of fuzzy classes 𝒞. The degree of membership µ_C(x) of each object x ∈ X to each class C ∈ 𝒞 then has to be determined [44].

The membership function is given by µ_C : X → [0,1] for each class C ∈ 𝒞 [44], where a quite complete framework, subject to learning, is proposed.

Fuzzy rule-based systems (FRBS) are methods of reasoning where knowledge is expressed by means of linguistic rules. FRBS are widely used in various contexts, as for example fuzzy control [45] or fuzzy classification [46]. FRBS allow the simultaneous use of certain parts of this knowledge to perform inference.

The main stages of an FRBS are listed below:

• Fuzzification, understood as the transformation of the crisp input data into fuzzy data [47].
• Construction of the fuzzy rule base, expressed through linguistic variables, i.e., IF antecedent THEN consequent [48,49,50].
• Inference over the fuzzy rules, where we find the consequence of each rule and then combine these consequences to get an output distribution [51].
• Defuzzification, to produce a crisp value from the fuzzy or linguistic output obtained from the previous inference process [37,52].

FRBS are commonly applied to classification problems. These classifiers are called fuzzy rule-based classification systems (FRBCS) or fuzzy classifiers [53].

Fuzzy classification is the process of grouping objects into a family of fuzzy sets by assigning degrees of membership to each of them, defined by the truth value of a fuzzy propositional function [44].

One of the advantages of fuzzy classifiers is that they do not assign two similar observations to different classes when the observations are near the boundaries of the classes. Also, FRBCS facilitate smooth degrees of membership in the transitions between different classes.

3.2. Unsupervised classification, fuzzy clustering

Regarding unsupervised techniques, perhaps the most widely used one is fuzzy c-means, which extends clustering to a fuzzy framework.

The fuzzy c-means algorithm [54] constitutes an alternative approach to the methods defined in subsection 2.2, based on fuzzy logic. Fuzzy c-means provides a powerful method for cluster analysis.

The objective function of the fuzzy c-means algorithm, given by Bezdek [55], is as follows:

$$J_m(U,v) = \sum_{k=1}^{N} \sum_{i=1}^{c} u_{ik}^m \, \|y_k - v_i\|_A^2 \qquad (2)$$

where Y = {y_1, y_2,...,y_N} ⊂ ℝⁿ are the observations, c is the number of clusters in Y (2 ≤ c < n), m (1 ≤ m < ∞) is the weighting exponent which represents the degree of fuzziness, U ∈ M_fc is a fuzzy partition matrix, v = (v_1, v_2,...,v_c) is the vector of centers, v_i = (v_{i1}, v_{i2},...,v_{in}) is the centroid of cluster i, ‖·‖_A denotes the A-norm on ℝⁿ, and A is a positive-definite (n×n) weight matrix [56].

According to Bezdek et al. [56], the fuzzy c-means algorithm basically consists of the following four steps (a minimal numeric sketch is given after the list):

• Set c, m, A and ‖·‖_A, and choose an initial matrix U^(0) ∈ M_fc.
• At step k, k = 0,1,...,L_MAX, calculate the means v_i, i = 1,2,...,c, with
$$v_i = \frac{\sum_{k=1}^{N} (u_{ik})^m y_k}{\sum_{k=1}^{N} (u_{ik})^m}, \quad 1 \le i \le c.$$
• Calculate the updated membership matrix U^(k+1) = [u_{ik}^{(k+1)}] with
$$u_{ik} = \left( \sum_{j=1}^{c} \left( \frac{d_{ik}}{d_{jk}} \right)^{2/(m-1)} \right)^{-1}, \quad 1 \le k \le N,\ 1 \le i \le c.$$
• Compare U^(k+1) and U^(k) in any convenient matrix norm. If ‖U^(k+1) − U^(k)‖ < ε, stop; otherwise set U^(k) = U^(k+1) and return to the second step.
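These four steps condense into a few lines of numpy; the following is a sketch under the simplifying assumption that A is the identity (so ‖·‖_A is the Euclidean norm), not the authors' implementation:

```python
import numpy as np

def fuzzy_cmeans(Y, c, m=2.0, eps=1e-5, max_iter=100):
    """Fuzzy c-means on data Y (N x p), Euclidean norm (A = identity)."""
    N = Y.shape[0]
    U = np.random.dirichlet(np.ones(c), size=N)    # U^(0): each row sums to 1
    for _ in range(max_iter):
        W = U.T ** m                               # (c x N) weights u_ik^m
        v = W @ Y / W.sum(axis=1, keepdims=True)   # centroid update
        # d[k, i] = distance from observation y_k to center v_i
        d = np.linalg.norm(Y[:, None, :] - v[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.linalg.norm(U_new - U) < eps:        # convergence test
            U = U_new
            break
        U = U_new
    return U, v
```

For a grayscale image, Y can simply be taken as gray.reshape(-1, 1).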

Fuzzy techniques have found wide application in image processing (some studies can be found in [57,58,59]). They often complement existing techniques and can contribute to the development of more robust methods [60]. Applying the fuzzy c-means technique to Figure 2, the resulting image is shown in Figure 5.

Fig. 5. Fuzzy c-means’s result.


As has been noted, fuzzy c-means produces a fuzzy partition [61] of the input image, characterizing the membership of each pixel to all groups by a degree of membership [56].

In conclusion, fuzzy classification assigns differ-ent degrees of membership of an object to the de-fined categories.

4. Edge detection

Edge detection has been an important topic in image processing for its ability to provide relevant information about the image and the boundaries of objects of interest.

Edge detection is a useful and basic operation to acquire information about an image, such as its size, shape and even texture. This technique is considered the most common approach to detect significant discontinuities in the intensity values of an image, which is achieved by taking spatial derivatives of first and second order (typically with the gradient and Laplacian, respectively). That is, non-smooth changes in the image function f(x,y) can be determined with the derivatives. Thus, the operators that describe edges are typically expressed by partial derivatives [8,62].

The result obtained with this technique is a binary image such that those pixels where sharp changes have occurred appear bright, while the other pixels remain dark. This output in turn allows a significant reduction in the amount of information while preserving the important structural properties of the image.

The first derivative is determined by the gradient, which is able to determine a change in the intensity function through a vector pointing in the direction of maximum growth of the image's intensity [62]. Thus, the gradient of a two-dimensional intensity function f(x,y) is defined as:

$$\nabla f \equiv \mathrm{grad}(f) = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}. \qquad (3)$$

The magnitude of this vector is:

$$\nabla f = \mathrm{mag}(\nabla f) = \sqrt{g_x^2 + g_y^2} = \sqrt{(\partial f/\partial x)^2 + (\partial f/\partial y)^2}. \qquad (4)$$

And the direction is given by the angle:

$$\alpha(x,y) = \tan^{-1}\left( \frac{g_y}{g_x} \right). \qquad (5)$$

However, the errors associated with the approximation may require adequate consideration. For example, the edges may not be detected equally well in all directions, thus leading to erroneous direction estimation (anisotropic edge detection) [10].

Therefore, the value of the gradient is related to the change of intensity in areas where it is variable, and is zero where the intensity is constant [8].

The second derivative is generally computed using the Laplace operator or Laplacian, which for a two-dimensional function f(x,y) is formed from the second-order derivatives [8]:

$$\nabla^2 f(x,y) = \frac{\partial^2 f(x,y)}{\partial x^2} + \frac{\partial^2 f(x,y)}{\partial y^2}. \qquad (6)$$

Remarkably, the Laplacian is rarely used directly because of its sensitivity to noise and because its magnitude can generate double edges. Moreover, it is unable to detect the direction of the edges, so it is used in combination with other techniques [8].

Thus, the basic purpose of edge detection is to find the pixels in the image where the intensity or brightness function changes abruptly, in such a way that if the first derivative of the intensity is greater in magnitude than a predetermined threshold, or if the second derivative crosses zero, then a pixel is declared to be an edge [8,62]. In practice, this can be determined through the convolution of the image with masks, which involves taking a 3×3 matrix of numbers and multiplying it pixel by pixel with a 3×3 section of the image. Then, the products are added and the result is placed in the center pixel of the image [63].

There are several algorithms implementing this method, using various masks [8,64]: for example, the Sobel operator [65], the Prewitt operator [66], the Roberts cross operator [67], the Laplacian of Gaussian filter [5], the moment-based operator [68], etc. Next, we look more closely at two of the most used operators.


4.1. Sobel operator

The Sobel operator was first introduced in 1968 by Sobel [65] and later formally accredited and described in [69]. This algorithm uses a discrete differential operator and finds the edges by calculating an approximation to the gradient of the intensity function at each pixel of an image through a 3×3 mask. That is, the gradient at the center pixel of a neighborhood is calculated by the Sobel detector [8]:

$$G_x \Rightarrow \begin{vmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{vmatrix} \qquad G_y \Rightarrow \begin{vmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{vmatrix}$$

where the neighborhood is represented as:

$$\begin{vmatrix} z_1 & z_2 & z_3 \\ z_4 & z_5 & z_6 \\ z_7 & z_8 & z_9 \end{vmatrix}$$

Consequently:

$$g = \sqrt{g_x^2 + g_y^2} = \sqrt{[(z_7 + 2z_8 + z_9) - (z_1 + 2z_2 + z_3)]^2 + [(z_3 + 2z_6 + z_9) - (z_1 + 2z_4 + z_7)]^2}. \qquad (7)$$

Thus, a pixel (x,y) is identified as an edge pixel if g > T at this point, where T refers to a predefined threshold.
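A direct rendering of these masks with a discrete convolution might look as follows (a sketch, not the paper's code; gray is assumed to be a 2-D grayscale array and T = 100 an arbitrary threshold):

```python
import numpy as np
from scipy.ndimage import convolve

Kx = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)   # G_x mask
Ky = np.array([[-1,  0,  1],
               [-2,  0,  2],
               [-1,  0,  1]], dtype=float)   # G_y mask

gx = convolve(gray.astype(float), Kx)        # first-derivative estimates
gy = convolve(gray.astype(float), Ky)
g = np.sqrt(gx**2 + gy**2)                   # gradient magnitude, Eq. (7)

T = 100                                      # assumed threshold
edges = g > T                                # edge pixels where g > T
```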

The resulting image of edge detection using the Sobel operator on Figure 2 is shown in Figure 6.

Fig. 6. Edge detection’s result - Sobel.

In summary, this technique estimates the gradient of an image at a pixel by the vector sum of the four possible estimates of the single central gradients (each single central gradient estimate is a vector sum of an orthogonal pair of vectors) in a 3×3 neighborhood. The vector sum operation provides an average over the directions of gradient measurement. The four gradients will have the same value if the density function is planar throughout the neighborhood, but any difference refers to deviations in the flatness of the function in the neighborhood [65].

4.2. Canny operator

This edge detection algorithm is very popular and attempts to find edges by looking for a local maximum of the gradient in the gradient direction [70]. The gradient is calculated by the derivative of a Gaussian filter with a specified standard deviation σ to reduce noise. At each pixel, the local gradient g = √(g_x² + g_y²) and its direction (Eq. (5)) are determined. A pixel is then an edge when its strength is a local maximum in the gradient direction, and thus the algorithm marks border pixels with 1 and those that are not at the gradient peak with 0 [11].

This method detects both strong edges (very marked) and weak edges, using two thresholds T_1 and T_2 with T_1 < T_2. The strong edges are those above T_2, and the weak edges are those between T_1 and T_2. Then, the algorithm links both types of edges, but keeps only those weak edges connected to strong edges. This ensures that the retained weak edges are genuine edges [8].
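This double-threshold scheme is available off the shelf; for instance, a usage sketch with arbitrary thresholds T_1 = 100 and T_2 = 200 (not values from the paper):

```python
import cv2

# gray: an 8-bit grayscale image; Canny applies Gaussian smoothing,
# gradient computation and hysteresis thresholding internally.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)
```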

The resulting image of edge detection using the Canny operator on Figure 2 is shown in Figure 7.

Fig. 7. Edge detection’s result - Canny.

Summarizing, edge detection techniques use a gradient operator and subsequently evaluate, through a predetermined threshold, whether an edge has been found or not [71]. Therefore, edge detection is a process by which the analysis of an image is simplified by reducing the amount of processed information, while in turn retaining valuable structural information about object edges [70]. A more detailed analysis of edge detection techniques can be found in [5].

5. Fuzzy edge detection

Frequently, the edges detected through an edge detection process (such as those previously presented) are false edges, because these classic methods are sensitive to various issues such as noise or thick edges, among others, and require a great amount of computation, so that discretization may present problems [64,72].

Fuzzy logic has proved to be well suited to address the uncertainty characterizing the process of extracting information from an image [73]. Hence, many algorithms have included fuzzy logic in the entire process or at some particular stage of the image processing [72]. An overview of models and methods based on fuzzy sets for image processing and image understanding can be found in [74].

A first idea for applying fuzzy logic in edge detection was proposed by Pal and King [75]. Subsequently, Russo [76] designed fuzzy rules for edge detection. Different algorithms have been created to address fuzzy edge detection since then.

Fuzzy edge detection detects classes of pixels corresponding to the variation in gray intensity level in different directions. This is achieved by using a specified membership function for each class, in such a way that the class assigned to each pixel corresponds to the highest membership value [64]. Russo [76] proposed that an efficient way to process each pixel of the image must consider the set of neighboring pixels belonging to a rectangular window, called a fuzzy mask.

Let K be the number of gray levels in an image; the appropriate fuzzy sets to process that image are made up of membership functions defined on the universe [0,...,K−1] [76]. Fuzzy sets are created to represent the intensities of each variable, and they are associated with the linguistic variables “black” and “white”.

The resulting image of fuzzy edge detection applied to Figure 2 is shown in Figure 8.

Fig. 8. Fuzzy edge detection’s result.

The fuzzy rules used by the fuzzy inference system for edge detection amount to “color a pixel white if it belongs to a uniform region; otherwise, color the pixel black” (a numeric sketch is given after Figure 9):

(i) IF I_x is zero AND I_y is zero THEN I_out is white.
(ii) IF I_x is not zero OR I_y is not zero THEN I_out is black.

where I_x and I_y are the image gradients along the x-axis and y-axis, and they have a zero-mean Gaussian membership function. Also, triangular membership functions, white and black, are specified for I_out.

Fig. 9. Membership functions of the inputs/outputs.
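A minimal numeric reading of these two rules might look as follows. This is a sketch, not the authors' inference system: σ is an assumed spread for the zero-mean Gaussian memberships, and the defuzzification uses crisp white/black prototypes rather than the triangular output sets of Figure 9:

```python
import numpy as np

def fuzzy_edges(Ix, Iy, sigma=25.0):
    """Evaluate rules (i)-(ii): white where both gradients are 'zero'."""
    mu_zero = lambda g: np.exp(-g**2 / (2 * sigma**2))      # zero-mean Gaussian

    w_white = np.minimum(mu_zero(Ix), mu_zero(Iy))          # rule (i): AND as min
    w_black = np.maximum(1 - mu_zero(Ix), 1 - mu_zero(Iy))  # rule (ii): OR as max

    # Crisp output: weighted average of white (255) and black (0) prototypes.
    return 255 * w_white / (w_white + w_black + 1e-12)
```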

As mentioned previously, there are several versions of fuzzy edge detection algorithms in the literature. For instance, there is an algorithm that uses an extended Epanechnikov function as the membership function for each class to be assigned to each pixel [64], and another algorithm uses operators that detect specific patterns of neighboring pixels for the fuzzy rules [77]. A study representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation, is presented in [78]. Another possibility is that fuzzy logic is used only in a certain part of the edge detection process [72].

Fuzzy edge detection problems in which each pixel has a degree of membership to the border can be seen in some studies [79,80,81,82,83]. Moreover, in these studies the concept of fuzzy boundary has been introduced.

6. Image segmentation

Around 1970, image segmentation boomed as an advanced research topic. In computer vision, image segmentation is a process that divides an image into multiple segments, in order to simplify or modify the representation of the image into something more meaningful and easier to analyze [84]. Actually, there are many applications of image segmentation [85], as it is useful in robot vision [86], detection of cancerous cells [87], identification of knee cartilage and gray matter/white matter segmentation in MR images [88], among others.

The goal of segmentation is to subdivide an image into regions or non-overlapping objects with similar properties. For each pixel, it is determined whether it belongs to an object or not, producing a binary image. The subdivision level depends on the problem to be solved. Segmentation stops when the objects of interest have been identified and isolated. However, in most cases the processed images are not trivial, and hence the segmentation process becomes more complicated.

The most basic attributes for segmentation are the intensity of a monochrome image and the color components of a color image, as well as object boundaries and texture, which are very useful. In summary, the regions of a segmented image should be uniform and homogeneous with regard to some property such as color, texture or intensity [84,89]. Also, the quality of the segmentation depends largely on the output. However, although image segmentation is an essential image processing technique with applications in many fields, there is no single method applicable to all kinds of images; current methods have been classified according to certain characteristics of their algorithms [84,90].

Formally, image segmentation can be defined as a process of partitioning the image into a set of non-intersecting regions such that each group of connected pixels is homogeneous [85,91]. It can be defined as follows:

Image segmentation partitions an image R into C subregions R_1, R_2,...,R_C for a uniformity predicate P, such that the following items are satisfied:

(i) ⋃_{i=1}^{C} R_i = R.
(ii) R_i ∩ R_j = ∅ for all i and j, i ≠ j.
(iii) P(R_i) = TRUE for all i = 1,2,...,C.
(iv) P(R_i ∪ R_j) = FALSE for any pair of adjacent regions R_i and R_j.
(v) R_i is connected, i = 1,2,...,C.

The first condition states that the segmentation must be complete, i.e., the union of the segmented regions R_i must contain all pixels in the image. The second condition indicates that regions have to be disjoint, that is, there is no overlap between them. The third condition states that pixels belonging to the same region should have similar properties. The fourth condition states that pixels from adjacent regions R_i and R_j differ in some properties. Finally, the fifth condition expresses that each region must be connected [8].

The next sections describe the main methods of image segmentation. They are classified into four categories: thresholding methods (6.1), watershed methods (6.2), region segmentation methods (6.3) and graph partitioning methods (6.4).

6.1. Thresholding methods

Thresholding [92] is considered one of the simplest and most common techniques, and it has been widely used in image segmentation. In addition, it has served in a variety of applications such as object recognition, image analysis and interpretation of scenes. Its main features are that it reduces the complexity of the data and simplifies the process of recognition and classification of objects in the image [93].

Thresholding assumes that all pixels whose value, such as the gray level, is within a certain range belong to a certain class [94]. In other words, thresholding methods separate objects from the background [7]. So, according to a threshold, either preset by the researcher or determined by a technique that sets its initial value, a gray image is converted into a binary image [84]. An advantage of this procedure is that it is useful for segmenting images with very bright objects and with dark and homogeneous backgrounds [33,100].

Let f(x,y) be a pixel of an image I with x = 1,...,n and y = 1,...,m, and let B = {a,b} be a pair of binary gray levels. Then, a pixel belongs to the object if and only if its intensity value is greater than a threshold value T, and it belongs to the background otherwise [8,33]. The resulting binary image of thresholding the image function at gray level T is given by:

$$f_T(x,y) = \begin{cases} a & \text{if } f(x,y) > T \\ b & \text{if } f(x,y) \le T \end{cases} \qquad (8)$$

In this way, the main step of this technique is to select the threshold value T, but this may not be a trivial task at all [84]. However, many approaches to obtain a threshold value have been developed, as well as a number of performance evaluation indicators [93]. For example, the threshold T can remain constant for the entire image (global threshold), or it may change while the pixels in the image are being classified (variable threshold) [33]. Hence, some thresholding algorithms choose the threshold T either automatically, or through methods that vary the threshold according to the properties of local neighborhoods of the image (local or regional threshold) [8,84]. Some algorithms are: basic global thresholding [95], minimum error thresholding [96], iterative thresholding [97], entropy-based thresholding [98] and Otsu thresholding [99].

According to Sezgin and Sankur [7], thresholding techniques can be divided into six groups based on the information the algorithm manipulates:

(i) Histogram shape-based techniques [100], where an analysis of the peaks, valleys and curvatures of the histogram is performed.
(ii) Clustering-based methods [101], where the gray levels are grouped into two classes, background and object, or alternatively modeled as a mixture of two Gaussians.
(iii) Entropy-based techniques [102], in which the algorithms use the entropy of the object and the background, the cross entropy between the original and binary image, etc.
(iv) Object attribute-based techniques [103,104], which seek measures of similarity between the gray-level and binary images.
(v) Spatial techniques [105], using probability distributions and/or correlations between pixels.
(vi) Local techniques [106], which adapt the threshold value at each pixel according to the local characteristics of the image.

Possibly the best known and most used thresholding techniques are those that focus on the shape of the histogram. A threshold value can be selected by inspection of the histogram of the image: if two different modes are distinguished, then a value separating them is chosen. Then, if necessary, this value is adjusted by trial and error to generate better results. However, updating this value by an automatic approach can be carried out through an iterative algorithm [8] (a minimal sketch follows the list):

(i) An initial estimate of a global threshold T is made.
(ii) The image is segmented by the value T into two groups of pixels, G_1 and G_2, according to Eq. (8).
(iii) The average intensities t_1 and t_2 of the two groups of pixels are calculated.
(iv) A new threshold value is calculated: T = (t_1 + t_2)/2.
(v) Steps 2 to 4 are repeated until subsequent iterations produce a difference in thresholds lower than a given sensitivity or precision (∆T).
(vi) The image is segmented according to the last value of T.
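A minimal sketch of steps (i)-(v) (not the paper's code; gray is assumed to be a 2-D grayscale array and delta plays the role of ∆T):

```python
import numpy as np

def iterative_threshold(gray, delta=0.5):
    T = float(gray.mean())                     # (i) initial global estimate
    while True:
        g1 = gray[gray > T]                    # (ii) split by T, cf. Eq. (8)
        g2 = gray[gray <= T]
        T_new = 0.5 * (g1.mean() + g2.mean())  # (iii)-(iv) new threshold
        if abs(T_new - T) < delta:             # (v) convergence within delta
            return T_new
        T = T_new
```

The final segmentation, step (vi), is then simply gray > iterative_threshold(gray).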

Therefore, the basic idea is to analyze the histogram of the image; ideally, if two dominant modes exist, there is a remarkable valley with the threshold T in the middle [84]. However, depending on the image, histograms can be very complex (e.g., having more than two peaks or no obvious valleys), and this could lead this method to select a wrong value. Furthermore, another disadvantage is that only two classes can be generated; therefore, the method cannot be applied to multispectral images. It also does not take into account spatial information in the image, which makes the method sensitive to noise [33].

The histogram of Figure 2 is shown in Figure 10. The threshold value is 170.8525.

Fig. 10. Histogram.

The outcome of thresholding the previous image is shown in Figure 11.

Fig. 11. Thresholding’s result.

In general, thresholding is considered a simple and effective tool for supervised image segmentation. However, it must achieve an ideal value T in order to segment images into objects and background. More detailed studies can be found in [7,107,108,109].

6.2. Watershed methods

Watershed [110] was introduced in the field of digital topography, where a grayscale image is considered as a topographic relief [111,112,113,114]. Later, this notion was studied in the field of image processing [115]. Watershed aims at segmenting regions into catchment basins, i.e., areas where a stream of conceptual water, bounded by a line through the tops of the mountains, trickles down, decomposing the image into rivers and valleys [116]. Light pixels can be considered the tops of the mountains, and dark pixels the valleys [117]. The gradient magnitude of the image is treated as a topographic surface [84]. Thus, a monochrome image is considered an altitude surface in which the pixels of greater amplitude or greater intensity-gradient magnitude correspond to the watershed lines, which form the boundaries of the objects. On the other hand, the pixels of lower amplitude are referred to as valley points [84,89].

By leaving a drop of conceptual water on any pixel at some altitude, it will flow to a lower elevation point until reaching a local minimum. In turn, the pixels that drain to a common minimum form a retention basin characterizing a region of the image. Therefore, the accumulations of water in the neighborhoods of local minima form such catchment basins, and the points belonging to these basins belong to the same watershed [89]. Thus, an advantage of this method is that it first detects the leading edge and then calculates the basins of the detected gradients [84].

Overall, the watershed algorithm works as follows (a library-based sketch is given after the list):

• Calculate the curvature of each pixel.
• Find the local minima and assign a unique label to each of them.
• Find each flat area and classify it as minimum or valley.
• Browse the valleys and allow each drop to find a marked region.
• Allow the remaining unlabeled pixels to fall similarly and attach them to the labeled regions.
• Join those regions having a basin depth below a predetermined threshold.
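Libraries implement this pipeline directly; the following scikit-image sketch (with assumed seed thresholds of 30 and 150, not values from the paper) floods a gradient relief from markers:

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

# gray: a 2-D grayscale array; the gradient magnitude is the topographic relief.
elevation = sobel(gray)

markers = np.zeros_like(gray, dtype=int)
markers[gray < 30] = 1                   # assumed dark "valley" seeds
markers[gray > 150] = 2                  # assumed bright seeds
labels = watershed(elevation, markers)   # flood the relief from the markers
```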

There are different algorithms to calculate the watershed, depending on how the mountain rims are extracted from the topographic landscape. Two basic approaches are distinguished in the following subsections.

6.2.1. Rainfall approximation

The rainfall algorithm [118] is based on finding the local minima of the entire image, which are assigned a unique label. Those minima that are adjacent are combined under a unique label as well. Then a drop of water is placed on each unlabeled pixel. These drops flow through neighboring pixels of lower amplitude to a local minimum, assuming the value of its label. Local minimum pixels are then shown in black, and the path of every drop to its local minimum is indicated in white [89].


6.2.2. Flooding approximation

The flooding algorithm [110] extracts the mountain rims by gradually flooding the landscape. The flood comes from below, through a hole in each minimum of the relief, and immerses the surface in a lake, entering through the holes while filling up the various catchment basins [119]. In other words, the method involves placing a water fountain in each local minimum; the valleys are then submerged with water, and each catchment basin is filled until it almost overflows, creating reservoirs whose neighbors are connected by the ridge [89,117]. This approach goes from the bottom up to find the watershed, i.e., it starts from a local minimum, and each region is flooded incrementally until it connects with its neighbors [116].

To perform the watershed procedure, the complement of the binary image in Figure 1 is transformed, as shown in Figure 12.

Fig. 12. Transform complement of binary image.

The outcome of the watershed is shown in Figure 13.

Fig. 13. Watershed’s result.

In summary, watershed is a very simple and advantageous technique for real-life applications, mainly for segmenting grayscale images, and it uses image morphology [84,117]. Notably, because this method finds the basins and borders as it segments images, the key lies in transforming the original image into an image whose basins correspond to the objects to be identified. Also, when combined with other morphological tools, the watershed transformation is at the basis of extremely powerful segmentation procedures [115,120].

A critical review of several definitions of the watershed transform and their algorithms can be found in [121,122].

6.3. Region segmentation methods

In this section, we present two region segmentation methods. These techniques ensure that neighboring pixels with similar properties lie in the same region, i.e., they focus on groups of pixels having similar intensities [71,94]. This leads to the class of algorithms known as region growing and split and merge, whose general procedure is to compare a pixel with its neighbors and, if they are homogeneous, assign them to the same class [94]. Region growing and split and merge are presented in Sections 6.3.1 and 6.3.2, respectively.

6.3.1. Region growing

Region growing [123] is a method for grouping neighboring pixels into segmented regions according to predefined growth criteria. First, it specifies small sets of initial pixels called seed points. From them, the regions grow by adding neighboring pixels with similar predefined properties, until no more pixels meeting the criteria for inclusion in the regions are found and all pixels in the image have been scanned [8,33]. Similarly, if two or more regions are homogeneous, they are grouped into one. Also, once a region has no more similar pixels, a new region is established from another seed [124].

In summary, the basic procedure is presented below [33]:

• Select a group of seed pixels in the original image.
• Define a true-value criterion for the stopping rule.
• Combine all pairs of spatially adjacent regions fulfilling the criterion value.
• Stop the growth of the regions when it is not possible to combine more adjacent pixels.

However, a disadvantage of this method is that getting a good result depends on the selection of the initial seed points and on the order in which they are scanned for grouping each pixel. This dependence makes the segmented result sensitive to the location and ordering of the seeds [71,89]. A minimal sketch of the growing loop is given below.
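This breadth-first sketch is an illustration, not the authors' implementation; the tolerance tol is an assumed homogeneity criterion against the seed intensity:

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=20):
    """Grow one region from a (row, col) seed by 4-connectivity."""
    h, w = gray.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = float(gray[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if region[y, x]:
            continue
        region[y, x] = True                       # admit pixel to the region
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(gray[ny, nx]) - seed_val) <= tol):
                queue.append((ny, nx))
    return region
```

With the seed of Figure 14 this would be called as region_grow(gray, (359, 148)), noting the (row, col) index order.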


The resulting image of region growing on Figure 2, applying a seed at the pixel x = 148, y = 359, is shown in Figure 14.

Fig. 14. Region growing’s result.

Consequently, region growing is a procedure which iteratively groups the pixels of an image into subregions according to a predefined homogeneity criterion, starting from a seed [33,124]. Likewise, region growing performs well because it considers the spatial distribution of the pixels [124].

6.3.2. Split and merge

Split and merge [71] is based on a quadtree representation of the data: if a region of the original image is non-uniform in some attribute, the square image segment is divided into four quadrants. Conversely, if four neighboring quadrants are uniform, they are united into a square formed by these adjacent quadrants [89].

Definition 1. Let I be an image and P a predicate. The image is subdivided into small quadrants such that for any region R_i it holds that P(R_i) = TRUE. If P(R) = FALSE, the image is divided into quadrants. If P is FALSE for any quadrant, that quadrant is subdivided into sub-quadrants, and so on.

This process is represented by a tree whose nodes have four descendant leaves, called a quadtree, where the root refers to the initial image and each of its nodes represents one of its four subdivisions [8].

The procedure is as follows [8,33] (a recursive sketch of the split phase is given after the list):

(i) Start with the complete image, where R represents the entire image region, with P(R) = FALSE.
(ii) The entire image R is divided into four disjoint quadrants, and if P is false for any quadrant (P(R_i) = FALSE), then that quadrant is subdivided into sub-quadrants until no more splits are possible.
(iii) Once no more splits can be done, any adjacent regions R_i and R_j for which P(R_i ∪ R_j) = TRUE are joined together.
(iv) The process ends once further merges are no longer possible.
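The following recursive sketch covers the split phase only; the merge of adjacent uniform leaves, step (iii), is omitted. It is an illustration, not the authors' code: the uniformity predicate P is an assumed intensity-range test with tolerance tol, and a square, power-of-two image is assumed:

```python
import numpy as np

def quadtree_split(gray, y=0, x=0, size=None, tol=15):
    """Return leaf quadrants (y, x, size) for which P(R_i) = TRUE."""
    if size is None:
        size = gray.shape[0]          # assumes a square, power-of-two image
    block = gray[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:   # predicate P
        return [(y, x, size)]         # uniform leaf region
    half = size // 2                  # split into four sub-quadrants
    return (quadtree_split(gray, y, x, half, tol)
            + quadtree_split(gray, y, x + half, half, tol)
            + quadtree_split(gray, y + half, x, half, tol)
            + quadtree_split(gray, y + half, x + half, half, tol))
```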

Thus, split and merge arbitrarily subdivides an image into disjoint regions, and then these regions are merged or divided according to the image segmentation conditions of Section 6. This method performs well when the colors of the objects in the image are not very contrasting.

The resulting image of split and merge applied to Figure 15 is shown in Figure 16.

Fig. 15. RGB image [9].

Fig. 16. Split and merge’s result.

Generally speaking, split and merge begins with nodes (squares) at some level of the quadtree: if a quadrant is not uniform, a split is done, subdividing it into four sub-quadrants; however, if four adjacent quadrants are uniform, a merge between them is performed [71].

Besides, it is important to mention that the main advantage of region-based segmentation techniques is that they attempt to segment an image into regions quickly, because the image is analyzed by regions and not by pixels.

6.4. Graph partitioning methods

A digital image I is modeled as a valued graph or network composed of a set of nodes (υ ∈ V) and a set of arcs E (e ∈ E ⊆ V × V) that link such nodes [125,126,127]. Graphs have been used in many segmentation and classification techniques in order to represent complex visual structures in computer vision and machine learning. In this section, algorithms using graphs are presented, but first a formal proposition is shown.

Proposition 1. A digital image I can be viewed as a graph whose nodes are the pixels. Let V = {P_1, P_2,...,P_n} be a finite set of pixels in the image and let E = {{P_a, P_b} | P_a, P_b ∈ V} be the set of unordered pairs of neighboring pixels. Two pixels are neighbors if there is an arc e_ab = {P_a, P_b} ∈ E. Let G = (V,E) be the representation of the neighboring relations between the pixels in the image. Let also d_ab > 0 be the degree of dissimilarity between pixels P_a and P_b, and let D = {d_ab | e_ab ∈ E} be the set of all dissimilarities. Then N(I) = {G = (V,E); D} is the network which summarizes all the previous information about the image I.
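For illustration, a minimal construction of such a network N(I) might look as follows (a sketch, not from the paper; the dissimilarity d_ab is taken here to be the absolute intensity difference between 4-neighbors, and gray is assumed to be a 2-D grayscale array):

```python
import networkx as nx

# Nodes are pixel coordinates (i, j); edges join 4-neighbors.
h, w = gray.shape
G = nx.grid_2d_graph(h, w)                 # V and the neighboring arcs E
for a, b in G.edges():
    # d_ab: degree of dissimilarity between neighboring pixels P_a and P_b
    G.edges[a, b]['weight'] = abs(float(gray[a]) - float(gray[b]))
```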

With the above proposition, it is possible to extract meaningful information about the processed image by checking various properties of the graph. As we shall see throughout this section, graphs turn out to be extremely important and useful tools for image segmentation. Some of the most popular graph algorithms for image processing are presented next.

6.4.1. Random walker

A random walker [128] approach can be used for the task of image segmentation through graphs: a user interactively labels a small number of pixels, called seeds, and each unlabeled pixel releases a random walker, whose probability of first reaching each labeled seed is calculated [129]. This algorithm may be useful for segmenting an image into, for example, object and background, from predefined seeds indicating image regions belonging to the objects. These seeds are labeled, and the unlabeled pixels are marked by the algorithm using the following approach: for a random walker starting at an unlabeled pixel, what is the probability that it arrives first at each seed pixel? A K-tuple vector is assigned to each pixel, specifying the probability that a walker starting at that unlabeled pixel first reaches each of the K seed points. Then, from these K-tuples, the most likely seed is selected for each random walker [127].

The random walker algorithm consist of four

steps 127:

• Represent the image structure in a network, defin-ing a function that assigns a change in image in-tensities, so assign the weights of the edges in thegraph.

• Define the set of K labeled pixels to act as seedpoints.

• Solve the Dirichlet problem for each label exceptthe last, which is equivalent to calculate the prob-ability that a random walker beginning from eachunlabeled pixel reaches first each of the K seedpixels.

• Select from these K-tuples the most likely seed for the random walker, that is, assign each node υi to the label with maximum probability.

In practice, the algorithm amounts to generating the graph weights, establishing the system of equations that solves the problem, and implementing the practical details. Notable properties of this algorithm are quick computation, quick editing, the ability to produce an arbitrary segmentation given sufficient interaction, and intuitive segmentations 127. Moreover, an advantage is that the boundaries of faint objects are found when they are part of a consistent boundary 130.
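A minimal sketch of this procedure is given below using the random_walker function of scikit-image, which implements Grady's algorithm; the synthetic disk image, the seed positions and the beta parameter are illustrative choices.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic image: a bright disk on a dark, noisy background
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
img = (np.hypot(yy - 64, xx - 64) < 30).astype(float)
img += 0.35 * rng.standard_normal(img.shape)

# Seeds: 0 = unlabeled, 1 = object, 2 = background (so K = 2)
seeds = np.zeros(img.shape, dtype=int)
seeds[64, 64] = 1                      # seed inside the disk
seeds[5, 5] = 2                        # seed in the background

# K-tuple of probabilities per pixel: chance of first reaching each seed
probs = random_walker(img, seeds, beta=130, return_full_prob=True)
segmentation = probs.argmax(axis=0) + 1   # assign the most likely seed
```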

The result of applying the random walker to Figure 3 is shown in Figure 17. Notice that this figure shows the two selected seeds (K = 2), green and blue, as well as the probability that a random walker released at each pixel first reaches the foreground seed. That is, the objects nearest to the green seed have a color similar to the pixel where that seed lies, and analogously the pixels surrounding the blue seed are those closest to it in color. The intensity of the color represents the probability that a random walker leaving that pixel reaches each seed.

Fig. 17. Random walker’s result.

Random walker contributes as a neutral technique for image segmentation because it does not rely on prior information about the image. Instead, the segmentation is derived from the K-tuples of each pixel, selecting the most probable seed for each pixel or random walker 130.

6.4.2. Graph cuts

Graph cuts 131 is a multidimensional optimization tool capable of applying piecewise smoothing while preserving marked discontinuities 132. It rests both on graph theory and on the minimization of an energy function, using the maximum flow problem on the graph. Thus, through the max-flow min-cut theorem, the minimum cut of the graph is defined as the cut whose size is not larger than that of any other cut 133.

An s-t cut is a subset of arcs C ⊆ E such that the terminals S and T become completely separated in the induced graph G(C) = (V, E \ C) 134.

Definition 2. An s-t cut or simple cut of a graph G = (V, E) is a binary partition of the nodes V into two subsets with a primary vertex in each: a source S and a sink T = V − S, such that s ∈ S and t ∈ T 132,135.

Thus, the graph has a set of edges E of two types: n-links, which are arcs between neighboring non-terminal nodes, and t-links, which connect non-terminal pixels to the terminals. Each graph edge is assigned some nonnegative weight or cost w(p,q), and the cost of a directed edge (p,q) may differ from the cost of the reverse edge (q,p). The set of all graph edges consists of the n-links in N together with the t-links {(s,p), (p,t)} for the non-terminal nodes p ∈ P 132,134.

The minimum cut problem consists of finding the cut with minimum cost among all cuts. The cost of a cut C = (S,T) is the sum of the costs/weights of the boundary arcs (p,q) such that p ∈ S and q ∈ T. Thus, the min-cut and max-flow problems are equivalent, because the maximum flow value equals the cost of the minimum cut 132,136,137.

If f is a flow in a graph G with two terminal nodes, a source s and a sink t, respectively representing the “object” and “background” labels, and a set P of non-terminal nodes, then the value of the maximum flow is equal to the capacity of a minimum cut C 132,134,135.

The basic algorithm for this procedure, developed by Boykov and Kolmogorov 133 and based on augmenting paths through two search trees rooted at the terminal nodes, consists of three steps that are iteratively repeated:

• Growth phase: search trees S and T grow until they touch, giving an s → t path.

• Augmentation phase: the path found is augmented, and the search tree(s) break into forest(s).

• Adoption phase: trees S and T are restored.

Other max-flow min-cut algorithms are based on push-relabel methods, which rely on two basic operations: pushing flow and relabeling from an overflowing node 135.

In general, the cut may generate a binary segmentation with arbitrary topological properties 134. For example, working with the observed intensity I(p) of a pixel p, a cut symbolizes a binary labeling process f which assigns labels fp ∈ {0,1} to the pixels in such a way that if p ∈ S then fp = 0, and if p ∈ T then fp = 1 132.
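The following sketch illustrates this binary labeling with an s-t minimum cut computed by NetworkX, assuming pixel intensities normalized to [0,1]; the t-link and n-link capacities encode a deliberately simple intensity model and are illustrative, not the energy terms of 132.

```python
import networkx as nx
import numpy as np

def mincut_binary_labeling(img, lam=2.0):
    """Binary object/background labeling via an s-t minimum cut:
    t-links encode how well each pixel matches each label, and
    n-links penalize separating similar neighbors."""
    h, w = img.shape
    G = nx.DiGraph()
    for y in range(h):
        for x in range(w):
            p = (y, x)
            # t-links: toy model where bright pixels prefer the "object" label
            G.add_edge('s', p, capacity=float(img[y, x]))
            G.add_edge(p, 't', capacity=float(1.0 - img[y, x]))
            for dy, dx in ((0, 1), (1, 0)):          # n-links to right/lower neighbors
                q = (y + dy, x + dx)
                if q[0] < h and q[1] < w:
                    wpq = lam * np.exp(-abs(float(img[y, x]) - float(img[q])))
                    G.add_edge(p, q, capacity=wpq)
                    G.add_edge(q, p, capacity=wpq)
    cut_value, (S, T) = nx.minimum_cut(G, 's', 't')
    labels = np.zeros((h, w), dtype=int)
    for node in set(T) - {'t'}:                      # f_p = 1 for pixels on the sink side
        labels[node] = 1
    return labels
```

Dedicated max-flow implementations such as the Boykov-Kolmogorov algorithm are far faster in practice; a general-purpose flow solver is used here only for clarity.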

One advantage of this method is that it allows a geometric interpretation, and it also provides a powerful tool for energy minimization (a binary optimization method). There are many techniques based on graph cuts that produce good approximations. The technique is used in many applications as an optimization method for low-level vision problems based on global energy formulations. In addition, graph cuts can be computationally very efficient, which is why they provide a clean, flexible formulation of image segmentation 132.

Frequently, this method is combined with others. Moreover, an improved graph cut method named normalized cuts 138 is based on modifying the cost function to eliminate outliers. The usage of this technique within the image processing context is presented in the following subsection.

6.4.3. Normalized cuts

This procedure works with weighted graphs G = (V,E), where the weight w(i,j) of each arc is a function of the similarity between nodes i and j 139.


A graph G = (V,E) is optimally divided into two disjoint subsets A and B, with A ∪ B = V and A ∩ B = ∅, when the Ncut value is minimized. The degree of dissimilarity between A and B is measured as a fraction of the total weight of the arc connections to all nodes in the network. This is called the normalized cut (Ncut) 139:

\[
\mathrm{Ncut}(A,B) \;=\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)} \;+\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)}. \tag{9}
\]

where assoc(A,V) = ∑_{u∈A, t∈V} w(u,t) is the total cost of connecting the nodes in A with all the nodes in the graph, and similarly for assoc(B,V).

The total normalized association measure within a group for a given partition is expressed as 139:

\[
\mathrm{Nassoc}(A,B) \;=\; \frac{\mathrm{assoc}(A,A)}{\mathrm{assoc}(A,V)} \;+\; \frac{\mathrm{assoc}(B,B)}{\mathrm{assoc}(B,V)}. \tag{10}
\]

where assoc(A,A) and assoc(B,B) refer to the total weight of the arcs connecting the nodes within A and within B, respectively.

Given a partition of the nodes V of a graph into two disjoint complementary sets A and B, let x be an N = |V|-dimensional indicator vector, with xi = 1 if node i is in A and xi = −1 otherwise. Also, let di = ∑_j W(i,j) be the total connection weight from node i to all other nodes 138. Therefore, Eq. (9) can be rewritten as:

\[
\mathrm{Ncut}(A,B) \;=\; \frac{\sum_{x_i>0,\,x_j<0} -w_{ij}\,x_i x_j}{\sum_{x_i>0} d_i} \;+\; \frac{\sum_{x_i<0,\,x_j>0} -w_{ij}\,x_i x_j}{\sum_{x_i<0} d_i}. \tag{11}
\]

The algorithm can be summarized in the following steps 138,139 (a small numerical sketch of the eigenvector step follows the list):

• Given an image, represent it as a weighted graph G = (V,E), and summarize the information into W and D (let D = diag(d1, d2, . . . , dN) be an N × N diagonal matrix and W an N × N symmetric matrix with W(i,j) = wij).

• Solve (D − W)x = λDx for the eigenvectors with the smallest eigenvalues.

• Use the eigenvector with the second smallest eigenvalue to subdivide the graph by finding the splitting points that minimize Ncut.

• Check whether it is necessary to split the current partition recursively by checking the stability of the cut.

• Stop if for any segment the Ncut value exceeds a specified threshold.
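The following is a minimal numerical sketch of the eigenvector step, assuming a precomputed similarity matrix W; thresholding the second smallest generalized eigenvector at its median is one simple choice of splitting point.

```python
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """One level of normalized cuts: solve (D - W)x = lambda * D x and
    threshold the eigenvector of the second smallest eigenvalue."""
    d = W.sum(axis=1)
    D = np.diag(d)
    # Generalized symmetric eigenproblem; eigenvalues returned in ascending order
    vals, vecs = eigh(D - W, D)
    second = vecs[:, 1]                 # eigenvector of the 2nd smallest eigenvalue
    return second > np.median(second)   # simple splitting point: the median

# Toy similarity matrix for 6 nodes forming two loosely coupled clusters
W = np.array([
    [0, 1, 1, 0.01, 0, 0],
    [1, 0, 1, 0, 0.01, 0],
    [1, 1, 0, 0, 0, 0.01],
    [0.01, 0, 0, 0, 1, 1],
    [0, 0.01, 0, 1, 0, 1],
    [0, 0, 0.01, 1, 1, 0],
], dtype=float)
print(ncut_bipartition(W))   # splits the nodes: first three vs last three
```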

The result of gray-scale image segmentation using normalized graph cuts on Figure 2 is shown in Figure 18.

Fig. 18. Normalized cuts’ result.

Thus, the normalized cuts method originated because, if we only consider minimizing the cut value cut(A,B) = ∑_{u∈A, v∈B} w(u,v), the problem arises that small cuts are favored whenever isolated pixels appear 138.

6.4.4. Region adjacency graphs

Region adjacency graphs (RAG) 71 provide a spatial view of the image, in order to efficiently manipulate the information contained in it, through a graph that associates a region with each vertex and an arc with each pair of adjacent regions 124.

A region Ri is defined by a quadruple Ri = (k, xk, yk, Ak), k ∈ {1, . . . , n}, where k refers to the index number of the region, (xk, yk) represents its center of gravity, and Ak its area.

A RAG is defined by a set of regions and a set of pairs of regions: ({Ri}, {(Rj, Rk)} | i, j, k ∈ {1, . . . , n}). Thus, the set of regions describes all the regions involved, and the set of pairs of regions indicates which regions are adjacent 140.

As mentioned above, nodes represent the regions and are compared in terms of a formulation based on chains. Thus, a region can be perceived as a shape whose boundary is represented by a cyclic chain (boundary string), consisting of a sequence of simple graph arcs. According to a given algorithm, the similarities between two boundary chains are calculated 126.

Two important characteristics of the RAG raised by Pavlidis 71 are:

(i) The degree of a node is the number of regions adjacent to a given region, which is usually proportional to the size of the region.

(ii) If a region completely surrounds other regions, its node is a cutnode in the RAG, as shown in the following figure.

Fig. 19. Example of region adjacency graph 71.

Despite being a very useful and robust tool, this method has the disadvantage that it does not produce good results when the processed image lacks perfectly defined regions or discontinuities 141.

The result of the RAG on Figure 2 is shown in Figure 20. The algorithm uses the watershed algorithm to find the structure of the regions in the image. Then, two regions are considered neighbors if they are separated by a small number of pixels in the horizontal or vertical direction.

Fig. 20. Region adjacency graph’s result.

Accordingly, this technique is used to provide an efficient system for manipulating the implicit information in an image through a spatial view. Each node in the RAG represents a region, and two nodes (regions) are linked if they are neighbors 142.
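The following sketch builds such a RAG with NetworkX from a label image (for instance, a watershed output), assuming that two regions are adjacent when they touch in the horizontal or vertical direction; storing the center of gravity and the area as node attributes mirrors the quadruple above.

```python
import networkx as nx
import numpy as np

def region_adjacency_graph(labels):
    """RAG from a label image: one node per region, with its center of
    gravity and area, and an arc between regions that touch."""
    rag = nx.Graph()
    for k in np.unique(labels):
        ys, xs = np.nonzero(labels == k)
        rag.add_node(int(k), cx=float(xs.mean()), cy=float(ys.mean()),
                     area=int(ys.size))
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):      # right and lower neighbors
                ny, nx_ = y + dy, x + dx
                if ny < h and nx_ < w and labels[y, x] != labels[ny, nx_]:
                    rag.add_edge(int(labels[y, x]), int(labels[ny, nx_]))
    return rag

# Toy label image with three regions
labels = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [3, 3, 3]])
print(region_adjacency_graph(labels).edges())   # (1,2), (1,3), (2,3)
```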

Summarizing, graph theory has the advantage of organizing the image elements into mathematically sound structures, making the formulation of the problem more understandable and enabling less complex computation 143.

The image segmentation techniques presented in this section have been categorized into four classes: thresholding methods, watershed methods, region segmentation methods and graph partitioning methods. In addition, evaluation methods have been developed to assess the quality of a segmented image; a review of these evaluation methods for image segmentation is presented in 144,145.

7. Hierarchical image segmentation

Hierarchy theory 146 has frequently been used in sociology and anthropology, sciences that study hierarchical sets of social classes or strata 147. In this section we present the definition of hierarchical segmentation methods, which group individuals according to some criteria, ranging from a single group including all individuals to n groups each containing a unique individual. This can be done in two ways, by agglomerative methods 148 or by divisive methods 149, and the resulting groups are represented by a dendrogram, a two-dimensional diagram of the hierarchical clustering 4.

Agglomerative methods successively merge the n individuals into groups. Divisive methods, on the other hand, separate the n individuals into progressively smaller groupings.

First, we begin by presenting the elements needed to obtain a hierarchical image segmentation using graphs.

According to Gomez et al. 150,151, given an image I and its network N(I), a family S = (S0, S1, . . . , SK) of segmentations of N(I) constitutes a hierarchical image segmentation of N(I) when the following properties are verified:

• St ∈ Sn(N(I)) for all t ∈ {0, 1, . . . , K} (i.e., each St is an image segmentation of N(I)).

• There are two trivial partitions: the first, S0 = {{v}, v ∈ V}, contains only singleton groups (one node each), and the last, SK = {V}, contains all pixels in the same cluster.


• |St| > |St+1| for all t = 0, 1, . . . , K − 1, indicating that at each iteration the number of groups or communities decreases.

• St ⊆ St+1 for all t = 0, 1, . . . , K − 1, meaning that St is finer than St+1 (i.e., given two partitions P and Q of the set of pixels V of a network I, P is said to be finer than Q if for all A ∈ P there exists B ∈ Q such that A ⊆ B).

One advantage of hierarchical segmentation is that it produces an ordering of the segmentations, which may be informative for picture display. Hence, in any hierarchical image segmentation procedure, no a priori information about the number of clusters is required. A dendrogram helps in selecting the number of groups: the partitions are obtained by selecting one of the solutions in the nested sequence of clusters making up the hierarchy, and the best partition lies at the level where the groupings below that height are far apart.
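The following sketch illustrates this selection with SciPy's agglomerative clustering; the toy one-dimensional pixel features, the single-linkage criterion and the choice of two groups are illustrative only, and this is not the D&L method of Section 7.1.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

# Toy pixel features (e.g., gray levels); agglomerative merging
pixels = np.array([[0.0], [0.1], [0.9], [1.0], [0.5]])
Z = linkage(pixels, method='single')        # nested sequence of merges

# Cut the hierarchy at the level giving two groups
groups = fcluster(Z, t=2, criterion='maxclust')
print(groups)           # labels each of the five pixels with one of two groups

# dendrogram(Z) would draw the diagram used to choose the level visually
```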

Another advantage of hierarchical segmentation is that it can be used to build fuzzy boundaries, by means of a hierarchical segmentation algorithm designed by the authors 150,152,153,154,155,156.

There are several hierarchical image segmentation algorithms, such as 158,159,160,161,162 among others. In the next subsection, we present a hierarchical image segmentation algorithm based on graphs.

7.1. Divide & link algorithm

The divide-and-link (D&L) algorithm proposed by Gomez et al. 155 and extended in 156 is a binary iterative unsupervised method that obtains a hierarchical partition of a network, displayable as a dendrogram, and is framed as a process for partitioning networks through graphs.

Generally speaking, D&L tries to group the set of pixels V of a graph G = (V,E) through an iterative procedure that classifies the nodes of V into two classes, V0 and V1. The assignment depends on the type of edge: the endpoints of division edges are assigned to different classes (split nodes), while the endpoints of link edges are assigned to the same class (link nodes).

The D&L algorithm consists of the following main steps 150 (a schematic sketch follows the list):

(i) Calculate the division and link weights for each edge.

(ii) Order the edges of the subgraph Gt sequentially according to the weights of the previous step.

(iii) Build a spanning forest F t ⊂ Gt (through, for instance, a Kruskal-like algorithm 163) based on the arrangement found in the previous step.

(iv) Build a partition Pt following the binary procedure applied to F t.

(v) Define a new set of edges Et+1 from Et by removing those edges that connect different groups of Pt.

(vi) Repeat steps (i)-(v) while Et+1 ≠ ∅.
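A loose schematic of one iteration is sketched below, assuming edge dissimilarities as weights; the weight definitions are placeholders and the binary labeling of step (iv) is replaced by a simpler split of the spanning forest at its median weight, so this only mimics the structure of D&L, not the authors' exact procedure.

```python
import networkx as nx
from networkx.utils import UnionFind

def dl_like_iteration(G, weight='dissimilarity'):
    """One D&L-style iteration: sort edges, build a Kruskal-like spanning
    forest, treat its high-weight edges as division edges and its
    low-weight edges as link edges, and return the induced partition."""
    edges = sorted(G.edges(data=weight), key=lambda e: e[2])
    uf = UnionFind(G.nodes())
    forest = []
    for u, v, w in edges:
        if uf[u] != uf[v]:                  # Kruskal-like: avoid cycles
            uf.union(u, v)
            forest.append((u, v, w))
    if not forest:
        return [set(G.nodes())]
    cut = sorted(w for _, _, w in forest)[len(forest) // 2]   # placeholder rule
    links = nx.Graph()
    links.add_nodes_from(G)
    links.add_edges_from((u, v) for u, v, w in forest if w <= cut)
    return [set(c) for c in nx.connected_components(links)]

# Edges joining different groups of the partition would then be removed
# from E, and the procedure repeated while edges remain.
```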

The output of the D&L algorithm (with six partitions) applied to Figure 3 is shown in Figure 21.

Fig. 21. D&L’s result.

In summary, the D&L hierarchical image segmentation algorithm is a divisive process: it starts with a trivial partition formed by a single group containing all nodes, and ends with another trivial partition formed by as many groups as there are nodes in the finite set of elements to be grouped. It has polynomial time complexity and a low computational cost, and an additional advantage is that it can be used in a variety of applications.


8. Fuzzy image segmentation

The fuzzy image segmentation concept was introduced in 151,156 and extended in 150,157, where it was established that a fuzzy image segmentation should be a set of fuzzy regions R1, . . . , Rk of the image. The authors presented a way to define this concept based on the fact that a crisp image segmentation can be characterized in terms of the set of edges separating the adjacent regions of the segmentation.

Definition 1. Given an image modeled as a network image N(I) = {G = (V,E); D}, a subset B ⊂ E characterizes an image segmentation if and only if the number of connected components of the partial graph generated by the edges E − B, denoted G(E−B) = (V, E−B), decreases when any edge of B is deleted.

In this sense, a formal definition of fuzzy image segmentation is obtained through the fuzzification of the edge-based segmentation concept introduced in Definition 1 of this section (see 150 for more details).

Definition 2. Given a network image N(I) = {G = (V,E); D}, we will say that the fuzzy set B = {(e, µB(e)), e ∈ E} produces a fuzzy image segmentation if and only if for all α ∈ [0,1] the crisp set B(α) = {e ∈ E : µB(e) > α} produces an image segmentation in the sense of Definition 1.

In the previous definition, the membership function of the fuzzy set B for a given edge represents the degree of separation between its two adjacent pixels in the segmentation process.

In 157, it was proven that there exists a relation between the fuzzy image segmentation concept and the concept of hierarchical image segmentation. In that paper, it is shown that it is possible to build a fuzzy image segmentation from a hierarchical image segmentation in the following way.

Given a network image N(I) = {G = (V,E); D}, let B = {B0 = E, B1, . . . , BK = ∅} be a hierarchical image segmentation, and for all t ∈ {0, 1, . . . , K} let µt : E −→ {0,1} be the membership function associated with the boundary set Bt ⊂ E. Then, the fuzzy set B defined as:

\[
\mu_B(e) \;=\; \sum_{t=0}^{K} w_t\,\mu_t(e) \qquad \forall e \in E, \tag{12}
\]

induces a fuzzy image segmentation of N(I) for any sequence w = (w0, w1, . . . , wK) such that:

\[
w_t > 0 \ \ \forall t \in \{0,1,\ldots,K\}, \qquad \sum_{t=0}^{K} w_t = 1. \tag{13}
\]
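A direct transcription of Eqs. (12)-(13) is sketched below, representing each crisp boundary set Bt by a dict from edges to 0/1 memberships; the toy hierarchy and the weight sequence are illustrative.

```python
def fuzzy_boundary(boundary_sets, weights):
    """Aggregate the crisp boundary sets B_0, ..., B_K of a hierarchical
    segmentation into the fuzzy boundary set B of Eq. (12); the weights
    must be positive and sum to one, as required by Eq. (13)."""
    assert all(w > 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    edges = boundary_sets[0].keys()
    return {e: sum(w * bt[e] for w, bt in zip(weights, boundary_sets))
            for e in edges}

# Toy hierarchy over three edges: B0 contains every edge, B2 is empty
B0 = {'e1': 1, 'e2': 1, 'e3': 1}
B1 = {'e1': 1, 'e2': 0, 'e3': 0}
B2 = {'e1': 0, 'e2': 0, 'e3': 0}
mu = fuzzy_boundary([B0, B1, B2], weights=[0.2, 0.3, 0.5])
print(mu)   # e1 separates its pixels with degree 0.5; e2 and e3 with 0.2
```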

The output of fuzzy image segmentation applied to Figure 3 is shown in Figure 22. In this picture, the output of the hierarchical image segmentation of Section 7.1 has been aggregated in order to build a fuzzy image segmentation 157.

Fig. 22. Fuzzy image segmentation’s result.

9. Final remarks

The main characteristics of, and relationships between, some of the best known techniques of digital image processing have been analyzed in this paper. These techniques have been classified according to the output they generate and the problems they address. This classification also depends on whether they are supervised or unsupervised methods, and whether they are crisp or fuzzy techniques. In this way we can see that different problems should be formally distinguished when trying to recognize an object in a digital image (such as classification, edge detection, segmentation and hierarchical segmentation). For example, it has been pointed out that the output of an edge detection problem does not necessarily produce a suitable image segmentation: edge detection only detects the borders of the objects and does not necessarily separate the objects in the image.

Furthermore, some of the techniques reviewed in this paper can be used as a preliminary step for another technique. For example, an edge detection output could be the input for an image segmentation if we find a partition of the set of pixels into connected regions. The opposite is also true: it is possible to find the boundary of a set of objects through the set of edges that connect pixels of different regions.


Also, given an image segmentation output in which two different, non-adjacent subsets of pixels share the same characteristics, we can apply an image classification technique so that these regions are assigned to the same class. Moreover, a clustering solution can be a previous step toward an image segmentation. Thus, although different, these techniques can be related to each other.

Computational intelligence is a must in a wide range of challenging real-world image processing problems. More sophisticated original approaches, variations and combinations should be expected depending on the specific characteristics of each image and the decision-maker's objectives (see, for example, 164). In particular, one of the most important applications of computational intelligence techniques is the imaging process, which is used in many fields such as digital imaging 165, medical imaging 166, industrial tomography 167, chemical imaging 168 and thermography 169.

We would like to remark that no significant differences in computational time were found when processing the images considered in this paper with the different algorithms, the times being in general quite short.

Finally, it is important to point out that behind an edge detection process there is a classification problem, because each pixel is classified as either edge or non-edge. Similarly, a thresholding method can be viewed as a classification problem, because each pixel is classified as either object or background. In this manner, the classification of the methods shown in this paper can be extended even further, a problem to be considered in future research.

Acknowledgments

This research has been partially supported by the Government of Spain, grant TIN2015-66471-P, and by the Government of the Community of Madrid, grant S2013/ICE-2845 (CASI-CAM-CM).

References

1. J. Martin and D. Jurafsky, “Speech and language processing,” International Edition, (2000).
2. S. Thrun, “Learning metric-topological maps for indoor mobile robot navigation,” Artificial Intelligence, 99, 1:21–71, (1998).
3. E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” Computer Vision–ECCV, Springer Berlin Heidelberg, 430–443, (2006).
4. J. Richards and X. Jia, “Remote Sensing Digital Image Analysis. An Introduction,” Springer, (1999).
5. D. Marr and E. Hildreth, “Theory of Edge Detection,” Proc. Roy. Soc. London, Series B, Biological Sciences, 207, 1167:187–217, (1980).
6. A. Jain, M. Murty and P. Flynn, “Data Clustering: A Review,” ACM Computing Surveys, 31, 3:264–322, (1999).
7. M. Sezgin and B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation,” Electronic Imaging, 13, 1:146–165, (2004).
8. R. Gonzalez, R. Woods and S. Eddins, “Digital Image Processing using MATLAB,” Chapman and Hall Computing, (2009).
9. D. Martin, C. Fowlkes, D. Tal and J. Malik, “A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics,” Proc. 8th Intl. Conf. Computer Vision, 2, 416–423, (2001).
10. J. Bernd, “Digital Image Processing,” Springer, (2002).
11. W. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The bulletin of mathematical biophysics, 5, 4:115–133, (1943).
12. A. Turing, “Computing machinery and intelligence,” Mind, 49, 236:433–460, (1950).
13. C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, 20, 273–297, (1995).
14. J. Pearl, “Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach,” Association for the Advancement of Artificial Intelligence, AAAI-82, 133–136, (1982).
15. A. Scott and M. Symons, “Clustering methods based on likelihood ratio criteria,” Biometrics, 27, 2:387–397, (1971).
16. A. Wacker and D. Landgrebe, “Minimum Distance Classification in Remote Sensing,” Proc. of the First Canadian Symposium on Remote Sensing, 2, 577–599, (1972).
17. I. Aizenberg, N. Aizenberg, J. Hiltner, C. Moraga and E. Meyer zu Bexten, “Cellular neural networks and computational intelligence in medical image processing,” Image and Vision Computing, 19, 4:177–183, (2001).
18. J. Jiang, P. Trundle and R. Ren, “Medical image analysis with artificial neural networks,” Computerized Medical Imaging and Graphics, 34, 8:617–631, (2010).

19. S. Manikandan, K. Ramar, M. Willjuice Iruthayarajan and K. Srinivasagan, “Multilevel thresholding for segmentation of medical brain images using real coded genetic algorithm,” Measurement, 47, 0:558–568, (2014).

20. H. Yang, H. Wang, Q. Wang and X. Zhang, “LS-SVM based image segmentation using color and texture information,” JVCIR, 23, 7:1095–1112, (2012).
21. E. Alpaydin, “Introduction to Machine Learning,” The MIT Press, (2010).
22. N. Nilsson, “Introduction to Machine Learning,” Ebook, (1996).
23. H. Driver and A. Kroeber, “Quantitative expression of cultural relationships,” University of California Publications in American Archeology and Ethnology, 31, 211–256, (1932).
24. A. Rencher, “Methods of Multivariate Analysis,” Wiley Interscience, (2002).
25. W. Hardle and L. Simar, “Applied Multivariate Statistical Analysis,” Springer, (2003).
26. R. Xu and D. Wunsch II, “Survey of Clustering Algorithms,” IEEE Transactions on Neural Networks, 16, 3:645–678, (2005).
27. J. MacQueen, “Some methods for classification and analysis of multivariate observations,” Proc. of 5th Berkeley Symposium on Mathematical Statistics and Probability 1, University of California Press, 31, 281–297, (1967).
28. S. Lloyd, “Least square quantization in PCM,” Bell Telephone Laboratories Paper, (1957).
29. S. Lloyd, “Least square quantization in PCM,” IEEE Transactions on Information Theory, 28, 2:129–137, (1982).
30. T. Kanungo, D. Mount, N. Netanyahu, C. Piatko, R. Silverman and A. Wu, “An Efficient k-Means Clustering Algorithm: Analysis and Implementation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 24, 7:881–892, (2002).
31. J. Wang, “Encyclopedia of Data Warehousing and Mining,” Idea Group Reference, (2006).
32. A. Wacker, “A cluster approach to finding spatial boundaries in multispectral imagery,” Laboratory for Applications of Remote Sensing, Purdue University, LARS Info. Note 122969, (1969).
33. R. Dass, Priyanka and S. Devi, “Image Segmentation Techniques,” IJECT, 3, 1:66–70, (2012).
34. G. Toussaint, “The use of context in Pattern Recognition,” Pattern Recognition, 10, 3:189–204, (1978).
35. J. Lu and Q. Weng, “A survey of image classification methods and techniques for improving classification performance,” IJRSA, 28, 5:823–870, (2007).
36. L. Zadeh, “Fuzzy Sets,” Information and Control, 8, 338–353, (1965).
37. Z. Chi, H. Yan and T. Pham, “Fuzzy Algorithms: With Applications to Image Processing and Pattern Recognition,” World Scientific, 10, (1996).
38. V. Novak, I. Perfilieva and J. Mockor, “Mathematical principles of fuzzy logic,” Springer Science & Business Media, 517, (2012).
39. V. Novak, I. Perfilieva and J. Mockor, “Mathematical Principles of Fuzzy Logic,” Boston: Kluwer Academic Publishers, (1999).
40. D. Dubois, W. Ostasiewicz and H. Prade, “Fuzzy sets: history and basic notions,” Fundamentals of fuzzy sets, Springer, 21–124, (2000).
41. D. Dubois, H. Prade and H. Prade, “Fundamentals of fuzzy sets,” Springer & Business Media, 7, (2000).
42. G. Klir and B. Yuan, “Fuzzy sets and fuzzy logic,” New Jersey: Prentice Hall, 4, (1995).
43. E. Kerre, “Introduction to the Basic Principles of Fuzzy Set Theory and some of its Applications, second revised edition,” Gent: Communication and Cognition, (1993).
44. A. Amo, J. Montero, G. Biging and V. Cutello, “Fuzzy classification systems,” EJOR, 156, 495–507, (2004).
45. L. Zadeh, “A rationale for fuzzy control,” J. Dynamic Syst. Meas. Control, 94, 3–4, (1972).
46. F. Herrera, M. Lozano and J.L. Verdegay, “A learning process for fuzzy control rules using genetic algorithms,” Fuzzy Sets and Systems, 100, 143–158, (1998).
47. M. Galar, J. Fernandez, G. Beliakov and H. Bustince, “Interval-valued fuzzy sets applied to stereo matching of color images,” IEEE Trans. on Image Processing, 20, 7:1949–1961, (2011).
48. E. Mamdani, “Application of fuzzy algorithms for control of simple dynamic plant,” Proc. of the Institution of Elec. Engineers, 121, 12:1585–1588, (1974).
49. D. Ruan and E. Kerre, “Fuzzy IF-THEN Rules in Computational Intelligence: Theory and Applications,” Boston: Kluwer Academic Publishers, (2000).
50. I. Perfilieva, “Logical foundations of rule-based systems,” Fuzzy Sets and Systems, 157, 5:615–621, (2006).
51. D. Dubois and H. Prade, “What are fuzzy rules and how to use them,” Fuzzy sets and systems, 84, 2:169–185, (1996).
52. W. van Leekwijck and E. Kerre, “Defuzzification: criteria and classification,” Fuzzy Sets and Systems, 108, 2:159–178, (1999).
53. O. Cordon, M. del Jesus and F. Herrera, “A proposal on reasoning methods in fuzzy rule-based classification systems,” Intl. Journal of Approximate Reasoning, 20, 21–45, (1999).
54. J. Dunn, “A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters,” Journal of Cybernetics, 3, 3:32–57, (1973).
55. J. Bezdek, “Pattern Recognition with Fuzzy Objective Function Algorithms,” Plenum Press, (1981).

56. J. Bezdek, R. Ehrlich and W. Full, “FCM: The fuzzy c-means clustering algorithm,” Computers & Geosciences, 10, 2-3:191–203, (1984).
57. J. Yang, S. Hao and P. Chung, “Color image segmentation using fuzzy c-means and eigenspace projections,” Signal Processing, 82, 461–472, (2002).

58. D. Zhong and H. Yan, “Color image segmentation using color space analysis and fuzzy clustering,” Proceedings of the 2000 IEEE Signal Processing Society Workshop, 2, 624–633, (2000).
59. M. Nachtegael, D. Van der Weken, E. Kerre and W. Philips, “Soft Computing in Image Processing,” Studies in Fuzziness and Soft Computing, Springer, 210, (2007).
60. E. Kerre and M. Nachtegael, “Fuzzy techniques in image processing,” Physica-Verlag Springer, 52, (2000).
61. E. Ruspini, “A new approach to clustering,” Information and Control, 15, 22–32, (1969).
62. M. Sonka, V. Sonka and R. Boyle, “Image Processing, Analysis and Machine Vision,” Chapman and Hall Computing, (1993).
63. D. Phillips, “Image Processing in C,” R & D Publications, (2000).
64. L. Liang and C. Looney, “Competitive fuzzy edge detection,” Applied Soft Computing, 3, 123–137, (2003).
65. I. Sobel, “History and Definition of the Sobel Operator,” (2014).
66. J. Prewitt, “Object Enhancement and Extraction,” Picture processing and Psychopictorics, Academic Press, (1970).
67. L. Roberts, “Machine perception of three-dimensional solids,” Optical and Electro-optical Information Processing, 159–197, (1965).
68. A. Reeves, “A moment based two-dimensional edge operator,” Proc. IEEE Comput. Vision & Patt. Recogn, 312–317, (1983).
69. R. Duda and P. Hart, “Pattern Classification and Scene Analysis,” John Wiley and Sons, 271–272, (1973).
70. J. Canny, “A Computational Approach To Edge Detection,” IEEE Trans. on Pattern Analysis and Machine Intelligence, PAMI-8, 6:679–698, (1986).
71. T. Pavlidis, “A graphics and image processing,” Springer, (1982).
72. C. Lopez-Molina, “The breakdown structure of edge detection: Analysis of individual components and revisit of the overall structure,” Ph.D Thesis, Navarra Public University, (2012).
73. F. Russo, “Edge detection in noisy images using fuzzy reasoning,” Instrumentation and Measurement Technology Conference, 1, 369–372, (1998).
74. I. Bloch, “Fuzzy sets for image processing and understanding,” Fuzzy Sets and Systems, doi:10.1016/j.fss.2015.06.017, (2015).
75. S. Pal and R. King, “On Edge Detection of X-Ray Images Using Fuzzy Sets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 5, 1:69–77, (1983).
76. F. Russo, “A user-friendly research tool for image processing with fuzzy rules,” Proc. of the First IEEE International Conference on Fuzzy Systems, 561–568, (1992).
77. F. Russo and G. Ramponi, “Edge extraction by FIRE operators,” IEEE Conf. on Fuzzy Systems, 1, 249–253, (1994).
78. H. Bustince, E. Barrenechea, J. Fernandez, M. Pagola, J. Montero and C. Guerra, “Contrast of a fuzzy relation,” Information Sciences, 180, 8:1326–1344, (2010).
79. H. Bustince, E. Barrenechea, M. Pagola and J. Fernandez, “Interval-valued fuzzy sets constructed from matrices: Application to edge detection,” Fuzzy Sets and Systems, 160, 13:1819–1840, (2009).
80. E. Barrenechea, H. Bustince and B. De Baets, “Construction of interval-valued fuzzy relations with application to the generation of fuzzy edge images,” IEEE Trans. On Fuzzy Systems, 19, 819–830, (2011).
81. C. Lopez-Molina, B. De Baets, H. Bustince, J. Sanz and E. Barrenechea, “Multiscale edge detection based on gaussian smoothing and edge tracking,” Knowledge-Based Systems, 44, 101–111, (2013).
82. H. Bustince, V. Mohedano, E. Barrenechea and M. Pagola, “Definition and construction of fuzzy DI-subsethood measures,” Information Sciences, 176, 21:3190–3231, (2006).
83. N. Senthilkumaran and R. Rajesh, “Edge detection techniques for image segmentation a survey of soft computing approaches,” International Journal of Recent Trends in Engineering, 1, 2:250–254, (2009).
84. B. Basavaprasad and H. Ravindra, “A Survey on Traditional and Graph Theoretical Techniques for Image Segmentation,” IJCA Proc. on Nat. Conf. on Recent Advances in Information Technology, NCRAIT, 1:38–46, (2014).
85. N. Pal and S. Pal, “A review on image segmentation techniques,” Pattern Recognition, 26, 1277–1294, (1993).
86. J. Bruce, T. Balch and M. Veloso, “Fast and inexpensive color image segmentation for interactive robots,” International Conference on Intelligent Robots and Systems (IROS 2000), Proceedings, 3, 2061–2066, (2000).
87. Y. Al-Kofahi, W. Lassoued, W. Lee and B. Roysam, “Improved automatic detection and segmentation of cell nuclei in histopathology images,” IEEE Trans. on Biomedical Engineering, 57, 4:841–852, (2010).
88. V. Grau, A. Mewes, M. Alcaniz, R. Kikinis and S. Warfield, “Improved watershed transform for medical image segmentation using prior information,” IEEE Trans. on Medical Imaging, 23, 4:447–458, (2004).
89. W. Pratt, “Digital Image Processing,” Wiley-Interscience, (2001).

90. Y. Ma, K. Zhan and Z. Wang, “Applications of Pulse-Coupled Neural Networks,” Springer, (2010).
91. K. Fu and J. Mui, “A survey on image segmentation,” Pattern Recognition, 13, 3–16, (1981).
92. J. Weszka, R. Nagel and A. Rosenfeld, “A Technique for Facilitating Threshold Selection for Object Extraction from Digital Pictures,” University of Maryland Computer Science Center, TR-243, (1973).

93. T. Wu, “Image data field-based framework for image thresholding,” Optics & Laser Technology, 62, 1–11, (2014).
94. R. Adams and L. Bischof, “Seeded Region Growing,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 16, 6:641–647, (1994).
95. S. Lee and S. Chung, “A comparative study of several global thresholding techniques for segmentation,” Comput. Vision, Graphics Image Process, 52, 171–190, (1990).
96. J. Kittler and J. Illingworth, “Minimum error thresholding,” Pattern Recognition, 19, 1:41–47, (1986).
97. I. Daubechies, M. Defrise and C. Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Comm. Pure Appl. Math, 57, 1413–1457, (2004).
98. T. Pun, “A new method for grey-level picture thresholding using the entropy of the histogram,” Signal Processing, 2, 3:223–237, (1980).
99. N. Otsu, “A threshold selection method from gray level histograms,” IEEE Trans. Systems Man Cybernet, 9, 62–66, (1979).
100. J. Weszka and A. Rosenfeld, “Threshold Evaluation Techniques,” IEEE Trans. on Systems, Man and Cybernetics, SMC-8, 8:622–629, (1978).
101. T. Ridler and S. Calvard, “Picture Thresholding Using an Iterative Selection Method,” IEEE Trans. on Systems, Man and Cybernetics, SMC-8, 8:630–632, (1978).
102. J. Kapur, P. Sahoo and A. Wong, “A New Method for Gray-Level Picture Thresholding Using the Entropy of the Histogram,” Computer Vision, Graphics and Image Processing, 29, 273–285, (1985).
103. J. Russ and C. Russ, “Automatic discrimination of features in gray-scale images,” Journal of Microscopy, 148, 3:263–277, (1987).
104. L. Hertz and R. Schafer, “Multilevel thresholding using edge matching,” Computer Vision, Graphics, and Image Processing, 44, 3:279–295, (1988).
105. R. Kirby and A. Rosenfeld, “A Note on the Use of (Gray Level, Local Average Gray Level) Space as an Aid in Threshold Selection,” IEEE Trans. on Systems, Man and Cybernetics, SMC-9, 12:860–864, (1979).
106. Y. Nakagawa and A. Rosenfeld, “Some experiments on variable thresholding,” Pattern Recognition, 11, 3:191–204, (1979).

107. P. Sahoo, S. Soltani, A. Wong and Y. Chen, “A Survey of Thresholding Techniques,” Comput. Vision, Graphics, and Image Processing, 41, 233–260, (1988).
108. C. Glasbey, “An Analysis of Histogram-Based Thresholding Algorithms,” CVGIP: Graphical Models and Image Processing, 55, 6:532–537, (1993).

109. H. Bustince, M. Pagola, E. Barrenechea, J. Fernandez, P. Melo-Pinto, P. Couto, H. Tizhoosh and J. Montero, “Ignorance functions. An application to the calculation of the threshold in prostate ultrasound images,” Fuzzy sets and Systems, 161, 1:20–36, (2010).
110. S. Beucher and C. Lantujoul, “Use of watersheds in contour detection,” Intl. workshop on image processing, real-time edge and motion detection, (1979).
111. S. Collins, “Terrain parameters directly from a digital terrain model,” Canadian Surveyor, 29, 5:507–518, (1975).
112. T. Puecker and D. Douglas, “Detection of surface-specific points by local parallel processing of discrete terrain elevation data,” Comput. Vision, Graphics, Image Processing, 4, 375–387, (1975).
113. D. Marks, J. Dozier and J. Frew, “Automated basin delineation from digital elevation data,” Geoprocessing, 2, 299–311, (1984).
114. L. Band, “Topographic partition of watersheds with digital elevation models,” Water Resources Res., 22, 1:15–24, (1986).
115. L. Vincent and P. Soille, “Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 13, 6:583–598, (1991).
116. A. Mangan and R. Whitaker, “Partitioning 3D Surface Meshes Using Watershed Segmentation,” IEEE Trans. on Visualization and Computer Graphics, 5, 4:308–321, (1999).
117. E. Rashedi and H. Nezamabadi-pour, “A stochastic gravitational approach to feature based color image segmentation,” Engineering Applications of Artificial Intelligence, 26, 4:1322–1332, (2013).
118. A. Moga and M. Gabbouj, “A Parallel Watershed Algorithm Based on the Shortest Paths Computation,” 4th NTUG’95 & ZEUS’95 Parallel Programming and Applications, (IOS Press), 316–324, (1995).
119. F. Meyer and S. Beucher, “Morphological segmentation,” Journal of Visual Communication and Image Representation, 1, 21–46, (1990).
120. L. Vincent and S. Beucher, “The morphological approach to Segmentation: An Introduction,” School of Mines, Paris, (1989).
121. S. Beucher and F. Meyer, “The morphological approach to segmentation: the watershed transformation,” Mathematical Morphology in Image Processing, (Ed. E.R. Dougherty), 433–481, (1993).
122. J. Roerdink and A. Meijster, “The Watershed Transform: Definitions, Algorithms and Parallelization Strategies,” Fundamenta Informaticae, (IOS Press), 41, 187–228, (2001).


123. C. Brice and C. Fennema, “Scene analysis using regions,” Artificial Intelligence, 1, 3-4:205–226, (1970).
124. A. Tremeau and P. Colantoni, “Regions Adjacency Graph Applied to Color Image Segmentation,” IEEE Trans. on Image Processing, 9, 4:735–744, (2000).
125. P. Felzenszwalb and D. Huttenlocher, “Efficient graph-based image segmentation,” International Journal of Computer Vision, 59, 2:167–181, (2004).
126. J. Llados, E. Marti and J. Villanueva, “Symbol Recognition by Error-Tolerant Subgraph Matching between Region Adjacency Graphs,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 23, 10:1137–1143, (2001).
127. L. Grady, “Random Walks for Image Segmentation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 28, 11:1768–1783, (2006).
128. H. Wechsler and M. Kidode, “A random walk procedure for texture discrimination,” IEEE Trans. on Pattern Analysis and Machine Intelligence, PAMI-1, 3:272–280, (1979).
129. P. Tetali, “Random walks and the effective resistance of networks,” Journal of Theoretical Probability, 4, 1:101–109, (1991).
130. L. Grady and G. Funka-Lea, “Multi-Label Image Segmentation for Medical Applications Based on Graph-Theoretic Electrical Potentials,” ECCV, 230–245, (2004).
131. D. Greig, B. Porteous and A. Seheult, “Exact Maximum A Posteriori Estimation for Binary Images,” Journal of the Royal Statistical Soc. Series B, 51, 271–279, (1989).
132. Y. Boykov and O. Veksler, “Graph Cuts in Vision and Graphics: Theories and Applications,” Handbook of Mathematical Models in Computer Vision, 76–96, (2005).
133. Y. Boykov and V. Kolmogorov, “An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 26, 9:1124–1137, (2004).
134. Y. Boykov and G. Funka-Lea, “Graph Cuts and Efficient N-D Image Segmentation,” Intl. Journal of Computer Vision, 70, 2:109–131, (2006).
135. S. Sinha, “Graph Cut Algorithms in Vision, Graphics and Machine Learning. An Integrative Paper,” UNC Chapel Hill, (2004).
136. L. Ford and D. Fulkerson, “Maximal Flow Through a Network,” Canadian Journal of Mathematics, 8, 399–404, (1956).
137. P. Elias, A. Feinstein and C. Shannon, “Note on maximum flow through a network,” IRE Trans. on Inf. Theory, IT-2, 117–119, (1956).
138. S. Tatiraju and A. Mehta, “Image Segmentation using k-means clustering, EM and Normalized Cuts,” University of California, Department of EECS.
139. J. Shi and J. Malik, “Normalized Cuts and Image Segmentation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 22, 8:888–905, (2000).
140. S. Flavell, S. Winter and D. Wilson, “Matching Region Adjacency Graphs,” Microprocessing and Microprogramming, 31, 15:31–33, (1991).
141. A. Dutta, J. Llados, H. Bunke and U. Pal, “Near Convex Region Adjacency Graph and Approximate Neighborhood String Matching for Symbol Spotting in Graphical Documents,” 12th International Conference on Document Analysis and Recognition (ICDAR), 1078–1082, (2013).
142. P. Colantoni and B. Laget, “Color Image Segmentation using region adjacency graphs,” Sixth Intl. Conf. on Image Proc. and Its App., 2, 698–702, (1997).
143. B. Peng, L. Zhang and D. Zhang, “A survey of graph theoretical approaches to image segmentation,” Pattern Recognition, 46, 3:1020–1038, (2013).
144. Y. Zhang, “Image Segmentation,” Science Press, (2001).
145. Y. Zhang, “A review of recent evaluation methods for image segmentation,” IEEE Signal Proc. Society Press, Malaysia: Proc. of the VI Intl. Symposium on Signal Proc. and Its Appl., (2001).
146. H. Simon, “The architecture of complexity,” Proc. of the American Philosophical Soc., 106, 467–482, (1962).
147. R. Lynd and H. Lynd, “Middletown in Transition: A study in Cultural Conflicts,” Harcourt, Brace and Company, New York, (1937).
148. M. Newman, “Fast algorithm for detecting community structure in networks,” Physical Review E, 69, 6:066133, (2004).
149. M. Girvan and M. Newman, “Community structure in social and biological networks,” Proc. of the National Academy of Sciences of the USA, 99, 12:7821–7826, (2002).
150. D. Gomez, E. Zarrazola, J. Yanez and J. Montero, “A Divide-and-Link Algorithm for Hierarchical Clustering in Networks,” Information Sciences, 316, 308–328, (2015).
151. D. Gomez, E. Zarrazola, J. Yanez, J. Rodriguez and J. Montero, “A new concept of fuzzy image segmentation,” The 11th Intl. FLINS Conference, (2014).
152. D. Gomez, J. Montero and J. Yanez, “A divide-link algorithm based on fuzzy similarity for clustering networks,” International Conference on Intelligent Systems Design and Applications, ISDA, 1247–1252, (2011).
153. D. Gomez, J. Montero and G. Biging, “Improvements to remote sensing using fuzzy classification, graphs and accuracy statistics,” Pure and Applied Geophysics, 165, 1555–1575, (2008).

154. D. Gomez, J. Montero and J. Yanez, “A coloring algorithm for image classification,” Information Sciences, 176, 3645–3657, (2006).
155. D. Gomez, J. Montero, J. Yanez and C. Poidomani, “A graph coloring algorithm approach for image segmentation,” Omega, 35, 173–183, (2007).

156. D. Gomez and J. Montero, “Fuzzy sets in remote sensing classification,” Soft Computing, 12, 243–249, (2008).
157. D. Gomez, J. Yanez, C. Guada, J. Rodriguez, J. Montero and E. Zarrazola, “Fuzzy image segmentation based upon hierarchical clustering,” Knowledge-Based Systems, 87, 26–37, (2015).
158. P. Arbelaez, M. Maire, C. Fowlkes and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 33, 5:898–916, (2011).
159. S. Lhermitte, J. Verbesselt, I. Jonckheere, K. Nackaerts, J. van Aardt, W. Verstraeten and P. Coppin, “Hierarchical image segmentation based on similarity of NDVI time series,” Remote Sensing of Environment, 112, 2:506–521, (2008).
160. P. Schroeter and J. Bigun, “Hierarchical image segmentation by multi-dimensional clustering and orientation-adaptive boundary refinement,” Pattern Recognition, 28, 5:695–709, (1995).

161. H. Cheng and Y. Sun, “A hierarchical approach to color image segmentation using homogeneity,” IEEE Trans. on Image Processing, 9, 12:2071–2082, (2000).
162. J. Chamorro-Martinez, D. Sanchez, B. Prados-Suarez, E. Galan-Perales and M. Vila, “A hierarchical approach to fuzzy segmentation of colour images,” IEEE Intl. Conference on Fuzzy Systems, 2, 966–971, (2003).

163. J. Kruskal, “On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem,” Proc. of the American Mathematical Soc., 7, 48–50, (1956).
164. A. Chatterjee and P. Siarry, “Computational Intelligence in Image Processing,” Springer, (2013).
165. G. Sharma and R. Bala, “Digital color imaging handbook,” CRC press, (2002).
166. J. Beutel, H. Kundel and R. Van Metter, “Handbook of Medical Imaging volume 1: Physics and Psychophysics,” SPIE Press, (2000).
167. M. Wang, “Industrial Tomography. Systems and Applications,” Woodhead Publishing, (2015).
168. C. Evans, E. Potma, M. Puoris’haag, D. Cote, C. Lin and X. Xie, “Chemical imaging of tissue in vivo with video-rate coherent anti-Stokes Raman scattering microscopy,” Proc. of Natl. Acad. of Sciences of USA, 102, 46:16807–16812, (2005).
169. E. Ng, “A review of thermography as promising non-invasive detection modality for breast tumor,” Intl. Journal of Thermal Sciences, 48, 5:849–859, (2009).