Expert Systems With Applications 83 (2017) 242–256
Whale Optimization Algorithm and Moth-Flame Optimization for
multilevel thresholding image segmentation
Mohamed Abd El Aziz a,b, Ahmed A. Ewees c,∗, Aboul Ella Hassanien d

a Department of Mathematics, Faculty of Science, Zagazig University, Egypt
b School of Computer Science and Technology, Wuhan University of Technology, Wuhan, China
c Department of Computer, Damietta University, Egypt
d Faculty of Computers and Information, Cairo University, Egypt
Article info
Article history:
Received 20 November 2016
Revised 8 April 2017
Accepted 9 April 2017
Available online 22 April 2017
Keywords:
Whale Optimization Algorithm (WOA)
Moth-Flame Optimization (MFO)
Image segmentation
Multilevel thresholding
Abstract
Determining optimal thresholds for image segmentation has received increasing attention in recent years because of its many applications. Several methods, such as those based on Otsu's criterion and Kapur's entropy, are used to find optimal threshold values. These methods are well suited to the bi-level thresholding case and can easily be extended to the multilevel case; however, determining the optimal thresholds in the multilevel case is time-consuming. To avoid this problem, this paper examines the ability of two nature-inspired algorithms, the Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO), to determine the optimal multilevel thresholds for image segmentation. The MFO algorithm is inspired by the natural behavior of moths, which have a special navigation style at night since they fly using the moonlight, whereas the WOA algorithm emulates the natural cooperative behavior of whales. The candidate solutions in the adapted algorithms are created using the image histogram and are then updated based on the characteristics of each algorithm. The solutions are assessed using Otsu's fitness function during the optimization process. The performance of the proposed algorithms has been evaluated on several benchmark images and compared with five different swarm algorithms. The results have been analyzed based on the best fitness values, PSNR, and SSIM measures, as well as time complexity and an ANOVA test. The experimental results show that the proposed methods outperform the other swarm algorithms; in addition, MFO gives better results than WOA and provides a good balance between exploration and exploitation in all images at both small and large numbers of thresholds.
© 2017 Elsevier Ltd. All rights reserved.
∗ Corresponding author.
E-mail addresses: [email protected] (M.A.E. Aziz), a.ewees@hotmail.com, [email protected] (A.A. Ewees), [email protected] (A.E. Hassanien).
http://dx.doi.org/10.1016/j.eswa.2017.04.023
0957-4174/© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Thresholding is one of the most popular techniques in image segmentation. It has two categories: bi-level and multilevel (Bhandari, Singh, Kumar, & Singh, 2014; Sarkar, Nayan, Kundu, Das, & Chaudhuri, 2013). Bi-level thresholding splits the image into two classes, whereas multilevel thresholding is used to segment complex images and can produce multiple thresholds, such as tri-level or quad-level, which split the pixels into multiple similar parts based on intensity (Guo & Li, 2007; Zhang & Wu, 2011). Bi-level thresholding can produce adequate outcomes when the image contains only two principal gray levels, but its foremost restriction is the computation time, which is often high when it is extended to multilevel thresholding (Dirami, Hammouche, Diaf, & Siarry, 2013).

Basically, two approaches, parametric and nonparametric (Hammouche, Diaf, & Siarry, 2008), are used to determine the optimal thresholds. In the first, some statistical parameters have to be computed for the classes in the image. In the second, the optimal thresholds are determined by maximizing some criterion, such as Otsu's criterion (Otsu, 1979) or Kapur's entropy (Kapur, Sahoo, & Wong, 1985). In the Otsu-based method, the best threshold is the one that maximizes the between-class variance of the regions. Kapur's entropy measures the homogeneity of the classes through maximization of the entropy. The selection of optimal thresholds in the multilevel case is an NP-hard problem (Marciniak, Kowal, Filipczuk, & Korbicz, 2014), and it has been considered a challenge over decades (Dirami et al., 2013; Sarkar et al., 2013). When the number of thresholds is small, classical techniques are suitable; but as this number increases, the accuracy of segmentation is affected. Therefore, to avoid this weakness, many swarm intelligence (SI) techniques have been applied and combined
with thresholding algorithms, such as the Genetic Algorithm (GA) (Elsayed, Sarker, & Essam, 2014), Particle Swarm Optimization (PSO) (Kennedy & Eberhart, 1995), Ant Colony Optimization (ACO) (Kaveh & Talatahari, 2010), the Firefly Algorithm (FA) (Yang, 2009), and Social Spider Optimization (SSO) (Cuevas, Cienfuegos, Zaldívar, & Pérez-Cisneros, 2013). For example, Pruthi and Gupta (2016) presented an algorithm based on GA and its combination with Otsu to overcome segmentation drawbacks. PSO has also been applied to determine the multilevel thresholds for image segmentation (Ghamisi, Couceiro, Benediktsson, & Ferreira, 2012; Ye, Hu, Wang, & Chen, 2011). Naik and Gopal (2016) used PSO as a preprocessing technique prior to image segmentation: the image is preprocessed to get the optimum pixels, and then k-means clustering is applied to the image for segmentation. Erdmann, Wachs-Lopes, Gallão, Ribeiro, and Rodrigues (2015) presented a firefly algorithm (FA) method for multilevel thresholding segmentation, and an improved FA (IFA) was used to enhance the performance of FA in solving a multilevel image thresholding problem as in Chen et al. (2016). The IFA used a Cauchy mutation and a neighborhood strategy to increase the global search ability and accelerate the convergence. The IFA is also utilized to search for multilevel global best thresholds and segment images into background and objects (Horng, 2011). In addition, Fayad, Hatt, and Visvikis (2015) developed and validated an image segmentation approach based on the ACO algorithm. On the other hand, Agarwal, Singh, Kumar, and Bhattacharya (2016) deployed histogram-based bi-modal and multi-modal thresholding for gray images using SSO; they employed SSO to maximize the values of Kapur's and Otsu's functions. Furthermore, many other swarm algorithms have been used for image segmentation, including the bacterial foraging algorithm (BFO) (Bakhshali & Shamsi, 2014; Sanyal, Chatterjee, & Munshi, 2011), honey bee mating optimization (HBMO) (Horng, 2010), wind driven optimization (WDO) (Bayraktar, Komurcu, Bossard, & Werner, 2013), cuckoo search (CS) (Agrawal, Panda, Bhuyan, & Panigrahi, 2013), the artificial bee colony (ABC) (Akay, 2013; Bhandari, Kumar, & Singh, 2015), the harmony search (HS) algorithm (Oliva, Cuevas, Pajares, Zaldivar, & Perez-Cisneros, 2013), and a hybrid swarm (FASSO) (Abd El-Aziz, Ewees, & Hassanien, 2016). However, most of these algorithms converge slowly to the global optimal solution (i.e. they can get stuck in local optima), and the quality of the segmentation becomes poor when the number of thresholds is increased. In addition, their parameters must be fine-tuned before they are applied.

Recently, two new swarm algorithms, the Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO), were introduced. The WOA was introduced by Mirjalili and Lewis (2016) to solve global optimization problems by emulating the behavior of humpback whales. Humpback whales have a unique hunting method called bubble-net feeding (Mirjalili & Lewis, 2016). This behavior works in three different phases, namely the coral loop, lobtail, and capture loop (Mirjalili & Lewis, 2016). More information about this behavior can be found in Goldbogen et al. (2013).

The WOA has been used in several applications, such as in Kaveh and Ghazaan (2016), Cherukuri and Rayapudi (2016), and Touma (2016); these studies have shown that the WOA algorithm gives better accuracy when compared to other state-of-the-art evolutionary algorithms such as PSO and GA. WOA includes exploration and two approaches to exploitation, which make it a flexible, gradient-free mechanism with the ability to avoid local optima and reach the global optimal solution, making it suitable for real applications. In addition, it does not need structural adjustments for solving different optimization problems, because it has only two main parameters to be adapted (Mirjalili & Lewis, 2016).

On the other hand, Mirjalili (2015) presented MFO, a new metaheuristic algorithm based on the behavior of moths. Moths have a special navigation style at night, since they fly using the moonlight. A moth in nature flies by keeping a fixed angle with respect to the moon, which eventually makes it fly in a straight line toward the light; but when there is an artificial light, the moth flies spirally around it, since this light is very close compared to the moon (Mirjalili, 2015). The power of the MFO algorithm comes from increasing the exploration of the search space and decreasing the possibility of being trapped in local optima: it assigns each moth a flame, updates the set of flames in every iteration, and saves the best-obtained positions in the flames, so they are never lost and work as guides for the moths (Mirjalili, 2015). In Mirjalili (2015), Li, Li, and Liu (2016), Nanda et al. (2016), and Parmar et al. (2016), the MFO was compared with other swarm algorithms on several continuous and discrete optimization problems. Its results demonstrated high exploitation capability, applicability to real problems, and the ability to optimize search spaces with infeasible regions.

To sum up, these algorithms have many advantages, such as a small number of parameters, simple frameworks, and the ability to avoid getting stuck in local optima. Therefore, they are suitable for many real applications and can solve several global optimization problems without changing the structure of the original algorithm. In addition, both the MFO and WOA algorithms have operators to balance exploration and exploitation, unlike other optimizers (e.g. PSO and GA) that apply one equation to update an agent's position, which can increase the probability of being trapped in local optima. Based on the success of the two proposed algorithms (WOA and MFO) in several applications, this study further illustrates their power in solving image segmentation problems using multilevel thresholding. Otsu's criterion is used as a fitness function to assess the solutions in each population of the two algorithms and thus determine the best accuracy of the segmented images. In addition, the MFO and WOA algorithms tune their control parameters while searching for the global solution, which avoids the problem of pre-defined control parameters that can be unsuitable for the problem at hand.

The main contributions of this study are two-fold: (1) introduce the Whale Optimization Algorithm (WOA) and the Moth-Flame Optimization (MFO) algorithm for selecting optimal threshold values; (2) compare WOA and MFO in this context. To the best of our knowledge, this concept has not been investigated yet.

The two proposed algorithms generate random solutions in a domain bounded by the image histogram. During the updating process, the quality of each solution is evaluated using the between-class variance function. According to the fitness function value, the set of solutions is updated based on the characteristics of WOA and MFO until a termination criterion is satisfied. The results of the WOA and MFO algorithms have been compared with those of other metaheuristic algorithms. The performance of the different algorithms has been assessed on various images using the best fitness values and two other criteria, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM); the time complexity is also taken into account, and the variance between all the algorithms has been analyzed using an ANOVA test.

The rest of this paper is arranged as follows: in Section 2, the problem formulation and the definition of Otsu's method are introduced. The proposed algorithms for multilevel thresholding based on WOA and MFO are illustrated in Section 3. The experiments and results are given in Section 4. Finally, the conclusion and future work are presented in the last section.

2. Problem definition

In this section, the multilevel thresholding problem is illustrated; it is defined mathematically by considering
C
t
b
I
d
&
D
X
w
[
b
c
a
A
B
w
[
e
ν
T
o
d
m
X
w
o
a
E
(
w
s
l
i
D
X
w
p
B
p
o
r
i
t
l
(
3
m
a
a gray level image I to be segmented consisting of K + 1 classes. Therefore, K thresholds {t_1, t_2, ..., t_K} are required to divide the image into subregions as in Eq. (1) (Bhandari et al., 2014; Yang, 2014):

C_0 = { g(i, j) ∈ I | 0 ≤ g(i, j) ≤ t_1 − 1 },
C_1 = { g(i, j) ∈ I | t_1 ≤ g(i, j) ≤ t_2 − 1 },
...
C_K = { g(i, j) ∈ I | t_K ≤ g(i, j) ≤ L − 1 }    (1)

where C_k represents the k-th class of the image, t_k (k = 1, ..., K) is the k-th threshold value, g(i, j) is the gray level of the pixel (i, j), and L is the number of gray levels of I, which lie in the range [0, L − 1].

The essential purpose of multilevel thresholding is to locate the threshold values that split the pixels into various groups; these can be determined by maximizing the following equation:

t*_1, t*_2, ..., t*_K = argmax_{t_1, ..., t_K} F(t_1, ..., t_K)    (2)

where F(t_1, ..., t_K) is the Otsu function, defined as (Chen et al., 2016):

F = Σ_{i=0}^{K} A_i (η_i − η_1)²,    (3)

A_i = Σ_{j=t_i}^{t_{i+1}−1} P_j,    (4)

η_i = (1/A_i) Σ_{j=t_i}^{t_{i+1}−1} j P_j,  where P_j = h(j)/N_p    (5)

where η_1 is the mean intensity of I, with t_0 = 0 and t_{K+1} = L. h(i) and P_i are the frequency and the probability of the i-th gray level, respectively. N_p represents the total number of pixels in I.
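The between-class variance of Eqs. (3)–(5) can be evaluated directly from the image histogram. The following Python sketch is an illustrative re-implementation (the paper's experiments used Matlab, and all names here are our own):

```python
import numpy as np

def otsu_fitness(hist, thresholds, L=256):
    """Between-class variance (Eqs. (3)-(5)) for thresholds t_1 < ... < t_K.

    hist: length-L array of gray-level frequencies h(i).
    thresholds: sorted list of K integer thresholds in (0, L).
    """
    p = hist / hist.sum()                 # P_j = h(j) / N_p
    levels = np.arange(L)
    eta_total = (levels * p).sum()        # mean intensity of the whole image
    bounds = [0] + list(thresholds) + [L] # t_0 = 0, t_{K+1} = L
    fitness = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        A = p[lo:hi].sum()                # Eq. (4): class probability
        if A > 0:
            eta = (levels[lo:hi] * p[lo:hi]).sum() / A  # Eq. (5): class mean
            fitness += A * (eta - eta_total) ** 2       # Eq. (3)
    return fitness
```

A candidate threshold vector with a larger between-class variance separates the classes better, so the optimizers below maximize this value.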
3. The proposed algorithms for multilevel thresholding

In this section, we introduce the two algorithms that are used to determine the optimal multilevel thresholds for image segmentation by maximizing Otsu's objective function.

3.1. Whale Optimization Algorithm

The Whale Optimization Algorithm (WOA) was proposed by Mirjalili and Lewis (2016), inspired by the behavior of humpback whales.

For the multilevel thresholding problem, the goal of the WOA algorithm is to determine the position in the search space (the optimal threshold values) that optimizes the objective function given by Eq. (2). The input to the algorithm is the image histogram, and the output is the optimal position X* that represents the optimal thresholds. Each individual whale X_i (i = 1, ..., N) is represented as a vector of possible real values corresponding to thresholds as follows (Mirjalili & Lewis, 2016):

X_i = (x_{i,1}, x_{i,2}, ..., x_{i,K})^T,  subject to 0 < x_{i,1} < ... < x_{i,K} < L    (6)

where x_{i,1} corresponds to t_1 in Eq. (2). The position of each individual whale of the population is generated randomly in the range [g_min, g_max], where g_min and g_max are the minimum and maximum gray levels in the image histogram, respectively. This can be expressed by the following equation (Mirjalili & Lewis, 2016):

x_{i,j} = g_min + rand(0, 1) × (g_max − g_min),  x_{i,j} ∈ X_i,  j = 1, 2, ..., K    (7)
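The initialization of Eqs. (6)–(7) can be sketched as follows (Python, illustrative; taking g_min and g_max from the nonzero support of the histogram is our own reading of the text):

```python
import numpy as np

def init_positions(hist, n_agents, K):
    """Random agent positions in [g_min, g_max]^K, sorted per Eq. (6)."""
    nonzero = np.flatnonzero(hist)
    g_min, g_max = nonzero[0], nonzero[-1]  # darkest/brightest occupied levels
    # Eq. (7): uniform random positions inside the histogram's gray range
    X = g_min + np.random.rand(n_agents, K) * (g_max - g_min)
    return np.sort(X, axis=1)               # enforce x_{i,1} < ... < x_{i,K}
```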
Then the fitness function for each whale is computed, the best fitness value F_g is determined, and its corresponding position is X*. The whales attack their prey X* by either the encircling or the bubble-net method after searching for X* (Mirjalili & Lewis, 2016). In the encircling behavior, the whales update their position based on the best location as follows (Mirjalili & Lewis, 2016):

D_i = | B ⊙ X*(ν) − X_i(ν) |    (8)

X_i(ν + 1) = X*(ν) − A ⊙ D_i    (9)

where ⊙ is an element-wise multiplication (e.g. [a_1, a_2, ..., a_n] ⊙ [b_1, b_2, ..., b_n] = [a_1 b_1, a_2 b_2, ..., a_n b_n]), D_i represents the distance between X*(ν) and X_i(ν), and ν represents the current iteration. A and B are coefficient vectors, determined as follows (Mirjalili & Lewis, 2016):

A = 2 a ⊙ r − a    (10)

B = 2 r    (11)

where r is a random vector whose elements are generated in [0, 1], and a is linearly decreased from 2 to 0 as the iterations proceed (Mirjalili & Lewis, 2016), where ν_max is the maximum number of iterations.

There are two strategies to simulate the bubble-net behavior. The first is the shrinking encircling mechanism, given by decreasing the value of a in Eq. (10), which also decreases A. The other is the spiral updating position; Eq. (12) is used to mimic the helix-shaped movement of the humpback whales around X* (Mirjalili & Lewis, 2016):

X_i(ν + 1) = D′_i · e^(bl) · cos(2πl) + X*(ν)    (12)

where D′_i = | X*(ν) − X_i(ν) |, b is a constant defining the shape of the logarithmic spiral, and l ∈ [−1, 1] is a random variable.

The whales can swim around X* simultaneously through a shrinking circle and along a spiral-shaped path; therefore, Eqs. (8), (9) and (12) are combined in the following equation (Mirjalili & Lewis, 2016):

X_i(ν + 1) = X*(ν) − A ⊙ D_i,                 if p < 0.5
X_i(ν + 1) = D′_i · e^(bl) · cos(2πl) + X*(ν), if p ≥ 0.5    (13)

where p ∈ [0, 1] represents the probability of choosing either the shrinking encircling mechanism or the spiral model to update X_i.

Since the humpback whales also search randomly for X*, the location of a whale can be updated by choosing a random search agent instead of the best search agent, as follows (Mirjalili & Lewis, 2016):

D = | B ⊙ X_rand − X_i(ν) |    (14)

X_i(ν + 1) = X_rand − A ⊙ D    (15)

where X_rand is a random position vector chosen from the current population.

Finally, based on the value of a (which decreases from 2 to 0), A, B, and the probability p, the position of the i-th whale is updated: if p ≥ 0.5 then Eq. (12) is used; otherwise, either Eqs. (8) and (9) or Eqs. (14) and (15) are used, based on the value of |A|. This process is repeated until the stopping criterion is satisfied, and the output is X*, which represents the optimal thresholds. Fig. 1 illustrates the entire structure of WOA as used in this paper for multilevel thresholding, where E1 is given in Eq. (12), E2 performs Eqs. (8) and (9), and E3 calculates Eqs. (14) and (15).
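The update rules of Eqs. (8)–(15) can be condensed into one WOA iteration as in the following sketch (Python, illustrative; the choice of encircling when |A| < 1 and random search otherwise follows Mirjalili & Lewis (2016), and all helper names are our own):

```python
import numpy as np

def woa_step(X, X_best, a, rng, b=1.0):
    """One WOA position update (Eqs. (8)-(15)); X has shape (N, K)."""
    N, K = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = rng.random(K)
        A = 2 * a * r - a                       # Eq. (10)
        B = 2 * rng.random(K)                   # Eq. (11)
        p = rng.random()
        if p >= 0.5:                            # spiral update, Eq. (12)
            l = rng.uniform(-1, 1)
            D = np.abs(X_best - X[i])
            X_new[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
        elif np.all(np.abs(A) < 1):             # encircling, Eqs. (8)-(9)
            D = np.abs(B * X_best - X[i])
            X_new[i] = X_best - A * D
        else:                                   # random search, Eqs. (14)-(15)
            X_rand = X[rng.integers(N)]
            D = np.abs(B * X_rand - X[i])
            X_new[i] = X_rand - A * D
    return X_new
```

For thresholding, the returned positions would be clipped to the gray range, sorted, and re-scored with the Otsu objective before the next iteration.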
3.2. Moth-Flame Optimization (MFO)

Mirjalili (2015) modeled the natural behavior of moths in a mathematical form to provide the MFO algorithm as a new metaheuristic algorithm. In this paper, the MFO algorithm
Fig. 1. The structure of the first proposed algorithm based on WOA.
has been applied to determine the multilevel thresholds as follows: a population of N individual moths is created. Each individual moth (solution) X_i (i = 1, ..., N) is encoded as shown in Eq. (6). Each X_i uses K decision variables, each of which represents a different threshold point x_{i,j} (j = 1, ..., K) used for multilevel image thresholding. The position of each individual moth in the population is initialized using Eq. (7) and evaluated using Eq. (2); then the optimal positions of the moths are saved in flames.

The input to the MFO algorithm is the image histogram, and the main steps of MFO can be defined as follows (Mirjalili, 2015):

MFO = (R, V, T)    (16)

where R is used to produce and initialize the population of moths randomly, together with their fitness values; V is the main function that makes the moths move around the search space; and T is a termination criterion flag.

In the V function, the flames are used to update the positions of the moths by the following equation (Mirjalili, 2015):

X_i = S(X_i, F_u),    (17)

where S is the spiral function, X_i is the i-th moth, and F_u is the u-th flame. S(X_i, F_u) is calculated using the following equations (Mirjalili, 2015):

S(X_i, F_u) = D_i · e^(bl) · cos(2πl) + F_u    (18)

D_i = | F_u − X_i |    (19)

where b is a constant defining the shape of the logarithmic spiral, l ∈ [−1, 1] is a random number (similar to Eq. (12)), and D_i defines the distance between the i-th moth X_i and the u-th flame F_u. The exploitation of the best solutions may degrade because the moths' positions are updated with respect to N different locations in the search space. To resolve this issue, an adaptive mechanism reduces the number of flames (Fno); the following equation is applied to perform that (Mirjalili, 2015):

Fno = round( N − ν × (N − 1)/T )    (20)

where ν is the current iteration number, N is the maximum number of flames, and T indicates the maximum number of iterations (the termination criterion). Fig. 2 illustrates the whole operation of MFO, where E1 is calculated by Eq. (20), E2 is given by Eq. (19), and E3 is performed by Eq. (18).

Fig. 2. The structure of the second proposed algorithm based on MFO.
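One MFO update over the whole population, per Eqs. (17)–(20), can be sketched as follows (Python, illustrative; assigning surplus moths to the last remaining flame follows the standard MFO convention, which this section does not spell out):

```python
import numpy as np

def mfo_step(moths, flames, nu, T, rng, b=1.0):
    """One MFO position update (Eqs. (17)-(20)); moths/flames: shape (N, K)."""
    N, K = moths.shape
    fno = round(N - nu * (N - 1) / T)   # Eq. (20): flame count shrinks to 1
    new = np.empty_like(moths)
    for i in range(N):
        u = min(i, fno - 1)             # surplus moths share the last flame
        l = rng.uniform(-1, 1, K)
        D = np.abs(flames[u] - moths[i])                               # Eq. (19)
        new[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + flames[u] # Eq. (18)
    return new
```

After each step, the flames would be refreshed with the best fitness-sorted positions found so far, so the best thresholds are never lost.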
4. Experiments and results

In this section, we introduce the environment of the experiments for the proposed algorithms. The description of the benchmark images is introduced first; then the parameter settings of each algorithm are illustrated briefly, and a set of quality metrics is used to evaluate the quality of the segmentation process.

4.1. Benchmark images

In this paper, the proposed algorithms are tested on eight common grayscale images from the Berkeley University database (Martin, Fowlkes, Tal, & Malik, 2001). These images are called TestE4, TestE1, Test1, Test6, Test5, Test11, Test3, and Test9, as illustrated in Fig. 3.

4.2. Experimental settings

The results of the proposed algorithms are compared with five other evolutionary algorithms, namely SCA (Mirjalili, 2016), HS (Oliva et al., 2013), SSO (Ouadfel & Taleb-Ahmed, 2016), FASSO (Abd El-Aziz et al., 2016), and FA; these algorithms were applied in previous works and achieved good results. For a fair comparison, all algorithms used in this paper have the same stopping conditions
Fig. 3. Samples of the tested images.
(the maximum number of iterations was set to 100), a total of 30 runs per algorithm, and the same population size (set to 25). The parameters of each algorithm are illustrated in Table 1.

All experiments are evaluated on the test images with the following numbers of thresholds: 2, 3, 4, and 5, as in Bhandari et al. (2014), Akay (2013), and Ouadfel and Taleb-Ahmed (2016). The reason for selecting such thresholds was to show the performance of the proposed algorithms compared to the traditional swarm-based segmentation algorithms.

All the algorithms are programmed in "Matlab2014" and implemented in a "Windows 7 - 64bit" environment on a computer with an Intel Core2Duo (1.66 GHz) processor and 2 GB of memory.

4.3. Segmented image quality metrics

Four measures are applied to evaluate the performance of the segmented image produced by the proposed algorithms, as follows:

1. The computation time.
2. The fitness function value, given by Eq. (3).
3. The Peak Signal-to-Noise Ratio (PSNR) (Yin, 2007): it measures the difference between the segmented image and the reference image; it depends on the intensity values in the image and refers to the quality of the reconstructed image. The PSNR is defined (in dB) as:

PSNR = 20 log10( 255 / RMSE )    (21)

where RMSE is the root mean-squared error, defined as:

RMSE = sqrt( Σ_{i=1}^{M} Σ_{j=1}^{Q} (I(i, j) − Seg(i, j))² / (M × Q) )    (22)

where I and Seg are the original and segmented images of size M × Q, respectively. A high PSNR value indicates high performance of the segmentation algorithm.

4. The SSIM (Wang, Bovik, Sheikh, & Simoncelli, 2004): it is defined as in Eq. (23) and is used to assess the similarity between I and Seg:

SSIM(I, Seg) = ( (2 μ_I μ_Seg + c_1)(2 σ_{I,Seg} + c_2) ) / ( (μ_I² + μ_Seg² + c_1)(σ_I² + σ_Seg² + c_2) )    (23)

where μ_I and μ_Seg are the mean intensities of I and Seg, respectively; σ_I and σ_Seg represent the standard deviations of I and Seg, respectively; and σ_{I,Seg} is the covariance of I and Seg. Following Wang et al. (2004), c_1 and c_2 are two constants, with c_1 = 6.5025 and c_2 = 58.5225 (Mlakar, Potočnik, & Brest, 2016). The highest value of SSIM indicates a better performance.
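Eqs. (21)–(23) can be sketched in Python as follows (illustrative; unlike Wang et al. (2004), this computes SSIM globally over the whole image rather than with a sliding window):

```python
import numpy as np

def psnr(I, Seg):
    """Eqs. (21)-(22): peak signal-to-noise ratio in dB."""
    rmse = np.sqrt(np.mean((I.astype(float) - Seg.astype(float)) ** 2))
    return 20 * np.log10(255.0 / rmse)

def ssim(I, Seg, c1=6.5025, c2=58.5225):
    """Global SSIM as in Eq. (23) (single window over the whole image)."""
    I, Seg = I.astype(float), Seg.astype(float)
    mu_i, mu_s = I.mean(), Seg.mean()
    var_i, var_s = I.var(), Seg.var()
    cov = ((I - mu_i) * (Seg - mu_s)).mean()   # sigma_{I,Seg}
    return ((2 * mu_i * mu_s + c1) * (2 * cov + c2)) / \
           ((mu_i**2 + mu_s**2 + c1) * (var_i + var_s + c2))
```

An image compared with itself yields SSIM = 1, and PSNR grows as the segmented image approaches the original.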
Table 1
The parameters of all algorithms and their values.
Algorithm Parameters Value
SCA a 2
HS HMCR 0.7
pitch adjusting rate for each generation ( PAR ) 0.3
minimum pitch adjusting rate ( PAR min ) 0.3
maximum pitch adjusting rate ( PAR max ) 0.9
minimum bandwidth ( bw min ) 0.2
maximum bandwidth ( bw max ) 0.5
WOA a [0, 2]
b 1
l [ −1, 1]
MFO b 1
l [ −1, 1]
SSO Probabilities of attraction or repulsion ( pm ) 0.7
Lower Female Percent 65
Upper Female Percent 90
FASSO γ FA 0.7
βFA 1.0
αFA 0.8
Probabilities of attraction or repulsion ( pm ) 0.7
Lower Female Percent 65
Upper Female Percent 90
FA γ FA 0.7
βFA 1.0
αFA 0.8
4.4. The results and discussions

The results of the proposed algorithms compared to the other algorithms are illustrated in Tables 2–6 and Figs. 4–7. Tables 2 and 3 show the fitness values and the optimal threshold values obtained by the algorithms over 30 runs, respectively. The results in Table 2 and Fig. 4 indicate that all the algorithms performed nearly equally when K = 2; however, the WOA algorithm achieved the highest fitness values, and the FA algorithm came in second rank, followed by MFO. When K = 3, the SSO, FA and FASSO had nearly the same fitness values; however, WOA and MFO were better than the rest of the algorithms. Also, from Fig. 4, the MFO algorithm is better than the WOA algorithm at higher threshold values, which indicates that the WOA algorithm is more sensitive to the increase of the threshold level than the MFO algorithm.

Table 2
The best fitness values obtained from the algorithms. The bold value indicates the best result.

Images K SCA HS WOA MFO SSO FASSO FA
Test E4 2 5064.5391 5039.9047 5070.6206 5048.1828 5051.3459 5070.7344 5067.0938
Test E4 3 5165.9159 5157.9705 5230.4810 5231.6618 5229.5149 5218.8900 5219.7479
Test E4 4 5299.4234 5252.2572 5312.4881 5305.1061 5295.8144 5288.4145 5288.3079
Test E4 5 5319.7711 5294.0391 5338.2348 6083.6214 5333.6823 5338.2206 5323.7090
Test E1 2 3414.7717 3388.7484 3471.8841 3470.7866 3458.2832 3470.1153 3458.8482
Test E1 3 3586.8042 3498.7638 3613.4802 3592.7662 3615.4737 3623.573 3606.4336
Test E1 4 3574.7764 3651.7961 3708.4724 4234.6624 3681.5660 3691.5305 3698.4993
Test E1 5 3690.1246 3650.8226 3734.1223 5256.7775 3727.7242 3716.1480 3707.1112
Test 1 2 3515.8433 3576.7970 3649.3678 3643.3199 3643.3813 3634.6238 3641.3059
Test 1 3 3680.5642 3676.9657 3690.1193 3720.2506 3719.5956 3715.9656 3713.0343
Test 1 4 3749.1817 3733.8854 3760.5080 3748.7521 3762.9716 3770.4375 3743.4140
Test 1 5 3718.1888 3775.6881 3795.8628 3758.7248 3789.3692 3795.9645 3792.6467
Test 6 2 2324.5930 2420.2291 2433.3641 2435.5069 2429.4443 2429.5964 2436.7088
Test 6 3 2550.4775 2574.0580 2493.1884 2574.7041 2580.0456 2513.5387 2542.0552
Test 6 4 2537.9873 2578.9291 2632.9086 2647.6704 2620.4375 2641.8972 2624.0218
Test 6 5 2603.8331 2631.0178 2682.0104 2669.4779 2663.1375 2639.6097 2646.3557
Test 5 2 1901.7366 1923.8044 1942.8450 1945.1633 1949.2131 1937.4039 1942.9876
Test 5 3 1983.7344 2004.9483 2022.5890 1996.8834 2016.1655 2016.9118 2023.2169
Test 5 4 1998.3009 2022.2556 2054.8458 2061.949 2058.5454 2059.6235 2047.6726
Test 5 5 2069.7045 2037.5637 2074.8249 2080.1690 2068.4019 2096.8914 2058.1369
Test 11 2 3969.7370 3900.9384 3968.9561 3970.0579 3975.8412 3971.7841 3976.6612
Test 11 3 3979.0300 4047.7959 4040.3573 4095.6016 4094.5830 4085.0319 4110.1048
Test 11 4 4095.5692 4100.8283 4154.3274 4176.697 4169.4633 4156.2061 4123.1617
Test 11 5 4121.8177 4128.5393 4136.5779 4178.6363 4188.7660 4981.247 4162.1601
Test 3 2 1455.2294 1537.2237 1545.9279 1538.8138 1539.7624 1547.8407 1545.1063
Test 3 3 1527.8000 1606.6116 1635.7034 1592.9889 1630.4651 1634.0543 1628.9734
Test 3 4 1663.4374 1652.8367 1669.8319 1647.9387 1644.1720 1655.8984 1674.267
Test 3 5 1628.2841 1604.0117 1682.4839 1705.9335 1678.6309 1808.2887 1680.8519
Test 9 2 2483.2066 2507.2149 2526.9324 2530.9172 2523.9278 2492.6657 2530.2152
Test 9 3 2782.3159 2672.5653 2856.89 2695.2402 2839.5165 2848.9213 2832.9652
Test 9 4 2682.8583 2781.1644 2718.6392 2815.9829 2720.5544 2656.5920 2682.3690
Test 9 5 2746.6330 2804.5901 2808.0800 2848.0778 2818.3497 2766.7178 2790.5508

Tables 4 and 5 and Figs. 5 and 6 give the SSIM and PSNR values, respectively; as the number of thresholds increases, the SSIM and PSNR values increase for all algorithms. These values indicate that the performance of the proposed methods is better than that of the other algorithms; also, MFO is better than WOA at all thresholds except K = 3, where WOA has a better performance (with a small difference). It can be seen from Tables 4 and 5 that the proposed algorithms obtain better results in most cases. This occurs because each image is considered a different optimization problem, and due to the randomness of the swarm approaches the results can vary in some cases; also, the No-Free-Lunch theorem (NFL) (Wolpert & Macready, 1997) illustrates that it is difficult to find a single algorithm suitable for several optimization problems. In addition, each image has a different gray-level histogram, and segmentation becomes a difficult process due to the multimodality of the histograms; moreover, the segmented image is built by assigning each class a gray-level value determined from the average of the members of the class. So, for each image, the proposed algorithms achieve good results in the fitness, PSNR, and SSIM values when the number of thresholds is close to the number of gray levels; whereas, when the number of thresholds is greater than the number of gray levels or the image's histogram has several spikes, the fitness, PSNR, and
Table 3
The best threshold values obtained from the algorithms for all test images.
Images K MFO WOA SCA HS SSO FASSO FA
Test E4 2 105 231 102 197 117 217 111 201 111 188 103 200 108 203
3 93 137 231 104 138 211 85 135 232 109 190 241 81 118 205 84 129 213 95 152 204
4 24 60 94 229 67 135 185 235 83 122 158 205 87 135 199 199 34 70 123 198 65 96 152 205 85 129 163 205
5 13 78 90 90 215 51 114 138 185 203 33 80 107 155 222 52 107 142 181 240 12 86 119 128 182 56 103 140 162 221 66 81 111 155 208
Test E1 2 87 142 46 97 75 157 87 141 73 137 81 146 63 138
3 48 120 173 75 138 191 44 80 145 66 87 151 37 78 153 49 93 138 43 97 151
4 10 12 54 90 61 106 142 237 41 71 120 166 49 103 137 228 105 123 123 223 45 93 110 167 59 105 139 173
5 16 45 70 130 219 44 59 117 155 226 39 82 102 164 213 11 48 76 101 150 113 116 143 143 223 43 76 95 130 182 39 64 105 131 170
Test 1 2 96 167 94 146 86 158 96 168 97 167 86 150 87 161
3 65 155 197 59 143 208 73 151 196 74 142 175 81 142 186 74 144 201 68 143 188
4 59 117 133 203 77 135 200 222 76 126 166 188 62 124 160 198 0 0 95 134 66 119 169 210 65 115 159 184
5 47 85 110 173 210 3 53 111 166 184 80 126 171 202 207 52 95 99 137 202 15 24 83 92 252 2 45 93 93 172 78 112 148 182 206
Test 6 2 100 182 56 152 66 143 51 143 65 127 75 145 75 139
3 92 124 161 72 119 165 59 127 163 53 140 214 77 131 176 71 104 153 70 118 165
4 41 139 157 187 47 74 132 155 68 94 136 167 71 94 119 152 6 16 111 120 65 110 141 188 44 99 152 180
5 59 81 121 165 207 29 85 97 133 157 38 81 125 181 183 77 91 110 141 174 42 82 105 119 220 43 78 133 178 205 58 80 128 147 190
Test 5 2 103 166 111 181 111 174 116 167 97 165 117 173 113 175
3 96 119 178 104 121 178 92 141 195 99 135 187 112 152 189 101 150 187 94 140 177
4 141 161 190 205 34 78 145 198 57 107 147 195 93 125 145 161 2 12 121 159 78 78 107 140 66 90 159 201
5 17 72 115 162 180 39 118 149 167 188 58 81 120 173 201 36 46 116 176 195 4 24 126 169 235 28 28 31 139 194 95 112 150 174 200
Test 11 2 96 167 94 146 86 158 96 168 97 167 86 150 87 161
3 65 155 197 59 143 208 73 151 196 74 142 175 81 142 186 74 144 201 68 143 188
4 59 117 133 203 77 135 200 222 76 126 166 188 62 124 160 198 3 10 95 134 66 119 169 210 65 115 159 184
5 47 85 110 173 210 4 53 111 166 184 80 126 171 202 207 52 95 99 137 202 15 25 83 92 252 2 12 93 93 172 78 112 148 182 206
Test 3 2 111 133 83 151 103 150 77 156 92 134 96 144 87 150
3 74 137 168 27 83 144 81 112 145 36 99 148 86 115 149 53 106 161 99 128 166
4 44 84 133 173 96 109 134 152 61 82 128 164 30 81 112 159 0 0 121 127 66 103 138 164 62 89 144 163
5 46 94 158 189 219 32 77 101 148 207 75 99 140 159 188 60 117 130 138 158 26 26 81 107 155 36 36 71 123 123 71 108 136 169 194
Test 9 2 123 213 87 190 91 162 87 165 79 158 89 151 100 152
3 81 110 175 93 161 217 76 120 170 80 140 187 61 101 158 57 97 167 64 113 152
4 81 124 148 213 17 90 126 192 68 109 142 184 42 98 144 183 126 154 154 223 64 113 142 186 50 101 154 199
5 71 94 128 141 193 62 112 167 185 221 62 86 128 171 222 62 67 98 152 193 12 34 85 100 147 62 94 135 190 220 42 112 118 164 209
Table 4
The average of the SSIM measure of all algorithms.
Images K SSIM measure
SCA HS WOA MFO SSO FASSO FA
Test E4 2 0.6195 0.4922 0.6203 0.6047 0.5852 0.5439 0.5487
3 0.7276 0.7562 0.7324 0.7391 0.6819 0.6012 0.6348
4 0.7547 0.7268 0.8000 0.7929 0.7613 0.7749 0.7846
5 0.5985 0.7487 0.8053 0.8251 0.8035 0.7696 0.7876
Test E1 2 0.3739 0.3812 0.4901 0.5045 0.4251 0.4509 0.4564
3 0.5814 0.4403 0.6349 0.6596 0.5938 0.6221 0.5723
4 0.5321 0.5685 0.724 0.6815 0.6428 0.7283 0.7097
5 0.6868 0.6739 0.7516 0.7469 0.6259 0.7554 0.7645
Test 1 2 0.6088 0.6236 0.6455 0.6554 0.618 0.6048 0.6323
3 0.6804 0.7005 0.6935 0.6709 0.7007 0.6294 0.6168
4 0.6692 0.6917 0.7384 0.7087 0.6253 0.6919 0.6569
5 0.6221 0.6493 0.7668 0.6824 0.6635 0.663 0.6806
Test 6 2 0.4846 0.6287 0.6478 0.6399 0.6173 0.6381 0.6261
3 0.6078 0.667 0.6581 0.7017 0.6759 0.658 0.6493
4 0.6077 0.7002 0.715 0.7175 0.6728 0.6837 0.6495
5 0.6632 0.6742 0.7511 0.734 0.7065 0.6859 0.7627
Test 5 2 0.7521 0.6782 0.7333 0.7523 0.7015 0.7068 0.7242
3 0.7529 0.6796 0.8023 0.7843 0.7822 0.7424 0.8004
4 0.8053 0.8085 0.8216 0.8032 0.7624 0.7676 0.8174
5 0.8235 0.8125 0.7974 0.8420 0.8102 0.7314 0.7127
Test 11 2 0.4414 0.3723 0.5524 0.48 0.5044 0.5189 0.4906
3 0.4857 0.4529 0.6484 0.6213 0.5203 0.5472 0.4859
4 0.6553 0.7396 0.6816 0.7082 0.5892 0.5312 0.5531
5 0.5045 0.6806 0.7048 0.7108 0.5415 0.6204 0.7235
Test 3 2 0.6103 0.5844 0.6449 0.6388 0.6102 0.6227 0.6364
3 0.6448 0.6859 0.7206 0.7409 0.7267 0.6858 0.6842
4 0.7156 0.69 0.7950 0.787 0.7438 0.7287 0.7747
5 0.6802 0.6682 0.7812 0.8498 0.821 0.6295 0.7529
Test 9 2 0.4987 0.4926 0.4709 0.5903 0.442 0.3885 0.4257
3 0.5792 0.467 0.6422 0.5526 0.5196 0.4974 0.5776
4 0.6482 0.7238 0.6769 0.6676 0.6732 0.7151 0.6723
5 0.6965 0.7348 0.7652 0.7334 0.7266 0.7586 0.6688
The bold value indicates the best result.
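The SSIM scores in Table 4 follow the structural similarity index of Wang, Bovik, Sheikh, and Simoncelli (2004). As a rough illustration only, the sketch below computes a simplified global SSIM over a single window with the usual default constants for 8-bit data, rather than the windowed mean SSIM used in practice; the function and variable names are ours, not the paper's.

```python
def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM computed over one global window.

    x, y: flat lists of pixel intensities of equal length.
    Uses the standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2.
    """
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return (((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

Identical images score 1, and the score decreases as the structure of the two images diverges, which is why a poor set of thresholds lowers the SSIM values in the table.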
Table 5
The average of the PSNR measure of all algorithms.
Images K PSNR Values
SCA HS WOA MFO SSO FASSO FA
Test E4 2 13.0045 12.2166 13.653 13.8395 12.9733 12.5911 12.387
3 17.2053 15.7524 18.059 17.1193 16.8979 13.9514 14.7777
4 19.2715 17.9618 19.0711 19.9507 19.2601 18.2197 18.124
5 14.2107 18.337 20.0892 20.6621 17.857 19.2864 16.7468
Test E1 2 14.4233 14.7626 15.7292 15.7546 15.376 15.7217 15.763
3 16.3755 15.8382 18.1021 18.5076 18.1777 18.4039 17.9375
4 17.4391 18.2664 20.0853 19.1225 19.5899 19.9914 19.7929
5 20.5316 19.4005 20.731 20.3132 19.5003 20.5745 20.7466
Test 1 2 15.8362 15.8607 18.2862 17.3744 16.5656 18.0274 16.9873
3 18.0557 19.0232 19.9084 20.0161 19.5581 19.0713 19.4861
4 20.219 18.8846 21.1897 21.9564 19.2004 21.5815 21.1582
5 18.6505 20.6799 22.1592 21.6302 19.9675 21.2913 21.1184
Test 6 2 14.2092 15.7202 16.2716 16.6278 16.3635 16.1094 15.978
3 16.4665 17.8253 18.0953 18.1623 17.4433 18.1653 18.2383
4 16.9392 18.5928 19.7982 20.6309 20.0213 19.8527 19.3548
5 19.5709 19.9212 21.6086 21.4056 20.1802 19.8086 21.0645
Test 5 2 14.7758 13.9101 12.9643 16.2863 14.5212 14.0105 14.9586
3 16.6242 18.1719 18.9844 17.5732 19.3025 19.368 18.8623
4 18.1081 18.674 21.2975 21.4182 20.5376 20.9627 20.7689
5 17.9419 19.9499 21.9104 21.617 21.6164 15.7763 19.8828
Test 11 2 13.8044 13.594 15.0451 14.73 14.7411 14.6021 14.6212
3 15.4964 14.9382 17.1398 17.1414 15.565 16.3517 15.1539
4 17.6467 19.5433 18.3906 17.9965 17.2343 16.3003 16.6301
5 15.8154 19.0958 17.7304 20.0647 16.6175 11.6553 19.9204
Test 3 2 15.2642 14.6551 16.0311 15.8636 15.1832 15.4887 15.8376
3 16.6058 17.6512 18.3643 18.5521 18.6601 16.9365 16.8615
4 17.5951 17.1198 20.4056 20.0673 18.9211 18.4171 19.625
5 18.0885 17.4659 20.2745 22.6505 21.6425 15.8491 19.0558
Test 9 2 14.9772 14.7914 14.3004 14.3439 13.9318 13.2174 13.6242
3 16.5885 14.3528 17.3002 16.825 15.5837 14.8871 16.5448
4 17.0656 19.1206 18.2904 18.4916 18.1812 18.8855 18.5408
5 18.9307 19.1787 20.7993 19.52 19.7274 20.4243 18.4551
The bold value indicates the best result.
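The PSNR values in Table 5 compare each thresholded image with its original. A minimal sketch for flat 8-bit grayscale data (helper and parameter names are ours):

```python
import math

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    flat lists of pixel intensities; higher means more similar."""
    mse = (sum((a - b) ** 2 for a, b in zip(original, segmented))
           / len(original))
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

The mean squared error is taken between the original image and its segmented version, so better threshold choices yield higher PSNR.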
Table 6
The average of the execution time of all algorithms.
Images K Execution time (in second)
SCA HS WOA MFO SSO FASSO FA
Test E4 2 5.97 4.36 5.98 3.36 5.01 5.75 3.52
3 5.85 18.75 4.53 3.78 4.06 7.89 4.77
4 4.86 6.49 3.92 4.43 3.28 3.46 2.63
5 5.06 7.63 4.98 5.01 3.76 3.79 2.95
Test E1 2 3.86 6.62 3.88 3.19 3.09 2.73 2.04
3 5.43 8.13 4.97 3.8 4.11 3.78 3.07
4 6.41 6.7 4.31 4.39 4.7 4.41 3.42
5 7.25 8.74 5.16 5.52 5.5 5.24 3.93
Test 1 2 4.33 4.13 2.71 2.7 2.68 3.56 2.4
3 5.96 6.54 3.02 3.24 3.14 4.98 2.4
4 5.28 6.28 2.14 3.73 2.83 2.97 2.33
5 7.04 7.02 4.35 4.2 4.21 4.11 3.01
Test 6 2 4 8.79 3.98 4.43 3.36 4.37 2.78
3 5.47 10.31 3.17 4.51 3.54 3.75 2.93
4 6.02 9.25 4.36 5.26 4.43 4.7 3.63
5 7.47 9.02 4.2 5.94 5.26 6.33 4.84
Test 5 2 4.45 6.21 3.8 3.74 3.27 3.42 2.74
3 5.43 6.65 4.82 4.48 3.99 3.94 3.26
4 5.2 7.61 5.4 5.32 4.02 4.04 3.33
5 6.53 11.25 6 5.95 4.81 6.12 4.24
Test 11 2 4.99 26.42 2.23 3.71 3.89 5.08 3.49
3 5.97 21.7 3.81 4.43 3.85 7.58 3.98
4 6.53 7.78 4.55 5.23 5.45 5.15 4.06
5 7.76 14.1 5.28 5.97 5.86 6.29 4.98
Test 3 2 3.92 5.79 3.56 3.74 2.82 2.84 2.32
3 4.89 6.68 3.84 4.6 3.38 3.58 2.77
4 6.48 10.48 4.11 5.25 4.26 4.83 4.18
5 6.03 8.89 4.25 6.09 4.68 4.82 4.01
Test 9 2 3.66 5.39 3.79 3.72 2.67 2.73 2.18
3 4.4 7.43 4.52 4.94 3.3 3.35 2.71
4 6.53 7.95 3.83 5.55 4.86 6 4.63
5 5.9 9.61 4.03 6.14 4.59 4.65 3.82
The bold value indicates the best result.
Fig. 4. The best fitness values of the algorithms over all images.
Fig. 5. The average of the SSIM measure of algorithms over all images.
Fig. 6. The average of the PSNR measure of algorithms over all images.
Fig. 7. The average of the execution time of the algorithms over all images.

SSIM values become low and both proposed algorithms may fail to segment the image. For these reasons, we used statistical comparisons to determine the best algorithm over a large number of images.

The best elapsed-time results of each algorithm are illustrated in Table 6 and Fig. 7. They show that the FA algorithm is the fastest, that the proposed methods are faster than the FASSO, HS, and SCA algorithms, and that the WOA is faster than the MFO.

Figs. 8–15 show the segmentation results of the proposed algorithms and the other algorithms at different threshold levels. We can conclude from these figures that the images segmented with a higher number of levels contain more details than the others.

The MFO accuracy can be viewed as a result of the fact that this algorithm uses many operators to explore and exploit the search domain during the evolution process. Using these operators also improves the distribution of agents in the search domain, which increases the probability of locating the global optimum. The ability to avoid getting stuck in local optima is high because the MFO uses a moth population to perform optimization. The WOA limitations are probably due to determining the optimal starting value for the parameter a; in fact, the convergence of WOA depends on this value (e.g., if it is not selected well, the convergence of WOA may be premature).

Based on the experiments' execution time, the fastest algorithm was FA, although it could not achieve the best mean fitness values in most cases because it got stuck in local optima; the proposed algorithms obtained the best fitness values in most of the experiments, despite not being the fastest. In general, these results are due to the proposed algorithms' better ability to switch between exploration and exploitation phases; they also have low complexity, high performance, and fewer control parameters than the other algorithms, which makes them easy to adapt to most optimization problems, since tuning an algorithm's parameters can be more complex than the problem itself.

4.5. ANOVA test

A parametric statistical test known as the analysis of variance (ANOVA) has been conducted with a 5% significance level to test the significance of the differences between algorithms. In the experiments, the proposed algorithms were chosen as the control group and were compared with the SCA, HS, FASSO, and FA algorithms in terms of the mean values of the four measures (which are normally distributed). The null hypothesis assumes that there is no significant difference between the mean values of the algorithms, whereas the alternative hypothesis assumes a significant difference between them.

Table 7 reports the p-values produced by the ANOVA test for the fitness function, execution time, SSIM, and PSNR measures
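The one-way ANOVA described above can be reproduced from the per-algorithm result samples. The snippet below computes only the F statistic in plain Python; obtaining the p-values reported in Table 7 additionally requires the CDF of the F distribution (e.g., scipy.stats.f_oneway returns both). The sample data and names here are illustrative, not the paper's measurements.

```python
def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA over a list of samples
    (one list of observations per algorithm)."""
    k = len(groups)                          # number of algorithms
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    # Mean squares: between-group over (k - 1), within-group over (n - k).
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative samples for three algorithms (not the paper's data):
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]])
```

A large F (relative to the F distribution with k − 1 and n − k degrees of freedom) yields a p-value below 0.05 and leads to rejecting the null hypothesis, as in the starred entries of Table 7.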
Fig. 8. The thresholded images of “Test E4” obtained by all algorithms.
Fig. 9. The thresholded images of “Test E1” obtained by all algorithms.
(Note: the (∗) indicates that there exists a significant difference.) Based on the fitness function, the p-value is greater than 0.05; therefore, the null hypothesis is accepted, which indicates that there is no difference between the proposed algorithms and the other algorithms. However, the MFO has the highest fitness function value, and the WOA is better than all the other algorithms except SSO and FASSO.

The p-value for SSIM, PSNR, and execution time is less than 0.05; therefore, we accept the alternative hypothesis, which indicates a significant difference between the proposed algorithms and the other algorithms.

Fig. 10. The thresholded images of “Test1” obtained by all algorithms.
Fig. 11. The thresholded images of “Test6” obtained by all algorithms.

In general, there are significant differences between the proposed algorithms and both SCA and HS in terms of all measures. There is also a significant difference between FASSO and the proposed algorithms in terms of SSIM and PSNR. It can be observed that the MFO is better than the WOA in all measures except the execution time.

It can be concluded from Table 7 that there is no significant difference between the proposed algorithms and the SSO and FA algorithms over all threshold values and all images. However, in order to investigate the difference between them (SSO, FA, MFO, and WOA), we compare them at each threshold level (for each image), as in Tables 8–10 for the fitness function, SSIM, and PSNR, respectively. Based on these tables, it can be seen that there is a significant difference between the SSO, FA, and the proposed algorithms, since the p-value is less than 0.05 for all measures. For example, in terms of fitness values, the WOA algorithm is better than the SSO and FA in 20 and 22 cases, respectively, whereas the MFO has better values than the SSO and FA in 19 and 21 cases, respectively. In terms of SSIM, the WOA and MFO are better than the SSO and FA in 27 and 29 cases for WOA, as well as 29 and 23 cases for MFO, respectively. However, the SSO and FA have better PSNR values than the WOA and MFO
Fig. 12. The thresholded images of “Test5” obtained by all algorithms.
Fig. 13. The thresholded images of “Test11” obtained by all algorithms.

only in 8 and 6 cases for WOA, and in 4 and 7 cases for MFO, respectively.

Moreover, by comparing the results of the MFO and WOA, it can be seen that the MFO gives the best fitness and SSIM values in 19 cases out of 32, while both algorithms have nearly the same performance in terms of PSNR.

To sum up, the results of our experiments illustrated that using the MFO for image multilevel thresholding is the most effective and most powerful approach, whilst the WOA, compared to the MFO, showed good results for a small number of thresholds, whereas its performance degraded for higher numbers. This can be attributed to the MFO's good ability to switch between exploration and exploitation, while the WOA is trapped early in local optima during the optimization process. The FA algorithm has the fastest execution time (in our study), although it could not achieve the best values in the other measures (in most cases); however, both proposed algorithms have an acceptable execution time.
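The dependence of WOA on its control parameter a, noted above, can be seen in the shrinking encircling mechanism of Mirjalili and Lewis (2016): a decreases linearly from 2 to 0, so the coefficient A = 2·a·r − a shrinks toward 0 and the search shifts from exploration to exploitation. The following is only a minimal one-dimensional, single-agent sketch of that mechanism; the spiral branch, the random-search branch, and the full population loop are omitted, and all names are ours.

```python
import random

def woa_encircling_1d(fitness, x0, iters=200, seed=1):
    """Single-agent sketch of WOA's shrinking encircling move
    toward the best-so-far position x_star (minimization)."""
    random.seed(seed)
    x = x_star = x0
    for t in range(iters):
        a = 2.0 - t * (2.0 / iters)        # a decreases linearly 2 -> 0
        A = 2.0 * a * random.random() - a  # |A| shrinks with a
        C = 2.0 * random.random()          # random weight on the leader
        x = x_star - A * abs(C * x_star - x)
        if fitness(x) < fitness(x_star):   # keep the best position found
            x_star = x
    return x_star

# Minimizing a 1-D quadratic with optimum at 3.0 (illustrative only):
best = woa_encircling_1d(lambda v: (v - 3.0) ** 2, x0=10.0)
```

A poor schedule for a (e.g., one that collapses |A| too quickly) keeps every move near the incumbent leader, which is one way the premature convergence discussed above can arise.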
Fig. 14. The thresholded images of “Test3” obtained by all algorithms.
Fig. 15. The thresholded images of “Test9” obtained by all algorithms.

5. Conclusion and future work

In this paper, the problem of determining the optimal thresholds in the case of multilevel thresholding for image segmentation has been considered as an optimization problem, where Otsu's method has been used as the fitness function. Two swarm algorithms, namely the WOA and MFO, have been proposed to solve this problem. The aim of each algorithm is to determine the best threshold values that maximize Otsu's function. The experimental results of the proposed algorithms have been compared to the SCA, HS, SSO, FASSO, and FA algorithms. The performance of each algorithm has been assessed based on four measures: the best fitness values, PSNR, SSIM, and execution time. We used eight benchmark images in all experiments. The obtained results showed that all the algorithms achieved the same fitness function value at a threshold level equal to two; this is because the segmentation problem becomes easy at this level of thresholding. At three-level thresholding, the WOA and MFO algorithms are better than the other algorithms on almost all images in terms of PSNR and SSIM; however, the WOA is better than the MFO. For a higher number of thresholds,
Table 7
The ANOVA test results.
Dependent
variable
Proposed
algorithm
Algorithms Mean
difference
p -value Dependent
variable
Proposed
algorithms
Algorithms Mean
difference
p -value
Fitness values WOA SCA 49.08546 0.865 PSNR WOA SCA 1.69776( ∗) 0.004
HS 38.19250 0.895 HS 1.27529( ∗) 0.030
MFO −88.94053 0.758 MFO −0.12966 0.824
SSO −1.91314 0.995 SSO 0.66146 0.258
FASSO −25.57542 0.929 FASSO 1.13391 0.053
FA 4.31972 0.988 FA 0.72067 0.218
MFO SCA 138.02599 0.632 MFO SCA 1.82742( ∗) 0.002
HS 127.13303 0.659 HS 1.40495( ∗) 0.017
WOA 88.94053 0.758 WOA 0.12966 0.824
SSO 87.02739 0.763 SSO 0.79112 0.176
FASSO 63.36512 0.826 FASSO 1.26357( ∗) 0.031
FA 93.26025 0.746 FA 0.85033 0.146
SSIM WOA SCA 0.07191( ∗) 0.005 Execution time WOA SCA −1.42028( ∗) 0.006
HS 0.06311( ∗) 0.013 HS −4.97567( ∗) 0.0 0 0
MFO 0.00269 0.915 MFO −0.40223 0.435
SSO 0.04810 0.059 SSO 0.15035 0.770
FASSO 0.05376( ∗) 0.035 FASSO −0.39842 0.439
FA 0.04313 0.090 FA 0.81713 0.113
MFO SCA 0.06921( ∗) 0.007 MFO SCA −1.01805( ∗) 0.049
HS 0.06042( ∗) 0.018 HS −4.57344( ∗) 0.0 0 0
WOA −0.00269 0.915 WOA 0.40223 0.435
SSO 0.04541 0.074 SSO 0.55258 0.283
FASSO 0.05106( ∗) 0.045 FASSO 0.00382 0.994
FA 0.04044 0.112 FA 1.21937( ∗) 0.019
( ∗) Indicates a significant difference at level p -value < 0.05
Table 8
The ANOVA test results of SSO, FA, and the proposed algorithms for fitness values.
Mean difference
WOA MFO
K MFO SSO FA SSO FA
Test E4 2 22.437( ∗) 19.274( ∗) 3.526( ∗) −3.163( ∗) −18.911( ∗)
3 −1.180( ∗) 0.966( ∗) 10.733( ∗) 2.146( ∗) 11.913( ∗)
4 7.382( ∗) 16.673( ∗) 24.180( ∗) 9.291( ∗) 16.798( ∗)
5 −745.386( ∗) 4.552( ∗) 14.525( ∗) 749.939( ∗) 759.912( ∗)
Test E1 2 1.097( ∗) 13.600( ∗) 13.035( ∗) 12.503( ∗) 11.938( ∗)
3 20.7140( ∗) −1.993( ∗) 7.046( ∗) −22.707( ∗) −13.667( ∗)
4 −526.190( ∗) 26.906( ∗) 9.973( ∗) 553.096( ∗) 536.163( ∗)
5 −1522.65( ∗) 6.398( ∗) 27.011( ∗) 1529.053( ∗) 1549.666( ∗)
Test 1 2 6.047( ∗) 5.986( ∗) 8.061( ∗) −0.061( ∗) 2.0140( ∗)
3 −30.131( ∗) −29.476( ∗) −22.915( ∗) 0.655( ∗) 7.216( ∗)
4 11.755( ∗) −2.463( ∗) 17.094( ∗) −14.219( ∗) 5.338( ∗)
5 37.138( ∗) 6.49360( ∗) 3.21610( ∗) 30.644( ∗) −33.921( ∗)
Test 6 2 −2.14280( ∗) 3.91980( ∗) −3.34470( ∗) 6.06260( ∗) −1.20190( ∗)
3 −81.51570( ∗) −86.85720( ∗) −48.86680( ∗) −5.34150( ∗) 32.64890( ∗)
4 −14.76180( ∗) 12.47110( ∗) 8.88680( ∗) 27.23290( ∗) 23.64860( ∗)
5 12.53250( ∗) 18.87290( ∗) 35.65470( ∗) 6.34040( ∗) 23.12220( ∗)
Test 5 2 −2.31830( ∗) −6.36810( ∗) −0.14260( ∗) −4.04980( ∗) 2.17570( ∗)
3 25.70560( ∗) 6.42350( ∗) −0.62790( ∗) −19.28210( ∗) −26.33350( ∗)
4 −7.10320( ∗) −3.69960( ∗) 7.17320( ∗) 3.40360( ∗) 14.27640( ∗)
5 −5.34410( ∗) 6.42300( ∗) 16.68800( ∗) 11.76710( ∗) 22.03210( ∗)
Test 11 2 −1.101( ∗) −6.885( ∗) −7.705( ∗) −5.783( ∗) −6.603( ∗)
3 −55.244( ∗) −54.225( ∗) −69.747( ∗) 1.018( ∗) −14.503( ∗)
4 −22.369( ∗) −15.135( ∗) 31.165( ∗) 7.233( ∗) 53.535( ∗)
5 −42.058( ∗) −52.188( ∗) −25.582( ∗) −10.129( ∗) 16.476( ∗)
Test 3 2 7.11410( ∗) 6.16550( ∗) 0.82160( ∗) −0.94860( ∗) −6.29250( ∗)
3 42.71450( ∗) 5.23830( ∗) 6.73000( ∗) −37.47620( ∗) −35.98450( ∗)
4 21.89320( ∗) 25.65990( ∗) −4.43510( ∗) 3.76670( ∗) −26.32830( ∗)
5 −23.44960( ∗) 3.85300( ∗) 1.63200( ∗) 27.30260( ∗) 25.08160( ∗)
Test 9 2 −3.98480( ∗) 3.00460( ∗) −3.28280( ∗) 6.98940( ∗) 0.70200( ∗)
3 161.64980( ∗) 17.37350( ∗) 23.92480( ∗) −144.27630( ∗) −137.72500( ∗)
4 −97.34370( ∗) −1.91520( ∗) 36.27020( ∗) 95.42850( ∗) 133.61390( ∗)
5 −39.99780( ∗) −10.26970( ∗) 17.52920( ∗) 29.72810( ∗) 57.52700( ∗)
( ∗) Indicates a significant difference at level p -value < 0.05
the MFO algorithm is better than the WOA algorithm on all images in terms of the fitness value, PSNR, and SSIM. However, in terms of time complexity, analysis of the recorded execution time required by each algorithm has shown that the MFO algorithm exhibits a higher execution time than FA, WOA, and SSO.

In future work, we will study the performance of both algorithms for higher numbers of thresholds; we will also apply them to other optimization problems and try to utilize them in the dynamic multilevel thresholding problem, as well as in multi-objective settings, in order to attain better segmentation results.
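The fitness function maximized by both algorithms is Otsu's between-class variance, extended to K thresholds over the image histogram. The sketch below (pure Python; function names and the toy histogram are ours) also includes the exhaustive search whose cost for larger K is exactly what the WOA and MFO are meant to avoid.

```python
from itertools import combinations

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance for the classes induced by `thresholds`.

    hist[i] is the pixel count of gray level i; a threshold t places
    levels < t in the lower class, so K thresholds define K + 1 classes.
    """
    total = sum(hist)
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + list(thresholds) + [len(hist)]
    var_between = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        weight = sum(hist[lo:hi]) / total            # class probability
        if weight == 0.0:
            continue                                 # empty class
        mu = sum(i * hist[i] for i in range(lo, hi)) / (weight * total)
        var_between += weight * (mu - mu_total) ** 2
    return var_between

def exhaustive_multilevel_otsu(hist, k=1):
    """Brute-force search over all threshold combinations; tractable
    only for small k and short histograms."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda ts: otsu_between_class_variance(hist, ts))

# Toy bimodal histogram over gray levels 0..3; the symmetric split wins:
best_t = exhaustive_multilevel_otsu([5, 1, 1, 5], k=1)
```

A swarm algorithm replaces the `combinations` enumeration with a population of candidate threshold vectors scored by the same objective.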
Table 9
The ANOVA test results of SSO, FA, and the proposed algorithms for SSIM.
Mean difference
WOA MFO
K MFO SSO FA SSO FA
Test E4 2 0.015( ∗) 0.035( ∗) 0.071( ∗) 0.019( ∗) 0.056( ∗)
3 −0.006( ∗) 0.050( ∗) 0.097( ∗) 0.057( ∗) 0.104( ∗)
4 0.007( ∗) 0.038( ∗) 0.015( ∗) 0.031( ∗) 0.008( ∗)
5 −0.019( ∗) 0.001( ∗) 0.017( ∗) 0.021( ∗) 0.037( ∗)
Test E1 2 −0.014( ∗) 0.065( ∗) 0.033( ∗) 0.079( ∗) 0.048( ∗)
3 −0.024( ∗) 0.041( ∗) 0.062( ∗) 0.065( ∗) 0.087( ∗)
4 0.042( ∗) 0.081( ∗) 0.014( ∗) 0.038( ∗) −0.028( ∗)
5 0.004( ∗) 0.125( ∗) −0.012( ∗) 0.121( ∗) −0.017( ∗)
Test 1 2 −0.009( ∗) 0.027( ∗) 0.013( ∗) 0.037( ∗) 0.023( ∗)
3 0.022( ∗) −0.007( ∗) 0.076( ∗) −0.029( ∗) 0.054( ∗)
4 0.029( ∗) 0.113( ∗) 0.081( ∗) 0.083( ∗) 0.051( ∗)
5 0.084( ∗) 0.103( ∗) 0.086( ∗) 0.018( ∗) 0.001( ∗)
Test 6 2 0.007( ∗) 0.030( ∗) 0.021( ∗) 0.022( ∗) 0.013( ∗)
3 −0.043( ∗) −0.017( ∗) 0.008( ∗) 0.025( ∗) 0.052( ∗)
4 −0.002( ∗) 0.042( ∗) 0.065( ∗) 0.044( ∗) 0.068( ∗)
5 0.017( ∗) 0.044( ∗) −0.011( ∗) 0.027( ∗) −0.028( ∗)
Test 5 2 −0.019( ∗) 0.031( ∗) 0.009( ∗) 0.050( ∗) 0.028( ∗)
3 0.018( ∗) 0.020( ∗) 0.001( ∗) 0.002( ∗) −0.016( ∗)
4 0.018( ∗) 0.059( ∗) 0.004( ∗) 0.040( ∗) −0.014( ∗)
5 −0.044( ∗) −0.012( ∗) 0.084( ∗) 0.031( ∗) 0.129( ∗)
Test 11 2 0.072( ∗) 0.048( ∗) 0.061( ∗) −0.024( ∗) −0.010( ∗)
3 0.027( ∗) 0.128( ∗) 0.162( ∗) 0.101( ∗) 0.135( ∗)
4 −0.026( ∗) 0.092( ∗) 0.128( ∗) 0.119( ∗) 0.155( ∗)
5 −0.006( ∗) 0.163( ∗) −0.018( ∗) 0.169( ∗) −0.012( ∗)
Test 3 2 0.006( ∗) 0.034( ∗) 0.008( ∗) 0.028( ∗) 0.002( ∗)
3 −0.020( ∗) −0.006( ∗) 0.036( ∗) 0.014( ∗) 0.056( ∗)
4 0.008( ∗) 0.051( ∗) 0.020( ∗) 0.043( ∗) 0.012( ∗)
5 −0.068( ∗) −0.039( ∗) 0.028( ∗) 0.028( ∗) 0.096( ∗)
Test 9 2 −0.119( ∗) 0.028( ∗) 0.045( ∗) 0.148( ∗) 0.164( ∗)
3 0.089( ∗) 0.122( ∗) 0.064( ∗) 0.033( ∗) −0.025( ∗)
4 0.009( ∗) 0.003( ∗) 0.004( ∗) −0.005( ∗) −0.004( ∗)
5 0.031( ∗) 0.038( ∗) 0.096( ∗) 0.006( ∗) 0.064( ∗)
( ∗) Indicates a significant difference at level p -value < 0.05
Table 10
The ANOVA test results of SSO, FA, and the proposed algorithms for PSNR.
Mean difference
WOA MFO
K MFO SSO FA SSO FA
Test E4 2 −0.186( ∗) 0.679( ∗) 1.266( ∗) 0.866( ∗) 1.452( ∗)
3 −0.939( ∗) 1.161( ∗) 3.281( ∗) 0.221( ∗) 2.341( ∗)
4 −0.879( ∗) −0.189( ∗) 0.947( ∗) 0.690( ∗) 1.826( ∗)
5 −0.572( ∗) 2.232( ∗) 3.342( ∗) 2.805( ∗) 3.915( ∗)
Test E1 2 −0.025( ∗) 0.35320( ∗) −0.033( ∗) 0.37860( ∗) −0.008( ∗)
3 −0.405( ∗) −0.075( ∗) 0.164( ∗) 0.329( ∗) 0.570( ∗)
4 0.962( ∗) 0.495( ∗) 0.292( ∗) −0.467( ∗) −0.670( ∗)
5 0.417( ∗) 1.230( ∗) −0.015( ∗) 0.812( ∗) −0.433( ∗)
Test 1 2 0.911( ∗) 1.720( ∗) 1.298( ∗) 0.808( ∗) 0.387( ∗)
3 −0.107( ∗) 0.350( ∗) 0.422( ∗) 0.458( ∗) 0.530( ∗)
4 −0.766( ∗) 1.989( ∗) 0.031( ∗) 2.756( ∗) 0.798( ∗)
5 0.529( ∗) 2.191( ∗) 1.040( ∗) 1.662( ∗) 0.511( ∗)
Test 6 2 −0.356( ∗) −0.091( ∗) 0.293( ∗) 0.264( ∗) 0.649( ∗)
3 −0.067( ∗) 0.652( ∗) −0.143( ∗) 0.719( ∗) −0.076( ∗)
4 −0.832( ∗) −0.223( ∗) 0.443( ∗) 0.609( ∗) 1.276( ∗)
5 0.203( ∗) 1.428( ∗) 0.544( ∗) 1.225( ∗) 0.341( ∗)
Test 5 2 −3.322( ∗) −1.556( ∗) −1.994( ∗) 1.765( ∗) 1.327( ∗)
3 1.411( ∗) −0.318( ∗) 0.122( ∗) −1.729( ∗) −1.289( ∗)
4 −0.12( ∗) 0.759( ∗) 0.528( ∗) 0.880( ∗) 0.649( ∗)
5 0.293( ∗) 0.294( ∗) 2.027( ∗) 0.000( ∗) 1.734( ∗)
Test 11 2 0.315( ∗) 0.304( ∗) 0.423( ∗) −0.011( ∗) 0.108( ∗)
3 −0.001( ∗) 1.574( ∗) 1.985( ∗) 1.574( ∗) 1.985( ∗)
4 0.394( ∗) 1.156( ∗) 1.760( ∗) 0.762( ∗) 1.366( ∗)
5 −2.334( ∗) 1.112( ∗) −2.190( ∗) 3.447( ∗) 0.144( ∗)
Test 3 2 0.167( ∗) 0.847( ∗) 0.193( ∗) 0.680( ∗) 0.026( ∗)
3 −0.187( ∗) −0.295( ∗) 1.502( ∗) 0.187( ∗) −0.108( ∗)
4 0.338( ∗) 1.484( ∗) 0.780( ∗) 1.146( ∗) 0.442( ∗)
5 −2.376( ∗) −1.368( ∗) 1.218( ∗) 1.008( ∗) 3.594( ∗)
Test 9 2 −0.043( ∗) 0.368( ∗) 0.676( ∗) 0.412( ∗) 0.719( ∗)
3 0.475( ∗) 1.716( ∗) 0.755( ∗) 1.241( ∗) 0.280( ∗)
4 −0.201( ∗) 0.109( ∗) −0.250( ∗) 0.310( ∗) −0.049( ∗)
5 1.279( ∗) 1.071( ∗) 2.344( ∗) −0.207( ∗) 1.064( ∗)
( ∗) Indicates a significant difference at level p -value < 0.05
T
K
K
K
M
M
M
M
M
N
O
O
P
K
S
S
V
W
W
Y
Y
Y
Z
Acknowledgment
This work was in part supported by National High-tech R&D
Program of China (863 Program)(Grant No. 2015AA015403) and Na-
ture Science Foundation of Hubei Province (Grant No. 2015CFA059).
References
Abd El-Aziz, M. , Ewees, A. A. , & Hassanien, A. E. (2016). Hybrid swarms optimiza-tion based image segmentation. In Hybrid soft computing for image segmentation
(pp. 1–21). Springer . Agarwal, P. , Singh, R. , Kumar, S. , & Bhattacharya, M. (2016). Social spider algo-
rithm employed multi-level thresholding segmentation approach. In Proceedingsof first international conference on information and communication technology for
intelligent systems: Vol. 2 (pp. 249–259). Springer . Agrawal, S. , Panda, R. , Bhuyan, S. , & Panigrahi, B. K. (2013). Tsallis entropy based
optimal multilevel thresholding using cuckoo search algorithm. Swarm and Evo-
lutionary Computation, 11 , 16–30 . Akay, B. (2013). A study on particle swarm optimization and artificial bee colony al-
gorithms for multilevel thresholding. Applied Soft Computing, 13 (6), 3066–3091 . Bakhshali, M. A. , & Shamsi, M. (2014). Segmentation of color lip images by opti-
mal thresholding using bacterial foraging optimization (BFO). Journal of Compu-tational Science, 5 (2), 251–257 .
Bayraktar, Z. , Komurcu, M. , Bossard, J. A. , & Werner, D. H. (2013). The wind driven
optimization technique and its application in electromagnetics. IEEE Transactionson Antennas and Propagation, 61 (5), 2745–2757 .
Bhandari, A. K. , Kumar, A. , & Singh, G. K. (2015). Modified artificial bee colony basedcomputationally efficient multilevel thresholding for satellite image segmenta-
tion using kapur’s, otsu and tsallis functions. Expert Systems with Applications,42 (3), 1573–1601 .
Bhandari, A. K. , Singh, V. K. , Kumar, A. , & Singh, G. K. (2014). Cuckoo search algo-
rithm and wind driven optimization based study of satellite image segmenta-tion for multilevel thresholding using kapurs entropy. Expert Systems with Ap-
plications, 41 (7), 3538–3560 . Chen, K. , Zhou, Y. , Zhang, Z. , Dai, M. , Chao, Y. , & Shi, J. (2016). Multilevel image
segmentation based on an improved firefly algorithm. Mathematical Problems inEngineering, 2016 .
Cherukuri, S. K. , & Rayapudi, S. R. (2016). A novel global MPP tracking of photo-
voltaic system based on whale optimization algorithm. International Journal ofRenewable Energy Development, 5 (3) .
Cuevas, E. , Cienfuegos, M. , ZaldíVar, D. , & PéRez-Cisneros, M. (2013). A swarm opti-mization algorithm inspired in the behavior of the social-spider. Expert Systems
with Applications, 40 (16), 6374–6384 . Dirami, A. , Hammouche, K. , Diaf, M. , & Siarry, P. (2013). Fast multilevel thresholding
for image segmentation through a multiphase level set method. Signal Process-
ing, 93 (1), 139–153 . Elsayed, S. M. , Sarker, R. A. , & Essam, D. L. (2014). A new genetic algorithm for solv-
ing optimization problems. Engineering Applications of Artificial Intelligence, 27 ,57–69 .
Erdmann, H. , Wachs-Lopes, G. , Gallão, C. , Ribeiro, M. , & Rodrigues, P. (2015). A studyof a firefly meta-heuristics for multithreshold image segmentation. In Devel-
opments in medical image processing and computational vision (pp. 279–295).
Springer . Fayad, H. , Hatt, M. , & Visvikis, D. (2015). Pet functional volume delineation using an
ant colony segmentation approach. Journal of Nuclear Medicine, 56 (supplement3), 1745 .
Ghamisi, P. , Couceiro, M. S. , Benediktsson, J. A. , & Ferreira, N. M. (2012). An efficientmethod for segmentation of images based on fractional calculus and natural
selection. Expert Systems with Applications, 39 (16), 12407–12417 . Goldbogen, J. A. , Friedlaender, A. S. , Calambokidis, J. , McKenna, M. F. , Simon, M. ,
& Nowacek, D. P. (2013). Integrative approaches to the study of baleen whale
diving behavior, feeding performance, and foraging ecology. BioScience, 63 (2),90–100 .
Guo, C. , & Li, H. (2007). Multilevel thresholding method for image segmentationbased on an adaptive particle swarm optimization algorithm. In Australasian
joint conference on artificial intelligence (pp. 654–658). Springer . Hammouche, K. , Diaf, M. , & Siarry, P. (2008). A multilevel automatic thresholding
method based on a genetic algorithm for a fast image segmentation. Computer
Vision and Image Understanding, 109 (2), 163–175 . Horng, M.-H. (2010). A multilevel image thresholding using the honey bee mating
optimization. Applied Mathematics and Computation, 215 (9), 3302–3310 . Horng, M.-H. (2011). Multilevel thresholding selection based on the artificial bee
colony algorithm for image segmentation. Expert Systems with Applications,38 (11), 13785–13791 .
ouma, H. J. (2016). Study of the economic dispatch problem on IEEE 30-bus systemusing whale optimization algorithm. International Journal of Engineering Technol-
ogy and Sciences (IJETS), 5 (1), 11–18 . apur, J. N. , Sahoo, P. K. , & Wong, A. K. (1985). A new method for gray-level picture
thresholding using the entropy of the histogram. Computer vision, graphics, andimage processing, 29 (3), 273–285 .
aveh, A. , & Ghazaan, M. I. (2016). Enhanced whale optimization algorithm for siz-ing optimization of skeletal structures. Mechanics Based Design of Structures and
Machines , 1–18 .
aveh, A. , & Talatahari, S. (2010). An improved ant colony optimization for con-strained engineering design problems. Engineering Computations, 27 (1), 155–182 .
Li, C. , Li, S. , & Liu, Y. (2016). A least squares support vector machine model opti-mized by moth-flame optimization algorithm for annual power load forecasting.
Applied Intelligence, 45 (4), 1166–1178 . arciniak, A. , Kowal, M. , Filipczuk, P. , & Korbicz, J. (2014). Swarm intelligence algo-
rithms for multi-level image thresholding. In Intelligent systems in technical and
medical diagnostics (pp. 301–311). Springer . artin, D. , Fowlkes, C. , Tal, D. , & Malik, J. (2001). A database of human segmented
natural images and its application to evaluating segmentation algorithms andmeasuring ecological statistics. In Computer vision, 2001. ICCV 2001. Proceedings.
Eighth IEEE international conference on: Vol. 2 (pp. 416–423). IEEE . irjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired
heuristic paradigm. Knowledge-Based Systems, 89 , 228–249 .
irjalili, S. (2016). Sca: A sine cosine algorithm for solving optimization problems.Knowledge-Based Systems, 96 , 120–133 .
irjalili, S. , & Lewis, A. (2016). The whale optimization algorithm. Advances in Engi-neering Software, 95 , 51–67 .
Mlakar, U. , Poto ̌cnik, B. , & Brest, J. (2016). A hybrid differential evolution for optimalmultilevel image thresholding. Expert Systems with Applications, 65 , 221–232 .
aik, P. P. S. , & Gopal, T. V. (2016). Particle swarm optimization (PSO) based k-means
image segmentation algorithm. International Journal of Scientific Research, 5 (1) . liva, D., Cuevas, E., Pajares, G., Zaldivar, D., & Perez-Cisneros, M. (2013). Multilevel
thresholding segmentation based on harmony search optimization. Journal ofApplied Mathematics, 2013 24 pages. doi: 10.1155/2013/575414 .
tsu, N. (1979). A threshold selection method from gray level histograms. IEEETransactions on Systems, Man and Cybernetics, 9 (1), 62–66 .
Ouadfel, S. , & Taleb-Ahmed, A. (2016). Social spiders optimization and flower polli-
nation algorithm for multilevel image thresholding: A performance study. ExpertSystems with Applications, 55 , 566–584 .
Parmar, S. A., Pandya, M., Bhoye, M., Trivedi, I. N., Jangir, P., & Ladumor, D. (2016). Optimal active and reactive power dispatch problem solution using moth-flame optimizer algorithm. In Energy efficient technologies for sustainability (ICEETS), 2016 international conference on (pp. 491–496). IEEE.
Pruthi, J., & Gupta, G. (2016). Image segmentation using genetic algorithm and OTSU. In Proceedings of fifth international conference on soft computing for problem solving (pp. 473–480). Springer.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Neural networks 1995. Proceedings., IEEE international conference: Vol. 4(4) (pp. 1942–1948).
Sanyal, N., Chatterjee, A., & Munshi, S. (2011). An adaptive bacterial foraging algorithm for fuzzy entropy based image segmentation. Expert Systems with Applications, 38(12), 15489–15498.
Sarkar, S., Nayan, S., Kundu, A., Das, S., & Chaudhuri, S. S. (2013). A differential evolutionary multilevel segmentation of near infra-red images using Renyi's entropy. In Proceedings of the international conference on frontiers of intelligent computing: Theory and applications (FICTA) (pp. 699–706). Springer.
Vikas, Nanda, S. J., et al. (2016). Multi-objective moth flame optimization. In Advances in computing, communications and informatics (ICACCI), 2016 international conference on (pp. 2470–2476). IEEE.
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization.
IEEE Transactions on Evolutionary Computation, 1(1), 67–82.
Yang, X.-S. (2009). Firefly algorithms for multimodal optimization. In International symposium on stochastic algorithms (pp. 169–178). Springer.
Yang, X.-S. (2014). Cuckoo search and firefly algorithm: Overview and analysis. In Cuckoo search and firefly algorithm (pp. 1–26). Springer.
Ye, Z., Hu, Z., Wang, H., & Chen, H. (2011). Automatic threshold selection based on artificial bee colony algorithm. In Intelligent systems and applications (ISA), 2011 3rd international workshop on (pp. 1–4). IEEE.
Yin, P.-Y. (2007). Multilevel minimum cross entropy threshold selection based on particle swarm optimization. Applied Mathematics and Computation, 184, 503–513.
Zhang, Y., & Wu, L. (2011). Optimal multi-level thresholding based on maximum Tsallis entropy via an artificial bee colony approach. Entropy, 13(4), 841–859.