Numer Algor, DOI 10.1007/s11075-013-9746-3

ORIGINAL PAPER

A continuously differentiable filled function method for global optimization

Hongwei Lin · Yuelin Gao · Yuping Wang

Received: 2 March 2012 / Accepted: 18 July 2013
© Springer Science+Business Media New York 2013

Abstract In this paper, a new filled function method for finding a global minimizer of a global optimization problem is proposed. The proposed filled function is continuously differentiable and contains only one parameter. It has no parameter-sensitive terms. As a result, a general classical local optimization method can be used to find a better minimizer of the proposed filled function, with easy parameter adjustment. Numerical experiments show that the proposed filled function method is effective.

Keywords Global optimization · Filled function method · Global minimizer · Local minimizer

1 Introduction

Global optimization has wide applications in many fields, such as engineering, finance, management, decision science and so on. The task of global optimization is to find a solution with the smallest or largest objective function value. In this paper,

H. Lin
Department of Basic Science, Jinling Institute of Technology, Nanjing 211169, People’s Republic of China
e-mail: linhongwei [email protected]

Y. Gao (✉)
Institute of Information and System Science, Beifang University for Nationalities, Yinchuan 750021, People’s Republic of China
e-mail: [email protected]

Y. Wang
School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi 710071, People’s Republic of China
e-mail: [email protected]


we mainly discuss methods for finding a global minimizer of the objective function. For problems with only one minimizer, there are many local optimization methods available, for instance the steepest descent method, Newton’s method, the conjugate gradient method, the trust region method and so on. However, many problems have multiple local minimizers, and most of the existing methods are not applicable to these problems.

The main difficulty in global optimization is to escape from the current local minimizer and to find a better one. One of the most efficient methods to deal with this issue is the filled function method, which was proposed by Ge [1]. The generic framework of the filled function method can be described as follows:

1. An arbitrary point is taken as an initial point to minimize the objective function by using a local optimization method, and a minimizer of the objective function is obtained.

2. Based on the current minimizer of the objective function, a filled function is designed, and a point near the current minimizer is used as an initial point to further minimize the filled function. As a result, a minimizer of the filled function will be found. This minimizer falls into a better region (called a basin) of the original objective function.

3. The minimizer of the filled function obtained in step 2 is taken as an initial point to minimize the objective function, and a better minimizer of the objective function will be found.

4. By repeating steps 2 and 3, the number of candidate local minimizers is gradually reduced, and a global minimizer will eventually be found (a minimal sketch of this loop is given below).
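The loop in steps 1–4 can be written compactly. The following Python sketch is only an illustration of this generic framework, not the authors' code: scipy.optimize.minimize stands in for an arbitrary local optimization method, and build_filled_function and pick_start_near are placeholders that a concrete method, such as the one proposed later in this paper, has to supply.

```python
import numpy as np
from scipy.optimize import minimize

def generic_filled_function_loop(f, x0, build_filled_function, pick_start_near, max_cycles=20):
    """Steps 1-4 of the generic filled function framework (illustrative sketch only)."""
    x_star = minimize(f, x0).x                       # step 1: local search on f
    for _ in range(max_cycles):
        FF = build_filled_function(f, x_star)        # step 2: filled function at x_star
        y = minimize(FF, pick_start_near(x_star)).x  # step 2: minimize the filled function
        x_new = minimize(f, y).x                     # step 3: local search on f again
        if f(x_new) >= f(x_star):                    # no better basin was reached
            break
        x_star = x_new                               # step 4: repeat from the better minimizer
    return x_star
```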

The filled function method has been applied to solve various global optimization problems, e.g., bound-constrained and nonlinearly constrained optimization problems [2], integer programming [15], max-cut problems [13] and so on. Although the filled function method is an efficient global optimization method and different filled functions have been proposed, the existing filled functions have some drawbacks, such as more than one parameter needing to be controlled, sensitivity to the parameters, and ill-conditioning. For example, the filled functions proposed in [1, 2] contain exponential or logarithmic terms, which cause ill-conditioning if the parameter is not properly chosen; the filled functions proposed in [5, 6] are non-smooth, so the usual classical local optimization methods cannot be applied to them; and the filled functions proposed in [1, 3, 4] contain more than one parameter, which makes them difficult to control. To overcome these shortcomings, a new filled function with only one easy-to-control parameter is presented. It is a continuously differentiable function, and its minimizer can be easily obtained. Based on this new filled function, a new filled function method is proposed.

The remainder of this paper is organized as follows. The basic knowledge on filled function methods is given in Section 2. In Section 3, a new filled function is proposed and its properties are analyzed. In Section 4, a new filled function method is proposed and numerical experiments on several test problems are conducted. Finally, some concluding remarks are drawn in Section 5.


2 Basic knowledge

In this paper, we consider the following global optimization problem with box constraints:

$$(P)\qquad \min\; f(x), \quad \text{s.t.}\ x \in \Omega,$$

where f(x) is a twice continuously differentiable function on R^n and Ω = ∏_{i=1}^{n} [l_i, u_i] ⊂ R^n. In order to facilitate the discussion, some assumptions are required.

Assumption 1 f(x) has only a finite number of minimizers in Ω, and the set of these minimizers is denoted by

$$L_m = \{x_i^* \mid i = 1, 2, \cdots, I\},$$

where I is the number of minimizers of f(x).

Assumption 2 All of the local minimizers of f(x) fall into the interior of Ω, and each point y on the boundary of Ω satisfies f(y) > M, where M = max{f(x_i^*) | i = 1, 2, ..., I}.

Additionally, some useful concepts and notations used in this paper are given as follows:

x_1^*: the current local minimizer of f(x);
S_1: the set defined by S_1 = {x | f(x) ≥ f(x_1^*), x ∈ Ω \ {x_1^*}};
S_2: the set defined by S_2 = {x ∈ int Ω | f(x) < f(x_1^*)};
m: the constant defined by m = min_{i,j ∈ {1,2,...,I}, f(x_i^*) ≠ f(x_j^*)} |f(x_i^*) − f(x_j^*)|.

The definition of the filled function was first introduced by Ge in [1, 2]. Since then, many variations of the definition have been given (e.g., in [12, 13] and others). In this paper, we adopt the following definition of the filled function:

Definition 1 A continuously differentiable function FF(x) is said to be a filled function of f(x) at x_1^* if it satisfies the following properties:

(p1) x_1^* is a strict local maximizer of FF(x) on Ω;
(p2) FF(x) has no stationary point in the set S_1;
(p3) if x_1^* is not a global minimizer of f(x), then there exists a point x' that is a local minimizer of FF(x) on S_2.

In Section 3, we will construct a new filled function with only one parameter based on Definition 1.


3 A new filled function and its properties

In order to construct a new filled function, a function of one variable is introduced first:

$$h_c(t) = \begin{cases} c, & t \ge 0,\\ t^3 + c, & t < 0, \end{cases}$$

where c is a constant. It is obvious that h_c(t) is a continuously differentiable function over R.

Assume that x_1^* is the current local minimizer of f(x) and that it is the best local minimizer found so far. Define the following function for problem (P):

$$FF(x, x_1^*, P) = \frac{P}{1 + \|x - x_1^*\|^2}\, h_{\frac{1}{P}}\big(f(x) - f(x_1^*)\big), \qquad (1)$$

where P > 0 is a parameter. The function in (1) is continuously differentiable. The following theorems show that the function in (1) is a filled function in the sense of Definition 1.
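To make (1) concrete, here is a direct transcription into Python/NumPy (the paper itself gives no code, so this is only an illustrative sketch; f is any objective that accepts a NumPy vector):

```python
import numpy as np

def h(t, c):
    """The auxiliary function h_c(t): c for t >= 0, t**3 + c for t < 0."""
    return np.where(t >= 0.0, c, t**3 + c)

def make_filled_function(f, x_star, P):
    """Return FF(., x_star, P) of Eq. (1) for a given local minimizer x_star and P > 0."""
    x_star = np.asarray(x_star, dtype=float)
    f_star = f(x_star)
    def FF(x):
        x = np.asarray(x, dtype=float)
        return P / (1.0 + np.sum((x - x_star)**2)) * h(f(x) - f_star, 1.0 / P)
    return FF
```

At x = x_star this evaluates to P · (1/P) = 1, and wherever f(x) ≥ f(x_star) it reduces to 1/(1 + ‖x − x_star‖²), which is exactly the expression used in the proofs below.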

Theorem 1 Suppose x_1^* is a local minimizer of f(x) and FF(x, x_1^*, P) is defined by (1). Then x_1^* is a strict local maximizer of FF(x, x_1^*, P) for all P > 0.

Proof Since x_1^* is a local minimizer of f(x), there exists a neighborhood N(x_1^*, δ) ⊂ int Ω of x_1^* with some δ > 0 such that f(x) ≥ f(x_1^*) for all x ∈ N(x_1^*, δ). For all x ∈ N(x_1^*, δ), x ≠ x_1^*, one has

$$FF(x, x_1^*, P) = \frac{1}{1 + \|x - x_1^*\|^2} < 1 = FF(x_1^*, x_1^*, P). \qquad (2)$$

Thus, x_1^* is a strict local maximizer of FF(x, x_1^*, P).

Theorem 2 Suppose x_1^* is a local minimizer of f(x) and x is any point in the set S_1. Then x is not a stationary point of FF(x, x_1^*, P) for any P > 0.

Proof Since x ∈ S_1, one has f(x) ≥ f(x_1^*) and x ≠ x_1^*. Then

$$FF(x, x_1^*, P) = \frac{1}{1 + \|x - x_1^*\|^2}, \qquad \nabla FF(x, x_1^*, P) = \frac{-2\,(x - x_1^*)}{\big(1 + \|x - x_1^*\|^2\big)^2} \ne 0,$$

which implies that FF(x, x_1^*, P) has no stationary point in the set S_1 when P > 0.
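Because FF on S_1 does not depend on f (it equals 1/(1 + ‖x − x_1^*‖²) there), the closed-form gradient above is easy to sanity-check numerically; the points below are arbitrary choices made only for this illustration.

```python
import numpy as np

x_star = np.array([0.0, 0.0])
x = np.array([1.0, -0.5])                     # any point of S_1 other than x_star

FF_on_S1 = lambda z: 1.0 / (1.0 + np.sum((z - x_star)**2))

grad_closed = -2.0 * (x - x_star) / (1.0 + np.sum((x - x_star)**2))**2

eps = 1e-6                                    # central finite differences
grad_fd = np.array([(FF_on_S1(x + eps * e) - FF_on_S1(x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
print(grad_closed, grad_fd)                   # the two gradients agree and are nonzero
```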

Theorem 3 Suppose x_1^* is a local minimizer of f(x) but not a global minimizer of f(x). Then there exists a point x' ∈ S_2 such that x' is a local minimizer of FF(x, x_1^*, P) when P > 1/m³.

Proof Since x_1^* is a local but not a global minimizer of f(x), there exists another local minimizer x_2^* of f(x) such that f(x_2^*) < f(x_1^*). By the definition of m, f(x_2^*) − f(x_1^*) ≤ −m, so h_{1/P}(f(x_2^*) − f(x_1^*)) ≤ −m³ + 1/P. Hence, by the continuity of f(x), if P > 1/m³, we have

$$FF(x_2^*, x_1^*, P) < 0. \qquad (3)$$

By Assumption 2, for each y ∈ ∂Ω one has FF(y, x_1^*, P) > 0. Then, by the intermediate value theorem for continuous functions, there exists a point on the segment [x_2^*, y] between x_2^* and y at which the function value is 0. Assume that z is the closest such point to x_2^* on this segment, with FF(z, x_1^*, P) = 0; then we obtain a segment [x_2^*, z].

Let S(x_2^*) be the union of all the above line segments [x_2^*, z] as y ranges over ∂Ω; then it is a closed region. By the continuity of FF(x, x_1^*, P), there exists an x' ∈ S(x_2^*) that is a minimizer of FF(x, x_1^*, P) with FF(x', x_1^*, P) < 0. Since FF(x, x_1^*, P) is continuously differentiable, it follows that

$$\nabla FF(x', x_1^*, P) = 0.$$

From Theorems 1, 2 and 3, we know that when the parameter P is taken large enough and the current minimizer of f(x) is not a global minimizer, a minimizer of FF(x, x_1^*, P) that falls into a better region will be found. This means that if we minimize f(x) with initial point x', a better minimizer of f(x) will be found.

4 Filled function algorithm and numerical implementation

4.1 Filled function algorithm and some explanations

Based on the theorems and discussions in the above section, a new filled function algorithm (Algorithm 1) for finding a global minimizer of f(x) and some explanations are given as follows.

Algorithm 1 (A new filled function algorithm)

Step 0: Choose an upper bound Ubp of P (e.g., 10^6) and a constant ρ > 0 (e.g., ρ = 10); give the initial value of P and the directions d_i, i = 1, 2, ..., 2n, where d_i = (0, ..., 1, ..., 0)^T with the 1 in the i-th position for i = 1, 2, ..., n, and d_i = −d_{i−n} for i = n + 1, ..., 2n, where n is the dimension of the problem. Set k := 1. Choose any x_1 ∈ Ω.

Step 1: Minimize f(x) with the initial point x_k ∈ Ω so that a minimizer x_k^* of f(x) is obtained.

Step 2: Construct

$$FF(x, x_k^*, P) = \frac{P}{1 + \|x - x_k^*\|^2}\, h_{\frac{1}{P}}\big(f(x) - f(x_k^*)\big)$$

and set i = 1.

Step 3: If i ≤ 2n, then set x = x_k^* + δ d_i and go to Step 4; otherwise, go to Step 5.

Step 4: Use x as the initial point to minimize FF(x, x_k^*, P) and denote the sequence of points generated by the local optimization algorithm by x_j, j = 1, 2, .... If there exists j_0 ∈ {1, 2, ...} such that x_{j_0} ∉ Ω, set i = i + 1 and go to Step 3; otherwise, find a minimizer x' ∈ Ω of FF(x, x_k^*, P), set x_{k+1} = x', k = k + 1, and go to Step 1.

Step 5: If P < Ubp, then increase P by P := ρ × P and go to Step 2; otherwise, the algorithm stops and x_k^* is taken as a global minimizer of f(x).
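A compact Python/SciPy sketch of Algorithm 1 is given below. It is an illustration only: the paper's experiments use Matlab's fminunc, here scipy.optimize.minimize plays that role, the box Ω is handled by a simple membership test, and Step 4's iterate-by-iterate feasibility check is simplified to testing only the returned filled-function minimizer, together with the descent condition discussed in the explanations that follow.

```python
import numpy as np
from scipy.optimize import minimize

def in_box(x, lower, upper):
    return bool(np.all(x >= lower) and np.all(x <= upper))

def filled_function_algorithm(f, x1, lower, upper, make_ff,
                              delta=0.01, P0=1.0, rho=10.0, Ubp=1e6):
    """Illustrative sketch of Algorithm 1; make_ff builds FF(., x_k^*, P),
    e.g. the make_filled_function sketch from Section 3."""
    n = np.asarray(x1).size
    dirs = np.vstack([np.eye(n), -np.eye(n)])         # d_1, ..., d_2n of Step 0
    P = P0
    x_star = minimize(f, x1).x                        # Step 1
    while P < Ubp:                                    # Step 5 controls termination
        FF = make_ff(f, x_star, P)                    # Step 2
        improved = False
        for d in dirs:                                # Steps 3-4: probe the 2n directions
            xp = minimize(FF, x_star + delta * d).x
            if in_box(xp, lower, upper) and f(xp) < f(x_star):
                x_star = minimize(f, xp).x            # back to Step 1 from the better point
                improved = True
                break
        if not improved:
            P *= rho                                  # Step 5: enlarge P and try again
    return x_star                                     # returned as the global minimizer
```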


Some explanations about the above filled function algorithm are necessary.

1. In the minimization of f(x) and FF(x, x_k^*, P), a local optimization method is needed. In our algorithm, the Matlab function 'fminunc' is used.

2. In Step 3, δ needs to be selected carefully. A large δ may cause the better solution of the original problem to be missed, while a small δ may cause the local optimization to fail to make progress in the minimization of FF(x, x_k^*, P). In our algorithm, δ is selected to guarantee that ||∇FF(x, x_k^*, P)|| is greater than a threshold (e.g., take δ as 0.01). For specific problems, the selection of δ is related to the number of minimizers of the objective function and to the size of the feasible region: the fewer the minimizers of the objective function and the larger the feasible region, the larger δ should be. (A small helper in this spirit is sketched below.)

3. Step 4 means that if a local minimizer x' of FF(x, x_k^*, P) is found in Ω with f(x') < f(x_k^*), we can use x' as the initial point to minimize f(x) and obtain a better local minimizer of f(x).

4.2 Test problems

In this section, the proposed algorithm is tested on some benchmark problems taken from the literature [14].

Problem 1 (Two-dimensional function)

$$\min f(x) = [1 - 2x_2 + c\sin(4\pi x_2) - x_1]^2 + [x_2 - 0.5\sin(2\pi x_1)]^2, \quad \text{s.t. } 0 \le x_1 \le 10,\; -10 \le x_2 \le 0,$$

where c = 0.2, 0.5, 0.05. The global minimum function value is f(x^*) = 0 for all c.

Problem 2 (Three-hump back camel function)

$$\min f(x) = 2x_1^2 - 1.05x_1^4 + \tfrac{1}{6}x_1^6 - x_1 x_2 + x_2^2, \quad \text{s.t. } -3 \le x_1 \le 3,\; -3 \le x_2 \le 3.$$

The global minimizer is x^* = (0, 0)^T.
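As a usage illustration, Problem 2 can be written out and passed to the Algorithm 1 sketch from Section 4.1 (filled_function_algorithm and make_filled_function are the illustrative helpers sketched earlier, not part of the paper; the starting point matches Table 5):

```python
import numpy as np

def three_hump_camel(x):
    x1, x2 = x
    return 2*x1**2 - 1.05*x1**4 + x1**6/6.0 - x1*x2 + x2**2

lower, upper = np.array([-3.0, -3.0]), np.array([3.0, 3.0])
x_best = filled_function_algorithm(three_hump_camel, np.array([-2.0, -1.0]),
                                   lower, upper, make_filled_function)
print(x_best)   # expected to end up near the global minimizer (0, 0)^T
```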

Problem 3 (Six-hump back camel function)

$$\min f(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 - x_1 x_2 - 4x_2^2 + 4x_2^4, \quad \text{s.t. } -3 \le x_1 \le 3,\; -3 \le x_2 \le 3.$$

The global minimizer is x^* = (-0.0898, -0.7127)^T or x^* = (0.0898, 0.7127)^T.

Problem 4 (Treccani function)

$$\min f(x) = x_1^4 + 4x_1^3 + 4x_1^2 + x_2^2, \quad \text{s.t. } -3 \le x_1 \le 3,\; -3 \le x_2 \le 3.$$

The global minimizers are x^* = (0, 0)^T and x^* = (-2, 0)^T.


Problem 5 (Goldstein and Price function)

$$\min f(x) = g(x)h(x), \quad \text{s.t. } -3 \le x_1 \le 3,\; -3 \le x_2 \le 3,$$

where g(x) = 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2) and h(x) = 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2). The global minimizer is x^* = (0, -1)^T.

Problem 6 (Two-dimensional Shubert function)

$$\min f(x) = \left\{\sum_{i=1}^{5} i\cos[(i+1)x_1 + i]\right\}\left\{\sum_{i=1}^{5} i\cos[(i+1)x_2 + i]\right\}, \quad \text{s.t. } 0 \le x_1 \le 10,\; 0 \le x_2 \le 10.$$

This problem has 760 minimizers in total. The global minimum value is f(x^*) = -186.7309.
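Since the Shubert function is a product of two five-term sums, spelling it out in code removes any ambiguity in the summation; this illustrative sketch uses the standard form whose global minimum is the value -186.7309 quoted above.

```python
import numpy as np

def shubert_2d(x):
    i = np.arange(1, 6)                                   # i = 1, ..., 5
    s1 = np.sum(i * np.cos((i + 1) * x[0] + i))
    s2 = np.sum(i * np.cos((i + 1) * x[1] + i))
    return s1 * s2
```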

Problem 7 (Shekel’s function)

$$\min f(x) = -\sum_{i=1}^{5}\left[\sum_{j=1}^{4}(x_j - a_{i,j})^2 + c_i\right]^{-1}, \quad \text{s.t. } 0 \le x_j \le 10,\; j = 1, 2, 3, 4,$$

where the coefficients a_{i,j}, c_i, i = 1, 2, 3, 4, 5, j = 1, 2, 3, 4, are given in Table 1. All local minimizers are approximately equal to (a_{i,1}, a_{i,2}, a_{i,3}, a_{i,4})^T with function value approximately -1/c_i, i = 1, 2, 3, 4, 5.
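The Shekel function is fully determined by the coefficients of Table 1; a direct transcription (illustrative sketch):

```python
import numpy as np

A = np.array([[4.0, 4.0, 4.0, 4.0],       # a_{i,j}, rows i = 1, ..., 5 (Table 1)
              [1.0, 1.0, 1.0, 1.0],
              [8.0, 8.0, 8.0, 8.0],
              [6.0, 6.0, 6.0, 6.0],
              [3.0, 7.0, 3.0, 7.0]])
c = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # c_i (Table 1)

def shekel(x):
    x = np.asarray(x, dtype=float)
    return -np.sum(1.0 / (np.sum((x - A)**2, axis=1) + c))
```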

Problem 8 (n-dimensional function)

$$\min f(x) = \frac{\pi}{n}\left[10\sin^2(\pi x_1) + g(x) + (x_n - 1)^2\right], \quad \text{s.t. } -10 \le x_i \le 10,\; i = 1, 2, \cdots, n,$$

where

$$g(x) = \sum_{i=1}^{n-1}\big[(x_i - 1)^2\big(1 + 10\sin^2(\pi x_{i+1})\big)\big].$$

The global minimizer of this problem is x^* = (1, \cdots, 1)^T for all n.
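For arbitrary n, the objective of Problem 8 translates directly into vectorized code (illustrative sketch):

```python
import numpy as np

def problem8(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    g = np.sum((x[:-1] - 1.0)**2 * (1.0 + 10.0 * np.sin(np.pi * x[1:])**2))
    return (np.pi / n) * (10.0 * np.sin(np.pi * x[0])**2 + g + (x[-1] - 1.0)**2)
```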

Table 1 The coefficients for Problem 7

  i    a_{i,1}   a_{i,2}   a_{i,3}   a_{i,4}   c_i
  1    4.0       4.0       4.0       4.0       0.1
  2    1.0       1.0       1.0       1.0       0.2
  3    8.0       8.0       8.0       8.0       0.3
  4    6.0       6.0       6.0       6.0       0.4
  5    3.0       7.0       3.0       7.0       0.5


4.3 Experimental results

The proposed algorithm is executed on the above eight test problems and its performance is compared with that of the algorithm in [14]. The minimizers obtained by the two algorithms are listed in Tables 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16. In these tables, we adopt the following symbols (the initial value of P is taken as 1 and Ubp is taken as 10^6 for all problems):

x_0: the initial point;
k: the iteration number in finding the k-th local minimizer of the objective function;
x_k^*: the k-th local minimizer;
f_k^*: the function value at x_k^*;
CDFA: the algorithm proposed in this paper;
NFA: the algorithm proposed in [14].

It can be seen from Tables 2–16 that both the proposed algorithm and the algorithm in [14] find the global optimal solutions for all test problems. However, the proposed algorithm never needs more iterations than the algorithm in [14]. In particular, for test problem 1 in all three cases, and for problems 6 and 8, the proposed algorithm uses fewer iterations to find the optimal solutions than the algorithm in [14]. For example, for problem 1 in the case c = 0.05 and x_0 = (10, -10)^T shown in Table 4, the proposed algorithm only needs 3 iterations to find the global optimal solution, whereas the algorithm in [14] needs 8 iterations; the number of iterations used by the proposed algorithm is about 1/3 of that used by the algorithm in [14]. For problem 6, which has 760 local minimizers, the proposed algorithm only needs 4 iterations to find the global optimal solution, whereas the algorithm in [14] needs 6 iterations; the number of iterations used by the proposed algorithm is 2/3 of that used by the algorithm in [14]. Thus, the proposed algorithm is more efficient than the algorithm in [14].

Table 2 Results for problem 1 with c = 0.2, x_0 = (6, -2)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                   f_k^*            x_k^*                    f_k^*
  1     (5.7221, -1.8806)^T     2.5070           (5.7221, -1.8806)^T      2.5070
  2     (3.7387, -1.2649)^T     0.6165           (4.7387, -1.7417)^T      1.6212
  3     (1.5909, -0.2703)^T     2.8126e-009      (4.7096, -1.3985)^T      1.3566
  4                                              (3.7387, -1.2649)^T      0.61647
  5                                              (2.7380, -0.78836)^T     0.088673
  6                                              (1.8784, -0.34585)^T     0


Table 3 Results for problem 1 with c = 0.5, x_0 = (0, 0)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                   f_k^*            x_k^*                         f_k^*
  1     (0.0420, -0.0948)^T     0.5175           (0.042023, -0.094772)^T       0.51745
  2     (1.0000, 0)^T           5.7949e-016      (0.99991, -1.2524e-4)^T       2.2389e-7
  3                                              (1.0000, -2.2205e-14)^T       0

Table 4 Results for problem 1 with c = 0.05, x_0 = (10, -10)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                   f_k^*            x_k^*                    f_k^*
  1     (8.7299, -3.2965)^T     9.0733           (8.7299, -3.2965)^T      9.0733
  2     (7.7280, -0.4022)^T     6.5031           (7.7280, -2.8347)^T      6.5031
  3     (1.8513, -0.4021)^T     4.3885e-011      (6.7248, -2.3724)^T      4.3943
  4                                              (5.7198, -1.9162)^T      2.7434
  5                                              (4.7129, -1.4891)^T      1.5351
  6                                              (3.7305, -1.2306)^T      0.61844
  7                                              (2.7300, -0.79341)^T     0.10216
  8                                              (1.8513, -0.40209)^T     0

Table 5 Results for problem 2 with initial point (-2, -1)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                    f_k^*           x_k^*                     f_k^*
  1     (-1.7476, -0.8738)^T     0.2986          (-1.7476, -0.87378)^T     0.29864
  2     (-0.0000, -0.0000)^T     4.0157e-010     (0, 0)^T                  0

Table 6 Results for problem 2 with initial point (2, 1)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                    f_k^*           x_k^*                     f_k^*
  1     (1.7476, 0.8738)^T       0.2986          (1.7476, 0.87378)^T       0.29864
  2     (-0.0000, -0.0000)^T     3.9567e-010     (0, 0)^T                  0


Table 7 Results for problem 3 with initial point (-2, 1)^T

        CDFA, δ = 0.001                          NFA
  k     x_k^*                    f_k^*           x_k^*                       f_k^*
  1     (-1.6071, 0.5687)^T      2.1043          (-1.6071, 0.56865)^T        2.1043
  2     (0.0898, 0.7127)^T       -1.0316         (0.089842, 0.71266)^T       -1.0316

Table 8 Results for problem 3 with initial point (2, -1)^T

        CDFA, δ = 0.001                          NFA
  k     x_k^*                    f_k^*           x_k^*                       f_k^*
  1     (1.6071, -0.5687)^T      2.1043          (1.6071, -0.56865)^T        2.1043
  2     (-0.0898, -0.7127)^T     -1.0316         (-0.089842, -0.71266)^T     -1.0316

Table 9 Results for problem 3 with initial point (-2, -1)^T

        CDFA, δ = 0.001                          NFA
  k     x_k^*                    f_k^*           x_k^*                       f_k^*
  1     (1.7036, -0.79608)^T     -0.21546        (1.7036, -0.79608)^T        -0.21546
  2     (-0.0898, -0.7127)^T     -1.0316         (-0.089842, -0.71266)^T     -1.0316

Table 10 Results for problem 4 with initial point (-1, 0)^T

        CDFA, δ = 0.001                          NFA
  k     x_k^*                    f_k^*           x_k^*                       f_k^*
  1     (-1.0000, 0)^T           1.0000          (-1.0000, 0)^T              1.0000
  2     (-0.0000, -0.0000)^T     2.4048e-017     (0, 0)^T                    0

Table 11 Results for problem 5 with initial point (-1, -1)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                    f_k^*           x_k^*                       f_k^*
  1     (-0.6000, -0.4000)^T     30.0000         (-0.60000, -0.40000)^T      30.000
  2     (0.0000, -1.0000)^T      3.0000          (0, -1.0000)^T              3.0000


Table 12 Results for problem 6 with initial point (1, 1)^T

        CDFA, δ = 0.01                           NFA
  k     x_k^*                    f_k^*           x_k^*                       f_k^*
  1     (2.0467, 2.0467)^T       0               (1.0865, 1.0865)^T          2.8841e-17
  2     (3.2800, 4.8581)^T       -46.511         (1.3200, 1.8703e-12)^T      -13.052
  3     (4.2760, 4.8581)^T       -79.411         (1.3200, 4.8581)^T          -37.681
  4     (5.4892, 4.8581)^T       -186.739        (3.2800, 4.8581)^T          -46.511
  5                                              (4.2760, 4.8581)^T          -79.411
  6                                              (5.4892, 4.8581)^T          -186.73

Table 13 Results for problem 7 with initial point (1, 1, 1, 1)^T

        CDFA, δ = 0.01                                        NFA
  k     x_k^*                                  f_k^*          x_k^*                                  f_k^*
  1     (1.0001, 1.0002, 1.0001, 1.0002)^T     -5.0552        (1.0001, 1.0002, 1.0001, 1.0002)^T     -5.0552
  2     (4.0000, 4.0000, 4.0000, 4.0000)^T     -10.1529       (4.0000, 4.0001, 4.0000, 4.0001)^T     -10.153

Table 14 Results for problem 7 with initial point (6, 6, 6, 6)^T

        CDFA, δ = 0.01                                        NFA
  k     x_k^*                                  f_k^*          x_k^*                                  f_k^*
  1     (5.9987, 6.0002, 5.9987, 6.0002)^T     -2.6822        (5.9987, 6.0003, 5.9987, 6.0003)^T     -2.6829
  2     (4.0000, 4.0001, 4.0000, 4.0001)^T     -10.1529       (7.9996, 7.9996, 7.9996, 7.9996)^T     -5.1008
  3                                                           (4.0000, 4.0001, 4.0000, 4.0001)^T     -10.153

Table 15 Results for problem 8 with n = 7 and initial point (2, 2, 2, 2, 2, 2, 2)^T

  CDFA, δ = 0.001:
    k = 1: x_1^* = (1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000)^T, f_1^* = 2.3538e-013
  NFA:
    k = 1: x_1^* = (1.9899, 1.9897, 1.9896, 1.9896, 1.9896, 1.9896, 1.9898)^T, f_1^* = -3.1095
    k = 2: x_2^* = (1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000)^T, f_2^* = 0


Table 16 Results for problem 8 with n = 10 and initial point (6, 6, 6, 6, 6, 6, 6, 6, 6, 6)^T

  CDFA, δ = 0.001:
    k = 1: x_1^* = (0.0101, 0.0103, 0.0103, 0.0104, 0.0103, 0.0102, 1.0000, 6.0000, 6.0000, 6.0000)^T, f_1^* = 2.6653
    k = 2: x_2^* = (1.1615, 1.1651, 0.4418, 0.9258, 0.9638, -0.4809, 0.9926, 6.0000, 6.0000, 6.0000)^T, f_2^* = 2.4443
    k = 3: x_3^* = (1.9900, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 6.0000, 6.0000, 6.0000)^T, f_3^* = 0.4443
    k = 4: x_4^* = (1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000)^T, f_4^* = 0
  NFA:
    k = 1: x_1^* = (5.9490, 5.9979, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980)^T, f_1^* = 78.432
    k = 2: x_2^* = (-1.9696, 5.9943, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980)^T, f_2^* = 73.450
    k = 3: x_3^* = (-0.97956, 5.9871, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980)^T, f_3^* = 71.884
    k = 4: x_4^* = (0.012709, 5.9476, 5.9979, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980, 5.9980)^T, f_4^* = 70.890
    k = 5: x_5^* = (1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000)^T, f_5^* = 0

5 Concluding remarks

The filled function method is a popular approach for global optimization. Existing filled functions have some drawbacks, such as being non-differentiable at some points in the search domain, containing more than one parameter, ill-conditioning and so on. In this paper, a continuously differentiable filled function with one parameter is presented. It overcomes these shortcomings to a certain degree. The numerical experiments verify that the proposed algorithm is effective and efficient.

Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 11161001), the National Natural Science Foundation of China (No. 61203372), the Research Foundation of Jinling Institute of Technology (No. jit-b-201314) and the Ningxia Foundation for Key Disciplines of Computational Mathematics.

References

1. Ge, R.P.: A filled function method for finding a global minimizer of a function of several variables. Math. Program. 46, 191–204 (1990)

2. Ge, R.P.: The theory of filled function method for finding global minimizers of nonlinearly constrained minimization problems. J. Comput. Math. 5, 1–9 (1987)

3. Liu, X., Xu, W.S.: A new filled function applied to global optimization. Comput. Oper. Res. 31, 61–80 (2004)

4. Liu, X.: The barrier attribute of filled functions. Appl. Math. Comput. 149, 641–649 (2004)

5. Wang, X.L., Zhou, G.B.: A new filled function for unconstrained global optimization. Appl. Math. Comput. 174, 419–429 (2006)

6. Wang, Y.J., Zhang, J.S.: A new constructing auxiliary function method for global optimization. Math. Comput. Model. 47, 1396–1410 (2008)

7. Wang, W.X., Shang, Y.L., Zhang, L.S.: A filled function method with one parameter for box constrained global optimization. Appl. Math. Comput. 194, 54–66 (2007)

8. Ge, R.P., Qin, Y.F.: A class of filled functions for finding global minimizers of a function of several variables. J. Optim. Theory Appl. 54, 241–252 (1987)

9. Dixon, L.C.W., Gomulka, J., Herson, S.E.: Reflections on global optimization problem. In: Optimization in Action, pp. 398–435. Academic Press, New York (1976)

10. Liu, X.: A class of continuously differentiable filled functions for global optimization. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 38, 38–47 (2008)

11. Ma, S.Z., Yang, Y.J., Liu, H.Q.: A parameter free filled function for unconstrained global optimization. Appl. Math. Comput. 215, 3610–3619 (2010)

12. Liang, Y.M., Zhang, L.S., Li, M.M., Han, B.S.: A filled function method for global optimization. J. Comput. Appl. Math. 205, 16–31 (2007)

13. Ling, A.F., Xu, C.X., Xu, F.M.: A discrete filled function algorithm for approximate global solutions of max-cut problems. J. Comput. Appl. Math. 220, 643–660 (2008)

14. Zhang, L.S., Ng, C.K., Li, D., Tian, W.W.: A new filled function method for global optimization. J. Glob. Optim. 28, 17–43 (2004)

15. Woon, S.F., Rehbock, V.: A critical review of discrete filled function methods in solving nonlinear discrete optimization problems. Appl. Math. Comput. 217, 25–41 (2010)