On DC optimization algorithms for solving minmax flow problems


Math Meth Oper Res
DOI 10.1007/s00186-014-0470-0

ORIGINAL ARTICLE

On DC optimization algorithms for solving minmax flow problems

Le Dung Muu · Le Quang Thuy

Received: 7 September 2010 / Accepted: 12 May 2014
© Springer-Verlag Berlin Heidelberg 2014

Abstract We formulate the minmax flow problem as a DC optimization problem. We then apply a DC primal-dual algorithm to solve the resulting problem. The obtained computational results show that the proposed algorithm is efficient thanks to the particular structure of minmax flow problems.

Keywords Minmax flow problem · Smooth DC optimization · Regularization

1 Introduction

In the minmax flow problem to be considered, we are given a directed network N(V, E, s, t, p), where V is the set of m + 2 nodes, E is the set of n arcs, s is the single source node, t is the single sink node, and p ∈ Rn is the vector of arc capacities. Let ∂+ : E → V and ∂− : E → V be incidence functions. When h = (u, v), i.e., arc h leaves node u and enters node v, we write ∂+h = u and ∂−h = v. A vector x ∈ Rn is said to be a feasible flow if it satisfies the capacity constraints

0 ≤ xh ≤ ph   ∀h ∈ E (1)

This work is supported in part by the National Foundation for Science and Technology Development of Vietnam (NAFOSTED).

L. D. Muu (B)
Institute of Mathematics, VAST, Hanoi, Vietnam
e-mail: [email protected]

L. Q. Thuy
School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Hanoi, Vietnam


and conservation equations

∑_{∂+h=v} xh = ∑_{∂−h=v} xh   ∀ v ∈ V \{s, t}. (2)

Note that the conservation equations can be simply written as

Ax = 0,

where the m × n matrix A = (avh) is the well-known node-arc incidence matrix: for each (v, h) ∈ (V \{s, t}) × E, the entry avh is defined as

avh =
  1   if arc h leaves node v, i.e., ∂+h = v,
 −1   if arc h enters node v, i.e., ∂−h = v,
  0   otherwise.

Then, the constraints (1), (2) become

Ax = 0, 0 ≤ x ≤ p.

Let X denote the set of feasible flows, i.e.,

X = {x ∈ Rn : Ax = 0, 0 ≤ x ≤ p}.

For each x ∈ X , the value of the flow x is given by

dT x = ∑_{∂+h=s} xh − ∑_{∂−h=s} xh,

where d is an n-dimensional vector defined as

dh =
  1   if arc h leaves the source node s, i.e., ∂+h = s,
 −1   if arc h enters the source node s, i.e., ∂−h = s,
  0   otherwise.
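For readers who wish to experiment with this setup, the following Python sketch builds the incidence matrix A and the vector d from an arc list (a minimal illustration only; the helper name build_network_data and the arc-list format are assumptions, not part of the paper):

import numpy as np

# Build the node-arc incidence matrix A (rows indexed by the intermediate nodes
# V \ {s, t}) and the value vector d from an arc list. Arcs are (tail, head) pairs.
def build_network_data(arcs, intermediate_nodes, source='s'):
    m, n = len(intermediate_nodes), len(arcs)
    row = {v: i for i, v in enumerate(intermediate_nodes)}
    A = np.zeros((m, n))
    d = np.zeros(n)
    for h, (tail, head) in enumerate(arcs):
        if tail in row:
            A[row[tail], h] = 1.0       # arc h leaves node tail
        if head in row:
            A[row[head], h] = -1.0      # arc h enters node head
        if tail == source:
            d[h] = 1.0                  # arc h leaves the source
        elif head == source:
            d[h] = -1.0                 # arc h enters the source
    return A, d

# Tiny example: s -> 1 -> t and s -> 2 -> t
arcs = [('s', 1), (1, 't'), ('s', 2), (2, 't')]
A, d = build_network_data(arcs, intermediate_nodes=[1, 2])

Together with the capacity vector p, this determines the feasible set X = {x : Ax = 0, 0 ≤ x ≤ p} used below.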

A feasible flow x0 ∈ X is said to be a maximal flow if there is no feasible flow x ∈ X such that x ≥ x0 and x ≠ x0. We use XE to denote the set of maximal flows. The problem of finding a minimal value flow on the set of all maximal flows, shortly the minmax flow problem, to be solved in this paper is

min dT x   subject to x ∈ XE. (3)

Optimization problems over the efficient set of a linear vector optimization problem have been considered in many research papers and several algorithms have been developed (see e.g. Benson 1984, 1991; Luc and Muu 1997; Kim and Muu 2002; Yamamoto 2002 and the references therein).


However, all existing global algorithms work well only for problems where the number of criteria is rather small.

Minmax flow problems play an important role in network analysis and network design, and they have been considered by several authors (see e.g. Gotoh et al. 2003; Shigeno et al. 2003; Shi and Yamamoto 1997; Zenke 2006). These problems are also closely related to the uncontrollable flows introduced by Iri (see e.g. Iri 1994, 1996). Note that since the set XE is, in general, nonconvex, Problem (3) is a nonconvex optimization problem.

Several global optimization algorithms have been proposed for solving minmax flow problems (see e.g. Gotoh et al. 2003; Shigeno et al. 2003; Shi and Yamamoto 1997; Zenke 2006). Because of the inherent difficulty of dimensionality, these global algorithms can solve only minmax flow problems where the number of criteria is relatively small. In minmax flow problems, however, the number of criteria equals the number of decision variables, which is often large in practical applications. In this case, local optimization approaches should be used.

In this paper, we first formulate the minmax flow problem mentioned above as a smooth DC optimization problem by using a regularization technique widely used in variational inequalities. Then we apply the DCA in Tao (1986) (see also An and Tao 1997) to solve the resulting DC optimization problem. The main contribution of this work is to derive a smooth DC formulation for minmax flow problems and to provide an efficient algorithm for approximating a stationary point of the resulting DC optimization problem. An advantage of the proposed algorithm over general DC optimization algorithms is that the subproblems to be solved at each iteration are strongly convex quadratic programs rather than general convex ones. This allows the proposed algorithm to solve minmax flow problems of relatively large size (see the computational results in the last section).

The paper is organized as follows. In Sect. 2, we first show how to use the Yosida regularization technique to obtain a smooth DC optimization formulation of the minmax flow problem. Then we specify the DCA for the resulting DC program of the minmax flow problem. Computational results reported in the last section show that the proposed algorithm is efficient for minmax flow problems.

2 A smooth DC optimization formulation

Clearly, XE is the set of all Pareto efficient solutions to the multiple objective linear program

Vmax x   subject to x ∈ X. (4)

As we have seen in the introduction, for the minmax flow problem the number of decision variables is equal to the number of criteria.

From Philip (1972) it is known that one can find a simplex Λ ⊂ Rn++ such that a point x ∈ XE if and only if there exists λ ∈ Λ such that

x ∈ argmax{λT y : y ∈ X}.


Thus,

XE = {x ∈ X : λT x ≥ φ(λ) for some λ ∈ Λ},

where

φ(λ) = max{λT y : y ∈ X}.

In our case, the simplex Λ can be defined explicitly as in the following theorem.

Theorem 1 (Shigeno et al. 2003) Let Λ be one of the following simplices

Λ := {λ ∈ Zn : e ≤ λ ≤ ne}

or

Λ := {λ ∈ Rn : λ ≥ e, ∑_{k=1}^{n} λk = n²},

where e is the vector whose entries are all equal to one. Then x is a maximal flow if and only if there is λ ∈ Λ such that

x ∈ argmax{λT y : y ∈ X}.

By Theorem 1, we have that x ∈ XE if and only if there exists λ ∈ Λ such that

〈−λ, y − x〉 ≥ 0   ∀y ∈ X.

Let c > 0 be a regularization parameter and let K := Λ × X. Define the mapping F by taking, for each u = (λ, x) ∈ K,

F(u) := F(λ, x) := (0,−λ).

Let

γc(u) := max_{v∈K} {〈F(u), u − v〉 − (c/2)‖v − u‖²}. (5)

Since, for each fixed u ∈ K, the objective function 〈F(u), u − v〉 − (c/2)‖v − u‖² of Problem (5) is strongly concave on K with respect to v, Problem (5) defining γc(u) is a strongly concave quadratic maximization problem, and therefore it has a unique solution, which we denote by vc(u) ∈ K.
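As a concrete, purely illustrative way to evaluate γc numerically, the sketch below solves the maximization in (5) with the generic convex solver cvxpy, taking for Λ the second simplex of Theorem 1; the function name gap_gamma and its calling convention are assumptions of this sketch, and A, p are assumed to be numpy arrays describing X:

import numpy as np
import cvxpy as cp

def gap_gamma(lam, x, A, p, c=1.0):
    # Evaluate gamma_c(u), u = (lam, x), by solving the maximization in (5) over
    # v = (mu, y) in K = Lambda x X, with Lambda = {mu >= 1, sum(mu) = n**2} and
    # X = {A y = 0, 0 <= y <= p}.
    n = x.size
    mu, y = cp.Variable(n), cp.Variable(n)
    # <F(u), u - v> = <-lam, x - y> = <lam, y - x>, since F(u) = (0, -lam)
    objective = cp.Maximize(lam @ (y - x)
                            - 0.5 * c * (cp.sum_squares(mu - lam) + cp.sum_squares(y - x)))
    constraints = [mu >= 1, cp.sum(mu) == n ** 2, A @ y == 0, y >= 0, y <= p]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    # gamma_c(u) and the unique maximizer v_c(u)
    return problem.value, (mu.value, y.value)

By Proposition 1 below, the maximizer returned here coincides with the projection PK(λ, x + (1/c)λ).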

Proposition 1 For every u = (λ, x) ∈ R2n we have the following assertions:

(i) vc(u) = PK(u − (1/c)F(u)) = PK(λ, x + (1/c)λ), where PK(λ, x + (1/c)λ) is the projection of (λ, x + (1/c)λ) onto the set K;
(ii) γc(u) = γc(λ, x) ≥ 0 for all u = (λ, x) ∈ K; moreover, for u = (λ, x) ∈ K, γc(u) = γc(λ, x) = 0 if and only if x ∈ XE;
(iii) γc(·, ·) is continuously differentiable on K and its gradient is

∇γc(u) = ∇γc(λ, x) = −[c(λ − v(λ, x)), λ + (1 + c)(x − v(λ, x))]T,

where ∇γc(u) denotes the gradient of γc at u and vc(λ, x) = (v(λ, x), v(λ, x)).

Proof (i) With u = (λ, x) ∈ R2n, let w := u − (1/c)F(u) and v = (μ, y) ∈ R2n. By definition, one has F(u) = (0, −λ)T ∈ R2n and w = (λ, x + (1/c)λ). Since

(1/(2c))‖F(u)‖² − (c/2)‖v − w‖² = (1/(2c))‖λ‖² − (c/2)‖v − w‖²
  = (1/(2c))‖λ‖² − (c/2)‖(μ, y) − (λ, x + (1/c)λ)‖²
  = (1/(2c))‖λ‖² − (c/2)‖y − (x + (1/c)λ)‖² − (c/2)‖μ − λ‖²
  = (1/(2c))‖λ‖² − (c/2)(‖y − x‖² + (2/c)〈λ, x − y〉 + (1/c²)‖λ‖²) − (c/2)‖μ − λ‖²
  = 〈−λ, x − y〉 − (c/2)‖x − y‖² − (c/2)‖μ − λ‖²
  = 〈F(u), u − v〉 − (c/2)‖v − u‖²,

we have

〈F(u), u − v〉 − (c/2)‖v − u‖² = (1/(2c))‖λ‖² − (c/2)‖v − w‖².

Then

γc(u) = max_{v∈K} {〈F(u), u − v〉 − (c/2)‖v − u‖²}
      = max_{v∈K} {(1/(2c))‖λ‖² − (c/2)‖v − w‖²}
      = (1/(2c))‖λ‖² + max_{v∈K} {−(c/2)‖v − w‖²}
      = (1/(2c))‖λ‖² − (c/2) min_{v∈K} {‖v − w‖²},

which implies that vc(u) = vc(λ, x) = PK(λ, x + (1/c)λ).

(ii) Clearly, by the definition of γc(u), we have γc(u) ≥ 0 for every u = (λ, x) ∈ K. Let u∗ = (λ∗, x∗) ∈ K be such that γc(u∗) = γc(λ∗, x∗) = 0. Then

〈F(u∗), u∗ − vc(u∗)〉 − (c/2)‖vc(u∗) − u∗‖² = 0,

which implies

〈F(u∗), u∗ − vc(u∗)〉 = (c/2)‖vc(u∗) − u∗‖² ≥ 0. (6)

On the other hand, using the well-known optimality condition (see e.g. formula (4.20), p. 139 in Boyd and Vandenberghe 2004) for convex programming applied to the convex problem (5) defining γc(u∗) = γc(λ∗, x∗), we can write

〈F(u∗) + c[vc(u∗) − u∗], u − vc(u∗)〉 ≥ 0   ∀u ∈ K.

With u = u∗ one has

〈F(u∗) + c[vc(u∗) − u∗], u∗ − vc(u∗)〉 ≥ 0.

Thus

〈F(u∗), u∗ − vc(u∗)〉 − c‖u∗ − vc(u∗)‖² ≥ 0.

Combining this inequality with (6), we deduce that ‖u∗ − vc(u∗)‖ ≤ 0. Hence u∗ = vc(λ∗, x∗). By (i), we have

u∗ = vc(u∗) = PK(λ∗, x∗ + (1/c)λ∗). (7)

On the other hand, according to the properties of the projection PK(·), we know that u∗ = PK(u) if and only if 〈v − u∗, u∗ − u〉 ≥ 0 for all v ∈ K. Applying this property with u = (λ∗, x∗ + (1/c)λ∗), by (7), we obtain 〈F(u∗), v − u∗〉 ≥ 0 for all v ∈ K. Since F(u∗) = (0, −λ∗) and u∗ = (λ∗, x∗), we have 〈−λ∗, y − x∗〉 ≥ 0 for all y ∈ X, which implies x∗ ∈ XE.

Conversely, suppose that x∗ ∈ XE. Then 〈−λ∗, y − x∗〉 ≥ 0 for all y ∈ X. Again by the definition of F(u∗), we can write 〈F(u∗), v − u∗〉 ≥ 0 for all v ∈ K. Note that

γc(u∗) = γc(λ∗, x∗) = max_{v∈K} {〈F(u∗), u∗ − v〉 − (c/2)‖u∗ − v‖²}
                    = max_{v∈K} {−〈F(u∗), v − u∗〉 − (c/2)‖u∗ − v‖²} ≤ 0.

Thus γc(u∗) = γc(λ∗, x∗) = 0.

(iii) Since vc(u) = vc(λ, x) = PK(λ, x + (1/c)λ), by continuity of the projection, vc(·, ·) is continuous on K. Then, from γc(u) = 〈F(u), u − vc(u)〉 − (c/2)‖u − vc(u)‖², we see that γc(u) = γc(λ, x) is continuous on K.

Let u := (λ, x) and ϕ(u, v) := 〈F(u), v − u〉 + (c/2)‖u − v‖². Since the function ϕ(u, ·) is strongly convex and continuously differentiable with respect to the variable v, there exists a unique solution vc(u) of the problem

min_{v∈K} ϕ(u, v)

and the function γc(u) = γc(λ, x) is continuously differentiable. A simple computation yields

∇γc(λ, x) = −∇uϕ(u, vc(λ, x)) = −∇(λ,x)ϕ((λ, x), vc(λ, x))
          = −[λ − c v(λ, x), λ + (1 + c)(x − v(λ, x))]T. □

By virtue of this proposition, Problem (3) can be written equivalently as

min g0(u)   subject to u ∈ K, γc(u) = 0, (8)

where g0(u) := g0(λ, x) := dT x = bT u and b := (0, d)T. A simple computation shows that γc(u) = g(u) − h(u), where

g(u) := (1/2)‖F(u) + u‖² + max_{v∈K} {〈cu − F(u), v〉 − (c/2)‖v‖²},
h(u) := (1/2)(‖F(u)‖² + (c + 1)‖u‖²).
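For completeness, the "simple computation" behind γc(u) = g(u) − h(u) can be spelled out; the short derivation below is an added check and uses only the definitions above together with the identity (1/2)‖F(u) + u‖² = (1/2)‖F(u)‖² + 〈F(u), u〉 + (1/2)‖u‖²:

\begin{aligned}
\gamma_c(u) &= \max_{v\in K}\Big\{\langle F(u),\,u-v\rangle-\tfrac{c}{2}\|v-u\|^2\Big\}\\
            &= \langle F(u),u\rangle-\tfrac{c}{2}\|u\|^2
               +\max_{v\in K}\Big\{\langle cu-F(u),\,v\rangle-\tfrac{c}{2}\|v\|^2\Big\}\\
            &= \Big[\tfrac12\|F(u)+u\|^2+\max_{v\in K}\Big\{\langle cu-F(u),\,v\rangle-\tfrac{c}{2}\|v\|^2\Big\}\Big]
               -\tfrac12\big(\|F(u)\|^2+(c+1)\|u\|^2\big)\\
            &= g(u)-h(u).
\end{aligned}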

It is easy to see that both g and h are convex. Since the objective function of the maximization problem

max_{v∈K} {〈cu − F(u), v〉 − (c/2)‖v‖²}

is strongly concave quadratic, the function g is differentiable. Clearly, h is differentiable. Thus Problem (8) can be converted into the smooth DC constrained problem

min{g0(u) : g(u) − h(u) = 0, u ∈ K}. (9)

For t > 0, we consider the penalized problem

min{ft(u) := g0(u) + t g(u) − t h(u) : u ∈ K}. (10)

From An et al. (2003) we know that there exists an exact penalty parameter t0 ≥ 0 such that for every t > t0, the solution sets of problems (8) and (10) are identical.

In the minmax flow problem under consideration, we have d = ∑_{h=1}^{n} ξh eh, where ξh ∈ {1, −1, 0} and eh denotes the h-th unit vector of Rn. In this case, the exact penalty parameter t0 can be calculated as follows (An et al. 2003). Let I := {h : ξh > 0}. Define t∗ by taking

0 < t∗ ≤ max{ξh : h ∈ I} if I ≠ ∅ and t∗ = 0 if I = ∅.


Since ξh ∈ {−1, 0, 1}, we can take 0 < t∗ ≤ 1 whenever I ≠ ∅. So in this case, by Proposition 2.4 in An et al. (2003), we can take 0 < t0 ≤ 1. Note that in the case when I = ∅, i.e., t∗ = 0, it is easy to see that any optimal solution of the linear program min{∑_{h=1}^{n} ξh xh : x ∈ X} solves Problem (3). So in this case the minmax flow problem reduces to the linear program min{∑_{h=1}^{n} ξh xh : x ∈ X}.
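As an aside, the reduced linear program arising when I = ∅ can be handed to any LP solver; the following Python sketch (for illustration only, assuming xi, A and p are numpy arrays) uses scipy.optimize.linprog:

import numpy as np
from scipy.optimize import linprog

def solve_reduced_lp(xi, A, p):
    # min { sum_h xi_h * x_h : A x = 0, 0 <= x <= p }
    res = linprog(c=xi, A_eq=A, b_eq=np.zeros(A.shape[0]),
                  bounds=list(zip(np.zeros_like(p), p)), method='highs')
    return res.x, res.fun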

We now use a DC optimization algorithm, called DCA, developed in Tao (1986) to solve Problem (10). The DCA can solve nonsmooth DC optimization problems of the form

min{F(u) := G(u) − H(u) : u ∈ R2n}, (11)

where G and H are lower semicontinuous proper convex functions. For this nonsmooth optimization problem, the sequence of iterates is constructed by taking

vk ∈ ∂H(uk),   uk+1 = argmin{G(u) − 〈vk, u〉}.

Thus, convergence of this algorithm in the nonsmooth case crucially depends on the choice of vk ∈ ∂H(uk). In our case, since H is differentiable, vk is uniquely defined.
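Schematically, this generic DCA iteration can be coded as follows (a minimal sketch; grad_H and solve_subproblem are placeholder callables to be supplied by the user, not functions from the paper):

import numpy as np

def dca(u0, grad_H, solve_subproblem, tol=1e-6, max_iter=500):
    # Generic DCA for min G(u) - H(u) with differentiable H:
    #   grad_H(u)            -- gradient of H at u (the unique subgradient here)
    #   solve_subproblem(v)  -- argmin_u { G(u) - <v, u> } over the feasible set
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        v = grad_H(u)                    # v^k = grad H(u^k)
        u_next = solve_subproblem(v)     # u^{k+1} = argmin { G(u) - <v^k, u> }
        if np.linalg.norm(u_next - u) <= tol:
            return u_next
        u = u_next
    return u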

Clearly, Problem (10) is a DC program. For this special problem, we can describe the DCA as follows. For simplicity of notation we write u = (λ, x). Starting from a point u0 = (λ0, x0) ∈ K, at each iteration k = 0, 1, . . ., having uk, we construct vk and uk+1 by setting

vk = ∇(th)(uk) = t[F(uk), (1 + c)uk]T,
uk+1 = argmin{g0(u) + t g(u) − 〈u − uk, vk〉 : u ∈ K}.

It is proved in An et al. (1996), among others, that ft(uk+1) ≤ ft(uk) for every k, that if uk+1 = uk, then uk is a stationary point, and that any cluster point of the sequence {uk} is a stationary point of Problem (10).

Thanks to specific properties of the minmax flow problem under consideration, one can find a DC decomposition of the gap function γc such that the subproblems to be solved are strongly convex quadratic programs. In fact we have the following result.

Theorem 2 Let γc(u) be defined by (5), where c > 0 is fixed. Then, for every real number ρ satisfying cρ ≥ 1, the function

fc(u) := (ρ/2)‖u‖² − γc(u)

is convex on K.

Proof By definition, the function γc(u) can be written as

γc(u) = (1/(2c))‖F(u)‖² − (c/2) min_{v∈K} {‖v − w‖²},   where w = (λ, x + (1/c)λ).


Thus, we have

fc(u) = (ρ/2)‖u‖² − (1/(2c))‖F(u)‖² + (c/2) min_{v∈K} {‖v − w‖²}.

Since u = (λ, x) and F(u) = F(λ, x) = (0, −λ)T,

fc(u) = (ρ/2 − 1/(2c))‖λ‖² + (ρ/2)‖x‖² + (c/2) min_{v∈K} {‖v − w‖²}.

Clearly, since cρ ≥ 1 implies ρ/2 − 1/(2c) ≥ 0, the function

q(u) := (ρ/2 − 1/(2c))‖λ‖² + (ρ/2)‖x‖²

is a convex quadratic function. Now we show that

p(u) := p(λ, x) := (c/2) min_{v∈K} {‖v − w‖²}

is convex on K. Indeed, let u = (λ, x), u′ = (λ′, x′) ∈ K and let θ ∈ [0, 1] be a real number. We have

w = (λ, x + (1/c)λ)   and   w′ = (λ′, x′ + (1/c)λ′).

By definition of p, one has

p(u) = (c/2)‖v(u) − w‖²   and   p(u′) = (c/2)‖v(u′) − w′‖².

Since K is convex,

θu + (1 − θ)u′ = (θλ + (1 − θ)λ′, θx + (1 − θ)x′) ∈ K

and

θv(u) + (1 − θ)v(u′) ∈ K,

whenever 0 ≤ θ ≤ 1. Let

w̄ := (θλ + (1 − θ)λ′, θx + (1 − θ)x′ + (1/c)(θλ + (1 − θ)λ′)).


Then

w̄ = θw + (1 − θ)w′

and

p(θu + (1 − θ)u′) = (c/2) min_{v∈K} {‖v − w̄‖²}.

On the other hand,

p(θu + (1 − θ)u′) ≤ (c/2)‖θv(u) + (1 − θ)v(u′) − w̄‖²
                 = (c/2)‖θ(v(u) − w) + (1 − θ)(v(u′) − w′)‖²
                 ≤ θ(c/2)‖v(u) − w‖² + (1 − θ)(c/2)‖v(u′) − w′‖²
                 = θp(u) + (1 − θ)p(u′),

which shows the convexity of p, and hence of fc = q + p. □

From the result of Theorem 2, Problem (9) can be converted into the DC optimization problem

α(t) := min_{u∈K} {Gt(u) := bT u + (ρ/2)‖u‖² − [(ρ/2)‖u‖² − tγc(u)]}, (12)

where u = (λ, x) and b = (0, d).

For simplicity of notation, we write

G(u) := bT u + (ρ/2)‖u‖²,   H(u) := (ρ/2)‖u‖² − tγc(u).

Then the problem has the form

min{G(u) − H(u) : u ∈ K }. (13)

The sequences {uk} and {vk} constructed by the DCA for Problem (13) take the form

vk = ∇H(uk) = ρuk − t∇γc(uk),
uk+1 = argmin{bT u + (ρ/2)‖u‖² − 〈u − uk, vk〉 : u ∈ K}.

Thus, at each iteration k, with wk = (λk, xk + (1/c)λk), we have to solve the two strongly convex quadratic programs

min_{v∈K} ‖v − wk‖², (14)

and

min_{u∈K} {bT u + (ρ/2)‖u‖² − 〈u − uk, vk〉}. (15)


Let vc(λk, xk) be the unique solution of (14). Then

vk = ρuk − t∇γc(uk) = ρ(λk, xk) + t[λk − c v(λk, xk), λk + (1 + c)(xk − v(λk, xk))]T.

The DCA for this case can be described in detail as follows.

Algorithm

– Initialization: Choose an exact penalty parameter 0 < t ≤ 1, a tolerance ε ≥ 0 and two positive numbers c and ρ such that cρ ≥ 1. Seek u0 ∈ K. Let k := 0.
– Iteration k (k = 0, 1, 2, . . .):
  Calculate vk = ρuk − t∇γc(uk).
  Compute uk+1 = argmin{bT u + (ρ/2)‖u‖² − 〈u − uk, vk〉 : u ∈ K}.
  If ‖uk+1 − uk‖ ≤ ε, terminate.
  Otherwise, set k := k + 1 and go to the next iteration.

We observe that if uk = uk+1, then 〈b + t∇γc(uk), u − uk〉 ≥ 0 for all u ∈ K, which means that uk is a stationary point of Problem (12). Motivated by this observation, in the sequel we call uk an ε-stationary point of Problem (12) if ‖uk+1 − uk‖ ≤ ε.
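To make the whole scheme concrete, the following Python sketch realizes one possible implementation of the algorithm (it is not the authors' code): cvxpy is used as a generic solver for the two strongly convex quadratic subproblems (14) and (15), the simplex Λ = {λ ≥ e, ∑k λk = n²} from Theorem 1 is assumed, the gradient of γc is evaluated directly from (5) by Danskin's theorem with the maximizer obtained from (14), and all function and variable names are illustrative:

import numpy as np
import cvxpy as cp

def dca_minmax_flow(A, p, d, c=1.0, rho=1.0, t=1.0, eps=1e-6, max_iter=500):
    # DCA sketch for Problem (12): each iteration solves the projection subproblem (14)
    # and the quadratic subproblem (15) over K = Lambda x X, where
    # Lambda = {lam >= 1, sum(lam) = n**2} and X = {A x = 0, 0 <= x <= p}.
    n = p.size
    b = np.concatenate([np.zeros(n), d])

    def K_constraints(lam, x):
        return [lam >= 1, cp.sum(lam) == n ** 2, A @ x == 0, x >= 0, x <= p]

    # a simple feasible starting point u0 = (lam0, x0) in K
    lam_k, x_k = np.full(n, float(n)), np.zeros(n)

    for _ in range(max_iter):
        # subproblem (14): v_c(u^k) = P_K(lam^k, x^k + lam^k / c)
        mu, y = cp.Variable(n), cp.Variable(n)
        cp.Problem(cp.Minimize(cp.sum_squares(mu - lam_k)
                               + cp.sum_squares(y - (x_k + lam_k / c))),
                   K_constraints(mu, y)).solve()
        mu_k, y_k = mu.value, y.value

        # gradient of gamma_c at u^k (Danskin's theorem applied to (5), with the
        # maximizer (mu_k, y_k) treated as fixed)
        grad_gamma = -np.concatenate([(x_k - y_k) + c * (lam_k - mu_k),
                                      lam_k + c * (x_k - y_k)])
        u_k = np.concatenate([lam_k, x_k])
        v_k = rho * u_k - t * grad_gamma

        # subproblem (15): u^{k+1} = argmin { b^T u + (rho/2)||u||^2 - <u - u^k, v^k> }
        lam, x = cp.Variable(n), cp.Variable(n)
        u = cp.hstack([lam, x])
        cp.Problem(cp.Minimize(b @ u + 0.5 * rho * cp.sum_squares(u) - (u - u_k) @ v_k),
                   K_constraints(lam, x)).solve()

        lam_next, x_next = lam.value, x.value
        if np.linalg.norm(np.concatenate([lam_next, x_next]) - u_k) <= eps:
            return lam_next, x_next
        lam_k, x_k = lam_next, x_next
    return lam_k, x_k

Each iteration thus amounts to two strongly convex quadratic programs over K, in line with the discussion of (14) and (15) above; Proposition 1(iii) provides a closed-form alternative for the gradient of γc.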

Theorem 3 (Convergence theorem)

(i) If the algorithm terminates at iteration k, then uk = (λk, xk) is an ε-stationary point of Problem (12).

(ii) If the algorithm does not terminate, then any cluster point u∗ = (λ∗, x∗) of the sequence {uk} is a stationary point of Problem (12).

Proof (i) Suppose that the algorithm terminates at iteration k. Then, according to the algorithm, ‖uk+1 − uk‖ ≤ ε, which means that uk is an ε-stationary point of Problem (12).

(ii) This assertion is a direct consequence of Theorem 3 in An and Tao (1997) (see also Theorem 5 in An et al. 1996). □

3 Illustrative example and computational results

To illustrate the algorithm we take an example from Gotoh et al. (2003). In this example the network has 4 + 2 nodes and 10 arcs, as shown in Fig. 1, where the number attached to each arc is the arc capacity given by

p = (p1, . . . , p10)T = (8, 3, 1, 4, 2, 1, 7, 1, 2, 8)T .

Initialization: Compute an extreme maximal flow u0 = (λ0, x0), where

λ0 = (1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.400, 1.000, 1.000, 90.600),
x0 = (7.000, 3.000, 0.067, 4.000, 2.000, 1.000, 6.933, 0.067, 2.000, 8.000).


Fig. 1 Network with 6 nodes and 10 arcs

Fig. 2 A minimum maximal flow (x∗i, pi)

Iteration k: At k = 1, the iterate is u1 = (λ1, x1), where

λ1 = (1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.720, 1.000, 1.000, 90.280),

x1 = (6.995, 3.000, 0.070, 4.000, 2.000, 0.995, 6.935, 0.065, 1.995, 8.000).

After 12 iterations, we obtain u∗ = (λ∗, x∗), where

λ∗ = (1.000, 1.000, 1.622, 1.000, 1.000, 1.000, 2.501, 1.000, 1.000, 88.877),

x∗ = (6.000, 3.000, 1.000, 4.000, 2.000, 0.000, 7.000, 0.000, 1.000, 8.000).

This is the same global optimal solution x∗ = (6, 3, 1, 4, 2, 0, 7, 0, 1, 8) as obtained in Gotoh et al. (2003). In Fig. 2, the vector (x∗i, pi) is given on each arc.

To test the proposed algorithm, for each pair (m, n), we run it on 5 randomly generated sets of data. These sets of data are chosen as in Zenke (2006). The test problems are executed on a CPU with an Intel Core 2 chip at 2.53 GHz and 2 GB of RAM, using the C++ (VC++ 2005) programming language. Numerical results are summarized in Tables 1 and 2. Table 1 contains the results computed by the proposed DC algorithm for data where the number of nodes and arcs may reach several hundred; the existing global optimization methods do not work for these data.


Table 1 Computational results

m   n   Data   Op-value   Iter.   Value-γc   Time

6 10 Data1 7 7 3.02E−07 0.77

6 10 Data2 2 19 7.92E−07 0.86

6 10 Data3 5 12 1.95E−06 0.53

6 10 Data4 3 3 0.00E+00 0.19

6 10 Data5 8 4 0.00E+00 0.23

16 20 Data6 6 8 0.00E+00 0.55

16 20 Data7 4 16 4.46E−07 0.67

16 20 Data8 1 26 2.40E−07 1.05

16 20 Data9 0 21 4.41E−07 0.88

16 20 Data10 6 43 6.82E−07 1.76

30 70 Data11 3 116 1.06E−05 7.05

30 70 Data12 4 500 4.00E−05 30.58

30 70 Data13 7 165 8.00E−06 10.14

30 70 Data14 2 234 3.94E−06 14.31

30 70 Data15 6 242 1.13E−06 14.73

100 200 Data16 4 199 2.42E−07 41.73

100 200 Data17 1 191 1.52E−05 40.52

100 200 Data18 2 500 7.70E−06 105.74

100 200 Data19 4 390 2.37E−05 80.19

100 200 Data20 5 500 3.60E−05 103.09

In Table 1:

Data: sets of data;
Op-value: optimal value;
Iter.: number of iterations;
Value-γc: value of γc(·);
Time: average CPU time in seconds.

In Table 2 we compare the results computed by the proposed local algorithm with those of the global outer approximation (OA) algorithm presented in Zenke (2006) for problems of small size, since the global algorithms do not work for larger networks. For the first five problems the data are taken from the network given in Fig. 1. For these instances the results obtained by both algorithms are the same, since each of these problems has a unique solution. For the remaining examples the solutions obtained by the proposed local algorithm are not always a minmax flow; about sixty percent of them are globally optimal. In Table 2:

GAlg.: optimal values obtained by the outer approximation global algorithm;
LAlg.: optimal values obtained by the local algorithm;
GTime: CPU time in seconds for the global algorithm;
LTime: CPU time in seconds for the local algorithm.


Table 2 Computational results for local and global algorithms

Prob.   m   n   GAlg.   LAlg.   GTime   LTime

1 6 10 7 7 1.2 0.77

2 6 10 2 2 1.7 0.86

3 6 10 5 5 1.2 0.53

4 6 10 3 3 0.5 0.19

5 6 10 8 8 1.3 0.23

6 14 20 5 8 10.10 0.65

7 14 20 1 1 14.46 0.67

8 14 20 0 1 21.20 0.95

9 14 20 3 3 34.41 1.08

10 14 20 1 1 6.82 0.76

11 14 20 8 8 19.06 1.05

12 14 20 2 2 24.10 1.58

13 14 20 4 4 18.14 0.14

14 14 20 0 0 8.94 1.31

15 14 20 6 7 6.13 1.03

16 14 21 7 8 22.42 1.73

17 14 21 0 5 14.52 0.92

18 14 21 0 0 17.70 0.74

19 14 21 6 6 22.37 1.09

20 14 21 2 4 32.13 1.02

21 14 21 13 16 42.4 1.73

22 14 21 1 3 12.05 0.52

23 14 21 2 2 27.26 1.74

24 14 21 3 3 21.07 0.64

25 14 21 5 5 31.61 1.09

4 Conclusions

We have used a regularization technique to formulate the minmax flow problem as a smooth DC optimization problem. Then we have applied the DCA to solve the resulting DC problem. We have shown that the subproblems to be solved at each iteration of the proposed algorithm are strongly convex quadratic programs rather than general convex ones. Computational results show that the proposed DC algorithm is a good choice for minmax flow problems thanks to the specific structure of these problems.

Acknowledgments We would like to thank the Associate Editor and the anonymous referees for their useful suggestions, comments and remarks, which helped us very much in revising the paper.

References

An LTH, Tao PD, Muu LD (1996) Numerical solution for optimization over the efficient set by DC optimization algorithms. Oper Res Lett 19:117–128
An LTH, Tao PD (1997) Convex analysis approach to DC programming: theory, algorithms and applications. Acta Math Vietnam 22:289–355
An LTH, Tao PD, Muu LD (2003) Simplicially-constrained DC optimization over efficient and weakly efficient sets. J Optim Theory Appl 117:503–531
Benson HP (1984) Optimization over the efficient set. J Math Anal Appl 98:562–580
Benson HP (1991) An all-linear programming relaxation algorithm for optimization over the efficient set. J Glob Optim 1:83–104
Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge
Gotoh J, Thoai NV, Yamamoto Y (2003) Global optimization method for solving the minimum maximal flow problem. Optim Methods Softw 18:395–415
Iri M (1994) An essay in the theory of uncontrollable flows and congestion. Technical Report TRISE 94-03, Department of Information and System Engineering, Faculty of Science and Engineering, Chuo University
Iri M (1996) Network flow—theory and applications with practical impact. In: Dolezal J, Fidler J (eds) System modelling and optimization. Chapman and Hall, London, pp 24–36
Kim NTB, Muu LD (2002) On the projection of the efficient set and potential applications. Optimization 51:401–421
Luc LT, Muu LD (1997) A global optimization approach to optimization over the efficient set. In: Lecture notes in economics and mathematical systems, vol 452. Springer, Berlin, pp 183–196
Philip J (1972) Algorithms for the vector maximization problem. Math Program 2:207–229
Shi JM, Yamamoto Y (1997) A global optimization method for the minimum maximal flow problem. Acta Math Vietnam 22:271–287
Shigeno M, Takahashi I, Yamamoto Y (2003) Minimum maximal flow problem—an optimization over the efficient set. J Glob Optim 25:425–443
Tao PD (1986) Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients. In: Fermat Days 85: Mathematics for optimization. Elsevier Science Publishers B.V., North-Holland
Yamamoto Y (2002) Optimization over the efficient set: overview. J Glob Optim 22:285–317
Zenke D (2006) Algorithms for the minimum maximal flow problem—optimization over the efficient set. Graduate School of Systems and Information Engineering, University of Tsukuba
Zhu DL, Marcotte P (1994) An extended descent framework for variational inequalities. J Optim Theory Appl 80:349–366
