
OPTIMA
Mathematical Programming Society Newsletter

MAY 2001

No 65

atlanta symposium 2 · MPS prizes 3 · solution of a weighing problem 5 · conferences 14 · gallimaufry 18



The Symposium was held at the Georgia Institute of Technology, August 7-11, 2000. The main organizers were George Nemhauser, Donna Llewellyn, and Martin Savelsbergh. The atmosphere of the meeting was as warm as the outdoor climate, and the quality of the technical as well as the social program was supreme. There were 1,055 registered participants at the meeting.

Seven plenary talks were given, including a special lecture by Thomas Hales about his proof of the Kepler Conjecture, and the Banquet presentation given by Harold Kuhn. The plenary talks focused on the history of some of the key areas of Mathematical Programming, whereas the ten semi-plenary talks considered more recent developments. About 870 technical talks were given in parallel sessions.

On Sunday evening there was a welcome reception and registration at the Ferst Center for the Arts – the "heart" of the Symposium site. The Symposium was opened on Monday with welcome addresses given by George Nemhauser; the provost, Michael Thomas; and the MPS chair, Jean-Philippe Vial. During the opening session, the Beale - Orchard-Hays, Dantzig, Fulkerson, and Tucker prizes were awarded as well. The winners and the prize citations can be found on the following pages. Rita Graham contributed a musical program that was clearly appreciated by the mathematical programmers. I have never seen so many people swinging in their chairs at a conference! On Wednesday there was a fantastic conference banquet. Harold Kuhn gave an entertaining, interesting, and very moving Banquet Presentation that will long remain in our memories.

For the first time in the MPS history, Founders Awards were given to some of our prominent colleagues who have laid the foundation for the many areas of mathematical programming. The ten recipients were: William Davidon, George Dantzig, Lester Ford, David Gale, Ralph Gomory, Alan Hoffman, Harold Kuhn, Harry Markowitz, Philip Wolfe, and Guus Zoutendijk. Lester Ford and David Gale could unfortunately not attend the meeting.

GEORGIA ON MY MIND...
The Seventeenth Symposium on Mathematical Programming – A Great Success
Karen Aardal

EIGHT OF THE FOUNDERS AWARDS RECIPIENTS, FROM LEFT TO RIGHT: PHILIP WOLFE, HAROLD KUHN, HARRY MARKOWITZ, RALPH GOMORY, GEORGE DANTZIG, ALAN HOFFMAN, GUUS ZOUTENDIJK, AND WILLIAM DAVIDON.

PAST, PRESENT, AND FUTURE CHAIRMEN OF THE SOCIETY ENJOYING THE BANQUET: GEORGE NEMHAUSER, CHAIR-ELECT BOB BIXBY, JOHN DENNIS, GEORGE DANTZIG, JAN KAREL LENSTRA, AND CURRENT CHAIR JEAN-PHILIPPE VIAL.

SOUTHERN HOSPITALITY! THE HAPPY ORGANIZING TEAM: GEORGE NEMHAUSER, DONNA LLEWELLYN, AND MARTIN SAVELSBERGH.

HAROLD KUHN DURING HIS BANQUET PRESENTATION "IN THE RIGHT PLACE AND THE RIGHT TIME."

MARTIN GRÖTSCHEL GIVING A PLENARY TALK AT THE OPENING SESSION.

WHICH SESSION TO GO TO NEXT? GEORGE DANTZIG, RALPH GOMORY, AND ELLIS JOHNSON ARE CHECKING THE PROGRAM.



Listed below are the prize citations for the four MPS prizes (the Beale - Orchard-Hays prize, the Dantzig prize, the Fulkerson prize, and the Tucker prize). Some of the citations have been slightly edited by the OPTIMA Editor.

The Beale - Orchard-Hays Prize

The Beale - Orchard-Hays prize – for excellence in computational mathematical programming – was awarded to David Applegate, Robert Bixby, Vasek Chvátal, and William Cook for their paper, "On the Solution of Traveling Salesman Problems," Documenta Mathematica, Journal der Deutschen Mathematiker-Vereinigung, ICM III (1998), 645-656.

Additional information and source code for the associated CONCORDE software package are available online (http://www.keck.caam.rice.edu/concorde.html – also given in the paper itself).

DAVID APPLEGATE ROBERT BIXBY

VASEK CHVÁTAL WILLIAM COOK

The Traveling Salesman Problem (TSP) has played a special role in the history of mathematical programming. In addition to its many applications, from manufacturing to scheduling to computational biology, the TSP has long served as the paradigmatic hard combinatorial optimization problem and as a prime first testbed for new ideas in the field, from cutting planes to branch-and-cut to simulated annealing and other metaheuristics.

The award winning paper gives an informative summary of the history of cutting plane based approaches to the TSP and then describes the ideas, both old and new, that the authors have incorporated in their own code, which is now by far the best optimization code for the TSP that has been developed. In its 12 pages (constrained by a page limitation for this special issue of the journal) the paper manages to include a large amount of material at a level such that the non-expert can get a feel for the ideas involved and the expert can understand the key innovations involved in the code.

As with all experimental papers, however, it is the code itself and the results obtained with it that provide the measure of significance. The authors' solution of a 13,509-city instance from the TSPLIB is arguably the most difficult optimization problem that has been solved to date. The 13,509 solution was the culmination of 10 years of work by the group, pushing the envelope of optimization computing in a wide range of areas. The procedure that the authors employed is a model for how to combine mathematical programming techniques with advanced data structures and algorithms, together with parallel computing, to greatly extend the reach of optimization software.

The authors have advanced the state of the art to the point where solving 1,000-city instances is routine. Indeed, in a recent test by Applegate (to gather information on the Beardwood-Halton-Hammersley constant) the authors' code was used to solve 10,000 geometric instances each containing 1,000 points.

The "local cuts" procedure described in the paper (and implemented via a tour de force in their computer code) is an important advance in the way cutting-plane separation algorithms can be viewed. Previously our main route to producing facet-defining inequalities (so useful in practice over the past twenty years) was a template-based approach that required a great deal of problem-specific knowledge, analysis, and algorithms. The new and complementary approach (and its natural extension to general integer programming) provides a general purpose routine that can be specialized to produce facets automatically by a process of linear projection, given only that one has a method for optimizing small instances of the relevant subproblem.

Byproducts of the authors' TSP work have found their way into the leading commercial integer and linear programming codes. A prominent example of this is the strong branching rule (for choosing a branching variable in mixed integer programming branch and bound algorithms) that is now the default choice when solving hard MIP instances.

The authors' source code includes a little over 100,000 lines of C code. This is (to my knowledge) the first time the full source code to a state-of-the-art branch-and-cut algorithm has been made publicly available, and it serves as a model for the development of future efficient implementations. Included with the cutting plane routines are the first publicly-available high-quality implementations of the Lin-Kernighan and Chained-Lin-Kernighan heuristics for finding near-optimal tours, which themselves are of great practical benefit when attempting to get good solutions to instances beyond the range of the optimization routines. All this allows future research to start at the top, rather than having to rebuild a code from the ground up.

The two versions of the accompanying source code (one from 1997 and one from 1999) have had over 1,200 downloads (and the new version continues to obtain about 40 new downloads per week). Links to it have already been added to over 25 web sites (outside of AT&T, Rice, and Rutgers). This amount of activity is unusual (to say the least) for a piece of advanced mathematical software, and a definite measure of its impact.

An honorable mention for the Beale - Orchard-Hays prize was given to the following two papers:

Joseph Czyzyk, Michael P. Mesnier, Jorge Moré, "The NEOS Server," IEEE Journal on Computational Science and Engineering, 5 (1998), 68-75.

William Gropp, Jorge Moré, "Optimization environments and the NEOS Server," in: Approximation Theory and Optimization, M.D. Buhmann and A. Iserles, eds., pages 167-182, Cambridge University Press, 1997.

THE MPS PRIZES

The prize jury: M. Ferris, B. Fourer, D. Goldfarb (chair), M. Kojima, M. Saunders.

The George B. Dantzig Prize

The Dantzig prize is jointly administered by the Mathematical Programming Society and the Society for Industrial and Applied Mathematics (SIAM). The prize is awarded to one or more individuals for original research which, by virtue of its originality, breadth and depth, is having a major impact on the field of mathematical programming.

The prize was awarded to Yurii Nesterov. Yurii Nesterov is probably best known for his fundamental contributions to the theory of interior point methods for convex programming, including his introduction of the concept of self-concordance. However, his previous works in convex optimization and in automatic differentiation, though perhaps less widely publicized, were just as creative and authoritative. These accomplishments have justifiably attracted international recognition, and have significantly affected the course of research in optimization theory and algorithms.

The prize jury: W. Cunningham, C. Lemaréchal, S. M. Robinson (chair), L. A. Wolsey.

The D. Ray Fulkerson Prize

The Fulkerson Prize for outstanding papers in the area of discrete mathematics is sponsored jointly by the Mathematical Programming Society and the American Mathematical Society. The jury awarded two prizes at the Symposium.

One award went to Michele Conforti, Gerard Cornuéjols and M. R. Rao for their paper "Decomposition of balanced matrices," Journal of Combinatorial Theory, Series B 77 (1999), 292-406.

A balanced matrix is a 0-1 matrix with no submatrix of odd order with precisely two 1's per row and per column. This simply defined class of matrices has been of great interest and is well-studied because packing and covering linear programs with balanced constraint matrices have integral solutions. Also, balanced matrices are a natural generalization of bipartite graphs.
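For intuition, the defining condition can be checked by brute force. The following Python sketch is our own illustration straight from the definition; it is exponential and suitable only for tiny matrices, in contrast to the polynomial time recognition algorithm of the prize-winning paper.

```python
from itertools import combinations

def is_balanced(matrix):
    """Brute-force balancedness test: a 0-1 matrix is balanced if no
    square submatrix of odd order has exactly two 1's in every row
    and every column.  Exponential -- tiny examples only."""
    m, n = len(matrix), len(matrix[0])
    for k in range(3, min(m, n) + 1, 2):          # odd orders >= 3
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[matrix[r][c] for c in cols] for r in rows]
                rows_ok = all(sum(row) == 2 for row in sub)
                cols_ok = all(sum(sub[i][j] for i in range(k)) == 2
                              for j in range(k))
                if rows_ok and cols_ok:
                    return False
    return True

# The edge-vertex incidence matrix of a triangle (odd cycle) is not
# balanced; that of a 4-cycle is.
print(is_balanced([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))                         # False
print(is_balanced([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]))  # True
```

The two example matrices mirror the remark that balanced matrices generalize bipartite graphs: odd cycles are the obstruction.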

The authors give a polynomial time algorithm for recognizing balanced matrices, settling a long-standing open question. The algorithm is based on a deep, beautiful decomposition theorem whose arduous proof combines brilliant insights and meticulous case-checking.

A second award went to Michel Goemans and David Williamson for their paper "Improved approximation algorithms for the maximum-cut and satisfiability problems using semi-definite programming," Journal of the Association for Computing Machinery, 42 (1995), 1115-1145.

This paper gives a much better approximation algorithm for the fundamental maximum-cut problem in graphs and several other optimization problems by an elegant application of semi-definite programming. It is the first paper to apply semi-definite programming in an approximation algorithm with a guaranteed performance bound.

A crucial new element is the extraction of an integer optimal solution from the semi-definite relaxation. The authors' techniques have since found a host of other applications.

The prize jury: W. Cook, R. Graham, R. Kannan (chair).

M. R. RAO, MICHELE CONFORTI, GERARD CORNUEJOLS, AND RAVI KANNAN (JURY CHAIR)

DAVID WILLIAMSON, MICHEL GOEMANS, AND RAVI KANNAN

The A.W. Tucker Prize

The Tucker prize is awarded at each ISMP meeting for the best paper authored by a student.

The committee chose three finalists for the prize, and from among these a winner. The first two finalists are Fabian Chudak from Cornell University, and Kamal Jain from Georgia Tech. Dr. Chudak's paper considers improved approximation algorithms for the uncapacitated facility location problem. Dr. Jain's paper describes a factor-2 approximation algorithm for the generalized Steiner network problem.

The third finalist, and winner of the Tucker prize, is Bertrand Guenin from Carnegie-Mellon University. Dr. Guenin's paper gives a complete characterization of graphs for which a natural LP-relaxation of the MAXCUT problem has all integral vertices. Such graphs were termed "weakly bipartite" by Grötschel and Pulleyblank. Guenin's result verifies a substantial special case of a conjecture by Seymour on ideal clutters dating back to 1977, and represents the first major progress on the resolution of Seymour's conjecture. The paper employs a sophisticated graph-theoretic analysis involving the use of Lehman's results on minimally nonideal matrices, for which Lehman was awarded the MPS Fulkerson prize in 1991. In addition to its purely theoretical interest, the study of ideal matrices has significant applicability in the design of algorithms for partitioning and covering problems that arise in many areas, such as vehicle routing and crew scheduling.

The prize jury: K. Anstreicher (chair), R. Burkhard, D. den Hertog, D. Karger, J. Lee.



1 The seven weights problem

The following problem was published in the Sunday Times on Sunday, 30 June 1963, and a prize of three pounds was offered for the first correct solution:

"On our last expedition to the interior," said the famous explorer, "we came across a tribe who had an interesting kind of Harvest Festival, in which every male member of the tribe had to contribute a levy of grain into the communal tribal store. Their unit of weight was roughly the same as our pound avoirdupois, and each tribesman had to contribute one pound of grain for every year of his age. The contributions were weighed on the tribe's ceremonial scales, using a set of seven ceremonial weights. Each of these weighed an integral number of pounds, and it was an essential part of the ritual that not more than three of them should be used for each weighing, though they need not all be in the same pan.

We were told that if ever a tribesman lived to such an age that his contribution could no longer be weighed by using three weights only, the levy of grain would terminate for ever. And in the previous year, one old man had died only a few months short of attaining the critical age, greatly to the relief of the headman of the tribe. The scientist with our expedition confirmed that the weights had been selected so that the critical age was the maximum possible."

What was the age of the old man when he died? And what were the weights of the seven ceremonial weights?

In a more mathematical formulation, the problem asks for seven positive integers W1, . . . , W7 that can represent consecutive integers in the widest possible range 1 . . . L. Here an integer n is said to be represented by the numbers W1, . . . , W7, if there exists an index 1 ≤ a ≤ 7 with n = Wa, or if there exist two indices a, b with 1 ≤ a < b ≤ 7 such that n = ±Wa ± Wb, or if there exist three indices a, b, c with 1 ≤ a < b < c ≤ 7 such that n = ±Wa ± Wb ± Wc.
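This representability test is easy to state in code. The following Python sketch is our own illustration (not the authors' program); a weight taken with sign −1 corresponds to placing it in the same pan as the grain.

```python
from itertools import combinations, product

def representable(n, weights):
    """True if n equals a signed sum of at most three of the weights."""
    for k in (1, 2, 3):
        for idx in combinations(range(len(weights)), k):
            for signs in product((1, -1), repeat=k):
                if sum(s * weights[i] for s, i in zip(signs, idx)) == n:
                    return True
    return False

# The weights found in this note represent 122 but not 123:
print(representable(122, [1, 3, 7, 12, 43, 76, 102]))  # True (122 = 3 + 43 + 76)
print(representable(123, [1, 3, 7, 12, 43, 76, 102]))  # False
```

Each weight set induces at most 7 + 4·21 + 4·35 = 231 distinct positive values, so the check is cheap for a single candidate set.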

In the solution published in the Sunday Times of 7 July 1963, it was claimed that the best possible range was L = 120, that the weights of the seven ceremonial weights were 1, 3, 7, 12, 42, 75, and 100, and that the old man had died just before his 121st birthday. In this note we show that this claim is not justified: the truth is that the old man had died just before his 123rd birthday. In Section 2, we describe a weight sequence that represents the range 1 . . . 122. In Section 3, we argue that a range 1 . . . 123 cannot be reached; the argument relies on complete enumeration with the help of a computer program. This program is available on request from the first author (by sending email to [email protected]).

2 Solutions to the problem

When we started our search, we used a primitive program that only searched through a very limited number of feasible solutions. Nevertheless, within two hours of running time this program managed to find a solution for the range 1 . . . 122. Table 1 demonstrates that all integers in the range 1 . . . 122 indeed can be represented by the seven weights 1, 3, 7, 12, 43, 76, and 102.

We then completely settled the problem by performing an implicit complete enumeration with the help of a computer program. For this we needed a more sophisticated approach, which is described in detail in Section 3. In Table 2, we list in lexicographically increasing order all possible sets of seven weights that can represent ranges 1 . . . L with L ≥ 120.

3 Implicit enumeration

We have implemented an enumeration scheme to find the best possible sequence of weights Wi, where W1 ≤ W2 ≤ W3 ≤ W4 ≤ W5 ≤ W6 ≤ W7. Our enumeration scheme searches through the weight sequences in lexicographically increasing order. A naive approach that simply enumerates all possible sequences will fail since the number of combinations to be inspected is fairly large: even if we could assume a priori that W7 ≤ 123 (and there is absolutely no reason for assuming this), there still would be roughly 10^11 sequences to consider; and for each single one of these 10^11 sequences, we would have to check whether it represents the full range 1 . . . 122 of integers.
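The "roughly 10^11" estimate is easy to reproduce: the nondecreasing sequences W1 ≤ ··· ≤ W7 with W7 ≤ 123 correspond to the 7-element multisets of {1, . . . , 123}, counted by a standard "stars and bars" binomial coefficient.

```python
import math

# Nondecreasing 7-tuples with entries in {1, ..., 123} are the
# 7-element multisets of a 123-element set: C(123 + 7 - 1, 7).
count = math.comb(123 + 7 - 1, 7)
print(count)  # 99949406400, i.e. about 10**11
```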

Solution of a weighing problem
Cor A.J. Hurkens* Gerhard J. Woeginger†

*[email protected]; Department of Mathematics and Computing Science, Eindhoven University of Technology, P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands.
†[email protected]; Institut für Mathematik B, TU Graz, Steyrergasse 30, A-8010 Graz, Austria. Supported by the START program Y43-MAT of the Austrian Ministry of Science.


Hence, we implemented several tricks that helped to considerably cut down the search tree. These tricks investigate potential extensions of partial solutions. We say that a partial solution W1 ≤ ··· ≤ Wl (with 1 ≤ l ≤ 6) is interesting, if there is still some possibility to extend it by numbers Wl+1, . . . , W7 to a sequence that represents all the numbers in the range 1 . . . q. The value q is a threshold that will be set to some value ≥ 117; the choice of this threshold is somewhat arbitrary, but the analysis below is general. In order to find good bounds on the number of distinct representable values in the range 1 . . . q, we compute and maintain the following sets and numbers for every partial solution W1 ≤ ··· ≤ Wl.

• l ≤ 6 is the number of currently fixed weights.

• M(k, l) with 1 ≤ k ≤ 3 is the set of all integers that can be represented by using at most k numbers from W1, . . . , Wl. This set includes 0 and negative numbers.

• W̲ is a (tentative) lower bound on the numbers Wl+1, . . . , W7.

From now on we will write #(X) to denote the cardinality of a set X, and C(n, k) for the binomial coefficient "n choose k". Note that the cardinalities of the sets ℕ ∩ M(1, l), ℕ ∩ M(2, l), and ℕ ∩ M(3, l) can be bounded from above by the values l, l², and l² + 4·C(l, 3), respectively. Similarly, the cardinalities of the sets M(1, l), M(2, l), and M(3, l) can be bounded from above by 2l + 1, 2l² + 1, and 2l² + 8·C(l, 3) + 1, respectively. We will use these bounds many times in our arguments.
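These closed-form bounds are straightforward to tabulate. The snippet below is our own quick check of the positive-value bounds for l = 1, . . . , 7; the reasoning in the comments paraphrases the counting argument above.

```python
from math import comb

# Upper bounds on the number of positive values in M(k, l):
#   k = 1: the l weights themselves;
#   k = 2: l singletons plus 2*C(l, 2) signed pair values, i.e. l**2;
#   k = 3: l**2 plus at most 4 positive values per triple of weights.
bounds = [(l, l, l * l, l * l + 4 * comb(l, 3)) for l in range(1, 8)]
for row in bounds:
    print(row)
# The last column gives 1, 4, 13, 32, 65, 116, 189 for l = 1, ..., 7.
```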

3.1 Locally bounding the depth of the search tree

The following expression A(l, W̲, q) is an upper bound on the number of distinct positive integers in 1 . . . q that can be represented by any extension of the current partial solution. For ease of exposition, any number that can be chosen from a set M(•, •) is denoted by Mi, and any weights yet to be fixed are denoted by Wj ≤ Wk ≤ Wl with j < k < l.

A(l, W̲, q) = #( [1, q] ∩ M(3, l) )  /* combinations without any Wj */
 + C(7−l, 1) · #( (−∞, q − W̲] ∩ M(2, l) )  /* combinations abs(Wj ± Mi) */
 + C(7−l, 2) · #( M(1, l) )  /* combinations abs(Wj − Wk ± Mi) */
 + C(7−l, 2) · #( (−∞, q − 2W̲] ∩ M(1, l) )  /* combinations abs(Wj + Wk ± Mi) */
 + 4·C(7−l, 3)  /* combinations abs(±Wj ± Wk ± Wl) */

E.g., the fifth additive term 4·C(7−l, 3) is justified by the observation that at most 4 of the 8 combinations ±Wj ± Wk ± Wl can be positive. Note that the function A(l, W̲, q) is non-increasing in the parameter W̲, and that W̲ ≥ Wl. Hence, if A(l, Wl, q) < q holds, then the considered partial solution cannot be interesting. In this case, we skip all extensions of this partial solution.
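The pruning test can be sketched in Python as follows. This is our own rendition, not the authors' C program: it transcribes the five explicit terms of A listed above, and the function, set, and variable names (M, A, interesting, W_lo) are ours.

```python
from itertools import combinations, product
from math import comb

def M(k, W):
    """All integers (zero and negatives included) representable by a
    signed sum of at most k of the fixed weights W."""
    vals = {0}
    for j in range(1, k + 1):
        for idx in combinations(range(len(W)), j):
            for signs in product((1, -1), repeat=j):
                vals.add(sum(s * W[i] for s, i in zip(signs, idx)))
    return vals

def A(W, W_lo, q):
    """The five explicit terms of the bound A(l, W_lo, q) on how many
    integers in 1..q an extension of the partial solution W (with all
    later weights >= W_lo) can represent."""
    l = len(W)
    M1, M2, M3 = M(1, W), M(2, W), M(3, W)
    return (sum(1 for x in M3 if 1 <= x <= q)
            + comb(7 - l, 1) * sum(1 for x in M2 if x <= q - W_lo)
            + comb(7 - l, 2) * len(M1)
            + comb(7 - l, 2) * sum(1 for x in M1 if x <= q - 2 * W_lo)
            + 4 * comb(7 - l, 3))

def interesting(W, q=120):
    """Keep the partial solution W only if A(l, W_l, q) >= q."""
    return A(W, W[-1], q) >= q

print(interesting([1, 3, 7, 12, 43]))   # True: extends to the optimum
print(interesting([1, 1, 1, 1, 1]))     # False: pruned immediately
```

Since the sets M(k, l) grow only when a weight is fixed, the real implementation can maintain them incrementally instead of recomputing them per node, as Section 3.3 notes.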

3.2 Locally bounding the width of the search tree

If a partial solution survives the above feasibility test and satisfies A(l, Wl, q) ≥ q, then our goal is to compute lower and upper bounds on Wl+1, the value of the next weight to be fixed. Since we generate the weight sequences in lexicographically increasing order, the value Wl yields a trivial lower bound for Wl+1. Now let us look for upper bounds. Intuitively it is clear that if we set the weight Wl+1 too high, then we will not be able to represent all the 'small' integers below Wl+1. To make this idea precise, we compute a crude upper bound B(l, q) on the number of integers in the range 1 . . . min{q, Wl+1 − 1} that can be represented by any extension of the considered partial solution:

B(l, q) = #( [1, q] ∩ M(3, l) )  /* combinations without any Wj */


l               1   2   3   4   5   6   7
l²              1   4   9  16  25  36  49
l² + 4·C(l, 3)  1   4  13  32  65 116 189

(In full, the definition of A(l, W̲, q) contains two further additive terms, with case distinctions governed by thresholds λ and θ, that compensate for triples abs(±Wj ± Wk ± Wl) counted too high; for sufficiently large W̲, the last three terms of A together contribute C(7−l, 3).)

1 = 1; 2 = −1 + 3; 3 = 3; 4 = 1 + 3; 5 = −7 + 12; 6 = −1 + 7
7 = 7; 8 = 1 + 7; 9 = −3 + 12; 10 = 3 + 7; 11 = −1 + 12; 12 = 12
13 = 1 + 12; 14 = −1 + 3 + 12; 15 = 3 + 12; 16 = 1 + 3 + 12; 17 = 43 + 76 − 102; 18 = −1 + 7 + 12
19 = 7 + 12; 20 = 1 + 7 + 12; 21 = −12 − 43 + 76; 22 = 3 + 7 + 12; 23 = −3 − 76 + 102; 24 = −7 − 12 + 43
25 = −1 − 76 + 102; 26 = −76 + 102; 27 = 1 − 76 + 102; 28 = −3 − 12 + 43; 29 = 3 − 76 + 102; 30 = −1 − 12 + 43
31 = −12 + 43; 32 = 1 − 12 + 43; 33 = −43 + 76; 34 = 1 − 43 + 76; 35 = −1 − 7 + 43; 36 = −7 + 43
37 = 1 − 7 + 43; 38 = 7 − 12 + 43; 39 = −1 − 3 + 43; 40 = −3 + 43; 41 = 1 − 3 + 43; 42 = −1 + 43
43 = 43; 44 = 1 + 43; 45 = −1 + 3 + 43; 46 = 3 + 43; 47 = 1 + 3 + 43; 48 = −7 + 12 + 43
49 = −1 + 7 + 43; 50 = 7 + 43; 51 = 1 + 7 + 43; 52 = −3 + 12 + 43; 53 = 3 + 7 + 43; 54 = −1 + 12 + 43
55 = 12 + 43; 56 = 1 + 12 + 43; 57 = −7 − 12 + 76; 58 = −1 − 43 + 102; 59 = −43 + 102; 60 = 1 − 43 + 102
61 = −3 − 12 + 76; 62 = 3 − 43 + 102; 63 = −1 − 12 + 76; 64 = −12 + 76; 65 = 1 − 12 + 76; 66 = −3 − 7 + 76
67 = 3 − 12 + 76; 68 = −1 − 7 + 76; 69 = −7 + 76; 70 = 1 − 7 + 76; 71 = 7 − 12 + 76; 72 = −1 − 3 + 76
73 = −3 + 76; 74 = 1 − 3 + 76; 75 = −1 + 76; 76 = 76; 77 = 1 + 76; 78 = −1 + 3 + 76
79 = 3 + 76; 80 = 1 + 3 + 76; 81 = −7 + 12 + 76; 82 = −1 + 7 + 76; 83 = 7 + 76; 84 = 1 + 7 + 76
85 = −3 + 12 + 76; 86 = 3 + 7 + 76; 87 = −1 + 12 + 76; 88 = 12 + 76; 89 = 1 + 12 + 76; 90 = −12 + 102
91 = 1 − 12 + 102; 92 = −3 − 7 + 102; 93 = 3 − 12 + 102; 94 = −1 − 7 + 102; 95 = −7 + 102; 96 = 1 − 7 + 102
97 = 7 − 12 + 102; 98 = −1 − 3 + 102; 99 = −3 + 102; 100 = 1 − 3 + 102; 101 = −1 + 102; 102 = 102
103 = 1 + 102; 104 = −1 + 3 + 102; 105 = 3 + 102; 106 = 1 + 3 + 102; 107 = −7 + 12 + 102; 108 = −1 + 7 + 102
109 = 7 + 102; 110 = 1 + 7 + 102; 111 = −3 + 12 + 102; 112 = 3 + 7 + 102; 113 = −1 + 12 + 102; 114 = 12 + 102
115 = 1 + 12 + 102; 116 = −3 + 43 + 76; 117 = 3 + 12 + 102; 118 = −1 + 43 + 76; 119 = 43 + 76; 120 = 1 + 43 + 76
121 = 7 + 12 + 102; 122 = 3 + 43 + 76

Table 1: How to represent all the integers in the range 1 . . . 122 by the seven weights 1, 3, 7, 12, 43, 76, and 102.

W1  W2  W3  W4  W5  W6   W7   Range
 1   2  12  17  33  74   99   1...120
 1   3   7  12  42  75  100   1...120
 1   3   7  12  43  76  102   1...122
 1   3   9  14  40  70  103   1...120
 1   3  10  15  38  72  103   1...121
 2   4   7  22  35  78   90   1...121
 2   5   6  15  43  74  103   1...120
 2   6  13  23  33  83   84   1...120
 3   5  15  16  33  75   99   1...120

Table 2: All possible solutions for ranges 1 . . . L with L ≥ 120.
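Table 2 can be double-checked mechanically. The sketch below is our own code, assuming the signed-sum definition of representability from Section 1; it computes the largest consecutive range reached by a weight set.

```python
from itertools import combinations, product

def reach(weights):
    """Largest L such that every integer in 1..L is a signed sum of
    at most three of the given weights."""
    vals = set()
    for k in (1, 2, 3):
        for idx in combinations(range(len(weights)), k):
            for signs in product((1, -1), repeat=k):
                vals.add(sum(s * weights[i] for s, i in zip(signs, idx)))
    L = 0
    while L + 1 in vals:
        L += 1
    return L

print(reach((1, 3, 7, 12, 43, 76, 102)))   # 122, the optimum found here
print(reach((1, 3, 7, 12, 42, 75, 100)))   # 120, the Sunday Times solution
```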



 + C(7−l, 1) · #( [1, ∞) ∩ M(2, l) )  /* combinations Wj − Mi */
 + C(7−l, 2) · #( M(1, l) )  /* combinations abs(Wj − Wk ± Mi) */
 + C(7−l, 3)  /* combinations abs(Wj + Wk − Wl) */

(1) First we discuss the case where B(l, q) < q holds. Here we use the upper bound Wl+1 ≤ B(l, q) + 1. Note that we can bound B(l, q) from above by

l² + 4·C(l, 3) + C(7−l, 1)·l² + C(7−l, 2)·(2l + 1) + C(7−l, 3).

Here l² + 4·C(l, 3) is an upper bound on #( [1, q] ∩ M(3, l) ), and C(7−l, 1)·l² is an upper bound on C(7−l, 1) · #( [1, ∞) ∩ M(2, l) ), and so on. This yields the following a priori upper bounds on Wl+1:

In fact, if B(l, q) < q, we may compute an even better upper bound on Wl+1 by excluding the very high values in M(3, l). Define

x* = max{x | x ≤ q and B(l, x) ≥ x}.

Then the inequality Wl+1 ≤ B(l, x* + 1) + 1 yields the improved upper bound on Wl+1.

(2) Next we discuss the remaining case where B(l, q) ≥ q. Here we define

y* = min{y | A(l, y, q) < q}.

By the definition of the function A in the preceding section, we get that for W̲ ≥ y* there is no feasible extension of the considered partial solution. Hence, in this second case we arrive at the upper bound Wl+1 ≤ y* − 1.

It remains to be shown that the above defined value y* indeed exists, i.e., that the minimum in the right hand side exists (which is not at all obvious). By examining the function A, one observes that A(l, ∞, q) is bounded from above by l² + 4·C(l, 3) + (2l + 1)·C(7−l, 2) + C(7−l, 3): the right hand side of the definition is the sum of seven terms. The first term #( [1, q] ∩ M(3, l) ) is bounded from above by l² + 4·C(l, 3). The second and the fourth term disappear for sufficiently large values of W̲. The third term C(7−l, 2) · #( M(1, l) ) is bounded from above by C(7−l, 2)·(2l + 1). The sum of the last three terms equals C(7−l, 3) for sufficiently large W̲. Altogether, this yields the following table.

l             1   2   3   4   5   6   7
A(l, ∞, q) ≤ 66  64  59  60  76 116 189

Since l ≤ 6 and since q > 116, the value y* indeed exists. Finally, we remark that the value y* can be computed quickly by bisection search.
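The tabulated values follow directly from the closed-form bound just derived; a quick check (our own code, with an assumed function name):

```python
from math import comb

def A_at_infinity(l):
    # l**2 + 4*C(l,3) + (2l+1)*C(7-l,2) + C(7-l,3)
    return (l * l + 4 * comb(l, 3)
            + (2 * l + 1) * comb(7 - l, 2) + comb(7 - l, 3))

print([A_at_infinity(l) for l in range(1, 8)])
# [66, 64, 59, 60, 76, 116, 189]
```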

To summarize, for every partial solution we first determine the value B(l, q). Depending on whether this value is < q or ≥ q, we compute an upper bound that is based on x* or y*, respectively. Then we only search extensions of the partial solution for which Wl+1 takes values between Wl and this upper bound.

3.3 Fathoming a node by looking ahead

Starting from an empty sequence of weights, we recursively compute and extend partial solutions. The bounds described above are used to limit the number of combinations that have to be considered. It is computationally cheap to evaluate the functions A and B, since we compute and maintain all necessary auxiliary values and sets on the fly.

Once five weights have been fixed (i.e., once we are down to l = 5), it is possible to efficiently determine whether the partial solution is interesting. By a slight adaptation of the routine A, we can compute even sharper bounds on the value W7. This allows us to skip several additional values of W6. Once six weights have been fixed, it is an easy job to find the interesting values for W7. At this stage, almost every partial solution indeed leads to a sequence which represents each integer in 1 . . . q.

3.4 Remarks on the computer program

The program has been coded in C and took 475 minutes of CPU time on a 450 MHz PC running under Linux. The final search tree has approximately 68 million leaf nodes. The last 5-tuple (W1, W2, W3, W4, W5) that was investigated is (57, 73, 84, 91, 95). The highest values considered for the seven weights are 57, 73, 85, 96, 109, 110, and 103, respectively. There exist 116,315,910 5-tuples (W1 ≤ W2 ≤ W3 ≤ W4 ≤ W5), with Wi below the respective upper bounds, so at least half of them have been fathomed by our bounding procedures. This shows that the search tree has been kept under control, as far as the higher weights are concerned.

We have not been able to cut down the range for the first weights W1 and W2. The reader may want to observe from the entries in Table 2 that good solutions are possible without using the values 1 and 2 at all. Nevertheless, one would expect some argument showing that the partial solutions in which, say, W1 ≥ 10 need not be considered. Such a result would speed up the algorithm considerably. The highest upper bounds on W6 and W7 (observed when the first five weights have already been fixed) are 137 and 262, respectively. This indicates that it will be rather difficult to bound the ranges of these variables a priori.

We conclude that only the application of the various bounds enabled us to implicitly enumerate all solutions.

4 Concluding remarks

Table 2 is the result of the search setting the threshold q = 120. It therefore contains all possible solutions from which a consecutive range 1 . . . 120 can be represented.

This type of weighing problem should yield nice challenges for the constraint programming community. After all, we have invested a lot of work and a lot of thinking into the solution of this problem, and it is not clear whether the generic constraint programming approaches (where one only needs to formulate a problem in some meta-language and then all the optimization and all the searching are performed automatically by the underlying software) can compete against human specialization.

Acknowledgement

We thank Clive Tooth for stating this weighing problem on his web page (http://www.pisquaredoversix.force9.co.uk/3From7.htm).


l          0   1   2   3   4   5   6
B(l, q) ≤ 56  72  84  95 108 126 152
Wl+1 ≤    57  73  85  96 109 ___ ___
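These values agree with the a priori bound on B(l, q) derived in case (1) of Section 3.2; a quick check (our own code, with an assumed function name):

```python
from math import comb

def B_apriori(l):
    # l**2 + 4*C(l,3) + C(7-l,1)*l**2 + C(7-l,2)*(2l+1) + C(7-l,3)
    return (l * l + 4 * comb(l, 3) + comb(7 - l, 1) * l * l
            + comb(7 - l, 2) * (2 * l + 1) + comb(7 - l, 3))

print([B_apriori(l) for l in range(7)])      # [56, 72, 84, 95, 108, 126, 152]
print([B_apriori(l) + 1 for l in range(5)])  # [57, 73, 85, 96, 109]
```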



1 Multiprocessors and Computational Grids

Multiprocessor computing platforms, which have become available more and more widely since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. There has also been a great deal of research on computational techniques that are suited to these platforms, and on the software infrastructure needed to compile and run programs on them. Parallel computing is now central to the culture of many research communities, in such areas as meteorology, computational physics and chemistry, and cryptography. Nevertheless, fundamental research in numerical techniques for these platforms remains a major topic of investigation in numerical PDE and numerical linear algebra.

The nature of parallel platforms has evolved rapidly during the past 15 years. The Eighties saw a profusion of manufacturers (Intel, Denelcor, Alliant, Thinking Machines, Convex, Encore, Sequent, among others) with a corresponding proliferation of architectural features: hypercube, mesh, and ring topologies; shared and distributed memories; memory hierarchies of different types; butterfly switches; and global buses. Compilers and runtime support tools were machine specific and idiosyncratic. Argonne's Advanced Computing Research Facility kept a zoo of these machines during the late 1980s, allowing free access to many researchers in the United States and giving many of us our first taste of this brave new world.

By the early 1990s, the situation had started to change and stabilize. Most of the vendors went out of business, and their machines gradually were turned off – one processor at a time, in some cases. Architectures gravitated toward two easily understood paradigms that prevail today. One paradigm is the shared-memory model typified by the SGI Origin series and by computers manufactured by Sun and Hewlett-Packard. The other paradigm, typified by the IBM SP series, can be viewed roughly as a uniform collection of processors, each with its own memory and all able to pass messages to one another at a rate roughly independent of the locations of the two processors involved. On the software side, the advent of software tools such as p4, MPI, and PVM allowed users to write code that could be compiled and executed without alteration on the machines of different manufacturers, as well as on networks of workstations.

The optimization community was quick to take advantage of parallel computers. In designing optimization algorithms for these machines, it was best in some cases to exploit parallelism at a lower level (that is, at the level of the linear algebra or the function/derivative evaluations) and leave the control flow of the optimization algorithm essentially unchanged. Other cases required a complete rethinking of the algorithms, to allow simultaneous exploration of different regions of the solution space, different parts of the branch-and-bound tree, or different candidates for the next iterate. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization, as a brief Web search can attest.

As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, "cluster" platforms have become popular in many settings. Though the software infrastructure has yet to mature, clusters have made supercomputing inexpensive and accessible to an even wider audience.

A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times [1]. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. (A particular type of grid, one made up of unused computer cycles of workstations on a number of campuses, has the additional advantage of costing essentially nothing.) Just as obviously, however, the complexity of grids makes them very difficult to use. The software infrastructure and the applications programs that run on them must be able to handle the following features:

• heterogeneity of the various processors in the grid;

• the dynamic nature of the platform (the pool of processors available to the user may grow and shrink during the computation);

• the possibility that a processor performing part of the computation may disappear without warning;

• latency (that is, time for communications between the processors) that is highly variable but often slow.

In many applications, however, the potential power and/or low cost of computational grids make the effort of meeting these challenges worthwhile. The Condor team, headed by Miron Livny at the University of Wisconsin, was among the pioneers in providing infrastructure for grid computations. The Condor system can be used to assemble a parallel platform from workstations, PC clusters, and multiprocessors and can be configured to use only "free" cycles on these machines, sharing them with their respective owners and other users. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments.

In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. A coordinated effort on both the optimization and the computer science sides was essential. The existing Condor and Globus tools were inadequate for direct use as a base for programming optimization algorithms, whose control structures are inevitably more complex than those required for task-farming applications. Many existing parallel algorithms for optimization were "not parallel enough" to exploit the full power of typical grid platforms. Moreover, they were often "not asynchronous enough," in that they required too much communication between tasks to execute efficiently on platforms with the heterogeneity and communications latency properties of our target platforms. A further challenge was that, in contrast to other grid applications, the computational resources required to solve an optimization problem often cannot be predicted with much confidence, making it difficult to assemble and utilize these resources effectively.

Solving Optimization Problems on Computational Grids
Stephen J. Wright* (November 21, 2000)

* Mathematics and Computer Science Division, Argonne National Laboratory, and Computer Science Department, University of Chicago.

This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to development of the runtime support library MW, for implementing algorithms with master-worker control structure on Condor platforms. This work is discussed below, along with our work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Other metaneos work, not discussed below, includes work in global optimization, integer nonlinear programming, and verification of solution quality for stochastic programming.

2 Condor, Globus, and the MW Framework

The Condor system [2, 3] had its origins at the University of Wisconsin in the 1980s. It focuses on collections of computing resources, known as Condor pools, that are distributively owned. To understand the implications of "distributed ownership," consider a typical machine in a pool: a workstation on the desk of a researcher. The Condor system provides a means by which other users (not known to the machine's owner) can exploit some of the unused cycles on the machine, which otherwise would sit idle most of the time. The owner maintains control over the access rights of Condor to his machine, specifying the hours in which Condor is allowed to schedule processes on the machine and the conditions under which Condor must terminate any process it is running when the owner starts a process of his own. Whenever Condor needs to terminate a process under these conditions, it migrates the process to another machine in the pool, guaranteeing eventual completion.

When a user submits a process, the Condor system finds a machine in the pool that matches the software and hardware requirements of the user. Condor executes the user's process on this machine, trapping any system calls made by the process (such as input/output operations) and referring them back to the submitting machine. In this way, Condor preserves much of the submitting machine's environment on the execution machine. Users can submit a large number of processes to the pool at once. Since each such process maintains contact with the submitting machine, this feature of Condor opens up the possibility of parallel processing. Condor provides an opportunistic environment, one that can make use of whatever resources currently are available in its pool. This set of resources grows and shrinks dynamically during execution of the user's job, and his algorithm should be able to exploit this situation.

The Globus Toolkit [4] is a set of components that can be used to develop applications or programming tools for computational grids. Currently, the Toolkit contains tools for resource allocation management and resource discovery across a grid, security and authentication, data movement, message-passing communication, and monitoring of grid components. The main use of Globus within the metaneos project has been at a level below Condor. By a Globus mechanism known as glide-in, a user can add machines at a remote location into the Condor pool on a temporary basis, making them accessible only to his own processes. In this way, a user can marshal a large and powerful set of resources over multiple sites, some or all of them dedicated exclusively to his job.

MW is a software framework that facilitates implementation of algorithms of master-worker type on computational grids. It was developed as part of the metaneos project by Condor team members Mike Yoder and Sanjeev Kulkarni in collaboration with optimization specialists Jeff Linderoth and Jean-Pierre Goux [5, 6]. MW takes the form of a set of C++ abstract classes, which the user implements to perform the particular operations associated with his algorithm and problem class. There are just ten virtual functions, grouped into the following three fundamental base classes:

• MWDriver contains four functions that obtain initial user information, set up the initial set of tasks, pack the data required to initialize each worker processor as it becomes available, and act on the results that are returned to the master when a task is completed.

• MWWorker contains two functions, to unpack the initialization data for the worker and to execute a task sent by the master.

• MWTask contains four functions to pack and unpack the data defining a single task and to pack and unpack the results associated with that task.

MW also contains functions that monitor performance of the grid and gather various statistics about the run.
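As a concrete (if much simplified) illustration of this division of labor, the following Python sketch mimics the structure of the base classes; MW itself is C++, and the class and method names here are only loose analogues of the interface described above:

```python
# Toy Python analogue of MW's master-worker interface (MW itself is C++).
# A "task" is squaring an integer; the master accumulates the results.

class Driver:
    """Plays the role of MWDriver: holds the task list, acts on results."""
    def __init__(self, data):
        self.tasks = list(data)   # initial set of tasks
        self.total = 0            # accumulated results

    def act_on_completed(self, result):
        self.total += result

class Worker:
    """Plays the role of MWWorker: executes one task at a time."""
    def execute_task(self, task):
        return task * task

def run(driver, workers):
    """Minimal scheduling loop: hand out tasks until none remain."""
    while driver.tasks:
        for w in workers:
            if not driver.tasks:
                break
            task = driver.tasks.pop(0)
            driver.act_on_completed(w.execute_task(task))
    return driver.total

print(run(Driver([1, 2, 3, 4]), [Worker(), Worker()]))  # prints 30
```

In the real framework the packing and unpacking functions of MWTask carry the task data over the network, and the loop above is driven by the grid's resource manager rather than by an in-process `for` loop.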

Internally, MW works by managing a list of workers and a list of tasks. The resource management mechanisms of the underlying grid are used to obtain new workers for the list and provide information about each worker. The information can be used to order the worker list so that the most suitable workers (e.g., the fastest machines) are at the head of the list and hence are the first to receive tasks. Similarly, the task list can be ordered by a user-defined key to ensure that the most important tasks are performed first. Scheduling of tasks to workers then becomes simple: The first task on the list is assigned to the first available worker. New tasks are added to the list by the master process in response to results received from completion of an earlier task.

MW is currently implemented on two slightly different grid platforms. The first uses Condor's version of the PVM (parallel virtual machine) protocol, while the second uses the remote I/O features of Condor to allow master and workers to communicate via series of shared files. In addition, MW provides a "bottom-level" interface that allows it to be implemented in other grid computing toolkits.

3 Integer Programming

Consider the linear mixed integer programming problem

min c^T x subject to Ax ≤ b, l ≤ x ≤ u, x_i ∈ Z for all i ∈ I,

where x is a vector of length n, Z represents the integers, and I ⊆ {1, 2, ..., n}. Parallel algorithms and frameworks for this problem have been investigated by a number of authors in recent times. The approaches described in [7, 8, 9, 10] implement enhancements of the branch-and-bound procedure, in which the work of exploring the branch-and-bound tree is distributed among a fixed number of processors. These approaches are differentiated by their use of virtual-shared-memory vs. message-passing models, their load balancing procedures, their choice of branching rules, and their use of cuts.

By contrast, the FATCOP code of Chen, Ferris, and Linderoth [11] uses the MW framework to greedily use whatever computational resources become available from the Condor pool. The algorithm implemented in FATCOP (the name is a loose acronym for "fault-tolerant Condor PVM") is an enhanced branch-and-bound procedure that utilizes (globally valid) cuts, pseudocosts for branching, preprocessing at nodes within the branch-and-bound tree, and heuristics to identify integer feasible solutions rapidly.

In FATCOP's master-worker algorithm, each task consists of exploration of a subtree of the branch-and-bound tree, not just evaluation of a single node of the tree. Given a root node for the subtree, and other information such as the global cut and pseudocost pools, the task executes for a given amount of time, making its own branching decisions and accumulating its own collection of cuts and pseudocosts. It may also perform a "diving" heuristic from its root node to seek a new integer feasible solution. When the task is complete, it returns to the master a stack representing the unexplored portions of its subtree. (Depth-first search is used to limit the size of this stack.) The task also sends back any new cut and pseudocost information it generated, which is added to the master's global cut and pseudocost pools.

By processing a subtree rather than a single node, FATCOP increases the granularity of the task and improves utilization of the computational power of each worker. The time for which a processor is sitting idle while waiting for the task information to be sent to and from the master, and for the master to process its results and assign it a new task, is generally small relative to the computation time.
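A toy version of such a subtree task, in Python, shows the essential contract: explore depth-first under a node budget, prune with a bound, and hand the unexplored frontier back to the master. The problem below (picking a subset of numbers with maximal sum) is chosen only so that the bound is trivial to state; FATCOP's actual LP-based machinery is far more elaborate:

```python
# Sketch of a FATCOP-style task: explore a subtree depth-first for a bounded
# amount of work, then return the unexplored frontier to the master.
# Toy problem: choose a subset of `values` maximizing the total, so the
# optimum is simply the sum of the positive entries (3 + 4 + 5 = 12).

values = [3, -1, 4, -2, 5]

def bound(total, depth):
    # optimistic bound: take every remaining positive value
    return total + sum(v for v in values[depth:] if v > 0)

def run_task(root, incumbent, budget):
    """Explore up to `budget` nodes; return (best found, unexplored stack)."""
    stack, best = [root], incumbent
    for _ in range(budget):
        if not stack:
            break
        total, depth = stack.pop()
        if bound(total, depth) <= best:
            continue                      # prune this subtree
        if depth == len(values):
            best = max(best, total)       # reached a leaf; update incumbent
            continue
        stack.append((total, depth + 1))                   # skip item `depth`
        stack.append((total + values[depth], depth + 1))   # take item `depth`
    return best, stack

# Master loop: keep handing returned frontiers back out as new tasks.
best, pool = float("-inf"), [(0, 0)]
while pool:
    best, leftover = run_task(pool.pop(), best, budget=4)
    pool.extend(leftover)
print(best)  # prints 12
```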

The master process is responsible for maintaining the task pool, as well as the pools of cuts and pseudocosts. It recognizes new workers as they join the computation pool, and sends them copies of the problem data together with the current cut and pseudocost pools. Moreover, it sends tasks to these workers and processes the results of these tasks by updating its pools and possibly its incumbent solution. Another important function of the master is to detect when a machine has disappeared from the worker pool.

Table 1: Performance of FATCOP on gesa2_o

            P        Nodes      Time
minimum   43.2    6,207,993    6,951
average   62.5    8,214,031   10,074
maximum   86.3    9,693,518   13,198

In this case, the task that was occupying that machine is lost, and the master must assign it to another worker. (This is the "fault tolerant" feature that makes FATCOP fat!) On long computations, the master process "checkpoints" by writing out the current state of computation to disk. By doing so, it can restart the computation at the latest checkpoint after a crash of the master processor.
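The checkpointing idea itself is simple, as the following Python sketch suggests; the contents of the state dictionary and the file name are invented for the illustration:

```python
# Sketch of master-side checkpointing: periodically write the search state to
# disk so a crashed run can resume from the latest checkpoint. The fields in
# `state` and the file name are illustrative, not FATCOP's actual format.
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"incumbent": 41, "open_nodes": [(0, 0), (1, 3)], "cuts": ["cut1"]}
path = os.path.join(tempfile.mkdtemp(), "master.ckpt")
save_checkpoint(path, state)
assert load_checkpoint(path) == state   # a restart would resume from here
```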

To illustrate FATCOP's performance, we consider the solution of the problem gesa2_o from the MIPLIB test set. This problem arises in an electricity generation application in the Balearic Islands. FATCOP was run ten times on the Condor pool at the University of Wisconsin. Because of the dynamic computational environment – the size and composition of the pool of available workers and communication times on the network varied between runs and during each run – the search pattern followed by FATCOP was quite different in each instance, and different from what one would obtain from a serial implementation of the same approach. However, using the statistical features of MW, we can find the average number of workers used during each run, defined as

P = (Σ_k k · t_k) / T,

where t_k is the total time during which the run had control of k processors, and T is the wall clock time for the run. The minimum, maximum, and mean values of P over the ten runs are shown in Table 1. This table also shows statistics for the number of nodes evaluated by FATCOP, and the wall clock times.
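The computation of P from these statistics is elementary; in Python, with invented timing data:

```python
# Average number of workers P = (sum_k k * t_k) / T, from toy usage data.
t = {1: 50.0, 3: 50.0}   # k -> seconds during which the run held k workers
T = sum(t.values())      # wall clock time, here 100 s
P = sum(k * tk for k, tk in t.items()) / T
print(P)  # prints 2.0
```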

Figure 1: Number of Workers during FATCOP on gesa2_o

Figure 1 profiles the size of the worker pool during a particular run. Note the sharp dip, which occurred when a set of machines participated in a daily backup procedure, and the gradual buildup during the end of the run, which occurred in the late afternoon when more machines became available as their owners went home.

For a detailed performance analysis of FATCOP, see [11].

A separate but related activity involved solving to optimality the well-known seymour problem from the MIPLIB library of integer programs. This integer-programming problem, posed by Paul Seymour, arises in a new proof [12] of the famous Four-Color Theorem, which states that any map can be colored using four colors in such a way that regions sharing a boundary segment receive different colors. The seymour problem is to find the smallest set of configurations such that the Four-Color Theorem is true if none of these configurations can exist in a minimal counterexample. Although Seymour claimed to have found a solution with objective value 423, nobody (including Seymour himself) had been able either to reproduce this solution or to prove a strong lower bound on the optimal value. There was some skepticism in the integer programming community as to whether this was indeed the optimal value.

In July 2000, a team of metaneos researchers found a solution with value 423 and proved its optimality. Gabor Pataki, Stefan Schmieta, and Sebastian Ceria at Columbia used preprocessing, disjunctive cuts, and branch-and-bound to break down the problem into a list of 256 integer programs. Michael Ferris at Wisconsin and Jeff Linderoth at Argonne joined the Columbia group in working through this list. The problems were solved separately with the help of the Condor system, using the integer programming packages CPLEX and XPRESS-MP. About 9,000 hours of CPU time was needed, the vast majority of it spent in proving optimality.

4 Quadratic Assignment Problem

The quadratic assignment problem (QAP) is a problem in location theory that has proved to be among the most difficult combinatorial optimization problems to solve in practice. Given n x n matrices A, B, and C, where A_{i,j} represents the flow between facilities i and j, B_{i,j} is the distance between locations i and j, and C_{i,j} is the fixed cost of assigning facility i to location j, the problem is to find the permutation {π(1), π(2), ..., π(n)} of the index set {1, 2, ..., n} that minimizes the following objective:

Σ_{i=1}^n Σ_{j=1}^n A_{i,j} B_{π(i),π(j)} + Σ_{i=1}^n C_{i,π(i)}.

An alternative matrix-form representation is as follows:

QAP(A, B, C): min tr((AXB + C)X^T), s.t. X ∈ Π,

where tr(·) represents the trace and Π is the set of n x n permutation matrices.

The practical difficulty of solving instances of the QAP to optimality grows rapidly with n. As recently as 1998, only the second-largest problem (n = 25) from the standard "Nugent" benchmark set [13] had been solved, and this effort required a powerful parallel platform [14]. In June 2000, a team consisting of Kurt Anstreicher and Nate Brixius (University of Iowa) and metaneos investigators Jean-Pierre Goux and Jeff Linderoth solved the largest of the Nugent problems – the n = 30 instance known as nug30 – verifying that a solution obtained earlier from a heuristic was optimal [15]. They devised a branch-and-bound algorithm based on a convex quadratic programming





relaxation of QAP, implemented it using MW, and ran it on a Condor-based computational grid spanning eight institutions. The computation used over 1,000 worker processors at its peak, and ran for a week. It was solving linear assignment problems (the core computational operation in the algorithm) at the rate of nearly a million per second during this period.

In the remainder of this section, we outline the various theoretical, heuristic, and computational ingredients that combined to make this achievement possible.

The convex quadratic programming (QP) relaxation of QAP proposed by Anstreicher and Brixius [16] yields a lower bound on the optimal objective that is tighter than alternative bounds based on projected eigenvalues or linear assignment problems. Just as important, an approximate solution to the QP relaxation can be found at reasonable cost by applying the Frank-Wolfe method for quadratic programming. Each iteration of this method requires only the solution of a dense linear assignment problem – an inexpensive operation. Hence, the Frank-Wolfe method is preferable in this context to more sophisticated quadratic programming algorithms; its slow asymptotic convergence properties are not important because only an approximate solution is required.
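The following Python sketch shows the Frank-Wolfe iteration on a deliberately simple stand-in problem (minimizing a quadratic over the probability simplex, where the linear subproblem just picks the vertex with the smallest gradient component); the QP relaxation used for QAP has the same structure, with a dense linear assignment problem in place of the vertex search:

```python
# Frank-Wolfe sketch on a toy problem: minimize f(x) = ||x - c||^2 over the
# probability simplex. The linear subproblem (the analogue of the dense
# linear assignment step in the QAP bound) reduces here to picking the
# simplex vertex with the smallest gradient component.

def frank_wolfe(c, iters=200):
    n = len(c)
    x = [1.0 / n] * n                                  # start at the center
    for k in range(iters):
        g = [2.0 * (x[i] - c[i]) for i in range(n)]    # gradient of f at x
        j = min(range(n), key=g.__getitem__)           # linear minimizer e_j
        step = 2.0 / (k + 2)                           # standard FW step size
        x = [(1.0 - step) * xi for xi in x]            # move toward e_j
        x[j] += step
    return x

c = [0.2, 0.5, 0.3]          # c lies in the simplex, so the minimizer is c
x = frank_wolfe(c)
```

As the article notes, the slow asymptotic convergence visible in such a run is harmless here, since only an approximate solution of the relaxation is needed.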

The QAP is solved by embedding the QP relaxation scheme in a branch-and-bound strategy. At each node of the branch-and-bound tree, some subset of the facilities is assigned to certain locations – in the nodes at level k of the tree, exactly k such assignments have been made. At a level-k node, a reduced QAP can be formulated in which the unassigned part of the permutation (which has n - k components) is the unknown. The QP relaxation can then be used on this reduced QAP to find an approximate lower bound on its solution, and therefore on the cost of all possible permutations that include the k assignments already made at this node. If the bound is greater than the cost of the best permutation found to date (the incumbent), the subtree rooted at this node can be discarded. Otherwise, we need to decide whether and how to branch from this node.

Branching is performed by choosing a facility and assigning it to each location in turn (row branching) or by choosing a location and assigning each of the remaining facilities to it in turn (column branching). However, it is not always necessary to examine all possible n - k children of a level-k node; some of them can be eliminated immediately by using information from the dual of the QP relaxation. In fact, one rule for deciding the next node from which to branch at level k of the tree is to choose the node that yields the fewest children. A more expensive branching rule, using a strong branching technique, is employed near the root of the tree (k smaller). Here, the consequence of fixing each one of a collection of promising facilities (or locations) is evaluated by provisionally making the assignment in question and solving the corresponding QP relaxation. Estimates of lower bounds are then obtained for the grandchildren of the current node, and these are summed. The node for which this summation is largest is chosen as the branching facility (location).

The branching rule and the parameters that govern the execution of the branching rule are chosen according to the level in the tree and also the gap, which measures the closeness of the lower bound at the current node to the incumbent objective. When the gap is large at a particular node, it is likely that exploration of the subtree rooted at this node will be a costly process. Use of a more elaborate (and expensive) branching rule tends to ensure that exploration of unprofitable parts of the subtree is avoided, thereby reducing overall run time.

Parallel implementation of the branch-and-bound technique uses an approach not unlike the FATCOP code for integer programming. Each worker is assigned the root node of a subtree to explore, in a depth-first fashion, for a given amount of time. When its time expires, it returns unexplored nodes from its subtree to the master, together with any new incumbent information. The pool of tasks on the master is ordered by the gap, so that nodes with smaller gaps (corresponding to subtrees that should be less difficult to explore) are assigned first. To reduce the number of easy tasks returned to the master, a "finish-up" heuristic permits a worker extra time to explore its subtree if its gap becomes small.

Exploitation of the symmetries that are present in many large QAPs is another important factor in making solution of nug30 and other large problems a practical proposition. Such symmetries arise when the distance matrix is derived from a rectangular grid. Symmetries can be used, for instance, to decrease the number of child nodes that need to be formed (to considerably fewer than n - k children at a level-k node).

Prediction of the performance profile of a run is also important in tuning algorithmic parameters and in estimating the amount of computational resources needed to tackle the problem. An estimation procedure due to Knuth was enhanced to allow prediction of the number of nodes that need to be evaluated at each level of the branch-and-bound tree, for a specific problem and specific choices of the algorithmic parameters.
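The unenhanced form of Knuth's estimator is easy to state: probe one random root-to-leaf path and accumulate the product of the branching factors seen along the way; the expected value of the resulting count is the true number of tree nodes. The Python sketch below runs the probe on a uniform tree, where a single probe is already exact; on real branch-and-bound trees one averages many probes:

```python
# Sketch of Knuth's random-probe estimator for search-tree size: walk one
# random path from the root, multiplying by the number of children at each
# level. Here the tree is uniform (every node has `branching` children), so
# every probe returns the exact node count.
import random

def knuth_estimate(branching, depth):
    """One probe on a uniform tree with `branching` children per node."""
    estimate, weight = 1, 1      # count the root; weight = 1/prob of the path
    for _ in range(depth):
        k = branching            # children observed at the current node
        weight *= k
        estimate += weight       # estimated number of nodes at this level
        random.randrange(k)      # pick a child (irrelevant on a uniform tree)
    return estimate

# A uniform binary tree of depth 3 has 1 + 2 + 4 + 8 = 15 nodes.
print(knuth_estimate(2, 3))  # prints 15
```

The enhancement mentioned above tracks the per-level terms separately, which is what allows level-by-level node-count predictions.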

Figure 2 shows the number of workers used during the course of the nug30 run in early June 2000. As can be seen from this graph, the run was halted five times – twice because of failures in the resource management software and three times for system maintenance – and restarted each time from the latest master checkpoint.

In the weeks since the nug30 computation, the team has solved three more benchmark problems of size n = 30 and n = 32, using even larger computational grids. Several outstanding problems of size n = 36 derived from a backboard wiring application continue to stand as a challenge to this group and to the wider combinatorial optimization community.

Figure 2: Number of Workers during nug30 Computation

5 Stochastic Programming

The two-stage stochastic linear programming problem with recourse can be formulated as follows:

min_x Q(x) := c^T x + Σ_{i=1}^N p_i Q_i(x) subject to Ax = b, x ≥ 0,

where

Q_i(x) := min_{y_i} q_i^T y_i s.t. W y_i = h_i - T_i x, y_i ≥ 0.

The uncertainty in this formulation is modeled by the data triplets (h_i, T_i, q_i), i = 1, 2, ..., N, each of which represents a possible scenario for the uncertain data (h, T, q). The quantities p_i, i = 1, 2, ..., N, are nonnegative and sum to 1; p_i is the probability that scenario i is the one that actually happens.

The two-stage problem with recourse represents a situation in which one set of decisions (represented by the first-stage variables x) must be made at the present time, while a second set of decisions (represented by y_i, i = 1, 2, ..., N) can be deferred to a later time, when the uncertainty has been resolved and the true second-stage scenario is known. The objective function represents the expected cost of x, integrated over the probability distribution for the uncertain part of the model.

In many practical problems, the number of possible scenarios N either is infinite (that is, the probability distribution is continuous) or is finite but much too large to allow practical solution of the full problem. In these instances, sampling is often used to obtain an approximate problem with fewer scenarios.

Decomposition algorithms are well suited to grid platforms, because they allow the computations associated with the N second-stage scenarios to be performed independently and require only modest amounts of data movement between processors. These algorithms view Q as a piecewise linear, convex function of the variables x, whose subgradient is given by the formula

∂Q(x) = Σ_{i=1}^N p_i ∂Q_i(x).

Evaluation of each function Q_i, i = 1, 2, ..., N, requires solution of the linear program in y_i given above. One element of the subgradient ∂Q_i(x) of this function can be calculated from the dual solution of this linear program.

In the metaneos project, Linderoth and Wright [17] have implemented a decomposition algorithm based on techniques from nonsmooth optimization and including various enhancements to lessen the need for the master and workers to synchronize their efforts. In the remainder of this section, we outline in turn the trust-region algorithm ATR and its convergence properties, implementation of this algorithm on the Condor grid platform using MW, and our "asynchronous" variant.

The ATR algorithm progressively builds up a piecewise-linear model function M(x) satisfying M(x) ≤ Q(x). At the kth iteration, a candidate iterate z is obtained by solving the following subproblem:

min_z M(z) subject to Az = b, z ≥ 0, ||z - x^k||_∞ ≤ Δ,

where the last constraint represents a trust region with radius Δ > 0. The candidate z becomes the new iterate x^{k+1} if the decrease in objective Q(x^k) - Q(z) is a significant fraction of the decrease Q(x^k) - M(z) promised by the model function. Otherwise, no step is taken. In either case, the trust-region radius Δ may be adjusted, function and subgradient information about Q at z is used to enhance the model M, and the subproblem is solved again. The algorithm uses a "multicut" variant in which subgradients for partial sums of the terms p_i Q_i are included in the model separately, allowing a more accurate model to be constructed in fewer iterations.

In the MW implementation of the ATR algorithm, the function and subgradient information defining M is accumulated at the master processor, and the subproblem is solved on this processor. (Since M is always piecewise linear and the trust region is defined by an ∞-norm, the subproblem can be formulated as a linear program.) Most of the computational work in the algorithm involves solution of the N second-stage linear programs in y_i, from which we obtain Q_i(z) and ∂Q_i(z). This work is distributed among T tasks, to be executed in parallel, where each task requires solution of a "chunk" of N/T second-stage linear programs.
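The chunked evaluation of Q can be sketched as follows in Python, with a made-up closed-form recourse value standing in for the solution of each scenario's second-stage linear program:

```python
# Sketch of chunked second-stage evaluation: Q(z) = sum_i p_i * Q_i(z) is
# split into T chunks of scenarios that could be farmed out to workers.
# The toy recourse value Q_i below is a stand-in for the optimal value of
# scenario i's second-stage linear program.

def Q_i(z, i):
    return (z - i) ** 2                      # invented closed-form stand-in

def evaluate_chunk(z, scenarios, probs):
    return sum(probs[i] * Q_i(z, i) for i in scenarios)

def evaluate_Q(z, N, T):
    probs = [1.0 / N] * N                          # equally likely scenarios
    chunks = [range(t, N, T) for t in range(T)]    # T chunks of ~N/T each
    return sum(evaluate_chunk(z, c, probs) for c in chunks)

# Chunked and unchunked evaluations agree:
N = 100
full = sum((1.0 / N) * Q_i(2.0, i) for i in range(N))
assert abs(evaluate_Q(2.0, N, T=10) - full) < 1e-9
```

In the real implementation each call to `evaluate_chunk` is a task executed on a remote worker, which is exactly where the synchronization problem discussed next arises.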

The use of chunking allows problems with very large N to make efficient use of a fairly large number of processors. However, the approach still requires evaluation of all the chunks for Q(z) to be completed before deciding whether to accept or reject z as the next iterate. It is possible that one chunk will be processed much more slowly than the others; its computation may have been interrupted by the workstation's owner reclaiming the machine, for instance. All the other workers in the pool will be left idle while waiting for evaluation of this chunk to complete.

The ATR method maintains not just a single candidate for the next iterate but rather a basket ß containing 5 to 20 possible candidates. At any given time, the workers are evaluating chunks of second-stage problems associated with one or other of these basket points. ATR also maintains an "incumbent" x^I, which is the current best estimate of the solution and is a point for which Q(x^I) is known. When all the chunks for one of the basket points z have been evaluated, Q(z) is compared with the incumbent objective Q(x^I) and with the decrease predicted by the model function M at the time z was generated. As a result, either z becomes the new incumbent and x^I is discarded, or x^I remains the incumbent and z is discarded. In either case, a vacancy is created in the basket ß. To fill the vacancy, a new candidate iterate is generated by solving a subproblem with the trust-region constraint centered on the incumbent, that is,

min_z M(z) subject to Az = b, z ≥ 0, ||z - x^I||_∞ ≤ Δ.

We show results obtained for sampled instances of problems from the stochastic programming literature, using the MW implementation of ATR running on a Condor pool. The SSN problem described in [18] arises in design of a network for private-line telecommunications services. In this model, each of 86 parameters representing demand can independently take on 3 to 7 values, giving a total of approximately N = 10^70 scenarios. Sampling is used to obtain problems with more modest values of N, which are then solved with ATR.

Table 2: SSN, with N = 10,000

Results for an instance of SSN with N = 10,000 are shown in Table 2. When written out as a linear program in the unknowns (x, y_1, y_2, ..., y_N), this problem has approximately 1,750,000 rows and 7,060,000 columns. Table 2 compares three algorithms. The first is an L-shaped method (see [19]), which obtains its iterates from a model function M but does not use a trust region or check sufficient decrease conditions. (The implementation described here is modified to improve parallelism, in that it does not wait for all the chunks for the current point to be evaluated before calculating a new iterate.) The second entry in Table 2 is for the synchronous trust-region approach (which is equivalent to ATR with a basket size of 1), and the third entry is for ATR with a basket size of 10. In all cases, the second-stage evaluations were divided into 10 chunks, and 50 partial subgradients were added to M at each iteration. The table shows the average number of processors used during the run, the proportion of time for which these processors were kept busy, and the wall clock time required to find the solution.
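One rough way to read Table 2 is to convert each run into "busy processor-minutes" (average processors × efficiency × wall clock minutes). The figures below are derived from the table; the arithmetic and the reading are ours:

```python
# Busy processor-minutes implied by Table 2: procs * efficiency * minutes.
runs = {"L": (19, 0.46, 398), "ATR-1": (19, 0.35, 130), "ATR-10": (71, 0.57, 43)}
for name, (procs, eff, minutes) in runs.items():
    print(f"{name}: {procs * eff * minutes:.0f} busy processor-minutes")
```

By this rough measure, the asynchronous ATR-10 run not only finishes in about a tenth of the wall clock time of the L-shaped run but also consumes roughly half as much total busy processor time.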

The trust-region approaches were considerably faster than the L-shaped approach, indicating that the need for sound algorithms remains as keen as ever in a parallel environment; we cannot rely on raw computing power to do all the work. The benefits of asynchronicity can also be seen. When ATR has a basket size of 10, it is able to use a larger number of processors and takes less time to complete, even though the number of iterates increases significantly over the synchronous trust-region approach.

The real interest lies, however, not in solving single sampled instances of SSN, but in obtaining high-quality solutions to the underlying problem (the one with 10^70 scenarios). ATR gives a valuable tool that can be used in conjunction with variance reduction and verification techniques to yield such solutions.

Table 3: storm, with N = 250,000


∂Q(x) = c + Σ_{i=1}^{N} p_i ∂Q_i(x).

min_z M(z)   subject to   Az = b,  z ≥ 0,  ‖z − x^k‖_∞ ≤ Δ;

with the trust-region constraint centered on the incumbent, the last condition becomes ‖z′ − x^I‖_∞ ≤ Δ.

Table 2: SSN, with N = 10,000

Run      Iter.   Procs.   Eff.   Time (min.)
L         255      19     .46      398
ATR-1      47      19     .35      130
ATR-10    164      71     .57       43


Table 3: storm, with N = 250,000

Run      Iter.   Procs.   Eff.   Time (min.)
ATR-1      25      67     .57      211
ATR-5      57      86     .96      229


Finally, we show results from the "storm" problem, which arises from a cargo flight scheduling application [20]. The ATR implementation was used to solve a sampled instance with N = 250,000, for which the full linear program has approximately 132,000,000 rows and 315,000,000 columns. The results in Table 3 show that this huge linear program with nontrivial structure can be solved in less than 4 hours on a computational platform that costs essentially nothing. Because the second-stage work can be divided into a much larger number of chunks than for SSN (125 chunks, rather than 10), the synchronous trust-region algorithm is able to make fairly effective use of an average of 67 processors and requires less wall clock time than ATR with a basket size of 5.

Conclusions

Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. These results have several interesting implications. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of "practical" complexity and of the role of heuristics. In stochastic optimization, higher-quality solutions can be found, and improvements to sampling methodology can be investigated.

Work remains to be done in making the grid infrastructure robust enough for general use. The logistics of assembling a grid (issues of security and shared ownership) remain challenging. The vision of a computational grid that is as easy to tap into as the electric power grid remains far off, though metaneos gives a glimpse of the way in which optimizers could exploit such a system.

We have investigated just a few of the problem classes that could benefit from solution on computational grids. Global optimization problems of different types should be examined further. Data-intensive applications (from tomography and data mining) represent a potentially huge field of work, but these require a somewhat different approach from the compute-intensive applications we have considered to date.

We hope that optimizers of all flavors, along with grid computing experts and applications specialists, will join the quest. There's plenty of work for all!

Acknowledgements

The metaneos project was funded by National Science Foundation grant CDA-9726385 and was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.

For more information on the metaneos project, see <www.mcs.anl.gov/metaneos>. Information on Condor can be found at <www.cs.wisc.edu/condor> and on Globus at <www.globus.org>.

References

[1] I. Foster and C. Kesselman, editors. The Grid: Blueprint for a New Computing Infrastructure. Morgan-Kaufmann, 1998.

[2] J. Basney, M. Livny, and T. Tannenbaum. High throughput computing with Condor. HPCU News, 1(2), 1997.

[3] M. Livny, J. Basney, R. Raman, and T. Tannenbaum. Mechanisms for high throughput computing. SPEEDUP, 11(1), June 1997.

[4] I. Foster and C. Kesselman. The Globus project: A status report. In Proceedings of the Heterogeneous Computing Workshop, pages 4-18. IEEE Press, 1998.

[5] J.-P. Goux, J. T. Linderoth, and M. Yoder. Metacomputing and the master-worker paradigm. Preprint MCS/ANL-P792-0200, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill., February 2000.

[6] J.-P. Goux, S. Kulkarni, J. T. Linderoth, and M. Yoder. An enabling framework for master-worker applications for the computational grid. In Proceedings of the Ninth IEEE Symposium on High Performance Distributed Computing (HPDC9), pages 43-50, Pittsburgh, Pennsylvania, August 2000.

[7] J. Eckstein. Parallel branch-and-bound methods for mixed-integer programming on the CM-5. SIAM Journal on Optimization, 4(4):794-814, 1994.

[8] R. E. Bixby, W. Cook, A. Cox, and E. K. Lee. Computational experience with parallel mixed integer programming in a distributed environment. Annals of Operations Research, 90:19-43, 1999.

[9] J. T. Linderoth. Topics in Parallel Integer Optimization. Ph.D. thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, 1998.

[10] J. Eckstein, C. A. Phillips, and W. E. Hart. PICO: An object-oriented framework for parallel branch and bound. Research Report RRR-40-2000, RUTCOR, August 2000.

[11] Q. Chen, M. C. Ferris, and J. T. Linderoth. FATCOP 2.0: Advanced features in an opportunistic mixed integer programming solver. Data Mining Institute Technical Report 99-11, Computer Sciences Department, University of Wisconsin at Madison, 1999. To appear in Annals of Operations Research.

[12] N. Robertson, D. P. Sanders, P. D. Seymour, and R. Thomas. A new proof of the four color theorem. Electronic Research Announcements of the American Mathematical Society, 1996.

[13] C. E. Nugent, T. E. Vollman, and J. Ruml. An experimental comparison of techniques for the assignment of facilities to locations. Operations Research, 16:150-173, 1968.

[14] A. Marzetta and A. Brünger. A dynamic-programming bound for the quadratic assignment problem. In Computing and Combinatorics: 5th Annual International Conference, COCOON '99, volume 1627 of Lecture Notes in Computer Science, pages 339-348. Springer, 1999.

[15] K. Anstreicher, N. Brixius, J.-P. Goux, and J. T. Linderoth. Solving quadratic assignment problems on computational grids. Technical report, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill., October 2000.

[16] K. M. Anstreicher and N. Brixius. A new bound for the quadratic assignment problem based on convex quadratic programming. Technical report, Department of Management Sciences, The University of Iowa, 1999.

[17] J. T. Linderoth and S. J. Wright. Implementing decomposition algorithms for stochastic programming on a computational grid. Technical report, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill., 2000. In preparation.

[18] S. Sen, R. D. Doverspike, and S. Cosares. Network planning with random demand. Telecommunications Systems, 3:11-30, 1994.

[19] J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer Series in Operations Research. Springer, 1997.

[20] J. M. Mulvey and A. Ruszczynski. A new scenario decomposition method for large scale stochastic optimization. Operations Research, 43:477-490, 1995.


1st Cologne Twente Workshop on Graphs and Combinatorial Optimization
June 6-8, 2001, University of Cologne, Cologne, Germany.

URL: http://www.zaik.uni-koeln.de/AFS/conferences/CTW2001/CTW2001.html

IPCO VIII
June 13-15, 2001, Utrecht, The Netherlands.

URL: http://www.cs.uu.nl/events/ipco2001
email: [email protected]

INFORMS International 2001
June 17-20, 2001, Maui, Hawaii, USA.

URL: http://www.informs.org/Conf/Hawaii2001

SIAM Annual Meeting
July 9-13, 2001, San Diego, California, USA.

URL: http://www.siam.org/meetings/an01/

Symposium on OR 2001
September 3-5, 2001, Gerhard-Mercator-Universität, Duisburg, Germany.

URL: http://www.uni-duisburg.de/or2001/

INFORMS Annual Meeting
November 4-7, 2001, Miami Beach, Florida, USA.

URL: http://www.informs.org/Conf/Miami2001

9th International Conference on Stochastic Programming

Berlin, Germany, August 25-31, 2001 (held under the auspices of the Committee on Stochastic Programming of the Mathematical Programming Society)

Contact Information. Web site: http://www.mathematik.hu-berlin.de/SP01; E-mail: [email protected]; Mailing Address: SP01, c/o Prof. W. Roemisch, Institute of Mathematics, Humboldt-University Berlin, 10099 Berlin, Germany

Support. Humboldt-Universitaet Berlin, Gerhard-Mercator-Universitaet Duisburg, Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS), Konrad-Zuse-Zentrum fuer Informationstechnik Berlin (ZIB), Deutsche Forschungsgemeinschaft

Scope. Stochastic programming addresses the optimization of decision making under uncertainty. It is inspired by several branches of mathematics and by its practical applications. Studies in stochastic programming are directed to model building, theoretical analysis of models, design of algorithms, software development, and practical implementation. The 9th International Conference on Stochastic Programming will be a forum to discuss recent advances, the state of the art, and future developments of the field.

Topics of the conference. Modeling techniques, scenario generation, two-stage and chance-constrained models, dynamic stochastic optimization, integer and combinatorial optimization under uncertainty, risk management, stability analysis, estimation and simulation, approximation and bounding, decomposition methods, stochastic programming models in finance, and practical applications (e.g., in energy and logistics).

International Scientific Committee. J.R. Birge (USA), M.A.H. Dempster (Great Britain), J. Dupacova (Czech Republic), P. Kall (Switzerland), A.J. King (USA), A. Prekopa (Hungary/USA), G. Pflug (Austria), W. Roemisch (Germany), A. Ruszczynski (USA), R. Schultz (Germany), S. Sen (USA), A. Shapiro (USA), S.W. Wallace (Norway), R. J-B Wets (USA), W.T. Ziemba (Canada)

Invited Plenary Speakers. Hans Foellmer (Berlin), Suvrajeet Sen (Tucson), Alexander Shapiro (Atlanta), Stein W. Wallace (Trondheim), Roger J-B Wets (Davis)

Organizing Committee. K. Frauendorfer (St. Gallen), R. Henrion (WIAS Berlin), P. Kall (Zuerich), K. Marti (Muenchen), G. Pflug (Wien), W. Roemisch (HU Berlin), R. Schultz (Duisburg), M.C. Steinbach (ZIB Berlin), S. Vogel (Ilmenau)

Local Organizers. W. Roemisch, R. Schultz, N. Groewe-Kuska, R. Henrion, J. Kerger, H. Lange, A. Moeller, I. Nowak, M.C. Steinbach, I. Wegner

Location. Humboldt-University Berlin (Main Building), Unter den Linden 6, 10117 Berlin

Schedule. Tutorials: August 25-26, 2001; International Conference: August 27-31, 2001; Opening Session: August 27, 2001, 9:30 am

Fees. Conference fee: 400.- DM (before May 31, 2001), thereafter 450.- DM; 150.- DM (students); Tutorial fee: 100.- DM

Deadlines. February 15, 2001: online registration opens; April 15, 2001: abstract submission; April 30, 2001: notification of acceptance; May 31, 2001: early registration

Calendar

ANNOUNCEMENT


EURO 2001: The XVIII-th EuroConference on Operations Research

Why Visit EURO 2001: Meeting OR friends in a convenient and well-equipped conference center; participating in and contributing to high-quality theory and practice sessions; learning more about smart logistics and innovative operations during special conference sessions, company visits, and excursions to the largest port in the world; a reduced fee for a post-conference seminar on Quantitative Financial Risk Management in Amsterdam; enjoying social events in Europe's cultural capital of 2001.
Where: Erasmus University Rotterdam, The Netherlands.
When: July 9-11, 2001.
How to Register: The most convenient way to register is via our web site (www.euro2001.org). Alternatively, you may order a registration card via our e-mail address ([email protected]).

Fees: Early: Euro 300 for Non-Students, Euro 200 for Students; Late: Euro 350 for Non-Students, Euro 250 for Students.
Abstract Submission: Please refer to our web site for submission instructions, or ask for instructions on paper via our e-mail address ([email protected]).

Further Information: E-mail: [email protected]; web site: www.euro2001.org

Deadlines: The early registration deadline is May 1, 2001; the abstract submission deadline is March 1, 2001. Please note that submissions received after the deadline cannot be accepted.

Abstracts can be submitted via the web site (www.euro2001.org), which also contains a list of topics and submission instructions.
We hope to welcome you at EURO 2001!

www.euro2001.org

Ettore Majorana Centre for Scientific Culture
International School of Mathematics G. Stampacchia
Workshop on High Performance Algorithms and Software for Nonlinear Optimization

Erice, Italy, June 30 - July 8, 2001

Objectives: In the first year of a new century, the Workshop aims to assess the past and to discuss the future of Nonlinear Optimization. Recent achievements and promising research trends will be highlighted. Algorithmic and high performance software aspects will be emphasized, along with theoretical advances and new computational experiences. Contributions from different and complementary standpoints, covering several fields of numerical optimization and its applications, are expected.
Topics: Topics include, but are not limited to: constrained and unconstrained optimization, global optimization, derivative-free methods, interior point techniques for nonlinear programming, large scale optimization, linear and nonlinear complementarity problems, nonsmooth optimization, neural networks and optimization, applications of nonlinear optimization.
Lectures: The Workshop will consist of invited lectures (1 hour) and contributed lectures (1/2 hour). Invited lecturers who have confirmed their participation are: P. Boggs, A. R. Conn, J. Dennis, Y. G. Evtushenko, F. Facchinei, R. Fletcher, J. C. Gilbert, P. E. Gill, D. Goldfarb, S. Lucidi, J. J. Moré, S. G. Nash, E. J. Polak, E. Sachs, D. Shanno, Ph. L. Toint, V. J. Torczon, J. P. Vial, R. J. Vanderbei, M. H. Wright, S. Wright, J. Z. Zhang.
Proceedings: Edited proceedings including the invited lectures and a selection of contributed lectures will be published.

Further information can be found online (http://www.dis.uniroma1.it/~or/erice) or requested via e-mail ([email protected]).

Franco Giannessi, School Director
Gianni Di Pillo and Almerico Murli, Workshop Directors

International Congress of Mathematicians 2002
Beijing, China, August 20-28, 2002

The next International Congress of Mathematicians will take place in Beijing, People's Republic of China, August 20-28, 2002. For pre-registration forms and additional information, please visit the web site (http://www.icm2002.org.cn), or e-mail the organizers ([email protected]). Pre-registrants will automatically receive progress reports on the upcoming Congress as well as final registration materials.

The International Congress of Mathematicians takes place every four years and is supported by the International Mathematical Union (IMU). For information regarding the IMU, please visit its web site (http://mathunion.org/).

We invite research articles on algebraic and geometric methods in discrete optimization and related areas for a forthcoming issue of Mathematical Programming, Series B. The goal is to provide a common forum for non-traditional methods in the theory and practice of discrete optimization. Of special interest are methods from the geometry of numbers, topology, commutative algebra, algebraic geometry, probability theory, computer science, and combinatorics.

Deadline for submission of full papers: April 30, 2001.

We aim to complete a first review of all papers by September 30, 2001.

All submissions will be refereed according to the usual standards of Mathematical Programming. Information about this issue can be obtained from the guest editors for this volume (or at www.math.washington.edu/~thomas).

Karen Aardal, Department of Computer Science, Utrecht University, P.O. Box 80089, 3508 TB Utrecht, The Netherlands; e-mail: [email protected]
Rekha R. Thomas, Department of Mathematics, Box 354350, University of Washington, Seattle, WA 98195-4350, U.S.A.; e-mail: [email protected]

Call for Papers

MATHEMATICAL PROGRAMMING Series B

Issue on Algebraic and GeometricMethods in Discrete Optimization


A paint-by-numbers puzzle consists of an m × n grid of pixels (the canvas) together with m + n cluster-size sequences, one for each row and column. The goal is to paint the canvas with a picture that satisfies the following constraints:

1. Each pixel must be black or white.
2. If a row or column has cluster-size sequence s_1, s_2, ..., s_k, then it must contain k clusters of black pixels: the first with s_1 black pixels, the second with s_2 black pixels, and so on.

It should be noted that "first" means "leftmost" for rows and "topmost" for columns, and that rows and columns need not begin or end with black pixels.

Small paint-by-numbers puzzles are easy to solve. One useful strategy is to alternate between analyzing the row cluster-size sequences and the column cluster-size sequences. For the puzzle displayed in Figure 1, a first pass through the rows enables us to paint 15 pixels black and one pixel white, as displayed in Figure 2. (Since row 1's cluster-size sequence is 3, 6, its first three pixels must be black, its fourth pixel must be white, and its final six pixels must be black. Since rows 7 and 8 have cluster-size sequence 2, 5, their second clusters must occupy pixels 4 through 8, 5 through 9, or 6 through 10; in each case they occupy pixels 6, 7, and 8.)
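The row-by-row deduction just described can be reproduced mechanically by overlapping the leftmost and rightmost legal placements of each cluster. The sketch below is our own illustration of that "overlap" rule, not the article's code (and it reports only the forced black pixels, not forced white ones such as row 1's pixel 4):

```python
def forced_black(width, clusters):
    """Pixels (1-indexed) that are black in every legal placement of the
    given cluster sizes within a single line of the given width."""
    # slack = number of positions the whole packing can slide to the right
    slack = width - (sum(clusters) + len(clusters) - 1)
    forced, start = [], 1
    for s in clusters:
        # leftmost placement covers [start, start+s-1]; the rightmost
        # placement is shifted by `slack`, so their overlap is forced black
        lo, hi = start + slack, start + s - 1
        forced.extend(range(lo, hi + 1))
        start += s + 1
    return forced

print(forced_black(10, [3, 6]))  # row 1: [1, 2, 3, 5, 6, 7, 8, 9, 10]
print(forced_black(10, [2, 5]))  # rows 7 and 8: [6, 7, 8]
```

For rows 7 and 8 this recovers exactly the deduction in the text: the size-5 cluster must cover pixels 6, 7, and 8 in every legal placement.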

Four more passes through the rows and columns enable us to paint the remaining 46 pixels. The unique solution, a picture of a spaceman, is displayed in Figure 4. It should be noted that only well-designed paint-by-numbers puzzles have unique solutions.

An IP Formulation

In this section, we describe how integer programming can be used to solve paint-by-numbers puzzles. Our approach is to think of a paint-by-numbers puzzle as a problem comprised of two interlocking tiling problems: one involving the placement of the row clusters on the canvas, and the other involving the placement of the column clusters. Note that if a pixel is painted black, it must be covered by both a row cluster and a column cluster; if it is painted white, it must be left uncovered by the row clusters and column clusters.

Notation

Let

m = the number of rows,
n = the number of columns,
k^r_i = the number of clusters in row i,
k^c_j = the number of clusters in column j,
s^r_{i,1}, s^r_{i,2}, ..., s^r_{i,k^r_i} = the cluster-size sequence for row i,
s^c_{j,1}, s^c_{j,2}, ..., s^c_{j,k^c_j} = the cluster-size sequence for column j.

In addition, let

e^r_{i,t} = the smallest value of j such that row i's t-th cluster can be placed in row i with its leftmost pixel occupying pixel j,
l^r_{i,t} = the largest value of j such that row i's t-th cluster can be placed in row i with its leftmost pixel occupying pixel j,
e^c_{j,t} = the smallest value of i such that column j's t-th cluster can be placed in column j with its topmost pixel occupying pixel i,
l^c_{j,t} = the largest value of i such that column j's t-th cluster can be placed in column j with its topmost pixel occupying pixel i.

(The letters "e" and "l" stand for "earliest" and "latest.") In our example puzzle, row 7's first cluster must be placed so that its leftmost pixel occupies pixel 1, 2, or 3; row 7's second cluster must be placed so that its leftmost pixel occupies pixel 4, 5, or 6. In other words, e^r_{7,1} = 1, l^r_{7,1} = 3, e^r_{7,2} = 4, and l^r_{7,2} = 6.
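The quantities e and l for a line can be computed by packing the clusters as far left and as far right as possible. The sketch below is our own illustration (the function name and interface are assumptions, not the article's code):

```python
def earliest_latest(width, sizes):
    """Earliest and latest leftmost start pixels (1-indexed) for each cluster."""
    k = len(sizes)
    e, l = [0] * k, [0] * k
    pos = 1
    for t in range(k):            # earliest: pack clusters to the left
        e[t] = pos
        pos += sizes[t] + 1       # cluster plus one mandatory white pixel
    pos = width + 1
    for t in reversed(range(k)):  # latest: pack clusters to the right
        pos -= sizes[t]
        l[t] = pos
        pos -= 1
    return e, l

print(earliest_latest(10, [2, 5]))  # row 7: ([1, 4], [3, 6])
```

For row 7 of the example this reproduces e^r_{7,1} = 1, l^r_{7,1} = 3, e^r_{7,2} = 4, and l^r_{7,2} = 6.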

Variables

Our formulation uses three sets of variables. One set specifies which pixels are painted black. For each 1 ≤ i ≤ m and 1 ≤ j ≤ n, let

z_{i,j} = 1 if row i's j-th pixel is painted black, and z_{i,j} = 0 if row i's j-th pixel is painted white.

[Figure 1: the example puzzle's canvas and cluster-size sequences. Figure 2: the canvas after the first pass through the rows. Figure 3: the canvas after the subsequent pass through the columns. Figure 4: the unique solution, a picture of a spaceman.]

Painting by Numbers
Robert A. Bosch
September 28, 2000

mind sharpener: We invite OPTIMA readers to submit solutions to the problems to Robert Bosch ([email protected]). The most attractive solutions will be presented in a forthcoming issue.


The other two sets of variables are concerned with the placements of the row and column clusters. For each 1 ≤ i ≤ m, 1 ≤ t ≤ k^r_i, and e^r_{i,t} ≤ j ≤ l^r_{i,t}, let y_{i,t,j} = 1 if row i's t-th cluster is placed in row i with its leftmost pixel occupying pixel j, and y_{i,t,j} = 0 if not. For each 1 ≤ j ≤ n, 1 ≤ t ≤ k^c_j, and e^c_{j,t} ≤ i ≤ l^c_{j,t}, let x_{j,t,i} = 1 if column j's t-th cluster is placed in column j with its topmost pixel occupying pixel i, and x_{j,t,i} = 0 if not.

Cluster Constraints

To ensure that row i's t-th cluster appears in row i exactly once, we impose

Σ_{j = e^r_{i,t}}^{l^r_{i,t}} y_{i,t,j} = 1.

The sum is over all pixels j such that row i's t-th cluster can be placed in row i with its leftmost pixel occupying pixel j. For the two clusters of row 7 of our example puzzle, we impose

y_{7,1,1} + y_{7,1,2} + y_{7,1,3} = 1   and   y_{7,2,4} + y_{7,2,5} + y_{7,2,6} = 1.

To guarantee that row i's (t+1)-th cluster is placed to the right of its t-th cluster, we impose

y_{i,t,j} ≤ Σ_{j′ = j + s^r_{i,t} + 1}^{l^r_{i,t+1}} y_{i,t+1,j′}   for each e^r_{i,t} ≤ j ≤ l^r_{i,t}.

For row 7 of our example puzzle, we impose

y_{7,1,2} ≤ y_{7,2,5} + y_{7,2,6}   and   y_{7,1,3} ≤ y_{7,2,6}.

Similar constraints (involving the x_{j,t,i}'s) are needed for the column clusters.
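For a single row, the ordering constraints above can be generated mechanically. The sketch below is our own illustration (variable naming and the string output format are assumptions); it emits the non-vacuous ordering constraints for row 7 of the example:

```python
def ordering_constraints(width, sizes, row=7):
    """Ordering constraints y[row,t,j] <= sum of y[row,t+1,j'] as strings,
    skipping the vacuous ones (those whose right side sums to 1 anyway)."""
    k = len(sizes)
    # earliest/latest leftmost start pixels, packing left and right
    e = [1 + sum(sizes[:t]) + t for t in range(k)]
    l = [width - (sum(sizes[t:]) + (k - 1 - t)) + 1 for t in range(k)]
    out = []
    for t in range(k - 1):
        for j in range(e[t], l[t] + 1):
            rhs = [f"y[{row},{t+2},{jp}]"
                   for jp in range(j + sizes[t] + 1, l[t + 1] + 1)]
            if len(rhs) < l[t + 1] - e[t + 1] + 1:  # otherwise vacuous
                out.append(f"y[{row},{t+1},{j}] <= " + " + ".join(rhs))
    return out

print(ordering_constraints(10, [2, 5]))
```

Running this reproduces exactly the two constraints displayed in the text for row 7: y[7,1,2] <= y[7,2,5] + y[7,2,6] and y[7,1,3] <= y[7,2,6].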

Double Coverage Constraints

To guarantee that each black pixel is covered by both a row cluster and a column cluster, we impose, for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, the following pair of inequalities:

z_{i,j} ≤ Σ_{t=1}^{k^r_i} Σ_{j′ = max{e^r_{i,t}, j − s^r_{i,t} + 1}}^{min{l^r_{i,t}, j}} y_{i,t,j′},

z_{i,j} ≤ Σ_{t=1}^{k^c_j} Σ_{i′ = max{e^c_{j,t}, i − s^c_{j,t} + 1}}^{min{l^c_{j,t}, i}} x_{j,t,i′}.

The first inequality states that if row i's j-th pixel is painted black, then at least one of row i's clusters must be placed in such a way that it covers row i's j-th pixel. (The upper and lower limits of the second summation make sure that j′ satisfies the inequalities e^r_{i,t} ≤ j′ ≤ l^r_{i,t} and j − s^r_{i,t} + 1 ≤ j′ ≤ j. The first two hold if and only if row i's t-th cluster can be placed in row i with its leftmost pixel occupying pixel j′. The last two hold if and only if row i's j-th pixel is covered when row i's t-th cluster is placed in row i with its leftmost pixel occupying pixel j′.) The second inequality makes sure that if row i's j-th pixel is painted black, then at least one of column j's clusters covers it. In our example problem, for row 7's fifth pixel, we impose

z_{7,5} ≤ y_{7,2,4} + y_{7,2,5}   and   z_{7,5} ≤ x_{5,3,7} + x_{5,4,7}.

Finally, we include constraints that prevent white pixels from being covered by the row clusters or column clusters. For each 1 ≤ i ≤ m, each 1 ≤ j ≤ n, each 1 ≤ t ≤ k^r_i, and each j′ that satisfies the inequalities j − s^r_{i,t} + 1 ≤ j′ ≤ j and e^r_{i,t} ≤ j′ ≤ l^r_{i,t}, we impose

z_{i,j} ≥ y_{i,t,j′}.

And for each 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ t ≤ k^c_j, and each i′ that satisfies the inequalities e^c_{j,t} ≤ i′ ≤ l^c_{j,t} and i − s^c_{j,t} + 1 ≤ i′ ≤ i, we impose

z_{i,j} ≥ x_{j,t,i′}.

In our example problem, for row 7's fifth pixel, we impose

z_{7,5} ≥ y_{7,2,4},   z_{7,5} ≥ y_{7,2,5},   z_{7,5} ≥ x_{5,3,7},   z_{7,5} ≥ x_{5,4,7}.
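The index ranges appearing in the double coverage inequalities can be enumerated directly. This sketch is our own (names and output format are assumptions); for each cluster t of a row, it lists the leftmost start pixels j′ whose placement would cover a given pixel j:

```python
def coverage_terms(width, sizes, j):
    """For each cluster t (1-indexed) of a row, the start pixels j' with
    max(e[t], j - s[t] + 1) <= j' <= min(l[t], j), i.e. placements covering j."""
    k = len(sizes)
    e = [1 + sum(sizes[:t]) + t for t in range(k)]
    l = [width - (sum(sizes[t:]) + (k - 1 - t)) + 1 for t in range(k)]
    return {t + 1: list(range(max(e[t], j - sizes[t] + 1), min(l[t], j) + 1))
            for t in range(k)}

# Row 7 (clusters 2, 5), pixel 5: only cluster 2, started at pixel 4 or 5,
# can cover it, giving z[7,5] <= y[7,2,4] + y[7,2,5] as in the text.
print(coverage_terms(10, [2, 5], 5))  # {1: [], 2: [4, 5]}
```

The empty list for cluster 1 reflects that the size-2 cluster can never reach pixel 5, so it contributes no terms to the inequality.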

Objective Function

Since any point that satisfies the constraints yields a valid painting, there is no need for an objective function here; we simply search for a feasible solution.

Problems

Interested readers may enjoy trying to solve the following problems:

1. Try to improve the IP formulation. To obtain our code for constructing Cplex ".lp" files for the IP formulation, point your browser to www.oberlin.edu/~math/faculty/people/bosch.html and scroll down to the Optima Mindsharpener section.

2. Is there a better method for solving paint-by-numbers puzzles?

3. Solve the paint-by-numbers puzzle displayed in Figure 5. This puzzle was designed by Won Yoon Jo. Its solution is quite lovely, and it is perhaps the most difficult 20 x 20 puzzle on the Paint-by-Numbers Web Page (much more difficult than many of the 30 x 30 and 40 x 40 puzzles). To reach the Paint-by-Numbers Web Page, point your browser to www02.so-net.ne.jp/~kajitani/cgi-bin/pbn.cgi/index.html

(The displayed variable definitions and constraints of the formulation, collected:)

y_{i,t,j} = 1 if row i's t-th cluster is placed in row i with its leftmost pixel occupying pixel j, and 0 if not;

x_{j,t,i} = 1 if column j's t-th cluster is placed in column j with its topmost pixel occupying pixel i, and 0 if not.

Cluster constraints:

Σ_{j = e^r_{i,t}}^{l^r_{i,t}} y_{i,t,j} = 1;

y_{i,t,j} ≤ Σ_{j′ = j + s^r_{i,t} + 1}^{l^r_{i,t+1}} y_{i,t+1,j′}   for each e^r_{i,t} ≤ j ≤ l^r_{i,t}.

Double coverage constraints:

z_{i,j} ≤ Σ_{t=1}^{k^r_i} Σ_{j′ = max{e^r_{i,t}, j − s^r_{i,t} + 1}}^{min{l^r_{i,t}, j}} y_{i,t,j′};

z_{i,j} ≤ Σ_{t=1}^{k^c_j} Σ_{i′ = max{e^c_{j,t}, i − s^c_{j,t} + 1}}^{min{l^c_{j,t}, i}} x_{j,t,i′}.

Figure 5. [A 20 x 20 paint-by-numbers puzzle designed by Won Yoon Jo; the empty grid and its row and column clue numbers are not reproduced here.]



Gallimaufry

This is the last issue by the current editorial team! We wish to thank all of the authors who have contributed articles during the past years.

We also wish to thank Elsa Drake for her fantastic job as the designer of OPTIMA. She has been the designer since 1985, helping make it one of the most important and recognized newsletters in the mathematical sciences community. The quality emphasis of the society has been captured in her creative graphics and layout. The MPS expresses its appreciation for her excellent work and wishes her well. A warm thank you for all of their assistance is also directed to Don Hearn, the founding editor, and Patsy Messinger.

Finally, we wish the incoming editor Jens Clausen and his team GOOD LUCK!

Application for Membership

I wish to enroll as a member of the Society.

My subscription is for my personal use and not for the benefit of any library or institution.

[ ] I will pay my membership dues on receipt of your invoice.

[ ] I wish to pay by credit card (Master/Euro or Visa).

CREDIT CARD NO. EXPIRATION DATE

FAMILY NAME

MAILING ADDRESS

TELEPHONE NO. TELEFAX NO.

EMAIL

SIGNATURE

Mail to:

Mathematical Programming Society
3600 University City Sciences Center
Philadelphia PA 19104-2688 USA

Cheques or money orders should be made payable to The Mathematical Programming Society, Inc. Dues for 2001, including subscription to the journal Mathematical Programming, are US $75. Student applications: Dues are one-half the above rate. Have a faculty member verify your student status and send the application with dues to the above address.

Faculty verifying status

Institution

The deadline for the next issue of OPTIMA will be June 30, 2001. Contributions should be sent to the new editor, Jens Clausen ([email protected]).




OPTIMA
MATHEMATICAL PROGRAMMING SOCIETY

UNIVERSITY OF FLORIDA
Center for Applied Optimization
371 Weil
PO Box 116595
Gainesville FL 32611-6595 USA

FIRST CLASS MAIL

EDITOR:

Karen Aardal
Department of Computer Science
Utrecht University
PO Box 80089
3508 TB Utrecht
The Netherlands
e-mail: [email protected]

URL: http://www.cs.ruu.nl/staff/aardal.html

BOOK REVIEW EDITOR:

Robert Weismantel
Universität Magdeburg
Fakultät für Mathematik
Universitätsplatz 2
D-39106 Magdeburg
Germany
e-mail: [email protected]

Donald W. Hearn, FOUNDING EDITOR

Christina Loosli, DESIGNER

PUBLISHED BY THE

MATHEMATICAL PROGRAMMING SOCIETY &

GATOR Engineering® PUBLICATION SERVICES

University of Florida

Journal contents are subject to change by the publisher.