Random linear programs with many variables and few constraints


Mathematical Programming 34 (1986) 62-71 North-Holland

RANDOM LINEAR PROGRAMS WITH MANY VARIABLES AND FEW CONSTRAINTS

Charles BLAIR, Business Administration Department, University of Illinois, 1206 S. Sixth St., Champaign, IL 61820, USA

Received 13 April 1983 Revised manuscript received 25 January 1985

We extend and simplify Smale's work on the expected number of pivots for a linear program with many variables and few constraints. Our analysis applies to new versions of the simplex algorithm and to new random distributions.

Key words: Linear Programs, Simplex Algorithm, Average-Case Analysis.

1. Introduction

In the important papers [10, 11], Smale studies the expected number of pivots required by the simplex algorithm for a randomly generated problem. Smale considers a version of the simplex algorithm which can be viewed as a path-following procedure. For a fixed constraint matrix one begins with a right-hand side and objective function for which the optimal solution is trivial and deforms to the actual ones. The 'main formula' in [10, 11] gives the expected number of pivots for a fixed matrix. The 'main formula' is then used to prove the 'main theorem'--that for a fixed number of constraints the expected number of steps grows in a sublinear manner as the number of variables increases.

In this note we show how the 'main theorem' can be obtained without using the geometric analysis leading up to the 'main formula'. This simplifies the proof. At the same time the result is generalized to include other versions of the simplex algorithm and other random distributions of the data, which are not necessarily amenable to geometric analysis.

2. Statement of the problem

We will consider linear programs in the form

max cx,  Ax ≥ b,  x ≥ 0   [b ∈ R^m, c ∈ R^n].   (2.1)


We will assume b is a fixed non-positive¹ vector and study the expected number of pivots when A and c are generated randomly. The case in which b also varies can be obtained as a corollary. The crucial property assumed of the random distribution is:

Let B be the (m+1)-by-n matrix whose first row is c and whose remaining rows are the rows of A. For any (m+1)-by-n matrix D with all elements different,

prob{b_ij > b_ik iff d_ij > d_ik, for all i, j, k} = (1/n!)^(m+1).   (2.2)

In words, (2.2) says that the orders of the elements in c and in the rows of A are independent of one another. This is clearly the case if A and c are chosen using the spherical measure of [10], but it includes other possibilities. (2.2) does imply that the probability that any two members of the same row of A are equal is zero, and it is essentially equivalent to the symmetry assumption in [11]. In Section 5 we show how our analysis can be extended to those distributions satisfying:

Let D and D* be two matrices such that each row of one is a permutation of the corresponding row of the other. Then

prob{b_ij > b_ik iff d_ij > d_ik; all i, j, k} = prob{b_ij > b_ik iff d*_ij > d*_ik; all i, j, k}.   (2.2)'

(2.2)' generalizes (2.2) to allow the possibility that several elements of the same row are equal. It would allow, for example, choosing all elements of A and c as independent identically distributed discrete random variables.
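As a concrete sanity check (not part of the original argument), a small Monte Carlo experiment illustrates (2.2) for i.i.d. continuous entries: the chance that a random B reproduces the row orderings of a fixed matrix D should be (1/n!)^(m+1). The dimensions and the matrix D below are arbitrary illustrative choices.

```python
import math
import random

# Estimate prob{B has the same row orderings as D} for i.i.d. uniform B.
random.seed(0)
m_plus_1, n = 2, 3                      # B is (m+1)-by-n with m+1 = 2, n = 3
D = [[0.3, 0.9, 0.1],
     [0.7, 0.2, 0.5]]                   # all elements within a row distinct

def same_order(B):
    """True iff b_ij > b_ik exactly when d_ij > d_ik, for all i, j, k."""
    return all((B[i][j] > B[i][k]) == (D[i][j] > D[i][k])
               for i in range(m_plus_1)
               for j in range(n) for k in range(n))

trials = 100_000
hits = sum(same_order([[random.random() for _ in range(n)]
                       for _ in range(m_plus_1)])
           for _ in range(trials))
est = hits / trials
target = (1 / math.factorial(n)) ** m_plus_1   # (1/3!)^2 = 1/36
print(round(est, 3), round(target, 3))
```

The estimate settles near 1/36 ≈ 0.028, as (2.2) predicts for this case.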

3. Idea of the proof

Consider the LP

max 10x₁ + 5x₂ + 9x₃ + 4x₄,
  −x₁ − 2x₂ − 7x₃ − 10x₄ ≥ −5,
  −7x₁ − 8x₂ + 3x₃ + x₄ ≥ −15,   (3.1)
  x₁, x₂, x₃, x₄ ≥ 0.

We can see immediately that the optimal solution to (3.1) must have x₂ = x₄ = 0. This is because, among the columns of B, (10, −1, −7) ≥ (5, −2, −8) and (9, −7, 3) ≥ (4, −10, 1). In general a column of B is said to be undominated if there is no other column at least as large in every row.² We are concerned with those versions of the simplex algorithm which satisfy:

No variable corresponding to a dominated column of B enters the basis at any iteration.   (3.2)

¹ To ensure feasibility. This assumption can be avoided by minor modifications, which we indicate in Section 5.

² If we allow the possibility that several columns of B are equal, we say the leftmost equal column dominates the others but not vice versa.
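The dominance test used for (3.1) is mechanical, and a minimal sketch makes the definition concrete. The matrix below is example (3.1); `dominates` implements the definition of dominance together with the leftmost-column tie rule of footnote 2.

```python
# Columns of B for example (3.1): objective row first, then constraint rows.
c = [10, 5, 9, 4]
A = [[-1, -2, -7, -10],
     [-7, -8, 3, 1]]
cols = list(zip(*([c] + A)))            # cols[k] is column k of B

def dominates(j, k):
    """Column j dominates column k (leftmost-column rule for equal columns)."""
    if j == k:
        return False
    if cols[j] == cols[k]:              # equal columns: leftmost dominates
        return j < k
    return all(bj >= bk for bj, bk in zip(cols[j], cols[k]))

undominated = [k for k in range(len(cols))
               if not any(dominates(j, k) for j in range(len(cols)))]
print(undominated)   # [0, 2]: only the x1 and x3 columns are undominated
```

As claimed in the text, the x₂ and x₄ columns are dominated (by the x₁ and x₃ columns respectively), so any rule satisfying (3.2) never brings them into the basis.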

The path-following algorithm in [10, 11] satisfies (3.2). Other versions of the simplex algorithm satisfying (3.2) include:

If possible, choose a surplus variable as entering variable. Otherwise choose the entering variable with largest reduced cost.   (3.3a)

Delete all dominated columns from the tableau at the beginning, then use any version of the simplex algorithm. (See [6, 12] for algorithms for identifying the set of dominated columns.)   (3.3b)

The largest reduced cost rule for choosing entering variables does not satisfy (3.2). Consider the program:³

max 5x₁ + 3x₂ + x₃ + x₄,
s.t. −x₁ + x₂ − 5x₃ − 3x₄ ≥ −10,
     −3x₁ − x₂ − x₃ − x₄ ≥ −40,
     2x₁ − x₂ + 3x₃ + 3x₄ ≥ −10,
     x₁, x₂, x₃, x₄ ≥ 0.

The successive basic solutions (x₁, x₂, x₃, x₄) are:

  x₁     x₂    x₃   x₄
  0      0     0    0
  10     0     0    0
  12.5   2.5   0    0

The column for x₄ dominates the column for x₃, but x₃ enters the basis at the third iteration (see the table above). If the matrix B has U undominated columns then any version of the simplex algorithm which satisfies (3.2) will use at most (U+m choose m) ≤ (U+m)^m pivot steps to find the optimum (or discover the LP is unbounded).

To establish bounds on the expected number of pivots it suffices to show that U grows slowly with n, for m fixed.
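To see informally why U grows slowly with n, one can estimate it by simulation. The sketch below takes two i.i.d. uniform rows (m+1 = 2); for that case the expected number of undominated columns is known to be the harmonic number H_n ≈ ln n (this is the number-of-maxima problem of [3]). The counting routine uses a standard staircase scan rather than the quadratic definition.

```python
import random

# Simulated growth of U, the number of undominated columns, for a random
# B with two i.i.d. uniform rows (m + 1 = 2).
random.seed(1)

def num_undominated(n):
    # With continuous entries ties have probability 0, so a column is
    # undominated iff no other column is strictly larger in both rows.
    pts = sorted(((random.random(), random.random()) for _ in range(n)),
                 reverse=True)          # decreasing in the row-1 entry
    best_y, count = float('-inf'), 0
    for _, y in pts:                    # staircase scan: a point is maximal
        if y > best_y:                  # iff it beats every larger-x point
            count += 1                  # in the row-2 entry
            best_y = y
    return count

avgs = {n: sum(num_undominated(n) for _ in range(200)) / 200
        for n in (100, 1000)}
print(avgs)    # averages near H_100 ≈ 5.19 and H_1000 ≈ 7.49
```

A tenfold increase in n raises the average U by only about ln 10 ≈ 2.3, in line with the polylogarithmic bounds proved below.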

Theorem 3.1. Let B be randomly generated so that (2.2) holds. For any C₁ > 0, there is C₂ such that, for n sufficiently large, the probability that there are at most

C₂(ln n)^((m+1) ln(m+1)+1)   (3.4)

undominated columns is at least

1 − (ln n/(m+1))^((m+1) ln(m+1)) C₁^(ln n).   (3.5)

We prove this result in Section 4. From this we easily obtain

Theorem 3.2. Fix m. Let P(n) be the expected number of pivots for B generated so that (2.2) holds. Then there is C₃ such that, for n sufficiently large,

P(n) ≤ C₃(ln n)^(m(m+1) ln(m+1)+m).   (3.6)

³ This example also shows that the 'maximum increase' rule for choosing entering variables does not satisfy (3.2). This corrects an erroneous claim in the first (December 1982) version of this paper.

Proof. If the number of undominated columns is U, the number of pivots will be ≤ (U+m)^m. We choose C₁ < e^(−m), use Theorem 3.1 to obtain C₂, and let C₃ > C₂^m. The probability that U ≤ the quantity given by (3.4) is bounded below by (3.5). In the remaining cases U ≤ n. Since lim_{n→∞} (ln n)^k C₁^(ln n) (n+m)^m = 0 for any k, only the first case appears in the calculation of P(n) for n large. To obtain (3.6) we make use of lim_{U→∞} (U+m)^m/U^m = 1. □

4. The number of undominated columns in a randomly generated matrix

An (m+1)-by-n matrix B is generated randomly so that (2.2) is satisfied. We are concerned with the random variable U = number of undominated columns. This is a problem in discrete probability which has been studied in [3, 9] and possibly elsewhere. [3, 9] show that the expected value of U for n large is O((ln n)^(m+1)). However, to obtain our main result (Theorem 3.2) we need information about the mth moment of U, which we obtain from Theorem 3.1.

Theorem 3.1 can probably be sharpened, but the proof in its present form is reasonably self-contained, assuming only Stirling's formula and elementary probability. In particular, we do not use the central limit theorem.

We begin with two lemmas about the hypergeometric distribution.

Lemma 4.1. Suppose we choose sets of size s, t independently from a set of n elements using uniform distributions. If s and t are ≤ ½n and k ≤ st/2n = r, then the probability that the intersection has exactly k elements is ≤ 3r^(−1/2)(2/e)^r.

Proof. The probability is

P_k = s! t! (n−s)! (n−t)! / [n! k! (s−k)! (t−k)! (n−s−t+k)!].   (4.1)

If k = 0, (4.1) yields P₀ ≤ (1 − t/n)^s ≤ e^(−2r), which is less than the stated upper bound.

When k ≥ 1, we analyze (4.1) with Stirling's formula in the form f(j) ≤ j! ≤ 1.1 f(j), where f(j) = j^(j+1/2) e^(−j) (2π)^(1/2).

A lower bound on the denominator of (4.1) is provided by f(n)f(r)f(s−r)f(t−r)f(n−s−t+r). The intuitive justification for the replacement of k by r is that P_i increases as i gets closer to 2r, which is the expected value of the number of members of the intersection.

For a formal justification, let g(x) = ln(f(x)f(s−x)f(t−x)f(n−s−t+x)). Then

g′(x) = ln[x(n−s−t+x)/((s−x)(t−x))] + ½(x^(−1) + (n−s−t+x)^(−1) − (s−x)^(−1) − (t−x)^(−1)).   (4.2)


If 1 ≤ x ≤ r, the first term of (4.2) is largest when x = r = st/2n, when it becomes ln((2n² − 2ns − 2nt + st)/((2n−s)(2n−t))). If s + t ≥ ¾n and s, t ≤ ½n this term is ≤ ln(¾n²/(9/4)n²) ≤ −1.09. Since x ≥ 1 implies the second term is ≤ 1, we conclude that g′(x) < 0 if s + t ≥ ¾n.

If s + t ≤ ¾n and x ≥ 1 the second term is ≤ ½(1 + 4/n − 2(8/3n)) ≤ ½. If x ≤ r, then 2nx ≤ st, so the first term is ≤ ln ½ ≤ −0.69.

Since g′(x) < 0 for 1 ≤ x ≤ r, we have g(k) ≥ g(r) for 1 ≤ k ≤ r, which establishes the lower bound on the denominator of (4.1). If we use Stirling's formula to obtain an upper bound on the numerator, we obtain:

P_k ≤ QR · s^s t^t (n−s)^(n−s) (n−t)^(n−t) / [n^n r^r (s−r)^(s−r) (t−r)^(t−r) (n−s−t+r)^(n−s−t+r)]   (4.3)

    = QR · 2^r s^(s−r) t^(t−r) (n−s)^(n−s) (n−t)^(n−t) / [n^(n−r) (s−r)^(s−r) (t−r)^(t−r) (n−s−t+r)^(n−s−t+r)],   (4.4)

where Q = (1.1)⁴ and

R = (st(n−s)(n−t) / [2π n r (s−r)(t−r)(n−s−t+r)])^(1/2).   (4.5)

To obtain an upper bound on R note that st = 2nr and s, t ≤ ½n imply s + t ≤ ½n + 4r, hence (s−r)(t−r)(n−s−t+r) ≥ 3r(½n − r)(½n − 3r).⁴ Since (n−s)(n−t) ≤ n² and r ≤ ⅛n we have R ≤ (8/3)(πr)^(−1/2).

If we take logarithms in (4.4) we obtain

ln P_k ≤ ln QR + r ln 2 + (n−s−t+r) ln[(n−s)(n−t)/(n(n−s−t+r))] + (s−r) ln[s(n−t)/(n(s−r))] + (t−r) ln[t(n−s)/(n(t−r))].   (4.6)

The inequality ln(1+x) ≤ x may be applied to the last three terms of (4.6) to obtain ln P_k ≤ ln QR + r ln 2 − r, from which Lemma 4.1 follows. □

Lemma 4.2. Let r, s, t be as in Lemma 4.1. Then the probability that the intersection has ≤ r elements is ≤ 6r^(−1/2)(2/e)^r.

Proof. From (4.1) we obtain, for k ≤ r,

P_{k−1}/P_k ≤ P_{r−1}/P_r ≤ r(n−s−t+r)/((s−r)(t−r)) = (2n² − 2ns − 2nt + st)/(4n² − 2ns − 2nt + st) ≤ ½.

So the desired probability, Σ_{k≤r} P_k, is less than twice the largest term, for which we have the bound in Lemma 4.1. □

⁴ (s−r)(t−r) is minimized when s = 4r, t = ½n.
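Since (4.1) is exactly the hypergeometric probability mass function, the bound of Lemma 4.2 can be checked exactly for concrete parameters; the values of n, s, t below are arbitrary choices with s, t ≤ ½n.

```python
import math
from fractions import Fraction

# Exact check of the Lemma 4.2 bound: for independent uniform s- and
# t-subsets of an n-set, prob(|intersection| <= r) <= 6 r^(-1/2) (2/e)^r,
# where r = st/2n.
n, s, t = 200, 100, 100
r = s * t // (2 * n)                    # r = 25

def P(k):
    # P_k of (4.1), i.e. the hypergeometric probability mass function
    return Fraction(math.comb(s, k) * math.comb(n - s, t - k),
                    math.comb(n, t))

tail = float(sum(P(k) for k in range(r + 1)))
bound = 6 * r ** -0.5 * (2 / math.e) ** r
print(tail <= bound)   # True: the exact tail is far below the bound
```

Here the expected intersection size is st/n = 2r = 50, so the event |intersection| ≤ r = 25 is a large deviation and its exact probability is many orders of magnitude below the bound, consistent with the remark that Theorem 3.1 can probably be sharpened.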


The next step is to show that with high probability B will contain a column whose entries are all almost the largest entries in their row.

Lemma 4.3. Let j ≤ ½n. For 1 ≤ i ≤ m+1 let B(i) be the set of columns of B whose entry in row i is among the j largest members of row i. Then the probability that ⋂_{i=1}^{m+1} B(i) is nonempty is ≥ 1 − m h(2^(−m) j^(m+1) n^(−m)), where h(x) = 6x^(−1/2)(2/e)^x.

Proof. Property (2.2) implies that the sets B(i) are chosen independently of one another using uniform distributions. We apply Lemma 4.2 to B(1), B(2) to conclude that B(1) ∩ B(2) has > j²/2n elements with probability ≥ 1 − h(j²/2n). Since B(3) is chosen independently of B(1) and B(2) we may apply Lemma 4.2 with s = j, t = the smallest integer ≥ j²/2n to conclude that B(1) ∩ B(2) ∩ B(3) has > j³/4n² elements with probability ≥ (1 − h(j²/2n))(1 − h(j³/4n²)).⁵ Repeating this argument we see that the intersection ⋂_{i=1}^{m+1} B(i) has > 2^(−m) j^(m+1) n^(−m) elements with probability ≥ Π_{i=1}^{m} [1 − h(2^(−i) j^(i+1) n^(−i))]. When all the factors are positive (the only interesting case) we may use Π(1 − y_i) ≥ 1 − Σ y_i to obtain the desired result. □

If the intersection is non-empty, the only undominated columns are members of at least one B(i), since any other column is dominated by a member of the intersection. This implies that the set of undominated columns of B is the union of the sets of undominated columns of each of the (m+1)-by-j submatrices formed from the columns of the B(i).

Since each of the B(i)-submatrices is randomly generated so that (2.2) holds, we may apply Lemma 4.3 again, except that n is replaced by j. Lemma 4.5 shows how this process may be continued in a recursive manner.
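The structural step used here (every undominated column lies in some B(i) whenever ⋂ B(i) is nonempty) can be checked directly. A small sketch, with arbitrary illustrative parameters:

```python
import random

# If the intersection of the B(i) is nonempty, every column outside all of
# the B(i) is dominated by each member of the intersection, so the
# undominated columns of B lie in the union of the B(i).
random.seed(2)
m_plus_1, n, j = 3, 60, 20
B = [[random.random() for _ in range(n)] for _ in range(m_plus_1)]

def top_j(i):
    """Indices of the j largest entries of row i."""
    return set(sorted(range(n), key=lambda k: B[i][k], reverse=True)[:j])

Bi = [top_j(i) for i in range(m_plus_1)]
inter = set.intersection(*Bi)

cols = list(zip(*B))
undominated = {k for k in range(n)
               if not any(c != cols[k]
                          and all(a >= b for a, b in zip(c, cols[k]))
                          for c in cols)}

# The implication asserted in the text (vacuous if the intersection is empty):
assert (not inter) or undominated <= set.union(*Bi)
print(len(inter), len(undominated))
```

With these parameters j²/n and j³/n² are well above 1, so the intersection is nonempty with high probability and the search for undominated columns shrinks from n = 60 columns to the (m+1)·j candidates in the union, which is the shrinking that Lemma 4.5 iterates.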

Definition 4.4. Let n > j₁ > j₂ > ⋯ > j_L be fixed. For each (m+1)-by-n matrix B and sequence (i₁, …, i_s) ∈ {1, 2, …, m+1}^s with 1 ≤ s ≤ L we define a submatrix B(i₁, …, i_s) inductively as follows: (i) B(i) is the submatrix formed from the columns B(i) defined in Lemma 4.3, with j = j₁. (ii) B(i₁, …, i_{s+1}) is the (m+1)-by-j_{s+1} submatrix of B(i₁, …, i_s) consisting of those columns whose entry in row i_{s+1} is among the j_{s+1} largest entries in that row of B(i₁, …, i_s).

Example. Let n = 6, m+1 = 2, j₁ = 4, j₂ = 2. For a 2-by-6 matrix B, B(i) is the 2-by-4 submatrix of columns whose entries in row i are among the 4 largest in that row, and B(i₁, i₂) is the 2-by-2 submatrix of B(i₁) whose entries in row i₂ are the 2 largest in that row of B(i₁).

⁵ We may replace t by j²/2n because h is decreasing.


Lemma 4.5. Let L and the j_i be given with 2j_{i+1} ≤ j_i (i = 0, …, L−1; j₀ = n). The probability that B has ≤ (m+1)^L j_L undominated columns is ≥

1 − Σ_{i=1}^{L} (m+1)^(i−1) m h(2^(−m) j_i^(m+1) j_{i−1}^(−m)),   (4.7)

where h is as in Lemma 4.3.

Proof. Let E(i₁, …, i_s) be the event that ⋂_{i=1}^{m+1} B(i₁, …, i_s, i) is nonempty (we identify the submatrix with the set of its columns). By Lemma 4.3 the probability of E(i₁, …, i_s) is ≥ 1 − m h(2^(−m) j_{s+1}^(m+1) j_s^(−m)). If E(i₁, …, i_s) occurs then any column of B(i₁, …, i_s) which is not a member of ⋃_{i=1}^{m+1} B(i₁, …, i_s, i) will be dominated by the members of ⋂_{i=1}^{m+1} B(i₁, …, i_s, i). Thus the set of undominated columns of B(i₁, …, i_s) will be the union of the undominated columns of the B(i₁, …, i_s, i), for 1 ≤ i ≤ m+1. (Note that if any column of B dominates a column of B(i₁, …, i_s), then it must also be a column of B(i₁, …, i_s).) By continuing this reasoning, we see that if ⋂_{i=1}^{m+1} B(i) is non-empty and E(i₁, …, i_s) occurs for all sequences of length 1 ≤ s ≤ L−1, then the undominated columns of B are precisely the union of the undominated columns of the (m+1)^L submatrices B(i₁, …, i_L). Since each submatrix has j_L columns, B cannot have more than (m+1)^L j_L undominated columns in this case. To obtain (4.7) we note that the probability that at least one E-event does not occur is bounded by the sum of the probabilities that each E-event fails. □

Remark. The E-events are not independent because the corresponding sets of columns are not disjoint. However, we conjecture that Lemma 4.5 can be strengthened by using the fact that these events are 'almost independent'.

Theorem 4.6. For any K > 1, if n satisfies the conditions (i) ln n ≥ (m+1) exp(1/(m+1)) and (ii)

(n/(K ln n))^(1/ln n) ≥ 2,   (4.8)

then the probability that the number of undominated columns is

≤ (ln n/(m+1))^((m+1) ln(m+1)) (K ln n) exp((m+1) exp(1/(m+1)))   (4.9)

is

≥ 1 − (ln n/(m+1))^((m+1) ln(m+1)) h(2^(−m)(K ln n − m − 1)).   (4.10)

Proof. We let L and j₁, …, j_L be the largest integers satisfying

L ≤ (m+1) ln ln n − (m+1) ln(m+1),   (4.11)

j_i ≤ (K ln n)(n/(K ln n))^(exp(−i/(m+1))).   (4.12)


The purpose of conditions (i) and (ii) is to allow us to apply Lemma 4.5. Condition (i) clearly implies L ≥ 1. To show that (ii) implies j_i ≥ 2j_{i+1} we begin by noting that raising (4.8) to the power ln n yields n/(K ln n) ≥ 2^(ln n) ≥ 1. Then from the definition of the j_i we have

j_i ≥ (K ln n)(n/(K ln n))^(exp(−i/(m+1))) − 1,   (4.13)

j_i/j_{i+1} ≥ (n/(K ln n))^(exp(−i/(m+1)) − exp(−(i+1)/(m+1))) − 1/(K ln n).   (4.14)

Since the exponent of the first term of (4.14) is ≥ ((m+1)/ln n)(1/(m+1)) = 1/ln n, we obtain j_i ≥ 2j_{i+1} from (4.8).

We now apply Lemma 4.5. From (4.11) and (m+1)^L = exp(L ln(m+1)) we obtain

(m+1)^L j_L ≤ (ln n/(m+1))^((m+1) ln(m+1)) (K ln n) n^(exp(−L/(m+1))).   (4.15)

We then use L ≥ (m+1) ln ln n − (m+1) ln(m+1) − 1 to obtain (4.9). To obtain (4.10) from (4.7) we use

j_i^(m+1) j_{i−1}^(−m) ≥ (K ln n)(n/(K ln n))^(exp(−i/(m+1))(m+1−m exp(1/(m+1)))) (1 − 1/(K ln n))^(m+1) ≥ (K ln n)(1 − (m+1)/(K ln n)) = K ln n − m − 1,   (4.16)

together with the fact that h is a decreasing function and

Σ_{i=1}^{L} (m+1)^(i−1) m ≤ (m+1)^L ≤ (ln n/(m+1))^((m+1) ln(m+1)).   (4.17)  □

Finally, we use Theorem 4.6 to give the

Proof of Theorem 3.1. We choose K so that h(2^(−m)(K ln n − m − 1)) < C₁^(ln n) for n sufficiently large (recall h(x) = 6x^(−1/2)(2/e)^x). Then from (4.9) we obtain the desired conclusion (3.4) with

C₂ = K exp((m+1) exp(1/(m+1))) (m+1)^(−(m+1) ln(m+1)). □

5. Concluding remarks

The expression (3.6) compares unfavorably with the expected value obtained in [10], which is essentially (log n)^(m²+m). I conjecture that the analysis in Section 4 can be improved to obtain this stronger result. It should be mentioned that [10, 11] also use the dominance idea (see the definition of the X-sets in Section 5 of [10]). The contribution of this paper is to show that the key properties for this problem are (2.2), (3.2), and dominance, rather than geometric analysis.


These results extend to cases in which b is not necessarily feasible. To determine feasibility, one adds m columns corresponding to artificial variables. Since property (3.2) still holds, the extension is immediate. Similarly, the possibility of a degenerate problem does not affect the analysis.

The standard device of perturbing the right-hand side (lexicographic pivot rules) may be implemented in such a way that (3.2) still holds.

To replace (2.2) by (2.2)' we must show that the analysis in Section 4 still works.

For any matrix B let A be a matrix with all elements different such that a_ij < a_ik whenever b_ij < b_ik. The number of undominated columns of B is ≤ the number of undominated columns of A. Further, if we apply the same permutations to the row elements of B and A to obtain B*, A*, then the number of undominated columns of B* is ≤ the number of undominated columns of A*. Thus the average number of undominated columns of B, as we go through all possible permutations, is smaller than the average number for A, to which the analysis in Section 4 applies.

We make some comments on the relationship of these results to other work on random linear programs. The model of Borgwardt [4] does not include non-negativity constraints on the variables, so the dominance idea cannot be used. Further, [4, p. 458] gives a lower bound for the number of pivots that grows as a power of n, which is larger than

the upper bound given in Theorem 3.2. This suggests the two models are fundamentally different. Recently, some impressive results have been obtained [1, 2, 5, 7] using models in which the inequality sign in each constraint is determined by flipping a coin. This model is different from the one considered in the present paper, and is neither more nor less general. Specifically, our model does not make any assumption about the probability that a given coefficient is positive or negative. It is only concerned with the relative sizes of coefficients within rows of the matrix B. If the right-hand side b in (2.1) is non-positive with probability 1 and all entries of A and c are drawn independently from a distribution on the negative numbers, we have a model in which, with probability 1, the linear program is feasible and bounded. (Compare with the discussion in Section 6 of [2].) The choice between the two models depends in part on a feeling for which one better reflects the intuitive idea of a random linear program.

The results in this paper depend in a crucial way on n ≫ m. It might be useful to look for dominance every few iterations, since new dominated columns may appear. Also, it might help to extend the concept of dominance to cases in which a non-negative multiple of one column dominates another.

Finally, we wish to mention the work of Megiddo [8] on specialized algorithms for LPs with many variables and few constraints.

Acknowledgement

I wish to thank Franco Preparata and Alvin Roth for calling [3] and [9] to my attention. The referees pointed out many obscurities and errors in previous versions of this paper.


References

[1] I. Adler and S. Berenguer, "Random linear programs", Operations Research Center Report 81-4, University of California (Berkeley, CA, 1981).

[2] I. Adler, R. Karp and R. Shamir, "A simplex variant solving an m-by-d linear program in O(min(m 2, d2)) expected number of steps", Report UCB CSD 83/157, Computer Science Division, University of California (Berkeley, CA, December 1983).

[3] J. Bentley, H. Kung, M. Schkolnick and C. Thompson, "On the average number of maxima in a set of vectors", Journal of the ACM 25 (1978) 536-543.

[4] K.-H. Borgwardt, "Some distribution-independent results about the asymptotic order of the average number of pivot steps of the simplex-method", Mathematics of Operations Research 7 (1982) 441-462.

[5] M. Haimovich, "The simplex algorithm is very good!" Business Department Technical Report, Columbia University (New York, NY, April 1983).

[6] H. Kung, F. Luccio and F. Preparata, "On finding maxima of a set of vectors", Journal of the ACM 22 (1975) 469-476.

[7] J. May and R. Smith, "Random polytopes: Their definitions, generation and aggregate properties", Mathematical Programming 24 (1982) 39-54.

[8] N. Megiddo, "Linear programming in linear time when the dimension is fixed", Journal of the ACM 31 (1984) 114-127.

[9] B. O'Neill, "The number of outcomes in the Pareto-optimal set of discrete bargaining games", Mathematics of Operations Research 6 (1981) 571-578.

[10] S. Smale, "On the average number of steps of the simplex method of linear programming", Mathematical Programming 27 (1983) 241-263.

[11] S. Smale, "The problem of the average speed of the simplex method", in: A. Bachem, M. Grötschel and B. Korte, eds., Mathematical programming: The state of the art (Springer-Verlag, Berlin and New York, 1983), pp. 530-539.

[12] F. Yao, "On finding maximal elements in a set of plane vectors", Computer Science Technical Report R-74-667, University of Illinois (Urbana, IL, 1974).