
Geometry of Online Packing Linear Programs

Marco Molinaro and R. Ravi, Carnegie Mellon University

Packing Integer Programs (PIPs)

• Non-negative c, A, b
• Max c᷀ᵀx s.t. Ax ≤ b, x ∈ {0,1}ⁿ
• A has entries in [0,1]

[Figure: the m × n constraint matrix A, with budgets b on the right-hand side]
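As a concrete reference point, here is a tiny brute-force evaluator for the offline optimum (our own illustration, not from the talk; only sensible for very small n):

```python
import itertools
import numpy as np

def offline_opt(c, A, b):
    """Offline optimum of: max c.x  s.t.  A x <= b,  x in {0,1}^n.
    Plain enumeration, exponential in n; a sanity-check baseline only."""
    n = len(c)
    best = 0.0
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        if np.all(A @ x <= b):          # feasible under every budget
            best = max(best, float(c @ x))
    return best
```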

Online Packing Integer Programs

• Adversary chooses values for c, A, b
• …but columns are presented in random order
• …when a column arrives, its variable is set to 0/1 irrevocably
• b and n are known upfront

[Figure: the LP max c·x s.t. Ax ≤ b; the columns A᷀ᵗ arrive one at a time and each xₜ is fixed on arrival]
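A minimal sketch of the online protocol implied by this slide, assuming NumPy inputs and a hypothetical callback decide() that commits each variable on arrival:

```python
import numpy as np

def run_online(c, A, b, decide, seed=0):
    """Columns of A arrive in uniformly random order; decide() must set each
    variable to 0 or 1 irrevocably, knowing only b, n, and the past."""
    m, n = A.shape
    order = np.random.default_rng(seed).permutation(n)   # random arrival order
    used, value = np.zeros(m), 0.0
    for t in order:
        take = decide(t, c[t], A[:, t], used, b, n)      # irrevocable 0/1 choice
        if take and np.all(used + A[:, t] <= b):         # guard feasibility
            used += A[:, t]
            value += c[t]
    return value
```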

Online Packing Integer Programs

• Goal: Find a feasible solution that maximizes expected value

• (1−ε)-competitive: E[ALG] ≥ (1−ε)·OfflineIP

• First online problem: secretary problem [Dynkin 63]

• B-secretary problem (m=1, b=B, A is all 1’s) [Kleinberg 05]: (1−ε)-competitive for B ≥ Ω(1/ε²)

• PIPs (B = minᵢ bᵢ) [FHKMS 10, AWY]: (1−ε)-competitive; need B ≥ Ω((m log n)/ε²)

Previous Results

• [Kleinberg 05]: B ≥ Ω(1/ε²), which does not depend on n
• [FHKMS 10, AWY]: B ≥ Ω((m log n)/ε²), which depends on n

Main Question and Result

• Q: Do general PIPs become more difficult for larger n?

• A: No!

Main result

Algorithm is (1−ε)-competitive when B ≥ Ω((m²/ε²)·log(m/ε))

High-level Idea

1. Online PIP as learning
2. Improving the learning error using tailored covering bounds
3. Geometry of PIPs that allow good covering bounds
4. Reduce general PIPs to the above

For this talk:

• Every right-hand side bᵢ = B
• Show a weaker bound than the main result

Online PIP as Learning

1) Reduction to learning a classifier [DH 09]

Linear classifier: given a (dual) vector p, set

x(p)ₜ = 1  iff  p·A᷀ᵗ − cₜ < 0

[Figure: the dual vector p splits the columns A᷀ᵗ into those classified x(p)ₜ = 1 and those classified x(p)ₜ = 0]
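In code the classifier is a one-liner; a sketch assuming the reconstruction p·A᷀ᵗ − cₜ < 0 above:

```python
import numpy as np

def classify(p, A, c):
    """Classification induced by dual prices p: pick column t exactly when
    its value beats its priced budget consumption, i.e. p . A^t - c_t < 0."""
    return (p @ A - c < 0).astype(int)   # one 0/1 label per column
```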

Online PIP as Learning

Claim: If the classification x(p) given by p satisfies

1) [Feasible] A·x(p) ≤ b
2) [Packs tightly] If pᵢ > 0, then (A·x(p))ᵢ ≥ (1−ε)·bᵢ

then x(p) is (1−ε)-optimal. Moreover, such a classification always exists.
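A hedged sketch of why the claim holds, via standard LP duality (the talk's own proof may differ in detail). For any feasible x′ and any p ≥ 0,

```latex
c^\top x' \;\le\; (c^\top - p^\top A)\,x' + p^\top b
          \;\le\; \sum_t \max\big(c_t - p^\top A^t,\,0\big) + p^\top b
          \;=\; c^\top x(p) - p^\top A\,x(p) + p^\top b,
```

since x(p) picks exactly the columns with cₜ − p·A᷀ᵗ > 0. Hence OPT ≤ c᷀ᵀx(p) + p᷀ᵀ(b − A·x(p)), and the packs-tightly condition bounds the last term by ε·p᷀ᵀb, which for a suitable dual p is O(ε)·OPT.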


Online PIP as Learning

2) Solving the PIP via learning

a) Sample an ε fraction S of the columns
b) Compute an appropriate dual vector p for the sampled IP
c) Use p to classify the remaining columns
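An end-to-end sketch of steps a)–c), assuming NumPy inputs and a hypothetical helper solve_dual() that returns dual prices of the sampled LP with budgets scaled by the sample fraction (the talk does not specify how p is computed):

```python
import numpy as np

def sample_then_classify(c, A, b, eps, solve_dual, seed=0):
    """a) watch the first eps*n columns as sample S;
    b) learn dual prices p from the sampled IP (budgets scaled to eps*b);
    c) classify every remaining column with p, never exceeding b."""
    m, n = A.shape
    order = np.random.default_rng(seed).permutation(n)
    s = int(np.ceil(eps * n))
    S = order[:s]                                   # sampled columns (rejected)
    p = solve_dual(c[S], A[:, S], eps * b)          # assumed LP-dual helper
    used, value = np.zeros(m), 0.0
    for t in order[s:]:
        if p @ A[:, t] - c[t] < 0 and np.all(used + A[:, t] <= b):
            used += A[:, t]                         # irrevocably accept t
            value += c[t]
    return value
```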


Online PIP as Learning

2) Solving the PIP via learning

Probability of learning a good classification:
• Consider a classification that overfills some budget: (A·x(p))ᵢ > bᵢ
• It can only be learned if the sample is skewed, which happens with probability at most exp(−Ω(ε²B))
• There are at most nᵐ distinct bad classifications
• Union bounding over all bad classifications, we learn the desired good classification with probability at least 1 − nᵐ·exp(−Ω(ε²B))
• When B ≥ Ω((m log n)/ε²), we get a good classification with high probability
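The arithmetic behind the last two bullets, sketched with hedged constants (the transcript omits the exact exponent):

```latex
n^m \, e^{-\Omega(\epsilon^2 B)} \;\le\; \delta
\quad\Longleftrightarrow\quad
B \;\ge\; \Omega\!\left(\frac{m \ln n + \ln(1/\delta)}{\epsilon^2}\right),
```

which is exactly the n-dependent budget requirement of the earlier results.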

Improve this…

Improved Learning Error

+-witness: Z is a +-witness of x(p) for constraint i if
1) the columns picked by Z are among the columns picked by x(p)
2) the total occupation of constraint i by the columns picked by Z exceeds bᵢ

−-witness: similar, for constraints with pᵢ > 0 that are packed too loosely…

Lemma: Suppose there is a witness set of size W covering every bad classifier. Then the probability of learning a bad classifier is at most W·exp(−Ω(ε²B))
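Repeating the union-bound arithmetic with W in place of nᵐ shows why small witness sets matter (again with hedged constants):

```latex
W \, e^{-\Omega(\epsilon^2 B)} \;\le\; \delta
\quad\Longleftrightarrow\quad
B \;\ge\; \Omega\!\left(\frac{\ln W + \ln(1/\delta)}{\epsilon^2}\right),
```

so a witness set whose size is independent of n yields a budget requirement independent of n.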

• Idea 1: Covering bounds via witnesses (handling multiple bad classifiers at a time)



Geometry of PIPs with Small Witness Set

• For some PIPs, any witness set must have size that grows with n

• Idea 2: Consider PIPs whose columns lie on few (k) 1-d subspaces

[Figure: example where the columns lie on k = 2 one-dimensional subspaces]

Lemma: For such PIPs, we can find a witness set whose size depends on k, m, and 1/ε but not on n

Geometry of PIPs with Small Witness Set

• Covering bound + witness size: a budget B independent of n suffices
• Final step: Convert any PIP into one whose columns lie on few 1-d subspaces, losing only an ε fraction of the value
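A toy illustration of one way such a conversion could work, by truncating tiny entries and rounding the rest down to a geometric grid so that columns sharing a rounding signature land on a common direction. The bucketing rule here is our own assumption, not necessarily the paper's exact transformation:

```python
import numpy as np

def round_columns_to_few_directions(A, eps):
    """Snap each column of A onto one of few 1-d subspaces:
    entries far below the column max are truncated to 0, the rest are
    rounded DOWN to max * (1+eps)^{-k}.  Rounding down only shrinks
    entries, so A_round @ x <= A @ x for any x >= 0."""
    m, n = A.shape
    A_round = np.zeros_like(A, dtype=float)
    k_max = int(np.ceil(np.log(m / eps) / np.log(1 + eps)))  # truncation depth
    for t in range(n):
        col = A[:, t].astype(float)
        top = col.max()
        if top == 0:
            continue                                  # zero column: occupies nothing
        with np.errstate(divide="ignore"):            # log(0) -> -inf is fine here
            k = np.ceil(-np.log(col / top) / np.log(1 + eps))
        keep = (col > 0) & (k <= k_max)               # drop entries below ~(eps/m)*top
        A_round[keep, t] = top * (1 + eps) ** (-k[keep])
    return A_round
```

At most (k_max + 2)ᵐ distinct signatures can occur, so the rounded columns lie on a number of directions independent of n, at the cost of an O(ε) distortion per entry.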

Algorithm is (1−ε)-competitive when B ≥ Ω((m²/ε²)·log(m/ε))

Conclusion

• Guarantee for online PIPs independent of the number of columns n
• Asymptotically matches the guarantee for the single-constraint version [Kleinberg 05]
• Ideas:
1) Tailored covering bound based on witnesses
2) Analyze the geometry of the columns to obtain a small witness set
Together these make the learning problem more robust

Open problems

1) Obtain the optimal B? Can do if we sample columns with replacement [DJSW 11]
2) Generalize to AdWords-type problems
3) Better online models: infinite horizon? less randomness?

Thank you!