TU/e Algorithms (2IL15) – Lecture 12: Linear Programming
Summary of previous lecture
ρ-approximation algorithm: algorithm for which computed solution is within factor ρ from OPT
to prove approximation ratio we usually need lower bound on OPT (or, for maximization, upper bound)
PTAS = polynomial-time approximation scheme = algorithm with two parameters, input instance and ε > 0, such that
– approximation ratio is 1 + ε
– running time is polynomial in n for constant ε
FPTAS = PTAS whose running time is polynomial in 1/ ε
some problems are even hard to approximate
Today: linear programming (LP)
most used and most widely studied optimization method
can be solved in polynomial time (input size measured in bits)
can be used to model many problems
also used in many approximation algorithms (integer LP + rounding)
we will only have a very brief look at LP …
what is LP? what are integer LP and 0/1-LP? how can we model problems as an LP?
… and not study algorithms to solve LP’s
Example problem: running a chocolate factory
Assortment (ingredient fractions per kg; retail price per 100 g):

                     cacao   milk   hazelnuts   retail price (100 g)
  Pure Black         1       0      0           1.99
  Creamy Milk        0.6     0.4    0           1.49
  Hazelnut Delight   0.6     0.2    0.2         1.69
  Super Nuts         0.5     0.1    0.4         1.79

Cost and availability of ingredients:

              cost (kg)   available (kg)
  cacao       2.1         50
  milk        0.35        50
  hazelnuts   1.9         30
How much should we produce of each product to maximize our profit?
Modeling the chocolate-factory problem
variables we want to determine: production (kg) of the products
  b = production of Pure Black
  m = production of Creamy Milk
  h = production of Hazelnut Delight
  s = production of Super Nuts
Modeling the chocolate-factory problem (cont’d)
profits (per kg) of the products:
  Pure Black:        19.9 − 2.1 = 17.8
  Creamy Milk:       14.9 − (0.6 × 2.1 + 0.4 × 0.35) = 13.5
  Hazelnut Delight:  16.9 − (0.6 × 2.1 + 0.2 × 0.35 + 0.2 × 1.9) = 15.19
  Super Nuts:        17.9 − (0.5 × 2.1 + 0.1 × 0.35 + 0.4 × 1.9) = 16.055
total profit: 17.8 b + 13.5 m + 15.19 h + 16.055 s
Modeling the chocolate-factory problem
we want to maximize the total profit
  17.8 b + 13.5 m + 15.19 h + 16.055 s
under the constraints
  b + 0.6 m + 0.6 h + 0.5 s ≤ 50   (cacao availability)
  0.4 m + 0.2 h + 0.1 s ≤ 50       (milk availability)
  0.2 h + 0.4 s ≤ 30               (hazelnut availability)
This is a linear program: optimize linear function, under set of linear constraints
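The chocolate LP above is small enough to hand to an off-the-shelf solver. The sketch below checks it with SciPy's linprog (SciPy and the variable ordering b, m, h, s are my assumptions, not part of the lecture); since linprog minimizes, the profit vector is negated, and non-negative production amounts are enforced via bounds.

```python
# Sketch: solving the chocolate-factory LP with SciPy (assumed dependency).
from scipy.optimize import linprog

# profit per kg of Pure Black (b), Creamy Milk (m), Hazelnut Delight (h), Super Nuts (s)
profit = [17.8, 13.5, 15.19, 16.055]

# availability constraints, one row per ingredient (A_ub @ x <= b_ub)
A_ub = [
    [1.0, 0.6, 0.6, 0.5],   # cacao
    [0.0, 0.4, 0.2, 0.1],   # milk
    [0.0, 0.0, 0.2, 0.4],   # hazelnuts
]
b_ub = [50.0, 50.0, 30.0]

# linprog minimizes, so negate the profits to maximize them
res = linprog(c=[-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 4, method="highs")
print(res.x, -res.fun)   # optimal production plan (kg) and maximum profit
```

With these numbers the optimum spends all hazelnuts on Super Nuts and the leftover cacao on Creamy Milk, for a total profit of 1485.375.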
Linear programming

Find values of real variables x, y such that
  x − 3y is maximized
subject to the constraints
  −2x + y ≥ −4
  x + y ≥ 3
  −½x + y ≤ 2
  y ≥ 0

(figure: the constraint lines y ≥ 2x − 4, y ≥ −x + 3, y ≤ ½x + 2, y ≥ 0 drawn in the xy-plane)

n variables; here n = 2 (in the chocolate example n = 4), but often n is large
m constraints: linear function ≤ constant, or ≥ or =; > and < are not allowed
objective function: must be a linear function in the variables; goal: maximize (or minimize)
(figure: the feasible region of the LP above, bounded by the lines y = −x + 3, y = 2x − 4, y = ½x + 2, and y = 0)

feasible region = region containing the feasible solutions
                = region containing the solutions satisfying all constraints
the feasible region is a convex polytope in n-dimensional space
Linear programming:

Find values of real variables x1, …, xn such that
  a given linear function c1 x1 + c2 x2 + … + cn xn is maximized (or: minimized)
  and given linear constraints on the variables are satisfied
  (constraints: equalities or inequalities using ≥ or ≤; < and > cannot be used)

Possible outcomes:

unique optimal solution: vertex of the feasible region
no solution: the feasible region is empty
bounded optimal solution, but not unique
unbounded optimal solution
Linear programming: standard form

Maximize c1 x1 + c2 x2 + … + cn xn

Subject to a1,1 x1 + a1,2 x2 + … + a1,n xn ≤ b1
           a2,1 x1 + a2,2 x2 + … + a2,n xn ≤ b2
           …
           am,1 x1 + am,2 x2 + … + am,n xn ≤ bm
           x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0 (non-negativity constraint for each variable)

In matrix notation: maximize c∙x subject to A x ≤ b and non-negativity constraints on all xi, where c and x are n-dimensional vectors, b is an m-dimensional vector, and A is an m × n matrix; c, A, b are the input, x must be computed.

Note: standard form allows only "≤" (no "=" and no "≥") and only maximization (not minimization).
Lemma: Any LP with n variables and m constraints can be rewritten as an equivalent LP in standard form with 2n variables and 2n + 2m constraints.
Proof. The LP may not be in standard form because:

minimization instead of maximization
− negate the objective function: minimize 2x1 − x2 + 4x3  ⇒  maximize −2x1 + x2 − 4x3

some constraints are ≥ or = instead of ≤
− getting rid of =: replace 3x1 + x2 − x3 = 5 by the two constraints 3x1 + x2 − x3 ≤ 5 and 3x1 + x2 − x3 ≥ 5
− changing ≥ into ≤: negate the constraint: 3x1 + x2 − x3 ≥ 5  ⇒  −3x1 − x2 + x3 ≤ −5
Proof (cont'd). The LP may also contain variables without a non-negativity constraint:
− for each such variable xi introduce two new variables ui and vi
− replace each occurrence of xi by (ui − vi)
− add non-negativity constraints ui ≥ 0 and vi ≥ 0
The new problem is equivalent to the original problem:

for any original solution there is a new solution with the same value
− if xi ≥ 0 then set ui = xi and vi = 0, otherwise set ui = 0 and vi = −xi

and vice versa
− given a new solution, set xi = (ui − vi)
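The u − v substitution can be checked numerically. A minimal sketch (the one-variable LP and the use of SciPy are my own, not from the lecture): minimize x subject to −x ≤ 3 with x unrestricted in sign has optimum x = −3, and the optimum is unchanged after rewriting x = u − v with u, v ≥ 0.

```python
# Sketch: checking the x = u - v trick on a tiny assumed LP with SciPy.
from scipy.optimize import linprog

# original LP: minimize x subject to -x <= 3, x unrestricted in sign
orig = linprog(c=[1.0], A_ub=[[-1.0]], b_ub=[3.0],
               bounds=[(None, None)], method="highs")

# rewritten LP: substitute x = u - v with u, v >= 0,
# i.e. minimize u - v subject to -(u - v) <= 3
std = linprog(c=[1.0, -1.0], A_ub=[[-1.0, 1.0]], b_ub=[3.0],
              bounds=[(0, None), (0, None)], method="highs")

print(orig.fun, std.fun)   # both -3.0
```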
Instead of standard form, we can also get so-called slack form:
– non-negativity constraint for each variable
– all other constraints are =, not ≥ or ≤
Standard form (or slack form): convenient for developing LP algorithms
When modeling a problem: just use general form
Algorithms for solving LP’s
simplex method
− worst-case running time is exponential
− fast in practice

interior-point methods
− worst-case running time is polynomial in the input size in bits
− some are slow in practice, others are competitive with the simplex method
LP when the dimension (= number of variables) is constant
− can be solved in linear time (see the course Advanced Algorithms)
Modeling a problem as an LP
decide what the variables are (what are the choices to be made?)
write the objective function to be optimized (should be linear)
write the constraints on the variables (should be linear)
Example: Max Flow
Flow: function f : V × V → R satisfying
  capacity constraint: 0 ≤ f(u,v) ≤ c(u,v) for all nodes u,v
  flow conservation: for all nodes u ≠ s, t we have flow in = flow out: ∑v in V f(v,u) = ∑v in V f(u,v)

value of a flow: |f| = ∑v in V f(s,v) − ∑v in V f(v,s)
(figure: an example flow network from source s to sink t; each edge is labeled "flow / capacity", e.g. "1 / 5" means flow = 1, capacity = 5)
Modeling Max Flow as an LP
decide what the variables are (what are the choices to be made?)
for each edge (u,v) introduce variable xuv ( xuv represents f(u,v) )
write the objective function to be optimized (should be linear)
maximize ∑v in V xsv − ∑v in V xvs (note: linear function)
write the constraints on the variables (should be linear)
xuv ≥ 0 for all pairs of nodes u,v
xuv ≤ c(u,v) for all pairs of nodes u,v
∑v in V xvu − ∑v in V xuv = 0 for all nodes u ≠ s, t
(note: linear functions)
Modeling Max Flow as an LP
Now write it down nicely
maximize ∑v in V xsv − ∑v in V xvs

subject to xuv ≥ 0 for all pairs of nodes u,v
           xuv ≤ c(u,v) for all pairs of nodes u,v
           ∑v in V xvu − ∑v in V xuv = 0 for all nodes u ≠ s, t
Conclusion: Max Flow can trivially be written as an LP
(but dedicated max-flow algorithms are faster than using general LP algorithms)
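To make the correspondence concrete, here is a sketch that builds exactly this LP for a small flow network (the five-edge graph is my own example, not the one from the slides) and solves it with SciPy's linprog; the capacity constraints become variable bounds, and conservation at each internal node becomes an equality row.

```python
# Sketch: Max Flow as an LP on an assumed 5-edge network, solved with SciPy.
from scipy.optimize import linprog

cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1, ("a", "t"): 2, ("b", "t"): 3}
edges = list(cap)
idx = {e: i for i, e in enumerate(edges)}

# objective: maximize net flow out of s, i.e. minimize its negation
c = [0.0] * len(edges)
for (u, v), i in idx.items():
    if u == "s":
        c[i] -= 1.0
    if v == "s":
        c[i] += 1.0

# flow conservation (flow in - flow out = 0) at every node except s and t
A_eq, b_eq = [], []
for node in ("a", "b"):
    row = [0.0] * len(edges)
    for (u, v), i in idx.items():
        if v == node:
            row[i] += 1.0
        if u == node:
            row[i] -= 1.0
    A_eq.append(row)
    b_eq.append(0.0)

# capacity constraints 0 <= x_uv <= c(u,v) expressed as bounds
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, cap[e]) for e in edges], method="highs")
print(-res.fun)   # maximum flow value (5 for this network)
```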
Example: Shortest Paths
Shortest paths

weighted, directed graph G = (V,E)

weight (or: length) of a path = sum of the edge weights

δ(u,v) = distance from u to v = min weight of any path from u to v

shortest path from u to v = any path from u to v of weight δ(u,v)

(figure: a weighted, directed graph on vertices v1,…,v7; an example path of weight 2 illustrates δ(v1,v5) = 2)

Is δ(u,v) always well defined? No, not if there are negative-weight cycles.
Modeling single-source single-target shortest path as an LP

Problem: compute the distance δ(s,t) from a given source s to a given target t

decide what the variables are (what are the choices to be made?)
for each vertex v introduce a variable xv ( xv represents δ(s,v) )

write the objective function to be optimized (should be linear)
maximize xt (not minimize: the constraints below only bound each xv from above, so minimizing would be meaningless)

write the constraints on the variables (should be linear)
xv ≤ xu + w(u,v) for all edges (u,v) in E
xs = 0
Modeling single-source single-target shortest path as an LP
variables: for each vertex v we have a variable xv
LP: maximize xt
subject to xv ≤ xu + w(u,v) for all edges (u,v) in E
           xs = 0
Lemma: optimal solution to LP = δ(s,t).
Proof. (assume for simplicity that δ(s,t) is bounded)

≥ : consider the solution where we set xv = δ(s,v) for all v
− this solution is feasible and has value δ(s,t), hence opt solution ≥ δ(s,t)

≤ : consider an optimal solution, and a shortest path s = v0, v1, …, vk, vk+1 = t
− prove by induction that xvi ≤ δ(s,vi), hence opt solution ≤ δ(s,t)
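A sketch of this LP on a three-vertex example (the graph, its weights, and the use of SciPy are my assumptions): with edges s→a of weight 1, a→t of weight 2 and s→t of weight 4, maximizing xt returns δ(s,t) = 3.

```python
# Sketch: the shortest-path LP on an assumed 3-vertex graph, solved with SciPy.
from scipy.optimize import linprog

w = {("s", "a"): 1.0, ("a", "t"): 2.0, ("s", "t"): 4.0}
nodes = ["s", "a", "t"]
ni = {v: i for i, v in enumerate(nodes)}

# one constraint x_v - x_u <= w(u,v) per edge
A_ub, b_ub = [], []
for (u, v), wt in w.items():
    row = [0.0] * len(nodes)
    row[ni[v]] += 1.0
    row[ni[u]] -= 1.0
    A_ub.append(row)
    b_ub.append(wt)

# x_s = 0 (fixed via its bounds), other variables free of sign;
# maximize x_t by minimizing -x_t
c = [0.0] * len(nodes)
c[ni["t"]] = -1.0
bounds = [(0, 0) if v == "s" else (None, None) for v in nodes]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(-res.fun)   # delta(s, t) = 3.0, via the path s -> a -> t
```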
Example: Vertex Cover
G = (V,E) is an undirected graph
vertex cover in G: subset C ⊆ V such that for each edge (u,v) in E we have u in C or v in C (or both)

Vertex Cover (optimization version)
Input: undirected graph G = (V,E)
Problem: compute a vertex cover for G with a minimum number of vertices

Vertex Cover is NP-hard. There is a 2-approximation algorithm running in linear time.
Modeling Vertex Cover as an LP
decide what the variables are (what are the choices to be made?)
for each vertex v introduce a variable xv
( idea: xv = 1 if v is in the cover, xv = 0 if v is not in the cover )

write the objective function to be optimized (should be linear)
minimize ∑v in V xv (note: linear function)

write the constraints on the variables (should be linear)
− for each edge (u,v) write the constraint xu + xv ≥ 1 (note: linear)
− for each vertex v write the constraint xv in {0,1} ← not a linear constraint!
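Relaxing the 0/1-constraint to 0 ≤ xv ≤ 1 gives an ordinary LP; rounding every xv ≥ ½ up to 1 then yields a 2-approximation (the relaxation-and-rounding idea mentioned later in the lecture). A sketch on a 4-cycle (the graph and the use of SciPy are my assumptions):

```python
# Sketch: LP relaxation of Vertex Cover + 0.5-rounding on an assumed 4-cycle.
from scipy.optimize import linprog

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# constraint x_u + x_v >= 1 rewritten as -x_u - x_v <= -1 for linprog
A_ub = []
for (u, v) in edges:
    row = [0.0] * n
    row[u] = row[v] = -1.0
    A_ub.append(row)

# relaxed constraints: 0 <= x_v <= 1 instead of x_v in {0,1}
res = linprog(c=[1.0] * n, A_ub=A_ub, b_ub=[-1.0] * len(edges),
              bounds=[(0, 1)] * n, method="highs")

# rounding: every vertex with fractional value >= 1/2 goes into the cover
cover = [v for v in range(n) if res.x[v] >= 0.5]
assert all(u in cover or v in cover for (u, v) in edges)  # a valid cover
print(res.fun, cover)
```

Each edge constraint forces max(xu, xv) ≥ ½, so the rounded set covers every edge, and its size is at most twice the LP value, which in turn lower-bounds OPT.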
integrality constraint: “xi must be integral”
0/1-constraint: “xi must be 0 or 1”
integer LP: LP where all variables have integrality constraint
0/1-LP: LP where all variables have 0/1-constraint
(of course there are also mixed versions)
Theorem: 0/1-LP is NP-hard.

Proof. Consider the decision problem: is there a feasible solution to a given 0/1-LP?

Which problem do we use in the reduction? We already saw a reduction from Vertex Cover; let’s do another one: 3-SAT. We need to transform a 3-SAT formula into an instance of 0/1-LP:

( x1 ∨ x2 ∨ ¬x3 ) ∧ ( x2 ∨ ¬x4 ∨ ¬x5 ) ∧ ( ¬x2 ∨ x3 ∨ x5 )

introduce a variable yi for each Boolean variable xi, with yi = 1 if xi = TRUE and yi = 0 if xi = FALSE

maximize y1 (not relevant for the decision problem, pick an arbitrary objective function)

subject to y1 + y2 + (1 − y3) ≥ 1
           y2 + (1 − y4) + (1 − y5) ≥ 1
           (1 − y2) + y3 + y5 ≥ 1
           yi in {0,1} for all i
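The reduction can be sanity-checked by brute force: for every 0/1 assignment, the constraints of the example 0/1-LP are satisfied exactly when the corresponding truth assignment satisfies the formula. A small sketch (plain Python; the clause encoding is my own):

```python
# Sketch: brute-force check of the 3-SAT -> 0/1-LP reduction on the example.
from itertools import product

# clauses of (x1 v x2 v ~x3) ^ (x2 v ~x4 v ~x5) ^ (~x2 v x3 v x5),
# stored as (index, negated?) pairs; y[i] stands for x_{i+1}
clauses = [[(0, False), (1, False), (2, True)],
           [(1, False), (3, True), (4, True)],
           [(1, True), (2, False), (4, False)]]

def lp_feasible(y):
    # the 0/1-LP constraints: each clause's sum of y_i / (1 - y_i) terms is >= 1
    return all(sum((1 - y[i]) if neg else y[i] for i, neg in cl) >= 1
               for cl in clauses)

def satisfied(y):
    # the formula itself: y_i = 1 means x_{i+1} = TRUE
    return all(any((y[i] == 0) if neg else (y[i] == 1) for i, neg in cl)
               for cl in clauses)

# the two notions coincide on all 2^5 assignments
for y in product((0, 1), repeat=5):
    assert lp_feasible(y) == satisfied(y)
print("reduction verified on all 32 assignments")
```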
problem can be modeled as a “normal” LP
⇒ problem can be solved using LP algorithms ⇒ problem can be solved efficiently

problem can be modeled as an integer LP (or 0/1-LP)
⇒ problem can be solved using integer-LP (or 0/1-LP) algorithms, but this does not mean the problem can be solved efficiently
− sometimes we can get approximation algorithms by relaxation and rounding (see the course Advanced Algorithms)
− there are solvers (software) for integer LPs that in practice are quite efficient
Summary
what is an LP? what are integer LP and 0/1-LP?
any LP can be written in standard form (or in slack form)
normal (that is, not integer) LP can be solved in polynomial time (with input size measured in bits)
integer LP and 0/1-LP are NP-hard
when modeling a problem as an LP
− define variables and how they relate to the problem
− describe the objective function (should be linear)
− describe the constraints (should be linear; < and > not allowed)
− no need to use standard or slack form, just use the general form