Dynamic programming. mjserna/docencia/grauA/P16 · 2016. 4. 26.
Dynamic programming.
Course 2016
Fibonacci Recurrence.
n-th Fibonacci Term
INPUT: n ∈ ℕ
QUESTION: Compute Fn = Fn−1 + Fn−2

Recursive-Fibonacci(n)
if n = 0 then
  return 0
else if n = 1 then
  return 1
else
  return Fibonacci(n − 1) + Fibonacci(n − 2)
end if
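As a sketch (the function name `fib` and the Python rendering are choices made here, not from the slides), the recursive algorithm reads:

```python
def fib(n):
    """Naive recursive Fibonacci, mirroring Recursive-Fibonacci(n)."""
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        # Two recursive calls: the same subproblems are recomputed many times.
        return fib(n - 1) + fib(n - 2)
```

For instance, fib(7) returns 13, but the number of calls grows exponentially with n.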
Computing F7.
As Fn+1/Fn → (1 + √5)/2 ≈ 1.61803, Fn grows like 1.6^n, and the naive recursion makes on the order of 1.6^n recursive calls to compute Fn.
(Figure: recursion tree for F(7); subtrees such as F(5), F(4) and F(3) are recomputed many times.)
A DP algorithm.
To avoid repeated computation of the same subproblems, carry out the computation bottom-up and store the partial results in a table.

DP-Fibonacci(n)
Construct table F[0 . . . n]
F[0] = 0
F[1] = 1
for i = 2 to n do
  F[i] = F[i − 1] + F[i − 2]
end for
i    0  1  2  3  4  5  6  7
F[i] 0  1  1  2  3  5  8  13
To get Fn we need O(n) time and O(n) space.
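In Python, the bottom-up table can be sketched as follows (0-indexed lists; the name `dp_fib` is chosen here):

```python
def dp_fib(n):
    """Bottom-up DP-Fibonacci: fill the table F[0..n] once."""
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = 1
    for i in range(2, n + 1):
        # Each entry is computed exactly once from the two previous entries.
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

Keeping only the last two values would reduce the space to O(1).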
(Figure: the recursion tree for F(7) again, full of repeated sub-trees.)
▶ The recursive (top-down) approach is very slow:
▶ too many identical sub-problems and lots of repeated work.
▶ Therefore, go bottom-up, storing partial results in a table.
▶ This allows us to look up solutions of sub-problems instead of recomputing them, which is more efficient.
Dynamic Programming.
Richard Bellman: An introduction to the theory of dynamic programming. RAND, 1953.
Today it would rather be called dynamic planning.

Dynamic programming is a powerful technique of divide-and-conquer type: it efficiently computes recurrences by storing partial results and re-using them when needed.

It explores the space of all possible solutions by decomposing the problem into subproblems, and then building up correct solutions to larger and larger problems.

The number of subproblem occurrences may be exponential, but if the number of distinct subproblems is polynomial, we can get a polynomial-time algorithm by avoiding repeated computation of the same subproblem.
Properties of Dynamic Programming
Dynamic Programming works when:
▶ Optimal sub-structure: an optimal solution to a problem contains optimal solutions to subproblems.
▶ Overlapping subproblems: a recursive solution contains a small number of distinct subproblems, each repeated many times.

Difference with greedy: greedy problems have the greedy-choice property, i.e. locally optimal choices lead to a globally optimal solution. For DP problems the greedy choice is not available; a globally optimal solution requires back-tracking through many choices.
Guideline to implement Dynamic Programming
1. Characterize the structure of an optimal solution: make sure the space of subproblems is not exponential. Define the variables.
2. Define recursively the value of an optimal solution: find the correct recurrence, expressing the solution to a larger problem as a function of solutions to sub-problems.
3. Compute, bottom-up, the cost of a solution: using the recurrence, tabulate solutions to smaller problems until arriving at the value for the whole problem.
4. Construct an optimal solution: trace back from the optimal value.
Implementation of Dynamic Programming
Memoization: the technique of storing the results of subproblems and returning the stored result when the same sub-problem occurs again. It is used to speed up computer programs.

▶ Implementing the DP recurrence directly with recursion can be very inefficient, because it solves the same sub-problems many times.
▶ But if we manage to solve and store the solutions to sub-problems without repeating the computation, recursion + memoization is a clever alternative.
▶ To implement memoization use any dictionary data structure, usually tables or hashing.
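For instance, Fibonacci with a Python dict as the memoization structure (a sketch; names are chosen here):

```python
def mem_fib(n, memo=None):
    """Top-down Fibonacci; the dict plays the role of the dictionary DS."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        # Each distinct subproblem is solved once, then looked up.
        memo[n] = mem_fib(n - 1, memo) + mem_fib(n - 2, memo)
    return memo[n]
```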
Implementation of Dynamic Programming
▶ The other way to implement DP is with iterative algorithms.
▶ DP is a trade-off between running time and storage space.
▶ In general, although the recursive algorithm has the same asymptotic running time as the iterative version, the constant factor hidden in the O is larger because of the overhead of recursion. On the other hand, the memoized version is usually easier to program, more concise and more elegant.

Top-down: recursive. Bottom-up: iterative.
Weighted Activity Selection Problem
Weighted Activity Selection Problem
INPUT: a set S = {1, 2, . . . , n} of activities to be processed by a single resource. Each activity i has a start time si, a finish time fi with fi > si, and a weight wi.
QUESTION: find a set of mutually compatible activities that maximizes ∑_{i∈S} wi.

Recall: the greedy strategy does not always solve this problem.
(Figure: an instance with interval weights 6, 1, 5, 10, 6 and 10.)
Notation for the weighted activity selection problem
We have activities 1, 2, . . . , n with f1 ≤ f2 ≤ · · · ≤ fn and weights wi.

Define p(j) to be the largest integer i < j such that activities i and j are disjoint (p(j) = 0 if no disjoint i < j exists).

Let Opt(j) be the value of the optimal solution to the problem consisting of activities 1 to j, and let Oj be the set of jobs in an optimal solution for 1, . . . , j.
(Figure: six activities on a time line 0–12, with p(1) = 0, p(2) = 0, p(3) = 1, p(4) = 0, p(5) = 3, p(6) = 3.)
Recurrence
Consider the sub-problem on 1, . . . , j. We have two cases:

1.- j ∈ Oj:
▶ wj is part of the solution,
▶ no job in p(j) + 1, . . . , j − 1 is in Oj,
▶ if O_{p(j)} is the optimal solution for 1, . . . , p(j), then Oj = O_{p(j)} ∪ {j} (optimal substructure).

2.- j ∉ Oj: then Oj = Oj−1.

Opt(j) =
  0                                   if j = 0
  max(Opt(p(j)) + wj, Opt(j − 1))     if j ≥ 1
Recursive algorithm
Considering the set of activities A, we start with a pre-processing phase: sort the activities by increasing fi, and compute and tabulate P[j]. The cost of this pre-processing phase is O(n lg n + n).

From now on we assume A is sorted and all p(j) are computed and tabulated in P[1 · · · n].

To compute Opt(j):

R-Opt(j)
if j = 0 then
  return 0
else
  return max(wj + R-Opt(P[j]), R-Opt(j − 1))
end if
Recursive algorithm
(Figure: recursion tree of R-Opt on the example; subproblems such as Opt(3) and Opt(1) appear many times.)
What is the worst-case running time of this algorithm? O(2^n).
Iterative algorithm
Assume the input is the set A of n activities sorted by increasing f, each i with si, wi, and the values p(j) tabulated in P.

I-Opt(n)
Define a table M[0 . . . n]
M[0] := 0
for j = 1 to n do
  M[j] := max(M[P[j]] + wj, M[j − 1])
end for
return M[n]

Notice: this algorithm gives only the numerical maximum weight.

Time complexity: O(n) (not counting the O(n lg n) pre-processing).
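A Python sketch of I-Opt; the representation of activities as (s, f, w) triples sorted by finish time, and the use of binary search to build P, are choices made here, not prescribed by the slides:

```python
from bisect import bisect_right

def max_weight(activities):
    """activities: list of (s, f, w) triples sorted by finish time.
    Returns the maximum total weight of mutually compatible activities."""
    n = len(activities)
    finishes = [f for _, f, _ in activities]
    # p[j]: the last activity (1-based) finishing no later than activity j starts.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect_right(finishes, activities[j - 1][0])
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        w = activities[j - 1][2]
        M[j] = max(M[p[j]] + w, M[j - 1])
    return M[n]
```

On the six activities of the worked example, max_weight([(1, 5, 1), (0, 8, 2), (7, 9, 2), (1, 11, 3), (9, 12, 1), (10, 13, 2)]) returns 5.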
Iterative algorithm: Example
i  si  fi  wi  P
1   1   5   1  0
2   0   8   2  0
3   7   9   2  1
4   1  11   3  0
5   9  12   1  3
6  10  13   2  3

j     0  1  2  3  4  5  6
M[j]  0  1  2  3  3  4  5

Sol.: max weight = 5.
Iterative algorithm: Returning the selected activities
List-Sol(n)
Define a table M[0 . . . n]; M[0] := 0
for j = 1 to n do
  M[j] := max(M[P[j]] + wj, M[j − 1])
end for
j := n
while j > 0 do
  if M[P[j]] + wj > M[j − 1] then
    output j; j := P[j]
  else
    j := j − 1
  end if
end while
return M[n]

In the previous example we get {1, 3, 6} and value = 5.
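A self-contained Python sketch of filling M and tracing back the selected activities (representation choices are made here):

```python
from bisect import bisect_right

def solve(activities):
    """activities: list of (s, f, w) sorted by finish time.
    Returns (max weight, sorted 1-based indices of selected activities)."""
    n = len(activities)
    finishes = [f for _, f, _ in activities]
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect_right(finishes, activities[j - 1][0])
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(M[p[j]] + activities[j - 1][2], M[j - 1])
    # Trace back: activity j is selected iff taking it beats skipping it.
    sel, j = [], n
    while j > 0:
        if M[p[j]] + activities[j - 1][2] > M[j - 1]:
            sel.append(j)
            j = p[j]
        else:
            j -= 1
    return M[n], sorted(sel)
```

On the example above this returns (5, [1, 3, 6]).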
The DP recursive algorithm: Memoization
Notice we only have to solve n different subproblems: Opt(1) to Opt(n). A memoized recursive algorithm stores the solution of each sub-problem in a dictionary data structure DS, so it could use tables, hashing, . . . In the algorithm below we use a table M[ ].

Mem-Find-Sol(j)
if j = 0 then
  return 0
else if M[j] ≠ ∅ then
  return M[j]
else
  M[j] := max(Mem-Find-Sol(P[j]) + wj, Mem-Find-Sol(j − 1))
  return M[j]
end if
Time complexity: O(n)
The DP recursive algorithm: Memoization
To also get the list of selected activities (once M has been filled by Mem-Find-Sol):

Mem-Opt(j)
if j = 0 then
  return ∅
else if M[P[j]] + wj > M[j − 1] then
  return {j} together with Mem-Opt(P[j])
else
  return Mem-Opt(j − 1)
end if
Time complexity: O(n)
0-1 Knapsack
0-1 Knapsack
INPUT: a set I = {1, . . . , n} of items that can NOT be fractioned, each item i with weight wi and value vi, and a maximum permissible weight W.
QUESTION: select a subset of items with total weight at most W that maximizes the total value.

Recall that we can NOT take fractions of items.
Characterize the structure of an optimal solution and define the recurrence
We need a new variable w. Let v[i, w] be the maximum value (optimum) we can get from objects 1, 2, . . . , i with a maximum weight of w (the remaining free weight). We wish to compute v[n, W].

To compute v[i, w] we have two possibilities:
▶ the i-th element is not part of the solution, and we compute v[i − 1, w];
▶ or the i-th element is part of the solution: we add vi and compute v[i − 1, w − wi].

This gives the recurrence

v[i, w] =
  0                         if i = 0 or w = 0
  v[i − 1, w − wi] + vi     if i is part of the solution
  v[i − 1, w]               otherwise
Iterative algorithm
Define a table M = v[0 . . . n, 0 . . . W].

Knapsack(I, W)
for i = 0 to n do
  v[i, 0] := 0
end for
for w = 0 to W do
  v[0, w] := 0
end for
for i = 1 to n do
  for w = 1 to W do
    if wi > w then
      v[i, w] := v[i − 1, w]
    else
      v[i, w] := max{v[i − 1, w], v[i − 1, w − wi] + vi}
    end if
  end for
end for
return v[n, W]
The number of steps is O(nW ).
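A Python sketch of the table-filling algorithm (0-indexed lists; the function name is chosen here):

```python
def knapsack(weights, values, W):
    """0-1 knapsack: returns the maximum value achievable with capacity W."""
    n = len(weights)
    v = [[0] * (W + 1) for _ in range(n + 1)]  # v[0][w] = v[i][0] = 0
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] > w:
                v[i][w] = v[i - 1][w]          # item i does not fit
            else:
                v[i][w] = max(v[i - 1][w],
                              v[i - 1][w - weights[i - 1]] + values[i - 1])
    return v[n][W]
```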
Example.
i    1  2   3   4   5
wi   1  2   5   6   7
vi   1  6  18  22  28

W = 11.

i \ w  0  1  2  3  4   5   6   7   8   9  10  11
0      0  0  0  0  0   0   0   0   0   0   0   0
1      0  1  1  1  1   1   1   1   1   1   1   1
2      0  1  6  7  7   7   7   7   7   7   7   7
3      0  1  6  7  7  18  19  24  25  25  25  25
4      0  1  6  7  7  18  22  24  28  29  29  40
5      0  1  6  7  7  18  22  28  29  34  35  40

For instance, v[5, 11] = max{v[4, 11], v[4, 11 − 7] + 28} = max{40, 7 + 28} = 40.
Recovering the solution
Define a table M = v[0 . . . n, 0 . . . W] and a list L[i, w] for every cell [i, w].

Sol-Knaps(I, W)
Initialize M
for i = 1 to n do
  for w = 0 to W do
    v[i, w] := max{v[i − 1, w], v[i − 1, w − wi] + vi}
    if v[i − 1, w] < v[i − 1, w − wi] + vi then
      L[i, w] := {i} ∪ L[i − 1, w − wi]
    else
      L[i, w] := L[i − 1, w]
    end if
  end for
end for
return v[n, W] and L[n, W]

Complexity: O(nW) plus the cost of maintaining the L[i, w]'s.
Question: How would you implement the L[i ,w ]’s?
Solution for previous example
i \ w  0  1  2  3  4   5   6   7   8   9  10  11
0      0  0  0  0  0   0   0   0   0   0   0   0
1      0  1  1  1  1   1   1   1   1   1   1   1
2      0  1  6  7  7   7   7   7   7   7   7   7
3      0  1  6  7  7  18  19  24  25  25  25  25
4      0  1  6  7  7  18  22  24  28  29  29  40 (3,4)
5      0  1  6  7  7  18  22  28  29  34  35  40 (3,4)
Question: could you come up with a memoized algorithm? (Hint: use a hash data structure to store solutions to sub-problems.)
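One possible memoized version, sketched here as an answer to the hint; `functools.lru_cache` plays the role of the hash:

```python
from functools import lru_cache

def knapsack_memo(weights, values, W):
    """Top-down 0-1 knapsack; only reachable (i, w) pairs are ever solved."""
    @lru_cache(maxsize=None)
    def v(i, w):
        if i == 0 or w == 0:
            return 0
        if weights[i - 1] > w:
            return v(i - 1, w)
        return max(v(i - 1, w), v(i - 1, w - weights[i - 1]) + values[i - 1])
    return v(len(weights), W)
```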
Complexity
The 0-1 Knapsack problem is NP-complete. Does this mean P = NP?

▶ An algorithm runs in pseudo-polynomial time if its running time is polynomial in the numerical value of the input but exponential in the length of the input.
▶ Recall that given n ∈ Z, the value is n but the length of its representation is ⌈lg n⌉ bits.
▶ 0-1 Knapsack has complexity O(nW), while the input length is O(lg n + lg W).
▶ If W = 2^n the algorithm is exponential in the length. However, the DP algorithm works fine when W = Θ(n).
▶ Consider the unary knapsack problem, where all integers are coded in unary (7 = 1111111). In this case the complexity of the DP algorithm is polynomial in the size, i.e. unary knapsack ∈ P.
Multiplying a Sequence of Matrices
Multiplication of n matrices
INPUT: a sequence of n matrices (A1 × A2 × · · · × An).
QUESTION: minimize the number of operations in the computation of A1 × A2 × · · · × An.

Recall that given matrices A1, A2 with dim(A1) = p0 × p1 and dim(A2) = p1 × p2, the basic algorithm to compute A1 × A2 takes time p0 · p1 · p2:

⎡2 3⎤                 ⎡13 18 23⎤
⎢3 4⎥ × ⎡2 3 4⎤   =   ⎢18 25 32⎥
⎣4 5⎦   ⎣3 4 5⎦       ⎣23 32 41⎦
Recall that matrix multiplication is NOT commutative, so we cannot permute the order of the matrices without changing the result; but it is associative, so we can place parentheses as we wish.

In fact, given A1, . . . , An with dim(Ai) = pi−1 × pi, the problem of how to multiply them so as to minimize the number of operations is equivalent to the problem of how to parenthesize the sequence A1, . . . , An.

Example: consider A1 × A2 × A3, where dim(A1) = 10 × 100, dim(A2) = 100 × 5 and dim(A3) = 5 × 50.
((A1A2)A3) = (10 × 100 × 5) + (10 × 5 × 50) = 7500 operations,
(A1(A2A3)) = (100 × 5 × 50) + (10 × 100 × 50) = 75000 operations.
The order makes a big difference in real computation time.
How many ways are there to parenthesize A1, . . . , An?
For A1 × A2 × A3 × A4 there are five:
(A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), (((A1A2)A3)A4), ((A1(A2A3))A4).

Let P(n) be the number of ways to parenthesize A1, . . . , An. Then

P(n) =
  1                              if n = 1
  ∑_{k=1}^{n−1} P(k) P(n − k)    if n ≥ 2

with solution P(n) = C_{n−1}, where C_n = (1/(n + 1)) (2n choose n) = Ω(4^n / n^{3/2}) are the Catalan numbers. Brute force will take too long!
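The recurrence can be checked directly against the closed form (a sketch; the recursive P is exponential and only meant for tiny n):

```python
from math import comb

def P(n):
    """Number of full parenthesizations of a chain of n matrices."""
    if n == 1:
        return 1
    # Split the chain between position k and k + 1, for every k.
    return sum(P(k) * P(n - k) for k in range(1, n))

def catalan(n):
    """C_n = (1 / (n + 1)) * binom(2n, n)."""
    return comb(2 * n, n) // (n + 1)
```

For instance P(4) = 5 = C_3, and in general P(n) = C_{n−1}.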
1.- Structure of an optimal solution and recursive solution.
Let Ai−j = (Ai Ai+1 · · · Aj). Within an optimal parenthesization of A1 · · · An that splits at k, the parenthesizations of the subchains A1 · · · Ak and Ak+1 · · · An must themselves be optimal (optimal substructure).

Notice that for the chosen split k, 1 ≤ k < n,
cost(A1−n) = cost(A1−k) + cost(Ak+1−n) + p0 pk pn.
Let m[i, j] be the minimum cost of computing Ai × · · · × Aj. Then m[i, j] is given by choosing the k, i ≤ k < j, that minimizes m[i, k] + m[k + 1, j] + pi−1 pk pj. That is,

m[i, j] =
  0                                                      if i = j
  min_{i≤k<j} {m[i, k] + m[k + 1, j] + pi−1 pk pj}       otherwise
2.- Computing the optimal costs
Straightforward implementation of the previous recurrence: as dim(Ai) = pi−1 × pi, the input is given by P = <p0, p1, . . . , pn>.

MCR(P, i, j)
if i = j then
  return 0
end if
m[i, j] := ∞
for k = i to j − 1 do
  q := MCR(P, i, k) + MCR(P, k + 1, j) + pi−1 pk pj
  if q < m[i, j] then
    m[i, j] := q
  end if
end for
return m[i, j]

The time complexity of this algorithm satisfies T(n) ≥ 2 ∑_{i=1}^{n−1} T(i) + n = Ω(2^n).
Dynamic programming approach. Use two auxiliary tables: m[1 . . . n, 1 . . . n] and s[1 . . . n, 1 . . . n].

MCP(P)
for i = 1 to n do
  m[i, i] := 0
end for
for l = 2 to n do
  for i = 1 to n − l + 1 do
    j := i + l − 1
    m[i, j] := ∞
    for k = i to j − 1 do
      q := m[i, k] + m[k + 1, j] + pi−1 pk pj
      if q < m[i, j] then
        m[i, j] := q; s[i, j] := k
      end if
    end for
  end for
end for
return m, s
T (n) = Θ(n3), and space = Θ(n2).
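A Python sketch of MCP together with the reconstruction of the parenthesization (1-based tables as in the slides; the names are chosen here):

```python
def matrix_chain(p):
    """p[i-1] x p[i] is the dimension of A_i, i = 1..n.
    Returns (m, s): minimal costs and split points, both 1-based."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):          # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def parenthesize(s, i, j):
    """Recover the optimal parenthesization from s (cf. Multiplication)."""
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"
```

With p = [3, 5, 3, 2, 4] this gives m[1][4] = 84 and the parenthesization ((A1(A2A3))A4), matching the worked example.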
Example.
We wish to compute A1 A2 A3 A4 with P = <3, 5, 3, 2, 4>; m[1, 1] = m[2, 2] = m[3, 3] = m[4, 4] = 0.
i \ j 1 2 3 4
1 0
2 0
3 0
4 0
l = 2:
i = 1, j = 2, k = 1: m[1, 2] = m[1, 1] + m[2, 2] + 3·5·3 = 45 (A1A2), s[1, 2] = 1
i = 2, j = 3, k = 2: m[2, 3] = m[2, 2] + m[3, 3] + 5·3·2 = 30 (A2A3), s[2, 3] = 2
i = 3, j = 4, k = 3: m[3, 4] = m[3, 3] + m[4, 4] + 3·2·4 = 24 (A3A4), s[3, 4] = 3

i \ j  1  2   3   4
1      0  45
2      1  0   30
3         2   0   24
4             3   0
l = 3, i = 1, j = 3:
m[1, 3] = min{ (k = 1) m[1, 1] + m[2, 3] + 3·5·2 = 60  A1(A2A3),
               (k = 2) m[1, 2] + m[3, 3] + 3·3·2 = 63  (A1A2)A3 } = 60,  s[1, 3] = 1

l = 3, i = 2, j = 4:
m[2, 4] = min{ (k = 2) m[2, 2] + m[3, 4] + 5·3·4 = 84  A2(A3A4),
               (k = 3) m[2, 3] + m[4, 4] + 5·2·4 = 70  (A2A3)A4 } = 70,  s[2, 4] = 3

i \ j  1  2   3   4
1      0  45  60
2      1  0   30  70
3      1  2   0   24
4         3   3   0
l = 4, i = 1, j = 4:
m[1, 4] = min{ (k = 1) m[1, 1] + m[2, 4] + 3·5·4 = 130  A1(A2A3A4),
               (k = 2) m[1, 2] + m[3, 4] + 3·3·4 = 105  (A1A2)(A3A4),
               (k = 3) m[1, 3] + m[4, 4] + 3·2·4 = 84   (A1A2A3)A4 } = 84,  s[1, 4] = 3

i \ j  1  2   3   4
1      0  45  60  84
2      1  0   30  70
3      1  2   0   24
4         3   3   0

(The entries below the diagonal are the split points s[i, j].)
3.- Constructing an optimal solution
We construct an optimal solution from the information in s[1 . . . n, 1 . . . n]: s[i, j] contains the k such that the optimal way to multiply splits as

Ai × · · · × Aj = (Ai × · · · × Ak)(Ak+1 × · · · × Aj).

Moreover, s[i, s[i, j]] determines the k to get A_{i−s[i,j]}, and s[s[i, j] + 1, j] determines the k to get A_{s[i,j]+1−j}. Therefore A_{1−n} = A_{1−s[1,n]} A_{s[1,n]+1−n}.

Multiplication(A, s, i, j)
if i < j then
  X := Multiplication(A, s, i, s[i, j])
  Y := Multiplication(A, s, s[i, j] + 1, j)
  return X × Y
else
  return Ai
end if

For the example, this yields (A1(A2A3))A4.
DNA evolution

(Figure: a DNA string evolving through insertion, mutation and deletion steps, starting from CTAAGTACG.)
Sequence alignment problem
(Figure: what is the best alignment of CTAAGTACG with the evolved string?)
Formalizing the problem
Longest common substring: substring = chain of characters without gaps.

Longest common subsequence: subsequence = ordered chain of characters, possibly with gaps.

Edit distance: convert one string into another one using a given set of operations.

(Figures: examples of each notion on a pair of short strings.)
String similarity problem: The Longest CommonSubsequence
LCS
INPUT: sequences X = <x1 · · · xm> and Y = <y1 · · · yn>.
QUESTION: compute the longest common subsequence.

A sequence Z = <z1 · · · zk> is a subsequence of X if there is a subsequence of integers 1 ≤ i1 < i2 < . . . < ik ≤ m such that zj = x_{ij}. If Z is a subsequence of X and Y, then Z is a common subsequence of X and Y.

Given X = ATATAT, TTT is a subsequence of X.
Greedy approach
LCS: given sequences X = <x1 · · · xm> and Y = <y1 · · · yn>, compute the longest common subsequence.

Greedy(X, Y)
S := ∅
for i = 1 to m do
  for j = i to n do
    if xi = yj then
      S := S ∪ {xi}
    end if
    let yl be such that l = min{a > j | xi = ya}
    let xk be such that k = min{a > i | xa = yj}
    if both exist and l < k then
      S := S ∪ {xi}; i := i + 1; j := l + 1
    else if no such yl, xk exist then
      i := i + 1 and j := j + 1
    end if
  end for
end for

The greedy approach does not work: for X = ATCAC and Y = CGCACACACT the result of greedy is AT, but the solution is ACAC.
Dynamic Programming approach
Characterization of the optimal solution and recurrence.
Let X = <x1 · · · xn> and Y = <y1 · · · ym>.
Let X[i] = <x1 · · · xi> and Y[j] = <y1 · · · yj>.
Define c[i, j] = the length of the LCS of X[i] and Y[j].
We want c[n, m], i.e. the solution of LCS on X and Y.
What is a subproblem?
Characterization of optimal solution and recurrence
A subproblem = something that goes part of the way in converting one string into the other.
Given X = CGATC and Y = ATAC: c[5, 4] = c[4, 3] + 1 (the last symbols match).
If X = CGAT and Y = ATA, to find c[4, 3] take
either the LCS of CGAT and AT,
or the LCS of CGA and ATA:
c[4, 3] = max(c[3, 3], c[4, 2]).

In general, for given X and Y,

c[i, j] =
  0                              if i = 0 or j = 0
  c[i − 1, j − 1] + 1            if xi = yj
  max(c[i, j − 1], c[i − 1, j])  otherwise
c[i, j] =
  c[i − 1, j − 1] + 1            if xi = yj,
  max(c[i, j − 1], c[i − 1, j])  otherwise.

[Recursion tree for c[3, 2]: subproblems such as c[2, 1] and c[1, 1] are recomputed many times.]
The direct top-down implementation of the recurrence
LCS (X, Y)
if m = 0 or n = 0 then
  return 0
else if xm = yn then
  return 1 + LCS (x1 · · · xm−1, y1 · · · yn−1)
else
  return max( LCS (x1 · · · xm−1, y1 · · · yn), LCS (x1 · · · xm, y1 · · · yn−1) )
end if
The algorithm explores a tree of depth Θ(n + m); therefore the time complexity is T(n) = 3^Θ(n+m).
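The same recurrence becomes efficient once each subproblem is cached. A minimal Python sketch of the memoized top-down version (the function name lcs_length is mine, not from the slides):

```python
from functools import lru_cache

def lcs_length(X: str, Y: str) -> int:
    """Length of the longest common subsequence, memoized top-down."""
    @lru_cache(maxsize=None)
    def c(i: int, j: int) -> int:
        if i == 0 or j == 0:
            return 0
        if X[i - 1] == Y[j - 1]:          # xi = yj: extend the diagonal
            return 1 + c(i - 1, j - 1)
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(X), len(Y))

# Each of the O(n*m) subproblems is now solved once.
print(lcs_length("ATCAC", "CGCACACACT"))  # → 4 (e.g. ACAC)
```

The cache turns the 3^Θ(n+m) recursion into O(nm) table lookups, which is exactly what the bottom-up version below does explicitly.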
Bottom-up solution
Avoid the exponential running time by tabulating the subproblems and not repeating their computation.
To memoize the values c[i, j] we use a table c[0 · · · n, 0 · · · m].
Starting from c[0, j] = 0 for 0 ≤ j ≤ m and c[i, 0] = 0 for 0 ≤ i ≤ n, fill in all c[i, j], row by row, left to right.
[Figure: cell c[i, j] is computed from its neighbours c[i − 1, j − 1] (↖), c[i − 1, j] (↑) and c[i, j − 1] (←).]
Use a field b[i, j] attached to c[i, j] to record which of the three neighbours the solution came from.
Bottom-up solution
LCS (X, Y)
for i = 1 to n do
  c[i, 0] := 0
end for
for j = 1 to m do
  c[0, j] := 0
end for
for i = 1 to n do
  for j = 1 to m do
    if xi = yj then
      c[i, j] := c[i − 1, j − 1] + 1, b[i, j] := ↖
    else if c[i − 1, j] ≥ c[i, j − 1] then
      c[i, j] := c[i − 1, j], b[i, j] := ↑
    else
      c[i, j] := c[i, j − 1], b[i, j] := ←
    end if
  end for
end for
Time and space complexity T = O(nm).
Example.
X=(ATCTGAT); Y=(TGCATA). Therefore, m = 6, n = 7
       0    1    2    3    4    5    6
            T    G    C    A    T    A
0      0    0    0    0    0    0    0
1  A   0   ↑0   ↑0   ↑0   ↖1   ←1   ↖1
2  T   0   ↖1   ←1   ←1   ↑1   ↖2   ←2
3  C   0   ↑1   ↑1   ↖2   ←2   ↑2   ↑2
4  T   0   ↖1   ↑1   ↑2   ↑2   ↖3   ←3
5  G   0   ↑1   ↖2   ↑2   ↑2   ↑3   ↑3
6  A   0   ↑1   ↑2   ↑2   ↖3   ↑3   ↖4
7  T   0   ↖1   ↑2   ↑2   ↑3   ↖4   ↑4
Construct the solution
Uses as input the table c[0 · · · n, 0 · · · m] and the arrows b[i, j].
The first call to the algorithm is con-LCS (c, n, m)
con-LCS (c, i, j)
if i = 0 or j = 0 then
  STOP
else if b[i, j] = ↖ then
  con-LCS (c, i − 1, j − 1)
  print xi
else if b[i, j] = ↑ then
  con-LCS (c, i − 1, j)
else
  con-LCS (c, i, j − 1)
end if
The algorithm has time complexity O(n + m).
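The table fill and the con-LCS traceback can be sketched together in Python; here the arrows ↖, ↑, ← are stored as the strings "diag", "up", "left" (all naming is mine):

```python
def lcs(X: str, Y: str) -> str:
    """Bottom-up LCS: fill c[0..n][0..m], then trace the arrows back."""
    n, m = len(X), len(Y)
    c = [[0] * (m + 1) for _ in range(n + 1)]
    b = [[""] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"
    # con-LCS: follow the arrows from (n, m) back to a border cell.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if b[i][j] == "diag":
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif b[i][j] == "up":
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ATCTGAT", "TGCATA"))  # one common subsequence of length 4
```

The fill is O(nm); the traceback shortens i + j at every step, hence O(n + m).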
Edit Distance.
The edit distance between strings X = x1 · · · xn and Y = y1 · · · ymis defined to be the minimum number of edit operations needed totransform X into Y .In our specific case, the set of edit operations is:
• insert(X, i, a) = x1 · · · xi a xi+1 · · · xn.
• delete(X, i) = x1 · · · xi−1 xi+1 · · · xn.
• modify(X, i, a) = x1 · · · xi−1 a xi+1 · · · xn.
The cost of each operation is 1.
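The three operations, written out for Python strings with the slides' 1-based positions (a sketch; the function names mirror the notation above):

```python
def insert(X: str, i: int, a: str) -> str:
    """x1..xi a xi+1..xn : insert a after position i (1-based)."""
    return X[:i] + a + X[i:]

def delete(X: str, i: int) -> str:
    """x1..xi-1 xi+1..xn : remove the i-th character."""
    return X[:i - 1] + X[i:]

def modify(X: str, i: int, a: str) -> str:
    """x1..xi-1 a xi+1..xn : replace the i-th character by a."""
    return X[:i - 1] + a + X[i:]

# The length-2 edit script of the example below: aabab -> babab -> babb
print(delete(modify("aabab", 1, "b"), 4))  # → babb
```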
Example 1

X = aabab and Y = babb.

aabab = X
X′ = insert(X, 0, b)    baabab
X′′ = delete(X′, 2)     babab
Y = delete(X′′, 4)      babb

X = aabab → Y = babb in 3 operations.

A shorter edit sequence:

aabab = X
X′ = modify(X, 1, b)    babab
Y = delete(X′, 4)       babb
Use dynamic programming.
Example 2

E[gat → gel]: transform gat into gel.
g ↔ g : E[1, 1] = 0
a → e : E[2, 2] = 1
t → l : E[3, 3] = 2
so the edit distance from gat to gel is 2.

E[gat → pagat]: transforming gat position by position into pagat gives E[3, 3] = 2 plus the two remaining insertions, i.e. a cost of 4. BUT E[3, 5] = 2. How? Align
--gat
pagat
and only insert p and a in front of gat.

NOTICE: E[gat → pagat] is equivalent to E[pagat → gat] (insertions in one direction are deletions in the other).
1.- Characterize the structure of an optimal solution andset the recurrence.
Assume we want to find the edit distance from X = TATGCAAGTA to Y = CAGTAGTC.
Let E[10, 8] be the min. distance between X and Y.
Consider the prefixes TATGCA and CAGT, and let E[6, 4] = edit distance between TATGCA and CAGT.
• Deleting the last A of X gives distance E[5, 4] + 1 (D).
• Inserting a T at the end of X (it matches the last T of CAGT, comparing TATGCAT with CAGT) gives distance E[6, 3] + 1 (I).
• Modifying the last A of X into a T gives distance E[5, 3] + 1 (M).
Now consider the prefixes TATGCA and CAGTA:
• The distance between TATGCA and CAGTA is E[5, 4] (keep the last A and compare TATGC with CAGT).
To compute the edit distance from X = x1 · · · xn to Y = y1 · · · ym:
Let X[i] = x1 · · · xi and Y[j] = y1 · · · yj, and let E[i, j] = edit distance from X[i] to Y[j].
If xi ≠ yj, the last step from X[i] → Y[j] must be one of:

1. put yj at the end of X[i]: x → x1 · · · xi yj, and then transform x1 · · · xi into y1 · · · yj−1:
   E[i, j] = E[i, j − 1] + 1

2. delete xi: x → x1 · · · xi−1, and then transform x1 · · · xi−1 into y1 · · · yj:
   E[i, j] = E[i − 1, j] + 1

3. change xi into yj: x → x1 · · · xi−1 yj, and then transform x1 · · · xi−1 into y1 · · · yj−1:
   E[i, j] = E[i − 1, j − 1] + 1
Recurrence
Therefore, we have the recurrence

E[i, j] =
  i                                    if j = 0 (converting X[i] → λ),
  j                                    if i = 0 (converting λ → Y[j]),
  min( E[i − 1, j] + 1,                (D)
       E[i, j − 1] + 1,                (I)
       E[i − 1, j − 1] + d(xi, yj) )   otherwise,

where

d(xi, yj) = 0 if xi = yj, and 1 otherwise.
2.- Computing the optimal costs.
Edit (X = x1, . . . , xn, Y = y1, . . . , ym)
for i = 0 to n do
  E[i, 0] := i
end for
for j = 0 to m do
  E[0, j] := j
end for
for i = 1 to n do
  for j = 1 to m do
    if xi = yj then
      d(xi, yj) := 0
    else
      d(xi, yj) := 1
    end if
    E[i, j] := E[i, j − 1] + 1, b[i, j] := ←
    if E[i − 1, j − 1] + d(xi, yj) < E[i, j] then
      E[i, j] := E[i − 1, j − 1] + d(xi, yj), b[i, j] := ↖
    else if E[i − 1, j] + 1 < E[i, j] then
      E[i, j] := E[i − 1, j] + 1, b[i, j] := ↑
    end if
  end for
end for
Time and space complexity T = O(nm).
Example: X = aabab; Y = babb. Therefore, n = 5, m = 4.

       0    1    2    3    4
       λ    b    a    b    b
0  λ   0    1    2    3    4
1  a   1   ↖1   ↖1   ←2   ←3
2  a   2   ↖2   ↖1   ←2   ←3
3  b   3   ↖2   ↑2   ↖1   ←2
4  a   4   ↑3   ↖2   ↑2   ↖2
5  b   5   ↖4   ↑3   ↖2   ↖2
3.- Construct the solution.
Uses as input the table E[0 · · · n, 0 · · · m] and the arrows b[i, j].
The first call to the algorithm is con-Edit (E, n, m)
con-Edit (E, i, j)
if i = 0 or j = 0 then
  STOP
else if b[i, j] = ↖ then
  if xi ≠ yj then change(X, i, yj) end if
  con-Edit (E, i − 1, j − 1)
else if b[i, j] = ↑ then
  delete(X, i), con-Edit (E, i − 1, j)
else
  insert(X, i, yj), con-Edit (E, i, j − 1)
end if
This algorithm has time complexity O(n + m).
Sequence Alignment.
Finding similarities between sequences is important in Bioinformatics.
For example:
• Locate similar subsequences in DNA.
• Locate DNA fragments which may overlap.
Similar sequences may have evolved from a common ancestor; evolution modifies sequences by mutations:
• Replacements.
• Deletions.
• Insertions.
Sequence Alignment.
Given two sequences over the same alphabet:
G C G C A T G G A T T G A G C G A
T G C G C C A T T G A T G A C C A
An alignment:
-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A

Alignments consist of:
• Perfect matches.
• Mismatches.
• Insertions and deletions.
There are many alignments of two sequences. Which is better?
BLAST
To compare alignments, define a scoring function:

s(i) =
  +1 if the ith pair matches,
  −1 if the ith pair mismatches,
  −2 if the ith pair is an InDel.

Score(alignment) = Σi s(i).
Better = the maximum score over all alignments of the two sequences.
Very similar to LCS, but with weights.
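With this scoring function the same DP as for edit distance computes the best alignment score, now maximizing. A Python sketch (this is only the scoring DP stated above, not the BLAST heuristic itself; names are mine):

```python
MATCH, MISMATCH, INDEL = 1, -1, -2   # the +1 / -1 / -2 weights above

def alignment_score(X: str, Y: str) -> int:
    """Maximum alignment score: LCS-style DP with weights."""
    n, m = len(X), len(Y)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        S[i][0] = i * INDEL          # align a prefix of X against gaps
    for j in range(m + 1):
        S[0][j] = j * INDEL          # align a prefix of Y against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            pair = MATCH if X[i - 1] == Y[j - 1] else MISMATCH
            S[i][j] = max(S[i - 1][j - 1] + pair,  # align xi with yj
                          S[i - 1][j] + INDEL,     # gap in Y
                          S[i][j - 1] + INDEL)     # gap in X
    return S[n][m]

print(alignment_score("GCGCATGGATTGAGCGA", "TGCGCCATTGATGACCA"))
```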
Guideline to implement Dynamic Programming
1. Characterize the structure of an optimal solution,
2. define recursively the value of an optimal solution,
3. compute, bottom-up, the cost of a solution,
4. construct an optimal solution.
The ski rental problem.
A set E of n students from the UPC organizes an excursion to La Molina. A ski rental agency has a set S of m pairs of skis, where the height of the ith pair of skis is si.
Ideally, each skier should obtain a pair of skis whose height matches his/her own height as closely as possible. Let the height of the jth student be hj. Design an efficient algorithm to assign the skis to the students so as to minimize
Σj |hj − sσ(j)|, where σ(j) is the pair of skis assigned to student j.
Case n = m: sort the students and the skis by increasing height, and assign them in that order.
This algorithm has a time complexity of O(n log n).
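For n = m the rule is just: sort both sides and pair them up positionwise. A Python sketch (names are mine):

```python
def assign_equal(heights, skis):
    """n = m case: sort both sequences and match positionwise."""
    pairs = list(zip(sorted(heights), sorted(skis)))
    cost = sum(abs(h - s) for h, s in pairs)
    return pairs, cost

pairs, cost = assign_equal([150, 180, 165], [170, 155, 185])
print(cost)  # → 15  (150↔155, 165↔170, 180↔185)
```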
The ski rental problem for n < m.
Lemma (Key observation)
For any pair of skis with s1 < s2 and any pair of students with h1 < h2, there exists an optimal solution in which s1 ↔ h1 and s2 ↔ h2.
Proof
Case s1 ≤ h1 < h2 ≤ s2:
1. If we match s1 ↔ h1 and s2 ↔ h2, the cost is (h1 − s1) + (s2 − h2) = (s2 − s1) − (h2 − h1).
2. If we match s1 ↔ h2 and s2 ↔ h1, the cost is (h2 − s1) + (s2 − h1) = (s2 − s1) + (h2 − h1).
As (h2 − h1) > 0, matching (1) costs less than (2).
Proof cont.
Case s1 ≤ h1 ≤ s2 ≤ h2:
1. If we match s1 ↔ h1 and s2 ↔ h2, the cost is (h1 − s1) + (h2 − s2) = (h2 − s1) − (s2 − h1).
2. If we match s1 ↔ h2 and s2 ↔ h1, the cost is (h2 − s1) + (s2 − h1).
As (s2 − h1) ≥ 0, matching (1) costs no more than (2).
In the same manner we can argue with the case s1 < s2 ≤ h1 < h2.
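The exchange argument can be sanity-checked numerically: over all small integer configurations, the sorted pairing is never worse than the crossed one (a brute-force check, not a proof):

```python
from itertools import product

def cost(pairing):
    """Total mismatch of a list of (height, ski) pairs."""
    return sum(abs(h - s) for h, s in pairing)

# For every s1 < s2 and h1 < h2 over a small integer range,
# the sorted matching (h1,s1),(h2,s2) costs at most the crossed one.
ok = all(
    cost([(h1, s1), (h2, s2)]) <= cost([(h1, s2), (h2, s1)])
    for s1, s2, h1, h2 in product(range(10), repeat=4)
    if s1 < s2 and h1 < h2
)
print(ok)  # → True
```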
Observe that the solutions have the optimal substructure property, and there are repeated subproblems. So the ski rental problem looks like a candidate to be solved by dynamic programming.
Recursive definition of the optimal solution:
• Sort the skis and the students by increasing height.
• Let A[i, j] be the optimal cost of matching the first i students with the first j pairs of skis. Then,

A[i, j] =
  0                                               if i = 0,
  ∞                                               if i > j,
  min( A[i, j − 1], A[i − 1, j − 1] + |hi − sj| ) if 1 ≤ i ≤ j.
A direct top-down implementation of the recurrence yields an exponential algorithm. Therefore, we use the DP paradigm: bottom-up, tabulating the subproblems.
A[0 · · · n, 0 · · · m] is a table with rows = students and columns = skis; n < m.
Stud-Ski (E, S)
Compute |E| = n, |S| = m
sort E and S by increasing height
for i = 1 to n do
  A[i, 0] := ∞
end for
for j = 0 to m do
  A[0, j] := 0
end for
for i = 1 to n do
  for j = 1 to m do
    if j < i then
      A[i, j] := ∞
    else
      A[i, j] := min(A[i, j − 1], A[i − 1, j − 1] + |hi − sj|)
    end if
  end for
end for
print A[n, m]
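The recurrence and table fill, together with the path recovery (rule: if A[i, j] = A[i, j − 1] the jth pair of skis is unused, otherwise student i got ski j), as a Python sketch. Names are mine, and the returned dictionary assumes distinct student heights:

```python
def ski_assignment(heights, skis):
    """A[i][j]: best cost matching the first i students to the first j skis
    (both sorted). Returns (optimal cost, {student height: ski height})."""
    h, s = sorted(heights), sorted(skis)
    n, m = len(h), len(s)
    INF = float("inf")
    A = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(m + 1):
            if j < i:
                A[i][j] = INF        # fewer skis than students: impossible
            else:
                A[i][j] = min(A[i][j - 1],
                              A[i - 1][j - 1] + abs(h[i - 1] - s[j - 1]))
    # walk back through the table to recover which ski each student got
    match, i, j = {}, n, m
    while i > 0:
        if A[i][j] == A[i][j - 1]:
            j -= 1                   # ski j is unused
        else:
            match[h[i - 1]] = s[j - 1]
            i, j = i - 1, j - 1
    return A[n][m], match

cost, match = ski_assignment([150, 165], [140, 155, 170])
print(cost, match)  # → 10.0 {165: 170, 150: 155}
```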
Given E with h1 = 5; h2 = 7; h3 = 8; h4 = 9; h5 = 10,
and S with s1 = 1; s2 = 2; s3 = 4; s4 = 5; s5 = 6; s6 = 7; s7 = 9; s8 = 11; s9 = 12.

        0   1   2   3   4   5   6   7   8   9
0       0   0   0   0   0   0   0   0   0   0
1   5   ∞   4   3   1   0   0   0   0   0   0
2   7   ∞   ∞   9   6   3   1   0   0   0   0
3   8   ∞   ∞   ∞  13   9   5   2   1   1   1
4   9   ∞   ∞   ∞   ∞  17  12   7   2   2   2
5  10   ∞   ∞   ∞   ∞   ∞  21  15   8   3   3

Construct the solution

To find the actual assignment of E to S, recover the path from A[n, m]. Using the table A as input, start from A[n, m]:
1. If A[i, j] = A[i, j − 1], continue at A[i, j − 1] (ski j is unused),
2. else match hi ↔ sj and continue at A[i − 1, j − 1].

In the previous example:
10 ↔ 11, 9 ↔ 9, 8 ↔ 7, 7 ↔ 6, 5 ↔ 5, with total cost A[5, 9] = 3.
Complexity
Θ(n log n) to sort E, Θ(m log m) to sort S, Θ(nm) to compute A. Therefore Θ(n log n + m log m + nm).
But we do not need to store the whole of A: for student i only the columns i ≤ j ≤ i + (m − n) are useful, so a band of width m − n + 1 per row suffices, and we can do it in Θ(n log n + m log m + (m − n + 1)n).
![Page 74: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/74.jpg)
Dynamic Programming in Trees
Trees are nice graphs on which to bound the number of subproblems.
Given a rooted tree T = (V, A) with |V| = n, recall that there are n subtrees in T: one rooted at each vertex.
Therefore, when considering problems defined on trees, it is easy to bound the number of subproblems.
This allows us to use Dynamic Programming to give polynomial-time solutions to "difficult" graph problems when the input is a tree.
![Page 75: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/75.jpg)
The Maximum Weight Independent Set (MWIS).
INPUT: G = (V, E), together with a weight function w : V → R.
QUESTION: Find a maximum-weight S ⊆ V such that no two vertices in S are joined by an edge of G.
For a general G the problem is hard: the case with all weights equal to 1 is already NP-complete.
Given a tree T = (V, E), choose an r ∈ V and root T at r.
Given a rooted tree Tr = (V, A) as an instance of the MWIS: since Tr is rooted, every v ∈ V determines a subtree Tv rooted at v.
Notation: given v ∈ Tr, let F(v) be the set of children of v, and N(v) the set of grandchildren of v.
[Figure: example tree with root a; children b, c, d; grandchildren e, f, g, h, i, j, k; leaves l, m, n, o; each node labelled with its weight.]
![Page 76: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/76.jpg)
Characterization of the optimal solution
Key observation: a maximum WIS Sr in Tr cannot contain two vertices that are father and son.

• If r ∈ Sr: then F(r) ∩ Sr = ∅, so Sr − {r} contains an optimum solution for each Tv with v ∈ N(r).
• If r ∉ Sr: Sr contains an optimum solution for each Tu with u ∈ F(r).
Let M(v) be the weight of a maximum weight independent set Sv in Tv.
Recursive definition of the optimal solution:

M(v) =
  w(v)                                              if v is a leaf,
  max( Σ u∈F(v) M(u), w(v) + Σ u∈N(v) M(u) )        otherwise.
![Page 77: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/77.jpg)
Iterative Algorithm to find Mr
Let v1, . . . , vn = r be the post-order traversal of Tr.
Define a 2 × n table A to store the values of M and M′.

WIS (Tr)
Let v1, . . . , vn = r be the post-order traversal of Tr
for i = 1 to n do
  if vi is a leaf then
    M(vi) := w(vi), M′(vi) := 0
  else
    M′(vi) := Σ u∈N(vi) M(u)
    M(vi) := max( Σ u∈F(vi) M(u), w(vi) + M′(vi) )
  end if
  store M(vi) and M′(vi) in A[2i − 1] and A[2i]
end for
Complexity: space = O(n), time = O(n) (we visit the nodes in post-order, and each node is examined a constant number of times: once as a child and once as a grandchild).

How can we recover the set Sr?
![Page 78: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/78.jpg)
Top-down algorithm to recover the WIS Sr
Let r = vn, . . . , v1 be the level-order traversal of Tr.

Recover-WIS (Tr, A[1 · · · 2n])
Sr := ∅
for i = n downto 1 do
  if M(vi) > M′(vi) then
    Sr := Sr ∪ {vi}
  end if
end for
return Sr
![Page 79: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/79.jpg)
Bottom-up
M(l) = 5, M(m) = 4, M(n) = 6, M(e) = 5,
M(f) = 6, M(o) = 2, M(k) = 7,
M(b) = 4, M′(b) = 11,
M(h) = 8, M′(h) = 15,
M(j) = 9, M′(j) = 2,
M(d) = 10, M′(d) = 16,
M(c) = 23, M′(c) = 20,
M(a) = 6 + M′(b) + M′(c) + M′(d) = 53,
M′(a) = M′(b) + M(c) + M′(d) = 50.

[Tree figure: root a with children b, c, d; below them e, f, g, h, i, j, k; leaves l, m, n, o; each node labelled with its weight.]
![Page 80: Dynamic programming.mjserna/docencia/grauA/P16/... · 2016. 4. 26. · Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming RAND, 1953 Today](https://reader035.fdocuments.in/reader035/viewer/2022071605/61417256a2f84929c30464f3/html5/thumbnails/80.jpg)
Top down
(The same M, M′ values as on the previous slide, now read from the root down to decide which vertices enter S.)

[Tree figure: root a with children b, c, d; below them e, f, g, h, i, j, k; leaves l, m, n, o.]

Maximum Weighted Independent Set:
S = {a, e, f, g, i, j, k, l, m, n}, w(S) = 53.