Dynamic Programming · 2020. 8. 21.

Dynamic Programming — Algorithm Design 6.1, 6.2, 6.3


Applications

• In class (today and next time)
  • Weighted interval scheduling
    • Set of weighted intervals with start and finish times
    • Goal: find a maximum-weight subset of non-overlapping intervals
  • Segmented least squares
    • Given n points in the plane, find a small sequence of lines that minimizes the squared error.
  • RNA secondary structure
  • Sequence alignment
    • Given two strings A and B, how many edits (insertions, deletions, relabelings) are needed to turn A into B?
  • Shortest paths with negative weights
    • Given a weighted graph, where edge weights can be negative, find the shortest path between two given vertices.

• Some other famous applications
  • Unix diff for comparing two files
  • Cocke–Younger–Kasami for parsing context-free grammars
  • Viterbi for hidden Markov models
  • …

[Figures: eight weighted intervals j1–j8 with weights 2, 4, 1, 10, 7, 5, 6, 4; points in the x–y plane fit by line segments; two alignments of the same string pair, one with 1 mismatch and 2 gaps, one with 0 mismatches and 4 gaps; a graph from s to t with some negative edge weights]
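The sequence-alignment bullet above is the edit-distance problem. A minimal sketch of its DP solution in Python (my own illustration, not code from the slides): OPT is computed over prefixes of A and B, counting insertions, deletions, and relabelings.

```python
def edit_distance(a, b):
    # dp[i][j] = edits needed to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1   # relabel on mismatch
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + sub)   # match/relabel
    return dp[len(a)][len(b)]

print(edit_distance("kitten", "sitting"))  # 3
```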

Dynamic Programming

• Greedy. Build solution incrementally, optimizing some local criterion.

• Divide-and-conquer. Break up problem into independent subproblems, solve each subproblem, and combine to get solution to original problem.

• Dynamic programming. Break up problem into overlapping subproblems, and build up solutions to larger and larger subproblems.
  • Can be used when the problem has "optimal substructure": the solution can be constructed from optimal solutions to subproblems. Use dynamic programming when subproblems overlap.
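To see why overlapping subproblems matter, a minimal Python sketch (my own example, not from the slides): naive Fibonacci recomputes the same subproblems exponentially often, while a memoized version solves each subproblem once.

```python
from functools import lru_cache

def fib_naive(n):
    # Overlapping subproblems: fib_naive(k) is recomputed many times,
    # giving exponential running time.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion, but each subproblem is solved once and cached: O(n).
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040
```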

Weighted Interval Scheduling

Weighted interval scheduling

• Weighted interval scheduling problem
  • n jobs (intervals)
  • Job i starts at si, finishes at fi, and has weight/value vi.
  • Goal: find a maximum-weight subset of non-overlapping (compatible) jobs.

[Figure: eight jobs j1–j8 with weights 2, 4, 1, 10, 7, 5, 6, 4]

Weighted interval scheduling

• Label/sort jobs by finishing time: f1 ≤ f2 ≤ … ≤ fn
• p(j) = largest index i < j such that job i is compatible with j.

[Figure: jobs j1–j8 sorted by finish time, with weights 2, 4, 1, 10, 7, 5, 6, 4]

p(1) = 0, p(2) = 0, p(3) = 0, p(4) = 1, p(5) = 0, p(6) = 2, p(7) = 3, p(8) = 5

• Optimal solution OPT:
  • Case 1. OPT selects the last job:
    OPT = vn + optimal solution to subproblem on 1, …, p(n)
  • Case 2. OPT does not select the last job:
    OPT = optimal solution to subproblem on 1, …, n−1
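One way to compute p(1), …, p(n) is binary search over the sorted finish times (the slides later use an additional sort by start time; binary search gives the same O(n log n) bound). A sketch, my own code, assuming jobs are given as (start, finish) pairs already sorted by finish time:

```python
import bisect

def compute_p(jobs):
    """jobs: list of (start, finish) pairs, sorted by finish time.
    Returns a 1-indexed list p with p[j] = largest index i < j such that
    job i is compatible with job j (finish_i <= start_j), or 0 if none."""
    finishes = [f for _, f in jobs]
    p = [0]  # p[0] is a dummy so indices match the slides
    for s, _ in jobs:
        # Jobs compatible with j form a prefix of the sorted order, so the
        # count of finish times <= s is exactly the largest compatible index.
        p.append(bisect.bisect_right(finishes, s))
    return p

print(compute_p([(0, 3), (1, 4), (2, 5), (3, 6)]))  # [0, 0, 0, 0, 1]
```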

Weighted interval scheduling

• OPT(j) = value of the optimal solution to the problem consisting of job requests 1, 2, …, j.
  • Case 1. OPT(j) selects job j:
    OPT(j) = vj + optimal solution to subproblem on 1, …, p(j)
  • Case 2. OPT(j) does not select job j:
    OPT(j) = optimal solution to subproblem on 1, …, j−1
• Recursion:

  OPT(j) = 0                                  if j = 0
  OPT(j) = max{ vj + OPT(p(j)), OPT(j−1) }    otherwise

Weighted interval scheduling: brute force

Input: n, s[1..n], f[1..n], v[1..n]

Sort jobs by finish time so that f[1] ≤ f[2] ≤ … ≤ f[n]
Compute p[1], p[2], …, p[n]

Compute-Brute-Force-Opt(j)
  if j = 0
    return 0
  else
    return max(v[j] + Compute-Brute-Force-Opt(p[j]),
               Compute-Brute-Force-Opt(j-1))

OPT(j) = 0 if j = 0; otherwise OPT(j) = max{ vj + OPT(p(j)), OPT(j−1) }

[Figure: recursion tree of Compute-Brute-Force-Opt(5) — each call on j spawns calls on p(j) and j−1, and the same subproblems (marked X) are solved over and over]

Running time Θ(2^n). Avoid recomputation?
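The pseudocode above translates directly to Python (my own transliteration; the 3-job instance below is a hypothetical example, not the slides'). Without caching, the two recursive calls do exponential work in the worst case.

```python
def brute_force_opt(j, v, p):
    """OPT(j) for weighted interval scheduling.
    v, p are 1-indexed lists (index 0 is a dummy entry)."""
    if j == 0:
        return 0
    return max(v[j] + brute_force_opt(p[j], v, p),  # Case 1: take job j
               brute_force_opt(j - 1, v, p))        # Case 2: skip job j

# Hypothetical 3-job instance: job 3 is compatible with job 1 only.
v = [0, 3, 5, 4]   # weights of jobs 1..3
p = [0, 0, 0, 1]   # p(1)=0, p(2)=0, p(3)=1
print(brute_force_opt(3, v, p))  # 7  (take jobs 1 and 3)
```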

Page 58: Dynamic Programming · 2020. 8. 21. · Applications j1 j2 j3 j4 j5 j6 j7 j8 2 4 1 10 7 5 6 4 2 • In class (today and next time) ... • Viterbi for hidden Markov models • ….

Weighted interval scheduling: memoizationInput: n, s[1..n], f[1..n], v[1..n]

Sort jobs by finish time so that f[1] ≤ f[2]≤ … ≤ f[n] Compute p[1], p[2], …, p[n]

for j=1 to nM[j] = null

M[0] = 0.

Compute-Memoized-Opt(j)if M[j] is emptyM[j] = max(v[j] + Compute-Memoized-Opt(p[j]),

Compute-Memoized-Opt(j-1))return M[j]

�14

Page 65: Dynamic Programming

• Running time O(n log n):
  • Sorting takes O(n log n) time.
  • Computing p[1], …, p[n]: O(n log n), using a second sort by start time.
  • Each subproblem is solved once.
  • Solving a subproblem takes constant time.
• Space: O(n).

Weighted interval scheduling: memoization

Input: n, s[1..n], f[1..n], v[1..n]

Sort jobs by finish time so that f[1] ≤ f[2] ≤ … ≤ f[n]
Compute p[1], p[2], …, p[n]

for j=1 to n
  M[j] = empty
M[0] = 0.

Compute-Memoized-Opt(j)
  if M[j] is empty
    M[j] = max(v[j] + Compute-Memoized-Opt(p[j]),
               Compute-Memoized-Opt(j-1))
  return M[j]

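The memoized recursion can be sketched in Python roughly as follows. The function name `schedule_value` and the jobs in the usage example below are made up for illustration; p[j] is found by binary search over the sorted finish times, which stays within the O(n log n) bound above.

```python
from bisect import bisect_right

def schedule_value(jobs):
    """Max total value of non-overlapping jobs, by memoized recursion.
    jobs: list of (start, finish, value) tuples."""
    jobs = sorted(jobs, key=lambda job: job[1])       # sort by finish time
    n = len(jobs)
    s = [0] + [job[0] for job in jobs]                # 1-indexed, as on the slides
    f = [0] + [job[1] for job in jobs]
    v = [0] + [job[2] for job in jobs]
    # p[j] = largest i with f[i] <= s[j], found by binary search (0 if none)
    p = [0] + [bisect_right(f, s[j], 1, n + 1) - 1 for j in range(1, n + 1)]
    M = [None] * (n + 1)                              # memo table, empty = None
    M[0] = 0
    def opt(j):
        if M[j] is None:                              # each subproblem solved once
            M[j] = max(v[j] + opt(p[j]), opt(j - 1))
        return M[j]
    return opt(n)
```

For example, `schedule_value([(0, 3, 2), (1, 5, 4), (4, 7, 4), (6, 9, 7)])` returns 11: the jobs with values 4 and 7 are compatible (the first finishes at 5, the second starts at 6).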

Page 67: Dynamic Programming

Weighted interval scheduling: memoization

Input: n, s[1..n], f[1..n], v[1..n]

Sort jobs by finish time so that f[1] ≤ f[2] ≤ … ≤ f[n]
Compute p[1], p[2], …, p[n]

for j=1 to n
  M[j] = empty
M[0] = 0.

Compute-Memoized-Opt(j)
  if M[j] is empty
    M[j] = max(v[j] + Compute-Memoized-Opt(p[j]),
               Compute-Memoized-Opt(j-1))
  return M[j]

[Figure: the eight weighted intervals j1, …, j8 with their weights]

p(1) = 0   p(2) = 0   p(3) = 0   p(4) = 1
p(5) = 0   p(6) = 2   p(7) = 3   p(8) = 5

i    : 0  1  2  3   4   5   6   7   8
M[i] : 0  4  4  4  11  11  11  11  15

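The M column can be reproduced from the recurrence using only p and v. The weight vector below is an assumption: the slide's figure did not survive extraction, so these weights were reconstructed to be consistent with the table.

```python
# p(j) from the slide; v(j) is an assumed reconstruction consistent with M.
p = [0, 0, 0, 0, 1, 0, 2, 3, 5]          # index 0 is a dummy entry
v = [0, 4, 2, 1, 7, 10, 5, 6, 4]         # assumed weights, not verbatim from the slide
M = [0] * 9
for j in range(1, 9):
    M[j] = max(v[j] + M[p[j]], M[j - 1])
print(M)                                  # [0, 4, 4, 4, 11, 11, 11, 11, 15]
```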

Page 74: Dynamic Programming

• Running time O(n log n):
  • Sorting takes O(n log n) time.
  • Computing p[1], …, p[n]: O(n log n), using a second sort by start time.
  • The for loop: O(n) time; each iteration takes constant time.
• Space: O(n).

Weighted interval scheduling: bottom-up

Compute-Bottom-Up-Opt(n, s[1..n], f[1..n], v[1..n])
  Sort jobs by finish time so that f[1] ≤ f[2] ≤ … ≤ f[n]
  Compute p[1], p[2], …, p[n]
  M[0] = 0.
  for j=1 to n
    M[j] = max(v[j] + M[p[j]], M[j-1])
  return M[n]

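The bottom-up version can be sketched in Python as below. The instance in the test is made up; unlike the slide, p is computed by binary search rather than a second sort, but either choice keeps the total at O(n log n).

```python
from bisect import bisect_right

def schedule_value_bottom_up(jobs):
    """Iterative version: fill M[0..n] left to right.
    jobs: list of (start, finish, value) tuples."""
    jobs = sorted(jobs, key=lambda job: job[1])       # O(n log n)
    n = len(jobs)
    s = [0] + [job[0] for job in jobs]                # 1-indexed arrays
    f = [0] + [job[1] for job in jobs]
    v = [0] + [job[2] for job in jobs]
    p = [0] + [bisect_right(f, s[j], 1, n + 1) - 1    # O(log n) per job
               for j in range(1, n + 1)]
    M = [0] * (n + 1)
    for j in range(1, n + 1):                         # O(n), constant work per step
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M[n]
```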

Page 76: Dynamic Programming

Weighted interval scheduling: bottom-up

Compute-Bottom-Up-Opt(n, s[1..n], f[1..n], v[1..n])
  Sort jobs by finish time so that f[1] ≤ f[2] ≤ … ≤ f[n]
  Compute p[1], p[2], …, p[n]
  M[0] = 0.
  for j=1 to n
    M[j] = max(v[j] + M[p[j]], M[j-1])
  return M[n]

[Figure: the eight weighted intervals j1, …, j8 with their weights]

p(1) = 0   p(2) = 0   p(3) = 0   p(4) = 1
p(5) = 0   p(6) = 2   p(7) = 3   p(8) = 5

i    : 0  1  2  3   4   5   6   7   8
M[i] : 0  4  4  4  11  11  11  11  15


Page 77: Dynamic Programming

• The DP algorithm returns the optimal value. How do we find the solution itself?
• Make a second pass over the filled-in table M:
• Analysis: the number of recursive calls is at most n ⇒ O(n) time.

Weighted interval scheduling: finding solution

Find-Solution(j)
  if j=0
    return emptyset
  else if v[j] + M[p[j]] > M[j-1]
    return {j} ∪ Find-Solution(p[j])
  else
    return Find-Solution(j-1)

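A sketch of this second pass in Python, run against the worked example's table. The weights v are a reconstruction assumed to match the table, not taken verbatim from the slides.

```python
# Table values from the worked example; v is an assumed reconstruction.
p = [0, 0, 0, 0, 1, 0, 2, 3, 5]
v = [0, 4, 2, 1, 7, 10, 5, 6, 4]
M = [0, 4, 4, 4, 11, 11, 11, 11, 15]

def find_solution(j):
    """Trace back through M; at most n recursive calls, so O(n) time."""
    if j == 0:
        return set()
    if v[j] + M[p[j]] > M[j - 1]:        # job j belongs to an optimal solution
        return {j} | find_solution(p[j])
    return find_solution(j - 1)

print(sorted(find_solution(8)))          # [1, 4, 8]: total value 4 + 7 + 4 = 15
```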

Page 78: Dynamic Programming

Segmented Least Squares


Page 82: Dynamic Programming

Least squares

• Least squares.
  • Given n points in the plane: (x1,y1), (x2,y2), …, (xn,yn).
  • Find a line y = ax + b that minimizes the sum of the squared errors:

    SSE = ∑_{i=1}^{n} (yi − a·xi − b)²

• Solution. Calculus ⇒ the minimum error is achieved when

    a = (n ∑i xi·yi − (∑i xi)(∑i yi)) / (n ∑i xi² − (∑i xi)²),    b = (∑i yi − a ∑i xi) / n

[Figure: the n points and the fitted line y = ax + b]
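The closed form translates directly into code. The function name `fit_line` and the sample points are illustrative; note the denominator vanishes when all xi coincide, in which case no finite-slope fit exists.

```python
def fit_line(pts):
    """Least-squares line y = a*x + b through pts = [(x1, y1), ...]."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    sxx = sum(x * x for x, _ in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # undefined if all x equal
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line([(0, 1), (1, 3), (2, 5)])   # points on y = 2x + 1
```

On these three collinear points the fit is exact: a = 2.0 and b = 1.0.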

Page 83: Dynamic Programming

Segmented least squares

• Segmented least squares
  • Points lie roughly on a sequence of line segments.
  • Given n points in the plane (x1,y1), (x2,y2), …, (xn,yn).
  • Find a sequence of lines that minimizes a cost f.
• What is a good choice for f that balances accuracy and the number of lines?

[Figure: points lying roughly on a few line segments]

Page 84: Dynamic Programming

Segmented least squares

• Segmented least squares. Given n points in the plane (x1,y1), (x2,y2), …, (xn,yn) and a constant c > 0, find a sequence of lines that minimizes f = E + cL, where:
  • E = the sum, over all segments, of the squared errors within that segment.
  • L = the number of lines.

[Figure: the n points partitioned into segments, one fitted line per segment]

Page 94: Dynamic Programming

Dynamic programming: multiway choice

• OPT(j) = minimum cost for points p1, p2, …, pj.
• e(i,j) = minimum sum of squared errors for points pi, pi+1, …, pj.
• To compute OPT(j):
  • The last segment uses points pi, pi+1, …, pj for some i.
  • Cost = e(i,j) + c + OPT(i−1).

OPT(j) = 0                                          if j = 0
OPT(j) = min_{1 ≤ i ≤ j} { e(i,j) + c + OPT(i−1) }  otherwise

Page 95: Dynamic Programming

Segmented least squares algorithm

OPT(j) = 0                                          if j = 0
OPT(j) = min_{1 ≤ i ≤ j} { e(i,j) + c + OPT(i−1) }  otherwise

Segmented-least-squares(n, p1, p2, …, pn, c)
  for j=1 to n
    for i=1 to j
      Compute the least squares error e(i,j) for the segment pi, pi+1, …, pj.
  M[0] = 0.
  for j=1 to n
    M[j] = ∞
    for i=1 to j
      M[j] = min(M[j], e(i,j) + c + M[i-1])
  return M[n]
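A Python sketch of the whole algorithm, with e(i,j) computed by the closed form from the least-squares slide. The function names and the test instance are illustrative; degenerate segments (a single point, or all x equal) are handled by fitting a horizontal line through the mean.

```python
def eij(xs, ys, i, j):
    """Minimum SSE of a single line through points i..j (0-indexed, inclusive)."""
    n = j - i + 1
    px, py = xs[i:j + 1], ys[i:j + 1]
    sx, sy = sum(px), sum(py)
    sxy = sum(x * y for x, y in zip(px, py))
    sxx = sum(x * x for x in px)
    denom = n * sxx - sx * sx
    if denom == 0:                       # one point, or all x equal: fit y = mean
        ybar = sy / n
        return sum((y - ybar) ** 2 for y in py)
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return sum((y - a * x - b) ** 2 for x, y in zip(px, py))

def segmented_least_squares(xs, ys, c):
    """M[j] = OPT(j) = minimum cost E + c*L over the first j points."""
    n = len(xs)
    # e[i][j] = SSE of one line through points i..j; O(n) per pair, O(n^3) total
    e = [[eij(xs, ys, i, j) if i <= j else 0.0 for j in range(n)]
         for i in range(n)]
    M = [0.0] * (n + 1)
    for j in range(1, n + 1):
        # last segment is points i..j-1 (0-indexed) for some start i
        M[j] = min(e[i][j - 1] + c + M[i] for i in range(j))
    return M[n]

# Two perfect segments (y = x, then y = 6 - x): optimal cost is 2c
cost = segmented_least_squares([0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 2, 1, 0], c=0.1)
```

With c = 0.1 the optimum splits the points into the two zero-error segments, so the returned cost is approximately 0.2.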

Page 96: Dynamic Programming

• Time:
  • O(n³) for computing e(i,j) for the O(n²) pairs (O(n) per pair).
  • O(n²) for computing M.
  • Total: O(n³).
• Space: O(n²).

Segmented least squares algorithm

Segmented-least-squares(n, p1, p2, …, pn, c)
  for j=1 to n
    for i=1 to j
      Compute the least squares error e(i,j) for the segment pi, pi+1, …, pj.
  M[0] = 0.
  for j=1 to n
    M[j] = ∞
    for i=1 to j
      M[j] = min(M[j], e(i,j) + c + M[i-1])
  return M[n]