Dynamic Programming


Transcript of Dynamic Programming

Page 1: Dynamic Programming

Dynamic Programming

Briana B. Morrison

With thanks to Dr. Hung

Page 2: Dynamic Programming

2

Topics

What is Dynamic Programming
Binomial Coefficient
Floyd's Algorithm
Chained Matrix Multiplication
Optimal Binary Search Tree
Traveling Salesperson

Page 3: Dynamic Programming

3

Divide-and-Conquer: a top-down approach. Many smaller instances are computed more than once.

Dynamic programming: a bottom-up approach. Solutions for smaller instances are stored in a table for later use.

Why Dynamic Programming?

Page 4: Dynamic Programming

4

An Algorithm Design Technique
A framework to solve Optimization problems
Elements of Dynamic Programming
Dynamic programming version of a recursive algorithm
Developing a Dynamic Programming Algorithm
– Example: Multiplying a Sequence of Matrices

Dynamic Programming

Page 5: Dynamic Programming

5

Why Dynamic Programming?

• It sometimes happens that the natural way of dividing an instance suggested by the structure of the problem leads us to consider several overlapping subinstances.
• If we solve each of these independently, they will in turn create a large number of identical subinstances.
• If we pay no attention to this duplication, it is likely that we will end up with an inefficient algorithm.
• If, on the other hand, we take advantage of the duplication and solve each subinstance only once, saving the solution for later use, then a more efficient algorithm will result.

Page 6: Dynamic Programming

6

Why Dynamic Programming? …

The underlying idea of dynamic programming is thus quite simple: avoid calculating the same thing twice, usually by keeping a table of known results, which we fill up as subinstances are solved.

• Dynamic programming is a bottom-up technique.
• Examples:

1) Fibonacci numbers
2) Computing a Binomial coefficient

Page 7: Dynamic Programming

7

Dynamic Programming

• Dynamic Programming is a general algorithm design technique.
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems.
• "Programming" here means "planning".

• Main idea:
  • solve several smaller (overlapping) subproblems.
  • record solutions in a table so that each subproblem is only solved once.
  • final state of the table will be (or contain) the solution.

Page 8: Dynamic Programming

8

Dynamic Programming

Define a container to store intermediate results

Access container versus recomputing results

Fibonacci numbers example (top down)
– Use vector to store results as calculated so they are not re-calculated

Page 9: Dynamic Programming

9

Dynamic Programming

Fibonacci numbers:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

Recurrence Relation of Fibonacci numbers

F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2) for n ≥ 2

Page 10: Dynamic Programming

10

Example: Fibonacci numbers

• Recall definition of Fibonacci numbers:

f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n ≥ 2

• Computing the nth Fibonacci number recursively (top-down):

f(n)
f(n-1)            +            f(n-2)
f(n-2) + f(n-3)                f(n-3) + f(n-4)
...

Page 11: Dynamic Programming

11

Fib vs. fibDyn

int fib(int n)
{
   if (n <= 1)
      return n;                     // stopping conditions
   else
      return fib(n-1) + fib(n-2);   // recursive step
}

int fibDyn(int n, vector<int>& fibList)
{
   int fibValue;

   if (fibList[n] >= 0)   // check for a previously computed result and return
      return fibList[n];

   // otherwise execute the recursive algorithm to obtain the result
   if (n <= 1)            // stopping conditions
      fibValue = n;
   else                   // recursive step
      fibValue = fibDyn(n-1, fibList) + fibDyn(n-2, fibList);

   // store the result and return its value
   fibList[n] = fibValue;
   return fibValue;
}
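A minimal usage sketch (my own addition, not from the slides): since fibDyn treats any entry that is >= 0 as already computed, the memo vector has to be sized n+1 and pre-filled with -1 before the first call.

#include <iostream>
#include <vector>
using namespace std;

int fib(int n);                            // as defined above
int fibDyn(int n, vector<int>& fibList);   // as defined above

int main()
{
   int n = 20;
   vector<int> fibList(n + 1, -1);         // -1 marks "not yet computed"

   cout << "fib(20)    = " << fib(n) << endl;              // 6765
   cout << "fibDyn(20) = " << fibDyn(n, fibList) << endl;  // 6765, far fewer calls
   return 0;
}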

Page 12: Dynamic Programming

12

Example: Fibonacci numbers

Computing the nth Fibonacci number using bottom-up iteration:

• f(0) = 0
• f(1) = 1
• f(2) = 0+1 = 1
• f(3) = 1+1 = 2
• f(4) = 1+2 = 3
• f(5) = 2+3 = 5
• ...
• f(n-2) = f(n-3) + f(n-4)
• f(n-1) = f(n-2) + f(n-3)
• f(n) = f(n-1) + f(n-2)
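As a concrete illustration (a sketch of mine, not code from the slides), the bottom-up computation only ever needs the two most recent values:

// Bottom-up (iterative) Fibonacci: O(n) time, O(1) extra space
int fibBottomUp(int n)
{
   if (n <= 1)
      return n;

   int prev = 0, curr = 1;           // f(0), f(1)
   for (int i = 2; i <= n; i++)
   {
      int next = prev + curr;        // f(i) = f(i-1) + f(i-2)
      prev = curr;
      curr = next;
   }
   return curr;                      // f(n)
}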

Page 13: Dynamic Programming

13

Recursive calls for fib(5)

fib(5)
fib(4)   fib(3)
fib(3)   fib(2)      fib(2)   fib(1)
fib(2)   fib(1)   fib(1)   fib(0)      fib(1)   fib(0)
fib(1)   fib(0)

Page 14: Dynamic Programming

14

fib(5) Using Dynamic Programming

fib(5)
fib(4)   fib(3)
fib(3)   fib(2)      fib(2)   fib(1)
fib(2)   fib(1)   fib(1)   fib(0)      fib(1)   fib(0)
fib(1)   fib(0)


Page 15: Dynamic Programming

15

Statistics (function calls)

fib fibDyn

N = 20 21,891 39

N = 40 331,160,281 79
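One way to reproduce these counts (my own instrumentation sketch, assuming global counters are acceptable) is to increment a counter on every call:

#include <iostream>
#include <vector>
using namespace std;

long long fibCalls = 0, fibDynCalls = 0;

long long fibCounted(int n)
{
   fibCalls++;
   if (n <= 1) return n;
   return fibCounted(n-1) + fibCounted(n-2);
}

long long fibDynCounted(int n, vector<long long>& fibList)
{
   fibDynCalls++;
   if (fibList[n] >= 0) return fibList[n];              // memo hit
   long long v = (n <= 1) ? n
                          : fibDynCounted(n-1, fibList) + fibDynCounted(n-2, fibList);
   return fibList[n] = v;                               // store and return
}

int main()
{
   int n = 20;
   vector<long long> fibList(n + 1, -1);
   fibCounted(n);
   fibDynCounted(n, fibList);
   cout << fibCalls << " vs. " << fibDynCalls << endl;  // 21891 vs. 39 for n = 20
   return 0;
}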

Page 16: Dynamic Programming

16

Top down vs. Bottom up

Top-down dynamic programming moves through the recursive process and stores results as the algorithm computes them.

Bottom-up dynamic programming computes all function values in order, starting at the lowest and using previously computed values.

Page 17: Dynamic Programming

17

Examples of Dynamic Programming Algorithms

• Computing binomial coefficients

• Optimal chain matrix multiplication

• Floyd's algorithm for all-pairs shortest paths

• Constructing an optimal binary search tree

• Some instances of difficult discrete optimization problems:
  • travelling salesman
  • knapsack

Page 18: Dynamic Programming

18

A framework to solve Optimization problems

For each current choice:
– Determine what subproblem(s) would remain if this choice were made.
– Recursively find the optimal costs of those subproblems.
– Combine those costs with the cost of the current choice itself to obtain an overall cost for this choice.

Select a current choice that produces the minimum overall cost.

Page 19: Dynamic Programming

19

Elements of Dynamic Programming

Constructing a solution to a problem by building it up dynamically from solutions to smaller (or simpler) sub-problems

– sub-instances are combined to obtain sub-instances of increasing size, until finally arriving at the solution of the original instance.

– make a choice at each step, but the choice may depend on the solutions to sub-problems.

Page 20: Dynamic Programming

20

Elements of Dynamic Programming …

Principle of optimality
– the optimal solution to any nontrivial instance of a problem is a combination of optimal solutions to some of its sub-instances.

Memoization (for overlapping sub-problems)
– avoid calculating the same thing twice,
– usually by keeping a table of known results that fills up as sub-instances are solved.

Page 21: Dynamic Programming

21

Development of a dynamic programming algorithm

Characterize the structure of an optimal solution
– break the problem into sub-problems
– determine whether the principle of optimality applies

Recursively define the value of an optimal solution
– define the value of an optimal solution based on the values of solutions to sub-problems

Compute the value of an optimal solution in a bottom-up fashion
– compute in a bottom-up fashion and save the values along the way
– later steps use the saved values of previous steps

Construct an optimal solution from the computed information

Page 22: Dynamic Programming

22

Binomial Coefficient

Binomial coefficient:

C(n, k) = n! / (k! (n−k)!)   for 0 ≤ k ≤ n

Cannot compute using this formula because of n!

Instead, use the following formula:

Page 23: Dynamic Programming

23

Binomial Using Divide & Conquer

Binomial formula (divide and conquer):

C(n, k) = C(n−1, k−1) + C(n−1, k)   for 0 < k < n
C(n, 0) = C(n, n) = 1

Page 24: Dynamic Programming

24

Binomial using Dynamic Programming

Just like Fibonacci, that formula is very inefficient. Instead, we can use the following:

(a + b)^n = C(n, 0)a^n + … + C(n, i)a^(n−i)b^i + … + C(n, n)b^n

Page 25: Dynamic Programming

25

Bottom-Up

Recursive property:
– B[i][j] = B[i−1][j−1] + B[i−1][j]   for 0 < j < i
– B[i][j] = 1                          for j = 0 or j = i

Page 26: Dynamic Programming

26

Pascal’s Triangle

0 1 2 3 4 … j k

0 1

1 1 1

2 1 2 1

3 1 3 3 1

4 1 4 6 4 1

…    (row i−1 contains B[i−1][j−1] and B[i−1][j])
i    B[i][j] = B[i−1][j−1] + B[i−1][j]
…
n

Page 27: Dynamic Programming

27

Binomial Coefficient

Record the values in a table of n+1 rows and k+1 columns

0 1 2 3 … k-1 k

0 1

1 1 1

2 1 2 1

3 1 3 3 1

...

k 1 1

n-1 1

n 1

The entry in row n, column k is C(n, k); it is computed from C(n−1, k−1) and C(n−1, k) in the row above.

Page 28: Dynamic Programming

28

Binomial Coefficient

ALGORITHM Binomial(n, k)
// Computes C(n, k) by the dynamic programming algorithm
// Input: A pair of nonnegative integers n ≥ k ≥ 0
// Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else
            C[i, j] ← C[i−1, j−1] + C[i−1, j]
return C[n, k]

Efficiency (number of additions):

A(n, k) = Σ(i=1..k) Σ(j=1..i−1) 1  +  Σ(i=k+1..n) Σ(j=1..k) 1
        = Σ(i=1..k) (i−1)  +  Σ(i=k+1..n) k
        = k(k−1)/2 + k(n−k)  ∈  Θ(nk)
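A direct C++ translation of the algorithm above (a sketch of mine, not code from the slides; it stores the whole (n+1)×(k+1) table even though only the previous row is strictly needed):

#include <algorithm>
#include <vector>
using namespace std;

// Bottom-up computation of C(n, k) using C[i][j] = C[i-1][j-1] + C[i-1][j],
// with C[i][0] = C[i][i] = 1 (the Pascal's triangle recurrence).
long long binomial(int n, int k)
{
   vector<vector<long long>> C(n + 1, vector<long long>(k + 1, 0));
   for (int i = 0; i <= n; i++)
      for (int j = 0; j <= min(i, k); j++)
         if (j == 0 || j == i)
            C[i][j] = 1;
         else
            C[i][j] = C[i-1][j-1] + C[i-1][j];
   return C[n][k];
}

// Example: binomial(4, 2) returns 6, matching row 4 of Pascal's triangle.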

Page 29: Dynamic Programming

29

Floyd’s Algorithm: All pairs shortest paths

• Find shortest path when direct path doesn't exist
• In a weighted graph, find shortest paths between every pair of vertices

• Same idea: construct solution through a series of matrices D(0), D(1), … using an initial subset of the vertices as intermediaries.

• Example:

[Figure: example weighted digraph omitted]

Page 30: Dynamic Programming

30

Shortest Path

Optimization problem – more than one candidate for the solution; the solution is the candidate with the optimal value

Solution 1 – brute force
– Find all possible paths, compute the minimum
– Efficiency? Worse than O(n²)

Solution 2 – dynamic programming
– Algorithm that determines only the lengths of the shortest paths
– Modify to produce the shortest paths as well

Page 31: Dynamic Programming

31

Example

1 2 3 4 5

1 0 1 ∞ 1 5

2 9 0 3 2 ∞

3 ∞ ∞ 0 4 ∞

4 ∞ ∞ 2 0 3

5 3 ∞ ∞ ∞ 0

1 2 3 4 5

1 0 1 3 1 4

2 8 0 3 2 5

3 10 11 0 4 7

4 6 7 2 0 3

5 3 4 6 4 0

W – the graph as a weight (adjacency) matrix          D – the shortest-path lengths produced by Floyd’s algorithm

Page 32: Dynamic Programming

32

Meanings

D(0)[2][5] = length[v2, v5] = ∞
D(1)[2][5] = minimum(length[v2,v5], length[v2,v1,v5]) = minimum(∞, 14) = 14
D(2)[2][5] = D(1)[2][5] = 14
D(3)[2][5] = D(2)[2][5] = 14
D(4)[2][5] = minimum(length[v2,v1,v5], length[v2,v4,v5], length[v2,v1,v4,v5], length[v2,v3,v4,v5])
           = minimum(14, 5, 13, 10) = 5
D(5)[2][5] = D(4)[2][5] = 5

Page 33: Dynamic Programming

33

Floyd’s Algorithm

[Figure: example digraph with vertices a, b, c, d and edge weights 1, 2, 3, 6, 7]

D(0)[i][j] = W[i][j]
D(k)[i][j] = min{ D(k−1)[i][j], D(k−1)[i][k] + D(k−1)[k][j] }   for k ≥ 1

Page 34: Dynamic Programming

34

Computing D

D(0) = W. Now compute D(1), then D(2), etc.

Page 35: Dynamic Programming

35

Floyd’s Algorithm: All pairs shortest paths

ALGORITHM Floyd(W[1…n, 1…n])
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            W[i, j] ← min{ W[i, j], W[i, k] + W[k, j] }
return W

Efficiency = ?  Θ(n³)
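The same triple loop in C++ (my own sketch, not from the slides; the INF value used to mark "no edge" is an assumption and is kept small enough that INF plus a weight does not overflow):

#include <vector>
using namespace std;

const int INF = 100000000;   // "no edge" marker; small enough to add safely

// Floyd's algorithm: dist starts out as the weight matrix W and is updated in
// place so that dist[i][j] ends up as the shortest i -> j path length.
void floyd(vector<vector<int>>& dist)
{
   int n = dist.size();
   for (int k = 0; k < n; k++)
      for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++)
            if (dist[i][k] + dist[k][j] < dist[i][j])
               dist[i][j] = dist[i][k] + dist[k][j];
}

// With the 5x5 matrix W from the earlier example (indices shifted to 0-based),
// floyd(dist) reproduces the matrix D shown beside it, e.g. dist[1][4] becomes 5.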

Page 36: Dynamic Programming

36

Example: All-pairs shortest-path problem

Example: Apply Floyd’s algorithm to solve the all-pairs shortest-path problem for the digraph defined by the following weight matrix:

0   2   ∞   1   8
6   0   3   2   ∞
∞   ∞   0   4   ∞
∞   ∞   2   0   3
3   ∞   ∞   ∞   0

Page 37: Dynamic Programming

37

Visualizations

http://www.ifors.ms.unimelb.edu.au/tutorial/path/#list
http://www1.math.luc.edu/~dhardy/java/alg/floyd.html
http://students.ceid.upatras.gr/%7Epapagel/project/kef5_7_2.htm

Page 38: Dynamic Programming

38

Chained Matrix Multiplication

Problem: Matrix-chain multiplication
– a chain <A1, A2, …, An> of n matrices
– find a way that minimizes the number of scalar multiplications to compute the product A1A2…An

Strategy: breaking the problem into sub-problems
– (A1A2...Ak) (Ak+1Ak+2…An)

Recursively define the value of an optimal solution
– m[i,j] = 0   if i = j
– m[i,j] = min over i ≤ k < j of ( m[i,k] + m[k+1,j] + p(i−1) p(k) p(j) )
– for 1 ≤ i ≤ j ≤ n

Page 39: Dynamic Programming

39

Example

Suppose we want to multiply a 2×3 matrix by a 3×4 matrix.

The result is a 2×4 matrix. In general, an i×j matrix times a j×k matrix requires i·j·k elementary multiplications.

Page 40: Dynamic Programming

40

Example

Consider multiplication of four matrices:
A × B × C × D
(20 × 2) (2 × 30) (30 × 12) (12 × 8)

Matrix multiplication is associative: A(B(CD)) = (AB)(CD)

Five different orders for multiplying 4 matrices:
1. A(B(CD))  = 30*12*8 + 2*30*8  + 20*2*8   =  3,680
2. (AB)(CD)  = 20*2*30 + 30*12*8 + 20*30*8  =  8,880
3. A((BC)D)  = 2*30*12 + 2*12*8  + 20*2*8   =  1,232
4. ((AB)C)D  = 20*2*30 + 20*30*12 + 20*12*8 = 10,320
5. (A(BC))D  = 2*30*12 + 20*2*12 + 20*12*8  =  3,120

Page 41: Dynamic Programming

41

Algorithm

int minmult (int n, const int d[], index P[][])
{
    index i, j, k, diagonal;
    int M[1..n][1..n];

    for (i = 1; i <= n; i++)
        M[i][i] = 0;

    for (diagonal = 1; diagonal <= n-1; diagonal++)
        for (i = 1; i <= n-diagonal; i++) {
            j = i + diagonal;
            M[i][j] = minimum over i <= k <= j-1 of
                      (M[i][k] + M[k+1][j] + d[i-1]*d[k]*d[j]);
            P[i][j] = a value of k that gave the minimum;
        }

    return M[1][n];
}
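A runnable C++ version of the same idea (my own sketch; the signature differs from the pseudocode above in that it uses vectors, returns the minimum multiplication count, and fills a split table P):

#include <iostream>
#include <vector>
#include <climits>
using namespace std;

// d has n+1 entries: matrix Ai has dimensions d[i-1] x d[i].
// Returns the minimum number of scalar multiplications; P[i][j] records the best split k.
long long minmult(int n, const vector<int>& d, vector<vector<int>>& P)
{
   vector<vector<long long>> M(n + 1, vector<long long>(n + 1, 0));
   P.assign(n + 1, vector<int>(n + 1, 0));

   for (int diagonal = 1; diagonal <= n - 1; diagonal++)
      for (int i = 1; i <= n - diagonal; i++)
      {
         int j = i + diagonal;
         M[i][j] = LLONG_MAX;
         for (int k = i; k <= j - 1; k++)       // try every split point
         {
            long long cost = M[i][k] + M[k+1][j] + (long long)d[i-1] * d[k] * d[j];
            if (cost < M[i][j]) { M[i][j] = cost; P[i][j] = k; }
         }
      }
   return M[1][n];
}

int main()
{
   vector<int> d = {20, 2, 30, 12, 8};   // A(20x2), B(2x30), C(30x12), D(12x8)
   vector<vector<int>> P;
   cout << minmult(4, d, P) << endl;     // 1232, matching order 3 on the previous slide
   return 0;
}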

Page 42: Dynamic Programming

42

Optimal Binary Trees

Optimal way of constructing a binary search tree

Minimum depth, balanced (if all keys have the same probability of being the search key)

What if the probabilities are not all the same?
Multiply the probability of accessing each key by the number of links followed to reach that key

Page 43: Dynamic Programming

43

Example

Example: keys key1 < key2 < key3 with search probabilities p1 = 0.7, p2 = 0.2, p3 = 0.1.

[Figure: tree with key3 at the root, key2 as its child, key1 below that]

Average search cost of this tree: 3(0.7) + 2(0.2) + 1(0.1) = 2.6

Efficiency of the dynamic programming algorithm: Θ(n³)
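To see why the shape matters, here is a small comparison sketch (my own illustration, not from the slides) between the tree above and the chain that puts the most probable key, key1, at the root:

#include <iostream>
using namespace std;

int main()
{
   double p1 = 0.7, p2 = 0.2, p3 = 0.1;

   // key3 at the root: key1 is 3 links deep, key2 is 2, key3 is 1
   double costKey3Root = 3*p1 + 2*p2 + 1*p3;   // 2.6, as on the slide

   // key1 at the root: key1 is 1 link deep, key2 is 2, key3 is 3
   double costKey1Root = 1*p1 + 2*p2 + 3*p3;   // 1.4, much better

   cout << costKey3Root << " vs. " << costKey1Root << endl;
   return 0;
}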

Page 44: Dynamic Programming

44

Traveling Salesperson

The Traveling Salesman Problem (TSP) is a deceptively simple combinatorial problem. It can be stated very simply:

A salesman spends his time visiting n cities (or nodes) cyclically. In one tour he visits each city just once, and finishes up where he started. In what order should he visit them to minimize the distance traveled?

Page 45: Dynamic Programming

45

Why study?

The problem has some direct importance, since quite a lot of practical applications can be put in this form.

It also has a theoretical importance in complexity theory, since the TSP is one of the class of "NP Complete" combinatorial problems.

NP Complete problems are intractable in the sense that no one has found any really efficient way of solving them for large n.

– They are also known to be more or less equivalent to each other; if you knew how to solve one kind of NP Complete problem you could solve the lot.

Page 46: Dynamic Programming

46

Efficiency

The holy grail is to find a solution algorithm that gives an optimal solution in a time that has a polynomial variation with the size n of the problem.

The best that people have been able to do, however, is to solve it in a time that varies exponentially with n.

Page 47: Dynamic Programming

47

Later…

We’ll get back to the traveling salesperson problem in the next chapter….

Page 48: Dynamic Programming

48

Animations

http://www.pcug.org.au/~dakin/tsp.htm
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html

Page 49: Dynamic Programming

49

Chapter Summary

• Dynamic programming is similar to divide-and-conquer.
• Dynamic programming is a bottom-up approach.
• Dynamic programming stores the results of smaller instances in a table and reuses them instead of recomputing them.
• Two steps in the development of a dynamic programming algorithm:

  • Establish a recursive property
  • Solve an instance of the problem in a bottom-up fashion

Page 50: Dynamic Programming

50

Exercise: Sudoku puzzle

Page 51: Dynamic Programming

51

Rules of Sudoku

• Place a number (1–9) in each blank cell.
• Each row (nine lines from left to right), each column (nine lines from top to bottom), and each 3×3 block bounded by bold lines (nine blocks) contains each number from 1 through 9.

Page 52: Dynamic Programming

52

A Little Help Please…

Try this:
– http://www.ccs.neu.edu/jpt/sudoku/