
Transcript of Algorithm.ppt

Page 1: Algorithm.ppt

Divide-and-Conquer

• Divide the problem into a number of subproblems.

• Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, solve the subproblems in a straightforward manner.

Page 2: Algorithm.ppt

Divide-and-Conquer

• Combine the solutions to the subproblems into the solution for the original problem.

Page 3: Algorithm.ppt

Merge Sort Algorithm

Divide: Divide the n-element sequence into two subsequences of n/2 elements each.

Conquer: Sort the two subsequences recursively using merge sort.

Combine: Merge the two sorted subsequences.

Page 4: Algorithm.ppt

How to merge two sorted sequences

• We have two subarrays A[p..q] and A[q+1..r], each in sorted order.

• The merge step of merge sort merges them to form a single sorted subarray that replaces the current subarray A[p..r].

Page 5: Algorithm.ppt

Merging Algorithm

Merge( A, p, q, r)

1. n1 ← q - p + 1

2. n2 ← r - q

3. Create arrays L[1..n1+1] and R[1..n2+1]

Page 6: Algorithm.ppt

Merging Algorithm

4. For i ← 1 to n1

5.   do L[i] ← A[p + i - 1]

6. For j ← 1 to n2

7.   do R[j] ← A[q + j]

Page 7: Algorithm.ppt

Merging Algorithm

8. L[n1 + 1] ← ∞

9. R[n2 + 1] ← ∞

10. i ← 1

11. j ← 1

Page 8: Algorithm.ppt

Merging Algorithm

12. For k ← p to r

13.   do if L[i] <= R[j]

14.     then A[k] ← L[i]

15.       i ← i + 1

16.     else A[k] ← R[j]

17.       j ← j + 1
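The pseudocode above translates almost line for line into Python. The sketch below uses 0-based indices (the slides use 1-based) and `math.inf` as the sentinel value written ∞ in lines 8–9:

```python
import math

def merge(A, p, q, r):
    """Merge sorted runs A[p..q] and A[q+1..r] (0-based, inclusive) in place."""
    L = A[p:q + 1] + [math.inf]      # copy left run, append sentinel (lines 4-5, 8)
    R = A[q + 1:r + 1] + [math.inf]  # copy right run, append sentinel (lines 6-7, 9)
    i = j = 0                        # lines 10-11, shifted to 0-based
    for k in range(p, r + 1):        # lines 12-17
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
```

Mirroring the slides' example, `merge(A, 0, 3, 7)` on `A = [2, 4, 5, 7, 1, 2, 3, 6]` leaves `A` fully sorted. The sentinels let the loop avoid checking whether either run has been exhausted.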

Pages 9–17: Algorithm.ppt

Merging Algorithm (worked example)

[Figure, panels (a)–(i): merging L = [2, 4, 5, 7, ∞] and R = [1, 2, 3, 6, ∞] back into A[9..16]; the arrows k, i, j track the current positions in A, L, and R.]

State of A[9..16] after each iteration of lines 12–17:

(a) 2 4 5 7 1 2 3 6   (initial: k = 9, i = 1, j = 1)
(b) 1 4 5 7 1 2 3 6   (A[9] ← R[1] = 1)
(c) 1 2 5 7 1 2 3 6   (A[10] ← L[1] = 2)
(d) 1 2 2 7 1 2 3 6   (A[11] ← R[2] = 2)
(e) 1 2 2 3 1 2 3 6   (A[12] ← R[3] = 3)
(f) 1 2 2 3 4 2 3 6   (A[13] ← L[2] = 4)
(g) 1 2 2 3 4 5 3 6   (A[14] ← L[3] = 5)
(h) 1 2 2 3 4 5 6 6   (A[15] ← R[4] = 6)
(i) 1 2 2 3 4 5 6 7   (A[16] ← L[4] = 7)

Page 18: Algorithm.ppt

Merge Sort Algorithm

Merge-Sort( A, p, r)

1. If p < r

2.   then q ← ⌊(p + r)/2⌋

3.     Merge-Sort( A, p, q)

4.     Merge-Sort( A, q+1, r)

5.     Merge( A, p, q, r)
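The full sort can be sketched in Python as the recursive driver above wrapped around the sentinel-based merge step (0-based, inclusive bounds, matching the earlier Merge procedure rather than being a verbatim copy of it):

```python
import math

def merge_sort(A, p, r):
    """Sort A[p..r] in place (0-based, inclusive bounds)."""
    if p < r:                          # line 1
        q = (p + r) // 2               # line 2: floor of the midpoint
        merge_sort(A, p, q)            # line 3: sort left half
        merge_sort(A, q + 1, r)        # line 4: sort right half
        # line 5: merge the two sorted halves, with ∞ sentinels
        L = A[p:q + 1] + [math.inf]
        R = A[q + 1:r + 1] + [math.inf]
        i = j = 0
        for k in range(p, r + 1):
            if L[i] <= R[j]:
                A[k] = L[i]
                i += 1
            else:
                A[k] = R[j]
                j += 1
```

For example, `merge_sort(A, 0, len(A) - 1)` on the slides' input `[5, 2, 4, 7, 1, 3, 2, 6]` produces `[1, 2, 2, 3, 4, 5, 6, 7]`.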

Page 19: Algorithm.ppt

Operation of Merge Sort

1 2 2 3 4 5 6 7
2 4 5 7 | 1 2 3 6
2 5 | 4 7 | 1 3 | 2 6
5 | 2 | 4 | 7 | 1 | 3 | 2 | 6

(read bottom-up: each level merges pairs of sorted runs of 1, 2, and 4 elements)

Page 20: Algorithm.ppt

Analyzing Divide-and-Conquer Algorithm

When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence equation, or recurrence, which expresses the overall running time in terms of the running time on smaller inputs.

Page 21: Algorithm.ppt

Recurrence

If the problem size is small enough, say n ≤ c for some constant c, the straightforward solution takes constant time, which we write as θ(1).

Page 22: Algorithm.ppt

Recurrence

If we have

• a subproblems, each of which is 1/b the size of the original,

• D(n) time to divide the problem,

Page 23: Algorithm.ppt

Recurrence

• C(n) time to combine the solutions,

then the recurrence is

T(n) = θ(1)                     if n ≤ c
T(n) = aT(n/b) + D(n) + C(n)    otherwise

Page 24: Algorithm.ppt

Recurrence

Divide: The divide step computes the middle of the subarray, which takes constant time: D(n) = θ(1).

Page 25: Algorithm.ppt

Recurrence

Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.

Page 26: Algorithm.ppt

Recurrence

Combine: The Merge procedure takes θ(n) time on an n-element subarray: C(n) = θ(n).

The recurrence is

T(n) = θ(1)              if n = 1
T(n) = 2T(n/2) + θ(n)    if n > 1

Page 27: Algorithm.ppt

Recurrence

Let us rewrite the recurrence as

T(n) = c               if n = 1
T(n) = 2T(n/2) + cn    if n > 1

where c represents the time required to solve problems of size 1.

Page 28: Algorithm.ppt

A Recursion Tree for the Recurrence

T(n) expands to:

          cn
        /    \
   T(n/2)    T(n/2)

Page 29: Algorithm.ppt

A Recursion Tree for the Recurrence

              cn
            /    \
        cn/2      cn/2
       /    \    /    \
  T(n/4) T(n/4) T(n/4) T(n/4)

Page 30: Algorithm.ppt

A Recursion Tree for the Recurrence

              cn                 ....  cn
          cn/2    cn/2           ....  cn
      cn/4  cn/4  cn/4  cn/4     ....  cn
      ...
      c  c  c  c  ...  c  c  c   ....  cn

(height lg n; every level of the fully expanded tree contributes a total cost of cn)

Page 31: Algorithm.ppt

Total Running Time

The fully expanded tree has lg n + 1 levels, and each level contributes a total cost of cn. Therefore T(n) = cn lg n + cn = θ(n lg n).
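The closed form can be checked directly against the recurrence: for n a power of two, unrolling T(n) = 2T(n/2) + cn with T(1) = c gives exactly cn lg n + cn. A small sketch with c = 1:

```python
def T(n, c=1):
    """Evaluate the recurrence T(n) = 2*T(n/2) + c*n, T(1) = c, for n a power of two."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# with c = 1 the closed form is n*lg(n) + n; it matches the recurrence exactly
for k in range(11):
    n = 2 ** k
    assert T(n) == n * k + n   # here k = lg n
```

This only confirms the algebra for powers of two; the θ(n lg n) bound for general n follows from the standard analysis.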

Page 32: Algorithm.ppt

Growth of Functions

We look at input sizes large enough to make only the order of growth of the running time relevant.

Page 33: Algorithm.ppt

Asymptotic Notation

Used to describe the running time of an algorithm; defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, …}.

Page 34: Algorithm.ppt

θ-Notation

θ(g(n)) = { f(n) : there exist positive constants C1, C2, and n0 such that 0 ≤ C1g(n) ≤ f(n) ≤ C2g(n) for all n ≥ n0 }

Page 35: Algorithm.ppt

θ-Notation

[Figure: for all n ≥ n0, f(n) lies between C1g(n) (below) and C2g(n) (above); f(n) = θ(g(n)).]

Page 36: Algorithm.ppt

θ-Notation

For all n ≥ n0, the function f(n) is equal to g(n) to within a constant factor. So g(n) is an asymptotically tight bound for f(n).

Page 37: Algorithm.ppt

θ-Notation

Let us show that (1/2)n² − 3n = θ(n²).

To do so, we must determine C1, C2, and n0 such that C1n² ≤ (1/2)n² − 3n ≤ C2n² for all n ≥ n0.

Page 38: Algorithm.ppt

θ-Notation

Dividing by n²:

C1 ≤ 1/2 − 3/n ≤ C2

By choosing C1 = 1/14, C2 = 1/2, and n0 = 7, we can verify that (1/2)n² − 3n = θ(n²).
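These constants can be sanity-checked numerically. The sketch below uses exact rational arithmetic (`fractions.Fraction`) to avoid floating-point rounding at the boundary n = n0, where the lower inequality holds with equality; a finite spot check, not a proof:

```python
from fractions import Fraction as F

C1, C2, n0 = F(1, 14), F(1, 2), 7

def f(n):
    # f(n) = (1/2) n^2 - 3 n, computed exactly
    return F(n * n, 2) - 3 * n

# C1*n^2 <= f(n) <= C2*n^2 for every n >= n0 in the checked range
for n in range(n0, 5000):
    assert C1 * n * n <= f(n) <= C2 * n * n
```

At n = 7 the lower bound is tight: C1·49 = 7/2 = f(7), which is why n0 cannot be made smaller with this choice of C1.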

Page 39: Algorithm.ppt

O-Notation

O(g(n)) = { f(n) : there exist positive constants C and n0 such that 0 ≤ f(n) ≤ Cg(n) for all n ≥ n0 }

Page 40: Algorithm.ppt

O-Notation

[Figure: for all n ≥ n0, f(n) lies on or below Cg(n); f(n) = O(g(n)).]

Page 41: Algorithm.ppt

O-Notation

The O(n²) bound on the worst-case running time of insertion sort also applies to its running time on every input.

Page 42: Algorithm.ppt

O-Notation

The θ(n²) bound on the worst-case running time of insertion sort, however, does not imply a θ(n²) bound on the running time of insertion sort on every input.

Page 43: Algorithm.ppt

Ω-Notation

Ω(g(n)) = { f(n) : there exist positive constants C and n0 such that 0 ≤ Cg(n) ≤ f(n) for all n ≥ n0 }

Page 44: Algorithm.ppt

Ω-Notation

[Figure: for all n ≥ n0, f(n) lies on or above Cg(n); f(n) = Ω(g(n)).]

Page 45: Algorithm.ppt

Ω-Notation

Since Ω-notation describes a lower bound, we use it to bound the best-case running time of an algorithm; in doing so, we also bound the running time of the algorithm on arbitrary inputs.

Page 46: Algorithm.ppt

o-Notation

We use o-notation to denote an upper bound that is not asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not.

Page 47: Algorithm.ppt

ω-Notation

We use ω-notation to denote a lower bound that is not asymptotically tight.