Advanced algorithms at a glance


Jawaharlal Nehru Engineering College

Laboratory Manual

ADVANCED ALGORITHM

For

Final Year Students CSE

Dept: Computer Science & Engineering (NBA Accredited)

Author JNEC, Aurangabad


FOREWORD

It is my great pleasure to present this laboratory manual for Final Year engineering students for the subject of Advanced Algorithm, keeping in view the vast coverage required to create analytical skills and to enable the students to design and analyze algorithms.

As a student, you may have many questions in your mind regarding the subject; this manual is an attempt to answer them.

As you may be aware, MGM has already been awarded ISO 9000 certification, and it is our endeavour to technically equip our students by taking advantage of the procedural aspects of ISO 9000 certification.

Faculty members are also advised that covering these aspects at the initial stage itself will greatly relieve them in future, as much of the load will be taken care of by the enthusiasm and energies of the students once they are conceptually clear.

Dr. S.D.Deshmukh

Principal


LABORATORY MANUAL CONTENTS

This manual is intended for the Final Year students of Computer Science and Engineering in the subject of Advanced Algorithm. It contains practical/lab sessions related to Advanced Algorithm, implemented in C, covering various aspects of the subject to enhance understanding.

As per the syllabus, along with the study of the Java language, we have made an effort to cover various aspects of Advanced Algorithm, covering the different techniques used to construct and understand its concepts.

Students are advised to go through this manual thoroughly rather than only the topics mentioned in the syllabus, as the practical aspects are the key to understanding and conceptual visualization of the theoretical aspects covered in the books.

Good Luck for your Enjoyable Laboratory Sessions

Prof. D.S.Deshpande (HOD, CSE)          Ms. J.D.Pagare (Lecturer, CSE Dept.)


DOs and DON’Ts in Laboratory:

1. Make entry in the Log Book as soon as you enter the Laboratory.

2. All the students should sit according to their roll numbers starting from their left to right.

3. All the students are supposed to enter the terminal number in the log book.

4. Do not change the terminal on which you are working.

5. All the students are expected to get at least the algorithm of the program/concept to be

implemented.

6. Strictly observe the instructions given by the teacher/Lab Instructor.

Instructions for Laboratory Teachers:

1. Submission of whatever lab work has been completed should be done during the next lab session. Arrangements for printouts related to the submission should be made on the day of the practical assignment itself.

2. Students should be taught to take printouts under the observation of the lab teacher.

3. The promptness of submission should be encouraged by way of marking and evaluation

patterns that will benefit the sincere students.


SUBJECT INDEX

1. Program for Recursive Binary & Linear Search.

2. Program for Heap Sort.

3. Program for Merge Sort.

4. Program for Selection Sort.

5. Program for Insertion Sort.

6. Program for Quick Sort.

7. Program for FFT.

8. Study of NP-Complete theory.

9. Study of Cook's theorem.

10. Study of Sorting network.


1. Program to Perform recursive Binary and Linear search

Aim: Write a program to perform Recursive binary and linear search

ALGORITHM :

BINARY SEARCH

Step 1: if (begin <= end) then go to Step 2, else the search is unsuccessful; stop.
Step 2: mid = (begin + end) / 2
Step 3: if (key element = a[mid]) then the search is successful, else go to Step 4.
/* search the lower half of the array */
Step 4: if (k < a[mid]) then recursively call binary(begin, mid - 1)
/* search the upper half of the array */
Step 5: if (k > a[mid]) then recursively call binary(mid + 1, end)

THEORY:

This technique is applied only if the items to be compared are in ascending or descending order. It uses the divide and conquer method.

EFFICIENCY ANALYSIS:

BEST CASE: 1 comparison; occurs when the item to be searched is present in the middle of the array.
WORST CASE: O(log₂ n); occurs when the key to be searched is either at the first or the last position.
AVERAGE CASE: O(log₂ n).
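As a sketch of why the worst case is logarithmic: each unsuccessful comparison halves the portion of the array still under consideration, so the comparison count C(n) satisfies the recurrence C(n) = C(n/2) + 1 with C(1) = 1, giving C(n) = ⌊log₂ n⌋ + 1. For n = 16, for instance, at most 5 comparisons are needed: 16 → 8 → 4 → 2 → 1.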

PROGRAM FOR BINARY SEARCH:

#include<stdio.h>

#include<conio.h>

int a[20] , n , k ; /* variable declaration */

int bsearch ( int begin , int end ) ; /* Function declaration */

void main ()

{

int i , flag = 0 ;

clrscr () ;

printf ( " \n Enter size of the array n : " ) ;

scanf ( "%d" , &n ) ;

printf ( " \n Enter elements of array in ascending order : " ) ;

for ( i = 0 ; i < n ; i++ )

scanf ( "%d" , &a[i] ) ;


printf ( "\n Enter the key element : " ) ;

scanf ( "%d" , &k ) ;

flag = bsearch ( 0, n - 1 ) ;

if ( flag == 1 )

printf ( " \n Successful search , key element is present " ) ;

else

printf ( " \n Unsuccessful search " ) ;

getch () ;

}

int bsearch ( int begin , int end )

{

int mid ;

if ( begin <= end )

{

mid = ( begin + end ) / 2 ;

if ( k == a[mid] )

return 1 ;

if ( k < a[mid] )

return bsearch ( begin , mid - 1 ) ;

if ( k > a[mid] )

return bsearch ( mid + 1, end ) ;

}

return 0 ;

}

==========Input - Output=============
Enter size of the array n : 5
Enter elements of array in ascending order : 1 2 3 4 5
Enter the key element : 2
Successful search , key element is present

==========Input - Output=============
Enter size of the array n : 5
Enter elements of array in ascending order : 1 2 3 4 5
Enter the key element : 6
Unsuccessful search

LINEAR SEARCH

Step 1:input an array with n number of elements

Step 2:input the key element

Step 3:if( key element = a[1] ) then successful search exit.

Step 4:else go to Step 5

Step 5: Recursively call the same algorithm for the rest n-1 elements.

THEORY:

It is also called SEQUENTIAL SEARCH. In this technique, the search key is


compared with each item sequentially one after another. If the search key is

found, position of the search key is returned.

EFFICIENCY ANALYSIS:

BEST CASE:

SUCCESSFUL SEARCH: 1; occurs when the key to be searched is the first element.
UNSUCCESSFUL SEARCH: n; occurs when the element to be searched is not present in the table.
WORST CASE: n; occurs if the key is either the last element or is not present in the table searched.

AVERAGE CASE: n.
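As a sketch of the average case for a successful search (assuming the key is equally likely to be at any of the n positions): the expected number of comparisons is (1 + 2 + ... + n)/n = (n + 1)/2, which is still O(n); an unsuccessful search always makes all n comparisons.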

PROGRAM FOR LINEAR SEARCH:

# include <stdio.h>

# include <conio.h>

int RLSearch(int a[], int low, int high, int key)

{

if( low>high )

return(-1);

if(key==a[low])

return(low);

else

return( RLSearch(a, low+1, high, key) );

}

void main()

{

int n, a[20], key;

int pos;

int k;

clrscr();

printf("\n Enter How many Numbers : ");

scanf("%d", &n);

printf("\n Enter %d Numbers : \n ", n);

for(k=1; k<=n; k++)

scanf("%d", &a[k]);

printf("\n Enter the Key to be Searched : ");

scanf("%d", &key);

pos = RLSearch(a, 1, n, key);

if( pos == -1 )

printf("\n Key Not Found");

else

printf("\n Key %d Found in position %d", key, pos);

getch();

}


==========Input - Output=============
Enter How many Numbers : 5
Enter 5 Numbers :
1 2 3 4 5
Enter the Key to be Searched : 2
Key 2 Found in position 2

==========Input - Output=============
Enter How many Numbers : 5
Enter 5 Numbers :
1 2 3 4 5
Enter the Key to be Searched : 6
Key Not Found

Conclusion: Thus we have performed recursive binary and linear search.


2. Program to Perform Heap Sort

Aim: Sort a given set of elements using the Heap Sort method.

ALGORITHM :
/* Constructs a heap from the elements of a given array */
/* Input : An array H[1..n] of orderable items */
/* Output : A heap H[1..n] */
Step 1: for i = n/2 downto 1 do Steps 2 to 10
Step 2: k = i ; v = H[k] ; heap = false
Step 3: while not heap and 2 * k <= n do
Step 4: j = 2 * k
Step 5: if j < n // there are two children
Step 6: if H[j] < H[j+1] then j = j + 1
Step 7: if v >= H[j] then heap = true
Step 8: else go to Step 9
Step 9: H[k] = H[j] ; k = j
Step 10: H[k] = v

THEORY:

This sorting technique uses a heap to arrange numbers in ascending or descending order. It uses the TRANSFORM AND CONQUER method. To arrange in ascending order we create a max heap (here built bottom-up), and to arrange in descending order we create a min heap. It has 2 phases:
1. Heap creation phase: the unsorted array is transformed into a heap.
2. Sorting phase: the items are arranged in ascending order (if we use a max heap) or descending order (if we use a min heap).

EFFICIENCY ANALYSIS:

Sorting using the bottom-up approach: O(n log₂ n).
Sorting using the top-down approach: O(n log₂ n).
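As an illustration of the heap creation phase, consider the sample input used with the program below, 7 1 9 3 5. Bottom-up heapify starts at i = n/2 = 2: the element 1 is exchanged with its larger child 5, giving 7 5 9 3 1; then at i = 1 the element 7 is exchanged with its larger child 9, giving the max heap 9 5 7 3 1. The sorting phase then repeatedly deletes the maximum, producing 1 3 5 7 9.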

PROGRAM :

#include<stdio.h>

#include<conio.h>

#define max 100

void heapify () ;

void heapsort () ;

int maxdel ();

int a[max] , b[max] ,n ;

void main ()

{

int i ;

int m ;

clrscr () ;


printf ( " \n Enter array size : " ) ;

scanf ( "%d" , &n ) ;

m = n ;

printf ( " \n Enter elements : \n " ) ;

for ( i = 1 ; i <= n ; i++ )

scanf ( "%d" , &a[i] ) ;

heapsort () ;

printf ( " \n The sorted array is : \n " ) ;

for ( i = 1 ; i <= m ; i++ )

printf ( "\n%d" , b[i] ) ;

getch () ;

}

void heapsort ()

{

int i ;

heapify () ;

for ( i = n ; i >= 1 ; i-- )

b[i] = maxdel () ;

}

void heapify ()

{

int i , e , j ;

// start from middle of a and move up to 1

for ( i = n/2 ; i >= 1 ; i-- )

{

e = a[i] ; //save root of subtree

j = 2 * i ; //left child; j+1 is the right child

while ( j <= n )

{

if ( j < n && a[j] < a[j+1] )

j++ ; //pick larger of children

if ( e >= a[j] )

break; // is a max heap

a[j/2] = a[j] ;

j = j * 2 ; //go to the next level

}

a[j/2] = e ;

}

}

int maxdel ()

{

int x , j , e , i ;

if ( n == 0 )

return -1 ;

x = a[1] ; //save the maximum element

e = a[n] ; //get the last element

n-- ;

// Heap the structure again

i = 1 ;

j = 2;


while ( j <= n )

{

if ( j < n && a[j] < a[j+1] )

j++ ; //Pick larger of two childrean

if ( e >= a[j] )

break ; //subtree is heap

a[i] = a[j] ;

i = j ;

j = j * 2 ; // go to the next level

}

a[i] = e ;

return x ;

}

=============Input - Output============
Enter array size : 5

Enter elements :

7

1

9

3

5

The sorted array is :

1

3

5

7

9

Conclusion: Thus we have performed heap sort.


3. Program to Perform Merge Sort

Aim: Sort a given set of elements using Merge Sort

ALGORITHM :

Mergesort ( A[0..n-1] ) // Sorts array A[0..n-1] by recursive mergesort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in nondecreasing order
Step 1: if n > 1 then go through the following steps
Step 2: copy A[0..⌊n/2⌋-1] to B[0..⌊n/2⌋-1]
Step 3: copy A[⌊n/2⌋..n-1] to C[0..⌈n/2⌉-1]
Step 4: Mergesort( B[0..⌊n/2⌋-1] )
Step 5: Mergesort( C[0..⌈n/2⌉-1] )
Step 6: Merge ( B, C, A )

Merge ( B [ 0…..p-1 ], C [ 0….q-1 ], A [ 0….p+q-1] )

//Merges two sorted arrays into one sorted array

//Input: Arrays B [ 0….p-1 ] and C [ 0….q-1 ] both sorted

//Output: Sorted array A [ 0…p+q-1 ] of the elements of B and C

Step 1: i = 0

Step 2: j = 0

Step 3: k = 0

Step 4: while i < p and j < q do {

Step 5: if B[i] <= C[j] {

Step 6: A[k] = B[i]

Step 7: i = i + 1

} /* end if */

Step 8: else {

Step 9: A[k] = C[j]

Step 10: j = j + 1

} /* end if */

Step 11: k = k + 1

/* end while */

Step 12: if i = p then copy C [ j..q-1 ] to A [ k..p+q-1 ]

Step 13: else copy B [ i….p-1 ] to A [ k…p+q-1 ]

THEORY:

The concept used here is divide and conquer. The steps involved are:

1. DIVIDE: divide the given array consisting of n elements into 2 parts of

n/2 elements each.

2. CONQUER: sort left part and right part of the array recursively using

merge sort.

3. COMBINE: merge the sorted left part and sorted right part to get a

single sorted array.

The key operation in merge sort is combining the sorted left part and sorted

right part into a single sorted array.


EFFICIENCY ANALYSIS:

Time complexity using the Master theorem: O(n log₂ n).
Time complexity using the substitution method: O(n log₂ n).
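As a sketch of the Master theorem calculation: merge sort makes two half-size recursive calls and a linear-time merge, so T(n) = 2T(n/2) + cn. Here a = 2, b = 2 and f(n) = cn = Θ(n^(log_b a)) = Θ(n), so case 2 of the Master theorem gives T(n) = Θ(n log₂ n).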

PROGRAM:

# include <stdio.h>

# include <conio.h>

void Merge(int a[], int low, int mid, int high)

{

int i, j, k, b[20];

i=low; j=mid+1; k=low;

while ( i<=mid && j<=high )

{

if( a[i] <= a[j] )

b[k++] = a[i++] ;

else

b[k++] = a[j++] ;

}

while (i<=mid) b[k++] = a[i++] ;

while (j<=high) b[k++] = a[j++] ;

for(k=low; k<=high; k++)

a[k] = b[k];

}

void MergeSort(int a[], int low, int high)

{

int mid;

if(low >= high)

return;

mid = (low+high)/2 ;

MergeSort(a, low, mid);

MergeSort(a, mid+1, high);

Merge(a, low, mid, high);

}

void main()

{

int n, a[20];

int k;

clrscr();

printf("\n Enter How many Numbers : ");

scanf("%d", &n);

printf("\n Enter %d Numbers : \n ", n);

for(k=1; k<=n; k++)

scanf("%d", &a[k]);

MergeSort(a, 1, n);

printf("\n Sorted Numbers are : \n ");

for(k=1; k<=n; k++)

printf("%5d", a[k]);

getch();


}

================Input - Output===============
Enter how many numbers : 5

Enter 5 numbers :

99

67

85

12

97

Sorted numbers are :

12

67

85

97

99

Conclusion: Thus we have performed merge sort.


4. Program to Perform Selection Sort

Aim: Sort a given set of elements using selection sort.

ALGORITHM :

Selection sort ( A [ 0…n-1 ] )

//The algorithm sorts a given array by selection sort

//Input: An array A [ 0…n-1 ] of orderable elements

//Output: Array A [ 0….n-1 ] sorted in ascending order

Step 1: for i = 0 to n – 2 do

Step 2: min = i

Step 3: for j =i + 1 to n – 1 do

Step 4: if a[j] < a[min] then go to Step 5

Step 5: min = j

//end for

Step 6: swap A[i] and A[min]

//end for

THEORY:

This uses the brute force method for sorting. In the first pass, we find the smallest number and exchange it with the element in the first position; in the second pass, we find the second smallest number and exchange it with the second element, and so on.

EFFICIENCY ANALYSIS:

Time complexity: O(n²).
Even though the time complexity of selection sort is O(n²), the total number of swaps is only n − 1 (one exchange per pass), in both the worst case and the best case. Observe that in the worst case selection sort requires n − 1 exchanges, whereas bubble sort may require n(n − 1)/2 exchanges. This property distinguishes selection sort from other algorithms.
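As a sketch of the O(n²) count: pass i (for i = 0 to n − 2) makes n − 1 − i comparisons, so the total number of comparisons is (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2, which is Θ(n²), while the number of exchanges is at most n − 1.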

PROGRAM :

#include<stdio.h>

#include<conio.h>

int a[20] , n ;

void main ()

{

void selectionsort() ;

int i ;

clrscr () ;

printf ( " \n Enter size of the array : " ) ;

scanf ( "%d" , &n ) ;

printf ( " \n Enter the elements : \n " ) ;

for ( i = 0 ; i < n ; i++ )


scanf ( "%d" , &a[i] ) ;

selectionsort () ;

printf ( " \n The sorted elements are : " ) ;

for ( i = 0 ; i < n ; i++ )

printf ( "\n%d" , a[i] ) ;

getch () ;

}

void selectionsort ()

{

int i , j , min , temp ;

for ( i = 0 ; i < n - 1 ; i++ )

{

min = i ;

for ( j = i + 1 ; j < n ; j++ )

{

if ( a[j] < a[min] )

min = j ;

}

temp = a[i] ;

a[i] = a[min] ;

a[min] = temp ;

}

}

==============Input - Output=============
Enter size of the array : 5
Enter the elements :
7 1 9 3 5
The sorted elements are :
1 3 5 7 9

Conclusion: Thus we have performed selection sort.


5. Program to Perform Insertion Sort

Aim: Sort a given set of elements using Insertion sort method.

THEORY:

This uses the decrease and conquer method to sort a list of elements. This sorting technique is very efficient if the elements to be sorted are already partially arranged in ascending order. Consider an array of n elements to sort. We assume that a[i] is the item to be inserted and assign it to item. Compare item with the elements from position (i-1) down to 0 and insert it into the appropriate place.

EFFICIENCY ANALYSIS:

BEST CASE: O(n); occurs when the elements are already sorted.
WORST CASE: O(n²); occurs when the elements are sorted in descending order.
AVERAGE CASE: O(n²).

ALGORITHM :

Insertionsort ( A [ 0…n-1 ] )

//Input: An array A [ 0….n-1 ] of orderable elements

//Output: Array A [ 0…..n-1 ] sorted in increasing order

Step 1: for i = 1 to n – 1 do {

Step 2: v = A[i]

Step 3: j = i – 1

Step 4: while j >= 0 and A[j] > v do {

Step 5: A[j+1] = A[j]

Step 6: j = j – 1

} //end while

Step 7: A[j+1] = v

} //end for

PROGRAM :

#include<stdio.h>

#include<conio.h>

int a[20] , n ;

void main ()

{

void insertionsort() ;

int i ;

clrscr () ;

printf ( " \n Enter size of the array : " ) ;

scanf ( "%d" , &n ) ;

printf ( " \n Enter the elements : \n " ) ;

for ( i = 0 ; i < n ; i++ )

scanf ( "%d" , &a[i] ) ;


insertionsort () ;

printf ( " \n The sorted elements are : " ) ;

for ( i = 0 ; i < n ; i++ )

printf ( "\n%d" , a[i] ) ;

getch () ;

}

void insertionsort ()

{

int i , j , min ;

for ( i = 0 ; i < n ; i++ )

{

min = a[i] ;

j = i - 1 ;

while ( j >= 0 && a[j] > min )

{

a[j + 1] = a[j] ;

j = j - 1 ;

}

a[j + 1] = min ;

}

}

=============Input - Output=============
Enter size of the array : 5
Enter the elements :
9 2 8 5 1
The sorted elements are :
1 2 5 8 9

Conclusion: Thus we have performed insertion sort.


6. Program to Perform Quick Sort

Aim: Sort a given set of elements using Quick sort method.

THEORY:

This works on the divide and conquer technique and works well on large sets of data. The first step in this technique is to partition the given table into 2 sub-tables such that the elements to the left of the key element are less than the key element and the elements to the right are greater than it. Then the left and right sub-tables are sorted individually and recursively.

EFFICIENCY ANALYSIS:

BEST CASE: O(n log₂ n).
WORST CASE: O(n²); occurs when at each invocation of the procedure the current array is partitioned into 2 sub-arrays with one of them being empty.
AVERAGE CASE: O(n log₂ n).
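As a sketch of these two cases: in the worst case every partition puts the pivot at one end (for example, an already sorted input with the first element as pivot), so T(n) = T(n − 1) + Θ(n), which sums to Θ(n²); in the balanced case each partition splits the array roughly in half, so T(n) = 2T(n/2) + Θ(n), giving Θ(n log₂ n).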

ALGORITHM :

Quicksort ( A [l…r] )

//Sorts a subarray by quicksort

//Input: A subarray A[l…r] of A[0….n-1], defined by its left and right

//indices l and r

//Output: The subarray A[l…r] sorted in increasing order

Step 1:if l < r then {

Step 2:s = partition( A[l…r] ) //s is a split position

Step 3:Quicksort ( A [ l…s-1 ] )

Step 4: Quicksort ( A [ s+1…r ] )

Partition ( A [ l……r ] ) //Partitions a sub array by using its first element as a pivot

//Input: A sub array A[l..r] of A[0…n-1], defined by its left and right

//indices l and r ( l < r ).

//Output: A partition of A[l..r], with the split position returned as this
//function's value.

Step 1: p = a[l]

Step 2: i = l

Step 3: j = r + 1

Step 4: repeat


Step 5: repeat i = i + 1 until A[i] >= p

Step 6: repeat j = j –1 until A[j] <= p

Step 7: swap ( A[i] , A[j] )

Step 8: until i >= j

Step 9: swap ( A[i] , A[j] ) //undo last swap when i >= j

Step 10: swap ( A[l], A[j] )

Step 11: return j

PROGRAM :

# include <stdio.h>

# include <conio.h>

void Exch(int *p, int *q)

{

int temp = *p;

*p = *q;

*q = temp;

}

void QuickSort(int a[], int low, int high)

{

int i, j, key, k;

if(low>=high)

return;

key=low; i=low+1; j=high;

while(i<=j)

{

while ( i <= high && a[i] <= a[key] ) i=i+1;   /* guard i so it cannot run past the sub-array */

while ( a[j] > a[key] ) j=j-1;

if(i<j) Exch(&a[i], &a[j]);

}

Exch(&a[j], &a[key]);

QuickSort(a, low, j-1);

QuickSort(a, j+1, high);

}

void main()

{

int n, a[20];

int k;

clrscr();

printf("\n Enter How many Numbers : ");

scanf("%d", &n);

printf("\n Enter %d Numbers : \n ", n);

for(k=1; k<=n; k++)

scanf("%d", &a[k]);

QuickSort(a, 1, n);

printf("\n Sorted Numbers are : \n ");

for(k=1; k<=n; k++)

printf("%5d", a[k]);

getch();

}


===============Input - Output=============
Enter How many Numbers : 10
Enter 10 Numbers :
4 2 1 9 8 3 5 7 10 6
Sorted Numbers are :
1 2 3 4 5 6 7 8 9 10

Conclusion: Thus we have performed quick sort.


7. Program to Perform Fast Fourier Transform

Aim: Write a program to perform the fast Fourier transform

ALGORITHM:

Algorithm FFT(N, a(x), w, A)
// N = 2^m, a(x) = a_(N-1)x^(N-1) + ... + a_1x + a_0, and w is a primitive Nth root of unity.
// A[0 : N-1] is set to the values a(w^j), 0 <= j <= N-1.
{
    // b and c are polynomials.
    // B, C, and wp are complex arrays.
    if N = 1 then A[0] := a_0;
    else
    {
        n := N/2;
        b(x) := a_(N-2)x^(n-1) + ... + a_2x + a_0;
        c(x) := a_(N-1)x^(n-1) + ... + a_3x + a_1;
        FFT(n, b(x), w^2, B);
        FFT(n, c(x), w^2, C);
        wp[-1] := 1/w;
        for j := 0 to n-1 do
        {
            wp[j] := w * wp[j-1];
            A[j] := B[j] + wp[j]*C[j];
            A[j+n] := B[j] - wp[j]*C[j];
        }
    }
}

THEORY:

A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. There are many distinct FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory. A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing a DFT of N points in the obvious way, using the definition, takes O(N²) arithmetical operations, while an FFT can compute the same


result in only O(N log N) operations. The difference in speed can be substantial,

especially for long data sets where N may be in the thousands or millions—in practice,

the computation time can be reduced by several orders of magnitude in such cases, and

the improvement is roughly proportional to N/log(N). This huge improvement made

many DFT-based algorithms practical; FFTs are of great importance to a wide variety of

applications, from digital signal processing and solving partial differential equations to

algorithms for quick multiplication of large integers.
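To make the difference concrete: for N = 2^20 (about one million points), N² is roughly 10^12 operations, while N log₂ N is only about 2 × 10^7, an improvement of more than four orders of magnitude.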

The most well known FFT algorithms depend upon the factorization of N, but (contrary to popular misconception) there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^(-2πi/N) is an Nth primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms.

Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.

The data.txt file is as follows:
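Based on the x(n) values printed in the example run at the end of this program, data.txt would contain the following eight complex samples (Re and Im separated by whitespace, one sample per line):

3.6 2.6
2.9 6.3
5.6 4.0
4.8 9.1
3.3 0.4
5.9 4.8
5.0 2.6
4.3 4.1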


PROGRAM:

/*----------------------------------------------------------------------------

fft.c - fast Fourier transform and its inverse (both recursively)

----------------------------------------------------------------------------*/

/******************************************************************************

* This file defines a C function fft that, by calling another function *

* fft_rec (also defined), calculates an FFT recursively. Usage: *

* fft(N, x, X); *

* Parameters: *

* N: number of points in FFT (must equal 2^n for some integer n >= 1) *

* x: pointer to N time-domain samples given in rectangular form (Re x, *

* Im x) *

* X: pointer to N frequency-domain samples calculated in rectangular form *

* (Re X, Im X) *

* Similarly, a function ifft with the same parameters is defined that *

* calculates an inverse FFT (IFFT) recursively. Usage: *

* ifft(N, x, X); *

* Here, N and X are given, and x is calculated. *

******************************************************************************/

#include <stdlib.h>

#include <math.h>

/* macros */

#define TWO_PI (6.2831853071795864769252867665590057683943L)

/* function prototypes */

void fft(int N, double (*x)[2], double (*X)[2]);

void fft_rec(int N, int offset, int delta,

double (*x)[2], double (*X)[2], double (*XX)[2]);

void ifft(int N, double (*x)[2], double (*X)[2]);

/* FFT */

void fft(int N, double (*x)[2], double (*X)[2])

{

/* Declare a pointer to scratch space. */

double (*XX)[2] = malloc(2 * N * sizeof(double));

/* Calculate FFT by a recursion. */

fft_rec(N, 0, 1, x, X, XX);

/* Free memory. */

free(XX);

}

/* FFT recursion */

void fft_rec(int N, int offset, int delta,

double (*x)[2], double (*X)[2], double (*XX)[2])

{

int N2 = N/2; /* half the number of points in FFT */

int k; /* generic index */

double cs, sn; /* cosine and sine */

int k00, k01, k10, k11; /* indices for butterflies */

double tmp0, tmp1; /* temporary storage */

if(N != 2) /* Perform recursive step. */


{

/* Calculate two (N/2)-point DFT's. */

fft_rec(N2, offset, 2*delta, x, XX, X);

fft_rec(N2, offset+delta, 2*delta, x, XX, X);

/* Combine the two (N/2)-point DFT's into one N-point DFT. */

for(k=0; k<N2; k++)

{

k00 = offset + k*delta; k01 = k00 + N2*delta;

k10 = offset + 2*k*delta; k11 = k10 + delta;

cs = cos(TWO_PI*k/(double)N); sn = sin(TWO_PI*k/(double)N);

tmp0 = cs * XX[k11][0] + sn * XX[k11][1];

tmp1 = cs * XX[k11][1] - sn * XX[k11][0];

X[k01][0] = XX[k10][0] - tmp0;

X[k01][1] = XX[k10][1] - tmp1;

X[k00][0] = XX[k10][0] + tmp0;

X[k00][1] = XX[k10][1] + tmp1;

}

}

else /* Perform 2-point DFT. */

{

k00 = offset; k01 = k00 + delta;

X[k01][0] = x[k00][0] - x[k01][0];

X[k01][1] = x[k00][1] - x[k01][1];

X[k00][0] = x[k00][0] + x[k01][0];

X[k00][1] = x[k00][1] + x[k01][1];

}

}

/* IFFT */

void ifft(int N, double (*x)[2], double (*X)[2])

{

int N2 = N/2; /* half the number of points in IFFT */

int i; /* generic index */

double tmp0, tmp1; /* temporary storage */

/* Calculate IFFT via reciprocity property of DFT. */

fft(N, X, x);

x[0][0] = x[0][0]/N; x[0][1] = x[0][1]/N;

x[N2][0] = x[N2][0]/N; x[N2][1] = x[N2][1]/N;

for(i=1; i<N2; i++)

{

tmp0 = x[i][0]/N; tmp1 = x[i][1]/N;

x[i][0] = x[N-i][0]/N; x[i][1] = x[N-i][1]/N;

x[N-i][0] = tmp0; x[N-i][1] = tmp1;

}

}

/******************************************************************************

* This program demonstrates how to use the file fft.c to calculate an FFT *

* of given time-domain samples, as well as to calculate an inverse FFT *

* (IFFT) of given frequency-domain samples. First, N complex-valued time- *

* domain samples x, in rectangular form (Re x, Im x), are read from a *

* specified file; the 2N values are assumed to be separated by whitespace. *

* Then, an N-point FFT of these samples is found by calling the function *

* fft, thereby yielding N complex-valued frequency-domain samples X in *

* rectangular form (Re X, Im X). Next, an N-point IFFT of these samples is *

* is found by calling the function ifft, thereby recovering the original *

* samples x. Finally, the calculated samples X are saved to a specified *


* file, if desired. *

******************************************************************************/

#include <stdio.h>

#include <stdlib.h>

#include "fft.c"

int main()

{

int i; /* generic index */

char file[FILENAME_MAX]; /* name of data file */

int N; /* number of points in FFT */

double (*x)[2]; /* pointer to time-domain samples */

double (*X)[2]; /* pointer to frequency-domain samples */

double dummy; /* scratch variable */

FILE *fp; /* file pointer */

/* Get name of input file of time-domain samples x. */

printf("Input file for time-domain samples x(n)? ");

scanf("%s", file);

/* Read through entire file to get number N of points in FFT. */

if(!(fp = fopen(file, "r")))

{

printf(" File \'%s\' could not be opened!", file);

exit(EXIT_FAILURE);

}

N=0;

while(fscanf(fp, "%lg%lg", &dummy, &dummy) == 2) N++;

printf("N = %d", N);

/* Check that N = 2^n for some integer n >= 1. */

if(N >= 2)

{

i = N;

while(i==2*(i/2)) i = i/2; /* While i is even, factor out a 2. */

} /* For N >=2, we now have N = 2^n iff i = 1. */

if(N < 2 || i != 1)

{

printf(", which does not equal 2^n for an integer n >= 1.");

exit(EXIT_FAILURE);

}

/* Allocate time- and frequency-domain memory. */

x = malloc(2 * N * sizeof(double));

X = malloc(2 * N * sizeof(double));

/* Get time-domain samples. */

rewind(fp);

for(i=0; i<N; i++) fscanf(fp, "%lg%lg", &x[i][0], &x[i][1]);

fclose(fp);

/* Calculate FFT. */

fft(N, x, X);

/* Print time-domain samples and resulting frequency-domain samples. */

printf("\nx(n):");

for(i=0; i<N; i++) printf("\n n=%d: %12f %12f", i, x[i][0], x[i][1]);

printf("\nX(k):");

for(i=0; i<N; i++) printf("\n k=%d: %12f %12f", i, X[i][0], X[i][1]);


/* Clear time-domain samples and calculate IFFT. */

for(i=0; i<N; i++) x[i][0] = x[i][1] = 0;

ifft(N, x, X);

/* Print recovered time-domain samples. */

printf("\nx(n):");

for(i=0; i<N; i++) printf("\n n=%d: %12f %12f", i, x[i][0], x[i][1]);

/* Write frequency-domain samples X to a file, if desired. */

printf("\nOutput file for frequency-domain samples X(k)?"

"\n (if none, abort program): ");

scanf("%s", file);

if(!(fp = fopen(file, "w")))

{

printf(" File \'%s\' could not be opened!", file);

exit(EXIT_FAILURE);

}

for(i=0; i<N; i++) fprintf(fp, "%23.15e %23.15e\n", X[i][0], X[i][1]);

fclose(fp);

printf("Samples X(k) were written to file %s.", file);

/* Free memory. */

free(x);

free(X);

return 0;

}

/*============================================================================
 * Program output (example)
 *============================================================================

* Input file for time-domain samples x(n)? data.txt

* N = 8

* x(n):

* n=0: 3.600000 2.600000

* n=1: 2.900000 6.300000

* n=2: 5.600000 4.000000

* n=3: 4.800000 9.100000

* n=4: 3.300000 0.400000

* n=5: 5.900000 4.800000

* n=6: 5.000000 2.600000

* n=7: 4.300000 4.100000

* X(k):

* k=0: 35.400000 33.900000

* k=1: 3.821320 0.892893

* k=2: -5.800000 -3.300000

* k=3: 5.971068 7.042641

* k=4: -0.400000 -14.700000

* k=5: -0.421320 2.307107

* k=6: -1.600000 -3.900000

* k=7: -8.171068 -1.442641

* x(n):

* n=0: 3.600000 2.600000

* n=1: 2.900000 6.300000

* n=2: 5.600000 4.000000

* n=3: 4.800000 9.100000


* n=4: 3.300000 0.400000

* n=5: 5.900000 4.800000

* n=6: 5.000000 2.600000

* n=7: 4.300000 4.100000

* Output file for frequency-domain samples X(k)?

* (if none, abort program): X.txt

* Samples X(k) were written to file X.txt.

*/

Conclusion: Thus we have performed the FFT.


8. Study of NP-Complete Theory

Aim: To study NP-Complete theory

I. Introduction

A problem is said to be polynomial if there exists an algorithm that solves the problem in time T(n) = O(n^c), where c is a constant.
Examples of polynomial problems:
o Sorting: O(n log n), which is O(n²)
o All-pairs shortest path: O(n³)
o Minimum spanning tree: O(E log E), which is O(E²)
A problem is said to be exponential if no polynomial-time algorithm can be developed for it and if we can find an algorithm that solves it in O(n^u(n)), where u(n) goes to infinity as n goes to infinity.

The world of computation can be subdivided into three classes:

1. Polynomial problems (P)

2. Exponential problems (E)

3. Intractable (non-computable) problems (I)

There is a very large and important class of problems that

0. we know how to solve exponentially,

1. we don't know how to solve polynomially, and

2. we don't know if they can be solved polynomially at all

This class is a gray area between the P-class and the E-class. It will be studied in

this chapter.

II. Definition of NP

Definition 1 of NP: A problem is said to be Nondeterministically Polynomial

(NP) if we can find a nondeterministic Turing machine that can solve the problem in a polynomial number of nondeterministic moves.

For those who are not familiar with Turing machines, two alternative definitions

of NP will be developed.

Definition 2 of NP: A problem is said to be NP if

1. its solution comes from a finite set of possibilities, and

2. it takes polynomial time to verify the correctness of a candidate solution

Remark: It is much easier and faster to "grade" a solution than to find a solution

from scratch.

We use NP to designate the class of all nondeterministically polynomial

problems.

Clearly, P is a subset of NP

A very famous open question in Computer Science:


P = NP ?

To give the 3rd alternative definition of NP, we introduce an imaginary, non-implementable instruction, which we call "choose()".

Behavior of "choose()":

1. if a problem has a solution of N components, choose(i) magically returns

the i-th component of the CORRECT solution in constant time

2. if a problem has no solution, choose(i) returns mere "garbage", that is, it

returns an uncertain value.

An NP algorithm is an algorithm that has 2 stages:

1. The first stage is a guessing stage that uses choose() to find a solution to

the problem.

2. The second stage checks the correctness of the solution produced by the

first stage. The time of this stage is polynomial in the input size n.

Template for an NP algorithm:

begin

/* The following for-loop is the guessing stage*/

for i=1 to N do

X[i] := choose(i);

endfor

/* Next is the verification stage */

Write code that does not use "choose" and that

verifies if X[1:N] is a correct solution to the

problem.

end

Remark: For the algorithm above to be polynomial, the solution size N must be

polynomial in n, and the verification stage must be polynomial in n.

Definition 3 of NP: A problem is said to be NP if there exists an NP algorithm for

it.

Example of an NP problem: The Hamiltonian Cycle (HC) problem

1. Input: A graph G

2. Question: Does G have a Hamiltonian Cycle?

Here is an NP algorithm for the HC problem:

begin

/* The following for-loop is the guessing stage*/

for i=1 to n do

X[i] := choose(i);

endfor


/* Next is the verification stage */

for i=1 to n do

for j=i+1 to n do

if X[i] = X[j] then

return(no);

endif

endfor

endfor

for i=1 to n-1 do

if (X[i],X[i+1]) is not an edge then

return(no);

endif

endfor

if (X[n],X[1]) is not an edge then

return(no);

endif

return(yes);

end

The solution size of HC is O(n), and the time of the verification stage is O(n²). Therefore, HC is NP.
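To connect this with the C programs in the earlier exercises, here is a small C sketch of the verification stage only (the guessing stage cannot be implemented on a real machine). The adjacency matrix adj[][], the vertex count n and the certificate array X[] are illustrative assumptions, not part of the algorithm given above.

#include <stdio.h>

#define MAXN 20

int adj[MAXN][MAXN];   /* assumed adjacency matrix of the graph G */
int n;                 /* number of vertices */

/* Verification stage: is X[1..n] a Hamiltonian cycle of G?  Runs in O(n^2). */
int verify_hc(int X[])
{
    int i, j;
    for (i = 1; i <= n; i++)           /* all chosen vertices are distinct */
        for (j = i + 1; j <= n; j++)
            if (X[i] == X[j])
                return 0;              /* no */
    for (i = 1; i < n; i++)            /* consecutive vertices are adjacent */
        if (!adj[X[i]][X[i + 1]])
            return 0;                  /* no */
    if (!adj[X[n]][X[1]])              /* the cycle closes up */
        return 0;                      /* no */
    return 1;                          /* yes */
}

int main(void)
{
    int X[] = { 0, 1, 2, 3, 4 };       /* candidate certificate, 1-based */
    n = 4;                             /* illustration: the 4-cycle 1-2-3-4-1 */
    adj[1][2] = adj[2][1] = 1;
    adj[2][3] = adj[3][2] = 1;
    adj[3][4] = adj[4][3] = 1;
    adj[4][1] = adj[1][4] = 1;
    printf("%s\n", verify_hc(X) ? "yes" : "no");
    return 0;
}

Only this checking step needs to run in polynomial time; finding X[] in the first place is exactly what the choose() stage hides.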

The K-clique problem is NP

1. Input: A graph G and an integer k

2. Question: Does G have a k-clique?

Here is an NP algorithm for the K-clique problem:

begin

/* The following for-loop is the guessing stage*/

for i=1 to k do

X[i] := choose(i);

endfor

/* Next is the verification stage */

for i=1 to k do

for j=i+1 to k do

if (X[i] = X[j] or (X[i],X[j]) is not an edge) then

return(no);

endif

endfor

endfor

return(yes);

end


The solution size of the k-clique is O(k) = O(n), and the time of the verification stage is O(n²). Therefore, the k-clique problem is NP.

III. Focus on Yes-No Problems

Definition: A yes-no problem consists of an instance (or input I) and a yes-no

question Q.

The yes-no version of the HC problem was described above, and so was the yes-

no version of the k-clique problem.

The following are additional examples of well-known yes-no problems.

The subset-sum problem:

o Instance: a real array a[1:n]

o Question: Can the array be partitioned into two parts that add up to the same value? (A small C sketch of the corresponding verification step is given after this list.)

The satisfiability problem (SAT):

o Instance: A Boolean Expression F

o Question: Is there an assignment to the variables in F so that F evaluates to

1?

The Traveling Salesman Problem:

The original formulation:

o Instance: A weighted graph G

o Question: Find a minimum-weight Hamiltonian Cycle in G.

The yes-no formulation:

o Instance: A weighted graph G and a real number d

o Question: Does G have a Hamiltonian cycle of weight <= d?
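As promised for the subset-sum problem above, here is a small C sketch of its verification stage; the array part[], which records the guessed half for each element, is an illustrative assumption. Checking a guessed partition takes only O(n) time, which is what makes the problem NP.

#include <stdio.h>

/* Verification stage for the subset-sum (partition) problem:
   part[i] is 0 or 1, saying into which half a[i] was guessed. */
int verify_partition(double a[], int part[], int n)
{
    double s0 = 0.0, s1 = 0.0;
    int i;
    for (i = 1; i <= n; i++)
        if (part[i] == 0)
            s0 += a[i];
        else
            s1 += a[i];
    return s0 == s1;                   /* yes iff both parts add up to the same value */
}

int main(void)
{
    double a[]    = { 0, 1, 2, 3, 4 }; /* a[1..4] */
    int    part[] = { 0, 0, 1, 1, 0 }; /* guess: {1, 4} versus {2, 3} */
    printf("%s\n", verify_partition(a, part, 4) ? "yes" : "no");
    return 0;
}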

IV. Reductions and Transforms

Notation: If P stands for a yes-no problem, then

o IP: denotes an instance of P

o QP: denotes the question of P

o Answer(QP,IP): denotes the answer to the question QP given input IP

Let P and R be two yes-no problems

Definition: A transform (that transforms a problem P to a problem R) is an

algorithm T such that:

1. The algorithm T takes polynomial time

2. The input of T is IP, and the output of T is IR

3. Answer(QP,IP)=Answer(QR,IR)

Definition: We say that a problem P reduces to a problem R if there exists a transform from P to R.


V. NP-Completeness

Definition: A problem R is NP complete if

1. R is NP

2. Every NP problem P reduces to R

An equivalent but casual definition: A problem R is NP-complete if R is the

"most difficult" of all NP problems.

Theorem: Let P and R be two problems. If P reduces to R and R is polynomial,

then P is polynomial.

Proof:

o Let T be the transform that transforms P to R. T is a polynomial time

algorithm that transforms IP to IR such that

Answer(QP,IP) = Answer(QR,IR)

o Let AR be the polynomial time algorithm for problem R. Clearly, AR takes as input IR, and returns as output Answer(QR,IR)

o Design a new algorithm AP as follows:

Algorithm AP(input: IP)

begin

IR := T(IP);

x := AR(IR);

return x;

end

o Note that this algorithm AP returns the correct answer Answer(QP,IP)

because x = AR(IR) = Answer(QR,IR) = Answer(QP,IP).

o Note also that the algorithm AP takes polynomial time because both T and AR take polynomial time.

Q.E.D.

The intuition derived from the previous theorem is that if a problem P reduces to

problem R, then R is at least as difficult as P.

Theorem: A problem R is NP-complete if

0. R is NP, and

1. There exists an NP-complete problem R0 that reduces to R

Proof:

o Since R is NP, it remains to show that any arbitrary NP problem P reduces

to R.

o Let P be an arbitrary NP problem.

o Since R0 is NP-complete, it follows that P reduces to R0

o And since R0 reduces to R, it follows that P reduces to R (by transitivity of

transforms).

Q.E.D.


The previous theorem amounts to a strategy for proving new problems to be NP-complete. Specifically, to prove a new problem R to be NP-complete, the following steps are sufficient:

0. Prove R to be NP

1. Find an already known NP-complete problem R0, and come up with a

transform that reduces R0 to R.

For this strategy to become effective, we need at least one NP-complete problem.

This is provided by Cook's Theorem below.

Cook's Theorem: SAT is NP-complete.

VI. NP-Completeness of the k-Clique Problem

The k-clique problem was already shown to be NP.
It remains to prove that an NP-complete problem reduces to the k-clique problem.

Theorem: SAT reduces to the k-clique problem

Proof:

o Let F be a Boolean expression.

o F can be put into a conjunctive normal form: F=F1F2...Fr

where every factor Fi is a sum of literals (a literal is a Bollean variable or

its complement)

o Let k=r and G=(V,E) defined as follows:

V={<xi,Fj> | xi is a variable in Fj}

E={(<xi,Fj> , <ys,Ft>) | j !=t and xi != ys'}

where ys' is the complement of ys

o We prove first that if F is satisfiable, then there is a k-clique.

o Assume F is satisfiable

o This means that there is an assignment that makes F equal to 1

o This implies that F1=1, F2=1, ... , Fr=1

o Therefore, in every factor Fi there is (at least) one variable assigned 1. Call

that variable zi

o As a result, <z1,F1>, <z2,F2>, ... , <zk,Fk> is a k-clique in G because they

are k distinct nodes, and each pair (<zi,Fi> , <zj,Fj>) forms an edge since

the endpoints come from different factors and zi != zj' due to the fact that

they are both assigned 1.

o We finally prove that if G has a k-clique, then F is satisfiable

o Assume G has a k-clique <u1,F1>, <u2,F2>, ... , <uk,Fk> which are pairwise

adjacent

o These k nodes come from the k different factors, one per factor, because no two nodes from the same factor can be adjacent

o Furthermore, no two ui and uj are complements because the two nodes

<ui,Fi> and <uj,Fj> are adjacent, and adjacent nodes have non-complement

first-components.

o As a result, we can consistently assign each ui a value 1.

o This assignment makes each Fi equal to 1 because ui is one of the additive

literals in Fi

o Consequently, F is equal to 1.

Q.E.D.
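As a small worked example of this transform (illustrative only, not part of the proof above): take F = F1 F2 F3 with F1 = x1 + x2, F2 = x1' + x3, F3 = x2' + x3', so k = 3. The vertices of G are <x1,F1>, <x2,F1>, <x1',F2>, <x3,F2>, <x2',F3>, <x3',F3>, and two vertices are joined exactly when they come from different factors and are not complementary literals. The assignment x1 = 1, x2 = 0, x3 = 1 makes each factor equal to 1, and the corresponding vertices <x1,F1>, <x3,F2>, <x2',F3> are pairwise adjacent, i.e. they form a 3-clique, exactly as the proof predicts.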


9. To Study Cook’s theorem

Aim: To study Cook's theorem

Cook’s Theorem

Cook’s Theorem states that

Any NP problem can be converted to SAT in polynomial time.

In order to prove this, we require a uniform way of representing NP problems. Remember

that what makes a problem NP is the existence of a polynomial-time algorithm—more

specifically, a Turing machine—for checking candidate certificates. What Cook did was

somewhat analogous to what Turing did when he showed that the Entscheidungsproblem

was equivalent to the Halting Problem. He showed how to encode as Propositional

Calculus clauses both the relevant facts about the problem instance and the Turing

machine which does the certificate-checking, in such a way that the resulting set of

clauses is satisfiable if and only if the original problem instance is positive. Thus the

problem of determining the latter is reduced to the problem of determining the former.

Assume, then, that we are given an NP decision problem D. By the definition of NP, there is a polynomial function P and a Turing machine M which, when given any instance I of D, together with a candidate certificate c, will check in time no greater than P(n), where n is the length of I, whether or not c is a certificate of I.

Let us assume that M has q states numbered 0, 1, 2, ..., q − 1, and a tape alphabet a1, a2, ..., as. We shall assume that the operation of the machine is governed by the functions T, U, and D. We shall further assume that the initial tape is inscribed with the problem instance on the squares 1, 2, 3, ..., n, and the putative certificate on the squares −m, ..., −2, −1.

Square zero can be assumed to contain a designated separator symbol. We shall also assume that the machine halts scanning square 0, and that the symbol in this square at that stage will be a1 if and only if the candidate certificate is a true certificate. Note that we must have m ≤ P(n). This is because with a problem instance of length n the computation is completed in at most P(n) steps; during this process, the Turing machine head cannot move more than P(n) steps to the left of its starting point.

We define some atomic propositions with their intended interpretations as follows:
1. For i = 0, 1, ..., P(n) and j = 0, 1, ..., q − 1, the proposition Qij says that after i computation steps, M is in state j.
2. For i = 0, 1, ..., P(n), j = −P(n), ..., P(n), and k = 1, 2, ..., s, the proposition Sijk says that after i computation steps, square j of the tape contains the symbol ak.
3. For i = 0, 1, ..., P(n) and j = −P(n), ..., P(n), the proposition Tij says that after i computation steps, the machine M is scanning square j of the tape.

Next, we define some clauses to describe the computation executed by M:

1. At each computation step, M is in at least one state. For each i = 0, ..., P(n) we have the clause
Qi0 ∨ Qi1 ∨ ... ∨ Qi(q−1),
giving (P(n) + 1)q = O(P(n)) literals altogether.
2. At each computation step, M is in at most one state. For each i = 0, ..., P(n) and for each pair j, k of distinct states, we have the clause
¬(Qij ∧ Qik),
giving a total of q(q − 1)(P(n) + 1) = O(P(n)) literals altogether.
3. At each step, each tape square contains at least one alphabet symbol. For each i = 0, ..., P(n) and −P(n) ≤ j ≤ P(n) we have the clause
Sij1 ∨ Sij2 ∨ ... ∨ Sijs,
giving (P(n) + 1)(2P(n) + 1)s = O(P(n)²) literals altogether.
4. At each step, each tape square contains at most one alphabet symbol. For each i = 0, ..., P(n) and −P(n) ≤ j ≤ P(n), and each distinct pair ak, al of symbols, we have the clause
¬(Sijk ∧ Sijl),
giving a total of (P(n) + 1)(2P(n) + 1)s(s − 1) = O(P(n)²) literals altogether.
5. At each step, the machine is scanning at least one square. For each i = 0, ..., P(n), we have the clause
Ti(−P(n)) ∨ Ti(1−P(n)) ∨ ... ∨ Ti(P(n)−1) ∨ TiP(n),
giving (P(n) + 1)(2P(n) + 1) = O(P(n)²) literals altogether.
6. At each step, the machine is scanning at most one square. For each i = 0, ..., P(n), and each distinct pair j, k of tape squares from −P(n) to P(n), we have the clause
¬(Tij ∧ Tik),
giving a total of 2P(n)(2P(n) + 1)(P(n) + 1) = O(P(n)³) literals.

7. Initially, the machine is in state 1 scanning square 1. This is expressed by the two clauses
Q01, T01,
giving just two literals.

8. The configuration at each step after the first is determined from the configuration at the previous step by the functions T, U, and D defining the machine M. For each i = 0, ..., P(n), −P(n) ≤ j ≤ P(n), k = 0, ..., q − 1, and l = 1, ..., s, we have the clauses
Tij ∧ Qik ∧ Sijl → Q(i+1)T(k,l)
Tij ∧ Qik ∧ Sijl → S(i+1)jU(k,l)
Tij ∧ Qik ∧ Sijl → T(i+1)(j+D(k,l))
Sijk → Tij ∨ S(i+1)jk
The fourth of these clauses ensures that the contents of any tape square other than the currently scanned square remain the same (to see this, note that the given clause is equivalent to the formula Sijk ∧ ¬Tij → S(i+1)jk). These clauses contribute a total of (12s + 3)(P(n) + 1)(2P(n) + 1)q = O(P(n)²) literals.

9. Initially, the string a_i1 a_i2 ... a_in defining the problem instance I is inscribed on squares 1, 2, ..., n of the tape. This is expressed by the n clauses
S01i1, S02i2, ..., S0nin,
a total of n literals.


10. By the P(n)th step, the machine has reached the halt state, and is then scanning square 0, which contains the symbol a1. This is expressed by the three clauses
QP(n)0, SP(n)01, TP(n)0,
giving another 3 literals. Altogether the number of literals involved in these clauses is O(P(n)³) (in working this out, note that q and s are constants, that is, they depend only on the machine and do not vary with the problem instance; thus they do not contribute to the growth of the number of literals with increasing problem size, which is what the O notation captures for us). It is thus clear that the procedure for setting up these clauses, given the original machine M and the instance I of problem D, can be accomplished in polynomial time.

We must now show that we have succeeded in converting D into SAT. Suppose first that I is a positive instance of D. This means that there is a certificate c such that when M is run with inputs c, I, it will halt scanning symbol a1 on square 0. This means that there is some sequence of symbols that can be placed initially on squares −P(n), ..., −1 of the tape so that all the clauses above are satisfied. Hence those clauses constitute a positive instance of SAT.

Conversely, suppose I is a negative instance of D. In that case there is no certificate for I, which means that whatever symbols are placed on squares −P(n), ..., −1 of the tape, when the computation halts the machine will not be scanning a1 on square 0. This means that the set of clauses above is not satisfiable, and hence constitutes a negative instance of SAT. Thus from the instance I of problem D we have constructed, in polynomial time, a set of clauses which constitute a positive instance of SAT if and only if I is a positive instance of D. In other words, we have converted D into SAT in polynomial time. And since D was an arbitrary NP problem, it follows that any NP problem can be converted to SAT in polynomial time.

NP-completeness

Cook's Theorem implies that any NP problem is at most polynomially harder than SAT. This means that if we find a way of solving SAT in polynomial time, we will then be in a position to solve any NP problem in polynomial time. This would have huge practical repercussions, since many frequently encountered problems which are so far believed to be intractable are NP. This special property of SAT is called NP-completeness. A decision problem is NP-complete if it has the property that any NP problem can be converted into it in polynomial time. SAT was the first NP-complete problem to be recognised as such (the theory of NP-completeness having come into existence with the proof of Cook's Theorem), but it is by no means the only one. There are now literally thousands of problems, cropping up in many different areas of computing, which have been proved to be NP-complete.

In order to prove that an NP problem is NP-complete, all that is needed is to show that

SAT can be converted into it in polynomial time. The reason for this is that the sequential

composition of two polynomial time algorithms is itself a polynomial-time algorithm,

since the sum of two polynomials is itself a polynomial.

Suppose SAT can be converted to problem D in polynomial time. Now take any NP

problem D0. We know we can convert it into SAT in polynomial time, and we know we

can convert SAT into D in polynomial time. The result of these two conversions is a

polynomial-time conversion of D0 into D. Since D0 was an arbitrary NP problem, it

follows that D is NP-complete. We illustrate this by showing that the problem 3SAT is

NP-complete. This problem is similar to SAT, but restricts the clauses to at most three

schematic letters each:


Given a finite set {C1, C2, ..., Cn} of clauses, each of which contains at most three schematic letters, determine whether there is an assignment of truth-values to the schematic letters appearing in the clauses which makes all the clauses true.

3SAT is obviously NP (since it is a special case of SAT, which is NP). It turns out to be straightforward to convert an arbitrary instance of SAT into an instance of 3SAT with the same satisfiability property.

Take any clause written in disjunctive form as C = L1 ∨ L2 ∨ ... ∨ Ln, where n > 3 and each Li is a literal. We replace this by n − 2 new clauses, using n − 3 new schematic letters X1, ..., Xn−3, as follows:
L1 ∨ L2 ∨ X1
X1 → L3 ∨ X2
X2 → L4 ∨ X3
...
Xn−4 → Ln−2 ∨ Xn−3
Xn−3 → Ln−1 ∨ Ln
Call the new set of clauses C′. Any truth-assignment to the schematic letters appearing in the Li which satisfies C can be extended to the Xi so that C′ is satisfied, and conversely any truth-assignment which satisfies C′ also satisfies C.

To prove this, suppose that a certain truth-assignment satisfies C. Then it satisfies at least one of the literals appearing in C, say Lk. Now assign true to X1, X2, ..., Xk−2 and false to Xk−1, ..., Xn−3. Then all the clauses in C′ are satisfied: for i = 1, 2, ..., k − 2, the ith clause is satisfied because Xi is true; the (k − 1)th clause is satisfied because Lk is true; and for j = k, k + 1, ..., n − 2 the jth clause is satisfied because Xj−1 (appearing in the antecedent) is false.

Conversely, suppose we have a truth-assignment satisfying C′: each clause in C′ is satisfied. Suppose L1, ..., Ln−2 are all false. Then it is easy to see that all the Xi must be true; in particular Xn−3 is true, so either Ln−1 or Ln is true. Thus in any event at least one of the Li is true, and hence C is true.

If we take an instance of SAT and replace all the clauses containing more than three literals by clauses containing exactly three in the way described above, we end up with an instance of 3SAT which is satisfiable if and only if the original instance is satisfiable. Moreover, the conversion can be accomplished in polynomial time. It follows that 3SAT, like SAT, is NP-complete.
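As a small worked example of the conversion (illustrative only): for a clause C = L1 ∨ L2 ∨ L3 ∨ L4 ∨ L5 we have n = 5, so we introduce n − 3 = 2 new schematic letters X1, X2 and replace C by the n − 2 = 3 clauses
L1 ∨ L2 ∨ X1
X1 → L3 ∨ X2
X2 → L4 ∨ L5
each of which mentions at most three schematic letters, and which taken together are satisfiable exactly when C is.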

P = NP?

We have seen that a problem is NP-complete if and only if it is NP and any NP problem can be converted into it in polynomial time. (A problem satisfying the second condition is called NP-hard; so NP-complete means NP and NP-hard.) It follows from this that all NP-complete problems are mutually interconvertible in polynomial time. For if D1 and D2 are both NP-complete, then D1 can be converted into D2 by virtue of the fact that D1 is NP and D2 is NP-hard, and D2 can be converted into D1 by virtue of the fact that D2 is NP and D1 is NP-hard. Thus as far as computational complexity is concerned, all NP-complete problems agree to within some polynomial amount of difference. But what is the computational complexity of these problems? If any one NP-complete problem can be shown to be of polynomial complexity, then by the above they all are. If on the other hand any one NP-complete problem can be shown not to be solvable in polynomial time, then by the above, none of them are so solvable. All NP-complete problems stand or fall together.


The current state of our knowledge is this: we know how to solve NP-complete problems

in exponential time, but there is no NP-complete problem for which any algorithm is

known that runs in less than exponential time. On the other hand, no-one has ever

succeeded in proving that it is not possible to solve an NP-complete problem faster than

that. This implies, of course, that no-one has proved that NP-complete problems can’t be

solved in polynomial time. If we could solve NP-complete problems in polynomial time,

then the whole class NP would collapse down into the class P of problems solvable in

polynomial time. This is because the NP-complete problems are the hardest of all NP

problems, and if they are polynomial then all NP problems are polynomial. Thus the

question of whether or not there are polynomial algorithms for NP-complete problems

has become known as the “P=NP?” problem. Most people who have an opinion on this

believe that the answer is no, that is, NP-complete problems are strictly harder than

polynomial problems. All this assumes, of course, the Turing model of computation,

which applies to all existing computers. However, as suggested above, if we had access

to unlimited parallelism, we could solve any NP problem (including therefore the NP-

complete problems) in polynomial time. Existing computers do not provide us with such

parallelism; but if the current work on Quantum Computation comes to fruition, then it

may be that a new generation of computers will have exactly such abilities, and if that

happens, then everything changes. It would not solve the classic P=NP question, of

course, because that question concerns the properties of Turing-equivalent computation;

what would happen is that the theory of Turing-equivalent computation would suddenly

become much less relevant to practical computing concerns, making the question only of

theoretical interest.

Conclusion: Thus we have studied Cook’s Theorem


10. To Study Sorting Network

Aim: To Study Sorting Network

THEORY:

1. Computing with a circuit

We are going to design a circuit where the inputs are the numbers, and we compare two numbers using a comparator gate.

For our drawings, we will draw such a gate as follows:

So circuits are just horizontal lines (wires), with vertical segments (i.e., gates) between them. A complete sorting network looks like:


The inputs come in on the wires on the left, and are output on the wires on the right. The largest number is output on the bottom line. The surprising thing is that one can generate circuits from a sorting algorithm. In fact, consider the following circuit:

Q: What does this circuit do?

A: This is the inner loop of insertion sort.

Repeating this inner loop, we get the following sorting network:

Alternative way of drawing it:

Q: How much time does it take for this circuit to sort the n numbers?

Running time = how many clock ticks we have to wait until the result stabilizes. In this case:


In general, we get:

Lemma 1 Insertion sort requires 2n − 1 time units to sort n numbers.

2. Definitions

Definition 2.1: A comparison network is a DAG (directed acyclic graph) with n inputs and n outputs, in which each gate has two inputs and two outputs. (Note that a comparison network by itself need not sort its inputs.)

Definition 2.2 The depth of a wire is 0 at the input. For a gate with two inputs of depth d1

and d2 the depth of the output is 1 + max(d1, d2). The depth of a comparison network is

the maximum depth of an output wire.

Definition 2.3 A sorting network is a comparison network such that for any input, the

output is monotonically sorted. The size of a sorting network is the number of gates in the

sorting network. The running time of a sorting network is just its depth.
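These definitions translate directly into code. The following C sketch, added for illustration, represents a comparison network as an ordered list of comparators (our own data layout) and computes its depth exactly as in Definition 2.2: each gate raises the depth of both of its wires to one more than the larger of their current depths.

#include <stdio.h>

#define MAX_WIRES 64

/* A comparator gate connects two wires; a network is an ordered list of gates. */
typedef struct { int top, bot; } Comparator;

/* Depth as in Definition 2.2: every wire starts at depth 0, the outputs of a
   gate have depth 1 + max(depth of its two inputs), and the network depth is
   the maximum depth reached on any wire. */
static int network_depth(const Comparator *g, int ngates) {
    int depth[MAX_WIRES] = {0};
    int max_depth = 0;
    for (int k = 0; k < ngates; k++) {
        int dt = depth[g[k].top], db = depth[g[k].bot];
        int d = 1 + (dt > db ? dt : db);
        depth[g[k].top] = depth[g[k].bot] = d;
        if (d > max_depth) max_depth = d;
    }
    return max_depth;
}

int main(void) {
    /* A standard 5-comparator sorting network on 4 wires. */
    Comparator net[] = { {0,1}, {2,3}, {0,2}, {1,3}, {1,2} };
    printf("depth = %d\n", network_depth(net, 5));   /* prints 3 */
    return 0;
}

Its size (Definition 2.3) is simply the length of the comparator list, here 5, and its running time is the computed depth, here 3.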

3. The Zero-One Principle

The zero-one principle: if a comparison network sorts correctly all binary inputs (i.e., every input value is either 0 or 1), then it sorts correctly all inputs. We of course need to prove that the zero-one principle is true.

Lemma 2: If a comparison network transforms the input sequence a = ⟨a1, a2, . . . , an⟩ into the output sequence b = ⟨b1, b2, . . . , bn⟩, then for any monotonically increasing function f, the network transforms the input sequence f(a) = ⟨f(a1), . . . , f(an)⟩ into the sequence f(b) = ⟨f(b1), . . . , f(bn)⟩.

Proof: Consider a single comparator with inputs x, y and outputs x′ = min(x, y) and y′ = max(x, y). If f(x) = f(y) then the claim trivially holds for this comparator. Otherwise, since f is monotonically increasing, f(x) < f(y) can happen only if x < y (and symmetrically for f(x) > f(y)), so clearly

max(f(x), f(y)) = f(max(x, y)) and
min(f(x), f(y)) = f(min(x, y)).


Thus, for input ⟨x, y⟩ with x < y, the output is ⟨x, y⟩, and hence for input ⟨f(x), f(y)⟩ the output is ⟨f(x), f(y)⟩.

Similarly, for input ⟨x, y⟩ with x > y, the output is ⟨y, x⟩, and hence for input ⟨f(x), f(y)⟩ the output is ⟨f(y), f(x)⟩.

This establishes the claim for a single comparator.

This implies that if a wire carries the value ai when the network gets the input a1, . . . , an, then for the input f(a1), . . . , f(an) the same wire carries the value f(ai). This follows immediately by applying the above claim for a single comparator, together with induction on the network structure.

This immediately implies the lemma.

Theorem: If a comparison network with n inputs sorts all 2^n binary strings of length n correctly, then it sorts all sequences correctly.

Proof: Assume, for the sake of contradiction, that it sorts the sequence a1, . . . , an incorrectly.

Let b1, . . . , bn be the output sequence for this input.

Let ai < ak be two numbers that are output in the incorrect order (i.e., ak appears before ai in the output).

Define the monotonically increasing function f by f(x) = 0 if x ≤ ai and f(x) = 1 if x > ai, so that the input becomes a binary sequence.

Clearly, by the above lemma, for the input f(a1), . . . , f(an) the circuit outputs f(b1), . . . , f(bn). In this output, f(ak) = 1 appears before f(ai) = 0; namely, the output is of the form . . . 1 . . . 0 . . . and is therefore not sorted.

Namely, we have a binary input f(a1), . . . , f(an) for which the comparison network does not sort correctly, a contradiction to our assumption.
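The zero-one principle also gives a practical way to test a candidate network: instead of checking all possible inputs, it suffices to push all 2^n binary inputs through it. Below is a small C sketch of such a checker, added for illustration; the comparator-list representation is the same assumption used in the earlier sketch.

#include <stdio.h>

typedef struct { int top, bot; } Comparator;

/* Apply the network to an array in place (the top wire keeps the minimum). */
static void apply_network(const Comparator *g, int ngates, int a[]) {
    for (int k = 0; k < ngates; k++) {
        if (a[g[k].top] > a[g[k].bot]) {
            int t = a[g[k].top]; a[g[k].top] = a[g[k].bot]; a[g[k].bot] = t;
        }
    }
}

/* Zero-one principle: the network sorts every input iff it sorts all 2^n
   binary inputs.  Returns 1 if every binary input comes out sorted. */
static int is_sorting_network(const Comparator *g, int ngates, int n) {
    for (unsigned mask = 0; mask < (1u << n); mask++) {
        int a[32];
        for (int i = 0; i < n; i++) a[i] = (int)((mask >> i) & 1u);
        apply_network(g, ngates, a);
        for (int i = 1; i < n; i++)
            if (a[i - 1] > a[i]) return 0;    /* a 1 appears before a 0 */
    }
    return 1;
}

int main(void) {
    Comparator good[] = { {0,1}, {2,3}, {0,2}, {1,3}, {1,2} };  /* sorts 4 inputs */
    Comparator bad[]  = { {0,1}, {2,3}, {1,2} };                /* does not */
    printf("good: %d\n", is_sorting_network(good, 5, 4));       /* prints 1 */
    printf("bad:  %d\n", is_sorting_network(bad, 3, 4));        /* prints 0 */
    return 0;
}

The checker runs in O(2^n · size) time, so it is practical only for small n, but it is far cheaper than trying all possible orderings of arbitrary values.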

4. A bitonic sorting network

Definition: A bitonic sequence is a sequence which is first increasing and then decreasing, or which can be circularly shifted to become so.

Examples: (1, 2, 3, 4, 5, 4, 3, 2, 1) - bitonic.

(4, 5, 4, 3, 2, 1, 1, 2, 3) - bitonic (a circular shift gives (1, 2, 3, 4, 5, 4, 3, 2, 1)).

(1, 2, 1, 2) - not bitonic.

Observation: A bitonic sequence over {0, 1} is either of the form 0^i 1^j 0^k or of the form 1^i 0^j 1^k, where 0^i denotes a sequence of i zeros.
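This observation is easy to check mechanically: a 0/1 sequence has one of these two forms exactly when its value changes (from 0 to 1 or from 1 to 0) at most twice. A tiny C sketch, added for illustration:

#include <stdio.h>

/* A 0/1 sequence is of the form 0^i 1^j 0^k or 1^i 0^j 1^k exactly when the
   value changes (0 -> 1 or 1 -> 0) at most twice along the sequence. */
static int is_binary_bitonic(const int a[], int n) {
    int changes = 0;
    for (int i = 1; i < n; i++)
        if (a[i] != a[i - 1]) changes++;
    return changes <= 2;
}

int main(void) {
    int s1[] = {0, 0, 1, 1, 1, 0};   /* 0^2 1^3 0^1 : bitonic       */
    int s2[] = {1, 0, 0, 1, 1, 1};   /* 1^1 0^2 1^3 : bitonic       */
    int s3[] = {0, 1, 0, 1};         /* three changes : not bitonic */
    printf("%d %d %d\n", is_binary_bitonic(s1, 6),
                         is_binary_bitonic(s2, 6),
                         is_binary_bitonic(s3, 4));   /* prints 1 1 0 */
    return 0;
}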

Definition: A bitonic sorter is a comparison network that sorts bitonic sequences.

Definition: A half-cleaner is a comparison network connecting line i with line i + n/2, for i = 1, . . . , n/2.


Half-Cleaner[n]: a half-cleaner with n inputs.

The depth of Half-Cleaner[n] is one.

What does a half-cleaner do for a (binary) bitonic sequence?

Example properties:

1. The left half is clean: all 0s.

2. The right half is bitonic.

Lemma: If the input to a half-cleaner is a binary bitonic sequence, then for the output sequence:

1. The elements in the top half are no larger than the elements in the bottom half.

2. One of the halves is clean, and the other is bitonic.


This suggests a simple recursive construction of Bitonic-Sorter[n]: first apply Half-Cleaner[n], and then recursively apply Bitonic-Sorter[n/2] to the top half and to the bottom half.

Opening the recursion, we get log n levels of half-cleaners of shrinking size. Namely, Bitonic-Sorter[n] is Half-Cleaner[n] followed by two copies of Bitonic-Sorter[n/2].

Lemma: Bitonic-Sorter[n] sorts any bitonic sequence of n = 2^k numbers; its depth is k = log n, and it uses (n/2) log n gates.
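A minimal C sketch of this recursive construction, added for illustration (it operates on an in-memory array rather than on wires, and assumes n is a power of two): the half-cleaner compares element i with element i + n/2, and Bitonic-Sorter then recurses on the two halves.

#include <stdio.h>

/* One comparator: keep the minimum in a[i] and the maximum in a[j]. */
static void compare_exchange(int a[], int i, int j) {
    if (a[i] > a[j]) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

/* Half-Cleaner[n] on a[lo .. lo+n-1]: compare line i with line i + n/2. */
static void half_cleaner(int a[], int lo, int n) {
    for (int i = 0; i < n / 2; i++)
        compare_exchange(a, lo + i, lo + i + n / 2);
}

/* Bitonic-Sorter[n]: a half-cleaner followed by two recursive bitonic
   sorters, one on each half.  Sorts any bitonic input of length n = 2^k. */
static void bitonic_sorter(int a[], int lo, int n) {
    if (n <= 1) return;
    half_cleaner(a, lo, n);
    bitonic_sorter(a, lo, n / 2);
    bitonic_sorter(a, lo + n / 2, n / 2);
}

int main(void) {
    int a[] = {2, 5, 7, 8, 6, 4, 3, 1};   /* bitonic: increasing then decreasing */
    bitonic_sorter(a, 0, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}

The recursion has log n levels and each level contributes one layer of comparators, matching the depth bound stated in the lemma.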


Merging sequences. Q: Given two sorted sequences of length n/2, how do we merge them into a single sorted sequence?

A: Concatenate the two sequences, with the second sequence flipped (reversed). It is easy to verify that the resulting sequence is bitonic, and as such we can sort it using Bitonic-Sorter[n].

Observation: Given two sorted sequences a1 ≤ a2 ≤ . . . ≤ an and b1 ≤ b2 ≤ . . . ≤ bn, the sequence a1, a2, . . . , an, bn, bn−1, bn−2, . . . , b2, b1 is bitonic.

Thus, to merge two sorted sequences of length n/2, just flip one of them and use Bitonic-Sorter[n]:

This is, of course, illegal: a comparison network cannot physically reverse the order of its input wires. What we do instead is take Bitonic-Sorter[n] and physically flip the connections on the last n/2 entries:


Thus, Merger[n] is this flipped network. But this is equivalent to Bitonic-Sorter[n] with its first component flipped. Formally, let Flip-Cleaner[n] be the component obtained from Half-Cleaner[n] by flipping the connections on the last n/2 wires, so that line i is compared with line n − i + 1:


Thus, Merger[n] is just the comparator network consisting of Flip-Cleaner[n] followed by two copies of Bitonic-Sorter[n/2], one on the top half and one on the bottom half.

Lemma: Merger[n] merges two sorted sequences of length n/2 each into a single sorted sequence of length n; its depth is O(log n).

Sorting network. Q: How do we build a sorting network?

A: Just implement merge sort using Merger[n].

Sorter[n]: apply Sorter[n/2] to the first half of the input and, in parallel, to the second half, and then combine the two sorted halves using Merger[n].

Lemma: Sorter[n] is a sorting network (i.e., it sorts any n numbers) using G(n) = 2 · G(n/2) + Gates(Merger[n]) = O(n log^2 n) gates. As for the depth, D(n) = D(n/2) + Depth(Merger[n]) = D(n/2) + O(log n), which solves to D(n) = O(log^2 n).
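Putting the pieces together, here is a compact C sketch of Sorter[n], added for illustration and again assuming n is a power of two. The merger is implemented exactly as described: its first stage compares element i with element n − i + 1 (the flip-cleaner), and the rest is Bitonic-Sorter on each half.

#include <stdio.h>

static void compare_exchange(int a[], int i, int j) {
    if (a[i] > a[j]) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

/* Bitonic-Sorter[n] on a[lo .. lo+n-1]: half-cleaner plus two recursive calls. */
static void bitonic_sorter(int a[], int lo, int n) {
    if (n <= 1) return;
    for (int i = 0; i < n / 2; i++)
        compare_exchange(a, lo + i, lo + i + n / 2);   /* Half-Cleaner[n] */
    bitonic_sorter(a, lo, n / 2);
    bitonic_sorter(a, lo + n / 2, n / 2);
}

/* Merger[n]: Flip-Cleaner[n] (compare line i with line n-1-i, 0-indexed),
   then Bitonic-Sorter[n/2] on each half.  Merges two sorted halves. */
static void merger(int a[], int lo, int n) {
    for (int i = 0; i < n / 2; i++)
        compare_exchange(a, lo + i, lo + n - 1 - i);   /* Flip-Cleaner[n] */
    bitonic_sorter(a, lo, n / 2);
    bitonic_sorter(a, lo + n / 2, n / 2);
}

/* Sorter[n]: merge sort built from Merger[n]; depth O(log^2 n). */
static void sorter(int a[], int lo, int n) {
    if (n <= 1) return;
    sorter(a, lo, n / 2);
    sorter(a, lo + n / 2, n / 2);
    merger(a, lo, n);
}

int main(void) {
    int a[] = {5, 2, 7, 1, 8, 3, 6, 4};
    sorter(a, 0, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}

In software this is just Batcher's bitonic merge sort; the point of the network view is that the same fixed pattern of comparisons can be laid out as a circuit of depth O(log^2 n).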


Here is what Sorter[8] looks like:

Conclusion: Thus we have studied Sorting Network


Possible Questions on Advanced Algorithm:

VIVA VOCE QUESTIONS

1) Define and state the importance of a sub-algorithm in computation and its relationship with the main algorithm.

2) What is the difference in format between an algorithm and a sub-algorithm?

3) What is the general algorithmic model for any recursive procedure?

4) Define recursion and state its different types.

5) Explain the depth of recursion.

6) State the problems which differentiate between a recursive procedure and a non-recursive procedure.

7) What is the complexity of the FFT?

8) What is meant by modular arithmetic?

9) What is the Chinese Remainder Theorem?

10) What is meant by NP-Complete and NP-Hard?

11) Differentiate between NP-Complete and NP-Hard.

12) Differentiate between deterministic and non-deterministic algorithms.

13) What is meant by an algorithm?

14) What are the different techniques to solve a given problem?

15) Explain the Knapsack Problem.

16) What is meant by dynamic programming?

17) Explain the Greedy method.

18) What is meant by Time Complexity?

19) What is meant by Space Complexity?

20) What is Asymptotic Notation?

Evaluation and marking system:

Basic honesty in the evaluation and marking system is absolutely essential, and an impartial evaluator is required for the examination system to earn the respect of the students. It is a wrong approach to reward students through easy marking in order to gain cheap popularity which they do not deserve. It is a primary responsibility of the teacher to ensure that the students who really put in hard work with the right kind of intelligence are correctly rewarded.

The marking pattern should be justifiable to the students without any ambiguity, and the teacher should see that students are not faced with unjust circumstances.

The assessment is done according to the directives of the Principal/ Vice-

Principal/ Dean Academics.